Jacobi Matrices and the Moment Problem (Operator Theory: Advances and Applications, 294) 3031463862, 9783031463860



English Pages 496 [489] Year 2023



Operator Theory Advances and Applications 294

Yurij M. Berezansky Mykola E. Dudkin

Jacobi Matrices and the Moment Problem

Operator Theory: Advances and Applications Volume 294

Founded in 1979 by Israel Gohberg Series Editors: Joseph A. Ball (Blacksburg, VA, USA) Albrecht Böttcher (Chemnitz, Germany) Harry Dym (Rehovot, Israel) Heinz Langer (Wien, Austria) Christiane Tretter (Bern, Switzerland) Associate Editors: Vadim Adamyan (Odessa, Ukraine) Wolfgang Arendt (Ulm, Germany) Raul Curto (Iowa, IA, USA) Kenneth R. Davidson (Waterloo, ON, Canada) Fritz Gesztesy (Waco, TX, USA) Pavel Kurasov (Stockholm, Sweden) Vern Paulsen (Houston, TX, USA) Mihai Putinar (Santa Barbara, CA, USA) Ilya Spitkovsky (Abu Dhabi, UAE)

Honorary and Advisory Editorial Board: Lewis A. Coburn (Buffalo, NY, USA) J.William Helton (San Diego, CA, USA) Marinus A. Kaashoek (Amsterdam, NL) Thomas Kailath (Stanford, CA, USA) Peter Lancaster (Calgary, Canada) Peter D. Lax (New York, NY, USA) Bernd Silbermann (Chemnitz, Germany)

Subseries Linear Operators and Linear Systems Subseries editors: Daniel Alpay (Orange, CA, USA) Birgit Jacob (Wuppertal, Germany) André C.M. Ran (Amsterdam, The Netherlands) Subseries Advances in Partial Differential Equations Subseries editors: Bert-Wolfgang Schulze (Potsdam, Germany) Jerome A. Goldstein (Memphis, TN, USA) Nobuyuki Tose (Yokohama, Japan) Ingo Witt (Göttingen, Germany)

Yurij M. Berezansky · Mykola E. Dudkin

Jacobi Matrices and the Moment Problem

Yurij M. Berezansky Institute of Mathematics National Academy of Sciences of Ukraine Kyiv, Ukraine

Mykola E. Dudkin Igor Sikorsky Kyiv Polytechnic Institute National Technical University of Ukraine Kyiv, Ukraine

Translated by Mykola E. Dudkin

ISSN 0255-0156  ISSN 2296-4878 (electronic)
Operator Theory: Advances and Applications
ISBN 978-3-031-46386-0  ISBN 978-3-031-46387-7 (eBook)
https://doi.org/10.1007/978-3-031-46387-7

Translation from the Ukrainian language edition: "Якобієві матриці і проблема моментів" by Yurij M. Berezansky and Mykola E. Dudkin, © National Academy of Science of Ukraine 2019. Published by National Academy of Science of Ukraine. All Rights Reserved.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Preface

The classical Hamburger moment problem has become the source and the basis for the development of various theories. Among them are the theory of extensions of symmetric operators, Jacobi matrices, Padé approximations, Nevanlinna–Pick interpolation, orthogonal polynomials of the first and second kind with respect to a given measure, the corresponding Weyl function, and others. The moment problem itself has by now acquired numerous generalizations, and the corresponding theories therefore possess natural generalizations. Among the first famous researchers of the moment problem were P. L. Chebyshev, A. A. Markov, T. Stieltjes, and H. Hamburger. At the beginning, the monograph deals with the usual Jacobi matrix that arises naturally in the consideration of the classical moment problem [2, 3]. Further, we consider Jacobi-type matrices corresponding to the trigonometric moment problem [62, 63], the complex moment problem [37], and the two-dimensional real moment problem [87]. However, such matrices have a multi-diagonal structure. If we partition such matrices into blocks of a certain size, they acquire a block tri-diagonal structure, and hence we can apply the widely developed Berezansky theory of generalized eigenfunction expansions for a set of corresponding commuting operators [21, 23]. In this monograph, we do not focus so much on demonstrating the form of the matrices corresponding to some moment problem as we give a sufficiently rigorous solution of the corresponding direct and inverse spectral problems. The inverse spectral problem refers to the construction of block tri-diagonal matrices of a certain structure from a given measure that has all the corresponding moments in some functional space; the direct problem is to restore the measure in the "sense of the Parseval equality" from given matrices of a certain structure.
The material of the book will be useful for specialists in operator theory and functional analysis, theoretical physicists, graduate students, and engineers who deal with problems of coupled pendulums. The first section of the monograph contains the necessary information concerning unbounded operators, the theory of generalized eigenvector expansions, the theory of extensions of densely defined symmetric operators, etc.


The material of individual sections of this monograph was presented in special courses for students of the Taras Shevchenko National University of Kyiv and the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute."

Kyiv, Ukraine

Yurij M. Berezansky Mykola E. Dudkin

Contents

1 Introduction 1

2 Some Aspects of the Spectral Theory of Unbounded Operators 17
2.1 Preliminary Information About Unbounded Operators 17
2.2 Extensions of Hermitian Operators to Self-Adjoint Operators 27
2.3 The Spectral Decomposition of Bounded Operators 30
2.4 The Spectral Decomposition of Unbounded Operators 41
2.5 Representations of One-Parameter Unitary Groups 43
2.6 Self-Adjointness Criteria 49
2.7 Rigged Spaces 63
2.8 Tensor Products 76
2.9 Representations of Continuous Multi-Linear Forms 81
2.10 Semi-Bounded Bilinear Forms 89
2.11 The Generalized Eigenvectors Expansion 93
2.12 A General Case with a Quasi-Scalar Product 116
Bibliographical Notes 118

3 Jacobi Matrices and the Classical Moment Problem 121
3.1 Difference Operators, Jacobi Matrices, and Self-Adjointness Conditions 121
3.2 The Generalized Eigenvectors Expansion and the Fourier Transform Corresponding to Jacobi Matrices 126
3.3 The Inverse Problem 136
3.4 Further Spectral Analysis of the Difference Operator 138
3.5 The Classical Moment Problem 148
3.6 Some Other Generalizations of the Moment Problem 164
3.7 Connections with the Theory of Jacobi Matrices 177
Bibliographical Notes 180

4 The Strong Moment Problem 183
4.1 Preliminaries to the Strong Moment Problem 184
4.2 The Solution of the Strong Moment Problem 187
4.3 The Orthogonalization Procedure and the Construction of a Tri-Diagonal Block Matrix 191
4.4 Direct and Inverse Spectral Problems Corresponding to Tri-Diagonal Block Jacobi-Laurent Matrices Generating Self-Adjoint Operators 204
4.5 Considerations of Hermitian Block Jacobi-Laurent Type Matrices 214
4.6 The Connection Between the Strong Moment Problem and the Spectral Theory of Jacobi-Laurent Matrices 224
4.7 Two Additional Facts 227
4.8 The Inner Structure of the Jacobi-Laurent Matrix 231
Bibliographical Notes 244

5 Block Jacobi Type Matrices in the Complex Moment Problem 245
5.1 Construction of the Tri-Diagonal Block Jacobi Type Matrix of a Bounded Normal Operator 245
5.2 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices of Bounded Normal Operators 257
5.3 Normality Conditions of Block Jacobi Type Matrices 267
5.4 The Solution of the Complex Moment Problem 275
5.5 The Weyl Function and Polynomials of the Second Kind in the Complex Moment Problem 285
Bibliographical Notes 287

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem 289
6.1 Introduction to the Unitary Block Jacobi Type Matrices 289
6.2 Construction of the Tri-Diagonal Block Jacobi Type Matrices of a Unitary Operator 291
6.3 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices of Unitary Operators 301
6.4 The Detailed Internal Structure of the Block Matrix of the Unitary Operator 310
6.5 The Solution of the Trigonometric Moment Problem 316
Bibliographical Notes 323

7 Block Jacobi Type Matrices and the Complex Moment Problem in the Exponential Form 325
7.1 Construction of the Tri-Diagonal Block Matrix Corresponding to the Complex Moment Problem in the Exponential Form 325
7.2 Direct and Inverse Spectral Problems for Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Complex Moment Problem in the Exponential Form 345
7.3 Conditions of Unitarity and Commutativity of Matrices Corresponding to the Complex Moment Problem in the Exponential Form 356
7.4 The Solution of the Complex Moment Problem in the Exponential Form 361
Bibliographical Notes 370

8 Block Jacobi Type Matrices and the Two-Dimensional Real Moment Problem 371
8.1 Construction of the Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Two-Dimensional Real Moment Problem 371
8.2 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Two-Dimensional Real Moment Problem 386
8.3 An Analogue of the Weyl Function Corresponding to the Two-Dimensional Real Moment Problem 397
8.4 The Solution of the Two-Dimensional Real Moment Problem 399
8.5 Examples of Matrices Corresponding to the Two-Dimensional Real Moment Problem 407
Bibliographical Notes 411

9 Applications of the Spectral Theory of Jacobi Matrices and Their Generalizations to the Integration of Nonlinear Equations 413
9.1 The Integration of the Toda Chain on the Semi-axis Using the Spectral Theory of Jacobi Matrices 414
9.2 The Doubly Infinite Toda Chain and Its Equivalent Lax Equation for Block Jacobi Type Matrices 424
9.3 The Spectral Theory of Block Jacobi Type Matrices Corresponding to Toda Chains 430
9.4 Equations for the Weyl Function and Spectral Matrices 446
9.5 The Basic Statement and Applications to Hamiltonian Systems 458
Bibliographical Notes 464

References 467
Subject Index 483
Notation Index 487

Chapter 1

Introduction

For a better understanding of the content of the monograph, we first outline the main content of the second chapter: direct and inverse spectral problems for classical Jacobi matrices and orthogonal polynomials on the real axis $\mathbb{R}$ (see, for example, [4, 21]). In this classical theory, the symmetric (Hermitian) Jacobi matrix

$$J = \begin{bmatrix} b_0 & a_0 & 0 & 0 & \cdots \\ a_0 & b_1 & a_1 & 0 & \cdots \\ 0 & a_1 & b_2 & a_2 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad b_n \in \mathbb{R},\quad a_n > 0,\quad n \in \mathbb{N}_0 = \{0, 1, 2, \dots\} \tag{1.0.1}$$

is investigated in the space $\ell_2$ of sequences $f = (f_n)_{n=0}^{\infty}$. This matrix defines an operator $J$ in $\ell_2$ on finite sequences $f \in \ell_{\mathrm{fin}} \subset \ell_2$; this operator is Hermitian with equal defect numbers and, hence, has self-adjoint extensions in $\ell_2$. The closure $\tilde{J}$ of the operator $J$ is a self-adjoint operator under certain conditions on $J$ (for example, $\sum_{n=0}^{\infty} a_n^{-1} = \infty$).

The direct spectral problem, i.e., the eigenfunction expansion for $\tilde{J}$ (for simplicity, we assume that $\tilde{J}$ is self-adjoint), is treated in the following way. We introduce the sequence of polynomials $P(\lambda) = (P_n(\lambda))_{n=0}^{\infty}$, $\forall \lambda \in \mathbb{R}$, as a solution of the equation $J P(\lambda) = \lambda P(\lambda)$ with initial conditions $P_0(\lambda) = 1$, $P_{-1}(\lambda) = 0$, namely

$$a_{n-1} P_{n-1}(\lambda) + b_n P_n(\lambda) + a_n P_{n+1}(\lambda) = \lambda P_n(\lambda), \qquad P_0(\lambda) = 1,\quad P_{-1}(\lambda) = 0,\quad \forall n \in \mathbb{N}_0. \tag{1.0.2}$$
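As a numerical aside (our own illustration, not part of the monograph), the step-by-step solution of the recurrence (1.0.2) is easy to sketch in code; the constant coefficients $b_n = 0$, $a_n = 1/2$ below are a hypothetical choice under which the solution reduces to the Chebyshev polynomials of the second kind.

```python
def solve_recurrence(a, b, lam, N):
    # Step-by-step solution of (1.0.2):
    # a_{n-1} P_{n-1} + b_n P_n + a_n P_{n+1} = lam * P_n,
    # with P_0 = 1 and P_{-1} = 0; this is possible since a_n > 0.
    P = [1.0, (lam - b[0]) / a[0]]  # P_0, and P_1 (the P_{-1} term vanishes)
    for n in range(1, N):
        P.append(((lam - b[n]) * P[n] - a[n - 1] * P[n - 1]) / a[n])
    return P

# Hypothetical coefficients b_n = 0, a_n = 1/2: the solution is then
# the Chebyshev polynomial of the second kind, P_n(lam) = U_n(lam).
lam = 0.3
P = solve_recurrence([0.5] * 6, [0.0] * 6, lam, 5)
```

With this choice, $P_3(\lambda) = U_3(\lambda) = 8\lambda^3 - 4\lambda$, which the recurrence reproduces.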


This recurrence relation has a solution: it is only necessary to proceed step by step, starting with $P_0(\lambda)$ and $P_{-1}(\lambda) = 0$; this is possible since $a_n > 0$. The sequence $P(\lambda)$ of polynomials belongs to $l = \mathbb{C}^{\infty}$ for every $\lambda$ (more exactly, to the real part of $l$), and it is a generalized eigenvector for $\tilde{J}$ with the corresponding eigenvalue $\lambda$ (we use a certain rigging of $\ell_2$). The corresponding Fourier transform "$\hat{\ }$" in generalized eigenvectors of the operator $\tilde{J}$ has the form

$$\ell_2 \supset \ell_{\mathrm{fin}} \ni f = (f_n)_{n=0}^{\infty} \longmapsto \hat{f}(\lambda) = \sum_{n=0}^{\infty} f_n P_n(\lambda) \in L_2(\mathbb{R}, d\rho(\lambda)) = L_2. \tag{1.0.3}$$

This transformation is an isometric operator (after extension by continuity) from the whole space $\ell_2$ into the whole space $L_2$. Herewith, the image of $\tilde{J}$ is the operator of multiplication by $\lambda$ in the space $L_2$. The polynomials $P_n(\lambda)$ are orthogonal with respect to the measure $d\rho(\lambda)$.

The inverse problem in this classical case is the following. Let a probability Borel measure $d\rho(\lambda)$ on $\mathbb{R}$ be given that has all its moments

$$s_n = \int_{\mathbb{R}} \lambda^n \, d\rho(\lambda), \qquad n \in \mathbb{N}_0 \tag{1.0.4}$$

(and whose support contains an infinite set in a finite interval). The question is whether it is possible to recover the corresponding Jacobi matrix $J$ such that the initial measure $d\rho(\lambda)$ equals the spectral measure of $\tilde{J}$, and how to obtain such a reconstruction. The answer is very simple: it is necessary to take the sequence of functions in $L_2$ (see (1.0.4))

$$1, \lambda, \lambda^2, \dots \tag{1.0.5}$$

that are linearly independent due to the condition on the support of $d\rho(\lambda)$, and apply the Gram–Schmidt orthogonalization procedure. As a result, we get a sequence of orthonormal polynomials that form a basis in $L_2$:

$$P_0(\lambda) = 1,\ P_1(\lambda),\ P_2(\lambda), \dots. \tag{1.0.6}$$

Then the matrix $J$ is reconstructed by the formulas

$$a_n = \int_{\mathbb{R}} \lambda P_n(\lambda) P_{n+1}(\lambda) \, d\rho(\lambda), \qquad b_n = \int_{\mathbb{R}} \lambda (P_n(\lambda))^2 \, d\rho(\lambda), \qquad n \in \mathbb{N}_0. \tag{1.0.7}$$
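A minimal numerical sketch of this reconstruction (our own illustration; the discrete stand-in measure and the cutoff degree are assumptions, not from the text): approximate $d\rho$ by point masses, orthogonalize $1, \lambda, \lambda^2, \dots$ by Gram–Schmidt, and read off $a_n$, $b_n$ via (1.0.7).

```python
import numpy as np

# Hypothetical discrete stand-in measure: point masses 1/M at M random nodes.
rng = np.random.default_rng(0)
M = 200
nodes = rng.uniform(-1.0, 1.0, M)
weights = np.full(M, 1.0 / M)                 # probability measure

# Gram-Schmidt on 1, lam, lam^2, lam^3 in L2(d rho); each polynomial is
# represented by its values at the nodes.
V = np.vander(nodes, 4, increasing=True)      # columns: 1, lam, lam^2, lam^3
Q = []
for j in range(4):
    v = V[:, j].astype(float)
    for q in Q:
        v = v - np.sum(weights * v * q) * q   # subtract projections
    v = v / np.sqrt(np.sum(weights * v * v))  # normalize
    Q.append(v)

# Jacobi entries via (1.0.7): a_n = ∫ λ P_n P_{n+1} dρ,  b_n = ∫ λ P_n² dρ.
a_coef = [float(np.sum(weights * nodes * Q[n] * Q[n + 1])) for n in range(3)]
b_coef = [float(np.sum(weights * nodes * Q[n] ** 2)) for n in range(3)]
```

Here $b_0$ is simply the first moment of the measure, and all $a_n$ come out positive, as (1.0.1) requires.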


The above mentioned connection between Jacobi matrices, the classical moment problem, and orthogonal polynomials is very fruitful for studying these objects. Many mathematicians have worked in this direction, but it is necessary to single out the corresponding results of M. G. Krein [128, 185, 186] and N. I. Achiezer [2, 3]. The main question of the further chapters is the following: how should one proceed to generalize the above mentioned classical theory of orthogonal polynomials to the case of the complex plane $\mathbb{C}$ (or some subset of $\mathbb{C}$, for example, the unit circle $\mathbb{T} \subset \mathbb{C}$), to the real plane $\mathbb{R}^2$, etc.? Roughly speaking, instead of one self-adjoint operator in $\ell_2$, it is necessary to consider a normal (unitary) operator or a couple of commuting operators that act in some space like $\ell_2$. More exactly, in the fourth chapter, for example, instead of the space $\ell_2 = \mathbb{C} \oplus \mathbb{C} \oplus \cdots$ we take

$$\ell_2 = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \qquad \text{where } H_n = \mathbb{C}^{n+1}, \tag{1.0.8}$$

and instead of the scalar matrix (1.0.1) we take a Jacobi type block matrix with elements $a_n$, $b_n$, and $c_n$ that are finite-dimensional operators (matrices) acting between the spaces $H_n$ in (1.0.8), namely:

$$J = \begin{bmatrix} b_0 & c_0 & 0 & 0 & 0 & \cdots \\ a_0 & b_1 & c_1 & 0 & 0 & \cdots \\ 0 & a_1 & b_2 & c_2 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad \begin{aligned} a_n &: H_n \longrightarrow H_{n+1}, \\ b_n &: H_n \longrightarrow H_n, \\ c_n &: H_{n+1} \longrightarrow H_n, \end{aligned} \qquad n \in \mathbb{N}_0. \tag{1.0.9}$$

Such a matrix (1.0.9) induces the operator $J$ in $\ell_2$ defined on finite vectors $\ell_{\mathrm{fin}} \subset \ell_2$. For simplicity, we will demand everywhere in the sequel that the norms of the elements $a_n$, $b_n$, and $c_n$ are uniformly bounded and, therefore, the operator $J$ is bounded in $\ell_2$. The essential condition $a_n > 0$ in (1.0.1) now takes a longer form:

$$a_n = \begin{bmatrix} a_{n;0,0} & * & * & \cdots & * \\ 0 & a_{n;1,1} & * & \cdots & * \\ \vdots & \vdots & \ddots & & \vdots \\ 0 & 0 & \cdots & & a_{n;n,n} \\ 0 & 0 & \cdots & & 0 \end{bmatrix} \quad \bigl((n+2) \times (n+1)\bigr),$$

$$c_n = \begin{bmatrix} c_{n;0,0} & c_{n;0,1} & 0 & \cdots & 0 & 0 \\ * & * & c_{n;1,2} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & & \vdots \\ * & * & * & \cdots & c_{n;n-1,n} & 0 \\ * & * & * & \cdots & * & c_{n;n,n+1} \end{bmatrix} \quad \bigl((n+1) \times (n+2)\bigr),$$

$$a_{n;0,0},\, a_{n;1,1},\, \dots,\, a_{n;n,n} > 0, \qquad c_{n;0,1},\, c_{n;1,2},\, \dots,\, c_{n;n,n+1} > 0, \qquad n \in \mathbb{N}_0. \tag{1.0.10}$$
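To make the bookkeeping of (1.0.8)–(1.0.10) concrete, here is a small sketch (our own illustration; the block contents are placeholders) that assembles a block tri-diagonal matrix with the dimensions dictated by $a_n : H_n \to H_{n+1}$, $b_n : H_n \to H_n$, $c_n : H_{n+1} \to H_n$:

```python
import numpy as np

# Assemble a finite truncation of the block matrix (1.0.9), with
# dim H_n = n + 1 as in (1.0.8).  Placeholder blocks: b_n = identity,
# a_n and c_n = all-ones matrices of the correct rectangular shapes.
def assemble(N):
    dims = [n + 1 for n in range(N)]                   # dim H_n
    offs = np.concatenate(([0], np.cumsum(dims)))      # block offsets
    J = np.zeros((offs[-1], offs[-1]))
    for n in range(N):
        J[offs[n]:offs[n+1], offs[n]:offs[n+1]] = np.eye(dims[n])   # b_n
        if n + 1 < N:
            # a_n: H_n -> H_{n+1}, an (n+2) x (n+1) block below the diagonal
            J[offs[n+1]:offs[n+2], offs[n]:offs[n+1]] = np.ones((dims[n+1], dims[n]))
            # c_n: H_{n+1} -> H_n, an (n+1) x (n+2) block above the diagonal
            J[offs[n]:offs[n+1], offs[n+1]:offs[n+2]] = np.ones((dims[n], dims[n+1]))
    return J

J = assemble(4)   # H_0 ⊕ H_1 ⊕ H_2 ⊕ H_3 has dimension 1 + 2 + 3 + 4 = 10
```

All blocks outside the three block diagonals vanish, e.g. the $(H_0, H_2)$ block.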

Under some simple conditions on $a_n$, $b_n$, and $c_n$, $n \in \mathbb{N}_0$, the matrix $J$ is a formally normal operator, i.e., $J J^+ = J^+ J$, where $J^+$ denotes the matrix adjoint to $J$. Therefore (due to the fact that $J$ is bounded), the closure $\tilde{J}$ is a bounded operator in $\ell_2$. Let $z \in \mathbb{C}$ belong to the spectrum of $\tilde{J}$ and let $P(z) = (P_n(z))_{n=0}^{\infty}$ be the corresponding generalized eigenvector of $\tilde{J}$. Here $P_n(z) \in H_n$ is a vector-valued polynomial with respect to the variables $z, \bar{z}$ of degree $n$, i.e., it is some linear combination of $z^j \bar{z}^k$, $0 \le j + k \le n$. According to the generalized eigenfunction expansion theorem, it is a solution of two equations of type (1.0.2) (but with matrix coefficients):

$$J P(z) = z P(z), \qquad J^+ P(z) = \bar{z} P(z). \tag{1.0.11}$$

The corresponding Fourier transform "$\hat{\ }$" for the operator $\tilde{J}$ has the form

$$\ell_2 \supset \ell_{\mathrm{fin}} \ni f = (f_n)_{n=0}^{\infty} \longmapsto \hat{f}(z) = \sum_{n=0}^{\infty} (f_n, P_n(z))_{H_n} \in L_2(\mathbb{C}, d\rho(z)) = L_2, \tag{1.0.12}$$

where $d\rho(z)$ is the spectral measure of $\tilde{J}$, and it has a compact support. The mapping (1.0.12) is an isometric operator from the whole $\ell_2$ into the whole $L_2$. The polynomials $P_n(z)$ are orthonormal with respect to the measure $d\rho(z)$ and form a basis in the space $L_2$. It is convenient to denote these polynomials by

$$(Q_{n;0}(z), Q_{n;1}(z), \dots, Q_{n;n}(z)) = (P_{n;0}(z), P_{n;1}(z), \dots, P_{n;n}(z)) = P_n(z).$$

The result described above is called the direct spectral problem for the matrix $J$ of the form (1.0.9) with conditions (1.0.10) on its elements.
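The formal normality condition $J J^+ = J^+ J$ can be illustrated on a toy finite matrix (our own example, unrelated to the specific block structure above): any polynomial in a unitary matrix is normal without being Hermitian.

```python
import numpy as np

# The cyclic shift S is a unitary (permutation) matrix; a polynomial in S
# commutes with its adjoint (also a polynomial in S), hence is normal,
# yet it is not Hermitian.
S = np.roll(np.eye(4), 1, axis=0)   # cyclic shift on C^4
J = S + 2 * S @ S                    # normal but non-Hermitian
```

Checking `J @ J.conj().T == J.conj().T @ J` numerically confirms normality, while `J != J.conj().T` shows it is not Hermitian.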


The inverse spectral problem is now formulated as follows. Let a probability Borel measure $d\rho(z)$ with a compact support on $\mathbb{C}$ be given; assume that all complex moments

$$s_{m,n} = \int_{\mathbb{C}} z^m \bar{z}^n \, d\rho(z), \qquad m, n \in \mathbb{N}_0, \tag{1.0.13}$$

exist and that the support of $d\rho(z)$ is such that all functions $z^j \bar{z}^k$, $j, k \in \mathbb{N}_0$ (belonging to $L_2$, see (1.0.12)), are linearly independent in this space (for example, the support contains some open subset of $\mathbb{C}$). It is necessary to construct a Jacobi type matrix (1.0.9) satisfying conditions (1.0.10) such that the spectral measure of the normal operator $\tilde{J}$ equals the initial measure. As in the classical case, it is necessary to apply the standard Gram–Schmidt orthogonalization procedure to the sequence of functions in $L_2$

$$(z^j \bar{z}^k)_{j,k=0}^{\infty} \tag{1.0.14}$$

(instead of (1.0.5)). But the sequence (1.0.14) has two indices, and therefore it is necessary to choose a convenient global (linear) order for (1.0.14). We order it in the following way:

$$z^0 \bar{z}^0 = 1;\quad z^1 \bar{z}^0,\ z^0 \bar{z}^1;\quad z^2 \bar{z}^0,\ z^1 \bar{z}^1,\ z^0 \bar{z}^2;\quad \dots;\quad z^n \bar{z}^0,\ z^{n-1} \bar{z}^1,\ \dots,\ z^0 \bar{z}^n;\quad \dots. \tag{1.0.15}$$
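The global order (1.0.15) is just the graded ordering of the exponent pairs $(j, k)$ by total degree, with $j$ decreasing within each degree; a tiny sketch (our own illustration):

```python
# Enumerate the exponent pairs (j, k) of z^j zbar^k in the global order
# (1.0.15): by total degree n = j + k, with j descending inside each degree.
def graded_order(max_degree):
    return [(j, n - j) for n in range(max_degree + 1) for j in range(n, -1, -1)]

order = graded_order(2)
# order is [(0,0), (1,0), (0,1), (2,0), (1,1), (0,2)],
# i.e. 1; z, zbar; z^2, z*zbar, zbar^2 -- exactly the list (1.0.15).
```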

After such an orthogonalization, we get a sequence of polynomials

$$P_n(z) = (P_{n;0}(z), P_{n;1}(z), \dots, P_{n;n}(z)), \qquad n \in \mathbb{N}_0,$$

and the matrix (1.0.9) with (1.0.10) is reconstructed by using formulas like (1.0.7).

The above mentioned results are given in Chap. 4. All the necessary information about the projection spectral theorem is presented in the first section. Note also that the theory of block Jacobi matrices that are Hermitian or self-adjoint operators acting in the space $\ell_2(\mathcal{H}) = \mathcal{H} \oplus \mathcal{H} \oplus \cdots$, where $\mathcal{H}$ is some Hilbert space, was investigated in [187] for the case $\dim \mathcal{H} < \infty$ and in [17, 21] for the case $\dim \mathcal{H} \le \infty$. The family of commuting self-adjoint operators acting on the Fock space can be seen, for example, in [30]. In particular, the Fock space has the form (1.0.8) with spaces $H_n$ that are $n$-particle infinite-dimensional Hilbert spaces for $n > 0$.

It would be interesting to develop the spectral theory of block matrices $J$ of type (1.0.9) and (1.0.10) in the space $\ell_2$ (1.0.8) in the case of unbounded normal operators $\tilde{J}$. In connection with this, a number of questions arise. What conditions on the elements of the matrix $J$ guarantee that the operator $\tilde{J}$ is normal? In what


terms can the normal extensions of $J$ in $\ell_2$ be described, as in the case of classical Jacobi matrices?

Let us compare the latest results with the results of the fifth chapter. Briefly speaking, it is necessary to apply the previously explained theory in the case when the matrix $J$ of the form (1.0.9) with elements from (1.0.10) is a unitary operator. But there is an important point here: the spectrum of $\tilde{J}$ lies on the unit circle $\mathbb{T} \subset \mathbb{C}$; therefore, the functions (1.0.15) are linearly dependent on this set, because

$$z^{j+n} \bar{z}^{k+n} = z^j \bar{z}^k, \qquad j, k \in \mathbb{N}_0,\quad z \in \mathbb{T},\quad \forall n \in \mathbb{N}_0.$$

Now it is necessary to take only those functions from (1.0.15) for which $j \cdot k = 0$, namely

$$z^0 \bar{z}^0 = 1;\quad z^1 \bar{z}^0 = z^1,\ z^0 \bar{z}^1 = \bar{z}^1;\quad z^2 \bar{z}^0 = z^2,\ z^0 \bar{z}^2 = \bar{z}^2;\quad \dots;\quad z^n \bar{z}^0 = z^n,\ z^0 \bar{z}^n = \bar{z}^n;\quad \dots, \qquad z \in \mathbb{T}. \tag{1.0.16}$$

Note that it is possible to obtain the classical Jacobi matrices in a similar way by assuming $\bar{z} = z$, i.e., $z \in \mathbb{R}$. The support of $d\rho(z)$ must be an infinite subset of $\mathbb{T}$; hence, the functions (1.0.16) are linearly independent and form a total set in $L_2(\mathbb{T}, d\rho(z))$. The global order of orthogonalization of the sequence (1.0.16) is still the same as in (1.0.15). But now "the diagonal" contains only two elements for $n = 1, 2, \dots$ (instead of $n + 1$ as before). Due to this, it is convenient to consider our operator $\tilde{J}$ on the subspace $\ell_{2,u}$ of the space $\ell_2$, namely

$$\ell_{2,u} = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \qquad \text{where } H_0 = \mathbb{C},\ H_1 = H_2 = \cdots = \mathbb{C}^2. \tag{1.0.17}$$

The corresponding block matrices (1.0.9) act in the space (1.0.17), so the blocks act as operators, namely,

\[ a_0 : \mathbb{C} \longrightarrow \mathbb{C}^2, \quad b_0 : \mathbb{C} \longrightarrow \mathbb{C}, \quad c_0 : \mathbb{C}^2 \longrightarrow \mathbb{C}; \qquad a_n, b_n, c_n : \mathbb{C}^2 \longrightarrow \mathbb{C}^2, \quad n \in \mathbb{N} = \{1, 2, \ldots\}. \tag{1.0.18} \]


Conditions $a_n > 0$ from (1.0.1) take, in the setting of (1.0.10), the form

\[ a_0 = \begin{pmatrix} a_{0;0,0} \\ 0 \end{pmatrix}, \quad b_0 = \big( b_{0;0,0} \big), \quad c_0 = \big( c_{0;0,0}\ \ c_{0;0,1} \big); \qquad a_n = \begin{pmatrix} a_{n;0,0} & a_{n;0,1} \\ 0 & 0 \end{pmatrix}, \quad c_n = \begin{pmatrix} 0 & 0 \\ c_{n;1,0} & c_{n;1,1} \end{pmatrix}; \tag{1.0.19} \]

\[ a_{0;0,0},\ c_{0;0,1},\ a_{n;0,0},\ c_{n;1,1} > 0, \qquad n \in \mathbb{N}. \]

All results of the previous chapter hold true for the unitary operators $\tilde{J}$ which act in the space $l_{2,u}$ (1.0.17) and are defined by block matrices of Jacobi type (1.0.9) with conditions (1.0.19). Namely, the direct spectral problem leads to a Fourier transform of type (1.0.12) between the spaces $l_{2,u}$ and $L_2(\mathbb{T}, d\rho(z))$. The inverse spectral problem is similar to the one described in the previous chapter, where the orthogonalization is applied to the sequence (1.0.16). This problem is related to the trigonometric moment problem.

It should be noted that in the fourth chapter, Jacobi matrices of a bounded normal operator $J$ are considered, when the functions (1.0.15) are linearly independent in the space $L_2(\mathbb{C}, d\rho(z))$ constructed from the spectral measure $d\rho(z)$ of the bounded normal operator $N = \tilde{J}$. In terms of the operator $N = \tilde{J}$, this condition means that if for some coefficients $c_{j,k} \in \mathbb{C}$ and $n \in \mathbb{N}$

\[ \sum_{j,k=0}^{n} c_{j,k}\, N^j N^{*k} = 0, \tag{1.0.20} \]

then $c_{j,k} = 0$ for all $j, k \in \{0, 1, \ldots, n\}$. It is not difficult to see that the last condition is equivalent to the linear independence of the functions in (1.0.15). Indeed, let $dE(z)$ be the resolution of the identity for $N$; then (1.0.20) can be rewritten as

\[ \int_{\mathbb{C}} \Big| \sum_{j,k=0}^{n} c_{j,k}\, z^j \bar z^k \Big|^2 \, d(E(z)f, f)_{l_2} = 0, \qquad \forall f \in l_2. \]

Using the boundedness of the support of $E(\alpha)$, we find that the last equality means that $\sum_{j,k=0}^{n} c_{j,k} z^j \bar z^k$ belongs to $L_2(\mathbb{C}, d\rho(z))$ and is equal to zero there; by the assumed linear independence of the functions (1.0.15), all $c_{j,k} = 0$. The converse statement is also clear.
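The dichotomy just discussed, linear independence of the monomials $z^j \bar z^k$ for a measure spread over $\mathbb{C}$ versus their degeneration on the unit circle, can be observed numerically through the rank of a sample matrix. The following is only an illustrative sketch; the particular sample points, and the use of a discrete point set at all, are assumptions made for the example:

```python
import numpy as np

n = 2
jk = [(j, k) for j in range(n + 1) for k in range(n + 1)]   # 9 functions

def values(points):
    # Row i: the values of all functions z^j * conj(z)^k at z = points[i].
    return np.array([[z**j * np.conj(z)**k for (j, k) in jk] for z in points])

# A grid of 25 generic points in C (5 radii x 5 angles, arbitrary choices):
r = np.array([0.6, 0.9, 1.3, 1.7, 2.1])
theta = 2 * np.pi * np.array([0.05, 0.23, 0.41, 0.67, 0.89])
z_generic = (r[:, None] * np.exp(1j * theta)[None, :]).ravel()
print(np.linalg.matrix_rank(values(z_generic), tol=1e-8))   # 9

# 25 points on the unit circle: z * conj(z) = 1, so the values depend
# only on j - k, and only 2n + 1 = 5 independent functions survive.
z_circle = np.exp(2j * np.pi * np.arange(25) / 25)
print(np.linalg.matrix_rank(values(z_circle), tol=1e-8))    # 5
```

The drop of the rank from $(n+1)^2$ to $2n+1$ is exactly the collapse of the family (1.0.15) onto the family (1.0.16) on $\mathbb{T}$.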


It follows from the previous reasoning that if the normal operator $N$ satisfies a condition of the type $NN^* = I$ or $N = N^*$, then its matrix necessarily defines a unitary or a self-adjoint operator, respectively, in the corresponding space $l_2$.

Let us discuss the third chapter. Each moment problem is associated with the theory of generalized eigenvector expansions, which is applicable to its investigation in various cases. In this approach, first of all, the representation of the moments is proved using the theorem on generalized eigenvector expansions applied to the corresponding operators. For such vectors, a simple equation is given that depends on the corresponding moment problem: the solution of this equation gives the form of the representation. The corresponding Parseval equality actually gives the representation of the moments. After that, we consider the connection of block tri-diagonal Jacobi type matrices with a spectral measure equivalent to the measure of the moment representation. The corresponding spectral theory of such matrices gives additional information about the mentioned problem. This approach allows us to explore the classical, trigonometric, and complex, as well as the strong, moment problems from a single point of view. This chapter is devoted to the demonstration of such an approach in the study of the strong moment problem. The complete exposition of the theory of the strong moment problem provided here is based on the spectral theory of self-adjoint operators, independently of the previous and subsequent chapters.

It is important to note that, for the first time, ideas similar to these appeared in studies of representations of positive definite functions, moment problems, and related questions in the works of M. G. Krein (1946–1948, [180, 185]). He constructed a Hilbert space based on a positive definite kernel and considered an operator in this space related to the investigated problem. He applied this method to directional functionals, which he invented at that time. Y. M. Berezansky (1956) in [19] applied the method of generalized eigenvector expansions to these operators, which gave, in particular, a positive result for the moment problem.

The strong moment problem is formulated as follows. For a given sequence $s = (s_n)_{n=-\infty}^{\infty}$ of real numbers $s_n \in \mathbb{R}$, one needs to find a measure $d\rho(\lambda)$ defined on the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ so that these numbers $s_n$ are its moments, i.e.,

\[ s_n = \int_{\mathbb{R}} \lambda^n \, d\rho(\lambda), \qquad n \in \mathbb{Z} := \{\ldots, -1, 0, 1, \ldots\}. \tag{1.0.21} \]

Recall that if the representation (1.0.21) holds true only for $n \in \mathbb{N}_0 := \{0, 1, 2, \ldots\} = \{0\} \cup \mathbb{N}$, then we have the classical moment problem [4, 21], the answer to which is well known: $(s_n)_{n=0}^{\infty}$ is a moment sequence if and only if, for an arbitrary finite sequence $(f_n)_{n=0}^{\infty}$ of complex numbers $f_n \in \mathbb{C}$, the inequality

\[ \sum_{j,k=0}^{\infty} s_{j+k}\, f_j \bar f_k \ge 0 \tag{1.0.22} \]


holds true. In other words, the matrix $(s_{j+k})_{j,k=0}^{\infty}$ must be positive definite, i.e., non-negative. It follows from the definition that every strong moment sequence is also a classical one on $\mathbb{N}_0 \subset \mathbb{Z}$. So it is understandable that if we have the representation (1.0.21) for $n \in \mathbb{N}_0$, then it can be extended to $n \in \mathbb{Z}_- := \{\ldots, -2, -1\}$ provided the measure $d\rho(\lambda)$ is "small" (vanishes) in a neighborhood of the point 0: every integral for $n \in \mathbb{Z}$ must exist. This situation corresponds to a certain reality, but the problem is not a simple one; since 1983–1984, many works have been devoted to the study of the strong moment problem. We do not provide the relevant list, but only the review [142] and some works closer to our research. It is important to note that the representation (1.0.21) holds true if and only if a condition like (1.0.22) holds true for an arbitrary finite set $(f_n)_{n=-\infty}^{\infty}$, with the sum running from $-\infty$ to $\infty$. This result was published in 1984 in [143], but we would like to note that the representation (1.0.21) and its equivalence to the positive definiteness (1.0.22) were published in 1965 by Berezansky in [21] (see also [20]) as a special case of a more general theorem. We also note that the strong moment problem appeared for the first time in the article by Nudel'man [230].

Both in the classical moment problem and in the strong case, similar questions arise: under what conditions is the representation (1.0.21) unique, and, in the case of non-uniqueness, how does one describe all measures $d\rho(\lambda)$ with the given moments $s = (s_n)_{n=-\infty}^{\infty}$? Therefore, the so-called Laurent polynomials, that is, finite linear combinations of $\lambda^n$, $n \in \mathbb{Z}$, are essential for this problem. Important questions are the form of an analog of the Jacobi matrix associated with $s$, the spectral theory of such matrices, and others. Some of these investigations were carried out in the works cited in [142] and [228]. But the approach to the corresponding problem was analytical, often without applications of the corresponding natural tool, namely the spectral theory of operators. It is important to say that the application to this problem of the generalized eigenvector expansion theory and the corresponding results for Jacobi matrices and positive definite kernels ([21], Sections 5, 7, 8) together give a clear, transparent picture, similar to the classical moment problem. Unfortunately, previous authors used other ways. The works [137, 261, 267] had a great influence on our research (note that these works also do not use the generalized eigenvector expansion theory). In [137] one finds the Jacobi-Laurent matrix with Laurent polynomials corresponding to the strong moment problem; the operator generated by such a matrix was considered there and some of its spectral properties were investigated. The papers [261, 267] generalize some results of [75, 142] on the strong moment problem to the matrix case. Their author systematically uses the theory of operators, and it was this work that gave impetus to the writing of ours. We also note the recent works [78, 79] containing questions related to the strong moment problem, which influenced our construction.

The structure of the chapter is as follows. First of all, we recall the main result about generalized eigenvector expansions, which is important for what follows.
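Before passing on, note that for a finite stretch of data the positivity condition (1.0.22), and its two-sided analog for the strong problem, can be probed numerically: truncate the moment matrices and check that they are non-negative. A minimal sketch, in which the discrete measure (atoms on $(0, \infty)$) is purely an illustrative assumption:

```python
import numpy as np

# Illustrative discrete measure: atoms lam[i] > 0 with weights w[i].
# Atoms bounded away from 0 make all moments s_n, n in Z, finite.
lam = np.array([0.5, 1.0, 1.7, 2.4])
w = np.array([0.3, 0.2, 0.4, 0.1])

N = 3
# Moments s_n = sum_i w_i * lam_i**n for n = -2N .. 2N.
s = {n: float(np.sum(w * lam**n)) for n in range(-2 * N, 2 * N + 1)}

# Classical (one-sided) truncation: H1[j, k] = s_{j+k}, j, k = 0..N.
H1 = np.array([[s[j + k] for k in range(N + 1)] for j in range(N + 1)])

# Strong (two-sided) truncation: H2[j, k] = s_{j+k}, j, k = -N..N.
idx = range(-N, N + 1)
H2 = np.array([[s[j + k] for k in idx] for j in idx])

# Both matrices are Gram matrices of the functions lam**j, hence >= 0.
min1 = np.linalg.eigvalsh(H1).min()
min2 = np.linalg.eigvalsh(H2).min()
print(min1 >= -1e-10, min2 >= -1e-10)
```

Non-negativity here is automatic because each truncation is a Gram matrix of the powers $\lambda^j$ with respect to the measure; conversely, a negative eigenvalue of some truncation certifies that a given data set is not a (strong) moment sequence.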


The recalled result on generalized eigenvector expansions makes it possible to prove a theorem about the representation (1.0.21) and some conditions under which the measure is unique. Sections 3.3 and 3.4 are devoted to the construction and study of Jacobi type block matrices associated with the representation (1.0.21). It is important to note that our Jacobi-Laurent matrix is block tri-diagonal, instead of the five-diagonal matrices considered by previous authors. Unlike the usual matrices, block matrices are often very convenient in the relevant situation and are also used for the trigonometric and complex problems. To appreciate such an approach, it is important to keep in mind the results of the article [269]. The convenience of block matrices gives a simpler way of finding relationships between objects, similar to those considered for classical Jacobi matrices. Our Jacobi-Laurent matrix $J$ is symmetric and has an algebraic inverse $J^{-1}$, which is also block tri-diagonal with appropriate properties. Here the following task arises: to describe such a matrix $J$ internally, similarly to the case of a five-diagonal unitary matrix (that is, a tri-diagonal block matrix). In the unitary case, such a description was provided by S. Verblunsky (see the monographs [263, 264]). A corresponding description is provided in the last section.

The chapter also presents the spectral theory of Jacobi-Laurent block matrices, including the corresponding direct and inverse spectral problems. The construction is analogous to that applied to classical Jacobi matrices (Chap. 2) and uses the theory of generalized eigenvector expansions. First, the Jacobi-Laurent matrix $J$ is considered in the general case when $J$ defines only a Hermitian (symmetric) operator, not a self-adjoint one. The relevant theory constructed further is similar to the case of the classical Jacobi matrix. We also note that we do not touch upon self-adjoint extensions of the operator generated by $J$ that leave the space; thus we consider only orthogonal spectral measures. Note that, for ease of reading, the theory is often presented first not in the most general case. So, we first consider the simpler case of the bounded operator generated by the Jacobi-Laurent matrix, that is, the case of the measure $d\rho(\lambda)$ with bounded support. Below, this is extended to the case of an arbitrary support, but the set of functions $\mathbb{R} \ni \lambda \mapsto \lambda^m$, $m \in \mathbb{Z}$, must be total in the space $L_2(\mathbb{R}, d\rho(\lambda))$ (in particular, such a measure is quite exotic).

The last, eighth chapter is devoted to the study of semi-infinite and double-sided infinite Toda chains. Recall that the classical double-sided infinite Toda chain [282] is the system of nonlinear difference-differential equations

\[ \dot\alpha_n(t) = \tfrac{1}{2}\,\alpha_n(t)\big(\beta_{n+1}(t) - \beta_n(t)\big), \qquad \dot\beta_n(t) = \alpha_n^2(t) - \alpha_{n-1}^2(t), \qquad n \in \mathbb{Z},\ t \in [0, T),\ T > 0. \tag{1.0.23} \]

Here the unknowns $\alpha_n(t), \beta_n(t)$ are real continuously differentiable functions and the dot denotes $\frac{d}{dt}$. For (1.0.23) we can pose the Cauchy problem: a solution $\alpha_n(t)$, $\beta_n(t)$, $n \in \mathbb{Z}$, $t \in [0, T)$, must be found for the given initial data $\alpha_n(0)$, $\beta_n(0)$, $n \in \mathbb{Z}$.


Note that this system is really a Hamiltonian system describing the dynamics of a chain of particles $q_n(t)$, $n \in \mathbb{Z}$, along the whole line with an exponential interaction. Let us explain that $\alpha_n(t)$, $\beta_n(t)$ are certain coordinates built from $q_n(t)$ and $\dot q_n(t)$ (Flaschka variables [103, 104]). The study of the Cauchy problem for (1.0.23) is a significant problem that is discussed in a wide range of the physical literature; the reference list contains the articles that are important for us. Results essential for us, obtained in [147, 219], describe the case of a finite number of equations (1.0.23), where $\mathbb{Z}$ is replaced with a finite set $\{0, \ldots, N\}$.

For the semi-infinite case, when $\mathbb{N}_0 = \{0, 1, 2, \ldots\}$ is taken instead of the set $\mathbb{Z}$, the obtained results are close to the classical method of integrating the Cauchy problem for the KdV equation on $(x, t) \in [0, \infty) \times [0, T)$ by means of the inverse spectral problem for the Sturm-Liouville equation on $x \in [0, \infty)$. In our case, instead of the Sturm-Liouville equation we take its difference analog, or more precisely, the classical Jacobi matrix. The corresponding results are published in [24, 27–29]. Here the unknowns $\alpha_n(t)$ are assumed positive, and the functions $\alpha_n(t)$, $\beta_n(t)$ are bounded uniformly with respect to $n \in \mathbb{N}_0$.

Let us explain the main idea of this approach. We associate with the equation (1.0.23) the classical Jacobi matrix $J(t)$ with the main diagonal $(\beta_n(t))_{n=0}^{\infty}$ and two adjacent identical diagonals $(\alpha_n(t))_{n=0}^{\infty}$. The matrix $J(t)$ acts in the space $l_2 = \mathbb{C}^1 \oplus \mathbb{C}^1 \oplus \cdots$ (that is, $l_2$ on $\mathbb{N}_0$) and defines the bounded self-adjoint operator $\mathbf{J}(t)$ (its boundedness follows from our assumption: we consider only bounded solutions of (1.0.23)). An important point: the evolution in time $t \in [0, T)$ of the matrix $J(t)$ is complicated and given by the equation (1.0.23), but the evolution of the spectral measure $d\rho(\lambda; t)$ is simple, namely

\[ d\rho(\lambda; t) = e^{\lambda t}\, d\rho(\lambda; 0), \qquad \lambda \in \mathbb{R}, \quad t \in [0, T). \tag{1.0.24} \]

This fact characterizes the Toda systems. Note that it follows from (1.0.24) that the spectrum of the operator $\mathbf{J}(t)$ does not depend on $t$. Let us also explain that, using (1.0.23), it is possible to find for $\rho(\lambda; t)$ a simple differential equation with respect to the variable $t$, and hence (1.0.24) is its solution. Now the finding of a solution of the Cauchy problem is quite simple: using the initial data $\alpha_n(0)$, $\beta_n(0)$, $n \in \mathbb{N}_0$, that is, the matrix $J(0)$, we find its spectral measure $d\rho(\lambda; 0)$. Further, using (1.0.24), it is possible to obtain the spectral measure $d\rho(\lambda; t)$ of the matrix $J(t)$. This knowledge provides information about $J(t)$ according to classical formulas (it is necessary to orthogonalize the sequence $1, \lambda, \lambda^2, \ldots$, that is, to solve the inverse spectral problem for the Jacobi matrix). Thus we get the solution of the Cauchy problem.

Some publications contain generalizations of this approach in the semi-infinite case. Thus, solutions of the Toda chain that are not necessarily bounded (that is, the Hermitian operator $\mathbf{J}(t)$ is generally unbounded) are investigated in [277]. A chain more general than (1.0.23), for which the formula (1.0.24) is more complicated, is considered in [41, 49, 217, 218, 259] (the "non-isospectral" case, when the spectrum of $\mathbf{J}(t)$ changes with time).
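Returning to the main idea above: the scheme can be mocked up numerically for a spectral measure with finitely many atoms. One evolves the weights by (1.0.24), re-orthogonalizes the sequence $1, \lambda, \lambda^2, \ldots$ at each time (the inverse spectral problem), and reads off $\alpha_n(t)$, $\beta_n(t)$; the chain (1.0.23) then holds up to finite-difference error. A minimal sketch; the atoms and weights are illustrative assumptions:

```python
import numpy as np

def jacobi_coeffs(lam, w, m):
    """Recurrence (Jacobi) coefficients of the orthonormal polynomials of the
    discrete measure sum_i w_i * delta_{lam_i} (Stieltjes procedure)."""
    w = w / w.sum()                      # normalization does not change them
    p_prev = np.zeros_like(lam)          # p_{-1}
    p = np.ones_like(lam)                # p_0 for the normalized measure
    a_prev = 0.0
    alpha, beta = [], []
    for _ in range(m):
        b = np.sum(w * lam * p * p)              # beta_n = (lambda p_n, p_n)
        q = lam * p - b * p - a_prev * p_prev
        a = np.sqrt(np.sum(w * q * q))           # alpha_n = ||q||
        beta.append(b); alpha.append(a)
        p_prev, p, a_prev = p, q / a, a
    return np.array(alpha), np.array(beta)

lam = np.array([-2.0, -1.0, 0.5, 1.3, 2.1, 3.0])   # spectrum, fixed in t
w0 = np.array([1.0, 0.7, 1.2, 0.4, 0.9, 0.5])      # weights at t = 0

def coeffs(t, m=3):
    return jacobi_coeffs(lam, w0 * np.exp(lam * t), m)   # evolution (1.0.24)

t, h = 0.3, 1e-5
a, b = coeffs(t)
(ap, bp), (am, bm) = coeffs(t + h), coeffs(t - h)

# Check (1.0.23) at n = 0 (alpha_{-1} = 0 on the half-line):
print(abs((bp[0] - bm[0]) / (2 * h) - a[0] ** 2))                    # ~ 0
print(abs((ap[0] - am[0]) / (2 * h) - 0.5 * a[0] * (b[1] - b[0])))   # ~ 0
```

The two residuals are of the order of the finite-difference error, which is the numerical face of the statement that (1.0.24) linearizes the semi-infinite Toda flow.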


The situation of matrix (operator, non-Abelian) equations of type (1.0.23) is considered in [46, 106]; see also [60, 61]. Here the corresponding operators $\mathbf{J}(t)$ act in the space $\mathbb{C}^d \oplus \mathbb{C}^d \oplus \cdots$, where $d > 1$ is the dimension of the matrix-valued solution (or in the orthogonal sum of copies of a fixed Hilbert space to which our solution belongs). The spectral theory of the corresponding Jacobi-type matrices is developed in [21, 36, 37, 187]. A new class of Toda solutions is presented in [119].

Let us pass to the doubly infinite (double-sided infinite) Toda chains. Note that there is no direct application of the classical theory of Jacobi matrices in this case, since $n$ is taken from $\mathbb{Z}$, while the elements of the Jacobi matrix $(a_{m,n})_{m,n=0}^{\infty}$ are indexed by $m, n \in \mathbb{N}_0$. The situation with such Toda chains is much more complicated than in the case of $\mathbb{N}_0$. First, we note the significant results that are directly related to our approach. In the classic work [282], the integration of the Cauchy problem for (1.0.23) was carried out by means of the inverse scattering problem for the difference analog of the Sturm-Liouville equation on the whole axis $x \in \mathbb{R}$, with a discrete analog of the Laplacian taken as the basic operator. This approach made it possible to find a set of appropriate solutions. Later works in this direction are [103, 104, 205]. In the paper [191], the case of periodic solutions is studied using the theta-function (see the earlier paper [147]). In the papers [99, 281], solutions of the Cauchy problem for (1.0.23) are sought by means of the difference analog of the scattering theory for the Schrödinger equation with a periodic potential; the case is also considered where the potential tends to different constants as $n \to +\infty$ and $n \to -\infty$. This approach was generalized in the above works to find solutions related to cases more general than the periodic one; namely, finite-band potentials of the basic operator were studied.

Let us explain that the application of the scattering theory to finding solutions of (1.0.23) is based on the fact that if the basic potential changes according to (1.0.23), then the scattering data change in a similarly simple way (analogously to (1.0.24) in our approach). Significant results related to the search for solutions of the Cauchy problem for the doubly infinite equations (1.0.23) were obtained in [50, 300]; in particular, a solution is found that vanishes rapidly as $|n| \to \infty$. In general, the problem of integrating (1.0.23) with arbitrary conditions on the matrix structure is open. In the work [42], the authors try to find a general solution of the problem (1.0.23) by regarding the equation (1.0.23) as a $2 \times 2$-matrix equation in the space $\mathbb{C}^2 \oplus \mathbb{C}^2 \oplus \cdots$. Thus, matrix-valued Toda systems are obtained, but the non-commutativity of matrices adds difficulties to the differential equation for the matrix measure $\rho(\lambda; t)$. This approach uses the standard doubling: a doubly infinite vector $\xi = (\ldots, \xi_{-1}, \xi_0, \xi_1, \ldots)$, $\xi_n \in \mathbb{C}^1$, is regarded as the vector

\[ (x_0, x_1, \ldots) \in \mathbb{C}^2 \oplus \mathbb{C}^2 \oplus \cdots, \qquad x_n = (\xi_n, \xi_{-n-1}) \in \mathbb{C}^2, \]

namely $\xi \leftrightarrow (x_0 = (\xi_0, \xi_{-1}),\ x_1 = (\xi_1, \xi_{-2}), \ldots)$.


But one can perform another, more convenient doubling:

\[ \xi = (\ldots, \xi_{-1}, \xi_0, \xi_1, \ldots) \ \leftrightarrow\ (x_0 = \xi_0,\ x_1 = (\xi_1, \xi_{-1}),\ x_2 = (\xi_2, \xi_{-2}), \ldots). \tag{1.0.25} \]

With such a doubling, the vector $\xi$ turns into a vector from the space

\[ \mathbb{C}^1 \oplus \mathbb{C}^2 \oplus \mathbb{C}^2 \oplus \cdots. \tag{1.0.26} \]
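In code, the doubling (1.0.25) and its inverse take only a few lines; a minimal sketch on a finite truncation (the truncation itself is just an assumption for illustration):

```python
def double(xi):
    """Map a doubly infinite vector, given as a dict n -> xi_n, to the
    sequence (x_0, x_1, ...) of (1.0.25): x_0 = xi_0, x_k = (xi_k, xi_{-k})."""
    N = max(abs(n) for n in xi)
    return [xi[0]] + [(xi[k], xi[-k]) for k in range(1, N + 1)]

def undouble(x):
    """Inverse map: recover the dict n -> xi_n from (x_0, x_1, ...)."""
    xi = {0: x[0]}
    for k, (plus, minus) in enumerate(x[1:], start=1):
        xi[k], xi[-k] = plus, minus
    return xi

xi = {-2: 'a', -1: 'b', 0: 'c', 1: 'd', 2: 'e'}
print(double(xi))                 # ['c', ('d', 'b'), ('e', 'a')]
assert undouble(double(xi)) == xi
```

The first component stays one-dimensional and each further component pairs $\xi_k$ with $\xi_{-k}$, which is exactly why the target space is (1.0.26) rather than $\mathbb{C}^2 \oplus \mathbb{C}^2 \oplus \cdots$.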

After our doubling, the Toda chain passes into a system of equations in unknowns with values in the space (1.0.26). To apply the approach of [24, 27–29] in our case, it was necessary to develop the spectral theory of Jacobi type matrices in the space (1.0.26). Note that this theory serves as an essential substitute for the theory of doubly infinite Jacobi matrices in the space $l_2$ over $\mathbb{Z}$. It is possible to generalize the relevant results published in [33]. Note that the spectral theory of such Jacobi matrices is not standard; its development is related to the works [32, 141]. It must be said that the previously discussed theory of self-adjoint matrices of Jacobi type in the space (1.0.26) can be developed thanks to the theory of generalized eigenvector expansions for self-adjoint operators. This approach was first stated in the works [18, 111]. Our research also uses results from the monographs [21, 43, 44, 47, 48].

The presented chapter is devoted to a full statement of the above results from [33] (the latter work also contains many other facts, but we only touch on the results related to Eq. (1.0.23)). It should also be said that some of the results from [24, 27–29, 33] were proved there concisely; full detailed proofs of the corresponding results are given in the chapter. We start with the semi-infinite Toda chain and give a mathematically complete exposition of the results. Note that some examples of finding the solution of the corresponding Cauchy problem, which are contained in [27, 29], have not been added. The first section is devoted to the proof that the Toda chain (1.0.23) can be written as a Lax equation with a certain matrix coefficient $A(t)$ in the space (1.0.26); the properties of $A(t)$ are important. The direct and inverse spectral problems for block matrices defined in the space (1.0.26) are considered in Sect. 8.3. Note that the construction of such a theory is specific, since the first space in the sum (1.0.26) is $\mathbb{C}^1$ while the others are $\mathbb{C}^2$; now the spectral measure $d\rho(\lambda)$ is a $2 \times 2$-matrix. The corresponding inverse problem is more complicated than for the space $\mathbb{C}^2 \oplus \mathbb{C}^2 \oplus \cdots$. At the end, a differential equation in the variable $t$ is written out for the elements of the spectral matrix $d\rho(\lambda; t) = (d\rho_{\alpha,\beta}(\lambda; t))_{\alpha,\beta=0}^{1}$ of the corresponding operator $\mathbf{J}(t)$ acting in the space (1.0.26). Since $\rho_{0,1}(\lambda; t) = \rho_{1,0}(\lambda; t)$, our differential equation is a first-order linear system of equations for the continuous real functions $r_{0,0}(\lambda; t), r_{0,1}(\lambda; t), r_{1,1}(\lambda; t)$, which are the derivatives of $d\rho_{\alpha,\beta}(\lambda; t)$ with respect to a scalar measure $d\sigma(\lambda)$, namely

\[ r_{\alpha,\beta}(\lambda; t) = \frac{d\rho_{\alpha,\beta}(\lambda; t)}{d\sigma(\lambda)}. \]


Here $t \in [0, T)$, $\lambda$ is a parameter, and the coefficients of the system depend simply on $\alpha_0(t)$, $\beta_0(t)$ and $\lambda$; but, unfortunately, the system cannot be integrated explicitly. This is the main difference between the semi-infinite and doubly infinite cases of the Toda chain: in the first case, the general solution of the corresponding (one-dimensional) differential equation is written out, namely the formula (1.0.24); in the second case, only the procedure for finding solutions is written out, that is, only a linearization of the problem is obtained, but not the solution itself. Sections 8.4 and 8.5 contain the main theorems that give the procedure for finding the solution of the Cauchy problem for (1.0.23). These sections also contain remarks on the obtained systems of differential equations.

The Hamiltonian system in the form of a second-order equation, associated with (1.0.23), has the form

\[ \ddot x_n(t) = e^{x_{n-1}(t) - x_n(t)} - e^{x_n(t) - x_{n+1}(t)}, \qquad n \in \mathbb{Z},\ t \in [0, T). \tag{1.0.27} \]

The relationship between (1.0.27) and (1.0.23) is given by the Flaschka transformation of variables

\[ \alpha_n(t) = e^{\frac{1}{2}(x_n(t) - x_{n+1}(t))}, \qquad \beta_n(t) = -\dot x_n(t). \]
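That this transformation intertwines (1.0.27) with (1.0.23) can be spot-checked numerically at a single site: substitute $\ddot x_n$ from (1.0.27) and compare with the right-hand sides of (1.0.23). The sample values below are arbitrary and purely illustrative:

```python
import math

# Arbitrary sample data at one site n (illustrative only).
x_nm1, x_n, x_np1 = 0.4, -0.2, 1.1        # x_{n-1}, x_n, x_{n+1}
dx_n, dx_np1 = 0.7, -0.3                  # time derivatives of x_n, x_{n+1}

# Flaschka variables.
a_nm1 = math.exp(0.5 * (x_nm1 - x_n))     # alpha_{n-1}
a_n = math.exp(0.5 * (x_n - x_np1))       # alpha_n
b_n, b_np1 = -dx_n, -dx_np1               # beta_n, beta_{n+1}

# (1.0.27) gives the acceleration at site n:
ddx_n = math.exp(x_nm1 - x_n) - math.exp(x_n - x_np1)

# beta-equation of (1.0.23): beta'_n = -ddx_n = alpha_n^2 - alpha_{n-1}^2.
print(abs(-ddx_n - (a_n**2 - a_nm1**2)))          # ~ 0 (rounding only)

# alpha-equation: alpha'_n = 1/2 alpha_n (dx_n - dx_{n+1})
#               = 1/2 alpha_n (beta_{n+1} - beta_n), by the chain rule.
da_n = 0.5 * a_n * (dx_n - dx_np1)
print(abs(da_n - 0.5 * a_n * (b_np1 - b_n)))      # ~ 0
```

The alpha-equation holds identically in the new variables, while the beta-equation is precisely the content of (1.0.27).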

The Cauchy problem for (1.0.27) with the given initial data $x_n(0)$, $\dot x_n(0)$, $n \in \mathbb{Z}$, turns, upon the Flaschka transformation, into our Cauchy problem for (1.0.23), and Theorem 9.5.1 provides an opportunity for its investigation.

Let us pass to a brief general overview of what is presented in the book. The first chapter, in addition to the foundations of the theory of unbounded self-adjoint and normal operators, contains new aspects of the self-adjointness criteria. Namely: the spectral expansion of an unbounded self-adjoint operator, of a normal operator, and of a family of commuting operators in general; elements of the theory of extensions of a symmetric densely defined operator to a self-adjoint one (without leaving the space); a number of quasi-analytic self-adjointness criteria; and the theory of generalized eigenvector expansions (in simple cases). The material of the chapter is mostly taken from [47, 48], Chapters 12–15.

The second chapter, already discussed above, is a summary of Chapter 7 from [21]. It concerns the connections of the classical moment problem with the corresponding Jacobi matrix, the operator corresponding to it and its self-adjointness, the polynomials of the first and second kind, and the Weyl function. The third chapter is a generalization of the second one to the case of the strong moment problem. The fourth chapter generalizes the second one to the case of the complex power moment problem. The material of the fifth chapter can be considered a particular case of the fourth, since it is devoted to the trigonometric moment problem as a particular case of the complex power moment problem (which was discussed above). However, the corresponding block matrices of Jacobi type have a significant difference.


In the sixth chapter, a version of the moment problem is considered which generalizes the complex moment problem: this chapter deals with the complex moment problem in the exponential form. If, in the power form, the complex moment problem corresponds to one block matrix of a normal operator (its adjoint is automatically considered as well), then in the case of this chapter two block matrices appear: one of them generates a positive self-adjoint (unbounded) operator, and the second generates a unitary one; the operators corresponding to these matrices commute. The seventh chapter is also a generalization of the second one: in it, the real power two-dimensional moment problem and the corresponding Jacobi block matrices are considered. The last, eighth chapter is devoted to the integration of the classical Toda chain and its generalization generated by the study of block matrices.

Chapter 2

Some Aspects of the Spectral Theory of Unbounded Operators

We hope that the reader is well acquainted with basic courses in mathematical and functional analysis, in particular with the theory of linear bounded operators in Hilbert spaces. Therefore, in this chapter we recall only the basic concepts regarding unbounded self-adjoint operators, without which reading the book would be difficult.

2.1 Preliminary Information About Unbounded Operators

Let $\mathcal{H}$ denote a Hilbert space. Consider a linear set $D(A) \subseteq \mathcal{H}$ and define on it a linear mapping $D(A) \ni f \mapsto Af \in \mathcal{H}$ (namely, $A(\lambda f + \mu g) = \lambda Af + \mu Ag$ for all $f, g \in D(A)$ and, in general, complex numbers $\lambda, \mu \in \mathbb{C}$). Such a mapping is called a linear (generally, possibly unbounded) operator with the domain $D(A)$ (sometimes we use the designation $\mathrm{dom}(A)$). Two linear operators $A$ and $B$ are equal if their domains and actions coincide, i.e., $A = B$ if $D(A) = D(B)$ and $Af = Bf$, $f \in D(A)$. In general, the domain is not dense in $\mathcal{H}$; its density or non-density will be specified each time. Let $A$ and $B$ be two operators such that $D(A) \subseteq D(B)$ and $Af = Bf$, $f \in D(A)$. In this case, the operator $B$ is called an extension of the operator $A$ and, accordingly, $A$ is called the restriction of the operator $B$; we write $A \subseteq B$ and $B \restriction D(A) = A$. By $I$ we denote the unit (identity) operator in $\mathcal{H}$, i.e., $If = f$, $f \in \mathcal{H}$.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_2


When defining operations on linear operators, it is necessary to keep track of the domains of the corresponding operators. Therefore, if $A$ and $B$ are two operators acting in $\mathcal{H}$ and $\lambda \in \mathbb{C}$, then it is assumed that

(i) $(\lambda A)f = \lambda(Af)$, $f \in D(\lambda A) = D(A)$;
(ii) $(A + B)f = Af + Bf$, $f \in D(A + B) = D(A) \cap D(B)$;
(iii) $(AB)f = A(Bf)$, $f \in D(AB) = \{f \in D(B) \mid Bf \in D(A)\}$.

The sets $D(A + B)$ and $D(AB)$ may be non-dense in $\mathcal{H}$ even when $D(A)$ and $D(B)$ are dense; in the corresponding problems we will additionally require the density of $D(A + B)$ and $D(AB)$.

The range of the operator $A$ is the linear set of vectors $Af$, where $f \in D(A)$; it is denoted by $R(A)$ (or sometimes $\mathrm{ran}(A)$). If $A$ defines a one-to-one correspondence between $D(A)$ and $R(A)$, then there exists an inverse operator $A^{-1}$ defined by the expression $A^{-1}(Af) = f$ $(f \in D(A))$. Therefore, $D(A^{-1}) = R(A)$ and $R(A^{-1}) = D(A)$. The criterion for the existence of a one-to-one correspondence is that the kernel of the operator $A$ be trivial, i.e., $\mathrm{Ker}\,A = \{f \in D(A) \mid Af = 0\} = \{0\}$. The operator just defined is called the algebraic inverse, since by the inverse operator proper we mean $A^{-1}$ with $D(A^{-1}) = R(A) = \mathcal{H}$ and $A^{-1}$ bounded.

Graphs of Operators The graph of an operator $A$ in $\mathcal{H}$ is introduced in the usual way. Consider the orthogonal sum $\mathcal{H} \oplus \mathcal{H}$ of pairs $\langle f, g \rangle$, where $f, g \in \mathcal{H}$. Linear operations for such pairs are defined coordinate-wise, and the scalar product is given by the equality

\[ (\langle f_1, g_1 \rangle, \langle f_2, g_2 \rangle)_{\mathcal{H} \oplus \mathcal{H}} = (f_1, f_2)_{\mathcal{H}} + (g_1, g_2)_{\mathcal{H}}, \qquad f_1, f_2, g_1, g_2 \in \mathcal{H}. \tag{2.1.1} \]

The set

\[ \Gamma_A = \big\{ \langle f, Af \rangle \in \mathcal{H} \oplus \mathcal{H} \mid f \in D(A) \big\} \tag{2.1.2} \]

is called the graph $\Gamma_A$ of the operator $A$. To simplify the study of operators by means of their graphs, it is convenient to introduce the following two additional isometric operators acting in $\mathcal{H} \oplus \mathcal{H}$:

\[ \mathcal{H} \oplus \mathcal{H} \ni \langle f, g \rangle \mapsto U \langle f, g \rangle = \langle g, f \rangle \in \mathcal{H} \oplus \mathcal{H}, \qquad \mathcal{H} \oplus \mathcal{H} \ni \langle f, g \rangle \mapsto O \langle f, g \rangle = \langle -g, f \rangle \in \mathcal{H} \oplus \mathcal{H}. \tag{2.1.3} \]
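In a finite-dimensional model $\mathcal{H} = \mathbb{C}^3$, the operators of (2.1.3) become block matrices, and their algebraic properties can be verified directly; a small sketch (the dimension 3 is an arbitrary illustrative choice):

```python
import numpy as np

n = 3                                  # dim H, arbitrary for illustration
I, Z = np.eye(n), np.zeros((n, n))

# Block matrices of (2.1.3) acting on H + H, with <f, g> ~ [f; g]:
U = np.block([[Z, I], [I, Z]])         # U<f, g> = <g, f>
O = np.block([[Z, -I], [I, Z]])        # O<f, g> = <-g, f>

I2 = np.eye(2 * n)
print(np.allclose(U @ U, I2))          # U^2 = I
print(np.allclose(O @ O, -I2))         # O^2 = -I
print(np.allclose(O @ U, -U @ O))      # OU = -UO
# Both are unitary (real orthogonal here):
print(np.allclose(U.T @ U, I2), np.allclose(O.T @ O, I2))
```

All four checks print True, which is the finite-dimensional face of the identities used below.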

By virtue of (2.1.3), $R(U) = \mathcal{H} \oplus \mathcal{H}$ and $R(O) = \mathcal{H} \oplus \mathcal{H}$; therefore, $U$ and $O$ are unitary. The definition (2.1.3) immediately implies that

\[ U^2 = I, \qquad O^2 = -I, \qquad OU = -UO. \tag{2.1.4} \]

The following general theorem holds true for the operator graph.


Theorem 2.1.1 Let A be a linear operator, generally with a non-densely domain. In order that the algebraically inverse operator .A−1 exists, it is necessary and sufficient that a set .U A be the graph of a certain operator. Furthermore, A−1 = U A .

(2.1.5)

.

Closed Operators We recall the three equivalent definitions of a closed operator A acting in .H. (1) An operator A is closed if its graph .A is closed in .H ⊕ H. (2) An operator A is closed if, for any sequence of vectors .(fn )∞ n=1 ⊆ D(A), the fact that .fn → f ∈ H and .Afn → g ∈ H as .n → ∞ implies .f ∈ D(A) and .Af = g. (3) In the domain .D(A) of an operator A, we introduce a graph scalar product (f, g)A = (f, g)H + (Af, Ag)H ,

.

f, g ∈ D(A).

(2.1.6)

The operator A is closed if .D(A) is a complete space with respect to the graph scalar product. Note that the norm corresponding to (2.1.6) is called the norm of the graph. For this norm, we have f 2A = f 2H + Af 2H ,

.

f ∈ D(A).

(2.1.7)

Theorem 2.1.2 Three given above definitions of closed operators are equivalent. Closable Operators If an operator A acting in .H is not closed, at first sight, it is always possible to construct its closure, i.e., to extend A to a closed operator .A˜ by adding to its domain .D(A) all vectors .f ∈ H for which one can find a sequence ∞ ⊆ D(A) such that .f → f as .n → ∞ and there exists . lim Af = g. On .(fn ) n n n=1 n→∞

˜ = g. However, this procedure is, generally the added vectors f , we naturally set .Af ˜ speaking, incorrect: .Af may depend on a sequence .(fn )∞ n=1 that approximates f . (1) An operator A admits a closure .A˜ (or is closable), if the outlined above procedure is correct, namely, for all .f ∈ H that admits a required approximating sequence .(fn )∞ n=1 . (2) An operator A is closable, if the fact .

lim fn = 0 and

n→∞

lim Afn = h ∈ H

n→∞

for a sequence .(fn )∞ n=1 ⊂ D(A) implies .h = 0. (3) An operator A is closable, if the closure .˜ A of its graph is the graph of some operator.

20

2 Some Aspects of the Spectral Theory of Unbounded Operators

Theorem 2.1.3 The three definitions of closable operators given above are equivalent.

The Adjoint Operator Let A be an operator densely defined on D(A) in H. Consider a vector g ∈ H for which one can find a vector g* ∈ H such that

(Af, g)_H = (f, g*)_H,  ∀f ∈ D(A).  (2.1.8)

From the linearity of the scalar product it follows that the set of all such vectors g (we denote it by D(A*)) forms a linear set and

(λg + μh)* = λg* + μh*,  g, h ∈ D(A*), λ, μ ∈ C.

The vector g* is uniquely defined for a given g. Indeed, suppose that, along with (2.1.8), there exists another representation (Af, g)_H = (f, g**)_H, f ∈ D(A). Then (f, g* − g**)_H = 0 for all f ∈ D(A), and we get g* = g** since D(A) is dense in H. For g ∈ D(A*) we put A*g = g*. The operator defined in this way is linear and is called the adjoint of A. In this case we can write

(Af, g)_H = (f, A*g)_H,  f ∈ D(A), g ∈ D(A*),

and, moreover, D(A*) consists of all g for which (2.1.8) holds true (in short form: g ∈ D(A*) ⟺ the functional D(A) ∋ f ↦ l_g(f) = (Af, g)_H ∈ C is continuous). The main properties of the adjoint operator are collected in the following theorem.

Theorem 2.1.4 Let A be a densely defined operator in the space H and let A* be its adjoint. Then

(1) the operator A* is closed;
(2) if A admits a closure, then (Ã)* = A* (the operation of closure does not affect the adjoint operator);
(3) if (R(A))~ = H and the operator A^{-1} algebraically inverse to A exists, then the operator (A*)^{-1} exists and

(A^{-1})* = (A*)^{-1};  (2.1.9)

(4) if B is another densely defined operator in H with adjoint B*, then

B ⊇ A ⇒ A* ⊇ B*;  (2.1.10)

(5) if A and B are operators as in the previous items such that the set D(A + B) is dense in H, then

(A + B)* ⊇ A* + B*;  (2.1.11)

2.1 Preliminary Information About Unbounded Operators

21

(6) if A and B are operators as in the previous items such that D(BA) is dense in H, then

(BA)* ⊇ A*B*.  (2.1.12)

Note that if B is a bounded operator, then (2.1.11) and (2.1.12) are equalities, i.e., (A + B)* = A* + B* and (BA)* = A*B*.

Theorem 2.1.5 (On the Second Adjoint Operator) Let A be an operator densely defined in H. Assume that A admits a closure. Then the second adjoint operator (A*)* exists and satisfies the equality

(A*)* = Ã.  (2.1.13)

Conversely, assume that A has a dense domain in H and the operator (A*)* exists; then A admits a closure and (2.1.13) holds true.

The following theorem is also often useful.

Theorem 2.1.6 (Banach Closed Graph Theorem) Let A be a closed operator defined on the whole Hilbert space H, i.e., D(A) = H. Then A is bounded.

Deficient Subspaces of Operators Consider an operator A acting in a Hilbert space H with a domain D(A) (which may be not dense in H; possibly D(A) = H, i.e., A is bounded). A point z ∈ C is called a point of regular type for the operator A if there exists a number c_z > 0 such that

‖(A − zI)f‖_H ≥ c_z ‖f‖_H,  f ∈ D(A).  (2.1.14)

Denote (A − zI)f = g. Then (2.1.14) means that the inverse operator

R(A − zI) ∋ g ↦ (A − zI)^{-1} g ∈ D(A) ⊆ H

exists and is continuous. A point z of regular type is regular if R(A − zI) = H. Thus, the notion of a point of regular type generalizes the notion of a regular point.

Let us note a number of properties of points of regular type.

(1) The set of points of regular type of a given operator A is open.
(2) Let A be a closed operator in H and let z ∈ C be a point of regular type; then R(A − zI) is a subspace (i.e., the set R(A − zI) is closed). Conversely, if z is a point of regular type and R(A − zI) is a subspace in H, then A is closed.
(3) Assume that an operator A admits a closure Ã. Then each point z of regular type of the operator A is also a point of regular type of the operator Ã. Furthermore,

R(Ã − zI) = (R(A − zI))~,  (2.1.15)

where "~" denotes the closure.


Let z ∈ C be a point of regular type of the operator A. The subspace

N_z = H ⊖ (R(A − zI)) = (R(A − zI))^⊥

is called the deficient subspace of the operator A corresponding to z. Hence we can write the decomposition

H = (R(A − zI))~ ⊕ N_z.  (2.1.16)

If the operator A admits a closure or is closed, then, according to (3) and (2), the decomposition (2.1.16) can be rewritten in the form

H = R(Ã − zI) ⊕ N_z  or  H = R(A − zI) ⊕ N_z,  (2.1.17)

respectively.

We can describe deficient subspaces in a different way. We call ϕ an eigenvector of the operator B with a domain D(B) if 0 ≠ ϕ ∈ D(B) and Bϕ = λϕ for some λ ∈ C, which is called the eigenvalue corresponding to the eigenvector ϕ. The set Δ(λ), which consists of 0 and all eigenvectors corresponding to the same eigenvalue λ, is linear. It is clear that if B is closed, then the set Δ(λ) is also closed; it is called the eigensubspace corresponding to λ.

(4) Assume that the domain of A is dense in H and, therefore, A* exists. Then the deficient subspace N_z coincides with the eigensubspace of the operator A* corresponding to the eigenvalue z̄.

Defect Numbers Consider an operator A whose domain D(A) may be not dense in H, and let z be a point of regular type for A. Obviously, the deficient subspaces N_z corresponding to different z are different. A remarkable property of these subspaces is that the dimension dim N_z is invariant under changes of z (dim N_z is a natural number if N_z is finite-dimensional, or ∞). The theorem below describes this more precisely.

Theorem 2.1.7 (Krasnoselsky-Krein) Let A be a closed operator in a Hilbert space H. Then n_z = dim N_z is invariant under changes of z within a connected component of the set of points of regular type of the operator A.

Thus, every component G of this sort can be associated with a fixed number n_z, where z ∈ G. This number is called a defect number of the operator A (in the component G).

Hermitian and Self-Adjoint Operators Recall that a bounded operator A in the Hilbert space H is called self-adjoint if

(Af, g)_H = (f, Ag)_H,  f, g ∈ H,

namely A* = A. The situation is more complicated in the case of an unbounded operator. An operator A, densely defined in the Hilbert space H, is called Hermitian if

(Af, g)_H = (f, Ag)_H,  f, g ∈ D(A).  (2.1.18)

An operator A, densely defined in H, is called self-adjoint if

A* = A.  (2.1.19)

Let A be Hermitian, i.e., (2.1.18) holds true. This equality means that g belongs to D(A*) and A*g = Ag, i.e., A ⊆ A*. Thus, for an operator A with a dense domain, being Hermitian is equivalent to the inclusion

A ⊆ A*,  (2.1.20)

and its self-adjointness is equivalent to the equality (2.1.19).

A Hermitian operator A always admits a closure. This follows from the inclusion (2.1.20) and from the fact that A* is closed. The closure Ã is also a Hermitian operator. An operator A is called essentially self-adjoint if its closure Ã is self-adjoint.

Lemma 2.1.8 Each point z ∈ C \ R is a point of regular type for an arbitrary Hermitian operator.

Criterion of Self-Adjointness The following theorems are used to establish the self-adjointness of an operator.

Theorem 2.1.9 A closed Hermitian operator is self-adjoint if and only if its defect numbers are equal to zero, i.e., m = n = 0. In other words, a Hermitian operator A is self-adjoint if the equalities

R(A − z_1 I) = H  and  R(A − z_2 I) = H  (2.1.21)

hold true for some z_1, z_2 ∈ C with Im z_1 > 0 and Im z_2 < 0. Conversely, if A is a self-adjoint operator, then the equalities (2.1.21) hold true for all z_1 and z_2 of the indicated type.

The pair of numbers (m, n) is called the deficiency indices of the corresponding operator.

Corollary 2.1.10 Let A be a Hermitian operator (in general, non-closed). This operator is essentially self-adjoint provided that its defect numbers are equal to zero.

Semi-Bounded Operators Assume that a Hermitian operator A is such that at least one of its points of regular type lies on the real axis. Then the set of all points of regular type of the operator A is connected and, therefore, its defect numbers coincide, i.e., m = n. Semi-bounded operators constitute an important class of Hermitian operators with this property.

Let A be an operator with a dense domain D(A) in H. Assume that there exists a number α ∈ R such that

(Af, f)_H ≥ α‖f‖²_H,  f ∈ D(A).  (2.1.22)

The operator A is called semi-bounded below, and the number α is called its vertex. Operators semi-bounded above are defined similarly; if an operator A is semi-bounded below, then (−A) is semi-bounded above. A semi-bounded below operator with vertex α = 0 is called non-negative.

If A is semi-bounded and admits a closure Ã, then it is evident that Ã is also semi-bounded with the same vertex.

Lemma 2.1.11 Let A be a semi-bounded operator with a vertex α ∈ R. Any z ∈ R \ [α, +∞) is a point of regular type for this operator.

Theorem 2.1.12 Let A be a closed semi-bounded operator with a vertex α ∈ R. Its defect numbers are equal to each other. If

R(A − zI) = H  (2.1.23)

for some z ∈ C \ [α, +∞), then the operator is self-adjoint.

Now let A = A*. Recall that a point z ∈ C is called a regular point of the operator A if the inverse operator R_z = (A − zI)^{-1} exists, is bounded, and is defined on the whole of H (this operator is called the resolvent of A). Thus, the point z is regular if it is of regular type and R(A − zI) = H. As in the case of bounded operators, one can easily prove that the set of regular points is open in C and that the Hilbert identity

R_z − R_ζ = (z − ζ)R_z R_ζ  (2.1.24)

holds true for any two regular points z and ζ. The spectrum S(A) of a self-adjoint operator A is defined as the complement of the set of its regular points in C. Thus, the spectrum S(A) is a closed, generally speaking unbounded, set on the real axis. If, in addition, the operator A is semi-bounded with a vertex α, then S(A) ⊆ [α, +∞) (see Lemmas 2.1.8 and 2.1.11).


Isometric and Unitary Operators In general, an operator U acting from a subspace D(U) ⊆ H_1 to a subspace R(U) ⊆ H_2 is called isometric if

(Uf, Ug)_{H_2} = (f, g)_{H_1},  f, g ∈ D(U).  (2.1.25)

Since, as before, all operators considered below act in a single Hilbert space H, we say that an operator U acting from a subspace D(U) ⊆ H to a subspace R(U) ⊆ H is isometric if

(Uf, Ug)_H = (f, g)_H,  f, g ∈ D(U).  (2.1.26)

This operator is called unitary if, in addition, D(U) = R(U) = H. Note that an isometric operator U is necessarily continuous. Therefore, it is always assumed that D(U) and R(U) are subspaces and the operator U is closed. Let us describe a set containing only points of regular type for an isometric operator.

Lemma 2.1.13 Every z ∈ C with |z| ≠ 1 is a point of regular type for an isometric operator.

Thus, the set of points of regular type of an isometric operator has two connected components, {z ∈ C | |z| > 1} and {z ∈ C | |z| < 1}. According to Theorem 2.1.7, their defect numbers are m and n, respectively, and (m, n) is the deficiency index of the operator U. The theorem below is similar to Theorem 2.1.9.

Theorem 2.1.14 An isometric operator U in the Hilbert space H is unitary iff its defect numbers are equal to zero, i.e., m = n = 0.

Cayley Transforms Let H be a Hilbert space and let A be a closed Hermitian operator in H. The domain D(A) of A may be not dense in H. We fix z ∈ C such that Im z > 0. Consider a vector g ∈ R(A − zI) of the form g = (A − zI)f, where f ∈ D(A). We construct the mapping g ↦ (A − z̄I)f = Ug. This mapping is well defined since f is uniquely determined by g. It is also clear that U is a linear operator with domain R(A − zI) and range R(A − z̄I). Hence, we have

g = (A − zI)f,  Ug = (A − z̄I)f,  f ∈ D(A);  (2.1.27)

D(U) = R(A − zI)  and  R(U) = R(A − z̄I).  (2.1.28)

It is clear that (2.1.27) can be written in the more concise form

Ug = (A − z̄I)(A − zI)^{-1} g.  (2.1.29)
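The construction (2.1.29) and its inverse (2.1.36) below can be checked numerically in the finite-dimensional case, where every Hermitian matrix is self-adjoint. The following is a minimal sketch; the matrix A and the point z = i are illustrative choices, not taken from the text.

```python
import numpy as np

# Cayley transform (2.1.29): for a Hermitian matrix A and Im z > 0,
# U = (A - conj(z) I)(A - z I)^{-1} is unitary, and the inverse Cayley
# transform A = (z U - conj(z) I)(U - I)^{-1} recovers A.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2            # Hermitian, hence self-adjoint
z = 1j                               # a point with Im z > 0
I = np.eye(4)

U = (A - np.conj(z) * I) @ np.linalg.inv(A - z * I)

# U*U = I, i.e., U is unitary (defect numbers are zero here)
assert np.allclose(U.conj().T @ U, I)

# The inverse Cayley transform recovers A
A_back = (z * U - np.conj(z) * I) @ np.linalg.inv(U - I)
assert np.allclose(A_back, A)
```

In infinite dimensions U − I need not be invertible on all of H, which is exactly why condition (2.1.34) below is required; for matrices, U − I = (z − z̄)(A − zI)^{-1} is always invertible.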


The operator U is called the Cayley transform of the operator A. It has the following simple properties:

(1) The Cayley transform of a closed Hermitian operator is an isometric operator.
(2) Let m(A), n(A) and m(U), n(U) be the defect numbers of the operators A and U, respectively. Then

m(A) = m(U)  and  n(A) = n(U).  (2.1.30)

(3) The Cayley transform of a self-adjoint operator is a unitary operator.
(4) Let B ⊇ A be a closed Hermitian extension of an operator A. Then its Cayley transform V is an isometric extension of the operator U.

Inverse Cayley Transform To write the inverse Cayley transform, let us express the operator A in terms of U using (2.1.27). Subtracting the first equality in (2.1.27) from the second one, we get

(U − I)g = (z − z̄)f,  f ∈ D(A).  (2.1.31)

Further, multiplying the second equality in (2.1.27) by z and the first one by z̄ and subtracting again, we obtain

(zU − z̄I)g = (z − z̄)Af,  f ∈ D(A).  (2.1.32)

Equalities (2.1.31) and (2.1.32) can be rewritten in the form

f = (1/(z − z̄))(U − I)g,  Af = (1/(z − z̄))(zU − z̄I)g.  (2.1.33)

Assume now that an isometric operator U acting in H is given. Then, for vectors f of the form f = (z − z̄)^{-1}(U − I)g, where g ∈ D(U), we define an operator A by setting Af = (z − z̄)^{-1}(zU − z̄I)g. This definition is correct and yields a linear operator A if

Ker(U − I) = {0}.  (2.1.34)

Suppose that condition (2.1.34) holds true. The operator A constructed in this way is called the inverse Cayley transform of the operator U; here

D(A) = R(U − I),  R(A) = R(zU − z̄I).  (2.1.35)


Formally, the operator A can be expressed in terms of U:

Af = (zU − z̄I)(U − I)^{-1} f.  (2.1.36)

Let us recall some properties of the inverse Cayley transform.

(5) The inverse Cayley transform of an isometric operator is a closed Hermitian operator.
(6) The defect numbers of the operators U and A satisfy the equalities m(A) = m(U), n(A) = n(U).
(7) The inverse Cayley transform of a unitary operator is a self-adjoint operator, provided that, additionally, D(A) = R(U − I) is dense in H.

Lemma 2.1.15 If R(U − I) is dense in H, then Ker(U − I) = {0}.

It is now convenient to reformulate property (4) in the following form.

(8) Let V ⊇ U be an isometric extension of an isometric operator U such that R(U − I) is dense in H. Then the operator V possesses an inverse Cayley transform B, which is a closed Hermitian extension of the closed Hermitian operator A.
(9) Let us construct the inverse Cayley transform of an operator U which is, in turn, the Cayley transform of a given operator A. As a result, we obtain A, i.e., A → U → A. Similarly, U → A → U.

2.2 Extensions of Hermitian Operators to Self-Adjoint Operators

Let us recall the main statements of the theory of the extension of a densely defined symmetric (Hermitian) operator to a self-adjoint one without leaving the space. Below, we assume that the defect numbers m, n of an operator acting in a Hilbert space H take the values 0, 1, . . . , or ∞. The latter is indeed possible since H is separable.

Theorem 2.2.1 Let U be an isometric operator in H with a domain D(U), a range R(U), and the defect numbers m = dim(H ⊖ D(U)) > 0 and n = dim(H ⊖ R(U)) > 0. Fix a number k ≤ min(m, n), choose two k-dimensional subspaces F ⊆ H ⊖ D(U), G ⊆ H ⊖ R(U), and construct an isometric operator W acting from the whole of F onto the whole of G, i.e., D(W) = F and R(W) = G. Then the orthogonal sum

V = U ⊕ W,  D(V) = D(U) ⊕ D(W),  R(V) = R(U) ⊕ R(W)


is an isometric extension of the operator U. All possible isometric extensions of this operator can be obtained by using the same procedure for all possible k, F, G, and W.

The last theorem has simple but important corollaries.

Corollary 2.2.2 If at least one defect number (m or n) of an isometric operator U is equal to zero, then this operator has no non-trivial isometric extensions.

Corollary 2.2.3 In order that the operator U admit unitary extensions, it is necessary and sufficient that its defect numbers be equal to each other, that is, m = n. To construct a unitary extension, one must set F = H ⊖ D(U) and G = H ⊖ R(U), and take an isometric operator W with D(W) = F and R(W) = G.

If one of the defect numbers m or n is zero and the other is positive, then U is called a maximal operator.

We now consider the case of Hermitian operators. Let A be a closed Hermitian operator with a dense domain and let B ⊇ A be its closed Hermitian extension. Since B is Hermitian, B ⊆ B*, and since A ⊆ B, we have B* ⊆ A* by (2.1.10); hence we get the chain

A ⊆ B ⊆ B* ⊆ A*.  (2.2.1)

If B is a self-adjoint operator, then the relation (2.2.1) turns into

A ⊆ B ⊆ A*.  (2.2.2)

Let us clarify the conditions under which, for a given operator A, one can construct an operator B satisfying (2.2.1) or (2.2.2), and describe the set of such B.

Theorem 2.2.4 Let A be a closed Hermitian operator with a dense domain in the separable Hilbert space H and defect numbers m, n. In order that A admit a non-trivial closed Hermitian extension B ⊇ A, it is necessary and sufficient that its defect numbers m and n be positive. In order that A admit a self-adjoint extension B = B* ⊇ A, it is necessary and sufficient that its defect numbers m and n be equal to each other, i.e., m = n.

These extensions are constructed as follows. Fix a point z ∈ C such that Im z > 0. In the defect subspaces N_z = H ⊖ R(A − zI) and N_z̄ = H ⊖ R(A − z̄I), whose dimensions are m > 0 and n > 0, respectively, we choose subspaces F ⊆ N_z and G ⊆ N_z̄ of the same dimension and construct an isometric operator W that maps the whole of F = D(W) onto the whole of G = R(W). Let U be the Cayley transform of the operator A, D(U) = R(A − zI), and R(U) = R(A − z̄I). Consider the isometric operator

V = U ⊕ W,  D(V) = D(U) ⊕ D(W),  R(V) = R(U) ⊕ R(W).


The inverse Cayley transform B of the operator V is a closed Hermitian extension of the operator A. By taking all possible F, G, and W (for a fixed z), we obtain the set of all closed Hermitian extensions B of the operator A.

If m = n, then, in particular, we can set F = N_z and G = N_z̄. In this case, the operator B ⊇ A is a self-adjoint extension of A. By taking all possible W, we obtain the set of all self-adjoint extensions B of the operator A.

Von Neumann Formulas The following theory is due to J. von Neumann.

(1) Consider a formula that describes the action of the operator A*. First, it is necessary to recall that a linear set L ⊆ H is called the direct sum of linear sets L_1, . . . , L_n ⊆ H if, for any f ∈ L, one can write a representation f = f_1 + · · · + f_n, where f_j ∈ L_j, and this representation is unique (in other words, 0 = f_1 + · · · + f_n ⇒ f_1 = · · · = f_n = 0). We denote this direct sum by

L = L_1 ∔ L_2 ∔ · · · ∔ L_n.  (2.2.3)

Let A be a closed Hermitian operator with a dense domain in H and let z ∈ C \ R be fixed. Then

D(A*) = D(A) ∔ N_z ∔ N_z̄.  (2.2.4)

Thus, according to (2.2.4), any g ∈ D(A*) admits a unique decomposition

g = f + h_z + h_z̄,  (2.2.5)

f = f(g) ∈ D(A),  h_z = h_z(g) ∈ N_z,  h_z̄ = h_z̄(g) ∈ N_z̄.

The result of the action of the operator A* upon the vector (2.2.5) can now be determined in a very simple way. Indeed, since A* ⊇ A, it follows from (4) that

A*g = Af + z̄h_z + zh_z̄.  (2.2.6)

(2) Let us describe the action of a closed Hermitian extension B of an operator A by using the decomposition (2.2.4). Fix z ∈ C such that Im z > 0. Let W be the operator associated with the extension B according to Theorem 2.2.4, D(W) = F ⊆ N_z, and R(W) = G ⊆ N_z̄. The set D(B) admits the decomposition

D(B) = D(A) ∔ (W − I)F,  (2.2.7)

i.e., for all g ∈ D(B) ⊆ D(A*), the decomposition (2.2.5) takes the form

g = f − h_z + Wh_z,  f ∈ D(A), h_z ∈ F ⊆ N_z, Wh_z ∈ WF ⊆ N_z̄.  (2.2.8)


Since B ⊆ A*, the action of B upon the vector g is defined by (2.2.6), namely,

Bg = A*g = Af − z̄h_z + zWh_z.  (2.2.9)

Let us recall another important theorem of von Neumann.

Theorem 2.2.5 Let A be a closed densely defined operator in H. Then the operator A*A is self-adjoint and non-negative.
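In the finite-dimensional (hence bounded) case, the statement of Theorem 2.2.5 is elementary and easy to verify numerically. The following sketch uses an arbitrary illustrative matrix A.

```python
import numpy as np

# Theorem 2.2.5 in finite dimensions: for any matrix A, the operator A*A
# is self-adjoint (Hermitian) and non-negative (its spectrum is >= 0).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

S = A.conj().T @ A                   # the operator A*A

assert np.allclose(S, S.conj().T)    # self-adjoint: S* = S
eigvals = np.linalg.eigvalsh(S)
assert np.all(eigvals >= -1e-12)     # non-negative spectrum (up to roundoff)
```

The substance of the theorem, of course, lies in the unbounded case, where the self-adjointness of A*A on its natural domain is non-trivial.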

2.3 The Spectral Decomposition of Bounded Operators

We begin by describing the resolution of the identity for a bounded operator, in order to use it later to obtain the resolution of the identity of an unbounded one. The resolution of the identity is a projector-valued measure. Before constructing the spectral integral, we give a number of statements.

Lemma 2.3.1 Let P_{G_1} and P_{G_2} be projectors onto subspaces G_1, G_2 ⊆ H, respectively. The sum P = P_{G_1} + P_{G_2} is a projector if and only if G_1 ⊥ G_2, i.e., P_{G_1}P_{G_2} = 0 (or, equivalently, P_{G_2}P_{G_1} = 0). In this case, P is the projector onto G_1 ⊕ G_2: P = P_{G_1 ⊕ G_2}.

We now proceed to the definition of a general resolution of the identity. Let R be an abstract space and let R be a σ-algebra of its subsets; in other words, we are given a measurable space (R, R). A Hilbert space H is also given. An operator-valued function R ∋ α ↦ E(α) ∈ L(H) is called a resolution of the identity (on R) provided that:

(a) for any α ∈ R, E(α) is a projector in H; E(∅) = 0 and E(R) = I;
(b) the function E is countably additive, i.e., for an arbitrary sequence (α_j)_{j=1}^∞ of disjoint sets from R, the equality

E(⋃_{j=1}^∞ α_j) = Σ_{j=1}^∞ E(α_j)  (2.3.1)

holds true, where the series converges in the sense of the strong convergence of operators (i.e., on every vector f ∈ H with respect to the norm of H).

Theorem 2.3.2 A resolution of the identity possesses the property of orthogonality, namely,

E(α)E(β) = E(α ∩ β),  α, β ∈ R.  (2.3.2)

Corollary 2.3.3 It follows from (2.3.2) that the operators E(α), α ∈ R, commute.
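A finite-dimensional model makes (a), (b), and the orthogonality property (2.3.2) concrete: for a real symmetric matrix, E(α) is the sum of the spectral projectors whose eigenvalues lie in α. The matrix and the sets below are illustrative choices.

```python
import numpy as np

# A resolution of the identity for a symmetric matrix A: for eigenvalues
# lam_i with orthonormal eigenvectors v_i, set
#   E(alpha) = sum of v_i v_i^T over lam_i in alpha.
A = np.diag([1.0, 2.0, 2.0, 5.0])    # illustrative self-adjoint operator
lam, V = np.linalg.eigh(A)

def E(alpha):
    """Spectral projector onto the eigenvalues lying in the set alpha
    (alpha is given as an indicator function on the reals)."""
    P = np.zeros_like(A)
    for i, l in enumerate(lam):
        if alpha(l):
            P += np.outer(V[:, i], V[:, i])
    return P

whole = lambda t: True
a = lambda t: t < 3          # alpha = (-inf, 3)
b = lambda t: t > 1.5        # beta  = (1.5, +inf)
ab = lambda t: a(t) and b(t)

assert np.allclose(E(whole), np.eye(4))     # E(R) = I, property (a)
assert np.allclose(E(a) @ E(b), E(ab))      # E(alpha)E(beta) = E(alpha ∩ beta), (2.3.2)
```

Countable additivity (2.3.1) is immediate here, since the measure is supported on finitely many atoms.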


Let R ∋ α ↦ E(α) be a resolution of the identity and fix f ∈ H. Then the set function

R ∋ α ↦ ρ_{f,f}(α) = (E(α)f, f)_H = ‖E(α)f‖²_H ≥ 0  (2.3.3)

is, clearly, a non-negative finite measure on R (in (2.3.3) we have used the facts that (E(α))² = E(α) and (E(α))* = E(α)). For fixed f, g ∈ H, the set function

R ∋ α ↦ ρ_{f,g}(α) = (E(α)f, g)_H ∈ C  (2.3.4)

is a complex-valued measure (a charge) on R. It is a linear combination of measures of the form (2.3.3) by virtue of the polarization identity. For a scalar measure, the following result holds true.

Theorem 2.3.4 Let (α_n)_{n=1}^∞ and (β_n)_{n=1}^∞ be decreasing and increasing sequences of sets α_n, β_n ∈ R: α_1 ⊇ α_2 ⊇ . . . , β_1 ⊆ β_2 ⊆ . . . . Then

lim_{n→∞} E(α_n) = E(⋂_{n=1}^∞ α_n)  and  lim_{n→∞} E(β_n) = E(⋃_{n=1}^∞ β_n)  (2.3.5)

in the sense of strong convergence in H.

Theorem on Extension We recall one more result about the resolution of the identity. As in the case of a scalar measure, in the construction of a resolution of the identity it is often convenient to define it first on a certain algebra of sets and then apply a theorem about extension. Let R be a space and let R be an algebra of subsets of R. As above, a function R ∋ α ↦ E(α) ∈ L(H) is called a resolution of the identity provided that conditions (a) and (b) are satisfied, with the additional requirement in (b) that ⋃_{j=1}^∞ α_j ∈ R. It is clear that Theorem 2.3.2 remains true in the case where R is an algebra.

Theorem 2.3.5 Let E be a resolution of the identity on the algebra R. Then there exists an extension of E to a resolution of the identity E_σ on the σ-hull R_σ, i.e., with the restriction E_σ ↾ R = E. The resolution of the identity E_σ is uniquely defined for a given E.

The Construction of Spectral Integrals Suppose that the measurable space (R, R) is given and that E is given on R. Elements of R are denoted by λ, μ, . . . . Recall that a function R ∋ λ ↦ F(λ) ∈ C is called a simple function (or a step function) if it is a linear combination of indicators χ_α(λ) of sets α ∈ R. The collection of all simple functions is denoted by S(R, R) = S. The set S is an algebra with respect to ordinary summation and multiplication and contains all constants.


It is clear that every simple function can be represented as a linear combination of indicators of disjoint sets, i.e.,

F(λ) = Σ_{k=1}^n F_k χ_{α_k}(λ),  F_k ∈ C; α_k ∩ α_j = ∅, k ≠ j; λ ∈ R.  (2.3.6)

By definition, we set

∫_R F(λ) dE(λ) = ∫_R (Σ_{k=1}^n F_k χ_{α_k}(λ)) dE(λ) = Σ_{k=1}^n F_k E(α_k) ∈ L(H).  (2.3.7)

It is clear that, according to (2.3.6), one may assume that the sets α_k form a decomposition of the set R: ⋃_{k=1}^n α_k = R. The representation (2.3.6) of a simple function F is obviously not unique, because the sets α_k are not defined uniquely for a given F (they can be subdivided into smaller sets). Nevertheless, the integral (2.3.7) is well defined and does not depend on the representation (2.3.6).

The constructed integral possesses a series of simple properties.

(1) Linearity:

∫_R (aF(λ) + bG(λ)) dE(λ) = a ∫_R F(λ) dE(λ) + b ∫_R G(λ) dE(λ),  a, b ∈ C; F, G ∈ S.  (2.3.8)

(2) The unusual property of multiplicativity of the integral:

∫_R F(λ) dE(λ) · ∫_R G(λ) dE(λ) = ∫_R F(λ)G(λ) dE(λ),  F, G ∈ S,  (2.3.9)

which yields the commutativity of any two such integrals.

(3) For any F ∈ S, we have

(∫_R F(λ) dE(λ))* = ∫_R F̄(λ) dE(λ).  (2.3.10)

(4) For any F ∈ S and arbitrary f, g ∈ H, we have

(∫_R F(λ) dE(λ) f, g)_H = ∫_R F(λ) d(E(λ)f, g)_H,  (2.3.11)

where d(E(λ)f, g)_H denotes integration with respect to the charge (2.3.4).


(5) For any F ∈ S and any f ∈ H, we have

‖∫_R F(λ) dE(λ) f‖²_H = ∫_R |F(λ)|² d(E(λ)f, f)_H.  (2.3.12)

(6) For any F ∈ S, we have

‖∫_R F(λ) dE(λ)‖ ≤ sup {|F(λ)| | λ ∈ R}.  (2.3.13)
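The simple-function integral (2.3.7) and the multiplicativity property (2.3.9) can be sketched in the finite-dimensional model, where the spectral measure of a symmetric matrix is supported on finitely many atoms; the matrix and test functions are illustrative choices.

```python
import numpy as np

# Spectral integral (2.3.7) for the spectral measure of a symmetric
# matrix A: int F dE = sum_k F(lam_k) E({lam_k}).
A = np.diag([1.0, 2.0, 5.0])
lam, V = np.linalg.eigh(A)

def E(alpha):
    """Projector E(alpha) = sum of v_i v_i^T over eigenvalues in alpha."""
    P = np.zeros_like(A)
    for i, l in enumerate(lam):
        if alpha(l):
            P += np.outer(V[:, i], V[:, i])
    return P

def spectral_integral(F):
    # The measure is atomic here, so the integral is a finite sum (2.3.7).
    return sum(F(l) * E(lambda t, l=l: t == l) for l in set(lam))

F = lambda t: t ** 2
G = lambda t: 3.0 * t + 1.0

IF = spectral_integral(F)
IG = spectral_integral(G)
IFG = spectral_integral(lambda t: F(t) * G(t))

assert np.allclose(IF @ IG, IFG)                       # property (2.3.9)
assert np.allclose(spectral_integral(lambda t: t), A)  # int lam dE(lam) = A
```

The last assertion anticipates the spectral representation (2.3.30) below in its simplest, purely atomic form.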

Integrals of Bounded Functions Consider the set L_∞(R, R) = L_∞ of all bounded measurable functions associated with the measurable space (R, R). Just as S, this set is an algebra with respect to the standard algebraic operations. By definition, we set

L(H) ∋ ∫_R F(λ) dE(λ) = lim_{n→∞} ∫_R F_n(λ) dE(λ),  ∀F ∈ L_∞,  (2.3.14)

where (F_n)_{n=1}^∞ ⊂ S is a sequence of simple functions converging to F uniformly and the limit is regarded in the operator norm.

(7) Integrals of bounded measurable functions F, G ∈ L_∞ also possess properties (1)–(6).

Integrals of Unbounded Functions Consider the set L_0 := L_0(R, R, E) of functions of the form R ∋ λ ↦ F(λ) ∈ C ∪ {∞}, measurable with respect to R and almost everywhere finite with respect to the operator-valued measure E. Clearly, the last statement means that

E({λ ∈ R | |F(λ)| = ∞}) = 0.  (2.3.15)

Just as S and L_∞, by virtue of the additivity of the measure E, the set L_0 forms an algebra with the ordinary operations of summation and multiplication of functions (and with the standard formal rules for operations with ∞).

Lemma 2.3.6 Let F ∈ L_0. Then the set

D_F = { f ∈ H | ∫_R |F(λ)|² d(E(λ)f, f)_H < ∞ }  (2.3.16)

is linear and everywhere dense in H.

We now define the corresponding integral. As usual, for F ∈ L_0 and N ≥ 0, we denote by F_N its cutoff function, i.e., the bounded function of the form F_N(λ) = F(λ) for λ ∈ {λ ∈ R | |F(λ)| ≤ N} and F_N(λ) = N for all other λ ∈ R. By definition, for f ∈ D_F, we set

I_F f = ∫_R F(λ) dE(λ) f = lim_{N→∞} ∫_R F_N(λ) dE(λ) f  (2.3.17)

in the sense of convergence in H. The integral I_F of (2.3.17) exists, generally speaking, as an unbounded operator with the dense domain D(I_F) = D_F in H defined in (2.3.16).

Let us describe the properties of the integral I_F. First, note that property (6) becomes meaningless.

(8) The integral of an unbounded function F ∈ L_0 possesses properties (4) and (5) with f ∈ D(I_F) in (2.3.11) and (2.3.12).

Other Properties of the Spectral Integral The generalization of properties (1)–(3) is more complicated.

Theorem 2.3.7 Let F ∈ L_0. Then the operator I_F of the form (2.3.17) with the dense domain D(I_F) = D_F is closed. The equalities

D((I_F)*) = D_F,  (I_F)* = I_F̄

hold true, i.e., property (3) admits the generalization

(∫_R F(λ) dE(λ))* = ∫_R F̄(λ) dE(λ),  F ∈ L_0.  (2.3.18)

Theorem 2.3.8 Let F, G ∈ L_0 and a, b ∈ C. Then the following equalities, which generalize properties (1) and (2), hold true:

∫_R (aF(λ) + bG(λ)) dE(λ) = (a ∫_R F(λ) dE(λ) + b ∫_R G(λ) dE(λ))~,  (2.3.19)

∫_R F(λ)G(λ) dE(λ) = (∫_R F(λ) dE(λ) ∫_R G(λ) dE(λ))~.  (2.3.20)

An operator A acting in the Hilbert space H is called normal if it is densely defined, closed, and commutes with its adjoint, i.e.,

AA* = A*A.  (2.3.21)


It is clear that self-adjoint and unitary operators are normal. For every F ∈ L_0, the operator (2.3.17) is normal.

The Representation of a Resolution of the Identity by a Change of Variables Let us explain how to change the variable in the spectral integral and how to construct the Cartesian product of resolutions of the identity.

Let (R, R) be a measurable space, let R' be another space of points λ', μ', . . . , and let R ∋ λ ↦ ϕ(λ) ∈ R' be a fixed one-to-one mapping of R into R'. Without loss of generality, we can assume that ϕ(R) = R'. Given R and ϕ, we can define on R' (in the standard way) the collection R' of all sets α' ⊆ R' whose preimage ϕ^{-1}(α') lies in R. It can be shown that R' is a σ-algebra. Thus, we have constructed the measurable space (R', R').

Let E be a resolution of the identity given on (R, R). We now describe the standard method for constructing the image of E, i.e., a resolution of the identity E' on (R', R'). Namely, we set

R' ∋ α' ↦ E'(α') = E(ϕ^{-1}(α')).  (2.3.22)

It is not difficult to show that the operator-valued measure (2.3.22) is a resolution of the identity, i.e., satisfies requirements (a) and (b).

The following theorem gives the rule for the change of variables in a spectral integral.

Theorem 2.3.9 Let L_0(R', R', E') be the collection of complex-valued functions defined on R', measurable with respect to R', and finite E'-almost everywhere. If F' ∈ L_0(R', R', E'), then F' ∘ ϕ ∈ L_0(R, R, E), and the change-of-variables formulas

∫_R F'(ϕ(λ)) dE(λ) = ∫_{R'} F'(λ') dE'(λ') = I_{F'};  (2.3.23)

D(I_{F'}) = { f ∈ H | ∫_R |F'(ϕ(λ))|² d(E(λ)f, f)_H < ∞ }
          = { f ∈ H | ∫_{R'} |F'(λ')|² d(E'(λ')f, f)_H < ∞ }  (2.3.24)

hold true.

The Product of Resolutions of the Identity Let us describe the direct product of resolutions of the identity. Let (R_1, R_1) and (R_2, R_2) be two measurable spaces with resolutions of the identity E_1 and E_2, respectively. The values of these resolutions of the identity are


projectors in the same Hilbert space H. Suppose that E_1 and E_2 commute, i.e., the operators E_1(α_1) and E_2(α_2) commute for all α_1 ∈ R_1 and α_2 ∈ R_2. As in the case of scalar measures, one can construct the direct product E of the resolutions of the identity E_1 and E_2 on the space R = R_1 × R_2. More precisely, denote by R the direct product R_1 × R_2 of the σ-algebras R_1 and R_2, consisting of all subsets of R_1 × R_2 that belong to the σ-span of all possible rectangles α_1 × α_2, α_1 ∈ R_1, α_2 ∈ R_2. It is necessary to construct a resolution of the identity E on the measurable space (R, R) such that

E(α_1 × α_2) = E_1(α_1)E_2(α_2),  α_1 ∈ R_1, α_2 ∈ R_2,  (2.3.25)

i.e., the measure E of a rectangle is the product of the corresponding measures of its sides. Note that, since E_1(α_1) and E_2(α_2) commute, the operator on the right-hand side of (2.3.25) is a projector. This E is called the direct product of E_1 and E_2 and is denoted by E = E_1 × E_2.

We arrive at a quite unexpected result: unlike the scalar case, the direct product E does not always exist. Nevertheless, in "proper" cases it does. Thus, let R be a complete separable metric space and let R = B(R) be the σ-algebra of its Borel subsets. A resolution of the identity defined on B(R) is called a Borel resolution of the identity.

Theorem 2.3.10 Let E_1 and E_2 be two commuting Borel resolutions of the identity on R_1 and R_2, respectively. Then condition (2.3.25) determines a unique resolution of the identity E = E_1 × E_2 defined on R = B(R_1 × R_2).
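In finite dimensions, commuting symmetric matrices give a simple model of (2.3.25): the product of their commuting spectral projectors is again a projector. The matrices and sets below are illustrative choices.

```python
import numpy as np

# Direct product (2.3.25) in miniature: for commuting symmetric matrices
# A1, A2 (simultaneously diagonal here), the product
# E(alpha1 x alpha2) = E1(alpha1) E2(alpha2) of commuting spectral
# projectors is again a projector.
A1 = np.diag([1.0, 1.0, 2.0])
A2 = np.diag([3.0, 4.0, 4.0])        # commutes with A1

def projector(A, alpha):
    """Spectral projector of A onto the eigenvalues lying in alpha."""
    lam, V = np.linalg.eigh(A)
    P = np.zeros_like(A)
    for i, l in enumerate(lam):
        if alpha(l):
            P += np.outer(V[:, i], V[:, i])
    return P

E1 = projector(A1, lambda t: t < 1.5)   # eigenvalue 1 of A1
E2 = projector(A2, lambda t: t > 3.5)   # eigenvalue 4 of A2

P = E1 @ E2                              # E(alpha1 x alpha2)
assert np.allclose(E1 @ E2, E2 @ E1)     # the measures commute
assert np.allclose(P @ P, P)             # idempotent ...
assert np.allclose(P, P.T)               # ... and symmetric: a projector
```

The subtlety discussed in the text, namely that E_1 × E_2 may fail to exist, appears only for genuinely infinite σ-algebras; in this atomic model existence is automatic.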

The given result remains correct for an arbitrary finite number of resolutions of the identity and also has a generalization to an infinite number of resolutions of the identity.

The Spectral Decomposition of a Bounded Self-Adjoint Operator Let us start with the construction of the spectral integral for a bounded operator. Consider a bounded operator A acting in a Hilbert space H. Assume that its spectrum S(A) is a (closed) set in a finite interval (a, b) ⊂ R. Denote its resolvent by R_z = (A − zI)^{-1}, z ∈ C \ S(A).

Let F(z) be an analytic function defined in a certain (complex) neighborhood of the interval [a, b]. The collection of all functions of this sort is denoted by A([a, b]). For any closed Jordan contour γ that encloses the spectrum S(A) and lies in the domain of the function F, one can write the integral

F(A) = −(1/(2πi)) ∮_γ F(z)R_z dz.  (2.3.26)


As we know, this integral is independent of the choice of the Jordan contour γ. The mapping

A([a, b]) ∋ F → F(A) ∈ L(H)    (2.3.27)

is a homomorphism of the algebra of functions A([a, b]) into the algebra L(H) of bounded operators in H that transforms the function F(z) ≡ 1 into the identity operator and the function F(z) = z into the operator A. Recall that A([a, b]) is an algebra with respect to the standard algebraic operations

(F + G)(z) = F(z) + G(z),    (FG)(z) = F(z)G(z),    F, G ∈ A([a, b]),    (2.3.28)

where the functions F + G and FG are defined on the intersection of the domains of the functions F and G (note that A([a, b]) also contains the constants).

Lemma 2.3.11 The formula

F(A) = lim_{ε→+0} (1/2πi) ∫_a^b F(λ)(R_{λ+iε} − R_{λ−iε}) dλ = lim_{ε→+0} (ε/π) ∫_a^b F(λ) R_{λ+iε} R_{λ−iε} dλ,    F ∈ A([a, b]),    (2.3.29)

where the limit is taken in the operator norm, holds true.

Now let us construct the resolution of the identity of a self-adjoint operator.

Theorem 2.3.12 Let A be a bounded self-adjoint operator. Then one can define a resolution of the identity E on the σ-algebra B(R) of Borel subsets of the real axis such that the spectral representation

A = ∫_R λ dE(λ)    (2.3.30)

is true.

In proving this theorem, the lemma below is used; it is important in its own right. Let A be self-adjoint. Then

Rz* = ((A − zI)^{−1})* = (A* − z̄I)^{−1} = R_{z̄}


and relation (2.3.29) can be rewritten in the form

F(A) = lim_{ε→+0} (ε/π) ∫_a^b F(λ) R_{λ+iε}* R_{λ+iε} dλ,    F ∈ A([a, b]).    (2.3.31)
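The contour-integral definition (2.3.26) and the spectral representation (2.3.30) can be compared numerically in finite dimensions. The sketch below (the matrix A and contour are illustrative choices) evaluates the Riesz-type integral over a discretized circle enclosing the spectrum and checks it against F(A) computed from the eigendecomposition.

```python
import numpy as np

# Sketch of the contour integral (2.3.26) for a small symmetric matrix:
# F(A) = -(1/2*pi*i) * integral over gamma of F(z) R_z dz,
# with gamma a counterclockwise circle enclosing S(A).
A = np.array([[2.0, 1.0], [1.0, 3.0]])      # eigenvalues (5 +- sqrt(5))/2
F = np.exp                                   # entire, hence in A([a, b])

center, radius, n = 2.5, 3.0, 400
theta = 2 * np.pi * np.arange(n) / n
z = center + radius * np.exp(1j * theta)     # contour points
dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)

FA = np.zeros((2, 2), dtype=complex)
for zk, dzk in zip(z, dz):
    Rz = np.linalg.inv(A - zk * np.eye(2))   # resolvent R_z
    FA -= F(zk) * Rz * dzk / (2j * np.pi)

# compare with F(A) built from the spectral decomposition (2.3.30)
lam, V = np.linalg.eigh(A)
FA_spec = (V * F(lam)) @ V.T
assert np.allclose(FA, FA_spec, atol=1e-8)
```

The trapezoidal rule on a periodic integrand converges very fast here, so a few hundred contour points already reproduce F(A) to near machine precision.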

Lemma 2.3.13 Assume that F ∈ A([a, b]) takes values on the real axis (the algebra of such F is denoted by A_Re([a, b])). Then the operator F(A) is self-adjoint and admits the estimate

|(F(A)f, g)_H| ≤ max{|F(λ)| : λ ∈ [a, b]} ‖f‖_H ‖g‖_H,    f, g ∈ H; F ∈ A_Re([a, b]).    (2.3.32)

Functions of Operators and Their Spectrum An analytic function of the form (2.3.26) of a self-adjoint operator A ∈ L(H) admits a representation as a spectral integral

F(A) = ∫_R F(λ) dE(λ),    F ∈ A([a, b]).    (2.3.33)

In (2.3.33), it is assumed that S(A) ⊂ (a, b); therefore, E is concentrated on (a, b). The spectral representation (2.3.30) enables us to generalize the definition of a function of an operator A from the class of analytic functions to the class L0 = L0(R, B(R), E) of functions of the form R ∋ λ → F(λ) ∈ C ∪ {∞} measurable with respect to B(R) and almost everywhere finite with respect to E. Namely, we set

L0(R, B(R), E) ∋ F → F(A) = ∫_R F(λ) dE(λ),    (2.3.34)

D(F(A)) = { f ∈ H : ∫_R |F(λ)|² d(E(λ)f, f)_H < ∞ }.

As in the case of a scalar measure, the support of a general resolution of the identity E on the measurable space (R, B(R)) is defined as the intersection of all closed sets ϕ ⊆ R of full measure E, i.e., such that E(ϕ) = I. This support is denoted by supp E. It is closed and, hence, belongs to B(R). It is easy to see that

E(supp E) = I.    (2.3.35)

Also note that if o is an open set on the axis R such that o ∩ supp E = ∅, then E(o) = 0.


Theorem 2.3.14 The spectrum of a self-adjoint operator A ∈ L(H) coincides with the support of its resolution of the identity, namely S(A) = supp E. Thus, the integral over R in relations (2.3.30), (2.3.33), and (2.3.34) can be replaced with an integral over S(A). The resolution of the identity E is uniquely defined for a given self-adjoint operator A ∈ L(H). It is called the resolution of the identity of the operator A.

We remark that, if R is a topological space with a countable basis of neighborhoods and R is the σ-algebra B(R) of its Borel subsets, then the properties of the support, in particular (2.3.35), remain true.

The Spectral Decomposition of Unitary Operators For a unitary operator, the spectral decomposition is constructed as for a self-adjoint operator, with the real axis replaced by the unit circle. Let U be a unitary operator in H and let S(U) be its spectrum. It is well known that S(U) is a closed subset of the unit circle T = {z ∈ C : |z| = 1}. Denote by A(T) the collection of all functions analytic in a certain neighborhood of T. Just as A([a, b]), the class A(T) is an algebra with respect to the standard algebraic operations. Relations (2.3.26)-(2.3.28) remain true with A replaced by U, A([a, b]) by A(T), and γ by a contour that encloses the unit circle and is traversed in the required direction. For example, γ = γ1 ∪ γ2, where γ1 (γ2) is a circle whose radius is greater than one (less than one), traversed counterclockwise (clockwise). The role of the transition C ∋ z → z̄ ∈ C is now played by the reflection with respect to the unit circle: C \ {0} ∋ z → z* = z̄^{−1} ∈ C \ {0}.

Lemma 2.3.15 Let A_Re(T) be the class of functions from A(T) that take real values on T. Then the operator F(U) is self-adjoint and satisfies the estimate

|(F(U)f, g)_H| ≤ max{|F(λ)| : λ ∈ T} ‖f‖_H ‖g‖_H,    F ∈ A_Re(T); f, g ∈ H.    (2.3.36)

The following theorem is similar to Theorems 2.3.12 and 2.3.14.

Theorem 2.3.16 Let U be a unitary operator. Then, on the σ-algebra B(T) of the Borel subsets of the unit circle T = {z ∈ C : |z| = 1}, the resolution of the identity E of the operator U is defined so that the representation

U = ∫_T λ dE(λ) = ∫_0^{2π} e^{iϕ} dE(e^{iϕ})    (2.3.37)

holds true.


In (2.3.37), T can be replaced with the spectrum S(U) of the operator U. For every function F(z) analytic in a neighborhood of S(U), the equality

F(U) = ∫_{S(U)} F(λ) dE(λ)    (2.3.38)

holds true, where the operator F(U) is constructed as a contour integral of the type (2.3.26). The resolution of the identity E from (2.3.37) is defined uniquely. As in the case of a self-adjoint operator, the representation (2.3.37) makes it possible to generalize the notion of a function of the operator U to arbitrary functions from the class L0(T, B(T), E) of E-almost everywhere finite functions defined on T and measurable with respect to B(T). This generalization is given by the formula

L0(T, B(T), E) ∋ F → F(U) = ∫_T F(λ) dE(λ),    (2.3.39)

D(F(U)) = { f ∈ H : ∫_T |F(λ)|² d(E(λ)f, f)_H < ∞ }.

The Spectral Theorem for a Normal Operator Let us construct the spectral decomposition of a bounded normal operator A. The corresponding resolution of the identity will be constructed as a direct product of the resolutions of the identity of two commuting self-adjoint operators associated with A.

Theorem 2.3.17 Let A1 and A2 be two bounded self-adjoint operators. Their resolutions of the identity commute if and only if the resolvents R_{z1}(A1) and R_{z2}(A2) commute for some fixed z1 and z2 regular for the operators A1 and A2, respectively.

Consider a bounded normal operator in H, i.e., an operator A ∈ L(H) such that

A*A = AA*.

Theorem 2.3.18 Let A be a bounded normal operator. Then, on the σ-algebra B(C) of Borel subsets of the complex plane, one can construct the resolution of the identity E of the operator A satisfying the representation

A = ∫_C λ dE(λ).    (2.3.40)

In (2.3.40), C can be replaced with the spectrum S(A) of the operator A. Any function F(z) analytic in a neighborhood of S(A) satisfies the equality

F(A) = ∫_{S(A)} F(λ) dE(λ),    (2.3.41)
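A unitary matrix is the simplest normal operator, so the statements around (2.3.37) and (2.3.40) can be sketched concretely. In the example below (a rotation matrix, an illustrative choice) the eigenvalues lie on the unit circle and a function of the operator acts through its eigenvalues.

```python
import numpy as np

# Sketch of the spectral decomposition of a unitary (hence normal) matrix:
# eigenvalues lie on the unit circle T, eigenvectors are orthonormal, and
# F(U) acts by applying F to the eigenvalues.
theta = 0.8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: U* U = I
lam, V = np.linalg.eig(U)                          # normal, distinct eigenvalues

assert np.allclose(np.abs(lam), 1.0)               # S(U) lies on T
assert np.allclose(V @ V.conj().T, np.eye(2))      # eigenvectors orthonormal

# F(U) for F(z) = z**2 via the spectral measure vs. the direct product
F_U = (V * lam**2) @ V.conj().T
assert np.allclose(F_U, U @ U)
```

For a normal matrix with distinct eigenvalues the eigenvectors returned by `eig` are automatically orthogonal, which is the finite-dimensional shadow of Theorem 2.3.18.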


where the operator .F (A) is constructed as a contour integral of the type (2.3.26). The resolution of the identity E from (2.3.40) is determined uniquely.

2.4 The Spectral Decomposition of Unbounded Operators

Now let us generalize the previous material to the case of an unbounded operator. We consider first a self-adjoint operator A. The idea is as follows: to construct the resolution of the identity E of the operator A, we first find a bounded (self-adjoint or unitary) operator for which the resolution of the identity has already been constructed; its proper representation then gives the required E.

Theorem 2.4.1 Let A be an arbitrary self-adjoint operator. Then the resolution of the identity E of the operator A is defined on the σ-algebra B(R) of Borel subsets of the real axis and the representation

A = ∫_R λ dE(λ),    D(A) = { f ∈ H : ∫_R λ² d(E(λ)f, f)_H < ∞ }    (2.4.1)

holds true. In (2.4.1), R can be replaced with the spectrum S(A) of the operator A. The resolution of the identity E in (2.4.1) is defined uniquely.

As before, using the spectral integral, it is possible to construct the theory of functions of the unbounded self-adjoint operator A. Thus, for F ∈ L0(R, B(R), E), where E is the resolution of the identity that corresponds to A, we construct the operator

F(A) = ∫_R F(λ) dE(λ),    D(F(A)) = { f ∈ H : ∫_R |F(λ)|² d(E(λ)f, f)_H < ∞ }.    (2.4.2)

The properties of the mapping L0(R, B(R), E) ∋ F → F(A) were described previously. Let us mention some important functions of a self-adjoint operator A (the operators in (2.4.5)-(2.4.7) are, generally speaking, unbounded).

(1) A resolution of the identity. For the indicator χ_α of a set α ∈ B(R), we have

E(α) = χ_α(A) = ∫_R χ_α(λ) dE(λ).    (2.4.3)


(2) A resolvent. Let z ∉ S(A); then the representation

Rz = (A − zI)^{−1} = ∫_{S(A)} 1/(λ − z) dE(λ)    (2.4.4)

holds true.

(3) The square root of a non-negative operator A:

√A = ∫_{S(A)} √λ dE(λ) = ∫_0^∞ √λ dE(λ) ≥ 0;    (√A)² = A.    (2.4.5)

(4) The absolute value of an operator:

|A| = ∫_{S(A)} |λ| dE(λ) = ∫_R |λ| dE(λ) ≥ 0.    (2.4.6)

(5) The exponential. For any z ∈ C,

exp(zA) = e^{zA} = ∫_{S(A)} e^{zλ} dE(λ).    (2.4.7)
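In finite dimensions each of the functions (2.4.3)-(2.4.7) reduces to applying a scalar function to the eigenvalues. The sketch below (matrix chosen for illustration) builds them all from one eigendecomposition and verifies their defining identities.

```python
import numpy as np

# Sketch of (2.4.3)-(2.4.7): functions of a self-adjoint matrix via its
# eigendecomposition A = sum_i lam_i P_i, the discrete form of the
# spectral integral.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])           # eigenvalues 1 and 3
lam, V = np.linalg.eigh(A)

def f_of_A(f):
    return (V * f(lam)) @ V.T

E_alpha = f_of_A(lambda l: (l < 2.0).astype(float))  # chi_alpha(A), (2.4.3)
assert np.allclose(E_alpha @ E_alpha, E_alpha)       # a projector

z = 0.5j
Rz = f_of_A(lambda l: 1.0 / (l - z))                 # resolvent, (2.4.4)
assert np.allclose(Rz @ (A - z * np.eye(2)), np.eye(2))

sqrtA = f_of_A(np.sqrt)                              # (2.4.5); A >= 0 here
assert np.allclose(sqrtA @ sqrtA, A)

absA = f_of_A(np.abs)                                # (2.4.6)
assert np.allclose(absA, A)                          # A is already non-negative

expA = f_of_A(np.exp)                                # (2.4.7) with z = 1
assert np.allclose(np.linalg.eigvalsh(expA), np.exp(lam))
```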

The Stone Formula There are further facts related to the resolution of the identity E of a self-adjoint operator A. Here we present the usual expression for E in terms of the resolvent of A.

Theorem 2.4.2 Let E be the resolution of the identity of a self-adjoint operator and let Rz be its resolvent. Then, denoting by δ̃ the closure of δ, the formula

(1/2)(E(δ) + E(δ̃)) = lim_{ε→+0} (1/2πi) ∫_{δ+iε} (Rz − R_{z̄}) dz    (2.4.8)

holds true in the sense of the strong convergence for every open finite interval .δ ⊂ R. Theorem 2.4.3 For general self-adjoint operators .A1 and .A2 , Theorem 2.3.17 holds true in the same formulation. As in the case of scalar measures, for a resolution of the identity defined on Borel subsets of the real axis, we often use (instead of an operator-valued measure) a nondecreasing operator-valued function that is also called a resolution of the identity.
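Stone's formula can be checked numerically for a small matrix: assuming an interval δ whose endpoints carry no spectrum (so E(δ) = E(δ̃)), the integrated boundary values of the resolvent recover the spectral projector. The matrix and interval below are illustrative choices.

```python
import numpy as np

# Numerical sketch of Stone's formula (2.4.8) for a 2x2 symmetric matrix:
# (1/2*pi*i) * integral over delta of (R_{l+i*eps} - R_{l-i*eps}) dl
# approximates E(delta) when eps is small.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])     # eigenvalues 1 and 3
lam, V = np.linalg.eigh(A)
eps = 1e-2
grid = np.linspace(0.0, 2.0, 20001)          # delta = (0, 2) contains only 1

acc = np.zeros((2, 2), dtype=complex)
for l in grid:
    R_plus = (V * (1.0 / (lam - l - 1j * eps))) @ V.T   # R_{l + i eps}
    R_minus = (V * (1.0 / (lam - l + 1j * eps))) @ V.T  # R_{l - i eps}
    acc += R_plus - R_minus
E_delta = acc * (grid[1] - grid[0]) / (2j * np.pi)

P1 = np.outer(V[:, 0], V[:, 0])              # true projector onto eigenvalue 1
assert np.allclose(E_delta.imag, 0, atol=1e-6)
assert np.allclose(E_delta.real, P1, atol=0.02)
```

The error is of order ε (the Lorentzian tails leaking past the interval endpoints), which is why the tolerance above is loose; letting ε → +0 recovers the limit in the theorem.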


The operator-valued function R ∋ λ → Eλ whose values are projectors in a fixed Hilbert space H is called a resolution of the identity if it satisfies the conditions:

(a) monotonicity: for all λ, μ ∈ R, λ < μ ⇒ Eλ ≤ Eμ;
(b) completeness: lim_{λ→−∞} Eλ = 0 and lim_{λ→+∞} Eλ = I in the sense of strong convergence;
(c) left-continuity: lim_{λ→μ−0} Eλ = Eμ in the sense of strong convergence.

Theorem 2.4.4 Let E be a resolution of the identity (a measure) given on B(R). Then

R ∋ λ → Eλ = E((−∞, λ))    (2.4.9)

is a resolution of the identity (a function). Conversely, for a given resolution of the identity Eλ, one can construct a resolution of the identity E on B(R) such that Eλ and E satisfy (2.4.9).

The Case of Normal Operators Recall that a closed densely defined operator A is called normal if A*A = AA* in the case of a bounded operator; in the case of an unbounded operator, normality means that the resolutions of the identity of the associated self-adjoint operators commute.

Theorem 2.4.5 Let A be an arbitrary normal operator. Then, on the σ-algebra B(C) of Borel subsets of the complex plane, one can define a resolution of the identity E of the operator A such that the spectral representation

A = ∫_C λ dE(λ),    D(A) = { f ∈ H : ∫_C |λ|² d(E(λ)f, f)_H < ∞ }    (2.4.10)

holds true. In (2.4.10), .C can be replaced with the spectrum .S(A) of the operator A. The resolution of the identity E from (2.4.10) is defined uniquely.

2.5 Representations of One-Parameter Unitary Groups

Let A be a self-adjoint operator acting in a Hilbert space H and let E be its resolution of the identity. For the function R × R ∋ (λ, t) → e^{itλ} ∈ C, we construct, according to (2.4.7), the operator-valued function

R ∋ t → U(t) = ∫_R e^{itλ} dE(λ) = e^{itA} ∈ L(H).    (2.5.1)


It follows from the properties of the spectral integral that the operator U(t) is unitary for all t ∈ R and satisfies the equality

U(t + s) = U(t)U(s),    t, s ∈ R.    (2.5.2)
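The group U(t) = e^{itA} and its properties are easy to exhibit for a symmetric matrix (the matrix and test vector below are illustrative choices):

```python
import numpy as np

# Sketch of (2.5.1)-(2.5.3): U(t) = exp(itA) built from the eigendecomposition
# of a symmetric matrix is unitary and satisfies the group law.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, V = np.linalg.eigh(A)

def U(t):
    return (V * np.exp(1j * t * lam)) @ V.T

t, s = 0.7, -1.3
assert np.allclose(U(t) @ U(t).conj().T, np.eye(2))   # unitary
assert np.allclose(U(t + s), U(t) @ U(s))             # group law (2.5.2)

# the strong derivative (2.5.3): U'(t) f = i U(t) A f
h, f = 1e-6, np.array([1.0, 2.0])
deriv = (U(t + h) @ f - U(t) @ f) / h
assert np.allclose(deriv, 1j * U(t) @ (A @ f), atol=1e-4)
```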

The function defined in (2.5.1) is strongly continuous, i.e., for all f ∈ H and t ∈ R, U(s)f → U(t)f as s → t. To prove this fact, we use relation (2.3.12) and obtain

‖U(s)f − U(t)f‖²_H = ‖∫_R (e^{isλ} − e^{itλ}) dE(λ)f‖²_H = ∫_R |e^{isλ} − e^{itλ}|² d(E(λ)f, f)_H → 0 as s → t.

Moreover, U(t) is strongly differentiable, i.e., for all f ∈ D(A) and t ∈ R, the strong derivative

U'(t)f = lim_{h→0} (1/h)(U(t + h) − U(t))f = iU(t)Af    (2.5.3)

exists and is a continuous vector function. Indeed, by virtue of relations (2.5.1), (2.4.7), and (2.3.12), we obtain

‖iU(t)Af − (1/h)(U(t + h) − U(t))f‖²_H
  = ‖∫_R (iλe^{itλ} − (1/h)(e^{i(t+h)λ} − e^{itλ})) dE(λ)f‖²_H
  = ∫_R |iλ − (1/h)(e^{ihλ} − 1)|² d(E(λ)f, f)_H,

where the integrand on the right-hand side of this equality vanishes as h → 0 for all λ ∈ R and is uniformly bounded with respect to h by the function cλ². For f ∈ D(A), by virtue of the Lebesgue theorem, we can pass to the limit under the integral sign and get (2.5.3). The continuity of the derivative follows from the inclusion f ∈ D(A).

Relations (2.5.1) and (2.5.3) have the following interpretation. Consider the operator-differential equation

u'(t) = iAu(t),    t ∈ R,    (2.5.4)

where R ∋ t → u(t) ∈ H is the required solution. The function u is assumed to be strongly continuously differentiable and u(t) ∈ D(A) for all t. Such solutions are


called strong. The strong solution of Eq. (2.5.4) satisfying the initial condition u(0) = u0 ∈ D(A), i.e., the solution of the corresponding Cauchy problem, exists and is given by the formula

u(t) = U(t)u0,    t ∈ R.    (2.5.5)

It is not difficult to verify that this Cauchy problem has a unique solution. Another view of these formulas is as follows. We say that a function R ∋ t → U(t), whose values are unitary operators in H satisfying relation (2.5.2), defines a one-parameter unitary group; in other words, it gives a unitary representation of the group R. Thus, (2.5.1) is an example of a one-parameter unitary group which is, in addition, strongly continuous. The theorem below says that formula (2.5.1) gives the general form of such groups.

Theorem 2.5.1 (Stone) A strongly continuous one-parameter unitary group U(t), t ∈ R, always admits the representation (2.5.1) with a certain resolution of the identity E uniquely determined by the group. The corresponding operator A is called the infinitesimal generator of this group.

Proof Let us construct a linear set D ⊆ H, dense in H, such that the vector function

R ∋ t → U(t)f ∈ H    (2.5.6)

is strongly continuously differentiable for all f ∈ D (the construction presented below is a particular case of the construction of the so-called Gårding domain). For F ∈ C0∞(R) and g ∈ H, we consider a vector of the form

gF = ∫_R F(s)U(s)g ds ∈ H,    (2.5.7)

where the integral is regarded as the limit of Riemann integral sums in H. By virtue of the continuity of F(·) and U(·)g and the compactness of the support of F(·), standard arguments enable us to conclude that integral (2.5.7) exists and possesses the natural properties of Riemann integrals. The set D is chosen as the collection of linear combinations of vectors (2.5.7) with F ∈ C0∞(R) and g ∈ H.

The set D is dense in H. To prove this, we consider a vector h ∈ H with h ⊥ D. Taking the scalar product of (2.5.7) with this vector, we obtain

0 = (gF, h)_H = ∫_R F(s)(U(s)g, h)_H ds,    ∀F ∈ C0∞(R).


In view of the arbitrariness of F in this equality and the continuity of the function R ∋ s → (U(s)g, h)_H ∈ C (which follows from our assumptions), we conclude that (U(s)g, h)_H = 0 for all s ∈ R and g ∈ H. In particular, by setting s = 0 and using the equality U(0) = I (a consequence of (2.5.2)), we obtain (g, h)_H = 0 for all g ∈ H, whence h = 0. Therefore, D is dense in H.

To prove that the function (2.5.6) is strongly continuously differentiable, it suffices to consider f of the form (2.5.7). By using (2.5.2) and changing variables in the integral, we get

(1/h)(U(t + h)gF − U(t)gF) = (1/h) ∫_R F(s)(U(t + h)U(s) − U(t)U(s))g ds
  = (1/h) ∫_R F(s)(U(t + h + s) − U(t + s))g ds
  = ∫_R (1/h)(F(s − t − h) − F(s − t))U(s)g ds
  → −∫_R F'(s − t)U(s)g ds = g_{−F'(·−t)} = U'(t)gF as h → 0,    t ∈ R.    (2.5.8)

The limit transition in (2.5.8) is easily justified by the following simple estimate (established by a limit transition from integral sums):

‖∫_R G(s)U(s)g ds‖_H ≤ ∫_R ‖U(s)g‖_H |G(s)| ds,    G ∈ C0(R), g ∈ H.    (2.5.9)

As follows from (2.5.9), the vector function on the right-hand side of (2.5.8) is strongly continuous. Note that U'(t)gF has the form (2.5.7) for all t ∈ R.

On D, we introduce the operator A in the space H by the formula

D = D(A) ∋ f → Af = (1/i) U'(0)f ∈ D ⊆ H.    (2.5.10)

Note that .D is invariant under the action of A. By applying the operator .U (t) to (2.5.7), using (2.5.2), and changing the variables in the integral, we establish that .D is also invariant under the action of .U (t) for all .t ∈ R.
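The smoothing mechanism (2.5.7)-(2.5.8) can be sketched numerically. Below, a Gaussian weight stands in for a C0∞ function (an assumption for illustration: it decays fast enough that truncating the integral is harmless), and the identity U'(t)gF = g_{−F'(·−t)} is checked by quadrature and a finite difference.

```python
import numpy as np

# Sketch of the Garding-type smoothing (2.5.7)-(2.5.8): for a smooth weight F,
# gF = integral of F(s) U(s) g ds is differentiable along the group, with
# U'(t) gF = g_{-F'(. - t)}.
A = np.array([[1.0, 0.3], [0.3, 2.0]])
lam, V = np.linalg.eigh(A)
g = np.array([1.0, -1.0])

def U(t):
    return (V * np.exp(1j * t * lam)) @ V.T

s = np.linspace(-8.0, 8.0, 4001)
ds = s[1] - s[0]
F = np.exp(-s**2)                 # stand-in for a C_0^inf weight
dF = -2 * s * np.exp(-s**2)       # F'

gF = sum(Fk * (U(sk) @ g) for Fk, sk in zip(F, s)) * ds

t, h = 0.5, 1e-5
lhs = (U(t + h) @ gF - U(t) @ gF) / h          # difference quotient of U(t) gF
rhs = sum(-dFk * (U(sk + t) @ g) for dFk, sk in zip(dF, s)) * ds
assert np.allclose(lhs, rhs, atol=1e-3)
```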


The operator A introduced above is Hermitian. To prove this, it suffices to show that (AgF, hG)_H = (gF, AhG)_H, where F, G ∈ C0∞(R) and g, h ∈ H. According to (2.5.8), we have

(AgF, hG)_H = lim_{h→0} ((1/ih)(U(h) − I)gF, hG)_H = lim_{h→0} (gF, −(1/ih)(U(−h) − I)hG)_H = (gF, AhG)_H.

Furthermore, we can prove that A is essentially self-adjoint. Let .z ∈ C \ R and ϕ ∈ D(A∗ ) be such that .A∗ ϕ = zϕ; it is necessary to prove that .ϕ = 0. To do this, we first show that

U'(t)gF = iAU(t)gF,    F ∈ C0∞(R), g ∈ H, t ∈ R.    (2.5.11)

This equality is proved by noticing that U(t)gF also has the form (2.5.7) (this vector is equal to g_{F(·−t)}) and then computing AU(t)gF according to (2.5.10). In view of (2.5.11), we obtain, for any F ∈ C0∞(R) and g ∈ H,

(d/dt)(U(t)gF, ϕ)_H = (U'(t)gF, ϕ)_H = i(AU(t)gF, ϕ)_H = i(U(t)gF, A*ϕ)_H = i z̄ (U(t)gF, ϕ)_H,    t ∈ R.

Hence, the complex-valued bounded function a(t) = (U(t)gF, ϕ)_H satisfies the equation a' = i z̄ a. Therefore, a(t) = e^{i z̄ t} a(0), t ∈ R, and, by virtue of the condition Im z ≠ 0, it can be bounded only if a(0) = 0. Hence (gF, ϕ)_H = a(0) = 0 for all F ∈ C0∞(R) and g ∈ H, i.e., ϕ ⊥ D ⇒ ϕ = 0.

Thus, we have proved that the operator Ã is self-adjoint. In terms of this operator, according to formula (2.5.1), we can now construct the operator-valued function R ∋ t → V(t) = e^{itÃ} ∈ L(H). It remains to show that V(t) = U(t), t ∈ R. According to (2.5.4) and (2.5.5), v(t) = V(t)v0, v0 ∈ D(Ã), is a strong solution of the Cauchy problem

v'(t) = iÃv(t),    t ∈ R,  v(0) = v0.    (2.5.12)

We set v0 = gF for some F ∈ C0∞(R) and g ∈ H. In view of (2.5.11), the function u(t) = U(t)gF is also a strong solution of problem (2.5.12) (in (2.5.11), A can be replaced with Ã, since U(t)gF ∈ D). Further, in Theorem 2.6.1, it will be shown that strong solutions of Eq. (2.5.12) with self-adjoint Ã that correspond to the same initial data coincide. Therefore, V(t)gF = U(t)gF and, by virtue of the denseness of D in H, we get V(t) = U(t), t ∈ R. (It is clear that this coincidence can be proved even without using the general Theorem 2.6.1.)


In order to prove that E is uniquely determined by (2.5.1), we note that

(U(t)f, g)_H = ∫_R e^{itλ} d(E(λ)f, g)_H,    t ∈ R, ∀f, g ∈ H,    (2.5.13)

i.e., the function on the left-hand side of (2.5.13) is the Fourier-Stieltjes transform of the charge B(R) ∋ α → (E(α)f, g)_H ∈ C, and it is well known that a charge is uniquely determined by its Fourier transform. This proves that E is uniquely determined. □

We remark that the strong continuity of a one-parameter unitary group is equivalent to its weak continuity, i.e., to the continuity of the function R ∋ t → (U(t)f, g)_H ∈ C for all f, g ∈ H. Indeed, it suffices to prove that weak continuity implies strong continuity. For f ∈ H we have

‖U(t)f − U(s)f‖²_H = ((U(t) − U(s))f, (U(t) − U(s))f)_H = ((U(t) − U(s))*(U(t) − U(s))f, f)_H = ((2·I − U(t − s) − U(s − t))f, f)_H → 0 as s → t.

Using the last remark, Theorem 2.5.1 can be reformulated. A function R ∋ t → k(t) ∈ C is called positive definite if it satisfies the inequality

Σ_{j,l=1}^{n} k(t_j − t_l) ξ_j ξ̄_l ≥ 0    (2.5.14)

for all t1, …, tn ∈ R, ξ1, …, ξn ∈ C, and n ∈ N. The well-known Bochner-Khinchin theorem (see [Shi1]) asserts that a continuous positive definite function admits the representation

k(t) = ∫_R e^{iλt} dσ(λ),    t ∈ R,    (2.5.15)

where B(R) ∋ α → σ(α) ≥ 0 is a finite measure uniquely defined by this function. Conversely, each function of the form (2.5.15) is positive definite. By comparing

(U(t)f, g)_H = ∫_R e^{itλ} d(E(λ)f, g)_H,    t ∈ R, ∀f, g ∈ H,


with (2.5.15), we conclude that the function

R ∋ t → k(t) = (U(t)f, f)_H,    f ∈ H    (2.5.16)

is positive definite.
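Positive definiteness of k(t) = (U(t)f, f)_H, cf. (2.5.14)-(2.5.16), can be checked on a matrix example (A, f, and the sample points are illustrative choices): the Gram matrix K[j, l] = k(t_j − t_l) must be positive semi-definite.

```python
import numpy as np

# Sketch of (2.5.14)/(2.5.16): for U(t) = exp(itA), the function
# k(t) = (U(t)f, f) is positive definite.
A = np.array([[1.0, 0.5], [0.5, -1.0]])
lam, V = np.linalg.eigh(A)
f = np.array([0.3, 1.1])

def k(t):
    # (U(t)f, f) = sum_i e^{i t lam_i} |(V^T f)_i|^2  -- Bochner form (2.5.15)
    c = V.T @ f
    return np.sum(np.exp(1j * t * lam) * np.abs(c) ** 2)

ts = np.array([-1.0, 0.0, 0.4, 2.5])
K = np.array([[k(tj - tl) for tl in ts] for tj in ts])
eig = np.linalg.eigvalsh(K)           # K is Hermitian; eigenvalues >= 0
assert np.all(eig > -1e-10)
```

Here the measure σ of (2.5.15) is the discrete measure with weights |(V^T f)_i|² at the eigenvalues, which is exactly the finite-dimensional spectral measure of f.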

2.6 Self-Adjointness Criteria

In the study of the operator-differential equation (2.5.4), we have already shown that relations (2.5.5) and (2.5.1) give a solution of the corresponding Cauchy problem. Here, we somewhat generalize these arguments. Consider a densely defined operator B acting in a Hilbert space H, let I ⊆ R be a finite or infinite closed, open, or half-open interval, and let r ∈ N (in fact, we restrict ourselves to the cases r = 1, 2). A vector function I ∋ t → u(t) ∈ H is called a strong solution of the equation

(d^r u/dt^r)(t) + Bu(t) = 0,    t ∈ I,    (2.6.1)

on I if it is r times strongly continuously differentiable (i.e., has r strong derivatives on I, the last of which is continuous), u(t) ∈ D(B) for all t ∈ I, and it satisfies Eq. (2.6.1).

An r times strongly continuously differentiable vector function I ∋ t → u(t) ∈ H is a strong solution of the equation

(d^r u/dt^r)(t) + B*u(t) = 0,    t ∈ I    (2.6.2)

if and only if the "weak" equality

((d^r u/dt^r)(t), f)_H + (u(t), Bf)_H = 0,    f ∈ D(B), t ∈ I    (2.6.3)

holds true. This statement immediately follows from the definition of the adjoint operator, since (2.6.3) implies that u(t) ∈ D(B*) for all t ∈ I in view of the inclusion u^{(r)}(t) ∈ H.

We say that the Cauchy problem for Eq. (2.6.1) on I = [0, b), 0 < b ≤ ∞, is uniquely solvable in the strong sense if each strong solution of this equation on [0, b) such that u(0) = · · · = u^{(r−1)}(0) = 0 vanishes for all t ∈ (0, b).

If the Cauchy problem is uniquely solvable on [0, b) for some b > 0, then it is uniquely solvable on [0, ∞). Indeed, let [0, ∞) ∋ t → u(t) ∈ H be a strong solution of Eq. (2.6.1) on [0, ∞) such that u(0) = · · · = u^{(r−1)}(0) = 0. In view of the assumed unique solvability on


[0, b), we have u(t) = 0 for t ∈ (0, b) and, in particular, u(t) = 0 in a neighborhood of the point c = b/2; therefore u(c) = · · · = u^{(r−1)}(c) = 0. The function [0, ∞) ∋ t → u1(t) = u(t + c) is a strong solution of (2.6.1) on [0, ∞) such that u1(0) = u(c) = 0, …, u1^{(r−1)}(0) = u^{(r−1)}(c) = 0 and, hence, u1(t) = 0 for t ∈ (0, b). By repeating the same reasoning, we can show that the function

[0, ∞) ∋ t → u2(t) = u1(t + c) = u(t + 2c)

also vanishes for t ∈ (0, b). Then we construct the function u3(t), etc. As a final result, we conclude that u(t) = 0 for t ∈ [0, ∞).

If the operator B in (2.6.1) has the form B = ζA, where ζ ∈ C is a fixed number and A is a self-adjoint operator, then, under the corresponding restrictions imposed on the initial conditions, the Cauchy problem for (2.6.1) is solvable, and one can write a representation of the solution in terms of the resolution of the identity E of the operator A. Let us show that the self-adjointness of operators is closely related to the uniqueness of strong solutions of the Cauchy problems for the corresponding evolution equations.

The Schrödinger Criterion of Self-Adjointness The Schrödinger criterion of self-adjointness is formulated in the following theorem.

Theorem 2.6.1 Let A be a Hermitian operator acting in the separable Hilbert space H. For its essential self-adjointness, it is necessary that the Cauchy problems for both equations

(du/dt)(t) ± (iA*)u(t) = 0,    t ∈ [0, b)    (2.6.4)

be uniquely solvable in the strong sense on [0, b) for all b ∈ (0, ∞], and it is sufficient that these problems be uniquely solvable for some b.

Proof The proof is split into several steps.

I. Let us first establish sufficiency under the assumption that A has equal defect numbers. Assume the contrary, namely, that Ã is not self-adjoint. Then A has two different self-adjoint extensions A1 and A2 in H. Let E1 and E2 be the corresponding resolutions of the identity. For every g ∈ D(A) ⊆ D(A1), the integral ∫_R λ² d(E1(λ)g, g)_H is convergent. Therefore, the vector function

[0, ∞) ∋ t → u1(t) = ∫_R e^{iλt} dE1(λ)g    (2.6.5)

51

is strongly continuously differentiable and u1 (t) = i

λeiλt dE1 (λ)g.

.

R

It is easy to see that it is a strong solution of Eq. (2.6.4) with the sign “+” on [0, ∞). Indeed, it is necessary to check the corresponding weak equality (2.6.3), which now has the form    du1 (t), f + (u1 (t), iAf )H = 0, f ∈ D(A), t ∈ [0, ∞)). . dt H

.

Since d(E1 (λ)g, Af )H = d(E1 (λ)g, A1 f )H .



 λ =d

μ d(E1 (μ)g, f )H

= λd(E1 (λ)g, f )H ,

−∞

then we have    du1 (t), f + (u1 (t), iAf )H dt H . = i λeiλt d(E1 (λ)g, f )H − i eiλt d(E1 (λ)g, Af )H = 0, R

R

f ∈ D(A); t ∈ [0, ∞), i.e., the required relation is satisfied. ! Similarly, the function .u2 (t) constructed according to (2.6.5) for given .E2 is a strong solution of the same equation; .u1 (0) = u2 (0) = g. Thus, .u(t) = u1 (t) − u2 (t) is also a strong solution of the Eq. (2.6.4) with the sign “+” on .[0, ∞) such that .u(0) = 0. By virtue of the condition of the theorem, the problem is uniquely solvable on .[0, ∞). Therefore, .u(t) = 0 for .t ∈ [0, ∞), whence . eiλt d((E1 (λ) − E2 (λ))g, h)H = 0, g ∈ D(A), h ∈ H, t ∈ [0, ∞). R

(2.6.6)


Consider Eq. (2.6.4) with the sign "−". By repeating the arguments presented above with e^{iλt} replaced by e^{−iλt} in (2.6.5), we arrive at the relation that differs from (2.6.6) by the same change. Therefore, if we introduce the charge

ω(α) = ((E1(α) − E2(α))g, h)_H,    α ∈ B(R),

then, according to (2.6.6) and the indicated modification of this relation, we have ∫_R e^{iλt} dω(λ) = 0 for any t ∈ R. Taking into account the already applied theorem on the uniqueness of the Fourier-Stieltjes transform of a charge, we conclude that ω = 0, i.e., ((E1(α) − E2(α))g, h)_H = 0, α ∈ B(R). Since g ∈ D(A) and h ∈ H are arbitrary, this implies that E1 = E2, and we arrive at a contradiction.

II. In the case of an operator A with distinct defect numbers, we use the following lemma.

Lemma 2.6.2 Consider the orthogonal sum of spaces H ⊕ H with vectors f = (f1, f2), f1, f2 ∈ H, and an operator C with the dense domain D(C) = D(A) ⊕ D(A) acting in this space according to the formula Cf = (Af1, −Af2), f ∈ D(C), together with the equation

(du/dt)(t) + (iC)*u(t) = 0,    t ∈ [0, b), b ∈ (0, ∞],    (2.6.7)

for vector functions with values in H ⊕ H. We claim that if the Cauchy problems for both equations (2.6.4) are uniquely solvable in the strong sense on [0, b), then the Cauchy problem for Eq. (2.6.7) is also uniquely solvable in the sense of strong solutions, and vice versa.

Proof Let

[0, b) ∋ t → u(t) = (u1(t), u2(t)) ∈ H ⊕ H

be a strong solution of the Cauchy problem for Eq. (2.6.7). Since

C*f = (A*f1, −A*f2),    f ∈ D(C*) = D(A*) ⊕ D(A*),

the functions [0, b) ∋ t → u1(t) ∈ H and [0, b) ∋ t → u2(t) ∈ H

2.6 Self-Adjointness Criteria

53

deficiency index of this operator is equal to .(m + n, m + n). By virtue 2.6.2 the Cauchy problem for (2.6.7) is uniquely solvable on .[0, b). By applying this lemma to the case where A is replaced with .−A, we conclude that this uniqueness is preserved for the Eq. (2.6.7), in which “+” is replaced with “.−”. In view of the fact that the defect numbers of the operator C coincide and are equal to .m + n, the reasoning used in step I is applicable in this case, and we conclude that C is essentially self-adjoint. But then .m + n = 0, whence .m = n = 0, i.e., A is also essentially self-adjoint. IV. To prove necessity, we first establish a general lemma that, in our case, reflects the Holmgren principle in the theory of partial differential equations. Lemma 2.6.3 Consider Eq. (2.6.2) on .[0, b), .b ∈ (0, ∞). Assume that there exists a set . dense in .H such that the Cauchy problem  .

dr ϕ dt r

 (t) + (−1)r Bϕ(t) = 0, ϕ(T ) = ϕ0 , . . . , ϕ

(r−1)

t ∈ [0, T ];

(2.6.8)

(T ) = ϕr−1

has a strong solution for all T ∈ (0, b) and all ϕ0, …, ϕ_{r−1} in this dense set. Then the Cauchy problem for (2.6.2) is uniquely solvable on [0, b) in the sense of strong solutions.

Proof Let us prove Lemma 2.6.3, e.g., in the case r = 2. One can easily verify the following formula of integration by parts. Let [0, T] ∋ t → α(t), β(t) ∈ H be twice strongly continuously differentiable vector functions. Then

T



(α (t), β(t))H dt = .

0

(α(t), β  (t))H dt (2.6.9)

0

T

+ [(α  (t), β(t))H − (α(t), β  (t))H ] . 0

Let u(t) be a strong solution of the Cauchy problem for Eq. (2.6.2) with r = 2 on [0, b) such that u(0) = u'(0) = 0, and let ϕ(t) be a strong solution mentioned in the formulation of the lemma. By using (2.6.9), we obtain

∫_0^T [(u''(t), ϕ(t))_H − (u(t), ϕ''(t))_H] dt = (u'(T), ϕ0)_H − (u(T), ϕ1)_H.

0

(2.6.10) Note that .ϕ(s) ∈ D(B) for every .s ∈ [0, T ]. Therefore, according to equality (2.6.3) with .f = ϕ(s), we can write (u (t), ϕ(s))H + (u(t), Bϕ(s))H = 0,

.

t ∈ [0, b).



We set now $t = s$ and then replace $s$ with $t$. This yields

$$(u''(t), \varphi(t))_H = -(u(t), B\varphi(t))_H, \qquad t \in [0, T].$$

By virtue of (2.6.8) with $r = 2$, we have

$$(u(t), \varphi''(t))_H = -(u(t), B\varphi(t))_H, \qquad t \in [0, T].$$

These two equalities imply that the expression on the left-hand side of (2.6.10) vanishes. Consequently,

$$(u'(T), \varphi_0)_H - (u(T), \varphi_1)_H = 0, \qquad \varphi_0, \varphi_1 \in \Phi.$$

Hence, it follows from the denseness of $\Phi$ in $H$ that $u(T) = u'(T) = 0$. Since $T \in (0, b)$ is arbitrary, this yields the required assertion. In the case where $r = 1$ the reasoning is similar; one should only use the following formula of integration by parts:

$$\int_0^T (\alpha'(t), \beta(t))_H \, dt = -\int_0^T (\alpha(t), \beta'(t))_H \, dt + \Big[ (\alpha(t), \beta(t))_H \Big]_0^T, \tag{2.6.11}$$

which holds true for continuously differentiable vector functions $[0, T] \ni t \mapsto \alpha(t), \beta(t) \in H$. In the case where $r$ is arbitrary, one must iterate relation (2.6.11) $r$ times (note that relation (2.6.10) is, in fact, relation (2.6.11) iterated twice). $\square$

V. Let us prove necessity. Let $\tilde{A}$ be self-adjoint and let $E$ be its resolution of the identity. We apply Lemma 2.6.3, setting $r = 1$, $B = (iA)^* = -i\tilde{A}$, and $\Phi = \bigcup_{n=1}^{\infty} E((-n, n))H$. A strong solution of the Cauchy problem (2.6.8), which now has the form

$$\varphi'(t) + i\tilde{A}\varphi(t) = 0, \qquad t \in [0, T], \quad \varphi(T) = \varphi_0,$$

exists and is equal to

$$\varphi(t) = \int_{\mathbb{R}} e^{-i\lambda(t-T)} \, dE(\lambda)\varphi_0, \qquad t \in [0, T] \tag{2.6.12}$$

(since $\varphi_0 \in \Phi$, the integration in (2.6.12) is, in fact, carried out over a finite interval and, therefore, the function $[0, T] \ni t \mapsto \varphi(t)$ is continuously differentiable; it is clear that it solves the problem under consideration). Thus, by virtue of this lemma, Eq. (2.6.4) with the sign "$+$" is uniquely solvable on $[0, b)$. Eq. (2.6.4) with the sign "$-$" is investigated similarly. Finally, we conclude that $B = -(iA)^* = i\tilde{A}$. $\square$
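In finite dimensions, where the resolution of the identity is the eigenprojection family of a Hermitian matrix, the spectral-integral solution (2.6.12) is just a matrix exponential. The following sketch (a numerical illustration only; the matrix and data are hypothetical) verifies the equation and the terminal condition:

```python
import numpy as np

# Finite-dimensional illustration of (2.6.12): for a Hermitian matrix A,
# phi(t) = int exp(-i*lambda*(t-T)) dE(lambda) phi0 reduces to
# U diag(exp(-i*lam*(t-T))) U* phi0, and it solves
# phi'(t) + i*A*phi(t) = 0 with phi(T) = phi0.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                 # Hermitian matrix (bounded operator)
lam, U = np.linalg.eigh(A)               # spectral decomposition

def phi(t, T, phi0):
    # the "integral over dE" collapses to a finite sum over eigenvalues
    return U @ (np.exp(-1j * lam * (t - T)) * (U.conj().T @ phi0))

phi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
T, t, h = 1.0, 0.3, 1e-6
dphi = (phi(t + h, T, phi0) - phi(t - h, T, phi0)) / (2 * h)   # numerical phi'
residual = dphi + 1j * (A @ phi(t, T, phi0))                   # phi' + i*A*phi
assert np.allclose(phi(T, T, phi0), phi0)                      # terminal condition
assert np.linalg.norm(residual) < 1e-6
```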

2.6 Self-Adjointness Criteria


The Hyperbolic Criterion of Self-Adjointness The "hyperbolic" criterion of self-adjointness is formulated in the form of two theorems.

Theorem 2.6.4 Let $A$ be an Hermitian operator acting in $H$. For its essential self-adjointness, it is necessary that the Cauchy problem for the equation

$$\frac{d^2 u}{dt^2}(t) + A^* u(t) = 0, \qquad t \in [0, b) \tag{2.6.13}$$

be uniquely solvable on $[0, b)$ for all $b \in (0, \infty]$ (in the sense of strong solutions), and it is sufficient that $A$ be semi-bounded below and that the indicated Cauchy problem be uniquely solvable in the same sense for some $b > 0$.

Proof Sufficiency. Suppose that $\tilde{A}$ is not self-adjoint. Then $A$ has two different self-adjoint extensions $A_1$ and $A_2$ in $H$ bounded below by a number $c > -\infty$. Let $E_1$ and $E_2$ be the corresponding resolutions of the identity. For every $g \in D(A) \subseteq D(A_1)$, the integral $\int_{\mathbb{R}} \lambda^2 \, d(E_1(\lambda)g, g)_H$ is convergent and, therefore, the vector function

$$[0, \infty) \ni t \mapsto u_1(t) = \int_c^{\infty} \cos(\sqrt{\lambda}\, t) \, dE_1(\lambda) g \tag{2.6.14}$$

is twice strongly continuously differentiable. As in the proof of Theorem 2.6.1, one can easily show that it is a strong solution of Eq. (2.6.13) on $[0, \infty)$. For this purpose, one must check the validity of the corresponding weak equality of the form (2.6.3). In addition, we have $u_1(0) = g$ and $u_1'(0) = 0$. Similarly, by changing $E_1$ to $E_2$ in (2.6.14), we construct the function $u_2(t)$. The difference $u(t) = u_1(t) - u_2(t)$ is also a strong solution of Eq. (2.6.13) on $[0, \infty)$ such that $u(0) = u'(0) = 0$. In view of the assumed uniqueness of strong solutions of the Cauchy problem, $u(t) = 0$ for $t \geq 0$. Taking the scalar product of this equality with $h \in H$, we obtain

$$\int_c^{\infty} \cos(\sqrt{\lambda}\, t) \, d((E_1(\lambda) - E_2(\lambda))g, h)_H = 0, \qquad t \in [0, \infty).$$

Since the charge $\omega(\cdot) = ((E_1(\cdot) - E_2(\cdot))g, h)_H$ is uniquely determined in terms of its cosine Fourier-Stieltjes transform, this makes it possible to assert that $E_1 = E_2$, which leads to a contradiction.

Necessity. Let $\tilde{A}$ be self-adjoint and let $E$ be its resolution of the identity. Let us apply Lemma 2.6.3, setting $r = 2$, $B = A^* = \tilde{A}$, and $\Phi = \bigcup_{n=1}^{\infty} E((-n, n))H$. A strong solution of the Cauchy problem (2.6.8), which now has the form

$$\varphi''(t) + \tilde{A}\varphi(t) = 0, \qquad t \in [0, T], \quad \varphi(T) = \varphi_0, \ \varphi'(T) = \varphi_1,$$


exists and is equal to

$$\varphi(t) = \int_{\mathbb{R}} \cos(\sqrt{\lambda}\,(t - T)) \, dE(\lambda)\varphi_0 + \int_{\mathbb{R}} \frac{1}{\sqrt{\lambda}} \sin(\sqrt{\lambda}\,(t - T)) \, dE(\lambda)\varphi_1,$$

where, as in (2.6.12), the integration is, in fact, carried out over a finite segment. Therefore, according to Lemma 2.6.3, we conclude that the Cauchy problem for (2.6.13) is uniquely solvable on $[0, b)$ for all $b \in (0, \infty]$ in the sense of strong solutions. $\square$

It is convenient to use this theorem in a simple combination with Lemma 2.6.3.

Theorem 2.6.5 Let $A$ be an Hermitian operator acting in $H$ and semi-bounded below. Assume that there exists a linear set $\Phi \subseteq H$ dense in $H$ and such that the Cauchy problem

$$\frac{d^2 \varphi}{dt^2}(t) + A\varphi(t) = 0, \qquad t \in [0, T], \quad \varphi(T) = \varphi_0, \ \varphi'(T) = \varphi_1 \tag{2.6.15}$$

has a strong solution for some $b > 0$ and all $T \in (0, b)$ and $\varphi_0, \varphi_1 \in \Phi$. Then the operator $A$ is essentially self-adjoint.

Proof By virtue of Lemma 2.6.3, it follows from the condition of the theorem that the Cauchy problem for Eq. (2.6.13) has a unique strong solution on $[0, b)$. But then, according to Theorem 2.6.4, the operator $\tilde{A}$ is self-adjoint. $\square$

The Parabolic Criterion of Self-Adjointness

Theorem 2.6.6 Let $A$ be an Hermitian operator acting in $H$. For its essential self-adjointness, it is necessary that the Cauchy problem for the equation

$$\frac{du}{dt}(t) + A^* u(t) = 0, \qquad t \in [0, \infty) \tag{2.6.16}$$

be uniquely solvable in the sense of strong solutions. For an operator semi-bounded below, this is also a sufficient condition.

Proof Necessity. As in Theorems 2.6.1 and 2.6.4, it is proved by using Lemma 2.6.3 with $r = 1$, $B = \tilde{A}$, and $\Phi = \bigcup_{n=1}^{\infty} E((-n, n))H$, where $E$ is the resolution of the identity of $\tilde{A}$. A strong solution of the corresponding Cauchy problem, which now has the form

$$\varphi'(t) - \tilde{A}\varphi(t) = 0, \qquad t \in [0, T], \quad \varphi(T) = \varphi_0 \in \Phi,$$

2.6 Self-Adjointness Criteria

57

exists and is equal to

$$\varphi(t) = \int_{\mathbb{R}} e^{\lambda(t-T)} \, dE(\lambda)\varphi_0, \qquad t \in [0, T].$$
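In finite dimensions both spectral formulas, $e^{\lambda(t-T)}$ for the parabolic problem above and $\cos(\sqrt{\lambda}\,t)$ in (2.6.14), become functions of a symmetric positive definite matrix. A quick numerical check (hypothetical data, for illustration only):

```python
import numpy as np

# For A = U diag(lam) U^T symmetric positive definite:
#   phi(s) = U exp(lam*(s-T)) U^T phi0  solves  phi' - A*phi = 0, phi(T) = phi0;
#   u(s)   = U cos(sqrt(lam)*s) U^T g   solves  u'' + A*u = 0, u(0) = g, u'(0) = 0.
rng = np.random.default_rng(7)
M = rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)                  # symmetric, positive definite
lam, U = np.linalg.eigh(A)
phi0, g = rng.normal(size=4), rng.normal(size=4)
T = 1.0

def phi(s):
    return U @ (np.exp(lam * (s - T)) * (U.T @ phi0))

def u(s):
    return U @ (np.cos(np.sqrt(lam) * s) * (U.T @ g))

t, h = 0.4, 1e-5
dphi = (phi(t + h) - phi(t - h)) / (2 * h)        # numerical first derivative
d2u = (u(t + h) - 2 * u(t) + u(t - h)) / h**2     # numerical second derivative
assert np.linalg.norm(dphi - A @ phi(t)) < 1e-5   # phi' = A*phi
assert np.linalg.norm(d2u + A @ u(t)) < 1e-4      # u'' = -A*u
assert np.allclose(phi(T), phi0) and np.allclose(u(0), g)
```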

Sufficiency. It is established as in Theorems 2.6.1 and 2.6.4. Suppose that $\tilde{A}$ is not self-adjoint. Let $A_1$ and $A_2$ be two different self-adjoint extensions of $A$ bounded below by a number $c > -\infty$ and let $E_1$ and $E_2$ be the corresponding resolutions of the identity. The vector function

$$[0, \infty) \ni t \mapsto u_1(t) = \int_c^{\infty} e^{-\lambda t} \, dE_1(\lambda) g, \qquad g \in D(A) \subseteq D(A_1) \tag{2.6.17}$$

is strongly continuously differentiable and $u_1(t) \in D(A_1) \subseteq D(A^*)$. The derivative $u_1'(t)$ is expressed by integral (2.6.17) with the factor $-\lambda$ before $e^{-\lambda t}$. The expression $A^* u_1(t) = A_1 u_1(t)$ also has the same form. Thus, (2.6.17) is a strong solution of Eq. (2.6.16) with $u_1(0) = g$. Further, by the same procedure, we construct $u_2(t)$ in terms of $E_2$ and consider the difference $u(t) = u_1(t) - u_2(t)$. For this difference, we have $u(0) = 0$ and, therefore, in view of the assumed uniqueness of strong solutions, $u(t) = 0$, whence

$$\int_c^{\infty} e^{-\lambda t} \, d((E_1(\lambda) - E_2(\lambda))g, h)_H = 0, \qquad t \in [0, \infty). \tag{2.6.18}$$

This relation means that the Laplace-Stieltjes transform of the charge appearing in (2.6.18) is equal to zero, but then the charge is also identically equal to zero. This gives the equality $E_1 = E_2$, which leads to a contradiction. $\square$

The Quasi-Analytic Criterion of Self-Adjointness First, we recall some facts from the theory of quasi-analytic functions. Let $[a, b] \subset \mathbb{R}$ be a finite segment and let $(m_n)_{n=1}^{\infty}$ be a fixed sequence of positive numbers. The class $C\{m_n\}$ is defined as the linear set of all functions $f \in C^{\infty}([a, b])$ such that

$$|(D^n f)(t)| \leq K_f^n m_n, \qquad t \in [a, b], \ n \in \mathbb{N}, \tag{2.6.19}$$

where $K_f$ is a constant that depends on $f$. As is known, the class of analytic functions defined on $[a, b]$ is characterized by the estimates (2.6.19) with $m_n = n!$. It is clear that the class $C\{n!\}$ is characterized by the following property: if $f \in C\{n!\}$ is such that $(D^n f)(t_0) = 0$ for all $n \in \mathbb{N}$ and $f(t_0) = 0$ at a fixed point $t_0 \in [a, b]$, then $f(t) = 0$ for $t \in [a, b]$. In order to generalize this situation, we introduce the following definition.


The class $C\{m_n\}$ is called quasi-analytic if the fact that a function $f \in C\{m_n\}$ satisfies the equalities $(D^n f)(t_0) = 0$, $n \in \mathbb{N}$, and $f(t_0) = 0$ at a fixed point $t_0 \in [a, b]$ implies $f(t) = 0$, $t \in [a, b]$. We recall the well-known Denjoy-Carleman theorem: the class $C\{m_n\}$ is quasi-analytic if and only if

$$\sum_{n=1}^{\infty} \left( \inf \left\{ m_k^{1/k} \mid k \geq n \right\} \right)^{-1} = \infty. \tag{2.6.20}$$
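A numerical impression of the Denjoy-Carleman condition (2.6.20) (illustration only): for $m_n = n!$ the terms behave like $e/n$, so the partial sums keep growing, while for $m_n = (n!)^2$ the series converges.

```python
import math

# Partial sums of the series in (2.6.20), computed with logarithms (lgamma)
# to avoid forming huge factorials:
#   m_n = n!      -> terms ~ e/n      (divergent: quasi-analytic class),
#   m_n = (n!)**2 -> terms ~ (e/n)**2 (convergent: not quasi-analytic).
def dc_partial_sum(log_m, N):
    vals = [math.exp(log_m(k) / k) for k in range(1, N + 1)]  # m_k^(1/k)
    tail_inf, cur = [0.0] * N, float("inf")
    for n in range(N, 0, -1):                                 # inf over k >= n
        cur = min(cur, vals[n - 1])
        tail_inf[n - 1] = cur
    return sum(1.0 / t for t in tail_inf)

N = 2000
s_factorial = dc_partial_sum(lambda n: math.lgamma(n + 1), N)
s_square = dc_partial_sum(lambda n: 2 * math.lgamma(n + 1), N)
# the first sum keeps growing with N (roughly like e*log N); the second stalls
assert s_factorial > 12
assert s_square < 6
```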

For example, the class $C\{n^{pn}\}$ is quasi-analytic if and only if $p \leq 1$.

Let $H$ be a Hilbert space and let $A$ be an Hermitian operator in it. A vector $\varphi \in H$ is called quasi-analytic (with respect to $A$) if $\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$ and the class $C\{\|A^n \varphi\|_H\}$ is quasi-analytic.

Lemma 2.6.7 A vector $\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$ is quasi-analytic if and only if

$$\sum_{n=1}^{\infty} \|A^n \varphi\|_H^{-1/n} = \infty. \tag{2.6.21}$$
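Before the proof, a finite-dimensional sanity check (hypothetical matrix, illustration only): for a bounded Hermitian operator the sequence $\|A^n\varphi\|^{1/n}$ is non-decreasing for a unit vector $\varphi$ (the key monotonicity used below) and is bounded by $\|A\|$, so the terms of (2.6.21) are bounded away from zero and the series diverges; every vector is then quasi-analytic.

```python
import numpy as np

# Check that ||A^n phi||^(1/n) is non-decreasing for ||phi|| = 1 and bounded
# by the operator norm of A, so the series (2.6.21) trivially diverges.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = (M + M.T) / 2                        # real symmetric (Hermitian) matrix
phi = rng.normal(size=5)
phi /= np.linalg.norm(phi)               # normalize: ||phi|| = 1

norms, v = [], phi.copy()
for n in range(1, 30):
    v = A @ v                            # v = A^n phi
    norms.append(np.linalg.norm(v) ** (1.0 / n))

op_norm = np.max(np.abs(np.linalg.eigvalsh(A)))
assert all(norms[i] <= norms[i + 1] + 1e-12 for i in range(len(norms) - 1))
assert all(x <= op_norm + 1e-12 for x in norms)
```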

Proof It is clear that $C\{\|A^n \varphi\|_H\} = C\{\|A^n(\lambda\varphi)\|_H\}$, where $\lambda > 0$ is fixed. This implies that it suffices to verify the lemma for a vector $\varphi$ such that $\|\varphi\|_H = 1$. For a vector of this sort, the sequence

$$\left( \|A^n \varphi\|_H^{1/n} \right)_{n=1}^{\infty} \tag{2.6.22}$$

is non-decreasing. Indeed,

$$\|A\varphi\|_H^2 = (A\varphi, A\varphi)_H = (A^2\varphi, \varphi)_H \leq \|A^2\varphi\|_H \|\varphi\|_H,$$

i.e., $\|A\varphi\|_H \leq \|A^2\varphi\|_H^{1/2}$. Assume that the inequality $\|A^n\varphi\|_H^{1/n} \leq \|A^{n+1}\varphi\|_H^{1/(n+1)}$ is already proved and prove that $\|A^{n+1}\varphi\|_H^{1/(n+1)} \leq \|A^{n+2}\varphi\|_H^{1/(n+2)}$, $n \in \mathbb{N}$. In view of the assumed inequality, we get

$$\|A^{n+1}\varphi\|_H^2 = (A^{n+1}\varphi, A^{n+1}\varphi)_H = (A^{n+2}\varphi, A^n\varphi)_H \leq \|A^{n+2}\varphi\|_H \|A^n\varphi\|_H \leq \|A^{n+2}\varphi\|_H \|A^{n+1}\varphi\|_H^{n/(n+1)},$$

whence $\|A^{n+1}\varphi\|_H^{1+1/(n+1)} \leq \|A^{n+2}\varphi\|_H$. Thus, (2.6.22) is a non-decreasing sequence.

Let us apply the Denjoy-Carleman criterion to the class $C\{\|A^n\varphi\|_H\}$, where $\|\varphi\|_H = 1$. Since (2.6.22) is a non-decreasing sequence, we have

$$\inf \left\{ \|A^k\varphi\|_H^{1/k} \mid k \geq n \right\} = \|A^n\varphi\|_H^{1/n}.$$


Therefore, the condition (2.6.20) for the quasi-analyticity of this class, i.e., for the quasi-analyticity of the vector $\varphi$, can be rewritten in the form (2.6.21). $\square$

Theorem 2.6.8 Let $A$ be a closed Hermitian operator acting in $H$. It is self-adjoint if and only if $H$ contains a total set of quasi-analytic vectors.

Proof In one direction, this statement is trivial. Indeed, let $A$ be self-adjoint. Then it suffices to prove the quasi-analyticity of each vector $\varphi$ of the form $\varphi = E((a, b))f$, where $E$ is the resolution of the identity that corresponds to $A$, $a, b \in \mathbb{R}$ $(a < b)$, and $f \in H$. It is obvious that $\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$. Further, we have

$$\|A^n\varphi\|_H^2 = \int_a^b \lambda^{2n} \, d(E(\lambda)f, f)_H \leq c^{2n} \|f\|_H^2, \qquad c = \max(|a|, |b|), \ n \in \mathbb{N}.$$

Therefore, the series (2.6.21) is divergent and, according to Lemma 2.6.7, the vector $\varphi$ is quasi-analytic.

Suppose that $A$ has a total set $M$ of quasi-analytic vectors $\varphi$. Since $A$ is closed, it suffices to prove its essential self-adjointness or, according to Theorem 2.6.1, the uniqueness of strong solutions of the Cauchy problem for Eq. (2.6.4) with $b = \infty$. Let $u(t)$ be a strong solution of the problem

$$\frac{du}{dt}(t) - (\zeta A)^* u(t) = 0, \qquad t \in [0, \infty), \quad u(0) = 0, \tag{2.6.23}$$

where $\zeta = \pm i$. It suffices to establish that $u(t) = 0$ for $t \in [0, T]$ with any $T > 0$. The "weak" equality (2.6.3) for (2.6.23) with a quasi-analytic vector $f = \varphi \in \bigcap_{n=1}^{\infty} D(A^n)$ gives

$$\frac{d}{dt}(u(t), \varphi)_H = \left( \frac{du}{dt}(t), \varphi \right)_H = (u(t), (\zeta A)\varphi)_H, \qquad t \in [0, T].$$

But $(\zeta A)\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$ and, therefore,

$$\frac{d}{dt}(u(t), (\zeta A)\varphi)_H = (u(t), (\zeta A)^2 \varphi)_H, \qquad t \in [0, T],$$

and we continue in this way. This implies that $(u(t), \varphi)_H \in C^{\infty}([0, T])$ and

$$D^n (u(t), \varphi)_H = D^{n-1}(u(t), (\zeta A)\varphi)_H = \cdots = (u(t), (\zeta A)^n \varphi)_H, \qquad t \in [0, T], \ n \in \mathbb{Z}_+. \tag{2.6.24}$$

Since the values of $u(t)$ on $[0, T]$ are bounded, it follows from (2.6.24) that

$$|D^n (u(t), \varphi)_H| \leq c \|(\zeta A)^n \varphi\|_H = c \|A^n \varphi\|_H, \qquad t \in [0, T], \ n \in \mathbb{Z}_+,$$


i.e., the scalar function $[0, T] \ni t \mapsto f(t) = (u(t), \varphi)_H$ belongs to the class $C\{\|A^n\varphi\|_H\}$. Equalities (2.6.24) and $u(0) = 0$ imply that $(D^n f)(0) = 0$, $n \in \mathbb{Z}_+$. Therefore, by virtue of the quasi-analyticity of $C\{\|A^n\varphi\|_H\}$, we get the equality $f(t) = (u(t), \varphi)_H = 0$, $t \in [0, T]$. Since the set $M$ of vectors $\varphi$ is total, we have $u(t) = 0$, $t \in [0, T]$. $\square$

Other Criteria of Self-Adjointness As above, let $A$ be an Hermitian operator acting in a Hilbert space $H$. A vector $\varphi \in H$ is called analytic (with respect to $A$) if $\varphi$ belongs to $\bigcap_{n=1}^{\infty} D(A^n)$ and the series

$$\sum_{n=0}^{\infty} \frac{\|A^n\varphi\|_H}{n!} z^n \tag{2.6.25}$$

has a non-zero radius of convergence. This vector is called entire if this radius is equal to infinity. It is clear that each analytic vector is quasi-analytic, but the converse statement is not true. Note that if $A$ is self-adjoint, then it possesses a total set of entire vectors. Indeed, every vector $\varphi = E((a, b))f$, satisfying the estimate $\|A^n\varphi\|_H \leq c^n \|\varphi\|_H$, $n \in \mathbb{N}$, is entire.

Let us formulate a theorem that makes Theorem 2.6.8 more precise in the case of operators semi-bounded from below. Let $A$ be an Hermitian operator acting in $H$. A vector $\varphi \in H$ is called a Stieltjes vector (with respect to $A$) if $\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$ and the class $C\{\|A^n\varphi\|_H^{1/2}\}$ is quasi-analytic or, in other words,

$$\sum_{n=1}^{\infty} \|A^n\varphi\|_H^{-1/(2n)} = \infty. \tag{2.6.26}$$

The equivalence of the fact that a vector belongs to the class of Stieltjes vectors and condition (2.6.26) follows from the fact that the sequence $(\|A^n\varphi\|_H^{1/(2n)})_{n=1}^{\infty}$, where $\|\varphi\|_H = 1$, is non-decreasing together with (2.6.22). To complete the proof, one should use the Denjoy-Carleman criterion. It is clear that every quasi-analytic vector is a Stieltjes vector but not vice versa. Thus, if we denote the sets of all entire, analytic, quasi-analytic, and Stieltjes vectors with respect to the operator $A$ by $E(A)$, $A(A)$, $Q(A)$, and $S(A)$, respectively, then we get the inclusions

$$E(A) \subseteq A(A) \subseteq Q(A) \subseteq S(A). \tag{2.6.27}$$

Theorem 2.6.9 Let $A$ be a closed Hermitian operator semi-bounded below. If $H$ contains a total set that consists of Stieltjes vectors, then $A$ is self-adjoint. The converse statement is evident by virtue of the already proved assertions and (2.6.27).
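Returning for a moment to analytic and entire vectors: for a bounded operator every vector is entire, since $\|A^n\varphi\|_H \le \|A\|^n \|\varphi\|_H$ makes the series (2.6.25) converge for every $z$. A small numerical look at the coefficients (hypothetical matrix, illustration only):

```python
import numpy as np

# Coefficients ||A^n phi|| / n! of the series (2.6.25) for a matrix normalized
# to operator norm 1: they are dominated by ||phi|| / n!, so they decay
# super-geometrically and the radius of convergence is infinite.
rng = np.random.default_rng(6)
M = rng.normal(size=(5, 5))
A = (M + M.T) / 2
A = A / np.max(np.abs(np.linalg.eigvalsh(A)))    # now ||A|| = 1
phi = rng.normal(size=5)

coef, v, fact = [], phi.copy(), 1.0
for n in range(1, 25):
    v = A @ v                                    # v = A^n phi
    fact *= n
    coef.append(np.linalg.norm(v) / fact)
    assert coef[-1] <= np.linalg.norm(phi) / fact + 1e-12

assert coef[-1] < 1e-10 * (coef[0] + 1.0)        # super-geometric decay
```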


Proof According to Theorem 2.6.4, it suffices to prove the uniqueness of strong solutions of the Cauchy problem for Eq. (2.6.13) with $b = \infty$. Let $u(t)$ be a strong solution of this problem such that $u(0) = u'(0) = 0$. Let us show that $u(t) = 0$, $t \in [0, T]$, for all $T > 0$. Assume that $M$ is a total set of Stieltjes vectors $\varphi$ appearing in the condition of the theorem. We set $[0, T] \ni t \mapsto f(t) = (u(t), \varphi)_H$, where $\varphi \in M$. It follows from the relation (2.6.3), written for (2.6.13), that $f \in C^2([0, T])$ and

$$\frac{d^2 f}{dt^2}(t) = -(u(t), A\varphi)_H, \qquad t \in [0, T].$$

Since $A\varphi \in \bigcap_{n=1}^{\infty} D(A^n)$, by the same reasoning we conclude that

$$(u(t), A\varphi)_H \in C^2([0, T]), \qquad \frac{d^2}{dt^2}(u(t), A\varphi)_H = -(u(t), A^2\varphi)_H, \qquad t \in [0, T],$$

etc. As a result, we get $f \in C^{\infty}([0, T])$ and

$$(D^{2k} f)(t) = D^{2k}(u(t), \varphi)_H = -D^{2(k-1)}(u(t), A\varphi)_H = \cdots = (-1)^k (u(t), A^k\varphi)_H, \qquad t \in [0, T], \ k \in \mathbb{Z}_+. \tag{2.6.28}$$

One can also deduce a similar equality for odd derivatives. Indeed, by differentiating (2.6.28), we obtain

$$(D^{2k+1} f)(t) = (-1)^k (u'(t), A^k\varphi)_H, \qquad t \in [0, T], \ k \in \mathbb{Z}_+. \tag{2.6.29}$$

The values of $u(t)$ and $u'(t)$ on $[0, T]$ are bounded; therefore, it follows from (2.6.28), (2.6.29), the fact that the operator is Hermitian, and the Cauchy-Bunyakovsky inequality that

$$|(D^{2k} f)(t)|, \ |(D^{2k+1} f)(t)| \leq c \|A^k\varphi\|_H \leq c \|\varphi\|_H^{1/2} \|A^{2k}\varphi\|_H^{1/2}, \qquad t \in [0, T], \ k \in \mathbb{Z}_+,$$

i.e., $f \in C\{m_n\}$, where $(m_n)_{n=1}^{\infty}$ is the sequence

$$\|\varphi\|_H^{1/2}, \ \|\varphi\|_H^{1/2}, \ \|A\varphi\|_H^{1/2}, \ \|A\varphi\|_H^{1/2}, \ \|A^2\varphi\|_H^{1/2}, \ \|A^2\varphi\|_H^{1/2}, \dots .$$

The class $C\{m_n\}$ will not be changed if we normalize $\varphi$. But then, as already mentioned, the sequence $(\|A^n\varphi\|_H^{1/(2n)})_{n=1}^{\infty}$ is non-decreasing. This means that the Denjoy-Carleman condition (2.6.20) for the sequence under consideration can be written in the form (2.6.26). Hence, the class $C\{m_n\}$ is quasi-analytic.


Furthermore, relations (2.6.28), (2.6.29), and the condition $u(0) = u'(0) = 0$ imply that $(D^n f)(0) = 0$, $n \in \mathbb{Z}_+$. Therefore, $(u(t), \varphi)_H = f(t) = 0$, $t \in [0, T]$. In view of the fact that $M$ is total, we conclude that $u(t) = 0$ $(t \in [0, T])$. $\square$

Commutativity Criteria of Operators Here we consider another application of quasi-analytic vectors. Let $A_1$ and $A_2$ be two bounded self-adjoint operators. They commute mutually, $A_1 A_2 = A_2 A_1$, if and only if their resolutions of the identity $E_1$ and $E_2$ commute mutually. In what follows, we omit the word "mutually" when speaking of commutativity, since no other kind of commutativity is intended. In the case of unbounded $A_1$ and $A_2$, their formal commutativity, that is, commutativity on a dense set of vectors from $H$, does not imply the commutativity of $E_1$ and $E_2$. Additional conditions under which $E_1$ and $E_2$ commute are established by the following theorem.

Theorem 2.6.10 Let $A_1$ and $A_2$ be two Hermitian operators with domains $D(A_1)$ and $D(A_2)$ in $H$, and let $D$ be a linear set such that $D \subseteq D(A_1) \cap D(A_2)$. Suppose that our operators commute on $D$, i.e., $A_1 D \subseteq D(A_2)$, $A_2 D \subseteq D(A_1)$, and $A_1 A_2 f = A_2 A_1 f$, $f \in D$. Let, in addition, $A_1$, $A_2$, and the restriction $A_1 \upharpoonright ((A_2 - zI)D)$ (for some nonreal $z$) have total sets of quasi-analytic vectors. Then the operators $A_1$ and $A_2$ are both essentially self-adjoint and their resolutions of the identity commute.

Proof By virtue of Theorem 2.6.8, the operators $\tilde{A}_1$ and $\tilde{A}_2$ are both self-adjoint. Let $R_z(\tilde{A}_1)$ and $R_z(\tilde{A}_2)$ be their resolvents, respectively. By Theorem 2.3.17, to prove the commutativity of the resolutions of the identity of $\tilde{A}_1$ and $\tilde{A}_2$, it is enough to check the commutativity of $R_z(\tilde{A}_1)$ and $R_z(\tilde{A}_2)$. For $f \in D$, due to the commutativity of $A_1$ and $A_2$, we obtain

$$R_z(\tilde{A}_1) R_z(\tilde{A}_2)(A_1 - zI)(A_2 - zI)f = R_z(\tilde{A}_1) R_z(\tilde{A}_2)(A_2 - zI)(A_1 - zI)f = f = R_z(\tilde{A}_2) R_z(\tilde{A}_1)(A_1 - zI)(A_2 - zI)f.$$

Thus, to prove that $R_z(\tilde{A}_1)$ commutes with $R_z(\tilde{A}_2)$, it suffices to show that $(A_1 - zI)(A_2 - zI)D$ is dense in $H$. But this set coincides with the range $R(A_1 \upharpoonright ((A_2 - zI)D) - zI)$, and the operator $A_1 \upharpoonright ((A_2 - zI)D)$ is essentially self-adjoint by the same Theorem 2.6.8. Therefore, the specified range is dense in $H$. $\square$

The following theorem from the work [225] is also used.

Theorem 2.6.11 Let $A_1$ and $A_2$ be two Hermitian operators defined on $D(A_1)$ and $D(A_2)$ in a Hilbert space $H$, and let $D$ be a linear set dense in $H$ such that $D$ is contained in the domains of the operators $A_1$, $A_2$, $A_1^2$, $A_1 A_2$, $A_2 A_1$, and $A_2^2$, and $A_1 A_2 f = A_2 A_1 f$ for all $f \in D$.


If the restriction of $A_1^2 + A_2^2$ to $D$ is an essentially self-adjoint operator, then $A_1$ and $A_2$ are essentially self-adjoint and their closures commute in the strong resolvent sense.

The Self-Adjointness of Perturbed Operators Above, we found out when an Hermitian operator is self-adjoint. We now examine the self-adjointness of $A + B$ when $A$ is self-adjoint and $B$ is Hermitian. Let $A$ be a self-adjoint operator in a Hilbert space $H$ and let $B$ be a bounded self-adjoint operator. Then the operator $A + B$ is self-adjoint. However, in the case of an unbounded perturbation $B$, the situation becomes much more complicated. First, we introduce the following definition. Consider operators $A$ and $B$ acting in $H$ and such that $D(B) \supseteq D(A)$. We say that $B$ is subordinated to $A$ if it satisfies the inequality

$$\|Bf\|_H \leq p \|Af\|_H + q \|f\|_H, \qquad f \in D(A), \tag{2.6.30}$$

with constants $p, q \geq 0$ (the constants of the subordination). If, for any $p > 0$, there exists $q = q(p)$ such that inequality (2.6.30) is satisfied, then the operator $B$ is called infinitely small as compared to $A$.

Theorem 2.6.12 (Rellich-Kato) Consider a self-adjoint operator $A$ and an Hermitian operator $B$ acting in $H$ and such that $D(B) \supseteq D(A)$. If $B$ is subordinated to $A$ with a constant of the subordination $p \in [0, 1)$, then the operator $A + B$ (with $D(A + B) = D(A)$) is self-adjoint.
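A finite-dimensional illustration of subordination (hypothetical matrices): if $B = pA + C$ with a bounded Hermitian $C$, then (2.6.30) holds with the constants $(p, \|C\|)$; for $p < 1$ the Rellich-Kato theorem applies (in finite dimensions the conclusion is, of course, automatic).

```python
import numpy as np

# If B = p*A + C, then ||B f|| <= p*||A f|| + ||C||*||f|| by the triangle
# inequality, i.e., B is subordinated to A with constants (p, q) = (p, ||C||).
rng = np.random.default_rng(2)
n = 6
M1 = rng.normal(size=(n, n)); A = (M1 + M1.T) / 2   # self-adjoint A
M2 = rng.normal(size=(n, n)); C = (M2 + M2.T) / 2   # bounded Hermitian C
p = 0.5
B = p * A + C                                       # Hermitian perturbation
q = np.linalg.norm(C, 2)                            # spectral (operator) norm

for _ in range(100):
    f = rng.normal(size=n)
    lhs = np.linalg.norm(B @ f)
    rhs = p * np.linalg.norm(A @ f) + q * np.linalg.norm(f)
    assert lhs <= rhs + 1e-10
```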

2.7 Rigged Spaces

Let $H_0$ be a Hilbert space with a scalar product $(\cdot, \cdot)_{H_0}$ and a norm $\|\cdot\|_{H_0}$, and let $f$ and $g$ be its elements. Assume that a linear set $H_+$, which is also a Hilbert space with respect to a new scalar product $(\cdot, \cdot)_{H_+}$, is dense in $H_0$, and let $\|\cdot\|_{H_+}$ be a norm in $H_+$ such that

$$\|u\|_{H_0} \leq \|u\|_{H_+}, \qquad u \in H_+. \tag{2.7.1}$$

We denote elements of $H_+$ by $u, v$ and call this space positive. Every element $f \in H_0$ generates an antilinear continuous functional $l_f$ on $H_+$ by the formula

$$l_f(u) = (f, u)_{H_0}, \qquad u \in H_+;$$

its continuity follows from the estimate

$$|l_f(u)| = |(f, u)_{H_0}| \leq \|f\|_{H_0} \|u\|_{H_0} \leq \|f\|_{H_0} \|u\|_{H_+}. \tag{2.7.2}$$


We now introduce a new norm $\|\cdot\|_{H_-}$ in $H_0$ by taking the norm of the functional $l_f$ that corresponds to an element $f$ as the norm of this element, i.e.,

$$\|f\|_{H_-} = \|l_f\| = \sup \left\{ \frac{|(f, u)_{H_0}|}{\|u\|_{H_+}} \;\Big|\; u \in H_+ \right\}. \tag{2.7.3}$$

We must check that if $\|f\|_{H_-} = 0$, then $f = 0$; all the other properties of a norm are evident. If $\|f\|_{H_-} = 0$, then we have $(f, u)_{H_0} = 0$ for all $u \in H_+$. Since $H_+$ is dense in $H_0$, this yields $f = 0$. By completing the space $H_0$ with respect to the norm (2.7.3), we obtain a linear normed space denoted by $H_-$, which is called a space with negative norm, or a negative space. Thus, we have constructed the chain of spaces

$$H_- \supseteq H_0 \supseteq H_+ \tag{2.7.4}$$

with negative, zero, and positive norms, respectively. Their elements are denoted by $\alpha, \beta, \dots \in H_-$, $f, g, \dots \in H_0$, and $u, v, \dots \in H_+$. Sometimes they are called generalized, ordinary, and smooth vectors, respectively. We say that (2.7.4) is the (Hilbert) rigging (chain) of the spaces $H_0$, $H_+$, and $H_-$. We also say that the space $H_0$ is equipped by the spaces $H_+$ and $H_-$.

Since $H_0 \ni f \mapsto l_f \in (H_+)'$ is a one-to-one linear mapping, it is easy to show that $H_-$ can be regarded as a subset of the dual space of antilinear functionals over $H_+$, namely, $H_- \subseteq (H_+)'$. Therefore, the expression $\alpha(u)$ is meaningful. As in the case of the form $(\alpha, u)_{L^2(\mathbb{R}^N)}$ in the Sobolev-Schwartz theory of generalized functions, we denote this expression by $(\alpha, u)_{H_0} = \overline{(u, \alpha)_{H_0}}$. The form

$$H_- \times H_+ \ni \alpha, u \mapsto (\alpha, u)_{H_0} \in \mathbb{C} \tag{2.7.5}$$

is an extension of the form

$$H_+ \times H_+ \ni v, u \mapsto (v, u)_{H_0} \in \mathbb{C}$$

to $H_- \times H_+$ by continuity. Clearly, the Cauchy-Bunyakovsky inequality admits the generalization

$$|(\alpha, u)_{H_0}| \leq \|\alpha\|_{H_-} \|u\|_{H_+}, \qquad \alpha \in H_-, \ u \in H_+. \tag{2.7.6}$$

Let us prove several simple but important facts.

Theorem 2.7.1 The negative space $H_-$ is a Hilbert space. In further studies, the construction of the scalar product of $H_-$ will be important.


Proof Let us construct a scalar product in $H_-$. Consider the bilinear form

$$H_0 \times H_+ \ni f, u \mapsto b(f, u) = (f, u)_{H_0} \in \mathbb{C}. \tag{2.7.7}$$

It is continuous. Indeed,

$$|b(f, u)| \leq \|f\|_{H_0} \|u\|_{H_0} \leq \|f\|_{H_0} \|u\|_{H_+}.$$

Therefore, it is representable in the form

$$b(f, u) = (f, Au)_{H_0} = (A^* f, u)_{H_+},$$

where $A : H_+ \to H_0$ and $A^* : H_0 \to H_+$ are mutually adjoint continuous operators. According to (2.7.7), the operator $A$ is equal to the imbedding operator $O : H_+ \to H_0$. Denote $I = O^* : H_0 \to H_+$. Thus,

$$(f, u)_{H_0} = (f, Ou)_{H_0} = (If, u)_{H_+}, \qquad f \in H_0, \ u \in H_+. \tag{2.7.8}$$

In $H_0$ we introduce a quasi-scalar product

$$(f, g)_{H_-} = (If, Ig)_{H_+} = (f, Ig)_{H_0} = (If, g)_{H_0}, \qquad f, g \in H_0. \tag{2.7.9}$$

According to (2.7.3), (2.7.8), and (2.7.9), we have

$$\|f\|_{H_-} = \sup \left\{ \frac{|(f, u)_{H_0}|}{\|u\|_{H_+}} \;\Big|\; u \in H_+ \right\} = \sup \left\{ \frac{|(If, u)_{H_+}|}{\|u\|_{H_+}} \;\Big|\; u \in H_+ \right\} = \|If\|_{H_+} = \sqrt{(f, f)_{H_-}}, \qquad f \in H_0.$$

Since $\|\cdot\|_{H_-}$ is a norm in $H_0$, the relation (2.7.9) actually determines not just a quasi-scalar but a scalar product. As a result of completion, this scalar product becomes meaningful not only in $H_0$ but also in $H_-$, and $H_-$ is transformed into a Hilbert space. $\square$

Hence, $H_-$ is now equipped with the scalar product

$$(\alpha, \beta)_{H_-}, \qquad \|\alpha\|_{H_-} = \sqrt{(\alpha, \alpha)_{H_-}}, \qquad \alpha, \beta \in H_-.$$

Since $\|I\| = \|O\| = 1$, we have $\|f\|_{H_-} = \|If\|_{H_+} \leq \|f\|_{H_0}$, $f \in H_0$. The first equality in (2.7.9) indicates that $I$ is an isometric operator acting from $H_-$ into $H_+$ defined on a dense set in the space $H_-$. Its extension by continuity is an isometric operator $\mathbf{I} : H_- \to H_+$ acting from the whole $H_-$ into $H_+$; $I = \mathbf{I} \upharpoonright H_0$. It is easy to see that $\mathbf{I}$ is an isometry between the whole $H_-$ and the whole $H_+$ (i.e., $R(\mathbf{I}) = H_+$).


Indeed, $R(I)$ is dense in $H_+$: if $u \in H_+$ is such that $u \perp R(I)$ in $H_+$, then, by virtue of (2.7.8), for any $f \in H_0$ we have

$$0 = (\mathbf{I}f, u)_{H_+} = (If, u)_{H_+} = (f, u)_{H_0},$$

whence we conclude that $u = 0$. Moreover, $R(\mathbf{I})$ is closed in $H_+$. Therefore, $R(\mathbf{I}) = H_+$. Further, we have

$$(\alpha, u)_{H_0} = (\mathbf{I}\alpha, u)_{H_+}, \qquad \alpha \in H_-, \ u \in H_+. \tag{2.7.10}$$

Indeed, assume that $H_0 \ni f_n \to \alpha$ as $n \to \infty$ in $H_-$. Then, by virtue of (2.7.6), (2.7.8), and the continuity of $\mathbf{I}$, we obtain

$$(\alpha, u)_{H_0} = \lim_{n \to \infty} (f_n, u)_{H_0} = \lim_{n \to \infty} (If_n, u)_{H_+} = (\mathbf{I}\alpha, u)_{H_+}.$$

Theorem 2.7.2 The equality

$$H_- = (H_+)'$$

holds true, i.e., the negative space can be regarded as the space of antilinear functionals dual to the positive space.

Proof It suffices to show that every functional $l \in (H_+)'$ can be represented in the form $l(u) = (\alpha, u)_{H_0}$, $u \in H_+$, with some $\alpha \in H_-$. According to the Riesz theorem, there exists $a \in H_+$ such that $l(u) = (a, u)_{H_+}$, $u \in H_+$. We set $\alpha = \mathbf{I}^{-1} a \in H_-$. Since $R(\mathbf{I}) = H_+$, in view of (2.7.10) we obtain

$$l(u) = (a, u)_{H_+} = (\mathbf{I} \cdot \mathbf{I}^{-1} a, u)_{H_+} = (\mathbf{I}\alpha, u)_{H_+} = (\alpha, u)_{H_0}, \qquad u \in H_+. \qquad \square$$
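A finite-dimensional model of the chain (2.7.4) makes the negative norm concrete (hypothetical weights, illustration only): take $H_0 = \mathbb{R}^n$ and $(u, v)_{H_+} = (Du, Dv)_{H_0}$ for a diagonal $D$ with entries $\ge 1$; then the supremum in (2.7.3) is attained and equals $\|D^{-1}f\|$, i.e., the weights simply invert on the negative side.

```python
import numpy as np

# Negative norm (2.7.3) in a weighted model: ||f||_- = sup |(f,u)| / ||D u||.
# Substituting v = D u shows the sup equals ||D^{-1} f||, attained at u = D^{-2} f.
rng = np.random.default_rng(3)
n = 5
w = 1.0 + 3.0 * rng.random(n)            # diagonal weights >= 1
D = np.diag(w)
f = rng.normal(size=n)

neg_norm = np.linalg.norm(f / w)         # ||D^{-1} f||

best = 0.0
for _ in range(2000):                    # Monte-Carlo lower bounds for the sup
    u = rng.normal(size=n)
    best = max(best, abs(f @ u) / np.linalg.norm(D @ u))
u_star = f / w**2                        # the exact maximizer u = D^{-2} f
best = max(best, abs(f @ u_star) / np.linalg.norm(D @ u_star))
assert abs(best - neg_norm) < 1e-9
```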

We emphasize that the identification of the dual space .(H+ ) with .H+ (the classical identification) and with .H− depends on the form of a functional .l ∈ (H+ ) , .∀u ∈ H+ . It can be written in the form .l (u) = (a, u)H+ or in the form .l(u) = (α, u)H0 , .α ∈ H− , a ∈ H+ . In the second case, we say that the spaces .H+ and .H− are coupled by a form .( · , · )H0 of the type (2.7.5). Note that all results discussed above admit a natural generalization to the case of real spaces .H0 , .H+ , and .H− . The operator .I can be decomposed into a product of two operators. Theorem 2.7.3 The isometry .I : H− → H+ admits a decomposition into a product of two isometries, namely I = J J,

(2.7.11)

.

J : H− → H0 ,

J : H0 → H+ ,

OJ = J  H0 .

2.7 Rigged Spaces

67

Proof Let .A = OI : H0 → H0 . This operator is bounded and non-negative. Indeed, according to (2.7.8), we have (Af, f )H0 = (OIf, f )H0 = (If, If )H+ ≥ 0,

.

f ∈ H0 .

√ We set .B = A : H0 → H0 and consider this operator as acting from a dense set .H0 in the space .H− . In this case, it is isometric, (Bf, Bg)H0 = (B 2 f, g)H0 = (If, Ig)H+ = (f, g)H− ,

.

f, g ∈ H0

and its extension by continuity is an isometric operator .J : H− → H0 . Let us prove that .R(J) = H0 . It suffices to show that the fact that .f ⊥ R(J) in .H0 implies that .f = 0. For an arbitrary .g ∈ H0 , we have 0 = (Jg, f )H0 = (Bg, f )H0 = (g, Bf )H0 ,

.

whence .Bf = 0. Then .OIf = B 2 f = 0, i.e., .If = 0 and, hence, .f = 0. We now show that .R(B) ⊆ H+ . It suffices to prove the equality BJ = OI.

(2.7.12)

.

For .f ∈ H0 , we have .BJf = B 2 f = OIf = OIf . Since .H0 is dense in .H− as a set and the operator .J : H− → H0 , .B : H0 → H0 , and .OI : H− → H0 are continuous, then this yields (2.7.12). Denote by J the operator B regarded as an operator from .H0 into .H+ . Then (2.7.12) implies .J J = I, and it remains to prove that J is an isometric operator and .R(J ) = H+ . But this follows from the equality .J = IJ−1 because −1 : H → H and .I : H → H are isometries. .J ! 0 − − + Let us write the most important relations connected with the rigging of the Hilbert space .H0 by the spaces .H+ and .H− constructed above (Fig. 2.1). Thus, we have

I = O∗ O H−



J

J I Fig. 2.1 Hilbert equipment diagram



H0

H+

68

2 Some Aspects of the Spectral Theory of Unbounded Operators

I = O∗ H− ⊇H0 ⊇ H+ · H− ≤ · H0 ≤ · H+ , OJ = J  H0 ,

.

I = J J,

(2.7.13)

(α, u)H0 = (Iα,u)H+ = (α, I−1 u)H− , (Iα, β)H0 = (α, Iβ)H0 ,

(Jα, f )H0 = (α, Jf )H0 ,

α, β ∈ H− ; f ∈ H0 ;

u ∈ H+

(the last two equalities in (2.7.13) are obtained by setting, respectively .u = Iβ and u = Jf ). Note that, instead of (2.7.1) one can require the validity of the inequality

.

u H0 ≤ c u H+ ,

.

u ∈ H+

with some .c > 0; by a simple renormalization of .H+ , this case is reduced to (2.7.1). Let us introduce an important definition. The rigging (2.7.4) is called quasi-nuclear if the imbedding operator .O : H+ → H0 is quasi-nuclear, i.e., a Hilbert-Schmidt operator. Operators in Chain In this subsection, we present another procedure for constructing chain (2.7.4) and generalize the notion of adjointness. Relations (2.7.13) imply that

.

(u, v)H+ =(J −1 u, J −1 v)H0 , (α, β)H− = (Jα, Jβ)H0 ,

u, v ∈ H+ , α, β ∈ H− .

(2.7.14)

These equalities indicate that it is possible to construct riggings of the space .H0 in terms of a certain operator similar to J or .J −1 acting between this spaces and (that is more convenient). Consider a closed operator D acting in .H0 with a dense domain .D(D) in .H0 and such that Du H0 ≥ u H0 ,

.

u ∈ D(D).

(2.7.15)

By virtue of the closeness of D, the set .D(D) is a Hilbert space with the scalar product (u, v)H+ = (Du, Dv)H0 ,

.

u, v ∈ D(D).

(2.7.16)

This set is regarded as a positive space .H+ and we construct the corresponding Hilbert space .H− dual to .H+ with respect to .H0 according to the procedure described above.

2.7 Rigged Spaces

69

Let J be an isometric operator corresponding to the chain .H− ⊇ H0 ⊇ H+ just constructed and let .(J −1 )H0 be the operator .J −1 : H+ → H0 regarded as an operator in .H0 with the domain .H+ . It follows from the proof of Theorem 2.7.3 that −1 ) −1 and, therefore, .(J −1 ) .(J H0 = B H0 is a positive self-adjoint operator in .H0 . By comparing(2.7.14) with (2.7.16), we conclude that . (J −1 )H0 f H0 = Df H0 , −1 ) .f ∈ H+ , i.e., operators .(J H0 and D are metrically equal. This immediately implies that (J −1 )H0 =

.



D ∗ D.

(2.7.17)

The relation (2.7.17) for the operators D and J becomes much simpler if the operator D is self-adjoint. In this case .(J −1 )H0 = D. It follows from (2.7.14) that −1 f, D −1 g) , .f, g ∈ H . The negative space .H is obtained as .(f, g)H− = (D 0 − H0 the completion of .H0 with respect to the last scalar product. Let us generalize the concept of adjointness for continuous operators acting between the spaces of chain (2.7.4) (“adjointness with respect to .H0 ”). Let we have + ∈ L(H , H ) is adjoint to A with respect .A ∈ L(H+ , H− ). Then the operator .A + − to .H0 and it is defined by the equality (Au, v)H0 = (u, A+ v)H0 ,

.

u, v ∈ H+ .

(2.7.18)

It is easy to see that the operator .A+ exists and can be expressed in terms of the ordinary adjoint operator .A∗ ∈ L(H− , H+ ). Thus, by using the relation .(α, u)H0 = (α, I−1 u)H− .(α ∈ H− , u ∈ H+ ), which follows from (2.7.10), we obtain (Au, v)H0 = (Au, I−1 v)H− = (u, A∗ I−1 v)H+ = (u, I−1 A∗ I−1 v)H0

.

(u, v ∈ H+ ), i.e., A+ = I−1 A∗ I−1 ,

.

A∗ : H− → H+ .

(2.7.19)

The concept of adjointness with respect to .H0 can also be introduced for operators acting between the other spaces of chain (2.7.4). Thus, let .A ∈ L(H+ , H0 ). Then .A+ ∈ L(H0 , H− ) is defined by the following equality (similar to (2.7.18)): (Au, f )H0 = (u, A+ f )H0 ,

.

u ∈ H+ , f ∈ H0 .

(2.7.20)

It is easy to see that the operator .A+ exists and satisfies the equality .A+ = I−1 A∗ , where .A∗ ∈ L(H0 , H+ ). Indeed, by virtue of (2.7.10), we have (Au, f )H0 = (u, A∗ f )H+ = (u, I−1 A∗ f )H0 ,

.

u ∈ H+ , f ∈ H0 .

70

2 Some Aspects of the Spectral Theory of Unbounded Operators

By using the definitions similar to (2.7.18) and (2.7.20), we can write .I+ = I, = J , and .J + = J. For operators .A ∈ L(H+ , H− ), the concept of self-adjointness admits the following natural generalization. We say that A is self-adjoint if .(u, Av)H0 = (Au, v)H0 for all .u, v ∈ H+ , that is .A+ = A. Similarly, an operator .A ∈ L(H+ , H− ) is called non-negative if .(Au, u)H0 ≥ 0, .u ∈ H+ . Non-negative operators are certainly self-adjoint. An ordinary bounded self-adjoint operator A in .H0 regarded as an operator acting from .H+ to .H− is obviously self-adjoint in the generalized sense. Generally speaking, an operator .A ∈ L(H+ , H− ) is self-adjoint in the generalized sense if and only if the operator .IA ∈ L(H+ , H+ ) is self-adjoint in .H+ (or .AI−1 is self-adjoint in .H− ). This follows from the equality (2.7.13)

+ .J

(IAu, v)H+ = (Au, v)H0 = (u, Av)H0 = (u, IAv)H+ ,

.

u, v ∈ H+ .

The operator .A ∈ L(H+ , H− ) is non-negative if .IA ∈ L(H+ , H+ ) is nonnegative in .H+ . Rigging of Hilbert Spaces by Linear Topological Spaces We now proceed to the construction a chain of the type (2.7.4), where the roles of .H+ and .H− are played a linear topological space and its dual space, respectively. First, we recall some facts from the theory of linear topological spaces Let R be an abstract set. It becomes a topological space if we indicate a family (collection) . of its subsets .U, V , · · · ⊆ R, which are called neighborhoods. Let .x ∈ R and U is a neighborhood from . which contains x. Then we say that U is a neighborhood of the point x, and this is denoted by .U = U (x). The notion of a neighborhood plays an important role in the theory if the family . satisfies the following two axioms. (a) Every point .x ∈ R belongs to a certain neighborhood and, moreover, for all .x, y ∈ R such that .x = y, one can indicate disjoint neighborhoods .U (x) and .U (y) of these points (this is the Hausdorff separation axiom; we consider only spaces satisfying this axiom); (b) For every .x ∈ R and any two its neighborhoods .U (x) and .V (x), there exists a neighborhood of this point .W (x) such that .W (x) ⊆ U (x) ∩ V (x). We consider only such topological spaces that satisfy the given axioms. In topological spaces, one can introduce the notions similarly to well-known topological notions from analysis on the real axis. Thus, for any .α ⊆ R its closure .α˜ is defined as a set that consists of all points .x ∈ R such that for every .U (x), the intersection .U (x) ∩ α is non-empty. It is clear, that .α˜ ⊇ α. A set .α ⊆ R is called closed if .α˜ = α and open if its complement .R \ α is closed. (1) Every neighborhood .U ∈  is an open set, i.e., its complement .R \ U is closed.

2.7 Rigged Spaces


(2) A set .α ⊆ R is open if and only if each of its points belongs to .α together with some neighborhood.

The choice of .Θ is not unambiguous. The following statement describes the cases when two systems are equivalent.

(3) In order for .Θ and .Θ′ to be equivalent, it is necessary and sufficient that, .∀x ∈ R and .∀U(x) ∈ Θ, ∃U′(x) ∈ Θ′ with .U′(x) ⊆ U(x), and that the same relation holds true with .Θ and .Θ′ interchanged.

In topological spaces, one can introduce the notion of convergence. Namely, a sequence .(xn)∞n=1 of points .xn ∈ R is said to converge to a point .x ∈ R if, for any .U(x) ∈ Θ, there exists .N = N(U(x)) such that .xn ∈ U(x) whenever .n > N.

One more note. If .R1 = (R1, Θ1) and .R2 = (R2, Θ2) are topological spaces and .R = R1 × R2 is their direct product, consisting of the points .x = (x1, x2) with .x1 ∈ R1 and .x2 ∈ R2, then this direct product can always be topologized by the family .Θ consisting of all rectangles .U = U1 × U2 with .U1 ∈ Θ1 and .U2 ∈ Θ2 (it is easy to show that .Θ satisfies axioms (a) and (b) of a topological space).

Projective Limits of Spaces We now introduce the notion of a linear topological space. Let .Φ be a linear space over the field .C of complex numbers (for definiteness), which is, at the same time, a topological space with a base of neighborhoods .Θ = {U(ϕ) | ϕ ∈ Φ}. If the linear structure of .Φ is consistent, in a certain sense, with the topology, then .Φ is called a linear topological space. The consistency is understood as the validity of the following axiom.

(c) The mappings

.Φ × Φ ∋ (ϕ, ψ) → ϕ + ψ ∈ Φ and C × Φ ∋ (λ, ϕ) → λϕ ∈ Φ

are continuous. In other words, the operations of summation and multiplication by a scalar are continuous. The continuity of these mappings should be understood as follows. For the first mapping, for any .ϕ, ψ ∈ Φ and .U(ϕ + ψ), one can choose .U(ϕ) and .U(ψ) such that .U(ϕ) + U(ψ) ⊆ U(ϕ + ψ); for the second mapping, for any .λ ∈ C, .ϕ ∈ Φ, and .U(λϕ), one can choose .U(λ) and .U(ϕ) such that .U(λ)U(ϕ) ⊆ U(λϕ), where .U(λ) denotes a neighborhood of the point .λ in the ordinary topology of .C and, by definition,

.∀α, β ⊆ Φ, α + β = {ϕ + ψ | ϕ ∈ α, ψ ∈ β}; ∀α ⊆ C, ∀β ⊆ Φ, αβ = {λϕ | λ ∈ α, ϕ ∈ β}.

Thus, the base of neighborhoods of a linear topological space .Φ must satisfy axioms (a)–(c). Since .Φ is equipped with a linear structure, it is often convenient to introduce first a family .Θ0 of subsets .U, V, . . . of .Φ, regarded as neighborhoods of the origin, and then construct the sets

.{ϕ ∈ Φ | ϕ − ϕ1 ∈ U}, U ∈ Θ0, ϕ1 ∈ Φ, (2.7.21)


2 Some Aspects of the Spectral Theory of Unbounded Operators

shifted by the vectors .ϕ1 ∈ Φ, and, finally, take all possible sets of the form (2.7.21) as a base of neighborhoods .Θ (as in the case of a linear normed space, where we first consider open spheres centered at the origin and then take all possible shifts of these spheres). Axioms (a)–(c) can be easily reformulated in terms of the family .Θ0. Clearly, a linear normed space is an example of a linear topological space, with open spheres of arbitrary centers and radii regarded as base neighborhoods.

Consider a family of Banach spaces .(Bτ)τ∈T parameterized by elements of an arbitrary indexing set T. Assume that the set .Φ = ∩τ∈T Bτ is dense in each .Bτ and the family .(Bτ)τ∈T is directed by imbedding, i.e.,

.∀τ′, τ″ ∈ T, ∃τ‴ ∈ T : Bτ‴ ⊆ Bτ′ and Bτ‴ ⊆ Bτ″, (2.7.22)

where all imbeddings are dense and continuous. In .Φ, we introduce the projective topology with respect to the family of Banach spaces .(Bτ)τ∈T and the natural imbedding operators .Φ → Bτ. By definition, the family of all possible sets

.U(ϕ1; τ; ε) = {ϕ ∈ Φ | ‖ϕ − ϕ1‖Bτ < ε}, ϕ1 ∈ Φ, τ ∈ T, ε > 0, (2.7.23)

is a system .Θ, i.e., the base of neighborhoods in this topology. Thus, the base of neighborhoods (2.7.23) consists of the intersections of the set .Φ with open spheres in the spaces .Bτ (with arbitrary index .τ, radius, and center). The system (2.7.23) is obtained according to (2.7.21) by shifting the corresponding spheres centered at the origin by .ϕ1 ∈ Φ. The system (2.7.23) satisfies axioms (a)–(c) of a linear topological space. The space .Φ = ∩τ∈T Bτ equipped with the topology generated by the base of neighborhoods (2.7.23) is called the projective limit of the Banach spaces .Bτ and is denoted by

.Φ = pr limτ∈T Bτ. (2.7.24)

The projective limit introduced above is sometimes called reduced—this emphasizes the fact that .Φ is dense in each .Bτ. Note that the convergence of a sequence .(ϕn)∞n=1 of vectors .ϕn ∈ Φ to .ϕ ∈ Φ in the space (2.7.24) means that .‖ϕn − ϕ‖Bτ → 0 as .n → ∞ for all .τ ∈ T.

If T is countable, then the projective limit (2.7.24) is called a countably normed space. In the case .T = Z+, it is also denoted by .Φ = pr limn→∞ Bn. In this case, we often encounter a situation where the norms of the spaces .Bn are monotone, i.e.,

.B0 ⊇ B1 ⊇ . . . , ‖ϕ‖B0 ≤ ‖ϕ‖B1 ≤ . . . , ϕ ∈ Φ = ∩∞n=0 Bn, (2.7.25)
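A concrete monotone scale (an illustrative choice of weights, not taken from the text) is given by the weighted ℓ²-norms .‖ϕ‖²Hn = Σk (1 + k)^{2n}|ϕk|²; they increase in n, and convergence in the projective limit means convergence in every one of them:

```python
import numpy as np

def norm_Hn(phi, n):
    # Weighted l2-norm with weights (1+k)^n -- a standard model of a
    # monotone scale of Hilbert norms (illustrative choice).
    k = np.arange(len(phi))
    return np.sqrt(np.sum((1.0 + k) ** (2 * n) * np.abs(phi) ** 2))

phi = np.array([1.0, 0.5, 0.25, 0.125])

# Monotonicity: ||phi||_{H_0} <= ||phi||_{H_1} <= ||phi||_{H_2} <= ...
norms = [norm_Hn(phi, n) for n in range(5)]
assert all(norms[i] <= norms[i + 1] for i in range(4))

# Convergence in the projective limit = convergence in every H_n:
# the errors phi * 10^{-m} shrink in each norm simultaneously.
for n in range(3):
    errs = [norm_Hn(phi * 10.0 ** (-m), n) for m in range(1, 5)]
    assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```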

2.7 Rigged Spaces

73

and the condition of directedness by imbedding is clearly satisfied. However, it is possible to show that the general case of countably normed spaces can be reduced to the monotone case (2.7.25) by a proper renormalization of the .Bτ (τ ∈ T).

If .Bτ = Hτ are Hilbert spaces, then (2.7.24) is called a projective limit of Hilbert spaces. If T is countable, .Φ is called a countably Hilbert space. In the class of projective limits of Hilbert spaces, one can select an important subclass of nuclear spaces, which are extensively used in this book. The projective limit of Hilbert spaces .Φ = pr limτ∈T Hτ is called nuclear if, for any .τ ∈ T, one can find .τ′ ∈ T such that .Hτ′ ⊆ Hτ and the imbedding operator .Hτ′ → Hτ is quasi-nuclear.

Riggings Constructed by Using Projective Limits Let .Φ be a linear topological space over the field .C. An antilinear continuous functional l on .Φ is defined as a continuous mapping .Φ ∋ ϕ → l(ϕ) ∈ C satisfying the antilinearity condition

.l(λϕ + μψ) = λ̄l(ϕ) + μ̄l(ψ), ϕ, ψ ∈ Φ; λ, μ ∈ C.

The collection of all functionals of this sort (for a fixed .Φ) obviously forms a linear space over .C under the ordinary summation of functions and multiplication by a scalar. As above, this space is called the dual of .Φ and is denoted by .Φ′. Note that it is now convenient to consider antilinear functionals instead of linear ones.

There are many ways to introduce a topology in the dual space .Φ′. Here, we study only the weak topology. The weak convergence in .Φ′ is understood in the standard way: a sequence .(ln)∞n=1 of functionals .ln ∈ Φ′ is called weakly convergent to .l ∈ Φ′ if .ln(ϕ) → l(ϕ) as .n → ∞ for every .ϕ ∈ Φ. The weak topology in .Φ′ is generated by the base of neighborhoods of the form

.U(l1; ϕ1, . . . , ϕn) = {l ∈ Φ′ | |l(ϕk) − l1(ϕk)| < 1; k = 1, . . . , n}, (2.7.26)
.l1 ∈ Φ′; ϕk ∈ Φ, k = 1, . . . , n; n ∈ N.

The convergence with respect to this topology is equivalent to the weak convergence introduced above. The fact that .Φ′, equipped with the base of neighborhoods (2.7.26), satisfies axioms (a)–(c) of a linear topological space can be verified by repeating the reasoning used in the same situation for Banach spaces. The following simple but important theorem clarifies the structure of .Φ′ in the case where .Φ is a projective limit.
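A model of a nuclear scale (again only an illustration): let .Hτ be ℓ² with weights .(1 + k)^τ. The imbedding .Hτ′ → Hτ for .τ′ > τ is diagonal with singular values .(1 + k)^{τ−τ′}, and it is quasi-nuclear (Hilbert–Schmidt) as soon as the squares of these values are summable, e.g. already for .τ′ = τ + 1:

```python
import numpy as np

def embedding_singular_values(tau_from, tau_to, N):
    # The imbedding of the weighted space H_{tau_from} into H_{tau_to}
    # (tau_from > tau_to) is diagonal; its singular values are the
    # weight ratios (1 + k)^(tau_to - tau_from).
    k = np.arange(N)
    return (1.0 + k) ** (tau_to - tau_from)

# Quasi-nuclear (Hilbert-Schmidt) test: the sum of squared singular
# values. For tau' = tau + 1 it is sum 1/(1+k)^2, which converges.
s = embedding_singular_values(3, 2, 100000)
partial = np.cumsum(s ** 2)
assert partial[-1] < np.pi ** 2 / 6          # bounded by zeta(2)
assert partial[-1] - partial[-2] < 1e-9      # increments become negligible
```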


Theorem 2.7.4 (Schwartz) Assume that .Φ is a projective limit of Banach spaces, i.e., .Φ = pr limτ∈T Bτ. Then

.Φ′ = ∪τ∈T B′τ, (2.7.27)

and this equality must be interpreted as follows: for every .l ∈ Φ′, there exists .τ ∈ T such that l can be extended by continuity from .Φ to .Bτ, yielding an element of .B′τ; vice versa, if .l ∈ B′τ for some .τ ∈ T, then .l ↾ Φ ∈ Φ′.

The representation (2.7.27) enables one to introduce in .Φ′, parallel with the weak topology, the so-called topology of the inductive limit of spaces. Thus, relation (2.7.22) yields the following property of directedness of the family .(B′τ)τ∈T:

.∀τ′, τ″ ∈ T, ∃τ‴ ∈ T : B′τ‴ ⊇ B′τ′ and B′τ‴ ⊇ B′τ″, (2.7.28)

where the imbedding operators are dense and continuous. By using (2.7.28), one can easily prove that the collection of all possible sets of the form

.U(l1; ε(·)) = ∪τ∈T {l ∈ Φ′ | l − l1 ∈ B′τ, ‖l − l1‖B′τ < ε(τ)}, l1 ∈ Φ′; T ∋ τ → ε(τ) > 0, (2.7.29)

forms a base of neighborhoods in .Φ′. In other words, for .l1 = 0, all possible unions of open spheres with positive radii centered at the origin in the spaces .B′τ (τ ∈ T) form a base of neighborhoods of the origin; these neighborhoods are then shifted by arbitrary .l1 ∈ Φ′.

In fact, we have given the general definition of the inductive limit of Banach spaces. Thus, let .(Bτ)τ∈T be a family of Banach spaces with the following directedness property of the type (2.7.28): for any .τ′, τ″ ∈ T there exists .τ‴ ∈ T such that .Bτ‴ ⊇ Bτ′ and .Bτ‴ ⊇ Bτ″. In this case, the union .Ψ = ∪τ∈T Bτ can be equipped with a natural linear structure. Indeed,

.ϕ, ψ ∈ ∪τ∈T Bτ ⇒ ∃τ′, τ″ ∈ T : ϕ ∈ Bτ′, ψ ∈ Bτ″ ⇒ ∃τ‴ ∈ T : ϕ, ψ ∈ Bτ‴.

Therefore, the expression .λϕ + μψ is meaningful as a vector in .Bτ‴ and, hence, in .Ψ. The linear space .Ψ is called the inductive limit of the spaces .Bτ (notation: .Ψ = ind limτ∈T Bτ) if it is equipped with a system of base neighborhoods of the form (2.7.29), where .Φ′ and .B′τ are replaced with .Ψ and .Bτ, respectively.


After this brief survey of simple properties of linear topological spaces, we now proceed to the construction of the rigging of a Hilbert space by linear topological spaces. Let .H0 be a Hilbert space with elements .f, g, . . . and let .Φ be a linear topological space, with elements .ϕ, ψ, . . . , densely and continuously imbedded in .H0. Every element .f ∈ H0 generates an antilinear continuous functional on .Φ by the formula .lf(ϕ) = (f, ϕ)H0, .ϕ ∈ Φ. By identifying f with .lf (this is possible because .Φ is dense in .H0), we arrive at the imbedding of .H0 in the space .Φ′ of antilinear continuous functionals on .Φ. It is clear that if .Φ′ is equipped with the weak topology, then the imbedding .H0 → Φ′ is continuous. Thus, we have constructed a chain that generalizes (2.7.4), namely,

.Φ′ ⊇ H0 ⊇ Φ. (2.7.30)

We also say that (2.7.30) is the rigging of the space .H0 by the spaces .Φ and .Φ′ or, in other words, that we have defined the pairing of the spaces .Φ and .Φ′ by the space .H0. If .Φ is a nuclear space, then the rigging (2.7.30) is called nuclear. In what follows, we consider only riggings of the form (2.7.30), where .Φ is a projective limit of Hilbert spaces, i.e.,

.Φ = pr limτ∈T Hτ. (2.7.31)

In addition, we suppose that each .Hτ is densely and continuously imbedded into .H0 and, moreover, .‖ϕ‖H0 ≤ ‖ϕ‖Hτ, .ϕ ∈ Hτ, .τ ∈ T, where T is assumed to contain the index "0" (in fact, this is not a restriction because the general situation can always be reduced to this case). Thus, for any .τ ∈ T, one can construct a chain of the form (2.7.4)

.H−τ ⊇ H0 ⊇ Hτ, (2.7.32)

where .Hτ is a positive space and .H−τ is the corresponding negative space. Since .H−τ = (Hτ)′ (see Theorem 2.7.2), the equality (2.7.27) implies that

.Φ′ = ∪τ∈T H−τ. (2.7.33)

This construction takes a fairly simple form in the case where .Φ is a countably Hilbert space and the norms of the spaces are monotone (see (2.7.25)). In this case, we arrive at the chain

.Φ′ = ∪∞n=1 H−n ⊇ . . . ⊇ H−2 ⊇ H−1 ⊇ H0 ⊇ H1 ⊇ H2 ⊇ . . . ⊇ ∩∞n=1 Hn = Φ, (2.7.34)


where each space .Hm+1 is densely imbedded in .Hm and

.‖ϕ‖Hm ≤ ‖ϕ‖Hm+1, ϕ ∈ Hm+1; m ∈ Z.

Note that the last results also hold for real spaces. One more lemma plays an important role in the following chapters.

Lemma 2.7.5 Suppose we have two riggings

.H− ⊃ H ⊃ H+, F− ⊃ F ⊃ F+ = H+ (2.7.35)

with equal positive spaces. Then there exists a unitary operator .U : H− → F−, .UH− = F−, such that

.(Uξ, f)F = (ξ, f)H, ξ ∈ H−, f ∈ H+ = F+. (2.7.36)

This operator is given by the expression .U = I⁻¹F IH, where .IF and .IH are the two standard isometric isomorphisms in the corresponding chains, .IF F− = F+, .IH H− = H+.

Proof The proof is simple. The standard operators .IH : H− → H+ and .IF : F− → F+ are isometric isomorphisms in the corresponding spaces. For such operators we have, .∀α, β ∈ H−, .f ∈ H+,

.(α, f)H = (IHα, f)H+ = (α, I⁻¹H f)H−, (IHα, β)H = (α, IHβ)H,

and similar equalities hold for the second chain in (2.7.35). Using these equalities, we get

.(Uξ, f)F = (I⁻¹F IHξ, f)F = (IHξ, f)F+ = (IHξ, f)H+ = (ξ, f)H, ξ ∈ H−, f ∈ H+ = F+. ∎
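Lemma 2.7.5 can be verified numerically in a finite-dimensional toy model (all matrices below are my illustrative choices): take the common positive space on Rⁿ with scalar product .uᵀWv, the middle spaces .H = Rⁿ (standard) and .F = Rⁿ with scalar product .uᵀMv for symmetric positive M; then .IH = W⁻¹, .IF = W⁻¹M, so .U = I⁻¹F IH = M⁻¹, and both (2.7.36) and the unitarity of .U : H− → F− can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def spd(seed_mat):
    return seed_mat @ seed_mat.T + n * np.eye(n)

W = spd(rng.standard_normal((n, n)))   # scalar product of the common H+
M = spd(rng.standard_normal((n, n)))   # scalar product of the middle space F

# Standard isometries of the two chains in this model:
# (alpha, f)_H = (I_H alpha, f)_+  gives I_H = W^{-1};
# (alpha, f)_F = (I_F alpha, f)_+  gives I_F = W^{-1} M.
I_H = np.linalg.inv(W)
I_F = np.linalg.inv(W) @ M
U = np.linalg.inv(I_F) @ I_H           # = M^{-1}

# Negative-space Gram matrices: (a, b)_- = (I a, I b)_+.
G_Hminus = I_H.T @ W @ I_H             # = W^{-1}
G_Fminus = I_F.T @ W @ I_F             # = M W^{-1} M

xi, f = rng.standard_normal(n), rng.standard_normal(n)

# (2.7.36): (U xi, f)_F = (xi, f)_H.
assert np.isclose((U @ xi) @ M @ f, xi @ f)

# U is unitary from H- onto F-: it preserves the negative scalar product.
assert np.isclose((U @ xi) @ G_Fminus @ (U @ xi), xi @ G_Hminus @ xi)
```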

2.8 Tensor Products

To consider the multi-dimensional moment problem, the concept of a tensor product will be needed. For simplicity, we consider only tensor products of Hilbert spaces.

Tensor Products of Spaces Let .(Hk)nk=1 be a finite sequence of separable Hilbert spaces and let .(e(k)j)∞j=0 be an orthonormal basis in .Hk. Consider a formal product

.eα = e(1)α1 ⊗ · · · ⊗ e(n)αn, (2.8.1)


where .α = (α1, . . . , αn) ∈ Zn+ = Z+ × · · · × Z+ (n times), i.e., we consider the ordered sequence .(e(1)α1, . . . , e(n)αn) and construct the Hilbert space spanned by the formal vectors (2.8.1), which are assumed to form an orthonormal basis of this space. The separable Hilbert space thus constructed is called the tensor product of the spaces .H1, . . . , Hn and is denoted by .H1 ⊗ · · · ⊗ Hn = ⊗nk=1 Hk. Its vectors have the form

.f = Σα∈Zn+ fα eα, fα ∈ C, ‖f‖²⊗nk=1 Hk = Σα∈Zn+ |fα|² < ∞, (2.8.2)
.(f, g)⊗nk=1 Hk = Σα∈Zn+ fα ḡα, g = Σα∈Zn+ gα eα ∈ ⊗nk=1 Hk.

Let .f(k) = Σ∞j=0 f(k)j e(k)j ∈ Hk, .k = 1, 2, . . . , n, be some vectors. By definition,

.f = f(1) ⊗ · · · ⊗ f(n) = Σα∈Zn+ f(1)α1 · · · f(n)αn eα. (2.8.3)

The coefficients .fα = f(1)α1 · · · f(n)αn of the decomposition (2.8.3) satisfy condition (2.8.2). Therefore, the vector (2.8.3) belongs to .⊗nk=1 Hk; in addition,

.‖f‖⊗nk=1 Hk = Πnk=1 ‖f(k)‖Hk. (2.8.4)

Clearly, the mapping

.H1 ⊕ · · · ⊕ Hn ∋ (f(1), . . . , f(n)) → f(1) ⊗ · · · ⊗ f(n) ∈ ⊗nk=1 Hk

is linear in each argument, and the linear span L of the vectors (2.8.3) is dense in .⊗nk=1 Hk. This linear span is called the algebraic (non-completed) tensor product of the spaces .H1, . . . , Hn and is denoted by .a.⊗nk=1 Hk. If .Lk is a linear set in .Hk, .k = 1, 2, . . . , n, then, by analogy,

.a.⊗nk=1 Lk = l.s.{f(1) ⊗ · · · ⊗ f(n) | f(k) ∈ Lk, k = 1, 2, . . . , n},
.⊗nk=1 Lk = c.l.s.{f(1) ⊗ · · · ⊗ f(n) | f(k) ∈ Lk, k = 1, 2, . . . , n}.

This definition of the tensor product clearly depends on the choice of an orthonormal basis .(e(k)j)∞j=0 in each factor .Hk. However, it is easy to show that, by


changing the basis, one always arrives at a tensor product that is isomorphic to the original one with preservation of the structure.

In fact, for the case of two Hilbert spaces .H1 and .H2, the concept of the tensor product introduced above has the following meaning. We consider the linear span L of the formal products .f(1) ⊗ f(2) and suppose that

.(f(1) + g(1)) ⊗ f(2) = f(1) ⊗ f(2) + g(1) ⊗ f(2),
.f(1) ⊗ (f(2) + g(2)) = f(1) ⊗ f(2) + f(1) ⊗ g(2),
.(λf(1)) ⊗ f(2) = λ(f(1) ⊗ f(2)), f(1) ⊗ (λf(2)) = λ(f(1) ⊗ f(2)), (2.8.5)
.f(1), g(1) ∈ H1, f(2), g(2) ∈ H2, λ ∈ C.

In other words, the linear space L is factorized by its linear subspace spanned by all possible vectors representable as differences between the right-hand and left-hand sides of equalities (2.8.5). Then L is equipped with a scalar product. For vectors of the form .f(1) ⊗ f(2), it is defined by the formula

.(f(1) ⊗ f(2), g(1) ⊗ g(2))H1⊗H2 = (f(1), g(1))H1 (f(2), g(2))H2, f(1), g(1) ∈ H1, f(2), g(2) ∈ H2,

and then extended bilinearly to the other elements of the factorized space L.

Tensor Products of Operators We now present the definition of the tensor product of bounded operators.

Theorem 2.8.1 Let .(Hk)nk=1 and .(Gk)nk=1 be finite sequences of separable Hilbert spaces and let .(Ak)nk=1 be a sequence of operators .Ak ∈ L(Hk, Gk). The tensor product .A1 ⊗ · · · ⊗ An = ⊗nk=1 Ak is defined by the formula

.(⊗nk=1 Ak)f = (⊗nk=1 Ak)(Σα∈Zn+ fα eα) = Σα∈Zn+ fα (A1 e(1)α1 ⊗ · · · ⊗ An e(n)αn), f ∈ ⊗nk=1 Hk. (2.8.6)

It is stated that the series on the right-hand side of (2.8.6) converges weakly in .⊗nk=1 Gk and defines an operator

.⊗nk=1 Ak ∈ L(⊗nk=1 Hk, ⊗nk=1 Gk).


Furthermore,

.‖⊗nk=1 Ak‖ = Πnk=1 ‖Ak‖. (2.8.7)

The definition (2.8.6) yields the equality

.(⊗nk=1 Ak)(f(1) ⊗ · · · ⊗ f(n)) = A1f(1) ⊗ · · · ⊗ Anf(n), f(k) ∈ Hk; k = 1, 2, . . . , n, (2.8.8)

which uniquely determines the operator .⊗nk=1 Ak. The mapping

.×nk=1 L(Hk, Gk) ∋ (A1, . . . , An) → ⊗nk=1 Ak ∈ L(⊗nk=1 Hk, ⊗nk=1 Gk)

is linear in each variable. Note that, by using (2.8.6), we can get the relations

.(⊗nk=1 Bk)(⊗nk=1 Ak) = ⊗nk=1 (BkAk), (⊗nk=1 Ak)* = ⊗nk=1 A*k (2.8.9)

for .Ak ∈ L(Hk, Gk) and .Bk ∈ L(Gk, Fk), .k = 1, 2, . . . , n.

Suppose that each .Ak in Theorem 2.8.1 is a Hilbert-Schmidt operator. Then .⊗nk=1 Ak is also a Hilbert-Schmidt operator and

.‖⊗nk=1 Ak‖HS = Πnk=1 ‖Ak‖HS. (2.8.10)
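Properties (2.8.7)–(2.8.10) are the familiar identities of Kronecker products of matrices, and one can sanity-check them numerically (finite-dimensional stand-ins for the abstract spaces):

```python
import numpy as np

rng = np.random.default_rng(3)
A1, A2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
B1, B2 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
f1, f2 = rng.standard_normal(3), rng.standard_normal(4)

T = np.kron(A1, A2)

# (2.8.8): (A1 (x) A2)(f1 (x) f2) = A1 f1 (x) A2 f2.
assert np.allclose(T @ np.kron(f1, f2), np.kron(A1 @ f1, A2 @ f2))

# (2.8.9): composition and adjoints factorize.
assert np.allclose(np.kron(B1, B2) @ T, np.kron(B1 @ A1, B2 @ A2))
assert np.allclose(T.T, np.kron(A1.T, A2.T))

# (2.8.7): operator norms multiply; (2.8.10): so do Hilbert-Schmidt
# (Frobenius) norms.
assert np.isclose(np.linalg.norm(T, 2),
                  np.linalg.norm(A1, 2) * np.linalg.norm(A2, 2))
assert np.isclose(np.linalg.norm(T, 'fro'),
                  np.linalg.norm(A1, 'fro') * np.linalg.norm(A2, 'fro'))
```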

Corollary 2.8.2 Let .Hk ⊆ Gk be Hilbert spaces such that each imbedding operator .Ok : Hk → Gk is continuous, .k = 1, 2, . . . , n. Then .⊗nk=1 Hk ⊆ ⊗nk=1 Gk and, for the corresponding imbedding operator, we have .O = ⊗nk=1 Ok. If the operators .Ok are quasi-nuclear for all k, then the operator O is also quasi-nuclear.

The Tensor Product of Chains Let us turn to the tensor product of chains of Hilbert spaces of the type (2.7.4):

.H−,k ⊇ H0,k ⊇ H+,k, k = 1, 2, . . . , n. (2.8.11)

According to Corollary 2.8.2 from Theorem 2.8.1, we have

.⊗nk=1 H−,k ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 H+,k. (2.8.12)


Theorem 2.8.3 The Hilbert space .⊗nk=1 H−,k can be considered as a negative space with respect to the zero space .⊗nk=1 H0,k and the positive space .⊗nk=1 H+,k, i.e., (2.8.12) is a chain.

The Projective Limit By using the properties of tensor products of Hilbert spaces established above, we can investigate the same collection of problems for riggings by linear topological spaces. Consider a set of riggings of the form (2.7.30),

.Φ′k ⊇ H0,k ⊇ Φk, k = 1, 2, . . . , n, (2.8.13)

where .Φk = pr limτk∈Tk H+,τk is the projective limit of a family of Hilbert spaces .(H+,τk)τk∈Tk, .k = 1, 2, . . . , n, directed by imbedding operators and satisfying the required conditions. For every multi-index .τ = (τ1, . . . , τn) ∈ T = ×nk=1 Tk, we consider the collection of Hilbert riggings

.H−,τk ⊇ H0,k ⊇ H+,τk, k = 1, 2, . . . , n, (2.8.14)

where .H−,τk is the Hilbert space dual to .H+,τk with respect to .H0,k. According to Theorem 2.8.3, for a fixed .τ ∈ T the tensor product of the chains (2.8.14) is also a chain

.⊗nk=1 H−,τk ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 H+,τk. (2.8.15)

Since each family .(H+,τk)τk∈Tk, .k = 1, 2, . . . , n, is directed, the family of Hilbert spaces .(⊗nk=1 H+,τk)τ∈T is also directed by imbedding operators, as follows from (2.8.7). Furthermore, the set .∩τ∈T ⊗nk=1 H+,τk is dense in each space of this family. According to our assumption, for any .τk ∈ Tk we have .‖·‖H0,k ≤ ‖·‖H+,τk. We then conclude that

.∀τ ∈ T, ‖·‖⊗nk=1 H0,k ≤ ‖·‖⊗nk=1 H+,τk.

The tensor product .⊗nk=1 Φk of the spaces .Φk, .k = 1, 2, . . . , n, is defined as the projective limit

.⊗nk=1 Φk = pr limτ=(τ1,...,τn)∈T ⊗nk=1 H+,τk, T = ×nk=1 Tk. (2.8.16)

Hence, as the multi-index .τ = (τ1, . . . , τn) in (2.8.15) runs over the indexing set T, we obtain a family of chains of the form (2.7.32) with .H0 = ⊗nk=1 H0,k. This enables us to apply the scheme described above to construct the chain

.Φ′ ⊇ H0 = ⊗nk=1 H0,k ⊇ pr limτ∈T ⊗nk=1 H+,τk = ⊗nk=1 Φk = Φ.


The space .Φ′ can be equipped with the topology of the inductive limit of the negative spaces .⊗nk=1 H−,τk of the chain (2.8.15). By definition,

.⊗nk=1 Φ′k = ind limτ=(τ1,...,τn)∈T ⊗nk=1 H−,τk. (2.8.17)

Finally, we arrive at the chain

.⊗nk=1 Φ′k ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 Φk. (2.8.18)

Corollary 2.8.2 implies that if each .Φk, .k = 1, 2, . . . , n, is a nuclear space, then the space .⊗nk=1 Φk is also nuclear. In fact, we have proved the following theorem.

Theorem 2.8.4 The tensor product of the chains

.Φ′k ⊇ H0,k ⊇ Φk, k = 1, 2, . . . , n,

where .Φk = pr limτk∈Tk H+,τk, is also a chain

.⊗nk=1 Φ′k ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 Φk,

where the spaces .⊗nk=1 Φk and .⊗nk=1 Φ′k are defined by equalities (2.8.16) and (2.8.17). Furthermore, if each rigging (2.8.14) is nuclear, then the rigging (2.8.18), constructed as indicated above, is also nuclear.

2.9 Representations of Continuous Multi-Linear Forms

First, we introduce the notion of a generalized kernel. Consider a collection of n chains

.H−,k ⊇ H0,k ⊇ H+,k, k = 1, 2, . . . , n, (2.9.1)

and their tensor product

.⊗nk=1 H−,k ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 H+,k. (2.9.2)


Elements

.F, G, · · · ∈ ⊗nk=1 H0,k, U, V, · · · ∈ ⊗nk=1 H+,k, A, B, · · · ∈ ⊗nk=1 H−,k

are called ordinary, smooth, and generalized kernels, respectively. Also consider a continuous n-linear form .a(f(1), . . . , f(n)) regarded as a continuous function

.H0,1 ⊕ · · · ⊕ H0,n ∋ (f(1), . . . , f(n)) → a(f(1), . . . , f(n)) ∈ C, (2.9.3)

linear in each .f(k) when all other variables are fixed. The continuity of (2.9.3) is equivalent to the existence of the estimate

.|a(f(1), . . . , f(n))| ≤ c Πnk=1 ‖f(k)‖H0,k, f(k) ∈ H0,k; k = 1, 2, . . . , n, (2.9.4)

with some constant .c > 0.

In every .H0,k, we fix an orthonormal basis .(e(k)j)∞j=0. Let .f(k) = Σ∞αk=0 f(k)αk e(k)αk be the decomposition of a vector .f(k) ∈ H0,k in this basis. We set .α = (α1, . . . , αn) ∈ Zn+. In view of the continuity and multi-linearity of a, it can be represented as a convergent series in its coordinates .aα and the coordinates of the vectors .f(k):

.a(f(1), . . . , f(n)) = Σα∈Zn+ aα f(1)α1 · · · f(n)αn, aα = a(e(1)α1, . . . , e(n)αn). (2.9.5)

The main result of this subsection is the kernel theorem, which is based on two lemmas.

Lemma 2.9.1 Let a be the continuous n-linear form (2.9.3) and let .Ak ∈ L(H0,k), .k = 2, 3, . . . , n, be Hilbert-Schmidt operators. Consider the continuous n-linear form

.H0,1 ⊕ · · · ⊕ H0,n ∋ (f(1), . . . , f(n)) → b(f(1), . . . , f(n)) = a(f(1), A2f(2), . . . , Anf(n)).

It is stated that the coordinates .(bα)α∈Zn+ of the form b are such that

.Σα∈Zn+ |bα|² < ∞. (2.9.6)

Conversely, if .Ak ∈ L(H0,k), .Ak ≠ 0, .k = 2, 3, . . . , n, and, for any continuous form a, the coordinates of the form b satisfy condition (2.9.6), then all .Ak are Hilbert-Schmidt operators.
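The effect of Lemma 2.9.1 is already visible in a finite truncation (real scalars, my illustrative operator): for the bounded form .a(f, g) = (f, g) the coordinates .ajk = δjk are not square-summable over an infinite index set, but after composing with a Hilbert–Schmidt operator K the coordinates .bjk = a(ej, Kek) = Kjk satisfy .Σ|bjk|² = ‖K‖²HS < ∞:

```python
import numpy as np

N = 200  # truncation size; think of N -> infinity

# Coordinates of the identity form a(f, g) = (f, g): the N x N identity.
a_coords = np.eye(N)
# The sum of |a_jk|^2 equals N: the form is bounded, but its
# coordinates are not square-summable in the limit N -> infinity.
assert np.isclose(np.sum(a_coords ** 2), N)

# A diagonal Hilbert-Schmidt operator K with singular values 1/(1+k).
k = np.arange(N)
K = np.diag(1.0 / (1.0 + k))

# Coordinates of b(f, g) = a(f, K g): b_jk = K_jk.
b_coords = a_coords @ K
hs_sq = np.sum(b_coords ** 2)          # = sum 1/(1+k)^2
assert hs_sq < np.pi ** 2 / 6          # stays bounded as N grows
```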


Lemma 2.9.2 Let

.H0,1 ⊕ · · · ⊕ H0,n ∋ (f(1), . . . , f(n)) → b(f(1), . . . , f(n))

be a continuous n-linear form. It can be represented in the form

.b(f(1), . . . , f(n)) = (f(1) ⊗ · · · ⊗ f(n), K)⊗nk=1 H0,k,

where .K ∈ ⊗nk=1 H0,k, if and only if its coordinates .(bα)α∈Zn+ satisfy condition (2.9.6).

Theorem 2.9.3 Assume that the chains (2.9.1) are such that all imbedding operators .Ok : H+,k → H0,k, .k = 2, 3, . . . , n, are quasi-nuclear. Then every continuous n-linear form

.H0,1 ⊕ · · · ⊕ H0,n ∋ (f(1), . . . , f(n)) → a(f(1), . . . , f(n)) ∈ C

can be associated with a unique generalized kernel

.A ∈ H0,1 ⊗ H−,2 ⊗ · · · ⊗ H−,n

such that

.a(f(1), u(2), . . . , u(n)) = (f(1) ⊗ u(2) ⊗ · · · ⊗ u(n), A)⊗nk=1 H0,k, (2.9.7)
.f(1) ∈ H0,1; u(k) ∈ H+,k; k = 2, 3, . . . , n.

Conversely, if every form a of the indicated type admits the representation (2.9.7) with

.A ∈ H0,1 ⊗ H−,2 ⊗ · · · ⊗ H−,n,

then the imbedding operators .Ok, .k = 2, 3, . . . , n, are quasi-nuclear.

Corollary 2.9.4 The statement of Theorem 2.9.3 can be given a somewhat more "symmetric" form (at the price of a slight simplification): assume that each imbedding .H+,k → H0,k, .k = 1, 2, . . . , n, in (2.9.1) is quasi-nuclear. Then every continuous n-linear form (2.9.3) admits the representation

.a(u(1), . . . , u(n)) = (u(1) ⊗ · · · ⊗ u(n), A)⊗nk=1 H0,k, (2.9.8)
.u(k) ∈ H+,k, k = 1, 2, . . . , n.

Moreover, the kernel .A ∈ ⊗nk=1 H−,k in this representation is uniquely determined. Corollary 2.9.4 is often used instead of Theorem 2.9.3.


Nuclear Riggings Let us now modify Theorem 2.9.3 for the case of nuclear riggings and forms defined on nuclear spaces. Consider a collection of n nuclear riggings given by (2.7.30) and (2.7.33),

.Φ′k ⊇ H0,k ⊇ Φk = pr limτk∈Tk H+,τk, k = 1, 2, . . . , n. (2.9.9)

According to the previous scheme (see (2.8.18)), we construct the nuclear chain

.⊗nk=1 Φ′k ⊇ ⊗nk=1 H0,k ⊇ ⊗nk=1 Φk (2.9.10)

and consider, as in (2.9.3), n-linear forms

.⊕nk=1 Φk ∋ (ϕ(1), . . . , ϕ(n)) → a(ϕ(1), . . . , ϕ(n)) ∈ C, (2.9.11)

continuous in the direct product .⊕nk=1 Φk of the linear topological spaces .Φk. Since

.⊗nk=1 Φk = pr limτ=(τ1,...,τn)∈T ⊗nk=1 H+,τk, T = ×nk=1 Tk,

one can easily prove that the continuity of the form (2.9.11) is equivalent to its continuity in the norm of the space .⊗nk=1 H+,τk for a certain multi-index .τ = (τ1, . . . , τn) and, hence, to the validity of the estimate

.|a(ϕ(1), . . . , ϕ(n))| ≤ cτ Πnk=1 ‖ϕ(k)‖H+,τk, (2.9.12)
.cτ > 0, ϕ(k) ∈ H+,τk; k = 1, 2, . . . , n.

Theorem 2.9.5 For the nuclear riggings (2.9.9), every continuous n-linear form (2.9.11) admits the representation

.a(ϕ(1), . . . , ϕ(n)) = (ϕ(1) ⊗ · · · ⊗ ϕ(n), A)⊗nk=1 H0,k, (2.9.13)
.ϕ(k) ∈ Φk, k = 1, 2, . . . , n;

moreover, the generalized kernel .A ∈ ⊗nk=1 Φ′k in this representation is uniquely determined by a.

In the case where each .Φk in (2.9.11) is a countably Hilbert space, every separately continuous multi-linear form (2.9.11) is also continuous with respect to the collection of its variables, i.e., in .×nk=1 Φk. Therefore, for countably Hilbert spaces .Φk, Theorem 2.9.5 remains true for separately continuous forms.


Bilinear Forms By using Theorem 2.9.3, one can also establish the kernel theorem for bilinear (sesquilinear) forms. Thus, consider the chain

.H− ⊇ H0 ⊇ H+ (2.9.14)

and assume that .H+ is equipped with an involution which is, at the same time, an involution in .H0. This means that one can indicate an antilinear mapping .H+ ∋ u → u* ∈ H+ acting in .H+ such that .(u*)* = u, .(u*, v*)H+ = (u, v)H+, and .(u*, v*)H0 = (u, v)H0 (.u, v ∈ H+). It is easy to see that, in this case, .(u*, v*)H− = (u, v)H−, i.e., the operation ".*" is also an involution in .H− and, consequently, can be extended by continuity to an involution in the whole of .H−: .H− ∋ α → α* ∈ H−. The restriction of this mapping, .H− ⊇ H0 ∋ f → f* ∈ H0, is an involution in .H0. Clearly, .(Iα)* = Iα*, .α ∈ H−. If the spaces of the chain (2.9.14) are equipped with this involution, then we say that (2.9.14) is a chain with the involution ".*".

Let .a(f, g) be a bilinear form defined on .H0, i.e., a continuous function .H0 ⊕ H0 ∋ (f, g) → a(f, g) ∈ C linear in the first variable and antilinear in the second.

Theorem 2.9.6 Assume that the imbedding .H+ → H0 in the chain (2.9.14) is quasi-nuclear. Then, for every continuous bilinear form .a(f, g), .f, g ∈ H0, one can construct a unique generalized kernel .Aa ∈ H0 ⊗ H− such that

.a(u, g) = (Aa, g ⊗ u*)H0⊗H0, u ∈ H+; g ∈ H0. (2.9.15)

Conversely, if every form a of the indicated type admits the representation (2.9.15) with .Aa ∈ H0 ⊗ H−, then the imbedding .H+ → H0 is quasi-nuclear.

In Theorem 2.9.6 one can fix any other variable instead of the first one. Hence, in addition to (2.9.15), the form a admits the representation

.a(f, v) = (Ãa, v ⊗ f*)H0⊗H0, f ∈ H0, v ∈ H+, (2.9.16)

where the kernel .Ãa ∈ H− ⊗ H0 is uniquely determined by the form a. It follows from (2.9.15) and (2.9.16) that

.a(v*, u) = (Aa, u ⊗ v)H0⊗H0 = (Ãa, u ⊗ v)H0⊗H0

for .u, v ∈ H+, i.e., the restrictions of the functionals .Aa and .Ãa to .H+ ⊗ H+ coincide. Denote this common restriction by .A. Thus, under the conditions of Theorem 2.9.6, we have the representation

.a(u, v) = (A, v ⊗ u*)H0⊗H0, u, v ∈ H+, (2.9.17)

with the kernel .A ∈ H− ⊗ H−.
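In a finite-dimensional model with the involution .u* = ū, the kernel of an operator in the sense of the representation above is just its matrix read as a vector of .Cⁿ ⊗ Cⁿ: .(Au, v) = Σj,k Ajk v̄j uk is the pairing of the flattened matrix against the tensor .v̄ ⊗ u. A quick check of this toy analogue:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# (A u, v)_{C^n} = sum_{j,k} A_{jk} conj(v_j) u_k: the matrix A, read
# row-major as a vector in C^n (x) C^n, acts as the "kernel" of the
# operator, paired against kron(conj(v), u) -- the analogue of v (x) u*.
lhs = np.vdot(v, A @ u)                      # (Au, v), conjugation on v
rhs = np.sum(A.flatten() * np.kron(v.conj(), u))
assert np.isclose(lhs, rhs)
```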


The assertions of Theorem 2.9.5 for the case of nuclear riggings can also be easily generalized to bilinear forms. Thus, let .Φ be a nuclear space with a natural involution .Φ ∋ ϕ → ϕ* ∈ Φ. Consider a bilinear form defined on .Φ, i.e., a continuous function .Φ × Φ ∋ (ϕ, ψ) → a(ϕ, ψ) ∈ C linear in the first variable and antilinear in the second. Then .a(ϕ, ψ) = (A, ψ ⊗ ϕ*)H0⊗H0, where .A ∈ Φ′ ⊗ Φ′.

Let .A ∈ L(H0). Given A, we construct the continuous bilinear form .a(f, g) = (Af, g)H0, .f, g ∈ H0. Then, by using this form, we construct, according to (2.9.17), the kernel .AA = A ∈ H− ⊗ H−. It is called the kernel of the operator A:

.(Au, v)H0 = (A, v ⊗ u*)H0⊗H0, u, v ∈ H+. (2.9.18)

A Separate Kernel Theorem In order to formulate the kernel theorem for multi-linear (bilinear) forms defined on the spaces .L2(G), .G ⊆ RN, .N ∈ N, with respect to the Lebesgue measure, one can use a Sobolev space with a quasi-nuclear imbedding in .L2(G) or, in the case of nuclear riggings, the spaces .C∞(G), .S(RN), and .D(RN). Let us give an "elementary" kernel theorem.

Theorem 2.9.7 Let .L2(G) ⊕ L2(G) ∋ (f, g) → a(f, g) ∈ C be a continuous bilinear form. Then there exists a kernel .T ∈ C(RN × RN) such that

.a(u, v) = ∫G×G T(x, y)(Du)(y)(Dv)(x) dx dy, D = D1 . . . DN; u, v ∈ C0N(G), (2.9.19)

for smooth functions u, v with compact support in G.

Note that if the kernel T is smooth, then (2.9.19) takes the form

.(Af, g)H0 = ∫G×G K(x, y)f(y)g(x) dx dy = (K, g ⊗ f*)H0⊗H0

with .K = AA = DxDyT. Generally speaking, these derivatives should be understood in the sense of generalized functions, and the corresponding kernel is then generalized. Note that one can indicate a quasi-nuclear rigging (2.9.14) of the space .H0 = L2(G) which transforms (2.9.17) into (2.9.19). Also note that a relation similar to (2.9.19) can be written for multi-linear forms.

The formula (2.9.19) is connected with the following expression for the elements of the matrix .(ajk)mj,k=1 of an operator A in the m-dimensional space .Cm:

.ajk = (Aδk, δj)Cm, j, k = 1, 2, . . . , m. (2.9.20)


Here, .δj = (δjn)mn=1 are the vectors of an orthonormal basis in .Cm. Let us clarify this assertion. On passing from .Cm to .L2(G), the role of the .δj must be played by the .δ-functions .δx(ξ), .x ∈ G, but these do not belong to .L2(G), and, therefore, the relation (2.9.20) becomes meaningless. At the same time, one can transform the basis in (2.9.20) by setting .ωj = Σjl=1 δl, .j = 1, 2, . . . , m, and construct the matrix .tjk = (Aωk, ωj)Cm, .j, k = 1, 2, . . . , m. This matrix enables us to present the action of the operator in a simple form. The kernel T is an analogue of the matrix .(tjk)mj,k=1; e.g., if .G = (0, ∞) ⊂ R, then we can formally write

.ωx(ξ) = ∫x0 δz(ξ) dz, x ∈ (0, ∞).
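The passage from .(ajk) to .(tjk) and back is a discrete analogue of T versus its mixed derivative .DxDyT: with .ωj = δ1 + · · · + δj one has .tjk = Σp≤j, q≤k apq, and a is recovered by second mixed differences (a small numerical check of this discrete analogy):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 6
a = rng.standard_normal((m, m))          # matrix (a_jk) of an operator A

# t_jk = (A omega_k, omega_j) with omega_j = delta_1 + ... + delta_j,
# i.e. cumulative sums of a in both indices ("T as a primitive of the kernel").
t = np.cumsum(np.cumsum(a, axis=0), axis=1)

# The second mixed difference recovers a -- the discrete version of
# K = Dx Dy T.
t_pad = np.pad(t, ((1, 0), (1, 0)))
a_rec = t_pad[1:, 1:] - t_pad[:-1, 1:] - t_pad[1:, :-1] + t_pad[:-1, :-1]
assert np.allclose(a_rec, a)
```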

The generalized kernel .DxDyT is an analogue of the matrix (2.9.20), and the relation (2.9.19) is similar to the formula that reconstructs the action of the operator A from the matrix .(tjk)mj,k=1.

Completions of a Space with Respect to Two Different Norms Let L be a linear space, let .L ∋ f → ‖f‖E ≥ 0 be a norm on this space, and let E be the completion of L with respect to this norm. Recall that E consists of the classes .fE of equivalent fundamental sequences .(fn)∞n=1, .fn ∈ L. The equivalence relation .(fn)∞n=1 ∼ (gn)∞n=1 means that .‖fn − gn‖E → 0 as .n → ∞. The linear structure in the space E is induced by the linear operations over the sequences. If .‖·‖E is a Hilbert norm, i.e., if L is equipped with a scalar product .(f, g)E, .f, g ∈ L, and .‖·‖E = (·, ·)E^{1/2}, then E is a Hilbert space. The space L is imbedded in E by identifying .f ∈ L with the class that contains the stationary sequence .(f, f, . . . ).

Now, assume that L is a linear set with two different norms, namely, .L ∋ f → ‖f‖E1 ≥ 0 and .L ∋ f → ‖f‖E2 ≥ 0. Let .E1 and .E2 be the corresponding completions of L. Assume that these norms are comparable in the following sense:

.‖f‖E1 ≤ ‖f‖E2, f ∈ L (2.9.21)

(it is clear that, instead of (2.9.21) one can write the inequality .∃c > 0, .∀f ∈ L, f E1 ≤ c f E2 ). Arguing somewhat inaccurately, one can conclude that, as a result of the completion, inequality (2.9.21) yields the inclusion .E1 ⊇ E2 and the inequality . f E1 ≤ f E2 , .f ∈ E2 . However, it has been already mentioned that this is not true. Let us clarify the situation. Assume that .(fn )∞ n=1 , .fn ∈ L is a fundamental sequence with respect to the norm . · E2 . Then, by virtue of (2.9.21), it is also fundamental with respect to the norm ∞ ∈f ∞ . · E1 . Let .(fn ) E2 ∈ E2 and .(fn )n=1 ∈ fE1 ∈ E1 . Let us associate the vector n=1 ∞ ∈ f , .fE2 with the vector .fE1 . This mapping is well defined. Indeed, if .(gn ) E2 n=1 ∞ ∞ i.e., .(fn )n=1 ∼ (gn )n=1 with respect to . · E2 , then, by virtue of (2.9.21), the same relation of equivalence can be written for . · E1 . .


2 Some Aspects of the Spectral Theory of Unbounded Operators

Thus, we have constructed the mapping

$$E_2 \ni f_{E_2} \mapsto Qf_{E_2} = f_{E_1} \in E_1.$$

It follows from the method, according to which the completions are equipped with a linear structure, that $Q$ is linear. Moreover, by virtue of (2.9.21), $Q$ is a contraction:

$$\|Qf_{E_2}\|_{E_1} = \|f_{E_1}\|_{E_1} = \lim_{n\to\infty} \|f_n\|_{E_1} \le \lim_{n\to\infty} \|f_n\|_{E_2} = \|f_{E_2}\|_{E_2}, \qquad (f_n)_{n=1}^{\infty} \in f_{E_2} \in E_2.$$

Note that the restriction $Q \upharpoonright L$ is the imbedding operator which imbeds $L \subseteq E_2$ into the set $L$ regarded as a subset of the space $E_1$. Consider the subspace

$$\operatorname{Ker} Q = \{f \in E_2 \mid Qf = 0\} \subseteq E_2. \qquad (2.9.22)$$

If $\operatorname{Ker} Q = \{0\}$, then $E_2$ can be identified with the range $R(Q)$, and one can assume that $E_2 \subseteq E_1$ and $\|f\|_{E_1} \le \|f\|_{E_2}$, $f \in E_2$. In the general case, this inclusion and inequality hold true for the factor-space $E_2/\operatorname{Ker} Q \subseteq E_1$. In the case where $E_2$ is a Hilbert space, instead of the factor-space, we can take the orthogonal complement $E_2 \ominus \operatorname{Ker} Q$. Note that the other extreme case ($\operatorname{Ker} Q = E_2$) is impossible. Moreover, $L \cap \operatorname{Ker} Q = \{0\}$, as follows from the fact that $Q \upharpoonright L$ is the indicated imbedding. Let us formulate these results as a theorem.

Theorem 2.9.8 Let $L$ be a linear space with two norms $\|\cdot\|_{E_1}$ and $\|\cdot\|_{E_2}$ comparable in the sense of (2.9.21), let $E_1$ and $E_2$ be the corresponding completions of $L$, and let $Q$ be the operator introduced above. Then

$$E_1 \supseteq E_2/\operatorname{Ker} Q, \qquad \|f\|_{E_1} \le \|f\|_{E_2/\operatorname{Ker} Q}, \qquad f \in E_2/\operatorname{Ker} Q. \qquad (2.9.23)$$

If $E_2$ is a Hilbert space, then $E_2 \ominus \operatorname{Ker} Q$ plays the role of the factor-space in (2.9.23). If $\operatorname{Ker} Q = \{0\}$, then $E_1 \supseteq E_2$ and $\|f\|_{E_1} \le \|f\|_{E_2}$ for $f \in E_2$.

The theorem below immediately follows from the construction of the operator $Q$ and relation (2.9.22).

Theorem 2.9.9 The kernel is trivial, i.e., $\operatorname{Ker} Q = \{0\}$, if and only if every sequence $(f_n)_{n=1}^{\infty}$, $f_n \in L$, that is fundamental with respect to the norm $\|\cdot\|_{E_2}$ and converges to zero with respect to the norm $\|\cdot\|_{E_1}$ also converges to zero with respect to the norm $\|\cdot\|_{E_2}$.


2.10 Semi-Bounded Bilinear Forms

It is known that an arbitrary continuous bilinear form $a$ in a Hilbert space $\mathcal{H}$ admits the representation

$$a(f, g) = (Af, g)_{\mathcal{H}}, \qquad f, g \in \mathcal{H},$$

where $A$ is a bounded operator in $\mathcal{H}$. An important role is played by the similar theorem on representations in the case of forms that are not continuous. The material below is based on the statements about the completion of a space with respect to two norms.

Lemma 2.10.1 Assume that $D(A) = \{u \in H_+ \mid \mathbf{I}^{-1}u \in H_0\}$. In $H_0$, consider the operator $A = \mathbf{I}^{-1} \upharpoonright D(A)$. Then $A$ is self-adjoint and satisfies the relations

$$(u, v)_{H_+} = (Au, v)_{H_0}, \qquad u \in D(A),\ v \in H_+,$$
$$(u, v)_{H_+} = (\sqrt{A}u, \sqrt{A}v)_{H_0}, \qquad u, v \in H_+ = D(\sqrt{A}). \qquad (2.10.1)$$

Positive Forms

Here, we introduce the notion of a prechain, which is closely related to the notion of a chain. In fact, the presence of a prechain is equivalent to the determination of a positive form, and the existence of a close relation between the theories of bilinear forms and rigged Hilbert spaces is largely based on this fact.

Let $H_0$ be a Hilbert space and let $L$ be a linear set dense in this space with a scalar product $(f, g)_{L_+}$, $f, g \in L$, such that $\|f\|_{H_0} \le \|f\|_{L_+}$, where $\|f\|_{L_+} = (f, f)_{L_+}^{1/2}$, $f \in L$. In this case, we say that the prechain

$$H_0 \supseteq L \qquad (2.10.2)$$

is defined. Denote by $L_+$ the completion of $L$ with respect to the norm $\|\cdot\|_{L_+}$. Then all requirements of the above scheme are satisfied for $E_1 = H_0$ and $E_2 = L_+$. Let $Q: L_+ \to H_0$ be the corresponding operator. According to (2.9.23), for a given prechain (2.10.2), one can construct the chain

$$H_- \supseteq H_0 \supseteq H_+ = L_+ \ominus \operatorname{Ker} Q. \qquad (2.10.3)$$

We say that the prechain (2.10.2) is closed if $L$ is complete with respect to the norm $\|\cdot\|_{L_+}$, and closable (admitting a closure) if $\operatorname{Ker} Q = \{0\}$ (it is obvious that closedness implies closability). In view of Theorem 2.9.9, we can formulate the following criterion of closability: the prechain (2.10.2) is closable if and only if every sequence $(f_n)_{n=1}^{\infty} \subset L$ that is fundamental with respect to the norm $\|\cdot\|_{L_+}$ and convergent to zero with respect to the norm $\|\cdot\|_{H_0}$ converges to zero with respect to the norm $\|\cdot\|_{L_+}$.
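A standard illustration of how this criterion can fail (a numerical sketch under assumed data, not taken from the book's text): in $H_0 = L^2((0,1))$, take the prechain norm $\|f\|_{L_+}^2 = \|f\|_{H_0}^2 + |f(0)|^2$ on continuous functions. The hat functions $f_n(x) = \max(0, 1 - nx)$ are fundamental in $\|\cdot\|_{L_+}$ and tend to zero in $H_0$, yet $\|f_n\|_{L_+} \to 1 \ne 0$, so the prechain is not closable and $\operatorname{Ker} Q \ne \{0\}$:

```python
import numpy as np

# Assumed model (illustration only): H0 = L2(0,1) with the prechain norm
# ||f||_{L+}^2 = ||f||_{L2}^2 + |f(0)|^2.  The hats f_n(x) = max(0, 1 - n*x)
# are fundamental in || ||_{L+} and tend to 0 in H0, but NOT to 0 in || ||_{L+}.
x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def hat(n):
    return np.maximum(0.0, 1.0 - n * x)

def norm_H0(f):
    return np.sqrt(np.sum(f * f) * dx)           # quadrature for the L2 norm

def norm_Lplus(f):
    return np.sqrt(np.sum(f * f) * dx + f[0] ** 2)

f100, f200 = hat(100), hat(200)
print(norm_H0(f200))               # small: f_n -> 0 in H0
print(norm_Lplus(f100 - f200))     # small: (f_n) is fundamental in || ||_{L+}
print(norm_Lplus(f200))            # close to 1, not 0: the criterion fails
```

The quadrature values confirm that the $\|\cdot\|_{L_+}$-limit "remembers" the boundary value $f(0)$, which is lost in the $H_0$-limit.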


The chain $H_0 \supseteq L_+$ constructed from the closable prechain (2.10.2) by completing $L$ with respect to $\|\cdot\|_{L_+}$ is called the closure of the prechain (2.10.2). Throughout this book, we consider only closed or closable prechains $H_0 \supseteq L$ (in the sense of closedness, $L_+ = L$). Every prechain of this sort can be extended to the chain

$$H_- \supseteq H_0 \supseteq H_+ = L_+ \qquad (2.10.4)$$

by constructing the corresponding negative space.

Now, we pass in our presentation to the concept of forms. A function $D(a) \times D(a) \ni f, g \mapsto a(f, g) \in \mathbb{C}$, linear in the first variable and antilinear in the second variable, is called a bilinear (sesquilinear) form $a$ in a Hilbert space $H_0$ (here $D(a)$ is a linear set dense in $H_0$; it is the domain of the form $a$). The diagonal values of the functional $a(\cdot, \cdot)$ represent the quadratic form $a[\cdot]$ associated with the bilinear form under consideration, i.e., $D(a) \ni f \mapsto a[f] = a(f, f) \in \mathbb{C}$. For a given quadratic form, one can uniquely reconstruct the corresponding bilinear form by using the polarization identity

$$a(f, g) = \frac{1}{4}\big(a[f+g] - a[f-g] + ia[f+ig] - ia[f-ig]\big), \qquad f, g \in D(a). \qquad (2.10.5)$$

The linear operations are introduced on bilinear forms in a natural way. Thus, if $a$ and $b$ are two bilinear forms and the intersection $D(a) \cap D(b)$ is dense in $H_0$, then the bilinear form $a + b$ is defined by the equality

$$(a + b)(f, g) = a(f, g) + b(f, g), \qquad f, g \in D(a + b) = D(a) \cap D(b).$$
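The polarization identity (2.10.5) can be checked directly in a finite-dimensional model, where a sesquilinear form is given by an arbitrary (not necessarily Hermitian) complex matrix; a small sketch with assumed random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # arbitrary matrix

def a(f, g):
    # sesquilinear form a(f, g) = (Af, g): linear in f, antilinear in g
    return np.vdot(g, A @ f)          # np.vdot conjugates its first argument

def quad(f):                          # the associated quadratic form a[f]
    return a(f, f)

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = a(f, g)
rhs = 0.25 * (quad(f + g) - quad(f - g)
              + 1j * quad(f + 1j * g) - 1j * quad(f - 1j * g))
assert np.isclose(lhs, rhs)           # (2.10.5)
```

Note that all four quadratic terms are needed precisely because the form is complex-valued; over a real space the identity collapses to the familiar two-term version for symmetric forms.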

The product $\lambda a$, where $\lambda \in \mathbb{C}$, is always defined. Indeed,

$$(\lambda a)(f, g) = \lambda a(f, g), \qquad f, g \in D(\lambda a) = D(a).$$

For a given bilinear form $a$, one can always construct the adjoint bilinear form $a^*$ according to the equality

$$a^*(f, g) = \overline{a(g, f)}, \qquad f, g \in D(a^*) = D(a).$$

A bilinear form $a$ is called Hermitian if $a^* = a$. It follows from (2.10.5) that in order for $a$ to be Hermitian, it is necessary and sufficient that the quadratic form $a[\cdot]$ take only real values. Every bilinear form $a$ can be expressed as a linear combination of two Hermitian forms $\operatorname{Re} a$ and $\operatorname{Im} a$, namely,

$$a = \operatorname{Re} a + i\operatorname{Im} a, \qquad \operatorname{Re} a = \frac{1}{2}(a + a^*), \qquad \operatorname{Im} a = \frac{1}{2i}(a - a^*).$$


A bilinear form $a$ is called positive with vertex $\alpha > 0$ if

$$a(f, f) \ge \alpha \|f\|_{H_0}^2, \qquad f \in D(a). \qquad (2.10.6)$$

It is convenient to assume that $\alpha = 1$. Positive forms are always Hermitian because $a[\cdot]$ is real-valued. For a given positive form $a$ in $H_0$, one can naturally construct the prechain (2.10.2) by setting $L = D(a)$, $(f, g)_{L_+} = a(f, g)$, $f, g \in L$. Conversely, the prechain (2.10.2) determines the positive form

$$a(f, g) = (f, g)_{L_+}, \qquad f, g \in D(a) = L.$$

The definitions introduced above for prechains can be easily reformulated for positive forms. A positive form $a$ is called closed if the corresponding prechain is closed. The closure $\tilde{a}$ of a closable form $a$ is defined by the equality $\tilde{a}(f, g) = (f, g)_{L_+}$, where $f, g \in D(\tilde{a}) = L_+$, and $L$ and $(\cdot, \cdot)_{L_+}$ are constructed according to $a$. Thus, to calculate $\tilde{a}(f, g)$ for $f, g \in D(\tilde{a}) \subseteq H_0$, we must construct sequences $(f_n)_{n=1}^{\infty}, (g_m)_{m=1}^{\infty} \subset D(a)$, fundamental in the norm $(a[\cdot])^{1/2}$ and convergent in $H_0$ to $f$ and $g$, respectively. Then

$$\tilde{a}(f, g) = \lim_{n,m\to\infty} a(f_n, g_m).$$

Lemma 2.10.1 yields the following theorem on the representation of a positive form.

Theorem 2.10.2 Let $a$ be a closed positive bilinear form with vertex $\alpha = 1$. Then there exists a self-adjoint operator $A \ge I$ acting in the space $H_0$ such that

$$a(f, g) = (Af, g)_{H_0}, \qquad f \in D(A) \subseteq D(a),\ g \in D(a). \qquad (2.10.7)$$

Its domain $D(A)$ is dense in $D(a)$ with respect to the norm $(a[\cdot])^{1/2}$ and, moreover, $\|Af\|_{H_0} \ge (a[f])^{1/2}$, $f \in D(A)$. In addition to (2.10.7), the form $a$ admits the following representation in terms of the operator $\sqrt{A}$:

$$a(f, g) = (\sqrt{A}f, \sqrt{A}g)_{H_0}, \qquad f, g \in D(\sqrt{A}) = D(a). \qquad (2.10.8)$$
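In a finite-dimensional model the statements of Theorem 2.10.2 are elementary and can be verified directly; a sketch with an assumed random positive definite matrix $A \ge I$ and $\sqrt{A}$ built from the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)              # self-adjoint, A >= I (vertex alpha = 1)

w, V = np.linalg.eigh(A)             # spectral decomposition of A
sqrtA = V @ np.diag(np.sqrt(w)) @ V.T

def a(f, g):                         # the closed positive form represented by A
    return np.dot(A @ f, g)          # a(f, g) = (Af, g), cf. (2.10.7)

f = rng.standard_normal(n)
g = rng.standard_normal(n)

assert np.isclose(a(f, g), np.dot(sqrtA @ f, sqrtA @ g))   # (2.10.8)
assert a(f, f) >= np.dot(f, f) - 1e-12                     # vertex alpha = 1
```

In infinite dimensions the content of the theorem is precisely that $D(A)$ may be strictly smaller than $D(a) = D(\sqrt{A})$, a distinction invisible in this matrix sketch.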

Semi-Bounded Forms

Usually, the representation theory is based on semi-bounded forms. A bilinear form $a$ in the space $H_0$ is called semi-bounded (from below) with vertex $\alpha \in \mathbb{R}$ if

$$a(f, f) \ge \alpha \|f\|_{H_0}^2, \qquad f \in D(a). \qquad (2.10.9)$$


If $\alpha = 0$, then the form $a$ is called non-negative. For $\alpha > 0$, it is called a positive form, already introduced above. It is clear that semi-bounded forms are Hermitian. Each semi-bounded form $a$ can be associated with a positive form $a_p$ (whose vertex is equal to one) by setting

$$a_p(f, g) = a(f, g) + (1 - \alpha)(f, g)_{H_0}, \qquad f, g \in D(a_p) = D(a). \qquad (2.10.10)$$

Definitions related to the form $a$ are formulated in terms of the form $a_p$; namely, $a$ is closed (closable) if $a_p$ is closed (closable); the closure $\tilde{a}$ of a closable form $a$ is determined, according to (2.10.10), by the formula

$$\tilde{a}(f, g) = \tilde{a}_p(f, g) - (1 - \alpha)(f, g)_{H_0}, \qquad f, g \in D(\tilde{a}) = D(\tilde{a}_p); \qquad (2.10.11)$$

this closure is a semi-bounded form with the same vertex $\alpha$.

In the case of positive forms, we can act in a somewhat different manner. If $a$ is a positive form with vertex $\alpha$, then $\frac{1}{\alpha}a$ is a positive form with vertex one, and the application of the definitions introduced above to $\frac{1}{\alpha}a$ leads to the corresponding definitions for $a$ (since the norms $(a_p[\cdot])^{1/2}$ and $((\frac{1}{\alpha}a)[\cdot])^{1/2}$ are equivalent). This implies that, in the case of semi-bounded forms, $a_p$ can also be defined by relation (2.10.10) with "1" replaced with $\varepsilon > 0$. For semi-bounded forms, Theorem 2.10.2 takes the following form.

Theorem 2.10.3 Let $a$ be a closed semi-bounded bilinear form with vertex $\alpha \in \mathbb{R}$. There exists a self-adjoint operator $A \ge \alpha I$ acting in the space $H_0$ such that the representation (2.10.7) holds true. Its domain $D(A)$ is dense in $D(a)$ with respect to the norm $(a_p[\cdot])^{1/2}$. If $a$ is non-negative, then it is also representable in the form (2.10.8).

If the form $a$ admits a closure $\tilde{a}$, then we can write it in the forms (2.10.7) and (2.10.8), whence we get the required representations for the form $a$.

Let us dwell upon an important procedure for constructing extensions of semi-bounded operators to self-adjoint operators, which was introduced by K. O. Friedrichs. Consider an Hermitian semi-bounded operator $A \ge \alpha I$, $\alpha \in \mathbb{R}$, acting in $H_0$ with dense domain. It generates, in a standard way, a semi-bounded form $a$ with vertex $\alpha$:

$$a(f, g) = (Af, g)_{H_0}, \qquad f, g \in D(a) = D(A). \qquad (2.10.12)$$

The properties of this form are described by the following theorem.

Theorem 2.10.4 (Friedrichs) The bilinear form $a$ defined in (2.10.12) admits the closure $\tilde{a}$, representable in the form $\tilde{a}(f, g) = (A_F f, g)_{H_0}$, $f \in D(A_F) \subseteq D(\tilde{a})$, $g \in D(\tilde{a})$, where $A_F \ge \alpha I$ is a self-adjoint operator in $H_0$ which is an extension of the operator $A$ (the so-called Friedrichs extension). The operator $A_F$ is the unique self-adjoint extension of $A$ whose domain lies in $D(\tilde{a})$.
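The simplest classical instance (an illustrative sketch; the discretization details are our assumptions, not from the book): for $A = -d^2/dx^2$ on $C_0^\infty((0,1))$ in $L^2((0,1))$, the Friedrichs extension $A_F$ is the Dirichlet Laplacian, with eigenvalues $(k\pi)^2$. In a finite-difference model the Dirichlet boundary condition is built into the matrix:

```python
import numpy as np

# Finite-difference sketch of the Friedrichs (Dirichlet) extension of
# -d^2/dx^2 on (0, 1): zero boundary values are encoded in the tridiagonal
# matrix; eigenvalues should approximate (k*pi)^2.
n = 500
h = 1.0 / (n + 1)
A_F = (np.diag(2.0 * np.ones(n))
       - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2

eigs = np.linalg.eigvalsh(A_F)[:3]
print(eigs)       # approximately [pi^2, (2*pi)^2, (3*pi)^2]
```

Other boundary conditions (e.g. Neumann) give different self-adjoint extensions of the same minimal operator, but only the Dirichlet one has its domain inside the form domain $D(\tilde{a})$, which is what singles out $A_F$ in the theorem.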


The Form Sum of Operators

Let $A$ and $B$ be, respectively, self-adjoint and Hermitian operators in the space $H_0$. It is necessary to study the operator $A + B$, $D(A + B) = D(A) \cap D(B)$, and to indicate conditions under which it is self-adjoint or essentially self-adjoint. For this, the form sum method is considered here. First, we present the well-known result for forms which allows one, under certain restrictions, to make the operator $A + B$ meaningful even in the case where $D(A) \cap D(B) = \{0\}$.

Theorem 2.10.5 (KLMN) Let $a$ be a closed positive form and let $b$ be an Hermitian form on $D(b) = D(a)$ such that

$$|b(f, f)| \le p\,a(f, f) + q\,(f, f)_{H_0}, \qquad f \in D(a), \qquad (2.10.13)$$

for some $p \in [0, 1)$ and $q \in \mathbb{R}$. Then the form $a + b$, $D(a + b) = D(a)$, is semi-bounded and closed.

In the name of the theorem, the abbreviation KLMN indicates the authors who contributed to it: T. Kato (1955), P. Lax and A. Milgram (1954), J.-L. Lions (1961), and E. Nelson (1964) [151, 196, 200, 226].
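In a finite-dimensional model the bound (2.10.13) reads $-(pA + qI) \le B \le pA + qI$, and the KLMN conclusion gives the explicit lower bound $A + B \ge ((1-p)\lambda_{\min}(A) - q)I$; a sketch with assumed sample matrices:

```python
import numpy as np

# Finite-dimensional sketch of the KLMN bound (sample matrices are assumptions).
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # A >= I: closed positive form a
B = rng.standard_normal((n, n))
B = (B + B.T) / 2                        # Hermitian perturbation b

p = 0.5
# smallest q with |<Bf,f>| <= p<Af,f> + q<f,f>, i.e. p*A + q*I +/- B >= 0:
q = max(0.0,
        -np.linalg.eigvalsh(p * A - B).min(),
        -np.linalg.eigvalsh(p * A + B).min())

lo = (1 - p) * np.linalg.eigvalsh(A).min() - q   # KLMN lower bound for a + b
assert np.linalg.eigvalsh(A + B).min() >= lo - 1e-10
```

The point of the theorem in infinite dimensions is that $b$ need not come from any densely defined operator at all; only the form bound (2.10.13) with $p < 1$ is used.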

2.11 The Generalized Eigenvectors Expansion

As already noted earlier, the expansion in eigenvectors of a self-adjoint operator $A$ in a finite-dimensional Hilbert space $\mathcal{H}$ cannot be directly generalized to the case of an infinite-dimensional space because the operator $A$ may have no eigenvectors. We give a simple example of a self-adjoint operator $A$ that has no eigenvectors. Assume that $\mathcal{H} = L^2((a, b))$ with respect to the Lebesgue measure. Consider the operator $(Af)(x) = xf(x)$, $f \in L^2((a, b))$, $x \in (a, b)$, which is bounded and self-adjoint and has the spectrum $S(A) = [a, b]$. The equation for the eigenvector $\varphi \in L^2((a, b))$ that corresponds to a point $\lambda \in [a, b]$ of the spectrum has the form

$$(x - \lambda)\varphi(x) = 0. \qquad (2.11.1)$$

On the one hand, this implies that $\varphi(x) = 0$ almost everywhere, i.e., $\varphi = 0$ in $L^2((a, b))$ and, therefore, it is not an eigenvector in $\mathcal{H}$. On the other hand, the $\delta$-function at the point $\lambda$, i.e., $\varphi = \delta_\lambda$, is a formal solution of Eq. (2.11.1). As an element of a corresponding space, it differs from zero and, therefore, can be regarded as an eigenvector. Thus, the operator $A$ has no ordinary eigenvectors but, at the same time, it has eigenvectors which are generalized functions. It turns out that this is a general


property of self-adjoint operators in a separable space $\mathcal{H}$. We show that, under certain restrictions, the spectral theorem for $A$, i.e., the formulas

$$I = \int_{-\infty}^{\infty} dE(\lambda), \qquad A = \int_{-\infty}^{\infty} \lambda\, dE(\lambda), \qquad (2.11.2)$$

can be rewritten in a form similar to that in the case of a discrete spectrum, where

$$I = \sum_{k=1}^{\infty} P(\lambda_k), \qquad A = \sum_{k=1}^{\infty} \lambda_k P(\lambda_k).$$

Namely, formulas (2.11.2) can be rewritten in the form

$$I = \int_{-\infty}^{\infty} P(\lambda)\, d\rho(\lambda), \qquad A = \int_{-\infty}^{\infty} \lambda P(\lambda)\, d\rho(\lambda), \qquad (2.11.3)$$

where $\rho$ is a measure and $P(\lambda)$ is an operator of "generalized projection" whose range consists of the generalized eigenvectors of the operator $A$ that correspond to the eigenvalue $\lambda$. In what follows, we state a theorem of Radon-Nikodym type on the differentiation of an operator-valued measure with respect to its trace and present a corollary concerning the differentiation of a resolution of the identity.

The Differentiation of Operator-Valued Measures

Let us fix a chain

$$H_- \supseteq H_0 \supseteq H_+, \qquad (2.11.4)$$

in which all spaces are separable (clearly, it is sufficient to assume that $H_+$ is separable). Recall that an operator $A: H_+ \to H_-$ is called non-negative if $(Au, u)_{H_0} \ge 0$, $u \in H_+$. By definition, the trace of a non-negative operator is equal to

$$\operatorname{Tr}(A) = \sum_{j=1}^{\infty} (Ae_j, e_j)_{H_0},$$

where $(e_j)_{j=1}^{\infty}$ is an orthonormal basis in $H_+$. The value $\operatorname{Tr}(A)$ does not depend on the choice of this basis. Indeed, if $\mathbf{I}$ is the isometry associated with (2.11.4), then, by virtue of the relation $(\alpha, u)_{H_0} = (\mathbf{I}\alpha, u)_{H_+}$, $\alpha \in H_-$, $u \in H_+$, we can conclude that the non-negativity of $A$ is equivalent to the ordinary non-negativity of $\mathbf{I}A: H_+ \to H_+$ and $\operatorname{Tr}(A) = \operatorname{Tr}(\mathbf{I}A)$.
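For a self-adjoint operator with purely discrete spectrum, the discrete-spectrum formulas above become finite sums in a matrix model, which is easy to verify directly (an illustrative sketch with assumed random data):

```python
import numpy as np

# Matrix model of the discrete-spectrum decomposition:
# I = sum_k P(lambda_k),  A = sum_k lambda_k P(lambda_k).
rng = np.random.default_rng(3)
n = 7
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                    # self-adjoint; spectrum = n real eigenvalues
w, V = np.linalg.eigh(A)
P = [np.outer(V[:, k], V[:, k]) for k in range(n)]   # rank-one projectors P(lambda_k)

assert np.allclose(sum(P), np.eye(n))                        # resolution of identity
assert np.allclose(sum(w[k] * P[k] for k in range(n)), A)    # spectral decomposition
```

The continuous analogue (2.11.3) replaces the sum over eigenvalues by an integral against the measure $\rho$, with $P(\lambda)$ no longer an orthogonal projector in $H_0$ but a "generalized projection" acting from $H_+$ to $H_-$.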


Assume that $R$ is an abstract space, not necessarily equipped with a topology, and $\mathfrak{R}$ is a $\sigma$-algebra of sets from $R$. We say that a function $\mathfrak{R} \ni \alpha \mapsto \theta(\alpha)$ is an operator-valued measure with a finite trace if the following conditions are fulfilled: (a) $\theta(\alpha)$ is a non-negative operator from $H_+$ to $H_-$ such that $\theta(\varnothing) = 0$ and $\operatorname{Tr}(\theta(R)) < \infty$; (b) the property of countable additivity is fulfilled, i.e., if the sets $\alpha_j \in \mathfrak{R}$, $j \in \mathbb{N}$, do not intersect each other, then

$$\theta\Big(\bigcup_{j=1}^{\infty} \alpha_j\Big) = \sum_{j=1}^{\infty} \theta(\alpha_j),$$

where the series converges in the weak sense.

It follows from the additivity and non-negativity of $\theta$ that it is monotone, i.e., if $\alpha' \subseteq \alpha''$, then $\theta(\alpha') \le \theta(\alpha'')$. Therefore, $\theta(\alpha) \le \theta(R)$ and $\operatorname{Tr}(\theta(\alpha)) \le \operatorname{Tr}(\theta(R))$, $\alpha \in \mathfrak{R}$. Let us introduce a numerical non-negative function of sets

$$\mathfrak{R} \ni \alpha \mapsto \rho(\alpha) = \operatorname{Tr}(\theta(\alpha)).$$

If $\alpha_j \in \mathfrak{R}$, $j \in \mathbb{N}$, are disjoint, then, by virtue of condition (b) and the non-negativity of the terms, we have

$$\rho\Big(\bigcup_{j=1}^{\infty} \alpha_j\Big) = \operatorname{Tr}\Big(\theta\Big(\bigcup_{j=1}^{\infty} \alpha_j\Big)\Big) = \operatorname{Tr}\Big(\sum_{j=1}^{\infty} \theta(\alpha_j)\Big) = \sum_{k=1}^{\infty} \Big(\Big(\sum_{j=1}^{\infty} \theta(\alpha_j)\Big)e_k, e_k\Big)_{H_0}$$
$$= \sum_{k=1}^{\infty} \sum_{j=1}^{\infty} (\theta(\alpha_j)e_k, e_k)_{H_0} = \sum_{j=1}^{\infty} \operatorname{Tr}(\theta(\alpha_j)) = \sum_{j=1}^{\infty} \rho(\alpha_j).$$

Thus, $\mathfrak{R} \ni \alpha \mapsto \rho(\alpha)$ is a numerical non-negative finite measure. The measure $\rho$ is called the trace measure for $\theta$.


Theorem 2.11.1 An operator-valued measure $\theta$ with a finite trace can be differentiated with respect to its trace measure $\rho$. This means that there exists an operator-valued function

$$Q(\lambda): H_+ \to H_-, \qquad Q(\lambda) \ge 0, \qquad \|Q(\lambda)\| \le \operatorname{Tr}(Q(\lambda)) = 1,$$

weakly measurable with respect to $\mathfrak{R}$, defined for $\rho$-almost all $\lambda \in R$, and such that

$$\theta(\alpha) = \int_{\alpha} Q(\lambda)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R} \qquad (2.11.5)$$
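A purely atomic model makes the statement transparent (a sketch under the assumption that $R$ is a finite set, so the integral is a sum): $\rho(\{\lambda\}) = \operatorname{Tr}\theta(\{\lambda\})$, and the derivative $Q(\lambda) = \theta(\{\lambda\})/\rho(\{\lambda\})$ has unit trace and reconstructs $\theta$:

```python
import numpy as np

# Discrete model of Theorem 2.11.1 (assumed data): R = {0, 1, 2},
# theta({lam}) a non-negative matrix, rho the trace measure,
# Q(lam) = theta({lam}) / rho({lam}) the Radon-Nikodym derivative.
rng = np.random.default_rng(4)
n = 4
atoms = []
for _ in range(3):
    M = rng.standard_normal((n, n))
    atoms.append(M @ M.T)                    # theta({lam}) >= 0

rho = [np.trace(t) for t in atoms]           # trace measure of each atom
Q = [t / r for t, r in zip(atoms, rho)]      # derivatives, Tr Q(lam) = 1

for q in Q:
    assert np.isclose(np.trace(q), 1.0)

alpha = [0, 2]                               # reconstruction (2.11.5) over a set
theta_alpha = sum(atoms[i] for i in alpha)
assert np.allclose(theta_alpha, sum(Q[i] * rho[i] for i in alpha))
```

The theorem asserts that the same normalization survives the passage to a general measure: $Q(\lambda)$ exists $\rho$-almost everywhere with $\operatorname{Tr} Q(\lambda) = 1$.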

(the integral converges in the Hilbert-Schmidt norm $\|\cdot\|$). The function $Q(\lambda)$ is uniquely defined to within its values on a set of $\rho$-measure zero and is called the Radon-Nikodym derivative, $(d\theta/d\rho)(\lambda) = Q(\lambda)$.

Note that the convergence of the integral (2.11.5) in the Hilbert-Schmidt norm means its convergence in the Bochner sense if $Q(\lambda)$ is regarded as a vector-function with values in the space of Hilbert-Schmidt operators from $H_+$ to $H_-$. The strong measurability of $Q(\lambda)$ can be proved as follows. Let $(Q_{jk}(\lambda))_{j,k=1}^{\infty}$ be the matrix of $Q(\lambda)$. One can construct the sequence of truncated matrices $(Q_{jk}(\lambda))_{j,k=1}^{n}$, $n \in \mathbb{N}$; the corresponding finite-dimensional operators converge to $Q(\lambda)$ for every $\lambda \in R$ in the norm $\|\cdot\|$, and, as is well known, the measurable functions $Q_{jk}(\lambda)$, $j, k = 1, 2, \ldots, n$, can be approximated by simple functions.

Proof We fix an orthonormal basis $(e_j)_{j=1}^{\infty}$ in the space $H_+$. The measure $\theta$ is absolutely continuous with respect to $\rho$, i.e., if $\rho(\alpha) = 0$, then $\theta(\alpha) = 0$, $\alpha \in \mathfrak{R}$. Indeed,

$$|(\theta(\alpha)e_j, e_k)_{H_0}|^2 \le (\theta(\alpha)e_j, e_j)_{H_0}(\theta(\alpha)e_k, e_k)_{H_0} \le \rho^2(\alpha) = 0, \qquad j, k \in \mathbb{N}.$$

This implies that, for fixed $u, v \in H_+$, the complex-valued measure $\mathfrak{R} \ni \alpha \mapsto (\theta(\alpha)u, v)_{H_0} \in \mathbb{C}$ is also absolutely continuous with respect to $\rho$ and, according to the ordinary Radon-Nikodym theorem, we have

$$(\theta(\alpha)u, v)_{H_0} = \int_{\alpha} q(\lambda; u, v)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R},\ u, v \in H_+, \qquad (2.11.6)$$

where the derivative $q(\lambda; u, v)$ is defined on a set $\beta_{u,v} \subseteq R$ of full measure $\rho$, measurable with respect to $\mathfrak{R}$, and summable; for $u = v$, it is non-negative. Denote by $L$ the linear span of the vectors $(e_j)_{j=1}^{\infty}$ with rational complex coefficients; $\overline{L} = H_+$. Since $L$ is countable, the set $\bigcap_{u,v \in L} \beta_{u,v}$ is also a set of full measure; for $\lambda$ from this set, all functions $q(\lambda; u, v)$, $u, v \in L$, are defined and $q(\lambda; u, u) \ge 0$, $u \in L$. Since the derivative is uniquely defined to within its values on a set of measure zero, the bilinearity of the left-hand side of (2.11.6) with respect to $u$ and $v$ yields


the bilinearity of $q(\lambda; u, v)$. More exactly, there exists a set of full measure $\beta \subseteq \bigcap_{u,v \in L} \beta_{u,v}$ such that, for $\lambda \in \beta$,

$$q(\lambda; p_1 u_1 + p_2 u_2, r_1 v_1 + r_2 v_2) = p_1 \bar{r}_1 q(\lambda; u_1, v_1) + p_1 \bar{r}_2 q(\lambda; u_1, v_2) + p_2 \bar{r}_1 q(\lambda; u_2, v_1) + p_2 \bar{r}_2 q(\lambda; u_2, v_2)$$

for any $u_1, u_2, v_1, v_2 \in L$ and complex rational $p_1, p_2, r_1, r_2$. To prove this, we use the bilinearity of $(\theta(\alpha)u, v)_{H_0}$ and the arbitrariness of $\alpha \in \mathfrak{R}$ in (2.11.6) and conclude that this equality holds true for $\lambda$ from a set $\beta_{p_1,p_2,r_1,r_2,u_1,u_2,v_1,v_2} \subseteq \bigcap_{u,v \in L} \beta_{u,v}$ of full measure. Then we take the (countable) intersection of all such sets as $\beta$. Furthermore, as was mentioned above, for such $\lambda$ we have $q(\lambda; u, u) \ge 0$, $u \in L$. The bilinearity and non-negativity yield, in a standard way, the Cauchy-Bunyakovsky inequality

$$|q(\lambda; u, v)|^2 \le q(\lambda; u, u)\, q(\lambda; v, v), \qquad \lambda \in \beta,\ u, v \in L. \qquad (2.11.7)$$

By setting $u = v = e_j$ in (2.11.6), summing over $j \in \mathbb{N}$, and using the Fubini theorem, we get

$$\rho(\alpha) = \int_{\alpha} \Big(\sum_{j=1}^{\infty} q(\lambda; e_j, e_j)\Big) d\rho(\lambda), \qquad \alpha \in \mathfrak{R}.$$

Hence, for almost all $\lambda \in \beta$, we have

$$\sum_{j=1}^{\infty} q(\lambda; e_j, e_j) = 1. \qquad (2.11.8)$$

Reducing, if necessary, the set $\beta$, we can assume that (2.11.8) holds true for all $\lambda \in \beta$. Relations (2.11.7) and (2.11.8) yield

$$\sum_{j,k=1}^{\infty} |q(\lambda; e_j, e_k)|^2 \le \sum_{j,k=1}^{\infty} q(\lambda; e_j, e_j)\, q(\lambda; e_k, e_k) = 1, \qquad \lambda \in \beta. \qquad (2.11.9)$$

Let us fix $\lambda \in \beta$ and denote by $A(\lambda)$ the operator in $H_+$ that corresponds to the matrix $(a_{jk}(\lambda))_{j,k=1}^{\infty}$ in the basis $(e_j)_{j=1}^{\infty}$; here, $a_{jk}(\lambda) = q(\lambda; e_k, e_j)$. By virtue of (2.11.9), $A(\lambda)$ is well defined and is a Hilbert-Schmidt operator. The measurability of each function

$$\beta \ni \lambda \mapsto q(\lambda; e_j, e_k), \qquad j, k \in \mathbb{N},$$


implies that the operator-valued function $\beta \ni \lambda \mapsto A(\lambda)$ is weakly measurable. Let us introduce the continuous operator $Q(\lambda) = \mathbf{I}^{-1}A(\lambda): H_+ \to H_-$ and show that this operator is the required one. It follows from the measurability of $A(\lambda)$ that $\beta \ni \lambda \mapsto Q(\lambda)$ is weakly measurable. Further, for $u = \sum_{k} p_k e_k$, $v = \sum_{j} r_j e_j \in L$, we have

$$(Q(\lambda)u, v)_{H_0} = (\mathbf{I}^{-1}A(\lambda)u, v)_{H_0} = (A(\lambda)u, v)_{H_+} = \sum_{j,k} q(\lambda; e_k, e_j)\, p_k \bar{r}_j = q(\lambda; u, v). \qquad (2.11.10)$$

In particular, $(Q(\lambda)u, u)_{H_0} = q(\lambda; u, u) \ge 0$. By passing to the limit, we find that the inequality remains valid for an arbitrary $u \in H_+$, i.e., $Q(\lambda) \ge 0$. According to (2.11.8), $\operatorname{Tr}(Q(\lambda)) = \operatorname{Tr}(A(\lambda)) = 1$, $\lambda \in \beta$. Thus, $\|Q(\lambda)\| \le 1$ and $Q(\lambda)$, $\lambda \in \beta$, is weakly measurable. Hence, there exists the integral

$$\int_{\alpha} Q(\lambda)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R},$$

convergent in the Hilbert-Schmidt norm. According to (2.11.10) and (2.11.6), for $u, v \in L$ and $\alpha \in \mathfrak{R}$, we have

$$\Big(\Big(\int_{\alpha} Q(\lambda)\, d\rho(\lambda)\Big)u, v\Big)_{H_0} = \int_{\alpha} (Q(\lambda)u, v)_{H_0}\, d\rho(\lambda) = \int_{\alpha} q(\lambda; u, v)\, d\rho(\lambda) = (\theta(\alpha)u, v)_{H_0},$$

(2.11.11)

i.e., (2.11.5) holds true.

Finally, let us establish the uniqueness of $Q(\lambda)$. Assume that, parallel with $Q(\lambda)$, there is an operator-valued function $Q_1(\lambda)$ of the same type satisfying the equality

$$\int_{\alpha} Q(\lambda)\, d\rho(\lambda) = \int_{\alpha} Q_1(\lambda)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R}.$$

Then, for every $u, v \in L$, we have $(Q(\lambda)u, v)_{H_0} = (Q_1(\lambda)u, v)_{H_0}$ for $\lambda$ from a set of full measure $\beta_{1;u,v} \subseteq R$. But then, for $\lambda$ from the set of full measure $\beta_1 = \bigcap_{u,v \in L} \beta_{1;u,v}$, we have

$$(Q(\lambda)u, v)_{H_0} = (Q_1(\lambda)u, v)_{H_0}$$


for all $u, v \in L$. This and the continuity of the operators $Q(\lambda)$ and $Q_1(\lambda)$ for each fixed $\lambda \in \beta_1$ imply that $Q(\lambda) = Q_1(\lambda)$. ∎

In this way, we can also consider an operator-valued measure with a $\sigma$-finite trace. This means that there exists a sequence $(R_k)_{k=1}^{\infty} \subset \mathfrak{R}$ such that

$$R_1 \subseteq R_2 \subseteq \cdots, \qquad \bigcup_{k=1}^{\infty} R_k = R, \qquad \operatorname{Tr}(\theta(R_k)) < \infty, \quad k \in \mathbb{N}.$$

In this case, Theorem 2.11.1 is modified as follows: one must consider a $\sigma$-finite trace measure instead of a finite one and state that the representation (2.11.5) holds true for every $\mathfrak{R} \ni \alpha \subseteq R_k$ for some $k \in \mathbb{N}$. The proof remains the same.

The formulation of Theorem 2.11.1 can be made similar to the Radon-Nikodym theorem. Namely, assume that we are given an operator-valued measure $\theta$ with a $\sigma$-finite trace and a $\sigma$-finite non-negative numerical measure $\mathfrak{R} \ni \alpha \mapsto \rho(\alpha) \in [0, \infty]$ with respect to which $\theta$ is absolutely continuous (i.e., if $\rho(\alpha) = 0$ for some $\alpha \in \mathfrak{R}$, then $\theta(\alpha) = 0$). In this case, the representation (2.11.5) holds true for $\mathfrak{R} \ni \alpha \subseteq R_k$, $k \in \mathbb{N}$, where $Q(\lambda)$ is a weakly measurable operator-valued function defined for $\rho$-almost all $\lambda \in R$. The values of the function $Q(\lambda)$ are non-negative operators from $H_+$ to $H_-$, each having a finite trace summable with respect to $\rho$ over $R_k$, $k \in \mathbb{N}$ (one should write the representation (2.11.5) with the trace measure and differentiate this measure with respect to $\rho$).

The Differentiation of a Resolution of the Identity

Assume that $R$ is an abstract space, $\mathfrak{R}$ is a $\sigma$-algebra of its sets, and $\mathfrak{R} \ni \alpha \mapsto E(\alpha)$ is a general resolution of the identity acting in the space $H_0$. As a rule, the measure $E$ has no finite or $\sigma$-finite trace and, therefore, Theorem 2.11.1 cannot be directly applied. It is convenient to proceed as follows. Assume that the rigging (2.11.4) of the space $H_0$ is given. Let $O: H_+ \to H_0$ and $O^+: H_0 \to H_-$ be the corresponding imbedding operators (by the equality

$$(f, Ou)_{H_0} = (f, u)_{H_0} = (O^+ f, u)_{H_0}, \qquad f \in H_0,\ u \in H_+,$$

it follows that $O^+$ is, in fact, adjoint to $O$ with respect to $H_0$). The function

$$\mathfrak{R} \ni \alpha \mapsto \theta(\alpha) = O^+ E(\alpha) O, \qquad (2.11.12)$$

whose values are continuous operators from $H_+$ to $H_-$, is an operator-valued measure ($O^+ E(\alpha) O \ge 0$ because

$$(O^+ E(\alpha) O u, u)_{H_0} = (E(\alpha) O u, O u)_{H_0} \ge 0$$

for $u \in H_+$). Recall that the rigging (2.11.4) is called quasi-nuclear if the imbedding operator $O$ is quasi-nuclear.


Lemma 2.11.2 If the rigging (2.11.4) is quasi-nuclear, then the operator-valued measure (2.11.12) has a finite trace.

We note also that if $A: H_0 \to H_0$ is a non-negative operator, then $O^+ A O: H_+ \to H_-$ is also non-negative and

$$\operatorname{Tr}(O^+ A O) \le \|A\|\, \|O\|^2, \qquad (2.11.13)$$

where $\|O\|$ denotes the Hilbert-Schmidt norm of the imbedding operator $O$. Indeed, the inequality $O^+ A O \ge 0$ has already been explained by the example of $A = E(\alpha)$. Further, let $(e_j)_{j=1}^{\infty}$ be an orthonormal basis in $H_+$. Then

$$\operatorname{Tr}(O^+ A O) = \sum_{j=1}^{\infty} (O^+ A O e_j, e_j)_{H_0} = \sum_{j=1}^{\infty} (A O e_j, O e_j)_{H_0} \le \|A\| \sum_{j=1}^{\infty} \|O e_j\|_{H_0}^2 = \|A\|\, \|O\|^2.$$

Proof of Lemma 2.11.2 According to (2.11.13), we have $\operatorname{Tr}(\theta(R)) = \operatorname{Tr}(O^+ E(R) O) \le \|O\|^2 < \infty$. ∎
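The estimate (2.11.13) can be checked in a finite-dimensional model, where the imbedding $O$ is an arbitrary matrix (its Hilbert-Schmidt norm is the Frobenius norm) and $A$ is non-negative; a sketch with assumed random data:

```python
import numpy as np

# Finite-dimensional check of Tr(O^+ A O) <= ||A|| * ||O||_HS^2.
rng = np.random.default_rng(5)
m, n = 6, 4
O = rng.standard_normal((m, n))          # "imbedding" H_+ -> H_0 (Hilbert-Schmidt)
M = rng.standard_normal((m, m))
A = M @ M.T                              # non-negative operator on H_0

lhs = np.trace(O.T @ A @ O)              # Tr(O^+ A O)
opnorm = np.linalg.norm(A, 2)            # operator norm ||A||
hs2 = np.linalg.norm(O, 'fro') ** 2      # ||O||^2 (Hilbert-Schmidt norm squared)
assert lhs <= opnorm * hs2 + 1e-10
```

The proof of the lemma is exactly this computation carried out over an infinite orthonormal basis, with quasi-nuclearity of $O$ guaranteeing that the Hilbert-Schmidt sum is finite.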

Let us fix the quasi-nuclear rigging (2.11.4). The non-negative finite measure $\mathfrak{R} \ni \alpha \mapsto \rho(\alpha) = \operatorname{Tr}(O^+ E(\alpha) O) \in [0, \infty)$ is called the spectral measure of the resolution of the identity $E$. Clearly, $E$ and $\rho$ are mutually absolutely continuous: for each $\alpha \in \mathfrak{R}$, the equalities $E(\alpha) = 0$ and $\rho(\alpha) = 0$ are equivalent. By applying Theorem 2.11.1 to (2.11.12) and $\rho$, we obtain the following assertion.

Theorem 2.11.3 Suppose that $\mathfrak{R} \ni \alpha \mapsto E(\alpha)$ is a resolution of the identity acting in the space $H_0$, (2.11.4) is a fixed quasi-nuclear rigging, and $\mathfrak{R} \ni \alpha \mapsto \rho(\alpha) \in [0, \infty)$ is the corresponding spectral measure. Then the following representation, in the form of an integral convergent in the Hilbert-Schmidt norm, holds true:

$$O^+ E(\alpha) O = \int_{\alpha} P(\lambda)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R}. \qquad (2.11.14)$$

Here $P(\lambda): H_+ \to H_-$ is an operator-valued function weakly measurable with respect to $\mathfrak{R}$, defined for $\rho$-almost all $\lambda \in R$, and such that $P(\lambda) \ge 0$ and $\|P(\lambda)\| \le \operatorname{Tr}(P(\lambda)) = 1$. The operator $P(\lambda)$ is called a generalized projection.


In the case of the resolution of the identity $E$ of a self-adjoint operator in $H_0$ with a discrete spectrum $(\lambda_j)_{j=1}^{\infty}$, the equality

$$E(\alpha) = \sum_{\lambda_j \in \alpha} P(\lambda_j), \qquad \alpha \in \mathcal{B}(\mathbb{R}),$$

holds true, in which $P(\lambda_j)$ is the projector onto the eigensubspace of $A$ corresponding to the eigenvalue $\lambda_j$. The comparison of this formula with (2.11.14) has determined the choice of the term "generalized projector".

We can also introduce the concept of a general spectral measure corresponding to the resolution of the identity $E$. This measure is defined as a $\sigma$-finite non-negative measure $\mathfrak{R} \ni \alpha \mapsto \rho(\alpha) \in [0, \infty]$ such that $\rho$ and $E$ are mutually absolutely continuous. The representation (2.11.14) remains valid for a general spectral measure except that the operator $P(\lambda)$ acquires a scalar multiplier.

The Case of a Nuclear Rigging

Similar results can be obtained if we use the nuclear rigging

$$\Phi' \supseteq H_0 \supseteq \Phi \qquad (2.11.15)$$

of the Hilbert space $H_0$ instead of its quasi-nuclear rigging (2.11.4) (recall that a rigging is called nuclear if $\Phi = \operatorname{pr\,lim}_{\tau \in T} H_\tau$ is a nuclear space). More exactly, the case of the chain (2.11.15) is reduced to the chain (2.11.4).

Consider the rigging (2.11.15) and denote by $O$ and $O^+$ the imbedding operators of $\Phi \subseteq H_0$ and $H_0 \subseteq \Phi'$, respectively. We also consider continuous operators $A: \Phi \to \Phi'$. These operators are called non-negative ($A \ge 0$) if $(A\varphi, \varphi)_{H_0} \ge 0$, $\varphi \in \Phi$. In particular, if $\mathfrak{R} \ni \alpha \mapsto E(\alpha)$ is the resolution of the identity mentioned above, then the operator $O^+ E(\alpha) O: \Phi \to \Phi'$ is non-negative. The function $\mathfrak{R} \ni \alpha \mapsto \theta(\alpha) = O^+ E(\alpha) O$ is a measure similar to those considered before but with values in $\mathcal{L}(\Phi, \Phi')$.

Theorem 2.11.4 Let $\mathfrak{R} \ni \alpha \mapsto E(\alpha)$ and $\rho(\alpha)$ be a resolution of the identity acting in the space $H_0$ and some spectral measure corresponding to it, respectively. Suppose that (2.11.15) is a fixed nuclear rigging. Then the representation (2.11.14) holds true in the form of a weakly convergent integral, in which $0 \le P(\lambda): \Phi \to \Phi'$ (a generalized projector) is an operator-valued function weakly measurable with respect to $\mathfrak{R}$ and defined for $\rho$-almost all $\lambda \in R$.

Proof As was shown above, every $H_\tau$, $\tau \in T$, is imbedded densely and continuously in $H_0$, and $0 \in T$. We choose $\tau$ so that the imbedding $H_\tau \subseteq H_0$ is quasi-nuclear; this is possible because $\Phi$ is nuclear. As a result, we obtain the chain

$$\Phi' \supseteq H_{-\tau} \supseteq H_0 \supseteq H_\tau \supseteq \Phi \qquad (2.11.16)$$


with dense and continuous imbedding operators. Let us take the central part of the chain (2.11.16) as (2.11.4) and introduce the imbedding operators $O_1: H_\tau \to H_0$ and $O_1^+: H_0 \to H_{-\tau}$. Then, by virtue of Theorem 2.11.3 and in accordance with the previous remark, we get

$$O_1^+ E(\alpha) O_1 = \int_{\alpha} P_1(\lambda)\, d\rho(\lambda), \qquad \alpha \in \mathfrak{R}, \qquad (2.11.17)$$

where $P_1(\lambda): H_\tau \to H_{-\tau}$ is the corresponding generalized projector. Consider the imbedding operators $O_2: \Phi \to H_\tau$ and $O_3: H_{-\tau} \to \Phi'$. Multiplying (2.11.17) from the right and from the left by $O_2$ and $O_3$, respectively, we obtain the required equality (2.11.14) with $P(\lambda) = O_3 P_1(\lambda) O_2$. The convergence of the integral (2.11.17) in the Hilbert-Schmidt norm (from $H_\tau$ to $H_{-\tau}$) obviously yields the weak convergence of the integral (2.11.14). ∎

The results presented in this section are valid, e.g., for the resolution of the identity corresponding to a given self-adjoint or normal operator. In the next considerations, we examine this situation in detail.

Generalized Eigenvectors and the Projection Spectral Theorem

Consider the simplest classic case of only one self-adjoint operator $A$ acting in a rigged Hilbert space $H_0$. Assume that there exists a linear topological space $D$ densely and continuously imbedded in $H_+$ such that $D \subseteq D(A)$ and the restriction $A \upharpoonright D$ acts continuously from $D$ to $H_+$. In this case, instead of (2.11.4), we have the rigging (chain)

$$H_- \supseteq H_0 \supseteq H_+ \supseteq D. \qquad (2.11.18)$$

We say that the operator $A$ with the mentioned properties and the rigging (2.11.18) are standardly connected (or that $A$ admits (2.11.18)). The chain (2.11.18) is called an extension of (2.11.4). As before, (2.11.18) is quasi-nuclear by definition if the imbedding of the (separable) space $H_+$ in $H_0$ is quasi-nuclear.

A non-zero vector $\varphi \in H_-$ is called a generalized eigenvector of the operator $A$ corresponding to an eigenvalue $\lambda \in \mathbb{C}$ if

$$(\varphi, Au)_{H_0} = \lambda(\varphi, u)_{H_0}, \qquad u \in D. \qquad (2.11.19)$$

If $\varphi \in D(A)$, then the operator $A$ in (2.11.19) can be transferred to $\varphi$ and, since $u \in D$ is arbitrary, we get $A\varphi = \lambda\varphi$, i.e., $\varphi$ is an ordinary eigenvector of $A$ corresponding to the eigenvalue $\lambda$. Thus, the definition introduced above generalizes the classic concept. The collection of eigenvalues corresponding to all possible generalized eigenvectors is called the generalized spectrum $g(A)$ of the operator $A$. Clearly, this spectrum depends on the choice of the rigging (2.11.18) and, generally speaking, differs from the spectrum $S(A)$ of the operator $A$.
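For the multiplication operator $(Af)(x) = xf(x)$ from the beginning of this section, the generalized eigenvectors $\delta_\lambda$ can be glimpsed in a discretization (an illustrative sketch, not from the book): the grid matrix $\operatorname{diag}(x_1, \ldots, x_n)$ has the standard basis vectors, grid analogues of $\delta$-functions, as eigenvectors, and these have no limit in $L^2$ as the grid is refined:

```python
import numpy as np

# Discretized multiplication operator (Af)(x) = x f(x) on (0, 1): the matrix
# diag(x_1, ..., x_n).  Its eigenvectors are the standard basis vectors e_j,
# grid approximations of the delta-functions delta_{x_j}.
n = 100
xgrid = np.linspace(0.0, 1.0, n)
A = np.diag(xgrid)
w, V = np.linalg.eigh(A)

j = 40
assert np.isclose(w[j], xgrid[j])               # eigenvalue = grid point
assert np.isclose(np.abs(V[:, j]).max(), 1.0)   # eigenvector = +/- e_j, a "delta"
```

Normalized in $L^2$ of the grid, these spikes grow without bound pointwise as $n \to \infty$, which is the discrete shadow of the fact that $\delta_\lambda \in H_-$ but $\delta_\lambda \notin H_0$.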


Example Let H0 = L2(R, dx) with respect to the Lebesgue measure dx and let A be the minimal operator generated by the expression (Lu)(x) = −iu′(x), i.e., the closure of the operator H0 ⊇ C0∞(R) ∋ u → −iu′(x) ∈ H0; this operator is self-adjoint and S(A) = R. Assume that H+ = L2(R, e^{x²} dx) and D = D(R). Then the conditions concerning the rigging (2.11.18) are satisfied and H− = L2(R, e^{−x²} dx). For every λ ∈ C, the function ϕ(x) = e^{iλx}, x ∈ R, belongs to H− and satisfies (2.11.19). Thus,

g(A) = C ≠ R = S(A),   g(A) ⊃ S(A).
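The example above can be checked numerically. The following sketch is an illustration only (not part of the book); it assumes NumPy and verifies the defining relation (2.11.19), (ϕ, Au)H0 = λ(ϕ, u)H0, on a finite grid for ϕ(x) = e^{iλx}, A = −i d/dx, and a Gaussian test function u:

```python
import numpy as np

# Illustrative check of (phi, A u) = lam * (phi, u) for A = -i d/dx,
# phi(x) = exp(i*lam*x) (a generalized eigenvector, not square-integrable
# with respect to dx) and a smooth, rapidly decaying test function u.
x = np.linspace(-10.0, 10.0, 20_001)
dx = x[1] - x[0]
u = np.exp(-x**2)            # test function from D, effectively supported in [-10, 10]
du = np.gradient(u, x)       # numerical derivative u'
lam = 1.7                    # any real lambda
phi = np.exp(1j * lam * x)

# inner product convention: (f, g) = integral of f * conj(g)
lhs = np.sum(phi * np.conj(-1j * du)) * dx    # (phi, A u)
rhs = lam * np.sum(phi * np.conj(u)) * dx     # lam * (phi, u)
assert abs(lhs - rhs) < 1e-4
```

Integration by parts moves the derivative onto ϕ, which is exactly why the generalized eigenvalue relation makes sense even though ϕ does not belong to H0.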

Consider the same operator A and assume that H+ = H0. In this case, H− = H0. There are no vectors H− ∋ ϕ ≠ 0 satisfying (2.11.19). Indeed, each ϕ ∈ D′(R) ⊃ H0 satisfying (2.11.19) is a generalized solution of the equation −iϕ′ = λϕ and, therefore, ϕ(x) = ce^{iλx}, x ∈ R, C ∋ c ≠ 0. But such a function ϕ cannot belong to H0 = H− for any λ ∈ C. Thus, g(A) = ∅ ≠ R = S(A), g(A) ⊂ S(A).

Assume that A is a self-adjoint operator acting in a separable Hilbert space H0 and standardly connected with the quasi-nuclear rigging (2.11.18). According to Theorem 2.11.3, its resolution of the identity E, defined on the σ-algebra of Borel sets B(R), admits the representation (2.11.14), i.e.,

O+ E(α) O = ∫α P(λ) dρ(λ),   α ∈ B(R).    (2.11.20)

Here, by definition,

B(R) ∋ α → ρ(α) = Tr(O+ E(α) O) ∈ [0, ∞)

is the spectral measure of A and P(λ) : H+ → H− is an operator-valued function, weakly measurable with respect to B(R), defined for ρ-almost all λ ∈ R and such that P(λ) ≥ 0 and ‖P(λ)‖ ≤ Tr(P(λ)) = 1. Since, for B ∈ L(H+, H−), B ≥ 0, the equalities B = 0 and Tr(B) = 0 are equivalent, we have S(A) = supp E = supp ρ. Therefore, we can assume that λ takes values not in R but in S(A). Naturally, such a measure, connected with the resolution of the identity of the operator A, is called a general spectral measure of this operator.

Theorem 2.11.5 (BGKM) Let A be a self-adjoint operator acting in a separable Hilbert space H0 and standardly connected with the quasi-nuclear chain (2.11.18) in which D is separable. Then there exists an operator-valued function P(λ), weakly measurable, defined for almost all λ from the spectrum S(A) in the sense of the spectral measure ρ, and such that its values are non-negative operators from the positive space H+ to the negative space H− and ‖P(λ)‖ ≤ Tr(P(λ)) = 1, which


realizes the following representation of the operator A and its resolution of the identity E:

E(α)u = (∫α P(λ) dρ(λ)) u,   α ∈ B(R), u ∈ H+,
Au = (∫S(A) λ P(λ) dρ(λ)) u,   u ∈ D(A) ∩ H+.    (2.11.21)

The range R(P(λ)) ⊆ H− consists of generalized eigenvectors ϕ of the operator A corresponding to the eigenvalue λ, i.e.,

(ϕ, Au)H0 = λ(ϕ, u)H0,   u ∈ D.    (2.11.22)
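In a finite-dimensional space, (2.11.21) reduces to the ordinary spectral decomposition, with ρ a counting measure on the eigenvalues and P(λ) the orthogonal projectors onto the eigenspaces. A minimal NumPy sketch (an illustration only, not the book's construction):

```python
import numpy as np

# Finite-dimensional analogue of (2.11.21): for a symmetric matrix A,
# E(alpha) is the sum of the orthogonal eigenprojectors P_k with
# eigenvalue lam_k in alpha, and A = sum_k lam_k * P_k.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                                  # real symmetric operator on R^5
lam, V = np.linalg.eigh(A)                   # eigenvalues, orthonormal eigenvectors
P = [np.outer(V[:, k], V[:, k]) for k in range(5)]

# A = sum lam_k P_k  (second formula in (2.11.21), counting measure)
assert np.allclose(A, sum(l * p for l, p in zip(lam, P)))

# E(alpha) for the Borel set alpha = (0, inf)
E_pos = sum((p for l, p in zip(lam, P) if l > 0), np.zeros((5, 5)))
assert np.allclose(E_pos @ E_pos, E_pos)     # E(alpha) is an orthogonal projector
assert np.allclose(A @ E_pos, E_pos @ A)     # E(alpha) commutes with A
```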

In the name of the theorem, the abbreviation BGKM means that the following authors contributed to the theorem: Berezansky Yu. M. (1956), Gårding L. (1954), Gelfand I. M., Kostyuchenko A. G. (1955), Kac G. I. (1958), and Maurin K. (1958) [18–21, 111, 116, 131, 132, 146, 210, 212, 213]. Thus, formulas (2.11.21), in fact, have the form of the representation (2.11.3) with selected "projectors on the eigensubspaces" consisting of generalized eigenvectors. In this connection, the spectral theorem for A in the form (2.11.21) is called the projection theorem.

Proof Compare Theorem 2.11.5 with Theorem 2.11.3 in the particular case of the resolution of the identity E corresponding to A (see (2.11.20)). The second formula in (2.11.21) follows from the representation (2.4.1) and the equality dE(λ)u = P(λ) dρ(λ) u for u ∈ H+, which is, in fact, the first equality in (2.11.21) (i.e., (2.11.20)) rewritten in a different form. Let us prove that ϕ ∈ R(P(λ)) satisfies (2.11.22). In other words, we must establish the following relation: there exists a set β ∈ B(R) of complete spectral measure ρ such that, for any λ ∈ β,

(P(λ)u, Av)H0 = λ(P(λ)u, v)H0,   u ∈ H+, v ∈ D.    (2.11.23)

First, note that it suffices to establish the existence of a set β ∈ B(R) of complete measure ρ such that, for λ ∈ β, the relation (2.11.23) holds true for u and v taking values, respectively, in fixed sets G ⊆ H+ and F ⊆ D, which are dense in these spaces. Indeed, assume that u ∈ H+, v ∈ D, G ∋ un → u in H+ as n → ∞, and F ∋ vm → v in D as m → ∞. By assumption, we have

(P(λ)un, Avm)H0 = λ(P(λ)un, vm)H0,   λ ∈ β, n, m ∈ N.    (2.11.24)


Let us pass to the limit in (2.11.24), first as n → ∞ and then as m → ∞. This is possible because P(λ) ∈ L(H+, H−) and A ↾ D ∈ L(D, H+). As a result, we obtain the required relation (2.11.23). We fix u, v ∈ D. By using (2.11.20), we get

∫α (P(λ)u, Av)H0 dρ(λ) = ((∫α P(λ) dρ(λ)) u, Av)H0 = (O+ E(α) O u, Av)H0 = (AE(α)u, v)H0
   = ((∫α λ dE(λ)) u, v)H0 = ∫α λ d(O+ E(λ) O u, v)H0    (2.11.25)

for all α ∈ B(R). Here, we have used the relation AE(α) = ∫α λ dE(λ), α ∈ B(R),

which follows from the equalities (2.4.1) and (2.3.20). Further, by replacing the differential in the last integral in (2.11.25) according to (2.11.20), we obtain

∫α (P(λ)u, Av)H0 dρ(λ) = ∫α λ(P(λ)u, v)H0 dρ(λ),   α ∈ B(R).

Since α is arbitrary, the last relation implies that there exists a set βu,v ∈ B(R) of complete measure ρ such that, for λ ∈ βu,v, the relation (2.11.23) holds true. Assume now that L is a countable set dense in the space D and, hence, in H+. Consider the countable intersection β = ∩u,v∈L βu,v, which is a set of complete measure ρ. If λ ∈ β, relation (2.11.23) holds true for all u, v ∈ L. But then, according to the remark made at the beginning of the proof, this relation is also true for all u ∈ H+ and v ∈ D. □

Under somewhat stronger restrictions on the operator A, this theorem can be reformulated in terms of nuclear riggings of the space H0. Consider a rigging of H0 by linear topological spaces

Φ′ ⊇ H0 ⊇ Φ.    (2.11.26)

We say that a self-adjoint operator A in H0 and the chain (2.11.26) are standardly connected (or A admits (2.11.26)) if Φ ⊆ D(A) and A ↾ Φ acts continuously in Φ. The definition of generalized eigenvectors remains, in fact, the same, i.e., equality (2.11.19) must hold for u ∈ Φ. Furthermore, the generalized spectrum g(A) is, as before, the collection of all eigenvalues corresponding to generalized eigenvectors.


Note that if A is standardly connected with (2.11.26) and Φ = pr limτ∈T Hτ, then A is also standardly connected with each chain

H−τ ⊇ H0 ⊇ Hτ ⊇ Φ = D    (2.11.27)

of the form (2.11.18). Let us apply Theorem 2.11.4 instead of Theorem 2.11.3 to the resolution of the identity E of the operator A and repeat the proof scheme.

Theorem 2.11.6 Let A be a self-adjoint operator acting in a separable Hilbert space H0 and standardly connected with the nuclear chain (2.11.26). Then all statements of Theorem 2.11.5 remain valid with the only modification that P(λ) acts continuously from Φ to Φ′. In this case, the measure ρ is a spectral measure of the operator A.

The Case of a Normal Operator

Let us show how the previous result looks when the self-adjoint operator is replaced with a normal operator A. First, we modify the definition of the generalized eigenvector. Let A be a normal operator in H0 and let ϕ ∈ H0 be its eigenvector corresponding to an eigenvalue λ0, in a certain neighborhood U ⊂ C of which there are no other points of the spectrum of A. Then the spectral decomposition of A has the following form:

A = ∫C λ dE(λ) = λ0 P(λ0) + ∫C\U λ dE(λ),    (2.11.28)

where E is the resolution of the identity of the operator A and P(λ0) is the projector onto the eigensubspace corresponding to λ0 and consisting of the vectors ϕ. For the adjoint operator, the relation analogous to (2.11.28), namely,

A∗ = ∫C λ̄ dE(λ) = λ̄0 P(λ0) + ∫C\U λ̄ dE(λ),    (2.11.29)

holds true. It follows from (2.11.29) that ϕ is also an eigenvector of the operator A∗ corresponding to the eigenvalue λ̄0. In view of this, it is convenient to introduce the following definition. The chain (2.11.18) considered above and a normal operator A acting in H0 are called standardly connected if D ⊆ D(A) and both restrictions A ↾ D and A∗ ↾ D act continuously from D to H+. The generalized eigenvector of the operator A corresponding to an eigenvalue λ ∈ C is defined as a vector ϕ ∈ H− such that

(ϕ, A∗u)H0 = λ(ϕ, u)H0,   (ϕ, Au)H0 = λ̄(ϕ, u)H0,   u ∈ D.    (2.11.30)


As above, we can conclude from (2.11.30) that if, in addition, ϕ ∈ D(A), then Aϕ = λϕ and A∗ϕ = λ̄ϕ. Thus, (2.11.30) is a generalization of the concept of an eigenvector of a normal operator. Obviously, the equality (2.11.20) remains valid; however, now α ∈ B(C). The concept of the spectral measure ρ of the operator A is introduced similarly. In the considered case, a theorem similar to Theorem 2.11.5 is also true.
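A finite-dimensional sketch of this observation (illustrative only, assuming NumPy): a normal matrix and its adjoint share eigenvectors, with conjugate eigenvalues.

```python
import numpy as np

# For a normal matrix A = U D U*, each eigenvector of A with eigenvalue lam
# is an eigenvector of A* with eigenvalue conj(lam) -- the finite-dimensional
# content of the pair of relations (2.11.28)-(2.11.29).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
d = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complex eigenvalues
A = Q @ np.diag(d) @ Q.conj().T                            # normal: A A* = A* A

assert np.allclose(A @ A.conj().T, A.conj().T @ A)         # normality
for k in range(4):
    v, lam0 = Q[:, k], d[k]
    assert np.allclose(A @ v, lam0 * v)                    # A v = lam v
    assert np.allclose(A.conj().T @ v, np.conj(lam0) * v)  # A* v = conj(lam) v
```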

Theorem 2.11.7 Let A be a normal operator acting in a separable Hilbert space H0 and standardly connected with the quasi-nuclear chain (2.11.18) in which D is separable. Then all statements of Theorem 2.11.5 remain valid with the formulas (2.11.21) replaced with

E(α)u = (∫α P(λ) dρ(λ)) u,   α ∈ B(C), u ∈ H+,

Au = (∫S(A) λ P(λ) dρ(λ)) u,   u ∈ D(A) ∩ H+,    (2.11.31)

A∗u = (∫S(A) λ̄ P(λ) dρ(λ)) u,   u ∈ D(A) ∩ H+

and the equality (2.11.22) is replaced with (2.11.30).

Proof As in the case of Theorem 2.11.5, the question is reduced to proving the following assertion: there exists a set β ∈ B(C) of complete spectral measure such that, for every λ ∈ β, we have

(P(λ)u, A∗v)H0 = λ(P(λ)u, v)H0,   (P(λ)u, Av)H0 = λ̄(P(λ)u, v)H0,   u ∈ H+, v ∈ D.    (2.11.32)

First, we prove the existence of a set β1 ∈ B(C) of complete measure ρ such that, for any λ ∈ β1, the first relation in (2.11.32) holds true. This is done in the same way as in the proof of Theorem 2.11.5; it is only necessary to use the relation (2.4.10) instead of (2.4.1). Similarly, one can show that there exists a set β2 ∈ B(C) of complete measure ρ such that, for any λ ∈ β2, the second relation in (2.11.32) is true (again by using the representation (2.4.10) instead of the representation (2.4.1)). Finally, we set β = β1 ∩ β2. □

As in the case of self-adjoint operators, we can use the nuclear chain (2.11.26) standardly connected with a normal operator. The definitions of the standard connection and of a generalized eigenvector are similar to the corresponding definitions presented above. The corresponding analogue of Theorem 2.11.6 can be obtained by an obvious modification.


In a special case, the results of this subsection can be applied to unitary operators. Theorems on the generalized eigenvector expansions for normal, unitary, and other variants of operators will be presented in the chapters where they are needed.

The Set of Commuting Operators

Let us discuss one more problem concerning self-adjoint operators. We assume that unbounded self-adjoint operators A1, ..., An act in a Hilbert space H0 and denote their resolutions of the identity by E1, ..., En, respectively. These operators are called commuting if their resolutions of the identity commute, namely, Ej(αj)Ek(αk) = Ek(αk)Ej(αj), αj, αk ∈ B(R); j, k = 1, ..., n. The commutativity conditions for the operators A1, ..., An have already been clarified earlier. Recall that if these operators are bounded, then the relations Aj Ak = Ak Aj, j, k = 1, 2, ..., n, are necessary and sufficient conditions for these operators to commute. If they are unbounded, the situation is much more complicated and, for example, the fact that such conditions are satisfied on certain dense sets does not guarantee that the corresponding resolutions of the identity commute, i.e., that the Aj commute.

Consider the family A = (Aj)nj=1 of commuting self-adjoint operators in H0. Let us construct the expansion of these operators in generalized joint eigenvectors with generalized eigenvalues. A vector 0 ≠ ϕ ∈ H0 is called an (ordinary) joint eigenvector of the family A if ϕ ∈ D(Aj) and Aj ϕ = λj ϕ with some λj ∈ R, j = 1, 2, ..., n, where λ = (λ1, ..., λn) ∈ Rn is the eigenvalue of the family A corresponding to ϕ. In accordance with this definition, we introduce the concept of a generalized joint eigenvector. Consider a chain (2.11.18) standardly connected with each Aj (j = 1, ..., n). Then, by definition, ϕ ∈ H− is a generalized joint eigenvector of the family A corresponding to the eigenvalue λ = (λ1, λ2, ..., λn) ∈ Cn if

(ϕ, Aj u)H0 = λj (ϕ, u)H0,   j = 1, 2, ..., n, u ∈ D.    (2.11.33)
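In the finite-dimensional case, (2.11.33) reduces to the familiar fact that commuting symmetric matrices possess a joint orthonormal eigenbasis. A hedged NumPy sketch (illustration only, not the book's construction):

```python
import numpy as np

# Two commuting symmetric matrices built on a shared orthonormal eigenbasis Q:
# each column of Q is a joint eigenvector phi with A_j phi = lam_j phi,
# the ordinary version of (2.11.33).
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # shared orthonormal eigenbasis
d1, d2 = rng.standard_normal(4), rng.standard_normal(4)
A1 = Q @ np.diag(d1) @ Q.T
A2 = Q @ np.diag(d2) @ Q.T

assert np.allclose(A1 @ A2, A2 @ A1)               # the family commutes
for k in range(4):
    phi = Q[:, k]
    assert np.allclose(A1 @ phi, d1[k] * phi)      # joint eigenvector with
    assert np.allclose(A2 @ phi, d2[k] * phi)      # eigenvalue (d1[k], d2[k])
```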

The collection of all these λ forms the generalized spectrum g(A) of the family. The family A is naturally associated with the so-called joint resolution of the identity E, i.e., the resolution of the identity in the space Rn defined on the σ-algebra B(Rn) as the direct product E = ×nj=1 Ej of the resolutions of the identity E1, ..., En. By definition, the support of the measure E is the spectrum of A,

S(A) = supp E ⊆ ×nj=1 supp Ej = ×nj=1 S(Aj).

As before, by using E and the quasi-nuclear chain (2.11.18), we construct the spectral measure of the family A, namely,

B(Rn) ∋ α → ρ(α) = Tr(O+ E(α) O).


Similarly, we introduce the general spectral measure of the family A. Recall that the operators Aj can be reconstructed by using E:

Aj = ∫Rn λj dE(λ),   j = 1, ..., n.    (2.11.34)

Theorem 2.11.8 Let A = (Aj)nj=1 be a family of commuting self-adjoint operators acting in a separable Hilbert space H0, each of which is standardly connected with the quasi-nuclear chain (2.11.18) in which D is separable. Then all statements of Theorem 2.11.5 remain valid with the formulas (2.11.21) replaced with

E(α)u = (∫α P(λ) dρ(λ)) u,   α ∈ B(Rn), u ∈ H+,
                                                        (2.11.35)
Aj u = (∫S(A) λj P(λ) dρ(λ)) u,   j = 1, 2, ..., n, u ∈ D(Aj) ∩ H+

and equality (2.11.22) replaced with (2.11.33).

Proof As in the case of Theorem 2.11.5, the problem is reduced to proving the following relation: there exists a set β ∈ B(Rn) of complete measure ρ such that, for any λ ∈ β, we have

(P(λ)u, Aj v)H0 = λj (P(λ)u, v)H0,   j = 1, 2, ..., n, u ∈ H+, v ∈ D.    (2.11.36)

As in the proof of Theorem 2.11.5, we conclude that, for any fixed j = 1, 2, ..., n, there is a set βj ∈ B(Rn) for which the relation (2.11.36) is satisfied. To prove Theorem 2.11.8, one must repeat the proof of Theorem 2.11.5, using the representation (2.11.34) instead of (2.4.1) and setting β = ∩nj=1 βj. □

Theorem 2.11.8 remains valid for commuting normal operators (i.e., operators whose resolutions of the identity commute). The corresponding results can also be formulated for the case where the nuclear chain (2.11.26) is used instead of the quasi-nuclear chain (2.11.18). For infinite families of commuting self-adjoint (or normal) operators, the facts presented above remain valid as well, but their formulations and proofs are more complicated.

It is worth noting that the space D from (2.11.18) may not be dense in H+. If D is dense in H0, then one can construct a chain D′ ⊇ H0 ⊇ D, where D′ is the dual space of antilinear continuous functionals on D. In a certain sense, D′ contains H−; indeed, any α ∈ H− is also a continuous antilinear functional on D and, therefore, α can be interpreted as an element lα ∈ D′ (more exactly, lα is identified with the


class of β ∈ H− such that (β, u)H0 = (α, u)H0, u ∈ D). It is clear that all results obtained above for ϕ ∈ H− remain valid for generalized eigenvectors ϕ from D′.

Cyclic Vectors

Recall that the spectral measure of a self-adjoint operator A in H0 was defined as the trace measure

B(R) ∋ α → ρ(α) = Tr(O+ E(α) O) ∈ [0, ∞),

constructed by using the quasi-nuclear chain (2.11.18). The general spectral measure was introduced as a scalar measure ρ such that ρ and E are absolutely continuous with respect to each other. Here, we consider an important case where the role of the spectral measure ρ can be played by a measure different from the trace one.

A unit vector Ω ∈ H0 is called a cyclic vector (or a vacuum) of an operator A if Ω ∈ ∩∞m=1 D(Am) and the set of vectors {Am Ω | m ∈ Z+} is total in H0.

Theorem 2.11.9 Assume that a self-adjoint operator A with the resolution of the identity E has a cyclic vector Ω. Then the finite non-negative measure

B(R) ∋ α → ρ(α) = (E(α)Ω, Ω)H0 ∈ [0, ∞)

is one of the spectral measures of this operator.

Proof If E(α) = 0 for some α ∈ B(R), then ρ(α) = 0. Let us show the inverse implication. Assume that

0 = ρ(α) = (E(α)Ω, Ω)H0 = ‖E(α)Ω‖²H0,

i.e., E(α)Ω = 0. Then, for any m ∈ Z+, we have

0 = Am E(α)Ω = E(α)Am Ω,

and, hence, E(α)f = 0, where f belongs to the linear span of the vectors Am Ω, m ∈ Z+. By assumption, this span is dense in H0 and, therefore, E(α) = 0. □

In the case of normal operators, the formulation of the theorem remains the same. For a family of commuting operators A = (Aj)nj=1, instead of Am Ω, one must take the products A1^{m1} ⋯ An^{mn} Ω, where m1, ..., mn ∈ Z+.

The Fourier Transform by Generalized Eigenvectors

Note that the initial formulas (2.11.1) and (2.11.2) of the expansion in eigenvectors of a self-adjoint operator in a finite-dimensional space differ from the expressions (2.11.21) proved above. Namely, the initial formulas contain the eigenvectors, while the expressions (2.11.21) contain the projectors P(λ). Let us show that analogous expansions can also be obtained in the general case.


We consider the case of one self-adjoint operator; the results obtained can be easily generalized to normal operators and to the families of commuting operators considered above. We leave the formulations of the corresponding results for the chapters below. Assume that a self-adjoint operator A satisfies the conditions of the projection spectral theorem 2.11.5. By virtue of the first relation in (2.11.21), we have

(E(α)u, v)H0 = ∫α (P(λ)u, v)H0 dρ(λ)    (2.11.37)

for u, v ∈ H+ and α ∈ B(R). In particular, for α = R, the equality (2.11.37) gives the decomposition of (u, v)H0 into the integral of (P(λ)u, v)H0. The last expression determines a scalar product, and (2.11.37) turns into a decomposition of H0 that is called a direct integral of the corresponding Hilbert spaces. Let us consider this case in detail.

We fix λ ∈ R such that Tr(P(λ)) = 1. The points λ possessing this property form a set of complete spectral measure; below, we assume that λ belongs to this set. By virtue of the relations P(λ) ≥ 0 and ‖P(λ)‖ ≤ Tr(P(λ)) = 1, the operator JP(λ)J : H0 → H0 is a non-negative Hilbert–Schmidt operator. Let hγ(λ) ∈ H0 (γ = 1, 2, ..., N(λ) ≤ ∞) be an orthonormal sequence of eigenvectors of the operator JP(λ)J corresponding to the eigenvalues νγ(λ). Since the dependence of JP(λ)J on the parameter λ is weakly measurable with respect to B(R), we can assume that hγ(λ) is weakly measurable and νγ(λ) is measurable (γ = 1, 2, ..., N(λ)). Then

(P(λ)Jf, Jg)H0 = (JP(λ)Jf, g)H0 = Σ_{γ=1}^{N(λ)} νγ(λ) (f, hγ(λ))H0 (hγ(λ), g)H0
   = Σ_{γ=1}^{N(λ)} (Jf, ϕγ(λ))H0 (ϕγ(λ), Jg)H0,   f, g ∈ H0,    (2.11.38)

where

ϕγ(λ) = √(νγ(λ)) J−1 hγ(λ) = P(λ)((νγ(λ))−1/2 J hγ(λ)) ∈ R(P(λ)) ⊆ H−.

The vectors ϕγ(λ), λ ∈ R, γ = 1, 2, ..., N(λ) ≤ ∞, are individual generalized eigenvectors of the operator A.


It follows from (2.11.38) that

(P(λ)u, v)H0 = Σ_{γ=1}^{N(λ)} (u, ϕγ(λ))H0 (ϕγ(λ), v)H0,   u, v ∈ H+,    (2.11.39)

for ρ-almost all λ ∈ R. Denote l2(∞) = l2 and l2(N) = C^N, N < ∞, assuming that the last space is imbedded in l2 (all the coordinates of the vector, beginning from the (N+1)-th, are equal to zero). The mapping

H+ ∋ u → û(λ) = (û1(λ), û2(λ), ...) ∈ l2(N(λ)),   ûγ(λ) = (u, ϕγ(λ))H0,   γ = 1, 2, ..., N(λ),    (2.11.40)

is called the Fourier transform corresponding to the operator A (the inclusion û(λ) ∈ l2(N(λ)) follows from (2.11.39) for v = u). The Fourier transform û(λ) of the vector u is defined for ρ-almost all λ ∈ R, and each of its coordinates is measurable with respect to B(R). By inserting (2.11.39) into (2.11.37) and using (2.11.40), we obtain the Parseval equality for Fourier transforms:

(E(α)u, v)H0 = ∫α (û(λ), v̂(λ))l2(N(λ)) dρ(λ),   α ∈ B(R), u, v ∈ H+.    (2.11.41)

Note one important fact.

Lemma 2.11.10 For a fixed λ, the set {û(λ) | u ∈ H+} coincides with l2(N(λ)) for N(λ) < ∞ and contains all finite vectors from l2 for N(λ) = ∞.

Proof It suffices to show that every vector (0, ..., 0, 1, 0, ...) with the unit on the k-th position belongs to this set. We take u = (νk(λ))−1/2 J hk(λ) ∈ H+. Then

(u, ϕγ(λ))H0 = (νk(λ))−1/2 (J hk(λ), ϕγ(λ))H0 = (νk(λ))−1/2 (hk(λ), Jϕγ(λ))H0 = (hk(λ), hγ(λ))H0 = δkγ,   γ = 1, 2, ..., N(λ). □
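For a symmetric matrix with simple spectrum, the Fourier transform (2.11.40), the Parseval equality (2.11.41), and the action of the operator as multiplication by λ in the new coordinates all reduce to passing to the eigenvector basis. An illustrative NumPy sketch (not the book's construction):

```python
import numpy as np

# Finite-dimensional analogue: for a symmetric A with simple spectrum
# (N(lam) = 1), the "Fourier coefficients" are u_hat_k = (u, phi_k), the
# Parseval equality becomes sum_k |u_hat_k|^2 = ||u||^2, and A acts as
# multiplication by lam_k in these coordinates.
rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
A = B + B.T
lam, V = np.linalg.eigh(A)       # columns of V play the role of phi_gamma(lam)

u = rng.standard_normal(6)
u_hat = V.T @ u                  # coordinates u_hat_k = (u, phi_k)

assert np.allclose(np.dot(u_hat, u_hat), np.dot(u, u))   # Parseval equality
assert np.allclose(V.T @ (A @ u), lam * u_hat)           # (A u)^ = lam * u_hat
```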

The Direct Integral of Hilbert Spaces

Consider the direct integral of the Hilbert spaces l2(N(λ)) over R with the measure ρ:

L2 = ∫R l2(N(λ)) dρ(λ).


This integral is defined as the collection of all vector-functions R ∋ λ → F(λ) ∈ l2(N(λ)), given for ρ-almost all λ, measurable with respect to B(R) in the sense that each coordinate Fγ(λ), γ = 1, 2, ..., N(λ), is measurable, and such that

∫R ‖F(λ)‖²l2(N(λ)) dρ(λ) < ∞.

One can easily prove that the direct integral is a Hilbert space with the scalar product of the form

(F(·), G(·))L2 = ∫R (F(λ), G(λ))l2(N(λ)) dρ(λ)    (2.11.42)

for F(·), G(·) ∈ L2. Comparing (2.11.41) and (2.11.42), we see that, for α = R, the expression on the right-hand side of (2.11.41) defines the scalar product in the direct integral and, hence, the Parseval equality can be rewritten as (u, v)H0 = (û(·), v̂(·))L2, u, v ∈ H+. Extending this equality by continuity to the whole H0, we get

(f, g)H0 = ∫R (f̂(λ), ĝ(λ))l2(N(λ)) dρ(λ),   f, g ∈ H0,    (2.11.43)

where the Fourier transform f̂(λ) of the vector f is regarded as the limit of the Fourier transforms (2.11.40) in the norm of the direct integral (clearly, it is no longer possible to use the last formula in (2.11.40) for f̂γ(λ)).

Theorem 2.11.11 If D is a base of the operator A, i.e., if the closure of A ↾ D in H0 coincides with A, then the Fourier transforms û(λ), u ∈ H+, are dense in the direct integral and, hence, the Fourier transform H0 ∋ f → f̂(λ) ∈ L2 realizes an isomorphism between the spaces H0 and L2.

Thus, in the indicated sense, H0 can be regarded as decomposed into the direct integral

H0 = ∫R l2(N(λ)) dρ(λ).    (2.11.44)

Lemma 2.11.12 Under the mapping (2.11.40), the operator A ↾ D is transferred into the operator of multiplication by λ, i.e., for ρ-almost all λ ∈ R, the relation

(Au)^(λ) = λ û(λ),   u ∈ D,    (2.11.45)

holds true.


Proof Denote by β ∈ B(R) a set of complete spectral measure ρ such that, for λ ∈ β, R(P(λ)) consists of generalized eigenvectors of the operator A corresponding to the eigenvalue λ. The existence of this set is guaranteed by Theorem 2.11.5. Let λ ∈ β. Then, for all γ = 1, 2, ..., N(λ), we have ϕγ(λ) ∈ R(P(λ)). By (2.11.40), (Au)^γ(λ) = (Au, ϕγ(λ))H0, and (2.11.22) gives (ϕγ(λ), Au)H0 = λ(ϕγ(λ), u)H0; since λ is real, taking complex conjugates yields the equality

(Au)^γ(λ) = (Au, ϕγ(λ))H0 = λ(u, ϕγ(λ))H0 = λ ûγ(λ),   ∀u ∈ D,    (2.11.46)

holds true. This relation is the required equality (2.11.45) in the coordinate form. □

Proof of Theorem 2.11.11 Let us fix a non-real z ∈ C. We prove that, for every u ∈ H+, we have (λ − z)−1 û(λ) ∈ Ĥ0 ⊂ L2. Indeed, since (λ − z)−1, regarded as a function of λ ∈ R, is bounded, the operator of multiplication by (λ − z)−1 is continuous in L2 and, therefore, for v ∈ D, according to (2.11.45) and (2.11.43), we have

‖(λ − z)−1 û(λ) − v̂(λ)‖L2 ≤ c ‖û(λ) − (λ − z) v̂(λ)‖L2 = c ‖û(λ) − ((A − zI)v)^(λ)‖L2 = c ‖u − (A − zI)v‖H0.

Since D forms the base of the self-adjoint operator A, the right-hand side of this estimate can be made as small as desired, which proves the assertion made above.

Let F(λ) ∈ L2 be orthogonal in L2 to Ĥ0. Then, in particular, for a non-real z ∈ C and u ∈ H+ chosen so that û(λ) = (1, 0, 0, ...) (this is possible, see Lemma 2.11.10), we have

0 = ∫R (F(λ), (λ − z)−1 û(λ))l2(N(λ)) dρ(λ) = ∫R (λ − z̄)−1 F1(λ) dρ(λ) = ∫R (λ − z̄)−1 dω(λ),

where

ω(δ) = ∫δ F1(λ) dρ(λ),   δ ∈ B(R).

This implies that ω = 0 and, hence, F1(λ) = 0 for ρ-almost all λ ∈ R (this follows from the well-known fact that if a charge ω of bounded variation given on B(R) is such that ∫R (λ − z)−1 dω(λ) = 0 for all non-real z ∈ C, then ω = 0).


The same reasoning can be applied to F2(λ), and so on. As a result, we find that F(λ) = 0 in L2, which implies that Ĥ0 = L2. □

The result obtained above yields N(λ) = dim(R(P(λ))), i.e., N(λ) is the "multiplicity" of the eigenvalue λ. In the case where N(λ) = 1 for ρ-almost all λ, i.e., if the spectrum is simple, we have

û(λ) = û1(λ) = (u, ϕ1(λ))H0 ∈ C,   u ∈ H+,

and (2.11.44) gives a decomposition of H0 into the direct integral of the complex planes C = l2(1).
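The finite-dimensional content of simplicity of the spectrum can be tested through Krylov ranks: the span of {A^m Ω | m ∈ Z+} has dimension at most the number of distinct eigenvalues, so a cyclic vector forces the spectrum to be simple. A hedged NumPy sketch (illustration only):

```python
import numpy as np

# A symmetric matrix has a cyclic vector iff its spectrum is simple:
# the Krylov matrix [v, Av, A^2 v, ...] has rank at most the number of
# distinct eigenvalues, so a repeated eigenvalue rules out cyclicity.
rng = np.random.default_rng(4)

def krylov_rank(A, v):
    n = A.shape[0]
    cols = [v]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))

A_simple = np.diag([1.0, 2.0, 3.0, 4.0])
v = rng.standard_normal(4)
assert krylov_rank(A_simple, v) == 4           # generic v is cyclic: simple spectrum

A_degenerate = np.diag([1.0, 1.0, 2.0, 3.0])   # eigenvalue 1 has multiplicity 2
assert krylov_rank(A_degenerate, v) == 3       # no vector can be cyclic
```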

Theorem 2.11.13 Suppose that a self-adjoint operator A, satisfying the conditions of the projection spectral theorem 2.11.5, has a cyclic vector Ω ∈ D such that, for any m ∈ N, Am Ω ∈ D and the linear span of these vectors is dense not only in H0 but also in H+. Then the spectrum of the operator A is simple.

Proof Assume that N(λ) > 1 on a set of λ of positive measure ρ. Then there exists

.

(P (λ)Jf, )H0 =

N (λ)  γ =1

.

=

N (λ) 

νγ (λ)(f, hγ (λ))H0 (J −1 , hγ (λ))H0 (2.11.47) (f, hγ (λ))H0 a¯ γ .

γ =1 N (λ)

Since .(hγ (λ))γ =1 is an orthonormal sequence in .H0 , vectors b(f ) = ((f, h1 (λ))H0 , (f, h2 (λ))H0 , . . . ) ∈ l2 (N (λ))

.

run over l2(N(λ)) as f runs over H0. This implies that, for N(λ) > 1, there exists f0 ∈ H0 such that b(f0) is not equal to zero and is orthogonal in l2(N(λ)) to the vector a = (a1, a2, ...) introduced in (2.11.47) (the vector a belongs to l2(N(λ)), because J−1 Ω ∈ H0 and the factors νγ(λ) are bounded). By setting f = f0 in (2.11.47), we get (P(λ)Jf0, Ω)H0 = 0. Since P(λ)Jf0 is a generalized eigenvector of the operator A corresponding to λ, the application of the equality (2.11.19) yields

(P(λ)Jf0, Am Ω)H0 = λm (P(λ)Jf0, Ω)H0 = 0,   m ∈ Z+.

In view of the fact that the linear span of the vectors Am Ω is dense in H+, this implies P(λ)Jf0 = 0.


By setting f = f0 in (2.11.38), we get

0 = Σ_{γ=1}^{N(λ)} νγ(λ) bγ(f0) b̄γ(g),   g ∈ H0.

Vectors b(g) ∈ l2(N(λ)) run over the whole l2(N(λ)) as g runs over H0. Therefore, the last equality and the relations 0 < νγ(λ) ≤ c < ∞ yield b(f0) = 0. We thus arrive at a contradiction. □

In all formulas of this section, we can, clearly, replace R and B(R) with the spectrum S(A) of the operator A and B(S(A)), respectively.

2.12 A General Case with a Quasi-Scalar Product

Consider a chain of Hilbert spaces with an involution "∗", namely,

G− ⊃ G0 ⊃ G+.    (2.12.1)

This means that an involution G− ∋ ξ → ξ∗ ∈ G− is defined in the space G−, the restriction of which to G0 or G+ is an involution in the corresponding space. Let us construct the tensor square of the chain (2.12.1) in the form

G− ⊗ G− ⊃ G0 ⊗ G0 ⊃ G+ ⊗ G+    (2.12.2)

and fix some generalized kernel K ∈ G− ⊗ G−. A kernel is said to be positive definite if

(K, ϕ ⊗ ϕ∗)G0⊗G0 ≥ 0,   ϕ ∈ G+,    (2.12.3)

that is, if the continuous bilinear form

G+ × G+ ∋ (ϕ, ψ) → a(ϕ, ψ) = (K, ψ ⊗ ϕ∗)G0⊗G0 ∈ C

is non-negative. The kernel K ∈ G− ⊗ G− is called Hermitian if the form a is Hermitian, i.e., if (K, ψ ⊗ ϕ∗)G0⊗G0 is the complex conjugate of (K, ϕ ⊗ ψ∗)G0⊗G0.

Every positive definite kernel is Hermitian.


Let us construct the Hilbert space generated by such a kernel. For the positive definite kernel K, we introduce, in general, a quasi-scalar product in G+:

⟨ϕ, ψ⟩ = (K, ψ ⊗ ϕ∗)G0⊗G0,   ⟨ϕ, ϕ⟩ ≥ 0,   ϕ, ψ ∈ G+.    (2.12.4)

After identifying with zero those ϕ ∈ G+ for which ⟨ϕ, ϕ⟩ = 0, and after a further completion of the set of classes

ϕ̂ = {χ ∈ G+ | ⟨ϕ − χ, ϕ − χ⟩ = 0},

we get the space denoted by HK. In more detail, the procedure looks as follows. Let

N = {ϕ ∈ G+ | ⟨ϕ, ϕ⟩ = 0}.

Due to the inequality

⟨ϕ, ϕ⟩ ≤ ‖K‖G−⊗G− ‖ϕ‖²G+,   ϕ ∈ G+,    (2.12.5)

N is a subspace in G+. The class ϕ̂, for ϕ ∈ G+, coincides with the hyperplane ϕ + N. The form (2.12.4) does not change if ϕ and ψ run over the classes ϕ̂ and ψ̂, and it defines their scalar product (ϕ̂, ψ̂)HK. The space HK is the completion of this pre-Hilbert space of classes. The described procedure is isomorphic to the following one: on the orthogonal complement G+ ⊖ N in G+, the formula (2.12.4) already generates a scalar product; its completion gives HK. The isomorphism is defined accordingly (before completion) between the class ϕ̂ and the projection in G+ of the vector ϕ onto G+ ⊖ N (the projections of all vectors ϕ ∈ ϕ̂ coincide with each other).

Consider the case of a Hilbert rigging. Let a chain with an involution be given:

H− ⊃ G− ⊃ G0 ⊃ G+ ⊃ H+,    (2.12.6)

where the Hilbert space H+ is topologically imbedded into G+. Let us construct a rigging of the space HK according to (2.12.6). To this end, we note that N ∩ H+ is a subspace in H+ (due to the continuity of the imbedding H+ → G+ and (2.12.5)), and consider H+ ⊖ (N ∩ H+) (the orthogonal difference in H+). Vectors of the space H+ ⊖ (N ∩ H+) can be identified with the set of all classes ϕ̂, where ϕ ∈ H+: each such class ϕ̂ has only one common vector ϕN with H+ ⊖ (N ∩ H+), namely, the projection of ϕ in H+ onto H+ ⊖ (N ∩ H+). Indeed, let ϕ1 be such a projection. Then ϕ − ϕ1 ∈ N ∩ H+, i.e., ϕ1 ∈ ϕ̂. Now let ϕ1, ϕ2 ∈ ϕ̂ ∩ (H+ ⊖ (N ∩ H+)); then ϕ1, ϕ2 ∈ H+ and ϕ1 − ϕ2 ∈ N, whence it follows that ϕ1 − ϕ2 ∈ N ∩ H+; since also ϕ1 − ϕ2 ∈ H+ ⊖ (N ∩ H+), we get ϕ1 − ϕ2 = 0.
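A finite-dimensional illustration of the construction of HK (hypothetical data, assuming NumPy): a positive semi-definite kernel K on G+ = R³ defines the quasi-scalar product (2.12.4); factoring out the null space N leaves a genuine scalar product on the classes.

```python
import numpy as np

# Illustrative analogue of (2.12.4): a positive semi-definite matrix K plays
# the role of the kernel, <phi, psi> = psi^T K phi the quasi-scalar product.
K = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])          # PSD with one-dimensional null space

def qsp(phi, psi):                        # quasi-scalar product <phi, psi>
    return psi @ K @ phi

n = np.array([1.0, -1.0, 0.0])            # n spans N: <n, n> = 0
assert qsp(n, n) == 0.0

# the form does not change when representatives run over a class phi + N,
# so it descends to a scalar product on the quotient (the analogue of H_K)
phi, psi = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
assert qsp(phi + 3.0 * n, psi - 2.0 * n) == qsp(phi, psi)
```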


We denote by H+,K the linear set of all classes ϕ̂, where ϕ ∈ H+, and transform it into a Hilbert space by putting

(ϕ̂, ψ̂)H+,K = (ϕN, ψN)H+,   ϕ, ψ ∈ H+.    (2.12.7)

This definition is obviously correct; the obtained space H+,K is isometric to H+ ⊖ (N ∩ H+). The space H+,K is imbedded into HK; moreover, the imbedding operator O can be represented as the product O = S O+, where O+ : H+ → G+ is the imbedding operator of H+ into G+ and S : G+ → HK is the operator which transfers a vector ϕ ∈ G+ to the corresponding class ϕ̂ ∈ HK. Thus, taking into account (2.12.5), we have

‖ϕ̂‖HK ≤ ‖S‖ ‖O+‖ ‖ϕ̂‖H+,K ≤ ‖K‖^{1/2}G−⊗G− ‖O+‖ ‖ϕ̂‖H+,K,   ϕ̂ ∈ H+,K.

It is clear that H+,K is densely imbedded in HK; this fact follows from the density of H+ in G+ and the inequality (2.12.5). Hence, we can regard HK as the zero space and H+,K as the positive space, and construct the chain

H−,K ⊃ HK ⊃ H+,K.    (2.12.8)

From the representation O = S O+, it follows that the chain (2.12.8) is quasi-nuclear if the imbedding H+ → G+ is quasi-nuclear. In the case of a non-degenerate kernel, i.e., when N = {0}, the procedure (2.12.8) of the construction of the chain becomes quite simple: then H+,K = H+.

Bibliographical Notes

The first section of the book deals with the general principles of the theory, mainly of unbounded self-adjoint operators, outlined in [47, 48]. Various sections of the theory of unbounded operators and their spectral properties are considered in more detail in [4, 43, 44, 54, 152, 194, 211, 236, 238, 239, 242, 247, 250]. In particular, generalized resolvents and a description of all self-adjoint extensions are given in [4]. A proof of the spectral theorem which uses the theory of commutative Banach algebras can be found in [211, 247, 298]. Some other methods of establishing the self-adjointness of operators are presented in [43, 44, 239]. Spectral representations of families of commuting operators and their applications to the Stone theorems, harmonic analysis, and non-commutative operators are described in [43, 44, 250]. Detailed information on differential equations in Banach spaces can be found in [190]. Quasi-analytic classes of functions are studied in [206].

For a detailed description of the theory of equipped (rigged) spaces, see [21, 43, 44, 116, 128, 203, 211]; the spaces of test and generalized functions are carefully studied in [114] in the case of a finite number of variables and in [43, 44] in the
case of an infinite number of variables. Various versions of the kernel theorem can be found in [21, 23, 43, 44, 116, 211, 242]. Many results about bilinear forms are presented in [43, 44, 152, 170, 171, 239, 242]. The generalized eigenvectors expansion method is considered in detail in [21, 43, 44, 51, 116, 211]. In particular, the books [43, 44] are devoted to the generalized eigenvectors expansion for a family of commuting normal operators of arbitrary cardinality. Direct integrals of Hilbert spaces are studied in [211, 221]. The self-adjointness of partial differential operators is studied in [21, 239]; in the case of an infinite number of independent variables, these problems are considered in [43, 44]. The books [51, 122, 241] contain additional information on the spectral properties of partial differential operators (including ordinary differential operators). The books [4, 95, 197, 198, 207, 220, 222] are devoted to the study of spectral properties of ordinary differential operators. Spectral properties of such operators with operator-valued coefficients are studied in [128, 203] (these operators cover certain classes of partial differential operators). The first chapter and the book as a whole do not cover some areas of functional analysis, among them: group theory [16, 52, 138, 152, 211, 239, 298]; scattering theory [4, 240]; perturbations of operators [4, 152, 170, 171]; non-self-adjoint operators [125, 248]; spaces with an indefinite metric [14, 80]; Banach algebras (normed rings) [112, 138, 202, 211, 221, 247, 298]; topological groups and their representations [16, 161, 202, 223, 237]; differential calculus in infinite-dimensional spaces and nonlinear functional analysis [43, 44, 82, 150, 163, 175, 201]; approximation methods of functional analysis [68, 150, 174]; topology [7, 10, 246, 251]; equations of mathematical physics [243, 244, 291, 292].
For general information related to functional analysis, we recommend in particular [8, 9, 15, 69, 72, 94–96, 98, 123, 134, 140, 176, 184, 245, 279, 283, 284, 294]. Questions concerning generalized functions and related topics are covered in [109, 110, 113–116, 162, 192, 227, 293]. The main questions of the theory of extensions of symmetric operators to self-adjoint ones are presented in [58, 129, 130, 170, 193, 204]. The theory of analytic functions is presented in [208, 209]. For general and applied questions of mathematical analysis and measure theory, we recommend [1, 100, 108, 133, 199, 214, 224, 225, 229, 256–258, 268, 280, 285, 286]; for the theory of probability, [121]; for differential and integral equations, [67, 149, 183, 195, 215, 216, 234, 235]. Some applied questions of functional analysis and the theory of singularly perturbed operators are considered in the works [5, 6, 170].

Chapter 3

Jacobi Matrices and the Classical Moment Problem

This chapter is devoted to the presentation of the basic provisions of the theory of (ordinary numerical tri-diagonal) Jacobi matrices and of the classical moment problem, which is closely related to this theory.

3.1 Difference Operators, Jacobi Matrices, and Self-Adjointness Conditions

Consider the transformation defined by the expression $L$ of the form

$$(Lu)_j = a_{j-1}u_{j-1} + b_j u_j + a_j u_{j+1}, \quad j \in \mathbb{N}_0, \tag{3.1.1}$$

where $a_j > 0$, $b_j \in \mathbb{R}$ are given coefficients, $u = (u_j)_{j=0}^{\infty} = (u_0, u_1, u_2, \ldots)$ is a sequence of numbers on which $L$ acts, and for convenience of writing we put $u_{-1} := 0$. The expression (3.1.1) is the difference analogue of the Sturm–Liouville differential expression on the semi-axis $[0, \infty)$:

$$\frac{d}{dx}\left(a(x)\frac{du}{dx}\right) + q(x)u(x), \quad u(0) = 0.$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_3


The correctness of the Green formula can be easily verified by simple calculations:

$$\sum_{j=k}^{l} \big[(Lu)_j \bar v_j - u_j \overline{(Lv)_j}\big] = a_l(u_{l+1}\bar v_l - u_l \bar v_{l+1}) - a_{k-1}(u_k \bar v_{k-1} - u_{k-1}\bar v_k), \quad k, l \in \mathbb{N}_0,\ k < l, \tag{3.1.2}$$

where a bar over an expression denotes complex conjugation.

We consider the operator $L$ in the Hilbert space $\ell_2 = H_0$, which consists of the sequences $u = (u_0, u_1, u_2, \ldots)$, $u_j \in \mathbb{C}$, for which the series $\sum_{j=0}^{\infty} |u_j|^2$ converges, and in which the scalar product is given by the expression

$$(u, v) = \sum_{j=0}^{\infty} u_j \bar v_j, \quad u = (u_0, u_1, u_2, \ldots),\ v = (v_0, v_1, v_2, \ldots).$$
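As a quick sanity check, the Green formula (3.1.2) can be verified numerically for finite sequences; the coefficient values below are arbitrary sample data, not taken from the text. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # indices 0..n+1 of the sequences are used
a = rng.uniform(0.5, 2.0, n + 2)         # sample a_j > 0
b = rng.uniform(-1.0, 1.0, n + 2)        # sample b_j real
u = rng.standard_normal(n + 2) + 1j * rng.standard_normal(n + 2)
v = rng.standard_normal(n + 2) + 1j * rng.standard_normal(n + 2)

def L(w, j):
    """(Lw)_j = a_{j-1} w_{j-1} + b_j w_j + a_j w_{j+1}, with w_{-1} := 0."""
    left = a[j - 1] * w[j - 1] if j > 0 else 0.0
    return left + b[j] * w[j] + a[j] * w[j + 1]

k, l = 2, 9
lhs = sum(L(u, j) * np.conj(v[j]) - u[j] * np.conj(L(v, j)) for j in range(k, l + 1))
rhs = (a[l] * (u[l + 1] * np.conj(v[l]) - u[l] * np.conj(v[l + 1]))
       - a[k - 1] * (u[k] * np.conj(v[k - 1]) - u[k - 1] * np.conj(v[k])))
print(abs(lhs - rhs))                    # of the order of machine precision
```

The interior terms telescope, leaving only the two boundary terms on the right-hand side, exactly as (3.1.2) states.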

We denote by $L_0$ the operator defined by (3.1.1) on the finite vectors, which form the linear set $\ell_{2,0}$. Then we denote by $L$ the closure of $L_0$; it is the so-called minimal operator generated by the expression (3.1.1). Therefore, in what follows, the expression (3.1.1) and the corresponding operator will both be denoted by $L$. Moreover, the domain of the adjoint operator $L^*$ consists of the vectors $v \in \ell_2$ for which $Lv \in \ell_2$, and $(L^*v)_j = (Lv)_j$, where $v_{-1} = 0$. In general, $L^* \supseteq L$. Self-adjointness conditions for $L$ will be discussed in detail below. For now, we only note that the defect numbers of the operator $L$ are equal to each other. This follows from the fact that $L$ is real with respect to the involution $u = (u_j)_{j=0}^{\infty} \mapsto \bar u = (\bar u_j)_{j=0}^{\infty}$. Thus, the action of the operator $L$ can be associated with the action of the matrix

$$J = \begin{bmatrix} b_0 & a_0 & 0 & 0 & 0 & \cdots \\ a_0 & b_1 & a_1 & 0 & 0 & \cdots \\ 0 & a_1 & b_2 & a_2 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \quad a_j > 0,\ b_j \in \mathbb{R},\ j \in \mathbb{N}_0, \tag{3.1.3}$$

on the vector $u = (u_j)_{j=0}^{\infty} = (u_0, u_1, u_2, \ldots)$. Matrices with this structure are called Jacobi matrices. Note that often the minimal operator generated by the matrix (3.1.3) will be denoted by the bold letter $\mathbf{J}$. Consider the equation given by the matrix (3.1.3):

$$(Lu)_j = a_{j-1}u_{j-1} + b_j u_j + a_j u_{j+1} = zu_j, \quad j \in \mathbb{N}_0, \tag{3.1.4}$$


where $z \in \mathbb{C}$, and the $u_j$ are the desired solutions. Setting $u_{-1} = 0$ and $u_0 = 1$, all $u_j$ are calculated recursively; thus, for example, $u_1 = \frac{z - b_0}{a_0}$. Let us redenote the solutions $P_j(z) := u_j$; they are polynomials in $z$. The polynomials $P_j(z)$ are called polynomials of the first kind. Therefore,

$$(LP(z))_j = zP_j(z), \quad j \in \mathbb{N}_0, \qquad P_{-1}(z) = 0, \quad P_0(z) = 1. \tag{3.1.5}$$
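The recursion for the first-kind polynomials is easy to carry out numerically. Below is a minimal sketch; the choice $a_j = 1/2$, $b_j = 0$ is a standard test case (not discussed at this point in the text) for which the $P_j$ are known to be the Chebyshev polynomials of the second kind:

```python
import numpy as np

def first_kind_polynomials(a, b, z, n):
    """Evaluate P_0(z), ..., P_n(z) by the recurrence (3.1.4):
    a_{j-1} P_{j-1} + b_j P_j + a_j P_{j+1} = z P_j, with P_{-1} = 0, P_0 = 1."""
    P = [1.0 + 0j]                                  # P_0(z) = 1
    P.append((z - b[0]) / a[0])                     # P_1(z) = (z - b_0)/a_0
    for j in range(1, n):
        P.append(((z - b[j]) * P[j] - a[j - 1] * P[j - 1]) / a[j])
    return np.array(P)

# Free Jacobi matrix a_j = 1/2, b_j = 0: here P_j(cos t) = sin((j+1)t)/sin t
a = np.full(10, 0.5); b = np.zeros(10)
t = 0.7
P = first_kind_polynomials(a, b, np.cos(t), 9)
print(np.allclose(P, np.sin((np.arange(10) + 1) * t) / np.sin(t)))  # True
```

Each $P_j$ is indeed a polynomial of degree $j$ in $z$, since the leading term is produced by dividing by $a_j > 0$ at every step.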

Since the coefficients $a_j$ and $b_j$, $j \in \mathbb{N}_0$, are real, the coefficients of the polynomials $P_j(z)$, $j \in \mathbb{N}_0$, are real; since $a_j > 0$, $j \in \mathbb{N}_0$, the coefficient at $z^j$ in $P_j(z)$ is always positive. The difference equation in (3.1.4) is a second-order equation; therefore, it has two linearly independent solutions. The second solution will be indicated below. We denote by $N_z$, for $\operatorname{Im} z \ne 0$, the orthogonal complement of $R(L - \bar z I)$ in $H_0 = \ell_2$; we recall that $I$ is the identity operator and $N_z$ is the defect subspace of the operator $L$ corresponding to the regular point $z$. This subspace coincides with the set of solutions of the equation $L^*f = zf$, that is, due to the form of $L^*$, with the subspace of those solutions of the difference equation $(Lu)_j = zu_j$, $u_{-1} = 0$, which belong to $\ell_2$. Each solution of such an equation is represented in the form $u_j = u_0 P_j(z)$; therefore, the defect subspace is at most one-dimensional, and it is nontrivial if and only if $(u_j) = (u_0 P_j(z)) \in \ell_2$, that is, when the series

$$\sum_{j=0}^{\infty} |P_j(z)|^2 \tag{3.1.6}$$

converges. In this way, we can formulate the following theorem.

Theorem 3.1.1 The operator $L$, defined by (3.1.1) (equivalently, by the matrix (3.1.3)) in the space $\ell_2$, has defect indices $(0, 0)$ or $(1, 1)$. In the case of defect indices $(0, 0)$, the series (3.1.6) diverges for every $z$ with $\operatorname{Im} z \ne 0$. In the case of defect indices $(1, 1)$, the series (3.1.6) converges, and the defect subspace $N_z$ is spanned by the vector $(P_0(z), P_1(z), \ldots)$.

The Self-Adjointness Criteria It follows from Theorem 3.1.1 that for the operator $L$ to be self-adjoint it is necessary that the series (3.1.6) diverge for every $z$ with $\operatorname{Im} z \ne 0$, and it is sufficient that it diverge for at least one such $z$. In terms of the coefficients $a_j$ and $b_j$, $j \in \mathbb{N}_0$, it is possible to give sufficient conditions for the convergence or divergence of the series (3.1.6), i.e., conditions for the self-adjointness or non-self-adjointness of the operator $L$. We present the simplest of such criteria.

Theorem 3.1.2 If the elements $a_j$, $b_j$, $j \in \mathbb{N}_0$, of the matrix (3.1.3) are bounded, then the corresponding operator $L$ generated by the expression (3.1.1) is bounded, and therefore also self-adjoint.


Proof Let $a_j, |b_j| \le C$, $j = 0, 1, \ldots$; then it follows from the triangle inequality that

$$\|Lu\|_0 = \Big(\sum_{j=0}^{\infty} |a_{j-1}u_{j-1} + a_j u_{j+1} + b_j u_j|^2\Big)^{1/2} \le \Big(\sum_{j=0}^{\infty} |a_{j-1}u_{j-1}|^2\Big)^{1/2} + \Big(\sum_{j=0}^{\infty} |a_j u_{j+1}|^2\Big)^{1/2} + \Big(\sum_{j=0}^{\infty} |b_j u_j|^2\Big)^{1/2}$$
$$\le C\Big(\Big(\sum_{j=0}^{\infty} |u_{j-1}|^2\Big)^{1/2} + \Big(\sum_{j=0}^{\infty} |u_{j+1}|^2\Big)^{1/2} + \|u\|_0\Big) \le 3C\|u\|_0.$$

Thus, the theorem is proved. $\square$
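The bound $\|Lu\|_0 \le 3C\|u\|_0$ can be observed numerically on a finite section of the matrix (3.1.3); the bounded coefficients below are arbitrary sample values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, C = 200, 2.0
a = rng.uniform(0.1, C, n)          # 0 < a_j <= C
b = rng.uniform(-C, C, n)           # |b_j| <= C

# Truncated Jacobi matrix (3.1.3)
J = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)

op_norm = np.linalg.norm(J, 2)      # largest singular value of the section
print(op_norm <= 3 * C)             # True, consistent with the bound from the proof
```

The norm of any finite section is dominated by the norm of the full operator, so the inequality holds for every truncation size.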

Theorem 3.1.3 If the elements $a_j$, $j \in \mathbb{N}_0$, of the matrix (3.1.3) are such that

$$\sum_{j=0}^{\infty} \frac{1}{a_j} = \infty, \tag{3.1.7}$$

and the $b_j$ are arbitrary, then the operator $L$ corresponding to (3.1.3) is self-adjoint.

Proof It suffices to check that (3.1.7) implies the divergence of the series (3.1.6) for some $z$, $\operatorname{Im} z \ne 0$. Using the Green formula (3.1.2), we find

$$(z - \bar z)\sum_{j=0}^{n} P_j(z)\overline{P_j(z)} = \sum_{j=0}^{n}\big((L[P(z)])_j \overline{P_j(z)} - P_j(z)\overline{(L[P(z)])_j}\big) = a_n\big(P_{n+1}(z)\overline{P_n(z)} - P_n(z)\overline{P_{n+1}(z)}\big), \quad n \in \mathbb{N}_0.$$

Hence,

$$\sum_{j=0}^{n} |P_j(z)|^2 = \frac{a_n}{z - \bar z}\big(P_{n+1}(z)\overline{P_n(z)} - P_n(z)\overline{P_{n+1}(z)}\big), \quad n \in \mathbb{N}_0. \tag{3.1.8}$$


Since $P_0(z) = 1$, estimating (3.1.8) in modulus, we get

$$1 \le \sum_{j=0}^{n} |P_j(z)|^2 \le \frac{a_n}{|z - \bar z|}\big(|P_{n+1}(z)||P_n(z)| + |P_n(z)||P_{n+1}(z)|\big) = \frac{2a_n}{|z - \bar z|}\,|P_n(z)||P_{n+1}(z)|.$$

Hence,

$$\frac{1}{a_n} \le C\,|P_n(z)||P_{n+1}(z)|.$$

And finally,

$$\infty = \sum_{n=0}^{\infty} \frac{1}{a_n} \le C\sum_{n=0}^{\infty} |P_n(z)||P_{n+1}(z)| \le C\Big(\sum_{n=0}^{\infty} |P_n(z)|^2 \cdot \sum_{n=0}^{\infty} |P_{n+1}(z)|^2\Big)^{1/2},$$

hence the series (3.1.6) diverges, and the operator $L$ is self-adjoint.

\ldots\ for $n > n_0$. If the last product converges, then $A < \infty$ and (3.1.10) is proved. But the convergence indeed holds, because, due to (3.1.10),

$$\sum_{j=0}^{\infty} \frac{1}{\sqrt{a_{j-1}a_j}} \le \Big(\sum_{j=0}^{\infty} \frac{1}{a_{j-1}} \cdot \sum_{j=0}^{\infty} \frac{1}{a_j}\Big)^{1/2} < \infty.$$

Thus, the theorem is proved. $\square$
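The dichotomy of Theorem 3.1.1 can be seen numerically: for $b_j = 0$, $a_j = 1$ the Carleman condition (3.1.7) holds and the series (3.1.6) visibly diverges, while $b_j = 0$, $a_j = (j+1)^2$ is a known example with defect indices $(1, 1)$ (this particular example is an assumption of the sketch, not taken from the text), where the partial sums stabilize:

```python
import numpy as np

def partial_sum(a_fun, z, N):
    """Partial sum of the series (3.1.6) for b_j = 0 and a_j = a_fun(j)."""
    P_prev, P = 1.0 + 0j, z / a_fun(0)          # P_0 = 1, P_1 = (z - b_0)/a_0
    s = abs(P_prev) ** 2 + abs(P) ** 2
    for j in range(1, N):
        # recurrence (3.1.4): P_{j+1} = (z P_j - a_{j-1} P_{j-1}) / a_j
        P_prev, P = P, (z * P - a_fun(j - 1) * P_prev) / a_fun(j)
        s += abs(P) ** 2
    return s

z = 1j
print(partial_sum(lambda j: 1.0, z, 50))            # huge: series diverges, indices (0,0)
print(partial_sum(lambda j: (j + 1) ** 2, z, 200))  # small and stable: indices (1,1)
```

In the constant-coefficient case $|P_j(i)|$ grows like a Fibonacci sequence, while for rapidly growing $a_j$ the terms decay fast enough for (3.1.6) to converge.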

3.2 The Generalized Eigenvectors Expansion and the Fourier Transform Corresponding to Jacobi Matrices

In this section, we construct the generalized eigenvectors expansion for a self-adjoint extension of the operator $L$ defined in the previous section.


Let $E(\cdot)$ be the generalized resolution of the identity of the operator $L$ (see Sect. 2.11). Since the operator $L$ acts in the space $\ell_2$, according to the spectral theorem

$$E(A) = \int_A P(\lambda)\, de(\lambda), \tag{3.2.1}$$

where $A$ belongs to the Borel $\sigma$-algebra on $\mathbb{R}$, and $P(\lambda)$ is the generalized projection operator acting from the space $\ell_2$ into the space $\ell_2((p_j)^{-1})$, that is, the space $\ell_2$ with the weight $(p_j)^{-1}$, where the elements of the sequence satisfy $p_j \ge 1$ and $\sum_{j=0}^{\infty} \frac{1}{p_j} < \infty$. The operator $P(\lambda)$ is a positive definite matrix $[o_{j,k}(\lambda)]_{j,k=0}^{\infty}$ whose elements $e$-almost everywhere satisfy the condition

$$\sum_{j,k=0}^{\infty} \frac{|o_{j,k}(\lambda)|^2}{p_j p_k} \le 1. \tag{3.2.2}$$

The Representation of the Generalized Projector Recall that the generalized eigenvectors $P_j(\lambda)$ were defined in the previous section. Before the main result, we prove a lemma.

Lemma 3.2.1 For the matrix elements of the generalized projection operator and the generalized eigenvectors, the representation

$$o_{j,k}(\lambda) = P_j(\lambda)P_k(\lambda)o_{0,0}(\lambda), \quad j, k \in \mathbb{N}_0, \tag{3.2.3}$$

holds true.

Proof Each vector $\varphi \in R(P(\lambda))$ is a generalized eigenvector of the operator $L$ with the corresponding eigenvalue $\lambda$. This means that $(\varphi, (L^* - \lambda I)u)_0 = 0$ for all vectors $u \in \ell_{2,0}$, or in other words $(\varphi, (L - \lambda I)u)_0 = 0$ for all vectors $u \in \ell_{2,0}$, where we put $u_{-1} = 0$. Using the Green formula (3.1.2), we get

$$0 = (\varphi, (L - \lambda I)u)_0 = ((L - \lambda I)\varphi, u)_0 - a_{-1}\varphi_{-1}\bar u_0,$$

i.e., $a_{-1}\varphi_{-1}\bar u_0 = ((L - \lambda I)\varphi, u)_0$. Due to the arbitrariness of $u$, the last expression is possible only when $((L - \lambda I)\varphi)_j = 0$, $j \in \mathbb{N}$, and $a_0\varphi_1 + (b_0 - \lambda)\varphi_0 = 0$. Hence, it can be assumed that $((L - \lambda I)\varphi)_j = 0$, $j \in \mathbb{N}_0$, and $\varphi_{-1} = 0$. Thus, using (3.1.5), we get $\varphi_j = \varphi_0 P_j(\lambda)$, $j \in \mathbb{N}_0$. In particular, for fixed $k \in \mathbb{N}_0$, the vector

$$(o_{0,k}(\lambda), o_{1,k}(\lambda), \ldots) = P(\lambda)\delta_k \in R(P(\lambda)),$$


where $\delta_k = (\delta_{0,k}, \delta_{1,k}, \ldots)$. Then $o_{j,k}(\lambda) = o_{0,k}(\lambda)P_j(\lambda)$, $j \in \mathbb{N}_0$. Similarly, the vector $(o_{0,0}(\lambda), o_{1,0}(\lambda), \ldots) = P(\lambda)\delta_0 \in R(P(\lambda))$; therefore, $o_{k,0}(\lambda) = o_{0,0}(\lambda)P_k(\lambda)$. Due to the fact that $P_k(\lambda)$ is real, we have $o_{0,k}(\lambda) = o_{0,0}(\lambda)P_k(\lambda)$, $k \in \mathbb{N}_0$. Substituting the last equality into the obtained expression for $o_{j,k}(\lambda)$, we get (3.2.3). $\square$

Denote $d\sigma(\lambda) = o_{0,0}(\lambda)\,de(\lambda)$. Since $o_{0,0} \ge 0$, the expression $d\sigma(\lambda)$ is positive and, hence, it is a measure. From (3.2.1) and (3.2.3), we get

$$E(A)_{j,k} = (E(A)\delta_k, \delta_j)_0 = \int_A P_j(\lambda)P_k(\lambda)\, d\sigma(\lambda), \quad j, k \in \mathbb{N}_0, \tag{3.2.4}$$

where the $\delta_k$ are the basis vectors in $\ell_2$: $\delta_k = (0, 0, \ldots, 0, 1, 0, \ldots)$, in which the unit is situated at the $k$-th position. Therefore, from (3.2.4), it follows that

$$\sigma(A) = (E(A)\delta_0, \delta_0)_0. \tag{3.2.5}$$

In what follows, we will call the measure $d\sigma(\lambda)$ spectral. It is clear that the measures $d\sigma(\lambda)$ and $de(\lambda)$ are absolutely continuous with respect to each other. It also follows from (3.2.4) that the polynomials $P_j(\lambda)$, $j \in \mathbb{N}_0$, form an orthonormal system with respect to the measure $d\sigma(\lambda)$:

$$\int_{\mathbb{R}} P_j(\lambda)P_k(\lambda)\, d\sigma(\lambda) = \delta_{j,k}, \quad j, k \in \mathbb{N}_0, \tag{3.2.6}$$

where $\delta_{j,k}$ is the Kronecker symbol. Let us write the Parseval equality. To do this, we introduce the Fourier transform of a finite sequence $u$, setting for $\lambda \in \mathbb{C}$:

$$\tilde u(\lambda) = \sum_{j=0}^{\infty} u_j P_j(\lambda), \quad u \in \ell_{2,0}. \tag{3.2.7}$$

Now (3.2.4) gives the Parseval equality

$$(E(A)u, v)_0 = \int_A \tilde u(\lambda)\overline{\tilde v(\lambda)}\, d\sigma(\lambda), \quad u, v \in \ell_{2,0}. \tag{3.2.8}$$
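The orthonormality (3.2.6) is easy to check numerically in a concrete case. For the free Jacobi matrix ($a_j = 1/2$, $b_j = 0$), the spectral measure is known to be the semicircle measure $d\sigma(\lambda) = \frac{2}{\pi}\sqrt{1-\lambda^2}\,d\lambda$ on $[-1, 1]$ and the $P_j$ are Chebyshev polynomials of the second kind; these are standard facts used here as assumptions, not statements of the text:

```python
import numpy as np

# Gauss-Chebyshev (second kind) quadrature, exact for polynomials times sqrt(1-x^2)
M = 20
theta = np.arange(1, M + 1) * np.pi / (M + 1)
nodes = np.cos(theta)
dsigma = (2.0 / (M + 1)) * np.sin(theta) ** 2     # discretization of (2/pi)*sqrt(1-x^2)dx

n = 8
P = np.empty((n, M))
P[0] = 1.0
P[1] = 2.0 * nodes                                # P_1 = (λ - b_0)/a_0 with a_0 = 1/2, b_0 = 0
for j in range(1, n - 1):
    P[j + 1] = 2.0 * nodes * P[j] - P[j - 1]      # three-term recurrence, a_j = 1/2

gram = (P * dsigma) @ P.T                         # Gram matrix, the entries of (3.2.6)
print(np.allclose(gram, np.eye(n)))               # True: the P_j are orthonormal
```

The quadrature rule is exact for the polynomial integrands involved, so the Gram matrix is the identity to machine precision.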

Note that with the Fourier transform introduced in this way in (3.2.7), $\tilde u(\lambda)$ depends polynomially on $\lambda$. It also follows from (3.2.7) and (3.2.8) that the spectrum of the operator $L$ in this case is simple. So, we can formulate the theorem.

Theorem 3.2.2 Let $E(A)$ be the resolution of the identity (ordinary or generalized) of one of the self-adjoint extensions of the operator $L$. Define the spectral measure $\sigma(A) = (E(A)\delta_0, \delta_0)_0$. Then, relative to this measure, the polynomials $P_j(\lambda)$, $j \in \mathbb{N}_0$, form an orthonormal system. The corresponding to $E(A)$ generalized


eigenvectors expansion of the operator $L$ is described by formulas (3.2.7) and (3.2.8).

Thus, from the solutions $P_j(\lambda)$ of the difference equation $(LP(\lambda))_j = \lambda P_j(\lambda)$, $j \in \mathbb{N}_0$, $P_{-1}(\lambda) = 0$, regarded as functions (sequences), those solutions are selected for which $\lambda$ lies in the spectrum of the corresponding extension. From (3.2.1) and (3.2.3), it follows that for $\sigma$-almost all $\lambda$, the sequence $(P_0(\lambda), P_1(\lambda), \ldots)$ satisfies the estimate

$$\sum_{j=0}^{\infty} \frac{|P_j(\lambda)|^2}{p_j} < \infty, \tag{3.2.9}$$

where $p_j \ge 1$, $\sum_{j=0}^{\infty} \frac{1}{p_j} < \infty$. The set of $\lambda$ for which (3.2.9) is valid depends on the

choice of the sequence $(p_j)_{j=0}^{\infty}$.

Proof of Theorem 3.2.2 It remains to prove only formulas (3.2.7) and (3.2.8). Let us write the action of the operator $L$ on the vector $\delta_j$. For an arbitrary sequence $u \in \ell_{2,0}$, we have

$$(u, L\delta_j)_0 = (Lu, \delta_j)_0 = (Lu)_j = a_{j-1}u_{j-1} + b_j u_j + a_j u_{j+1} = (u, a_{j-1}\delta_{j-1} + b_j\delta_j + a_j\delta_{j+1})_0.$$

Due to the arbitrariness of the choice of $u$, we have

$$L\delta_j = a_{j-1}\delta_{j-1} + b_j\delta_j + a_j\delta_{j+1}, \quad j \in \mathbb{N}_0, \tag{3.2.10}$$

where we also set $\delta_{-1} = 0$. Formula (3.2.10) shows that $L\delta_j$ is also a finite sequence and, therefore, belongs to $D(L)$. Hence, $\delta_j \in D(L^n)$, $n \in \mathbb{N}_0$. The equality (3.2.10) can be considered as a recurrence relation for finding $\delta_j$ from a given $\delta_0$ with $\delta_{-1} = 0$. Thus, at $j = 0$ we get

$$a_0\delta_1 + b_0\delta_0 = L\delta_0, \quad \delta_{-1} = 0,$$

and hence

$$\delta_1 = \frac{1}{a_0}(L - b_0 I)\delta_0 = P_1(L)\delta_0.$$

Then we find $\delta_2 = P_2(L)\delta_0$ and similarly the further values. Thus,

$$\delta_j = P_j(L)\delta_0, \quad j \in \mathbb{N}_0. \tag{3.2.11}$$
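Relation (3.2.11) can be observed directly on a finite section of a Jacobi matrix: for basis indices small compared to the truncation size, the band structure makes the computation exact. The coefficients below are arbitrary sample values:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12
a = rng.uniform(0.5, 2.0, N)
b = rng.uniform(-1.0, 1.0, N)
J = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)

delta0 = np.zeros(N); delta0[0] = 1.0

# Build P_j(J)delta_0 by rearranging (3.2.10):
#   delta_{j+1} = ((J - b_j I) delta_j - a_{j-1} delta_{j-1}) / a_j
v_prev, v = np.zeros(N), delta0
ok = True
for j in range(5):
    v_next = (J @ v - b[j] * v - (a[j - 1] * v_prev if j > 0 else 0.0)) / a[j]
    v_prev, v = v, v_next
    e = np.zeros(N); e[j + 1] = 1.0
    ok &= np.allclose(v, e)
print(ok)  # True: P_j(J) delta_0 = delta_j
```

Each application of the recurrence reproduces the next basis vector exactly, which is precisely the content of (3.2.11).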


It is now easy to prove (3.2.4); namely,

$$(E(A)\delta_k, \delta_j)_0 = (E(A)P_k(L)\delta_0, P_j(L)\delta_0)_0 = (P_j(L)E(A)P_k(L)\delta_0, \delta_0)_0 = \int_A P_j(\lambda)P_k(\lambda)\, d(E_\lambda\delta_0, \delta_0)_0, \quad j, k \in \mathbb{N}_0.$$

$\square$

We also note that formulas (3.2.7) and (3.2.11) are very useful.

The Generalized Fourier Transform The polynomials $P_j(\lambda)$, $j \in \mathbb{N}_0$, are orthonormal; hence, they are linearly independent in $L_2(\mathbb{R}, d\sigma(\lambda))$. Each polynomial $P_j(\lambda)$ has degree $j$; therefore, they form a basis in the space of polynomials, and each polynomial is represented as a linear combination of the polynomials $P_j(\lambda)$, $j \in \mathbb{N}_0$. Therefore, it follows from (3.2.7) that the Fourier transform maps the set of all finite sequences onto the set of all polynomials in $\lambda$. Since $P_j(\lambda) \in L_2(\mathbb{R}, d\sigma(\lambda))$ by (3.2.6), each polynomial belongs to $L_2(\mathbb{R}, d\sigma(\lambda))$. In other words, the spectral measure is such that

$$\int_{\mathbb{R}} \lambda^m\, d\sigma(\lambda) = c_m < \infty, \quad m \in \mathbb{N}_0. \tag{3.2.12}$$
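By the spectral representation, the moments of the spectral measure can also be computed as matrix elements, $c_m = (\mathbf{J}^m\delta_0, \delta_0)_0$. For the free Jacobi matrix ($a_j = 1/2$, $b_j = 0$) with the semicircle spectral measure, the even moments are known to be Catalan numbers scaled by $4^{-k}$; this concrete example is an assumption of the sketch, not a statement of the text:

```python
import numpy as np
from math import comb

N = 64
J = np.diag(np.full(N - 1, 0.5), 1) + np.diag(np.full(N - 1, 0.5), -1)  # a_j = 1/2, b_j = 0

v = np.zeros(N); v[0] = 1.0                    # delta_0
moments = []
for m in range(1, 9):
    v = J @ v                                  # v = J^m delta_0
    moments.append(v[0])                       # c_m = (J^m delta_0, delta_0)_0

# Odd moments vanish (the measure is symmetric); even moments are Catalan(k)/4^k
for k in range(1, 5):
    catalan = comb(2 * k, k) // (k + 1)
    print(moments[2 * k - 1], catalan / 4 ** k)   # pairs of equal numbers
```

Since the matrix is a band matrix, $\mathbf{J}^m\delta_0$ only reaches the first $m + 1$ coordinates, so the truncation at $N = 64$ does not affect the result for $m \le 8$.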

However, the set of all polynomials need not be dense in $L_2(\mathbb{R}, d\sigma(\lambda))$. A density criterion is given below. In what follows, we will separately study the problem in which the measure $d\sigma(\lambda)$ is recovered from a given sequence of numbers $c_m$, $m \in \mathbb{N}_0$, called the moments of the measure; that is, we will consider the classical moment problem. Let us point out one more important property of the spectral measure: the set of growth points of the function $\sigma(\lambda)$ is infinite. Indeed, assume the opposite; then this set (denote it by $A$) is finite. Let us construct a polynomial $P(\lambda)$ of sufficiently high degree $n$ whose set of zeros contains $A$. Expand $P(\lambda)$ in the system $P_j(\lambda)$, $j \in \mathbb{N}_0$:

$$P(\lambda) = \sum_{j=0}^{n} c_j P_j(\lambda);$$

then

$$c_j = \int_{\mathbb{R}} P(\lambda)P_j(\lambda)\, d\sigma(\lambda) = 0, \quad j \in \mathbb{N}_0,$$

i.e., $P(\lambda) \equiv 0$, which is impossible.


The proven property can also be formulated in the following form: if a polynomial $P(\lambda)$ is such that $\int_{\mathbb{R}} |P(\lambda)|^2\, d\sigma(\lambda) = 0$, then $P(\lambda) \equiv 0$. The correspondence $u \to \tilde u(\lambda)$ is one-to-one between the set of finite sequences and its Fourier image, that is, some set of polynomials in the variable $\lambda$, which are elements of the space $L_2(\mathbb{R}, d\sigma(\lambda))$. Note that the Parseval equality has the form

$$(u, v)_0 = \int_{\mathbb{R}} \tilde u(\lambda)\overline{\tilde v(\lambda)}\, d\sigma(\lambda), \tag{3.2.13}$$

which follows from (3.2.8) for $A = \mathbb{R}$. The equality (3.2.13) is an isometry between parts of $\ell_2$ and $L_2(\mathbb{R}, d\sigma(\lambda))$. Extending this isometry by continuity, we obtain an isometry between the whole space $\ell_2$ and the subspace of $L_2(\mathbb{R}, d\sigma(\lambda))$ that coincides with the closure of the set of all polynomials in this space. We denote this closure, the closure of the Fourier image, by $\tilde\ell_2$. Let the sequence $u \in \ell_2$ correspond to its image $\tilde u \in \tilde\ell_2$. It is clear that $\tilde u(\lambda)$ is calculated for $u$ by the same formula (3.2.7), only now the series will be infinite and convergent in the sense of the space $L_2(\mathbb{R}, d\sigma(\lambda))$. Thus, the Fourier transform is extended to the infinite sequences of $\ell_2$. Under the transfer from $\ell_2$ to $\tilde\ell_2$, the operator $L$ turns into the operator of multiplication by the independent variable $\lambda$ in $L_2(\mathbb{R}, d\sigma(\lambda))$. More precisely, the operator $L$ is defined on $\ell_{2,0}$; its image $\tilde L$ under the Fourier transform acts on the image of $\ell_{2,0}$, that is, on $\tilde\ell_{2,0}$ (the set of all polynomials), as multiplication by $\lambda$. Thus, thanks to (3.1.2) and (3.1.5), we have

$$\widetilde{(Lu)}(\lambda) = \sum_{j=0}^{\infty} (Lu)_j P_j(\lambda) = \sum_{j=0}^{\infty} u_j (LP(\lambda))_j = \lambda\sum_{j=0}^{\infty} u_j P_j(\lambda) = \lambda\tilde u(\lambda), \quad u \in \ell_{2,0},\ u_{-1} = 0.$$

The operator $L$ is the closure of this operator from $\ell_{2,0}$, so $\tilde L$ is equal to the closure of the operator of multiplication by $\lambda$ defined initially on the set of polynomials.

Theorem 3.2.3 In order for the set of all polynomials in the variable $\lambda$ to be dense in the space $L_2(\mathbb{R}, d\sigma(\lambda))$, it is necessary and sufficient that $d\sigma(\lambda)$ be generated by an ordinary resolution of the identity. (Such spectral measures are called orthogonal.)

Proof Let $\sigma(A) = (E(A)\delta_0, \delta_0)_0$, where $E(A)$ is an ordinary resolution of the identity. Let us show the density of the polynomials in $L_2(\mathbb{R}, d\sigma(\lambda))$. Denote by $A \supset L$ the self-adjoint extension of the operator $L$ in $\ell_2$ corresponding to $E(A)$, and by $R_z = (A - zI)^{-1}$, $\operatorname{Im} z \ne 0$, its resolvent. Take $u \in \ell_{2,0}$. Since $R_z u \in D(A) \subset \ell_2$, for a fixed $z$ there is $v \in \ell_{2,0}$ such that $\|R_z u - v\|_0 < \varepsilon$ for an arbitrary $\varepsilon$.


Passing by the isometry from the space $\ell_2$ to the space $L_2(\mathbb{R}, d\sigma(\lambda))$, we get that for each $\tilde u(\lambda)$ there is a polynomial $\tilde v(\lambda)$ such that

$$\Big\|\frac{\tilde u(\lambda)}{\lambda - z} - \tilde v(\lambda)\Big\|_{L_2(\mathbb{R}, d\sigma(\lambda))} < \varepsilon.$$

Given the arbitrariness of $\varepsilon$, the last inequality means that the function $\frac{\tilde u(\lambda)}{\lambda - z}$ can be approximated by polynomials, i.e., $\frac{\tilde u(\lambda)}{\lambda - z} \in \tilde\ell_2$ for an arbitrary polynomial $\tilde u(\lambda)$. Now the proof of sufficiency is easily completed. Let there exist a vector $h(\lambda) \in L_2(\mathbb{R}, d\sigma(\lambda))$ orthogonal to $\tilde\ell_2$. Then for an arbitrary non-real $z$,

$$\int_{\mathbb{R}} \frac{h(\lambda)}{\lambda - z}\, d\sigma(\lambda) = 0, \qquad u = \delta_0,\ \tilde u(\lambda) = 1,$$

holds true. By the general Stieltjes transform theorem, this means that

$$\eta(A) = \int_A h(\lambda)\, d\sigma(\lambda) = 0,$$

where $A$ belongs to the Borel $\sigma$-algebra on $\mathbb{R}$; i.e., $h(\lambda) = 0$ $\sigma$-almost everywhere. Therefore, the equality $\tilde\ell_2 = L_2(\mathbb{R}, d\sigma(\lambda))$ is proved, which was required. Let us show the necessity. Suppose $\tilde\ell_2 = L_2(\mathbb{R}, d\sigma(\lambda))$. It is necessary to show that $d\sigma(\lambda)$ is generated by some self-adjoint extension $A$ of the operator $L$ in $\ell_2$. Passing by the isometry from the space $\ell_2$ to the space $L_2(\mathbb{R}, d\sigma(\lambda))$, the operator $L$ is transferred to the operator $\tilde L$ of multiplication by $\lambda$, defined initially on the polynomials, after which we take its closure. We denote by $\tilde A$ the ordinary operator of multiplication by $\lambda$ in the space $L_2(\mathbb{R}, d\sigma(\lambda))$. The operator $\tilde A$ is self-adjoint, $\tilde L \subseteq \tilde A$, and $\sigma(A) = (\tilde E(A)1, 1)_{L_2(\mathbb{R}, d\sigma(\lambda))}$, where $\tilde E(A)$ is the resolution of the identity corresponding to $\tilde A$; it coincides with the operator of multiplication by the characteristic function of $A$. Since $L_2(\mathbb{R}, d\sigma(\lambda)) = \tilde\ell_2$, by the inverse isometry we pass from the operator $\tilde A$ to an operator $A$ in the space $\ell_2$. Obviously, this is a self-adjoint operator which is an extension of $L$ and which generates the measure $d\sigma(\lambda)$. $\square$

Theorem 3.2.4 If the spectral measure is such that

R

log σ ' (λ) dλ > −∞, 1 + λ2

(3.2.14)

where $\sigma'(\lambda)$ is its derivative, then it is generated by a generalized resolution of the identity, i.e., it is not an orthogonal one.

Proof By a theorem from the monograph of N. I. Akhiezer [3] (Chapter 2), it is known that for the density of the linear span of the functions $e^{i\alpha\lambda}$, $0 \le \alpha < \infty$,


in $L_2(\mathbb{R}, d\sigma(\lambda))$ it is necessary and sufficient that

$$\int_{\mathbb{R}} \frac{\log\sigma'(\lambda)}{1 + \lambda^2}\, d\lambda = -\infty,$$

where $d\sigma(\lambda)$ is an arbitrary non-negative finite measure on the axis. According to this theorem, in view of (3.2.14), there exists a function $0 \ne h(\lambda) \in L_2(\mathbb{R}, d\sigma(\lambda))$ such that

$$\int_{\mathbb{R}} h(\lambda)e^{i\alpha\lambda}\, d\sigma(\lambda) = 0, \quad 0 \le \alpha < \infty.$$

Differentiating this integral repeatedly with respect to $\alpha$ and then setting $\alpha = 0$, we get

$$\int_{\mathbb{R}} h(\lambda)\lambda^j\, d\sigma(\lambda) = 0, \quad j \in \mathbb{N}_0.$$

This means that the polynomials are not dense in $L_2(\mathbb{R}, d\sigma(\lambda))$, and therefore, according to the previous theorem, the measure $d\sigma(\lambda)$ is generated by a generalized resolution of the identity. $\square$

Previously, the spectral measure was defined as $\sigma(A) = (E(A)\delta_0, \delta_0)_0$. We will show that it can additionally be described in another way.

Theorem 3.2.5 Let $d\sigma(\lambda)$ be a finite measure on the real axis $\mathbb{R}$ which has all moments, that is, the integrals (3.2.12) exist. If for arbitrary finite sequences $u$, $v$ and their Fourier transforms $\tilde u(\lambda)$, $\tilde v(\lambda)$ defined in (3.2.7), the Parseval equality (3.2.13) holds true (or, what is the same, the orthogonality relation (3.2.6) holds true), then $d\sigma(\lambda)$ is a spectral measure; that is, there exists a generalized resolution of the identity $E(A)$, constructed from a self-adjoint extension of the operator $L$, such that $\sigma(A) = (E(A)\delta_0, \delta_0)_0$.

Proof The proof is almost obvious due to the Parseval equality. In the same way as before, an isometry is established between $\ell_2$ and $\tilde\ell_2 \subset L_2(\mathbb{R}, d\sigma(\lambda))$, under which the operator $L$ is transferred to the operator $\tilde L$ of multiplication by $\lambda$, initially defined on the polynomials. Using this isometry, the problem is reduced to constructing a self-adjoint extension $\tilde A$ of the operator $\tilde L$ such that $\sigma(A) = (\tilde E(A)1, 1)_{L_2(\mathbb{R}, d\sigma(\lambda))}$, where $\tilde E(A)$ is the resolution of the identity of the operator $\tilde A$. But $\tilde A$ can be taken to be the self-adjoint operator of multiplication by $\lambda$ in $L_2(\mathbb{R}, d\sigma(\lambda))$. If $\tilde\ell_2 = L_2(\mathbb{R}, d\sigma(\lambda))$, then the resulting extension corresponds to an extension of $L$ without leaving $\ell_2$. In the case $\tilde\ell_2 \subset L_2(\mathbb{R}, d\sigma(\lambda))$, we have an extension with an exit from the space. $\square$

Since the spectral measure can be determined using the Parseval equality, regardless of the theory of operators, the following corollary is meaningful.


Corollary 3.2.6 The operator generated by the difference expression (3.1.1) has a unique spectral measure if and only if the operator $L$ generated by it is self-adjoint.

Indeed, due to the equality (3.2.8), there is a one-to-one correspondence between the spectral measures and the generalized resolutions of the identity.

We say that a difference equation or the corresponding Jacobi matrix is determinate or indeterminate according as the operator $L$ is self-adjoint or not self-adjoint. Let us fix the difference expression (its operator $L$) and consider the set $\mathcal{E}$ of all measures corresponding to this expression, that is, corresponding to the various self-adjoint extensions of the operator $L$, without exit and with exit from the space. In the indeterminate case, $\mathcal{E}$ consists of more than one element. The set $\mathcal{E}$ is convex; that is, if $\sigma_1, \sigma_2 \in \mathcal{E}$, then for $\mu_1 + \mu_2 = 1$, $\mu_1, \mu_2 \ge 0$, the measure $\sigma(A) = \mu_1\sigma_1(A) + \mu_2\sigma_2(A)$ also belongs to $\mathcal{E}$. Indeed, if the Parseval equality holds for $\sigma_1$ and $\sigma_2$, then it also holds for $\sigma$. Recall that an element of this set is called extreme if it cannot be represented in the form $\sigma(A) = \mu_1\sigma_1(A) + \mu_2\sigma_2(A)$ with some $\sigma_1 \ne \sigma_2 \in \mathcal{E}$, $\mu_1 + \mu_2 = 1$, $\mu_1, \mu_2 > 0$. In the determinate case, the unique measure is extreme. It will be shown below that every orthogonal measure is extreme. However, not every extreme measure is orthogonal.

Theorem 3.2.7 In order for the set of all polynomials in $\lambda$ to be dense in the space $L_2(\mathbb{R}, d\sigma(\lambda))$, it is necessary and sufficient that $d\sigma(\lambda)$ be an extreme spectral measure.

Before proving the theorem, we establish the following lemma.

Lemma 3.2.8 Two measures $d\sigma_1(\lambda)$ and $d\sigma_2(\lambda)$ on the real axis $\mathbb{R}$ satisfying (3.2.12) (that is, having all moments) belong to the same class $\mathcal{E}$ if and only if for an arbitrary polynomial $P(\lambda)$

$$\int_{\mathbb{R}} P(\lambda)\, d\sigma_1(\lambda) = \int_{\mathbb{R}} P(\lambda)\, d\sigma_2(\lambda). \tag{3.2.15}$$

Proof If the equality (3.2.15) holds and, for example, the Parseval equality (3.2.13) holds true for $d\sigma_1(\lambda)$, then it also holds for $d\sigma_2(\lambda)$, since $\tilde u(\lambda)$ and $\tilde v(\lambda)$ are polynomials for $u, v \in \ell_{2,0}$. Conversely, let (3.2.13) hold true for $d\sigma_1(\lambda)$ and $d\sigma_2(\lambda)$ with one operator $L$; then the $P_j(\lambda)$ are orthonormal in each of the spaces $L_2(\mathbb{R}, d\sigma_1(\lambda))$ and $L_2(\mathbb{R}, d\sigma_2(\lambda))$. Expanding $P(\lambda)$ in the $P_j(\lambda)$, we get

$$\int_{\mathbb{R}} P(\lambda)\, d\sigma_1(\lambda) = \int_{\mathbb{R}} \Big(\sum_{j=0}^{n} c_j P_j(\lambda)\Big)\, d\sigma_1(\lambda) = c_0 = \int_{\mathbb{R}} P(\lambda)\, d\sigma_2(\lambda).$$

Therefore, the lemma is proved. $\square$


Proof of Theorem 3.2.7 First, we will show that if $d\sigma(\lambda)$ is not extreme, then the set of polynomials is not dense in $L_2(\mathbb{R}, d\sigma(\lambda))$. Thus, let

$$\sigma(A) = \mu_1\sigma_1(A) + \mu_2\sigma_2(A), \quad \sigma_1, \sigma_2 \in \mathcal{E},\ \mu_1 + \mu_2 = 1,\ \mu_1, \mu_2 > 0.$$

Since $\sigma_j(A) \le \frac{1}{\mu_j}\sigma(A)$, the measure $d\sigma_j(\lambda)$ is absolutely continuous with respect to $d\sigma(\lambda)$, and

$$\varphi_j(\lambda) = \frac{d\sigma_j(\lambda)}{d\sigma(\lambda)} \le \frac{1}{\mu_j}, \quad j = 1, 2.$$

By Lemma 3.2.8, for an arbitrary polynomial $P(\lambda)$, we have

$$\int_{\mathbb{R}} P(\lambda)\varphi_1(\lambda)\, d\sigma(\lambda) = \int_{\mathbb{R}} P(\lambda)\, d\sigma_1(\lambda) = \int_{\mathbb{R}} P(\lambda)\, d\sigma_2(\lambda) = \int_{\mathbb{R}} P(\lambda)\varphi_2(\lambda)\, d\sigma(\lambda);$$

that is, the bounded function $\psi(\lambda) = \varphi_1(\lambda) - \varphi_2(\lambda)$, which is not $\sigma$-almost everywhere equal to zero, is such that

$$\int_{\mathbb{R}} P(\lambda)\psi(\lambda)\, d\sigma(\lambda) = 0$$

for an arbitrary $P(\lambda)$. This means that on $L_2(\mathbb{R}, d\sigma(\lambda))$ there exists a non-zero functional, given by $\psi$, that vanishes on all polynomials; i.e., the set of polynomials is not dense in $L_2(\mathbb{R}, d\sigma(\lambda))$. Conversely, if the polynomials are not dense in $L_2(\mathbb{R}, d\sigma(\lambda))$, then there exists a function $\psi(\lambda)$, bounded by one and not $\sigma$-almost everywhere equal to zero, for which

$$\int_{\mathbb{R}} P(\lambda)\psi(\lambda)\, d\sigma(\lambda) = 0$$

for an arbitrary polynomial $P(\lambda)$. Let us construct two measures

$$d\sigma_1(\lambda) = (1 + \psi(\lambda))\,d\sigma(\lambda), \qquad d\sigma_2(\lambda) = (1 - \psi(\lambda))\,d\sigma(\lambda).$$

These measures obviously satisfy (3.2.12), that is, have all moments, and are such that

$$\int_{\mathbb{R}} P(\lambda)\, d\sigma(\lambda) = \int_{\mathbb{R}} P(\lambda)\, d\sigma_1(\lambda) = \int_{\mathbb{R}} P(\lambda)\, d\sigma_2(\lambda)$$


for an arbitrary polynomial $P(\lambda)$. According to Lemma 3.2.8, $\sigma_1, \sigma_2 \in \mathcal{E}$, where $\mathcal{E}$ is the convex set of spectral measures corresponding to the operator $L$. However, $\sigma_1 \ne \sigma_2$ and $\sigma = \frac{1}{2}\sigma_1 + \frac{1}{2}\sigma_2$; that is, $\sigma$ is not extreme. $\square$

3.3 The Inverse Problem

In the previous section, a direct problem was considered: the spectral representation was constructed from the given expression $L$. Quite naturally, the inverse question arises as to whether it is possible to reconstruct $L$ from spectral data. We will show that such a reconstruction (recovery) is possible and that the spectral measure should be taken as the spectral data. Moreover, it turns out that the set of all possible spectral measures is easy to describe. Consider the difference expression

$$a_{j-1}P_{j-1}(\lambda) + b_j P_j(\lambda) + a_j P_{j+1}(\lambda) = \lambda P_j(\lambda), \quad j \in \mathbb{N}_0,$$

where we put, as usual, $P_{-1}(\lambda) := 0$ and $a_{-1} := 0$. Multiplying both sides of the previous equality by $P_k(\lambda)$ and taking the scalar product in $L_2(\mathbb{R}, d\sigma(\lambda))$, we obtain

$$a_j = \int_{\mathbb{R}} \lambda P_j(\lambda)P_{j+1}(\lambda)\, d\sigma(\lambda), \qquad b_j = \int_{\mathbb{R}} \lambda P_j^2(\lambda)\, d\sigma(\lambda), \quad j \in \mathbb{N}_0. \tag{3.3.1}$$

Now let a measure $d\sigma(\lambda)$ be given on the real axis such that all integrals (3.2.12) converge, that is, the measure has all moments, $\sigma(\mathbb{R}) = 1$, and $d\sigma(\lambda)$ has an infinite number of growth points. We will show that such a measure is a spectral measure of some difference expression $L$. To do this, consider the space $L_2(\mathbb{R}, d\sigma(\lambda))$ and the system of functions

$$1, \lambda, \lambda^2, \lambda^3, \ldots, \lambda^j, \ldots. \tag{3.3.2}$$

We orthogonalize this sequence by the Gram–Schmidt procedure and get a system of polynomials

$$P_0(\lambda) = 1,\ P_1(\lambda),\ P_2(\lambda),\ \ldots,\ P_j(\lambda),\ \ldots \tag{3.3.3}$$

orthonormal in $L_2(\mathbb{R}, d\sigma(\lambda))$. The polynomials (3.3.3) are defined up to the sign of the leading coefficient. For the uniqueness of (3.3.3) given the sequence (3.3.2), we require the leading coefficients of the $P_j(\lambda)$ to be positive. The orthogonalization process is infinite because $d\sigma(\lambda)$ has an infinite number of growth points.


Now we construct $a_j$, $b_j$ from the polynomials $P_j(\lambda)$ using the equalities (3.3.1); here $a_j > 0$, $j \in \mathbb{N}_0$. Indeed, $\lambda P_j(\lambda)$ is a polynomial of degree $j + 1$ with a positive leading coefficient. Therefore, in the representation

$$\lambda P_j(\lambda) = c_{j+1}P_{j+1}(\lambda) + \ldots + c_0 P_0(\lambda),$$

the coefficient $c_{j+1} > 0$, and hence $a_j = c_{j+1} > 0$. We take the numbers $a_j$, $b_j$, $j \in \mathbb{N}_0$, as the coefficients of the difference expression $L$ (3.1.1). We will show that the $P_j(\lambda)$ are the polynomials of the first kind for $L$, i.e.,

$$\lambda P_j(\lambda) = a_{j-1}P_{j-1}(\lambda) + b_j P_j(\lambda) + a_j P_{j+1}(\lambda), \quad j \in \mathbb{N}_0, \qquad P_{-1}(\lambda) = 0,\ P_0(\lambda) = 1. \tag{3.3.4}$$

(3.3.4)

To prove this proposition, it is necessary to show that in the expansion of the .j + 1th degree polynomial .λPj (λ) by .P0 (λ), .P1 (λ), . . . , .Pj +1 (λ), coefficients at .P0 (λ), .P1 (λ), . . . , .Pj −1 (λ) are equal to zero, i.e., f λPj (λ)Pk (λ) dσ (λ) = 0,

.

R

as .0 ≤ k ≤ j − 2. But .λPk (λ) is a polynomial maximum .(j − 1)-th degree, and hence .Pj (λ) is orthogonal to it. Therefore, the measure .dσ (λ) has the property that polynomials of the first kind, corresponding to the expression L, constructed by the measure, are orthonormal with respect to this measure. This implies the Parseval equality (3.2.13) on the finite sequences u and v, that is, .dσ (λ) is the spectral measure of L. Hence, we prove the theorem. Theorem 3.3.1 Let the measure .dσ (λ) be given on the real axis .R, which has an infinite number of growth points, is normalized to one and has all moments, namely, f

∫_R dσ(λ) = 1, ∫_R |λ|^n dσ(λ) < ∞, n ∈ N.   (3.3.5)

Then this measure is the spectral measure of a difference expression whose coefficients are uniquely determined by dσ(λ) via the formulas (3.3.1), where the system of polynomials (3.2.15) is constructed by orthogonalization of the sequence (3.3.2) with respect to the scalar product of the space L_2(R, dσ(λ)).
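The passage from a measure to the Jacobi coefficients described in Theorem 3.3.1 can be sketched numerically. The script below is an illustration, not the book's construction: it orthogonalizes the monomials with respect to the moments of the standard Gaussian measure (an assumed test measure), for which the orthonormal polynomials are the normalized Hermite polynomials and the recurrence coefficients are known: a_j = √(j+1), b_j = 0.

```python
import math

def moment(n):
    # Moments of the standard Gaussian measure: s_n = (n - 1)!! for even n, 0 for odd n.
    return float(math.prod(range(n - 1, 0, -2))) if n % 2 == 0 else 0.0

def ip(p, q):
    # <p, q> = integral of p(lambda) q(lambda) dsigma; p, q are coefficient lists, ascending powers.
    return sum(p[i] * q[j] * moment(i + j) for i in range(len(p)) for j in range(len(q)))

def orthonormal_polys(N):
    # Gram-Schmidt applied to 1, x, x^2, ...; leading coefficients remain positive.
    P = []
    for n in range(N + 1):
        p = [0.0] * n + [1.0]                       # the monomial x^n
        for q in P:                                 # subtract projections on lower P_j
            c = ip(p, q)
            p = [pi - c * (q[i] if i < len(q) else 0.0) for i, pi in enumerate(p)]
        nrm = math.sqrt(ip(p, p))
        P.append([pi / nrm for pi in p])
    return P

def jacobi_coefficients(P):
    # b_j = <x P_j, P_j>, a_j = <x P_j, P_{j+1}>, as in (3.3.1) and (3.3.4).
    a, b = [], []
    for j in range(len(P) - 1):
        xPj = [0.0] + P[j]                          # multiplication by the variable
        b.append(ip(xPj, P[j]))
        a.append(ip(xPj, P[j + 1]))
    return a, b

a, b = jacobi_coefficients(orthonormal_polys(4))
```

For this test measure the script reproduces a_0 = 1, a_1 = √2, a_2 = √3 and b_j = 0, in agreement with the Hermite recurrence λHe_n = He_{n+1} + nHe_{n−1}.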


3.4 Further Spectral Analysis of the Difference Operator

Polynomials of the Second Kind In this section we present the main general results of the spectral analysis of Jacobi matrices. Consider the difference equation as in (3.1.5): (Lu)_j = z u_j, j ∈ N, but with the initial data u_0 = 0, u_1 = 1/a_0.

Let Q_1(z) = 1/a_0, Q_2(z), …, Q_j(z), … be the solution of this Cauchy problem. Obviously, Q_j(z), j ∈ N, is a polynomial of degree j − 1 with real coefficients. The polynomials Q_0(z) = 0, Q_j(z), j ∈ N, are called polynomials of the second kind generated by the expression L. It is clear that P(z) = (P_1(z), P_2(z), …) and Q(z) = (Q_1(z), Q_2(z), …) are linearly independent systems of solutions of the equation (Lu)_j = z u_j, j ∈ N. The polynomials of the second kind will be needed to describe the Weyl function, which uniquely characterizes the spectral measure of the operator. The polynomials P_j(z) and Q_j(z), j ∈ N_0, are related by

Q_j(z) = ∫_R (P_j(λ) − P_j(z))/(λ − z) dσ(λ), j ∈ N_0.   (3.4.1)

Indeed, the sequence

u_j = ∫_R (P_j(λ) − P_j(z))/(λ − z) dσ(λ)

satisfies the equation

(Lu)_j = ∫_R ((LP(λ))_j − (LP(z))_j)/(λ − z) dσ(λ) = ∫_R (λP_j(λ) − zP_j(z))/(λ − z) dσ(λ)
= ∫_R (λP_j(λ) − zP_j(λ) + zP_j(λ) − zP_j(z))/(λ − z) dσ(λ)
= z ∫_R (P_j(λ) − P_j(z))/(λ − z) dσ(λ) + ∫_R P_j(λ) dσ(λ) = z u_j, j ∈ N,

where the last integral is zero due to the mutual orthogonality (3.2.6) of the P_j(λ), j ∈ N_0; in particular, P_j(λ), j ∈ N, are orthogonal to P_0(λ) = 1. Moreover, u_0 = 0 and

u_1 = ∫_R ((1/a_0)(λ − b_0) − (1/a_0)(z − b_0))/(λ − z) dσ(λ) = 1/a_0,

hence u_j = Q_j(z), j ∈ N. Thus, (3.4.1) is proved.
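Both solution families can be generated directly from the three-term recurrence. The sketch below uses arbitrary test coefficients a_j > 0, b_j ∈ R (not from the book) and checks the constancy of the discrete Wronskian a_j(P_j(z)Q_{j+1}(z) − P_{j+1}(z)Q_j(z)) = 1, which expresses the linear independence of P and Q.

```python
def first_and_second_kind(a, b, z, N):
    # Solve z u_j = a_{j-1} u_{j-1} + b_j u_j + a_j u_{j+1} for both initial
    # conditions: P_{-1} = 0, P_0 = 1 and Q_0 = 0, Q_1 = 1/a_0.
    P = [1.0 + 0j, (z - b[0]) / a[0]]
    Q = [0.0 + 0j, 1.0 / a[0]]
    for j in range(1, N):
        P.append(((z - b[j]) * P[j] - a[j - 1] * P[j - 1]) / a[j])
        Q.append(((z - b[j]) * Q[j] - a[j - 1] * Q[j - 1]) / a[j])
    return P, Q

a = [1.0, 2.0, 0.5, 1.5, 2.5, 1.0]       # test coefficients, a_j > 0
b = [0.0, -1.0, 3.0, 0.2, -0.7, 1.0]
z = 0.3 + 1.1j
P, Q = first_and_second_kind(a, b, z, 5)
wronskians = [a[j] * (P[j] * Q[j + 1] - P[j + 1] * Q[j]) for j in range(5)]
```

At j = 0 the Wronskian equals a_0(P_0 Q_1 − P_1 Q_0) = 1, and the recurrence preserves it for all j.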


Note that the polynomials Q_j(z), j ∈ N, are also orthogonal with respect to some measure dσ̂(λ) on the real axis R. Indeed, the a_0 Q_j(z) can be considered as polynomials of the first kind of the shifted difference expression L̂ with coefficients â_j = a_{j+1}, b̂_j = b_{j+1}, j = −1, 0, 1, … . Therefore, as dσ̂(λ) we can take the spectral measure corresponding to L̂.

In what follows, the next lemma will be useful.

Lemma 3.4.1 Let B_0(λ), B_1(λ), …, B_j(λ), … be an arbitrary sequence of real polynomials of degree j ∈ N_0, respectively, orthogonal on the real axis R with respect to a measure dσ(λ) satisfying the conditions of Theorem 3.3.1. Then the roots (zeros) of these polynomials are real and simple.

Proof Suppose some polynomial B_n(λ) has fewer than n real zeros of odd multiplicity. Denote by λ_1, λ_2, …, λ_m (m < n) those of its real zeros at which B_n(λ) changes sign. Construct a polynomial P(λ) of degree m with the zeros λ_1, λ_2, …, λ_m such that sgn P(λ) = sgn B_n(λ), λ ∈ R. Then, on the one hand, P(λ) is representable as a linear combination Σ_{j=0}^m c_j B_j(λ), and hence

∫_R P(λ)B_n(λ) dσ(λ) = 0;

on the other hand, this equality is impossible by the construction of P(λ), since P(λ)B_n(λ) ≥ 0 does not vanish σ-almost everywhere. The obtained contradiction completes the proof of the lemma. □

Corollary The polynomials B_j(λ) of the previous lemma satisfy the estimate

|B_j(x + iy′)| ≤ |B_j(x + iy″)|, |y′| ≤ |y″|, j ∈ N_0.   (3.4.2)
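Lemma 3.4.1 can be checked numerically for the polynomials of the first kind: their zeros coincide with the eigenvalues of the finite truncation of the (real symmetric) Jacobi matrix, so they are automatically real, and simplicity can be verified directly. The sketch below takes the Hermite-case coefficients a_j = √(j+1), b_j = 0 as test data and compares the eigenvalues with the Gauss–Hermite nodes computed independently by NumPy.

```python
import numpy as np

def truncated_jacobi(a, b, n):
    # n x n symmetric truncation: b_j on the diagonal, a_j on the off-diagonals.
    J = np.diag(np.asarray(b[:n], dtype=float))
    J += np.diag(a[:n - 1], 1) + np.diag(a[:n - 1], -1)
    return J

n = 7
a = [np.sqrt(j + 1) for j in range(n)]       # Hermite-case test coefficients
b = [0.0] * n
zeros = np.linalg.eigvalsh(truncated_jacobi(a, b, n))   # zeros of P_n

nodes, _ = np.polynomial.hermite_e.hermegauss(n)        # zeros of He_n
```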

The inequality (3.4.2) follows from the representation

B_j(z) = a ∏_{α=1}^{j} (z − λ_α),

where the λ_α are the real zeros of the polynomial B_j(z). In particular, all zeros of the polynomials of the first and second kind are real, and the estimate (3.4.2) holds for them.

The Resolvent Let us pass to the resolvent associated with the operator L. Let A be some self-adjoint extension of the operator L, in general with exit into a space H̃ wider than l_2. We denote by Ẽ_λ and R̃_z the resolution of the identity and the resolvent of the operator A in H̃, respectively, and by E_λ and R_z the generalized resolution of the identity and the generalized resolvent. If P is the projection operator of H̃ onto l_2, then E_λ = P Ẽ_λ P and R_z = P R̃_z P. Since

R_z = ∫_R 1/(λ − z) dE_λ,

then, due to (3.2.4), the matrix of the operator R_z in the basis δ_0, δ_1, …, δ_j, … has the form

R_{z;j,k} = (R_z δ_k, δ_j)_0 = ∫_R P_j(λ)P_k(λ)/(λ − z) dσ(λ), j, k ∈ N_0.   (3.4.3)

In what follows, the function

m(z) = R_{z;0,0} = (R_z δ_0, δ_0)_0 = ∫_R 1/(λ − z) dσ(λ),   (3.4.4)

holomorphic outside the spectrum of the operator A,

plays a significant role; it is the Stieltjes transform of the spectral measure. As is well known, σ(A) can be recovered from m(z) by the inversion formula. Consider the case when dσ(λ) is orthogonal, that is, constructed from an ordinary resolution of the identity. According to (3.4.1) and (3.4.3) we have

(R_z δ_0)_j = ∫_R P_j(λ)/(λ − z) dσ(λ) = ∫_R (P_j(λ) − P_j(z))/(λ − z) dσ(λ) + P_j(z) ∫_R 1/(λ − z) dσ(λ)
= Q_j(z) + m(z)P_j(z), j ∈ N_0.   (3.4.5)

For the ordinary resolvent R_z, the Hilbert identity (2.1.24)

R_μ − R_λ = (μ − λ)R_μ R_λ

holds true. Using this identity and (3.4.5), we get

(m̄(ξ) − m(z))/(ξ̄ − z) = ((R_ξ̄ − R_z)/(ξ̄ − z) δ_0, δ_0)_0 = (R_ξ̄ R_z δ_0, δ_0)_0 = (R_ξ^* R_z δ_0, δ_0)_0 = (R_z δ_0, R_ξ δ_0)_0
= Σ_{j=0}^∞ (R_z δ_0)_j \overline{(R_ξ δ_0)_j} = Σ_{j=0}^∞ (Q_j(z) + m(z)P_j(z)) \overline{(Q_j(ξ) + m(ξ)P_j(ξ))}, ξ̄ ≠ z.


Therefore, for arbitrary ξ and z we have

(m̄(ξ) − m(z))/(ξ̄ − z) = Σ_{j=0}^∞ (Q_j(z) + m(z)P_j(z)) \overline{(Q_j(ξ) + m(ξ)P_j(ξ))}, ξ̄ ≠ z.   (3.4.6)

We put ξ = z and get

(m̄(z) − m(z))/(z̄ − z) = Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|², Im z ≠ 0.   (3.4.7)

Let us write the relation (3.4.7) in the case when dσ(λ) is constructed from a generalized resolution of the identity. Now the Hilbert identity for R̃_z is used, i.e., R̃_μ − R̃_λ = (μ − λ)R̃_μ R̃_λ, and ξ = z is again assumed. Hence

(m̄(z) − m(z))/(z̄ − z) = ((R_z̄ − R_z)/(z̄ − z) δ_0, δ_0)_0 = ((R̃_z̄ − R̃_z)/(z̄ − z) δ_0, δ_0)_{H̃}
= (R̃_z̄ R̃_z δ_0, δ_0)_{H̃} = (R̃_z^* R̃_z δ_0, δ_0)_{H̃} = (R̃_z δ_0, R̃_z δ_0)_{H̃} ≥ (P R̃_z δ_0, P R̃_z δ_0)_0 = (R_z δ_0, R_z δ_0)_0 = Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|².

Therefore, if dσ(λ) is orthogonal, then the relation (3.4.7) holds true. If dσ(λ) is not orthogonal, then the inequality

(m̄(z) − m(z))/(z̄ − z) ≥ Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|², Im z ≠ 0,

holds true.

The relation (3.4.7) sometimes makes it possible to find m(z), and therefore dσ(λ), for a given L. Indeed, suppose we are in the determined case and a function f(z), Im z ≠ 0, is found such that the sequence (Q_j(z) + f(z)P_j(z))_{j=0}^∞ ∈ l_2 for each non-real z. Then m(z) = f(z). Indeed, if there exists a z_0 for which m(z_0) ≠ f(z_0), then (Q_j(z_0) + m(z_0)P_j(z_0))_j ∈ l_2 and (Q_j(z_0) + f(z_0)P_j(z_0))_j ∈ l_2, hence also ((m(z_0) − f(z_0))P_j(z_0))_j ∈ l_2, i.e., (P_j(z_0))_j ∈ l_2, which is impossible in the determined case for L.

The Indeterminate Case Suppose that the expression L corresponds to the indeterminate case, that is, the corresponding operator has non-trivial defect indices and, therefore, has many self-adjoint extensions (without exit from the space). It turns


out that the set of all spectral measures corresponding to L can be described. Let us first describe only the orthogonal ones.

For non-real z, as already said, the series Σ_{j=0}^∞ |P_j(z)|² converges. In fact, a stronger statement holds.

Lemma 3.4.2 In the indeterminate case, the series Σ_{j=0}^∞ |P_j(z)|² and Σ_{j=0}^∞ |Q_j(z)|² both converge uniformly in each bounded region of the complex plane.

Proof Consider a function

m(z) = ∫_R 1/(λ − z) dσ(λ)

corresponding to an ordinary resolution of the identity. This function m(z) satisfies (3.4.7). Since the left-hand side of that equality is continuous everywhere outside the real axis R, by Dini's theorem the series

Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|²

converges uniformly in every bounded region located at a positive distance from R.

Let dσ_1(λ) be a spectral measure for L different from dσ(λ), and let m_1(z) be the corresponding function (3.4.4). Since m(z) ≢ m_1(z), the equality m(z) = m_1(z) is possible only on some sequence z_1, z_2, …, z_n, … with limit points only on the real axis R. Let us show that in each bounded region G of the complex plane which lies at a positive distance from the real axis R and from the points z_j, the series Σ_{j=0}^∞ |P_j(z)|² converges uniformly.

Indeed, both series

Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|², Σ_{j=0}^∞ |Q_j(z) + m_1(z)P_j(z)|²

converge uniformly in G. Then the series

|m(z) − m_1(z)|² Σ_{j=0}^∞ |P_j(z)|² = Σ_{j=0}^∞ |(Q_j(z) + m(z)P_j(z)) − (Q_j(z) + m_1(z)P_j(z))|²

also converges uniformly, and hence we get the conclusion.


Due to the estimate established in (3.4.2), the series Σ_{j=0}^∞ |P_j(x + iy′)|² is majorized by the series Σ_{j=0}^∞ |P_j(x + iy″)|² if |y″| ≥ |y′|. Therefore, the uniform convergence of the series Σ_{j=0}^∞ |P_j(z)|² in the region G implies its uniform convergence in an arbitrary bounded region. Finally, from the uniform convergence of the series Σ_{j=0}^∞ |Q_j(z) + m(z)P_j(z)|² and Σ_{j=0}^∞ |P_j(z)|² in a bounded region separated from the real axis R, it follows that the series Σ_{j=0}^∞ |Q_j(z)|² converges uniformly in this region. Taking into account the estimate (3.4.2), we get the uniform convergence in an arbitrary bounded region. □

Let us fix a non-real z and consider the values m(z) as points of the complex plane; different m(z) correspond to different measures dσ(λ) generated by ordinary resolutions of the identity. It turns out that, similarly to the case of the Sturm–Liouville equation, all such points lie on some Weyl–Hamburger circle. Recall that the equation

a|ω|² + b̄ω + bω̄ + c = 0, a > 0, Im c = 0, |b|² − ac > 0,   (3.4.8)

in the complex variable ω defines a circle in the complex plane C with center O and radius R calculated by the formulas

O = −b/a, R² = (1/a²)(|b|² − ac).   (3.4.9)

After simple transformations, due to Lemma 3.4.2, the equality (3.4.7) is rewritten in the form

(Σ_{j=0}^∞ |P_j(z)|²)|m(z)|² + (1/(z̄ − z) + Σ_{j=0}^∞ P_j(z)\overline{Q_j(z)}) m(z) + (1/(z − z̄) + Σ_{j=0}^∞ \overline{P_j(z)}Q_j(z)) m̄(z) + Σ_{j=0}^∞ |Q_j(z)|² = 0.   (3.4.10)

This is a relation of the type (3.4.8). (Note that the inequality |b|² − ac > 0 is fulfilled automatically if it is known that an ω satisfying (3.4.8) exists.) Thus, all values of m(z) indeed belong to some circle. We will show that these values fill the whole circle.
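The center-and-radius formulas (3.4.9) are elementary to verify: every point ω = O + R e^{iθ} must satisfy the circle equation (3.4.8). A minimal check with arbitrary admissible test values a, b, c:

```python
import cmath

a, b, c = 2.0, 1.5 - 0.8j, -1.0               # test values with |b|^2 - a c > 0
assert abs(b) ** 2 - a * c > 0

O = -b / a                                    # center, formula (3.4.9)
R = (abs(b) ** 2 - a * c) ** 0.5 / a          # radius, formula (3.4.9)

residuals = []
for k in range(8):                            # sample points on the circle
    w = O + R * cmath.exp(1j * cmath.pi * k / 4)
    residuals.append(a * abs(w) ** 2 + (b.conjugate() * w + b * w.conjugate()).real + c)
```

Each residual vanishes up to rounding, confirming that (3.4.9) parametrizes exactly the solution set of (3.4.8).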


Consider the Cayley transform of the operator L: U_z = (L − z̄I)(L − zI)^{−1}. This operator maps isometrically all of R(L − zI) onto all of R(L − z̄I) in H_0 = l_2. Let us construct an isometric operator X that maps the deficiency subspace N_z onto N_z̄. The orthogonal sum V_z = U_z ⊕ X is the Cayley transform of a self-adjoint extension A of the operator L in l_2. Going through all such X, we get all extensions. Recall that the deficiency subspace N_z is spanned by the vector (P_0(z), P_1(z), …, P_j(z), …).

Let us decompose the vector δ_0 into orthogonal components in R(L − zI) and N_z:

δ_0 = e_z + η_z, e_z ∈ R(L − zI), η_z ∈ N_z.   (3.4.11)

Let us calculate ||η_z||_0. Denoting the distance by d, we have

||η_z||²_0 = d²(δ_0, R(L − zI)) = d²(δ_0, (L − zI)l_{2,0}).

The last equality holds true because the set (L − zI)l_{2,0} is dense in R(L − zI). Let us pass to the Fourier transform. Since δ̂_0 = 1, and the Fourier image of (L − zI)l_{2,0}, which we denote by Φ, consists of all polynomials of the form (λ − z)P(λ) with P(λ) arbitrary, we get

||η_z||²_0 = d²(δ̂_0, Φ) = inf_{P: P(z)=0} ||1 − P(λ)||²_{L_2(R,dρ(λ))} = inf_{P: P(z)=1} ||P(λ)||²_{L_2(R,dρ(λ))}.

Let us calculate the last infimum. An arbitrary polynomial P(λ) with the condition P(z) = 1 has the form

P(λ) = (Σ_{j=0}^∞ c_j P_j(λ)) (Σ_{j=0}^∞ c_j P_j(z))^{−1},

where the sequence of coefficients c_j is finite. Now, by the Cauchy–Bunyakovsky inequality, we get

||P(λ)||²_{L_2(R,dρ(λ))} = ∫_R |Σ_{j=0}^∞ c_j P_j(λ)|² dρ(λ) / |Σ_{j=0}^∞ c_j P_j(z)|² = Σ_{j=0}^∞ |c_j|² / |Σ_{j=0}^∞ c_j P_j(z)|² ≥ 1 / Σ_{j=0}^∞ |P_j(z)|².

On the other hand, putting c_j = \overline{P_j(z)} for j = 0, 1, …, n and c_j = 0 for j > n, and letting n → ∞, we get

||P(λ)||²_{L_2(R,dρ(λ))} = (Σ_{j=0}^n |P_j(z)|²)^{−1} → (Σ_{j=0}^∞ |P_j(z)|²)^{−1}.

Thus,

||η_z||²_0 = inf_{P(z)=1} ||P(λ)||²_{L_2(R,dρ(λ))} = 1 / Σ_{j=0}^∞ |P_j(z)|².   (3.4.12)
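The infimum in (3.4.12) is an instance of an elementary finite-dimensional fact: among coefficient vectors (c_j) with Σ_j c_j P_j(z) = 1, the minimum of Σ_j |c_j|² equals 1/Σ_j |P_j(z)|², attained at c_j = \overline{P_j(z)}/Σ_j |P_j(z)|². A quick numerical check with arbitrary test values standing in for the P_j(z):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=6) + 1j * rng.normal(size=6)   # stand-in for (P_j(z))_j
bound = 1.0 / np.sum(np.abs(v) ** 2)               # right-hand side of (3.4.12)

c_opt = v.conjugate() * bound                      # the minimizing coefficients
constraint = np.sum(c_opt * v)                     # should equal 1
value = np.sum(np.abs(c_opt) ** 2)                 # should equal the bound

# Any other admissible c gives a value >= bound (Cauchy-Bunyakovsky).
trials = []
for _ in range(100):
    c = rng.normal(size=6) + 1j * rng.normal(size=6)
    c = c / np.sum(c * v)                          # normalize: sum c_j P_j(z) = 1
    trials.append(np.sum(np.abs(c) ** 2))
```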

In particular, (3.4.12) shows that ||η_z||_0 ≠ 0. If δ_0 = e_z̄ + η_z̄ is the decomposition of δ_0 in accordance with

l_2 = R(L − z̄I) ⊕ N_z̄,

then ||η_z||_0 = ||η_z̄||_0.

Let A_0 be some fixed self-adjoint extension of the operator L in l_2, and let A be some other arbitrary self-adjoint extension. We denote their Cayley transforms by V_z^0 and V_z, respectively. Since V_z^0 and V_z act on R(L − zI) in the same way, for some ϕ ∈ [0, 2π) we have

V_z δ_0 = V_z e_z + V_z η_z = V_z^0 e_z + e^{iϕ} V_z^0 η_z;
(V_z δ_0, δ_0)_0 = (V_z^0 e_z + e^{iϕ} V_z^0 η_z, e_z̄ + η_z̄)_0 = (V_z^0 e_z, e_z̄)_0 + e^{iϕ}(V_z^0 η_z, η_z̄)_0.   (3.4.13)

From the set of isometric operators defining V_z^0 on N_z, we choose the one for which V_z^0 η_z = η_z̄. Then (3.4.13) takes the form

(V_z δ_0, δ_0)_0 = (V_z^0 e_z, e_z̄)_0 + e^{iϕ}||η_z̄||²_0 = (V_z^0 e_z, e_z̄)_0 + e^{iϕ}||η_z||²_0.

However,

V_z = (A − z̄I)(A − zI)^{−1} = I + (z − z̄)R_z,

therefore (V_z δ_0, δ_0)_0 = 1 + (z − z̄)m(z). Thus,

1 + (z − z̄)m(z) = (V_z^0 e_z, e_z̄)_0 + e^{iϕ}||η_z̄||²_0,


hence

m(z) = (1/(z − z̄)) ((V_z^0 e_z, e_z̄)_0 − 1 + e^{iϕ}||η_z̄||²_0).   (3.4.14)

Taking all ϕ ∈ [0, 2π), we obtain all possible isometric operators mapping N_z onto N_z̄; the operator A then runs through all self-adjoint extensions of L in l_2. According to (3.4.14), the point m(z) describes a circle which coincides with the circle (3.4.10). Its center and radius can be calculated by the formulas (3.4.9), but for the radius the expression resulting from (3.4.12) and (3.4.14) is more convenient:

R = ||η_z||²_0 / |z − z̄| = (|z − z̄| Σ_{j=0}^∞ |P_j(z)|²)^{−1}.

Thus, we have proved the first part of the following theorem.

Theorem 3.4.3 Let the expression L correspond to the indeterminate case. Fix a non-real z and consider the point

m(z) = ∫_R 1/(λ − z) dσ(λ),

where dσ(λ) is an orthogonal spectral measure. Going through all such measures, we get that the point m(z) describes a circle K_z, the Weyl–Hamburger circle, with center and radius given by

O(z) = −(1/(z − z̄) + Σ_{j=0}^∞ \overline{P_j(z)}Q_j(z)) / Σ_{j=0}^∞ |P_j(z)|², R(z) = (|z − z̄| Σ_{j=0}^∞ |P_j(z)|²)^{−1}.   (3.4.15)

If L is determined, then the circle (3.4.15) degenerates into the Weyl–Hamburger point. The points ω of the Weyl–Hamburger circle are in one-to-one correspondence with the orthogonal spectral measures dσ(λ) and therefore, due to (3.2.4), with the orthogonal spectral resolutions of the identity. Namely, each measure dσ(λ) corresponds to the point

ω = m(z) = ∫_R 1/(λ − z) dσ(λ).


Conversely, each ω ∈ K_z corresponds to a measure dσ(λ) whose Stieltjes transform is calculated by the formula

m(ζ) = (E_0(ζ, z)ω + E_1(ζ, z)) / (D_0(ζ, z)ω + D_1(ζ, z)),   (3.4.16)

where E_0(ζ, z), E_1(ζ, z), D_0(ζ, z), D_1(ζ, z) are entire functions of each variable (with the other fixed) defined by the series

E_0(ζ, z) = 1 + (ζ − z) Σ_{j=0}^∞ Q_j(ζ)P_j(z),
E_1(ζ, z) = (ζ − z) Σ_{j=0}^∞ Q_j(ζ)Q_j(z),
D_0(ζ, z) = −(ζ − z) Σ_{j=0}^∞ P_j(ζ)P_j(z),
D_1(ζ, z) = 1 − (ζ − z) Σ_{j=0}^∞ P_j(ζ)Q_j(z).   (3.4.17)

Proof of the second part of the theorem Replacing ξ by ζ̄ in (3.4.6) and solving the resulting equality for m(ζ), we obtain

m(ζ) = (E_0(ζ, z)m(z) + E_1(ζ, z)) / (D_0(ζ, z)m(z) + D_1(ζ, z)),   (3.4.18)

where E_0(ζ, z), E_1(ζ, z), D_0(ζ, z), D_1(ζ, z) have the form (3.4.17) (the convergence of the series (3.4.17) and the analyticity properties follow from Lemma 3.4.2). The relation (3.4.18) shows that if m(z) is known at one point, then it is known everywhere. The statement of the theorem easily follows from this conclusion. □

Remark 3.4.4 The Weyl–Hamburger circle K_z is located in the upper half-plane if Im z > 0, and in the lower one if Im z < 0. This follows from the fact that the points m(z), as we go through all the orthogonal measures, describe the whole circle K_z, while

Im m(z) = Im ∫_R 1/(λ − z) dσ(λ) = y ∫_R 1/((λ − x)² + y²) dσ(λ), z = x + iy.


Corollary 3.4.5 In the indeterminate case, the spectrum of every self-adjoint extension of the operator L in the space l_2 is discrete and has no condensation points on any finite interval.

Indeed, it follows from (3.4.16) that m(z) is a meromorphic function, which is equivalent to the statement of the corollary.

3.5 The Classical Moment Problem

The classical moment problem (the Hamburger moment problem) consists in finding conditions on a sequence of real numbers (s_n), n ∈ N_0, s_n ∈ R, which are necessary and sufficient for the existence of the representation

s_n = ∫_R λ^n dρ(λ), n ∈ N_0,

where dρ(λ) is a Borel measure on the σ-algebra of Borel sets of the real axis R in the usual topology. The problem also contains questions about the uniqueness of such a measure, a description of all measures in the case of non-uniqueness, etc.

There are several approaches to these questions. In our opinion, the most natural is the spectral approach, first proposed by M. G. Krein for similar problems. He used the spectral representation of a self-adjoint operator, which enables broad generalizations of this problem, particularly interesting and important for mathematical physics. This approach is closely related to the theory of Jacobi matrices, and therefore its presentation in this chapter is natural (see also Chap. 1, Sects. 2.4 and 2.11).

Necessary Chains of Hilbert Spaces For convenience, we repeat, with additional explanations, some material from Sect. 2.12. Consider a chain of Hilbert spaces

G_− ⊃ G_0 ⊃ G_+   (3.5.1)

with an involution "∗". In other words, an involution G_− ∋ ξ ↦ ξ^∗ ∈ G_− is defined in the negative space G_−. The restriction of this involution to the zero space G_0 or to the positive space G_+ is also an involution in the corresponding space. Let us construct the tensor square of the chain (3.5.1),

G_− ⊗ G_− ⊃ G_0 ⊗ G_0 ⊃ G_+ ⊗ G_+,   (3.5.2)

and fix some generalized kernel K ∈ G_− ⊗ G_−. It is called positive definite if

(K, ϕ ⊗ ϕ^∗)_{G_0 ⊗ G_0} ≥ 0 for all ϕ ∈ G_+.   (3.5.3)


A kernel K ∈ G_− ⊗ G_− is called Hermitian if the bilinear form

a(ϕ, ψ) = (K, ψ ⊗ ϕ^∗)_{G_0 ⊗ G_0}, ϕ, ψ ∈ G_+,

is Hermitian, that is, a(ϕ, ψ) = \overline{a(ψ, ϕ)}. It is clear that every positive definite kernel is Hermitian.

For each positive definite kernel K, a corresponding Hilbert space H_K is constructed in a standard way. The construction is as follows. One introduces the quasi-scalar product

⟨ϕ, ψ⟩ = (K, ψ ⊗ ϕ^∗)_{G_0 ⊗ G_0}, ϕ, ψ ∈ G_+.   (3.5.4)

All ϕ ∈ G_+ for which ⟨ϕ, ϕ⟩ = 0 are identified with zero. Consider the corresponding classes of vectors ϕ ∈ G_+, namely,

ϕ̂ := {χ ∈ G_+ | ⟨ϕ − χ, ϕ − χ⟩ = 0}.   (3.5.5)

The expression (3.5.4) becomes a non-degenerate scalar product on the classes (3.5.5). The space H_K is the completion of the linear space of classes (3.5.5) with respect to the scalar product (3.5.4).

In more detail, this procedure is as follows. Take

N = {ϕ ∈ G_+ | ⟨ϕ, ϕ⟩ = 0}.   (3.5.6)

Due to the inequality that follows from (3.5.4),

|⟨ϕ, ψ⟩| = |(K, ψ ⊗ ϕ^∗)_{G_0 ⊗ G_0}| ≤ ||K||_{G_− ⊗ G_−} ||ψ ⊗ ϕ^∗||_{G_+ ⊗ G_+} ≤ ||K||_{G_− ⊗ G_−} ||ψ||_{G_+} ||ϕ||_{G_+}, ϕ, ψ ∈ G_+,   (3.5.7)

N is a closed (linear) subspace of G_+. The class ϕ̂ corresponding to ϕ ∈ G_+ coincides with the hyperplane ϕ + N, i.e., with an element of the quotient space G_+/N. The form (3.5.4) does not change when ϕ and ψ run through the corresponding classes ϕ̂ and ψ̂ and gives their scalar product ⟨ϕ̂, ψ̂⟩_{H_K}. If ⟨ϕ, ϕ⟩ > 0 for all ϕ ≠ 0, then (3.5.15) is a scalar and not a quasi-scalar product, and the theorem becomes obvious: the chain (3.5.17) and the scalar product (3.5.15) generate the chain (3.5.18). It is quasi-scalar if the imbedding operator for H_+ ⊂ G_+ is of Hilbert–Schmidt type.

Let the chain (3.5.1) and a positive definite kernel K ∈ G_− ⊗ G_− be given. Consider an operator A acting in the space G_+ and having a dense domain D(A) in G_+. When applying Theorem 3.5.2, the question arises as to when this operator can be "restricted" to an operator Â acting in the space H_K of classes ϕ̂ generated by N in (3.5.6), that is, when A transfers a class into a class. Then one can set

Âϕ̂ := (Aϕ)ˆ, ϕ ∈ ϕ̂, ϕ ∈ D(A); D(Â) = (D(A))ˆ.   (3.5.19)

This is easy to do in the following two cases.

Lemma 3.5.4 Let the operator A be Hermitian with respect to the quasi-scalar product (3.5.4), i.e., ⟨Aϕ, ψ⟩ = ⟨ϕ, Aψ⟩, ϕ, ψ ∈ D(A), or let the operator A be isometric with respect to (3.5.4), i.e., ⟨Aϕ, Aψ⟩ = ⟨ϕ, ψ⟩, ϕ, ψ ∈ D(A). Then A transfers a class into a class, so that (3.5.19) is well defined.

Proof Consider the case of a Hermitian operator A. In order for (3.5.19) to make sense, it is necessary to make sure that A transfers a class into a class, and for this it is necessary to prove: if ϕ ∈ D(A) and ⟨ϕ, ϕ⟩ = 0, then ⟨Aϕ, Aϕ⟩ = 0. Let ψ ∈ D(A); then

|⟨Aϕ, ψ⟩|² = |⟨ϕ, Aψ⟩|² ≤ ⟨ϕ, ϕ⟩⟨Aψ, Aψ⟩,   (3.5.20)

since the Cauchy–Bunyakovsky inequality holds true for the quasi-scalar product (3.5.4). Since ⟨ϕ, ϕ⟩ = 0, from (3.5.20) we have ⟨Aϕ, ψ⟩ = 0, ψ ∈ D(A). Due to the density of D(A) in G_+, we approximate the vector Aϕ in G_+ by vectors


ψ_n ∈ D(A). These also converge in the space H_K, according to the estimate (3.5.9). Again, due to the same Cauchy–Bunyakovsky inequality, we have

⟨Aϕ, Aϕ⟩ = lim_{n→∞} ⟨Aϕ, ψ_n⟩ = 0,

and this is what had to be proved. The case of an isometric operator A is obvious: if ⟨ϕ, ϕ⟩ = 0, then ⟨Aϕ, Aϕ⟩ = ⟨ϕ, ϕ⟩ = 0. □

Some Results of the Spectral Theory Let a self-adjoint operator A defined on D(A) be given in a separable Hilbert space H. Consider a rigging of the space H:

H_− ⊃ H ⊃ H_+ ⊃ D,   (3.5.21)

where H_+ is a Hilbert space topologically and quasi-nuclearly embedded in H; H_− is the space dual to H_+ with respect to the space H; D is a linear topological space topologically imbedded into H_+. Recall that an operator A is said to be standardly connected with the chain (3.5.21) if D ⊂ D(A) and the restriction A|D acts continuously from D to H_+. Such a case is considered below. We also recall that a vector o ∈ D is called strongly cyclic for the operator A if o ∈ D(A^n) and A^n o ∈ D for all n ∈ N_0, and the set of all vectors A^n o, n ∈ N_0, is total in the space H_+ (and therefore also in H). Assuming that a strongly cyclic vector exists, we formulate a reduced version of the projection spectral theorem (see Chap. 1, Sect. 2.11).

Theorem 3.5.5 For a self-adjoint operator A with a strongly cyclic vector in a separable Hilbert space H, there exists a Borel measure dρ(λ) on the real axis such that for ρ-almost all λ ∈ R there exists a generalized eigenvector ξ_λ ∈ H_−, that is, for each f ∈ D,

(ξ_λ, Af)_H = λ(ξ_λ, f)_H, ξ_λ ≠ 0.   (3.5.22)

The corresponding Fourier transform F acts according to the rule

H ⊃ H_+ ∋ f ↦ (Ff)(λ) = f̂(λ) = (f, ξ_λ)_H ∈ L_2(R, dρ(λ))   (3.5.23)

and is an isometric operator (after its extension by continuity) acting from the space H into L_2(R, dρ(λ)). The image of the operator A under the transformation F is the operator of multiplication by the independent variable λ in L_2(R, dρ(λ)), that is, (FAf)(λ) = λf̂(λ).

We also recall that for a self-adjoint operator A defined on D(A) in H, a vector f ∈ ∩_{n=0}^∞ D(A^n) is called quasi-analytic if the class C{m_n}, where in this


case m_n = ||A^n f||_H, is quasi-analytic (the class of functions on [a, b] ⊂ R is defined by the expression

C({m_n}) = {g ∈ C^∞([a, b]) | ∃K = K_g > 0: |g^{(n)}(t)| ≤ K^n m_n, t ∈ [a, b], n ∈ N_0};

it is quasi-analytic if

Σ_{n=1}^∞ (inf_{k ≥ n} m_k^{1/k})^{−1} = ∞).

The quasi-analyticity of vectors is used in the criteria of self-adjointness and commutativity, that is, results from Chap. 1 are used (see Sect. 2.6).

The Moment Problem As mentioned above, the classical moment problem is the problem of finding conditions on a sequence (s_n), n ∈ N_0, s_n ∈ R, under which there exists a Borel measure dρ(λ) on the real axis R such that

s_n = ∫_R λ^n dρ(λ), n ∈ N_0.   (3.5.24)

The main result of this section is contained in the following theorem.

Theorem 3.5.6 A sequence of real numbers (s_n)_{n∈N_0} has a representation (3.5.24) if and only if it is positive definite, i.e.,

Σ_{m,n=0}^∞ s_{n+m} f_m f̄_n ≥ 0   (3.5.25)

for arbitrary finite sequences (f_n)_{n=0}^∞ of numbers f_n ∈ C. The representation (3.5.24) exists and the measure dρ(λ) is unique if the sequence (s_n), n ∈ N_0, is positive definite and, additionally,

Σ_{p=1}^∞ s_{2p}^{−1/(2p)} = ∞.   (3.5.26)

Note that the condition (3.5.25) is necessary and sufficient for the existence of the representation (3.5.24), but there can be many representing measures. The condition (3.5.25) together with (3.5.26) guarantees the uniqueness of the measure in the representation (3.5.24).
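Both conditions of Theorem 3.5.6 are easy to test for a concrete sequence. For the moments of the standard Gaussian measure, s_{2p} = (2p − 1)!! and s_{2p+1} = 0 (an illustrative choice, not part of the theorem), the Hankel matrix (s_{j+k}) is positive definite, and the terms of the Carleman series (3.5.26) decay only like const/√p, so the series diverges:

```python
import math
import numpy as np

def gaussian_moment(n):
    # s_n = (n - 1)!! for even n, 0 for odd n (moments of N(0, 1); test data).
    return float(math.prod(range(n - 1, 0, -2))) if n % 2 == 0 else 0.0

N = 6
s = [gaussian_moment(n) for n in range(2 * N)]
H = np.array([[s[j + k] for k in range(N)] for j in range(N)])  # Hankel matrix

eigs = np.linalg.eigvalsh(H)                      # condition (3.5.25): all > 0

# Terms of the Carleman series (3.5.26): s_{2p}^(-1/(2p)).
carleman_terms = [s[2 * p] ** (-1.0 / (2 * p)) for p in range(1, N)]
```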


Proof We first prove the necessity of the condition (3.5.25). If the sequence (s_n), n ∈ N_0, has the representation (3.5.24), then for an arbitrary finite sequence f = (f_n)_{n=0}^∞, f_n ∈ C, we have

Σ_{m,n=0}^∞ s_{n+m} f_m f̄_n = ∫_R |Σ_{n=0}^∞ λ^n f_n|² dρ(λ) ≥ 0,

which proves the necessity.

Let us prove sufficiency. We will use the scheme outlined in Theorem 3.5.2. The role of the first of the chains (3.5.13) is played by the rigging of the usual space l_2 = G_0 of sequences

f = (f_j)_{j=0}^∞, f_j ∈ C; ||f||²_{l_2} = Σ_{j=0}^∞ |f_j|² < ∞.

The spaces G_+ and G_− will be l_2-type spaces with weights. Namely, we denote by l_2(q) the space l_2 with a weight q = (q_j)_{j=0}^∞, q_j > 0; it consists of the sequences f = (f_j)_{j=0}^∞, f_j ∈ C, for which

||f||²_{l_2(q)} = Σ_{j=0}^∞ |f_j|² q_j < ∞, (f, g)_{l_2(q)} = Σ_{j=0}^∞ f_j ḡ_j q_j.   (3.5.27)

We fix q = (q_j)_{j=0}^∞, q_j ≥ 1. Then the first of the chains (3.5.13) (i.e., (3.5.1)) has the form

l_2(q^{−1}) ⊃ l_2 ⊃ l_2(q) ⊃ l_fin, q^{−1} = (q_j^{−1})_{j=0}^∞, q = (q_j)_{j=0}^∞, q_j ≥ 1,   (3.5.28)

where l_fin denotes the space of finite sequences. The involution "∗" is now the usual complex conjugation: f = (f_j)_{j=0}^∞ ↦ f^∗ = (f̄_j)_{j=0}^∞. Accordingly, it is also convenient to change the notation of vectors in Theorem 3.5.2 from ϕ to f.

Let us specify the choice of the weight q. The condition (3.5.25) of the theorem means that the matrix K = (K_{jk})_{j,k=0}^∞, where K_{jk} = s_{j+k}, is positive definite. Then

|K_{jk}|² ≤ K_{jj} K_{kk}, i.e., |s_{j+k}|² ≤ s_{2j} s_{2k}, j, k ∈ N_0.   (3.5.29)

Let q = (q_n)_{n=0}^∞, q_n ≥ 1, be a sequence such that

C(s, q) := Σ_{n=0}^∞ s_{2n} q_n^{−1} < ∞.   (3.5.30)


Using (3.5.29) and (3.5.30), we get

0 ≤ Σ_{j,k=0}^∞ s_{j+k} f_k f̄_j ≤ Σ_{j,k=0}^∞ |s_{j+k}||f_k||f_j| ≤ Σ_{j,k=0}^∞ √(s_{2j} s_{2k}) |f_k||f_j|
= Σ_{j,k=0}^∞ √(s_{2j} q_j^{−1}) √(s_{2k} q_k^{−1}) (|f_j| q_j^{1/2})(|f_k| q_k^{1/2})
≤ (Σ_{j,k=0}^∞ s_{2j} s_{2k} q_j^{−1} q_k^{−1})^{1/2} (Σ_{j,k=0}^∞ |f_k|² q_k |f_j|² q_j)^{1/2}
= (Σ_{j=0}^∞ s_{2j} q_j^{−1}) (Σ_{j=0}^∞ |f_j|² q_j) = C(s, q) ||f||²_{l_2(q)}, f ∈ l_fin.   (3.5.31)

Passing here to the limit from the finite sequences l_fin to l_2(q), we obtain the chain

l_2(q^{−1}) ⊃ l_2 ⊃ l_2(q) ⊃ l_fin, q = (q_n)_{n=0}^∞, q_n ≥ 1, C(s, q) = Σ_{n=0}^∞ s_{2n}/q_n < ∞.   (3.5.32)

We fix a weight q for which the condition (3.5.32) is fulfilled (the sequence (s_n), n ∈ N_0, is considered given). Then the chain of spaces (3.5.32) plays the role of the first of the chains (3.5.13). As already mentioned, the matrix K = (K_{jk})_{j,k=0}^∞, where K_{jk} = s_{j+k}, is a kernel K of the discrete variable j ∈ N_0; we will denote it by S = K. It is positive definite, since the condition (3.5.3) is fulfilled: it coincides with (3.5.25).

The next step in the scheme of Theorem 3.5.2 is the construction of a chain (3.5.17). Here we require that the imbedding H_+ ⊂ G_+ be of Hilbert–Schmidt type with G_+ = l_2(q), where q satisfies (3.5.32); that is, the space H_+ = l_2(p) must be imbedded in l_2(q) quasi-nuclearly. This is so for p = (p_n)_{n=0}^∞, p_n ≥ q_n, if

Σ_{n=0}^∞ q_n p_n^{−1} < ∞.   (3.5.33)

Thus, as the chain (3.5.17) we take the following one:

(l_fin)′ ⊃ l_2(p^{−1}) ⊃ l_2(q^{−1}) ⊃ l_2 ⊃ l_2(q) ⊃ l_2(p) ⊃ l_fin,   (3.5.34)


where the weight q is chosen so that the condition (3.5.32) is fulfilled, and the weight p so that the condition (3.5.33) is fulfilled. Let us fix the weight p = (p_n)_{n=0}^∞.

According to Theorem 3.5.2, the necessary chain (3.5.18) is constructed:

H_{−,S} ⊃ H_S ⊃ H_{+,S} ⊃ l_fin,   (3.5.35)

where we put K = S.

Let us first prove the theorem in the case of non-degeneracy of the kernel K = S, i.e., when the inequality (3.5.25) holds with the sign ">" for every finite sequence (f_n)_{n=0}^∞ ∈ l_fin that is not equal to zero (that is, f_n ≠ 0 for at least one n). The proof is completely based on Theorem 3.5.2. In this case, the expression

(f, g)_S := Σ_{j,k=0}^∞ s_{j+k} f_k ḡ_j, f, g ∈ l_fin,   (3.5.36)

is a scalar (and not merely a quasi-scalar) product on the space l_fin. The completion of this space with respect to (3.5.36) will be denoted by H_S.

The linear space C^∞ of all sequences f = (f_n)_{n=0}^∞, f_n ∈ C, will be denoted by l. Let δ_n = (0, …, 0, 1, 0, …), n ∈ N_0, be the δ-sequence ("1" in the n-th position); then for each f ∈ l_fin we have f = Σ_{n=0}^∞ f_n δ_n. In this way, a chain of linear spaces

l ⊃ H_S ⊃ l_fin, ∀f ∈ l_fin: f = (f_n)_{n=0}^∞ = Σ_{n=0}^∞ f_n δ_n,   (3.5.37)

is constructed. In the case of a non-degenerate kernel S, the chains (3.5.35) and (3.5.34) give the following two chains of Hilbert spaces:

(l_2(p))_{−,S} ⊃ H_S ⊃ l_2(p) ⊃ l_fin,   (3.5.38)

l = (l_fin)′ ⊃ l_2(p^{−1}) ⊃ l_2 ⊃ l_2(p) ⊃ l_fin,   (3.5.39)

where the weight p satisfies the requirements (3.5.33) and (3.5.30).


Consider the linear expression in the space l given by the matrix

J = ⎡0 0 0 0 …⎤
    ⎢1 0 0 0 …⎥
    ⎢0 1 0 0 …⎥
    ⎢0 0 1 0 …⎥
    ⎣… … … … …⎦;  J: (Jf)_k = f_{k−1}, k ∈ N_0, (Jf)_0 = 0; f ∈ l.   (3.5.40)

This expression is a "creation"-type operator. The relation

J δ_n = δ_{n+1}, n ∈ N_0,   (3.5.41)

holds true for the δ-sequences. The restriction of J to l_fin ⊂ l_2 is not a Hermitian operator in the space l_2, but it is Hermitian in H_S, which is endowed with the inner product (3.5.36). Indeed, for f, g ∈ l_fin we have

(Jf, g)_S = Σ_{j,k=0}^∞ s_{j+k}(Jf)_k ḡ_j = Σ_{j,k=0}^∞ s_{j+k} f_{k−1} ḡ_j = Σ_{j,k=0}^∞ s_{j+k+1} f_k ḡ_j = Σ_{j,k=0}^∞ s_{j+k} f_k \overline{(Jg)_j} = (f, Jg)_S   (3.5.42)
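The computation (3.5.42) can be replayed numerically with a finite Hankel section: if the vectors f, g vanish beyond the section, the identity (Jf, g)_S = (f, Jg)_S holds exactly, while J is visibly non-Hermitian with respect to the ordinary l_2 product. The Gaussian moments below are illustrative test data:

```python
import math
import numpy as np

def moment(n):
    # Test moments of the standard Gaussian measure.
    return float(math.prod(range(n - 1, 0, -2))) if n % 2 == 0 else 0.0

M = 7
S = np.array([[moment(j + k) for k in range(M)] for j in range(M)])

def ip_S(f, g):
    # (f, g)_S = sum_{j,k} s_{j+k} f_k conj(g_j), formula (3.5.36).
    return np.conjugate(g) @ S @ f

def shift(f):
    # (Jf)_k = f_{k-1}, (Jf)_0 = 0, as in (3.5.40).
    return np.concatenate(([0.0 + 0j], f[:-1]))

rng = np.random.default_rng(1)
f = np.zeros(M, dtype=complex)
g = np.zeros(M, dtype=complex)
f[:M - 1] = rng.normal(size=M - 1) + 1j * rng.normal(size=M - 1)
g[:M - 1] = rng.normal(size=M - 1) + 1j * rng.normal(size=M - 1)

hermitian_gap_S = ip_S(shift(f), g) - ip_S(f, shift(g))         # 0 in H_S
hermitian_gap_l2 = np.vdot(g, shift(f)) - np.vdot(shift(g), f)  # nonzero in l_2
```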

(here it is taken into account that, due to the definition (3.5.40), f_{−1} = g_{−1} = 0). Thus, the operator l_fin ∋ f ↦ Jf ∈ l_fin is Hermitian with a dense domain in the space H_S. It has equal defect numbers because J f̄ = \overline{Jf}, where f̄ is the complex conjugate of f ∈ l. Let A be some fixed self-adjoint extension of it in the space H_S. Thus, we have

Af = Jf, f ∈ l_fin ⊂ D(A) ⊂ H_S.   (3.5.43)

Let us now apply Lemma 2.7.5. As the chains (2.7.35) we take (3.5.38) and (3.5.39). Let ξ_λ ∈ (l_2(p))_{−,S} be a generalized eigenvector of the operator A with respect to the rigging (3.5.38). In this case, according to Theorem 3.5.5, we have

(ξ_λ, Af)_S = λ(ξ_λ, f)_S, λ ∈ R, f ∈ l_fin.   (3.5.44)

Denote

P(λ) = Uξ_λ ∈ l_2(p^{−1}) ⊂ l, P(λ) = (P_n(λ))_{n=0}^∞, P_n(λ) ∈ C.

3.5 The Classical Moment Problem

159

Using (2.7.36), the expression (3.5.44) is rewritten as

$$ (P(\lambda), Af)_{l_2} = \lambda (P(\lambda), f)_{l_2}, \quad \lambda \in \mathbb{R},\ f \in l_{\mathrm{fin}}. \tag{3.5.45} $$

The corresponding Fourier transform has the form

$$ H_S \supset l_{\mathrm{fin}} \ni f \mapsto (Ff)(\lambda) = \hat{f}(\lambda) = (f, P(\lambda))_{l_2} \in L_2(\mathbb{R}, d\rho(\lambda)). \tag{3.5.46} $$

Let us calculate $P(\lambda)$. The operator $A$ is defined in (3.5.43) by means of the expression (3.5.40); therefore, (3.5.45) gives, $\forall f \in l_{\mathrm{fin}}$,

$$ \sum_{n=0}^{\infty} \lambda P_n(\lambda)\bar{f}_n = \lambda (P(\lambda), f)_{l_2} = (P(\lambda), Af)_{l_2} = (P(\lambda), Jf)_{l_2} = (J^+ P(\lambda), f)_{l_2} = \sum_{n=0}^{\infty} P_{n+1}(\lambda)\bar{f}_n. \tag{3.5.47} $$

Thus,

$$ \lambda P_n(\lambda) = P_{n+1}(\lambda), \quad n \in \mathbb{N}_0. $$

Without loss of generality, we assume $P_0(\lambda) = 1$, $\lambda \in \mathbb{R}$. Now the last two formulas give

$$ P_n(\lambda) = \lambda^n, \quad n \in \mathbb{N}_0. \tag{3.5.48} $$

Therefore, the Fourier transform (3.5.46) takes the form

$$ H_S \supset l_{\mathrm{fin}} \ni f \mapsto (Ff)(\lambda) = \hat{f}(\lambda) = \sum_{n=0}^{\infty} f_n \lambda^n \in L_2(\mathbb{R}, d\rho(\lambda)), \tag{3.5.49} $$

and the Parseval equality reads

$$ (f, g)_S = \int_{\mathbb{R}} \hat{f}(\lambda)\,\overline{\hat{g}(\lambda)}\, d\rho(\lambda), \quad f, g \in l_{\mathrm{fin}}. \tag{3.5.50} $$

To construct the Fourier transform (3.5.46) and to check the formulas (3.5.47)–(3.5.50), it is necessary to verify that $o = \delta_0 \in l_{\mathrm{fin}}$ is a strong cyclic vector for the operator $A$ in the sense of the rigging (3.5.21). This is indeed so, since from (3.5.41) we have $A^p o = J^p \delta_0 = \delta_p$.


The Parseval equality (3.5.50) leads directly to the representation (3.5.24): according to (3.5.48) and (3.5.49), $\hat{\delta}_n = \lambda^n$ and $\hat{\delta}_0 = 1$; from (3.5.36) it follows that

$$ s_n = (\delta_n, \delta_0)_S = (\hat{\delta}_n, \hat{\delta}_0)_{L_2(\mathbb{R}, d\rho(\lambda))} = \int_{\mathbb{R}} \lambda^n\, d\rho(\lambda), \quad n \in \mathbb{N}_0. \tag{3.5.51} $$

Therefore, the first part of the theorem is proved in the case of a non-degenerate kernel $K = S$. To prove its second part, it is necessary to show that the condition (3.5.26) implies the self-adjointness of the operator $A$ in the space $H_S$. To do this, consider the Hermitian operator defined on the linear set, invariant under its action,

$$ D = l_{\mathrm{fin}} = \operatorname{span}\{\delta_n \mid n \in \mathbb{N}_0\}, $$

by the expression $A\delta_n = \delta_{n+1}$; for $p \geq 1$ we then have $A^p\delta_n = \delta_{n+p}$. According to (3.5.36), the space $H_S$ carries the norm $\|f\|_S = \sqrt{(f, f)_S}$. Therefore, for each $\delta_n \in D$ we have $\|A^p\delta_n\|_S^2 = \|\delta_{n+p}\|_S^2 = s_{2n+2p}$. Since

$$ \sum_{p=1}^{\infty} \frac{1}{\|A^p\delta_n\|^{1/p}} = \sum_{p=1}^{\infty} \frac{1}{\sqrt[2p]{s_{2n+2p}}}, $$

the quasi-analyticity of the class $C\{\|A^p\delta_n\|\}$ is equivalent to the quasi-analyticity of the class $C\{\sqrt{s_{2n+2p}}\}$, and, by the properties of quasi-analytic classes, the latter is equivalent to that of $C\{\sqrt{s_{2p}}\}$. Here we used Theorem 2.6.8, Lemma 2.6.7 and the formula (2.6.21). Therefore, Theorem 3.5.6 can be considered proved in this case.

Let us consider the case of a degenerate kernel $S = K$, that is, when the left-hand expression in the inequality (3.5.25) is equal to zero for some non-zero sequence $f \in l_{\mathrm{fin}}$. Consider the (linear) set of all such sequences. Since the inequality (3.5.29) holds true, this set is closed in the space $l_2(q)$ constructed according to (3.5.32), i.e., in the space $G_+$ of the chain (3.5.17). Thus, we arrive at the general situation of the quasi-scalar product considered in Theorem 3.5.2. The expression $(f, g)_S$, i.e., (3.5.36), is now a quasi-scalar product, but the operator $J$ is, by the equality (3.5.42), Hermitian with respect to the quasi-scalar product (3.5.4). According to Lemma 3.5.4, this operator maps equivalence classes into classes, and the proof of the theorem is repeated starting from the definition (3.5.43). $\square$

Remark 3.5.7 Let the sufficient conditions of Theorem 3.5.6 be fulfilled and let the moment sequence have the representation (3.5.24) with some measure $d\rho(\lambda)$ on the axis $\mathbb{R}$. Suppose that the support of this measure consists of more than a finite number of points. Then the inequality (3.5.25) is strict (the sign "$>$"), that is, the corresponding product (3.5.36) is not merely quasi-scalar but scalar.
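For a logarithmically convex sequence $m_p$, the Denjoy–Carleman theorem characterizes quasi-analyticity of $C\{m_p\}$ by the divergence of $\sum_p m_p^{-1/p}$; for the class $C\{\sqrt{s_{2p}}\}$ above this is Carleman's classical condition $\sum_p s_{2p}^{-1/(2p)} = \infty$. The sketch below merely illustrates the dichotomy numerically with two illustrative growth rates (Gaussian-type even moments $s_{2p} = (2p-1)!!$, which satisfy the condition, versus a much faster artificial growth, which does not); neither example comes from the text.

```python
import math

def carleman_partial_sum(log_s2, P):
    """Partial sum of Carleman's series Σ_{p≥1} s_{2p}^{-1/(2p)}, given p ↦ log s_{2p}."""
    return sum(math.exp(-log_s2(p) / (2 * p)) for p in range(1, P + 1))

# log (2p-1)!! = log (2p)! - p log 2 - log p!   (even moments of the standard Gaussian)
log_gauss = lambda p: math.lgamma(2 * p + 1) - p * math.log(2.0) - math.lgamma(p + 1)
# an artificial, much faster-growing moment sequence
log_fast = lambda p: (2.0 * p) ** 2

div_small = carleman_partial_sum(log_gauss, 200)
div_large = carleman_partial_sum(log_gauss, 2000)   # keeps growing: divergence
conv_large = carleman_partial_sum(log_fast, 2000)   # stays bounded: convergence
```

In the first case the partial sums grow without bound (determinacy of the moment problem); in the second they converge, so quasi-analyticity fails.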


Indeed, for the support $\alpha$ of the measure $d\rho(\lambda)$ we have the relation

$$ \int_{\mathbb{R}} |F_n(\lambda)|^2\, d\rho(\lambda) = \int_{\alpha} |F_n(\lambda)|^2\, d\rho(\lambda) = \int_{\alpha} \Bigl|\sum_{j=0}^{n} f_j \lambda^j\Bigr|^2 d\rho(\lambda) = \sum_{j,k=0}^{n} s_{j+k} f_j \bar{f}_k \geq 0, \quad \text{where } F_n(\lambda) = \sum_{j=0}^{n} f_j \lambda^j, \tag{3.5.52} $$

following from (3.5.24) and (3.5.25). Suppose that the product $(\,\cdot\,, \cdot\,)_S$ is quasi-scalar. Then there exist $n \in \mathbb{N}_0$ and $f_0, \ldots, f_n$, not all equal to zero, for which the sign "$\geq$" in (3.5.52) becomes "$=$", and for the corresponding polynomial $F_n(\lambda)$ we obtain $F_n(\lambda) = 0$, $\lambda \in \alpha$, due to (3.5.52). But the non-zero polynomial $F_n(\lambda)$ has at most $n$ distinct zeros, while $\alpha$ contains more than $n$ points. Thus, we obtain a contradiction. $\square$

One Generalization of the Moment Problem

Let us consider a generalization of Theorem 3.5.6 which also uses the spectral theory of an operator, but of one somewhat more complicated than the operator generated by the matrix (3.5.40). Namely, instead of (3.5.40), let us consider a Jacobi matrix with real elements $a_n \neq 0$, $b_n$, $c_n$:

$$ J = \begin{pmatrix} b_0 & c_0 & 0 & 0 & \cdots \\ a_0 & b_1 & c_1 & 0 & \cdots \\ 0 & a_1 & b_2 & c_2 & \cdots \\ 0 & 0 & a_2 & b_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}; \tag{3.5.53} $$

$$ (Jf)_n = a_{n-1} f_{n-1} + b_n f_n + c_n f_{n+1}, \quad n \in \mathbb{N}_0 \ (f_{-1} := 0); $$

$$ J\delta_n = a_n \delta_{n+1} + b_n \delta_n + c_{n-1} \delta_{n-1}, \quad n \in \mathbb{N}; \qquad J\delta_0 = a_0 \delta_1 + b_0 \delta_0. $$

Let $(s) = (s_{j,k})_{j,k=0}^{\infty}$, $s_{j,k} \in \mathbb{R}$, be a fixed non-negative definite matrix. Denote by $S$ the Hilbert space constructed from the scalar product

$$ l_{\mathrm{fin}} \ni f, g \longmapsto (f, g)_S = \sum_{j,k=0}^{\infty} s_{j,k} f_k \bar{g}_j \in \mathbb{C}. \tag{3.5.54} $$

We assume that the matrix (3.5.53) is Hermitian with respect to the scalar product (3.5.54), i.e.,

$$ (Jf, g)_S = (f, Jg)_S, \quad f, g \in l_{\mathrm{fin}}. \tag{3.5.55} $$


As in Sect. 3.1, we introduce the operator $\dot{J}$ and its self-adjoint extension $A$ in the space $S$. The generalized eigenvector expansion of this operator $A$ can be performed using the construction of Sect. 3.1. Namely, we introduce the chain (3.5.38), but in its construction we use a sequence $q = (q_n)_{n=0}^{\infty}$ for which $\sum_{n=0}^{\infty} s_{n,n} q_n^{-1} < \infty$ (instead of the condition $\sum_{n=0}^{\infty} s_{2n} q_n^{-1} < \infty$); the chain (3.5.39) is as before. In this case, the vector $o = \delta_0 \in l_{\mathrm{fin}}$ is again a strong cyclic vector, because, $\forall n \in \mathbb{N}_0$, the linear space spanned by the vectors $\delta_0, J\delta_0, \ldots, J^n\delta_0$ coincides with the subspace $\{f \in l_{\mathrm{fin}} \mid f_{n+1} = f_{n+2} = \ldots = 0\}$. To prove this statement, it is sufficient to check that for any vector

$$ f = (f_0, f_1, \ldots, f_n, 0, 0, \ldots), \quad f_k \neq 0,\ k = 0, 1, \ldots, n, $$

there exist numbers $x_0, x_1, \ldots, x_n \in \mathbb{C}$ such that

$$ x_0 \delta_0 + x_1 J\delta_0 + \cdots + x_n J^n \delta_0 = f. \tag{3.5.56} $$

From the last two formulas in (3.5.53) we see that the equality (3.5.56), in coordinate form, is a system of $n+1$ linear equations in the unknowns $x_0, x_1, \ldots, x_n$ with a triangular matrix having the values $1, a_0, a_0 a_1, \ldots, a_0 a_1 \cdots a_{n-1}$ on the main diagonal. Since all $a_n \neq 0$, the system (3.5.56) has a solution. $\square$

Therefore, we can claim that (3.5.45) and (3.5.46) hold true. Now $d\rho(\lambda)$ is the spectral measure of the operator $A$ and

$$ P(\lambda) = (P_n(\lambda))_{n=0}^{\infty} \in l_2(p^{-1}) \subset l $$

b0 P0 (λ) + a0 P1 (λ) = λP0 (λ), λ ∈ R.

(3.5.57)

Setting .P0 (λ) = 1, we conclude that .Pn (λ) is a real polynomial in .λ of degree n ∈ N. The Fourier transform and the Parseval equality now have a form similar to (3.5.49) and (3.5.50),

.

S ⊃ lfin e f → (Ff )(λ) = fˆ(λ) = f

.

(f, g)S = R

∞ E

fn Pn (λ) ∈ L2 (R, dρ(λ)),

n=0

ˆ dρ(λ), f, g ∈ lfin . fˆ(λ)g(λ)

(3.5.58)

3.5 The Classical Moment Problem

163

Since .δˆn (λ) = Pn (λ), .n ∈ N0 , then from (3.5.58), we get a representation of the form (3.5.51): f ˆ ˆ .sj,k = (δk , δj )S = (δk , δj )L2 (R,dρ(λ)) = Pj (λ)Pk (λ) dρ(λ), j, k ∈ N0 . R

(3.5.59) We will look for the form of the matrix .(s), which is non-negatively defined and satisfies the Eq. (3.5.55). Using the adjoint .J + , we rewrite the equality in the form (Jf, g)S =

∞ E

⎛ ⎞ ∞ ∞ E E ⎝ (J + sj,· )k fk ⎠ g¯ j sj,k (Jf )k g¯ j = j =0

j,k=0

=

∞ E

(ck−1 sj,k−1 + bk sj,k + ak sj,k+1 )fk g¯ j ,

j,k=0 .

(f, J g)S =

∞ E j,k=0

=

∞ E

j =0

sj,k fk (J g)j =

∞ E j =0

⎛ ⎞ ∞ E fk ⎝ (J + s·,j )j g¯ j ⎠ j =0

(cj −1 sj −1,k + bj sj,k + aj sj +1,k )fk g¯ j ,

∀f, g ∈ lfin .

j,k=0

Using the equalities (3.5.55) and the arbitrariness of f and g, we obtain cj −1 sj −1,k + bj sj,k + aj sj +1,k = ck−1 sj,k−1 + bk sj,k + ak sj,k+1 .

s−1,k = sj,−1 = 0,

j, k ∈ N0 .

(3.5.60)

The equality (3.5.60) is a partial difference equation with two-indexed unknowns sj,k in which every five points .(j − 1, k), .(j, k), .(j + 1, k), .(j, k − 1), .(j, k + 1), are connected and the corresponding “exterior” coefficients .aj , .ak are non-zero. Therefore, proceeding step by step with increasing k, we fine a solution .sj,k for .k ≤ j using the initial data .sj,0 and the “boundary” conditions .sj,−1 = 0, .j ∈ N0 . Similarly, .sj,k , for .j ≤ k, are found, using initial data .s0,k and .s−1,k = 0, ∞ .j ∈ N0 . Taking into account the symmetry .(s) = (sj,k ) j,k=0 and coefficients of the Eq. (3.5.60) (with respect to the replacement .(j, k) → (k, j )), we conclude that the above procedure gives a symmetric solution .sj,k of (3.5.60) satisfying the prescribed initial data .sj,0 = s0,j and the boundary conditions .sj,−1 = s−1,j = 0, .j ∈ N0 . Thus, we have obtained the generalization of the first part of Theorem 3.5.6. .


Theorem 3.5.8 Let $J$ be a Jacobi matrix of the form (3.5.53) and let $(s) = (s_{j,k})_{j,k=0}^{\infty}$ be a non-negative definite matrix whose elements $s_{j,k} \in \mathbb{R}$ form a solution of the partial difference Eq. (3.5.60) with the indicated zero boundary conditions. Then for $s_{j,k}$ the representation

$$ s_{j,k} = \int_{\mathbb{R}} P_j(\lambda) P_k(\lambda)\, d\rho(\lambda), \quad j, k \in \mathbb{N}_0, \tag{3.5.61} $$

holds true, where the $P_j(\lambda)$ are polynomials of degree $j$ determined as the solution of the recurrence relations (3.5.57), and $d\rho(\lambda)$ is a finite Borel measure on the real axis $\mathbb{R}$. Conversely, every sequence (3.5.61) is such a solution of the Eq. (3.5.60).

It is only necessary to explain the last assertion. Substituting (3.5.61) into (3.5.60) and using (3.5.57), we see that the left- and right-hand sides of the equality (3.5.60) are both equal to

$$ \int_{\mathbb{R}} \lambda P_j(\lambda) P_k(\lambda)\, d\rho(\lambda), $$

i.e., (3.5.60) holds true. Substituting (3.5.61) into (3.5.54) with $g = f$, we conclude, $\forall f \in l_{\mathrm{fin}}$,

$$ \sum_{j,k=0}^{\infty} s_{j,k} f_k \bar{f}_j = \int_{\mathbb{R}} \Bigl|\sum_{j=0}^{\infty} f_j P_j(\lambda)\Bigr|^2 d\rho(\lambda) \geq 0, $$

i.e., the non-negative definiteness of the matrix $(s)$ also holds. $\square$
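The step-by-step procedure described before the theorem can be implemented directly: march (3.5.60) in $k$ from the data $s_{j,0}$ and compare with the integral representation (3.5.61). The sketch below uses an illustrative discrete measure and randomly chosen coefficients, not data from the text.

```python
import numpy as np

def polys(a, b, c, lam, M):
    """P_0..P_{M-1} at the points lam, from the recurrence (3.5.57)."""
    P = np.zeros((M, len(lam)))
    P[0] = 1.0
    P[1] = (lam - b[0]) / a[0]
    for n in range(1, M - 1):
        P[n + 1] = ((lam - b[n]) * P[n] - c[n - 1] * P[n - 1]) / a[n]
    return P

def march(a, b, c, s0, M):
    """Solve (3.5.60) column by column in k, from s_{j,0} = s0[j] and s_{j,-1} = 0:
       a_k s_{j,k+1} = c_{j-1}s_{j-1,k} + (b_j - b_k)s_{j,k} + a_j s_{j+1,k} - c_{k-1}s_{j,k-1}."""
    s = np.zeros((M, M))
    s[:, 0] = s0
    for k in range(M - 1):
        for j in range(M - 1 - k):         # triangle j + k <= M - 1
            val = (b[j] - b[k]) * s[j, k] + a[j] * s[j + 1, k]
            if j > 0:
                val += c[j - 1] * s[j - 1, k]
            if k > 0:
                val -= c[k - 1] * s[j, k - 1]
            s[j, k + 1] = val / a[k]
    return s

M = 6
rng = np.random.default_rng(2)
a = 0.5 + rng.random(M)
c = 0.5 + rng.random(M)
b = rng.standard_normal(M)
lam = np.array([-2.0, -0.5, 0.3, 1.0, 2.5])
w = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
P = polys(a, b, c, lam, M)
s_int = (P * w) @ P.T              # s_{j,k} = ∫ P_j P_k dρ, the representation (3.5.61)
s_pde = march(a, b, c, s_int[:, 0], M)
```

On the computed triangle the marched solution reproduces the integral representation, which is exactly the content of Theorem 3.5.8.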

3.6 Some Other Generalizations of the Moment Problem

An Algebraic Approach to the Classical Moment Problem

The main problem here will be to define a convolution "$*$" conveniently related to the Eq. (3.5.60). For this, auxiliary constructions are necessary. Let us first introduce the convolution "$*$" on the linear space $l_{\mathrm{fin}}$, namely,

$$ l_{\mathrm{fin}} \ni f, g \longmapsto (f * g)_n = \sum_{j+k=n} f_j g_k, \quad f * g \in l_{\mathrm{fin}}. \tag{3.6.1} $$


As a result, the space $l_{\mathrm{fin}}$ becomes a commutative algebra $\mathcal{A}$ with unit $\delta_0$ and involution

$$ l_{\mathrm{fin}} \ni f = (f_n)_{n=0}^{\infty} \longmapsto \bar{f} := (\bar{f}_n)_{n=0}^{\infty} \in l_{\mathrm{fin}}, $$

where $\bar{f}_n$ denotes the complex conjugate. Every sequence $s = (s_n)_{n=0}^{\infty}$, $s_n \in \mathbb{R}$, can be interpreted as a linear functional $\sigma$ on $l_{\mathrm{fin}}$, i.e.,

$$ l_{\mathrm{fin}} \ni f \longmapsto \sigma(f) := \sum_{n=0}^{\infty} s_n f_n \in \mathbb{C}. \tag{3.6.2} $$

If the sequence $s$ is a moment sequence, then this functional is positive (more exactly, non-negative), i.e., $\sigma(f * \bar{f}) \geq 0$, $f \in l_{\mathrm{fin}}$. This fact follows from the equality

$$ \sigma(f * \bar{g}) = \sum_{n=0}^{\infty} s_n \Bigl(\sum_{j+k=n} f_j \bar{g}_k\Bigr) = \sum_{j,k=0}^{\infty} s_{j+k} f_j \bar{g}_k, \quad f, g \in l_{\mathrm{fin}}, \tag{3.6.3} $$

and the assumption (3.5.25). The expressions (3.5.36) and (3.6.3) imply that

$$ (f, g)_S = \sigma(f * \bar{g}), \quad f, g \in l_{\mathrm{fin}} = \mathcal{A}, \tag{3.6.4} $$

i.e., the quasi-scalar product on the algebra $\mathcal{A}$ is given in the standard way by means of a positive functional and the involution (the so-called GNS construction: Gelfand I. M., Naimark M. A., Segal I.). The operator $J$ defined in (3.5.40) has, in this algebraic approach, the form

$$ S \supset \mathcal{A} = l_{\mathrm{fin}} \ni f \longmapsto Jf = \delta_1 * f \in l_{\mathrm{fin}} = \mathcal{A}. \tag{3.6.5} $$
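In this classical case the convolution is the Cauchy product, and both the positivity $\sigma(f * \bar f) \geq 0$ and the GNS identity $(f,g)_S = \sigma(f * \bar g)$ can be checked directly. A minimal sketch with an illustrative toy measure (not from the text):

```python
import numpy as np

def sigma(s, h):
    """σ(h) = Σ_n s_n h_n, the functional (3.6.2)."""
    return sum(sn * hn for sn, hn in zip(s, h))

# moment sequence of a toy discrete measure
lam = np.array([-1.0, 0.0, 1.5])
w = np.array([0.3, 0.4, 0.3])
s = [float(np.sum(w * lam ** n)) for n in range(10)]

rng = np.random.default_rng(3)
f = rng.standard_normal(4) + 1j * rng.standard_normal(4)
g = rng.standard_normal(4) + 1j * rng.standard_normal(4)

pos = sigma(s, np.convolve(f, np.conj(f)))     # σ(f * f̄): should be real and ≥ 0
gns = sigma(s, np.convolve(f, np.conj(g)))     # σ(f * ḡ)
inner = sum(s[j + k] * f[k] * np.conj(g[j])    # (f, g)_S of (3.5.36)
            for j in range(4) for k in range(4))
```

`np.convolve` computes exactly the Cauchy product (3.6.1), and the two scalar quantities agree as in (3.6.4).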

Thus, we can say that the results of Sect. 3.1 have been obtained by applying the theory of generalized eigenvector expansions to the operator (3.6.5) (or to any of its self-adjoint extensions) acting in the Hilbert space obtained via the GNS construction (3.6.4).

Let us combine this algebraic approach with the generalization of the moment problem described above. Let the matrix (3.5.53) be given. Then from the Eq. (3.5.57) it is easy to conclude that $P_n(\lambda)$ is a polynomial of degree $n$ with real coefficients. For every fixed $n \in \mathbb{N}_0$, the transformation of the functions $1, \lambda, \lambda^2, \ldots, \lambda^n$ into $P_0(\lambda) = 1, P_1(\lambda), P_2(\lambda), \ldots, P_n(\lambda)$ ($\lambda \in \mathbb{R}$) is effected by a real lower triangular matrix $C$ having the leading coefficients $1, a_0^{-1}, (a_0 a_1)^{-1}, \ldots, (a_0 a_1 \cdots a_{n-1})^{-1}$ on the main diagonal:

$$ P_j(\lambda) = \sum_{k=0}^{n} C_{j,k} \lambda^k, \quad j = 0, 1, 2, \ldots, n; \qquad C = (C_{j,k})_{j,k=0}^{n}, \tag{3.6.6} $$


where $C_{j,k} = 0$ for $k > j$. This matrix is invertible, and for the inverse matrix $D = (D_{j,k})_{j,k=0}^{n} = C^{-1}$ the equality (3.6.6) holds true with $\lambda^j$ and $P_k(\lambda)$ in place of $P_j(\lambda)$ and $\lambda^k$, respectively. The relations (3.6.6) for the matrices $D$ and $C$ show that every real polynomial $F(\lambda) = \sum_{k=0}^{n} a_k \lambda^k$ can be represented in the form

$$ F(\lambda) = \sum_{j=0}^{n} b_j P_j(\lambda), \quad \lambda \in \mathbb{R}, \tag{3.6.7} $$

and the coefficients $b_j$ in (3.6.7) depend uniquely and linearly on $F$ (namely, $C^+ b = a$ and $b = D^+ a$, where $b = (b_0, b_1, \ldots, b_n)$, $a = (a_0, a_1, \ldots, a_n)$, and "$+$" denotes the transposed matrix).

In the real linear space $\mathcal{P}$ of all real-valued polynomials $F(\lambda), G(\lambda), \ldots$, $\lambda \in \mathbb{R}$, we introduce a scalar product by setting

$$ (F, G)_{\mathcal{P}} = \sum_{j=0}^{\infty} f_j g_j, \quad \text{where } F(\lambda) = \sum_{j=0}^{\infty} f_j P_j(\lambda),\quad G(\lambda) = \sum_{j=0}^{\infty} g_j P_j(\lambda). \tag{3.6.8} $$

The expression (3.6.8) determines a real scalar product, because finite sequences of coordinates $(f_j)_{j=0}^{\infty}$ and polynomials $F(\lambda)$ are in one-to-one correspondence. For each $F \in \mathcal{P}$ we have

$$ f_j = (F, P_j)_{\mathcal{P}}, \quad j \in \mathbb{N}_0; \qquad F(\lambda) = \sum_{j=0}^{\infty} (F, P_j)_{\mathcal{P}}\, P_j(\lambda), \quad \lambda \in \mathbb{R}. \tag{3.6.9} $$

(Note that $f_j = 0$ for $j$ greater than the degree of $F$.) The sequence $(P_j(\lambda))_{j=0}^{\infty}$ forms an orthogonal basis in $\mathcal{P}$.

Let us fix $l \in \mathbb{N}_0$ and denote by $q_{j,k}(l)$ the solution of the partial difference Eq. (3.5.60) with the initial data $q_{j,0}(l) = q_{0,j}(l) = \delta_{j,l}$.

Proposition 3.6.1 The representation

$$ q_{j,k}(l) = (P_j(\lambda)P_k(\lambda), P_l(\lambda))_{\mathcal{P}}, \quad j, k, l \in \mathbb{N}_0; \qquad q_{j,k}(l) = 0 \ \text{for}\ l > j + k, \tag{3.6.10} $$

holds true.

Indeed, let us set $s_{j,k} = q_{j,k}(l)$ in the left-hand side of (3.5.60). Using the formula (3.5.57), we get, $\forall j, k \in \mathbb{N}_0$,

$$ c_{j-1} q_{j-1,k}(l) + b_j q_{j,k}(l) + a_j q_{j+1,k}(l) = (\lambda P_j(\lambda) P_k(\lambda), P_l(\lambda))_{\mathcal{P}}. \tag{3.6.11} $$

Similarly, for the right-hand side of (3.5.60) we get

$$ c_{k-1} q_{j,k-1}(l) + b_k q_{j,k}(l) + a_k q_{j,k+1}(l) = (P_j(\lambda)\, \lambda P_k(\lambda), P_l(\lambda))_{\mathcal{P}}. $$

This equality, together with (3.6.11), implies that $q_{j,k}(l)$ is a solution of the Eq. (3.5.60). The last equality in (3.6.10) holds true because the degree of the polynomial $P_j(\lambda)P_k(\lambda)$ equals $j + k$. Using the orthogonality relation for the $P_j(\lambda)$, we conclude that

$$ q_{j,0}(l) = (P_j(\lambda) \cdot 1, P_l(\lambda))_{\mathcal{P}} = \delta_{j,l}, \quad j, l \in \mathbb{N}_0. \qquad \square $$
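The coefficients $q_{j,k}(l)$ are simply the linearization coefficients of the product $P_j P_k$ in the basis $(P_l)$, and the claims of Proposition 3.6.1 can be checked numerically by passing through the monomial basis via the triangular matrix $C$ of (3.6.6). A sketch with illustrative recurrence coefficients (not from the text):

```python
import numpy as np

def mono_C(a, b, c, n):
    """C[j, k] with P_j(λ) = Σ_k C[j, k] λ^k, built from the recurrence (3.5.57)."""
    C = np.zeros((n + 1, n + 1))
    C[0, 0] = 1.0
    if n >= 1:
        C[1, 0], C[1, 1] = -b[0] / a[0], 1.0 / a[0]
    for m in range(1, n):
        sh = np.roll(C[m], 1)          # coefficients of λ·P_m ...
        sh[0] = 0.0                    # ... (deg P_m = m < n, so nothing wraps around)
        C[m + 1] = (sh - b[m] * C[m] - c[m - 1] * C[m - 1]) / a[m]
    return C

def q(a, b, c, j, k, l):
    """q_{j,k}(l) = (P_j P_k, P_l)_P: coefficient of P_l in the expansion of P_j P_k."""
    n = j + k
    if l > n:
        return 0.0                     # deg(P_j P_k) = j + k: the last claim of (3.6.10)
    C = mono_C(a, b, c, n)
    prod = np.convolve(C[j, :j + 1], C[k, :k + 1])   # monomial coefficients of P_j P_k
    return np.linalg.solve(C.T, prod)[l]             # back to the P-basis

a = np.array([2.0, 1.0, 0.5, 1.5, 1.0])
b = np.array([0.0, 1.0, -1.0, 0.5, 0.3])
c = np.array([1.0, 2.0, 1.0, 0.5, 1.0])
```

One can then observe $q_{j,0}(l) = \delta_{j,l}$, the symmetry in $j, k$, the leading coefficients $1, a_0^{-1}, (a_0 a_1)^{-1}, \ldots$ on the diagonal of $C$, and the expansion $\sum_l q_{j,k}(l) P_l = P_j P_k$ of (3.6.19) at a sample point.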

We can now introduce a convolution on the linear space $l_{\mathrm{fin}}$ by setting

$$ l_{\mathrm{fin}} \ni f, g \longmapsto (f * g)_n := \sum_{j,k=0}^{\infty} f_j g_k\, q_{j,k}(n) = \sum_{j,k=0}^{\infty} f_j g_k\, (P_j(\lambda)P_k(\lambda), P_n(\lambda))_{\mathcal{P}}, \quad \forall n \in \mathbb{N}_0. \tag{3.6.12} $$

The right-hand side of (3.6.12) defines a finite sequence, according to the last equality in (3.6.10).

Proposition 3.6.2 The space $l_{\mathrm{fin}}$ with the convolution (3.6.12) is a commutative algebra $\mathcal{A}$ with the unit $\delta_0$ and the involution

$$ \mathcal{A} \ni f = (f_n)_{n=0}^{\infty} \mapsto \bar{f} := (\bar{f}_n)_{n=0}^{\infty} \in \mathcal{A}. $$

This assertion follows immediately from the following observation. We introduce, for $f \in \mathcal{A}$, its Fourier transform

$$ l_{\mathrm{fin}} = \mathcal{A} \ni f \longmapsto \hat{f}(\lambda) = \sum_{j=0}^{\infty} f_j P_j(\lambda) \in \mathcal{P}_{\mathbb{C}}, \tag{3.6.13} $$

where $\mathcal{P}_{\mathbb{C}}$ is the complexification of $\mathcal{P}$ (i.e., the space of all polynomials with complex coefficients).


The mapping (3.6.13) is an isomorphism between the spaces $\mathcal{A}$ and $\mathcal{P}_{\mathbb{C}}$. Under this isomorphism, the convolution "$*$" goes over into the ordinary multiplication of polynomials. Indeed, using (3.6.13) and (3.6.9), we have

$$ (f * g)\hat{\ }(\lambda) = \sum_{n=0}^{\infty} (f * g)_n P_n(\lambda) = \sum_{n=0}^{\infty} \Bigl(\sum_{j,k=0}^{\infty} f_j g_k\, (P_j(\lambda)P_k(\lambda), P_n(\lambda))_{\mathcal{P}}\Bigr) P_n(\lambda) = \sum_{n=0}^{\infty} (\hat{f}(\lambda)\hat{g}(\lambda), P_n(\lambda))_{\mathcal{P}}\, P_n(\lambda) = \hat{f}(\lambda)\hat{g}(\lambda), \quad \lambda \in \mathbb{R},\ \forall f, g \in \mathcal{A}. \tag{3.6.14} $$

Proof of Proposition 3.6.2 The space $\mathcal{P}_{\mathbb{C}}$ is a commutative algebra with the standard multiplication, unit, and involution; $\mathcal{A}$ is isomorphic to $\mathcal{P}_{\mathbb{C}}$ and therefore has the corresponding properties. Thus, the introduced convolution is connected with the Jacobi matrix (3.5.53). Now we have the possibility to reformulate the results of Theorem 3.5.8 in an algebraic form, similar to the classical moment problem. $\square$

Let the Jacobi matrix (3.5.53) be given. We construct the corresponding convolution "$*$" (3.6.12) and the algebra $\mathcal{A} = l_{\mathrm{fin}}$. Each sequence $s = (s_n)_{n=0}^{\infty}$, $s_n \in \mathbb{R}$, can, according to (3.6.2), be interpreted as a functional $\sigma$ on $\mathcal{A}$.

Theorem 3.6.3 Let $s = (s_n)_{n=0}^{\infty}$, $s_n \in \mathbb{R}$, generate a positive functional $\sigma$ on the algebra $\mathcal{A}$, i.e.,

$$ \sigma(f * \bar{f}) = \sum_{n=0}^{\infty} s_n (f * \bar{f})_n = \sum_{n=0}^{\infty} s_n \Bigl(\sum_{j,k=0}^{\infty} f_j \bar{f}_k\, q_{j,k}(n)\Bigr) \geq 0, \quad f \in \mathcal{A}. \tag{3.6.15} $$

Then the representation

$$ s_n = \int_{\mathbb{R}} P_n(\lambda)\, d\rho(\lambda), \quad n \in \mathbb{N}_0, \tag{3.6.16} $$

holds true, where $P_n(\lambda)$ is the solution of the Eq. (3.5.57) and $d\rho(\lambda)$ is a finite Borel measure on $\mathbb{R}$. Conversely, each sequence (3.6.16) generates a positive functional $\sigma$, i.e., the inequality (3.6.15) holds true.


Proof We reduce Theorem 3.6.3 to Theorem 3.5.8; at the same time, this reduction explains the connection between the constructions introduced above. Let a sequence $s$ satisfying the condition (3.6.15) be given. Put

$$ s_{j,k} = \sum_{n=0}^{\infty} s_n\, q_{j,k}(n), \quad j, k \in \mathbb{N}_0 \tag{3.6.17} $$

($\forall j, k$ the sum (3.6.17) is finite, according to Proposition 3.6.1). The sequence (3.6.17) is a solution of the Eq. (3.5.60) with the initial data $s_{j,0} = s_{0,j} = s_j$, $j \in \mathbb{N}_0$. The matrix $(s) = (s_{j,k})_{j,k=0}^{\infty}$ is non-negative definite; that is, for $f \in l_{\mathrm{fin}}$ we have, according to (3.6.15),

$$ \sum_{j,k=0}^{\infty} s_{j,k} f_j \bar{f}_k = \sum_{j,k=0}^{\infty} \Bigl(\sum_{n=0}^{\infty} s_n\, q_{j,k}(n)\Bigr) f_j \bar{f}_k = \sigma(f * \bar{f}) \geq 0. \tag{3.6.18} $$

Therefore, we can apply Theorem 3.5.8 to the sequence $(s)$. The representation (3.5.61), for $k = 0$, then gives (3.6.16).

To prove the converse assertion, we first note that

$$ \sum_{n=0}^{\infty} q_{j,k}(n)\, P_n(\lambda) = P_j(\lambda) P_k(\lambda), \quad \lambda \in \mathbb{R},\ \forall j, k \in \mathbb{N}_0. \tag{3.6.19} $$

Then, for $f \in l_{\mathrm{fin}}$ and $s_n$ of the form (3.6.16), we have

$$ \sum_{n=0}^{\infty} s_n \Bigl(\sum_{j,k=0}^{\infty} f_j \bar{f}_k\, q_{j,k}(n)\Bigr) = \int_{\mathbb{R}} \sum_{j,k=0}^{\infty} f_j \bar{f}_k \Bigl(\sum_{n=0}^{\infty} P_n(\lambda)\, q_{j,k}(n)\Bigr) d\rho(\lambda) = \int_{\mathbb{R}} \Bigl|\sum_{j=0}^{\infty} f_j P_j(\lambda)\Bigr|^2 d\rho(\lambda) \geq 0. \qquad \square $$

Remark 3.6.4 We note that the representation (3.6.16) leads back to the representation (3.5.61), since the solution $s_{j,k}$ of (3.5.60) with the initial data $s_{j,0} = s_{0,j} = s_j$ has the form (3.6.17). Using the representation (3.6.16) and the equality (3.6.19), we arrive at (3.5.61).

Remark 3.6.5 The scalar product (3.5.54), which defines the space $S$, has, $\forall f, g \in \mathcal{A}$, the form $(f, g)_S = \sigma(f * \bar{g})$, i.e., this space is constructed via the GNS construction (see (3.6.18)).


In the case of the classical moment problem, the action of the matrix $J$ on $f \in l_{\mathrm{fin}}$ equals $\delta_1 * f$ (see (3.6.5)). Let us consider the corresponding situation for a general matrix (3.5.53).

Remark 3.6.6 In terms of the Fourier transform (3.6.13), the action of the matrix $J$ is equal to the multiplication by $\lambda$:

$$ (Jf)\hat{\ }(\lambda) = \lambda \hat{f}(\lambda), \quad f \in l_{\mathrm{fin}},\ \lambda \in \mathbb{R}. \tag{3.6.20} $$

Also, we have

$$ a_n = (\lambda P_n(\lambda), P_{n+1}(\lambda))_{\mathcal{P}}, \quad b_n = (\lambda P_n(\lambda), P_n(\lambda))_{\mathcal{P}}, \quad c_n = (\lambda P_{n+1}(\lambda), P_n(\lambda))_{\mathcal{P}}, \quad n \in \mathbb{N}_0. \tag{3.6.21} $$

Thus, for an arbitrary Jacobi matrix (3.5.53), it is possible to write formulas as in the classical self-adjoint case, but using the scalar product (3.6.8) instead of integration with respect to a scalar measure.

The proof of (3.6.20) and (3.6.21) is simple. Indeed, it is sufficient to check (3.6.20) for $f = \delta_n$, $n \in \mathbb{N}_0$. Using the last formula in (3.5.53), the equality $\hat{\delta}_j(\lambda) = P_j(\lambda)$, and (3.5.57), we get

$$ (J\delta_n)\hat{\ }(\lambda) = (a_n\delta_{n+1} + b_n\delta_n + c_{n-1}\delta_{n-1})\hat{\ }(\lambda) = a_n P_{n+1}(\lambda) + b_n P_n(\lambda) + c_{n-1}P_{n-1}(\lambda) = \lambda P_n(\lambda) = \lambda\hat{\delta}_n(\lambda), \quad \lambda \in \mathbb{R}. $$

To prove (3.6.21), it is necessary to multiply (in the scalar sense, in the space $\mathcal{P}$) the equality (3.5.57) by $P_j(\lambda)$. Then (3.6.21) follows from the orthogonality of the system $(P_j(\lambda))_{j=0}^{\infty}$ in $\mathcal{P}$.

Let us return to $\delta_1 * f$.

Remark 3.6.7 The equality

$$ \delta_1 * f = (a_0^{-1} J - a_0^{-1} b_0 \mathbb{1}) f, \quad f \in l_{\mathrm{fin}}, \tag{3.6.22} $$

holds true. It is possible to derive this formula directly using (3.6.12) and calculating $q_{1,k}(n)$ from (3.5.60), but it is more convenient to use the Fourier transform (3.6.13). Taking into account (3.6.14), the equality $\hat{\delta}_1(\lambda) = P_1(\lambda) = a_0^{-1}(\lambda - b_0)$, $\lambda \in \mathbb{R}$ (see (3.5.57)), and (3.6.20), we find that, $\forall f \in l_{\mathrm{fin}}$,

$$ (\delta_1 * f)\hat{\ }(\lambda) = \hat{\delta}_1(\lambda)\hat{f}(\lambda) = a_0^{-1}(\lambda - b_0)\hat{f}(\lambda) = ((a_0^{-1} J - a_0^{-1} b_0 \mathbb{1})f)\hat{\ }(\lambda), \quad \lambda \in \mathbb{R}. $$

This equality gives (3.6.22).
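Both (3.6.20) and the identity (3.6.22) can be verified numerically: apply the tridiagonal action of $J$ to a finite sequence and compare transforms at sample points. All coefficients and sample values below are illustrative, not taken from the text.

```python
import numpy as np

def P_values(a, b, c, lam, N):
    """P_0..P_{N-1} at the points lam, from the recurrence (3.5.57)."""
    P = np.zeros((N, len(lam)))
    P[0] = 1.0
    P[1] = (lam - b[0]) / a[0]
    for n in range(1, N - 1):
        P[n + 1] = ((lam - b[n]) * P[n] - c[n - 1] * P[n - 1]) / a[n]
    return P

def apply_J(a, b, c, f):
    """(Jf)_n = a_{n-1}f_{n-1} + b_n f_n + c_n f_{n+1}: the action of (3.5.53);
       J applied to a finite sequence is one entry longer."""
    N = len(f)
    g = np.zeros(N + 1)
    for n in range(N + 1):
        if n >= 1:
            g[n] += a[n - 1] * f[n - 1]
        if n < N:
            g[n] += b[n] * f[n]
        if n + 1 < N:
            g[n] += c[n] * f[n + 1]
    return g

N = 5
a = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
b = np.array([0.0, 1.0, -1.0, 0.5, 0.2])
c = np.array([2.0, 1.0, 1.0, 0.5, 1.0])
f = np.array([0.3, -1.2, 0.7, 0.1, 0.5])
lam = np.array([-1.3, 0.2, 0.9, 2.1])

Pv = P_values(a, b, c, lam, N + 1)
f_hat = f @ Pv[:N]                     # f̂(λ) = Σ f_n P_n(λ), the transform (3.6.13)
Jf_hat = apply_J(a, b, c, f) @ Pv      # (Jf)^(λ)
rhs = (apply_J(a, b, c, f) - b[0] * np.append(f, 0.0)) / a[0]   # (a0⁻¹J − a0⁻¹b0·1)f
```

`Jf_hat` equals $\lambda \hat f(\lambda)$ at every sample point, and the transform of `rhs` equals $P_1(\lambda)\hat f(\lambda)$, i.e. the transform of $\delta_1 * f$.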


Thus, the equality (3.6.22) shows that the operation $f \mapsto \delta_1 * f$ coincides, up to the affine expression in (3.6.22), with the action of the matrix $J$. Therefore, the proof of Theorem 3.5.8 could be based on the spectral theory of the operator $f \mapsto \delta_1 * f$ instead of $J$.

The Self-Adjointness of Operators and the Uniqueness of Representations

Let us now dwell upon a generalization of the second part of Theorem 3.5.6 to the case when $J$ has the form (3.5.53); that is, we will consider the problem of the uniqueness of the measure $d\rho(\lambda)$ in the representations (3.5.61) and (3.6.16). As we have already recalled, such uniqueness is equivalent to the essential self-adjointness of the operator $l_{\mathrm{fin}} \ni f \mapsto Jf \in l_{\mathrm{fin}}$ (or $l_{\mathrm{fin}} \ni f \mapsto \delta_1 * f \in l_{\mathrm{fin}}$) in the space $S$.

We will start with some simple modifications of the evolution and quasi-analytic criteria of self-adjointness and commutativity considered in Chap. 2, Sect. 2.6. Consider a rigging of the type (3.5.21) of a complex Hilbert space $H$, namely,

$$ H_- \supset H \supset H_+. \tag{3.6.23} $$

Let B be an operator in the space H+ with a dense domain Dom(B) and let B ∗ be its adjoint operator (in the space H+ ). On the interval [0, b), b ≤ ∞, we consider the differential equation ( .

dv dt

)

(t) − B ∗ v(t) = 0,

t ∈ [0, b).

(3.6.24)

We say that a vector-valued function [0, b) e t |→ v(t) ∈ H+ is a strong solution of (3.6.24) if it is strongly continuously differentiable, v(t) ∈ Dom(B ∗ ) and satisfies the equality (3.6.24). It is easy to understand that, instead of (3.6.24), we can demand that the “weak,” equality (( .

dv dt

)

) (t), f H+

− (v(t), Bf )H+ = 0,

f ∈ Dom(B), t ∈ [0, b)

(3.6.25)

holds true. Let us consider B as an operator A acting in the space H, that is, let Dom(A) = Dom(B) and Af = Bf ∈ H+ ⊂ H, f ∈ Dom(A). Suppose, A is Hermitian in the space H. Theorem 3.6.8 For the essential self-adjointness of the operator A, it is sufficient that for some b > 0, both equations ( .

dv dt

)

(t) ± (iB)∗ v(t) = 0,

t ∈ [0, b)

have a unique strong solution of the Cauchy problem.

(3.6.26)

172

3 Jacobi Matrices and the Classical Moment Problem

Proof Note first that the domain Dom(A) is dense in H. Applying Theorem 2.6.1, we assert that to prove the theorem, it is sufficient to check the uniqueness of strong solutions of the Cauchy problem for both equations in the space H, ( .

dv dt

)

(t) ± (iA)∗ v(t) = 0,

t ∈ [0, b).

(3.6.27)

Let, for example, [o, b) e t |→ u(t) ∈ H be a strong solution of the Eq. (3.6.27) with the sign “−” and u(0) = 0. It is necessary to show that u(t) = 0, t ∈ [o, b). To this end, we introduce a continuous operator I : H −→ H+ by the equality (If, ω)H+ = (f, ω)H ,

.

f ∈ H, ω ∈ H+ ,

where I denotes a canonical isometric isomorphism—one of the standard operators of the theory of rigging (see (2.7.13)). The function [0, b) e t |→ v(t) = I u(t) ∈ H+ is strongly continuously differentiable and is a strong solution of Eq. (3.6.26) with the sign “−”. Indeed, using the equality of the type (3.6.25) for B = iA and the definition of I , we conclude ∀f ∈ Dom(A) = Dom(B), ) ) ) ( ( ) du du (t), f − (u(t), iAf )H = I (t), f 0= − (I u(t), iAf )H+ dt dt H H+ . ) (( ) dv (t), f = − (v(t), iBf )H+ , t ∈ [0, ∞). dt H+ ((

∗ From ( dv ) this equality, it follows (for a fixed t) that v(t) ∈ Dom((iB) ) (because dt (t) ∈ H+ ) and, therefore, v(t) is a strong solution of the Eq. (3.6.26) with the sign “−”, for which v(0) = I u(0) = 0. The assumption of Theorem 3.6.8 about the uniqueness of a strong solution implies that v(t) = 0, t ∈ [0, b). But the kernel of the operator I is equal to zero, therefore u(t) = 0, t ∈ [0, b). Thanks to the mentioned Theorem 2.6.1, we get that A is essentially self-adjoint. u n

Remark 3.6.9 Similarly, it is also possible to modify other evolution criteria of the essential self-adjointness. Namely, it is easy to formulate results similar to Theorem 3.6.8, as a generalization of Theorems 2.6.1 and 2.6.6. Let us present for consideration a generalization of the quasi-analytical criterion. As usual, a vector ψ ∈ H+ is called quasi-analytic (with respect to the operator ∞ n B : H −→ H+ ) if ψ ∈ Dom(B n ) and the class C{||B n ψ||H+ } is quasi-analytic. n=0

Theorem 3.6.10 If for B a total set of quasi-analytic vectors in H+ exists, then the Hermitian operator A in the space H is essentially self-adjoint.

3.6 Some Other Generalizations of the Moment Problem

173

Note, that in the case H+ = H, we have the well-known sufficient condition of the essential self-adjointness of the operator A = B (in such a case this condition is also necessary). Proof According to Theorem 3.6.8, it is sufficient to prove the uniqueness of a strong solution of the Cauchy problem for both Eq. (3.6.26). The equality (3.6.26) with the sign “−” means (see (3.6.25)) that (v ' (t), f )H+ = (v(t), ζ Bf )H+ ,

.

Taking f = ψ ∈

∞ n

ζ = +i, f ∈ Dom(B), t ∈ [0, b).

(3.6.28)

Dom(B n ) in (3.6.28), we conclude

n=0

.

d (v(t), ψ)H+ = (v ' (t), ψ)H+ = (v(t), (ζ B)ψ)H+ . dt

Thus, we have a situation similar to the proof of Theorem 2.6.8. We can repeat the corresponding part of the proof of this theorem and conclude that the condition v(0) = 0 implies that v(t) = 0, t ∈ [0, b), i.e., get the uniqueness of the strong solution of the Cauchy problem for (3.6.26). The case of the sign “+” in the Eq. (3.6.26) can be considered similarly. But it is possible to reduce directly our theorem to Theorem 2.6.8. Namely, let ψ ∈ H+ be a quasi-analytic vector for the operator B. Then ψ∈

∞ n

.

Dom(An ),

B n ψ = An ψ

n=0

and C{||B n ψ||H+ } ⊃ C{||An ψ||H },

.

(3.6.29)

since ||f ||H+ ≥ ||f ||H , f ∈ H+ . The inclusion (3.6.29) shows that the class C{||An ψ||H } is quasi-analytic and therefore, the vector ψ is quasi-analytic for A in H. The space H+ is dense (as a set) in H, thus, A has a total set of quasi-analytic vectors and, according to mentioned Theorem 2.6.8, it is essentially self-adjoint. u n Remark 3.6.11 By using Theorem 2.6.8, it can be easily shown that for a semibounded operator A from Theorem 2.6.10, for its essential self-adjointness ot os 1/2 sufficient to demand that the class C{||B n ψ||H+ } must be quasi-analytic for a total ∞ n Dom(An ). set of vectors ψ ∈ n=0

Similarly, a variant of Theorem 2.6.10 is obtained for the semi-bounded operators 1/2 1/2 A1 and A2 . Here, it is possible to replace classes C{||B1n ψ||H+ }, C{||B2n ψ||H+ } and

174

3 Jacobi Matrices and the Classical Moment Problem 1/2

C{||(B1 | (B2 − zI))n ψ||H+ } with corresponding classes where the norm || · ||H+ 1/2

is replaced with || · ||H+ . Pass now to give an application of Theorem 3.6.10 to the problem of uniqueness of the measure dρ(λ) in the representations (3.5.61) or (3.6.16). Theorem 3.6.12 The measure dρ(λ) in the representation (3.5.61) (or (3.6.16)) is defined by a matrix (s) = (sj,k )∞ j,k=0 uniquely if the class C{An2n

.

/

max sj,j },

(3.6.30)

j =0,1,...,n

is quasi-analytic. Here An =

.

max {1, |aj |, |bj |, |cj |}. n ∈ N0

(3.6.31)

j =0,1,...,n

Proof We will use Theorem 3.6.10 connected with rigging (3.6.23). Now, the space H in this rigging is equal to the space S as in (3.5.54), and H+ = l2 (q), where q = (qn )∞ n=0 ,

.

(the condition

∞ E n=0

qn = (1 + n)1+ε sn,n ,

ε>0

(3.6.32)

sn,n qn−1 < ∞ is now fulfilled, therefore, H ⊃ H+ (see the

beginning of Section)). We will show that the condition (3.6.30) implies that each δk = (δj,k )∞ j =0 ∈ H, k ∈ N0 is a quasi-analytic vector for the operator B with the domain Dom(B) = lfin in the space H+ . The operator B is generated by the matrix J given in (3.5.53). The set {δ0 , δ1 , . . . , } is total in H+ , therefore, our assertion gives the essential self-adjointness of the operator lfin e|→ Af := Jf ∈ lfin in the space H = S. First, we note that the structure (3.5.53) of the matrix J and the condition (3.6.31) give the following: let f ∈ lfin and |fJ | ≤ oj , j ∈ N0 , where 0 < o0 ≤ o1 ≤ . . ., then |(Jf )j | = |aj −1 fj −1 + bj fj + cj fj +1 | ≤ 3Aj oj +1 ,

.

j ∈ N0 .

Repeating this estimate and using 3Aj oj +1 as oj for (Jf )j , we find that j ∈ N0 , |(J 2 f )j | = |(J (Jf ))j | ≤ 3Aj (3Aj +1 oj +2 ) = 32 Aj Aj +1 oj +2 ,

.

and so on in the same way for n = 3, 4, . . ., |(J n f )j | ≤ 3n Aj . . . Aj +n−1 oj +n ) ≤ 3n Aj +n−1 oj +n ,

.

j ∈ N0 .

(3.6.33)

3.6 Some Other Generalizations of the Moment Problem

175

Let k ∈ N0 be fixed and f = δk ; we put o0 = o1 = . . . = 1. Then (J n δk )j = 0 for j > k + n + 1 and for (3.6.33), using (3.6.32) and (3.6.31), we have ||J n δk ||2H+ = ||J n δk ||2l2 (q) = ≤

k+n+1 E j =0

.

k+n+1 E

|(J n δk )j |2 qj

j =0

(3n Anj+n+1 )2 (1 + j )1+ε sj,j

≤ (k + n + 2)(3n An2k+2n )2 (k + n + 2)1+ε 2(k+1+n)

≤ (k + n + 2)2+ε 32n Ak+1+n

=: (k + n + 2)2+ε 32n m2k+1+n ,

max

max

j =0,1,...,k+n+1

j =0,1,...,k+1+n

sj,j

sj,j

n ∈ N. (3.6.34)

As already mentioned in Chaps. 1 and 2, the quasi-analyticity of C{ml+m } and C{mn } are equivalent. Therefore, the quasi-analyticity of (3.6.30) and the estimate (3.6.34) give that each vector δk , k ∈ N0 is quasi-analytic for the operator u n B. If, instead of the general Theorem 3.6.10, we apply Theorem 3.6.8 and use a special form of the operator B (the form of the Jacobi matrix), then a somewhat finer statement is obtained. Proposition 3.6.13 The result of Theorem 3.6.12 holds true if the class C{Ann

.

/

max sj,j }

(3.6.35)

j =0,1,...,n

is quasi-analytic. Proof We use the equality (3.6.29) from the proof of Theorem 3.6.10. Let B be defined as above by the matrix J , (see (3.5.53)), let H+ = l2 (q), where q is given by (3.6.32) and ψ ∈ lfin . Then equalities ∞

E (n) dn (v(t), ψ) = vj (t)ψ¯ j qj = (v(t), (ζ J )n ψ)l2 (q) l (q) 2 dt n j =0

.

=

∞ E

vj (t)((ζ J )n ψ)qj = ζ¯ n

j =0

t ∈ [0, b),

∞ E ((J + )n (qv(t)))j ψ¯ j , j =0

∀n ∈ N (3.6.36)

176

3 Jacobi Matrices and the Classical Moment Problem

hold true. Here J + denotes the adjoint matrix of J , qv(t) = (qj vj (t))∞ j =0 . From (3.6.36), we conclude, that for w(t) = qv(t), the equality wj (t) = ((J + )n w(t))j ,

.

(n)

t ∈ [0, b1 ), b1 ∈ (0, b), j ∈ N0

(3.6.37)

holds true. The vector-valued function [0, b1 ) e t |→ v(t) ∈ l2 (q) is strongly continuous and, therefore, is bounded. Then for some C > 0, ∀t ∈ [0, b1 ), we have ∞ E .

|wj (t)|2 qj−1 =

j =0

∞ E j =0

|wj (t)| ≤

/

|vj (t)|2 qj−1 = ||v(t)||2l2 (q) ≤ C;

Cqj ,

(3.6.38)

j ∈ N0 .

For j ∈ N0 , we introduce a numbers oj = (C max qk )1/2 ,

0 < o 0 ≤ o1 ≤ . . . .

.

k=0,1,...,j

Similarly to (3.6.33), from (3.6.38), we get |((J + )n w(t))j | ≤ 3n Anj+n−1 oj +n ,

t ∈ [0, b1 ], j ∈ N0 , n ∈ N.

Therefore, (3.6.37) and (3.6.32) give, for these $j$ and $n$,

$$|w_j^{(n)}(t)|\le 3^nA^n_{j+n-1}o_{j+n}\le 3^nA^{j+n}_{j+n}o_{j+n} = C^{1/2}(1+j+n)^{\frac{1+\varepsilon}{2}}\,3^nA^{j+n}_{j+n}\sqrt{\max_{k=0,1,\dots,j+n}s_{k,k}} =: m_{j+n},\qquad t\in[0,b_1]. \tag{3.6.39}$$

It is sufficient to prove that, under the condition (3.6.35), every solution $v(t)$ with $v(0) = 0$ is equal to zero on $[0,b_1]$. The equality $v(0) = 0$ gives, by (3.6.37), that $w_j^{(n)}(0) = 0$, $j\in\mathbb N_0$, $n\in\mathbb N$. Fix $j$ and consider the estimate (3.6.39). The quasi-analyticity of the class (3.6.35) gives, as before, the quasi-analyticity of the class $C\{m_n\}$; therefore $w_j(t) = 0$, $t\in[0,b_1]$. Since $j\in\mathbb N_0$ is arbitrary, $w(t) = 0$ and hence $v(t) = 0$ for all $t\in[0,b_1]$. The case of the sign "+" in Eq. (3.6.26) is treated similarly. □

Remark 3.6.14 The results of this section can be generalized to more complicated matrices $J$. That is, it is possible to obtain a representation of the form (3.6.26) for matrices (3.5.53) with complex coefficients. Moreover, instead of tri-diagonal Jacobi matrices, one can take matrices with an arbitrary finite number of non-zero diagonals, that is, consider the case of difference expressions of order higher than two.


3.7 Connections with the Theory of Jacobi Matrices

The space $H_K$ is the completion of $H_+ = l_2(p_j)$ (or of $l_{2,0}$) with respect to the scalar product

$$\langle u,v\rangle = \sum_{j,k=0}^{\infty}s_{j+k}u_k\bar v_j.$$

Herewith, it is associated with the moment problem. For this, if necessary, an identification is first performed, i.e., when $\sum_{j,k=0}^{N}s_{j+k}\xi_k\bar\xi_j = 0$ for some $(\xi_0,\dots,\xi_N)\not\equiv 0$; this case is in fact simpler, since then the measure is concentrated in a finite number of points. Therefore, the absence of degeneracy is assumed in what follows. Also, without loss of generality, we can assume that $s_0 = 1$.

Let us carry out in $H_K$ the orthogonalization procedure for the sequence $\{\delta_0,\delta_1,\dots,\delta_k,\dots\}$. Thus, we get a sequence of vectors $\{p_0 = \delta_0,\ p_1,\dots\}\subset l_{2,0}$ which is an orthonormal basis in $H_K$. The orthogonalization procedure is unambiguous if each vector $p_n\perp\{\delta_0,\dots,\delta_{n-1}\}$ is taken in the form $p_n = c_0\delta_0+\dots+c_n\delta_n$, where all $c_j$ are real and $c_n>0$; that is exactly what we will do. Since the absence of degeneracy is assumed, the basis $p_0,p_1,\dots$ is infinite. In principle, it is possible to write out the usual formulas for the transition from the basis $\delta_0,\delta_1,\dots$ to the basis $p_0,p_1,\dots$, but they will not be used. Let us only introduce the operator $F$ in $H_K$ which transfers $\{\delta_0,\delta_1,\dots\}$ into $\{p_0,p_1,\dots\}$, i.e., $F\delta_k = p_k$, $k\in\mathbb N_0$. The operator $F^{-1}$ exists on $l_{2,0}\subset H_K$, but, generally speaking, it is not bounded.

Sometimes it is convenient to use another interpretation of the space $H_K$ for the moment problem. Let us associate with each $u\in l_{2,0}$ the polynomial $\tilde u(\lambda) = \sum_{j=0}^{\infty}u_j\lambda^j$ and regard $H_K$ as the space of all polynomials $\tilde u(\lambda)$ for which the non-degenerate scalar product

$$\langle\tilde u,\tilde v\rangle = \langle u,v\rangle = \sum_{j,k=0}^{\infty}s_{j+k}u_k\bar v_j,\qquad u,v\in l_{2,0}, \tag{3.7.1}$$

is defined. If the representation (3.5.24) with some measure $d\sigma(\lambda)$ is already available for the sequence $s_j$, then the scalar product (3.7.1) can be written in the form

$$\langle\tilde u,\tilde v\rangle = \int_{\mathbb R}\tilde u(\lambda)\overline{\tilde v(\lambda)}\,d\sigma(\lambda). \tag{3.7.2}$$

We introduce the polynomials

$$P_j(\lambda) = \tilde p_j(\lambda),\quad j\in\mathbb N_0;\qquad P_0(\lambda) = \tilde\delta_0(\lambda)\equiv 1,$$
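The orthogonalization of the δ-sequences described above can be carried out numerically. The following sketch is not from the book; it assumes, as an example, the moments of the measure $d\sigma(\lambda) = d\lambda/2$ on $[-1,1]$. It builds the Gram matrix $K_{jk} = s_{j+k}$ of the δ-basis and reads the coefficients of $p_0, p_1, \dots$ off its Cholesky factorization; the resulting coefficient matrix is lower triangular with positive diagonal, which matches the normalization $c_n > 0$ fixed in the text.

```python
import numpy as np

# Moments of the assumed example measure dσ(λ) = dλ/2 on [-1, 1]:
# s_n = 1/(n+1) for even n, 0 for odd n.
N = 5
s = np.array([1.0 / (n + 1) if n % 2 == 0 else 0.0 for n in range(2 * N)])

# Gram matrix of the δ-basis in H_K: <δ_k, δ_j> = s_{j+k} (a Hankel matrix).
K = np.array([[s[j + k] for k in range(N)] for j in range(N)])

# Cholesky factorization K = L L^T; the rows of C = L^{-1} are the coefficients
# of p_n = c_0 δ_0 + ... + c_n δ_n, lower triangular with c_n > 0.
L = np.linalg.cholesky(K)
C = np.linalg.inv(L)

# Orthonormality check: <p_m, p_n> = (C K C^T)_{mn} = δ_{mn}.
G = C @ K @ C.T
print(np.allclose(G, np.eye(N)))  # True
```

The same computation works for any non-degenerate moment sequence, as long as the Hankel matrix stays numerically positive definite.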


which are orthonormal in the scalar product (3.7.1) or (3.7.2) and form a basis in the space $H_K$. It is clear from what has been said that $H_K$ can be interpreted as the space $l_2$ if the basis $\{p_0,p_1,\dots\}$ (or $\{P_0(\lambda),P_1(\lambda),\dots\}$) is used. In any case, for $u\in l_{2,0}$, we can write

$$u = \sum_{k=0}^{\infty}u_k\delta_k = \sum_{j=0}^{\infty}\hat u_jp_j,\qquad\text{hence}\qquad \hat u_j = \sum_{k=0}^{\infty}F_{jk}u_k,$$

where $(F_{jk})_{j,k=0}^{\infty}$ is the matrix of the operator $F$; $u_j$ is recovered from $\hat u_j$ by means of the formally inverse matrix $F^{-1}$. Since the $\delta_k$ and $p_k$ belong to $l_{2,0}$, the matrices $F$ and $F^{-1}$ transfer finite sequences into finite sequences.

Let us write the operator $Bu = T^+u$, $u\in l_{2,0}$, in the basis $\{p_0,p_1,\dots\}$. Obviously it acts as

$$\hat u\mapsto FT^+F^{-1}\hat u,\qquad \hat u = (\hat u_0,\hat u_1,\dots)\in l_{2,0}.$$

Let us calculate the elements of the matrix $FT^+F^{-1}$. It is easier to do this not directly but by the following considerations. Since $T^+\delta_k = \delta_{k+1}$, we have

$$\widetilde{(T^+\delta_k)}(\lambda) = \tilde\delta_{k+1}(\lambda) = \lambda\tilde\delta_k(\lambda).$$

Thus, $H_K$ has an interpretation as a space of polynomials with the scalar product (3.7.2), under which the operator $B$ becomes the operator of multiplication by $\lambda$. But it is known that such an operator can also be realized as the action of a Jacobi matrix (see Sect. 3.3). The elements of this matrix are calculated by formulas (3.3.1), where the polynomials $P_j(\lambda)$ coincide with those just introduced.

Theorem 3.7.1 Let a non-degenerate moment sequence $s_j$, $j\in\mathbb N_0$, be given. We construct the space $H_K$ with the kernel $K_{jk} = s_{j+k}$, $j,k\in\mathbb N_0$, and pass in this space from the basis $\delta_0,\delta_1,\dots$ to the orthonormal basis $p_0,p_1,\dots$ described above. The operator $Bu = T^+u$, $u\in l_{2,0}$, written in the basis $p_0,p_1,\dots$, is the action of a Jacobi matrix, whose elements can be calculated by the formulas

$$a_j = \langle T^+p_j,p_{j+1}\rangle = \langle\lambda P_j(\lambda),P_{j+1}(\lambda)\rangle,\qquad b_j = \langle T^+p_j,p_j\rangle = \langle\lambda P_j(\lambda),P_j(\lambda)\rangle,\qquad j\in\mathbb N_0.$$
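As an illustration of this construction, the following numerical sketch (an assumed example, not taken from the book) computes the Jacobi entries $a_j = \langle\lambda P_j, P_{j+1}\rangle$ and $b_j = \langle\lambda P_j, P_j\rangle$ directly from a moment sequence. For the moments of $d\sigma(\lambda) = d\lambda/2$ on $[-1,1]$, the $P_j$ are the normalized Legendre polynomials, so one expects $b_j = 0$ and $a_j = (j+1)/\sqrt{(2j+1)(2j+3)}$.

```python
import numpy as np

# Moments of dσ(λ) = dλ/2 on [-1, 1]: s_n = 1/(n+1) for even n, 0 for odd n.
N = 5
s = np.array([1.0 / (n + 1) if n % 2 == 0 else 0.0 for n in range(2 * N + 1)])

H  = np.array([[s[j + k]     for k in range(N)] for j in range(N)])  # <λ^k, λ^j>
H1 = np.array([[s[j + k + 1] for k in range(N)] for j in range(N)])  # <λ^{k+1}, λ^j>

# Rows of C are the coefficients of the orthonormal polynomials P_0, ..., P_{N-1}.
C = np.linalg.inv(np.linalg.cholesky(H))

# Matrix of multiplication by λ in the basis P_j: entry (r, c) equals <λP_c, P_r>,
# so the sub-diagonal holds a_j and the main diagonal holds b_j.
A = C @ H1 @ C.T
a, b = np.diag(A, k=-1), np.diag(A)
print(a, b)
```

In exact arithmetic `a[0]` equals $1/\sqrt3$ here; the Hankel matrices involved become badly conditioned for large $N$, so this direct approach is only a small-scale sanity check.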


Conversely, let some Jacobi matrix (3.1.3) be given. We construct its system of polynomials of the first kind $\{P_0(\lambda),P_1(\lambda),\dots\}$, and let $d\sigma(\lambda)$ be one of its spectral measures. Let

$$\lambda^n = c_0^{(n)}P_0(\lambda)+\dots+c_n^{(n)}P_n(\lambda),\qquad n\in\mathbb N_0.$$

Then

$$s_n = c_0^{(n)} = \int_{\mathbb R}\lambda^n\cdot 1\,d\sigma(\lambda),\qquad n\in\mathbb N_0,$$

is a non-degenerate moment sequence. If, using the first part of the theorem, we construct the Jacobi matrix from it, then we recover the initial matrix.

Since the solution of the moment problem reduces to the spectral theory of the operator $B$, and Sects. 3.1-3.4 describe the construction of the spectral theory of this very operator, only written in a different basis, all the results stated there carry over to the moment problem. In particular, Theorem 3.4.3 gives a description of all solutions of the moment problem, that is, of all spectral measures $d\sigma(\lambda)$ of Sect. 2.4.

Finally, we show that the Carleman criterion (3.5.26) for the determinacy of the moment problem is a consequence of the criterion (3.1.9) of self-adjointness of the difference operator. Indeed, from the construction of the polynomials of the first kind $P_j(\lambda)$ it is clear that $P_j(\lambda) = \frac{1}{a_0\cdots a_{j-1}}\lambda^j+\cdots$; therefore, taking into account their orthogonality and normalization with respect to each spectral measure $d\sigma(\lambda)$ and the formula (3.3.1), we obtain

$$a_j = \int_{\mathbb R}\lambda P_j(\lambda)P_{j+1}(\lambda)\,d\sigma(\lambda) = \int_{\mathbb R}\Bigl(\frac{\lambda^{j+1}}{a_0\cdots a_{j-1}}+\lambda(\dots)\Bigr)P_{j+1}(\lambda)\,d\sigma(\lambda) = \frac{1}{a_0\cdots a_{j-1}}\int_{\mathbb R}\lambda^{j+1}P_{j+1}(\lambda)\,d\sigma(\lambda)$$
$$\le \frac{1}{a_0\cdots a_{j-1}}\biggl(\int_{\mathbb R}\lambda^{2(j+1)}\,d\sigma(\lambda)\biggr)^{1/2}\biggl(\int_{\mathbb R}P^2_{j+1}(\lambda)\,d\sigma(\lambda)\biggr)^{1/2} = \frac{1}{a_0\cdots a_{j-1}}\sqrt{s_{2(j+1)}},$$


i.e., $a_0\cdots a_j\le\sqrt{s_{2(j+1)}}$, $j\in\mathbb N_0$. Let the condition (3.5.26) be fulfilled. Then, from the inequality just obtained, it follows that

$$\sum_{j=0}^{\infty}\frac{1}{\sqrt[j]{a_0\cdots a_j}} = \infty. \tag{3.7.3}$$

If we now use the general inequality

$$\sum_{n=1}^{\infty}\sqrt[n]{q_1\cdots q_n}\le e\sum_{n=1}^{\infty}q_n,\qquad q_n\ge 0,\ n\in\mathbb N,$$

we get that the series $\sum_{j=0}^{\infty}\frac{1}{a_j}$ diverges. According to Theorem 3.1.3, the difference operator $L$ is self-adjoint, that is, the moment problem is determinate.
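The general inequality used in the last step is Carleman's inequality. A quick numerical sanity check (not a proof; the sequence below is an arbitrary random choice):

```python
import numpy as np

# Carleman's inequality:  Σ_{n≥1} (q_1 ··· q_n)^{1/n}  ≤  e · Σ_{n≥1} q_n,  q_n ≥ 0.
rng = np.random.default_rng(0)
q = rng.random(1000)

# Geometric means (q_1 ··· q_n)^{1/n} via cumulative sums of logarithms.
geo = np.exp(np.cumsum(np.log(q)) / np.arange(1, q.size + 1))
lhs, rhs = geo.sum(), np.e * q.sum()
print(lhs <= rhs)  # True
```

The constant $e$ in the inequality is sharp, so no smaller constant would pass such a check for all sequences.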

Bibliographical Notes The main information of this chapter is taken from the seventh chapter of the book by Berezansky Yu. M. [21]. The presentation is motivated by the works of Akhiezer N. I., Krein M. G., and Krasnosel'skii M. A. [2, 188]. The main results used here are published in the works of Berezansky Yu. M., Kondratiev Yu. G., Naimark M. A., and Glazman I. M. [17, 124, 179, 227, 295]. A close approach to the theory of Jacobi matrices is found in [3, 260]. The works of Akhiezer N. I. [2, 3] are the beginning of, and the reason for, the research presented in this chapter. These studies received a modern description in the works of Berezansky Yu. M. [17-23, 43, 44, 50], Kostyuchenko A. G., Sargsyan I. S. [169], and Shohat J. A., Tamarkin J. D. [260]. The initial works on these issues were the papers by Krein M. G. [179, 180, 185, 186]. Important applications are in the works of Krasnosel'skii M. A., Vainikko G. M., Zabreiko P. P., Rutitskii J. B., Stetsenko V. Ya., Lifshitz E. A., and Sobolev A. V. [173, 174]. There is also information about the eigenfunctions expansion of differential and other operators by Gelfand I. M., Kostyuchenko A. G. [111]. General orthogonal functions are described by Glazman I. M., Naiman P. B. [124]. General problems of the theory of extensions of symmetric densely defined operators to self-adjoint ones are treated in the works of Gorbachuk M. L. and Gorbachuk V. I. [128, 129], Derkach V. A., Malamud M. M. [77, 80], Kochubei A. N. [165], Krein M. G. [177, 178, 181, 185, 187], Krein M. G., Krasnosel'skii M. A. [188], and Kuzhel A. V. [193]. The application of monotonic sequences to the moment problem was studied by Nudel'man A. A. [230] and Krein M. G., Nudel'man A. A. [189]. General information can be found in the works of Dunford N., Schwartz J. T. [95, 236].


Generalizations, in particular those concerning Jacobi matrices related to the moment problem, can be found in the works of Berezansky Yu. M. [26, 30, 31], Berezansky Yu. M., Mierzejewski D. A., Shmoish M. [45, 49], Derevyagin M. S., Derkach V. A. [78], Derkach V. A., Malamud M. M. [76], Devinatz A. [81], Gesztesy F., Simon B. [120], Manakov S. V. [205], Schmüdgen K. [255], Szafraniec F. H. [275], and Wouk A. [295]. The question of the uniqueness of the moment representation is solved with the help of the works of Carleman T. [64], Nelson E. [225], and Nussbaum A. E. [231, 232]. An interesting direction related to the moment problem is the so-called truncated moment problem. We do not touch on this direction in our book and refer to the most informative works in it, for example, Krein M. G. [70, 182].

Chapter 4

The Strong Moment Problem

In this chapter, we present the main results concerning the strong moment problem; more precisely, the strong classical moment problem, in which one studies the possibility of representing a sequence of real numbers $(s_n)$ in the form $s_n = \int_{\mathbb R}\lambda^n\,d\rho(\lambda)$ for all integers $n$, i.e., $n\in\mathbb Z$.

It is known that various formulations of the moment problem are investigated using the method of the generalized eigenvectors expansion. With this approach, first of all, it is necessary to represent the moment sequence by applying the theory of the generalized eigenvectors expansion to the corresponding operators. For such vectors, we write a simple equation corresponding to the problem under consideration: the solution of such an equation gives the form of the representation, and the corresponding Parseval equality immediately gives the desired representation. After that, we pass from the moment problem to block tri-diagonal matrices (of Jacobi type). The spectral measure of such a matrix gives the measure of the moment representation, and the corresponding spectral theory of such a matrix provides additional information about the moment problem under consideration.

The structure of the chapter is as follows. In the first section, we recall the results on the generalized eigenvectors expansion which are necessary for what follows. In Sect. 4.2 we prove the main representation theorem and give some conditions for the uniqueness of the representing measure. Sections 4.3 and 4.4 are devoted to the construction and study of block matrices related to the strong moment problem. It should be noted that our matrices $J$ of Jacobi-Laurent type have a tri-diagonal block structure. The main results of these sections are Theorems 4.3.5 and 4.3.11. Our Jacobi-Laurent matrix $J$ is symmetric and has an algebraic inverse $J^{-1}$ which is also block tri-diagonal with appropriate properties. An additional problem is the internal description of such matrices $J$ and $J^{-1}$.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_4


Section 4.3 presents the theory of Jacobi-Laurent block matrices, including direct and inverse spectral problems. The construction is the same as for the classical Jacobi matrices and uses the generalized eigenvectors expansion. In Sect. 4.4, the Jacobi-Laurent matrices $J$ are considered in the case when $J$ generates only a Hermitian, not self-adjoint, operator. We construct a corresponding theory similar to the case of classical Jacobi matrices. But we do not present the theory of the description of all self-adjoint extensions in the original Hilbert space and do not give such a description in full, because the construction is very similar to the case of classical Jacobi matrices described in the second chapter. Section 4.5 is devoted to the reconstruction of $s_n$, $n\in\mathbb Z$, or of the initial measure $d\rho(\lambda)$, from the Jacobi-Laurent matrix $J$, and to establishing a connection of this measure with the spectral measure of the self-adjoint operator generated by $J$. From this section it becomes clear why the theory of self-adjoint extensions of Jacobi-Laurent matrices provides a description of all solutions of the strong moment problem.

4.1 Preliminaries to the Strong Moment Problem

Let $H$ be a separable Hilbert space and let $A$ be a self-adjoint operator defined on $\mathrm{Dom}(A)$ in $H$. Consider an equipment of the space $H$:

$$H_-\supset H\supset H_+\supset D, \tag{4.1.1}$$

such that $H_+$ is a Hilbert space topologically and quasi-nuclearly embedded into $H$ (we recall that "topologically" means densely and continuously, and "quasi-nuclear" means that the embedding operator is of Hilbert-Schmidt type); $H_-$ is the space dual to $H_+$ with respect to the space $H$; $D$ is a linear topological space, topologically embedded into $H_+$. Recall also that the operator $A$ is called standardly connected with the chain (4.1.1) if $D\subset\mathrm{Dom}(A)$ and the restriction $A\restriction D$ acts from $D$ into $H_+$ continuously (see Chap. 2 for details).

Let us formulate a shortened version of the projection spectral theorem on the generalized eigenvectors expansion, which is convenient for the consideration of the strong moment problem.

Theorem 4.1.1 Let $A$ be a self-adjoint operator defined on a separable Hilbert space $H$ and standardly connected with the chain (4.1.1), where $D$ is also separable. Then there exist an operator-valued function $o(\lambda)$ and a bounded Borel (general) spectral measure $d\sigma(\lambda)$ such that $o(\lambda)$ is weakly measurable, is defined for almost all $\lambda$ on the spectrum $s(A)$ of the operator $A$ in the sense of the spectral measure $d\sigma(\lambda)$, takes values that are non-negative operators acting from $H_+$ into $H_-$, and for every $\lambda$ its Hilbert-Schmidt norm satisfies $\|o(\lambda)\|\le\operatorname{Tr}(o(\lambda)) = 1$.


Here "Tr" denotes the trace of the corresponding operator. The function $o(\lambda)$ and the measure $d\sigma(\lambda)$ give a representation of the expansion of the identity $E$ of $A$, namely,

$$E(\alpha)f = \Bigl(\int_{\alpha}o(\lambda)\,d\sigma(\lambda)\Bigr)f,\qquad \alpha\in\mathcal B(\mathbb R),\quad f\in H_+, \tag{4.1.2}$$

and, for the operator $A$,

$$Af = \Bigl(\int_{s(A)}\lambda\,o(\lambda)\,d\sigma(\lambda)\Bigr)f,\qquad f\in\mathrm{Dom}(A)\cap H_+. \tag{4.1.3}$$

The set of values (range) $\mathrm{Ran}(o(\lambda))\subset H_-$ consists of generalized eigenvectors $\varphi(\lambda)\in H_-$ of the operator $A$ with the corresponding eigenvalue $\lambda$, i.e.,

$$(\varphi(\lambda),Af)_H = \lambda(\varphi(\lambda),f)_H,\qquad \lambda\in\mathbb R,\quad f\in D;\qquad \varphi(\lambda)\ne 0. \tag{4.1.4}$$

In the general case, for the operator $A$ appearing in Theorem 4.1.1, it is possible to construct the generalized eigenvectors expansion of each $f\in H$ by means of the Fourier transform corresponding to the operator $A$. But in the general situation the dimension of the vector $\hat f(\lambda)$ obtained by this Fourier transform depends on the "multiplicity" of the eigenvalue $\lambda$, and the corresponding formulas are not very effective. In some special cases of operators $A$, however, the Fourier transform is very convenient and effectively replaces the formulas (4.1.2) and (4.1.3). Just such a case appeared in Chap. 3, where the existence of a cyclic vector helped the situation (see Theorem 3.5.6). Now the situation will be similar.

Let us introduce the corresponding definition. Let $A$ be a self-adjoint operator in $H$; a vector $q\in H$ is called cyclic if $q\in\mathrm{Dom}(A^n)$, $n\in\mathbb N$. Suppose that the algebraically inverse operator $A^{-1}$ exists (i.e., $A^{-1}Af = f$ for $f\in\mathrm{Dom}(A)$, with $\mathrm{Dom}(A^{-1}) = \mathrm{Ran}(A)$). The cyclic vector $q$ is called doubly cyclic if $q\in\mathrm{Dom}(A^n)$, $n\in\mathbb Z$. The following theorem is required for the proof of Theorem 4.2.1.

Theorem 4.1.2 Let $A$ be a self-adjoint operator for which the conditions of Theorem 4.1.1 are satisfied. Assume that for this operator a cyclic vector $q$ exists (or a doubly cyclic one; in this case we assume that $A^{-1}f\in D$ for $f\in D$ and that the algebraic inverse $A^{-1}$ is defined on $D$) such that $A^nq\in D$, $\forall n\in\mathbb N_0$ (or $\forall n\in\mathbb Z$), and the set of such vectors is total in $D$. Then the spectrum of $A$ is simple, for each $\lambda\in\sigma(A)$ the corresponding generalized eigenvector $\varphi(\lambda)\in H_-$ exists, and the Fourier transform $F$ can be written as

$$D\ni f\mapsto (Ff)(\lambda) = \hat f(\lambda) := (f,\varphi(\lambda))_H\in\mathbb C. \tag{4.1.5}$$


Instead of (4.1.2) and (4.1.3) we have the equivalent representations

$$(f,g)_H = \int_{\mathbb R}\hat f(\lambda)\overline{\hat g(\lambda)}\,d\sigma(\lambda),\qquad \widehat{(Af)}(\lambda) = \lambda\hat f(\lambda),\quad \lambda\in s(A),\ \forall f,g\in D. \tag{4.1.6}$$

By continuity, (4.1.5) can be extended to all $f\in H$, and then $\hat f(\lambda)\in L_2(\mathbb R,d\sigma(\lambda))$. The first equality in (4.1.6) can be extended to $f,g\in H$; it is the Parseval equality. The second equality can be extended to $f\in\mathrm{Dom}(A)$ and shows that our operator is unitarily equivalent to the operator of multiplication by $\lambda$ in the space $L_2(\mathbb R,d\sigma(\lambda))$, defined on the $\hat f(\lambda)$ with $f\in\mathrm{Dom}(A)$.

Proof In the case of a cyclic vector this theorem is proved in Chap. 2. In the case of a doubly cyclic vector $q$ it is necessary to repeat the proof of that theorem; only one place in this repetition needs an explanation. Namely, let $\varphi(\lambda) := P(\lambda)Jf_0\in H_-$ be the vector from the proofs of Theorems 2.11.9 and 2.11.11. In this situation we have $(\varphi(\lambda),q)_H = 0$ (previously $q$ was denoted by $o$). To prove the theorem, it is necessary to show that $\varphi(\lambda) = 0$. This vector is a generalized eigenvector with the eigenvalue $\lambda$, i.e.,

$$(\varphi(\lambda),Af)_H = \lambda(\varphi(\lambda),f)_H,\qquad f\in D.$$

Therefore, $\forall f\in D$,

$$(\varphi(\lambda),f)_H = (\varphi(\lambda),A(A^{-1}f))_H = \lambda(\varphi(\lambda),A^{-1}f)_H, \tag{4.1.7}$$

since, by the conditions of the theorem, $A^{-1}f\in D$. From (4.1.7) we conclude that $\lambda\ne 0$ may be assumed: if $\lambda = 0$, then $(\varphi(\lambda),f)_H = 0$, $f\in D$, i.e., $\varphi(\lambda) = 0$ and the proof is finished. Thus, let $\lambda\ne 0$. Using (4.1.7), we get

$$(\varphi(\lambda),A^{-1}f)_H = \lambda^{-1}(\varphi(\lambda),f)_H,\qquad f\in D. \tag{4.1.8}$$

By iterating (4.1.8) (note that $A^{-1}f\in D$), we have

$$(\varphi(\lambda),A^{-n}f)_H = \lambda^{-n}(\varphi(\lambda),f)_H,\qquad f\in D,\ n\in\mathbb N. \tag{4.1.9}$$

For non-negative powers of $A$ we evidently have

$$(\varphi(\lambda),A^nq)_H = \lambda^n(\varphi(\lambda),q)_H = 0,\qquad n\in\mathbb N_0. \tag{4.1.10}$$

Taking $f = q$ in (4.1.9), we conclude that (4.1.10) is fulfilled for $n\in\mathbb Z$. But by the conditions of the theorem the set $\{A^nq,\ n\in\mathbb Z\}$ is total in $D$. Therefore, in the case of a doubly cyclic vector we have (4.1.10) for $n\in\mathbb Z$ and, hence, $\varphi(\lambda) = 0$. □


Later on, some self-adjointness conditions related to the concept of a quasi-analytic vector are used; let us recall them. For a Hermitian operator $A$ defined on $\mathrm{Dom}(A)$ in $H$, a vector $f\in\bigcap_{n=1}^{\infty}\mathrm{Dom}(A^n)$ is quasi-analytic if

$$\sum_{n=1}^{\infty}\frac{1}{\sqrt[n]{\|A^nf\|_H}} = \infty.$$

4.2 The Solution of the Strong Moment Problem

The solution of the strong moment problem is given by the following theorem.

Theorem 4.2.1 A given sequence of real numbers $s = (s_n)$, $n\in\mathbb Z$, $s_n\in\mathbb R$, admits a representation

$$s_n = \int_{\mathbb R}\lambda^n\,d\rho(\lambda),\qquad n\in\mathbb Z, \tag{4.2.1}$$

with some Borel measure $d\rho(\lambda)$ if and only if it is positive definite, i.e.,

$$\sum_{j,k\in\mathbb Z}s_{j+k}f_j\bar f_k\ge 0 \tag{4.2.2}$$

for every finite sequence of complex numbers $(f_j)$, $j\in\mathbb Z$, $f_j\in\mathbb C$. The measure in the representation (4.2.1) is unique if, in addition to (4.2.2), we have

$$\sum_{n=1}^{\infty}\frac{1}{\sqrt[2n]{s_{2n}}} = \infty. \tag{4.2.3}$$
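A hedged numerical illustration of the positivity condition (not from the book): for the assumed example measure $d\rho(\lambda) = d\lambda$ on $[1,2]$, all moments $s_n$, $n\in\mathbb Z$, exist in closed form, and any finite truncation of the two-sided quadratic form is positive definite.

```python
import numpy as np
from math import log

# Moments of dρ(λ) = dλ on [1, 2], defined for every n ∈ Z:
# s_n = (2^{n+1} - 1)/(n + 1) for n ≠ -1, and s_{-1} = ln 2.
def s(n: int) -> float:
    return log(2.0) if n == -1 else (2.0 ** (n + 1) - 1.0) / (n + 1)

# Truncation of the quadratic form: the matrix (s_{j+k}), j, k = -N, ..., N,
# is a Gram matrix of the functions λ^j in L2([1, 2], dλ), hence positive definite.
N = 2
idx = range(-N, N + 1)
M = np.array([[s(j + k) for k in idx] for j in idx])
print(np.linalg.eigvalsh(M).min() > 0.0)  # True
```

For this measure $s_{2n}^{1/(2n)}$ stays bounded (the support is bounded), so the divergence condition on $\sum 1/\sqrt[2n]{s_{2n}}$ holds as well and the representing measure is unique.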

Proof The necessity of the condition (4.2.2) is obvious. Indeed, if the sequence $s$ has the representation (4.2.1), then for an arbitrary finite sequence $f = (f_k)_{k\in\mathbb Z}$, $f_k\in\mathbb C$, we have

$$\sum_{j,k\in\mathbb Z}s_{j+k}f_j\bar f_k = \int_{\mathbb R}\Bigl|\sum_{n\in\mathbb Z}\lambda^nf_n\Bigr|^2\,d\rho(\lambda)\ge 0. \tag{4.2.4}$$

Denote by $l$ the linear space $\mathbb C^{\infty}$ of sequences $f = (f_j)$, $j\in\mathbb Z$, $f_j\in\mathbb C$, and by $l_{\mathrm{fin}}$ its linear subspace consisting of finite sequences $f = (f_j)$, $j\in\mathbb Z$, i.e., sequences such that $f_j\ne 0$ only for finitely many $j$. Let $\delta_m$, $m\in\mathbb Z$, be


the δ-sequence, i.e., $\delta_m = (\dots,0,1,0,\dots)$ with 1 in the $m$-th position. Then each vector $f\in l_{\mathrm{fin}}$ has the representation $f = \sum_{m\in\mathbb Z}f_m\delta_m$.

Let us consider the linear operator $J$ of the form

$$(Jf)_j = f_{j-1},\qquad j\in\mathbb Z;\qquad \mathrm{Dom}(J) = l_{\mathrm{fin}}. \tag{4.2.5}$$

The operator $J$ is of "creation" type. For the δ-sequences we get

$$J\delta_m = \delta_{m+1},\qquad m\in\mathbb Z. \tag{4.2.6}$$

The operator $J$ is Hermitian with respect to the (quasi-)scalar product

$$(f,g)_S = \sum_{j,k\in\mathbb Z}s_{j+k}f_j\bar g_k,\qquad f,g\in l_{\mathrm{fin}}. \tag{4.2.7}$$

Indeed, as in the case of (3.5.42), we have

$$(Jf,g)_S = \sum_{j,k\in\mathbb Z}s_{j+k}(Jf)_j\bar g_k = \sum_{j,k\in\mathbb Z}s_{j+k}f_{j-1}\bar g_k = \sum_{j,k\in\mathbb Z}s_{j+k+1}f_j\bar g_k$$
$$= \sum_{j,k\in\mathbb Z}s_{j+k}f_j\bar g_{k-1} = \sum_{j,k\in\mathbb Z}s_{j+k}f_j\overline{(Jg)_k} = (f,Jg)_S.$$
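The Hermiticity computation above is a finite re-indexing of sums and can be checked directly. A small sketch, assuming (as an example of my choice, not the book's) the two-sided moments of $d\rho = d\lambda$ on $[1,2]$; finite sequences are stored as dicts index → value:

```python
import numpy as np
from math import log

# Moments s_n = ∫_1^2 λ^n dλ, n ∈ Z (an assumed example measure).
def s(n: int) -> float:
    return log(2.0) if n == -1 else (2.0 ** (n + 1) - 1.0) / (n + 1)

def inner_S(f, g):
    """(f, g)_S = Σ_{j,k} s_{j+k} f_j conj(g_k) for finite two-sided sequences."""
    return sum(s(j + k) * fj * np.conj(gk)
               for j, fj in f.items() for k, gk in g.items())

def J(f):
    """(Jf)_j = f_{j-1}: the two-sided shift."""
    return {j + 1: fj for j, fj in f.items()}

f = {-2: 1.0 + 2.0j, 0: -0.5j, 3: 2.0}
g = {-1: 0.5, 1: 1.0 - 1.0j, 2: -1.0}
print(np.isclose(inner_S(J(f), g), inner_S(f, J(g))))  # True: J is Hermitian
```

The same helper also confirms the norm identity used later in the proof, e.g. $\|J\delta_0\|^2_S = s_2$.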

In the next step we use Theorem 3.5.6; we only need to write $j,k,\dots\in\mathbb Z$ instead of $j,k,\dots\in\mathbb N_0$. Let us sketch the proof of the theorem. As in Chap. 3, we will first assume that the expression (4.2.7) gives a scalar product on $l_{\mathrm{fin}}$; that is, we assume that the given sequence $s = (s_n)$, $n\in\mathbb Z$, is non-degenerate: if $(f,f)_S = 0$ for $f\in l_{\mathrm{fin}}$, then $f = 0$. Consider the operator $J$ (4.2.5). It is Hermitian and defined in the space $H_S$ (the completion of $l_{\mathrm{fin}}$ with respect to the scalar product (4.2.7)), with the domain $\mathrm{Dom}(J) = l_{\mathrm{fin}}$. Moreover, it is real in $H_S$ with respect to the ordinary involution $f = (f_j)\mapsto\bar f = (\bar f_j)$, $j\in\mathbb Z$. Therefore it has equal defect numbers and possesses self-adjoint extensions in $H_S$; we take and fix some such extension $A$. We will apply to this operator $A$ the general results of Sect. 4.1, but at first it is necessary to construct an equipment of the space $H_S$. Thus, we will consider the following rigging:

$$(l_2(p))_{-,S}\supset H_S\supset l_2(p)\supset l_{\mathrm{fin}}, \tag{4.2.8}$$

where $l_2(p)$ is the weighted $l_2$-type space ($l_2$ over $\mathbb Z$) with a weight $p = (p_n)$, $n\in\mathbb Z$, $p_n\ge 1$. The norm in $l_2(p)$ is given by $\|f\|^2_{l_2(p)} = \sum_{n\in\mathbb Z}|f_n|^2p_n$;


$(l_2(p))_{-,S} = H_-$ is the negative space with respect to the positive space $l_2(p) = H_+$, and $H_S = H$ is the zero space. The space $l_{\mathrm{fin}} = D$ is endowed with the topology of coordinatewise uniformly finite convergence. As in the proof of Theorem 3.5.6, we make sure that there is a sufficiently fast growing sequence $p$ such that the embedding $l_2(p)\hookrightarrow H_S$ is quasi-nuclear.

In the next step we use the rigging (4.2.8) to construct generalized eigenvectors. The inner structure of the space $(l_2(p))_{-,S}$ is complicated because of the complicated structure of $H_S$. This is the reason to introduce the new auxiliary rigging

$$l = (l_{\mathrm{fin}})'\supset l_2(p^{-1})\supset l_2\supset l_2(p)\supset l_{\mathrm{fin}}, \tag{4.2.9}$$

where $l_2(p^{-1})$, $p^{-1} = (p_n^{-1})$, $n\in\mathbb Z$, is the negative space with respect to the positive space $l_2(p)$ and the zero (initial) space $l_2$. The chains (4.2.8) and (4.2.9) have the same positive space $l_2(p)$, and the general Lemma 2.7.5 of Chap. 2 establishes that the space $(l_2(p))_{-,S}$ is isometric to the space $l_2(p^{-1})$.

We go back to the operator $A$: it is some self-adjoint extension of $J$ in the space $H_S$. It is easy to understand that the operator $A$ is standardly connected with the rigging (4.2.8); but instead of the rigging (4.2.8) we consider (4.2.9) and Lemma 2.7.5. Let $\varphi(\lambda)\in(l_2(p))_{-,S}$ be a generalized eigenvector of the operator $A$ in terms of (4.2.8). In this case, due to Theorem 4.1.2 and (4.1.4), we have

$$(\varphi(\lambda),Af)_S = \lambda(\varphi(\lambda),f)_S,\qquad \lambda\in\mathbb R,\ f\in l_{\mathrm{fin}}. \tag{4.2.10}$$

Denote

$$P(\lambda) = U\varphi(\lambda)\in l_2(p^{-1}),\qquad P(\lambda) = (P_n(\lambda)),\ n\in\mathbb Z;\quad P_n(\lambda)\in\mathbb R\ \ \forall n\in\mathbb Z$$

(here we apply Lemma 2.7.5 with $H_- = (l_2(p))_{-,S}$ and $F_- = l_2(p^{-1})$). Using (2.7.36), we can rewrite (4.2.10) in the form

$$(P(\lambda),Af)_{l_2} = \lambda(P(\lambda),f)_{l_2},\qquad \lambda\in\mathbb R,\ f\in l_{\mathrm{fin}}. \tag{4.2.11}$$

The corresponding Fourier transform (4.1.5) has the form

$$H_S\supset l_{\mathrm{fin}}\ni f\mapsto (Ff)(\lambda) = \hat f(\lambda) = (f,P(\lambda))_{l_2}\in L_2(\mathbb R,d\sigma(\lambda)). \tag{4.2.12}$$

Let us calculate $P(\lambda)$. The operator $A$ is a self-adjoint extension in $H_S$ of the operator $J$ with $\mathrm{Dom}(J) = l_{\mathrm{fin}}$; therefore $A$ acts on $l_{\mathrm{fin}}$ by the formula (4.2.5), and hence (4.2.11) gives

$$\sum_{n\in\mathbb Z}\lambda P_n(\lambda)\bar f_n = \lambda(P(\lambda),f)_{l_2} = (P(\lambda),Af)_{l_2} = (P(\lambda),Jf)_{l_2} = (J^+P(\lambda),f)_{l_2} = \sum_{n\in\mathbb Z}P_{n+1}(\lambda)\bar f_n,\qquad \forall f\in l_{\mathrm{fin}}. \tag{4.2.13}$$


Hence we have

$$\lambda P_n(\lambda) = P_{n+1}(\lambda),\qquad n\in\mathbb Z. \tag{4.2.14}$$

Without loss of generality, we can take $P_0(\lambda) = 1$, $\lambda\in\mathbb R$. Then the equalities (4.2.14) yield

$$P_n(\lambda) = \lambda^n,\qquad n\in\mathbb Z. \tag{4.2.15}$$

Thus, the Fourier transform (4.2.12) finally has the form

$$H_S\supset l_{\mathrm{fin}}\ni f\mapsto (Ff)(\lambda) = \hat f(\lambda) = \sum_{n\in\mathbb Z}f_n\lambda^n\in L_2(\mathbb R,d\sigma(\lambda)), \tag{4.2.16}$$

and the Parseval equality (4.1.6) takes the form

$$(f,g)_S = \int_{\mathbb R}\hat f(\lambda)\overline{\hat g(\lambda)}\,d\sigma(\lambda),\qquad f,g\in l_{\mathrm{fin}}. \tag{4.2.17}$$

To construct the Fourier transform (4.2.12) and to verify the formulas (4.2.13)-(4.2.17), it is necessary to note that for our operator $A$ the algebraically inverse operator $A^{-1}$ exists on $l_{\mathrm{fin}}$, and the vector $q = \delta_0\in l_{\mathrm{fin}}$ has the property $A^nq = J^n\delta_0 = \delta_n\in D$, $n\in\mathbb Z$. The latter set is total in $l_{\mathrm{fin}}$, and Theorem 4.1.2 is applicable.

The Parseval equality (4.2.17) immediately leads to the representation (4.2.1). According to (4.2.15) and (4.2.16), $\hat\delta_n = \lambda^n$ and $\hat\delta_0 = 1$; by (4.2.7), we obtain

$$s_n = (\delta_n,\delta_0)_S = (\hat\delta_n,\hat\delta_0)_{L_2(\mathbb R,d\sigma(\lambda))} = \int_{\mathbb R}\lambda^n\,d\sigma(\lambda),\qquad n\in\mathbb Z,$$

i.e., we get (4.2.1) with the measure $d\rho(\lambda) = d\sigma(\lambda)$.

If the operator $J$ (4.2.5) is essentially self-adjoint in $H_S$, then we can take the closure of $J$ as $A$; in this case the measure $d\rho(\lambda)$ in the representation (4.2.1) is unique. Thus, to complete the proof of the theorem in the case of a scalar product (4.2.7), it is sufficient to prove that the condition (4.2.3) guarantees the essential self-adjointness of $J$ in the space $H_S$. In what follows we will use Theorem 2.6.8: it is necessary to show that the operator $J$ has a total set $Q$ of quasi-analytic vectors in the space $H_S$. Denote $Q = \{\delta_p\}$, $p\in\mathbb Z$. This set is total in $H_S$: its linear span is equal to $l_{\mathrm{fin}}$. Let us check that each vector $\delta_p$ is quasi-analytic. According to (4.2.5)-(4.2.7) and using Theorem 2.6.8, Lemma 2.6.7, and (2.6.21), we can write

$$\|J^n\delta_p\|^2_S = \|\delta_{p+n}\|^2_S = s_{2p+2n},\qquad n\in\mathbb N,\ p\in\mathbb Z.$$


Hence we have

$$\sum_{n=1}^{\infty}\frac{1}{\sqrt[n]{\|J^n\delta_p\|_S}} = \sum_{n=1}^{\infty}\frac{1}{\sqrt[2n]{s_{2p+2n}}}. \tag{4.2.18}$$

But since the series $\sum_{n=1}^{\infty}\frac{1}{\sqrt[2n]{s_{2p+2n}}}$ and $\sum_{n=1}^{\infty}\frac{1}{\sqrt[2n]{s_{2n}}}$ are either convergent or divergent simultaneously, the equality (4.2.18) and the condition (4.2.3) give that the vector $\delta_p$ is quasi-analytic. Therefore, in the non-degenerate case, the theorem is proved.

Consider now the case when the quadratic form is degenerate, that is, when there exists a nonzero $f = (f_j)$, $j\in\mathbb Z$, such that

$$\sum_{j,k\in\mathbb Z}s_{j+k}f_j\bar f_k = 0. \tag{4.2.19}$$

In this case the expression (4.2.7) gives a quasi-scalar product, and for the construction of the space $H_S$ it is necessary to pass to the quotient of $l_{\mathrm{fin}}$ by the subspace of all such $f$ and, after this, to take the completion. The operator $J$ is Hermitian with respect to the quasi-scalar product; therefore it is correctly defined on our $H_S$ and is Hermitian with respect to the introduced scalar product. After this, it is necessary to repeat the scheme of the proof given above, applying what was said at the end of the proof of Theorem 3.5.6. □

In this case the expression (4.2.7) gives a quasi-scalar product and for the construction of the space .HS it is necessary to take factor space .lfin of all such f and after this to make the completion. The operator J is Hermitian with respect to the quasi-scalar product, therefore it is correctly defined on our .HS and it is Hermitian with respect to the introduced scalar product. After this, it is necessary to repeat given above scheme of the proof, applying what was said at the end of the proof of Theorem 3.5.6. u n Both for the proof of Theorem 3.5.6 and for the proof of the last theorem, one can make certain remarks about the support of the measure .dρ(λ) in the degenerate case. In the next section we develope the spectral theory of block Jacobi-Laurent matrices corresponding to the strong moment problem.

4.3 The Orthogonalization Procedure and the Construction of a Tri-Diagonal Block Matrix

We first propose an orthogonalization procedure and the construction of a tri-diagonal block matrix of a self-adjoint operator related to the corresponding strong moment problem. Instead of the usual space $l_2$ of sequences $f = (f_n)$, $f_n\in\mathbb C$, in which the ordinary Jacobi matrix acts, throughout this section we will use the "double" space $l_2$ defined as

$$l_2 = H_0\oplus H_1\oplus H_2\oplus\cdots,\qquad H_0 = \mathbb C^1,\quad H_1 = H_2 = \cdots = \mathbb C^2. \tag{4.3.1}$$


Our tri-diagonal matrices act in this space (4.3.1). It is clear that this space essentially coincides with the space $l_2\times l_2$ taken over $\mathbb Z$, that is, with the space of sequences $f = (f_n)$, $n\in\mathbb Z$, $f_n\in\mathbb C$; however, its representation in the form (4.3.1) is more convenient for us, since the coordinate $f_0$ is counted in (4.3.1) once, and not twice as in $l_2\times l_2$.

Let $d\rho(\lambda)$ be a Borel measure with bounded support on the real axis $\mathbb R$, and let $L_2 = L_2(\mathbb R,d\rho(\lambda))$ be the space of complex-valued square integrable functions defined on $\mathbb R$. We suppose that the Borel measure $d\rho(\lambda)$ is such that all the functions $\mathbb R\ni\lambda\mapsto\lambda^m$, $m\in\mathbb Z$, belong to $L_2$ and are linearly independent. In order to find an analog of the usual Jacobi matrix $J$, we need to choose an order of orthogonalization in $L_2$ for the family of linearly independent functions

$$\mathbb R\ni\lambda\mapsto\lambda^m,\qquad m\in\mathbb Z. \tag{4.3.2}$$

We choose the following order for the orthogonalization via the Gram-Schmidt procedure:

$$\lambda^0;\quad \lambda^{-1},\lambda^1;\quad \lambda^{-2},\lambda^2;\quad\dots;\quad \lambda^{-n},\lambda^n;\quad\dots \tag{4.3.3}$$

Applying the Gram-Schmidt orthogonalization procedure to (4.3.3) with real coefficients, we obtain a system of polynomials (in the variables $\lambda$ and $\lambda^{-1}$, the so-called Laurent polynomials) orthonormal in the space $L_2$ and numbered in the order

$$P_{0;0}(\lambda);\ P_{1;0}(\lambda),P_{1;1}(\lambda);\ P_{2;0}(\lambda),P_{2;1}(\lambda);\ \dots;\ P_{n;0}(\lambda),P_{n;1}(\lambda);\ \dots, \tag{4.3.4}$$

where each polynomial has the form $P_{n;\alpha}(\lambda) = k_{n;\alpha}\lambda^{(-1)^{\alpha+1}n}+\cdots$, $n\in\mathbb N$, $\alpha = 0,1$, $k_{n;\alpha}>0$; here "$+\cdots$" denotes the remaining part of the corresponding polynomial, and $P_0(\lambda) = P_{0;0}(\lambda) = 1$. In such a way, $P_{n;\alpha}$ is a linear combination of

$$\{1;\ \lambda^{-1},\lambda^1;\ \lambda^{-2},\lambda^2;\ \dots;\ \lambda^{-(n-1)},\lambda^{n-1};\ \lambda^{-n}\}\quad\text{for }\alpha = 0,$$
$$\{1;\ \lambda^{-1},\lambda^1;\ \lambda^{-2},\lambda^2;\ \dots;\ \lambda^{-(n-1)},\lambda^{n-1};\ \lambda^{-n},\lambda^n\}\quad\text{for }\alpha = 1. \tag{4.3.5}$$
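The order of orthogonalization can be reproduced numerically. The following sketch is an assumed example (not from the book): the measure $d\rho$ is discretized on $[1,2]$ by a midpoint rule, so that all powers $\lambda^m$, $m\in\mathbb Z$, make sense, and Gram-Schmidt is run exactly in the order above:

```python
import numpy as np

# Midpoint discretization of dρ = dλ on [1, 2] (bounded support away from 0).
lam = 1.0 + (np.arange(200) + 0.5) / 200.0
w = np.full(200, 1.0 / 200.0)

def ip(u, v):
    """<u, v> = ∫ u(λ) v(λ) dρ(λ) for real functions (quadrature)."""
    return float(np.sum(w * u * v))

# Gram-Schmidt in the order λ^0; λ^{-1}, λ^1; λ^{-2}, λ^2; λ^{-3}, λ^3.
exponents = [0, -1, 1, -2, 2, -3, 3]
P = []
for m in exponents:
    p = lam ** float(m)
    for _ in range(2):               # re-orthogonalize twice for numerical stability
        for q in P:
            p = p - ip(p, q) * q
    P.append(p / np.sqrt(ip(p, p)))  # normalization keeps the leading coefficient > 0

G = np.array([[ip(p, q) for q in P] for p in P])
print(np.allclose(G, np.eye(len(P))))  # True: the resulting system is orthonormal
```

The double orthogonalization pass is a standard remedy for the loss of orthogonality that plain Gram-Schmidt suffers when the monomials are nearly linearly dependent numerically.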

Since the family (4.3.2) is total in the space $L_2$, the sequence (4.3.4) is an orthonormal basis in this space. Denote by $\mathcal P_{n;\alpha}$ the real subspace spanned by the elements of (4.3.4) up to $P_{n;\alpha}$, $n\in\mathbb N$, $\alpha = 0,1$. It is clear that, $\forall n\in\mathbb N$, we have

$$\mathcal P_{0;0}\subset\mathcal P_{1;0}\subset\mathcal P_{1;1}\subset\mathcal P_{2;0}\subset\mathcal P_{2;1}\subset\cdots\subset\mathcal P_{n;0}\subset\mathcal P_{n;1}\subset\cdots,$$
$$\mathcal P_{n;\alpha} = \{P_{0;0}(\lambda)\}\oplus\{P_{1;0}(\lambda)\}\oplus\{P_{1;1}(\lambda)\}\oplus\{P_{2;0}(\lambda)\}\oplus\{P_{2;1}(\lambda)\}\oplus\cdots\oplus\{P_{n;\alpha}(\lambda)\}, \tag{4.3.6}$$


where $\{P_{m;\alpha}(\lambda)\}$, $m\in\mathbb N$, $\alpha=0,1$, denotes the one-dimensional real space generated by $P_{m;\alpha}(\lambda)$; $\mathcal P_{0;0}=\mathbb R$.

As was said above, in the subsequent investigation we need to use, instead of the space $l_2$, the complex Hilbert space (4.3.1). Each vector $f\in l_2$ has the form $f=(f_n)$, $f_n\in H_n$, and hence
$$\|f\|^2_{l_2}=\sum_{n=0}^{\infty}\|f_n\|^2_{H_n}<\infty,\qquad (f,g)_{l_2}=\sum_{n=0}^{\infty}(f_n,g_n)_{H_n},\quad \forall f,g\in l_2.$$
For $n=0$ the vector $f_0\in H_0$ has, in the standard orthonormal basis $\{e_{0;0}\}$ of the space $\mathbb C^{1}$, the representation $f_{0;0}$; hence $f_0=(f_{0;0})$. For $n\in\mathbb N$ the coordinates of a vector $f_n\in H_n$ in the corresponding orthonormal basis $\{e_{n;0},e_{n;1}\}$ of the space $\mathbb C^{2}$ are denoted by $(f_{n;0},f_{n;1})$, so $f_n=(f_{n;0},f_{n;1})$. By the way, it is clear that the space $l_2$ is almost isometric to the space $l_2\times l_2$.

Using the orthonormal system (4.3.4), one can define a mapping of $l_2$ into $L_2$. Putting, $\forall n\in\mathbb N_0$ and $\forall\lambda\in\mathbb R$, $P_n(\lambda)=(P_{n;0}(\lambda),P_{n;1}(\lambda))\in H_n$, we set
$$l_2\ni f=(f_n)_{n=0}^{\infty}\longmapsto (If)(\lambda):=\hat f(\lambda)=\sum_{n=0}^{\infty}(f_n,P_n(\lambda))_{H_n}\in L_2. \tag{4.3.7}$$
Since for $n\in\mathbb N_0$ we get
$$(f_n,P_n(\lambda))_{H_n}=f_{n;0}P_{n;0}(\lambda)+f_{n;1}P_{n;1}(\lambda)$$
and
$$\|f\|^2_{l_2}=\|(f_{0;0},f_{1;0},f_{1;1},f_{2;0},f_{2;1},\ldots,f_{n;0},f_{n;1},\ldots)\|^2_{l_2},$$

we see that (4.3.7) is a mapping of the space $l_2$ into $L_2$, and the use of the orthonormal system (4.3.4) shows that this mapping is isometric. The image of $l_2$ under the mapping (4.3.7) coincides with the space $L_2$, because under our assumption the system (4.3.4) is an orthonormal basis in $L_2$ (the Laurent polynomial basis). Therefore the mapping (4.3.7) is a unitary transformation $I$ acting from $l_2$ onto $L_2$.

Let us now dwell on some auxiliary constructions, which will be used essentially in what follows. Namely, we prove the technically important Theorems 4.3.5, 4.3.11, 4.3.12. Let $A$ be an arbitrary linear operator defined on $\mathrm{Dom}(A)=l_{\mathrm{fin}}\subset l_2$, where $l_{\mathrm{fin}}$ denotes the set of finite vectors from $l_2$. It is possible to construct the corresponding operator matrix $(a_{j,k})_{j,k=0}^{\infty}$, where for each $j,k\in\mathbb N_0$ the element $a_{j,k}$ is an operator acting from $H_k$ into $H_j$, so that $\forall f,g\in \mathrm{Dom}(A)=l_{\mathrm{fin}}\subset l_2$ we have
$$(Af)_j=\sum_{k=0}^{\infty}a_{j,k}f_k,\quad j\in\mathbb N_0,\qquad (Af,g)_{l_2}=\sum_{j,k=0}^{\infty}(a_{j,k}f_k,g_j)_{H_j}. \tag{4.3.8}$$

194

4 The Strong Moment Problem

To prove (4.3.8), we only need to write the usual matrix of the operator $A$ in the space $l_2$ using the basis
$$(e_{0;0};\ e_{1;0},e_{1;1};\ e_{2;0},e_{2;1};\ \ldots;\ e_{n;0},e_{n;1},\ldots),\qquad e_{0;0}=1. \tag{4.3.9}$$
Then $a_{j,k}$, for each $j,k\in\mathbb N_0$, is the operator $H_k\longrightarrow H_j$ that has the matrix representation
$$a_{j,k;\alpha,\beta}=(Ae_{k;\beta},e_{j;\alpha})_{l_2}, \tag{4.3.10}$$
where $\alpha=0,1$ and $\beta=0,1$. We will write $a_{j,k}=(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$, $j,k\in\mathbb N$ (taking into account the special cases $a_{0,1}=(a_{0,1;\alpha,\beta})_{\alpha,\beta=0}^{0,1}$, $a_{1,0}=(a_{1,0;\alpha,\beta})_{\alpha,\beta=0}^{1,0}$ and $a_{0,0}=(a_{0,0;\alpha,\beta})_{\alpha,\beta=0}^{0,0}=a_{0,0;0,0}$). We note that the first formula in (4.3.8) is fulfilled for $f\in l_{\mathrm{fin}}$; in the second formula $f\in l_{\mathrm{fin}}$, $g\in l_2$.

Let us consider the image $\hat A=IAI^{-1}\colon L_2\longrightarrow L_2$ of the above operator $A\colon l_2\longrightarrow l_2$ under the mapping $I$ (4.3.7). Its matrix in the basis (4.3.4), i.e.,
$$(P_{0;0}(\lambda);\ P_{1;0}(\lambda),P_{1;1}(\lambda);\ P_{2;0}(\lambda),P_{2;1}(\lambda);\ \ldots;\ P_{n;0}(\lambda),P_{n;1}(\lambda);\ \ldots),$$

is equal to the usual matrix of the operator $A$ regarded as an operator $l_2\longrightarrow l_2$ in the corresponding basis (4.3.9). By using (4.3.10) and the procedure described above, we get the operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ of $A\colon l_2\longrightarrow l_2$. By definition, this matrix is also the operator matrix of $\hat A\colon L_2\longrightarrow L_2$. It is clear that we can take an arbitrary essentially self-adjoint operator in $L_2$ as the operator $\hat A$.

Let us now return to the objects connected with our measure $d\rho(\lambda)$ and the sequences (4.3.3), (4.3.4).

Lemma 4.3.1 For the polynomials $P_{n;\alpha}(\lambda)$ in (4.3.4) and the subspaces $\mathcal P_{m;\beta}$ from (4.3.6), the relations
$$\lambda P_{0;0}(\lambda)=\lambda\in\mathcal P_{1;1},\qquad \lambda P_{n;0}(\lambda)\in\mathcal P_{n;1},\quad \lambda P_{n;1}(\lambda)\in\mathcal P_{n+1;1},\quad n\in\mathbb N \tag{4.3.11}$$

hold true.

Proof According to (4.3.4), the polynomial $P_{n;\alpha}(\lambda)$, $n\in\mathbb N$, is equal to some linear combination of
$$\{1;\ \lambda^{-1},\lambda;\ \ldots;\ \lambda^{-(n-1)},\lambda^{n-1},\lambda^{(-1)^{\alpha+1}n}\}.$$
Hence, multiplying by $\lambda$, we obtain a linear combination of
$$\{\lambda;\ 1,\lambda^{2};\ \lambda^{-1},\lambda^{3};\ \ldots;\ \lambda^{-(n-2)},\lambda^{n},\lambda^{(-1)^{\alpha+1}n+1}\},$$
and such a linear combination belongs to $\mathcal P_{n;1}$ for $\alpha=0$ and to $\mathcal P_{n+1;1}$ for $\alpha=1$. The first inclusion in (4.3.11) is trivial. □

Lemma 4.3.2 Let $\hat A$ be the (bounded and self-adjoint) operator of multiplication by $\lambda$ in the space $L_2$, namely,
$$L_2\ni\varphi(\lambda)\longmapsto(\hat A\varphi)(\lambda)=\lambda\varphi(\lambda)\in L_2.$$
Then the real operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ of the operator $A$ (i.e., $\hat A=IAI^{-1}$) has a tri-diagonal structure, i.e., $a_{j,k}=0$ for $|j-k|>1$.

Proof By using (4.3.10) for $e_{n;\gamma}=I^{-1}P_{n;\gamma}(\lambda)$, $n\in\mathbb N_0$, $\gamma=0,1$, we have
$$a_{j,k;\alpha,\beta}=(Ae_{k;\beta},e_{j;\alpha})_{l_2}=\int_{\mathbb R}\lambda P_{k;\beta}(\lambda)P_{j;\alpha}(\lambda)\,d\rho(\lambda),\qquad \forall j,k\in\mathbb N_0, \tag{4.3.12}$$
where $\alpha,\beta=0,1$. From (4.3.11) we have $\lambda P_{k;\beta}\in\mathcal P_{k+1;\beta}$. According to (4.3.6), the integral in (4.3.12) is equal to zero for $j>k+1$ and for each $\alpha=0,1$. On the other hand, in the integral (4.3.12) we can instead multiply the polynomial $P_{j;\alpha}(\lambda)$ by $\lambda$. Therefore, as above, we conclude that this integral is equal to zero for $k>j+1$ and for each $\beta=0,1$. As a result, the integral in (4.3.12), i.e., the elements $a_{j,k;\alpha,\beta}$, $j,k\in\mathbb N_0$, are equal to zero for $|j-k|>1$, $\alpha,\beta=0,1$. (In the previous considerations it is necessary to take into account that $e_{0;0}=I^{-1}P_{0;0}(\lambda)$, $P_{0;0}(\lambda)=1$.) □

In such a way, the matrix $(a_{j,k})_{j,k=0}^{\infty}$ of our multiplication operator $\hat A$ has the tri-diagonal block structure
$$\begin{bmatrix}
a_{0,0} & a_{0,1} & 0 & 0 & 0 & \cdots\\
a_{1,0} & a_{1,1} & a_{1,2} & 0 & 0 & \cdots\\
0 & a_{2,1} & a_{2,2} & a_{2,3} & 0 & \cdots\\
0 & 0 & a_{3,2} & a_{3,3} & a_{3,4} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}. \tag{4.3.13}$$
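Lemma 4.3.2 can be checked numerically in the same hypothetical discrete-measure model used earlier (my own sketch, not the book's). With a finite monomial list the structure necessarily degrades near the truncation boundary, so only the leading block of the matrix is tested for the five-diagonal scalar form implied by the tri-diagonal block structure.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9
lam = np.linspace(0.5, 2.0, N)           # supp(d rho), away from 0
w = rng.uniform(0.5, 1.5, N)

powers = [0]
k = 1
while len(powers) < N:
    powers += [-k, k]
    k += 1
V = np.column_stack([lam**p for p in powers[:N]])
Q, R = np.linalg.qr(np.diag(np.sqrt(w)) @ V)
P = (np.diag(1.0 / np.sqrt(w)) @ Q) * np.sign(np.diag(R))

# matrix (4.3.12): M[s, t] = integral of lam * P_t * P_s d rho, scalar indexing
M = P.T @ np.diag(w * lam) @ P

# entries vanish outside |s - t| <= 2, except near the truncation boundary
n = N - 3
core = M[:n, :n]
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 2
assert np.abs(core[mask]).max() < 1e-7   # five-diagonal in scalar form
assert np.allclose(M, M.T, atol=1e-8)    # symmetry, cf. (4.3.14)
```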

From (4.3.12) we conclude that the symmetry
$$a_{j,k;\alpha,\beta}=a_{k,j;\beta,\alpha},\qquad j,k\in\mathbb N_0,\quad \alpha,\beta=0,1 \tag{4.3.14}$$
holds true.


A more detailed analysis of the expression (4.3.12) allows us to find out which elements of $(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$ are zero and which are not in the general case for $|j-k|\le 1$. We can also describe properties of the matrix with respect to the permutation of the indices $j,k$ and $\alpha,\beta$.

Lemma 4.3.3 Let $(a_{j,k})_{j,k=0}^{\infty}$ be the operator matrix (4.3.13) that generates the operator of multiplication by $\lambda$ in $L_2$. Here $a_{j,k}\colon H_k\longrightarrow H_j$, and $a_{j,k}=(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$ are the matrices of the operators $a_{j,k}$ in the corresponding standard orthonormal bases. Then
$$a_{j,j+1;0,0}=a_{j,j+1;0,1}=0,\qquad a_{j+1,j;0,0}=a_{j+1,j;1,0}=0,\qquad \forall j\in\mathbb N. \tag{4.3.15}$$

If we choose another order inside each pair $\{\lambda^{-m},\lambda^{m}\}$ in (4.3.5), then Lemma 4.3.3 is no longer true, but it is still possible to describe the zero elements of the matrices $(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$. Such matrices $(a_{j,k})_{j,k=0}^{\infty}$ also have a tri-diagonal block structure with zero elements, but in other places.

Proof According to (4.3.12), we have
$$a_{j,j+1;0,0}=\int_{\mathbb R}\lambda P_{j+1;0}(\lambda)P_{j;0}(\lambda)\,d\rho(\lambda),\qquad a_{j,j+1;0,1}=\int_{\mathbb R}\lambda P_{j+1;1}(\lambda)P_{j;0}(\lambda)\,d\rho(\lambda),\qquad j\in\mathbb N. \tag{4.3.16}$$
In the first integral in (4.3.16), according to (4.3.11) we have $\lambda P_{j;0}(\lambda)\in\mathcal P_{j;1}$, while $P_{j+1;0}(\lambda)$ is orthogonal to this set (see (4.3.6)). Therefore this integral is equal to zero. Similarly, the second integral in (4.3.16) is also equal to zero: here it is necessary to use the orthogonality of $P_{j+1;1}(\lambda)$ to $\mathcal P_{j;1}$. Thus the first two equalities in (4.3.15) are fulfilled. The second pair of equalities is fulfilled according to (4.3.14). □

The above shows that the $(2\times 2)$-matrices $a_{j,j+1}$ and $a_{j+1,j}$, $j\in\mathbb N$, in (4.3.13) have zero first rows and zero first columns, respectively. Taking (4.3.13) into account, we can conclude that the self-adjoint matrix of the operator of multiplication by $\lambda$, regarded as a usual scalar matrix in the usual basis of the space $l_2$, is five-diagonal.

Lemma 4.3.4 The elements
$$a_{0,1;0,1},\ a_{1,0;1,0},\qquad a_{j,j+1;1,1},\ a_{j+1,j;1,1},\quad j\in\mathbb N, \tag{4.3.17}$$
of the matrices $(a_{j,k})_{j,k=0}^{\infty}$ in (4.3.13) are always positive.


Proof The symmetry (4.3.14) shows that it is sufficient to prove the positivity of the second and the fourth elements in (4.3.17). We start with $a_{1,0;1,0}$. Denote by $P'_{1;1}(\lambda)$ the non-normalized vector $P_{1;1}(\lambda)$ obtained from the Gram-Schmidt orthogonalization procedure. According to (4.3.3) and (4.3.4), we have
$$P'_{1;1}(\lambda)=\lambda-(\lambda,P_{1;0}(\lambda))_{L_2}P_{1;0}(\lambda)-(\lambda,1)_{L_2}.$$
Therefore, using (4.3.12), we get
$$a_{1,0;1,0}=\int_{\mathbb R}\lambda P_{1;1}(\lambda)\,d\rho(\lambda)=\|P'_{1;1}(\lambda)\|^{-1}_{L_2}\int_{\mathbb R}\lambda P'_{1;1}(\lambda)\,d\rho(\lambda)$$
$$=\|P'_{1;1}(\lambda)\|^{-1}_{L_2}\int_{\mathbb R}\lambda\bigl(\lambda-(\lambda,P_{1;0}(\lambda))_{L_2}P_{1;0}(\lambda)-(\lambda,1)_{L_2}\bigr)\,d\rho(\lambda)$$
$$=\|P'_{1;1}(\lambda)\|^{-1}_{L_2}\bigl(\|\lambda\|^2_{L_2}-|(\lambda,P_{1;0}(\lambda))_{L_2}|^2-|(\lambda,1)_{L_2}|^2\bigr). \tag{4.3.18}$$
The positivity of the expression (4.3.18) follows from the Parseval equality for the decomposition of the function $\lambda\in L_2$ with respect to the orthonormal basis (4.3.4) in the space $L_2$, namely,
$$|(\lambda,1)_{L_2}|^2+|(\lambda,P_{1;0}(\lambda))_{L_2}|^2+|(\lambda,P_{1;1}(\lambda))_{L_2}|^2+\cdots=\|\lambda\|^2_{L_2},$$

where $1=P_{0;0}(\lambda)$.

Let us pass to the proof of the positivity of $a_{j+1,j;1,1}$, where $j\in\mathbb N$. From (4.3.12) we have
$$a_{j+1,j;1,1}=\int_{\mathbb R}\lambda P_{j;1}(\lambda)P_{j+1;1}(\lambda)\,d\rho(\lambda). \tag{4.3.19}$$
According to (4.3.4) and (4.3.6), we have
$$P_{j;1}(\lambda)=k_{j;1}\lambda^{j}+R_{j;0}(\lambda), \tag{4.3.20}$$
where $R_{j;0}(\lambda)$ is some polynomial from $\mathcal P_{j;0}$ and $k_{j;1}>0$. Multiplying the expression (4.3.20) by $\lambda$, we get
$$\lambda P_{j;1}(\lambda)=k_{j;1}\lambda^{j+1}+\lambda R_{j;0}(\lambda),\qquad \lambda R_{j;0}(\lambda)\in\mathcal P_{j;1} \tag{4.3.21}$$
(here it is necessary to use the second inclusion from (4.3.11) and (4.3.6)). Similarly to (4.3.20), we have
$$P_{j+1;1}(\lambda)=k_{j+1;1}\lambda^{j+1}+R_{j+1;0}(\lambda),\qquad R_{j+1;0}(\lambda)\in\mathcal P_{j+1;0}, \tag{4.3.22}$$


where $k_{j+1;1}>0$. Expressing $\lambda^{j+1}$ from (4.3.22) and substituting it into (4.3.21), we get
$$\lambda P_{j;1}(\lambda)=\frac{k_{j;1}}{k_{j+1;1}}\bigl(P_{j+1;1}(\lambda)-R_{j+1;0}(\lambda)\bigr)+\lambda R_{j;0}(\lambda)
=\frac{k_{j;1}}{k_{j+1;1}}P_{j+1;1}(\lambda)-\frac{k_{j;1}}{k_{j+1;1}}R_{j+1;0}(\lambda)+\lambda R_{j;0}(\lambda). \tag{4.3.23}$$
The last two terms in (4.3.23) belong to $\mathcal P_{j+1;0}$ and $\mathcal P_{j;1}$, respectively, and in any case are orthogonal to $P_{j+1;1}(\lambda)$. Therefore the substitution of the expression (4.3.23) into (4.3.19) gives $a_{j+1,j;1,1}=\dfrac{k_{j;1}}{k_{j+1;1}}>0$. □

In what follows we will use the usual well-known notation for the elements $a_{j,k}$ of the Jacobi matrix (4.3.13):
$$a_n=a_{n+1,n}\colon H_n\longrightarrow H_{n+1},\quad b_n=a_{n,n}\colon H_n\longrightarrow H_n,\quad c_n=a_{n,n+1}\colon H_{n+1}\longrightarrow H_n,\qquad n\in\mathbb N_0. \tag{4.3.24}$$

We summarize all of the previous investigation in a theorem.

Theorem 4.3.5 The bounded self-adjoint operator $\hat A$ of multiplication by $\lambda$ (with a strong cyclic vector) in the space $L_2$ has, in the orthonormal basis (4.3.4) of polynomials, the form of a tri-diagonal block Jacobi type symmetric matrix $J=(a_{j,k})_{j,k=0}^{\infty}$ that acts in the space (4.3.1), namely,
$$l_2=H_0\oplus H_1\oplus H_2\oplus\cdots,\qquad H_0=\mathbb C^{1},\quad H_n=\mathbb C^{2},\ n\in\mathbb N. \tag{4.3.25}$$
In the notation (4.3.24), this matrix has the form
$$J=\begin{bmatrix}
b_0 & c_0 & 0 & 0 & \cdots\\
a_0 & b_1 & c_1 & 0 & \cdots\\
0 & a_1 & b_2 & c_2 & \cdots\\
0 & 0 & a_2 & b_3 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix},\qquad
a_0=\begin{bmatrix}*\\ +\end{bmatrix},\quad
c_0=\begin{bmatrix}* & +\end{bmatrix},\quad
a_n=\begin{bmatrix}0 & *\\ 0 & +\end{bmatrix},\quad
c_n=\begin{bmatrix}0 & 0\\ * & +\end{bmatrix},\ n\in\mathbb N, \tag{4.3.26}$$
where "$0$" marks elements that always vanish, "$+$" marks elements that are always positive, and "$*$" marks the remaining elements.


In (4.3.26), $b_0=b_{0;0,0}$ is a $1\times 1$-matrix, i.e., a scalar; the diagonal blocks $b_n$ are $2\times 2$-matrices, $b_n=(b_{n;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$, $\forall n\in\mathbb N$; $a_0$ is a $2\times 1$-matrix, $a_0=(a_{0;\alpha,\beta})_{\alpha,\beta=0}^{1,0}$; $a_n$ is a $2\times 2$-matrix, $a_n=(a_{n;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$, $\forall n\in\mathbb N$; $c_0$ is a $1\times 2$-matrix, $c_0=(c_{0;\alpha,\beta})_{\alpha,\beta=0}^{0,1}$; $c_n$ is a $2\times 2$-matrix, $c_n=(c_{n;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$, $\forall n\in\mathbb N$. In the matrices $a_n$ and $c_n$ the elements
$$a_{n;0,0},\ a_{n;1,0},\qquad c_{n;0,0},\ c_{n;0,1},\qquad \forall n\in\mathbb N \tag{4.3.27}$$
are always equal to zero; the elements
$$a_{0;1,0},\ c_{0;0,1},\qquad a_{n;1,1},\ c_{n;1,1},\qquad \forall n\in\mathbb N \tag{4.3.28}$$
are always positive. Thus it is possible to say that the left column of each matrix $a_n$ and the upper row of each matrix $c_n$ (starting from $n=1$) consist of zero elements. All positive elements in (4.3.26) are denoted by "$+$". Thus the matrix (4.3.26) in scalar form is five-diagonal. It is symmetric in the basis (4.3.4), i.e., $b_{n;\alpha,\beta}=b_{n;\beta,\alpha}$, $c_{n;\alpha,\beta}=a_{n;\beta,\alpha}$, $n\in\mathbb N_0$, $\alpha,\beta=0,1$.

For the considered operator $A=I^{-1}\hat AI$ we have
$$(Af)_n=(Jf)_n=a_{n-1}f_{n-1}+b_nf_n+c_nf_{n+1},\qquad f_{-1}:=0,\quad n\in\mathbb N_0,\quad \forall f\in \mathrm{Dom}(A)=l_{\mathrm{fin}}\subset l_2. \tag{4.3.29}$$
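The zero/positivity pattern (4.3.27), (4.3.28) can be verified numerically in the same hypothetical discrete-measure model as in the earlier sketches (all names, including the block-indexing helper `B`, are my own; blocks near the truncation boundary are skipped):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9
lam = np.linspace(0.5, 2.0, N)
w = rng.uniform(0.5, 1.5, N)

powers = [0]
k = 1
while len(powers) < N:
    powers += [-k, k]
    k += 1
V = np.column_stack([lam**p for p in powers[:N]])
Q, R = np.linalg.qr(np.diag(np.sqrt(w)) @ V)
P = (np.diag(1.0 / np.sqrt(w)) @ Q) * np.sign(np.diag(R))
M = P.T @ np.diag(w * lam) @ P


def B(n):
    # scalar index range of the block H_n: H_0 = {0}, H_n = {2n-1, 2n}
    return slice(0, 1) if n == 0 else slice(2 * n - 1, 2 * n + 1)


assert M[B(1), B(0)][1, 0] > 0                     # a_{0;1,0} > 0, cf. (4.3.28)
for n in range(1, 3):                              # away from the truncation boundary
    a_n = M[B(n + 1), B(n)]                        # block a_n : H_n -> H_{n+1}
    c_n = M[B(n), B(n + 1)]                        # block c_n : H_{n+1} -> H_n
    assert np.allclose(a_n[:, 0], 0, atol=1e-7)    # left column of a_n vanishes (4.3.27)
    assert np.allclose(c_n[0, :], 0, atol=1e-7)    # upper row of c_n vanishes (4.3.27)
    assert a_n[1, 1] > 0 and c_n[1, 1] > 0         # (4.3.28)
    assert np.allclose(c_n, a_n.T, atol=1e-7)      # symmetry of J
```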

We now make some simple but essential remarks concerning the operator $A$ generated in (4.3.25) by the matrix $J$ from (4.3.26) in the case when this operator is unbounded and has an inverse $A^{-1}$.

Remark 4.3.6 Assume that the measure $d\rho(\lambda)$ from the beginning of Sect. 4.3 has an arbitrary support in $\mathbb R$ and that all the functions
$$\mathbb R\ni\lambda\longmapsto\lambda^{m},\qquad m\in\mathbb Z, \tag{4.3.30}$$
belong to $L_2(\mathbb R,d\rho(\lambda))=L_2$ and are linearly independent. In this case we can repeat the constructions (4.3.3)-(4.3.29), but now the operator $A$, defined in the space $l_2$ on the set $l_{\mathrm{fin}}$, is only Hermitian with real elements (see (4.3.14)) and, therefore, has equal defect numbers. Moreover, in what follows we will assume that the set of functions (4.3.30) is total in $L_2$.

Remark 4.3.7 Consider the one-to-one mapping between $\mathbb R\setminus\{0\}$ and $\mathbb R$,
$$\mathbb R_0:=\mathbb R\setminus\{0\}\ni\lambda\longmapsto\mu=\lambda^{-1}=:\varphi(\lambda)\in\mathbb R. \tag{4.3.31}$$
We will assume that our given measure $d\rho(\lambda)$ on $\mathbb R$ is such that the point $0$ does not belong to its support, i.e., $0\notin\mathrm{supp}(d\rho(\lambda))$. In this case the mapping (4.3.31)


transfers this measure into the Borel measure $d\sigma(\mu)$ on $\mathbb R$ (i.e., $\forall\alpha\in\mathfrak B(\mathbb R)$, $\sigma(\alpha)=\rho(\varphi^{-1}(\alpha))$). For an arbitrary function $\mathbb R_0\ni\lambda\longmapsto F(\lambda)\in\mathbb C$ we have
$$\int_{\mathbb R}F(\mu^{-1})\,d\sigma(\mu)=\int_{\mathbb R}F(\lambda)\,d\rho(\lambda);\qquad F(\mu^{-1})=:(I_\varphi F)(\mu). \tag{4.3.32}$$
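The change-of-variables identity (4.3.32) can be illustrated with a discrete measure (a minimal sketch of my own; the test function `F` is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.linspace(0.5, 2.0, 8)        # supp(d rho), 0 not in the support
w = rng.uniform(0.5, 1.5, 8)          # rho({lam_i}) = w_i

# push the measure forward under phi(lam) = 1/lam: sigma({mu_i}) = rho({1/mu_i})
mu = 1.0 / lam

F = lambda x: np.exp(x) + x**3        # an arbitrary test function on R_0

lhs = np.sum(w * F(1.0 / mu))         # integral of F(mu^{-1}) d sigma(mu)
rhs = np.sum(w * F(lam))              # integral of F(lam) d rho(lam)
assert np.isclose(lhs, rhs)
```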

Consider the Hilbert space of complex-valued functions of the variable $\mu$ on the real axis $\mathbb R$, i.e., $L_2(\mathbb R,d\sigma(\mu))=L_{2,\varphi}$. Then, from (4.3.32), we conclude that the operator $I_\varphi\colon L_2\longrightarrow L_{2,\varphi}$, acting by the rule (4.3.32), is an isometry between the spaces $L_2(\mathbb R,d\rho(\lambda))=L_2$ and $L_2(\mathbb R,d\sigma(\mu))=L_{2,\varphi}$.

Let the operator $A$, constructed in $l_2$ according to (4.3.29) by the matrix $J$, be essentially self-adjoint and invertible. Then its $L_2$-image, i.e., the operator $\hat A$ of multiplication by $\lambda$, defined at first on linear combinations of the functions (4.3.30), is also invertible, and $0\notin\mathrm{supp}(d\rho(\lambda))$. The inverse operator $\hat A^{-1}$, as an operator in $l_2$, is generated by the algebraically inverse matrix $J^{-1}$. The mapping (4.3.31) shows that this matrix, in the space constructed above from the measure $d\sigma(\mu)$, also has the form (4.3.26), but other polynomials of the type (4.3.4) correspond to it. Of course, it is easy to calculate this matrix in the previous basis connected with $d\rho(\lambda)$; we denote the matrix $J^{-1}$ from now on by $K$. Note that the construction (4.3.31), (4.3.32) makes it possible to find interesting examples of measures $d\rho(\lambda)$ for which the set (4.3.30) is total in $L_2(\mathbb R,d\rho(\lambda))$.

It is very interesting and unexpected that this inverse matrix $J^{-1}=K$ of the tri-diagonal matrix $J$ is also tri-diagonal. This result is a consequence of the way the basis (4.3.4) is constructed: these polynomials are linear combinations of $\lambda^{m}$ and $\lambda^{-n}$, $m,n\in\mathbb N_0$. We will prove the corresponding results, since the form of the matrix $K$ differs slightly from that of $J$. At first, instead of Lemma 4.3.1, we have

Lemma 4.3.8 For the polynomials $P_{n;\alpha}(\lambda)$ from (4.3.4) and the subspaces $\mathcal P_{m;\beta}$, the relations
$$\lambda^{-1}P_{0;0}(\lambda)=\lambda^{-1}\in\mathcal P_{1;0},\qquad \lambda^{-1}P_{n;0}(\lambda)\in\mathcal P_{n+1;0},\quad \lambda^{-1}P_{n;1}(\lambda)\in\mathcal P_{n+1;0},\quad n\in\mathbb N \tag{4.3.33}$$
hold true.

Proof The proof is the same as that of Lemma 4.3.1 and follows from (4.3.5) and (4.3.6). □

Denote by $(p_{j,k})_{j,k=0}^{\infty}$, $p_{j,k}=(p_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$, the operator matrix $K$ of the operator of multiplication by $\lambda^{-1}$ in the previous space $L_2(\mathbb R,d\rho(\lambda))=L_2$; this matrix is constructed as before (see (4.3.8), (4.3.9) and (4.3.10)) using the basis (4.3.4).


We can now repeat Lemma 4.3.2 for $K$: using (4.3.33) instead of (4.3.11), we assert that the integral
$$p_{j,k;\alpha,\beta}=\int_{\mathbb R}\lambda^{-1}P_{k;\beta}(\lambda)P_{j;\alpha}(\lambda)\,d\rho(\lambda),\qquad j,k\in\mathbb N_0,\quad \alpha,\beta=0,1, \tag{4.3.34}$$
is equal to zero if $|j-k|>1$. Thus our matrix $K$ has the form (4.3.13). Of course, $K=J^{-1}$ is self-adjoint and symmetric. Instead of the equalities of Lemma 4.3.3, we have some other equalities.

Lemma 4.3.9 For the elements $p_{j,k}=(p_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{1,1}$ of the matrix $K$, we have the equalities
$$p_{0,1;0,1}=p_{1,0;1,0}=0,\qquad p_{j,j+1;0,1}=p_{j,j+1;1,1}=0,\qquad p_{j+1,j;1,0}=p_{j+1,j;1,1}=0,\quad j\in\mathbb N. \tag{4.3.35}$$

Proof It is similar to the proof of Lemma 4.3.3, but uses (4.3.33) instead of (4.3.11). For example, according to (4.3.34) we have
$$p_{j,j+1;0,1}=\int_{\mathbb R}\lambda^{-1}P_{j+1;1}(\lambda)P_{j;0}(\lambda)\,d\rho(\lambda),\qquad j\in\mathbb N_0. \tag{4.3.36}$$

By using the second inclusion from (4.3.33), we get that $\lambda^{-1}P_{j;0}(\lambda)\in\mathcal P_{j+1;0}$; therefore this function is orthogonal to $P_{j+1;1}(\lambda)$, and the integral (4.3.36) is equal to zero. Similarly, using the third inclusion from (4.3.33), we get that $p_{j,j+1;1,1}=0$. The remaining equalities in (4.3.35) are valid due to the symmetry of $K$. □

The analog of Lemma 4.3.4 is the following assertion.

Lemma 4.3.10 The elements
$$p_{0,1;0,0},\ p_{1,0;0,0},\qquad p_{j,j+1;0,0},\ p_{j+1,j;0,0},\quad j\in\mathbb N, \tag{4.3.37}$$
of the matrices $(p_{j,k})_{j,k=0}^{\infty}$ are always positive.

Proof As before, we start from $p_{1,0;0,0}$. Denote by $P'_{1;0}(\lambda)$ the non-normalized vector $P_{1;0}(\lambda)$ obtained from the orthogonalization procedure, i.e., $P'_{1;0}(\lambda)=\lambda^{-1}-(\lambda^{-1},1)_{L_2}$. Therefore, as in (4.3.18), due to the Parseval equality for $\lambda^{-1}$ we have
$$p_{1,0;0,0}=\int_{\mathbb R}\lambda^{-1}P_{1;0}(\lambda)\,d\rho(\lambda)=\|P'_{1;0}(\lambda)\|^{-1}_{L_2}\int_{\mathbb R}\lambda^{-1}\bigl(\lambda^{-1}-(\lambda^{-1},1)_{L_2}\bigr)\,d\rho(\lambda)$$
$$=\|P'_{1;0}(\lambda)\|^{-1}_{L_2}\bigl(\|\lambda^{-1}\|^2_{L_2}-|(\lambda^{-1},1)_{L_2}|^2\bigr)>0.$$

As in (4.3.19), consider the fourth element in (4.3.37). We have
$$p_{j+1,j;0,0}=\int_{\mathbb R}\lambda^{-1}P_{j;0}(\lambda)P_{j+1;0}(\lambda)\,d\rho(\lambda),\qquad j\in\mathbb N. \tag{4.3.38}$$
According to (4.3.4) and (4.3.6), we have
$$P_{j;0}(\lambda)=k_{j;0}\lambda^{-j}+R_{j-1;1}(\lambda), \tag{4.3.39}$$
where $R_{j-1;1}(\lambda)$ is some polynomial from $\mathcal P_{j-1;1}$ and $k_{j;0}>0$. Multiplying (4.3.39) by $\lambda^{-1}$, we get
$$\lambda^{-1}P_{j;0}(\lambda)=k_{j;0}\lambda^{-(j+1)}+\lambda^{-1}R_{j-1;1}(\lambda), \tag{4.3.40}$$
where $\lambda^{-1}R_{j-1;1}(\lambda)\in\mathcal P_{j;0}$ by the third inclusion in (4.3.33). Similarly to (4.3.39), we have
$$P_{j+1;0}(\lambda)=k_{j+1;0}\lambda^{-(j+1)}+R_{j;1}(\lambda),\qquad R_{j;1}(\lambda)\in\mathcal P_{j;1},\quad k_{j+1;0}>0. \tag{4.3.41}$$

Expressing $\lambda^{-(j+1)}$ from (4.3.41) and substituting it into (4.3.40), we obtain
$$\lambda^{-1}P_{j;0}(\lambda)=\frac{k_{j;0}}{k_{j+1;0}}\bigl(P_{j+1;0}(\lambda)-R_{j;1}(\lambda)\bigr)+\lambda^{-1}R_{j-1;1}(\lambda)
=\frac{k_{j;0}}{k_{j+1;0}}P_{j+1;0}(\lambda)-\frac{k_{j;0}}{k_{j+1;0}}R_{j;1}(\lambda)+\lambda^{-1}R_{j-1;1}(\lambda). \tag{4.3.42}$$
In this expression the last two terms belong to $\mathcal P_{j;1}$ and $\mathcal P_{j;0}$, respectively, and are orthogonal to $P_{j+1;0}(\lambda)$. After the substitution of (4.3.42) into (4.3.38) we get $p_{j+1,j;0,0}>0$. The positivity of the remaining elements from (4.3.37) follows from the symmetry of $K$. □

The results of the last considerations can be formulated as a theorem.


Theorem 4.3.11 Let the measure $d\rho(\lambda)$ be such that the operator $\hat A$ of multiplication by $\lambda$ is self-adjoint and invertible in $L_2$. Then the bounded inverse operator $\hat A^{-1}$ (the operator of multiplication by $\lambda^{-1}$) is generated in the space $l_2$ (4.3.25) by the tri-diagonal block Jacobi type symmetric matrix $J^{-1}=K$ of a form similar to (4.3.26), namely,
$$K=(p_{j,k})_{j,k=0}^{\infty},\qquad p_n:=p_{n+1,n},\quad q_n:=p_{n,n},\quad r_n:=p_{n,n+1},\quad n\in\mathbb N_0;$$
$$K=\begin{bmatrix}
q_0 & r_0 & 0 & 0 & \cdots\\
p_0 & q_1 & r_1 & 0 & \cdots\\
0 & p_1 & q_2 & r_2 & \cdots\\
0 & 0 & p_2 & q_3 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix},\qquad
p_0=\begin{bmatrix}+\\ 0\end{bmatrix},\quad
r_0=\begin{bmatrix}+ & 0\end{bmatrix},\quad
p_n=\begin{bmatrix}+ & *\\ 0 & 0\end{bmatrix},\quad
r_n=\begin{bmatrix}+ & 0\\ * & 0\end{bmatrix},\ n\in\mathbb N, \tag{4.3.43}$$

where
$$p_{0;1,0}=r_{0;0,1}=0,\qquad p_{n;1,0}=p_{n;1,1}=r_{n;0,1}=r_{n;1,1}=0,\quad n\in\mathbb N,\qquad p_{n;0,0},\ r_{n;0,0}>0,\quad n\in\mathbb N_0. \tag{4.3.44}$$
The action of the operator is defined by the expression
$$(J^{-1}f)_n=(Kf)_n=p_{n-1}f_{n-1}+q_nf_n+r_nf_{n+1},\qquad f_{-1}:=0,\quad n\in\mathbb N_0,\quad f\in l_2. \tag{4.3.45}$$
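In the finite discrete-measure model of the earlier sketches (my own construction, not the book's), multiplication by $\lambda$ and multiplication by $\lambda^{-1}$ are exact inverses of each other, and both matrices are five-diagonal in scalar form away from the truncation boundary, illustrating Theorem 4.3.11:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9
lam = np.linspace(0.5, 2.0, N)        # supp(d rho), 0 not in the support
w = rng.uniform(0.5, 1.5, N)

powers = [0]
k = 1
while len(powers) < N:
    powers += [-k, k]
    k += 1
V = np.column_stack([lam**p for p in powers[:N]])
Q, R = np.linalg.qr(np.diag(np.sqrt(w)) @ V)
P = (np.diag(1.0 / np.sqrt(w)) @ Q) * np.sign(np.diag(R))

M = P.T @ np.diag(w * lam) @ P        # J: multiplication by lam
K = P.T @ np.diag(w / lam) @ P        # K: multiplication by lam^{-1}

# K is the exact inverse of J on this finite model space
assert np.allclose(K @ M, np.eye(N), atol=1e-7)

# K is also five-diagonal in scalar form, away from the truncation boundary
n = N - 3
mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) > 2
assert np.abs(K[:n, :n][mask]).max() < 1e-7
```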

We will now consider a simple but, for us, essential generalization of the last results. Namely, we will assume that the operator $\hat A$ of multiplication by $\lambda$ is self-adjoint in $L_2$, but a bounded $\hat A^{-1}$ possibly does not exist. Recall that all the functions $\mathbb R\ni\lambda\longmapsto\lambda^{m}$, $m\in\mathbb Z$, belong to $L_2$ (according to the above). Therefore the operator of multiplication by $\lambda^{-1}$ in $L_2$ exists and is defined on the functions $F$ from $L_2$ for which $\lambda^{-1}F(\lambda)\in L_2$. The set of such functions is linear and, of course, dense in $L_2$. This operator is (algebraically) inverse to $\hat A$; we denote it also by $\hat A^{-1}$. Thus we can formulate the following assertion. Consider the general situation: all the functions (4.3.30) belong to $L_2$ and the corresponding operator $\hat A$ of multiplication by $\lambda$ is self-adjoint. Then, for $\hat A$, the algebraically inverse operator $\hat A^{-1}$ exists on a dense domain $\mathrm{Dom}(\hat A^{-1})\supset\mathrm{Ran}(\hat A)$ in $L_2$:
$$(\hat A^{-1}\hat AF)(\lambda)=\lambda^{-1}(\lambda F(\lambda))=F(\lambda),\quad F\in\mathrm{Dom}(\hat A),\qquad \mathrm{Dom}(\hat A^{-1})=\{G\in L_2\mid \lambda^{-1}G(\lambda)\in L_2\}. \tag{4.3.46}$$

Theorem 4.3.12 Consider the Laurent polynomial basis (4.3.4) in $L_2$ and transfer this space into $l_2$. The algebraically inverse operator $\hat A^{-1}$ can be rewritten as an operator in $l_2$ generated by the matrix $K$ of the form (4.3.43) on the set $l_{\mathrm{fin}}$. Such a matrix has the properties (4.3.44) and acts according to the rule (4.3.45). We call it the algebraically inverse matrix $J^{-1}=K$ to $J$; $J^{-1}Jf=f=JJ^{-1}f$, $f\in l_{\mathrm{fin}}$.

Proof At first we note that every Laurent polynomial belongs to $\mathrm{Dom}(\hat A^{-1})$. Indeed, such a polynomial is a linear combination of a finite number of the functions $\lambda^{m}$, $m\in\mathbb Z$. But according to the definition of $\mathrm{Dom}(\hat A^{-1})$ in (4.3.46), $\forall m\in\mathbb Z$ we have $\lambda^{m}\in\mathrm{Dom}(\hat A^{-1})$, since $\lambda^{-1}\lambda^{m}\in L_2$. Therefore we can construct the matrix $K$ of the type (4.3.10) of the operator $\hat A^{-1}$ using the polynomials (4.3.4). This matrix has the structure (4.3.43) with the properties (4.3.44), since we can repeat for this matrix Lemmas 4.3.8, 4.3.9, and 4.3.10: all integrals of the type (4.3.34) now exist. The last equality in the formulation of the theorem follows from (4.3.46) and, for example, from the remarks connected with (4.3.31). □

4.4 Direct and Inverse Spectral Problems Corresponding to Tri-Diagonal Block Jacobi-Laurent Matrices Generating Self-Adjoint Operators

In this section we consider the operator $\mathcal J$ in the space $l_2$ (4.3.1), (4.3.25), generated by the matrix $J$ (4.3.26) with the conditions (4.3.27), (4.3.28) on its elements noted in Theorem 4.3.5. Moreover, we will demand that the algebraically inverse matrix $J^{-1}$ exists and satisfies the conditions of Theorem 4.3.11. At first we recall some general facts concerning a rigging in the case of the space $l_2$ and the generalized eigenfunction expansion for self-adjoint operators acting in this space.

In addition to the space $l_2$, we consider its rigging
$$(l_{\mathrm{fin}})'\supset l_2(p^{-1})\supset l_2\supset l_2(p)\supset l_{\mathrm{fin}}, \tag{4.4.1}$$
where $l_2(p)$ is the weighted $l_2$-space with a weight $p=(p_n)_{n=0}^{\infty}$, $p_n\ge 1$ (and $p^{-1}=(p_n^{-1})_{n=0}^{\infty}$). In our case $l_2(p)$ is the Hilbert space of sequences $f=(f_n)$, $f_n\in H_n$,


for which we have
$$\|f\|^2_{l_2(p)}=\sum_{n=0}^{\infty}\|f_n\|^2_{H_n}p_n,\qquad (f,g)_{l_2(p)}=\sum_{n=0}^{\infty}(f_n,g_n)_{H_n}p_n.$$
The space $l_2(p^{-1})$ is defined similarly; recall that $l_{\mathrm{fin}}$ is the space of finite sequences and $(l_{\mathrm{fin}})'$ is the space conjugate to $l_{\mathrm{fin}}$, equal to the space $l$ of all sequences $f=(f_n)_{n=0}^{\infty}$, $f_n\in H_n$. It is easy to show that the embedding $l_2(p)\hookrightarrow l_2$ is quasi-nuclear if $\sum_{n=0}^{\infty}p_n^{-1}<\infty$.

Let $A$ be an arbitrary self-adjoint operator standardly connected with the chain (4.4.1). According to the projection spectral theorem (see Sect. 4.2), such an operator has the representation
$$Af=\int_{\mathbb R}\lambda\,\Phi(\lambda)\,d\sigma(\lambda)f,\qquad f\in l_2, \tag{4.4.2}$$
where $\Phi(\lambda)\colon l_2(p)\longrightarrow l_2(p^{-1})$ is the operator of generalized projection and $d\sigma(\lambda)$ is the spectral measure. For every $f\in l_{\mathrm{fin}}$ the projection $\Phi(\lambda)f\in l_2(p^{-1})$ is a generalized eigenvector of the operator $A$ with the corresponding eigenvalue $\lambda$. For all $f,g\in l_{\mathrm{fin}}$ we have the Parseval equality
$$(f,g)_{l_2}=\int_{\mathbb R}(\Phi(\lambda)f,g)_{l_2}\,d\sigma(\lambda); \tag{4.4.3}$$

after extending by continuity, the equality (4.4.3) holds true $\forall f,g\in l_2$.

Let us denote by $\pi_n$ the operator of orthogonal projection in $l_2$ onto $H_n$, $n\in\mathbb N_0$. Hence, $\forall f=(f_n)\in l_2$, we have $f_n=\pi_nf$. This operator acts similarly on the spaces $l_2(p)$ and $l_2(p^{-1})$ (but possibly with a norm not equal to one). Let us consider the operator matrix $(\Phi_{j,k}(\lambda))_{j,k=0}^{\infty}$, where
$$\Phi_{j,k}(\lambda)=\pi_j\Phi(\lambda)\pi_k\colon l_2\longrightarrow H_j\quad(\text{or }H_k\longrightarrow H_j). \tag{4.4.4}$$

Now we can rewrite the Parseval equality (4.4.3) in the form
$$(f,g)_{l_2}=\sum_{j,k=0}^{\infty}\int_{\mathbb R}(\Phi(\lambda)\pi_kf,\pi_jg)_{l_2}\,d\sigma(\lambda)
=\sum_{j,k=0}^{\infty}\int_{\mathbb R}(\pi_j\Phi(\lambda)\pi_kf,g)_{l_2}\,d\sigma(\lambda)
=\sum_{j,k=0}^{\infty}\int_{\mathbb R}(\Phi_{j,k}(\lambda)f_k,g_j)_{l_2}\,d\sigma(\lambda),\qquad \forall f,g\in l_2. \tag{4.4.5}$$


Let us now pass to the study of a more special self-adjoint operator $A$ acting in the space $l_2$. Namely, let $A=\mathcal J$, where $\mathcal J$ is the closed operator generated in the space $l_2$ by the matrix (4.3.26) with the conditions (4.3.27), (4.3.28) by the rule
$$l_2\supset l_{\mathrm{fin}}\ni f\longmapsto \mathcal Jf:=Jf\in l_2. \tag{4.4.6}$$
We will also assume that $\mathcal J$ is self-adjoint. From (4.4.6) and (4.3.29) it is easy to conclude that our operator $\mathcal J$ is standardly connected with the chain (4.4.1). Thus the above results of the type (4.4.2)-(4.4.5) can be applied to the operator $\mathcal J$. Additionally, we will demand that for the matrix $J$ there exists its algebraically inverse matrix $J^{-1}$, which satisfies the conditions (4.3.44), (4.3.45) of Theorem 4.3.11. Conditions on the coefficients of the matrix $J$ under which it has an inverse are given in the next sections. The existence of such a $J^{-1}$ is a very essential condition. Such a matrix $J$ will also be called a (self-adjoint) Jacobi-Laurent matrix.

Our first aim is to rewrite the Parseval equality (4.4.5) for our $A=\mathcal J$ in terms of generalized eigenvectors of $\mathcal J$. We prove two essential lemmas.

Lemma 4.4.1 Let $\varphi(\lambda)=(\varphi_n(\lambda))_{n=0}^{\infty}$, $\varphi_n(\lambda)\in H_n$, $\lambda\in\mathbb R$, be a generalized eigenvector from $(l_{\mathrm{fin}})'$ of the operator $\mathcal J$ constructed from the self-adjoint Jacobi-Laurent matrix $J$. We assert that $\varphi(\lambda)$, $\forall\lambda\in\mathbb R$, is a solution in $(l_{\mathrm{fin}})'$ of the difference equation
$$(J\varphi(\lambda))_n=a_{n-1}\varphi_{n-1}(\lambda)+b_n\varphi_n(\lambda)+c_n\varphi_{n+1}(\lambda)=\lambda\varphi_n(\lambda),\quad n\in\mathbb N_0,\qquad \varphi_{-1}(\lambda)=0, \tag{4.4.7}$$

with the initial condition $\varphi_0(\lambda)=\varphi_0$, and has the representation
$$\varphi_n(\lambda)=S_n(\lambda)\varphi_0;\qquad S_0(\lambda)=1,\quad S_n(\lambda)=(S_{n;0}(\lambda),S_{n;1}(\lambda)),\quad n\in\mathbb N. \tag{4.4.8}$$
Here $S_{n;\alpha}$, $\alpha=0,1$, are Laurent polynomials in the variables $\lambda,\lambda^{-1}$, and these polynomials have the form
$$S_{n;\alpha}(\lambda)=l_{n;\alpha}\lambda^{(-1)^{\alpha+1}n}+w_{n;\alpha}(\lambda),\qquad n\in\mathbb N,\quad \alpha=0,1. \tag{4.4.9}$$
In (4.4.9), $l_{n;\alpha}>0$ and $w_{n;\alpha}(\lambda)$ is some linear combination with real coefficients of the $\lambda^{j}$, $j\in\{0,-1,1,-2,2,\ldots,-(n-1),-\alpha n\}$; i.e., it belongs to $\mathcal P_{n-1;1}$ if $\alpha=0$ and to $\mathcal P_{n;0}$ if $\alpha=1$.

Proof At first we recall that, by definition, $\varphi(\lambda)\in(l_{\mathrm{fin}})'=l$ is a generalized eigenvector with eigenvalue $\lambda$ for the operator $\mathcal J$ standardly connected with the rigging (4.4.1) if the equality
$$(\varphi(\lambda),\mathcal Jf)_{l_2}=(\varphi(\lambda),Jf)_{l_2}=\lambda(\varphi(\lambda),f)_{l_2},\qquad f\in l_{\mathrm{fin}}, \tag{4.4.10}$$
holds true.


By using (4.3.29) and the arbitrariness of $f$, we conclude from (4.4.10) that
$$(J\varphi(\lambda))_n=a_{n-1}\varphi_{n-1}(\lambda)+b_n\varphi_n(\lambda)+c_n\varphi_{n+1}(\lambda)=\lambda\varphi_n(\lambda),\quad n\in\mathbb N_0,\qquad \varphi_{-1}(\lambda)=0. \tag{4.4.11}$$

For the matrix $J^{-1}$ (4.3.43) we also have a similar equality. Namely, using Theorem 4.3.12 (the equality $JJ^{-1}f=f$, $f\in l_{\mathrm{fin}}$), we get
$$(\varphi(\lambda),f)_{l_2}=(\varphi(\lambda),JJ^{-1}f)_{l_2}=\lambda(\varphi(\lambda),J^{-1}f)_{l_2},\quad\text{i.e.,}\quad (\varphi(\lambda),J^{-1}f)_{l_2}=\lambda^{-1}(\varphi(\lambda),f)_{l_2},\qquad \forall f\in l_{\mathrm{fin}}. \tag{4.4.12}$$

Let us explain: the matrix $J^{-1}$ is tri-diagonal; therefore $J^{-1}f\in l_{\mathrm{fin}}$, and the second equality in (4.4.12) follows from (4.4.10). Similarly to (4.4.11), the last equality in (4.4.12) and (4.3.43) give
$$(J^{-1}\varphi(\lambda))_n=p_{n-1}\varphi_{n-1}(\lambda)+q_n\varphi_n(\lambda)+r_n\varphi_{n+1}(\lambda)=\lambda^{-1}\varphi_n(\lambda),\quad n\in\mathbb N_0,\qquad \varphi_{-1}(\lambda)=0,\quad \forall\lambda\in\mathbb R\setminus\{0\}. \tag{4.4.13}$$
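Equalities (4.4.11) and (4.4.13) say that a generalized eigenvector of $J$ with eigenvalue $\lambda$ is simultaneously an eigenvector of $J^{-1}$ with eigenvalue $\lambda^{-1}$. In the finite discrete-measure model of the earlier sketches (my own illustration) the vector $\varphi_s(\lambda_i)=P_s(\lambda_i)$, built from the values of the orthonormal Laurent polynomials at a support point, realizes both relations exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 9
lam = np.linspace(0.5, 2.0, N)
w = rng.uniform(0.5, 1.5, N)

powers = [0]
k = 1
while len(powers) < N:
    powers += [-k, k]
    k += 1
V = np.column_stack([lam**p for p in powers[:N]])
Q, R = np.linalg.qr(np.diag(np.sqrt(w)) @ V)
P = (np.diag(1.0 / np.sqrt(w)) @ Q) * np.sign(np.diag(R))

M = P.T @ np.diag(w * lam) @ P        # J
K = P.T @ np.diag(w / lam) @ P        # J^{-1}

i = 4
phi = P[i, :]                         # phi_s = P_s(lam_i): values of the basis at lam_i

assert np.allclose(M @ phi, lam[i] * phi, atol=1e-7)   # (4.4.11)
assert np.allclose(K @ phi, phi / lam[i], atol=1e-7)   # (4.4.13)
```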

First we give some explanations for the further transformations and calculations. In addition to the two equalities (4.4.11) and (4.4.13), we have
$$((J+J^{-1})\varphi(\lambda))_n=(\lambda+\lambda^{-1})\varphi_n(\lambda),\qquad n\in\mathbb N_0,\quad \varphi_{-1}(\lambda)=0. \tag{4.4.14}$$

The matrix $J+J^{-1}$ is also block tri-diagonal, acting in the space $l_2$. But from (4.3.26)-(4.3.28) and (4.3.43), (4.3.44) we see that, for $n\in\mathbb N$, its blocks on the two side diagonals are invertible $2\times 2$ matrices. Such a form of the matrix $J+J^{-1}$ shows that, similarly to the case of classical Jacobi matrices, we can find from (4.4.14), step by step, a generalized eigenvector $\varphi(\lambda)=(\varphi_n(\lambda))_{n=0}^{\infty}$, and, $\forall n\in\mathbb N$, $\varphi_n(\lambda)$ is a polynomial with respect to $\lambda+\lambda^{-1}$, i.e., a Laurent polynomial. But it is essential for us to obtain for every $\varphi_n(\lambda)$ the exact representation (4.4.8), (4.4.9). Therefore we give below a more precise calculation.

Consider the equalities (4.4.11), (4.4.13) for $n=0$. We have
$$b_0\varphi_0+c_0\varphi_1(\lambda)=\lambda\varphi_0,\qquad q_0\varphi_0+r_0\varphi_1(\lambda)=\lambda^{-1}\varphi_0,$$
i.e.,
$$c_{0;0,0}\varphi_{1;0}(\lambda)+c_{0;0,1}\varphi_{1;1}(\lambda)=(\lambda-b_0)\varphi_0,\qquad r_{0;0,0}\varphi_{1;0}(\lambda)+r_{0;0,1}\varphi_{1;1}(\lambda)=(\lambda^{-1}-q_0)\varphi_0.$$


The last two equalities can be regarded as a linear system of equations with respect to the unknowns $\varphi_{1;0}(\lambda)$, $\varphi_{1;1}(\lambda)$, with $\varphi_0\in\mathbb C$ given. According to (4.3.26), (4.3.43), we have for the matrix of this system and its solution
$$D_1=\begin{bmatrix}c_{0;0,0} & c_{0;0,1}\\ r_{0;0,0} & r_{0;0,1}\end{bmatrix}=\begin{bmatrix}* & +\\ + & 0\end{bmatrix},\qquad \Delta_1=\det D_1=-r_{0;0,0}c_{0;0,1}<0,$$
$$\varphi_{1;0}(\lambda)=\Delta_1^{-1}\begin{vmatrix}(\lambda-b_0)\varphi_0 & c_{0;0,1}\\ (\lambda^{-1}-q_0)\varphi_0 & r_{0;0,1}\end{vmatrix},\qquad
\varphi_{1;1}(\lambda)=\Delta_1^{-1}\begin{vmatrix}c_{0;0,0} & (\lambda-b_0)\varphi_0\\ r_{0;0,0} & (\lambda^{-1}-q_0)\varphi_0\end{vmatrix}. \tag{4.4.15}$$

It is clear that these two functions have the required form (4.4.8), (4.4.9). Let $n\in\mathbb N$. Taking the equality (4.4.13) for the coordinate "0" and the equality (4.4.11) for the coordinate "1", we get
$$(p_{n-1}\varphi_{n-1}(\lambda))_0+(q_n\varphi_n(\lambda))_0+(r_n\varphi_{n+1}(\lambda))_0=\lambda^{-1}\varphi_{n;0}(\lambda),$$
$$(a_{n-1}\varphi_{n-1}(\lambda))_1+(b_n\varphi_n(\lambda))_1+(c_n\varphi_{n+1}(\lambda))_1=\lambda\varphi_{n;1}(\lambda). \tag{4.4.16}$$

We can rewrite the equalities (4.4.16) in the form
$$\begin{bmatrix}r_{n;0,0} & r_{n;0,1}\\ c_{n;1,0} & c_{n;1,1}\end{bmatrix}\varphi_{n+1}(\lambda)=\begin{bmatrix}+ & 0\\ * & +\end{bmatrix}\varphi_{n+1}(\lambda)
=\bigl(\lambda^{-1}\varphi_{n;0}(\lambda)-(p_{n-1}\varphi_{n-1}(\lambda))_0-(q_n\varphi_n(\lambda))_0,\ \lambda\varphi_{n;1}(\lambda)-(a_{n-1}\varphi_{n-1}(\lambda))_1-(b_n\varphi_n(\lambda))_1\bigr), \tag{4.4.17}$$
i.e.,
$$\varphi_{n+1;0}(\lambda)=\frac{1}{r_{n;0,0}}\bigl(\lambda^{-1}\varphi_{n;0}(\lambda)-(p_{n-1}\varphi_{n-1}(\lambda))_0-(q_n\varphi_n(\lambda))_0\bigr),$$
$$\varphi_{n+1;1}(\lambda)=\frac{1}{c_{n;1,1}}\bigl(\lambda\varphi_{n;1}(\lambda)-(a_{n-1}\varphi_{n-1}(\lambda))_1-(b_n\varphi_n(\lambda))_1-c_{n;1,0}\varphi_{n+1;0}(\lambda)\bigr),\qquad n\in\mathbb N.$$

Now we apply induction: according to (4.4.15), $\varphi_1(\lambda)$ has the form (4.4.8), (4.4.9), and the same is true for $\varphi_0$. Suppose that for $n\in\mathbb N$ the vectors $\varphi_{n-1}(\lambda)$, $\varphi_n(\lambda)$ have the required form (4.4.8), (4.4.9). Then from the first equality in (4.4.17) it is easy to see that $\varphi_{n+1;0}(\lambda)$ has the form (4.4.8), (4.4.9). The last equality in (4.4.17) shows that the same holds for $\varphi_{n+1;1}(\lambda)$. □


In what follows it will be convenient to regard $S_n(\lambda)$, $\forall n\in\mathbb N_0$, with $\lambda$ fixed, as a linear operator which acts, $\forall n\in\mathbb N$, from $\mathbb R^{1}$ into $\mathbb R^{2}$, i.e., $\mathbb R^{1}\ni\varphi_0\longmapsto S_n(\lambda)\varphi_0\in\mathbb R^{2}$, and into $\mathbb R^{1}$ if $n=0$. This operator is extended in the standard way to the corresponding complex spaces. As a result we can write
$$H_0\ni\varphi_0\longmapsto S_n(\lambda)\varphi_0\in H_n,\qquad S_n^{*}(\lambda)=(S_n(\lambda))^{*}\colon H_n\longrightarrow H_0,\qquad n\in\mathbb N_0,\quad \lambda\in\mathbb R. \tag{4.4.18}$$

We also regard $S_n(\lambda)$ as a vector-valued Laurent polynomial of $\lambda\in\mathbb R$ with real coefficients. Using these polynomials $S_n(\lambda)$, we construct a representation for the operators $\Phi_{j,k}(\lambda)$ introduced in (4.4.4).

Lemma 4.4.2 The operator $\Phi_{j,k}(\lambda)$, $\forall\lambda\in\mathbb R$, has the representation
$$\Phi_{j,k}(\lambda)=S_j(\lambda)\Phi_{0,0}(\lambda)S_k^{*}(\lambda)\colon H_k\longrightarrow H_j,\qquad j,k\in\mathbb N_0, \tag{4.4.19}$$

where $\Phi_{0,0}(\lambda)\ge 0$ is a scalar.

Proof For a fixed $k\in\mathbb N_0$ and an arbitrary fixed $x\in H_k\subset l_2$, the vector $\varphi(\lambda)=(\varphi_j(\lambda))_{j=0}^{\infty}$, where
$$\varphi_j(\lambda)=\Phi_{j,k}(\lambda)x=\pi_j\Phi(\lambda)\pi_kx\in H_j,\qquad \lambda\in\mathbb R,$$
is a generalized solution in the space $(l_{\mathrm{fin}})'$ of the equation $J\varphi(\lambda)=\lambda\varphi(\lambda)$, since $\Phi(\lambda)$ is a projection operator onto generalized eigenvectors of the operator $A$ with the corresponding eigenvalue $\lambda$. Therefore, $\forall g\in l_{\mathrm{fin}}$, we have $(\varphi,Jg)_{l_2}=\lambda(\varphi,g)_{l_2}$. Hence it follows that $\varphi=\varphi(\lambda)\in l_2(p^{-1})$ exists as a usual solution of the equation $J\varphi(\lambda)=\lambda\varphi(\lambda)$ with the initial condition $\varphi_0(\lambda)=\pi_0\Phi(\lambda)\pi_kx\in H_0$. By using Lemma 4.4.1 and (4.4.8), we get
$$\Phi_{j,k}(\lambda)x=S_j(\lambda)(\Phi_{0,k}(\lambda)x),\quad\text{i.e.,}\quad \Phi_{j,k}(\lambda)=S_j(\lambda)\Phi_{0,k}(\lambda),\qquad j\in\mathbb N_0. \tag{4.4.20}$$

The operator $\Phi(\lambda)\colon l_2(p)\longrightarrow l_2(p^{-1})$ is formally self-adjoint in $l_2$. Hence, according to (4.4.4), we get
$$(\Phi_{j,k}(\lambda))^{*}=(\pi_j\Phi(\lambda)\pi_k)^{*}=\pi_k\Phi(\lambda)\pi_j=\Phi_{k,j}(\lambda),\qquad j,k\in\mathbb N_0. \tag{4.4.21}$$

For a fixed $j\in\mathbb N_0$, from (4.4.21) and the previous discussion it follows that the vector
$$\psi(\lambda)=(\psi_k(\lambda))_{k=0}^{\infty},\qquad \psi_k(\lambda)=\Phi_{k,j}(\lambda)y=(\Phi_{j,k}(\lambda))^{*}y,\qquad y\in H_j,$$
is a usual solution of the equation $J\psi(\lambda)=\lambda\psi(\lambda)$ with the initial condition
$$\psi_0(\lambda)=\Phi_{0,j}(\lambda)y=(\Phi_{j,0}(\lambda))^{*}y.$$


Again using Lemma 4.4.1 and the arbitrariness of $y$, we obtain a representation of the form (4.4.20):
$$\Phi_{k,j}(\lambda)=S_k(\lambda)\Phi_{0,j}(\lambda),\qquad k\in\mathbb N_0. \tag{4.4.22}$$

Taking into account (4.4.3) and (4.4.22), we get o0,k (λ) = (ok,0 (λ))∗ = (Sk (λ)o0,0 (λ))∗ = o0,0 (λ)(Sk (λ))∗ ,

.

k ∈ N0 . (4.4.23)

Here we used o_{0,0}(λ) ≥ 0; this inequality follows from (4.4.3) and (4.4.4). Substituting (4.4.23) into (4.4.20), we get (4.4.19). □

Now we can write the Parseval equality (4.4.5) in a concrete form. To this end, we substitute the expression (4.4.19) for o_{j,k}(λ) into (4.4.5) and get

    (f, g)_{l_2} = Σ_{j,k=0}^∞ ∫_R (o_{j,k}(λ) f_k, g_j)_{l_2} dσ(λ)
                 = Σ_{j,k=0}^∞ ∫_R (S_j(λ) o_{0,0}(λ) S_k^*(λ) f_k, g_j)_{l_2} dσ(λ)
                 = Σ_{j,k=0}^∞ ∫_R (S_k^*(λ) f_k, S_j^*(λ) g_j)_{l_2} dρ(λ)    (4.4.24)
                 = ∫_R ( Σ_{k=0}^∞ S_k^*(λ) f_k ) ( Σ_{j=0}^∞ S_j^*(λ) g_j ) dρ(λ),    ∀f, g ∈ l_fin,

    dρ(λ) = o_{0,0}(λ) dσ(λ).

Introduce the Fourier transform …

Consider a_{0,1;0,1}. Denote by P'_{1;1}(z) the non-normalized vector obtained from the Gram–Schmidt orthogonalization procedure. According to (5.1.5) and (5.1.6), we get

    P'_{1;1}(z) = z̄ − (z̄, P_{1;0}(z))_{L_2} P_{1;0}(z) − (z̄, 1)_{L_2}.

Therefore, using (5.1.15), we get

    a_{0,1;0,1} = ∫_C z P_{1;1}(z) dρ(z) = ‖P'_{1;1}(z)‖_{L_2}^{−1} ∫_C z P'_{1;1}(z) dρ(z)
                = ‖P'_{1;1}(z)‖_{L_2}^{−1} ∫_C z (z̄ − (z̄, P_{1;0}(z))_{L_2} P_{1;0}(z) − (z̄, 1)_{L_2}) dρ(z)
                = ‖P'_{1;1}(z)‖_{L_2}^{−1} (‖z̄‖_{L_2}^2 − |(z̄, P_{1;0}(z))_{L_2}|^2 − |(z̄, 1)_{L_2}|^2).    (5.1.22)

Also using (5.1.23), we conclude that the last expression is positive and, therefore, a_{0,1;0,1} > 0.

The positivity in (5.1.21) and (5.1.22) follows from the Parseval equality for the decomposition of the function z̄ ∈ L_2 with respect to the orthonormal basis (5.1.6) in the space L_2, namely,

    |(z̄, 1)_{L_2}|^2 + |(z̄, P_{1;0}(z))_{L_2}|^2 + |(z̄, P_{1;1}(z))_{L_2}|^2 + ⋯ = ‖z̄‖_{L_2}^2,    (5.1.23)

where P_{0;0}(z) = 1.

Let us now pass to showing the positivity of a_{j+1,j;α,α}, where j ∈ N, α = 0, 1, …, j. From (5.1.15) we have

    a_{j+1,j;α,α} = ∫_C z P_{j;α}(z) \overline{P_{j+1;α}(z)} dρ(z).    (5.1.24)

According to (5.1.6) and (5.1.8),

    P_{j;α}(z) = k_{j;α} z^{j−α} z̄^{α} + R_{j;α}(z),    (5.1.25)

where R_{j;α}(z) is some polynomial from P_{j;α−1} if α > 0, or from P_{j−1;j−1} if α = 0. Therefore, z R_{j;α}(z) is some polynomial from P_{j+1;α−1}, or from P_{j;j−1} (see (5.1.14) and (5.1.8)). Multiplying (5.1.25) by z, we conclude that

    z P_{j;α}(z) = k_{j;α} z^{j+1−α} z̄^{α} + z R_{j;α}(z);    z R_{j;α}(z) ∈ P_{j+1;α−1} or P_{j;j−1} ⊂ P_{j;j}.    (5.1.26)

On the other hand, the equality (5.1.25) for P_{j+1;α}(z) gives

    P_{j+1;α}(z) = k_{j+1;α} z^{j+1−α} z̄^{α} + R_{j+1;α}(z);    R_{j+1;α}(z) ∈ P_{j+1;α−1} or P_{j;j}.    (5.1.27)

Take z^{j+1−α} z̄^{α} from (5.1.27) and substitute it into (5.1.26). Hence, we get

    z P_{j;α}(z) = (k_{j;α}/k_{j+1;α}) (P_{j+1;α}(z) − R_{j+1;α}(z)) + z R_{j;α}(z)
                 = (k_{j;α}/k_{j+1;α}) P_{j+1;α}(z) − (k_{j;α}/k_{j+1;α}) R_{j+1;α}(z) + z R_{j;α}(z),    (5.1.28)

where the sum of the last two terms belongs to P_{j+1;α−1} or to P_{j;j} and in either case is orthogonal to P_{j+1;α}(z). Therefore, after substituting the expression (5.1.28) into (5.1.24), we get

    a_{j+1,j;α,α} = k_{j;α}/k_{j+1;α} > 0.
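The Gram–Schmidt computation (5.1.22) and the Parseval argument (5.1.23) behind these positivity claims can be illustrated numerically. The sketch below assumes a discrete stand-in for the measure ρ (equal point masses at randomly chosen points of C, so the L_2 inner products become finite sums); the helper names (`ip`, `P10`, `P11`, and so on) are ours, not the book's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete stand-in for the measure rho: equal point masses at random points of C.
pts = rng.normal(size=20) + 1j * rng.normal(size=20)
w = np.full(pts.size, 1.0 / pts.size)          # weights, summing to 1

def ip(f, g):
    """L2(rho) inner product (f, g) = sum of w * f * conj(g) over the point masses."""
    return np.sum(w * f * np.conj(g))

one = np.ones_like(pts)
z, zbar = pts, np.conj(pts)

# Gram-Schmidt in the order used in (5.1.6): P00 = 1, then P10 from z, then P11 from zbar.
P00 = one / np.sqrt(ip(one, one).real)
P10p = z - ip(z, P00) * P00
P10 = P10p / np.sqrt(ip(P10p, P10p).real)
P11p = zbar - ip(zbar, P10) * P10 - ip(zbar, P00) * P00
P11 = P11p / np.sqrt(ip(P11p, P11p).real)

# a_{0,1;0,1} = integral of z * P_{1;1}(z) drho(z), cf. (5.1.22).
a_0101 = ip(z * P11, one)

# The same number via the Parseval-type expression on the right of (5.1.22).
norm_P11p = np.sqrt(ip(P11p, P11p).real)
parseval = (ip(zbar, zbar).real
            - abs(ip(zbar, P10))**2
            - abs(ip(zbar, P00))**2) / norm_P11p

print(a_0101, parseval)
```

For a generic point configuration, z̄ does not lie in the span of 1 and z, so both computed values agree, are real up to rounding, and are strictly positive, matching a_{0,1;0,1} > 0.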

Consider, at last, the elements a_{j,j+1;α,α+1}, where j ∈ N, α = 0, 1, …, j. From (5.1.15), we get

    a_{j,j+1;α,α+1} = ∫_C z P_{j+1;α+1}(z) \overline{P_{j;α}(z)} dρ(z) = ∫_C z̄ P_{j;α}(z) \overline{P_{j+1;α+1}(z)} dρ(z)    (5.1.29)

(the two integrals are complex conjugates of one another, and the value computed below turns out to be real). For P_{j;α}(z) we have the expression (5.1.25). Multiplying it by z̄, we get an expression similar to (5.1.26):

    z̄ P_{j;α}(z) = k_{j;α} z^{j−α} z̄^{α+1} + z̄ R_{j;α}(z),    z̄ R_{j;α}(z) ∈ P_{j+1;α} or P_{j;j}    (5.1.30)

(but this time it is necessary to use the second inclusion in (5.1.14) and (5.1.8)). Now, the equality (5.1.25) gives

    P_{j+1;α+1}(z) = k_{j+1;α+1} z^{j−α} z̄^{α+1} + R_{j+1;α+1}(z),    R_{j+1;α+1}(z) ∈ P_{j+1;α}.    (5.1.31)

By taking z^{j−α} z̄^{α+1} from (5.1.31) and substituting it into (5.1.30), we get

    z̄ P_{j;α}(z) = (k_{j;α}/k_{j+1;α+1}) (P_{j+1;α+1}(z) − R_{j+1;α+1}(z)) + z̄ R_{j;α}(z)
                 = (k_{j;α}/k_{j+1;α+1}) P_{j+1;α+1}(z) − (k_{j;α}/k_{j+1;α+1}) R_{j+1;α+1}(z) + z̄ R_{j;α}(z).    (5.1.32)

As above, the sum of the last two terms in (5.1.32) belongs to P_{j+1;α}, or to P_{j;j}, and in either case is orthogonal to P_{j+1;α+1}(z). Therefore, the substitution of the expression (5.1.32) into (5.1.29) gives

    a_{j,j+1;α,α+1} = k_{j;α}/k_{j+1;α+1} > 0.    □

To denote the elements a_{j,k} of the Jacobi matrix, we use the standard notation:

    a_n = a_{n+1,n} : H_n → H_{n+1},
    b_n = a_{n,n}   : H_n → H_n,        (5.1.33)
    c_n = a_{n,n+1} : H_{n+1} → H_n,    n ∈ N_0.

We summarize all previous investigations in a theorem.

Theorem 5.1.5 The bounded normal operator Â of multiplication by the independent variable z (with a strong cyclic vector) in the space L_2, written in the orthonormal basis (5.1.6) of polynomials, has the form of a block tri-diagonal Jacobi type normal matrix J = (a_{j,k})_{j,k=0}^∞ acting in the space (5.1.9), namely,

    l_2 = H_0 ⊕ H_1 ⊕ H_2 ⊕ ⋯,    H_n = C^{n+1},    n ∈ N_0.    (5.1.34)

The norms of all operators a_{j,k} : H_k → H_j are uniformly bounded with respect to j, k ∈ N_0. In the notation of (5.1.33), this matrix has the form

    J = ( b_0  c_0  0    0    0    ⋯
          a_0  b_1  c_1  0    0    ⋯
          0    a_1  b_2  c_2  0    ⋯        (5.1.35)
          0    0    a_2  b_3  c_3  ⋯
          ⋮    ⋮    ⋮    ⋮    ⋮    ⋱ ).

The same matrix, written entrywise, takes the scalar multi-diagonal form

    J = [the scalar form of (5.1.35), with entries "+", "∗" and 0 as described below].    (5.1.36)

Namely, in (5.1.36) the elements are as follows: ∀n ∈ N_0, b_n is an ((n+1) × (n+1))-matrix, b_n = (b_{n;α,β})_{α,β=0}^{n,n} (b_0 = b_{0;0,0} is a scalar); a_n is an ((n+2) × (n+1))-matrix, a_n = (a_{n;α,β})_{α,β=0}^{n+1,n}; c_n is an ((n+1) × (n+2))-matrix, c_n = (c_{n;α,β})_{α,β=0}^{n,n+1}. Some elements of the matrices a_n and c_n are always equal to zero, ∀n ∈ N, namely,

    a_{0;1,0} = 0;
    a_{n;β+1,β} = a_{n;β+2,β} = ⋯ = a_{n;n+1,β} = 0,    β = 0, 1, …, n;    (5.1.37)
    c_{n;α,α+2} = c_{n;α,α+3} = ⋯ = c_{n;α,n+1} = 0,    α = 0, 1, …, n − 1.

Some elements of the matrices a_n and c_n are always positive, namely,

    a_{n;α,α} > 0,    c_{n;α,α+1} > 0,    α = 0, 1, …, n,    ∀n ∈ N_0.    (5.1.38)

Thus, one can say that, ∀n ∈ N_0, the lower left corner of the matrix a_n (starting from the second diagonal from above) and the upper right corner of the matrix c_n (starting from the third diagonal from below) consist of zero elements. All positive elements in (5.1.36) are denoted by "+"; possibly zero and non-zero elements are indicated by "∗". In scalar form, the matrix (5.1.36) is multi-diagonal.

The adjoint operator (Â)^* in the basis (5.1.6) has a similar form, namely that of a block tri-diagonal Jacobi matrix J^+. The matrices J, J^+ act by the rule

    (J f)_n   = a_{n−1} f_{n−1} + b_n f_n + c_n f_{n+1},
    (J^+ f)_n = c_{n−1}^* f_{n−1} + b_n^* f_n + a_n^* f_{n+1},    n ∈ N_0,    f_{−1} = 0,    ∀f = (f_n)_{n=0}^∞ ∈ l_2,    (5.1.39)

where "∗" denotes the usual matrix adjoint. The form of the coefficients of J^+ follows from (5.1.18) and (5.1.33); Eq. (5.1.39) generalizes (5.1.1).
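The blockwise action (5.1.39), with the block sizes of Theorem 5.1.5 (b_n of size (n+1) × (n+1), a_n of size (n+2) × (n+1), c_n of size (n+1) × (n+2)), amounts to ordinary bookkeeping on a finite truncation. The sketch below fills the blocks with random numbers (it does not enforce the pattern (5.1.37)–(5.1.38) or normality; it only checks the block arithmetic) and compares the blockwise formula with multiplication by the assembled dense matrix. All names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                   # blocks H_0, ..., H_{N-1}, with dim H_n = n + 1

# Random blocks with the sizes dictated by (5.1.33).
a = [rng.standard_normal((n + 2, n + 1)) for n in range(N - 1)]   # a_n : H_n -> H_{n+1}
b = [rng.standard_normal((n + 1, n + 1)) for n in range(N)]       # b_n : H_n -> H_n
c = [rng.standard_normal((n + 1, n + 2)) for n in range(N - 1)]   # c_n : H_{n+1} -> H_n
f = [rng.standard_normal(n + 1) for n in range(N)]

def J_apply(f):
    """(J f)_n = a_{n-1} f_{n-1} + b_n f_n + c_n f_{n+1}, with f_{-1} = f_N = 0."""
    out = []
    for n in range(N):
        v = b[n] @ f[n]
        if n > 0:
            v = v + a[n - 1] @ f[n - 1]
        if n < N - 1:
            v = v + c[n] @ f[n + 1]
        out.append(v)
    return out

# Assemble the same operator as a dense matrix on H_0 + ... + H_{N-1}.
dims = [n + 1 for n in range(N)]
off = np.concatenate(([0], np.cumsum(dims)))
J = np.zeros((off[-1], off[-1]))
for n in range(N):
    J[off[n]:off[n + 1], off[n]:off[n + 1]] = b[n]
for n in range(N - 1):
    J[off[n + 1]:off[n + 2], off[n]:off[n + 1]] = a[n]     # subdiagonal block
    J[off[n]:off[n + 1], off[n + 1]:off[n + 2]] = c[n]     # superdiagonal block

blockwise = np.concatenate(J_apply(f))
dense = J @ np.concatenate(f)
print(np.allclose(blockwise, dense))
```

The same assembly, with the conjugate-transposed blocks in the mirrored positions, produces the matrix J^+ of (5.1.39).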

5.2 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices of Bounded Normal Operators

As was mentioned above, the main result of the previous section is in fact the solution of an inverse problem for the corresponding direct one appearing in the title of this section.

We consider operators in the space l_2 of the form (5.1.9). In addition to the space l_2, we consider its equipment, i.e., we have a rigging

    (l_fin)' ⊃ l_2(p^{−1}) ⊃ l_2 ⊃ l_2(p) ⊃ l_fin,    (5.2.1)

where l_2(p) is a weighted l_2 space with a weight p = (p_n)_{n=0}^∞, p_n ≥ 1 (and p^{−1} = (p_n^{−1})_{n=0}^∞). In our case, l_2(p) is the Hilbert space of sequences f = (f_n)_{n=0}^∞, f_n ∈ H_n, for which

    ‖f‖_{l_2(p)}^2 = Σ_{n=0}^∞ ‖f_n‖_{H_n}^2 p_n,    (f, g)_{l_2(p)} = Σ_{n=0}^∞ (f_n, g_n)_{H_n} p_n.    (5.2.2)

The space l_2(p^{−1}) is defined similarly; recall that l_fin is the space of finite sequences and (l_fin)' is the space adjoint to l_fin. It is easy to show that the imbedding l_2(p) ↪ l_2 is quasi-nuclear if Σ_{n=0}^∞ n p_n^{−1} < ∞ (see, for example, Chap. 2).

Let A be a normal operator standardly connected with the chain (5.2.1). (About the standard connection, see the beginning of the last section of this chapter.) According to the projection spectral theorem, such an operator has a representation

    Af = ∫_C z o(z) dσ(z) f,    f ∈ l_2,    (5.2.3)

where o(z) : l_2(p) → l_2(p^{−1}) is the operator of generalized projection and dσ(z) is a spectral measure. The operator A^* adjoint to A has the same representation (5.2.3) with z o(z) replaced by z̄ o(z). For every f ∈ l_fin, the projection o(z)f ∈ l_2(p^{−1}) is a generalized eigenvector of the operators A and A^* with corresponding eigenvalues z and z̄, respectively. For all f, g ∈ l_fin we have the Parseval equality

    (f, g)_{l_2} = ∫_C (o(z)f, g)_{l_2} dσ(z).    (5.2.4)

After the extension by continuity, the equality (5.2.4) holds true ∀f, g ∈ l_2.

Let us denote by π_n the operator of orthogonal projection in l_2 onto H_n, n ∈ N_0. Hence, ∀f = (f_n)_{n=0}^∞ ∈ l_2, we have f_n = π_n f. This operator acts similarly on the spaces l_2(p) and l_2(p^{−1}), but possibly with a norm not equal to one. Let us consider the operator matrix (o_{j,k}(z))_{j,k=0}^∞, where

    o_{j,k}(z) = π_j o(z) π_k : l_2 → H_j    (or H_k → H_j).    (5.2.5)

Then the Parseval equality (5.2.4) is written in the form

    (f, g)_{l_2} = Σ_{j,k=0}^∞ ∫_C (o(z) π_k f, π_j g)_{l_2} dσ(z)
                 = Σ_{j,k=0}^∞ ∫_C (π_j o(z) π_k f, g)_{l_2} dσ(z)    (5.2.6)
                 = Σ_{j,k=0}^∞ ∫_C (o_{j,k}(z) f_k, g_j)_{l_2} dσ(z),    ∀f, g ∈ l_2.
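In finite dimensions, the representation (5.2.3)–(5.2.4) reduces to the ordinary spectral decomposition of a normal matrix, with σ the counting measure on the eigenvalues and o(z) the orthogonal projections onto the eigenspaces. The sketch below, an illustration on a randomly generated normal matrix rather than the infinite-dimensional operator of the text, verifies the Parseval equality (f, g) = Σ_z (o(z)f, g).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Build a normal matrix A = U diag(z) U* with a random unitary U and complex spectrum z.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = U @ np.diag(z) @ U.conj().T

# Spectral projections o_i = u_i u_i*; then A = sum z_i o_i and A* = sum conj(z_i) o_i.
projs = [np.outer(U[:, i], U[:, i].conj()) for i in range(n)]

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lhs = np.vdot(g, f)                            # (f, g) = sum f * conj(g)
rhs = sum(np.vdot(g, P @ f) for P in projs)    # sum over the discrete "spectral measure"
print(abs(lhs - rhs))
```

The generalized projection o(z) of the text plays exactly the role of the projections `projs[i]` here, except that it acts between the spaces of the rigging (5.2.1) rather than on C^n.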

Let us now consider a more special bounded operator A acting in the space l_2. More precisely, let it be given by a block tri-diagonal Jacobi type matrix J of the form (5.1.36). So, this operator A is defined by the first expression in (5.1.39); the adjoint operator is defined similarly by the second expression in (5.1.39). Recall that the norms of all elements a_n, b_n and c_n are uniformly bounded with respect to n ∈ N_0.

For further investigation, let us assume that the conditions (5.1.37) and (5.1.38) are fulfilled and, additionally, that the operator A given by (5.1.36) is bounded and normal on l_2. Conditions for boundedness and normality will be investigated in the next section.

At the next step, we will rewrite the Parseval equality (5.2.6) in terms of generalized eigenvectors of the operator A. At first, we prove a lemma.

Lemma 5.2.1 Let ϕ(z) = (ϕ_n(z))_{n=0}^∞, ϕ_n(z) ∈ H_n, z ∈ C, be a generalized eigenvector from (l_fin)' of the operator A with an eigenvalue z; hence, as recalled above, it is also a generalized eigenvector of A^* with the corresponding eigenvalue z̄; ϕ_0(z) = ϕ_0 is independent of z. Thus, ϕ(z) is a solution in (l_fin)' of the system of two difference equations (see (5.1.39)), namely,

    (J ϕ(z))_n = a_{n−1} ϕ_{n−1}(z) + b_n ϕ_n(z) + c_n ϕ_{n+1}(z) = z ϕ_n(z),
    (J^+ ϕ(z))_n = c_{n−1}^* ϕ_{n−1}(z) + b_n^* ϕ_n(z) + a_n^* ϕ_{n+1}(z) = z̄ ϕ_n(z),    n ∈ N_0,    ϕ_{−1}(z) = 0,    (5.2.7)

with an initial condition ϕ_0 ∈ C. This solution has the form

    ϕ_n(z) = Q_n(z) ϕ_0 = (Q_{n;0}, Q_{n;1}, …, Q_{n;n}) ϕ_0,    ∀n ∈ N,    (5.2.8)

where Q_{n;α}, α = 0, 1, …, n, are polynomials in the variables z and z̄ of the form

    Q_{n;α}(z) = l_{n;α} z̄^{n−α} z^{α} + q_{n;α}(z, z̄),    α = 0, 1, …, n,    (5.2.9)

where l_{n;α} > 0 and q_{n;α}(z, z̄) is a linear combination of the monomials z̄^j z^k with 0 ≤ j + k ≤ n − 1 and of z̄^{n−(α−1)} z^{α−1} for α = 1, …, n.

Proof For n = 0, the system (5.2.7) has the form

    b_0 ϕ_0 + c_0 ϕ_1 = z ϕ_0,            ā_{0;0,0} ϕ_{1;0} + ā_{0;0,1} ϕ_{1;1} = (z̄ − b̄_{0;0,0}) ϕ_0,
    b_0^* ϕ_0 + a_0^* ϕ_1 = z̄ ϕ_0,   or   c_{0;0,0} ϕ_{1;0} + c_{0;0,1} ϕ_{1;1} = (z − b_{0;0,0}) ϕ_0.    (5.2.10)

Here and in what follows, we denote

    ϕ_n(z) = (ϕ_{n;0}(z), ϕ_{n;1}(z), …, ϕ_{n;n}(z)) ∈ H_n,    ∀n ∈ N,    ϕ_0 = ϕ_{0;0}.

By using the assumptions (5.1.37) and (5.1.38), we rewrite the last two equalities of (5.2.10) in the form

    A_0 ϕ_1(z) = ((z̄ − b̄_{0;0,0}) ϕ_0, (z − b_{0;0,0}) ϕ_0);
    A_0 = ( a_{0;0,0}  0
            c_{0;0,0}  c_{0;0,1} ),    a_{0;0,0} > 0,    c_{0;0,1} > 0.    (5.2.11)

Therefore,

    ϕ_{1;0}(z) = (1/a_{0;0,0}) (z̄ − b̄_{0;0,0}) ϕ_0 = Q_{1;0}(z) ϕ_0,
    ϕ_{1;1}(z) = (r_1 (z̄ − b̄_{0;0,0}) + r_2 (z − b_{0;0,0}) + r_3) ϕ_0 = Q_{1;1}(z) ϕ_0,    (5.2.12)

where r_1 > 0 and r_2, r_3 are some constants. In other words, the solution ϕ_n(z) of Eq. (5.2.7) for n = 1 has the form (5.2.8), (5.2.9).

Suppose, by induction, that for n ∈ N the coordinates ϕ_{n−1}(z) and ϕ_n(z) of our generalized eigenvector ϕ(z) = (ϕ_n(z))_{n=0}^∞ have the form (5.2.8), (5.2.9), and let us prove that ϕ_{n+1}(z) also has this form. Our eigenvector ϕ(z) satisfies the system (5.2.7) of two equations. But this system is overdetermined: it consists of 2(n+1) scalar equations, from which it is necessary to find only the n+2 unknowns ϕ_{n+1;0}, ϕ_{n+1;1}, …, ϕ_{n+1;n+1}, using as initial data the previous n+1 values ϕ_{n;0}, ϕ_{n;1}, …, ϕ_{n;n} of the coordinates of the vector ϕ_n(z). We proceed in the following manner.

According to Theorem 5.1.5, especially to (5.1.37) and (5.1.38), the ((n+1) × (n+2))-matrices a_n^* and c_n act on ψ_{n+1} ∈ H_{n+1} by the rule

    a_n^* ψ_{n+1}(z) = ( a_{n;0,0}     0            0            ⋯  0          0
                         ā_{n;1,0}    a_{n;1,1}    0            ⋯  0          0
                         ā_{n;2,0}    ā_{n;2,1}    a_{n;2,2}    ⋯  0          0
                         ⋮            ⋮            ⋮            ⋱  ⋮          ⋮
                         ā_{n;n−1,0}  ā_{n;n−1,1}  ā_{n;n−1,2}  ⋯  0          0
                         ā_{n;n,0}    ā_{n;n,1}    ā_{n;n,2}    ⋯  a_{n;n,n}  0 ) ψ_{n+1}(z),    (5.2.13)

    c_n ψ_{n+1}(z) = ( c_{n;0,0}    c_{n;0,1}    0            ⋯  0            0
                       c_{n;1,0}    c_{n;1,1}    c_{n;1,2}    ⋯  0            0
                       c_{n;2,0}    c_{n;2,1}    c_{n;2,2}    ⋯  0            0
                       ⋮            ⋮            ⋮            ⋱  ⋮            ⋮
                       c_{n;n−1,0}  c_{n;n−1,1}  c_{n;n−1,2}  ⋯  c_{n;n−1,n}  0
                       c_{n;n,0}    c_{n;n,1}    c_{n;n,2}    ⋯  c_{n;n,n}    c_{n;n,n+1} ) ψ_{n+1}(z),    (5.2.14)

where ψ_{n+1}(z) = (ψ_{n+1;0}(z), ψ_{n+1;1}(z), …, ψ_{n+1;n+1}(z)).


Similarly to (5.2.11), we construct from the matrix elements of (5.2.13) and (5.2.14) the ((n+2) × (n+2))-matrix

    A_n ψ_{n+1}(z) = ( a_{n;0,0}    0            0            ⋯  0            0
                       c_{n;0,0}    c_{n;0,1}    0            ⋯  0            0
                       c_{n;1,0}    c_{n;1,1}    c_{n;1,2}    ⋯  0            0
                       ⋮            ⋮            ⋮            ⋱  ⋮            ⋮
                       c_{n;n−1,0}  c_{n;n−1,1}  c_{n;n−1,2}  ⋯  c_{n;n−1,n}  0
                       c_{n;n,0}    c_{n;n,1}    c_{n;n,2}    ⋯  c_{n;n,n}    c_{n;n,n+1} ) ψ_{n+1}(z),    (5.2.15)

where ψ_{n+1}(z) = (ψ_{n+1;0}(z), …, ψ_{n+1;n+1}(z)). The matrix (5.2.15) is invertible, because its elements on the main diagonal are positive (see (5.1.38)).

Rewrite the equalities (5.2.7) in the form

    a_n^* ϕ_{n+1}(z) = z̄ ϕ_n(z) − c_{n−1}^* ϕ_{n−1}(z) − b_n^* ϕ_n(z),
    c_n ϕ_{n+1}(z) = z ϕ_n(z) − a_{n−1} ϕ_{n−1}(z) − b_n ϕ_n(z),    n ∈ N.    (5.2.16)

We see that the first n+2 scalar equations (of the 2(n+1) scalar equations (5.2.16)) have the form

    A_n ϕ_{n+1}(z) = ( z̄ Q_{n;0}(z) − (c_{n−1}^* Q_{n−1}(z))_{n;0} − (b_n^* Q_n(z))_{n;0},
                       z Q_{n;0}(z) − (a_{n−1} Q_{n−1}(z))_{n;0} − (b_n Q_n(z))_{n;0},
                       …,
                       z Q_{n;n}(z) − (a_{n−1} Q_{n−1}(z))_{n;n} − (b_n Q_n(z))_{n;n} ) ϕ_0.    (5.2.17)

The construction of the matrix A_n, the form of the vector on the right-hand side of (5.2.17), and (5.2.8), (5.2.9) yield

    ϕ_{n+1;0}(z) = Q_{n+1;0}(z) ϕ_0
                 = (1/a_{n;0,0}) (z̄ Q_{n;0}(z) − (c_{n−1}^* Q_{n−1}(z))_{n;0} − (b_n^* Q_n(z))_{n;0}) ϕ_0
                 = (1/a_{n;0,0}) (z̄ (l_{n;0} z̄^n + q_{n;0}(z)) − (c_{n−1}^* Q_{n−1}(z))_{n;0} − (b_n^* Q_n(z))_{n;0}) ϕ_0,    (5.2.18)

that is, the main term on the right-hand side of (5.2.18) is equal to (l_{n;0}/a_{n;0,0}) z̄^{n+1} z^0; hence, it has the form (5.2.9).

Similarly we get ϕ_{n+1;1}(z), …, ϕ_{n+1;n+1}(z). It is necessary to take into account that the diagonal of the matrix A_n also contains the positive elements c_{n;0,1}, c_{n;1,2}, …, c_{n;n,n+1}, due to (5.1.38). This completes the induction and finishes the proof. □

Remark 5.2.2 Note that the previous lemma does not assert the existence of a solution of the overdetermined system (5.2.7) for arbitrary initial data ϕ_0 ∈ C; it is only proved that a generalized eigenvector from (l_fin)' of the operator A is a solution of (5.2.7) and has the form (5.2.8), (5.2.9).

In what follows, it will be convenient to regard Q_n(z), with fixed z, as a linear operator acting from H_0 into H_n, i.e., H_0 ∋ ϕ_0 ↦ Q_n(z) ϕ_0 ∈ H_n. We also regard Q_n(z) as an operator-valued polynomial in the variables z, z̄ ∈ C; hence, for the adjoint operator we have Q_n^*(z) = (Q_n(z))^* : H_n → H_0. By using these polynomials Q_n(z), we construct a representation for o_{j,k}(z).

Lemma 5.2.3 The operator o_{j,k}(z), ∀z ∈ C, has the representation

    o_{j,k}(z) = Q_j(z) o_{0,0}(z) Q_k^*(z) : H_k → H_j,    j, k ∈ N_0,    (5.2.19)

where o_{0,0}(z) ≥ 0 is a scalar.

Proof For a fixed k ∈ N_0, the vector ϕ = ϕ(z) = (ϕ_j(z))_{j=0}^∞, where

    ϕ_j(z) = o_{j,k}(z) = π_j o(z) π_k ∈ H_j,    z ∈ C,    (5.2.20)

is a solution and, simultaneously, a generalized eigenvector in (l_fin)' of the equation J ϕ(z) = z ϕ(z), since o(z) is a projector onto generalized eigenvectors of the operator A with the corresponding generalized eigenvalue z. Therefore, ∀g ∈ l_fin, we have (ϕ, J^+ g)_{l_2} = z(ϕ, g)_{l_2}. Transferring J^+ onto the vector ϕ, we obtain (J ϕ, g)_{l_2} = z(ϕ, g)_{l_2}. Hence, ϕ = ϕ(z) ∈ l_2(p^{−1}) exists as a usual solution of the equation J ϕ = z ϕ with the initial condition ϕ_0 = π_0 o(z) π_k ∈ H_0.

Since, ∀f ∈ l_fin, the vector o(z)f ∈ l_2(p^{−1}) is also a generalized eigenvector of the operator A^* with the corresponding eigenvalue z̄ (because A is a normal operator), the same vector ϕ = ϕ(z) in (5.2.20) is also a solution of the equation J^+ ϕ = z̄ ϕ with the same initial condition ϕ_0 = π_0 o(z) π_k.

By using Lemma 5.2.1, in particular (5.2.8), we get

    o_{j,k}(z) = Q_j(z)(o_{0,k}(z)),    j ∈ N_0.    (5.2.21)

The operator o(z) : l_2(p) → l_2(p^{−1}) is formally normal on l_2, being the derivative of the resolution of the identity of the operator A in l_2 with respect to the spectral measure. Hence, according to (5.2.5), we get

    (o_{j,k}(z))^* = (π_j o(z) π_k)^* = π_k o(z) π_j = o_{k,j}(z),    j, k ∈ N_0.    (5.2.22)

For a fixed j ∈ N_0, it follows from (5.2.22) and the previous considerations that the vector

    ψ = ψ(z) = (ψ_k(z))_{k=0}^∞,    ψ_k(z) = o_{k,j}(z) = (o_{j,k}(z))^*,

is a usual solution of the equations J ψ = z ψ and J^+ ψ = z̄ ψ with the initial condition ψ_0 = o_{0,j}(z) = (o_{j,0}(z))^*. By using Lemma 5.2.1, we also obtain a representation of the type (5.2.21):

    o_{k,j}(z) = Q_k(z)(o_{0,j}(z)),    k ∈ N_0.    (5.2.23)

Taking into account (5.2.22) and (5.2.23), we get

    o_{0,k}(z) = (o_{k,0}(z))^* = (Q_k(z) o_{0,0}(z))^* = o_{0,0}(z)(Q_k(z))^*,    k ∈ N_0    (5.2.24)

(here we used o_{0,0}(z) ≥ 0; this inequality follows from (5.2.4) and (5.2.5)). Substituting (5.2.24) into (5.2.21), we obtain (5.2.19). □

Now we can rewrite the Parseval equality (5.2.6) in a concrete form. To do this, we substitute the expression (5.2.19) for o_{j,k}(z) into (5.2.6) and get

    (f, g)_{l_2} = Σ_{j,k=0}^∞ ∫_C (o_{j,k}(z) f_k, g_j)_{l_2} dσ(z)
                 = Σ_{j,k=0}^∞ ∫_C (Q_j(z) o_{0,0}(z) Q_k^*(z) f_k, g_j)_{l_2} dσ(z)
                 = Σ_{j,k=0}^∞ ∫_C (Q_k^*(z) f_k, Q_j^*(z) g_j)_{l_2} dρ(z)    (5.2.25)
                 = ∫_C ( Σ_{k=0}^∞ Q_k^*(z) f_k ) ( Σ_{j=0}^∞ Q_j^*(z) g_j ) dρ(z),    ∀f, g ∈ l_fin,

    dρ(z) = o_{0,0}(z) dσ(z).

We denote the Fourier transform …

… δ_0 > 0 is a fixed number, i.e., for n = 0, from x_0 = ξ_0 = δ_0. Then (5.3.11) gives

    x_0 η_0 = ξ_0 y_0,    ξ_0 + y_0 = η_1,    y_1 = x_0 + η_0.

The general solution of this system has the form

    y_0 = η_0 = δ_1,    y_1 = η_1 = δ_0 + δ_1,    (5.3.12)

where δ_1 > 0 is an arbitrarily given number. Consider the system (5.3.11) for n = 1 and take the coefficients (x_0, x_1; ξ_0, ξ_1) equal to the solution (y_0, y_1; η_0, η_1) of (5.3.12) for the previous system. Thus, we get

    x_0 η_1 = ξ_1 y_0,            i.e.,    δ_1 η_1 = (δ_0 + δ_1) y_0,
    x_1 η_0 = ξ_0 y_1,                     (δ_0 + δ_1) η_0 = δ_1 y_1,
    ξ_1 + y_0 = η_2,                       (δ_0 + δ_1) + y_0 = η_2,    (5.3.13)
    ξ_0 + y_1 = x_0 + η_1,                 δ_1 + y_1 = δ_1 + η_1,
    y_2 = x_1 + η_0,                       y_2 = (δ_0 + δ_1) + η_0.

The general solution of the system (5.3.13) has the form

    y_0 = η_0 = δ_2,    y_1 = η_1 = (δ_0 + δ_1) δ_2 δ_1^{−1},    y_2 = η_2 = δ_0 + δ_1 + δ_2,    (5.3.14)

where δ_2 > 0 is an arbitrary number. It is obvious that the solution is symmetric, i.e., y_k = η_k, k = 0, 1, 2; therefore, for n = 2 the system (5.3.11) takes a simpler form, in which x_k = ξ_k, k = 0, 1, 2. In what follows we establish this symmetry for arbitrary n = 2, 3, … .

Lemma 5.3.1 Let the coefficients of the system (5.3.11) be given in the form x_k = ξ_k > 0, k = 0, 1, …, n, n = 2, 3, … . Then its solution

    (y_0, y_1, …, y_{n+1}; η_0, η_1, …, η_{n+1})

with initial data y_0 = η_0 = δ_{n+1} > 0 is symmetric, i.e., y_k = η_k, k = 0, 1, …, n+1.

Proof From the first and the last equations of the first column in (5.3.11), we get

    η_n = ξ_n y_0 x_0^{−1} = x_n η_0 ξ_0^{−1} = y_n.

This equality and the first and the last equations of the second column in (5.3.11) give

    η_{n+1} = ξ_n + y_0 = x_n + η_0 = y_{n+1}.

The second and the fourth equations of the second column in (5.3.11), together with η_n = y_n, give

    ξ_{n−1} + y_1 = x_0 + η_n = x_0 + y_n = ξ_0 + y_n = x_{n−1} + η_1 = ξ_{n−1} + η_1,

from which we conclude that η_1 = y_1. This equality and the second and the fourth equations of the first column in (5.3.11) give

    η_{n−1} = ξ_{n−1} y_1 x_1^{−1} = x_{n−1} η_1 ξ_1^{−1} = y_{n−1}.

This equality, the third equation of the second column in (5.3.11), and the unwritten equation ξ_1 + y_{n−1} = x_{n−2} + η_2 of the second column give

    ξ_{n−2} + y_2 = x_1 + η_{n−1} = x_1 + y_{n−1} = ξ_1 + y_{n−1} = x_{n−2} + η_2 = ξ_{n−2} + η_2,

and, hence, η_2 = y_2. This equality, the third equation of the first column in (5.3.11), and the unwritten equation x_{n−2} η_2 = ξ_2 y_{n−2} of this column give

    η_{n−2} = ξ_{n−2} y_2 x_2^{−1} = x_{n−2} η_2 ξ_2^{−1} = y_{n−2}.

Repeating the last two steps, we get the equalities η_3 = y_3, η_{n−3} = y_{n−3}; η_4 = y_4, … . As a result, we prove that η_k = y_k for all k = 0, 1, …, n+1. □

By using this lemma, instead of the system (5.3.11) we can consider the simpler system: ∀n ∈ N, we have

    x_0 y_n = x_n y_0,
    x_1 y_{n−1} = x_{n−1} y_1,
    x_2 y_{n−2} = x_{n−2} y_2,
    …,
    x_{[n/2]} y_{[n/2]+1} = x_{[n/2]+1} y_{[n/2]}        (n odd),
    x_{[n/2]−1} y_{[n/2]+1} = x_{[n/2]+1} y_{[n/2]−1}    (n even);
                                                         (5.3.15)
    x_n + y_0 = y_{n+1},
    x_{n−1} + y_1 = x_0 + y_n,
    x_{n−2} + y_2 = x_1 + y_{n−1},
    …,
    x_{[n/2]+1} + y_{[n/2]} = x_{[n/2]−1} + y_{[n/2]+2}    (n odd),
    x_{[n/2]} + y_{[n/2]} = x_{[n/2]−1} + y_{[n/2]+1}      (n even).

(Here, the square brackets in (5.3.15) denote the integer part of the number.)

So, we have the following procedure for finding matrices of the form (5.3.5)–(5.3.8) that satisfy (5.3.4), where all b_n = 0, n ∈ N_0. Consider the system (5.3.15) of n+1 equations, where (x_0, x_1, …, x_n) are positive coefficients and (y_0, y_1, …, y_{n+1}) are real unknowns. For n = 1, we put x_0 = δ_1, x_1 = δ_0 + δ_1 (δ_0, δ_1 > 0 are arbitrary initial data) and find its solution (5.3.14) with an arbitrary y_0 = δ_2 > 0. Consider the system (5.3.15) for n = 2 with coefficients (x_0, x_1, x_2) equal to the previous solution (y_0, y_1, y_2) and find the solution (y_0, y_1, y_2, y_3), where y_0 = δ_3 > 0 is an arbitrary number. Assume that y_k > 0, k = 0, 1, 2, 3, and consider (5.3.15) for n = 3 with coefficients (x_0, x_1, x_2, x_3) equal to (y_0, y_1, y_2, y_3) and initial data y_0 = δ_4 > 0 for a new solution. Continue this procedure, assuming at each step that the solution is positive (such positivity depends on the choice of δ_0, δ_1, … > 0). As a result, we find (y_0, y_1, …, y_{n+1}) for every n ∈ N_0.

By means of the formulas (5.3.10) (where ξ_k = x_k and η_k = y_k), we calculate the elements of the matrices (5.3.5)–(5.3.8). For these matrices and b_n = 0, n ∈ N_0, the matrix J of the type (5.1.36) is formally normal. If all y_k in the found solution (y_0, y_1, …, y_{n+1}) of (5.3.15) are uniformly bounded with respect to n ∈ N_0, then the operator A generated by J is bounded and normal.

A similar situation occurs in the more general case, when the b_n are not zero. Then it is necessary to find b_n, n ∈ N_0, step by step, starting from b_0 ∈ R and then finding b_{n+1} = b_{n+1}^* with b_n = b_n^* from the third equality of (5.3.4).

Example 5.3.2 We will use the procedure described above to find the solution of the system (5.3.15) in the case

    δ_n = q^n,    q > 0,    n ∈ N_0.    (5.3.16)

It is easy to calculate that this solution at the n-th step looks like

    y_0 = q^{n+1},
    y_1 = q^n + q^{n+1},
    y_2 = q^{n−1} + q^n + q^{n+1},  …,    (5.3.17)
    y_{n+1} = 1 + q + q^2 + ⋯ + q^{n+1}.

So, for the case 0 < q < 1, the operator A corresponding to the matrix J is bounded; in this case A is a normal operator. If q ≥ 1, then the operator generated by J is unbounded and formally normal. In the case of (5.3.16), (5.3.17), it is not difficult to find self-adjoint matrices b_n, n ∈ N_0, for which the third equation in (5.3.4) is satisfied. Of course, it is possible to obtain matrices corresponding to bounded normal operators of the type (5.1.35) (i.e., (5.1.36)) in more general situations as well.

Example 5.3.3 Let us give an example of a Jacobi type block matrix, corresponding to some complex moment problem, in which b_n ≢ 0. It is not difficult to verify that the matrix

    J = [an explicit numerical block tri-diagonal matrix, whose non-zero entries are numbers such as 1, √2, √3 and 2]

corresponds to a bounded and normal operator, that is, it satisfies the identity J J^* = J^* J on finite vectors.
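The closed-form solution (5.3.17) of Example 5.3.2 can be checked against the system (5.3.15) directly: at the n-th step the coefficients x_k are the previous-step values q^{n−k} + ⋯ + q^n, and the claimed solution is y_k = q^{n+1−k} + ⋯ + q^{n+1}. The sketch below verifies every multiplicative and additive equation of (5.3.15) for several n and q; it is an independent numerical check (the helper names `sol` and `check_5315` are ours), not part of the book's argument.

```python
import numpy as np

def sol(n, q):
    """Step-n solution of (5.3.15) in Example 5.3.2: y_k = q^{n+1-k} + ... + q^{n+1}."""
    return np.array([sum(q**i for i in range(n + 1 - k, n + 2)) for k in range(n + 2)])

def check_5315(n, q, tol=1e-9):
    x = sol(n - 1, q)          # coefficients: previous-step solution (x_0, ..., x_n)
    y = sol(n, q)              # unknowns:     current-step solution (y_0, ..., y_{n+1})
    ok = True
    for m in range(n + 1):     # multiplicative equations x_m y_{n-m} = x_{n-m} y_m
        ok &= abs(x[m] * y[n - m] - x[n - m] * y[m]) < tol
    ok &= abs(x[n] + y[0] - y[n + 1]) < tol
    for m in range(1, n + 1):  # additive equations x_{n-m} + y_m = x_{m-1} + y_{n+1-m}
        ok &= abs(x[n - m] + y[m] - (x[m - 1] + y[n + 1 - m])) < tol
    return ok

print(all(check_5315(n, 0.5) for n in range(1, 8)))
```

Note that for 0 < q < 1 the values y_k stay uniformly bounded (by the geometric series), matching the boundedness claim of the example, while for q ≥ 1 they grow without bound.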


Additionally, we note that this matrix is related to the Chebyshev polynomials on the Steiner domain in the complex plane. Recall that the Steiner domain is a closed set in the complex plane bounded by a hypocycloid curve, namely,

    z^2 z̄^2 − 4z^3 − 4z̄^3 + 18 z z̄ − 27 = 0.

The Chebyshev polynomials of the first kind on the Steiner domain have the form

    T_{0;0}(z) = 1,
    T_{1;0}(z) = z,    T_{0;1}(z) = z̄,
    T_{2;0}(z) = z^2 − 2z̄,    T_{1;1}(z) = z z̄ − 3,    T_{0;2}(z) = z̄^2 − 2z,  …,

and they are orthogonal on the Steiner domain with the weight function

    h(z) = 1 / √( −(z^2 z̄^2 − 4z^3 − 4z̄^3 + 18 z z̄ − 27) ).

The Chebyshev polynomials are related to the polynomials discussed above by the relations

    P_{n;α}(z) = (1/√6) T_{n−α,α}(z),    n ≥ 2,    α = 1, 2, …, n − 1,
    P_{n;α}(z) = (1/√3) T_{n−α,α}(z),    n ≥ 2,    α = 0, n;

and, in particular, P_{0;0}(z) = 1, P_{1;0}(z) = (1/√3) z, P_{1;1}(z) = (1/√3) z̄.

Remark 5.3.4 The study of the detailed internal structure of the block-type Jacobi matrix of a normal operator, as well as of a unitary operator (see the next chapter), and of objects similar to the Verblunsky coefficients and Schur parameters corresponding to the spectral measure of a normal operator given by such a matrix, remains an important open problem to this day.

5.4 The Solution of the Complex Moment Problem

This section is devoted to the solution of the complex moment problem in the power form. By the complex moment problem in power form we understand the problem of finding conditions on a given sequence (s_{m,n}), m, n ∈ N_0 = {0, 1, 2, …}, of complex numbers under which there corresponds to it a Borel measure dρ(z) on the complex plane C such that

    s_{m,n} = ∫_C z^m z̄^n dρ(z),    m, n ∈ N_0.    (5.4.1)

The problem is solved under the condition that the set {z^m z̄^n}, m, n ∈ N_0, is dense in L_2(C, dρ(z)). There are many known works devoted to the representation (5.4.1) in various versions, i.e., (m, n) ∈ Z × Z, (m, n) ∈ N_0 × Z, (m + n) ∈ N_0, and others, with the measure dρ(z) concentrated on the whole plane C, on some half-plane in C, on a parallelogram, on a circle of radius R in C, and so on. This section considers the case m, n ∈ N_0, with dρ(z) defined on the plane C without restrictions. The solution approach is based on a generalization of the method going back to the works of M. G. Krein.

By using the given sequence (s_{m,n}), m, n ∈ N_0, we consider the (quasi-)scalar product

    (f, g)_S = Σ_{j,k,m,n=0}^∞ f_{j,k} ḡ_{m,n} s_{j+n,k+m}    (5.4.2)

on finite sequences f = (f_{j,k})_{j,k=0}^∞ and g = (g_{j,k})_{j,k=0}^∞, under the condition (f, f)_S ≥ 0 for an arbitrary finite sequence f = (f_{j,k})_{j,k=0}^∞. In the Hilbert space generated by the scalar product (·, ·)_S, we consider the operator N acting on finite vectors as follows:

    (N f)_{j,k} = f_{j,k−1},    (N̄ f)_{j,k} = f_{j−1,k},    j, k ∈ N_0,    (5.4.3)

where N̄ is the operator formally adjoint to N, and we put f_{j,−1} = f_{−1,k} = 0. It is not difficult to verify that the operator N is formally normal. Now, the theory of generalized eigenvector expansions is applied to this operator (more precisely, to its normal extension). In this way, the generalized eigenvectors P(λ) corresponding, according to (5.4.3), to the generalized eigenvalues λ ∈ C have the form of linear combinations of (λ^m λ̄^n), m, n ∈ N_0. Therefore, the corresponding Parseval equality, using the scalar product (5.4.2) for arbitrary f and g in terms of "Fourier coefficients", leads directly to the representation (5.4.1). The quasi-analytic criterion of self-adjointness of operators plays the main role in solving the question of the uniqueness of the measure dρ(z) giving the representation (5.4.1).

The main result of the section is contained in the following theorem.

Theorem 5.4.1 If a given two-index sequence of complex numbers (s_{m,n}), m, n ∈ N_0, has the representation (5.4.1), then

    Σ_{j,k,m,n=0}^∞ f_{j,k} f̄_{m,n} s_{j+n,k+m} ≥ 0    (5.4.4)

for an arbitrary finite sequence of complex numbers (f_{j,k})_{j,k=0}^∞, f_{j,k} ∈ C.
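Condition (5.4.4) says that the Hermitian matrix with entries s_{j+n,k+m}, rows indexed by pairs (j, k) and columns by pairs (m, n), is positive semi-definite. The sketch below illustrates this for the moments of a discrete measure (equal point masses, an assumption made only for the illustration): it builds a finite section of this matrix and checks that it is Hermitian with non-negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)

# A discrete measure rho: equal masses at a few points of C.
pts = rng.normal(size=8) + 1j * rng.normal(size=8)
w = np.full(pts.size, 1.0 / pts.size)

def s(m, n):
    """Complex moment s_{m,n} = integral of z^m zbar^n drho(z), cf. (5.4.1)."""
    return np.sum(w * pts**m * np.conj(pts)**n)

# Finite section of the quadratic form (5.4.4): rows (j, k), columns (m, n).
d = 2
idx = [(j, k) for j in range(d + 1) for k in range(d + 1)]
S = np.array([[s(j + n, k + m) for (m, n) in idx] for (j, k) in idx])

eigs = np.linalg.eigvalsh(S)          # S is Hermitian, so the eigenvalues are real
print(eigs.min())                     # non-negative up to rounding
```

The positivity is transparent from (5.4.1): the quadratic form equals the integral of |Σ f_{m,n} z^m z̄^n|^2 against ρ, as in the necessity part of the proof below.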

The representation (5.4.1) exists and is unique if this sequence (s_{m,n}), m, n ∈ N_0, is positive definite, that is, satisfies the condition (5.4.4), and

    Σ_{p=1}^∞ 1 / (s_{2p,2p})^{1/(2p)} = ∞.    (5.4.5)

Note that the condition (5.4.4) is obviously necessary for the representation (5.4.1). The conditions (5.4.4) and (5.4.5) lead to the representation (5.4.1) of the theorem with a unique measure (we will explain further on that the condition (5.4.5) guarantees the uniqueness).

Let us first recall some constructions (see Chap. 2, Sects. 2.7 and 2.1). Let a normal operator N defined on D(N) be given in the separable Hilbert space H; N^* is its adjoint, D(N^*) = D(N). Consider the equipment of the space H:

    H_− ⊃ H ⊃ H_+ ⊃ D,    (5.4.6)

where H_+ is a positive Hilbert space topologically and quasi-nuclearly imbedded into H (topologically means densely and continuously; quasi-nuclearly means that the imbedding operator is of Hilbert–Schmidt type); H_− is the dual of H_+ with respect to the space H; D is a linear topological space, topologically embedded into H_+. The operator N is called standardly connected with the chain (5.4.6) if D ⊂ D(N) and the restrictions N ↾ D, N^* ↾ D act continuously from D into H_+. This is the case under consideration.

Let us recall that a vector o ∈ D is called a strong cyclic vector of the operators N and N^* if, for any p, q ∈ N, we have

    o ∈ D(N^p) ∩ D((N^*)^q),    N^p (N^*)^q o ∈ D,

and the set of all the vectors N^p (N^*)^q o, p, q = 0, 1, 2, …, is total in the space H_+ (and, hence, also in H). Assuming that a strong cyclic vector exists, we formulate a short variant of the projection spectral theorem (see Chap. 2, Sect. 2.11).

Theorem 5.4.2 For a normal operator N with a strong cyclic vector in the separable Hilbert space H, there exists a non-negative finite Borel measure dρ(λ) with support on the complex plane such that, for ρ-almost every λ ∈ C, there exists a generalized joint eigenvector ξ_λ ∈ H_−, ξ_λ ≠ 0, i.e.,

    (ξ_λ, N^* f)_H = λ(ξ_λ, f)_H,    (ξ_λ, N f)_H = λ̄(ξ_λ, f)_H,    λ ∈ C,    f ∈ D.    (5.4.7)

The corresponding Fourier transform F is given by the rule

    H ⊃ H_+ ∋ f ↦ (F f)(λ) = f̂(λ) = (f, ξ_λ)_H ∈ L_2(C, dρ(λ)),    (5.4.8)


5 Block Jacobi Type Matrices in the Complex Moment Problem

is an isometric operator (after the extension by continuity) acting from the space $H$ into $L_2(\mathbb{C}, d\rho(\lambda))$. The image of the operator $N$ ($N^*$) under the transformation $F$ is the operator of multiplication by $\lambda$ ($\bar{\lambda}$) in $L_2(\mathbb{C}, d\rho(\lambda))$.

We remark that the notation $(\cdot, \cdot)_H$ denotes the same as before (see Chap. 2, Sect. 2.7). We also recall (see Chap. 2, Sect. 2.6) that for an operator $A$ defined on $D(A)$ in $H$, a vector $f \in \bigcap_{n=0}^{\infty} D(A^n)$ is called quasi-analytic if the class $C\{m_n\}$, where in this case $m_n = \|A^n f\|_H$, is quasi-analytic. For a Hermitian operator, the quasi-analyticity of the vector $f$ is equivalent to the condition

$$\sum_{n=1}^{\infty} \frac{1}{\sqrt[n]{\|A^n f\|_H}} = \infty. \qquad (5.4.9)$$

Quasi-analyticity is used in the criteria of self-adjointness and commutativity (see Chap. 2, Sect. 2.6).

Proof of Theorem 5.4.1 The necessity of the condition (5.4.4) is obvious. Indeed, if the sequence $(s_{m,n})$, $m, n \in \mathbb{N}_0$, has the representation (5.4.1), then for an arbitrary finite sequence $f = (f_{m,n})_{m,n=0}^{\infty}$ of complex numbers $f_{m,n} \in \mathbb{C}$, we have

$$\sum_{j,k,m,n=0}^{\infty} f_{j,k} \bar{f}_{m,n} s_{j+n,k+m} = \int_{\mathbb{C}} \Big| \sum_{m,n=0}^{\infty} f_{m,n} z^m \bar{z}^n \Big|^2 \, d\rho(z) \ge 0.$$

Let us denote by $l$ the linear space of sequences $f = (f_{m,n})_{m,n=0}^{\infty}$, $f_{m,n} \in \mathbb{C}$, and by $l_{\mathrm{fin}}$ its linear subspace consisting of finite sequences $f = (f_{m,n})_{m,n=0}^{\infty}$, i.e., the sequences such that $f_{m,n} \neq 0$ only for finitely many $n$ and $m$. Let $\delta_{m,n}$, $m, n \in \mathbb{N}_0$, be the $\delta$-sequences, so that each $f \in l_{\mathrm{fin}}$ has the representation $f = \sum_{m,n=0}^{\infty} f_{m,n} \delta_{m,n}$. Let us consider linear operators on $l_{\mathrm{fin}}$, namely

$$J: (Jf)_{j,k} = f_{j,k-1}, \qquad J^+: (J^+ f)_{j,k} = f_{j-1,k}, \qquad j, k \in \mathbb{N}_0, \qquad (5.4.10)$$

where always $f_{j,-1} = f_{-1,k} \equiv 0$. The operators $J$ and $J^+$ are of "creation" type. For the $\delta$-sequences we get

$$J \delta_{j,k} = \delta_{j,k+1}, \qquad J^+ \delta_{j,k} = \delta_{j+1,k}. \qquad (5.4.11)$$
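The shift operators (5.4.10) and the relations (5.4.11) can be sketched concretely; the following is an illustrative implementation (not from the original text), modeling finite double sequences as dictionaries $\{(j,k): f_{j,k}\}$ with the convention $f_{j,-1} = f_{-1,k} = 0$:

```python
# Sketch of the "creation" type operators (5.4.10) on finite double sequences.
def J(f):
    # (Jf)_{j,k} = f_{j,k-1}: the value stored at (j, k) moves to (j, k+1)
    return {(j, k + 1): v for (j, k), v in f.items()}

def Jplus(f):
    # (J^+ f)_{j,k} = f_{j-1,k}: the value stored at (j, k) moves to (j+1, k)
    return {(j + 1, k): v for (j, k), v in f.items()}

delta = lambda m, n: {(m, n): 1.0}   # the delta-sequence delta_{m,n}

# (5.4.11): J delta_{j,k} = delta_{j,k+1},  J^+ delta_{j,k} = delta_{j+1,k}
print(J(delta(2, 3)))                # {(2, 4): 1.0}
print(Jplus(delta(2, 3)))            # {(3, 3): 1.0}

# formal commutation on l_fin: (J^+ J f)_{j,k} = f_{j-1,k-1} = (J J^+ f)_{j,k}
f = {(0, 0): 1.0, (1, 2): -2.5, (3, 1): 0.5}
print(Jplus(J(f)) == J(Jplus(f)))    # True
```

The last line checks the commutation relation used below to show that $J$ is formally normal.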

The operator $J$ is formally adjoint to $J^+$ with respect to the (quasi-)scalar product

$$(f, g)_S = \sum_{j,k,m,n=0}^{\infty} f_{j,k} \bar{g}_{m,n} s_{j+n,k+m}, \qquad f, g \in l_{\mathrm{fin}}. \qquad (5.4.12)$$

5.4 The Solution of the Complex Moment Problem


Indeed,

$$(Jf, g)_S = \sum_{j,k,m,n=0}^{\infty} (Jf)_{j,k} \bar{g}_{m,n} s_{j+n,k+m} = \sum_{j,k,m,n=0}^{\infty} f_{j,k-1} \bar{g}_{m,n} s_{j+n,k+m}$$
$$= \sum_{j,k,m,n=0}^{\infty} f_{j,k} \bar{g}_{m,n} s_{j+n,k+m+1} = \sum_{j,k,m,n=0}^{\infty} f_{j,k} \bar{g}_{m-1,n} s_{j+n,k+m}$$
$$= \sum_{j,k,m,n=0}^{\infty} f_{j,k} \overline{(J^+ g)_{m,n}} s_{j+n,k+m} = (f, J^+ g)_S.$$

The operator $J$ commutes with $J^+$ on $l_{\mathrm{fin}}$, namely

$$(J^+ J f)_{j,k} = f_{j-1,k-1} = (J J^+ f)_{j,k}.$$
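The adjointness identity $(Jf, g)_S = (f, J^+ g)_S$ is a purely combinatorial identity in the moments, so it holds for the moments of any measure; it can be checked numerically for an assumed small discrete measure (an illustrative sketch, not part of the original text):

```python
# Hypothetical check of (Jf, g)_S = (f, J^+ g)_S for moments
# s_{a,b} = sum_i w_i z_i^a conj(z_i)^b of a toy discrete measure.
pts = [(0.4, 1 + 2j), (0.6, -0.5 + 0.3j)]          # (weight, point) pairs
s = lambda a, b: sum(w * z**a * z.conjugate()**b for w, z in pts)

def pair(f, g):
    # quasi-scalar product (5.4.12) on finite double sequences (dicts)
    return sum(fv * gv.conjugate() * s(j + n, k + m)
               for (j, k), fv in f.items() for (m, n), gv in g.items())

def J(f):     return {(j, k + 1): v for (j, k), v in f.items()}
def Jplus(f): return {(j + 1, k): v for (j, k), v in f.items()}

f = {(0, 0): 1.0, (1, 1): 2j}
g = {(0, 1): -1.0, (2, 0): 0.5 + 0.5j}
print(abs(pair(J(f), g) - pair(f, Jplus(g))))   # vanishes up to rounding
```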

Hence the operator $J$ is formally normal and its formal adjoint is $J^+$. Let $S$ be the Hilbert space obtained as the completion of the quotient space

$$\dot{l}_{\mathrm{fin}} := l_{\mathrm{fin}} / \{ h \in l_{\mathrm{fin}} \mid (h, h)_S = 0 \}.$$

An element $f$ of the space $S$ is a representative of the class $\dot{f}$ of equivalent elements from $\dot{l}_{\mathrm{fin}}$. Hence the operators $\dot{J}$ and $\dot{J}^+$ are correctly defined in $S$. These facts, in the case of self-adjoint operators, are described in detail in [21], Chapter 8, §1, Subsect. 4, and [23], Chapter 5, §5, Subsect. 2. Therefore,

$$\dot{J} \dot{f} = (Jf)^{\cdot}, \quad f \in D(\dot{J}) = \dot{l}_{\mathrm{fin}}; \qquad \dot{J}^+ \dot{f} = (J^+ f)^{\cdot}, \quad f \in D(\dot{J}^+) = \dot{l}_{\mathrm{fin}}.$$

For the subsequent considerations, let us denote by $N$ and $N^+$ the closures of $\dot{J}$ and $\dot{J}^+$ in $S$.

In the next step we use Theorem 5.4.2. For simplicity, we suppose that the given sequence $(s_{m,n})$ is non-degenerate, i.e., if $(f, f)_S = 0$ for $f \in l_{\mathrm{fin}}$, then $f = 0$; hence $\dot{f} = f$ and $N = \widetilde{J}$. The general case is more complicated; see [21], Chapter 8, §1, Subsect. 4, and [23], Chapter 5, §5, Subsections 1–3. We also assume for the moment that the operator $N$ is normal; below it will be proved that $N$ is indeed normal under the condition (5.4.5). In general, the conditions


for the existence of normal extensions of a formally normal operator, especially in connection with the complex moment problem, were considered in [272] (see also [83, 93]). Let us construct the rigging of spaces

$$(l_2(p))_{-,S} \supset S \supset l_2(p) \supset l_{\mathrm{fin}}, \qquad (5.4.13)$$

where $l_2(p)$ is the weighted $l_2$-space with a weight $p = (p_{m,n})_{m,n=0}^{\infty}$, $p_{m,n} \ge 1$. The norm in $l_2(p)$ is given by the expression

$$\|f\|^2_{l_2(p)} = \sum_{m,n=0}^{\infty} |f_{m,n}|^2 p_{m,n};$$

here $(l_2(p))_{-,S} = H_-$ is the negative space with respect to the positive space $l_2(p) = H_+$ and the zero space $S = H$.

Lemma 5.4.3 If the weight $(p_{m,n})$, $m, n \in \mathbb{N}_0$, grows sufficiently fast, then the embedding $l_2(p) \hookrightarrow S$ is quasi-nuclear.

Proof The inequality (5.4.4) means that the multimatrix $(K_{j,k;m,n})_{j,k,m,n=0}^{\infty}$, where $K_{j,k;m,n} = s_{j+n,k+m}$, is non-negative definite and, therefore, for $j, k, m, n \in \mathbb{N}_0$ we have

$$|s_{j+n,k+m}|^2 = |K_{j,k;m,n}|^2 \le K_{j,k;j,k} K_{m,n;m,n} = s_{j+k,j+k}\, s_{m+n,m+n}. \qquad (5.4.14)$$

Let the weight $q = (q_{j,k})_{j,k=0}^{\infty}$, $q_{j,k} \ge 1$, be such that $\sum_{j,k=0}^{\infty} s_{j+k,j+k}\, q_{j,k}^{-1} < \infty$. Then from (5.4.12) and (5.4.14) it follows that

$$\|f\|^2_S = \sum_{j,k,m,n=0}^{\infty} f_{j,k} \bar{f}_{m,n} s_{j+n,k+m} \le \Big( \sum_{j,k=0}^{\infty} \frac{s_{j+k,j+k}}{q_{j,k}} \Big) \|f\|^2_{l_2(q)}, \qquad f \in l_{\mathrm{fin}}.$$

Therefore, the embedding $l_2(q) \hookrightarrow S$ is topological. If, moreover, $\sum_{j,k=0}^{\infty} q_{j,k}\, p_{j,k}^{-1} < \infty$, then $l_2(p) \hookrightarrow l_2(q)$ is quasi-nuclear. The composition $l_2(p) \hookrightarrow S$ of a quasi-nuclear and a topological embedding is also quasi-nuclear. □

In the next step we use the rigging (5.4.13) to construct generalized eigenvectors. The inner structure of the space $(l_2(p))_{-,S}$ is complicated, because the structure of $S$ is complicated. For this reason we introduce a new auxiliary rigging

$$l = (l_{\mathrm{fin}})' \supset l_2(p^{-1}) \supset l_2 \supset l_2(p) \supset l_{\mathrm{fin}}, \qquad (5.4.15)$$

where $l_2(p^{-1})$, $p^{-1} = (p_{m,n}^{-1})_{m,n=0}^{\infty}$, is the negative space with respect to the positive space $l_2(p)$ and the zero space $l_2$. The chains (5.4.13) and (5.4.15) have the same positive space $l_2(p)$.


We use Lemma 2.7.5, which establishes an isomorphism between the spaces $(l_2(p))_{-,S}$ and $l_2(p^{-1})$; instead of the riggings (2.7.35), we consider (5.4.13) and (5.4.15). Let $\xi_\lambda \in (l_2(p))_{-,S}$ be a generalized eigenvector of the operator $N$ in terms of the chain (5.4.13). Thus, in this case, due to Theorem 5.4.2, we have

$$(\xi_\lambda, N^* f)_S = \lambda (\xi_\lambda, f)_S, \quad (\xi_\lambda, N f)_S = \bar{\lambda} (\xi_\lambda, f)_S, \qquad \lambda \in \mathbb{C}, \; f \in l_{\mathrm{fin}}. \qquad (5.4.16)$$

Denote

$$P(\lambda) = U \xi_\lambda \in l_2(p^{-1}) \subset l, \qquad P(\lambda) = (P_{m,n}(\lambda))_{m,n=0}^{\infty}, \quad P_{m,n}(\lambda) \in \mathbb{C}.$$

By using (2.7.36) we can rewrite (5.4.16) in the form

$$(P(\lambda), N^* f)_{l_2} = \lambda (P(\lambda), f)_{l_2}, \qquad (P(\lambda), N f)_{l_2} = \bar{\lambda} (P(\lambda), f)_{l_2}, \qquad \lambda \in \mathbb{C}, \; f \in l_{\mathrm{fin}}. \qquad (5.4.17)$$

The corresponding Fourier transform has the form

$$S \supset l_{\mathrm{fin}} \ni f \mapsto (Ff)(\lambda) = \hat{f}(\lambda) = (f, P(\lambda))_{l_2} \in L_2(\mathbb{C}, d\rho(\lambda)). \qquad (5.4.18)$$

Let us calculate $P(\lambda)$. The operator $N^*$ is defined by the second expression in (5.4.10) and, hence, (5.4.17) gives

$$\sum_{m,n=0}^{\infty} \lambda P_{m,n}(\lambda) \bar{f}_{m,n} = \lambda (P(\lambda), f)_{l_2} = (P(\lambda), N^* f)_{l_2} = (P(\lambda), J^+ f)_{l_2} = \sum_{m,n=0}^{\infty} P_{m+1,n}(\lambda) \bar{f}_{m,n}, \quad \forall f \in l_{\mathrm{fin}}. \qquad (5.4.19)$$

Similarly, using (5.4.10) and (5.4.17), we have

$$\sum_{m,n=0}^{\infty} \bar{\lambda} P_{m,n}(\lambda) \bar{f}_{m,n} = \bar{\lambda} (P(\lambda), f)_{l_2} = (P(\lambda), N f)_{l_2} = (P(\lambda), J f)_{l_2} = \sum_{m,n=0}^{\infty} P_{m,n+1}(\lambda) \bar{f}_{m,n}, \quad \forall f \in l_{\mathrm{fin}}. \qquad (5.4.20)$$

Hence, we have

$$\lambda P_{m,n}(\lambda) = P_{m+1,n}(\lambda), \qquad \bar{\lambda} P_{m,n}(\lambda) = P_{m,n+1}(\lambda), \qquad m, n \in \mathbb{N}_0.$$


Without loss of generality we can take $P_{0,0}(\lambda) = 1$, $\lambda \in \mathbb{C}$. Hence the last two equalities give

$$P_{m,n}(\lambda) = \lambda^m \bar{\lambda}^n, \qquad m, n \in \mathbb{N}_0. \qquad (5.4.21)$$

Thus, the Fourier transform (5.4.18) has the form

$$S \supset l_{\mathrm{fin}} \ni f \mapsto (Ff)(\lambda) = \hat{f}(\lambda) = \sum_{m,n=0}^{\infty} f_{m,n} \lambda^m \bar{\lambda}^n \in L_2(\mathbb{C}, d\rho(\lambda)) \qquad (5.4.22)$$

and

$$(f, g)_S = \int_{\mathbb{C}} \hat{f}(\lambda) \overline{\hat{g}(\lambda)} \, d\rho(\lambda), \qquad f, g \in l_{\mathrm{fin}}. \qquad (5.4.23)$$

To construct the Fourier transform (5.4.18) and verify the formulas (5.4.19)–(5.4.23), it is necessary to check that, for our operators $N$ and $N^*$, the vector $o = \delta_{0,0} \in l_{\mathrm{fin}}$ is strong cyclic in the sense of the chain (5.4.13). But this is evidently true, since by (5.4.11) we have

$$N^p (N^*)^q o = J^p (J^+)^q \delta_{0,0} = \delta_{q,p}.$$

The Parseval equality (5.4.23) immediately leads to the representation (5.4.1): according to (5.4.21) and (5.4.22), $\hat{\delta}_{m,n} = \lambda^m \bar{\lambda}^n$ and $\hat{\delta}_{0,0} = 1$; by (5.4.12) we get

$$s_{m,n} = (\delta_{m,n}, \delta_{0,0})_S = (\hat{\delta}_{m,n}, \hat{\delta}_{0,0})_{L_2(\mathbb{C}, d\rho(\lambda))} = \int_{\mathbb{C}} \lambda^m \bar{\lambda}^n \, d\rho(\lambda), \qquad m, n \in \mathbb{N}_0.$$
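The representation just obtained, together with the positivity computation at the start of the proof, can be illustrated numerically: for an assumed discrete measure $\rho = \sum_i w_i \delta_{z_i}$, the moments $s_{m,n} = \sum_i w_i z_i^m \bar{z}_i^n$ make the quadratic form of (5.4.4) equal to $\int |\sum f_{m,n} z^m \bar{z}^n|^2 d\rho \ge 0$. A sketch under these assumptions (not from the original text):

```python
# Hypothetical check: the quadratic form of (5.4.4) built from the moments of
# a discrete measure coincides with the integral of |sum f_{m,n} z^m zbar^n|^2.
import random

pts = [(0.5, 0.3 + 0.4j), (0.25, -1.1j), (0.25, 2.0 + 0.1j)]  # (weight, point)
s = lambda a, b: sum(w * z**a * z.conjugate()**b for w, z in pts)

random.seed(0)
f = {(j, k): complex(random.uniform(-1, 1), random.uniform(-1, 1))
     for j in range(3) for k in range(3)}

quad = sum(f[j, k] * f[m, n].conjugate() * s(j + n, k + m)
           for (j, k) in f for (m, n) in f)

integral = sum(w * abs(sum(v * z**m * z.conjugate()**n
                           for (m, n), v in f.items()))**2 for w, z in pts)
print(abs(quad - integral))   # the two expressions agree up to rounding
```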

The uniqueness of the representation (5.4.1) follows from the fact that the operator $N$ is normal (not merely formally normal). Thus, to finish the proof of Theorem 5.4.1, it remains only to check that the condition (5.4.5) provides the normality of $N$. To this end we introduce two closed Hermitian operators defined on the linear set

$$D = l_{\mathrm{fin}} = \mathrm{span}\{ \delta_{m,n} \mid m, n \in \mathbb{N}_0 \},$$

which is invariant under their action, by the expressions

$$A_1 = \frac{1}{2}(N + N^*), \qquad A_2 = \frac{1}{2i}(N - N^*).$$


For the normality of $N$ it is sufficient to show that the operators $A_1$ and $A_2$ are both self-adjoint and commute in the strong resolvent sense. For this (see Chap. 2, Theorems 2.6.8 and 2.6.11) it must be shown that the operator $A_1^2 + A_2^2$ is essentially self-adjoint, i.e., for example, has a total set $D$ of quasi-analytic vectors. Due to (5.4.10), the operator $A := A_1^2 + A_2^2$ acts on $\delta_{m,n} \in D$ as follows:

$$A \delta_{m,n} = (A_1^2 + A_2^2) \delta_{m,n} = N N^+ \delta_{m,n} = \delta_{m+1,n+1},$$

and for $p \ge 1$ we have

$$A^p \delta_{m,n} = \delta_{m+p,n+p}.$$

According to (5.4.12), the space $S$ has the norm $\|f\|_S = \sqrt{(f, f)_S}$. Hence, for every $\delta_{m,n} \in D$ we get

$$\|A^p \delta_{m,n}\|^2_S = \|\delta_{m+p,n+p}\|^2_S = s_{m+n+2p,\, m+n+2p}.$$

Since

$$\sum_{p=1}^{\infty} \frac{1}{\sqrt[p]{\|A^p \delta_{m,n}\|_S}} = \sum_{p=1}^{\infty} \frac{1}{\sqrt[2p]{s_{m+n+2p,\, m+n+2p}}},$$

we conclude that the quasi-analyticity of the class $C\{\|A^p \delta_{m,n}\|_S\}$ is equivalent to the quasi-analyticity of the class $C\{\sqrt{s_{m+n+2p,\, m+n+2p}}\}$, and, due to the properties of quasi-analyticity, this is equivalent to the quasi-analyticity of the class $C\{\sqrt{s_{2p,2p}}\}$. In any case, the operator $A$ is positive regardless of whether it is essentially self-adjoint. Therefore, Theorem 5.4.1 is proved. □

Remark 5.4.4 Consider, as a particular case, the real sequence $c_n := s_{n,n}$, $n \in \mathbb{N}_0$, instead of the complex $(s_{m,n})$ (that is, the case in which $s_{m,n} = 0$ for $m \neq n$, $m, n \in \mathbb{N}_0$) and, therefore, the corresponding one-dimensional real moment problem; then the well-known result can be obtained (see Chap. 3). The sequence $(c_n)$ has the representation

$$c_n = \int_0^{+\infty} t^{2n} \, d\rho(t)$$

if and only if it is a Stieltjes sequence, and the representation is unique if

$$\sum_{n=1}^{\infty} \frac{1}{\sqrt[2n]{c_{2n}}} = \infty,$$


provided the measure is supported on $[0, \infty)$. Here Theorem 2.6.8, Lemma 2.6.7, and (2.6.21) are used.

Remark 5.4.5 The condition (5.4.4) is only necessary in Theorem 5.4.1, i.e., for the representation (5.4.1). For clarity we propose a simple counterexample. Let $A_1$ and $A_2$ be two self-adjoint operators commuting on a linear set $D$ dense in the Hilbert space $H$, where $D$ is invariant under the action of $A_1$ and $A_2$. Suppose $A_1$ and $A_2$ are essentially self-adjoint on $D$, i.e., $A_1 = (A_1|_D)^{\sim}$, $A_2 = (A_2|_D)^{\sim}$, but $A_1$ and $A_2$ do not commute in the strong resolvent sense. The existence of operators of this sort is guaranteed by the Nelson example [225] (see also [48], Chapter 13, §9).

For the next consideration we put $\dot{N} = (A_1 + iA_2)|_D$ and $\bar{N} = (A_1 - iA_2)|_D$. In this case $\dot{N}$ is a formally normal operator that does not have a normal extension even in an arbitrarily wide Hilbert space [66, 253]. Hence the sequence

$$s_{m,n} := (\dot{N}^m f_0, \dot{N}^n f_0)_H, \qquad f_0 \in D,$$

obviously satisfies the condition (5.4.4), but we do not have a representation (5.4.1). Indeed, for a finite sequence $f = (f_{j,k})_{j,k=0}^{\infty}$ we have

$$\sum_{j,k,m,n} f_{j,k} \bar{f}_{m,n} s_{j+n,k+m} = \sum_{j,k,m,n} f_{j,k} \bar{f}_{m,n} (N^{j+n} f_0, N^{k+m} f_0)_H$$
$$= \Big( \sum_{j,k} f_{j,k} N^j (N^*)^k f_0, \; \sum_{m,n} f_{m,n} N^m (N^*)^n f_0 \Big)_H = \Big\| \sum_{m,n} f_{m,n} N^m (N^*)^n f_0 \Big\|^2_H \ge 0.$$

Finally, we recall that the classical moment problem is closely related to the theory of ordinary Jacobi matrices: see Chap. 3, Sect. 3.4, Theorem 3.4.3 about the Weyl–Hamburger circle. In general, such a connection is also possible for the complex moment problem, but the corresponding theory of the polynomials of the first kind $P_j(z)$ and of the second kind $Q_j(z)$ is not developed here, because the internal description of the block Jacobi matrices related to this problem has not been investigated.


5.5 The Weyl Function and Polynomials of the Second Kind in the Complex Moment Problem

A Weyl-type function can be introduced for the complex moment problem in power form, similarly to the way such a function is introduced in the classical moment problem (see Chap. 3). For the resolvent $R_z = (N - zI)^{-1}$ of a bounded normal operator $N$, let us set the function

$$M(z) := (R_z \delta_0, R_z \delta_0)_{l_2} = (R_z^* R_z \delta_0, \delta_0)_{l_2}, \qquad z \in \mathbb{C} \setminus S, \qquad (5.5.1)$$

where $S$ is the spectrum of the operator $N$ and $\delta_0$ is the corresponding cyclic vector. The Weyl-type function introduced in this way preserves its basic traditional property, which is described by the following theorem.

Theorem 5.5.1 Let $J$ be a fixed block Jacobi type matrix that generates a bounded normal operator $N$ in $l_2$, and let $R_z$ and $S$ be its resolvent and spectrum. Then the function $M(z) := (R_z \delta_0, R_z \delta_0)_{l_2}$, $z \in \mathbb{C} \setminus S$, uniquely determines the spectral measure $d\rho(z)$, $z \in S$, of this operator.

Proof According to the spectral theorem, the function $M(z)$ has the representation

$$M(z) := (R_z \delta_0, R_z \delta_0)_{l_2} = (R_z^* R_z \delta_0, \delta_0)_{l_2} = \int_S \frac{d\rho(\xi)}{(\xi - z)(\bar{\xi} - \bar{z})}, \qquad z \in \mathbb{C} \setminus S. \qquad (5.5.2)$$

Since the spectrum is located in the disc $|z| \le \|N\|$, for $|z| > \|N\|$ the function (5.5.2) can be written in the form

$$M(z) = \int_S \frac{d\rho(\xi)}{(\xi - z)(\bar{\xi} - \bar{z})} = \frac{1}{|z|^2} \int_S \frac{d\rho(\xi)}{(1 - \xi z^{-1})(1 - \bar{\xi} \bar{z}^{-1})}$$
$$= \frac{1}{|z|^2} \int_S \sum_{m=0}^{\infty} \Big( \frac{\xi}{z} \Big)^m \sum_{n=0}^{\infty} \Big( \frac{\bar{\xi}}{\bar{z}} \Big)^n \, d\rho(\xi) = \frac{1}{|z|^2} \sum_{m,n=0}^{\infty} \frac{1}{z^m \bar{z}^n} \int_S \xi^m \bar{\xi}^n \, d\rho(\xi)$$
$$= \frac{1}{|z|^2} \sum_{m,n=0}^{\infty} \frac{s_{m,n}}{z^m \bar{z}^n}, \qquad (5.5.3)$$


where $(s_{m,n})$, $m, n \in \mathbb{N}_0$, are the moments (5.4.1) of the measure $d\rho(z)$. From the representation (5.5.3) of the function $M(z)$ in the form of a Taylor series in the variables $z^{-m} \bar{z}^{-n}$ under the condition $|z| > \|N\|$, it follows that the coefficients $s_{m,n}$ of this series are uniquely recovered from $M(z)$. These coefficients are the (complex) moments of the measure $d\rho(z)$, and they admit the estimate $|s_{m,n}| \le \|N\|^{m+n}$, $m, n \in \mathbb{N}_0$. On the other hand, the measure $d\rho(z)$ is uniquely recovered from the moments $s_{m,n}$, $m, n \in \mathbb{N}_0$, and therefore it is uniquely recovered from the function $M(z)$. □

Polynomials of the second kind corresponding to the complex moment problem can also be introduced similarly to the way it is done in the classical moment problem (see Chap. 3). Take

$$Q_n(z) := \int_{\mathbb{C}} \frac{P_n(\xi) - P_n(z)}{\xi - z} \, d\rho(\xi), \qquad (5.5.4)$$

where $P_n(z)$ are polynomials of the first kind, $z, \xi \in \mathbb{C} \setminus S$, and $d\rho(\xi)$ is a measure with compact support on the complex plane corresponding to a bounded normal operator and its adjoint generated by block Jacobi type matrices (see Sects. 5.1–5.3 and, in particular, 5.4). The main property of the polynomials introduced in (5.5.4) is described by the following theorem.

Theorem 5.5.2 The sequence $Q(z) = (Q_n(z))_{n=0}^{\infty}$, $z \in \mathbb{C} \setminus S$, with $Q_n(z)$ given by the formula (5.5.4), is a solution of the system (5.2.7), namely,

$$(J Q(z))_n = a_{n-1} Q_{n-1}(z) + b_n Q_n(z) + c_n Q_{n+1}(z) = z Q_n(z),$$
$$(J^+ Q(z))_n = c_{n-1}^* Q_{n-1}(z) + b_n^* Q_n(z) + a_n^* Q_{n+1}(z) = \bar{z} Q_n(z), \qquad n \in \mathbb{N}_0, \qquad (5.5.5)$$

where $Q_{-1}(z) := -1$ and $a_{-1} := 1$.

In particular, $Q_0(z) = 0$.

Proof Since $P_0 = P_{0;0} = 1$, it follows immediately from (5.5.4) that $Q_0 = Q_{0;0} = 0$. Let us check that the $Q_n(z) = (Q_{n;0}, Q_{n;1}, \dots, Q_{n;n})$, $n \in \mathbb{N}$, also satisfy (5.5.5). Indeed,

$$(J Q(z))_n = \int_{\mathbb{C}} \frac{(J P(\xi))_n - (J P(z))_n}{\xi - z} \, d\rho(\xi) = \int_{\mathbb{C}} \frac{\xi P_n(\xi) - z P_n(z)}{\xi - z} \, d\rho(\xi)$$
$$= z \int_{\mathbb{C}} \frac{P_n(\xi) - P_n(z)}{\xi - z} \, d\rho(\xi) + \int_{\mathbb{C}} P_n(\xi) \, d\rho(\xi). \qquad (5.5.6)$$


The last integral in (5.5.6) is zero, so the first equality in (5.5.5) is proved. The second equality in (5.5.5) is proved similarly. □
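The moment expansion (5.5.3) underlying Theorem 5.5.1 can be checked numerically. The sketch below (an illustration, not from the original text) assumes a small discrete measure with support in $|\xi| \le R$ and a point $|z| > R$, and compares the integral form of the Weyl-type function with its truncated double series:

```python
# Hypothetical check of (5.5.3): M(z) computed directly versus the truncated
# series |z|^{-2} * sum_{m,n} s_{m,n} z^{-m} conj(z)^{-n} for |z| > R.
pts = [(0.5, 0.6 + 0.3j), (0.5, -0.2 - 0.4j)]       # support radius R < 1
s = lambda m, n: sum(w * x**m * x.conjugate()**n for w, x in pts)

z = 2.0 + 1.0j                                       # |z| well outside the support
M_direct = sum(w / ((x - z) * (x.conjugate() - z.conjugate())) for w, x in pts)

N = 40                                               # truncation order of the series
M_series = sum(s(m, n) / (z**m * z.conjugate()**n)
               for m in range(N) for n in range(N)) / abs(z)**2
print(abs(M_direct - M_series))   # tiny truncation error
```

The agreement reflects the geometric convergence of the series for $|\xi/z| < 1$, which is exactly the regime used in the proof.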

Bibliographical Notes

The complex moment problem is a generalization of the (real one-dimensional) classical moment problem, which is described in the works of Akhiezer N. I. [2, 3]. To obtain the integral representation, the method of generalized eigenvector expansions for a normal operator (for a pair of commuting self-adjoint operators) developed by Berezansky Yu. M. [17, 18] is used, as well as the spectral theory of families of commuting self-adjoint operators of Samoilenko Yu. S. [250]. Historically, the first information about the complex moment problem is due to Kilpi Y. [157–160]. The main content of the chapter is presented in papers of Berezansky Yu. M. and Dudkin M. E. [34, 37–39] and also of Ivasiuk I. Ya. [141]. Papers of Stochel J. and Szafraniec F. H. [269–272, 275] are directly related to investigations of the complex moment problem and some of its generalizations. The indeterminate case of the complex moment problem is related to the problem of describing the commuting self-adjoint extensions of operators, which is treated in works of Bokhonov Yu. E. [57], Vaĭnerman L. I. [287], Višik M. I. [290], and also Jørgensen P. E. T. [145]. In the course of this research, orthogonal polynomials on the complex plane arise. General information about orthogonal polynomials can be found in the works of Geronimus Ya. L. [117], Szegő G. [278], and Suetin P. K. [273, 274]. The constructed block matrices can generate operators with non-trivial deficiency indices, see Dyukarev Yu. M. [97]. Scales of Hilbert spaces are constructed for normal operators, as for self-adjoint operators in the book of Koshmanenko V. D. and Dudkin M. E. [171]. The Weyl function for the case of a normal operator is obtained as a generalization of the paper of Brasche J. F., Malamud M. M., and Neidhardt H. [59], where the Weyl function in the self-adjoint case was considered, and also of Gesztesy F. and Simon B. [120]. Normal extensions of a formally normal operator are discussed by Biriuk G. and Coddington E. [53, 65, 66], Dudkin M. E. and Nizhnik L. P. [83, 93], Szafraniec F. H. [276], and Schmüdgen K. [252, 254]; see also other related discussions [255]. Papers of Bisgaard T. M. and Devinatz A. [55, 56, 81] considered generalizations of the complex moment problem to the two-sided and other cases. The question of the uniqueness of the representation is solved with the help of the papers of Carleman T. [64], Mandelbrojt S. [206], Nelson E. [225], and Nussbaum A. E. [231, 232].

Chapter 6

Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

This chapter considers yet another generalization, in comparison with Chap. 5, of the connection between the classical moment problem and the spectral theory of self-adjoint Jacobi matrices. Namely, an analogue of the Jacobi matrix is proposed which is related to the trigonometric moment problem, as well as to the system of polynomials orthogonal with respect to some probability measure supported on the unit circle. Such a matrix also has a block tri-diagonal structure and generates a unitary operator acting in a space of $l_2$ type. By using this connection, the existence of a one-to-one correspondence between probability measures on the unit circle and block tri-diagonal Jacobi type unitary matrices is established. Thus, the main question of the chapter is the following: in what way is it possible to generalize the construction given in the preceding chapters, in particular in the study of orthogonal polynomials, to the case of the unit circle $\mathbb{T} \subset \mathbb{C}$? Like Chaps. 3 and 5, this chapter ends with a statement of the corresponding (trigonometric) moment problem.

6.1 Introduction to the Unitary Block Jacobi Type Matrices

Let us dwell on the case of the trigonometric moment problem. In other words, the construction of Chaps. 3, 4 and 5 applies to the case when the matrix $J$ of the form (5.1.35), (5.1.36) is a unitary operator. The difference with Chap. 5 is that the spectrum of the corresponding operator generated by the matrix $J$ is supported on the unit circle $\mathbb{T} \subset \mathbb{C}$ and, hence, the set of functions (5.1.4) is linearly dependent, because in this case

$$z^{j+n} \bar{z}^{k+n} = z^j \bar{z}^k, \qquad j, k \in \mathbb{N}_0, \quad \forall n \in \mathbb{N}_0, \quad z \in \mathbb{T}.$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_6


Therefore, only those functions from (5.1.4) are selected for which $j \cdot k = 0$, i.e.,

$$z^0 \bar{z}^0 = 1; \quad z^1 \bar{z}^0 = z^1, \; z^0 \bar{z}^1 = \bar{z}^1; \quad z^2 \bar{z}^0 = z^2, \; z^0 \bar{z}^2 = \bar{z}^2; \quad \dots; \quad z^n \bar{z}^0 = z^n, \; z^0 \bar{z}^n = \bar{z}^n; \quad \dots, \qquad z \in \mathbb{T}. \qquad (6.1.1)$$

In the case of classical Jacobi matrices $\bar{z} = z$ for $z \in \mathbb{R}$. We consider only infinite Jacobi type matrices; therefore the support of $d\rho(z)$ on $\mathbb{T}$ is not contained in a finite number of points, and hence the functions (6.1.1) are linearly independent in $L_2(\mathbb{T}, d\rho(z))$. The global linear order of the sequence (6.1.1) is the same as in (5.1.5), Fig. 5.1; but now each "diagonal" $n = 1, 2, \dots$ has only 2 non-zero elements (instead of $n + 1$ as before). In this case, the operator $\tilde{J}$ is considered in a certain subspace $l_{2,u}$ of the space $l_2$ given by the expression (5.1.9):

$$l_{2,u} = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \quad \text{where} \quad H_0 = \mathbb{C}, \; H_1 = H_2 = \cdots = \mathbb{C}^2; \qquad (6.1.2)$$

$$l_{2,u} \subset l_2.$$

The corresponding block matrix acts as an operator in the space (6.1.2), and its elements have the form

$$a_0: \mathbb{C} \to \mathbb{C}^2, \quad b_0: \mathbb{C} \to \mathbb{C}, \quad c_0: \mathbb{C}^2 \to \mathbb{C}, \qquad a_n, b_n, c_n: \mathbb{C}^2 \to \mathbb{C}^2, \quad n \in \mathbb{N}, \qquad (6.1.3)$$

with the conditions

$$a_0 = \begin{pmatrix} a_{0;0,0} \\ 0 \end{pmatrix}, \qquad b_0 = \begin{pmatrix} b_{0;0,0} \end{pmatrix}, \qquad c_0 = \begin{pmatrix} c_{0;0,0} & c_{0;0,1} \end{pmatrix};$$

$$a_n = \begin{pmatrix} a_{n;0,0} & a_{n;0,1} \\ 0 & 0 \end{pmatrix}, \qquad c_n = \begin{pmatrix} 0 & 0 \\ c_{n;1,0} & c_{n;1,1} \end{pmatrix}, \qquad n \in \mathbb{N};$$

$$a_{0;0,0}, \; c_{0;0,1}, \; a_{n;0,0}, \; c_{n;1,1} > 0, \qquad n \in \mathbb{N}. \qquad (6.1.4)$$


Therefore, the general results concerning normal operators also hold true for the unitary operator $\tilde{J}$ acting in the space $l_{2,u}$ (6.1.2) and obtained as a block Jacobi type matrix with elements from (6.1.3) and with the condition (6.1.4). That is, the direct spectral problem leads to a Fourier transform of the type (5.2.33) acting from the space $l_{2,u}$ into $L_2(\mathbb{T}, d\rho(z))$. The inverse spectral problem has a solution similar to that described in Chap. 5; this time we use the orthogonalization of the sequence (6.1.1). This problem is, of course, closely related to the trigonometric moment problem.

6.2 Construction of the Tri-Diagonal Block Jacobi Type Matrices of a Unitary Operator

This section contains the direct and inverse problems, discussed above, for the block tri-diagonal Jacobi type matrix of the unitary operator associated with the trigonometric moment problem on the unit circle. The solution of the inverse problem is presented in Theorem 6.2.5: the unitary operator, given by its spectral measure on the unit circle, is represented in the form of a block tri-diagonal Jacobi type matrix acting in the orthonormal basis of polynomials on the unit circle. The direct problem is presented in Theorem 6.3.3: for a given tri-diagonal Jacobi type matrix, the spectral measure (with support on the unit circle) is recovered in the sense of Fourier coefficients obtained from the expansion in generalized eigenvectors of the corresponding unitary operator. The direct and inverse problems together obviously establish a one-to-one correspondence between such unitary operators and infinite (five-diagonal) block tri-diagonal Jacobi matrices.

Let us denote by $\mathbb{T} = \{z \in \mathbb{C} \mid |z| = 1\} = \{e^{i\theta} \mid \theta \in [0, 2\pi)\}$ the unit circle on the complex plane $\mathbb{C}$. Let $d\rho(z) = d\rho(\theta)$ be a Borel probability measure on $\mathbb{T}$ and $L_2 = L_2(\mathbb{T}, d\rho(\theta))$ the space of square integrable complex valued functions defined on $\mathbb{T}$. Suppose that the measure is not concentrated in a finite number of points; therefore, all the functions $[0, 2\pi) \ni \theta \mapsto e^{il\theta}$, $l \in \mathbb{Z}$, are linearly independent in $L_2$. Let us consider the sequence of functions

6.2 Construction of the Tri-Diagonal Block Jacobi Type Matrices of a Unitary Operator This section contains, discussed above, the direct and inverse problems for the block tri-diagonal Jacobi type matrix of the unitary operator associated with the trigonometric moment problem on the unit circle. The solution of the inverse problem is presented in Theorem 6.2.5. Namely, the unitary operator, given by its spectral measure on the unit circle, is represented in the form of a block tri-diagonal type Jacobi matrix acting in the orthonormal basis of polynomials on the unit circle. The direct problem is presented in Theorem 6.3.3. Namely, for a given tri-diagonal Jacobi type matrix, the spectral measure (with support on the unit circle) is restored in the sense of Fourier coefficients that obtained from the generalized eigenvectors expansion into series, respectively to corresponding unitary operator. The direct and inverse problems together obviously establish a one-to-one correspondence between unitary operators and infinite (five-diagonal) block tridiagonal Jacobi matrices. Let us denote .T = {z ∈ C | |z| = 1} = {eiθ | θ ∈ [0, 2π )} a unit circle on the complex plane .C. Let .dρ(z) = dρ(θ ) be a Borel probability measure on .T and .L2 = L2 (T, dρ(θ )) be the space of square integrable complex valued functions defined on .T. Suppose that the measure is not concentrated in a finite number of points and, therefore, all functions .[0, 2π ) e θ − | → eliθ , l ∈ Z are linear independent in .L2 . Let us consider the sequence of functions 1;

.

eiθ , e−iθ ;

e2iθ , e−2iθ ;

...;

eniθ , e−niθ ;

...,

(6.2.1)


and apply to it the Gram–Schmidt orthogonalization procedure. As a result we get the orthonormal system of polynomials

$$P_0(\theta) = 1;$$
$$P_{1;1}(\theta) = k_{1;1} e^{i\theta} + \cdots, \quad P_{1;2}(\theta) = k_{1;2} e^{-i\theta} + \cdots;$$
$$P_{2;1}(\theta) = k_{2;1} e^{2i\theta} + \cdots, \quad P_{2;2}(\theta) = k_{2;2} e^{-2i\theta} + \cdots;$$
$$\cdots$$
$$P_{n;1}(\theta) = k_{n;1} e^{ni\theta} + \cdots, \quad P_{n;2}(\theta) = k_{n;2} e^{-ni\theta} + \cdots;$$
$$\cdots \qquad (6.2.2)$$

in the variables $e^{i\theta}$ and $e^{-i\theta}$, where $k_{n;1} > 0$, $k_{n;2} > 0$ and "$+ \cdots$" denotes the terms after the first in the corresponding polynomial. Thus, $P_{n;1}$ is a linear combination of

$$\{1, e^{i\theta}, e^{-i\theta}, \dots, e^{(n-1)i\theta}, e^{-(n-1)i\theta}, e^{ni\theta}\}, \qquad (6.2.3)$$

and $P_{n;2}$ is a linear combination of

$$\{1, e^{i\theta}, e^{-i\theta}, \dots, e^{(n-1)i\theta}, e^{-(n-1)i\theta}, e^{ni\theta}, e^{-ni\theta}\}. \qquad (6.2.4)$$

For all $n \in \mathbb{N}$ we denote by $\mathcal{P}_{n;1}$ and $\mathcal{P}_{n;2}$ the spaces spanned by (6.2.3) and (6.2.4), respectively. Hence $P_{n;1}(\theta)$ (resp. $P_{n;2}(\theta)$) is orthogonal to $\mathcal{P}_{n-1;2}$ (resp. $\mathcal{P}_{n;1}$). It is clear that for all $n \in \mathbb{N}$ we have

$$\mathcal{P}_0 \subset \mathcal{P}_{1;1} \subset \mathcal{P}_{1;2} \subset \cdots \subset \mathcal{P}_{n;1} \subset \mathcal{P}_{n;2} \subset \cdots,$$
$$\mathcal{P}_{n;1} = \{P_0(\theta)\} \oplus \{P_{1;1}(\theta)\} \oplus \cdots \oplus \{P_{n-1;1}(\theta)\} \oplus \{P_{n-1;2}(\theta)\} \oplus \{P_{n;1}(\theta)\},$$
$$\mathcal{P}_{n;2} = \mathcal{P}_{n;1} \oplus \{P_{n;2}(\theta)\} = \{P_0(\theta)\} \oplus \{P_{1;1}(\theta)\} \oplus \cdots \oplus \{P_{n;1}(\theta)\} \oplus \{P_{n;2}(\theta)\}, \qquad (6.2.5)$$

where $\{P_{n;\alpha}(\theta)\}$, $n \in \mathbb{N}$, $\alpha = 1, 2$, denotes the one-dimensional space generated by $P_{n;\alpha}(\theta)$; $\mathcal{P}_0 = \mathbb{C}$. In the subsequent investigation we need the special Hilbert space

$$l_{2,u} = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \qquad H_0 = \mathbb{C}, \quad H_1 = H_2 = \cdots = \mathbb{C}^2, \qquad (6.2.6)$$

i.e., (6.1.2).

293

Each vector .f ∈ l2,u has a form .f = (fn )∞ n=0 , .fn ∈ Hn , and hence, ||f ||2l2,u =

∞ E

.

||fn ||2Hn < ∞,

(f, g)l2,u =

n=0

∞ E (fn , gn )Hn ,

∀f, g ∈ l2,u .

n=0

For .n ∈ N we denote .(fn;1 , fn;2 ) coordinates of a vector .fn ∈ H in the space C2 with a usual orthonormal basis .{en;1 , en;2 }, hence, for a vector we have a representation .fn = (fn;1 , fn;2 ). By using the orthonormal system (6.2.2), we define a mapping .l2,u into .L2 . We put

.

∀θ ∈ [0, 2π ), Pn (θ ) = (Pn;1 (θ ), Pn;2 (θ )) ∈ Hn ,

.

then ˆ l2,u e f = (fn )∞ n=0 |−→ f (θ ) =

.

∞ E (fn , Pn (θ ))Hn ∈ L2 .

(6.2.7)

n=0

Since for .n ∈ N (fn , Pn (θ ))Hn = fn;1 Pn;1 (θ ) + fn;2 Pn;2 (θ )

.

and ||f ||2l2,u = ||(f0 , f1;1 , f1;2 , f2,1 , . . .)||2l2 ,

.

it is obvious that (6.2.7) is the mapping of the usual space $l_2$ into $L_2$ defined by the orthonormal system (6.2.2); thus this mapping is an isometry. The image of $l_{2,u}$ under the transformation (6.2.7) coincides with the space $L_2$. Indeed, linear combinations of the functions (6.2.1) approximate uniformly any continuous function on $\mathbb{T}$, and such a set of functions is dense in $L_2$; hence the system (6.2.2) is total in $L_2$. Therefore the mapping (6.2.7) is an isometric operator (denoted by $I$) which embeds $l_{2,u}$ into $L_2$.

Let $A$ be a bounded linear operator defined on the space $l_{2,u}$. We will construct an operator-valued matrix $(a_{j,k})_{j,k=0}^{\infty}$, where for all $j, k \in \mathbb{N}_0$ each element $a_{j,k}$ is an operator from $H_k$ into $H_j$, i.e., for all $f, g \in l_{2,u}$ we have

$$(Af)_j = \sum_{k=0}^{\infty} a_{j,k} f_k, \quad j \in \mathbb{N}_0, \qquad (Af, g)_{l_{2,u}} = \sum_{j,k=0}^{\infty} (a_{j,k} f_k, g_j)_{H_j}. \qquad (6.2.8)$$

For the proof of (6.2.8) we need only write the usual matrix of the operator $A$ in the basis $e_0 = 1, e_{1;1}, e_{1;2}, e_{2;1}, \dots$

in the space $l_2$. Then for each $j, k \in \mathbb{N}$, $a_{j,k}$ is an operator $H_k \to H_j$ that has the matrix representation $(a_{j,k;\alpha,\beta})_{\alpha,\beta=1}^{2}$, so that

$$a_{j,k;\alpha,\beta} = (A e_{k;\beta}, e_{j;\alpha})_{l_2}. \qquad (6.2.9)$$

If $j = 0$, $k = 1, 2, \dots$, then $(a_{0,k;\beta})_{\beta=1}^{2}$ is a $(1 \times 2)$-matrix operator $H_k \to H_0$, where $a_{0,k;\beta} = (A e_{k;\beta}, e_0)_{l_2}$; if $k = 0$, $j = 1, 2, \dots$, then $(a_{j,0;\alpha})_{\alpha=1}^{2}$ is a $(2 \times 1)$-matrix operator $H_0 \to H_j$, where $a_{j,0;\alpha} = (A e_0, e_{j;\alpha})_{l_2}$. Finally, $a_{0,0}$ is a $(1 \times 1)$-matrix operator (a scalar) $H_0 \to H_0$, where $a_{0,0} = (A e_0, e_0)_{l_2}$.

Let us consider the image $\hat{A} = I A I^{-1}: L_2 \to L_2$ of the above operator $A$ under the mapping (6.2.7). Its matrix in the basis (6.2.2), namely

$$P_0(\theta), P_{1;1}(\theta), P_{1;2}(\theta), P_{2;1}(\theta), \dots,$$

is equal to the usual matrix of the operator $A: l_2 \to l_2$ in the corresponding basis $(e_0, e_{1;1}, e_{1;2}, e_{2;1}, \dots)$. By using (6.2.9) and the above-mentioned procedure, we get the operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ for $A: l_2 \to l_2$. By definition this matrix is also the operator matrix of $\hat{A}: L_2 \to L_2$. It is clear that $\hat{A}$ can be an arbitrary bounded linear operator in $L_2$.

.

Lemma 6.2.1 Let $\hat{A}$ be the unitary operator of multiplication by $e^{i\theta}$ in the space $L_2$, namely

$$L_2 \ni \varphi(\theta) \mapsto (\hat{A}\varphi)(\theta) = e^{i\theta} \varphi(\theta) \in L_2.$$

Then the operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ of the operator $A$ (i.e., $A = I^{-1} \hat{A} I$) has a tri-diagonal structure, i.e., $a_{j,k} = 0$ for $|j - k| > 1$.

Proof By using (6.2.9) for $e_{n;\gamma} = I^{-1} P_{n;\gamma}(\theta)$, $n \in \mathbb{N}$, $\gamma = 1, 2$, we have for $j, k \in \mathbb{N}$

$$a_{j,k;\alpha,\beta} = (A e_{k;\beta}, e_{j;\alpha})_{l_2} = \int_0^{2\pi} e^{i\theta} P_{k;\beta}(\theta) \overline{P_{j;\alpha}(\theta)} \, d\rho(\theta), \qquad \alpha, \beta = 1, 2. \qquad (6.2.10)$$

From (6.2.2) we have $e^{i\theta} P_{k;1}(\theta) \in \mathcal{P}_{k+1;1}$ and $e^{i\theta} P_{k;2}(\theta) \in \mathcal{P}_{k+1;1}$. According to (6.2.5), the integral in (6.2.10) is equal to zero for $j > k + 1$ and for each $\alpha = 1, 2$. On the other hand, the integral in (6.2.10) can be written as

$$a_{j,k;\alpha,\beta} = \overline{\int_0^{2\pi} e^{-i\theta} P_{j;\alpha}(\theta) \overline{P_{k;\beta}(\theta)} \, d\rho(\theta)}.$$


From (6.2.2) we now have that $e^{-i\theta} P_{j;1}(\theta) \in \mathcal{P}_{j;2}$ and $e^{-i\theta} P_{j;2}(\theta) \in \mathcal{P}_{j+1;2}$. According to (6.2.5), the last integral is equal to zero for $k > j + 1$ and for each $\beta = 1, 2$. As a result, the integrals in (6.2.10), i.e., the coefficients $a_{j,k;\alpha,\beta}$, $j, k \in \mathbb{N}$, are equal to zero for $|j - k| > 1$, $\alpha, \beta = 1, 2$. The cases $j = 1, 2, \dots$, $k = 0$ and $j = 0$, $k = 1, 2, \dots$ are considered similarly (it is necessary to take into account that $e_0 = I^{-1} P_0(\theta)$, $P_0(\theta) = 1$). □
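The block tri-diagonal (five-diagonal) structure established in Lemma 6.2.1 can be observed numerically: orthogonalizing the sequence (6.2.1) with respect to an assumed discrete measure on $\mathbb{T}$ and forming the matrix of multiplication by $e^{i\theta}$ yields entries that vanish outside the block tri-diagonal band. A sketch (the measure and the truncation order are arbitrary illustrative choices, not from the original text):

```python
# Sketch: Gram-Schmidt on 1, e^{i t}, e^{-i t}, e^{2i t}, ... for a discrete
# measure on the circle; the matrix of multiplication by e^{i t} in the
# resulting orthonormal basis is five-diagonal (block tri-diagonal with
# blocks H_0 = C, H_1 = H_2 = ... = C^2).
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 50)        # support points of the measure
w = np.full(50, 1.0 / 50)                    # probability weights

powers = [0, 1, -1, 2, -2, 3, -3, 4, -4]     # order of the sequence (6.2.1)
F = np.exp(1j * np.outer(theta, powers))     # samples of the functions

def dot(f, g):                               # L2(T, d rho) inner product
    return np.sum(w * f * np.conj(g))

basis = []
for col in F.T:                              # Gram-Schmidt (repeated for stability)
    v = col.astype(complex)
    for _ in range(2):
        v = v - sum(dot(v, b) * b for b in basis)
    basis.append(v / np.sqrt(dot(v, v).real))

mult = np.exp(1j * theta)                    # multiplication by e^{i t}
a = np.array([[dot(mult * pk, pj) for pk in basis] for pj in basis])

# scalar index i belongs to block (i + 1) // 2; the lemma says entries with
# block distance > 1 vanish, which makes the scalar matrix five-diagonal
blk = lambda i: (i + 1) // 2
off = max(abs(a[i, j]) for i in range(9) for j in range(9)
          if abs(blk(i) - blk(j)) > 1)
print(off)   # vanishes up to rounding
```

This is exactly the zero pattern of (6.2.11): only the blocks $a_{j,j-1}$, $a_{j,j}$, $a_{j,j+1}$ may be non-zero.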

a0,0 a0,1 0 0 0 a1,0 a1,1 a1,2 0 0 0 a2,1 a2,2 a2,3 0 0 0 a3,2 a3,3 a3,4 .. .. .. .. .. . . . . .

... ... ... ... .. .

⎞ ⎟ ⎟ ⎟ ⎟. ⎟ ⎠

(6.2.11)

A detailed analysis of the expression (6.2.10) and a similar expression for .j, k ∈ N0 allows us to find out the zero elements of the matrices .(aj,k;α,β )2α,β=1 in the case of .|j − k| ≤ 1. Let us describe the properties of the matrices using permutations of indices .j, k and .α, β. Note that (6.2.10) implies the uniform boundedness of the norms .||aj,k || with respect to .j, k ∈ N0 . Lemma 6.2.2 For polynomials .Pn;α (θ ) and subspaces .Pm,β , .n, m ∈ N0 , .α, β = 1, 2, relations

.

eiθ Pn;1 (θ ) ∈ Pn+1;1 ,

e−iθ Pn;1 (θ ) ∈ Pn;2 ;

eiθ Pn;2 (θ ) ∈ Pn+1;1 ,

e−iθ Pn;2 (θ ) ∈ Pn+1;2 ;

eiθ P0 (θ ) ∈ P1;1 ,

e−iθ P0 (θ ) ∈ P1;2 .

(6.2.12)

hold true. Proof According to (6.2.2), the polynomial .Pn;1 (θ ), .n ∈ N is some linear combination of {1, eiθ , e−iθ , . . . , e(n−1)iθ , e−(n−1)iθ , eniθ }.

.

Thus, the multiplication by .eiθ gives a linear combination {eiθ , e2iθ , 1, . . . , eniθ , e−(n−2)iθ e(n+1)iθ },

.

296

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

and such a linear combination belongs to $\mathcal{P}_{n+1;1}$. Similarly, multiplication by $e^{-i\theta}$ gives a linear combination
$$\{e^{-i\theta},\ 1,\ e^{-2i\theta},\ \ldots,\ e^{(n-2)i\theta},\ e^{-ni\theta},\ e^{(n-1)i\theta}\},$$
which belongs to $\mathcal{P}_{n;2}$.

According to (6.2.2), the polynomial $P_{n;2}(\theta)$, $n \in \mathbb{N}$, is a linear combination of
$$\{1,\ e^{i\theta},\ e^{-i\theta},\ \ldots,\ e^{(n-1)i\theta},\ e^{-(n-1)i\theta},\ e^{ni\theta},\ e^{-ni\theta}\}.$$
Hence, multiplication by $e^{i\theta}$ gives a linear combination
$$\{e^{i\theta},\ e^{2i\theta},\ 1,\ \ldots,\ e^{ni\theta},\ e^{-(n-2)i\theta},\ e^{(n+1)i\theta},\ e^{-(n-1)i\theta}\},$$
which belongs to $\mathcal{P}_{n+1;1}$. Similarly, multiplication by $e^{-i\theta}$ gives a linear combination
$$\{e^{-i\theta},\ 1,\ e^{-2i\theta},\ \ldots,\ e^{(n-2)i\theta},\ e^{-ni\theta},\ e^{(n-1)i\theta},\ e^{-(n+1)i\theta}\},$$
which belongs to $\mathcal{P}_{n+1;2}$. The case $n = 0$ is obvious; hence all the relations in (6.2.12) are proved. $\square$
Let us denote by $((a^*)_{j,k})_{j,k=0}^{\infty}$ the operator matrix of the operator $\hat A^*$, adjoint to $\hat A$. We remark that $\hat A^* = \hat A^{-1}$ is the operator of multiplication by $e^{-i\theta}$. Taking into account the expression (6.2.10), for $j,k \in \mathbb{N}$, $\alpha,\beta = 1,2$, we have
$$(a^*)_{j,k;\alpha,\beta} = \int_0^{2\pi}e^{-i\theta}P_{k;\beta}(\theta)\overline{P_{j;\alpha}(\theta)}\,d\rho(\theta) = \overline{\int_0^{2\pi}e^{i\theta}P_{j;\alpha}(\theta)\overline{P_{k;\beta}(\theta)}\,d\rho(\theta)} = \bar a_{k,j;\beta,\alpha}. \tag{6.2.13}$$
In the cases $j = 0$, $k \in \mathbb{N}$; $k = 0$, $j \in \mathbb{N}$; and $j = k = 0$, instead of (6.2.10) we have
$$a_{0,k;\beta} = \int_0^{2\pi}e^{i\theta}P_{k;\beta}(\theta)\,d\rho(\theta),\quad k \in \mathbb{N},\ \beta = 1,2;$$
$$a_{j,0;\alpha} = \int_0^{2\pi}e^{i\theta}\overline{P_{j;\alpha}(\theta)}\,d\rho(\theta),\quad j \in \mathbb{N},\ \alpha = 1,2;$$
$$a_{0,0} = \int_0^{2\pi}e^{i\theta}\,d\rho(\theta). \tag{6.2.14}$$
6.2 Construction of the Tri-Diagonal Block Jacobi Type Matrices of a Unitary. . .

In these cases the equality (6.2.13) takes the form
$$(a^*)_{0,k;\beta} = \bar a_{k,0;\beta},\qquad (a^*)_{j,0;\alpha} = \bar a_{0,j;\alpha},\qquad (a^*)_{0,0} = \bar a_{0,0},\quad j,k \in \mathbb{N},\ \alpha,\beta = 1,2. \tag{6.2.15}$$
Lemma 6.2.3 Let $(a_{j,k})_{j,k=0}^{\infty}$ be the matrix of the operator of multiplication by $e^{i\theta}$ in $L_2$, where $a_{j,k}\colon H_k \to H_j$, and $a_{0,0}$, $a_{0,k} = (a_{0,k;\beta})_{\beta=1}^2$, $a_{j,0} = (a_{j,0;\alpha})_{\alpha=1}^2$, $a_{j,k} = (a_{j,k;\alpha,\beta})_{\alpha,\beta=1}^2$ are the matrices of the operators $a_{j,k}$ in the standard (usual) basis. Then
$$a_{j,0;2} = a_{j,j+1;1,1} = a_{j,j+1;1,2} = a_{j+1,j;2,1} = a_{j+1,j;2,2} = 0,\quad j \in \mathbb{N}. \tag{6.2.16}$$
Proof According to (6.2.14), for $j \in \mathbb{N}$ we have
$$a_{j,0;2} = \int_0^{2\pi}e^{i\theta}\overline{P_{j;2}(\theta)}\,d\rho(\theta). \tag{6.2.17}$$
Due to (6.2.12), $e^{i\theta} = e^{i\theta}P_0(\theta) \in \mathcal{P}_{1;1}$, but $P_{j;2}(\theta)$ is orthogonal to $\mathcal{P}_{1;1}$ for $j \in \mathbb{N}$ (see (6.2.5)). Hence, in this case the integral in (6.2.17) is equal to zero.

According to (6.2.10) and (6.2.12), for $j \in \mathbb{N}$ we have
$$a_{j,j+1;1,1} = \int_0^{2\pi}e^{i\theta}P_{j+1;1}(\theta)\overline{P_{j;1}(\theta)}\,d\rho(\theta) = \int_0^{2\pi}\overline{e^{-i\theta}P_{j;1}(\theta)}\,P_{j+1;1}(\theta)\,d\rho(\theta),$$
where $e^{-i\theta}P_{j;1}(\theta) \in \mathcal{P}_{j;2}$. But, according to (6.2.5), $P_{j+1;1}(\theta)$ is orthogonal to $\mathcal{P}_{j;2}$ and, hence, the last integral is equal to zero.

Similarly, by (6.2.10), (6.2.12) and (6.2.5), for $j \in \mathbb{N}$ we have
$$a_{j,j+1;1,2} = \int_0^{2\pi}e^{i\theta}P_{j+1;2}(\theta)\overline{P_{j;1}(\theta)}\,d\rho(\theta) = \int_0^{2\pi}\overline{e^{-i\theta}P_{j;1}(\theta)}\,P_{j+1;2}(\theta)\,d\rho(\theta),$$
where $e^{-i\theta}P_{j;1}(\theta) \in \mathcal{P}_{j;2}$. But, according to (6.2.5), $P_{j+1;2}(\theta)$ is orthogonal to $\mathcal{P}_{j;2}$ and, hence, the last integral is equal to zero.

Also, from (6.2.10), (6.2.12) and (6.2.5), for $j \in \mathbb{N}$ we have
$$a_{j+1,j;2,1} = \int_0^{2\pi}e^{i\theta}P_{j;1}(\theta)\overline{P_{j+1;2}(\theta)}\,d\rho(\theta),$$
$$a_{j+1,j;2,2} = \int_0^{2\pi}e^{i\theta}P_{j;2}(\theta)\overline{P_{j+1;2}(\theta)}\,d\rho(\theta),$$
where $e^{i\theta}P_{j;1}(\theta),\ e^{i\theta}P_{j;2}(\theta) \in \mathcal{P}_{j+1;1}$. But, according to (6.2.5), $P_{j+1;2}(\theta)$ is orthogonal to $\mathcal{P}_{j+1;1}$ and, hence, both of the last integrals are also equal to zero. $\square$

Thus, the bottom entry of the $(2\times1)$-matrix $a_{1,0}$, the top row of each $(2\times2)$-matrix $a_{j,j+1}$, and the bottom row of each $a_{j+1,j}$ always consist of zero elements. Taking (6.2.11) into account, we conclude that the unitary matrix of the operator of multiplication by $e^{i\theta}$ is five-diagonal in the usual sense, as a scalar matrix.

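The structure just established is easy to check numerically. The following sketch is our own illustration (not the authors' code): the discrete measure on the circle with density $1+\tfrac{1}{2}\cos\theta$ on equally spaced nodes is an assumption made only for the example. It orthogonalizes the sequence (6.2.1) by Gram-Schmidt and verifies that the matrix of multiplication by $e^{i\theta}$ is five-diagonal as a scalar matrix, with $a_{1,0;1} > 0$ and $a_{1,0;2} = 0$ as in Lemmas 6.2.3 and 6.2.4.

```python
import numpy as np

# Discrete probability measure on the unit circle: equally spaced nodes with
# non-uniform weights (density 1 + cos(theta)/2) -- an assumption made only
# for this illustration.
M = 64
theta = 2.0 * np.pi * np.arange(M) / M
w = 1.0 + 0.5 * np.cos(theta)
w /= w.sum()

def ip(f, g):
    """Inner product in L2(T, d rho) for functions sampled at the nodes."""
    return np.sum(f * np.conj(g) * w)

# The sequence (6.2.1): 1, e^{i t}, e^{-i t}, e^{2 i t}, e^{-2 i t}, ...
N = 9
exps = [0]
for n in range(1, N):
    exps += [n, -n]
exps = exps[:N]
F = np.exp(1j * np.outer(theta, exps))       # column k samples e^{i*exps[k]*t}

# Gram-Schmidt orthogonalization -> orthonormal P_0, P_{1;1}, P_{1;2}, ...
P = np.zeros_like(F)
for k in range(N):
    v = F[:, k].copy()
    for j in range(k):
        v -= ip(v, P[:, j]) * P[:, j]
    P[:, k] = v / np.sqrt(ip(v, v).real)

# Matrix elements a_{j,k} = (e^{i t} P_k, P_j) of multiplication by e^{i t}.
A = np.array([[ip(np.exp(1j * theta) * P[:, k], P[:, j]) for k in range(N)]
              for j in range(N)])

# Five diagonals in the scalar sense: a_{j,k} = 0 for |j - k| > 2, and the
# particular zeros of Lemma 6.2.3, e.g. a_{1,0;2} = A[2, 0] = 0.
J, K = np.indices(A.shape)
print("max off-band entry:", np.abs(A[np.abs(J - K) > 2]).max())
print("a_{1,0;1} =", A[1, 0].real, "   |a_{1,0;2}| =", abs(A[2, 0]))
```

The five-diagonal pattern does not depend on the particular weights chosen for the nodes; rerunning with any other positive density gives the same zero structure.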
Lemma 6.2.4 Elements a0,1;2 , a1,0;1 ;

.

j ∈N

aj,j +1;2,2 , aj +1,j ;1,1 ,

(6.2.18)

of the matrix .(aj,k )∞ j,k=0 in Lemma 6.2.3 are positive. ' (θ ) a non-normalized vector .P (θ ) obtained from Proof Let us denote by .P1;1 1;1 ' (θ ) = the Gram-Schmidt orthogonalization procedure of the sequence (6.2.1), .P1;1 eiθ − (eiθ , 1)L2 . Then, according to (6.2.14), we get

f2π e P1;1 (θ ) dρ(θ ) =

a1,0;1 =



' ||P1;1 (θ )||−1 L2

0 .

' (θ ) dρ(θ ) eiθ P1;1

0

' (θ )||−1 ||P1;1 L2

=

f2π

f2π eiθ (eiθ − (eiθ , 1)L2 ) dρ(θ )

(6.2.19)

0

( ) ' −iθ (θ )||−1 , 1)L2 |2 . = ||P1;1 L2 1 − |(e Later we will show that |(e−iθ , 1)L2 |2 + |(e−iθ , P1;1 (θ ))L2 |2 < 1.

.

(6.2.20)

Hence, from (6.2.19), we get .a1,0;1 > 0. ' (θ ) a non-normalized vector Let us consider .a0,1;2 . As earlier we denote .P1;2 .P1;2 (θ ), obtained from the Gram-Schmidt orthogonalization procedure of the sequence (6.2.1), ' P1;2 (θ ) = e−iθ − (e−iθ , P1;1 (θ ))L2 P1;1 (θ ) − (e−iθ , 1)L2 .

.

6.2 Construction of the Tri-Diagonal Block Jacobi Type Matrices of a Unitary. . .

299

Then, according to (6.2.14) and (6.2.20), we get f2π a0,1;2 =

eiθ P1;2 (θ ) dρ(θ ) 0

.

=

' (θ )||−1 ||P1;2 L2

=

' ||P1;2 (θ )||−1 L2

f2π ( ) eiθ e−iθ − (e−iθ , P1;1 (θ ))L2 P1;1 (θ ) − (e−iθ , 1)L2 dρ(θ ) 0

( ) 1 − |(e−iθ , P1;1 (θ ))L2 |2 − |(e−iθ , 1)L2 |2 > 0.

In the next step we consider .aj,j +1;2,2 and .aj +1,j ;1,1 , .j ∈ N. According to (6.2.10), we have f2π

f2π

e−iθ Pj ;2 (θ )Pj +1;2 (θ ) dρ(θ ),

e Pj +1;2 (θ )Pj ;2 (θ ) dρ(θ ) =

aj,j +1;2,2 =



.

0

0

(6.2.21) f2π aj +1,j ;1,1 =

eiθ Pj ;1 (θ )Pj +1;1 (θ ) dρ(θ ).

.

(6.2.22)

0

In the case of the expression (6.2.21) for .aj,j +1;2,2 , we have .e−iθ Pj ;2 (θ ) ∈ Pj +1;2 (see (6.2.12)) and the coefficient of this function by .Pj +1;2 (θ ) is positive in the decomposition (6.2.5). This fact follows from (6.2.1) and (6.2.2), since e−iθ Pj ;2 (θ ) = kj ;2 e−(j +1)iθ +e−iθ (· · · ) =

.

kj ;2 kj +1;2

Pj +1;2 (θ )+Qj (θ ),

(6.2.23)

where dots indicate the following part of the corresponding decomposition .Pj ;2 (θ ) in (6.2.2). In (6.2.23), .Qj (θ ) is a linear combination of .eliθ , .l = −j, −(j − 1), . . . , (j + 1), that are orthogonal to .Pj +1;2 (θ ). Substituting (6.2.23) into (6.2.21), we get the equality .aj,j +1;2,2 = kj ;2 (kj +1,2 )−1 > 0. The positiveness of .aj +1;1,1 follows from (6.2.22) in a similar way; here we used the fact that according to (6.2.12), .eiθ Pj ;1 (θ ) ∈ Pj +1;1 and a coefficient of the function .Pj +1;1 (θ ) is positive, as it follows from equality of the type (6.2.23). Let us finally prove the inequality (6.2.20). Indeed, it follows from the Parseval equality by the decomposition of the function .e−iθ with respect to the orthonormal basis (6.2.2), namely, |(e−iθ , P0 (θ ))L2 |2 + |(e−iθ , P1;1 (θ ))L2 |2 + |(e−iθ , P1;2 (θ ))L2 |2 + · · ·

.

= ||e−iθ ||2L2 = 1. Hence, the lemma is proved.

(6.2.24) u n

300

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

In what follows we will use usual well known notations for elements .aj,k of the matrix:

.

an = an+1,n

: Hn −→ Hn+1 ,

bn = an,n

: Hn −→ Hn ,

cn = an,n+1

: Hn+1 −→ Hn ,

(6.2.25) n ∈ N0 .

All previous investigations we summarize in the theorem. ˆ (with a strong cyclic Theorem 6.2.5 The unitary multiplication by .eiθ operator .A, vector) in the orthonormal basis (6.2.2) of polynomials in the space .L2 , has the form of tri-diagonal block Jacobi type unitary matrix .J = (aj,k )∞ j,k=0 acting as the operator in the space (6.2.6), namely, l2,u = H0 ⊕ H1 ⊕ H2 ⊕ · · · ,

H0 = C,

.

Hn = C2 ,

n ∈ N.

(6.2.26)

The norms of all operators .aj,k : Hk −→ Hj are uniformly bounded with respect to .j, k ∈ N0 . In designations (6.2.25), this matrix has the form ⎛

b0 ⎜ a0 ⎜ ⎜ .J = ⎜ 0 ⎜0 ⎝ .. .

c0 b1 a1 0 .. .

0 c1 b2 a2 .. .

and in more detail ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ .J = ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝

0 0 c2 b3 .. .

0 0 0 c3 .. .

⎞ ... ...⎟ ⎟ ...⎟ ⎟, ...⎟ ⎠ .. .

∗b0 ∗ c0 + + ∗ ∗ b1 a0 0 ∗ ∗ + ∗ a1 0 0

an : Hn −→ Hn+1 , bn : Hn −→ Hn , cn : Hn+1 −→ Hn ,

...

0

0 c1

∗ ∗

+ ∗ 0 b2

∗ +

0

∗ ∗ ∗ ∗ a2

.. .

+ ∗ b3

0 ∗

0 .. .

0 c2

.. .

∗ .. .

(6.2.27) n ∈ N0 .



⎟ ⎟ ⎟ 0 ... ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ... ⎟ ⎟. ⎟ ⎟ ⎟ 0 0 ⎟ ⎟ c3 . . . ⎟ ⎟ ⎟ ∗ + ⎠ .. .. . .

(6.2.28)

In (6.2.28), .b0 is a .(1 × 1)-matrix .(i.e., a scalar.), .a0 is a .(2 × 1)-matrix: a0 = (a0;α )2α=1 , .c0 is a .(1 × 2)-matrix: .c0 = (c0;β )2β=1 ; for .j ∈ N, elements 2 2 2 .aj = (aj ;α,β ) α,β=1 , .bj = (bj ;α,β )α,β=1 , .cj = (cj ;α,β )α,β=1 are .(2 × 2)-matrices. .

6.3 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi. . .

301

Matrices .aj , .bj , and .cj have always zero elements on some positions, namely, a0;1 > 0, a0;2 = 0; .

c0;2 > 0;

an;2,1 = an;2,2 = 0, an;1,1 > 0;

cn;1,1 = cn;1,2 = 0, cn;2,2 > 0; n ∈ N. (6.2.29)

Possibly zero and non-zero elements of the matrix are indicated by “.∗”. The matrix (6.2.28) has five diagonals in the scalar form. ˆ ∗ has a similar block tri-diagonal Jacobi type form .J + The adjoint operator .(A) in the basis (6.2.2). Matrices J , .J + act as operators act by the rule (Jf )n = an−1 fn−1 + bn fn + cn fn+1 , .

∗ fn−1 + bn∗ fn + an∗ fn+1 , (J + f )n = cn−1

n ∈ N0 ,

f−1 = 0,

(6.2.30)

∀f = (fn )∞ n=0 ∈ l2,u ,

here “.∗” denotes the usual adjoint matrix. The form of coefficients .J + follows from (6.2.13), (6.2.15), and (6.2.25).

6.3 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices of Unitary Operators As mentioned at the beginning of the chapter, one of the main results is the solution of the inverse spectral problem for the corresponding direct one. Let us consider the space .l2,u of the form (6.2.6). In addition to the space .l2,u consider its equipment (lfin )' ⊃ l2,u (p−1 ) ⊃ l2,u ⊃ l2,u (p) ⊃ lfin ,

.

(6.3.1)

where .l2,u (p) is the weighted .l2,u space with a weight .p = (pn )∞ n=0 , .pn ≥ 1, ∞ −1 −1 (.p = (pn )n=0 ). In such a case .l2,u (p) is a the Hilbert space of sequences .f = (fn )∞ n=0 , .fn ∈ Hn , for which ||f ||2l2,u (p) =

∞ E

.

n=0

||fn ||2Hn pn ,

(f, g)l2,u (p) =

∞ E (fn , gn )Hn pn . n=0

(6.3.2)

302

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

The space .l2,u (p−1 ) is defined similarly; .lfin is the space of finite sequences and ' .(lfin ) is the space adjoint to .lfin . It is easy to show that the imbedding .l2,u (p) c→ l2,u ∞ E is quasi-nuclear if . pn−1 < ∞. n=0

Let A be a unitary operator standardly connected with the chain (6.3.1). (About the standard connection see the beginning of the last section of this chapter.) According to the projection spectral theorem, such an operator has a representation f Af =

zo(z) dσ (z)f,

.

f ∈ l2,u ,

z = eiθ ∈ T,

θ ∈ [0, 2π ),

(6.3.3)

T

where .o(z) : .l2,u (p) −→ l2,u (p−1 ) is the operator of generalized projection and ∗ = A−1 has a similar .dσ (z) is a spectral measure. The adjoint to A operator .A representation (6.2.12), where .zo(z) is replaced with .z¯ o(z). For all .f ∈ lfin , the projection .o(z)f ∈ l2,u (p−1 ) is a generalized eigenvector of the operators A and ∗ .A with corresponding eigenvalues z and .z ¯ . For all .f, g ∈ lfin , we have the Parseval equality f (f, g)l2,u =

(o(z)f, g)l2,u dσ (z),

.

z = eiθ ∈ T,

(6.3.4)

T

that after the extension by continuity (6.3.4) is defined for all .f, g ∈ l2,u . We denote by .πn the orthogonal projection operator from .l2,u in .Hn , .n ∈ N0 . Hence, .∀f = (fn )∞ n=0 ∈ l2,u , we have .fn = πn f . This operator acts similarly in the spaces .l2,u (p) and .l2,u (p−1 ), but perhaps its norm does not equal one. Consider the operator matrix .(oj,k (z))∞ j,k=0 , where oj,k (z) = πj o(z)πk : l2,u −→ Hj , (or Hk −→ Hj ).

.

(6.3.5)

The Parseval equality (6.3.4) has the form (f, g)l2,u =

∞ f E

(o(z)πk f, πj g)l2,u dσ (z)

j,k=0 T

=

∞ f E

(πj o(z)πk f, g)l2,u dσ (z) (6.3.6)

j,k=0 T

.

=

∞ f E

(oj,k (z)fk , gj )l2,u dσ (z),

j,k=0 T

z = eiθ ∈ T,

θ ∈ [0, 2π ),

∀f, g ∈ l2,u .

6.3 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi. . .

303

Consider some special bounded operator A acting in the space .l2,u . More precisely, let this operator be given by the matrix J , which has a tri-diagonal block structure of the form (6.2.28), where all norms of the coefficients .an , .bn and .cn are uniformly bounded with respect to .n ∈ N0 . Therefore, the action of the operator A is determined by the first expression in (6.2.30) and the action of its adjoint operator is determined, similarly, by the second expression in (6.2.30). In what follows, it is assumed that conditions (6.2.29) hold true, and, additionally, the operator A, given in (6.2.28), is unitary in the space .l2,u . Without analyzing the conditions for the coefficients .an , .bn and .cn under which the operator A is unitary, we note that such conditions follow from the equality .AA∗ = A∗ A = 1. We make this analysis in the next section. In the next step, we will rewrite the Parseval equality (6.3.6) in terms of the generalized eigenvectors of the operator A. But first, consider a lemma. ' Lemma 6.3.1 Let .ϕ(z) = (ϕn (z))∞ n=0 , .ϕn (z) ∈ Hn , .z ∈ C be a solution in .(lfin ) of the system

(J ϕ(z))n = an−1 ϕn−1 (z) + bn ϕn (z) + cn ϕn+1 (z) = zϕn (z), .

∗ ϕn−1 (z) + bn∗ ϕn (z) + an∗ ϕn+1 (z) = z¯ ϕn (z), (J + ϕ(z))n = cn−1

n ∈ N0 ,

ϕ−1 (z) = 0,

z = eiθ ∈ T,

(6.3.7)

θ ∈ [0, 2π ),

with an initial condition .ϕ0 (z) = ϕ0 ∈ C. Then, this solution exists .∀ϕ0 and has a form ϕn (z) = Qn (z)ϕ0 = (Qn;1 , Qn;2 )ϕ0 ,

.

∀n ∈ N,

(6.3.8)

where .Qn;1 and .Qn;2 are polynomials of variables .z = eiθ and .z¯ = e−iθ and such polynomials have a form Qn;1 (z) = ln;1 z¯ n + qn;1 (z),

.

Qn;2 (z) = ln;2 zn + qn;2 (z),

∀n ∈ N.

(6.3.9)

Here .ln;1 > 0, .ln;2 > 0 and .qn;1 (z), .qn;2 (z) are linear combinations of .zj z¯ k for iθ ∈ T ⊂ C. .0 ≤ j + k ≤ n − 1, .∀n ∈ N, .Q0 (z) = 1, .z = e Proof For .n = 0, the system (6.3.7) has a form b0 ϕ0 + c0 ϕ1 = zϕ0 , .

b0∗ ϕ0

+ a0∗ ϕ1

= z¯ ϕ0 ,

or

c0;1 ϕ1;1 + c0;2 ϕ1;2 = (z − b0 )ϕ0 , ∗ ∗ ϕ1;1 + a0;2 ϕ1;2 = (¯z − b¯0 )ϕ0 . a0;1

(6.3.10)

Here and in what follows, we denote .ϕn (z) = (ϕn;1 (z), ϕn;2 (z)) ∈ Hn , .n ∈ N. By using the assumption (6.2.29) for the matrix (6.2.28), we rewrite the last two equalities from (6.3.10) in the form .d0 ϕ1 (z) = ((z − b0 )ϕ0 , (¯ z − b¯0 )ϕ0 ); d0 =

(

c0;1 c0;2 a0;1 0

) , c0;2 > 0, a0;1 > 0.

304

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

Hence, there exist an inverse matrix .d0−1 , thus (ϕ1;1 (z), ϕ1;2 (z)) = ϕ1 (z) = d0−1 ((z − b0 )ϕ0 , (¯z − b¯0 )ϕ0 ).

.

Therefore, ϕ1;1 (z) = .

1 a0;1

(¯z − b¯0 )ϕ0 = Q1;1 (z)ϕ0 ,

(6.3.11)

ϕ1;2 (z) = (r1 (z − b0 ) + r2 (¯z − b¯0 ) + r3 )ϕ0 = Q1;2 (z)ϕ0 , where .r1 > 0, .r2 , and .r3 are some constants. In other words, the solution .ϕn (z) of the system (6.3.7), for .n = 1, has the form (6.3.8) and (6.3.9). Assume by induction that .ϕn (z) exists for .n ∈ N and has the form (6.3.8) and (6.3.9), and show that .ϕn+1 (z) exists and has the form (6.3.8) and (6.3.9). From (6.3.7), we get cn ϕn+1 (z) = (z1 − bn )ϕn (z) − an−1 ϕn−1 (z), .

∗ an∗ ϕn+1 (z) = (¯z1 − bn∗ )ϕn (z) − cn−1 ϕn−1 (z).

Adding the last two equalities and taking into account (6.2.29), (6.3.8) and (6.3.9), we obtain ∗ dn ϕn+1 (z) = ((z + z¯ )1 − (bn + bn∗ ))ϕn (z) − (an−1 + cn−1 )ϕn−1 (z)

= ((z + z¯ )1 − (bn + bn∗ ))(ln;1 z¯ n + qn;1 (z), ln;2 zn + qn;2 (z))ϕ0 ∗ − (an−1 + cn−1 )(ln−1;1 z¯ n−1 + qn−1;1 (z), ln−1;2 zn−1 + qn−1;2 (z))ϕ0

.

= (sn;1 (z), sn;2 (z))ϕ0 ; ) ( 0 an;1,1 , dn = cn;2,1 + an;1,2 cn;2,2

an;1,1 > 0,

cn;2,2 > 0. (6.3.12)

It follows from (6.3.12) that the matrix .dn−1 exists. Thus, it is possible to write the expression for .ϕn+1;1 (z) and .ϕn+1;2 (z) using the right-hand side (6.3.12), which is denoted .(sn;1 (z), sn;2 (z))ϕ0 , i.e., (ϕn+1;1 (z), ϕn+1;2 (z)) = dn−1 (sn;1 (z), sn;2 (z))ϕ0 , .

ϕn+1;1 (z) =

1 sn;1 ϕ0 , an;1,1

ϕn+1;2 (z) = (dn−1 )2,1 sn;1 (z) + (dn−1 )2,2 sn;2 (z).

(6.3.13)

6.3 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi. . .

305

It can be seen from expressions (6.3.12) and (6.3.13) that an increase by “1” of the power of z and .z¯ on the right-hand side in these formulas leads to multiplication by .(z − z¯ ). This shows that .sn;1 (z) and .sn;2 (z) have the form .ln+1;1 z¯ n+1 + qn+1;1 (z) and .ln+1;2 zn+1 + qn+1;2 (z), respectively. It is not difficult to see that new higher coefficients .ln+1;1 and .ln+1;2 are also positive. For .ln+1,1 this fact follows from the second expression in (6.3.13) and for .ln+1,2 it follows from the last expression in (6.3.13), that is, from the positiveness of the diagonal elements of the matrix .dn and the location of the coefficient .ln;2 zn at the second coordinate of the vector .(ln;1 z¯ n + qn;1 (z), ln;2 zn + qn;2 (z)). This completes the induction and proof. u n Consider .Qn (z) with fixed .z = eiθ as a linear operator acting from .H0 to .Hn , i.e., .H0 e ϕ0 |−→ Qn (z)ϕ0 ∈ Hn . We also consider .Qn (z) as an operator-valued polynomial of variables .z − eiθ , z¯ = e−iθ ∈ C. For the adjoint operator we have ∗ ∗ : H −→ H . By using polynomials .Q (z), we construct a .Qn (z) = (Qn (z)) n 0 n representation .oj,k (z). Lemma 6.3.2 The operator .oj,k (z), .∀z = eiθ ∈ T has a representation oj,k (z) = Qj (z)o0,0 (z)Q∗k (z) : Hk −→ Hj ,

j, k ∈ N0 ,

.

(6.3.14)

where .o0,0 (z) ≥ 0 is a scalar. Proof For a fixed .k ∈ N0 , the vector .ϕ = ϕ(z) = (ϕj (z))∞ j =0 , where ϕj (z) = oj,k (z) = πj o(z)πk ∈ Hj ,

.

z = eiθ ∈ T,

θ ∈ [0, 2π ),

(6.3.15)

is a generalized solution in .(lfin )' , of the equation .J ϕ(z) = zϕ(z), since .o(z) is a projector on generalized eigenvectors corresponding to generalized eigenvalues z of the operator A. Therefore, .∀g ∈ lfin we have an equality .(ϕ, J + g)l2,u = z(ϕ, g)l2,u . Substituting .ϕ into the finite-difference equation for .J + , we get .(J ϕ, g)l2,u = z(ϕ, g)l2,u . Hence, .ϕ = ϕ(z) ∈ l2,u (p−1 ) is a solution of the equation .J ϕ = zϕ with initial conditions .ϕ0 = π0 o(z)πk ∈ H0 . Since .∀f ∈ lfin , the vector .o(z)f ∈ l2 (p−1 ) is also the generalized eigenvector of the operator .A∗ with corresponding eigenvalue .z¯ (because A is unitary operator). Similarly, .ϕ = ϕ(z) in (6.3.15) is also the solution of the equation .J + ϕ = z¯ ϕ with the same initial conditions .ϕ0 = π0 o(z)πk . By using the Lemma 6.3.1 and (6.3.8), we obtain oj,k (z) = Qj (z)(o0,k (z)),

.

j ∈ N0 .

(6.3.16)

The operator .o(z) : l2,u (p) −→ l2,u (p−1 ) is formally self-adjoint in .l2,u , and it is a derivation with respect to the corresponding resolution of identity in .l2,u on the spectral measure of the operator A. Hence, according to (6.3.14), we get (oj,k (z))∗ = (πj o(z)πk )∗ = πk o(z)πj = Φk,j (z),

.

j, k ∈ N0 .

(6.3.17)

306

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

For a fixed .j ∈ N0 from (6.3.17) and previous conversation it follows that the vector ψ = ψ(z) = (ψk (z))∞ k=0 ,

.

ψk (z) = ok,j (z) = (oj,k (z))∗

is an usual solution of equations .J ψ = zψ and .J + ψ = z¯ ψ with initial conditions ∗ .ψ0 = o0,j (z) = (oj,0 (z)) . Using again Lemma 6.3.1, we obtain the representation of the type (6.3.16), ok,j (z) = Qk (z)(o0,j (z)),

.

k ∈ N0 .

(6.3.18)

Taking into account (6.3.17) and (6.3.18), we get o0,k (z) = (ok,0 (z))∗ = (Qk (z)o0,0 (z))∗ = o0,0 (z)(Qk (z))∗ ,

.

k ∈ N0 (6.3.19)

(here we used .o0,0 (z) ≥ 0 following from (6.3.4) and (6.3.5)). Substituting (6.3.19) into (6.3.16), we obtain (6.3.14). u n Now, we obtain the possibility to rewrite the Parseval equality (6.3.6) in more concrete form. For this end we substitute the expression (6.3.14) for .oj,k (z) into (6.3.6) and get (f, g)l2,u =

∞ f E

(oj,k (z)fk , gj )l2,u dσ (z)

j,k=0 T

=

∞ f E

(Qj (z)o0,0 (z)Q∗k (z)fk , gj )l2,u dσ (z)

j,k=0 T .

=

∞ f E

(Q∗k (z)fk , Q∗j (z)gj )l2,u dρ(z)

(6.3.20)

j,k=0 T

=

f (E ∞ T

Q∗k (z)fk

)( E ∞

) Q∗j (z)gj dρ(z), ∀f, g ∈ lfin

j =0

k=0

dρ(z) = o0,0 (z) dσ (z),

z = eiθ ∈ T,

θ ∈ [0, 2π ).

Consider the Fourier transform “. 0; it consists of sequences .f = (fj )j ∈Z , .fj ∈ C, for which ||f ||2l2 (q) =

E

.

|fj |2 qj < ∞,

(f, g)l2 (q) =

j ∈Z

E

fj g¯ j qj .

(6.5.6)

j ∈Z

Let us fix .q = (qj )j ∈Z , .qj ≥ 1. Then the firs of chains (3.5.13) (i.e., (3.5.1)) has a form q −1 = (qj−1 )j ∈Z , q = (qj )j ∈Z , qj ≥ 1, (6.5.7)

lu,2 (q −1 ) ⊃ lu,2 ⊃ lu,2 (q) ⊃ lu,fin ,

.

where .lu,fin denotes the space of two-sided finite sequences. The involution “.∗” is now a usual complex conjugations .(fj )j ∈Z = f |→ f ∗ = (f¯j )j ∈Z . Accordingly, it is convenient to change the notation of vectors in Theorem 3.5.2 from .ϕ to f . Let us specify the choice of weight q. The condition (6.5.5) in Theorem say that the matrix .K = (Kj k )j,k∈Z , where .Kj k = sj −k , is positive defined. Then |Kj k |2 ≤ Kjj Kkk , i.e., |sj,k |2 ≤ s2j s2k , j, k ∈ Z.

.

(6.5.8)

Let .q = (qn )n∈Z , .qn ≥ 1 be a sequence such that Cu (s, q) :=

E

.

s2n qn−1 < ∞.

(6.5.9)

n∈Z

By using (6.5.8) and (6.5.9), we get 0≤

E

sj −k fk f¯j ≤

j,k∈Z



∞ E

E √ E / 1/2 1/2 s2j s2k |fk ||fj | ≤ s2j s2k qj−1 qk−1 |fk qk ||fj qj | j,k∈Z

j,k∈Z .

|sj −k ||fk ||fj |

j,k∈Z

⎛ ≤⎝

E

⎞1/2 ⎛ s2j s2k qj−1 qk−1 ⎠

j,k∈Z



E

⎞1/2 2

j,k∈Z

⎛ ⎞⎛ ⎞ E E =⎝ s2j q −1 ⎠ ⎝ |fj |2 qj ⎠ = Cu (s, q)||f ||2l (q) , j

j ∈Z

(6.5.10)

|fk | qk |fj | qj ⎠ 2

j ∈Z

2

f ∈ lu,fin .

6.5 The Solution of the Trigonometric Moment Problem

319

Passing here to the limit in two-sided finite sequences .lu,fin to .lu,2 (q), we obtain the chain lu,2 (q −1 ) ⊃ lu,2 ⊃ lu,2 (q) ⊃ lu,fin , q = (qn )n∈Z , qn ≥ 1, E s2n . < ∞. Cu (s, q) = qn

(6.5.11)

n∈Z

We fix the weight q for which the condition (6.5.11) is fulfilled (the sequence .(sn ), n ∈ Z is considered given). Then the chain of spaces (6.5.11) plays the role of the first of the chains (3.5.13). As already said, the matrix .K = (Kj k )j,k∈Z , where .Kj k = sj −k is the kernel K of the discrete variable .j ∈ Z; we will denote .Su = K. It will be positive definite, since the condition (3.5.3) is fulfilled: it coincides with (6.5.5). The next step in the scheme of Theorem 3.5.2 is the construction of a chain (3.5.17). Here we require that the imbedding .H+ c→ G+ will be Hilbert-Schmidt type. In the space .G+ = lu,2 (q), where q satisfies the condition (6.5.11), the space .lu,2 (p) embedded quasi-nucleary, where .p = (pn )n∈Z , if .

E .

pn2 qn2 < ∞.

(6.5.12)

n∈Z

Thus, as a chain (3.5.17) can be written (lu,fin )' ⊃ lu,2 (p−1 ) ⊃ lu,2 (q −1 ) = lu,2 ⊃ lu,2 (q) ⊃ lu,2 (p) ⊃ lu,fin ,

.

(6.5.13)

where the weight q is chosen such that the condition (6.5.11) is fulfilled, and the weight p is so that the condition (6.5.12) is fulfilled. Fix the weight .p = (pn )n∈Z . According to Theorem 3.5.2, the necessary chain (3.5.18) is constructed, where we set .K = Su and, hence, get H−,Su ⊃ HSu ⊃ H+,Su ⊃ lu,fin .

.

(6.5.14)

Let us first prove the theorem for the case of the non-degenerate kernel .K = Su , i.e., when the inequality (6.5.5) contains the sign “.>” for an arbitrary finite sequence .(fn )n∈Z ∈ lu,fin , which is not equal to zero (that is, at least for one n, .fn /= 0). The proof is completely based on Theorem 3.5.2. In this case, the expression (f, g)Su :=

E

.

sj −k fk g¯ j ,

f, g ∈ lu,fin

(6.5.15)

j ∈Z

is a scalar (rather than a quasi-scalar) product in the space .lu,fin . We denote the completion of this space relative to (6.5.15) by .HSu .

320

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

The linear space .C∞ of two-sided finite sequences .f = (fn )n∈Z , .fn ∈ C, we (“1” is situated denote by .lu . Let .δn = (. . . , 0, 1, 0, . . .), .n ∈ Z be the .δ-sequence E on the n-th position), then for all .f ∈ lu,fin we have .f = fn δn . Thus, we have a n∈Z

chain of linear spaces lu ⊃ HSu ⊃ lu,fin ,

.

∀f ∈ lu,fin , f = (fn )n∈Z =

E

fn δn .

(6.5.16)

n∈Z

In the case of a non-degenerate kernel .Su , chains (6.5.14) and (6.5.13) give two chains of Hilbert spaces (lu,2 (p))−,Su ⊃ HSu ⊃ lu,2 (p) ⊃ lu,fin ,

(6.5.17)

lu = (lu,2 )' ⊃ lu,2 (p−1 ) ⊃ lu,2 ⊃ lu,2 (p) ⊃ lu,fin ,

(6.5.18)

.

.

where the weight p satisfies requirements (6.5.12) and (6.5.9). Consider a linear expression and its algebraic inverse in the space .lu , namely, J : (Jf )k = fk−1 ,

.

J −1 : (J −1 f )k = fk+1 ,

k ∈ Z, f ∈ lu .

(6.5.19)

For the .δ-sequence, we have relations J −1 δn = δn−1 ,

J δn = δn+1 ,

.

n ∈ Z.

(6.5.20)

It will be a unitary operator in .HS , which is given by the scalar product (6.5.15). Indeed, for .f, g ∈ lu,fin , we have (Jf, g)Su =

E

sj −k (Jf )k g¯ j =

j,k∈Z .

=

E

E

sj −k fk−1 g¯ j

j,k∈Z

sj −k+1 fk g¯ j =

j,k∈Z

E

sj −k fk (J g)j = (f, J −1 g)Su ,

(6.5.21)

j,k∈Z

(the definition (6.5.19) and .J ∗ = J −1 are taken into account here). Thus, the operator .lu,fin e f |→ Jf ∈ lu,fin will be unitary in the space .HSu . Hence, we have Vf = Jf,

.

f ∈ lu,fin ⊂ D(V ) = HSu .

(6.5.22)

Let us now apply Lemma 2.7.5. As chains (2.7.35) now we consider (6.5.17) and (6.5.18). Let .ξλ ∈ (lu,2 (p))−,Su be generalized eigenvector of the operator V with respect to the rigging (6.5.17). In such a case, due to Theorem 6.5.1, (ξλ , Vf )Su = e−iλ (ξλ , f )Su , λ ∈ R, f ∈ lu,fin .

.

(6.5.23)

6.5 The Solution of the Trigonometric Moment Problem

321

Denote P (λ) = U ξλ ∈ lu,2 (p−1 ) ⊂ lu , P (λ) = (Pn (λ))n∈Z .

.

By using (2.7.36), the expression (6.5.23) obtain a form (P (λ), Vf )l2,u = e−iλ (P (λ), f )l2 , λ ∈ R, f ∈ lu,fin .

(6.5.24)

.

The corresponding Fourier transform has a form HSu ⊃ lfin e f → (Ff )(λ) = fˆ(λ) = (f, P (λ))lu,2 ∈ L2 (T, dρ(λ)).

.

(6.5.25)

Let us calculate .P (λ). The operator V is defined in (6.5.22) due to the expression (6.5.19), hence, (6.5.24) gives .∀f ∈ lu,fin E eiλ Pn (λ)f¯n = eiλ (P (λ), f )lu,2 = (P (λ), V −1 f )lu,2 n∈Z .

= (P (λ), J −1 f )lu,2 = (J P (λ), f )lu,2 =

E

Pn+1 (λ)f¯n .

n∈Z

(6.5.26) Hence eiλ Pn (λ) = Pn+1 (λ),

.

n ∈ Z.

Without loss of generality, we assume .P0 (λ) = 1. Now the last two formulas give Pn (λ) = eiλn ,

.

n ∈ Z.

(6.5.27)

Therefore, the Fourier transform (6.5.25) has the form HSu ⊃ lu,fin e f → (Ff )(λ) = fˆ(λ) =

E

.

fn eiλn ∈ L2 (T, dρ(λ)),

(6.5.28)

n∈Z

and we get the Parseval equality f (f, g)Su =

.

ˆ dρ(λ), f, g ∈ lu,fin . fˆ(λ)g(λ)

(6.5.29)

T

To construct the Fourier transform (6.5.25) and check formulas (6.5.26)–(6.5.29) it is necessary to find out the existence of a strong cyclic vector for the operator V .o = δ0 ∈ lu,fin in the sense of the rigging (6.5.2). The last is really so, since from (6.5.20), we have V p o = J p δ0 = δp ,

.

p ∈ Z.

322

6 Unitary Block Jacobi Type Matrices and the Trigonometric Moment Problem

The Parseval equality (6.5.29) directly leads to the representation (6.5.1): according to (6.5.27), (6.5.28) .δˆn = eiλn and in particular .δˆ0 = 1; the scalar product (6.5.15) gives sn = (δn , δ0 )Su = (δˆn , δˆ0 )L2 (T,dρ(λ)) =

f eiλn dρ(λ), n ∈ Z.

.

T

Therefore, the first part of the theorem in the case of the non-degenerate kernel K = Su is proved. To prove its second part, it is necessary to make sure that the condition (3.5.26) leads to the unitary operator V in the space .HSu . Therefore, Theorem 6.5.2 is proven in the case considered above. Consider the case of a degenerate kernel .Su = K, that is, when the left expression in the inequality (6.5.5) is equal to zero for some non-zero sequence .f ∈ lu,fin . Consider the (linear) set of all such sequences. Since the inequality (6.5.8) holds true, then such a set will be closed in the space .lu,2 (q) constructed according to (6.5.11); that is, in the space .G+ from the chain (3.5.17). Thus, we get the general situation of the quasi-scalar product considered in Theorem 3.5.2. The expression .(f, g)Su in (6.5.15) will now be a quasi-scalar product, but the operator J according to the equality (6.5.21) will be unitary in the quasi-scalar product (3.5.4). According to Lemma 3.5.4, this operator transfers a class into a class and the proof our theorems starting with the definition (6.5.22) can be repeated.

.

Remark 6.5.3 Let conditions of Theorem 6.5.2 be fulfilled and we have a representation of the moment sequence (6.5.1) by some measure .dρ(λ) with a support the circle .T. Suppose that the support of this measure is wider than the set consisting of a finite number of points on the circle. Then the inequality (6.5.5) should contain the sign “.>”, that is, the corresponding product (6.5.15) will not be quasi-scalar, but scalar. In fact, for the support .α of the measure .dρ(λ), we have the relation which follows from (6.5.1) and (6.5.5) for .α ⊂ T, namely, f

f |Fn (λ)|2 dρ(λ) =

f |Fn (λ)|2 dρ(λ) =

T

α .

=

±n E j,k=0

| α

±n E

fj eiλj |2 dρ(λ)

j =0

sj −k fj f¯k ≥ 0, where Fn (λ) =

±n E

(6.5.30) fj eiλj .

j =0

Suppose that the product .( · , · )Su is quasi-scalar, then there will be .n ∈ Z, f−n , . . . , f0 , . . . , fn , which are not all equal to zero, so that in (6.5.30), the sign “.≥” will be replaced with the sign “.=” and for the corresponding polynomial .Fn (λ), due to (6.5.30), we get .Fn (λ) = 0, .eiλ ∈ α. But the polynomial .Fn (λ) = 0 has a

.

Bibliographical Notes

323

maximum of .2n + 1 different zeros, and .α under the condition has more than .2n + 1 points. The obtained contradiction completes the proof. u n

Bibliographical Notes The results of the chapter are a generalization of Akhiezer N. I. classical works [2, 3]. To obtain these results, the works of Berezansky Yu. M. were used. [17, 18]. When considering the trigonometric moment problem, orthogonal polynomials arise, studied in works of Golinskii L. B. [126], Golinskii L., Totik V. [127], Simon B. [262–266], Atzmon A.[13]. The general theory of orthogonal polynomials is found in Geronimus Ya. L. [117], Szegö G. [278]. Koshmanenko V. D. and Dudkin M. E. [171] have the basic information about equipped spaces. The main content of the chapter is contained in works of Berezansky Yu. M., Dudkin M. E. [36, 84]. Five-diagonal (block tri-diagonal) matrices were first constructed in the works of Cantero M. J., Moral L., Velázquez L. [62, 63]. Works of Killip R., Nenciu I. [156], Arlinski˘ı Yu., Golinski˘ı L., Tsekanovski˘ı E. [11, 12] contain a lot of information about the trigonometric moment problem and orthogonal polynomials on the unit circle. The book of Schmüdgen K. [255] also has a relation to the trigonometric moment problem.

Chapter 7

Block Jacobi Type Matrices and the Complex Moment Problem in the Exponential Form

This chapter considers the generalization of the information given in Chaps. 3 and 5. Namely, an analogue of the Jacobi type block matrix related to the complex moment problem in the exponential form is proposed and corresponding polynomials, orthogonal with respect to some probability measure on the complex plane are investigated. In this case, two commutative block matrices are obtained, which both have a tri-diagonal block structure, one of them generates a unitary and the second one generates a self-adjoint operator in the .l2 -type space. A one-to-one correspondence is also established between probability measures on a compact set of the complex plane and such couple of matrices.

7.1 Construction of the Tri-Diagonal Block Matrix Corresponding to the Complex Moment Problem in the Exponential Form Let .dρ(z) be a Borel measure with a compact support on .C and .L2 = L2 (C, dρ(z)) be the space of square integrable complex-valued functions defined on .C. We suppose that functions .C e z − | → r m einθ , m ∈ N0 , .n ∈ Z are linear independent and forms total set in .L2 , here .z = reiθ , .r ≥ 0, .θ ∈ [0, 2π ). In order to find an analog of Jacobi matrix J of the form (3.1.3) or (5.1.36), there is need to choose an order of the orthogonalization in .L2 applied to the set of functions {r m einθ },

.

m ∈ N0 , n ∈ Z.

(7.1.1)

We use the linear order for the orthogonalization according to the Gram-Schmidt procedure (see Fig. 7.1). © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_7

325

326

7 Block Jacobi Type Matrices and the Complex Moment Problem. . .

...

r0 e−i2θ

r0 e−iθ

r0 ei0θ

r0 eiθ

r0 ei2θ

...

...

r1 e−i2θ

r1 e−iθ

r1 ei0θ

r1 eiθ

r1 ei2θ

...

...



r2 e−iθ

r2 ei0θ

r2 eiθ



...

...





r3 ei0θ





...

.. .

.. .

.. .

.. .

.. .

Fig. 7.1 The orthogonalization order

According to Fig. 7.1, we get

$$
\begin{aligned}
& r^0 e^{i0\theta} = 1;\\
& r^0 e^{i\theta},\ r^1 e^{i0\theta},\ r^0 e^{-i\theta};\\
& r^0 e^{i2\theta},\ r^1 e^{i\theta},\ r^2 e^{i0\theta},\ r^1 e^{-i\theta},\ r^0 e^{-i2\theta};\\
& \dots;\\
& r^0 e^{in\theta},\ r^1 e^{i(n-1)\theta},\ \dots,\ r^{n-1} e^{i\theta},\ r^n e^{i0\theta},\ r^{n-1} e^{-i\theta},\ \dots,\ r^1 e^{-i(n-1)\theta},\ r^0 e^{-in\theta};\\
& \dots
\end{aligned}
\qquad(7.1.2)
$$

That is, the functions are taken along "lines" starting at $r^0 e^{in\theta}$, ending at $r^0 e^{-in\theta}$, and "with an angle" at $r^n e^{i0\theta}$. Considering the sequence of functions (7.1.2), we carry out the orthogonalization according to the Gram–Schmidt procedure. As a result, we get a system of orthogonal polynomials (each polynomial is a linear combination of the $r^m e^{in\theta}$, $m \in \mathbb{N}_0$, $n \in \mathbb{Z}$), which are placed in groups

$$
\begin{aligned}
& P_{0;0}(z);\\
& P_{1;0}(z),\ P_{1;1}(z),\ P_{1;2}(z);\\
& P_{2;0}(z),\ P_{2;1}(z),\ P_{2;2}(z),\ P_{2;3}(z),\ P_{2;4}(z);\\
& \dots;\\
& P_{n;0}(z),\ P_{n;1}(z),\ \dots,\ P_{n;2n}(z);\\
& \dots
\end{aligned}
\qquad(7.1.3)
$$
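The ordering (7.1.2) is easy to generate mechanically. The following sketch is not from the book: the helper `line` is a hypothetical name, and a pair `(m, n)` stands for the function $r^m e^{in\theta}$.

```python
def line(ell):
    """The ell-th 'line with a corner' of (7.1.2): position alpha = 0, ..., 2*ell
    holds r^(ell - |alpha - ell|) e^{i(ell - alpha) theta}, encoded here as the
    pair (power of r, frequency n)."""
    return [(ell - abs(a - ell), ell - a) for a in range(2 * ell + 1)]

print(line(0))  # [(0, 0)]                                  -> the constant 1
print(line(1))  # [(0, 1), (1, 0), (0, -1)]                 -> e^{i theta}, r, e^{-i theta}
print(line(2))  # [(0, 2), (1, 1), (2, 0), (1, -1), (0, -2)]
```

Every function $r^m e^{in\theta}$ appears in exactly one line, namely line $m + |n|$, and line $n$ contains $2n+1$ functions — the dimension count behind the spaces $H_n$ introduced below.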


where each polynomial has the form

$$P_{n;\alpha}(z) = k_{n;\alpha}\, r^{\,n-|\alpha-n|}\, e^{i(n-\alpha)\theta} + R_{n;\alpha}, \quad \alpha = 0, 1, \dots, 2n,\ k_{n;\alpha} > 0,\ n \in \mathbb{N}_0; \qquad(7.1.4)$$

here $R_{n;\alpha}$ denotes the remaining part of the corresponding polynomial, and $P_{0;0}(z) = 1$. In this way, $P_{n;\alpha}$ is some linear combination of

$$\{1 = r^0 e^{i0\theta};\ r^0 e^{i\theta},\ r^1 e^{i0\theta},\ r^0 e^{-i\theta};\ \dots;\ r^0 e^{in\theta},\ r^1 e^{i(n-1)\theta},\ \dots,\ r^{\,n-|\alpha-n|} e^{i(n-\alpha)\theta}\}. \qquad(7.1.5)$$

Since the family (7.1.1) is total in the space $L_2$, the sequence (7.1.3) forms an orthonormal basis in this space. Let $\mathcal{P}_{n;\alpha}$, $\forall n \in \mathbb{N}$, denote the subspace spanned by (7.1.5). Hence, $\forall n \in \mathbb{N}$, we have

$$
\begin{aligned}
\mathcal{P}_{0;0} &\subset \mathcal{P}_{1;0} \subset \mathcal{P}_{1;1} \subset \mathcal{P}_{1;2} \subset \mathcal{P}_{2;0} \subset \mathcal{P}_{2;1} \subset \mathcal{P}_{2;2} \subset \mathcal{P}_{2;3} \subset \mathcal{P}_{2;4} \subset \cdots \subset \mathcal{P}_{n;0} \subset \mathcal{P}_{n;1} \subset \cdots \subset \mathcal{P}_{n;2n} \subset \cdots,\\
\mathcal{P}_{n;\alpha} &= \{P_{0;0}(z)\} \oplus \{P_{1;0}(z)\} \oplus \{P_{1;1}(z)\} \oplus \{P_{1;2}(z)\} \oplus \{P_{2;0}(z)\} \oplus \{P_{2;1}(z)\} \oplus \{P_{2;2}(z)\} \oplus \{P_{2;3}(z)\} \oplus \{P_{2;4}(z)\} \oplus \cdots \oplus \{P_{n;0}(z)\} \oplus \{P_{n;1}(z)\} \oplus \cdots \oplus \{P_{n;\alpha}(z)\},
\end{aligned}
\qquad(7.1.6)
$$

where $\{P_{n;\alpha}(z)\}$, $n \in \mathbb{N}$, $\alpha = 0, 1, \dots, 2n$, denotes the one-dimensional space spanned by $P_{n;\alpha}(z)$; $\mathcal{P}_{0;0} = \mathbb{C}$. For further investigation, it is necessary to consider the Hilbert space

$$l_2 = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \quad H_n = \mathbb{C}^{2n+1}, \quad n \in \mathbb{N}_0, \qquad(7.1.7)$$

instead of the usual space $l_2$ used in the classical (one-dimensional) Hamburger moment problem. Each vector $f \in l_2$ has the form $f = (f_n)_{n=0}^{\infty}$, $f_n \in H_n$, and, hence,

$$\|f\|^2_{l_2} = \sum_{n=0}^{\infty} \|f_n\|^2_{H_n} < \infty, \qquad (f, g)_{l_2} = \sum_{n=0}^{\infty} (f_n, g_n)_{H_n}, \quad \forall f, g \in l_2.$$

The coordinates of a vector $f_n \in H_n$, $n \in \mathbb{N}_0$, in some orthonormal basis $\{e_{n;0}, e_{n;1}, e_{n;2}, \dots, e_{n;2n}\}$ of the space $\mathbb{C}^{2n+1}$ are denoted by $(f_{n;0}, f_{n;1}, f_{n;2}, \dots, f_{n;2n})$; hence, we have $f_n = (f_{n;0}, f_{n;1}, f_{n;2}, \dots, f_{n;2n})$. It is clear that the space $l_2$ is almost isometric to $l_2 \times l_2$ (the vector $e_{0;0}$ is taken only one time in $l_2$).
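The Gram–Schmidt construction of the system (7.1.3) can be illustrated numerically. The following sketch is ours, not the book's, and it rests on a simplifying assumption made purely for illustration: $\rho$ is taken to be a discrete measure with finitely many random atoms, so that every inner product is a finite sum.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 80) + 1j * rng.uniform(-1, 1, 80)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)                         # equal weights, rho(C) = 1
r, th = np.abs(z), np.angle(z)

def dot(f, g):                                            # (f, g)_{L2(C, d rho)}
    return np.sum(w * f * np.conj(g))

# monomials r^m e^{i n theta} taken in the order (7.1.2), lines 0..3,
# orthogonalized by Gram-Schmidt into the system P_{n;alpha} of (7.1.3)
P = []
for ell in range(4):
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):                                # second pass: numerical stability
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))

# orthonormality of (7.1.3) ...
G = np.array([[dot(p, q) for q in P] for p in P])
assert np.allclose(G, np.eye(len(P)))

# ... and the isometry of the mapping (7.1.8): the l2 norm of a coordinate
# vector equals the L2 norm of the corresponding expansion
c = rng.normal(size=len(P)) + 1j * rng.normal(size=len(P))
image = sum(ci * p for ci, p in zip(c, P))
assert np.isclose(np.sum(np.abs(c) ** 2), dot(image, image).real)
```

The sixteen basis functions here correspond to the blocks $H_0, \dots, H_3$ of dimensions $1, 3, 5, 7$.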


By using the orthonormal system (7.1.3), one can define a mapping of $l_2$ into $L_2$. Denote

$$P_n(z) = (P_{n;0}(z), P_{n;1}(z), P_{n;2}(z), \dots, P_{n;2n}(z)) \in H_n, \quad \forall n \in \mathbb{N}_0,\ \forall z = re^{i\theta} \in \mathbb{C};$$

then

$$l_2 \ni f = (f_n)_{n=0}^{\infty} \longmapsto \hat{f}(z) = \sum_{n=0}^{\infty} (f_n, P_n(z))_{H_n} \in L_2. \qquad(7.1.8)$$

Since, for $n \in \mathbb{N}_0$, we have

$$(f_n, P_n(z))_{H_n} = f_{n;0} P_{n;0}(z) + f_{n;1} P_{n;1}(z) + f_{n;2} P_{n;2}(z) + \cdots + f_{n;2n} P_{n;2n}(z)$$

and

$$\|f\|^2_{l_2} = \|(f_{0;0}, f_{1;0}, f_{1;1}, f_{1;2}, \dots, f_{n;0}, f_{n;1}, \dots, f_{n;2n}, \dots)\|^2_{l_2},$$

(7.1.8) maps the space $l_2$ into $L_2$; using the orthonormality of the system (7.1.3), we conclude that this mapping is isometric. The image of $l_2$ under the mapping (7.1.8) coincides with the space $L_2$, because, under our assumption, the system (7.1.3) is an orthonormal basis in $L_2$. Therefore, the mapping (7.1.8) is an isometric transformation (denoted by $I$), acting from the whole of $l_2$ onto the whole of $L_2$.

Let $A$ be a bounded linear operator defined on the space $l_2$. It is possible to construct the operator matrix $(a_{j,k})_{j,k=0}^{\infty}$, where, for each $j,k \in \mathbb{N}_0$, the element $a_{j,k}$ is an operator from $H_k$ into $H_j$, so that we have

$$(Af)_j = \sum_{k=0}^{\infty} a_{j,k} f_k, \quad j \in \mathbb{N}_0, \qquad (Af, g)_{l_2} = \sum_{j,k=0}^{\infty} (a_{j,k} f_k, g_j)_{H_j}, \quad \forall f, g \in l_2. \qquad(7.1.9)$$

For the proof of (7.1.9) we only need to write the usual matrix of the operator $A$ in the space $l_2$ using the basis

$$(e_{0;0};\ e_{1;0}, e_{1;1}, e_{1;2};\ \dots;\ e_{n;0}, e_{n;1}, \dots, e_{n;2n};\ \dots), \quad e_{0;0} = 1. \qquad(7.1.10)$$

Then, for each $j,k \in \mathbb{N}_0$, $a_{j,k}$ is the operator $H_k \longrightarrow H_j$ that has the matrix representation

$$a_{j,k;\alpha,\beta} = (A e_{k;\beta}, e_{j;\alpha})_{l_2}, \qquad(7.1.11)$$

where $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. We will write $a_{j,k} = (a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$, including the cases

$$a_{0,0} = (a_{0,0;\alpha,\beta})_{\alpha,\beta=0}^{0,0} = a_{0,0;0,0}, \quad a_{0,1} = (a_{0,1;\alpha,\beta})_{\alpha,\beta=0}^{0,2}, \quad a_{1,0} = (a_{1,0;\alpha,\beta})_{\alpha,\beta=0}^{2,0}.$$

Note that the same representation (7.1.9) is also valid for a general operator $A$ on the space $l_2$ defined on $l_{\mathrm{fin}} \subset l_2$, where $l_{\mathrm{fin}}$ denotes the set of finite vectors of $l_2$. In this case, the first formula in (7.1.9) holds for $f \in l_{\mathrm{fin}}$; the second formula is valid for $f \in l_{\mathrm{fin}}$, $g \in l_2$.

Let us consider the image $\hat{A} = I A I^{-1}: L_2 \longrightarrow L_2$ of the above bounded operator $A: l_2 \longrightarrow l_2$ under the mapping (7.1.8). Its matrix in the basis (7.1.3),

$$(P_{0;0}(z);\ P_{1;0}(z), P_{1;1}(z), P_{1;2}(z);\ \dots;\ P_{n;0}(z), P_{n;1}(z), \dots, P_{n;2n}(z);\ \dots),$$

coincides with the usual matrix of the operator $A$ regarded as an operator $l_2 \longrightarrow l_2$ in the corresponding basis (7.1.10). By using (7.1.11) and the above-mentioned procedure, we get the operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ of the operator $A: l_2 \longrightarrow l_2$. By definition, this matrix is also the operator matrix of $\hat{A}: L_2 \longrightarrow L_2$. It is clear that $\hat{A}$ can be an arbitrary bounded linear operator in $L_2$.

Lemma 7.1.1 For the polynomials $P_{n;\alpha}(z)$ and the subspaces $\mathcal{P}_{m;\beta}$, $n, m \in \mathbb{N}_0$, $\alpha = 0, 1, \dots, 2n$, $\beta = 0, 1, \dots, 2m$, the relations

$$r P_{n;\alpha}(z) \in \mathcal{P}_{n+1;\alpha+1}, \quad \alpha = 0, 1, \dots, 2n; \qquad(7.1.12)$$
$$e^{i\theta} P_{n;\alpha}(z) \in \mathcal{P}_{n+1;\alpha}, \quad \alpha = 0, 1, \dots, n; \qquad(7.1.13)$$
$$e^{i\theta} P_{n;\alpha}(z) \in \mathcal{P}_{n+1;n}, \quad \alpha = n+1, n+2, \dots, 2n; \qquad(7.1.14)$$
$$e^{-i\theta} P_{n;\alpha}(z) \in \mathcal{P}_{n+1;\alpha+2}, \quad \alpha = n, n+1, \dots, 2n; \qquad(7.1.15)$$
$$e^{-i\theta} P_{n;\alpha}(z) \in \mathcal{P}_{n;2n}, \quad \alpha = 0, 1, \dots, n-1,\ n \in \mathbb{N}_0 \qquad(7.1.16)$$

hold true.

Proof According to (7.1.3), the polynomial $P_{n;\alpha}(z)$, $n \in \mathbb{N}_0$, is a linear combination of (7.1.5), i.e., of

$$\{1 = r^0 e^{i0\theta};\ r^0 e^{i\theta},\ r^1 e^{i0\theta},\ r^0 e^{-i\theta};\ \dots;\ r^0 e^{in\theta},\ r^1 e^{i(n-1)\theta},\ \dots,\ r^{\,n-|\alpha-n|} e^{i(n-\alpha)\theta}\}.$$

Hence, multiplying each element in (7.1.5) by $r$, we obtain the set

$$\{r^1 e^{i0\theta};\ r^1 e^{i\theta},\ r^2 e^{i0\theta},\ r^1 e^{-i\theta};\ \dots;\ r^1 e^{in\theta},\ r^2 e^{i(n-1)\theta},\ \dots,\ r^{\,n-|\alpha-n|+1} e^{i(n-\alpha)\theta}\},$$


and a linear combination of such elements belongs to $\mathcal{P}_{n+1;\alpha+1}$ for $\alpha = 0, 1, \dots, 2n$, since for the last element (according to the order (7.1.2)) we have

$$r^{\,n-|\alpha-n|+1}\, e^{i(n-\alpha)\theta} = r^{\,(n+1)-|(\alpha+1)-(n+1)|}\, e^{i((n+1)-(\alpha+1))\theta}.$$

Hence, we proved (7.1.12).

The multiplication of each element in (7.1.5) by $e^{i\theta}$ gives the set

$$\{r^0 e^{i\theta};\ r^0 e^{i2\theta},\ r^1 e^{i\theta},\ r^0 e^{i0\theta};\ \dots;\ r^0 e^{i(n+1)\theta},\ r^1 e^{in\theta},\ \dots,\ r^{\,n-|\alpha-n|} e^{i(n-\alpha+1)\theta}\}, \qquad(7.1.17)$$

whose linear combinations belong to $\mathcal{P}_{n+1;\alpha}$ if $\alpha = 0, 1, \dots, n$, since the last element (according to the order (7.1.2)) satisfies

$$r^{\,n-|\alpha-n|}\, e^{i(n-\alpha+1)\theta} = r^{\,(n+1)-|n-\alpha|-1}\, e^{i((n+1)-\alpha)\theta} = r^{\,(n+1)-|\alpha-(n+1)|}\, e^{i((n+1)-\alpha)\theta}.$$

The last equality holds since $\alpha \le n$. Hence, we proved (7.1.13).

The linear combinations of the elements (7.1.17) belong to $\mathcal{P}_{n+1;n}$ if $\alpha = n+1, n+2, \dots, 2n$, since in this case the last element satisfies

$$r^{\,n-|\alpha-n|}\, e^{i(n-\alpha+1)\theta} = r^{\,(n-1)-|(\alpha-2)-(n-1)|}\, e^{i((n-1)-(\alpha-2))\theta} \in \mathcal{P}_{n-1;\alpha-2} \subset \mathcal{P}_{n+1;n}.$$

Moreover, since for $\alpha = n+1, n+2, \dots, 2n$ the polynomial $P_{n;\alpha}(z)$ contains the element $r^n e^{i0\theta}$, the element $r^n e^{i\theta}$ is contained in $e^{i\theta} P_{n;\alpha}(z)$, and $r^n e^{i\theta} \in \mathcal{P}_{n+1;n}$. Hence, we proved (7.1.14).

Similarly, multiplying each element in (7.1.5) by $e^{-i\theta}$, we get

$$\{r^0 e^{-i\theta};\ r^0 e^{i0\theta},\ r^1 e^{-i\theta},\ r^0 e^{-i2\theta};\ \dots;\ r^0 e^{i(n-1)\theta},\ r^1 e^{i(n-2)\theta},\ \dots,\ r^{\,n-|\alpha-n|} e^{i(n-\alpha-1)\theta}\}. \qquad(7.1.18)$$

The linear combination of elements of (7.1.18) belongs to the subspace $\mathcal{P}_{n+1;\alpha+2}$ if $\alpha = n, n+1, \dots, 2n$, since for the last element (according to the order (7.1.2)) we have

$$r^{\,n-|\alpha-n|}\, e^{i(n-\alpha-1)\theta} = r^{\,(n+1)-|\alpha-n|-1}\, e^{i(n-\alpha-1)\theta} = r^{\,(n+1)-|(\alpha+1)-(n+1)|-1}\, e^{i((n+1)-(\alpha+2))\theta} = r^{\,(n+1)-|(\alpha+2)-(n+1)|}\, e^{i((n+1)-(\alpha+2))\theta} = r^{\,(n+1)-|(n+1)-(\alpha+2)|}\, e^{i((n+1)-(\alpha+2))\theta}.$$

Hence, we proved (7.1.15).

The linear combination of elements in (7.1.18) belongs in any case to $\mathcal{P}_{n;2n}$ if $\alpha = 0, 1, \dots, n-1$, since $r^0 e^{-i(n-1)\theta}$ is contained in $P_{n;\alpha}(z)$, $\forall \alpha = 0, 1, \dots, n-1$.


Hence, for $\alpha = 0, 1, \dots, n-1$, $e^{-i\theta} P_{n;\alpha}(z)$ contains $r^0 e^{-in\theta}$. Hence, we proved (7.1.16). □

Lemma 7.1.2 Let $\hat{A}$ be the self-adjoint operator of multiplication by $r$ on the space $L_2$, namely,

$$L_2 \ni \varphi(z) \longmapsto (\hat{A}\varphi)(z) = r\varphi(z) \in L_2.$$

The operator matrix $(a_{j,k})_{j,k=0}^{\infty}$ of $\hat{A}$ (i.e., $A = I^{-1}\hat{A}I$) has a tri-diagonal structure, i.e., $a_{j,k} = 0$ for $|j-k| > 1$.

Proof By using (7.1.11), for $e_{n;\gamma} = I^{-1} P_{n;\gamma}(z)$, $n \in \mathbb{N}_0$, $\gamma = 0, 1, \dots, 2n$, we have

$$a_{j,k;\alpha,\beta} = (A e_{k;\beta}, e_{j;\alpha})_{l_2} = \int_{\mathbb{C}} r P_{k;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z), \quad \forall j, k \in \mathbb{N}_0, \qquad(7.1.19)$$

where $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. From (7.1.12), we have $r P_{k;\beta}(z) \in \mathcal{P}_{k+1;\beta+1}$ for $\beta = 0, 1, \dots, 2k$. According to (7.1.6), the integral in (7.1.19) is equal to zero for $j > k+1$. On the other hand, the integral in (7.1.19) has the form

$$a_{j,k;\alpha,\beta} = \overline{\int_{\mathbb{C}} \bar{r}\, P_{j;\alpha}(z)\, \overline{P_{k;\beta}(z)}\, d\rho(z)}. \qquad(7.1.20)$$

From (7.1.12), we have $\bar{r} P_{j;\alpha}(z) = r P_{j;\alpha}(z) \in \mathcal{P}_{j+1;\alpha+1}$. According to (7.1.6), the last integral is equal to zero for $k > j+1$ and for each $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. As a result, the integrals in (7.1.19)–(7.1.20), i.e., the coefficients $a_{j,k;\alpha,\beta}$, $j, k \in \mathbb{N}_0$, are equal to zero for $|j-k| > 1$, $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. In the previous considerations it is necessary to take into account that $e_{0;0} = I^{-1} P_{0;0}(z)$, $P_{0;0}(z) = 1$. □

In such a way, the matrix $(a_{j,k})_{j,k=0}^{\infty}$ of the operator $\hat{A}$ has the tri-diagonal block structure

$$\begin{bmatrix} a_{0,0} & a_{0,1} & 0 & 0 & 0 & \cdots\\ a_{1,0} & a_{1,1} & a_{1,2} & 0 & 0 & \cdots\\ 0 & a_{2,1} & a_{2,2} & a_{2,3} & 0 & \cdots\\ 0 & 0 & a_{3,2} & a_{3,3} & a_{3,4} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \qquad(7.1.21)$$
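Lemma 7.1.2 can be observed numerically. The sketch below is ours, not the book's, and assumes, purely for illustration, that $\rho$ is a discrete measure with finitely many random atoms; the truncated matrix of multiplication by $r$ then exhibits the block structure (7.1.21).

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.uniform(-1, 1, 80) + 1j * rng.uniform(-1, 1, 80)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)
r, th = np.abs(z), np.angle(z)
dot = lambda f, g: np.sum(w * f * np.conj(g))             # (f, g)_{L2(d rho)}

P, block = [], []                                         # block[i] = line of basis vector i
for ell in range(4):                                      # lines 0..3 of the order (7.1.2)
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):                                # Gram-Schmidt, repeated for stability
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))
        block.append(ell)

# matrix of multiplication by r in the basis (7.1.3): entries (r P_k, P_j), cf. (7.1.19)
A = np.array([[dot(r * q, p) for q in P] for p in P])

assert np.allclose(A, A.conj().T)                         # Hermitian, cf. (7.1.22)
for i in range(len(P)):                                   # zero outside the three block diagonals
    for k in range(len(P)):
        if abs(block[i] - block[k]) > 1:
            assert abs(A[i, k]) < 1e-8
```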


A more detailed analysis of the expression (7.1.19) makes it possible to identify the zero and non-zero matrix elements $(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$ for $|j-k| \le 1$. We also actively use the permutation properties of the indices $j, k$ and $\alpha, \beta$ of the matrix elements.

Let us denote by $((a^*)_{j,k})_{j,k=0}^{\infty}$ the operator matrix of the operator $(\hat{A})^*$, which is adjoint to $\hat{A}$. We remark that $(\hat{A})^* = \hat{A}$ is also the operator of multiplication by $r = \bar{r}$. Taking into account the expression (7.1.19), for $j, k \in \mathbb{N}_0$ we have

$$(a^*)_{j,k;\alpha,\beta} = \int_{\mathbb{C}} \bar{r}\, P_{k;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z) = \overline{\int_{\mathbb{C}} r\, P_{j;\alpha}(z)\, \overline{P_{k;\beta}(z)}\, d\rho(z)} = \bar{a}_{k,j;\beta,\alpha}, \qquad(7.1.22)$$

where $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. Since $\bar{r} = r$, the matrix (7.1.21) is Hermitian ($a_{j,k;\alpha,\beta} = \bar{a}_{k,j;\beta,\alpha}$).

Lemma 7.1.3 Let $(a_{j,k})_{j,k=0}^{\infty}$ be the operator matrix of the operator of multiplication by $r$ in $L_2$, where $a_{j,k}: H_k \longrightarrow H_j$, and $a_{j,k} = (a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$ are the matrices of the operators $a_{j,k}$ in the corresponding orthonormal bases. Then, for $j \in \mathbb{N}_0$, we have

$$a_{j,j+1;\alpha,\beta} = 0, \quad \alpha = 0, 1, \dots, 2j, \quad \beta = \alpha+2, \alpha+3, \dots, 2j+2;$$
$$a_{j+1,j;\alpha,\beta} = 0, \quad \beta = 0, 1, \dots, 2j, \quad \alpha = \beta+2, \beta+3, \dots, 2j+2. \qquad(7.1.23)$$

If we choose another order inside each "line with a corner"

$$\{r^0 e^{in\theta},\ r^1 e^{i(n-1)\theta},\ r^2 e^{i(n-2)\theta},\ \dots,\ r^1 e^{-i(n-1)\theta},\ r^0 e^{-in\theta}\}$$

(see Fig. 7.1 and the comments after (7.1.2)), preserving the order of the lines, then Lemma 7.1.3 remains valid, and it is again possible to describe the zero elements of the matrices $(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$. Such a matrix $(a_{j,k})_{j,k=0}^{\infty}$ also has a tri-diagonal block structure and has zeros at the same places.

Proof of Lemma 7.1.3 According to (7.1.19) and (7.1.12), for $\forall \alpha = 0, 1, \dots, 2j$, $\forall \beta = \alpha+2, \alpha+3, \dots, 2j+2$, and $j \in \mathbb{N}_0$, we have

$$a_{j,j+1;\alpha,\beta} = \int_{\mathbb{C}} r P_{j+1;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z) = \overline{\int_{\mathbb{C}} r P_{j;\alpha}(z)\, \overline{P_{j+1;\beta}(z)}\, d\rho(z)},$$

where $r P_{j;\alpha}(z) \in \mathcal{P}_{j+1;\alpha+1}$. But according to (7.1.6), $P_{j+1;\beta}(z)$ is orthogonal to $\mathcal{P}_{j+1;\alpha+1}$ for $\beta > \alpha+1$ and, hence, the last integral is equal to zero. This gives the first equalities in (7.1.23).


Similarly, by (7.1.19) and (7.1.12), $\forall \beta = 0, 1, \dots, 2j$, $\forall \alpha = \beta+2, \beta+3, \dots, 2j+2$, and $j \in \mathbb{N}_0$, we have

$$a_{j+1,j;\alpha,\beta} = \int_{\mathbb{C}} r P_{j;\beta}(z)\, \overline{P_{j+1;\alpha}(z)}\, d\rho(z),$$

where $r P_{j;\beta}(z) \in \mathcal{P}_{j+1;\beta+1}$. But according to (7.1.6), $P_{j+1;\alpha}(z)$ is orthogonal to $\mathcal{P}_{j+1;\beta+1}$ if $\alpha > \beta+1$ and, hence, the last integral is also equal to zero. This gives the second equalities in (7.1.23). □

Thus, after these studies, we claim that in (7.1.21), for every $j \in \mathbb{N}_0$, the upper right corner of each $((2j+1) \times (2j+3))$-matrix $a_{j,j+1}$ (starting from the third diagonal) and the lower left corner of each $((2j+3) \times (2j+1))$-matrix $a_{j+1,j}$ (starting from the third diagonal) contain only zero elements. Considering (7.1.21), it can be asserted that the Hermitian matrix of the operator of multiplication by $r$ is multi-diagonal in the usual sense, that is, in the usual (scalar) basis of the space $l_2$.

Lemma 7.1.4 The elements

$$a_{j,j+1;\alpha,\alpha+1},\ a_{j+1,j;\alpha+1,\alpha}; \quad \alpha = 0, 1, \dots, 2j, \quad j \in \mathbb{N}, \qquad(7.1.24)$$

of the matrices $(a_{j,k})_{j,k=0}^{\infty}$ in Lemma 7.1.3 are positive.

Proof We start with the study of $a_{0,1;0,1}$. Denote by $P'_{1;1}(z)$ the non-normalized vector $P_{1;1}(z)$ obtained in the Gram–Schmidt orthogonalization procedure from the vector $r e^{i0\theta}$. According to (7.1.2) and (7.1.3), we have

$$P'_{1;1}(z) = r - (r, P_{1;0}(z))_{L_2} P_{1;0}(z) - (r, 1)_{L_2},$$

where $1 = P_{0;0}(z)$. Therefore, using (7.1.19), we get

$$
\begin{aligned}
a_{0,1;0,1} &= \int_{\mathbb{C}} r P_{1;1}(z)\, d\rho(z) = \|P'_{1;1}(z)\|^{-1}_{L_2} \int_{\mathbb{C}} r P'_{1;1}(z)\, d\rho(z)\\
&= \|P'_{1;1}(z)\|^{-1}_{L_2} \int_{\mathbb{C}} r\big(r - (r, P_{1;0}(z))_{L_2} P_{1;0}(z) - (r, 1)_{L_2}\big)\, d\rho(z)\\
&= \|P'_{1;1}(z)\|^{-1}_{L_2} \big(\|r\|^2_{L_2} - |(r, P_{1;0}(z))_{L_2}|^2 - |(r, 1)_{L_2}|^2\big).
\end{aligned}
\qquad(7.1.25)
$$

By using (7.1.26), we conclude that the last expression is positive and, therefore, $a_{0,1;0,1} > 0$, since (7.1.1) is a total set of linearly independent vectors in $L_2$. The element $a_{1,0;1,0}$ is also positive, since the matrix is Hermitian.


The positivity in (7.1.25) follows from the Parseval equality for the decomposition of the function $r \in L_2$ with respect to the orthonormal basis (7.1.3) of $L_2$:

$$|(r, 1)_{L_2}|^2 + |(r, P_{1;0}(z))_{L_2}|^2 + |(r, P_{1;1}(z))_{L_2}|^2 + \cdots = \|r\|^2_{L_2}. \qquad(7.1.26)$$
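The Bessel/Parseval mechanism behind (7.1.25)–(7.1.26) can be illustrated numerically. The sketch is ours, not the book's, and assumes, purely for illustration, a discrete measure $\rho$ with finitely many random atoms.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.uniform(-1, 1, 60) + 1j * rng.uniform(-1, 1, 60)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)
r, th = np.abs(z), np.angle(z)
dot = lambda f, g: np.sum(w * f * np.conj(g))             # (f, g)_{L2(d rho)}

P = []                                                    # orthonormal system (7.1.3), lines 0..2
for ell in range(3):
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))

norm2 = dot(r, r).real                                    # ||r||^2_{L2}
coeffs = [abs(dot(r, p)) ** 2 for p in P]                 # |(r, P_{n;alpha})|^2, as in (7.1.26)

# Bessel: partial sums of (7.1.26) never exceed ||r||^2 ...
assert sum(coeffs) <= norm2 + 1e-10
# ... while the difference in (7.1.25), ||r||^2 - |(r,1)|^2 - |(r,P_{1;0})|^2,
# stays strictly positive (coeffs[0] = |(r,1)|^2, coeffs[1] = |(r,P_{1;0})|^2)
assert norm2 - coeffs[0] - coeffs[1] > 0
```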

Let us pass to the proof of the positivity of $a_{j,j+1;\alpha,\alpha+1}$, where $\alpha = 0, 1, \dots, 2j$, $j \in \mathbb{N}$. From (7.1.19), we have

$$a_{j,j+1;\alpha,\alpha+1} = \int_{\mathbb{C}} r P_{j+1;\alpha+1}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z). \qquad(7.1.27)$$

According to (7.1.4) and (7.1.6), we have

$$P_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,j-|\alpha-j|}\, e^{i(j-\alpha)\theta} + R_{j;\alpha}(z), \qquad(7.1.28)$$

where $R_{j;\alpha}(z)$ is some polynomial from $\mathcal{P}_{j;\alpha-1}$ if $\alpha > 0$, or from $\mathcal{P}_{j-1;2j-2}$ if $\alpha = 0$. Therefore, $r R_{j;\alpha}(z)$ is some polynomial from $\mathcal{P}_{j+1;\alpha}$ (if $\alpha = 0$, then $r R_{j;\alpha}(z) \in \mathcal{P}_{j;2j-1}$; see (7.1.12)). Multiplying (7.1.28) by $r$, we get

$$r P_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,(j+1)-|\alpha-j|}\, e^{i(j-\alpha)\theta} + r R_{j;\alpha}(z), \qquad(7.1.29)$$

where $r R_{j;\alpha}(z)$ belongs to $\mathcal{P}_{j+1;\alpha}$ or to $\mathcal{P}_{j;2j-1}$. On the other hand, the equality (7.1.28), written for $P_{j+1;\alpha+1}(z)$, gives

$$P_{j+1;\alpha+1}(z) = k_{j+1;\alpha+1}\, r^{\,(j+1)-|(\alpha+1)-(j+1)|}\, e^{i((j+1)-(\alpha+1))\theta} + R_{j+1;\alpha+1}(z) = k_{j+1;\alpha+1}\, r^{\,(j+1)-|\alpha-j|}\, e^{i(j-\alpha)\theta} + R_{j+1;\alpha+1}(z), \qquad(7.1.30)$$

where $R_{j+1;\alpha+1}(z) \in \mathcal{P}_{j+1;\alpha}$ since $\alpha+1 > 0$. Take

$$r^{\,(j+1)-|\alpha-j|}\, e^{i(j-\alpha)\theta} = r^{\,(j+1)-|(\alpha+1)-(j+1)|}\, e^{i((j+1)-(\alpha+1))\theta}$$

from (7.1.30) and substitute it into (7.1.29). Thus, we get

$$r P_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha+1}}\big(P_{j+1;\alpha+1}(z) - R_{j+1;\alpha+1}(z)\big) + r R_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha+1}}\, P_{j+1;\alpha+1}(z) - \frac{k_{j;\alpha}}{k_{j+1;\alpha+1}}\, R_{j+1;\alpha+1}(z) + r R_{j;\alpha}(z), \qquad(7.1.31)$$

where the last two terms belong to $\mathcal{P}_{j+1;\alpha}$, and to $\mathcal{P}_{j+1;\alpha}$ or to $\mathcal{P}_{j;2j-1}$, respectively, and are in any case orthogonal to $P_{j+1;\alpha+1}(z)$.

Hence, substituting the expression (7.1.31) into (7.1.27) gives $a_{j,j+1;\alpha,\alpha+1} = k_{j;\alpha}(k_{j+1;\alpha+1})^{-1} > 0$; since the matrix is Hermitian, $a_{j+1,j;\alpha+1,\alpha} > 0$ as well. □

In the subsequent investigation, we use the well-known convenient notations for the elements $a_{j,k}$ of the Jacobi matrix:

$$a_n = a_{n+1,n}: H_n \longrightarrow H_{n+1}, \quad b_n = a_{n,n}: H_n \longrightarrow H_n, \quad c_n = a_{n,n+1}: H_{n+1} \longrightarrow H_n, \quad n \in \mathbb{N}_0. \qquad(7.1.32)$$
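The positivity (7.1.24) can also be seen numerically. The sketch is ours, not the book's, and again assumes a discrete illustrative measure $\rho$; it uses the fact that in the flat ordering (7.1.3) line $j$ starts at index $j^2$ (since $\sum_{k<j}(2k+1) = j^2$), so the element $a_{j,j+1;\alpha,\alpha+1}$ sits at row $j^2+\alpha$, column $(j+1)^2+\alpha+1$.

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.uniform(-1, 1, 80) + 1j * rng.uniform(-1, 1, 80)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)
r, th = np.abs(z), np.angle(z)
dot = lambda f, g: np.sum(w * f * np.conj(g))

P = []                                                    # lines 0..3; line j starts at index j*j
for ell in range(4):
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))

A = np.array([[dot(r * q, p) for q in P] for p in P])     # multiplication by r, cf. (7.1.19)

# a_{j,j+1;alpha,alpha+1} = k_{j;alpha} / k_{j+1;alpha+1} should be real and positive
for j in range(3):
    for alpha in range(2 * j + 1):
        e = A[j * j + alpha, (j + 1) ** 2 + alpha + 1]
        assert abs(e.imag) < 1e-8 and e.real > 0
```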

Let $U$ be a bounded linear operator defined on the space $l_2$. Construct the operator matrix $(u_{j,k})_{j,k=0}^{\infty}$, where for each $j,k \in \mathbb{N}_0$ the element $u_{j,k}$ is an operator from $H_k$ into $H_j$, so that, as in (7.1.9), $\forall f, g \in l_2$, we have

$$(Uf)_j = \sum_{k=0}^{\infty} u_{j,k} f_k, \quad j \in \mathbb{N}_0, \qquad (Uf, g)_{l_2} = \sum_{j,k=0}^{\infty} (u_{j,k} f_k, g_j)_{H_j}. \qquad(7.1.33)$$

To prove (7.1.33), it is only necessary to rewrite the usual matrix of the operator $U$ in the space $l_2$, using the basis

$$(e_{0;0};\ e_{1;0}, e_{1;1}, e_{1;2};\ \dots;\ e_{n;0}, e_{n;1}, \dots, e_{n;2n};\ \dots), \quad e_{0;0} = 1. \qquad(7.1.34)$$

Then $u_{j,k}$, for each $j,k \in \mathbb{N}_0$, is the operator $H_k \longrightarrow H_j$ that has the matrix representation

$$u_{j,k;\alpha,\beta} = (U e_{k;\beta}, e_{j;\alpha})_{l_2}, \qquad(7.1.35)$$

where $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. Let us write $u_{j,k} = (u_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$, including the cases

$$u_{0,0} = (u_{0,0;\alpha,\beta})_{\alpha,\beta=0}^{0,0} = u_{0,0;0,0}, \quad u_{0,1} = (u_{0,1;\alpha,\beta})_{\alpha,\beta=0}^{0,2}, \quad u_{1,0} = (u_{1,0;\alpha,\beta})_{\alpha,\beta=0}^{2,0}.$$

Note that the same representation (7.1.33) is also valid for a general operator $U$ on the space $l_2$ defined on $l_{\mathrm{fin}} \subset l_2$, where $l_{\mathrm{fin}}$ denotes, as usual, the set of finite vectors from $l_2$. In this case, the first formula in (7.1.33) is valid for $f \in l_{\mathrm{fin}}$; in the second formula, $f \in l_{\mathrm{fin}}$, $g \in l_2$.

Let us consider the image $\hat{U} = I U I^{-1}: L_2 \longrightarrow L_2$ of the bounded operator $U: l_2 \longrightarrow l_2$ under the mapping (7.1.8). Its matrix in the basis (7.1.3),

$$(P_{0;0}(z);\ P_{1;0}(z), P_{1;1}(z), P_{1;2}(z);\ \dots;\ P_{n;0}(z), P_{n;1}(z), \dots, P_{n;2n}(z);\ \dots),$$

is equal to the usual matrix of the operator $U$ regarded as an operator $l_2 \longrightarrow l_2$ in the corresponding basis (7.1.34). By using (7.1.35) and the above-mentioned procedure, we get the operator matrix $(u_{j,k})_{j,k=0}^{\infty}$ of $U: l_2 \longrightarrow l_2$. By definition, this matrix is also the operator matrix of $\hat{U}: L_2 \longrightarrow L_2$.


It is clear that $\hat{U}$ can be an arbitrary bounded linear operator in $L_2$.

Lemma 7.1.5 Let $\hat{U}$ be the unitary operator of multiplication by $e^{i\theta}$ on the space $L_2$, namely,

$$L_2 \ni \varphi(z) \longmapsto (\hat{U}\varphi)(z) = e^{i\theta}\varphi(z) \in L_2.$$

The operator matrix $(u_{j,k})_{j,k=0}^{\infty}$ of the operator $\hat{U}$ (i.e., $U = I^{-1}\hat{U}I$) has a tri-diagonal structure, i.e., $u_{j,k} = 0$ for $|j-k| > 1$.

Proof By using (7.1.35), for $e_{n;\gamma} = I^{-1} P_{n;\gamma}(z)$, $n \in \mathbb{N}_0$, $\gamma = 0, 1, \dots, 2n$, $\forall j, k \in \mathbb{N}_0$, we have

$$u_{j,k;\alpha,\beta} = (U e_{k;\beta}, e_{j;\alpha})_{l_2} = \int_{\mathbb{C}} e^{i\theta} P_{k;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z), \qquad(7.1.36)$$

where $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. From (7.1.13) and (7.1.14), we have $e^{i\theta} P_{k;\beta}(z) \in \mathcal{P}_{k+1;\beta}$ if $\beta = 0, 1, \dots, k$, and $e^{i\theta} P_{k;\beta}(z) \in \mathcal{P}_{k+1;k}$ if $\beta = k+1, k+2, \dots, 2k$. According to (7.1.6), the integral in (7.1.36) is equal to zero for $j > k+1$ and for each $\beta = 0, 1, \dots, 2k$. On the other hand, the integral in (7.1.36) has the form

$$u_{j,k;\alpha,\beta} = \overline{\int_{\mathbb{C}} e^{-i\theta} P_{j;\alpha}(z)\, \overline{P_{k;\beta}(z)}\, d\rho(z)}. \qquad(7.1.37)$$

From (7.1.15) and (7.1.16), we now have $e^{-i\theta} P_{j;\alpha}(z) \in \mathcal{P}_{j+1;\alpha+2}$ if $\alpha = j, j+1, \dots, 2j$, and $e^{-i\theta} P_{j;\alpha}(z) \in \mathcal{P}_{j;2j}$ if $\alpha = 0, 1, \dots, j-1$. According to (7.1.6), the last integral is equal to zero for $k > j+1$ and for each $\alpha = 0, 1, \dots, 2j$. As a result, the integral in (7.1.37), i.e., the coefficients $u_{j,k;\alpha,\beta}$, $j, k \in \mathbb{N}_0$, are equal to zero for $|j-k| > 1$, $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2k$. In the previous considerations it is necessary to take into account that $e_{0;0} = I^{-1} P_{0;0}(z)$, $P_{0;0}(z) = 1$. □

In such a way, the matrix $(u_{j,k})_{j,k=0}^{\infty}$ of the operator $\hat{U}$ has the tri-diagonal block structure

$$\begin{bmatrix} u_{0,0} & u_{0,1} & 0 & 0 & 0 & \cdots\\ u_{1,0} & u_{1,1} & u_{1,2} & 0 & 0 & \cdots\\ 0 & u_{2,1} & u_{2,2} & u_{2,3} & 0 & \cdots\\ 0 & 0 & u_{3,2} & u_{3,3} & u_{3,4} & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \qquad(7.1.38)$$

A more detailed analysis of the expression (7.1.36) makes it possible to identify the zero and non-zero elements of the matrices $(u_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$ in each case $|j-k| \le 1$. The permutation properties of the matrix indices $j, k$ and $\alpha, \beta$ are also used.
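Lemma 7.1.5 admits the same kind of numerical check as Lemma 7.1.2. The sketch is ours, not the book's, and assumes an illustrative discrete measure $\rho$; unitarity of $\hat{U}$ shows up as unit column norms for every column not cut off by the truncation.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.uniform(-1, 1, 80) + 1j * rng.uniform(-1, 1, 80)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)
r, th = np.abs(z), np.angle(z)
dot = lambda f, g: np.sum(w * f * np.conj(g))

P, block = [], []
for ell in range(4):                                      # lines 0..3 of the order (7.1.2)
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))
        block.append(ell)

# matrix of multiplication by e^{i theta}: entries (e^{i theta} P_k, P_j), cf. (7.1.36)
U = np.array([[dot(np.exp(1j * th) * q, p) for q in P] for p in P])

for i in range(len(P)):                                   # tri-diagonal block structure
    for k in range(len(P)):
        if abs(block[i] - block[k]) > 1:
            assert abs(U[i, k]) < 1e-8

for k in range(len(P)):                                   # unit column norms away from the cut
    if block[k] < 3:
        assert np.isclose(np.linalg.norm(U[:, k]), 1.0)
```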


Let us denote by $((u^*)_{j,k})_{j,k=0}^{\infty}$ the operator matrix of the operator $(\hat{U})^*$, which is adjoint to $\hat{U}$. We remark that $(\hat{U})^*$ is the operator of multiplication by $e^{-i\theta}$. Taking into account the expression (7.1.36), for $j, k \in \mathbb{N}_0$ we have

$$(u^*)_{j,k;\alpha,\beta} = \int_{\mathbb{C}} e^{-i\theta} P_{k;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z) = \overline{\int_{\mathbb{C}} e^{i\theta} P_{j;\alpha}(z)\, \overline{P_{k;\beta}(z)}\, d\rho(z)} = \bar{u}_{k,j;\beta,\alpha}, \qquad(7.1.39)$$

where $\alpha = 0, 1, \dots, 2j$ and $\beta = 0, 1, \dots, 2k$.

Lemma 7.1.6 Let $(u_{j,k})_{j,k=0}^{\infty}$ be the operator matrix of the operator of multiplication by $e^{i\theta}$ in $L_2$, where $u_{j,k}: H_k \longrightarrow H_j$, and $u_{j,k} = (u_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$ are the matrices of the operators $u_{j,k}$ in the corresponding orthonormal bases. Then $\forall j \in \mathbb{N}$, we have

$$u_{j,j+1;\alpha,\beta} = 0, \quad \alpha = 0, 1, \dots, j-1, \quad \beta = 0, 1, \dots, 2j+2; \qquad(7.1.40)$$
$$u_{j,j+1;\alpha,\beta} = 0, \quad \alpha = j, j+1, \dots, 2j-1, \quad \beta = \alpha+3, \alpha+4, \dots, 2j+2; \qquad(7.1.41)$$
$$u_{j+1,j;\alpha,\beta} = 0, \quad \beta = 0, 1, \dots, j, \quad \alpha = \beta+1, \beta+2, \dots, 2j+2; \qquad(7.1.42)$$
$$u_{j+1,j;\alpha,\beta} = 0, \quad \beta = j+1, j+2, \dots, 2j, \quad \alpha = j+1, j+2, \dots, 2j+2; \qquad(7.1.43)$$

and $u_{1,0;1,0} = u_{1,0;2,0} = 0$.

If we choose, inside each "line with a corner"

$$\{r^0 e^{in\theta},\ r^1 e^{i(n-1)\theta},\ r^2 e^{i(n-2)\theta},\ \dots,\ r^1 e^{-i(n-1)\theta},\ r^0 e^{-in\theta}\}$$

(see Fig. 7.1), another order (preserving the order of the "lines with a corner"; see the comments after (7.1.2)), then Lemma 7.1.6 is no longer valid, but it will still be possible to describe the zeros of the matrices $(u_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{2j,2k}$. Such matrices $(u_{j,k})_{j,k=0}^{\infty}$ also have a tri-diagonal block structure, but the zero elements are possibly in other places.

Proof of Lemma 7.1.6 According to (7.1.36), (7.1.15), and (7.1.16), for $j \in \mathbb{N}$, $\alpha = 0, 1, \dots, 2j$, $\beta = 0, 1, \dots, 2j+2$, we have

$$u_{j,j+1;\alpha,\beta} = \int_{\mathbb{C}} e^{i\theta} P_{j+1;\beta}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z) = \overline{\int_{\mathbb{C}} e^{-i\theta} P_{j;\alpha}(z)\, \overline{P_{j+1;\beta}(z)}\, d\rho(z)},$$

where $e^{-i\theta} P_{j;\alpha}(z) \in \mathcal{P}_{j+1;\alpha+2}$ for $\alpha = j, j+1, \dots, 2j$. But according to (7.1.6), $\mathcal{P}_{j+1;\alpha+2}$ is orthogonal to $P_{j+1;\beta}$ for $\beta > \alpha+2$ and, hence, the last integral is equal to zero. This gives the equalities in (7.1.41).


Moreover, $e^{-i\theta} P_{j;\alpha}(z) \in \mathcal{P}_{j;2j}$ for $\alpha = 0, 1, \dots, j-1$. But, according to (7.1.6), $\mathcal{P}_{j;2j}$ is orthogonal to $P_{j+1;\beta}$ for $\beta = 0, 1, \dots, 2j+2$ and, hence, the last integral is also equal to zero. This gives the equalities in (7.1.40).

Similarly, from (7.1.36), (7.1.13), and (7.1.14), for $j \in \mathbb{N}_0$, $\alpha = 0, 1, \dots, 2j+2$, $\beta = 0, 1, \dots, 2j$, we have

$$u_{j+1,j;\alpha,\beta} = \int_{\mathbb{C}} e^{i\theta} P_{j;\beta}(z)\, \overline{P_{j+1;\alpha}(z)}\, d\rho(z),$$

where $e^{i\theta} P_{j;\beta}(z) \in \mathcal{P}_{j+1;\beta}$ for $\beta = 0, 1, \dots, j$. But according to (7.1.6), $\mathcal{P}_{j+1;\beta}$ is orthogonal to $P_{j+1;\alpha}$ for $\beta < \alpha$ and, hence, the last integral is equal to zero for $\alpha = \beta+1, \beta+2, \dots, 2j+2$. This gives the equalities in (7.1.42).

Further, $e^{i\theta} P_{j;\beta}(z) \in \mathcal{P}_{j+1;j}$ for $\beta = j+1, j+2, \dots, 2j$. But according to (7.1.6), $\mathcal{P}_{j+1;j}$ is orthogonal to $P_{j+1;\alpha}$ for $\alpha > j$ and, hence, the last integral is also equal to zero for $\alpha = j+1, j+2, \dots, 2j+2$. This gives the equalities in (7.1.43) and $u_{1,0;1,0} = u_{1,0;2,0} = 0$. □

Note that we do not have additional information about $u_{0,1}$.

Thus, after these last investigations, we conclude that in (7.1.38), for every $j \in \mathbb{N}$, the lower left corner of each matrix $u_{j+1,j}$ (starting from the second diagonal, which is located in the upper left corner) together with its $j+2$ last rows, and the upper right corner of each matrix $u_{j,j+1}$ (starting from the second diagonal, which is located in the lower right corner) together with its $j$ first rows, always contain zero elements. Considering (7.1.38), it can be stated that the unitary matrix corresponding to the operator of multiplication by $e^{i\theta}$ is multi-diagonal in the usual sense, that is, in the usual basis of the space $l_2$.

Lemma 7.1.7 The elements

$$u_{0,1;0,2},\ u_{1,0;0,0}; \quad u_{j,j+1;\alpha,\alpha+2},\ \alpha = j+1, j+2, \dots, 2j; \quad u_{j+1,j;\alpha,\alpha},\ \alpha = 0, 1, \dots, j; \quad j \in \mathbb{N}, \qquad(7.1.44)$$

of the matrices $(u_{j,k})_{j,k=0}^{\infty}$ in Lemma 7.1.6 are positive.

Proof We start with the study of $u_{1,0;0,0}$. Denote by $P'_{1;0}(z) = e^{i\theta} - (e^{i\theta}, 1)_{L_2}$ ($1 = P_{0;0}(z)$) the non-normalized vector $P_{1;0}(z)$ obtained in the Gram–Schmidt orthogonalization procedure from the vector $r^0 e^{i\theta}$. Therefore, by (7.1.36), we have

$$u_{1,0;0,0} = \int_{\mathbb{C}} e^{i\theta}\, \overline{P_{1;0}(z)}\, d\rho(z) = \|P'_{1;0}(z)\|^{-1}_{L_2} \int_{\mathbb{C}} e^{i\theta}\, \overline{\big(e^{i\theta} - (e^{i\theta}, 1)_{L_2}\big)}\, d\rho(z) = \|P'_{1;0}(z)\|^{-1}_{L_2}\big(\|e^{i\theta}\|^2_{L_2} - |(e^{-i\theta}, 1)_{L_2}|^2\big). \qquad(7.1.45)$$


The last difference is positive (see (7.1.47) below); hence, $u_{1,0;0,0} > 0$.

Let us consider $u_{0,1;0,2}$. Denote, as before, by $P'_{1;2}(z)$ the non-normalized vector $P_{1;2}(z)$ obtained in the Gram–Schmidt orthogonalization procedure from the vector $r^0 e^{-i\theta}$. According to (7.1.2) and (7.1.3), we have

$$P'_{1;2}(z) = e^{-i\theta} - (e^{-i\theta}, P_{1;1}(z))_{L_2} P_{1;1}(z) - (e^{-i\theta}, P_{1;0}(z))_{L_2} P_{1;0}(z) - (e^{-i\theta}, 1)_{L_2}.$$

Therefore, using (7.1.36), we get

$$
\begin{aligned}
u_{0,1;0,2} &= \int_{\mathbb{C}} e^{i\theta} P_{1;2}(z)\, d\rho(z) = \|P'_{1;2}(z)\|^{-1}_{L_2} \int_{\mathbb{C}} e^{i\theta} P'_{1;2}(z)\, d\rho(z)\\
&= \|P'_{1;2}(z)\|^{-1}_{L_2} \int_{\mathbb{C}} e^{i\theta}\big(e^{-i\theta} - (e^{-i\theta}, P_{1;1}(z))_{L_2} P_{1;1}(z) - (e^{-i\theta}, P_{1;0}(z))_{L_2} P_{1;0}(z) - (e^{-i\theta}, 1)_{L_2}\big)\, d\rho(z)\\
&= \|P'_{1;2}(z)\|^{-1}_{L_2}\big(\|e^{-i\theta}\|^2_{L_2} - |(e^{-i\theta}, P_{1;1}(z))_{L_2}|^2 - |(e^{-i\theta}, P_{1;0}(z))_{L_2}|^2 - |(e^{-i\theta}, 1)_{L_2}|^2\big).
\end{aligned}
\qquad(7.1.46)
$$

Using (7.1.47), we conclude that the last expression is positive and, therefore, $u_{0,1;0,2} > 0$.

The positivity in (7.1.45) and (7.1.46) follows from the Parseval equality applied to the decomposition of the function $e^{-i\theta} \in L_2$ with respect to the orthonormal basis (7.1.3) of the space $L_2$, namely,

$$|(e^{-i\theta}, 1)_{L_2}|^2 + |(e^{-i\theta}, P_{1;0}(z))_{L_2}|^2 + |(e^{-i\theta}, P_{1;1}(z))_{L_2}|^2 + \cdots = \|e^{-i\theta}\|^2_{L_2}. \qquad(7.1.47)$$

Let us pass to the proof of the positivity of $u_{j+1,j;\alpha,\alpha}$, where $j \in \mathbb{N}$, $\alpha = 0, 1, \dots, j$. From (7.1.36), we have

$$u_{j+1,j;\alpha,\alpha} = \int_{\mathbb{C}} e^{i\theta} P_{j;\alpha}(z)\, \overline{P_{j+1;\alpha}(z)}\, d\rho(z). \qquad(7.1.48)$$

According to (7.1.4) and (7.1.6),

$$P_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,j-|\alpha-j|}\, e^{i(j-\alpha)\theta} + R_{j;\alpha}(z), \qquad(7.1.49)$$

where $R_{j;\alpha}(z)$ is some polynomial from $\mathcal{P}_{j;\alpha-1}$ if $\alpha > 0$, or from $\mathcal{P}_{j-1;2(j-1)}$ if $\alpha = 0$. Therefore, $e^{i\theta} R_{j;\alpha}(z)$ is some polynomial from $\mathcal{P}_{j+1;\alpha-1}$ for $\alpha = 1, 2, \dots, j$, or


from $\mathcal{P}_{j+1;j}$ for $\alpha = j+1, j+2, \dots, 2j$, or from $\mathcal{P}_{j;2(j-1)}$ for $\alpha = 0$ (see (7.1.13), (7.1.14), and (7.1.6)). Multiplying (7.1.49) by $e^{i\theta}$, we get

$$e^{i\theta} P_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,j-|\alpha-j|}\, e^{i(j-\alpha+1)\theta} + e^{i\theta} R_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,(j+1)-|\alpha-(j+1)|}\, e^{i((j+1)-\alpha)\theta} + e^{i\theta} R_{j;\alpha}(z); \quad e^{i\theta} R_{j;\alpha}(z) \in \begin{cases} \mathcal{P}_{j;2(j-1)}, & \alpha = 0;\\ \mathcal{P}_{j+1;\alpha-1}, & \alpha = 1, 2, \dots, j;\\ \mathcal{P}_{j+1;j}, & \alpha = j+1, j+2, \dots, 2j. \end{cases} \qquad(7.1.50)$$

On the other hand, the equality (7.1.49), written for $P_{j+1;\alpha}(z)$, gives

$$P_{j+1;\alpha}(z) = k_{j+1;\alpha}\, r^{\,(j+1)-|\alpha-(j+1)|}\, e^{i((j+1)-\alpha)\theta} + R_{j+1;\alpha}(z); \quad R_{j+1;\alpha}(z) \in \begin{cases} \mathcal{P}_{j;2j}, & \alpha = 0;\\ \mathcal{P}_{j+1;\alpha-1}, & \alpha = 1, 2, \dots, 2j. \end{cases} \qquad(7.1.51)$$

Let us find $r^{\,(j+1)-|\alpha-(j+1)|}\, e^{i((j+1)-\alpha)\theta}$ from (7.1.51) and substitute it into (7.1.50). Thus, we have

$$e^{i\theta} P_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\big(P_{j+1;\alpha}(z) - R_{j+1;\alpha}(z)\big) + e^{i\theta} R_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\, P_{j+1;\alpha}(z) - \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\, R_{j+1;\alpha}(z) + e^{i\theta} R_{j;\alpha}(z), \qquad(7.1.52)$$

where the second term belongs to $\mathcal{P}_{j+1;\alpha-1}$ ($\alpha \ne 0$) or to $\mathcal{P}_{j;2j}$ ($\alpha = 0$), and the third term belongs to $\mathcal{P}_{j;2(j-1)}$ if $\alpha = 0$, to $\mathcal{P}_{j+1;\alpha-1}$ if $\alpha = 1, 2, \dots, j$, and to $\mathcal{P}_{j+1;j}$ if $\alpha = j+1, j+2, \dots, 2j$; in any case, both are orthogonal to $P_{j+1;\alpha}(z)$ if $\alpha \le j$. Therefore, after substituting the expression (7.1.52) into (7.1.48), we get $u_{j+1,j;\alpha,\alpha} = k_{j;\alpha}(k_{j+1;\alpha})^{-1} > 0$.

Consider, at last, the elements $u_{j,j+1;\alpha,\alpha+2}$, where $j \in \mathbb{N}$ and $\alpha = j+1, j+2, \dots, 2j$. From (7.1.36), we get

$$u_{j,j+1;\alpha,\alpha+2} = \int_{\mathbb{C}} e^{i\theta} P_{j+1;\alpha+2}(z)\, \overline{P_{j;\alpha}(z)}\, d\rho(z) = \overline{\int_{\mathbb{C}} e^{-i\theta} P_{j;\alpha}(z)\, \overline{P_{j+1;\alpha+2}(z)}\, d\rho(z)}. \qquad(7.1.53)$$


For $P_{j;\alpha}(z)$ we have the expression (7.1.49). Multiplying it by $e^{-i\theta}$, similarly to (7.1.50), we get

$$e^{-i\theta} P_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,j-|\alpha-j|}\, e^{i(j-\alpha-1)\theta} + e^{-i\theta} R_{j;\alpha}(z) = k_{j;\alpha}\, r^{\,(j+1)-|(\alpha+2)-(j+1)|}\, e^{i((j+1)-(\alpha+2))\theta} + e^{-i\theta} R_{j;\alpha}(z); \quad e^{-i\theta} R_{j;\alpha}(z) \in \begin{cases} \mathcal{P}_{j-1;2(j-1)}, & \alpha = 0;\\ \mathcal{P}_{j;2j}, & \alpha = 1, 2, \dots, j-1;\\ \mathcal{P}_{j+1;\alpha+2}, & \alpha = j, j+1, \dots, 2j. \end{cases} \qquad(7.1.54)$$

Now the equality (7.1.49), written for $P_{j+1;\alpha+2}(z)$, has the form

$$P_{j+1;\alpha+2}(z) = k_{j+1;\alpha+2}\, r^{\,(j+1)-|(\alpha+2)-(j+1)|}\, e^{i((j+1)-(\alpha+2))\theta} + R_{j+1;\alpha+2}(z); \quad R_{j+1;\alpha+2}(z) \in \begin{cases} \mathcal{P}_{j;2j}, & \alpha = 0;\\ \mathcal{P}_{j+1;\alpha+1}, & \alpha = 1, 2, \dots, 2j. \end{cases} \qquad(7.1.55)$$

We take $r^{\,(j+1)-|(\alpha+2)-(j+1)|}\, e^{i((j+1)-(\alpha+2))\theta}$ from (7.1.55) and substitute it into (7.1.54). Thus, we get

$$e^{-i\theta} P_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha+2}}\big(P_{j+1;\alpha+2}(z) - R_{j+1;\alpha+2}(z)\big) + e^{-i\theta} R_{j;\alpha}(z) = \frac{k_{j;\alpha}}{k_{j+1;\alpha+2}}\, P_{j+1;\alpha+2}(z) - \frac{k_{j;\alpha}}{k_{j+1;\alpha+2}}\, R_{j+1;\alpha+2}(z) + e^{-i\theta} R_{j;\alpha}(z). \qquad(7.1.56)$$

As before, the second term in (7.1.56) belongs to $\mathcal{P}_{j+1;\alpha+1}$ for $\alpha = 1, 2, \dots, 2j$, or to $\mathcal{P}_{j;2j}$ for $\alpha = 0$, and the third term belongs to $\mathcal{P}_{j+1;\alpha+1}$ for $\alpha = j+1, j+2, \dots, 2j$, or to $\mathcal{P}_{j;2j}$ for $\alpha = 1, 2, \dots, j-1$, and to $\mathcal{P}_{j-1;2(j-1)}$ if $\alpha = 0$; in any case, it is orthogonal to $P_{j+1;\alpha+2}(z)$ for $\alpha = j+1, j+2, \dots, 2j$. Therefore, substituting the expression (7.1.56) into (7.1.53) gives $u_{j,j+1;\alpha,\alpha+2} = k_{j;\alpha}(k_{j+1;\alpha+2})^{-1} > 0$. □

In what follows, we use the usual convenient notations for the elements $u_{j,k}$ of the Jacobi matrix, namely,

$$u_n = u_{n+1,n}: H_n \longrightarrow H_{n+1}, \quad w_n = u_{n,n}: H_n \longrightarrow H_n, \quad v_n = u_{n,n+1}: H_{n+1} \longrightarrow H_n, \quad n \in \mathbb{N}_0. \qquad(7.1.57)$$
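The zero pattern of Lemma 7.1.6 and the positivity of Lemma 7.1.7 for the blocks $u_n$, $v_n$ can be checked in the same spirit. The sketch is ours, not the book's, and assumes an illustrative discrete measure $\rho$; line $n$ of the basis occupies flat indices $n^2, \dots, (n+1)^2 - 1$.

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.uniform(-1, 1, 80) + 1j * rng.uniform(-1, 1, 80)  # atoms of a discrete rho
w = np.full(z.size, 1.0 / z.size)
r, th = np.abs(z), np.angle(z)
dot = lambda f, g: np.sum(w * f * np.conj(g))

P = []
for ell in range(4):                                      # lines 0..3 of the order (7.1.2)
    for a in range(2 * ell + 1):
        f = r ** (ell - abs(a - ell)) * np.exp(1j * (ell - a) * th)
        for _ in range(2):
            for p in P:
                f = f - dot(f, p) * p
        P.append(f / np.sqrt(dot(f, f).real))

U = np.array([[dot(np.exp(1j * th) * q, p) for q in P] for p in P])
b = lambda n: slice(n * n, (n + 1) ** 2)                  # flat indices of line n

for n in (1, 2):
    un = U[b(n + 1), b(n)]                                # u_n : H_n -> H_{n+1}
    vn = U[b(n), b(n + 1)]                                # v_n : H_{n+1} -> H_n
    for beta in range(n + 1):                             # (7.1.42): zeros below the diagonal
        for alpha in range(beta + 1, 2 * n + 3):
            assert abs(un[alpha, beta]) < 1e-8
    for alpha in range(n + 1):                            # (7.1.64): u_{n;alpha,alpha} > 0
        assert un[alpha, alpha].real > 0
    for alpha in range(n):                                # (7.1.40): first n rows of v_n vanish
        assert np.all(np.abs(vn[alpha, :]) < 1e-8)
```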

All the previous investigations are summarized in the following theorem.


Theorem 7.1.8 The bounded Hermitian operator $\hat{A}$ of multiplication by $r$, commuting with the unitary operator $\hat{U}$ of multiplication by $e^{i\theta}$ (with a strong cyclic vector) on the space $L_2$, has, in the orthonormal basis (7.1.3) of polynomials, the form of a tri-diagonal block Jacobi type symmetric matrix $J_A = (a_{j,k})_{j,k=0}^{\infty}$, and $\hat{U}$ the form of a unitary matrix $J_U = (u_{j,k})_{j,k=0}^{\infty}$; these matrices act on the space (7.1.7), namely,

$$l_2 = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \quad H_n = \mathbb{C}^{2n+1}, \quad n \in \mathbb{N}_0. \qquad(7.1.58)$$

∗ ⎢ ⎢∗ ⎢ ⎢+ ⎢ ⎢0 ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ . = ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

∗ + ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ + ∗ 0 + 0 0 0 0

0 ∗ ∗ ∗ ∗ ∗ ∗ + 0

0

..

.

b0 a0 0 0 .. .

c0 b1 a1 0 .. .

0 c1 b2 a2 .. .

0 0 c2 b3 .. .

0 0 0 c3 .. .

... ... ... ... .. .

⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ ⎤

∗ + 0 0 ∗ ∗ + 0 ∗ ∗ ∗ + ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ + ∗ ∗ ∗ 0 + ∗ ∗ 0 0 + ∗ 0 0 0 + 0 0 0 0 0 0 0 0

0 0 0 ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ + 0 .. .

0 ∗+ 0 0 0 0 ∗ ∗ + 0 0 0 ∗ ∗ ∗ + 0 0 ∗ ∗ ∗ ∗ + 0 ∗ ∗ ∗ ∗ ∗ + ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗ ∗

0 0 0 0 0 ∗ ∗ ∗ ∗ ∗ ∗ ∗ .. .

⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦

(7.1.59)

In (7.1.59), $\forall n \in \mathbb{N}_0$, $b_n$ is a $((2n+1) \times (2n+1))$-matrix, $b_n = (b_{n;\alpha,\beta})_{\alpha,\beta=0}^{2n,2n}$ ($b_0 = b_{0;0,0}$ is a scalar); $a_n$ is a $((2n+3) \times (2n+1))$-matrix, $a_n = (a_{n;\alpha,\beta})_{\alpha,\beta=0}^{2n+2,2n}$; $c_n$ is a $((2n+1) \times (2n+3))$-matrix, $c_n = (c_{n;\alpha,\beta})_{\alpha,\beta=0}^{2n,2n+2}$. The matrices $a_n$ and $c_n$ have


some elements that are always equal to zero, namely, $\forall n \in \mathbb{N}_0$, we have

$$c_{n;\alpha,\beta} = 0, \quad \alpha = 0, 1, \dots, 2n, \quad \beta = \alpha+2, \alpha+3, \dots, 2n+2;$$
$$a_{n;\alpha,\beta} = 0, \quad \beta = 0, 1, \dots, 2n, \quad \alpha = \beta+2, \beta+3, \dots, 2n+2. \qquad(7.1.60)$$

Some of their other elements are always positive, namely, $\forall n \in \mathbb{N}_0$, we have

$$c_{n;\alpha,\alpha+1} > 0, \quad a_{n;\alpha+1,\alpha} > 0, \quad \alpha = 0, 1, \dots, 2n. \qquad(7.1.61)$$

Therefore, we can say that, $\forall n \in \mathbb{N}_0$, the lower left corner of each matrix $a_n$ (starting from the third diagonal) and the upper right corner of each matrix $c_n$ (starting from the third diagonal) always contain zero elements. All positive elements in (7.1.59) are marked "+"; possible zero and non-zero elements are indicated by "∗". Thus, the matrix (7.1.59) is multi-diagonal in the scalar form. The adjoint operator $(\hat{A})^* = \hat{A}$ has, in the basis (7.1.3), the same form of a tri-diagonal block Jacobi type matrix $J_A$. In the notations (7.1.57), the unitary matrix has the form

$$J_U = \begin{bmatrix} w_0 & v_0 & 0 & 0 & 0 & \cdots\\ u_0 & w_1 & v_1 & 0 & 0 & \cdots\\ 0 & u_1 & w_2 & v_2 & 0 & \cdots\\ 0 & 0 & u_2 & w_3 & v_3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \qquad(7.1.62)$$

[As with (7.1.59), the original also displays the scalar form of (7.1.62), with the always-positive entries of (7.1.64) marked "+" and the possibly non-zero entries marked "∗".]

344

7 Block Jacobi Type Matrices and the Complex Moment Problem. . .

In (7.1.62), ∀n ∈ N_0, w_n is a ((2n+1) × (2n+1))-matrix: w_n = (w_{n;α,β}), α, β = 0, 1, ..., 2n (w_0 = w_{0;0,0} is a scalar); u_n is a ((2n+3) × (2n+1))-matrix: u_n = (u_{n;α,β}), α = 0, 1, ..., 2n+2, β = 0, 1, ..., 2n; v_n is a ((2n+1) × (2n+3))-matrix: v_n = (v_{n;α,β}), α = 0, 1, ..., 2n, β = 0, 1, ..., 2n+2. The matrices u_n and v_n have some elements that are always equal to zero, namely, ∀n ∈ N we have

v_{n;α,β} = 0,  α = 0, 1, ..., n−1,  β = 0, 1, ..., 2n+2;
v_{n;α,β} = 0,  α = n, n+1, ..., 2n−1,  β = α+3, α+4, ..., 2n+2;
u_{n;α,β} = 0,  β = 0, 1, ..., n,  α = β+1, β+2, ..., 2n+2;
u_{n;α,β} = 0,  β = n+1, n+2, ..., 2n,  α = n+1, n+2, ..., 2n+2,
(7.1.63)

and u_{0;1,0} = u_{0;2,0} = 0. Some of their other elements are always positive, namely, we have

v_{0;0,2}, u_{0;0,0} > 0;
u_{n;α,α} > 0,  α = 0, 1, ..., n;
v_{n;α,α+2} > 0,  α = n+1, n+2, ..., 2n;  n ∈ N.
(7.1.64)

Therefore, we can say that, ∀n ∈ N_0, each lower-left corner of the matrix u_n (starting from the second diagonal, which is located in the upper-left corner) together with its last n+2 rows, and each upper-right corner of the matrix v_n (starting from the second diagonal, which is located in the lower-right corner) together with its first n rows, always contain zero elements. All positive elements in (7.1.62) are denoted by "+"; possible zero and non-zero elements are indicated by "∗". Thus, the matrix (7.1.62) is multi-diagonal in the scalar form. The adjoint operator (Û)* in the basis (7.1.3) has a form similar to the tri-diagonal block Jacobi type matrix J_{U*}. The matrices J_A, J_U, and J_{U*} act by the rule

(J_A f)_n = a_{n−1} f_{n−1} + b_n f_n + c_n f_{n+1},
(J_U f)_n = u_{n−1} f_{n−1} + w_n f_n + v_n f_{n+1},
(J_{U*} f)_n = v*_{n−1} f_{n−1} + w*_n f_n + u*_n f_{n+1},
n ∈ N_0,  f_{−1} = 0,  ∀f = (f_n)_{n=0}^∞ ∈ l_2
(7.1.65)

(here "∗" denotes the usual adjoint matrix). Let us explain that the form of the coefficients in the expression for J_{U*} follows from (7.1.39) and (7.1.57).
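The rule (7.1.65) can be checked on finite truncations. A minimal numerical sketch (not from the book; the block entries below are random placeholders, constrained only by the sizes dim H_n = 2n+1):

```python
import numpy as np

def block_sizes(N):
    # dim H_n = 2n + 1, n = 0, ..., N-1
    return [2 * n + 1 for n in range(N)]

def apply_JA(a, b, c, f):
    """(J_A f)_n = a_{n-1} f_{n-1} + b_n f_n + c_n f_{n+1}, with f_{-1} = 0.
    Shapes: a[n] is (2n+3)x(2n+1), b[n] is (2n+1)x(2n+1), c[n] is (2n+1)x(2n+3)."""
    N = len(f)
    out = []
    for n in range(N):
        g = b[n] @ f[n]
        if n > 0:
            g = g + a[n - 1] @ f[n - 1]
        if n + 1 < N:
            g = g + c[n] @ f[n + 1]
        out.append(g)
    return out

rng = np.random.default_rng(0)
N = 4
dims = block_sizes(N)
b = [rng.standard_normal((d, d)) for d in dims]
a = [rng.standard_normal((dims[n + 1], dims[n])) for n in range(N - 1)]
c = [rng.standard_normal((dims[n], dims[n + 1])) for n in range(N - 1)]
f = [rng.standard_normal(d) for d in dims]

# dense truncation of J_A with the same blocks, for comparison
offs = np.concatenate(([0], np.cumsum(dims)))
J = np.zeros((offs[-1], offs[-1]))
for n in range(N):
    J[offs[n]:offs[n+1], offs[n]:offs[n+1]] = b[n]
    if n + 1 < N:
        J[offs[n+1]:offs[n+2], offs[n]:offs[n+1]] = a[n]
        J[offs[n]:offs[n+1], offs[n+1]:offs[n+2]] = c[n]

g_blocks = np.concatenate(apply_JA(a, b, c, f))
assert np.allclose(g_blocks, J @ np.concatenate(f))
```

The blockwise action agrees with the dense matrix-vector product, which is all (7.1.65) asserts for each coordinate.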


7.2 Direct and Inverse Spectral Problems for Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Complex Moment Problem in the Exponential Form

As was mentioned above, the main result of the previous section is actually the solution of the inverse problem corresponding to the direct one presented in the title of this section. We consider operators on the space l_2 of the form (7.1.7). In addition to the space l_2, consider its rigging

(l_fin)′ ⊃ l_2(p⁻¹) ⊃ l_2 ⊃ l_2(p) ⊃ l_fin,
(7.2.1)

where l_2(p) is the weighted l_2-space with a weight p = (p_n)_{n=0}^∞, p_n ≥ 1, and p⁻¹ = (p_n⁻¹)_{n=0}^∞. In our case l_2(p) is the Hilbert space of sequences f = (f_n)_{n=0}^∞, f_n ∈ H_n, for which

‖f‖²_{l_2(p)} = Σ_{n=0}^∞ ‖f_n‖²_{H_n} p_n,  (f, g)_{l_2(p)} = Σ_{n=0}^∞ (f_n, g_n)_{H_n} p_n.
(7.2.2)

The space l_2(p⁻¹) is defined similarly; recall that l_fin is the space of finite sequences and (l_fin)′ is the space dual to l_fin. It is easy to show that the embedding l_2(p) ↪ l_2 is quasi-nuclear if Σ_{n=0}^∞ n p_n⁻¹ < ∞.
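The weighted norm (7.2.2) and the quasi-nuclearity condition are easy to experiment with. A sketch with a hypothetical weight p_n = (n+1)³ (my own choice, for which Σ n p_n⁻¹ converges):

```python
import numpy as np

def weighted_norm_sq(f, p):
    # ||f||^2_{l2(p)} = sum_n ||f_n||^2_{H_n} * p_n, cf. (7.2.2)
    return sum(np.dot(fn, fn) * pn for fn, pn in zip(f, p))

p = [(n + 1) ** 3 for n in range(50)]                  # weight with p_n >= 1
f = [np.ones(2 * n + 1) / (n + 1) ** 2 for n in range(50)]  # f_n in H_n = C^{2n+1}
nsq = weighted_norm_sq(f, p)

# sufficient condition for the quasi-nuclear embedding l2(p) -> l2:
# sum_n n / p_n < infinity (here it is bounded by sum 1/(n+1)^2)
tail = sum(n / p[n] for n in range(50))
assert nsq > 0 and tail < 2.0
```

The truncated sum Σ n/p_n stays below π²/6, so this weight satisfies the stated sufficient condition.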

Let A be a bounded self-adjoint operator strongly commuting with a unitary operator U. Suppose A and U are standardly connected with the chain (7.2.1) (about the standard connection, see the beginning of the last section of this chapter). According to the projection spectral theorem, such operators have the representations

Af = ∫_C r o(z) dσ(z) f,  Uf = ∫_C e^{iθ} o(z) dσ(z) f,  f ∈ l_2,
(7.2.3)

where o(z) : l_2(p) → l_2(p⁻¹) is the generalized projection operator and dσ(z) is a spectral measure. The operator U* adjoint to U has the same representation as in (7.2.3), with e^{iθ} o(z) replaced by e^{−iθ} o(z). For every f ∈ l_fin, the projection o(z)f ∈ l_2(p⁻¹) is a generalized eigenvector of the operators U, U*, and A with corresponding eigenvalues e^{iθ}, e^{−iθ}, and r. For all f, g ∈ l_fin, we have the Parseval equality

(f, g)_{l_2} = ∫_C (o(z)f, g)_{l_2} dσ(z);
(7.2.4)

after extension by continuity, the equality (7.2.4) is valid for all f, g ∈ l_2.


Let us denote by π_n the operator of orthogonal projection in l_2 onto H_n, n ∈ N_0. Hence, ∀f = (f_n)_{n=0}^∞ ∈ l_2, we have f_n = π_n f. This operator acts similarly on the spaces l_2(p) and l_2(p⁻¹), but possibly with a norm not equal to one. Let us consider the operator matrix (o_{j,k}(z))_{j,k=0}^∞, where

o_{j,k}(z) = π_j o(z) π_k : l_2 → H_j  (or H_k → H_j).
(7.2.5)

The Parseval equality (7.2.4) can be written in the form

(f, g)_{l_2} = Σ_{j,k=0}^∞ ∫_C (o(z) π_k f, π_j g)_{l_2} dσ(z)
            = Σ_{j,k=0}^∞ ∫_C (π_j o(z) π_k f, g)_{l_2} dσ(z)
            = Σ_{j,k=0}^∞ ∫_C (o_{j,k}(z) f_k, g_j)_{l_2} dσ(z),  ∀f, g ∈ l_2.
(7.2.6)

Let us now pass to the study of more special bounded operators A and U acting on the space l_2. Namely, let them be given by matrices J_A and J_U having the tri-diagonal block structure of the forms (7.1.59) and (7.1.62), respectively. Thus, the operators A and U are defined by the first and second expressions in (7.1.65); the adjoint operator U* is similarly defined by the third expression in (7.1.65). Recall that the norms of the elements a_n, b_n, c_n, as well as u_n, w_n, v_n, are uniformly bounded for all n ∈ N_0. For further investigation, assume that the conditions (7.1.60), (7.1.61) and (7.1.63), (7.1.64) are satisfied and, additionally, that the operators A and U given by (7.1.59) and (7.1.62) are self-adjoint and unitary, respectively, and commute on l_2. Conditions under which the operators A and U are self-adjoint, unitary, and commuting are investigated in the next section. Here, we write the Parseval equality (7.2.6) in terms of generalized eigenvectors of the operator A. First, let us prove a lemma.

Lemma 7.2.1 Let φ(z) = (φ_n(z))_{n=0}^∞, φ_n(z) ∈ H_n, z ∈ C, be a generalized eigenvector from (l_fin)′ of the commuting self-adjoint operator A and unitary operator U (U*) with eigenvalues r and e^{iθ} (e^{−iθ}), respectively. Thus, φ(z) is a solution in (l_fin)′ of the system of three difference equations (see (7.1.65)), namely,

(J_A φ(z))_n = a_{n−1} φ_{n−1}(z) + b_n φ_n(z) + c_n φ_{n+1}(z) = r φ_n(z),
(J_U φ(z))_n = u_{n−1} φ_{n−1}(z) + w_n φ_n(z) + v_n φ_{n+1}(z) = e^{iθ} φ_n(z),
(J_{U*} φ(z))_n = v*_{n−1} φ_{n−1}(z) + w*_n φ_n(z) + u*_n φ_{n+1}(z) = e^{−iθ} φ_n(z),
n ∈ N_0,  φ_{−1}(z) = 0,
(7.2.7)

with an initial condition φ_0 ∈ C.


It is claimed that this solution has the form

φ_n(z) = Q_n(z) φ_0 = (Q_{n;0}, Q_{n;1}, ..., Q_{n;2n}) φ_0,  ∀n ∈ N.
(7.2.8)

Here Q_{n;α}, α = 0, 1, ..., 2n, are polynomials of the variables r, e^{iθ}, and e^{−iθ}, and these polynomials have the form

Q_{n;α}(z) = k_{n;α} r^{n−|α−n|} e^{−i(n−α)θ} + R_{n;α}(z),  α = 0, 1, ..., 2n,  n ∈ N_0,
(7.2.9)

where k_{n;α} > 0 and R_{n;α}(z) is some linear combination of r, e^{iθ}, and e^{−iθ} of lesser order with respect to (7.1.2) and Fig. 7.1.

Proof For n = 0, the system (7.2.7) (in a convenient order of the equations) has the form

(J_{U*} φ)_0 = e^{−iθ} φ_0,  (J_A φ)_0 = r φ_0,  (J_U φ)_0 = e^{iθ} φ_0,
(7.2.10)

i.e.,

w̄_{0;0,0} φ_{0;0} + ū_{0;0,0} φ_{1;0} = e^{−iθ} φ_{0;0},
b_{0;0,0} φ_{0;0} + c_{0;0,0} φ_{1;0} + c_{0;0,1} φ_{1;1} = r φ_{0;0},
w_{0;0,0} φ_{0;0} + v_{0;0,0} φ_{1;0} + v_{0;0,1} φ_{1;1} + v_{0;0,2} φ_{1;2} = e^{iθ} φ_{0;0},

or

ū_{0;0,0} φ_{1;0} = (e^{−iθ} − w̄_{0;0,0}) φ_{0;0},
c_{0;0,0} φ_{1;0} + c_{0;0,1} φ_{1;1} = (r − b_{0;0,0}) φ_{0;0},
v_{0;0,0} φ_{1;0} + v_{0;0,1} φ_{1;1} + v_{0;0,2} φ_{1;2} = (e^{iθ} − w_{0;0,0}) φ_{0;0}.
(7.2.11)

Here and in what follows we denote

φ_0 = φ_{0;0} := Q_{0;0};  φ_n(z) = (φ_{n;0}(z), φ_{n;1}(z), ..., φ_{n;2n}(z)) ∈ H_n,  ∀n ∈ N.

We rewrite the equalities of (7.2.11) in the form

A_0 φ_1(z) = ((e^{−iθ} − w̄_{0;0,0}) φ_0, (r − b_{0;0,0}) φ_0, (e^{iθ} − w_{0;0,0}) φ_0),

A_0 =
⎡ ū_{0;0,0}  0          0         ⎤
⎢ c_{0;0,0}  c_{0;0,1}  0         ⎥
⎣ v_{0;0,0}  v_{0;0,1}  v_{0;0,2} ⎦ ,
(7.2.12)


where, due to the conditions (7.1.60), (7.1.61) and (7.1.63), (7.1.64), we always have ū_{0;0,0} > 0, c_{0;0,1} > 0, v_{0;0,2} > 0. Hence, A_0 is lower triangular with positive diagonal and therefore invertible. From (7.2.11) we consequently obtain

φ_{1;0}(z) = (e^{−iθ} − w̄_{0;0,0}) / ū_{0;0,0} · φ_0 =: Q_{1;0}(z) φ_0,
φ_{1;1}(z) = ((r − b_{0;0,0}) − c_{0;0,0} Q_{1;0}(z)) / c_{0;0,1} · φ_0 =: Q_{1;1}(z) φ_0,
φ_{1;2}(z) = ((e^{iθ} − w_{0;0,0}) − v_{0;0,0} Q_{1;0}(z) − v_{0;0,1} Q_{1;1}(z)) / v_{0;0,2} · φ_0 =: Q_{1;2}(z) φ_0.
(7.2.13)

In other words, the solution φ_n(z) of (7.2.7) for n = 1 has the form (7.2.8) with (7.2.9). Suppose, by induction, that for n ∈ N the coordinates φ_{n−1}(z) and φ_n(z) of our generalized eigenvector φ(z) = (φ_n(z))_{n=0}^∞ have the form (7.2.8) with (7.2.9); we will show that φ_{n+1}(z) also has the form (7.2.8) with (7.2.9). The coordinate φ_{n+1}(z) satisfies the system of equations (7.2.7). But this system is overdetermined: it consists of 3(2n+1) scalar equations, from which it is necessary to find only the 2n+3 unknowns φ_{n+1;0}, φ_{n+1;1}, ..., φ_{n+1;2(n+1)}, using as initial data the previous 2n+1 values φ_{n;0}, φ_{n;1}, ..., φ_{n;2n} of the coordinates of the vector φ_n(z). Similarly to (7.2.10), for j = 0, 1, ..., 2n, we have

(J_{U*} φ(z))_{n;j} = e^{−iθ} φ_{n;j}(z),  (J_A φ(z))_{n;j} = r φ_{n;j}(z),  (J_U φ(z))_{n;j} = e^{iθ} φ_{n;j}(z).

We act in the following way. According to Theorem 7.1.8, namely (7.1.60), (7.1.61) and (7.1.63), (7.1.64), the ((2n+1) × (2n+3))-matrices u*_n, c_n, v_n applied to φ_{n+1} ∈ H_{n+1} have the form

u*_n φ_{n+1}(z) =
⎡ ū_{n;0,0}     0             ···  0            0 ··· 0 ⎤
⎢ ū_{n;0,1}     ū_{n;1,1}     ···  0            0 ··· 0 ⎥
⎢ ⋮             ⋮             ⋱    ⋮                    ⎥
⎢ ū_{n;0,n}     ū_{n;1,n}     ···  ū_{n;n,n}    0 ··· 0 ⎥
⎢ ⋮             ⋮                  ⋮                    ⎥
⎢ ū_{n;0,2n−1}  ū_{n;1,2n−1}  ···  ū_{n;n,2n−1} 0 ··· 0 ⎥
⎣ ū_{n;0,2n}    ū_{n;1,2n}    ···  ū_{n;n,2n}   0 ··· 0 ⎦ φ_{n+1}(z),
(7.2.14)
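Each step of this construction amounts to a forward substitution with a lower-triangular matrix of the kind (7.2.12)/(7.2.17). A numerical sketch of the n = 0 step (7.2.11)–(7.2.13); the notation follows the text, but the concrete entry values are my placeholders (respecting the positivity of u_{0;0,0}, c_{0;0,1}, v_{0;0,2} required by (7.1.61), (7.1.64)):

```python
import numpy as np

# placeholder scalar entries
u000, c000, c001 = 1.5, 0.3, 2.0
v000, v001, v002 = 0.4, -0.7, 1.1
w000, b000 = 0.2 + 0.5j, 0.9
r, theta, phi0 = 0.8, 0.6, 1.0

# right-hand side of (7.2.11), multiplied by phi_0
rhs = np.array([np.exp(-1j * theta) - np.conj(w000),
                r - b000,
                np.exp(1j * theta) - w000]) * phi0

# the lower-triangular matrix A_0 of (7.2.12)
A0 = np.array([[np.conj(u000), 0,    0],
               [c000,          c001, 0],
               [v000,          v001, v002]], dtype=complex)

# forward substitution, as in (7.2.13)
phi10 = rhs[0] / A0[0, 0]
phi11 = (rhs[1] - A0[1, 0] * phi10) / A0[1, 1]
phi12 = (rhs[2] - A0[2, 0] * phi10 - A0[2, 1] * phi11) / A0[2, 2]

assert np.allclose(np.linalg.solve(A0, rhs), [phi10, phi11, phi12])
```

The three quotients are exactly Q_{1;0}(z)φ_0, Q_{1;1}(z)φ_0, Q_{1;2}(z)φ_0, and they agree with a dense triangular solve.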




c_n φ_{n+1}(z) =
⎡ c_{n;0,0}     c_{n;0,1}     0             ···  0              0 ⎤
⎢ c_{n;1,0}     c_{n;1,1}     c_{n;1,2}     ···  0              0 ⎥
⎢ c_{n;2,0}     c_{n;2,1}     c_{n;2,2}     ···  0              0 ⎥
⎢ ⋮             ⋮             ⋮             ⋱    ⋮              ⋮ ⎥
⎢ c_{n;2n−1,0}  c_{n;2n−1,1}  c_{n;2n−1,2}  ···  0              0 ⎥
⎣ c_{n;2n,0}    c_{n;2n,1}    c_{n;2n,2}    ···  c_{n;2n,2n+1}  0 ⎦ φ_{n+1}(z),
(7.2.15)

v_n φ_{n+1}(z) =
⎡ 0             ···  0              ···  0               0 ⎤
⎢ ⋮                  ⋮                   ⋮               ⋮ ⎥
⎢ 0             ···  0              ···  0               0 ⎥
⎢ v_{n;n,0}     ···  v_{n;n,n+2}    ···  0               0 ⎥
⎢ ⋮                  ⋮              ⋱    ⋮               ⋮ ⎥
⎢ v_{n;2n−1,0}  ···  v_{n;2n−1,n+2} ···  v_{n;2n−1,2n+1} 0 ⎥
⎣ v_{n;2n,0}    ···  v_{n;2n,n+2}   ···  v_{n;2n,2n+1}   v_{n;2n,2n+2} ⎦ φ_{n+1}(z),
(7.2.16)

where φ_{n+1}(z) = (φ_{n+1;0}(z), φ_{n+1;1}(z), ..., φ_{n+1;2n+2}(z)). Let us construct a combination similar to (7.2.12): using the matrices from (7.2.14)–(7.2.16), we get a ((2n+3) × (2n+3))-matrix

A_n φ_{n+1}(z) =
⎡ ū_{n;0,0}   0           0           ···  0              0 ⎤
⎢ c_{n;0,0}   c_{n;0,1}   0           ···  0              0 ⎥
⎢ c_{n;1,0}   c_{n;1,1}   c_{n;1,2}   ···  0              0 ⎥
⎢ ⋮           ⋮           ⋮           ⋱    ⋮              ⋮ ⎥
⎢ c_{n;2n,0}  c_{n;2n,1}  c_{n;2n,2}  ···  c_{n;2n,2n+1}  0 ⎥
⎣ v_{n;2n,0}  v_{n;2n,1}  v_{n;2n,2}  ···  v_{n;2n,2n+1}  v_{n;2n,2n+2} ⎦ φ_{n+1}(z),
(7.2.17)

where φ_{n+1}(z) = (φ_{n+1;0}(z), φ_{n+1;1}(z), ..., φ_{n+1;2n+2}(z)). The matrix in (7.2.17) is lower triangular and has an inverse, since its elements on the main diagonal are positive (see (7.1.60), (7.1.61) and (7.1.63), (7.1.64)). Rewrite the equalities (7.2.7) in the form

u*_n φ_{n+1}(z) = e^{−iθ} φ_n(z) − v*_{n−1} φ_{n−1}(z) − w*_n φ_n(z),
c_n φ_{n+1}(z) = r φ_n(z) − a_{n−1} φ_{n−1}(z) − b_n φ_n(z),
v_n φ_{n+1}(z) = e^{iθ} φ_n(z) − u_{n−1} φ_{n−1}(z) − w_n φ_n(z),  n ∈ N,


i.e.,

u*_n φ_{n+1}(z) = (e^{−iθ} − w*_n) φ_n(z) − v*_{n−1} φ_{n−1}(z),
c_n φ_{n+1}(z) = (r − b_n) φ_n(z) − a_{n−1} φ_{n−1}(z),
v_n φ_{n+1}(z) = (e^{iθ} − w_n) φ_n(z) − u_{n−1} φ_{n−1}(z),  n ∈ N.
(7.2.18)

We see that the first scalar equation of (7.2.18) has the form

ū_{n;0,0} φ_{n+1,0}(z) = {(e^{−iθ} 1 − w*_n) Q_n(z) − v*_{n−1} Q_{n−1}(z)}_{n;0},

where "1" is the identity ((2n+1) × (2n+1))-matrix. Thus, for j = 0 we obtain

Q_{n+1,0}(z) := φ_{n+1,0}(z) = (1 / ū_{n;0,0}) {(e^{−iθ} 1 − w*_n) Q_n(z) − v*_{n−1} Q_{n−1}(z)}_{n;0}.

Hence, Q_{n+1,0}(z) = k_{n+1,0} e^{−i(n+1)θ} + .... For the case j = 1, we have

c_{n;0,0} φ_{n+1,0}(z) + c_{n;0,1} φ_{n+1,1}(z) = {(r 1 − b_n) Q_n(z) − a_{n−1} Q_{n−1}(z)}_{n;0},

Q_{n+1,1}(z) := φ_{n+1,1}(z) = (1 / c_{n;0,1}) ( −c_{n;0,0} Q_{n+1,0}(z) + {(r 1 − b_n) Q_n(z) − a_{n−1} Q_{n−1}(z)}_{n;0} ).

Hence, Q_{n+1,1}(z) = k_{n+1,1} r e^{−inθ} + .... For j = 2, 3, ..., 2n+1, we have

c_{n;j−1,0} φ_{n+1,0}(z) + c_{n;j−1,1} φ_{n+1,1}(z) + ··· + c_{n;j−1,j} φ_{n+1,j}(z) = {(r 1 − b_n) Q_n(z) − a_{n−1} Q_{n−1}(z)}_{n;j−1},

Q_{n+1,j}(z) := φ_{n+1,j}(z) = (1 / c_{n;j−1,j}) ( −c_{n;j−1,0} φ_{n+1,0}(z) − c_{n;j−1,1} φ_{n+1,1}(z) − ··· − c_{n;j−1,j−1} φ_{n+1,j−1}(z) + {(r 1 − b_n) Q_n(z) − a_{n−1} Q_{n−1}(z)}_{n;j−1} ).

Hence, Q_{n+1,j}(z) = k_{n+1,j} r^{n+1−|j−(n+1)|} e^{−i(n+1−j)θ} + .... And for the last equation from (7.2.18) (i.e., j = 2n+2), we have

v_{n;2n,0} φ_{n+1,0}(z) + v_{n;2n,1} φ_{n+1,1}(z) + ··· + v_{n;2n,2n+2} φ_{n+1,2n+2}(z) = {(e^{iθ} 1 − w_n) Q_n(z) − u_{n−1} Q_{n−1}(z)}_{n;2n},

Q_{n+1,2n+2}(z) := φ_{n+1,2n+2}(z) = (1 / v_{n;2n,2n+2}) ( −v_{n;2n,0} φ_{n+1,0}(z) − v_{n;2n,1} φ_{n+1,1}(z) − ··· − v_{n;2n,2n+1} φ_{n+1,2n+1}(z) + {(e^{iθ} 1 − w_n) Q_n(z) − u_{n−1} Q_{n−1}(z)}_{n;2n} ).

Hence, Q_{n+1,2n+2}(z) = k_{n+1,2n+2} r^{n+1−|(2n+2)−(n+1)|} e^{i(n+1)θ} + .... It is necessary to take into account that the diagonal elements ū_{n;0,0}, c_{n;0,1}, c_{n;1,2}, ..., c_{n;2n,2n+1}, v_{n;2n,2n+2} of the matrix A_n are positive due to (7.1.61) and (7.1.64). This completes the induction and finishes the proof. ⊓⊔

Remark 7.2.2 Note that we did not assert that a solution of the overdetermined system (7.2.7) exists for arbitrary initial data φ_0 ∈ C: we proved only that a generalized eigenvector from (l_fin)′ of the operators A and U is a solution of (7.2.7) and has the form (7.2.8) with (7.2.9).

In what follows, it will be convenient to regard Q_n(z), for fixed z, as a linear operator acting from H_0 into H_n, i.e., H_0 ∋ φ_0 ↦ Q_n(z) φ_0 ∈ H_n. We also regard Q_n(z) as an operator-valued polynomial of the variables z = r e^{iθ} ∈ C, i.e., of r, e^{iθ}, and e^{−iθ}; hence, for the adjoint operator we have Q*_n(z) = (Q_n(z))* : H_n → H_0. Using these polynomials Q_n(z), we construct a representation for o_{j,k}(z).

Lemma 7.2.3 The operator o_{j,k}(z), ∀z ∈ C, has the representation

o_{j,k}(z) = Q_j(z) o_{0,0}(z) Q*_k(z) : H_k → H_j,  j, k ∈ N_0,
(7.2.19)

where o_{0,0}(z) ≥ 0 is a scalar.

Proof For a fixed k ∈ N_0, the vector φ = φ(z) = (φ_j(z))_{j=0}^∞, where

φ_j(z) = o_{j,k}(z) = π_j o(z) π_k ∈ H_j,  z ∈ C,
(7.2.20)

is a generalized solution in (l_fin)′ of the system of equations

J_{U*} φ = e^{−iθ} φ,  J_A φ = r φ,  J_U φ = e^{iθ} φ,
(7.2.21)

since o(z) is a projector onto generalized eigenvectors of the operators A and U (U*) with corresponding generalized eigenvalues r, e^{iθ}, and (e^{−iθ}). Therefore, ∀g ∈ l_fin, we have

(φ, J_A g)_{l_2} = r (φ, g)_{l_2},  (φ, J_{U*} g)_{l_2} = e^{iθ} (φ, g)_{l_2},  (φ, J_U g)_{l_2} = e^{−iθ} (φ, g)_{l_2}.

Hence, it follows that φ = φ(z) ∈ l_2(p⁻¹) exists as a usual solution of the equations (7.2.21) with the initial condition φ_0 = π_0 o(z) π_k ∈ H_0.


By using Lemma 7.2.1 and due to (7.2.8), we obtain

o_{j,k}(z) = Q_j(z) (o_{0,k}(z)),  j ∈ N_0.
(7.2.22)

The operator o(z) : l_2(p) → l_2(p⁻¹) is formally self-adjoint on l_2, being the derivative of the resolution of the identity of the operators A and U (U*) on the space l_2 with respect to the spectral measure. Hence, according to (7.2.5), we get

(o_{j,k}(z))* = (π_j o(z) π_k)* = π_k o(z) π_j = o_{k,j}(z),  j, k ∈ N_0.
(7.2.23)

For a fixed j ∈ N_0, it follows from (7.2.23) and the previous considerations that the vector

ψ = ψ(z) = (ψ_k(z))_{k=0}^∞,  ψ_k(z) = o_{k,j}(z) = (o_{j,k}(z))*

is a usual solution of the equations (7.2.21) with the initial condition ψ_0 = o_{0,j}(z) = (o_{j,0}(z))*. Again using Lemma 7.2.1, we obtain a representation similar to (7.2.22), namely,

o_{k,j}(z) = Q_k(z) (o_{0,j}(z)),  k ∈ N_0.
(7.2.24)

Taking into account (7.2.23) and (7.2.24), we get

o_{0,k}(z) = (o_{k,0}(z))* = (Q_k(z) o_{0,0}(z))* = o_{0,0}(z) (Q_k(z))*,  k ∈ N_0
(7.2.25)

(here we used o_{0,0}(z) ≥ 0; this inequality follows from (7.2.4) and (7.2.5)). Substituting (7.2.25) into (7.2.22), we get (7.2.19). ⊓⊔

(here we used .o0,0 (z) ≥ 0, this inequality follows from (7.2.4) and (7.2.5)). Substituting (7.2.25) into (7.2.22) we get (7.2.19). u n Now we can rewrite the Parseval equality (7.2.6) in a refined form. To do this, we substitute the expression (7.2.19) for .oj,k (z) into (7.2.6) and get (f, g)l2 =

∞  

(oj,k (z)fk , gj )l2 dσ (z)

j,k=0 C

=

∞  

(Qj (z)o0,0 (z)Q∗k (z)fk , gj )l2 dσ (z)

j,k=0 C .

=

∞  

(7.2.26)

(Q∗k (z)fk , Q∗j (z)gj )l2 dρ(z)

j,k=0 C

=

  ∞ C

Q∗k (z)fk

k=0

dρ(z) = o0,0 (z) dσ (z).

  ∞ j =0

 Q∗j (z)gj dρ(z),

∀f, g ∈ lfin ,


Let us write the Fourier transform "ˆ" generated by the bounded self-adjoint operator A commuting with the unitary operator U in the space l_2, namely,

l_2 ⊃ l_fin ∋ f = (f_n)_{n=0}^∞ ↦ f̂(z) = Σ_{n=0}^∞ Q*_n(z) f_n ∈ L_2(C, dρ(z)).
(7.2.27)

Hence, (7.2.26) gives the Parseval equality in a final form:

(f, g)_{l_2} = ∫_C f̂(z) ĝ(z) dρ(z),  ∀f, g ∈ l_fin.
(7.2.28)

Extending (7.2.28) by continuity, it becomes valid ∀f, g ∈ l_2. The orthogonality of the polynomials Q*_n(z) follows from (7.2.27) and (7.2.28): it is sufficient to take

f = (0, ..., 0, f_k, 0, ...),  f_k ∈ H_k,  g = (0, ..., 0, g_j, 0, ...),  g_j ∈ H_j

in (7.2.27) and (7.2.28). Then we have

∫_C (Q*_k(z) f_k)(Q*_j(z) g_j) dρ(z) = δ_{j,k} (f_j, g_j)_{H_j},  ∀k, j ∈ N_0.
(7.2.29)

By using the representation (7.2.8) for these polynomials, we can rewrite the equality (7.2.29) in the (standard) classical scalar form. To do this, we remark that Q*_0(z) = Q̄_0(z) and, according to (7.2.8),

Q_n(z) = (Q_{n;0}(z), Q_{n;1}(z), ..., Q_{n;2n}(z)) : H_0 → H_n,  n ∈ N.

Hence, for the adjoint operator Q*_n(z) : H_n → H_0, we have

(Q_n(z) x, y)_{H_n} = ((Q_{n;0}(z) x, Q_{n;1}(z) x, ..., Q_{n;2n}(z) x), (y_0, y_1, ..., y_{2n}))_{H_n}
 = Q_{n;0}(z) x ȳ_0 + Q_{n;1}(z) x ȳ_1 + ··· + Q_{n;2n}(z) x ȳ_{2n}
 = x (Q_{n;0}(z) y_0 + Q_{n;1}(z) y_1 + ··· + Q_{n;2n}(z) y_{2n}) = (x, Q*_n(z) y)_{H_0},
∀x ∈ H_0,  y = (y_0, y_1, ..., y_{2n}) ∈ H_n,

that is, Q*_n(z) y = Q_{n;0}(z) y_0 + Q_{n;1}(z) y_1 + ··· + Q_{n;2n}(z) y_{2n}. Due to the last equality, for n ∈ N and f_n = (f_{n,0}, f_{n,1}, ..., f_{n,2n}) ∈ H_n, z ∈ C, we obtain

Q*_n(z) f_n = Q_{n;0}(z) f_{n;0} + Q_{n;1}(z) f_{n;1} + ··· + Q_{n;2n}(z) f_{n;2n},  Q*_0(z) = 1.
(7.2.30)


Therefore, (7.2.29) has the form

∫_C ( Σ_{α=0}^{2k} Q_{k;α}(z) f_{k;α} ) ( Σ_{β=0}^{2j} Q_{j;β}(z) g_{j;β} ) dρ(z) = δ_{j,k} Σ_{α=0}^{2j} f_{j;α} ḡ_{j;α},

∀f_{k;0}, f_{k;1}, ..., f_{k;2k}, g_{j;0}, g_{j;1}, ..., g_{j;2j} ∈ C, j, k ∈ N_0. This equality is equivalent to the orthogonality relation in the usual classical form

∫_C Q*_{k;β}(z) Q_{j;α}(z) dρ(z) = δ_{j,k} δ_{α,β},  Q_{0;0}(z) = Q_0(z),
(7.2.31)

∀j, k ∈ N_0,  ∀α = 0, 1, ..., 2j,  β = 0, 1, ..., 2k.

We note that, due to (7.2.30), the Fourier transform (7.2.27) can be rewritten as

f̂(z) = Σ_{n=0}^∞ Σ_{α=0}^{2n} Q_{n;α}(z) f_{n;α},  z ∈ C,  ∀f = (f_n)_{n=0}^∞ ∈ l_2.
(7.2.32)

By using the above results of this section, we can formulate the spectral theorem for our commuting bounded self-adjoint operator A and unitary operator U.

Theorem 7.2.4 Consider the space (7.1.7), namely,

l_2 = H_0 ⊕ H_1 ⊕ H_2 ⊕ ···,  H_n = C^{2n+1},  n ∈ N_0,
(7.2.33)

and commuting linear operators A and U defined on finite vectors l_fin by block tri-diagonal Jacobi type matrices J_A of the form (7.1.59) and J_U of the form (7.1.62) via the expressions in (7.1.65). Suppose that all their elements a_n, b_n, c_n and u_n, w_n, v_n, n ∈ N_0, are uniformly bounded, that some elements of these matrices are always equal to zero or positive according to (7.1.60), (7.1.61) and (7.1.63), (7.1.64), and that the extension of A by continuity is a bounded self-adjoint operator while the extension of U by continuity is a unitary operator. The generalized eigenfunction expansion of the operators A and U has the following form. According to Lemma 7.2.1, using φ_0 ∈ C, we represent the solution φ(z) = (φ_n(z))_{n=0}^∞, φ_n(z) ∈ H_n, of the system (7.2.7) (which exists due to the projection spectral theorem), with z = r e^{iθ} ∈ C, as

φ_n(z) = Q_n(z) φ_0 = (Q_{n;0}(z), Q_{n;1}(z), ..., Q_{n;2n}(z)) φ_0,


where Q_{n;α}(z), α = 0, 1, ..., 2n, are polynomials of the variables r, e^{iθ}, and e^{−iθ}. Then the Fourier transform has the form

l_2 ⊃ l_fin ∋ f = (f_n)_{n=0}^∞ ↦ f̂(z) = Σ_{n=0}^∞ Q*_n(z) f_n = Σ_{n=0}^∞ Σ_{α=0}^{2n} Q_{n;α}(z) f_{n;α} ∈ L_2(C, dρ(z)).
(7.2.34)

Here Q*_n(z) : H_n → H_0 is adjoint to the operator Q_n(z) : H_0 → H_n, and dρ(z) is the probability spectral measure of A and U. The Parseval equality, for all f, g ∈ l_fin, has the form

(f, g)_{l_2} = ∫_C f̂(z) ĝ(z) dρ(z);
(J_A f, g)_{l_2} = ∫_C r f̂(z) ĝ(z) dρ(z);
(J_U f, g)_{l_2} = ∫_C e^{iθ} f̂(z) ĝ(z) dρ(z);
(J_{U*} f, g)_{l_2} = ∫_C e^{−iθ} f̂(z) ĝ(z) dρ(z).
(7.2.35)

The transform (7.2.34) and the equalities (7.2.35) are extended by continuity to all f, g ∈ l_2, and the operator (7.2.34) becomes an isometry mapping the whole of l_2 onto the whole of L_2(C, dρ(z)). The polynomials Q_{n;α}(z), n ∈ N, α = 0, 1, ..., 2n, together with Q_{0;0}(z) = 1, form an orthonormal system in L_2(C, dρ(z)) in the sense of (7.2.31), and this system is total in this space.

Proof It is only necessary to show that the orthogonal polynomials Q_{n;α}(z), n ∈ N, α = 0, 1, ..., 2n, and Q_{0;0}(z) = 1 form a total set in the space L_2(C, dρ(z)). To this end, we remark first that, due to the compactness of the support of the measure dρ(z) on C, the elements r^t e^{ijθ}, t ∈ N_0, j ∈ Z, form a total set in L_2(C, dρ(z)). Suppose the contrary, i.e., that our system of polynomials is not total. Then there exists a non-zero function h(z) ∈ L_2(C, dρ(z)) that is orthogonal to all these polynomials and hence, according to (7.2.9), orthogonal to all r^t e^{ijθ}, t ∈ N_0, j ∈ Z. Then h(z) = 0, which is a contradiction. ⊓⊔
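The totality argument rests on the leading terms in (7.2.9): as n runs over N_0 and α over 0, ..., 2n, the leading monomials r^{n−|α−n|} e^{i(α−n)θ} sweep out the whole family r^t e^{ijθ}, t ∈ N_0, j ∈ Z, each pair (t, j) occurring for exactly one n (namely n = t + |j|). A small sketch of this bookkeeping (my own illustration, not from the book):

```python
def leading_exponents(n):
    # (7.2.9): Q_{n;alpha} has leading monomial r^{n-|alpha-n|} e^{i(alpha-n)theta};
    # return the exponent pairs (t, j) for alpha = 0, ..., 2n
    return [(n - abs(a - n), a - n) for a in range(2 * n + 1)]

# each pair (t, j) with t in N_0, j in Z appears for exactly one n,
# so the leading terms exhaust the family r^t e^{ij theta}
seen = {}
for n in range(6):
    for t, j in leading_exponents(n):
        assert (t, j) not in seen      # no pair is produced twice
        seen[(t, j)] = n
assert seen[(0, -2)] == 2 and seen[(3, 0)] == 3
```

For each fixed n the pairs satisfy t + |j| = n, which is why distinct levels n contribute disjoint monomials.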


The last theorem solves the direct problem for the bounded self-adjoint operator A commuting with the unitary operator U, the two being generated on the space l_2 by the matrices J_A of the form (7.1.59) and J_U of the form (7.1.62). The inverse problem consists in constructing, from a given measure dρ(z) on C with compact support, bounded matrices J_A of the form (7.1.59) and J_U of the form (7.1.62), with J_A commuting with the unitary matrix J_U, such that their spectral measure coincides with dρ(z). This construction is carried out according to Theorem 7.1.8, using the Gram-Schmidt orthogonalization procedure applied to the system (7.1.2). For the matrices J_A of the form (7.1.59) and J_U of the form (7.1.62) constructed from dρ(z), the spectral measure of the corresponding bounded self-adjoint operator A and unitary operator U coincides with the initial (starting) measure.

Proof This holds true since the system of orthogonal polynomials associated with A, Q_{n;α}(z), α = 0, 1, ..., 2n, n ∈ N_0, is orthonormal in L_2(C, dρ(z)) and, according to Lemma 7.2.1, is constructed from r^t e^{ijθ}, t ∈ N_0, j ∈ Z, in the same way as the system (7.1.3) is constructed from r^t e^{ijθ}, t ∈ N_0, j ∈ Z. Hence,

Q_0(z) = 1 = P_0(z),  Q_{n;α}(z) = P_{n;α}(z),  α = 0, 1, ..., 2n,  ∀n ∈ N.
(7.2.36)

Since both systems of polynomials form a total set in L_2(C, dρ(z)), (7.2.36) shows that the spectral measure of the constructed operators coincides with the given one. ⊓⊔

Note that the expressions (7.1.19) and (7.1.36) (as is known in the classical theory of Jacobi matrices) restore the original matrices (7.1.59) and (7.1.62) from the spectral measure dρ(z) of the operators generated by J_A and J_U in l_2.

7.3 Conditions of Unitarity and Commutativity of Matrices Corresponding to the Complex Moment Problem in the Exponential Form

We will find conditions which guarantee that the matrix J_U of the type (7.1.62) is a unitary operator and commutes with the matrix J_A of the type (7.1.59), the latter being a bounded Hermitian operator. The formally adjoint matrix J_{U*} has the form

J_{U*} =
⎡ w*_0  u*_0  0     0     ··· ⎤
⎢ v*_0  w*_1  u*_1  0     ··· ⎥
⎢ 0     v*_1  w*_2  u*_2  ··· ⎥
⎣ ⋮     ⋮     ⋮     ⋮     ⋱  ⎦ ,

v*_n : H_n → H_{n+1},  w*_n : H_n → H_n,  u*_n : H_{n+1} → H_n,  n ∈ N_0.
(7.3.1)


Multiplying the matrices (7.1.62) and (7.3.1), we get

J_U J_{U*} =
⎡ w_0 w*_0 + v_0 v*_0   w_0 u*_0 + v_0 w*_1            v_0 u*_1                       0                    ··· ⎤
⎢ u_0 w*_0 + w_1 v*_0   u_0 u*_0 + w_1 w*_1 + v_1 v*_1 w_1 u*_1 + v_1 w*_2            v_1 u*_2             ··· ⎥
⎢ u_1 v*_0              u_1 w*_1 + w_2 v*_1            u_1 u*_1 + w_2 w*_2 + v_2 v*_2 w_2 u*_2 + v_2 w*_3  ··· ⎥
⎣ ⋮                     ⋮                              ⋮                              ⋮                    ⋱  ⎦ .
(7.3.2)

The expression for J_{U*} J_U is similar, with u_n, w_n, v_n in (7.3.2) replaced by v*_n, w*_n, u*_n and vice versa. Comparing J_U J_{U*} and J_{U*} J_U, we find that the equality J_U J_{U*} = J_{U*} J_U is equivalent to the system of equations

v_0 v*_0 = u*_0 u_0;
v_n u*_{n+1} = u*_n v_{n+1},
w_n u*_n + v_n w*_{n+1} = w*_n v_n + u*_n w_{n+1},
u_n u*_n + w_{n+1} w*_{n+1} + v_{n+1} v*_{n+1} = v*_n v_n + w*_{n+1} w_{n+1} + u*_{n+1} u_{n+1},  n ∈ N_0.
(7.3.3)

Here we take into account that w_0 is a scalar, w*_0 = w̄_0. Note that the necessity of the equalities

u_n w*_n + w_{n+1} v*_n = v*_n w_n + w*_{n+1} u_n,  u_{n+1} v*_n = v*_{n+1} u_n,  n ∈ N_0,

follows from the third and second equalities of (7.3.3) by passing to adjoint matrices "∗". Thus, the conditions (7.3.3) are necessary and sufficient for the matrix equality J_U J_{U*} = J_{U*} J_U. The norms of the operators u_n, w_n, and v_n are uniformly bounded for all n ∈ N_0; therefore, the operator J_U is bounded on l_2. The validity of (7.3.3) gives the unitarity of this operator. Taking initial matrices u_0, w_0, v_0 and finding from (7.3.3) step by step u_1, w_1, v_1; u_2, w_2, v_2; etc. (in a non-unique manner), we construct some unitary matrix J_U.

Multiplying the matrices (7.1.59) and (7.1.62), we get

J_A J_U =
⎡ b_0 w_0 + c_0 u_0   b_0 v_0 + c_0 w_1             c_0 v_1                        0                  ··· ⎤
⎢ a_0 w_0 + b_1 u_0   a_0 v_0 + b_1 w_1 + c_1 u_1   b_1 v_1 + c_1 w_2              c_1 v_2            ··· ⎥
⎢ a_1 u_0             a_1 w_1 + b_2 u_1             a_1 v_1 + b_2 w_2 + c_2 u_2    b_2 v_2 + c_2 w_3  ··· ⎥
⎣ ⋮                   ⋮                             ⋮                              ⋮                  ⋱  ⎦ .
(7.3.4)


The expression for J_U J_A is similar to (7.3.4), with u_n, w_n, v_n changed to a_n, b_n, c_n and vice versa:

J_U J_A =
⎡ w_0 b_0 + v_0 a_0   w_0 c_0 + v_0 b_1             v_0 c_1                        0                  ··· ⎤
⎢ u_0 b_0 + w_1 a_0   u_0 c_0 + w_1 b_1 + v_1 a_1   w_1 c_1 + v_1 b_2              v_1 c_2            ··· ⎥
⎢ u_1 a_0             u_1 b_1 + w_2 a_1             u_1 c_1 + w_2 b_2 + v_2 a_2    w_2 c_2 + v_2 b_3  ··· ⎥
⎣ ⋮                   ⋮                             ⋮                              ⋮                  ⋱  ⎦ .
(7.3.5)

Comparing J_A J_U and J_U J_A, we find that the matrix equality J_U J_A = J_A J_U is equivalent to the system of equalities

c_0 u_0 = v_0 a_0;
c_n v_{n+1} = v_n c_{n+1},
a_{n+1} u_n = u_{n+1} a_n,
b_n v_n + c_n w_{n+1} = w_n c_n + v_n b_{n+1},
a_n w_n + b_{n+1} u_n = u_n b_n + w_{n+1} a_n,
a_n v_n + b_{n+1} w_{n+1} + c_{n+1} u_{n+1} = u_n c_n + w_{n+1} b_{n+1} + v_{n+1} a_{n+1},  n ∈ N_0.
(7.3.6)

Here we take into account that w_0 and b_0 are scalars and b_0 = b̄_0. The necessity of these equalities is seen as above, by passing to adjoint matrices. Thus, the conditions (7.3.6) are necessary and sufficient for the matrix equality J_U J_A = J_A J_U. The norms of the operators a_n, b_n, and c_n are uniformly bounded for all n ∈ N_0, and the operator J_A is bounded on l_2. The conditions (7.3.6) give the mutual commutativity of the operators A and U. Taking a_0, b_0, c_0 as initial matrices and using the matrix J_U and (7.3.6), we find step by step a_1, b_1, c_1; a_2, b_2, c_2; etc. (in a non-unique manner). In this way we can construct some matrix J_A which commutes with J_U.

Remark 7.3.1 Finding matrices u_n, w_n, v_n, n ∈ N_0, which are solutions of the equations (7.3.3), (7.3.6) and such that u_n, v_n, a_n, c_n have the forms required in (7.1.59), (7.1.62), is a rather difficult problem. Therefore, only a partial case is considered below.

Suppose that the matrices w_n are Hermitian: w*_n = w_n, n ∈ N_0. Then the conditions (7.3.3) take the form

v_0 v*_0 = u*_0 u_0;  v_n u*_{n+1} = 0,
w_n (u*_n − v_n) = (u*_n − v_n) w_{n+1},
u_n u*_n + v_{n+1} v*_{n+1} = v*_n v_n + u*_{n+1} u_{n+1},  n ∈ N_0.
(7.3.7)


In what follows, suppose that all the matrices a_n, c_n, n ∈ N_0, have the form in which the elements

a_{n;1,0}, a_{n;2,1}, ..., a_{n;2n+1,2n},  c_{n;0,1}, c_{n;1,2}, ..., c_{n;2n,2n+1},  ∀n ∈ N_0,

are positive and all other elements are equal to zero. Thus, our matrices have the form

a_n =
⎡ 0          0          ···  0              ⎤
⎢ a_{n;1,0}  0          ···  0              ⎥
⎢ 0          a_{n;2,1}  ···  0              ⎥
⎢ ⋮          ⋮          ⋱    ⋮              ⎥   ((2n+3) × (2n+1)),
⎢ 0          0          ···  a_{n;2n+1,2n}  ⎥
⎣ 0          0          ···  0              ⎦
(7.3.8)

c_n =
⎡ 0  c_{n;0,1}  0          ···  0              0              0 ⎤
⎢ 0  0          c_{n;1,2}  ···  0              0              0 ⎥
⎢ ⋮  ⋮          ⋮          ⋱    ⋮              ⋮              ⋮ ⎥   ((2n+1) × (2n+3)),
⎢ 0  0          0          ···  c_{n;2n−1,2n}  0              0 ⎥
⎣ 0  0          0          ···  0              c_{n;2n,2n+1}  0 ⎦
(7.3.9)

a_{n;1,0}, a_{n;2,1}, ..., a_{n;2n+1,2n} > 0,  c_{n;0,1}, c_{n;1,2}, ..., c_{n;2n,2n+1} > 0,  a_{n;i,j} = c_{n;j,i},  n, i, j ∈ N_0.
(7.3.10)

Suppose also that all the matrices u_n, n ∈ N_0, have the form in which the elements u_{n;0,0}, u_{n;1,1}, ..., u_{n;n,n}, ∀n ∈ N_0, are positive and all other elements are equal to zero. Thus, u_n is the ((2n+3) × (2n+1))-matrix

u_n =
⎡ u_{n;0,0}  0          ···  0          0  ···  0 ⎤
⎢ 0          u_{n;1,1}  ···  0          0  ···  0 ⎥
⎢ ⋮          ⋮          ⋱    ⋮          ⋮       ⋮ ⎥
⎢ 0          0          ···  u_{n;n,n}  0  ···  0 ⎥
⎢ 0          0          ···  0          0  ···  0 ⎥
⎣ 0          0          ···  0          0  ···  0 ⎦
(7.3.11)

(the first n+1 rows carry the diagonal entries; the last n+2 rows are zero).


Similarly, v_n is the ((2n+1) × (2n+3))-matrix

v_n =
⎡ 0  ···  0  0            0              ···  0                0 ⎤
⎢ ⋮       ⋮  ⋮            ⋮                   ⋮                ⋮ ⎥
⎢ 0  ···  0  0            0              ···  0                0 ⎥
⎢ 0  ···  0  v_{n;n,n+2}  0              ···  0                0 ⎥
⎢ 0  ···  0  0            v_{n;n+1,n+3}  ···  0                0 ⎥
⎢ ⋮       ⋮  ⋮            ⋮              ⋱    ⋮                ⋮ ⎥
⎢ 0  ···  0  0            0              ···  v_{n;2n−1,2n+1}  0 ⎥
⎣ 0  ···  0  0            0              ···  0                v_{n;2n,2n+2} ⎦ ,
(7.3.12)

i.e., its first n rows are zero and its only possibly non-zero entries are v_{n;α,α+2}, α = n, n+1, ..., 2n.

By multiplying matrices of the types (7.3.8)–(7.3.10), (7.3.11), and (7.3.12), it is possible to rewrite the first, second, and fourth equalities from (7.3.7) as the corresponding equalities for matrix elements, namely, c_{0;0,1} = a_{0;0,0} and, ∀n ∈ N_0,

u²_{n;0,0} = u²_{n+1;0,0},            v²_{n;n,n+2} = v²_{n+1;n+2,n+4},
u²_{n;1,1} = u²_{n+1;1,1},            v²_{n;n+1,n+3} = v²_{n+1;n+3,n+5},
...                                   ...
u²_{n;n−1,n−1} = u²_{n+1;n−1,n−1},    v²_{n;2n−1,2n+1} = v²_{n+1;2n+1,2n+3},
u²_{n;n,n} = u²_{n+1;n,n};            v²_{n;2n,2n+2} = v²_{n+1;2n+2,2n+4},

v²_{n+1;n+1,n+3} = u²_{n+1;n+1,n+1}.
(7.3.13)

The system of equalities (7.3.13) is equivalent to the system (7.3.3) in the case w_n = 0, n ∈ N_0. Solving this system step by step gives

u_{n;0,0} = u_{n;1,1} = ··· = u_{n;n,n} = 1,  v_{n;n,n+2} = v_{n;n+1,n+3} = ··· = v_{n;2n,2n+2} = 1.

In this case, namely w_n = 0, n ∈ N_0, with a_n and c_n, n ∈ N_0, of the form (7.3.8)–(7.3.10), the system (7.3.6) takes the form

c_0 u_0 = v_0 a_0;  c_n v_{n+1} = v_n c_{n+1},  a_{n+1} u_n = u_{n+1} a_n,
a_n v_n + c_{n+1} u_{n+1} = u_n c_n + v_{n+1} a_{n+1},  n ∈ N_0.
(7.3.14)

The first equality in (7.3.14) obviously holds true. The second equality in (7.3.14) gives c_{n+1;j+2,j+3} = c_{n;j,j+1}, j = n, n+1, ..., 2n. The third equality in (7.3.14) gives a_{n+1;j+1,j} = a_{n;j+1,j}, j = 0, 1, ..., n. The fourth equality in (7.3.14) follows from the second and third ones. Using these relations, we redefine

c_{n;j,j+1} = α_{n−|n−j|},  j = 0, 1, ..., 2n,  n ∈ N_0.
(7.3.15)

Hence, the matrix $c_n$ has the form

$$
c_n=\begin{bmatrix}
0&\alpha_0&0&0&\cdots&0&0&0&0\\
0&0&\alpha_1&0&\cdots&0&0&0&0\\
0&0&0&\alpha_2&\cdots&0&0&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\
0&0&0&0&\cdots&\alpha_2&0&0&0\\
0&0&0&0&\cdots&0&\alpha_1&0&0\\
0&0&0&0&\cdots&0&0&\alpha_0&0
\end{bmatrix},
\tag{7.3.16}
$$

i.e., its only non-zero entries are $c_{n;j,j+1}=\alpha_{n-|n-j|}$, $j=0,1,\dots,2n$, on the first superdiagonal, with $\alpha_n$ in the middle.

The matrices $a_n$ have a form symmetric to that of $c_n$.

Example Take for (7.3.16) a sequence $\{\alpha_n\}_{n=0}^\infty$ such that $\alpha_n<c$ for some fixed number $c>0$. In this case, all the matrices $a_n$ and $c_n$, as operators (see (7.1.32)), are uniformly bounded, and the matrix $J_A$ uniquely defines a bounded self-adjoint operator. If the sequence $\{\alpha_n\}_{n=0}^\infty$ is not bounded, then we obtain formally commuting matrices $J_A$ and $J_U$, but the matrix $J_A$ may not uniquely define a self-adjoint operator.
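The pattern (7.3.15)–(7.3.16) is easy to generate mechanically. The following sketch (the helper name `c_block` is illustrative, not from the text) builds the block $c_n$ from a given sequence $(\alpha_k)$ and illustrates the boundedness claim of the Example: since $c_n$ has at most one non-zero entry per row and per column, its operator norm equals $\max_{k\le n}\alpha_k$.

```python
import numpy as np

def c_block(n, alpha):
    """Sketch of the (2n+1) x (2n+3) block c_n of (7.3.16): the only non-zero
    entries sit on the first superdiagonal, c_{n; j, j+1} = alpha_{n - |n - j|}."""
    c = np.zeros((2 * n + 1, 2 * n + 3))
    for j in range(2 * n + 1):
        c[j, j + 1] = alpha[n - abs(n - j)]
    return c

alpha = [1.0, 2.0, 3.0]                      # a bounded sequence, alpha_k < c
c2 = c_block(2, alpha)
print([c2[j, j + 1] for j in range(5)])      # superdiagonal: a0, a1, a2, a1, a0
print(np.linalg.norm(c2, 2))                 # operator norm = max alpha_k
```

For a bounded sequence all blocks are uniformly bounded in norm, in line with the Example.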

7.4 The Solution of the Complex Moment Problem in the Exponential Form

Let $\mathcal H$ be a separable Hilbert space, let A be a self-adjoint operator defined on $D(A)$ in $\mathcal H$, and let U be a unitary operator commuting with A in the strong resolvent sense. Consider an equipment of $\mathcal H$:

$$
\mathcal H_-\supset\mathcal H\supset\mathcal H_+\supset D
\tag{7.4.1}
$$

such that $\mathcal H_+$ is a Hilbert space topologically and quasi-nuclearly embedded into $\mathcal H$ (topologically means densely and continuously; quasi-nuclear means that the embedding operator is of Hilbert–Schmidt type); $\mathcal H_-$ is the space dual to $\mathcal H_+$ with respect to the space $\mathcal H$; $D$ is a linear topological space, topologically embedded into $\mathcal H_+$.


The operators A and U are called standardly connected with the chain (7.4.1) if $D\subset D(A)$ and the restrictions $A\restriction D$, $U\restriction D$ act continuously from $D$ into $\mathcal H_+$. Recall that a vector $\omega_0\in D$ is called a strong cyclic vector of the operators A and U if for $p\in\mathbb N_0$, $q\in\mathbb Z$ we have $\omega_0\in D(U^qA^p)$, $U^qA^p\omega_0\in D$, and the set of all these vectors together with $\omega_0$ is total in the space $\mathcal H_+$ (and, hence, also in $\mathcal H$). Assuming that a strong cyclic vector exists, we formulate a short variant of the projection spectral theorem.

Theorem 7.4.1 For the self-adjoint operator A commuting in the strong resolvent sense with the unitary operator U with a strong cyclic vector in the separable Hilbert space $\mathcal H$, there exists a non-negative finite Borel measure $d\rho(z)$ such that for $\rho$-almost every $z\in\mathbb C$ there exists a generalized joint eigenvector $\xi_z\in\mathcal H_-$, $\xi_z\neq0$, i.e.,

$$
(\xi_z,Af)_{\mathcal H}=r(\xi_z,f)_{\mathcal H},\qquad
(\xi_z,Uf)_{\mathcal H}=e^{-i\theta}(\xi_z,f)_{\mathcal H},\qquad f\in D,
$$

where $z=re^{i\theta}$ is a two-parameter ($r$ and $\theta$) eigenvalue. The corresponding Fourier transform F is given by the rule

$$
\mathcal H\supset\mathcal H_+\ni f\mapsto(Ff)(z)=\hat f(z)=(f,\xi_z)_{\mathcal H}\in L_2(\mathbb C,d\rho(z));
$$

it is an isometric operator (after an extension by continuity) acting from $\mathcal H$ into $L_2(\mathbb C,d\rho(z))$. The images of the operators A and U under the transformation F are the operators of multiplication by $r$ and by $e^{i\theta}$, respectively, in the space $L_2(\mathbb C,d\rho(z))$.

We remark that the notation $(\cdot,\cdot)_{\mathcal H}$ denotes the same pairing as in Chap. 2, Sect. 2.7.

Theorem 7.4.2 The sequence of complex numbers $(c_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, has the representation

$$
c_{t,j}=\int_0^\infty\!\!\int_0^{2\pi}r^te^{ij\theta}\,d\rho(r,\theta),\qquad t\in\mathbb N_0,\ j\in\mathbb Z,
\tag{7.4.2}
$$

if and only if it is positive definite, i.e., satisfies the two inequalities

$$
\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\bar f_{q,k}\,c_{t+q,j-k}\ge0
\tag{7.4.3}
$$

and

$$
\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\bar f_{q,k}\,c_{t+q+1,j-k}\ge0
\tag{7.4.4}
$$

for all finite sequences of complex numbers $(f_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, $f_{t,j}\in\mathbb C$.


In addition to (7.4.3) and (7.4.4), the representation (7.4.2) is unique if

$$
\sum_{p=1}^\infty\frac{1}{\sqrt[2p]{|c_{2p,0}|}}=\infty.
\tag{7.4.5}
$$
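The conditions (7.4.3)–(7.4.4) can be checked numerically on truncations. The sketch below assumes a hypothetical discrete measure (all names are illustrative): the moments are built via (7.4.2), and the two truncated Gram matrices are verified to be Hermitian and positive semi-definite.

```python
import numpy as np

# Hypothetical discrete measure: point masses ws at z_m = rs * exp(i * thetas).
rs     = np.array([0.5, 1.0, 1.5])
thetas = np.array([0.3, 1.2, 2.5])
ws     = np.array([0.2, 0.5, 0.3])

def moment(t, j):
    # c_{t,j} = int r^t e^{i j theta} d rho(r, theta), as in (7.4.2)
    return np.sum(ws * rs**t * np.exp(1j * j * thetas))

T, J = 3, 2
idx = [(t, j) for t in range(T + 1) for j in range(-J, J + 1)]

def gram(shift):
    # K[(t,j),(q,k)] = c_{t+q+shift, j-k}; shift=0 gives (7.4.3), shift=1 gives (7.4.4)
    return np.array([[moment(t + q + shift, j - k) for (q, k) in idx]
                     for (t, j) in idx])

for shift in (0, 1):
    K = gram(shift)
    print(np.allclose(K, K.conj().T), np.linalg.eigvalsh(K).min() >= -1e-10)
```

Both matrices come out Hermitian and non-negative definite, as the necessity part of the proof below shows they must.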

Proof The necessity of the condition (7.4.3) is obvious. Indeed, if the sequence $(c_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, has the representation (7.4.2), then for an arbitrary finite sequence $f=(f_{t,j})$, $f_{t,j}\in\mathbb C$, we have

$$
\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\bar f_{q,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\bar f_{q,k}\int_0^\infty\!\!\int_0^{2\pi}r^{t+q}e^{i(j-k)\theta}\,d\rho(r,\theta)
=\int_0^\infty\!\!\int_0^{2\pi}\Bigl(\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}f_{t,j}\,r^te^{ij\theta}\Bigr)\overline{\Bigl(\sum_{q\in\mathbb N_0}\sum_{k\in\mathbb Z}f_{q,k}\,r^qe^{ik\theta}\Bigr)}\,d\rho(r,\theta)\ge0.
$$

Hence, we get (7.4.3). Let us denote by $l$ the linear space $\mathbb C^\infty$ of sequences $(f_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, $f_{t,j}\in\mathbb C$, and by $l_{\mathrm{fin}}$ its linear subset consisting of finite sequences $f$, i.e., sequences such that $f_{t,j}\neq0$ only for finitely many $t$ and $j$. Let $\delta_{t,j}$, $t\in\mathbb N_0$, $j\in\mathbb Z$, be the $\delta$-sequences, so that each $f\in l_{\mathrm{fin}}$ has the representation $f=\sum_{t\in\mathbb N_0,\,j\in\mathbb Z}f_{t,j}\delta_{t,j}$.

Let us consider the linear operators defined on $l_{\mathrm{fin}}$ by the expressions

$$
(Af)_{t,j}=f_{t-1,j},\qquad (Uf)_{t,j}=f_{t,j-1},\qquad t\in\mathbb N_0,\ j\in\mathbb Z,
\tag{7.4.6}
$$

where we always put $f_{-1,j}\equiv0$. The operators A and U are of "creation" type. For the $\delta$-sequences we get

$$
A\delta_{t,j}=\delta_{t+1,j},\qquad U\delta_{t,j}=\delta_{t,j+1},\qquad t\in\mathbb N_0,\ j\in\mathbb Z.
\tag{7.4.7}
$$
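A minimal sketch of the shift action (7.4.6)–(7.4.7) on finitely supported sequences, represented as dictionaries from index pairs to values (the helper names are illustrative):

```python
# "Creation"-type operators (7.4.6) on finitely supported sequences
# f = {(t, j): value}; the convention f_{-1, j} = 0 is automatic here,
# because shifting the keys never produces a negative first index.
def A(f):
    return {(t + 1, j): v for (t, j), v in f.items()}   # (Af)_{t,j} = f_{t-1,j}

def U(f):
    return {(t, j + 1): v for (t, j), v in f.items()}   # (Uf)_{t,j} = f_{t,j-1}

delta = {(2, -1): 1.0}        # the delta-sequence delta_{2,-1}
print(A(delta), U(delta))     # (7.4.7): A shifts t up, U shifts j up
```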

The operator A is positive Hermitian and U is isometric with respect to the (quasi-)scalar product

$$
(f,g)_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\bar g_{q,k}\,c_{t+q,j-k},\qquad f,g\in l_{\mathrm{fin}}.
\tag{7.4.8}
$$


Indeed, for all $f,g\in l_{\mathrm{fin}}$ we have

$$
(Af,g)_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}(Af)_{t,j}\,\bar g_{q,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t-1,j}\,\bar g_{q,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\bar g_{q,k}\,c_{t+q+1,j-k}
$$

and

$$
(f,Ag)_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\overline{(Ag)_{q,k}}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\bar g_{q-1,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\bar g_{q,k}\,c_{t+q+1,j-k}.
$$

Hence, $(Af,g)_C=(f,Ag)_C$ for all $f,g\in l_{\mathrm{fin}}$. We prove the positivity of A taking into account the condition (7.4.4), namely,

$$
(Af,f)_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}(Af)_{t,j}\,\bar f_{q,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t-1,j}\,\bar f_{q,k}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\,\bar f_{q,k}\,c_{t+q+1,j-k}\ge0.
$$


For the operator U, we have

$$
(Uf,Ug)_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}(Uf)_{t,j}\,\overline{(Ug)_{q,k}}\,c_{t+q,j-k}
=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j-1}\,\bar g_{q,k-1}\,c_{t+q,j-k}=(f,g)_C.
$$
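The two computations above can be replayed numerically. A sketch, assuming a hypothetical discrete measure and illustrative helper names, checking that A is Hermitian and U isometric with respect to the quasi-scalar product (7.4.8):

```python
import numpy as np

rs, thetas, ws = [0.5, 1.2], [0.7, 2.1], [0.4, 0.6]   # hypothetical measure
def c(t, j):
    return sum(w * r**t * np.exp(1j * j * th) for r, th, w in zip(rs, thetas, ws))

def inner(f, g):
    # (f, g)_C of (7.4.8) for finitely supported sequences f, g
    return sum(fv * np.conj(gv) * c(t + q, j - k)
               for (t, j), fv in f.items() for (q, k), gv in g.items())

def A(f): return {(t + 1, j): v for (t, j), v in f.items()}   # (7.4.6)
def U(f): return {(t, j + 1): v for (t, j), v in f.items()}

f = {(0, 0): 1.0, (1, -1): 2.0}
g = {(0, 1): 1.5, (2, 0): -1.0}
print(np.isclose(inner(A(f), g), inner(f, A(g))))   # A is Hermitian w.r.t. (.,.)_C
print(np.isclose(inner(U(f), U(g)), inner(f, g)))   # U is isometric
```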

Let C be the Hilbert space obtained as the completion of the quotient space

$$
\dot l_{\mathrm{fin}}:=l_{\mathrm{fin}}/\{h\in l_{\mathrm{fin}}\mid (h,h)_C=0\}.
$$

Each element $f$ of $l_{\mathrm{fin}}$ is a representative of the class $\dot f$ of equivalent elements of $\dot l_{\mathrm{fin}}$. This situation, in the case of self-adjoint and unitary operators, is described in Sect. 2.12. Hence, the operators $\dot A$ and $\dot U$ are correctly defined in C by the rule

$$
\dot A\dot f=(Af)^{\boldsymbol\cdot},\quad \dot f\in D(\dot A)=\dot l_{\mathrm{fin}};\qquad
\dot U\dot f=(Uf)^{\boldsymbol\cdot},\quad \dot f\in\dot l_{\mathrm{fin}}.
$$

Let us also denote by A and U the closures of $\dot A$ and $\dot U$, respectively, in C. It is obvious that the operator A commutes with U on $G:=l_{\mathrm{fin}}$, namely $AUg=UAg$, $g\in G$. Suppose, in general, that the operator A is not self-adjoint but Hermitian with non-trivial deficiency indices. Let $A_F$ be its Friedrichs extension and let $\tilde U$ be the closure of U in C with respect to the scalar product (7.4.8). The following lemma proves that $A_F$ commutes with $\tilde U$ in the strong resolvent sense.

Lemma 7.4.3 If the semi-bounded Hermitian operator A commutes with an isometric operator U on a subset G which is dense in a Hilbert space C, i.e.,

$$
AUg=UAg,\qquad g\in G,\quad \bar G=C,
\tag{7.4.9}
$$

then the Friedrichs extension $A_F$ of A commutes with the closure $\tilde U$ of the operator U in C in the strong resolvent sense.

Proof Without loss of generality, we assume that A is a non-negative operator; otherwise, $A+dI$, $d>0$, is considered instead of A. First of all, we show that

$$
A_F\tilde Uf=\tilde UA_Ff,\qquad \forall f\in\{f\in D(A_F)\mid \tilde Uf\in D(A_F)\}.
\tag{7.4.10}
$$

Consider the operator $B=A+1$, $D(B)=G$, and introduce the space $C_+$ as the completion of G with respect to the scalar product $(\varphi,\psi)_{C_+}=(B\varphi,\psi)_C$, $\forall\varphi,\psi\in G$. Now we have

$$
(g,g)_{C_+}=(Bg,g)_C\ge(g,g)_C,\qquad g\in G,
$$


and there is an embedding $C_+\subset C$. The bilinear form

$$
b(f,h)=(f,h)_{C_+},\qquad f,h\in D(b):=C_+
$$

is closed and has the representation

$$
b(f,h)=(B_Ff,h)_C,\qquad f\in D(B_F)\subset D(b)=C_+,\quad h\in D(b)=C_+.
$$

The operator $B_F$ in the space C is self-adjoint, $(B_Ff,f)\ge\|f\|^2$, $f\in D(B_F)$, and it is the Friedrichs extension of B. Let us show that

$$
B_F\tilde Uf=\tilde UB_Ff,\qquad f\in\{f\in D(B_F)\mid \tilde Uf\in D(B_F)\}.
\tag{7.4.11}
$$

Indeed, for all $g\in G\subset D(B_F)$, using the unitarity property of $\tilde U$ in C and considering (7.4.9), we can write

$$
(B_F\tilde Uf,Ug)_C=(\tilde Uf,B_FUg)_C=(\tilde Uf,BUg)_C=(\tilde Uf,(A+1)Ug)_C=(\tilde Uf,U(A+1)g)_C=(\tilde Uf,UBg)_C=(\tilde Uf,\tilde UBg)_C=(f,Bg)_C.
\tag{7.4.12}
$$

Similarly, for $g\in G\subset D(B_F)$, we have

$$
(\tilde UB_Ff,Ug)_C=(\tilde UB_Ff,\tilde Ug)_C=(B_Ff,g)_C=(f,B_Fg)_C=(f,Bg)_C.
\tag{7.4.13}
$$

Comparing the expressions (7.4.12) and (7.4.13), we get

$$
(B_F\tilde Uf,Ug)_C=(\tilde UB_Ff,\tilde Ug)_C,\qquad\forall g\in G.
\tag{7.4.14}
$$

Since G is dense in C, from (7.4.14) we get (7.4.11). Taking into account the definition of the Friedrichs extension, the equality (7.4.14), and $A_F=B_F-1$, we get (7.4.10). The expressions (7.4.10) and (7.4.11) mean that

$$
(A_F+1)\tilde Uf=\tilde U(A_F+1)f,\qquad f\in\{f\in D(A_F)\mid\tilde Uf\in D(A_F)\}.
\tag{7.4.15}
$$

Since the operator $A_F+1$ has an inverse defined everywhere in C, from (7.4.15) we get

$$
(A_F+1)^{-1}\tilde U^{-1}f=\tilde U^{-1}(A_F+1)^{-1}f,\qquad f\in C.\qquad\square
$$


Note that, in general, conditions for the existence of self-adjoint extensions that commute with a given unitary operator are closely related to the problem of extending a formally normal operator to a normal one, and to the complex moment problem in the power form considered in Chap. 5. In the next step we use Theorem 2.6.8. For simplicity, we suppose that the complex moment sequence $(c_{t,j})$ is non-degenerate, i.e., if $(f,f)_C=0$ for $f\in l_{\mathrm{fin}}$, then $f=0$ and, hence, $\dot f=f$, $\dot A=A$ and $\dot U=U$. The investigation of the general case is more complicated. Let us consider one of the self-adjoint extensions of the operator A which commutes with the unitary operator U (as was shown in Lemma 7.4.3, the Friedrichs extension has this property). Later, we will prove that A is essentially self-adjoint under the condition (7.4.5). Let us construct the rigging of spaces

$$
(l_2(p))_{-,C}\supset C\supset l_2(p)\supset l_{\mathrm{fin}},
\tag{7.4.16}
$$

where $l_2(p)$ is a weighted $l_2$-type space with a weight $p=(p_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, $p_{t,j}\ge1$. The norm in $l_2(p)$ is given by $\|f\|^2_{l_2(p)}=\sum_{t\in\mathbb N_0,\,j\in\mathbb Z}|f_{t,j}|^2p_{t,j}$; $(l_2(p))_{-,C}=H_-$ is the negative space with respect to the positive space $l_2(p)=H_+$ and the zero space $C=H$. The space $l_{\mathrm{fin}}=D$ is endowed with the coordinate-wise convergence.

Lemma 7.4.4 If the sequence $(p_{t,j})$ increases sufficiently fast, then the embedding $l_2(p)\hookrightarrow C$ is quasi-nuclear.

Proof The inequality (7.4.3) means that the multimatrix $(K_{t,j;q,k})$, $t,q\in\mathbb N_0$, $j,k\in\mathbb Z$, where $K_{t,j;q,k}=c_{t+q,j-k}$, is non-negative definite and, therefore,

$$
|c_{t+q,j-k}|^2=|K_{t,j;q,k}|^2\le K_{t,j;t,j}K_{q,k;q,k}=c_{2t,0}\,c_{2q,0},\qquad t,q\in\mathbb N_0,\ j,k\in\mathbb Z.
\tag{7.4.17}
$$

Let $(q_{t,j})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, $q_{t,j}\ge1$, be such that $\sum_{t\in\mathbb N_0,\,j\in\mathbb Z}c_{2t,0}\,q_{t,j}^{-1}<\infty$. Then from (7.4.8) and (7.4.17) it follows that

$$
\|f\|^2_C=\sum_{t,q\in\mathbb N_0}\sum_{j,k\in\mathbb Z}f_{t,j}\bar f_{q,k}\,c_{t+q,j-k}\le\|f\|^2_{l_2(q)}\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}\frac{c_{2t,0}}{q_{t,j}},\qquad f\in l_{\mathrm{fin}}.
$$

Therefore, the embedding $l_2(q)\hookrightarrow C$ is topological. And if $\sum_{t\in\mathbb N_0,\,j\in\mathbb Z}q_{t,j}\,p_{t,j}^{-1}<\infty$, then $l_2(p)\hookrightarrow l_2(q)$ is quasi-nuclear. The composition $l_2(p)\hookrightarrow C$ of the quasi-nuclear and topological embedding operators is also quasi-nuclear. $\square$

In the next step we use the rigging (7.4.16) to construct generalized eigenvectors. The inner structure of the space $(l_2(p))_{-,C}$ is complicated because of the complicated structure of C. For this reason, we introduce a new auxiliary rigging

$$
l=(l_{\mathrm{fin}})'\supset l_2(p^{-1})\supset l_2\supset l_2(p)\supset l_{\mathrm{fin}},
\tag{7.4.18}
$$

where $l_2(p^{-1})$, $p^{-1}=(p_{t,j}^{-1})$, $t\in\mathbb N_0$, $j\in\mathbb Z$, is the negative space with respect to the positive space $l_2(p)$ and the zero space $l_2$. The chains (7.4.16) and (7.4.18) have the same positive space $l_2(p)$. The general Lemma 2.7.5 establishes that the space $(l_2(p))_{-,C}$ is isometric to the space $l_2(p^{-1})$. We consider (7.4.16) and (7.4.18) instead of the riggings (2.7.35). Let $\xi_z\in(l_2(p))_{-,C}$ be a generalized eigenvector of the operators A and U in terms of the chain (7.4.16). In such a case, due to Theorem 7.4.1, with $z=re^{i\theta}\in\mathbb C$ we have

$$
(\xi_z,Af)_C=r(\xi_z,f)_C,\qquad (\xi_z,Uf)_C=e^{-i\theta}(\xi_z,f)_C,\qquad f\in l_{\mathrm{fin}}.
\tag{7.4.19}
$$

Denote

$$
P(z)=V\xi_z\in l_2(p^{-1})\subset l,\qquad P(z)=(P_{t,j}(z)),\quad t\in\mathbb N_0,\ j\in\mathbb Z,\quad P_{t,j}(z)\in\mathbb C.
$$

By using (2.7.36), we can rewrite (7.4.19) in the form

$$
(P(z),Af)_{l_2}=r(P(z),f)_{l_2},\qquad (P(z),Uf)_{l_2}=e^{-i\theta}(P(z),f)_{l_2},
\tag{7.4.20}
$$

$$
z=re^{i\theta},\quad r\in\mathbb R^1,\ \theta\in[0,2\pi),\qquad f\in l_{\mathrm{fin}}.
$$

The corresponding Fourier transform has the form

$$
S\supset l_{\mathrm{fin}}\ni f\mapsto(Ff)(z)=\hat f(z)=(f,P(z))_{l_2}\in L_2(\mathbb C,d\rho(z)).
\tag{7.4.21}
$$

Let us calculate $P(z)$. The operator A is generated by the rule (7.4.6) and, hence, (7.4.20) gives

$$
\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}rP_{t,j}(z)\bar f_{t,j}=r(P(z),f)_{l_2}=(P(z),Af)_{l_2}=(AP(z),f)_{l_2}=\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}P_{t+1,j}(z)\bar f_{t,j},\qquad\forall f\in l_{\mathrm{fin}}.
\tag{7.4.22}
$$

Similarly, using (7.4.20), we have

$$
\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}e^{i\theta}P_{t,j}(z)\bar f_{t,j}=e^{i\theta}(P(z),f)_{l_2}=(P(z),Uf)_{l_2}=(U^{-1}P(z),f)_{l_2}=\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}P_{t,j+1}(z)\bar f_{t,j},\qquad\forall f\in l_{\mathrm{fin}}.
\tag{7.4.23}
$$

Hence, we have

$$
rP_{t,j}(z)=P_{t+1,j}(z),\qquad e^{i\theta}P_{t,j}(z)=P_{t,j+1}(z),\qquad t\in\mathbb N_0,\ j\in\mathbb Z.
$$

Without loss of generality, we can put $P_{0,0}(z)=1$, $z\in\mathbb C$. Then (7.4.22) and (7.4.23) give

$$
P_{t,j}(z)=r^te^{ij\theta},\qquad t\in\mathbb N_0,\ j\in\mathbb Z.
\tag{7.4.24}
$$

Thus, the Fourier transform (7.4.21) has the form

$$
S\supset l_{\mathrm{fin}}\ni f\mapsto(Ff)(z)=\hat f(z)=\sum_{t\in\mathbb N_0}\sum_{j\in\mathbb Z}f_{t,j}\,r^te^{ij\theta}\in L_2(\mathbb C,d\rho(r,\theta)),
\tag{7.4.25}
$$

and we get the corresponding Parseval equality

$$
(f,g)_C=\int_{\mathbb C}\hat f(z)\overline{\hat g(z)}\,d\rho(z),\qquad f,g\in l_{\mathrm{fin}}.
\tag{7.4.26}
$$

To construct the Fourier transform (7.4.21) and verify the formulas (7.4.22)–(7.4.26), it is still necessary to check that, for our operators A and U, the vector $\omega_0=\delta_{0,0}\in l_{\mathrm{fin}}$ is a strong cyclic vector in the sense of the chain (7.4.16). But this is evidently true, since by (7.4.6) we have $A^tU^j\omega_0=A^tU^j\delta_{0,0}=\delta_{t,j}$. The Parseval equality (7.4.26) immediately leads to the representation (7.4.2); according to (7.4.24) and (7.4.25), $\hat\delta_{t,j}=r^te^{ij\theta}$, and by (7.4.8) we get

$$
c_{t,j}=(\delta_{t,j},\delta_{0,0})_C=(\hat\delta_{t,j},\hat\delta_{0,0})_{L_2(\mathbb C,d\rho(z))}=\int_{\mathbb C}r^te^{ij\theta}\,d\rho(r,\theta),
$$

$$
t\in\mathbb N_0,\quad j\in\mathbb Z,\qquad z=re^{i\theta},\ r\in\mathbb R^1,\ \theta\in[0,2\pi).
$$

According to well-known facts (Theorem 2.6.8, Lemma 2.6.7 and (2.6.21)), the measure in the representation (7.4.2) is unique if and only if the operator $l_{\mathrm{fin}}\ni f\mapsto Af\in l_{\mathrm{fin}}$ is essentially self-adjoint in the space C. Due to the quasi-analytic criterion of self-adjointness, i.e., Theorem 2.6.8, in our case it is enough to check that for each vector $\delta_{t,j}$, $t\in\mathbb N_0$, $j\in\mathbb Z$, the class $C\{\|A^p\delta_{t,j}\|_C\}$ is quasi-analytic, since the set of vectors $\delta_{t,j}$ is total in C. According to (7.4.6), we have

$$
\|A^p\delta_{t,j}\|^2_C=\|\delta_{t+p,j}\|^2_C=|c_{2t+2p,0}|.
$$

Due to the quasi-analyticity properties, the quasi-analyticity of the class $C\{m_{l+n}\}$ with a fixed $l\in\mathbb N$ is equivalent to the quasi-analyticity of the class $C\{m_n\}$. This completes the proof of Theorem 7.4.2. $\square$
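For a discrete measure, the Parseval equality (7.4.26), which drives the representation (7.4.2), reduces to a finite identity that can be checked directly. A sketch under that assumption (all names are illustrative):

```python
import numpy as np

# Hypothetical discrete measure rho: masses ws at the points (rs, thetas).
rs, thetas, ws = np.array([0.5, 1.0]), np.array([0.4, 2.0]), np.array([0.3, 0.7])
c = lambda t, j: np.sum(ws * rs**t * np.exp(1j * j * thetas))   # moments (7.4.2)

f = {(0, 0): 1.0, (2, 1): 0.5}
g = {(1, -1): 2.0, (0, 0): -1.0}

# Left side of (7.4.26): (f, g)_C computed from the moments via (7.4.8)
lhs = sum(fv * np.conj(gv) * c(t + q, j - k)
          for (t, j), fv in f.items() for (q, k), gv in g.items())

# Right side: the integral of fhat * conj(ghat), with fhat as in (7.4.25)
fhat = sum(v * rs**t * np.exp(1j * j * thetas) for (t, j), v in f.items())
ghat = sum(v * rs**t * np.exp(1j * j * thetas) for (t, j), v in g.items())
rhs = np.sum(ws * fhat * np.conj(ghat))
print(np.isclose(lhs, rhs))
```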

Bibliographical Notes The results of this chapter also generalize the classical results of Akhiezer N. I. [2, 3]. The integral representation was obtained thanks to the works of Berezansky Yu. M. [17, 18]. The theory of equipped (rigged) spaces is used following the book of Koshmanenko V. D. and Dudkin M. E. [171]. For the general theory of orthogonal polynomials, see the books of Geronimus Ya. L. [117] and Szegö G. [278]. Questions about the commutativity of symmetric and unitary operators arose in research and were discussed in the works of Kočubeĭ A. N. [164] and Schmüdgen K. [254, 255]. The main content of the chapter is contained in the works of Berezansky Yu. M. and Dudkin M. E. [35, 85]. The papers of Bisgaard T. M. [55, 56], Devinatz A. [81], and Stochel J., Szafraniec F. H. [269–272, 275] are close to the situation considered in the chapter. In considering the uniqueness of the integral representation, we also actively use the works of Carleman T. [64], Nelson E. [225], and Nussbaum A. E. [231, 232].

Chapter 8

Block Jacobi Type Matrices and the Two Dimensional Real Moment Problem

This chapter proposes an analogue of Jacobi-type block matrices related to the two-dimensional real power moment problem and the corresponding polynomials orthogonal with respect to some probability measure on the real plane. In this case, two commuting block matrices are obtained, which have a tri-diagonal block structure and are self-adjoint operators in an $l_2$-type space. The existence of a one-to-one correspondence between probability measures on a compact set of the real plane and such matrices is also proved.

8.1 Construction of the Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Two Dimensional Real Moment Problem

The inverse spectral problem for block Jacobi-type matrices corresponding to the two-dimensional moment problem consists in constructing tri-diagonal block matrices corresponding either to the two-dimensional moment problem or, one might say, to a Borel measure on the real plane, under the condition that the measure has all moments. This section gives a detailed proof of this inverse problem. We would like to note that, historically, such matrices appeared for the first time in the work of Gekhtman M. I. and Kalyuzhnyĭ A. A.

Let $d\rho(x,y)$ be a Borel probability measure with compact support on the real plane $\mathbb R^2$, and let $L_2=L_2(\mathbb R^2,d\rho(x,y))$ be the space of square integrable functions with respect to a Borel probability measure $d\rho(x,y)$ with a compact support on the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y. M. Berezansky, M. E. Dudkin, Jacobi Matrices and the Moment Problem, Operator Theory: Advances and Applications 294, https://doi.org/10.1007/978-3-031-46387-7_8


real plane $\mathbb R^2$. We assume that the support of this measure is not only a compact set, but also one that contains a non-empty open subset. Hence, the functions $\mathbb R^2\ni(x,y)\mapsto x^my^n$, $m,n\in\mathbb N_0$, are linearly independent and form a total set in $L_2$. Let us consider the operators of multiplication by x and by y on the space $L_2$, namely,

$$
(\hat Af)(x,y)=xf(x,y),\qquad (\hat Bf)(x,y)=yf(x,y).
$$

It is obvious that these operators are bounded and self-adjoint. To find the Jacobi-type matrices of the operators $\hat A$ and $\hat B$, some order of orthogonalization in $L_2$ is chosen for the family of functions

$$
\{x^my^n\},\qquad m,n\in\mathbb N_0.
\tag{8.1.1}
$$

We use the order of orthogonalization according to the Gram–Schmidt procedure (see Fig. 8.1):

$$
x^0y^0;\quad x^0y^1,\ x^1y^0;\quad x^0y^2,\ x^1y^1,\ x^2y^0;\quad\dots;\quad x^0y^n,\ x^1y^{n-1},\dots,x^ny^0;\quad\dots.
\tag{8.1.2}
$$

Fig. 8.1 The orthogonalization order: the monomials $x^jy^k$ are traversed along the anti-diagonals $j+k=n$, $n=0,1,2,\dots$

As a result, we get an orthonormal system of polynomials (each a polynomial in the variables $x^my^n$, $m,n\in\mathbb N_0$), which we denote and arrange in groups

$$
P_{0;0}(x,y);\quad P_{1;0}(x,y),\ P_{1;1}(x,y);\quad P_{2;0}(x,y),\ P_{2;1}(x,y),\ P_{2;2}(x,y);\quad\dots;\quad P_{n;0}(x,y),\ P_{n;1}(x,y),\dots,P_{n;n}(x,y);\quad\dots,
\tag{8.1.3}
$$

where each polynomial has the form $P_{n;\alpha}(x,y)=k_{n;\alpha}x^\alpha y^{n-\alpha}+\cdots$, $n\in\mathbb N_0$, $\alpha=0,1,\dots,n$, $k_{n;\alpha}>0$; here "$+\cdots$" denotes the remaining part of the corresponding polynomial; for convenience we put $P_{0;0}(x,y)=1$. Therefore, $P_{n;\alpha}$ is some linear combination of the elements

$$
\{1;\ x^0y^1,\ x^1y^0;\ \dots;\ x^0y^n,\ x^1y^{n-1},\dots,x^\alpha y^{n-\alpha}\}.
\tag{8.1.4}
$$
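The orthogonalization (8.1.2)–(8.1.3) and the tri-diagonal block structure established below in Lemma 8.1.2 can be illustrated numerically. A sketch with a hypothetical discrete measure on $\mathbb R^2$ (all names are illustrative):

```python
import numpy as np

# Hypothetical discrete measure on R^2: equal point masses at random points.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(60, 2))
w = np.full(60, 1.0 / 60)
ip = lambda u, v: np.sum(w * u * v)                 # (u, v) in L2(d rho)

# Monomials x^a y^(n-a) in the order (8.1.2): groups n = 0, 1, ..., N
N = 4
monos = [(a, n - a) for n in range(N + 1) for a in range(n + 1)]
vals = [pts[:, 0]**a * pts[:, 1]**b for a, b in monos]

# Gram-Schmidt (repeated twice for numerical stability): P holds the
# orthonormalized polynomials, as values on the mass points
P = []
for v in vals:
    u = v.copy()
    for _ in range(2):
        u = u - sum(ip(u, p) * p for p in P)
    P.append(u / np.sqrt(ip(u, u)))

# Block a_{j,k; alpha, beta} = (x * P_{k;beta}, P_{j;alpha}); group n has n+1 entries
starts = np.cumsum([0] + [n + 1 for n in range(N + 1)])
x = pts[:, 0]
def block(j, k):
    return np.array([[ip(x * P[starts[k] + b], P[starts[j] + a])
                      for b in range(k + 1)] for a in range(j + 1)])

# Tri-diagonal block structure (8.1.14): a_{j,k} vanishes for |j - k| > 1
print(np.max(np.abs(block(0, 2))) < 1e-10, np.max(np.abs(block(3, 1))) < 1e-10)
```

The vanishing of the off-tri-diagonal blocks is exactly the content of Lemma 8.1.2: multiplication by x raises the total degree by at most one.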

Since the system (8.1.1) is total in the space $L_2$, the sequence (8.1.3) is an orthonormal basis in this space. Let the subspace $\mathcal P_{n;\alpha}$ be spanned by (8.1.4). It is obvious that

$$
\mathcal P_{0;0}\subset\mathcal P_{1;0}\subset\mathcal P_{1;1}\subset\mathcal P_{2;0}\subset\mathcal P_{2;1}\subset\mathcal P_{2;2}\subset\cdots\subset\mathcal P_{n;0}\subset\mathcal P_{n;1}\subset\cdots\subset\mathcal P_{n;n}\subset\cdots,
$$

$$
\mathcal P_{n;\alpha}=\{P_{0;0}(x,y)\}\oplus\{P_{1;0}(x,y)\}\oplus\{P_{1;1}(x,y)\}\oplus\{P_{2;0}(x,y)\}\oplus\{P_{2;1}(x,y)\}\oplus\{P_{2;2}(x,y)\}\oplus\cdots\oplus\{P_{n;0}(x,y)\}\oplus\{P_{n;1}(x,y)\}\oplus\cdots\oplus\{P_{n;\alpha}(x,y)\},\qquad\forall n\in\mathbb N,
\tag{8.1.5}
$$

where $\{P_{n;\alpha}(x,y)\}$, $n\in\mathbb N$, $\alpha=0,1,\dots,n$, denotes the one-dimensional space spanned by $P_{n;\alpha}(x,y)$; $\mathcal P_{0;0}=\mathbb R$. In the subsequent investigation we need the Hilbert space

$$
l_2=\mathcal H_0\oplus\mathcal H_1\oplus\mathcal H_2\oplus\cdots,\qquad \mathcal H_n=\mathbb C^{n+1},\quad n\in\mathbb N_0,
\tag{8.1.6}
$$

instead of the usual space $l_2$ used in the classical (one-dimensional) Hamburger moment problem. Each vector $f\in l_2$ has the form $f=(f_n)_{n=0}^\infty$, $f_n\in\mathcal H_n$; hence,

$$
\|f\|^2_{l_2}=\sum_{n=0}^\infty\|f_n\|^2_{\mathcal H_n}<\infty,\qquad
(f,g)_{l_2}=\sum_{n=0}^\infty(f_n,g_n)_{\mathcal H_n},\qquad\forall f,g\in l_2.
$$

The coordinates of a vector $f_n\in\mathcal H_n$, $n\in\mathbb N_0$, in some orthonormal basis $\{e_{n;0},e_{n;1},e_{n;2},\dots,e_{n;n}\}$ of the space $\mathbb C^{n+1}$ we denote by $(f_{n;0},f_{n;1},f_{n;2},\dots,f_{n;n})$; hence, we have $f_n=(f_{n;0},f_{n;1},f_{n;2},\dots,f_{n;n})$.

By using the orthonormal system (8.1.3), one can define a mapping of $l_2$ into $L_2$. Denote

$$
P_n(x,y)=(P_{n;0}(x,y),P_{n;1}(x,y),P_{n;2}(x,y),\dots,P_{n;n}(x,y))\in\mathcal H_n,\qquad\forall(x,y)\in\mathbb R^2,\ \forall n\in\mathbb N_0.
$$

Then

$$
l_2\ni f=(f_n)_{n=0}^\infty\mapsto(\hat If)(x,y):=f(x,y)=\sum_{n=0}^\infty(f_n,P_n(x,y))_{\mathcal H_n}\in L_2.
\tag{8.1.7}
$$

Thus, for $n\in\mathbb N_0$ we get

$$
(f_n,P_n(x,y))_{\mathcal H_n}=f_{n;0}P_{n;0}(x,y)+f_{n;1}P_{n;1}(x,y)+f_{n;2}P_{n;2}(x,y)+\cdots+f_{n;n}P_{n;n}(x,y),
$$

$$
\|f\|^2_{l_2}=\|(f_{0;0},f_{1;0},f_{1;1},f_{2;0},f_{2;1},f_{2;2},\dots,f_{n;0},f_{n;1},\dots,f_{n;n},\dots)\|^2_{l_2}.
$$

Thus, (8.1.7) is a mapping of the space $l_2$ into $L_2$ and, taking into account that the orthonormal system (8.1.3) is total, this mapping is isometric. The mapping (8.1.7) maps the whole space $l_2$ onto the whole space $L_2$, because the system (8.1.3) is an orthonormal basis in $L_2$. Therefore, the mapping (8.1.7) is an isometric transformation (denoted by I) acting from $l_2$ onto $L_2$.

Let T be a bounded linear operator defined on the space $l_2$. Then there exists a unique sequence $(\tau_{j,k})_{j,k=0}^\infty$, where for each $j,k\in\mathbb N_0$ the element $\tau_{j,k}$ is an operator from $\mathcal H_k$ into $\mathcal H_j$, so that

$$
(Tf)_j=\sum_{k=0}^\infty\tau_{j,k}f_k,\quad j\in\mathbb N_0;\qquad
(Tf,g)_{l_2}=\sum_{j,k=0}^\infty(\tau_{j,k}f_k,g_j)_{\mathcal H_j},\quad f,g\in l_2.
\tag{8.1.8}
$$

For the proof of (8.1.8) we need only to write the usual matrix of the operator T in $l_2$ using the basis

$$
(e_{0;0};\ e_{1;0},e_{1;1};\ e_{2;0},e_{2;1},e_{2;2};\ \dots;\ e_{n;0},e_{n;1},\dots,e_{n;n};\ \dots),\qquad e_{0;0}=1.
\tag{8.1.9}
$$

Then $\tau_{j,k}$ is the operator $\mathcal H_k\to\mathcal H_j$ for each $j,k\in\mathbb N_0$. This operator has the matrix representation

$$
\tau_{j,k;\alpha,\beta}=(Te_{k;\beta},e_{j;\alpha})_{l_2},
\tag{8.1.10}
$$


where $\alpha=0,1,\dots,j$ and $\beta=0,1,\dots,k$. We write $\tau_{j,k}=(\tau_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{j,k}$, including the cases $\tau_{0,k}=(\tau_{0,k;\alpha,\beta})_{\alpha,\beta=0}^{0,k}$, $\tau_{j,0}=(\tau_{j,0;\alpha,\beta})_{\alpha,\beta=0}^{j,0}$ and $\tau_{0,0}=(\tau_{0,0;\alpha,\beta})_{\alpha,\beta=0}^{0,0}=\tau_{0,0;0,0}$. It should be noted that the representation (8.1.8) is also correct for a general operator T acting in the space $l_2$ with the domain $\mathrm{Dom}(T)=l_{\mathrm{fin}}\subset l_2$, where $l_{\mathrm{fin}}$ denotes the subspace of finite vectors of $l_2$. In this case, the first formula in (8.1.8) is valid for $f\in l_{\mathrm{fin}}$; the second formula is valid for $f\in l_{\mathrm{fin}}$, $g\in l_2$.

Let us consider the image $\hat T=ITI^{-1}\colon L_2\to L_2$ of the above-mentioned bounded operator $T\colon l_2\to l_2$ under the mapping (8.1.7). The matrix of $\hat T$ in the basis (8.1.3),

$$
P_{0;0}(x,y);\ P_{1;0}(x,y),\ P_{1;1}(x,y);\ P_{2;0}(x,y),\ P_{2;1}(x,y),\ P_{2;2}(x,y);\ \dots;\ P_{n;0}(x,y),\ P_{n;1}(x,y),\dots,P_{n;n}(x,y);\ \dots,
$$

is equal to the usual matrix of the operator T, regarded as an operator $l_2\to l_2$, in the corresponding basis (8.1.9). By using (8.1.10) and the above-mentioned procedure, we get the operator matrix $(\tau_{j,k})_{j,k=0}^\infty$ of $T\colon l_2\to l_2$. By definition, this matrix is also the operator matrix of $\hat T\colon L_2\to L_2$. It is obvious that $\hat T$ can be an arbitrary linear bounded operator in $L_2$. In what follows we consider T instead of $\hat T$, and the role of T will be played by A and B, corresponding to our matrices $J_A$ and $J_B$.

Lemma 8.1.1 For the polynomials $P_{n;\alpha}(x,y)$ and the subspaces $\mathcal P_{m;\beta}$, $n,m\in\mathbb N_0$, $\alpha=0,1,\dots,n$, $\beta=0,1,\dots,m$, the relations

$$
xP_{n;\alpha}(x,y)\in\mathcal P_{n+1;\alpha+1},\qquad yP_{n;\alpha}(x,y)\in\mathcal P_{n+1;\alpha}
\tag{8.1.11}
$$

hold true.

Proof According to (8.1.3), the polynomial $P_{n;\alpha}(x,y)$, $n\in\mathbb N_0$, is some linear combination of

$$
\{1;\ x^0y^1,\ x^1y^0;\ \dots;\ x^0y^n,\ x^1y^{n-1},\dots,x^\alpha y^{n-\alpha}\}.
$$

Hence, multiplication of the last set by x gives a linear combination of

$$
\{x;\ x^1y^1,\ x^2y^0;\ \dots;\ x^1y^n,\ x^2y^{n-1},\dots,x^{\alpha+1}y^{n-\alpha}\},
$$

and such a linear combination belongs to $\mathcal P_{n+1;\alpha+1}$. The similar multiplication by y gives a linear combination of

$$
\{y^1;\ x^0y^2,\ x^1y^1;\ \dots;\ x^0y^{n+1},\ x^1y^n,\dots,x^\alpha y^{n-\alpha+1}\},
$$

and such a linear combination belongs to $\mathcal P_{n+1;\alpha}$. $\square$

Lemma 8.1.2 Let .Aˆ be an operator of the multiplication by x in the space .L2 , namely, ˆ L2 e ϕ(x, y) |−→ (Aϕ)(x, y) = xϕ(x, y) ∈ L2 .

.

(It is obvious that .Aˆ is self-adjoint and bounded.) Then, the matrix .Aˆ = (aj,k )∞ j,k=0 , −1 ˆ ), has a tri-diagonal structure, i.e., .aj,k = 0 for in basis (8.1.3) (i.e., .A = I AI .|j − k| > 1. Proof By using (8.1.10) for .en;γ = I −1 Pn;γ (x, y), .n ∈ N0 , .γ = 0, 1, . . . , n, we get f aj,k;α,β = (Aek;β , ej ;α )l2 =

xPk;β (x, y)Pj ;α (x, y) dρ(x, y),

.

∀j, k ∈ N0 ,

R2

(8.1.12) where .α = 0, 1, . . . , j, β = 0, 1, . . . , k. From (8.1.11), we get .xPk;β (x, y) ∈ Pk+1;β+1 . According to expressions from (8.1.5), the integral in (8.1.12) is equal to zero for .j > k + 1 and for each .β = 0, 1, . . . , j . On the other hand, the integral in (8.1.12) has the form f (a )j,k;α,β = xPk;β (x, y)Pj ;α (x, y) dρ(x, y) ∗

R2

f

.

=

(8.1.13) xPj ;α (x, y)Pk;β (x, y) dρ(x, y) = ak,j ;β,α ,

R2

where .α = 0, 1, . . . , j and .β = 0, 1, . . . , k. Since the operator .Aˆ is symmetric, from (8.1.11), we have .xPj ;α (x, y) ∈ Pj +1;α+1 . According to (8.1.5), the last integral is equal to zero for .k > j +1 and for each .α = 0, 1, . . . , k, .β = 0, 1, . . . , k. As result the integral in (8.1.12), i.e., coefficients .aj,k;α,β , .j, k ∈ N0 are equal to zero for .|j − k| > 1; .α = 0, 1, . . . , j , .β = 0, 1, . . . , k. (In the previous considerations it is necessary to take into account that .P0;0 (x, y) = 1, .e0;0 = I −1 P0;0 (x, y)). u n ˆ Therefore, the matrix .(aj,k )∞ j,k=0 of the operator .A has a tri-diagonal block structure ⎡

a0,0 ⎢ a1,0 ⎢ .⎢ ⎣ 0 .. .

a0,1 a1,1 a2,1 .. .

0

0 0 ... a1,2 0 0 . . . a2,2 a2,3 0 . . . .. .. .. . . . . . .

⎤ ⎥ ⎥ ⎥. ⎦

(8.1.14)

8.1 Construction of the Tri-Diagonal Block Jacobi Type Matrices. . .

377

Further detailed analysis of the expressions (8.1.12) makes it possible to identify the zero and non-zero elements of the matrices $(a_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{j,k}$ in each case with $|j-k|\le1$. We also use properties of permutations of the matrix indices $j,k$ and $\alpha,\beta$. Lemma 8.1.3 Let $(a_{j,k})_{j,k=0}^\infty$ be the operator matrix of the multiplication by x

operator in .L2 , where .aj,k : Hk −→ Hj ; .aj,k = (aj,k;α,β )α,β=0 are matrices of operators .aj,k in the corresponding orthonormal basis. Then, .∀j ∈ N0 , we get .

∀α = 0, 1, . . . j − 1, aj,j +1;α,α+2 = aj,j +1;α,α+3 = · · · = aj,j +1;α,j +1 = 0; ∀β = 0, 1, . . . j − 1, aj +1,j ;β+2,β = aj +1,j ;β+3,β = · · · = aj +1,j ;j +1,β = 0. (8.1.15) If we choose inside of each diagonal {x 0 y n , x n y n−1 , x 0 y n−2 , . . . , x n y 0 }

.

another order (preserving the order of diagonals), then Lemma 8.1.3 is not valid j,k but it is also possible to describe zeros of matrices .(aj,k;α,β )α,β=0 . Such matrices ∞ .(aj,k ) j,k=0 have also tri-diagonal block structure and have another elements always equal zero. Proof According to (8.1.12) and (8.1.11), for .j ∈ N0 , .∀α = 0, 1, . . . , j and .∀β = 0, 1, . . . , j + 1, we have f aj,j +1;α,β =

xPj +1,β (x, y)Pj ;α (x, y) dρ(x, y) R2

f

.

=

xPj,α (x, y)Pj +1;β (x, y) dρ(x, y), R2

where .xPj ;α (x, y) ∈ Pj +1;α+1 . But according to (8.1.5), .Pj +1;β (x, y) is orthogonal to .Pj +1;α+1 for .β > α + 1 and, hence, the last integral is equal to zero. This gives the first equalities in (8.1.15). Similarly, from (8.1.12) and (8.1.11), for .j ∈ N0 .∀α = 0, 1, . . . , j + 1 and .∀β = 0, 1, . . . , j , we have f aj +1,j ;α,β =

xPj,β (x, y)Pj +1;α (x, y) dρ(x, y),

.

R2

where .xPj ;β (x, y) ∈ Pj +1;β+1 . But according to (8.1.5), .Pj +1;α (x, y) is orthogonal to .Pj +1;β+1 for .α > β + 1 and, hence, the last integral is equal to zero. This gives the second equalities in (8.1.15). u n

378

8 Block Jacobi Type Matrices and the Two Dimensional Real Moment Problem

Therefore, after the last investigations we conclude that, in (8.1.14), for all $j\in\mathbb N$, the right corner of each $((j+1)\times(j+2))$-matrix $a_{j,j+1}$ (starting from the third diagonal) and the left corner of each $((j+2)\times(j+1))$-matrix $a_{j+1,j}$ (starting from the third diagonal) always consist of zero elements. Taking into account (8.1.14), we can conclude that the matrix of the multiplication by x operator is multi-diagonal as a usual scalar matrix, i.e., in the usual basis of the space $l_2$. Lemma 8.1.4 The elements $a_{0,1;0,1}$, $a_{1,0;1,0}$;

.

aj,j +1;α,α+1 , aj +1,j ;α+1,α ;

j ∈ N, α = 0, 1, . . . j, (8.1.16)

of the matrices $(a_{j,k})_{j,k=0}^\infty$ from Lemma 8.1.3 are always positive. Proof We start from the investigation of $a_{0,1;0,1}$. Denote by $P'_{1;1}(x,y)$ the non-normalized vector $P_{1;1}(x,y)$ obtained from the Gram–Schmidt orthogonalization procedure. According to (8.1.2) and (8.1.3), we have $P'_{1;1}(x,y)=x-(x,P_{1;0}(x,y))_{L_2}P_{1;0}(x,y)-(x,1)_{L_2}$.

.

Therefore, using (8.1.30), we get f a0,1;0,1 =

' xP0;0 P1;1 (x, y) dρ(x, y) = ||P1;1 (x, y)||−1 L2

R2 ' =||P1;1 (x, y)||−1 L2

.

f

f

' xP1;1 (x, y) dρ(x, y)

R2

x(x − (x, P1;0 (x, y))L2 P1;0 (x, y) − (x, 1)L2 ) dρ(x, y) R2

' 2 2 2 (x, y)||−1 =||P1;1 L2 (||x||L2 − |(x, P1;0 (x, y))L2 | − |(x, 1)L2 | ),

(8.1.17) where .(1 = P0;0 (x, y)). Due to (8.1.36) also, we conclude that the last expression is positive and, hence, .a0,1;0,1 > 0. Since the operator A is symmetric, than .a0,1;0,1 = a1,0;1,0 > 0. The positiveness in (8.1.17) follows from the Parseval equality applied by the decomposition of the function .x ∈ L2 with respect to the orthonormal basis (8.1.3) in the space .L2 , namely, |(x, 1)L2 |2 + |(x, P1;0 (x, y))L2 |2 + |(x, P1;1 (x, y))L2 |2 + · · · = ||x||2L2 .

.

(8.1.18)

8.1 Construction of the Tri-Diagonal Block Jacobi Type Matrices. . .

379

Consider elements .aj,j +1;α,α+1 , for .j ∈ N, .α = 0, 1, . . . , j . From (8.1.30), we get f aj,j +1;α,α+1 =

xPj +1,α+1 (x, y)Pj ;α (x, y) dρ(x, y) R2

f

.

=

(8.1.19) xPj,α (x, y)Pj +1;α+1 (x, y) dρ(x, y).

R2

For .Pj ;α (x, y), according to (8.1.3) and (8.1.5), we have Pj ;α (x, y) = kj ;α x α y j −α + Rj ;α (x, y),

.

(8.1.20)

where .Rj ;α (x, y) is some polynomial from .Pj ;α−1 if .α > 0, or from .Pj −1;j −1 if .α = 0. Thus, .xRj ;α (x, y) is some polynomial from .Pj +1;α or from .Pj ;j (see (8.1.11) and (8.1.5)). Multiplying it on x, we get xPj ;α (x, y) = kj ;α x α+1 y j −α + xRj ;α (x, y),

.

(8.1.21)

where .xRj ;α (x, y) ∈ Pj +1;α or belongs to .Pj ;j . On the other hand, equality (8.1.20) gives: Pj +1;α (x, y) = kj +1;α x α+1 y j −α + Rj +1;α (x, y),

.

(8.1.22)

where .Rj +1;α (x, y) ∈ Pj +1;α if .α > 0 or belongs to .Pj ;j if .α = 0. We take .x α+1 y j −α from (8.1.22) and substitute it into (8.1.21). Thus, we get xPj ;α (x, y) = .

=

kj ;α kj +1;α

(Pj +1;α (x, y) − Rj +1;α (x, y)) + xRj ;α (x, y)

kj ;α kj ;α Rj +1;α (x, y) + xRj ;α (x, y). Pj +1;α (x, y) − kj +1;α kj +1;α (8.1.23)

The second two terms in (8.1.23) belong to .Pj +1;α or belong to .Pj ;j and in any case orthogonal to .Pj +1;α+1 (x, y). Thus, the substitution of the expression (8.1.23) into (8.1.19) gives −1 > 0. .aj,j +1;α,α+1 = kj ;α (kj +1;α+1 ) Since the matrix (8.1.32) is symmetric, then elements .aj,j +1;α,α+1 = aj,j +1;α,α+1 are also positive .j ∈ N, .α = 0, 1, . . . , j . u n

380

8 Block Jacobi Type Matrices and the Two Dimensional Real Moment Problem

In what follows we will use convenient notations for the elements $a_{j,k}$ of the Jacobi matrix (8.1.14), namely,

$$a_n = a_{n+1,n}\colon H_n \longrightarrow H_{n+1}, \qquad b_n = a_{n,n}\colon H_n \longrightarrow H_n, \qquad c_n = a_{n,n+1}\colon H_{n+1} \longrightarrow H_n, \qquad n \in \mathbb{N}_0. \tag{8.1.24}$$

All the previous investigations are summarized in the following theorem.

Theorem 8.1.5 The bounded self-adjoint operator $\hat A$ of multiplication by $x$ (with a strong cyclic vector) in the space $L^2$, written in the orthonormal basis (8.1.3) of polynomials, has the form of a tri-diagonal block Jacobi type symmetric matrix $J_A = (a_{j,k})_{j,k=0}^{\infty}$, which acts in the space (8.1.6), namely,

$$l_2 = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \qquad H_n = \mathbb{C}^{n+1}, \quad n \in \mathbb{N}_0. \tag{8.1.25}$$

The norms of all operators $a_{j,k}\colon H_k \longrightarrow H_j$ are uniformly bounded with respect to $j, k \in \mathbb{N}_0$. In the notation (8.1.24), this matrix has the form

$$J_A = \begin{bmatrix} b_0 & c_0 & 0 & 0 & 0 & \cdots \\ a_0 & b_1 & c_1 & 0 & 0 & \cdots \\ 0 & a_1 & b_2 & c_2 & 0 & \cdots \\ 0 & 0 & a_2 & b_3 & c_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}; \tag{8.1.26}$$

in scalar form, (8.1.26) displays the same matrix entrywise, with the always positive elements denoted by "$+$" and the possibly zero and non-zero elements marked by "$\ast$".

In (8.1.26), for every $n \in \mathbb{N}_0$, $b_n$ is a $((n+1) \times (n+1))$-matrix, $b_n = (b_{n;\alpha,\beta})_{\alpha,\beta=0}^{n,n}$ ($b_0 = b_{0;0,0}$ is a scalar); $a_n$ is a $((n+2) \times (n+1))$-matrix, $a_n = (a_{n;\alpha,\beta})_{\alpha,\beta=0}^{n+1,n}$; $c_n$ is a $((n+1) \times (n+2))$-matrix, $c_n = (c_{n;\alpha,\beta})_{\alpha,\beta=0}^{n,n+1}$. The matrices $a_n$ and $c_n$ have some elements that are always equal to zero for all $n \in \mathbb{N}$, namely,

$$a_{n;\beta+2,\beta} = a_{n;\beta+3,\beta} = \cdots = a_{n;n+1,\beta} = 0, \qquad \beta = 0, 1, \dots, n-1; \\ c_{n;\alpha,\alpha+2} = c_{n;\alpha,\alpha+3} = \cdots = c_{n;\alpha,n+1} = 0, \qquad \alpha = 0, 1, \dots, n-1. \tag{8.1.27}$$

Some other elements are always positive, namely,

$$a_{n;\alpha+1,\alpha} > 0, \quad c_{n;\alpha,\alpha+1} > 0, \qquad \alpha = 0, 1, \dots, n, \quad \forall n \in \mathbb{N}_0. \tag{8.1.28}$$

Thus, one can say that for every $n \in \mathbb{N}_0$ the upper left corner of the matrix $a_n$ (starting from the third diagonal) and the right corner of the matrix $c_n$ (starting from the third diagonal) always consist of zero elements. All positive elements of (8.1.26) are denoted by "$+$"; possibly zero and non-zero elements are marked with "$\ast$". Therefore, the matrix (8.1.26) in scalar form is multi-diagonal with the indicated structure. The symmetry of the operator, $\hat A^* = \hat A$, gives

$$a_{n;\alpha,\beta} = c_{n;\beta,\alpha}, \qquad \beta = 0, 1, \dots, n, \quad \alpha = 0, 1, \dots, \beta, \beta+1, \quad n \in \mathbb{N}_0.$$

The matrix $J_A$ acts by the rule

$$(J_A f)_n = a_{n-1} f_{n-1} + b_n f_n + c_n f_{n+1}, \qquad n \in \mathbb{N}_0, \quad \forall f = (f_n)_{n=0}^{\infty} \in l_2, \tag{8.1.29}$$

where, as usual, we put $a_{-1} = 0$, $f_{-1} = 0$.

Let us now pass to the study of the matrix $J_B$ associated with the variable $y$. For this we need a lemma similar to Lemma 8.1.1.

Lemma 8.1.6 Let $\hat B$ be the operator of multiplication by $y$ in the space $L^2$, namely,

$$L^2 \ni \varphi(x,y) \longmapsto (\hat B \varphi)(x,y) = y\varphi(x,y) \in L^2.$$

(Obviously, $\hat B$ is bounded and self-adjoint.) The matrix $(b_{j,k})_{j,k=0}^{\infty}$ of the operator $\hat B$ in the basis (8.1.3) (i.e., of $I^{-1}\hat B I$) has a tri-diagonal structure: $b_{j,k} = 0$ for $|j-k| > 1$.

Proof By using (8.1.10) for $e_{n;\gamma} = I^{-1} P_{n;\gamma}(x,y)$, $n \in \mathbb{N}_0$, $\gamma = 0, 1, \dots, n$, we get

$$b_{j,k;\alpha,\beta} = (\hat B e_{k;\beta}, e_{j;\alpha})_{l_2} = \int_{\mathbb{R}^2} y P_{k;\beta}(x,y)\, P_{j;\alpha}(x,y)\, d\rho(x,y), \tag{8.1.30}$$

where $\alpha = 0, 1, \dots, j$, $\beta = 0, 1, \dots, k$, $\forall j, k \in \mathbb{N}_0$. From (8.1.11), we have $y P_{k;\beta}(x,y) \in \mathcal{P}_{k+1;\beta}$. According to (8.1.5), the integral in (8.1.30) is therefore equal to zero for $j > k + 1$ and for each $\alpha = 0, 1, \dots, j$.


On the other hand, the integral in (8.1.30) can be written in the form

$$(b^*)_{j,k;\alpha,\beta} = \int_{\mathbb{R}^2} y P_{k;\beta}(x,y)\, P_{j;\alpha}(x,y)\, d\rho(x,y) = \int_{\mathbb{R}^2} y P_{j;\alpha}(x,y)\, P_{k;\beta}(x,y)\, d\rho(x,y) = b_{k,j;\beta,\alpha}, \tag{8.1.31}$$

where $\alpha = 0, 1, \dots, j$ and $\beta = 0, 1, \dots, k$. From (8.1.11), we have $y P_{j;\alpha}(x,y) \in \mathcal{P}_{j+1;\alpha}$. According to (8.1.5), the integral in (8.1.31) is equal to zero for $k > j + 1$ for each $\alpha = 0, 1, \dots, j$, $\beta = 0, 1, \dots, k$. As a result, the integral in (8.1.30), i.e., the coefficients $b_{j,k;\alpha,\beta}$, $j, k \in \mathbb{N}_0$, are equal to zero for $|j - k| > 1$, $\alpha = 0, 1, \dots, j$, $\beta = 0, 1, \dots, k$. (In the previous considerations it is necessary to take into account that $e_{0;0} = I^{-1} P_{0;0}(x,y)$, $P_{0;0}(x,y) = 1$.) $\square$

Thus, the matrix $(b_{j,k})_{j,k=0}^{\infty}$ of the operator $\hat B$ has the tri-diagonal block structure

$$\begin{bmatrix} b_{0,0} & b_{0,1} & 0 & 0 & \cdots \\ b_{1,0} & b_{1,1} & b_{1,2} & 0 & \cdots \\ 0 & b_{2,1} & b_{2,2} & b_{2,3} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \tag{8.1.32}$$

A further detailed analysis of the expressions (8.1.30) makes it possible to describe the zero and non-zero elements of the matrices $(b_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{j,k}$ in each case $|j-k| \le 1$. We also use the permutation properties of the matrix indices $j, k$ and $\alpha, \beta$.

Lemma 8.1.7 Let $(b_{j,k})_{j,k=0}^{\infty}$ be the operator matrix of the operator of multiplication by $y$ in $L^2$, $b_{j,k}\colon H_k \longrightarrow H_j$; here $b_{j,k} = (b_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{j,k}$ are the matrices of the operators $b_{j,k}$ in the corresponding orthonormal basis. Then, for every $j \in \mathbb{N}_0$, we get

$$\forall \alpha = 0, 1, \dots, j\colon \quad b_{j,j+1;\alpha,\alpha+1} = b_{j,j+1;\alpha,\alpha+2} = \cdots = b_{j,j+1;\alpha,j+1} = 0; \\ \forall \beta = 0, 1, \dots, j\colon \quad b_{j+1,j;\beta+1,\beta} = b_{j+1,j;\beta+2,\beta} = \cdots = b_{j+1,j;j+1,\beta} = 0. \tag{8.1.33}$$

If we choose inside each diagonal $\{x^0 y^n, x^1 y^{n-1}, x^2 y^{n-2}, \dots, x^n y^0\}$ (see Fig. 8.1) another order (preserving the order of the diagonals), then Lemma 8.1.7 is no longer valid, but it is still possible to describe the zero elements of the matrices $(b_{j,k;\alpha,\beta})_{\alpha,\beta=0}^{j,k}$. Such matrices $(b_{j,k})_{j,k=0}^{\infty}$ also have a tri-diagonal block structure with zero elements, but possibly in other places.

8.1 Construction of the Tri-Diagonal Block Jacobi Type Matrices. . .

383

Proof According to (8.1.30) and (8.1.11), for $j \in \mathbb{N}_0$, $\forall \alpha = 0, 1, \dots, j$ and $\forall \beta = 0, 1, \dots, j+1$, we have

$$b_{j,j+1;\alpha,\beta} = \int_{\mathbb{R}^2} y P_{j+1;\beta}(x,y)\, P_{j;\alpha}(x,y)\, d\rho(x,y) = \int_{\mathbb{R}^2} y P_{j;\alpha}(x,y)\, P_{j+1;\beta}(x,y)\, d\rho(x,y) = \bar b_{j+1,j;\beta,\alpha},$$

where $y P_{j;\alpha}(x,y) \in \mathcal{P}_{j+1;\alpha}$. According to (8.1.5), $P_{j+1;\beta}(x,y)$ is orthogonal to $\mathcal{P}_{j+1;\alpha}$ for $\beta > \alpha$ and, hence, the last integral is equal to zero. This gives the first equalities in (8.1.33). Similarly to (8.1.30) and (8.1.11), for $j \in \mathbb{N}_0$, $\forall \alpha = 0, 1, \dots, j+1$ and $\forall \beta = 0, 1, \dots, j$, we get

$$b_{j+1,j;\alpha,\beta} = \int_{\mathbb{R}^2} y P_{j;\beta}(x,y)\, P_{j+1;\alpha}(x,y)\, d\rho(x,y),$$

where $y P_{j;\beta}(x,y) \in \mathcal{P}_{j+1;\beta}$. But according to (8.1.5), $P_{j+1;\alpha}(x,y)$ is orthogonal to $\mathcal{P}_{j+1;\beta}$ if $\alpha > \beta$ and, hence, the last integral is equal to zero. This gives the second equalities in (8.1.33). $\square$

Thus, after these investigations, we can conclude that in (8.1.32), for every $j \in \mathbb{N}$, the right corner of each $((j+1) \times (j+2))$-matrix $b_{j,j+1}$ (starting from the second diagonal) and the left corner of each $((j+2) \times (j+1))$-matrix $b_{j+1,j}$ (starting from the second diagonal) always consist of zero elements. Taking (8.1.14) into account, we can conclude that the symmetric matrix of the operator of multiplication by $y$ is a multi-diagonal ordinary scalar matrix in the usual (standard) basis of the space $l_2$.

Lemma 8.1.8 The elements

$$b_{0,1;0,0},\ b_{1,0;0,0}; \qquad b_{j,j+1;\alpha,\alpha},\ b_{j+1,j;\alpha,\alpha}; \qquad j \in \mathbb{N},\ \alpha = 0, 1, \dots, j \tag{8.1.34}$$

of the matrices $(b_{j,k})_{j,k=0}^{\infty}$ from Lemma 8.1.7 are always positive.

Proof We start with the investigation of $b_{1,0;0,0}$. By using (8.1.12), we denote by $P'_{1;0}(x,y) = y - (y,1)_{L^2}$ the non-normalized vector $P_{1;0}(x,y)$ obtained by the Gram-Schmidt orthogonalization procedure. Hence, we get

$$b_{1,0;0,0} = \int_{\mathbb{R}^2} y P_{0;0}(x,y)\, P_{1;0}(x,y)\, d\rho(x,y) = \|P'_{1;0}(x,y)\|_{L^2}^{-1} \int_{\mathbb{R}^2} y\big(y - (y,1)_{L^2}\big)\, d\rho(x,y) = \|P'_{1;0}(x,y)\|_{L^2}^{-1}\big(\|y\|_{L^2}^2 - |(y,1)_{L^2}|^2\big), \tag{8.1.35}$$


where $P_{0;0}(x,y) = 1$. The last difference is positive (see (8.1.36) below), therefore $b_{1,0;0,0} > 0$. The element $b_{0,1;0,0}$ is also positive since the matrix $B$ is symmetric, i.e., $b_{1,0;0,0} = b_{0,1;0,0}$. The positivity in (8.1.35) follows from the Parseval equality when the function $y \in L^2$ is expanded with respect to the orthonormal basis (8.1.3) in the space $L^2$, namely,

$$|(y,1)_{L^2}|^2 + |(y,P_{1;0}(x,y))_{L^2}|^2 + |(y,P_{1;1}(x,y))_{L^2}|^2 + \cdots = \|y\|_{L^2}^2. \tag{8.1.36}$$

Let us pass to the proof of the positivity of $b_{j+1,j;\alpha,\alpha}$, where $j \in \mathbb{N}$, $\alpha = 0, 1, \dots, j$. From (8.1.12) we have

$$b_{j+1,j;\alpha,\alpha} = \int_{\mathbb{R}^2} y P_{j;\alpha}(x,y)\, P_{j+1;\alpha}(x,y)\, d\rho(x,y). \tag{8.1.37}$$

According to (8.1.3) and (8.1.5), we get

$$P_{j;\alpha}(x,y) = k_{j;\alpha}\, x^{\alpha} y^{j-\alpha} + R_{j;\alpha}(x,y), \tag{8.1.38}$$

where $R_{j;\alpha}(x,y)$ is some polynomial from $\mathcal{P}_{j;\alpha-1}$ if $\alpha > 0$, or from $\mathcal{P}_{j-1;j-1}$ if $\alpha = 0$. Therefore, $y R_{j;\alpha}(x,y)$ is some polynomial from $\mathcal{P}_{j+1;\alpha-1}$ or from $\mathcal{P}_{j;j-1}$ (see (8.1.11) and (8.1.5)). Multiplying (8.1.38) by $y$, we conclude

$$y P_{j;\alpha}(x,y) = k_{j;\alpha}\, x^{\alpha} y^{j-\alpha+1} + y R_{j;\alpha}(x,y), \tag{8.1.39}$$

where $y R_{j;\alpha}(x,y) \in \mathcal{P}_{j+1;\alpha-1}$ or $y R_{j;\alpha}(x,y) \in \mathcal{P}_{j;j-1} \subset \mathcal{P}_{j;j}$. On the other hand, the equality (8.1.38) written for $P_{j+1;\alpha}(x,y)$ gives

$$P_{j+1;\alpha}(x,y) = k_{j+1;\alpha}\, x^{\alpha} y^{j-\alpha+1} + R_{j+1;\alpha}(x,y), \tag{8.1.40}$$

where $R_{j+1;\alpha}(x,y) \in \mathcal{P}_{j+1;\alpha-1}$ or $\mathcal{P}_{j;j}$. We find $x^{\alpha} y^{j-\alpha+1}$ from (8.1.40) and substitute it into (8.1.39). Thus, we get

$$y P_{j;\alpha}(x,y) = \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\big(P_{j+1;\alpha}(x,y) - R_{j+1;\alpha}(x,y)\big) + y R_{j;\alpha}(x,y) = \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\, P_{j+1;\alpha}(x,y) - \frac{k_{j;\alpha}}{k_{j+1;\alpha}}\, R_{j+1;\alpha}(x,y) + y R_{j;\alpha}(x,y), \tag{8.1.41}$$

where both last terms belong to $\mathcal{P}_{j+1;\alpha-1}$ or to $\mathcal{P}_{j;j}$ and are in any case orthogonal to $P_{j+1;\alpha}(x,y)$. Therefore, after the substitution of the expression (8.1.41) into (8.1.37), we get $b_{j+1,j;\alpha,\alpha} = k_{j;\alpha}(k_{j+1;\alpha})^{-1} > 0$. Since the matrix (8.1.32) is symmetric, the elements $b_{j,j+1;\alpha,\alpha} = b_{j+1,j;\alpha,\alpha}$ are also positive, where $j \in \mathbb{N}$, $\alpha = 0, 1, \dots, j$. $\square$


In what follows we will use convenient notations for the elements $b_{j,k}$ of the matrix (8.1.32), namely,

$$u_n = b_{n+1,n}\colon H_n \longrightarrow H_{n+1}, \qquad w_n = b_{n,n}\colon H_n \longrightarrow H_n, \qquad v_n = b_{n,n+1}\colon H_{n+1} \longrightarrow H_n, \qquad n \in \mathbb{N}_0. \tag{8.1.42}$$

We summarize all the previous investigations in the following theorem.

Theorem 8.1.9 The bounded self-adjoint operator $\hat B$ of multiplication by $y$ (with a strong cyclic vector) in the space $L^2$, written in the orthonormal basis (8.1.3) of polynomials, has the form of a tri-diagonal block Jacobi type symmetric matrix $J_B = (b_{j,k})_{j,k=0}^{\infty}$, which acts in the space (8.1.6), namely,

$$l_2 = H_0 \oplus H_1 \oplus H_2 \oplus \cdots, \qquad H_n = \mathbb{C}^{n+1}, \quad n \in \mathbb{N}_0. \tag{8.1.43}$$

The norms of all operators $b_{j,k}\colon H_k \longrightarrow H_j$ are uniformly bounded with respect to $j, k \in \mathbb{N}_0$. In the notation (8.1.42), this matrix has the form

$$J_B = \begin{bmatrix} w_0 & v_0 & 0 & 0 & 0 & \cdots \\ u_0 & w_1 & v_1 & 0 & 0 & \cdots \\ 0 & u_1 & w_2 & v_2 & 0 & \cdots \\ 0 & 0 & u_2 & w_3 & v_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}; \tag{8.1.44}$$

in scalar form, (8.1.44) displays the same matrix entrywise, with the always positive elements denoted by "$+$" and the possibly zero and non-zero elements marked by "$\ast$".

In (8.1.44), for every $n \in \mathbb{N}_0$, $w_n$ is a $((n+1) \times (n+1))$-matrix, $w_n = (w_{n;\alpha,\beta})_{\alpha,\beta=0}^{n,n}$ ($w_0 = w_{0;0,0}$ is a scalar); $u_n$ is a $((n+2) \times (n+1))$-matrix, $u_n = (u_{n;\alpha,\beta})_{\alpha,\beta=0}^{n+1,n}$; $v_n$ is a $((n+1) \times (n+2))$-matrix, $v_n = (v_{n;\alpha,\beta})_{\alpha,\beta=0}^{n,n+1}$. In the matrices $u_n$ and $v_n$, some elements are always equal to zero, namely, for every $n \in \mathbb{N}_0$ we have

$$u_{n;\beta+1,\beta} = u_{n;\beta+2,\beta} = \cdots = u_{n;n+1,\beta} = 0, \qquad \beta = 0, 1, \dots, n; \\ v_{n;\alpha,\alpha+1} = v_{n;\alpha,\alpha+2} = \cdots = v_{n;\alpha,n+1} = 0, \qquad \alpha = 0, 1, \dots, n. \tag{8.1.45}$$

Some other elements are always positive, namely,

$$u_{n;\alpha,\alpha} > 0, \quad v_{n;\alpha,\alpha} > 0, \qquad \alpha = 0, 1, \dots, n, \quad \forall n \in \mathbb{N}_0. \tag{8.1.46}$$

Thus, we can say that for every $n \in \mathbb{N}_0$ each element in the left corner of the matrix $u_n$ (starting from the second diagonal) and each element in the right corner of the matrix $v_n$ (starting from the second diagonal) is a zero element. All positive elements of (8.1.44) are denoted by "$+$"; possibly zero and non-zero elements are marked with "$\ast$". Therefore, the matrix (8.1.44) in scalar form is multi-diagonal with the indicated structure.

The symmetry of the operator, $\hat B^* = \hat B$, gives

$$u_{n;\alpha,\beta} = v_{n;\beta,\alpha}, \qquad \alpha = 0, 1, 2, \dots, n, \quad \beta = \alpha, \dots, n, \quad n \in \mathbb{N}.$$

The matrix $J_B$ acts by the rule

$$(J_B f)_n = u_{n-1} f_{n-1} + w_n f_n + v_n f_{n+1}, \qquad n \in \mathbb{N}_0, \quad \forall f = (f_n)_{n=0}^{\infty} \in l_2, \tag{8.1.47}$$

where, as usual, we put $u_{-1} = 0$, $f_{-1} = 0$.
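The shape bookkeeping in Theorems 8.1.5 and 8.1.9 can be illustrated by a small numerical sketch. The blocks below are random matrices of the stated sizes (our illustrative data, not from the book): the action rules (8.1.29) and (8.1.47) then agree with multiplication by the assembled dense block tri-diagonal matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # number of blocks H_0, ..., H_{N-1}, with dim H_n = n + 1

# Hypothetical blocks with the shapes of Theorems 8.1.5 / 8.1.9:
# diagonal (n+1)x(n+1), lower (n+2)x(n+1), upper (n+1)x(n+2).
b = [rng.standard_normal((n + 1, n + 1)) for n in range(N)]
a = [rng.standard_normal((n + 2, n + 1)) for n in range(N - 1)]
c = [rng.standard_normal((n + 1, n + 2)) for n in range(N - 1)]

f = [rng.standard_normal(n + 1) for n in range(N)]

# Action rule (8.1.29): (J f)_n = a_{n-1} f_{n-1} + b_n f_n + c_n f_{n+1},
# with a_{-1} = 0, and f_N treated as 0 in this finite truncation.
def apply_J(f):
    out = []
    for n in range(N):
        v = b[n] @ f[n]
        if n > 0:
            v = v + a[n - 1] @ f[n - 1]
        if n < N - 1:
            v = v + c[n] @ f[n + 1]
        out.append(v)
    return out

# Cross-check against the assembled dense block tri-diagonal matrix.
dim = sum(n + 1 for n in range(N))  # 1 + 2 + 3 + 4 = 10
J = np.zeros((dim, dim))
off = np.cumsum([0] + [n + 1 for n in range(N)])
for n in range(N):
    J[off[n]:off[n + 1], off[n]:off[n + 1]] = b[n]
    if n < N - 1:
        J[off[n + 1]:off[n + 2], off[n]:off[n + 1]] = a[n]
        J[off[n]:off[n + 1], off[n + 1]:off[n + 2]] = c[n]

assert np.allclose(np.concatenate(apply_J(f)), J @ np.concatenate(f))
print("action rule matches the dense block matrix")
```

The same shapes work verbatim for $J_B$ with $u_n$, $w_n$, $v_n$ in place of $a_n$, $b_n$, $c_n$.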

8.2 Direct and Inverse Spectral Problems for the Tri-Diagonal Block Jacobi Type Matrices Corresponding to the Two-Dimensional Real Moment Problem

As was mentioned above, the main result of the previous section is in fact the solution of an inverse problem for the corresponding direct one appearing in the title of this section. Indeed, one of the main results of this section is the solution of the direct spectral problem, i.e., the restoration of the measure from the given block matrices. By restoration we understand, as usual, the writing of the corresponding Parseval equalities in terms of generalized eigenvectors.


We consider operators in the space $l_2$ of the form (8.1.6). In addition to the space $l_2$, we consider its equipment, i.e., we have a rigging

$$(l_{\mathrm{fin}})' \supset l_2(p^{-1}) \supset l_2 \supset l_2(p) \supset l_{\mathrm{fin}}, \tag{8.2.1}$$

where $l_2(p)$ is the weighted space $l_2$ with a weight $p = (p_n)_{n=0}^{\infty}$, $p_n \ge 1$ (and $p^{-1} = (p_n^{-1})_{n=0}^{\infty}$). In our case $l_2(p)$ is the Hilbert space of sequences $f = (f_n)_{n=0}^{\infty}$, $f_n \in H_n$, with the norm and scalar product

$$\|f\|_{l_2(p)}^2 = \sum_{n=0}^{\infty} \|f_n\|_{H_n}^2\, p_n, \qquad (f,g)_{l_2(p)} = \sum_{n=0}^{\infty} (f_n, g_n)_{H_n}\, p_n. \tag{8.2.2}$$

The space $l_2(p^{-1})$ is defined similarly; recall that $l_{\mathrm{fin}}$ is the space of finite sequences and $(l_{\mathrm{fin}})'$ is the space adjoint to $l_{\mathrm{fin}}$. It is easy to show that the embedding $l_2(p) \hookrightarrow l_2$ is quasi-nuclear if $\sum_{n=0}^{\infty} n p_n^{-1} < \infty$.
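As a sketch of (8.2.2) and of the quasi-nuclearity criterion, take the hypothetical weight $p_n = (n+1)^3$ (our choice, not the book's): then $p_n \ge 1$, the $l_2(p)$-norm dominates the $l_2$-norm, and $\sum_n n p_n^{-1}$ converges.

```python
import numpy as np

# Hypothetical weight p_n = (n + 1)^3 >= 1 (an illustrative choice only).
p = lambda n: (n + 1.0) ** 3

# A finite sequence f = (f_n) with f_n in H_n = C^{n+1}, as in (8.1.6).
rng = np.random.default_rng(1)
f = [rng.standard_normal(n + 1) for n in range(6)]

# Weighted norm (8.2.2) and the plain l_2 norm.
norm2_p = sum(fn @ fn * p(n) for n, fn in enumerate(f))
norm2 = sum(fn @ fn for fn in f)
assert norm2 <= norm2_p  # p_n >= 1: the embedding l_2(p) -> l_2 is contractive

# Partial sum of the quasi-nuclearity criterion sum_n n * p_n^{-1} < infinity.
n = np.arange(1, 10**6)
tail = float(np.sum(n / p(n)))
print(f"partial sum of n/p_n: {tail:.4f}")
```

For this weight the series equals $\zeta(2) - \zeta(3) \approx 0.4429$, so the partial sums stabilize quickly.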

Let $A$ and $B$ be a pair of commuting bounded self-adjoint operators standardly connected with the chain (8.2.1). (About the standard connection see the beginning of Sect. 8.4 of this chapter.) According to the projection spectral theorem, such operators have the representations

$$Af = \int_{\mathbb{R}^2} x\, o(x,y)\, d\sigma(x,y)\, f, \qquad Bf = \int_{\mathbb{R}^2} y\, o(x,y)\, d\sigma(x,y)\, f, \qquad f \in l_2, \tag{8.2.3}$$

where $o(x,y)\colon l_2(p) \longrightarrow l_2(p^{-1})$ is the operator of generalized projection and $d\sigma(x,y)$ is a spectral measure. For every $f \in l_{\mathrm{fin}}$, the projection $o(x,y)f \in l_2(p^{-1})$ is a generalized eigenvector of the operators $A$ and $B$ with the corresponding eigenvalues $x$ and $y$. For all $f, g \in l_{\mathrm{fin}}$, we have the Parseval equality

$$(f,g)_{l_2} = \int_{\mathbb{R}^2} (o(x,y)f, g)_{l_2}\, d\sigma(x,y); \tag{8.2.4}$$

after the extension by continuity, the equality (8.2.4) is valid for all $f, g \in l_2$.

Let us denote by $\pi_n$ the operator of orthogonal projection in $l_2$ onto $H_n$, $n \in \mathbb{N}_0$. Hence, for every $f = (f_n)_{n=0}^{\infty} \in l_2$ we have $f_n = \pi_n f$. This operator acts similarly on the spaces $l_2(p)$ and $l_2(p^{-1})$. After closure, the operator corresponding to the matrix $(o_{j,k}(x,y))_{j,k=0}^{\infty}$ has the representation

$$o_{j,k}(x,y) = \pi_j\, o(x,y)\, \pi_k\colon l_2 \longrightarrow H_j \quad (H_k \longrightarrow H_j). \tag{8.2.5}$$


The Parseval equality (8.2.4) can be rewritten in the form

$$(f,g)_{l_2} = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (o(x,y)\pi_k f, \pi_j g)_{l_2}\, d\sigma(x,y) = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (\pi_j\, o(x,y)\, \pi_k f, g)_{l_2}\, d\sigma(x,y) = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (o_{j,k}(x,y) f_k, g_j)_{l_2}\, d\sigma(x,y), \qquad \forall f, g \in l_2. \tag{8.2.6}$$

Let us now pass to the study of more special bounded operators $A$ and $B$ acting on the space $l_2$. Namely, let them be given by matrices $J_A$ and $J_B$ which have the tri-diagonal block structure of the forms (8.1.26) and (8.1.44); so, these operators $A$ and $B$ are defined by the expressions (8.1.29) and (8.1.47). Recall that the norms of all elements $a_n$, $b_n$, $c_n$ and $u_n$, $w_n$, $v_n$ are uniformly bounded in $n \in \mathbb{N}_0$. For the further research, we assume that the conditions (8.1.27), (8.1.28) and (8.1.45), (8.1.46) are fulfilled and, additionally, that the operators $A$ and $B$ generated by the matrices (8.1.26) and (8.1.44) are bounded, commuting and self-adjoint on the space $l_2$ of (8.1.6).

At the next step we will rewrite the Parseval equality (8.2.6) in terms of generalized eigenvectors of the commuting self-adjoint operators $A$ and $B$. First we prove the following lemma.

Lemma 8.2.1 Let $\varphi(x,y) = (\varphi_n(x,y))_{n=0}^{\infty}$, $\varphi_n(x,y) \in H_n$, $(x,y) \in \mathbb{R}^2$, be a generalized eigenvector from $(l_{\mathrm{fin}})'$ of the operator $A$ with the eigenvalue $x$ and also a generalized eigenvector of $B$ with the eigenvalue $y$. Then $\varphi(x,y)$ is a solution in $(l_{\mathrm{fin}})'$ of the two difference equations (8.1.29) and (8.1.47), namely,

$$(J_A \varphi(x,y))_n = a_{n-1}\varphi_{n-1}(x,y) + b_n \varphi_n(x,y) + c_n \varphi_{n+1}(x,y) = x\varphi_n(x,y), \\ (J_B \varphi(x,y))_n = u_{n-1}\varphi_{n-1}(x,y) + w_n \varphi_n(x,y) + v_n \varphi_{n+1}(x,y) = y\varphi_n(x,y), \tag{8.2.7}$$

with an initial condition $\varphi_0 \in \mathbb{R}$; here $n \in \mathbb{N}_0$ and $\varphi_{-1}(x,y) := 0$. We assert that this solution has the form

$$\varphi_n(x,y) = Q_n(x,y)\varphi_0 = (Q_{n;0}, Q_{n;1}, \dots, Q_{n;n})\varphi_0, \qquad \forall n \in \mathbb{N}. \tag{8.2.8}$$

Here $Q_{n;\alpha}$, $\alpha = 0, 1, \dots, n$, are polynomials in the variables $x$ and $y$, and these polynomials have the form

$$Q_{n;\alpha}(x,y) = l_{n;\alpha}\, y^{n-\alpha} x^{\alpha} + q_{n;\alpha}(x,y), \qquad \alpha = 0, 1, \dots, n, \tag{8.2.9}$$


where $l_{n;\alpha} > 0$ and $q_{n;\alpha}(x,y)$ is the preceding part of the polynomial with respect to the order (8.1.2): it is some linear combination of the monomials $y^j x^k$ with $0 \le j + k \le n - 1$ and, for $\alpha = 1, \dots, n$, of the monomials $y^n x^0, \dots, y^{n-(\alpha-1)} x^{\alpha-1}$ of total degree $n$ (for $\alpha = 0$, only monomials of total degree at most $n - 1$ occur).

Proof For $n = 0$, the system (8.2.7) has the form

$$\begin{aligned} w_0\varphi_0 + v_0\varphi_1 &= y\varphi_0, \\ b_0\varphi_0 + c_0\varphi_1 &= x\varphi_0, \end{aligned} \qquad \text{i.e.,} \qquad \begin{aligned} v_{0;0,0}\varphi_{1;0} &= (y - w_{0;0,0})\varphi_0, \\ c_{0;0,0}\varphi_{1;0} + c_{0;0,1}\varphi_{1;1} &= (x - b_{0;0,0})\varphi_0. \end{aligned} \tag{8.2.10}$$

Here and in what follows we denote

$$\varphi_n(x,y) = (\varphi_{n;0}(x,y), \varphi_{n;1}(x,y), \dots, \varphi_{n;n}(x,y)) \in H_n, \quad \forall n \in \mathbb{N}; \qquad \varphi_0 = \varphi_{0;0}.$$

By using the assumptions (8.1.27), (8.1.28) and (8.1.45), (8.1.46), we rewrite the last two equalities of (8.2.10) in the form

$$A_0 \varphi_1(x,y) = \big((y - w_{0;0,0})\varphi_0,\ (x - b_{0;0,0})\varphi_0\big); \qquad A_0 = \begin{pmatrix} v_{0;0,0} & 0 \\ c_{0;0,0} & c_{0;0,1} \end{pmatrix}, \quad v_{0;0,0} > 0, \quad c_{0;0,1} > 0. \tag{8.2.11}$$

Therefore,

$$\varphi_{1;0}(x,y) = \frac{1}{v_{0;0,0}}(y - w_{0;0,0})\varphi_0 = Q_{1;0}(x,y)\varphi_0, \qquad \varphi_{1;1}(x,y) = \Big(\frac{x - b_{0;0,0}}{c_{0;0,1}} - \frac{c_{0;0,0}}{c_{0;0,1}} \cdot \frac{y - w_{0;0,0}}{v_{0;0,0}}\Big)\varphi_0 = Q_{1;1}(x,y)\varphi_0. \tag{8.2.12}$$

In other words, the solution $\varphi_n(x,y)$ of (8.2.7) for $n = 0$ has the form (8.2.8) and (8.2.9).

Suppose, by induction, that for $n \in \mathbb{N}$ the elements $\varphi_{n-1}(x,y)$ and $\varphi_n(x,y)$ of our generalized eigenvector $\varphi(x,y) = (\varphi_n(x,y))_{n=0}^{\infty}$ have the form (8.2.8) and (8.2.9); we will prove that $\varphi_{n+1}(x,y)$ can also be written in the form (8.2.8) and (8.2.9). Our eigenvector $\varphi(x,y)$ satisfies the system (8.2.7) of two equations. But this system is overdetermined: it consists of $2(n+1)$ scalar equations from which it is necessary to find only $n + 2$ unknowns $\varphi_{n+1;0}, \varphi_{n+1;1}, \dots, \varphi_{n+1;n+1}$, using as initial data the previous $n + 1$ values $\varphi_{n;0}, \varphi_{n;1}, \dots, \varphi_{n;n}$, which are the coordinates of the vector $\varphi_n(x,y)$.


According to Theorems 8.1.9 and 8.1.5, in particular to (8.1.27), (8.1.28) and (8.1.45), (8.1.46), the $((n+1) \times (n+2))$-matrices $v_n$ and $c_n$ act on $\varphi_{n+1} \in H_{n+1}$ as

$$v_n \varphi_{n+1}(x,y) = \begin{bmatrix} v_{n;0,0} & 0 & 0 & \cdots & 0 & 0 \\ v_{n;1,0} & v_{n;1,1} & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ v_{n;n,0} & v_{n;n,1} & v_{n;n,2} & \cdots & v_{n;n,n} & 0 \end{bmatrix} \varphi_{n+1}(x,y), \qquad c_n \varphi_{n+1}(x,y) = \begin{bmatrix} c_{n;0,0} & c_{n;0,1} & 0 & \cdots & 0 & 0 \\ c_{n;1,0} & c_{n;1,1} & c_{n;1,2} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_{n;n,0} & c_{n;n,1} & c_{n;n,2} & \cdots & c_{n;n,n} & c_{n;n,n+1} \end{bmatrix} \varphi_{n+1}(x,y), \tag{8.2.13}$$

where $\varphi_{n+1}(x,y) = (\varphi_{n+1;0}(x,y), \varphi_{n+1;1}(x,y), \dots, \varphi_{n+1;n+1}(x,y))$. Similarly to (8.2.11), we construct from the matrices (8.2.13) the $((n+2) \times (n+2))$-matrix

$$A_n \varphi_{n+1}(x,y) = \begin{bmatrix} v_{n;0,0} & 0 & 0 & \cdots & 0 & 0 \\ c_{n;0,0} & c_{n;0,1} & 0 & \cdots & 0 & 0 \\ c_{n;1,0} & c_{n;1,1} & c_{n;1,2} & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ c_{n;n,0} & c_{n;n,1} & c_{n;n,2} & \cdots & c_{n;n,n} & c_{n;n,n+1} \end{bmatrix} \varphi_{n+1}(x,y). \tag{8.2.14}$$

The matrix in (8.2.14) is invertible because it is lower triangular and its elements on the main diagonal are positive (see (8.1.28) and (8.1.46)). We rewrite the equalities (8.2.7) in the form

$$c_n \varphi_{n+1}(x,y) = x\varphi_n(x,y) - a_{n-1}\varphi_{n-1}(x,y) - b_n \varphi_n(x,y), \qquad v_n \varphi_{n+1}(x,y) = y\varphi_n(x,y) - u_{n-1}\varphi_{n-1}(x,y) - w_n \varphi_n(x,y), \qquad n \in \mathbb{N}. \tag{8.2.15}$$


The first $n + 2$ scalar equations (from the $2(n+1)$ scalar equations (8.2.15)) have the form

$$A_n \varphi_{n+1}(x,y) = \big(yQ_{n;0}(x,y) - (u_{n-1}Q_{n-1}(x,y))_{n;0} - (w_n Q_n(x,y))_{n;0},\ xQ_{n;0}(x,y) - (a_{n-1}Q_{n-1}(x,y))_{n;0} - (b_n Q_n(x,y))_{n;0},\ \dots,\ xQ_{n;n}(x,y) - (a_{n-1}Q_{n-1}(x,y))_{n;n} - (b_n Q_n(x,y))_{n;n}\big)\varphi_0. \tag{8.2.16}$$

The construction of the matrix $A_n$, the form of the vector on the right-hand side of (8.2.16) and (8.2.8), (8.2.9) give

$$\varphi_{n+1;0}(x,y) = Q_{n+1;0}(x,y)\varphi_0 = \frac{1}{v_{n;0,0}}\big(yQ_{n;0}(x,y) - (u_{n-1}Q_{n-1}(x,y))_{n;0} - (w_n Q_n(x,y))_{n;0}\big)\varphi_0 = \frac{1}{v_{n;0,0}}\big(y(l_{n;0}\, y^n + q_{n;0}(x,y)) - (u_{n-1}Q_{n-1}(x,y))_{n;0} - (w_n Q_n(x,y))_{n;0}\big)\varphi_0, \tag{8.2.17}$$

that is, the leading term on the right-hand side of (8.2.17) is equal to $\frac{l_{n;0}}{v_{n;0,0}}\, y^{n+1} x^{0}$, so it has the form (8.2.9). Similar calculations give the same effect for $\varphi_{n+1;1}(x,y), \dots, \varphi_{n+1;n}(x,y), \varphi_{n+1;n+1}(x,y)$. It is necessary to take into account that the diagonal elements $v_{n;0,0}, c_{n;0,1}, c_{n;1,2}, \dots, c_{n;n,n+1}$ of the matrix $A_n$ are positive due to (8.1.46) and (8.1.28). This completes the induction and finishes the proof. $\square$

Note that neither the formulation nor the proof of Lemma 8.2.1 asserts that a solution of the overdetermined system (8.2.7) exists for every initial value $\varphi_0 \in \mathbb{R}$: it is only proved that a generalized eigenvector from $(l_{\mathrm{fin}})'$ of the operators $A$ and $B$ is a solution of (8.2.7) and has the form (8.2.8) and (8.2.9).

We consider $Q_n(x,y)$ with fixed $x$ and $y$ as a linear operator that acts from $H_0$ to $H_n$, that is, $H_0 \ni \varphi_0 \longmapsto Q_n(x,y)\varphi_0 \in H_n$. We also regard $Q_n(x,y)$ as an operator-valued polynomial in the variables $x, y \in \mathbb{R}$; therefore, the adjoint operator has the form

$$Q_n^*(x,y) = (Q_n(x,y))^*\colon H_n \longrightarrow H_0.$$
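The first step of the recursion in the proof of Lemma 8.2.1 can be checked directly. The sketch below uses hypothetical scalar data for the $n = 0$ blocks (the values are ours, chosen only so that $v_{0;0,0} > 0$ and $c_{0;0,1} > 0$) and verifies the formulas (8.2.10)–(8.2.12).

```python
import numpy as np

# Hypothetical n = 0 data: w0, b0 are scalars; v0 = (v_{0;0,0}, 0) by (8.1.45),
# c0 = (c_{0;0,0}, c_{0;0,1}); v_{0;0,0} > 0 and c_{0;0,1} > 0 by (8.1.28)/(8.1.46).
w0, b0 = 0.3, -0.7
v000, c000, c001 = 1.2, 0.4, 2.0

def phi1(x, y, phi0=1.0):
    """Solve the n = 0 system (8.2.10) via the lower triangular matrix A_0 of (8.2.11)."""
    A0 = np.array([[v000, 0.0], [c000, c001]])
    rhs = np.array([(y - w0) * phi0, (x - b0) * phi0])
    return np.linalg.solve(A0, rhs)  # = (Q_{1;0}(x,y), Q_{1;1}(x,y)) * phi0

x, y, phi0 = 0.9, -1.4, 1.0
q10, q11 = phi1(x, y, phi0)

# Check both equations of (8.2.10):
assert np.isclose(w0 * phi0 + v000 * q10, y * phi0)               # y-equation
assert np.isclose(b0 * phi0 + c000 * q10 + c001 * q11, x * phi0)  # x-equation

# Explicit formulas (8.2.12):
assert np.isclose(q10, (y - w0) / v000 * phi0)
assert np.isclose(q11, ((x - b0) / c001 - c000 / c001 * (y - w0) / v000) * phi0)
print("n = 0 step of the recursion verified")
```

The same triangular solve with the matrix $A_n$ of (8.2.14) carries the induction to every $n$.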


By using these polynomials $Q_n(x,y)$, we construct a representation for $o_{j,k}(x,y)$.

Lemma 8.2.2 The operator $o_{j,k}(x,y)$, $\forall (x,y) \in \mathbb{R}^2$, has the representation

$$o_{j,k}(x,y) = Q_j(x,y)\, o_{0,0}(x,y)\, Q_k^*(x,y)\colon H_k \longrightarrow H_j, \qquad j, k \in \mathbb{N}_0, \tag{8.2.18}$$

where $o_{0,0}(x,y) \ge 0$ is a scalar.

Proof For a fixed $k \in \mathbb{N}_0$, the vector $\varphi = \varphi(x,y) = (\varphi_j(x,y))_{j=0}^{\infty}$, where

$$\varphi_j(x,y) = o_{j,k}(x,y) = \pi_j\, o(x,y)\, \pi_k \in H_j, \qquad (x,y) \in \mathbb{R}^2, \tag{8.2.19}$$

is a generalized solution in $(l_{\mathrm{fin}})'$ of the equations

$$J_A \varphi(x,y) = x\varphi(x,y), \qquad J_B \varphi(x,y) = y\varphi(x,y),$$

since $o(x,y)$ is a projector onto the generalized eigenvectors of the operators $A$ and $B$ with the corresponding generalized eigenvalues $(x,y)$. Hence, $\varphi = \varphi(x,y) \in l_2(p^{-1})$ exists as a usual solution of the equations $J_A \varphi = x\varphi$, $J_B \varphi = y\varphi$ with the initial condition $\varphi_0 = \pi_0\, o(x,y)\, \pi_k \in H_0$. By using Lemma 8.2.1 and according to (8.2.8), we get

$$o_{j,k}(x,y) = Q_j(x,y)\big(o_{0,k}(x,y)\big), \qquad j \in \mathbb{N}_0. \tag{8.2.20}$$

The operator $o(x,y)\colon l_2(p) \longrightarrow l_2(p^{-1})$ is formally self-adjoint on $l_2$, being the derivative of the resolution of the identity of the operator $A$ ($B$) on $l_2$ with respect to the spectral measure. Hence, according to (8.2.5), we get

$$(o_{j,k}(x,y))^* = (\pi_j\, o(x,y)\, \pi_k)^* = \pi_k\, o(x,y)\, \pi_j = o_{k,j}(x,y), \qquad j, k \in \mathbb{N}_0. \tag{8.2.21}$$

For a fixed $j \in \mathbb{N}_0$, it follows from (8.2.21) and the previous considerations that the vector

$$\varphi = \varphi(x,y) = (\varphi_k(x,y))_{k=0}^{\infty}, \qquad \varphi_k(x,y) = o_{k,j}(x,y) = (o_{j,k}(x,y))^*$$

is a usual solution of the equations $J_A \varphi = x\varphi$ and $J_B \varphi = y\varphi$ with the initial condition $\varphi_0 = o_{0,j}(x,y) = (o_{j,0}(x,y))^*$. Again using Lemma 8.2.1, we obtain a representation analogous to (8.2.20), namely,

$$o_{k,j}(x,y) = Q_k(x,y)\big(o_{0,j}(x,y)\big), \qquad k \in \mathbb{N}_0. \tag{8.2.22}$$


Taking into account (8.2.21) and (8.2.22), we get

$$o_{0,k}(x,y) = (o_{k,0}(x,y))^* = \big(Q_k(x,y)\, o_{0,0}(x,y)\big)^* = o_{0,0}(x,y)\,(Q_k(x,y))^*, \qquad k \in \mathbb{N}_0 \tag{8.2.23}$$

(here we used $o_{0,0}(x,y) \ge 0$; this inequality follows from (8.2.4) and (8.2.5)). By substituting (8.2.23) into (8.2.20), we obtain (8.2.18). $\square$

Now it is possible to rewrite the Parseval equality (8.2.6) in a more concrete form. To this end, we substitute the expression (8.2.18) for $o_{j,k}(x,y)$ into (8.2.6) and get

$$(f,g)_{l_2} = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (o_{j,k}(x,y)f_k, g_j)_{l_2}\, d\sigma(x,y) = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (Q_j(x,y)\, o_{0,0}(x,y)\, Q_k^*(x,y)f_k, g_j)_{l_2}\, d\sigma(x,y) = \sum_{j,k=0}^{\infty} \int_{\mathbb{R}^2} (Q_k^*(x,y)f_k,\ Q_j^*(x,y)g_j)_{l_2}\, d\rho(x,y) = \int_{\mathbb{R}^2} \Big(\sum_{k=0}^{\infty} Q_k^*(x,y)f_k\Big)\Big(\sum_{j=0}^{\infty} Q_j^*(x,y)g_j\Big)\, d\rho(x,y), \\ d\rho(x,y) = o_{0,0}(x,y)\, d\sigma(x,y), \qquad \forall f, g \in l_{\mathrm{fin}}. \tag{8.2.24}$$

Denote the Fourier transform “…

…$> 0$, then the first equality of (9.3.3) gives $\varphi_{1,1}(\lambda) = \bar{\alpha}^{-1}\big((\lambda - \beta_0)c_0 - \alpha_0 c_1\big)$. By using (9.3.3) for $n = 1, 2, \dots$, one can find step by step the solution $\varphi(\lambda) = (\varphi_n(\lambda))_{n=0}^{\infty}$ of (9.3.3).

432

9 Applications of the Spectral Theory of Jacobi Matrices and Their. . .

Denote by $\theta^{(\alpha)}(\lambda) = (\theta^{(\alpha)}_n)_{n=0}^{\infty} = (\theta^{(\alpha)}_{n,\nu_n}(\lambda))_{n=0,\,\nu_n=0,1}^{\infty}$, $\alpha = 0, 1$, the two solutions of Eq. (9.3.3) with the initial data

$$\theta^{(0)}_{0,0}(\lambda) = 1, \quad \theta^{(0)}_{1,0}(\lambda) = 0; \qquad \theta^{(1)}_{0,0}(\lambda) = 0, \quad \theta^{(1)}_{1,0}(\lambda) = 1. \tag{9.3.4}$$

Equation (9.3.3) is a linear difference equation of second order with a fixed $\lambda$; therefore, a solution is uniquely determined by its values at the two points $(0,0)$ and $(1,0)$ of the domain of the argument $(n,0), (n,1)$, $n \in \mathbb{N}$, $0 = (0,0)$. Each solution $\varphi(\lambda) = (\varphi_n(\lambda))_{n=0}^{\infty}$ of (9.3.3) can be obtained as a linear combination of the fundamental system of solutions (9.3.4). For example, the value written above for $\varphi_{1,1}(\lambda)$ is equal to $\varphi_{0,0}(\lambda)\theta^{(0)}(\lambda) + \varphi_{1,0}(\lambda)\theta^{(1)}(\lambda)$ at the point $(1,1)$. By using the solutions $\theta^{(0)}(\lambda)$ and $\theta^{(1)}(\lambda)$, we can write

$$\varphi(\lambda) = \varphi_{0,0}(\lambda)\theta^{(0)}(\lambda) + \varphi_{1,0}(\lambda)\theta^{(1)}(\lambda), \qquad \forall \lambda \in \mathbb{R}, \tag{9.3.5}$$

i.e.,

$$\varphi_{n,\nu_n}(\lambda) = \varphi_{0,0}(\lambda)\theta^{(0)}_{n,\nu_n}(\lambda) + \varphi_{1,0}(\lambda)\theta^{(1)}_{n,\nu_n}(\lambda), \qquad n \in \mathbb{N},\ \nu_n \in \{0,1\}; \quad n = 0,\ \nu_0 = 0.$$
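Equation (9.3.3) itself runs over the double index $(n, \nu_n)$; the fundamental-system argument, however, is the same as for a classical scalar three-term recursion, which the following sketch illustrates (the coefficients $a_n$, $b_n$ and the value $\lambda$ are hypothetical, chosen only with $a_n > 0$).

```python
import numpy as np

# Scalar analogue of the fundamental-system argument: for a classical Jacobi
# three-term recursion a_{n-1} u_{n-1} + b_n u_n + a_n u_{n+1} = lam * u_n,
# every solution is determined by (u_0, u_1), hence it is the combination
# u = u_0 * theta0 + u_1 * theta1 of the solutions with data (1,0) and (0,1).
rng = np.random.default_rng(2)
N = 12
a = rng.uniform(0.5, 1.5, N)   # hypothetical off-diagonal entries, a_n > 0
b = rng.uniform(-1.0, 1.0, N)  # hypothetical diagonal entries
lam = 0.37

def solve(u0, u1):
    u = [u0, u1]
    for n in range(1, N - 1):
        u.append(((lam - b[n]) * u[n] - a[n - 1] * u[n - 1]) / a[n])
    return np.array(u)

theta0 = solve(1.0, 0.0)  # analogue of theta^(0) in (9.3.4)
theta1 = solve(0.0, 1.0)  # analogue of theta^(1) in (9.3.4)
u = solve(2.0, -3.0)      # a solution with arbitrary initial data

assert np.allclose(u, 2.0 * theta0 - 3.0 * theta1)
print("every solution is a combination of the two fundamental solutions")
```

Because the recursion is linear in the initial data, the identity holds exactly, which is precisely the content of (9.3.5).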

It is important for us to obtain a representation of the type (9.3.5) for the operator matrix $o(\lambda)$, which is now regarded as an operator $l_{\mathrm{fin}} \to l$. First, we note that each linear operator $A\colon l_{\mathrm{fin}} \to l$ can be represented by a block matrix of the type (9.2.3) with scalar elements $A_{j,\nu_j;k,\nu_k}$, where $j, k \in \mathbb{N}$ and the second pair of indices $\nu_j, \nu_k$ takes the values $0$ or $1$; for $j = 0$ (respectively, $k = 0$) only the index $\nu = 0$ is used. We denote matrices of this type by

$$(A_{j,\nu_j;k,\nu_k})_{j,k=0;\,\nu_j,\nu_k=0,1}^{\infty}; \tag{9.3.6}$$

for the solutions of (9.3.3), as mentioned, a similar rule is also used (if $n = 0$, then $\nu_0 = 0$). Therefore, a matrix of the type (9.3.6) for our operator $o(\lambda)\colon l_{\mathrm{fin}} \to l$ has the form

$$O(\lambda) = (O_{j,\nu_j;k,\nu_k}(\lambda))_{j,k=0;\,\nu_j,\nu_k=0,1}^{\infty}, \qquad O_{j,\nu_j;k,\nu_k}(\lambda) = (o(\lambda)\delta_{k,\nu_k}, \delta_{j,\nu_j})_{l_2}, \qquad j, k \in \mathbb{N}_0,\ \nu_j, \nu_k \in \{0,1\}. \tag{9.3.7}$$

It is essential for us to obtain a representation of the type (9.3.5) for the elements of the matrix (9.3.7).

9.3 The Spectral Theory of Corresponding to Toda Chains Block Jacobi Type. . .

433

Lemma 9.3.1 The elements of the matrix $O(\lambda)$ in (9.3.7) have the representation

$$O_{j,\nu_j;k,\nu_k}(\lambda) = O_{0,0;0,0}(\lambda)\,\theta^{(0)}_{j,\nu_j}(\lambda)\theta^{(0)}_{k,\nu_k}(\lambda) + O_{1,0;0,0}(\lambda)\,\theta^{(0)}_{j,\nu_j}(\lambda)\theta^{(1)}_{k,\nu_k}(\lambda) + O_{0,0;1,0}(\lambda)\,\theta^{(1)}_{j,\nu_j}(\lambda)\theta^{(0)}_{k,\nu_k}(\lambda) + O_{1,0;1,0}(\lambda)\,\theta^{(1)}_{j,\nu_j}(\lambda)\theta^{(1)}_{k,\nu_k}(\lambda), \qquad j, k \in \mathbb{N}_0,\ \nu_j, \nu_k = 0, 1;\ \lambda \in \mathbb{R}. \tag{9.3.8}$$

Proof Let us first consider the case $j, k \in \mathbb{N}$. For fixed $k, \nu_k$, the sequence

$$O_{j,\nu_j;k,\nu_k}(\lambda) = (o(\lambda)\delta_{k,\nu_k}, \delta_{j,\nu_j})_{l_2} = \big(o(\lambda)\delta_{k,\nu_k}\big)_{j,\nu_j}$$

is a vector from $l$ which is a generalized eigenvector of the operator $J$ with the eigenvalue $\lambda$. Thus, it is a solution of the difference equation (9.3.3) with the initial data $O_{0,0;k,\nu_k}(\lambda)$ and $O_{1,0;k,\nu_k}(\lambda)$ and, according to (9.3.5), one can write

$$O_{j,\nu_j;k,\nu_k}(\lambda) = O_{0,0;k,\nu_k}(\lambda)\,\theta^{(0)}_{j,\nu_j}(\lambda) + O_{1,0;k,\nu_k}(\lambda)\,\theta^{(1)}_{j,\nu_j}(\lambda), \qquad j \in \mathbb{N},\ \nu_j = 0, 1. \tag{9.3.9}$$

In general, the generalized projection operator $o(\lambda)$ is formally Hermitian; therefore, our matrix (9.3.7) is Hermitian. Since the operator $J$ is real, $o(\lambda)$ is also real, so its matrix (9.3.7) is real. Thus, as a result, the matrix (9.3.7) is real and symmetric. Applying this fact to (9.3.9), we get

$$O_{0,0;k,\nu_k}(\lambda) = O_{k,\nu_k;0,0}(\lambda) = O_{0,0;0,0}(\lambda)\,\theta^{(0)}_{k,\nu_k}(\lambda) + O_{1,0;0,0}(\lambda)\,\theta^{(1)}_{k,\nu_k}(\lambda), \\ O_{1,0;k,\nu_k}(\lambda) = O_{k,\nu_k;1,0}(\lambda) = O_{0,0;1,0}(\lambda)\,\theta^{(0)}_{k,\nu_k}(\lambda) + O_{1,0;1,0}(\lambda)\,\theta^{(1)}_{k,\nu_k}(\lambda).$$

Substituting these expressions into (9.3.9), we get (9.3.8). Let $j = 0$, $k \in \mathbb{N}$. According to our rule, for $j = 0$ we use only the index $\nu_0 = 0$; thus, the equality (9.3.9) is fulfilled in this case as well. The situation is similar in the case $k = 0$, $j \in \mathbb{N}$. To obtain the complete proof, it remains to repeat the argument in these boundary cases. $\square$

Let us return to the first equality in (9.3.2). For $f, g \in l_{\mathrm{fin}}$ and $\Delta = \mathbb{R}$, using the matrices (9.3.7), we have

$$(f,g)_{l_2} = \int_{\mathbb{R}} (o(\lambda)f, g)_{l_2}\, d\rho_b(\lambda) = \int_{\mathbb{R}} \Big[ \sum_{j,k=0,\ \nu_j,\nu_k=0,1}^{\infty} O_{j,\nu_j;k,\nu_k}(\lambda)\, f_{k,\nu_k}\, \overline{g_{j,\nu_j}} \Big]\, d\rho_b(\lambda). \tag{9.3.10}$$


Let us rewrite this equality in a convenient form. To do this, we introduce the Fourier transform (on the generalized eigenvectors of the operator $J$): for $f = (f_n)_{n=0}^{\infty} \in l_{\mathrm{fin}}$, we put