Orthogonal Polynomials: Current Trends and Applications: Proceedings of the 7th EIBPOA Conference (SEMA SIMAI Springer Series, 22) 3030561895, 9783030561895

The present volume contains the Proceedings of the Seventh Iberoamerican Workshop in Orthogonal Polynomials and Applications.


English Pages 335 [330] Year 2021


Table of contents :
Preface
Contents
Riemann–Hilbert Problem and Matrix Biorthogonal Polynomials
1 Introduction
2 Riemann–Hilbert Problem for Matrix Biorthogonal Polynomials
2.1 Matrix Biorthogonal Polynomials
2.2 Reductions: From Biorthogonality to Orthogonality
2.3 The Riemann–Hilbert Problem
2.4 Three Term Recurrence Relation
3 Matrix Weights Supported on a Curve γ on the Complex Plane that Connects the Point 0 to the Point ∞: Laguerre Weights
3.1 Matrix Weights Supported on a Curve γ with One Finite End Point: W(z) = z^A H(z)
3.2 Matrix Weights Supported on a Curve γ with One Finite End Point: W(z) = z^α H(z) G(z) z^B
References
The Symmetrization Problem for Multiple Orthogonal Polynomials
1 Introduction
2 Definitions and Matrix Interpretation of Multiple Orthogonality
3 Representation of the Stieltjes and Weyl Functions
4 The Symmetrization Problem for Multiple OP
5 Matrix Multiple Orthogonality
6 Example with Multiple Hermite Polynomials
6.1 The Matrix Type II Multiple Orthogonal Polynomials
6.2 The Symmetric Type II Multiple Orthogonal Polynomial Sequence
References
Darboux Transformations for Orthogonal Polynomials on the Real Line and on the Unit Circle
1 Introduction
2 Orthogonal Polynomials and Matrix Representations
2.1 OPRL and Jacobi Matrices
2.2 OPUC and Hessenberg Matrices
2.3 OLPUC and CMV Matrices
3 The Darboux Transformations
3.1 Darboux Transformations of Self-Adjoint Operators
3.2 Darboux Transformations of Unitary Operators
4 Darboux Transformations of Jacobi Matrices
4.1 Inverse Darboux Transformations of Jacobi Matrices
5 Darboux Transformations of CMV Matrices
5.1 Inverse Darboux Transformations of CMV Matrices
References
Special Function Solutions of Painlevé Equations: Theory, Asymptotics and Applications
1 Painlevé Equations
1.1 Hamiltonian Formulation and Tau Functions
1.2 Bäcklund Transformations and Toda Equation
1.3 Special Function Solutions of Painlevé Equations
2 Motivation 1. Orthogonal Polynomials
2.1 General Theory
2.2 Connection with Painlevé Equations
2.3 The Riemann–Hilbert Formulation
3 Motivation 2: Random Matrix Theory
4 Asymptotic Analysis
4.1 Asymptotics as n→∞
4.2 Asymptotics as z→∞
5 Conclusions and Related Problems
References
Discrete Semiclassical Orthogonal Polynomials of Class 2
1 Introduction
2 Basic Background on Discrete Semiclassical Linear Functionals
3 Modification of Functionals
3.1 Rational Spectral Transformations
3.1.1 Uvarov Transformations
3.1.2 Christoffel Transformations
3.1.3 Geronimus Transformations
3.2 Truncated Linear Functionals
3.3 Symmetrized Functionals
3.4 Summary
4 Semiclassical Polynomials of Class 0 (Classical Polynomials)
4.1 (0,0): Charlier Polynomials
4.2 (1,0): Meixner Polynomials
4.3 (1,0;N): Krawtchouk Polynomials
4.3.1 Symmetrized Charlier Polynomials
4.4 (2,1;N,1): Hahn Polynomials
4.4.1 Symmetrized Meixner Polynomials
4.4.2 Symmetrized Generalized Charlier Polynomials
5 Semiclassical Polynomials of Class 1
5.1 (0,1): Generalized Charlier Polynomials
5.2 (1,1): Generalized Meixner Polynomials
5.2.1 Reduced-Uvarov Charlier Polynomials
5.2.2 Christoffel Charlier Polynomials
5.2.3 Geronimus Charlier Polynomials
5.2.4 Truncated Charlier Polynomials
5.3 (2,0;N): Generalized Krawtchouk Polynomials
5.4 (2,1): Generalized Hahn Polynomials of Type I
5.4.1 Reduced-Uvarov Meixner Polynomials
5.4.2 Christoffel Meixner Polynomials
5.4.3 Geronimus Meixner Polynomials
5.4.4 Truncated Meixner Polynomials
5.4.5 Symmetrized Generalized Krawtchouk Polynomials
5.5 (3,2;N,1): Generalized Hahn Polynomials of Type II
5.5.1 Reduced-Uvarov Hahn Polynomials
5.5.2 Christoffel Hahn Polynomials
5.5.3 Geronimus Hahn Polynomials
5.5.4 Symmetrized Hahn Polynomials
5.5.5 Symmetrized Polynomials of Type (3,0;N)
6 Semiclassical Polynomials of Class 2
6.1 Polynomials of Type (0,2)
6.2 Polynomials of Type (1,2)
6.2.1 Reduced-Uvarov Generalized Charlier Polynomials
6.2.2 Christoffel Generalized Charlier Polynomials
6.2.3 Geronimus Generalized Charlier Polynomials
6.2.4 Truncated Generalized Charlier Polynomials
6.3 Polynomials of Type (2,2)
6.3.1 Uvarov Charlier Polynomials
6.3.2 Reduced-Uvarov Generalized Meixner Polynomials
6.3.3 Christoffel Generalized Meixner Polynomials
6.3.4 Geronimus Generalized Meixner Polynomials
6.3.5 Truncated Generalized Meixner Polynomials
6.4 Polynomials of Type (3,0;N)
6.5 Polynomials of Type (3,1;N)
6.5.1 Reduced-Uvarov Generalized Krawtchouk Polynomials
6.5.2 Christoffel Generalized Krawtchouk Polynomials
6.5.3 Geronimus Generalized Krawtchouk Polynomials
6.6 Polynomials of Type (3,2)
6.6.1 Uvarov Meixner Polynomials
6.6.2 Symmetrized Generalized Meixner Polynomials
6.6.3 Reduced-Uvarov Generalized Hahn Polynomials of Type I
6.6.4 Christoffel Generalized Hahn Polynomials of Type I
6.6.5 Geronimus Generalized Hahn Polynomials of Type I
6.6.6 Truncated Generalized Hahn Polynomials of Type I
6.6.7 Symmetrized Polynomials of Type (0,2)
6.7 Polynomials of Type (4,3;N,1)
6.7.1 Uvarov Hahn Polynomials
6.7.2 Symmetrized Generalized Hahn Polynomials of Type I
6.7.3 Reduced-Uvarov Generalized Hahn Polynomials of Type II
6.7.4 Christoffel Generalized Hahn Polynomials of Type II
6.7.5 Geronimus Generalized Hahn Polynomials of Type II
6.7.6 Symmetrized Polynomials of Type (1,2)
7 Conclusions
References
Stable Equilibria for the Roots of the Symmetric Continuous Hahn and Wilson Polynomials
1 Introduction
2 Symmetric Continuous Hahn Polynomials
2.1 Preliminaries
2.2 Morse Function
2.3 Gradient Flow
2.4 Proof of Theorem 1
2.5 Numerical Samples
3 Wilson Polynomials
3.1 Preliminaries
3.2 Morse Function
3.3 Gradient Flow
3.4 Proof of Theorem 2
3.5 Numerical Samples
4 Symmetry Reduction
References
Convergent Non Complete Interpolatory Quadrature Rules
1 Introduction
2 Some Explicit Convergent Schemes of Nodes
3 Connection with Orthogonal Polynomials
4 Asymptotic Analysis
5 Proof of Theorem 1
References
Asymptotic Computation of Classical Orthogonal Polynomials
1 Introduction
2 Laguerre Polynomials
3 Jacobi Polynomials
3.1 An Expansion in Terms of Elementary Functions
3.2 An Expansion in Terms of Bessel Functions
3.3 An Expansion for Large β
3.4 An Expansion for Large α and β
3.5 Computational Performance
Appendix
References
An Introduction to Multiple Orthogonal Polynomials and Hermite-Padé Approximation
1 Introduction
1.1 Some Historical Background
1.2 Markov Systems and Orthogonality
1.2.1 Angelesco Systems
1.2.2 Nikishin Systems
2 On the Perfectness of Nikishin Systems
3 On the Interlacing Property of Zeros
3.1 Interlacing for Type I
3.2 Interlacing for Type II
4 Weak Asymptotic
4.1 Preliminaries from Potential Theory
4.2 Weak Asymptotic Behavior for Type II
4.3 Application to Hermite-Padé Approximation
5 Ratio Asymptotic
5.1 Preliminaries from Riemann Surfaces and Boundary Value Problems
References
Revisiting Biorthogonal Polynomials: An LU Factorization Discussion
1 Introduction
2 LU Factorization and Gram Matrix
2.1 LDU Factorization
2.2 Schur Complements
2.3 Bilinear Forms, Gram Matrices and LU Factorizations
3 Orthogonal Polynomials
3.1 Quasi-Determinants
3.1.1 The Easiest Quasi-Determinant: A Schur Complement
3.2 Second Kind Functions and LU Factorizations
3.3 Spectral Matrices
3.4 Christoffel–Darboux Kernels
4 Standard Orthogonality: Hankel Reduction
4.1 Recursion Relations
4.2 Heine Formulas
4.3 Gauss Quadrature Formula
4.4 Christoffel–Darboux Formula
5 Very Classical Orthogonal Polynomials: Hermite, Laguerre and Jacobi Polynomials
6 Christoffel and Geronimus Transformations
6.1 Some History
6.2 Christoffel and Geronimus Transformations
6.3 Christoffel Perturbations
6.4 Geronimus Transformations
References
Infinite Matrices in the Theory of Orthogonal Polynomials
1 Introduction
2 The Algebra of Infinite Lower Hessenberg Matrices
3 Polynomial Sequences and Their Matrix Representation
4 Orthogonal Polynomial Sequences
5 Orthogonal Polynomial Sequences That Satisfy a Generalized First-Order Difference Equation
6 Hypergeometric Orthogonal Polynomial Sequences
7 A Family of q-Orthogonal Polynomial Sequences That Contains All the Families in the q-Askey Scheme
8 Final Remarks
References

SEMA SIMAI Springer series  22

Francisco Marcellán, Edmundo J. Huertas (Eds.)

Orthogonal Polynomials: Current Trends and Applications Proceedings of the 7th EIBPOA Conference

SEMA SIMAI Springer Series Volume 22

Editors-in-Chief
Luca Formaggia, MOX-Department of Mathematics, Politecnico di Milano, Milano, Italy
Pablo Pedregal, ETSI Industriales, University of Castilla-La Mancha, Ciudad Real, Spain

Series Editors
Mats G. Larson, Department of Mathematics, Umeå University, Umeå, Sweden
Tere Martínez-Seara Alonso, Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
Carlos Parés, Facultad de Ciencias, Universidad de Málaga, Málaga, Spain
Lorenzo Pareschi, Dipartimento di Matematica e Informatica, Università degli Studi di Ferrara, Ferrara, Italy
Andrea Tosin, Dipartimento di Scienze Matematiche “G. L. Lagrange”, Politecnico di Torino, Torino, Italy
Elena Vázquez-Cendón, Departamento de Matemática Aplicada, Universidade de Santiago de Compostela, A Coruña, Spain
Jorge P. Zubelli, Instituto de Matematica Pura e Aplicada, Rio de Janeiro, Brazil
Paolo Zunino, Dipartimento di Matematica, Politecnico di Milano, Milano, Italy

As of 2013, the SIMAI Springer Series opened to SEMA in order to publish a joint series of advanced textbooks, research-level monographs and collected works that focus on applications of mathematics to social and industrial problems, including biology, medicine, engineering, environment and finance. Mathematical and numerical modeling plays a crucial role in the solution of the complex and interrelated problems faced nowadays, not only by researchers operating in the field of basic sciences but also in more directly applied and industrial sectors. This series is meant to host selected contributions focusing on the relevance of mathematics in real-life applications and to provide useful reference material to students and academic and industrial researchers at an international level. Interdisciplinary contributions, showing fruitful collaboration of mathematicians with researchers of other fields to address complex applications, are welcomed in this series. THE SERIES IS INDEXED IN SCOPUS

More information about this series at http://www.springer.com/series/10532

Francisco Marcellán • Edmundo J. Huertas Editors

Orthogonal Polynomials: Current Trends and Applications Proceedings of the 7th EIBPOA Conference

Editors Francisco Marcellán Departamento de Matemáticas Universidad Carlos III de Madrid Leganés, Spain

Edmundo J. Huertas Departamento de Física y Matemáticas Universidad de Alcalá Alcalá de Henares Madrid, Spain

ISSN 2199-3041    ISSN 2199-305X (electronic)
SEMA SIMAI Springer Series
ISBN 978-3-030-56189-5    ISBN 978-3-030-56190-1 (eBook)
https://doi.org/10.1007/978-3-030-56190-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The present volume contains the Proceedings of the Seventh Iberoamerican Workshop in Orthogonal Polynomials and Applications (EIBPOA, which stands for Encuentros Iberoamericanos de Polinomios Ortogonales y Aplicaciones, in Spanish), held at the Universidad Carlos III de Madrid, Leganés, Spain, from July 3 to July 6, 2018. The conference, as well as this volume, was warmly dedicated to the celebration of the 70th birthday of Professor Guillermo López Lagomasino, professor of Applied Mathematics at the Universidad Carlos III de Madrid. He is one of the most relevant and active mathematicians of the Spanish school of orthogonal polynomials. His contributions to orthogonal polynomials in different frameworks (the real line, the unit circle, multiple orthogonality, and Sobolev orthogonality), as well as in approximation theory (Padé and Hermite–Padé rational approximants), have been recognized worldwide. His leadership in the above fields and his commitment to young researchers are a testament to his scientific excellence. This is the seventh workshop of a series that began in 2011 at the National University of Colombia in Bogotá, organized by Professor Herbert Dueñas Ruíz, and continued in 2012 at the Universidad de Colima, México, organized by Professor Luis E. Garza Gaona; up to this latest edition, it has also been held in São José do Rio Preto, Brazil (Universidade Estadual Paulista UNESP, 2013), Bogotá, Colombia (Universidad Nacional de Colombia, 2014), México D.F. (Universidad Nacional Autónoma de México, 2015), and Uberaba, Brazil (Universidade Federal do Triângulo Mineiro UFTM, 2017). These meetings on orthogonal polynomials mainly aim to encourage research in the fields of approximation theory, special functions, orthogonal polynomials and their applications among graduate students as well as young researchers from Latin America, Spain, and Portugal.
To support this idea, the core of these workshops consists of two academic courses on orthogonal polynomial theory in the morning sessions, together with plenary talks and short communications in the evening sessions. Both talks and courses are taught at postgraduate level and have a markedly international character, since professors and students come from all around the world and, surprisingly enough, not only from the Latin American region. The EIBPOA logo reflects the spirit of this event: the Latin American region is united through mathematics, specifically by the graphs of the first seven Jacobi orthogonal polynomials. On this occasion, we were very happy to have the two EIBPOA courses taught by Professor Guillermo López Lagomasino (UC3M) and Professor Manuel Mañas-Baena (UCM). We had no fewer than ten distinguished plenary speakers: María José Cantero (Universidad de Zaragoza, España), Alfredo Deaño (University of Kent, United Kingdom), Diego Dominici (State University of New York at New Paltz, USA), Ulises Fidalgo (Case Western Reserve University, USA), Ana Foulquié-Moreno (Universidade de Aveiro, Portugal), Judit Mínguez Ceniceros (Universidad de La Rioja, España), Maria das Neves Rebocho (Universidade da Beira Interior, Portugal), Javier Segura (Universidad de Cantabria, España), Jan Felipe van Diejen (Universidad de Talca, Chile), and Luis Verde Star (Universidad Autónoma Metropolitana, México). Their very valuable contributions dealt with highly topical issues in the theory of orthogonal polynomials: the role of Darboux transformations, Painlevé equations, incomplete interpolatory and Gaussian quadrature rules, new systems of discrete orthogonal polynomial families, Fourier series, Laguerre–Hahn and q-Hahn families of the q-Askey scheme, and new insights into the Riemann–Hilbert problem for orthogonal polynomials. The event was rounded off with eleven short communications and a poster contest whose winners were two young researchers: Fátima Lizarte López (Universidad de Granada) and Juan Francisco Mañas Mañas (Universidad de Almería).
We would like to thank all the members of the organizing and local committees for the fine organization of this 7th EIBPOA, as well as the members of the Scientific Committee, who helped us to put together an attractive and genuine conference of the highest scientific quality. We also express our gratitude to all the participants of the workshop, to the contributors of this volume, and to the editors of the SEMA/SIMAI series for their support in the production of these proceedings. In this edition, the workshop was co-organized by the Universidad Politécnica de Madrid (UPM) and the Universidad Carlos III de Madrid (UC3M). The event was partially supported by the PROGRAMA PROPIO I+D+I DE LA UNIVERSIDAD POLITÉCNICA DE MADRID, through the CONVOCATORIA DE AYUDAS DIRIGIDAS A PDI E INVESTIGADORES DOCTORES PARA LA ORGANIZACIÓN DE EVENTOS, as well as by the network ORTHONET, project MTM2017-90694-REDT, granted by the Agencia Estatal de Investigación, Ministerio de Ciencia, Innovación y Universidades of Spain.

Leganés, Spain    Francisco Marcellán
Alcalá de Henares, Spain    Edmundo J. Huertas

Contents

Riemann–Hilbert Problem and Matrix Biorthogonal Polynomials . . . . . . . . 1
Amílcar Branquinho, Ana Foulquié-Moreno, and Manuel Mañas-Baena

The Symmetrization Problem for Multiple Orthogonal Polynomials . . . . . . 17
Amílcar Branquinho and Edmundo J. Huertas

Darboux Transformations for Orthogonal Polynomials on the Real Line and on the Unit Circle . . . . . . 53
María José Cantero, Francisco Marcellán, Leandro Moral, and Luis Velázquez

Special Function Solutions of Painlevé Equations: Theory, Asymptotics and Applications . . . . . . 77
Alfredo Deaño

Discrete Semiclassical Orthogonal Polynomials of Class 2 . . . . . . 103
Diego Dominici and Francisco Marcellán

Stable Equilibria for the Roots of the Symmetric Continuous Hahn and Wilson Polynomials . . . . . . 171
Jan Felipe van Diejen

Convergent Non Complete Interpolatory Quadrature Rules . . . . . . 193
Ulises Fidalgo and Jacob Olson

Asymptotic Computation of Classical Orthogonal Polynomials . . . . . . 215
Amparo Gil, Javier Segura, and Nico M. Temme

An Introduction to Multiple Orthogonal Polynomials and Hermite-Padé Approximation . . . . . . 237
Guillermo López-Lagomasino

Revisiting Biorthogonal Polynomials: An LU Factorization Discussion . . . . . . 273
Manuel Mañas

Infinite Matrices in the Theory of Orthogonal Polynomials . . . . . . 309
Luis Verde-Star

Riemann–Hilbert Problem and Matrix Biorthogonal Polynomials

Amílcar Branquinho, Ana Foulquié-Moreno, and Manuel Mañas-Baena

Abstract  Recently the Riemann–Hilbert problem, with jumps supported on appropriate curves in the complex plane, has been presented for matrix biorthogonal polynomials, in particular non-Abelian Hermite matrix biorthogonal polynomials on the real line, understood as those whose matrix of weights is a solution of a Sylvester type Pearson equation whose coefficients are first order matrix polynomials. We explore this discussion, present some achievements, and consider some new examples of weights for matrix biorthogonal polynomials.

Keywords  Riemann–Hilbert problems · Matrix Pearson equations · Markov functions · Matrix biorthogonal polynomials

1 Introduction

Back in 1949, in the seminal papers [36, 37], Krein discussed matrix extensions of real orthogonal polynomials. Afterwards, this matrix extension of standard orthogonality was studied only sporadically until the last decade of the XX century,

A. Branquinho
CMUC and Department of Mathematics, University of Coimbra (FCTUC), Coimbra, Portugal
e-mail: [email protected]

A. Foulquié-Moreno
Departamento de Matemática, CIDMA, Universidade de Aveiro, Campus de Santiago, Aveiro, Portugal
e-mail: [email protected]

M. Mañas-Baena
Departamento de Física Teórica, Universidad Complutense de Madrid, Madrid, Spain

M. Mañas
Instituto de Ciencias Matemáticas (ICMAT), Campus de Cantoblanco UAM, Madrid, Spain
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_1


see [2, 33] and [1]. In [1], for a kind of discrete Sturm–Liouville operators, the authors solved the corresponding scattering problem and found a matrix version of Favard's theorem: polynomials that satisfy the three-term relation
$$x P_k(x) = A_k\, P_{k+1}(x) + B_k\, P_k(x) + A_{k-1}^{*}\, P_{k-1}(x), \qquad k = 0, 1, \ldots,$$
are orthogonal with respect to a positive definite matrix of measures. Over the last decade, a number of basic results of the theory of scalar orthogonal polynomials, such as Favard's theorem [15, 16, 27, 30], quadrature formulae [17, 21, 29] and asymptotic properties (Markov's theorem [17], ratio [18, 19], weak [20] and zero asymptotics [28]), have been extended to the matrix scenario. The search for families of matrix orthogonal polynomials that satisfy second order differential equations with coefficients independent of n can be found in [22–26]. This can be considered a matrix extension of the classical orthogonal polynomial sequences of Hermite, Laguerre and Jacobi. Fokas, Its and Kitaev, when discussing 2D quantum gravity, discovered that a certain Riemann–Hilbert problem is solved in terms of orthogonal polynomials on the real line (OPRL) [31]. Namely, it was found that the solution of a 2 × 2 Riemann–Hilbert problem can be expressed in terms of orthogonal polynomials on the real line and their Cauchy transforms. Later on, Deift and Zhou combined these ideas with a non-linear steepest descent analysis in a series of papers [10, 11, 13, 14], which was the seed for a large activity in the field. Relevant results to be mentioned here are the study of strong asymptotics with applications in random matrix theory [10, 12], the analysis of determinantal point processes [7, 8, 38, 39], applications to orthogonal Laurent polynomials [40, 41] and Painlevé equations [9, 35]. The Riemann–Hilbert problem characterization is a powerful tool that allows one to prove algebraic and analytic properties of orthogonal polynomials. The Riemann–Hilbert problem for this matrix situation and the appearance of non-Abelian discrete versions of Painlevé I (mdPI) were explored in [4], and the appearance of singularity confinement was shown in [6].
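The three-term relation above already determines the whole sequence of polynomials once the recurrence coefficients are fixed. As a minimal numerical sketch (not from the paper; the coefficients below are the scalar, N = 1, orthonormal Hermite ones, A_k = sqrt((k+1)/2), B_k = 0, chosen purely for illustration), the relation can be solved for P_{k+1} whenever each A_k is invertible:

```python
import numpy as np

def eval_three_term(x, A, B, P0, n):
    """Evaluate P_0(x), ..., P_n(x) from
    x P_k = A_k P_{k+1} + B_k P_k + A_{k-1}^* P_{k-1},
    assuming every A_k is invertible (with P_{-1} = 0)."""
    P = [np.array(P0, dtype=complex)]
    Pm1 = np.zeros_like(P[0])               # P_{-1}
    for k in range(n):
        rhs = x * P[k] - B[k] @ P[k]
        if k > 0:
            rhs = rhs - A[k - 1].conj().T @ Pm1
        Pnext = np.linalg.solve(A[k], rhs)  # P_{k+1} = A_k^{-1} (...)
        Pm1 = P[k]
        P.append(Pnext)
    return P

# Scalar (N = 1) illustration: orthonormal Hermite data
# A_k = sqrt((k+1)/2) * I, B_k = 0, P_0 = pi^{-1/4}.
n = 3
A = [np.sqrt((k + 1) / 2) * np.eye(1) for k in range(n)]
B = [np.zeros((1, 1)) for _ in range(n)]
P = eval_three_term(1.0, A, B, np.pi ** (-0.25) * np.eye(1), n)
```

For genuinely matrix-valued A_k and B_k the same loop applies unchanged, since only the invertibility of each A_k is used.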
The analysis was extended further in [5] to matrix Szegő type orthogonal polynomials on the unit circle and the corresponding non-Abelian versions of the discrete Painlevé II equations. For an alternative discussion of the use of the Riemann–Hilbert problem for MOPRL see [34], where the authors focus on the algebraic aspects of the problem, obtaining difference and differential relations satisfied by the corresponding orthogonal polynomials. In [3] we studied a Hermite-type biorthogonal matrix polynomial system from a Riemann–Hilbert problem. In this case the matrix measure extends to an entire function on the complex plane. We considered three types of matrix weights, $W$, obtained from the solution of a generalized Pearson differential or Sylvester differential equation, i.e.
$$W'(z) = h^L(z)\, W(z) + W(z)\, h^R(z),$$
where $h^L$ and $h^R$ are polynomials of first, second or third degree.


In that paper we showed that for these weights the matrix solutions of the Riemann–Hilbert problem,
$$Y_n^L = \begin{bmatrix} P_n^L & Q_n^L \\ -C_{n-1}\, P_{n-1}^L & -C_{n-1}\, Q_{n-1}^L \end{bmatrix}, \qquad Y_n^R = \begin{bmatrix} P_n^R & -P_{n-1}^R\, C_{n-1} \\ Q_n^R & -Q_{n-1}^R\, C_{n-1} \end{bmatrix},$$
satisfy
$$\Big( Y_n^L(z) \exp\Big( \int_0^z C_L'(t)\, C_L^{-1}(t)\, d t \Big) \Big)' = M_n^L\, Y_n^L(z) \exp\Big( \int_0^z C_L'(t)\, C_L^{-1}(t)\, d t \Big),$$
$$\Big( \exp\Big( \int_0^z C_R'(t)\, C_R^{-1}(t)\, d t \Big)\, Y_n^R(z) \Big)' = \exp\Big( \int_0^z C_R'(t)\, C_R^{-1}(t)\, d t \Big)\, Y_n^R(z)\, M_n^R,$$
with
$$C_L'(t)\, C_L^{-1}(t) = \begin{bmatrix} h^L & 0_N \\ 0_N & -h^R \end{bmatrix}, \qquad C_R'(t)\, C_R^{-1}(t) = \begin{bmatrix} h^R & 0_N \\ 0_N & -h^L \end{bmatrix},$$
where $M_n^L$ and $M_n^R$ are defined by
$$(T_n^L)' = M_{n+1}^L\, T_n^L - T_n^L\, M_n^L, \qquad (T_n^R)' = T_n^R\, M_{n+1}^R - M_n^R\, T_n^R,$$
and $\{T_n^L\}$, $\{T_n^R\}$ are sequences of transfer matrices, i.e.
$$Y_{n+1}^L = T_n^L\, Y_n^L, \qquad Y_{n+1}^R = Y_n^R\, T_n^R, \tag{1}$$
with
$$T_n^L(z) = \begin{bmatrix} z I_N - \beta_n^L & C_n^{-1} \\ -C_n & 0_N \end{bmatrix}, \qquad T_n^R(z) = \begin{bmatrix} z I_N - \beta_n^R & -C_n \\ C_n^{-1} & 0_N \end{bmatrix}. \tag{2}$$
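Equations (1)–(2) say that one step of the underlying three-term recurrence is a multiplication by a transfer matrix. A small sketch of this propagation in the scalar case N = 1, using the monic Chebyshev recurrence data as an assumed illustrative choice (β_n = 0, γ_n = C_n^{-1} C_{n-1} with γ_1 = 1/2 and γ_n = 1/4 for n ≥ 2):

```python
import numpy as np

def transfer_step(x, beta_n, C_n, v):
    """Apply T_n^L(x) from (2) to v = [P_n(x), -C_{n-1} P_{n-1}(x)],
    the first column of Y_n^L, in the scalar case N = 1."""
    T = np.array([[x - beta_n, 1.0 / C_n],
                  [-C_n,       0.0]])
    return T @ v

# Monic Chebyshev polynomials on [-1, 1] satisfy
# P_{n+1} = x P_n - gamma_n P_{n-1}, gamma_1 = 1/2, gamma_n = 1/4 (n >= 2);
# since gamma_n = C_n^{-1} C_{n-1}, take C_0 = 1 and C_n = C_{n-1} / gamma_n.
x = 0.5
v = np.array([1.0, 0.0])    # [P_0(x), -C_{-1} P_{-1}(x)], with P_{-1} = 0
C_prev = 1.0                # C_0
P = [1.0]                   # P_0(x)
for n in range(3):
    if n == 0:
        C_n = C_prev        # the (1, 2) entry of T_0 multiplies 0 here
    else:
        gamma = 0.5 if n == 1 else 0.25
        C_n = C_prev / gamma
    v = transfer_step(x, 0.0, C_n, v)   # beta_n = 0 for Chebyshev
    P.append(v[0])
    C_prev = C_n
# P now holds [P_0(x), P_1(x), P_2(x), P_3(x)] = [1, x, x^2 - 1/2, x^3 - (3/4) x]
```

The first column of Y_n^L carries [P_n(x), -C_{n-1} P_{n-1}(x)], so iterating the transfer matrices reproduces the monic polynomials pointwise.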

In this work we extend the Riemann–Hilbert characterization to a more general class of measures, a matrix extension of the classical scalar Laguerre measures, and we find that for this class of measures power logarithmic type singularities appear at the end point of the support of the measure. This opens a window for the study of the differential properties of the sequences of matrix polynomials associated with these matrix measures.


2 Riemann–Hilbert Problem for Matrix Biorthogonal Polynomials

2.1 Matrix Biorthogonal Polynomials

Let
$$W = \begin{bmatrix} W^{(1,1)} & \cdots & W^{(1,N)} \\ \vdots & \ddots & \vdots \\ W^{(N,1)} & \cdots & W^{(N,N)} \end{bmatrix} \in \mathbb{C}^{N \times N}$$
be an $N \times N$ matrix of weights with support on a smooth oriented non self-intersecting curve $\gamma$ in the complex plane $\mathbb{C}$, i.e. $W^{(j,k)}$ is, for each $j, k \in \{1, \ldots, N\}$, a complex weight with support on $\gamma$. We define the moment of order $n$ associated with $W$ as
$$W_n = \frac{1}{2\pi i} \int_\gamma z^n\, W(z)\, d z, \qquad n \in \mathbb{N} := \{0, 1, 2, \ldots\}.$$
We say that $W$ is regular if $\det \big[ W_{j+k} \big]_{j,k=0,\ldots,n} \neq 0$, $n \in \mathbb{N}$. In this way, we define a sequence of matrix monic polynomials $\{P_n^L(z)\}_{n\in\mathbb{N}}$, left orthogonal, and a sequence $\{P_n^R(z)\}_{n\in\mathbb{N}}$, right orthogonal, with respect to a regular matrix measure $W$, by the conditions
$$\frac{1}{2\pi i} \int_\gamma P_n^L(z)\, W(z)\, z^k\, d z = \delta_{n,k}\, C_n^{-1}, \tag{3}$$
$$\frac{1}{2\pi i} \int_\gamma z^k\, W(z)\, P_n^R(z)\, d z = \delta_{n,k}\, C_n^{-1}, \tag{4}$$
for $k = 0, 1, \ldots, n$ and $n \in \mathbb{N}$, where $C_n$ is a nonsingular matrix. Notice that the matrix of weights is not required to be Hermitian, nor the curve $\gamma$ to be the real line; i.e., we are dealing, in principle, with nonstandard orthogonality and, consequently, with biorthogonal matrix polynomials instead of orthogonal matrix polynomials. The matrix of weights induces a sesquilinear form on the set of matrix polynomials $\mathbb{C}^{N \times N}[z]$ given by
$$\langle P, Q \rangle_W := \frac{1}{2\pi i} \int_\gamma P(z)\, W(z)\, Q(z)\, d z. \tag{5}$$


Moreover, we say that $\{P_n^L(z)\}_{n\in\mathbb{N}}$ and $\{P_n^R(z)\}_{n\in\mathbb{N}}$ are biorthogonal with respect to a matrix weight function $W$ if
$$\big\langle P_n^L, P_m^R \big\rangle_W = \delta_{n,m}\, C_n^{-1}, \qquad n, m \in \mathbb{N}. \tag{6}$$
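In the scalar case N = 1 the conditions (3)–(4) collapse to a Hankel linear system on the moments. A hedged sketch for the weight w ≡ 1 on [-1, 1] (real-line moments are used here instead of the contour normalization above, purely to keep the example elementary); the resulting monic orthogonal polynomials are the monic Legendre polynomials:

```python
import numpy as np

def monic_op_coeffs(moments, n):
    """Lower coefficients a_0, ..., a_{n-1} of the monic polynomial
    P_n(x) = x^n + a_{n-1} x^{n-1} + ... + a_0 orthogonal to 1, x, ..., x^{n-1}:
    sum_j a_j m_{j+k} = -m_{n+k} for k = 0, ..., n-1, a Hankel system
    that is solvable whenever the moment functional is regular."""
    H = np.array([[moments[j + k] for j in range(n)] for k in range(n)], dtype=float)
    rhs = -np.array([moments[n + k] for k in range(n)], dtype=float)
    return np.linalg.solve(H, rhs)

# Moments m_k = Int_{-1}^{1} x^k dx of w(x) = 1: 2/(k+1) for even k, 0 for odd k.
m = [2 / (k + 1) if k % 2 == 0 else 0 for k in range(8)]
a2 = monic_op_coeffs(m, 2)   # expect monic Legendre P_2(x) = x^2 - 1/3
a3 = monic_op_coeffs(m, 3)   # expect monic Legendre P_3(x) = x^3 - (3/5) x
```

In the matrix setting the same idea applies blockwise, with the scalar moments replaced by the matrix moments W_k and the unknown coefficients becoming N x N blocks.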

As the polynomials are chosen to be monic, we can write
$$P_n^L(z) = I_N z^n + p_{L,n}^1 z^{n-1} + p_{L,n}^2 z^{n-2} + \cdots + p_{L,n}^n,$$
$$P_n^R(z) = I_N z^n + p_{R,n}^1 z^{n-1} + p_{R,n}^2 z^{n-2} + \cdots + p_{R,n}^n,$$
with matrix coefficients $p_{L,n}^k, p_{R,n}^k \in \mathbb{C}^{N \times N}$, $k = 0, \ldots, n$ and $n \in \mathbb{N}$ (imposing that $p_{L,n}^0 = p_{R,n}^0 = I_N$, $n \in \mathbb{N}$). Here $I_N \in \mathbb{C}^{N \times N}$ denotes the identity matrix. We define the sequence of second kind matrix functions by

$$Q_n^L(z) := \frac{1}{2\pi i} \int_\gamma \frac{P_n^L(z')\, W(z')}{z' - z}\, d z', \tag{7}$$
$$Q_n^R(z) := \frac{1}{2\pi i} \int_\gamma \frac{W(z')\, P_n^R(z')}{z' - z}\, d z', \tag{8}$$

for $n \in \mathbb{N}$. From the orthogonality conditions (3) and (4) we have, for all $n \in \mathbb{N}$, the following asymptotic expansions near infinity for the sequences of functions of the second kind:
$$Q_n^L(z) = -C_n^{-1} \big( I_N z^{-n-1} + q_{L,n}^1 z^{-n-2} + \cdots \big), \tag{9}$$
$$Q_n^R(z) = -\big( I_N z^{-n-1} + q_{R,n}^1 z^{-n-2} + \cdots \big)\, C_n^{-1}. \tag{10}$$

Assuming that the measures $W^{(j,k)}$, $j, k \in \{1, \ldots, N\}$, are Hölder continuous, we obtain, by Plemelj's formula applied to (7) and (8), the following fundamental jump identities:
$$\big( Q_n^L(z) \big)_+ - \big( Q_n^L(z) \big)_- = P_n^L(z)\, W(z), \tag{11}$$
$$\big( Q_n^R(z) \big)_+ - \big( Q_n^R(z) \big)_- = W(z)\, P_n^R(z), \tag{12}$$
$z \in \gamma$, where $\big( f(z) \big)_\pm = \lim_{\epsilon \to 0^\pm} f(z + i \epsilon)$; here $\pm$ indicates the positive/negative region according to the orientation of the curve $\gamma$.
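The jump identities (11)–(12) can be verified by hand in the simplest scalar instance: for w ≡ 1 on γ = [-1, 1] and P_0 = 1, the second kind function has the closed form Q_0(z) = (1/(2πi)) log((1-z)/(-1-z)), and its boundary values from either side of γ differ by P_0(t) w(t) = 1. A small numerical illustration (a scalar toy case of our own, not the matrix setting of the text):

```python
import cmath

def Q0(z):
    """Closed form of the second kind function for w = 1 on gamma = [-1, 1]
    and P_0 = 1: Q_0(z) = (1/(2 pi i)) * Int_{-1}^{1} dx / (x - z)."""
    return cmath.log((1 - z) / (-1 - z)) / (2j * cmath.pi)

t, eps = 0.3, 1e-8
jump = Q0(t + 1j * eps) - Q0(t - 1j * eps)
# By Plemelj's formula, the jump tends to P_0(t) * w(t) = 1 as eps -> 0.
```

The ratio (1 - z)/(-1 - z) = (z - 1)/(z + 1) avoids the negative real axis for z off [-1, 1], so the principal branch of the logarithm is safe here.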


2.2 Reductions: From Biorthogonality to Orthogonality

We consider two possible reductions for the matrix of weights, the symmetric reduction and the Hermitian reduction.

(i) A matrix of weights $W(z)$ with support on $\gamma$ is said to be symmetric if $\big(W(z)\big)^\top = W(z)$, $z \in \gamma$.
(ii) A matrix of weights $W(x)$ with support on $\mathbb{R}$ is said to be Hermitian if $\big(W(x)\big)^\dagger = W(x)$, $x \in \mathbb{R}$.

These two reductions lead to orthogonal polynomials, as the two biorthogonal families are identified; i.e., for the symmetric case,
$$P_n^R(z) = \big( P_n^L(z) \big)^\top, \qquad Q_n^R(z) = \big( Q_n^L(z) \big)^\top, \qquad z \in \mathbb{C},$$
and for the Hermitian case, with $\gamma = \mathbb{R}$,
$$P_n^R(z) = \big( P_n^L(\bar z) \big)^\dagger, \qquad Q_n^R(z) = \big( Q_n^L(\bar z) \big)^\dagger, \qquad z \in \mathbb{C}.$$
In both cases biorthogonality collapses into orthogonality, which for the symmetric case reads
$$\frac{1}{2\pi i} \int_\gamma P_n(z)\, W(z)\, \big( P_m(z) \big)^\top\, d z = \delta_{n,m}\, C_n^{-1}, \qquad n, m \in \mathbb{N},$$
while for the Hermitian case it can be written as
$$\frac{1}{2\pi i} \int_{\mathbb{R}} P_n(x)\, W(x)\, \big( P_m(x) \big)^\dagger\, d x = \delta_{n,m}\, C_n^{-1}, \qquad n, m \in \mathbb{N},$$
where $P_n = P_n^L$.
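At degree one the Hermitian reduction can be checked directly from the moments: writing P_1^L(x) = I_N x + p and P_1^R(x) = I_N x + q, the orthogonality conditions give p = -W_1 W_0^{-1} and q = -W_0^{-1} W_1, and q = p^† follows because the moments of a Hermitian weight are Hermitian. A sketch with an illustrative Hermitian weight of our own choosing (not from the text), W(x) = [[1, ix], [-ix, 2]] on [-1, 1]:

```python
import numpy as np

# Closed-form moments W_k = Int_{-1}^{1} x^k W(x) dx for the Hermitian weight
# W(x) = [[1, i x], [-i x, 2]], which is positive definite on (-1, 1).
W0 = np.array([[2.0, 0.0], [0.0, 4.0]], dtype=complex)   # W_0
W1 = np.array([[0.0, 2j / 3], [-2j / 3, 0.0]])           # W_1

# Degree-one monic polynomials P_1^L(x) = I x + p and P_1^R(x) = I x + q:
# Int P_1^L(x) W(x) dx = 0  =>  p = -W_1 W_0^{-1}
# Int W(x) P_1^R(x) dx = 0  =>  q = -W_0^{-1} W_1
p = -W1 @ np.linalg.inv(W0)
q = -np.linalg.inv(W0) @ W1
# Hermitian reduction: q = p^dagger, i.e. P_1^R(x) = (P_1^L(x))^dagger for real x.
```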

2.3 The Riemann–Hilbert Problem

Let us consider the particular case when the $N \times N$ matrix of weights, with support on a smooth oriented non self-intersecting curve $\gamma$, has entrywise power logarithmic type singularities at the end points of the support of the measure; that is, the entries $W^{(j,k)}$ of the matrix measure $W$ can be described as
$$W^{(j,k)}(z) = \sum_{m \in I_{j,k}} h_m(z)\, (z - c)^{\alpha_m} \log^{p_m}(z),$$


where $I_{j,k}$ denotes a finite set of indexes, $\alpha_m > -1$, $p_m \in \mathbb{N}$, and $h_m$ is Hölder continuous, bounded and non-vanishing on $\gamma$. The biorthogonality can be characterized in terms of a left and a right Riemann–Hilbert formulation.

Theorem 1

(i) The matrix function
$$Y_n^L(z) := \begin{bmatrix} P_n^L(z) & Q_n^L(z) \\ -C_{n-1}\, P_{n-1}^L(z) & -C_{n-1}\, Q_{n-1}^L(z) \end{bmatrix}$$
is, for each $n \in \mathbb{N}$, the unique solution of the Riemann–Hilbert problem which consists in the determination of a $2N \times 2N$ complex matrix function such that:

(RHL1): $Y_n^L(z)$ is holomorphic in $\mathbb{C} \setminus \gamma$;
(RHL2): it has the following asymptotic behaviour near infinity,
$$Y_n^L(z) = \Big( I_{2N} + \sum_{j=1}^{\infty} Y_n^{j,L}\, z^{-j} \Big) \begin{bmatrix} I_N z^{n} & 0_N \\ 0_N & I_N z^{-n} \end{bmatrix};$$
(RHL3): it satisfies the jump condition
$$\big( Y_n^L(z) \big)_+ = \big( Y_n^L(z) \big)_- \begin{bmatrix} I_N & W(z) \\ 0_N & I_N \end{bmatrix}, \qquad z \in \gamma;$$
(RHL4):
$$Y_n^L(z) = \begin{bmatrix} O(1) & O(s_1^L(z)) \\ O(1) & O(s_2^L(z)) \end{bmatrix} \quad \text{as } z \to c,$$
where $c$ denotes any of the end points of the curve $\gamma$, if they exist, with
$$\lim_{z \to c} (z - c)\, s_j^L(z) = 0_N, \qquad j = 1, 2,$$
and the $O$ conditions are understood entrywise.

(ii) The matrix function
$$Y_n^R(z) := \begin{bmatrix} P_n^R(z) & -P_{n-1}^R(z)\, C_{n-1} \\ Q_n^R(z) & -Q_{n-1}^R(z)\, C_{n-1} \end{bmatrix}$$
is, for each $n \in \mathbb{N}$, the unique solution of the Riemann–Hilbert problem which consists in the determination of a $2N \times 2N$ complex matrix function such that:

(RHR1): $Y_n^R(z)$ is holomorphic in $\mathbb{C} \setminus \gamma$;
(RHR2): it has the following asymptotic behaviour near infinity,
$$Y_n^R(z) = \begin{bmatrix} I_N z^{n} & 0_N \\ 0_N & I_N z^{-n} \end{bmatrix} \Big( I_{2N} + \sum_{j=1}^{\infty} Y_n^{j,R}\, z^{-j} \Big);$$
(RHR3): it satisfies the jump condition
$$\big( Y_n^R(z) \big)_+ = \begin{bmatrix} I_N & 0_N \\ W(z) & I_N \end{bmatrix} \big( Y_n^R(z) \big)_-, \qquad z \in \gamma;$$
(RHR4):
$$Y_n^R(z) = \begin{bmatrix} O(1) & O(1) \\ O(s_1^R(z)) & O(s_2^R(z)) \end{bmatrix} \quad \text{as } z \to c,$$
where $c$ denotes any of the end points of the curve $\gamma$, if they exist, with
$$\lim_{z \to c} (z - c)\, s_j^R(z) = 0_N, \qquad j = 1, 2,$$
and the $O$ conditions are understood entrywise.

(iii) The determinants of $Y_n^L(z)$ and $Y_n^R(z)$ are both equal to $1$, for every $z \in \mathbb{C}$.

Proof Using the standard calculations from the scalar case, it follows that the matrices $Y_n^L$ and $Y_n^R$ satisfy (RHL1)–(RHL3) and (RHR1)–(RHR3), respectively. The entries $W^{(j,k)}$ of the matrix measure $W$ can be described as
$$W^{(j,k)}(z) = \sum_{m \in I_{j,k}} h_m(z)\, (z - c)^{\alpha_m} \log^{p_m}(z),$$

where Ij,k denotes a finite set of indexes, αm > −1, pm ∈ N and hm (x) is Hölder continuous, bounded and non–vanishing on γ . At the boundary values of the curve γ if they exist and are denoted by c, for z → c. It holds [32] that in a neighbourhood of the point c, the Cauchy transform of the function φm (z) =

1 2π i

 γ

p(z )hm (z )(z − c)αm logpm (z )  dz , z − z

where p(z ) denotes any polynomial in z , verifies lim (z − c)φm (z) = 0,

z→c

and the condition (RHL4), is fulfilled for the matrix YnL and respectively the condition (RHR4), is fulfilled for the matrix YnR . Now let us consider  G(z) = YnL (z)

   0N IN 0 −IN YnR (z) N . −IN 0N IN 0N


It can easily be proved that G has no jump on the curve γ. In a neighborhood of the point c,
$$G(z) = \begin{bmatrix} O(s_1^L(z)) + O(s_2^R(z)) & O(s_1^L(z)) + O(s_1^R(z)) \\ O(s_2^L(z)) + O(s_2^R(z)) & O(s_2^L(z)) + O(s_1^R(z)) \end{bmatrix},$$
so $\lim_{z \to c}(z-c)G(z) = 0$, and at the point c the singularity is removable. Now, using the behaviour for $z \to \infty$,
$$G(z) = \begin{bmatrix} I_N z^{n} & 0_N \\ 0_N & I_N z^{-n} \end{bmatrix}\begin{bmatrix} 0_N & I_N \\ -I_N & 0_N \end{bmatrix}\begin{bmatrix} I_N z^{n} & 0_N \\ 0_N & I_N z^{-n} \end{bmatrix}\begin{bmatrix} 0_N & -I_N \\ I_N & 0_N \end{bmatrix} = \begin{bmatrix} I_N & 0_N \\ 0_N & I_N \end{bmatrix},$$
and using Liouville's Theorem it holds that $G(z) = I$, the identity matrix. From this follows the uniqueness of the solution of each of the Riemann–Hilbert problems stated in this theorem. Again using the standard arguments as in the scalar case, we conclude that $\det Y_n^L(z)$ and $\det Y_n^R(z)$ are both equal to 1. □

We recover a representation for the inverse matrix $(Y_n^L)^{-1}$ in the following result.

Corollary 1 It holds that
$$\big(Y_n^L\big)^{-1}(z) = \begin{bmatrix} 0_N & I_N \\ -I_N & 0_N \end{bmatrix} Y_n^R(z) \begin{bmatrix} 0_N & -I_N \\ I_N & 0_N \end{bmatrix}. \tag{13}$$

Corollary 2 Under the conditions of Theorem 1 we have, for all $n \in \mathbb{N}$,
$$Q_n^L(z)P_{n-1}^R(z) - P_n^L(z)Q_{n-1}^R(z) = C_{n-1}^{-1}, \tag{14}$$
$$P_{n-1}^L(z)Q_n^R(z) - Q_{n-1}^L(z)P_n^R(z) = C_{n-1}^{-1}, \tag{15}$$
$$Q_n^L(z)P_n^R(z) - P_n^L(z)Q_n^R(z) = 0. \tag{16}$$

Proof As we have already proven, the matrix
$$\begin{bmatrix} -Q_{n-1}^R(z)C_{n-1} & -Q_n^R(z) \\ P_{n-1}^R(z)C_{n-1} & P_n^R(z) \end{bmatrix}$$
is the inverse of $Y_n^L(z)$, i.e.
$$Y_n^L(z)\begin{bmatrix} -Q_{n-1}^R(z)C_{n-1} & -Q_n^R(z) \\ P_{n-1}^R(z)C_{n-1} & P_n^R(z) \end{bmatrix} = I;$$
multiplying the two matrices we get the result. □

2.4 Three Term Recurrence Relation

Following the standard arguments from the Riemann–Hilbert formulation we can prove
$$Y_{n+1}^L(z) = T_n^L(z)\,Y_n^L(z), \qquad n \in \mathbb{N},$$
where $T_n^L$ is given in (2). For the right orthogonality, we similarly obtain from (1) that
$$Y_{n+1}^R(z) = Y_n^R(z)\,T_n^R(z), \qquad n \in \mathbb{N},$$
where $T_n^R$ is given in (2). Hence, we conclude that the sequence of monic polynomials $\{P_n^L(z)\}_{n \in \mathbb{N}}$ satisfies the three term recurrence relation
$$z P_n^L(z) = P_{n+1}^L(z) + \beta_n^L P_n^L(z) + \gamma_n^L P_{n-1}^L(z), \qquad n \in \mathbb{N}, \tag{17}$$
with recursion coefficients $\beta_n^L := p_{L,n}^1 - p_{L,n+1}^1$, $\gamma_n^L := C_n^{-1}C_{n-1}$, and initial conditions $P_{-1}^L = 0_N$, $P_0^L = I_N$. We can also assert that
$$z P_n^R(z) = P_{n+1}^R(z) + P_n^R(z)\beta_n^R + P_{n-1}^R(z)\gamma_n^R, \qquad n \in \mathbb{N}, \tag{18}$$
where $\beta_n^R := C_n \beta_n^L C_n^{-1}$ and $\gamma_n^R := C_n \gamma_n^L C_n^{-1} = C_{n-1}C_n^{-1}$.
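The recurrence (17) can be iterated directly. The following sketch (illustrative only; the coefficient matrices, the evaluation point, and the helper name are sample choices, not taken from the text) evaluates the monic polynomials $P_n^L$ at a point and checks the scalar case against a hand computation:

```python
import numpy as np

def left_polys_at(z, betas, gammas, N):
    """Evaluate the monic matrix polynomials generated by the three term
    recurrence z P_n = P_{n+1} + beta_n P_n + gamma_n P_{n-1}, with
    P_{-1} = 0_N and P_0 = I_N, at a scalar point z.

    betas, gammas: lists of N x N coefficient matrices (gamma_0 is
    irrelevant, since it multiplies P_{-1} = 0_N)."""
    P_prev = np.zeros((N, N), dtype=complex)
    P_curr = np.eye(N, dtype=complex)
    values = [P_curr]
    for b, g in zip(betas, gammas):
        P_next = z * P_curr - b @ P_curr - g @ P_prev
        P_prev, P_curr = P_curr, P_next
        values.append(P_curr)
    return values

# Scalar sanity check (N = 1): beta_n = 0, gamma_n = 1 gives the monic
# recurrence P_{n+1} = z P_n - P_{n-1}, so P_2(z) = z^2 - 1 and
# P_3(z) = z^3 - 2z.
vals = left_polys_at(2.0, [np.zeros((1, 1))] * 3, [np.eye(1)] * 3, 1)
assert abs(vals[2][0, 0] - 3.0) < 1e-12   # 2^2 - 1
assert abs(vals[3][0, 0] - 4.0) < 1e-12   # 2^3 - 2*2
```

The same routine applies verbatim with genuinely matrix-valued $\beta_n^L$, $\gamma_n^L$.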

3 Matrix Weights Supported on a Curve γ on the Complex Plane that Connects the Point 0 to the Point ∞: Laguerre Weights

Motivated by different attempts that appear in the literature, we consider some classes of weights amenable to the Riemann–Hilbert formulation. In this matrix case it is not so obvious which conditions we should impose in order to guarantee the integrability of the matrix measure that we want to consider.


3.1 Matrix Weights Supported on a Curve γ with One Finite End Point: W(z) = z^A H(z)

We begin by considering the weight $W(z) = z^A H(z)$ supported on a curve γ on the complex plane that connects the point 0 to the point ∞, where

(i) The function $z^A$ is defined as $z^A = e^{A\log z}$, where γ is the branch cut of the logarithmic function, and we define, for $t \in \gamma$, $t^A := (z^A)_+$, where $(z^A)_+$ is the non-tangential limit as $z \to t$ from the left side of the oriented curve γ.
(ii) The constant matrix A is such that the minimum of the real part of its eigenvalues is greater than −1.
(iii) The factor $H(t)$ is the restriction to the curve γ of $H(z)$, a matrix of entire functions such that $H(z)$ is invertible for all $z \in \mathbb{C}$.
(iv) The left logarithmic derivative $h(z) := (H(z))^{-1}H'(z)$ is an entire function.

In order to consider the Riemann–Hilbert problem related to the weight function W(z), it is necessary to clarify the behaviour of W(z) in a neighborhood of the point z = 0. If we consider the Jordan decomposition of the matrix A, there exists an invertible matrix P such that $A = PJP^{-1}$, with $J = D + N$, where D is the diagonal matrix whose entries are the eigenvalues of the matrix A and N is a nilpotent matrix that commutes with D. This commutation enables us to obtain
$$z^A = z^{PJP^{-1}} = P z^J P^{-1} = P z^D z^N P^{-1},$$
where $z^N$ is a polynomial in the variable $\log z$. The matrix $z^D$ is a diagonal matrix whose entries are of the form $z^{\alpha_j + i\beta_j}$, where $\alpha_j + i\beta_j$ is an eigenvalue of the matrix A. Let us consider as an example the matrix
$$A = \begin{bmatrix} -\tfrac12 & 1 & 0 \\ 0 & -\tfrac12 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
In this case
$$z^A = \begin{bmatrix} z^{-1/2} & 0 & 0 \\ 0 & z^{-1/2} & 0 \\ 0 & 0 & z \end{bmatrix}\begin{bmatrix} 1 & \log z & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
So we observe that the matrix A is not diagonalizable, and a factor such as $z^{-1/2}\log z$ appears in the behaviour near 0; hence we are in the presence of a power logarithmic type singularity. To ensure the integrability of this kind of measure it is enough to ask that $\alpha > -1$, where α is the minimum of the real parts of the eigenvalues of the matrix A. So for this weight we are in the conditions of Theorem 1.

It is also worth commenting on the factor H(z) of the measure W(z). In order to have integrability of this matrix weight function we should be careful. If, for example, we consider $H(z) = e^{Bz}$, then it is clear, by reasoning as before, that we should impose that the real parts of the eigenvalues of the matrix B be negative. If we consider $h(z) := (H(z))^{-1}H'(z)$ to be a matrix polynomial, $h(z) = B_0 + B_1 z + \cdots + B_m z^m$, it should be enough, in order to guarantee integrability of the measure, to impose that the real parts of the eigenvalues of the matrix $B_m$ be negative.
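The Jordan-block computation of $z^A$ for the example above can be verified numerically. In the sketch below, the truncated-series matrix exponential and the sample evaluation point are illustrative choices, not part of the text:

```python
import numpy as np

# The example matrix from the text: A = D + N with D = diag(-1/2, -1/2, 1)
# and N nilpotent (N @ N = 0); here D and N commute, so z**A = z**D (I + N log z).
A = np.array([[-0.5, 1.0, 0.0],
              [ 0.0, -0.5, 0.0],
              [ 0.0, 0.0, 1.0]])
D = np.diag(np.diag(A))
N = A - D

def expm_series(M, terms=60):
    """Matrix exponential via its (truncated) power series; adequate here."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

z = 2.0
direct = expm_series(A * np.log(z))                         # z**A = exp(A log z)
closed = np.diag(z ** np.diag(A)) @ (np.eye(3) + N * np.log(z))
assert np.allclose(direct, closed)
# The (0, 1) entry is exactly z**(-1/2) * log(z): the announced
# power logarithmic behaviour near z = 0.
assert np.isclose(closed[0, 1], z ** -0.5 * np.log(z))
```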

3.2 Matrix Weights Supported on a Curve γ with One Finite End Point: W(z) = z^α H(z)G(z) z^B

In [23] appear different examples of Laguerre matrix weights for matrix orthogonal polynomials on the real line. This motivates us to consider the matrix weight $W(z) = z^{\alpha}H(z)G(z)z^{B}$ supported on a curve γ on the complex plane that connects the point 0 to the point ∞, with similar considerations as in the case treated before. Nevertheless, when we try to apply the general methods from the Riemann–Hilbert formulation we find many difficulties, derived from the non-commutativity of the matrix product, and we should impose important restrictions.

This kind of matrix weight can be treated in a more general context. Let us consider that, instead of a given matrix of weights, we are provided with two matrices of entire functions, say $h^L(z)$ and $h^R(z)$, such that the following two matrix Pearson equations are satisfied:
$$z\frac{dW^L}{dz} = h^L(z)\,W^L(z), \qquad z\frac{dW^R}{dz} = W^R(z)\,h^R(z); \tag{19}$$
given solutions to them, we construct the corresponding matrix of weights $W = W^L W^R$. Moreover, this matrix of weights is also characterized by a Pearson equation.

Proposition 1 (Pearson Sylvester Differential Equation) Given two matrices of entire functions $h^L(z)$ and $h^R(z)$, any solution of the Sylvester type matrix differential equation, which we call the Pearson equation for the weight,
$$z\frac{dW}{dz} = h^L(z)\,W(z) + W(z)\,h^R(z) \tag{20}$$
is of the form $W = W^L W^R$, where the factor matrices $W^L$ and $W^R$ are solutions of (19), respectively.
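Before turning to the proof, a quick numerical sanity check of (20) under simplifying assumptions: constant, non-commuting coefficients $h^L$ and $h^R$, for which $W^L(z) = z^{h^L}$ and $W^R(z) = z^{h^R}$ solve (19). All concrete matrices below are sample data chosen for illustration:

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its (truncated) power series; adequate here."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Constant, non-commuting sample choices for the two Pearson coefficients.
hL = np.array([[-1.0, 0.5], [0.0, -2.0]])
hR = np.array([[-0.5, 0.0], [1.0, -1.5]])

# W^L(z) = z**hL solves z dW^L/dz = hL W^L, and W^R(z) = z**hR solves
# z dW^R/dz = W^R hR; their product should solve the Sylvester equation (20).
WL = lambda z: expm_series(hL * np.log(z))
WR = lambda z: expm_series(hR * np.log(z))
W = lambda z: WL(z) @ WR(z)

z, h = 2.0, 1e-5
dW = (W(z + h) - W(z - h)) / (2 * h)          # central finite difference
assert np.allclose(z * dW, hL @ W(z) + W(z) @ hR, atol=1e-6)
```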


Proof Given solutions $W^L$ and $W^R$ of (19), respectively, it follows immediately, just using the Leibniz law for derivatives, that $W = W^L W^R$ fulfills (20). Moreover, given a solution W of (20), we pick a solution $W^L$ of the first equation in (19); then it is easy to see that $(W^L)^{-1}W$ satisfies the second equation in (19). □

Acknowledgments Amílcar Branquinho was partially supported by the Centre for Mathematics of the University of Coimbra, UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES. Ana Foulquié acknowledges the Center for Research & Development in Mathematics and Applications through the Portuguese Foundation for Science and Technology (FCT, Fundação para a Ciência e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020. Manuel Mañas Baena thanks economical support from the Spanish Ministerio de Economía y Competitividad research project [MTM2015-65888-C4-2-P], Ortogonalidad, teoría de la aproximación y aplicaciones en física matemática, and the Spanish Agencia Estatal de Investigación research project [PGC2018-096504-B-C33], Ortogonalidad y Aproximación: Teoría y Aplicaciones en Física Matemática.

References

1. Aptekarev, A.I., Nikishin, E.M.: The scattering problem for a discrete Sturm–Liouville operator. Math. USSR Sb. 49, 325–355 (1984)
2. Berezanskii, J.M.: Expansions in Eigenfunctions of Selfadjoint Operators. Translations of Mathematical Monographs, vol. 17. American Mathematical Society, Providence (1968)
3. Branquinho, A., Foulquié Moreno, A., Mañas, M.: Matrix biorthogonal polynomials: eigenvalue problems and non-abelian discrete Painlevé equations. J. Math. Anal. Appl. 494(2), 124605 (2021)
4. Cassatella-Contra, G.A., Mañas, M.: Riemann–Hilbert problems, matrix orthogonal polynomials and discrete matrix equations with singularity confinement. Stud. Appl. Math. 128, 252–274 (2011)
5. Cassatella-Contra, G.A., Mañas, M.: Matrix biorthogonal polynomials in the unit circle: Riemann–Hilbert problem and matrix discrete Painlevé II system (2016). arXiv:1601.07236 [math.CA]
6. Cassatella-Contra, G.A., Mañas, M., Tempesta, P.: Singularity confinement for matrix discrete Painlevé equations. Nonlinearity 27, 2321–2335 (2014)
7. Daems, E., Kuijlaars, A.B.J.: Multiple orthogonal polynomials of mixed type and non-intersecting Brownian motions. J. Approx. Theory 146, 91–114 (2007)
8. Daems, E., Kuijlaars, A.B.J., Veys, W.: Asymptotics of non-intersecting Brownian motions and a 4 × 4 Riemann–Hilbert problem. J. Approx. Theory 153, 225–256 (2008)
9. Dai, D., Kuijlaars, A.B.J.: Painlevé IV asymptotics for orthogonal polynomials with respect to a modified Laguerre weight. Stud. Appl. Math. 122, 29–83 (2009)
10. Deift, P.A.: Orthogonal Polynomials and Random Matrices: A Riemann–Hilbert Approach. Courant Lecture Notes, vol. 3. American Mathematical Society, Providence (2000)
11. Deift, P.A.: Riemann–Hilbert methods in the theory of orthogonal polynomials. In: Spectral Theory and Mathematical Physics: A Festschrift in Honor of Barry Simon's 60th Birthday. Proceedings of Symposia in Pure Mathematics, vol. 76, pp. 715–740. American Mathematical Society, Providence (2007)
12. Deift, P.A., Gioev, D.: Random Matrix Theory: Invariant Ensembles and Universality. Courant Lecture Notes in Mathematics, vol. 18. American Mathematical Society, Providence (2009)


13. Deift, P.A., Zhou, X.: A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation. Ann. Math. 137, 295–368 (1993)
14. Deift, P.A., Zhou, X.: Long-time asymptotics for solutions of the NLS equation with initial data in a weighted Sobolev space. Commun. Pure Appl. Math. 56, 1029–1077 (2003)
15. Durán, A.J.: A generalization of Favard's theorem for polynomials satisfying a recurrence relation. J. Approx. Theory 74, 260–275 (1994)
16. Durán, A.J.: On orthogonal polynomials with respect to a positive definite matrix of measures. Can. J. Math. 47, 88–112 (1995)
17. Durán, A.J.: Markov's theorem for orthogonal matrix polynomials. Can. J. Math. 48, 1180–1195 (1996)
18. Durán, A.J.: Ratio asymptotics for orthogonal matrix polynomials. J. Approx. Theory 100, 304–344 (1999)
19. Durán, A.J., Daneri-Vias, E.: Ratio asymptotics for orthogonal matrix polynomials with unbounded recurrence coefficients. J. Approx. Theory 110, 1–17 (2001)
20. Durán, A.J., Daneri-Vias, E.: Weak convergence for orthogonal matrix polynomials. Indag. Math. 13, 47–62 (2002)
21. Durán, A.J., Defez, E.: Orthogonal matrix polynomials and quadrature formulas. Linear Algebra Appl. 345, 71–84 (2002)
22. Durán, A.J., de la Iglesia, M.D.: Second order differential operators having several families of orthogonal matrix polynomials as eigenfunctions. Int. Math. Res. Notices 2008, 24 (2008)
23. Durán, A.J., Grünbaum, F.A.: Orthogonal matrix polynomials satisfying second order differential equations. Int. Math. Res. Notices 10, 461–484 (2004)
24. Durán, A.J., Grünbaum, F.A.: Structural formulas for orthogonal matrix polynomials satisfying second order differential equations, I. Constr. Approx. 22, 255–271 (2005)
25. Durán, A.J., Grünbaum, F.A.: Orthogonal matrix polynomials, scalar-type Rodrigues' formulas and Pearson equations. J. Approx. Theory 134, 267–280 (2005)
26. Durán, A.J., Grünbaum, F.A.: Structural formulas for orthogonal matrix polynomials satisfying second order differential equations I. Constr. Approx. 22, 255–271 (2005)
27. Durán, A.J., Lopez-Rodriguez, P.: Orthogonal matrix polynomials: zeros and Blumenthal's theorem. J. Approx. Theory 84, 96–118 (1996)
28. Durán, A.J., Lopez-Rodriguez, P., Saff, E.B.: Zero asymptotic behaviour for orthogonal matrix polynomials. J. d'Analyse Math. 78, 37–60 (1999)
29. Durán, A.J., Polo, B.: Gauss quadrature formulae for orthogonal matrix polynomials. Linear Algebra Appl. 355, 119–146 (2002)
30. Durán, A.J., Van Assche, W.: Orthogonal matrix polynomials and higher order recurrence relations. Linear Algebra Appl. 219, 261–280 (1995)
31. Fokas, A.S., Its, A.R., Kitaev, A.V.: The isomonodromy approach to matrix models in 2D quantum gravity. Commun. Math. Phys. 147, 395–430 (1992)
32. Gakhov, F.D.: Boundary Value Problems (translated from the Russian; reprint of the 1966 translation). Dover Publications, New York (1990)
33. Geronimo, J.S.: Scattering theory and matrix orthogonal polynomials on the real line. Circ. Syst. Signal Process. 1, 471–495 (1982)
34. Grünbaum, F.A., de la Iglesia, M.D., Martínez-Finkelshtein, A.: Properties of matrix orthogonal polynomials via their Riemann–Hilbert characterization. SIGMA Symmetry Integr. Geom. Methods Appl. 7, 098 (2011)
35. Its, A.R., Kuijlaars, A.B.J., Östensson, J.: Asymptotics for a special solution of the thirty fourth Painlevé equation. Nonlinearity 22, 1523–1558 (2009)
36. Krein, M.G.: Infinite J-matrices and a matrix moment problem. Dokl. Akad. Nauk SSSR 69(2), 125–128 (1949)
37. Krein, M.G.: Fundamental Aspects of the Representation Theory of Hermitian Operators with Deficiency Index (m, m). AMS Translations, Series 2, vol. 97, pp. 75–143. American Mathematical Society, Providence (1971)


38. Kuijlaars, A.B.J.: Multiple orthogonal polynomial ensembles. In: Recent Trends in Orthogonal Polynomials and Approximation Theory. Contemporary Mathematics, vol. 507, pp. 155–176. American Mathematical Society, Providence (2010)
39. Kuijlaars, A.B.J., Martínez-Finkelshtein, A., Wielonsky, F.: Non-intersecting squared Bessel paths and multiple orthogonal polynomials for modified Bessel weights. Commun. Math. Phys. 286, 217–275 (2009)
40. McLaughlin, K.T.-R., Vartanian, A.H., Zhou, X.: Asymptotics of Laurent polynomials of even degree orthogonal with respect to varying exponential weights. Int. Math. Res. Notices 2006, 62815 (2006)
41. McLaughlin, K.T.-R., Vartanian, A.H., Zhou, X.: Asymptotics of Laurent polynomials of odd degree orthogonal with respect to varying exponential weights. Constr. Approx. 27, 149–202 (2008)

The Symmetrization Problem for Multiple Orthogonal Polynomials

Amílcar Branquinho and Edmundo J. Huertas

Abstract We analyze the effect of symmetrization in the theory of multiple orthogonal polynomials. For a symmetric sequence of type II multiple orthogonal polynomials satisfying a high-term recurrence relation, we fully characterize the Weyl function associated to the corresponding block Jacobi matrix as well as the Stieltjes matrix function. Next, from an arbitrary sequence of type II multiple orthogonal polynomials with respect to a set of d linear functionals, we obtain a total of d + 1 sequences of type II multiple orthogonal polynomials, which can be used to construct a new sequence of symmetric type II multiple orthogonal polynomials. We also prove a Favard-type result for certain sequences of matrix multiple orthogonal polynomials satisfying a matrix four-term recurrence relation with matrix coefficients. Finally, we provide an example concerning multiple Hermite polynomials. Keywords Darboux transformations · Multiple orthogonal polynomials · Matrix orthogonal polynomials · Linear functionals · Recurrence relations · Operator theory · Full Kostant-Toda systems · Symmetrization problem AMS Subject Classification (2000) Primary 33C45; Secondary 39B42

A. Branquinho CMUC and Department of Mathematics, University of Coimbra (FCTUC), Coimbra, Portugal e-mail: [email protected] E. J. Huertas () Departamento de Física y Matemáticas, Universidad de Alcalá, Madrid, Spain e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_2


1 Introduction

In recent years increasing attention has been paid to the notion of multiple orthogonality (see, for instance, [1]). Multiple orthogonal polynomials are a generalization of orthogonal polynomials [13], satisfying orthogonality conditions with respect to a number of measures, instead of just one measure. There exists a vast literature on this subject, e.g. the classical works [2, 4], [24, Ch. 23] and [29], among others. A characterization through a vector functional equation, where the authors call them d-orthogonal polynomials instead of multiple orthogonal polynomials, was given in [17]. Their asymptotic behavior has been studied in [5], continued in [14], and properties of their zeros have been analyzed in [22].

The Bäcklund transformation approach is one of the most effective methods for constructing explicit solutions of the partial differential equations known as integrable systems, which play important roles in mechanics, physics and differential geometry. Roughly speaking, it consists in generating new solutions for a given partial differential equation by starting with a "seed" solution to the same (or a different) PDE and solving an auxiliary system of ODEs. Bäcklund transformations resulted from the symmetrization process in the usual (standard) orthogonality, which allows one to jump from one hierarchy to another in the whole Toda lattice hierarchy (see [21]). That is, they allow reinterpretations inside the hierarchy. In [9], the authors found certain Bäcklund-type transformations (also known as Miura-type transformations) which allow one to reduce problems in a given full Kostant–Toda hierarchy to another. Also, in [6], where Peherstorfer's work [30] is extended to the whole Toda hierarchy, it is shown how this system can be described with the evolution of only one parameter instead of two, using exactly this kind of transformation.
Other applications to the Toda systems appear in [3, 7] and [8], where the authors studied Bogoyavlenskii systems modeled by certain symmetric multiple orthogonal polynomials.

The symmetrization of linear functionals in just one variable is well known in the literature [13, I.8, p. 40]. Broadly speaking, given a quasi-definite, symmetric linear functional S defined on the linear space $\mathbb{P}$ of polynomials with real coefficients, and the linear functional L defined by $L[x^n] := S[x^{2n}]$, if $\{S_n\}_{n\geq 0}$ is the monic orthogonal polynomial sequence associated to S, then one has $S_n(-x) = (-1)^n S_n(x)$. Moreover, there exist two monic polynomial sequences $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$ such that
$$S_{2m}(x) = P_m(x^2), \qquad S_{2m+1}(x) = x\,Q_m(x^2),$$
satisfying $L[P_m(x)P_n(x)] = S[S_{2m}(x)S_{2n}(x)]$ and
$$L^*[Q_m(x)Q_n(x)] = L[x\,Q_m(x)Q_n(x)] = S[S_{2m+1}(x)S_{2n+1}(x)].$$


For every polynomial $\pi(x) \in \mathbb{P}$ and any real or complex number κ, the linear functional $L^*_\kappa$ is defined by $L^*_\kappa[\pi(x)] := L[(x-\kappa)\pi(x)]$, and $\{Q_n\}_{n\geq 0}$ is the so-called monic kernel orthogonal polynomial sequence of parameter κ associated to $\{P_n\}_{n\geq 0}$.

Our aim here is to extend the above ideas to the framework of multiple orthogonality, i.e., in this work we analyze the effect of symmetrization in systems of multiple orthogonality measures. Our viewpoint sheds some new light on the subject, and we prove that the symmetrization process in multiple orthogonality is a model to define the aforementioned Bäcklund-type transformations, as happens in the scalar case with the Bäcklund transformations (see [13, 26, 30]). Furthermore, we solve the so-called symmetrization problem in the theory of multiple orthogonal polynomials. We apply certain Darboux transformations, already described in [9], to a (d + 2)-banded matrix, associated to a (d + 2)-term recurrence relation satisfied by an arbitrary sequence of type II multiple orthogonal polynomials, to obtain a total of d + 1 sequences of not necessarily symmetric multiple orthogonal polynomials, which we use to construct a new sequence of symmetric multiple orthogonal polynomials.

On the other hand, following the ideas in [26] (and the references therein) for standard sequences of orthogonal polynomials, in [28] (see also [16]) the authors provide a cubic decomposition for sequences of polynomials, multiple orthogonal with respect to two different linear functionals. Concerning the symmetric case, in [27] this cubic decomposition is analyzed for a 2-symmetric sequence of polynomials, which is called a diagonal cubic decomposition (CD) by the authors. Here, we also extend this notion of diagonal decomposition to a more general case, considering symmetric sequences of polynomials multiple orthogonal with respect to d > 3 linear functionals.

The structure of the manuscript is as follows. In Sect. 2 we summarize without proofs the relevant material about multiple orthogonal polynomials, and basic background on the matrix interpretation of the type II multi-orthogonality conditions with respect to a regular system of d linear functionals $\{u^1, \ldots, u^d\}$ and diagonal multi-indices. In Sect. 3 we fully characterize the Weyl function $R_J$ and the Stieltjes matrix function F associated to the block Jacobi matrix J corresponding to a (d + 2)-term recurrence relation satisfied by a symmetric sequence of type II multiple orthogonal polynomials. In Sect. 4, starting from an arbitrary sequence of type II multiple polynomials satisfying a (d + 2)-term recurrence relation, we state the conditions to find a total of d + 1 sequences of type II multiple orthogonal polynomials, in general non-symmetric, which can be used to construct a new sequence of symmetric type II multiple orthogonal polynomials. Moreover, we also deal with the converse problem, i.e., we propose a decomposition of a given symmetric type II multiple orthogonal polynomial sequence, which allows us to find a set of d + 1 other (in general non-symmetric) sequences of type II multiple orthogonal polynomials, satisfying in turn (d + 2)-term recurrence relations. Section 5 is devoted to a Favard-type result, showing that a certain 3 × 3 matrix decomposition of a sequence of type II multiple 2-orthogonal polynomials satisfies a matrix four-term recurrence relation, and is therefore type II multiple 2-orthogonal (in a matrix sense) with respect to a certain system of matrix measures. Finally, in Sect. 6, we provide an example of the methods developed throughout the paper. We define a particular sequence of type II multiple Hermite orthogonal polynomials and construct the corresponding matrix type II multiple 2-orthogonal polynomial sequence satisfying a matrix four-term recurrence relation. Furthermore, following the techniques of Sect. 4, we construct the corresponding sequence of symmetric type II multiple orthogonal polynomials.
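As a point of reference for the scalar case recalled above, the following sketch checks the symmetrization relations for the monic Hermite polynomials, a standard choice of symmetric functional S used here only as an illustration. With $L[y^n] := S[x^{2n}] = (2n-1)!!$ (Gaussian moments), the even-part polynomials $P_m$ extracted from $S_{2m}(x) = P_m(x^2)$ turn out to be orthogonal with respect to L:

```python
# Scalar (d = 1) sanity check of the symmetrization, for the Gaussian
# (monic Hermite) functional S: S[x^(2n)] = (2n-1)!!, odd moments zero.
def double_fact(n):
    """(2n-1)!!, with the convention that the empty product is 1."""
    out = 1
    for k in range(1, 2 * n, 2):
        out *= k
    return out

# L acts on a polynomial given by its coefficient list in y.
L = lambda coeffs: sum(c * double_fact(k) for k, c in enumerate(coeffs))

# Monic Hermite: He_2 = x^2 - 1, He_4 = x^4 - 6x^2 + 3, so the even parts
# are P_0 = 1, P_1(y) = y - 1, P_2(y) = y^2 - 6y + 3.
P = {0: [1], 1: [-1, 1], 2: [3, -6, 1]}

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# The P_m are mutually orthogonal with respect to L, as claimed:
assert L(poly_mul(P[1], P[0])) == 0
assert L(poly_mul(P[2], P[0])) == 0
assert L(poly_mul(P[2], P[1])) == 0
```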

2 Definitions and Matrix Interpretation of Multiple Orthogonality

Let $\mathbf{n} = (n_1, \ldots, n_d) \in \mathbb{N}^d$ be a multi-index with length $|\mathbf{n}| := n_1 + \cdots + n_d$ and let $\{u^j\}_{j=1}^d$ be a set of linear functionals, i.e. $u^j : \mathbb{P} \to \mathbb{C}$. Let $\{P_\mathbf{n}\}$ be a sequence of polynomials, with $\deg P_\mathbf{n}$ at most $|\mathbf{n}|$. $\{P_\mathbf{n}\}$ is said to be type II multiple orthogonal with respect to the set of linear functionals $\{u^j\}_{j=1}^d$ and the multi-index $\mathbf{n}$ if
$$u^j(x^k P_\mathbf{n}) = 0, \qquad k = 0, 1, \ldots, n_j - 1, \quad j = 1, \ldots, d. \tag{1}$$
A multi-index $\mathbf{n}$ is said to be normal for the set of linear functionals $\{u^j\}_{j=1}^d$ if the degree of $P_\mathbf{n}$ is exactly $|\mathbf{n}| = n$. When all the multi-indices of a given family are normal, we say that the set of linear functionals $\{u^j\}_{j=1}^d$ is regular. In the present work we restrict our attention to the so-called diagonal multi-indices $\mathbf{n} = (n_1, \ldots, n_d) \in I$, where
$$I = \{(0, 0, \ldots, 0), (1, 0, \ldots, 0), \ldots, (1, 1, \ldots, 1), (2, 1, \ldots, 1), \ldots, (2, 2, \ldots, 2), \ldots\}.$$
Notice that there exists a one-to-one correspondence, i, between the above set of diagonal multi-indices $I \subset \mathbb{N}^d$ and $\mathbb{N}$, given by $i(\mathbf{n}) = |\mathbf{n}| = n$. Therefore, to simplify the notation, we write in the sequel $P_\mathbf{n} \equiv P_{|\mathbf{n}|} = P_n$. The left-multiplication of a linear functional $u : \mathbb{P} \to \mathbb{C}$ by a polynomial $p \in \mathbb{P}$ is given by the new linear functional $p\,u : \mathbb{P} \to \mathbb{C}$ such that $p\,u(x^k) = u(p(x)x^k)$, $k \in \mathbb{N}$.

Next, we briefly review a matrix interpretation of type II multiple orthogonal polynomials with respect to a system of d regular linear functionals and a family of diagonal multi-indices. Throughout this work we will use this matrix interpretation as a useful tool to obtain some of the main results of the manuscript. For a recent and deeper account of the theory (in a more general framework, considering quasi-diagonal multi-indices) we refer the reader to [11].
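A minimal sketch of the correspondence $i(\mathbf{n}) = |\mathbf{n}| = n$ for diagonal multi-indices (the helper name below is ours, introduced only for illustration):

```python
def diagonal_multi_index(n, d):
    """The n-th diagonal multi-index in I: the first (n mod d) components
    receive one extra increment, e.g. d = 3, n = 4 -> (2, 1, 1)."""
    q, r = divmod(n, d)
    return tuple(q + 1 if j < r else q for j in range(d))

# The correspondence i(n) = |n| = n between I and the naturals:
for n in range(20):
    assert sum(diagonal_multi_index(n, 3)) == n

assert diagonal_multi_index(3, 3) == (1, 1, 1)
assert diagonal_multi_index(4, 3) == (2, 1, 1)
```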


Let us consider the family of vector polynomials $\mathbb{P}^d = \{[P_1 \; \cdots \; P_d]^T,\ d \in \mathbb{N},\ P_j \in \mathbb{P}\}$, and $\mathcal{M}_{d\times d}$ the set of d × d matrices with entries in $\mathbb{C}$. Let $\{X_j\}$ be the family of vector polynomials $X_j \in \mathbb{P}^d$ defined by
$$X_j = \begin{bmatrix} x^{jd} & \cdots & x^{(j+1)d-1} \end{bmatrix}^T, \qquad j \in \mathbb{N}, \tag{2}$$
where $X_0 = [1 \; \cdots \; x^{d-1}]^T$. By means of the shift $n \to nd$, associated with $\{P_n\}$, we define the sequence of vector polynomials $\{\mathcal{P}_n\}$, with
$$\mathcal{P}_n = \begin{bmatrix} P_{nd}(x) & \cdots & P_{(n+1)d-1}(x) \end{bmatrix}^T, \qquad n \in \mathbb{N}, \quad \mathcal{P}_n \in \mathbb{P}^d. \tag{3}$$
Let $u^j : \mathbb{P} \to \mathbb{C}$, $j = 1, \ldots, d$, be a system of linear functionals as in (1). From now on, we define the vector of functionals $\mathcal{U} = [u^1 \; \cdots \; u^d]^T$, acting from $\mathbb{P}^d$ into $\mathcal{M}_{d\times d}$, by
$$\mathcal{U}(\mathbb{P}) = \mathcal{U}\cdot\mathbb{P}^T = \begin{bmatrix} u^1(P_1) & \cdots & u^d(P_1) \\ \vdots & \ddots & \vdots \\ u^1(P_d) & \cdots & u^d(P_d) \end{bmatrix}^T,$$
where "·" means the symbolic product of the vectors $\mathcal{U}$ and $\mathbb{P}^T$. Let
$$A_l(x) = \sum_{k=0}^{l} A_k^l x^k$$
be a matrix polynomial of degree l, where $A_k^l \in \mathcal{M}_{d\times d}$, and $\mathcal{U}$ a vector of functionals. We define the new vector of functionals called the left multiplication of $\mathcal{U}$ by a matrix polynomial $A_l$, denoted by $A_l\,\mathcal{U}$, as the map from $\mathbb{P}^d$ into $\mathcal{M}_{d\times d}$ described by
$$(A_l\,\mathcal{U})(\mathbb{P}) = \sum_{k=0}^{l} \big(x^k\,\mathcal{U}\big)(\mathbb{P})\,(A_k^l)^T. \tag{4}$$
From the above definition we introduce the notion of moments of order $j \in \mathbb{N}$ associated with the vector of functionals $x^k\,\mathcal{U}$, which will be in general the following


d × d matrices:
$$U_j^k = \big(x^k\,\mathcal{U}\big)(X_j) = \begin{bmatrix} u^1(x^{jd+k}) & \cdots & u^d(x^{jd+k}) \\ \vdots & \ddots & \vdots \\ u^1(x^{(j+1)d-1+k}) & \cdots & u^d(x^{(j+1)d-1+k}) \end{bmatrix}, \qquad j, k \in \mathbb{N},$$
and from these moments we construct the block Hankel matrix of moments
$$H_n = \begin{bmatrix} U_0^0 & \cdots & U_n^0 \\ \vdots & \ddots & \vdots \\ U_0^n & \cdots & U_n^n \end{bmatrix}, \qquad n \in \mathbb{N}.$$
We say that the vector of functionals $\mathcal{U}$ is regular if the determinants of the principal minors of the above matrix are non-zero for every $n \in \mathbb{N}$. Notice that the two notions of regularity, that for the set of entries of $\{u^j\}_{j=1}^d$ and that for $\mathcal{U}$ itself, are suitably compatibilized in [11, Th. 4].

Having in mind (2), it is obvious that $X_j = (x^d)^j X_0$, $j \in \mathbb{N}$. Thus, from (3) we can express $\mathcal{P}_n(x)$ in the alternative way
$$\mathcal{P}_n(x) = \sum_{j=0}^{n} P_j^n X_j, \qquad P_j^n \in \mathcal{M}_{d\times d}, \tag{5}$$
where the matrix coefficients $P_j^n$, $j = 0, 1, \ldots, n$, are uniquely determined. Thus, it also holds that
$$\mathcal{P}_n(x) = W_n(x^d)\,X_0, \tag{6}$$
where $W_n$ is a matrix polynomial (i.e., a d × d matrix whose entries are polynomials) of degree n and dimension d, given by
$$W_n(x) = \sum_{j=0}^{n} P_j^n x^j, \qquad P_j^n \in \mathcal{M}_{d\times d}. \tag{7}$$
Notice that the matrices $P_j^n \in \mathcal{M}_{d\times d}$ in (7) are the same as in (5). Within this context, we can describe the matrix interpretation of multiple orthogonality for diagonal multi-indices. Let $\{\mathcal{P}_n\}$ be a sequence of vector polynomials with polynomial entries as in (3), and $\mathcal{U}$ a vector of functionals as described above. $\{\mathcal{P}_n\}$ is said to be a type II vector multiple orthogonal polynomial sequence with respect to the vector of functionals $\mathcal{U}$ and a set of diagonal multi-indices if
$$\text{i)}\ \big(x^k\,\mathcal{U}\big)(\mathcal{P}_n) = 0_{d\times d}, \quad k = 0, 1, \ldots, n-1; \qquad \text{ii)}\ \big(x^n\,\mathcal{U}\big)(\mathcal{P}_n) = \Delta_n, \tag{8}$$


where $\Delta_n$ is a regular upper triangular d × d matrix (see [11, Th. 3], considering diagonal multi-indices).

Next, we introduce a few aspects of duality theory, which will be useful in the sequel. We denote by $\mathbb{P}^*$ the dual space of $\mathbb{P}$, i.e. the linear space of linear functionals defined on $\mathbb{P}$ over $\mathbb{C}$. Let $\{P_n\}$ be a sequence of monic polynomials. We call $\{\ell_n\}$, $\ell_n \in \mathbb{P}^*$, the dual sequence of $\{P_n\}$ if $\ell_i(P_j) = \delta_{i,j}$, $i, j \in \mathbb{N}$, holds. Given a sequence of linear functionals $\{\ell_n\} \subset \mathbb{P}^*$, by means of the shift $n \to nd$, the vector sequence of linear functionals $\{\mathcal{L}_n\}$, with
$$\mathcal{L}_n = \begin{bmatrix} \ell_{nd} & \cdots & \ell_{(n+1)d-1} \end{bmatrix}^T, \qquad n \in \mathbb{N},$$
is said to be the vector sequence of linear functionals associated with $\{\ell_n\}$.

It is very well known (see [17]) that a given sequence of type II polynomials $\{P_n\}$, simultaneously orthogonal with respect to d linear functionals, or simply d-orthogonal polynomials, satisfies the following (d + 2)-term recurrence relation
$$xP_{n+d}(x) = P_{n+d+1}(x) + \beta_{n+d}P_{n+d}(x) + \sum_{\nu=0}^{d-1} \gamma_{n+d-\nu}^{\,d-1-\nu}\,P_{n+d-1-\nu}(x), \tag{9}$$
with $\gamma_{n+1}^{0} \neq 0$ for $n \geq 0$, the initial conditions $P_0(x) = 1$, $P_1(x) = x - \beta_0$, and
$$P_n(x) = (x - \beta_{n-1})P_{n-1}(x) - \sum_{\nu=0}^{n-2} \gamma_{n-1-\nu}^{\,d-1-\nu}\,P_{n-2-\nu}(x), \qquad 2 \leq n \leq d.$$
E.g., if d = 2, the sequence of monic type II multiple orthogonal polynomials $\{P_n\}$ with respect to the regular system of functionals $\{u^1, u^2\}$ and normal multi-indices satisfies, for every $n \geq 0$, the following four term recurrence relation (see [11, 25, Lemma 1-a])
$$xP_{n+2}(x) = P_{n+3}(x) + \beta_{n+2}P_{n+2}(x) + \gamma_{n+2}^{1}P_{n+1}(x) + \gamma_{n+1}^{0}P_n(x), \tag{10}$$
where $\beta_{n+2}, \gamma_{n+2}^{1}, \gamma_{n+1}^{0} \in \mathbb{C}$, $\gamma_{n+1}^{0} \neq 0$, $P_0(x) = 1$, $P_1(x) = x - \beta_0$ and $P_2(x) = (x - \beta_1)P_1(x) - \gamma_1^1 P_0(x)$.
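The four term recurrence (10) can be observed numerically. The sketch below builds type II multiple orthogonal polynomials for two Gaussian-type functionals $u^j(f) = \int f(x)\,e^{-x^2+c_j x}\,dx$ (a multiple Hermite flavoured choice made here for illustration; the parameters $c_j$, the function names, and the degree range are all sample data), solving the linear system given by the conditions (1) for diagonal multi-indices, and then checks that $xP_4$ has no components along $P_1$ and $P_0$:

```python
import numpy as np
from math import ceil, sqrt, pi, exp

def moments(c, K):
    """m_k = int x^k exp(-x^2 + c x) dx via m_{k+1} = (c m_k + k m_{k-1})/2."""
    m = [sqrt(pi) * exp(c * c / 4.0)]
    m.append(c / 2.0 * m[0])
    for k in range(1, K):
        m.append((c * m[k] + k * m[k - 1]) / 2.0)
    return m

cs = (1.0, -1.0)                  # two sample "multiple Hermite" parameters
M = [moments(c, 40) for c in cs]

def u(j, coeffs, shift=0):
    """u^j(x^shift * p) for a polynomial p given by its coefficient list."""
    return sum(a * M[j][i + shift] for i, a in enumerate(coeffs))

def type2_poly(n):
    """Monic type II multiple OP of degree n for the diagonal multi-index
    (ceil(n/2), floor(n/2)), found by solving the linear system (1)."""
    if n == 0:
        return np.array([1.0])
    conds = [(0, k) for k in range(ceil(n / 2))] + \
            [(1, k) for k in range(n // 2)]
    rows = [[M[j][i + k] for i in range(n)] for j, k in conds]
    rhs = [-M[j][n + k] for j, k in conds]
    a = np.linalg.solve(np.array(rows), np.array(rhs))
    return np.append(a, 1.0)      # coefficients of 1, x, ..., x^n

P = [type2_poly(n) for n in range(7)]

# Orthogonality residuals vanish ...
assert abs(u(0, P[5], shift=2)) < 1e-8 and abs(u(1, P[5], shift=1)) < 1e-8

# ... and x P_4 = P_5 + b P_4 + g1 P_3 + g0 P_2: express x P_4 in the monic
# basis {P_m} by back-substitution and check that P_1, P_0 get ~0 weight.
rest = np.concatenate(([0.0], P[4]))
coeff = {}
for m in range(5, -1, -1):
    coeff[m] = rest[m]
    rest[: m + 1] -= coeff[m] * P[m]
assert abs(coeff[1]) < 1e-8 and abs(coeff[0]) < 1e-8
```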

Definition 1 ([17, Def. 4.1.]) A monic system of polynomials $\{S_n\}$ is said to be d-symmetric when it verifies
$$S_n(\xi_k x) = \xi_k^n S_n(x), \qquad n \geq 0, \tag{11}$$
where $\xi_k = \exp(2k\pi i/(d+1))$, $k = 1, \ldots, d$, and $\xi_k^{d+1} = 1$. Notice that, if d = 1, then $\xi_k = -1$ and therefore $S_n(-x) = (-1)^n S_n(x)$ (see [13]).


We also say (see [17, Def. 4.2.]) that the vector of linear functionals $\mathcal{L}_0 = [\ell_0 \; \cdots \; \ell_{d-1}]^T$ is d-symmetric when the moments of its entries satisfy, for every $n \geq 0$,
$$\ell_\nu(x^{(d+1)n+\mu}) = 0, \qquad \nu = 0, 1, \ldots, d-1, \quad \mu = 0, 1, \ldots, d, \quad \nu \neq \mu. \tag{12}$$
Observe that if d = 1, this condition leads to the well known fact $\ell_0(x^{2n+1}) = 0$, i.e., all the odd moments of a symmetric moment functional are zero (see [13, Def. 4.1, p. 20]). Under the above assumptions, we have the following

Theorem 1 (Cf. [17, Th. 4.1]) For every sequence of monic polynomials $\{S_n\}$, d-orthogonal with respect to the vector of linear functionals $\mathcal{L}_0 = [\ell_0 \; \cdots \; \ell_{d-1}]^T$, the following statements are equivalent:
(a) The vector of linear functionals $\mathcal{L}_0$ is d-symmetric.
(b) The sequence $\{S_n\}$ is d-symmetric.
(c) The sequence $\{S_n\}$ satisfies
$$xS_{n+d}(x) = S_{n+d+1}(x) + \gamma_{n+1}S_n(x), \qquad n \geq 0, \tag{13}$$
with $S_n(x) = x^n$ for $0 \leq n \leq d$.

Notice that (13) is a particular case of the (d + 2)-term recurrence relation (9). Continuing the same example above for d = 2, it directly implies that the sequence of polynomials $\{S_n\}$ satisfies the particular case of (9) given by $S_0(x) = 1$, $S_1(x) = x$, $S_2(x) = x^2$ and
$$xS_{n+2}(x) = S_{n+3}(x) + \gamma_{n+1}S_n(x), \qquad n \geq 0.$$
Notice that the coefficients $\beta_{n+2}$ and $\gamma_{n+2}^1$ of the polynomials $S_{n+2}$ and $S_{n+1}$, respectively, on the right hand side of (10) are zero. On the other hand, the (d + 2)-term recurrence relation (13) can be rewritten in terms of the vector polynomials (3), and then we obtain what will be referred to as the symmetric type II vector multiple orthogonal polynomial sequence $\mathcal{S}_n = [S_{nd} \; \cdots \; S_{(n+1)d-1}]^T$. For $n \to dn + j$, $j = 0, 1, \ldots, d-1$ and $n \in \mathbb{N}$, we have the following matrix three term recurrence relation

$$x\mathcal{S}_n = A\,\mathcal{S}_{n+1} + B\,\mathcal{S}_n + C_n\,\mathcal{S}_{n-1}, \quad n = 0, 1, \ldots \qquad (14)$$

The Symmetrization Problem for Multiple Orthogonal Polynomials


with $\mathcal{S}_{-1} = (0 \; \cdots \; 0)^T$, $\mathcal{S}_0 = (S_0 \; \cdots \; S_{d-1})^T$, and matrix coefficients $A, B, C_n \in \mathcal{M}_{d\times d}$ given by

$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \\ 1 & 0 & \cdots & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix}, \quad \text{and} \quad C_n = \operatorname{diag}\big(\gamma_{(n-1)d+1}, \ldots, \gamma_{nd}\big). \qquad (15)$$

Note that, in this case, one has $\mathcal{S}_0 = X_0 = (1 \; x \; \cdots \; x^{d-1})^T$. Since $\{S_n\}$ satisfies (13), it is clear that this $d$-symmetric type II multiple polynomial sequence is orthogonal with respect to a certain system of $d$ linear functionals, say $\{v^1, \ldots, v^d\}$. Hence, according to the matrix interpretation of multiple orthogonality, the corresponding type II vector multiple polynomial sequence $\{\mathcal{S}_n\}$ is orthogonal with respect to a symmetric vector of functionals $\mathcal{V} = (v^1 \; \cdots \; v^d)^T$. The corresponding matrix orthogonality conditions for $\{\mathcal{S}_n\}$ and $\mathcal{V}$ are described in (8). One of the main goals of this manuscript is to analyze symmetric sequences of type II vector multiple polynomials, orthogonal with respect to a symmetric vector of functionals. The remainder of this section is devoted to the proof of one of our main results concerning the moments of the $d$ functional entries of such a symmetric vector of functionals $\mathcal{V}$. The following theorem states that, under certain conditions, the moments of each functional entry in $\mathcal{V}$ can be given in terms of the moments of another functional entry in the same $\mathcal{V}$.

Theorem 2 If $\mathcal{V} = (v^1 \; \cdots \; v^d)^T$ is a symmetric vector of functionals, the moments of each functional entry $v^j$, $j = 1, 2, \ldots, d$, in $\mathcal{V}$ can be expressed, for all $n \geq 0$, as follows:

(i) If $\mu = 0, 1, \ldots, j-2$, then
$$v^j(x^{(d+1)n+\mu}) = \frac{v_{j,\mu}}{v_{\mu+1,\mu}}\, v^{\mu+1}(x^{(d+1)n+\mu}),$$
where $v_{k,l} = v^k(S_l)$.
(ii) If $\mu = j-1$, the value $v^j(x^{(d+1)n+\mu})$ depends on $v^j$, and it is different from zero.
(iii) If $\mu = j, j+1, \ldots, d$, then $v^j(x^{(d+1)n+\mu}) = 0$.


Proof In the matrix framework of multiple orthogonality, the type II vector polynomials $\mathcal{S}_n$ are multiple orthogonal with respect to the symmetric vector of moment functionals $\mathcal{V} : \mathbb{P}^d \to \mathcal{M}_{d\times d}$, with $\mathcal{V} = (v^1 \; \cdots \; v^d)^T$. If we multiply (14) by $x^{n-1}$ and use the linearity of $\mathcal{V}$, we get, for $n = 0, 1, \ldots$,

$$\mathcal{V}(x^n \mathcal{S}_n) = A\,\mathcal{V}(x^{n-1}\mathcal{S}_{n+1}) + B\,\mathcal{V}(x^{n-1}\mathcal{S}_n) + C_n\,\mathcal{V}(x^{n-1}\mathcal{S}_{n-1}).$$

By the orthogonality conditions (8) for $\mathcal{V}$, we have

$$A\,\mathcal{V}(x^{n-1}\mathcal{S}_{n+1}) = B\,\mathcal{V}(x^{n-1}\mathcal{S}_n) = 0_{d\times d}, \quad n = 0, 1, \ldots,$$

and iterating the remaining expression

$$\mathcal{V}(x^n \mathcal{S}_n) = C_n\,\mathcal{V}(x^{n-1}\mathcal{S}_{n-1}), \quad n = 0, 1, \ldots,$$

we obtain

$$\mathcal{V}(x^n \mathcal{S}_n) = C_n C_{n-1} \cdots C_1\, \mathcal{V}(\mathcal{S}_0), \quad n = 0, 1, \ldots.$$

The matrix $\mathcal{V}(\mathcal{S}_0)$ is given by

$$\mathcal{V}(\mathcal{S}_0) = V_0^0 = \begin{bmatrix} v^1(S_0) & \cdots & v^d(S_0) \\ \vdots & \ddots & \vdots \\ v^1(S_{d-1}) & \cdots & v^d(S_{d-1}) \end{bmatrix}.$$

To simplify the notation, in the sequel $v_{i,j-1}$ denotes $v^i(S_{j-1})$. Notice that (8) leads to the fact that the above matrix is upper triangular, i.e., the entries below the main anti-pattern of non-zero values vanish, that is

$$V_0^0 = \begin{bmatrix} v_{1,0} & v_{2,0} & \cdots & v_{d,0} \\ & v_{2,1} & \cdots & v_{d,1} \\ & & \ddots & \vdots \\ & & & v_{d,d-1} \end{bmatrix}. \qquad (16)$$

Let $\mathcal{L}_0$ be a $d$-symmetric vector of linear functionals as in Theorem 1. We can express $\mathcal{V}$ in terms of $\mathcal{L}_0$ as $\mathcal{V} = G_0 \mathcal{L}_0$. Thus, we have $(G_0^{-1}\mathcal{V})(\mathcal{S}_0) = \mathcal{L}_0(\mathcal{S}_0) = I_d$. From (4) we have

$$\big(G_0^{-1}\mathcal{V}\big)(\mathcal{S}_0) = \mathcal{V}(\mathcal{S}_0)\,(G_0^{-1})^T = V_0^0\,(G_0^{-1})^T = I_d.$$


Therefore, taking into account (16), we conclude

$$G_0 = (V_0^0)^T = \begin{bmatrix} v_{1,0} & & & \\ v_{2,0} & v_{2,1} & & \\ \vdots & \vdots & \ddots & \\ v_{d,0} & v_{d,1} & \cdots & v_{d,d-1} \end{bmatrix}.$$

Observe that the matrix $(V_0^0)^T$ is lower triangular, and every entry on its main diagonal is different from zero, so $G_0^{-1}$ always exists and is a lower triangular matrix. Since $\mathcal{V} = G_0 \mathcal{L}_0$, we finally obtain the expressions

$$v^1 = v_{1,0}\,\ell_0, \quad v^2 = v_{2,0}\,\ell_0 + v_{2,1}\,\ell_1, \quad v^3 = v_{3,0}\,\ell_0 + v_{3,1}\,\ell_1 + v_{3,2}\,\ell_2, \quad \ldots, \quad v^d = v_{d,0}\,\ell_0 + v_{d,1}\,\ell_1 + \cdots + v_{d,d-1}\,\ell_{d-1}, \qquad (17)$$

between the entries of $\mathcal{L}_0$ and $\mathcal{V}$. Next, from (17) and (12), together with the crucial fact that every entry on the main diagonal of $V_0^0$ is different from zero, it is a simple matter to check the three statements of the theorem. We verify them only for the functionals $v^1$ and $v^2$; the other cases can be deduced in a similar way. From (17) we get $v^1(x^{(d+1)n+\mu}) = v_{1,0}\,\ell_0(x^{(d+1)n+\mu})$. Then, from (12) we see that for every $\mu \neq 0$ we have $v^1(x^{(d+1)n+\mu}) = 0$ (statement (iii)). If $\mu = 0$, we have $v^1(x^{(d+1)n}) = v_{1,0}\,\ell_0(x^{(d+1)n}) \neq 0$ (statement (ii)). Next, from (17) we get $v^2(x^{(d+1)n+\mu}) = v_{2,0}\,\ell_0(x^{(d+1)n+\mu}) + v_{2,1}\,\ell_1(x^{(d+1)n+\mu})$. Then, from (12) we see that for every $\mu \neq 0, 1$ we have $v^2(x^{(d+1)n+\mu}) = 0$ (statement (iii)). If $\mu = 0$ we have

$$v^2(x^{(d+1)n}) = \frac{v_{2,0}}{v_{1,0}}\, v^1(x^{(d+1)n}) \quad \text{(statement (i))}.$$

If $\mu = 1$ then $v^2(x^{(d+1)n+1}) = v_{2,1}\,\ell_1(x^{(d+1)n+1}) \neq 0$ (statement (ii)). Thus, the theorem follows.
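The $d$-symmetry characterization underlying this result is easy to check numerically. A minimal sketch, assuming $d = 2$ and hypothetical non-zero values $\gamma_n$: build $S_n$ from the recurrence (13) of Theorem 1(c) and verify the $d$-symmetry property (11) at sample points:

```python
# Sketch: build S_n from the high-order recurrence (13) for d = 2 and check
# the d-symmetry property (11): S_n(xi*z) = xi^n * S_n(z), xi = exp(2*pi*i/3).
# The gamma values are hypothetical, non-zero choices.
import cmath
import numpy as np
from numpy.polynomial import polynomial as P

d, N = 2, 9
gamma = [0.3 + 0.1 * n for n in range(N + 2)]            # gamma_1, gamma_2, ...

S = [np.array([0.0] * n + [1.0]) for n in range(d + 1)]  # S_n = x^n, 0 <= n <= d
for n in range(N):                                       # x S_{n+d} = S_{n+d+1} + gamma_{n+1} S_n
    S.append(P.polysub(P.polymul([0.0, 1.0], S[n + d]), gamma[n + 1] * S[n]))

xi = cmath.exp(2j * cmath.pi / (d + 1))
for n, s in enumerate(S):
    for z in [0.7, 1.3 - 0.4j]:                          # sample points
        assert abs(P.polyval(xi * z, s) - xi ** n * P.polyval(z, s)) < 1e-9
```

The check passes because each $S_n$ built from (13) contains only powers congruent to $n$ modulo $d+1$.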



3 Representation of the Stieltjes and Weyl Functions

Let $\mathcal{U} = (u^1 \; \cdots \; u^d)^T$ be a vector of functionals. We define the Stieltjes matrix function associated to $\mathcal{U}$ (or matrix generating function associated to $\mathcal{U}$), $\mathcal{F}$, by (see [11])

$$\mathcal{F}(z) = \sum_{n=0}^{\infty} \frac{(x^n\,\mathcal{U})(X_0(x))}{z^{n+1}}.$$


We define the above $\mathcal{F}(z)$ when the sequence of matrix moments $(x^n\,\mathcal{U})$ is bounded by a constant, say $m$, independent of $n$. Thus, $\mathcal{F}(z)$ is well defined for values $|z| > m$. In this section we find the relation between the Stieltjes matrix function $\mathcal{F}$, associated to a certain $d$-symmetric vector of functionals $\mathcal{V}$, and a certain interesting function associated to the corresponding block Jacobi matrix $J$. Here we deal with $d$-symmetric sequences of type II multiple orthogonal polynomials $\{S_n\}$, and hence $J$ is a $(d+2)$-banded matrix with only two extreme non-zero diagonals, whose block form represents the three-term recurrence relation with $d \times d$ matrix coefficients satisfied by the vector sequence of polynomials $\{\mathcal{S}_n\}$ (associated to $\{S_n\}$), orthogonal with respect to $\mathcal{V}$. Thus, the shape of a Jacobi matrix $J$ associated with the $(d+2)$-term recurrence relation (13), satisfied by a $d$-symmetric sequence of type II multiple orthogonal polynomials $\{S_n\}$, is

$$J = \begin{bmatrix}
0 & 1 & & & & \\
0 & 0 & 1 & & & \\
\vdots & & \ddots & \ddots & & \\
0 & \cdots & & 0 & 1 & \\
\gamma_1 & 0 & \cdots & & 0 & 1 \\
& \gamma_2 & & & & \ddots & \ddots \\
& & \ddots & & & &
\end{bmatrix},$$

i.e. ones on the first superdiagonal and the coefficients $\gamma_n$ on the $d$-th subdiagonal. We can rewrite $J$ as the block tridiagonal matrix

$$J = \begin{bmatrix} B & A & & \\ C_1 & B & A & \\ & C_2 & B & A \\ & & \ddots & \ddots & \ddots \end{bmatrix}$$

associated to a three-term recurrence relation with matrix coefficients, satisfied by the sequence of type II vector multiple orthogonal polynomials $\{\mathcal{S}_n\}$ associated to $\{S_n\}$. Here, every block matrix $A$, $B$ and $C_n$ has size $d \times d$, and they are given in (15). When $J$ is a bounded operator, it is possible to define the resolvent operator by

$$(zI - J)^{-1} = \sum_{n=0}^{\infty} \frac{J^n}{z^{n+1}}, \quad |z| > \|J\|,$$


(see [10]) and we can put in correspondence the following analytic matrix function, known as the Weyl function associated with $J$:

$$R_J(z) = \sum_{n=0}^{\infty} \frac{e_0^T J^n e_0}{z^{n+1}}, \quad |z| > \|J\|,$$

where $e_0 = (I_d \;\; 0_{d\times d} \;\; \cdots)^T$. If we denote by $M_{ij}$ the $d \times d$ block matrices of a semi-infinite matrix $M$, formed by the entries of rows $d(i-1)+1, d(i-1)+2, \ldots, di$ and columns $d(j-1)+1, d(j-1)+2, \ldots, dj$, the matrix $J^n$ can be written as the semi-infinite block matrix

$$J^n = \begin{bmatrix} J_{11}^n & J_{12}^n & \cdots \\ J_{21}^n & J_{22}^n & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}.$$

We can now formulate our first important result in this section. For more details we refer the reader to [7, Sec. 1.2] and [3]. Let $\{\mathcal{S}_n\}$ be a symmetric type II vector multiple polynomial sequence orthogonal with respect to the $d$-symmetric vector of functionals $\mathcal{V}$. Following [11, Th. 7], the matrix generating function associated to $\mathcal{V}$ and the Weyl function associated with $J$, the block Jacobi matrix corresponding to $\{\mathcal{S}_n\}$, can be put in correspondence by means of the matrix expression

$$\mathcal{F}(z) = R_J(z)\,\mathcal{V}(X_0),$$

where, for the $d$-symmetric case, $\mathcal{V}(X_0) = \mathcal{V}(\mathcal{S}_0) = V_0^0$, which is explicitly given in (16). First we study the case $d = 2$, and next we consider more general situations for $d > 2$ functional entries in $\mathcal{V}$. From Theorem 2, we obtain the entries for the representation of the Stieltjes matrix function $\mathcal{F}(z)$, associated with $\mathcal{V} = (v^1 \; v^2)^T$, as

$$\mathcal{F}(z) = \sum_{n=0}^{\infty} \begin{bmatrix} v^1(x^{3n}) & \frac{v_{2,0}}{v_{1,0}}\, v^1(x^{3n}) \\ 0 & v^2(x^{3n+1}) \end{bmatrix} \frac{1}{z^{3n+1}} + \sum_{n=0}^{\infty} \begin{bmatrix} 0 & v^2(x^{3n+1}) \\ 0 & 0 \end{bmatrix} \frac{1}{z^{3n+2}} + \sum_{n=0}^{\infty} \begin{bmatrix} 0 & 0 \\ v^1(x^{3n+3}) & \frac{v_{2,0}}{v_{1,0}}\, v^1(x^{3n+3}) \end{bmatrix} \frac{1}{z^{3n+3}}. \qquad (18)$$

Notice that we have $\mathcal{F}(z) = \mathcal{F}_1(z) + \mathcal{F}_2(z) + \mathcal{F}_3(z)$. The following theorem shows that we can obtain $\mathcal{F}(z)$ in our particular case, analyzing two different situations.


Theorem 3 Let $\mathcal{V} = (v^1 \; v^2)^T$ be a symmetric vector of functionals, with $d = 2$. Then the Weyl function is given by

$$R_J(z) = \sum_{n=0}^{\infty} \begin{bmatrix} \dfrac{v^1(x^{3n})}{v_{1,0}} & 0 \\ 0 & \dfrac{v^2(x^{3n+1})}{v_{2,1}} \end{bmatrix} \frac{1}{z^{3n+1}} + \sum_{n=0}^{\infty} \begin{bmatrix} 0 & \dfrac{v^2(x^{3n+1})}{v_{2,1}} \\ 0 & 0 \end{bmatrix} \frac{1}{z^{3n+2}} + \sum_{n=0}^{\infty} \begin{bmatrix} 0 & 0 \\ \dfrac{v^1(x^{3n+3})}{v_{1,0}} & 0 \end{bmatrix} \frac{1}{z^{3n+3}}.$$

Proof It is enough to multiply (18) on the right by $(V_0^0)^{-1}$; the explicit expression for $V_0^0$ is given in (16).

Computations considering $d > 2$ functionals can be cumbersome, but doable. The matrix generating function $\mathcal{F}$ (as well as the Weyl function) will be the sum of $d+1$ matrix terms, i.e. $\mathcal{F}(z) = \mathcal{F}_1(z) + \cdots + \mathcal{F}_{d+1}(z)$, each of them of size $d \times d$. Let us now outline the very interesting structure of $R_J(z)$ for the general case of $d$ functionals. We shall describe the structure of $R_J(z)$ for $d = 3$, comparing the situation with the general case. Let $*$ denote every non-zero entry in a given matrix. Thus, there will be four $J_{11}$ matrices of size $3 \times 3$. Here, and in the general case, the first matrix $[J_{11}^{(d+1)n}]_{d\times d}$ will always be diagonal, as follows:

$$[J_{11}^{4n}]_{3\times 3} = \begin{bmatrix} * & 0 & 0 \\ 0 & * & 0 \\ 0 & 0 & * \end{bmatrix}.$$

Indeed, observe that for $n = 0$ one has $[J_{11}^{(d+1)n}]_{d\times d} = I_d$. Next, we have

$$[J_{11}^{4n+1}]_{3\times 3} = \begin{bmatrix} 0 & * & 0 \\ 0 & 0 & * \\ 0 & 0 & 0 \end{bmatrix}.$$

From (13) we know that the "distance" between the two extreme non-zero diagonals of $J$ always consists of $d$ zero diagonals. This directly implies that, also in the general case, every entry in the last row of $[J_{11}^{(d+1)n+1}]_{d\times d}$ will always be zero, and therefore the unique non-zero entries in $[J_{11}^{(d+1)n+1}]_{d\times d}$ will be one step over the main diagonal. Next we have

$$[J_{11}^{4n+2}]_{3\times 3} = \begin{bmatrix} 0 & 0 & * \\ 0 & 0 & 0 \\ * & 0 & 0 \end{bmatrix}.$$

Notice that at every step the main diagonal in $[J_{11}^{(d+1)n+k}]_{d\times d}$ goes "one diagonal" up, but the other extreme diagonal of $J$ is also moving upwards, with exactly $d$ zero diagonals between them. It directly implies that, no matter the number of functionals, only the lowest-left element of $[J_{11}^{(d+1)n+2}]_{d\times d}$ coming from the lower extreme diagonal will be different from zero. Finally, we have

$$[J_{11}^{4n+3}]_{3\times 3} = \begin{bmatrix} 0 & 0 & 0 \\ * & 0 & 0 \\ 0 & * & 0 \end{bmatrix}.$$

Here, the last non-zero entry of the main diagonal in $[J_{11}^{4n}]_{3\times 3}$ vanishes. In the general case, it will occur exactly at step $[J_{11}^{(d+1)n+(d-1)}]_{d\times d}$, in which only the uppermost right entry is different from zero. Meanwhile, the matrices $[J_{11}^{(d+1)n+3}]_{d\times d}$ up to $[J_{11}^{(d+1)n+(d-2)}]_{d\times d}$ will have non-zero entries on the two extreme diagonals. In this last situation, the non-zero entries of $[J_{11}^{(d+1)n+d}]_{d\times d}$ will always be exactly one step under the main diagonal.
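The block patterns just described are easy to confirm numerically. A sketch, assuming a truncated $J$ for $d = 3$ with hypothetical non-zero $\gamma_n$ values:

```python
# Numerical sketch of the block pattern above: build the (d+2)-banded matrix J
# of (13) for d = 3 (ones on the first superdiagonal, gamma_n on the d-th
# subdiagonal) and inspect the zero pattern of the leading 3x3 block of J^m.
# Truncation is large enough that boundary effects do not reach the leading
# block for the powers shown; the gamma values are hypothetical.
import numpy as np

d, size = 3, 40
J = np.zeros((size, size))
for i in range(size - 1):
    J[i, i + 1] = 1.0
for n in range(size - d):
    J[n + d, n] = 0.5 + 0.01 * n      # gamma_{n+1}

def pattern(m):
    """Boolean non-zero pattern of the leading d x d block of J^m."""
    block = np.linalg.matrix_power(J, m)[:d, :d]
    return np.abs(block) > 1e-12

# period d+1 = 4: diagonal, superdiagonal, opposite corners, subdiagonal
assert pattern(4).tolist() == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert pattern(5).tolist() == [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
assert pattern(6).tolist() == [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
assert pattern(7).tolist() == [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
```

Every step of $J$ shifts the index by $+1$ or $-d$, both congruent to $+1$ modulo $d+1$, which is why the pattern is periodic with period $d+1$.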

4 The Symmetrization Problem for Multiple OP

Throughout this section, let $\{A_n^1\}$ be an arbitrary, and not necessarily symmetric, sequence of type II multiple orthogonal polynomials satisfying a $(d+2)$-term recurrence relation with known recurrence coefficients, and let $J_1$ be the corresponding $(d+2)$-banded matrix. We will next consider a generalization to the multiple orthogonality setting of the so-called Darboux transformations with shift (for a description of these transformations in the scalar case see, for example, [12, Prop. 3.6]), which in a first step consists of finding factorizations of Jacobi matrices as described below. Let $J_1$ be such that the LU factorization

$$J_1 - CI = LU = L_1 L_2 \cdots L_d\, U \qquad (19)$$

is unique, where $I$ is the identity matrix, $U$ is an upper two-banded, semi-infinite, and invertible matrix, $L$ is a $(d+1)$-banded lower triangular semi-infinite matrix with ones on the main diagonal, and every $L_i$, $i = 1, \ldots, d$, is a lower two-banded, semi-infinite, and invertible matrix with ones on the main diagonal. The parameter $C$, called the shift of the transformation, is a constant such that $\det(J_n - CI_n) \neq 0$ for all $n \in \mathbb{N}$. This number $C$ is chosen so that the factorization of the matrix $L = L_1 L_2 \cdots L_d$ is unique, with all the matrices $L_1, L_2, \ldots, L_d$ being lower two-banded, semi-infinite, and having ones on the main diagonal. See [9] for more details on this factorization in the case of $d$-diagonal Jacobi matrices. We follow [9, Def. 3], where the authors generalize the concept of Darboux transformation to general banded Hessenberg matrices, in assuming that any circular permutation of $L_1 L_2 \cdots L_d U$ is a Darboux transformation of $J_1 - CI$. Thus we have $d$ possible Darboux transformations of $J_1 - CI$, say $J_j - CI$, $j = 2, \ldots, d+1$,


with $J_2 - CI = L_2 \cdots L_d U L_1$, $J_3 - CI = L_3 \cdots L_d U L_1 L_2$, ..., $J_{d+1} - CI = U L_1 L_2 \cdots L_d$. Next, we solve the so-called symmetrization problem in the theory of multiple orthogonal polynomials: starting with $\{A_n^1\}$, we find a total of $d+1$ type II multiple orthogonal polynomial sequences $\{A_n^j\}$, $j = 1, \ldots, d+1$, each satisfying a $(d+2)$-term recurrence relation with known recurrence coefficients, which can be used to construct a new $d$-symmetric type II multiple orthogonal polynomial sequence $\{S_n\}$. It is worth pointing out that all the aforesaid sequences $\{A_n^j\}$, $j = 1, \ldots, d+1$, are of the same kind, with the same number of elements in their respective $(d+2)$-term recurrence relations, and multiple orthogonal with respect to the same number $d$ of functionals.

Theorem 4 Let $\{A_n^1\}$ be an arbitrary and not necessarily symmetric sequence of type II multiple orthogonal polynomials as stated above. Let $J_j - CI$, $j = 2, \ldots, d+1$, be the Darboux transformations of $J_1 - CI$ given by the $d$ cyclic permutations of the matrices on the right-hand side of (19). Let $\{A_n^j\}$, $j = 2, \ldots, d+1$, be the $d$ new families of type II multiple orthogonal polynomials satisfying the $(d+2)$-term recurrence relations given by the matrices $J_j$, $j = 2, \ldots, d+1$. Then, the sequence $\{S_n\}$ defined by

$$\begin{cases} S_{(d+1)n}(x) = A_n^1(x^{d+1} + C), \\ S_{(d+1)n+1}(x) = x\, A_n^2(x^{d+1} + C), \\ \qquad \vdots \\ S_{(d+1)n+d}(x) = x^d\, A_n^{d+1}(x^{d+1} + C), \end{cases} \qquad (20)$$

is a $d$-symmetric sequence of type II multiple orthogonal polynomials.

Proof Let $\{A_n^1\}$ satisfy the $(d+2)$-term recurrence relation given by (9),

$$x A_{n+d}^1(x) = A_{n+d+1}^1(x) + b_{n+d}^{[1]} A_{n+d}^1(x) + \sum_{\nu=0}^{d-1} c_{n+d-\nu}^{\,d-1-\nu,[1]}\, A_{n+d-1-\nu}^1(x),$$

with $c_{n+1}^{0,[1]} \neq 0$ for $n \geq 0$, the initial conditions $A_0^1(x) = 1$, $A_1^1(x) = x - b_0^{[1]}$, and

$$A_n^1(x) = (x - b_{n-1}^{[1]})\, A_{n-1}^1(x) - \sum_{\nu=0}^{n-2} c_{n-1-\nu}^{\,d-1-\nu,[1]}\, A_{n-2-\nu}^1(x), \quad 2 \leq n \leq d,$$

with known recurrence coefficients. Set $\mathcal{A}^j = (A_0^j(x) \;\; A_1^j(x) \;\; A_2^j(x) \;\; \cdots)^T$.


Thus, in matrix notation, we have

$$x\mathcal{A}^1 = J_1\,\mathcal{A}^1, \qquad (21)$$

where

$$J_1 = \begin{bmatrix}
b_d^{[1]} & 1 & & & & \\
c_{d+1}^{d-1,[1]} & b_{d+1}^{[1]} & 1 & & & \\
c_{d+1}^{d-2,[1]} & c_{d+2}^{d-1,[1]} & b_{d+2}^{[1]} & 1 & & \\
\vdots & c_{d+2}^{d-2,[1]} & c_{d+3}^{d-1,[1]} & b_{d+3}^{[1]} & 1 & \\
c_{d+1}^{0,[1]} & \cdots & c_{d+3}^{d-2,[1]} & c_{d+4}^{d-1,[1]} & b_{d+4}^{[1]} & 1 \\
& \ddots & \ddots & \ddots & \ddots & \ddots & \ddots
\end{bmatrix}.$$

Following [9] (see also [23]), the unique LU factorization of the square $(d+2)$-banded semi-infinite Hessenberg matrix $J_1 - CI$ is such that

$$U = \begin{bmatrix}
\gamma_1 & 1 & & & \\
& \gamma_{d+2} & 1 & & \\
& & \gamma_{2d+3} & 1 & \\
& & & \gamma_{3d+4} & \ddots \\
& & & & \ddots
\end{bmatrix} \qquad (22)$$

is an upper two-banded, semi-infinite, and invertible matrix, and $L$ is a $(d+1)$-banded lower triangular, semi-infinite, and invertible matrix with ones on the main diagonal. It is clear that the entries in $L$ and $U$ depend entirely on the known entries of $J_1$ and the constant parameter $C$. Thus, we rewrite (21) as

$$(x - C)\,\mathcal{A}^1 = LU\,\mathcal{A}^1. \qquad (23)$$

Next, we define a new sequence of polynomials $\{A_n^{d+1}\}$ by $(x - C)\,\mathcal{A}^{d+1} = U\mathcal{A}^1$. Multiplying both sides of (23) by the matrix $U$, we have

$$x\,(U\mathcal{A}^1) - C\,(U\mathcal{A}^1) = UL\,(U\mathcal{A}^1), \qquad x(x - C)\,\mathcal{A}^{d+1} - C(x - C)\,\mathcal{A}^{d+1} = UL\,(x - C)\,\mathcal{A}^{d+1},$$

and pulling out $(x - C)$ we get

$$(x - C)\,\mathcal{A}^{d+1} = UL\,\mathcal{A}^{d+1}, \qquad (24)$$

which is the matrix form of the $(d+2)$-term recurrence relation satisfied by the new type II multiple polynomial sequence $\{A_n^{d+1}\}$.


Since $L$ is given by (see [9]) $L = L_1 L_2 \cdots L_d$, where every $L_j$ is the lower two-banded, semi-infinite, and invertible matrix

$$L_j = \begin{bmatrix}
1 & & & & \\
\gamma_{d-j} & 1 & & & \\
& \gamma_{2d+1-j} & 1 & & \\
& & \gamma_{3d+2-j} & 1 & \\
& & & \ddots & \ddots
\end{bmatrix}$$

with ones on the main diagonal, it is also clear that the entries in $L_j$ depend on the known entries in $J_1$. Under the same hypotheses, we can define $d-1$ new polynomial sequences starting with $\mathcal{A}^1$ as follows: $\mathcal{A}^2 = L_1^{-1}\mathcal{A}^1$, $\mathcal{A}^3 = L_2^{-1}\mathcal{A}^2$, ..., $\mathcal{A}^d = L_{d-1}^{-1}\mathcal{A}^{d-1}$, up to $\mathcal{A}^{d+1} = L_d^{-1}\mathcal{A}^d$. That is, $\mathcal{A}^{j+1} = L_j^{-1}\mathcal{A}^j$, $j = 1, \ldots, d$. Combining this fact with (24) we deduce

$$(x - C)\,\mathcal{A}^d = L_d U L_1 L_2 \cdots L_{d-1}\,\mathcal{A}^d, \qquad (x - C)\,\mathcal{A}^{d-1} = L_{d-1} L_d U L_1 L_2 \cdots L_{d-2}\,\mathcal{A}^{d-1},$$

down to the known expression (23),

$$(x - C)\,\mathcal{A}^1 = L_1 L_2 \cdots L_d U\,\mathcal{A}^1.$$

The above formulas mean that all these $d+1$ sequences $\{\mathcal{A}^j\}$, $j = 1, \ldots, d+1$, are in turn type II multiple orthogonal polynomials. Finally, from these $d+1$ sequences we construct the type II polynomials in the sequence $\{S_n\}$ as in (20). Note that, according to (11), it directly follows that $\{S_n\}$ is a $d$-symmetric type II multiple orthogonal polynomial sequence, which proves our assertion.

A particular example of (20) for $C = 0$ is given in [15, (2.16), (2.17) and (2.18)]. There, the author calls the sequence $\{S_n\}$ a Hermite-type $d$-OPS because, up to some point, these polynomials have symmetry properties similar to those of the classical Hermite polynomials. Next, we state the converse of the above theorem. That is, given a sequence of type II $d$-symmetric multiple orthogonal polynomials $\{S_n\}$ satisfying the high-order recurrence relation (13), we find a set of $d+1$ polynomial families $\{A_n^j\}$, $j = 1, \ldots, d+1$, of not necessarily symmetric type II multiple orthogonal polynomials, satisfying in turn $(d+2)$-term recurrence relations, so they are themselves sequences of type II multiple orthogonal polynomials. When $d = 2$, this construction goes back to the work of Douak and Maroni (see [16]).
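The LU/UL mechanics behind these circular permutations can be illustrated in the scalar case $d = 1$, where the construction reduces to the classical Darboux transformation with shift (cf. [12]). The sketch below works on a truncated Jacobi matrix with hypothetical entries; since $UL = U(LU)U^{-1}$, the permuted product has the same spectrum as the truncation of $J$:

```python
# Scalar-case (d = 1) sketch of the Darboux transformation with shift:
# factor J - C*I = L*U without pivoting, then form J2 = U*L + C*I.
# The matrix entries below are hypothetical sample data.
import numpy as np

N, C = 10, -2.0                       # shift C taken below the spectrum
b = np.zeros(N)                       # diagonal (Chebyshev-like toy data)
a = np.full(N - 1, 0.25)              # subdiagonal entries
J = np.diag(b) + np.diag(np.ones(N - 1), 1) + np.diag(a, -1)

# Doolittle LU without pivoting (valid: J - C*I has non-zero leading minors)
A = J - C * np.eye(N)
L, U = np.eye(N), np.zeros((N, N))
for i in range(N):
    U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
    if i + 1 < N:
        L[i + 1:, i] = (A[i + 1:, i] - L[i + 1:, :i] @ U[:i, i]) / U[i, i]

J2 = U @ L + C * np.eye(N)            # Darboux transformation with shift C

assert np.allclose(L @ U, A)
# J2 is again tridiagonal ...
mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) > 1
assert np.max(np.abs(J2[mask])) < 1e-10
# ... and has exactly the same eigenvalues as the truncated J
assert np.allclose(np.sort(np.linalg.eigvals(J).real),
                   np.sort(np.linalg.eigvals(J2).real))
```

For $d \geq 2$ the same idea applies, with $L$ further split into the bidiagonal factors $L_1, \ldots, L_d$ before permuting.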


Theorem 5 Let $\{S_n\}$ be a $d$-symmetric sequence of type II multiple orthogonal polynomials satisfying the corresponding high-order recurrence relation (13), and let $\{A_n^j\}$, $j = 1, \ldots, d+1$, be the sequences of polynomials given by (20). Then, each sequence $\{A_n^j\}$, $j = 1, \ldots, d+1$, satisfies the $(d+2)$-term recurrence relation

$$x A_{n+d}^j(x) = A_{n+d+1}^j(x) + b_{n+d}^{[j]} A_{n+d}^j(x) + \sum_{\nu=0}^{d-1} c_{n+d-\nu}^{\,d-1-\nu,[j]}\, A_{n+d-1-\nu}^j(x),$$

with $c_{n+1}^{0,[j]} \neq 0$ for $n \geq 0$, initial conditions $A_0^j(x) = 1$, $A_1^j(x) = x - b_0^{[j]}$,

$$A_n^j(x) = (x - b_{n-1}^{[j]})\, A_{n-1}^j(x) - \sum_{\nu=0}^{n-2} c_{n-1-\nu}^{\,d-1-\nu,[j]}\, A_{n-2-\nu}^j(x), \quad 2 \leq n \leq d,$$

and therefore they are type II multiple orthogonal polynomial sequences.

Proof Since $\{S_n\}$ is a $d$-symmetric multiple orthogonal sequence, it satisfies (13) with $S_n(x) = x^n$ for $0 \leq n \leq d$. Shifting, for convenience, the multi-index in (13) as $n \to (d+1)n - d + j$, $j = 0, 1, 2, \ldots, d$, we obtain the equivalent system of $d+1$ equations

$$\begin{cases} x S_{(d+1)n}(x) = S_{(d+1)n+1}(x) + \gamma_{(d+1)n-d+1}\, S_{(d+1)n-d}(x), & j = 0, \\ x S_{(d+1)n+1}(x) = S_{(d+1)n+2}(x) + \gamma_{(d+1)n-d+2}\, S_{(d+1)n-d+1}(x), & j = 1, \\ \qquad \vdots & \\ x S_{(d+1)n+d-1}(x) = S_{(d+1)n+d}(x) + \gamma_{(d+1)n}\, S_{(d+1)n-1}(x), & j = d-1, \\ x S_{(d+1)n+d}(x) = S_{(d+1)n+(d+1)}(x) + \gamma_{(d+1)n+1}\, S_{(d+1)n}(x), & j = d. \end{cases}$$

Substituting (20), having $C = 0$, into the above expressions yields

$$\begin{cases} 1)\;\; A_n^1(x^{d+1}) = A_n^2(x^{d+1}) + \gamma_{(d+1)n-(d-1)}\, A_{n-1}^2(x^{d+1}), \\ 2)\;\; A_n^2(x^{d+1}) = A_n^3(x^{d+1}) + \gamma_{(d+1)n-(d-2)}\, A_{n-1}^3(x^{d+1}), \\ \qquad \vdots \\ d)\;\; A_n^d(x^{d+1}) = A_n^{d+1}(x^{d+1}) + \gamma_{(d+1)n}\, A_{n-1}^{d+1}(x^{d+1}), \\ d+1)\;\; x^{d+1} A_n^{d+1}(x^{d+1}) = A_{n+1}^1(x^{d+1}) + \gamma_{(d+1)n+1}\, A_n^1(x^{d+1}). \end{cases}$$

Next, replacing $x^{d+1} \to x$, we get the following system of $d+1$ equations:

$$\begin{cases} 1)\;\; A_n^1(x) = A_n^2(x) + \gamma_{(d+1)n-(d-1)}\, A_{n-1}^2(x), \\ 2)\;\; A_n^2(x) = A_n^3(x) + \gamma_{(d+1)n-(d-2)}\, A_{n-1}^3(x), \\ \qquad \vdots \\ d)\;\; A_n^d(x) = A_n^{d+1}(x) + \gamma_{(d+1)n}\, A_{n-1}^{d+1}(x), \\ d+1)\;\; x A_n^{d+1}(x) = A_{n+1}^1(x) + \gamma_{(d+1)n+1}\, A_n^1(x). \end{cases} \qquad (25)$$


Setting $x = 0$ in the last of the above equations, we can define the $\gamma$'s as

$$\gamma_{(d+1)n+1} = \frac{-A_{n+1}^1(0)}{A_n^1(0)}.$$

From now on we deal with the matrix representation of each equation in (25). Notice that the first $d$ equations of (25), namely $A_n^j = A_n^{j+1} + \gamma_{(d+1)n+(j-d)}\, A_{n-1}^{j+1}$, $j = 1, \ldots, d$, can be written in matrix form as

$$\mathcal{A}^j = L_j\,\mathcal{A}^{j+1}, \qquad (26)$$

where $L_j$ is the lower two-banded, semi-infinite, and invertible matrix

$$L_j = \begin{bmatrix}
1 & & & & \\
\gamma_{d-j} & 1 & & & \\
& \gamma_{2d+1-j} & 1 & & \\
& & \gamma_{3d+2-j} & 1 & \\
& & & \ddots & \ddots
\end{bmatrix}.$$

Similarly, the $(d+1)$-th equation in (25) can be expressed as

$$x\mathcal{A}^{d+1} = U\,\mathcal{A}^1, \qquad (27)$$

where $U$ is the upper two-banded, semi-infinite, and invertible matrix

$$U = \begin{bmatrix}
\gamma_1 & 1 & & & \\
& \gamma_{d+2} & 1 & & \\
& & \gamma_{2d+3} & 1 & \\
& & & \gamma_{3d+4} & \ddots \\
& & & & \ddots
\end{bmatrix}.$$

It is clear that the entries in the above matrices $L_j$ and $U$ are given in terms of the recurrence coefficients $\gamma_{n+1}$ of $\{S_n\}$ satisfying (13). From (26) we have $\mathcal{A}^1$ defined in terms of $\mathcal{A}^2$, as $\mathcal{A}^1 = L_1 \mathcal{A}^2$; likewise $\mathcal{A}^2$ in terms of $\mathcal{A}^3$, as $\mathcal{A}^2 = L_2 \mathcal{A}^3$, and so on up to $j = d$. Thus, it is easy to see that $\mathcal{A}^1 = L_1 L_2 \cdots L_d\,\mathcal{A}^{d+1}$. Next, we multiply both sides of the above expression by $x$ and apply (27) to obtain

$$x\mathcal{A}^1 = L_1 L_2 \cdots L_d U\,\mathcal{A}^1.$$


Since each $L_j$ and $U$ are lower and upper two-banded semi-infinite matrices, it follows easily that $L_1 L_2 \cdots L_d$ is a lower triangular $(d+1)$-banded matrix with ones on the main diagonal, so the above decomposition is indeed an LU decomposition of a certain $(d+2)$-banded Hessenberg matrix $J_1 = L_1 L_2 \cdots L_d U$ (see for instance [9] and [23, Sec. 3.2 and 3.3]). The values of the entries in $J_1$ come from the usual definition of matrix multiplication, matching every entry in $J_1 = L_1 L_2 \cdots L_d U$, with $L_j$ and $U$ provided in (26) and (27), respectively. On the other hand, starting with $\mathcal{A}^2 = L_2 \mathcal{A}^3$ instead of $\mathcal{A}^1$, and proceeding in the same fashion as above, we reach

$$x\mathcal{A}^2 = L_2 \cdots L_d U L_1\,\mathcal{A}^2, \quad x\mathcal{A}^3 = L_3 \cdots L_d U L_1 L_2\,\mathcal{A}^3, \quad \ldots, \quad x\mathcal{A}^{d+1} = U L_1 L_2 \cdots L_d\,\mathcal{A}^{d+1}.$$

Observe that $J_j$ denotes a particular circular permutation of the matrix product $L_1 L_2 \cdots L_d U$. Thus, we have $J_1 = L_1 L_2 \cdots L_d U$, $J_2 = L_2 \cdots L_d U L_1$, ..., $J_{d+1} = U L_1 L_2 \cdots L_d$. Using this notation, $J_j$ is the matrix representation of the operator of multiplication by $x$,

$$x\mathcal{A}^j = J_j\,\mathcal{A}^j,$$

which from (9) directly implies that each polynomial sequence $\{A_n^j\}$, $j = 1, \ldots, d+1$, satisfies a $(d+2)$-term recurrence relation as in the statement of the theorem, with coefficients given in terms of the recurrence coefficients $\gamma_{n+1}$ from the high-order recurrence relation (13) satisfied by the symmetric sequence $\{S_n\}$. This completes the proof.
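The relations just derived can be tested computationally. A sketch for $d = 2$ with hypothetical $\gamma_n$ values: build the 2-symmetric $S_n$ from (13), read off $A^1_n$ and $A^2_n$ via (20) with $C = 0$, and check the first equation of system (25):

```python
# Sketch verifying equation 1) of (25) for d = 2: with C = 0, (20) gives
# S_{3n}(x) = A^1_n(x^3) and S_{3n+1}(x) = x * A^2_n(x^3), so the coefficient
# arrays of A^1_n and A^2_n can be sliced out of S.  Hypothetical gamma data.
import numpy as np
from numpy.polynomial import polynomial as P

d, N = 2, 12
gamma = [0.4 + 0.05 * n for n in range(N + 2)]           # gamma_1, gamma_2, ...

S = [np.array([0.0] * n + [1.0]) for n in range(d + 1)]  # S_n = x^n for n <= d
for n in range(N):                                       # x S_{n+d} = S_{n+d+1} + gamma_{n+1} S_n
    S.append(P.polysub(P.polymul([0.0, 1.0], S[n + d]), gamma[n + 1] * S[n]))

def A1(n):          # coefficients of A^1_n: S_{3n} only has powers 3k
    return S[3 * n][::3]

def A2(n):          # coefficients of A^2_n: S_{3n+1} only has powers 3k+1
    return S[3 * n + 1][1::3]

# equation 1) of (25): A^1_n(x) = A^2_n(x) + gamma_{3n-1} * A^2_{n-1}(x)
for n in range(1, 4):
    assert np.allclose(A1(n), P.polyadd(A2(n), gamma[3 * n - 1] * A2(n - 1)))
```

For instance, $S_3 = x^3 - \gamma_1$ and $S_4 = x^4 - (\gamma_1 + \gamma_2)x$ give $A^1_1(y) = y - \gamma_1$ and $A^2_1(y) = y - \gamma_1 - \gamma_2$, and indeed $A^2_1 + \gamma_2 A^2_0 = A^1_1$.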

5 Matrix Multiple Orthogonality

For an arbitrary system of type II vector multiple polynomials $\{\mathcal{P}_n\}$, orthogonal with respect to a certain vector of functionals $\mathcal{U} = (u^1 \; u^2)^T$, with

$$\mathcal{P}_n = (P_{3n}(x) \;\; P_{3n+1}(x) \;\; P_{3n+2}(x))^T, \qquad (28)$$

there exists a matrix decomposition

$$\mathcal{P}_n = W_n(x^3)\,X_0 \;\longrightarrow\; \begin{bmatrix} P_{3n}(x) \\ P_{3n+1}(x) \\ P_{3n+2}(x) \end{bmatrix} = W_n(x^3) \begin{bmatrix} 1 \\ x \\ x^2 \end{bmatrix}, \qquad (29)$$


with $W_n$ being the matrix polynomial (see [27])

$$W_n(x) = \begin{bmatrix} A_n^1(x) & A_{n-1}^2(x) & A_{n-1}^3(x) \\ B_n^1(x) & B_n^2(x) & B_{n-1}^3(x) \\ C_n^1(x) & C_n^2(x) & C_n^3(x) \end{bmatrix}. \qquad (30)$$

Throughout this section, for simplicity of computations, we assume $d = 2$ for the vector of functionals $\mathcal{U}$, but the same results can easily be extended to an arbitrary number of functionals. We first show that, if a sequence of type II multiple 2-orthogonal polynomials $\{P_n\}$ satisfies a recurrence relation like (10), for example

$$x P_n(x) = P_{n+1}(x) + b_n P_n(x) + c_n P_{n-1}(x) + d_n P_{n-2}(x), \qquad (31)$$

then there exists a sequence of matrix polynomials $\{W_n\}$, $W_n(x) \in \mathbb{P}^{3\times 3}$, associated to $\{P_n\}$ by (28) and (29), satisfying a matrix four-term recurrence relation with matrix coefficients.

Theorem 6 Let $\{P_n\}$ be a sequence of type II multiple polynomials, 2-orthogonal with respect to the system of functionals $\{u^1, u^2\}$, i.e., satisfying the four-term recurrence relation (31), and let $\{W_n\}$, $W_n(x) \in \mathbb{P}^{3\times 3}$, be associated to $\{P_n\}$ by (28) and (29). Then, the matrix polynomials $W_n$ satisfy a matrix four-term recurrence relation with matrix coefficients.

Proof We first prove that the sequence of matrix polynomials $\{W_n\}$ satisfies a four-term recurrence relation with matrix coefficients, starting from the fact that $\{P_n\}$ satisfies (31). In order to get this result, we use the matrix interpretation of multiple orthogonality described in Sect. 2. From (28), using the matrix interpretation of multiple orthogonality, (31) can be rewritten as the matrix three-term recurrence relation $x\mathcal{P}_n = A\,\mathcal{P}_{n+1} + B_n\,\mathcal{P}_n + C_n\,\mathcal{P}_{n-1}$ or, equivalently,

$$x\begin{bmatrix} P_{3n} \\ P_{3n+1} \\ P_{3n+2}\end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0\end{bmatrix}\begin{bmatrix} P_{3n+3} \\ P_{3n+4} \\ P_{3n+5}\end{bmatrix} + \begin{bmatrix} b_{3n} & 1 & 0 \\ c_{3n+1} & b_{3n+1} & 1 \\ d_{3n+2} & c_{3n+2} & b_{3n+2}\end{bmatrix}\begin{bmatrix} P_{3n} \\ P_{3n+1} \\ P_{3n+2}\end{bmatrix} + \begin{bmatrix} 0 & d_{3n} & c_{3n} \\ 0 & 0 & d_{3n+1} \\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix} P_{3n-3} \\ P_{3n-2} \\ P_{3n-1}\end{bmatrix}.$$

Multiplying the above expression by $x$, we get

$$x^2 \mathcal{P}_n = A\,x\mathcal{P}_{n+1} + B_n\,x\mathcal{P}_n + C_n\,x\mathcal{P}_{n-1} = A\,[x\mathcal{P}_{n+1}] + B_n\,[x\mathcal{P}_n] + C_n\,[x\mathcal{P}_{n-1}]$$


$$= AA\,\mathcal{P}_{n+2} + [AB_{n+1} + B_n A]\,\mathcal{P}_{n+1} + [AC_{n+1} + B_n B_n + C_n A]\,\mathcal{P}_n + [B_n C_n + C_n B_{n-1}]\,\mathcal{P}_{n-1} + C_n C_{n-1}\,\mathcal{P}_{n-2}. \qquad (32)$$

The matrix $A$ is nilpotent, so $AA$ is the zero matrix of size $3 \times 3$. Setting

$$A_n^1 = AB_{n+1} + B_n A, \quad B_n^1 = AC_{n+1} + B_n B_n + C_n A, \quad C_n^1 = B_n C_n + C_n B_{n-1}, \quad D_n^1 = C_n C_{n-1},$$

where the entries of $A_n^1$, $B_n^1$, and $C_n^1$ can easily be obtained, using computational software such as Mathematica or Maple, from the entries of $A$, $B_n$, and $C_n$, we can rewrite (32) as

$$x^2 \mathcal{P}_n = A_n^1\,\mathcal{P}_{n+1} + B_n^1\,\mathcal{P}_n + C_n^1\,\mathcal{P}_{n-1} + D_n^1\,\mathcal{P}_{n-2}.$$

We now continue in this fashion, multiplying again by $x$:

$$x^3 \mathcal{P}_n = A_n^1\,x\mathcal{P}_{n+1} + B_n^1\,x\mathcal{P}_n + C_n^1\,x\mathcal{P}_{n-1} + D_n^1\,x\mathcal{P}_{n-2} = A_n^1 A\,\mathcal{P}_{n+2} + \big[A_n^1 B_{n+1} + B_n^1 A\big]\,\mathcal{P}_{n+1} + \big[A_n^1 C_{n+1} + B_n^1 B_n + C_n^1 A\big]\,\mathcal{P}_n + \big[B_n^1 C_n + C_n^1 B_{n-1} + D_n^1 A\big]\,\mathcal{P}_{n-1} + \big[C_n^1 C_{n-1} + D_n^1 B_{n-2}\big]\,\mathcal{P}_{n-2} + D_n^1 C_{n-2}\,\mathcal{P}_{n-3}.$$

The matrix products $A_n^1 A$ and $D_n^1 C_{n-2}$ both give the zero matrix of size $3 \times 3$, and the remaining matrix coefficients are

$$A_n^2 = A_n^1 B_{n+1} + B_n^1 A, \quad B_n^2 = A_n^1 C_{n+1} + B_n^1 B_n + C_n^1 A, \quad C_n^2 = B_n^1 C_n + C_n^1 B_{n-1} + D_n^1 A, \quad D_n^2 = C_n^1 C_{n-1} + D_n^1 B_{n-2}. \qquad (33)$$

Using the expressions stated above, the matrix coefficients (33) can easily be obtained as well. Beyond the explicit expression of their respective entries, the key point is that they are structured matrices: namely, $A_n^2$ is lower triangular with ones on the main diagonal, $D_n^2$ is upper triangular, and $B_n^2$, $C_n^2$ are full matrices. Therefore, the sequence of type II vector multiple orthogonal polynomials satisfies the following matrix four-term recurrence relation

$$x^3 \mathcal{P}_n = A_n^2\,\mathcal{P}_{n+1} + B_n^2\,\mathcal{P}_n + C_n^2\,\mathcal{P}_{n-1} + D_n^2\,\mathcal{P}_{n-2}, \quad n = 2, 3, \ldots. \qquad (34)$$


Next, combining (29) with (34), we can assert that

$$x^3\, W_n(x^3)\begin{bmatrix}1\\x\\x^2\end{bmatrix} = A_n^2\, W_{n+1}(x^3)\begin{bmatrix}1\\x\\x^2\end{bmatrix} + B_n^2\, W_n(x^3)\begin{bmatrix}1\\x\\x^2\end{bmatrix} + C_n^2\, W_{n-1}(x^3)\begin{bmatrix}1\\x\\x^2\end{bmatrix} + D_n^2\, W_{n-2}(x^3)\begin{bmatrix}1\\x\\x^2\end{bmatrix}, \quad n = 2, 3, \ldots.$$

Next, we get rid of the vector $(1 \; x \; x^2)^T$ in the above equation. It could be argued that a matrix can have a nontrivial kernel, and therefore this removal cannot be performed; but it is the special way in which the matrix polynomials $W_n$ and the matrix coefficients $A_n^2, \ldots, D_n^2$ are constructed which allows us to do so (see for example [20], where similar techniques are employed). After the shift $x^3 \to x$, the above expression may be simplified to

$$x W_n(x) = A_n^2\, W_{n+1}(x) + B_n^2\, W_n(x) + C_n^2\, W_{n-1}(x) + D_n^2\, W_{n-2}(x), \qquad (35)$$

where $W_{-1} = 0_{3\times 3}$ and $W_0$ is a certain constant matrix, for every $n = 1, 2, \ldots$, which is the desired matrix four-term recurrence relation for $W_n(x)$. This kind of matrix higher-order recurrence relation completely characterizes a certain type of orthogonality. Hence, we are going to prove a Favard-type theorem which states that, under the assumptions of Theorem 6, the matrix polynomials $\{W_n\}$ are type II matrix multiple orthogonal with respect to a system of two matrices of measures $\{dM^1, dM^2\}$. Next, we briefly review some standard facts of the theory of matrix orthogonality, that is, orthogonality with respect to a matrix of measures (see [18, 19] and the references therein). Let $W, V \in \mathbb{P}^{3\times 3}$ be two matrix polynomials, and let

$$M(x) = \begin{bmatrix} \mu_{11}(x) & \mu_{12}(x) & \mu_{13}(x) \\ \mu_{21}(x) & \mu_{22}(x) & \mu_{23}(x) \\ \mu_{31}(x) & \mu_{32}(x) & \mu_{33}(x) \end{bmatrix}$$

be a matrix with positive Borel measures $\mu_{i,j}(x)$ as its entries. One says that $M$ is a positive definite weight matrix supported on the real line if $M(E)$ is positive semidefinite for any Borel set $E \subset \mathbb{R}$, it has finite moments


of every order, and it satisfies that 

V (x)dM(x)V ∗ (x), E

where V ∗ ∈ P3×3 is the adjoint matrix of V ∈ P3×3 , is non-singular if the matrix leading coefficient of the matrix polynomial V is non-singular. Under these conditions, it is possible to associate to a weight matrix M the Hermitian sesquilinear form  W, V  =

W (x)dM(x)V ∗ (x).

E

We then say that a sequence of matrix polynomials {Wn }, Wn ∈ P3×3 with degree n and nonsingular leading coefficient, is orthogonal with respect to M if Wm , Wn  = n δm,n ,

(36)

where n ∈ M3×3 is a positive definite upper triangular matrix, for n ≥ 0. Let m = (m1 , m2 ) ∈ N2 be a multi-index with length |m| := m1 + · · · + m2 and let {M1 , M2 } be a set of matrix of measures as defined above. Let {Wm } be a sequence of matrix polynomials, with deg Wm is at most |m|. {Wm } is said to be a type II multiple orthogonal with respect to the set of matrix measures {M1 , M2 } and multi-index m if it satisfy the following orthogonality conditions  W (x)dMj (x)x k = 03×3 , k = 0, 1, . . . , nj − 1 , j = 1, 2. E

A multi-index m is said to be normal for the set of matrix measures {M1 , M2 }, if the degree of Wm is exactly |m| = m. Thus, in what follows we will write Wm ≡ W|m| = Wm . In this framework, let consider the sequence of vector of matrix polynomials {Bn } where  W2n . Bn = W2n+1 

(37)

We can define the vector of functionals M(Bn ), associated to a set of positive definite matrix measures {M1 , M2 } and with M : P6×3 → M6×6 , by means of the action M on Bn , as follows

T  = M (Bn ) := M·BTn     1 2 E W2n (x)dM (x)   E W2n (x)dM (x) ∈ M6×6 . 1 2 E W2n+1 (x)dM (x) E W2n+1 (x)dM (x)

(38)


A. Branquinho and E. J. Huertas

Notice that

  ∫_E W_j(x) dM^i(x) = Γ_j^i,  i = 1, 2, and j = 0, 1,   (39)

with {M^1, M^2} being a system of two matrices of measures as described above. Thus, we say that a sequence of vectors of matrix polynomials {B_n} is orthogonal with respect to M if

  i)  M(x^k B_n) = 0_{6×6},  k = 0, 1, …, n − 1,
  ii) M(x^n B_n) = Δ_n,   (40)

where Δ_n is a regular block upper triangular 6 × 6 matrix. Now we are in a position to prove the following

Theorem 7 (Favard Type) Let {B_n} be a sequence of vectors of matrix polynomials of size 6 × 3, defined in (37), with the W_n matrix polynomials satisfying the four term recurrence relation (35). The following statements are equivalent:
(a) The sequence {B_n}_{n≥0} is orthogonal with respect to M, which is associated to a system of two matrix measures {M^1, M^2} as described above.
(b) There are sequences of 6 × 6 block matrices {A_n^3}_{n≥0}, {B_n^3}_{n≥0}, and {C_n^3}_{n≥0}, with C_n^3 a block upper triangular non-singular matrix for n ∈ N, such that the sequence {B_n} satisfies the matrix three term recurrence relation

  x B_n = A_n^3 B_{n+1} + B_n^3 B_n + C_n^3 B_{n−1},   (41)

with B_{−1} = [0_{3×3}  0_{3×3}]^T, B_0 given, and C_n^3 non-singular.

Proof First we prove that (a) implies (b). Since the sequence of vectors of matrix polynomials {B_n} is a basis of the linear space P^{6×3}, we can write

  x B_n = Σ_{k=0}^{n+1} A_{n,k} B_k,  A_{n,k} ∈ M^{6×6}.

Then, from the orthogonality conditions (40), we get

  M(x B_n) = M(A_{n,k} B_k) = 0_{6×6},  k = 0, 1, …, n − 2.

Thus,

  x B_n = A_{n,n+1} B_{n+1} + A_{n,n} B_n + A_{n,n−1} B_{n−1}.

Taking A_{n,n+1} = A_n^3, A_{n,n} = B_n^3, and A_{n,n−1} = C_n^3, the result follows.


To prove that (b) implies (a), we know from Theorem 6 that the sequence of vector polynomials {W_n} satisfies a four term recurrence relation with matrix coefficients. We can associate this matrix four term recurrence relation (35) with the block matrix three term recurrence relation (41). Then, it is sufficient to show that M is uniquely determined by its orthogonality conditions (40), in terms of the sequence {C_n^3}_{n≥0} in (41). Next, from (37) we can rewrite the matrix four term recurrence relation (35) as a matrix three term recurrence relation

  x B_n = ⎡ 0_{3×3}     0_{3×3} ⎤           ⎡ B_{2n}^2     A_{2n}^2   ⎤         ⎡ D_{2n}^2  C_{2n}^2   ⎤
          ⎣ A_{2n+1}^2  0_{3×3} ⎦ B_{n+1} + ⎣ C_{2n+1}^2  B_{2n+1}^2 ⎦ B_n   + ⎣ 0_{3×3}   D_{2n+1}^2 ⎦ B_{n−1},

where the size of B_n is 6 × 3. We give M in terms of its block matrix moments, which in turn are given by the matrix coefficients in (41). There is a unique vector moment functional M, and hence two matrix measures dM^1 and dM^2, such that

  M(B_0) = ⎡ ∫_E W_0(x) dM^1(x)  ∫_E W_0(x) dM^2(x) ⎤ = C_0^3 ∈ M^{6×6}.
           ⎣ ∫_E W_1(x) dM^1(x)  ∫_E W_1(x) dM^2(x) ⎦

For the first moment M_0 of M we thus get

  M_0 = M(B_0) = ⎡ Γ_0^1  Γ_0^2 ⎤ = C_0^3.
                 ⎣ Γ_1^1  Γ_1^2 ⎦

Hence, we have M(B_1) = 0_{6×6}, which from (41) also implies 0_{6×6} = x M(B_0) − B_0^3 M(B_0) = M_1 − B_0^3 M_0. Therefore

  M_1 = B_0^3 C_0^3.

By a similar argument, we have

  0_{6×6} = M( (A_0^3)^{−1} x^2 B_0 − (A_0^3)^{−1} B_0^3 x B_0 − B_1^3 (A_0^3)^{−1} x B_0 + ( B_1^3 (A_0^3)^{−1} B_0^3 − C_1^3 ) B_0 )
          = (A_0^3)^{−1} M_2 − ( (A_0^3)^{−1} B_0^3 + B_1^3 (A_0^3)^{−1} ) M_1 + ( B_1^3 (A_0^3)^{−1} B_0^3 − C_1^3 ) M_0 ∈ M^{6×6},

which in turn yields the second moment of M

  M_2 = ( B_0^3 + A_0^3 B_1^3 (A_0^3)^{−1} ) B_0^3 C_0^3 − A_0^3 ( B_1^3 (A_0^3)^{−1} B_0^3 − C_1^3 ) C_0^3.


Repeated application of this inductive process enables us to determine M in a unique way through its moments, only in terms of the sequences of matrix coefficients {A_n^3}_{n≥0}, {B_n^3}_{n≥0} and {C_n^3}_{n≥0}. On the other hand, because of (38) and (41) we have

  M(x B_n) = 0_{6×6},  n ≥ 2.

Multiplying both sides of (41) by x, from the above result we get

  M(x^2 B_n) = 0_{6×6},  n ≥ 3.

The same conclusion can be drawn for 0 < k < n:

  M(x^k B_n) = 0_{6×6},  0 < k < n,   (42)

and finally

  M(x^n B_n) = C_n^3 M(x^{n−1} B_{n−1}).

Notice that the repeated application of the above argument leads to

  M(x^n B_n) = C_n^3 C_{n−1}^3 C_{n−2}^3 ··· C_1^3 C_0^3.   (43)

From (42), (43) we conclude

  M(x^k B_n) = C_n^3 C_{n−1}^3 C_{n−2}^3 ··· C_1^3 C_0^3 δ_{n,k} = Δ_n δ_{n,k},  n, k = 0, 1, …, k ≤ n,

which are exactly the desired orthogonality conditions (40) for M stated in the theorem.

6 Example with Multiple Hermite Polynomials

The monic type II multiple orthogonal polynomials H_{n1,n2}^{ς1,ς2}(x) are orthogonal with respect to the pair of Hermite-like weights w_j(x) = e^{−x² + ς_j x}, j = 1, 2, so the following orthogonality conditions hold (see [4], [24, §23.5] or [31])

  ∫_R x^k H_{n1,n2}^{ς1,ς2}(x) e^{−x² + ς_j x} dx = 0,  k = 0, 1, …, n_j − 1,  j = 1, 2,

and

  ∫_R x^{n_j} H_{n1,n2}^{ς1,ς2}(x) e^{−x² + ς_j x} dx = 2^{−|n|} √π  n_j!  ∏_{i≠j} (ς_j − ς_i)^{n_i}  e^{ς_j²/4},  j = 1, 2.

Throughout this example we will use the recurrence coefficients given in [31]. The monic polynomials H_n^ς(x) ≡ H_{n1,n2}^{ς1,ς2}, where ς = (ς1, ς2) and n = n1 + n2, satisfy the four term recurrence relation (31)

  x H_n^ς(x) = H_{n+1}^ς(x) + b_n H_n^ς(x) + c_n H_{n−1}^ς(x) + d_n H_{n−2}^ς(x),   (44)

  H_{−2}^ς(x) = H_{−1}^ς(x) = 0,  H_0^ς(x) = 1,  H_1^ς(x) = x − b_0,

where, following [31], the recurrence coefficients are given by

  b_n:  b_{2n} = ς1/2,            if n1 = n2 = n,
        b_{2n+1} = ς2/2,          if n1 = n + 1, n2 = n,

  c_n:  c_{2n} = n,               if n1 = n2 = n,
        c_{2n+1} = (2n + 1)/2,    if n1 = n + 1, n2 = n,   (45)

  d_n:  d_{2n} = n(ς1 − ς2)/4,    if n1 = n2 = n,
        d_{2n+1} = n(ς2 − ς1)/4,  if n1 = n + 1, n2 = n.

Observe that this notation means that (44) can be read for the polynomials H_{2n}^ς(x) = H_{n,n}^ς(x) (when n1 = n2 = n) and H_{2n+1}^ς(x) = H_{n+1,n}^ς(x) (when n1 = n + 1, n2 = n). It also implies that we take the diagonal multi-indices n = (n1, n2) ∈ I, where I = {(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3), …}. The associated four banded matrix J1 is

  J1 = ⎡ ς1/2        1
       ⎢ 1/2         ς2/2         1
       ⎢ (ς1−ς2)/4   1            ς1/2         1
       ⎢             −(ς1−ς2)/4   3/2          ς2/2         1
       ⎢                          (ς1−ς2)/2    2            ς1/2   1
       ⎣                                       −(ς1−ς2)/2   5/2    ς2/2   ⋱ ⎦   (46)


For the sake of simplicity, we henceforth assume a particular example in the sequel, taking ς1 = 1 and ς2 = 2. Thus

  J1 = ⎡ 1/2    1
       ⎢ 1/2    1      1
       ⎢ −1/4   1      1/2    1
       ⎢        1/4    3/2    1      1
       ⎢               −1/2   2      1/2    1
       ⎣                      1/2    5/2    1     ⋱ ⎦   (47)

and using Mathematica we can obtain the explicit expression for the first polynomials of the sequence

  H_0^{1,2}(x) = 1,
  H_1^{1,2}(x) = x − 1/2,
  H_2^{1,2}(x) = x^2 − (3/2)x,
  H_3^{1,2}(x) = x^3 − 2x^2 − (1/4)x + 3/4,
  H_4^{1,2}(x) = x^4 − 3x^3 + (1/4)x^2 + 3x − 5/8,
  H_5^{1,2}(x) = x^5 − (7/2)x^4 − (1/4)x^3 + (59/8)x^2 − (19/8)x − 19/16,
  H_6^{1,2}(x) = x^6 − (9/2)x^5 + (3/4)x^4 + (117/8)x^3 − (75/8)x^2 − (99/16)x + 19/8,
  H_7^{1,2}(x) = x^7 − 5x^6 + (51/2)x^4 − (291/16)x^3 − (375/16)x^2 + (475/32)x + 61/32,
  H_8^{1,2}(x) = x^8 − 6x^7 + (3/2)x^6 + (81/2)x^5 − (699/16)x^4 − (225/4)x^3 + (1049/16)x^2 + (21/2)x − 597/64,
  H_9^{1,2}(x) = x^9 − (13/2)x^8 + (1/2)x^7 + (243/4)x^6 − (1095/16)x^5 − (4341/32)x^4 + (2897/16)x^3 + (1987/32)x^2 − (5129/64)x − 75/128,
  H_{10}^{1,2}(x) = x^{10} − (15/2)x^9 + (5/2)x^8 + (345/4)x^7 − (2095/16)x^6 − (7983/32)x^5 + (7805/16)x^4 + (4875/32)x^3 − (26485/64)x^2 + (2235/128)x + 1301/32,
  H_{11}^{1,2}(x) = x^{11} − 8x^{10} + (5/4)x^9 + (475/4)x^8 − (2945/16)x^7 − (3887/8)x^6 + (64343/64)x^5 + (34055/64)x^4 − (11725/8)x^3 − (265/64)x^2 + (114113/256)x − 7439/256,
  ···
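The four term recurrence (44) with the coefficients (45) makes these polynomials easy to regenerate by computer algebra. The following sympy sketch is ours (not part of the paper); the function name `hermite_multiple` is our own choice. It rebuilds the sequence for ς1 = 1, ς2 = 2 and checks two of the polynomials listed above:

```python
from sympy import symbols, expand, Rational

x = symbols('x')

def hermite_multiple(N, s1=1, s2=2):
    # monic type II multiple Hermite polynomials along the diagonal multi-index
    # path, generated from the four term recurrence (44)-(45)
    def b(n): return Rational(s1, 2) if n % 2 == 0 else Rational(s2, 2)
    def c(n): return Rational(n, 2)          # c_{2m} = m, c_{2m+1} = (2m+1)/2
    def d(n):
        m = n // 2
        return Rational(m*(s1 - s2), 4) if n % 2 == 0 else Rational(m*(s2 - s1), 4)
    H = [1, x - b(0)]                        # H_0 and H_1 = x - b_0
    for n in range(1, N):
        # H_{n+1} = (x - b_n) H_n - c_n H_{n-1} - d_n H_{n-2}
        H.append(expand((x - b(n))*H[n] - c(n)*H[n-1]
                        - d(n)*(H[n-2] if n >= 2 else 0)))
    return H

H = hermite_multiple(8)
assert expand(H[3] - (x**3 - 2*x**2 - Rational(1,4)*x + Rational(3,4))) == 0
assert expand(H[6] - (x**6 - Rational(9,2)*x**5 + Rational(3,4)*x**4
                      + Rational(117,8)*x**3 - Rational(75,8)*x**2
                      - Rational(99,16)*x + Rational(19,8))) == 0
```

The same routine with symbolic ς1, ς2 reproduces (46) for generic parameters.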

6.1 The Matrix Type II Multiple Orthogonal Polynomials

From the above explicit polynomial expressions, and using the method detailed at the beginning of Sect. 5, it is not difficult to obtain the matrix cubic decomposition (30) of the vector polynomials

  H_n^{1,2} = ( H_{3n}^{1,2}(x), H_{3n+1}^{1,2}(x), H_{3n+2}^{1,2}(x) )^T = W_n(x^3) X_n,  X_n = (1, x, x^2)^T.

The first cubic decompositions, which are in turn matrix polynomials, are

  W_0(x) = ⎡ 1     0     0 ⎤
           ⎢ −1/2  1     0 ⎥ ,
           ⎣ 0     −3/2  1 ⎦

  W_1(x) = ⎡ 1     0     0 ⎤     ⎡ 3/4     −1/4   −2   ⎤
           ⎢ −3    1     0 ⎥ x + ⎢ −5/8    3      1/4  ⎥ ,
           ⎣ −1/4  −7/2  1 ⎦     ⎣ −19/16  −19/8  59/8 ⎦

  W_2(x) = ⎡ 1    0   0 ⎤       ⎡ 117/8    3/4      −9/2 ⎤     ⎡ 19/8     −99/16  −75/8   ⎤
           ⎢ −5   1   0 ⎥ x^2 + ⎢ −291/16  51/2     0    ⎥ x + ⎢ 61/32    475/32  −375/16 ⎥ ,   (48)
           ⎣ 3/2  −6  1 ⎦       ⎣ −225/4   −699/16  81/2 ⎦     ⎣ −597/64  21/2    1049/16 ⎦

  W_3(x) = ⎡ 1      0   0 ⎤       ⎡ 243/4     1/2       −13/2 ⎤
           ⎢ −15/2  1   0 ⎥ x^3 + ⎢ −2095/16  345/4     5/2   ⎥ x^2
           ⎣ 5/4    −8  1 ⎦       ⎣ −3887/8   −2945/16  475/4 ⎦

             ⎡ 2897/16   −4341/32  −1095/16 ⎤     ⎡ −75/128    −5129/64    1987/32   ⎤
           + ⎢ 4875/32   7805/16   −7983/32 ⎥ x + ⎢ 1301/32    2235/128    −26485/64 ⎥ ,
             ⎣ −11725/8  34055/64  64343/64 ⎦     ⎣ −7439/256  114113/256  −265/64   ⎦
  ···
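As an illustration (our check, not part of the paper), the cubic decomposition can be verified symbolically: multiplying W_1 evaluated at x^3 by the vector (1, x, x^2)^T must return (H_3^{1,2}, H_4^{1,2}, H_5^{1,2})^T:

```python
from sympy import symbols, Matrix, Rational as R, expand

x = symbols('x')
t = x**3

# W1 from (48), written as A1*t + B1 with t = x^3
A1 = Matrix([[1, 0, 0], [-3, 1, 0], [R(-1,4), R(-7,2), 1]])
B1 = Matrix([[R(3,4),   R(-1,4),  -2],
             [R(-5,8),  3,        R(1,4)],
             [R(-19,16), R(-19,8), R(59,8)]])
W1 = A1*t + B1
X = Matrix([1, x, x**2])
H345 = W1 * X                    # should reproduce (H3, H4, H5)^T

H3 = x**3 - 2*x**2 - R(1,4)*x + R(3,4)
H4 = x**4 - 3*x**3 + R(1,4)*x**2 + 3*x - R(5,8)
H5 = x**5 - R(7,2)*x**4 - R(1,4)*x**3 + R(59,8)*x**2 - R(19,8)*x - R(19,16)
assert all(expand(H345[i] - h) == 0 for i, h in enumerate((H3, H4, H5)))
```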

Next, we show that Theorem 6, and more specifically expression (35), holds in our numeric example for the above set of matrix polynomials. Combining (45) with (33), the first matrix coefficients in (35) are given by

  n = 0:  A_0^2 = ⎡ 1     0  0 ⎤     B_0^2 = ⎡ 7/8   13/4  2    ⎤     C_0^2 = 0_{3×3},  D_0^2 = 0_{3×3},
                  ⎢ 5/2   1  0 ⎥ ,           ⎢ 9/8   19/4  19/4 ⎥ ,
                  ⎣ 25/4  2  1 ⎦             ⎣ 3/16  39/8  37/8 ⎦


  n = 1:  A_1^2 = ⎡ 1     0    0 ⎤     B_1^2 = ⎡ 10    31/4   5/2  ⎤
                  ⎢ 2     1    0 ⎥ ,           ⎢ 63/4  67/8   37/4 ⎥ ,
                  ⎣ 43/4  5/2  1 ⎦             ⎣ 63/4  183/8  61/4 ⎦

          C_1^2 = ⎡ 3/8    39/8  9    ⎤     D_1^2 = 0_{3×3},
                  ⎢ −9/16  27/8  27/8 ⎥ ,
                  ⎣ 3/16   9/8   57/8 ⎦

  n = 2:  A_2^2 = ⎡ 1     0  0 ⎤     B_2^2 = ⎡ 97/8   49/4   2     ⎤
                  ⎢ 5/2   1  0 ⎥ ,           ⎢ 171/4  41/2   55/4  ⎥ ,
                  ⎣ 61/4  2  1 ⎦             ⎣ 75/4   111/2  127/8 ⎦

          C_2^2 = ⎡ 129/8  153/16  261/8 ⎤     D_2^2 = ⎡ −3/32  3/8   −51/16 ⎤
                  ⎢ 39/8   105/4   261/8 ⎥ ,           ⎢ 0      3/32  15/16  ⎥ ,
                  ⎣ 3/2    −75/8   177/4 ⎦             ⎣ 0      0     −3/8   ⎦
  ···

Using the above matrices, and those for the matrix polynomials W_n(x) in (48), it is straightforward to check that

  for n = 0:  x W_0(x) = A_0^2 W_1(x) + B_0^2 W_0(x) + C_0^2 W_{−1}(x) + D_0^2 W_{−2}(x),
  for n = 1:  x W_1(x) = A_1^2 W_2(x) + B_1^2 W_1(x) + C_1^2 W_0(x) + D_1^2 W_{−1}(x),
  for n = 2:  x W_2(x) = A_2^2 W_3(x) + B_2^2 W_2(x) + C_2^2 W_1(x) + D_2^2 W_0(x),
  ···
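A hedged sympy check of the n = 0 identity above, using W_0, W_1 from (48) and the listed A_0^2, B_0^2 (our verification, not part of the paper):

```python
from sympy import symbols, Matrix, Rational as R, zeros

x = symbols('x')
# W0 and W1 from (48)
W0 = Matrix([[1, 0, 0], [R(-1,2), 1, 0], [0, R(-3,2), 1]])
W1 = Matrix([[1, 0, 0], [-3, 1, 0], [R(-1,4), R(-7,2), 1]])*x + Matrix(
     [[R(3,4), R(-1,4), -2], [R(-5,8), 3, R(1,4)], [R(-19,16), R(-19,8), R(59,8)]])
# listed coefficients for n = 0 (C_0^2 = D_0^2 = 0)
A0 = Matrix([[1, 0, 0], [R(5,2), 1, 0], [R(25,4), 2, 1]])
B0 = Matrix([[R(7,8), R(13,4), 2], [R(9,8), R(19,4), R(19,4)],
             [R(3,16), R(39,8), R(37,8)]])
# x W0(x) = A_0^2 W1(x) + B_0^2 W0(x)
assert (x*W0 - (A0*W1 + B0*W0)).expand() == zeros(3, 3)
```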

6.2 The Symmetric Type II Multiple Orthogonal Polynomial Sequence

In what follows, we give an example satisfying the theory developed in Sect. 4. We begin with the multiple orthogonal polynomial sequence {H_n^{1,2}}_{n≥1} and we find the associated symmetric multiple orthogonal polynomial sequence {S_n}_{n≥1}. All the matrices in the sequel are two, three or four banded, so we abbreviate them by giving just their main (0), sub (−1, −2, …) and/or supra (+1, +2, …) diagonals. Thus, taking C = 1, we seek the LU factorization of the semi-infinite four banded matrix

           ⎧ +1 :  1     1    1     1    1     1    1     1   1     1    1     ···
  J1 − I = ⎨  0 :  −1/2  0    −1/2  0    −1/2  0    −1/2  0   −1/2  0    −1/2  ···
           ⎪ −1 :  1/2   1    3/2   2    5/2   3    7/2   4   9/2   5    11/2  ···
           ⎩ −2 :  −1/4  1/4  −1/2  1/2  −3/4  3/4  −1    1   −5/4  5/4  −3/2  ···


Then we have

      ⎧  0 :  1    1    1     1    1       1      1     1       1         1         1           ···
  L = ⎨ −1 :  −1   1/2  −5/4  6/5  −21/17  29/14  −9/8  260/81  −585/601  1121/234  −1973/2476  ···
      ⎩ −2 :  1/2  1/4  1/2   2/5  15/34   17/28  7/18  8/9     405/1202  601/468   351/1238    ···

      ⎧ +1 :  1     1  1   1    1       1      1      1    1         1        1         ···
  U = ⎨  0 :  −1/2  1  −1  5/4  −17/10  21/17  −18/7  9/8  −601/162  585/601  −619/117  ···
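Because J1 − I has bands (+1, 0, −1, −2) with supra-diagonal identically 1, its no-pivot LU factors have an upper bidiagonal U with unit supra-diagonal, so the factorization reduces to a short scalar recursion. The following exact-arithmetic sketch (ours; `band_lu` is our own helper name) reproduces the leading entries of L and U listed above:

```python
from fractions import Fraction as Fr

def band_lu(N):
    """No-pivot LU of the N x N truncation of J1 - I, exact arithmetic.
    Returns (U diagonal, L subdiagonal, L second subdiagonal)."""
    main = [Fr(-1, 2) if n % 2 == 0 else Fr(0) for n in range(N)]       # 0 band
    sub  = [Fr(n, 2) for n in range(1, N)]                              # c_n = n/2
    sub2 = [Fr((-1)**(n - 1)*(n // 2), 4) for n in range(2, N)]         # d_n
    u, l1, l2 = [main[0]], [], []
    for n in range(1, N):
        ln2 = sub2[n - 2]/u[n - 2] if n >= 2 else None   # l_{n,n-2} = d_n/u_{n-2}
        ln1 = (sub[n - 1] - (ln2 if n >= 2 else 0))/u[n - 1]   # u_{k,k+1} = 1
        u.append(main[n] - ln1)                          # u_n = m_n - l_{n,n-1}
        l1.append(ln1)
        l2.append(ln2)
    return u, l1, l2[1:]

u, l1, l2 = band_lu(8)
assert u  == [Fr(-1,2), 1, -1, Fr(5,4), Fr(-17,10), Fr(21,17), Fr(-18,7), Fr(9,8)]
assert l1 == [-1, Fr(1,2), Fr(-5,4), Fr(6,5), Fr(-21,17), Fr(29,14), Fr(-9,8)]
assert l2 == [Fr(1,2), Fr(1,4), Fr(1,2), Fr(2,5), Fr(15,34), Fr(17,28)]
```

Running `band_lu` with larger N reproduces the remaining listed fractions (260/81, −585/601, −601/162, …) as well.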

Next, using the algorithm developed in [9, Th. 3], we obtain the new factorization L = L1 L2, with

       ⎧  0 :  1  1     1    1      1      1         1         1            1            1              1               ···
  L1 = ⎨ −1 :  1  −1/4  1/3  −6/19  19/72  −108/367  367/1430  −2860/10161  10161/39910  −99775/363631  363631/1434202  ···

       ⎧  0 :  1   1    1       1       1           1           1           1             1                   1                  1                      ···
  L2 = ⎨ −1 :  −2  3/4  −19/12  144/95  −1835/1224  12155/5138  −7903/5720  319280/91449  −29454111/23985910  430977701/85089654  −1865015451/1775542076  ···

Finally, Theorem 4 now yields the associated symmetric sequence

  S_0^{1,2}(x) = 1,     S_1^{1,2}(x) = x,     S_2^{1,2}(x) = x^2,
  S_3^{1,2}(x) = x^3 + 1/2,
  S_4^{1,2}(x) = x^4 − (1/2)x,
  S_5^{1,2}(x) = x^5 + (3/2)x^2,
  S_6^{1,2}(x) = x^6 + (1/2)x^3 − 1/2,
  S_7^{1,2}(x) = x^7 + (3/4)x^4 − (5/8)x,
  S_8^{1,2}(x) = x^8 − (7/4)x^2,
  S_9^{1,2}(x) = x^9 + x^6 − (5/4)x^3 − 1/2,
  S_{10}^{1,2}(x) = x^{10} + (2/3)x^7 − (3/2)x^4 − (7/24)x,
  S_{11}^{1,2}(x) = x^{11} + (9/4)x^8 − (3/2)x^5 − (49/16)x^2,
  S_{12}^{1,2}(x) = x^{12} + x^9 − (11/4)x^6 − (3/2)x^3 + 5/8,
  S_{13}^{1,2}(x) = x^{13} + (25/19)x^{10} − (193/76)x^7 − (75/38)x^4 + (81/152)x,
  S_{14}^{1,2}(x) = x^{14} − (1/5)x^{11} − (119/20)x^8 + (3/10)x^5 + (207/40)x^2,
  S_{15}^{1,2}(x) = x^{15} + (3/2)x^{12} − (17/4)x^9 − (35/8)x^6 + (21/8)x^3 + 17/16,
  S_{16}^{1,2}(x) = x^{16} + (89/72)x^{13} − (331/72)x^{10} − (1067/288)x^7 + (151/48)x^4 + (59/64)x,
  S_{17}^{1,2}(x) = x^{17} + (93/34)x^{14} − (333/68)x^{11} − (101/8)x^8 + (489/136)x^5 + (2361/272)x^2,
  ···


which satisfy the four term recurrence relation x S_{n+2}^{1,2}(x) = S_{n+3}^{1,2}(x) + γ_{n+1} S_n^{1,2}(x), with recurrence coefficients

  γ_1 = −1/2,  γ_2 = 1,  γ_3 = −2,  γ_4 = 1,  γ_5 = −1/4,  γ_6 = 3/4,
  γ_7 = −1,  γ_8 = 1/3,  γ_9 = −19/12,  γ_{10} = 5/4,  γ_{11} = −6/19,  γ_{12} = 144/95,
  γ_{13} = −17/10,  γ_{14} = 19/72,  γ_{15} = −1835/1224,  γ_{16} = 21/17,  γ_{17} = −108/367,  γ_{18} = 12155/5138, …
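The symmetric sequence and the coefficients γ_n can be cross-checked against the relation x S_{n+2} = S_{n+3} + γ_{n+1} S_n; a small sympy verification of the first cases (ours, not from the paper):

```python
from sympy import symbols, Rational as R, expand

x = symbols('x')
S = [1, x, x**2, x**3 + R(1,2), x**4 - R(1,2)*x, x**5 + R(3,2)*x**2,
     x**6 + R(1,2)*x**3 - R(1,2), x**7 + R(3,4)*x**4 - R(5,8)*x,
     x**8 - R(7,4)*x**2, x**9 + x**6 - R(5,4)*x**3 - R(1,2)]
gamma = [R(-1,2), 1, -2, 1, R(-1,4), R(3,4), -1]      # gamma_1 .. gamma_7
# relation: x S_{n+2} = S_{n+3} + gamma_{n+1} S_n
for n in range(len(S) - 3):
    assert expand(x*S[n+2] - S[n+3] - gamma[n]*S[n]) == 0
```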

Acknowledgments We would like to thank the referee for carefully reading the manuscript, and for giving constructive comments, which helped us to improve the quality of the paper. The work of the first author (AB) was supported by the Centre for Mathematics of the University of Coimbra - UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES. The work of the second author (EJH) was supported by Fundação para a Ciência e a Tecnologia (FCT) of Portugal, ref. SFRH/BPD/91841/2012, and partially supported by Dirección General de Investigación e Innovación, Consejería de Educación e Investigación of the Comunidad de Madrid (Spain), and Universidad de Alcalá under grant CM/JIN/2019-010, Proyectos de I+D para Jóvenes Investigadores de la Universidad de Alcalá 2019.

References

1. Álvarez-Fernández, C., Fidalgo, U., Mañas, M.: Multiple orthogonal polynomials of mixed type: Gauss–Borel factorization and the multi-component 2D Toda hierarchy. Adv. Math. 227, 1451–1525 (2011)
2. Aptekarev, A.I.: Multiple orthogonal polynomials. J. Comput. Appl. Math. 99, 423–447 (1998)
3. Aptekarev, A.I., Kaliaguine, V., Iseghem, J.V.: Genetic sum's representation for the moments of a system of Stieltjes functions and its application. Constr. Approx. 16, 487–524 (2000)
4. Aptekarev, A.I., Branquinho, A., Van Assche, W.: Multiple orthogonal polynomials for classical weights. Trans. Amer. Math. Soc. 335, 3887–3914 (2003)
5. Aptekarev, A.I., Kalyagin, V.A., Saff, E.B.: Higher-order three-term recurrences and asymptotics of multiple orthogonal polynomials. Constr. Approx. 30(2), 175–223 (2009)
6. Barrios, D., Branquinho, A.: Complex high order Toda and Volterra lattices. J. Diff. Eq. Appl. 15(2), 197–213 (2009)
7. Barrios, D., Branquinho, A., Foulquié-Moreno, A.: Dynamics and interpretation of some integrable systems via multiple orthogonal polynomials. J. Math. Anal. Appl. 361(2), 358–370 (2010)
8. Barrios, D., Branquinho, A., Foulquié-Moreno, A.: On the relation between the full Kostant-Toda lattice and multiple orthogonal polynomials. J. Math. Anal. Appl. 377(1), 228–238 (2011)
9. Barrios, D., Branquinho, A., Foulquié-Moreno, A.: On the full Kostant-Toda system and the discrete Korteweg-de Vries equations. J. Math. Anal. Appl. 401(2), 811–820 (2013)
10. Beckermann, B.: On the convergence of bounded J-fractions on the resolvent set of the corresponding second order difference operator. J. Approx. Theory 99(2), 369–408 (1999)
11. Branquinho, A., Cotrim, L., Foulquié-Moreno, A.: Matrix interpretation of multiple orthogonality. Numer. Algor. 55, 19–37 (2010)
12. Bueno, M.I., Marcellán, F.: Darboux transformation and perturbation of linear functionals. Linear Algebra Appl. 384, 215–242 (2004)
13. Chihara, T.S.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York (1978)
14. Delvaux, S., López-García, A.: High order three-term recursions, Riemann-Hilbert minors and Nikishin systems on star-like sets. Constr. Approx. 37(3), 383–453 (2013)
15. Douak, K.: The relation of the d-orthogonal polynomials to the Appell polynomials. J. Comput. Appl. Math. 70(2), 279–295 (1996)
16. Douak, K., Maroni, P.: Les polynômes orthogonaux "classiques" de dimension deux. Analysis 12, 71–107 (1992)
17. Douak, K., Maroni, P.: Une caractérisation des polynômes d-orthogonaux classiques. J. Approx. Theory 82, 177–204 (1995)
18. Durán, A.J.: On orthogonal polynomials with respect to a positive definite matrix of measures. Canad. J. Math. 47, 88–112 (1995)
19. Durán, A.J., Grünbaum, A.: A survey on orthogonal matrix polynomials satisfying second order differential equations. J. Comput. Appl. Math. 178, 169–190 (2005)
20. Durán, A.J., Van Assche, W.: Orthogonal matrix polynomials and higher-order recurrence relations. Linear Algebra Appl. 219, 261–280 (1995)
21. Gardner, C.S., Greene, J.M., Kruskal, M.D., Miura, R.M.: A method for solving the Korteweg–de Vries equation. Phys. Rev. Lett. 19, 1095–1097 (1967)
22. Haneczok, M., Van Assche, W.: Interlacing properties of zeros of multiple orthogonal polynomials. J. Math. Anal. Appl. 389(1), 429–438 (2012)
23. Isaacson, E., Keller, H.B.: Analysis of Numerical Methods. Wiley, New York (1994)
24. Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable. Encyclopedia of Mathematics and its Applications, vol. 98. Cambridge University Press, Cambridge (2005)
25. Kaliaguine, V.: The operator moment problem, vector continued fractions and an explicit form of the Favard theorem for vector orthogonal polynomials. J. Comput. Appl. Math. 65(1–3), 181–193 (1995)
26. Marcellán, F., Sansigre, G.: Quadratic decomposition of orthogonal polynomials: a matrix approach. Numer. Algor. 3, 285–298 (1992)
27. Maroni, P., Mesquita, T.A.: Cubic decomposition of 2-orthogonal polynomial sequences. Mediterr. J. Math. 10, 843–863 (2013)
28. Maroni, P., Mesquita, T.A., da Rocha, Z.: On the general cubic decomposition of polynomial sequences. J. Differ. Equ. Appl. 17(9), 1303–1332 (2011)
29. Nikishin, E.M., Sorokin, V.N.: Rational Approximants and Orthogonality. Translations of Mathematical Monographs, vol. 92. American Mathematical Society, Providence (1991)
30. Peherstorfer, F.: On Toda lattices and orthogonal polynomials. Proceedings of the fifth international symposium on orthogonal polynomials, special functions and their applications (Patras, 1999). J. Comput. Appl. Math. 133(1–2), 519–534 (2001)
31. Van Assche, W., Coussement, E.: Some classical multiple orthogonal polynomials. J. Comput. Appl. Math. 127(1–2), 317–347 (2001)

Darboux Transformations for Orthogonal Polynomials on the Real Line and on the Unit Circle María José Cantero, Francisco Marcellán, Leandro Moral, and Luis Velázquez

Abstract Jacobi matrices constitute a communicating vessel between orthogonal polynomials on the real line (OPRL) and the theory of self-adjoint operators. In this context, commutation methods in operator theory lead to Darboux transformations of Jacobi matrices, which are equivalent to Christoffel transformations of OPRL. Jacobi matrices have unitary analogues known as CMV matrices, which provide a similar link between orthogonal polynomials on the unit circle (OPUC) and the theory of unitary operators. It has been recently shown that Darboux transformations also make sense in this setting, although some differences arise with respect to the Jacobi case. This paper is a survey on Darboux transformations for Jacobi and CMV matrices which presents them in a unified way, highlighting their similarities and differences and connecting them to OPRL and OPUC.

Keywords Darboux transformations · Jacobi and CMV matrices · Orthogonal polynomials

1 Introduction

Darboux transformations have remarkable applications in various areas of mathematics and physics such as differential equations, spectral theory, integrable systems, signal theory or quantum physics. In a simple way, Darboux transformations consist in the factorization of a non-negative self-adjoint operator, the commutation of whose factors gives a new non-negative self-adjoint operator. The spectrum of the new operator is closely related to that of the initial one. This may be used for the spectral analysis of non-trivial operators starting from simple ones, but also to generate operators with prescribed spectral properties.

M. J. Cantero (✉) · L. Moral · L. Velázquez
Department of Applied Mathematics & IUMA, Universidad de Zaragoza, Zaragoza, Spain
e-mail: [email protected]; [email protected]; [email protected]

F. Marcellán
Departamento de Matemáticas, Universidad Carlos III de Madrid, Leganés, Spain
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_3

OPRL are connected to self-adjoint operators via the Jacobi matrix encoding the corresponding three term recurrence relation [4, 7]. When applied to Jacobi matrices, Darboux transformations turn out to be equivalent to degree one polynomial modifications of the underlying orthogonality measure on the real line, which define the Christoffel transformations of OPRL. The inverse process corresponds to the so-called Geronimus transformations, which not only divide the measure by the polynomial, but may add an arbitrary mass point at its zero. This shows that inverse Darboux transformations of Jacobi matrices have a degree of freedom given by a real parameter, a fact that may be exploited for different purposes such as generating flows of integrable systems, searching for bispectral situations or developing tools for signal processing.

The relevance of OPUC for the theory of unitary operators relies on their close connection to the orthogonal Laurent polynomials on the unit circle (OLPUC), together with the simplicity of the OLPUC recurrence relation, summarized by a five-diagonal unitary matrix known as a CMV matrix [1, 8, 10]. Darboux transformations of CMV matrices are built out of Darboux transformations of a Hermitian matrix obtained from the CMV matrix and its adjoint, i.e. its inverse. As a consequence, Darboux for CMV is also related to polynomial modifications of the orthogonality measure, but given by a Laurent polynomial instead of just a polynomial. This causes strong similarities as well as important differences between Darboux for Jacobi and CMV.
This work gives a unified approach to the theory of Darboux transformations for Jacobi and CMV matrices, providing an ideal setting for the comparison between both of them. Darboux for Jacobi matrices has quite a long history, and has been used for the study of Toda lattices [9], generalizations of Bochner's theorem on the real line (see [5] and references therein) and related time-and-band limiting problems (see [3] and references therein). However, the theory of Darboux transformations for CMV matrices is of much more recent vintage [2].

The paper is organized as follows: Sect. 2 presents the different families of orthogonal polynomials involved in our discussion—OPRL, OPUC and OLPUC—their recurrence relations and related matrix representations. Darboux transformations for self-adjoint and unitary operators are introduced in Sect. 3. Section 4 applies the Darboux machinery to Jacobi matrices, while Darboux transformations of CMV matrices are developed in Sect. 5, emphasizing the differences with respect to the Jacobi case.


2 Orthogonal Polynomials and Matrix Representations

Any positive Borel measure μ on the complex plane C defines the Hilbert space L^2(μ) of μ-square integrable functions with inner product

  ⟨f, g⟩ = ∫_C f̄(z) g(z) dμ(z).

In what follows we will always suppose that μ is non-trivial, i.e. its support supp(μ) has infinitely many points, and {z^n}_{n≥0} ⊂ L^2(μ). Then {z^n}_{n≥0} is a linearly independent subset of L^2(μ) which may be orthonormalized. This yields a unique sequence of orthonormal polynomials with positive leading coefficients, which is an orthonormal basis of L^2(μ) whenever the polynomials are dense in L^2(μ). The orthonormalization of {z^n}_{n≥0} is equivalent to the Cholesky factorization

  G = L L⁺,  L lower triangular with positive diagonal,  L⁺ := adjoint of L,

of the Gram matrix G = (⟨z^m, z^n⟩)_{m,n≥0}, which has special properties depending on the location of supp(μ). If supp(μ) ⊂ R, then G_{m,n} = ∫_R z^{m+n} dμ(z) and G is a Hankel matrix. On the other hand, when supp(μ) lies on the unit circle T := {z ∈ C : |z| = 1}, then G_{m,n} = ∫_T z^{n−m} dμ(z), so that G is a Toeplitz matrix. These two situations give rise to the two paradigmatic cases studied in the literature, OPRL and OPUC, which we will describe succinctly.
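The Cholesky route to orthonormal polynomials is concrete: if m(z) = (1, z, z^2, …)^T, then p = L^{−1} m is orthonormal, since the Gram matrix of p is L^{−1} G (L^{−1})⁺ = I. A numerical sketch (ours, not from the paper) for the Hankel case dμ = dx on [−1, 1], where the result must be the orthonormal Legendre polynomials:

```python
import numpy as np

# Hankel Gram matrix of monomials for dmu = dx on [-1, 1]:
# G_{mn} = 2/(m+n+1) if m+n is even, 0 otherwise
N = 4
G = np.array([[2.0/(m + n + 1) if (m + n) % 2 == 0 else 0.0
               for n in range(N)] for m in range(N)])
L = np.linalg.cholesky(G)                 # G = L L^T
coeffs = np.linalg.inv(L)                 # row k = monomial coefficients of p_k
# p_2 should be the orthonormal Legendre polynomial sqrt(5/2)*(3x^2 - 1)/2
expected = np.sqrt(5.0/8.0)*np.array([-1.0, 0.0, 3.0, 0.0])
assert np.allclose(coeffs[2], expected)
```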

2.1 OPRL and Jacobi Matrices

If supp(μ) ⊂ R, the corresponding orthonormal polynomials (p_n)_{n≥0}, defined by

  ⟨p_m, p_n⟩ = ∫_R p_m(z) p_n(z) dμ(z) = δ_{m,n},  p_n(z) = κ_n z^n + ···,  κ_n > 0,

are characterized by a three term recurrence relation [4, 8]

  z p_n(z) = a_{n+1} p_{n+1}(z) + b_{n+1} p_n(z) + a_n p_{n−1}(z),  n ≥ 0,  p_{−1} = 0,  p_0 = κ_0,


where b_n ∈ R and a_n = κ_{n−1}/κ_n > 0. This recurrence may be expressed as

  z p(z) = J p(z),   J = ⎡ b_1  a_1               ⎤        ⎡ p_0 ⎤
                         ⎢ a_1  b_2  a_2          ⎥ ,  p = ⎢ p_1 ⎥ ,   (1)
                         ⎢      a_2  b_3  a_3     ⎥        ⎢ p_2 ⎥
                         ⎣           ⋱    ⋱    ⋱  ⎦        ⎣  ⋮  ⎦

in terms of the Jacobi matrix J. When supp(μ) is bounded, the Weierstrass theorem implies that the set P := C[z] of polynomials is dense in L^2(μ), hence the OPRL sequence (p_n)_{n≥0} constitutes an orthonormal basis of L^2(μ). Then, (1) identifies J as a matrix representation of the multiplication operator

  T_μ : L^2(μ) → L^2(μ),  f(z) ↦ z f(z),   (2)

which is bounded because ‖zf(z)‖ ≤ K‖f(z)‖ with K = sup{|z| : z ∈ supp(μ)}, as well as self-adjoint since ⟨zf(z), g(z)⟩ = ⟨f(z), zg(z)⟩. The boundedness of T_μ translates into that of J, which means that the sequences (a_n)_{n≥1} and (b_n)_{n≥1} are bounded. For an unbounded supp(μ) the multiplication operator (2) remains self-adjoint but becomes unbounded. Since the Weierstrass theorem does not work for unbounded intervals, in the unbounded case P may not be dense in L^2(μ), thus J is not necessarily a matrix representation of T_μ. The polynomials are known to be dense in L^2(μ) when the related moment problem is determinate [7], i.e. when μ is the unique measure giving the OPRL (p_n)_{n≥0}, a condition which includes the bounded case and guarantees that J represents the whole self-adjoint operator T_μ. Then, J defines a self-adjoint operator in the Hilbert space of square summable sequences which is unitarily equivalent to T_μ. Its spectrum is given by spec J = spec T_μ = supp(μ), the eigenvalues being the mass points of μ.
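A concrete instance of this spectral correspondence: truncating the Jacobi matrix of the Legendre measure (b_n = 0, a_n = n/√(4n² − 1)) gives a matrix whose eigenvalues are the Gauss–Legendre quadrature nodes, the classical Golub–Welsch observation. A numpy sketch of this standard fact (ours, not from the paper):

```python
import numpy as np

# Jacobi matrix for the Legendre measure on [-1, 1]: b_n = 0, a_n = n/sqrt(4n^2 - 1)
N = 8
a = np.array([n/np.sqrt(4.0*n*n - 1.0) for n in range(1, N)])
J = np.diag(a, 1) + np.diag(a, -1)
eigs = np.sort(np.linalg.eigvalsh(J))
nodes, _ = np.polynomial.legendre.leggauss(N)   # zeros of the degree-N Legendre polynomial
assert np.allclose(eigs, np.sort(nodes))
```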

2.2 OPUC and Hessenberg Matrices

When supp(μ) ⊂ T, the corresponding orthonormal polynomials (ϕ_n)_{n≥0}, given by

  ⟨ϕ_n, ϕ_m⟩ = ∫_T ϕ̄_n(z) ϕ_m(z) dμ(z) = δ_{n,m},  ϕ_n(z) = κ_n z^n + ···,  κ_n > 0,

satisfy the recurrence relation [8]

  z ϕ_n(z) = ρ_{n+1} ϕ_{n+1}(z) − a_{n+1} ϕ*_n(z),  ϕ*_n(z) := z^n \overline{ϕ_n(1/z̄)},  n ≥ 0,   (3)


where a_n = ϕ_n(0)/κ_n —known as Schur parameters, Geronimus parameters or Verblunsky coefficients of μ— are complex parameters such that |a_n| < 1, while ρ_n = √(1 − |a_n|²) = κ_{n−1}/κ_n > 0.
The multiplication operator (2) makes sense also for supp(μ) ⊂ T, but in this case T_μ is unitary since it is onto and ⟨f(z), zg(z)⟩ = ⟨z^{−1}f(z), g(z)⟩. If the OPUC sequence (ϕ_n)_{n≥0} is an orthonormal basis of L^2(μ), the recurrence (3) yields a matrix representation of T_μ by expanding ϕ*_n as a linear combination of (ϕ_n)_{n≥0}. However, ϕ*_n is in general a linear combination of all the polynomials ϕ_0, ϕ_1, …, ϕ_n, hence

  z ϕ(z) = H ϕ(z),   H = ⎡ h_{00}  h_{01}                 ⎤        ⎡ ϕ_0 ⎤
                         ⎢ h_{10}  h_{11}  h_{12}         ⎥ ,  ϕ = ⎢ ϕ_1 ⎥ ,
                         ⎢ h_{20}  h_{21}  h_{22}  h_{23} ⎥        ⎢ ϕ_2 ⎥
                         ⎣   ⋮       ⋮       ⋮      ⋮   ⋱ ⎦        ⎣  ⋮  ⎦

thus the corresponding matrix representation of T_μ is given by a Hessenberg matrix H instead of a banded one. Besides, H only represents the whole unitary operator T_μ when P is dense in L^2(μ), which is not always guaranteed because the Weierstrass theorem does not hold for polynomials on T. In other words, H represents the multiplication operator f(z) ↦ zf(z) on the closure P̄ of P, which is an isometric operator but, in general, is not onto, since 1 may not lie in its range because z^{−1} is not necessarily in P̄. Hence, the Hessenberg matrix H is not always unitary.
The fact that the Weierstrass theorem holds for Laurent polynomials on T suggests that the drawback regarding the unitarity of the matrix representation may be overcome if we substitute the OPUC by an orthonormal basis constituted by Laurent polynomials. Actually, this will also replace the Hessenberg representation by a banded one.

2.3 OLPUC and CMV Matrices

When supp(μ) ⊂ T, the set Λ := C[z, z^{−1}] of Laurent polynomials is dense in L^2(μ). The orthonormalization of {1, z, z^{−1}, z^2, z^{−2}, …} yields a sequence of orthonormal Laurent polynomials (χ_n)_{n≥0} which serves as an orthonormal basis of L^2(μ). The OLPUC χ_n are related to the OPUC ϕ_n by very simple relations, namely,

  χ_{2n}(z) = z^{−n} ϕ*_{2n}(z),  χ_{2n+1}(z) = z^{−n} ϕ_{2n+1}(z).

The recurrence relation (3) for OPUC leads to another one for OLPUC which may be expressed as [1, 8, 10]

  z χ(z) = C χ(z),  χ = (χ_0, χ_1, χ_2, …)^T,


where C is an infinite matrix given by

  C = ⎡ −a_1      ρ_1
      ⎢ −ρ_1 a_2  −ā_1 a_2  −ρ_2 a_3  ρ_2 ρ_3
      ⎢ ρ_1 ρ_2   ā_1 ρ_2   −ā_2 a_3  ā_2 ρ_3
      ⎢                     −ρ_3 a_4  −ā_3 a_4  −ρ_4 a_5  ρ_4 ρ_5
      ⎢                     ρ_3 ρ_4   ā_3 ρ_4   −ā_4 a_5  ā_4 ρ_5
      ⎣                                          ⋱         ⋱       ⋱ ⎦   (4)

The matrix C, known in the literature as a CMV matrix, represents the multiplication operator (2), which now is unitary. Hence, unlike the Hessenberg representation linked to OPUC, the five-diagonal CMV representation (4) related to OLPUC is not only banded, but is always unitary. The spectral relations between C and Tμ are similar to those of the Jacobi case, but now they hold for any measure μ on T.
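The unitarity and five-diagonal shape can be seen from the standard factorization of a CMV matrix into two block-diagonal unitaries built from 2×2 blocks Θ(a) = [−a, ρ; ρ, ā]. The sketch below is ours and illustrative only (the exact sign and ordering conventions of (4) may differ), but it shows a unitary five-diagonal matrix arising for any parameters with |a_j| < 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def theta(a):
    """2x2 unitary building block; rho = sqrt(1 - |a|^2)."""
    rho = np.sqrt(1.0 - abs(a)**2)
    return np.array([[-a, rho], [rho, np.conj(a)]])

m = 4                                     # C will be 2m x 2m (a truncation)
al = 0.6*rng.random(2*m)*np.exp(2j*np.pi*rng.random(2*m))   # |a_j| < 1
L = np.zeros((2*m, 2*m), complex)
M = np.eye(2*m, dtype=complex)
for j in range(m):                        # L: Theta blocks on even positions
    L[2*j:2*j+2, 2*j:2*j+2] = theta(al[2*j])
for j in range(m - 1):                    # M: 1, then Theta blocks on odd positions
    M[2*j+1:2*j+3, 2*j+1:2*j+3] = theta(al[2*j+1])
C = L @ M
assert np.allclose(C @ C.conj().T, np.eye(2*m))                          # unitary
assert np.allclose(np.triu(C, 3), 0) and np.allclose(np.tril(C, -3), 0)  # five-diagonal
```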

3 The Darboux Transformations

Darboux transformations of Jacobi and CMV matrices are particular cases of Darboux transformations of self-adjoint and unitary operators. In this section we will describe this more general Darboux setting. This will allow us to distinguish later on the Darboux features which are specific to the Jacobi and CMV cases from those already present for general self-adjoint and unitary operators. The self-adjoint case has its roots in a work by Moutard from 1875, with subsequent contributions from Darboux, Schrödinger, Dirac, von Neumann and others (see [6] and references therein). However, the application of Darboux methods to unitary operators is not usual in the literature.

3.1 Darboux Transformations of Self-Adjoint Operators

Let H be a bounded self-adjoint operator everywhere defined on a Hilbert space. The inequality |⟨v, Hv⟩| ≤ ‖H‖‖v‖² shows that H − ξ is positive definite for some ξ ∈ R. Then, any factorization

  H − ξ = A A⁺

leads to a new bounded self-adjoint operator G defined by

  G − ξ = A⁺ A.   (5)
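In finite dimension, (5) can be exercised numerically: shift a symmetric matrix so it becomes positive definite, factorize it by Cholesky, and recombine the factors in reverse order. A minimal numpy sketch (ours, with an arbitrary random H):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
H = (B + B.T)/2                                   # self-adjoint "operator"
xi = np.min(np.linalg.eigvalsh(H)) - 1.0          # shift so that H - xi > 0
A = np.linalg.cholesky(H - xi*np.eye(n))          # H - xi = A A^+
G = A.T @ A + xi*np.eye(n)                        # Darboux transform: G - xi = A^+ A
assert np.allclose(np.linalg.eigvalsh(G), np.linalg.eigvalsh(H))
```

Here ker A = ker A⁺ = {0}, so the transformation is strictly isospectral.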

The above identities show that ker(AA+ ) = ker A+ and ker(A+ A) = ker A are the ξ -eigenspaces of H and G respectively. The remaining eigenspaces of H and G are

Darboux Transformations for OPUC and OPRL

59

connected by the operators A and A⁺, as follows from the equalities

  H A = A A⁺ A + ξ A = A G,   A⁺ H = A⁺ A A⁺ + ξ A⁺ = G A⁺,   (6)

which imply that, except for the eigenvalue ξ, the operator A⁺/A maps eigenvectors of H/G into eigenvectors of G/H with the same eigenvalue. Actually, a classical result due to von Neumann [11, Thm. 5.39] states that the operators AA⁺ and A⁺A are unitarily equivalent when restricted to (ker A⁺)^⊥ and (ker A)^⊥ respectively, hence H and G are unitarily equivalent when restricted to the orthogonal complement of their ξ-eigenspaces. Therefore, the transformation H → G is isospectral up to the eigenvalue ξ, i.e. it can modify the spectrum only by creating or annihilating ξ-eigenvectors. We will refer to the transformation H → G as a Darboux transformation of the self-adjoint operator H. Darboux transformations allow one to translate spectral properties of known operators into spectral information about unknown ones, but they are also tools to generate new operators by adding/removing eigenvalues from a given one. If ker A = {0}, the operator A has an inverse on its range ran A. Then (6) yields

  G = A^{−1} H A,

(7)

which is an alternative to (5) for the definition of the Darboux transform G. In this case ξ is not an eigenvalue of G, but it is an eigenvalue of H whenever ker A+ = {0}, in which case the Darboux transformation annihilates the ξ -eigenvectors. If, instead, ker A+ = {0}, the Darboux transformation becomes strictly isospectral. The above discussion can be generalized by choosing a real polynomial ℘ of arbitrary degree such that ℘ (H ) is positive definite. Theorem 1 Let H be a bounded self-adjoint operator on a Hilbert space and ℘ a real polynomial such that ℘ (H ) is positive definite. If ℘ (H ) = AA+ with A having a bounded inverse on ran A, then G = A−1 H A defines a self-adjoint operator such that ℘ (G) = A+ A and spec G follows from spec H by removing the eigenvalues at the zeros of ℘. Proof First we must show that G = A−1 H A is well defined, which amounts to prove that ran(H A) ⊂ ran A. Since H commutes with ℘ (H ) = AA+ , we find that AA+ H = H AA+ . Therefore, AA+ H ker A+ = {0}, which means that H ker A+ ⊂ ker A+ . Bearing in mind that H + = H , this implies that H (ker A+ )⊥ ⊂ (ker A+ )⊥ . Since A has a bounded inverse on its range, ran A = (ker A+ )⊥ , thus H ran A ⊂ ran A. The relation G = A−1 H A implies that ℘ (G) = A−1 ℘ (H )A = A+ A and ℘ (G+ ) = ℘ (G)+ = A+ A. Thus, G and G+ commute with A+ A, which helps to prove the self-adjointness of G. Using that AG = H A and G+ A+ = A+ H we find that A+ AG+ = G+ A+ A = A+ H A = A+ AG.

We conclude that G⁺ = G because ker A = {0} for A having an inverse on its range. Regarding the spectrum, the relation (G − z)⁻¹ = A⁻¹(H − z)⁻¹A between the resolvents shows that spec G ⊂ spec H. The sum of the eigenspaces of H whose eigenvalues are zeros of ℘ constitutes ker ℘(H) = ker A⁺. An analogous statement for G, together with the fact that ker ℘(G) = ker A = {0}, implies that G has no eigenvalues among the zeros of ℘. The statement about the spectrum is proved if we show that any point z ∈ spec H which is not a zero of ℘ necessarily lies in spec G. Any such point is characterized by the existence of a sequence of approximate eigenvectors vₙ ∈ (ker A⁺)⊥ of H, i.e.

    (H − z)vₙ → 0,    ‖vₙ‖ ≥ ε > 0.

Since vₙ ∈ ran A, we can define vₙ′ = A⁻¹vₙ, which satisfies ‖vₙ′‖ ≥ ε′ > 0 with ε′ = ε/‖A‖. Therefore, z ∈ spec G because, bearing in mind that A⁻¹ is bounded,

    (G − z)vₙ′ = A⁻¹(H − z)vₙ → 0,    ‖vₙ′‖ ≥ ε′ > 0.
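The content of Theorem 1 is easy to test numerically in a finite-dimensional analogue, where every operator is bounded and A is automatically invertible on its range. The following sketch, with arbitrarily chosen matrix data and ℘(z) = z − ξ, is only an illustration and not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)

# A finite-dimensional self-adjoint "operator" H (toy example).
n = 6
M = rng.standard_normal((n, n))
H = (M + M.T) / 2

# Choose xi below the spectrum so that H - xi is positive definite.
xi = np.linalg.eigvalsh(H).min() - 1.0

# Cholesky factorization H - xi = A A^+ (A lower triangular).
A = np.linalg.cholesky(H - xi * np.eye(n))

# Darboux transform: G - xi = A^+ A, equivalently G = A^{-1} H A.
G = A.T @ A + xi * np.eye(n)
G2 = np.linalg.solve(A, H @ A)

assert np.allclose(G, G2)                                          # (5) and (7) agree
assert np.allclose(np.linalg.eigvalsh(G), np.linalg.eigvalsh(H))   # isospectral
```

In finite dimensions ker A = ker A⁺ = {0}, so the transformation is strictly isospectral, matching the discussion after (7).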

Self-adjoint Multiplication Operators Let μ be a non-trivial positive Borel measure with bounded support on ℝ. The corresponding multiplication operator Tμ given by (2) is bounded and self-adjoint. Given ξ ≤ inf supp(μ), the operator Tμ − ξ is positive definite, since ⟨f, (Tμ − ξ)f⟩ = ∫ℝ (z − ξ)|f(z)|² dμ(z) ≥ 0 for f ∈ L²(μ). Actually, ⟨f, (Tμ − ξ)f⟩ = 0 only when ξ is a mass point of μ and f is proportional to the characteristic function 1_ξ of {ξ}. Therefore, ⟨f, (Tμ − ξ)f⟩ > 0 if f ∈ P ∖ {0}. When translated to the related Jacobi matrix J, this means that X⁺(J − ξ)X > 0 for any column vector X ≠ 0 with finitely many non-null entries. This implies the existence of a unique Cholesky factorization J − ξ = AA⁺, i.e. one such that A is lower triangular with a positive upper diagonal [2, App. A]. The tridiagonal shape of J − ξ implies that A is indeed 2-band; pictorially,

        ⎛ +            ⎞
    A = ⎜ ∗  +         ⎟ ,    (8)
        ⎜    ∗  +      ⎟
        ⎝       ⋱  ⋱  ⎠

where, here and in what follows, the symbol + stands for a positive coefficient, while ∗ denotes a possibly non-zero coefficient. Therefore, the Darboux transformation J → K given by K − ξ = A⁺A yields a new bounded Jacobi matrix K, whose related measure ν defines a new bounded self-adjoint multiplication operator Tν, the Darboux transform of Tμ. The structure (8) implies that ker(K − ξ) = ker A = {0}, so that ker(Tν − ξ) = {0}. Besides, if ξ is a mass point of μ, then ker(Tμ − ξ) = span{1_ξ}. In this case, the Darboux transformation Tμ → Tν removes the eigenvalue ξ, which means that ξ is a mass point of μ but not of ν.
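The Cholesky-based Darboux step J → K described above can be replayed on a finite truncation; since A is lower triangular, the leading principal block of the semi-infinite Cholesky factor coincides with the Cholesky factor of the truncated J − ξ, so only the last rows of K feel the truncation. A minimal numerical sketch (truncation size chosen arbitrarily):

```python
import numpy as np

# Truncated Jacobi matrix with zeros on the diagonal and 1/2 off-diagonal
# (free Jacobi matrix / second kind Chebyshev measure).
n = 12
J = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)

xi = -1.0                                    # J - xi is positive definite here
A = np.linalg.cholesky(J - xi * np.eye(n))   # 2-band lower triangular factor

# Darboux transform K - xi = A^+ A.
K = A.T @ A + xi * np.eye(n)

# K is again tridiagonal and symmetric with positive off-diagonal entries,
# i.e. a Jacobi matrix (truncation effects only touch the last row/column).
assert np.allclose(K, K.T)
assert np.allclose(np.triu(K, 2), 0) and np.allclose(np.tril(K, -2), 0)
assert (np.diag(K, 1) > 0).all()
```

The band structure of A is what forces K = A⁺A + ξ to be tridiagonal again, which is the matrix counterpart of "the Darboux transform of a Jacobi matrix is a Jacobi matrix".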

3.2 Darboux Transformations of Unitary Operators

Consider a unitary operator U on a Hilbert space. Using the results of the previous subsection, we can define a Darboux transformation of U by resorting to a bounded self-adjoint operator defined by U, for instance Re U = (U + U⁺)/2. If ξ ∈ ℝ makes Re U − ξ positive definite, the corresponding Darboux transformation U → V should start with a factorization Re U − ξ = AA⁺, generating a new unitary operator V such that Re V − ξ = A⁺A. However, Re V does not determine the unitary V. Instead, we can think of an approach to Darboux for unitaries defining V by a conjugation with A, analogous to (7), i.e. V = A⁻¹UA. Indeed, inspired by Theorem 1, with no more effort we can develop a general result in which we substitute Re U − ξ = (U + U⁻¹)/2 − ξ by an arbitrary Laurent polynomial mapping unitary operators into self-adjoint ones. These are those ℒ in the space Λ of Laurent polynomials which are real valued on the unit circle, and they have the explicit form

    ℒ(z) = Σ_{j=−n}^{n} c_j z^j,    c_{−j} = c̄_j.    (9)

We will refer to (9) as a Hermitian Laurent polynomial of degree 2n, since it has 2n zeros counting multiplicity. The absence of Hermitian Laurent polynomials of degree one will be the origin of important differences between Darboux for Jacobi and CMV matrices. These ideas lead to the following analogue of Theorem 1, which defines a mapping U → V which we call a Darboux transformation of the unitary operator U.

Theorem 2 Let U be a unitary operator on a Hilbert space and ℒ a Hermitian Laurent polynomial such that ℒ(U) is positive definite. If ℒ(U) = AA⁺ with A having a bounded inverse on ran A, then V = A⁻¹UA defines a unitary operator such that ℒ(V) = A⁺A and spec V follows from spec U by removing the eigenvalues at the zeros of ℒ.

Proof To prove that V = A⁻¹UA is well defined we take into account that U⁺ commutes with ℒ(U) = AA⁺. This implies that U⁺ ker A⁺ ⊂ ker A⁺, which is equivalent to U(ker A⁺)⊥ ⊂ (ker A⁺)⊥. Hence, U ran A ⊂ ran A for A with bounded inverse on its range.

As in Theorem 1, ℒ(V) = A⁻¹ℒ(U)A = A⁺A. Hence V and V⁺ commute with A⁺A, which combined with the identities AV = UA, V⁺A⁺ = A⁺U⁺ and the unitarity of U gives

    A⁺AV⁺V = V⁺A⁺AV = A⁺U⁺UA = A⁺A.

Since ker A = {0}, this implies that V⁺V = 1, which proves the unitarity of V, bearing in mind that V⁻¹ = A⁻¹U⁻¹A is everywhere defined. The proof of the spectral relations between U and V is just as in Theorem 1.
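Theorem 2 also has a transparent finite-dimensional analogue: for a unitary matrix U and ℒ(z) = (z + z⁻¹)/2 − ξ with ξ below the real parts of the eigenvalues, V = A⁻¹UA is again unitary and isospectral. The sketch below uses random data and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random unitary matrix U (orthonormal columns from a QR factorization).
n = 6
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)

# Hermitian Laurent polynomial L(z) = (z + 1/z)/2 - xi evaluated at U;
# xi = -2 makes L(U) >= I because the eigenvalues of U lie on the unit circle.
xi = -2.0
LU = (U + U.conj().T) / 2 - xi * np.eye(n)

A = np.linalg.cholesky(LU)          # L(U) = A A^+
V = np.linalg.solve(A, U @ A)       # Darboux transform V = A^{-1} U A

assert np.allclose(V @ V.conj().T, np.eye(n))                      # V is unitary
assert np.allclose((V + V.conj().T) / 2 - xi * np.eye(n),
                   A.conj().T @ A)                                 # L(V) = A^+ A
eU, eV = np.linalg.eigvals(U), np.linalg.eigvals(V)
assert all(np.abs(eV - w).min() < 1e-8 for w in eU)                # same spectrum
```

Here ker A = {0} automatically, so no eigenvalue is removed; the infinite-dimensional subtleties of the theorem concern precisely the cases this toy model cannot reach.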

Unitary Multiplication Operators Let Tμ be the unitary multiplication operator (2) given by a non-trivial positive Borel measure on T. Given a Hermitian Laurent polynomial ℒ(z) = (z + z⁻¹)/2 − ξ with 0 < ε = inf{Re z − ξ : z ∈ supp(μ)}, the operator ℒ(Tμ) is positive definite. Indeed, ⟨f, ℒ(Tμ)f⟩ = ∫_T (Re z − ξ)|f(z)|² dμ(z) ≥ ε‖f‖², which implies that the related CMV matrix C satisfies X⁺ℒ(C)X ≥ ε‖X‖². Therefore, there exists a unique Cholesky factorization ℒ(C) = AA⁺ [2, App. A]. Due to the five-diagonal shape of ℒ(C), the factor A becomes 3-band, i.e.

        ⎛ +               ⎞
        ⎜ ∗  +            ⎟
    A = ⎜ ∗  ∗  +         ⎟ .    (10)
        ⎜    ∗  ∗  +      ⎟
        ⎝       ⋱  ⋱  ⋱  ⎠

From ‖A⁺X‖² = X⁺ℒ(C)X ≥ ε‖X‖² we conclude that A⁺ has an everywhere defined bounded inverse, so the same holds for A. Hence, the Darboux transform D = A⁻¹CA is well defined. It turns out that D is again a CMV matrix (see Sect. 5), whose measure ν provides a unitary multiplication operator defining a Darboux transformation Tμ → Tν. This transformation is strictly isospectral since ker A = ker A⁺ = {0}, which means that supp(μ) = supp(ν).

This would change if μ had a mass point ζ with Re ζ = ξ, while 0 < ε = inf{Re z − ξ : z ∈ supp(μ) ∖ {ζ}}. Then ker A = {0}, but ker ℒ(Tμ) = span{1_ζ}, so that ker A⁺ = ker ℒ(C) is non-trivial. Using that A⁺A is unitarily equivalent to AA⁺ restricted to (ker A⁺)⊥, we get

    inf_X ‖AX‖²/‖X‖² = inf_X (X⁺A⁺AX)/‖X‖² = inf_{X ⊥ ker A⁺} (X⁺AA⁺X)/‖X‖²
        = inf_{f ⊥ 1_ζ} ⟨f, ℒ(Tμ)f⟩/‖f‖² = inf_f ( ∫_{T∖{ζ}} (Re z − ξ)|f|² dμ(z) ) / ( ∫_{T∖{ζ}} |f|² dμ(z) ) ≥ ε.

This proves that A has a bounded inverse on ran A, thus D = A⁻¹CA is again a well defined Darboux transformation. Actually, D is a CMV matrix (see Sect. 5). The related measure has no mass point at ζ because ker ℒ(D) = ker A = {0}. Therefore, the Darboux transformation Tμ → Tν removes the eigenvalue ζ in this case.
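For experiments with the CMV side it helps to have an explicit CMV matrix. The sketch below builds one from Schur parameters via the Θ-block factorization C = LM common in the literature; sign and ordering conventions for these parameters differ between references (in particular this need not match the convention of [2]), so the construction is only an illustrative assumption. A finite section is unitary when the last parameter is placed on the unit circle:

```python
import numpy as np

def theta(a):
    """2x2 block of the CMV factorization, |a| <= 1."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def cmv(alphas):
    """CMV matrix C = LM from parameters (last one on the unit circle)."""
    n = len(alphas)
    L = np.zeros((n, n), dtype=complex)
    M = np.zeros((n, n), dtype=complex)
    M[0, 0] = 1.0
    for j in range(0, n - 1, 2):        # Theta_0, Theta_2, ... go into L
        L[j:j + 2, j:j + 2] = theta(alphas[j])
    for j in range(1, n - 1, 2):        # Theta_1, Theta_3, ... go into M
        M[j:j + 2, j:j + 2] = theta(alphas[j])
    # close the open factor with a 1x1 unitary block
    if n % 2 == 1:
        L[n - 1, n - 1] = np.conj(alphas[n - 1])
    else:
        M[n - 1, n - 1] = np.conj(alphas[n - 1])
    return L @ M

# parameters inside the disc, last one on the unit circle
C = cmv([0.3, -0.2 + 0.1j, 0.25, -0.4j, 0.1, 1.0])

assert np.allclose(C @ C.conj().T, np.eye(6))                            # unitary
assert np.allclose(np.triu(C, 3), 0) and np.allclose(np.tril(C, -3), 0)  # 5-diagonal
```

Since L and M are each unitary and tridiagonal, their product is automatically unitary and five-diagonal, which is the structural fact used throughout this section.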

The previous Hilbert space approach to Darboux transformations has some drawbacks which we will try to overcome. For instance, it is not directly applicable to unbounded self-adjoint operators. Even in the bounded case, some difficulties arise in connection with the domains of the operators. As an example, consider the so-called inverse Darboux transformations H ← G of self-adjoint operators, which take G as a starting point. The relation (7) suggests that such an inverse transformation should be given by H = AGA⁻¹. However, this relation cannot be valid in general because H is everywhere defined, while A⁻¹ has domain ran A. This becomes even more important in the unitary case, where we have no alternative to the definition of Darboux transformations by conjugation with the factor A.

We will solve these problems simultaneously by dealing directly with the algebra of infinite matrices, avoiding any interpretation in terms of operators on Hilbert spaces. This has the advantage of allowing for the action of certain infinite matrices — e.g. band or triangular matrices, like A or A⁻¹ — on non-square-summable vectors, which enlarges the domains, solving the related problems (see the comments after Lemma 1). The Darboux transformations of infinite matrices keep many of the features already discussed for the operator setting, but have some advantages since no attention needs to be paid to norms and unbounded situations. Nevertheless, working with infinite matrices without the help of norms and inner products is a source of other difficulties arising from the eventual manipulation of infinite sums. We must specify the matrix operations that will be admitted and their properties. We say that a product AB of complex matrices A, B is admissible if every entry (AB)ᵢₖ = Σⱼ AᵢⱼBⱼₖ involves only a finite number of non-zero summands. Hessenberg type matrices — the lower Hessenberg type ones being those with finitely many non-zero entries in each row, and the upper Hessenberg type ones their transposes — generate admissible products: AB is admissible if either A is lower Hessenberg type or B is upper Hessenberg type. Band matrices are both lower and upper Hessenberg type.

Most of the properties of finite matrix multiplication hold for admissible products. Exceptions are the associativity and the uniqueness of the inverse matrix. The latter is related to the fact that invertible matrices may have left and right kernels,

    kerL A = {X = (X₀, X₁, . . . ) : XA = 0},    kerR A = {X = (X₀, X₁, . . . )ᵀ : AX = 0}.

These kernels must be understood as sets of (not necessarily finite norm) vectors giving an admissible null product with the matrix. Thus kerR A differs from what we previously called ker A, the set of vectors with finite norm giving a null standard right product with the matrix A (see the comments just before Sect. 4.1). Regarding the above pathologies, the following result shows the way we will circumvent them in the development of Darboux transformations for Jacobi and CMV matrices.

Proposition 1 ([2, Sect. 2]) The associativity (AB)C = A(BC) holds if either A and B are lower Hessenberg type, or B and C are upper Hessenberg type, or A is lower Hessenberg type and C is upper Hessenberg type. If a lower (upper) Hessenberg type matrix A has a lower (upper) Hessenberg type inverse, no other inverse exists. In this case, kerR A = {0} (kerL A = {0}).

Whenever the associativity holds, we will omit parentheses for multiple products. A Hessenberg type matrix A is called invertible if it has an inverse of the same type.
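The admissibility of products of band matrices can be made concrete: representing an "infinite" matrix by an entry rule together with its bandwidths, every entry of a product is a finite sum. The helper below is hypothetical (not from the text) and only illustrates the finiteness of the sums:

```python
from typing import Callable

# An "infinite" band matrix: an entry rule plus lower/upper bandwidths,
# i.e. entry(i, j) == 0 whenever j < i - low or j > i + up.
class Band:
    def __init__(self, entry: Callable[[int, int], float], low: int, up: int):
        self.entry, self.low, self.up = entry, low, up

def product_entry(a: Band, b: Band, i: int, k: int) -> float:
    """(AB)_{ik} as a finite (hence admissible) sum over j."""
    lo = max(0, i - a.low, k - b.up)
    hi = min(i + a.up, k + b.low)
    return sum(a.entry(i, j) * b.entry(j, k) for j in range(lo, hi + 1))

# Free Jacobi matrix: 0 on the diagonal, 1/2 on both off-diagonals.
J = Band(lambda i, j: 0.5 if abs(i - j) == 1 else 0.0, low=1, up=1)

# Entries of J^2, computed without ever forming an infinite array.
assert product_entry(J, J, 0, 0) == 0.25   # only j = 1 contributes
assert product_entry(J, J, 5, 5) == 0.5    # j = 4 and j = 6 contribute
assert product_entry(J, J, 3, 5) == 0.25   # only j = 4 contributes
```

The index window [lo, hi] is exactly the finite set of summands that makes the product admissible; for a general lower Hessenberg type factor the window is bounded above row by row, with the same effect.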

4 Darboux Transformations of Jacobi Matrices

Let p = (p₀, p₁, . . . )ᵀ and q = (q₀, q₁, . . . )ᵀ, where pₙ, qₙ ∈ ℝ[x] have positive leading coefficients and deg(pₙ) = deg(qₙ) = n. The matrices J and K given by

    zp(z) = Jp(z),    zq(z) = Kq(z),

are lower Hessenberg with positive upper diagonal, i.e. they have the shape

    ⎛ ∗  +            ⎞
    ⎜ ∗  ∗  +         ⎟ .    (11)
    ⎜ ∗  ∗  ∗  +      ⎟
    ⎝ ⋮           ⋱  ⎠

Other matrices relating p and q come into play, namely the matrices A, B given by

    p = Aq,    ℘q = Bp,

where ℘(z) = z − ξ is a real polynomial. The matrix A is lower triangular with positive diagonal, while B is lower Hessenberg with positive upper diagonal. Thus kerR(A) = {0} and A is invertible, since it has a lower triangular inverse. The following relations are the starting point to develop the Darboux transformations of Jacobi matrices.

Lemma 1 ℘(J) = AB, ℘(K) = BA, JA = AK, KB = BJ, K = A⁻¹JA, J = AKA⁻¹.

Proof The first line follows from the linear independence of the sets p, q and

    (℘(J) − AB)p = ℘p − ℘Aq = 0,    (℘(K) − BA)q = ℘q − Bp = 0,
    (JA − AK)q = Jp − zAq = 0,    (KB − BJ)p = ℘Kq − zBp = 0.

The two last identities stem from the two previous ones and Proposition 1.

The relation J = AKA⁻¹ announces an anticipated advantage of the matrix approach regarding inverse Darboux transformations. In contrast to the operator setting, as invertible lower triangular matrices, A and A⁻¹ have as "domain" and "range" the whole set of infinite vectors in the context of admissible matrix products. Hence, J = AKA⁻¹ makes sense in this context since J and K also have such a "domain" because, as band matrices, their product with any infinite vector is admissible. Similar manipulations are not possible for KB = BJ due to the following result.

Proposition 2 B is not invertible and kerR ℘(J) = kerR(B) = span{p(ξ)}.

Proof According to the Hessenberg structure (11) of B, dim kerR(B) = 1, hence B cannot be invertible by Proposition 1. The equality kerR ℘(J) = kerR(B) follows from ℘(J) = AB and kerR A = {0}. Therefore, ℘(J)p(ξ) = ℘(ξ)p(ξ) = 0 implies that p(ξ) ∈ kerR B, which means that p(ξ) spans kerR(B).

Suppose now that J and K are Jacobi matrices, so that p and q are orthonormal with respect to some measures on ℝ. According to Lemma 1, Darboux transformations correspond to the choice B = A⁺ associated with the Cholesky factorization ℘(J) = AA⁺, which is possible iff ℘(J) > 0, i.e. iff all the leading principal minors of ℘(J) are positive, which means that X⁺℘(J)X > 0 for every column vector X ≠ 0 with finitely many non-zero entries [2, App. A]. This choice may be characterized in terms of the orthogonality measures.

Theorem 3 If p is orthonormal with respect to the measure dμ, then B = A⁺ ⇔ q is orthonormal with respect to the measure dν = ℘ dμ.

Proof Expressing the μ-orthonormality of p as ∫ pp⁺ dμ = I, we get

    B⁺ = ∫ p(Bp)⁺ dμ = ∫ ℘ pq⁺ dμ = A ∫ qq⁺ ℘ dμ.

The result follows from this equality and the fact that kerR A = {0}.

The condition B = A⁺ implies that A has the 2-band structure (8) with non-zero lower diagonal entries. Theorem 3 is valid for unbounded Jacobi matrices; actually, it holds even in the indeterminate case. This result raises the question of the positivity of ℘ dμ. In the bounded case, this is guaranteed by the positive definiteness of ℘(J).

Proposition 3 Let J be a Jacobi matrix and μ a related measure on ℝ. If ℘ ∈ P is non-negative on supp(μ), then ℘(J) > 0, both conditions being equivalent when supp(μ) is bounded.

Proof The identity ℘(J) = ∫ ℘(J)pp⁺ dμ = ∫ pp⁺ ℘ dμ implies that ℘(J) > 0 iff ∫ |f|² ℘ dμ > 0 for f ∈ P ∖ {0}. If ℘ ≥ 0 on supp(μ), then ∫ |f|² ℘ dμ > 0 for every f ∈ P ∖ {0} because supp(μ) has infinitely many points. On the other hand, when supp(μ) is bounded, P is dense in L²(μ) and Tμ is bounded, i.e. continuous. Thus, ⟨f, ℘(Tμ)f⟩ = ∫ |f|² ℘ dμ > 0 for f ∈ P ∖ {0} means that this actually holds for f ∈ L²(μ) ∖ {0}, which implies that ℘ ≥ 0 on supp(μ).

The previous results show that, whenever J − ξ > 0, a Darboux transformation J → K is generated by the Cholesky factorization J − ξ = AA⁺ and K − ξ = A⁺A. This is equivalent to the Christoffel transformation dμ(z) → dν(z) = (z − ξ) dμ(z), which preserves the support of the measure μ up to any mass point at ξ. In the determinate case supp(μ) = spec Tμ = spec J, the mass points of μ being the eigenvalues of J; thus the Darboux transformation is isospectral up to any eigenvalue of J at ξ, which is absent in K.

The above picture solves an apparent contradiction which arises when applying Proposition 2 to K, concluding that kerR ℘(K) = span{q(ξ)}. This is in contrast with the equality ker ℘(K) = ker A = {0}, which follows from the arguments for general self-adjoint operators in Sect. 3.1. The only way to make both identities compatible is assuming that q(ξ) has infinite norm, so that it does not lie in ker ℘(K). This is indeed the case because K has no eigenvalue at ξ.

4.1 Inverse Darboux Transformations of Jacobi Matrices

In this subsection we will consider the inverse of the previous problem: given a Jacobi matrix K and a polynomial ℘(z) = z − ξ such that ℘(K) > 0, find the Jacobi matrices J such that K is their Darboux transform. This amounts to finding all the reversed Cholesky factorizations ℘(K) = A⁺A, with A having the shape (8). The condition ℘(K) > 0, necessary for a reversed Cholesky factorization, almost guarantees it for a bounded K as a consequence of the following result.

Proposition 4 ([2, App. A]) If M = M⁺ > 0 is a bounded (2n + 1)-band matrix, then M = A⁺A where A is n-band lower triangular with non-negative diagonal entries. When the lower diagonal entries of M are non-zero, the lower diagonal entries of A must be non-zero, while all but at most its first n diagonal entries must be positive.

The last statement of this proposition was incorrectly stated in [2, App. A], which claimed that all the diagonal entries of A should be positive. When applied to a tridiagonal matrix ℘(K) > 0 arising from a bounded Jacobi matrix K, the above proposition guarantees the existence of a factorization ℘(K) = A⁺A with

        ⎛ r₀            ⎞
    A = ⎜ s₀  r₁        ⎟ ,    r₀ ≥ 0,  rₙ₊₁ > 0,  sₙ ≠ 0,  n ≥ 0.    (12)
        ⎜     s₁  r₂    ⎟
        ⎝        ⋱  ⋱  ⎠

When r₀ = 0 we will refer to a singular factorization. Any non-singular factor A yields an inverse transform because ℘(J) = AA⁺ defines a Jacobi matrix J. Nevertheless, Lemma 1 implies that, unlike the operator setting, the matrix approach allows us to define the inverse transformations by the conjugation J = AKA⁻¹. Inverse Darboux transformations also apply to the unbounded case, but the existence of the related reversed Cholesky factorizations is not covered by Proposition 4. Denoting ℘(K) = (mᵢ,ⱼ) and using the notation (12), ℘(K) = A⁺A reads as

    rₙ² + sₙ² = mₙ,ₙ,    rₙ₊₁ sₙ = mₙ₊₁,ₙ.    (13)

The solutions of these equations depend in general on a single real parameter, which may be chosen as r₀. Not all the choices r₀ > 0 give a non-singular factor A, since they must be consistent with rₙ > 0 and sₙ ∈ ℝ ∖ {0} for all n. Actually, Proposition 4 guarantees the existence of a factor (12) when ℘(K) > 0, but it can happen that there is a single solution, corresponding to r₀ = 0, which means that no inverse Darboux transform exists.

The identification between Darboux and Christoffel transformations sheds light on this discussion. If ν is a measure related to the Jacobi matrix K, inverse Darboux is equivalent to searching for solutions μ of ℘ dμ = dν which are non-trivial positive Borel measures such that P ⊂ L²(μ). This requires ℘ ≥ 0 on supp(ν), which in view of Proposition 3 brings us back to the condition ℘(K) > 0. Assuming this, the positive solutions are given by the so-called Geronimus transformations,

    dμ = dν/℘ + m δ_ξ,    m ≥ 0,    (14)

which satisfy the rest of the requirements if the moments ∫ zⁿ dν/℘ are finite for n ≥ 0. In the bounded case, this is equivalent to asking for the finiteness of ∫ dν/℘. Hence, the absence of non-singular reversed factorizations of ℘(K) > 0 indicates that P ⊄ L²(μ), which in the bounded case means that ∫ dν/℘ is infinite. The freedom in the parameter r₀ for inverse Darboux is related to the freedom in the mass m for the Geronimus transformations. This relation can be made more precise. The search for such an explicit relation will also help to understand the choices of r₀ giving non-singular factorizations ℘(K) = A⁺A.

Theorem 4 Suppose that ℘(z) = z − ξ ≥ 0 on supp(ν). Then any non-singular factorization ℘(K) = A⁺A satisfies

    0 < r₀² ≤ ∫ dν / ∫ (dν/℘),    (15)

and the measure μ of the corresponding inverse Darboux transform J = AKA⁻¹ is the Geronimus transformation (14) with mass

    m = (∫ dν)/r₀² − ∫ dν/℘.    (16)

Proof Given r₀ in the range (15), the mass (16) satisfies m ≥ 0, so that (14) defines a non-trivial positive Borel measure μ with P ⊂ L²(μ) and ℘ ≥ 0 on supp(μ), hence ℘(J) > 0 for the related Jacobi matrix J. Thus, there exists a Cholesky factorization ℘(J) = AA⁺ and, according to Theorem 3, ℘(K) = A⁺A. Since the first diagonal entry of A satisfies (16), it is the chosen value of r₀.

Chebyshev Polynomials Let dν(z) = √(1 − z²) dz, z ∈ [−1, 1], be the measure corresponding to the second kind Chebyshev polynomials. The associated Jacobi matrix is

        ⎛ 0   ½           ⎞
    K = ⎜ ½   0   ½       ⎟ .
        ⎜     ½   0   ½   ⎟
        ⎝        ⋱  ⋱  ⋱ ⎠

Consider ℘(z) = z + 1, which is non-negative on [−1, 1]. Since ∫ dν = π/2 and ∫ dν/℘ = π, the condition (15) for a non-singular factorization K + I = A⁺A is 0 < r₀² ≤ 1/2. As can be checked by induction, for any such value of r₀ the Eqs. (13) lead to solutions rₙ, sₙ > 0 given by

    rₙ² = (1/2) (n − 2r₀²(n − 1)) / (n + 1 − 2r₀²n),    sₙ = 1/(2rₙ₊₁),    n ≥ 0.

For each r₀ ∈ (0, 1/√2], they provide the non-singular factor A leading to the inverse Darboux transform J = AA⁺ − I. According to (14) and (16), the corresponding measure μ is given by

    dμ(z) = √((1 − z)/(1 + z)) dz + m δ₋₁(z),    m = π (1/(2r₀²) − 1).

The mass m only vanishes for r₀ = 1/√2, in which case

        ⎛ 1/√2              ⎞                      ⎛ −½  ½           ⎞
    A = ⎜ 1/√2  1/√2        ⎟ ,    J = AA⁺ − I =  ⎜  ½  0   ½       ⎟ .
        ⎜      1/√2  1/√2   ⎟                      ⎜      ½  0   ½   ⎟
        ⎝            ⋱   ⋱ ⎠                      ⎝        ⋱  ⋱  ⋱ ⎠

On the other hand, when r₀ = 1/2, there is a mass m = π and rₙ² = (1/2)(n + 1)/(n + 2), so that J is given by (1) with b₁ = r₀² − 1 = −3/4 and

    aₙ = rₙ₋₁ sₙ₋₁ = (1/2) √(n(n + 2)) / (n + 1),    n ≥ 1,
    bₙ = rₙ₋₁² + sₙ₋₂² − 1 = 1/(2n(n + 1)),    n ≥ 2.

If we now consider as a starting point the first kind Chebyshev polynomials, with measure dν(z) = (1/√(1 − z²)) dz, z ∈ [−1, 1], and Jacobi matrix

        ⎛ 0    1/√2           ⎞
    K = ⎜ 1/√2  0    ½        ⎟ ,
        ⎜      ½    0    ½    ⎟
        ⎝          ⋱  ⋱  ⋱  ⎠

the same polynomial ℘(z) = z + 1 leads to the following solutions of (13):

    rₙ² = (1/2) (1 − r₀²(n − 1)) / (1 − r₀²n),    n ≥ 1,    s₀ = 1/(√2 r₁),    sₙ = 1/(2rₙ₊₁),    n ≥ 1.

No choice of r₀ > 0 yields rₙ > 0 for all n, which means that there is no non-singular factorization ℘(K) = A⁺A. The factorization ensured by Proposition 4 is the one with r₀ = 0, which is singular and provides no inverse Darboux transform. This could be predicted just by looking at the measure, because ∫ dν/℘ is infinite.
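The inductive claims of the two Chebyshev examples can be replayed numerically: the sketch below runs the recursion (13), taking the diagonal and subdiagonal of ℘(K) = K + I as input, and reports whether the positivity of the rₙ survives. The step count and sample values of r₀ are arbitrary choices for this illustration:

```python
import math

def run_recursion(m_diag, m_sub, r0, steps=400):
    """Iterate (13): r_n^2 + s_n^2 = m_{n,n}, r_{n+1} s_n = m_{n+1,n}.
    Returns True if r_n > 0 and s_n != 0 can be kept for all steps."""
    r = r0
    for n in range(steps):
        s2 = m_diag(n) - r * r
        if s2 <= 0:
            return False          # breakdown: no valid s_n
        r = m_sub(n) / math.sqrt(s2)   # r_{n+1}
        if r <= 0:
            return False
    return True

# Second kind Chebyshev, p(z) = z + 1: K + I has 1's on the diagonal
# and 1/2 on the off-diagonals; any 0 < r0^2 <= 1/2 should work.
second = lambda r0: run_recursion(lambda n: 1.0, lambda n: 0.5, r0)
assert second(0.5) and second(1 / math.sqrt(2))
assert not second(0.9)            # r0^2 > 1/2 violates (15)

# First kind Chebyshev: same diagonal, but m_{1,0} = 1/sqrt(2).
first = lambda r0: run_recursion(
    lambda n: 1.0, lambda n: 1 / math.sqrt(2) if n == 0 else 0.5, r0)
assert not any(first(r0) for r0 in (0.1, 0.3, 0.5, 0.7))
```

For the first kind measure every positive r₀ eventually produces sₙ² ≤ 0, in line with the divergence of ∫ dν/℘.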

5 Darboux Transformations of CMV Matrices

In this section we will follow Sect. 3.2 to define the Darboux transformations of CMV matrices [2]. Many of the proofs are similar to those for Jacobi matrices, so we will omit them, paying attention only to the new ingredients. An advantage of CMV with respect to Jacobi is the absence of the subtleties of unbounded situations, so the results typically hold for any CMV matrix. Also, the only requirement for a non-trivial positive Borel measure μ on T to generate a CMV matrix is the finiteness of ∫ dμ, since it is equivalent to Λ ⊂ L²(μ). Any sequence χ = (χ₀, χ₁, χ₂, . . . )ᵀ of Laurent polynomials such that

    χ = Tη,    η = (1, z, z⁻¹, z², z⁻², . . . )ᵀ,    T lower triangular with positive diagonal,

will be called a zig-zag sequence in Λ. Given two zig-zag sequences χ, ω and a Hermitian Laurent polynomial ℒ(z) = αz + β + ᾱz⁻¹ of degree 2, we can define the matrices A, B, C, D by

    zχ = Cχ,    zω = Dω,    χ = Aω,    ℒω = Bχ.

The matrix A is triangular with positive diagonal, while B, C, D are lower Hessenberg type with a staircase structure, sketched in (17): the rightmost entry of each row of C and D is positive, while B has non-zero upper diagonal entries,

    C, D →  ⎛ ∗ +               ⎞        B →  ⎛ ∗ ∗ ∗             ⎞
            ⎜ ∗ ∗ ∗ +          ⎟             ⎜ ∗ ∗ ∗ ∗ ∗        ⎟ .    (17)
            ⎜ ∗ ∗ ∗ ∗ ∗ +     ⎟             ⎜ ∗ ∗ ∗ ∗ ∗ ∗ ∗   ⎟
            ⎝ ⋮             ⋱  ⎠             ⎝ ⋮              ⋱ ⎠

A difference with respect to the Jacobi case is that C and D are invertible, because z⁻¹χ = C⁻¹χ and z⁻¹ω = D⁻¹ω define lower Hessenberg type inverses. The Hessenberg type structure is key to using Proposition 1 to prove the following results, by steps similar to those in the proofs of the analogous results in Sect. 4.

Lemma 2 ([2, Sect. 2]) ℒ(C) = AB, ℒ(D) = BA, CA = AD, DB = BC, D = A⁻¹CA, C = ADA⁻¹.

Proposition 5 ([2, Sect. 2]) B is not invertible and, if ζ₁, ζ₂ are the roots of ℒ, then

    kerR(ℒ(C)) = kerR(B) = span{χ(ζ₁), χ(ζ₂)} if ζ₁ ≠ ζ₂,    kerR(ℒ(C)) = kerR(B) = span{χ(ζ₁), χ′(ζ₁)} if ζ₁ = ζ₂.

Assume now that χ, ω are orthonormal with respect to some measures on T. Then C, D are the related CMV matrices, and we have the following analogues of Theorem 3 and Proposition 3.

Theorem 5 ([2, Sect. 3]) If χ is orthonormal with respect to the measure dμ, then B = A⁺ ⇔ ω is orthonormal with respect to the measure dν = ℒ dμ.

Proposition 6 ([2, Sect. 3]) Let C be a CMV matrix and μ a related measure on T. Then, a Hermitian ℒ ∈ Λ is non-negative on supp(μ) iff ℒ(C) > 0.

The previous results lead to the definition of the Darboux transformations C → D of CMV matrices. If ℒ(C) > 0 for a CMV matrix C, there is a unique Cholesky factorization ℒ(C) = AA⁺, which defines D = A⁻¹CA satisfying ℒ(D) = A⁺A. Due to the five-diagonal structure of ℒ(C) and the fact that its lower diagonal entries are non-zero, the Cholesky factor A has the 3-band shape (10) with non-zero lower diagonal entries. Theorem 5 shows that D is a CMV matrix, the one related to the measure dν = ℒ dμ, where μ is associated with C. The positivity of ν follows from Proposition 6, while the finiteness of ∫ dν is a consequence of the fact that Λ ⊂ L²(μ). The transformation dμ → dν = ℒ dμ preserves the support of μ up to the mass points at the zeros of ℒ. This means that the Darboux transformation C → D is isospectral up to the eigenvalues of C at the zeros of ℒ, which are removed.

5.1 Inverse Darboux Transformations of CMV Matrices

Inverse Darboux transformations may also be considered for CMV matrices. Starting with a CMV matrix D such that ℒ(D) > 0 for some Hermitian Laurent polynomial ℒ of degree 2, we search for the reversed Cholesky factorizations ℒ(D) = A⁺A, with A as in (10), which yield the CMV matrices C = ADA⁻¹ whose Darboux transform is D. If ν is a measure related to D, this is equivalent to finding the CMV matrices C with a related measure on T having the form

    dμ = dν/ℒ + Σ_{ℒ(ζ)=0} m_ζ δ_ζ.

According to Proposition 6, the condition ℒ(D) > 0 guarantees that dν/ℒ is positive, which will be a measure related to a CMV matrix provided that ∫ dν/ℒ is finite. There are three possibilities according to the location of the roots ζ₁, ζ₂ of ℒ:

1. ζ₁ ≠ ζ₂, ζ₁, ζ₂ ∈ T ⇒ dμ = dν/ℒ + m₁ δ_{ζ₁} + m₂ δ_{ζ₂}.
2. ζ₁ = ζ₂ = ζ ∈ T ⇒ dμ = dν/ℒ + m δ_ζ.
3. ζ₁ζ̄₂ = 1, ζ₁, ζ₂ ∉ T ⇒ dμ = dν/ℒ.

Therefore, the inverse Darboux transforms of a CMV matrix D such that ℒ(D) > 0 constitute a set of CMV matrices parametrized by at most two real parameters, which will be empty if ∫ dν/ℒ is infinite. Whenever ℒ(D) > 0, Proposition 4 guarantees the existence of a factorization ℒ(D) = A⁺A with

        ⎛ r₀                ⎞
        ⎜ s₀  r₁            ⎟
    A = ⎜ t₀  s₁  r₂        ⎟ ,    r₀, r₁ ≥ 0,  rₙ₊₂ > 0,  tₙ ≠ 0,  n ≥ 0.    (18)
        ⎜     t₁  s₂  r₃    ⎟
        ⎝        ⋱  ⋱  ⋱   ⎠

As in the Jacobi case, inverse Darboux for CMV requires non-singular factorizations, i.e. r₀, r₁ > 0. Nevertheless, in contrast to the Jacobi case, non-singular factorizations may not lead to CMV inverse Darboux transforms. To see this, denoting ℒ(D) = (mᵢ,ⱼ), let us write explicitly the factorization ℒ(D) = A⁺A, which becomes

    rₙ² + |sₙ|² + |tₙ|² = mₙ,ₙ,    rₙ₊₁sₙ + s̄ₙ₊₁tₙ = mₙ₊₁,ₙ,    rₙ₊₂tₙ = mₙ₊₂,ₙ.    (19)

Any solution of these equations is determined by r₀, r₁ and s₀, which gives a freedom of four real parameters in the factorization ℒ(D) = A⁺A, two more than those expected from the previous discussion. This indicates that most of the non-singular factors A will lead to matrices C = ADA⁻¹ which are not CMV, in striking contrast with the Jacobi case. Actually, bearing in mind that admissible products of infinite matrices are subject to the rules of Proposition 1 (implicitly assumed in all the previous manipulations), we can run into trouble when trying to show the unitarity of C. An attempt to prove that C⁺ = C⁻¹ is by noting that C⁻¹ = AD⁺A⁻¹, while Lemma 2 for B = A⁺ yields A⁺C = DA⁺, so that (C⁺A)A⁻¹ = (AD⁺)A⁻¹ = AD⁺A⁻¹. The second equality is due to the associativity guaranteed by the band structure of A and D⁺, see Proposition 1. However, we cannot use the associativity to cancel the factor A in (C⁺A)A⁻¹ because A⁻¹ is lower triangular and we only

know that C⁺ has the upper Hessenberg type structure inferred from (17), because we do not know yet if C is a CMV matrix. An alternative is to prove that CC⁺ = I. Using again Lemma 2, and taking care of the associativity rules in Proposition 1, we find that

    (CC⁺)A = C(C⁺A) = C(AD⁺) = (CA)D⁺ = (AD)D⁺ = A(DD⁺) = A.

However, the only conclusion from this identity is that the rows of CC⁺ − I belong to kerL A = (kerR A⁺)⁺, which is non-trivial by Proposition 5. All this points to the existence of non-unitary matrices C = ADA⁻¹ coming from non-singular factorizations ℒ(D) = A⁺A. Actually, it is possible to prove that a matrix C related to a zig-zag sequence χ in Λ by zχ = Cχ is unitary iff it is a CMV matrix [2, Sect. 2]. Therefore, the existence of reversed Cholesky factorizations leading to matrices C which are not CMV, shown by parameter counting, also uncovers that such spurious solutions are indeed non-unitary. Fortunately, it is possible to give explicit conditions which select the values of the free parameters r₀, r₁, s₀ characterizing the non-singular factorizations leading to non-spurious solutions, i.e. the true CMV inverse Darboux transforms.

Theorem 6 ([2, Sect. 4]) Let D be a CMV matrix with first Schur parameter b and σ = √(1 − |b|²). If ℒ(D) > 0 for a Hermitian Laurent polynomial ℒ(z) = αz + β + ᾱz⁻¹, the factorizations ℒ(D) = A⁺A leading to a CMV matrix C = ADA⁻¹ are those with factors (18) satisfying

    r₀² = β − 2 Re(αa),    r₁ = (σ/ρ) r₀,    s₀ = r₁ (a − b)/σ,    (20)

for some a ∈ ℂ with |a| < 1 and ρ = √(1 − |a|²). Moreover, the parameter a is the first Schur parameter of C.

This theorem implies that, given a Hermitian Laurent polynomial ℒ and a CMV matrix D, the inverse Darboux transforms are determined by their first Schur parameter. However, not every choice of this parameter necessarily yields a reversed Cholesky factorization ℒ(D) = A⁺A, something which depends on the consistency of the conditions rₙ > 0, tₙ ≠ 0 for the corresponding solutions of (19). Therefore, the inverse Darboux transforms are parametrized by at most two real parameters, which coincides with the result previously argued via orthogonality measures.

Constant Schur Parameters Consider a CMV matrix D with constant Schur parameters (b, b, b, . . . ), 0 < b < 1, and the Hermitian Laurent polynomial ℒ(z) = 2 − z − z⁻¹. A simple inspection shows that a couple of solutions of (19) is given by

    Solution 1:  r₀ = 2√(2b/(1 + b)),    s₀ = √((1 − b)/(1 + b)) (b − 1),
                 rₙ = 1 + b (n ≥ 1),    sₙ = 0 (n ≥ 1),    tₙ = b − 1 (n ≥ 0).

    Solution 2:  r₀ = 2√b,    s₀ = −√(1 − b²),    sₙ = 0 (n ≥ 1),
                 r₂ₙ = −t₂ₙ₋₁ = 1 + b (n ≥ 1),    r₂ₙ₊₁ = −t₂ₙ = 1 − b (n ≥ 0).

Both give non-singular factorizations ℒ(D) = A⁺A, but the first solution satisfies (20) for a = (3b − 1)/(1 + b) ∈ (−1, 1), while the second one does not, because the third condition in (20) yields a = −1 in this case. Therefore, the first solution leads to an inverse Darboux transform C = ADA⁻¹, which turns out to be the CMV matrix with Schur parameters (a, b, b, . . . ) [2, Sect. 4], but the second solution is spurious, and we know that it is not even unitary. This is explicitly shown by calculating its first row, which gives the non-unit vector (1, 2√b (1 + b)/(1 − b), 0, 0, . . . ).

Acknowledgments The work of the second author has been partially supported by the research project MTM2015-65888-C4-2-P from Dirección General de Investigación Científica y Técnica, Ministerio de Economía y Competitividad of Spain. The work of the rest of the authors has been supported in part by the research project MTM2017-89941-P from Ministerio de Economía, Industria y Competitividad of Spain and the European Regional Development Fund (ERDF), and by Project E26_17R of Diputación General de Aragón (Spain) and the ERDF 2014-2020 “Construyendo Europa desde Aragón”.

References

1. Cantero, M.J., Moral, L., Velázquez, L.: Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle. Linear Algebra Appl. 362, 29–56 (2003)
2. Cantero, M.J., Marcellán, F., Moral, L., Velázquez, L.: Darboux transformations for CMV matrices. Adv. Math. 298, 122–206 (2016)
3. Castro, M., Grünbaum, F.A.: The Darboux process and time-and-band limiting for matrix orthogonal polynomials. Linear Algebra Appl. 487, 328–341 (2015)
4. Chihara, T.S.: An Introduction to Orthogonal Polynomials. Mathematics and its Applications, vol. 13. Gordon and Breach Science Publishers, New York (1978)
5. Grünbaum, F.A.: The Darboux process and a noncommutative bispectral problem: some explorations and challenges. In: Geometric Aspects of Analysis and Mechanics. Progress in Mathematics, vol. 292, pp. 161–177. Birkhäuser/Springer, New York (2011)
6. Mielnik, B., Rosas-Ortiz, O.: Factorization: little or great algorithm? J. Phys. A 37(43), 10007–10035 (2004)
7. Simon, B.: The classical moment problem as a self-adjoint finite difference operator. Adv. Math. 137, 82–203 (1998)


8. Simon, B.: Orthogonal Polynomials on the Unit Circle. Parts 1 and 2. American Mathematical Society Colloquium Publications, vol. 54. American Mathematical Society, Providence (2005)
9. Teschl, G.: Jacobi Operators and Completely Integrable Nonlinear Lattices. Mathematical Surveys and Monographs, vol. 72. American Mathematical Society, Providence (2000)
10. Watkins, D.S.: Some perspectives on the eigenvalue problem. SIAM Rev. 35(3), 430–471 (1993)
11. Weidmann, J.: Linear Operators in Hilbert Spaces. Springer, New York (1980)

Special Function Solutions of Painlevé Equations: Theory, Asymptotics and Applications

Alfredo Deaño

Abstract In this paper we review the construction of special function solutions of the Painlevé differential equations. We motivate their study using the theory of orthogonal polynomials, in particular deformation of classical weight functions, as well as unitarily invariant ensembles in random matrix theory. The asymptotic behavior of these Painlevé functions can be studied in at least two different regimes, using the Riemann-Hilbert approach and the classical saddle point method for integrals. Keywords Painlevé-type functions · Orthogonal polynomials · Random matrices · Asymptotic representations in the complex plane

1 Painlevé Equations

The six Painlevé differential equations, PI–PVI, are nonlinear second order differential equations whose solutions have appeared in many different areas of Mathematics in the last decades, including random matrix theory, continuous and discrete orthogonal polynomials, integrable systems and combinatorics, as well as in models in Statistical Physics. We refer the reader to [22, 35, 37, 40], among others, for more information. The standard form of the Painlevé differential equations, as given in [51, Chapter 32], is the following:

$$u'' = 6u^2 + z, \tag{1}$$

$$u'' = zu + 2u^3 + \alpha, \tag{2}$$

A. Deaño () School of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury, UK e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_4


$$u'' = \frac{(u')^2}{u} - \frac{u'}{z} + \frac{\alpha u^2 + \beta}{z} + \gamma u^3 + \frac{\delta}{u}, \tag{3}$$

$$u'' = \frac{(u')^2}{2u} + \frac{3u^3}{2} + 4zu^2 + 2(z^2 - \alpha)u + \frac{\beta}{u}, \tag{4}$$

$$u'' = \left(\frac{1}{2u} + \frac{1}{u-1}\right)(u')^2 - \frac{u'}{z} + \frac{(u-1)^2}{z^2}\left(\alpha u + \frac{\beta}{u}\right) + \frac{\gamma u}{z} + \frac{\delta u(u+1)}{u-1}, \tag{5}$$

$$u'' = \frac{1}{2}\left(\frac{1}{u} + \frac{1}{u-1} + \frac{1}{u-z}\right)(u')^2 - \left(\frac{1}{z} + \frac{1}{z-1} + \frac{1}{u-z}\right)u' + \frac{u(u-1)(u-z)}{z^2(z-1)^2}\left[\alpha + \frac{\beta z}{u^2} + \frac{\gamma(z-1)}{(u-1)^2} + \frac{\delta z(z-1)}{(u-z)^2}\right], \tag{6}$$

where α, β, γ, δ ∈ C; see for instance [51, Chapter 32]. They have the so-called Painlevé property: any solution is free from movable branch points, that is, branch points whose location depends on the initial or boundary values. This does not exclude fixed singularities, such as z = 0 in PIII, see (3), or z = 0 and z = 1 in PVI, see (6), or movable poles. Generic solutions of PI–PVI are sometimes called Painlevé transcendents, or nonlinear special functions, highlighting the role that they play in the nonlinear setting, analogous to that of classical special functions (such as Airy, Bessel or hypergeometric functions) in the linear world. In general, Painlevé transcendents cannot be expressed in terms of elementary or even classical special functions, but for specific values of the parameters α, β, γ, δ in PII–PVI it is known that families of rational and special function solutions exist; these appear for instance in the theory of semiclassical orthogonal polynomials, see for example [55] and the recent monograph [56], and in random matrix theory, see [37–39].

It is important to remark that in many applications the standard form of the Painlevé ODEs given before is not very illuminating. In this sense, the so-called σ equations (S_I–S_VI) often show more relevant properties, such as symmetry relations between the parameters. Also, as explained in detail in [35], Painlevé equations appear in a quite natural and systematic way when studying compatibility relations between 2 × 2 systems of linear differential equations:

$$\frac{d\Psi}{d\lambda}(\lambda,x) = A(\lambda,x)\,\Psi(\lambda,x), \qquad \frac{d\Psi}{dx}(\lambda,x) = U(\lambda,x)\,\Psi(\lambda,x), \tag{7}$$

where Ψ : C² → C^{2×2}, and A and U are rational functions of λ in the extended complex plane (possibly including the point at ∞). If one imposes compatibility of the two equations in (7), that is Ψ_{λx} = Ψ_{xλ}, the following condition (sometimes called the zero curvature condition) holds:

$$\frac{dA}{dx} + AU = \frac{dU}{d\lambda} + UA. \tag{8}$$

This relation produces nonlinear differential equations relating the entries of the matrices A and U, and these identities are of Painlevé type for some standard forms of (7). The singularity structure of the system (7) in terms of the variable λ indicates which Painlevé equation we should look for, and in this respect the linear system probably offers a clearer classification than the nonlinear ODEs.

The goal of this paper is to study families of special function solutions of Painlevé equations. These are families of solutions that can be constructed in terms of classical special functions, using Wronskian determinants generated from an initial seed function. In order to understand such families of solutions and give a systematic way to construct them, we need two important properties of the solutions of Painlevé equations: the Hamiltonian formulation and the Bäcklund transformations, which we describe next.

1.1 Hamiltonian Formulation and Tau Functions

The Hamiltonian formulation provides a systematic way to study families of solutions of Painlevé equations. We refer the reader to the works of Okamoto [45–48], Forrester and Witte [38, 39] and also the monograph of Forrester [37] for more detailed information. It is known that any Painlevé equation can be written as a Hamiltonian system of differential equations:

$$\frac{dq}{dz} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dz} = -\frac{\partial H}{\partial q}, \tag{9}$$

where H(q, p, z, v) is a function that depends on p, q, the variable z and (in general) a vector of parameters v. As a first example, the Hamiltonian function for PII is

$$H_{II} = \tfrac{1}{2}p^2 - \left(q^2 + \tfrac{1}{2}z\right)p - \left(\alpha + \tfrac{1}{2}\right)q, \tag{10}$$

and then (9) gives the following system:

$$\frac{dq}{dz} = p - q^2 - \tfrac{1}{2}z, \qquad \frac{dp}{dz} = 2qp + \alpha + \tfrac{1}{2}. \tag{11}$$

Eliminating p, we see that q satisfies PII, given by (2), and eliminating q we get

$$p\,\frac{d^2p}{dz^2} = \frac{1}{2}\left(\frac{dp}{dz}\right)^2 + 2p^3 - zp^2 - \frac{\left(\alpha + \frac{1}{2}\right)^2}{2}, \tag{12}$$

an equation that is not independent of PII but appears often in applications and is known as P34; finally, the function σ(z) = HII(q, p, z) satisfies the σ equation SII:

$$\left(\frac{d^2\sigma}{dz^2}\right)^2 + 4\left(\frac{d\sigma}{dz}\right)^3 + 2\,\frac{d\sigma}{dz}\left(z\,\frac{d\sigma}{dz} - \sigma\right) = \frac{1}{4}\left(\alpha + \frac{1}{2}\right)^2. \tag{13}$$

As pointed out by Clarkson [23], if we have σ(z), then we can recover p(z) and q(z) as follows:

$$p(z) = -2\sigma'(z), \qquad q(z) = \frac{4\sigma''(z) + 2\alpha + 1}{8\sigma'(z)}. \tag{14}$$

In this sense, it seems reasonable to focus on the Hamiltonian (or the σ function), since properties of the solutions of PII and P34 can be obtained from it. If we write the Hamiltonian as H(z) = H(q(z), p(z), z), we define a tau function as follows:

$$H(z) = \frac{d}{dz}\log\tau(z). \tag{15}$$
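The eliminations leading from the system (11) to PII and to P34 are easy to verify with a computer algebra system. The following sketch is my own illustration, not part of the original text; it checks both eliminations symbolically.

```python
import sympy as sp

z, alpha = sp.symbols('z alpha')
q = sp.Function('q')(z)
p = sp.Function('p')(z)

# Hamiltonian (10) for PII
H = sp.Rational(1, 2)*p**2 - (q**2 + z/2)*p - (alpha + sp.Rational(1, 2))*q

# Hamilton's equations (9) reproduce the system (11)
dq = sp.diff(H, p)            # q' = p - q^2 - z/2
dp = -sp.diff(H, q)           # p' = 2qp + alpha + 1/2

# Eliminate p: substitute p = q' + q^2 + z/2 into p' = 2qp + alpha + 1/2
p_of_q = sp.diff(q, z) + q**2 + z/2
pii = sp.diff(p_of_q, z) - (2*q*p_of_q + alpha + sp.Rational(1, 2))
# pii = 0 should be exactly PII (2): q'' = 2q^3 + zq + alpha
check_pii = sp.simplify(pii - (sp.diff(q, z, 2) - 2*q**3 - z*q - alpha))

# Eliminate q: substitute q = (p' - alpha - 1/2)/(2p) into q' = p - q^2 - z/2
q_of_p = (sp.diff(p, z) - alpha - sp.Rational(1, 2))/(2*p)
p34 = sp.simplify(2*p**2*(sp.diff(q_of_p, z) - (p - q_of_p**2 - z/2)))
# p34 = 0 should be exactly P34 (12)
target = (p*sp.diff(p, z, 2) - sp.Rational(1, 2)*sp.diff(p, z)**2
          - 2*p**3 + z*p**2 + (alpha + sp.Rational(1, 2))**2/2)
check_p34 = sp.simplify(p34 - target)

print(check_pii, check_p34)
```

Both residuals simplify to zero, confirming that (11) encodes PII and P34 simultaneously.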

1.2 Bäcklund Transformations and Toda Equation

The Painlevé equations PII–PVI admit transformations that relate two solutions (of the same or a different equation) with different values of the parameters. These transformations can be thought of as nonlinear analogues of the relations between contiguous functions for classical special functions, such as the hypergeometric ones. We illustrate the main idea using PII as an example again: if q(z) satisfies PII with parameter α, then v(z) = −q(z) verifies PII with parameter −α, an easy fact that can be checked directly from the differential equation; therefore, writing F = {q(z; α), α ∈ C} for the space of solutions of PII, the map S : F → F given by

$$S(q(z,\alpha)) = -q(z,-\alpha) \tag{16}$$

is a Bäcklund transformation, which satisfies S ∘ S = I. A less trivial transformation is given by

$$q(z,\alpha\pm 1) = T_{\pm}(q(z,\alpha)) = \begin{cases} -q, & \alpha = \mp\tfrac{1}{2},\\[4pt] -q - \dfrac{2\alpha\pm 1}{\pm 2q' + 2q^2 + z}, & \text{otherwise}. \end{cases} \tag{17}$$


Bäcklund transformations can be understood in a more general context, in the sense that they do not just transform the solutions of the differential equations, but the Hamiltonian and the tau function as well. In the references [22, Section 4.1] and [40, §19] we can find other examples with other Painlevé equations. When more parameters are present, the main idea of the Bäcklund transformations is similar, but the parameters can change independently in several directions in parameter space (instead of just α → α ± 1 as in PII). In order to choose these directions suitably, the general formulation uses the theory of affine Weyl groups, see for example [37, §8.2.2].

We take PII as an example again, following the ideas exposed in [38, Section 3]. The system (11) can be written in a more symmetric way by defining the parameters

$$\alpha_0 = \tfrac{1}{2} - \alpha, \qquad \alpha_1 = \tfrac{1}{2} + \alpha, \tag{18}$$

which satisfy α₀ + α₁ = 1, as well as the functions

$$f_0 = 2q^2 - p + z, \qquad f_1 = p. \tag{19}$$

Thus, (11) becomes

$$f_0' = -2qf_0 + \alpha_0, \qquad f_1' = 2qf_1 + \alpha_1, \tag{20}$$

and the Hamiltonian is

$$H_{II} = -\tfrac{1}{2}f_0f_1 - \alpha_1 q. \tag{21}$$

Given this construction, the group of transformations W = ⟨s₀, s₁, π⟩ acts on the roots α₀, α₁ and on the functions f₀, f₁, defined in (18) and (19), in the following way:

$$\begin{array}{c|ccccc}
 & \alpha_0 & \alpha_1 & f_0 & f_1 & q \\ \hline
s_0 & -\alpha_0 & \alpha_1 + 2\alpha_0 & f_0 & f_1 - \dfrac{4\alpha_0 q}{f_0} + \dfrac{2\alpha_0^2}{f_0^2} & q - \dfrac{\alpha_0}{f_0} \\[8pt]
s_1 & \alpha_0 + 2\alpha_1 & -\alpha_1 & f_0 + \dfrac{4\alpha_1 q}{f_1} + \dfrac{2\alpha_1^2}{f_1^2} & f_1 & q + \dfrac{\alpha_1}{f_1} \\[8pt]
\pi & \alpha_1 & \alpha_0 & f_1 & f_0 & -q
\end{array}$$

The composition T₁ := πs₁ shifts the parameters as follows: T₁(α₀, α₁) = (α₁ + 2α₀, −α₀), since T₁(α₀) = πs₁(α₀) = π(α₀ + 2α₁) = α₁ + 2α₀. It follows that this transformation corresponds to α → α − 1, and its inverse, T₂ = s₁π, corresponds to α → α + 1.


When we apply this transformation to the Hamiltonian, we obtain

$$T_1(H_{II}) = \pi s_1\left(-\tfrac{1}{2}f_0f_1 - \alpha_1 q\right) = \pi\left(-\tfrac{1}{2}f_0f_1 - \alpha_1 q\right) = H_{II} + q. \tag{22}$$

It seems natural to iterate this calculation, in order to generate a sequence of Hamiltonian functions: H_{II,n} = T₁ⁿ(H_{II}). As a consequence of (22), we have

$$T_1^{n+1}(H_{II}) - T_1^n(H_{II}) = H_{II,n+1} - H_{II,n} = q_n, \tag{23}$$

using the notation qₙ = q(z, α − n), given an initial value of α. A similar calculation yields

$$T_1^{n+1}(H_{II}) - T_1^n(H_{II}) - \left(T_1^n(H_{II}) - T_1^{n-1}(H_{II})\right) = q_n - T_1^{-1}(q_n), \tag{24}$$

and since

$$H_{II,n} = T_1^n H_{II} = \frac{d}{dz}\ln\tau_n(z),$$

we conclude that

$$T_1^{n+1}(H_{II}) - T_1^n(H_{II}) - \left(T_1^n(H_{II}) - T_1^{-1}(H_{II,n})\right) = H_{II,n+1} - 2H_{II,n} + H_{II,n-1} = \frac{d}{dz}\ln\frac{\tau_{n+1}(z)\,\tau_{n-1}(z)}{\tau_n^2(z)}.$$

Combining this with (24), we have

$$q_n - T_1^{-1}(q_n) = \frac{d}{dz}\ln\frac{\tau_{n+1}(z)\,\tau_{n-1}(z)}{\tau_n^2(z)}. \tag{25}$$

On the other hand, since T₁T₂ = I, where T₂ = s₁π, we have

$$q_n - T_1^{-1}(q_n) = q_n - T_2(q_n) = q_n - s_1\pi(q_n) = 2q_n + \frac{\alpha_1}{f_{1,n}} = \frac{d}{dz}\ln f_{1,n}, \tag{26}$$

where we write f_{1,n} := f₁(z; α − n) again. We differentiate (21), bearing in mind that 2q' = f₁ − f₀, which results from combining the first equation in (11) with (19); then

$$-2\,\frac{d}{dz}H_{II,n} = f_{1,n} \quad\Longrightarrow\quad \frac{d}{dz}\ln f_{1,n} = \frac{d}{dz}\ln\left(\frac{d}{dz}H_{II,n}\right) = \frac{d}{dz}\ln\left(\frac{d^2}{dz^2}\ln\tau_n(z)\right).$$


Combining this equation with (26) and (25), integrating and taking exponentials, we have

$$\frac{\tau_{n+1}(z)\,\tau_{n-1}(z)}{\tau_n^2(z)} = C\,\frac{d^2}{dz^2}\ln\tau_n(z), \tag{27}$$

where C is a constant; this is a Toda equation. In the references [38] and [39], the authors present this and other examples with different Painlevé equations. In those cases the calculations are more involved, since the number of parameters is larger and there are several Bäcklund transformations, changing parameters in different directions. Also, in some cases changes of variable and modifications of the tau functions are needed as well, in order to arrive at a Toda equation.

1.3 Special Function Solutions of Painlevé Equations

The iterative process that we just described, involving transformations of the Hamiltonian and the tau function, can be used in general to construct sequences of solutions of Painlevé equations. Furthermore, the Toda equation (27) allows us to compute the tau functions in a simple way, starting from suitable initial values. In the case of special function solutions, the combination of (27) and τ₀(z) = 1 allows us to deduce a remarkable structure in the form of Wronskian determinants.

If we take the example of PII again, suppose that τ₀(z) = 1; then H₀ = 0, so from (21) we have

$$-\tfrac{1}{2}f_{0,0}f_{1,0} - \alpha_1 q_0 = 0,$$

where once again we have written f_{0,n} := f₀(z; α − n) for n ≥ 0, and similarly for f₁. This equality can be achieved if we set α₁ = 0 (and then α = −½ from (18)) and f_{1,0} = 0. As a consequence,

$$H_1 - H_0 = q_0 \quad\Longrightarrow\quad \frac{d}{dz}\log\tau_1 = q_0,$$

and if we differentiate again and use the second equation in (11), we obtain

$$\frac{d^2}{dz^2}\tau_1(z) + \frac{z}{2}\,\tau_1(z) = 0,$$

which is a linear differential equation that can be solved in terms of classical Airy functions:

$$\tau_1(z) = \varphi(z) = C_1\,\mathrm{Ai}\!\left(-2^{-1/3}z\right) + C_2\,\mathrm{Bi}\!\left(-2^{-1/3}z\right). \tag{28}$$

This function φ(z) is usually called the seed function. The key observation is that the Toda equation, together with the initial value τ₀(z) = 1, allows us to write τₙ(z) as an n × n Wronskian determinant:

$$\tau_n(z) = \det\left(\frac{d^{j+k}\varphi(z)}{dz^{j+k}}\right)_{j,k=0,1,\dots,n-1}, \tag{29}$$

with initial values τ₀(z) = 1 and τ₁(z) = φ(z). A proof of this result can be found for instance in [57, Theorem 6.3], as a consequence of the Jacobi identity for determinants and the structure of the Wronskian.
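Two consequences of the seed construction can be confirmed symbolically. The sketch below is my own illustration, under the single assumption φ'' = −(z/2)φ: the logarithmic derivative u = φ'/φ solves PII with α = −1/2, and, viewed as a σ function, it satisfies SII (13) with right-hand side equal to 1/4.

```python
import sympy as sp

z, u = sp.symbols('z u')   # u stands for phi'(z)/phi(z)

# From phi'' = -(z/2) phi, the logarithmic derivative obeys the Riccati
# equation u' = -z/2 - u^2; differentiating once more gives u''.
up = -z/2 - u**2
upp = sp.expand(-sp.Rational(1, 2) - 2*u*up)   # u'' = -1/2 + zu + 2u^3

# (i) PII: u'' - 2u^3 - zu should be the constant alpha = -1/2
pii_alpha = sp.expand(upp - 2*u**3 - z*u)

# (ii) SII (13): (sigma'')^2 + 4(sigma')^3 + 2 sigma'(z sigma' - sigma)
sii_lhs = sp.expand(upp**2 + 4*up**3 + 2*up*(z*up - u))

print(pii_alpha, sii_lhs)
```

The outputs are the constants −1/2 and 1/4, consistent with the seed value α = −1/2 and with SII evaluated at the shifted parameter.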

with initial values τ0 (z) = 1 and τ1 (z) = ϕ(z). A proof of this result can be found for instance in [57, Theorem 6.3], as a consequence of the Jacobi identity for determinants and the structure of the Wronskian. An alternative derivation of these special function solutions, explained in [22], is to assume that the solution that we seek solves, apart from the corresponding Painlevé equation, a Riccati equation: u (z) = p2 (z)u(z)2 + p1 (z)u(z) + p0 (z),

(30)

for some coefficients p0 (z), p1 (z), p2 (z) to be determined. The combination of both differential equations imposes restrictions on the parameters of the Painlevé equation and fixes the form of the coefficients p0 (z), p1 (z) and p2 (z). Finally, the Riccati equation can be linearised by means of u(z) = ±ϕ  (z)/ϕ(z), resulting in a linear second order differential equation for ϕ(z). In the case of PII , we obtain p2 (z) = ±1,

p1 (z) = 0,

p0 (z) = ± 12 z,

α = ± 12 ,

and therefore u (z) = ±u(z)2 ± 12 z. If we write u(z) = ∓ϕ  (z)/ϕ(z), we arrive at z ϕ  (z) + ϕ(z) = 0, 2 which is the same Airy equation as before. We observe that this approach is conceptually simpler, and it imposes specific values of α in PII in a quite straightforward way. In exchange, it concerns solutions of PII only, whereas the Hamiltonian formulation describes a bigger picture, including solutions of P34 and σII . Formula (29) describes a whole sequence of tau functions in terms of Airy functions when α = n2 , n ∈ Z. Using the fact that σ (z) = HII (z) =

d log τ (z), dz

(31)

and then (14), we can construct special function (Airy) solutions of PII. We remark that proving that Airy solutions occur only for these values of α requires extra work; we refer the reader to [40, Theorem 21.1], in particular for PII.

Special function solutions of the Painlevé equations PIII–PVI have been obtained in the literature, starting from the work of Okamoto [45], see also Forrester and Witte [38, 39] and Clarkson [22]. One obtains a similar determinant structure:

$$\tau_n(z) = \det\left(\frac{D^{j+k}\varphi(z)}{Dz^{j+k}}\right)_{j,k=0,1,\dots,n-1}, \tag{32}$$

with initial values τ₀(z) = 1 and τ₁(z) = φ(z) a suitable seed function. Here D is a differential operator that can be more general than the derivative, and sometimes further changes of variable or modifications of the original tau function are needed in the calculation of the Toda equation. The correspondence between Painlevé equations and classical special functions is given in Table 1.

Table 1 Classical special functions that correspond to special function solutions of PII–PVI

PII:  Airy: Ai(z), Bi(z)
PIII: Bessel: Jν(z), Yν(z), Iν(z), Kν(z)
PIV:  Parabolic cylinder (Weber): U(a, z), Dν(z)
PV:   Kummer (Whittaker): ₁F₁(a; c; z)
PVI:  Gauss: ₂F₁(a, b; c; z)

Finally, we mention that once we have properties of τₙ(z), we can translate them to solutions of Painlevé equations. For example, in PII we have the formulas (14), which give special function solutions pₙ(z) and qₙ(z) of P34 and PII respectively, in terms of σₙ(z) = (d/dz) log τₙ(z). Similar formulas for PIV can be found for instance in [24].

2 Motivation 1. Orthogonal Polynomials

2.1 General Theory

The classical theory of orthogonal polynomials can be found, with different points of view, in many standard references, such as the books by Chihara [20], Ismail [41] or Szegő [52], as well as in many chapters of books on special functions, and in Chapter 18 of the DLMF [51]. For completeness, we recall below the facts that we need in order to make the connection with Painlevé functions.

Given a weight function w(x), positive and integrable on a subset of the real line J ⊂ R, and such that the moments

$$\mu_m = \int_J x^m\,w(x)\,dx, \qquad m \ge 0, \tag{33}$$

are defined, we can generate a sequence of orthogonal polynomials (OPs in the sequel) Pₙ(x), for n = 0, 1, 2, …, such that deg(Pₙ) = n and the polynomials satisfy the L²(J, w(x)) orthogonality

$$\int_J P_n(x)P_m(x)\,w(x)\,dx = h_n\,\delta_{mn}, \tag{34}$$

where hₙ ≠ 0 and δ_{mn} is the classical Kronecker delta. Furthermore, the family of OPs is unique once we fix the normalization; two standard choices are monic polynomials, Pₙ(x) = xⁿ + …, and orthonormal ones:

$$\pi_n(x) = \gamma_n x^n + \dots, \qquad \int_J \pi_n^2(x)\,w(x)\,dx = 1.$$

An essential property of the OPs, which follows directly from orthogonality, is that they satisfy a three term recurrence relation:

$$xP_n(x) = P_{n+1}(x) + \alpha_n P_n(x) + \beta_n P_{n-1}(x), \tag{35}$$

with initial data P₋₁(x) = 0 and P₀(x) = 1. In the sequel it will be important to observe that the OPs, as well as the coefficients αₙ and βₙ in (35), can be constructed from Hankel determinants generated with the moments (33) of the weight function. Namely, if

$$H_n = \det\left(\mu_{j+k}\right)_{j,k=0,1,\dots,n-1}, \qquad n \ge 1, \tag{36}$$

then

$$P_n(x) = \frac{1}{H_n}\begin{vmatrix}
\mu_0 & \mu_1 & \cdots & \mu_{n-1} & \mu_n \\
\mu_1 & \mu_2 & \cdots & \mu_n & \mu_{n+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\mu_{n-1} & \mu_n & \cdots & \mu_{2n-2} & \mu_{2n-1} \\
1 & x & \cdots & x^{n-1} & x^n
\end{vmatrix} = x^n + \dots \tag{37}$$

is the n-th OP with respect to w(x). Additionally, the Hankel determinant Hₙ and Pₙ(x) admit n-fold integral representations, see [41, Sec. 2.1]:

$$H_n = \frac{1}{n!}\int_J\cdots\int_J\ \prod_{1\le j<k\le n}(x_k - x_j)^2\,\prod_{j=1}^{n} w(x_j)\,dx_j,$$

$$P_n(x) = \frac{1}{n!\,H_n}\int_J\cdots\int_J\ \prod_{j=1}^{n}(x - x_j)\,\prod_{1\le j<k\le n}(x_k - x_j)^2\,\prod_{j=1}^{n} w(x_j)\,dx_j. \tag{38}$$

2.2 Semiclassical Weights

Several deformations of classical weights have been studied in the literature, for example:

x ∈ [0, ∞),

• Chen and Dai [17]: w(x; s) = x α (1 − x)β e−t/x ,

α, β, t > 0,

x ∈ [0, 1],

• Boelen and Van Assche [8], Filipuk et al. [33], Clarkson and Jordaan [24]: w(x; t) = x λ e−x

2 +tx

λ > −1,

,

x ∈ (0, ∞).

• Clarkson et al. [25]: w(x) = |x|2λ+1 e−x

4 +tx 2

,

λ > −1,

t < 0,

α, β > 0,

x∈R

• Chen and Zhang [19]: w(x; t) = (x − t)γ x α (1 − x)β ,

γ ∈ R,

x ∈ [0, 1],

We refer the reader to the recent monograph of Van Assche [56] for more examples, involving discrete OPs and OPs on the unit circle as well. In all these examples, the deformation of a classical weight (Jacobi, Laguerre or Hermite, respectively) is introduced by multiplication by an exponential or algebraic factor that contains a parameter t ∈ R: w(x; t) = w0 (x)W (x, t).
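Before turning to the deformed setting, the Hankel-determinant machinery of (36) can be illustrated numerically. The sketch below is my own, not from the text: it uses the undeformed Hermite weight w(x) = e^{−x²}, whose moments are μ_{2k} = Γ(k + ½) and μ_{2k+1} = 0, together with the standard identity βₙ = Hₙ₊₁Hₙ₋₁/Hₙ² (with H₀ = 1), which for this weight gives βₙ = n/2.

```python
import numpy as np
from math import gamma

# moments of w(x) = exp(-x^2) on the real line
def mu(m):
    return gamma((m + 1)/2) if m % 2 == 0 else 0.0

# Hankel determinants (36), with the convention H_0 = 1
def H(n):
    if n == 0:
        return 1.0
    return np.linalg.det(np.array([[mu(j + k) for k in range(n)]
                                   for j in range(n)]))

# recurrence coefficients via the (assumed) identity
# beta_n = H_{n+1} H_{n-1} / H_n^2
beta = [H(n + 1)*H(n - 1)/H(n)**2 for n in range(1, 6)]
print(beta)   # expected close to n/2 = 0.5, 1.0, 1.5, 2.0, 2.5
```

For an even weight the coefficients αₙ in (35) vanish by symmetry, so the βₙ above determine the whole recurrence.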


If this new weight function satisfies the conditions above, we can consider deformed OPs with respect to w(x; t):

$$\int_J P_n(x;t)P_m(x;t)\,w(x;t)\,dx = h_n(t)\,\delta_{mn}.$$

These new OPs will naturally depend on t, and they satisfy a recurrence relation very similar to (35), but with coefficients depending on t:

$$xP_n(x;t) = P_{n+1}(x;t) + \alpha_n(t)P_n(x;t) + \beta_n(t)P_{n-1}(x;t).$$

The connection with Painlevé equations can occur in at least two different situations:

1. For finite n, one can identify the recurrence coefficients αₙ(t) and βₙ(t), as well as other quantities related to the OPs, as solutions of Painlevé differential equations in the deformation parameter t (in the notation of the previous examples). In this situation, one typically encounters special function solutions of Painlevé equations, because the recurrence coefficients can be written in terms of the Hankel determinants Hₙ, and the moments contained in these are, in many cases, classical special functions.

2. As n → ∞, it can happen that there are critical transitions as other parameters tend to critical values (possibly coupled with n). In this setting, solutions of Painlevé equations are often needed in order to describe the asymptotic behaviour. In this case, one typically needs fully transcendental solutions of Painlevé equations.

For instance, if we consider OPs on the real line with respect to the weight

$$w(x;t) = e^{-V(x;t)}, \qquad V(x;t) = \frac{x^4}{4} + \frac{t\,x^2}{2},$$

where t ∈ R, there is a critical transition when t = t_c = −2, when the support of the equilibrium measure, which determines the behaviour of the zeros of the OPs as n → ∞, changes from one interval to two. The asymptotic analysis near this critical case makes use of a particular (transcendental) solution of PII, the Hastings–McLeod solution. It was studied by Bleher and Its [5] in the context of random matrix theory, and it is included as a particular case in the analysis by Claeys et al. [21].

In this paper we are interested in the first situation, because of the connection with special function solutions of Painlevé equations. In order to identify which Painlevé equation is relevant (for instance in the examples given above), there are several procedures; one possibility is to use ladder operators, as described for example by Ismail [41, Chapter 3]: if the weight function is given by

$$w(x;t) = \exp(-v(x;t)),$$


then the OPs with respect to w(x; t)dx satisfy

$$P_n'(x;t) = -B_n(x;t)P_n(x;t) + \beta_n(t)A_n(x;t)P_{n-1}(x;t),$$
$$P_{n-1}'(x;t) = -A_{n-1}(x;t)P_n(x;t) + \left(B_n(x;t) + v'(x)\right)P_{n-1}(x;t), \tag{39}$$

with coefficients

$$A_n(x;t) = \frac{1}{h_n(t)}\int \frac{v'(x;t) - v'(s;t)}{x-s}\,P_n^2(s;t)\,w(s;t)\,ds,$$
$$B_n(x;t) = \frac{1}{h_{n-1}(t)}\int \frac{v'(x;t) - v'(s;t)}{x-s}\,P_n(s;t)P_{n-1}(s;t)\,w(s;t)\,ds, \tag{40}$$

where as before hₙ(t) = ∫ Pₙ²(x; t)w(x; t)dx. Equations (39) are often called lowering and raising relations (respectively), since they decrease/increase the degree of the polynomials by means of a differential operator in x. In many references, the relevant Painlevé equation is usually obtained from these ladder relations via a series of ad hoc manipulations, differentiation with respect to t and identities for quantities related to the OPs, together with suitable changes of variable. An alternative is to consider the following vector:

$$\psi_n(x;t) = \begin{pmatrix} P_n(x;t) \\ P_{n-1}(x;t) \end{pmatrix}, \tag{41}$$

and write (39) in matrix form:

$$\psi_n'(x;t) = \begin{pmatrix} -B_n(x;t) & \beta_n(t)A_n(x;t) \\ -A_{n-1}(x;t) & B_n(x;t) + v'(x;t) \end{pmatrix}\psi_n(x;t). \tag{42}$$

This can be seen as (part of) the x equation of a Lax pair for ψₙ(x; t), and we could try to identify it among the different Painlevé equations studied from this perspective in [35], in particular using the singularity structure in the spectral variable t. One could also carry out a similar calculation but differentiating with respect to t, in order to obtain the other equation of the Lax pair. Another approach, which we mention next, is systematic but laborious.
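For the Hermite weight w(x) = e^{−x²} (so v = x² and v' = 2x) the difference quotient in (40) is constant, (v'(x) − v'(s))/(x − s) = 2, which gives Aₙ = 2 and Bₙ = 0 by orthogonality; the lowering relation in (39) then reads Pₙ' = 2βₙPₙ₋₁ = nPₙ₋₁. A symbolic sketch of my own, under these assumptions:

```python
import sympy as sp

x = sp.symbols('x')

def P(n):
    # monic Hermite polynomials, orthogonal w.r.t. exp(-x^2)
    return sp.hermite(n, x).as_poly(x).monic().as_expr() if n > 0 else sp.Integer(1)

# with beta_n = n/2, A_n = 2, B_n = 0, the lowering relation (39)
# becomes P_n' = n P_{n-1}
checks = [sp.expand(sp.diff(P(n), x) - n*P(n - 1)) for n in range(1, 6)]
print(checks)
```

All residuals vanish, which is the classical derivative relation for Hermite polynomials recovered as a special case of (39).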

2.3 The Riemann–Hilbert Formulation

The Riemann–Hilbert (RH) formulation of OPs is originally due to Fokas, Its and Kitaev [34], and it has been extensively used in the literature in the last decades, often in connection with large n asymptotic behaviour of OPs, using the Deift–Zhou method of steepest descent. We refer the reader to [28] or [29] for more details. The methodology, however, can also be used in order to obtain algebraic and differential identities for OPs for finite n, as explained in [41, Chapter 22].

We recall the basic ideas of the Riemann–Hilbert formulation, for simplicity assuming that we deal with OPs on the real line, with respect to a weight function w(x): we seek a 2 × 2 matrix Yₙ : C → C^{2×2} with the following properties:

1. Yₙ(z) is analytic in C \ R.
2. If x ∈ R, the function Yₙ(x) admits boundary values Y_{n±}(x) = lim_{ε→0} Yₙ(x ± iε), which are related by the jump condition

$$Y_{n+}(x) = Y_{n-}(x)\begin{pmatrix} 1 & w(x) \\ 0 & 1 \end{pmatrix}. \tag{43}$$

3. As z → ∞, the matrix Yₙ(z) satisfies

$$Y_n(z) = \left(I + \frac{Y_{1,n}}{z} + \frac{Y_{2,n}}{z^2} + O(z^{-3})\right)\begin{pmatrix} z^n & 0 \\ 0 & z^{-n} \end{pmatrix}. \tag{44}$$

If the support of the weight function is a finite or semi-infinite interval (as in the case of Laguerre or Jacobi), then one needs to specify the local behaviour of Yₙ as z tends to these endpoints in order to have a unique solution of the Riemann–Hilbert problem. The proof of existence and uniqueness of the solution for this RH problem is very instructive and can be found in [28, Chap. 3]. In particular, it can be shown that det Yₙ(z) = 1, and therefore the matrix Yₙ(z) is invertible for all z ∈ C. The solution is constructed as follows:

$$Y_n(z) = \begin{pmatrix} P_n(z) & (CP_nw)(z) \\ -2\pi i\,h_{n-1}^{-1}P_{n-1}(z) & -2\pi i\,h_{n-1}^{-1}(CP_{n-1}w)(z) \end{pmatrix}, \tag{45}$$

where Pₙ(z) is the n-th OP with respect to w(z),

$$(Cf)(z) = \frac{1}{2\pi i}\int_{\mathbb{R}} \frac{f(s)}{s-z}\,ds \tag{46}$$

is the Cauchy transform of the function f(z), and as before hₙ = ∫_R Pₙ²(z)w(z)dz. We observe that the matrix Yₙ(z) does not only contain the OPs, but the functions of the second kind as well, and the L² norms. Furthermore, it is possible to identify the entries of the matrix Y_{1,n} in (44) in terms of known quantities: if Pₙ(z) = zⁿ + σ_{n,n−1}z^{n−1} + …, then

$$(Y_{1,n})_{11} = \sigma_{n,n-1}, \qquad (Y_{1,n})_{12} = -\frac{h_n}{2\pi i}, \qquad (Y_{1,n})_{21} = -\frac{2\pi i}{h_{n-1}}, \tag{47}$$


which in turn leads to formulas for the recurrence coefficients in terms of the entries of Y_{1,n}:

$$\alpha_n = (Y_{1,n})_{11} - (Y_{1,n+1})_{11}, \qquad \beta_n = (Y_{1,n})_{12}(Y_{1,n})_{21}. \tag{48}$$

The main idea in order to use this construction for the purposes of deriving algebraic and differential identities is to observe that if we define

$$U_n(z) = Y_n(z)\begin{pmatrix} w(z)^{1/2} & 0 \\ 0 & w(z)^{-1/2} \end{pmatrix}, \tag{49}$$

then the jump relation for this new matrix becomes

$$U_{n+}(x) = U_{n-}(x)\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$

which is clearly constant with respect to x. We note that in this transformation new jumps can appear, for example, if the weight function has finite or semi-infinite support; consider the case w(x) = x^α e^{−x}, with α > −1 on (0, ∞); then w(z)^{±1/2} in (49) will have an extra cut, for instance on R⁻. If the jumps of Uₙ(z) are constant on all contours, it can be shown that the derivative Uₙ'(z) has the same jumps as Uₙ(z), and if we take the product Uₙ'(z)Uₙ⁻¹(z), then this product does not have any jumps, only possible isolated singularities (poles) at the endpoints of the support of the weight function (if any). It follows that

$$V_n(z) = f(z)\,U_n'(z)\,U_n^{-1}(z), \tag{50}$$

where f(z) has zeros that cancel the possible poles of Uₙ'(z)Uₙ⁻¹(z), is an entire function, with known asymptotic behaviour as z → ∞, from (44). By Liouville's theorem, we can determine the form of Vₙ(z) precisely, and then we obtain a differential relation for Uₙ(z), which we can translate back to Yₙ(z).

A similar idea can be applied, in a very systematic way, to different operations on Uₙ(z): for example, a shift in n instead of the derivative with respect to z, in which case we consider Uₙ₊₁(z)Uₙ⁻¹(z), and we recover (among other identities) the three term recurrence relation; also, if the weight depends on an extra parameter t, then we can differentiate with respect to it, consider U̇ₙ(z)Uₙ⁻¹(z), and try to obtain another differential relation in t. This procedure can be quite laborious, in particular if the weight function is complicated, but in return it has the advantage that it offers a very systematic way to obtain identities for the OPs and related quantities. Also, if one can obtain two differential identities (in x and in t), it is usually not too complicated to identify them with a known Lax pair for one of the Painlevé equations.
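The identifications (47)–(48) can be checked directly on a classical case. The sketch below is my own, using the Hermite weight as a concrete example: it computes hₙ = ∫Pₙ²e^{−x²}dx and the subleading coefficients σ_{n,n−1}, and verifies that βₙ = (Y_{1,n})₁₂(Y_{1,n})₂₁ = hₙ/hₙ₋₁ equals n/2 while αₙ = σ_{n,n−1} − σ_{n+1,n} = 0.

```python
import sympy as sp

x = sp.symbols('x')

def P(n):
    # monic Hermite polynomials, orthogonal w.r.t. exp(-x^2)
    return sp.hermite(n, x).as_poly(x).monic().as_expr() if n > 0 else sp.Integer(1)

def h(n):
    # squared L2 norm h_n = int P_n^2 exp(-x^2) dx over the real line
    return sp.integrate(P(n)**2*sp.exp(-x**2), (x, -sp.oo, sp.oo))

def sub(n):
    # sigma_{n,n-1}: coefficient of x^(n-1) in the monic P_n (0 for n = 0)
    return sp.expand(P(n)).coeff(x, n - 1) if n >= 1 else sp.Integer(0)

# (48): beta_n = (Y_{1,n})_{12} (Y_{1,n})_{21} = h_n / h_{n-1}
betas = [sp.simplify(h(n)/h(n - 1)) for n in range(1, 4)]
# (48): alpha_n = (Y_{1,n})_{11} - (Y_{1,n+1})_{11}
alphas = [sp.simplify(sub(n) - sub(n + 1)) for n in range(0, 3)]
print(betas, alphas)
```

The factors −hₙ/(2πi) and −2πi/hₙ₋₁ in (47) cancel in the product, so only the ratio of norms survives, as the computation confirms.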


3 Motivation 2: Random Matrix Theory

The standard way to define a unitarily invariant ensemble of Hermitian random matrices is to take a probability law of the form

$$dP_N(M) = \frac{1}{Z_N}\exp\left(-\operatorname{tr}(V(M))\right)dM, \tag{51}$$

where M is an N × N Hermitian matrix, Z_N is the partition function that normalises the measure to be a probability one, V(M) is a smooth function (often called the potential) with sufficient growth at infinity, and dM is the standard Lebesgue measure on the entries of M. The most classical and well studied case corresponds to the quadratic potential V(M) = M²/2, which leads to the Gaussian Unitary Ensemble (GUE). It can be shown [28] that this measure is invariant under unitary transformations, that is dP(UMU*) = dP(M) for any unitary N × N matrix U. Therefore the usual first step is to rewrite the probability law in terms of eigenvalues and eigenvectors, which leads to a remarkable decoupling: in terms of the (real) eigenvalues λⱼ, we have

$$dP(\lambda_1,\lambda_2,\dots,\lambda_N) = \frac{1}{\widetilde{Z}_N}\prod_{1\le j<k\le N}(\lambda_k - \lambda_j)^2\,\prod_{j=1}^{N} e^{-V(\lambda_j)}\,d\lambda_j. \tag{52}$$

> −δ1 is satisfied if a > 0 (the positive-definite case).
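As a quick illustration of (51)–(52), entirely my own sketch: for V(M) = M²/2 one can sample the GUE by symmetrizing a complex Gaussian matrix; the eigenvalues then fill out (approximately) the interval [−2√N, 2√N] predicted by the semicircle law.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# density proportional to exp(-Tr M^2 / 2): diagonal entries ~ N(0,1),
# off-diagonal real/imaginary parts ~ N(0, 1/2)
A = rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N))
M = (A + A.conj().T)/2

lam = np.linalg.eigvalsh(M)
edge = np.max(np.abs(lam))/np.sqrt(N)   # semicircle law predicts about 2
print(round(edge, 2))
```

The spectral edge sits near 2√N, with fluctuations of order N^{−2/3} governed, in the GUE case, by a Painlevé II (Hastings–McLeod) expression for the Tracy–Widom distribution.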

4.4.2 Symmetrized Generalized Charlier Polynomials

See Sect. 5.1 for a definition of the generalized Charlier polynomials.

Special values

$$a \to -N - b, \qquad N \to 2m, \qquad x \to x + m.$$

Linear functional

$$L_{\varsigma}[r] = \binom{2m}{m}\frac{(b+1+m)_m}{(b+1)_m}\sum_{x=-m}^{m} r(x)\,\frac{(-m-b)_x\,(-m)_x}{(b+1+m)_x\,(m+1)_x}.$$

Pearson equation

$$\frac{w_{\varsigma}(x+1)}{w_{\varsigma}(x)} = \frac{(x-m-b)(x-m)}{(x+m+b+1)(x+m+1)}.$$

Discrete Semiclassical Orthogonal Polynomials of Class 2


Moments

$$\nu_n^{\varsigma} = \frac{(-2m)_n\,(-2m-b)_n\,(2m+2b+1)_{2m-n}}{(b+1)_n\,(b+1+n)_{2m-n}}, \qquad n \in \mathbb{N}_0.$$

Stieltjes transform difference equation

$$(t+m+1)(t+m+b+1)\,S_{\varsigma}(t+1) - (t-m)(t-m-b)\,S_{\varsigma}(t) = (2b+1+4m)\,\nu_0^{\varsigma}.$$

Remark 6 If we use (4) and (41) to write the weight function w_ς(x) in terms of Gamma functions, we have

$$w_{\varsigma}(x) = w_{\varsigma}(0)\left[(b+1)_m\,m!\right]^2\frac{1}{(b+1)_{x+m}\,(x+m)!}\,\frac{1}{(b+1)_{-x+m}\,(-x+m)!} = \frac{(b+1)_{2m}\,(2m)!}{\Gamma(x+m+b+1)\,\Gamma(x+m+1)\,\Gamma(-x+m+b+1)\,\Gamma(-x+m+1)}.$$

This agrees (up to a normalization factor) with the weight function considered by the authors in [7, equation 26].

5 Semiclassical Polynomials of Class 1

In this section, we consider all families of polynomials of class 1. We have five main cases, corresponding to (p, q) = (0, 1), (1, 1), (2, 0), (2, 1), (3, 2). There are also 14 subcases.

5.1 (0, 1): Generalized Charlier Polynomials

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{z^x}{(b+1)_x\,x!}.$$


D. Dominici and F. Marcellán

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z}{(x+b+1)(x+1)}.$$

Moments
$$\nu_n(z)=\frac{z^n}{(b+1)_n}\,{}_0F_1\!\left(\begin{matrix}-\\ b+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1)(t+b+1)\,S(t+1)-z\,S(t)=(t+b+1)\,\nu_0+\nu_1.$$
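These moments are taken with respect to the falling-factorial basis, and the $_0F_1$ formula is easy to check numerically against the series defining the functional. A sketch with mpmath, assuming $\phi_n(x)=x(x-1)\cdots(x-n+1)$; the parameter values are arbitrary:

```python
from mpmath import mp, hyp0f1, rf, factorial, nsum, inf

mp.dps = 30
b, z = mp.mpf('0.5'), mp.mpf('2')

def ff(x, n):
    # falling factorial phi_n(x) = x(x-1)...(x-n+1), written as a rising factorial
    return rf(x - n + 1, n)

def nu_direct(n):
    # nu_n = L[phi_n], summing the series that defines the linear functional
    return nsum(lambda x: ff(x, n) * z**x / (rf(b + 1, x) * factorial(x)), [0, inf])

def nu_closed(n):
    # nu_n = z^n/(b+1)_n * 0F1(-; b+1+n; z)
    return z**n / rf(b + 1, n) * hyp0f1(b + 1 + n, z)

assert all(abs(nu_direct(n) - nu_closed(n)) < mp.mpf('1e-18') for n in range(6))
```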

5.2 (1, 1): Generalized Meixner Polynomials

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{(a)_x}{(b+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a)}{(x+1)(x+b+1)}.$$

Moments
$$\nu_n(z)=z^n\,\frac{(a)_n}{(b+1)_n}\,{}_1F_1\!\left(\begin{matrix}a+n\\ b+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1)(t+b+1)\,S(t+1)-z\,(t+a)\,S(t)=(t+b+1-z)\,\nu_0+\nu_1.\tag{51}$$
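The difference equation can be verified numerically if one takes the discrete Stieltjes transform to be $S(t)=\sum_{x\ge 0}w(x)/(t-x)$, an assumption consistent with the moment identities above; the parameter values below are arbitrary:

```python
from mpmath import mp, rf, factorial, nsum, inf

mp.dps = 30
a, b, z = mp.mpf('1.5'), mp.mpf('0.5'), mp.mpf('0.3')

def w(x):
    # generalized Meixner weight (a)_x z^x / ((b+1)_x x!)
    return rf(a, x) * z**x / (rf(b + 1, x) * factorial(x))

def S(t):
    # discrete Stieltjes transform; t must stay off the support N0
    return nsum(lambda x: w(x) / (t - x), [0, inf])

nu0 = nsum(w, [0, inf])
nu1 = nsum(lambda x: x * w(x), [0, inf])

t = mp.mpf('-2.5')
lhs = (t + 1) * (t + b + 1) * S(t + 1) - z * (t + a) * S(t)
rhs = (t + b + 1 - z) * nu0 + nu1
assert abs(lhs - rhs) < mp.mpf('1e-18')
```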

5.2.1 Reduced-Uvarov Charlier Polynomials

Let $\omega=0$.

Special values
$$a \to 0,\qquad b \to -1.$$


Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{\infty}r(x)\,\frac{z^x}{x!}+M\,r(0).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,x}{(x+1)\,x}.$$

Moments
$$\nu_n^{U}(z)=z^n e^{z}+M\,\phi_n(0),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$t\left[(t+1)\,S_U(t+1)-z\,S_U(t)\right]=(t-z)\,\nu_0^{U}+\nu_1^{U}.\tag{52}$$

Since
$$(t-z)\,\nu_0^{U}+\nu_1^{U}=(t-z)\left(e^{z}+M\right)+z e^{z}=t\,\nu_0+M\,(t-z),$$
where $\nu_0=e^{z}$ is the first moment of the Charlier polynomials, we can rewrite (52) in the $M$-dependent form
$$(t+1)\,S_U(t+1)-z\,S_U(t)=\nu_0+M\left(1-\frac{z}{t}\right),$$
which agrees with (31).

5.2.2 Christoffel Charlier Polynomials

Let $\omega\notin\{0,z\}$.

Special values
$$a \to -\omega+1,\qquad b \to -\omega-1.$$

Linear functional
$$\mathcal{L}_C[r]=\sum_{x=0}^{\infty}r(x)\,(x-\omega)\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w_C(x+1)}{w_C(x)}=\frac{z\,(x+1-\omega)}{(x+1)(x-\omega)}.$$

Moments
$$\nu_n^{C}(z)=\nu_{n+1}+(n-\omega)\,\nu_n=(n+z-\omega)\,z^n e^{z},\qquad n\in\mathbb{N}_0.\tag{53}$$

Stieltjes transform difference equation
$$(t-\omega)(t+1)\,S_C(t+1)-(t+1-\omega)\,z\,S_C(t)=\nu_0\left[(z-\omega)\,t+\omega^2-z\omega+z\right].$$

Note that if we use (53), we have
$$\nu_0^{C}=(z-\omega)\,\nu_0,\qquad \nu_1^{C}=(1+z-\omega)\,z\,\nu_0,$$
and therefore
$$(t-\omega)(t+1)\,S_C(t+1)-(t+1-\omega)\,z\,S_C(t)=(t-\omega-z)\,\nu_0^{C}+\nu_1^{C},$$
which agrees with (51) as $a\to 1-\omega$, $b\to-\omega-1$.

5.2.3 Geronimus Charlier Polynomials

Let $\omega\notin\{-1\}\cup\mathbb{N}_0$.

Special values
$$a \to -\omega,\qquad b \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{z\,(x-\omega)}{(x+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+e^{z}\sum_{k=0}^{n-1}\frac{z^k}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1-\omega)\,\sigma(t+1)\,S_G(t+1)-(t-\omega)\,\eta(t)\,S_G(t)=\nu_0+\nu_0^{G}\,(t+1-z).$$

Note that if we use (34), we have $\nu_1^{G}-\omega\,\nu_0^{G}=\nu_0$, and therefore we can write
$$(t+1)(t+1-\omega)\,S_G(t+1)-z\,(t-\omega)\,S_G(t)=(t+1-\omega-z)\,\nu_0^{G}+\nu_1^{G},$$
which agrees with (51) as $a\to-\omega$, $b\to-\omega$.

5.2.4 Truncated Charlier Polynomials

Special values
$$a \to -N,\qquad b \to -N-1.$$

Linear functional
$$\mathcal{L}_T[r]=\sum_{x=0}^{N}r(x)\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w_T(x+1)}{w_T(x)}=\frac{z\,(x-N)}{(x+1)(x-N)}.$$

Moments
$$\nu_n^{T}(z)=\frac{z^N}{(N-n)!}\,{}_2F_0\!\left(\begin{matrix}n-N,\,1\\ -\end{matrix};-\frac{1}{z}\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-N)\left[(t+1)\,S_T(t+1)-z\,S_T(t)\right]=(t-N-z)\,\nu_0^{T}+\nu_1^{T}.$$

Remark 7 The performance of the modified Chebyshev algorithm used to compute the moments of these polynomials was studied in [25, example 4.3].
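Since the $_2F_0$ in the moment formula terminates (its first parameter is a non-positive integer), the formula can be checked against the finite defining sum. A sketch with arbitrary parameters, again assuming the falling-factorial basis $\phi_n$:

```python
from mpmath import mp, hyp2f0, rf, factorial, fsum

mp.dps = 30
N, z = 8, mp.mpf('1.7')

def ff(x, n):
    # falling factorial x(x-1)...(x-n+1)
    return rf(x - n + 1, n)

def nuT_direct(n):
    return fsum(ff(x, n) * z**x / factorial(x) for x in range(N + 1))

def nuT_closed(n):
    # z^N/(N-n)! * 2F0(n-N, 1; -; -1/z), a terminating series
    return z**N / factorial(N - n) * hyp2f0(n - N, 1, -1/z)

assert all(abs(nuT_direct(n) - nuT_closed(n)) < mp.mpf('1e-18') for n in range(N + 1))
```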


5.3 (2, 0; N): Generalized Krawtchouk Polynomials

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{N}r(x)\,(a)_x\,(-N)_x\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a)(x-N)}{x+1}.$$

Moments
$$\nu_n(z)=z^n\,(-N)_n\,(a)_n\,{}_2F_0\!\left(\begin{matrix}-N+n,\,a+n\\ -\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1)\,S(t+1)-z\,(t+a)(t-N)\,S(t)=\left[1-(t+a-N)\,z\right]\nu_0-z\,\nu_1.$$
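Because the support is finite, both the moment formula and the difference equation can be checked by direct summation. A sketch with arbitrary parameters, assuming $\phi_1(x)=x$ and $S(t)=\sum_{x=0}^{N}w(x)/(t-x)$:

```python
from mpmath import mp, hyp2f0, rf, factorial, fsum

mp.dps = 30
N, a, z = 6, mp.mpf('1.3'), mp.mpf('0.7')

def w(x):
    # generalized Krawtchouk weight (a)_x (-N)_x z^x / x!
    return rf(a, x) * rf(-N, x) * z**x / factorial(x)

def ff(x, n):
    return rf(x - n + 1, n)          # falling factorial

# moments: direct finite sum vs terminating 2F0
for n in range(N + 1):
    direct = fsum(ff(x, n) * w(x) for x in range(N + 1))
    closed = z**n * rf(-N, n) * rf(a, n) * hyp2f0(-N + n, a + n, z)
    assert abs(direct - closed) < mp.mpf('1e-18')

# Stieltjes transform difference equation at a point off the support
S = lambda t: fsum(w(x) / (t - x) for x in range(N + 1))
nu0 = fsum(w(x) for x in range(N + 1))
nu1 = fsum(x * w(x) for x in range(N + 1))
t = mp.mpf('-1.7')
lhs = (t + 1) * S(t + 1) - z * (t + a) * (t - N) * S(t)
assert abs(lhs - ((1 - (t + a - N) * z) * nu0 - z * nu1)) < mp.mpf('1e-18')
```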

5.4 (2, 1): Generalized Hahn Polynomials of Type I

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{(a_1)_x\,(a_2)_x}{(b+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a_1)(x+a_2)}{(x+b+1)(x+1)}.$$

Moments
$$\nu_n(z)=z^n\,\frac{(a_1)_n\,(a_2)_n}{(b+1)_n}\,{}_2F_1\!\left(\begin{matrix}a_1+n,\,a_2+n\\ b+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0,$$
where we choose the principal branch $z\in\mathbb{C}\setminus[1,\infty)$. Note that since $b+1+n-(a_1+n+a_2+n)\le 0$ for $n\ge b-a_1-a_2+1$, $\nu_n(1)$ is undefined for $n\ge b-a_1-a_2+1$.

Stieltjes transform difference equation
$$(t+1)(t+b+1)\,S(t+1)-z\,(t+a_1)(t+a_2)\,S(t)=\left[t+b+1-z\,(t+a_1+a_2)\right]\nu_0+(1-z)\,\nu_1.$$
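For $|z|<1$ the series defining the functional converges and the $_2F_1$ moment formula can be checked directly; a sketch with arbitrary parameters:

```python
from mpmath import mp, hyp2f1, rf, factorial, nsum, inf

mp.dps = 30
a1, a2, b, z = mp.mpf('0.7'), mp.mpf('1.2'), mp.mpf('2.5'), mp.mpf('0.4')

def ff(x, n):
    return rf(x - n + 1, n)          # falling factorial

def nu_direct(n):
    return nsum(lambda x: ff(x, n) * rf(a1, x) * rf(a2, x) * z**x
                / (rf(b + 1, x) * factorial(x)), [0, inf])

def nu_closed(n):
    return (z**n * rf(a1, n) * rf(a2, n) / rf(b + 1, n)
            * hyp2f1(a1 + n, a2 + n, b + 1 + n, z))

assert all(abs(nu_direct(n) - nu_closed(n)) < mp.mpf('1e-18') for n in range(5))
```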

5.4.1 Reduced-Uvarov Meixner Polynomials

Let $(\omega,\ell)\in\{(0,0),\,(-a,a+1)\}$.

Special values
$$a_1 \to \ell,\qquad a_2 \to a,\qquad b \to \ell-1.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{\infty}r(x)\,(a)_x\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,(x+\ell)(x+a)}{(x+\ell)(x+1)}.$$

Moments
$$\nu_n^{U}(z)=z^n\,(a)_n\,(1-z)^{-a-n}+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+\ell)\left[(t+1)\,S_U(t+1)-z\,(t+a)\,S_U(t)\right]=\left[(1-z)(t+\ell)-az\right]\nu_0^{U}+(1-z)\,\nu_1^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+1)\,S_U(t+1)-z\,(t+a)\,S_U(t)=(1-z)\,\nu_0+M\left[\frac{t+1}{t-\omega+1}-\frac{z\,(t+a)}{t-\omega}\right].$$

5.4.2 Christoffel Meixner Polynomials

Let $\omega\notin\left\{-a,\,0,\,az\,(1-z)^{-1}\right\}$.

Special values
$$a_1 \to a,\qquad a_2 \to -\omega+1,\qquad b \to -\omega-1.$$

Linear functional
$$\mathcal{L}_C[r]=\sum_{x=0}^{\infty}r(x)\,(x-\omega)\,(a)_x\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w_C(x+1)}{w_C(x)}=\frac{z\,(x+a)(x+1-\omega)}{(x+1)(x-\omega)}.$$

Moments
$$\nu_n^{C}(z)=\nu_{n+1}+(n-\omega)\,\nu_n=z^n\,(a)_n\,(1-z)^{-a-n-1}\,(az+\omega z+n-\omega),\qquad n\in\mathbb{N}_0.\tag{54}$$

Stieltjes transform difference equation
$$(t-\omega)(t+1)\,S_C(t+1)-(t+1-\omega)\,z\,(t+a)\,S_C(t)=\nu_0\left[(az-\omega+\omega z)\,t+\omega^2+az+\omega z-az\omega-z\omega^2\right].$$

Note that if we use (54), we have
$$\nu_0^{C}=\frac{az+\omega z-\omega}{1-z}\,\nu_0,\qquad \nu_1^{C}=\frac{z\,a\,(az+\omega z+1-\omega)}{(1-z)^2}\,\nu_0,$$
and therefore we can also write
$$(t-\omega)(t+1)\,S_C(t+1)-(t+1-\omega)\,z\,(t+a)\,S_C(t)=\left[t-\omega-z\,(t+a-\omega+1)\right]\nu_0^{C}+(1-z)\,\nu_1^{C}.$$

5.4.3 Geronimus Meixner Polynomials

Let $\omega\notin\{-1,\,1-a\}\cup\mathbb{N}_0$.

Special values
$$a_1 \to a,\qquad a_2 \to -\omega,\qquad b \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,(a)_x\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{z\,(x+a)(x-\omega)}{(x+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+(1-z)^{-a}\sum_{k=0}^{n-1}\frac{z^k\,(a)_k\,(1-z)^{-k}}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1)(t-\omega+1)\,S_G(t+1)-z\,(t+a)(t-\omega)\,S_G(t)=(1-z)\,\nu_0+\nu_0^{G}\left[(1-z)\,t+(1-az)\right].$$

Note that if we use (34), we have $\nu_1^{G}-\omega\,\nu_0^{G}=\nu_0$, and therefore we can also write
$$(t+1)(t-\omega+1)\,S_G(t+1)-z\,(t+a)(t-\omega)\,S_G(t)=\left[(1-z)\,t+z\omega-\omega-az+1\right]\nu_0^{G}+(1-z)\,\nu_1^{G}.$$

5.4.4 Truncated Meixner Polynomials

Special values
$$a_1 \to a,\qquad a_2 \to -N,\qquad b \to -N-1.$$

Linear functional
$$\mathcal{L}_T[r]=\sum_{x=0}^{N}r(x)\,(a)_x\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w_T(x+1)}{w_T(x)}=\frac{z\,(x+a)(x-N)}{(x-N)(x+1)}.$$

Moments
$$\nu_n^{T}=(a)_N\,\frac{z^N}{(N-n)!}\,{}_2F_1\!\left(\begin{matrix}n-N,\,1\\ 1-N-a\end{matrix};\frac{1}{z}\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-N)\left[(t+1)\,S_T(t+1)-z\,(t+a)\,S_T(t)\right]=\left[(t-N)(1-z)-az\right]\nu_0^{T}+(1-z)\,\nu_1^{T}.$$

5.4.5 Symmetrized Generalized Krawtchouk Polynomials

Special values
$$a_1 \to a,\qquad a_2 \to -2m,\qquad b \to -2m-a,\qquad z \to -1,\qquad x \to x+m.$$

Linear functional
$$\mathcal{L}_\varsigma[r]=w_\varsigma(0)\sum_{x=-m}^{m}r(x)\,\frac{(a+m)_x\,(-m)_x}{(-m-a+1)_x\,(m+1)_x}\,(-1)^x,$$
where
$$w_\varsigma(0)=(-1)^m\binom{2m}{m}\frac{(a)_m}{(a+m)_m}.$$

Pearson equation
$$\frac{w_\varsigma(x+1)}{w_\varsigma(x)}=\frac{-(x+a+m)(x-m)}{(x+1-a-m)(x+1+m)}.$$

Moments on the basis $\{\phi_n(x)\}_{n\ge 0}$
$$\nu_n^{\varsigma}=(-1)^n\,\frac{(a)_n\,(-2m)_n}{(-2m-a)_n}\,{}_2F_1\!\left(\begin{matrix}-2m+n,\,a+n\\ -2m-a+n\end{matrix};-1\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+m+1)(t-m-a+1)\,S_\varsigma(t+1)+(t-m)(t+m+a)\,S_\varsigma(t)=(2t-2m+1)\,\nu_0^{\varsigma}+2\,\nu_1^{\varsigma}.$$


5.5 (3, 2; N, 1): Generalized Hahn Polynomials of Type II

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{N}r(x)\,\frac{(a_1)_x\,(a_2)_x\,(-N)_x}{(b_1+1)_x\,(b_2+1)_x}\,\frac{1}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{(x+a_1)(x+a_2)(x-N)}{(x+b_1+1)(x+b_2+1)(x+1)}.$$

Moments
$$\nu_n=\frac{(-N)_n\,(a_1)_n\,(a_2)_n}{(b_1+1)_n\,(b_2+1)_n}\,{}_3F_2\!\left(\begin{matrix}-N+n,\,a_1+n,\,a_2+n\\ b_1+1+n,\,b_2+1+n\end{matrix};1\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+b_1+1)(t+b_2+1)(t+1)\,S(t+1)-(t+a_1)(t+a_2)(t-N)\,S(t)$$
$$=\left[(N+2+b_1+b_2-a_1-a_2)\,t+b_1+b_2+b_1b_2-a_1-a_2-a_1a_2+Na_1+Na_2+1\right]\nu_0+(N+b_1+b_2-a_1-a_2+1)\,\nu_1.$$
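The $_3F_2(\cdot\,;1)$ here terminates, so the moment formula can again be compared with the finite sum; a sketch with arbitrary parameters:

```python
from mpmath import mp, hyper, rf, factorial, fsum

mp.dps = 30
N = 5
a1, a2, b1, b2 = map(mp.mpf, ('0.3', '1.1', '0.9', '2.2'))

def w(x):
    return (rf(a1, x) * rf(a2, x) * rf(-N, x)
            / (rf(b1 + 1, x) * rf(b2 + 1, x) * factorial(x)))

def ff(x, n):
    return rf(x - n + 1, n)          # falling factorial

for n in range(N + 1):
    direct = fsum(ff(x, n) * w(x) for x in range(N + 1))
    closed = (rf(-N, n) * rf(a1, n) * rf(a2, n) / (rf(b1 + 1, n) * rf(b2 + 1, n))
              * hyper([-N + n, a1 + n, a2 + n], [b1 + 1 + n, b2 + 1 + n], 1))
    assert abs(direct - closed) < mp.mpf('1e-18')
```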

5.5.1 Reduced-Uvarov Hahn Polynomials

Let $(\omega,\ell)\in\{(0,0),\,(-a,a+1),\,(-b,b),\,(N,-N+1)\}$.

Special values
$$a_1 \to a,\qquad a_2 \to \ell,\qquad b_1 \to \ell-1,\qquad b_2 \to b.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{N}r(x)\,\frac{(a)_x\,(-N)_x}{(b+1)_x}\,\frac{1}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{(x+a)(x+\ell)(x-N)}{(x+b+1)(x+\ell)(x+1)}.$$

Moments
$$\nu_n^{U}=\frac{(-N)_n\,(a)_n\,(b+1-a)_{N-n}}{(b+1)_n\,(b+1+n)_{N-n}}+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+\ell)\left[(t+b+1)(t+1)\,S_U(t+1)-(t+a)(t-N)\,S_U(t)\right]=\left[(N-a+b+1)\,t+(N-a+b+1)\,\ell+aN\right]\nu_0^{U}+(N+b-a)\,\nu_1^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+b+1)(t+1)\,S_U(t+1)-(t-N)(t+a)\,S_U(t)=(N+b-a+1)\,\nu_0+M\left[\frac{(t+b+1)(t+1)}{t-\omega+1}-\frac{(t-N)(t+a)}{t-\omega}\right].$$

5.5.2 Christoffel Hahn Polynomials

" Let ω ∈ / −a, −b, 0, N, − Special values a1 → a,

aN b−a+N

 .

a2 → −ω + 1,

b1 → b,

b2 → −ω − 1.

Linear functional LC [r] =

∞ 

r (x) (x − ω)

x=0

(a)x (−N)x 1 . (b + 1)x x!

Pearson equation "C (x + 1) (x + a) (x − N) (x + 1 − ω) = . "C (x) (x + b + 1) (x + 1) (x − ω) Moments   (a + n) (N − n) νnC (z) = νn+1 +(n − ω) νn = n − ω − νn , b−a+N −n

n ∈ N0 .

(55)

Stieltjes transform difference equation (t − ω) (t + b + 1) (t + 1) SC (t + 1) − (t − ω + 1) (t − N ) (t + a) SC (t)   = (aω − N ω − bω − N a) t + aω − N ω + ω2 + N ω2 − aω2 + bω2 − N a + N aω ν0 .

Discrete Semiclassical Orthogonal Polynomials of Class 2

139

Note that if we use (55), we have ν0C = −

(a + ω) N + (b − a) ω ν0 , b−a+N

ν1C =

(N a + N ω − aω + bω − b − ω)N a ν0 , (b − a + N ) (b − a + N − 1)

and therefore we can also write (t − ω) (t + b + 1) (t + 1) SC (t + 1) − (t − ω + 1) (t − N ) (t + a) SC (t) = [(N − a + b) t + N a − N ω + aω − bω + N − a − ω] ν0C + (N − a + b − 1) ν1C .

5.5.3 Geronimus Hahn Polynomials

Let $\omega\notin\{-b-1,\,-1,\,1-a,\,N+1\}\cup\mathbb{N}_0$.

Special values
$$a_1 \to a,\qquad a_2 \to -\omega,\qquad b_1 \to b,\qquad b_2 \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,\frac{(a)_x\,(-N)_x}{(b+1)_x}\,\frac{1}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{(x+a)(x-N)(x-\omega)}{(x+b+1)(x+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+\sum_{k=0}^{n-1}\frac{\nu_k}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation: since $\xi_G(t)=\xi(t)+\left[\sigma(t+1)-\eta(t)\right]\nu_0^{G}$, we have
$$(t-\omega+1)(t+b+1)(t+1)\,S_G(t+1)-(t-\omega)(t-N)(t+a)\,S_G(t)=(b+1-a+N)\,\nu_0+\left[(N-a+b+2)\,t+b+Na+1\right]\nu_0^{G}.$$

Note that if we use (34), we have $\nu_1^{G}-\omega\,\nu_0^{G}=\nu_0$, and therefore we can also write
$$(t-\omega+1)(t+b+1)(t+1)\,S_G(t+1)-(t-\omega)(t-N)(t+a)\,S_G(t)$$
$$=\left[(N-a+b+2)\,t+Na-N\omega+a\omega-b\omega+b-\omega+1\right]\nu_0^{G}+(N+b-a+1)\,\nu_1^{G}.$$

5.5.4 Symmetrized Hahn Polynomials

Special values
$$a_1 \to a,\qquad a_2 \to -N-b,\qquad b_1 \to b,\qquad b_2 \to -N-a,\qquad N \to 2m,\qquad x \to x+m.$$

Linear functional
$$\mathcal{L}_\varsigma[r]=w_\varsigma(0)\sum_{x=-m}^{m}r(x)\,\frac{(m+a)_x\,(-m-b)_x\,(-m)_x}{(1-m-a)_x\,(m+b+1)_x\,(m+1)_x},$$
where
$$w_\varsigma(0)=(-1)^m\binom{2m}{m}\frac{(a)_m\,(b+1+m)_m}{(a+m)_m\,(b+1)_m}.$$

Pearson equation
$$\frac{w_\varsigma(x+1)}{w_\varsigma(x)}=\frac{(x+m+a)(x-m-b)(x-m)}{(x+1-m-a)(x+1+m+b)(x+1+m)}.$$

Moments on the basis $\{\phi_n(x)\}_{n\ge 0}$
$$\nu_n^{\varsigma}=\frac{(-2m)_n\,(a)_n\,(-2m-b)_n}{(b+1)_n\,(-2m-a+1)_n}\,{}_3F_2\!\left(\begin{matrix}-2m+n,\,a+n,\,-2m-b+n\\ b+1+n,\,-2m-a+1+n\end{matrix};1\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1-m-a)(t+1+m+b)(t+1+m)\,S_\varsigma(t+1)-(t+m+a)(t-m-b)(t-m)\,S_\varsigma(t)$$
$$=\left[2\,(1+b-a+m)\,t+2am-2bm-2m^2-a+b+1\right]\nu_0^{\varsigma}+(1+2m-2a+2b)\,\nu_1^{\varsigma}.$$

5.5.5 Symmetrized Polynomials of Type (3, 0; N)

For a definition of the polynomials of type (3, 0; N), see Sect. 6.4.

Special values
$$b_1 \to -N-a_1,\qquad b_2 \to -N-a_2,\qquad N \to 2m,\qquad x \to x+m.$$

Linear functional
$$\mathcal{L}_\varsigma[r]=w_\varsigma(0)\sum_{x=-m}^{m}r(x)\,\frac{(a_1+m)_x\,(a_2+m)_x\,(-m)_x}{(1-a_1-m)_x\,(1-a_2-m)_x\,(m+1)_x},$$
where
$$w_\varsigma(0)=(-1)^m\binom{2m}{m}\frac{(a_1)_m\,(a_2)_m}{(a_1+m)_m\,(a_2+m)_m}.$$

Pearson equation
$$\frac{w_\varsigma(x+1)}{w_\varsigma(x)}=\frac{(x+a_1+m)(x+a_2+m)(x-m)}{(x+1-a_1-m)(x+1-a_2-m)(x+1+m)}.$$

Moments on the basis $\{\phi_n(x)\}_{n\ge 0}$
$$\nu_n^{\varsigma}=\frac{(-2m)_n\,(a_1)_n\,(a_2)_n}{(1-a_1-2m)_n\,(1-a_2-2m)_n}\,{}_3F_2\!\left(\begin{matrix}-2m+n,\,a_1+n,\,a_2+n\\ 1-a_1-2m+n,\,1-a_2-2m+n\end{matrix};1\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1-a_1-m)(t+1-a_2-m)(t+1+m)\,S_\varsigma(t+1)-(t+a_1+m)(t+a_2+m)(t-m)\,S_\varsigma(t)$$
$$=\left[-2\,(m+a_1+a_2-1)\,t+2m^2-2m+1+(2m-1)(a_1+a_2)\right]\nu_0^{\varsigma}+(-2m+1-2a_1-2a_2)\,\nu_1^{\varsigma}.$$


6 Semiclassical Polynomials of Class 2

In this section, we consider all families of polynomials of class 2. We have seven main cases, corresponding to $(p,q)=(0,2),(1,2),(2,2),(3,0),(3,1),(3,2),(4,3)$. There are also 25 subcases.

6.1 Polynomials of Type (0, 2)

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{z^x}{(b_1+1)_x\,(b_2+1)_x\,x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z}{(x+b_1+1)(x+b_2+1)(x+1)}.$$

Moments
$$\nu_n(z)=\frac{z^n}{(b_1+1)_n\,(b_2+1)_n}\,{}_0F_2\!\left(\begin{matrix}-\\ b_1+1+n,\,b_2+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+b_1+1)(t+b_2+1)(t+1)\,S(t+1)-z\,S(t)=(t+b_1+1)(t+b_2+1)\,\nu_0+(t+b_1+b_2+2)\,\nu_1+\nu_2.$$

6.2 Polynomials of Type (1, 2)

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{(a)_x}{(b_1+1)_x\,(b_2+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a)}{(x+b_1+1)(x+b_2+1)(x+1)}.$$

Moments
$$\nu_n(z)=z^n\,\frac{(a)_n}{(b_1+1)_n\,(b_2+1)_n}\,{}_1F_2\!\left(\begin{matrix}a+n\\ b_1+1+n,\,b_2+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+b_1+1)(t+b_2+1)(t+1)\,S(t+1)-z\,(t+a)\,S(t)=\left[(t+b_1+1)(t+b_2+1)-z\right]\nu_0+(t+b_1+b_2+2)\,\nu_1+\nu_2.$$

6.2.1 Reduced-Uvarov Generalized Charlier Polynomials

Let $\omega\in\{0,-b\}$.

Special values
$$a \to -\omega,\qquad b_1 \to b,\qquad b_2 \to -\omega-1.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{\infty}r(x)\,\frac{z^x}{(b+1)_x\,x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,(x-\omega)}{(x+b+1)(x-\omega)(x+1)}.$$

Moments
$$\nu_n^{U}(z)=\frac{z^n}{(b+1)_n}\,{}_0F_1\!\left(\begin{matrix}-\\ b+1+n\end{matrix};z\right)+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-\omega)\left[(t+b+1)(t+1)\,S_U(t+1)-z\,S_U(t)\right]=\left[(t+b+1)(t+b)-z\right]\nu_0^{U}+(t+2b+1)\,\nu_1^{U}+\nu_2^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+b+1)(t+1)\,S_U(t+1)-z\,S_U(t)=(t+b+1)\,\nu_0+\nu_1+M\left[\frac{(t+b+1)(t+1)}{t-\omega+1}-\frac{z}{t-\omega}\right].$$

6.2.2 Christoffel Generalized Charlier Polynomials

"  ν1 Let ω ∈ / −b, 0, . ν0 Special values a → −ω + 1,

b1 → b,

b2 → −ω − 1.

r (x) (x − ω)

zx 1 . (b + 1)x x!

Linear functional LC [r] =

∞  x=0

Pearson equation "C (x + 1) z (x + 1 − ω) = . "C (x) (x + b + 1) (x + 1) (x − ω) Moments νnC (z) = νn+1 + (n − ω) νn ,

n ∈ N0 .

Stieltjes transform difference equation (t − ω) (t + b + 1) (t + 1) SC (t + 1) − z (t − ω + 1) SC (t) 



  = −ωt 2 + z − ω − bω + ω2 t + z − zω + ω2 + bω2 ν0 + (t − ω) (t − ω + 1) ν1 .

6.2.3 Geronimus Generalized Charlier Polynomials

Let $\omega\notin\{-b-1,\,-1\}\cup\mathbb{N}_0$.

Special values
$$a \to -\omega,\qquad b_1 \to b,\qquad b_2 \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,\frac{z^x}{(b+1)_x\,x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{z\,(x-\omega)}{(x+b+1)(x+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+\sum_{k=0}^{n-1}\frac{\nu_k}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-\omega+1)(t+b+1)(t+1)\,S_G(t+1)-(t-\omega)\,z\,S_G(t)=(t+b+1)\,\nu_0+\nu_1+\left[(t+b+1)(t+1)-z\right]\nu_0^{G}.$$

6.2.4 Truncated Generalized Charlier Polynomials

Special values
$$a \to -N,\qquad b_1 \to b,\qquad b_2 \to -N-1.$$

Linear functional
$$\mathcal{L}_T[r]=\sum_{x=0}^{N}r(x)\,\frac{z^x}{(b+1)_x\,x!}.$$

Pearson equation
$$\frac{w_T(x+1)}{w_T(x)}=\frac{z\,(x-N)}{(x+b+1)(x-N)(x+1)}.$$

Moments
$$\nu_n^{T}(z)=\frac{z^N}{(b+1)_N\,(N-n)!}\,{}_3F_0\!\left(\begin{matrix}n-N,\,1,\,-N-b\\ -\end{matrix};\frac{1}{z}\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-N)\left[(t+b+1)(t+1)\,S_T(t+1)-z\,S_T(t)\right]=\left[(t+b+1)(t-N)-z\right]\nu_0^{T}+(t+b+1-N)\,\nu_1^{T}+\nu_2^{T}.$$
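The terminating $_3F_0$ in the moment formula can be checked against the finite sum; a sketch with arbitrary parameters, assuming the falling-factorial basis:

```python
from mpmath import mp, hyper, rf, factorial, fsum

mp.dps = 30
N, b, z = 6, mp.mpf('0.4'), mp.mpf('1.3')

def ff(x, n):
    return rf(x - n + 1, n)          # falling factorial

def nuT_direct(n):
    return fsum(ff(x, n) * z**x / (rf(b + 1, x) * factorial(x))
                for x in range(N + 1))

def nuT_closed(n):
    # z^N/((b+1)_N (N-n)!) * 3F0(n-N, 1, -N-b; -; 1/z), a terminating series
    return (z**N / (rf(b + 1, N) * factorial(N - n))
            * hyper([n - N, 1, -N - b], [], 1/z))

assert all(abs(nuT_direct(n) - nuT_closed(n)) < mp.mpf('1e-18') for n in range(N + 1))
```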

6.3 Polynomials of Type (2, 2)

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{\infty}r(x)\,\frac{(a_1)_x\,(a_2)_x}{(b_1+1)_x\,(b_2+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a_1)(x+a_2)}{(x+b_1+1)(x+b_2+1)(x+1)}.$$

Moments
$$\nu_n(z)=z^n\,\frac{(a_1)_n\,(a_2)_n}{(b_1+1)_n\,(b_2+1)_n}\,{}_2F_2\!\left(\begin{matrix}a_1+n,\,a_2+n\\ b_1+1+n,\,b_2+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+b_1+1)(t+b_2+1)(t+1)\,S(t+1)-z\,(t+a_1)(t+a_2)\,S(t)$$
$$=\left[(t+b_1+1)(t+b_2+1)-(t+a_1+a_2)\,z\right]\nu_0+(t+b_1+b_2+2-z)\,\nu_1+\nu_2.$$

6.3.1 Uvarov Charlier Polynomials

Let $\omega=0$.

Special values
$$a_1 \to -\omega,\qquad a_2 \to -\omega+1,\qquad b_1 \to -\omega-1,\qquad b_2 \to -\omega.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{\infty}r(x)\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,(x-\omega)(x-\omega+1)}{(x-\omega)(x-\omega+1)(x+1)}.$$

Moments
$$\nu_n^{U}(z)=z^n e^{z}+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-\omega)(t-\omega+1)\left[(t+1)\,S_U(t+1)-z\,S_U(t)\right]$$
$$=\left[(t-\omega)(t-\omega+1)-(t-2\omega+1)\,z\right]\nu_0^{U}+(t-z-2\omega+1)\,\nu_1^{U}+\nu_2^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+1)\,S_U(t+1)-z\,S_U(t)=\nu_0+M\left[\frac{t+1}{t+1-\omega}-\frac{z}{t-\omega}\right].$$

6.3.2 Reduced-Uvarov Generalized Meixner Polynomials

Let $(\omega,\ell)\in\{(-a,a+1),\,(-b,b),\,(0,0)\}$.

Special values
$$a_1 \to a,\qquad a_2 \to \ell,\qquad b_1 \to b,\qquad b_2 \to \ell-1.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{\infty}r(x)\,\frac{(a)_x}{(b+1)_x}\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,(x+a)(x+\ell)}{(x+b+1)(x+\ell)(x+1)}.$$

Moments
$$\nu_n^{U}(z)=z^n\,\frac{(a)_n}{(b+1)_n}\,{}_1F_1\!\left(\begin{matrix}a+n\\ b+1+n\end{matrix};z\right)+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+\ell)\left[(t+b+1)(t+1)\,S_U(t+1)-z\,(t+a)\,S_U(t)\right]$$
$$=\left[(t+b+1)(t+\ell)-(t+a+\ell)\,z\right]\nu_0^{U}+(t+b+1+\ell-z)\,\nu_1^{U}+\nu_2^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+b+1)(t+1)\,S_U(t+1)-z\,(t+a)\,S_U(t)=(t+b+1-z)\,\nu_0+\nu_1+M\left[\frac{(t+b+1)(t+1)}{t+1-\omega}-\frac{z\,(t+a)}{t-\omega}\right].$$

6.3.3 Christoffel Generalized Meixner Polynomials

"  ν1 Let ω ∈ / −a, −b, 0, . ν0 Special values a1 → a,

a2 → −ω + 1,

b1 → b,

b2 → −ω − 1.

Linear functional LC [r] =

∞ 

r (x) (x − ω)

x=0

(a)x zx . (b + 1)x x!

Pearson equation "C (x + 1) z (x + a) (x + 1 − ω) = . "C (x) (x + 1) (x + b + 1) (x − ω) Moments νnC (z) = νn+1 + (n − ω) νn ,

n ∈ N0 .

Stieltjes transform difference equation (t − ω) (t + b + 1) (t + 1) SC (t + 1) − z (t − ω + 1) (t + a) SC (t)  

 = −ωt 2 + zω − bω − ω + ω2 + az t + zω + ω2 + bω2 − zω2 + az − azω ν0 + (t − ω) (t − ω + 1) ν1 .

6.3.4 Geronimus Generalized Meixner Polynomials

Let $\omega\notin\{-b-1,\,-1,\,1-a\}\cup\mathbb{N}_0$.

Special values
$$a_1 \to a,\qquad a_2 \to -\omega,\qquad b_1 \to b,\qquad b_2 \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,\frac{(a)_x}{(b+1)_x}\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{z\,(x+a)(x-\omega)}{(x+1)(x+b+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+\sum_{k=0}^{n-1}\frac{\nu_k}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-\omega+1)(t+b+1)(t+1)\,S_G(t+1)-z\,(t-\omega)(t+a)\,S_G(t)=(t+b+1-z)\,\nu_0+\nu_1+\left[(t+b+1)(t+1)-z\,(t+a)\right]\nu_0^{G}.$$

6.3.5 Truncated Generalized Meixner Polynomials

Special values
$$a_1 \to a,\qquad a_2 \to -N,\qquad b_1 \to b,\qquad b_2 \to -N-1.$$

Linear functional
$$\mathcal{L}_T[r]=\sum_{x=0}^{N}r(x)\,\frac{(a)_x}{(b+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w_T(x+1)}{w_T(x)}=\frac{z\,(x+a)(x-N)}{(x+b+1)(x-N)(x+1)}.$$

Moments
$$\nu_n^{T}(z)=\frac{z^N\,(a)_N}{(b+1)_N\,(N-n)!}\,{}_3F_1\!\left(\begin{matrix}n-N,\,1,\,-N-b\\ 1-N-a\end{matrix};-\frac{1}{z}\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-N)\left[(t+b+1)(t+1)\,S_T(t+1)-z\,(t+a)\,S_T(t)\right]=\left[(t+b+1)(t-N)-(t+a-N)\,z\right]\nu_0^{T}+(t+b-N+1-z)\,\nu_1^{T}+\nu_2^{T}.$$

6.4 Polynomials of Type (3, 0; N)

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{N}r(x)\,(a_1)_x\,(a_2)_x\,(-N)_x\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a_1)(x+a_2)(x-N)}{x+1}.$$

Moments
$$\nu_n(z)=z^n\,(-N)_n\,(a_1)_n\,(a_2)_n\,{}_3F_0\!\left(\begin{matrix}-N+n,\,a_1+n,\,a_2+n\\ -\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+1)\,S(t+1)-z\,(t+a_1)(t+a_2)(t-N)\,S(t)$$
$$=\left[-zt^2-(a_1+a_2-N)\,zt+z\,(Na_1+Na_2-a_1a_2)+1\right]\nu_0-(t+a_1+a_2+1-N)\,z\,\nu_1-z\,\nu_2.$$

6.5 Polynomials of Type (3, 1; N)

Linear functional
$$\mathcal{L}[r]=\sum_{x=0}^{N}r(x)\,\frac{(a_1)_x\,(a_2)_x\,(-N)_x}{(b+1)_x}\,\frac{z^x}{x!}.$$

Pearson equation
$$\frac{w(x+1)}{w(x)}=\frac{z\,(x+a_1)(x+a_2)(x-N)}{(x+b+1)(x+1)}.$$

Moments
$$\nu_n(z)=z^n\,\frac{(-N)_n\,(a_1)_n\,(a_2)_n}{(b+1)_n}\,{}_3F_1\!\left(\begin{matrix}-N+n,\,a_1+n,\,a_2+n\\ b+1+n\end{matrix};z\right),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+b+1)(t+1)\,S(t+1)-z\,(t+a_1)(t+a_2)(t-N)\,S(t)$$
$$=\left[-zt^2+t-(a_1+a_2-N)\,zt+(Na_1+Na_2-a_1a_2)\,z+b+1\right]\nu_0+\left[1-(t+a_1+a_2+1-N)\,z\right]\nu_1-z\,\nu_2.$$

6.5.1 Reduced-Uvarov Generalized Krawtchouk Polynomials

Let $(\omega,\ell)\in\{(-a,a+1),\,(0,0),\,(N,-N+1)\}$.

Special values
$$a_1 \to a,\qquad a_2 \to \ell,\qquad b \to \ell-1.$$

Linear functional
$$\mathcal{L}_U[r]=\sum_{x=0}^{N}r(x)\,(a)_x\,(-N)_x\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_U(x+1)}{w_U(x)}=\frac{z\,(x+a)(x+\ell)(x-N)}{(x+\ell)(x+1)}.$$

Moments
$$\nu_n^{U}(z)=z^n\,(-N)_n\,(a)_n\,{}_2F_0\!\left(\begin{matrix}-N+n,\,a+n\\ -\end{matrix};z\right)+M\,\phi_n(\omega),\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t+\ell)\left[(t+1)\,S_U(t+1)-z\,(t+a)(t-N)\,S_U(t)\right]$$
$$=\left[-zt^2+t-(a+\ell-N)\,zt+(Na+N\ell-a\ell)\,z+\ell\right]\nu_0^{U}+\left[1-(t+a+\ell+1-N)\,z\right]\nu_1^{U}-z\,\nu_2^{U}.$$

Stieltjes transform $M$-dependent difference equation
$$(t+1)\,S_U(t+1)-z\,(t+a)(t-N)\,S_U(t)=\left[1-(t+a-N)\,z\right]\nu_0-z\,\nu_1+M\left[\frac{t+1}{t-\omega+1}-\frac{z\,(t+a)(t-N)}{t-\omega}\right].$$

6.5.2 Christoffel Generalized Krawtchouk Polynomials

 " ν1 . Let ω ∈ / −a, 0, N, ν0 Special values a1 → a,

a2 → −ω + 1,

b → −ω − 1.

Linear functional LC [r] =

∞ 

r (x) (x − ω) (a)x (−N)x

x=0

zx . x!

Pearson equation "C (x + 1) z (x + a) (x − N) (x + 1 − ω) = . "C (x) (x + 1) (x − ω) Moments νnC (z) = νn+1 + (n − ω) νn ,

n ∈ N0 .

Stieltjes transform difference equation (t − ω) (t + 1) SC (t + 1) − z (t − ω + 1) (t + a) (t − N) SC (t)  

 = zωt 2 − ω − zω + zω2 + Nzω − azω + Naz t ν0 

+ ω2 − Nzω + azω + Nzω2 − azω2 − Naz + Nazω ν0 − z (t − ω) (t − ω + 1) ν1 .

6.5.3 Geronimus Generalized Krawtchouk Polynomials

Let $\omega\notin\{-1,\,1-a,\,1+N\}\cup\mathbb{N}_0$.

Special values
$$a_1 \to a,\qquad a_2 \to -\omega,\qquad b \to -\omega.$$

Linear functional
$$\mathcal{L}_G[r]=\sum_{x=0}^{\infty}\frac{r(x)}{x-\omega}\,(a)_x\,(-N)_x\,\frac{z^x}{x!}+M\,r(\omega).$$

Pearson equation
$$\frac{w_G(x+1)}{w_G(x)}=\frac{z\,(x+a)(x-N)(x-\omega)}{(x+1)(x+1-\omega)}.$$

Moments
$$\nu_n^{G}(z)=\phi_n(\omega)\left[M-S(\omega)+\sum_{k=0}^{n-1}\frac{\nu_k}{\phi_{k+1}(\omega)}\right],\qquad n\in\mathbb{N}_0.$$

Stieltjes transform difference equation
$$(t-\omega+1)(t+1)\,S_G(t+1)-z\,(t-\omega)(t+a)(t-N)\,S_G(t)=(-zt+Nz-az+1)\,\nu_0-z\,\nu_1+\left[t+1-z\,(t+a)(t-N)\right]\nu_0^{G}.$$

6.6 Polynomials of Type (3,2) Linear functional L [r] =

∞  r (x) x=0

(a1 )x (a2 )x (a3 )x zx . (b1 + 1)x (b2 + 1)x x!

Pearson equation " (x + 1) z (x + a1 ) (x + a2 ) (x + a3 ) = . " (x) (x + b1 + 1) (x + b2 + 1) (x + 1)

154

D. Dominici and F. Marcellán

Moments   (a1 )n (a2 )n (a3 )n a1 + n, a2 + n, a3 + n νn (z) = z ;z , 3 F2 b1 + 1 + n, b2 + 1 + n (b1 + 1)n (b2 + 1)n n

n ∈ N0 ,

where we choose the principal branch z ∈ C \ [1, ∞). Note that since b1 +1+n+b2 +1+n−(a1 + n + a2 + n + a3 + n) ≤ 0,

n ≥ b1 +b2 −a2 −a3 −a1 +2,

νn (1) is undefined for n ≥ b1 + b2 − a2 − a3 − a1 + 2. Stieltjes transform difference equation (t + b1 + 1) (t + b2 + 1) (t + 1) S (t + 1) − z (t + a1 ) (t + a2 ) (t + a3 ) S (t) =

2 

ξk t k ,

k=0

where ξ2 = (1 − z) ν0 ,

ξ1 = [e1 (b + 1) − e1 (a) z] ν0 + (1 − z) ν1 ,

ξ0 = [e2 (b + 1) − e2 (a) z] ν0 + [e1 (b + 1) − e1 (a) z − z] ν1 + (1 − z) ν2 , and e1 , e2 , e3 denote the elementary symmetric polynomials defined by e1 (x) =

 xi ,

e2 (x) =

i

6.6.1



xi xj ,

i 1. Denote σ the balayage measure associated to σ supported on the interval [−1, 1]. Given a rational number a ∈ [0, 1], consider a subsequence  ⊂ N such that for    1−a n ∈ N. Let x = xn = x1,n , . . . , xn,n n∈ be a scheme of each n ∈ , 2 κ nodes. If for each j = 1, . . . n, n ∈  there are two constants A ≥ 0 and > 0 satisfying 2 2  1 2 2j − 1 22 2 π 2 ≤ Ae− n , dσ (t) − a arccos xj,n − 2(1 − a)π 2 2 2n xj,n

(10)

  then there always exist weights w = wn = (w1,n , . . . , wn,n ) n∈N , where {In }n∈ corresponding to x and w is convergent. In Sect. 2 we give some explicit schemes that satisfy the relation (10). The statement of Theorem 1 is proved in Sect. 5. In such proof we use results coming from the orthogonal polynomials theory that are analyzed in Sects. 3 and 4. In Sect. 3 we study algebraic properties of families of orthogonal polynomials and their connections with convergent conditions of non-complete interpolatory quadrature rules. In Sect. 4 we describe the strong asymptotic behavior of an appropriated family of orthogonal polynomials with respect to a varying measure.

2 Some Explicit Convergent Schemes of Nodes We consider three particular cases where the inequality (10) holds. In the three situations the measure σ = δζ corresponds to a Dirac delta supported on a point belonging to the real line ζ > 2. Hence the situations are when a takes the values 0, 1/2, and 1. According to [15, Section II.4 equation (4.46)], the balayage measure of σ = δζ on [−1, 1] has the following differential form * ζ2 − 1 dσ (t) = d t. √ π(ζ − t) 1 − t 2

(11)

We study the function  Ia (x) = (1 − a)π x

1

 0 dσ (t) = (1 − a) ζ 2 − 1 x

1

dt . √ (ζ − t) 1 − t 2

Taking the change of variables t = cos θ and taking into account ζ > 2 (ϕ(ζ ) > 2 implies that arg(1 − ϕ(ζ )) = π ), we have that  



Ia (x) = (1 − a) − arccos x + 2 arg ei arccos x − ϕ(ζ ) − 2π .

Convergent Non Complete Interpolatory Quadrature Rules

197

In this situation the condition of convergence (10) in Theorem 1 acquires the following form 2 2  

 2 2 2arccos xj,n + 2(1 − a) π − arg ei arccos x − ϕ(ζ ) + 2j − 1 π 2 ≤ Ae− n . 2 2 2n (12)    Then a scheme x = xn = x1,n , . . . , xn,n n∈ that satisfies the following relation is convergent  

 2j − 1 arccos xj,n +2(1−a) π − arg ei arccos x − ϕ(ζ ) = − π −Ae− n = κj,n , 2n with A > 0 and > 0. This means that  



 = cos κj,n . cos arccos xj,n + 2(1 − a) π − arg ei arccos x − ϕ(ζ ) Using the cosine addition formula we have that 6  

7 xj,n cos 2(1 − a) π − arg ei arccos x − ϕ(ζ ) −

0

6  

7 2 sin 2(1 − a) π − arg ei arccos x − ϕ(ζ ) 1 − xj,n = cos κj,n .

(13)

First we consider the situation a = 1. In this case the expressions in (13) become xj,n = cos κj,n ,

j = 1, . . . , n,

n ∈ .

(14)

The nodes are close to the zeros of the Chebyshev polynomials. That’s because of the term corresponding to the σ ’s influence in (13) vanishes when a = 1. Let us analyze now the case a = 1/2. We consider the following identities  

 cos π − arg ei arccos xj,n − ϕ(ζ ) = 0

ϕ(ζ ) − xj,n ϕ 2 (ζ ) − 2ϕ(ζ )xj,n + 1

(15)

and  

 sin π − arg ei arccos xj,n − ϕ(ζ ) = 0

0 2 1 − xj,n ϕ 2 (ζ ) − 2ϕ(ζ )xj,n + 1

.

(16)

Substituting (15) and (16) in (13) we arrive at the quadratic equations: 2 xj,n −

2 sin2 κj,n sin2 κj,n − cos2 κj,n = 0, xj,n + 2 ϕ(ζ ) ϕ (ζ )

j = 1, . . . , n,

n ∈ .

198

U. Fidalgo and J. Olson

For each j = 1, . . . , n, n ∈  we obtained the following solutions xj,n =

  0 1 sin2 κj,n + cos κj,n ϕ 2 (ζ ) − sin2 κj,n . ϕ(ζ )

(17)

During the process of finding these above solutions we introduce some extra solutions that we removed. Observe that when ζ tends to ∞ the expressions in (17) reduce to (14). This is in accordance with the fact that σ approaches λ0 as ζ → ∞, see (11), hence we only considered the positive branch of the square root in (17). Finally take a = 0. From (13) we have that 6  

7 xj,n cos 2 π − arg ei arccos x − ϕ(ζ ) 0 6  

7 2 sin 2 π − arg ei arccos x − ϕ(ζ ) − 1 − xj,n = cos κj,n . We use the conditions (15) and (16), and obtain the following expression xj,n =

2ζ cos κj,n , ϕ(ζ ) + 2 cos κj,n

j = 1, . . . , n,

n ∈ .

0 Taking into account that ϕ(ζ ) = ζ + ζ 2 + 1 we see that the above expression is reduced to (14) when ζ goes to infinity.

3 Connection with Orthogonal Polynomials Let μ be a positive finite Borel measure with infinitely many points in its support supp(μ). Set denoting  the least interval which contains supp(μ). A collection of monic polynomials qμ,n n∈Z , Z+ = {0, 1, . . .} is the family orthogonal + polynomials with respect to μ if its elements satisfy the following orthogonality relations  (18) 0 = x ν qμ,n (x) dμ(x), ν = 0, 1, . . . , n − 1, n ∈ Z+ . ◦

Each qμ,n has n single roots lying in the interior of (we denote ) such that it vanishes at most once in each interval of \ supp(μ) (see [4, Theorem 5.2] or [7, Chapter 1]). We also know that qμ,n+1 and qμ,n interlace their zeros. In [17] B. Wendroff proved that given two polynomials Pn and Pn+1 , with deg Pn+1 = deg Pn + 1 = n + 1, that interlace zeros, there always exist measures μ such that Pn = qμ,n and Pn+1 = qμ,n+1 . Now we find some of these measures.

Convergent Non Complete Interpolatory Quadrature Rules

We say then a polynomial Pn (x) =

199

n !  x − xj of degree n is admissible with j =1



respect to the measure μ, if its roots are all simple, lying in , with at most one zero into each interval of \ supp(μ). The system of nodes (x1 , . . . , xn ) is also said to be admissible with respect to μ. n n−1 ! !  x − xj and Pn (x) = x − xj be two admissible Lemma 1 Let Pn (x) = j =1

j =1

polynomials with respect to μ that satisfy x1 < x1 < x2 < · · · < xn−1 < xn . Then there exists a positive integrable function ρn with respect to μ (ρn is a weight function for μ) such that for the measure μn which differential form dμn (x) = ρn (x) dμ(x), x ∈ supp(μ), Pn ≡ qμn ,n and Pn ≡ qμn ,n−1 are the n-th and n − 1-th monic orthogonal polynomials with respect to μn , respectively. In the proof we follow techniques used in [10]. Proof Consider $ a set of weight functions such that for every constant α > 0 it satisfies: i) ρ ∈ $ "⇒ αρ ∈ $. ii) (ρ, ρ) ∈ $2 = $ × $ "⇒ αρ + (1 − α)ρ ∈ $, α ≤ 1. iii) If a polynomial Q satisfies Q(x)ρ(x) dμ(x) > 0 for all ρ ∈ $, then Q ≥ 0 in supp(μ). Two examples of sets of weight functions satisfying the above conditions are the positive polynomials and positive simple functions in [14, Definition 1.16]. In general, the positive linear combinations of a Chevyshev system (see [9, Chapter II]) conform a set as $. Examples of Chevyshev systems can be found in [13] (also in [8]). Given ρ ∈ $ we set . vρ =

 Pn (x)ρ(x)dμ(x), . . . ,



/

 Pn (x)ρ(x)dμ(x), . . . ,

x n−2 Pn (x)ρ(x)dμ(x),

x

n−1

Pn (x)ρ(x)dμ(x) ∈ R2n−1 .

  Let us focus on K = vρ : ρ ∈ $ . Proving Lemma 1 reduces to showing that K contains the origin. From condition (i) we have that the origin belongs to K’s closure, K. Since K is open we need to prove the origin is an interior point. We proceed by contradiction. Suppose that the origin belongs to the boundary of K. This is O ∈ ∂K = K \ K. There exists a hyper-plane A that touches tangentially ∂K at O. On the other hand we have that condition (ii) implies that K is convex, then there exists a vector a = a0,n−1 , . . . , an−2,n−1 , a0,n , . . . , an−1,n which is orthogonal

200

U. Fidalgo and J. Olson

with respect to A in the sense of the standard inner vector product (a · u = 0, for all u ∈ A), and for each vρ ∈ K, vρ · a > 0. So the polynomials pn−1 (x) = a0,n−1 + a1,n−1 x + . . . + an−2,n−1 x n−2 and pn (x) = a0,n + a1,n x + . . . + an−1,n x n−1 satisfy that  0
0,

j = 1, . . . , n.

(21)

j =1

Observe that !



(x − ti )

i=1

Pn (x)

=

1 ! i=1

(x − ti )2di −1 q(x)

⎣pn−1 (x)

n  j =1

⎤ λj + pn (x)⎦ . x − xj

Convergent Non Complete Interpolatory Quadrature Rules

201

This means that the above function satisfies

$$\frac{1}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)}\left[\, p_{n-1}(z)\sum_{j=1}^{n}\frac{\lambda_j}{z - x_j} + p_n(z) \right] = O\!\left(\frac{1}{z^{\,n-\ell}}\right) \qquad\text{as } z \to \infty,$$

which is a holomorphic function on $\mathbb{C}\setminus(\{x_1, \ldots, x_n\}\setminus S)$. For each $\nu = 0, \ldots, n-\ell-2$ we then have

$$\frac{z^{\nu}}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)}\left[\, p_{n-1}(z)\sum_{j=1}^{n}\frac{\lambda_j}{z - x_j} + p_n(z) \right] = O\!\left(\frac{1}{z^{2}}\right) \qquad\text{as } z \to \infty,$$

also holomorphic on $\mathbb{C}\setminus(\{x_1, \ldots, x_n\}\setminus S)$. Set the elements $y_j \in \{x_1, \ldots, x_n\}\setminus S$, $j = 1, \ldots, n-\ell$, with $y_1 < y_2 < \cdots < y_{n-\ell}$, and let $\lambda_j$, $j = 1, \ldots, n-\ell$, denote the coefficients λ defined in (21) corresponding to the points $y_j$; also let $\tilde{\lambda}_j$ denote the λ's of $t_j$, $j = 1, \ldots, \ell$. Call F the set of the roots of the polynomial q defined in (19). Consider a closed integration path Γ with winding number 1 for all its interior points, and denote by $\operatorname{Ext}(\Gamma)$ and $\operatorname{Int}(\Gamma)$ the unbounded and bounded connected components, respectively, of the complement of Γ. Take Γ so that $I \subset \operatorname{Int}(\Gamma)$ and $F \subset \operatorname{Ext}(\Gamma)$. From Cauchy's Theorem and the above two conditions, it follows that

$$0 = \frac{1}{2\pi i}\oint_{\Gamma} \frac{z^{\nu}}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)}\left[\, p_{n-1}(z)\sum_{j=1}^{n}\frac{\lambda_j}{z - x_j} + p_n(z) \right] dz$$

$$= \frac{1}{2\pi i}\oint_{\Gamma} \frac{z^{\nu}\, p_{n-1}(z)}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)} \sum_{j=1}^{n-\ell}\frac{\lambda_j}{z - y_j}\, dz \;+\; \frac{1}{2\pi i}\oint_{\Gamma} \frac{z^{\nu}\, p_n(z)}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)}\, dz.$$

Since $\dfrac{z^{\nu}\, p_n(z)}{\prod_{i=1}^{\ell}(z - t_i)^{2d_i-1}\, q(z)} \in H(\operatorname{Int}(\Gamma))$, the second term vanishes. From (20),


using the Cauchy integral formula we obtain

$$0 = \sum_{j=1}^{n-\ell} y_j^{\nu}\, p(y_j)\, \frac{\lambda_j}{\prod_{i=1}^{\ell}(y_j - t_i)^{2(d_i-1)}\, q(y_j)}, \qquad \nu = 0, \ldots, n-\ell-1.$$

Taking into account that, for each $j = 1, \ldots, n-\ell$,

$$\frac{\lambda_j}{\prod_{i=1}^{\ell}(y_j - t_i)^{2(d_i-1)}\, q(y_j)} > 0,$$

we conclude that the above orthogonality relations imply that p must change sign at least $n-\ell$ times, hence $\deg p \ge n-\ell$. Since $\deg p = \deg p_{n-1} - \ell \le n - \ell - 1$, we arrive at a contradiction, which completes the proof. □

Consider a monic polynomial $P_n(x) = \prod_{j=1}^{n}(x - x_j)$ of degree $n \in \mathbb{N}$ which is μ-admissible. We say that a weight function $\rho_n$ on supp(μ) is orthogonal with respect to $P_n$ and μ if $P_n \equiv q_{\mu_n,n}$, where $d\mu_n(x) = \rho_n(x)\,d\mu(x)$, $x \in \operatorname{supp}(\mu)$. We also say that $\rho_n$ is orthogonal with respect to $x_n = (x_1, \ldots, x_n)$ and μ. A sequence of weight functions $\{\rho_n\}_{n\in\mathbb{N}}$ is a family of orthogonal weight functions with respect to the sequence of polynomials $\{P_n\}_{n\in\mathbb{N}}$ if, for each $n \in \mathbb{N}$, $P_n \equiv q_{\mu_n,n}$.

Let $q_{m(n)}$ be an arbitrary polynomial of degree $\deg q_{m(n)} = 2n - m(n) - 1$ that is positive on $[-1,1]$. Let $\mu_n$ denote the measure with differential form $d\mu_n(x) = q_{m(n)}^{-1}(x)\,d\mu(x)$, $x \in \operatorname{supp}(\mu)$. Take a system of nodes $x_n = (x_1, \ldots, x_n)$ such that $P_n(x) = \prod_{j=1}^{n}(x - x_j) = q_{\mu_n,n}$. This means that $x_n$ is the system of n nodes corresponding to the Gaussian quadrature rule for the measure $\mu_n$. Given an arbitrary polynomial $p \in \Pi_{m(n)}$, we have that

$$q_{m(n)}(x)\,p(x) - \sum_{j=1}^{n} q_{m(n)}(x_j)\,p(x_j)\,L_{j,n}(x) = q_{\mu_n,n}(x)\,P_{n-1}(x), \tag{22}$$

where $L_{j,n}(x) := \prod_{k=1,\,k\ne j}^{n} \dfrac{x - x_k}{x_j - x_k}$, $j = 1, \ldots, n$, and $P_{n-1}$ is a certain polynomial with $\deg P_{n-1} = n - (m(n) - \deg p) - 1 \le n - 1$.

Observe that $\int p(x)\,d\mu(x) = \int q_{m(n)}(x)\,p(x)\,d\mu_n(x)$. Hence from (22) we obtain

$$\int p(x)\,d\mu(x) - \sum_{j=1}^{n} q_{m(n)}(x_j)\,p(x_j)\int L_{j,n}(x)\,d\mu_n(x) = \int q_{\mu_n,n}(x)\,P_{n-1}(x)\,d\mu_n(x),$$

which vanishes because $q_{\mu_n,n}$ satisfies the orthogonality relations for $\mu_n$ as in (18). We conclude that

$$\int p(x)\,d\mu(x) = \sum_{j=1}^{n} p(x_{j,n})\,q_{m(n)}(x_{j,n})\int L_{j,n}(x)\,d\mu_n(x) = \sum_{j=1}^{n} w_{j,n}\,p(x_{j,n}).$$

This is an interpolatory integration rule with degree of exactness m(n), whose weights can be defined via

$$w_{j,n} = q_{m(n)}(x_j)\int L_{j,n}(x)\,d\mu_n(x) = q_{m(n)}(x_j)\,\widetilde{w}_{j,n}, \qquad j = 1, \ldots, n. \tag{23}$$
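Gaussian weights are positive and the underlying Gaussian rule is exact up to degree $2n-1$, which is what makes (23) produce positive $w_{j,n}$ when $q_{m(n)} > 0$. As an illustration (ours, not from the paper), the sketch below checks both facts for the Gauss-Legendre rule, i.e. the case $d\mu_n = dx$ on $[-1,1]$, using NumPy:

```python
import numpy as np

n = 8
# Gauss-Legendre nodes and weights: the Gaussian rule for d(mu) = dx on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(n)

# All Gaussian weights are positive
assert np.all(weights > 0)

# The rule integrates every monomial x^k, k = 0, ..., 2n - 1, exactly
for k in range(2 * n):
    exact = 0.0 if k % 2 else 2.0 / (k + 1)   # integral of x^k over [-1, 1]
    assert abs(np.dot(weights, nodes ** k) - exact) < 1e-12
```

Multiplying these weights by a positive polynomial evaluated at the nodes, as in (23), clearly preserves their sign.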

The numbers $\widetilde{w}_{j,n}$, $j = 1, \ldots, n$, are the weights of a Gaussian quadrature rule, and hence all positive. Since $q_{m(n)}$ is also positive, the weights satisfy $w_{j,n} > 0$. According to Pólya's condition, a sequence of these integration rules is convergent.

Let us consider $x = \{x_n = (x_{1,n}, \ldots, x_{n,n})\}_{n\in\mathbb{N}}$ an admissible scheme of nodes for a measure μ, and take a corresponding family of orthogonal weights $\{\rho_n\}_{n\in\mathbb{N}}$. For each n, $\mu_n$ denotes the measure with differential form $d\mu_n(x) = \rho_n(x)\,d\mu(x)$, and we introduce its family of orthonormal polynomials $\{p_{\mu_n,j}\}_{j\in\mathbb{Z}_+}$. This means that $p_{\mu_n,j} \equiv q_{\mu_n,j}/\|q_{\mu_n,j}\|_{2,\mu_n}$, $j \in \mathbb{Z}_+$, where $\|\cdot\|_{2,\mu_n}$ denotes the $L^2$ norm corresponding to the measure $\mu_n$. Given a function $f \in L^2(\mu_n)$ and $j \in \mathbb{Z}_+$, we consider the j-th partial sum of the Fourier series of $f/\rho_n$ in the basis $\{p_{\mu_n,j}\}_{j\in\mathbb{Z}_+}$:

$$S_{f,\mu_n,j} = \sum_{k=0}^{j-1} f_k\, p_{\mu_n,k}(x), \qquad f_k = \int f(x)\,p_{\mu_n,k}(x)\,d\mu_n(x), \quad k = 0, \ldots, j-1.$$

Using the Christoffel-Darboux identity (see [16, Theorem 3.2.2]) we can deduce

$$S_{f,\mu_n,j}(x) = \int \frac{q_{\mu_n,j}(x)\,q_{\mu_n,j-1}(t) - q_{\mu_n,j}(t)\,q_{\mu_n,j-1}(x)}{\|q_{\mu_n,j-1}\|_{2,\mu_n}^{2}\,(x - t)}\; f(t)\,d\mu_n(t). \tag{24}$$

The following result is an extension of [16, Theorem 15.2.4 (equality 15.2.7)].
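Before stating it, note that the Christoffel-Darboux identity invoked for (24) is easy to check numerically. The sketch below (ours, not from the paper) verifies it for the orthonormal Legendre polynomials $p_k = \sqrt{(2k+1)/2}\,P_k$ on $[-1,1]$, where the constant in the identity is the ratio of consecutive leading coefficients:

```python
import numpy as np
from math import comb, sqrt

def p(k, x):
    """Orthonormal Legendre polynomial p_k(x) on [-1, 1]."""
    return sqrt((2 * k + 1) / 2) * np.polynomial.legendre.Legendre.basis(k)(x)

def lead(k):
    """Leading coefficient of p_k."""
    return sqrt((2 * k + 1) / 2) * comb(2 * k, k) / 2 ** k

# Christoffel-Darboux: sum_{k < j} p_k(x) p_k(t)
#   = (lead(j-1)/lead(j)) * (p_j(x) p_{j-1}(t) - p_j(t) p_{j-1}(x)) / (x - t)
j, x, t = 7, 0.3, -0.6
lhs = sum(p(k, x) * p(k, t) for k in range(j))
rhs = lead(j - 1) / lead(j) * (p(j, x) * p(j - 1, t) - p(j, t) * p(j - 1, x)) / (x - t)
assert abs(lhs - rhs) < 1e-10
```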

Lemma 2 Let $(x_1, \ldots, x_n)$ be a μ-admissible system of nodes. Given a polynomial $q_{m(n)}$, take the system of weights $(w_{1,n}, \ldots, w_{n,n})$ whose elements $w_{j,n}$, $j = 1, \ldots, n$, are constructed using (23). Then there always exists a weight $\rho_n$ such that

$$\frac{w_{j,n}}{q_{m(n)}(x_j)} = \frac{\|q_{\tau_n,n-1}\|_{2,\tau_n}^{2}\, S_{1/\rho_n,\tau_n,n}(x_j)}{q_{\tau_n,n-1}(x_j)\,q_{\tau_n,n}'(x_j)} = -\,\frac{\|q_{\tau_n,n}\|_{2,\tau_n}^{2}\, S_{1/\rho_n,\tau_n,n+1}(x_j)}{q_{\tau_n,n+1}(x_j)\,q_{\tau_n,n}'(x_j)}, \tag{25}$$

where the measure $\tau_n$ is such that $\dfrac{d\tau_n}{d\mu} = \dfrac{\rho_n}{q_{m(n)}}$. Thus

$$\operatorname{sign} w_{j,n} = \operatorname{sign} S_{1/\rho_n,\tau_n,n}(x_j) = \operatorname{sign} S_{1/\rho_n,\tau_n,n+1}(x_j), \qquad j = 1, \ldots, n.$$

Proof Take an orthogonal weight $\rho_n$ with respect to the system of n nodes $(x_1, \ldots, x_n)$ and the measure with differential form $d\mu(x)/q_{m(n)}(x)$. According to (23), and taking into account that $P_n \equiv q_{\tau_n,n}$, where the measure $\tau_n$ has differential form $d\tau_n(x) = \rho_n(x)\,d\mu(x)/q_{m(n)}(x)$, we have

$$w_{j,n} = q_{m(n)}(x_j)\int \frac{q_{\tau_n,n}(x)}{q_{\tau_n,n}'(x_j)\,(x - x_j)}\;\frac{d\mu(x)}{q_{m(n)}(x)}, \qquad j = 1, \ldots, n.$$

Rearranging the above formula and using the identity (24), we obtain

$$w_{j,n} = -\,\frac{q_{m(n)}(x_j)\,\|q_{\tau_n,n}\|_{2,\tau_n}^{2}}{q_{\tau_n,n+1}(x_j)\,q_{\tau_n,n}'(x_j)} \int \frac{q_{\tau_n,n+1}(x_j)\,q_{\tau_n,n}(x)}{\|q_{\tau_n,n}\|_{2,\tau_n}^{2}\,(x - x_j)}\;\frac{1}{\rho_n(x)}\;\frac{\rho_n(x)\,d\mu(x)}{q_{m(n)}(x)} = -\,\frac{q_{m(n)}(x_j)\,\|q_{\tau_n,n}\|_{2,\tau_n}^{2}}{q_{\tau_n,n+1}(x_j)\,q_{\tau_n,n}'(x_j)}\; S_{1/\rho_n,\tau_n,n+1}(x_j),$$

which proves the second identity in (25). Since $q_{\tau_n,n+1}(x_j)\,q_{\tau_n,n}'(x_j) < 0$, $j = 1, \ldots, n$, it follows that $\operatorname{sign} w_{j,n} = \operatorname{sign} S_{1/\rho_n,\tau_n,n+1}(x_j)$. Following the same steps we can prove the first equality in (25) and $\operatorname{sign} w_{j,n} = \operatorname{sign} S_{1/\rho_n,\tau_n,n}(x_j)$. □

The following two results are consequences of Lemma 2.

Lemma 3 An admissible scheme of nodes $x = \{x_n = (x_{1,n}, \ldots, x_{n,n})\}_{n\in\mathbb{N}}$ is convergent if there exists a family of orthogonal weights $\{\rho_n\}_{n\in\mathbb{N}}$ with respect to x and the sequence of measures $\{d\tau_n(x) = d\mu(x)/q_{m(n)}(x)\}_{n\in\mathbb{N}}$ satisfying

$$\lim_{n\to\infty} \big\| 1 - \rho_n(x)\,S_{1/\rho_n,\tau_n,n}(x) \big\|_{[-1,1],\infty} = 0, \tag{26}$$

where $\|\cdot\|_{[-1,1],\infty}$ denotes the supremum norm on $[-1,1]$.


Proof Assuming the equality (26), there exists a number N > 0 such that for every n ≥ N the function $S_{1/\rho_n,\tau_n,n}(x) > 0$ on $[-1,1]$, in particular at the nodes. According to Lemma 2, the coefficients $w_{j,n}$, $j = 1, \ldots, n$, are then also positive. This completes the proof. □

Lemma 4 Consider the varying measures $d\mu_n(x) = d\mu(x)/q_{m(n)}(x)$ and their orthogonal polynomials $q_{\mu_n,n}(x) = \prod_{j=1}^{n}(x - x_{j,n})$ and $q_{\mu_n,n-1}(x) = \prod_{j=1}^{n-1}(x - x_{j,n-1})$, $n \in \mathbb{N}$. Let $y = \{y_n = (y_{1,n}, \ldots, y_{n,n})\}_{n\in\mathbb{N}}$ be a scheme of nodes such that, for each $n \in \mathbb{N}$,

$$-1 < y_{1,n} < x_{1,n-1} < y_{2,n} < \cdots < x_{n-1,n-1} < y_{n,n} < 1. \tag{27}$$

Assume that the polynomials $P_n(x) = \prod_{j=1}^{n}(x - y_{j,n})$, $n \in \mathbb{N}$, satisfy

$$\lim_{n\to\infty} \frac{q_{m(n)}^{-1}(t)}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, \big[(q_{\mu_n,n} - P_n)\,q_{\mu_n,n-1}\big]'(x) = 0, \qquad (x,t) \in [-1,1]^2. \tag{28}$$

Then y is convergent.

Proof From Lemma 1 we obtain a weight function $\rho_n$ such that the polynomials $q_{\tau_n,n-1}$ and $P_n$ belong to the family of orthogonal polynomials corresponding to the measure $\rho_n(x)\,d\mu(x)/q_{m(n)}(x)$. Let us analyze the function

$$\left\| 1 - \frac{\|P_n\|_{2,\rho_n d\mu/q_{m(n)}}^{2}}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, S_{1/\rho_n,\,\rho_n d\mu/q_{m(n)},\,n}(x) \right\| = \left\| S_{1,\mu_n,n}(x) - \frac{\|P_n\|_{2,\rho_n d\mu/q_{m(n)}}^{2}}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, S_{1/\rho_n,\,\rho_n d\mu/q_{m(n)},\,n}(x) \right\|.$$

We have used that $S_{1,\mu_n,n} \equiv 1$; hence we need to show that

$$\lim_{n\to\infty} \left\| S_{1,\mu_n,n} - \frac{\|P_n\|_{2,\rho_n d\mu/q_{m(n)}}^{2}}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, S_{1/\rho_n,\,\rho_n d\mu/q_{m(n)},\,n}(x) \right\| = 0.$$

Applying (24) we observe that

$$S_{1,\mu_n,n}(x) - \frac{\|P_n\|_{2,\rho_n d\mu/q_{m(n)}}^{2}}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, S_{1/\rho_n,\,\rho_n d\mu/q_{m(n)},\,n}(x) = \frac{1}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}} \int\left[ \frac{q_{\mu_n,n}(x)\,q_{\mu_n,n-1}(t) - q_{\mu_n,n}(t)\,q_{\mu_n,n-1}(x)}{x - t} - \frac{P_n(x)\,q_{\mu_n,n-1}(t) - P_n(t)\,q_{\mu_n,n-1}(x)}{x - t} \right] \frac{d\mu(t)}{q_{m(n)}(t)}.$$

Let us consider the kernel

$$K(x,t) = \frac{(q_{\mu_n,n} - P_n)(x)\,q_{\mu_n,n-1}(t) - (q_{\mu_n,n} - P_n)(t)\,q_{\mu_n,n-1}(x)}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}\, q_{m(n)}(t)\,(x - t)}.$$

From Taylor's Theorem we obtain that

$$K(x,t) = \frac{q_{m(n)}^{-1}(t)}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\, \big[(q_{\mu_n,n} - P_n)\,q_{\mu_n,n-1}\big]'(s)$$

for some s between x and t, so the assumption (28) completes the proof. □

4 Asymptotic Analysis

Let us consider the varying measures $\mu_n$ with $d\mu_n(x)/dx = \big(q_{m(n)}(x)\sqrt{1-x^2}\big)^{-1}$, where

$$q_{m(n)}(x) = q_\kappa(x)^{\frac{2(1-a)n}{\kappa}}, \qquad q_\kappa(x) = \prod_{k=1}^{\kappa}(x - \zeta_k), \qquad n \in \mathbb{N}.$$

Let σ be the zero counting measure of $q_\kappa$; that is, $\sigma = \frac{1}{\kappa}\sum_{k=1}^{\kappa}\delta_{\zeta_k}$. Set the analytic logarithmic potential corresponding to the measure σ:

$$g(z,\sigma) = -\int \log(z - \zeta)\, d\sigma(\zeta). \tag{29}$$

We take the logarithmic branch such that $g(z,\sigma)$ is analytic on a domain $D \subset \mathbb{C}$ that contains the interval $[-1,1]$, and also, for every $x \in [-1,1]$,

$$V^{\sigma}(x) = \int \log\frac{1}{|x - \zeta|}\, d\sigma(\zeta) = g(x,\sigma) = -\int \log(x - \zeta)\, d\sigma(\zeta). \tag{30}$$

Since σ is symmetric, $\int \arg(x - \zeta)\, d\sigma(\zeta) = 0$. On each compact $K \subset D$ we have that

$$\frac{1}{2n}\log\frac{1}{q_{m(n)}(z)} = (1-a)\,g(z,\sigma) \quad\text{on } K.$$

Lemma 5 Let $d\mu_n(x)/dx = \big(q_{m(n)}(x)\sqrt{1-x^2}\big)^{-1}$, $n \in \mathbb{N}$, be a sequence of measures as above. Then

$$q_{\mu_n,n} = \big(1 + O(e^{-cn})\big)\exp\{-nV^{\nu}\}\,K_{1,n} + O(e^{-cn})\exp\{-nV^{\nu}\}\,K_{2,n} \tag{31}$$

and

$$\frac{d_{n,n-1}}{2^{2na}}\, q_{\mu_n,n-1} = \big(1 + O(e^{-cn})\big)\exp\{-nV^{\nu}\}\,K_{2,n} + O(e^{-cn})\exp\{-nV^{\nu}\}\,K_{1,n}, \tag{32}$$

where $d_{n,n-1} = -2\pi i\,\|q_{\mu_n,n-1}\|_{2,\mu_n}^{-2}$,

$$K_{1,n}(x) = 2\cos\left( n\left[(1-a)\pi\int_x^1 d\widehat{\sigma}(t) - a\arccos x\right] \right), \tag{33}$$

and

$$K_{2,n}(x) = \frac{1}{i}\cos\left( n\left[(1-a)\pi\int_x^1 d\widehat{\sigma}(t) - (a - 1/n)\arccos x\right] \right), \tag{34}$$

where $\widehat{\sigma}$ denotes the balayage of σ onto $[-1,1]$ (see (35) below).

Proof We study a matrix Riemann-Hilbert problem as in [12, Theorem 2.4], whose solution Y is a $2\times2$ matrix function satisfying the following conditions:

1. $Y \in H(\mathbb{C}\setminus[-1,1])$ (all the entries of Y are analytic on $\mathbb{C}\setminus[-1,1]$);
2. $Y_+(x) = Y_-(x)\begin{pmatrix} 1 & \big(q_{m(n)}(x)\sqrt{1-x^2}\big)^{-1} \\ 0 & 1 \end{pmatrix}$, $x \in (-1,1)$;
3. $Y(z)\begin{pmatrix} z^{-n} & 0 \\ 0 & z^{n} \end{pmatrix} = I + O(1/z)$ as $z \to \infty$, where I is the $2\times2$ identity matrix;
4. $Y(z) = O\begin{pmatrix} 1 & |z\pm1|^{-1/2} \\ 1 & |z\pm1|^{-1/2} \end{pmatrix}$ as $z \to \mp1$.

According to [12, Theorem 2.4] (see also [11]), the solution Y of the above matrix Riemann-Hilbert problem (Y-RHP for short) is unique and has the form

$$Y(z) = \begin{pmatrix} q_{\mu_n,n}(z) & -\dfrac{1}{2\pi i}\displaystyle\int \dfrac{q_{\mu_n,n}(x)}{z - x}\, d\mu_n(x) \\[3mm] d_{n,n-1}\, q_{\mu_n,n-1}(z) & -\dfrac{d_{n,n-1}}{2\pi i}\displaystyle\int \dfrac{q_{\mu_n,n-1}(x)}{z - x}\, d\mu_n(x) \end{pmatrix}.$$

The key of our procedure follows the ideas introduced in [1]. We find a relationship between Y and the matrix solution $R : \mathbb{C}\setminus\gamma \to \mathbb{C}^{2\times2}$ of another Riemann-Hilbert problem (R-RHP) for a closed Jordan curve γ, positively oriented, surrounding the interval $[-1,1]$:

1. $R \in H(\mathbb{C}\setminus\gamma)$;
2. $R_+(\zeta) = R_-(\zeta)\,V_n(\zeta)$, $\zeta \in \gamma$, with $V_n \in H(D)$;
3. $R(z) \to I$ as $z \to \infty$,

where $V_n = I + O(\varepsilon^n)$ with $0 \le \varepsilon < 1$, uniformly on compact subsets as $n \to \infty$. These conditions imply that $R = I + O(\varepsilon^n)$ uniformly on $\mathbb{C}$ as $n \to \infty$. There is a chain of transformations $Y \to T \to S \to R$; once we have arrived at R, we recover the entries of Y going back from R to Y.

From [3, Corollary 4], the zero counting measures $\nu_n$ defined in (8), corresponding to the monic orthogonal polynomials $q_{\mu_n,n}(z) = \prod_{j=1}^{n}(z - x_{j,n})$ with respect to the varying measures $\mu_n$, satisfy

$$\nu_n \;\xrightarrow{\,*\,}\; \nu = (1-a)\,\widehat{\sigma} + a\,\lambda_0 \qquad\text{as } n \to \infty, \tag{35}$$

where $\widehat{\sigma}$ denotes the balayage of the measure σ out of $\mathbb{C}\setminus[-1,1]$ onto $[-1,1]$. The measure ν is the so-called equilibrium measure under the influence of the external field $(1-a)V^{\sigma}(z)$ (see [15, Theorem I.1.3]). From (35) we have the following equilibrium condition:

$$V^{\nu}(t) - (1-a)V^{\sigma}(t) = a\,V^{\lambda_0}(t) = a\log 2, \qquad t \in [-1,1]. \tag{36}$$

Observe the conditions (3) in both Riemann-Hilbert problems: Y requires a normalization at infinity to reach R's behavior at infinity. We modify Y to obtain a Riemann-Hilbert problem whose solution is defined on the same set as Y and approaches I as $n \to \infty$. Let us introduce the function $g(z,\nu)$, the analytic potential corresponding to the measure ν described in (35):

$$g(z,\nu) = -\int \log(z - t)\, d\nu(t) = V^{\nu}(z) - i\int \arg(z - t)\, d\nu(t), \tag{37}$$

with arg denoting the principal argument; $g(z,\nu) \in H(\mathbb{C}\setminus(-\infty,1])$. Substituting $g(z,\nu)$ into (36) we obtain

$$g_+(x,\nu) + g_-(x,\nu) - 2a\log 2 - 2(1-a)\,g(x,\sigma) = 0, \qquad x \in [-1,1], \tag{38}$$

and

$$g_-(x,\nu) - g_+(x,\nu) = \begin{cases} 0 & \text{if } x \ge 1, \\ 2\pi i & \text{if } x \le -1, \\ 2\wp(x) & \text{if } x \in (-1,1), \end{cases} \tag{39}$$

with

$$\wp(x) = \pi i \int_x^1 d\nu(t) = \pi i\left[(1-a)\int_x^1 d\widehat{\sigma}(t) - \frac{a}{\pi}\arccos(x)\right]. \tag{40}$$

Consider the matrices

$$G(z) = \begin{pmatrix} e^{n g(z,\nu)} & 0 \\ 0 & e^{-n g(z,\nu)} \end{pmatrix} \qquad\text{and}\qquad L = \begin{pmatrix} 2^{na} & 0 \\ 0 & 2^{-na} \end{pmatrix}.$$

We define the matrix function $T = L\,Y\,G\,L^{-1}$. Then T is the unique solution of the following Riemann-Hilbert problem (T-RHP):

1. $T \in H(\mathbb{C}\setminus[-1,1])$;
2. $T_+(x) = T_-(x)\,M(x)$, $x \in (-1,1)$;
3. $T(z) = I + O(1/z)$ as $z \to \infty$;
4. $T(z) = O\begin{pmatrix} 1 & |z\pm1|^{-1/2} \\ 1 & |z\pm1|^{-1/2} \end{pmatrix}$ as $z \to \mp1$,

where, according to (38) and (39), the jump matrix is

$$M(x) = \begin{pmatrix} e^{-2n\wp(x)} & (1-x^2)^{-1/2} \\ 0 & e^{2n\wp(x)} \end{pmatrix}, \qquad x \in (-1,1).$$

According to [6, Theorem 1.34] there exists a domain D containing the interval $[-1,1]$ where the function ℘ in (40) admits an analytic extension to $D\setminus(-\infty,1]$ given by

$$A(z) = \pi i \int_z^1 d\nu(\zeta) = \pi i \int_z^1 \nu'(\zeta)\,d\zeta, \tag{41}$$

where $\nu'(\zeta) = \dfrac{\psi(\zeta)}{\sqrt{1-\zeta^2}}$, with $\psi \in H(D)$ and $\psi(x) > 0$ for $x \in (-1,1)$. Observe that $A_+(x) = \wp(x) = -A_-(x)$, so we may write

$$M(x) = \begin{pmatrix} e^{-2nA_+(x)} & (1-x^2)^{-1/2} \\ 0 & e^{-2nA_-(x)} \end{pmatrix}.$$

We now seek jump conditions as in the R-RHP. Consider a closed Jordan curve $\gamma \subset D$ surrounding $[-1,1]$ as in the R-RHP, and let Ω denote the bounded connected component of $\mathbb{C}\setminus\gamma$. We consider the function $\sqrt{z^2-1} \in H(\mathbb{C}\setminus[-1,1])$ with $\big(\sqrt{x^2-1}\big)_\pm = \pm i\sqrt{1-x^2}$, $x \in (-1,1)$, and introduce the matrix function S as follows:

$$S(z) = \begin{cases} T(z) & \text{when } z \in \mathbb{C}\setminus(\gamma\cup\Omega), \\[1mm] T(z)\begin{pmatrix} 1 & 0 \\ -i\sqrt{z^2-1}\;e^{-2nA(z)} & 1 \end{pmatrix} & \text{when } z \in \Omega. \end{cases}$$

The matrix function S is the solution of the following Riemann-Hilbert problem (S-RHP):

1. $S \in H(\mathbb{C}\setminus(\gamma\cup[-1,1]))$;
2. $S_+(x) = S_-(x)\begin{pmatrix} 0 & (1-x^2)^{-1/2} \\ -(1-x^2)^{1/2} & 0 \end{pmatrix}$ when $x \in (-1,1)$, and $S_+(\zeta) = S_-(\zeta)\begin{pmatrix} 1 & 0 \\ -i\sqrt{\zeta^2-1}\;e^{-2nA(\zeta)} & 1 \end{pmatrix}$ when $\zeta \in \gamma$;
3. $S(z) = I + O(1/z)$ as $z \to \infty$;
4. $S(z) = O\begin{pmatrix} 1 & |z\pm1|^{-1/2} \\ 1 & |z\pm1|^{-1/2} \end{pmatrix}$ as $z \to \mp1$.

The jump matrix on γ approaches the identity matrix I uniformly; however, this does not happen on $[-1,1]$. We fix this problem on the interval following the steps in [11]. Consider the matrix

$$N(z) = \begin{pmatrix} \dfrac{a(z)+a^{-1}(z)}{2}\,\dfrac{D(\infty)}{D(z)} & \dfrac{a(z)-a^{-1}(z)}{2i}\,D(\infty)D(z) \\[2mm] -\,\dfrac{a(z)-a^{-1}(z)}{2i}\,\dfrac{1}{D(\infty)D(z)} & \dfrac{a(z)+a^{-1}(z)}{2}\,\dfrac{D(z)}{D(\infty)} \end{pmatrix}, \tag{42}$$

where $a(z) = \dfrac{(z-1)^{1/4}}{(z+1)^{1/4}}$ and D is the corresponding Szegő function, with value $D(\infty)$ at infinity. Hence N is the solution of the following Riemann-Hilbert problem:

1. $N \in H(\mathbb{C}\setminus[-1,1])$;
2. $N_+(x) = N_-(x)\begin{pmatrix} 0 & (1-x^2)^{-1/2} \\ -(1-x^2)^{1/2} & 0 \end{pmatrix}$, $x \in (-1,1)$;
3. $N(z) = I + O(1/z)$ as $z \to \infty$;
4. $N(z) = O\begin{pmatrix} 1 & |z\pm1|^{-1/2} \\ 1 & |z\pm1|^{-1/2} \end{pmatrix}$ as $z \to \mp1$.

Introduce the matrix function $R(z) = S(z)\,N^{-1}(z)$. Taking into account that R and S satisfy the same jump conditions across $(-1,1)$, we have $R_+(x) = R_-(x)$, so $R \in H(\mathbb{C}\setminus(\gamma\cup\{-1,1\}))$. Since $\det N = 1$, from (42) we have

$$N^{-1}(z) = O\begin{pmatrix} |z\pm1|^{-1/2} & |z\pm1|^{-1/2} \\ 1 & 1 \end{pmatrix} \qquad\text{as } z \to \mp1.$$

Thus, when $z \to \mp1$,

$$R(z) = O\begin{pmatrix} 1 & |z\pm1|^{-1/2} \\ 1 & |z\pm1|^{-1/2} \end{pmatrix} O\begin{pmatrix} |z\pm1|^{-1/2} & |z\pm1|^{-1/2} \\ 1 & 1 \end{pmatrix},$$

which implies

$$R(z) = O\begin{pmatrix} |z\pm1|^{-1/2} & |z\pm1|^{-1/2} \\ |z\pm1|^{-1/2} & |z\pm1|^{-1/2} \end{pmatrix}.$$

This means that each entry of R has isolated singularities at $z = -1$ and $z = 1$ with $R(z) = O(|z\pm1|^{-1/2})$ as $z \to \mp1$, and these singularities are removable. So R satisfies the following Riemann-Hilbert conditions:

1. $R \in H(\mathbb{C}\setminus\gamma)$;
2. $R_+(\zeta) = R_-(\zeta)\begin{pmatrix} 1 & 0 \\ e^{-2nA(\zeta)} & 1 \end{pmatrix}$ when $\zeta \in \gamma$;
3. $R(z) = I + O(1/z)$ as $z \to \infty$.

From (41), $2A \in H(D\setminus(-\infty,1])$ and $\operatorname{Re}(2A_\pm(x)) = 0$, $x \in [-1,1]$. Using the fact that $(2A_\pm)'(x) = \mp 2\pi i\,\dfrac{\psi(x)}{\sqrt{1-x^2}}$, $x \in (-1,1)$, and the Cauchy-Riemann conditions, we have that $\dfrac{\partial \operatorname{Re}(2A_\pm)}{\partial y}(x) > 0$, $x \in [-1,1]$. Since $\operatorname{Re}(2A)$ is a harmonic function on $D\setminus[-1,1]$, we have $\operatorname{Re}(2A(z)) > 0$ for $z \in D\setminus[-1,1]$. So, given an arbitrary compact set $K \subset D\setminus[-1,1]$, there exists a constant $c(K) > 0$ such that $\operatorname{Re}(2A(z)) > c(K)$ for $z \in K$. So, according to [1], we arrive at $R(z) = I + O(e^{-cn})$ uniformly as $n \to \infty$ on each compact set $K \subset \mathbb{C}\setminus[-1,1]$.

Take $z \in \operatorname{Int}(\gamma)$. Going back now from R to Y, and considering just the first column, we have

$$e^{n g(z,\nu)}\begin{pmatrix} q_{\mu_n,n}(z) \\ 2^{-2na}\, d_{n,n-1}\, q_{\mu_n,n-1}(z) \end{pmatrix} = \big(I + O(e^{-cn})\big)\, N(z) \begin{pmatrix} 1 \\ (1-z^2)^{1/2}\, e^{-2nA(z)} \end{pmatrix},$$

with N(z) the matrix in (42). Take the + boundary values of all quantities involved as $z \to x \in (-1,1)$, and use the following identities from [11] or [12]:

$$\frac{a_+(x) \pm a_+^{-1}(x)}{2} = \frac{1}{\sqrt{2}\,(1-x^2)^{1/4}}\exp\left(\pm\frac{i}{2}\arccos x \mp \frac{\pi i}{4}\right).$$

Then we have

$$\exp\{nV^{\nu}(x)\}\begin{pmatrix} q_{\mu_n,n}(x) \\ 2^{-2na}\, d_{n,n-1}\, q_{\mu_n,n-1}(x) \end{pmatrix} = \big(I + O(e^{-cn})\big)\begin{pmatrix} K_{1,n}(x) \\ K_{2,n}(x) \end{pmatrix},$$

with $K_{1,n}$ and $K_{2,n}$ as given in (33) and (34). Finally we obtain

$$q_{\mu_n,n}(x) = \big(1 + O(e^{-cn})\big)\, e^{-nV^{\nu}(x)}\, K_{1,n}(x) + O(e^{-cn})\, e^{-nV^{\nu}(x)}\, K_{2,n}(x)$$

and

$$\frac{d_{n,n-1}}{2^{2na}}\, q_{\mu_n,n-1}(x) = \big(1 + O(e^{-cn})\big)\, e^{-nV^{\nu}(x)}\, K_{2,n}(x) + O(e^{-cn})\, e^{-nV^{\nu}(x)}\, K_{1,n}(x),$$

which are exactly the equalities stated in (31) and (32). □

5 Proof of Theorem 1

We combine Lemma 5 and Lemma 4. First we choose a special scheme of nodes $y = \{y_n = (y_{1,n}, \ldots, y_{n,n})\}_{n\in\mathbb{N}}$ which satisfies (10). The corresponding polynomials have the form

$$P_n(x) = \prod_{j=1}^{n}(x - y_{j,n}) = \Lambda_n(x)\cos\left( n\left[(1-a)\pi\int_x^1 d\widehat{\sigma}(t) - a\arccos x\right] \right), \tag{43}$$

where $\Lambda_n$ is a real-valued function on $[-1,1]$ that never vanishes. Let us rewrite the relation (31) as

$$q_{\mu_n,n}(x) = 2\exp\{-nV^{\nu}(x)\}\left[\cos\left( n\left[(1-a)\pi\int_x^1 d\widehat{\sigma}(t) - a\arccos x\right] \right) + O(e^{-cn})\right].$$

Combining the above equality with (43) we obtain $\int_{x_{j,n}}^{y_{j,n}} d\nu(t) = O(e^{-cn})$ and $x_{j,n} - y_{j,n} = O(e^{-cn})$. This implies that

$$\limsup_{n\to\infty}\, |q_{\mu_n,n} - P_n|^{1/n}(x) = \exp\big({-c} - V^{\nu}(x)\big) \quad\text{on } [-1,1].$$

Hence

$$\limsup_{n\to\infty}\left( \frac{1}{\|q_{\mu_n,n-1}\|_{2,\mu_n}^{2}}\left[\frac{(q_{\mu_n,n} - P_n)\,q_{\mu_n,n-1}}{q_{m(n)}}\right](x) \right)^{1/n} = \exp\big({-c} + 2(1-a)V^{\sigma}(x) + 2(1-a)V^{\sigma}(t)\big).$$

Here we have taken into account that $\operatorname{dist}(\{\zeta_1, \ldots, \zeta_\kappa\}, [-1,1]) > 1$, which yields $V^{\sigma}(x) < 0$, $x \in [-1,1]$. Then we see that condition (28) in Lemma 4 is satisfied.

We now prove that condition (27) holds. Taking into account the equality (32), the zeros of the polynomials $q_{\mu_n,n-1}$ satisfy, for each $j = 1, \ldots, n-1$ and $n \in \mathbb{N}$,

$$(1-a)\pi\int_{x_{j,n-1}}^{1} d\widehat{\sigma}(t) - (a - 1/n)\arccos x_{j,n-1} + O(e^{-cn}) = \frac{2j-1}{2n}\,\pi. \tag{44}$$

For each $j = 1, \ldots, n-1$, subtracting the equality (44) from (10), we obtain

$$\int_{y_{j,n}}^{x_{j,n-1}} d\nu(t) = \frac{1}{n}\,(1 + o(1)) \qquad\text{as } n \to \infty.$$

This means that for n large enough $y_{j,n} < x_{j,n-1}$, $j = 1, \ldots, n-1$. Considering now the j-th equality in (10) and the (j+1)-th in (44), we have

$$\int_{x_{j,n-1}}^{y_{j+1,n}} d\nu(t) = \frac{1}{n}\,(\pi - 1 + o(1)) \qquad\text{as } n \to \infty,$$

which implies that $x_{j,n-1} < y_{j+1,n}$. So condition (27) holds. This proves that the scheme y is convergent. Once we know that y is convergent, we can construct another convergent scheme $x = \{x_n = (x_{1,n}, \ldots, x_{n,n})\}_{n\in\mathbb{N}}$ by taking

$$|x_{j,n} - y_{j,n}| \le A e^{-cn}, \qquad j = 1, \ldots, n, \quad n \in \mathbb{N},$$

and following the previous process. This completes the proof. □

Acknowledgments This author's research was supported in part by the research grant MTM2012-36372-C03-01, funded by "Ministerio de Economía y Competitividad", Spain.

References

1. Aptekarev, A.I., Van Assche, W.: Scalar and matrix Riemann-Hilbert approach to the strong asymptotics of Padé approximants and complex orthogonal polynomials with varying weight. J. Approx. Theory 129, 129–166 (2004)
2. Bloom, T., Lubinsky, D.S., Stahl, H.: What distribution of points are possible for convergent sequences of interpolatory integration rules? Constr. Approx. 9, 41–58 (1993)
3. Bloom, T., Lubinsky, D.S., Stahl, H.: Interpolatory integration rules and orthogonal polynomials with varying weights. Numer. Algorithms 3, 55–66 (1992)
4. Chihara, T.S.: An Introduction to Orthogonal Polynomials. Gordon and Breach, Science Publishers, New York, London, Paris (1978)
5. Davis, P.J., Rabinowitz, P.: Methods of Numerical Integration, 2nd edn. Academic Press, San Diego, CA (1984)
6. Deift, P., Kriecherbauer, T., McLaughlin, K.T.-R.: New results on the equilibrium measures for logarithmic potentials in presence of an external field. J. Approx. Theory 95, 388–475 (1998)
7. Freud, G.: Orthogonal Polynomials. Pergamon Press, London, Toronto, New York (1971)
8. Fidalgo, U., Medina Peralta, S., Mínguez Ceniceros, J.: Mixed type multiple orthogonal polynomials: perfectness and interlacing properties. Linear Algebra Appl. 438, 1229–1239 (2013)
9. Krein, M.G., Nudel'man, A.A.: The Markov Moment Problem and Extremal Problems. Transl. Math. Monogr., vol. 50. Amer. Math. Soc., Providence, RI (1977)
10. Kroó, A., Schmidt, D., Sommer, M.: On some properties of A-spaces and their relation to the Hobby-Rice theorem. J. Approx. Theory 68, 136–141 (1992)
11. Kuijlaars, A.B.J.: Riemann-Hilbert analysis for orthogonal polynomials. In: Koelink, E., Van Assche, W. (eds.) Orthogonal Polynomials and Special Functions. Lect. Notes Math., vol. 1817, pp. 167–210. Springer, Berlin (2003)
12. Kuijlaars, A.B.J., McLaughlin, K.T.-R., Van Assche, W., Vanlessen, M.: The Riemann-Hilbert approach to strong asymptotics for orthogonal polynomials on [−1, 1]. Adv. Math. 188, 337–398 (2004)
13. Nikishin, E.M.: On simultaneous Padé approximants. Mat. Sb. 113, 499–519 (1980) (Russian); English translation in Math. USSR Sb. 41, 409–425 (1982)
14. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1986)
15. Saff, E.B., Totik, V.: Logarithmic Potentials with External Fields. Series of Comprehensive Studies in Mathematics, vol. 316. Springer, New York (1997)
16. Szegő, G.: Orthogonal Polynomials. Amer. Math. Soc. Colloq. Publ., vol. XXIII. Amer. Math. Soc., New York (1939)
17. Wendroff, B.: On orthogonal polynomials. Proc. Amer. Math. Soc. 12, 554–555 (1961)

Asymptotic Computation of Classical Orthogonal Polynomials

Amparo Gil, Javier Segura, and Nico M. Temme

Abstract The classical orthogonal polynomials (Hermite, Laguerre and Jacobi) are involved in a vast number of applications in physics and engineering. When large degrees n are needed, the use of recursion to compute the polynomials is not a good strategy for computation and a more efficient approach, such as the use of asymptotic expansions, is recommended. In this paper, we give an overview of the asymptotic expansions considered in Gil et al. (Comput. Phys. Commun. 210, 124–131 (2017)) for computing Laguerre polynomials $L_n^{(\alpha)}(x)$ for bounded values of the parameter α. Additionally, we show examples of the computational performance of an asymptotic expansion for $L_n^{(\alpha)}(x)$ valid for large values of α and n. This expansion was used in Gil et al. (Stud. Appl. Math. 140(3) (2018)) as starting point for obtaining asymptotic approximations to the zeros. Finally, we analyze the expansions considered in recent publications of the authors for computing the Jacobi polynomials for large degrees, summarizing some of the results from Gil et al. (SIGMA 14(73):9, 2018; SIAM J. Sci. Comput. 41(1):A668–A693, 2019; J. Math. Anal. Appl. 494(2):124642, 2021).

A. Gil Departamento de Matemática Aplicada y CC. de la Computación. ETSI Caminos, Universidad de Cantabria, Santander, Spain e-mail: [email protected] J. Segura () Departamento de Matemáticas, Estadística y Computación, Universidad de Cantabria, Santander, Spain e-mail: [email protected] N. M. Temme IAA, Alkmaar, The Netherlands CWI, Amsterdam, The Netherlands e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_8


Keywords Classical orthogonal polynomials · Asymptotic expansions · Numerical computation

1 Introduction

As is well known, Laguerre $L_n^{(\alpha)}(x)$ and Jacobi $P_n^{(\alpha,\beta)}(x)$ polynomials present monotonic and oscillatory regimes depending on the parameter values. Sharp bounds limiting the oscillatory region for these polynomials are given in [3]. In the particular case of Jacobi polynomials, these bounds are

$$\frac{B - 4(n-1)\sqrt{C}}{A} \;\le\; x_{n,k}(\alpha,\beta) \;\le\; \frac{B + 4(n-1)\sqrt{C}}{A}, \tag{1}$$

where $x_{n,k}(\alpha,\beta)$ are the zeros of the Jacobi polynomials and

$$\begin{aligned} A &= (2n+\alpha+\beta)\big(n(2n+\alpha+\beta) + 2(\alpha+\beta+2)\big),\\ B &= (\beta-\alpha)\big((\alpha+\beta+6)n + 2(\alpha+\beta)\big),\\ C &= n^2(n+\alpha+\beta+1)^2 + (\alpha+1)(\beta+1)\big(n^2 + (\alpha+\beta+4)n + 2(\alpha+\beta)\big). \end{aligned} \tag{2}$$

An example of the oscillatory behaviour of the Jacobi polynomials for two different values of n is given in Fig. 1. On the other hand, the classical orthogonal polynomials satisfy (also well-known) three-term recurrence relations

$$p_{n+1}(x) = (A_n x + B_n)\,p_n(x) - C_n\,p_{n-1}(x), \tag{3}$$

[Figure: two panels plotting the polynomials on $[-1,1]$, with the oscillatory region marked in the first panel.]

Fig. 1 Two plots of Jacobi polynomials ($P_5^{(1/3,1/4)}(x)$ and $P_{500}^{(1/3,1/4)}(x)$) showing the oscillatory behaviour of the polynomials

where, for Laguerre polynomials $L_n^{(\alpha)}(x)$, the coefficients are given by

$$A_n = \frac{-1}{n+1}, \qquad B_n = \frac{2n+\alpha+1}{n+1}, \qquad C_n = \frac{n+\alpha}{n+1}, \tag{4}$$

and for Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$,

$$\begin{aligned} A_n &= \frac{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}{2(n+1)(n+\alpha+\beta+1)},\\ B_n &= \frac{(\alpha^2-\beta^2)(2n+\alpha+\beta+1)}{2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)},\\ C_n &= \frac{(n+\alpha)(n+\beta)(2n+\alpha+\beta+2)}{(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)}. \end{aligned} \tag{5}$$

By using the Perron-Kreuser theorem [15] it is easy to show that the threeterm recurrence relations for the classical orthogonal polynomials are not badly conditioned in the sense that there is no solution which dominates exponentially over the rest. Therefore the recurrence relations can be used in the forward direction, for (α) (α) example, with starting values L0 (x) = 1 and L1 (x) = 1 + α − x for Laguerre (α,β) (α,β) polynomials, or P0 (x) = 1 and P1 (x) = 12 (α − β) + 12 (α + β + 2)x, to compute the Jacobi polynomials when n is small/moderate. However, when large degrees n are needed, the use of recursion to compute the polynomials is not a good strategy for computation and a more efficient approach, such as the use of asymptotic expansions, is recommended. The use of asymptotic expansions in the computation of Hermite polynomials has already been discussed in [7], since Hermite polynomials are a particular case of the parabolic cylinder function U (a, x). In this paper, we consider the asymptotic computation of Laguerre and Jacobi polynomials. Other recent references dealing with the numerical evaluation of these polynomials using asymptotic methods are, for example, [2] and [13]. We first give an overview of the asymptotic expansions considered in [8] for (α) computing Laguerre polynomials Ln (x) for bounded values of the parameter α. Additionally, we show examples of the computational performance of an asymptotic (α) expansion for Ln (x) given in [6] valid for large values of α and n. We also analyze the expansions considered in [9, 10] and [11] to compute the Jacobi polynomials for large degrees n. These expansions have been used as starting point to obtain asymptotic approximations to the nodes and/or weights of the Gauss-Jacobi (G-J) quadrature. In the asymptotic expansions for large degree n described in [11], we assume that α and β should be of order O(1) for large values of n. 
It is interesting to note that these approximations can be used as standalone methods for the noniterative computation (15 − 16 digits accuracy) of the nodes and weights of G-J quadratures of high degree (n ≥ 100). Alternatively, they can be used as accurate starting points of methods for solving nonlinear equations. This is the approach

218

A. Gil et al.

followed, for example, in [12] in which simple asymptotic approximations for the nodes are iteratively refined by Newton’s method.

2 Laguerre Polynomials (α)

The computation of Laguerre polynomials Ln (z) for bounded values of the parameter α was considered in [8]. In that paper, we analyzed the computational performance of three types of asymptotic expansions valid for large values of n: an expansion in terms of Airy functions and two expansions in terms of Bessel functions (one for small values of the variable z and other one in which larger values are allowed). It is possible to obtain asymptotic expansions in terms of elementary functions starting from Bessel expansions, as done in [1]; however, the Bessel expansions appear to have a larger range of validity. We next summarize results given in [8] and [6]; more details about the expansions can be found in [18]. a) An expansion in terms of Airy functions The representation used is L(α) n (νx)

= (−1)

ne

1 2 νx

χ (ζ )  1

2α ν 3

 



4 Ai ν 2/3 ζ A(ζ ) + ν − 3 Ai ν 2/3 ζ B(ζ ) (6)

with expansions A(ζ ) ∼

∞  α2j j =0

ν

, 2j

B(ζ ) ∼

∞  β2j +1 j =0

ν 2j

,

ν → ∞.

(7)

Here ν = 4κ,

κ =n+

1 2 (α

+ 1),

1 2

χ (ζ ) = 2 x

− 14 − 12 α

.

ζ x−1

/1 4

,

(8)

and 

*

√ 3 2 1 2 2 3 (−ζ ) = 2 arccos x − x − x  * √ 3 2 2 1 x 2 − x − arccosh x 3ζ = 2



if 0 < x ≤ 1, if x ≥ 1.

(9)

The expansions in (7) hold uniformly for bounded α and x ∈ (x0 , ∞), where x0 ∈ (0, 1), a fixed number. We observe that x = 1 is a turning point, where the argument of the Airy functions vanishes (see (9)). Left of x = 1, where ζ < 0, (α) the zeros of Ln (νx) occur.

Asymptotic Computation of Classical Orthogonal Polynomials

219

The first few coefficients in the expansions (7) are α0 = 1 and For 0 < x ≤ 1 1  4608u8 v 6 μ3 + 2016v 6 u8 μ2 + 2304u8 v 6 μ4 1152 −672v 6 u4 μ2 − 192v 6 μu6 + 2880v 6 u6 μ2 + 2688v 3 u7 μ2 −288v 6 μu8 + 3072u6 v 6 μ3 − 672v 6 μu4 + 2688v 3 μu7 + 924v 6 u2 + 558v 6 u4 − 180u6 v 6 − 135v 6 u8 + 504u7 v 3 + 336v 3 u5 +280u3 v 3 − 7280u6 + 385v 6 /(v 6 u6 ), 1  β1 = 48v 3 u4 μ2 + 9v 3 u4 + 48v 3 μu4 − 20u3 + 6v 3 u2 + 5v 3 /(u3 v 4 ), 12 √ √ where u = 1/x − 1, v = 2 −ζ and μ = (α − 1)/2. For x > 1 α2 = −

1  4608u8 v 6 μ3 + 2016v 6 u8 μ2 + 1152 2304u8 v 6 μ4 − 672v 6 u4 μ2 + 192v 6 μu6 −2880v 6 u6 μ2 − 336v 3 u5 − 288v 6 μu8 − 3072u6 v 6 μ3 −672v 6 μu4 + 280u3 v 3 − 924v 6 u2 + 558v 6 u4 + 180u6 v 6 −135v 6 u8 + 2688v 3 u7 μ2 + 504u7 v 3 + 2688v 3 μu7 −7280u6 + 385v 6 /(u6 v 6 ), 1  β1 = 48v 3 u4 μ2 + 9v 3 u4 + 48v 3 μu4 − 20u3 − 6v 3 u2 + 5v 3 /(u3 v 4 ), 12 √ √ where u = 1 − 1/x, v = 2 ζ and μ = (α − 1)/2. α2 =

A detailed explanation of the method used to obtain the coefficients of Airy expansions can be found in [18, §23.3]. The expansion in terms of Airy functions was derived in [4].

b) A simple Bessel-type expansion

The starting point to obtain this expansion is the well-known relation between the Laguerre polynomials and the Kummer function ₁F₁(a; c; x):

L_n^{(α)}(z) = binom(n+α, n) ₁F₁(−n; α+1; z).   (10)
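Relation (10) is easy to check numerically: for the first parameter −n the Kummer series terminates, and the right-hand side must reproduce the Laguerre polynomial computed from its three-term recurrence. A minimal sketch (helper names are our own):

```python
import math

def laguerre(n, alpha, x):
    # Forward three-term recurrence:
    # (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}
    p_prev, p = 1.0, 1.0 + alpha - x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1 + alpha - x) * p - (k + alpha) * p_prev) / (k + 1)
    return p

def kummer_terminating(n, c, z):
    # 1F1(-n; c; z) = sum_{k=0}^{n} (-n)_k / ((c)_k k!) z^k  (finite sum)
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (-n + k) * z / ((c + k) * (k + 1))
        total += term
    return total

def laguerre_via_kummer(n, alpha, x):
    binom = math.gamma(n + alpha + 1) / (math.gamma(alpha + 1) * math.factorial(n))
    return binom * kummer_terminating(n, alpha + 1, x)
```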

Then, we use an expansion of ₁F₁(−a; c; z) for large values of a; see [18, §10.3.4]:

(1/Γ(c)) ₁F₁(−a; c; z) ∼ ( Γ(1 + a) e^{z/2} / Γ(a + c) ) (z/a)^{(1−c)/2} × [ J_{c−1}(2√(az)) Σ_{k=0}^∞ a_k(z)/(−a)^k − √(z/a) J_c(2√(az)) Σ_{k=0}^∞ b_k(z)/(−a)^k ].   (11)


A. Gil et al.

This expansion of ₁F₁(−a; c; z) is valid for bounded values of z and c, with a → ∞ inside the sector −π + δ ≤ ph a ≤ π − δ. Using this in (10) with a = n and c = α + 1 (the binomial coefficient in (10) cancels against the gamma functions in (11)), we obtain

L_n^{(α)}(x) ∼ e^{x/2} (x/n)^{−α/2} [ J_α(2√(nx)) Σ_{k=0}^∞ (−1)^k a_k(x)/n^k − √(x/n) J_{α+1}(2√(nx)) Σ_{k=0}^∞ (−1)^k b_k(x)/n^k ]

(12)

as n → ∞. The coefficients a_k(x) and b_k(x) follow from the expansion of the function

f(x, s) = e^{x g(s)} ( s/(1 − e^{−s}) )^{α+1},   g(s) = 1/s − 1/(e^s − 1) − 1/2.   (13)

The function f is analytic in the strip |ℑs| < 2π and it can be expanded for |s| < 2π into

f(x, s) = Σ_{k=0}^∞ c_k(x) s^k.   (14)

The coefficients a_k(x) and b_k(x) are given in terms of the c_k(x) coefficients:

a_k(x) = Σ_{m=0}^{k} binom(k, m) (m + 1 − c)_{k−m} x^m c_{k+m}(x),
b_k(x) = Σ_{m=0}^{k} binom(k, m) (m + 2 − c)_{k−m} x^m c_{k+m+1}(x),   (15)

for k = 0, 1, 2, . . ., with c = α + 1. The coefficients c_k(x) are combinations of Bernoulli numbers and Bernoulli polynomials, the first ones being

c₀(x) = 1,   c₁(x) = (6c − x)/12,
c₂(x) = (1/288)( −12c + 36c² − 12xc + x² ),
c₃(x) = (1/51840)( −5x³ + 90x²c + (−540c² + 180c + 72)x + 1080c²(c − 1) ),   (16)


and the first relations are

a₀(x) = c₀(x) = 1,   b₀(x) = c₁(x),
a₁(x) = (1 − c)c₁(x) + x c₂(x),   b₁(x) = (2 − c)c₂(x) + x c₃(x),
a₂(x) = (c² − 3c + 2)c₂(x) + (4x − 2xc)c₃(x) + x² c₄(x),   (17)
b₂(x) = (c² − 5c + 6)c₃(x) + (6x − 2xc)c₄(x) + x² c₅(x),

again with c = α + 1.

c) A not so simple expansion in terms of Bessel functions

To understand how this representation and the coefficients of the expansion are obtained, we start with the standard form

F_ζ(ν) = (1/(2πi)) ∫_C e^{ν(u − ζ/u)} f(u) du/u^{α+1},   (18)

where the contour C starts at −∞ with ph u = −π, encircles the origin anticlockwise, and returns to −∞ with ph u = π. The function f(u) is assumed to be analytic in a neighborhood of C, and in particular in a domain that contains the saddle points ±ib, where b = √ζ. When f is replaced by unity, we obtain the Bessel function:

F_ζ(ν) = ζ^{−α/2} J_α(2ν√ζ).

(19)

For Laguerre polynomials, we have the integral representation

L_n^{(α)}(z) = (1/(2πi)) ∫_L (1 − t)^{−α−1} e^{−tz/(1−t)} dt/t^{n+1},   (20)

where L is a circle around the origin with radius less than unity. After substituting t = e^{−s}, we obtain

e^{−νx} L_n^{(α)}(2νx) = (2^{−α}/(2πi)) ∫_{−∞}^{(0+)} e^{ν h(s,x)} ( sinh s / s )^{−α−1} ds/s^{α+1},   (21)

where ν = 2n + α + 1 and h(s, x) = s − x coth s. The contour is the same as in (18). Using the transformation h(s, x) = u − ζ/u, we obtain the integral representation in the standard form of (18):

2^α e^{−νx} L_n^{(α)}(2νx) = (1/(2πi)) ∫_{−∞}^{(0+)} e^{ν(u − ζ/u)} f(u) du/u^{α+1},   (22)


where

f(u) = ( u / sinh s )^{α+1} ds/du.   (23)

We now consider the following recursive scheme:

f_j(u) = A_j(ζ) + B_j(ζ)/u + (1 + b²/u²) g_j(u),
f_{j+1}(u) = g_j′(u) − ((α + 1)/u) g_j(u),   (24)
A_j(ζ) = ( f_j(ib) + f_j(−ib) )/2,   B_j(ζ) = i ( f_j(ib) − f_j(−ib) )/(2b),

with f₀(u) = f(u), the coefficient function. Using this scheme and integration by parts in (22), we obtain the following representation:

L_n^{(α)}(2νx) = ( e^{νx} χ(ζ)/(2^α ζ^{α/2}) ) [ J_α(2ν√ζ) A(ζ) − (1/√ζ) J_{α+1}(2ν√ζ) B(ζ) ],   (25)

with expansions

A(ζ) ∼ Σ_{j=0}^∞ A_{2j}(ζ)/ν^{2j},   B(ζ) ∼ Σ_{j=0}^∞ B_{2j+1}(ζ)/ν^{2j+1},   ν → ∞.   (26)

Here, ν = 2n + α + 1,

χ(ζ) = (1 − x)^{−1/4} ( ζ/x )^{½α + ¼},   x < 1,   (27)

with ζ given by

√(−ζ) = ½( √(x² − x) + arcsinh√(−x) )   if x ≤ 0,
√ζ = ½( √(x − x²) + arcsin√x )   if 0 ≤ x < 1.   (28)

This expansion in terms of Bessel functions was derived in [4]. It is uniformly valid for x ≤ 1 − δ, where δ ∈ (0, 1) is a fixed number. Recall that the expansion in terms of Airy functions is valid around the turning point x = 1 and up to infinity, but not for small positive x. As shown in [4], the Bessel-type expansion (left of x = 1) and the Airy-type expansion are valid in overlapping x-domains of R. For x < 0 the J -Bessel functions can be written in terms of modified I -Bessel functions.
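As a rough numerical check of (25)–(28), the sketch below evaluates only the leading term (with A(ζ) normalized to 1 and B(ζ) dropped) and compares it with the recurrence value of L_n^{(α)}(2νx); the helper names and the test point are our own choices, not from the paper:

```python
import math

def laguerre(n, alpha, x):
    # Forward three-term recurrence for L_n^{(alpha)}(x).
    p_prev, p = 1.0, 1.0 + alpha - x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1 + alpha - x) * p - (k + alpha) * p_prev) / (k + 1)
    return p

def bessel_j(nu, z, terms=200):
    # Power series for J_nu(z); adequate for moderate z.
    t = (0.5 * z) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(terms):
        t *= -(0.25 * z * z) / ((k + 1.0) * (nu + k + 1.0))
        s += t
    return s

def laguerre_bessel25_leading(n, alpha, x):
    # Leading term of (25): A = 1, B = 0; valid for 0 < x < 1.
    nu = 2 * n + alpha + 1                                            # below (21)
    sqzeta = 0.5 * (math.sqrt(x - x * x) + math.asin(math.sqrt(x)))   # (28)
    zeta = sqzeta * sqzeta
    chi = (1.0 - x) ** (-0.25) * (zeta / x) ** (0.5 * alpha + 0.25)   # (27)
    return (math.exp(nu * x) * chi / (2.0 ** alpha * zeta ** (0.5 * alpha))
            * bessel_j(alpha, 2.0 * nu * sqzeta))
```

Even for the modest value n = 10 the leading term already matches the polynomial to a few percent in the oscillatory region.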


The coefficients A_j(ζ) and B_j(ζ) in (26) can all be expressed in terms of the derivatives f^{(k)}(±ib) of f(u) (see (23)) at the saddle points ±ib. We proceed as follows. We expand the functions f_j(u) of (24) in two-point Taylor expansions

f_j(u) = Σ_{k=0}^∞ C_k^{(j)} (u² − b²)^k + u Σ_{k=0}^∞ D_k^{(j)} (u² − b²)^k.   (29)

Using the recursive scheme for the functions f_j(u) given in (24), we derive the following scheme for the coefficients:

C_k^{(j+1)} = (2k − α) D_k^{(j)} + b²(α − 4k − 2) D_{k+1}^{(j)} + 2(k + 1) b⁴ D_{k+2}^{(j)},
D_k^{(j+1)} = (2k + 1 − α) C_{k+1}^{(j)} − 2(k + 1) b² C_{k+2}^{(j)},   (30)

for j, k = 0, 1, 2, . . ., and the coefficients A_j and B_j follow from

A_j(ζ) = C_0^{(j)},   B_j(ζ) = −b² D_0^{(j)},   j ≥ 0.   (31)

In the present case of the Laguerre polynomials the functions f_{2j} are even and the f_{2j+1} are odd, and we have A_{2j+1}(ζ) = 0 and B_{2j}(ζ) = 0. A few non-vanishing coefficients are

A₀(ζ) = f(ib),
B₁(ζ) = −(1/4) b [ (2α − 1) i f^{(1)}(ib) + b f^{(2)}(ib) ],
A₂(ζ) = −(1/(32b)) [ 3i(4α² − 1) f^{(1)}(ib) − (3 − 16α + 4α²) b f^{(2)}(ib) + 2i(2α − 3) b² f^{(3)}(ib) + b³ f^{(4)}(ib) ],   (32)
B₃(ζ) = −(1/(384b)) [ 3(4α² − 1)(2α − 3)( i f^{(1)}(ib) + b f^{(2)}(ib) ) + 2i(α − 7)(2α − 1)(2α − 3) b² f^{(3)}(ib) + 3(19 − 20α + 4α²) b³ f^{(4)}(ib) − 3i(2α − 5) b⁴ f^{(5)}(ib) − b⁵ f^{(6)}(ib) ].

To have A₀(ζ) = 1, all A- and B-coefficients are scaled with respect to A₀(ζ) = χ(ζ). As an example, using this normalization and the explicit expressions of the first two derivatives f^{(k)}(ib) of f(u), we obtain for B₁(ζ)

B₁(ζ) = (1/(48ξ)) ( 5ξb⁴ + 6ξ²b + 3ξ + 12α²(b − ξ) − 3b ),   (33)


where ξ = √( x/(1 − x) ).

In the resulting algorithm for Laguerre polynomials described in [8], the two Bessel-type expansions were used in the oscillatory region of the functions. The use of the simple Bessel-type expansion was restricted, as expected, to values of the argument of the Laguerre polynomials very close to the origin. The second Bessel-type expansion was used in the part of the oscillatory region where the Airy expansion failed (computational schemes for Airy functions are more efficient than the corresponding schemes for Bessel functions). Therefore, the Airy expansion was used in the monotonic region and in part of the oscillatory region of the Laguerre polynomials L_n^{(α)}(x), with α small. Recall that we have observed (see below (28)) that the Airy-type and Bessel-type expansions in (6) and (25), respectively, are valid in overlapping x-intervals of R.

d) An expansion in terms of Bessel functions for large α and n

When α is large, an expansion suitable for the oscillatory region of the Laguerre polynomials and valid for large values of n is the expansion given in [6]:

L_n^{(α)}(4κx) = ( Γ(n + α + 1)/n! ) e^{−κA} χ(b) ( b/(2κx) )^α [ J_α(4κb) A(b) − 2b J_{α+1}(4κb) B(b) ],   (34)

with expansions

A(b) ∼ Σ_{k=0}^∞ A_k(b)/κ^k,   B(b) ∼ Σ_{k=0}^∞ B_k(b)/κ^k,   κ → ∞,   (35)

where

κ = n + ½(α + 1),   χ(b) = ( (4b² − τ²)/(4x − 4x² − τ²) )^{1/4},   τ = α/(2κ).   (36)

We have τ < 1. The quantity b is a function of x and follows from the relation

2W − 2τ arctan(W/τ) = 2R − arcsin( (1 − 2x)/√(1 − τ²) ) − τ arcsin( (x − ½τ²)/( x√(1 − τ²) ) ) + ½π(1 − τ),   (37)

where

R = ½√(4x − 4x² − τ²) = √( (x₂ − x)(x − x₁) ),   W = √(4b² − τ²),   (38)

and

x₁ = ½( 1 − √(1 − τ²) ),   x₂ = ½( 1 + √(1 − τ²) ).   (39)
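Since the left-hand side of (37) is increasing in b, the implicit function b(x) can be recovered by bisection. The sketch below (our own code; it assumes τ > 0) also checks the limit τ → 0, in which 2b reduces to √(x − x²) + arcsin√x, and the turning-point value b(x₁) = ½τ:

```python
import math

def _asin(t):
    # arcsin with clamping to guard against rounding just outside [-1, 1]
    return math.asin(max(-1.0, min(1.0, t)))

def b_from_x(x, tau, tol=1e-13):
    """Solve (37) for b by bisection; valid for x1 < x < x2, where b >= tau/2."""
    s = math.sqrt(1.0 - tau * tau)
    rhs = (2.0 * math.sqrt(max(x - x * x - 0.25 * tau * tau, 0.0))
           - _asin((1.0 - 2.0 * x) / s)
           - tau * _asin((x - 0.5 * tau * tau) / (x * s))
           + 0.5 * math.pi * (1.0 - tau))
    def lhs(b):
        w = math.sqrt(max(4.0 * b * b - tau * tau, 0.0))   # W in (38)
        return 2.0 * w - 2.0 * tau * math.atan(w / tau)
    lo, hi = 0.5 * tau, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```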

The relation in (37) can be used for x ∈ [x₁, x₂], in which case b ≥ ½τ. In this interval the Laguerre polynomial L_n^{(α)}(4κx) oscillates. The endpoints of this interval can be seen as two turning points, where in fact Airy functions can be used. We assume that τ ≤ τ₀, where τ₀ is a fixed number in the interval (0, 1). When τ → 1, the points x₁ and x₂, which can be considered as turning points, coalesce, and we expect expansions in terms of parabolic cylinder functions (in the present case these are Hermite polynomials). The expansion is valid for x ∈ [0, (1 − δ)x₂], with δ a small positive number. For x < x₁, the relation in (37) should be modified, because the arguments of the square roots in W and R become negative. At the turning point x = x₁ we have b = ½τ, and the argument of the Bessel functions in (34) becomes 4κb = 2κτ = α, where we have used the definition of τ in (36). We see that the argument and the order of the Bessel functions become equal, and because we assume that α is large, this explains why at the turning point x₁ we can expect Airy-type behaviour. The Airy-type approximation around x₁ cannot be valid near x = 0, but the Bessel function can handle this. The first coefficients of the expansions in (35) are

A₀(b) = 1,   B₀(b) = 0,
A₁ = τ/( 24(τ² − 1) ),   B₁ = ( P R³ + Q W³ )/( 192 R³ W⁴ (τ² − 1) ),
P = 4(2τ² + 12b²)(1 − τ²),
Q = 2τ⁴ − 12x²τ² − τ² − 8x³ + 24x² − 6x,   (40)
B₂(b) = A₁(b) B₁(b).

An example of the computational performance of the expansion (34) is given in Fig. 2. For comparison, we show the results for two different values of α (α = 198.5 and α = 20.6) and a fixed value of n (n = 340). The coefficients A_k, B_k, k = 0, 1, 2, in (35) have been considered in the computations. The expansion has been tested by considering the relation given in Eq. (18.9.13) of [14], written in the form

ε = | ( L_{n−1}^{(α+1)}(z) + L_n^{(α)}(z) ) / L_n^{(α+1)}(z) − 1 |,   (41)

Fig. 2 Test of the performance of the expansion (34) for large values of n and α (top panel: α = 198.5, n = 340; bottom panel: α = 20.6, n = 340). The ε-value in Eq. (41) is plotted as a function of x ∈ (0, 1).

where z = 4κx is the argument of the Laguerre polynomial in the expansion (34). As can be seen, an accuracy better than 10^{−10} is obtained in the first half of the x-interval for the two values of α tested.

3 Jacobi Polynomials

Next we review some useful asymptotic expansions for computing Jacobi polynomials.

3.1 An Expansion in Terms of Elementary Functions

In [11] we considered the following representation:

P_n^{(α,β)}(cos θ) = ( G_κ(α, β) / ( √(πκ) sin^{α+1/2}(½θ) cos^{β+1/2}(½θ) ) ) [ cos χ · U(x) − sin χ · V(x) ],   (42)

where

x = cos θ,   χ = κθ − (½α + ¼)π,   κ = n + ½(α + β + 1).   (43)

The expansion is valid for x ∈ [−1 + δ, 1 − δ], where δ is a small positive number.
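The leading term of (42)–(43) (taking U = 1, V = 0) is easy to test against the standard three-term recurrence for Jacobi polynomials. In the sketch below (our own code, with the standard recurrence coefficients) the error is measured relative to the amplitude factor, so that zeros of the cosine do not inflate it:

```python
import math

def jacobi(n, a, b, x):
    # Standard three-term recurrence for P_n^{(a,b)}(x).
    p_prev, p = 1.0, 0.5 * ((a + b + 2) * x + a - b)
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (c * (c - 2) * x + a * a - b * b)
        a3 = 2 * (k + a - 1) * (k + b - 1) * c
        p_prev, p = p, (a2 * p - a3 * p_prev) / a1
    return p

def jacobi_elementary_leading(n, a, b, theta):
    # Leading term of (42)-(43); returns (approximation, amplitude factor).
    kappa = n + 0.5 * (a + b + 1)
    g = math.gamma(n + a + 1) / (math.factorial(n) * kappa ** a)   # (46)
    amp = g / (math.sqrt(math.pi * kappa)
               * math.sin(0.5 * theta) ** (a + 0.5)
               * math.cos(0.5 * theta) ** (b + 0.5))
    chi = kappa * theta - (0.5 * a + 0.25) * math.pi
    return amp * math.cos(chi), amp
```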


The functions U(x) and V(x) have the expansions

U(x) ∼ Σ_{k=0}^∞ u_{2k}(x)/κ^{2k},   V(x) ∼ Σ_{k=0}^∞ v_{2k+1}(x)/κ^{2k+1}.   (44)

The first coefficients are u₀(x) = 1 and

v₁(x) = ( 2α² − 2β² + (2α² + 2β² − 1)x ) / (8 sin θ),
u₂(x) = ( 12(5 − 2α² − 2β²)(α² − β²)x + 4( −3(α² − β²)² + 3(α² + β²) − 6 + 4α(α² − 1 + 3β²) ) + ( −12(α² + β²)(α² + β² − 1) − 16α(α² − 1 + 3β²) − 3 )x² ) / (384 sin²θ).   (45)

The function G_κ(α, β) is given by

(n + α + 1) = n! κ α



κ + 12 (α − β + 1) 

, κ − 12 (α + β − 1) κ α

(46)

2  with expansion in negative powers of κ − 12 β Gκ (α, β) ∼ (w/κ)α

∞  Cm (ρ)(−α)2m , w 2m

(47)

ρ = 12 (α + 1),

(48)

m=0

where w = κ − 12 β, and the first few Cm (ρ) coefficients are C0 (ρ) = 1,

1 C1 (ρ) = − 12 ρ, C2 (ρ) = 4 + 21ρ + 35ρ 2 C3 (ρ) = −ρ , 362880 18 + 101ρ + 210ρ 2 + 175ρ 3 C4 (ρ) = ρ . 87091200

1 1440

(5ρ + 1) , (49)


3.2 An Expansion in Terms of Bessel Functions

Another expansion, this time in terms of Bessel functions, considered in [11] is

P_n^{(α,β)}(cos θ) = ( G_κ(α, β) / ( sin^α(½θ) cos^β(½θ) ) ) √( θ/sin θ ) W(θ),
W(θ) = J_α(κθ) S(θ) + (1/κ) J_{α+1}(κθ) T(θ),   (50)

where G_κ(α, β) is defined in (46), and

κ = n + ½(α + β + 1).   (51)

This expansion holds uniformly with respect to θ ∈ [0, π − δ], where δ is a small positive number. When an asymptotic approximation is needed for θ near π (or for x near −1), we can use the symmetry relation

P_n^{(α,β)}(−x) = (−1)^n P_n^{(β,α)}(x).

(52)

The functions S(θ) and T(θ) have the expansions

S(θ) ∼ Σ_{k=0}^∞ S_k(θ)/κ^{2k},   T(θ) ∼ Σ_{k=0}^∞ T_k(θ)/κ^{2k},   κ → ∞,   (53)

with S₀(θ) = A₀(θ) = 1, T₀(θ) = A₁(θ), and for k = 1, 2, 3, . . .

S_k(θ) = −(1/θ^{k−1}) Σ_{j=0}^{k−1} binom(k−1, j) A_{j+k+1}(θ) (−θ)^j 2^{k−1−j} (α + 2 + j)_{k−j−1},
T_k(θ) = (1/θ^k) Σ_{j=0}^{k} binom(k, j) A_{j+k+1}(θ) (−θ)^j 2^{k−j} (α + 1 + j)_{k−j}.   (54)

The coefficients A_k(θ) are analytic functions for 0 ≤ θ < π. The first ones are A₀(θ) = 1 and

A₁(θ) = ( (4α² − 1)(sin θ − θ cos θ) + 2θ(α² − β²)(cos θ − 1) ) / (8θ sin θ).   (55)


For small values of θ, it is convenient to consider expansions of the coefficients A_k(θ):

A_k(θ) = χ^k θ^k Σ_{j=0}^∞ A_{j,k} θ^{2j},   χ = θ/(sin θ),   (56)

where the series represent entire functions of θ. The first few A_{j,k} are

A_{0,1} = (1/24)( α² + 3β² − 1 ),
A_{1,1} = (1/480)( −3α² − 5β² + 2 ),
A_{0,2} = (1/5760)( −16α − 14α² − 90β² + 5α⁴ + 4α³ + 45β⁴ + 30β²α² + 60β²α + 21 ).   (57)

More details about the coefficients A_k(θ) are given in [11].

As in the case of the Bessel-type expansions for Laguerre polynomials, for computing the representation (50) an algorithm for evaluating the Bessel functions J_ν(z) is needed. We use a computational scheme based on the methods of approximation described in the Appendix.

3.3 An Expansion for Large β

In [9] we discussed two asymptotic approximations of Jacobi polynomials for large values of the β-parameter. The expansions are in terms of Laguerre polynomials. One of the two expansions given in [9] is the following:

P_n^{(α,β)}( 1 − 2z/b ) = (1 − z/b)^n W(n, α, z),
W(n, α, z) = L_n^{(α)}(z) R(n, α, β, z) + L_{n−1}^{(α)}(z) S(n, α, β, z),   (58)

where 0 < z < β, b = β + n, and L_n^{(α)}(z), L_{n−1}^{(α)}(z) are Laguerre polynomials. R(n, α, β, z) and S(n, α, β, z) have the expansions

R(n, α, β, z) ∼ Σ_{k=0}^∞ r_k(n, α, z)/b^k,   S(n, α, β, z) ∼ Σ_{k=0}^∞ s_k(n, α, z)/b^k.   (59)


The first few coefficients are

r₀(n, α, z) = 1,   s₀(n, α, z) = 0,
r₁(n, α, z) = ½ n (2z + α + 1),
s₁(n, α, z) = −½ (n + α)(α + z + 1),
r₂(n, α, z) = −(1/24)( 3α³ + 6α²z − 6αnz + 3αz² − 9nz² + 10α² + 8αz − 4nz − 12z² + 9α + 2 ),
s₂(n, α, z) = (1/24)(n + α)( 3α³ + 3α²z − 6αnz − 3αz² − 6nz² − 3z³ + 10α² + αz − 4nz − 11z² + 9α − 2z + 2 ).   (60)

In this expansion for the Jacobi polynomials P_n^{(α,β)}(x), the range of validity in the x-interval is very limited near x = 1 (to reach the whole x-interval (−1, 1), z would have to be of order O(β)). For large values of the parameter α, we can obtain a similar expansion valid near x = −1 by using the symmetry relation (52) of the Jacobi polynomials.
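Already with the coefficients r₀, r₁, s₀, s₁ of (60), the expansion (58) is quite accurate near x = 1 when β is large. A small sketch (our own code; parameter values chosen for illustration) compares it with the Jacobi recurrence:

```python
import math

def jacobi(n, a, b, x):
    # Standard three-term recurrence for P_n^{(a,b)}(x).
    p_prev, p = 1.0, 0.5 * ((a + b + 2) * x + a - b)
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (c * (c - 2) * x + a * a - b * b)
        a3 = 2 * (k + a - 1) * (k + b - 1) * c
        p_prev, p = p, (a2 * p - a3 * p_prev) / a1
    return p

def laguerre(n, alpha, x):
    # Forward three-term recurrence for L_n^{(alpha)}(x).
    p_prev, p = 1.0, 1.0 + alpha - x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1 + alpha - x) * p - (k + alpha) * p_prev) / (k + 1)
    return p

def jacobi_large_beta(n, alpha, beta, z):
    # Expansion (58) truncated after the 1/b terms of (59)-(60).
    b = beta + n
    r = 1.0 + 0.5 * n * (2 * z + alpha + 1) / b           # r0 + r1/b
    s = -0.5 * (n + alpha) * (alpha + z + 1) / b          # s0 + s1/b
    w = laguerre(n, alpha, z) * r + laguerre(n - 1, alpha, z) * s
    return (1.0 - z / b) ** n * w
```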

3.4 An Expansion for Large α and β

In [10] an expansion for Jacobi polynomials in terms of elementary functions, valid for large values of the parameters α and β, has been given:

P_n^{(α,β)}(x) = ( 2^{(α+β+1)/2} e^{−κψ} / √( πκ w(x) U(x) ) ) [ cos( κχ + ¼π ) P + sin( κχ + ¼π ) Q ],   (61)

where w(x) = (1 − x)^α (1 + x)^β,

ψ = −½(1 − τ) ln(1 − τ) − ½(1 + τ) ln(1 + τ) + ½(1 + σ) ln(1 + σ) + ½(1 − σ) ln(1 − σ),
χ = (τ + 1) arctan( U(x)/(1 − x + σ + τ) ) + (τ − 1) arctan( U(x)/(1 + x + σ − τ) ) + (1 − σ) atan2( −U(x), τ + xσ ),
U(x) = √( 1 − 2στx − τ² − σ² − x² ),   (62)

and

σ = (α + β)/(2κ),   τ = (α − β)/(2κ),   κ = n + ½(α + β + 1).   (63)
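For σ = τ = 0 the formulas (61)–(63) reduce to the classical Laplace asymptotics for Legendre polynomials, which gives a convenient sanity check of the leading term (P = 1, Q = 0). A sketch (our own code; the amplitude-relative error measure avoids blow-up near zeros of the cosine):

```python
import math

def jacobi(n, a, b, x):
    # Standard three-term recurrence for P_n^{(a,b)}(x).
    p_prev, p = 1.0, 0.5 * ((a + b + 2) * x + a - b)
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (c * (c - 2) * x + a * a - b * b)
        a3 = 2 * (k + a - 1) * (k + b - 1) * c
        p_prev, p = p, (a2 * p - a3 * p_prev) / a1
    return p

def jacobi_unif_leading(n, a, b, x):
    # Leading term (P = 1, Q = 0) of (61)-(63); returns (approx, amplitude).
    kappa = n + 0.5 * (a + b + 1)
    sigma = (a + b) / (2 * kappa)
    tau = (a - b) / (2 * kappa)
    u = math.sqrt(1 - 2 * sigma * tau * x - tau * tau - sigma * sigma - x * x)
    psi = (-0.5 * (1 - tau) * math.log(1 - tau) - 0.5 * (1 + tau) * math.log(1 + tau)
           + 0.5 * (1 + sigma) * math.log(1 + sigma) + 0.5 * (1 - sigma) * math.log(1 - sigma))
    chi = ((tau + 1) * math.atan(u / (1 - x + sigma + tau))
           + (tau - 1) * math.atan(u / (1 + x + sigma - tau))
           + (1 - sigma) * math.atan2(-u, tau + x * sigma))
    w = (1 - x) ** a * (1 + x) ** b
    amp = 2 ** (0.5 * (a + b + 1)) * math.exp(-kappa * psi) / math.sqrt(math.pi * kappa * w * u)
    return amp * math.cos(kappa * chi + 0.25 * math.pi), amp
```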

The function atan2(y, x) in the third term of χ in (62) denotes the phase in (−π, π] of the complex number x + iy. P and Q have the expansions

P ∼ Σ_{j=0}^∞ p_j/κ^j,   Q ∼ Σ_{j=0}^∞ q_j/κ^j,   (64)

and the first coefficients are p₀ = 1, q₀ = 0. These expansions are valid for x ∈ [x₋(1 + δ), x₊(1 − δ)], where x₊ and x₋ are the turning points

x_± = −στ ± √( (1 − σ²)(1 − τ²) ),

and δ is a fixed positive small number. For more details about the coefficients p_j and q_j, see [10].

3.5 Computational Performance

The performance of the asymptotic expansions for the Jacobi polynomials has been tested by considering the relation given in Eq. (18.9.3) of [14], written in one of the two following forms

3.5 Computational Performance The performance of the asymptotic expansions for the Jacobi polynomials has been tested by considering the relation given in Eq. (18.9.3) of [14] written in one of the two following forms Pn(α, β−1) (x) − Pn(α−1, β) (x) (α, β)

Pn−1 (x)

= 1,

(65)

(α, β)

Pn(α−1, β) (x) + Pn−1 (x) Pn(α, β−1) (x)

= 1.

(66)

(α, β)

Note that the test (65) fails close to the zeros of Pn−1 (x); in this case, we can (α, β)

(α, β−1)

consider the alternative test (66): the zeros of Pn−1 (x) and Pn (x) interlace according to Theorem 2 of [17], and therefore they can not vanish simultaneously. First, we test the performance of the asymptotic expansion (42). Figure 3 shows the points where the error in the computation of the test is greater than 10−12 for two different sets of values of the parameters (α, β). The first four nonzero coefficients of the expansions (44) have been considered in the computations. Random values have been generated in the parameter region (n, θ ) ∈ (10, 1000) × (0, π ). As

232

α=1/3 β=1/5

n

1000 800 600 400 200 0

A. Gil et al.

0

0.5

1

1.5

2

2.5

3

2

2.5

3

θ α=4.2 β=3.8

n

1000 800 600 400 200 0

0

0.5

1

1.5

θ

Fig. 3 Performance of the asymptotic expansion (42). The points where the error in the computation is greater than 10−12 . Two different sets of values of the parameters α and β have been considered in the tests

n

500 400 300 200 100 0

n

500 400 300 200 100 0

α =1/3 β=1/5

0.5

1

1.5

θ

2

2.5

2

2.5

3

α =4.2 β=3.8

0.5

1

1.5

θ

3

Fig. 4 Tests of the performance of the asymptotic expansion (50) for two different sets of values of the parameters α and β . The points where the error in the computation is greater than 10−12 are shown

expected, the expansion works well for values of x = cos(θ ) not close to ±1. Also, the parameters α, β should be of order O(1), and smaller values of these parameters (1/3, 1/5) (cos θ ) give better results, as can be seen when comparing the results for Pn (4.2, 3.8) and Pn (cos θ ). The performance of the asymptotic expansion in terms of Bessel functions (50) is illustrated in Fig. 4. As before, the points where the error in the computation is greater than 10−12 are shown. The asymptotic expansion (50) has been computed by using the first three coefficients Sk (θ ), Tk (θ ), k = 0, 1, 2 in (53). The results are as expected: better accuracy is obtained near x = ±1, although the expansion performs well even for small values of x (θ -values close to π/2) when α is small. Also, for a fixed number of terms used in the expansion, the accuracy obtained with the expansion decreases as the parameter values (α, β) increase. Regarding CPU times, the Bessel expansions, as expected, are slower than the elementary expansions. Therefore, when both expansions are valid in the same parameter region, it is always convenient to use the expansion in terms of elementary functions.
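The contiguous relation behind test (65), Eq. (18.9.3) of [14], can itself be verified directly with the recurrence; a minimal sketch of the testing methodology (our own code):

```python
def jacobi(n, a, b, x):
    # Standard three-term recurrence for P_n^{(a,b)}(x).
    p_prev, p = 1.0, 0.5 * ((a + b + 2) * x + a - b)
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        c = 2 * k + a + b
        a1 = 2 * k * (k + a + b) * (c - 2)
        a2 = (c - 1) * (c * (c - 2) * x + a * a - b * b)
        a3 = 2 * (k + a - 1) * (k + b - 1) * c
        p_prev, p = p, (a2 * p - a3 * p_prev) / a1
    return p

def dlmf_18_9_3(n, a, b, x):
    """Return (lhs, rhs) of P_n^{(a,b-1)}(x) - P_n^{(a-1,b)}(x) = P_{n-1}^{(a,b)}(x).
    In a test of an asymptotic expansion one side would be replaced by the
    expansion values; here both sides use the recurrence, so the identity
    should hold to rounding error."""
    lhs = jacobi(n, a, b - 1, x) - jacobi(n, a - 1, b, x)
    return lhs, jacobi(n - 1, a, b, x)
```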

When the parameter values α and/or β are large, it is interesting to consider the expansions given in Sects. 3.3 and 3.4. An example of the performance of the expansion for P_n^{(α,β)}(x) given in (58) is shown in Fig. 5 for a fixed value of α and two different values of the parameter β (β = 250, β = 500). The first five coefficients of the expansions (59) have been considered in the computations, and the algorithm given in [8] is used to compute the Laguerre polynomials. The variable x on the abscissa axis is the argument 1 − 2z/b of the expansion. Random points have been generated in the parameter region (n, x) ∈ (10, 110) × (0.994, 1). The points where the error in the computation is greater than 10^{−8} are plotted in the figure. The results show that the range of validity of the expansion is quite narrow; therefore, it is of limited interest as a tool to compute the polynomials. The expansion is, however, very useful for obtaining expansions for the zeros of Jacobi polynomials, as discussed in [9].

Fig. 5 Performance of the asymptotic expansion (58) for a fixed value of α (α = 1/4) and two different (large) values of the parameter β (top: β = 250; bottom: β = 500). The variable x on the abscissa axis is the argument 1 − 2z/b of the expansion. The points where the error in the computation is greater than 10^{−8} are plotted.

More interesting for computational purposes is the expansion in terms of elementary functions given in (61). In Fig. 6 we show an example of the performance of this expansion. The first four coefficients of the expansions (64) have been considered in the computations. The points where the error in the computation is greater than 10^{−8} are plotted in the figure. We have chosen θ (x = cos θ) as the variable on the abscissa axis. As can be seen, the expansion performs well even for not too large values of α and β (α = 10, β = 20 in the example shown). Also, as expected, the values of x = cos θ should not be close to ±1.

Fig. 6 Example of the performance of the asymptotic expansion (61) (α = 10, β = 20). The points where the error in the computation is greater than 10^{−8} are plotted.

Appendix

The following methods of approximation were used in our algorithm for computing the Bessel functions J_ν(x).

Power Series  When z is small, we use the power series given in Eq. (10.2.2) of [16]:

J_ν(z) = (½z)^ν Σ_{k=0}^∞ (−1)^k (¼z²)^k / ( k! Γ(ν + k + 1) ).

Debye’s Asymptotic Expansions The expressions are given in Eq. (10.19.3) and Eq. (10.19.6) of [16, §10.19(ii)]: When ν < z, we use Jν (ν sech α) ∼

∞ eν(tanh α−α)  Uk (coth α) , 1 νk (2π ν tanh α) 2 k=0

and for ν > z . Jν (ν sec β) ∼

2 π ν tan β

, /1 + ∞ ∞   2 U2k (i cot β) U2k+1 (i cot β) −i sin ξ cos ξ . ν 2k ν 2k+1 k=0

k=0

The coefficients U_k(p) are polynomials in p of degree 3k given by U₀(p) = 1 and

U_{k+1}(p) = ½ p²(1 − p²) U_k′(p) + (1/8) ∫₀^p (1 − 5t²) U_k(t) dt.

Hankel's Expansion  For large values of the argument z, we use Hankel's expansion given in [16, §10.17(i)]:

J_ν(z) ∼ ( 2/(πz) )^{1/2} [ cos ω Σ_{k=0}^∞ (−1)^k a_{2k}(ν)/z^{2k} − sin ω Σ_{k=0}^∞ (−1)^k a_{2k+1}(ν)/z^{2k+1} ],

where ω = z − ½νπ − ¼π. The coefficients a_k(ν) are given by a₀(ν) = 1 and

a_k(ν) = (4ν² − 1²)(4ν² − 3²) ··· (4ν² − (2k − 1)²) / ( k! 8^k ),   k = 1, 2, 3, . . . .
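The power series and Hankel's expansion can be cross-checked in the region where both are accurate; a sketch (truncation choices are ours):

```python
import math

def bessel_j_series(nu, z, terms=200):
    # Power series (Eq. (10.2.2) of [16]); fine for moderate z.
    t = (0.5 * z) ** nu / math.gamma(nu + 1.0)
    s = t
    for k in range(terms):
        t *= -(0.25 * z * z) / ((k + 1.0) * (nu + k + 1.0))
        s += t
    return s

def bessel_j_hankel(nu, z, terms=12):
    # Hankel's large-z expansion with the coefficients a_k(nu) above.
    omega = z - 0.5 * nu * math.pi - 0.25 * math.pi
    a = [1.0]
    for k in range(1, 2 * terms):
        a.append(a[-1] * (4 * nu * nu - (2 * k - 1) ** 2) / (k * 8.0))
    cos_s = sum((-1) ** k * a[2 * k] / z ** (2 * k) for k in range(terms))
    sin_s = sum((-1) ** k * a[2 * k + 1] / z ** (2 * k + 1) for k in range(terms))
    return math.sqrt(2.0 / (math.pi * z)) * (math.cos(omega) * cos_s - math.sin(omega) * sin_s)
```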

Airy-Type Expansions  We use the representation given in [5, Chapter 8]:

J_ν(νx) = ( φ(ζ)/ν^{1/3} ) [ Ai(ν^{2/3}ζ) A_ν(ζ) + ν^{−4/3} Ai′(ν^{2/3}ζ) B_ν(ζ) ],

where

φ(ζ) = ( 4ζ/(1 − x²) )^{1/4},   φ(0) = 2^{1/3}.

The variable ζ is written in terms of the variable x as

(2/3) ζ^{3/2} = ln( (1 + √(1 − x²))/x ) − √(1 − x²),   0 < x ≤ 1,
(2/3)(−ζ)^{3/2} = √(x² − 1) − arccos(1/x),   x ≥ 1.

Three-Term Recurrence Relation Using Miller's Algorithm  The standard three-term recurrence relation for the cylinder functions,

J_{ν−1}(z) + J_{ν+1}(z) = (2ν/z) J_ν(z),

is computed backwards (starting from large values of ν) using Miller's algorithm; see [5, §4.6].

Acknowledgments  We thank the referee for valuable comments. We acknowledge financial support from Ministerio de Ciencia e Innovación, Spain, projects MTM2015-67142-P (MINECO/FEDER, UE) and PGC2018-098279-B-I00 (MCIU/AEI/FEDER, UE). NMT thanks CWI, Amsterdam, for scientific support.


References

1. Deaño, A., Huertas, E.J., Marcellán, F.: Strong and ratio asymptotics for Laguerre polynomials revisited. J. Math. Anal. Appl. 403(2), 477–486 (2013)
2. Deaño, A., Huybrechs, D., Opsomer, P.: Construction and implementation of asymptotic expansions for Jacobi-type orthogonal polynomials. Adv. Comput. Math. 42(4), 791–822 (2016)
3. Dimitrov, D.K., Nikolov, G.P.: Sharp bounds for the extreme zeros of classical orthogonal polynomials. J. Approx. Theor. 162(10), 1793–1804 (2010)
4. Frenzen, C.L., Wong, R.: Uniform asymptotic expansions of Laguerre polynomials. SIAM J. Math. Anal. 19(5), 1232–1248 (1988)
5. Gil, A., Segura, J., Temme, N.M.: Numerical Methods for Special Functions. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (2007)
6. Gil, A., Segura, J., Temme, N.M.: Asymptotic approximations to the nodes and weights of Gauss–Hermite and Gauss–Laguerre quadratures. Stud. Appl. Math. 140(3), 298–332 (2018)
7. Gil, A., Segura, J., Temme, N.M.: Computing the real parabolic cylinder functions U(a, x), V(a, x). ACM Trans. Math. Softw. 32(1), 70–101 (2006)
8. Gil, A., Segura, J., Temme, N.M.: Efficient computation of Laguerre polynomials. Comput. Phys. Commun. 210, 124–131 (2017)
9. Gil, A., Segura, J., Temme, N.M.: Expansions of Jacobi polynomials for large values of beta and of their zeros. SIGMA 14(73), 9 pp. (2018)
10. Gil, A., Segura, J., Temme, N.M.: Asymptotic expansions of Jacobi polynomials and of the nodes and weights of Gauss–Jacobi quadrature for large degree and parameters in terms of elementary functions. J. Math. Anal. Appl. 494(2), 124642 (2021)
11. Gil, A., Segura, J., Temme, N.M.: Noniterative computation of Jacobi polynomials. SIAM J. Sci. Comput. 41(1), A668–A693 (2019)
12. Hale, N., Townsend, A.: Fast and accurate computation of Gauss–Legendre and Gauss–Jacobi quadrature nodes and weights. SIAM J. Sci. Comput. 35(2), A652–A674 (2013)
13. Huybrechs, D., Opsomer, P.: Construction and implementation of asymptotic expansions for Laguerre-type orthogonal polynomials. IMA J. Numer. Anal. 38(3), 1085–1118 (2018)
14. Koornwinder, T.H., Wong, R., Koekoek, R., Swarttouw, R.F.: Chapter 18, Orthogonal polynomials. In: NIST Handbook of Mathematical Functions, pp. 435–484. U.S. Dept. Commerce, Washington, DC (2010). http://dlmf.nist.gov/18
15. Kreuser, P.: Über das Verhalten der Integrale homogener linearer Differenzengleichungen im Unendlichen. Diss. Tübingen, 48 S. (1914)
16. Olver, F.W.J., Maximon, L.C.: Chapter 10, Bessel functions. In: NIST Handbook of Mathematical Functions, pp. 215–286. U.S. Dept. Commerce, Washington, DC (2010). http://dlmf.nist.gov/10
17. Segura, J.: Interlacing of the zeros of contiguous hypergeometric functions. Numer. Algorithms 49(1–4), 387–407 (2008)
18. Temme, N.M.: Asymptotic Methods for Integrals. Series in Analysis, vol. 6. World Scientific Publishing, Hackensack, NJ (2015)

An Introduction to Multiple Orthogonal Polynomials and Hermite-Padé Approximation

Guillermo López-Lagomasino

Abstract We present a brief introduction to the theory of multiple orthogonal polynomials on the basis of known results for an important class of measures known as Nikishin systems. For type I and type II multiple orthogonal polynomials with respect to such systems of measures, we describe some of their most relevant properties regarding location and distribution of zeros as well as their weak and ratio asymptotic behavior. Keywords Perfect systems · Nikishin systems · Multiple orthogonal polynomials · Hermite-Padé approximation · Logarithmic asymptotics · Ratio asymptotics · Asymptotic properties

1 Introduction The object of this paper is to provide an introduction to the study of HermitePadé approximation, multiple orthogonal polynomials, and some of their asymptotic properties. For the most part, the attention is restricted to the case of multiple orthogonality with respect to an important class of measures introduced by E.M. Nikishin in [40]. For this reason, they are referred to in the specialized literature as Nikishin systems. Throughout the years, with the assistance of colleagues and former Ph.D. students, I have dedicated a great part of my research to this subject area and I wish to express here my gratitude to all those involved. This material is not intended to present new results, but a mere account of the experience I have accumulated for the benefit of future research from the younger generations.

G. López Lagomasino () Departamento de Matemáticas, Universidad Carlos III de Madrid, Leganés, Spain e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_9

237

238

G. López Lagomasino

1.1 Some Historical Background

In 1873, Charles Hermite published in [28] his proof of the transcendence of e, making use of simultaneous rational approximation of systems of exponentials. That paper marked the beginning of the modern analytic theory of numbers. The formal theory of simultaneous rational approximation for general systems of analytic functions was initiated by K. Mahler in lectures delivered at the University of Groningen in 1934–35. These lectures were published years later in [37]. Important contributions in this respect are also due to his students J. Coates and H. Jager, see [13] and [29]. K. Mahler's approach to the simultaneous approximation of finite systems of analytic functions may be reformulated in the following terms. Let f = (f₁, . . . , f_m) be a family of analytic functions in some domain D of the extended complex plane containing ∞. Fix a non-zero multi-index n = (n₁, . . . , n_m) ∈ Z^m₊, |n| = n₁ + ··· + n_m. There exist polynomials a_{n,1}, . . . , a_{n,m}, not all identically equal to zero, such that

(i) deg a_{n,j} ≤ n_j − 1, j = 1, . . . , m (deg a_{n,j} ≤ −1 means that a_{n,j} ≡ 0),
(ii) a_{n,0}(z) + Σ_{j=1}^m a_{n,j}(z) f_j(z) = O(1/z^{|n|}), z → ∞,

for some polynomial a_{n,0}. Analogously, there exists Q_n, not identically equal to zero, such that

(i) deg Q_n ≤ |n|,
(ii) Q_n(z) f_j(z) − P_{n,j}(z) = O(1/z^{n_j+1}), z → ∞, j = 1, . . . , m,

for some polynomials P_{n,j}, j = 1, . . . , m. The polynomials a_{n,0} and P_{n,j} are uniquely determined from (ii) once their partners are found. The two constructions are called type I and type II polynomials (approximants) of the system (f₁, . . . , f_m). Algebraically, they are closely related. This is clearly exposed in [13, 29], and [37]. When m = 1 both definitions coincide with that of the well-known Padé approximation in its linear presentation.
Apart from Hermite’s result, type I, type II, and a combination of the two (called mixed type), have been employed in the proof of the irrationality of other numbers. For example, in [11] F. Beukers shows that Apery’s proof (see [1]) of the irrationality of ζ (3) can be placed in the context of mixed type Hermite-Padé approximation. See [46] for a brief introduction and survey on the subject. More recently, mixed type approximation has appeared in random matrix and non-intersecting brownian motion theories (see, for example, [9, 14, 31], and [32]), and the Degasperi-Procesi equation [10]. In applications in the areas of number theory, convergence of simultaneous rational approximation, and asymptotic properties of type I and type II polynomials, a central question is if these polynomials have no defect; that is, if they attain the maximal degree possible. Definition 1 A multi-index n is said to be normal for the system f for type I approximation (respectively, for type II,) if deg an,j = nj − 1, j = 1, . . . , m


(respectively, deg Q_n = |n|). A system of functions f is said to be perfect if all multi-indices are normal. It is easy to verify that (a_{n,0}, . . . , a_{n,m}) and Q_n are uniquely determined up to a constant factor when n is normal. Moreover, if a system is perfect, the order of approximation in parts (ii) above is exact for all n. The convenience of these properties is quite clear. Considering the construction at the origin (instead of z = ∞, which we chose for convenience), the system of exponentials considered by Hermite, (e^{w₁z}, . . . , e^{w_m z}), w_i ≠ w_j, i ≠ j, i, j = 1, . . . , m, is known to be perfect for type I and type II. A second example of a perfect system for both types is that given by the binomial functions (1 − z)^{w₁}, . . . , (1 − z)^{w_m}, w_i − w_j ∉ Z. When normality occurs for multi-indices with decreasing components the system is said to be weakly perfect. Basically, these are the only examples known of perfect systems, except for certain ones formed by Cauchy transforms of measures.

1.2 Markov Systems and Orthogonality

Let $s$ be a finite Borel measure with constant sign whose compact support consists of infinitely many points and is contained in the real line. In the sequel, we only consider such measures. By $\Delta$ we denote the smallest interval which contains the support, $\operatorname{supp} s$, of $s$. We denote this class of measures by $\mathcal M(\Delta)$. Let

$$\widehat s(z) = \int \frac{ds(x)}{z - x}$$

denote the Cauchy transform of $s$. Obviously, $\widehat s \in H(\mathbb C \setminus \Delta)$; that is, it is analytic in $\mathbb C \setminus \Delta$. If we apply the construction above to the system formed by $\widehat s$ ($m = 1$), it is easy to verify that $Q_n$ turns out to be orthogonal to all polynomials of degree less than $n \in \mathbb Z_+$. Consequently, $\deg Q_n = n$, and all its zeros are simple and lie in the open convex hull $\operatorname{Co}(\operatorname{supp} s)$ of $\operatorname{supp} s$. Therefore, such systems of one function are perfect. These properties allow one to deduce Markov's theorem on the convergence of (diagonal) Padé approximants of $\widehat s$, published in [38]. For this reason, $\widehat s$ is also called a Markov function.

Markov functions are quite relevant in several respects. Many elementary functions can be expressed as such. The resolvent function of a self-adjoint operator admits this type of representation. If one allows complex weights, any reasonable analytic function in the extended complex plane with a finite number of algebraic singularities adopts this form. This fact, and the use of Padé approximation, has played a central role in some of the most relevant achievements of the last decades concerning the exact rate of convergence of the best rational approximation: namely, A.A. Gonchar and E.A. Rakhmanov's result (see [24, 26], and [5]) on the best


G. López Lagomasino

rational approximation of $e^{-x}$ on $[0, +\infty)$; and H. Stahl's theorem (see [44]) on the best rational approximation of $x^\alpha$ on $[0, 1]$. Let us see two examples of general systems of Markov functions which play a central role in the theory of multiple orthogonal polynomials.
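Markov functions are also easy to explore numerically. The sketch below is an illustration of ours, not part of the text: it approximates the Cauchy transform $\widehat s$ by Gauss-Legendre quadrature for the toy choice $ds(x) = dx$ on $[-1,1]$ and checks the $s(\mathbb R)/z$ behavior at infinity.

```python
import numpy as np

# Illustrative sketch (not from the text): approximate the Cauchy transform
#   s_hat(z) = ∫ ds(x) / (z - x)
# for ds(x) = dx on [-1, 1] using Gauss-Legendre quadrature.

def cauchy_transform(z, nodes, weights):
    """s_hat(z) ≈ sum_k w_k / (z - x_k), valid for z off the support."""
    return np.sum(weights / (z - nodes))

nodes, weights = np.polynomial.legendre.leggauss(200)
total_mass = weights.sum()                      # s(R) = 2 for this measure

# s_hat is analytic in C \ [-1, 1] and s_hat(z) ~ s(R)/z as z -> infinity.
for R in (10.0, 100.0, 1000.0):
    print(R, cauchy_transform(R, nodes, weights), total_mass / R)
```

For $z = 10$ the exact value is $\log(11/9)$, which a 200-point rule reproduces to machine precision since the integrand is analytic on the support.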

1.2.1 Angelesco Systems

In [2], A. Angelesco considered the following systems of functions. Let $\Delta_j$, $j = 1,\dots,m$, be pairwise disjoint bounded intervals contained in the real line and let $s_j$, $j = 1,\dots,m$, be a system of measures such that $\operatorname{Co}(\operatorname{supp} s_j) = \Delta_j$. Fix $\mathbf n \in \mathbb Z_+^m$ and consider the type II approximant of the so-called Angelesco system of functions $(\widehat s_1,\dots,\widehat s_m)$ relative to $\mathbf n$. It turns out that

$$\int x^\nu Q_{\mathbf n}(x)\,ds_j(x) = 0, \qquad \nu = 0,\dots,n_j - 1, \quad j = 1,\dots,m.$$

Therefore, $Q_{\mathbf n}$ has $n_j$ simple zeros in the interior (with respect to the Euclidean topology of $\mathbb R$) of $\Delta_j$. In consequence, since the intervals $\Delta_j$ are pairwise disjoint, $\deg Q_{\mathbf n} = |\mathbf n|$ and Angelesco systems are type II perfect. Unfortunately, Angelesco's paper received little attention, and such systems reappeared many years later in [39], where E.M. Nikishin deduced some of their formal properties. In [25] and [3], their logarithmic and strong asymptotic behavior, respectively, are given. These multiple orthogonal polynomials and the associated rational approximants have nice asymptotic formulas but not such good convergence properties. In this respect, a different system of Markov functions turns out to be more interesting and foundational from the geometric and analytic points of view.
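These orthogonality relations can be verified in a toy case. In the sketch below (our own illustration — the intervals, measures, and multi-index are not from the text) we take $m = 2$, Lebesgue measures on $\Delta_1 = [-2,-1]$ and $\Delta_2 = [1,2]$, and $\mathbf n = (1,1)$, and solve the two linear conditions for the monic $Q_{\mathbf n}$ of degree 2.

```python
import numpy as np

# Toy Angelesco check: Delta_1 = [-2, -1], Delta_2 = [1, 2], Lebesgue
# measures, n = (1, 1). Q_n(x) = x^2 + a*x + b must satisfy
#   ∫_{Delta_j} Q_n(x) ds_j(x) = 0,  j = 1, 2.

def gauss_on(a, b, k=50):
    """Gauss-Legendre nodes/weights mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(k)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

x1, w1 = gauss_on(-2.0, -1.0)
x2, w2 = gauss_on(1.0, 2.0)

# Linear system in (a, b) built from the moments of 1, x, x^2.
M = np.array([[w1 @ x1, w1.sum()],
              [w2 @ x2, w2.sum()]])
rhs = -np.array([w1 @ x1**2, w2 @ x2**2])
a, b = np.linalg.solve(M, rhs)

roots = np.sort(np.roots([1.0, a, b]))
print(roots)  # one simple zero inside each interval
```

By symmetry of the two intervals the solution is $Q_{\mathbf n}(x) = x^2 - 7/3$, with one simple zero in the interior of each $\Delta_j$, as the theory predicts.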

1.2.2 Nikishin Systems

In an attempt to construct general classes of functions for which normality takes place, in [40] E.M. Nikishin introduced the concept of an MT-system (now called a Nikishin system). Let $\Delta_\alpha$, $\Delta_\beta$ be two non-intersecting bounded intervals contained in the real line and let $\sigma_\alpha \in \mathcal M(\Delta_\alpha)$, $\sigma_\beta \in \mathcal M(\Delta_\beta)$. With these two measures we define a third one as follows (using the differential notation)

$$d\langle \sigma_\alpha, \sigma_\beta\rangle(x) = \widehat\sigma_\beta(x)\,d\sigma_\alpha(x);$$

that is, one multiplies the first measure by the weight formed by the Cauchy transform of the second measure. Certainly, this product of measures is non-commutative. Above, $\widehat\sigma_\beta$ denotes the Cauchy transform of the measure $\sigma_\beta$.


Definition 2 Take a collection $\Delta_j$, $j = 1,\dots,m$, of intervals such that

$$\Delta_j \cap \Delta_{j+1} = \emptyset, \qquad j = 1,\dots,m-1.$$

Let $(\sigma_1,\dots,\sigma_m)$ be a system of measures such that $\operatorname{Co}(\operatorname{supp}\sigma_j) = \Delta_j$, $\sigma_j \in \mathcal M(\Delta_j)$, $j = 1,\dots,m$. We say that $(s_{1,1},\dots,s_{1,m}) = \mathcal N(\sigma_1,\dots,\sigma_m)$, where

$$s_{1,1} = \sigma_1, \quad s_{1,2} = \langle\sigma_1,\sigma_2\rangle, \quad\dots,\quad s_{1,m} = \langle\sigma_1,\sigma_2,\dots,\sigma_m\rangle,$$

is the Nikishin system of measures generated by $(\sigma_1,\dots,\sigma_m)$.

Fix $\mathbf n \in \mathbb Z_+^m$ and consider the type II approximant of the Nikishin system of functions $(\widehat s_{1,1},\dots,\widehat s_{1,m})$ relative to $\mathbf n$. It is easy to prove that

$$\int x^\nu Q_{\mathbf n}(x)\,ds_{1,j}(x) = 0, \qquad \nu = 0,\dots,n_j - 1, \quad j = 1,\dots,m.$$

All the measures $s_{1,j}$ have the same support; therefore, it is not immediate to conclude that $\deg Q_{\mathbf n} = |\mathbf n|$. Nevertheless, if we denote

$$s_{j,k} = \langle\sigma_j, \sigma_{j+1},\dots,\sigma_k\rangle, \quad j < k, \qquad s_{j,j} = \langle\sigma_j\rangle = \sigma_j,$$

the previous orthogonality relations may be rewritten as follows

$$\int \Big(p_1(x) + \sum_{k=2}^m p_k(x)\,\widehat s_{2,k}(x)\Big)\, Q_{\mathbf n}(x)\,d\sigma_1(x) = 0, \tag{1}$$

where $p_1,\dots,p_m$ are arbitrary polynomials such that $\deg p_k \le n_k - 1$, $k = 1,\dots,m$.

Definition 3 A system of real continuous functions $u_1,\dots,u_m$ defined on an interval $\Delta$ is called an AT-system on $\Delta$ for the multi-index $\mathbf n \in \mathbb Z_+^m$ if for any choice of real polynomials (that is, with real coefficients) $p_1,\dots,p_m$, $\deg p_k \le n_k - 1$, the function

$$\sum_{k=1}^m p_k(x)\,u_k(x)$$

has at most $|\mathbf n| - 1$ zeros on $\Delta$. If this is true for all $\mathbf n \in \mathbb Z_+^m$ we have an AT-system on $\Delta$. In other words, $u_1,\dots,u_m$ forms an AT-system for $\mathbf n$ on $\Delta$ when the system of functions

$$(u_1,\dots,x^{n_1-1}u_1,\, u_2,\dots,x^{n_m-1}u_m)$$


is a Chebyshev system on $\Delta$ of order $|\mathbf n| - 1$. From the properties of Chebyshev systems (see [30, Theorem 1.1]), it follows that given $x_1,\dots,x_N$, $N < |\mathbf n|$, points in the interior of $\Delta$, one can find polynomials $h_1,\dots,h_m$, conveniently, with $\deg h_k \le n_k - 1$, such that $\sum_{k=1}^m h_k(x)u_k(x)$ changes sign at $x_1,\dots,x_N$ and has no other points where it changes sign on $\Delta$.

In [40], Nikishin stated without proof that the system of functions $(1, \widehat s_{2,2},\dots,\widehat s_{2,m})$ forms an AT-system for all multi-indices $\mathbf n \in \mathbb Z_+^m$ such that $n_1 \ge n_2 \ge \dots \ge n_m$ (he proved it when additionally $n_1 - n_m \le 1$). Due to (1) this implies that Nikishin systems are type II weakly perfect. Ever since the appearance of [40], a subject of major interest for those involved in simultaneous approximation was to determine whether or not Nikishin systems are perfect. This problem was settled positively in [20] (see also [21], where Nikishin systems with unbounded and/or touching supports are considered).

In the last two decades, a general theory of multiple orthogonal polynomials and Hermite-Padé approximation has emerged which to a great extent matches what is known to occur for standard orthogonal polynomials and Padé approximation. From the approximation point of view, Markov and Stieltjes type theorems have been obtained (see, for example, [12, 16–19, 25, 27, 35, 36, 40]). From the point of view of the asymptotic properties of multiple orthogonal polynomials, there are results concerning their weak, ratio, and strong asymptotic behavior (see, for example, [3, 4, 6, 22, 33, 35]). This is especially so for Nikishin systems of measures, on which we will focus in this brief introduction.
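Before turning to proofs, the objects above can be made concrete numerically. In the sketch below all concrete choices — intervals, measures, the multi-index, and the test polynomials — are our own toy illustrations, not from the text: for $m = 2$, with $\sigma_1, \sigma_2$ Lebesgue on $\Delta_1 = [-1,0]$ and $\Delta_2 = [1,2]$, we compute the type II polynomial $Q_{\mathbf n}$ for $\mathbf n = (2,1)$ and verify the rewritten orthogonality (1).

```python
import numpy as np

def gauss_on(a, b, k=80):
    """Gauss-Legendre nodes/weights mapped to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(k)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

# Toy Nikishin generators: sigma_1 = dx on [-1, 0], sigma_2 = dx on [1, 2].
x1, w1 = gauss_on(-1.0, 0.0)
x2, w2 = gauss_on(1.0, 2.0)

def s2_hat(x):
    """Cauchy transform of sigma_2, evaluated at points of Delta_1."""
    return np.array([np.sum(w2 / (xi - x2)) for xi in x])

h = s2_hat(x1)

# Type II conditions for n = (2, 1): Q monic of degree 3 with
#   ∫ Q d s_{1,1} = ∫ x Q d s_{1,1} = 0  and  ∫ Q d s_{1,2} = 0,
# where d s_{1,1} = d sigma_1 and d s_{1,2}(x) = s2_hat(x) d sigma_1(x).
u_funcs = [np.ones_like(x1), x1, h]
A = np.array([[w1 @ (u * x1**j) for j in range(3)] for u in u_funcs])
rhs = -np.array([w1 @ (u * x1**3) for u in u_funcs])
c0, c1, c2 = np.linalg.solve(A, rhs)
Q = lambda t: ((t + c2) * t + c1) * t + c0

# Rewritten orthogonality (1): for p1 of degree <= 1 and constant p2,
#   ∫ (p1 + p2 * s2_hat) Q d sigma_1 = 0.
val = w1 @ ((3.0 - 2.0 * x1 + 0.7 * h) * Q(x1))
print(abs(val))  # ~ 0 up to rounding
```

Since (1) is a linear combination of the three defining conditions, the residual vanishes to machine precision regardless of the quadrature accuracy.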

2 On the Perfectness of Nikishin Systems

Let us begin with the following result, which was established in [20]. It constitutes the key to the proof of many interesting properties of Nikishin systems; in particular, the fact that they are perfect. From the definition it is obvious that if $(\sigma_1,\dots,\sigma_m)$ generates a Nikishin system, so does $(\sigma_j,\dots,\sigma_k)$, where $1 \le j < k \le m$.

Theorem 1 Let $(s_{1,1},\dots,s_{1,m}) = \mathcal N(\sigma_1,\dots,\sigma_m)$ be given. Then $(1, \widehat s_{1,1},\dots,\widehat s_{1,m})$ forms an AT-system on any interval disjoint from $\Delta_1 = \operatorname{Co}(\operatorname{supp}\sigma_1)$. Moreover, for each $\mathbf n = (n_0,\dots,n_m) \in \mathbb Z_+^{m+1}$ and arbitrary polynomials with real coefficients $p_k$, $\deg p_k \le n_k - 1$, $k = 0,\dots,m$, the linear form $p_0 + \sum_{k=1}^m p_k \widehat s_{1,k}$ has at most $|\mathbf n| - 1$ zeros in $\mathbb C \setminus \Delta_1$.

For arbitrary multi-indices the proof is quite complicated and is based on intricate transformations which allow one to reduce the problem to the case of multi-indices with decreasing components. For such multi-indices the proof is fairly straightforward, and we will limit ourselves to that situation. Let us first present two auxiliary lemmas.

Lemma 1 Let $(s_{1,1},\dots,s_{1,m}) = \mathcal N(\sigma_1,\dots,\sigma_m)$ be given. Assume that there exist polynomials with real coefficients $\ell_0,\dots,\ell_m$ and a polynomial $w$ with real coefficients whose zeros lie in $\mathbb C \setminus \Delta_1$ such that

$$\frac{\mathcal L_0(z)}{w(z)} \in H(\mathbb C \setminus \Delta_1) \quad\text{and}\quad \frac{\mathcal L_0(z)}{w(z)} = O\!\left(\frac{1}{z^N}\right), \quad z \to \infty,$$

where $\mathcal L_0 := \ell_0 + \sum_{k=1}^m \ell_k \widehat s_{1,k}$ and $N \ge 1$. Let $\mathcal L_1 := \ell_1 + \sum_{k=2}^m \ell_k \widehat s_{2,k}$. Then

$$\frac{\mathcal L_0(z)}{w(z)} = \int \frac{\mathcal L_1(x)}{z - x}\,\frac{d\sigma_1(x)}{w(x)}. \tag{2}$$

If $N \ge 2$, we also have

$$\int x^\nu \mathcal L_1(x)\,\frac{d\sigma_1(x)}{w(x)} = 0, \qquad \nu = 0,\dots,N-2. \tag{3}$$

In particular, $\mathcal L_1$ has at least $N - 1$ sign changes in the interior of $\Delta_1$ (with respect to the Euclidean topology of $\mathbb R$).

Proof Let $\Gamma$ be a positively oriented closed smooth Jordan curve that surrounds $\Delta_1$, sufficiently close to $\Delta_1$. Since $\mathcal L_0(z)/w(z) = O(1/z)$, $z \to \infty$, if $z$ and the zeros of $w(z)$ are in the unbounded connected component of the complement of $\Gamma$, Cauchy's integral formula and Fubini's theorem render

$$\frac{\mathcal L_0(z)}{w(z)} = \frac{1}{2\pi i}\int_\Gamma \frac{\mathcal L_0(\zeta)}{w(\zeta)}\,\frac{d\zeta}{z-\zeta} = \frac{1}{2\pi i}\sum_{k=1}^m \int_\Gamma \frac{\ell_k(\zeta)\,\widehat s_{1,k}(\zeta)\,d\zeta}{w(\zeta)(z-\zeta)} = \sum_{k=1}^m \int\left(\frac{1}{2\pi i}\int_\Gamma \frac{\ell_k(\zeta)\,d\zeta}{w(\zeta)(z-\zeta)(\zeta-x)}\right)ds_{1,k}(x) = \sum_{k=1}^m \int \frac{\ell_k(x)\,ds_{1,k}(x)}{w(x)(z-x)} = \int \frac{\mathcal L_1(x)}{z-x}\,\frac{d\sigma_1(x)}{w(x)},$$

which is (2). When $N \ge 2$, it follows that $z^\nu \mathcal L_0(z)/w(z) = O(1/z^2)$, $z \to \infty$, for $\nu = 0,\dots,N-2$. Then, using Cauchy's theorem, Fubini's theorem, and Cauchy's integral formula, it follows that

$$0 = \int_\Gamma \frac{z^\nu \mathcal L_0(z)}{w(z)}\,dz = \sum_{k=1}^m \int_\Gamma \frac{z^\nu \ell_k(z)\,\widehat s_{1,k}(z)}{w(z)}\,dz = \sum_{k=1}^m \int\left(\int_\Gamma \frac{z^\nu \ell_k(z)\,dz}{w(z)(z-x)}\right)ds_{1,k}(x) = 2\pi i \sum_{k=1}^m \int \frac{x^\nu \ell_k(x)\,ds_{1,k}(x)}{w(x)} = 2\pi i \int x^\nu \mathcal L_1(x)\,\frac{d\sigma_1(x)}{w(x)},$$

and we obtain (3). □


Lemma 2 Let $(s_{1,1},\dots,s_{1,m}) = \mathcal N(\sigma_1,\dots,\sigma_m)$ and $\mathbf n = (n_0,\dots,n_m) \in \mathbb Z_+^{m+1}$ be given. Consider the linear form

$$\mathcal L_{\mathbf n} = p_0 + \sum_{k=1}^m p_k \widehat s_{1,k}, \qquad \deg p_k \le n_k - 1, \quad k = 0,\dots,m,$$

where the polynomials $p_k$ have real coefficients. Assume that $n_0 = \max\{n_0, n_1 - 1,\dots,n_m - 1\}$. If $\mathcal L_{\mathbf n}$ had at least $|\mathbf n|$ zeros in $\mathbb C \setminus \Delta_1$, the reduced form $p_1 + \sum_{k=2}^m p_k \widehat s_{2,k}$ would have at least $|\mathbf n| - n_0$ zeros in $\mathbb C \setminus \Delta_2$.

Proof The function $\mathcal L_{\mathbf n}$ is symmetric with respect to the real line, $\mathcal L_{\mathbf n}(\overline z) = \overline{\mathcal L_{\mathbf n}(z)}$; therefore, its zeros come in conjugate pairs. Thus, if $\mathcal L_{\mathbf n}$ has at least $|\mathbf n|$ zeros in $\mathbb C \setminus \Delta_1$, there exists a polynomial $w_{\mathbf n}$, $\deg w_{\mathbf n} \ge |\mathbf n|$, with real coefficients and zeros contained in $\mathbb C \setminus \Delta_1$, such that $\mathcal L_{\mathbf n}/w_{\mathbf n} \in H(\mathbb C \setminus \Delta_1)$. This function has a zero of order $\ge |\mathbf n| - n_0 + 1$ at $\infty$. Consequently, for all $\nu = 0,\dots,|\mathbf n| - n_0 - 1$,

$$\frac{z^\nu \mathcal L_{\mathbf n}}{w_{\mathbf n}} = O(1/z^2) \in H(\mathbb C \setminus \Delta_1), \qquad z \to \infty,$$

and

$$\frac{z^\nu \mathcal L_{\mathbf n}}{w_{\mathbf n}} = \frac{z^\nu p_0}{w_{\mathbf n}} + \sum_{k=1}^m \frac{z^\nu p_k}{w_{\mathbf n}}\,\widehat s_{1,k}.$$

From (3), it follows that

$$0 = \int x^\nu \Big(p_1 + \sum_{k=2}^m p_k \widehat s_{2,k}\Big)(x)\,\frac{d\sigma_1(x)}{w_{\mathbf n}(x)}, \qquad \nu = 0,\dots,|\mathbf n| - n_0 - 1,$$

taking into consideration that $s_{1,1} = \sigma_1$ and $ds_{1,k}(x) = \widehat s_{2,k}(x)\,d\sigma_1(x)$, $k = 2,\dots,m$. These orthogonality relations imply that $p_1 + \sum_{k=2}^m p_k \widehat s_{2,k}$ has at least $|\mathbf n| - n_0$ sign changes in the interior of $\Delta_1$. In fact, if there were at most $|\mathbf n| - n_0 - 1$ sign changes, one could easily construct a polynomial $p$ of degree $\le |\mathbf n| - n_0 - 1$ such that $p\,(p_1 + \sum_{k=2}^m p_k \widehat s_{2,k})$ does not change sign on $\Delta_1$, which contradicts the orthogonality relations. Therefore, already in the interior of $\Delta_1 \subset \mathbb C \setminus \Delta_2$, the reduced form would have the number of zeros claimed. □

Proof of Theorem 1 when $n_0 \ge n_1 \ge \dots \ge n_m$ In this situation, assume that the linear form $p_0 + \sum_{k=1}^m p_k \widehat s_{1,k}$ has at least $|\mathbf n|$ zeros in $\mathbb C \setminus \Delta_1$. Applying Lemma 2 consecutively $m$ times, we would arrive at the conclusion that $p_m$ has at least $n_m$ zeros in $\mathbb C$, but this is impossible since its degree is $\le n_m - 1$. □

From Theorem 1 the following result readily follows.


Theorem 2 Nikishin systems are type I and type II perfect.

Proof Consider a Nikishin system $\mathcal N(\sigma_1,\dots,\sigma_m)$. Let us prove that it is type I perfect. Given a multi-index $\mathbf n \in \mathbb Z_+^m$, condition ii) for the system of functions $(\widehat s_{1,1},\dots,\widehat s_{1,m})$ implies that

$$\int x^\nu\big(a_{\mathbf n,1} + a_{\mathbf n,2}\widehat s_{2,2} + \dots + a_{\mathbf n,m}\widehat s_{2,m}\big)(x)\,d\sigma_1(x) = 0, \qquad \nu = 0,\dots,|\mathbf n| - 2.$$

These orthogonality relations imply that $A_{\mathbf n,1} := a_{\mathbf n,1} + a_{\mathbf n,2}\widehat s_{2,2} + \dots + a_{\mathbf n,m}\widehat s_{2,m}$ has at least $|\mathbf n| - 1$ sign changes in the interior of $\Delta_1$ (with the Euclidean topology of $\mathbb R$). Suppose that $\mathbf n$ is not normal; that is, $\deg a_{\mathbf n,j} \le n_j - 2$ for some $j = 1,\dots,m$. Then, according to Theorem 1, $A_{\mathbf n,1}$ can have at most $|\mathbf n| - 2$ zeros in $\mathbb C \setminus \Delta_2$. This contradicts the previous assertion, so such a component $j$ cannot exist in $\mathbf n$.

The proof of type II perfectness is analogous. Suppose that there exists an $\mathbf n$ such that $Q_{\mathbf n}$ has fewer than $|\mathbf n|$ sign changes in the interior of $\Delta_1$. Let $x_k$, $k = 1,\dots,N$, $N \le |\mathbf n| - 1$, be the points where it changes sign. Construct a linear form

$$p_1 + p_2\widehat s_{2,2} + \dots + p_m\widehat s_{2,m}, \qquad \deg p_k \le n_k - 1, \quad k = 1,\dots,m,$$

with a simple zero at each of the points $x_k$ and a zero of multiplicity $|\mathbf n| - N - 1$ at one of the end points of $\Delta_1$. This is possible because there are sufficiently many free parameters in the coefficients of the polynomials $p_k$, and the form is analytic on a neighborhood of $\Delta_1$. By Theorem 1 this linear form cannot have any more zeros in the complement of $\Delta_2$ than those that have been assigned. However, using (1), we have

$$\int \Big(p_1(x) + \sum_{k=2}^m p_k(x)\,\widehat s_{2,k}(x)\Big)\,Q_{\mathbf n}(x)\,d\sigma_1(x) = 0,$$

which is not possible since the function under the integral sign has constant sign on $\Delta_1$ and is not identically equal to zero. □

3 On the Interlacing Property of Zeros

In the sequel, we restrict our attention to multi-indices in

$$\mathbb Z_+^m(\bullet) := \{\mathbf n \in \mathbb Z_+^m : n_1 \ge \dots \ge n_m\}.$$


3.1 Interlacing for Type I

Fix $\mathbf n \in \mathbb Z_+^m(\bullet)$. Consider the type I Hermite-Padé approximant $(a_{\mathbf n,0},\dots,a_{\mathbf n,m})$ of $(\widehat s_{1,1},\dots,\widehat s_{1,m})$ for the multi-index $\mathbf n = (n_1,\dots,n_m)$. Set

$$A_{\mathbf n,k} = a_{\mathbf n,k} + \sum_{j=k+1}^m a_{\mathbf n,j}\,\widehat s_{k+1,j}, \qquad k = 0,\dots,m-1.$$

We take $A_{\mathbf n,m} = a_{\mathbf n,m}$.

Proposition 1 For each $k = 1,\dots,m$ the linear form $A_{\mathbf n,k}$ has exactly $n_k + \dots + n_m - 1$ zeros in $\mathbb C \setminus \Delta_{k+1}$, where $\Delta_{m+1} = \emptyset$; they are all simple and lie in the interior of $\Delta_k$. Let $\mathcal A_{\mathbf n,k}$ be the monic polynomial whose roots are the zeros of $A_{\mathbf n,k}$ on $\Delta_k$. Then

$$\int x^\nu A_{\mathbf n,k}(x)\,\frac{d\sigma_k(x)}{\mathcal A_{\mathbf n,k-1}(x)} = 0, \qquad \nu = 0,\dots,n_k + \dots + n_m - 2, \quad k = 1,\dots,m, \tag{4}$$

where $\mathcal A_{\mathbf n,0} \equiv 1$.

Proof According to Theorem 1 applied to the Nikishin system $\mathcal N(\sigma_{k+1},\dots,\sigma_m)$, the linear form $A_{\mathbf n,k}$ cannot have more than $n_k + \dots + n_m - 1$ zeros in $\mathbb C \setminus \Delta_{k+1}$. So it suffices to show that it has at least $n_k + \dots + n_m - 1$ sign changes on $\Delta_k$ to prove that it has exactly $n_k + \dots + n_m - 1$ simple roots in $\mathbb C \setminus \Delta_{k+1}$, which lie in the interior of $\Delta_k$. We do this by producing consecutively the orthogonality relations (4).

From the definition of type I Hermite-Padé approximation it follows that $z^\nu A_{\mathbf n,0} = O(1/z^2)$, $\nu = 0,\dots,|\mathbf n| - 2$. From (3) we get that

$$\int x^\nu A_{\mathbf n,1}(x)\,d\sigma_1(x) = 0, \qquad \nu = 0,\dots,n_1 + \dots + n_m - 2,$$

which is (4) when $k = 1$. Therefore, $A_{\mathbf n,1}$ has at least $n_1 + \dots + n_m - 1$ sign changes on $\Delta_1$, as we needed to prove. Let $\mathcal A_{\mathbf n,1}$ be the monic polynomial whose roots are the zeros of $A_{\mathbf n,1}$ on $\Delta_1$. Since $z^\nu A_{\mathbf n,1}/\mathcal A_{\mathbf n,1} = O(1/z^2)$, $\nu = 0,\dots,n_2 + \dots + n_m - 2$ (recall that the multi-index has decreasing components), from (3) we get (4) for $k = 2$, which implies that $A_{\mathbf n,2}$ has at least $n_2 + \dots + n_m - 1$ sign changes on $\Delta_2$, as needed. We repeat the process until we arrive at $A_{\mathbf n,m} = a_{\mathbf n,m}$. □

Fix $\ell \in \{1,\dots,m\}$. Given $\mathbf n \in \mathbb Z_+^m(\bullet)$, we denote $\mathbf n^\ell = \mathbf n + \mathbf e_\ell$, where $\mathbf e_\ell$ is the $m$-dimensional canonical vector with 1 in its $\ell$-th component and 0 everywhere else; that is, $\mathbf n^\ell$ is the multi-index obtained by adding 1 to the $\ell$-th component of $\mathbf n$. Notice that $\mathbf n^\ell$ need not belong to $\mathbb Z_+^m(\bullet)$; however, we can construct the linear forms $A_{\mathbf n^\ell,k}$ corresponding to the type I Hermite-Padé approximation with respect to $\mathbf n^\ell$.

Theorem 3 Assume that $\mathbf n \in \mathbb Z_+^m(\bullet)$ and $\ell \in \{1,\dots,m\}$. For each $k = 1,\dots,m$, $A_{\mathbf n^\ell,k}$ has at most $n_k + \dots + n_m$ zeros in $\mathbb C \setminus \Delta_{k+1}$ and at least $n_k + \dots + n_m - 1$ sign changes in $\Delta_k$. Therefore, all its zeros are real and simple. The zeros of $A_{\mathbf n,k}$ and $A_{\mathbf n^\ell,k}$ in $\mathbb C \setminus \Delta_{k+1}$ interlace.

Proof Let $A, B$ be real constants such that $|A| + |B| > 0$. For $k = 1,\dots,m$, consider the forms $G_{\mathbf n,k} := A\,A_{\mathbf n,k} + B\,A_{\mathbf n^\ell,k}$. From Theorem 1 it follows that $G_{\mathbf n,k}$ has at most $n_k + \dots + n_m$ zeros in $\mathbb C \setminus \Delta_{k+1}$. Let us prove that it has at least $n_k + \dots + n_m - 1$ sign changes on $\Delta_k$. Once this is achieved, we know that all the zeros of $G_{\mathbf n,k}$ are simple and lie on the real line. In particular, this would be true for $A_{\mathbf n^\ell,k}$.

Let us start with $k = 1$. Notice that

$$\int x^\nu G_{\mathbf n,1}(x)\,d\sigma_1(x) = 0, \qquad \nu = 0,\dots,n_1 + \dots + n_m - 2.$$

Consequently, $G_{\mathbf n,1}$ has at least $n_1 + \dots + n_m - 1$ sign changes on $\Delta_1$, as claimed. Therefore, its zeros in $\mathbb C \setminus \Delta_2$ are real and simple. Let $W_{\mathbf n,1}$ be the monic polynomial whose roots are the simple zeros of $G_{\mathbf n,1}$ in $\mathbb C \setminus \Delta_2$. Observe that $z^\nu G_{\mathbf n,1}/W_{\mathbf n,1} = O(1/z^2)$, $\nu = 0,\dots,n_2 + \dots + n_m - 2$. Consequently, from (3),

$$\int x^\nu G_{\mathbf n,2}(x)\,\frac{d\sigma_2(x)}{W_{\mathbf n,1}(x)} = 0, \qquad \nu = 0,\dots,n_2 + \dots + n_m - 2,$$

which implies that $G_{\mathbf n,2}$ has at least $n_2 + \dots + n_m - 1$ sign changes on $\Delta_2$. So all its zeros are real and simple. Repeating the same arguments, we obtain the claim about the number of sign changes of $G_{\mathbf n,k}$ on $\Delta_k$ and that all its zeros are real and simple for each $k = 1,\dots,m$.

Let us check that $A_{\mathbf n,k}$ and $A_{\mathbf n^\ell,k}$ do not have common zeros in $\mathbb C \setminus \Delta_{k+1}$. To the contrary, assume that $x_0$ is such a common zero. We have $A_{\mathbf n,k}'(x_0) \ne 0 \ne A_{\mathbf n^\ell,k}'(x_0)$ because the zeros of these forms are simple. Then $A_{\mathbf n^\ell,k}'(x_0)A_{\mathbf n,k} - A_{\mathbf n,k}'(x_0)A_{\mathbf n^\ell,k}$ has a double zero at $x_0$, against what was proved above. Fix $y \in \mathbb R \setminus \Delta_{k+1}$ and set

$$G_{\mathbf n,k}^y(z) = A_{\mathbf n^\ell,k}(z)A_{\mathbf n,k}(y) - A_{\mathbf n^\ell,k}(y)A_{\mathbf n,k}(z).$$


Let $x_1, x_2$, $x_1 < x_2$, be two consecutive zeros of $A_{\mathbf n^\ell,k}$ in $\mathbb R \setminus \Delta_{k+1}$ and let $y \in (x_1, x_2)$. The function $G_{\mathbf n,k}^y(z)$ is real valued when restricted to $\mathbb R \setminus \Delta_{k+1}$ and analytic in $\mathbb C \setminus \Delta_{k+1}$. We have $(G_{\mathbf n,k}^y)'(z) = A_{\mathbf n^\ell,k}'(z)A_{\mathbf n,k}(y) - A_{\mathbf n^\ell,k}(y)A_{\mathbf n,k}'(z)$. Assume that $(G_{\mathbf n,k}^{y_0})'(y_0) = 0$ for some $y_0 \in (x_1, x_2)$. Since $G_{\mathbf n,k}^y(y) = 0$ for all $y \in (x_1, x_2)$, we obtain that $G_{\mathbf n,k}^{y_0}(z)$ has a zero of order $\ge 2$ (with respect to $z$) at $y_0$, which contradicts what was proved above. Consequently,

$$(G_{\mathbf n,k}^y)'(y) = A_{\mathbf n^\ell,k}'(y)A_{\mathbf n,k}(y) - A_{\mathbf n^\ell,k}(y)A_{\mathbf n,k}'(y)$$

takes values with constant sign for all $y \in (x_1, x_2)$. At the end points $x_1, x_2$, this function cannot be equal to zero because $A_{\mathbf n,k}$, $A_{\mathbf n^\ell,k}$ do not have common zeros. By continuity, $(G_{\mathbf n,k}^y)'$ preserves the same sign on all $[x_\nu, x_{\nu+1}]$ (and, consequently, on each side of the interval $\Delta_{k+1}$). Thus

$$\operatorname{sign}\big(A_{\mathbf n^\ell,k}'(x_1)A_{\mathbf n,k}(x_1)\big) = \operatorname{sign}\big((G_{\mathbf n,k}^{x_1})'(x_1)\big) = \operatorname{sign}\big((G_{\mathbf n,k}^{x_2})'(x_2)\big) = \operatorname{sign}\big(A_{\mathbf n^\ell,k}'(x_2)A_{\mathbf n,k}(x_2)\big).$$

Since $\operatorname{sign}\big(A_{\mathbf n^\ell,k}'(x_1)\big) \ne \operatorname{sign}\big(A_{\mathbf n^\ell,k}'(x_2)\big)$ (the derivative alternates sign at consecutive simple zeros), we obtain that $\operatorname{sign} A_{\mathbf n,k}(x_1) \ne \operatorname{sign} A_{\mathbf n,k}(x_2)$; consequently, there must be an intermediate zero of $A_{\mathbf n,k}$ between $x_1$ and $x_2$.


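For $m = 1$ this interlacing reduces to the classical fact that the zeros of consecutive orthogonal polynomials interlace. A quick numerical sanity check of that scalar analogue (Legendre polynomials are our illustrative choice; they do not appear in the text):

```python
import numpy as np
from numpy.polynomial import legendre

# Scalar analogue of the interlacing above: between two consecutive zeros
# of P_{n+1} lies exactly one zero of P_n (Legendre polynomials here).

def legendre_zeros(n):
    # a coefficient vector with a single 1 in position n selects P_n
    return np.sort(np.real(legendre.legroots([0.0] * n + [1.0])))

z5, z6 = legendre_zeros(5), legendre_zeros(6)
interlaced = all(z6[i] < z5[i] < z6[i + 1] for i in range(5))
print(interlaced)  # True
```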

The linear forms $A_{\mathbf n,k}$, $k = 0,\dots,m$, satisfy nice iterative integral representations. Let us introduce the following functions

$$H_{\mathbf n,k}(z) := \frac{\mathcal A_{\mathbf n,k+1}(z)\,A_{\mathbf n,k}(z)}{\mathcal A_{\mathbf n,k}(z)}, \qquad k = 0,\dots,m, \tag{5}$$

where the $\mathcal A_{\mathbf n,k}$ are the polynomials introduced in the statement of Proposition 1. We take $\mathcal A_{\mathbf n,0} \equiv 1 \equiv \mathcal A_{\mathbf n,m+1}$. We have:

Proposition 2 For each $k = 0,\dots,m-1$,

$$H_{\mathbf n,k}(z) = \int \frac{\mathcal A_{\mathbf n,k+1}^2(x)}{z - x}\,\frac{H_{\mathbf n,k+1}(x)\,d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)\,\mathcal A_{\mathbf n,k+2}(x)} \tag{6}$$

and

$$\int x^\nu \mathcal A_{\mathbf n,k+1}(x)\,\frac{H_{\mathbf n,k+1}(x)\,d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)\,\mathcal A_{\mathbf n,k+2}(x)} = 0, \qquad \nu = 0,\dots,n_{k+1} + \dots + n_m - 2. \tag{7}$$

Proof Relation (7) is (4) (with the index $k$ shifted by one) written in the notation introduced for the functions $H_{\mathbf n,k}(z)$. From (2) applied to the function $A_{\mathbf n,k}/\mathcal A_{\mathbf n,k}$, $k = 0,\dots,m-1$, we obtain

$$\frac{A_{\mathbf n,k}(z)}{\mathcal A_{\mathbf n,k}(z)} = \int \frac{A_{\mathbf n,k+1}(x)}{z - x}\,\frac{d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)}.$$

Using (4) with $k + 1$ replacing $k$, it follows that

$$\int \frac{\mathcal A_{\mathbf n,k+1}(z) - \mathcal A_{\mathbf n,k+1}(x)}{z - x}\,A_{\mathbf n,k+1}(x)\,\frac{d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)} = 0.$$

Combining these two integral formulas we get

$$\frac{\mathcal A_{\mathbf n,k+1}(z)\,A_{\mathbf n,k}(z)}{\mathcal A_{\mathbf n,k}(z)} = \int \frac{\mathcal A_{\mathbf n,k+1}(x)\,A_{\mathbf n,k+1}(x)}{z - x}\,\frac{d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)} = \int \frac{\mathcal A_{\mathbf n,k+1}^2(x)}{z - x}\,\frac{\mathcal A_{\mathbf n,k+2}(x)\,A_{\mathbf n,k+1}(x)}{\mathcal A_{\mathbf n,k+1}(x)}\,\frac{d\sigma_{k+1}(x)}{\mathcal A_{\mathbf n,k}(x)\,\mathcal A_{\mathbf n,k+2}(x)},$$

which is (6). Notice that the varying measure appearing in (6) has constant sign on $\Delta_{k+1}$. □

3.2 Interlacing for Type II

Now, let us see what happens with the type II Hermite-Padé approximants. Let us introduce the following functions:

$$\Psi_{\mathbf n,0}(z) = Q_{\mathbf n}(z), \qquad \Psi_{\mathbf n,k}(z) = \int \frac{\Psi_{\mathbf n,k-1}(x)}{z - x}\,d\sigma_k(x), \quad k = 1,\dots,m,$$

where $Q_{\mathbf n}$ is the type II multiple orthogonal polynomial. The next result is [27, Proposition 1].

Proposition 3 Let $\mathbf n \in \mathbb Z_+^m(\bullet)$. For each $k = 1,\dots,m$,

$$\int \Psi_{\mathbf n,k-1}(x)\Big(p_k(x) + \sum_{j=k+1}^m p_j(x)\,\widehat s_{k+1,j}(x)\Big)\,d\sigma_k(x) = 0, \tag{8}$$

where $\deg p_j \le n_j - 1$, $j = k,\dots,m$. For $k = 0,\dots,m-1$, the function $\Psi_{\mathbf n,k}$ has exactly $n_{k+1} + \dots + n_m$ zeros in $\mathbb C \setminus \Delta_k$ (where $\Delta_0 = \emptyset$); they are all simple and lie in the interior of $\Delta_{k+1}$. The function $\Psi_{\mathbf n,m}$ has no roots in $\mathbb C \setminus \Delta_m$.

Proof The statement about the zeros of $Q_{\mathbf n} = \Psi_{\mathbf n,0}$ was proved above. The proof of (8) is carried out by induction on $k$. The statement is equivalent to showing that for each $k = 1,\dots,m$ we have

$$\int x^\nu \Psi_{\mathbf n,k-1}(x)\,ds_{k,j}(x) = 0, \qquad \nu = 0,\dots,n_j - 1, \quad j = k,\dots,m. \tag{9}$$

When $k = 1$ we have $\Psi_{\mathbf n,0} = Q_{\mathbf n}$ and (9) reduces to the orthogonality relations which define $Q_{\mathbf n}$. So the basis of induction is settled. Assume that (9) is true for $k \le m - 1$ and let us show that it also holds for $k + 1$. Take $k + 1 \le j \le m$ and $\nu \le n_j - 1$. Then

$$\int x^\nu \Psi_{\mathbf n,k}(x)\,ds_{k+1,j}(x) = \int x^\nu \int \frac{\Psi_{\mathbf n,k-1}(t)\,d\sigma_k(t)}{x - t}\,ds_{k+1,j}(x) = \int\!\!\int \Psi_{\mathbf n,k-1}(t)\,\frac{x^\nu - t^\nu + t^\nu}{x - t}\,ds_{k+1,j}(x)\,d\sigma_k(t) = \int \Psi_{\mathbf n,k-1}(t)\,p_{\nu-1}(t)\,d\sigma_k(t) + \int t^\nu \Psi_{\mathbf n,k-1}(t)\,\widehat s_{k+1,j}(t)\,d\sigma_k(t),$$

where $p_{\nu-1}$ is a polynomial of degree $\le \nu - 1 \le n_j - 2 < n_k - 1$ (because the indices have decreasing components). Since $\widehat s_{k+1,j}(t)\,d\sigma_k(t) = ds_{k,j}(t)$, the induction hypothesis renders that both integrals on the last line equal zero, and we obtain what we need.

Applying (9) with $j = k$, we have

$$\int \frac{z^{n_k} - x^{n_k}}{z - x}\,\Psi_{\mathbf n,k-1}(x)\,d\sigma_k(x) = 0.$$

Therefore, using the definition of $\Psi_{\mathbf n,k}$, it follows that

$$z^{n_k}\,\Psi_{\mathbf n,k}(z) = \int \frac{x^{n_k}\,\Psi_{\mathbf n,k-1}(x)}{z - x}\,d\sigma_k(x) = O(1/z), \qquad z \to \infty.$$

In other words,

$$\Psi_{\mathbf n,k}(z) = O(1/z^{n_k + 1}), \qquad z \to \infty.$$

Relations (8), applied with $k$ replaced by $k + 1 \le m$, together with Theorem 1, imply that $\Psi_{\mathbf n,k}$ has at least $n_{k+1} + \dots + n_m$ sign changes on the interval $\Delta_{k+1}$. Let $Q_{\mathbf n,k+1}$ be the monic polynomial whose roots are the zeros of $\Psi_{\mathbf n,k}$ in $\mathbb C \setminus \Delta_k$. Take $Q_{\mathbf n,m+1} \equiv 1$. Obviously, $\deg Q_{\mathbf n,k+1} \ge N_{k+1} := n_{k+1} + \dots + n_m$, $N_{m+1} = 0$. Notice that $\Psi_{\mathbf n,k}/Q_{\mathbf n,k+1} \in H(\mathbb C \setminus \Delta_k)$ and $\Psi_{\mathbf n,k}/Q_{\mathbf n,k+1} = O(1/z^{n_k + N_{k+1} + 1})$, $z \to \infty$, and the order is $> N_k + 1$ if $\deg Q_{\mathbf n,k+1} > N_{k+1}$. Similar to the way in which (2) and (3) were proved, it follows that

$$\frac{\Psi_{\mathbf n,k}(z)}{Q_{\mathbf n,k+1}(z)} = \int \frac{\Psi_{\mathbf n,k-1}(x)}{z - x}\,\frac{d\sigma_k(x)}{Q_{\mathbf n,k+1}(x)} \tag{10}$$

and

$$\int x^\nu \Psi_{\mathbf n,k-1}(x)\,\frac{d\sigma_k(x)}{Q_{\mathbf n,k+1}(x)} = 0, \qquad \nu = 0,\dots,N_k - 1. \tag{11}$$

The second of these relations also implies that $\Psi_{\mathbf n,k-1}$ has at least $N_k$ sign changes on $\Delta_k$. Notice that should $\Psi_{\mathbf n,k}$ have more than $N_{k+1}$ zeros in $\mathbb C \setminus \Delta_k$, we would get at least one more orthogonality relation in (11), and then $\Psi_{\mathbf n,k-1}$ would have more than $N_k$ zeros in $\mathbb C \setminus \Delta_{k-1}$ ($\Delta_0 = \emptyset$). Applying this argument for decreasing values of $k$, we obtain that if for some $k = 1,\dots,m$, $\Psi_{\mathbf n,k}$ has more than $N_{k+1}$ zeros in $\mathbb C \setminus \Delta_k$, then $\Psi_{\mathbf n,0} = Q_{\mathbf n}$ would have more than $|\mathbf n| = N_1 = n_1 + \dots + n_m$ zeros in $\mathbb C$, which is not possible. Consequently, for $k = 1,\dots,m$ the function $\Psi_{\mathbf n,k}$ has exactly $N_{k+1}$ zeros in $\mathbb C \setminus \Delta_k$; they are all simple and lie in the interior of $\Delta_{k+1}$, $\Delta_{m+1} = \emptyset$, as stated. □

Set

$$H_{\mathbf n,k} := \frac{Q_{\mathbf n,k-1}\,\Psi_{\mathbf n,k-1}}{Q_{\mathbf n,k}}.$$

Proposition 4 Fix $\mathbf n \in \mathbb Z_+^m(\bullet)$. For each $k = 1,\dots,m$,

$$\int x^\nu Q_{\mathbf n,k}(x)\,\frac{H_{\mathbf n,k}(x)\,d\sigma_k(x)}{Q_{\mathbf n,k-1}(x)\,Q_{\mathbf n,k+1}(x)} = 0, \qquad \nu = 0,\dots,n_k + \dots + n_m - 1, \tag{12}$$

and

$$H_{\mathbf n,k+1}(z) = \int \frac{Q_{\mathbf n,k}^2(x)}{z - x}\,\frac{H_{\mathbf n,k}(x)\,d\sigma_k(x)}{Q_{\mathbf n,k-1}(x)\,Q_{\mathbf n,k+1}(x)}, \tag{13}$$

where $Q_{\mathbf n,0} = Q_{\mathbf n,m+1} \equiv 1$.

Proof Using the notation introduced for the functions $H_{\mathbf n,k}$, (11) adopts the form (12). In turn, this implies that

$$\int \frac{Q_{\mathbf n,k}(z) - Q_{\mathbf n,k}(x)}{z - x}\,Q_{\mathbf n,k}(x)\,\frac{H_{\mathbf n,k}(x)\,d\sigma_k(x)}{Q_{\mathbf n,k-1}(x)\,Q_{\mathbf n,k+1}(x)} = 0.$$

Separating this integral in two and using (10), we obtain (13). □

For $\mathbf n \in \mathbb Z_+^m(\bullet)$ and $\ell \in \{1,\dots,m\}$ we define $\mathbf n^\ell$ as was done above. Though $\mathbf n^\ell$ need not belong to $\mathbb Z_+^m(\bullet)$, we can define the corresponding functions $\Psi_{\mathbf n^\ell,k}$.

Theorem 4 Assume that $\mathbf n \in \mathbb Z_+^m(\bullet)$ and $\ell \in \{1,\dots,m\}$. For each $k = 0,\dots,m-1$, $\Psi_{\mathbf n^\ell,k}$ has at most $n_{k+1} + \dots + n_m + 1$ zeros in $\mathbb C \setminus \Delta_k$ and at least $n_{k+1} + \dots + n_m$ sign changes in $\Delta_{k+1}$. Therefore, all its zeros are real and simple. The zeros of $\Psi_{\mathbf n,k}$ and $\Psi_{\mathbf n^\ell,k}$ in $\mathbb C \setminus \Delta_k$ interlace.

Proof The proof is similar to that of Theorem 3, so we will not dwell on details. For each $k = 0,\dots,m-1$ and $A, B \in \mathbb R$, $|A| + |B| > 0$, define $G_{\mathbf n,k} = A\,\Psi_{\mathbf n,k} + B\,\Psi_{\mathbf n^\ell,k}$. Then

$$\int G_{\mathbf n,k-1}(x)\Big(p_k(x) + \sum_{j=k+1}^m p_j(x)\,\widehat s_{k+1,j}(x)\Big)\,d\sigma_k(x) = 0, \qquad \deg p_j \le n_j - 1, \quad j = k,\dots,m.$$

From here, it follows that there exists a monic polynomial $W_{\mathbf n,k+1}$, $\deg W_{\mathbf n,k+1} \ge N_{k+1} = n_{k+1} + \dots + n_m$, $N_{m+1} = 0$, whose roots are the zeros of $G_{\mathbf n,k}$ in $\mathbb C \setminus \Delta_k$, such that

$$\int x^\nu G_{\mathbf n,k-1}(x)\,\frac{d\sigma_k(x)}{W_{\mathbf n,k+1}(x)} = 0, \qquad \nu = 0,\dots,n_k + \deg W_{\mathbf n,k+1} - 1.$$

These relations imply that $G_{\mathbf n,k}$ has at most $n_{k+1} + \dots + n_m + 1$ zeros in $\mathbb C \setminus \Delta_k$ and at least $n_{k+1} + \dots + n_m$ sign changes in $\Delta_{k+1}$. In particular, this is true for $\Psi_{\mathbf n^\ell,k}$. So, all the zeros of $G_{\mathbf n,k}$ (and $\Psi_{\mathbf n^\ell,k}$) in $\mathbb C \setminus \Delta_k$ are real and simple. The interlacing is proved following the same arguments as in Theorem 3. The details are left to the reader. □

4 Weak Asymptotic

Following standard techniques, the weak asymptotics of type I and type II Hermite-Padé polynomials are derived using arguments from potential theory. We will briefly summarize what is needed.


4.1 Preliminaries from Potential Theory

Let $E_k$, $k = 1,\dots,m$, be (not necessarily distinct) compact subsets of the real line and let

$$\mathcal C = (c_{j,k}), \qquad 1 \le j, k \le m,$$

be a real, positive definite, symmetric matrix of order $m$. $\mathcal C$ will be called the interaction matrix. Let $\mathcal M_1(E_k)$ be the subclass of probability measures in $\mathcal M(E_k)$. Set

$$\mathcal M_1 = \mathcal M_1(E_1) \times \dots \times \mathcal M_1(E_m).$$

Given a vector measure $\mu = (\mu_1,\dots,\mu_m) \in \mathcal M_1$ and $j = 1,\dots,m$, we define the combined potential

$$W_j^\mu(x) = \sum_{k=1}^m c_{j,k}\,V^{\mu_k}(x),$$

where

$$V^{\mu_k}(x) := \int \log\frac{1}{|x - t|}\,d\mu_k(t)$$

denotes the standard logarithmic potential of $\mu_k$. Set

$$\omega_j^\mu := \inf\{W_j^\mu(x) : x \in E_j\}, \qquad j = 1,\dots,m.$$

It is said that $\sigma \in \mathcal M(\Delta)$ is regular, and we write $\sigma \in \mathbf{Reg}$, if

$$\lim \gamma_n^{1/n} = \frac{1}{\operatorname{cap}(\operatorname{supp}(\sigma))},$$

where $\operatorname{cap}(\operatorname{supp}(\sigma))$ denotes the logarithmic capacity of $\operatorname{supp}(\sigma)$ and $\gamma_n$ is the leading coefficient of the (standard) $n$-th orthonormal polynomial with respect to $\sigma$. See [45, Theorems 3.1.1, 3.2.1] for different equivalent ways of defining regular measures and their basic properties. In connection with regular measures, it is frequently convenient that the support of the measure be regular. A compact set $E$ is said to be regular when the Green's function corresponding to the unbounded connected component of $\mathbb C \setminus E$, with singularity at $\infty$, can be extended continuously to $E$.

In Chapter 5 of [42] the authors prove (we state the result in a form convenient for our purpose):

Lemma 3 Assume that the compact sets $E_k$, $k = 1,\dots,m$, are regular. Let $\mathcal C$ be a real, positive definite, symmetric matrix of order $m$. If there exists $\lambda = (\lambda_1,\dots,\lambda_m) \in \mathcal M_1$ such that for each $j = 1,\dots,m$

$$W_j^\lambda(x) = \omega_j^\lambda, \qquad x \in \operatorname{supp}\lambda_j,$$

then $\lambda$ is unique. Moreover, if $c_{j,k} \ge 0$ when $E_j \cap E_k \ne \emptyset$, then $\lambda$ exists.

For details on how Lemma 3 is derived from [42, Chapter 5] see [8, Section 4]. The vector measure $\lambda$ is called the equilibrium solution for the vector potential problem determined by the interaction matrix $\mathcal C$ on the system of compact sets $E_j$, $j = 1,\dots,m$, and $\omega^\lambda := (\omega_1^\lambda,\dots,\omega_m^\lambda)$ is the vector equilibrium constant. There are other characterizations of the equilibrium measure and constant, but we will not dwell on them because they will not be used and their formulation requires introducing additional notions and notation.

We also need:

Lemma 4 Let $E \subset \mathbb R$ be a regular compact set and $\phi$ a continuous function on $E$. Then, there exist a unique $\lambda \in \mathcal M_1(E)$ and a constant $w$ such that

$$V^\lambda(z) + \phi(z) \begin{cases} \le w, & z \in \operatorname{supp}\lambda, \\ \ge w, & z \in E. \end{cases}$$

In particular, equality takes place on all of $\operatorname{supp}\lambda$. If the compact set $E$ is not regular with respect to the Dirichlet problem, the second part of the statement is true except on a set $e$ such that $\operatorname{cap}(e) = 0$. Theorem I.1.3 in [43] contains a proof of this lemma in this context. When $E$ is regular, it is well known that this inequality except on a set of capacity zero implies the inequality for all points in the set (cf. Theorem I.4.8 from [43]). $\lambda$ is called the equilibrium measure in the presence of the external field $\phi$ on $E$, and $w$ is the equilibrium constant.

As usual, a sequence of measures $(\mu_n)$ supported on a compact set $E$ is said to converge to a measure $\mu$ in the weak star topology if for every continuous function $f$ on $E$ we have

$$\lim_n \int f\,d\mu_n = \int f\,d\mu.$$

We write $*\lim_n \mu_n = \mu$. Given a polynomial $Q$ of degree $n$, we denote

$$\mu_Q = \frac{1}{n}\sum_{Q(x) = 0} \delta_x,$$

where $\delta_x$ is the Dirac measure with mass 1 at the point $x$. In the previous sum, each zero of $Q$ is repeated according to its multiplicity. The measure $\mu_Q$ is usually called the normalized zero counting measure of $Q$. One last ingredient needed is a result which relates the asymptotic zero distribution of polynomials orthogonal with respect to varying measures with the solution of an equilibrium problem in the presence of an external field contained in


Lemma 4. Different versions of it appear in [23] and [45]. In [23], it was proved assuming that $\operatorname{supp}\sigma$ is an interval on which $\sigma' > 0$ a.e. Theorem 3.3.3 in [45] does not cover the type of external field we need to consider. As stated here, the proof appears in [22, Lemma 4.2].

Lemma 5 Assume that $\sigma \in \mathbf{Reg}$ and $\operatorname{supp}\sigma \subset \mathbb R$ is regular. Let $\{\phi_n\}$, $n \in \Lambda \subset \mathbb Z_+$, be a sequence of positive continuous functions on $\operatorname{supp}\sigma$ such that

$$\lim_{n \in \Lambda} \frac{1}{2n}\log\frac{1}{\phi_n(x)} = \phi(x) > -\infty, \tag{14}$$

uniformly on $\operatorname{supp}\sigma$. Let $(q_n)$, $n \in \Lambda$, be a sequence of monic polynomials such that $\deg q_n = n$ and

$$\int x^k q_n(x)\,\phi_n(x)\,d\sigma(x) = 0, \qquad k = 0,\dots,n-1.$$

Then

$$*\lim_{n \in \Lambda} \mu_{q_n} = \lambda, \tag{15}$$

and

$$\lim_{n \in \Lambda}\left(\int |q_n(x)|^2\,\phi_n(x)\,d\sigma(x)\right)^{1/2n} = e^{-w}, \tag{16}$$

where $\lambda$ and $w$ are the equilibrium measure and equilibrium constant in the presence of the external field $\phi$ on $\operatorname{supp}\sigma$ given by Lemma 4. We also have

$$\lim_{n \in \Lambda}\left(\frac{|q_n(z)|}{\|q_n \phi_n^{1/2}\|_E}\right)^{1/n} = \exp\big(w - V^\lambda(z)\big), \qquad K \subset \mathbb C \setminus \Delta, \tag{17}$$

uniformly on each such compact subset $K$, where $\|\cdot\|_E$ denotes the uniform norm on $E$ and $\Delta$ is the smallest interval containing $\operatorname{supp}\sigma$.
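A minimal numerical illustration of (15) in the field-free case, with all concrete choices ours rather than the text's: taking $\phi_n \equiv 1$ (so $\phi \equiv 0$) and $\sigma$ the Chebyshev weight on $[-1,1]$, the equilibrium measure $\lambda$ is the arcsine distribution, and the zeros of the Chebyshev polynomials $T_n$ distribute accordingly.

```python
import numpy as np

# Zeros of the Chebyshev polynomial T_n (orthogonal for the Chebyshev
# weight on [-1, 1]); their normalized counting measure approaches the
# arcsine distribution, whose CDF is F(x) = 1/2 + arcsin(x)/pi.
n = 400
zeros = np.cos((2.0 * np.arange(1, n + 1) - 1.0) * np.pi / (2.0 * n))

# Empirical CDF of mu_{q_n} versus the equilibrium CDF at a few points:
for x in (-0.5, 0.0, 0.5):
    empirical = np.mean(zeros <= x)
    limit = 0.5 + np.arcsin(x) / np.pi
    print(x, empirical, limit)
```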

4.2 Weak Asymptotic Behavior for Type II

In the proof of the asymptotic zero distribution of the polynomials $Q_{\mathbf n,j}$ we take $E_j = \operatorname{supp}\sigma_j$. We need to specify the sequence of multi-indices for which the result takes place and the relevant interaction matrix for the vector equilibrium problem which arises.


Let $\Lambda \subset \mathbb Z_+^m(\bullet)$ be an infinite sequence of distinct multi-indices such that

$$\lim_{\mathbf n \in \Lambda} \frac{n_j}{|\mathbf n|} = p_j \in (0,1), \qquad j = 1,\dots,m. \tag{18}$$

Obviously, $p_1 \ge \dots \ge p_m$ and $\sum_{j=1}^m p_j = 1$. Set

$$P_j = \sum_{k=j}^m p_k, \qquad j = 1,\dots,m.$$

Let us define the interaction matrix $\mathcal C_N$ which is relevant in the next result. Set

$$\mathcal C_N := \begin{pmatrix} P_1^2 & -\frac{P_1 P_2}{2} & 0 & \cdots & 0 \\ -\frac{P_1 P_2}{2} & P_2^2 & -\frac{P_2 P_3}{2} & \cdots & 0 \\ 0 & -\frac{P_2 P_3}{2} & P_3^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & P_m^2 \end{pmatrix}. \tag{19}$$
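The tridiagonal matrix (19) is straightforward to assemble numerically. The following sketch (with toy proportions $p_j$ of our own choosing) builds $\mathcal C_N$ and confirms that it is positive definite via its leading principal minors and eigenvalues.

```python
import numpy as np

# Build the interaction matrix C_N of (19) for illustrative proportions
# p = (p_1, ..., p_m); these values are toy choices, not from the text.
p = np.array([0.4, 0.35, 0.25])
P = np.cumsum(p[::-1])[::-1]                 # P_j = p_j + ... + p_m

m = len(p)
C = np.diag(P**2)
for j in range(m - 1):                        # off-diagonal: -P_j P_{j+1} / 2
    C[j, j + 1] = C[j + 1, j] = -P[j] * P[j + 1] / 2.0

minors = [np.linalg.det(C[:r, :r]) for r in range(1, m + 1)]
print(minors, np.linalg.eigvalsh(C).min())   # all positive: C_N is positive definite
```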

This matrix satisfies all the assumptions of Lemma 3 on the compact sets $E_j = \operatorname{supp}(\sigma_j)$, $j = 1,\ldots,m$, including $c_{j,k} \ge 0$ when $E_j \cap E_k \neq \emptyset$, and it is positive definite because the principal sections $(C_N)_r$, $r = 1,\ldots,m$, of $C_N$ satisfy

$$\det (C_N)_r = P_1^2 \cdots P_r^2 \,\det \begin{pmatrix}
1 & -\tfrac12 & 0 & \cdots & 0 & 0\\
-\tfrac12 & 1 & -\tfrac12 & \cdots & 0 & 0\\
0 & -\tfrac12 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & -\tfrac12\\
0 & 0 & 0 & \cdots & -\tfrac12 & 1
\end{pmatrix}_{r\times r} > 0.$$
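The positive definiteness claim can be checked numerically. The following sketch (the vector p of proportions is an illustrative assumption) assembles the interaction matrix, attempts a Cholesky factorization, which succeeds only for positive definite matrices, and verifies the stated factorization of the principal minors:

```python
import numpy as np

# Illustrative proportions p_1 >= ... >= p_m summing to 1 (an assumption).
p = np.array([0.4, 0.3, 0.2, 0.1])
m = len(p)
P = np.cumsum(p[::-1])[::-1]           # P_j = p_j + ... + p_m, so P_1 = 1

# Tridiagonal interaction matrix: diagonal P_j^2, off-diagonal -P_j P_{j+1}/2.
C = np.diag(P**2)
for j in range(m - 1):
    C[j, j + 1] = C[j + 1, j] = -P[j] * P[j + 1] / 2

np.linalg.cholesky(C)                  # raises LinAlgError unless C is positive definite

# Each principal minor equals P_1^2 ... P_r^2 times the determinant of the
# r x r tridiagonal matrix with 1 on the diagonal and -1/2 off the diagonal.
for r in range(1, m + 1):
    T = np.eye(r) - 0.5 * (np.eye(r, k=1) + np.eye(r, k=-1))
    assert np.isclose(np.linalg.det(C[:r, :r]), np.prod(P[:r]**2) * np.linalg.det(T))
```

The factorization of the minors reflects that $(C_N)_r = D T D$ with $D = \operatorname{diag}(P_1,\ldots,P_r)$ and $T$ the tridiagonal matrix above.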

Let $\lambda(C_N) = (\lambda_1,\ldots,\lambda_m)$ be the solution of the corresponding vector equilibrium problem stated in Lemma 3. The next result, under more restrictive conditions on the measures but in the framework of so-called Nikishin systems on a graph tree, is contained in [27]. We have practically reproduced their arguments, which incidentally can also be adapted to the study of so-called mixed type Hermite-Padé approximation, in which the definition contains a mixture of type I and type II interpolation conditions. For details see [22].

Theorem 5 Let $\Lambda$ be a sequence of multi-indices verifying (18). Assume that $\sigma_j \in \operatorname{Reg}$ and $\operatorname{supp}\sigma_j = E_j$ is regular for each $j = 1,\ldots,m$. Then,

$$\lim_{n\in\Lambda}{}^{*}\, \mu_{Q_{n,j}} = \lambda_j, \qquad j = 1,\ldots,m, \qquad (20)$$


where $\lambda = (\lambda_1,\ldots,\lambda_m) \in \mathcal{M}_1$ is the vector equilibrium measure determined by the matrix $C_N$ on the system of compact sets $E_j$, $j = 1,\ldots,m$. Moreover,

$$\lim_{n\in\Lambda} \left( \int Q_{n,j}^2(x)\, \frac{|H_{n,j}(x)|\,d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} \right)^{1/2|n|} = \exp\Bigl( -\sum_{k=1}^{j} \omega_k^{\lambda}/P_k \Bigr), \qquad (21)$$

where $\omega^{\lambda} = (\omega_1^{\lambda},\ldots,\omega_m^{\lambda})$ is the vector equilibrium constant. For $j = 1,\ldots,m$,

$$\lim_{n\in\Lambda} |\Psi_{n,j}(z)|^{1/|n|} = \exp\Bigl( P_j V^{\lambda_j}(z) - P_{j+1} V^{\lambda_{j+1}}(z) - 2\sum_{k=1}^{j} \omega_k^{\lambda}/P_k \Bigr) \qquad (22)$$

uniformly on compact subsets of $\mathbb{C}\setminus(\Delta_j\cup\Delta_{j+1})$, where $\Delta_{m+1} = \emptyset$ and the term with $P_{m+1}$ is dropped when $j = m$.

Proof The unit ball in the cone of positive Borel measures is weak star compact; therefore, it is sufficient to show that each one of the sequences of measures $(\mu_{Q_{n,j}})$, $n\in\Lambda$, $j = 1,\ldots,m$, has only one accumulation point, which coincides with the corresponding component of the vector equilibrium measure $\lambda$ determined by the matrix $C_N$ on the system of compact sets $E_j$, $j = 1,\ldots,m$. Let $\Lambda' \subset \Lambda$ be such that for each $j = 1,\ldots,m$,

$$\lim_{n\in\Lambda'}{}^{*}\, \mu_{Q_{n,j}} = \mu_j.$$

Notice that $\mu_j \in \mathcal{M}_1(E_j)$, $j = 1,\ldots,m$. Taking into account that all the zeros of $Q_{n,j}$ lie in $\Delta_j$, it follows that

$$\lim_{n\in\Lambda'} |Q_{n,j}(z)|^{1/|n|} = \exp\bigl(-P_j V^{\mu_j}(z)\bigr), \qquad (23)$$

uniformly on compact subsets of $\mathbb{C}\setminus\Delta_j$. When $k = 1$, (12) reduces to

$$\int x^{\nu} Q_{n,1}(x)\, \frac{d\sigma_1(x)}{|Q_{n,2}(x)|} = 0, \qquad \nu = 0,\ldots,|n|-1.$$

According to (23),

$$\lim_{n\in\Lambda'} \frac{1}{2|n|} \log |Q_{n,2}(x)| = -\frac{P_2}{2}\, V^{\mu_2}(x),$$


uniformly on $\Delta_1$. Using Lemma 5, it follows that $\mu_1$ is the unique solution of the extremal problem

$$V^{\mu_1}(x) - \frac{P_2}{2} V^{\mu_2}(x) \;\begin{cases} = \omega_1, & x \in \operatorname{supp}\mu_1,\\ \ge \omega_1, & x \in E_1, \end{cases} \qquad (24)$$

and

$$\lim_{n\in\Lambda'} \left( \int \frac{Q_{n,1}^2(x)}{|Q_{n,2}(x)|}\, d\sigma_1(x) \right)^{1/2|n|} = e^{-\omega_1}. \qquad (25)$$
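Orthogonality with respect to a varying measure, as used in Lemma 5, is easy to realize numerically. The sketch below (the discretization of $d\sigma$ and the weight $\varphi$ are illustrative assumptions, not the objects of the proof) computes a monic polynomial satisfying finitely many orthogonality conditions of this type:

```python
import numpy as np

# Discretize dσ(x) = dx / sqrt(1 - x^2) on (-1, 1) by Gauss-Chebyshev nodes
# and choose a positive continuous weight φ (both illustrative choices).
N = 200
nodes = np.cos(np.pi * (np.arange(N) + 0.5) / N)
w = np.full(N, np.pi / N)              # quadrature weights for dσ
phi = 1.0 / (2.0 + nodes)              # positive weight on [-1, 1]

n = 4                                  # degree of the monic polynomial q_n
# Solve ∫ x^k (x^n + c_{n-1} x^{n-1} + ... + c_0) φ dσ = 0 for k = 0..n-1.
moment = lambda k: np.sum(nodes**k * phi * w)
M = np.array([[moment(k + j) for j in range(n)] for k in range(n)])
b = -np.array([moment(k + n) for k in range(n)])
c = np.linalg.solve(M, b)

q = lambda x: x**n + sum(cj * x**j for j, cj in enumerate(c))
residuals = [np.sum(nodes**k * q(nodes) * phi * w) for k in range(n)]
assert max(abs(r) for r in residuals) < 1e-8
```

The linear system is a finite section of the moment problem for the measure $\varphi\,d\sigma$; it is solvable because that measure is positive.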

Using induction on increasing values of $j$, let us show that for all $j = 1,\ldots,m$,

$$V^{\mu_j}(x) - \frac{P_{j-1}}{2P_j} V^{\mu_{j-1}}(x) - \frac{P_{j+1}}{2P_j} V^{\mu_{j+1}}(x) + \frac{P_{j-1}}{P_j}\,\omega_{j-1} \;\begin{cases} = \omega_j, & x \in \operatorname{supp}\mu_j,\\ \ge \omega_j, & x \in E_j, \end{cases} \qquad (26)$$

(when $j = 1$ or $j = m$ the terms with $P_0$ and $P_{m+1}$ do not appear), and

$$\lim_{n\in\Lambda'} \left( \int Q_{n,j}^2(x)\, \frac{|H_{n,j}(x)|\,d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} \right)^{1/2N_{n,j}} = e^{-\omega_j}, \qquad (27)$$

where $Q_{n,0} \equiv Q_{n,m+1} \equiv 1$ and $N_{n,j} = n_j + \cdots + n_m$. For $j = 1$ these relations are none other than (24)-(25) and the initial induction step is settled. Let us assume that the statement is true for $j-1 \in \{1,\ldots,m-1\}$ and let us prove it for $j$.

Taking account of the fact that $Q_{n,j-1}$, $Q_{n,j+1}$ and $H_{n,j}$ have constant sign on $\Delta_j$, for $j = 1,\ldots,m$, the orthogonality relations (12) can be expressed as

$$\int x^{\nu} Q_{n,j}(x)\, \frac{|H_{n,j}(x)|\,d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} = 0, \qquad \nu = 0,\ldots,N_{n,j}-1,$$

and using (13) it follows that

$$\int x^{\nu} Q_{n,j}(x) \int \frac{Q_{n,j-1}^2(t)}{|x-t|}\, \frac{|H_{n,j-1}(t)|\,d\sigma_{j-1}(t)}{|Q_{n,j-2}(t)\,Q_{n,j}(t)|}\; \frac{d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} = 0,$$

for $\nu = 0,\ldots,N_{n,j}-1$. Relation (23) implies that

$$\lim_{n\in\Lambda'} \frac{1}{2N_{n,j}} \log |Q_{n,j-1}(x)\,Q_{n,j+1}(x)| = -\frac{P_{j-1}}{2P_j} V^{\mu_{j-1}}(x) - \frac{P_{j+1}}{2P_j} V^{\mu_{j+1}}(x), \qquad (28)$$


uniformly on $\Delta_j$. (Since $Q_{n,0} \equiv 1$, when $j = 1$ we only get the second term on the right-hand side of this limit.) Set

$$K_{n,j-1} := \left( \int Q_{n,j-1}^2(t)\, \frac{|H_{n,j-1}(t)|\,d\sigma_{j-1}(t)}{|Q_{n,j-2}(t)\,Q_{n,j}(t)|} \right)^{-1/2}. \qquad (29)$$

It follows that for $x \in \Delta_j$,

$$\frac{1}{\delta_{j-1}^{*}\, K_{n,j-1}^2} \le \int \frac{Q_{n,j-1}^2(t)}{|x-t|}\, \frac{|H_{n,j-1}(t)|\,d\sigma_{j-1}(t)}{|Q_{n,j-2}(t)\,Q_{n,j}(t)|} \le \frac{1}{\delta_{j-1}\, K_{n,j-1}^2},$$

where $0 < \delta_{j-1} = \min\{|x-t| : t \in \Delta_{j-1},\, x \in \Delta_j\} \le \max\{|x-t| : t \in \Delta_{j-1},\, x \in \Delta_j\} = \delta_{j-1}^{*} < \infty$. Taking into consideration these inequalities, from the induction hypothesis we obtain that

$$\lim_{n\in\Lambda'} \left( \int \frac{Q_{n,j-1}^2(t)}{|x-t|}\, \frac{|H_{n,j-1}(t)|\,d\sigma_{j-1}(t)}{|Q_{n,j-2}(t)\,Q_{n,j}(t)|} \right)^{1/2N_{n,j}} = e^{-P_{j-1}\omega_{j-1}/P_j}. \qquad (30)$$

Taking (28) and (30) into account, Lemma 5 yields that $\mu_j$ is the unique solution of the extremal problem (26) and

$$\lim_{n\in\Lambda'} \left( \int\!\!\int \frac{Q_{n,j-1}^2(t)}{|x-t|}\, \frac{|H_{n,j-1}(t)|\,d\sigma_{j-1}(t)}{|Q_{n,j-2}(t)\,Q_{n,j}(t)|}\; \frac{Q_{n,j}^2(x)\,d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} \right)^{1/2N_{n,j}} = e^{-\omega_j}.$$

According to (13) the previous formula reduces to (27). We have concluded the induction. Now, we can rewrite (26) as

$$P_j^2 V^{\mu_j}(x) - \frac{P_j P_{j-1}}{2} V^{\mu_{j-1}}(x) - \frac{P_j P_{j+1}}{2} V^{\mu_{j+1}}(x) \;\begin{cases} = \omega_j', & x \in \operatorname{supp}\mu_j,\\ \ge \omega_j', & x \in E_j, \end{cases} \qquad (31)$$

for $j = 1,\ldots,m$, where

$$\omega_j' = P_j^2\,\omega_j - P_j P_{j-1}\,\omega_{j-1}, \qquad (\omega_0' = 0). \qquad (32)$$

(Recall that the terms with $V^{\mu_0}$ and $V^{\mu_{m+1}}$ do not appear when $j = 1$ and $j = m$, respectively.) By Lemma 3, $\lambda = (\mu_1,\ldots,\mu_m)$ is the solution of the equilibrium problem determined by the interaction matrix $C_N$ on the system of compact sets $E_j$, $j = 1,\ldots,m$, and $\omega^{\lambda} = (\omega_1',\ldots,\omega_m')$ is the corresponding vector equilibrium constant. This is true for any convergent subsequence; since the equilibrium problem does not depend on the sequence of indices $\Lambda'$ and the solution is unique, we obtain


the limits in (20). By the same token, the limit in (27) holds true over the whole sequence of indices $\Lambda$. Therefore,

$$\lim_{n\in\Lambda} \left( \int Q_{n,j}^2(x)\, \frac{|H_{n,j}(x)|\,d\sigma_j(x)}{|Q_{n,j-1}(x)\,Q_{n,j+1}(x)|} \right)^{1/2|n|} = e^{-P_j\omega_j}. \qquad (33)$$

From (32) it follows that $\omega_1' = \omega_1^{\lambda}$ when $j = 1$. Suppose that $P_{j-1}\omega_{j-1} = \sum_{k=1}^{j-1} \omega_k^{\lambda}/P_k$, where $j-1 \in \{1,\ldots,m-1\}$. Then, according to (32),

$$P_j\omega_j = \omega_j^{\lambda}/P_j + P_{j-1}\omega_{j-1} = \sum_{k=1}^{j} \omega_k^{\lambda}/P_k$$

and (21) immediately follows using (33). For $j \in \{1,\ldots,m\}$, from (13), we have

$$\Psi_{n,j}(z) = \frac{Q_{n,j+1}(z)}{Q_{n,j}(z)} \int \frac{Q_{n,j}^2(x)}{z-x}\, \frac{H_{n,j}(x)\,d\sigma_j(x)}{Q_{n,j-1}(x)\,Q_{n,j+1}(x)}, \qquad (34)$$

where $Q_{n,0} \equiv Q_{n,m+1} \equiv 1$. Now, (20) implies

$$\lim_{n\in\Lambda} \left| \frac{Q_{n,j+1}(z)}{Q_{n,j}(z)} \right|^{1/|n|} = \exp\bigl( P_j V^{\lambda_j}(z) - P_{j+1} V^{\lambda_{j+1}}(z) \bigr), \qquad (35)$$

uniformly on compact subsets of $\mathbb{C}\setminus(\Delta_j\cup\Delta_{j+1})$ (we also use that the zeros of $Q_{n,j}$ and $Q_{n,j+1}$ lie in $\Delta_j$ and $\Delta_{j+1}$, respectively). It remains to find the $|n|$-th root asymptotic behavior of the integral. Fix a compact set $K \subset \mathbb{C}\setminus\Delta_j$. It is not difficult to prove that (for the definition of $K_{n,j}$ see (29))

$$\frac{C_1}{K_{n,j}^2} \le \left| \int \frac{Q_{n,j}^2(x)}{z-x}\, \frac{H_{n,j}(x)\,d\sigma_j(x)}{Q_{n,j-1}(x)\,Q_{n,j+1}(x)} \right| \le \frac{C_2}{K_{n,j}^2},$$

where

$$C_1 = \frac{\min\{\max\{|u-x|,\,|v|\} : z = u + iv \in K,\; x \in \Delta_j\}}{\max\{|z-x|^2 : z \in K,\; x \in \Delta_j\}} > 0$$

and

$$C_2 = \frac{1}{\min\{|z-x| : z \in K,\; x \in \Delta_j\}} < \infty.$$


Taking into account (21),

$$\lim_{n\in\Lambda} \left| \int \frac{Q_{n,j}^2(x)}{z-x}\, \frac{H_{n,j}(x)\,d\sigma_j(x)}{Q_{n,j-1}(x)\,Q_{n,j+1}(x)} \right|^{1/|n|} = \exp\Bigl( -2\sum_{k=1}^{j} \omega_k^{\lambda}/P_k \Bigr). \qquad (36)$$

From (34), (35), and (36), we obtain (22) and we are done. □

Remark 1 In the case of type I Hermite-Padé approximation, asymptotic formulas for the forms $\mathcal{A}_{n,j}$ and the polynomials $A_{n,j}$ can be obtained following arguments similar to those employed above; see [41] or [42]. Basically, all one has to do is replace the use of the formulas in Proposition 4 by the ones in Proposition 2. I recommend doing it as an exercise.

4.3 Application to Hermite-Padé Approximation

The convergence of type II Hermite-Padé approximants for the case of $m$ generating measures and interpolation conditions equally distributed between the different functions was obtained in [12]. When $m = 2$ the result was proved in [40]. Other results for more general sequences of multi-indices and so-called multipoint Hermite-Padé approximation were considered in [18, 19]. For type I, the convergence was proved recently in [35]. Here we wish to show how the weak asymptotic behavior of the Hermite-Padé polynomials allows one to estimate the rate of convergence of the approximants. We restrict ourselves to type II. The presentation follows closely the original result given in [27]. Consider the functions

$$R_{n,j}(z) := (Q_n \widehat{s}_{1,j} - P_{n,j})(z) = \mathcal{O}\bigl(1/z^{n_j+1}\bigr), \qquad z\to\infty, \qquad j = 1,\ldots,m, \qquad (37)$$

which are the remainders of the interpolation conditions defining the type II Hermite-Padé approximants with respect to the multi-index $n$ of the Nikishin system of functions $(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$. Because of (37), $P_{n,j}$ is the polynomial part of the Laurent expansion at $\infty$ of $Q_n\widehat{s}_{1,j}$. It is easy to check that

$$(Q_n\widehat{s}_{1,j} - P_{n,j})(z) = \int \frac{Q_n(x)\,ds_{1,j}(x)}{z-x}, \qquad P_{n,j}(z) = \int \frac{Q_n(z) - Q_n(x)}{z-x}\, ds_{1,j}(x).$$

For example, this follows using Hermite's integral representation of $Q_n\widehat{s}_{1,j} - P_{n,j}$, Cauchy's integral formula, and the Fubini theorem. According to the way in which $s_{1,j}$ is defined, we have

$$R_{n,j}(z) = \int\cdots\int \frac{Q_n(x_1)\,d\sigma_1(x_1)\,d\sigma_2(x_2)\cdots d\sigma_j(x_j)}{(z-x_1)(x_1-x_2)\cdots(x_{j-1}-x_j)}.$$
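The identity expressing the remainder and the polynomial part as integrals can be checked symbolically in the degenerate case of a single unit point mass (an illustrative assumption; the sample polynomial Q below is also arbitrary):

```python
from sympy import symbols, cancel

z, a = symbols('z a')
Q = z**3 - 2*z            # a sample polynomial Q_n (illustrative)
s_hat = 1 / (z - a)       # Cauchy transform of the unit point mass at x = a

# For s = δ_a the two integral formulas reduce to:
#   remainder(z) = ∫ Q(x) ds(x) / (z - x)          = Q(a) / (z - a)
#   P(z)         = ∫ (Q(z) - Q(x)) / (z - x) ds(x) = (Q(z) - Q(a)) / (z - a)
remainder = Q.subs(z, a) / (z - a)
P = cancel((Q - Q.subs(z, a)) / (z - a))

assert P.is_polynomial(z)                       # the polynomial part is a polynomial
assert cancel(Q * s_hat - P - remainder) == 0   # Q * s_hat = P + remainder
```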


Notice that $R_{n,1} = \Psi_{n,1}$. We wish to establish a connection between the functions $R_{n,j}$ and $\Psi_{n,k}$, $1 \le j,k \le m$. First let us present an interesting formula which connects $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$ and $(s_{m,m},\ldots,s_{m,1}) = \mathcal{N}(\sigma_m,\ldots,\sigma_1)$. When $1 \le k < j \le m$, we denote $s_{j,k} = \langle \sigma_j, \sigma_{j-1},\ldots,\sigma_k \rangle$.

Lemma 6 For each $j = 2,\ldots,m$,

$$\bigl( \widehat{s}_{1,j} - \widehat{s}_{1,j-1}\widehat{s}_{j,j} + \widehat{s}_{1,j-2}\widehat{s}_{j,j-1} + \cdots + (-1)^{j-1}\widehat{s}_{1,1}\widehat{s}_{j,2} + (-1)^{j}\widehat{s}_{j,1} \bigr)(z) \equiv 0, \qquad (38)$$

for all $z \in \mathbb{C}\setminus(\Delta_1\cup\Delta_j)$.

Proof Notice that

$$\widehat{s}_{1,j}(z) + (-1)^{j}\widehat{s}_{j,1}(z) = \int\cdots\int \frac{(x_1-x_j)\,d\sigma_1(x_1)\,d\sigma_2(x_2)\cdots d\sigma_j(x_j)}{(z-x_1)(x_1-x_2)\cdots(x_{j-1}-x_j)(z-x_j)}.$$

On the right-hand side, use that $x_1 - x_j = (x_1-x_2) + (x_2-x_3) + \cdots + (x_{j-1}-x_j)$ to separate the integral into a sum. In each one of the resulting integrals, the numerator cancels one of the factors in the denominator, and the integral splits into the product of two which are easily identified with the remaining terms in the formula. □

Now, we can prove the connection formulas.

Lemma 7 We have $\Psi_{n,1} = R_{n,1}$ and for $j = 2,\ldots,m$,

$$\Psi_{n,j}(z) = \sum_{k=2}^{j} (-1)^{k}\,\widehat{s}_{j,k}(z)\, R_{n,k-1}(z) + (-1)^{j+1} R_{n,j}(z), \qquad z \in \mathbb{C}\setminus(\Delta_1\cup\Delta_j), \qquad (39)$$

and

$$R_{n,j}(z) = \sum_{k=2}^{j} (-1)^{k}\,\widehat{s}_{k,j}(z)\, \Psi_{n,k-1}(z) + (-1)^{j+1} \Psi_{n,j}(z), \qquad z \in \mathbb{C}\setminus\Bigl( \bigcup_{k=1}^{j}\Delta_k \Bigr). \qquad (40)$$

Proof Obviously, $\Psi_{n,1} = R_{n,1}$. Notice that formula (38) remains valid if the measures $\sigma_1,\ldots,\sigma_m$ are signed and finite. All that is needed is that they are supported on intervals which are consecutively non-intersecting. It is easy to see that

$$R_{n,j}(z) = \widehat{\langle Q_n\sigma_1, \sigma_2,\ldots,\sigma_j \rangle}(z), \qquad j = 2,\ldots,m.$$


The symbol $\widehat{\langle\cdot\rangle}$ means taking the Cauchy transform of $\langle\cdot\rangle$. On the other hand,

$$\Psi_{n,j}(z) = \widehat{\langle \sigma_j,\ldots,\sigma_2, Q_n\sigma_1 \rangle}(z), \qquad j = 2,\ldots,m.$$

Taking into consideration the previous remarks, using formula (38) with $d\sigma_1$ replaced with $Q_n\,d\sigma_1$, after trivial transformations we obtain (39). The collection of formulas (39) for $j = 2,\ldots,m$ together with $R_{n,1} = \Psi_{n,1}$ can be expressed in matrix form as follows:

$$(\Psi_{n,1},\ldots,\Psi_{n,m})^{t} = D\,(R_{n,1},\ldots,R_{n,m})^{t},$$

where $(\cdot)^{t}$ is the transpose of the vector $(\cdot)$ and $D$ is the $m\times m$ lower triangular matrix given by

$$D := \begin{pmatrix}
1 & 0 & 0 & \cdots & 0\\
\widehat{s}_{2,2} & -1 & 0 & \cdots & 0\\
\widehat{s}_{3,2} & -\widehat{s}_{3,3} & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\widehat{s}_{m,2} & -\widehat{s}_{m,3} & \widehat{s}_{m,4} & \cdots & (-1)^{m-1}
\end{pmatrix}.$$

Obviously $D$ is invertible and

$$(R_{n,1},\ldots,R_{n,m})^{t} = D^{-1}(\Psi_{n,1},\ldots,\Psi_{n,m})^{t} \qquad (41)$$

is the matrix form of the relations which express each function $R_{n,j}$ in terms of $\Psi_{n,k}$, $k = 1,\ldots,j$. The matrix $D^{-1}$ is also lower triangular and it may be proved, using (38) in several ways, that

$$D^{-1} = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0\\
\widehat{s}_{2,2} & -1 & 0 & \cdots & 0\\
\widehat{s}_{2,3} & -\widehat{s}_{3,3} & 1 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\widehat{s}_{2,m} & -\widehat{s}_{3,m} & \widehat{s}_{4,m} & \cdots & (-1)^{m-1}
\end{pmatrix}.$$

Using (41) and the expression of $D^{-1}$ we obtain (40). □
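Identity (38), on which both connection formulas rest, can be verified exactly in the simplest case $j = m = 2$ using unit point masses $\sigma_1 = \delta_a$, $\sigma_2 = \delta_b$ (a degenerate illustrative choice, allowed since the lemma holds for finite signed measures on non-intersecting intervals):

```python
from fractions import Fraction

a, b = Fraction(1), Fraction(4)

def s11(z): return 1 / (z - a)                    # Cauchy transform of σ1
def s22(z): return 1 / (z - b)                    # Cauchy transform of σ2
def s12(z): return (1 / (a - b)) * (1 / (z - a))  # <σ1, σ2>: point mass of size s22_hat(a) at a
def s21(z): return (1 / (b - a)) * (1 / (z - b))  # <σ2, σ1>: point mass of size s11_hat(b) at b

# For j = 2, identity (38) reads  s12_hat - s11_hat * s22_hat + s21_hat ≡ 0.
for z in (Fraction(7), Fraction(-3), Fraction(5, 2)):
    assert s12(z) - s11(z) * s22(z) + s21(z) == 0
```

Exact rational arithmetic makes the cancellation an identity rather than a floating-point coincidence.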


Let $\lambda = (\lambda_1,\ldots,\lambda_m) \in \mathcal{M}_1$ be the vector equilibrium measure determined by the matrix $C_N$ on the system of compact sets $E_j = \operatorname{supp}(\sigma_j)$, $j = 1,\ldots,m$. In the sequel we assume that the hypotheses of Theorem 5 hold. For each $j = 1,\ldots,m$, set

$$U_j^{\lambda}(z) = P_j V^{\lambda_j}(z) - P_{j+1} V^{\lambda_{j+1}}(z) - 2\sum_{k=1}^{j} \omega_k^{\lambda}/P_k,$$


($V^{\lambda_{m+1}} \equiv 0$). Notice that in a neighborhood of $z = \infty$, we have

$$U_j^{\lambda}(z) = \mathcal{O}\Bigl( p_j \log\frac{1}{|z|} \Bigr).$$

The potentials of the components of the equilibrium measure define continuous functions on all of $\mathbb{C}$ (see the equilibrium equations). Thus, the functions $U_j^{\lambda}$ are defined and continuous on all of $\mathbb{C}$. Fix $j \in \{1,\ldots,m\}$. For $k = 1,\ldots,j$ define the regions

$$D_k^{j} = \{ z \in \mathbb{C} : U_k^{\lambda}(z) > U_i^{\lambda}(z),\; i = 1,\ldots,j,\; i \neq k \}.$$

Some $D_k^{j}$ could be empty. Denote $\xi_j(z) = \max\{ U_k^{\lambda}(z) : k = 1,\ldots,j \}$.

Corollary 1 Under the assumptions of Theorem 5, for each $j = 1,\ldots,m$, we have

$$\lim_{n\in\Lambda} \left| \widehat{s}_{1,j}(z) - \frac{P_{n,j}(z)}{Q_n(z)} \right|^{1/|n|} = \exp\bigl( V^{\lambda_1} + \xi_j \bigr)(z), \qquad z \in \Bigl( \bigcup_{k=1}^{j} D_k^{j} \Bigr) \setminus \Bigl( \bigcup_{k=1}^{j+1} \Delta_k \Bigr), \qquad (42)$$

and

$$\limsup_{n\in\Lambda} \left| \widehat{s}_{1,j}(z) - \frac{P_{n,j}(z)}{Q_n(z)} \right|^{1/|n|} \le \exp\bigl( V^{\lambda_1} + \xi_j \bigr)(z), \qquad z \in \mathbb{C} \setminus \Bigl( \bigcup_{k=1}^{j+1} \Delta_k \Bigr), \qquad (43)$$

uniformly on compact subsets of the indicated regions. Moreover, $(V^{\lambda_1} + \xi_j)(z) < 0$, $z \in \mathbb{C}\setminus\Delta_1$, which implies that the sequence $(P_{n,j}/Q_n)$, $n \in \Lambda$, converges to $\widehat{s}_{1,j}$ with geometric rate in $\mathbb{C}\setminus(\cup_{k=1}^{j+1}\Delta_k)$.

Proof By (22) and (40) it follows that the following asymptotic formula takes place (notice that the functions $\widehat{s}_{k,j}$ are different from zero in $\mathbb{C}\setminus(\cup_{k=1}^{j}\Delta_k)$):

$$\lim_{n\in\Lambda} |R_{n,j}(z)|^{1/|n|} = \exp U_k^{\lambda}(z), \qquad z \in D_k^{j} \setminus \Bigl( \bigcup_{k=1}^{j+1}\Delta_k \Bigr),$$

uniformly on compact subsets of the specified region. Then

$$\lim_{n\in\Lambda} |R_{n,j}(z)|^{1/|n|} = \exp \xi_j(z), \qquad z \in \Bigl( \bigcup_{k=1}^{j} D_k^{j} \Bigr) \setminus \Bigl( \bigcup_{k=1}^{j+1}\Delta_k \Bigr),$$


and

$$\limsup_{n\in\Lambda} |R_{n,j}(z)|^{1/|n|} \le \exp \xi_j(z), \qquad z \in \mathbb{C}\setminus\Bigl( \bigcup_{k=1}^{j+1}\Delta_k \Bigr),$$

uniformly on compact subsets of the specified region. Formulas (42) and (43) follow directly from

$$\widehat{s}_{1,j}(z) - \frac{P_{n,j}(z)}{Q_n(z)} = \frac{R_{n,j}(z)}{Q_n(z)},$$

the asymptotic formulas given for $R_{n,j}$, and (20).

When $j = 1$, we have

$$(V^{\lambda_1} + \xi_1)(z) = 2V^{\lambda_1}(z) - P_2 V^{\lambda_2}(z) - 2\omega_1^{\lambda} = 2\bigl( W_1^{\lambda}(z) - \omega_1^{\lambda} \bigr).$$

According to (24), $W_1^{\lambda}(x) - \omega_1^{\lambda} \equiv 0$, $x \in \operatorname{supp}(\lambda_1)$. On the other hand, $W_1^{\lambda}(z) - \omega_1^{\lambda}$ is subharmonic in $\mathbb{C}\setminus\operatorname{supp}(\lambda_1)$ and tends to $-\infty$ as $z\to\infty$. By the maximum principle for subharmonic functions, $W_1^{\lambda}(z) - \omega_1^{\lambda} < 0$, $z \in \mathbb{C}\setminus\operatorname{supp}(\lambda_1)$ (equality cannot occur at any point of this region because it would imply that $W_1^{\lambda}(z) - \omega_1^{\lambda} \equiv 0$, which is impossible).

Let us assume that $(V^{\lambda_1} + \xi_{j-1})(z) < 0$, $z \in \mathbb{C}\setminus\Delta_1$, where $j \in \{2,\ldots,m\}$, and let us prove that $(V^{\lambda_1} + \xi_j)(z) < 0$, $z \in \mathbb{C}\setminus\Delta_1$. Obviously, $\xi_j(z) = \max\{\xi_{j-1}(z),\, U_j^{\lambda}(z)\}$. Consider the difference

$$U_j^{\lambda}(z) - U_{j-1}^{\lambda}(z) = 2\bigl( W_j^{\lambda}(z) - \omega_j^{\lambda} \bigr)/P_j = \mathcal{O}\bigl( (p_j - p_{j-1})\log(1/|z|) \bigr), \qquad z\to\infty.$$

If $p_j - p_{j-1} = 0$, then $W_j^{\lambda}(z) - \omega_j^{\lambda}$ is subharmonic in $\mathbb{C}\setminus\operatorname{supp}(\lambda_j)$ (at $\infty$ it is finite) and equals zero on $\operatorname{supp}(\lambda_j)$. Hence, $U_j^{\lambda}(z) \le U_{j-1}^{\lambda}(z) \le \xi_{j-1}(z)$ on $\mathbb{C}\setminus\operatorname{supp}(\lambda_j)$. Therefore, using the equilibrium condition, $U_j^{\lambda}(z) = U_{j-1}^{\lambda}(z)$ on $\Delta_j$ and $U_j^{\lambda}(z) < U_{j-1}^{\lambda}(z)$ on $\mathbb{C}\setminus\Delta_j$. In this case, $\xi_j(z) = \xi_{j-1}(z)$, $z \in \mathbb{C}\setminus\Delta_1$, and the conclusion follows from the induction hypothesis.

If $p_j < p_{j-1}$, in a neighborhood of $\infty$ we have $U_j^{\lambda}(z) > U_{j-1}^{\lambda}(z)$, since $(p_j - p_{j-1})\log(1/|z|) \to +\infty$ as $z\to\infty$. Let $\Gamma = \{ z \in \mathbb{C} : U_j^{\lambda}(z) = U_{j-1}^{\lambda}(z) \}$. This set contains $\operatorname{supp}(\lambda_j)$ and divides $\mathbb{C}\setminus\Delta_1$ into two domains: $\Omega_1 = \{ z \in \mathbb{C}\setminus\Delta_1 : U_j^{\lambda}(z) > U_{j-1}^{\lambda}(z) \}$, which contains $z = \infty$, and $\Omega_2 = \{ z \in \mathbb{C}\setminus\Delta_1 : U_j^{\lambda}(z) < U_{j-1}^{\lambda}(z) \}$. Since $U_{j-1}^{\lambda}(z) \le \xi_{j-1}(z)$, on $\Omega_2 \cup \Gamma$ we have $\xi_{j-1}(z) = \xi_j(z)$ and thus $(V^{\lambda_1} + \xi_j) < 0$ there. On $\Omega_1$ the function $V^{\lambda_1} + U_j^{\lambda}$ is subharmonic and on its boundary equals $V^{\lambda_1} + U_{j-1}^{\lambda} < 0$. Since $(V^{\lambda_1} + U_j^{\lambda})(z) \to -\infty$ as $z\to\infty$, it follows that on $\Omega_1$ we have $(V^{\lambda_1} + U_j^{\lambda})(z) < 0$. Therefore, $(V^{\lambda_1} + \xi_j) < 0$ on $\Omega_1$. With this we conclude the proof. □


5 Ratio Asymptotic

In the study of the ratio asymptotic of Hermite-Padé approximants, conformal mappings on Riemann surfaces and boundary value problems of analytic functions come into play.

5.1 Preliminaries from Riemann Surfaces and Boundary Value Problems

Let $\Delta_1,\ldots,\Delta_m$ be a collection of intervals contained in the real line as in Definition 2. Consider the $(m+1)$-sheeted Riemann surface

$$\mathcal{R} = \bigcup_{k=0}^{m} \mathcal{R}_k,$$

formed by the consecutively "glued" sheets

$$\mathcal{R}_0 := \overline{\mathbb{C}}\setminus\Delta_1, \qquad \mathcal{R}_k := \overline{\mathbb{C}}\setminus(\Delta_k\cup\Delta_{k+1}), \quad k = 1,\ldots,m-1, \qquad \mathcal{R}_m := \overline{\mathbb{C}}\setminus\Delta_m,$$

where the upper and lower banks of the slits of two neighboring sheets are identified. Fix $\ell \in \{1,\ldots,m\}$. Let $\psi^{(\ell)}$, $\ell = 1,\ldots,m$, be a single-valued rational function on $\mathcal{R}$ whose divisor consists of one simple zero at the point $\infty^{(0)} \in \mathcal{R}_0$ and one simple pole at the point $\infty^{(\ell)} \in \mathcal{R}_\ell$. Therefore,

$$\psi^{(\ell)}(z) = C_1/z + \mathcal{O}(1/z^2), \quad z\to\infty^{(0)}, \qquad \psi^{(\ell)}(z) = C_2 z + \mathcal{O}(1), \quad z\to\infty^{(\ell)}, \qquad (44)$$

where $C_1$ and $C_2$ are constants different from zero. Since the genus of $\mathcal{R}$ equals zero (it is conformally equivalent to $\overline{\mathbb{C}}$), such a single-valued function on $\mathcal{R}$ exists and is uniquely determined up to a multiplicative constant. We denote the branches of the algebraic function $\psi^{(\ell)}$, corresponding to the different sheets $k = 0,\ldots,m$ of $\mathcal{R}$, by $\psi^{(\ell)} := \{\psi_k^{(\ell)}\}_{k=0}^{m}$. In the sequel, we fix the multiplicative constant in such a way that

$$\prod_{k=0}^{m} |\psi_k^{(\ell)}(\infty)| = 1, \qquad C_1 > 0. \qquad (45)$$


Since $\psi^{(\ell)}$ is such that $C_1 > 0$, then

$$\overline{\psi^{(\ell)}(\bar z)} = \psi^{(\ell)}(z), \qquad z \in \mathcal{R}.$$

In fact, define $\phi(z) := \overline{\psi^{(\ell)}(\bar z)}$. Notice that $\phi$ and $\psi^{(\ell)}$ have the same divisor (same poles and zeros counting multiplicities); consequently, there exists a constant $C$ such that $\phi = C\psi^{(\ell)}$. Comparing the leading coefficients of the Laurent expansions of these two functions at $\infty^{(0)}$, we conclude that $C = 1$. In terms of the branches of $\psi^{(\ell)}$, the symmetry formula above means that for each $k = 0,1,\ldots,m$,

$$\psi_k^{(\ell)} : \overline{\mathbb{R}}\setminus(\Delta_k\cup\Delta_{k+1}) \longrightarrow \overline{\mathbb{R}} \qquad (\Delta_0 = \Delta_{m+1} = \emptyset);$$

therefore, the coefficients (in particular, the leading one) of the Laurent expansion at $\infty$ of the branches are real numbers, and

$$\psi_k^{(\ell)}(x_\pm) = \overline{\psi_k^{(\ell)}(x_\mp)} = \psi_{k+1}^{(\ell)}(x_\pm), \qquad x \in \Delta_{k+1}. \qquad (46)$$

Among other things, the symmetry property entails that all the coefficients in the Laurent expansion at infinity of the branches $\psi_k^{(\ell)}$ are real numbers. Since $\lim_{x\to\infty} x\,\psi_0^{(\ell)}(x) = C_1 > 0$, by continuity it follows that $\psi_k^{(\ell)}(\infty) > 0$, $k = 1,\ldots,\ell-1$, $\lim_{x\to\infty} \psi_\ell^{(\ell)}(x)/x = (\psi_\ell^{(\ell)})'(\infty) > 0$, and $\psi_k^{(\ell)}(\infty) < 0$, $k = \ell+1,\ldots,m$. On the other hand, the product of all the branches $\prod_{k=0}^{m}\psi_k^{(\ell)}$ is a single-valued analytic function on $\overline{\mathbb{C}}$ without singularities; therefore, by Liouville's theorem it is constant. Due to the previous remark and the normalization adopted in (45), we can assert that

$$\prod_{k=0}^{m} \psi_k^{(\ell)}(z) \equiv \begin{cases} 1, & m-\ell \text{ even},\\ -1, & m-\ell \text{ odd}. \end{cases} \qquad (47)$$

In [6, Lemma 4.2] the following boundary value problem was proved to have a unique solution. In (49) below, we introduce a slight correction to the formula in [6].

Lemma 8 Let $\ell \in \{1,\ldots,m\}$ be fixed. There exists a unique collection of functions $(F_k^{(\ell)})_{k=1}^{m}$ which verify the system of boundary value problems

1) $F_k^{(\ell)},\ 1/F_k^{(\ell)} \in \mathcal{H}(\mathbb{C}\setminus\Delta_k)$,
2) $(F_k^{(\ell)})'(\infty) > 0$, $k = 1,\ldots,\ell$,
2') $F_k^{(\ell)}(\infty) > 0$, $k = \ell+1,\ldots,m$,
3) $\dfrac{|F_k^{(\ell)}(x)|^2}{\bigl| \bigl( F_{k-1}^{(\ell)} F_{k+1}^{(\ell)} \bigr)(x) \bigr|} = 1$, $\quad x \in \Delta_k$, $\qquad (48)$


where $F_0^{(\ell)} \equiv F_{m+1}^{(\ell)} \equiv 1$. Moreover,

$$F_k^{(\ell)} = \operatorname{sg}\Bigl( \prod_{\nu=k}^{m} \psi_\nu^{(\ell)}(\infty) \Bigr) \prod_{\nu=k}^{m} \psi_\nu^{(\ell)}, \qquad (49)$$

where $\operatorname{sg}\bigl( \prod_{\nu=k}^{m} \psi_\nu^{(\ell)}(\infty) \bigr)$ denotes the sign of the leading coefficient of the Laurent expansion at $\infty$ of $\prod_{\nu=k}^{m} \psi_\nu^{(\ell)}$.

We are ready to state a result on the ratio asymptotic for type II Hermite-Padé polynomials of a Nikishin system. This result was obtained in [6] (see also [33]).

Theorem 6 Assume that $\sigma_k' > 0$ almost everywhere on $\Delta_k = \operatorname{supp}\sigma_k$, $k = 1,\ldots,m$. Let $\Lambda \subset \mathbb{Z}_+^m(\bullet)$ be a sequence of multi-indices such that $n_1 - n_m \le d$ for all $n \in \Lambda$, where $d$ is some fixed constant. Then for each fixed $\ell, k \in \{1,\ldots,m\}$, we have

$$\lim_{n\in\Lambda} \frac{Q_{n^{\ell},k}(z)}{Q_{n,k}(z)} = \widetilde{F}_k^{(\ell)}(z), \qquad (50)$$

uniformly on each compact subset of $\mathbb{C}\setminus\Delta_k$, where $F_k^{(\ell)}$ is given in (49), the algebraic functions $\psi_\nu^{(\ell)}$ are defined by (44)-(45), and $\widetilde{F}_k^{(\ell)}$ is the result of dividing $F_k^{(\ell)}$ by the leading coefficient of its Laurent expansion at $\infty$.

Sketch of the Proof Fix $\ell \in \{1,\ldots,m\}$. Because of the interlacing property of the zeros of the polynomials $Q_{n,k}$ and $Q_{n^{\ell},k}$, it follows that for each fixed $k \in \{1,\ldots,m\}$ the family of functions $(Q_{n^{\ell},k}/Q_{n,k})$, $n \in \Lambda$, is uniformly bounded on each compact subset of $\mathbb{C}\setminus\Delta_k$. To prove the theorem we need to show that for any $\Lambda' \subset \Lambda$ such that

$$\lim_{n\in\Lambda'} \frac{Q_{n^{\ell},k}}{Q_{n,k}} = G_k^{(\ell)}, \qquad k = 1,\ldots,m,$$

the limiting functions do not depend on $\Lambda'$. To prove this, it is shown that there exist positive constants $c_1,\ldots,c_m$ such that $(c_k G_k^{(\ell)})_{k=1}^{m}$ verifies the system of boundary value problems (48). Properties 1), 2), and 2') are easily verified by $(G_k^{(\ell)})_{k=1}^{m}$, with 1 on the right-hand side of 2) and 2'). Thanks to the orthogonality properties contained in (12), using results on ratio and relative asymptotics of orthogonal polynomials with respect to varying measures contained in [15, Theorem 6] and [7, Theorem 2], one can also prove that $(G_k^{(\ell)})_{k=1}^{m}$ satisfies 3), with a constant possibly different from 1 on the right-hand side. Normalizing the functions $G_k^{(\ell)}$ appropriately one obtains all the boundary conditions, and it follows that $c_k G_k^{(\ell)} = F_k^{(\ell)}$, $k = 1,\ldots,m$. Then, using that $G_k^{(\ell)}(\infty) = 1$, $k = \ell+1,\ldots,m$, and $(G_k^{(\ell)})'(\infty) = 1$, $k = 1,\ldots,\ell$, one sees that $G_k^{(\ell)}$ has to be the function on the right-hand side of (50), independently of $\Lambda'$. □


Remark 2 An interesting open question is whether one can relax the assumption $n_1 - n_m \le d$ in the theorem. That restriction is connected with the conditions which have been found to be sufficient for the ratio and relative asymptotics of polynomials orthogonal with respect to varying measures. The corresponding theorems in [7, 15] would have to be improved. Perhaps this could be done without too much difficulty if $n_1 - n_m = o(|n|)$, $|n|\to\infty$. Sequences verifying something like (18), as in the case of the weak asymptotics, would require a deeper consideration and substantial new ideas.

Remark 3 On the basis of (50) and (13) one can also prove ratio asymptotics for the sequences $(K_{n^{\ell},k}/K_{n,k})$, $(\Psi_{n^{\ell},k}/\Psi_{n,k})$, $n \in \Lambda$, $k = 1,\ldots,m$ (see [6, 33]). Using Theorem 3 and Proposition 2, it is also possible to prove ratio asymptotics for the polynomials $A_{n,k}$ and the forms $\mathcal{A}_{n,k}$; it is a good exercise (see also [22]).

Remark 4 The ratio asymptotics of multiple orthogonal polynomials find applications in the study of asymptotic properties of modified Nikishin systems and the corresponding Hermite-Padé approximants (see, for example, [34] and [36]).

Acknowledgments This work was supported by research grant PGC2018-096504-B-C33 of the Ministerio de Ciencia, Innovación y Universidades, Spain.

References

1. Apéry, R.: Irrationalité de ζ(2) et ζ(3). Astérisque 61, 11–13 (1979)
2. Angelesco, M.A.: Sur deux extensions des fractions continues algébriques. C.R. Acad. Sci. Paris 18, 262–263 (1919)
3. Aptekarev, A.I.: Asymptotics of simultaneously orthogonal polynomials in the Angelesco case. Math. USSR Sb. 64, 57–84 (1989)
4. Aptekarev, A.I.: Strong asymptotics of multiply orthogonal polynomials for Nikishin systems. Sbornik Math. 190, 631–669 (1999)
5. Aptekarev, A.I.: Sharp estimates for rational approximations of analytic functions. Sbornik Math. 193, 3–72 (2002)
6. Aptekarev, A.I., López Lagomasino, G., Rocha, I.A.: Ratio asymptotic of Hermite-Padé orthogonal polynomials for Nikishin systems. Sbornik Math. 196, 1089–1107 (2005)
7. Barrios Rolanía, D., de la Calle Ysern, B., López Lagomasino, G.: Ratio and relative asymptotic of polynomials orthogonal with respect to varying Denisov-type measures. J. Approx. Theory 139, 223–256 (2006)
8. Bello Hernández, M., López Lagomasino, G., Mínguez Ceniceros, J.: Fourier-Padé approximants for Angelesco systems. Constr. Approx. 26, 339–359 (2007)
9. Bleher, P.M., Kuijlaars, A.B.J.: Random matrices with external source and multiple orthogonal polynomials. Int. Math. Res. Notices 2004(3), 109–129 (2004)
10. Bertola, M., Gekhtman, M., Szmigielski, J.: Cauchy biorthogonal polynomials. J. Approx. Theory 162, 832–867 (2010)
11. Beukers, F.: Padé approximation in number theory. In: Lecture Notes in Math., Vol. 888, pp. 90–99. Springer, Berlin (1981)
12. Bustamante, J., López Lagomasino, G.: Hermite-Padé approximation for Nikishin systems of analytic functions. Russ. Acad. Sci. Sb. Math. 77, 367–384 (1994)
13. Coates, J.: On the algebraic approximation of functions. I, II, III. Indag. Math. 28, 421–461 (1966)


14. Daems, E., Kuijlaars, A.B.J.: Multiple orthogonal polynomials of mixed type and non-intersecting Brownian motions. J. Approx. Theory 146, 91–114 (2007)
15. de la Calle Ysern, B., López Lagomasino, G.: Weak convergence of varying measures and Hermite-Padé orthogonal polynomials. Constr. Approx. 15, 553–575 (1999)
16. Driver, K., Stahl, H.: Simultaneous rational approximants to Nikishin systems. I. Acta Sci. Math. (Szeged) 60, 245–263 (1995)
17. Driver, K., Stahl, H.: Simultaneous rational approximants to Nikishin systems. II. Acta Sci. Math. (Szeged) 61, 261–284 (1995)
18. Fidalgo Prieto, U., López Lagomasino, G.: Rate of convergence of generalized Hermite-Padé approximants of Nikishin systems. Constr. Approx. 23, 165–196 (2006)
19. Fidalgo Prieto, U., López Lagomasino, G.: General results on the convergence of multipoint Padé approximants of Nikishin systems. Constr. Approx. 25, 89–107 (2007)
20. Fidalgo Prieto, U., López Lagomasino, G.: Nikishin systems are perfect. Constr. Approx. 34, 297–356 (2011)
21. Fidalgo Prieto, U., López Lagomasino, G.: Nikishin systems are perfect. The case of unbounded and touching supports. J. Approx. Theory 163, 779–811 (2011)
22. Fidalgo Prieto, U., López, A., López Lagomasino, G., Sorokin, V.N.: Mixed type multiple orthogonal polynomials for two Nikishin systems. Constr. Approx. 32, 255–306 (2010)
23. Gonchar, A.A., Rakhmanov, E.A.: The equilibrium measure and distribution of zeros of extremal polynomials. Math. USSR Sb. 53, 119–130 (1986)
24. Gonchar, A.A.: Rational approximations of analytic functions. In: Proc. Internat. Congress Math., Vol. I, pp. 739–748. Amer. Math. Soc., Providence, RI (1987)
25. Gonchar, A.A., Rakhmanov, E.A.: On convergence of simultaneous Padé approximants for systems of functions of Markov type. Proc. Steklov Inst. Math. 157, 31–50 (1983)
26. Gonchar, A.A., Rakhmanov, E.A.: Equilibrium distributions and degree of rational approximation of analytic functions. Math. USSR Sb. 62, 305–348 (1989)
27. Gonchar, A.A., Rakhmanov, E.A., Sorokin, V.N.: Hermite-Padé approximants for systems of Markov-type functions. Sb. Math. 188, 33–58 (1997)
28. Hermite, Ch.: Sur la fonction exponentielle. C. R. Acad. Sci. Paris 77, 18–24, 74–79, 226–233, 285–293 (1873); reprinted in his Oeuvres, Tome III, pp. 150–181. Gauthier-Villars, Paris (1912)
29. Jager, H.: A simultaneous generalization of the Padé table. I-VI. Indag. Math. 26, 193–249 (1964)
30. Krein, M.G., Nudel'man, A.A.: The Markov Moment Problem and Extremal Problems. Transl. Math. Monogr., Vol. 50. Amer. Math. Soc., Providence, RI (1977)
31. Kuijlaars, A.B.J.: Multiple orthogonal polynomial ensembles. In: Arvesú, J., Marcellán, F., Martínez-Finkelshtein, A. (eds.), Recent Trends in Orthogonal Polynomials and Approximation Theory. Contemporary Mathematics, Vol. 507, pp. 155–176. Amer. Math. Soc., Providence, RI (2010)
32. Kuijlaars, A.B.J.: Multiple orthogonal polynomials in random matrix theory. In: Proc. Internat. Congress Math., Vol. III, pp. 1417–1432. Hyderabad, India (2010)
33. López García, A., López Lagomasino, G.: Ratio asymptotic of Hermite-Padé orthogonal polynomials for Nikishin systems. II. Adv. Math. 218, 1081–1106 (2008)
34. López, A., López Lagomasino, G.: Relative asymptotic of multiple orthogonal polynomials of Nikishin systems. J. Approx. Theory 158, 214–241 (2009)
35. López Lagomasino, G., Medina Peralta, S.: On the convergence of type I Hermite-Padé approximants. Adv. Math. 273, 124–148 (2015)
36. López Lagomasino, G., Medina Peralta, S.: On the convergence of type I Hermite-Padé approximants for a class of meromorphic functions. J. Comput. Appl. Math. 284, 216–227 (2015)
37. Mahler, K.: Perfect systems. Compos. Math. 19, 95–166 (1968)
38. Markov, A.A.: Deux démonstrations de la convergence de certains fractions continues. Acta Math. 19, 93–104 (1895)


39. Nikishin, E.M.: A system of Markov functions. Moscow Univ. Math. Bull. 34, 63–66 (1979)
40. Nikishin, E.M.: On simultaneous Padé approximants. Math. USSR Sb. 41, 409–425 (1982)
41. Nikishin, E.M.: Asymptotics of linear forms for simultaneous Padé approximants. Sov. Math. (Iz. VUZ) 30, 43–52 (1986)
42. Nikishin, E.M., Sorokin, V.N.: Rational Approximations and Orthogonality. Transl. Math. Monogr., Vol. 92. Amer. Math. Soc., Providence, RI (1991)
43. Saff, E.B., Totik, V.: Logarithmic Potentials with External Fields. Series of Comprehensive Studies in Mathematics, Vol. 316. Springer, New York (1997)
44. Stahl, H.: Best uniform rational approximation of x^α on [0,1]. Acta Math. 190, 241–306 (2003)
45. Stahl, H., Totik, V.: General Orthogonal Polynomials. Cambridge University Press, Cambridge (1992)
46. Van Assche, W.: Analytic number theory and rational approximation. In: Branquinho, A., Foulquié, A. (eds.), Coimbra Lecture Notes on Orthogonal Polynomials, pp. 197–229. Nova Science Pub., New York (2008)

Revisiting Biorthogonal Polynomials: An LU Factorization Discussion

Manuel Mañas

Abstract The Gauss–Borel or LU factorization of Gram matrices of bilinear forms is the pivotal element in the discussion of the theory of biorthogonal polynomials. The construction of biorthogonal families of polynomials and their second kind functions, of the spectral matrices modeling the multiplication by the independent variable x, and of the Christoffel–Darboux kernel and its projection properties, is discussed from this point of view. Then, the Hankel case is presented, and different properties specific to this case, such as the three-term relations, Heine formulas, Gauss quadrature and the Christoffel–Darboux formula, are given. The classical orthogonal polynomials of Hermite, Laguerre and Jacobi type are discussed and characterized within this scheme. Finally, it is shown how this approach is instrumental in the derivation of Christoffel formulas for general Christoffel and Geronimus perturbations of the bilinear forms.

Keywords Biorthogonal polynomials · Bivariate functionals · Sesquilinear forms · Gram matrix · Gauss–Borel factorization · Jacobi matrix · Very classical orthogonal polynomials · Christoffel formulas · Christoffel perturbations · Geronimus perturbations · Quasi determinants

1 Introduction

These notes correspond to the orthogonal polynomial part of the five lectures I delivered during the VII Iberoamerican Workshop in Orthogonal Polynomials and Applications (Seventh EIBPOA). As such, more than presenting new original material, they are intended to give an alternative, but consistent and systematic,

M. Mañas
Departamento de Física Teórica, Universidad Complutense de Madrid, Madrid, Spain
Instituto de Ciencias Matemáticas (ICMAT), Madrid, Spain
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_10


construction of the theory of biorthogonal polynomials using the LU factorization of the Gram matrix of a given bilinear form. We refer the interested reader to the classical texts [23, 36] for a general background on the subject. At the beginning, when our group started to work in this area, this LU approach was motivated by the connection of the theory of orthogonal polynomials and the theory of integrable systems, and the ubiquity of factorization problems in the description of the later, see [18, 22, 28–31, 33]. Adler and van Moerbeke performed a pioneering work regarding this approach [1–4]. Despite this original motivation, we soon realized that this factorization technique allows for a general and systematic approach to the subject of orthogonal polynomials, giving a unified framework that can be extended to more sophisticated orthogonality scenarios. Given the understandable space constraint of this volume we will not describe the LU description of the Toda/KP integrable hierarchies and refer the reader to the slides of my lectures posted in the web page of Seventh EIBPOA. With the background given here we hope that the reader could understand further developments applied in other orthogonality situations. Let us now describe what we have done regarding this research in with more general orthogonality frameworks. In [7] we studied the generalized orthogonal polynomials [2] and its matrix extensions from the Gauss–Borel view point. In [8] we gave a complete study in terms of this factorization for multiple orthogonal polynomials of mixed type and characterized the integrable systems associated to them. Then, we studied Laurent orthogonal polynomials in the unit circle trough the CMV approach in [5] and found in [6] the Christoffel–Darboux formula for generalized orthogonal matrix polynomials. 
These methods were further extended: for example, we gave an alternative Christoffel–Darboux formula for mixed multiple orthogonal polynomials [10] and developed the corresponding theory of matrix Laurent orthogonal polynomials on the unit circle and its associated Toda-type hierarchy [11]. In [9, 15, 16] a complete analysis, in terms of the spectral theory of matrix polynomials, of Christoffel and Geronimus perturbations and Christoffel formulas was given for matrix orthogonal polynomials, while in [17] we gave a complete description of the Christoffel formulas corresponding to Christoffel perturbations of univariate CMV Laurent orthogonal polynomials. We also mention recent developments on multivariate orthogonal polynomials in real spaces (MVOPR), the corresponding Christoffel formula and the interplay with algebraic geometry [12, 13]. A similar multivariate situation, but for the complex torus and the CMV ordering, was analyzed in [14]. During my lectures I learned, with pleasure, that Diego Dominici is also interested in the ways in which the LU factorization is a useful tool in the theory of orthogonal polynomials. A paper by Dominici related to matrix factorizations and OP will appear in the journal Random Matrices: Theory and Applications.


2 LU Factorization and Gram Matrix

Given a square complex matrix $A\in\mathbb{C}^{N\times N}$, an LU factorization refers to the factorization of $A$ into a lower triangular matrix $L$ and an upper triangular matrix $U$, that is, $A=LU$.

2.1 LDU Factorization

An LDU decomposition is a decomposition of the form $A=LDU$, where $D$ is a diagonal matrix and $L$ and $U$ are unitriangular matrices, meaning that all the entries on the diagonals of $L$ and $U$ are one.

Existence and Uniqueness
• A matrix $A\in\mathrm{GL}_N(\mathbb{C})$ admits an LU (or LDU) factorization if and only if all its leading principal minors are nonzero.
• If $A\in\mathbb{C}^{N\times N}$ is a singular matrix of rank $r$, then it admits an LU factorization if its first $r$ leading principal minors are nonzero, although the converse is not true.
• If $A\in\mathbb{C}^{N\times N}$ has an LDU factorization, then the factorization is unique.

Cholesky Factorization
• If $A\in\mathbb{C}^{N\times N}$ is a symmetric matrix, $A=A^\top$, we can choose $U$ as the transpose of $L$: $A=LDL^\top$.
• When $A$ is Hermitian, $A=A^\dagger$, we can similarly find $A=LDL^\dagger$.

These decompositions are called Cholesky factorizations. The Cholesky decomposition always exists and is unique provided the matrix is positive definite.

2.2 Schur Complements

Given a block matrix

$$M=\begin{pmatrix} A & B\\ C & D\end{pmatrix}$$

with $A\in\mathrm{GL}_p(\mathbb{C})$, $B\in\mathbb{C}^{p\times q}$, $C\in\mathbb{C}^{q\times p}$ and $D\in\mathbb{C}^{q\times q}$, we define its Schur complement with respect to $A$ as

$$M\backslash A := D-CA^{-1}B\in\mathbb{C}^{q\times q}.$$

Proposition 1 (Schur Complements and LU Factorization) If $A$ is nonsingular, we have for the block matrix $M$ the following factorization:

$$M=\begin{pmatrix} I_p & 0_{p\times q}\\ CA^{-1} & I_q\end{pmatrix}\begin{pmatrix} A & 0_{p\times q}\\ 0_{q\times p} & M\backslash A\end{pmatrix}\begin{pmatrix} I_p & A^{-1}B\\ 0_{q\times p} & I_q\end{pmatrix}.$$

For a detailed account of Schur complements see [38].

Given a matrix $M=(M_{i,j})_{i,j=0}^{N-1}\in\mathbb{C}^{N\times N}$ and $\ell+1<N$, its $(\ell+1)$-th truncation is given by

$$M^{[\ell+1]}:=\begin{pmatrix} M_{0,0} & M_{0,1} & \dots & M_{0,\ell}\\ M_{1,0} & M_{1,1} & \dots & M_{1,\ell}\\ \vdots & \vdots & \ddots & \vdots\\ M_{\ell,0} & M_{\ell,1} & \dots & M_{\ell,\ell}\end{pmatrix}.$$

Proposition 2 (Nonzero Minors and LDU Factorization) Any nonsingular matrix $M\in\mathrm{GL}_N(\mathbb{C})$ with all its leading principal minors different from zero, i.e., $\det M^{[\ell+1]}\neq 0$ for $\ell\in\{0,1,\dots,N-1\}$, has an LDU factorization.

Proof As $\det M^{[N-1]}\neq 0$, we have

$$M=\begin{pmatrix} I_{N-1} & 0_{(N-1)\times 1}\\ v_N & 1\end{pmatrix}\begin{pmatrix} M^{[N-1]} & 0_{(N-1)\times 1}\\ 0_{1\times(N-1)} & M\backslash M^{[N-1]}\end{pmatrix}\begin{pmatrix} I_{N-1} & w_N\\ 0_{1\times(N-1)} & 1\end{pmatrix},$$

where $v_N$ and $w_N$ are a row and a column vector, respectively, with $N-1$ components. Now, from $\det M^{[N-2]}\neq 0$ we deduce

$$M^{[N-1]}=\begin{pmatrix} I_{N-2} & 0_{(N-2)\times 1}\\ v_{N-1} & 1\end{pmatrix}\begin{pmatrix} M^{[N-2]} & 0_{(N-2)\times 1}\\ 0_{1\times(N-2)} & M^{[N-1]}\backslash M^{[N-2]}\end{pmatrix}\begin{pmatrix} I_{N-2} & w_{N-1}\\ 0_{1\times(N-2)} & 1\end{pmatrix},$$

which, inserted in the previous result, yields

$$M=\begin{pmatrix} I_{N-2} & 0\\[2pt] * & \begin{smallmatrix}1 & 0\\ * & 1\end{smallmatrix}\end{pmatrix}\begin{pmatrix} M^{[N-2]} & 0_{(N-2)\times 2}\\ 0_{2\times(N-2)} & \begin{smallmatrix}M^{[N-1]}\backslash M^{[N-2]} & 0\\ 0 & M\backslash M^{[N-1]}\end{smallmatrix}\end{pmatrix}\begin{pmatrix} I_{N-2} & *\\ 0 & \begin{smallmatrix}1 & *\\ 0 & 1\end{smallmatrix}\end{pmatrix}.$$

Iterating this procedure, we finally get an LDU factorization with

$$D=\operatorname{diag}\bigl(M^{[1]}\backslash M^{[0]},\,M^{[2]}\backslash M^{[1]},\,\dots,\,M\backslash M^{[N-1]}\bigr).$$
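The recursive argument above can be checked numerically. The following sketch (plain NumPy; the function `ldu` and the test matrix are my own, not from the text) computes the LDU factorization by successive Schur complements and verifies that the diagonal factor collects the ratios of consecutive leading principal minors, $D_{\ell,\ell}=\det M^{[\ell+1]}/\det M^{[\ell]}$.

```python
import numpy as np

def ldu(M):
    """LDU factorization by successive Schur complements (no pivoting).

    Requires all leading principal minors to be nonzero (Proposition 2).
    Returns unitriangular L and U and diagonal D with M = L @ D @ U.
    """
    M = np.array(M, dtype=float)
    n = M.shape[0]
    L, U, D = np.eye(n), np.eye(n), np.zeros((n, n))
    S = M.copy()  # current Schur complement, embedded in the full matrix
    for k in range(n):
        D[k, k] = S[k, k]
        L[k:, k] = S[k:, k] / S[k, k]
        U[k, k:] = S[k, k:] / S[k, k]
        # one Schur-complement step eliminates row and column k
        S[k + 1:, k + 1:] -= np.outer(S[k + 1:, k], S[k, k + 1:]) / S[k, k]
    return L, D, U

M = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])
L, D, U = ldu(M)
assert np.allclose(L @ D @ U, M)

# D collects the Schur complements M^{[l+1]} \ M^{[l]},
# i.e. the ratios of consecutive leading principal minors.
minors = [np.linalg.det(M[:k + 1, :k + 1]) for k in range(3)]
assert np.allclose(np.diag(D),
                   [minors[0], minors[1] / minors[0], minors[2] / minors[1]])
```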


2.3 Bilinear Forms, Gram Matrices and LU Factorizations

A bilinear form $\langle\cdot,\cdot\rangle$ on the ring $\mathbb{C}[x]$ of complex-valued polynomials in the variable $x$ is a continuous map

$$\langle\cdot,\cdot\rangle:\mathbb{C}[x]\times\mathbb{C}[x]\longrightarrow\mathbb{C},\qquad (P(x),Q(x))\mapsto\langle P(x),Q(y)\rangle,$$

such that for any triple $P(x),Q(x),R(x)\in\mathbb{C}[x]$ the following properties are fulfilled:

1. $\langle AP(x)+BQ(x),R(y)\rangle = A\langle P(x),R(y)\rangle + B\langle Q(x),R(y)\rangle$, for all $A,B\in\mathbb{C}$;
2. $\langle P(x),AQ(y)+BR(y)\rangle = \langle P(x),Q(y)\rangle A + \langle P(x),R(y)\rangle B$, for all $A,B\in\mathbb{C}$.

Observe that we have not taken the conjugate in one of the variables. For $P(x)=\sum_{k=0}^{\deg P}p_kx^k$ and $Q(x)=\sum_{l=0}^{\deg Q}q_lx^l$ the bilinear form is

$$\langle P(x),Q(y)\rangle=\sum_{k=0}^{\deg P}\sum_{l=0}^{\deg Q}p_k\,G_{k,l}\,q_l,\qquad G_{k,l}=\bigl\langle x^k,y^l\bigr\rangle.$$

The corresponding semi-infinite matrix

$$G=\begin{pmatrix} G_{0,0} & G_{0,1} & \dots\\ G_{1,0} & G_{1,1} & \dots\\ \vdots & \vdots & \end{pmatrix}$$

is the so-called Gram matrix of the bilinear form.

Examples

• Borel measures. A first example is given by a complex (or real) Borel measure $\mathrm{d}\mu$ with support $\operatorname{supp}(\mathrm{d}\mu)\subset\mathbb{R}$. Given any pair of polynomials $P(x),Q(x)\in\mathbb{C}[x]$ we introduce the bilinear form

$$\langle P(x),Q(x)\rangle_\mu=\int_{\mathbb{R}}P(x)\,\mathrm{d}\mu(x)\,Q(x).$$

• Linear functionals. We consider the space of polynomials $\mathbb{C}[x]$, with an appropriate topology, as the space of fundamental functions, and take the space of generalized functions as the corresponding continuous linear functionals. Take a linear functional $u\in(\mathbb{C}[x])'$ and consider

$$\langle P(x),Q(x)\rangle_u=u\bigl(P(x)Q(x)\bigr).$$


In both examples the Gram matrix is a Hankel matrix, $G_{i+1,j}=G_{i,j+1}$. In these cases the Gram matrix is also known as the moment matrix, since

$$G_{i,j}=\int_{\mathbb{R}}x^{i+j}\,\mathrm{d}\mu(x)=m_{i+j},$$

where $m_j$ is known as the $j$-th moment in the measure case, while for the linear functional scenario we have $G_{i,j}=u(x^{i+j})$.

Schwartz Generalized Kernels There are bilinear forms which do not have this particular Hankel-type symmetry. Let $u_{x,y}\in(\mathbb{C}[x,y])'\cong\mathbb{C}[[x,y]]$, so that $\langle P(x),Q(y)\rangle=\langle u_{x,y},P(x)\otimes Q(y)\rangle$. The Gram matrix of this bilinear form has entries $G_{k,l}=\langle u_{x,y},x^k\otimes y^l\rangle$. This gives a continuous linear map $L_u:\mathbb{C}[y]\to(\mathbb{C}[x])'$ such that $\langle P(x),Q(y)\rangle=\langle L_u(Q(y)),P(x)\rangle$. See [34] for an introduction to bivariate generalized kernels.

Integral Kernels A kernel $u(x,y)$ is a complex-valued locally integrable function that defines an integral operator $f(x)\mapsto g(x)=\int u(x,y)f(y)\,\mathrm{d}y$, and

$$\langle P(x),Q(y)\rangle=\int_{\mathbb{R}^2}P(x)\,u(x,y)\,Q(y)\,\mathrm{d}x\,\mathrm{d}y.$$

There is an obvious way of ordering the monomials $\{x^n\}_{n=0}^{\infty}$ in a semi-infinite vector $\chi(x)=(1,x,x^2,\dots)^\top$. Also, we consider $\chi^*(x)=(x^{-1},x^{-2},x^{-3},\dots)^\top$, with

$$(\chi(y))^\top\chi^*(x)=\frac{1}{x-y},\qquad |y|<|x|.$$

The semi-infinite Gram matrix can be written as $G=\langle\chi,\chi^\top\rangle$. For a Borel measure it reads $G=\int\chi(x)(\chi(x))^\top\,\mathrm{d}\mu(x)$, and for a linear functional $G=\bigl\langle u(x),\chi(x)(\chi(x))^\top\bigr\rangle$. When dealing with an integral kernel we have $G=\int\chi(x)\,u(x,y)\,(\chi(y))^\top\,\mathrm{d}x\,\mathrm{d}y$, and for a Schwartz kernel $G=\bigl\langle u_{x,y},\chi(x)(\chi(y))^\top\bigr\rangle$.
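For a concrete instance, a quick numerical sketch (my own NumPy snippet, not from the text): the moment matrix of the Lebesgue measure on $[0,1]$ is the Hilbert matrix $G_{i,j}=1/(i+j+1)$, which exhibits the Hankel symmetry just described; it is also positive definite, since the measure is positive.

```python
import numpy as np

# Moment matrix of dμ = dx on [0,1]: G_{i,j} = ∫_0^1 x^{i+j} dx = 1/(i+j+1),
# i.e. the (truncated) Hilbert matrix.
N = 6
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

# Hankel symmetry: G_{i+1,j} = G_{i,j+1} (constant antidiagonals).
assert np.allclose(G[1:, :-1], G[:-1, 1:])

# For a positive measure the moment matrix is positive definite.
assert np.all(np.linalg.eigvalsh(G) > 0)
```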


3 Orthogonal Polynomials

Definition 1 (Quasi-Definite Bilinear Forms) We say that a bilinear form $\langle\cdot,\cdot\rangle$ is quasi-definite whenever its Gram matrix has all its leading principal minors different from zero.

Proposition 3 (Quasi-Definiteness and LDU Factorization) The Gram matrix of a quasi-definite bilinear form admits a unique LDU factorization.

Given a quasi-definite bilinear form on the space of polynomials, we consider the LDU factorization of its Gram matrix in the form

$$G=(S_1)^{-1}H(S_2)^{-\top},$$

where $S_1$ and $S_2$ are lower unitriangular matrices and $H$ is a diagonal matrix. When the quasi-definite bilinear form comes from a Borel measure or a linear functional, the corresponding Gram matrix, now known as the moment matrix, is symmetric, $G=G^\top$; thus the LDU factorization becomes a Cholesky factorization. Whenever the Borel measure is positive (a sign-definite measure will equally do), the moment matrix is positive definite, i.e., all the leading principal minors of the moment matrix are strictly positive. Given either a Borel measure or a linear functional, we consider its LDU factorization $G=S^{-1}HS^{-\top}$, where $S$ is a lower unitriangular matrix and $H$ is a diagonal matrix. For a positive Borel measure, the diagonal coefficients $H_k$ are positive, $H_k>0$.

Definition 2 (Constructing the Polynomials) Given a Gram matrix and its Gauss–Borel factorization, we construct the following two families of polynomials:

$$P_1(x):=S_1\chi(x),\qquad P_2(x):=S_2\chi(x).$$

Here $P_i=(P_{i,0},P_{i,1},\dots)^\top$, with $P_{i,k}(x)=x^k+\cdots$ monic polynomials of degree $k$.

Proposition 4 (Orthogonality Relations) The above families of polynomials satisfy the following orthogonality relations:

$$\bigl\langle P_{1,k}(x),y^l\bigr\rangle=\delta_{l,k}H_k,\qquad \bigl\langle x^l,P_{2,k}(y)\bigr\rangle=\delta_{l,k}H_k,\qquad 0\le l\le k.$$

Proof We have that $\bigl\langle P_1(x),(\chi(y))^\top\bigr\rangle=S_1\bigl\langle\chi(x),(\chi(y))^\top\bigr\rangle=S_1G=H(S_2)^{-\top}$ is an upper triangular matrix with diagonal $H$, and the result follows.


Proposition 5 (Biorthogonality Relations) The above families of polynomials are biorthogonal:

$$\bigl\langle P_{1,k}(x),P_{2,l}(y)\bigr\rangle=\delta_{l,k}H_k.$$

Proof We have that $\bigl\langle P_1(x),(P_2(y))^\top\bigr\rangle=S_1\bigl\langle\chi(x),(\chi(y))^\top\bigr\rangle(S_2)^\top=S_1G(S_2)^\top=H$.

3.1 Quasi-Determinants

We include this brief section here because, despite the fact that for standard orthogonality the quotient of determinants is enough to describe the results adequately, in more general situations quasi-determinants are needed. Moreover, even in the standard situation they give more compact expressions. As we will see, we can understand them as an extension of determinants to noncommutative situations and also as Schur complements.

Some History In the late 1920s Archibald Richardson, one of the two mathematicians responsible for the Littlewood–Richardson rule, and the famous logician Arend Heyting, founder of intuitionistic logic, studied possible extensions of the determinant notion to division rings. Heyting defined the designant of a matrix with noncommutative entries, which for $2\times 2$ matrices was the Schur complement, and generalized it to larger dimensions by induction.

The Situation Nowadays The modern theory, from 1990 until today, was given by Gel'fand, Retakh and collaborators; see [24]. Quasi-determinants were defined over free division rings, and it was noticed early on that a quasi-determinant is not an analog of the commutative determinant but rather of a ratio of determinants. A cornerstone for quasi-determinants is the heredity principle: quasi-determinants of quasi-determinants are quasi-determinants; there is no analog of such a principle for determinants.

3.1.1 The Easiest Quasi-Determinant: A Schur Complement

We start with $k=2$, so that $A=\begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2}\end{pmatrix}$. In this case the first quasi-determinant is

$$\Theta_1(A):=A/A_{1,1}=A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2};$$

i.e., a Schur complement, which requires $\det A_{1,1}\neq 0$.

Olver vs Gel'fand The notation of Olver [32] is different from that of the Gel'fand school, where $\Theta_1(A)=|A|_{2,2}$. There is another quasi-determinant, $\Theta_2(A)=A/A_{2,2}=|A|_{1,1}=A_{1,1}-A_{1,2}A_{2,2}^{-1}A_{2,1}$, the other Schur complement, for which we need $A_{2,2}$ to be a nonsingular matrix. Other quasi-determinants that can be considered for regular square blocks are $|A|_{1,2}$ and $|A|_{2,1}$; these last two quasi-determinants are out of the scope of these notes.

Example Consider

$$A=\begin{pmatrix} A_{1,1} & A_{1,2} & A_{1,3}\\ A_{2,1} & A_{2,2} & A_{2,3}\\ A_{3,1} & A_{3,2} & A_{3,3}\end{pmatrix}$$

and take the quasi-determinant with respect to the first diagonal block, which we define as the corresponding Schur complement:

$$\Theta_1(A)=\begin{pmatrix} A_{2,2} & A_{2,3}\\ A_{3,2} & A_{3,3}\end{pmatrix}-\begin{pmatrix} A_{2,1}\\ A_{3,1}\end{pmatrix}A_{1,1}^{-1}\begin{pmatrix} A_{1,2} & A_{1,3}\end{pmatrix}=\begin{pmatrix} A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2} & A_{2,3}-A_{2,1}A_{1,1}^{-1}A_{1,3}\\ A_{3,2}-A_{3,1}A_{1,1}^{-1}A_{1,2} & A_{3,3}-A_{3,1}A_{1,1}^{-1}A_{1,3}\end{pmatrix}.$$

Next, take the quasi-determinant of this result given by the Schur complement with respect to its first diagonal block:

$$\Theta_2(\Theta_1(A))=\bigl(A_{3,3}-A_{3,1}A_{1,1}^{-1}A_{1,3}\bigr)-\bigl(A_{3,2}-A_{3,1}A_{1,1}^{-1}A_{1,2}\bigr)\bigl(A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2}\bigr)^{-1}\bigl(A_{2,3}-A_{2,1}A_{1,1}^{-1}A_{1,3}\bigr).$$

Compute, for the very same matrix $A$, the Schur complement with respect to the leading $2\times 2$ block:

$$\Theta_{\{1,2\}}(A)=A_{3,3}-\begin{pmatrix} A_{3,1} & A_{3,2}\end{pmatrix}\begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2}\end{pmatrix}^{-1}\begin{pmatrix} A_{1,3}\\ A_{2,3}\end{pmatrix}.$$

Now, from the block inversion formula

$$\begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2}\end{pmatrix}^{-1}=\begin{pmatrix} A_{1,1}^{-1}+A_{1,1}^{-1}A_{1,2}\bigl(A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2}\bigr)^{-1}A_{2,1}A_{1,1}^{-1} & -A_{1,1}^{-1}A_{1,2}\bigl(A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2}\bigr)^{-1}\\ -\bigl(A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2}\bigr)^{-1}A_{2,1}A_{1,1}^{-1} & \bigl(A_{2,2}-A_{2,1}A_{1,1}^{-1}A_{1,2}\bigr)^{-1}\end{pmatrix}$$

we deduce $\Theta_2(\Theta_1(A))=\Theta_{\{1,2\}}(A)$, the simplest case of the heredity principle.

Proposition 6 (Heredity Principle) Quasi-determinants of quasi-determinants are quasi-determinants. Given any set $I=\{i_1,\dots,i_m\}\subset\{1,\dots,k\}$, the heredity principle allows us to define the quasi-determinant $\Theta_I(A)=\Theta_{i_1}(\Theta_{i_2}(\cdots\Theta_{i_m}(A)\cdots))$.

Proposition 7 (Quasi-Determinantal Expressions) The sequences of biorthogonal polynomials and their squared norms are quasi-determinants:

$$P_{1,k}(x)=\Theta_*\begin{pmatrix} G^{[k]} & \chi^{[k]}(x)\\ \begin{pmatrix} G_{k,0} & G_{k,1} & \dots & G_{k,k-1}\end{pmatrix} & x^k\end{pmatrix},\qquad P_{2,k}(x)=\Theta_*\begin{pmatrix} G^{[k]} & \begin{pmatrix} G_{0,k}\\ G_{1,k}\\ \vdots\\ G_{k-1,k}\end{pmatrix}\\ \bigl(\chi^{[k]}(x)\bigr)^\top & x^k\end{pmatrix},\qquad H_k=\Theta_*\bigl(G^{[k+1]}\bigr),$$

where $\chi^{[k]}(x)=(1,x,\dots,x^{k-1})^\top$ and $\Theta_*$ denotes the last quasi-determinant, i.e., the Schur complement with respect to the block $G^{[k]}$.

Proof From the LU factorization, recalling that $S_{k,k}=1$, we get

$$\begin{pmatrix} S_{k,0} & S_{k,1} & \dots & S_{k,k-1}\end{pmatrix}G^{[k]}=-\begin{pmatrix} G_{k,0} & G_{k,1} & \dots & G_{k,k-1}\end{pmatrix}.$$

As $G^{[k]}$ is a nonsingular matrix,

$$\begin{pmatrix} S_{k,0} & S_{k,1} & \dots & S_{k,k-1}\end{pmatrix}=-\begin{pmatrix} G_{k,0} & G_{k,1} & \dots & G_{k,k-1}\end{pmatrix}\bigl(G^{[k]}\bigr)^{-1}.$$


On the other hand, the $(k,k)$ entry of $S_1G=H(S_2)^{-\top}$ gives

$$H_k=\begin{pmatrix} S_{k,0} & S_{k,1} & \dots & S_{k,k-1}\end{pmatrix}\begin{pmatrix} G_{0,k}\\ G_{1,k}\\ \vdots\\ G_{k-1,k}\end{pmatrix}+G_{k,k},$$

which, together with the previous expression for the coefficients $S_{k,l}$, yields the stated Schur complement, i.e., the quasi-determinant $\Theta_*(G^{[k+1]})$.
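Proposition 7 can be verified numerically. In the sketch below (my own NumPy code; the Hilbert-matrix moment example is an illustrative assumption, not from the text) the monic polynomial is built from the Schur complement with respect to $G^{[k]}$ and checked against the orthogonality relations and the minor-ratio formula for $H_k$.

```python
import numpy as np

# Moment (Hilbert) matrix of the Lebesgue measure on [0,1].
N = 6
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

k = 3
# Proposition 7: P_k(x) = x^k - (G_{k,0},...,G_{k,k-1}) (G^{[k]})^{-1} (1,...,x^{k-1})^T,
# so the monomial coefficients of the monic P_k are:
c = np.zeros(k + 1)
c[k] = 1.0
c[:k] = -np.linalg.solve(G[:k, :k], G[:k, k])  # G is symmetric here

# Orthogonality: <P_k, x^l> = sum_j c_j G_{j,l} vanishes for l < k, equals H_k at l = k.
inner = G[:k + 1, :k + 1] @ c
assert np.allclose(inner[:k], 0.0)

# H_k = Θ*(G^{[k+1]}) is the ratio of consecutive leading principal minors.
Hk = np.linalg.det(G[:k + 1, :k + 1]) / np.linalg.det(G[:k, :k])
assert np.isclose(inner[k], Hk)
```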



3.2 Second Kind Functions and LU Factorizations

Definition 3 (Second Kind Functions) For a generalized kernel $u_{x,y}$ we define two families of second kind functions given by

$$C_{1,n}(z)=\Bigl\langle P_{1,n}(x),\frac{1}{z-y}\Bigr\rangle_u,\qquad z\notin\operatorname{supp}_y(u),$$
$$C_{2,n}(z)=\Bigl\langle\frac{1}{z-x},P_{2,n}(y)\Bigr\rangle_u,\qquad z\notin\operatorname{supp}_x(u).$$

Proposition 8 (LU Factorization Representation of Second Kind Functions) For $z$ such that $|z|>\sup\{|y|:y\in\operatorname{supp}_y(u)\}$ it follows that $C_1(z)=H(S_2)^{-\top}\chi^*(z)$, while for $z$ such that $|z|>\sup\{|x|:x\in\operatorname{supp}_x(u)\}$ we find $(C_2(z))^\top=(\chi^*(z))^\top(S_1)^{-1}H$, i.e., $C_2(z)=H(S_1)^{-\top}\chi^*(z)$.

Proof Whenever $z$ belongs to an annulus around the origin with no intersection with the $y$ support of the functional,

$$C_1(z)=S_1\Bigl\langle\chi(x),\frac{1}{z-y}\Bigr\rangle_u=S_1\Bigl\langle\chi(x),\sum_{k=0}^{\infty}\frac{y^k}{z^{k+1}}\Bigr\rangle_u=S_1\bigl\langle\chi(x),(\chi(y))^\top\bigr\rangle_u\chi^*(z)=S_1G\chi^*(z)=H(S_2)^{-\top}\chi^*(z).$$
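A numerical sanity check of Proposition 8 (a sketch under my own choices, not from the text: $\mathrm{d}\mu=\mathrm{d}x$ on $[0,1]$, a finite truncation of the moment matrix, and NumPy's Cholesky playing the role of the Gauss–Borel factorization) compares the truncated expression $HS^{-\top}\chi^*(z)$ with a direct quadrature of $C_{1,n}(z)=\int_0^1 P_n(x)/(z-x)\,\mathrm{d}x$:

```python
import numpy as np

# Truncated moment matrix of dμ = dx on [0,1] (Hilbert matrix).
N = 8
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

# Cholesky gives the Gauss-Borel factorization G = S^{-1} H S^{-T}.
L = np.linalg.cholesky(G)          # G = L L^T with L = S^{-1} H^{1/2}
h = np.diag(L)
Sinv = L / h                       # S^{-1}, lower unitriangular
H = np.diag(h ** 2)
S = np.linalg.inv(Sinv)            # P(x) = S χ(x), monic OPs

n, z = 2, 3.0
chi_star = z ** -np.arange(1, N + 1)
C1 = H @ Sinv.T @ chi_star         # truncation of C_1(z) = H S^{-T} χ*(z)

# Direct evaluation C_{1,n}(z) = ∫_0^1 P_n(x)/(z-x) dx by Gauss quadrature.
t, w = np.polynomial.legendre.leggauss(30)
x = 0.5 * (t + 1.0)                # nodes mapped from [-1,1] to [0,1]
Pn = sum(S[n, j] * x ** j for j in range(N))
direct = np.sum(0.5 * w * Pn / (z - x))
assert abs(C1[n] - direct) < 1e-4  # agreement up to series truncation error
```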


When $z$ belongs to an annulus around the origin with no intersection with the $x$ support of the functional,

$$(C_2(z))^\top=\Bigl\langle\frac{1}{z-x},(\chi(y))^\top\Bigr\rangle_u(S_2)^\top=\Bigl\langle\sum_{k=0}^{\infty}\frac{x^k}{z^{k+1}},(\chi(y))^\top\Bigr\rangle_u(S_2)^\top=(\chi^*(z))^\top\bigl\langle\chi(x),(\chi(y))^\top\bigr\rangle_u(S_2)^\top=(\chi^*(z))^\top G(S_2)^\top=(\chi^*(z))^\top(S_1)^{-1}H.$$









3.3 Spectral Matrices

Let us introduce the shift or spectral matrix

$$\Lambda:=\begin{pmatrix} 0 & 1 & 0 & 0 & \dots\\ 0 & 0 & 1 & 0 & \dots\\ 0 & 0 & 0 & 1 & \dots\\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$

The following spectral properties hold:

$$\Lambda\chi(x)=x\chi(x),\qquad \Lambda\chi^*(x)=\frac{1}{x}\chi^*(x).$$

If $(E_{i,j})_{s,t}:=\delta_{s,i}\delta_{t,j}$, we have:

• on the one hand $\Lambda\Lambda^\top=I$ and, on the other hand, $\Lambda^\top\Lambda=I-E_{0,0}$;
• $\Lambda^\top\chi(x)=\frac{1}{x}(I-E_{0,0})\chi(x)$.

We introduce the semi-infinite matrices

$$J_1:=S_1\Lambda(S_1)^{-1},\qquad J_2:=S_2\Lambda(S_2)^{-1}.$$

Proposition 9 (Spectral Matrices are Hessenberg) The spectral matrices are lower uni-Hessenberg matrices, i.e., of the form

$$\begin{pmatrix} * & 1 & 0 & 0 & \dots\\ * & * & 1 & 0 & \dots\\ \vdots & & \ddots & \ddots & \end{pmatrix}.$$


Proposition 10 (Spectrality) The spectral matrices satisfy the eigenvalue property

$$J_1P_1(z)=zP_1(z),\qquad J_2P_2(z)=zP_2(z).$$

Proposition 11 (Eigenvalues of the Truncation and Roots of the Polynomials) The roots of $P_{i,k}(z)$ and the eigenvalues of the truncation $J_i^{[k]}$ coincide.

Proof We have

$$J_i^{[k]}\begin{pmatrix} P_{i,0}(x)\\ P_{i,1}(x)\\ \vdots\\ P_{i,k-1}(x)\end{pmatrix}=x\begin{pmatrix} P_{i,0}(x)\\ P_{i,1}(x)\\ \vdots\\ P_{i,k-1}(x)\end{pmatrix}-\begin{pmatrix} 0\\ 0\\ \vdots\\ P_{i,k}(x)\end{pmatrix}.$$

For a root $\alpha$, i.e., $P_{i,k}(\alpha)=0$, we arrive at

$$J_i^{[k]}\begin{pmatrix} P_{i,0}(\alpha)\\ P_{i,1}(\alpha)\\ \vdots\\ P_{i,k-1}(\alpha)\end{pmatrix}=\alpha\begin{pmatrix} P_{i,0}(\alpha)\\ P_{i,1}(\alpha)\\ \vdots\\ P_{i,k-1}(\alpha)\end{pmatrix},$$

and therefore $(P_{i,0}(\alpha),P_{i,1}(\alpha),\dots,P_{i,k-1}(\alpha))^\top$ is an eigenvector of $J_i^{[k]}$ with eigenvalue $\alpha$.
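Proposition 11 is the mechanism behind the eigenvalue-based construction of Gauss quadrature nodes. A minimal check (my own NumPy snippet; the monic Legendre recurrence coefficients $c_k=k^2/(4k^2-1)$ are standard facts, not derived in the text):

```python
import numpy as np

# Monic Legendre recurrence: x P_k = P_{k+1} + c_k P_{k-1}, c_k = k^2/(4k^2 - 1).
k = 5
J = np.zeros((k, k))
for i in range(k - 1):
    J[i, i + 1] = 1.0                                  # uni-Hessenberg superdiagonal
    J[i + 1, i] = (i + 1) ** 2 / (4.0 * (i + 1) ** 2 - 1.0)

# Proposition 11: the eigenvalues of J^{[k]} are the roots of P_k, i.e.
# the nodes of the k-point Gauss-Legendre quadrature rule.
eigs = np.sort(np.linalg.eigvals(J).real)
nodes, _ = np.polynomial.legendre.leggauss(k)
assert np.allclose(eigs, np.sort(nodes))
```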

3.4 Christoffel–Darboux Kernels

Definition 4 (Christoffel–Darboux Kernels) Given two sequences of biorthogonal polynomials

$$\bigl\{P_{1,k}(x)\bigr\}_{k=0}^{\infty}\quad\text{and}\quad\bigl\{P_{2,k}(y)\bigr\}_{k=0}^{\infty}$$

with respect to the bilinear form $\langle\cdot,\cdot\rangle_u$, we define the $n$-th Christoffel–Darboux kernel

$$K_n(x,y):=\sum_{k=0}^{n}P_{2,k}(y)(H_k)^{-1}P_{1,k}(x),$$

and the mixed Christoffel–Darboux kernel

$$K_n^{\mathrm{mix}}(x,y):=\sum_{k=0}^{n}P_{2,k}(y)(H_k)^{-1}C_{1,k}(x).$$

Proposition 12 (Projection Properties)

(i) For a quasi-definite generalized kernel $u_{x,y}$, the corresponding Christoffel–Darboux kernel gives the projection operators

$$\Bigl\langle K_n(x,z),\sum_{j=0}^{m}\lambda_jP_{2,j}(y)\Bigr\rangle_u=\sum_{j=0}^{\min(m,n)}\lambda_jP_{2,j}(z),\qquad \Bigl\langle\sum_{j=0}^{m}\lambda_jP_{1,j}(x),K_n(z,y)\Bigr\rangle_u=\sum_{j=0}^{\min(m,n)}\lambda_jP_{1,j}(z),$$

for any $m\in\{0,1,2,\dots\}$.

(ii) In particular, we have

$$\bigl\langle K_n(x,z),y^l\bigr\rangle_u=z^l,\qquad l\in\{0,1,\dots,n\}.$$

Proposition 13 (ABC Theorem (Aitken, Berg and Collar) [35]) We have the following relation:

$$K^{[l]}(x,y)=\bigl(\chi^{[l]}(y)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(x).$$

Proof It is a consequence of the following chain of equalities:

$$\bigl(\chi^{[l]}(y)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(x)=\bigl(\chi^{[l]}(y)\bigr)^\top\Bigl(\bigl(S_1^{[l]}\bigr)^{-1}H^{[l]}\bigl(S_2^{[l]}\bigr)^{-\top}\Bigr)^{-1}\chi^{[l]}(x)=\Bigl(S_2^{[l]}\chi^{[l]}(y)\Bigr)^\top\bigl(H^{[l]}\bigr)^{-1}S_1^{[l]}\chi^{[l]}(x)=\bigl(P_2^{[l]}(y)\bigr)^\top\bigl(H^{[l]}\bigr)^{-1}P_1^{[l]}(x).$$

Proposition 14 (Reproducing Property) As we are dealing with a projection, we find

$$\bigl\langle K^{[l]}(x,z_2),K^{[l]}(z_1,y)\bigr\rangle_u=K^{[l]}(z_1,z_2).$$


Proof As an exercise, let us use the ABC theorem:

$$\bigl\langle K^{[l]}(x,z_2),K^{[l]}(z_1,y)\bigr\rangle_u=\Bigl\langle\bigl(\chi^{[l]}(z_2)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(x),\ \bigl(\chi^{[l]}(y)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(z_1)\Bigr\rangle_u=\bigl(\chi^{[l]}(z_2)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}G^{[l]}\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(z_1)=\bigl(\chi^{[l]}(z_2)\bigr)^\top\bigl(G^{[l]}\bigr)^{-1}\chi^{[l]}(z_1)=K^{[l]}(z_1,z_2).$$
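The ABC theorem is easy to test numerically; the sketch below (my own NumPy code, with the Hilbert moment matrix as a stand-in example) compares the sum-over-polynomials form of the kernel with $(\chi(y))^\top(G^{[l]})^{-1}\chi(x)$:

```python
import numpy as np

N = 5
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])  # Hilbert

# Cholesky gives G = S^{-1} H S^{-T}: monic OPs P(x) = S χ(x), norms H_k.
L = np.linalg.cholesky(G)
h = np.diag(L)
S = np.linalg.inv(L / h)
H = h ** 2

x, y = 0.3, 0.7
chi_x, chi_y = x ** np.arange(N), y ** np.arange(N)
P_x, P_y = S @ chi_x, S @ chi_y

# ABC theorem: Σ_k P_k(y) H_k^{-1} P_k(x) = χ(y)^T (G^{[N]})^{-1} χ(x).
K_sum = np.sum(P_y * P_x / H)
K_abc = chi_y @ np.linalg.solve(G, chi_x)
assert np.isclose(K_sum, K_abc)
```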



4 Standard Orthogonality: Hankel Reduction

Recall that for bilinear forms associated with a Borel measure or a linear functional the Gram matrix is a Hankel matrix, $G_{i,j+1}=G_{i+1,j}$. We will consider in this section some properties that appear in this situation and not in the general scheme.

4.1 Recursion Relations

This property is just a reflection of the self-adjointness of the operator of multiplication by $x$ with respect to the inner product, which is encoded in the Hankel structure of the moment matrix:

$$\langle xf,h\rangle_\mu=\langle f,xh\rangle_\mu\ \Longrightarrow\ \Lambda G=G\Lambda^\top\ \Longrightarrow\ G_{i,j}=m_{i+j}.$$

This leads to the tridiagonal form of the spectral matrices, now named after Jacobi.

Proposition 15 The spectral matrices are linked by $J_1H=H(J_2)^\top$.

Proof From the LU (Cholesky) factorization $G=S^{-1}HS^{-\top}$ and the symmetry $\Lambda G=G\Lambda^\top$, we find $S\Lambda S^{-1}=H\bigl(S\Lambda S^{-1}\bigr)^\top H^{-1}$.


This tridiagonal Jacobi matrix $J:=J_1$ can be written as follows:

$$J=\begin{pmatrix} -S_{1,0} & 1 & 0 & 0 & 0 & \dots\\ H_1H_0^{-1} & S_{1,0}-S_{2,1} & 1 & 0 & 0 & \dots\\ 0 & H_2H_1^{-1} & S_{2,1}-S_{3,2} & 1 & 0 & \dots\\ 0 & 0 & H_3H_2^{-1} & S_{3,2}-S_{4,3} & 1 & \dots\\ 0 & 0 & 0 & H_4H_3^{-1} & S_{4,3}-S_{5,4} & \ddots\\ \vdots & \vdots & \vdots & \vdots & \ddots & \ddots\end{pmatrix}.$$

Therefore, the spectral properties lead to the well-known recursion relations.

Proposition 16 (Three-Term Relations) The orthogonal polynomials and the corresponding second kind functions fulfill

$$JP(x)=xP(x),\qquad \frac{H_k}{H_{k-1}}P_{k-1}+(S_{k,k-1}-S_{k+1,k})P_k+P_{k+1}=xP_k,$$
$$JC(x)=xC(x)-H_0e_0,\qquad \frac{H_k}{H_{k-1}}C_{k-1}+(S_{k,k-1}-S_{k+1,k})C_k+C_{k+1}=xC_k,$$

where $e_0:=(1,0,0,\dots)^\top$.
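The recurrence coefficients can be read directly off the Cholesky factorization. A sketch (my own NumPy code; the $[0,1]$ Lebesgue moments are an illustrative choice, not from the text) extracts $b_k=S_{k,k-1}-S_{k+1,k}$ and $c_k=H_k/H_{k-1}$ and verifies the three-term relation at a sample point:

```python
import numpy as np

N = 6
G = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])  # moments on [0,1]

L = np.linalg.cholesky(G)          # G = S^{-1} H S^{-T}
h = np.diag(L)
S = np.linalg.inv(L / h)           # P(x) = S χ(x), monic orthogonal polynomials
H = h ** 2

k, x = 2, 0.4
P = S @ x ** np.arange(N)          # values P_0(x), ..., P_{N-1}(x)

b_k = S[k, k - 1] - S[k + 1, k]    # diagonal entry of the Jacobi matrix
c_k = H[k] / H[k - 1]              # subdiagonal entry
assert np.isclose(x * P[k], P[k + 1] + b_k * P[k] + c_k * P[k - 1])
# For dx on [0,1] the polynomials are symmetric about 1/2, so b_k = 1/2.
assert np.isclose(b_k, 0.5)
```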

4.2 Heine Formulas

As the Gram matrix is now a moment matrix, we find the well-known Heine formulas:

Proposition 17 (Heine Integral Representation) The orthogonal polynomials can be written as follows:

$$P_k(x)=\frac{1}{k!\,\det G^{[k]}}\int\dotsi\int\ \prod_{j=1}^{k}(x-x_j)\prod_{1\le j<n\le k}(x_n-x_j)^2\ \mathrm{d}\mu(x_1)\,\mathrm{d}\mu(x_2)\cdots\mathrm{d}\mu(x_k).$$
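For a measure with finitely many atoms the Heine integral becomes a finite sum, so the formula can be tested exactly. A sketch (my own Python code; the atoms and weights are arbitrary choices):

```python
import math
from itertools import product

import numpy as np

# Discrete measure μ = Σ_i w_i δ_{t_i}; its moments are m_j = Σ_i w_i t_i^j.
t = np.array([0.0, 0.4, 0.7, 1.0])
w = np.array([0.2, 0.3, 0.1, 0.4])
k = 2
m = np.array([np.sum(w * t ** j) for j in range(2 * k + 1)])
G = np.array([[m[i + j] for j in range(k)] for i in range(k)])  # G^{[k]}

x = 0.25
# Heine: P_k(x) = (k! det G^{[k]})^{-1} ∫···∫ Π_j (x - x_j) Π_{j<n} (x_n - x_j)^2 dμ^k.
total = 0.0
for idx in product(range(len(t)), repeat=k):
    xs = t[list(idx)]
    sq = np.prod([(xs[n] - xs[j]) ** 2
                  for j in range(k) for n in range(j + 1, k)])
    total += np.prod(x - xs) * sq * np.prod(w[list(idx)])
heine = total / (math.factorial(k) * np.linalg.det(G))

# Compare with the monic OP obtained from the moment matrix directly.
coef = -np.linalg.solve(G, m[k:2 * k])      # lower coefficients of monic P_k
direct = x ** k + sum(coef[j] * x ** j for j in range(k))
assert np.isclose(heine, direct)
```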

$$\Bigl\langle\check P_1(x),\frac{z-a}{z-y}\Bigr\rangle_{\check u}-\Bigl\langle\check P_1(x),\frac{y-a}{z-y}\Bigr\rangle_{\check u}=\Bigl\langle\check P_1(x),\frac{z-a-(y-a)}{z-y}\Bigr\rangle_{\check u}=\bigl\langle\check P_1(x),1\bigr\rangle_{\check u}=\begin{pmatrix}\check H_0\\ 0\\ \vdots\end{pmatrix},$$

that is,

$$(z-a)\check C_1(z)-\Bigl\langle\check P_1(x),\frac{1}{z-y}\Bigr\rangle_{(y-a)\check u}=\begin{pmatrix}\check H_0\\ 0\\ \vdots\end{pmatrix},$$

and

$$\bigl(\check C_2(z)\bigr)^\top\check H^{-1}\omega-\bigl(C_2(z)\bigr)^\top H^{-1}=\Bigl\langle\frac{1}{z-x},\bigl(\check P_2(y)\bigr)^\top\Bigr\rangle_{\check u}\check H^{-1}\omega-\Bigl\langle\frac{1}{z-x},\bigl(P_2(y)\bigr)^\top\Bigr\rangle_u H^{-1}=\Bigl\langle\frac{1}{z-x},(y-a)\bigl(P_2(y)\bigr)^\top\Bigr\rangle_{\check u}H^{-1}-\Bigl\langle\frac{1}{z-x},\bigl(P_2(y)\bigr)^\top\Bigr\rangle_u H^{-1}=0.$$

We cannot evaluate directly at the point $a$, as it belongs to the support of the perturbed functional and the second kind function might not be defined there.

Proposition 41 For $n>0$, we have

$$\omega_{n,n-1}=-\frac{C_{1,n}(a)-\langle\xi_x,P_{1,n}(x)\rangle}{C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle},\qquad \check H_0=-\bigl(C_{1,0}(a)-\langle\xi_x,1\rangle\bigr).$$


Proof It follows from

$$(z-a)\check C_1(z)=(z-a)\Bigl\langle\check P_1(x),\frac{1}{z-y}\Bigr\rangle_{\frac{u_{x,y}}{y-a}+\xi_x\delta(y-a)}=(z-a)\Bigl\langle\check P_1(x),\frac{1}{z-y}\Bigr\rangle_{\frac{u_{x,y}}{y-a}}+\bigl\langle\xi_x,\check P_1(x)\bigr\rangle=(z-a)\Bigl\langle\check P_1(x),\frac{1}{z-y}\Bigr\rangle_{\frac{u_{x,y}}{y-a}}+\omega\bigl\langle\xi_x,P_1(x)\bigr\rangle.$$

Thus, we derive an expression without singularity problems at $z=a$:

$$(z-a)\Bigl\langle\check P_1(x),\frac{1}{z-y}\Bigr\rangle_{\frac{u_{x,y}}{y-a}}-\begin{pmatrix}\check H_0\\ 0\\ \vdots\end{pmatrix}=\omega\bigl(C_1(z)-\langle\xi_x,P_1(x)\rangle\bigr).$$

Proposition 42 For $n>0$, we have the following quasi-determinantal expressions:

$$\check P_{1,n}(x)=\Theta_*\begin{pmatrix} C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle & P_{1,n-1}(x)\\ C_{1,n}(a)-\langle\xi_x,P_{1,n}(x)\rangle & P_{1,n}(x)\end{pmatrix},\qquad \check H_n=\Theta_*\begin{pmatrix} C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle & H_{n-1}\\ C_{1,n}(a)-\langle\xi_x,P_{1,n}(x)\rangle & 0\end{pmatrix}.$$

Proof For $n>0$, the expression for $\omega_{n,n-1}$ implies

$$\check P_{1,n}(x)=P_{1,n}(x)-\frac{C_{1,n}(a)-\langle\xi_x,P_{1,n}(x)\rangle}{C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle}\,P_{1,n-1}(x),\qquad \check H_n=-\frac{C_{1,n}(a)-\langle\xi_x,P_{1,n}(x)\rangle}{C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle}\,H_{n-1}.$$

Proposition 43 The following connection formulas for Christoffel–Darboux kernels hold:

$$\check K_{n-1}(x,y)=(y-a)K_{n-1}(x,y)-\check P_{2,n}(y)\bigl(\check H_n\bigr)^{-1}\omega_{n,n-1}P_{1,n-1}(x),$$

and, for $n\ge 1$,

$$(x-a)\check K^{\mathrm{mix}}_{n-1}(x,y)=(y-a)K^{\mathrm{mix}}_{n-1}(x,y)-\check P_{2,n}(y)\bigl(\check H_n\bigr)^{-1}\omega_{n,n-1}C_{1,n-1}(x)+1.$$


Proof It follows from the definition of the kernels and the connection formulas

$$\check P_1(x)=\omega P_1(x),\qquad \bigl(\check P_2(y)\bigr)^\top\check H^{-1}\omega=\bigl(P_2(y)\bigr)^\top(y-a)H^{-1},\qquad (x-a)\check C_1(x)-\bigl(\check H_0,0,\dots\bigr)^\top=\omega C_1(x).$$

Proposition 44 It also holds that

$$\check P_{2,n}(y)=H_{n-1}\,\frac{(y-a)\bigl(K^{\mathrm{mix}}_{n-1}(a,y)-\langle\xi_x,K_{n-1}(x,y)\rangle\bigr)+1}{C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle}.$$

Proof The mixed kernel $\check K^{\mathrm{mix}}_{n-1}(x,y)$ will have singularity problems at $x=a$. This issue can be handled as before with the aid of the CD kernel, and we get

$$(y-a)\bigl(K^{\mathrm{mix}}_{n-1}(a,y)-\langle\xi_x,K_{n-1}(x,y)\rangle\bigr)+1=\check P_{2,n}(y)\bigl(\check H_n\bigr)^{-1}\omega_{n,n-1}\bigl(C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle\bigr)=\check P_{2,n}(y)\bigl(H_{n-1}\bigr)^{-1}\bigl(C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle\bigr).$$

Proposition 45 For $n>0$, the following formula holds:

$$\check P_{2,n}(y)=-\Theta_*\begin{pmatrix} C_{1,n-1}(a)-\langle\xi_x,P_{1,n-1}(x)\rangle & H_{n-1}\\ (y-a)\bigl(K^{\mathrm{mix}}_{n-1}(a,y)-\langle\xi_x,K_{n-1}(x,y)\rangle\bigr)+1 & 0\end{pmatrix}.$$

Proposition 46 For a general Geronimus perturbation we find the general Christoffel formulas

$$\check P_n^{[1]}(x)=\Theta_*\begin{pmatrix} J_{C^{[1]}_{n-N}}-\bigl\langle P^{[1]}_{n-N}(x),(\xi)_x\bigr\rangle W & P^{[1]}_{n-N}(x)\\ \vdots & \vdots\\ J_{C^{[1]}_{n}}-\bigl\langle P^{[1]}_{n}(x),(\xi)_x\bigr\rangle W & P^{[1]}_{n}(x)\end{pmatrix},$$

$$\check H_n=\Theta_*\begin{pmatrix} J_{C^{[1]}_{n-N}}-\bigl\langle P^{[1]}_{n-N}(x),(\xi)_x\bigr\rangle W & H_{n-N}\\ J_{C^{[1]}_{n-N+1}}-\bigl\langle P^{[1]}_{n-N+1}(x),(\xi)_x\bigr\rangle W & 0\\ \vdots & \vdots\\ J_{C^{[1]}_{n}}-\bigl\langle P^{[1]}_{n}(x),(\xi)_x\bigr\rangle W & 0\end{pmatrix},$$

$$\bigl(\check P_n^{[2]}(y)\bigr)^\top=-\Theta_*\begin{pmatrix} J_{C^{[1]}_{n-N}}-\bigl\langle P^{[1]}_{n-N}(x),(\xi)_x\bigr\rangle W & H_{n-N}\\ J_{C^{[1]}_{n-N+1}}-\bigl\langle P^{[1]}_{n-N+1}(x),(\xi)_x\bigr\rangle W & 0\\ \vdots & \vdots\\ J_{C^{[1]}_{n-1}}-\bigl\langle P^{[1]}_{n-1}(x),(\xi)_x\bigr\rangle W & 0\\ J_{K^{(pc)}_{n-1}}(y)-\bigl\langle K_{n-1}(x,y),(\xi)_x\bigr\rangle W+J_V(y) & 0\end{pmatrix}.$$

Here $W,V\in\mathbb{C}^{N_G\times N_G}$ are upper triangular matrices determined by $W_G(x)$ and its derivatives, and $V(x,y)$ is a polynomial constructed in terms of $W_G(x)$ and completely symmetric bivariate polynomials.

Acknowledgments The author thanks the financial support of the Spanish Agencia Estatal de Investigación, research project PGC2018-096504-B-C3, entitled Ortogonalidad y Aproximación: Teoría y Aplicaciones en Física Matemática. The author acknowledges the exhaustive revision by the anonymous referee, which much improved the readability of this contribution.

References

1. Adler, M., van Moerbeke, P.: Group factorization, moment matrices and Toda lattices. Int. Math. Res. Notices 12, 556–572 (1997)
2. Adler, M., van Moerbeke, P.: Generalized orthogonal polynomials, discrete KP and Riemann–Hilbert problems. Commun. Math. Phys. 207, 589–620 (1999)
3. Adler, M., van Moerbeke, P.: Darboux transforms on band matrices, weights and associated polynomials. Int. Math. Res. Notices 18, 935–984 (2001)
4. Adler, M., van Moerbeke, P., Vanhaecke, P.: Moment matrices and multi-component KP, with applications to random matrix theory. Commun. Math. Phys. 286, 1–38 (2009)
5. Álvarez-Fernández, C., Mañas, M.: Orthogonal Laurent polynomials on the unit circle, extended CMV ordering and 2D Toda type integrable hierarchies. Adv. Math. 240, 132–193 (2013)
6. Álvarez-Fernández, C., Mañas, M.: On the Christoffel–Darboux formula for generalized matrix orthogonal polynomials. J. Math. Anal. Appl. 418, 238–247 (2014)
7. Álvarez-Fernández, C., Fidalgo Prieto, U., Mañas, M.: The multicomponent 2D Toda hierarchy: generalized matrix orthogonal polynomials, multiple orthogonal polynomials and Riemann–Hilbert problems. Inverse Probl. 26, 055009 (15pp.) (2010)
8. Álvarez-Fernández, C., Fidalgo Prieto, U., Mañas, M.: Multiple orthogonal polynomials of mixed type: Gauss–Borel factorization and the multi-component 2D Toda hierarchy. Adv. Math. 227, 1451–1525 (2011)
9. Álvarez-Fernández, C., Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Christoffel transformations for matrix orthogonal polynomials in the real line and the non-Abelian 2D Toda lattice hierarchy. Int. Math. Res. Notices 2016, 1–57 (2016)
10. Ariznabarreta, G., Mañas, M.: A Jacobi type Christoffel–Darboux formula for multiple orthogonal polynomials of mixed type. Linear Algebra Appl. 468, 154–170 (2015)
11. Ariznabarreta, G., Mañas, M.: Matrix orthogonal Laurent polynomials on the unit circle and Toda type integrable systems. Adv. Math. 264, 396–463 (2014)
12. Ariznabarreta, G., Mañas, M.: Multivariate orthogonal polynomials and integrable systems. Adv. Math. 302, 628–739 (2016)
13. Ariznabarreta, G., Mañas, M.: Christoffel transformations for multivariate orthogonal polynomials. J. Approx. Theory 225, 242–283 (2018)
14. Ariznabarreta, G., Mañas, M.: Multivariate Orthogonal Laurent Polynomials and Integrable Systems. Publications of the Research Institute for Mathematical Sciences (Kyoto University) (to appear)
15. Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Matrix biorthogonal polynomials on the real line: Geronimus transformations. Bull. Math. Sci. (2018). https://doi.org/10.1007/s13373-018-0128-y
16. Ariznabarreta, G., García-Ardila, J.C., Mañas, M., Marcellán, F.: Non-Abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations. J. Phys. A Math. Theor. 51, 205204 (2018)
17. Ariznabarreta, G., Mañas, M., Toledano, A.: CMV biorthogonal Laurent polynomials: perturbations and Christoffel formulas. Stud. Appl. Math. 140, 333–400 (2018)
18. Bergvelt, M.J., ten Kroode, A.P.E.: Partitions, vertex operators constructions and multi-component KP equations. Pac. J. Math. 171, 23–88 (1995)
19. Bueno, M.I., Marcellán, F.: Darboux transformation and perturbation of linear functionals. Linear Algebra Appl. 384, 215–242 (2004)
20. Bueno, M.I., Marcellán, F.: Polynomial perturbations of bilinear functionals and Hessenberg matrices. Linear Algebra Appl. 414, 64–83 (2006)
21. Christoffel, E.B.: Über die Gaussische Quadratur und eine Verallgemeinerung derselben. J. Reine Angew. Math. (Crelle's J.) 55, 61–82 (1858, in German)
22. Date, E., Jimbo, M., Kashiwara, M., Miwa, T.: Transformation groups for soliton equations. Euclidean Lie algebras and reduction of the KP hierarchy. Publ. Res. Inst. Math. Sci. 18, 1077–1110 (1982)
23. Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Oxford University Press, Oxford (2004)
24. Gel'fand, I.M., Gel'fand, S., Retakh, V.S., Wilson, R.: Quasideterminants. Adv. Math. 193, 56–141 (2005)
25. Geronimus, J.: On polynomials orthogonal with regard to a given sequence of numbers and a theorem by W. Hahn. Izv. Akad. Nauk SSSR 4, 215–228 (1940, in Russian)
26. Golinskii, L.: On the scientific legacy of Ya. L. Geronimus (to the hundredth anniversary). In: Priezzhev, V.B., Spiridonov, V.P. (eds.) Self-Similar Systems (Proceedings of the International Workshop, July 30–August 7, Dubna, Russia, 1998), pp. 273–281. Publishing Department, Joint Institute for Nuclear Research, Moscow Region, Dubna (1999)
27. Grünbaum, F.A., Haine, L.: Orthogonal polynomials satisfying differential equations: the role of the Darboux transformation. In: Symmetries and Integrability of Difference Equations (Estérel, 1994). CRM Proceedings Lecture Notes, vol. 9, pp. 143–154. American Mathematical Society, Providence (1996)
28. Kac, V.G., van de Leur, J.W.: The n-component KP hierarchy and representation theory. J. Math. Phys. 44, 3245–3293 (2003)
29. Mañas, M., Martínez-Alonso, L.: The multicomponent 2D Toda hierarchy: dispersionless limit. Inverse Probl. 25, 115020 (2009)
30. Mañas, M., Martínez-Alonso, L., Álvarez-Fernández, C.: The multicomponent 2D Toda hierarchy: discrete flows and string equations. Inverse Probl. 25, 065007 (2009)
31. Mulase, M.: Complete integrability of the Kadomtsev–Petviashvili equation. Adv. Math. 54, 57–66 (1984)
32. Olver, P.J.: On multivariate interpolation. Stud. Appl. Math. 116, 201–240 (2006)
33. Sato, M.: Soliton equations as dynamical systems on infinite dimensional Grassmann manifolds (random systems and dynamical systems). Res. Inst. Math. Sci. Kokyuroku 439, 30–46 (1981)
34. Schwartz, L.: Théorie des noyaux. In: Proceedings of the International Congress of Mathematicians (Cambridge, MA, 1950), vol. 1, pp. 220–230. American Mathematical Society, Providence (1952)
35. Simon, B.: The Christoffel–Darboux kernel. In: Perspectives in Partial Differential Equations, Harmonic Analysis and Applications: A Volume in Honor of Vladimir G. Maz'ya's 70th Birthday. Proc. Sympos. Pure Math., vol. 79, pp. 295–336 (2008)
36. Szegő, G.: Orthogonal Polynomials. American Mathematical Society Colloquium Publications, vol. XXIII. American Mathematical Society, New York (1939)
37. Uvarov, V.B.: The connection between systems of polynomials that are orthogonal with respect to different distribution functions. USSR Comput. Math. Math. Phys. 9, 25–36 (1969)
38. Zhang, F. (ed.): The Schur Complement and Its Applications. Springer, New York (2005)
39. Zhedanov, A.: Rational spectral transformations and orthogonal polynomials. J. Comput. Appl. Math. 85, 67–86 (1997)

Infinite Matrices in the Theory of Orthogonal Polynomials

Luis Verde-Star

Abstract In this chapter we present a brief account of a matrix approach to the study of sequences of orthogonal polynomials. We use an algebra of infinite matrices of generalized Hessenberg type to represent polynomial sequences and linear maps on the complex vector space of all polynomials. We show how the matrices are used to characterize and to construct several sets of orthogonal polynomials with respect to some linear functional on the space of polynomials. The matrices allow us to study several kinds of generalized difference and differential equations, to obtain explicit formulas for the orthogonal polynomial sequences with respect to given bases, and also to obtain formulas for the coefficients of the three-term recurrence relations. We also construct a family of hypergeometric orthogonal polynomials that contains all the families in the Askey scheme and a family of basic hypergeometric q-orthogonal polynomials that contains all the families in the q-Askey scheme.

L. Verde-Star
Department of Mathematics, Universidad Autónoma Metropolitana, Mexico City, Mexico
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. F. Marcellán, E. J. Huertas (eds.), Orthogonal Polynomials: Current Trends and Applications, SEMA SIMAI Springer Series 22, https://doi.org/10.1007/978-3-030-56190-1_11

1 Introduction

In this chapter we present a brief description of a matrix approach to some aspects of the theory of orthogonal polynomials. We present some basic ideas and a few examples of results that show the usefulness of the approach. We begin here with some remarks about the theory and the applications of infinite matrices. Infinite matrices of several types have been studied for a long time. They are usually seen as representations of linear operators between vector spaces that have countable bases, or as operators that send sequences to sequences, as is done in Summability Theory, or that send power series to power series, as in Complex Analysis. Infinite matrices were used often in the development of Quantum Mechanics, considered as linear transformations on a separable Hilbert space. After


the appearance of Functional Analysis and Operator Theory, around 1930, the interest in infinite matrices declined greatly. Another reason for the reduced interest in infinite matrices that persisted for a long time is the fact that most Linear Algebra textbooks only deal with finite dimensional spaces, and most of the material about infinite dimensional spaces is now considered as a part of Functional Analysis. Some algebraic aspects of the theory of spaces of infinite matrices were studied between 1900 and 1930 by many researchers, such as Hurwitz, Hellinger, Dienes, Riesz, Köthe, and Toeplitz. After that time the analytic aspects of the theory were the main subject of study, mainly due to the interest in Hilbert space operators and Quantum Mechanics. From around 1950 to 1980 infinite matrices were used in very few areas of Mathematics, such as Summability theory and Combinatorics. Infinite Toeplitz matrices have been used for a long time as representations of multiplication operators in some areas of Analysis and Functional Analysis that are very active research areas nowadays. A more recent development is the use of infinite matrices in the theories of Signal and Image processing and Data communication, where the infinite matrices represent transformations of sequences, and are replaced by large finite matrices in the practical applications. The Pascal triangle is certainly the oldest and most studied infinite matrix, but the study of Pascal-like matrices from the linear algebra perspective started quite recently. There was a renewed interest in linear algebraic properties of finite Pascal matrices and some related matrices around 1980, when some papers on the subject were published. Some of them were motivated by combinatorial problems and others by the group structure of several types of generalized Pascal matrices.
There are numerous papers published during the last 30 years where the authors prove, by means of explicit matrix computations, that the matrices P(x) = [ C(n, k) x^{n−k} ], where C(n, k) denotes the binomial coefficient, satisfy P(x)P(y) = P(x + y). The interpretation of Pascal matrices as the matrix representation of translations on the space of polynomials was used by Kalman [12] in 1983. See also [13]. Since the set of translations is obviously a commutative group, so is the set of their matrix representations. Some papers about generalizations of Pascal matrices are [9, 26], and [27]. There are some recent papers on the use of matrices and Linear Algebra to study polynomials and polynomial sequences, for example, [2, 3, 6–8], and [30]. In the theory of orthogonal polynomials there are many areas that can be studied using linear algebraic methods, for example, recurrence relations, connection constants, difference and differential equations, moments, and discrete orthogonality. Of course there are also areas that require the use of analytical methods, such as asymptotic properties and perturbation of measures. It is well known that the infinite tridiagonal Jacobi matrices associated with orthogonal sequences determine most of the properties of the sequences, since their entries are the coefficients of the three-term recurrence relation, but in the development of the theory of orthogonal polynomials the Jacobi matrices have rarely been studied as elements of an algebra of infinite matrices. In [20] Maroni proposed an algebraic approach to the study of orthogonal polynomials based on the use of linear functionals and duality. His approach has been used with great success by numerous researchers.
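As a quick check of the group law (an illustrative sketch, not from the chapter; the function names are ours), P(x)P(y) = P(x + y) can be verified on finite leading blocks, which multiply exactly because the Pascal matrices are lower triangular:

```python
import math

def pascal(x, n):
    # Leading n-by-n block of P(x): entry (j, k) is C(j, k) * x**(j - k).
    return [[math.comb(j, k) * x ** (j - k) if j >= k else 0 for k in range(n)]
            for j in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Lower triangular truncations multiply exactly, so the group law survives:
assert matmul(pascal(2, 6), pascal(3, 6)) == pascal(5, 6)
```

The truncation is harmless here precisely because every matrix involved has index 0, so no entries from outside the leading block contribute to the product.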


In our previous papers [28] and [29] we used infinite matrices to study some aspects of the theory of orthogonal polynomials. Our approach can be considered as a complement to Maroni's approach and has been used recently by some researchers. See [10, 11, 19], and [22]. In this chapter we use an algebra of infinite Hessenberg matrices as a setting for the study of polynomial sequences and linear operators on the space of polynomials. Some results obtained using our approach, which we do not include in this chapter, can be found in [28] and [29]. Among the sets of infinite matrices, the algebra of infinite triangular matrices is the one that has received the most attention from researchers in the Algebra community. There are some recent articles on the subject; see for example [23, 24], and [25], and the references therein.

2 The Algebra of Infinite Lower Hessenberg Matrices

We introduce in this section the algebra L of infinite lower Hessenberg matrices, which is an appropriate algebraic setting for the study of polynomial sequences and of linear operators, such as difference and differential operators, on the space of polynomials. We denote by L the set of all the infinite matrices A = [a_{j,k}] with complex entries and indices j and k in N for which there is an integer m ∈ Z such that a_{j,k} = 0 if j − k < m. We say that a_{j,k} lies in the n-th diagonal of A if j − k = n. Therefore, if m > n then the m-th diagonal lies below (to the left of) the n-th diagonal. We say that a nonzero element of L is a diagonal matrix if all of its nonzero elements lie in a single diagonal. Let A be a nonzero element of L and let m be the minimum integer such that A has at least one nonzero entry in the m-th diagonal; then we say that A has index m and write i(A) = m. The index of the zero matrix is defined as infinity. It is easy to see that L is a complex vector space with the usual operations of addition and multiplication by scalars. If A = [a_{j,k}] and B = [b_{j,k}] are in L, with i(A) = m and i(B) = n, then the usual matrix multiplication of A and B is well defined and the product C = AB is in L, since

c_{r,k} = Σ_{j=k+n}^{r−m} a_{r,j} b_{j,k},    r − k ≥ m + n.        (2.1)

Note that for every pair (r, k) there is a finite number of summands and that i(AB) ≥ i(A) + i(B). If A ∈ L then a sufficient, but not necessary, condition for A to have a two-sided inverse is that i(A) = 0 and a_{k,k} ≠ 0 for k ≥ 0. We denote by G the set of all matrices that satisfy such condition. G is a group under matrix multiplication. The unit is the identity matrix I, whose entries on the 0-th diagonal are equal to 1 and all other entries are zero.


We define next some particular elements of L that will be used in the rest of the chapter. Denote by S the diagonal matrix of index 1 with (S)_{j+1,j} = 1 for j ≥ 0. Denote by Ŝ the transpose of S. Note that Ŝ is diagonal with index −1 and that ŜS = I and SŜ = J, where J is the diagonal matrix of index zero with its entry in the position (0, 0) equal to zero and its entries in positions (j, j) equal to 1 for j ≥ 1. That is, J differs from the identity I only in the position (0, 0). Therefore Ŝ is a left-inverse for S but not a right-inverse. If m is a positive integer then S^m is the diagonal matrix of index m with all its entries in the m-th diagonal equal to 1. Analogously, Ŝ^m is diagonal of index −m and all its entries in the −m-th diagonal are equal to 1. We denote by D the diagonal matrix of index 1 with entries d_{j,j−1} = j for j ≥ 1. Denote by D̂ the diagonal matrix of index −1 with entries d̂_{j−1,j} = 1/j for j ≥ 1. That is,

    ⎡ 0  0  0  0  … ⎤        ⎡ 0  1   0    0   … ⎤
    ⎢ 1  0  0  0  … ⎥        ⎢ 0  0  1/2   0   … ⎥
D = ⎢ 0  2  0  0  … ⎥,  D̂ = ⎢ 0  0   0   1/3  … ⎥.        (2.2)
    ⎢ 0  0  3  0  … ⎥        ⎢ 0  0   0    0   ⋱ ⎥
    ⎣ ⋮  ⋮  ⋮  ⋮  ⋱ ⎦        ⎣ ⋮  ⋮   ⋮    ⋮   ⋱ ⎦

It is easy to see that D̂D = I and DD̂ = J. Let A ∈ G. Then a simple computation shows that ŜA^{−1}S is the two-sided inverse of ŜAS and also that D̂A^{−1}D is the two-sided inverse of D̂AD. Note that ŜAS is obtained from A by shifting the matrix one position to the left and one position upward, so that the 0-th row and the 0-th column of A disappear. We say that A ∈ L is banded if there exists a pair of integers (n, m) such that n ≤ m and all the nonzero entries of A lie between the diagonals of indices n and m. In such case we say that A is (n, m)-banded. Note that the set of banded matrices is closed under addition and multiplication, but a banded matrix may have an inverse that is not banded. The transpose of an (n, m)-banded matrix is a (−m, −n)-banded matrix. The Toeplitz matrices in L are the elements of the form

T = Σ_{k=0}^{∞} c_k S^k + Σ_{k=1}^{m} c_{−k} Ŝ^k,        (2.3)

where the coefficients c_j are complex numbers and m is a positive integer. Note that, if c_{−m} ≠ 0 then i(T) = −m. The 0-th column of T is [c_0, c_1, c_2, . . .]^T and the 0-th row is [c_0, c_{−1}, c_{−2}, . . . , c_{−m}, 0, 0, . . .].

Theorem 1 Let A ∈ L with i(A) = n be such that AŜ = ŜA. Then, if n ≤ 0, A is a polynomial in Ŝ of degree −n, and if n ≥ 1 then A = 0.

Infinite Matrices in the Theory of Orthogonal Polynomials

313

Proof Since ŜS = I, the equation AŜ = ŜA gives A = ŜAS. This equation shows that A must be a Toeplitz matrix. The 0-th column of ŜA is [a_{1,0}, a_{2,0}, a_{3,0}, . . .]^T. It is the 0-th column of A shifted one place upwards. The 0-th column of AŜ is zero, and then a_{j,0} = 0 for j ≥ 1. Since A is Toeplitz, all the entries of A below the diagonal of index 0 must be zero. Therefore, if n > 0 then A = 0, and if n ≤ 0 we must have

A = Σ_{k=0}^{−n} a_{0,k} Ŝ^k.        (2.4)
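The basic relations among S, Ŝ, D, and D̂ can be checked numerically on finite truncations (a sketch of ours assuming numpy; on an N × N truncation the last row and column feel the cut, so ŜS = I and D̂D = I are tested on the leading block):

```python
import numpy as np

N = 6
S = np.diag(np.ones(N - 1), -1)            # (S)_{j+1,j} = 1, index 1
Sh = S.T                                   # Ŝ = transpose of S, index -1
D = np.diag(np.arange(1.0, N), -1)         # d_{j,j-1} = j
Dh = np.diag(1.0 / np.arange(1.0, N), 1)   # d̂_{j-1,j} = 1/j

J = np.eye(N)
J[0, 0] = 0.0                              # J = I except at position (0, 0)

# SŜ = J and DD̂ = J hold exactly on truncations; ŜS = I and D̂D = I hold
# away from the last row/column, which is affected by the cut.
assert np.allclose(S @ Sh, J)
assert np.allclose(D @ Dh, J)
assert np.allclose((Sh @ S)[:N - 1, :N - 1], np.eye(N - 1))
assert np.allclose((Dh @ D)[:N - 1, :N - 1], np.eye(N - 1))
```

This also makes concrete why Ŝ is a left-inverse of S but not a right-inverse: SŜ annihilates the 0-th coordinate.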

If A ∈ L has index m, we say that A is monic if all the entries in the diagonal of index m are equal to 1. We say that two elements A and B of L are diagonally similar if and only if there exists an invertible diagonal matrix T of index 0 such that B = T^{−1}AT. We denote by T the set of (−1, 1)-banded matrices whose entries in the diagonals of indices 1 and −1 are all nonzero. The elements of T are usually called Jacobi matrices and have index equal to −1. We list some properties of T in the following proposition. The proofs are straightforward. See [28].

Theorem 2
(a) If L ∈ T then L is diagonally similar to a monic element of T.
(b) If L ∈ T is monic then there is a unique monic A ∈ G such that A^{−1}LA = Ŝ.
(c) If L and M are in T then they are similar to each other.
(d) If L ∈ T then L is diagonally similar to L^T.
(e) If L ∈ T is monic and has positive entries in the diagonal of index 1 then L is diagonally similar to a symmetric element of T.

3 Polynomial Sequences and Their Matrix Representation

In this section we present some definitions related with sequences of polynomials and describe their connection with elements of L. By a triangular polynomial sequence we mean a sequence of polynomials p_0, p_1, p_2, . . . with complex coefficients such that p_n has degree n for n ≥ 0. Every triangular polynomial sequence is a basis for the complex vector space P of all polynomials in one variable. Let {p_n} be a triangular polynomial sequence and let A be a nonzero element of L with i(A) = m. We can associate each row of A with a polynomial as follows. Define

q_k(t) = Σ_{j=0}^{k−m} a_{k,j} p_j(t),    k ≥ 0,        (3.1)


where empty sums are zero. This means that the k-th row of A is the vector of coefficients of q_k with respect to the basis {p_j}. If the matrix A is in the group G then {q_k} is a triangular polynomial sequence and thus it is a basis for P. The sequence of scalar equations (3.1) is equivalent to the vector-matrix equation

[q_0, q_1, q_2, . . .]^T = A [p_0, p_1, p_2, . . .]^T.        (3.2)

We say that A is the matrix representation of the sequence {q_n} with respect to the basis {p_n}. From (3.2) we see that each matrix in L can be considered as a linear transformation on the vector space S of sequences of polynomials {u_k} for which the set of numbers {degree(u_k) − k : k ≥ 0} has an upper bound in N. The matrices S and Ŝ, defined in Sect. 2, satisfy

Ŝ [u_0, u_1, u_2, . . .]^T = [u_1, u_2, u_3, . . .]^T,        (3.3)

and

[u_0, u_1, u_2, . . .] Ŝ = [0, u_0, u_1, u_2, . . .],        (3.4)

for any sequence of polynomials u_0, u_1, u_2, . . .. Taking transposes in the previous equations we obtain

[u_0, u_1, u_2, . . .] S = [u_1, u_2, u_3, . . .],        (3.5)

and

S [u_0, u_1, u_2, . . .]^T = [0, u_0, u_1, u_2, . . .]^T.        (3.6)

Let γ be a linear operator on the space P of polynomials and let {p_k(t) : k ∈ N} be a triangular basis for P. The matrix representation of γ with respect to the basis {p_k(t)} is the matrix G = [g_{k,j}], where

γ(p_k(t)) = Σ_{j≥0} g_{k,j} p_j(t),    k ≥ 0.        (3.7)

Let us note that in the general case G may not be an element of L, but there are many important linear operators on the space P that have matrix representations in L. If A ∈ L is the matrix representation of a sequence of polynomials {uk (t) : k ∈ N} in S with respect to the basis {pk (t)} and G ∈ L then AG is the matrix representation of the sequence {γ (uk (t)) : k ∈ N} with respect to the same basis. Note that the multiplication by G is on the right-hand side. For example, the matrix D is the representation of the usual differential operator with respect to the basis of monomials and then if A is the representation of the sequence uk (t) with respect to


the basis of monomials then AD is the representation of the sequence {u′_k(t)} with respect to the same basis. We define next an important class of triangular bases of P. Let x_0, x_1, x_2, . . . be a sequence of complex numbers. Define the polynomials v_0(t) = 1, and

v_k(t) = (t − x_0)(t − x_1)(t − x_2) · · · (t − x_{k−1}),    k ≥ 1.        (3.8)

The sequence of monic polynomials {v_n} is clearly a triangular basis for the space P of polynomials. It is called the Newton basis associated with the sequence {x_n}. Let V be the infinite matrix that satisfies

[v_0(t), v_1(t), v_2(t), . . .]^T = V [1, t, t^2, . . .]^T.        (3.9)

V is in G and its entries in the n-th row are the coefficients of v_n(t) with respect to the basis of monomials. Therefore they are elementary symmetric functions of x_0, x_1, x_2, . . . , x_{n−1}, with appropriate signs. The entries in V^{−1} are complete homogeneous symmetric functions of the x_k. The dual basis of the Newton basis {v_n} is the sequence of divided difference functionals [x_0, x_1, . . . , x_{n−1}], which give us the coefficients in the representation of any polynomial in terms of the Newton basis. From (3.8) we see that

t v_k(t) = (t − x_k + x_k) v_k(t) = v_{k+1}(t) + x_k v_k(t),    k ≥ 0,        (3.10)

and therefore the matrix X_v that represents the multiplication map u(t) → t u(t) on the space P with respect to the Newton basis {v_n} is

      ⎡ x_0  1   0   0   0  ⋯ ⎤
      ⎢  0  x_1  1   0   0  ⋯ ⎥
X_v = ⎢  0   0  x_2  1   0  ⋯ ⎥.        (3.11)
      ⎢  0   0   0  x_3  1  ⋯ ⎥
      ⎣  ⋮   ⋮   ⋮   ⋮   ⋮  ⋱ ⎦

When all the x_n are zero the basis {v_k} becomes the basis {t^k} of monomials and X_v becomes Ŝ.
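A small numerical sketch (numpy assumed; the helper newton_matrix is ours, not from the chapter) of how X_v encodes multiplication by t: if the rows of V hold the monomial coefficients of the Newton basis, then (3.10) is equivalent to X_v V = V Ŝ, which holds exactly in the first N − 1 rows of an N × N truncation:

```python
import numpy as np

def newton_matrix(x, N):
    # Row k holds the monomial coefficients of v_k(t) = (t - x_0)...(t - x_{k-1}).
    V = np.zeros((N, N))
    V[0, 0] = 1.0
    for k in range(1, N):
        V[k, 1:] = V[k - 1, :-1]             # t * v_{k-1}(t)
        V[k, :] -= x[k - 1] * V[k - 1, :]    # - x_{k-1} * v_{k-1}(t)
    return V

x = np.array([2.0, -1.0, 3.0, 0.5, 4.0, 1.0])
N = len(x)
V = newton_matrix(x, N)
Xv = np.diag(x) + np.diag(np.ones(N - 1), 1)   # truncation of (3.11)
Sh = np.diag(np.ones(N - 1), 1)                # Ŝ: multiplication by t, monomial basis

# X_v V = V Ŝ is the matrix form of t v_k = v_{k+1} + x_k v_k; the last row
# of the truncation loses the degree-N coefficient, so we skip it.
assert np.allclose((Xv @ V)[:N - 1], (V @ Sh)[:N - 1])
```

With x = 0 the matrix V becomes the identity and X_v collapses to Ŝ, recovering the monomial case mentioned above.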

4 Orthogonal Polynomial Sequences

In this section we show how the matrices in L can be used to study sequences of orthogonal polynomials.


Let μ : P → C be a linear functional. A triangular sequence of monic polynomials {p_n}_{n≥0} is orthogonal with respect to μ if and only if

μ(p_k p_n) = γ_k δ_{k,n},    k, n ≥ 0,        (4.1)

where γ_k is a nonzero number for k ≥ 0 and δ_{k,n} is Kronecker's function. Since {p_n}_{n≥0} is a basis for P, if n ≥ 0 then the monomial t^n is a linear combination of p_0(t), p_1(t), p_2(t), . . . , p_n(t), and then it is easy to verify that the orthogonality condition (4.1) is equivalent to

μ(t^n p_k(t)) = { 0, if 0 ≤ n < k;  γ_k, if n = k }.        (4.2)

Theorem 3 Let {v_k(t)}_{k≥0} be the Newton basis associated with a sequence {x_k}. Let A ∈ G be the matrix representation of a triangular sequence of monic polynomials {p_k} with respect to {v_k}, and let X_v be as defined in the previous section. Then {p_k} is orthogonal with respect to some linear functional if and only if the matrix L = A X_v A^{−1} is an element of T.

Proof Suppose that {p_k} is orthogonal with respect to the functional μ. Let m_k = μ(v_k) for k ≥ 0 and let g = [m_0, m_1, m_2, . . .]^T. For j ≥ 0 let e_j denote the infinite column vector with its j-th entry equal to 1 and all other entries equal to zero. From the definition of L we get L^n A = A X_v^n for n ≥ 0. Therefore L^n A g = A X_v^n g for n ≥ 0. The k-th entry of A X_v^n g is the product of the k-th row of A and the vector X_v^n g. From (3.10) we get μ(t v_k(t)) = μ(v_{k+1}) + x_k μ(v_k(t)), and by (3.11) we see that X_v g = [μ(t v_0(t)), μ(t v_1(t)), μ(t v_2(t)), . . .]^T and then X_v^n g = [μ(t^n v_0(t)), μ(t^n v_1(t)), μ(t^n v_2(t)), . . .]^T. Therefore the k-th entry of A X_v^n g is

Σ_{j=0}^{k} a_{k,j} μ(t^n v_j(t)) = μ(t^n p_k(t)),

which, by (4.2), is zero if k > n and is γ_n if k = n. Therefore A X_v^n g = L^n A g is in the subspace generated by {e_0, e_1, . . . , e_n}. By the orthogonality hypothesis A g = γ_0 e_0, and then γ_0 L^n e_0 is in the subspace generated by {e_0, e_1, . . . , e_n}, and there exist coefficients c_{n,j} such that

γ_0 L^n e_0 = γ_n e_n + Σ_{j=0}^{n−1} c_{n,j} e_j,    n ≥ 0.        (4.3)


Multiplying on the left by L and using (4.3) with n + 1 instead of n we get

γ_0 L^{n+1} e_0 = γ_n L e_n + L Σ_{j=0}^{n−1} c_{n,j} e_j = γ_{n+1} e_{n+1} + Σ_{j=0}^{n} c_{n+1,j} e_j.

Therefore, solving for L e_n we obtain

L e_n = (γ_{n+1}/γ_n) e_{n+1} + Σ_{j=0}^{n} d_{n,j} e_j,

for some coefficients d_{n,j}. Therefore the entries of L below the diagonal of index 1 are equal to zero and the entries in the diagonal of index 1 are the quotients γ_{n+1}/γ_n, which are nonzero. On the other hand, since X_v has index −1 and L = A X_v A^{−1}, it is clear that L has index −1 and its diagonal of index −1 coincides with that of Ŝ. Therefore we have proved that L is in T.

Now suppose that {p_k} is a sequence of monic polynomials with matrix representation A, and such that L = A X_v A^{−1} is in T. Let α_{n+1} be the (n + 1, n) entry of L and let g be the 0-th column of A^{−1}. Since L is (−1, 1)-banded then L^n is (−n, n)-banded and has nonzero entries in its diagonal of index n. From the equations A X_v^n g = L^n A g = L^n e_0 we see that the orthogonality condition (4.2) is satisfied and therefore the sequence {p_k} is orthogonal with respect to the functional μ defined by μ(v_j) = (A^{−1})_{j,0} for j ≥ 0. From L^n e_0 = A X_v^n g we obtain e_0^T L^n e_0 = (e_0^T A) X_v^n g = e_0^T X_v^n g, and this means that the (0, 0) entry of L^n is equal to μ(v_n), for n ≥ 0. The numbers μ(v_n) are called the generalized moments of μ with respect to the basis {v_k}.

The matrix L of the previous theorem has the form

    ⎡ β_0  1   0   0  ⋯ ⎤
    ⎢ α_1 β_1  1   0  ⋯ ⎥
L = ⎢  0  α_2 β_2  1  ⋯ ⎥,        (4.4)
    ⎣  ⋮   ⋮   ⋮   ⋮  ⋱ ⎦

where

α_k = γ_k/γ_{k−1},    k ≥ 1.

The matrix equation L A = A X_v gives us, by equating the k-th rows on both sides, the three-term recurrence relation

α_k p_{k−1}(t) + β_k p_k(t) + p_{k+1}(t) = t p_k(t),    k ≥ 1.        (4.5)

Notice that p_0(t) = 1 and p_1(t) = t − β_0.

318

L. Verde-Star

Let B ∈ G and [u0 , u1 , u2 , . . .] = [v0 , v1 , v2 , . . .]B. Then {uk } is also a triangular basis of P and AB is the matrix representation of {pk } with respect to {uk }. Therefore LAB = ABB −1 Xv B, and it is easy to verify that B −1 Xv B is the matrix representation of the multiplication map w(t) → tw(t) with respect to the basis {uk }. This means that the matrix L doesn’t change when we use the basis {uk } instead of the basis {vk } to represent the sequence {pk } and the multiplication map. From Theorem 2, part b) and Theorem 3 we see that each pair of sequences of numbers ({αk }k≥1 , {βk }k≥0 ), with αk = 0, determines a unique triangular sequence of monic orthogonal polynomials with respect to some linear functional. An important problem is to characterize and to find properties of orthogonal sequences that satisfy certain functional equations, such as differential equations and difference equations, and several generalizations of them. See [1, 4, 15–18], and [21]. We present next an example of a characterization theorem expressed in terms of matrices. Let A be the matrix representation of a sequence of monic orthogonal ˆ polynomials {pk } with respect to the basis of monomials. Then DAD is the matrix  representation of the sequence of monic polynomials pk+1 (t)/(k + 1) and it is a monic element of G. The orthogonal sequence {pk } is called classical if the  sequence {pk+1 } is also orthogonal. Theorem 4 Let A and {pk } be as defined above. Then {pk } is classical if and only −1 is a (0,2)-banded element of G. ˆ if A(DAD) For the proof see [28, Sec. 4].

5 Orthogonal Polynomial Sequences That Satisfy a Generalized First-Order Difference Equation We introduce a family of matrices that represent operators that generalize the usual differentiation and difference operators. We consider some simple eigenvaluedifference equations and look for orthogonal polynomial solutions of such equations. From now on by polynomial sequence we mean triangular polynomial sequence and we will consider triangular bases of P. We denote by D0 the set of diagonal matrices of index 0. For each sequence s : N → C we define the matrix M(s) as the element of D0 whose (n, n) entry is s(n), for n ≥ 0. Definition Let {vk } be a basis of P and let s be a complex valued sequence. We say that the operator represented by the matrix M(s(n))S with respect to the basis {vk } is a generalized difference operator of first order. It is the operator that sends vn to s(n)vn−1 , for n ≥ 1, and sends v0 to zero. For example, if s(n) = n for n ≥ 0 then M(s(n))S = D and represents the usual differentiation operator with respect to the basis of monomials, and represents the

Infinite Matrices in the Theory of Orthogonal Polynomials

319

usual difference operator with step h with respect to the Newton basis associated with the sequence xn = nh, where h is a fixed nonzero number. With respect to {vk } the matrix M(s(n)) represents the operator that sends vn to s(n)vn , for n ≥ 0, and we say that it is a generalized differential operator of order zero. It is essentially a rescaling of the sequence {vn }. It is easy to see that M(s(n))S = SM(ˆs (n)) where sˆ is the shifted sequence defined by sˆ (n) = s(n + 1). In order to simplify the notation we use algebraic expressions in the discrete variable n to denote the corresponding sequences, for example, M(n2 ) is the diagonal sequence of index zero with (n, n) entry equal to n2 , and if g is a known sequence then M(ng(n)) has its (n, n) entry equal to ng(n). Let f , g and h be complex valued sequences. Let {vk (t)}k≥0 be the Newton basis of P associated with the sequence xn = f (n). We define the matrix B = M(h(n)) + SM(g(n)).

(5.1)

B is (0, 1)-banded and it is the representation of an operator δ which is the sum of a generalized difference operator of order one and a generalized difference operator of order zero. The operator δ acts on the basis {vk (t)} as follows δ(vk (t)) = g(k − 1)vk−1 (t) + h(k)vk (t),

k ≥ 0,

(5.2)

where g(−1) = 0. Problem Find all the orthogonal polynomial sequences {uk (t)} that satisfy the generalized difference equation δ(uk (t)) = h(k)uk (t),

k ≥ 0,

(5.3)

for some sequences f , g and h. This problem can be expressed in terms of matrix equations as follows. Let C ∈ G be the matrix representation of the polynomial sequence {uk (t)} with respect to the Newton basis {vk (t)}. Let Xv be the matrix representation of the operator of multiplication by t on the space P with respect to the basis {vk (t)}. By Theorem 3 the sequence {uk (t)} is orthogonal if L = CXv C −1 ∈ T,

(5.4)

and the matrix expression of (5.3) is CB = M(h(n))C.

(5.5)

The problem is now to find sequences f , g, and h such that (5.4) and (5.5) are satisfied. One natural approach is to look for simple sequences, for example, constant sequences. A larger family of simple sequences is the set of linearly recurrent sequences. They are the solutions of linear difference equations with

320

L. Verde-Star

constant coefficients and satisfy an equation of the form s(n + m) =

m−1 

dj s(n + j ),

n ≥ 0,

(5.6)

j =0

where m ≥ 1 and the dj are numbers. The sequence s is determined by the coefficients dj and the initial values s(0), s(1), s(2), . . . , s(m − 1) and s(n) can be expressed as a linear combination of functions of the form pj (n)λnj , where the pj are polynomials. Since B is bidiagonal, the matrix equation (5.5) can be easily solved for C = [ck,j ] in terms of g and h. The entries of C are given by ck,k = 1,

k ≥ 0,

(5.7)

and ck,j =

g(j )g(j + 1) · · · g(k − 1) , (h(k) − h(j ))(h(k) − h(j + 1)) · · · (h(k) − h(k − 1))

0 ≤ j < k. (5.8)

The previous equation shows that if Eq. (5.5) has a solution C then the sequence h must satisfy h(j ) = h(k) if j = k. The inverse matrix C −1 = [mk,j ] is also easy to determine. We have mk,k = 1,

k ≥ 0,

(5.9)

and mk,j =

g(j )g(j + 1) · · · g(k − 1) , (h(j ) − h(j + 1))(h(j ) − h(j + 2)) · · · (h(j ) − h(k))

0 ≤ j < k. (5.10)

If {uk (t)} is orthogonal with respect to a linear functional μ and C is the matrix representation of {uk (t)} with respect to the basis {vk (t)}, then the entries mk,0 are the generalized moments, that is, mk,0 = μ(vk ) for k ≥ 0. The standard moments are easily obtained using a change of basis matrix.

6 Hypergeometric Orthogonal Polynomial Sequences Here we present a solution of (5.4) and (5.5) obtained with sequences f , g, and h that are polynomials in n of small degrees. It turns out that the solution that we obtain in this way is a family of hypergeometric orthogonal polynomial sequences

Infinite Matrices in the Theory of Orthogonal Polynomials

321

that contains all the families included in the Askey scheme. The coefficients of f , g, and h are the parameters of the family. See [14]. Define the polynomials f (z) = w0 + w1 z + w2 z2 ,

h(z) = (y1 + y2 z)z,

g(z) = (1 + e1 z + e2 z2 + e3 z3 )(z + 1).

(6.1) Then the sequences f (n), h(n), g(n), for n ≥ 0, are linearly recurrent. For example, f (n) satisfies f (k + 1) − 3f (k) + 3f (k − 1) − f (k − 2) = 0,

k ≥ 2.

(6.2)

From the Eqs. (5.7) and (5.8) we can find the entries of C in terms of the coefficients y1 , y2 , e1 , e2 , e3 . Then we substitute C in Eq. (5.4) and find that a necessary condition for L to be in T is e3 = w2 y2 ,

e2 = w1 y2 + w2 y1 + w2 y2 .

(6.3)

This shows that the polynomials f, g, h must be related in a particular way. It turns out that the conditions in (6.3) are also sufficient for L to be in T. Therefore there remain 6 parameters: e1 , y1 , y2 , w0 , w1 , w2 . A change in the value of w0 corresponds to a translation of the basis polynomials vk (t) and, consequently, to a translation of the orthogonal polynomials uk (t). The polynomials uk (t) have a quite simple explicit expression as linear combinations of the polynomials vk (t). ˜ It is convenient to define the polynomials h(n) = y1 + y2 n and g(n) ˜ = 1 + e1 n + 2 3 e2 n + e3 n . Then we define the coefficients bn,k =

. /˜ ˜ + 1) · · · h(n ˜ + k − 1) n h(n)h(n , g(0) ˜ g(1) ˜ · · · g(k ˜ − 1) k

0 ≤ k ≤ n.

(6.4)

The orthogonal polynomials are given by un (t) =

n 1 

bn,n

bn,k vk (t),

n ≥ 0.

(6.5)

k=0

The matrix C = [cn,k ], which is the matrix representation of the sequence {uk (t)} with respect to the basis {vk (t)}, is therefore given by cn,k =

. / bn,k n g(k) ˜ g(k ˜ + 1) · · · g(n ˜ − 1) , = ˜ + k)h(n ˜ + k + 1) · · · h(2n ˜ bn,n k h(n − 1)

0 ≤ k ≤ n. (6.6)

322

L. Verde-Star

We also have explicit formulas for the entries of the matrix L, which are the coefficients of the three-term recurrence relation satisfied by the polynomials uk (t). Let L be as in (4.4) and let σn =

n 

βj ,

n ≥ 0.

(6.7)

j =0

Note that βn = σn − σn−1 for n ≥ 1, and β0 = σ0 . Using the explicit formulas for the entries of C and C −1 we can compute the entries of L. They turn out to be rational functions of n, and their explicit expressions are αn =

˜ − 1) g(n ˜ ˜ − 1) g(− ˜ h(n)/y −ny2 h(n 2) , ˜ ˜h(2n − 2)h˜ 2 (2n − 1)h(2n)

n ≥ 1.

(6.8)

and σn =

−(n + 1) s(n) , ˜ 6 h(2n + 1)

n ≥ 0,

(6.9)

where s(n) is the polynomial s(n) = γ3 n3 + γ2 n2 + γ1 n + γ0 ,

n ≥ 0,

(6.10)

and the coefficients are given by γ3 = 2w2 y2 ,

γ2 = 2w2 (2y1 + y2 ),

˜ γ1 = 6e1 − 12w0 y2 − h(1)(3w 1 + w2 ),

˜ γ0 = 6(1 − w0 h(1)).

(6.11)

From (6.9) we obtain immediately the coefficients βn = σn − σn−1 . The numerator of αn in (6.8) is a polynomial in n of degree at most 8, and the denominator is a polynomial of degree at most 4. For the sequence σn the numerator has degree at most 4 and the denominator has degree at most one. The generalized moments are given by μ(vk ) = (C −1 )k,0 =

˜ g(1) ˜ · · · g(k ˜ − 1) (−1)k g(0) , ˜ h(2) ˜ · · · h(k) ˜ h(1)

k ≥ 0.

(6.12)

Note that the generalized moments satisfy a simple two-term recurrence relation and they are normalized so that μ(v0 ) = 1. Giving appropriate values to the 6 parameters we can obtain all the families of hypergeometric orthogonal polynomials in the Askey scheme. We give next some examples.

Infinite Matrices in the Theory of Orthogonal Polynomials

323

Since the parameters are the coefficients of f , g, and h, and the equations in (6.3) ˜ and g˜ to determine all the are satisfied, it is enough to give the polynomials f , h, coefficients. We also list the value of e1 , which is the only free coefficient of g. ˜ For the Wilson polynomials, Eq. (9.1.5) in [14], we have f (n) = −(n + a)2 , g(n) ˜ = e1 =

n+a+b+c+d −1 ˜ , h(n) =− (a + b)(a + c)(a + d)

(n + a + b)(n + a + c)(n + a + d) , (a + b)(a + c)(a + d)

3a 2 + 2(ab + ac + ad) + bc + bd + cd . (a + b)(a + c)(a + d)

For the Meixner-Pollaczek polynomials, Eq. (9.7.4) in [14], we have f (n) = −i(n + λ),

eix sin(x) ˜ h(n) = , λ

g(n) ˜ =

n + 2λ , 2λ

e1 =

1 . 2λ

For the Jacobi polynomials, Eq. (9.8.5) in [14], we have f (n) = 1,

n+α+β +1 ˜ h(n) = , 2(α + 1)

g(n) ˜ =

n+α+1 , (α + 1)

e1 =

1 . (α + 1)

For the Bessel polynomials, Eq. (9.13.4) in [14], we have f (n) = 0,

n+a+1 ˜ , h(n) = 2

g(n) ˜ = 1,

e1 = 0.

7 A Family of q-Orthogonal Polynomial Sequences That Contains All the Families in the q-Askey Scheme In this section we present another solution of the problem set up in (5.5) and (5.6) that gives us a family of q-orthogonal polynomial sequences that contains all the families in the q-Askey scheme [14]. See [31] for a more detailed account of a related family, using another set of parameters. Define the Laurent polynomials f (z) = w0 + w1 z + w2 z−1 ,

h(z) = y1 z + y2 z−1 ,

(7.1)

and g(z) = a0 z−2 + a1 z−1 + a2 + a3 z + a4 z2 .

(7.2)

324

L. Verde-Star

Let q be a complex number that is not a root of unity, define the sequence xn = f(q^n) for n ≥ 0, and let {vk(t)}k≥0 be the Newton basis associated with the sequence xn. Define the matrices G = M(g(q^{n+1})) and B = M(h(q^n)) + SG, and let C be the unique monic matrix that satisfies CB = M(h(q^n))C. Recall that the entries of C are expressed in terms of the sequences h(q^n) and g(q^{n+1}) by the formulas (5.7) and (5.8). Let L = C Xv C^{-1}, where Xv is as in (3.11). The equations L4,2 = 0 and L5,3 = 0 give a0 = q w2 y2 and a4 = q^{-1} w1 y1, which are necessary conditions for L to be in the set T. The equations Lk,0 = 0, for k ≥ 2, give three values of a1 for which L becomes (−1, 1)-banded, but only when a1 = −(q w2 y2 + a2 + a3 + q^{-1} w1 y1) do we obtain L ∈ T. For the other values of a1 some entry in the diagonal of index 1 of L becomes zero. Therefore, if we take

g(z) = q w2 y2 z^{-2} − (q w2 y2 + a2 + a3 + q^{-1} w1 y1) z^{-1} + a2 + a3 z + q^{-1} w1 y1 z^2,  (7.3)

then C satisfies CB = M(h(q^n))C and L = C Xv C^{-1} ∈ T, and therefore C is the matrix representation of an orthogonal polynomial sequence {uk(t)}k≥0 with respect to the Newton basis {vk(t)}k≥0. The sequence {uk(t)}k≥0 is determined by the coefficients of f, h, and g, so we have a family with parameters a2, a3, y1, y2, w0, w1, w2. We can obtain another family, with one parameter fewer, by means of the substitutions y2 → p q^{-1} y1, a2 → a2 y1, a3 → a3 y1; the parameters y1, y2 disappear and the new parameter p is incorporated. In [31] we obtained a similar family that contains all the families in the q-Askey scheme, and we presented the values of the parameters that correspond to some of the families in the q-Askey scheme.

The generalized moments with respect to the basis {vk(t)}k≥0 are the coefficients mk,0, for k ≥ 0. They are given by (5.10) and satisfy a simple two-term recurrence relation. Let U(z) be the rational function

U(z) = q^2 y1 y2 z^4 g(z) g(q y2/(y1 z)) / ((y1 z^2 − q^2 y2)(y1 z^2 − q y2)^2 (y1 z^2 − y2)).  (7.4)

If we write L as in (4.4) then

αk = U(q^k),  k ≥ 1.  (7.5)
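The construction up to this point can be made concrete with a short numerical sketch (not part of the original text; the parameter values for q, a2, a3, y1, y2, w1, w2 below are arbitrary illustrative choices): it evaluates g(z) of (7.3), the rational function U(z) of (7.4), and the coefficients αk = U(q^k) of (7.5).

```python
# Illustrative numerical sketch; the parameter values are arbitrary choices,
# not taken from the text.
q = 0.5                       # any number that is not a root of unity
a2, a3 = 0.3, 0.7
y1, y2 = 1.1, 0.9
w1, w2 = 0.6, 0.8

def g(z):
    # g(z) of (7.3); by construction g(1) = 0
    return (q*w2*y2/z**2 - (q*w2*y2 + a2 + a3 + w1*y1/q)/z
            + a2 + a3*z + (w1*y1/q)*z**2)

def U(z):
    # the rational function U(z) of (7.4)
    num = q**2 * y1 * y2 * z**4 * g(z) * g(q*y2/(y1*z))
    den = (y1*z**2 - q**2*y2) * (y1*z**2 - q*y2)**2 * (y1*z**2 - y2)
    return num / den

# recurrence coefficients alpha_k = U(q^k), k >= 1, as in (7.5)
alpha = [U(q**k) for k in range(1, 6)]
```

Since g(1) = 0, the numerator of U vanishes at z = 1, so U(1) = 0 whenever the denominator does not vanish there.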

Since g(1) = 0 we can factor g and then obtain the following expression for U(z):

U(z) = (z − 1)(y1 z − q y2) w(z) r(z) / (q (y1 z^2 − q^2 y2)(y1 z^2 − q y2)^2 (y1 z^2 − y2)),  (7.6)

Infinite Matrices in the Theory of Orthogonal Polynomials


where

w(z) = w1 y1 z^3 + (q a3 + w1 y1) z^2 + (q a2 + q a3 + w1 y1) z − q^2 w2 y2,  (7.7)

and

r(z) = q w2 y1^2 z^3 − (q a2 y1 + q a3 y1 + w1 y1^2) z^2 − (q^2 a3 y2 + q w1 y1 y2) z − q^2 w1 y2^2.  (7.8)

The rational function U(z) satisfies

U(z) = U(q y2/(y1 z)),  (7.9)

and the degrees of its numerator and denominator are at most equal to 8. Now let

s(z) = (q z − 1)((q^2 w2 y1 − q^2 a3 − q w1 y1) z − (q w1 y2 + q a2 + q a3 + w1 y1)) / (q (q − 1)(q y1 z^2 − y2)),  (7.10)

and

σk = s(q^k) + (k + 1) w0,  k ≥ 0.  (7.11)

Then the recurrence coefficients βk are given by

βk = σk − σk−1,  k ≥ 1,  (7.12)

and β0 = σ0. Let us note that s(z) is a rational function whose numerator and denominator have degrees at most 2. It is easy to verify that the sequences f(q^k), h(q^k), and g(q^{k+1}) are linearly recurrent. By giving appropriate values to the parameters a2, a3, y1, y2, w0, w1, w2 we can obtain the coefficients αk and βk of the normalized recurrence relation for each of the families of q-orthogonal polynomial sequences in the q-Askey scheme. For the families for which w1 = 0 or w2 = 0 it is easy to obtain the discrete orthogonality measure from the sequence of generalized moments by means of simple matrix operations.
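As a sanity check on the formulas of this section, the following sketch (again with arbitrary illustrative parameter values, not from the text) verifies numerically that the factored form (7.6)–(7.8) agrees with the definition (7.4), that U satisfies the symmetry (7.9), and that the βk of (7.12) telescope back to σn.

```python
# Numerical sanity check of (7.4)-(7.12); parameter values are arbitrary.
q = 0.5
a2, a3 = 0.3, 0.7
y1, y2 = 1.1, 0.9
w0, w1, w2 = 0.2, 0.6, 0.8

def g(z):                                             # (7.3)
    return (q*w2*y2/z**2 - (q*w2*y2 + a2 + a3 + w1*y1/q)/z
            + a2 + a3*z + (w1*y1/q)*z**2)

def U(z):                                             # (7.4)
    num = q**2*y1*y2*z**4 * g(z) * g(q*y2/(y1*z))
    den = (y1*z**2 - q**2*y2)*(y1*z**2 - q*y2)**2*(y1*z**2 - y2)
    return num/den

def w(z):                                             # (7.7)
    return (w1*y1*z**3 + (q*a3 + w1*y1)*z**2
            + (q*a2 + q*a3 + w1*y1)*z - q**2*w2*y2)

def r(z):                                             # (7.8)
    return (q*w2*y1**2*z**3 - (q*a2*y1 + q*a3*y1 + w1*y1**2)*z**2
            - (q**2*a3*y2 + q*w1*y1*y2)*z - q**2*w1*y2**2)

def U_factored(z):                                    # (7.6)
    num = (z - 1)*(y1*z - q*y2)*w(z)*r(z)
    den = q*(y1*z**2 - q**2*y2)*(y1*z**2 - q*y2)**2*(y1*z**2 - y2)
    return num/den

def s(z):                                             # (7.10)
    num = (q*z - 1)*((q**2*w2*y1 - q**2*a3 - q*w1*y1)*z
                     - (q*w1*y2 + q*a2 + q*a3 + w1*y1))
    return num / (q*(q - 1)*(q*y1*z**2 - y2))

sigma = [s(q**k) + (k + 1)*w0 for k in range(10)]     # (7.11)
beta = [sigma[0]] + [sigma[k] - sigma[k - 1] for k in range(1, 10)]  # (7.12)

for z in (0.3, 0.8, 1.7, 2.4):
    assert abs(U(z) - U_factored(z)) < 1e-10          # (7.6) agrees with (7.4)
    assert abs(U(z) - U(q*y2/(y1*z))) < 1e-10         # symmetry (7.9)
assert abs(sum(beta) - sigma[-1]) < 1e-12             # the beta_k telescope
```

The telescoping assertion restates that σn = β0 + β1 + … + βn, which follows directly from β0 = σ0 and βk = σk − σk−1.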

8 Final Remarks

We presented in the previous sections two important families of orthogonal polynomial sequences that are solutions of the general problem proposed in Sect. 5. It would be interesting to find other similar families associated with linearly recurrent sequences. Another problem is to determine which polynomial sequences in the families studied here are not included in the Askey scheme or the q-Askey scheme. There are some choices of the sequences associated with f, g, and h that yield a tridiagonal matrix L with some zero entries in its diagonal of index 1, so that L is not in T; such cases should be studied. The matrix approach also seems well suited to the study of connection coefficients. A more general problem is obtained when the matrix B is replaced by a (0, 2)-banded matrix, which represents a generalized difference operator of order 2. We can also use other bases for the space P instead of the Newton bases, for example the bases of Chebyshev polynomials, the Hermite polynomials, or some polynomial sequence that satisfies a simple recurrence relation. The study of the algebraic properties of the algebra L and some related algebras is an area that requires further research. In [5] we obtained some results about an extension of L to an algebra of doubly infinite matrices. The subalgebra of L of the infinite lower triangular matrices has been studied for a long time and its properties are better known; see [23, 24], and [25] and the references therein.

References

1. Álvarez-Nodarse, R.: On characterizations of classical polynomials. J. Comput. Appl. Math. 196, 320–337 (2006)
2. Arponen, T.: A matrix approach to polynomials. Linear Algebra Appl. 359, 181–196 (2003)
3. Arponen, T.: A matrix approach to polynomials II. Linear Algebra Appl. 394, 257–276 (2005)
4. Al Salam, W.A., Chihara, T.S.: Another characterization of the classical orthogonal polynomials. SIAM J. Math. Anal. 3, 65–70 (1972)
5. Arenas-Herrera, M.I., Verde-Star, L.: Representation of doubly infinite matrices as non-commutative Laurent series. Spec. Matrices 5, 250–257 (2017)
6. Costabile, F.A., Longo, E.: An algebraic approach to Sheffer polynomial sequences. Integral Transforms Spec. Funct. 25, 295–311 (2014)
7. Costabile, F.A., Gualtieri, M.I., Napoli, A.: Recurrence relations and determinant forms for general polynomial sequences. Application to Genocchi polynomials. Integral Transforms Spec. Funct. 30, 112–127 (2019)
8. Costabile, F.A., Gualtieri, M.I., Napoli, A.: Polynomial sequences: elementary basic methods and application hints. A survey. Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, to appear
9. Ernst, T.: q-Pascal and q-Bernoulli matrices: an umbral approach. Uppsala University, Department of Mathematics, Report 23, Uppsala, Sweden (2008)
10. Garza, L.G., Garza, L.E., Marcellán, F.: A matrix characterization for the Dv-semiclassical and Dv-coherent orthogonal polynomials. Linear Algebra Appl. 487, 242–259 (2015)
11. Garza, L.G., Garza, L.E., Marcellán, F.: A matrix approach for the semiclassical and coherent orthogonal polynomials. Appl. Math. Comput. 256, 459–471 (2015)
12. Kalman, D.: Polynomial translation groups. Math. Mag. 56, 23–25 (1983)
13. Kalman, D., Ungar, A.: Combinatorial and functional identities in one-parameter matrices. Am. Math. Monthly 94, 21–35 (1987)
14. Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and Their q-Analogues. Springer, Berlin, Heidelberg (2010)
15. Koepf, W., Schmersau, D.: Recurrence equations and their classical orthogonal polynomial solutions. Orthogonal systems and applications. Appl. Math. Comput. 128, 303–327 (2002)
16. Kwon, K.H., Littlejohn, L.L., Yoon, B.H.: New characterizations of classical orthogonal polynomials. Indag. Math. (N.S.) 7, 199–213 (1996)
17. Loureiro, A.F.: New results on the Bochner condition about classical orthogonal polynomials. J. Math. Anal. Appl. 364, 307–323 (2010)
18. Marcellán, F., Branquinho, A., Petronilho, J.: Classical orthogonal polynomials: a functional approach. Acta Appl. Math. 34, 283–303 (1994)
19. Marcellán, F., Pinzón-Cortés, C.N.: (M, N)-coherent pairs of linear functionals and Jacobi matrices. Appl. Math. Comput. 232, 76–83 (2014)
20. Maroni, P.: Une théorie algébrique des polynômes orthogonaux. Application aux polynômes orthogonaux semi-classiques. In: Brezinski, C., et al. (eds.) Orthogonal Polynomials and Their Applications. IMACS Ann. Comput. Appl. Math. 9, 95–130 (1991)
21. Medem, J.C., Álvarez-Nodarse, R., Marcellán, F.: On the q-polynomials: a distributional study. J. Comput. Appl. Math. 135, 157–196 (2001)
22. Njinkeu Sandjon, M., Branquinho, A., Foupouagnigni, M., Area, I.: Characterization of classical orthogonal polynomials on quadratic lattices. J. Differ. Equ. Appl. 23, 983–1002 (2017)
23. Słowik, R.: Maps on infinite triangular matrices preserving idempotents. Linear Multilinear Algebra 62, 938–964 (2014)
24. Słowik, R.: Derivations of rings of infinite matrices. Commun. Algebra 43, 3433–3441 (2015)
25. Słowik, R.: Every infinite triangular matrix is similar to a generalized infinite Jordan matrix. Linear Multilinear Algebra 65, 1362–1373 (2017)
26. Verde-Star, L.: Groups of generalized Pascal matrices. Linear Algebra Appl. 382, 179–194 (2004)
27. Verde-Star, L.: Infinite triangular matrices, q-Pascal matrices, and determinantal representations. Linear Algebra Appl. 434, 307–318 (2011)
28. Verde-Star, L.: Characterization and construction of classical orthogonal polynomials using a matrix approach. Linear Algebra Appl. 438, 3635–3648 (2013)
29. Verde-Star, L.: Recurrence coefficients and difference equations of classical discrete and q-orthogonal polynomial sequences. Linear Algebra Appl. 440, 293–306 (2014)
30. Verde-Star, L.: Polynomial sequences generated by infinite Hessenberg matrices. Spec. Matrices 5, 64–72 (2017)
31. Verde-Star, L.: A unified construction of all the hypergeometric and basic hypergeometric families of orthogonal polynomial sequences. Submitted