Weighted Polynomial Approximation and Numerical Methods for Integral Equations [1 ed.] 9783030774967, 9783030774974

The book presents a combination of two topics: one coming from the theory of approximation of functions and integrals by interpolation and quadrature, respectively, and the other from the numerical analysis of operator equations, in particular integral and related equations.


Table of contents :
Preface
Contents
1 Introduction
2 Basics from Linear and Nonlinear Functional Analysis
2.1 Linear Operators, Banach and Hilbert Spaces
2.2 Fundamental Principles
2.3 Compact Sets and Compact Operators
2.4 Function Spaces
2.4.1 Lp-Spaces
2.4.2 Spaces of Continuous Functions
2.4.3 Approximation Spaces and Unbounded Linear Operators
2.5 Fredholm Operators
2.6 Stability of Operator Sequences
2.7 Fixed Point Theorems and Newton's Method
3 Weighted Polynomial Approximation and Quadrature Rules on (-1,1)
3.1 Moduli of Smoothness, K-Functionals, and Best Approximation
3.1.1 Moduli of Smoothness and K-Functionals
3.1.2 Moduli of Smoothness and Best Weighted Approximation
3.1.3 Besov-Type Spaces
3.2 Polynomial Approximation with Doubling Weights on the Interval (-1,1)
3.2.1 Definitions
3.2.2 Polynomial Inequalities with Doubling Weights
3.2.3 Christoffel Functions with Respect to Doubling Weights
3.2.4 Convergence of Fourier Sums in Weighted Lp-Spaces
3.2.5 Lagrange Interpolation in Weighted Lp-Spaces
3.2.6 Hermite Interpolation
3.2.7 Hermite-Fejér Interpolation
3.2.8 Lagrange-Hermite Interpolation
3.3 Polynomial Approximation with Exponential Weights on the Interval (-1,1)
3.3.1 Polynomial Inequalities
3.3.2 K-Functionals and Moduli of Smoothness
3.3.3 Estimates for the Error of Best Weighted Polynomial Approximation
3.3.4 Fourier Sums in Weighted Lp-Spaces
3.3.5 Lagrange Interpolation in Weighted Lp-Spaces
3.3.6 Gaussian Quadrature Rules
4 Weighted Polynomial Approximation and Quadrature Rules on Unbounded Intervals
4.1 Polynomial Approximation with Generalized Freud Weights on the Real Line
4.1.1 The Case of Freud Weights
4.1.2 The Case of Generalized Freud Weights
4.1.3 Lagrange Interpolation in Weighted Lp-Spaces
4.1.4 Gaussian Quadrature Rules
4.1.5 Fourier Sums in Weighted Lp-Spaces
4.2 Polynomial Approximation with Generalized Laguerre Weights on the Half Line
4.2.1 Polynomial Inequalities
4.2.2 Weighted Spaces of Functions
4.2.3 Estimates for the Error of Best Weighted Approximation
4.2.4 Fourier Sums in Weighted Lp-Spaces
4.2.5 Lagrange Interpolation in Weighted Lp-Spaces
4.3 Polynomial Approximation with Pollaczek–Laguerre Weights on the Half Line
4.3.1 Polynomial Inequalities
4.3.2 Weighted Spaces of Functions
4.3.3 Estimates for the Error of Best Weighted Polynomial Approximation
4.3.4 Gaussian Quadrature Rules
4.3.5 Lagrange Interpolation in L2w
4.3.6 Remarks on Numerical Realizations
Computation of the Mhaskar–Rahmanov–Saff Numbers
Numerical Construction of Quadrature Rules
Numerical Examples
Comparison with the Gaussian Rule Based on Laguerre Zeros
5 Mapping Properties of Some Classes of Integral Operators
5.1 Some Properties of the Jacobi Polynomials
5.2 Cauchy Singular Integral Operators
5.2.1 Weighted L2-Spaces
5.2.2 Weighted Spaces of Continuous Functions
5.2.3 On the Case of Variable Coefficients
5.2.4 Regularity Properties
5.3 Compact Integral Operators
5.4 Weakly Singular Integral Operators with Logarithmic Kernels
5.5 Singular Integro-Differential or Hypersingular Operators
5.6 Operators with Fixed Singularities of Mellin Type
5.7 A Note on the Invertibility of Singular Integral Operators with Cauchy and Mellin Kernels
5.8 Solvability of Nonlinear Cauchy Singular Integral Equations
5.8.1 Equations of the First Type
5.8.2 Equations of the Second Type
5.8.3 Equations of the Third Type
6 Numerical Methods for Fredholm Integral Equations
6.1 Collectively Compact Sequences of Integral Operators
6.2 The Classical Nyström Method
6.2.1 The Case of Jacobi Weights
6.2.2 The Case of an Exponential Weight on (0,∞)
6.2.3 The Application of Truncated Quadrature Rules
6.3 The Nyström Method Based on Product Integration Formulas
6.3.1 The Case of Jacobi Weights
6.3.2 The Case of an Exponential Weight on (0,∞)
6.3.3 Application to Weakly Singular Integral Equations
6.4 Integral Equations with Logarithmic Kernels
6.4.1 The Well-posed Case
6.4.2 The Ill-posed Case
6.4.3 A Collocation-Quadrature Method
6.4.4 A Fast Algorithm
7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral Equations
7.1 Cauchy Singular Integral Equations on an Interval
7.1.1 Collocation and Collocation-Quadrature Methods
7.1.2 Weighted Uniform Convergence
Collocation Methods
Collocation-Quadrature Methods
7.1.3 Fast Algorithms
Weighted L2-Convergence
Computational Complexity of the Algorithm
Weighted Uniform Convergence
7.2 Hypersingular Integral Equations
7.2.1 Collocation and Collocation-Quadrature Methods
7.2.2 A Fast Algorithm
First Step of the Algorithm
Second Step of the Algorithm
A More General Situation
7.3 Integral Equations with Mellin Type Kernels
7.4 Nonlinear Cauchy Singular Integral Equations
7.4.1 Asymptotic of the Solution
7.4.2 A Collocation-Quadrature Method
7.4.3 Convergence Analysis
7.4.4 A Further Class of Nonlinear Cauchy Singular Integral Equations
A Collocation-Quadrature Method
Convergence Analysis
8 Applications
8.1 A Cruciform Crack Problem
8.1.1 The Integral Equations Under Consideration
8.1.2 Solvability Properties of the Operator Equations
Equation (I+MH0)u0=f0
Equation (I+MH1)u1=f1
Equation (I+H2)u2=f2
8.1.3 A Quadrature Method
8.2 The Drag Minimization Problem for a Wing
8.2.1 Formulation of the Problem
8.2.2 Derivation of the Operator Equation
8.2.3 A Collocation-Quadrature Method
8.2.4 Numerical Examples
8.3 Two-Dimensional Free Boundary Value Problems
8.3.1 Seepage Flow from a Dam
The Linear Case
The Nonlinear Case
8.3.2 Seepage Flow from a Channel
Generating Gaussian Rules
The Application of Product Integration Rules
Numerical Results
9 Hints and Answers to the Exercises
10 Equalities and Inequalities
10.1 Equalities and Equivalences
10.2 General Inequalities
10.3 Marcinkiewicz Inequalities
Bibliography
Index

Pathways in Mathematics

Peter Junghanns Giuseppe Mastroianni Incoronata Notarangelo

Weighted Polynomial Approximation and Numerical Methods for Integral Equations

Pathways in Mathematics Series Editors Takayuki Hibi, Department of Pure and Applied Mathematics, Osaka University, Suita, Osaka, Japan Wolfgang König, Weierstraß-Institut, Berlin, Germany Johannes Zimmer, Fakultät für Mathematik, Technische Universität München, Garching, Germany

Each “Pathways in Mathematics” book offers a roadmap to a currently well-developing field of mathematical research and provides first-hand information and inspiration for further study, aimed both at students and researchers. It is written in an educational style, i.e., in a way that is accessible for advanced undergraduate and graduate students. It also serves as an introduction to and survey of the field for researchers who want to be quickly informed about the state of the art. The point of departure is typically a bachelor's/master's level background, from which the reader is expeditiously guided to the frontiers. This is achieved by focusing on ideas and concepts underlying the development of the subject while keeping technicalities to a minimum. Each volume contains an extensive annotated bibliography as well as a discussion of open problems and future research directions as recommendations for starting new projects. Titles from this series are indexed by Scopus.

More information about this series at http://www.springer.com/series/15133

Peter Junghanns • Giuseppe Mastroianni • Incoronata Notarangelo

Weighted Polynomial Approximation and Numerical Methods for Integral Equations

Peter Junghanns Fakultät für Mathematik Technische Universität Chemnitz Chemnitz, Germany

Giuseppe Mastroianni Department of Mathematics, Computer Sciences and Economics University of Basilicata Potenza, Italy

Incoronata Notarangelo Department of Mathematics “Giuseppe Peano” University of Turin Turin, Italy

ISSN 2367-3451 ISSN 2367-346X (electronic) Pathways in Mathematics ISBN 978-3-030-77496-7 ISBN 978-3-030-77497-4 (eBook) https://doi.org/10.1007/978-3-030-77497-4 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The content of this book is a combination of two topics, one coming from the theory of approximation of functions and integrals by interpolation and quadrature, respectively, and the other from the numerical analysis of operator equations, in particular, of integral and related equations. It is not necessary to point out that integral equations play an important role in different mathematical areas. We stress the connection between ordinary differential equations and Volterra-Fredholm integral equations and the connection between boundary value problems for partial differential equations and boundary integral equations, which are obtained by so-called boundary integral methods. These methods lead to different classes of integral equations, for example, Fredholm integral equations of the first and second kind involving integral operators with both smooth and weakly singular kernel functions, Cauchy singular integral equations, hypersingular integral equations, and integro-differential equations. Regarding interpolation and quadrature processes, we restrict ourselves to the non-periodic case, that is, to the approximation and integration of functions defined on bounded or unbounded intervals, where we pay particular attention to functions having singularities at the endpoints of the interval. This is due to our further aim in this book, namely, to propose and investigate numerical methods for different classes of integral equations given on such intervals, where these methods are based on the mentioned interpolation and quadrature processes. The book contains classical results, but also very recent ones. We thought it might be worthwhile to publish a book in which these two topics are summarized.

In Chap. 1, the introduction, we give some hints for the use of the book and introduce some general notations and agreements for the whole text. Chapter 2 collects the basic principles from linear functional analysis needed in the remaining part of the book and gives definitions for different kinds of function spaces, such as weighted Lp-spaces and weighted spaces of continuous functions, as well as scales of subspaces of them, which are important for our investigations. This chapter also presents some concepts concerned with the stability and convergence of operator sequences or, in other words, with numerical or approximation methods for operator equations. Moreover, we recall some basic facts from fixed point theory and about Newton's method.

Chapters 3 and 4 are devoted to the study of interpolation processes and the respective quadrature rules based on the zeros of orthogonal polynomials with respect to certain weight functions on the interval (−1, 1), the half axis (0, ∞), and the whole real axis. These chapters can be considered as a continuation of [121, Chapter 2 and Chapter 4, Section 5.1], where the authors mainly consider classical weights (also with additional inner singularities). In the present text we concentrate on recent results and developments concerned with non-classical weights like exponential weights on (−1, 1) and on (0, ∞) and generalized Freud weights on the real axis. In Chap. 5, we provide mapping properties of various classes of integral operators in certain Banach spaces of functions and with respect to appropriate scales of subspaces of these Banach spaces, which are of interest for our further investigations. Moreover, we discuss solvability properties of certain classes of nonlinear Cauchy singular integral equations. Chapters 6 and 7 deal with numerical methods for several classes of integral equations based on some of the interpolation and quadrature processes considered in Chaps. 3 and 4. While Chap. 6 concentrates on respective Nyström and collocation-quadrature methods for Fredholm integral equations with continuous and weakly singular kernel functions, in Chap. 7 collocation and collocation-quadrature methods are applied to strongly singular integral equations like linear and nonlinear Cauchy singular integral equations, integral equations with strong fixed singularities, and hypersingular integral equations. In Chap. 8, we investigate some concrete applications of the theory presented in the previous chapters to examples from two-dimensional elasticity theory, airfoil theory, and free boundary seepage flow problems. In the two final chapters, Chaps. 9 and 10, we give complete answers or detailed hints to the exercises and list a series of inequalities, equivalences, and equalities used at many places in the book, respectively.

The book is mainly addressed to graduate students familiar with the basics of real and complex analysis, linear algebra, and functional analysis. However, the study of this book is also worthwhile for researchers beginning to deal with the approximation of functions and the numerical solution of operator equations, in particular integral equations. Moreover, we hope that the present book is also suitable to give ideas for handling further or new problems of interest, which can be solved with the help of integral equations. The book should also reach engineers who are interested in the solution of problems of a similar kind as presented here. We are deeply grateful to Birkhäuser for including this book in the very successful series Pathways in Mathematics and, in particular, for the invaluable assistance in preparing the final version of the text.

Chemnitz, Germany Potenza, Italy Turin, Italy

Peter Junghanns Giuseppe Mastroianni Incoronata Notarangelo


Chapter 1

Introduction

In this book we are mainly interested in the approximation of (in general, complex-valued) functions, defined on bounded or unbounded intervals of the real line, with possible singularities (like unboundedness or nonsmoothness) at the endpoints of these intervals (cf. Chaps. 3 and 4). In particular, we focus on situations where such functions are (unknown) solutions of different kinds of integral equations, namely Fredholm integral equations (Chap. 6), linear and nonlinear Cauchy singular integral equations, hypersingular integral equations, and integral equations with Mellin kernels (Chap. 7). We propose and investigate numerical methods for solving these operator equations. Thereby, we concentrate on so-called global ansatz functions for the unknown solutions by seeking an approximate solution as a finite linear combination of (in general, weighted) polynomials (cf. Chaps. 6 and 7). Consequently, our main goals are, firstly, the study of (weighted) polynomial approximation of functions from weighted spaces of continuous functions and weighted $L^p$-spaces (cf. Chaps. 3 and 4) and, secondly, the stability and convergence of certain approximation methods for integral equations based on the interpolation and quadrature processes investigated in Chaps. 3 and 4.

Thus, if readers are mainly interested in the approximation of functions and integrals, then, after reading this introduction and Chap. 2, they can concentrate on studying Chaps. 3 and 4 on weighted polynomial approximation. Readers who are more interested in numerics for integral equations can skip Chaps. 3 and 4 and directly proceed with Chap. 5 on mapping properties of some classes of integral operators, followed by Chap. 6 and/or Chap. 7 (consulting certain results from Chaps. 3 and 4 if necessary). They should end up reading Chap. 8 on some applications devoted to fracture mechanics, wing theory, and free boundary value problems (cf. the picture at the end of this introduction). Moreover, many sections of the book are written in such a way that, for a reader who does not want to go too much into the details, it is also possible to study a single section with some success and without having to take into account too many references to other places in the book. That means that definitions and notations are repeated at appropriate places. In any case, we recommend that readers work through the exercises.

Finally, let us make some agreements concerning notations and formulations, which are in force throughout the whole book. By $\mathbb{Z}$, $\mathbb{N}_0$, $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{C}$ we denote the sets of integers, nonnegative integers, positive integers, real numbers, and complex numbers, respectively. By capital and boldface letters X, Y, . . . from the end of the Latin alphabet, we will denote spaces (of functions), in particular metric spaces, linear spaces, vector spaces, normed spaces, and Banach spaces. We have to take into account that a metric space is an ordered pair (X, d) of a set X and a metric $d : X \times X \longrightarrow [0, \infty)$, and that a normed space is an ordered pair $(X, \|\cdot\|)$ of a linear space X over a field K (here usually K = R or K = C) and a norm function $\|\cdot\| : X \longrightarrow [0, \infty)$ satisfying the respective axioms. A function f or map from a set A into a set B is often written as $f : A \longrightarrow B$, $a \mapsto f(a)$ or, shortly, as $f : A \longrightarrow B$. We use calligraphic letters $\mathcal{A}, \mathcal{B}, \ldots$ to denote operators $\mathcal{A} : X \longrightarrow Y$, $\mathcal{B} : Z \longrightarrow W, \ldots$

In all that follows, we denote by $c, c_1, c_2, \ldots$ positive real constants which can have different values at different places. Moreover, by $c \ne c(n, f, x, \ldots)$ we indicate that c is independent of the variables $n, f, x, \ldots$ If $A(n, f, x, \ldots)$ and $B(n, f, x, \ldots)$ are two positive functions depending on certain variables $n, f, x, \ldots$, then we will write $A \sim_{n,f,x,\ldots} B$ if there exists a positive constant $c \ne c(n, f, x, \ldots)$ such that $c^{-1} B(n, f, x, \ldots) \le A(n, f, x, \ldots) \le c\, B(n, f, x, \ldots)$ is satisfied. Sometimes, for sequences $(\xi_n)_{n=0}^{\infty}$ and $(\eta_n)_{n=0}^{\infty}$ of positive numbers, we will use the notation $\xi_n = \mathcal{O}(\eta_n)$ in order to say that there is a positive constant C such that $\xi_n \le C \eta_n$ for all $n \in \mathbb{N}_0$. If in such a situation a finite number of elements of these sequences are not well defined, then these numbers have to be set equal to 1. Hence, for example, we write shortly $(\ln n) = (\ln n)_{n=0}^{\infty}$ for $(1, 1, \ln 2, \ln 3, \ldots)$ or $(\xi_n \ln n) = (\xi_n \ln n)_{n=0}^{\infty}$ for $(\xi_0, \xi_1, \xi_2 \ln 2, \xi_3 \ln 3, \ldots)$.

Moreover, sometimes function values at special points (for example, endpoints of an interval) are defined as limits (such that the respective function is continuous in these points). Thus, the formulation "Let $\rho(x) = (1 - x)^{\alpha} (1 + x)^{\beta}$ be a Jacobi weight and let $f : (-1, 1) \longrightarrow \mathbb{C}$ be a function such that $\rho f : [-1, 1] \longrightarrow \mathbb{C}$ is continuous." means that $f : (-1, 1) \longrightarrow \mathbb{C}$ is continuous and that the finite limits $(\rho f)(\pm 1) := \lim_{x \to \pm 1,\, x \in (-1,1)} \rho(x) f(x)$ exist. By Greek letters $\sigma, \mu, \ldots$ we will denote weight functions $\sigma(x), \mu(x), \ldots$, while the letters $\alpha, \beta, \ldots$ from the beginning of the Greek alphabet are used for special parameters in these weights. Additionally, in Chap. 9 we give complete answers or detailed hints to the exercises, and in Chap. 10, as a quick reference, we collect a series of inequalities important for a lot of our considerations.


[Flow diagram: Two proposals for reading parts of the book. Chapter 1 (Introduction) → Chapter 2 (Basics from linear functional analysis) → either Chapters 3 and 4 (Weighted polynomial approximation) or Chapter 5 (Mapping properties of integral operators) → Chapter 6 (Fredholm integral equations) and/or Chapter 7 (Strongly singular integral equations) → Chapter 8 (Applications).]

Chapter 2

Basics from Linear and Nonlinear Functional Analysis

In this chapter we collect the basic principles from linear functional analysis needed in the remaining part of the book. We give definitions for different kinds of function spaces, such as weighted $L^p$-spaces and weighted spaces of continuous functions, as well as scales of subspaces of them, which are important for our investigations. We also present some concepts concerned with the stability and convergence of operator sequences or, in other words, with numerical or approximation methods for operator equations. Moreover, we recall some basic facts from fixed point theory, namely Banach's and Schauder's fixed point theorems, and discuss a few aspects of the convergence of Newton's iteration method.

2.1 Linear Operators, Banach and Hilbert Spaces

By boldfaced capital letters X, Y, . . . we will denote (linear) spaces (in most cases, of functions), while by calligraphic letters $\mathcal{A}, \mathcal{B}, \ldots$ we will refer to (linear) operators. The notation $\mathcal{A} : X \longrightarrow Y$ means that the operator $\mathcal{A}$ is defined on X and the images $\mathcal{A}f$ (for all $f \in X$) belong to Y. For this, we also write shortly $\mathcal{A} : X \longrightarrow Y$, $f \mapsto \mathcal{A}f$. The set of linear operators $\mathcal{A} : X \longrightarrow Y$ from a linear space X into a linear space Y will be denoted by L(X, Y). Let us recall some basic concepts and facts from linear functional analysis. A normed linear space $(X, \|\cdot\|)$ is a linear space X over a field K (here K = R, the field of real numbers, or K = C, the field of complex numbers) equipped with a so-called norm $\|\cdot\| : X \longrightarrow [0, \infty)$, $x \mapsto \|x\|$, satisfying

(N1) $\|f\| = 0$ if and only if $f = \theta$ (the zero element in X),
(N2) $\|\alpha f\| = |\alpha|\,\|f\|$ for all $\alpha \in K$ and all $f \in X$,
(N3) $\|f + g\| \le \|f\| + \|g\|$ for all $f, g \in X$ (triangle inequality).

We call $(X, \|\cdot\|)$ a real or complex normed space if K = R or K = C, respectively.


A normed linear space $(X, \|\cdot\|)$ is automatically considered as a metric space $(X, \rho)$, where the metric $\rho : X \times X \longrightarrow [0, \infty)$ is defined by $\rho(f, g) = \|f - g\|$. With the help of this metric the notions of bounded, open, closed, and compact sets as well as of convergent or Cauchy sequences $(f_n)_{n=1}^{\infty}$ are given. A normed linear space $(X, \|\cdot\|)$ is called a Banach space if the respective metric space $(X, \rho)$ is complete, i.e., every Cauchy sequence $(f_n)_{n=1}^{\infty} \subset X$ is convergent in X. We say that two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on a linear space X are equivalent if there is a real constant $\gamma \ge 1$ such that

$$\gamma^{-1} \|f\|_2 \le \|f\|_1 \le \gamma \|f\|_2 \quad \forall f \in X.$$

In this case, a sequence is convergent in $(X, \|\cdot\|_1)$ if and only if it is convergent in $(X, \|\cdot\|_2)$. In other words, these two spaces have the same topological properties. If X and Y are two normed spaces over the same field K, then the product space $X \times Y$ can be equipped with several equivalent norms, for example with $\|(f, g)\|_p = \left( \|f\|_X^p + \|g\|_Y^p \right)^{1/p}$ for some $p \in [1, \infty)$. We agree that, if in the sequel there occur products of normed spaces, we will consider them equipped with one of these norms. Moreover, K is also considered as the Banach space $(K, |\cdot|)$ with the modulus as norm. Clearly, if both spaces X and Y are Banach spaces, then so is their product. With these definitions, one can easily see that the maps $X \times X \longrightarrow X$, $(f, g) \mapsto f + g$ and $K \times X \longrightarrow X$, $(\alpha, f) \mapsto \alpha f$ are continuous.

Let X and Y be normed linear spaces over the same field K. A linear operator $\mathcal{A} \in L(X, Y)$ is called bounded if it maps bounded sets into bounded sets, which is equivalent to

$$\|\mathcal{A}\|_{X \to Y} := \sup \left\{ \|\mathcal{A}f\|_Y : f \in X,\ \|f\|_X \le 1 \right\} < \infty.$$

The boundedness of $\mathcal{A} \in L(X, Y)$ is equivalent to the continuity of the operator $\mathcal{A} : X \longrightarrow Y$ (as a map between the respective metric spaces), and the number $M = \|\mathcal{A}\|_{X \to Y}$ is the smallest one for which

$$\|\mathcal{A}f\|_Y \le M \|f\|_X \quad \forall f \in X.$$

The set of all bounded linear operators from X into Y is denoted by L(X, Y), in case Y = X by L(X). The linear combination of two linear operators $\mathcal{A}, \mathcal{B} \in L(X, Y)$ is defined by $(\alpha \mathcal{A} + \beta \mathcal{B})f := (\alpha \mathcal{A}f) + (\beta \mathcal{B}f)$ for all $f \in X$, where $\alpha, \beta \in K$. Then L(X, Y) is a linear space, and $\left( L(X, Y), \|\cdot\|_{X \to Y} \right) = \left( L(X, Y), \|\cdot\|_{L(X,Y)} \right)$ is a normed linear space, which is a Banach space if and only if Y is a Banach space (see the following three exercises).
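To make the operator norm concrete in the simplest finite-dimensional setting, the following sketch (my own illustration, not from the book; it assumes NumPy is available) estimates $\|\mathcal{A}\|_{X \to Y}$ for a matrix operator between Euclidean spaces by sampling the unit sphere, and compares the result with the exact value, which here is the largest singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))               # a linear operator R^5 -> R^3

# Monte-Carlo estimate of ||A|| = sup{ ||A f|| : ||f|| <= 1 }
samples = rng.standard_normal((5, 100_000))
samples /= np.linalg.norm(samples, axis=0)    # points on the unit sphere of R^5
estimate = np.linalg.norm(A @ samples, axis=0).max()

exact = np.linalg.norm(A, 2)                  # spectral norm = largest singular value
print(f"sampled sup: {estimate:.4f}, largest singular value: {exact:.4f}")
```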


Let $(X, \|\cdot\|_X)$ and $(Y, \|\cdot\|_Y)$ be two normed spaces. We say that Y is continuously embedded into X if $Y \subset X$ and the operator $\mathcal{E} : Y \longrightarrow X$, $y \mapsto y$, is bounded, i.e., if $\|y\|_X \le c\,\|y\|_Y$ holds true for all $y \in Y$ with a finite constant $c \in \mathbb{R}$ not depending on y. In all that follows, we abstain from writing the index in a norm (for example, in $\|\cdot\|_X$ or $\|\cdot\|_{L(X,Y)}$) if there is no possibility of misunderstanding.

Exercise 2.1.1 Let $X_0$ be a dense linear subspace of X and let Y be a Banach space. Assume $\mathcal{A}_0 \in L(X_0, Y)$. Show that there exists a unique extension $\mathcal{A} \in L(X, Y)$ of $\mathcal{A}_0$, i.e., $\mathcal{A}f = \mathcal{A}_0 f$ for all $f \in X_0$.

Exercise 2.1.2 For a sequence $(x_n)_{n=1}^{\infty}$ of elements $x_n \in X$ of a normed space $(X, \|\cdot\|)$, we say that the series $\sum_{n=1}^{\infty} x_n$ is convergent if the sequence $\left( \sum_{k=1}^{n} x_k \right)_{n=1}^{\infty}$ converges. It is called absolutely convergent if the number series $\sum_{n=1}^{\infty} \|x_n\|$ converges. Show that X is a Banach space if and only if every absolutely convergent series of elements of X is convergent in X.

Exercise 2.1.3 Prove that L(X, Y) is a Banach space if and only if Y is a Banach space. (Hint: For the ⇒-direction, use Corollary 2.2.10 from Sect. 2.2 below.)

An operator $\mathcal{P} \in L(X, X)$ is called a projection if $\mathcal{P}^2 = \mathcal{P}$. Since in this case, for $g = \mathcal{P}f$, we have $\|\mathcal{P}g\|_X = \|g\|_X$, we see that $\|\mathcal{P}\|_{X \to X} \ge 1$ for every continuous projection $\mathcal{P} \in L(X, X)$, $\mathcal{P} \ne 0$.

Let H be a linear space over the field K = R or K = C, equipped with a so-called inner product $\langle \cdot, \cdot \rangle : H \times H \longrightarrow K$ having the following properties:

(I1) $\langle f, f \rangle \ge 0$ for all $f \in H$, and $\langle f, f \rangle = 0$ if and only if $f = \theta$,
(I2) $\langle f, g \rangle = \overline{\langle g, f \rangle}$ for all $f, g \in H$ (here $\overline{\alpha}$ denotes the complex conjugate of the number $\alpha \in \mathbb{C}$),
(I3) $\langle \alpha f + \beta g, h \rangle = \alpha \langle f, h \rangle + \beta \langle g, h \rangle$ for all $f, g, h \in H$ and all $\alpha, \beta \in K$.

A linear space $(H, \langle \cdot, \cdot \rangle)$ with inner product is considered as a normed linear space $(H, \|\cdot\|)$ with $\|f\| := \sqrt{\langle f, f \rangle}$. In such a space the Cauchy-Schwarz inequality holds true,

$$|\langle f, g \rangle| \le \|f\| \, \|g\| \quad \forall f, g \in H.$$

An immediate consequence of this inequality is that the map $H \times H \longrightarrow K$, $(f, g) \mapsto \langle f, g \rangle$ is continuous. An infinite-dimensional linear space $(H, \langle \cdot, \cdot \rangle)$ with inner product is called a Hilbert space if the respective normed space $(H, \|\cdot\|)$ is a Banach space.

In what follows, let $(H, \langle \cdot, \cdot \rangle)$ be a Hilbert space. A sequence $(e_n)_{n=0}^{\infty} \subset H$ is called an orthonormal system if

$$\langle e_m, e_n \rangle = \delta_{mn} := \begin{cases} 1 & : m = n, \\ 0 & : m \ne n. \end{cases} \tag{2.1.1}$$


We remark that in this case the system $\{e_0, e_1, \ldots, e_m\}$ is linearly independent for every $m \in \mathbb{N}_0$. We denote the linear hull of this system by $H_m$ and define the operators $\mathcal{P}_m : H \longrightarrow H$ by

$$\mathcal{P}_m f = \sum_{j=0}^{m} \langle f, e_j \rangle e_j.$$

Then the image $\mathcal{R}(\mathcal{P}_m) := \mathcal{P}_m(H)$ is equal to $H_m$. Moreover, $\mathcal{P}_m : H \longrightarrow H$ is an orthoprojection, which means that

$$\langle \mathcal{P}_m f, g \rangle = \langle f, \mathcal{P}_m g \rangle \quad \text{and} \quad \mathcal{P}_m^2 f = \mathcal{P}_m f \quad \forall f, g \in H.$$

This implies $\mathcal{P}_m \in L(H, H)$ and $\|\mathcal{P}_m\|_{H \to H} = 1$. We list further important properties of an orthonormal system $(e_n)_{n=0}^{\infty}$: For every $f \in H$,

(P1) $\|f - \mathcal{P}_m f\| = \inf \{ \|f - p\| : p \in H_m \}$,
(P2) $\sum_{k=0}^{m} |\langle f, e_k \rangle|^2 \le \|f\|^2$ for all $m \in \mathbb{N}_0$.

Property (P2) is known as Bessel's inequality and can be written equivalently as

(P3) $\sum_{k=0}^{\infty} |\langle f, e_k \rangle|^2 \le \|f\|^2$.

An orthonormal system $(e_n)_{n=0}^{\infty}$ is named complete if it meets Parseval's equality

$$\sum_{k=0}^{\infty} |\langle f, e_k \rangle|^2 = \|f\|^2 \quad \forall f \in H. \tag{2.1.2}$$

If this is the case, then

$$\lim_{m \to \infty} \|f - \mathcal{P}_m f\| = 0 \quad \forall f \in H, \tag{2.1.3}$$

which can also be written as

$$f = \sum_{k=0}^{\infty} \langle f, e_k \rangle e_k \quad \text{(in the sense of convergence in } H\text{)}$$

and which is a consequence of the relation

$$\|f - \mathcal{P}_m f\|^2 = \langle f - \mathcal{P}_m f, f - \mathcal{P}_m f \rangle = \|f\|^2 - \|\mathcal{P}_m f\|^2 = \|f\|^2 - \sum_{k=0}^{m} |\langle f, e_k \rangle|^2. \tag{2.1.4}$$
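The identities above are easy to test numerically. The following sketch (my own illustration, not part of the book; it assumes NumPy) builds an orthonormal system $e_0, \ldots, e_m$ in the Hilbert space $\mathbb{C}^N$ with the Euclidean inner product, forms the orthoprojection $\mathcal{P}_m f = \sum_j \langle f, e_j \rangle e_j$, and checks Bessel's inequality (P2) together with relation (2.1.4).

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 50, 9                                   # H = C^N, orthonormal system e_0,...,e_m

# an orthonormal system obtained by QR factorization of random vectors
Q, _ = np.linalg.qr(rng.standard_normal((N, m + 1)) + 1j * rng.standard_normal((N, m + 1)))
e = Q.T                                        # rows e[j] satisfy <e_j, e_k> = delta_jk

f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
coeff = e.conj() @ f                           # Fourier coefficients <f, e_j>
Pmf = coeff @ e                                # P_m f = sum_j <f, e_j> e_j

bessel = np.sum(np.abs(coeff) ** 2)            # should not exceed ||f||^2   (P2)
lhs = np.linalg.norm(f - Pmf) ** 2             # left-hand side of (2.1.4)
rhs = np.linalg.norm(f) ** 2 - bessel          # right-hand side of (2.1.4)
print(bessel <= np.linalg.norm(f) ** 2, np.isclose(lhs, rhs))
```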


Exercise 2.1.4 Show that the orthonormal system $(e_n)_{n=0}^{\infty}$ is complete if and only if the set $\widetilde{H} := \bigcup_{m=0}^{\infty} H_m$ is dense in H, which means that every $f \in H$ is the limit of a sequence of elements of $\widetilde{H}$.

2.2 Fundamental Principles

There exist three fundamental principles in linear functional analysis, the principle of uniform boundedness, the closed graph theorem, and the theorem on the sufficient number of bounded linear functionals, which are closely connected with the Banach-Steinhaus theorem, Banach's theorem, and the Hahn-Banach theorem, respectively. In the sequel, we will describe these principles in a short manner.

Principle of Uniform Boundedness Let X be a Banach space and Y be a normed space. If a family $\mathcal{F} \subset L(X, Y)$ of bounded linear operators is pointwise bounded, i.e.,

$$\sup \left\{ \|\mathcal{A}f\|_Y : \mathcal{A} \in \mathcal{F} \right\} < \infty \quad \forall f \in X,$$

then $\mathcal{F}$ is uniformly bounded, i.e., $\sup \left\{ \|\mathcal{A}\|_{X \to Y} : \mathcal{A} \in \mathcal{F} \right\} < \infty$.

The following Banach-Steinhaus theorem is an application of this principle to strongly convergent operator sequences. A sequence of operators $\mathcal{A}_n : X \longrightarrow Y$ is called strongly convergent to the operator $\mathcal{A} : X \longrightarrow Y$ if

$$\lim_{n \to \infty} \|\mathcal{A}_n f - \mathcal{A}f\|_Y = 0 \quad \forall f \in X.$$

Theorem 2.2.1 Let X and Y be Banach spaces, $X_0 \subset X$ be a dense subset, and $\mathcal{A}_n \in L(X, Y)$, $n \in \mathbb{N}$. Then $(\mathcal{A}_n)_{n=1}^{\infty}$ is strongly convergent if and only if $(\mathcal{A}_n f)_{n=1}^{\infty}$ is a Cauchy sequence in Y for all $f \in X_0$ and the number sequence $(\|\mathcal{A}_n\|_{X \to Y})_{n=1}^{\infty}$ is bounded.

Exercise 2.2.2 Give a proof of Theorem 2.2.1 using the principle of uniform boundedness.

The Closed Graph Theorem A linear operator $\mathcal{A} : D(\mathcal{A}) \longrightarrow Y$, where the domain $D(\mathcal{A})$ of $\mathcal{A}$ is a linear subspace of the normed space X, is called closed if $x_n \in D(\mathcal{A})$, $x_n \longrightarrow x$ in X, and $\mathcal{A}x_n \longrightarrow y$ in Y imply $x \in D(\mathcal{A})$ and $\mathcal{A}x = y$. The following closed graph theorem deals with the case $D(\mathcal{A}) = X$.

Theorem 2.2.3 If X, Y are Banach spaces and if the linear operator $\mathcal{A} : X \longrightarrow Y$ is closed, then $\mathcal{A} : X \longrightarrow Y$ is continuous, i.e., $\mathcal{A} \in L(X, Y)$.


An immediate consequence of the closed graph theorem is Banach's theorem.

Theorem 2.2.4 If X and Y are Banach spaces and if $\mathcal{A} \in L(X, Y)$ realizes a bijective mapping $\mathcal{A} : X \longrightarrow Y$, then $\mathcal{A}$ is continuously invertible, i.e., $\mathcal{A}^{-1}$ belongs to L(Y, X). The subset of L(X, Y) of all invertible operators we will denote by GL(X, Y), in case X = Y by GL(X).

Exercise 2.2.5 Prove Theorem 2.2.4 with the help of Theorem 2.2.3.

Sufficient Number of Bounded Linear Functionals For the Hahn-Banach theorem, there are a real version and a complex version.

Theorem 2.2.6 (Real Version) Let X be a real normed space and $p : X \longrightarrow \mathbb{R}$ be a sublinear functional, i.e., $p(x + y) \le p(x) + p(y)$ and $p(\alpha x) = \alpha p(x)$ for all $x, y \in X$ and all $\alpha \in [0, \infty)$. If $X_0 \subset X$ is a linear subspace and if $f : X_0 \longrightarrow \mathbb{R}$ is a linear functional satisfying $f(x) \le p(x)$ for all $x \in X_0$, then there is a linear functional $F : X \longrightarrow \mathbb{R}$ having the properties $F(x) = f(x)$ for all $x \in X_0$ and $F(x) \le p(x)$ for all $x \in X$.

Theorem 2.2.7 (Complex Version) Let X be a normed space over K = R or K = C and $X_0 \subset X$ a linear subspace. If $f : X_0 \longrightarrow K$ is a linear functional with $|f(x)| \le \|x\|$ for all $x \in X_0$, then there exists a linear functional $F : X \longrightarrow K$ satisfying $F(x) = f(x)$ for all $x \in X_0$ and $|F(x)| \le \|x\|$ for all $x \in X$.

The set of all linear and bounded functionals on a normed space X, i.e., $\left( L(X, K), \|\cdot\|_{X \to K} \right)$, is also denoted by $X^*$ and called the dual space of X. If a normed space Y is continuously embedded into a normed space X, then it is easy to see that $X^*$ is continuously embedded into $Y^*$. The following first corollary of the Hahn-Banach theorems shows that, for a linear subspace $X_0 \subset X$ equipped with the induced norm of X, the dual space $X_0^*$ is "not greater" than $X^*$.

Corollary 2.2.8 Assume that X is a normed space and $X_0 \subset X$ a linear subspace equipped with the norm from X. Then, for every $f_0 \in X_0^*$, there is an $f \in X^*$ with the properties $f(x) = f_0(x)$ for all $x \in X_0$ and $\|f\|_{X^*} = \|f_0\|_{X_0^*}$.

Proof The case $\|f_0\|_{X_0^*} = 0$ is trivial. Let $\|f_0\|_{X_0^*} > 0$. By $\|x\|_* := \|f_0\|_{X_0^*} \|x\|_X$ we define a norm on X, for which

$$|f_0(x)| \le \|f_0\|_{X_0^*} \|x\|_X = \|x\|_* \quad \forall x \in X_0$$

holds true. In view of Theorem 2.2.7, there exists a functional $f \in X^*$ such that $f(x) = f_0(x)$ for all $x \in X_0$ and $|f(x)| \le \|x\|_* = \|f_0\|_{X_0^*} \|x\|_X$ for all $x \in X$. Thus, on the one hand, $\|f\|_{X^*} \le \|f_0\|_{X_0^*}$. On the other hand, we have

$$\|f\|_{X^*} = \sup \{ |f(x)| : x \in X, \|x\|_X \le 1 \} \ge \sup \{ |f_0(x)| : x \in X_0, \|x\|_X \le 1 \} = \|f_0\|_{X_0^*},$$

and the corollary is proved. ∎

Corollary 2.2.9 Let X be a normed space and $X_0$ a linear subspace of X. If $x^* \in X$ with $d = \operatorname{dist}(x^*, X_0) := \inf \{ \|x_0 - x^*\| : x_0 \in X_0 \} > 0$, then there exists a functional $f_0 \in X^*$ such that $f_0(x_0) = 0$ for all $x_0 \in X_0$, $\|f_0\|_{X^*} = 1$, and $f_0(x^*) = d$.

Proof Set $X_1 = \{ x = x_0 + \alpha x^* : x_0 \in X_0, \alpha \in K \}$ and $f_1(x_0 + \alpha x^*) = \alpha d$ for $x_0 \in X_0$ and $\alpha \in K$. This definition is correct, since the relation $x_0 + \alpha' x^* = x_0' + \alpha x^*$ for some $x_0, x_0' \in X_0$, $\alpha, \alpha' \in K$, implies $x_0 - x_0' = (\alpha - \alpha') x^* \in X_0$ and hence $\alpha' = \alpha$. In case $\alpha \ne 0$ we get

$$|f_1(x_0 + \alpha x^*)| = |\alpha| d = |\alpha| \inf \left\{ \| -\alpha^{-1} y_0 - x^* \| : y_0 \in X_0 \right\} = \inf \left\{ \| y_0 + \alpha x^* \| : y_0 \in X_0 \right\} \le \| x_0 + \alpha x^* \| \quad \forall x_0 \in X_0.$$

Consequently, $\|f_1\|_{X_1^*} \le 1$. Furthermore, there are $x_n \in X_0$ such that $\|x_n - x^*\| \longrightarrow d$ if n tends to infinity. This implies

$$d = |f_1(x_n - x^*)| \le \|f_1\|_{X_1^*} \|x_n - x^*\| \longrightarrow \|f_1\|_{X_1^*}\, d,$$

i.e., $1 \le \|f_1\|_{X_1^*}$. Finally, we have to use Corollary 2.2.8. ∎

The following conclusion of the Hahn-Banach theorem is also known as the proposition on the sufficient number of bounded linear functionals.

Corollary 2.2.10 Let X be a normed space and $x_0 \in X \setminus \{\theta\}$. Then there is a functional $f_0 \in X^*$ with the properties $\|f_0\|_{X^*} = 1$ and $f_0(x_0) = \|x_0\|$.

The phrase "sufficient number of bounded linear functionals" describes the separation property of the dual space $X^*$ formulated in the following exercise.

Exercise 2.2.11 Use Theorem 2.2.7 to give a proof of Corollary 2.2.10 and show that, as a consequence of this corollary, the functionals from $X^*$ separate the points of X, which means that, for arbitrary different points $x_1$ and $x_2$ of X, there is a functional $f \in X^*$ such that $f(x_1) \ne f(x_2)$.

For $1 < p < \infty$, let us consider the sequence space

$$\ell^p = \left\{ \xi = (\xi_n)_{n=0}^{\infty} : \xi_n \in \mathbb{C}, \ \sum_{n=0}^{\infty} |\xi_n|^p < \infty \right\}, \tag{2.2.1}$$

which is, equipped with the norm

$$\|\xi\|_{\ell^p} = \left( \sum_{n=0}^{\infty} |\xi_n|^p \right)^{\frac{1}{p}}, \tag{2.2.2}$$

a Banach space. As a consequence of the Hölder inequality

$$\left| \sum_{n=0}^{\infty} \xi_n \eta_n \right| \le \left( \sum_{n=0}^{\infty} |\xi_n|^p \right)^{\frac{1}{p}} \left( \sum_{n=0}^{\infty} |\eta_n|^q \right)^{\frac{1}{q}}, \quad \xi \in \ell^p, \ \eta \in \ell^q, \tag{2.2.3}$$

where $\frac{1}{p} + \frac{1}{q} = 1$, its dual space $(\ell^p)^*$ can be identified with $\ell^q$ via the isometrical isomorphism

$$\mathcal{J} : (\ell^p)^* \longrightarrow \ell^q, \quad f \mapsto \mathcal{J}f = \eta = (\eta_n)_{n=0}^{\infty} \tag{2.2.4}$$

with $f(\xi) = \sum_{n=0}^{\infty} \xi_n \eta_n$ for all $\xi \in \ell^p$ (i.e., $\mathcal{J} : (\ell^p)^* \longrightarrow \ell^q$ is linear and bijective with $\|\mathcal{J}f\|_{\ell^q} = \|f\|_{(\ell^p)^*}$ for all $f \in (\ell^p)^*$).

Exercise 2.2.12 Use Hölder's inequality (2.2.3) to show that the map $\mathcal{J} : (\ell^p)^* \longrightarrow \ell^q$ defined in (2.2.4) is an isometrical isomorphism.

For a sequence $\omega = (\omega_n)_{n=0}^{\infty}$ of positive numbers, by $\ell^p_\omega$ we denote the weighted space $\ell^p_\omega = \left\{ \xi = (\xi_n)_{n=0}^{\infty} : \left( \sqrt[p]{\omega_n}\, \xi_n \right)_{n=0}^{\infty} \in \ell^p \right\}$ equipped with the norm

$$\|\xi\|_{\ell^p_\omega} = \left( \sum_{n=0}^{\infty} \omega_n |\xi_n|^p \right)^{\frac{1}{p}}. \tag{2.2.5}$$

Of course,

$$\mathcal{J}_\omega : (\ell^p_\omega)^* \longrightarrow \ell^q_\omega, \quad f \mapsto \mathcal{J}_\omega f = \eta = (\eta_n)_{n=0}^{\infty} \tag{2.2.6}$$

with $f(\xi) = \sum_{n=0}^{\infty} \omega_n \xi_n \eta_n$ for all $\xi \in \ell^p_\omega$ is an isometrical isomorphism (cf. Exercise 2.2.12).

Corollary 2.2.13 Let $1 < p < \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$, and let $\alpha_k \in \mathbb{C}$, $\gamma_k > 0$, $k \in \mathbb{N}_0$, be given numbers. If there is a constant $A \in \mathbb{R}$ such that, for all $\beta_k \in \mathbb{C}$ and all $n \in \mathbb{N}$,

$$\left| \sum_{k=0}^{n} \gamma_k \alpha_k \beta_k \right| \le A \left( \sum_{k=0}^{n} \gamma_k |\beta_k|^p \right)^{\frac{1}{p}}, \tag{2.2.7}$$

then

$$\left( \sum_{k=0}^{\infty} \gamma_k |\alpha_k|^q \right)^{\frac{1}{q}} \le A. \tag{2.2.8}$$

13

∞ Proof For γ = (γn )n=0 , define the linear functional

fn : pγ −→ C,

∞ ξ = (ξn )n=0 →

n 

γk αk ξk .

k=0

 p ∗ γ and, since Jω in (2.2.6) is an isometrical 1 n q  = γk |αk |q ≤ A, which yields (2.2.8).  

By (2.2.7) we have fn ∈  isomorphism, fn  pγ ∗

k=0

2.3 Compact Sets and Compact Operators Let E be a metric space with the distance function d : E × E −→ [0, ∞). Recall that a subset A ⊂ E is called compact if every covering of A by open sets contains a finite covering of A and that A is called relatively compact if its closure is compact. For ε > 0, by an ε-net for a nonempty subset A ⊂ E we mean a set Aε ⊂ E such that for every x ∈ A there exists an xε ∈ Aε with d(x, xε ) < ε. This condition can also be written in the formula Aε ∩ Uε (x) = ∅

∀x ∈ A,

where Uε (x) = UE ε (x) denotes the (open) ε-neighbourhood of x ∈ E, Uε (x) = {y ∈ E : d(y, x) < ε} . Exercise 2.3.1 Let A ⊂ E be a nonempty subset of a metric space (E, d). Show that, if for every ε > 0 there exists a finite ε-net Aε ⊂ E for A, then, for every ε > 0, there exists a finite ε-net Bε ⊂ A for A. Exercise 2.3.2 Prove that A ⊂ E is relatively compact if and only if every sequence ∞ of points xn ∈ A possesses a convergent subsequence. Moreover, show that (xn )n=1 a set A ⊂ E is relatively compact if, for every ε > 0, there exists a finite ε-net for A, and that the reverse conclusion is true, if E is a complete metric space. Note, that a subset of a finite dimensional normed space is relatively compact if and only if it is bounded. Lemma 2.3.3 Let X be a Banach space and Xn ⊂ X, n = 1, 2, . . . be a sequence of finite dimensional subspaces of X. If A is a bounded subset of X and if lim sup {En (g) : g ∈ A} = 0 ,

n→∞

where En (g) = inf {g − fn  : fn ∈ Xn }, then A is relatively compact.

(2.3.1)

14

2 Basics from Linear and Nonlinear Functional Analysis

Proof There exists an R ∈ R with A ⊂ UR () := {f ∈ X : f  < R}. If ε > 0 then there is an N ∈ N such that EN (g) < ε2 for all g ∈ A. Thus, for every g ∈ A, there exists an fN,g ∈ XN such that g − fN,g  < 2ε . The set AN :=   fN,g : g ∈ A ⊂ XN is bounded, namely AN ⊂ UR+ ε2 (). Consequently, there is a finite ε2 -net AεN ⊂ XN for AN , which is an ε-net for A. It remains to refer to Exercise 2.3.2.   Lemma 2.3.4 Assume that X and Xn satisfy the conditions of Lemma 2.3.3 with ∞

Xn ⊂ Xn+1 , n ∈ N, and that Xn is dense in X. If A ⊂ X is relatively compact, n=1

then (2.3.1) is satisfied. Proof Assume that there are an ε > 0 and a sequence n1 < n2 < . . . of positive integers such that   sup Enk (g) : g ∈ A ≥ 2ε > 0 ∀ k ∈ N . Then there exist gk ∈ A with Enk (gk ) ≥ ε, where we can assume that (due to the relative compactness of A, cf. Exercise 2.3.2) gk −→ g ∗ for k −→ ∞ and for some g ∗ ∈ X. Let k be sufficiently large, such that gk − g ∗  < ε2 . Then, for fnk ∈ Xnk ,         ε ε ≤ fnk − gnk  ≤ fnk − g ∗  + g ∗ − gnk  < fnk − g ∗  + 2   implying that 2ε ≤ fnk − g ∗  for all fnk ∈ Xnk and all sufficiently large k. Hence, ∞

taking into account Xn ⊂ Xn+1 , we have ε2 ≤ f − g ∗  for all f ∈ X0 := Xn , n=1

which contradicts the assumed density of X0 in X.

 

Let E be a compact metric space and denote by C(E) the Banach space of all continuous functions f : E −→ C, where the norm is given by f ∞ = f ∞,E := max {|f (x)| : x ∈ E} . A family F of functions from C(E) is called uniformly bounded if it is a bounded subset of C(E), i.e., if there exists a positive number M ∈ R such that |f (x)| ≤ M for all x ∈ E and for all f ∈ F . A family F of continuous functions from C(E) is called equicontinuous if, for every ε > 0, there is a δ > 0 such that |f (x1 ) − f (x2 )| < ε

∀ f ∈ F, ∀ x1 , x2 ∈ E : d(x1 , x2 ) < δ .

Theorem 2.3.5 (Arzela-Ascoli) Let E be a compact metric space. A set F ⊂ C(E) is relatively compact if and only if it is a uniformly bounded and equicontinuous family.

2.3 Compact Sets and Compact Operators

15

Exercise 2.3.6 Let (E, d) be a metric space. A family F ⊂ C(E) is called (locally) equicontinuous in x0 ∈ E, if for every ε > 0 there exists a δ > 0 such that |f (x) − f (x0 )| < ε

∀ f ∈ F , ∀ x ∈ Uδ (x0 ) .

Show that, for a compact metric space (E, d), a set F ⊂ C(E) is equicontinuous if and only if it is equicontinuous in every x0 ∈ E. Exercise 2.3.7 For I = [−1, 1], I = [0, ∞], or I = [−∞, ∞], by (I, db ) we denote the respective metric spaces defined by db (x, y) = | arctan(x) − arctan(y)| π with arctan(±∞) = ± . Prove that (I, db ) is a compact metric space. 2 Exercise 2.3.8 Show that, if E is one of the (compact) metric spaces from Exercise 2.3.7, then the Banach space (C(E), .∞ ) is given by

C(I) =

⎧ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎩

C[−1, 1]

: I = [−1, 1] ,

{f ∈ C[0, ∞) : ∃ f (∞) := limx→∞ f (x)}

: I = [0, ∞] ,

{f ∈ C(−∞, ∞) : ∃ f (±∞) := limx→±∞ f (x)} : I = [−∞, ∞] ,

and f ∞ = f C(I) = sup {|f (x)| : x ∈ I}. ∞ be a sequence of functions fn : E −→ C on the Exercise 2.3.9 Let (fn )n=1 compact metric space E, which are equicontinuous, and assume that there is a continuous function f : E −→ C such that

lim fn (x) = f (x)

n→∞

for all x ∈ E .

Show that lim fn − f ∞,E = 0. n→∞

An operator T ∈ L(X, Y) is called compact, if it maps bounded subsets of X into relatively compact subsets of Y, which is equivalent tothe condition, that the image  of the unit ball of X is relatively compact, i.e., that T f : f ∈ X, f X ≤ 1 is relatively compact in Y. The subset of L(X, Y) of all compact operators we denote by K(X, Y) or, in case of X = Y, by K(X). Examples of compact operators will be considered in Sect. 5.3. We say that a normed space (X0 , .X0 ) is compactly embedded into a normed space (X, .X ), if the embedding operator E : X0 −→ X, f → f is compact. Exercise 2.3.10 Let Y be a Banach space. Show that (K(X, Y), .X→Y ) is a closed linear subspace of (L(X, Y), .X→Y ). The following lemma and the following corollary are consequences of the Banach-Steinhaus theorem and combine strong convergence with compactness of operators.

16

2 Basics from Linear and Nonlinear Functional Analysis

Lemma 2.3.11 Assume that X and Y are Banach spaces, An , A ∈ L(X, Y), An −→ A strongly, and M ⊂ X is relatively compact. Then lim sup {(An − A)xY : x ∈ M} = 0 ,

n→∞

i.e., strong convergence is uniform on relatively compact subsets. Proof Let M ⊂ X be relatively compact and assume that, for n −→ ∞, sup {(An − A)xY : x ∈ M} does not tend to zero. are an ε > 0,  Then there  nk > nk−1 ∈ N0 , n0 := 0, and xk ∈ M such that (Ank − A)xk Y ≥ 2ε, k = 1, 2, . . . We can assume, due to the relative compactness of M, that xk −→ x ∗ in X for k −→ ∞. Since, in virtue of Theorem 2.2.1, γ := sup An L(X,Y) : n ∈ N is finite, we conclude       2ε ≤ (Ank − A)xk Y ≤ (Ank − A)(xk − x ∗ )Y + (Ank − A)x ∗ Y       ≤ γ + AL(X,Y) xk − x ∗ Y +  Ank − A x ∗ Y .    Hence, the inequality ε ≤  Ank − A x ∗ Y is valid for all k ≥ k0 in contradiction to the strong convergence Ank −→ A.   Corollary 2.3.12 If X, Y, and Z are Banach spaces, T ∈ K(X, Y), An , A ∈ L(Y, Z), and if An −→ A strongly, then lim (An − A)T L(X,Z) = 0. n→∞

Proof Since by assumption the set M0 := {T x : x ∈ X, xX ≤ 1} is relatively compact, in virtue of Lemma 2.3.11 we obtain   (An − A)T L(X,Z) = sup (An − A)yZ : y ∈ M0 −→ 0 if

n −→ ∞ ,  

and the corollary is proved. ∗

Exercise 2.3.13 For Banach spaces X and Y, show that T ∈ K(X, Y) implies T ∈ L(Y∗ , X∗ ). Using this and taking into account Corollary 2.3.12, prove the following: If X, Y, Z, and W are Banach spaces, T ∈ K(X, Y), An , A ∈ L(Y, Z), and if An −→ A strongly, as well as Bn , B ∈ L(W, X) and Bn∗ −→ B ∗ strongly, then lim An T Bn − AT BL(W,Z) = 0. (For the definition of the adjoint operator B ∗ n→∞

of B, see Sect. 2.5.)

2.4 Function Spaces In this section we give an overview on the types of function spaces playing an essential role in this book. Thereby, we concentrate on typical examples of such spaces, while further particular spaces will be introduced at that places of the book, where they are needed.


2.4.1 Lp-Spaces

Here and in what follows, by a weight function u : I −→ ℝ on an interval I ⊂ ℝ we mean a nonnegative, measurable, and integrable function u(x) (measurability and integrability are meant in the Lebesgue sense) for which ∫_I u(x) dx > 0 and all moments ∫_I x^k u(x) dx, k ∈ ℕ₀, are finite. If 1 ≤ p < ∞, then, as usual, by L^p(I) we denote the Banach space of all (classes of) measurable functions f : I −→ ℂ for which the L^p-norm

‖f‖_{L^p(I)} = ( ∫_I |f(x)|^p dx )^{1/p}

is finite. In case p = ∞, by L^∞(I) the space of all (classes of) measurable and essentially bounded functions equipped with the norm

‖f‖_{L^∞(I)} = ess sup {|f(x)| : x ∈ I} := inf { sup {|f(x)| : x ∈ I \ A} : A ⊂ I, m(A) = 0 }

is meant, where by m(A) the Lebesgue measure of the set A is denoted. In case I = (a, b), we also write L^p(a, b) instead of L^p((a, b)), and if (a, b) = (−1, 1), L^p instead of L^p(−1, 1). If u is a weight function on I, then by L^p_u(I) we refer to the Banach space of all (classes of) functions f : I −→ ℂ for which fu belongs to L^p(I), with the norm ‖f‖_{L^p_u(I)} = ‖fu‖_{L^p(I)}. If u(x) = v^{α,β}(x) = (1 − x)^α (1 + x)^β with α, β > −1 is a Jacobi weight, then we use the notation L^p_{α,β} := L^p_{v^{α/p,β/p}} together with ‖f‖_{α,β,(p)} := ‖f‖_{L^p_{α,β}}. In particular, the inner product and the norm in the Hilbert space L²_{α,β} are defined by

⟨f, g⟩_{α,β} := ∫_{−1}^{1} f(x) \overline{g(x)} v^{α,β}(x) dx   and   ‖f‖_{α,β} := ‖f‖_{α,β,(2)} = ( ∫_{−1}^{1} |f(x)|² v^{α,β}(x) dx )^{1/2} ,

respectively. Moreover, for a real number s ≥ 0, we define the Sobolev-type subspace L^{2,s}_{α,β} of L²_{α,β} as

L^{2,s}_{α,β} := { f ∈ L²_{α,β} : Σ_{n=0}^{∞} (n + 1)^{2s} |⟨f, p_n^{α,β}⟩_{α,β}|² < ∞ } ,   (2.4.1)

where p_n^{α,β}(x) denotes the normalized Jacobi polynomial with respect to v^{α,β}(x) of degree n (cf. Sect. 5.1). Equipped with the inner product

⟨f, g⟩_{α,β,s} := Σ_{n=0}^{∞} (n + 1)^{2s} ⟨f, p_n^{α,β}⟩_{α,β} \overline{⟨g, p_n^{α,β}⟩_{α,β}}   (2.4.2)


and the respective norm ‖f‖_{α,β,s} := √(⟨f, f⟩_{α,β,s}), the space L^{2,s}_{α,β} is again a Hilbert space. Note that L^{2,0}_{α,β} = L²_{α,β}.

Exercise 2.4.1 Show that (L^{2,s}_{α,β}, ⟨·, ·⟩_{α,β,s}) is a Hilbert space.
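As a purely numerical illustration of (2.4.1) and (2.4.2) (not needed for the theory), the Fourier–Jacobi coefficients ⟨f, p_n^{α,β}⟩_{α,β} can be approximated by Gauss–Jacobi quadrature. The following Python sketch does this for arbitrarily chosen f, α, β, and s; it assumes SciPy's roots_jacobi and eval_jacobi and normalizes the Jacobi polynomials by the quadrature sum itself.

import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

alpha, beta, s = -0.5, 0.5, 1.0            # illustrative Jacobi exponents and smoothness index
N = 200                                     # quadrature order (exact up to polynomial degree 2N-1)
x, w = roots_jacobi(N, alpha, beta)         # nodes and weights for the weight (1-x)^alpha (1+x)^beta

def p_normalized(n, x_eval):
    # Jacobi polynomial of degree n, normalized in L^2_{alpha,beta} via the quadrature itself
    norm = np.sqrt(np.sum(w * eval_jacobi(n, alpha, beta, x) ** 2))
    return eval_jacobi(n, alpha, beta, x_eval) / norm

f = lambda t: np.abs(t) ** 1.5              # a sample function of limited smoothness

# coefficients <f, p_n^{alpha,beta}>_{alpha,beta} approximated by the quadrature sum
coeffs = np.array([np.sum(w * f(x) * p_normalized(n, x)) for n in range(60)])

# truncated versions of ||f||_{alpha,beta} and of the norm generated by (2.4.2)
norm_L2 = np.sqrt(np.sum(coeffs ** 2))
norm_s = np.sqrt(np.sum((np.arange(60) + 1.0) ** (2 * s) * coeffs ** 2))
print(norm_L2, norm_s)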

Remark 2.4.2 It is well-known that, for 1 < p < ∞ and 1/p + 1/q = 1, the map J : L^q −→ (L^p)^* with (Jg)(f) = ∫_{−1}^{1} f(x) g(x) dx is an isometric isomorphism. For this, we write (L^p)^* = L^q. Consequently, in the same way of identification we have (L^p_{v^{α,β}})^* = L^q_{v^{−α,−β}}. Moreover, if we use (J_{α,β} g)(f) = ∫_{−1}^{1} g(x) f(x) v^{α,β}(x) dx, then J_{α,β} : L^q_{α(q−1),β(q−1)} −→ (L^p_{α,β})^* is also an isometric isomorphism.

Example 2.4.3 Let −1 < α, β < 1, p > 1, and 1/p + 1/q = 1, and consider the integral operator K defined by

(Kf)(x) = ∫_{−1}^{1} K(x, y) f(y) v^{α,β}(y) dy ,   −1 < x < 1 ,

where the measurable function K : (−1, 1)² \ {(x, x) : x ∈ (−1, 1)} −→ ℂ is assumed to satisfy

|K(x, y)| ≤ c |x − y|^{−η} ,   (x, y) ∈ (−1, 1)² \ {(x, x) : x ∈ (−1, 1)} ,

for some η ∈ (0, 1/q). By Hölder's inequality we get, for f ∈ L^p_{α,β} and −1 < x < 1,

|(Kf)(x)| ≤ c ( ∫_{−1}^{1} |x − y|^{−ηq} v^{α,β}(y) dy )^{1/q} ‖f‖_{α,β,(p)} ≤ c ( v^{−α^−,−β^−}(x) )^{1/q} ‖f‖_{α,β,(p)} ,

where α^± = max {0, ±α}, β^± = max {0, ±β}, and where we took into account Lemma 5.2.10. Consequently,

‖Kf‖_{−α,−β,(q)} = ( ∫_{−1}^{1} |(Kf)(x)|^q v^{−α,−β}(x) dx )^{1/q} ≤ c ( ∫_{−1}^{1} v^{−α^+,−β^+}(x) dx )^{1/q} ‖f‖_{α,β,(p)} ,

i.e., K ∈ L(L^p_{α,β}, L^q_{−α,−β}) with ‖K‖_{L^p_{α,β} → L^q_{−α,−β}} ≤ c ‖v^{−α^+,−β^+}‖^{1/q}_{L¹(−1,1)}.

Lemma 2.4.4 The norm in L^{2,s}_{α,β} is equivalent to the norm

‖f‖_{α,β,s,∼} := ( Σ_{m=0}^{∞} (m + 1)^{2s−1} [E_m^{α,β}(f)_2]² )^{1/2} ,


where E_m^{α,β}(f)_p := inf { ‖f − P‖_{α,β,(p)} : P ∈ P_m }, with P_m being the set of polynomials of degree less than m, and E_0^{α,β}(f)_2 := ‖f‖_{α,β}. Since, in case γ ≤ α and δ ≤ β, L²_{γ,δ} is continuously embedded into L²_{α,β}, we also have the continuous embedding of L^{2,s}_{γ,δ} into L^{2,s}_{α,β} in this case.

Proof It is well known that, for f ∈ L²_{α,β},

E_m^{α,β}(f)_2 = ( Σ_{n=m}^{∞} |⟨f, p_n^{α,β}⟩_{α,β}|² )^{1/2} .

Consequently,

‖f‖²_{α,β,s,∼} = Σ_{m=0}^{∞} (m + 1)^{2s−1} Σ_{n=m}^{∞} |⟨f, p_n^{α,β}⟩_{α,β}|² = Σ_{n=0}^{∞} |⟨f, p_n^{α,β}⟩_{α,β}|² Σ_{m=0}^{n} (m + 1)^{2s−1} ,

and the relation

lim_{n→∞} (1/(n + 1)^{2s}) Σ_{m=0}^{n} (m + 1)^{2s−1} = lim_{n→∞} (1/(n + 1)) Σ_{m=0}^{n} ((m + 1)/(n + 1))^{2s−1} = ∫_0^1 x^{2s−1} dx

proves the equivalence of the two norms. ∎
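The identity E_m^{α,β}(f)_2² = Σ_{n≥m} |⟨f, p_n^{α,β}⟩_{α,β}|² used in this proof can be tested numerically. The following sketch works with a model coefficient sequence (an arbitrary choice, not taken from the book) and shows that the ratio of the two squared norms remains between fixed positive bounds, as the lemma asserts.

import numpy as np

s = 0.75
n_max = 5000
c = (np.arange(n_max) + 1.0) ** (-1.6)          # model coefficients <f, p_n^{alpha,beta}>_{alpha,beta}

# ||f||_{alpha,beta,s}^2 = sum_n (n+1)^{2s} |c_n|^2
norm_s_sq = np.sum((np.arange(n_max) + 1.0) ** (2 * s) * c ** 2)

# E_m^{alpha,beta}(f)_2^2 = sum_{n >= m} |c_n|^2 (tail sums, as in the proof)
tails_sq = np.cumsum((c ** 2)[::-1])[::-1]

# ||f||_{alpha,beta,s,~}^2 = sum_m (m+1)^{2s-1} E_m^{alpha,beta}(f)_2^2
norm_tilde_sq = np.sum((np.arange(n_max) + 1.0) ** (2 * s - 1) * tails_sq)

print(norm_tilde_sq / norm_s_sq)                 # ratio stays between fixed positive bounds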

Remark 2.4.5 The spaces L^{2,s}_{α,β}, s ≥ 0, can be considered as a Hilbert scale generated by the operator

Ef = Σ_{n=0}^{∞} (1 + n) ⟨f, p_n^{α,β}⟩_{α,β} p_n^{α,β}

with domain D(E) = L^{2,1}_{α,β} (cf. [18, Chapter III, §6.9]). Hence, these spaces have the interpolation property, i.e., if the linear operator A is bounded from L^{2,s₁}_{α₁,β₁} into L^{2,s₂}_{α₂,β₂} and from L^{2,t₁}_{α₁,β₁} into L^{2,t₂}_{α₂,β₂}, then A is also bounded from L^{2,s₁(τ)}_{α₁,β₁} into L^{2,s₂(τ)}_{α₂,β₂}, where s_j(τ) = (1 − τ)s_j + τ t_j and 0 < τ < 1 [18, Chapter III, §6.9, Theorem 6.10].

Lemma 2.4.6 (cf. [20, Lemma 4.2]) If h(x, ·) ∈ L²_{α,β} for all (or almost all) x ∈ [−1, 1] and h(·, y) ∈ L^{2,s}_{γ,δ} uniformly with respect to y ∈ [−1, 1], then the linear operator H : L²_{α,β} −→ L^{2,s}_{γ,δ} defined by

(Hf)(x) = ∫_{−1}^{1} h(x, y) f(y) v^{α,β}(y) dy ,   −1 < x < 1 ,

is bounded.


Proof For f ∈ L²_{α,β} and m ∈ ℕ₀, using Fubini's theorem and the Cauchy–Schwarz inequality we can estimate

|⟨Hf, p_m^{γ,δ}⟩_{γ,δ}|² = | ∫_{−1}^{1} ( ∫_{−1}^{1} h(x, y) f(y) v^{α,β}(y) dy ) p_m^{γ,δ}(x) v^{γ,δ}(x) dx |²
                        = | ∫_{−1}^{1} ⟨h(·, y), p_m^{γ,δ}⟩_{γ,δ} f(y) v^{α,β}(y) dy |²
                        ≤ ∫_{−1}^{1} |⟨h(·, y), p_m^{γ,δ}⟩_{γ,δ}|² v^{α,β}(y) dy ‖f‖²_{α,β} .

Hence,

‖Hf‖²_{γ,δ,s} = Σ_{m=0}^{∞} (1 + m)^{2s} |⟨Hf, p_m^{γ,δ}⟩_{γ,δ}|²
             ≤ ∫_{−1}^{1} Σ_{m=0}^{∞} (1 + m)^{2s} |⟨h(·, y), p_m^{γ,δ}⟩_{γ,δ}|² v^{α,β}(y) dy ‖f‖²_{α,β}
             = ∫_{−1}^{1} ‖h(·, y)‖²_{γ,δ,s} v^{α,β}(y) dy ‖f‖²_{α,β} ≤ c ‖f‖²_{α,β} ,

and the lemma is proved. ∎
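Operators of the form considered in Lemma 2.4.6 can be evaluated approximately by Gauss–Jacobi quadrature with respect to v^{α,β}. The following sketch (kernel h, function f, and parameters are illustrative choices) computes such an approximation; it assumes SciPy's roots_jacobi.

import numpy as np
from scipy.special import roots_jacobi

alpha, beta = 0.0, 0.0                      # illustrative exponents (alpha = beta = 0: Legendre weight)
N = 64
y, w = roots_jacobi(N, alpha, beta)         # quadrature nodes/weights for the weight v^{alpha,beta}

h = lambda x, t: np.cos(3.0 * x * t)        # a smooth sample kernel h(x, y)
f = lambda t: np.exp(t)                     # a sample input function

def Hf(x):
    # quadrature approximation of (Hf)(x) = int_{-1}^{1} h(x, y) f(y) v^{alpha,beta}(y) dy
    return np.sum(w * h(x, y) * f(y))

print([Hf(x) for x in np.linspace(-0.9, 0.9, 5)])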

Lemma 2.4.7 ([20, Conclusion 2.3, pp. 196, 197]) For 0 ≤ s < t, the space L^{2,t}_{α,β} is compactly embedded into L^{2,s}_{α,β}. If r ∈ ℕ, then f ∈ L^{2,r}_{α,β} if and only if f^{(k)} φ^k ∈ L²_{α,β} for all k = 0, …, r, where φ(x) = √(1 − x²). Moreover, the norm ‖f‖_{α,β,r} is equivalent to Σ_{k=0}^{r} ‖f^{(k)} φ^k‖_{α,β}.

A proof of the second part of the previous lemma will be given in Sect. 3.1.3 (see Corollary 3.1.30). For u = v^{γ,δ} with γ, δ > −1/p and in the general case 1 ≤ p ≤ ∞, the Sobolev-type spaces are defined by

W^{p,r}_u = { f ∈ L^p_u : f^{(r−1)} ∈ AC(−1, 1), ‖f^{(r)} φ^r u‖_p < ∞ } ,   (2.4.3)

r ∈ ℕ, where again φ(x) := √(1 − x²) and where AC(−1, 1) = AC_loc(−1, 1) denotes the set of all functions f : (−1, 1) −→ ℂ which are absolutely continuous on every closed subinterval of (−1, 1).


Remark 2.4.8 Concerning the properties of absolutely continuous functions we refer the reader to the famous book of Natanson [173, Chapter IX, §1 and §2]. In particular, we mention that, for every absolutely continuous function f : [a, b] −→ ℂ, the derivative f′(x) exists for almost all x ∈ [a, b], where f′ ∈ L¹[a, b].

The spaces defined in (2.4.3) become Banach spaces if we equip them with the norm

‖f‖_{W^{p,r}_u} = ‖fu‖_p + ‖f^{(r)} φ^r u‖_p   (2.4.4)

(cf. Exercise 2.4.10). The set { f ∈ C^{r−1}(−1, 1) : f^{(r−1)} ∈ AC(−1, 1) } is also denoted by AC^{r−1}_loc. By L^p_loc we refer to the set of all measurable functions f : (−1, 1) −→ ℂ satisfying f ∈ L^p(a, b) for all closed intervals [a, b] ⊂ (−1, 1). Note that the following lemma is also true for f ∈ C(−1, 1) and C[a, b] instead of L^p(a, b).

Lemma 2.4.9 Assume that g, f_n ∈ AC^{r−1}_loc and f ∈ L^p_loc satisfy

lim_{n→∞} ‖f_n − f‖_{L^p(a,b)} = 0   and   lim_{n→∞} ‖f_n^{(r)} − g^{(r)}‖_{L^p(a,b)} = 0

for every interval [a, b] ⊂ (−1, 1). Then f ∈ AC^{r−1}_loc and f^{(r)} = g^{(r)}.

Proof We define P_n ∈ P_r, n ∈ ℕ, by P_n^{(r)} ≡ 0 and

P_n^{(k)}(x) = ∫_0^x P_n^{(k+1)}(y) dy + f_n^{(k)}(0) − g^{(k)}(0) ,   k = r − 1, …, 0 .

This implies

f_n^{(k)}(x) − P_n^{(k)}(x) − g^{(k)}(x) = ∫_0^x [ f_n^{(k+1)}(y) − P_n^{(k+1)}(y) − g^{(k+1)}(y) ] dy

and, for every interval [a, b] ⊂ (−1, 1),

‖f_n^{(k)} − P_n^{(k)} − g^{(k)}‖_{[a,b],∞} ≤ (b − a)^{(p−1)/p} ‖f_n^{(k+1)} − P_n^{(k+1)} − g^{(k+1)}‖_{L^p(a,b)} ,   k = r − 1, …, 0 .

Hence, by induction, lim_{n→∞} ‖f_n − P_n − g‖_{∞,[a,b]} = 0 and, since f_n −→ f in L^p(a, b), additionally we have P_n −→ f − g in L^p(a, b). This yields f(x) − g(x) = P(x) for almost all x ∈ (−1, 1) with P ∈ P_r. ∎

Note that, in case u = v^{α/2,β/2}, we have W^{2,r}_u = L^{2,r}_{α,β} due to Lemma 2.4.7 and the following exercise, which can be solved with the help of Lemma 2.4.9.


Exercise 2.4.10 Let 1 ≤ p < ∞ and u = v^{γ,δ}, γ, δ > −1/p. Prove that W^{p,r}_u, equipped with the norm defined in (2.4.4) as well as with the norm

Σ_{k=0}^{r} ‖f^{(k)} φ^k u‖_p   (2.4.5)

(cf. Lemma 2.4.7), becomes a Banach space and that these two norms are equivalent on W^{p,r}_u.

As a conclusion of Exercise 2.4.10 we have the following property of a multiplication operator. For this, we define the space C^r_φ of all r times continuously differentiable functions f : (−1, 1) −→ ℂ satisfying the conditions f^{(k)} φ^k ∈ C[−1, 1] for k = 0, 1, …, r. This space, equipped with the norm ‖f‖_{C^r_φ} = Σ_{k=0}^{r} ‖f^{(k)} φ^k‖_∞, is a Banach space.
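The role of the factor φ^k in the norm (2.4.5) can be seen numerically: for f(x) = (1 − x)^{1/2} the derivatives blow up at x = 1, yet the weighted terms ‖f^{(k)} φ^k u‖_p stay finite. The following sketch evaluates these terms by a crude Riemann sum; the function f, the exponents γ, δ, and the discretization are arbitrary illustrative choices.

import numpy as np

p = 2.0
gamma, delta = 0.25, 0.25                      # exponents of the Jacobi weight u = v^{gamma,delta}
x = np.linspace(-1.0 + 1e-6, 1.0 - 1e-6, 200001)
dx = x[1] - x[0]
phi = np.sqrt(1.0 - x ** 2)
u = (1.0 - x) ** gamma * (1.0 + x) ** delta

f0 = np.sqrt(1.0 - x)                          # f(x) = (1 - x)^{1/2}; derivatives blow up at x = 1
f1 = -0.5 * (1.0 - x) ** (-0.5)                # f'
f2 = -0.25 * (1.0 - x) ** (-1.5)               # f''

for k, fk in enumerate([f0, f1, f2]):
    term = (np.sum(np.abs(fk * phi ** k * u) ** p) * dx) ** (1.0 / p)
    print(k, term)                             # every term ||f^{(k)} phi^k u||_p is finite here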



Corollary 2.4.11 Let r ≥ 0 be an integer and a ∈ C^r_φ. Then the multiplication operator aI : W^{p,r}_u −→ W^{p,r}_u, f ↦ af, belongs to L(W^{p,r}_u), where

‖aI‖_{W^{p,r}_u → W^{p,r}_u} ≤ c ‖a‖_{C^r_φ}

with a constant c ≠ c(a).

Proof For f ∈ W^{p,r}_u we obtain, in view of (2.4.4),

‖af‖_{W^{p,r}_u} ≤ Σ_{k=0}^{r} ‖(af)^{(k)} φ^k u‖_p .

Using

(af)^{(k)} φ^k = Σ_{j=0}^{k} \binom{k}{j} (a^{(j)} φ^j) (f^{(k−j)} φ^{k−j})

and taking into account Exercise 2.4.10, we get

‖af‖_{W^{p,r}_u} ≤ Σ_{k=0}^{r} Σ_{j=0}^{k} \binom{k}{j} ‖a^{(j)} φ^j‖_∞ ‖f^{(k−j)} φ^{k−j} u‖_p
               ≤ 2^{r+1} max_{0≤j≤r} ‖a^{(j)} φ^j‖_∞ max_{0≤j≤r} ‖f^{(j)} φ^j u‖_p
               ≤ c ‖a‖_{C^r_φ} ‖f‖_{W^{p,r}_u} ,

and the corollary is proved. ∎


Corollary 2.4.12 Since we have W^{2,r}_u = L^{2,r}_{γ,δ} for r ∈ ℕ₀ and u = v^{γ/2,δ/2}, we can apply Corollary 2.4.11 together with Remark 2.4.5. Hence, for a ∈ C^r_φ, the multiplication operator aI : L^{2,s}_{γ,δ} −→ L^{2,s}_{γ,δ} is continuous for 0 ≤ s ≤ r.

The Besov-type space B^{p,r}_{q,u} is based on the kth weighted modulus of continuity of a function f ∈ L^p_u, given by

Ω^k_φ(f, t)_{u,p} = sup_{0<h≤t} ‖ (Δ^k_{hφ} f) u ‖_p […]

By c₀⁺ we denote the set of all sequences ξ = (ξ_n)_{n=0}^{∞} with ξ_n > 0 for all n ∈ ℕ₀ and lim_{n→∞} ξ_n = 0. Moreover, by ℓ∞ we refer to the set of all bounded sequences (ξ_n)_{n=0}^{∞} of complex numbers.

Proposition 2.4.28 Let ξ = (ξ_n)_{n=0}^{∞} and η = (η_n)_{n=0}^{∞} be sequences of positive real numbers and let γ, δ ≥ 0. Then

(a) C^ξ_{γ,δ} is a Banach space,
(b) C^ξ_{γ,δ} is compactly embedded into C_{γ,δ} if ξ ∈ c₀⁺,
(c) C^ξ_{γ,δ} is compactly embedded into C^η_{γ,δ} if lim_{n→∞} ξ_n/η_n = 0,
(d) if (ξ_n/η_n)_{n=0}^{∞} ∈ ℓ∞, the embedding C^ξ_{γ,δ} ⊂ C^η_{γ,δ} is continuous,
(e) if γ ≤ ρ and δ ≤ τ, the embedding C^ξ_{γ,δ} ⊂ C^ξ_{ρ,τ} is continuous.

Proof
(a) Let (f_m)_{m=1}^{∞} be a Cauchy sequence in C^ξ_{γ,δ}. Since C_{γ,δ} is a Banach space, there exists a function f ∈ C_{γ,δ} with lim_{m→∞} ‖f_m − f‖_{γ,δ,∞} = 0. Let M := sup { ‖f_m‖_{γ,δ,ξ} : m ∈ ℕ } and choose, for every n ∈ ℕ₀, a number m_n ∈ ℕ such that ‖f_{m_n} − f‖_{γ,δ,∞} ≤ ξ_n. Then

E_n^{γ,δ}(f) ≤ E_n^{γ,δ}(f − f_{m_n}) + E_n^{γ,δ}(f_{m_n}) ≤ (1 + M) ξ_n .

Hence, the function f belongs to Cγ ,δ . Moreover, for every ε > 0, there exists an m0 ∈ N such that fk − fm γ ,δ,ξ ≤ ε  for all k,m ≥ m0 . For n ∈ N0 ,  choose a natural number kn ≥ m0 satisfying f − fkn γ ,δ,∞ ≤ ε ξn . Then, for all m ≥ m0 , γ ,δ

E_n^{γ,δ}(f − f_m) ≤ E_n^{γ,δ}(f − f_{k_n}) + E_n^{γ,δ}(f_{k_n} − f_m) ≤ 2 ε ξ_n ,   n ∈ ℕ ,

ξ

which shows that fm converges to f in the norm of Cγ ,δ . ξ

(b) Since ξn −→ 0, we have Cγ ,δ ⊂ Cγ ,δ (cf. Exercise 2.4.23). Let M be a bounded f

ξ subset of Cγ ,δ , say f γ ,δ,ξ ≤ c0 < ∞ for all f ∈ M. Moreover, let Pn ∈  f γ ,δ Pn be a polynomial with f − Pn γ ,δ,∞ = En (f ) (cf. Exercise 2.4.24). It follows f γ ,δ,∞ ≤ c1 < ∞ and, for all n ∈ N and f ∈ M,

 f Pn 

γ ,δ,∞

 f  ≤ Pn − f γ ,δ,∞ + f γ ,δ,∞ ≤ c0 ξn + c1 ≤ c2 < ∞ ,


which implies ‖P_n^f‖_∞ ≤ c₃ n^{2 max{γ,δ}}, since, for each polynomial p ∈ P_n, we have the Remez inequality (cf. Proposition 3.2.21)

‖p‖_∞ ≤ c sup { |p(x)| : |x| ≤ 1 − 1/(2n²) } ,   c ≠ c(n, p) .   (2.4.13)

In view of the Arzelà–Ascoli theorem (cf. Theorem 2.3.5), it remains to show that the function family { f v^{γ,δ} : f ∈ M } is equicontinuous. For this, define

γ₀ = { min{γ, 1} : γ > 0 ;  1 : γ = 0 }   and   δ₀ = { min{δ, 1} : δ > 0 ;  1 : δ = 0 } .   (2.4.14)

Then v^{γ,δ} ∈ C^{0,λ} for λ = min{γ₀, δ₀}. For ε > 0, we can choose n₀ ∈ ℕ such that c₀ ξ_{n₀} < ε/4, and set

η := ( ε / ( 6 c₃ n₀^{2(1+max{γ,δ})} ‖v^{γ,δ}‖_{C^{0,λ}} ) )^{1/λ} .

For f ∈ M, x1 , x2 ∈ [−1, 1], and |x1 − x2 | < η, we get   |f (x1 )vγ ,δ (x1 ) − f (x2 )vγ ,δ (x2 )          f   f  f f ≤ f (x1 ) − pn0 (x1 ) vγ ,δ (x1 ) + pn0 (x1 )vγ ,δ (x1 ) − pn0 (x2 )vγ ,δ (x2 ) + pn0 (x2 ) − f (x2 ) vγ ,δ (x2 )  f    γ ,δ ≤ 2En0 (f ) + pn0 C0,λ vγ ,δ C0,λ |x1 − x2 |λ  f    ≤ 2 c0 ξn0 + 3 n20 pn0 ∞ vγ ,δ C0,λ ηλ ≤ ε ,

where we took into account that, for a polynomial p_m ∈ P_m, due to Markov's inequality

‖p_m′‖_∞ ≤ (m − 1)² ‖p_m‖_∞ ≤ m² ‖p_m‖_∞ ,   (2.4.15)

we have

‖p_m‖_{C^{0,λ}} ≤ ‖p_m‖_∞ + 2^{1−λ} ‖p_m′‖_∞ ≤ (1 + 2^{1−λ} m²) ‖p_m‖_∞ ≤ 3 m² ‖p_m‖_∞ .
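Markov's inequality (2.4.15) can be checked numerically on a grid; the Chebyshev polynomials, for which ‖T_m′‖_∞ = m² ‖T_m‖_∞ on [−1, 1], are an extremal case. The following sketch (an illustration only) uses NumPy's Chebyshev class.

import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(-1.0, 1.0, 20001)
for m in range(1, 8):
    T = C.Chebyshev.basis(m)                 # Chebyshev polynomial T_m (an extremal case)
    lhs = np.max(np.abs(T.deriv()(xs)))      # ||p_m'||_inf on [-1, 1]
    rhs = m ** 2 * np.max(np.abs(T(xs)))     # m^2 ||p_m||_inf
    print(m, lhs, rhs, lhs <= rhs + 1e-9)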

(c) Let M be a bounded subset of Cγ ,δ as in the proof of (b), from which we also conclude that every sequence of elements from M has a subsequence


(f_m)_{m=1}^{∞} convergent in C_{γ,δ}, i.e., there exists a function f ∈ C_{γ,δ} with lim_{m→∞} ‖f_m − f‖_{γ,δ,∞} = 0. For each n ∈ ℕ, choose m_n ∈ ℕ such that m_n > m_{n−1} (m₀ := 0) and ‖f_{m_n} − f‖_{γ,δ,∞} ≤ ξ_n, and let p_n^* be a polynomial from P_n with ‖p_n^* − f_{m_n}‖_{γ,δ,∞} = E_n^{γ,δ}(f_{m_n}). Then

  ∗ p − f  n

γ ,δ,∞

    ≤ pn∗ − fmn γ ,δ,∞ + fmn − f γ ,δ,∞ ≤ (c0 + 1)ξn , ξ

which means that f belongs to Cγ ,δ . For ε > 0, choose an n0 ∈ N, such that   ξn ε  for all n > n0 , and an n1 ∈ N, such that fm − f  <  < 2 f γ ,δ,ξ +c0   ε min 1,η0 ,...,ηn0 for 2 ηn

n

γ ,δ,∞

all n > n1 . Then, for all n > n1 ,

  εηn γ ,δ En (f − fmn ) ≤ f − fmn γ ,δ,∞ < , 2

n = 0, 1, 2, . . . , n0 ,

and   εηn γ ,δ En (f − fmn ) ≤ f γ ,δ,ξ + c0 ξn < , n = n0 + 1, n0 + 2, . . . 2   Consequently, f − fmn γ ,δ,η < ε for all n > n1 . (d) Since ξn ≤ c0 ηn , n ∈ N0 with some real constant c0 , the claimed embedding ξ η follows immediately from the definition of the norms in Cγ ,δ and Cγ ,δ . (e) The assertion is a conclusion of the continuous embedding Cγ ,δ ⊂ Cρ,τ .   For a√function f : (−1, 1) −→ C, for r ∈ N, h, t > 0, γ , δ ≥ 0, and for ϕ(x) = 1 − x 2 , define the symmetric difference of order r (cf. (2.4.7))    r     r  r k r − k h ϕ(x) , hϕ f (x) := (−1) f x+ 2 k

(2.4.16)

k=0

−1 + 2r 2 h2 ≤ x ≤ 1 − 2r 2 h2 , and the main part of the weighted modulus of continuity rϕ (f, t)γ ,δ,∞ by (cf. (2.4.6)) ⎧ ⎪ ⎪ ⎨ sup

0 0 , c = c(t, f ) .

0≤n 0 and ξn = (n + 1)−ρ G(n) with a function G : [0, ∞) −→ R satisfying (G1) (G2) (G3) (G4)

G(x) ≥ g0 for all x ∈ [0, ∞) with a constant g0 > 0, for some x0 ≥ 1, G(x1 ) ≤ G(x2 ) if x0 ≤ x1 < x2 , x −ρ G(x) ≤ c0 x −τ for all x ≥ x0 and some constants c0 , τ > 0, G(nm) ≤ c1 G(n)G(m) for all m, n ∈ N0 and some constant c1 > 0.

Moreover, let f : (−1, 1) −→ C be continuous. ξ

(a) If f ∈ Cγ ,δ then, for all t ∈ (0, 1], rϕ (f, t)γ ,δ,∞ ≤ c f γ ,δ,ξ

: r >ρ, t ρ G(t −1 ) ! " ρ −1 −1 t G(t ) ln(t ) + 1 : r = ρ ,



where c = c(t, f ). (b) If rϕ (f, t)γ ,δ,∞ ≤ Mf t ρ G(t −1 ) for all t ∈ (0, 1] with a constant Mf , then   ξ f ∈ Cγ ,δ and f γ ,δ,ξ ≤ c Mf + f γ ,δ,∞ with c = c(f ). ξ

Proof Let f ∈ Cγ ,δ , choose ε ∈ (0, min {1, r − ρ}) if r > ρ, and set ε = 0 if r = ρ. Then, for 0 < t ≤ 1, Proposition 2.4.29 implies rϕ (f, t)γ ,δ,∞ ≤ c t r



γ ,δ

(n + 1)r−1 En (f )

0≤n 0 we choose m ∈ N such that m 

k→∞

∞ 

|ξn |q EnX (f )q
0, then there are fn ∈ Xn such that EnX (f ) = fn − f X and there is an m ∈ N with |ξn | EnX (f ) < ε2 for all n > m. Moreover, due to sup {|ξn | : n ∈ N0 } = ∞, we have lim EnX (f ) = 0. Hence, there is also a n→∞

k > m such that max {|ξn | : n = 0, . . . , m} EkX (f ) < ε2 . This all together yields,

2.4 Function Spaces

37

X using EnX (fk − f ) = Emax{n,k} (f ),

  fk − f X,ξ,∞ = sup |ξn | EnX (fk − f ) : n ∈ N0     = max |ξn |EkX (f ) : n = 0, . . . , m + max |ξn | EkX (f ) : n = m + 1, . . . , k   + sup |ξn | EnX (f ) : n = k + 1, k + 2, . . .   ≤ max {|ξn | : n = 0, . . . , m} EkX (f ) + sup |ξn | EnX (f ) : n = m + 1, m + 2, . . . < ε.

The proposition is proved.   Let X and Xn be as above and let Y be a further Banach space. We consider an in general unbounded linear operator A : D(A) ⊂ X −→ Y, the domain of which + is D(A) = n∈N Xn and we assume that its unboundedness can be estimated in ∞ the sense that there exist a positive constant γ and a sequence (γn )n=1 of positive numbers such that Afn Y ≤ γ γn fn X

∀ fn ∈ Xn , n ∈ N .

(2.4.20)

  We agree that, in case sup AXn →Y : n ∈ N = ∞, we have γ0 := 0 < γ1 = 1 < γ2 < γ3 < . . . and γn+1 ≤ δ γn for all n ∈ N with a positive constant δ = δ(n), and that, in case   ξ sup AXn →Y : n ∈ N < ∞, we set γn = 1 for all n ∈ N. Moreover, define Yq ξ in the same way as Xq with Yn := A(Xn ) instead of Xn . ξ

Proposition 2.4.36 There exists a uniquely determined extension A ∈ L(X1 , Y),

X ξ where ξn = γn+1 − γn and, in case ξn = 0 for all n ∈ N, X1 = X0 := Xn . n∈N

due to Exercise 2.1.1. In the Proof In case γn = 1 ∀ n ∈ N,the assertion is obvious  other case, define n(j ) = max n ∈ N : γn ≤ δ j , j ∈ N. Note that δ > 1 and that δ j −1 < γn(j ) ≤ δ j ,

j ∈ N,

(2.4.21)

since γn(j ) ≤ δ j −1 implies γn(j )+1 ≤ δγn(j ) ≤ δ j in contradiction to the definition

of n(j ). It follows 1 =: n(0) < n(1) < n(2) < . . . Let f ∈ X := Xn , i.e., n∈N

38

2 Basics from Linear and Nonlinear Functional Analysis

f ∈ Xn0 for some n0 ∈ N, and choose fn ∈ Xn , n ∈ N0 , such that fn = f for all n ≥ n0 and f − fn X = EnX (f ) Then Af − Afn(1) =

∞ 

for n < n0 .

(2.4.22)

  A fn(j +1) − fn(j ) and

j =1

      fn(j +1) − fn(j )  ≤ fn(j +1) − f  + f − fn(j )  ≤ 2E X (f ) . n(j ) X X X With n(−1) := 0 and in view of γn(j +1) ≤ δ j +1 =

 " δ 3  j −1 δ3 ! δ − δ j −2 ≤ γn(j ) − γn(j −2) δ−1 δ−1

(2.4.23)

we get ∞ ∞          Af − Afn(1)  ≤ A fn(j +1) − fn(j )  ≤ γ γn(j +1) fn(j +1) − fn(j ) X Y Y j =1

≤ 2γ

j =1

∞ 

X γn(j +1) En(j ) (f )

∞ (2.4.23) 2γ δ 3 



δ−1

j =1



∞ 2γ δ 3  δ−1

n(j )−1 

j =1

EnX (f )ξn = cf X,ξ,1 ,

X En(j ) (f )

n(j )−1 

(γn+1 − γn )

n=n(j −2)

c = c(n, f ) .

j =1 n=n(j −2)

Hence, for all f ∈ X,       (2.4.21) Af Y ≤ Af − Afn(1) Y +Afn(1) Y ≤ cf X,ξ,1 +γ γn(1) fn(1) X ≤ cf X,ξ,1

with c = c(n, f ). Due to Exercise 2.1.1, it remains to show that X is dense in X1 . But, this is a consequence of Proposition 2.4.35 and ξ

ξ 1 ≥

m 

(γn+1 − γn ) = γm+1 −→ ∞

for m −→ ∞

n=0

 

by assumption.

Remark 2.4.37 ([116, Theorem 2.2]) Under the assumptions of Proposition 2.4.36,  (δn ) γn−1 δn   (γn+1 −γn ) , Y) belongs to L Xq , Yq the extended operator A ∈ L(X1 , where ∞ is an arbitrary sequence of positive numbers satisfying (δn )n=0 ∞  n=0

q

δn = ∞

as well as

 n  γn  γm−1 δm

         ∼n (δm )nm=1 q ∼n (δm )n+1 m=1 

m=1 q

q

2.4 Function Spaces

39

    −1 1+ε ∞ n   and where it is assumed that the sequence γn (δm )m=1 q

is non-

n=n0

increasing for some ε > 0 and n0 ∈ N.

Proposition 2.4.38 Let δn > 0 with lim δn = ∞ such that δn+1 ≤ c δn and the n→∞ ∞  sequence δn−1 γn1+ε n=n is non-increasing for some constants c > 0, ε > 0, and 0

(δ )

n0 ∈ N, and with γn from (2.4.20). Then the space X∞n is continuously embedded γ −γ (δ ) into X1n+1 n and hence, due to Proposition 2.4.36, A is well defined on X∞n . (δn ) Moreover, for all f ∈ X∞ , fn ∈ Xn , n ∈ N,   A(f − fn )Y ≤ c γn f − fn X + δn−1 f X,(δn ),∞ .

(2.4.24)

In particular, by choosing fn ∈ Xn such that f − fn X = EnX (f ), we see that  −1  (δn )  γ δn A ∈ L X∞ , Y∞n . Proof In case of γn = 1 for all n ∈ N, we have A(f − fn ))Y ≤ cf − fn X , (δ ) since A ∈ L(X0 , Y) and, due to lim δn = ∞, X∞n ⊂ X0 . Thus, we can assume n→∞

that γn < γn+1 for all n ∈ N0 (see the agreement after (2.4.20)). γ −γ (δ ) Let f ∈ X∞n ∩ X1n+1 n . With the help of Proposition 2.4.36 we get A(f − fn )Y ≤ c

∞ 

X (γm+1 − γm )Em (f − fn )

m=0

≤c

n−1 

(γm+1 − γm )f − fn X + c

∞ 

X (γm+1 − γm )Em (f )

m=n

m=0

≤ c γn f − fn X + cf X,(δn ),∞

∞  γm+1 − γm . δm m=n

(2.4.25) γ 1+ε γ 1+ε By assumption ln n+1 ≤ ln n for all n ≥ n0 , which is equivalent to δn+1 δn   γn+1 1 1 δn+1 ln 1+ ≤ ln . Consequently, ε γn ε δn ln

γn+1 1 δn+1 γn ≤ ln , γn ε δn γn+1

n ≥ n0 ,

40

2 Basics from Linear and Nonlinear Functional Analysis

and, by the mean value theorem, ln

γn+1 − γn γn+1 − γn γn+1 = ln γn+1 − ln γn = . ∼n γn ξ γn

Analogously, ln

δn γn+1 δn+1 γn δn+1 γn − δn γn+1 ∼n =1− . δn γn+1 δn+1 γn δn+1 γn

Thus, γn+1 − γn ≤ cγn ln

By taking into account

  γn+1 c δn+1 γn δn γn+1 ≤ γn ln ≤ c γn − γn ε δn γn+1 δn+1

∀ n ≥ n0 .

γn γ 1+ε c = n ε ≤ ε −→ 0 if n −→ ∞, we conclude δn δn γn γn

 ∞ ∞    γm+1 − γm γm+1 γm γn ≤c − , =c δm δm δm+1 δn m=n m=n

n ≥ n0 .  

This, together with (2.4.25), proves (2.4.24). Now, additionally to (2.4.20), assume that the operator A : ,: is invertible and its inverse A



n=0

Yn −→





n=0

Xn −→



Yn

n=0

Xn also satisfies (2.4.20), i.e.,

n=0

  A , ≤ γ γn , Yn −→X

n ∈ N.

(2.4.26)

Here, the same γn as in (2.4.20) are taken for the sake of simplicity. ∞  Corollary 2.4.39 If δn+1 ≤ c δn and if the sequence δn−1 γn2+ε n=n is non0 (δ ) , increasing for some ε > 0 and some n0 ∈ N, then AAf = f for all f ∈ X∞n and (δ ) n , = g for all g ∈ Y∞ . AAg Proof In case γn = 1 for all n ∈ N, the operator A ∈ L(X0 , Y0 ) is invertible with , as its inverse. Hence, let lim γn = ∞. First, we apply Proposition 2.4.38 and A n→∞  (δn ) γn−1 δn    ∞ get A ∈ L X∞ , Y∞ . Since the sequence δn−1 γn2+ε n=n is non-increasing, 0  , γn−1 δn , Y, and X we can apply Proposition 2.4.38 a second time, namely for A, instead of A, (δn ), X, and Y, respectively. This yields  −2   (δ )   γn δn  n) n , ∈ L X(δ ⊂ L X∞ , X ,X . AA ∞ ∞

(2.4.27)

2.4 Function Spaces

41 −ε

Now let us consider the sequence (εn ) with εn = γn 2 δn . Since (γn ) is increasing  2+ ε  and δn+1 ≤ c δn , we have εn+1 ≤ c εn . Moreover, the sequence εn−1 γn 2 =  −1 2+ε  δn γn is non-increasing. Hence, we can apply (2.4.27) also to the sequence  n)  δn 2+ ε2 , ∈ L X(ε (εn ) instead of (δn ), which yields AA ∞ , X . Of course εn = 2+ε γn γn tends to infinity, which implies, due to Proposition 2.4.35(c), that the closure of ∞

(ε ) Xn in X∞n is equal to the set of all f ∈ X for which lim εn EnX (f ) = 0 is n→∞

n=1

, satisfied. Since this condition is fulfilled for all f ∈ and since AAf = f for ∞

(δ ) , Xn , it follows AAf = f for all f ∈ X∞n . The second assertion of the all f ∈ (δ ) X∞n

n=1

corollary can be proved analogously.

 

Let us speak about a consequence of the last corollary. For this, we anticipate certain considerations in Sects. 2.6 and 7.1.2. If one is interested in the numerical solution of a given operator equation, one is often in the following situation. The operator equation can be written in the form (cf. (2.6.12)) (A − T )f = g ,

(2.4.28)

where the linear operator A is relatively simple to handle for a numerical approach. For example, a formula of its inverse is known or one can compute exactly the images of this operator on the ansatz functions, which serve as approximations for the solution of (2.4.28). Now, if one has the problem that the linear operator A is unbounded in a pair of Banach spaces (X, Y), which, at first glance, seems to be appropriate for dealing with Eq. (2.4.28) and its numerical solution, the Corollary 2.4.39 suggests the following: Instead of (2.4.28) one should study the , and its approximate solution in X0 and suitable , )f = Ag equation (I − AT approximation spaces. More details about such a general approach can be found in [116], where the assumptions, made on the operator T and its approximations Tn , are relatively weak. In the present book we will not come back to this general situation. Instead of, in Sect. 7.1.2 we will use approximation spaces based on the Banach spaces Cγ ,δ (cf. Sect. 2.4.2) to study operator equations of the form (2.4.28) and its numerical solution under slightly stronger conditions and in the concrete situation, where A and T are certain Cauchy singular integral operators and Fredholm integral operators, respectively. Note that the Cauchy singular integral operator (see (5.2.1)) is unbounded in the spaces Cγ ,δ under consideration (see (5.2.31)). Nevertheless, the present section and the more detailed investigations in [116] give an idea how to deal with such a situation as described above.

42

2 Basics from Linear and Nonlinear Functional Analysis

2.5 Fredholm Operators Let X and Y be Banach spaces. The adjoint operator A∗ : Y∗ −→ X∗ of an operator A ∈ L(X, Y) is defined by the equality (A∗ g)(x) = g(Ax)

∀ x ∈ X , ∀ g ∈ Y∗ .

(2.5.1)

It is well-known that A∗ ∈ L(Y∗ , X∗ ) with A∗ Y∗ →X∗ = AX→Y and that with T ∈ K(X, Y) we also have T ∗ ∈ K(Y∗ , X∗ ) (see also Exercise 2.3.13). An operator A ∈ L(X, Y) is called Fredholm operator, if its image R(A) = {Ax : x ∈ X} is closed in Y and if the nullspaces N(A) = NX (A) := {x ∈ X : Ax = 0} and N(A∗ ) = NY∗ (A∗ ) := {g ∈ Y∗ : A∗ g = 0} of A and A∗ , respectively, are of finite dimension. The set of Fredholm operators from L(X, Y) is denoted by F (X, Y), in case X = Y by F (X). For A ∈ F (X, Y), the number ind(A) := dim N(A) − dim N(A∗ ) is called the Fredholm index of the operator A. Proposition 2.5.1 (See [164, Chapter I, Theorems 3.3, 3.4, 3.5]) If A ∈ F (X, Y) and B ∈ F (Y, Z), then (a) BA ∈ F (X, Z) and ind(BA) = ind(A) + ind(B), (b) A + T ∈ F (X, Y) and ind(A + T ) = ind(A) for all T ∈ K(X, Y), (c) there exists an ε > 0 such that A + E ∈ F (X, Y) and ind(A + E) = ind(A) for all E ∈ L(X, Y) with EX→Y < ε. Hence, by assertion (c) of this proposition, the set F (X, Y) is an open subset of L(X, Y) and the map F (X, Y) −→ Z, A → ind(A) is continuous. For subsets M ⊂ X and N ⊂ X∗ , define   M⊥ := f ∈ X∗ : f (x) = 0 ∀ x ∈ M

and



N := {x ∈ X : f (x) = 0 ∀ f ∈ N} .

  ⊥ then, M = M⊥ and, for a linear and closed subspace X0 ⊂ X, ⊥ X⊥ 0 = X0 . ⊥ ∗ Moreover, for every A ∈ L(X, Y), we have R(A) = N(A ). Thus, in case R(A) is closed, the relation R(A) = ⊥ N(A∗ ) holds true, and we get the following consequence. Corollary 2.5.2 If the operator A ∈ L(X, Y) is invertible, the operator T ∈ L(X, Y) is compact, and dim N(A + T ) = 0, then A + T : X −→ Y is invertible. Exercise 2.5.3 Prove the above mentioned relations, namely ⊥

(a) M =M⊥ for every subset M ⊂ X, (b) ⊥ X⊥ 0 = X0 for every linear and closed subspace X0 ⊂ X, and (c) R(A)⊥ = N(A∗ ) for every A ∈ L(X, Y).

2.6 Stability of Operator Sequences

43

2.6 Stability of Operator Sequences There exist different concepts for studying the applicability of certain approximation methods to operator equations. We restrict here to such ones applicable in situations of interest in this book. Let X and Y be Banach spaces and Xn ⊂ X and Yn ⊂ Y be sequences of closed linear subspaces, where Xn = R(Pn ) = {Pn f : f ∈ X} and Yn = R(Qn ) = {Qn f : f ∈ Y} are the images of bounded linear projections Pn : X −→ X and Qn : Y −→ Y strongly converging to the identity operators IX : X −→ X and IY : Y −→ Y, respectively. These subspaces are considered as normed spaces (Xn , .X ) and (Yn , .Y ) with the induced norms. Moreover, consider a linear and bounded operator A : X −→ Y and the respective operator equation Af = g

(2.6.1)

for a given g ∈ Y, where f ∈ X is searched for. Now an approximation method for Eq. (2.6.1) can be described by a sequence of linear operators An : Xn −→ Yn and a sequence of elements gn ∈ Yn as well as by looking for fn ∈ Xn solving An fn = gn .

(2.6.2)

∞ is called stable if Eq. (2.6.2) is uniquely The operator sequence (An ) = (An )n=1 solvable for all n ≥ n0 (i.e., the inverse operator A−1 n : Yn −→ Xn exists for n ≥ n0 ) and if

#    sup A−1 n 

Yn →Xn

$ : n ≥ n0 < ∞ .

(2.6.3)

Here, the following two situations (C1) and (C2) are of interest. (C1) We have dim Xn = dim Yn = n ∈ N. (C2) We have Pn = IX and Qn = IY , i.e., Xn = X, Yn = Y, for all n ∈ N. Corollary 2.6.1 If (An ) is stable with An Pn −→ A strongly, f ∗ ∈ X is a solution of (2.6.1), and lim gn − gY = 0, then the solution fn∗ ∈ Xn of (2.6.2) converges n→∞

in X to f ∗ , where

  ∗       f − f ∗  ≤ c gn − gY + Af ∗ − An Pn f ∗  + Pn f ∗ − f ∗  n X Y X (2.6.4) with a constant c = c(n, g). In particular, the solution f ∗ ∈ X of (2.6.1) is unique, −1 strongly. A−1 : Y −→ X exists, and A−1 n Qn −→ A

44

2 Basics from Linear and Nonlinear Functional Analysis

Proof Using ∗ ∗ ∗ −1 ∗ ∗ ∗ ∗ fn∗ −f ∗ = A−1 n (gn −An Pn f )+Pn f −f = An (gn −g+Af −An Pn f )+Pn f −f ,

 

we get (2.6.4) by taking into account (2.6.3).

In the following we consider a sequence (An + Kn ) as a perturbation of a stable sequence of operators An : Xn −→ Yn by another sequence of operators Kn : Xn −→ Yn and check conditions on (Kn ), which guarantee that (An + Kn ) is also stable. First, we remember the following well-known lemma. Lemma 2.6.2 If E : X −→ X is a linear and bounded operator on the Banach space X with EX→X < 1, then I − E : X −→ X is invertible with   1   ≤ . (I − E)−1  X→X 1 − EX→X Proof Since X is a Banach space, also L(X) is a Banach space. Hence, because of ∞ ∞   EnX→X , the series the convergence of E n converges in L(X). Due to n=0

(I − E)

n=0 n 

Ek =

k=0

in operator norm, we have

n 

E k (I − E) = I − E n+1 −→ I

k=0

∞ 

E n = (I − E)−1 as well as

n=0

    (I − E)−1 

L(X)



∞ 

EnL(X) =

n=0

1 , 1 − EL(X)  

and the lemma is proved.

Corollary 2.6.3 If A ∈ L(X, Y) is invertible and if B ∈ L(X, Y) satisfies 1 B <  −1  , then A + B is invertible, where A      (A + B)−1  ≤

 −1  A    1 − A−1 B

 

 

and (A + B)−1 − A−1  ≤

 −1 2 A  B   . 1 − A−1 B

In other words, the set GL(X, Y) is an open subset of L(X, Y) and the map GL(X, Y) −→ GL(X, Y), A → A−1 is continuous.

2.6 Stability of Operator Sequences

45

Proof Write A + B = A(I + E) with E = A−1 B. Then E < 1, and Lemma 2.6.2 is applicable. This results in           (A + B)−1  ≤ (I + E)−1 A−1  ≤

 −1  A    1 − A−1 B

and         (A + B)−1 − A−1  = A−1 (−B)(A + B)−1  ≤

 −1 2 A  B   , 1 − A−1 B  

which completes the proof.

Lemma 2.6.4 If (An ) is a stable sequence and lim Cn Xn →Yn = 0, then n→∞ (An + Cn ) is also a stable sequence. Of course, if An Pn −→ A strongly, then also (An + Cn )Pn −→ A strongly.   Proof Write An + Cn = An IXn + A−1 n Cn , where IXn is the identity operator in Xn . From (2.6.3) and Cn Xn →Yn  −→ 0 we get the existence of numbers n1 ∈ N  C and q ∈ (0, 1) satisfying A−1 n n Xn →Xn ≤ q for all n ≥ n1 . Consequently, by −1 Lemma 2.6.2, IXn + An Cn : Xn → Xn is invertible for all n ≥ n1 with  −1     IX + A−1 Cn  n n  

Xn →Xn



1 , 1−q

Now the assertion follows using (An + Cn )−1 = applying (2.6.3) together with (2.6.5).

n ≥ n1 .

(2.6.5)

−1 −1  IXn + A−1 An and n Cn  

Corollary 2.6.5 The proof of the previous lemma shows, that for the stability of (An + Cn ) it is sufficient to have the stability of (An ) together with the estimate  A−1 Cn  ≤ q < 1 for all sufficiently large n. n Xn →Xn Proposition 2.6.6 Let B = A + E + T : X −→ X be an operator with  dim N(B) = 0, A ∈ GL(X, Y), T ∈ K(X, Y), E ∈ L(X, Y), A−1 E L(X) < 1, and let Bn = An + En + Tn ∈ L(Xn ) with a stable sequence (An ), An Pn −→ A strongly, En Pn −→ E strongly,    −1  An En 

L(Xn )

≤ q < 1 ∀ n ≥ n0 ,

and

lim Tn − T Xn →Y = 0 ,

n→∞

(2.6.6) then (Bn ) is stable with Bn Pn −→ B strongly. Proof Note that, due to our assumptions and due to Corollary 2.6.1, we have −1 strongly and, consequently, A−1 E P −→ A−1 E strongly. A−1 n Qn −→ A n n n Applying Lemma 2.6.2 and Corollary 2.6.5 together with Corollary 2.6.1, we get

46

2 Basics from Linear and Nonlinear Functional Analysis

−1 −1 −1 the strong convergence (IXn + A−1 n En ) Pn −→ (I + A E) , which implies −1 −1 (An + En ) Qn −→ (A + E) strongly and, due to Corollary 2.3.12,

    lim (An + En )−1 Qn T − (A + E)−1 T 

L(X)

n→∞

= 0.

Let xn ∈ Xn be arbitrarily chosen. Now we use the invertibility of A + E + T : X −→ Y (which follows from dim N(B) = 0, the invertibility of A + E : X −→ X, and Corollary 2.5.2) as well as the stability of the sequence (An + En ) and get     xn  ≤ c(A + E + T )xn  ≤ c I + (A + E )−1 T xn          ≤ c  IXn + (An + En )−1 Qn T xn  +  (A + E )−1 T − (An + En )−1 Qn T xn          ≤ c  IXn + (An + En )−1 Tn xn  + (An + En )−1 

L(X)

 Tn − Qn T L(X) xn 

    + (A + E )−1 T − (An + En )−1 Qn T 

L(X)

xn 



≤ c(An + En + Tn )xn  + εn xn 

(2.6.7) with a positive constant c and positive numbers εn tending to zero. Hence, there is an n0 ∈ N such that 12 xn  ≤ c(An + En + Tn )xn  for all n ≥ n0 and for all xn ∈ Xn . In the situation (C1), this implies the invertibility of An + En + Tn : Xn −→ Yn and     (An + En + Tn )−1 

L(Xn ,Yn )

≤ 2c ,

n ≥ n0 .

In case (C2), we can consider the invertible operator B 0 = I + (A + E)−1 T : X −→ X together with Bn0 = I + (An + En )−1 Tn : X −→ X, where    0  Bn − B 0 

L(X)

≤ εn −→ 0

with εn from (2.6.7). From Corollary 2.6.3 we infer the invertibility of Bn0 : X −→ X with  −1     B   0 −1   ≤ 2  B Bn  ≤  1 − B −1 εn and thus the invertibility of Bn : X −→ Y for all sufficiently large n. Finally, the strong convergence Bn Pn −→ B follows from the strong convergences of An Pn , En Pn , and T Pn , as well as the second relation in (2.6.6).  

2.6 Stability of Operator Sequences

47

For the case A = I, we get the following corollary. Corollary 2.6.7 Let B = I +E +T : X −→ X be an operator with dim N(B) = 0, T ∈ K(X), E ∈ L(X), and EL(X) < 1, and let Bn = IXn + En + Tn ∈ L(Xn ) with En Pn −→ E strongly, En L(Xn ) ≤ q < 1 ∀ n ≥ n0 ,

and

lim Tn − T Xn →X = 0 ,

n→∞

(2.6.8)

then (Bn ) is stable with Bn Pn −→ B strongly. Finally in this section, we mention the concept of collectively compact operator sequences and give the main result on its application to the proof of stability and convergence of numerical methods (2.6.2) in the situation (C2) with A = I. In Sects. 6.1, 6.2, and 6.3 the reader can find examples for the application of this approach to Fredholm integral equations. ∞ of linear operators Tn : X −→ X is collectively We say that a sequence (Tn )n=1 compact, if the set {Tn f : f ∈ X, f  ≤ 1, n ∈ N} is relatively compact in X, i.e., the closure of this set is compact in X. The concept of collectively compact sets of operators goes back to Anselone and Palmer [6, 7, 9–11]. For the following proposition, see, for example, [8], or Sections 10.3 and 10.4 in [95, 97], or [98], or Section 4.1 in [14]. Proposition 2.6.8 Let X be a Banach space and T : X −→ X, Tn : X −→ X, n ∈ N be given linear operators with lim Tn f − T f  = 0 for all f ∈ X. For n→∞

g ∈ X, consider the operator equations

(I − T )f = g

(2.6.9)

(I − Tn )fn = g .

(2.6.10)

and

If the null space N(I − T ) = {f ∈ X : (I − T )f = 0} of the operator I − T : ∞ X −→ X is trivial and the sequence (Tn )n=1 is collectively compact, then, for all sufficiently large n, Eq. (2.6.10) has a unique solution fn∗ ∈ X, where  ∗    f − f ∗  ≤ cTn f ∗ − T f ∗  , n

c = c(n, g) ,

(2.6.11)

and f ∗ ∈ X is the unique solution of (2.6.9). Proof Since, due to the strong convergence Tn −→ T , the set {T f : f ∈ X, f  ≤ 1} is a subset of the closure of {Tn f : f ∈ X, f  ≤ 1, n ∈ N}, the operator T : X −→ X is compact. Corollary 2.5.2 implies the invertibility of

48

2 Basics from Linear and Nonlinear Functional Analysis

I − T : X −→ X. Moreover, with the help of Lemma 2.3.11 we get lim (Tn − T )Tn  = 0 .

n→∞

Set An := I + (I − T )−1 Tn . Then An (I − Tn ) = (I − T )−1 (I − T + Tn )(I − Tn ) = (I − T )−1 [I − T − (Tn − T )Tn ] = I − Rn

with Rn = (I − T )−1 (Tn − T )Tn and lim Rn  = 0. Thus, for all sufficiently n→∞

large n, the operators I−Tn are injective. The compactness of Tn and Corollary 2.5.2 yield the invertibility of I − Tn , where the inverses (I − Tn )−1 = (I − Rn )−1 An are uniformly bounded. Now the relation (I − Tn )(fn∗ − f ∗ ) = g − (I − Tn )f ∗ = Tn f ∗ − T f ∗  

proves (2.6.11).

Remark 2.6.9 Note that from the proof of Proposition 2.6.8 one get that ∞ is a stable sequence. (I − Tn )n=1 In what follows we generalize Proposition 2.6.8 to the case A = I. Corollary 2.6.10 Let the assumptions on T and Tn of Proposition 2.6.8 be fulfilled ∞ and (An )n=1 be a stable sequence of linear operators An : X −→ Y strongly converging to A ∈ L(X). If the null space N(A − T ) of the operator A − T : X −→ X is trivial then, for every g ∈ X, equation (A − T )f = g

(2.6.12)

and, for all sufficiently large n, equation (An − Tn )fn = g have unique solutions f ∗ ∈ X and fn∗ ∈ X, respectively. Moreover,      ∗    −1 ∗ −1 ∗ f − f ∗  ≤ c  A−1 n n g − Ag  + An Tn f − A T f 

(2.6.13)

(2.6.14)

with c = c(n, g). −→ A−1 strongly and, consequently, Proof By Corollary 2.6.1, we have A−1 n −1 −1 An Tn −→ A T strongly. Furthermore, Eqs. (2.6.12) and (2.6.13) are equivalent to (I − A−1 T )f = A−1 g

(2.6.15)

−1 (I − A−1 n Tn )fn = An g .

(2.6.16)

respective

2.6 Stability of Operator Sequences

49

    : n ≥ n0 for an appropriate n0 . If N ⊂ X is a finite Set M = sup A−1 n −1 M  n ∈ N},  ∞then N is a finite ε-net for  −1ε-net for the set {Tn f: f  ≤ 1, An Tn f : f  ≤ 1, n ≥ n0 . Hence, A−1 n Tn n=n0 is collectively compact and strongly convergent to A−1 T . Proposition  ∞ with Remark 2.6.9  2.6.8 together guarantees the stability of the sequence I − A−1 T n n=n0 , and by (2.6.4) we n get the estimate (2.6.14).   ∞ ∞ as well as (Mn )n=1 be Exercise 2.6.11 Let X be a Banach space and (Tn )n=1 ∞ sequences of linear and bounded operators in X, where (Tn )n=1 is collectively compact, the operators Mn : X −→ X are compact, and lim Tn − Mn X→X = 0. ∞ Show, that (Mn )n=1 is also collectively compact.

n→∞

To realize an approximation method (2.6.2), usually, one has to rewrite (2.6.2) as an equivalent system of linear equations. For this, let us consider the situation (C1) n and take some sequences of linear and invertible operators RX n : Xn −→ C and Y n Rn : Yn −→ C . Then the sequence (2.6.2) of operator equations is equivalent to the sequence of equations An ξ n = ηn := RY n gn ,

 −1 fn = RX ξn n

(2.6.17)

 X −1 with the n × n-matrices An = RY Rn ∈ Cn×n . If Cn∗ = (Cn , .∗ ) is a nA n −1 normed space, then by cond∗ (An ) = An ∗ An ∗ the respective condition number of an invertible matrix An ∈ Cn×n is denoted, where An ∗ = An Cn∗ →Cn∗ . 1  n 2    In case .∗ = .2 is the Euclidean norm, i.e.,  (ξk ) n  = |ξk |2 , k=1 2

k=1

the condition number cond∗ (An ) = cond2 (An ) is called the spectral condition  −1 number of the matrix An . Now, since we also have An = RY An RX n n , the following corollary is easily seen. Corollary 2.6.12 Let the operators An Pn with An from (2.6.2) be strongly convergent (or at least uniformly bounded). If the sequences of the operators RX n : n and the sequences of their inverses are uniformly Xn −→ Cn∗ and RY : Y −→ C n ∗ bounded, then the stability of the approximation method (An ) is equivalent to the boundedness of the sequence (cond∗ (An )) of the ∗-condition numbers of the matrices An in (2.6.17). The notion condition number is not only used for matrices. Of course, in general, if X and Y are normed spaces and if A ∈ L(X, Y) is an boundedly invertible operator, the number cond(A) = A−1 Y→X AX→Y is called the condition number of the operator A. The role played by the condition number in practise can be seen from the following exercise (cf., for example, [97, Theorem 14.2]). Exercise 2.6.13 Let X and Y be Banach spaces and A ∈ GL(X, Y) as well as B ∈ L(X, Y) a small perturbation of A in the sense of A−1 B − A < 1. Show

50

2 Basics from Linear and Nonlinear Functional Analysis

that, if Af = g = 0 and BfB = gB , then the estimate fB − f  ≤ f 

cond(A) B − A 1 − cond(A) A



gB − g B − A + g A



is true.

2.7 Fixed Point Theorems and Newton’s Method In order to prove the existence of solutions of nonlinear operator equations, often fixed point theorems are used. Here, we recall the classical fixed point theorems of Banach and Schauder as well as Newton’s method for solving nonlinear operator equations approximately. For a metric space (M, d) and a mapping F : M −→ M, u → F (u), we investigate the fixed point equation u = F (u) ,

u ∈ M.

(2.7.1)

First, we consider Banach’s fixed point theorem. Proposition 2.7.1 Let (M, d) be a complete metric space and F : M −→ M a contractive map, i.e., there is a constant q ∈ (0, 1) such that d(F (u), F (v)) ≤ q d(u, v)

∀ u, v ∈ M .

(2.7.2)

Then there is a unique solution u∗ ∈ M of the fixed point equation (2.7.1). Moreover, lim d(un , u∗ ) = 0 ,

n→∞

(2.7.3)

∞ where u0 ∈ M is arbitrary and the sequence (un )n=0 is defined by the method of successive approximation

un+1 = F (un ) ,

n ∈ N0 .

(2.7.4)

Additionally, the error estimate d(un , u∗ ) ≤ holds true.

q n d(u1 , u0 ) 1−q

(2.7.5)

2.7 Fixed Point Theorems and Newton’s Method

51

Proof The repeated application of the contraction condition (2.7.2) in combination with formula (2.7.4) yields d(un+1 , un ) ≤ q d(un , un−1 ) ≤ . . . ≤ q n d(u1 , u0 ) ,

n ∈ N0 .

and, by using the triangular inequality, d(un+k , un ) ≤ d(un+k , un+k−1 ) + . . . + d(un+2 , un+1 + d(un+1 , un )   ≤ q n q k−1 + . . . + q + 1 d(u1 , u0 ) =

(2.7.6)

qn q n (1 − q k ) d(u1 , u0 ) ≤ d(u1 , u0 ) , 1−q 1−q

k ∈ N.

∞ is a Cauchy sequence having a limit u∗ ∈ M. Since, due to (2.7.2), Hence, (un )n=0 the map F : M −→ M is continuous, from (2.7.4) we can infer (for n tending to infinity) that u∗ = F (u∗ ). Moreover, if in (2.7.6) we send k to infinity, then we get (2.7.5), where we use the well-known continuity of the metric function d : (M × M, d1 ) −→ (R, |.|) and where M × M is equipped, for example, with the metric d1 ((u1 , v1 ), (u2 , v2 )) = d(u1 , u2 ) + d(v1 , v2 ).  

We formulate the following version of Schauder’s fixed point theorem for Banach spaces. Proposition 2.7.2 ([218, Corollary 2.13]) Let M ⊂ X be a nonempty compact and convex subset of the Banach space X and F : M −→ M be a continuous map. Then F possesses a fixed point. Let us turn to the following general situation. For Banach spaces X and Y and a given (in general nonlinear) mapping P : D(P) ⊂ X −→ Y we study Newton’s method un+1 = un − [P  (un )]−1 P(un ) ,

n ∈ N0 ,

(2.7.7)

n ∈ N0 ,

(2.7.8)

and its modified version un+1 = un − [P  (u0 )]−1 P(un ) , in order to solve the operator equation P(u) = 0

(2.7.9)

approximately. Here, we assume that the Fréchet derivatives P  (un ) in (2.7.7) resp. P  (u0 ) in (2.7.8) exist and are invertible. Moreover, for Newton’s method (2.7.7), we have to guarantee that with un ∈ D(P) also un+1 defined by (2.7.7) is in the domain D(P) of the operator P.

52

2 Basics from Linear and Nonlinear Functional Analysis

Let us start with an proposition on the convergence of the modified Newton method. Proposition 2.7.3 Let X and Y be Banach spaces and let P : X −→ Y be an operator whose Fréchet derivative P  (u) ∈ L(X, Y) exists for every u ∈ X. Moreover, suppose that there is an u0 ∈ X for which P  (u0 ) : X −→ Y is invertible. Furthermore, assume that there are constants M0 , N0 , C0 ∈ R and η ∈ (0, 1] such that   (a) [P  (u0 )]−1 L(Y,X) ≤ M0 ,    [P (u0 )]−1 P(u0 ) ≤ N0 , (b)   X η (c) P  (u) − P  (u0 )L(X,Y) ≤ C0 u − u0 X ∀ u ∈ X, η

and h0 := M0 N0 C0 ≤ 14 . Then there exists a neighbourhood Uρ (u0 ) = {u ∈ X : u − u0 X < ρ} such that ρ ≤ 2N0 and Eq. (2.7.9) has a unique solution ∞ u∗ in Uρ (u0 ), where the sequence (un )n=0 defined by (2.7.8) converges in X to u∗ . Proof Let t ∈ [1, t ∗ ] be a solution of the equation h0 t 1+η − t + 1 = 0, where √ 0 1− 1−4h0 ∗ is a solution of the quadratic equation h0 t 2 − t + 1 = 0. Note that t = 2h0 4h0   ≤ 2. Define t∗ = √ 2h0 1 + 1 − 4h0 ! "−1 Q(u) := u − P  (u0 ) P(u) ,

R(u) := P(u) − P(u0 ) − P  (u0 )(u − u0 ) ,

and assume that u − u0 X ≤ t0 N0 =: ρ. Since        η R (u)   L(X,Y) = P (u) − P (u0 ) L(X,Y) ≤ C0 (t0 N0 ) we have R(u)Y = R(u) − R(u0 )Y ≤ C0 (t0 N0 )1+η and   ! "−1 Q(u) − u0 X ≤ u − u0 − P  (u0 ) [P (u) − P (u0 )]X + N0  1+η ≤ M0 R(u)Y + N0 ≤ M0 C0 (t0 N0 )1+η + N0 = h0 t0 + 1 N0 = t0 N0 .

Hence, the operator Q maps Uρ (u0 ) into itself. Moreover, in view of    !  " ! " t0 − 1 η η Q (u)  P (u0 ) −1 P  (u) − P  (u0 )  < 1, L(X) = L(X) ≤ M0 C0 (t0 N0 ) = h0 t0 = t0

Banach’s fixed point theorem is applicable to the equation u = Q(u).

 

2.7 Fixed Point Theorems and Newton’s Method

53

Lemma 2.7.4 Let P : D ⊂ X −→ Y be a map from a nonempty open convex subset D = D(P) of the Banach space X into the Banach space Y, which is Fréchet differentiable on D. If P  is Hölder continuous, i.e.,    η P (u) − P  (v) L(X,Y) ≤ H u − vX

∀ u, v ∈ D ,

(2.7.10)

where H > 0 and η ∈ (0, 1] are some constants, then 1+η   H v − uX P(v) − P(u) − P  (u)(v − u) ≤ Y 1+η

∀ u, v ∈ D .

Proof For t ∈ [0, 1] and u, v ∈ D, set ψ(t) = P(u + t (v − u)) − P  (u)(v − u) . Then ! " ψ  (t) = P  (u + t (v − u)) − P  (u) (v − u) and therefore      1+η η ψ (t) ≤ P  (u + t (v − u)) − P  (u) Y L(X,Y)v − uX ≤ H t v − uX . That means     P(v) − P(u) − P  (u)(v − u) ≤ ψ(1) − ψ(0)Y =   Y 

1

≤ 0

1 0

  ψ (t) dt   

Y 1+η

1+η

H t η v − uX

dt =

H v − uX 1+η

,  

and the lemma is proved. Proposition 2.7.5 Suppose that Eq. (2.7.9) has a solution Fréchet derivative P  (u) exists for all

u∗

∈ D(P ), that the

    u ∈ Uε0 (u∗ ) = u ∈ X : u − u∗ X < ε0 ⊂ D(P) and satisfies    P (u) − P  (v) ≤ H u − vη X Y

∀ u, v ∈ Uε0 (u∗ ) ,

(2.7.11)

where H > 0 and η ∈ (0, 1] are some constants, and that P  (u∗ ) : X −→ Y ∞ is invertible. Then there is a δ > 0 such that the sequence (un )n=0 defined

54

2 Basics from Linear and Nonlinear Functional Analysis

by (2.7.7) converges in X to the solution u∗ for every initial point u0 ∈ X satisfying u0 − u∗ X ≤ δ. Furthermore, there exists a constant M = M(u0 , n) such that, for n ∈ N,     un − u∗  ≤ M un−1 − u∗ 1+η X X





and un − u∗ X ≤ M

(1+η)n −1 η

n

δ (1+η) .

(2.7.12)

    Proof For u ∈ Uε0 (u∗ ), let k := P  (u) − P  (u∗ )L(X,Y)[P  (u∗ )]−1 L(Y,X). Using (2.7.11) we get  η     k ≤ H u − u∗  [P  (u∗ )]−1 

L(Y,X)

    ≤ H εη [P  (u∗ )]−1 

L(Y,X)


0 and for all u ∈ Uε (u∗ ). Now, by Corollary 2.6.3, the inverse operator [P  (u)]−1 ∈ L(Y, X) exists for all u ∈ Uε (u∗ ) and      [P (u)]−1 

L(Y,X)



  ∗ −1  [P (u )]  L(Y,X) 1−k

Hence, we can define #    α := sup [P  (u)]−1 

L(Y,X)

    < 2[P  (u∗ )]−1 

L(Y,X)

$ : u ∈ Uε (u∗ ) < ∞ .

∀ u ∈ Uε (u∗ ) .

(2.7.13)

With the notation Q(u) := u − [P  (u)]−1 P(u) Newton’s method (2.7.7) applied to the operator P becomes the iterative method un+1 = Q(un ). For u ∈ Uε (u∗ ), we write ! " Q(u) − u∗ = u − u∗ − [P  (u)]−1 P (u) − P (u∗ ) ! ! " " = [P  (u)]−1 P  (u∗ )(u − u∗ ) − P (u) + P (u∗ ) + [P  (u)]−1 P  (u) − P  (u∗ ) (u − u∗ ) .

Using (2.7.11), (2.7.13), and Lemma 2.7.4 this gives         Q(u) − u∗  ≤ α P  (u∗ )(u − u∗ ) − P (u) + P (u∗ ) + P  (u) − P  (u∗ ) u − u∗  X Y L(X,Y) X  ≤

   1+η 1+η H + H u − u∗ X =: M u − u∗ X , 1+η

so that       un+1 − u∗  = Q(un ) − u∗  ≤ M u − u∗ 1+η X X X

if

un ∈ Uε (u∗ ) .

2.7 Fixed Point Theorems and Newton’s Method

55

Consequently,       2 un − u∗  ≤ M un−1 − u∗ 1+η ≤ M · M 1+η un−2 − u∗ (1+η) X X X ≤ ... ≤ M

(1+η)n −1 η

   (1+η)n n 1  1 u0 − u∗ (1+η) = M − η M η u0 − u∗  . X X

Thus, for sufficiently small δ > 0 and u0 − u∗ X ≤ δ, the iterate un belongs to the neighbourhood Uε (u∗ ) for all n ∈ N0 and converges in X to u∗ if n tends to infinity.  

Chapter 3

Weighted Polynomial Approximation and Quadrature Rules on (−1, 1)

This and the following chapter are devoted to the study of interpolation processes and the respective quadrature rules based on the zeros of orthogonal polynomials with respect to certain weight functions on the interval (−1, 1), the half axis (0, ∞), and the whole real axis. These chapters can be considered as a continuation of [121, Chapter 2 and Chapter 4, Section 5.1], where the authors mainly consider classical weights (also with additional inner singularities). Here we concentrate on recent results and developments concerned with non classical weights like exponential weights on (−1, 1) and on (0, ∞) and generalized Freud weights on the real axis. The present chapter focuses on the respective processes for doubling and exponential weights on the bounded interval (−1, 1).

3.1 Moduli of Smoothness, K-Functionals, and Best Approximation In this section we want to introduce the notions of moduli of smoothness and Kfunctionals in more detail and illustrate the techniques behind their application in the definition of function spaces and studying weighted polynomial approximation. For that aim, we restrict our considerations here to weighted Lp spaces on (−1, 1), the underlying weight of which is a Jacobi weight w(x) = v γ ,δ (x) with γ , δ > − p1 if 1 ≤ p < ∞ and γ , δ ≥ 0 if p = ∞. The respective spaces are defined by (cf. Sects. 2.4.1 and 2.4.2)   Lpw = f ∈ M : wf p < ∞ , 1 ≤ p < ∞ , and   b b L∞ w = Cw = Cγ ,δ = f ∈ C(−1, 1) : wf ∞ < ∞ ,


where M denotes the (linear) space of all measurable function classes f : p (−1, 1) −→ C. In case w(x) ≡ 1, then we write Lp instead of L1 . For r ∈ N0 , by r AC [a, b] we denote the set of all r times differentiable functions f : [a, b] −→ C having an absolutely continuous rth derivative on [a, b], i.e., f (r) ∈ AC[a, b] := p AC0 [a, b]. Moreover, by Lloc (a, b), 1 ≤ p < ∞, and ACrloc (a, b), r ∈ N0 , we refer to the function classes of all locally p-summable functions over (a, b) and all r times differentiable functions on (a, b), the rth derivative of which is locally absolutely continuous, respectively, i.e.,   p Lloc (a, b) = f ∈ M : f ∈ Lp (a1 , b1 ) ∀ [a1 , b1 ] ⊂ (a, b) and   r−1 (r−1) ACr−1 (a, b) = f ∈ C [a, b] : f ∈ AC[a , b ] ∀ [a , b ] ⊂ (a, b) , 1 1 1 1 loc

r ∈ N.

Additionally, L∞ loc (a, b) := C(a, b). In case of (a, b) = (−1, 1), we write shortly p r−1 Lloc and ACloc , respectively (see also page 21). Analogous results for wider classes of weights on (−1, 1) are presented in the remaining sections of the present chapter, while weights on unbounded intervals are considered in Chap. 4. We are going to extend the discussion started in Sect. 2.4.2 by considering weighted spaces of continuous functions (i.e., the case p = ∞). The explications in the remaining part of this section are based on respective investigations in [44] (cf. also [117, 189]).

3.1.1 Moduli of Smoothness and K-Functionals √ In what follows we use the abbreviations w(x) = v γ ,δ (x) and ϕ(x) = 1 − x 2 . For h > 0, and r ∈ N, we define the following difference operators rhϕ , rh,F , rh,B : M −→ M by    r   r   r  k r − k hϕ(x) , hϕ f (x) = (−1) f x+ 2 k k=0



  r    r  rh,F f (x) = f x + (r − k)h , (−1)k k k=0

  r   r   r  h,B f (x) = (−1)k f x − kh . k k=0

3.1 Moduli of Smoothness, K-Functionals, and Best Approximation

59

Basing on these definitions, for t > 0 sufficiently small, we introduce the main part of modulus of smoothness     rϕ (f, t)w,p = sup wrhϕ f  p (3.1.1) 2 2 2 2 L (−1+2r h ,1−2r h )

0 0 .

Lemma 3.1.3 A function f : (−1, 1) −→ C belongs to the space Cγ ,δ if and only if lim ωϕ1 (f, t)w,∞ = 0 .

t →+0

60

3 Weighted Polynomial Approximation and Quadrature Rules on (−1, 1)

Proof Let lim ωϕ1 (f, t)w,∞ = 0, [a, b] ⊂ (−1, 1), and ε > 0. Then, there is a t →+0   t0 > 0 such that [a, b] ⊂ − 1 + 2t02 , 1 − 2t02 and 1ϕ (f, t0 )w,∞ < ε A, where A = min {w(x) : x ∈ [a, b]}. Set δ0 = t0 B, where B = min {ϕ(x) : x ∈ [a, b]}. If a ≤ x1 < x2 ≤ b with x2 − x1 ≤ δ0 , then we can write x1 = x0 − h2 ϕ(x0 ) and x2 = x0 + h2 ϕ(x0 ), where h ϕ(x0 ) ≤ δ0 = t0 B, i.e., h ≤ t0 . It follows |f (x2 ) − f (x1 )| ≤

      1ϕ (f, t0 )w,∞ w(x0 )  h h  ϕ(x ϕ(x < ε. + ) − f x − ) f x 0 0 0 0 ≤  A 2 2 A

Hence f ∈ C(−1, 1). Now, we prove that

lim

x→−1+0

w(x)f (x) = 0 if δ > 0. For this,

let ε > 0. Since   lim sup w[f (· + h) − f ]∞,(−1,−1+2t 2 ) : 0 < h ≤ 2t 2 = 0 , t →+0

(3.1.3)

there is a t0 > 0 such that w(x)|f (x + h) − f (x)| < ε

∀ x ∈ (−1, −1 + 2t02 ) , ∀ h ∈ (0, 2t02 ) .

From this we get w(x)|f (x)| ≤ w(x)|f (x + h) − f (x)| + w(x)|f (x + h)| < ε + w(x)|f (x + h)| . Since we already know that

lim

x→−1+0

f (x + h) = f (−1 + h), we conclude

lim sup w(x)|f (x)| ≤ ε for all ε > 0, i.e., (wf )(−1+0) = 0. In case δ = 0, (3.1.3) x→−1+0

is equivalent to   lim sup f (· + h) − f ∞,(−1,−1+2t 2 ) : 0 < h ≤ 2t 2 = 0 ,

t →+0

which implies the existence of the finite limit

lim

x→−1+0

f (x). For the other endpoint

x = 1 corresponding to the exponent γ , we can proceed analogously. Let us turn to the proof of the reverse assertion, i.e., we assume that f ∈ Cγ ,δ and show that then lim ωϕ1 (f, t)w,∞ = 0. We restrict ourselves to the case t →+0

min {γ , δ} > 0, since the other cases γ = 0 or δ = 0 are simpler to handle. First, we note that, due to Exercise 3.1.2, there exists a constant cw = cw (x, h) such that w(x)

 w x+

h 2

ϕ(x)

 ≤ cw

    1 ∀ h ∈ 0, √ , ∀ x ∈ −1 + 2h2 , 1 − 2h2 . 2

3.1 Moduli of Smoothness, K-Functionals, and Best Approximation

61

Let ε > 0 be arbitrarily chosen. Then, there is a δ0 > 0 such that |w(x1 )f (x1 ) − w(x2 )f (x2 )|
0 having the property w(x)|f (x)|
0, we define the K-functionals     Kr,ϕ (f, t r )w,p = inf w(f − g)p + t r wϕ r g (r) p : g ∈ ACr−1 loc

64

3 Weighted Polynomial Approximation and Quadrature Rules on (−1, 1)

and    η Kr,ϕ (f, t r )w,p = sup inf w(f − g)Lp (Ih,η ) + hr wϕ r g (r) Lp (I

h,η )

0 0 such that Kr,ϕ (f, t0r )w,p = 0. The topic of this section is the relation between the moduli of smoothness defined in the previous section and the best weighted approximation of a given function f ,   Em (f )w,p = inf w(f − P )p : P ∈ Pm , m ∈ N , E0 (f )w,p := wf p . Exercise 3.1.11 Show that, for every f ∈ Lp and every m ∈ N, there exists a f polynomial Pm ∈ Pm of weighted best approximation, i.e.,  f  Em (f )w,p = w(f − Pm )p . Proposition 3.1.12 ([44], Theorem 8.2.1) There exist positive constants c1 = p c1 (f, m) and m0 = m0 (f ) such that, for all f ∈ Lloc and m ≥ m0 , Em (f )w,p ≤ c1

∞  k=0

  1 rϕ f, k , 2 m w,p

(3.1.6)

and there are further positive constants c2 = c2 (f, t) and t0 = t0 (f, t) such that, p for all f ∈ Lloc and 0 < t ≤ t0 , !

rϕ (f, t)w,p ≤ c2 t r

−1

"

t 

(m + 1)r−1 Em+1 (f )w,p ,

m=0

where [a] denotes the integer part of the real number a.

(3.1.7)

66

3 Weighted Polynomial Approximation and Quadrature Rules on (−1, 1) ρ,τ

Exercise 3.1.13 Recall the definition of Cr,λ and Cγ ,δ in Sect. 2.4.2 and show that f ∈ C0,λ for some λ ∈ (0, 1) implies f ∈ Cλ,0 0,0 . Proposition 3.1.14 ([44], Theorem 8.2.4) Let γ , δ ≥ 0. Then, there are positive p constants c3 = c3 (f, m) and m0 = m0 (f ) such that, for all f ∈ Lloc and m ≥ m0 , !

ωϕr (f, t)w,p ≤ c3 t r

−1

"

t 

(m + 1)r−1 Em+1 (f )w,p ,

(3.1.8)

m=0

We remark that, due to ωϕr (f, t)w,p ≥ rϕ (f, t)w,p , the modulus ωϕr (f, t)w,p can replace rϕ (f, t)w,p also in (3.1.6). In the following proposition, we present estimates of the rth derivative of the polynomials of weighted best approximation with the help of moduli of smoothness and best weighted approximations. Proposition 3.1.15 ([44], Theorem 8.3.1) Let r ∈ N. There exist positive constants p c = c(f, n) and m0 = m0 (f ) such that, for all f ∈ Lw and m ≥ m0 ,  r (r)  wϕ P  ≤ c mr p



1 m

rϕ (f, τ )w,p dτ

(3.1.9)

τ

0

and m     r (r)  k r−1 Ek (f )w,p , wϕ P  ≤ c p

(3.1.10)

k=1

f

where P = Pm (cf. Exercise 3.1.11). The following proposition is the analogue to Proposition 3.1.12, but concerned with K-functionals instead of moduli of smoothness. Proposition 3.1.16 For certain constants c1 = c1 ((f, n), c2 = c2 (f, t), m0 = m0 (f ), and t0 = t0 (f, t), we have Em (f )w,p ≤ c1

∞ 

 Kr,ϕ f,

k=0

1 kr 2 mr

 p

w,p

∀ f ∈ Lloc , ∀ m ≥ m0

and !

Kr,ϕ (f, t r )w,p ≤ c2 t r

−1

"

t 

m=0

(m + 1)r−1 Em+1 (f )w,p

p

∀ f ∈ Lloc , ∀ m ≥ m0 .

3.1 Moduli of Smoothness, K-Functionals, and Best Approximation


Proof With the help of Propositions 3.1.8 and 3.1.12 we estimate, for a fixed constant η > 0,

  E_m(f)_{w,p} ≤ c Σ_{k=0}^∞ Ω^r_φ(f, 1/(2^k m))_{w,p} ≤ c Σ_{k=0}^∞ K^η_{r,φ}(f, 1/(2^{kr} m^r))_{w,p} ≤ c Σ_{k=0}^∞ K_{r,φ}(f, 1/(2^{kr} m^r))_{w,p},

where we also used the relation K^η_{r,φ}(f, t^r)_{w,p} ≤ K_{r,φ}(f, t^r)_{w,p}, which is obvious by the definition of the K-functionals. In order to prove the second inequality, we can assume that f ∈ L^p_w, since otherwise we have E_m(f)_{w,p} = ∞ for all m ∈ ℕ. Again from the definition of K_{r,φ}(f, t^r)_{w,p}, we get, with m = [t^{−1}] and P = P^f_{m+1},

  K_{r,φ}(f, t^r)_{w,p} ≤ E_{m+1}(f)_{w,p} + t^r ‖wφ^r P^{(r)}‖_p ≤ E_{m+1}(f)_{w,p} + c t^r Σ_{k=0}^m (k + 1)^{r−1} E_{k+1}(f)_{w,p},

where (3.1.10) was used in the last step. It remains to note that

  E_{m+1}(f)_{w,p} ≤ r (m + 1)^{−r} E_{m+1}(f)_{w,p} Σ_{k=0}^m (k + 1)^{r−1} ≤ r t^r Σ_{k=0}^m (k + 1)^{r−1} E_{k+1}(f)_{w,p},

where we took into account

  (m + 1)^r = r ∫_0^{m+1} x^{r−1} dx = r Σ_{k=0}^m ∫_k^{k+1} x^{r−1} dx ≤ r Σ_{k=0}^m (k + 1)^{r−1}.

The proposition is proved.

By definition we have K^η_{r,φ}(f, t^r)_{w,p} ≤ K_{r,φ}(f, t^r)_{w,p} for all f ∈ L^p_loc and t > 0. In the last proposition of this section, we state another relation between these two moduli of smoothness.

Proposition 3.1.17 For a fixed constant η > 0, there exist constants c ≠ c(f, t) and t0 ≠ t0(f) such that, for all f ∈ L^p_loc and 0 < t ≤ t0,

  K_{r,φ}(f, t^r)_{w,p} ≤ c ∫_0^t K^η_{r,φ}(f, τ^r)_{w,p} dτ/τ   (3.1.11)

holds true.

Proof Since, due to Proposition 3.1.12, Exercise 3.1.9, and Proposition 3.1.8, the integral in (3.1.11) is infinite if f ∈ L^p_loc \ L^p_w, we can assume that f belongs to L^p_w. With m = [1 + t^{−1}] and P = P^f_{m+1}, we obtain

  K_{r,φ}(f, t^r)_{w,p} ≤ E_{m+1}(f)_{w,p} + t^r ‖wφ^r P^{(r)}‖_p ≤ E_{m+1}(f)_{w,p} + c ∫_0^{1/m} Ω^r_φ(f, τ)_{w,p} dτ/τ,

where (3.1.9) was used in the last step, which, in virtue of Proposition 3.1.12 and Exercise 3.1.9, yields

  K_{r,φ}(f, t^r)_{w,p} ≤ c ∫_0^{1/m} Ω^r_φ(f, τ)_{w,p} dτ/τ ≤ c ∫_0^t Ω^r_φ(f, τ)_{w,p} dτ/τ,

and the proposition is proved.

Finally in this section, we state a Jackson-type inequality, a proof of which can be found in [121, Section 2.5.2].

Proposition 3.1.18 Let r ∈ ℕ. There exists a real constant c ≠ c(f, n) such that, for all f ∈ L^p_w and n ∈ ℕ with n > r,

  E_n(f)_{w,p} ≤ c ω^r_φ(f, 1/n)_{w,p}.

3.1.3 Besov-Type Spaces

Let q > 1 and r ≥ 0 be given real numbers. To every f ∈ L^p_w we associate the number sequence a(f) = (a_m(f))_{m=0}^∞ with

  a_m(f) = (m + 1)^{r − 1/q} E_m(f)_{w,p}.

One aim of this section is to show that the Besov-type space B^{p,r}_{q,w}, given in (2.4.8), can also be defined by

  B^{p,r,∗}_{q,w} = { f ∈ L^p_w : a(f) ∈ ℓ^q }

and equipped with the equivalent norm

  ‖f‖^∗_{B^{p,r}_{q,w}} = ‖a(f)‖_{ℓ^q} = ( Σ_{m=0}^∞ |a_m(f)|^q )^{1/q}.

In this section we restrict ourselves to the parameter ranges 1 ≤ p < ∞ and 1 ≤ q < ∞. A second intention of the present section will be the description of certain Besov-type spaces with the help of the properties of the derivatives of their elements.

Exercise 3.1.19 Show that ‖f‖^∗_{B^{p,r}_{q,w}} defines a norm on B^{p,r,∗}_{q,w} and that (B^{p,r,∗}_{q,w}, ‖·‖^∗_{B^{p,r}_{q,w}}) is a Banach space.
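For p = q = 2 and the Legendre weight w ≡ 1 the sequence a(f) can be computed directly from the Fourier–Legendre coefficients of f, since in that case E_m(f)_{w,2}² = Σ_{k≥m} |c_k|² (Parseval's identity, cf. Sect. 3.2.4). The following sketch is not part of the book; the function name besov_seq_norm and the choice of test function are my own, and it only illustrates the definition of the sequence norm numerically.

import numpy as np
from numpy.polynomial import legendre

# Sketch: sequence norm ||a(f)||_{l^q} of the Besov-type space B^{2,r}_{2,w}
# for the unweighted case w(x) = 1 on (-1,1), using Parseval's identity
# E_m(f)_{w,2}^2 = sum_{k>=m} |c_k|^2 for the Fourier-Legendre coefficients c_k.

def besov_seq_norm(f, r, q=2.0, n_coef=200, n_quad=400):
    x, wq = legendre.leggauss(n_quad)                 # Gauss-Legendre nodes/weights
    fx = f(x)
    c = np.empty(n_coef)
    for k in range(n_coef):
        pk = legendre.Legendre.basis(k)(x) * np.sqrt((2 * k + 1) / 2.0)  # orthonormal p_k
        c[k] = np.sum(wq * fx * pk)                   # c_k = int_{-1}^{1} f p_k dx
    tails = np.sqrt(np.cumsum(c[::-1] ** 2)[::-1])    # tails[m] = E_m(f)_{w,2}
    m = np.arange(n_coef)
    a = (m + 1.0) ** (r - 1.0 / q) * tails            # a_m(f)
    return np.sum(np.abs(a) ** q) ** (1.0 / q)

print(besov_seq_norm(lambda x: np.abs(x) ** 1.5, r=1.0))

For the test function |x|^{3/2} the terms a_m(f) decay fast enough that the sum converges for r = 1, in agreement with the smoothness of the function.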



Example 3.1.20 From Lemma 2.4.4 it is seen that the Sobolev-type space L^{2,r}_{α,β} defined in (2.4.1) is equal to B^{2,r,∗}_{2,w}, where α = 2γ and β = 2δ.

Exercise 3.1.21 Let (ξ_n)_{n=1}^∞ be a nonincreasing sequence of nonnegative numbers and 1 ≤ q < ∞ as well as r ≥ 0. Show that, for k0 ∈ ℕ0 and n0 = 2^{k0}, the equivalence

  ( Σ_{n=n0}^∞ [ (n + 1)^{r − 1/q} ξ_n ]^q )^{1/q} ∼ ( Σ_{k=k0}^∞ [ 2^{kr} ξ_{2^k} ]^q )^{1/q}   (3.1.12)

is true, where the equivalence constants do not depend on the sequence (ξ_n)_{n=1}^∞.

We recall the definition of the norm in B^{p,r}_{q,w} in Sect. 2.4.1, namely ‖f‖_{B^{p,r}_{q,w}} = ‖wf‖_p + |f|_{w,p,q,r}, where

  |f|_{w,p,q,r} = ( ∫_0^1 [ Ω^s_φ(f, t)_{w,p} / t^{r + 1/q} ]^q dt )^{1/q}

and s ∈ ℕ is fixed and satisfies s > r.

Exercise 3.1.22 Show that (B^{p,r}_{q,w}, ‖·‖_{B^{p,r}_{q,w}}) is a normed space.

Proposition 3.1.23 The spaces B^{p,r}_{q,w} and B^{p,r,∗}_{q,w} coincide and the norms ‖·‖_{B^{p,r}_{q,w}} and ‖·‖^∗_{B^{p,r}_{q,w}} are equivalent, i.e., there is a positive constant M ≠ M(f) such that

  M^{−1} ‖f‖_{B^{p,r}_{q,w}} ≤ ‖f‖^∗_{B^{p,r}_{q,w}} ≤ M ‖f‖_{B^{p,r}_{q,w}}   ∀ f ∈ B^{p,r}_{q,w}.

Proof Let f ∈ Bq,w and define bmk (f ) = (m + 1)

r− q1

  sϕ f, 2k1m

w,p

for a certain

s ∈ N with s > r. With the help of Proposition 3.1.12 (with m0 = 2j0 ) and the triangle inequality for the q -norm as well as (3.1.12), we get 

1

∞ 

q

|am (f )|q

m=m0



∞ 

≤ c1

≤c

%∞ 

m=m0

k=0



%

∞  k=0



∞  j=j0

&q  q1

∞  k=0

 jr

2

≤ c1

bmk (f )

sϕ

f,

1 2k+j



∞ 

1 q

[bmk (f )]q

m=m0

⎛ ⎞1 &q ⎞ q1 %   &q q ∞ ∞   1 −kr jr s ⎠ =c ⎝ ⎠ 2 2 ϕ f, j 2 w,p w,p



k=0

j=j0 +k



≤c

∞ 

⎛ −kr

2

k=0

⎛ ≤ c⎝

∞ 



% jrq



sϕ

2

j=j0



∞ 

j (rq−1)

2

1 2j −1

!

1 2j

j=j0

1 f, j 2



&q ⎞ q1



⎠ = c⎝ w,p

"q sϕ (f, t)w,p

%

∞ 

jrq

 sϕ

2

j=j0

⎞1



q

1

r+ dt ⎠ ≤ 2 q c ⎝

∞  

1 2j −1

1 f, j 2 %

⎠ w,p

sϕ (f, t)w,p

1 2j

j=j0

&q ⎞ q1



t

⎞1

&q

q

dt ⎠

r+ q1

≤ c |f |w,p,q,r .

Consequently, ⎛

m 0 −1

f ∗Bp,r = ⎝ q,w

⎞1

∞ 

|am (f )|q +

q

  |am (f )|q ⎠ ≤ c wf p + |f |w,p,q,r = cf Bp,r , q,w

m=m0

m=0

where c = c(f ). p,r,∗ Now, let f ∈ Bq,w , s ∈ N, and s > r. Then, 

|f |w,p,q,r

q



1

=

%

sϕ (f, t)w,p

0



t

∞ 

=2

∞ 

1 2j −1 1 2j

% 2

 jr

sϕ

j=0

∞  

j =j0

%

  &q 1 jr s 2 ϕ f, j 2 w,p

%

sϕ (f, t)w,p t

&q

r+ q1

j=1

&q



2j−1

w,p

. w,p

(3.1.7)

≤ c

∞ 

2j0

∞ 

≤ c

(3.1.13) t0−1

+ 1, we have

n=0

⎡ 2j (r−s)q ⎣

j =j0

c





⎡ j ⎤q 2  2j (r−s)q ⎣ (n + 1)s−1 En+1 (f )w,p ⎦

j =j0

(3.1.12)

1

&q



1 f, j 2



dt

%  ∞  ! s "q 2jr sϕ f, ϕ (f, t)w,p dt ≤

With t0 from Proposition 3.1.12 and n0 = ∞ 

1 2j −1 1 2j

j=1

j (rq+1)

j=1

r

dt =

r+ q1

 2

&q

∞  j =j0

j +1 2

⎤q (n + 1)s−1 En (f )w,p ⎦

n=1

⎡ 2j (r−s)q ⎣

j +1  k=0

⎤q j 2ks E2k (f )w,p ⎦ .



By Hölder’s inequality we get ⎡ ⎤q ⎡ ⎤q   j +1 j +1   1 k(s−r) k(s−r) 1− q ⎣ 2ks E2k (f )w,p ⎦ = ⎣ 2 2 q 2kr E2k (f )w,p ⎦ k=0

k=0

⎛ ⎞q−1 j +1 j    k(s−r)⎠ ⎝ ≤ 2 2k(s−r) 2kr E2k (f )w,p k=0

q

k=0 j +1 

≤ c 2j (s−r)(q−1)

 2k(s−r) 2kr E2k (f )w,p

q

k=0

and, consequently, ∞ 

⎡ 2j (r−s)q ⎣

j =j0

j +1 

⎤q j 2ks E2k (f )w,p ⎦

k=0

≤c

∞ 

2j (r−s)

j =j0

=c

∞ 

j +1 

 2k(s−r) 2kr E2k (f )w,p

k=0

 2k(s−r) 2kr E2k (f )w,p

= c

∞ 

∞  

2j (r−s)

j =max{j0 ,k−1}

 2k(s−r) 2kr E2k (f )w,p

k=0

≤c

∞ 

q

k=0

s>r

q

2kr E2k (f )w,p

q

q

2max{j0 ,k−1}(r−s) 1 − 2r−s

∞  (3.1.12) 



k=0

c

(n + 1)

r− q1

q

En (f )w,p

q  ≤ c f ∗Bp,r . q,w

n=1

Putting this together with (3.1.13) and Lemma 3.1.5, we end up with f Bp,r = wf p + |f |w,p.q,r q,w ⎛ ⎞1 %   &q q ∞  1 ⎠ 2j r sϕ f, j ≤ wf p + c ⎝ 2 w,p j =0



j 0 −1 

≤ wf p + c ⎝

j =0

and the proof is complete.

2j r wf p

q



+ f ∗Bp,r

q,w

q

⎞1 q

⎠ ≤ cf ∗ p,r , B q,w

 



Now, we turn to the second aim of this section, the description of certain Besov-type spaces by means of the derivatives of their elements. To this end, first we formulate and prove some lemmata.

Lemma 3.1.24 For all sufficiently large m, say m ≥ m0 = m0(f), and all f ∈ AC^{r−1}_{loc}, we have

  E_m(f)_{w,p} ≤ c m^{−r} E_{m−r}(f^{(r)})_{φ^r w,p}

with a constant c ≠ c(f, m).

Proof Since f ∈ AC^{r−1}_{loc}, we get K_{r,φ}(f, t^r)_{w,p} ≤ t^r ‖wφ^r f^{(r)}‖_p and, due to Proposition 3.1.16,

  E_m(f)_{w,p} ≤ c1 Σ_{k=0}^∞ K_{r,φ}(f, 1/(2^{kr} m^r))_{w,p} ≤ c1 Σ_{k=0}^∞ 2^{−kr} m^{−r} ‖wφ^r f^{(r)}‖_p = c m^{−r} ‖wφ^r f^{(r)}‖_p,   m ≥ m0.

It follows, for all P ∈ P_{m−r} and Q ∈ P_m with Q^{(r)} = P,

  E_m(f)_{w,p} = E_m(f − Q)_{w,p} ≤ c m^{−r} ‖wφ^r (f^{(r)} − P)‖_p

and, by taking the infimum over all P ∈ P_{m−r} on the right-hand side, we get the assertion.

Lemma 3.1.25 Let s ≥ 0 and r ∈ ℕ. If f ∈ AC^{r−1}_{loc} and f^{(r)} ∈ B^{p,s}_{q,φ^r w}, then f ∈ B^{p,r+s}_{q,w} and

  ‖f‖^∗_{B^{p,r+s}_{q,w}} ≤ c ( ‖wf‖_p + ‖f^{(r)}‖^∗_{B^{p,s}_{q,φ^r w}} )

with a constant c ≠ c(f).

Proof By Lemma 3.1.24 we have

  Σ_{m=m0}^∞ [ (m + 1)^{r+s−1/q} E_m(f)_{w,p} ]^q ≤ c Σ_{m=m0}^∞ [ (m + 1)^{s−1/q} E_{m−r}(f^{(r)})_{φ^r w,p} ]^q ≤ c ( ‖f^{(r)}‖^∗_{B^{p,s}_{q,φ^r w}} )^q.

Moreover, Σ_{m=0}^{m0−1} [ (m + 1)^{r+s−1/q} E_m(f)_{w,p} ]^q ≤ c ‖wf‖_p^q, and the assertion follows.



The following lemma states the iterated version of the Bernstein inequality.

Lemma 3.1.26 ([44], Theorem 8.4.7) For all m ∈ ℕ and all P ∈ P_m,

  ‖wφ^r P^{(r)}‖_p ≤ c m^r ‖wP‖_p,

where the constant c does not depend on P and m.

Lemma 3.1.27 Let s > 0 and r ∈ ℕ. If f belongs to B^{p,r+s}_{q,w}, then f ∈ AC^{r−1}_{loc} and f^{(r)} ∈ B^{p,s}_{q,φ^r w}, where

  ‖f^{(r)}‖^∗_{B^{p,s}_{q,φ^r w}} ≤ c ( Σ_{m=r}^∞ [ (m + 1)^{r+s−1/q} E_m(f)_{w,p} ]^q )^{1/q}

with a constant c ≠ c(f).

Proof Take Pm = Pm . With the help of Lemma 3.1.26 we obtain ⎛

⎡ ⎤q ⎞ q1 ∞ ∞   1 " ! (r) ⎝ ⎣ (m + 1)s− q ϕ r w P (r) − P2j m p ⎦ ⎠ 2j+1 m m=1

j =0

⎡ ⎤q ⎞ q1 ∞ ∞    " r+s− q1  r ! ⎣ ≤ c⎝ 2j r (m + 1) ϕ w P2j+1 m − P2j m  ⎦ ⎠ ⎛

m=1

p

j =0



⎡ ⎤q ⎞ q1 ∞ ∞   r+s− q1 ⎣ ≤ c⎝ 2j r (m + 1) E2j m (f )w,p ⎦ ⎠ m=1

≤c

∞ 

j =0

 2

jr

j =0

∞  

(m + 1)

r+s− q1

q

E2j m (f )w,p

1 q

.

m=1

Using relation (3.1.12) two times, we can further estimate the last term by

c

∞  j =0



⎞1 ⎛ ⎞1 q q ∞  ∞ ∞     q q 2 (r+s) E2j+ (f )w,p ⎠ = c 2 (r+s) E2 (f )w,p ⎠ 2j r ⎝ 2−j s ⎝ j =0

=0

≤c

∞  j =0

⎛ 2−j s ⎝

∞  m=1

(m + 1)

r+s− q1

=j

.q Em (f )w,p

⎞1 q

⎠ ≤ cf ∗ p,r+s . Bq,w



One consequence of that is Paraphrased, the series

∞  ! j =0

∞   r ! (r) "  < ∞ for all m ∈ N. ϕ w P j+1 − P (r) 2 m 2j m p j =0

(r) (r) " P2j+1 m − P2j m converges for every m ∈ N absolutely

p

p

f

p

in Lϕ r w to a function gm ∈ Lϕ r w . Since Pm = Pm converges to f in Lw for m −→ ∞, we have, for every m ∈ N, f = Pm +

∞  !

P2j+1 m − P2j m

"

in

Lpw .

j =0

We apply Lemma 2.4.9 with fn = Pm +

n  !

" (r) P2j+1 m − P2j m and g (r) = Pm + gm ,

j =0 (r)

to conclude f ∈ ACloc and f (r) = Pm + gm . From this and the above estimates, we get 

∞  

(n + 1)

s− q1   r

!

ϕ w f

(r)

− Pm(r)

" 

q

1 q

p

m=1



⎡ ⎤q ⎞ q1 ∞ ∞   1 " ! (r) ⎣ (n + 1)s− q ϕ r w P (r) ≤⎝ − P2j m p ⎦ ⎠ ≤ cf ∗ p,r+s . 2j+1 m m=1

Bq,w

j =0

This implies 

∞  

(m + 1)

s− q1

  Em f (r) ϕ r w,p

q

1 q

≤ cf ∗ p,r+s Bq,w

m=1

  and, since P1(r) ≡ 0, also ϕ r wf (r) p ≤ cf ∗ p,r+s , which together yields the Bq,w

estimate  (r) ∗ f  p,s B

q,ϕ r w

≤ cf ∗ p,r+s . Bq,w

(3.1.14)

If we apply (3.1.14) to f − Pr instead of f , we get the assertion, since Em (f − Pr )w,p = Er (f )w,p , m = 0, 1, . . . , r, and Em (f − Pr )w,p = Em (f )w,p , m > r.  



As a conclusion of Lemmata 3.1.25 and 3.1.27 as well as Proposition 3.1.23, we can formulate the following proposition. For this we set

  W^{p,r,s}_{q,w} = { f ∈ AC^{r−1}_{loc} : f^{(r)} ∈ B^{p,s}_{q,φ^r w} },   r ∈ ℕ, s ≥ 0,

and ‖f‖_{W^{p,r,s}_{q,w}} = ‖wf‖_p + ‖f^{(r)}‖_{B^{p,s}_{q,φ^r w}}. If s = 0, we write W^{p,r}_{q,w} instead of W^{p,r,0}_{q,w}, and, in case of q = p, W^{p,r,s}_w instead of W^{p,r,s}_{p,w} and W^{p,r}_w instead of W^{p,r}_{p,w}.

Proposition 3.1.28 If r ∈ ℕ and s > 0, then W^{p,r,s}_{q,w} = B^{p,r+s}_{q,w} and the norms ‖·‖_{W^{p,r,s}_{q,w}}, ‖·‖^∗_{B^{p,r+s}_{q,w}}, and ‖·‖_{B^{p,r+s}_{q,w}} are equivalent on this space. In case of s = 0, we have the continuous embedding (W^{p,r}_{q,w}, ‖·‖_{W^{p,r}_{q,w}}) ⊂ (B^{p,r}_{q,w}, ‖·‖_{B^{p,r}_{q,w}}).

Let us come back to the Example 3.1.20 of the Sobolev-type space L^{2,r}_{α,β} = B^{2,r}_{2,w}, where α = 2γ and β = 2δ. In that situation, in comparison to Proposition 3.1.28, we can prove equality also in case s = 0, as the following corollary shows (cf. Lemma 2.4.7). In preparation of the proof of this corollary we prove the following lemma.

Lemma 3.1.29 ([20], Lemma 2.16) The normed space W^{2,1}_w is complete. The set P of all polynomials is dense in W^{2,1}_w.

Proof If (f_n)_{n=1}^∞ is a Cauchy sequence in W^{2,1}_w, then there are f ∈ L²_w and g ∈ L²_{φw} such that

lim w(fn − f )2 = 0

n→∞

and

  lim ϕw(fn − g)2 = 0 ,

n→∞

since L2 is complete. Let ψ : [−1, 1] −→ C be an arbitrary smooth function satisfying ψ (k) (±1) = 0 for all k ∈ N0 . By taking into account 

1

−1

[fn (x)ψ  (x) + fn (x)ψ(x)] dx = 0 ,

we get   0 ≤ 

1

−1

    [f (x)ψ  (x) + g(x)ψ(x)] dx  = 

1 −1

!

 "  f (x) − fn (x)]ψ  (x) + [g(x) − fn (x)]ψ(x) dx 

        ≤ w(f − fn )2 w −1 ψ   + ϕw(g − fn )2 ϕ −1 w −1 ψ  −→ 0 2

2

if

n −→ ∞ .

    Note that w−1 ψ  2 and ϕ −1 w−1 ψ 2 are finite numbers due to the boundary conditions on ψ(x). Consequently, 

1 −1

[f (x)ψ  (x) + g(x)ψ(x)] dx = 0



for all ψ under consideration, which implies g(x) = f  (x) for allmost all x ∈ 2,1 . [−1, 1] and f ∈ Ww 2,1 and ε > 0 be arbitrarily Let us turn to the second assertion. Let f ∈ Ww 2 chosen. Then, due to the density of P in Lϕw , there is a polynomial P1 ∈ P such   −1 that ϕw(f  − P1 ) < M ε , where 2

2

#   2|γ | 2|δ| M = max 2 , 2 max 2

1 1 , 1 + 2γ 1 + 2δ

$ .

    This implies f  L1 (a,b) ≤ f  − P1 L1 (a,b) + P1 L1 (a,b) < ∞ for all compact intervals [a, b] ⊂ (−1, 1). Hence, for x ∈ (1, 1), the function  g(x) =

x

f  (y) dy

0

is well defined, and g(x) = f (x) + c0 for allmost all x ∈ (−1, 1) and some constant x

c0 ∈ C. We define P0 (x) =

P1 (y) dy − c0 . We can estimate

0

w(f − P0 )22  =

1 −1

 2 w(x)f (x) − P0 (x) dx =

 ≤ c1



0 −1

0

(1 + x)2δ



1 −1

 x 2   [w(x)]2  [f  (y) − P1 (y)] dy  dx 0

|f  (y) − P1 (y)|2 dy dx +



1



0

x

x

(1 − x)2γ

|f  (y) − P1 (y)|2 dy dx



0

=: c1 (J− + J+ )

  with c1 = max 22|γ | , 22|δ| . Now,  J+ =



0

1 1

(1 − x)2γ dx |f  (y) − P1 (y)|2 dy =

y

1 1 + 2γ



1

1 1 + 2γ



1

(1 − y)1+2γ |f  (y) − P1 (y)|2 dy

0

(1 − y)1+2γ (1 + y)1+2δ |f (y) − P1 (y)|2 dy

0

and analogously, J− ≤

1 1 + 2δ



0 −1

(1 − y)1+2γ (1 + y)1+2δ |f  (y) − P1 (y)|2 dy .



Putting this all together, we end up with

and f − P0 W2,1 w

  ε w(f − P0 )2 ≤ M ϕw(f  − P1 )2 < 2   < 1 + M −1 ε2 ≤ ε.

 

Corollary 3.1.30 If r ∈ ℕ, then the equality W^{2,r}_w = B^{2,r}_{2,w} holds true, where the norms ‖f‖_{W^{2,r}_w} = ‖wf‖_2 + ‖φ^r w f^{(r)}‖_2, ‖f‖^∗_{B^{2,r}_{2,w}}, and ‖f‖_{B^{2,r}_{2,w}} are equivalent.

Proof We use the notations L2,r α,β (cf. (2.4.1)) and f α,β,r = f, gα,β,r =

∞ 



f, f α,β,r with



α,β α,β g, pm α,β (m + 1)2r f, pm α,β

m=0 2,r (cf. (2.4.2)) for B2,r 2,w and an equivalent norm in B2,w , respectively (cf. Lemma 2.4.4). In case r = 0 we write f α,β instead of f α,β,0 . Obviously, f α,β =      f, f α,β and wf 2 + ϕ r wf (r) 2 = f α,β + f (r) r+α,r+β . Due to 2,r Propositions 3.1.28 and 3.1.23 it remains to show that L2,r α,β ⊂ Ww and f W2,r ≤ w

cf α,β,r for all f ∈ L2,r α,β , where c = c(f ). Due to relation (5.5.6), for every polynomial P ∈ P we have   P α,β + P  1+α,1+β ≤ cP α,β,1

(3.1.15)

2,1 . The orthoprojection with a constant c = c(P ). Let f ∈ Lα,β

Pnα,β

:

L2α,β

−→

L2α,β

,

f →

n−1 

α,β f, pm

α,β

α,β pm

m=0

has the property ∞ 

   2 2 α,β f − P α,β f  f, p = (m + 1)  n m α,β  −→ 0 α,β,1

if

n −→ ∞ .

m=n

 ∞ α,β 2,1 , which, by Hence, in view of (3.1.15), Pn f is a Cauchy sequence in Ww n=1

2,1 . It follows Lemma 3.1.29 has a limit g ∈ Ww

        f − gα,β ≤ f − Pnα,β f α,β +Pnα,β f − g α,β ≤ f − Pnα,β f α,β +Pnα,β f − g W2,1 −→ 0 w



if n −→ ∞, which implies f (x) = g(x) and f  (x) = g  (x) forallmostall x ∈  α,β  (−1, 1) as well as f W2,1 = gW2,1 . Since, due to (3.1.15), Pn f  2,1 ≤ w w Ww    α,β  cPn f  , with n tending to infinity we get α,β,1

f W2,1 ≤ cf α,β,1 w

2,1 ∀ f ∈ Lα,β ,

and the corollary is proved for r = 1. We proceed with induction and assume that the assertion is true for r = 1, . . . , s ∈ N and consider r = s + 1. Now let r ∈ N \ {1} and write r = 1 + s. Then, by Proposition 3.1.28 (together with the already proved part), we have   2,1+s 2,s 2,s 2,1,s  2,s L2,r = B = W = f ∈ AC : f ∈ B = W = L loc w ϕw α,β 2,w 2,ϕw 1+α,1+β   (r) = f ∈ ACr−1 ∈ L2r+α,r+β = W2,r w loc : f and, for f ∈ L2,r α,β ,       f α,β,r ∼f f W2,1,s = wf 2 + f  B2,s ∼f f α,β + f  1+α,1+β + f (r) r+α,r+β . w

2,ϕw

It remains to prove that ‖f′‖_{1+α,1+β} ≤ c (‖f‖_{α,β} + ‖f^{(r)}‖_{r+α,r+β}) with some constant c ≠ c(f). But this is a consequence of Exercise 2.4.10.

3.2 Polynomial Approximation with Doubling Weights on the Interval (−1, 1)

3.2.1 Definitions

Recall the definition of a weight function u : I −→ ℝ for an interval I ⊂ ℝ (see Sect. 2.4.1) and let us introduce a wide class of weight functions, the so-called doubling weights. We say that a weight function u : (−1, 1) −→ ℝ is a doubling weight, if there exists a constant L = L(u) > 0 such that, for every interval I ⊂ (−1, 1),

  ∫_{(−1,1) ∩ 2I} u(t) dt ≤ L ∫_I u(t) dt,   (3.2.1)

where 2I denotes the interval with the same midpoint as I but of double length. The smallest constant L satisfying (3.2.1) is called the doubling constant of u. The set of all doubling weights defined on (−1, 1) is denoted by DW = DW(−1, 1).
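As a quick numerical illustration (not taken from the book; the function name check_doubling, the use of scipy, and the choice of a Jacobi weight are my own), one can estimate the smallest constant L in (3.2.1) by sampling many subintervals of (−1, 1):

import numpy as np
from scipy.integrate import quad

# Sketch: estimate the doubling constant L in (3.2.1) for a given weight
# by sampling random subintervals I of (-1,1) and comparing the integral
# over (-1,1) intersected with 2I with the integral over I.

def check_doubling(u, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    L = 0.0
    for _ in range(n_trials):
        a, b = np.sort(rng.uniform(-1.0, 1.0, size=2))
        if b - a < 1e-3:
            continue
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        a2, b2 = max(-1.0, mid - 2 * half), min(1.0, mid + 2 * half)   # (-1,1) cap 2I
        small, _ = quad(u, a, b)
        big, _ = quad(u, a2, b2)
        L = max(L, big / small)
    return L

jacobi = lambda x: (1 - x) ** (-0.5) * (1 + x) ** 0.3   # Jacobi weight, alpha = -1/2, beta = 0.3
print("estimated doubling constant:", check_doubling(jacobi))

For a Jacobi weight the sampled ratios stay bounded, whereas for a weight that is not doubling they grow without bound as the sampled intervals shrink.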



Exercise 3.2.1 Show that the weight

  u(x) = { 1 : −1 < x < 0,   x : 0 ≤ x < 1 }

… there is a β > 0 such that ω(I) ≥ βω(I0) =: L^{−1}ω(I0).

(b)⇒(c): If J ⊂ I and |J| ≤ α|I|, then there is an interval J0 ⊂ I \ J with |J0| ≥ ((1 − α)/3)|I|. By (b), there is a β0 > 0 such that ω(J0) ≥ β0 ω(I). It follows ω(J) ≤ ω(I) − ω(J0) ≤ (1 − β0)ω(I) =: βω(I).

(c)⇒(b): Let J ⊂ I ⊂ (−1, 1), |J| ≥ α|I|, write I = J ∪ J1 ∪ J2 such that J, J1, and J2 are pairwise disjoint intervals, and set I_k = J ∪ J_k, k = 1, 2. Then |J_k| ≤ (1 − α)|I_k| and, by (c), ω(J_k) ≤ β0 ω(I_k), k = 1, 2, for some β0 ∈ (0, 1). It follows

  ω(I) = ω(J1) + ω(J2) + ω(J) ≤ β0 [ω(I1) + ω(I2)] + ω(J) = β0 [ω(I) + ω(J)] + ω(J) = β0 ω(I) + (β0 + 1)ω(J).

Hence, ω(J) ≥ βω(I) with β = (1 − β0)/(1 + β0).

Exercise 3.2.4 Prove that all Jacobi weights v^{α,β}(x) = (1 − x)^α (1 + x)^β with α, β > −1 as well as all generalized Jacobi weights v^{α,β}(x) ∏_{k=1}^r |x − t_k|^{γ_k} with −1 < t_1 < … < t_r < 1, α, β > −1, and γ_k > −1, k = 1, …, r, are doubling weights.

Also the weights introduced by Badkov [17] and those considered in [119, 149] fulfil the property (3.2.1). Further classes of doubling weights are given by the generalized Ditzian-Totik weights and the A_p-weights as well as the A∗-weights. The class GDT = GDT(−1, 1) of generalized Ditzian-Totik weights is the set of all weights u : (−1, 1) −→ ℝ of the form (cf. [139, Section 2.1])

  u(x) = (1 − x)^{γ_0} W_0(√(1 − x)) [ ∏_{k=1}^{r−1} |x − t_k|^{γ_k} W_k(|x − t_k|) ] (1 + x)^{γ_r} W_r(√(1 + x)),   (3.2.2)

where γ_k > −1, k = 0, …, r, −1 < t_1 < t_2 < … < t_{r−1} < 1, and where every function W_k, k = 0, …, r, is either identically 1 or a modulus of smoothness of first order, i.e., a concave, nonnegative, nondecreasing, and continuous function on [0, 2] with W_k(0) = 0. Moreover, it is assumed that, for every ε > 0, the function x^{−ε} W_k(x) is nonincreasing on (0, 2] and that lim_{x→+0} x^{−ε} W_k(x) = ∞. The last condition says that the function W_k increases in a subalgebraic way. These weights have been considered by several authors and in different contexts. The case W_k(x) ≡ 1 and γ_k = 0 for all k = 1, …, r − 1 has been examined in [44, Definition 8.1.1]. If W_k(x) ≡ 1 for all k = 0, …, r, then u is a generalized Jacobi weight and it has been studied by several authors (see, for instance, [16, 175]).

For 1 < p < ∞, the class A_p of A_p-weights consists of all weight functions u : (−1, 1) −→ ℝ satisfying the Muckenhoupt condition (cf. [67, 169, 170], or [21, Chapter 2])

  ‖u‖_{L^p(I)} ‖u^{−1}‖_{L^q(I)} ≤ c |I|   (3.2.3)

for all intervals I ⊂ (−1, 1), where again |I| = m(I) denotes the length of the interval I, where p^{−1} + q^{−1} = 1, and where the constant c does not depend on I. Let us note that the definition of the A_p-weights can be extended in a natural way to arbitrary and also unbounded intervals I0 ⊂ ℝ instead of I0 = (−1, 1). The respective class we denote by A_p(I0). Here, in case of an unbounded interval I0, by A_p(I0) one refers to all functions u(x) which are weight functions on every



bounded interval I ⊂ I0 and for which also (3.2.3) is fulfilled for all such bounded subintervals.

Exercise 3.2.5 Show that condition (3.2.3) is equivalent to the existence of an ε > 0 such that

  sup { (1/|I|) ‖u‖_{L^p(I)} ‖u^{−1}‖_{L^q(I)} : I ⊂ (−1, 1), |I| < ε } < ∞.   (3.2.4)

Can this definition of A_p be generalized to an equivalent definition of A_p(I0)?

Exercise 3.2.6 Show that, if u, v ∈ A_p and 0 < λ < 1, then u^λ v^{1−λ} ∈ A_p.

Lemma 3.2.7 Let u, v ∈ A_p. If there is an ε > 0 such that, for every interval I ⊂ (−1, 1) with |I| < ε, we have u, u^{−1} ∈ L^∞(I) or v, v^{−1} ∈ L^∞(I), then uv ∈ A_p.

Proof There exist finitely many intervals I1, …, IN ⊂ (−1, 1) such that |I_k| < ε/2, k = 1, …, N, and (−1, 1) = ∪_{k=1}^N I_k. By I_k^0 we refer to that part of the interval with the same midpoint as I_k and double length which belongs to (−1, 1). By assumption, for every k ∈ {1, …, N}, we can choose one function w_k ∈ {u, v} such that w_k, w_k^{−1} ∈ L^∞(I_k^0). Set

  M = max { ‖w_k‖_{L^∞(I_k^0)}, ‖w_k^{−1}‖_{L^∞(I_k^0)} : k = 1, …, N }.

Let I ⊂ (−1, 1) be an arbitrary interval with |I| < ε/2. Then, there exists a k0 ∈ {1, …, N} such that I ⊂ I_{k0}^0. It follows, with w̃_{k0} ∈ {u, v} \ {w_{k0}},

  ‖uv‖_{L^p(I)} ‖(uv)^{−1}‖_{L^q(I)} ≤ M² ‖w̃_{k0}‖_{L^p(I)} ‖(w̃_{k0})^{−1}‖_{L^q(I)} ≤ M² c |I|,

where (3.2.3) was used in the last step, which proves the lemma (cf. Exercise 3.2.5).
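Condition (3.2.4) is also convenient for a numerical check. The sketch below is my own illustration (function names ap_ratio and ap_check are hypothetical, scipy is assumed); it samples the quantity appearing in (3.2.4) over random subintervals.

import numpy as np
from scipy.integrate import quad

# Sketch: sample |I|^{-1} ||u||_{L^p(I)} ||u^{-1}||_{L^q(I)} from (3.2.4) over random
# subintervals I of (-1,1); for an A_p-weight the sampled values stay bounded.

def ap_ratio(u, p, a, b):
    q = p / (p - 1.0)
    norm_u = quad(lambda x: u(x) ** p, a, b)[0] ** (1.0 / p)
    norm_ui = quad(lambda x: u(x) ** (-q), a, b)[0] ** (1.0 / q)
    return norm_u * norm_ui / (b - a)

def ap_check(u, p, n_trials=500, seed=1):
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n_trials):
        a, b = np.sort(rng.uniform(-1.0, 1.0, size=2))
        if b - a > 1e-3:
            worst = max(worst, ap_ratio(u, p, a, b))
    return worst

u = lambda x: np.abs(x) ** 0.2           # |x|^lambda with -1/p < lambda < 1/q
print("sup of sampled A_2 ratios:", ap_check(u, p=2.0))

Choosing instead an exponent λ outside (−1/p, 1/q) makes the sampled ratios blow up for small intervals containing the origin, in line with Exercise 3.2.8 below.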

Exercise 3.2.8 Show that a generalized Jacobi weight u belongs to A_p if and only if u ∈ L^p and u^{−1} ∈ L^q, where 1/p + 1/q = 1, and that v(x) = |x|^λ is an A_p(ℝ)-weight if and only if −1/p < λ < 1/q.

For a function f ∈ L¹(ℝ), the Hilbert transform, also called Cauchy singular integral,

  (H_ℝ f)(x) = ∫_{−∞}^∞ f(y)/(y − x) dy   (3.2.5)



exists in the Cauchy principal value sense, i.e.,

  (H_ℝ f)(x) = lim_{ε→+0} ( ∫_{−∞}^{x−ε} + ∫_{x+ε}^∞ ) f(y)/(y − x) dy,

for almost all x ∈ ℝ. The restriction of the Hilbert transform onto a measurable set I ⊂ ℝ will be denoted by H_I, i.e.,

  (H_I f)(x) = ∫_I f(y)/(y − x) dy,  x ∈ I,   or   H_I f = χ_I H_ℝ χ_I f,   (3.2.6)

where χ_I(x) denotes the characteristic function of I. In case I = (−1, 1), we write H instead of H_{(−1,1)}, i.e., for f ∈ L¹(−1, 1),

  (Hf)(x) = ∫_{−1}^1 f(y)/(y − x) dy,   −1 < x < 1.   (3.2.7)
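A simple way to evaluate (3.2.7) numerically, a sketch of my own rather than the book's method, is to split off the singularity: since p.v. ∫_{−1}^{1} dy/(y − x) = log((1 − x)/(1 + x)), one may write (Hf)(x) = ∫_{−1}^1 (f(y) − f(x))/(y − x) dy + f(x) log((1 − x)/(1 + x)), where the first integrand is bounded whenever f is smooth.

import numpy as np
from numpy.polynomial import legendre

# Sketch: evaluate the finite Hilbert transform (3.2.7) by subtracting the
# singularity and applying a Gauss-Legendre rule to the regular remainder.

def hilbert_transform(f, x, n=200):
    y, w = legendre.leggauss(n)
    fx = f(x)
    g = np.where(np.abs(y - x) > 1e-14, (f(y) - fx) / (y - x), 0.0)  # regular part
    return np.sum(w * g) + fx * np.log((1.0 - x) / (1.0 + x))

# Example: for f = 1 the transform equals log((1-x)/(1+x)) exactly.
x0 = 0.3
print(hilbert_transform(lambda y: np.ones_like(y), x0), np.log((1 - x0) / (1 + x0)))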

p

Recall the definition of the weighted space L^p_u in Sect. 2.4.1.

Proposition 3.2.9 (cf. [67] and the cited literature in [21], Section 4.4) For 1 < p < ∞ and an interval I ⊂ ℝ, the singular integral (3.2.6) acts as a bounded linear operator (the Cauchy singular integral operator) H_I : L^p_u(I) −→ L^p_u(I), i.e., we have H_I ∈ L(L^p_u(I)), if and only if u ∈ A_p(I).

As a consequence of Exercise 3.2.8 we get the following corollary.

Corollary 3.2.10 Let 1 < p < ∞ and S := H_{(−1,1)}. Since the map v^{−α,−β} I : L^p −→ L^p_{v^{α,β}} is an isometrical isomorphism, the operator v^{α,β} S v^{−α,−β} I : L^p −→ L^p is continuous if and only if −1/p < α, β < 1/q = 1 − 1/p.

We will say that a continuous weight function u : [−1, 1] −→ ℝ belongs to the class A∗ of A∗-weights, if there exists a constant L∗ such that, for each interval I ⊂ [−1, 1] and all x ∈ I,

  u(x) ≤ (L∗/|I|) ∫_I u(t) dt.   (3.2.8)

We remark that the A∗-condition (3.2.8) is stronger than the doubling condition (3.2.1).

Exercise 3.2.11 Let a < c < b, u, v ∈ A∗[a, b], u, u^{−1} ∈ C[c, b], and v, v^{−1} ∈ C[a, c]. Show that uv ∈ A∗[a, b].

Exercise 3.2.12 Show that the Jacobi weights and the generalized Jacobi weights with nonnegative exponents are A∗-weights.

A weight function u : (−1, 1) −→ ℝ belongs to the class A∞ and is called A∞-weight, if, for every α > 0, there is a β > 0 such that, for every interval



I ⊂ (−1, 1) and every measurable set E ⊂ I with |E| ≥ α|I|, the inequality

  ∫_E u(x) dx ≥ β ∫_I u(x) dx   (3.2.9)

is true.

Exercise 3.2.13 Show that every A∞-weight is a doubling weight.

With the help of the function

  u_m(t) = (1/Δ_m(t)) ∫_{|x−t| < Δ_m(t)/2} u(x) dx,   (3.2.10)

where

  Δ_m(t) = √(1 − t²)/m + 1/m²,   (3.2.11)

we can characterize the doubling property of a weight function u(x).

Lemma 3.2.14 ([145], Lemma 7.1) A weight function u : (−1, 1) −→ ℝ is a doubling weight if and only if there are two positive constants s and K such that, for all x, t ∈ [−1, 1] and for all m ∈ ℕ,

  u_m(t) ≤ K ( 1 + m|t − x| + m |√(1 − t²) − √(1 − x²)| )^s u_m(x).

Corollary 3.2.15 If |x − t| ≤ c Δ_m(x) with c ≠ c(x, t, m) and if u ∈ DW, then u_m(t) ∼ u_m(x).

Proof Let −1 ≤ x ≤ t ≤ 1 and t = x + h with h ≤ c Δ_m(x) and c ≥ 1. Due to Lemma 3.2.14, since m|x − t| is obviously bounded, it remains to show that m |√(1 − t²) − √(1 − x²)| is bounded. But this follows from h ≤ (2c/m)√(1 + x) + (c/m)² and

  √(1 − t²) − √(1 − x²) = √(1 − t)√(1 + t) − √(1 − x)√(1 + x) ≤ √(1 − x) ( √(1 + t) − √(1 + x) )
    ≤ √2 ( √(1 + t) − √(1 + x) ) ≤ √2 ( √( 1 + x + (2c/m)√(1 + x) + (c/m)² ) − √(1 + x) ) = √2 c/m.

The assumption implies |x − t| ≤ c Δ_m(t). Hence, we get the boundedness of the expression m |√(1 − t²) − √(1 − x²)| also in case of −1 ≤ t ≤ x ≤ 1.

We continue with certain properties of doubling weights.



Lemma 3.2.16 ([145], (7.34)–(7.36)) If u ∈ DW, then there exist algebraic polynomials q_m(x) of degree at most m such that, for every x ∈ (−1, 1),

  (1/c) u_m(x) ≤ q_m(x) ≤ c u_m(x),   √(1 − x²) |q′_m(x)| ≤ c m u_m(x),   |q′_m(x)| ≤ c m² u_m(x),   (3.2.12)

where the constant c is independent of m and x.

Lemma 3.2.17 ([145], (7.27)) Let u ∈ DW ∩ L^p. Then, for every 1 ≤ p < ∞ and fixed ℓ ∈ ℕ, there exists a constant c such that, for every polynomial P ∈ P_{ℓm},

  (1/c) ‖Pu‖_p ≤ ‖Pu_m‖_p ≤ c ‖Pu‖_p,   c ≠ c(m, P).   (3.2.13)

3.2.2 Polynomial Inequalities with Doubling Weights

In this section we present results from [145]. For respective results in case of 0 < p < 1, see [46].

Proposition 3.2.18 ([145], Theorems 7.3 and 7.4) Let u ∈ DW ∩ L^p. Then, for every p ∈ [1, ∞), there exists a constant c such that, for every polynomial P_m ∈ P_m, the Bernstein inequality

  ‖P′_m φ u‖_p ≤ c m ‖P_m u‖_p   (3.2.14)

and the Markov inequality

  ‖P′_m u‖_p ≤ c m² ‖P_m u‖_p   (3.2.15)

hold true, where φ(x) = √(1 − x²) and c ≠ c(m, P_m).

Combining (3.2.14) and (3.2.15) we deduce

  ‖P′_m Δ_m u‖_p ≤ (1/m) ‖P′_m φ u‖_p + (1/m²) ‖P′_m u‖_p ≤ c ‖P_m u‖_p

with Δ_m(t) defined in (3.2.11). If Q ∈ P_{ℓm} with ℓ ∈ ℕ, then we conclude

  ‖Q′ Δ_m u‖_p ≤ ℓ² ‖Q′ Δ_{ℓm} u‖_p ≤ c ℓ² ‖Q u‖_p.   (3.2.16)
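The constants in (3.2.14) can be explored numerically. The sketch below is my own illustration (the weight, the random polynomials, and the function name bernstein_ratio are choices made here, not taken from the book); it evaluates the ratio ‖P′_m φ u‖_p / (m ‖P_m u‖_p) for random polynomials and a doubling weight.

import numpy as np
from numpy.polynomial import legendre
from numpy.polynomial.polynomial import Polynomial

# Sketch: sample the Bernstein ratio ||P'_m phi u||_p / (m ||P_m u||_p) from (3.2.14)
# for random polynomials P_m and the doubling weight u(x) = (1-x)^(1/2)*(1+x)^2.

def lp_norm(vals, weights, p):
    return np.sum(weights * np.abs(vals) ** p) ** (1.0 / p)

def bernstein_ratio(m, p=2.0, seed=0):
    rng = np.random.default_rng(seed)
    P = Polynomial(rng.standard_normal(m + 1))
    x, wq = legendre.leggauss(4 * m + 50)              # quadrature for the norms
    u = (1 - x) ** 0.5 * (1 + x) ** 2
    phi = np.sqrt(1 - x ** 2)
    return lp_norm(P.deriv()(x) * phi * u, wq, p) / (m * lp_norm(P(x) * u, wq, p))

for m in (5, 10, 20, 40, 80):
    print(m, bernstein_ratio(m))

The observed ratios remain bounded as m grows, as predicted by (3.2.14).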

Remark 3.2.19 The proof of Proposition 3.2.18 consists of two steps. First, we can prove the weighted Bernstein and Markov inequalities for the weight um , approximating um by a polynomial by Lemma 3.2.16 and using the classical



(unweighted) Bernstein and Markov inequalities

  ‖P′_m φ‖_p ≤ m ‖P_m‖_p   and   ‖P′_m‖_p ≤ m² ‖P_m‖_p,   P_m ∈ P_m.

Then, applying Lemma 3.2.17, we can replace u_m by u and obtain Proposition 3.2.18. As a consequence, inequality (3.2.16) remains true for u_m(t) instead of u(t) with c ≠ c(m, Q).

Proposition 3.2.20 ([145], (7.22)) Let 1 ≤ p < ∞, u ∈ DW ∩ L^p, −1 = t_0 < t_1 < · · · < t_{N−1} < t_N = 1, β_k ≥ 0, and set w(x) = ∏_{k=0}^N |t_k − x|^{β_k}. Then, for every polynomial P_m ∈ P_m, the Schur-type inequality

  ‖P_m u‖_p ≤ c m^β ‖P_m u w‖_p   (3.2.17)

holds, where β = (1/p) max {2β_0, β_1, …, β_{N−1}, 2β_N} and where c ≠ c(m, P_m).

In the following we will state some weighted Remez and Nikolski inequalities. In the unweighted case, for every fixed constant α > 0, there exists a positive constant c such that, for every polynomial Pm ∈ Pm and for every measurable set Em ⊂ (−1, 1) for which the measure of the set {θ ∈ (0, π) : cos θ ∈ Em } does not exceed α m , we have the Remez inequality Pm p ≤ c Pm Lp ((−1,1)\Em) ,

c = c(m, Pm , Em ) .

The classical unweighted Nikolski inequality is of the form 1

1

Pm q ≤ c m2( p − q ) Pm p

∀ Pm ∈ Pm ,

where 1 ≤ p < q ≤ ∞ and where c is a constant independent of m and Pm . In the weighted case, the Remez and Nikolski inequalities are not true when the weight function u(x) satisfies only the doubling property (3.2.1). This is due to the fact that a doubling weight can vanish on a set of positive measure (cf. [145]). Nevertheless, both Remez and Nikolski inequalities are satisfied under stronger conditions. The following two propositions, the proofs of which can be found in [145], show that the Remez and Nikolski inequalities can be stated for all A∞ weights. Proposition 3.2.21 ([145], (7.15), (7.16), (7.32)) Let u ∈ A∞ ∩Lp and p ∈ [1, ∞) or u ∈ A∗ ∩ L∞ and p = ∞. Then, for every fixed α > 0, there exists a positive constant c such that, for every polynomial Pm ∈ Pm and for every measurable set α Em ⊂ (−1, 1) with |{θ ∈ (0, π) : cos θ ∈ Em }| ≤ m , Pm up ≤ c Pm uLp ((−1,1)\Em) , where the constant c does not depend on m, Pm , and Em .

(3.2.18)

86

3 Weighted Polynomial Approximation and Quadrature Rules on (−1, 1)

Proposition 3.2.22 ([145], (7.19), (7.20)) Let u ∈ A∞ ∩ Lp and p ∈ [1, ∞). Then, there exists a constant c depending only on u, p, and q such that, for every polynomial Pm ∈ Pm ,   1 1  1 1  − − Pm ϕ p q u ≤ c m p q Pm up , q

1

1

Pm uq ≤ c m2( p − q ) Pm up , (3.2.19)

where 1 ≤ p < q < ∞ and ϕ(x) =



1 − x 2.

If the weight u fulfils the stronger A∗ -condition, then, in case 1 ≤ p < q = ∞, the Nikolski inequalities in Proposition 3.2.22 are of the form ([145, (7.31)]).   1   Pm ϕ p u

1



≤ c m p Pm up

2

and Pm u∞ ≤ c m p Pm up ,

(3.2.20)

where c is a positive constant independent of m and Pm . The proofs of the previous üroüositionss follow by applying Lemma 3.2.16 and Lemma 3.2.17 (see [145]). In particular, the main idea is to replace the weight u with um by using Lemma 3.2.16 and subsequently to replace um with a polynomial qm by applying Lemma 3.2.17. In this way, the proof of a weighted polynomial inequality is reduced to the analogous one in the unweighted case. As an example, we give the proof of the Markov inequality (3.2.15): defined in (3.2.10). Due to Lemma 3.2.17 we have ProofLet um be the weight   P  u ∼m,P P  um  . Moreover, using (3.2.12) we can say that there exists a m m m p p polynomial qm such that          P um  ≤ c P  qm  ≤ c (Pm qm )  + Pm q   . m m m p p p p Thus, by applying the unweighted Markov inequality to the first term and (3.2.12) to the second one, we have    ! " P u ≤ c m2 Pm qm p + Pm um p . m p Now, the assertion follows by applying Lemma 3.2.16 and Lemma 3.2.17.

 

3.2.3 Christoffel Functions with Respect to Doubling Weights The proofs of the statements presented in this section  ∞be found in [147]. If w is  wcan a weight function on the interval (−1, 1), then by pm we refer to the system m=0 of orthonormal polynomials with respect to w(x), i.e., w pm (x) = γm x m + lower degree terms , γm = γmw > 0 ,

3.2 Polynomial Approximation with Doubling Weights on the Interval (−1, 1)

87

and 

1 −1

w pm (x)pnw (x)w(x) dx = δm,n :=

0 : m = n , 1 : m = n.

For 1 ≤ p < ∞ and u ∈ Lp , we define the m−th Christoffel function of order p as # 1 $ u,p λm (t) = inf |P (x)|p u(x) dx : P ∈ Pm , |P (t)| = 1 . −1

In case of p = 2, the Christoffel function is a basic tool in different contexts, in particular in the theory of orthogonal polynomials, numerical quadratures, and onesided approximation. In this case, we have λum (t)

:=

λu,2 m (t)

=

m−1 !

"2 pku (t)

−1 .

(3.2.21)

k=0 u,p

The asymptotic behaviour of λm (t) is given by the following proposition the proof of which can be found in [145]. Proposition 3.2.23 ([145], (7.14)) Let u ∈ DW and p ∈ [1, ∞). Then there exists a constant c such that, for all m ∈ N and t ∈ (−1, 1), u,p

λm (t) 1 ≤ ≤ c, c um (t)m (t)

(3.2.22)

where c = c(m, t) and where um (t) and m (t) are defined in (3.2.10) and (3.2.11), respectively. We remark that, if u ∈ GDT (cf. (3.2.2)), then ([139, 145, 147])  um (t) ∼t,m
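For the Legendre weight u ≡ 1 the quantities in (3.2.21) and (3.2.22) are easy to evaluate; the following sketch is my own illustration (not the book's code) and computes λ^u_m(t) from the orthonormal polynomials and compares it with u_m(t)Δ_m(t), where here u_m(t) = 1.

import numpy as np
from numpy.polynomial import legendre

# Sketch: lambda_m^u(t) = ( sum_{k<m} p_k^u(t)^2 )^(-1) for the Legendre weight
# u(x) = 1, compared with Delta_m(t) = sqrt(1-t^2)/m + 1/m^2, cf. (3.2.22).

def christoffel(m, t):
    s = sum(((2 * k + 1) / 2.0) * legendre.Legendre.basis(k)(t) ** 2 for k in range(m))
    return 1.0 / s            # formula (3.2.21)

m = 40
for t in (0.0, 0.5, 0.99):
    delta = np.sqrt(1 - t ** 2) / m + 1.0 / m ** 2
    print(t, christoffel(m, t), delta)   # the two columns agree up to fixed factors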

√ 1 1−t + m



20 W0



1−t +

1 m



%r−1     &    5 √ √ 1 k 1 1 2r 1 · Wk |t − tk | + 1+t + Wr 1+t + |t − tk | + . m m m m k=1

u , k = 1, 2, . . . , m of the mth orthonormal It is well known that the zeros xmk = xmk u polynomial pm (x), are simple and lie in (−1, 1). Hence, the natural question arises if these zeros are arcsin distributed, i.e., does there exist a positive constant c independent of m and k such that

c c−1 ≤ θmk − θm,k+1 ≤ , m m

k = 0, 1, 2, . . . , m, m ∈ N ,

(3.2.23)



where xmk = cos θmk with θmk ∈ [0, π], θm0 = 0, θm,m+1 = π. The following proposition gives an answer (see also Exercise 3.2.25).  u∞ Proposition 3.2.24 ([147], Theorem 1) Let u ∈ DW, let pm be the correm=0 sponding orthonormal polynomial system, and let −1 =: xm0 < xm1 < xm2 < u u. . . . < xmm < 1 =: xm,m+1 , where xmk = xmk , k = 1, . . . , m are the zeros of pm Then, there exists a positive constant c such that c−1 ≤

xm,k+1 − xmk ≤ c, m (xmk )

k = 0, 1, 2, . . . , m ,

(3.2.24)

where c = c(m, k) and where m (t) is defined in (3.2.11). Exercise 3.2.25 Show that (3.2.24) is equivalent to (3.2.23). Now let us introduce the Christoffel numbers λumk = λum (xmk ) ,

(3.2.25)

which appear in the Gauss-Jacobi quadrature rule 

1 −1

f (x)u(x) dx ∼

m 

u λumk f (xmk ).

(3.2.26)
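For the Legendre weight the nodes x_{mk} and the Christoffel numbers λ^u_{mk} appearing in the Gauss rule (3.2.26) are available from standard routines. The sketch below is an illustration of my own under the assumption u ≡ 1 (it is not the book's code); it also verifies that the rule integrates polynomials of degree up to 2m − 1 exactly, cf. (3.2.30).

import numpy as np
from numpy.polynomial import legendre

# Sketch: Gauss rule (3.2.26) for the Legendre weight u(x) = 1.
# leggauss returns the zeros x_{mk} of p_m^u and the Christoffel numbers lambda_{mk}^u.

m = 12
x, lam = legendre.leggauss(m)

# exactness on P_{2m}: integrate monomials of degree 2m-1 and 2m-2
for deg in (2 * m - 1, 2 * m - 2):
    approx = np.sum(lam * x ** deg)
    exact = 0.0 if deg % 2 else 2.0 / (deg + 1)
    print(deg, approx, exact)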

k=1

u (x, t) by Exercise 3.2.26 We define the Darboux kernel Km

u (x, t) = Km

m−1 

pku (x)pku (t) .

(3.2.27)

k=0

Starting from the three-term recurrence relation u u u u u u βm+1 pm+1 (x) = (x − αm )pm (x) − βm pm−1 (x) ,

(3.2.28)

u ∈ R , β u > 0 , m ∈ N , and pu (x) ≡ 0, show that the equality where αm 0 m −1 u (x, t) = Km

u u (x)pu u γm−1 pm m−1 (t) − pm (t)pm−1 (x) γm x−t

(3.2.29)

u (x). is true with γm = γmu > 0 being the leading coefficient of pm  1 umk (x)u(x) dx, where by umk (x) we refer to Exercise 3.2.27 Define μumk = −1

the fundamental Lagrange interpolation polynomials with respect to the nodes



u , namely xmk

umk (x) =

n 5

u x − xmj

xu j =1,j =k mk



u xmj

=

u (x) pm u u u  (pm ) (xmk )(x − xmk )

,

k = 1, . . . , m

(see also Sect. 3.2.5). Show that the equality 

1 −1

p(x)u(x) dx =

m 

u μumk p(xmk )

(3.2.30)

k=1

holds true for all polynomials p ∈ P2m . Exercise 3.2.28 Prove that μumk = λumk

u u and umk (x) = λumk Km (x, xmk ),

k = 1, . . . , m ,

(3.2.31)

holds true. Adjoining Christoffel numbers are uniformly of the same order as the following proposition shows. Proposition 3.2.29 ([147], Theorem 2) If u ∈ DW, then there is a positive constant c independent of m and k such that, for all m ∈ N and k = 1, . . . , m − 1, we have c−1 ≤

λumk λum,k+1

≤ c.

(3.2.32)

Propositions 3.2.24 and 3.2.29 have a converse version as the following statement shows. Proposition 3.2.30 ([147], Theorem 3) If u(x) is a weight function on (−1, 1) satisfying both (3.2.24) and (3.2.32), then u ∈ DW. All the results shown in this subsection and in the previous one hold also in a local sense, i.e., if a weight u defined on (−1, 1) fulfils the doubling property (3.2.1) in some interval (a, b) ⊂ (−1, 1), then the above results hold in (a, b). Observe that the problem of establishing pointwise estimates for the polynomials u } , u ∈ DW, remains open. The question is: “For u ∈ DW, does of the sequence {pm m there exist a constant c = c(m) such that ! u "2 pm (x) ≤

! u "2 ! u "2 ! u "2 p0 (x) + p1 (x) + . . . + pm−1 (x) c =c m λum (x) m



is true for all x ∈ (−1, 1) ?” This question seems to be natural since this kind of estimate is satisfied by important weights belonging to the doubling class, for instance those considered in [139, 149]. In this subsection, finally we will prove the first Marcinkiewicz inequality for doubling weights, which is useful in several situations. Proposition 3.2.31 Let 1 ≤ p < ∞, u ∈ Lp ∩ DW, and let −1 ≤ ym0 < ym1 < . . . < ymm < ym,m+1 ≤ 1 be distributed as xmk in (3.2.24). Then, with um (t) defined in (3.2.10) and for a fixed ∈ N, m 

 |Q(ymk )um (ymk )|p ymk ≤ c

k=0

1 −1

|Q(t)um (t)|p dt

∀ Q ∈ P m ,

(3.2.33)

where ymk := ym,k+1 − ymk and c = c(m, Q). Proof For f ∈ C1 [a, b], a < b, and 1 ≤ p < ∞, we have the inequality - (b − a) |f (a)| ≤ 2 p

b

p−1

 |f (t)| dt + (b − a) p

b

p

a

.



|f (t)| dt . p

(3.2.34)

a

Indeed, 

b

 f (x) dx =

a

b

 [f (x) − f (a)] dx + f (a)(b − a) =

a

 =

b



a b

x

f  (t ) dt dx + f (a)(b − a)

a

f  (t )(b − t ) dt + f (a)(b − a)

a

implies (3.2.34) in case p = 1, namely 

b

|f (a)|(b − a) ≤

 |f (x)| dx + (b − a)

a

b

|f  (x)| dx .

a

In case 1 < p < ∞, we apply Hölder’s inequality to the two integrals on the right-hand side of the last inequality and get, for p1 + q1 = 1, |f (a)|(b − a) ≤ (b − a)

1 q



b

|f (x)| dx p

 p1

+ (b − a)

1+ q1



a

b



|f (x)| dx p

a

or 1 p



b

|f (a)|(b − a) ≤ a

|f (x)| dx p

 p1



b

+ (b − a) a



|f (x)| dx p

 p1 .

 p1



We arrive at (3.2.34) by taking into account the inequality   (α + β)p ≤ 2p−1 α p + β p

(α, β ≥ 0), p

which can be proved by maximizing the function f (x) = (x+1) x p +1 , x ≥ 0. Using (3.2.34) with a = yk := ymk , b = yk+1 , k = 0, 1, 2, . . . , m, f = Q, and by multiplying with [um (yk )]p , we get (yk := ymk ) -

yk+1

|Q(yk )um (yk |p yk ≤ 2p−1

 |Q(t)um (yk )|p dt + (yk )p

yk

- ≤c

yk+1

yk+1

   Q (t)um (yk )p dt

.

yk

 |Q(t)um (t)|p dt +

yk

yk+1

.    Q (t)m (t)um (t)p dt ,

yk

where we took into account (3.2.24) and Corollary 3.2.15. Summing up on k gives m 

- |Q(yk )um (yk )| yk ≤ c



1

p

−1

k=0

|Q(t)um (t)| dt + p

1

−1

.    Q (t)m (t)um (t)p dt .

 

It remains to apply (3.2.16) to arrive at (3.2.33) (cf. also Remark 3.2.19). In particular, if we replace ymk with the Jacobi zeros xk = 6 xk = xk+1 − xk ∼

1 − xk2 m

ϕ(xk ) , m

=

w , xmk

that satisfy

k = 1, 2, . . . , m − 1,

using analogous arguments, we can prove that, for all Q ∈ P m , m 

 |Q(xk )u(xk )| xk ≤ c p

k=1

1

−1

|Q(t)u(t)|p dt

(3.2.35)

holds true, where 1 ≤ p < ∞, u(t) is another Jacobi weight, and c = c(m, Q).

3.2.4 Convergence of Fourier Sums in Weighted Lp -Spaces  w ∞ w (x) = γ x m + . . ., Let w be a weight function on (−1, 1) and pm , pm m m=0 w γm = γm > 0, be the corresponding system of orthonormal polynomials. For f ∈ L1w = L1w (−1, 1) (cf.  2.4.1), the m-th Fourier sum of f with respect  Sect. w ∞ is given by to the orthonormal system pm m=0 w f = Sm

m−1  k=0

 ck pkw ,

ck = ckw (f ) =

1

−1

f (x)pkw (x)w(x) dx .



Using the Darboux kernel (cf. (3.2.27) and (3.2.29)) w (x, t) = Km

m−1 

pkw (x)pkw (t) =

k=0

w (x)pw (t) − pw (t)pw (x) γm−1 pm m m−1 m−1 γm x−t

we can write 

w f Sm



 (x) =

1 −1

w Km (x, t)f (t)w(t) dt .

(3.2.36)
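The partial sums S^w_m f = Σ_{k<m} c_k p^w_k are straightforward to compute for the Legendre weight. The following sketch is my own illustration (function names fourier_sum_coeffs and eval_fourier_sum are hypothetical); the coefficients are obtained by a Gauss rule and the weighted L² error equals the coefficient tail by Parseval's identity, as discussed further below in this section.

import numpy as np
from numpy.polynomial import legendre

# Sketch: truncated Fourier-Legendre sums S_m^w f for the weight w(x) = 1.

def fourier_sum_coeffs(f, m, n_quad=400):
    x, wq = legendre.leggauss(n_quad)
    fx = f(x)
    return np.array([np.sum(wq * fx * np.sqrt((2 * k + 1) / 2.0)
                            * legendre.Legendre.basis(k)(x)) for k in range(m)])

def eval_fourier_sum(c, x):
    return sum(ck * np.sqrt((2 * k + 1) / 2.0) * legendre.Legendre.basis(k)(x)
               for k, ck in enumerate(c))

f = lambda x: np.abs(x)
x, wq = legendre.leggauss(400)
for m in (4, 8, 16, 32):
    c = fourier_sum_coeffs(f, m)
    err = np.sqrt(np.sum(wq * (f(x) - eval_fourier_sum(c, x)) ** 2))
    print(m, err)     # decays roughly like m^(-3/2) for f(x) = |x|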

w f = f if f ∈ P Of course, Sm function f , a natural task is to m−1 . Foran arbitrary  w f ∞ in different function spaces. For study the convergence of the sequence Sm m=1 p f ∈ Lu , we have

        f − S w f u ≤ 1 + S w  m m u,p Em (f )u,p , p  w    = S w  p is the operator norm of S w : Lpu −→ Lpu , i.e., where Sm m L(L ) m u,p u

   w   S  = sup  S w f u : f ∈ Lpu , f up ≤ 1 m u,p m p with .p = .Lp (−1,1) , and where   Em (f )u,p = inf (f − P ) up : P ∈ Pm p

is the best approximation of f in Lu by polynomials from  Pwm. Consequently, due  to the Banach-Steinhaus Theorem 2.2.1, if lim supm→∞ Sm = ∞, then the u,p  w ∞ p sequence Sm f m=1 diverges (in the sense of convergence in Lu ) for some function p p f belonging to Lu . Moreover, we have limm→∞ Em (f )u,p = 0 for all f ∈ Lu if p and only if the Weierstrass theorem holds true in Lu . Therefore, we aregoing  to w ∞ investigate the uniform boundedness (in operator norm) of the sequence Sm m=1 + p of Fourier projections in Lu -spaces, in which the set P := ∞ n=1 Pn of all algebraic polynomials is dense. us give a necessary condition for the boundedness of the sequence  At first,let ∞ S w  . The following proposition states that the uniform boundedness of m u,p m=1 p

p

w uniform boundedness of the orthonormal system Smw : ∞Lu −→ Lu pimplies the q pm m=0 both in Lu and in Lu−1 w , where p1 + q1 = 1. For its proof, we recall the estimate ([161, Theorem 2], see also [179, Theorem 4.7.12])    w   g  1       lim inf pm g p ≥ (3.2.37)  √wϕ  , m→∞ max p1 − 12 ,0 √ p 2 π



√ where ϕ(x) = 1 − x 2 and which is true for 1 ≤ p ≤ ∞, for an almost everywhere positive weight function w, and for an arbitrary measurable function g. Proposition 3.2.32 Let 1 < p < ∞ and let w and u be weight functions, where w is almost everywhere positive and where u ∈ Lp . If the inequality  w    S f u ≤ cf up , m p

c = c(m, f ) ,

p

holds true for all f ∈ Lu and all m ∈ N, then, for  w  up < ∞ sup pm

and

m∈N

1 p

+

1 q

(3.2.38) = 1,

   w −1  sup pm u w < ∞ , q

m∈N

(3.2.39)

as well as u ∈ Lp √ wϕ

w 1 , u u

and

/

w ∈ Lq . ϕ

(3.2.40)

p

Proof If (3.2.38) holds true for all f ∈ Lu , then  w   w  S  m+1 f − Sm f u p ≤ 2cf up , i.e.,   w   p u  m p

1 −1

w pm (t) f (t)u(t)

 w(t)  dt  ≤ 2cf up . u(t)

Consequently,  w  p u m p

sup p

f ∈Lu ,f up =1

   

1 −1

w pm (t)

   w   w w(t) f (t)u(t) dt  = pm up m Lp →C ≤ 2c , u(t)

w : Lp −→ C is defined by where the linear functional m

 w m (g) =

1 −1

w pm (t)

w(t) g(t) dt u(t)

 w  w −1   p and the norm of which is equal to m = pm u wq . Hence, (3.2.38) L →C implies   w   w −1  p u  u w p  ≤ 2c . m m p q

(3.2.41)



Applying (3.2.37) g = u−1 w and taking into account   6to   to g = u and 6 (3.2.41)  √u  1 w u 1 p as well as  wϕ  > 0 and  u ϕ  > 0 lead to √wϕ ∈ L and u wϕ ∈ Lq p

q

(cf. (3.2.40)). Using that we observe, for all sufficiently large m,  w  p u  w  m p u =  6 p m p 1 w u ϕ 

 /  1 w   u ϕ  q

(3.2.37)



  w    w −1  cpm up pm u w ≤ c q

q

 w −1  and, consequently, also pm u wp ≤ c. Thus, (3.2.39) is proved, which also w q yields u ∈ L .   √ The case, where w is an arbitrary weight function, u = w, and p = 2, is particularly simple and interesting. By well known arguments one easily get  √   f − Swf w2 = Em (f )√w,2 . m Moreover, if P is dense in the space L2√w , one obtains Parseval’s identity ∞  √ 2   w  f w = c (f )2 k 2 k=0

and, consequently,  Em (f )√w,2 =

∞    w c (f )2

1 2

k

.

k=m

For arbitrary weight functions u and w, the determination of sufficient conditions, under which (3.2.38) holds true is an open problem. Its solution is known only for special classes of weights. The main difficulty is to find pointwise estimates for the w (x) of the orthonormal system. polynomials pm As an example, we want to consider the expansion of a function f with respect to a system of Jacobi polynomials. The following proposition was proved by Pollard [191] for α, β ≥ −1/2 (cf. also [216]), and then the proof was completed by Muckenhoupt [167] for the remaining cases. Here, we are going to give a simple proof. Proposition 3.2.33 Let 1 < p < ∞ and let w = v α,β as well as u = v γ ,δ be Jacobi weights with α, β > −1 and γ , δ > − p1 . Then, the inequality  w    S f u ≤ cf up , m p

c = c(m, f ) ,

(3.2.42)



p

holds true for all f ∈ Lu and for all m ∈ N if and only if the conditions (3.2.40)    1 α 1 1 1 are fulfilled, which are equivalent to γ − 2 − 2 + p  < min 4 , 2 + α2 and       β β δ − 12 − 2 + p1  < min 14 , 12 + 2 in the present case. Proof Let us prove that the conditions in (3.2.40) imply (3.2.42). First of all we observe that, using the H. Pollard decomposition ([191], see also [12]), the Darboux w (x, t) (cf. (3.2.27)) can be written in the form kernel Km wϕ 2

w w w Km (x, t) = −αm pm (x)pm (t) + βm

wϕ 2

w (x)p 2 2 w pm m−1 (t)[ϕ(t)] − pm−1 (x)[ϕ(x)] pm (t)

x−t

,

(3.2.43)

where αm ∼ 1 ∼ βm . Hence, using the Hilbert transform H (cf. (3.2.7)) and (3.2.36), we can write  w  w (x) Sm f (x) = −αm pm

 1 −1

w (t)f (t)w(t) dt pm

  . w (x) Hp wϕ 2 wϕ 2 f (x) − p wϕ 2 (x)[ϕ(x)]2 Hp w wf  (x) . − βm pm m m−1 m−1

Taking the Remez inequality (3.2.18) with   (−1, 1) \ Em = Im := −1 + m−2 , 1 − m−2 , we conclude that  w       S f u ≤ c  S w f u p m m p L (I

m)

  w     ≤ c pm u Lp (Im ) 

1 −1

 

w pm (t)f (t)w(t) dt 

      w wϕ 2 Hpm−1 wϕ 2 f u +pm

Lp (Im )

   wϕ 2  w   + pm−1 ϕ 2 Hpm wf u

.

Lp (Im )

=: c (A1 + A2 + A3 ) .

In order to estimate the above norms, we recall some known facts. Since the inequality (see (2.4.11))  w  p (x) ≤  m √ 1−x+

c α+1/2 √ 1 1+x+ m

1 m

β+1/2 ,

|x| ≤ 1 ,



holds true, it follows that  ⎧  w √ pm (x) w(x)ϕ(x) ≤ c : x ∈ Im (cf. also [206, (8.21.18)]) , ⎪ ⎪ ⎨    1  w (x) ≤ c : x ∈ −1, −1 + m−2 , m−β− 2 pm ⎪ ⎪    ⎩ 1  w (x) ≤ c : x ∈ 1 − m−2 , 1 , m−α− 2 pm

(3.2.44)

where c = c(m). For the term A1 , using the Hölder inequality, the first estimate in (3.2.44), as well as the Remez inequality (3.2.18) and taking into account that w q u ∈ L , we get  w  w  w   w   w   w  f up A1 ≤ pm uLp (Im ) pm uLp (Im ) pm  f up ≤ cpm  u q u Lq (Im )   /   1 w  u     f up ≤ cf up . ≤ c   √wϕ  ϕ Lq (Im ) Lp (Im ) u In order to estimate the terms A2 and A3 , we recall that the Hilbert transform H satisfies (cf. Sect. 3.2.1) (Hf ) up ≤ cf up

for all f ∈ Lp

(3.2.45)

with 1 < p < ∞, c = c(f ), if and only if u ∈ Ap (which means, for a Jacobi weight u, that u ∈ Lp and u−1 ∈ Lq , see Exercise 3.2.8). For the term A2 , by (3.2.44) we have   u    wϕ 2 2   A2 ≤ c  Hpm−1 wϕ f √ . wϕ Lp (Im ) Since the assumptions (3.2.40) imply √uwϕ ∈ Ap , by using (3.2.45), (3.2.18) for  u = wϕ 3 ∈ A∗ ∩ L∞ (see Exercise 3.2.12), and (3.2.44) (with wϕ 2 instead of w), we conclude that        wϕ 2  wϕ 2 6  wϕ 2 6   u  2 3 3    f up ≤ cf up . A2 ≤ c  pm−1 wϕ f √wϕ  ≤ cpm−1 wϕ  f up ≤ cpm−1 wϕ  ∞ ∞ L (Im ) p

The estimate of the term A3 is similar to the latter, if α,6β ≥ −1/2 (which means √ wϕ ∈ A∗ ∩ L∞ , see Exercise 3.2.12). Indeed, since u wϕ ∈ Ap , in this case we



get, using the same arguments as in the estimation of A2 , /    wϕ 2 6   ϕ 3 Hpw wf u  A3 =  wϕ p m  m w

Lp (Im )

/    w   w√  ϕ  ≤ c ≤ cpm wϕf up  Hpm wf u w  p L (Im )  w√   w√  ≤ cpm wϕ ∞ f up ≤ cpm wϕ L∞ (Im ) f up ≤ cf up . For α < −1/2 or β < −1/2, we use again the first line in (3.2.44) for wϕ 2 instead of w to estimate      6 w −1 ϕ  A3 ≤ c  H χ p wf u w −2 (−1,−1+m ) m  

Lp (Im )

     6 w −1 ϕ  + H χ p wf u w Im m  

     6 w −1 ϕ  + H χ p wf u w −2 (1−m ,1) m  

Lp (Im )

=: c (B1 + B2 + B3 ) . Lp (Im )

For the term B2 we easily get   w√ B2 ≤ cχIm pm wϕf up ≤ cf up . We are going to estimate only the term B1 in case of β < −1/2, since the term B3 and the case α < −1/2 can be treated in a similar way. We can write p B1

 ≤

0

−1+

1 m2

 p   −1+ 12 pw (t)w(t)f (t) ϕ(x)  m  m dt u(x)  dx   −1 t −x w(x)   + 0

1−

1 m2

 p   −1+ 12 pw (t)w(t)f (t) ϕ(x)  m  m dt u(x)  dx =: J1 + J2 .   −1 t −x w(x) 

For J1 , we have  J1 =

0 −1+

 ≤c

1 m2

  p  −1+ 1  −β− 21 w pm (t)w(t)f (t) ϕ(x)  m2 m  β+1/2 dt u(x) m  dx  t −x w(x)  −1

0

−1+

1 m2

  p  −1+ 1  −β− 21 w 1 pm (t)w(t)f (t) ϕ(x)  m2 m  dt u(x) √  dx ,  w(x)ϕ(x) −1 t −x w(x) 



taking into account that β

1

1

1

mβ+ 2 ≤ (1 + x)− 2 − 4 ∼x [w(x)ϕ(x)]− 2

for x ∈ [−1 + m−2 , 0] .

(3.2.46)

Since δ − β > δ + 12 , we have, due to (3.2.40), wu ∈ Ap (−1, 0) and we can use p the boundedness of the Hilbert transform in Lw−1 u (−1, 0). Consequently, in view of the second line of (3.2.44), we obtain  p  −1+ 1  −1+ 12 m−β− 12 pw (t)w(t)f (t) u(x)  m m2  p m |f (x)u(x)|p dx ≤ cf up . dt   dx ≤ c t −x w(x)  −1 −1  −1

 J1 ≤ c

0

To estimate J2 , we write (again using (3.2.44))    p    −1+ 12  1− 12 w(t) 1 m  −β− 1 w m  2 J2 ≤ c pm (t) f (t)u(t) dt m   u(t) ϕ(t) −1 0

  p  ϕ(x)   u(x)  dx  w(x) 

 / p  / p  1 w ϕ   f up  = cf up ≤ c p u p, u ϕ  w p q

where we took into account |t − x|−1 ≤ c for −1 < t < −1 + m−2 and 0 < x < √ β 1 1 1−m−2 as well as w(t)ϕ(t) ≤ c(1+t) 2 + 4 ≤ c m−β− 2 for −1 < t < −1+m−2 . Thus, the proof of the sufficiency of (3.2.40) is complete. For the proof of the necessity of (3.2.40) we have only to refer to Proposition 3.2.32.   Let us remark that the previous proof can be adapted also to more general systems of orthonormal polynomials, if the pointwise estimates of the polynomials are known, for instance, to orthogonal polynomials with respect to Badkov or generalized Ditzian-Totik weights (see, e.g., [119, 139, 149]).

3.2.5 Lagrange Interpolation in Weighted Lp -Spaces Here we present results from [139] and [40]. We consider the Lagrange ∞  winterpolation based on the zeros of orthogonal polynomials. As above, let pm m=0 be the sequence of orthonormal polynomials with respect to a weight function w , k = 1, 2, . . . , m the zeros of pw (x), w on (−1, 1) and denote by xk = xmk m −1 < x1 < x2 < · · · < xm < 1. For a continuous function f : (−1, 1) −→ C, by Lw interpolates m f we denote the unique polynomial of degree at most m − 1, which  w f (x ) = f (x ), the function f at the nodes xk , i.e., Lw f ∈ P and L m k k m m k = 1, 2, . . . , m. Introducing the fundamental Lagrange polynomials with respect



to the nodes xk , w mk (x) = 

w pm



w (x) pm

(xk )(x − xk )

m 5

=

j =1,j =k

x − xk , xj − xk

k = 1, 2, . . . , m ,

we get Lw mf =

m 

f (xk ) w mk .

(3.2.47)

k=1 w w w Recalling that w mk (x) = λmk Km (x, xk ) (see Exercise 3.2.28), where λmk are the w Christoffel numbers (cf. (3.2.25)) and Km (t, x) is the Darboux kernel (see (3.2.27)), relation (3.2.47) can be rewritten as m   w  w Lm f (x) = λw mk Km (x, xk )f (xk ) .

(3.2.48)

k=1

  Hence, Lw m f (x) is the discrete version of the Fourier sum (3.2.36) obtained by applying the Gaussian rule to (3.2.36). In particular, if w is a Szeg˝o weight, i.e., w(cos ·) ∈ L1 (0, π), then we can use the H. Pollard decomposition (3.2.43) to get wϕ 2

m w (x)p 2   w  pm m−1 (xk )[ϕ(xk )] Lm f (x) = βm f (xk ) x − xk

with βm ∼ 1 .

k=1

(3.2.49) The behaviour in weighted uniform norms of the Lagrange interpolation polynomials, related to different systems of nodes and various classes of continuous functions on (−1, 1), has been widely investigated in the literature. The reader can consult [121] and the  references   cited therein. Here, we are going to state some estimates of  the norm  Lw m f u p for several systems of nodes and different function classes. The first result on this topic √ is due to P. Erdös and P. Turán [49] and is devoted to the special case u = w and p = 2. Indeed, by using the Gaussian rule for a continuous function f ∈ C[−1, 1] and for an arbitrary weight function w, the inequality m  w  √ 2  2 2 L f  = w λw m mk [f (xk )] ≤ f ∞ w1 2 k=1

follows. Hence, one deduces   √   f − Lw f w 2 ≤ 2 w1 Em (f )∞ , m

(3.2.50)



  0,0 where Em (f )∞ = Em (f ) = inf f − P ∞ : P ∈ Pm . As a consequence of this estimate we get the weighted L2 -convergence of the Lagrange interpolation polynomials for every continuous function. Nevertheless, for arbitrary weight functions w, u and for p = 2, to prove estimates of the form  w    L f u ≤ cf ∞ , m p

c = c(m, f ) ,

(3.2.51)

is still an open problem. Motivated by a problem raised by P. Turán, several authors have shown necessary conditions for the validity of (3.2.51). We only mention P. Nevai [176, Theorems 15 and 16 on pp. 180, 181] for the case of Szeg˝o weights w and u and [148] for arbitrary systems of nodes. Roughly speaking, from the results of these authors it follows that the validity of for some p ∈ (1, ∞)  (3.2.51)  w ∞ in the space Lp , i.e., implies the uniform boundedness of the system p u m m=0  w  supm∈N0 pm up < ∞. Another long-standing problem, raised by G. Freud [56], is to find sufficient conditions for the validity of (3.2.51). For this, the main difficulty is to obtain pointwise estimates of the polynomials of the orthonormal system. Nevertheless, after a first theorem proved by P. Erd˝os and E. Feldheim [48], many years later the well-known results of R. Askey [12] and P. Nevai [178] appeared in the literature. For the sake of completeness, we recall the statements proved by P. Nevai [178], since it includes the results of the other two authors and its proof  does not pdepend on the behaviour of the related Fourier sums. We will write g ∈ L log+ L if 

1 −1

  g(t) log+ |g(t)|p dt < ∞ ,

where log+ x = log max {1, x} for x ∈ R. Proposition 3.2.34 ([178], Theorems 1 and 2) Let 1 ≤ p < ∞, let w be a p generalized Jacobi weight, and let u be a weight function with u ∈ L log+ L . Then, the inequality  w    L f u ≤ cf ∞ , m p

c = c(m, f ) ,

(3.2.52)

holds true for all bounded functions f : [−1, 1] −→ C and all m ∈ N if and only if √u ∈ Lp . wϕ For u ∈ Lp , from Proposition 3.2.34 we can easily conclude that, for 1 ≤ p < ∞,     f − Lw f u ≤ c Em (f )∞ m p

∀ f ∈ C[−1, 1], ∀ m ∈ N,

(3.2.53)



holds true with c = c(m, f ) if and only if u ∈ Lp . √ wϕ

(3.2.54)

Moreover, with a small modification in the proof of Proposition 3.2.34, given in [178], for u = v γ ,δ with γ , δ ≥ 0, one obtains that     f − Lw f u ≤ c Em (f )u,∞ m p

∀ f ∈ Cu , ∀ m ∈ N , c = c(m, f )

if and only if u ∈ Lp √ wϕ

and

√ wϕ ∈ L1 . u

Here Em (f )u,∞ denotes the error of best approximation by polynomials of degree less than m in the weighted space Cu of continuous functions,   Em (f )u,∞ = inf (f − P )u∞ : P ∈ Pm .     In all mentioned cases, the norm  Lw m f u p is estimated by means of the uniform or weighted uniform norm of the function f . But, we recall that the Lagrange interpolation polynomial based on the zeros of orthogonal polynomials is the discrete version of the Fourier sum related to the same orthogonal system.  Hence,  in a natural way the problem arises if it is possible to estimate the norm  Lw f u m

p

by means of a weighted Riemann sum of the function f , i.e., to prove inequalities of the form 1  m p   w   p  L f u ≤ c |f (x )u(x )| x , k k k m p k=1 w , x = x where xk = xmk k k+1 − xk , xm+1 = 1, which would be an analogue to the estimate

 w    S f u ≤ cf up . m p √ In the special case u = w, w = v α,β , α, β > −1/2, and p = 2 this is evident. Indeed, since λw mk ∼m,k xk w(xk ), one has m  m 2    w  √ 2    2  L f w 2 = λw [f (x )] ∼ ) w(x ) (x f  xk , k m,f k k m mk k=1

k=1

(3.2.55)



and the last term is a Riemann sum for the integral 

1 −1

2     f (x) w(x) dx .

The following proposition generalizes (3.2.55). Proposition 3.2.35 Let w and u be Jacobi weights with u ∈ Lp and 1 < p < ∞, w , k = 1, . . . , m, and x and let xk = xk+1 − xk with xk = xmk m+1 = 1. Moreover, 1 1 p + q = 1. Then, the equivalence  m 

 w    L f u ∼m,f m p

1

p

xk |f (xk )u(xk )|p

∀ f ∈ C(−1, 1)

(3.2.56)

k=1

holds true if and only if u ∈ Lp √ wϕ

and

√ wϕ ∈ Lq . u

(3.2.57)

that (3.2.57) implies (3.2.56). For this, let Im = Proof At−2first, we−2prove  −1 + m , 1 − m . Then, due to the Remez inequality (3.2.18), we have   w        L f u ≤ c Lw f u p = c sup A(g) : g ∈ Lq (Im ), gLq (Im ) = 1 , m m p L (Im ) where  A(g) = Im





Lw m f (x)u(x)g(x) dx =

m  k=1

f (xk )u(xk )   w (x ) u(xk ) pm k

 Im

w (x) pm u(x)g(x) dx . x − xk

Thus, since ([176])  1  ∼m,k xk w(xk )ϕ(xk )  (pw ) (xk ) m

we get A(g) ≤ c

m 

|f (xk )u(xk )|xk

k=1

√  w(xk )ϕ(xk )  g m (xk ) , u(xk )

where, for every Q ∈ Pm with Q(x) = 0 for all x ∈ (−1, 1), g m (t)

 = Im

w (x)Q(x) − pw (t)Q(t) g(x)u(x) pm m dx x−t Q(x)

(3.2.58)



is a polynomial of degree at most 2m − 2, and, consequently, % A(g) ≤ c

m 

xk |f (xk )u(xk )|

p

&1 % m p 

k=1

% =: c

k=1

m 

√ q & q1   w(xk )ϕ(xk ) g  m (xk ) xk  u(xk )

&1

p

xk |f (xk )u(xk )|p

B(g) .

k=1

With the help of the Marcinkiewicz inequality (3.2.35) and the Remez inequality (3.2.18), we get B(g)

(3.2.35)  √wϕ  ≤



c

u

 g m  

(3.2.18)  √wϕ  ≤

Lq

c

u

 g m  

Lq (Im )

 √   wϕ w  −1  p := c (J1 + J2 ) , + Q H χ Q u g Im m  u  q Lq (Im ) L (Im )

 √   wϕ  w  c  H χ p ug Im m  u 

where H is the Hilbert transform from (3.2.7). Hence, in virtue of Proposition 3.2.9, Exercise 3.2.8, and the assumption (3.2.57), we have √   wϕ w   p J1 ≤ c  u g ≤ c gLq (Im ) , m  u  q L (Im ) where we also took into account the first line in (3.2.44). Due to [139, Lemma 2.1], √ we can choose Q such that Q ∼m wϕ in Im . Hence, again using (3.2.44), √   wϕ w √  −1  pm wϕ HχIm Q u g  J2 ≤ c   q u L (Im )  √  √  wϕ   wϕ u g  −1    Hχ ≤ c Q u g ≤ c ≤ cgLq (Im ) . I m  u  q  u Q q L (Im ) L (Im ) Now, we prove that (3.2.56) implies (3.2.57). Let us assume that (3.2.56) holds for all f ∈ C(−1, 1). Let τmk ∈ C be arbitrary numbers and τm : [−1, 1] −→ C be a continuous function, which fulfils τm (x) = 0 if x ∈ [−η, η] for some 0 < η < 12    w  (x ) if x ∈ [−η, η]. Then, (3.2.56) holds with τ and τm (xk ) = |τmk | sgn pm k k m


in place of f . Moreover, since |x − xk | < 2 and  w  Lm τm (x) =



pw (x) sgn (x − xk ) |τ |  mk  m   w  |x − xk | xk ∈[−η,η]  pm (xk )

w = pm (x) sgn (x)



|τ |    mk   w  xk ∈[−η,η]  pm (xk )(x − xk )

for x ∈ (−1, 1) \ [−η, η], we get  w     w     L τm u ≥  Lw τm u p ≥ c pm uLp {η 1, (c) the function T (x) = 1 + Q (x) Q (x) for x ∈ [A, 1). (d) for some A ∈ (0, 1), the function T satisfies T (x) ∼x Q(x) For τ > 0, the Mhaskar-Rahmanov-Saff number aτ () related to the weight  is defined as the positive solution of the equation τ=

$$\tau = \frac{2}{\pi} \int_0^1 \frac{a_\tau t\, Q'(a_\tau t)}{\sqrt{1 - t^2}}\,dt\,. \qquad (3.3.2)$$

The equivalence (see [107, (2.6)])
$$Q'(a_m) \sim_m m\,\sqrt{T(a_m)} \qquad (3.3.3)$$
can lead to an approximation of $a_m$. Let us now consider the weight function
$$u(x) = v^\beta(x)\,w(x) = (1-x^2)^\beta\, e^{-(1-x^2)^{-\alpha}} \qquad (3.3.4)$$

with $\alpha > 0$ and $\beta \ge 0$. It is easily seen that the weight $w$ belongs to the class $\widetilde W$, while $u$ can be considered as a logarithmic perturbation of $w$. In [129, Proposition 2.3] it is shown that the weight $u$ belongs to the class $\widetilde W$, and from the proof of this proposition it is seen that its related function $T(x)$ fulfils
$$T(x) \sim_x \frac{1}{1-x^2} \qquad (3.3.5)$$
for $x$ sufficiently close to $1$. In the sequel we will shortly denote by $a_m = a_m(u)$ the Mhaskar–Rahmanov–Saff number related to the weight $u$. In view of (3.3.3)


and (3.3.5), we get
$$1 - a_m \sim m^{-\frac{1}{\alpha + \frac12}}\,. \qquad (3.3.6)$$
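As an illustration (not part of the original presentation), the number $a_m(u)$ can also be computed directly by solving the defining equation (3.3.2) numerically. The following Python sketch does so with SciPy for the weight (3.3.4) and compares $1-a_m$ with the rate predicted by (3.3.6); the names Q_prime and mrs_number are ad hoc.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def Q_prime(x, alpha, beta):
    # u = exp(-Q) with Q(x) = (1 - x^2)^(-alpha) - beta*log(1 - x^2)
    return 2.0*alpha*x*(1.0 - x*x)**(-alpha - 1.0) + 2.0*beta*x/(1.0 - x*x)

def mrs_number(m, alpha, beta=0.0):
    # right-hand side of (3.3.2) after the substitution t = sin(theta)
    def rhs(a):
        g = lambda th: a*np.sin(th)*Q_prime(a*np.sin(th), alpha, beta)
        return (2.0/np.pi)*quad(g, 0.0, np.pi/2.0)[0]
    return brentq(lambda a: rhs(a) - m, 1e-8, 1.0 - 1e-6)

for m in (16, 64, 256):
    a_m = mrs_number(m, alpha=1.0)
    print(m, 1.0 - a_m, m**(-1.0/(1.0 + 0.5)))   # compare with (3.3.6)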

Exercise 3.3.1 Show that $u(x)$ defined in (3.3.4) belongs to the class $\widetilde W$, and give a proof of the equivalence (3.3.6).

If $x, y \in [-a_m, a_m]$ with $|x - y| \le \frac{K}{m}\sqrt{1 - x^2}$, then we have (see [131, Proposition 4.1])
$$u(y) \sim u(x)\,. \qquad (3.3.7)$$
Moreover, if we set
$$b_m = b_m(u) = a_m(1 - \lambda\delta_m)\,, \qquad (3.3.8)$$
where $\lambda > 0$ is a constant and
$$\delta_m = \delta_m(u) = \left[m\,T(a_m)\right]^{-\frac23} \sim_m m^{-\frac23\left(\frac{2\alpha+3}{2\alpha+1}\right)}\,, \qquad (3.3.9)$$

then, with the help of (3.3.5) and (3.3.6), the following restricted range inequalities (i.e., Remez-type inequalities) have been proved by D. S. Lubinsky and E. B. Saff [113] for $p = \infty$ and by A. L. Levin and D. S. Lubinsky [103] for $0 < p \le \infty$.

Proposition 3.3.2 ([104], pp. 95–97) Let $0 < p \le \infty$ and $u(x) = (1-x^2)^\beta e^{-(1-x^2)^{-\alpha}}$, $\alpha > 0$, $\beta \ge 0$. For every polynomial $P_m \in \mathbb{P}_m$ we have
$$\|P_m u\|_p \le c\, \|P_m u\|_{L^p(-b_m,\,b_m)} \qquad (3.3.10)$$
and
$$\|P_m u\|_{L^p(\{|x| \ge a_{sm}(u)\})} \le c\, e^{-c_1 m^\gamma}\, \|P_m u\|_p\,, \qquad \gamma = \frac{2\alpha}{2\alpha+1}\,,\quad s > 1\,, \qquad (3.3.11)$$

where the constants c and c1 are independent of m and Pm . The following lemma shows that the weight u(x) is equivalent to a polynomial in its Mhaskar-Rahmanov-Saff interval.


Lemma 3.3.3 Let $u(x) = v^\beta(x)w(x) = (1-x^2)^\beta e^{-(1-x^2)^{-\alpha}}$ with $\alpha > 0$, $\beta \ge 0$ and let $\ell, s \in \mathbb{N}$ be fixed. Then, for all sufficiently large $m \in \mathbb{N}$, there exists a polynomial $R_{\ell m} \in \mathbb{P}_{\ell m}$ satisfying
$$\frac12\, u(x) \le R_{\ell m}(x) \le \frac32\, u(x) \qquad (3.3.12)$$
and
$$\frac{\left|R_{\ell m}'(x)\right|\,\varphi(x)}{u(x)} \le c\, m \qquad (3.3.13)$$
for $|x| \le a_{sm}$, where $c \ne c(m, u, R_{\ell m})$ and where $\varphi(x) = \sqrt{1-x^2}$.

Lemma 3.3.3 was proved in [131] for the weight $w(x) = e^{-(1-x^2)^{-\alpha}}$, i.e., for $\beta = 0$. We will omit the proof, since it is similar to that one in [131], if one takes into account (3.3.5).
Now, we are going to state polynomial inequalities related to the weight $\sigma u$, where $\sigma$ is a doubling weight and $u$ is given by (3.3.4). In the proofs of these inequalities we will use the restricted range inequality (3.3.10) and we will approximate the weight $u$ in $[-a_m, a_m]$ by means of a polynomial, using arguments analogous to those in [119, 145]. Before, we state a Remez-type inequality, generalizing the results holding for exponential or $A_\infty$-weights.

Proposition 3.3.4 ([182], Theorem 3.1) Let $0 < p \le \infty$, $u$ be the weight in (3.3.4), $\sigma$ be a doubling weight, and $\sigma_m$ be as in (3.2.10). Moreover, let $\varepsilon > 0$ and let $E \subset [-b_{2m}, b_{2m}]$ be a measurable set with
$$\int_E \frac{dx}{\sqrt{b_{2m}^2 - x^2}} \le \frac{\varepsilon}{m}\,,$$
where $b_{2m} = a_{2m}(1 - \lambda\delta_{2m})$ with $\lambda > 0$ (cf. (3.3.8)). Then, for every $P_m \in \mathbb{P}_m$,
$$\|P_m \sigma_m u\|_p \le c\, \|P_m \sigma_m u\|_{L^p((-b_{2m},\,b_{2m})\setminus E)}\,, \qquad (3.3.14)$$

where $c$ depends only on $\varepsilon$ and $\lambda$. Taking into account Proposition 3.3.2, a natural question arising is whether it is possible to replace the weight $\sigma_m$ in (3.3.14) by $\sigma$. For instance, it is easy to check that, if $\sigma$ is a doubling weight, then, for $0 < p \le \infty$ and for every $P_m \in \mathbb{P}_m$, the estimate
$$\|P_m \sigma u\|_p \le c\, \|P_m \sigma u\|_{L^p(-1+m^{-2},\,1-m^{-2})} \qquad (3.3.15)$$

holds with c = c(m, Pm ). Of course, this inequality is weaker than (3.3.14) near ±1. On the other hand, with more assumptions on the weight σ , as a consequence of Proposition 3.3.4, we can formulate the following corollary.


Corollary 3.3.5 ([182], Corollary 3.2) Let 0 < p < ∞, u be the weight in (3.3.4), and  > 0. If σ is an A∞ -weight and if E ⊂ [−b3m , b3m ] is a measurable set with  6 E

dx 2 − x2 b3m



 , m

then, for all Pm ∈ Pm , 

1 −1

 |Pm (x)σm (x)u(x)|p dx ≤ c

|Pm (x)σ (x)u(x)|p dx .

(3.3.16)

(−b3m ,b3m )\E

Moreover, if 0 < p ≤ ∞ and if σ is an A∗ -weight, we get Pm σ up ≤ cPm σ uLp ((−b3m ,b3m )\E)

∀ Pm ∈ Pm ,

(3.3.17)

where b3m = a3m (1 − λδ3m with λ > 0 and where, in both cases, c depends only on  and λ. For instance, if u is the weight in (3.3.4) and if σ (x) = |x|γ with γ > − p1 in case of 1 ≤ p < ∞ and γ ≥ 0 in case of p = ∞, then Proposition 3.3.4 implies     Pm | · |γ u ≤ cPm | · |γ u p

Lp



c1 m ≤|x|≤a3m (1−λδ3m )



∀ Pm ∈ Pm

with c, λ, c1 independent of m and Pm . In the following propositions, we state the Bernstein-Markov inequalities related to the weight σm u and as before, in their corollaries, we replace the weight σm by σ . Proposition 3.3.6 ([182], Theorem 3.3) Let 0 < p ≤ ∞, u be the weight in (3.3.4), σ be a doubling weight and σm be as in (3.2.10). Then, for every polynomial Pm ∈ Pm ,    P ϕσm u ≤ c mPm σm up , m p where ϕ(x) =

(3.3.18)

√ 1 − x 2 and c = c(m, Pm ).

In particular, for every Pm ∈ Pm , from Proposition 3.3.6 we deduce             1 γ +1 1 γ    2 sup Pm (x) 1−x + w(x) ≤ c m sup Pm (x) 1 − x2 + w(x)   m m x∈(−1,1) x∈(−1,1) 2 −α

with γ ∈ R, w(x) = e−(1−x ) , α > 0, and c = c(m, Pm ). The previous estimate is useful in various contexts and generalizes a result of Halilova [65] (see also [175]) stated for Jacobi weights.


Corollary 3.3.7 For 0 < p < ∞ and for all Pm ∈ Pm , under the assumptions of Proposition 3.3.6 we get    P ϕσm u ≤ c mPm σ up . m p

(3.3.19)

Moreover, if 0 < p ≤ ∞ and if σ is an A∗ -weight, we have    P ϕσ u ≤ c mPm σ up m p where ϕ(x) =

∀ Pm ∈ Pm ,

(3.3.20)

√ 1 − x 2 and where, in both cases, c = c(m, Pm ).

Proposition 3.3.8 ([182], Theorem 3.5) Let 0 < p ≤ ∞, u be the weight in (3.3.4), σ be a doubling weight, and σm be as in (3.2.10). Then, for every polynomial Pm ∈ Pm ,    P σm u ≤ √ c m Pm σm up m p 1 − am

(3.3.21)

with c = c(m, Pm ). Corollary 3.3.9 Let 0 < p < ∞. Under the assumptions of Proposition 3.3.8, for any Pm ∈ Pm , we get    P σm u ≤ √ c m Pm σ up . m p 1 − am

(3.3.22)

Moreover, if 0 < p ≤ ∞ and if σ is an A∗ -weight, for every Pm ∈ Pm , we get    P σ u ≤ √ c m Pm σ up , m p 1 − am

(3.3.23)

where, in both cases, c is independent of m and Pm . Notice that the Bernstein-type inequality (3.3.20) has the same form of that proved in [131] for the weight u and also of that one proved in [145], which is related to doubling or A∗ -weights for p < ∞ or p = ∞, respectively. Exercise 3.3.10 Show that the estimate (3.3.20) can be iterated in order to get    (r) r  Pm ϕ σ u ≤ c mr Pm σ up , p

for r ∈ N. , the Markov-type inequality has been In case of weights belonging to the class W proved in [113] for p = ∞ and in [104, p. 294] for 0 < p ≤ ∞. Note that, in comparison to the Markov inequality proved in [145] for doubling weights, in inequality (3.3.21) the classical exponent 2 in m2 is replaced by a smaller


one. To be more precise, from Proposition 3.3.8 in case $\sigma(x) \equiv 1$ and from the equivalence (3.3.6), it follows that
$$\|P_m' u\|_p \le c\, m^{\frac{2\alpha+2}{2\alpha+1}}\, \|P_m u\|_p\,, \qquad (3.3.24)$$
such that the exponent $2$ in the classical Markov inequality is replaced by $\frac{2\alpha+2}{2\alpha+1}$. Now, we state some Schur-type inequalities.

Proposition 3.3.11 ([182], Theorem 3.7) Let 0 < p < ∞, u be the weight in (3.3.4), σ be a doubling weight and σm be as in (3.2.10). Furthermore, let h be a generalized Jacobi weight of the form h(x) =

N 5

|x − xj |γj

j =1

with γj > 0 and −1 ≤ x1 < . . . < xN ≤ 1. Then, for every polynomial Pm ∈ Pm , Pm σm up ≤ c m Pm σm uhp

with

c = c(m, Pm ) ,

  where  = max γj∗ : j = 1, . . . , N , γj∗ = γj if xj = ±1, and γj∗ = xj = ±1.

(3.3.25) γj α+

1 2

if

Corollary 3.3.12 Under the assumptions and with the notations of Proposition 3.3.11, we get Pm σm up ≤ c m Pm σ uhp

∀ Pm ∈ Pm .

(3.3.26)

Moreover, if 0 < p ≤ ∞ and if σ is an A∗ -weight, we get Pm σ up ≤ c m Pm σ uhp

∀ Pm ∈ Pm ,

(3.3.27)

where, in both cases, c is independent of m and Pm . To finish this section, we state some Nikolski-type inequalities. We agree that, in case q = ∞, we set q1 = 0. Proposition 3.3.13 ([182], Theorem 3.9) Let 1 ≤ p < q ≤ ∞, u be the weight in (3.3.4), σ be a doubling weight, and σm be as in (3.2.10). Then, for every Pm ∈ Pm , 

m Pm σm uq ≤ c √ 1 − am

1−1 p

q

Pm σm up

(3.3.28)


and   1 1 1 1   − − Pm ϕ p q σm u ≤ c m p q Pm σm up ,

(3.3.29)

q

where ϕ(x) =

√ 1 − x 2 and c = c(m, Pm ).

Corollary 3.3.14 Under the assumptions and with the notations of Proposition 3.3.13, in case of q < ∞, we get, for all Pm ∈ Pm , 

m Pm σm uq ≤ c √ 1 − am

1−1 p

q

Pm σ up

(3.3.30)

and   1 1 1   −1 − Pm ϕ p q σm u ≤ c m p q Pm σ up .

(3.3.31)

q

Moreover, if q ≤ ∞ and if σ is an A∗ -weight, then, for all Pm ∈ Pm ,  1−1 p q m Pm σ uq ≤ c √ Pm σ up 1 − am

(3.3.32)

and   1 1 1   − Pm ϕ p σ u ≤ c m p q Pm σ up ,

(3.3.33)

q

where ϕ(x) = c(m, Pm ).



1 − x 2 and where, in each of the previous inequalities, c =

3.3.2 K-Functionals and Moduli of Smoothness The tools of this section were introduced in [131]. For 1 ≤ p ≤ ∞, r ∈ N, t > 0 sufficiently small (say t < t0 ), and u being the weight in (3.3.4), we introduce the K-functional # $   p,r r r  (r) r  K(f, t )u,p = inf (f − g)up + t g ϕ u : g ∈ Wu p

p,r

(for the definition of Wu , see Sect. 2.4.1) and its main part #     r K(f, t )u,p = sup inf (f − g) uLp (Ih ) + hr g (r) ϕ r u 0 1 is a fixed constant.

Furthermore, we define the rth modulus of smoothness     rϕ (f, t)u,p = sup rhϕ (f ) u p

L (Ih )

0 1. The following lemma shows that this modulus of smoothness is equivalent to the K-functional. Lemma 3.3.16 ([131], Lemma 2.3) Under the assumptions of Lemma 3.3.15 the equivalence ωϕr (f, t)u,p ∼f,t K(f, t r )u,p holds true. By means of the main part of the modulus of smoothness, for 1 ≤ p ≤ ∞, we can define the Zygmund-type spaces p,s

Zu

p,s,r

:= Zu

# $ rϕ (f, t)u,p p = f ∈ Lu : sup < ∞ , ts t >0

0 < s < r , r ∈ N,

equipped with the norm f Zp,s = f Lpu + sup u t >0

rϕ (f, t)u,p ts

.

In the sequel, we will denote these subspaces by $Z^{p,s}_u$, without the index $r \in \mathbb{N}$ satisfying $r > s$.

3.3.3 Estimates for the Error of Best Weighted Polynomial Approximation

In this section we present results proven in [131]. Let us denote by
$$E_m(f)_{u,p} = \inf\left\{ \|(f - P_m)u\|_p : P_m \in \mathbb{P}_m \right\}$$
the error of best approximation of a function $f \in L^p_u$ by polynomials of degree less than $m$, where $u$ is a weight function and $1 \le p \le \infty$. In order to prove a Favard–Jackson-type inequality of the form
$$E_m(f)_{u,p} \le \frac{c}{m^r}\, \left\| f^{(r)} \varphi^r u \right\|_p\,, \qquad f \in W^{p,r}_u\,,$$
where $c \ne c(m, f)$ and $1 \le p \le \infty$, we state the following lemma.

Lemma 3.3.17 ([131], Lemma 3.3) Let $1 \le p \le \infty$ and $u(x)$ be the weight in (3.3.4) with $\beta \ge 0$ and $\alpha > 0$. Then, for every $f \in W^{p,1}_u$, we have
$$E_m(f)_{u,p} \le \frac{c}{m}\, \|f' \varphi u\|_p\,, \qquad (3.3.35)$$

where c = c(m, f ). By using Lemma 3.3.17 and Lemma 3.3.16, we are able to prove a Jackson-type inequality. Proposition 3.3.18 ([131], Theorem 3.4) Let 1 ≤ p ≤ ∞ and u be as in (3.3.4). p Then, for any f ∈ Lu , Em (f )u,p ≤

c ωϕr

  1 , f, m u,p

(3.3.36)

where c = c(m, f ). In the next proposition we state a weak Jackson-type inequality. Proposition 3.3.19 ([131], Theorem 3.5) Let 1 ≤ p ≤ ∞, u be as in (3.3.4), and p f ∈ Lu . Assume that rϕ (f, t)u,p t −1 is summable. Then,  Em (f )u,p ≤ c 0

with c = c(m, f ).

1/m

rϕ (f, t)u,p t

dt ,

r < m,

(3.3.37)


With the help of the previous propositions we get, for instance, Em (f )u,p ≤

 c   (r) r  ϕ u f  , p mr

p,r

f ∈ Wu ,

(3.3.38)

and Em (f )u,p ≤

rϕ (f, t)u,p c sup , ms t >0 ts

r >s,

p,s

f ∈ Zu ,

(3.3.39)

where 1 ≤ p ≤ ∞ and c = c(m, f ). Using the equivalence between the modulus of smoothness and the K-functional, stated in Lemma 3.3.16, and the Bernstein-type inequality (see Corollary 3.3.7), by standard arguments we obtain the Salem-Stechkin-type inequality in following proposition. Proposition 3.3.20 ([131], Theorem 3.6) Let 1 ≤ p ≤ ∞ and u be the weight p in (3.3.4). Then, for every f ∈ Lu , we have ωϕr

  m c  1 ≤ r (1 + j )r−1 Ej (f )u,p , f, m u,p m

(3.3.40)

j =0

where c = c(m, f ). In analogy to [44, p. 84, Theorem 7.3.1], the following theorem connects the behaviour of the derivatives of a sequence of polynomials of quasi best approximap ∞ tion of f ∈ Lu with its modulus of smoothness. We say that the sequence (Pm )m=0 with Pm = Pm (f ) ∈ Pm is of quasi best approximation for f belonging to a p certain subclass F0 ⊂ Lu , if there is a constant cb = cb (m, f ) such that (f − Pm ) up ≤ cb Em (f )u,p

∀ f ∈ F0 .

(3.3.41)

Often this notion will be used only for F0 = {f }. In other cases, it becomes clear from the context what F0 is. Proposition 3.3.21 ([131], Theorem 3.7) Let 1 ≤ p ≤ ∞, u be the weight p ∞ in (3.3.4), f ∈ Lu , and (Pm )m=0 be a sequence of polynomials of quasi best approximation for f . Then, for r ∈ N,     1  (r) r  r r , Pm ϕ u ≤ c m ωϕ f, p m u,p where c = c(m) depends only on p, r, the weight u, and cb (cf. (3.3.41)).

(3.3.42)


An immediate consequence of the last proposition is the equivalence   1 ∼m,f ωϕr f, m u,p

# (f − Pm ) up +

inf

Pm ∈Pm

 1  P (r) ϕ r u m p r m

$ ,

(3.3.43)

where 1 ≤ p ≤ ∞. Finally, using the Nikolski inequalities (3.3.32) and (3.3.33), one can prove some embedding theorems, connecting function spaces related to the weight u in (3.3.4) and extending some results due to Ul’yanov [209] (see also [43, 51]). Proposition 3.3.22 ([182], Theorem 4.3) Let 1 ≤ p < ∞, u(x) be the weight p in (3.3.4), and f ∈ Lu satisfying 

1

rϕ (f, t)u,p t

0

1+ pη

dt < ∞ ,

η=

2α + 2 . 2α + 1

(3.3.44)

Then, 

1/m

Em (f )u,∞ ≤ c

rϕ (f, t)u,p t

0

rϕ

1+ pη

dt ,

   1/m r ϕ (f, t)u,p 1 ≤c dt f, 1+ η m u,∞ 0 t p

(3.3.45)

(3.3.46)

and f u∞

 ≤ c f up +

rϕ (f, t)u,p

1

t

0

1+ pη

. dt ,

(3.3.47)

where c depends only on r. Proposition 3.3.23 ([182], Theorem 4.4) Let 1 ≤ p < ∞, and u(x) be the weight p in (3.3.4), and f ∈ Lu such that 

1

rϕ (f, t)u,p

0

t

1+ p1

dt < ∞ .

(3.3.48)

Then,  Em (f )

rϕ

1 ϕp

u,∞

1/m

≤c 0

rϕ (f, t)u,p t

1+ p1

dt ,

   1/m r ϕ (f, t)u,p 1 ≤c dt f, 1 1+ 1 m ϕ p u,∞ 0 t p

(3.3.49)

(3.3.50)

3.3 Polynomial Approximation with Exponential Weights on the Interval (−1, 1)

127

and  1   p  f ϕ u



%



≤ c f up +

1 0

rϕ (f, t)u,p t

1+ p1

& dt

(3.3.51)

,

where c depends only on r. From Proposition 3.3.23 we can easily deduce the following corollary, useful in several contexts. Corollary 3.3.24 ([182], Corollary 4.5) If the conditions of Proposition 3.3.23 are fulfilled, then f is continuous on (−1, 1).

3.3.4 Fourier Sums in Weighted Lp -Spaces The results of this section were proved in [129] for 1 < p < ∞ and in [128] for 2 −α p = 1 and p = ∞. Let w(x) = e−(1−x ) with α > 0 be a Pollaczek-type weight (cf. (3.3.1)) and define the related mth Fourier sum of a function f ∈ L1√w by w Sm f =

m−1 

 ck pkw ,

ck =

k=0

1 −1

f (x)pkw (x)w(x) dx .

(3.3.52)

p

w f in L√ , we note that, for p = 2, since the Concerning the behaviour of Sm w  w ∞ Weierstrass theorem holds in L2√w (see Proposition 3.3.18), the system pm m=0

is a complete orthonormal system in the Hilbert space L2√w . For general p, the following proposition is easy to prove. p

Proposition 3.3.25 ([129], Proposition 3.1) If, for all f ∈ L√w ,  w √   √  S f w ≤ cf w , m p p then

4 3

c = c(m, f ) ,

(3.3.53)

< p < 4.

 w ∞ is uniformly bounded in In other words, if the operator sequence Sm m=1     p L L√w , then p is necessarily restricted to the interval 43 , 4 . On the other hand,   the condition p ∈ 43 , 4 seems not to be sufficient in order to get #  w  sup Sm

  p L L√w

$ :m∈N 0 and λ ≥ 0 (cf. (3.3.4)) and  1 2 −α u(x) = v μ (x) w(x) = (1 − x 2 )μ e− 2 (1−x ) ,

(3.3.56)

with α > 0 and μ ≥ 0. The weights σ and u can be considered as a logarithmic , (see the beginning of Sect. 3.3.1, perturbation of w and belong to the class W Exercise 3.3.1, and, for further details, [129]). σ . Using the Darboux For a function f ∈ L1σ , we consider the mth Fourier sum Sm kernel (cf. Exercise 3.2.26) σ Km (x, t) =

m−1 

pkσ (x)pkσ (t) =

k=0

σ σ (x)pσ σ γm−1 pm m−1 (t) − pm (t)pm−1 (x) , γm x−t

σ , we can rewrite S σ f as where γm = γmσ > 0 is the leading coefficient of pm m

 σ  Sm f (t) =



1 −1

σ Km (x, t)f (t)σ (t) dt .

Now, even if the weight σ does not belong to the Szegö class, it can be shown that the Pollard decomposition holds (see [129, Proposition 2.2, p. 627], cf. (3.2.43)), i.e., ϕ2 σ

σ σ σ Km (x, t) = −αm pm (x)pm (t) + βm

where m ∈ N, ϕ(x) =

ϕ2 σ

σ (x)p 2 2 σ pm m−1 (t)[ϕ(t)] − pm−1 (x)[ϕ(x)] pm (t)

√ 1 − x 2 , and αm ∼m 1 ∼m βm .

x−t

,

(3.3.57)


σ . Let θ ∈ (0, 1) and χ = χ Now, we are going to modify the operator Sm θ θ,m be √ the characteristic function of the interval [−a , a ] with a = a ( σ ). Setting θm θm m m   σ f ∞ in the space Lp , where u is fθ = χθ f , we consider the sequence χθ Sm θ m=1 u given in (3.3.56).

Proposition 3.3.26 ([129], Theorem 3.2) Let 1 < p < ∞, θ ∈ (0, 1) be fixed. Then, the estimate

1 p

+

  σ   χθ S fθ u ≤ cθ fθ up m p

1 q

= 1, and let

(3.3.58)

  1 p holds for all f ∈ Lu with cθ = cθ (m, f ) and cθ = O log− 2 θ1 , if and only if vμ  ∈ Lp λ v ϕ

 and

1 vμ

vλ ∈ Lq . ϕ

(3.3.59)

Moreover, under the assumption (3.3.59), we have      f − χθ S σ fθ u ≤ cθ EM (f )u,p + e−c M γ f up , m p where M =   1 O log− 2 θ1 .

9

θm 2(θ+1)

:

,γ =

2α 2α+1 ,

(3.3.60)

c = c(m, f, θ ), cθ = cθ (m, f ), and cθ =

 σ ∞   σ f ∞ , then ProposiIf we consider the sequence Sm fθ m=1 instead of χθ Sm θ m=1 tion 3.3.26 can be replaced by the following proposition. Proposition 3.3.27 ([129], Theorem 3.3) For 1 < p < 4 and with the notations of Proposition 3.3.26, we have  σ    S fθ u ≤ cθ fθ up m p

p

∀ f ∈ Lu

(3.3.61)

  1 with cθ = cθ (m, f ) and cθ = O log− 4 θ1 , if and only if 1 λ 3 1 1 − 1 , 0

: y ≤ 1.

3.3.5 Lagrange Interpolation in Weighted Lp -Spaces This section is based on the explications in [133]. We consider the weight σ (x) σ , 1 ≤ k ≤ " m #, the positive zeros of pσ from (3.3.55) and denote by xk = xmk m 2 σ. and by x−k = −xk the negative ones. If m is odd, then x0 = 0 is a zero of pm Concerning the location of these zeros, we have (see [104, pp. 22–23]) −am (1 − c δm ) ≤ x−" m2 # < · · · < x1 < x2 < · · · < x" m2 # ≤ am (1 − c δm ) , where 0 < √ c = c(m), where am is the Mhaskar-Rahmanov-Saff number related to the weight σ satisfying (3.3.6) and where  δm :=

1 − am m

2 3

.

(3.3.71)


Let us denote by Lσm f ∈ Pm−1 the polynomial interpolating the function f ∈ σ . It is well known that C(−1, 1) at the zeros of pm Lσm f =



f (xk ) σk

σk (x) = 

with

|k|≤" m 2#



σ (x) pm

σ (x )(x − x ) pm k k

.

Following an idea used for Gaussian rules on (0, +∞) in [124] and [125] and for Lagrange interpolation on unbounded intervals in [151], we are going to define a “truncated" interpolation process. For a fixed θ ∈ (0, 1), we define an index j = j (m, θ ) by xj =

min

1≤k≤"m/2#

{xk : xk ≥ aθm } ,

(3.3.72)

where m is sufficiently large (say m > m0 ), and we introduce the interpolation operator L∗m , defined by   ∗  f (xk ) σk (x) (3.3.73) Lm f (x) = |k|≤j

 ∗  ∗ for all f ∈ C(−1,  ∗ 1).  By definition, Lm f ∈ Pm−1 and Lm f (xk ) = f (x∗k ) for |k| ≤ j , while Lm f (xk ) = 0 for |k| > j . Therefore, in general we have Lm P = P for P ∈ Pm−1 . Now, consider the collection of polynomials P∗m−1 = {Q ∈ Pm−1 : Q(xk ) = 0, |k| > j } . Of course, P∗m−1 depends on the weight σ and on the parameter θ ∈ (0, 1). For all f ∈ C(−1, 1), we get L∗m f ∈ P∗m−1 and L∗m Lσm f = L∗m f . It is also easily seen that ⎫ ⎧ ⎬ ⎨ σk (x)P (xk ) : P ∈ Pm−1 P∗m−1 = L∗m (Pm−1 ) = ⎭ ⎩ |k|≤j

and that L∗m : C(−1, 1) −→ C(−1, 1) is a projection, i.e., L∗m Q) = Q for all Q ∈ im L∗m = P∗m−1 . + p The following theorem shows, that m∈N P∗m−1 is dense in Lu , 1 ≤ p ≤ ∞ and that the corresponding error of best approximation   m−1 (f )u,p = inf (f − Q)up : Q ∈ P∗m−1 E is strictly connected with EM (f )u,p , where > M=

θ θ +1



m s

with s ≥ 1 fixed and θ ∈ (0, 1) as in (3.3.72).

? ∼m m

(3.3.74)
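The truncated operator $L_m^*$ is straightforward to realize once the zeros of $p_m^\sigma$ and the number $a_{\theta m}$ are available (for instance from a Golub–Welsch computation as sketched in Sect. 3.3.6). The following Python fragment is a schematic implementation of (3.3.72)–(3.3.73); the function name truncated_lagrange and its arguments are our own choices, not taken from [133].

import numpy as np

def truncated_lagrange(xs, fvals, a_theta_m, t):
    # evaluate (L*_m f)(t): only the nodes x_k with |x_k| <= x_j contribute,
    # where x_j = min{x_k : x_k >= a_theta_m}, cf. (3.3.72)
    xs = np.asarray(xs, dtype=float)
    inside = xs[xs >= a_theta_m]
    xj = inside.min() if inside.size > 0 else xs.max()
    keep = np.flatnonzero(np.abs(xs) <= xj)
    value = 0.0
    for k in keep:
        # fundamental Lagrange polynomial ell_k built on ALL m zeros of p_m^sigma
        ell = 1.0
        for i in range(len(xs)):
            if i != k:
                ell *= (t - xs[i])/(xs[k] - xs[i])
        value += fvals[k]*ell
    return value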


Proposition 3.3.31 ([133], Theorem 2.2) Let 1 ≤ p ≤ ∞, θ ∈ (0, 1) be fixed, and let σ and u be as in (3.3.55) and (3.3.56),respectively.  ∞ Then, for every ∗function∗f ∈ p ∗ Lu , 1 ≤ p ≤ ∞, there exists a sequence qm−1 of polynomials qm−1 ∈ Pm−1 , m=1 such that    ∗ u p = 0 . lim  f − qm−1

(3.3.75)

m→∞

Furthermore, if χθ denotes the characteristic function of [−aθm , aθm ] with aθm = √ aθm ( σ ), then    ∗ up = 0 . lim  f − χθ qm−1

(3.3.76)

m→∞ p

Moreover, for every f ∈ Lu , we get  m−1 (f )u,p ≤ c EM (f )u,p + e−c1 M γ f up , E

(3.3.77)

2α where γ = 2α+1 , where M is defined by (3.3.74) , and where c and c1 are positive constants independent of m and f .

Let us now introduce a further interpolation operator. Following an idea due to J. Szabados [204], we denote by Lm,2 f the polynomial of minimal degree, σ and at the two extra points x interpolating f at the zeros of pm := ±am ±(" m 2 #+1) √ with am = am ( σ ). Then, for every f ∈ C(−1, 1), we have 

Lm,2 f =

f (xk ) σk,2 ,

|k|≤" m 2 #+1

where σk,2 (x) = 

σ pm



2 − x2 am , 2 − x2 (xk )(x − xk ) am k σ (x) pm

1 ≤ |k| ≤

9m: 2

,

and σ±(" m #+1),2 (x) = 2

σ (x) am ± x pm . σ (±a ) 2am pm m

Analogously to (3.3.73), the “truncated” version of Lm,2 is given by L∗m,2 f =



f (xk ) σk,2 ,

(3.3.78)

|k|≤j

where the index j is defined by (3.3.72). We note that this operator has the same form of that one defined in [126] for Lagrange interpolation on the real line.


By arguments similar to those used for the operator L∗m , it is easily seen that the operator L∗m,2 : C(−1, 1) −→ C(−1, 1) is a projection onto the space of polynomials P∗m−1,2 := {Q ∈ Pm+1 : Q(±am ) = Q(xk ) = 0, |k| > j } , As in Proposition 3.3.31, the set particular, setting

+

∗ m∈N Pm−1,2

p

is dense in Lu for 1 ≤ p ≤ ∞. In

  m−1,2 (f )u,p = inf (f − Q) up : Q ∈ Pm−1,2 , E p

we have, for 1 ≤ p ≤ ∞ and for all f ∈ Lu ,  m−1,2 (f )u,p ≤ c EM (f )u,p + e−c1 M γ f up , E

(3.3.79)

2α where γ = 2α+1 , where M is defined by (3.3.74), and where c and c1 are positive constants independent of m and f . In what follows, we are going to study the behaviour of the previously defined interpolating polynomials in suitable function spaces. Let us first consider the w Lagrange interpolation polynomial Lw m f , based on the zeros of pm , in the space √ 2 −α w (x) w(x) ∞ , C√w , where w(x) = e−(1−x ) , α > 0. Since the sequences pm m=0 w are not x ∈ (−1, 1), are not uniformly bounded and, moreover, the zeros of pm arcsine distributed, we cannot expect that the mth Lebesgue constant

 w L  m

  L C√w

= max

⎧ ⎨ ⎩

w(x)

 |k|≤" m 2#

⎫ ⎬ (x)| | w : x ∈ (−1, 1) √k ⎭ w(xk )

has the desired order log m (for a more complete discussion on this topic see [121, p. 251] and also [148]). Indeed, from a result due to S. B. Damelin [32] it follows that  w L  m

  L C√w

−1 ∼m δm 4

∼m m

1 6



2α+3 2α+1



,

 ∞ where δm is given in (3.3.71). Now, concerning the interpolation process L∗m m=1 in Cu , we have L∗m f = Lσm χj f for all f ∈ Cu , where by χj we denote the characteristic function of [−xj , xj ] with j as in (3.3.72), and the following theorem holds. Proposition 3.3.32 ([133], Theorem 3.1) Let θ ∈ (0, 1) be fixed and let σ and u be as in (3.3.55) and (3.3.56), respectively. Then, the estimate   ∗     χj L f u ≤ cθ χj f u log m m ∞ ∞

with

cθ = cθ (m, f )

(3.3.80)


holds for all f ∈ Cu , if and only if λ 1 λ 5 + ≤μ≤ + . 2 4 2 4

(3.3.81)

Moreover, (3.3.80) implies that      f − χj L∗ f u ≤ cθ EM (f )u,∞ log m + e−c M γ f u∞ , m ∞ where γ =

(3.3.82)

where M is defined by (3.3.74), and where c = c(m, f ) and    c(m, f ) with cθ = O log−ν θ1 and some fixed ν > 0. cθ = 2α 2α+1 ,

We remark that the previous statement seems to be the best possible in the sense that, even if condition (3.3.81) is satisfied, the quantities   σ   χj L f u m ∞

   and  Lσm χj f u∞

diverge with an order greater than log m for m −→ ∞. We omit the details here. p The behaviour of L∗m f in Lu -norms is given by the following theorem. Proposition 3.3.33 ([133], Theorem 3.2) Let 1 ≤ p < ∞, let θ ∈ (0, 1) be fixed, and let σ and u be as in (3.3.55) and (3.3.56), respectively. Then, the estimate     ∗   χj L f u ≤ cθ χj f u m p ∞

cθ = cθ (m, f )

with

(3.3.83)

holds for all f ∈ Cu , if and only if vμ  ∈ Lp λ v ϕ where ϕ(x) =

 and

vλ ϕ ∈ L1 , vμ

(3.3.84)

√ 1 − x 2 . Moreover, if the conditions in (3.3.83)) are satisfied, then

     f − χj L∗ f u ≤ cθ EM (f )u,∞ + e−c M γ f u∞ m p with cθ , c, M, and γ as in Proposition 3.3.32. Finally, if 1 < p < ∞, and xk = xk+1 − xk , then the estimate ⎛ ⎞1 p    ∗   p⎠ χj L f u ≤ cθ ⎝ |f (x x )u(x )| k k k m p |k|≤j

(3.3.85) 1 p

+

1 q

= 1,

(3.3.86)


with cθ = cθ (m, f ) is valid for all f ∈ C(−1, 1), if and only if 

vμ  ∈ Lp vλ ϕ

vλ ϕ ∈ Lq . vμ

and

(3.3.87)

The following proposition will be useful in estimating the error of the interpolation p processes under consideration here in Lu norms. p

Proposition 3.3.34 ([133], Proposition 3.3) Let 1 < p < ∞ and f ∈ Lu . If ϕ (f, t)u,p t ⎛ ⎝



−1− p1

is summable, then ⎞1

p

p⎠

xk |f (xk )u(xk )|

|k|≤j

%

  −1 ≤ c χj f up + m p



1 m

0

ϕ (f, t)u,p t

1+ p1

& dt (3.3.88)

with c = c(m, f ). As a consequence of (3.3.86) and Proposition 3.3.34, we obtain the following corollary. p

Corollary 3.3.35 ([133], Corollary 3.4) Let 1 < p < ∞ and f ∈ Lu such that rϕ (f, t)u,p t

−1− p1

is summable. Then, under the assumptions (3.3.87), we have % &  1 r  1   m ϕ (f, t)u,p γ − ∗ −c m  f − χ j L f u ≤ c θ m p f up dt + e m p 1+ 1 0 t p (3.3.89)

with γ , cθ , and c as in Proposition3.3.32. We remark that, under the assumptions of Corollary 3.3.35, the Lagrange interpolation is well defined, since in [182, Corollary 4.5] it was proved that, if 1 < p < ∞ p and f ∈ Lu such that, for some r ∈ N, 

1

rϕ (f, t)u,p

0

t

1+ p1

dt < ∞ ,

then f is continuous on (−1, 1). In particular, if sup t >0

rϕ (f, t)u,p ts

< ∞,

p,s

with p1 < s ∈ R and r > s, i.e., if f ∈ Zu , then, for m > m0 , the estimate (3.3.89) becomes     f − χj L∗ f u ≤ cθ f  p,s , m Zu p ms

(3.3.90)


and, if 1 ≤ s ∈ N, then the norm f Zp,s can be replaced by the Sobolev-type norm u f Wp,s , taking into account Lemma 3.3.15, which implies u     rϕ (f, t)u,p ≤ c t r f (r) ϕ r u

p,r

p

∀ f ∈ Wu .

 ∞ Therefore, in the considered classes of functions, the sequence χj (m) L∗m f m=1 p converges to f in the Lu -norm with the order of the best polynomial approximation given by (3.3.39) and (3.3.38).  ∞ Let us again consider the sequence L∗m f m=1 , but, now we drop the “truncation" p in the Lu -norm of L∗m f . In this case a proposition, analogous to Proposition 3.3.33 holds with the restriction that 1 ≤ p < 4. Proposition 3.3.36 ([133], Theorem 3.5) Let 1 ≤ p < 4, let θ ∈ (0, 1) be fixed, and let σ and u be as in (3.3.55) and (3.3.56), respectively. Then, there exists a constant cθ = cθ (m, f ), such that  ∗      L f u ≤ cθ χj f u m p ∞

∀ f ∈ Cu ,

(3.3.91)

if and only if vμ √ ∈ Lp vλ Moreover, if 1 < p < 4 and

1 p

+

1 q

and

√ vλ ∈ L1 . vμ

(3.3.92)

= 1, then we have

⎛ ⎞1 p   ∗   p⎠  L f u ≤ c θ ⎝ xk |f (xk )u(xk )| m p

∀ f ∈ Cu ,

(3.3.93)

|k|≤j

if and only if vμ √ ∈ Lp vλ

and

√ vλ ∈ Lq . vμ

(3.3.94)

Note that the special case p = 2 and μ = λ = 0 has been treated in [35] using a truncated Gaussian rule. Nevertheless, this procedure cannot be applied in the general case p = 2. From Proposition 3.3.36 we can deduce consequences similar to those related to Proposition 3.3.33. We omit the details concerned with this topic and state the following corollary.


Corollary 3.3.37 ([133], Corollary 3.6) Let 1 < p < 4. The equivalence ⎛ Pm−1 up ∼m,Pm−1





⎞1

p

xk |Pm−1 (xk )u(xk )|

p⎠

|k|≤j

holds for all polynomials Pm−1 ∈ P∗m−1 if and only if the conditions in (3.3.94) are satisfied. Let us turn to the interpolating polynomials L∗m,2 f . Proposition 3.3.38 ([133], Theorem 3.7) Let θ ∈ (0, 1) be fixed and let σ and u be as in (3.3.55) and (3.3.56), respectively. Then, we have  ∗      L f u ≤ cθ χj f u log m m,2 ∞ ∞

∀ f ∈ Cu

(3.3.95)

with cθ = cθ (m, f ), if and only if 0≤μ−

λ 3 + ≤ 1. 2 4

(3.3.96)

Moreover, (3.3.95) implies that  !  "   f − L∗ f u ≤ cθ (log m) EM (f )u,∞ + e−c M γ f u∞ , m,2 ∞

(3.3.97)

where γ , M, cθ , and c as in Proposition 3.3.32. Proposition 3.3.39 ([133], Theorem 3.8) Let 1 ≤ p < ∞ let θ ∈ (0, 1) be fixed, and let σ and u be as in (3.3.55) and (3.3.56), respectively. Then, there exists a constant cθ = c(m, f ) such that  ∗      L f u ≤ cθ χj f u m,2 ∞ p

∀ f ∈ Cu ,

(3.3.98)

if and only if v μ+1 ∈ Lp  λ v ϕ Moreover, if 1 < p < ∞ and

1 p

+

1 q

 and

vλ ϕ

v μ+1

∈ L1 .

= 1, we have

⎛ ⎞1 p   ∗   p⎠  L f u ∼m,f ⎝ |f (x x )u(x )| k k k m,2 p |k|≤j

(3.3.99)

∀ f ∈ C(−1, 1) ,


if and only if 

v μ+1  ∈ Lp vλ ϕ

and

vλ ϕ

v μ+1

∈ Lq .

(3.3.100)

From the previous theorem we can deduce an analogue of Corollary 3.3.35. Hence, as in (3.3.90)), under the assumptions (3.3.100)), 1 < p < ∞, and p1 < s ∈ R, we have, for all sufficiently large m,     f − L∗ f u ≤ cθ f  p,s m,2 Zu p ms

p,s

∀ f ∈ Zu

and, in case of 1 < p < ∞ and r ∈ N,     f − L∗ f u ≤ cθ f  p,r m,2 Wu p mr

p,r

∀ f ∈ Wu ,

(3.3.101)

 ∞ where cθ is independent of m and f in both cases. Thus, the sequence L∗m,2 f

m=1

p

converges in Lu to the function f with the order of the best polynomial approximation in the considered classes of functions (cf. (3.3.39) and (3.3.38)). ∞ ∗ is uniformly bounded in the Furthermore, the operator sequence Lm,2 m=1 Sobolev-type spaces as the following proposition shows. Proposition 3.3.40 ([133], Theorem 3.9) Let 1 < p < ∞, r ∈ N, s > r, let θ ∈ (0, 1) be fixed, and let σ and u be as in (3.3.55) and (3.3.56), respectively. Furthermore, assume that the conditions in (3.3.100) are fulfilled. Then, for every p,r f ∈ Wu ,  ∗  L f  p,r ≤ cθ f  p,r . m,2 Wu W

(3.3.102)

u

p,s

Moreover, for every f ∈ Wu , s > r ≥ 1, we get   f − L∗ f  p,r ≤ m,2 W u

cθ f Wp,s , u ms−r

(3.3.103)

where, in both cases, cθ is independent of m and f . p,r

p,s

Note that in the previous theorem Wu can be replaced by Zu with the assumption s > p1 (which guarantees the continuity of f , see Corollary 3.3.24). As a further consequence of Proposition 3.3.39, analogously to Corollary 3.3.37, the following Marcinkiewicz equivalence holds in the subspace P∗m−1,2 .


Corollary 3.3.41 Let 1 < p < ∞ and let θ ∈ (0, 1) be fixed. Then, the equivalence ⎛ Pm+1 up ∼m,Pm+1





⎞1

p

xk |Pm+1 u| (xk )⎠ p

|k|≤j

holds for every polynomial Pm+1 ∈ P∗m−1,2 , if and only if the conditions in (3.3.100) are complied. To conclude this section, we want to emphasize that, in the previous theorems the constants cθ appear, which depend on θ ∈ (0, 1). The parameter θ is crucial for our results, since it cannot assume the value 1. In other words, the “truncation” of the sums in (3.3.73) and (3.3.78) seems to be essential.

3.3.6 Gaussian Quadrature Rules This section is based on results which were proved in [35]. We consider w(x) = 2 −α e−(1−x ) , α > 0, defined in (3.3.1), use the notations introduced at the beginning of Sect. 3.3.5, and note that, additionally to (3.3.6), we have 2 2 − x"2 m # ∼m am − x"2 m #−r ∼m δm , am 2

(3.3.104)

2

where r is a fixed positive integer (see [104, pp. 22–23]) and δm is defined in (3.3.71). √ Let θ ∈ (0, 1) be fixed, aθm = aθm ( w), and m be sufficiently large (say m > m0 ). Recalling that am − aθm ∼m 1 − am (see [104, p. 81]) and am − x" m #−p ∼m 2

1 − am , (1 − am )1/3 m2/3

by (3.3.6) we have x" m #−r > aθm for some fixed r ∈ N. Again define j = j (m) 2 by (3.3.72). Concerning the distance between two consecutive zeros, from the formula (see [104, p. 23]) xk = xk+1 − xk ∼m

6

1 − xk2

,

2 − x2 + a2 δ m am m m k

k=−

9m: 2

,...,

9m: 2

− 1,

(3.3.105) we can conclude that (see [104, p. 32], [131, (4.39)]) 6 xk ∼m

2 − x2 am k

m

,

|k| ≤ j .

(3.3.106)


This means that the nodes xk are arcsin distributed with respect to the interval [−am , am ] if |k| ≤ j . In general, formula (3.3.106) does not hold true for |k| > j . Concerning the Christoffel function λw m (x)

=

m−1 !

"2 w pm (x)

−1

k=0

 w ∞ associated to the orthonormal system pm , from the equivalence (see [103, p. 7] m=0 and [104, p. 20]) λw m (x) ∼m,x

(1 − x 2 )w(x)  , 2 − x2 + a2 δ m am m m

|x| ≤ am ,

(3.3.107)

we deduce  λw m (x)

∼m,x

2 − x2 am w(x) , m

|x| ≤ aθm ,

and  λw m (x)

≤c

1 − am ϕ(x)w(x) ϕ(x)w(x) ≤ c mγ , δm m m

aθm ≤ |x| ≤ am , (3.3.108)

2λ , ϕ(x) = where γ = 6α+3 and (3.3.107). we have

√ 1 − x 2 , and c = c(m, x). Note that by (3.3.105)

w λw k = λm (xk ) ∼m,k xk w(xk ),

k=−

9m: 2

,...,

9m: 2

− 1.

(3.3.109)
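Numerically, the zeros $x_k$ and the Christoffel numbers $\lambda_k^w$ are usually obtained by the Golub–Welsch procedure from the three-term recurrence coefficients of the orthonormal system; for the weight $w$ considered here these coefficients are not known in closed form and have to be generated themselves (e.g., by a discretized Stieltjes procedure). A minimal sketch (our own), assuming the recurrence coefficients are given:

import numpy as np
from scipy.linalg import eigh_tridiagonal

def gauss_rule(diag, offdiag, mu0):
    # diag: recurrence coefficients alpha_0,...,alpha_{m-1} of the orthonormal
    # system; offdiag: beta_1,...,beta_{m-1}; mu0 = integral of w over (-1, 1)
    nodes, vecs = eigh_tridiagonal(np.asarray(diag), np.asarray(offdiag))
    weights = mu0*vecs[0, :]**2          # Christoffel numbers lambda_k^w
    return nodes, weights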

Let us consider the Gaussian rule defined by
$$\int_{-1}^1 P(x)\,w(x)\,dx = \sum_{k=-\lfloor\frac m2\rfloor}^{\lfloor\frac m2\rfloor} \lambda_k^w\, P(x_k) \qquad \forall\, P \in \mathbb{P}_{2m-1}\,,$$
where $x_k$ are the zeros of $p_m^w$ and $\lambda_k^w$ are the Christoffel numbers. For a function $f : (-1,1) \longrightarrow \mathbb{C}$, we introduce the remainder term
$$e_m(f)_w := \int_{-1}^1 f(x)\,w(x)\,dx - \sum_{k=-\lfloor\frac m2\rfloor}^{\lfloor\frac m2\rfloor} \lambda_k^w\, f(x_k)\,. \qquad (3.3.110)$$


Concerning the behaviour of em (f )w , we recall the well known estimate |em (f )w | ≤ 2w1 E2m−1 (f )∞

∀ f ∈ C[−1, 1] .

(3.3.111)

  Let Cw = f ∈ C(−1, 1) : lim|x|→1 f (x)w(x) = 0 . Then, for every function f ∈ Cw , it is easily seen that |em (f )w | ≤ c E2m−1 (f )w,∞ ,

c = c(m, f ).

(3.3.112)

Therefore, since (see Proposition 3.3.18) lim Em (f )w,∞ = 0 ∀ f ∈ Cw , m

the error of the Gaussian rule converges to zero with the order of the best approximation in Cw . Notice that the functions belonging to Cw can increase exponentially at the endpoints of the interval (−1, 1). It is well known that (3.3.112) holds true with w replaced by a Jacobi weight v γ ,δ (x) = (1 −x)γ (1 +x)δ , γ , δ > 0, but it is false for exponential weights on unbounded intervals (see [36, 124–126]). Now, we want to investigate if estimates of the form |em (f )w | ≤

 c f  ϕw , 1 m

c = c(m, f ) ,

1,1 f ∈ Ww ,

(3.3.113)

which are useful in different contexts, are possible. We first recall some known results. If w is a Jacobi weight v γ ,δ , γ , δ > −1, an estimate of the form (3.3.113) holds true and, moreover, we have (see [121, pp. 170, 338]) Em (f )v γ ,δ ,1 ≤

 c f  ϕv γ ,δ  . 1 m

In contradistinction, for the weight w this does not happen. Indeed, for every 1,1 , we have (see Lemma 3.3.17) function f ∈ Ww Em (f )w,1 ≤

 c f  ϕw , 1 m

c = c(m, f ) ,

but (3.3.113) is false in the sense of the following theorem. Proposition 3.3.42 ([35], Theorem 2) Let w(x) = e−(1−x 1,1 , we have every f ∈ Ww   |em (f )w | ≤ c mγ −1 f  ϕw1 ,

2 )−α

, α > 0. Then, for

(3.3.114)


2α where γ = 6α+3 and c = c(m, f ). Moreover, for all sufficiently large m, there   1,1 with 0 < f  ϕw  < +∞, exists a function fm ∈ Ww m 1

  |em (fm )w | ≥ c mγ −1 fm ϕw1 ,

(3.3.115)

and $c \ne c(m, f_m)$. Hence, the error of the Gaussian rule (3.3.110) cannot be estimated by means of (3.3.113). For that reason, we introduce a “truncated” Gaussian rule, which satisfies our requirement, as we will show. With the integer $j = j(m)$ defined in (3.3.72), we associate the quadrature rule
$$\int_{-1}^1 f(x)\,w(x)\,dx = \sum_{|k| \le j} \lambda_k^w\, f(x_k) + e_m^*(f)_w\,, \qquad (3.3.116)$$

which we obtain by dropping the terms in the ordinary Gaussian rule related to the w , which have not arcsin distribution in general. zeros of pm As an effect of the truncation, the formula (3.3.116)@ is Aexact for polynomials P2m−1 ∈ P2m−1 such that P2m−1 (xk ) = 0 for j < |k| ≤ m2 , but it is not exact for ∗ every polynomial belonging 9to P2m−1 : . For instance, em (1)w = 0. Nevertheless, for every P ∈ PM , where M =

2θ m θ+1

, we have

∗ |em (P )w | ≤ c e−c1 M P w∞ β

with

β=

2α . 2α + 1

(3.3.117)

Indeed, due to (3.3.109) we get ∗ em (P )w

 =

1 −1

P (x)w(x) dx−

 |k|≤j

λw k P (xk ) =



λw k P (xk ) ≤ c

|k|>j

max

aθ m ≤|x|≤1

|P (x)w(x)| .

√ Using (3.3.11) and taking into account that aθm ( w) = a2θm (w), inequality (3.3.117) follows. The following proposition shows that, if the function f is continuous on [−1, 1] ∗ (f ) converges to zero with the order of e (f ) or belongs to Cw , the error em w m w (see (3.3.111) and (3.3.112). : 9 m Proposition 3.3.43 ([35], Proposition 1) Let θ ∈ (0, 1) be fixed and M = 2θ θ+1 . Then, for every continuous function f : [−1, 1] −→ C,  β ∗ |em (f )w | ≤ c EM (f )∞ + e−c1 M f ∞

(3.3.118)


and, for every f ∈ Cw ,  β ∗ |em (f )w | ≤ c EM (f )w,∞ + e−c1 M f w∞ , where β =

2α 2α+1

(3.3.119)

and where the constants c and c1 are independent of m and f .

1,1 or f ∈ Z1,s , s > 1, the following theorem states the For functions f ∈ Ww w required estimates.

Proposition 3.3.44 ([35], Theorem 3) With the notations of Proposition 3.3.43, for every f ∈ W11,w , we have    β ∗ (f )w | ≤ c M −1 f  ϕw1 + e−c1 M f w1 |em

(3.3.120)

and, for every f ∈ Z1s,w with s > 1,  ∗ |em (f )w | ≤ c M −1

1/M

ωϕr (f, t)w,1

0

t2

. β dt + e−c1 M f w1 ,

(3.3.121)

s < r ∈ N, where the constants c and c1 do not depend on m and f . In particular, from Proposition 3.3.44 we deduce the estimates  ∗  e (f )w  ≤ c f  1,r m Ww mr

∀ f ∈ W1,r w ,

 ∗  e (f )w  ≤ c f  1,s m Zw ms

∀ f ∈ Z1,s w ,

r ∈N

(3.3.122)

s > 1,

(3.3.123)

and

where c is independent of m and f in both cases. Therefore, in the considered ∗ (f ) converges to 0 with the order of the best polynomial function spaces em w approximation. As a consequence, the estimates (3.3.122) and (3.3.123), which are not true for the error of the ordinary Gaussian rule, cannot be improved with respect to the order of convergence.
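A schematic realization of the truncated rule (3.3.116) is given below (our own sketch, not from [35]); it assumes that the nodes and Christoffel numbers of the full $m$-point rule for $w$ and the number $a_{\theta m}$ have already been computed, for instance by the Golub–Welsch sketch of this section, and the name truncated_gauss is ours.

import numpy as np

def truncated_gauss(f, nodes, weights, a_theta_m):
    # e*_m-rule of (3.3.116): discard the nodes with |x_k| > x_j,
    # where x_j = min{x_k : x_k >= a_theta_m}, cf. (3.3.72)
    x = np.asarray(nodes, dtype=float)
    lam = np.asarray(weights, dtype=float)
    inside = x[x >= a_theta_m]
    xj = inside.min() if inside.size > 0 else x.max()
    keep = np.abs(x) <= xj
    return np.sum(lam[keep]*f(x[keep]))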

Chapter 4

Weighted Polynomial Approximation and Quadrature Rules on Unbounded Intervals

This chapter continues the investigations of the previous one by referring to weight functions on unbounded intervals, the real line and the half line. In particular, we study Lagrange interpolation and Gaussian rules with respect to generalized Freud weights, generalized Laguerre weights, and Pollaczek–Laguerre weights.

4.1 Polynomial Approximation with Generalized Freud Weights on the Real Line

In this section we consider functions which are defined on the whole real line and which can increase exponentially at $\pm\infty$. Hence, the weight $u$ is of exponential kind. We will define the function spaces related to $u$, some suitable moduli of smoothness, and respective K-functionals. For the purpose of approximating functions defined on the real line by means of polynomials, the first question arising is whether this is possible. Indeed, introducing a weight function $u : \mathbb{R} \longrightarrow \mathbb{R}^+$, we wonder when the polynomials are dense in weighted spaces related to $u$. In the 1910s S. N. Bernstein posed the question under which assumptions on the weight $u$, for every continuous function $f : \mathbb{R} \longrightarrow \mathbb{R}$ satisfying
$$\lim_{x\to\pm\infty} (fu)(x) = 0\,,$$
there exists a polynomial sequence $(P_m)_{m=1}^\infty$ such that
$$\lim_{m\to\infty} \|(f - P_m)u\|_\infty = 0\,. \qquad (4.1.1)$$
A certain solution of the problem was found by POLLARD, MERGELYAN, and ACHIESER in the 1950s. If the weight $u$ satisfies $u \le 1$ and is even and if $\ln(u(e^x))$ is even and convex, then a necessary and sufficient condition for the density of the polynomials in the sense of (4.1.1) is (see [93, p. 170])
$$\int_0^\infty \frac{\log(1/u(x))}{1 + x^2}\,dx = \infty\,.$$
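For the Freud weights considered next this condition is easy to check symbolically: with $u(x) = e^{-|x|^\lambda}$ one has $\log(1/u(x)) = |x|^\lambda$, and the integral diverges precisely when $\lambda \ge 1$. A small SymPy check of the divergent cases (our own illustration):

from sympy import Symbol, integrate, oo

x = Symbol('x', positive=True)
# log(1/u(x)) = x**lam for u(x) = exp(-x**lam) on (0, oo)
for lam in (1, 2):
    print(lam, integrate(x**lam/(1 + x**2), (x, 0, oo)))   # both print oo
# for 0 < lam < 1 the same integral is finite, so the polynomials are not dense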

In particular, for Freud weights of the form
$$u(x) = e^{-|x|^\lambda} \qquad (4.1.2)$$

the Weierstrass theorem holds (i.e., the set of polynomials is dense) if and only if λ ≥ 1. A second question which arises in this context is, whether the weight u admits a Jackson theorem. In other words, for which weights does there exist a sequence ∞ such that (ηm )m=1   Em (f )u,p ≤ ηm f  up   for all absolutely continuous functions f with f  up < ∞. In [109] Lubinsky proved that, for continuous weights, this kind of Jackson theorem holds if and only if  lim u(x)

x→∞

x

u−1 (t) dt = 0

.−1 

and

0

lim

min u(y)

x→∞ y∈[0,x]



u(t) dt = 0

x

and analogous conditions for x −→ −∞ are fulfilled. In particular, for Freud weights of the form (4.1.2) with λ > 1, we have (see [44, p. 185] and [163, p. 81]) Em (f )u,p ≤

c m

1− λ1

   f u p

for 1 ≤ p ≤ ∞ and with c independent of m and f . Moreover, this rate of convergence is the best possible for absolutely continuous functions f with   f  u < ∞. From the previous inequality we deduce that a critical case is λ = 1. p In this case, as we have already mentioned, the polynomials are dense, however a Jackson theorem of the previous form does not hold (see [108]).


4.1.1 The Case of Freud Weights Before considering generalized Freud weights, we recall some results related to Freud weights due to Ditzian and Totik [44]. Let us consider the weight u(x) = (1 + |x|)β e−|x| , λ

β ≥ 0, λ > 1.

(4.1.3)

The Mhaskar–Rahmanov–Saff number $a_m$ related to the Freud weight $e^{-|x|^\lambda}$ is given by (see for instance [102, (1.8)])
$$a_m = \left[ \frac{2^{\lambda-1}\,\Gamma(\lambda/2)^2}{\Gamma(\lambda)} \right]^{\frac1\lambda} m^{\frac1\lambda} \sim m^{\frac1\lambda}\,, \qquad (4.1.4)$$
where $\Gamma(\lambda)$ denotes the Gamma function. The Mhaskar–Rahmanov–Saff number related to the weight $u$ in (4.1.3) is equivalent to the previous one (namely, $a_m(u) \sim m^{\frac1\lambda}$). For $p = \infty$, we set
$$L^\infty_u := C_u = \left\{ f \in C(\mathbb{R}) : \lim_{x\to\pm\infty} f(x)u(x) = 0 \right\}$$
and equip this space with the norm
$$\|f\|_{C_u} := \|fu\|_\infty = \sup\{|f(x)u(x)| : x \in \mathbb{R}\}\,.$$
We emphasize that $f \in C_u$ can increase exponentially for $|x| \longrightarrow \infty$. Moreover, by $L^p_u$, $1 \le p < \infty$, as usual we denote the set of all measurable functions $f : \mathbb{R} \longrightarrow \mathbb{C}$ such that
$$\|f\|_{L^p_u} := \|fu\|_p = \left( \int_{\mathbb{R}} |f(x)u(x)|^p\,dx \right)^{\frac1p} < \infty\,.$$

p

All these spaces Lu , 1 ≤ p ≤ ∞, are Banach spaces with the above defined norms. p Subspaces of Lu are the Sobolev spaces given by p,r

Wu

# $     p := f ∈ Lu : f (r−1) ∈ AC(R), f (r) u < ∞ , p

r ∈ N,

where AC(R) denotes the set of all functions which are absolutely continuous on every closed subinterval of R. We equip these spaces with the norm    (r)  f Wp,r f = u + u f  . p u p


For every f ∈ Lu , 1 ≤ p ≤ ∞ and r ∈ N, we consider the following rth modulus of smoothness ωr (f, t)u,p = r (f, t)u,p +

2 

  inf (f − P ) uLp (Jk ) : P ∈ Pr−1 ,

(4.1.5)

k=1

  1 where t > 0 is sufficiently small, where J1 = −∞, −c0 r t − λ−1 , J2 =   1 c0 r t − λ−1 , +∞ , and where c0 > 1 is a fixed constant. Its main part r (f, t)u,p is defined by    r (f, t)u,p = sup  rh f uLp (I

(4.1.6)

 1 1 Ih = −c0 r h− λ−1 , c0 r h− λ−1

(4.1.7)

h)

0 s ts t >0

equipped with the norm f Zp,s = f Zp,s,r = f Lpu + sup u u t >0

ωr (f, t)u,p . ts p,s

In the sequel we will denote these subspaces briefly by Zu , without the second index r and with the assumption r > s. We remark that, due to (4.1.12) and (4.1.14), p,s we can deduce r (f, t)u,p ∼t,f ωr (f, t)u,p for every f ∈ Zu . We observe that, for sufficiently small h > 0 and for x, y ∈ Ih (cf. (4.1.7)), we have that |x − y| ≤ c h implies u(x) ∼ u(y)

(4.1.15)

for every value β ≥ 0 of the parameter β of the weight u in (4.1.3). The following lemma collects some polynomial inequalities (see, for instance, [104]). Lemma 4.1.1 Let 1 ≤ p ≤ ∞ and let u(x) = (1 + |x|)β e−|x| with λ > 1 and β ≥ 0. Moreover, let am be the Mhaskar–Rahmanov–Saff number given by (4.1.4). Then, for every polynomial Pm ∈ Pm , we have λ

   P u ≤ c m Pm up , m p am 

m Pm uq ≤ c am

1−1 p

q

Pm up ,

1 ≤ p < q ≤ ∞,

(4.1.16)

(4.1.17)


and Pm up ≤ c Pm uLp [−am ,am ] ,

(4.1.18)

where c is a constant independent of m and Pm . Finally, for all δ > 0, the inequality Pm uLp (Jm ) ≤ c e−c1 m Pm up ,

(4.1.19)

where Jm = {x ∈ R : |x| > (1 + δ)am } and where c and c1 are positive constants independent of m and Pm . With the help of (4.1.19), it easily follows that the inequality " ! f uLp (Jm ) ≤ c Em (f )u,p + e−c1 m f up

(4.1.20)

p

holds for every f ∈ Lu (see also [151, (3.4)]).

4.1.2 The Case of Generalized Freud Weights Here, we study the polynomial approximation of functions f : R \ {0} −→ R and present results proved in [140, 141]. To be more precise, we consider functions which can increase exponentially for |x| −→ ∞ and can also have 0 as singular point. We define suitable function spaces, moduli of smoothness, and K-functionals and we give estimates of the error of the best polynomial approximation in terms of the moduli of smoothness. Let us consider the weight function u(x) = |x|γ (1 + |x|)β e−

|x|λ 2

(4.1.21)

,

where γ ≥ 0, β ∈ R, and λ > 1. The related Mhaskar–Rahmanov–Saff number 1 am = am (u) ∼ m λ is as in (4.1.4) (see [102, (1.8)]). For 1 ≤ p < ∞, we again p denote by Lu the set of all measurable functions f : R −→ R such that  f 

p Lu

:= f up =

1 |f (x)u(x)| dx p

R

p

< ∞,

while for p = ∞, we introduce the space

Cu :=

L∞ u

=

⎧ ⎪ ⎪ ⎨

# $ f ∈ C(R) : lim f (x)u(x) = 0 |x|→∞

: γ = 0,

# $ ⎪ ⎪ ⎩ f ∈ C(R \ {0}) : lim f (x)u(x) = lim f (x)u(x) = 0 : γ > 0 , x→0

|x|→∞


with the norm f L∞ := f u∞ = sup {|f (x)u(x)| : x ∈ R \ {0}} . u p

Again, all these spaces Lu with the above defined norms are Banach spaces. Note p that a function in Lu does not belong necessarily to Lp and that the limiting conditions in the definition of Cu imply the density of the set of polynomials in Cu . For r ∈ N, let us define the respective Sobolev spaces by p,r Wu

# $    (r)  p (r−1) = f ∈ Lu : f ∈ AC(R \ {0}), f u < ∞ , p

where AC(R \ {0}) denotes the set of all the functions which are absolutely continuous on every closed subinterval of R \ {0}. We equip these spaces with the norm    (r)  f Wp,r f = u + u f  . p u p

p

For 1 ≤ p ≤ ∞, f ∈ Lu , and r ∈ N, we consider the following rth modulus of smoothness, the definition of which appeared in [140, 141], ω

r

(f, t)∗u,p

=

r

(f, t)∗u,p

+

3  k=1

inf (f − P ) uLp (Ik ) ,

P ∈Pr−1

t0 > 0 and where I1 = where 0 < t < t0 with a sufficiently small   1 1 − λ−1 − λ−1 , I2 = (−4 r t, 4 r t), I3 = c r t , +∞ , and where c > 0 −∞, −c r t is a constant. Its main partcr (f, t)∗u,p is defined by   r (f, t)∗u,p = sup rh (f )uLp (I

r,h )

0 0. If γ + p1 > r, then f uLp (−t,t )

     ≤ c t f uLp (−a,a) + f (r) u



r

,

Lp (−a,a)

a>t.

(4.1.25)

Furthernore, if γ + p1 ≤ r, if γ + p1 = 1, 2, . . . , r, and if f (r−1−τ )(0) exists with : 9 τ = γ + p1 , then there exists a polynomial P ∈ Pr−1−τ such that (f − P ) uLp (−t,t )

     ≤ c t f uLp (−a,a) + f (r) u



r

Lp (−a,a)

,

(4.1.26)

where, in both estimates, c is independent of t and f . To complete Lemma 4.1.2, we remark that if γ + tr

log t −1

1 p

= r, then (4.1.25) holds true

in place of and if γ + p1 = ν ∈ N, ν < r, then (4.1.26) holds true t r log t −1 in place of τ and t r , respectively. For simplicity, in the sequel tr ,

with with ν and we will assume that γ + p1 is not an integer. By means of the modulus of smoothness, for 1 ≤ p ≤ ∞, we can define the Zygmund spaces p,s Zu

# $ ωr (f, t)∗u,p p = f ∈ Lu : sup 0

r > s ∈ R+ ,

equipped with the norm f Zp,s = f Lpu + sup u t >0

ωr (f, t)∗u,p ts

.


Let us remark that, due the inequalities (4.1.24) and (4.1.23), the relation supt >0  r (f, t)u,p t −s < ∞ implies supt >0 ωr (f, t)∗u,p t −s < ∞. Therefore, in the definition of the Zygmund space, ωr (f, t)∗u,p can be replaced by  r (f, t)u,p . The following lemma collects some polynomial inequalities, which can be found in [142, pp. 106,107]. |x|λ

Lemma 4.1.3 Let u(x) = |x|γ (1 + |x|)β e− 2 with λ > 1, β ∈ R, and γ > − p1 if 1 ≤ p < ∞, as well as γ ≥ 0 if p = ∞. Moreover, let am be the Mhaskar– Rahmanov–Saff number given by (4.1.4). Then, for every polynomial Pm ∈ Pm , we have    P u ≤ c m Pm up m p am

(4.1.27)

and  Pm uq ≤ c

m am

1−1 p

q

Pm up ,

1 ≤ p < q ≤ ∞,

(4.1.28)

where c is a constant independent of m and Pm . Moreover, for any fixed d > 0, there exists a constant c = c(d) = c(m, Pm ) such that Pm up ≤ c Pm uLp (Imd ) ,

(4.1.29)

  where Imd = −am , − d mam ∪ d mam , am . Finally, for all δ > 0, the inequality Pm uLp (Imδ ) ≤ c e−c−1m Pm up

(4.1.30)

holds with Imδ = {x ∈ R : |x| > (1 + δ)am } and positive constants c and c1 independent of m and Pm . Let us consider the generalized Freud weight w(x) = |x|α e−|x| , λ

(4.1.31)

 w ∞ where α > −1, λ > 1, and let pm be the corresponding sequence of m=1 orthonormal polynomials with positive leading coefficient γm = γm (w) > 0. These polynomials, sometimes called generalized Freud polynomials, have been extensively studied in [89–91, 105, 174]. We denote by xk , 1 ≤ k ≤ " m2 #, the w and by x positive zeros of pm −k the negative ones. If m is odd, then x0 = 0 is a zero w of pm . The location of these zeros can be described by − am +

c am c am ≤ x−" m2 # < · · · < x1 < x2 < · · · < x" m2 # ≤ am − 2/3 , m2/3 m

(4.1.32)


where c is a positive constant independent of m (see [89, Theorem 1.3]). The w following inequalities deal with the behaviour of the orthonormal polynomials pm and their zeros xk . In [89, Theorem 1.13] it has been proved that   α 6   w  |x|λ p (x)e− 2 |x| + am 2 4 a 2 − x 2  ∼m 1 m  m  m x∈[−am ,am ]

(4.1.33)

sup

and, consequently % 2 &− 14   w  a m 2 2 p (x) w(x) ≤ c a − x + √ , m m 3 m

x ∈ Imd ,

(4.1.34)

where Imd is the subset defined in Lemma 4.1.3. Moreover, the equivalence %

1

  √ ∼k,m xk  w    pm (xk ) w(xk ) holds true for |k| ≤

m 2.

− xk2

+

am √ 3 m

2 & 14 ,

(4.1.35)

Finally, the distance

xk =

 2 am

xk+1 − xk :

k = 1, . . . , " m2 # ,

(4.1.36)

xk − xk−1 : k = −1, . . . , −" m2 # ,

w , where we set x m between two consecutive zeros of pm = am , " 2 #+1 = −x−" m 2 #−1 can be estimated by

xk ∼k,m

2 1 am 6 , m a 2 − x 2 + m−2/3 a 2 m m k

|k| ≤

m , 2

(4.1.37)

In the sequel we will need the following proposition, proved in [89, Theorem 1.4]. To demonstrate the technique, we give the proof here.  √ w (x) w(x) 4 |a 2 − x 2 |, 1 ≤ p < ∞, and g ∈ Lp . Proposition 4.1.4 Let t (x) = pm m Then, 1

 |g(x)t (x)| dx p

R

with c = c(m, g).

p

 ≥c

am −am

1 |g(x)| dx p

p

(4.1.38)


Proof Let m be even and set Im =



 δ δ xk − xk , xk + xk 8 8 m

|k|≤ 2

with a fixed δ > 0 sufficiently small. Moreover, define Jm = (−am , am )\Im . Taking into account (4.1.43), it follows 6  w  p (x) w(x) 4 a 2 − x 2 ∼m |x − xd | , m m xd w (x) closest to x. Consequently, we have |t (x)| ≥ c > 0 where xd is a zero of pm 0 for all x ∈ Jm . Hence, we get p g tp

-

 |g(x)| dx = c0

≥ c0

am

p

Cm

% = c0 a m

1 −1

−am

|g(x)| dx +

|g(x)| dx p

Im

&

 |g(am y)| dy +

.

 p

p

|g(am y)| dy p

Im∗

with Im∗



 xk δ xk xk δ xk = − , + . am 8 am am 8 am m |k|≤ 2

The measure of the subset Im∗ is less than c02δ and, by the absolute continuity of the integral, we can choose δ such that the integral extended to Im is one half of that extended to [−1, 1]. From this we obtain (4.1.38). Note that the same proof works if m is odd.  

4.1.3 Lagrange Interpolation in Weighted Lp -Spaces Here we study the approximation of functions f : R \ {0} −→ R by means of Lagrange interpolation and present results from [126]. More precisely, we consider |x|λ

the generalized Freud weight u(x) = |x|γ (1 + |x|)β e− 2 , defined in (4.1.21), and p we investigate the convergence of the interpolation process in Lu for 1 < p < ∞. As we have already mentioned, since u(0) = 0, the possible singularity of the function f at 0 generates special difficulties and requires a more careful study of the polynomial behaviour in suitable neighborhoods of the origin (cf. Lemma 4.1.2). In order to introduce our interpolation process, we consider the generalized  w ∞ λ Freud weight w(x) = |x|α e−|x| and the corresponding sequence pm of m=1


orthonormal polynomials with positive leading coefficients. Unfortunately, the behaviour of the orthoprojections related to $\{p_m^w\}_{m=1}^{\infty}$ is poor. For instance, in [138], the corresponding Fourier sums $S_m^w f$ have been considered for functions $f \in L^p_u$ and, similarly to the cases of Freud and Laguerre weights, it has been proved that the inequality $\|S_m^w(f)\,u\|_p \le c\,\|f u\|_p$ with a constant $c>0$ holds true only if $p \in \bigl(\frac43, 4\bigr)$. Furthermore, the Lagrange interpolation polynomials $L_m^w f$ interpolating $f$ at the zeros of $p_m^w$ behave like those based on Freud zeros. To overcome such a problem, in [125, 151, 183] the authors have suggested to interpolate a "finite section" of the function and then to estimate a finite section of the interpolating polynomial in suitable weighted norms. A similar method has been used for Fourier sums in [138, 151]. To describe the idea more precisely, denote by $a_m$ the Mhaskar–Rahmanov–Saff number defined in (4.1.4) and by $\chi_m$ the characteristic function of the interval $[-\theta a_m, \theta a_m]$ for a fixed $\theta \in (0,1)$. Then, the $L^p_u$-convergence of the sequences $\{\chi_m L_m^w(\chi_m f)\}_{m=1}^{\infty}$ and $\{\chi_m S_m^w(\chi_m f)\}_{m=1}^{\infty}$ has been proved in [151, Theorems 3.1, 3.6] for $1 < p < \infty$ and $\alpha = \gamma = \beta = 0$. But $\chi_m L_m^w(\chi_m f)$ is a "truncated" polynomial, and therefore $\chi_m L_m^w$ does not project a function belonging to $L^\infty_u$ into the space $\mathbb P_{m-1}$ of polynomials of degree at most $m-1$. On the other hand, bounded projections, as well as those having the smallest norm, are crucial tools in different topics, for instance in the numerical treatment of operator equations. In order to obtain these kinds of projections, following an idea previously used in [204], we denote by $L_{m,2}^w f$ the Lagrange interpolation polynomial of $f \in L^\infty_u$ at the zeros of $p_m^w$ and the extra points $\pm a_m$. Then, we set $L_{m,2}^{*,w} f = L_{m,2}^w(\chi_j f)$, where $f \in L^\infty_u$ and where $\chi_j$ denotes the characteristic function of the interval $[-x_j, x_j]$, with $x_j = x_j(m) = \min\{x_k : x_k \ge \theta a_m\}$, where $\theta \in (0,1)$ is fixed (cf. Sect. 3.3.5). The operator $L_{m,2}^{*,w}$ does not project $L^\infty_u$ onto $\mathbb P_{m+1}$, but onto a subspace $\mathbb P^*_{m+1} \subset \mathbb P_{m+1}$. We prove that $\bigcup_{m=1}^{\infty} \mathbb P^*_{m+1}$ is dense in $L^p_u$ for $1 \le p \le \infty$ and that, for each element of $\mathbb P^*_{m+1}$, both Marcinkiewicz inequalities hold true. As a by-product, we characterize the Marcinkiewicz bases in $\mathbb P^*_{m+1}$. Moreover, under simple necessary and sufficient conditions on the weights $w$ and $u$, we show that the operators $L_{m,2}^{*,w}$ are uniformly bounded in some important subspaces of $L^p_u$.

Let $w(x)$ be the generalized Freud weight defined in (4.1.31), $\{p_m^w\}_{m=1}^{\infty}$ the related system of orthonormal polynomials, and $x_k$, $|k| \le \lfloor m/2\rfloor$, the zeros of $p_m^w$ as described in (4.1.32). In order to introduce a subspace of $\mathbb P_{m+1}$, we assume $m$ to be even and, for a fixed $\theta \in (0,1)$, we set
\[
x_j = \min\Bigl\{x_k \,:\, x_k \ge \theta a_m\,,\ k = 1,\dots,\tfrac m2\Bigr\}\,.
\tag{4.1.39}
\]
By $\mathbb P^*_{m+1}$ we denote the subspace of $\mathbb P_{m+1}$ defined by
\[
\mathbb P^*_{m+1} = \bigl\{Q \in \mathbb P_{m+1} \,:\, Q(\pm a_m) = Q(x_i) = 0\,,\ |i| > j\bigr\}\,.
\]


Setting
\[
\varphi_k(x) = \frac{\ell_k^{*,w}(x)}{u(x_k)}\,,\qquad |k| \le j\,,
\]
with
\[
\ell_k^{*,w}(x) = \frac{p_m^w(x)\,\rho_m(x)}{(p_m^w)'(x_k)\,(x-x_k)\,\rho_m(x_k)}\,,
\tag{4.1.40}
\]

the collection $\{\varphi_{-j},\dots,\varphi_{-1},\varphi_1,\dots,\varphi_j\}$ is a basis of $\mathbb P^*_{m+1}$, i.e., every polynomial $Q \in \mathbb P^*_{m+1}$ can be uniquely represented in the form
\[
Q = \sum_{|k|\le j} Q(x_k)\,u(x_k)\,\varphi_k =: L_{m,2}^{*,w} Q\,.
\]
Extending the operator $L_{m,2}^{*,w}$ to the functions $f \in C_u$, we introduce
\[
L_{m,2}^{*,w} f = \sum_{|k|\le j} f(x_k)\,\ell_k^{*,w}\,.
\]

This polynomial interpolates the function $f$ at each zero $x_k$, $|k| \le j$, and it vanishes at the points $\pm a_m$ and $x_i$, $|i| > j$. Hence, it belongs to $\mathbb P^*_{m+1}$. Therefore, $L_{m,2}^{*,w} : C_u \longrightarrow \mathbb P^*_{m+1}$ is a projection. Notice that the Lagrange polynomial $L_{m,2}^w f$, interpolating $f$ at the knots $x_i$, $|i| \le \frac m2$, and at $\pm a_m$, can be written as
\[
L_{m,2}^{w} f = \sum_{|k|\le \frac m2 +1} f(x_k)\,\ell_k^{*,w}\,,
\]
where the $\ell_k^{*,w}$'s are defined in (4.1.40) for $|k| \le m/2$, $x_{\pm(\frac m2+1)} = \pm a_m$, and
\[
\ell^{*,w}_{\pm(\frac m2+1)}(x) = \frac{p_m^w(x)\,(a_m \pm x)}{2\,a_m\, p_m^w(\pm a_m)}\,.
\tag{4.1.41}
\]

Now, L∗,w m,2 f is not a “truncated” polynomial as in [151, 183], but it satisfies w L∗,w m,2 f = Lm,2 fj ,

where fj = χj f and χj is the characteristic function of the interval [−xj , xj ].
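As a concrete illustration of the operator just described, the following minimal Python sketch assembles $L_{m,2}^{*,w} f$ for the special case $\lambda = 2$, $\alpha = \gamma = \beta = 0$, i.e. for the Hermite weight $w(x) = e^{-x^2}$. The stand-in for the Mhaskar–Rahmanov–Saff number (taken just above the largest zero) and the cutoff applied directly at $\theta a_m$ instead of at $x_j$ are simplifying assumptions made only for this sketch; it is not the book's general construction.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.interpolate import BarycentricInterpolator

def truncated_lagrange(f, m, theta=0.8):
    """Sketch of L^{*,w}_{m,2} f for w(x) = exp(-x^2) (lambda = 2, alpha = gamma = beta = 0).

    The interpolant matches f at the zeros x_k with |x_k| <= theta*a_m and is forced
    to vanish at the remaining zeros and at +-a_m, so it belongs to P*_{m+1}.
    """
    nodes, _ = hermgauss(m)              # zeros of the m-th Hermite polynomial
    a_m = 1.000001 * nodes[-1]           # stand-in for the MRS number a_m (assumption)
    xs = np.concatenate(([-a_m], nodes, [a_m]))
    vals = np.where(np.abs(xs) <= theta * a_m, f(xs), 0.0)   # f on the kept zeros, 0 elsewhere
    return BarycentricInterpolator(xs, vals)                 # polynomial of degree m + 1

# usage: interpolate a function whose derivative is singular at the origin
p = truncated_lagrange(lambda x: np.abs(x) ** 1.5, m=40)
print(p(0.3), 0.3 ** 1.5)
```

The same pattern carries over to general $\lambda$ and $\alpha$ once the zeros of $p_m^w$ are available; for $\lambda = 2$ and $\alpha \ne 0$ they can be obtained from Laguerre zeros (see the remark and the sketch further below).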


We can estimate the fundamental Lagrange polynomials, defined by (4.1.40) and (4.1.41), in the following way. If $|k| \le m/2$, then, due to (4.1.34) and (4.1.35), we get
\[
\frac{\bigl|\ell_k^{*,w}(x)\bigr|\,u(x)}{u(x_k)}
= \frac{\bigl|p_m^w(x)\bigr|\,\rho_m(x)\,u(x)}{\bigl|(p_m^w)'(x_k)\bigr|\,|x-x_k|\,\rho_m(x_k)\,u(x_k)}
\le c\,\Bigl|\frac{x}{x_k}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+|x_k|}\Bigr)^{\beta}\frac{\Delta x_k}{|x-x_k|}
\tag{4.1.42}
\]
for $x \in I_m^d$ (cf. Lemma 4.1.3) and $x_k \ne x_d$, where $x_d$ is a zero of $p_m^w$ closest to $x$. Furthermore, the equivalence

\[
\frac{\bigl|\ell_d^{*,w}(x)\bigr|\,u(x)}{u(x_d)} \;\sim\; 1
\tag{4.1.43}
\]

is well-known (see [104, pp. 320–321]). Finally, by (4.1.43) and (4.1.35) with $x = a_m$ and $x_d = x_{\frac m2}$, it follows that
\[
\bigl|p_m^w(a_m)\bigr|\sqrt{w(a_m)} \;\ge\; c\,\sqrt[6]{\frac{m}{a_m^{3}}}\,.
\tag{4.1.44}
\]

With the help of (4.1.44) we get
\[
\frac{\bigl|\ell^{*,w}_{\frac m2+1}(x)\bigr|\,u(x)}{u(a_m)}
\;\le\; \frac{\bigl|p_m^w(x)\bigr|\sqrt{w(x)}}{\bigl|p_m^w(a_m)\bigr|\sqrt{w(a_m)}}\,
\Bigl|\frac{x}{a_m}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+a_m}\Bigr)^{\beta}
\;\le\; \frac{c\,m^{-\frac16}\sqrt{a_m}}{\sqrt[4]{a_m^{2}-x^{2}+m^{-\frac23}a_m^{2}}}\,
\Bigl|\frac{x}{a_m}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+a_m}\Bigr)^{\beta}
\;\le\; c\,\Bigl|\frac{x}{a_m}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+a_m}\Bigr)^{\beta}
\tag{4.1.45}
\]
for $x \in I_m^d$. An analogous inequality holds for $\ell^{*,w}_{-\frac m2-1}$. Now, for $x \in I_m^d$, the sum
\[
S_m(\alpha,\gamma,\beta,x) :=
\sum_{|k|\le \frac m2,\ k\ne d}
\Bigl|\frac{x}{x_k}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+|x_k|}\Bigr)^{\beta}\frac{\Delta x_k}{|x-x_k|}
\;+\; 2\,\Bigl|\frac{x}{a_m}\Bigr|^{\gamma-\frac\alpha2}\Bigl(\frac{1+|x|}{1+a_m}\Bigr)^{\beta}
\]


can be estimated by
\[
S_m(\alpha,\gamma,\beta,x) \;\le\; c\,
\begin{cases}
\log m &:\ 0 \le \gamma-\frac{\alpha}{2} \le 1\,,\ 0 \le \beta \le 1\,,\\[1ex]
m^{\tau} &:\ \text{otherwise}
\end{cases}
\tag{4.1.46}
\]

for some $\tau>0$ and $\alpha,\beta,\gamma$ arbitrarily fixed. Up to now we supposed $m$ to be even. Nevertheless, if $m$ is odd, we can replace $p_m^w(x)$ by $p_m^w(x)\,\frac{x-x_0}{x}$, where $x_0 \ne 0$ is such that $x_1 - x_0 \sim \frac{a_m}{m}$. For instance, we can choose $x_0 = \frac{x_1}{2}$ (see [183] for more details). So, without loss of generality, from now on we tacitly assume $m$ to be even. From the numerical point of view, in order to compute $L_{m,2}^{*,w} f$, we observe that, if $\lambda = 2$, i.e. $w(x) = |x|^{\alpha} e^{-x^2}$, then $\{p_m^w\}_{m=1}^{\infty}$ is the sequence of Sonin–Markov polynomials, which are simply related to Laguerre polynomials (see [92]). In the case $\lambda = 2$ we can use the Mathematica package "Orthogonal Polynomials" presented in [31]. From (4.1.37) and (4.1.39) we deduce
\[
\Delta x_k \;\sim\; \frac{a_m}{m}\,,\qquad |k| \le j\,.
\tag{4.1.47}
\]
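Concerning the numerical remark above: for $\lambda = 2$ the relation between the Sonin–Markov polynomials and the Laguerre polynomials can also be exploited directly in Python. The sketch below assumes the classical identity $p_{2n}^w(x) = \mathrm{const}\cdot L_n^{((\alpha-1)/2)}(x^2)$, so the nodes of even degree are just the signed square roots of generalized Laguerre zeros; it is a small illustrative alternative to the Mathematica package mentioned above, not the book's recipe.

```python
import numpy as np
from scipy.special import roots_genlaguerre

def sonin_markov_zeros(two_m, alpha):
    """Zeros of the Sonin-Markov polynomial of even degree 2m for w(x) = |x|^alpha e^{-x^2}.

    Relies on p_{2m}(x) = const * L_m^{((alpha-1)/2)}(x^2), so the zeros are the
    signed square roots of the zeros of the generalized Laguerre polynomial.
    """
    n = two_m // 2
    t, _ = roots_genlaguerre(n, (alpha - 1.0) / 2.0)   # zeros of L_n^{((alpha-1)/2)}
    x_pos = np.sqrt(t)
    return np.concatenate((-x_pos[::-1], x_pos))        # symmetric zeros in ascending order

print(sonin_markov_zeros(8, alpha=0.5))
```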

The following Marcinkiewicz-type inequality (4.1.48) holds true. Lemma 4.1.5 Let 1 ≤ p < ∞, 0 < θ < 1, and let w(x) = |x|α e−|x|

λ

as well as u(x) = |x|γ (1 + |x|)β e−

|x|λ 2

be the weights in (4.1.31) and (4.1.21) with α > −1, β ∈ R, γ > − p1 , and λ > 1. w and let  be defined as in (4.1.36). Moreover, let xk , |k| ≤ m2 , be the zeros of pm k Then, for every polynomial Q ∈ P m with a fixed integer, there exists a θ1 ∈ (θ, 1) such that   |Q(x)u(x)|p dx , xk |Q(xk )u(xk )|p ≤ c (4.1.48) Im∗

|k|≤j

where $I_m^* := [-\theta_1 a_m, -x_1] \cup [x_1, \theta_1 a_m]$, where $j$ is defined in (4.1.39), and where $c$ is a positive constant independent of $m$ and $Q$.

In case $u(x) = e^{-\frac{x^2}{2}}$ and the $x_k$ are the Hermite zeros, an inequality similar to (4.1.48) was first proved by P. Nevai [177]. We omit the proof of Lemma 4.1.5 because it follows by applying the same method as used in [151, 183]. But we emphasize that (4.1.48) seems to be false for $j = \frac m2$ and $Q \in \mathbb P_m$ (see [110, 111] for similar remarks). Now we are going to show that $\bigcup_m \mathbb P^*_{m+1}$ is dense in $L^p_u$, $1 \le p \le \infty$. To this end, in the following lemma we formulate a stronger statement. In order to present the


lemma we need some notation. With θ ∈ (0, 1) fixed and with j defined in (4.1.39), we choose B λ C θ M= m =: c0 m , (4.1.49) θ +1 with 0 < c0 < 1. Moreover, we set   ∗ m+1 (f )u,p = inf (f − Q)up : Q ∈ P∗m+1 . E

(4.1.50)

Lemma 4.1.6 Let 1 ≤ p ≤ ∞, and suppose that w and u are the weights defined in (4.1.31) and  ∞(4.1.21), respectively, with arbitrarily fixed parameters λ > 1, α, γ ,  β. If PM(m) m=1 is a sequence of quasi best approximation polynomials PM ∈ PM , p then, for every function f ∈ Lu , we have    " !   ∗ −c1 m m+1 f up E (f )u,p ≤  f − L∗,w m,2 PM u ≤ c EM (f )u,p + e p

(4.1.51) and -  . (r)   a r   a r  ∗,w  am ∗ m m r  L PM  f up , u ≤ c ω f, + m  m,2 m u,p m p

(4.1.52)

where r ∈ N, M is defined by (4.1.49), and c and c1 are positive constants independent of m and f . Proof We are going to prove (4.1.51) and (4.1.52) only for 1 ≤ p < ∞, since the case p = ∞ is simpler. Set Q = L∗,w m,2 (PM ). In view of (4.1.29) we have ∗ m+1 (f )u,p ≤ (f − Q) up ≤ (Q − PM ) up + c EM (f )u,p E

     w =  L∗,w P − L P m,2 M u + c EM (f )u,p m,2 M p

.

-     w ≤ c  L∗,w P − L P u M M m,2 m,2

Lp (Imd )

+ EM (f )u,p .

  where again Imd = −am , − d mam ∪ d mam , am with a constant d > 0. The first addend in the brackets is dominated by 1 p

c am sup |PM (x)u(x)| sup |x|≥θam



x∈Imd |k|≤ m +1 2

 ∗,w   (x) u(x) k

u(xk )

.


By (4.1.43) and (4.1.46) the sum is not greater than c mτ for some τ > 0. Therefore, it remains to estimate the quantity 1

γm := amp mτ sup |PM (x)u(x)| . |x|≥θam

But, from the choice of M, it is easily seen that θ am ≥ (1 + θ )aM . Using (4.1.30) and (4.1.28), we get γm ≤ c e−c m PM up ≤ c e−c m f up , from which (4.1.51) follows. Now, let us prove (4.1.52). By the Bernstein inequality (4.1.27), we have (r)   a r   ∗,w  m  L PM u m,2   m

p

  a r   r  (r)   w  m (r)  ∗,w   + am  L ≤ P − L P u u P M M M p m,2  m  m,2 m p

     a r  m   (r)   ∗,w ≤ c  Lw PM u . m,2 PM − Lm,2 PM u + p p m We have already seen that the first addend is less than c e−c1 m f up , while, in order to estimate the second addend, we can use the inequality . -    a r  a r  am  ∗ m m  (r)  f up + PM u ≤ c ωr f, p m m u,p m  

proved in [141, Theorem 2.3]. This completes the proof. p

By virtue of Lemma 4.1.6, every $f \in L^p_u$ can be approximated by means of elements of $\mathbb P^*_{m+1}$. Moreover, if $P_M \in \mathbb P_M$ belongs to a sequence of polynomials of quasi best approximation for $f$, its projection into $\mathbb P^*_{m+1}$ has an analogous behaviour with respect to $f$.

Let us first study the interpolation process $\{L_{m,2}^{*,w}\}_{m=1}^{\infty}$ in $C_u$.

Proposition 4.1.7 Let $w(x) = |x|^{\alpha} e^{-|x|^{\lambda}}$ and $u(x) = |x|^{\gamma}(1+|x|)^{\beta} e^{-\frac{|x|^{\lambda}}{2}}$ be the weights in (4.1.31) and (4.1.21) with $\alpha > -1$, $\gamma \ge 0$, $\beta \in \mathbb R$, and $\lambda > 1$. Moreover, assume that their parameters $\alpha, \gamma, \beta$ satisfy
\[
0 \le \gamma - \frac{\alpha}{2} \le 1 \qquad\text{and}\qquad 0 \le \beta \le 1\,.
\tag{4.1.53}
\]


Then, for every $f \in C_u$, we have
\[
\bigl\|(L^w_{m,2} f)\,u\bigr\|_\infty \le c\,\log m\,\|f u\|_\infty\,,
\tag{4.1.54}
\]
\[
\bigl\|(f - L^w_{m,2} f)\,u\bigr\|_\infty \le c\,\log m\,E_{m+1}(f)_{u,\infty}\,,
\tag{4.1.55}
\]
and
\[
\bigl\|(f - L^{*,w}_{m,2} f)\,u\bigr\|_\infty \le c\,\log m\,\bigl[E_M(f)_{u,\infty} + e^{-c\,m}\,\|f u\|_\infty\bigr]\,,
\tag{4.1.56}
\]

where M = c0 m, 0 < c0 < 1, is defined by (4.1.49)), and c = c(m, f ). Proof By the infinite-finite range inequality (4.1.29) we have  w    L f u ≤ c Lw f u ∞ d . m,2 m,2 ∞ L (I ) m

For every x ∈ Imd , we can write   ∗,w   w    (x)f (xk )  L f (x) u(x) ≤ u(x) m,2 k |k|≤ m 2 +1

⎡   ∗,w (x) u(x) d ≤ f u∞ ⎣ + u(xd )

+

   ∗,w   m +1 (x) u(x) 2

u(am )



 ∗,w   (x) u(x)

k=d, |k|≤ m 2

k

u(xk )

  ⎤  ∗,w   − m −1 (x) u(x) 2 ⎦ + u(−am )

=: f u∞ [I1 + I2 + I3 + I4 ] ,   w closest to x. By (4.1.43) and (4.1.46) where xd ∈ −x m2 , . . . , x m2 is a zero of pm we have I1 + I2 ≤ c [1 + Sm (α, γ , β, x)] ≤ c log m , recalling the assumptions on α, γ , β, and x. Further, let us estimate only the term I3 , since I4 can be handled in a similar way. Due to (4.1.45), since γ − α2 ≥ 0 and 0 ≤ β ≤ 1, we have     ∗,w  m +1 (x) u(x) 2

u(am )

  x ≤ c  a

γ − α    2 1 + |x| β  ≤ c.  1 + am m


Thus, (4.1.54) and (4.1.55) are proved. Finally, with the help of (4.1.51), inequality (4.1.56) follows from (4.1.54).   By virtue of Proposition 4.1.7 and a result due to P. Vértesi [211, p. 197], Lw m,2 : ∗,w ∗ Cu −→ Pm+1 and, consequently, Lm,2 : Cu −→ Pm+1 are projections having the smallest norm (w.r.t. the order), but L∗,w m,2 has the advantage of requiring only "b m# (0 < b < 1) evaluations of the function f . In case of λ = 2 and α = γ = β = 0, Proposition 4.1.7 recovers a result due to J. Szabados (see [204, Theorem 1]). The following Lemma 4.1.9 states a converse inequality to (3.2.84) for polynomials belonging to P∗m+1 . For this purpose, we need the following result, proved by B. Muckenhoupt [168, Lemma 8, p. 440]. Lemma 4.1.8 Let wr,s (x) = |x|r (1 + |x|)s−r and 1 < p < ∞. If r ≥ R, s ≤ S, r > − p1 , s < 1 − p1 , R < 1 − p1 , S > − p1 , then there exists a constant c = c(f ), such that p        f (y) f (x)wR,S (x)p dx    x − y wr,s (x) dy  dx ≤ c R R R for all measurable functions f : R −→ C, for which the right-hand side of this inequality is finite. Lemma 4.1.9 Let 1 < p < ∞ and u(x) be the weight in (4.1.21), where γ > − p1 and β ≥ 0. Then, there is a constant c = c(m, Q) such that ⎡ Qup ≤ c ⎣



⎤1

p

xk |Q(xk )|p up (xk )⎦

(4.1.57)

|k|≤j

for every polynomial Q ∈ P∗m+1 , if and only if −

1 α 1 − p1 , β ∈ R, and λ > 1, let σ (x) = (1 + |x|)ν u(x), and suppose that 1 < p < ∞ and ν > fulfilled, then, for every f ∈ Cσ ,

1 p.

If (4.1.58) is

   ∗,w  Lm,2 f u ≤ c f σ ∞ ,

(4.1.66)

p

where c is a constant independent of m and f . In the sequel we show that L∗,w m,2 f behaves like the best polynomial approximation p,r p,r in the Zygmund spaces Zu and in the Sobolev spaces Wu . We will prove that in these spaces the sequence of the operators L∗,w m,2 is uniformly bounded, if (4.1.58) is fulfilled. For this, the following lemma is crucial. Lemma 4.1.12 Let 1 < p < ∞, let w and u be defined by (4.1.31) and (4.1.21) with α > −1, γ > − p1 , β ∈ R, and λ > 1. Set (for some r ≥ 1) p,1

Lu

  p −1− p1 = f ∈ Lu : r (f, t)∗u,p t is summable on (0, 1) .

(4.1.67)

Then, the following assertions are equivalent: (a) Condition (4.1.58) is fulfilled. p,1 (b) There exists a constant c independent of m and f such that, for all f ∈ Lu , % &    a  1  amm r (f, t)∗ u,p m p  ∗,w  dt . Lm,2 f u ≤ c f up + 1+ 1 p m 0 t p

(4.1.68) p,1

(c) There exists a constant c independent of m and f such that, for all f ∈ Lu ,     ∗,w  f − Lm,2 f u

p

% ≤c ω

r



 a r a 1 am ∗ m m p f up + + f, m u,p m m

 0

am m

r (f, t)∗u,p 1

t 1+ p

& dt

.

(4.1.69) Proof We first show that (4.1.58) implies (4.1.68) and (4.1.69). Using Lemma 4.1.9 and well-known arguments (see [151] for more details), we get ⎛ ⎞1 % & p    a  1  amm r (f, t)∗  u,p  ∗,w  m p p p ⎝ ⎠ xk |f (xk )| u (xk ) ≤ c f uLp (Imd ) + dt , Lm,2 f u ≤ c 1+ 1 p m 0 t p |k|≤j


  where Imd = −θ am , − d mam ∪ d mam , θ am and d > 0 is a constant. Let PM ∈ PM

be a polynomial of quasi best approximation for f ∈ Lu and set Q = L∗,w m,2 (PM ). With the help of (4.1.68) we get p

      ∗,w   ∗,w [f − Lm,2 f ]u ≤ (f − Q)up + Lm,2 (f − Q)u p

p

% ≤ c (f − Q)up + % ≤ c (f − Q)up +

a  1  m

am m

p

m

0

a  1  m

r (f − Q, t)∗u,p

am m

p

m

t

0

1+ p1

r (f, t)∗u,p t

1+ p1

dt +

& dt   a r   (r)  m Q u p m

& ,

(4.1.70) p,r

since, for 1 ≤ p ≤ ∞, g ∈ Wu , and γ +

1 p

not an integer, we have (see [141])

    ωr (g, t)∗u,p ≤ c t r g (r) u

p

(4.1.71)

with c = c(t, g). Hence, using Lemma 4.1.6 and the Jackson-type inequality (4.1.22), the estimate (4.1.69) follows from (4.1.70). On the other hand, it is easily seen that (4.1.68) is equivalent to (4.1.69). Therefore it remains only to prove that (4.1.68) implies (4.1.58). To this end, let us consider the function ⎧ ⎪ 0 : x ≤ −x1 , ⎪ ⎪ ⎪ ⎪ ⎪ 1 x ⎪ ⎪ : −x1 < x ≤ x1 , + ⎨ 2x1 2 f0 (x) = x x2 ⎪ ⎪ ⎪ − + : x1 < x < x2 , ⎪ ⎪ x1 x1 ⎪ ⎪ ⎪ ⎩ 0 : x ≥ x2 . ∗,w and, with the help of (4.1.68), we get By definition L∗,w m,2 f0 = 1

     ∗,w  Lm,2 f0 u ≤ c uLp (−x1 ,x2 ) , p

i.e.,  ∗,w  1  u ≤ c (x1 ) p u(x1 ) , 1 p

(4.1.72)

due to the Marcinkiewicz-type inequality (4.1.57). As in the proof of Lemma 4.1.9, condition (4.1.58) follows from (4.1.72).  


In particular, if f ∈ Zu and r >

1 p,

i.e., sup ωk (f, t)∗u,p t −r < ∞ and k > r > t >0

1 p,

then (4.1.69) implies     a r m   ∗,w f Zp,r .  f − Lm,2 f u ≤ c u p m

(4.1.73)

The following proposition states the uniform boundedness of the Lagrange-type p,r operators L∗,w m,2 in Zu . Proposition 4.1.13 Let 1 < p < ∞, let w and u be the weights in (4.1.31) and (4.1.21) with α > −1, γ > − p1 , β ∈ R, and λ > 1. Under the p,r

assumption (4.1.58), for every f ∈ Zu , r >    ∗,w  Lm,2 f 

p,r

Zu

p,s

Moreover, for every f ∈ Zu , s > r >    ∗,w  f − Lm,2 f 

≤c

p,r Zu

1 p,

we have

≤ c f Zp,r . u

1 p,

(4.1.74)

we get

 a s−r m

m

f Zp,s , u

(4.1.75)

where, in both estimates, c is a constant independent of m and f . p,r

Proof Note that, for every f ∈ Zu , we have 

f 

p,r Zu

j ∼f f up + sup a j j ≥1

r Ej (f )u,p ,

where we took into account the Stechkin-type inequality (4.1.23). Hence, for every p,s f ∈ Zu with s ≥ r > p1 , we get    ∗,w  f − Lm,2 f 

p,r

Zu

 r     j   =  f − L∗,w f u + sup Ej f − L∗,w f =: A + B .  m,2 m,2 p u,p j≥1 aj

(4.1.76) In view of Lemma 4.1.12, we obtain A≤c

 a s m

m

f Zp,s . u

(4.1.77)

Since 

Ej f − L∗,w m,2 f





     f u : j ≤ m + 1 , ≤  f − L∗,w m,2 p

u,p

=

Ej (f )u,p

: j ≥ m +2,


we can conclude  r   j Ei f − L∗,w f B = sup m,2 u,p j ≥1 aj = max

 sup

j ≤m+1

j aj

 r   r  i   ∗,w Ei (f )u,p  f − Lm,2 f u , sup p j ≥m+2 aj

    r−s  s j j m r  am  s f Zp,s , sup Ej (f )u,p ≤ max c u am m aj j ≥m+2 aj ≤c

 a s−r m

m

f Zp,s . u (4.1.78)

Combining (4.1.77) and (4.1.78) with (4.1.76), we obtain (4.1.74) and (4.1.75). L∗,w m,2 f

 

p,r Wu ,

Concerning the behaviour of for f ∈ in addition to (4.1.58), we 1 assume that γ + p is not an integer. Then, from (4.1.69) the inequality     a r m   ∗,w f Wp,r  f − Lm,2 f u ≤ c u p m

(4.1.79)

follows. Proposition 4.1.14 Let 1 < p < ∞, let w and u be the weights in (4.1.31) and (4.1.21) with α > −1, γ > − p1 , β ∈ R, and λ > 1. Suppose that γ + p1 p,r is not an integer and that (4.1.58) is fulfilled. Then, for every f ∈ Wu ,    ∗,w  Lm,2 f 

p,r

Wu

≤ c f Wp,r . u

(4.1.80)

p,s

and, for every f ∈ Wu with s > r,    ∗,w  f − Lm,2 f 

p,r Wu

≤c

 a s−r m

m

f Wp,s , u

(4.1.81)

where c is a constant independent of m and f . p,s

Proof First, we prove inequality (4.1.81). Assume f ∈ Wu , s > r, and set Qm+1 = L∗,w m,2 f . From (4.1.69) we infer f (x) − Qm+1 (x) =

∞  ! " Q2k+1 (m+1) (x) − Q2k (m+1) (x) k=0

a.e. ,


and the Bernstein inequality (4.1.27) gives ∞     (r)      u (f − Qm+1 )(r) u ≤  Q2k+1 (m+1) − Q2k (m+1) p

p

k=0

 r ∞  ! "  2k+1 (m + 1)    Q k+1 ≤c 2 (m+1) − Q2k (m+1) u p . a2k+1 (m+1) k=0

Taking into account (4.1.79) and a2k+1 (m+1) ∼k,m 2

k+1 λ

am , we get

. ∞ ∞    s−r   a2k+1 (m+1) s−r  am s−r   p,s f  ∼ 2(k+1)(1−λ)/λ (f − Qm+1 )(r) u ≤ c f Wp,s Wu u p 2k+1 (m + 1) m k=0

≤c

 a s−r m

m

k=0

f Wp,s u

∞ 

2(k+1)(1−λ)/λ ≤ c

k=0

 a s−r m

m

f Wp,s , u

since 2(1−λ)/λ < 1 for λ > 1. Now let us prove inequality (4.1.80). We have           ∗,w   ∗,w   ∗,w (4.1.82) Lm,2 f  p,r =  Lm,2 f u +  Lm,2 f (r) u . Wu

p

p

relation (4.1.79) implies     a r m   ∗,w f Wp,r ≤ c f Wp,r .  Lm,2 f u ≤ f up + c u u p m

(4.1.83)

Let PM ∈ PM (with M ∼m m defined by (4.1.49)) be a sequence of polynomials of p,r quasi best approximation for f ∈ Wu . We can write    ! (r)   ∗,w (r)   ∗,w  "(r)   ∗.w   L f u u +  Lm,2 PM u  m,2  ≤  Lm,2 (f − PM )   , p p

p

and, due to (4.1.52) and (4.1.71), we have  (r)   ∗,w   L PM u ,  m,2  ≤ c f Wp,r u

(4.1.84)

p

p,r

since f ∈ Wu and since γ + we get   ∗,w  L (f − PM )  m,2

(r)

1 p

is not an integer. Finally, by (4.1.27) and (4.1.83),

  r    m  ∗,w   u ≤ c  Lm,2 (f − PM ) u p a m

p

 ≤c

m am

r EM (f )u,p + c f − PM Wp,r . u


Using Lemma 4.1.6 and the Jackson-type inequality (4.1.22), we obtain   ∗,w  L (f − PM )  m,2

(r)

  u .  ≤ c f Wp,r u

(4.1.85)

p

With the help of (4.1.83), (4.1.84), and (4.1.85), the inequality (4.1.80) follows from (4.1.82).   Finally remark that, for α = 0, hypothesis (4.1.58) becomes − p1 < γ < 1 − β − p1 , and thus all the previous results are true for a more restricted class of functions.

4.1.4 Gaussian Quadrature Rules

This section, the results of which were obtained in [126], deals with quadrature rules for evaluating integrals on the real line. To this aim we consider some truncated Gaussian and product rules related to generalized Freud weights. We point out that, in case of exponential weights, the truncation is not only important for the reduction of the computational cost of the rules, but also for ensuring their convergence. We investigate some quadrature formulas related to the generalized Freud weight $w(x) = |x|^{\alpha} e^{-|x|^{\lambda}}$ and study their convergence in weighted function spaces $C_u$ with $u(x) = |x|^{\gamma}(1+|x|)^{\beta} e^{-\frac{|x|^{\lambda}}{2}}$. Let us again denote by $x_k$, $|k| \le \lfloor m/2\rfloor$, the zeros of $p_m^w$, fix a parameter $\theta \in (0,1)$, and define the index $j$ by
\[
x_j \;=\; \min_{1\le k\le \frac m2}\bigl\{x_k \,:\, x_k \ge \theta a_m\bigr\}\,,
\tag{4.1.86}
\]
where $a_m$ is the Mhaskar–Rahmanov–Saff number related to $\sqrt w$, which satisfies (4.1.4). We consider a Gauss-type rule defined by
\[
\int_{\mathbb R} f(x)\,w(x)\,dx \;=\; \sum_{|k|\le j}\lambda_k^w\,f(x_k) \;+\; e_m(f)\,,
\tag{4.1.87}
\]

where m is even, λw k are the Christoffel numbers, and em (f ) is the remainder term. Rules of the form (4.1.87) for Laguerre and Freud weights appeared for the first time in [36, 125, 137]. The rule (4.1.87) can be obtained by replacing the function f by the Lagrange-type polynomial L∗,w m,2 f introduced in Sect. 4.1.3 and using the ordinary Gaussian rule. Obviously em (f ) = 0 if f ∈ P∗m+1 , where P∗m+1 is the subspace of polynomials also introduced in Sect. 4.1.3. In the following theorem we give a simple estimate of the error, which can be useful in different contexts.
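Before turning to that error estimate, here is a minimal numerical illustration of the truncated rule (4.1.87), again only for the Hermite case $\lambda = 2$, $\alpha = \gamma = \beta = 0$; the stand-in for the Mhaskar–Rahmanov–Saff number is an assumption made for this sketch. It is checked against the closed form $\int_{\mathbb R}\cos(x)\,e^{-x^2}\,dx=\sqrt{\pi}\,e^{-1/4}$.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def truncated_gauss(f, m, theta=0.8):
    """Truncated Gauss rule of the form (4.1.87) for w(x) = exp(-x^2)."""
    nodes, weights = hermgauss(m)          # zeros x_k and Christoffel numbers lambda_k^w
    a_m = 1.000001 * nodes[-1]             # stand-in for the MRS number a_m (assumption)
    keep = np.abs(nodes) <= theta * a_m    # discard the nodes outside [-theta*a_m, theta*a_m]
    return float(np.sum(weights[keep] * f(nodes[keep])))

exact = np.sqrt(np.pi) * np.exp(-0.25)     # integral of cos(x) exp(-x^2) over the real line
print(abs(truncated_gauss(np.cos, m=30) - exact))
```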


Proposition 4.1.15 Let f ∈ Cσ , where σ (x) = |x|γ (1 + |x|)β e−b|x| , 0 < b ≤ 1. If λ



w(x) dx < ∞, σ (x)

R

then " ! |em (f )| ≤ c EM (f )σ,∞ + e−c1 m f σ ∞ ,

(4.1.88)

where $M = c_0 m$, where $c_0 \in (0,1)$ is fixed, and where $c$ and $c_1$ are positive constants independent of $m$ and $f$.

Proof Let $M$ and $m$ be chosen in such a way that $\theta a_m(\sqrt w) \ge (1+\theta)\,a_M(\sigma)$ and $2m-1 \ge M$, and let $P_M \in \mathbb P_M$ be a sequence of polynomials of quasi best approximation for $f \in C_\sigma$. Then,
\[
|e_m(f)| \le |e_m(f - P_M)| + |e_m(P_M)|\,,
\]

(4.1.89)

and, since the ordinary Gaussian rule is exact for PM , with the help of (4.1.30) and the hypotheses, we get      λw  w  k |em (PM )| =  λk PM (xk ) ≤ PM σ L∞ {|x|>θ am (√w)} σ (xk ) |k|>j  |k|>j  ≤ PM σ L∞ {|x|>(1+θ )aM (σ )}

R

w(x) dx ≤ c e−c1 m PM σ ∞ ≤ c e−c1 m f σ ∞ . σ (x)

(4.1.90) Moreover, we have ⎞ ⎛   λw w(x) dx k ⎠ |em (f − PM )| ≤ (f − PM )σ ∞ ⎝ + ≤ c EM (f )σ,∞ . σ (xk ) R σ (x) |k|≤j

(4.1.91)  

Hence, due to (4.1.89), (4.1.90), and (4.1.91), we get (4.1.88).

In many applications the product rule
\[
\int_{\mathbb R} f(x)\,K(x,y)\,u(x)\,dx \;=\; \sum_{|k|\le j} A_k(y)\,f(x_k) + e_m^*(f) \;=:\; (G_m f)(y) + e_m^*(f)\,,
\tag{4.1.92}
\]
can be useful, where $e_m^*(f)$ is the remainder term and where
\[
A_k(y) \;=\; \int_{\mathbb R} \ell_k^{*,w}(x)\,K(x,y)\,u(x)\,dx
\tag{4.1.93}
\]

with ∗,w k (x) being the fundamental polynomials w.r.t. the interpolation operator L∗,w f m,2 defined in (4.1.40), and where j is defined in (4.1.86). We remark that the construction of the coefficients Ak (y) depends on the parameters of the weights u and w, and strongly on the form of the kernel K(x, y). Here, omitting this discussion, we want to assign the conditions under which the quadrature rule (Gm f ) (y) converges to the integral uniformly with respect to the parameter y. Proposition 4.1.16 Let us assume that K : R2 → R as well as the weights u and w, defined in (4.1.21) and (4.1.31), respectively, satisfy $ #√ w(x) :x∈R 0, which does not vanish on Im and which will be specified in the sequel, and HIm is the Hilbert transform on Im (cf. (3.2.6)). By the Marcinkiewicztype inequality (3.2.84), we have f u A1 ≤ c  ∞ 3 am



|t| 2 −γ |(t)| dt . (1 + |t|)β α

Im

Moreover, by (4.1.94), it follows that f u A1 ≤ c  ∞ 3 am f u ≤c  ∞ 3 am

 |(t)| dt Im

- Im

    HI G∗ (t) dt + m

 Im

  .  ∗    (t)  HI r −1 gky u (t) dt m

f u =: c  ∞ [B1 + B2 ] . 3 am In order to estimate B1 , we can write 6

3 4

G∗ (x) = ρm (x) 4

2 − x 2 pw (x)g(x)k (x)u(x) =: am y m

6 3 G (x) am 1

with G1 ∈ L log+ L, i.e., G∗ log+ |G∗ | ∈ L1 . In view of (4.1.34), we have   α |G1 (x)| ≤ c |x|γ − 2 (1 + |x|)β ky (x) .


Therefore, using a well-known result (see, for instance, [203]), we get B1 = c ≤c



6 3 am

Im



6 3 am

R

6     HI G1 (t) dt ≤ c a 3 m m



  |G1 (t)| 1 + log+ |G1 (t)| dt Im

     α |t|γ − 2 (1 + |t|)β ky (t) 1 + log+ ky (t) + log+ |t| dt . (4.1.100)

Due to the assumption (4.1.95), the last integral is bounded. Indeed, choosing r(x) such that The term √ B2 can be estimated in the same way.  3 and r(x) ∼x w(x) for x ∈ Im , we obtain | ∗ (t)| ≤ c am 6  3 B2 ≤ c am

      ky (x) √u(x) 1 + log+ ky (x) + log+ |x| dx . w(x) Im

(4.1.101)

From the estimates (4.1.100) and (4.1.101), it follows that A1 ≤ c f u∞ .

(4.1.102)

Concerning the term A2 , we can write     A2 = L∗,w f k u  y m,2

L1 (Jδ,m )

     f k u +  L,w  y m,2

L1 ({|x|>am (1+δ)})

=: D1 + D2 ,

where Jδ,m := {am < |x| < am (1 + δ)}. By virtue of (4.1.35), we can be estimate   w α  p (x)ρm (x)ky (x) u(x) f u∞  |xk | 2 −γ xk m D1 ≤ c  dx . 3 (1 + |xk |)β Jδ,m x − xk am |k|≤j Moreover, with the help of inequality (4.1.29) for p = ∞ and (4.1.34), we get D1 ≤ c

   f u∞  ky (x) |x|γ − α2 (1 + |x|)β dx xk am Jδ,m |k|≤j

 ≤ c f u∞

R

  ky (x) |x|γ − α2 (1 + |x|)β dx ≤ c f u∞ ,

 3  2  4 ≤ c a 3 , and due to (4.1.94). Let us since x − xk ≥ (1 − θ )am and x 2 − am m estimate D2 . In view of (4.1.30) and (4.1.46), for some τ > 0, we have           D2 ≤ c e−A m L∗,w f u  ky 1 ≤ c e−A m mτ f u∞ ky 1 ≤ c f u∞ . m,2 ∞


It follows that A2 ≤ c f u∞ .

(4.1.103)

Plugging (4.1.102) and (4.1.103) into (4.1.99) and taking the supremum over all $y \in \mathbb R$ leads to (4.1.96). Finally, in order to show that (4.1.98) implies (4.1.97), let us consider a function $f_0$ such that $|f_0(x)| \le 1$ for every $x \in \mathbb R$, $f_0(x_k) = \operatorname{sgn}\bigl[(p_m^w)'(x_k)(x - x_k)\bigr]$ for $x_1 \le x_k \le 1$, and $f_0(x_k) = 0$ for $x_k > 1$ and for $x_k < x_1$. Then, $\bigl(L^{*,w}_{m,2} f_0\bigr)(x) =$



pw (x)ρ (x)   m  m .   w  x1 ≤xk ≤1  pm (xk ) |x − xk | ρm (xk )

Using (4.1.35), for x ∈ [0, 1], we obtain  ∗     L (w, f ; x) = pw (x) m m,2 x1 ≤xk ≤1



2 − x2 am 2 − x2 am k



1      w   pm (xk ) |x − xk |

   w 6 2 − x2 ∼ pm (x) 4 am xk w(xk ) x1 ≤xk ≤1

 w 6 2 − x2 ≥ pm (x) 4 am  w 6 2 − x2 ≥ pm (x) 4 am

 

xk+1



w(t) dt

x1 ≤xk ≤1 xk



1



1/2

 w 6 2 − x2 . w(t) dt = c pm (x) 4 am

For every y ∈ R, in virtue of (4.1.98) and Proposition 4.1.4 we have u∞

      ∗,w ≥c  Lm,2 f0 (x) k(x, y) u(x) dx R



6 pw (x)k(x, y) 4 a 2 − x 2 u(x) dx ≥ c m m

1

≥c 0



1 0

u(x) |k(x, y)| √ dx . w(x)

Since, for all y ∈ R, 

u(x) |k(x, y)| √ dx < ∞ w(x) R\[0,1]

we get (4.1.97) by taking the supremum over all y ∈ R.

 


As a consequence of Proposition 4.1.16 we have, for every function f ∈ Cu ,  ∗  ∗ e (f ) ≤ c E m+1 (f )u,∞ , m ∗ (f )u,∞ , defined by (4.1.50), we can use Lemma 4.1.6. Furtherand to estimate E m+1 more, we emphasize that, if k(x, y) ≡ 1, then, due to (4.1.97), inequality (4.1.96) is not true. Nevertheless, Proposition 4.1.16 is useful for a wide class of kernel functions. For instance, let us consider a kernel of the form P (x, y) , k(x, y) = Q(x, y)

(4.1.104)

where P (x, y) =



ci |x − th |γh |y − τi |δi

and Q(x, y) =



 α dν x − aη  η |y − bν |βν > 0

η,ν

h,i

with ci , dν , γh , δi , αη , βν , th , τi , aη , bν ∈ R and h, i, η, ν ∈ {1, . . . , N}, N ∈ N. In this case, Proposition 4.1.16 turns into the following corollary. Corollary 4.1.17 Let k : R2 → R be as in (4.1.104). Assume that the weights w and u satisfy (4.1.94). Then, there is a constant c = c(m, f ) such that sup {|(Gm f ) (y)| : y ∈ R} ≤ c f u∞

∀ f ∈ Cu ,

if and only if # sup

R

$  u(x)  k(x, y) dx : y ∈ R < ∞ . √ w(x)

4.1.5 Fourier Sums in Weighted Lp -Spaces In this section we are going to define a continuous version of the Lagrange-type operator studied in Sect. 4.1.3. To this aim we consider a Fourier-type operator,  w ∞ with w(x) = introduced in [127] and related to the orthonormal system pm m=1 |x|α e−|x| , in order to approximate functions f : R \ {0} −→ R in Lu for λ

p

λ − |x|2

1 ≤ p ≤ ∞, where u(x) = |x|γ e is a generalized Freud weight (cf. (4.1.21)). In 1965 R. Askey and S. Wainger [13] proved that a function f can be represented by a Hermite series under very restrictive assumptions on it. To be more precise, we w f with w(x) = e−x 2 the mth partial Hermite sum. Then, there exists denote by Sm a constant c = c(m, f ) such that  w  √   √   S f w p ≤ cf w p

p

∀ f ∈ L√w ,




if and only if p ∈ the weights

4 3,4

 . Subsequently, in 1970 B. Muckenhoupt [169] introduced

 u(x) = (1 + |x|)β w(x)

 and v(x) = (1 + |x|)β1 (1 + log+ |x|)η w(x)

with w(x) = e−x and proved inequalities of the type 2

 w    S f u ≤ cf vp p

with

c = c(m, f )

(4.1.105)

for 1 < p < ∞ and u = v under suitable assumptions on β, β1 , and η. In 1995 S. W. Jha and D. S. Lubinsky [68] extended the results of B. Muckenhoupt to the 2 case of Freud weights by replacing w(x) = e−x by w(x) = e−Q(x) and making suitable assumptions on Q(x). It is important to remark that in [169, Theorem 3] B. Muckenhoupt showed that 4 inequality (4.1.105) holds with u = v and on the left-hand side is 3 if the norm  p>  √  √   restricted to the subset Im = x ∈ R : |x| − m ≥ δ m with a δ ∈ (0, 1). Then, p

w f to f in I . Moreover, in [169, Theorem 4] he he proved the Lu convergence of Sm m showed that inequality (4.1.105) holds with u = v and 1 < p < 4 for all functions vanishing in R \ Im . Therefore, denoting by χIm the characteristic function of Im , the inequality

   √  √  χI S w χI f wp ≤ cχIm f wp m m m

(4.1.106)

holds for 1 < p < ∞, w(x) = e−x , and all f ∈ L√w , where the constant c is independent of m and f . Furthermore, by virtue of the results due to S. W. Jha and D. S. Lubinsky, in [169] inequality (4.1.106) has been extended to the case of Freud weights, but under weaker assumptions than the previous ones. Now, we make a slight change of notation. Let Im = {x ∈ R : |x| ≤ θ am }, where θ√ ∈ (0, 1) and where am is the Mhaskar–Rahmanov–Saff number associated to w, and let χIm be the related characteristic function. More recently, in [151, Theorem 3.1] it was shown that inequality (4.1.106) holds a Freud weight  alsowif w(x) is ∞ of the form e−Q(x) and, moreover, the sequence χIm Sm χIm f m=1 converges to p the function f in L√w with the order of the best polynomial approximation. This last result was extended to the case of generalized Freud weights in [138]. Letting 2

p

|x|λ

w(x) = |x|α e−|x| , u(x) = |x|γ e− 2 , λ > 1, α > −1, γ > − p1 , in [138] it was proved that, under suitable assumptions on α and γ , the estimate λ

      χI S w χI f u ≤ cχI f u m m m m p p

(4.1.107)

  w χ f ∞ converges to the holds for p ∈ (1, ∞) and that the sequence χIm Sm Im m=1 p function f in L√w with the optimal order. We finish this short survey by mentioning


the fact that results, similar to all the previous ones, hold also for functions defined on the real semiaxis (see [13, 138, 151, 169]). Let us note that the sequence $\{\chi_{I_m} S_m^w(\chi_{I_m} f)\}_{m=1}^{\infty}$ in (4.1.107) is not a sequence of polynomials, as is required in several contexts. Therefore, our aim is to construct a new polynomial operator of Fourier type having the same behaviour as $\chi_{I_m} S_m^w \chi_{I_m}$. Indeed, we will define this operator and show that it is the continuous version of a Lagrange-type projection, which was described in Sect. 4.1.3 and which was introduced and studied in [127]. We consider functions belonging to $L^p_u$, where $u(x)$ is the generalized Freud weight
\[
u(x) = |x|^{\gamma}\, e^{-\frac{|x|^{\lambda}}{2}}\,,\qquad \lambda > 1\,,\qquad
\begin{cases}
\gamma > -\frac1p &:\ 1 \le p < \infty\,,\\[0.5ex]
\gamma \ge 0 &:\ p = \infty\,.
\end{cases}
\tag{4.1.108}
\]

Moreover, let w(x) = |x|αe−|x| with α > −1 and λ > 1 be the generalized Freud w ∞ be the corresponding sequence of orthonormal weight in (4.1.31) and let pm m=1 polynomials with leading coefficient γm = γmw > 0. Remark that we restrict ourselves to the weight u(x) only for simplicity reasons. Indeed, we can replace the weight u(x) by a more general weight of the form λ

\[
u(x) = \prod_{k=1}^{s} |x-t_k|^{\alpha_k}\; e^{-\frac{|x|^{\lambda}}{2}}\,,\qquad \alpha_k > 0\,,\ \lambda > 1\,,
\]

where tk ∈ R and s ≥ 0 are fixed. Otherwise, the weight w(x) for the orthonormal system cannot be replaced by another weight, since nothing is known in the literature concerning the properties of polynomials orthonormal w.r.t. more general weights. Denoting by am the Mhaskar–Rahmanov–Saff number given by (4.1.4), we introduce the subspace P#m+1 = {Q ∈ Pm+1 : Q(±am ) = 0} of Pm+1 . Of course, this subspace is connected with the weight w via the Mhaskar– p Rahmanov–Saff number am . Moreover, for 1 ≤ p ≤ ∞ and f ∈ Lu , we denote by   Em (f )u,p = inf (f − P ) up : P ∈ Pm the error of best approximation of f by polynomials of Pm and by   m+1 (f )u,p = inf (f − Q) up : Q ∈ P#m+1 E the error of best approximation of f by means of polynomials from P#m+1 . The following lemma is crucial for our aims.


Lemma 4.1.18 Let 1 ≤ p ≤ ∞ and u, w be the weights defined in (4.1.108) p and (4.1.31). Then, for every f ∈ Lu , the estimate inf

Q∈PM ∩P#m+1

" ! (f − Q) up ≤ c EM (f )u,p + e−c1 m f up

(4.1.109)

>

? m holds for M = , δ > 0 fixed, and constants c and c1 are independent (δ + 1)λ of m, f , as well as the parameters α and γ , i.e., c = c(m, f, α, γ ) and c1 = c1 (m, f, α, γ ). Proof We are going to prove (4.1.109) only for 1 ≤ p < ∞, since the case p = ∞ of polynomials of quasi best approximation is simpler. Let PM ∈ PM> be a sequence ? m p for f ∈ Lu with M = , where δ > 0 has to be fixed in the sequel. We (δ + 1)λ set   QM (x) = w w k (x)PM (xk ) and PM (x) = k (x)PM (xk ) , |k|≤" M+1 2 #−1

|k|≤" M+1 2 #

where w k (x) =

>

w pM−1 (x)ρm (x)

(pM−1 ) (xk )(x − xk )ρm (xk )

,

|k| ≤

M −1 2

? ,

(4.1.110)

and w (x) = ±" M+1 # 2

w (x) (am ± x) pM−1 w 2 am pM−1 (±am )

.

(4.1.111)

Hence, this interpolation process is based on the zeros of pM−1 (w), i.e., on x−" M−1 # < . . . < x1 < x2 < . . . < x" M−1 # , and on the additional nodes 2 2 x±" M+1 # := ± am . If M − 1 is odd, then we replace the zero x0 = 0 by x0 := x21 . 2

Obviously QM ∈ PM ∩ P#m+1 , and we have inf

Q∈PM ∩P#m+1

(f − Q) up ≤ (f − QM ) up ≤ (QM − PM ) up + c EM (f )u,p

.   w  w  + c EM (f )u,p = P (−a ) + P (a ) u m M m M+1  −" M+1 # M  " # 2

2

p

        w w    ≤ PM (−am ) −" M+1 # u + PM (am ) " M+1 # u  + c EM (f )u,p 2 2 p

p

=: I1 + I2 + c EM (f )u,p . (4.1.112)


We are going to estimate only the term I2 , since I1 can be handled analogously. With the help of the infinite-finite range inequality (4.1.29) we get     w  I2 ≤ c  P (a ) u M m  " M+1 # 

∗ ) Lp (IM

2



1/p c aM

 w   " M+1 # (x)u(x)    2 sup |PM (x)u(x)| sup    u(am ) |x|≥am x∈I ∗  M

" ! ∗ = [−a , a ] \ − aM , aM . Using the inequalities (see [127, (4.11)]) where IM M M M M    w   M+1 (x) u(x)  " 2 #  u(am )

  x ≤ c  a

m

γ − α  2  ≤ 

c cm

α 2 −γ

:γ− :γ−

α 2 α 2

≥ 0, < 0,

we obtain 1

I2 ≤ c (aM ) p mτ sup |PM (x)u(x)| , |x|≥am

where τ = max{0, α2 − γ }. Now, by the choice of M, it is easily seen that, for sufficiently large m, am ≥ (1 + δ)aM . Hence, using the inequalities (4.1.30) and (4.1.28), we get I2 ≤ c e−c1 m PM up ≤ c e−c1 m f up . From this estimate and from (4.1.112), inequality (4.1.109) follows.

 

We remark that, in view of Lemma 4.1.18, we have ! " m+1 (f )u,p ≤ c EM (f )u,p + e−c1 m f up E >

(4.1.113)

? m , δ > 0 fixed, and a constant c = c(m, f ). Relation (4.1.113) (δ + 1)λ + p ensures the density of m∈N P#m+1 in Lu for 1 ≤ p ≤ ∞ and estimates the error of best approximation by polynomials from P#m+1 in terms of the error of best approximation by polynomials of degree at most M. p w f is If f belongs to Lu with u defined in (4.1.108), then its mth Fourier sum Sm defined in the usual way as with M =



m−1   w Sm f (x) = ckw (f )pkw (x) = k=0

 R

w Km (x, t)f (t)w(t) dt ,


 where ckw (f ) = pkw (t)f (t)w(t) dt is the kth Fourier coefficient of f w.r.t. the  w ∞ R system pm and where m=1 w Km (x, t) =

m−1 

w (x)pw (t) − pw (x)pw (t) γm−1 pm m m−1 m−1 γm x−t

pkw (x)pkw (t) =

k=0

(4.1.114)

is the Christoffel–Darboux kernel. Let us introduce a new Fourier-type operator. For $f \in L^p_u$, $\theta \in (0,1)$, and $a_m$ the Mhaskar–Rahmanov–Saff number related to the weight $\sqrt w$, we set $f_\theta(x) = \chi_\theta(x)\,f(x)$ and $\rho_m(x) = a_m^2 - x^2$, where $\chi_\theta$ denotes the characteristic function of the interval $(-\theta a_m, \theta a_m)$. For $f \in L^p_u$, define $S_m^{*,w} f$ by
\[
S_m^{*,w} f \;=\; \rho_m\, S_m^w\bigl(\rho_m^{-1} f_\theta\bigr)\,.
\]

(4.1.115)

Since, for $f \in L^p_u$, also $\rho_m^{-1} f_\theta$ belongs to $L^p_u$, this Fourier-type operator is well-defined. With (4.1.114) we can write
\[
\bigl(S_m^{*,w} f\bigr)(x) \;=\; \rho_m(x)\,\frac{\gamma_{m-1}}{\gamma_m}
\int_{-\infty}^{\infty} \frac{p_m^w(x)\,p_{m-1}^w(t) - p_{m-1}^w(x)\,p_m^w(t)}{x-t}\;
\frac{f_\theta(t)\,w(t)}{\rho_m(t)}\,dt\,.
\]
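A direct (and deliberately naive) way to realize this operator numerically, again only for the Hermite case $w(x)=e^{-x^2}$, is to approximate the Fourier coefficients of $\rho_m^{-1} f_\theta$ by a higher-order Gauss–Hermite rule and then multiply the partial sum by $\rho_m$. The value taken for $a_m$ and the quadrature order are assumptions made for this sketch only.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

def fourier_type(f, m, theta=0.8, quad_order=200):
    """Sketch of S^{*,w}_m f = rho_m * S^w_m(rho_m^{-1} f_theta) for w(x) = exp(-x^2)."""
    t, wq = hermgauss(quad_order)                 # high-order rule for the coefficients
    a_m = sqrt(2.0 * m)                           # rough stand-in for the MRS number (assumption)
    rho = lambda x: a_m ** 2 - x ** 2
    g = np.zeros_like(t)                          # g = rho_m^{-1} f_theta on the quadrature nodes
    mask = np.abs(t) < theta * a_m
    g[mask] = f(t[mask]) / rho(t[mask])

    def p_k(x, k):                                # orthonormal Hermite polynomial of degree k
        e_k = np.zeros(k + 1); e_k[k] = 1.0
        return hermval(x, e_k) / sqrt(2.0 ** k * factorial(k) * sqrt(pi))

    coeffs = [np.sum(wq * p_k(t, k) * g) for k in range(m)]   # Fourier coefficients c_k

    def S(x):                                     # rho_m times the m-term partial sum
        return rho(x) * sum(c * p_k(x, k) for k, c in enumerate(coeffs))

    return S

S = fourier_type(np.cos, m=20)
print(S(0.7), np.cos(0.7))
```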

Using a Gaussian rule, we obtain the “truncated” Lagrange interpolation polynomial   L∗,w f (x) = m,2



∗,w k (x)f (xk ) ,

|xk |≤θam w and the additional nodes ±a , where based on the zeros xk of pm m w (x)ρ (x) pm m ∗,w . k (x) =  w  pm (xk )(x − xk )ρm (xk )

The interpolation process based on the zeros of Freud polynomials plus two extra points was introduced by J. Szabados [204], while the operator L∗,w m,2 , which is the ∗,w discrete version of Sm , was introduced and studied in [126]. We also remark that w , where S w f = ρ S w ρ −1 f , seems to be more natural than S ∗,w , m the operator S m m m m m w Q = Q for all Q ∈ P# since Sm m+1 . However, the Fourier coefficients of the function −1 f , given by ρm −1 ck (ρm f) =

 R

f (t) w p (t)w(t) dt , ρm (t) k


are finite only under appropriate assumptions on the function f . This is one of the ∗,w reasons why we have defined the operator Sm by replacing the function f by its p ∗,w “finite section” fθ . Obviously Sm maps Lu into P#m+1 , but it is not a projection. ∗,w , which will be useful in the sequel. The following lemma states a property of Sm Lemma 4.1.19 Let w and u be the weights given by (4.1.31) and (4.1.108). For every polynomial Q = ρm PM−2 ∈ PM ∩ P#m+1 with am in ρm (x) defined in (4.1.4) > λ ? ∗,w θ and with M = m , θ ∈ (0, 1), we have Sm Q = Q + m+1 , where θ+1 m+1 ∈ P#m+1 and m+1 up ≤ c e−c1 m Q up ,

1 ≤ p ≤ ∞,

(4.1.116)

and where the constants c and c1 are independent of m, Q, α, and γ . Proof Let us first consider the case 1 < p < ∞. For each Q ∈ PM ∩ P#m+1 with > λ ? θ M= m and a fixed θ ∈ (0, 1), we can write θ+1 ∗,w w −1 w −1 Q = ρm Sm ρm Qθ = Q + ρm Sm ρm (Qθ − Q) =: Q + m+1 , Sm

where w w −1 (Qθ − Q) + ρm Sm ρm (Qθ − Q) =: A1 + A2 . m+1 = Sm

(4.1.117)

In order to estimate A1 , we recall the inequalities [138, (18),(19), p. 93] and [169, Theorem 1],  w  S f u ≤ c m p



f up

:

4 3

1 3

m f up : 1 < p ≤

< p < 4, 4 3

or 4 ≤ p < ∞ .

Since θ am ≥ (1 + θ )aM (for all sufficiently large m) because of the choice of M, we can use (4.1.30) to obtain A1 up ≤ c m 3 (Qθ − Q) up = c m 3 Q uLp {|x|≥θam } ≤ c e−c1 m Q up . (4.1.118) 1

1

To handle A2 , due to (4.1.114), we can write γm−1 A2 (x) = γm

! w " w (t) − pw (x)pw (t) Q(t)w(t) (t + x) pm (x)pm−1 m m−1

 |t |>θam

ρm (t)

dt


and, according to the infinite-finite range inequality (4.1.29), we can estimate γm−1 γm

 Im∗

   u(x)pw (x) m 



θam

 w  ≤ c am pm uLp (I ∗ )



m

1 p p  t +x w  pm−1 (t)Q(t)w(t) dt  dx ρm (t) ∞

θam

  w  pm−1 (t)Q(t)w(t)   dt    a −t

(4.1.119)

m

where Im∗ = [−am , am ] \ [− amm , amm ] and γγm−1 ∼ am (see [89]). The estimation of m the other part of A2 (x) is similar. Inequality (4.1.34) yields 1   w  α m6     am pm u Lp (I ∗ ) ≤ c am √ | · |γ − 2  p ∗ m L (Im ) am

#

≤c

1 √ am m 6 max am

γ − α2 + p1

,

 a γ − α + 1 $ m

2

p

m

(4.1.120) .

Concerning the last factor in (4.1.119), with the help of (4.1.34) we get    w  1  ∞  Q(t)|t| α2 −γ u(t)   pm−1 (t)Q(t)w(t)  6     dt ≤ c√m   dt     a − t a a − t m m θam m θam





% 1 6

α 1 2 −γ − 2

≤ c m am



2am

θam

&   1  ∞  Q(t)u(t)  α m6 −γ   dt +  |Q(t)| |t| 2 u(t) dt .  a −t  3 2a m am m

(4.1.121) Using the Hölder inequality, the boundedness of the Hardy-Littlewood maximal function, and the inequalities (4.1.30) and (4.1.27), we can estimate 1

α

m 6 am2

−γ − 12



2am θam

    α 1 1− 1  Q u   Q(t)u(t)  p   dt ≤ c m 16 am2 −γ − 2 am   a −t  a − · m

m

1

α

= c m 6 am2

−γ + 12 − p1



2am θam

1

α

≤ c m 6 am2

−γ + 12 − p1

Lp (θam ,2am )

 p  p1  am   1    dt (Q u) (y) dy  |a − t|  m t

  (Q u) 

Lp (θam ,2am )

  c m e−c1 m Q up , ≤ c e−c m (Q u) p ≤ am


because of θ am ≥ (1 + θ )aM . Let us consider the last addend in (4.1.121). If we have α2 − γ ≤ 0, then, due to (4.1.30), (4.1.29), and the Hölder inequality, 

1

m6  3 am



1

α

|Q(t )| |t | 2 −γ u(t ) dt ≤ c m 6 am2 α

−γ − 32



2am 1

α

1

α

≤ c m 6 am2

1

m6  3 am



|Q(t )| u(t ) dt

2am

≤ c m 6 am2

In case of



α 2 ∞

−γ − 32 −c1 m

e

Q uL1 (Im∗ )

1 −γ − 32 1− p −c1 m Q uLp (Im∗ ) am e

≤ c e−c1 m Q up .

−γ > 0, again by (4.1.30), (4.1.29), and the Hölder inequality, we have |Q(t)| |t| 2 −γ u(t) dt ≤ α

2am

1  α c m 6 e−c1 m     Q | · | 2 −γ u 1 ∗ 3 L (Im ) am α

1



c m 6 e−c1 m am2  3 am

−γ

Q uL1 (Im∗ ) ≤ c e−c1 m Q up .

Combining the previous estimates of the terms in (4.1.121) with (4.1.120) and (4.1.119), it follows that A2 up ≤ c e−c1 m Q up .

(4.1.122)

Taking into account (4.1.117), due to (4.1.118) and (4.1.122), we obtain (4.1.116) for 1 < p < ∞. Now we are going to prove (4.1.116) for p = ∞, taking into account what we have it already proved for 1 < p < ∞. Using the Nikolski-type inequality (4.1.28) and the infinite-finite range inequality (4.1.29), we observe that m+1 u∞



1/p



1/p

m ≤c am m ≤c am



m m+1 up ≤ c am

1/p

e−c1 m Q up

e−c1 m Q uLp (Im∗ ) ≤ c m1/p e−c1 m Q u∞ ≤ c e−c1 m Q u∞ .

Finally, if p = 1, by the infinite-finite range inequality (4.1.29), (4.1.116), and the Nikolski-type inequality (4.1.28), we obtain m+1 u1 ≤ cm+1 uL1 (Im∗ ) ≤ c am m+1 u∞ ≤ c am e−c1 m Q u∞ ≤ c m e−c1 m Q u1 ≤ c e−c1 m Q u1 , and the proof is complete.

 


In order  ∞to prove boundedness and convergence theorems for the operator sequence  ∗,w Sm m=1 , we note that v(x) = |x|λ , is an Ap (R)-weight on R (1 < p < ∞) if and only if − p1 < λ < 1 − p1 (cf. Exercise 3.2.8). Proposition 4.1.20 Let 1 < p < ∞, θ ∈ (0, 1), and w, u be the weights in (4.1.31) and (4.1.108). Then, there exists a constant c = c(m, f ) such that  ∗,w    S fθ u ≤ c fθ up m p

p

∀ f ∈ Lu ,

(4.1.123)

if and only if −

α 1 1 

θ θ+1



(4.1.137)

? m and constants c = c(m, f ), c1 = c1 (m, f ).

Concerning the case p = ∞, the following theorem holds true. Proposition 4.1.22 Let θ ∈ (0, 1) be fixed and w, u be as in Corollary 4.1.21. If  α α ≤γ < +1 max 0, 2 2

(4.1.138)

then, for every f ∈ Cu ,  ∗,w    S fθ u ≤ c (log m) fθ u∞ m ∞

(4.1.139)

   " !  f − S ∗,w fθ u ≤ c (log m) EM (f )u,∞ + e−c1 m f u∞ m ∞

(4.1.140)

and

with M =

>

θ θ+1



? m and constant c, c1 independent of m and f .

Proof With (4.1.115) and (4.1.114), we get     ∗,w   ! "  S fθ (x) u(x) ∼m,x am ρm (x) H(−θa ,θa ) ρ −1 pw (x) pw − pw (x) pw fθ w  u(x) , m m m m m m m−1 m−1

since γγm−1 ∼m am . According to the infinite-finite range inequality (4.1.29) we can m assume x ∈ Im∗ = [−am , am ] \ [− amm , amm ] and, for symmetry reasons, it is sufficient


to consider the case x ∈ [ amm , am ]. We use the decomposition  ∗,w    S fθ (x) u(x) m

∼m,x

  & %    w (x, t)χ (t)f (t)w(t) Km θ   dt  u(x) am ρm (x) + am am   ρ (t) m |x−t |> |x−t |≤ m

m

=: |I1 (x) + I2 (x)| ≤ |I1 (x)| + |I2 (x)| . (4.1.141) Let us consider the term I1 (x). We can write  |I1 (x)| ≤ cfθ u∞ am ρm (x)u(x)

|x−t |> amm

We observe that, due to (4.1.34), for |x| ≥ am

am m

|Km (w, x, t)|χθ (t)w(t) dt . ρm (t)u(t)

and |t| ≤ θ am , the inequality

 w(t) α α ρm (x)  w w p (x)pm−1 u(x) ≤ c |x|γ − 2 |t| 2 −γ (t) ρm (t) m u(t)

holds. Using this inequality we get |I1 (x)| ≤ cf u∞ |x|

γ − α2



x− amm −θam

|t| 2 −γ dt + x−t α



θam x+ amm

|t| 2 −γ dt t −x α

 ≤ cfθ u∞ log m ,

since α2 − γ ∈ (−1, 0] because of (4.1.138). It remains to estimate the term I2 (x). Write  am   m K w (x, t)    m |I2 (x)| ≤ c am u(x)ρm (x)  fθ (t)w(t) dt   − am ρm (t)  m

 ≤c

am m

− amm

|Rm (x, t)fθ (t)u(t)| dt 

≤ cfθ u∞

am m

− amm

|Rm (x, t)| dt ,

(4.1.142)


where  w w (t) − pw (x)pw (t)   pm (x)pm−1 m m−1  w(t)  |Rm (x, t)| = am u(x)ρm (x)   u(t) ρm (t)(x − t)    w w (t)  ρm (x)  pm (x) − pm  |pw (t)| w(t) ≤ am u(x)   m−1 ρm (t) x−t u(t)  w   w (x)   pm−1 (t) − pm−1 ρm (x) w  w(t)  |pm (t)|  + am u(x)  u(t) ρm (t) x−t =: |Tm (x, t)| + |Vm (x, t)| . (4.1.143) By the mean value theorem we have     w(t)  w    w |Tm (x, t)| ≤ am u(x)  pm (ξ ) pm−1 (t) u(t) with ξ between x and t. Since w(x) ∼x,t w(ξ ) ∼x,t w(t) and u(x) ∼x,t u(t), using the inequalities (4.1.27) and (4.1.34), we get |Tm (x, t )| ≤ am

    cm u(x)   u(x) m 1 1  w  (t ) w(t ) ≤ c am ≤ √ √ pm (w, ξ ) w(ξ ) pm−1 u(t ) u(t ) am am am am

" ! for |t| ≤ amm and x ∈ amm , am . Analogously we can show that |Vm (x, t)| ≤ camm . Therefore, together with (4.1.142) and (4.1.143), it follows that |I2 (x)| ≤ cfθ u∞ . Combining the estimates for I1 (x) and I2 (x) in (4.1.141) and taking the supremum over all x ∈ Im∗ , we obtain (4.1.139). In order to prove the estimate (4.1.140) we can proceed as in the proof of (4.1.125). We omit the details.   As it has already been mentioned, from the approximation point of view, the ∞ ∗,w sequence Sm fθ m=1 has the same behaviour as the “truncated” sequence   ∞ wχ f (see [120, 138, 151]), but it has the advantage of being a χIm Sm Im m=1 polynomial sequence.

4.2 Polynomial Approximation with Generalized Laguerre Weights on the Half Line 4.2.1 Polynomial Inequalities Concerning the results of this section we refer to [142]. The main idea for proving polynomial inequalities concerned with exponential weights on unbounded intervals


is to use known polynomial inequalities (if necessary with weights) on bounded intervals. To this end, the main ingredients are the “infinite-finite range inequality” and the approximation of weights by polynomials on a finite interval. In the present β case, the weight w(x) = wα,β (x) = x α e−x is related to the generalized Freud 2β weight u(x) = |x|2α+1e−|x| by a quadratic transformation. The Mhaskar-Rahmanov-Saff number am (u), related to the weight u, satisfies 1

(see [142]) am (u) ∼m m 2β (the constants in “∼” depend on α and β). Thus, for the weight w, we have 1

am := am (w) = a2m (u)2 ∼m m β ,

(4.2.1)

and, for an arbitrary polynomial Pm ∈ Pm , the inequalities 



1 |Pm (x)wα,β (x)| dx p

p



1

≤c

0

|Pm (x)wα,β (x)| dx p

p

(4.2.2)

m

and 

1



|Pm (x)wα,β (x)|p dx

p

≤ c e−c1 m





1 |Pm (x)wα,β (x)|p dx

p

0

am (1+δ)

(4.2.3) easily follow, where m =

 2 0, am (1 − m− 3 d) with a constant d > 0, p ∈

(0, +∞], β > 12 , as well as α > − p1 if p < +∞ and α ≥ 0 if p = ∞. Here, c = c(m, p, Pm , δ) and c1 = c1 (m, p, Pm ). As a consequence of some results in [103, 197], for a fixed b ≥ 1, there exist polynomials Qm ∈ Pm such that, for β x ∈ [0, b am ], we have Qm (x) ∼m,x e−x and √ am √ β | x Qm (x)| ≤ c e−x , m

(4.2.4)

where c = c(m, x). Therefore, using (4.2.2) and (4.2.4) and a linear transformation of the interval (0, 1), polynomial inequalities of Bernstein-, Remez-, and Schurtype can be deduced from analogous inequalities on (0, 1) w.r.t. the Jacobi weight x α . The following theorems can be proved by applying these considerations. For d > 0 fixed and 0 < t1 < . . . < tr < am , we set Im∗

 . √ √ . r d am d am d d am , ti + , \ ti − = , am 1 − 2 m2 m m m3 i=1 -


∗ where m is  sufficiently . large (say m > m0 ) and r ∈ N0 . If r = 0 then Im = d am . , am 1 − d2 m2 m3

Proposition 4.2.1 (See [142]) For 0 < p ≤ ∞, m > m0 , and all Pm ∈ Pm , 

 Pm (x)wα,β (x)p dx

∞



1

p

≤c

0

Im∗

  Pm (x)wα,β (x)p dx

1

p

(4.2.5)

,

where the constant c does not depend on m, p, and Pm . Proposition 4.2.2 (See [142]) For 0 < p ≤ ∞, m > m0 , and every polynomial Pm ∈ Pm , we have 

 √ P  (x) x wα,β (x)p dx m

∞ 0

1

p

cm ≤ √ am



 Pm (x)wα,β (x)p dx

∞

1

p

0

(4.2.6) and 

∞ 0

|Pm (x)wα,β (x)|p dx

1

p

c m2 ≤ am





1 |Pm (x)wα,β (x)| dx p

p

0

with c = c(m, p, Pm ). As for the Markov–Bernstein inequality, we have also two versions of Nikolski’s inequality as shown in the next proposition and which can be deduced from [142]. Proposition 4.2.3 Let 1 ≤ q < p ≤ ∞ as well as α ≥ 0 if p = +∞ and α > − p1 if p < ∞. Then, there is a constant c independent of m, p, q, and Pm ∈ Pm , such that 1−1    q p   1 1 m   −p Pm wα,β  q wα,β  ≤ c √ Pm ϕ q p am

(4.2.7)

and 2−2  q p     Pm wα,β  ≤ c √m Pm wα,β  , p q am where ϕ(x) =

√ x.

(4.2.8)


4.2.2 Weighted Spaces of Functions For 1 ≤ p < ∞, wα,β (x) = x α e−x , and α > − p1 , by Lwα,β we denote the space of all measurable (classes of) functions f : (0, ∞) −→ C such that β

  f wα,β  := p





p

1 |f (x)wα,β (x)| dx p

p

< ∞.

0

If p = ∞ and α ≥ 0, then we set # $ L∞ = f ∈ C(0, ∞) : lim w (x)f (x) = lim w (x)f (x) = 0 α,β α,β wα,β x−→∞

x−→0

if

α>0

  L∞ w0β = f ∈ C[0, ∞) : lim (wα,β f (x))(x) = 0 , x−→∞

and f L∞ w

α,β

  = sup |wα,β (x)f (x)| : 0 < x < ∞ .

For more regular functions we introduce the Sobolev-type spaces # $    (r) r  p (r−1) Wp,r = f ∈ W : f ∈ AC[0, ∞) and ϕ w < ∞ , f α,β  wα,β wα,β p

√ x, and AC[0, ∞) is the set of where r ∈ N, 1 ≤ p ≤ ∞, ϕ(x) = p absolutely continuous functions on [0, ∞). In order to define in Lwα,β a modulus of 2

smoothness, for every h > 0 and β > 12 , we introduce the quantity h∗ = h− 2β−1 and the segment Irh = [8r 2 h2 , d h∗ ], where d > 0 is a fixed constant. Now, following [44] we define     rϕ (f, t)wα,β ,p = sup (rhϕ f )wα,β  p (4.2.9) L (Irh )

0 0. Moreover, if 

1 0

rϕ (f, t)w ,p t

1+ p1

dt < ∞

w =ϕ

with

− p1

(4.2.18)

w,

then ⎧ ⎨

Em (f )w,∞  √  ⎩ rϕ f, am m

w,∞

⎫ ⎬ ⎭

√ am m

 ≤c

rϕ (f, t)w ,p

0

t

1+ p1

dt

(4.2.19)

.

(4.2.20)

and %



f w∞ ≤ c f w p +

1

rϕ (f, t)w ,p

0

t

1+ p1

& dt

Finally (4.2.18) implies (4.2.19) and (4.2.20) with w in place of w and of

1 p.

2 p

in place

Here the positive constants c are independent of m, t, and f .

4.2.4 Fourier Sums in Weighted Lp -Spaces The results of this section were proved in [127]. For 1 ≤ p ≤ ∞, we consider the generalized Laguerre weight defined by u(x) = x γ e−

xλ 2

,

x ∈ (0, ∞) ,

(4.2.21)

with λ > 1/2 and γ > − p1 if p < ∞, as well as γ ≥ 0 if p = ∞. The related Lp -spaces are defined as in Sect. 4.2.2. Let w(x) = x α e−x , λ

(4.2.22)

where  w  ∞α > −1 and λ > 1/2, be another generalized Laguerre weight and let pm m=0 be the corresponding sequence of orthonormal polynomials with leading coefficient γm = γmw > 0. It is known that the related Mhaskar–Rahmanov–Saff number am satisfies am ∼m m1/λ . We further introduce the subspace P∗m of the space of polynomials of degree at most m by P∗m = {Q ∈ Pm : Q(am ) = 0} .


Moreover, for 1 ≤ p ≤ ∞ and every function f ∈ Lu , we denote by m (f )u,p = inf (f − Q) up E ∗ Q∈Pm

the error of best approximation of f by means of polynomials of P∗m . If M < m, we will consider the subspace PM ∩ P∗m = {Q ∈ PM : Q(am ) = 0} . The following lemma can be proved as Lemma 4.1.18. Lemma 4.2.10 Let 1 ≤ p ≤ ∞, let u and w be the weights defined in (4.2.21) and (4.2.22), where λ > 1/2, α > −1 and γ > − p1 if p < ∞ as well as γ ≥ 0 if p = ∞. Then, the inequality inf

Q∈PM ∩P∗m

" ! (f − Q) up ≤ c EM (f )u,p + e−c1 m f up

(4.2.23)

: 9 p m holds for every f ∈ Lu , for M = (δ+1) λ , δ > 0 fixed, where the constants c and c1 do not depend on m, f , α, and γ . Lemma 4.2.10 yields the inequality " ! m (f )u,p ≤ c EM (f )u,p + e−c1 m f up , E

(4.2.24)

9 : m where M = (δ+1) λ , δ ∈ (0, 1), and c = c(m, f ), c1 = c1 (m, f ). This ensures the +∞ p density of m=1 P∗m in Lu , 1 ≤ p ≤ ∞, analogously to the respective subspaces introduced in Sect. 4.1. p w f is given If f belongs to Lu with u from (4.2.21), then its mth Fourier sum Sm by 

w Sm f



(x) =

m−1 

 ckw (f )pkW (x)

k=0

=

R

w Km (x, t)f (t)w(t) dt ,

where  ckw (f )



= 0

pkw (t)f (t)w(t) dt


 w ∞ is the kth Fourier coefficient of f w.r.t. the orthonormal system pm and m=0 w (x, t) = Km

m−1 

pkw (x)pkw (t) =

k=0

w (x)pw (t) − pw (x)pw (t) γm−1 pm m m−1 m−1 γm x−t

(4.2.25) is the respective Christoffel–Darboux kernel. Now we introduce the Fourier-type operator, analogous to that one defined in p Sect. 4.1 and related to a generalized Freud weight. For f ∈ Lu ,√θ ∈ (0, 1), and the Mhaskar–Rahmanov–Saff number am associated to the weight w, we set fθ = χθ f , where χθ denotes the characteristic function of the interval [0, θ am ], and define ∗,w Sm f by using the abbreviation ρm (x) = am − x and by    ∗,w  w −1 Sm (x) = ρm (x) Sm ρm fθ (x) γm−1 = ρm (x) γm



∞ 0

w (x)pw (t) − pw (x)pw (t) pm fθ (t) m m−1 m−1 w(t) dt . x−t ρm (t) (4.2.26)

B using a Gaussian rule, we obtain the “truncated” Lagrange interpolation polynow (x) and the additional node a , mial based on the zeros xk of pm m   ∗,w  Lm+1 f (x) = f (xk ) ∗,w k (x) , xk ≤θam

where ∗,w k (x) = 

w pm



w (x) pm

ρm (x) . (xk )(x − xk ) ρm (xk )

The following lemma is the analogue to Lemma 4.1.19. Lemma 4.2.11 Let 1 ≤ p ≤ ∞ and let w and u be the weights defined in (4.2.22) and repsectively. For every Q = ρm PM−1 ∈ PM ∩ P∗m , where M = > (4.2.21), λ ? ∗,w θ m and θ ∈ (0, 1), we have Sm Q = Q + m , where m ∈ P∗m and θ+1 m up ≤ c e−c1 m Q up

(4.2.27)

with positive constants c and c1 independent of m, f , α, and γ . Proposition 4.2.12 Let 1 < p < ∞, θ ∈ (0, 1) and w, u be the weights in (4.2.22), (4.2.21) with α > −1, λ > 1/2, and γ > − p1 . Then, there exists a


constant c = c(m, f ) such that the inequality  ∗,w    S fθ u ≤ cfθ up m p

(4.2.28)

p

holds for all f ∈ Lu , if and only if 1 α 3 1 1 − 

θ θ+1



(4.2.32)

? m and c = c(m, f ), c1 = c1 (m, f ).

Proposition 4.2.14 Let θ ∈ (0, 1), M =

>

θ θ+1



? m , and let w and u be the same

weights as in Corollary 4.2.13. If $ # 1 α 3 α ≤γ < + , max 0, + 2 4 2 4

(4.2.33)

then, there are positive constants $c$ and $c_1$, independent of $m$ and $f$, such that, for all $f \in C_u$,



≤ c (log m) fθ u∞

(4.2.34)


and    " !  f − S ∗,w fθ U  ≤ c EM (f )u,∞ log m + e−c1 m f u∞ . m ∞

(4.2.35)

As we have sequence  ∗,w  the   ∞already emphasized, from the approximation point of view, wχ f ∞ Sm fθ m=1 has the same behaviour as the truncated sequence χθ Sm θ m=1 (see [120, 138, 151]). We omit the proofs of the results in this section, since they can be deduced from the proofs given in Sect. 4.1.5. To give the ideas, we are only going to prove Proposition 4.2.12. xλ

Proof If w(x) = x α e−x and u(x) = x γ e− 2 are generalized Laguerre weights under consideration (see (4.2.22) and (4.2.21)), then we also use the generalized λ

2γ + 1

y 2λ

p e− 2 . For every Freud weights wα,λ (y) = |y|2α+1e−y and uγ ,λ,p (y) = |y| p p 2 f ∈ Lu , we set f (y) = f (y ), so that f ∈ Luγ ,λ,p . If we apply the transformation x = y 2 , y ∈ R, the orthonormal polynomials and Mhaskar–Rahmanov–Saff numbers corresponding to w and wα,λ are related by 2λ

w

W pm (x) = p2mα,λ (y)

! "2 and am = a2m (wα,λ ) . ∗,w

∗,w Hence, the operator Sm in (4.2.26) can be obtained from S2m α,λ in (4.1.115) by ! "2 the quadratic transformation x = y 2 , namely by setting ψ(y) = y 2 − am (wα,λ ) (cf. the definition of ρm (x) in Sect. 4.1.5) and

   ∗,w  w −1 Sm fθ (x) = ρm (x) Sm ρm fθ (x)     w ∗,w −1 = ψ2m (y) S2mα,λ ψ2m f√θ (y) = S2m α,λ f √θ (y) .

(4.2.36)

Due to Proposition 4.1.20, we have       ∗,wα,λ √     S2m f θ u ≤ cf √θ u , p

p

c = c(m, f ) ,

if and only if −

  1 1 1 1 < 2γ + − α + −1 and β > 1/2, and am = am ( w). The zeros w (x) satisfy x1 , x2 , . . . , xm of pm   2 0 < m−2 am < x1 < x2 < . . . < xm < 1 − c m− 3 am . For a function f : (0, ∞) −→ C, the respective Lagrange interpolation polynomial Lw m f equals Lw mf =

m 

where w k (x) =

f (xk ) w k ,

k=1

w (x) pm   . w (x ) (x − xk ) pm k

We observe that this interpolation process is not optimal. Indeed, for γ ≥ 0 and u(x) = x γ e−

xβ 2

the inequality  w L  m C

u →Cu

≥ c mρ

holds for some constants ρ > 0 and c > 0. For that reason, we fix a θ ∈ (0, 1), define, for all sufficiently large m (say m > m0 ), the index j = j (m, θ ) by xj = min {xk : xk ≥ aθm , k = 1, . . . , m} . and introduce the modified Lagrange interpolation polynomial L∗,w m+1 f by 

 L∗,w m+1 f (x)

=

j 

f (xk )

k=1

w k (x)(am − x) . (am − xk )

 ∞ p Before studying the behaviour of the sequence L∗,w m+1 f m=1 in Lu -spaces with u(x) = x γ e− p

xβ 2

Bu =

and 1 < p ≤ ∞, we introduce the class of functions ⎧ ⎨ ⎩

f ∈ C(R+ ) : sup

m>m0

j (m,θ) k=1

xk |f (xk )u(xk )|p < ∞

⎫ ⎬ ⎭

, p

where xk = xk+1 − xk and 1 < p < ∞. Note that, in the definition of Bu , the quadrature sums are of Riemann-Stieltjes type and are uniformly bounded if p f ∈ C(R+ ) ∩ Lu .
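For the Laguerre case $\beta = 1$ (so $w(x) = x^{\alpha} e^{-x}$ and the $x_k$ are generalized Gauss–Laguerre nodes), the modified interpolant $L^{*,w}_{m+1} f$ introduced above can be sketched in a few lines: interpolating the value $0$ at the discarded zeros and at the extra node $a_m$ reproduces the factor $(a_m-x)/(a_m-x_k)$ in the definition. The stand-in for $a_m$ and the cutoff taken directly at $\theta a_m$ are assumptions made for this illustration only.

```python
import numpy as np
from scipy.special import roots_genlaguerre
from scipy.interpolate import BarycentricInterpolator

def truncated_laguerre_lagrange(f, m, alpha=0.0, theta=0.7):
    """Sketch of L^{*,w}_{m+1} f for w(x) = x^alpha e^{-x} on (0, infinity)."""
    nodes, _ = roots_genlaguerre(m, alpha)     # zeros of the m-th orthogonal polynomial
    a_m = 1.000001 * nodes[-1]                 # stand-in for the MRS number a_m (assumption)
    xs = np.append(nodes, a_m)                 # m zeros plus the extra node a_m
    vals = np.where(xs <= theta * a_m, f(xs), 0.0)   # keep f only up to theta*a_m
    return BarycentricInterpolator(xs, vals)   # polynomial of degree at most m

p = truncated_laguerre_lagrange(np.sqrt, m=40)
print(p(2.0), np.sqrt(2.0))
```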

In the sequel, we will assume that $f \in L^p_u$ and that $\omega_\varphi^r(f,t)_{u,p}\,t^{-1-\frac1p}$ is summable. In this case the function $f$ is continuous and the quadrature sums can be estimated by

⎞1 ⎛ ⎡ ⎤ p  √  1  √ am r j  p  (f, t) m a u,p m ϕ ⎝ xk |f (xk )u(xk )|p ⎠ ≤ c ⎣f up + dt ⎦ . 1+ 1 m 0 t p k=1 In what follows, we will use the notation v σ (x) = x σ . Proposition 4.2.15 There exists a constant c = c(m, f ) such that, for all f ∈ Cu ,  ∗,w    L  m+1 f u ∞ ≤ c (log m)f u∞ ,

(4.2.37)

if and only if the parameters of the weights w and u satisfy 1 α 5 α + ≤γ ≤ + . 2 4 2 4

(4.2.38)

Further, we have, for all f ∈ C(R+ ) ∩ Lu , p

⎛ ⎞1 p j   ∗,w   p⎠   L ⎝ xk |f (xk )u(xk )| m+1 f u p ≤ c

(4.2.39)

k=1

with c = c(m, f ), if and only if vγ √ α ∈ Lp (0, 1) and v ϕ where

1 p

+

1 q

√ α v ϕ ∈ Lq (0, 1) , vγ

(4.2.40)

= 1.

Proof We note that the sufficiency of the conditions has been already proved in [101]. Here, we will only show, that (4.2.39) implies (4.2.40) omitting the remaining part of the proof. First of all we show that inequality (4.2.39) yields γ √v α ∈ Lp (0, 1). To this end we define a continuous and piecewise linear function v ϕ F1 : (0, ∞) −→ R satisfying F1 (x) =

0

:

x ∈ (1, 2) ,

w (x ) : x = x ∈ (1, 2) . |f (xk )| sgn pm k k


Then, from the assumption it follows that
$$\big\|\big(L^{*,w}_{m+1}F_1\big)u\big\|_{L^p(0,1)} \le \big\|\big(L^{*,w}_{m+1}F_1\big)u\big\|_p \le c\,\Big(\sum_k \Delta x_k\,|F_1(x_k)u(x_k)|^p\Big)^{\frac1p} \le \dots$$
or if $a < 1$ and $\delta$ is arbitrary. The error estimate (4.3.30) implies the convergence of the Gaussian rule


for all $f \in C_u$. For smoother functions, for instance for $f \in W_u^{\infty,r}$, with the help of (4.3.30) and (3.3.38), we obtain
$$|e_m(f)| \le c\,\Big(\frac{\sqrt{a_m}}{m}\Big)^{r}\,\big\|f^{(r)}\varphi^{r} u\big\|_\infty\,,$$
where $c$ is independent of $m$ and $f$, and $a_m \sim m^{1/\beta}$. Thus, a natural task is to establish the degree of convergence of $e_m(f)$ if the function $f$ is infinitely differentiable, i.e., $f \in C^\infty(\mathbb{R}^+)$. We recall that, in [1, 2], estimates of $e_m(f)$ related to Hermite or Freud weights were proved for functions holomorphic in some domain of the complex plane containing the quadrature nodes. For precise estimates, considering the same class of functions and different weights, we refer to [106]. Here, we consider the case of infinitely differentiable functions on $\mathbb{R}^+$ under the condition that $(f^{(m)}u)(x)$ is uniformly bounded w.r.t. $m$ and $x$. We note that the derivatives of the function can increase exponentially as $x \to 0$ and $x \to \infty$.

Proposition 4.3.15 Let $u(x) = (1+x)^{\delta}\,[w(x)]^{a}$, with $0 < a < 1$ and $\delta$ arbitrary. Then, for every infinitely differentiable function $f : (0,\infty) \to \mathbb{C}$ with $K(f) := \sup_{r \in \mathbb{N}_0} \|f^{(r)}u\|_\infty < \infty$, we have
$$\lim_{m\to\infty} \sqrt[m]{\frac{|e_m(f)|}{K(f)}} = 0\,.$$

(4.3.31)

Now, in order to study the behaviour of the Gaussian rule in the Sobolev spaces $W_w^{1,r}$, it is natural to investigate whether estimates of the form
$$|e_m(f)| \le c\,\frac{\sqrt{a_m}}{m}\,\|f'\varphi\,w\|_1$$

(4.3.32)

with $c$ independent of $m$ and $f$ hold true for $f \in W_w^{1,1}$. We recall that, for the error of best approximation, we have (see [134])
$$E_m(f)_{w,1} \le c\,\frac{\sqrt{a_m}}{m}\,\|f'\varphi\,w\|_1\,, \qquad f \in W_w^{1,1}\,,$$
where $c$ is independent of $m$ and $f$.

On the other hand, inequality (3.3.113) holds, mutatis mutandis, for Gaussian rules on bounded intervals related to Jacobi weights. But, as for many exponential weights (see [35, 36, 125]), inequality (3.3.113) is false in the sense of the following proposition.


Proposition 4.3.16 Let $w(x) = e^{-x^{-\alpha}-x^{\beta}}$ with $\alpha > 0$ and $\beta > 1$. Then, for every $f \in W_w^{1,1}$, we have
$$|e_m(f)| \le c\,\frac{\sqrt{a_m}}{m^{2/3}}\,\|f'\varphi\,w\|_1\,,$$

(4.3.33)

where $c$ is independent of $m$ and $f$. Moreover, for all sufficiently large $m$ (say $m > m_0$), there exists a function $f_m$ with $0 < \|f_m'\varphi\,w\|_1 < \infty$ such that
$$|e_m(f_m)| \ge c\,\frac{\sqrt{a_m}}{m^{2/3}}\,\|f_m'\varphi\,w\|_1\,,$$

(4.3.34)

where $c$ is independent of $m$. Nevertheless, estimates of the form (3.3.113) are required in different contexts. Thus, in order to obtain this kind of error estimate, we are going to modify the Gaussian rule by using an idea from [125]. With $\theta \in (0,1)$ fixed, we define two indices $j_1 = j_1(m)$ and $j_2 = j_2(m)$ by
$$x_{j_1} = \max\{x_k : x_k \le \varepsilon_{\theta m}\,,\ k = 1,\dots,m\}$$

(4.3.35)

xj2 = min {xk : xk ≥ aθm , k = 1, . . . , m} ,

(4.3.36)

and

respectively, and set
$$Z_{\theta,m} = \{x_k = x_{mk}^{w} : x_{j_1} \le x_k \le x_{j_2}\}\,.$$
For the sake of completeness, if $\{x_k : x_k \le \varepsilon_{\theta m}\}$ or $\{x_k : x_k \ge a_{\theta m}\}$ is empty, we set $x_{j_1} = x_1$ or $x_{j_2} = x_m$, respectively. Furthermore, for all sufficiently large $N \in \mathbb{N}$, let $\mathbb{P}_N^*$ denote the collection of all polynomials of degree at most $N$ vanishing at the zeros of $p_m^w$ which are smaller than $x_{j_1}$ or larger than $x_{j_2}$, i.e.,
$$\mathbb{P}_N^* = \{P \in \mathbb{P}_N : P(x_i) = 0\,,\ x_i \notin Z_{\theta,m}\}\,.$$
Of course, $p_m^w \in \mathbb{P}_N^*$ for $N \ge m$ and $\theta \in (0,1)$ arbitrary. Now, analogously to (4.3.29), we define a modified Gaussian rule by the condition that



∞ 0

Q2m−1 (x)w(x) dx =

m  k=1

λw k Q2m−1 (xk )

=

j2  k=j1

λw k Q2m−1 (xk ) ,


holds for every Q2m−1 ∈ P∗2m−1 . Then, for every suitable function f : (0, ∞) −→ C, the “truncated” Gaussian rule is defined as 



f (x)w(x) dx =

0

j2 

∗ λw k f (xk ) + em (f ) ,

(4.3.37)

k=j1

whose error $e_m^{*}(f)$ is the difference between the integral and the quadrature sum. In comparison to the Gaussian rule (4.3.29), in formula (4.3.37) the terms of the quadrature sum corresponding to the zeros which are “close” to the Mhaskar–Rahmanov–Saff numbers are dropped. From the numerical point of view, this fact has two consequences. First, it avoids overflow phenomena (taking into account that, in general, the function $f$ is exponentially increasing at the endpoints of $(0,\infty)$). Second, it produces a computational saving, which is evident in the numerical treatment of integral equations.
We are now going to study the behaviour of $e_m^{*}(f)$ for functions $f \in C_u$ and $f \in W_w^{1,r}$. We will see that the errors $e_m(f)$ and $e_m^{*}(f)$ have essentially the same behaviour in $C_u$, but not in $W_w^{1,r}$, since $e_m^{*}(f)$ satisfies (3.3.113), while $e_m(f)$ does not. The behaviour of $e_m^{*}(f)$ in $C_u$ is given by the following proposition.

Proposition 4.3.17 Assume u−1 w ∈ L1 . Then, for any f ∈ Cu , we have   ∗  e (f ) ≤ c EM (f )u,∞ + e−c1 mν f u∞ , m where M =

9

θ θ+1

(4.3.38)

 : m , θ ∈ (0, 1) is fixed, ν is given by (4.3.9), and c = c(m, f ),

c1 = c1 (m, f ). In particular, if f ∈ W∞,r u , then inequality (4.3.38) becomes   ∗ e (f ) ≤ c m

 √ r am f W∞,r . u m

(4.3.39)

For smoother functions, the analogue of Proposition 4.3.15 is given by the following statement. Proposition 4.3.18 If the weight u and the function f satisfy the assumptions of Proposition 4.3.15, then, for 0 < μ < α(1 − 1/(2β))/(α + 1/2) fixed, % lim

m→∞

&m−μ  ∗  e (f ) m   = 0. f u∞ + f (m) u∞

(4.3.40)

1,1 or f ∈ Z1,s , where s > 1, the following proposition states For functions f ∈ Ww w the wanted estimates.


1,1 , we have Proposition 4.3.19 For every f ∈ Ww ∗ (f )| |em

-√ .  am   −c1 mν   f w1 . f ϕw 1 + e ≤c m

(4.3.41)

Moreover, for 1 < s < r ∈ N and f ∈ Z1,s w , the estimate ∗ |em (f )|

& %√  √ am /m r (f, t) am w,1 ϕ −c mν f w1 ≤c dt + e m 0 t2

(4.3.42)

is true. In both cases, ν is given by (4.3.9) and c and c1 do not depend on m and f . Consequently, inequality (4.3.41) is the wanted estimate and, with the help of (4.3.42), it can be generalized to ∗ (f )| |em

 √ s am f Z1,s ≤c w m

for f ∈ Z1,s w , s > 1, where c = c(m, f ). In particular, if s is an integer, recalling (4.3.41), the Zygmund norm can be replaced by the Sobolev norm. Finally, we emphasize that the previous estimate cannot be improved, since, in ∗ (f ) converges to zero with the order of the best the considered function spaces, em polynomial approximation.

4.3.5 Lagrange Interpolation in $L^2_{\sqrt{w}}$

The assertions of this section were proved in [154] (see also [135], where the behaviour of the Lagrange interpolation has been studied in different function spaces). We want to apply the results of Sect. 4.3.4 in order to estimate the error of the Lagrange interpolation process based on the zeros of $p_m^w$. For a function $f : (0,\infty) \to \mathbb{C}$, the Lagrange polynomial interpolating $f$ at the zeros of $p_m^w$ is defined by
$$L_m^w f = \sum_{k=1}^{m} f(x_k)\,\ell_k^w\,, \qquad \ell_k^w(x) = \frac{p_m^w(x)}{(p_m^w)'(x_k)\,(x - x_k)}\,.$$

 √  We are going to study the behaviour of  Lw w 2 for different function classes. mf Since m    w  √ 2  λw k L f  = w f (xk ) w(xk ) m 2 w(xk ) k=1

2

(4.3.43)


and since we are dealing with an unbounded interval, we cannot expect an analogue √ of the theorem in [49]. On the other hand, if f ∈ Cu with u(x) = (1 + x)δ w(x) and δ > 1/2, it is easily seen that  √   f − Lw f w2 ≤ c Em−1 (f )u,∞ , m

c = c(m, f ) .

Nevertheless, as for the Gaussian rule, if $f \in W^{2,1}_{\sqrt{w}}$, then $L_m^w f$ does not have an optimal behaviour, i.e., an estimate of the form

√   √  √   f − Lw f  ≤ c am f  ϕ w  , w m 2 2 m

c = c(m, f ) ,

does not hold (cf. Proposition 4.3.16). For this reason, we introduce the following “truncated” Lagrange interpolation polynomial L∗,w m f =

j2 

f (xk ) w k ,

k=j1

where j1 , j2 are defined by (3.3.72). Of course, in general we have L∗,w m P = P for ∗,w ∗ ∗ polynomials P ∈ Pm−1 . But, L∗,w m Q = Q for all Q ∈ Pm−1 and Lm f ∈ Pm−1 ∗,w for arbitrary f : (0, ∞) −→ C. Hence, the operator Lm is a projection from, for example, C(R+ ) onto P∗m−1 . Moreover, considering the weight  σ (x) = (1 + x)δ w(x) ,

δ > 0,

(4.3.44)

p

we can show that every function f ∈ Lσ can be approximated by polynomials from P∗m . For this, we define   m (f )σ,p = inf (f − P )σ p : P ∈ P∗m , E

1 ≤ p ≤ ∞.

p

Lemma 4.3.20 For 1 ≤ p ≤ ∞ and f ∈ Lσ , where σ is given by (4.3.44), we have  m (f )σ,p ≤ c EM (f )σ,p + e−c1 mν f σ p , E 9  : θ where M = θ+1 m , θ ∈ (0, 1) is fixed, and ν is given by (4.3.9), as well as c = c(m, f ), c1 = c1 (m, f ). As an immediate consequence of the previous lemma and equality (4.3.43), we get the following proposition.


Proposition 4.3.21 For $f \in C_\sigma$ with $\sigma$ as in (4.3.44) and $\delta > 1/2$, we have
$$\|(f - L_m^{*,w}f)\sqrt{w}\|_2 \le c\,\big(E_M(f)_{\sigma,\infty} + e^{-c_1 m^{\nu}}\,\|f\sigma\|_\infty\big)\,,$$
where $M = \lfloor \frac{\theta}{\theta+1}\,m \rfloor$, $\theta \in (0,1)$ is fixed, $\nu$ is given by (4.3.9), and $c$, $c_1$ are independent of $m$ and $f$.
Now, we are going to study the behaviour of the sequence $\{L_m^{*,w}\}_{m=1}^{\infty}$ in the Sobolev spaces $W^{2,r}_{\sqrt{w}}$, which is interesting in different contexts. In this regard, we observe that, since no results concerning the sequence of the Fourier sums $\{S_m^{w}\}_{m=1}^{\infty}$ are available, we cannot deduce the behaviour of $\{L_m^{*,w}\}_{m=1}^{\infty}$ from that of $\{S_m^{w}\}_{m=1}^{\infty}$. Therefore, we need a different approach. The following proposition describes the behaviour of the operator $L_m^{*,w}$ in various function spaces.

Proposition 4.3.22 Assume $r \in \mathbb{N}$, $f \in L^2_{\sqrt{w}}$, and
$$\int_0^{1}\frac{\Omega_\varphi^r(f,t)_{\sqrt{w},2}}{t^{3/2}}\,dt < \infty\,.$$

(4.3.45)

Then, ⎡ ⎤  √  1  √am r 2  ϕ (f, t)√w,2 √  m a ν √  m ∗,w −c m  f −L f w2 ≤ c ⎣ dt + e 1 f w2 ⎦ , m 3 m 0 t2

(4.3.46) where ν is given by (4.3.9) and the constants c and c1 are independent of m and f . Note that the assumption (4.3.45) implies f ∈ C(R+ ) (see [132]). The error estimate (4.3.46) has interesting consequences. First, if 12 < s < r ∈ N and sup t >0

rϕ (f, t)√w,2 ts

dt < ∞ ,

√ , then the order of convergence of the process is O i.e., f ∈ Z2,s w

 am s . m

 √

√ , then Whereas, if r ∈ N and f ∈ W2,r w

 √   f − L∗,w f w 2 ≤ c m

 √ r am f W2,r . √ m w

(4.3.47)

This means that the process converges with the error of the best approximation for the considered classes of functions. we are now able to show the uniform  Second, ∞ boundedness of the sequence L∗,w m m=1 in the Sobolev spaces.


√ , then Proposition 4.3.23 If r ∈ N and f ∈ W2,r w

   sup L∗,w ≤ c f W2,r , m f W2,r √ √ w

m∈N

w

c = c(m, f ) .

(4.3.48)

2,s , we have Moreover, for s > r and f ∈ W√ w

  f − L∗,w f  2,r ≤ c m W√ w

 √ s−r am f W2,s , √ m w

c = c(m, f ) .

(4.3.49)

Remark 4.3.24 In all estimates of $e_m^{*}(f)$ and $f - L_m^{*,w}f$, a constant $c$ independent of $m$ and $f$ appears. We have not pointed out the dependence of this constant on the parameter $\theta \in (0,1)$, since $\theta$ is fixed. Nevertheless, it is useful to observe that $c = c(\theta) = \mathcal{O}\big(\big(\tfrac{\theta}{1-\theta}\big)^{2}\big)$. Hence, it is clear that the parameter $\theta$ cannot assume the values 0 or 1, and the “truncation” is necessary in this sense (see [154]).

4.3.6 Remarks on Numerical Realizations

Here we follow the presentations in [154] and [156].

Computation of the Mhaskar–Rahmanov–Saff Numbers

Here we give a method for computing the Mhaskar–Rahmanov–Saff numbers $\varepsilon_t = \varepsilon_t(w)$ and $a_t = a_t(w)$ for the weight function (4.3.2). These numbers are defined by (4.3.4). After the linear transformation
$$x = \frac{\varepsilon_t + a_t}{2} + \frac{a_t - \varepsilon_t}{2}\,\xi\,, \qquad -1 < \xi < 1\,,$$
i.e., $x = s(1 + A\xi)$, where
$$A = \frac{a_t - \varepsilon_t}{a_t + \varepsilon_t}\,, \qquad s = \frac{a_t + \varepsilon_t}{2}\,,$$
we obtain $(a_t - x)(x - \varepsilon_t) = A^2 s^2 (1 - \xi^2)$, and then the equations (4.3.4) reduce to
$$t = \beta s^{\beta}\,\psi(A;\beta) - \alpha s^{-\alpha}\,\psi(A;-\alpha)$$

(4.3.50)

and 0 = βs β−1 ψ(A; β − 1) − αs −α−1 ψ(A; −α − 1),

(4.3.51)


respectively, where we introduced the function $\psi : (-1,1) \to \mathbb{R}$,
$$\psi(A;\gamma) = \frac{1}{\pi}\int_{-1}^{1}\frac{(1+A\xi)^{\gamma}}{\sqrt{1-\xi^{2}}}\,d\xi\,,$$
with an arbitrary parameter $\gamma \in \mathbb{R}$. This function can be expressed in terms of the hypergeometric function
$${}_2F_1(a,b;c;z) = \sum_{k=0}^{+\infty}\frac{(a)_k (b)_k}{(c)_k}\,\frac{z^k}{k!}$$
as follows:
$$\psi(A;\gamma) = {}_2F_1\Big(-\frac{\gamma}{2}\,,\ \frac{1-\gamma}{2}\,;\ 1\,;\ A^2\Big)\,.$$

Here (a)k denotes Pochhammer’s symbol that is defined by (a)k = a(a + 1) · · · (a + k − 1) =

(a + k) , (a)

(a)0 = 1,

where  is Euler’s gamma function. For given α and β, from (4.3.51) we get -

$$s = \left(\frac{\alpha\,\psi(A;-\alpha-1)}{\beta\,\psi(A;\beta-1)}\right)^{\frac{1}{\alpha+\beta}}. \qquad\qquad (4.3.52)$$

Using (4.3.50) we obtain the following equation for finding A to a given t, G(A; α, β) + G(A; −β, −α) = t,

(4.3.53)

where
$$G(A;\alpha,\beta) = \beta\left(\frac{\alpha\,\psi(A;-\alpha-1)}{\beta\,\psi(A;\beta-1)}\right)^{\frac{\beta}{\alpha+\beta}}\psi(A;\beta)\,.$$

Finally, the Mhaskar–Rahmanov–Saff numbers are given by $a_t = s(1+a)$ and $\varepsilon_t = s(1-a)$, where $A = a$ is the unique solution of the nonlinear equation (4.3.53) in the interval $(0,1)$ and $s$ is given by (4.3.52). The corresponding MATHEMATICA code can be given in the following way:

    MRSNumbers[t_, alpha_, beta_] :=
      Module[{psi, funG, al, be, ga, A, a, s},
        (* psi(A; gamma) via the hypergeometric representation given above *)
        psi[A_, ga_] := Hypergeometric2F1[(1 - ga)/2, -ga/2, 1, A^2];
        funG[A_, al_, be_] :=
          be (al/be psi[A, -al - 1]/psi[A, be - 1])^(be/(al + be)) psi[A, be];
        (* solve the nonlinear equation (4.3.53) for A, starting close to 1 *)
        a = A /. FindRoot[funG[A, alpha, beta] + funG[A, -beta, -alpha] == t, {A, 1 - 10^(-6)}];
        s = (alpha/beta psi[a, -alpha - 1]/psi[a, beta - 1])^(1/(alpha + beta));
        (* returns {a_t, eps_t, a, s} *)
        Return[{s (1 + a), s (1 - a), a, s}]
      ]
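A minimal illustration of how this routine might be used is the following sketch; the parameter values $t = 100$, $\alpha = 1$, $\beta = 3/2$ are chosen here only for demonstration and do not come from the text:

    {at, epst, a, s} = MRSNumbers[100, 1, 3/2];
    (* values of a_t for t = 10, 20, ..., 150, e.g. for inspecting their growth *)
    atValues = Table[First[MRSNumbers[t, 1, 3/2]], {t, 10, 150, 10}];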


As a starting value for solving the nonlinear equation (4.3.53) we can take some value very close to 1, for example, $1 - 10^{-6}$. The Mhaskar–Rahmanov–Saff numbers $a_t$ and $\varepsilon_t$ in the case $\alpha = 1$ and $\beta = 1.2$, $1.4$, and $1.9$ are displayed in the following figures.

[Figure: MRS numbers $a_t$ (left) and $\varepsilon_t$ (right) for $\alpha = 1$ and $\beta = 1.2$ (red line), $\beta = 1.4$ (blue line), and $\beta = 1.9$ (black line).]

The respective numbers in case of β = 1.5 and α = 0.8, 1, and 1.5 are presented in the next figures. Note that here the graphs for at almost coincide.

[Figure: MRS numbers $a_t$ (left) and $\varepsilon_t$ (right) for $\beta = 1.5$ and $\alpha = 0.8$ (red line), $\alpha = 1$ (blue line), and $\alpha = 1.5$ (black line).]

Numerical Construction of Quadrature Rules

Here we consider the numerical construction of the Gaussian quadrature formulas with respect to the weight function $w(x) = e^{-x^{-\alpha}-x^{\beta}}$ on $(0,\infty)$. Thus, we want to construct the parameters of the $m$-point Gaussian quadrature rule defined in Sect. 4.3.4,
$$\int_0^{+\infty} f(x)\,w(x)\,dx = \sum_{k=1}^{m}\lambda_k^{w}\,f(x_k) + e_m(f)\,,$$

(4.3.54)


the nodes $x_k$ and the Christoffel numbers $\lambda_k^{w}$ for an arbitrary $m \le n$. In such a procedure we need the moments
$$\mu_k = \mu_k^{w} = \int_0^{+\infty} x^{k}\,w(x)\,dx\,, \qquad k = 0,1,\ldots,2n-1\,, \qquad (4.3.55)$$

in order to construct the recursion coefficients αk = αkw and βk = βkw , k ≤ n − 1, in the three-term recurrence relation for the corresponding monic orthogonal polynomials πk (x) = πkw (x) (cf. (5.1.7) for the monic Jacobi polynomials), πk+1 (x) = (x − αk )πk (x) − βk πk−1 (x),

k = 0, 1, . . . , n − 1,

(4.3.56)

with $\pi_0(x) = 1$ and $\pi_{-1}(x) = 0$. In that way, we have access to all polynomials $\pi_k^{w}(x)$ of degree at most $n$ and the possibility of constructing Gaussian rules with every number $m \le n$ of points. Usually, the Chebyshev method of moments (or modified moments) is not applicable in standard machine arithmetic for sufficiently large $m$, since the process is ill-conditioned, especially for weights on infinite intervals as in our case. Hence, a construction of the recursion coefficients must be carefully realized by an application of the discretized Stieltjes-Gautschi procedure [57]. However, recent progress in symbolic computation and variable-precision arithmetic makes it possible to generate the coefficients $\alpha_k$ and $\beta_k$ in the three-term recurrence relation (4.3.56) directly by using the standard method of moments in sufficiently high precision. Respective symbolic/variable-precision software for orthogonal polynomials is available in GAUTSCHI's package SOPQ in MATLAB (see [59, 60]) and in the MATHEMATICA package OrthogonalPolynomials (see [31] and [165]). Hence, all that is required is a procedure for the (symbolic) calculation of the moments in variable-precision arithmetic. Thus, in order to overcome the numerical instability in the procedure for generating the recursion coefficients $\alpha_k$ and $\beta_k$ in the MATHEMATICA package OrthogonalPolynomials,

    {alpha, beta} = aChebyshevAlgorithm[moments, WorkingPrecision -> WP]

(4.3.57) we have to choose WP sufficiently large such that the relative errors in these coefficients satisfy  α   k   < ε, αk

 β   k  < ε,  βk

k = 0, 1, . . . , n − 1,

(4.3.58)

where ε is the required accuracy. The list of moments (moments) contains 2n elements and it can be given in a symbolic form.


In the case $\alpha = \beta$, i.e., $w(x) = e^{-x^{-\alpha}-x^{\alpha}}$ on $(0,\infty)$, we can calculate the moments (4.3.55) by
$$\mu_k = \mu_k^{w} = \int_0^{+\infty} x^{k}\,w(x)\,dx = \frac{2}{\alpha}\,K_{\frac{k+1}{\alpha}}(2)\,, \qquad k = 0,1,2,\ldots\,,$$

(4.3.59)

where $K_r(z)$ is the modified Bessel function of the second kind. In MATHEMATICA this function is implemented as BesselK[r,z], and its value can be evaluated with arbitrary precision. The case $\alpha \ne \beta$ is more complicated, especially for symbolic computations, but in some cases, for integer (or rational) values of the parameters, the moments can be expressed in terms of the Meijer G-function (see [154] for further details). Then, in order to compute the nodes and weights of the Gaussian quadrature rules, we use the Golub-Welsch procedure in [62]. For instance, to construct Gaussian quadrature rules (4.3.54) for $m \le n = 150$ with a high precision like Precision -> 85, we use the recursion coefficients obtained for WP = 250. In that case, we obtain the Gaussian parameters by the Golub-Welsch procedure realized in the MATHEMATICA package OrthogonalPolynomials as

    aGaussianNodesWeights[m, alpha, beta, WorkingPrecision -> 90, Precision -> 85].

We can compute the parameters (nodes and weights) in all m-point Gaussian formulas for m ≤ n = 150 with the same precision, since the Golub-Welsch algorithm is well-conditioned.
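Putting the pieces of this subsection together, the whole construction can be summarized in a few MATHEMATICA lines. The following is only a schematic sketch, not a verbatim excerpt from the package documentation: the routine names aChebyshevAlgorithm and aGaussianNodesWeights and their options are those quoted above, the package OrthogonalPolynomials is assumed to be loaded, and the concrete values of n, m, and WP have to be adapted to the required accuracy:

    (* load the OrthogonalPolynomials package here *)
    alphaW = 2;        (* weight w(x) = Exp[-x^-alphaW - x^alphaW], case alpha = beta *)
    n = 150; WP = 250;
    (* moments (4.3.59) in symbolic/arbitrary-precision form *)
    moments = Table[2/alphaW BesselK[(k + 1)/alphaW, 2], {k, 0, 2 n - 1}];
    (* recursion coefficients by the method of moments, cf. (4.3.57) *)
    {alpha, beta} = aChebyshevAlgorithm[moments, WorkingPrecision -> WP];
    (* nodes and Christoffel numbers of the m-point Gaussian rule, m <= n *)
    m = 100;
    {nodes, weights} = aGaussianNodesWeights[m, alpha, beta, WorkingPrecision -> 90, Precision -> 85];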

Numerical Examples

We first consider the numerical construction of Gaussian quadrature rules with respect to the weight function $w(x) = e^{-x^{-2}-x^{2}}$ on $(0,\infty)$, and then we apply them to the calculation of some integrals. In order to generate quadrature rules for $m \le n = 300$, we need the first six hundred moments $\mu_k$, given by (4.3.59), or, in the MATHEMATICA package OrthogonalPolynomials (see [31] and [165]), by the command

    moments = Table[BesselK[(k + 1)/2, 2], {k, 0, 600}].

Suppose that we need a very high precision in the quadrature parameters, for example, 70 decimal digits. Then, in order to overcome the numerical instability in the procedure for generating the recursion coefficients $\alpha_k$ and $\beta_k$, we have to choose WP = 400 in (4.3.57). We obtain all recursion coefficients for $k < n = 300$ with a relative error less than $\varepsilon = 10^{-72}$ (see (4.3.58)), which is easy to check by taking a higher WorkingPrecision, for example, WP = 500. As we can see, the calculation of the recursion coefficients is a very sensitive process, which here, in


the worst case, causes a loss of about 328 decimal digits! Notice that, in case of WP = 500, all recursive coefficients are determined with a relative error less than ε = 10−172. But, when we need, for example, only the first n = 100 coefficients with a relative error less than ε = 10−52 , then it is enough to put WP = 150. In that case the loss of accuracy is about 98 decimal digits. Example 4.3.25 We apply Gaussian quadrature rules to calculate the integral 

$$I(f) = \int_0^{+\infty} f(x)\,e^{-1/x^{2}-x^{2}}\,dx$$

for
$$f(x) = f_1(x) = \cosh\Big(\frac{1}{x+1}\Big)\cosh(x-1) \qquad\text{and}\qquad f(x) = f_2(x) = \arctan\Big(\frac{1+x}{4}\Big)\,.$$

The values of these integrals can be evaluated with a high precision using the MATHEMATICA function NIntegrate. One obtains I (f1 ) = 0.145675081234175234662385034933527957846278353 . . . I (f2 ) = 0.059190601605211612059097576887285181920420759787912939501099229334394 . . .

The relative errors $r_m(f) = \big|\frac{Q_m(f) - I(f)}{I(f)}\big|$, where $Q_m(f)$ are the corresponding Gaussian sums, are presented in the following table for $m = 5(5)60$. Numbers in parentheses indicate decimal exponents, for example $2.36(-6) = 2.36 \times 10^{-6}$. The convergence for both smooth functions is very fast.

     m    r_m(f1)       r_m(f2)
     5    2.36(−6)      1.18(−10)
    10    1.96(−10)     5.77(−19)
    15    4.68(−14)     9.58(−24)
    20    2.14(−17)     8.39(−30)
    25    1.55(−20)     1.05(−34)
    30    1.58(−23)     4.45(−40)
    35    2.11(−26)     4.52(−45)
    40    3.56(−29)     2.71(−49)
    45    7.26(−32)     1.09(−53)
    50    1.75(−34)     4.92(−58)
    55    4.89(−37)     2.74(−62)
    60    1.55(−39)     1.95(−66)

Relative errors in Gaussian sums for m = 5(5)60
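The entries of this table can be reproduced directly from the constructed quadrature parameters. The following lines are a minimal sketch only (they assume the lists nodes and weights from the construction sketched above for the weight $e^{-x^{-2}-x^{2}}$, and use NIntegrate merely to supply the reference value $I(f_1)$):

    f1[x_] := Cosh[1/(x + 1)] Cosh[x - 1];
    Qm   = weights . (f1 /@ nodes);                  (* Gaussian sum Q_m(f1) *)
    Iref = NIntegrate[f1[x] Exp[-1/x^2 - x^2], {x, 0, Infinity}, WorkingPrecision -> 40];
    rm   = Abs[Qm - Iref]/Abs[Iref]                  (* relative error r_m(f1) *)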


Example 4.3.26 We now apply the same quadrature rules to calculate the corresponding integral for the function $f(x) = |\cos x|^{5/4}$ with $w(x) = e^{-x^{-2}-x^{2}}$. In the second column of the following table we present the relative errors of the Gaussian rule for $m = 10(10)50$ and $m = 100(50)300$. We note that $f \in Z^{\infty,5/4}_{\sqrt{w}}$. So, by Proposition 4.3.14 and (4.3.22), the results in the table are concordant with the theoretical order of convergence, that is $m^{-15/16}$.

      m    r_m(f)      θ = 0.1              θ = 0.05             ε = 10^−5
                       (j1, j2)   r_m       (j1, j2)   r_m       (j1, j2)   r_m
     10    3.70(−3)    (1, 7)     4.54(−3)  (1, 6)     1.65(−2)  (1, 8)     3.71(−3)
     20    1.74(−3)    (2, 12)    2.58(−3)  (2, 10)    1.34(−2)  (1, 13)    1.76(−3)
     30    2.07(−3)    (2, 17)    2.01(−3)  (3, 15)    4.37(−4)  (2, 17)    2.01(−3)
     40    1.89(−3)    (3, 23)    1.97(−3)  (4, 19)    4.08(−3)  (3, 20)    2.14(−3)
     50    1.29(−4)    (3, 28)    1.21(−4)  (5, 23)    2.11(−3)  (4, 24)    1.45(−4)
    100    3.51(−4)    (5, 55)    3.51(−4)  (7, 45)    2.86(−4)  (7, 38)    1.37(−5)
    150    1.92(−4)    (7, 82)    1.92(−4)  (9, 68)    1.83(−4)  (11, 51)   3.10(−4)
    200    3.21(−6)    (8, 109)   3.21(−6)  (11, 90)   7.58(−7)  (14, 62)   6.45(−4)
    250    8.04(−5)    (10, 136)  8.04(−5)  (13, 112)  7.94(−5)  (18, 73)   7.79(−4)
    300    1.03(−4)    (11, 163)  1.03(−4)  (15, 134)  1.04(−4)  (21, 83)   1.04(−3)

Relative errors in quadrature sums for m = 10(10)50 and m = 100(50)300

In the third and fourth column of the table we show the indices j1 and j2 (see (4.3.35) and (4.3.36)) and relative errors obtained for the truncated Gaussian rule with θ = 0.1 and θ = 0.05, respectively. Finally, we compare these results with another kind of truncation, i.e., in the Gaussian rule we omit the terms with indices k for which λw k |f (xk )| < ε, where ε is the precision to be achieved in the computations. In the last column we show the indices and the relative errors for this kind of truncated rule choosing ε = 10−5 . For calculating the relative errors we used as exact value 

$$I = \int_0^{+\infty} |\cos x|^{5/4}\,e^{-x^{-2}-x^{2}}\,dx = 0.04552779434634736\ldots$$

obtained by using the MATHEMATICA function NIntegrate and the following decomposition I = I0 +

+∞ 

Ik

k=0



π/2

= 0

(cos x)5/4 w(x) dx +

+∞   k=0 0

π

  π (−1)k cos t + (2k − 1) 2

5/4

 π dt , w t + (2k − 1) 2


i.e., 

π/2

I=

(cos x)5/4w(x) dx +

0

+∞   k=0 0

π

 π dt, (sin t)5/4 w t + (2k − 1) 2

where I0 = 0.042224454817570724303 . . . , I1 = 0.003303339527329462572 . . . , I2 = 1.447179253891697823900 . . . × 10−12 , I3 = 3.561804871271085508749 . . . × 10−30 ,

etc.
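The truncated sums used in this example can be produced from the full Gaussian parameters by simply discarding the flagged summands. The following routine is only an illustrative sketch (it is not taken from the packages cited above): it determines $j_1$ and $j_2$ as in (4.3.35) and (4.3.36) by means of the MRSNumbers routine given earlier and evaluates the truncated rule (4.3.37):

    truncGauss[f_, nodes_, weights_, theta_, alpha_, beta_] :=
      Module[{m = Length[nodes], mrs, am, eps, j1, j2, q},
        mrs = MRSNumbers[theta m, alpha, beta];        (* {a_(theta m), eps_(theta m), a, s} *)
        am = mrs[[1]]; eps = mrs[[2]];
        j1 = Max[Count[nodes, x_ /; x <= eps], 1];     (* largest k with x_k <= eps_(theta m) *)
        q  = Count[nodes, x_ /; x >= am];
        j2 = If[q == 0, m, m - q + 1];                 (* smallest k with x_k >= a_(theta m) *)
        Total[weights[[j1 ;; j2]] (f /@ nodes[[j1 ;; j2]])]
      ]

For the $\varepsilon$-type truncation mentioned above one would instead keep only the summands with $\lambda_k^{w}|f(x_k)| \ge \varepsilon$.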

Comparison with the Gaussian Rule Based on Laguerre Zeros

We observe that the integral
$$\int_0^{+\infty} f(x)\,w(x)\,dx = \int_0^{+\infty} f(x)\,e^{-x^{-\alpha}-x^{\beta}}\,dx \qquad (4.3.60)$$

can be evaluated by means of the Gaussian rule related to the weight $w(x) = e^{-x^{-\alpha}-x^{\beta}}$, i.e.,
$$G_m(w,f) = \sum_{k=1}^{m}\lambda_k^{w}\,f(x_k)\,,$$

as described in Sect. 4.3.4. On the other hand, this integral can be rewritten as
$$\int_0^{\infty} f(x)\,e^{-x^{-\alpha}-x^{\beta}}\,dx = \int_0^{\infty} f(x)\,e^{-x^{-\alpha}}\,\sigma(x)\,dx$$
and evaluated by using the Gaussian rule related to the Laguerre-type weight $\sigma(x) = e^{-x^{\beta}}$, i.e.,
$$G_m(\sigma,g) = \sum_{k=1}^{m}\lambda_k^{\sigma}\,g(t_k) = \sum_{k=1}^{m}\lambda_k^{\sigma}\,f(t_k)\,e^{-t_k^{-\alpha}}\,,$$

where $g(x) = f(x)\,e^{-x^{-\alpha}}$, and $t_k = t_{m,k}^{\sigma}$ are the zeros of the $m$th Laguerre-type polynomial $p_m^{\sigma}$, satisfying
$$\frac{c}{m^{2-1/\beta}} \le t_1 < \dots < t_m < c\,m^{1/\beta}\,,$$


and $\lambda_k^{\sigma}$ are the corresponding Christoffel numbers (see, e.g., [121, 125]). Now, considering the coefficients of the two Gaussian rules, we observe that for the first term of $G_m(w,f)$ we have
$$\lambda_1^{w} \sim w(x_1)\,x_1 \sim e^{-x_1^{-\alpha}}\,x_1 \sim e^{-m^{\frac{\alpha(2\beta-1)}{\beta(2\alpha+1)}}}\,,$$
whereas the first term of $G_m(\sigma,g)$ fulfills
$$\lambda_1^{\sigma}\,e^{-t_1^{-\alpha}} \sim \sigma(t_1)\,t_1\,e^{-t_1^{-\alpha}} \sim t_1\,e^{-t_1^{-\alpha}} \sim e^{-m^{\frac{\alpha(2\beta-1)}{\beta}}}\,.$$

+∞

0



1+x arctan 4



e−1/x

3 −x 3

dx ,

    3 with f (x) = arctan 1+x and g(x) = arctan 1+x e−1/x . This integral can be 4 4 evaluated with a high precision using the Mathematica function NIntegrate. In following Table we compare the relative errors obtained applying the two rules for increasing values m, working in double arithmetic precision. We note that underflow phenomena occurred in the case of Laguerre weights, while in the case of w the symbol “—” means that the required precision has already been obtained and the relative error is of the order of the machine epsilon. m 2 7 30 60 110

Relative error of Gm (σ, g) 3.077 × 10−1 1.222 × 10−2 9.005 × 10−7 1.584 × 10−11 6.984 × 10−16

Relative error of Gm (w, f ) 5.891 × 10−6 1.256 × 10−16 – – –

Relative errors

We also want to mention that a similar argument applies a fortiori if we compare the two truncated Gaussian rules related to w and σ . In fact, in Gm (w, f ) we can drop some terms related to the zeros close to ε(w) and some other terms related to the


zeros close to am (w), but in Gm (σ, g) we can drop only some terms related to the largest zeros without loss of accuracy (see [125]). Let us now compare the convergence of the two Gaussian rules. To this aim, for the weight function v(x) = (1 + x)δ w(x) = (1 + x)δ e−x

−α −x β

δ > 1,

,

x ∈ (0, +∞), we use the function space # $ Cv := f ∈ C(0, +∞) : lim f (x)v(x) = 0 = lim f (x)v(x) x→+∞

x→0+0

with the norm f Cv := f v∞ =

sup

|f (x)v(x)| .

x∈(0,+∞)

For more regular functions, we define the Sobolev-type spaces  W∞,r = f ∈ Cv : f (r−1) ∈ AC(0, +∞), v

   (r) r  f ϕ v 



1, β

gCv¯ := f v ¯ ∞=

sup

|f (x)v(x)| ¯ ,

x∈(0,+∞)

and/or to the Sobolev-type space     (r) r  (r−1) W∞,r = g ∈ C : g ∈ AC(0, +∞), ϕ v ¯ g   v ¯ v¯



 0 ,

0

and the Gamma function 



(z) =

e−t t z−1 dt ,

z > 0,

0

and obtain for n ∈ N, taking into account (5.1.2), (5.1.5), and (5.1.1), !

hα,β n

"2

 = =

1 −1

! α,β "2 α,β Pn (x) v (x) dx = knα,β



1 −1

Pnα,β (x)x n v α,β (x) dx

   2n + α + β (−1)n 1 n d n α+n,β+n x v (x) dx 4n n! −1 dx n n

   2n + α + β α+β+1 1 = (1 − t)n+α t n+β dt 2 n 0 =

(n + α + 1)(n + β + 1) 2α+β+1 . 2n + α + β + 1 n! (n + α + β + 1)

Here, we also used the well known relation (z +1) = z(z). This formula remains true for n = 0, as one can directly check. Consequently,  hα,β n =

(n + α + 1)(n + β + 1) 2α+β+1 , 2n + α + β + 1 n! (n + α + β + 1)

n ∈ N0 .

(5.1.12)

Our next aim is to prove a formula for the first derivative of the nth Jacobi polynomial. With the help of the orthogonality relations (5.1.2) and partial integration we get, for n = 2, 3, . . . , 

1

−1

-

. d α,β Pn (x) x k v α+1,β+1 (x) dx = 0 , dx

k = 0, . . . , n − 2 .

(5.1.13)


d α,β P (x) is As a consequence of formula (5.1.5), the leading coefficient of dx n equal to 2

−n

  2n + α + β 1 α+1,β+1 n = (n + α + β + 1) kn−1 . n 2

This, together with (5.1.13) and the uniqueness properties of orthogonal polynomial systems, yields 1 d α,β α+1,β+1 (x) . Pn (x) = (n + α + β + 1)Pn−1 dx 2 For the normalized polynomials we conclude d α,β α+1,β+1 p (x) = γnα,β pn−1 (x) , dx n α,β

where γn

=



(5.1.14)

n(n + α + β + 1).

Exercise 5.1.1 For n ∈ N, prove that v α,β (x)pnα,β (x) = −

1 α,β γn

d  α+1,β+1 α+1,β+1 (x)pn−1 (x) . v dx

(5.1.15)

Exercise 5.1.2 Use the relations (cf. (5.1.13)) 

1 −1

(1 − x)Pnα+1,β (x)x k v α,β (x) dx = 0 ,

k = 0, . . . , n − 1 ,

(5.1.16)

to show that α,β

(1 − x)pnα+1,β (x) = δnα,β pnα,β (x) − εnα,β pn+1 (x) ,  α,β

where δ0

=

n ∈ N0 ,

2(α + 1) , α+β +2 

δnα,β =

2(n + α + 1)(n + α + β + 1) , (2n + α + β + 1)(2n + α + β + 2)

n ∈ N,

2(n + 1)(n + β + 1) , (2n + α + β + 2)(2n + α + β + 3)

n ∈ N0 .

and  εnα,β

=

(5.1.17)


α,β

Prove that pn (x) = (−1)n pn (−x) and conclude α,β

(1 + x)pnα,β+1 (x) = δnβ,α pnα,β (x) + εnβ,α pn+1 (x) ,

n ∈ N0 .

(5.1.18)

Exercise 5.1.3 With the help of (cf. (5.1.13)) 

1 −1

Pnα,β (x)x k v α+1,β (x) dx = 0 ,

k = 0, . . . , n − 2 ,

(5.1.19)

prove that α,β

α+1,β

pnα,β (x) = δnα,β pnα+1,β (x) − εn−1 pn−1 (x) ,

n ∈ N0 .

(5.1.20)

Let us mention the special cases α = β = − 12 and α = β = 12 , in which the Jacobi polynomials are called Chebyshev polynomials of first and second kind, respectively. The respective normalized polynomials we denote by Tn (x) and Un (x). The following relations are well-known, − 12 ,− 12

T0 (x) = p0

− 12 ,− 12

1 (x) = √ , π 1 1 2,2

Un (cos s) = pn

Tn (cos s) = pn /

(cos s) =

/ (cos s) =

2 cos(ns) , π

n ∈ N,

(5.1.21) 2 sin(n + 1)s , π sin s

n ∈ N0 .

(5.1.22)

With these polynomials, relations (5.1.14) and (5.1.15) can be written as Tn (x) 1 d  √ =− 1 − x 2 Un−1 (x) , n dx 1 − x2 (5.1.23)

Tn (x) = n Un−1 (x) and

respectively. Moreover, the normalized Chebyshev polynomials of third and fourth kind are given by − 12 , 12

Rn (cos s) = pn

  cos n + 12 s (cos s) = √ π cos 2s

(5.1.24)

and 1 1 2 ,− 2

Pn (cos s) = pn respectively.

  sin n + 12 s (cos s) = √ , π sin 2s

(5.1.25)


Exercise 5.1.4 Check the following formulas for the parameters in the Gaussian rule 

1 −1

f (x)v α,β (x) dx ∼

n 

α,β

α,β

λnk f (xnk )

k=1 α,β

α,β

α,β

α,β

v = xnk ): (cf. (3.2.26) in case u = v α,β , i.e., we abbreviate λvnk = λnk and xnk − 1 ,− 12

π , n

=

λnk2

− 1 ,− 12

xnk2

= cos

(2k − 1)π , 2n

/

1 1 2,2

λnk =

 12 , 12 2 π 1 − xnk n+1

1 1

,

−1,1 λnk2 2

 −1,1  2π 1 + xnk2 2 , = 2n + 1

1 1 2 ,− 2

1 1  2 ,− 2 2π 1 − xnk , = 2n + 1

λnk

,

2 2 xnk = cos

− 1 , 12

xnk2 1

,− 12

2 xnk

kπ , n+1

= cos

(2k − 1)π , 2n + 1

= cos

2kπ . 2n + 1

5.2 Cauchy Singular Integral Operators

5.2.1 Weighted L2-Spaces

For $\alpha, \beta > -1$, by $L^2_{\alpha,\beta}$ we denote the Hilbert space $L^2_{v^{\alpha/2,\beta/2}}$ (cf. the definition of $L^p_u$ in Sect. 2.4.1) equipped with the inner product

1

−1

u(x)v(x) v α,β (x) dx .

For integrable and differentiable functions u : (−1, 1) −→ C define S0 u as the Cauchy principal value integral (S0 u)(x) =

1 lim π ε→+0



x−ε −1

 +

1

x+ε



u(y) dy , y−x

−1 < x < 1 .

(5.2.1)

Let us recall a conclusion from the theorem on the boundedness of the Cauchy singular integral operator in weighted Lp -spaces given by Hunt, Muckenhoupt, and Wheeden [67] (see also the discussion about the Muckenhoupt condition in Sect. 3.2.1).


Lemma 5.2.1 The operator S0 defined in (5.2.1) can be uniquely extended to a linear bounded operator S ∈ L(L2α,β ) if and only if α, β ∈ (−1, 1). In that case, if u ∈ L2α,β then, for almost all x ∈ (−1, 1), the limit on the right hand side of (5.2.1) exists and coincides with (Su)(x). Using the residue theorem of complex function theory one can prove that (cf. [47, 15.2]) 

∞ −∞

|x|ν−1 πν dx = −π|u|ν−1 sgn (u) cot , x−u 2

0 < ν < 1, u ∈ R,

(5.2.2)

where the integral has to be understood in the Cauchy principal value sense  lim

ε→+0

u−ε −∞

|x|ν−1 dx + x−u



∞ u+ε

|x|ν−1 dx x−u

 .

Analogously (cf. [47, 15.2]), 

∞ −∞

|x|ν−1sgn (x) πν dx = π|u|ν−1 tan , x−u 2

(5.2.3)

and by adding this equation to Eq. (5.2.2) we obtain (cf. [47, 15.2]) 

∞ 0

x ν−1 dx = −π uν−1 cot(πν) , x−u

0 < ν < 1, 0 < u < ∞.

(5.2.4)

Exercise 5.2.2 Prove formulas (5.2.2)–(5.2.4) by integrating the function f (z) =

|z|ν−1ei(ν−1) arg(z) zν−1 = , z−u z−u



π 3π < arg(z) < , 2 2

along the curve   u,δ,ε,R =[−R, −δ] ∪ δ ei(π−t) : 0 ≤ t ≤ π ∪     ∪ [δ, u − ε] ∪ u + ε ei(π−t) : 0 ≤ t ≤ π ∪ [u + ε, R] ∪ R eit : 0 ≤ t ≤ π

for 0 < δ < u − ε < u + ε < R, where δ and ε tend to zero and R to infinity. In what follows, let α, β ∈ (−1, 1) \ {0}. For a given function u : (−1, 1) −→ C and for x ∈ (−1, 1), define   sin(πβ) Sα,β u (x) = cos(πβ) v α,β (x) u(x) + π



1 −1

v α,β (y) u(y) dy , y−x

(5.2.5)


if the integral exists as a Cauchy principal value one. In the sense of Lemma 5.2.1, we can consider Sα,β as a linear and bounded operator from L2−α,−β into L2α,β , i.e., Sα,β ∈ L(L2−α,−β , L2α,β ). Proposition 5.2.3 If α + β = −1, then   −α,−β Sα,β P,nα,β (x) = −P,n−1 (x) ,

−1 < x < 1 , n ∈ N0 ,

(5.2.6)

α,β where P,n (x) denotes the nth (n ∈ N0 ) monic Jacobi polynomial (cf. (5.1.6)) and −α,−β where we agree that P,−1 (x) ≡ 0.

Proof Using the substitution t = 

1 −1

1 v α,β (y) u(y) dy = y−x 1−x



∞ 0

1+y and relation (5.2.4) we have 1−y

t β dt t−

1+x 1−x

=−

π 1−x



1+x 1−x

β cot(πβ) = −π cot(πβ) v α,β (x) .

  α,β (x) ≡ 0 and (5.2.6) is proved in case of n = 0. Setting Hence, Sα,β P,0   α,β ,n (x) = Sα,β P ,n Q (x) we conclude from the recursion formula (5.1.7) ,n+1 (x) = (x − αn )Q ,n−1 (x) + sin(πβ) ,n (x) − βn Q Q π



1 −1

v α,β (y)P,nα,β (y) dy

⎧ , , ⎪ ⎨ (x − αn )Qn (x) − βn Qn−1 (x) : n ∈ N ,  = sin(πβ) 1 α,β ⎪ ⎩ v (y) dy : n = 0. π −1 Because of α + β = −1 we have 

1 −1

v α,β (y) dy = B(−β, 1 + β) = −

π , sin(πβ)

(5.2.7)

,1 (x) ≡ −1 ≡ −P,−α,−β (x). Hence, (5.2.6) is also proved for n = 1. If we so that Q 0 write the recursion coefficients in (5.1.7) as αn = αn (α, β) and βn = βn (α, β), then we see that αn (α, β) = αn−1 (−α, −β), n ∈ N and βn (α, β) = βn−1 (−α, −β), ,n ’s n = 2, 3, . . . With this observation, from the above recursion formula for the Q the correctness of (5.2.6) follows for all n ∈ N0 .   Assume again α + β = −1. Taking into account "−1 α,β α,β ! ,n (x) kn P pnα,β (x) = hα,β n

(5.2.8)


  α,β −1 α,β −α,−β (cf. (5.1.6) and (5.1.11)) and hn kn = hn−1 and (5.1.12)) together with (5.2.6), we get the relation   −α,−β Sα,β pnα,β (x) = −pn−1 (x) ,

−1 −α,−β kn−1

(cf. (5.1.5)

−1 < x < 1 , n ∈ N0 ,

(5.2.9)

α,β

where pn (x) is the normalized Jacobi polynomial of degree n (cf. (5.1.6) α,β and (5.1.11)) and where we again agree that p−1 (x) ≡ 0. Analogous considerations are possible for α, β ∈ (−1, 1) \ {0} and α + β = 0 or α + β = 1, as the following corollary shows. Corollary 5.2.4 Let α, β ∈ (−1, 1) \ {0} with κ0 := α + β ∈ {0, 1}. Then, for all n ∈ N0 and all x ∈ (−1, 1),   α,β ,n+κ (x) and Sα,β P,nα,β (x) = (−1)κ0 P 0

  α,β Sα,β pnα,β (x) = (−1)κ0 pn+κ0 (x) .

Proof Let κ0 = 0 and α > 0. Set , α = α − 1, such that , α + β = −1. Then, in virtue of (5.1.17) and (5.2.9), , α ,β

α ,β , α ,β , α ,β , α ,β Sα,β pnα,β = δ, α ,β pn − εn S, α ,β pn+1 = −δn pn−1 n S, α,β

1−α,−β

α ,β 1−α,−β + ε, . n pn , α ,β

α,β

Since, by the definition of δn and εn (cf. Exercise 5.1.2), we have εn , α ,β −α,−β and δn = εn−1 . We obtain, using (5.1.20), Sα,β pnα,β = pn−α,−β ,

−α,−β

= δn

n ∈ N0 .

(5.2.10)

In case κ0 = 0 and β > 0, we can use the just proved relation (5.2.10) to show that n

(−1)



Sα,β pnα,β



(−x) = cos(πα)v

β,α

(x)pnβ,α (x) +

sin(πα) π



1 −1

β,α

v β,α (y)pn (y) dy y−x

= pn−β,−α (x) = (−1)n pn−α,−β (−x) .

If κ0 = 1, then we set , α = α − 1 and get, again taking into account (5.1.17) and also (5.2.10), , α ,β

−α,−β

α,β , α,β , α,β −α,−β 1−α,−β Sα,β pnα,β = δ, pn − δn+1 α ,β pn − εn S, α,β pn+1 = εn n S,

1−α,β

pn+1

(5.1.20) =

−α,−β

−pn+1

The formulas for the monic polynomials can be proved by applying (5.2.8).

.

 

Finally we come up with the following proposition. Proposition 5.2.5 Let a, b ∈ R, β0 ∈ (0, 1), and a − ib = eiπβ0 . Moreover, choose integers λ and ν, such that α := λ + β0 and β := ν − β0 are in the interval (−1, 1),


and define (Au)(x) = a v α,β (x)u(x) +

b π



1

v α,β (y)u(y) dy , y−x

−1

−1 < x < 1 ,

(5.2.11)

if the integral exists in the Cauchy principal value sense. Then  α,β  −α,−β Apn (x) = (−1)λ pn−κ (x) ,

−1 < x < 1 , n ∈ N0 ,

(5.2.12)

where κ = −(λ + ν) = −(α + β). Proof Remark that, for λ = −1 and ν = 0, we have κ = 1 and a = cos(πβ0 ) = cos(πβ), b = − sin(πβ0 ) = sin(πβ), i.e., A = Sα,β , and (5.2.12) is equivalent to (5.2.9). If λ = −1 and ν = 1, then κ = 0, a = − cos(πβ), and b = − sin(πβ). Hence, Corollary 5.2.4 (case κ0 = 0) yields (5.2.12). For λ = ν = 0, we have again κ = 0 and further a = cos(πβ), b = sin(πβ), such that (5.2.12) again follows from Corollary 5.2.4 (case κ0 = 0). Finally, if λ = 0 and ν = 1, then κ = −1, a = − cos(πβ), and b = − sin(πβ). Now relation (5.2.12) follows from Corollary 5.2.4 (case κ0 = 1).   For the cases α = β = − 12 , i.e., a = 0, b = −1, λ = −1, ν = 0, and α = β = 12 , i.e., a = 0, b = −1, λ = 0, and ν = 1, we obtain the special relations 1 π



1

−1

dy Tn (y)  = Un−1 (x) y − x 1 − y2

1 π

and



1 −1

 1 − y 2 Un (y) dy = −Tn+1 (x) , y−x

(5.2.13)

respectively, where −1 < x < 1 and n ∈ N0 , and where the Chebyshev polynomials Tn (x) and Un (x) of first and second kind are defined in (5.1.21) and (5.1.22), respectively.     α,β

Since v α,β pn

L2−α,−β ,



n=0



−α,−β

and pn

n=0

are complete orthonormal systems in

from Proposition 5.2.5 together with Lemma 5.2.1 we can conclude the following corollary. Corollary 5.2.6 With the notations of Proposition 5.2.5, the operator (Bu) (x) = a u(x) +

b π



1

−1

u(y) dy , y−x

−1 < x < 1 ,

(5.2.14)

defined on integrable and differentiable functions u : (−1, 1) −→ C, can be uniquely extended to a linear and bounded operator B : L2−α,−β −→ L2−α,−β . Note that, for u ∈ L2−α,−β , the function values (Bu)(x) coincide almost everywhere on (−1, 1) with (cf. Lemma 5.2.1) b lim au(x) + π ε→+0



x−ε

−1

 +

1 x+ε



u(y) dy . y−x


Moreover, the operator B : L2−α,−β −→ L2−α,−β is invertible if and only if κ = 0. In case κ = 1, it is Fredholm with index 1 and dim ker B = 1, and in case κ = − 1, it is Fredholm with index −1 and dim ker B ∗ = 1. Since the multiplication operator v α,β I : L2α,β −→ L2−α,−β is an isometrical isomorphism, the operator A : L2α,β −→ L2−α,−β , defined by (5.2.11), has the same properties. Moreover, the formulas −a−ib = eiπ(1−β0) , −α = −(λ+1)+(1−β0 ), and −β = (1−ν)−(1−β0 ) −α,−β , n−α,−β = (−1)λ pn+κ holds true show together with (5.2.11) and (5.2.12) that Ap for n ∈ N0 , where 

 , (x) = av −α,−β (x)u(x) − b Au π



1 −1

v −α,−β (y)u(y) dy . y−x

(5.2.15)

2 , : L2 Consequently, the operator A −α,−β −→ Lα,β is a one-sided inverse (in case κ = 0 the inverse) operator of A : L2α,β −→ L2−α,−β defined by (5.2.11). Moreover, if we define, for r ∈ N,

# E D α,β L2α,β,r := u ∈ L2α,β : u, pj

$

α,β

= 0, j = 0, . . . , r − 1

and L2α,β,0 = L2α,β , then A : L2α,β,max{0,κ} −→ L2−α,−β,max{0,−κ} is invertible , Since A = Bv α,β I, a one-sided inverse of the operator with its inverse A. 2 B : L−α,−β −→ L2−α,−β defined by (5.2.14) is given by  (−1)  b v α,β (x) B u (x) = a u(x) − π



1 −1

v −α,−β (y)u(y) dy . y−x

(5.2.16)

Example 5.2.7 Let us define L2√

v γ ,δ ,(0)

#  = L2γ ,δ,(0) := f ∈ L2γ ,δ :

1 −1

$ f (x) dx = 0 .

Then, due to Corollary 5.2.6, in case κ = 1 the operator B : L2−α,−β,(0) −→ L2−α,−β defined by (5.2.14) is invertible and its inverse is given by (5.2.16). From (5.2.12) we can also conclude the mapping properties of the operator A in pairs of Sobolev spaces defined in Sect. 2.4.1. Indeed, if we define 2,s 2 L2,s α,β,r := Lα,β ∩ Lα,β,r , s ≥ 0, r ∈ N0 , then we have the following corollary. Corollary 5.2.8 For all s ≥ 0, the operator A : L2α,β −→ L2−α,−β , defined   2,s by (5.2.11) and considered in Corollary 5.2.6, belongs to L L2,s , L α,β −α,−β and is


2,s invertible as operator from L2,s α,β,max{0,κ} into L−α,−β,max{0,−κ} , where the operator , given by (5.2.15), is its inverse. Moreover, due to (5.2.12), A,

  2,s L L2,s α,β ,L−α,−β

A

=

1 : κ ∈ {0, 1} , 2s : κ = −1 ,

2 where in case of κ = 0, A : L2,s α,β −→ L−α,−β is an isometric isomorphism.

If we apply the operator A from (5.2.11) to a polynomial p ∈ Pn and if we take into account the algebraic accuracy of the Gaussian rule 

1 −1

q(x)v α,β (x) dx =

n 

α,β

α,β

λnk q(xnk ) ,

q ∈ P2n ,

(5.2.17)

k=1

we get .  n α,β b 1 v α,β (y) dy b  α,β p(xnk ) − p(x) . λnk p(x) + (Ap) (x) = av α,β (x) + α,β π −1 y − x π xnk − x k=1 (5.2.18) α,β

For the particular case p(x) = pn (x), this together with relation (5.2.12) yields av α,β (x) +

b π



n α,β −α,−β (−1)λ pn−κ (x) b  λnk v α,β (y) dy − = , α,β α,β y−x π pn (x) k=1 xnk − x

1 −1

and consequently, −α,−β

(Ap) (x) =

(−1)λ pn−κ

(x)p(x)

α,β

pn (x)

α,β α,β n b  λnk p(xnk ) + , α,β π k=1 xnk − x

as well as −α,−β (Ap) (xn−κ,j )

α,β α,β n b  λnk p(xnk ) = , α,β −α,−β π k=1 xnk − xn−κ,j α,β

Note that in case p = pn

(5.2.19)

α,β

from (5.2.18) we get (if x −→ xnj ) the formula −α,−β

α,β

λnj =

j = 1, . . . , n − κ .

α,β

π(−1)λpn−κ (xnj ) ,  α,β  α,β b pn (xnj )

j = 1, . . . , n .

(5.2.20)


If we define the (generalized Vandermonde) matrix Un α,β n by  α,β α,β Uα,β n = pj (xnk )

and the diagonal matrix



n−1, n



α,β α,β and α,β = δj k λα,β n = diag λn1 . . . λnn nk

j =0,k=1

n j,k=1

,

(5.2.21) α,β pi ,

i = 1, . . . , n, in case of respectively, then relation (5.2.19), applied to p = κ = 0 together with (5.2.12), leads to the matrix equation 

T U−α,−β n

(−1)λ b = π

%

&

1 α,β xnk



−α,−β xnj

n

 α,β T Un α,β . n

(5.2.22)

j,k=1

Moreover, if p ∈ Pn is written in different forms p=

n−1 

α,β

, αj pj

=

j =0

with the Fourier coefficients , αj =

n 

α,β

ξk nk

k=1

D

E

α,β p, pj α,β

=

n 

α,β

α,β

α,β

λnk ξk pj (xnk ) and the

k=1

α,β

function values ξk = p(xnk ), then ! "n  ! "n−1 T α,β ξ = ξk k=1 = Uα,β , α and , α= , αj j =0 = Uα,β n n n ξ .

(5.2.23)

Hence, 

Uα,β n

T

 α,β T α,β α,β α,β Un Uα,β . n n = I = Un n

Since, due to (5.2.12), again in case κ = 0, Ap = (−1)λ

(5.2.24) n−1 

−α,−β

, αj pj

, we

j =0

conclude  −α,−β ) (Ap) (xnj

n j =1

 T  T α,β α,β = (−1)λ U−α,−β , α = (−1)λ U−α,−β Un  n ξ . n n (5.2.25)

5.2.2 Weighted Spaces of Continuous Functions Now we start to prepare the proof of some mapping properties of the operator A in spaces of continuous functions introduced in Sect. 2.4.2. In [74, Lemma 4.3]


(cf. also [30]) there was proved that, for 0 ≤ γ , δ < 1, there is a constant c = c(x) such that    1 −γ ,−δ     1−x v (y) dy  −γ ,−δ    + 1 , −1 < x < 1 . (x) ln  ≤ cv  y−x 1+x −1 The following lemma improves this result in case of 0 < γ , δ < 1. Lemma 5.2.9 For 0 < γ , δ < 1, there is a constant c = c(x) such that    

1

−1

 v −γ ,−δ (y) dy  −γ ,−δ (x) ,  ≤ cv y−x

−1 < x < 1 .

Proof Consider the case −1 < x ≤ 0. For symmetry reasons, the case 0 ≤ x < 1 can be handled analogously. Set  J (x) :=

1 −1

v −γ ,−δ (y) dy = y−x

 Then, due to

2x+1 −1

2x+1 −1

v −γ ,−δ (y) dy + y−x



1 2x+1

v −γ ,−δ (y) dy =: J1 (x) + J2 (x) . y−x

dy = 0, y−x 

J1 (x) =

2x+1 −1

 =



2x+1 −1

 +

v −γ ,−δ (y) − v −γ ,−δ (x) dx y−x ! " (1 − y)−γ (1 + y)−δ − (1 + x)−δ dx y−x

2x+1 −1

!

" (1 − y)−γ − (1 − x)−γ (1 + x)−δ dy y−x

=: J11 (x) + J12 (x) , where, with the help of the substitution 1+y = (1+x)t and the continuous function

f : [0, 2] −→ R ,

⎧ δ ⎪ ⎨1−t : t =  1, t → 1−t ⎪ ⎩ δ : t = 1,


we have |J11 (x)| ≤ (1 + x)−δ



2

[2 − (1 + x)t ]−γ

0

-

= (1 + x)−δ

1

t −δ − 1 dt 1−t

[2 − (1 + x)t ]−γ t −δ f (t ) dt +



0

2

[2 − (1 + x)t ]−γ t −δ f (t ) dt

.

1

 ≤ (1 + x)−δ (1 − x)−γ

1

t −δ f (t ) dt +



0

2

(2 − t )−γ t −δ f (t ) dt

.

1

≤ c v −γ ,−δ (x) ,

taking into account −1 < x ≤ 0 as well as 2 − (1 + x)t ≥ 1 − x for 0 < t < 1 and 2 − (1 + x)t ≥ 2 − t for 1 < t < 2. With the substitution 1 − y = (1 − x)t, 1 − tγ g(t) = , t ∈ [0, 2] \ {1}, and g(1) = t γ , we get 1−t |J12 (x)| = v −γ,−δ (x)



2 1−x

2x − 1−x

t

−γ

g(t) dt ≤ v

−γ,−δ



2

(x)

t −γ g(t) dt = c v −γ,−δ (x) .

0

Finally, let us turn to J2 (x). Since 1 + y < y − x for 2x + 1 < y < 1 and since x < 2x + 1 for −1 < x ≤ 0, 

1

|J2 (x)| = J2 (x) ≤

(1 − y)−γ (1 + y)−δ−1 dy ≤

2x+1



1

=



1

(1 − y)−γ (1 + y)−δ−1 dy

x

(1 − y)−γ (1 + y)−δ−1 dy +



0

0

(1 + y)−δ−1 dy

x

= c1 + c2 (1 + x)−δ ≤ c v −γ,−δ (x) ,

 

which finishes the proof.

Lemma 5.2.10 (cf. [27], Lemma 2.4) Let 0 < η < 1 and 0 ≤ γ , δ < 1. Then, for −1 < x < 1, 

1

−1

v −γ ,−δ (y) dy  ≤ B(1 − δ, 1 − η)(1 + x)1−η + B(1 − γ , 1 − η)(1 − x)1−η v −γ ,−δ (x) , |y − x|η

 where B(α, β) =

1

t α−1 (1 − t)β−1 dt, α, β > 0, denotes the Beta function.

0

Proof We write 

1 −1

v −γ ,−δ (y) dy = |y − x|η



x −1

v −γ ,−δ (y) dy + (x − y)η

 x

1

v −γ ,−δ (y) dy =: J1 (x) + J2 (x) , (y − x)η


and get, using the substitution 1 + y = (1 + x)t, J1 (x) ≤ (1 − x)



−γ

x −1

(1 + y)−δ dy = (1 − x)−γ (1 + x)1−δ−η (x − y)η

 0

1

t −δ dt (1 − t)η

= (1 − x)−γ (1 + x)1−δ−η B(1 − δ, 1 − η) . Analogously, J2 (x) ≤ (1 − x)1−γ −η (1 + x)−δ B(1 − γ , 1 − η) ,  

and the proof is complete.

Lemma 5.2.11 (cf. [27], (2.7), (2.8)) Let 0 ≤ γ , δ < 1. There is a constant c = c(n, x) such that, for x ∈ (−1, 1) and n ∈ N \ {1}, 

x− 1+x 2 2n

−1

 +



1

x+ 1−x 2 2n

v −γ ,−δ (y) dy ≤ c v −γ ,−δ (x) ln n . |y − x|

 Proof With the substitution 1 + y = (1 + x) 1 − 

x− 1+x 2 2n

−1

v −γ ,−δ (y) dy ≤ (1 − x)−γ x−y



x− 1+x 2 2n

−1

1 2n2

(5.2.26)

 t, we get

(1 + y)−δ dy x−y

   t −δ dt 1 1−δ 1 −γ ,−δ   (x) 1 − 2 =v 2n 0 1− 1− 1 t 2 2n   Taking into account t −δ ≤ t −δ 1 − 1 − further estimated by c v −γ ,−δ (x)





1

⎣t −δ

0

1 2n2



t +1−

1 , 2n2

the last term can be

⎤ 1 − 2n1 2   ⎦ dt ≤ c v −γ ,−δ (x) ln n . + 1 − 1 − 2n12 t

Analogously, 

1 x+ 1−x 2 2n

v −γ ,−δ (y) dy ≤ c v −γ ,−δ (x) ln n . y−x

 


Additionally, let us prove the following lemma. Lemma 5.2.12 Suppose that 0 < η1 , η2 , 0 ≤ γ , δ < 1, and η1 +η2 +max {γ , δ} < 1. Then there exists a constant c = c(x1 , x2 ), such that 

1 −1

v −γ ,−δ (y) dy ≤ c, |y − x1 |η1 |y − x2 |η2

−1 ≤ x1 , x2 ≤ 1 .

Proof Note that, for α, β > −1 and −1 ≤ x1 < x2 ≤ 1, 

x2

 (x2 − y)α (y − x1 )β dy =

x1

x2 − x1 2

1+α+β 

1 −1

v α,β (t) dt .

(5.2.27)

We have 

(1 − y)−γ (1 + y)−δ dy ≤ η η −1 |y − x1 | 1 |y − x2 | 2 1



1

0

(1 − y)−γ dy + |y − x1 |η1 |y − x2 |η2



(1 + y)−δ dy η η −1 |y − x1 | 1 |y − x2 | 2 (5.2.28) 0

and, with η = η1 + η2 and the help of (5.2.27), we can estimate 

1 0

(1 − y)−γ dy |y − x1 |η1 |y − x2 |η2



⎧ ⎪ ⎪ ⎪ ⎨



1

−γ −η

(1 − y)

0

 ⎪ ⎪ ⎪ ⎩

x2 0

⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨⎧ ⎪ = ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩⎪ ⎩

(x2 − y)−γ dy + y η1 (x2 − y)η2

y



1

x2

dy

⎫ ⎪ : −1 ≤ x1 < x2 ≤ 0 , ⎪ ⎪ ⎬

⎪ (1 − y)−γ dy ⎪ : −1 ≤ x1 ≤ 0 < x2 ≤ 1 ⎪ ⎭ η (y − x2 )

⎫ ⎪ ⎪ : −1 ≤ x1 < x2 ≤ 0 , ⎪ ⎪ ⎪ 0 ⎪ ⎪ ⎪ ⎫  ⎬  x 1−γ −η 1 ⎪ 2 −γ −η2 ,−η1 ⎪ v (t) dt ⎪ ⎬ ⎪ 2 −1 ⎪ : −1 ≤ x1 ≤ 0 < x2 ≤ 1 ⎪ ⎪  1−γ −η  1 ⎪ ⎪ ⎪ 1 − x2 ⎪ ⎪ −γ,−η ⎪ ⎪ ⎭ + v (t) dt ⎭ 2 −1 

1

(1 − y)−γ y −η dy


as well as, in case 0 ≤ x1 < x2 ≤ 1, 

1 0

(1 − y)−γ dy |y − x1 |η1 |y − x2 |η2 

x1

≤ 0

=

(x1 − y)−γ dy + (x1 − y)η

 x 1−γ −η  1

2

1

−1

v



x2 x1

−γ −η,0

 +

(x2 − y)−γ dy + (y − x1 )η1 (x2 − y)η2 

(t) dt +

1 − x2 2

x2 − x1 2

1−γ −η 

1 −1



1−γ −η 

1 x2 1

−1

(1 − y)−γ dy (y − x2 )η v −γ −η2 ,−η1 (t) dt

v −γ ,−η (t) dt ,

and, since the second integral on the right-hand side of (5.2.28) can be estimated analogously, the lemma is proved.   ∞ for which ξ0 In what follows, if there occur a number sequence (ξn ) = (ξn )n=0 and/or ξ1 are equal to zero or not defined, then we set ξ0 = 1 and/or ξ1 = 1. ∞ Hence, for example, we write briefly (ln n) = (ln n)n=0 for (1, 1, ln 2, ln 3 . . .) or ∞ (ξn ln n) = (ξn ln n)n=0 for (ξ0 , ξ1 , ξ2 ln 2, ξ3 ln 3, . . .). ∞ Moreover, for a positive number ρ and a sequence (ξn )n=0 of positive numbers, −ρ we will shortly write ξn = O(n ), if there is a positive constant c = c(n) such that ξn ≤ c n−ρ for all n ∈ N. Let

α = α+ − α− ,

β = β+ − β−

with

0 ≤ α ± < 1,

0 ≤ β± < 1 , (5.2.29)

where α and β are defined in Proposition 5.2.5. In case of α ± = max {0, ±α} and β ± = max {0, ±β}, the following assertions in this section were proved in [74, Prop. 4.4–Prop. 4.10]. Proposition 5.2.13 If f ∈ Cα − ,β − with ξn = O(n−ρ ) for some ρ ∈ (0, 1) and , defined in (5.2.15), we have Af , ∈ Cα + ,β + and m0 ∈ N, then, for the operator A ξ

    f α − ,β − ,ξ Af ,  + + ≤ c f α − ,β − ,∞ ln n + α ,β ,∞ nm0

(5.2.30)

for n = 2, 3, . . . , where c = c(n, f ). If p ∈ Pn , then   Ap ,  + + ≤ c pα − ,β − ,∞ ln n for n = 2, 3, . . . α ,β ,∞ with c = c(n, p).

(5.2.31)


  Proof Since, in virtue of the assumption ξ = O n−ρ , the inequality ξ f α − ,β − ,(n−ρ ) ≤ cf α − ,β − ,ξ is valid for all f ∈ Cα − ,β − (cf. Proposition 2.4.28(d)) it suffices to prove the proposition for ξn = n−ρ . Note that − − + + v −α,−β (x) = v α ,β (x)v −α ,−β (x). Hence,  − − + +   b f (x)v α ,β (x) 1 v −α ,−β (y) dy −α,−β , (x)f (x) − Af (x) = a v π y−x −1  − − − − b 1 v α ,β (y)f (y) − v α ,β (x)f (x) −α + ,−β + v − (y) dy . π −1 y−x (5.2.32) First, we have |av −α,−β (x)f (x)| ≤ |a| f α − ,β − ,∞ v −α

+ ,−β +

(5.2.33)

(x) .

Further, with the help of Lemma 5.2.9 we can estimate, for −1 < x < 1,       b f (x)v α − ,β − (x)  1 v −α + ,−β + (y) dy   1−x + +   +1 .   ≤ cf α − ,β − ,∞ v −α ,−β (x) ln   π y −x 1+x −1

(5.2.34) Let n > 1 and −1 < x < 1. We split the last integral in (5.2.32) into two terms, %

x− 1+x2 2n

−1

 +



− ,β −

x+ 1−x2 2n

 +

&

1

x+ 1−x2 2n



− ,β −

x− 1+x2 2n

(y)f (y) − v α y−x

(y)f (y) − v α y −x

− ,β −

− ,β −

(x)f (x)

v −α

+ ,−β +

(y) dy

(5.2.35) (x)f (x)

v

−α + ,−β +

(y) dy .

Using (5.2.26), we estimate the first term in (5.2.35) by % 2f α − ,β − ,∞

x− 1+x 2 2n

−1

 +

&

1

v −α

x+ 1−x 2n2

+ ,−β +

(y) dy + + ≤ c f α − ,β − ,∞ v −α ,−β (x) ln n . |y − x|

For the second term in (5.2.35), we take into account v −α

+ ,−β +

(y) ≤ c v −α

+ ,−β +

(x) ,

x−

1+x 1−x ≤y≤x+ , 2n2 2n2


     − −  as well as v α ,β f  0,θ ≤ c f α − ,β − ,ξ for some θ ∈ 0, ρ2 (see Lemma 2.4.32) C and get 

x+ 1−x 2 2n

x− 1+x 2 2n

  − −  v α ,β (y)f (y) − v α − ,β − (x)f (x)   −α + ,−β +  (y) dy v    y−x

≤ cf −α − ,−β − ,ξ v −α

+ ,−β +

 (x)

x+ 1−x 2 2n

x− 1+x 2n2

|y − x|θ −1 dy ≤ c

f α − ,β − ,ξ n2θ

v −α

+ ,−β +

(x) .

Thus, together with (5.2.33) and (5.2.34) we have proved that    .  + +  1−x f α − ,β − ,ξ     α ,β   , f  + ≤ c ln (x) Af (x) ln n + . v  α − ,β − ,∞  1+x n2θ (5.2.36) α − ,β −

Let pn ∈ Pn with f − pn α − ,β − ,∞ = En instead of f to obtain

(f ) and apply (5.2.36) with f − pn

   .  + +  1−x f α − ,β − ,ξ f − pn α − ,β − ,ξ     α ,β ln  + ,(f − pn ) (x) ≤ c (x) A ln n + v  1+x nρ n2θ ≤c

f α − ,β − ,ξ



n2θ

   1−x  , 1 + ln 1+x

where we used 2θ < ρ and f − pn α − ,β − ,ξ ≤ f α − ,β − ,ξ , which is a α − ,β −

consequence of the relations Em − ,β −

α Em

α − ,β −

(f − pn ) = Em

(f ) for m ≥ n and

(f − pn ) ≤ f − pn α − ,β − ,∞ ≤ f α − ,β − ,ξ n−ρ ≤ f α − ,β − ,ξ m−ρ

for m < n . Consequently,   f α − ,β − ,ξ A(f , − pn ) + θ + θ ≤ c , α + 2 ,β + 2 n2θ   c f α − ,β − ,ξ , 2j+1 n − Ap , 2j n  + θ + θ ≤ which implies Ap α + 2 ,β + 2 n2θ in the sense of convergence in Cα + + θ ,β + + θ , 2

, − Ap , n= Af

∞  



1 4θ

j and thus,

2

 , 2j+1 n − Ap , 2j n . Ap

j =0

Since, due to Proposition 3.2.21, pα + ,β + ,∞ ≤ c

max

|x|≤1−n−2

  + +   p(x)v α ,β (x)

∀ p ∈ Pn , ∀ n ∈ N

(5.2.37)


, 2j+1 n ∈ P2j+1 n+|κ| , we conclude and, due to Corollary 5.2.6, Ap  A ,p

2j +1 n

 ,p2j n  + + ≤ c −A α ,β ,∞

max

|x|≤1−(2j +1 n+|κ|)

−2

!     "  , ,p2j n (x) v α + ,β + (x)  Ap2j +1 n (x) − A

θ    ,p2j +1 n − A ,p2j n  + θ + θ ≤ c 2j +1 n + |κ| A α + ,β + 2

≤c

f α − ,β − ,ξ 2(j +1)θ nθ f α − ,β − ,ξ = 2θ c 22j θ n2θ nθ

2



1 2θ

j .

This implies   f α − ,β − ,ξ A(f , − pn ) + + ≤ c . α ,β ,∞ nθ

(5.2.38)

, n is a polynomial, it follows Af , ∈ Cα + ,β + . Relation (5.2.36), applied to Since Ap ρ and to f = pm , yields n = mk with k ≥ 2θ   A ,pm 

α+ ,β + ,∞

≤c

max

|x|≤1−(m+|κ|)−2

  + +  ,    Apm (x)v α ,β (x)

 ≤ c pm α− ,β − ,∞ ln m +

pm α− ,β − ,ξ mρ

 ≤ c pm α− ,β − ,∞ ln m ,

, m ∈ Pm+|κ| , and where we took into account (5.2.37), Ap   − − pm α − ,β − ,ξ = sup nρ Enα ,β (pm ) : 0 ≤ n < m − ,β −

ρ

= n0 Enα0

(pm ) ≤ mρ pm α − ,β − ,∞

(5.2.39)

for some n0 ∈ {0, 1, . . . , m − 1}. Together with (5.2.38) and − ,β −

α pm α − ,β − ,∞ ≤ Em

this leads to   Af , 

α + ,β + ,∞

(f ) + f α − ,β − ,∞ ≤ 2 f α − ,β − ,∞ ,

  f α − ,β − ,ξ ≤ c f α − ,β − ,∞ ln n + . nθ

Applying this with nm0 k instead of n, where k ≥ θ −1 , gives assertion (5.2.30). The second assertion follows by applying (5.2.30) to p ∈ Pn , ξn = n−1 , and m0 = 1 by taking into account pα − ,β − ,(n−1 ) ≤ npα − ,β − ,∞ (cf. (5.2.39)).   −ρ Proposition 5.2.14 Let ρ, τ > 0 and assume that c−1 n−τ ≤   ξn ≤ c n and ξ (ξn ln n) , ξn ≤ c ξn+1 for all n ∈ N, where 0 < c = c(n). Then A ∈ L Cα − ,β − , Cα + ,β + . In case κ ≤ 0, the assumption ξn ≤ c ξn+1 can be omitted.


ξ

Proof Let f ∈ Cα − ,β − , n ≥ 2, and pn ∈ Pn−κ such that En−κ (f ) = , n ∈ Pn , we get from (5.2.30), for m0 ≥ 2τ , f − pn α − ,β − ,∞ . Then, since Ap Enα

+ ,β +

  , ) ≤ A(f , − pn ) + + (Af α ,β ,∞   f − pn α − ,β − ,ξ α − ,β − ≤ c En−κ (f ) ln n + nm0

(5.2.40)

≤ c f α − ,β − ,ξ ξn ln n , where we used, in case κ ≤ 0, α − ,β −

En−κ (f ) ≤ Enα

− ,β −

(f ) ≤ f α − ,β − ,ξ ξn

and, in case κ > 0, α − ,β −

En−κ (f ) ≤ f α − ,β − ,ξ ξn−κ ≤ c f α − ,β − ,ξ ξn , as well as pn α − ,β − ,ξ ≤ c nτ pn α − ,β − ,∞ (cf. (5.2.39)), which implies pn α − ,β − ,∞ nm0



 c nτ  α − ,β − f  E (f ) + − − α ,β ,∞ n−κ nm0

c f α − ,β − ,ξ ≤ c ξn f α − ,β − ,ξ . nτ   ,  + + Since (5.2.40) means nothing else than Af ≤ c f α − ,β − ,ξ for all α ,β ,(ξn ln n) α − ,β −

≤ c En−κ (f ) +

ξ

f ∈ Cα − ,β − , where c = c(f ), the proposition is proved.

 

Remark 5.2.15 Since the operator A defined in (5.2.11) is of the same structure as , given by (5.2.15) (replace a, α, β, and κ by −a, −α, −β, and −κ, the operator A respectively), Propositions 5.2.13 and 5.2.14 can be applied also to A. For example,  ξ (ξ ln n) under the conditions of Proposition 5.2.14, we have A ∈ L Cα + ,β + , Cα −n ,β − , where in case κ ≥ 0, the assumption ξn ≤ c ξn+1 can be droped.   ξ Proposition 5.2.16 If K ∈ L Cα + ,β + , Cα − ,β − with ξn = O(n−ρ ) for some , : Cα + ,β + −→ Cα + ,β + is compact (cf. Sect. 2.3). ρ ∈ (0, 1), then the operator AK   η Proof In view of Proposition 2.4.28(d), we have K ∈ L Cα + ,β + , Cα − ,β − , where   , ∈ L Cα + ,β + , C(η+n ln+n) , and ηn = n−ρ . Due to Proposition 5.2.14, it follows AK α ,β it remains to apply Proposition 2.4.28(b).  


The results of this section can be generalized to the case of variable coefficients in the singular integral operator, i.e., to operators of the form

  (Au)(x) = a(x) σ(x) u(x) + (b(x)/π) ∫_{−1}^{1} σ_0(y) u(y)/(y − x) dy

with specifically chosen generalized Jacobi weights σ(x) and σ_0(x), and to the respective operators Ã. For this, we refer to [74] (cf. also Sect. 5.2.3).

Finally, we briefly discuss the behaviour of the Cauchy singular integral operator with respect to Hölder continuous functions. For this, we recall the definition of the space C^{0,λ}[a, b] (cf. (2.4.9)) of functions being Hölder continuous on the interval [a, b] with exponent λ ∈ (0, 1], and denote by χ_{[a,b]} the characteristic function of the interval [a, b]. For the following proposition we refer to [171, Theorem in §19] and [164, Chapter II, Section 6]. By C^{0,λ}_0 we denote the closed subspace of C^{0,λ} = C^{0,λ}[−1, 1] consisting of all functions from C^{0,λ} vanishing at both endpoints ±1.

Proposition 5.2.17 Let −1 ≤ a < b ≤ 1 and 0 < λ < 1. If f ∈ C^{0,λ}[−1, 1], then

  Sf ∈ C^{0,λ}[a, b]   if −1 < a < b < 1,
  Sf ∈ C^{0,λ}[−1, b]  if −1 < b < 1 and f(−1) = 0,
  Sf ∈ C^{0,λ}[a, 1]   if −1 < a < 1 and f(1) = 0.

Moreover, the operator S : C^{0,λ}_0 → C^{0,λ} is bounded.
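Proposition 5.2.17 can be observed numerically. The following minimal Python sketch (an illustration only; the test function f(y) = (1 − y²)^{3/5} and the grid are assumptions made here) evaluates Sf through SciPy's Cauchy principal value rule and estimates the Hölder quotient of Sf on a grid in [−1, 1]; it stays bounded because f vanishes at both endpoints.

```python
import numpy as np
from scipy.integrate import quad

lam = 0.6                                   # Hoelder exponent of the test function
f = lambda y: (1.0 - y * y) ** lam          # f in C^{0,lam} with f(-1) = f(1) = 0

def Sf(x):
    # (Sf)(x) = (1/pi) p.v. int_{-1}^{1} f(y)/(y-x) dy, via the 'cauchy' weight of quad
    pv, _ = quad(f, -1.0, 1.0, weight='cauchy', wvar=x)
    return pv / np.pi

xs = np.linspace(-0.999, 0.999, 400)
vals = np.array([Sf(x) for x in xs])

# Hoelder quotient |Sf(x1)-Sf(x2)| / |x1-x2|^lam over neighbouring grid points
quot = np.abs(np.diff(vals)) / np.diff(xs) ** lam
print("max Hoelder quotient of Sf on the grid:", quot.max())
```

The printed quotient approximates the C^{0,λ} seminorm of Sf on the grid and remains of moderate size, in line with the boundedness of S : C^{0,λ}_0 → C^{0,λ}.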

5.2.3 On the Case of Variable Coefficients

For x ∈ (−1, 1), we consider Cauchy singular integral equations of the form

  a(x) v(x) + (1/π) ∫_{−1}^{1} b(y) v(y)/(y − x) dy + ∫_{−1}^{1} K(x, y) v(y) dy = f(x),   (5.2.41)

or

  a(x) v(x) + (b(x)/π) ∫_{−1}^{1} v(y)/(y − x) dy + ∫_{−1}^{1} K_0(x, y) v(y) dy = f_0(x),   (5.2.42)

for an unknown function v : (−1, 1) → ℂ. Here a, b : [−1, 1] → ℝ, f, f_0 : (−1, 1) → ℂ, and

  K, K_0 : (−1, 1)² → ℝ   or   K, K_0 : (−1, 1)² \ {(x, y) : x = y} → ℝ


are given functions. Although we are here mainly interested in mapping properties of the singular integral operators given by the first two terms of the sum on the left-hand side of (5.2.41) or (5.2.42), we can have in mind the above two “complete” equations by considering the third term of the sum on the respective left-hand side as a compact perturbation. We assume that there exists a function

  c(x) = c̃(x) ∏_{j=0}^{N+1} |x − x_j^*|^{−γ_j},   (5.2.43)

where −1 = x_{N+1}^* < … < x_0^* = 1, γ_j > −1, j = 0, 1, …, N+1, N ∈ ℕ_0, and c̃(x) > 0 for x ∈ [−1, 1], such that B(x) := b(x) c(x) is a polynomial. Furthermore, in all what follows in this section as well as in the following Sect. 5.2.4 we suppose that

(A1) a, b, c̃ ∈ C^{0,η}, 0 < η < 1,
(A2) γ_0 ± α > −1, γ_{N+1} ± β > −1,
(A3) B(x̃) = 0 and x̃ ∈ [−1, 1] imply b(x̃) = 0,
(A4) r(x) := √( [a(x)]² + [b(x)]² ) > 0 for all x ∈ [−1, 1].

Here, the real numbers α and β are defined by

  α = λ + g(1) ∈ (−1, 1)   and   β = ν − g(−1) ∈ (−1, 1),   (5.2.44)

where λ and ν are integers and g : [−1, 1] → ℝ is a continuous function satisfying

  a(x) − i b(x) = r(x) e^{iπ g(x)}   for x ∈ [−1, 1].

For x ∈ (−1, 1) we define the weight functions

  σ(x) = ( (1 − x)^λ (1 + x)^ν / r(x) ) exp{ ∫_{−1}^{1} g(y)/(y − x) dy },   (5.2.45)

  μ(x) = ( (1 − x)^{−λ} (1 + x)^{−ν} / r(x) ) exp{ − ∫_{−1}^{1} g(y)/(y − x) dy },   (5.2.46)

and

  σ_0(x) = σ(x)/c(x),   μ_0(x) = μ(x)/c(x).   (5.2.47)

Moreover, set κ = −(λ + ν).
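For readers who want to see these weights concretely, the following minimal Python sketch (an illustration only; the coefficient pair a(x) = x, b(x) ≡ 1 and the choice λ = ν = 0 are assumptions made for this example, and no additional normalization of the exponent in (5.2.45) is assumed) evaluates the principal value integral in (5.2.45) with SciPy and compares σ(x) with the Jacobi-type factor (1 − x)^α (1 + x)^β. The quotient stays bounded and positive near ±1, in accordance with Lemma 5.2.19 below.

```python
import numpy as np
from scipy.integrate import quad

a = lambda x: x                     # example coefficients (assumption for this sketch)
b = lambda x: 1.0
r = lambda x: np.hypot(a(x), b(x))                 # r = sqrt(a^2 + b^2), cf. (A4)
g = lambda x: np.arctan2(-b(x), a(x)) / np.pi      # a - i b = r e^{i pi g}, continuous on [-1,1]

lam, nu = 0, 0                                      # integers lambda, nu
alpha, beta = lam + g(1.0), nu - g(-1.0)            # cf. (5.2.44)

def sigma(x):
    # principal value integral  p.v. int_{-1}^{1} g(y)/(y-x) dy  via the 'cauchy' weight
    pv, _ = quad(g, -1.0, 1.0, weight='cauchy', wvar=x)
    return (1.0 - x) ** lam * (1.0 + x) ** nu / r(x) * np.exp(pv)

for x in [-0.999, -0.9, 0.0, 0.9, 0.999]:
    w = sigma(x) / ((1.0 - x) ** alpha * (1.0 + x) ** beta)
    print(f"x = {x:7.3f}   sigma = {sigma(x):10.4e}   sigma / v^(alpha,beta) = {w:8.4f}")
```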


Remark 5.2.18 If we replace b(x) by −b(x), then we have successively the following changes in the definitions (starting after assumptions (A1)–(A4)):

  g(x) → −g(x),   λ → −λ,   ν → −ν,   κ → −κ,   α → −α,   β → −β,
  σ(x) → μ(x),   μ(x) → σ(x),   σ_0(x) → μ_0(x),   μ_0(x) → σ_0(x),
  Ã_0 → A_0,   Ã → A   (cf. (5.2.55) and (5.2.56)).

Lemma 5.2.19 If a, b ∈ C^{0,η} for some η ∈ (0, 1), then the (weight) function σ(x) admits the representation

  σ(x) = (1 − x)^α (1 + x)^β w(x)   (5.2.48)

with w ∈ C^{0,η} and w(x) > 0 for all x ∈ [−1, 1]. As a consequence of the previous remark, the same holds true for μ(x) if α and β are replaced by −α and −β, respectively.

Proof From the definition (5.2.45) and the relation

  ∫_{−1}^{1} (1 ± y) g(±1)/(y − x) dy = (1 ± x) g(±1) ln( (1 − x)/(1 + x) ) ± 2 g(±1)

it follows that σ(x) is equal to

  ( (1 − x)^λ (1 + x)^ν / r(x) ) ( (1 − x)/(1 + x) )^{ [(1+x)g(1)+(1−x)g(−1)]/2 } d_0 exp{ ∫_{−1}^{1} [ g(y) − ((1+y)g(1)+(1−y)g(−1))/2 ] / (y − x) dy }

  = ( (1 − x)^α (1 + x)^β / r(x) ) [ (1 − x)^{1−x} (1 + x)^{1+x} ]^{ (g(−1)−g(1))/2 } d_0 exp{ ∫_{−1}^{1} [ g(y) − ((1+y)g(1)+(1−y)g(−1))/2 ] / (y − x) dy },

where d_0 = e^{g(1)−g(−1)}. Now the assertion is a consequence of Proposition 5.2.17 and Exercise 2.4.21. ∎

In virtue of Lemma 5.2.19, σ_0(x) and μ_0(x) are generalized Jacobi weights, which can be written in the form

  σ_0(x) = (1 − x)^{γ_0+α} (1 + x)^{γ_{N+1}+β} ∏_{j=1}^{N} |x − x_j^*|^{γ_j} · w(x)/c̃(x)   (5.2.49)

and

  μ_0(x) = (1 − x)^{γ_0−α} (1 + x)^{γ_{N+1}−β} ∏_{j=1}^{N} |x − x_j^*|^{γ_j} · 1/( [r(x)]² c̃(x) w(x) ),   (5.2.50)

respectively, where w ∈ C^{0,η} and w(x) > 0 for x ∈ [−1, 1].

Now, in case of Eq. (5.2.41), we shall seek the solution in the form v = σu and define K(x, y) = K_0(x, y) c(y), while in case of Eq. (5.2.42), we set v = σ_0 u; moreover, in case of Eq. (5.2.42), we define K(x, y) = c(x) K_0(x, y) and f(x) = c(x) f_0(x). Instead of (5.2.41) and (5.2.42), we investigate the equivalent equations

  a(x) σ(x) u(x) + (1/π) ∫_{−1}^{1} B(y) σ_0(y) u(y)/(y − x) dy + ∫_{−1}^{1} K(x, y) σ_0(y) u(y) dy = f(x)   (5.2.51)

and

  a(x) σ(x) u(x) + (B(x)/π) ∫_{−1}^{1} σ_0(y) u(y)/(y − x) dy + ∫_{−1}^{1} K(x, y) σ_0(y) u(y) dy = f(x),   (5.2.52)

respectively. Let us write these equations in short form

  A u + K u = f   (5.2.53)

and

  A_0 u + K u = f,   (5.2.54)

where

  A = aσI + S Bσ_0 I = aσI + S bσ I   and   A_0 = aσI + B S σ_0 I,   (5.2.55)

as well as

  (K u)(x) = ∫_{−1}^{1} K(x, t) σ_0(t) u(t) dt.

For κ ≥ 0, we define the closed subspaces L²_{√σ_0,B,κ} and L²_{√μ_0,κ} of L²_{√σ_0} and L²_{√μ_0} by (cf. the definition of L²_{α,β,κ} on page 250)

  L²_{√σ_0,B,κ} = { f ∈ L²_{√σ_0} : ⟨f, Bp⟩_{σ_0} = 0 ∀ p ∈ P_κ }   with   ⟨f, g⟩_{σ_0} = ∫_{−1}^{1} f(x) g(x) σ_0(x) dx

and

  L²_{√μ_0,κ} = { f ∈ L²_{√μ_0} : ⟨f, p⟩_{μ_0} = 0 ∀ p ∈ P_κ }   with   ⟨f, g⟩_{μ_0} = ∫_{−1}^{1} f(x) g(x) μ_0(x) dx,

respectively. Additionally to the operators A and A_0 used in (5.2.53) and (5.2.54), respectively, we define the operators

  Ã = aμI − S Bμ_0 I = aμI − S bμ I   and   Ã_0 = aμI − B S μ_0 I.   (5.2.56)

The following Propositions 5.2.21–5.2.26 summarize some results the proofs of which the interested reader can find, for example, in [194, Theorem 9.9, Remark 9.10, Theorems 9.12, 9.14, 9.17] and [27, Remark 3.10]. But, at first the following remark.

Remark 5.2.20 Concerning the assertions formulated in the remaining part of this section, it is important to note that, for the properties of the operators A and Ã, we only need the assumptions on the functions a(x) and b(x) in (A1) and (A4).

With the integers λ and ν as well as the function g(x), defined above, we consider the function

  X(z) = (1 − z)^λ (1 + z)^ν exp{ ∫_{−1}^{1} g(t)/(t − z) dt },   z ∈ ℂ \ [−1, 1].   (5.2.57)

Proposition 5.2.21 Let p(x) = x^n + … be a monic polynomial of degree n. Then

  q(x) := (A p)(x) = (−1)^λ x^{n−κ} + …   (5.2.58)

is a polynomial of degree n − κ, where (A p)(x) ≡ 0 if n − κ < 0, and q(x) is defined by

  lim_{|z|→∞} [ X(z) p(z) − q(z) ] = 0.   (5.2.59)

Furthermore,

  (Ã p)(x) = (−1)^λ x^{n+κ} + …,   (5.2.60)

where (Ã p)(x) ≡ 0 if n + κ < 0. Moreover, for every polynomial p(x) we have

  Ã A p = p   if κ ≤ 0,   and   A Ã p = p   if κ ≥ 0.   (5.2.61)


Remark 5.2.22 From the proof of (5.2.59) (cf. the proof of [194, Theorem 9.9]) it is seen that, for z ∈ ℂ \ [−1, 1], the equality

  X(z) p(z) − q(z) = −(1/π) ∫_{−1}^{1} b(t) p(t) σ(t)/(t − z) dt   (5.2.62)

holds true with the polynomials p(z) and q(z) from Proposition 5.2.21. Note that, due to Remark 5.2.18, assertion (5.2.60) of Proposition 5.2.21 is a consequence of (5.2.58). Furthermore, due to the representation (5.2.48) of σ(x), we see that the equalities L²_{√σ} = L²_{α,β} and L²_{√μ} = L²_{−α,−β} are valid in the sense of equivalent norms. Since the operators A : L²_{α,β} → L²_{−α,−β} and Ã : L²_{−α,−β} → L²_{α,β} are bounded (see Lemma 5.2.1) and since the set of all polynomials is dense in both spaces L²_{α,β} and L²_{−α,−β}, from relation (5.2.61) we conclude

  Ã A = I_{L²_{α,β}} = I_{L²_{√σ}}   if κ ≤ 0,   and   A Ã = I_{L²_{−α,−β}} = I_{L²_{√μ}}   if κ ≥ 0.   (5.2.63)
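Relation (5.2.62) also offers a direct way to compute q = Ap numerically. The following Python sketch (an illustration only; the constant coefficients a ≡ 0, b ≡ 1 — for which r ≡ 1, g ≡ −1/2, λ = ν = κ = 0 and σ(x) = √((1+x)/(1−x)) — are assumptions chosen for this example) evaluates X(z)p(z) + (1/π)∫ p(t)σ(t)/(t−z) dt for a monic cubic p and checks that the result behaves like a polynomial of degree n − κ = 3 with leading coefficient (−1)^λ = 1, as stated in Proposition 5.2.21.

```python
import numpy as np
from scipy.integrate import quad

# constant-coefficient example (an assumption): a = 0, b = 1, hence g = -1/2, lambda = nu = 0
g_const = -0.5
sigma = lambda t: np.sqrt((1.0 + t) / (1.0 - t))   # sigma = v^{-1/2, 1/2}, cf. Lemma 5.2.19
p = np.poly1d([1.0, 0.0, 0.0, 0.0])                 # monic p(x) = x^3, so n = 3

def X(z):
    # X(z) = exp( int_{-1}^{1} g(t)/(t - z) dt ), cf. (5.2.57) with lambda = nu = 0
    val, _ = quad(lambda t: g_const / (t - z), -1.0, 1.0)
    return np.exp(val)

def q(z):
    # q(z) = X(z) p(z) + (1/pi) int_{-1}^{1} b(t) p(t) sigma(t)/(t - z) dt, cf. (5.2.62)
    val, _ = quad(lambda t: p(t) * sigma(t) / (t - z), -1.0, 1.0)
    return X(z) * p(z) + val / np.pi

zs = [1.5, 2.0, 3.0, 5.0]
coeffs = np.polyfit(zs, [q(z) for z in zs], 3)      # cubic through four values of q
print("leading coefficient:", coeffs[0])            # close to (-1)^lambda = 1
print("deviation from a cubic at z = 8:", np.polyval(coeffs, 8.0) - q(8.0))  # close to 0
```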

Proposition 5.2.23 If p_n(x) is a polynomial of degree n such that

  ∫_{−1}^{1} x^j p_n(x) b(x) σ(x) dx = 0,   j = 0, 1, …, n − 1,

then

  ∫_{−1}^{1} x^k (A p_n)(x) b(x) μ(x) dx = 0,   k = 0, 1, …, n − κ − 1.

Corollary 5.2.24 Since B(x) is a polynomial and since

  A_0 p = A p + (SB − BS) σ_0 p   as well as   Ã_0 p = Ã p + (SB − BS) μ_0 p,

by (5.2.58) and (5.2.60) we see that, for a polynomial p(x) of degree n, (A_0 p)(x) and (Ã_0 p)(x) are polynomials of degree ≤ max{n − κ, deg B − 1} and of degree ≤ max{n + κ, deg B − 1}, respectively.

Proposition 5.2.25 If p_n^{σ_0}(x) is an orthogonal polynomial of degree n with respect to the weight function σ_0(x) and if n − κ > deg B − 1, then

  q_n^{μ_0}(x) := (A_0 p_n^{σ_0})(x)

is an orthogonal polynomial of degree n − κ with respect to the weight μ_0(x), where ‖q_n^{μ_0}‖_{L²_{√μ_0}} = ‖p_n^{σ_0}‖_{L²_{√σ_0}}.
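Proposition 5.2.25 can also be observed numerically. The sketch below (again for the illustrative constant-coefficient case a ≡ 0, b ≡ 1, c ≡ 1, so that B ≡ 1, κ = 0, σ_0 = σ = √((1+x)/(1−x)) and μ_0 = μ = √((1−x)/(1+x)); these choices are assumptions of the example) applies A_0 to the third-kind Chebyshev polynomial V_2, which is orthogonal with respect to σ_0, and checks orthogonality with respect to μ_0 as well as the equality of norms. The printed quantities are only as accurate as the underlying quadratures.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import roots_jacobi

sigma = lambda x: np.sqrt((1.0 + x) / (1.0 - x))
V2    = lambda x: 4.0 * x * x - 2.0 * x - 1.0        # degree-2 orthogonal polynomial for sigma

def A0V2(x):
    # (A_0 V_2)(x) = (1/pi) p.v. int_{-1}^{1} sigma(y) V_2(y)/(y - x) dy
    pv, _ = quad(lambda y: sigma(y) * V2(y), -1.0, 1.0, weight='cauchy', wvar=x)
    return pv / np.pi

# Gauss-Jacobi rules for mu = (1-x)^{1/2}(1+x)^{-1/2} and sigma = (1-x)^{-1/2}(1+x)^{1/2}
xm, wm = roots_jacobi(40, 0.5, -0.5)
xs, ws = roots_jacobi(40, -0.5, 0.5)

qvals = np.array([A0V2(x) for x in xm])
for j in range(2):
    print(f"<A0 V2, x^{j}>_mu ~", np.sum(wm * xm**j * qvals))     # both close to 0
print("norm w.r.t. mu   :", np.sqrt(np.sum(wm * qvals * qvals)))
print("norm w.r.t. sigma:", np.sqrt(np.sum(ws * V2(xs) ** 2)))    # the two norms agree
```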


Note that the last proposition implies that the operator

  A_0 : L²_{√σ_0} → L²_{√μ_0}   (5.2.64)

is a bounded one. Furthermore, we have the following details.

Proposition 5.2.26 The operator defined in (5.2.64) has the properties:

(a) dim N(A_0) = max{0, κ}, where N(A_0) = { u ∈ L²_{√σ_0} : A_0 u = 0 },
(b) f ∈ R(A_0) = { A_0 u : u ∈ L²_{√σ_0} } if and only if
      ∫_{−1}^{1} x^j f(x) μ_0(x) dx = 0,   j = 0, 1, …, −κ − 1,
    i.e., R(A_0) = L²_{√μ_0, max{0,−κ}},
(c) A_0^* = aμI − S bμ I = Ã,
(d) dim N(A_0^*) = max{0, −κ},
(e) Ã_0 A_0 = I_{L²_{√σ_0}} if κ ≤ 0 and A_0 Ã_0 = I_{L²_{√μ_0}} if κ ≥ 0,
(f) N(A_0) = span{ B M_j : j = 0, 1, …, κ − 1 } if κ > 0, where M_j(x) = x^j.

Corollary 5.2.27 In case κ = 0, the operator A_0 ∈ L( L²_{√σ_0}, L²_{√μ_0} ) is invertible, where A_0^{−1} = Ã_0 ∈ L( L²_{√μ_0}, L²_{√σ_0} ). If κ > 0, then the operator A_0 ∈ L( L²_{√σ_0,B,κ}, L²_{√μ_0} ) is invertible, where A_0^{−1} = Ã_0 ∈ L( L²_{√μ_0}, L²_{√σ_0,B,κ} ).

Proof If κ = 0, then N(A_0) = {0} and R(A_0) = L²_{√μ_0} due to Proposition 5.2.26(a), (b). Moreover, Proposition 5.2.26(e) yields A_0^{−1} = Ã_0.

Let κ > 0. Set B_0 = A_0|_{L²_{√σ_0,B,κ}}. Taking into account the definition of L²_{√σ_0,B,κ} and Proposition 5.2.26(f) we see that

  N(B_0) = span{ B M_j : j = 0, …, κ − 1 } ∩ { u ∈ L²_{√σ_0} : ⟨u, B M_j⟩_{σ_0} = 0, j = 0, …, κ − 1 } = {0}.

Hence, it remains to prove that R(B_0) = L²_{√μ_0}. But this is a consequence of Proposition 5.2.26(e). Indeed, for every f ∈ L²_{√μ_0}, set u = Ã_0 f. Then A_0 u = f and u = u_0 + v_0 with u_0 ∈ L²_{√σ_0,B,κ} and v_0 ∈ N(A_0). This implies B_0 u_0 = A_0 u_0 = A_0 u − A_0 v_0 = f. ∎

Hence, it remains to prove that R(B0 ) = L2√μ0 . But this is a consequence of ,0 f . Then A0 u = f Proposition 5.2.26(e). Indeed, for every f ∈ L√μ0 , set u = A and u = u0 +v0 with u0 ∈ L√σ0 ,B,κ and v0 ∈ N(A0 ). This implies B0 u0 = A0 u0 = A0 u − A0 v0 = f .  

5.2.4 Regularity Properties

Here we study the continuity properties of solutions of equation (5.2.54). Throughout the present section, we assume that conditions (A1)–(A4) of Sect. 5.2.3 are fulfilled. Furthermore, let −∞ < a < b < ∞ and −∞ < c < d < ∞.


Lemma 5.2.28 ([171], Chapter 1, §6, 7◦ ) Let the function g : [a, b]×[c, d] −→ C, (x, y) → g(x, y) be Hölder continuous with exponent γ ∈ (0, 1) uniformly in the variables x and y, i.e., there is a constant c1 ∈ R such that   |g(x1 , y1 ) − g(x2 , y2 )| ≤ c1 |x1 − x2 |γ + |y1 − y2 |γ

(5.2.65)

for all (xj , yj ) ∈ [a, b] × [c, d]. Moreover, let x ∗ ∈ [a, b], 0 < δ < γ , and f (x, y) =

g(x, y) − g(x ∗ , y) . |x − x ∗ |δ

Then there exists a constant c2 ∈ R such that   |f (x1 , y1 ) − f (x2 , y2 )| ≤ c2 |x1 − x2 |γ −δ + |y1 − y2 |γ −δ

∀ (xj , yj ) ∈ [a, b] × [c, d] .

Note that, if |g(x1 , y1 ) − g(x2 , y2 )| ≤ c1 (|x1 − x2 | + |y1 − y2 |)γ ,

(5.2.66)

is satisfied for all (xj , yj ) ∈ [a, b] × [c, d], then also (5.2.65) is in force. Indeed, by (5.2.66) we have |g(x1 , y) − g(x2 , y)| ≤ c1 |x1 − x2 |γ

and |g(x, y1 ) − g(x, y2 )| ≤ c1 |y1 − y2 |γ

for all (xj , y), (x, yj ) ∈ [a, b] × [c, d], which implies |g(x1 , y1 ) − g(x2 , y2 )| ≤ |g(x1 , y1 ) − g(x1 , y2 )| + |g(x1 , y2 ) − g(x2 , y2 )|   ≤ c1 |x1 − x2 |γ + |y1 − y2 |γ for all (xj , yj ) ∈ [a, b] × [c, d]. Corollary 5.2.29 Suppose that g(x, y) possesses the property (5.2.65) and that g(xj0 , y) = 0 for j = 0, 1, . . . , N + 1, y ∈ [c, d], and some xj0 satisfying 0 ∗ < . . . < x 0 < x 0 = b. If < xN a = xN+1 1 0 g(x, y) , G(x, y) = 4N+1 0 δj j =0 |x − xj |

0 < δj < γ ,

(5.2.67)

then, for some constant c3 ∈ R,   |G(x1 , y1 ) − G(x2 , y2 )| ≤ c3 |x1 − x2 |γ −δ + |y1 − y2 |γ −δ

(5.2.68)

  for all (xj , yj ) ∈ [a, b] × [c, d], where δ = max δj : j = 0, 1, . . . , N + 1 .


Proof Let gj (x, y) =

5

g(x, y) |x − xj0 |δk

.

k∈{0,1,...,N+1}\{j }

Apply Lemma 5.2.28 to the functions % G0 : I0 := % Gj : Ij :=

& x10 + x00 , b −→ C , 2

xj0+1 + xj0 xj0 + xj0−1 , 2 2

& −→ C ,

j = 1, . . . , N ,

and % GN+1 : IN+1 := a,

0 0 + xN xN+1

2

& −→ C ,

where Gj (x, y) =

gj (x, y) |x − xj0 |δj

= G(x, y) ,

j = 0, 1, . . . , N + 1 .

This yields (cf. Exercise 2.4.15)   |G(x1 , y1 ) − G(x2 , y2 )| ≤ c3,j |x1 − x2 |γ −δ + |y1 − y2 |γ −δ

for all (xk , yk ) ∈ Ij × [c, d]. It remains to refer to Exercise 2.4.16.

(5.2.69)  

Lemma 5.2.30 If, for some x ∗ ∈ [a, b], f (x ∗ ) = 0, and if, for some constants c4 ∈ R, γ ∈ (0, 1), and δ > 0, |f (x1 ) − f (x2 )| ≤ c4 |x1 − x2 |γ , ∀ x1 , x2 ∈ [a, b] ,

and g(x) = |x − x ∗ |δ f (x) ,

then there is a constant c5 ∈ R such that |g(x1 ) − g(x2 )| ≤ c5 |x1 − x2 |γ ,

∀ x1 , x2 ∈ [a, b] .

Proof In case of γ ≤ δ, the assertion is a direct consequence of Exercises 2.4.15, 2.4.19, and 2.4.13. Let δ < γ and x ∗ ≤ x1 < x2 . If x ∗ = x1 , then   |g(x1 )−g(x2 )| = f (x2 )(x2 − x ∗ )δ  ≤ c4 (x2 −x ∗ )γ +δ ≤ c4 (b−x ∗ )δ |x2 −x1 |γ =: c51 |x2 −x1 |γ .


If x ∗ < x1 < x2 , then we estimate   |g(x1 ) − g(x2 )| = (x1 − x ∗ )δ f (x1 ) − (x2 − x ∗ )δ f (x2 ) ! " ≤ (x2 − x ∗ )δ − (x1 − x ∗ )δ |f (x1 )| + (x2 − x ∗ )δ |f (x1 ) − f (x2 )| .

(5.2.70) In case of x1 − x ∗ ≤ x2 − x1 , it follows |g(x1 ) − g(x2 )| ≤ c52 |f (x1 )| + c53 |f (x1 ) − f (x2 )| ≤ c52 c4 (x1 − x ∗ )γ + c53 c4 (x2 − x1 )γ ≤ (c52 + c53 )c4 (x2 − x1 )γ . Otherwise, if x2 − x1 < x1 − x ∗ , then we use the mean value theorem saying that there exists a z ∈ (x1 , x2 ) with the property (x2 − x ∗ )δ − (x1 − x ∗ )δ = δ(z − x ∗ )δ−1 (x2 − x1 ) . Consequently, by (5.2.70) and 0 < δ < γ < 1, |g(x1 ) − g(x2 )| ≤ δ(z − x ∗ )δ−1 (x2 − x1 )c4 (x1 − x ∗ )γ + c53 c4 (x2 − x1 )γ  ≤ δ c4 (x1 − x ∗ )δ−1 (x2 − x1 )1−γ (x1 − x ∗ )γ + c53 c4 (x2 − x1 )γ ! ! " " ≤ δ(x1 − x ∗ )δ + c53 c4 (x2 − x1 )γ ≤ δ(b − x ∗ )δ + c53 c4 (x2 − x1 )γ .

We can proceed analogously in case of x1 < x2 ≤ x ∗ . In case of x1 < x ∗ < x2 , we refer to the already proved cases and apply the triangular inequality.   Corollary 5.2.31 Let g(x, y) satisfy the conditions of Corollary 5.2.29 and G(x, y) be defined as in (5.2.67) , where δj < γ for all j = 0, 1, . . . , N + 1. Then relation (5.2.68) is valid for δ = max {0, δ0 , . . . , δN+1 }. Proof With the notations of the proof of Corollary 5.2.29 and with the help of Lemma 5.2.30, we get that relation (5.2.69) remains true, and we have again only to refer to Exercise 2.4.16.   Lemma 5.2.32 (cf. Proposition 5.2.17 and [171], §20) Let the function g(x, y) fulfil condition (5.2.65) for some γ ∈ (0, 1) and let g(x, c) = g(x, d) = 0, a ≤ x ≤ b. Then the function 

d

G(x) = c

is an element of C0,γ [a, b].

g(x, y) dy y−x


Now we are ready to prove the following proposition, where we use the notations of Sect. 5.2.3. Proposition 5.2.33 Let the assumptions (A1)–(A4) be in force, let f ∈ Cm,η , and let max {0, α − γ0 , β − γN+1 , −γ1 , . . . , −γN } < η < 1 . ,0 f belongs to Cm,δ , where Then the function A δ = min {η, η − α, η − β, η + γ0 − α, η + γN+1 − β, η + γ1 , . . . , η + γN } . Proof Taking into account that B(x) is a polynomial, that 

.    B(x) 1 [f (y) − f (x)]μ0 (y) dy B(x) 1 μ0 (y) dy , − A0 f (x) = f (x) a(x)μ(x) − π π y−x −1 y − x −1 .  1 [B(x) − B(y)]μ0 (y) dy ,0 1)(x) − 1 − B(x)F (x) , = f (x) (A π −1 y−x

where F (x) =

1 π



1 −1

[f (y) − f (x)]μ0(y) dy , y−x

,0 1)(x) is a polynomial, it remains to prove that and that, due to Corollary 5.2.24, (A BF belongs to Cm,δ . By induction it can be proved that, for k = 0, 1, . . . , m, dk Gk (y, x) := k dx

-

f (y) − f (x) y−x

.

⎡ ⎤ k (j ) (x)(y − x)j  k! f ⎣f (y) − ⎦. = j! (y − x)k+1 j =0

(5.2.71) Hence, F (k) (x) =

1 π



1 −1

Gk (y, x)μ0(y) dy ,

k = 0, 1, . . . , m ,

(5.2.72)

and we have only to prove that BF (m) belongs to C0,δ . Moreover, using Taylor’s formula (with integral error term), from (5.2.71) we get 1 Gk (y, x) = (y − x)k+1





y

f x

(k+1)

(t)(y −t) dt = k

0

1

  f (k+1) sx +(1−s)y s k ds . (5.2.73)


This implies     |Gm−1 (y1 , x1 ) − Gm−1 (y2 , x2 )| ≤ f (m) 

 C0,η

|y1 − y2 | + |x1 − x2 |



,

(5.2.74)

for all yj , xj ∈ [−1, 1]. Define g(y, x) := mGm−1 (y, x) − f (m) (x) (5.2.71)

=

⎡ ⎤ m−1 (m) (x)(y − x)m  f (j ) (x)(y − x)j f m! ⎣f (y) − ⎦ − (y − x)m j! m! j =0

= Gm (y, x)(y − x) (5.2.75) and P (y, x) :=

N+1 

g(xj∗ , x)

j =0

N+1 5 k=0,k=j

y − xk∗ . xj∗ − xk∗

Note, that P (y, x) is a polynomials w.r.t. the variable y with coefficients depending on x and belonging to C0,η (see (5.2.74)). Moreover, P (xj∗ , x) = g(xj∗ , x), j = 0, 1, . . . , N + 1. We obtain B(x)F (m) (x)

(5.2.72) B(x) =

=

π

B(x) π



1 −1



1 −1

g(y, x)μ0 (y) dy y−x

[g(y, x) − P (y, x)]μ0 (y) dy y−x

.  B(x) 1 P (y, x)μ0 (y) dy + a(x)μ(x)P (x, x) − a(x)μ(x)P (x, x) − π y−x −1 =

B(x) π



1 −1

 [g(y, x) − P (y, x)]μ0 (y) dy  , − A0 P (·, x) (x) + a(x)μ(x)P (x, x) . y−x

Taking into account Lemma 5.2.19 and the assumptions on c(x), we see that in the representation (cf. (5.2.50)) μ0 (x) = (1 − x)γ0 −α (1 + x)γN+1 −β

N 5

|x − xj∗ |γj

j =1

we have

1 [r(x)]2 c(x)w(x)

1 ∈ C0,η . Hence, we can apply Corollaries 5.2.29 and 5.2.31 with r 2 cw G(x, y) = [g(y, x) − P (y, x)]μ0 (y) ,

xj0 = xj∗ ,


and δ0 = α − γ0 ,

δj = −γj , j = 1, . . . , N ,

δN+1 = β − γN+1 ,

as well as γ = η to conclude, in accordance with Lemma 5.2.32, that 

1 −1

g(y, ·) − P (y, ·)]μ0 (y) dy ∈ C0,δ0 , y−·

where δ0 = η − max {0, δ0 , . . . , δN+1 } = min {η, η − δ0 , . . . , η − δN+1 } = min {η, η + γ0 − α, η + γN+1 − β, η + γ1 , . . . , η + γN } . Corollary 5.2.24 together with the fact that  P (y, x) is a polynomial in y with ,0 P (·, x) (x) is an element of C0,η . coefficients belonging to C0,η yields that A Finally, the function Q(x) := P (x, x) has the property Q(±1) = g(±1, ±1) = mGm−1 (±1, ±1) − f (m) (±1) = 0 ,

(5.2.76)

since, due to (5.2.73),  mGm−1 (x, x) = mf (m) (x)

1

s m−1 ds = f (m) (x) .

0

Consequently, Corollary 5.2.31 applied to G(x, y) = μ(x)P (x, x)

(5.2.48),(5.2.46)

=

Q(x) (1 − x)α (1 + x)β [r(x)]2w(x)

shows that a(x)μ0 (x)P (x, x) belongs to C0,δ1 with δ1 = η − max {α, β} = min {η − α, η − β}.   , instead of A ,0 . Its proof is easier An analogous result holds true for the operator A than the previous one. Proposition 5.2.34 Let a, b ∈ C0,η , f ∈ Cm,η , and max {0, α, β} < η < 1 . , belongs to Cm,δ , where Then the function Af δ = min {η, η − α, η − β} .


Proof We write 

.  1   1 1 b(y)[f (y) − f (x)]μ(y) dy b(y)μ(y) dy ,f (x) = f (x) a(x)μ(x) − 1 − A π −1 y−x π −1 y−x   ,1 (x) − F (x) , = f (x) A

where F (x) =

1 π



1 −1

b(y)[f (y) − f (x)]μ(y) dy . y−x

  , (x) is a polynomial, it remains to show that Since, due to Proposition 5.2.21, A1 the function F (x) belongs Cm,δ . Now we can proceed in the same manner as in the proof of Proposition 5.2.33. For k = 0, 1, . . . , m, we have F (k) (x) =

1 π



1 −1

b(y)Gk (y, x)μ(y) dy

with Gk (y, x) from (5.2.71). With (cf. (5.2.75)) g(y, x) = mGm−1 (y, x) − f (m) (x) = Gm (y, x)(y − x) and P (y, x) = g(1, x)

y−1 x+1 + g(−1, x) 2 2

we get F (m) (x) =

=

 1 1 b(y)g(y, x)μ(y) dy π −1 y−x  1 1 b(y)[g(y, x) − P (y, x)]μ(y) dy π −1 y−x % − a(x)μ(x)P (x, x) −

=

&  1 1 b(y)P (y, x)μ(y) dy + a(x)μ(x)P (x, x) π −1 y−x

  1 1 b(y)[g(y, x) − P (y, x)]μ(y) dy  , − AP (·, x) (x) + a(x)μ(x)P (x, x) . π −1 y−x


In virtue of (5.2.74) and the fact that P (y, x) is a polynomial in y with C0,η -coefficients in x as well as g(±1, x) − P (±1, x) = 0, we can apply Lemma 5.2.32 to conclude 1 π



1 −1

b(y)[g(y, ·) − P (y, ·)]μ(y) dy ∈ C0,δ , y−·

where we also took intoaccount b ∈ C0,η and Lemma 5.2.19 on the representation , (·, x) (x) is an element of C0,η , since P (·, x) is a of μ(x). By (5.2.60), AP 0,η polynomial with C -coefficients. Finally, by Corollary 5.2.29, aμQ ∈ C0,δ for Q(x) = P (x, x), since Q(±1) = 0 (see (5.2.76)).  

5.3 Compact Integral Operators

In this section we consider classes of compact integral operators in spaces of continuous functions, which play an important role in studying numerical methods for Fredholm integral equations in Chap. 6.

Lemma 5.3.1 Let g ∈ L¹(−1, 1). Then

  sup{ | ∫_{−1}^{1} g(x) f(x) dx | : f ∈ C[−1, 1], ‖f‖_∞ ≤ 1 } = ∫_{−1}^{1} |g(x)| dx.

Proof Denote the supremum on the left-hand side by c_g. Of course, c_g ≤ ∫_{−1}^{1} |g(x)| dx. To prove the reverse inequality, let ε > 0 be arbitrary. Due to the density of C[−1, 1] in L¹(−1, 1), there is a function g_ε ∈ C[−1, 1] with ‖g − g_ε‖_1 < ε/2. Then the functions f_n, n ∈ ℕ, defined by

  f_n(x) = g_ε(x) / ( |g_ε(x)| + 1/n )

belong also to C[−1, 1], and

  | ∫_{−1}^{1} g_ε(x) f_n(x) dx − ∫_{−1}^{1} g(x) f_n(x) dx | < ε/2,
  ∫_{−1}^{1} g_ε(x) f_n(x) dx = ∫_{−1}^{1} |g_ε(x)|² / ( |g_ε(x)| + 1/n ) dx → ∫_{−1}^{1} |g_ε(x)| dx ≥ ∫_{−1}^{1} |g(x)| dx − ε/2

as n → ∞. This shows that c_g ≥ ∫_{−1}^{1} |g(x)| dx − ε for every ε > 0, and hence c_g ≥ ∫_{−1}^{1} |g(x)| dx. ∎

Proposition 5.3.2 Let X be equal to (C[−1, 1], .∞ ) and K0 : [−1, 1]2 −→ C be a given function. Then, by  (K0 f )(x) :=

1 −1

K0 (x, y)f (y) dy ,

−1 ≤ x ≤ 1 ,

there is defined a compact operator K0 : X −→ X if and only if (a) K0 (x, .) ∈ L1 (−1, 1) for all x ∈ [−1, 1] and (b) lim K0 (x, .) − K0 (x0 , .)L1 (−1,1) = 0 for all x0 ∈ [−1, 1]. x→x0

Proof Of course, for (K0 f )(x) being well defined for every x ∈ [−1, 1] and every f ∈ C[−1, 1], it is necessary and sufficient that condition (a) is satisfied.  1 |K0 (x, y)| dy If additionally condition (b) is fulfilled, then the function F (x) = −1

is continuous on [−1, 1] due to  1 |K(x, y) − K(x0 , y)| dy = K0 (x, .) − K0 (x0 , .)L1 (−1,1) . |F (x) − F (x0 )| ≤ −1

Hence, F :[−1, 1] −→ R is bounded, such that the set F := {K0 f : f ∈ C[−1, 1], f ∞ ≤ 1 is bounded and, in view of (b), also equicontinuous in every point x0 ∈ [−1, 1]. Thus, due to Exercise 2.3.6, the set F is equicontinuous on [−1, 1]. Consequently, due to the Arzela-Ascoli Theorem 2.3.5, the conditions (a) and (b) imply the compactness of K0 : X −→ X. On the other hand, if this operator is compact, then the set {K0 f : f ∈ C[−1, 1],  f ∞ ≤ 1 is equicontinuous. Hence, for every ε > 0 and every x0 ∈ [−1, 1], there is a δ > 0 such that, for all f ∈ C[−1, 1] with f ∞ ≤ 1 and all x ∈ [−1, 1] with |x − x0 | < δ,  1     [K0 (x, y) − K0 (x0 , y)] f (y) dy  < ε .  −1


Applying Lemma 5.3.1 to g(y) = K0 (x, y) − K0 (x0 , y), we get 

1 −1

|K0 (x, y) − K0 (x0 , y)| dy ≤ ε

∀ x ∈ [−1, 1] : |x − x0 | < δ ,  

i.e., (b) is fulfilled. Analogously, one can prove the following. Proposition 5.3.3 Let p > 1, a given function. Then, by  (Kf )(x) :=

1 −1

1 p

+

1 q

= 1, α, β > −1, and K0 : [−1, 1]2 −→ C be

K0 (x, y)f (y)v α,β (y) dy ,

−1 < x < 1 ,

p

there is defined a compact operator K : Lα,β −→ C[−1, 1] if and only if q

(a) K0 (x, .) ∈ Lα(q−1),β(q−1) for all x ∈ [−1, 1] and (b) lim K0 (x, .) − K0 (x0 , .)α(q−1),β(q−1),(q) = 0 for all x0 ∈ [−1, 1]. x→x0

Exercise 5.3.4 Give the proof of Proposition 5.3.3. Cγ ,δ −→ C, Due to the definition of the Banach space Cγ ,δ , the mapping Jγ ,δ : f → v γ ,δ f is an isometric isomorphism. Hence, the operator K : Cγ0 ,δ0 −→ Cγ ,δ −1 is continuous or compact if and only if Jγ ,δ KJγ0 ,δ0 : C −→ C is continuous or compact, respectively. Corollary 5.3.5 Let K : (−1, 1)2 −→ C be a continuous function and let γ , δ ≥ 0, y) := γ0 , δ0 ≥ 0, γ1 , δ1 , and α, β be real numbers such that the function K(x, v γ ,δ (x)K(x, y)v γ1 ,δ1 (y) can be extended to a continuous function on [−1, 1]2 and that α + 1 > γ0 + γ1 , β + 1 > δ0 + δ1 . Then the operator K : Cγ0 ,δ0 −→ Cγ ,δ defined by  (Kf ) (x) =

1 −1

K(x, y)f (y)v α,β (y) dy ,

−1 < x < 1 ,

(5.3.1)

is a linear and compact operator. Proof The operator K : Cγ0 ,δ0 −→ Cγ ,δ is compact if and only if the function K0 (x, y) =

y)v α,β (y) K(x, v γ ,δ (x)K(x, y)v α,β (y) = v γ0 ,δ0 (y) v γ0 +γ1 ,δ0 +δ1 (y)

(5.3.2)


satisfies the conditions (a) and (b) of Proposition 5.3.2. But, these conditions are : [−1, 1]2 −→ C and the obviously fulfilled because of the continuity of K α−γ −γ ,β−δ −δ 0 1 0 1 integrability of v (x).   p

α β

p

,

Analogously, by the definition of Lα,β , the map Jα,β,p : Lα,β −→ Lp , f → v p p f is an isometrical isomorphism, and from Proposition 5.3.3 we get the following. p Cγ ,δ , defined by relation (5.3.1), is Corollary 5.3.6 The operator K : Lα0 ,β0 −→

compact if and only if the function K0 (x, y) = v conditions (a) and (b) of Proposition 5.3.3.

γ−

α0 p

,δ−

β0 p

(x)K(x, y) meets the

Now let us consider operators of the form  K1 : C[0, ∞] −→ C[0, ∞] ,



(K1 f )(x) =

K1 (x, y)f (y) dy

(5.3.3)

0

and

 K2 : C[−∞, ∞] −→ C[−∞, ∞] ,

(K2 f )(x) =

∞ −∞

K2 (x, y)f (y) dy (5.3.4)

defined on the Banach spaces (C[0, ∞], .∞ ) and (C[−∞, ∞], .∞ ), respectively, which are considered in Exercise 2.3.8. Let ϕ1 : (−1, 1) → (0, ∞) and ϕ2 : (−1, 1) −→ (−∞, ∞) be differentiable with ϕj (x) > 0 ∀ x ∈ (−1, 1) and ϕ1 (−1) := ϕ1 (−1 + 0) = 0, ϕ1 (1) := ϕ1 (1 − 0) = ∞, ϕ2 (−1) := ϕ2 (−1 + 0) = − ∞, 1+x x and ϕ2 (x) = ϕ2 (1) := ϕ2 (1 − 0) = ∞, for example, ϕ1 (x) = . Since 1−x 1 − x2 the mappings J1 : C[0, ∞] −→ C[−1, 1] ,

(J1 f )(x) = f (ϕ1 (x))

and J2 : C[−∞, ∞] −→ C[−1, 1] ,

(J2 f )(x) = f (ϕ2 (x))

are isometrical isomorphisms, the operators (5.3.3) or (5.3.4) are compact ones if and only if the operators J1 K1 J1−1 : C −→ C or J2 K2 J2−1 : C −→ C are compact, respectively. Corollary 5.3.7 The operators Kj , j = 1, 2, defined in (5.3.3) or (5.3.4), are compact if and only if the conditions (a) K1 (x, .) ∈ L1 (0, ∞) ∀ x ∈ [0, ∞] or K2 (x, .) ∈ L1 (−∞, ∞) ∀ x ∈ [−∞, ∞], and (b) limx→x0 K1 (x, .) − K1 (x0 , .)L1 (0,∞) = 0 or limx→x0 K2 (x, .) − K2 (x0 , .)L1 (−∞,∞) = 0 are satisfied for all x0 ∈ [0, ∞] or x0 ∈ [−∞, ∞], respectively.


Proof Let us consider the operator K1 . In case of the operator K2 we can proceed completely analogous. With the above introduced notations we have, for f ∈ C[−1, 1] and x ∈ [−1, 1],    J1 K1 J1−1 f (x) =



K1 (ϕ1 (x), z)f (ϕ −1 (z)) dz =

0



1 −1

K1 (ϕ1 (x), ϕ1 (y))ϕ1 (y)f (y) dy .

Consequently, K1 : C[0, ∞] −→ C[0, ∞] is compact if and only if the function K0 : [−1, 1] × (−1, 1) −→ C ,

(x, y) → K1 (ϕ1 (x), ϕ1 (y))ϕ1 (y)

satisfies conditions (a) and (b) of Proposition 5.3.2, where the first one means 

1 −1

K1 (ϕ1 (x), ϕ1 (y))ϕ1 (y) dy





=

K1 (w, z) dz < ∞

∀ w = ϕ1 (x) ∈ [0, ∞]

0

and the second one is  0 = lim

1

x→x0 −1

 = lim

w→w0 0

|K1 (ϕ1 (x), ϕ1 (y)) − K1 (ϕ1 (x0 ), ϕ1 (y))| ϕ1 (y) dy



|K1 (w, z) − K1 (w0 , z)| dz ∀ w0 = ϕ1 (x0 ) ∈ [0, ∞] ,  

which proves the corollary.

Remark 5.3.8 If one considers, for example, K1 (x, y) as defined only for finite x and y, then one has to interpret conditions (a) and (b) of Corollary 5.3.7 in case x0 = ∞ in the sense that the L1 -limit of K1 (x, .) for x −→ ∞ exists and is denoted by K1 (∞, .). For that reason, in [198, (2.1)–(2.3)] conditions (a) and (b) of Corollary 5.3.7 are written in the following equivalent form: (a) K1 (x, .) ∈ L1 (0, ∞) ∀ x ∈ [0, ∞), (b) lim K1 (x, .) − K1 (x0 , .)L1 (0,∞) = 0 ∀ x0 ∈ [0, ∞), x→x0 # ∞ $   K1 (x  , y) − K1 (x, y) dy : x  > x = 0. (c) lim sup x→∞

0

In the following corollary we consider a so called weakly singular integral operator. Corollary 5.3.9 Take the assumptions of Corollary 5.3.5, where we only replace y) = v γ ,δ (x)K(x, y)v γ1 ,δ1 (y) by the conditions that K : the continuity of K(x, 2 [−1, 1] \ {(x, x) : −1 ≤ x ≤ 1} −→ C is continuous and that   K(x, y) ≤ M0 |x − y|−η

∀ (x, y) ∈ [−1, 1]2 \ {(x, x) : −1 ≤ x ≤ 1}


is satisfied for some constants M0 ∈ R and 0 ≤ η < 1 − max {0, γ0 + γ1 − α, δ0 + δ1 − β} . Then the operator K : Cγ0 ,δ0 −→ Cγ ,δ defined by (5.3.1) is linear and compact. Proof Variant 1: Define ⎧ ⎪ ⎪ ⎨

χ(t) =

and, for n ∈ N,



n (x, y) = K

⎪ ⎪ ⎩

0

:0≤t ≤

2t − 1 : 1

1 2

:

1 2

,

< t < 1, 1≤t,

y) : (x, y) ∈ [−1, 1]2 , x = y , χ(n|x − y|)K(x, :

x = y ∈ [−1, 1] .   n (x, y) ≤ M0 n|x − y|1−η for all The relation χ(t) ≤ t for all t ≥ 0 implies K n : [−1, 1]2 −→ C is continuous for all n ∈ N. (x, y) ∈ [−1, 1]2. Consequently, K If we define  1 χ(n|x − y|)K(x, y)(x, y)f (y)v α,β (y) dy , (Kn f ) (x) = 0

−1

  Cγ ,δ . Moreover, since then, due to Corollary 5.3.5, Kn ∈ K Cγ0 ,δ0 , η0 := η + max {0, γ0 + γ1 − α, δ0 + δ1 − β} < 1 , we have, for f γ0 ,δ0 ,∞ ≤ 1,   γ ,δ   v (x) [(Kf ) (x) − (Kn f ) (x)] =  

!

1

−1

 α,β  " y) − K n (x, y) f (y)v (y) dy  K(x,  γ ,δ v 1 1 (y)

    min1,x+ 1   " f (y)v α,β (y)  n ! y) − K n (x, y) dy  =    K(x, γ ,δ v 1 1 (y)   max −1,x− n1   min 1,x+ n1   max −1,x− 1n

 ≤ M0

  min 1,x+ n1   max −1,x− n1

 ≤ 2 M0  ≤ M1

 |x − y|−η + n|x − y|1−η

x+ n1

x− n1

|x − y|−η

|x − y|−η0 dy =

v α,β (y) dy v γ0 ,δ0 (y)v γ1 ,δ1 (y)

v α,β (y) dy v γ0 ,δ0 (y)v γ1 ,δ1 (y)

2 M1 1 − η0

 1−η0 1 n


with some constant M1 ∈ R satisfying 2M0 |x − y|−η v α−γ0 −γ1 ,β−δ0 −δ1 (y) ≤ M1 |x − y|−η0 , Hence, lim K − Kn  Cγ

0 ,δ0

n→∞

→ Cγ ,δ

−1 < y < 1 .

= 0 and, since the subspace of compact

operators is closed in L(X, Y) (cf. Exercise 2.3.10), this implies the compactness Cγ ,δ . of K : Cγ0 ,δ0 −→ Variant 2: The conditions of the corollary imply that the function K0 (x, y), defined in (5.3.2), satisfies conditions (a) and (b) of Proposition 5.3.2. Indeed, since η + γ0 + γ1 − α < 1 and η + δ0 + δ1 − β < 1, we have  K0 (x, .)L1 =

1 −1

|K0 (x, y)| dy

 ≤ M0

1 −1

|x − y|−η (1 − y)α−γ0 −γ1 (1 + y)β−δ0 −δ1 < ∞

∀ x ∈ [−1, 1] .

To check condition (b) of Proposition 5.3.2, we first remark that there is a constant M1 = M1 (x, y) such that |x − y|−η ω(y) ≤ M1 |x − y|−η0 for all (x, y) ∈ [−1, 1] × (−1, 1), where ω(y) = v α−γ0 −γ1 ,β−δ0 −δ1 (y) and η0 = η + max {0, γ0 + γ1 − α, δ0 + δ1 − β} ∈ [0, 1). Let  M2 =

1 −1

ω(y) dy ,

0 M1 ε > 0, x0 ∈ [−1, 1], and choose A > 0 such that 4M 1−η0  A y) − K(x 0 , y)| < ε for all δ ∈ 0, 2 such that |K(x, 2M2



3A 2

1−η0


−1, and prove the following Proposition, which was stated in [20, Theorem 3.5] for α = β = − 12 . Proposition 5.3.10 For α, β > −1, the operator Bα,β : L2α,β −→ C is compact. Proof Let f ∈ L2α,β and f α,β = 1. Then, for x1 , x2 ∈ [−1, 1],   (Bα,β f )(x1 ) − (Bα,β f )(x2 ) ≤





1 −1

   ln |x1 − y| − ln |x2 − y|2 v α,β (y) dy .

(5.3.6) For z1 , z2 ∈ [0, 2] and μ > 0, we have (cf. Exercise 2.4.13)      μ   2μ  2μ μ μ μ μ μ z1 ln z1 − z2 ln z2  ≤  z1 − z2 z1 ln z1  +  z1 − z2 z2 ln z2  ≤ c |z1 − z2 |μ . (5.3.7) Let λ > 0 and write ln |x1 − y| − ln |x2 − y| =

|x1 − y|λ ln |x1 − y| − |x2 − y|λ ln |x2 − y| |x1 − y|λ +

|x1 − y|λ ln |x1 − y| − |x2 − y|λ ln |x2 − y| |x2 − y|λ

+

|x2 − y|2λ ln |x2 − y| − |x1 − y|2λ ln |x1 − y| . |x1 − y|λ |x2 − y|λ

284

5 Mapping Properties of Some Classes of Integral Operators

Then, in view of (5.3.7), we can estimate    ln |x1 − y| − ln |x2 − y|2  2  ≤ 3 |x1 − y|λ ln |x1 − y| − |x2 − y|λ ln |x2 − y|

+ %



≤ c |x1 − x2 |λ  = c|x1 − x2 |λ ≤ c|x1 − x2 |λ

1 1 + |x1 − y|2λ |x2 − y|2λ



 2 3 |x2 − y|2λ ln |x2 − y| − |x1 − y|2λ ln |x1 − y|

1 1 + |x1 − y|2λ |x2 − y|2λ

|x1 − y|2λ |x2 − y|2λ  +

|x1 − x2 |2λ |x1 − y|2λ |x2 − y|2λ

&

    1 1 1 1 1 λ  + + − x2 − y  |x1 − y|λ |x2 − y|λ |x1 − y|2λ |x2 − y|2λ  x1 − y

1 1 + + |x1 − y|2λ |x2 − y|2λ



1 1 + |x1 − y|λ |x2 − y|λ



1 |x1 − y|λ |x2 − y|λ

. ,

where in the last step we used     |z1 − z2 |λ ≤ (|z1 | + |z2 |)λ ≤ 2λ max |z1 |λ , |z2 |λ ≤ 2λ |z1 |λ + |z2 |λ . Due to Lemma 5.2.12 the integrals 

1 −1

v α,β (y) dy |x1 − y|2λ

 and

1 −1

v α,β (y) dy |x1 − y|2λ |x2 − y|λ

are bounded by a constant independend of x1 , x2 ∈ [−1, 1], if 0 < 3λ < 1 − max {0, −α, −β} . Thus, for such a λ we conclude from (5.3.6)   (Bα,β f )(x1 ) − (Bα,β f )(x2 ) ≤ c|x1 − x2 |λ ,   which yields the equicontinuity of the set Bα,β f : f ∈ L2α,β , f α,β = 1 . Furthermore, this set is uniformly bounded, since, for 0 < ε < 1−max {0, −α, −β},     Bα,β f (x) ≤ c



1

−1

v α,β (y) dy |x − y|ε

 12

f α,β ≤ cf α,β ,

where we again took into account Lemma 5.2.12.

−1 ≤ x ≤ 1 , c = c(f ) ,

 

Remembering the definition of the Hölder space C0,λ in Sect. 2.4.2, from the proof of Proposition 5.3.10 we deduce the following corollary.


Corollary 5.3.11 The operator Bα,β : L2α,β −→ C0,λ defined by (5.3.5) is compact if α, β > −1 and 0 < 3λ < 1 − max {0, −α, −β}. Proof From the proof Proposition 5.3.10 we immediately get the continuity of this operator. Applying this result to λ0 instead of λ, where 0 < 3λ0 < 3λ < 1 − max {0, −α, −β}, and using the compact embedding C0,λ ⊂ C0,λ0 (see Exercise 2.4.18), we obtain the compactness of the operator Bα,β : L2α,β −→ C0,λ .   For a continuous function K : (−1, 1)2 −→ C, we will use the notion K ∈ Cγ ,δ,x ∩ Cρ,τ ,y ξ

if and only if the function G(x, y) := v γ ,δ (x)K(x, y)v ρ,τ (y)

(5.3.8)

can be extended to a continuous function G : [−1, 1]2 −→ C and the function Kyρ,τ (x) := K(x, y)v ρ,τ (y) =

G(x, y) v γ ,δ (x)

(5.3.9)

belongs to Cγ ,δ uniformly w.r.t. y ∈ [−1, 1]. In case of ξn = n−ρ ln n we also write ρ,τ ξ Cγ ,δ,x ∩ Cρ,τ ,y instead of Cγ ,δ,x ∩ Cρ,τ ,y (see page 27). Here and in all what follows, for a family of elements hp of a normed space X, by “hp ∈X uniformly w.r.t.   p∈ J ” we mean that hp belongs to X for all p ∈ J and that sup hp X : p ∈ J < ∞. ξ

ξ Lemma 5.3.12 ([74], Lemma 4.11) If K ∈ Cγ ,δ,x ∩ Cρ,τ ,y , then there is a n  ∞ sequence (Pn )n=1 of functions Pn (x, y) = cnj (y)x j , such that all functions j =0

cnj (y)v ρ,τ (y), j = 0, 1, . . . , n, are piecewise constant on [−1, 1] and, for n ∈ N,    sup [K(x, y) − Pn (x, y)] v γ ,δ (x)v ρ,τ (y) : (x, y) ∈ [−1, 1]2 ≤ c ξn , (5.3.10) where c = c(n).

 y y γ ,δ ρ,τ  ρ,τ Proof For y ∈ [−1, 1], choose Pn ∈ Pn with Pn − Ky γ ,δ,∞ = En (Ky ), ρ,τ where Ky (x, y) is defined in (5.3.9). Since, by assumption, the function G(x, y) from (5.3.8) is uniformly continuous on [−1, 1]2 , there is a δ = δ(n) > 0, such that sup {|G(x, y1 ) − G(x, y2 )| : x ∈ [−1, 1]} < ξn for all y1 , y2 ∈ [−1, 1] with |y1 − y2 | < δ. This implies    y     y sup Pn 1 (x)v γ ,δ (x) − G(x, y2 ) : x ∈ [−1, 1] ≤ Pn 1 − Kyρ,τ  1

γ ,δ,∞

+ ξn ≤ c ξn ,


c = c(n), for all y1 , y2 ∈ [−1, 1] with |y1 − y2 | < δ. If we choose disjoint intervals Ik , k = 1, . . . , m, of length less than δ together with some points yk ∈ Ik such m m y

 χIk (y)Pn k (x) fulfils the that Ik = [−1, 1], then the function Pn (x, y) = v ρ,τ (y) 1 k=1 assertion of the lemma, where χIk denotes the characteristic function of the interval Ik .   As in Sect. 5.2, let 0 ≤ α ± , β ± < 1 and α + − α − = α, β + − β − = β. The following Propositions 5.3.13 and 5.3.14 are proved in [74, Prop.s 4.12, 4.13] for the case α ± = max {0, ±α}, β ± = max {0, ±β}. Proposition 5.3.13 Let α, β > −1 and ρ, τ ≥ 0 be constants with ρ + α − , ξ τ + β − < 1. If K ∈ Cγ ,δ,x ∩ Cρ,τ ,y and if K is the integral operator defined   ξ in (5.3.1), then K ∈ L Cα + ,β + , C . γ ,δ

Proof For f ∈ Cα + ,β + , we can estimate   (Kf ) (x)v γ,δ (x) ≤

    sup Kyρ,τ 

y∈[−1,1]

 γ,δ,∞

f α+ ,β + ,∞

1 −1

v −α

− −ρ,−β − −τ

(y) dy ≤ c f α+ ,β + ,∞ .

Thus, Kf γ ,δ,∞ ≤ c f α + ,β + ,∞ . Let Pn (x, y) be the functions from Lemma 5.3.12. Then, since cnj v ρ,τ ∈ L∞ , we have Qn ∈ Pn , where Qn (x) =  1 Pn (x, y)f (y)v α,β (y) dy. With the help of (5.3.10) we get −1

   1       (Kf − Qn ) (x)v γ ,δ (x) ≤ [K(x, y) − Pn (x, y)] v γ ,δ (x)f (y) v α,β (y) dy ≤ c ξn f α+ ,β + ,∞ , −1





where we took into account that v −ρ−α ,−τ −β ∈ L1 . Consequently, En (Kf ) ≤ c ξn f α + ,β + ,∞ , which leads to Kf γ ,δ,ξ ≤ c f α + ,β + ,∞ .   γ ,δ

h(x, y) − h(y, y) ξ with h ∈ C0,0,x ∩ C0,0,y and x−y −1 −δ −γ c n ≤ ξn ≤ c n for someconstants c > 0 and  γ , δ > 0. Then the operator K (ξn ln n) defined in (5.3.1) belongs to L Cα + ,β + , Cα − ,β − . Proposition 5.3.14 Let K(x, y) =

Proof In viewof Lemma 2.4.32, we have hy ∈ C0,θ uniformly w.r.t. y ∈ [−1, 1] γ for some θ ∈ 0, 2 , where hy (x) = h(x, y). With the help of Lemma 5.2.10 we get, for f ∈ Cα + ,β + and −1 < x < 1,  |(Kf ) (x)| ≤

   α,β  h(x, y) − h(y, y)  v (y) dy  f (y)   x−y −1 1

 ≤c

1 −1

|x − y|θ −1 v −α

− ,−β −

(y) dyf α + ,β + ,∞ ≤ c v −α

− ,−β −

(x)f α + ,β + ,∞ .


Hence, Kf α − ,β − ,∞ ≤ cf α + ,β + ,∞ for all f ∈ Cα + ,β + . Let pn (x, y) = n  cnj (y)x j be a polynomial of degree not greater than n in x satisfying (5.3.10) j =0

with h(x, y) instead of K(x, y), where γ = δ = ρ = τ = 0, i.e., |h(x, y) − pn (x, y)| ≤ c ξn ,

sup

n ∈ N,

c = c(n) .

(5.3.11)

(x,y)∈(−1,1)2

This implies  pn (x, y) − pn (y, y)  xj − yj = = cnj (y) dnj (y)x j , x−y x−y n

n−1

j =1

j =0

dnj ∈ L∞ (−1, 1)

and Qn ∈ Pn , where  Qn (x) :=

1 −1

pn (x, y) − pn (y, y) f (y)v α,β (y) dy . x−y

By h ∈ C0,0,x ∩ C0,0,y , (5.3.11) and Markov’s inequality (2.4.15), we have ξ

 # $    ∂pn (x, y)   : (x, y) ∈ [−1, 1]2 ≤ n2 sup |pn (x, y)| : (x, y) ∈ [−1, 1]2 ≤ c n2 sup   ∂x

with c = c(n). We conclude    h(x, y) − h(y, y) pn (x, y) − pn (y, y)  −α− ,−β −  (y) dy f α+ ,β + ,∞ −  v x−y x−y −1

 |(Kf ) (x) − Qn (x)| ≤

1

:= γn (x)f α+ ,β + ,∞ ,

where, for m ∈ N and due to (5.3.11) and (5.2.26), % γn (x) ≤ c ξn



x−

1+x 2n2m

−1

 + % ≤ c ξn ln n +



+

x+

x−

x+ 1−x 2n2m

1−x 2n2m

1+x 2n2m

x+ x−





1−x 2n2m

1+x 2n2m

v −α

− ,−β −

(y) dy |x − y|

  − − |x − y|θ−1 + n2 v −α ,−β (y) dy

&

&   − − θ−1 2 + n dy v −α ,−β (x) , |x − y|


since, for x −

1+x 2n2m

v −α If m ≥ max 

x+

1−x 2n2m

x− 1+x 2n2m



δ δ 2θ , 2

≤y≤x+ − ,−β −

1−x , 2n2m

(y) ≤ v −α

− ,−β −

 (x) 1 −

1 2n2m

−α − −β − .

 − − + 1 , then γn (x) ≤ c v −α ,−β (x) ξn ln n because of

|x − y|θ−1 dy =

1



θ 2n2m

! " θ θ θ (1 − x) + (1 + x) ≤

c n2mθ



c ≤ c ξn nδ

and  2

n

α − ,β −

Consequently, En and

x+

1−x 2n2m

x− 1+x 2n2m

dy =

1 n2(m−1)



1 ≤ c ξn . nδ (ξ ln n)

(Kf ) ≤ cf α + ,β + ,∞ ξn ln n, which leads to Kf ∈ Cα −n ,β − Kf α − ,β − ,(ξn ln n) ≤ cf α + ,β + ,∞ .

 

5.4 Weakly Singular Integral Operators with Logarithmic Kernels Here we continue the investigation of weakly singular integral operators with logarithmic kernels started with Proposition 5.3.10. For given u : (−1, 1) −→ C and a, b, α, β defined as in Proposition 5.2.5, by Wu = Wα,β u we refer to 



(W u)(x) = Wα,β u (x) = a



x

−1

v

α,β

b (y)u(y) dy − π



1 −1

v α,β (y) ln |y − x| u(y) dy ,

(5.4.1)

−1 < x < 1, if the integral exists as non-proper Riemann integral. If α + β = −1, in view of (5.1.15) we have, for n ∈ N, 

x

1 α+1,β+1 v α,β (y)pnα,β (y) dy = − v α+1,β+1 (x)pn−1 (x) n −1


and by partial integration 

1

−1

vα,β (y) ln |y − x| u(y) dy  = lim

ε→+0

x−ε −1

% = lim





1

+

vα,β (y) ln |x − y| pnα,β (y) dy

x+ε

α+1,β+1

vα+1,β+1 (x + ε)pn−1

α+1,β+1

(x + ε) − vα+1,β+1 (x − ε)pn−1

+

=

1 n



1

(x − ε)

n

ε→+0

α+1,β+1

vα+1,β+1 (y)pn−1

(y)

y −x

−1



1 n

x−ε −1



1

+



ln ε

α+1,β+1

vα+1,β+1 (y)pn−1

(y)

y −x

x+ε

& dy

dy ,

where the integral on the right hand side is a Cauchy principal one. Together with (5.2.12) this gives, in case α + β = −1,   1 Wpnα,β (x) = − pnβ,α (x) , n

−1 < x < 1 , n ∈ N .

(5.4.2)

Lemma 5.4.1 For some 0 < η ≤ 1 and all closed subintervals [a, b] ⊂ (−1, 1), let f ∈ L1 (−1, 1) ∩ C0,η [a, b]. Then d dx



1

−1

 ln |y − x| f (y) dy = v.p.

1 −1

f (y) dy , x−y

−1 < x < 1 .

Proof We have  ψ(x) :=

1

−1

 ln |y − x| f (y) dy = lim ψε (x) ε→+0

with

ψε (x) =

x−ε −1

 +

1

 ln |y − x| f (y) dy .

x+ε

For x ∈ (−1, 1), it follows ψε (x)

 =

x−ε

−1

 =

 −→

1 −1

+

1



x+ε

x−ε

−1

  +

1 x+ε

f (y) dy x−y



f (y) dy + [f (x − ε) − f (x + ε)] ln ε x−y f (x − ε) − f (x + ε) f (y) dy + (2ε)η ln ε x−y (2ε)η if

ε −→ +0 ,


where the last integral is defined as a Cauchy principal value. For every δ ∈ (0, 1), this convergence is uniformly w.r.t. x ∈ [−1 + δ, 1 − δ]. Indeed, for x ∈ [−1 + δ, 1 − δ], 0 < ε1 < ε2
0 . π

(5.4.9)

Of course, W1 = WM − MW, where (Mu)(x) = x u(x). Using the relation x pnσ (x) =

" 1! σ σ pn+1 (x) + pn−1 (x) , 2

n = 2, 3, . . . ,

(5.4.10)

we get 1 Mp0σ = √ p1σ , 2

Mp1σ =

1 σ 1 p + √ p0σ , 2 2 2

Mpnσ =

 1 σ σ p + pn−1 , n = 2, 3, . . . 2 n+1

Consequently, taking into account (5.4.4) we have W1 pnσ

=

:

−β0 p1σ σ βn−1 pn−1

σ − βn pn+1

n = 0,

: n = 1, 2, . . . ,

where β0 =

1 − ln 2 √ 2

and βn = − − 12 ,− 12

With the abbreviation Js = Js

1 , n = 1, 2, . . . 2n(n + 1)

we get the following corollary.

(5.4.11)

5.4 Weakly Singular Integral Operators with Logarithmic Kernels

295

2,s+2 is bounded and Corollary 5.4.4 The operator W1 : L2,s σ −→ Lσ

W1 = Js+2 W1 Js−1 : 2s −→ 2s+2 ,

∞ ∞ (ξn )n=0 → (βn ξn+1 − βn−1 ξn−1 )n=0 , β−1 := 0, ξ−1 := 0 .

The homogenous equation W1 ξ =  has in 2

− 32

only the trivial solution. Moreover,

the equation W1 ξ = (1, 0, 0, . . .) has no solution in 2 3 . −2

ξn−2 Proof The equation W1 ξ =  implies ξ1 = 0 and ξn = βn−2 = 0 for n = βn−1 3, 5, 7, . . . If ξ0 = 0 then also ξn = 0 for n = 2, 4, 6, . . . Let ξ0 = 1. Then m ξ2 = −4β0 and ξ2m = m−1 ξ2(m−1) = −4mβ0 , m = 2, 4, . . . In this case, ξ belongs 3 2 to s if and only if s < − 2 . Since W1 ξ = (1, 0, 0, . . .) leads to ξ1 = β0−1 , analogous considerations complete the proof.   ∞ → (0, ξ0 , ξ1 , . . .) and the Define the shift operator V : 2s −→ 2s , (ξn )n=0 ∞ 2 2 respective left inverse operator V−1 : −→ , (ξn )n=0 → (ξ1 , ξ2 , ξ3 , . . .) as ∞  1 ∞ 2 2 well as the isometric isomorphism R2 : s −→ s+2 , (ξn )n=0 → (n+1) . 2 ξn n=0 It is easily seen that the operator

1 K2 := W1 − R2 (V − V−1 ) : 2s −→ 2s+3 2 is bounded, so that W1 can be represented in the form W1 =

1 R2 (V − V−1 ) + K2 2

with a compact operator K2 : 2s −→ 2s+2 . Since the operator T(b) := V − V−1 : 2 −→ 2 is a Toeplitz operator with symbol b(t) = t − t −1 , t ∈ {z ∈ C : |z| = 1}, which vanishes in ±1, this operator is not Fredholm. More precisely, its image space im T(b) ⊂ 2 is not closed (see, for example, [22, 23]). Corollary 5.4.5 The image of the operator W1 : L2σ −→ Lσ2,2 is not closed, i.e., the problem of finding a solution u ∈ L2σ of W1 u = f for f ∈ Lσ2,2 is an ill posed problem. Proof By Corollary 5.4.4 the operator W1 : 2 −→ 22 has a trivial nullspace. ∞ ∞ → (βn−1 ξn−1 − βn ξn+1 )n=0 has Its adjoint operator W∗1 : 2−2 −→ 2 , (ξn )n=0 a one-dimensional nullspace spanned by (1, 0, −4β0 , 0, −8β0 , . . .) (cf. the proof of Corollary 5.4.4). Since W1 : 2 −→ 22 is not Fredholm its image space is necessarily not closed.   Remark 5.4.6 One can easily show that also 12 (V−1 − V) : 2s −→ 2s is not Fredholm for all s ∈ R, so that Corollary 5.4.5 remains true for W1 : L2,s σ −→ L2,s+2 for all s ∈ R. σ


As an example, let us deal with the equation .  1 1 1 dy (y − x) ln |y − x| + = f (x) + a , u(y)  π −1 2 1 − y2

(Au)(x) :=

(5.4.12)

−1 < x < 1, in which the function f (x) and the constant a ∈ C are looked for. By the definition of A we have



u, p1σ σ σ u, p0σ σ σ √ p0 − √ p1 , Au = W1 u + 2 2 2 2 n−1 pσ − β n pσ , n ∈ N0 , where β 0 = so that Apnσ = β n−1 n+1

3 2 2 −ln √

n = βn for and β

2

n = 0. Consequently, the operator A = Js+2 AJs−1 : 2s −→ 2s+2 has the same structure as W1 and Corollaries 5.4.4 and 5.4.5 remain true for A and A instead of W1 and W1 , respectively. If Aξ = η then ξ2k+1

  k k−1 k  1 5 β η0 2s+1 = η = 4(2k + 1) − m η2m 2k 2s 2m 0 β β 4β m=0 s=m m=1

and ξ2(k+1) =

k k  5

1 2k+1 β

m=0 s=m+1

% & k k 5  2s 2s β β 0 ξ0 − η2m+1 + ξ0 = 4(k + 1) β (2m + 1)η2m+1 , 2s−1 β β s=0 2s+1 m=0

k = 0, 1, 2, . . . Consider equation Aξ = η + (α, 0, 0, . . .) ,

(5.4.13)

where η ∈ 2 is given and ξ ∈ 2−1 as well as α ∈ C are looked for. To ensure that ∞  ξn is a zero sequence we have to choose α and ξ0 such that n+1 n=0

∞ ∞  η0 + α  − m η2m = 0 and β0 ξ0 − (2m + 1)η2m+1 = 0 , 0 4β m=1

(5.4.14)

m=0

where we assume that η ∈ 2s for some s > 32 . This implies ξ2k+1 = 4(2k + 1)

∞  m=k+1

m η2m

and ξ2(k+1) = 4(k + 1)

∞ 

(2m + 1)η2m+1 .

m=k+1

(5.4.15)

5.4 Weakly Singular Integral Operators with Logarithmic Kernels

Since, for r > 1,

∞ 

m

−r

∼k

1−r

297

we can estimate |ξk | ≤ c 2

(k + 1)2 η2 2

m=k

s

(k + 1)2s−3

, which

gives ξ ∈ 2τ if s > τ + 3. Consequently, if η ∈ 2s for some s > τ + 3 and τ ≥ −1, then Eq. (5.4.13) has a unique solution (ξ, α) ∈ 2τ × C. By the way we have proved the following. Corollary 5.4.7 For η ∈ 2s , s > τ + 3, τ ≥ −1, equation W1 ξ = η + (ζ0 , 0, . . .) has a unique solution (ξ, ζ0 ) ∈ 2τ , which is given by ζ0 = 4β0

∞ 

m η2m − η0 ,

m=1

ξ0 =

∞ 1  (2m + 1)η2m+1 , β0 m=0

and ξ2j +1 = 4(2j + 1)

∞ 

m η2m ,

ξ2(j +1) = 4(j + 1)

m=j +1

∞ 

(2m + 1)η2m+1 ,

m=j +1

j = 0, 1, . . . (cf. the relations in (5.4.14) and (5.4.15)). Later on (see Sect. 6.4) we will denote the solution operator, described by Corol, 0 , i.e., if η ∈ 2s and 2 < τ + 3 < s, then W1 ξ = η + (ζ0 , 0, . . .) is lary 5.4.7 by W 1 , 0 η. equivalent to (ξ, ζ0 ) = W 1 Corollary 5.4.8 If f ∈ L2,s σ for some s > τ + 3 and τ ≥ −1, then Eq. (5.4.12) as well as the equation W1 u = f + a has a unique solution (u, a) ∈ L2,τ σ × C, where in case τ ∈ [−1, 0) Eq. (5.4.12) has to be considered in the form (5.4.13) or W1 ξ = η + (α, 0, 0, . . .), respectively. Let us discuss how sharp the condition on f in Corollary 5.4.8 is. To this end, consider Eq. (5.4.13) for η = (1, 1, 2−r−1 , 3−r−1 , . . .), which belongs to 2s for s < r + 12 . In this case, for k ≥ 0, ξ2k+1 = 2(2k + 1)

∞ 

(2m)−r

m=k+1

and ξ2(k+1) = 4(k + 1)

∞ 

(2m + 1)−r ,

m=k+1

so that ξk ∼ (k + 1)2−r and ξ belongs to 2τ if and only if r > τ + 52 . Hence, if r = τ + 52 and τ ≥ −1 then η ∈ 2s for s < τ + 3, but Eq. (5.4.13) has no solution in 2τ × C. Corollary 5.4.9 If s < τ + 3 and τ ≥ −1, then there exist functions f ∈ L2,s σ , for which Eq. (5.4.12) has no solution (u, a) ∈ L2,τ σ × C. Remark 5.4.10 If the operator A is considered as operator from 2 into 22 , then the adjoint operator A∗ : 2−2 −→ 2 has a one-dimensional nullspace, and we have


that the closure of the image im A is orthogonal to ker A∗ . Hence, in this situation the first equality in (5.4.14) is nothing else than the orthogonality of the right hand side η + (α, 0, 0, . . .) to ker A∗ . For example, in case A : 2−1 −→ 21 the nullspace of A∗ is trivial, so that the above interpretation of the first equality in (5.4.14) is not longer true. Now consider the operator W2 : L2σ −→ L2σ defined by 1 (W2 u)(x) := π



1

dy (y − x)2 ln |y − x| u(y)  , −1 1 − y2

−1 < x < 1 .

Taking into account W2 = W1 M − MW1 and relations (5.4.10) we get W2 p0σ = −γ0 p0σ + γ1 p2σ ,

W2 p1σ = −γ p1σ + γ2 p3σ ,

and σ σ − 2γn pnσ + γn+1 pn+2 , W2 pnσ = γn−1 pn−2

n = 2, 3, . . . ,

where γ =

5 − ln 2 , 4

γ0 = ln 2 − 1 ,

γ1 =

3 − 2 ln 2 , √ 4 2

γn = −

1 , n = 2, 3, . . . 2(n − 1)n(n + 1)

For preparing the proof of some mapping properties of the operator W2 (see Corollary 5.4.13), we formulate the following lemma, which can be proved by induction.   2(n + 1) 2n + 1 xn−1 , n = 2, 3, . . ., then Lemma 5.4.11 If xn+1 = 2xn − 2n − 1 2(n − 1) n(n2 − 1) , n = 1, 2, . . ., if x1 = 0, x2 = 1, 6 2 n(n − 4) , n = 1, 2, . . ., if x1 = 1, x2 = 0. (b) xn = − 3   2n + 3 n+1 yn − yn−1 , n = 1, 2, . . ., then If yn+1 = n 2n − 1 (a) xn =

(c) yn = y1 = 1,

(2n + 1)[(2n + 1)2 − 1] n(n + 1)(2n + 1) = , n = 0, 1, . . ., if y0 = 0, 6 24

(2n + 1)[(2n + 1)2 − 9] (n − 1)(n + 2)(2n + 1) =− , n = 0, 1, . . ., (d) yn = − 2 8 if y0 = 1, y1 = 0. Let us formulate an immediate conclusion of Lemma 5.4.11.
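Before stating that conclusion, the closed forms of Lemma 5.4.11 can be confirmed quickly by iterating the two recursions; the following small Python sketch (an illustration only) does exactly that.

```python
def iterate_x(x1, x2, N):
    # x_{n+1} = (2(n+1)/(2n-1)) * ( 2 x_n - ((2n+1)/(2(n-1))) x_{n-1} ),  n = 2, 3, ...
    x = [None, x1, x2]
    for n in range(2, N):
        x.append(2 * (n + 1) / (2 * n - 1) * (2 * x[n] - (2 * n + 1) / (2 * (n - 1)) * x[n - 1]))
    return x

def iterate_y(y0, y1, N):
    # y_{n+1} = ((2n+3)/n) * ( y_n - ((n+1)/(2n-1)) y_{n-1} ),  n = 1, 2, ...
    y = [y0, y1]
    for n in range(1, N):
        y.append((2 * n + 3) / n * (y[n] - (n + 1) / (2 * n - 1) * y[n - 1]))
    return y

N = 12
xa, xb = iterate_x(0.0, 1.0, N), iterate_x(1.0, 0.0, N)
ya, yb = iterate_y(0.0, 1.0, N), iterate_y(1.0, 0.0, N)
for n in range(1, N):
    assert abs(xa[n] - n * (n * n - 1) / 6) < 1e-9                  # case (a)
    assert abs(xb[n] + n * (n * n - 4) / 3) < 1e-9                  # case (b)
    assert abs(ya[n] - n * (n + 1) * (2 * n + 1) / 6) < 1e-9        # case (c)
    assert abs(yb[n] + (n - 1) * (n + 2) * (2 * n + 1) / 2) < 1e-9  # case (d)
print("closed forms of Lemma 5.4.11 confirmed for n <", N)
```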


Corollary 5.4.12 Let k = 0, 1, 2, . . . and a, b ∈ C. If k,a,b xn+1 =

  2(n + 1) 2n + 1 k,a,b 2xnk,a,b − xn−1 , 2n − 1 2(n − 1)

k,a,b k,a,b n = k + 2, k + 3, . . . , xk+1 = a , xk+2 = b,

and k,a,b yn+1 =

  2n + 3 n + 1 k,a,b ynk,a,b − yn−1 , n 2n − 1

k,a,b n = k+1, k+2, . . . , ykk,a,b = a , yk+1 = b,

then xnk,a,b = akx n3 + bkx n ,

n = k + 1, k + 2, . . . ,

where akx

1 = 2k + 3



b a − k+2 k+1

 ,

bkx

. a(k + 2)2 b(k + 1)2 1 − , = 2k + 3 k+1 k+2

and y

y

ynk,a,b = ak (2n + 1)3 + bk (2n + 1) ,

n = k, k + 1, . . . ,

where y

ak =

1 8(k + 1)



b a − 2k + 3 2k + 1

 ,

y

bk =

. a(2k + 3)2 1 b(2k + 1)2 − . 8(k + 1) 2k + 1 2k + 3

2,s+3 is bounded, where Corollary 5.4.13 The operator W2 : L2,s σ −→ Lσ

W2 = Js+3 W2 Js−1 : 2s −→ 2s+3 ,

∞ ∞ → (ηn )n=0 (ξn )n=0

with η0 = −γ0 ξ0 +γ1 ξ2 , η1 = −γ ξ1 +γ2 ξ3 , and ηn = γn−1 ξn−2 −2γn ξn +γn+1 ξn+2 , n = 2, 3, . . . The homogeneous equation W2 ξ =  has only the trivial solution in 2 7 . Moreover, the equation W2 ξ = (η0 , η1 , 0, 0, . . .) with |η0 | + |η1 | > 0 has no −2

solution in 2 3 . −2

Proof Equation W2 ξ =  is equivalent to −γ0 ξ0 + γ1 ξ2 = 0 , γ2k−1 ξ2(k−1) − 2γ2k ξ2k + γ2k+1 ξ2(k+1) = 0 ,

k = 1, 2, . . . ,


and −γ ξ1 + γ2 ξ3 = 0 , γ2k ξ2k−1 − 2γ2k+1 ξ2k+1 + γ2k+2 ξ2k+3 = 0 ,

k = 1, 2, . . .

Thus, if ξ0 = 0 then ξ2k = 0 for all k = 0, 1, 2, . . . and if ξ1 = 0 then ξ2k+1 = 0 for all k = 0, 1, 2, . . . Let ξ0 = 1. Then ξ2 = γ1−1 γ0 =: δ and ξ4 = γ3−1 (2γ2δ − γ1 ) =: ε as well as ξ2(k+1)

  2(k + 1) 2k + 1 ξ2(k−1) , = 2ξ2k − 2k − 1 2(k − 1)

k = 2, 3, . . .

By Corollary 5.4.12 we get ξ2k =

(ε − 2δ)k 3 + (8δ − ε)k , 6

k = 1, 2, . . . ,

∞ so that (ξn )n=0 ∈ 2 7 , since ε − 2δ = 0. Analogously, one can show, using −2

∞ ∈ 2 Lemma 5.4.11(c), (d), that (ξn )n=0

− 72

(η0 , η1 , 0, 0, . . .) and ξ ∈ 2 3 , i.e.,

if ξ1 = 1. Now assume that W2 ξ =

−2

−γ0 ξ0 + γ1 ξ2 = η0 , γ2k−1 ξ2(k−1) − 2γ2k ξ2k + γ2k+1 ξ2(k+1) = 0 ,

k = 1, 2, . . . ,

and −γ ξ1 + γ2 ξ3 = η1 , γ2k ξ2k−1 − 2γ2k+1 ξ2k+1 + γ2k+2 ξ2k+3 = 0 ,

k = 1, 2, . . .

It follows (see above) ξ2k =

(ξ4 − 2ξ2 )k 3 + (8ξ2 − ξ4 )k , 6

which implies together with ξ ∈ 2

− 32

k = 3, 4, . . . ,

that ξ2 = ξ4 = 0 and so ξ2k = 0 for

k = 0, 1, 2, . . . Analogously, ξ2k+1 = 0, k = 0, 1, . . ., and we get a contradiction to |η0 | + |η1 | > 0.   The operator W2 can be written in the form . 1 2 2 W2 = R3 I − (V−1 + V ) + K3 , 2


∞ where I : 2s −→ 2s denotes the identity operator, R3 : 2s −→ 2s+3 , (ξn )n=0 →  ∞ 1 ξ is an isometric isomorphism, and K3 : 2s −→ 2s+4 is bounded. (n+1)3 n n=0

The operator T(c) := I − 12 (V2−1 + V2 ) : 2 −→ 2 is a Toeplitz operator with symbol c(t) = 1 − 12 (t −2 + t 2 ), which vanishes at t = ±1, so that this operator is not Fredholm (see, for example, [22, 23]). The proof of the following Corollary is analogous to the proof of Corollary 5.4.5 (cf. also Remark 5.4.6). 2,s+3 Corollary 5.4.14 Let s ∈ R. The image of the operator W2 : L2,s σ −→ Lσ 2,s is not closed, i.e., the problem of finding a solution u ∈ Lσ of W2 u = f for f ∈ L2,s+3 is an ill posed problem. σ

Let us now investigate the solvability of the equation W2 u = f + αp0σ + βp1σ ,

(5.4.16)

where f ∈ L2σ is given and (u, α, β) ∈ L2,−1 × C2 is looked for. σ ! "∞ ! "∞ Lemma 5.4.15 For a = an n=0 and b = bn n=0 , consider an infinite system Dξ = η

(5.4.17)

with a symmetric tridiagonal matrix ⎡

a0 b0



⎥ ⎢ ⎥ ⎢ b0 a1 b1 ⎥ ⎢ ⎥ D = D(a, b) = ⎢ ⎥ ⎢ b1 a2 b2 ⎥ ⎢ ⎦ ⎣ .. .. .. . . . and define ⎡

Dkj

ak bk



⎢ ⎥ ⎢ bk ak+1 bk+1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ .. .. .. = Dkj (a, b) = det ⎢ ⎥, . . . ⎢ ⎥ ⎢ ⎥ ⎢ bj −2 aj −1 bj −1 ⎥ ⎣ ⎦ bj −1 aj

0≤k≤j.

Then (5.4.17) is equivalent to ξn+1

%n−1 &  (−1)k Dk+1,n ηk ηn D0n ξ0 n = + (−1) − , bn bk · · · bn b0 b1 · · · bn k=0

n = 0, 1, 2, . . .

302

5 Mapping Properties of Some Classes of Integral Operators

Proof Using Dk,n+1 = an+1 Dkn − bn2 Dk,n−1 , n = k + 1, k + 2, . . ., k = 0, 1, . . ., the lemma is proved by induction.   The equation W2 ξ = η is equivalent to We2 ξ e = ηe and Wo2 ξ o = ηo with We2 = De = D(a e , be ) ,

where a0e = −γ0 , ake = −2γ2k , k = 1, 2, . . . ,

Wo2 = Do = D(a o , bo ) ,

where a0o = −γ , ako = −2γ2k+1 , k = 1, 2, . . . ,

ξ e = (ξ0 , ξ2 , ξ4 , . . .) , ξ o = (ξ1 , ξ3 , ξ5 , . . .) ,

bke = γ2k+1 , k = 0, 1, . . . , bko = γ2k+2 , k = 0, 1, . . . ,

ηe = (η0 , η2 , η4 , . . .) , ηo = (η1 , η3 , η5 , . . .) .

Define xk,n+1 :=

e (−1)n+k Dkn

bke · · · bne

,

yk,n+1 :=

o (−1)n+k Dkn

bko · · · bno

,

n = k, k+1, . . . , k = 0, 1, 2, . . .

 h 2 h h = a h Dh Then, for n = k + 2, k + 3, . . ., we have Dkn n k,n−1 − bn−1 Dk,n−2 , h ∈ {e, o}, so that   e bn−1 ane 2(n + 1) 2n + 1 x x − x = − 2x kn k,n−1 kn k,n−1 , bne bne 2n − 1 2(n − 1) (5.4.18)   o bn−1 ano 2n + 3 n+1 ykn − yk,n−1 , (5.4.19) = − o ykn − o yk,n−1 = bn bn n 2n − 1

xk,n+1 = −

yk,n+1

n = k + 2, k + 3, . . . For k = 0 we have x01 =

e D00 De γ0 , x02 = − e 01e = e = − b0 γ1 b0 b1

γ12 − 2γ0 γ2 and, by Corollary 5.4.12 (cf. also Lemma 5.4.11 and the proof of γ1 γ3 Corollary 5.4.4), x0n =

x02 n(n2 − 1) x01n(n2 − 4) − =: a0x n3 +b0x n , 6 3

n = 1, 2, . . . ,

(5.4.20)

with a0x b0x =

γ0 3γ1



γ0 = 3γ1

  γ2 γ1 = −1.20645 . . . , 1− + γ3 6γ3

 √ γ1 γ1 γ2 −4 − =− = 2(3 − 2 ln 2) = 2.28212 . . . γ3 6γ3 6γ3

5.4 Weakly Singular Integral Operators with Logarithmic Kernels

303

o o D00 D01 γ22 − 2γ γ3 γ = − and y = − = . If we define y00 02 b0o γ2 b0o b1o  γ2 γ4  1 1 = 5(y01 − 2y00), i.e. y00 = y01 − y02 , then, due to Corollary 5.4.12, 2 5

Moreover, y01 = by y02 y0n =

y01 (2n + 1)[(2n + 1)2 − 1] y00 (2n + 1)[(2n + 1)2 − 9] y y − =: a0 (2n+1)3 +b0 (2n+1) 24 8

(5.4.21)

with y

a0 =

y01 − 3y00 = 0.40343 . . . , 24

y

b0 =

27y00 − y01 = −1.40343 . . . 24

If k > 0 then xk,k+1 =

e Dkk ake 4(k + 1) =: a = =− e e bk bk 2k − 1

and xk,k+2 = −

=

e Dk,k+1 e bke bk+1

=−

e ake ak+1 − [bke ]2 e bke bk+1

=

bke ae ae (k + 2)(2k + 3) 4(k + 1) 4(k + 2) − ke k+1 = − e e bk+1 bk bk+1 k(2k + 1) 2k − 1 2k + 1

(k + 2)[(2k − 1)(2k + 3) − 16k(k + 1)] 3(k + 2)(2k + 1) =− =: b k(2k − 1)(2k + 1) k(2k − 1)

By Corollary 5.4.12 we get xkn = akx n3 + bkx n with akx =

. 1 3(2k + 1) 4 1 − + =− , 2k + 3 k(2k − 1) 2k − 1 k(2k − 1)

bkx =

. 3(2k + 1)(k + 1)2 1 4(k + 2)2 (k − 1)2 + . − = 2k + 3 2k − 1 k(2k − 1) k(2k − 1)

Furthermore, in case k > 0, we have yk,k+1 =

o Dkk ako 2k + 3 =: b = o o =− bk bk k

(5.4.22)

304

5 Mapping Properties of Some Classes of Integral Operators

and yk,k+2 = −

o Dk,k+1 o bko bk+1

=−

o − [bko ]2 ako ak+1 o bko bk+1

=

bko ao ao 2k + 5 − ko k+1 = o o bk+1 bk bk+1 k+1



k+2 2k + 3 − 2k + 1 k

 .

Consequently, if we set ykk = −1 =: a then the recursion (5.4.19) holds true for n = k + 1, k + 2, . . ., and Corollary 5.4.12 implies y

y

ykn = ak (2n + 1)3 + bk (2n + 1)

(5.4.23)

with y ak

. 1 1 1 1 , = − + =− 8(k + 1) k 2k + 1 8k(2k + 1)

y

bk =

. (2k + 3)2 (2k + 1)2 (2k − 1)2 1 − + = . 8(k + 1) 2k + 1 k 8k(2k + 1)

Now, from Lemma 5.4.15 we infer η2n  xk+1,n+1 − η2k − x0,n+1 ξ0 , bne bke n−1

ξ2(n+1) =

n = 0, 1, . . . ,

k=0

as well as η2n+1  yk+1,n+1 − η2k+1 − y0,n+1 ξ1 , bno bko n−1

ξ2n+3 =

n = 0, 1, . . .

k=0

Together with the definition of bke , bko and relations (5.4.20), (5.4.22), as well as (5.4.21), (5.4.23) this leads to ⎡



n−1 x  x η − a1 η − a x ξ ⎦ ξ2(n+1) = −8n(n + 1)(2n + 1)η2n + (n + 1)3 ⎣ 8k(k + 1)(2k + 1)ak+1 2k 0 0 γ1 0 k=1





n−1 

x x η − b1 η − bx ξ ⎦ , +(n + 1) ⎣ 8k(k + 1)(2k + 1)bk+1 2k 0 0 γ1 0 k=1

⎧ ⎨

= −(n + 1) 8n(2n + 1)η2n ⎩

⎡ + (n + 1)2 ⎣

n−1  k=1

(5.4.24)

⎫ ⎤ n−1 ⎬  η0 x 3 x 8kη2k − + a0 ξ0 ⎦ − 8k η2k + b0 ξ0 ⎭ γ1 k=1

5.4 Weakly Singular Integral Operators with Logarithmic Kernels

305

and ξ2n+3 = −4(n + 1)(2n + 1)(2n + 3)η2n+1 +(2n + 3)3

%n−1 

& y

y

4(k + 1)(2k + 1)(2k + 3)ak+1 η2k+1 − a0 ξ1

k=0

+(2n + 3)

%n−1 

& y

y

4(k + 1)(2k + 1)(2k + 3)bk+1 η2k+1 − b0 ξ1

(5.4.25)

k=0

= −(2n + 3) 4(n + 1)(2n + 1)η2n+1 + (2n + 3)2

%n−1 &  2k + 1 y η2k+1 + a0 ξ1 2 k=0



n−1  (2k + 1)3 y η2k+1 + b0 ξ1 2

 ,

k=0

$n = 0, 1, 2, \dots$ Coming back to Eq. (5.4.16), which is equivalent to
$$W_2\,\xi = \eta + (\alpha, \beta, 0, 0, \dots) \qquad (5.4.26)$$
with $\eta_k = \langle f, p_k^\sigma\rangle_\sigma$, $k = 0, 1, 2, \dots$, and taking into account (5.4.24) as well as (5.4.25), we observe the following. If (5.4.26) has a solution $\xi \in \ell^2_{-1}$, then
$$8\sum_{k=1}^{\infty} k\,\eta_{2k} - \frac{\eta_0}{\gamma_1} + \alpha + a^x_0\,\xi_0 = 0\,, \qquad 8\sum_{k=1}^{\infty} k^3\,\eta_{2k} - b^x_0\,\xi_0 = 0\,,$$
$$\sum_{k=0}^{\infty} (2k+1)\,\eta_{2k+1} + \beta + 2a^y_0\,\xi_1 = 0\,, \qquad \sum_{k=0}^{\infty} (2k+1)^3\,\eta_{2k+1} + \beta - 2b^y_0\,\xi_1 = 0\,.$$

These conditions together with (5.4.24) and (5.4.25) determine $\alpha, \beta, \xi_0, \xi_1, \xi_2, \dots$ uniquely if $\eta \in \ell^2_s$ for some $s > \frac{7}{2}$. In this case
$$\xi_{2(n+1)} = -8(n+1)\left[n(2n+1)\,\eta_{2n} - (n+1)^2\sum_{k=n}^{\infty} k\,\eta_{2k} + \sum_{k=n}^{\infty} k^3\,\eta_{2k}\right]$$
and
$$\xi_{2n+3} = -(2n+3)\left[4(n+1)(2n+1)\,\eta_{2n+1} - (2n+3)^2\sum_{k=n}^{\infty} \frac{2k+1}{2}\,\eta_{2k+1} + \sum_{k=n}^{\infty} \frac{(2k+1)^3}{2}\,\eta_{2k+1}\right].$$
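Once the infinite tails are truncated at some cutoff, the two formulas above can be evaluated directly. The following small sketch is not taken from the book; the cutoff and the model sequence for η are illustrative assumptions, chosen only to show how ξ is obtained from a rapidly decaying η.

```python
import numpy as np

def xi_from_eta(eta, n_max):
    # Truncate the infinite tails at the largest index M for which eta_{2M+1} is available.
    M = (len(eta) - 2) // 2
    xi_even = np.zeros(n_max + 1)   # xi_even[n] approximates xi_{2(n+1)}
    xi_odd = np.zeros(n_max + 1)    # xi_odd[n]  approximates xi_{2n+3}
    for n in range(n_max + 1):
        k = np.arange(n, M + 1)
        e_even, e_odd = eta[2 * k], eta[2 * k + 1]
        xi_even[n] = -8 * (n + 1) * (
            n * (2 * n + 1) * eta[2 * n]
            - (n + 1) ** 2 * np.sum(k * e_even)
            + np.sum(k ** 3 * e_even))
        xi_odd[n] = -(2 * n + 3) * (
            4 * (n + 1) * (2 * n + 1) * eta[2 * n + 1]
            - (2 * n + 3) ** 2 * np.sum((2 * k + 1) / 2 * e_odd)
            + np.sum((2 * k + 1) ** 3 / 2 * e_odd))
    return xi_even, xi_odd

eta = 1.0 / np.arange(1, 403, dtype=float) ** 6   # model coefficients decaying like k^{-6}
print(xi_from_eta(eta, 3))
```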


Consequently,
$$|\xi_n|^2 \le c\,(n+1)^{9-2s}\,\|\eta\|^2_{\ell^2_s}\,,$$
which implies $\xi \in \ell^2_\tau$ if $s > \tau + 5$. Thus, if $\eta \in \ell^2_s$ for some $s > \tau + 5$ and $\tau \ge -1$, then Eq. (5.4.26) has a unique solution $(\xi, \alpha, \beta) \in \ell^2_\tau \times \mathbb{C}^2$.

Corollary 5.4.16 If $f \in \mathbf{L}^{2,s}_\sigma$ for some $s > \tau + 5$ and $\tau \ge -1$, then Eq. (5.4.16) has a unique solution $(u, a, b) \in \mathbf{L}^{2,\tau}_\sigma \times \mathbb{C}^2$, where in case $\tau \in [-1, 0)$ Eq. (5.4.16) has to be considered in the form (5.4.26).

The following proposition deals with a generalization of the operator $W$ defined in (5.4.1), namely
$$(W_0 u)(x) = a \int_{-1}^{x} H(x,y)\,v^{\alpha,\beta}(y)\,u(y)\,dy - \frac{b}{\pi}\int_{-1}^{1} H(x,y)\,v^{\alpha,\beta}(y)\,\ln|y-x|\,u(y)\,dy\,, \qquad (5.4.27)$$
$-1 < x < 1$, where $a$, $b$, $\alpha$, $\beta$ are given as in Proposition 5.2.5. First, let us prove the following lemma.

Lemma 5.4.17 ([74], Lemma 4.14) If $\psi \in \mathbf{C}^{\rho,\tau}_{0,0}$, then the operator of multiplication by $\psi(x)$ belongs to $\mathcal{L}\big(\mathbf{C}^{\rho,\tau}_{\gamma,\delta}\big)$.

Proof For $u \in \mathbf{C}^{\rho,\tau}_{\gamma,\delta}$ and $\psi_n, u_n \in \mathcal{P}_n$, $n \ge 2$, we have
$$\big\|v^{\gamma,\delta}(\psi u - \psi_n u_n)\big\|_\infty \le \big\|v^{\gamma,\delta} u\big\|_\infty \big\|\psi - \psi_n\big\|_\infty + \|\psi_n\|_\infty \big\|v^{\gamma,\delta}(u - u_n)\big\|_\infty\,,$$
which implies
$$E^{\gamma,\delta}_{2n}(\psi u) \le \|u\|_{\gamma,\delta,\infty}\,E^{0,0}_n(\psi) + c\,\|\psi\|_\infty\,E^{\gamma,\delta}_n(u)\,, \qquad c \ne c(n, u)\,.$$
Hence, for $m = 2n+1$ and $m = 2n$, it follows
$$E^{\gamma,\delta}_m(\psi u) \le \frac{\ln^\tau n}{n^\rho}\Big[\|u\|_{\gamma,\delta,\infty}\,\|\psi\|_{0,0,\rho,\tau} + c\,\|\psi\|_\infty\,\|u\|_{\gamma,\delta,\rho,\tau}\Big] \le c\,\frac{\ln^\tau m}{m^\rho}\,\|u\|_{\gamma,\delta,\rho,\tau}\,,$$
and the lemma is proved. □

Proposition 5.4.18 ([74], Proposition 4.15) Let $m-1 < \rho < m$ for some $m \in \mathbb{N}$ and assume that $H_m \in \mathbf{C}^{\rho-m+1,\tau}_{0,0,x} \cap \mathbf{C}_{0,0,y}$ and $\widetilde H_j \in \mathbf{C}^{\rho-j,\tau}_{0,0}$ for $j = 0, \dots, m-1$, where $H_j(x,y) := \dfrac{\partial^j H(x,y)}{\partial x^j}$ and $\widetilde H_j(x) = H_j(x,x)$. Then $W_0 \in \mathcal{L}\big(\mathbf{C}^{\rho,\tau}_{\alpha^+,\beta^+}, \mathbf{C}^{\rho+1,\tau+1}_{\alpha^-,\beta^-}\big)$, where $\alpha^\pm$ and $\beta^\pm$ satisfy (5.2.29) and where the operator $W_0$ is defined in (5.4.27).


Proof In view of Corollary 2.4.31 and in view of the formula 

x

Hj (x, y) = y

j (y) , Hj +1 (s, y) ds + H

∩ C0,0,y can be replaced by

ρ−m+1,τ

we see that the condition Hm ∈ C0,0,x ρ−j +1,τ

Hj ∈ C0,0,x

j = 0, . . . , p − 1 ,

∩ C0,0,y ,

j = 0, . . . , m .

(5.4.28)

ρ,τ

Let us first consider the case 0 < ρ < 1. For u ∈ Cα + ,β + , we have   0 I u , (W0 u) = W1 + T1 + AH

(5.4.29)

where A is defined in (5.2.11) and where (W1 u) (x) = a

 x −1

H1 (x, y)u(y)v α,β (y) dy −

 b 1 H1 (x, y) ln |y − x| u(y)v α,β (y) dy π −1

=: a (W10 u) (x) − b (W11 u) (x)

as well as (T1 u) (x) =

b π



1 −1

,(x, y)u(y)v α,β (y) dy , H

,(x, y) = H (x, y) − H (y, y) . H y−x

0 ∈ Cρ,τ , Lemma 5.4.17 and Remark 5.2.15 it follows In view of H 0,0   AH 0 u

α − ,β − ,ρ,τ +1

≤ cuα + ,β + ,ρ,τ .

(5.4.30)

Consider H1 (x, y) ln |x − y| =

,1 (y, y) ,1 (x, y) − H H x−y

,1 (x, y) = H1 (x, y)(x − y) ln |x − y|. With the help of Lemma 5.4.17 and where H Proposition 5.3.14 we can conclude ρ,τ +1 W11 ∈ L( Cα + ,β + , Cα − ,β − ) ,

(5.4.31)

C0,0,y ⊂ C0,0,x ∩ C0,0,y for d(x, y) = where we took into account that d ∈ Cx ∩ 0,ρ

ρ,0

0,ρ

(x − y) ln |x − y| (see Exercises 2.4.22 and 3.1.13) By d ∈ Cx

we mean that


d(x, ·) ∈ C0,ρ uniformly with respect to x ∈ [−1, 1]. If −1 + 2h2 ≤ x ≤ 1 − 2h2 and 0 < h ≤ 12 , then    1  hϕ (W10 u) (x)        x− hϕ(x)  x+ hϕ(x)  2 2 hϕ(x) hϕ(x)   , y u(y)v α,β (y) dy − , y u(y)v α,β (y) dy  = H1 x + H1 x −  −1  2 2 −1 % ≤ uα+ ,β + ,∞  +

x− hϕ(x) 2

x− hϕ(x) 2

−1

x+ hϕ(x) 2

     H1 x + hϕ(x) , y  v −α− ,−β − (y) dy   2

&        H1 x + hϕ(x) , y − H1 x − hϕ(x) , y  v −α− ,−β − (y) dy   2 2

≤ cuα+ ,β + ,∞ v −α

− ,−β −

(x)hρ lnτ h−1 ,

 (x) for y ∈ x − hϕ(x) ,x +    2   1 0,0  0,0 (cf. the proof of Corollary 2.4.31) and hϕ H1,y (x) ≤ 1ϕ H1,y , h

where we used v −α

c hρ

τ

− ,−β −

(y) ≤ c v −α

− ,−β −

hϕ(x) 2

0,0,∞

h−1



ln (see (5.3.9) and Corollary 2.4.30(a)). Consequently, taking into account that the function ψ(h) = hρ lnτ h−1 is increasing on an interval (0, h0 (ρ, τ )), we get 1ϕ (W10 u, h)α − ,β − ,∞ ≤ cuα + ,β + ,∞ hρ lnτ h−1 . In virtue of Corollary 2.4.30(b) and the obvious estimate W10 uα − ,β − ,∞ ≤ cW10 u∞ ≤ cuα + ,β + ,∞ , it follows W10 uα − ,β − ,ρ,τ ≤ cuα + ,β + ,∞ , which implies   ρ,τ +1 W10 ∈ L Cα + ,β + , Cα − ,β − .

(5.4.32)

This is also true for the operator T1 , since ,(x, y) = − H



1

H1 (tx + (1 − t)y, y) dt ,

(5.4.33)

0

, ∈ Cρ,τ ∩ which leads to H 0,0,x C0,0,y (see (5.4.28)) and, in view of Proposition 5.3.13,  ρ,τ +1  to T1 ∈ L Cα + ,β + , Cα − ,β − . (Indeed, we can also apply Proposition 5.3.14 by using  ρ+1,τ ρ+1,τ +1  C0,0,y to obtain T1 ∈ L Cα + ,β + , Cα − ,β − . H ∈ C0,0,x ∩


Thus, together with Corollary 2.4.31 we obtain  c α − ,β −  lnτ +1 n En−1 (5.4.34) (W0 ) ≤ c ρ+1 uα + ,β + ,ρ,τ . n n   Since the operator W0 obviously belongs to L Cα + ,β + , Cα − ,β − , the proposition is proved in case of 0 < ρ < 1. Assume that the assertion is true for 0 < ρ < m ∈ N and that the conditions of the proposition are fulfilled  for some ρ with m < ρ < m + 1. Note that the operator ρ−1,τ ρ,τ +1 ,∈ W1 in (5.4.29) belongs to L Cα + ,β + , Cα − ,β − and that relation (5.4.33) yields H   ρ,τ ρ,τ +1 C0,0,x ∩ C0,0,y . Hence, in view of Proposition 5.3.13, T1 ∈ L Cα + ,β + , Cα − ,β − .   Since (5.4.30) is again valid, it follows (W0 u)  − − ≤ cuα + ,β + ,ρ,τ , Enα

− ,β −

(W0 u) ≤

α ,β ,ρ,τ +1

 

and (5.4.34) applies. So, the proposition is proved by induction.

Remark 5.4.19 From the proof of Proposition 5.4.18, we see that $0 < \rho < 1$ and $H \in \mathbf{C}^{\rho,\tau}_{0,0,x} \cap \mathbf{C}_{0,0,y}$ imply (see (5.4.31) and (5.4.32)) $W_0 \in \mathcal{L}\big(\mathbf{C}^{\rho,\tau}_{\alpha^+,\beta^+}, \mathbf{C}^{\rho,\tau+1}_{\alpha^-,\beta^-}\big)$, where the operator $W_0$ is defined by (5.4.27).

5.5 Singular Integro-Differential or Hypersingular Operators

With the help of the operator $A$ introduced in (5.2.11) we define
$$(Bu)(x) = \frac{d}{dx}\,(Au)(x)\,, \qquad (5.5.1)$$
if the derivative exists in the classical sense. Formulas (5.2.12) and (5.1.14) imply
$$\big(Bp_n^{\alpha,\beta}\big)(x) = (-1)^\lambda\,\sqrt{(n-\kappa)(n+1)}\;p_{n-\kappa-1}^{1-\alpha,1-\beta}(x)\,, \qquad (5.5.2)$$
$-1 < x < 1$, $n \in \mathbb{N}_0$. In case $\alpha = \beta = \frac12$, we derive the relation (cf. the second equation in (5.2.13) and the first one in (5.1.23))
$$-\frac{d}{dx}\,\frac{1}{\pi}\int_{-1}^{1}\frac{\sqrt{1-y^2}\,U_n(y)}{y-x}\,dy = (n+1)\,U_n(x)\,, \qquad (5.5.3)$$
$-1 < x < 1$, $n \in \mathbb{N}_0$. The last relation is often written formally as
$$-\frac{1}{\pi}\int_{-1}^{1}\frac{\sqrt{1-y^2}\,U_n(y)}{(y-x)^2}\,dy = (n+1)\,U_n(x)\,, \qquad -1 < x < 1\,,\; n \in \mathbb{N}_0\,,$$
and the operator defined by the left-hand side is called a hypersingular or finite part integral operator.

Exercise 5.5.1 Prove that, for $u \in \mathbf{C}[-1,1] \cap \mathbf{C}^2(-1,1)$ with $u(\pm 1) = 0$ and $u' \in \mathbf{L}^1(-1,1)$, we have
$$\frac{d}{dx}\int_{-1}^{1}\frac{u(y)}{y-x}\,dy = \int_{-1}^{1}\frac{u'(y)}{y-x}\,dy = \int_{-1}^{1}\frac{u(y)-u(x)}{(y-x)^2}\,dy - \frac{2u(x)}{1-x^2}\,, \qquad -1 < x < 1\,,$$
where all integrals have to be understood in the Cauchy principal value sense.

More generally than in the last exercise, we have (cf. [164, Chapter II, Lemma 6.1])
$$\frac{d}{dx}\int_{-1}^{1}\frac{u(y)}{y-x}\,dy = \int_{-1}^{1}\frac{v(y)}{y-x}\,dy - \frac{u(1)}{1-x} - \frac{u(-1)}{1+x}\,, \qquad x \in (-1,1)\,, \qquad (5.5.4)$$
for all $u(x) = \int_{-1}^{x} v(y)\,dy$ with $v \in \mathbf{L}^p(-1,1)$ and $p > 1$.
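The identity stated in Exercise 5.5.1 is easy to test numerically. The sketch below is not from the book; it takes the admissible test function u(y) = 1 − y², computes the principal value integrals with SciPy's Cauchy-weight quadrature, and approximates the outer derivative by a central difference, so all three expressions should agree up to quadrature and differencing error.

```python
import numpy as np
from scipy.integrate import quad

u = lambda y: 1.0 - y * y          # test function with u(+1) = u(-1) = 0
du = lambda y: -2.0 * y            # its derivative

def pv(f, x):
    # Cauchy principal value of int_{-1}^1 f(y)/(y - x) dy
    val, _ = quad(f, -1.0, 1.0, weight='cauchy', wvar=x)
    return val

x, h = 0.3, 1e-5
lhs = (pv(u, x + h) - pv(u, x - h)) / (2 * h)                    # d/dx of the Cauchy integral
mid = pv(du, x)                                                  # integral of u'(y)/(y - x)
rhs = pv(lambda y: (u(y) - u(x)) / (y - x), x) - 2 * u(x) / (1 - x * x)
print(lhs, mid, rhs)               # the three values coincide up to small numerical error
```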

Lemma 5.5.2 For all $s \ge 0$ and $\alpha, \beta > -1$, the operator $\mathcal{D}$ of generalized differentiation is a continuous isomorphism from $\mathbf{L}^{2,s+1}_{\alpha,\beta,1}$ onto $\mathbf{L}^{2,s}_{1+\alpha,1+\beta}$.

Proof Note that, in view of Lemma 2.4.7, the operator $\mathcal{D}$ is defined on $\mathbf{L}^{2,s+1}_{\alpha,\beta}$. Further, we use equation (5.1.15), i.e.,
$$\frac{d}{dx}\left[v^{1+\alpha,1+\beta}(x)\,p_{n-1}^{1+\alpha,1+\beta}(x)\right] = -\big[n(n+\alpha+\beta+1)\big]^{\frac12}\,v^{\alpha,\beta}(x)\,p_n^{\alpha,\beta}(x)\,, \qquad (5.5.5)$$
$n \in \mathbb{N}$, and conclude, for $f \in \mathbf{L}^{2,s+1}_{\alpha,\beta,1}$,
$$\|\mathcal{D}f\|^2_{1+\alpha,1+\beta,s} = \sum_{n=1}^{\infty} n^{2s}\left|\left\langle \mathcal{D}f, p_{n-1}^{1+\alpha,1+\beta}\right\rangle_{v^{1+\alpha,1+\beta}}\right|^2 = \sum_{n=1}^{\infty} n^{2s}\,n(n+\alpha+\beta+1)\left|\left\langle f, p_n^{\alpha,\beta}\right\rangle_{v^{\alpha,\beta}}\right|^2 \sim \|f\|^2_{\alpha,\beta,s+1}\,. \qquad (5.5.6)$$
The lemma is proved. □

Lemma 5.5.2 together with Proposition 5.2.5 yields the following.

Corollary 5.5.3 For all $s \ge 0$ and $\alpha, \beta > -1$ with $\alpha + \beta = 1$, the operator $B$ defined in (5.5.1) is a continuous isomorphism between the spaces $\mathbf{L}^{2,s+1}_{\alpha,\beta}$ and $\mathbf{L}^{2,s}_{\beta,\alpha}$, where (cf. (5.5.2))
$$Bf = \sum_{n=0}^{\infty} (n+1)\left\langle f, p_n^{\alpha,\beta}\right\rangle_{\alpha,\beta}\,p_n^{\beta,\alpha} \qquad \forall\, f \in \mathbf{L}^{2,s+1}_{\alpha,\beta}\,.$$
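In the Chebyshev case α = β = 1/2 (so that α + β = 1), relation (5.5.3) makes the diagonal action of B completely explicit, and Corollary 5.5.3 translates into a simple spectral solver: expand the right-hand side in Chebyshev polynomials of the second kind and divide the coefficients by n + 1. The following sketch is not the book's algorithm; the right-hand side g and the truncation parameters are illustrative assumptions.

```python
import numpy as np

def solve_hypersingular(g, N=30, m=400):
    # Coefficients g_n of g in Chebyshev polynomials of the second kind,
    #   g_n = (2/pi) * int_{-1}^1 g(x) U_n(x) sqrt(1 - x^2) dx,
    # computed with the m-point Gauss rule for the weight sqrt(1 - x^2).
    theta = np.arange(1, m + 1) * np.pi / (m + 1)
    x, s = np.cos(theta), np.sin(theta)
    n = np.arange(N + 1)
    g_hat = (2.0 / (m + 1)) * (np.sin(np.outer(n + 1, theta)) * s * g(x)).sum(axis=1)
    a = g_hat / (n + 1)                       # divide by the eigenvalues n + 1 from (5.5.3)
    def u(t):
        th = np.arccos(np.clip(np.atleast_1d(t), -1.0, 1.0))
        # u(t) = sqrt(1 - t^2) * sum_n a_n U_n(t) = sum_n a_n sin((n + 1) arccos t)
        return np.sin(np.outer(th, n + 1)) @ a
    return u

u = solve_hypersingular(lambda x: np.cos(np.pi * x))
print(u(np.array([-0.5, 0.0, 0.5])))
```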


Proposition 5.5.4 ([28], Prop. 2.3) Under the assumptions of Corollary 5.5.3, the inverse operator of the operator B : L2,s+1 −→ L2,s α,β β,α can be written in the form 2,s B −1 = A1 W0 A0 , where the linear and bounded operators A0 : L2,s β,α −→ L−β,−α ,

2,s+1 2,s+1 2,s+1 2,s+1 are W0 : L2,s −β,−α −→ Lβ−1,α−1 = L−α,−β , and A1 : L−α,−β −→ Lα,β defined by

(A0 f )(x) = a v β,α (x)f (x) −  (W0 f )(x) = a

x

−1

v

−β,−α

b π



1

v β,α (y)f (y) dy , y−x

−1

b (y)f (y) dy − π

(A1 f )(x) = a v −α,−β (x)f (x) −

b π



1 −1



1 −1

v −β,−α (y) ln |x − y| f (y) dy ,

v −α,−β (y)f (y) dy , y−x

respectively. Proof Recalling the notations used in Proposition 5.2.5, in the present situation we have a − ib = eiπβ0 , 0 < β0 < 1, α = β0 , and β = 1 − β0 . With γ0 = 1 − β0 , we have −a − ib = eiπγ0 , β = γ0 , and α = 1− γ0 . Hence,  as a consequence of 2,s , L Corollary 5.2.8 and Eq. (5.2.12) we get A0 ∈ L L2,s β,α −β,−α and, for n ∈ N0 ,  (A0 pnβ,α )(x)

= − −av

β,α

(x)pnβ,α (x) +

b π

By Proposition 5.4.2, the operator W0 Eq. (5.4.2) can be written as −β,−α

W0 pn+1

=−



1

−1

β,α

v β,α (y)pn (y) dy y−x

 −β,−α

= −pn+1

(x) .

(5.5.7)  2,s+1 belongs to L L2,s −β,−α , L−α,−β , and

1 −α,−β p , n + 1 n+1



n ∈ N0 .

(5.5.8)

  , defined in (5.2.15), so that A1 ∈ L L2,s+1 , L2,s+1 The operator A1 is equal to A −α,−β α,β and −α,−β

A1 pn+1

= pnα,β , n ∈ N0 .

1 pα,β , n ∈ N0 , n+1 n β,α = (n + 1)pn , n ∈ N0 (cf. Corollary 5.5.3) completes  

Putting (5.5.7)–(5.5.9) together, we have A1 W0 A0 pnβ,α = α,β

which together with Bpn the proof.

(5.5.9)



Corollary 5.5.5 ([28], Prop. 2.6) For ρ > 0 and τ ≥ 0, the operator B −1 ρ,τ ρ+1,τ +1 defined in Proposition 5.5.4 is continuous from Cγ − ,δ − into C + 1 + 1 , where γ − − γ + = β and δ − − δ + = α with 0 ≤ γ + , δ +
δ0 − 1 such that |H (t)| ≤ c0 t ρ

for t ∈ (0, t0 )

(5.6.3)

and that the function  h : [a, 1] × [−1, 1] −→ C ,

(x, y) → H

1+x 1+y



(1 + y)−η−1

(5.6.4)

is continuous for every a ∈ (−1, 1). Then Cb −→ Cb0,δ is bounded, if δ ≥ max {δ0 , −ρ}, where (a) the operator MH :  γ0 ,δ0 ⊂ C(−1, 1], additionally MH Cb γ0 ,δ0

(b) the operator MH : Cbγ0 ,δ0 −→ C0,δ is bounded, if δ > max {0, δ0 , −ρ}, (c) the function (MH u) (x) is n times continuously differentiable on (−1, 1] for every u ∈ Cbγ0 ,δ0 , if, additionally to (5.6.2), (5.6.3), and (5.6.4), the Mellin kernel H : (0, ∞) −→ C is n times continuously differentiable such that  ∞    (j )  δ +j −1 H (t)t 0 dt < ∞ and H (j )(t) ≤ c0 t ρ−j 0

for t ∈ (0, t0 ) and j = 1, . . . , n , and the functions   (j ) 1 + x hj : [a, 1] × [−1, 1] −→ C , (x, y) → H (1 + y)−η−j −1 , 1+y (5.6.5) j = 1, . . . , n, are continuous for every a ∈ (−1, 1). Moreover,    1   u(y) dy (j ) (j ) 1 + x H (MH u) (x) = MH,j u (x) := 1 + y (1 + y)j +1 −1

(5.6.6)

for all x ∈ (−1, 1] and j = 1, . . . , n. The operators Cbγ0 ,δ0 −→ Cb0,δ+j , MH,j :

j = 1, . . . , n,

  are bounded with MH,j Cbγ0 ,δ0 ⊂ C(−1, 1], if δ ≥ max {δ0 , −ρ}, and the operators MH,j : Cbγ0 ,δ0 −→ C0,δ+j ,

j = 1, . . . , n,

are bounded, if δ > max {0, δ0 , −ρ}.   Proof Set M0 = max |H (t)| : t20 ≤ t ≤ 2 . At first we show that, for −1 < x < 1, 

1 −1

   −γ0 ,−δ0   (y) dy H 1 + x  v ≤ c1 (1 + x)− max{δ0 ,−ρ} ,  1+y  1+y

(5.6.7)



where c1 = c1 (x). Indeed, with the help of the substitution t =

1+x 1+y ,

we get

    −γ0 ,−δ0     ∞   (y) dy |H (t)| 1 + x −γ0 1 + x −δ0 H 1 + x  v dt = 2 −  1+x 1+y  1+y t t t −1 2



1

 =

1+x 1+x 2

 +





1+x

    |H (t)| 1 + x −γ0 1 + x −δ0 dt 2− t t t

=: I1 (x) + I2 (x) ,

where ⎧      1+x ⎪ 1 + x −γ0 1 + x −δ0 ⎪ ρ−1 ⎪ c t dt : 1 + x < t0 , 2 − ⎪ ⎨ 0 1+x t t 2 I1 (x) ≤      1+x ⎪ 1 + x −γ0 1 + x −δ0 ⎪ −1 ⎪ ⎪ M 2 − t dt : 1 + x ≥ t0 , ⎩ 0 1+x t t 2  1 ⎧ ⎪ ρ ⎪ c (1 + x) s ρ−1+γ0 +δ0 (2s − 1)−γ0 ds : 1 + x < t0 , ⎪ 0 ⎨ 1 t =(1+x)s 2 =  1 ⎪ ⎪ ⎪ M0 s −1+γ0 +δ0 (2s − 1)−γ0 ds : 1 + x ≥ t0 , ⎩ 1 2

and, since 1 ≤ 2 −

1+x t

≤ 2 for t ≥ 1 + x,

  I2 (x) ≤ max 1, 2−γ0 (1 + x)−δ0





|H (t)|t −1+δ0 dt ,

(5.6.8)

0

and (5.6.7) is proved. Now we show that MH u : (−1, 1] −→ C is a continuous function for every u∈ Cbγ0 ,δ0 . For this, let a ∈ (−1, 1) and ε > 0 as well as u ∈ Cbγ0 ,δ0 be arbitrarily chosen. Since the function defined in (5.6.4) is uniformly continuous, there is a χ > 0 such that |h(x1 , y) − h(x2 , y)|
max {0, δ0 , −ρ}. In virtue of MH,j = v 0,j MHj , the part of assertion (c) concerned with the boundedness of the operators under consideration follows immediately. It remains to prove that   (5.6.9) (MH u)(j ) )(x) = MH,j u (x) holds true for all x ∈ (−1, 1], j = 1, . . . , n, and u ∈ Cbγ0 ,δ0 . For x, x0 ∈ (−1, 1] and ε ∈ (0, 1), we have 

x





MH,1 u (z) dz

x0

 =

x

x0

 =

x



1 −1



x0

 =

=

1

−1+ε



H

1

−1+ε





1

−1+ε



x

 H

+

H

x0

-

1+z 1+y





u(y) dy dz (1 + y)2

−1+ε 

−1

1+z 1+y

1+x 1+y



H



1+z 1+y



u(y) dy dz (1 + y)2

u(y) dz dy + (1 + y)2



 −H

1 + x0 1+y

.



x x0



−1+ε

−1

u(y) dy + 1+y

H 

x

x0





1+z 1+y −1+ε

−1



u(y) dy dz (1 + y)2

H



1+z 1+y



u(y) dy dz . (1 + y)2



Note that, for 0 < ε < 1,      ∞  −1+ε   1 + z u(y) dy    H |H  (t)|t δ0 dt uγ0 ,δ0 ,∞   ≤ max 1, 2−γ0 (1 + z)−δ0 1+z  −1 1+y 1+y  ε

(cf. (5.6.8)) tends to zero uniformly with respect to z between x0 and x. Hence  x  x0

 MH,1 u (z) dz =

 1 −1+ε

-

 H

1+x 1+y



 −H

1 + x0 1+y

.

u(y) dy = (MH u) (x) + c , 1+y

which yields (5.6.9) in case j = 1. The remaining cases follow by induction.

 

The following proposition is devoted to the special case δ0 = δ in Proposition 5.6.1(a), (b). Proposition 5.6.2 Suppose that γ0 < 1, δ0 ∈ R, and 



|H (t)|t δ0 −1 dt < ∞ ,

(5.6.10)

0

Moreover, assume that there are c0 , ρ ∈ R, t0 ∈ (0, 1), and η > δ0 − 1 such that δ0 + ρ ≥ 0, if γ0 ≤ 0, and δ0 + ρ > 0, if γ0 > 0, as well as |H (t)| ≤ c0 t ρ , t ∈ (0, t0 ), and that the function  h : [a, 1] × [−1, 1] −→ C ,

(x, y) → H

1+x 1+y



(1 + y)−η−1

is continuous for every a ∈ (−1, 1). Then the operator MH : Cγ0 ,δ0 −→ C0,δ0 defined by (5.6.1) is bounded, where  v

0,δ0

   0,δ0 MH u (−1) = v u (−1) 



H (t)t δ0 −1 dt

0

∀u ∈ Cγ0 ,δ0

(5.6.11)

and, in case of γ0 = 0,  MH L( C0,δ

0

)



≤ 0

|H (t)|t δ0 −1 dt



and MH L( Cb

0,δ0 )

≤ 0



|H (t)|t δ0 −1 dt .

(5.6.12)

Proof By Proposition 5.6.1(a) we already have the boundedness of MH :   Cb0,δ0 ∩ C(−1, 1]. Hence it Cb0,δ0 together with MH Cbγ0 ,δ0 −→ Cbγ0 ,δ0 ⊂   v 0,δ0 MH u (x) exists and remains to show that, for u ∈ Cγ0 ,δ0 , the limit lim x→−1+0

is finite.



In case of γ0 ≤ 0 we proceed as follows. Since in that case uδ0 := v 0,δ0 u is bounded, we have lim

x→−1+0

  v 0,δ0 MH u (x) =

 lim

H

x→−1+0 −1

 =



1

lim

x→−1+0

∞ 1+x 2

1+x 1+y



1+x 1+y

δ0

uδ0 (y) dy 1+y

  H (t)t δ0 −1 uδ0 t −1 (1 + x) − 1 dt

and 

∞ 1+x 2

       H (t)t δ0 −1 uδ0 t −1 (1 + x) − 1  dt ≤ v 0,δ0 u∞



|H (t)|t δ0 −1 dt .

0

Thus, we can apply Lebesgue’s dominated convergence theorem by using the assumption (5.6.10), and (5.6.11) follows. Now let γ0 > 0 and δ0 + ρ > 0. Setting uγ0 ,δ0 = v γ0 ,δ0 u, we write  (1 + x) (MH u) (x) = δ0

0 −1

+

 1



1+x 1+y

H 0



1+x 1+y

δ0

uγ0 ,δ0 (y) dy (1 − y)γ0 (1 + y)

=: J1 (x) + J2 (x) , (5.6.13) where  J1 (x) =



1+x

  H (t)t δ0 −1 uδ0 t −1 (1 + x) − 1 dt

and  ∞        H (t)t δ0 −1 uδ0 t −1 (1 + x) − 1  dt ≤ v 0,δ0 u∞,[−1,0] 1+x



H (t)t δ0 −1 dt .

0

Hence, we can again apply Lebesgue’s dominated convergence theorem to get lim

x→−1+0

   0,δ0 J1 (x) = v u (−1) 0



H (t)t δ0 −1 dt .

(5.6.14)



For J2 (x), −1 < x < t0 − 1, we obtain 1 1 +

 J2 (x) ≤ c0

x 1+y

0

ρ+δ0



1

≤ (1 + x)ρ+δ0 0

uγ0 ,δ0 (y) dy (1 − y)γ0 (1 + y)

 γ ,δ  dy v 0 0 u −→ 0 if ∞ γ (1 − y) 0

x −→ −1 .

This together with (5.6.13) and (5.6.14), yields (5.6.11). The estimates in (5.6.12) are consequences of         0,δ0   0,δ0 M u (x) ≤ u v  v   H ∞

    δ0   dy H 1 + x  1 + x   1+y 1 + y 1 +y −1

     = v 0,δ0 u ∞

1

∞ 1+x 2

|H (t)|t δ0 −1 dt ,

u∈ Cb0,δ0 . (5.6.15)  

The proposition is proved.

Remark 5.6.3 If H (t) ≥ 0, t > 0, then, in case of u = v 0,−δ0 , we have exclusively equalities in (5.6.15) and hence, under the conditions of Proposition 5.6.2,  MH L( C0,δ

0

)

= MH L( Cb

0,δ0 )



=

H (t)t δ0 −1 dt .

0

The following proposition combines the situation considered in Proposition 5.6.1(c) with Proposition 5.6.2. For this, we define the Banach spaces C(n) 0,δ0 , n ∈ N, by   (j ) C(n) ∈ C0,δ0 +j , j = 1, . . . , n 0,δ0 = f ∈ C0,δ0 : f (0) (∞) Furthermore, set C0,δ0 = C0,δ0 and C0,δ0 =

and f  C(n) = 0,δ0

K

n   (j )  f  0,δ j =0

0 +j,∞

.

(n) C0,δ0 .

n∈N0

Exercise 5.6.4 Show that f ∈

(n) C0,δ1

and g ∈ C0,δ2 imply fg ∈ C0,δ1 +δ2 . (n)

(n)

Proposition 5.6.5 Let γ0 < 1, δ0 , ρ ∈ R, and the n times continuously differentiable function H : (0, ∞) −→ C satisfy the conditions 

∞ 0

  (j )  δ0 +j −1 dt < ∞ , H (t) t

j = 0, 1, . . . , n ,



and    (j )  H (t) ≤ c0 t ρ−j ,

0 < t < t0 ,

for some c0 , t0 ∈ (0, ∞) .

Moreover, assume that the functions  hj : [a, 1] × [−1, 1] −→ C ,

(x, y) → H (j )

1+x 1+y



(1 + y)−η−j −1 ,

j = 0, 1, . . . , n ,

are continuous for some η > δ0 − 1 and for every a ∈ (−1, 1). Then the operator Cγ0 ,δ0 −→ C(n) MH : 0,δ0 is bounded, if δ0 + ρ ≥ 0 and γ0 ≤ 0 or δ0 + ρ > 0 and γ0 > 0. Proof By Proposition 5.6.1(a), (c), we already have that (MH u)(j ) = MH,j u for all u ∈ Cbγ0 ,δ0 , j = 1, . . . , n, and that the operators MH,j : Cb −→ Cb0,δ0 +j ,  γ0 ,δ0 j = 0, 1, . . . , n, are bounded (MH,0 := MH ) with MH,j Cbγ0 ,δ0 ⊂ C(−1, 1]. Furthermore, applying Proposition 5.6.2 to the functions Hj (t) = H (t)t j yields, as in the proof of Proposition 5.6.1(c), the boundedness of the operators MH,j : Cγ0 ,δ0 −→ C0,δ0 +j , j = 0, 1, . . . , n, and we are done.   tρ meets the conditions (1 + t)μ of Propositions 5.6.1 and 5.6.2 as well as of Remark 5.6.3 and Proposition 5.6.5 (cf. the following Exercise 5.6.6), if −δ0 < ρ < μ − δ0 and δ0 < η ≤ μ − ρ, we derive the following corollary concerned with operators of the form

Since, for μ > 0 and ρ ∈ R, the function H (t) =



 Mμ,ρ u (x) =



1 −1

(1 + x)ρ (1 + y)μ−ρ−1 u(y) u(y) dy , (2 + y + x)μ

(5.6.16)

−1 < x < 1, which play an important role in boundary integral equations in twodimensional elasticity theory (see, for example, [45, 88]). Exercise 5.6.6 Show that, for ρ, μ ∈ R and the function H : (0, ∞) −→ R, tρ , we have t → (1 + t)μ H (j )(t) =

αj 0 t ρ + αj 1 t ρ−1 + . . . + αjj t ρ−j , (1 + t)μ+j

j ∈ N0 ,

with certain real numbers αj k . Corollary 5.6.7 Let γ0 < 1 and −δ0 < ρ < μ − δ0 . Then (a) the operator Mμ,ρ : Cbγ0 ,δ0 −→ Cb0,δ is bounded, if δ ≥ δ0 , where we   additionally have Mμ,ρ Cbγ0 ,δ0 ⊂ Cbγ0 ,δ0 ∩ C(−1, 1],



(b) the operator Mμ,ρ : Cbγ0 ,δ0 −→ C0,δ is bounded, if δ > max {0, δ0 }, (c) the operator Mμ,ρ : Cγ0 ,δ0 −→ C0,δ0 is bounded, where      0,δ0 0,δ0 v Mμ,ρ u (−1) = v u (−1)

∞ t ρ+δ0 −1 dt

0

(1 + t)μ

∀u ∈ Cγ0 ,δ0

and, in case γ0 = 0,   Mμ,ρ   LC



0,δ0

  = Mμ,ρ 

  L Cb0,δ



∞ t ρ+δ0 −1 dt

= 0

0

(1 + t)μ

,

(d) the operator Mμ,ρ : Cγ0 ,δ0 −→ C(n) 0,δ0 is bounded for all n ∈ N, in particular   (∞) we have Mμ,ρ Cγ0 ,δ0 ⊂ C . 0,δ0

We remark that, in general, the operator Mμ,0 does not map Cγ0 ,δ0 into C, as the simple integrals 

1 −1



dy 3+x = ln 2+y+x 1+x

and

1

−1

(1 + y) dy 3+x (1 + x)(2 − x) = ln − (2 + y + x)2 1+x 3+x

show. But, 

1

−1

(1 + x) dy 2−x = (2 + y + x)2 3+x

is a continuous function on [−1, 1]. As a further example we consider the operator    Mcχ u (x) = sin χ

1

−1

(1 + x) u(y) dy , (1 + x)2 + 2 cos χ (1 + x)(1 + y) + (1 + y)2

(5.6.17)

−1 < x < 1, which plays an important role when dealing with boundary integral equations corresponding to boundary value problems in domains with corners and where the given number χ ∈ (0, 2π)\{π} refers to the angle at the respective corner (cf., for example, [15, 96], or [14, Section 8.1.3]). It turns out that Mcχ = MHχc with t sin χ Hχc (t) = 2 . The function H = Hρc satisfies (5.6.2) for −1 < δ0 < t + 2t cos χ + 1 1 and (5.6.3) for ρ = 1. In (5.6.4) we can choose η = 0. Taking into account Hχc (t) =

1 sin χ

1+



t t +cos χ sin χ

2

and the following Exercise 5.6.8, we can also apply Proposition 5.6.5.



Exercise 5.6.8 Show that, for ρ, μ ∈ R and the function H : (0, ∞) −→ R, tρ t → , we have (1 + t 2 )μ H (j )(t) =

βjj t ρ+j + βj,j −1 t ρ+j −1 + . . . + βj 0 t ρ , (1 + t 2 )μ

j ∈ N0 ,

with certain βj k ∈ R. Corollary 5.6.9 Let γ0 < 1 and −1 < δ0 < 1. Then Cbγ0 ,δ0 −→ Cb0,δ is bounded, where (a) the operator Mcχ :   Mcχ Cb0,δ ∩ C(−1, 1], Cbγ0 ,δ0 ⊂ (b) the operator Mcχ (c) the operator Mcχ

if

δ ≥ δ0 ,

: Cbγ0 ,δ0 −→ C0,δ is bounded, if δ > max {0, δ0 }, : Cγ0 ,δ0 −→ C0,δ0 is bounded, where

     0,δ0 c 0,δ0 v Mχ u (−1) = v u (−1)



0

t δ0 sin χ dt t 2 + 2t cos χ + 1

∀u ∈ Cγ0 ,δ0

and, in case γ0 = 0,  c M   χ L C

 0,δ0

  = Mcχ 

  Cb0,δ L

 =

0

0



t δ0 | sin χ| dt . t 2 + 2t cos χ + 1

(n) (d) the operator Mcχ : Cγ0 ,δ0 −→ C is bounded for all n ∈ N, in particular we   0,δ0 (∞) have the relation Mμ,ρ Cγ0 ,δ0 ⊂ C . 0,δ0

Remark 5.6.10 Since, for −1 < a < 1, 

∞ 0

π dt = √ , t 2 + 2ta + 1 2 1 − a2

in case of δ0 = 0, Corollary 5.6.9 yields   π u(−1) π sin χ Mcχ u (−1) = u(−1) = sgn (π − χ) 2| sin χ| 2

∀u ∈ Cγ0 ,0

and  c  c  M   χ L(C[−1,1]) = Mχ

  L Cb0,0

=

π . 2
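The integral used here is easy to confirm numerically; a minimal check, not from the book and assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

for a in (-0.9, 0.0, 0.5):
    val, _ = quad(lambda t: 1.0 / (t * t + 2.0 * t * a + 1.0), 0.0, np.inf)
    print(val, np.pi / (2.0 * np.sqrt(1.0 - a * a)))   # the two columns agree
```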

Now let us ask under which conditions operators of the form (5.6.1) map functions from weighted Lp -spaces into weighted spaces of continuous functions.



Proposition 5.6.11 Let 1 < p < ∞, p1 + q1 = 1, and γ0 < that  ∞ |H (t)|q t (δ0 +1)q−2 dt < ∞

1 q.

Moreover, assume

(5.6.18)

0

and that there are ρ ∈ R, t0 ∈ (0, 1), and η > δ0 − |H (t)| ≤ c0 t ρ

1 q

such that

for t ∈ (0, t0 )

(5.6.19)

and that the function h : [a, 1] × [−1, 1] −→ C defined in (5.6.4) is continuous for every a ∈ (−1, 1). Then   p p Cb0,δ is bounded, where MH Lv γ0 ,δ0 ⊂ (a) the operator MH : Lv γ0 ,δ0 −→   C(−1, 1] holds true, if δ ≥ max −ρ, δ0 + p1 ,   p (b) the operator MH : Lv γ0 ,δ0 −→ C0,δ is bounded, if δ > max −ρ, δ0 + p1 , (c) (MH u) (x) is n times continuously differentiable on (−1, 1] for every u ∈ Cbγ0 ,δ0 , if, additionally to (5.6.18) and (5.6.19), the function H : (0, ∞) −→ C is n times continuously differentiable such that 

∞

 H (j ) (t)q t (δ0 +j +1)q−2 dt < ∞

  and H (j )(t) ≤ c0 t ρ−j

(5.6.20)

0

for t ∈ (0, t0 ) and j = 1, . . . , n, and the functions hj , j = 1, . . . , n, defined (j ) in (5.6.5),  are continuous for every a ∈ (−1, 1). Moreover, (MH u) (x) = MH,j u (x) for all x ∈ (−1, 1] and j = 1, . . . , n (cf. (5.6.6)). The operators Cb0,δ+j , MH,j : Lv γ0 ,δ0 −→ p

j = 1, . . . , n,

    p are bounded with MH,j Lv γ0 ,δ0 ⊂ C(−1, 1], if δ ≥ max −ρ, δ0 + p1 , and the operators p

MH,j : Lv γ0 ,δ0 −→ Cb0,δ+j ,

j = 1, . . . , n,

  are bounded, if δ > max −ρ, δ0 + p1 . Proof We proceed as in  the proof of Proposition 5.6.1, set again M0 max |H (t)| : t20 ≤ t ≤ 2 , and show at first that, for −1 < x < 1, 

    q −γ0 q,−δ0 q   (y) dy −q max −ρ,δ0 + p1 q H 1 + x  v ≤ c1 (1 + x) ,  1+y  (1 + y)q −1

=

1

(5.6.21)



where c1 = c1 (x). For this, with the help of the substitution t = 

1+x 1+y ,

we compute

  q −γ0 q,−δ0 q   (y) dy H 1 + x  v   1+y (1 + y)q

1 −1

 =

∞ 1+x 2

 =

    1 + x −γ0 q 1 + x −(δ0 +1)q 1 + x |H (t)|q 2 − dt t t t2

1+x 1+x 2

 +





1+x

    1 + x −γ0 q 1 + x −(δ0 +1)q 1 + x |H (t)| 2 − dt t t t2 q

=: I1 (x) + I2 (x) , where ⎧      1+x 1 + x −γ0 q 1 + x −(δ0 +1)q ⎪ q ⎪ ⎪ c0 (1 + x) t qρ−2 2 − dt : 1 + x < t0 , ⎪ ⎨ 1+x t t 2 I1 (x) ≤ −γ0 q     1+x ⎪ ⎪ 1 + x −(δ0 +1)q q ⎪ −2 2 − 1 + x ⎪ M (1 + x) t dt : 1 + x ≥ t0 , ⎩ 0 1+x t t 2 ⎧  1 ⎪ q ⎪ ⎪ c0 (1 + x)qρ s qρ−2+(γ0 +δ0 +1)q (2s − 1)−γ0 q ds : 1 + x < t0 , ⎪ 1 ⎨ t =(1+x)s 2 =  1 ⎪ ⎪ q −2+(γ0 +δ0 +1)q (2s − 1)−γ0 q ds ⎪ ⎪ M : 1 + x ≥ t0 , ⎩ 0 1 s 2

and, since 1 ≤ 2 −

1+x t

≤ 2 for t ≥ 1 + x,

  −q I2 (x) ≤ max 1, 2−γ0 q (1 + x)

 ∞  δ0 + p1

|H (t)|q t (δ0 +1)q−2 dt ,

0

and (5.6.21) is proved. We continue with showing that MH u : (−1, 1] −→ C is a continuous function p p for every u ∈ Lv γ0 ,δ0 . To this end, let a ∈ (−1, 1) and ε > 0 as well as u ∈ Lv γ0 ,δ0 be arbitrarily chosen. Due to the continuity of the function defined in (5.6.4), there is a χ > 0 such that |h(x1 , y) − h(x2 , y)|
0,      sup (1 + |z|)1+ε , h (z) : z ∈ γ0 ,δ0 < ∞ . Note that condition (h1) implies that , h : γ ,δ −→ C is holomorphic (see [73, Corollary 3.5]). Under the assumption that a(x) − ib(x) = 0 for all x ∈ [−1, 1], to the operator a(x) + ib(x) p p and the (closed) A : Lα,β −→ Lα,β we associate the function c(x) = a(x) − ib(x) c c curve A = −1 ∪ 0c ∪ +1 , where # c −1

=

$ a(−1) + b(−1) cot π(ξ − it) + , h(ξ − it) :t ∈R , a(−1) − ib(−1) #

0c = {c(x) : −1 < x < 1} ,

c +1 =

$ a(1) − b(1) cot π(ζ − it) :t∈R , a(1) − ib(1)

1+α and ξ = 1+β p as well as ζ = p . Here, by R we refer to the two-point compactification [−∞, ∞] of the real line. Due to the following exercise, under the conditions (h1) and (h2) A is really a closed curve, since , h(ξ ± ∞) = 0 as well as

cot π(ξ − it) =

eiξ +t + e−iξ −t cos π(ξ − it) e2iξ + e−2t = i iξ +t = i 2iξ −iξ −t sin π(ξ − it) e −e e − e−2t

  and, consequently, cot π ξ − i(±∞) = ±i. We define the orientation of A due to its above parmametrization by x ∈ (−1, 1) and t ∈ R, i.e., 1 −→ c(−1) −→ c(1) −→ 1. If A does not run through the zero point (i.e., 0 ∈ A ), then the winding number of A around 0 ∈ C is denoted by wind A .

328

5 Mapping Properties of Some Classes of Integral Operators

Exercise 5.7.1 Assume that condition (h1) is satisfied with −1 < β < p − 1 1 and γ < ξ = 1+β p < δ. Use the fact that, for a function u ∈ L (R), we have (F u)(±∞) = 0 (cf. [208, Section 1.8, Theorem 1]), where  (F u)(η) =



−∞

e−iηs u(s) ds

is the Fourier transform of the summable function u(s), and show that , h(ξ ± i∞) = 0. Proposition 5.7.2 Let 1 < p < ∞, −1 < α, β < p − 1, and γ < ξ = 1+β p < δ. Assume that the coefficients a, b : [−1, 1] −→ C as well as the Mellin kernel h : (0, ∞) −→ C are continuous functions, where h(t) satisfies conditions (h1) p p and (h2). Then the operator A : Lα,β −→ Lα,β defined by (5.7.1) is Fredholm if and only if a(x) − ib(x) = 0 for all x ∈ [−1, 1] and 0 ∈ A , where the Fredholm index of A is equal to −wind A . A comprehensive proof of this theorem together with the respective references to the existing literature can be found in [73, Section 4], where the more general situation of an operator of the form a(x)u(x)+

b(x) π



1 −1

u(y) dy + y−x





1 −1

h

1+x 1+y



u(y) dy + 1+y





1 −1

k

1−x 1−y



u(y) dy 1−y

with piecewise continuous functions a(x) and b(x) as well as two Mellin operators with fixed singularities at −1 and +1 is studied. p p It is well known that, in case of h(t) ≡ 0, the operator A : Lα,β −→ Lα,β in (5.7.1) is one-sided invertible if it is Fredholm (see, for example, [61, Chapter 3]). p More general, one can state, that the homogeneous equations Au = 0 in Lα,β or q A∗ v = 0 in the dual space L(1−q)α,(1−q)β have only the trivial solution. Here, the dual space is defined with respect to the unweighted inner product  f, g =

1 −1

f (x)g(x) dx .

That means, that in such a situation a Fredholm operator is invertible if and only if it is Fredholm and has index 0. Concerning equations with operators of the type (5.7.1), in [73, Section 6] the following is proved. Proposition 5.7.3 Let 1 < p < ∞ and −1 < α, β < p − 1 as well as the continuous function h : (0, ∞) −→ C fulfil conditions (h1) and (h2). Then, for p the operator A defined in (5.7.1), the homogeneous equations Au = 0 in Lα,β or

5.8 Solvability of Nonlinear Cauchy Singular Integral Equations

329

q

Av = 0 in L(1−q)α,(1−q)β have only the trivial solution, if one of the following conditions is satisfied: (a) a ∈ C \ {0} and b ≡ 0, (b) a ∈ C and b ∈ C \ {0}, α = 0,       , 1+β + it  : t ∈ R > 0. (c) a, b ∈ C and inf a + b cot π 1+β p + it + h p Corollary 5.7.4 Let h : (0, ∞) −→ C be a continuous function satisfying conditions (h1) and (h2) for some γ , δ ∈ R with γ < p1 < δ. Then the equation 



1+x h u(x) + 1+y −1 1



u(y) dy = f (x) 1+y

p is uniquely solvable in Lp (−1, 1) for 1), if and only if   every  f ∈ L (−1,  t =+∞  1 1 h p + it = 0. 1 +, h p + it = 0 for all t ∈ R and arg 1 + , t =−∞

5.8 Solvability of Nonlinear Cauchy Singular Integral Equations Nonlinear singular integral equations of the form F (x, u(x)) =

1 π



1 −1

u(y) dy + c0 , y−x

−1 < x < 1 ,

(5.8.1)

and 1 u(x) = π



1 −1

F (y, u(y)) dy + f (x) + c0 , y−x

−1 < x < 1 ,

(5.8.2)

with the unknown function u(x) and the unknown constant c0 ∈ R occur, for example, when investigating and solving certain free boundary value problems by means of boundary integral methods, some examples of which we consider in Sect. 8.3 (cf. also [184–187] and [69, 78, 79]). In [215], the existence of a solution u(x) with u ∈ Lp (−1, 1) and some p > 1 has been proved under the conditions |f  (x)|, |Fx (x, z)| ≤ 0 (1 − x 2 )−δ ,

−1 ≤ x ≤ 1 , z ∈ R ,

(5.8.3)

on the given data, where 0 and δ < 12 are some constants. In the examples, considered in the above mentioned literature, condition (5.8.3) is only satisfied for δ = 12 . Here we show that the condition on δ can really be weakened. In particular, if we maintain all the other assumptions in [215] even δ > 12 is possible. We follow the technique of [214, 215] based on the application of Schauder’s fixed point theorem.

330

5 Mapping Properties of Some Classes of Integral Operators

5.8.1 Equations of the First Type We write Eq. (5.8.1) in the form F (x, u(x)) = (Su)(x) + c0 ,

−1 < x < 1 ,

(5.8.4)

and search for continuous functions u : [−1, 1] −→ R and real numbers c0 satisfying (5.8.4) and the additional condition  u(x) =

x −1

v(x) dx ,

−1 ≤ x ≤ 1 ,

u(1) = 0 ,

(5.8.5)

for some function v ∈ Lp = Lp (−1, 1) and some real number p > 1. In particular, u(−1) = u(1) = 0 is satisfied for every solution we are interested in. The set of all functions u ∈ C[−1, 1] of the form (5.8.5) we will denote by W1+ 0 , i.e., W1+ 0 =

# u ∈ C[−1, 1] : ∃ v ∈ Lp



with u(x) =

p>1

x

−1

$ v(y) dy ∀ x ∈ [−1, 1]

and u(1) = 0 .

In order to prove the existence of a solution u ∈ W1+ 0 of Eq. (5.8.4) we assume that the following assumption is fulfilled: (N1) The function F : [−1, 1] × R −→ R, (x, z) → F (x, z) possesses a continuous partial derivative Fz and a partial derivative Fx , which is continuous w.r.t. z ∈ R for almost all x ∈ [−1, 1] and measurable w.r.t. x ∈ [−1, 1] for all z ∈ R (Carathéodory condition). Furthermore, there exist constants 0 , 1 , 2 ≥ 0 and δ ≥ 0 such that 1 2 < 1 and − 1 ≤ Fz (x, z) ≤ 2 , |Fx (x, z)| ≤ 0 (1 − x 2 )−δ , δ
1 and that u(x) =

x

−1

v(y) dy with v ∈ Lp is a solution of (5.8.4). By using (5.5.4), we

can differentiate equation (5.8.4) and get the relation a v − Sv = g ,

(5.8.9)

5.8 Solvability of Nonlinear Cauchy Singular Integral Equations


where a(x) = Fz (x, u(x)) and g(x) = −Fx (x, u(x)) . Since, in view of assumption (N1), the function a(x) is continuous, there exists a continuous functions α : [−1, 1] −→ (0, 1) such that a(x) + i =  1 + [a(x)]2 eiπα(x) for all x ∈ [−1, 1]. Moreover, the solution of (5.8.9) has to fulfil the condition 

1 −1

v(x) dx = 0 .

(5.8.10)

To find an explicit expression for the solution v ∈ Lp of problem (5.8.9) and (5.8.10) we use the results of [61, §9.5]. With P = 12 (I − iS) and Q = I − P = 12 (I + iS) we can write Eq. (5.8.9) in the equivalent form Bv := [P(a − i) + Q(a + i)]v = g . −2πiα(x), then, for all sufficiently small p > 1, the If we set c(x) = a(x)−i a(x)+i = e function c admits a generalized Lp -factorization c(x) = c− (x)(x − i)−1 c+ (x), where

c− (x) =

.  1 (x − i)[a(x) − i] x −i α(y) dy  exp −iπα(x) + = exp[π(S α)(x)] , 1−x y − x −1 (1 − x) 1 + [a(x)]2

 c+ (x) = (1 − x) exp −iπα(x) −

1 −1

α(y) dy y−x

. =

(1 − x)[a(x) − i]  exp[−π(S α)(x)] . 1 + [a(x)]2

Hence, B : Lp −→ Lp is a Fredholm operator with index 1, and a right inverse of B is given by B (−1) =

−1 ! " c+   d c+ P(a +i)+Q(a −i) I = d −1 aI +S I, a+i a−i 1 + a2

(5.8.11)

where  d(x) = (1 − x) 1 + [a(x)]2 exp[−π(Sα)(x)] .

(5.8.12)

Because of (5.8.7) and (5.8.8) we have g ∈ Lp for all sufficiently small p > 1. We show that v = B (−1)g is the solution of (5.8.9) and (5.8.10). For this we have to prove that v = B (−1)g satisfies (5.8.10). Indeed, taking into account relation (5.8.11) we conclude that (a + i)c+ v lies in the image of the operator P(a + i)I + Q(a − i)I,



which implies (cf. [61, §9.5]) .  1 dx x−i 0= [a(x) + i]c+ (x)v(x) c− (x) − c+ (x) x − i −1  =

1 −1

 [a(x) + i] [c(x) − 1]v(x) dx = −2i

1

−1

v(x) dx .

  By integrating v(y) = B (−1)g (y) from −1 to x, we see that every solution u ∈ W1+ 0 of problem (5.8.4) is a solution of the fixed point equation u = T u, where

 (T u) (x) =

u ∈ C0 ,

(5.8.13)

T (y, u(y)) dy

(5.8.14)

x −1

and, due to (5.8.11), 1 a(x)g(x) + T (x, u(x)) = 1 + [a(x)]2 πd(x)



1 −1

d(y)g(y) dy , 1 + [a(y)]2 y − x

(5.8.15)

and where C0 = (C0 [−1, 1], .∞ ) denotes the Banach space of all continuous functions f : [−1, 1] −→ R satisfying the boundary conditions f (−1) = f (1) = 0. The following proposition implies that the fixed point problem (5.8.13) is equivalent to our original problem (5.8.9). Proposition 5.8.1 Every solution u ∈ C0 of the fixed point Eq. (5.8.13) is also a solution of problem (5.8.4). Proof Of course, to prove this proposition it is sufficient to show that T (., u) belongs to Lp for some p > 1 if u ∈ C0 . But, this is a consequence of Lemma 5.8.3 below.   In what follows we assume un ∈ C0 for all n ∈ N and u ∈ C0 as well as un − u∞ −→ 0 if n −→ ∞. Moreover, we use the notations a(x) = Fz (x, u(x)), an (x) = Fz (x, un (x)), and analogous notations for functions depending on u or un . The assumptions on the continuity of the partial derivatives Fz and Fx guarantee an − a∞ −→ 0

and gn −→ g a.e. if n −→ ∞ .

(5.8.16)

With γ (x) := arctan a(x) = we can write d(x) =

π − πα(x) 2

 √ 1 − x 2 1 + [a(x)]2 e(S γ )(x). From (5.8.15) we conclude

T (x, u(x)) = h(x) sin[γ (x)] + M(x, u(x)) cos[γ (x)] ,

(5.8.17)

5.8 Solvability of Nonlinear Cauchy Singular Integral Equations

where h(x) = 

g(x) 1 + [a(x)]2

333

= g(x) cos[γ (x)] and

 1 1  1 1 M(x, u(x)) = v − 2 ,− 2 (x) e−(S γ )(x) Sv 2 , 2 h eS γ (x) . The following lemma is due to Wolfersdorf [215, Proof of (A2) in the Appendix]. Lemma 5.8.2 For a continuous function f : [−1, 1] −→ R satisfying |f (x)| ≤ γ < ∞, x ∈ [−1, 1],   π π and for κ ∈ − 2γ , 2γ , we have the inequality 1 π



1 −1

eκ(S f )(x) dx 1 . ≤ √ 2 cos(κγ ) 1−x

Lemma 5.8.3 Let 1 < p < δ −1 , where δ > 0 is from assumption (N1). Then h ∈ Lp and hn sin γn − h sin γ Lp −→ 0 if n −→ ∞. Moreover, hLp ≤ c, where the finite constant c = c(δ, p) does not depend on u ∈ C0 . Proof In view of the definition of h(x) and of the estimate (5.8.7), we have 

1

p

−1

|h(x)|p dx ≤ 0



1 −1

(1 − x 2 )−δp dx =: c(δ, p)p < ∞ .

By (5.8.16) we get hn (x) sin γn (x) −→ h(x) sin γ (x) for almost all x ∈ [−1, 1], and (5.8.7) implies |hn (x) sin γn (x) − h(x) sin γ (x)|p ≤ 2p 0 (1 − x 2 )−δp . p

Now, from Lebesgue’s dominated convergence theorem we conclude the assertion.   Let us introduce the notations 2ω± := arctan 2 ± arctan 1 . Because of 2|ω− | < π2   1 + 2 3 and 2ω+ = arctan 1− ∈ 0, π2 we have 2|ω > 2 and 2ωπ+ > 2. Moreover, −| 1 2 1+

since 0 < δ
0, 0 < ϑ < 1+ε n (x)|κ as functions " : [0, ∞) −→ R, z → zϑ and fn : [−1, 1] −→ R, x → |R well as the number ν := κ(1 + ε)(1 + ϑ) we can estimate

 1 −1

|fn (x)|"(|fn (x)|) dx

=

=



=

 1 −1

 1 −1

(1 − x 2 )−

(1 − x 2 )

% 1 −1

% 1 −1

1+ϑ 2

exp[−κ(1 + ϑ)(Sμn )(x)] dx

ε − 2(1+ε) − ϑ2

(1 − x 2 )

 ε − 2(1+ε) + ϑ2 2 (1 − x )

1+ε ε

& 1 ϑ(1+ε) (1 − x 2 )− 2 − 2ε dx

1 − 2(1+ε)

& dx ε 1+ε

ε 1+ε

exp[−κ(1 + ϑ)(Sμn )(x)] dx % 1 −1

& 1 (1 − x 2 )− 2 exp[−ν(Sμn )(x)] dx

  ν 1  2 )− 2ν −S μn  1+ε ≤ c e − . (1  ν L

π cos(νω+ )

.

1 1+ε

1 1+ε

,

where we used Lemma 5.8.2. Since μn − μ∞ −→ 0 and since the operator S : Lq −→ Lq is continuous for 1 < q < ∞, it follows Sμn −→ Sμ in measure, which implies, due to the monotonicity of the function ez , that fn −→ f in measure.

5.8 Solvability of Nonlinear Cauchy Singular Integral Equations

335

From the lemmata of Vallée-Poussin and Vitali we obtain  x  x dy ∀ x ∈ [−1, 1] , Rn (y) dy −→ R(y) −1

  n  as well as R



−1

   κ . This yields the assertion. −→ R L

 

Lemma 5.8.5 For the function φ(x) defined in (5.8.21) and the respective functions φn (x), we have φn − φLκ −→ 0.   κ < ε < 2ωπ+ and represent φ(x) in the form Proof Let max κ, 3−2δκ φ(x) = ψ(x)χ(x) ,

(5.8.23)

where and χ(x) = (1 − x 2 )− 2ε e(S μ)(x) . 1

ε+κ

ψ(x) = h(x)(1 − x 2 ) 2εκ

εκ . Then Furthermore, let p := ε and q := ε−κ ψL q χLp , where, due to Lemma 5.8.2,

1 p

+

1 q

= κ1 , which implies φLκ ≤

χLp ≤ c1 (ω+ , ε)

(5.8.24)

(cf. (5.8.22)) and - ψL q ≤ 0

1 −1

(1 − x ) 2

ε+κ−2δεκ 2(ε−κ)

. q1 dx

:= c2 (κ, ε) < ∞

(5.8.25)

because of ε+κ−2δεκ 2(ε−κ) > −1. In the same manner as in the proof of Lemma 5.8.4 we can show that χn −→ χ in Lp . Since ψn −→ ψ a.e. and q q |ψn (x) − ψ(x)| ≤ (2 0 ) (1 − x 2 )

ε+κ−2δεκ 2(ε−κ)

,

q . Hence, Lebesgue’s dominated convergence theorem yields ψn −→ ψ in L φn −→ φ in Lκ .  

Corollary 5.8.6 With the operator S = ωSω−1 I defined by (5.8.21), the convergence   Sφ  n − Sφ



−→ 0

if n −→ ∞

takes place. Proof In virtue of Lemma 5.8.5 it remains to show that the operator S : Lκ −→ Lκ is continuous, which is equivalent to the continuity of S : Lκω −→ Lκω . Thus, due to

336

5 Mapping Properties of Some Classes of Integral Operators κ

Proposition 3.2.9 and Exercise 3.2.8, we have to prove that ω ∈ Lκ and ω−1 ∈ L κ−1 (cf. also Corollary 3.2.10). But, the first of these two relations is a consequences of 1 1 the relation ω = v α,β with α = 2κ − 12 − ωπ− and β = 2κ − 12 + ωπ− as well as of the inequalities (cf. (5.8.18)) 3

κ
−1, respectively. Taking into account 3 3+ we also get


0 such that |K0 (x, y) − K0 (x0 , y)| < ε for all (x, y) ∈ Uδ (x0 ) × I , where Uδ (x0 ) = {x ∈ I : da (x, x0 ) < δ} (cf. Exercise 2.3.7). Consequently, according to (6.2.9), kn 

|nk (x) − nk (x0 )| =

k=1

kn 

λnk |K1 (x, xnk ) − K1 (x0 , xnk )|

k=1

=

kn  k=1

λnk |K0 (x, xnk ) − K0 (x0 , xnk )| < c ε u(xnk )w1 (xnk )

for all x ∈ Uδ (x0 ), which shows the validity of (K4). Now the application of Lemma 6.1.1 completes the proof.  

6.2 The Classical Nyström Method

365

Remark 6.2.3 In case of u−1 u1 = w1 , one can also use Lemma 6.1.3 for the proof of Proposition 6.2.2. Indeed, if we set v = u1 and define H (x, y) = w(y), S(x, y) = K1 (x, y), X0 = span {w} with .X0 = .L1 (I ) , λFnk (γ w) = γ λnk for v −1

γ ∈ C, then, we have X0 ⊂ L1v −1 (I ) continuously (see (D) which now coincides with (B)), K(x, y) = H (x, y)S(x, y) with the continuous function S(x, y)v(y) (see (A)), and nk (x) = λFnk (w)S(x, xnk ) (cf. (6.2.8)). Moreover, for all f = γ w ∈ X0 and all g ∈ CPv , lim

n→∞

kn 

λFnk (f )g(xnk )

k=1

= lim γ n→∞

kn 

 λnk g(xnk ) =

f (y)g(y) dy I

k=1

in view of condition (E). Consequently, conditions (H1)–(H3) are fulfilled and Lemma 6.1.3 can be applied. Corollary 6.2.4 Assume (A)–(F). Consider the Eqs. (6.2.1), (6.2.4) with g ∈ Cu . Assume further, that the homogeneous Eq. (6.2.10) (i.e., g ≡ 0) has only the trivial solution in Cu . Then, for all sufficiently large n, Eq. (6.2.4) possesses a unique solution f n∗ ∈ Cu converging to f ∗ , where f ∗ ∈ Cu is the unique solution of (6.2.1). If the assumptions of Corollary 6.2.1 are satisfied, then    ∗   f − f ∗  .)f ∗ ≤ c sup E2n u(x)K(x, n u,∞ u

1 ,∞

:x∈I

 ,

(6.2.15)

.)f ∗ ∈ CPu for all where c = c(n, g ). (Note that, due to condition (C), u(x)K(x, 1 x ∈ I .) Proof In virtue of Proposition 6.2.2, we can apply Proposition 2.6.8 with X = C(I ) to the Eqs. (6.2.10) and (6.2.11). Estimate (2.6.11) gives  ∗      f − f ∗  = fn∗ − f ∗ ∞ ≤ cKn f ∗ − Kf ∗ ∞ , n u,∞ where f ∗ ∈ C(I ) and fn∗ ∈ C(I ) are the solutions of (6.2.10) and (6.2.11), respectively, and where   Kn f ∗ − Kf ∗ 



⎧  ⎫   kn ⎨  ⎬  ∗ ∗ = sup  nk (x)f (xnk ) − K(x, y)f (y) dy  : x ∈ I ⎩ ⎭ I  k=1 ⎫  ⎧   kn ⎬ ⎨   ∗ ∗ xnk )f (xnk ) − u(x)K(x, y)f (y)w(y) dy  : x ∈ I . λnk u(x)K(x, = sup   ⎭ ⎩ I  k=1

.)f ∗ ∈ CPu (cf. (C)) and Corollary 6.2.1(b). It remains to use u(x)K(x, 1

 

366

6 Numerical Methods for Fredholm Integral Equations

6.2.1 The Case of Jacobi Weights Let us apply the above described Nyström method in case of f (x) −



1

−1

y)v α,β (y)f (y) dy = K(x, g (x) ,

−1 < x < 1 ,

(6.2.16)

: (−1, 1)2 −→ C are given continuous where g ∈ Cu = Cu (−1, 1) and K α,β functions and where v (x) = (1 − x)α (1 + x)β , α, β > −1, u(x) = v γ ,δ (x), γ , δ ≥ 0, and Cu = Cv γ ,δ . We set u1 (x) = v γ1 ,δ1 (x), w1 (x) = v α1 ,β1 (x) and assume that α1 ,β1 (y), (A1) K0 : [−1, 1]2 −→ C is continuous, where K0 (x,y) = v γ ,δ (x)K(x,y)v  1 α,β v (x) dx < ∞, i.e., γ + α1 < α + 1 and δ + β1 < β + 1, (B1) γ ,δ (x)v α1 ,β1 (x) −1 v (C1) 0 ≤ γ1 ≤ 1, 0 ≤ δ1 ≤ 1, and γ + α1 < γ1 < α + 1, δ + β1 < δ1 < β + 1.

If we set w(x) := v α,β (x), then conditions (A1) and (B1) are equivalent to (A) and (B) in the present situation, respectively. Condition (C1) leads immediately to (C) and (D) (cf. the following exercise). Exercise 6.2.5 Recall that, on page 358, we defined CPu as the smallest closed subspace of Cu containing all algebraic polynomials. By Exercise 2.4.23 we have CPu = Cγ ,δ for 0 ≤ γ , δ ≤ 1, where (cf. the definition of Cγ ,δ on page 25) Cγ ,δ is the set of all continuous functions f : (−1, 1) −→ C satisfying lim f (x)u(x) = 0 if γ > 0

x→1−0

and

lim

x→−1+0

f (x)u(x) = 0 if δ > 0 .

Prove, by using Lemma 3.1.3 and Proposition 3.1.18, that CPu = Cγ ,δ holds true for all γ , δ ≥ 0. As quadrature rule (6.2.3) we take the Gaussian rule w.r.t. the Jacobi weight α,β α,β α,β v α,β . Then w(x) = v α,β (x), i.e., kn = n, λnk = λnk := vnk , and xnk = xnk := xnk in Corollary 6.2.1 we have κ(n) = 2n − 1. Moreover, condition (C1) guarantees that (6.2.9) and (6.2.14) are also fulfilled, which is due to the following lemma. Lemma 6.2.6 ([176], Theorem 9.25) For v α,β (x) and v α1 ,β1 (x), assume that α + α1 > −1 and β + β1 > −1, and let j ∈ N be fixed. Then, for each polynomial q(x) with deg q ≤ j n, n 

     α,β  α,β  α,β λnk q(xnk ) v α1 ,β1 xnk ≤ c

k=1

where c = c(n, q).

1 −1

|q(x)|v α,β (x)v α1 ,β1 (x) dx ,

6.2 The Classical Nyström Method

367

Hence, all conditions (A)–(F) are in force and we can apply Corollary 6.2.4 together with Corollary 6.2.1(b) to Eq. (6.2.16) and the Nyström method f n (x) −

n 

α,β α,β α,β xnk )f n (xnk ) = λnk K(x, g (x) ,

−1 < x < 1 ,

(6.2.17)

k=1

to get the following proposition. Proposition 6.2.7 Assume that (A1), (B1), and (C1) are fulfilled and that Eq. (6.2.16) has only the trivial solution in Cv γ ,δ in case of g (x) ≡ 0. Then, for g∈ Cv γ ,δ and all sufficiently large n, Eq. (6.2.17) has a unique solution f n∗ ∈ Cv γ ,δ and    ∗   γ ,δ  f − f ∗  .)f ∗ γ ,δ (x)K(x, : −1 ≤ x ≤ 1 , n γ ,δ,∞ ≤ c sup E2n v 1 1 v ,∞ where f ∗ ∈ Cv γ ,δ is the unique solution of (6.2.16) and c = c(n, g). (Again we note .)f ∗ ∈ C γ1 ,δ1 that the assumptions of the proposition guarantee that v γ ,δ (x)K(x, v for all x ∈ [−1, 1], cf. Corollary 6.2.4.) For checking (6.2.9) and (6.2.14), we used Lemma 6.2.6. The following Lemma will allow us to prove these assumptions also in other cases. Lemma 6.2.8 Let w : I0 −→ [0, ∞) and v : I0 −→ [0, ∞) be weight functions and λnk > 0, xnk ∈ I0 , k = 1, . . . , n, be given numbers with xn1 < . . . < xnn . Moreover, we assume that (a) v −1 w ∈ L1 (I ), (b) λnk ∼n,k xnk w(xnk ), k = 1, . . . , n, where xnk = xnk − xn,k−1 and the node xn0 < xn1 is appropriately chosen, (c) xnk ∼n,k xn,k−1 , k = 2, . . . , n, (d) for each closed subinterval [a, b] ⊂ I0 , the weight v −1 w : [a, b] −→ R is continuous and lim max {xnk : xnk ∈ [a, b]} = 0 ,

n→∞

(6.2.18)

(e) there exists a subinterval [A, B] ⊂ I0 such that the functions v −1 w : {x ∈ I0 : x ≤ A} −→ R and v −1 w : {x ∈ I0 : x ≥ B} −→ R are monotone. Then there is a constant c = c(n) such that  n  λnk w(x) ≤c dx . v(xnk ) I v(x) k=1

(6.2.19)

368

6 Numerical Methods for Fredholm Integral Equations

Proof By assumption (b) we have

n n   λnk w(xnk ) ∼n xnk . Moreover, v(xnk ) v(xnk ) k=1

k=1

 # $  w(x) w(xnk )    : x ∈ [xn,k−1 , xnk ], xnk ∈ [A, B] = 0 , lim sup  − n→∞ v(x) v(xnk )  due to assumption (d). Hence, w(xnk ) xnk ≤ c v(xnk )



xnk xn,k−1

w(x) dx v(x)

∀ xnk ∈ [A, B] with

c = c(n, k) .

If v −1 w : {x ∈ I0 : x ≤ A} −→ R is non-increasing, then w(xnk ) xnk ≤ v(xnk )



xnk xn,k−1

w(x) dx v(x)

∀ xnk < A, k ≥ 1 .

In the case that v −1 w : {x ∈ I0 : x ≤ A} −→ R is non-decreasing we use assumption (c) and get w(xnk ) w(xnk ) xnk ∼n,k xn,k+1 ≤ v(xnk ) v(xnk )



xn,k+1 xnk

w(x) dx v(x)

∀ xnk < A, k ≥ 1 ,

with (if necessary) an appropriately chosen xn,n+1 > xnn . For xnk > B we can proceed analogously (noting that B can be chosen sufficiently large such that, for all n ≥ n0 , v −1 w is monotone on the interval [xn,k0 −1 , xn,k0 ) containing B). Summarizing we obtain (6.2.19).  

6.2.2 The Case of an Exponential Weight on (0, ∞) Consider the integral equation f (x) −





y)w(y)f (y) dy = K(x, g (x) ,

0 < x < ∞,

(6.2.20)

0

: (0, ∞)2 −→ C are given functions and where where g ∈ Cu (0, ∞) and K −α −x β α,β −x w(x) = w (x) = e , α > 0, β > 1, u(x) = ua,δ (x) = x δ [w(x)]a , a ≥ 0, δ ≥ 0. Here we use the Gaussian rule w.r.t. the weight w(x) = wα,β (x) and study the Nyström method f n (x) −

n  k=1

w w λw g (x) , nk K(x, xnk )fn (xnk ) =

0 < x < ∞.

(6.2.21)

6.2 The Classical Nyström Method

369

Let us check conditions (A)–(F), where we choose w1 (x) = ua0 ,δ0 (x) := x δ0 [w(x)]a0 and u1 (x) = ua1 ,δ1 (x) = x δ1 [w(x)]a1 . Assume that (A2) (B2) (C2) (D2)

y)w1 (y) is continuous on [0, ∞]2 , K0 (x, y) := u(x)K(x, 0 < a + a0 < 1, 0 < a1 < 1, a1 > a0 + a.

Note that, due to Lemma 6.2.8 (cf. [126, Proposition 3.8], for checking the conditions of Lemma 6.2.8 see also [89, 102, 142]), n  k=1

λw nk w) ≤c u1 (xnk

with

c = c(n)

(6.2.22)

1 if u−1 1 w ∈ L (0, ∞), which is equivalent to the assumption (C2). We also see that (B2) implies (w1 u)−1 w ∈ L1 (0, ∞). That means, again by Lemma 6.2.8, that condition (F) is fulfilled.

Exercise 6.2.9 As in the case of Jacobi weights, the equivalence (cf. [134]) lim ωϕ1 (f, t)u,∞ = 0

t →+0

⇐⇒

f ∈ Cu

holds true, where (cf. the beginning of Sect. 4.3.2) # $ Cu = u ∈ C(0, ∞) : lim f (x)u(x) = lim f (x)u(x) = 0 x→∞

x→+0

and ωϕ1 (f, t)u,∞ is defined in (4.3.17). Use Proposition 4.3.10 to prove that the spaces CPu (0, ∞) and Cu (0, ∞) coincide. Exercise 6.2.9 together with conditions (A2) and (D2) guarantee that .)u−1 ∈ CPu (0, ∞) for all x ∈ [0, ∞]. Hence, we see that (A2)–(D2) u(x)K(x, 1 together with Corollary 6.2.1(a) imply (A)–(F), and we can apply Corollary 6.2.4 together with Corollary 6.2.1(b) to the operator equations (6.2.20) and (6.2.21) to get the following. −α

Proposition 6.2.10 Let w(x) = e−x −x , α > 0, β > 1, and u(x) = x δ [w(x)]a , a ≥ 0, δ ≥ 0. Assume that (A2), (B2), (C2), and (D2) are fulfilled and that Eq. (6.2.20) has only the trivial solution in Cu (0, ∞) in case of g (x) ≡ 0. Then, for g ∈ Cu (0, ∞) and all sufficiently large n, Eq. (6.2.17) has a unique solution f n∗ ∈ Cu (0, ∞) and β

  ∗    f − f ∗  .)f ∗ ≤ c sup E2n u(x)K(x, n u,∞ u

1 ,∞

 :0≤x≤∞ ,

where f ∗ ∈ Cu (0, ∞) is the unique solution of (6.2.20) and c = c(n, g ).

370

6 Numerical Methods for Fredholm Integral Equations

6.2.3 The Application of Truncated Quadrature Rules As in Sect. 6.2.2, we deal with the integral equation f (x) −





y)w(y)f (y) dy = K(x, g (x) ,

0 < x < ∞,

(6.2.23)

0

where by w(x) we refer to the weight defined in (4.3.2), w(x) = e−x α > 0 and β > 1. We recall the truncated Gaussian rule (4.3.37), 



f (x)w(x) dx =

0

j2 

−α −x β

with

∗ w λw nk f (xnk ) + en (f ) ,

k=j1

w are the zeros of the nth orthogonal polynomial with respect to the where xnk weight w(x) and j1 = j1 (n, θ ) and j2 (m, θ ) are defined in (4.3.35) and (4.3.36), respectively, for some fixed θ ∈ (0, 1). We look for an approximate solution fn ∈ Cu (R+ ) of (6.2.23) by solving

  n fn (x) = g (x) , f n (x) − K

0 < x < ∞,

(6.2.24)

where j2    n f (x) = w K λw k K(x, xk )f (xnk ) ,

0 < x < ∞.

(6.2.25)

k=j1

We take the weight function u(x) = (1 + x)δ [w(x)]a and assume that the following conditions are satisfied: y)u(y) is continuous on [0, ∞]2 and satisfies (A3) K0 (x, y) := u(x)K(x, K0 (x, ±∞) = 0 for all x ∈ [0, ∞], (B3) 0 < a < 12 and δ ∈ R or a = 12 and δ > 12 , i.e. uw2 ∈ L1 (0, ∞). As in the previous sections, we study the operator equations (6.2.23) and (6.2.25) in Cu (0, ∞), which is equivalent to considering Cu = 



f (x) −

K(x, y)f (y) dy = g(x) ,

0 < x < ∞,

0

and fn (x) −

j2  k=j1

w nk (x)fn (xnk ) = g(x) ,

0 < x < ∞,

6.2 The Classical Nyström Method

371

in the space C[0, ∞], where f (x) = u(x)f (x), g(x) = u(x) g (x), and fn (x) = u(x)f n (x), K(x, y) =

y)w(y) u(x)K(x, u(y)

as well as nk (x) =

w λw nk u(x)K(x, xnk ) . w) u(xnk

Define the operators K : C[0, ∞] −→ C[0, ∞] and Kn : C[0, ∞] −→ C[0, ∞] by  (Kf )(x) =



K(x, y)f (y) dy

and (Kn f )(x) =

0

j2 

w nk (x)f (xnk ).

k=j1

The estimates  0

 0





 |K(x, y)| dy ≤ K0 (x, ·)∞,[0,∞]



0

w(y) dy , [u(y)]2 

|K(x, y) − K(x0 , y)| dy ≤ K0 (x, ·) − K0 (x0 , ·)∞,[0,∞] j2 

|nk (x) − nk (x0 )| ≤ K0 (x, ·) − K0 (x0 , ·)∞,[0,∞]

k=j1



0 j2  k=j1

x ∈ [0, ∞] w(y) dy , [u(y)]2

x0 , x ∈ [0, ∞]

λw nk ! w "2 u(xnk )

together with (A3), (B3), and (see Lemma 6.2.8 and cf. the notes to (6.2.22)) j2  k=j1

!

λw nk w) u(xnk

"2 ≤ c = c(n, θ ) ,

show that conditions (K1), (K2), and (K4) on page 356 are satisfied. Hence, the operator K : C[0, ∞] −→ C[0, ∞] is compact and the sequence of operators Kn : C[0, ∞] −→ C[0, ∞] is collectively compact (cf. the proof of Lemma 6.1.1). To be in the position to apply Proposition 2.6.8, it remains to prove the strong convergence of Kn to K. For this, we write (Kn f )(x) − (Kf )(x) =

j2 

 w nk (x)f (xnk )−

k=j1

=

j2 



K(x, y)f (y) dy 0

w w λw nk u(x)K(x, xnk )f (xnk ) −

k=j1





y)w(y)f (y) dy u(x)K(x,

0

  ·)f = en∗ u(x)K(x,

(6.2.26)

372

6 Numerical Methods for Fredholm Integral Equations

·)f ∈ Cu2 . Hence we with f = uf ∈ C[0, ∞]. From (A3) we conclude u(x)K(x, can apply Proposition 4.3.17 (with u2 instead of u) and get     ·)f : x ∈ [0, ∞] sup en∗ u(x)K(x, % ≤c

  ·)f 2 + e−c1 ν F ∞,[0,∞]2 sup EM u(x)K(x, u ,∞

&

(6.2.27) ,

x∈[0,∞]

where > F (x, y) = K0 (x, y)f (y) ,

M=

 ? θ m , θ +1



and

1 ν = 1− 2β



2α . 2α + 1 (6.2.28)

Analogously to Exercise 6.2.9, we have CPu2 (0, ∞) = Cu2 (0, ∞), which implies, due to (A3), lim Kn f − Kf ∞,[0,∞] = 0 for all f ∈ C[0, ∞]. Now the n→∞ application of Proposition 2.6.8 together with (6.2.26) and (6.2.27) yields the following statement. Proposition 6.2.11 Let w(x) = e−x

−α −x β

, α > 0, β > 1, and

u(x) = (1 + x)δ [w(x)]a with 0 < a < 12 and δ ∈ R or a = 12 and δ > 12 . Assume that the conditions (A3) and (B2) are fulfilled and that Eq. (6.2.23) has only the trivial solution in Cu (0, ∞) if g (x) ≡ 0. Then, for g∈ Cu (0, ∞) and all sufficiently large n, Eq. (6.2.24) has a unique solution f n∗ ∈ Cu (0, ∞) and   ∗ f − f ∗  ≤c n u,∞

% sup EM x∈[0,∞]



 ·)f 2 + e−c1 ν F ∞,[0,∞]2 u(x)K(x, u ,∞

& ,

where f ∗ ∈ Cu (0, ∞) is the unique solution of (6.2.23), F (x, y), M, and ν are given in (6.2.28), and c = c(n, g ) as well as c1 = c1 (n, g ).

6.3 The Nyström Method Based on Product Integration Formulas Let again I0 be equal to (−1, 1), (0, ∞), or (−∞, ∞) and I equal to [−1, 1], [0, ∞], or [−∞, ∞], respectively. Here we discuss the numerical solution of the Fredholm integral equation (6.2.1) by approximating the operator : Cu , K Cu −→

f →

 I

y)w(y)f (y) dy K(.,

(6.3.1)

6.3 The Nyström Method Based on Product Integration Formulas

373

with the help of 

 n f (x) = K

 " H (x, y) ! Ln S(x, .)uf (y)w(y) dy , I u(y)

x ∈ I0 ,

(6.3.2)

y) = H (x, y) where K(x, S(x, y) and Ln g is the algebraic polynomial of degree less than n with (Ln g)(xnk ) = g(xnk ), k = 1, . . . , n. Using the formula (Ln g)(x) =

n 

g(xnk ) nk (x) with

nk (x) =

n 5 x − xnj , xnk − xnj

j =1

k=1

we conclude 

n   n f (x) = K k=1

 H (x, y) nk (y)w(y) dy S(x, xnk )u(xnk )f (xnk ) . I u(y)

(6.3.3)

This means for Eq. (6.2.6) considered in the space C(I ), that the operator K : C(I ) −→ C(I ) introduced in (6.2.12) is approximated by Kn : C(I ) −→ C(I ) given by (6.2.13), where K(x, y) is defined in (6.2.5) and (cf. (6.1.4))  nk (x) = I

with H (x, y) =

H (x, y) nk (y) dy S(x, xnk ) = λFnk (H (x, .))S(x, xnk )

(6.3.4)

(x, y)w(y) u(x)H , S(x, y) = S(x, y), and u(y)  λFnk (f ) = f (y) nk (y) dy .

(6.3.5)

I

In order to check under which conditions assumption (H3) is satisfied, we should use  n            F λnk (f )g(xnk ) − f (y)g(y) dy  =  f (y) [(Ln g)(y) − g(y)] dy     I I k=1

    p1  f (y) p  dy  (Ln g − g)uLq (I ) , ≤   I u(y) (6.3.6) where p > 1,

1 p

+

1 q

= 1, and u is an appropriate weight function.

374

6 Numerical Methods for Fredholm Integral Equations

6.3.1 The Case of Jacobi Weights Consider the case where w(x) = v α,β (x) is a Jacobi weight with α, β > −1. Lemma 6.3.1 Let w = v α,β , α, β > −1, p > 1, γ0 , δ0 ≥ 0, and γ0 > β 2

α 2

+

1 p

− 34 ,

1 3 w p − 4 . Then condition (H3) is fulfilled for nk = nk in (6.3.5) as well as p P X0 = Lv −γ0 ,−δ0 and Cv = C. p Proof Firstly, X0 = Lv −γ0 ,−δ0 is continuously embedded in L1 , since γ0 , δ0 ≥ 0.

δ0 >

+

Secondly, we can use the fact that there is a constant c > 0 such that   (g − Lw g)v γ0 ,δ0  ≤ c En−1 (g)∞ for all g ∈ C if and only if v√γ0 ,δ0 ∈ Lq n wϕ q (see the discussion after Proposition 3.2.34), i.e., γ0 −

1 1 α − >− 2 4 q

and δ0 −

1 1 β − >− . 2 4 q

Hence, (6.3.6) can be applied to all f ∈ X0 , all g ∈ C, and u = v γ0 ,δ0 .

 

We remark that Lemma 6.3.1 improves the result mentioned in [198, Section 4.5], where γ0 and δ0 are chosen as # max

1 α + ,0 2 4

$

# and

max

$ β 1 + ,0 , 2 4

respectively. As a consequence of Lemma 6.3.1 and of Lemma 6.1.3, we have to assume that p H (x, .) satisfies condition (H2) for X0 = Lv −γ0 ,−δ0 with appropriate γ0 , δ0 and p as in Lemma 6.3.1. The aim of the remaining part of this subsection is to weaken this condition in a certain way. By L log+ L(a, b) we denote the set of all measurable functions f : (a, b) −→ C  b   |f (x)| 1 + log+ |f (x)| dx is finite. For for which the integral ρ+ (f ) := a

f ∈ L1 (a, b), by Hab f we denote the Hilbert transform of f ,    b Ha f (x) :=

b a

f (y) dy , y−x

a δ ≥ 0. 2 4

(6.3.10)

376

6 Numerical Methods for Fredholm Integral Equations

Then there is a constant c =  c(n, f, g) such that, for all functions g f : (−1, 1) −→ C with f v ∈ L∞ and all g with √ ∈ L log+ L, wϕ  w  gL f  ≤ c ρ+ n 1



g √ wϕ

 f v∞ .

   Proof Write gLw n f 1 = J1 + J2 + J3 , where    J1 = gLw n f L1 (An ) ,

   J2 = gLw nf

   J3 = gLw nf

  x −1 , L1 −1, n12

L1



xnn +1 2 ,1

.

Define p n (y) :=

pn (y) : y ∈ An , : y ∈ An ,

0

and gn (y) :=

g(y) : y ∈ An , 0 : y ∈ An ,

!   " as well as hn (y) := sgn g(y) Lw n f (y) , and consider  J1 =

n    f (xnk ) hn (y)g(y) Lw f (y) dy = n  p An n (xnk ) k=1

(R2)

≤ cf v∞

n  k=1

 An

pn (y) g(y)hn (y) dy y − xnk

√ w(xnk )ϕ(xnk ) |Gn (xnk )| , xnk v(xnk )

where 

pn (y)Qn (y) − pn (x)Qn (x) g(y)hn (y) dy y−x Qn (y)

Gn (x) = An

for some polynomial Qn ∈ P n positive on An ( ∈ N fixed). Then, due to (R3) and Gn ∈ P n+n−1 ,  J1 ≤ cf v∞

√ w(x)ϕ(x) dx |Gn (x)| v(x) An

- ≤ cf v∞  +

1

−1

! " =: cf v∞ J1 + J1 ,

1

−1

√ w(x)ϕ(x) gn hn ) (x) kn1 (x) dx pn (H v(x)

√  .  ghn w(x)ϕ(x) | pn (x)| Qn (x) H (x) kn2 (x) dx v(x) Qn

6.3 The Nyström Method Based on Product Integration Formulas

where

kn1 (x)

= sgn [(H pn gn hn ) (x)] and

kn2 (x)


 . - ghn (x) . With the = sgn H Qn

help of (6.3.8), (R1), and (6.3.7), we get J1

 √  wϕ 1 kn (x) dx =− p n (x) gn (x)hn (x) H v −1  √     √  wϕ   g wϕ 1  g     H kn  ≤ c  ρ+ √ ≤ c √ wϕ v v ∞ wϕ 1 

1

and, by choosing Qn (x) ∼n,x J1 ≤ c



1

−1

√ w(x)ϕ(x) for x ∈ An (see [139, Lemma 2.1]),

√   √    1 wϕ 2 w(x)ϕ(x) ghn g(x)hn (x) kn (x) dx H H (x) kn2 (x) dx = −c v(x) Qn v −1 Qn (x)

   √   √  g  wϕ  wϕ 2  g    k ≤ c H ≤ c ρ n  √wϕ  v  + √wϕ . v ∞ 1

Now let us estimate J3 . The term J2 can be handled analogously. We get  J3 =

1 xnn +1 2

n    f (xnk ) hn (y)g(y) Lw f (y) dy = n pn (xnk )



xnn +1 2

k=1

(R2)

≤ cf v∞

n  k=1

1

pn (y) g(y)hn (y) dy y − xnk

√  |pn (y)g(y)| w(xnk )ϕ(xnk ) 1 xnk dy x +1 nn v(xnk ) y − xnk 2

Note that, due to the assumptions on w and u, α + 12 > 0. Hence, in view of (6.3.9), √ c |pn (y)| w(y)ϕ(y) ≤ , y − xnk 1 − xnk

-

. xnn + 1 y∈ ,1 , 2

. xnn + 1 1 − xnk , 1 , we have y − xnk ≥ . We conclude since, for y ∈ 2 2 -

J3 ≤ cf v∞

 n √  w(xnk )ϕ(xnk ) k=1

(R3)

≤ cf v∞

since



v(xnk )(1 − xnk ) 1

−1

1 xnn 2 +1

|g(y)| dy √ w(y)ϕ(y)

  √   w(x)ϕ(x)  g   ≤ cf v∞ ρ+ √g dx  , √ (1 − x)v(x)  wϕ 1 wϕ

α 1 + − γ − 1 > −1. 2 4

 


Lemma 6.3.4 Let v : I −→ [0, ∞) be a continuous weight function, which is positive on I0 , and let R : I 2 −→ C be a function such that Rx ∈ CPv for all x ∈ I , where Rx (y) = R(x, y), and such that R(x, y)v(y) is continuous on I 2 . Then, for every n ∈ N, there is a function Pn (x, y) such that   lim sup |R(x, y) − Pn (x, y)|v(y) : (x, y) ∈ I 2 = 0 n→∞

and Pn,x with Pn,x (y) = Pn (x, y) belongs to Pn for every x ∈ I . Proof Let εn > 0 and, for every x ∈ I , choose Pn,x ∈ Pn such that   (Rx − Pn,x )v  < En (Rx )v,∞ + εn . ∞

  It remains to prove that lim sup En (Rx )v,∞ : x ∈ I = 0. If this is not the case, n−→∞

then there are an ε > 0 and n1 < n2 < . . . such that Enk (Rxk )v,∞ ≥ 2ε for certain ∗ xk ∈ I . Due to the compactness of I , we can assume that xk −→  x for k −→  ∞. In  ∗ )v  (R virtue of the continuity of R(x, y)v(y), we can conclude that − R ε for all p ∈ Pnk and k ∈ N, in contradiction to Rx∗ ∈ CPv .   Let us come back to the integral operator K : C[−1, 1] −→ C[−1, 1],  (Kf )(x) =

1

−1

(6.3.11)

K(x, y)f (y) dy

and its product integration approximation Kn : C[−1, 1] −→ C[−1, 1], (Kn f )(x) =

n 

 w nk (x)f (xnk )

k=1

=

1 −1

  H (x, y) Lw n Sx f (x) dx ,

(6.3.12)

where Sx (y) = S(x, y),  K(x, y) = H (x, y)S(x, y) ,

and nk (x) =

w S(x, xnk )

1 −1

H (x, y) w nk (y) dy . (6.3.13)

Proposition 6.3.5 Consider (6.3.11) and (6.3.12) together with (6.3.13) in the Banach space C[−1, 1]. If the Jacobi weights w = wα,β and v = v γ ,δ satisfy the conditions of Lemma 6.3.3 and if Hx ∈ L log+ L for all x ∈ [−1, 1], where Hx (y) = H (x, y), (a) √ wϕ


 $ #  Hx : −1 ≤ x ≤ 1 < ∞, (b) sup ρ+ √  wϕ  Hx − Hx0 (c) lim ρ+ = 0 for all x0 ∈ [−1, 1], √ x→x0 wϕ (d) the map [−1, 1]2 −→ C, (x, y) → S(x, y)v(y) is continuous with Sx ∈ CPv for all x ∈ [−1, 1], then the operators Kn form a collectively compact sequence converging strongly to the operator K. Proof At first we show that Kn converges strongly to K. Indeed, for f ∈ C[−1, 1], a function P (x, y), which is a polynomial in y of degree less than n, and Px (y) = P (x, y), we have |(Kn f ) (x) − (Kf ) (x)|  ≤

1

−1

 ! "  H (x, y) Lw (Sx f − Px ) (x) dx + n



1 −1

|H (x, y) [S(x, y)f (y) − P (x, y)]| dx

  -   . Hx   + Hx v −1  (Sx f − Px )v∞ , ≤ c ρ+ √ 1 wϕ

where we took into account Lemma 6.3.3 and that condition (a) together with (6.3.10) implies Hx v −1 ∈ L1 (−1, 1). Moreover,      sup Hx v −1  : −1 ≤ x ≤ 1 < ∞ 1

due to condition (b). Thus, Kn f − Kf ∞ ≤ c sup (Sx f − Px )v∞ , −1≤x≤1

which proves the desired strong  convergence by referring to Lemma 6.3.4. A consequence of this is that the set Kn f ∞ : f ∈ C[−1, 1], f ∞ ≤ 1 is bounded. Furthermore, for f ∞ ≤ 1, |(Kn f )(x) − (Kn f )(x0 )|  ≤

1

−1

 ! "  H (x, y) Lw (Sx − Sx )f (y) dy 0 n  +

1 −1

    [H (x, y) − H (x0 , y)] Lw Sx f (y) dy n 0

-   .       Lemma 6.3.3 Hx (Sx − Sx )v  + ρ+ Hx√− Hx0 Sx v  ≤ c ρ+ √ . 0 0 ∞ ∞ wϕ wϕ   Hence, due to (b), (c), and (d), the set Kn f : f ∈ C[−1, 1], f ∞ ≤ 1 is equicontinuous in each point x0 ∈ [−1, 1], and so equicontinuous on [−1, 1].  


6.3.2 The Case of an Exponential Weight on (0, ∞)

Here, in the case $w(x) = w_{\alpha,\beta}(x) = x^\alpha e^{-x^\beta}$, $0 < x < \infty$, $\alpha \ge 0$, $\beta > \frac12$, we are going to prove results analogous to Lemma 6.3.3 and Proposition 6.3.5. For this we set again $p_n(x) = p_n^w(x)$ and $x_{nk} = x_{nk}^w$ as well as $x_{n,n+1} = a_n$, where $a_n = a_n(\sqrt w) \sim_n n^{\frac1\beta}$ is the Mhaskar-Rahmanov-Saff number associated with the weight $\sqrt{w(x)}$. Let us fix $\theta \in (0,1)$, set $n_\theta = \min\left\{ k \in \{1,\ldots,n\} : x_{nk} \ge \theta a_n \right\}$, and, for a function $f : (0,\infty) \to \mathbb C$, define
\[
L^*_n f = \sum_{k=1}^{n_\theta} f(x_{nk})\, \ell^*_{nk} \,, \qquad
\ell^*_{nk}(x) = \frac{p_n^w(x)\,(a_n - x)}{p_n'(x_{nk})\,(x - x_{nk})\,(a_n - x_{nk})} \,. \qquad (6.3.14)
\]
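Before turning to the interpolation properties of $L^*_n$, here is a minimal numerical sketch of how (6.3.14) can be evaluated. It is not code from the book; the orthonormal polynomial $p_n^w$, its derivative, its zeros $x_{nk}$ and the Mhaskar-Rahmanov-Saff number $a_n$ are assumed to be supplied by the user (for instance by the constructions of Chap. 4).

```python
import numpy as np

def truncated_lagrange(f, x_eval, p_n, dp_n, x_nk, a_n, theta=0.5):
    """Evaluate (L*_n f)(x_eval) following (6.3.14).
    p_n, dp_n : callables for the orthonormal polynomial p_n^w and its derivative
                (assumed to be available from the orthogonal-polynomial machinery);
    x_nk      : zeros of p_n in increasing order;
    a_n       : Mhaskar-Rahmanov-Saff number a_n(sqrt(w));
    theta     : truncation parameter in (0, 1)."""
    x_nk = np.asarray(x_nk, dtype=float)
    # n_theta = min{k : x_nk >= theta*a_n}; only the nodes x_n1, ..., x_{n,n_theta} are used
    n_theta = min(int(np.searchsorted(x_nk, theta * a_n)) + 1, len(x_nk))
    nodes = x_nk[:n_theta]
    x = np.atleast_1d(np.asarray(x_eval, dtype=float))
    result = np.zeros_like(x)
    for xk in nodes:
        # fundamental function from (6.3.14):
        # l*_nk(x) = p_n(x)(a_n - x) / [p_n'(x_k)(x - x_k)(a_n - x_k)]
        with np.errstate(divide="ignore", invalid="ignore"):
            lk = p_n(x) * (a_n - x) / (dp_n(xk) * (x - xk) * (a_n - xk))
        lk = np.where(np.isclose(x, xk), 1.0, lk)   # interpolation: (L*_n f)(x_nk) = f(x_nk)
        result += f(xk) * lk
    return result
```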

    Then L∗n f (xnk ) = f (xnk ) for k = 1, . . . , nθ and L∗n f (xnk ) = 0 for k = nθ + 1, . . . , n + 1. Moreover, if we set xnk = xnk − xn,k−1 , k = 1, . . . , n, and xn0 = 0, we have the following properties: (R4) There is a constant c = c(n) such that 6    sup |pn (x)| w(x) |an − x|x : 0 < x < ∞ ≤ c < ∞ (see [89, 102]). (R5) The equivalences 6  1   ∼n,k xnk w(xnk ) (an − xnk )xnk , p (xnk )

k = 1, . . . , n,

n

hold true (see [89, 102]). (R6) For fixed ∈ N, there is a constant c = c(n, p) such that (see [101]) nθ 



θan

xnk |p(xnk )| ≤ c

|p(x)| dx

for all p ∈ P n .

0

k=1

Remark 6.3.6 The constant on the right-hand side of (6.3.7) does not depend on the interval $[a,b]$, i.e., we have, for $-\infty < a < b < \infty$ and for all $g \in \mathbf L^\infty(a,b)$ and $f \in \mathbf L\log^+\mathbf L(a,b)$,
\[
\left\| g\, \mathcal H_a^b f \right\|_1 + \left\| f\, \mathcal H_a^b g \right\|_1 \le c\, \|g\|_\infty\, \rho_+(f) \,, \qquad (6.3.15)
\]
where $c \neq c(f,g,a,b)$.


Indeed, if c1 is the constant in (6.3.7) in case a = 0 and b = 1, then, setting x = χ(t) = (b − a)t + a and y = χ(s), we observe  a

b

   g(x) 

b

a

   b  b  f (y) dy  g(y) dy   dx + dx f (x) y−x  y−x  a a -

  g(χ (t)) 

1

= (b − a) 0

0

 ≤ c1 g∞,[a,b]

1

0

1

  f (χ (s)) ds  dt + s−t 

  f (χ (t)) 

1

0

0

1

 . g(χ (s)) ds  dt s−t 

  |f (χ (t))| 1 + log+ |f (χ (t))| dt = c1 g∞,[a,b]ρ+,[a,b](f ) .

Lemma 6.3.7 Let $\psi(x) = \sqrt x$, $x \ge 0$, and $v(x) = (1+x)^\delta \sqrt{w(x)}$, $\delta \ge \frac14$. Then there is a constant $c \neq c(n,f,g)$ such that, for all functions $f : (0,\infty) \longrightarrow \mathbb C$ with $f v \in \mathbf L^\infty(0,\infty)$ and all $g$ with $\frac{g}{\sqrt{w\psi}} \in \mathbf L\log^+\mathbf L(0,\infty)$,
\[
\left\| g\, L^*_n f \right\|_{\mathbf L^1(0,\infty)} \le c\, \rho_+\!\left( \frac{g}{\sqrt{w\psi}} \right) \| f v \|_\infty \,.
\]

      Proof Write gL∗n f L1 (0,∞) = gL∗n f L1 (0,2an) + gL∗n f L1 (2an ,∞) =: J1 + J2 . !  ∗  " Using (R5) we get, with hn (y) = sgn g(y) Ln f (y) , J1 ≤ cf v∞

nθ 



xnk

k=1

= cf v∞

nθ 

1

xnk

k=1

≤c

nθ f v∞  3

(an ) 4

  w(xnk )ψ(xnk )  2an pn (y)(an − y)g(y)hn (y)  dy 3   y − xnk v(xnk )(an − xnk ) 4 0 (xnk ) 4 3

(1 + xnk )δ (an − xnk ) 4

   

2an 0

 pn (y)(an − y)g(y)hn (y)  dy  y − xnk

xnk |Gn (xnk )| ,

k=1

where 

2an

Gn (t) = 0

pn (y)(an − y)Qn (y) − pn (t)(an − t)Qn (t) g(y)hn (y) dy y−t Qn (y)


and Qn ∈ P n a polynomial positive on (0, an ) ( ∈ N fixed). Since Gn ∈ P( +1)n , with the help of (R6) we can estimate J1 ≤

cf v∞ (an )

-

3 4

     2an  H0 pn (an − ·)ghn (x) dx

2an 0



2an

+

n

0

Defining kn1 (x) = sgn obtain J1 ≤

cf v∞ (an )



3 4

2an 0

 ≤ cf v∞

2an

   .    pn (x)(an − x)Qn (x) H2an ghn (x) dx =: J  + J  . 1 1 0   Q





H02an pn (am − ·)ghn (x) and using (6.3.8) and (R4), we

  n 1 pn (x)(an − x)g(x)hn (x) H2a 0 kn (x) dx



0

|g(x)| w(x)ψ(x)

     (6.3.15) g   2an 1 ≤ cf v∞ ρ+ √ .  H0 kn (x) dx wψ

In order to estimate J1 , we choose Qn ∈ P n such that Qn (x) ∼n,x for x ∈ (0, 2an ) (see [153]). Then, due to (R4) and (6.3.8),

√ w(x)ψ(x)

  ghn J1 ≤ cf v∞ kn2 (x) H02an (x) dx Qn  ≤ cf v∞

2an 0

     2   √ g(x) Hk n (x) dx  w(x)ψ(x)

(6.3.15)



 cf v∞ ρ+

g √ wψ

 ,

  n where kn2 (x) = sgn H02an gh Qn (x) . Finally, let us consider J2 . Again taking into account (R5), we get J2 ≤ cf v∞

nθ 



xnk

k=1

= cf v∞

nθ 

1

xnk

k=1



nθ cf v∞  3

(an ) 4

  w(xnk )ψ(xnk )  ∞ pn (y)(an − y)g(y)hn (y)  dy  3  y − xnk v(xnk )(an − xnk ) 4 2an

k=1

(xnk ) 4

3

(1 + xnk )δ (an − xnk ) 4

  ∞  pn (y)(an − y)g(y)hn (y)   dy   y −x 2an

nk

 √  3  ∞ |pn (y)| w(y) y(y − an ) y − an 4 |g(y)| dy , xnk √ 1 y − x w(y)ψ(y) 2an nk 4 (a ) n


where we also used that y − xnk ≥ 2an − an = an . By the finite-infinite range inequality and by (R4),   6   √ pn wψ · − an   

∞,[2an ,∞)

 6      p ≤ c e− cn  wψ |a − ·| n  n 





y − an with a positive constant c = c(n). Hence, in virtue of y − xnk nθ  xnk ≤ an , and (R1),

≤ c e− cn

3 4

≤ 1 for y > 2an ,

k=1

 J2 ≤ cf v∞

∞ 2an

|g(y)| dy ≤ cf v∞ ρ+ √ w(y)ψ(y)



g √ wψ

 .  

Let us apply Lemma 6.3.7 to the integral operator $K : \mathbf C[0,\infty] \longrightarrow \mathbf C[0,\infty]$,
\[
(Kf)(x) = \int_0^\infty K(x,y)\, f(y)\, dy \qquad (6.3.16)
\]
and its product integration approximation $K_n : \mathbf C[0,\infty] \longrightarrow \mathbf C[0,\infty]$,
\[
(K_n f)(x) = \sum_{k=1}^{n_\theta} \widetilde\ell^{\,*}_{nk}(x)\, f(x_{nk}^w) = \int_0^\infty H(x,y)\, \big( L^*_n S_x f \big)(y)\, dy \,, \qquad (6.3.17)
\]
where $w(x) = w_{\alpha,\beta}(x) = x^\alpha e^{-x^\beta}$, $\alpha > -1$, $\beta > \frac12$, where $L^*_n$ is defined in (6.3.14), and where $S_x(y) = S(x,y)$,
\[
K(x,y) = H(x,y)\, S(x,y) \,, \qquad
\widetilde\ell^{\,*}_{nk}(x) = S(x,x_{nk}^w) \int_0^\infty H(x,y)\, \ell^*_{nk}(y)\, dy \,. \qquad (6.3.18)
\]

Proposition 6.3.8 Consider (6.3.16) and (6.3.17) together with (6.3.18) in the Banach space $\mathbf C[0,\infty]$. If $v(x) = (1+x)^\delta \sqrt{w(x)}$ with $\delta \ge \frac14$ and if
(a) $\frac{H_x}{\sqrt{w\psi}} \in \mathbf L\log^+\mathbf L(0,\infty)$ for all $x \in [0,\infty]$, where $H_x(y) = H(x,y)$,
(b) $\sup\left\{ \rho_+\!\left( \frac{H_x}{\sqrt{w\psi}} \right) : 0 \le x \le \infty \right\} < \infty$,
(c) $\lim_{d(x,x_0)\to 0} \rho_+\!\left( \frac{H_x - H_{x_0}}{\sqrt{w\psi}} \right) = 0$ for all $x_0 \in [0,\infty]$,
(d) the map $[0,\infty]^2 \longrightarrow \mathbb C$, $(x,y) \mapsto S(x,y)\, v(y)$ is continuous with $S_x \in \mathbf C\mathbf P_v$ for all $x \in [0,\infty]$,

then the operators Kn form a collectively compact sequence converging strongly to the operator K. Proof We proceed in an analogous way as in the proof of Proposition 6.3.5. For f ∈ C[0, ∞] and a function P (x, y) = Px (y), which is a polynomial in y of degree less than n, we have |(Kn f ) (x) − (Kf ) (x)|  ≤

1

−1

 ! "  H (x, y) L∗ (Sx f − Px ) (x) dx + n  +

1

−1





   w ∗ H (x, y)  dy P (x ) (y) x nk nk     k=nθ +1

∞ 

0

n+1 

|H (x, y) [S(x, y)f (y) − P (x, y)]| dx =: J1 + J1 + J3 ,

By Lemma 6.3.7 it is clear that  J1 ≤ c ρ+ Condition (a) together with δ ≥

1 4

Hx √ wϕ

 (Sx f − Px )v∞ .

implies Hx v −1 ∈ L1 (−1, 1), and hence,

    J3 ≤ Hx v −1  (Sx f − Px )v∞ . 1

   Consequently, since we have also sup Hx v −1 1 : −1 ≤ x ≤ 1 < ∞ by condition (b), we get J1 + J3 ≤ c sup (Sx f − Px )v∞ .

(6.3.19)

−1≤x≤1

To estimate J2 we recall that (see [142, (2.3)]) Pn uL∞ (xn

θ ,∞)

≤ c e− c n Pn u∞

for Pn ∈ Pm(n)

(m(n) < n, lim m(n) = ∞) for some constants c = c(n, Pn ) and c = c(n, Pn ) n→∞ and (cf. [127, pp. 362,373]) n+1  v(x) ∗nk (x) ≤ c nσ w) v(xnk

k=nθ +1

for some σ > 0 and c = c(n, x). Thus,         J2 ≤ c nσ Hx v −1  Px vL∞ (xnθ ,∞) ≤ cnσ e− c n Hx v −1  Px v∞ 1

1


and Px ∈ Pm(n) can be chosen in such a way that sup {Px v∞ : x ∈ [0, ∞]} < ∞ (in view of Lemma 6.3.4). Hence, together with (6.3.19) we conclude the strong convergence of Kn to K. Consequently, the set   Kn f ∞ : f ∈ C[0, ∞], f ∞ ≤ 1 is bounded. Furthermore, for f ∞ ≤ 1, |(Kn f )(x) − (Kn f )(x0 )|  ≤

1

−1

 ! "  H (x, y) L∗ (Sx − Sx )f (y) dy + 0 n



1

−1

    [H (x, y) − H (x0 , y)] L∗ Sx f (y) dy n 0

   . -      Lemma 6.3.7 Hx (Sx − Sx )v  + ρ+ Hx√− Hx0 Sx v  . ≤ c ρ+ √ 0 0 ∞ ∞ wϕ wϕ

  Thus, due to (b), (c), and (d), the set Kn f : f ∈ C[−1, 1], f ∞ ≤ 1 is equicontinuous in each point x0 ∈ [0, ∞], and so equicontinuous on [0, ∞].  

6.3.3 Application to Weakly Singular Integral Equations

We are going to apply the results on the sequence of operators $\widetilde K_n$ defined in (6.3.2), when the kernel function $K(x,y)$ in (6.3.1) is of the form $W(|x-y|)\, S(x,y)$ with $W(t) = \ln t$ or $W(t) = t^{-\mu}$ with $0 < \mu < 1$.

Lemma 6.3.9 For $s, t \ge 0$ and $0 < \eta \le 1$, the inequality
\[
\left| s^\eta - t^\eta \right| \le |t-s|^\eta \qquad (6.3.20)
\]
is valid.

Proof We can assume $0 \le s < t$. Then $r = \frac st \in [0,1)$, and it remains to prove that $1 - r^\eta \le (1-r)^\eta$. Defining the function $\gamma(r) := \frac{1 - r^\eta}{(1-r)^\eta}$, this follows directly from $\gamma(0) = 1$ and $\gamma'(r) < 0$ for $0 < r < 1$.
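The elementary inequality (6.3.20) is used repeatedly below; a quick numerical spot check (not from the book) can be done as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
s, t = rng.uniform(0.0, 10.0, size=(2, 100_000))   # random pairs s, t >= 0
for eta in (0.1, 0.5, 0.99, 1.0):                   # exponents 0 < eta <= 1
    lhs = np.abs(s**eta - t**eta)
    rhs = np.abs(t - s)**eta
    assert np.all(lhs <= rhs + 1e-12), eta          # (6.3.20) holds up to rounding
```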

Lemma 6.3.10 For $\alpha, \beta > -1$, $1 \le p < \infty$, and $x_0, x \in [-1,1]$, we have
\[
\lim_{x \to x_0} \int_{-1}^{1} \left| \ln\left| \frac{x-y}{x_0-y} \right| \right|^p v^{\alpha,\beta}(y)\, dy = 0 \,. \qquad (6.3.21)
\]

Proof For 0 < δ < 1, set Iδ := [−1, 1] ∩ [x0 − δ, x0 + δ] and write the integral in (6.3.21) as the sum J1 (x, δ) + J2 (x, δ), where the first addend is the integral


over Iδ and second one the integral over [−1, 1] \ Iδ . Choose 0 < η < η0 := 1 − max {0, −α, −β}. Then 

1

   ln |x − y|p v α,β (y) dy

[J1 (x, δ)] p ≤

1

p



   ln |x0 − y|p v α,β (y) dy

+



1

p



%

−η

≤c

|x − y|

 η0 −1  p1   η0 −1  p1 1 − y2 dy + |x0 − y|−η 1 − y 2 dy



&



 1 1 =: c [J (x0 , x, δ)] p + [J (x0 , x0 , δ)] p .

(6.3.22) Let x ∈ Iδ . In case of x0 = 1 we can estimate η0 −1

%

J (x0 , x, δ) ≤ (2 − δ)

x

η0 −η−1

(x − y)

 dy +

1−δ

=

1+x 2

η0 −η−1

(y − x)

 dy +

x

1 1+x 2

& η0 −η−1

(1 − y)

dy

% &   2η0 −η + 2 η0 −η (2 − δ)η0 −1 1 − x η0 −η (x + δ − 1)η0 −η + 2 ≤ δ (2 − δ)η0 −1 η0 − η 2 η0 − η

and in case of x0 = −1 in the same way. If −1 < x0 < 1 with

δ < min {1 − x0 , 1 + x0 } ,

then   J (x0 , x, δ) ≤ (1 − δ)2 − x02

x0 +δ

x0 −δ

-  = (1 − δ)2 − x02

x

x0 −δ

=

|x − y|−η dy

(x − y)−η dy +



x0 +δ

(y − x)−η dy

.

x

(1 − δ)2 − x02  2(2δ)1−η  (x − x0 + δ)1−η + (x0 + δ − x)1−η ≤ (1 − δ)2 − x02 . 1−η 1−η

Consequently, if ε > 0 then there exists a δ > 0 such that J1 (x, δ) < x ∈ Iδ . Now choose 0 < δ1 < 1 such that  ε  1  p

| ln(1 + z)| < Then, for x ∈ Iδ1 δ

2

1

−1

ε 2

for all

−1 v α,β (y) dy

∀ z ∈ (−δ1 , δ1 ) .

   x − x0   < δ1 and  and y ∈ [−1, 1] \ Iδ , we have  x0 − y    p   ln 1 + x − x0  v α,β (y) dy < ε ,  x0 − y  2 [−1,1]\Iδ

 J2 (x, δ) =

and the lemma is proved.

 

Lemma 6.3.11 For $\alpha, \beta > -1$, $0 < \mu < 1$, $1 \le p < \frac{1 - \max\{0,-\alpha,-\beta\}}{\mu}$, and $x_0, x \in [-1,1]$,
\[
\lim_{x \to x_0} \int_{-1}^{1} \left| |x-y|^{-\mu} - |x_0-y|^{-\mu} \right|^p v^{\alpha,\beta}(y)\, dy = 0 \,. \qquad (6.3.23)
\]

Proof In the same manner as in the proof of Lemma 6.3.10, we write the integral in (6.3.23) as the sum of the integrals J1 (x, δ) and J2 (x, δ) over Iδ = [−1, 1] ∩ [x0 − δ, x0 + δ] and [−1, 1] \ Iδ , respectively, where 0 < δ < 1. For J1 (x, δ), we have  1 1 1 [J1 (x, δ)] p ≤ c [J (x0 , x, δ)] p + [J (x0 , x0 , δ)] p , where J (x0 , x, δ) is defined as in (6.3.22) with 0 < η < η0 = 1 − max {0, −α, −β} and η = μp. Hence, in virtue of the proof of Lemma 6.3.10, for every ε > 0 there exists a δ > 0 such that J1 (x, δ) < 2ε for all x ∈ Iδ . Further, taking into account Lemma 6.3.9 we get, for x ∈ Iδ1 with ⎫ ⎧ % −1 & η1 ⎬ ⎨ δ δ 2 ε  1 , v α,β (y) dy 0 < δ1 < min , ⎭ ⎩ 2 2 2 −1 the estimate  J2 (x, δ) =

[−1,1]\Iδ

 ≤

[−1,1]\Iδ

 p |x0 − y|−η |x − y|−η |x0 − y|μ − |x − y|μ  v α,β (y) dy |x0 − y|−η |x − y|−η |x − x0 |η v α,β (y) dy

(2|x − x0 |)η ≤ δ 2η



1 −1

v α,β (y) dy
−1, 0 ≤ γ < min 34 + α2 , 1 + α ,   and 0 ≤ δ < min 34 + β2 , 1 + β , and if the function S : [−1, 1]2 −→ C is   n given by (6.3.25) converges strongly continuous, then the operator sequence K defined by (6.3.24). and collectively compact in the space Cγ ,δ to K Proof In case δ − β < γ − α we choose γ0 = 0, δ0 = γ − α − δ + β, and p ≥ 1 such that  min

1 3 4



α 2 , δ0

+

4 3



β 2

 −1 , we can apply Lemma 6.3.10 to obtain H (x, .) − H (x0 , .)Lp

v −γ0 ,−δ0

 =

1 −1

  γ,δ v (x) ln |x − y| − v γ,δ (x0 ) ln |x0 − y|p v p(α−γ −γ0 ),p(β−δ−δ0 ) (y) dy 

≤v

γ,δ

(x)

1 −1

   ln |x − y| − ln |x0 − y|p v p(α−γ −γ0 ),p(β−δ−δ0 ) (y) dy

  + v γ,δ (x) − v γ,δ (x0 ) −→ 0

if



1 −1

 p1

 p1

   ln |x0 − y|p v p(α−γ −γ0 ),p(β−δ−δ0 ) (y) dy

 p1

x −→ x0 .

Consequently, also condition (H2) is satisfied. Finally, the choice of p guarantees that the conditions of Lemma 6.3.1 are in force, which implies that (H3) is true.   In case 0 < μ < 1, we have the following.

390

6 Numerical Methods for Fredholm Integral Equations

Proposition 6.3.13 Suppose that α, β > −1, γ , δ ≥ 0, and 0 < μ < 1, and that S : [−1, 1]2 −→ C is continuous. If $ # $ # 3 β 3 α + ,1 + α and μ + δ < min + ,1 + β , μ + γ < min 4 2 4 2   n given by (6.3.25) converges strongly and collecthen the operator sequence K tively compact to K defined by (6.3.24) in the space Cγ ,δ . Proof Analogously to the proof of Proposition 6.3.12, we show that we can apply Lemma 6.1.3 to the operators Kn in (6.3.26) and K in (6.3.27) in case v ≡ 1 and p X0 = Lv −γ0 ,−δ0 for appropriate chosen p ≥ 1 and γ0 , δ0 ≥ 0. For this, we take #

1 1 1 ≤ p < min , max {μ, μ + γ − α} max {μ, μ + δ − β}

$

and further γ0 , δ0 ≥ 0 such that 1 3 1 α + − < γ0 < + α − γ − μ and 2 p 4 p

β 1 3 1 + − < δ0 < + β − δ − μ . 2 p 4 p

Let us check conditions (H1)–(H3). p The space Lv −γ0 ,−δ0 is continuously embedded in L1 and the functionals (6.3.28) p are linear and bounded on Lv −γ0 ,−δ0 , i.e., condition (H1) is fulfilled. Because of μp < 1, μ + γ + γ0 − α < p1 , and μ + δ + δ0 − β < p1 , we have H (x, .) ∈ p Lv −γ0 ,−δ0 for all x ∈ [−1, 1] and H (x, y) defined in (6.3.26). Furthermore, due to p(α − γ − γ0 ) > μp − 1 > −1, p(β − δ − δ0 ) > μp − 1 > −1, and p
0 , π

where x = cos θ . To get a formula for nk (x) in (6.3.26) we need to compute  λFnk (ln |x − .|) =

where σnk (x) =

n−1 

1 −1

ln |x − y| σnk (y)  dy . 1 − y2

αj Tj (x) are the fundamental Lagrange interpolation polyno-

j =0

σ , k = 1, . . . , n. With the help of the respective Gaussian mials w.r.t. the nodes xnk rule we get

 αj =

1

dy π σ σnk (y)Tj (y)  = Tj (xnk ) 2 n −1 1−y

and consequently, in virtue of formula (5.4.4), λFnk (ln |x−.|)

⎡ ⎤  1 n−1 n−1  π  dy π ⎣ π σ σ ln 2 + Tj (xnk )Tj (x)⎦ . = Tj (xnk ) ln |x−y| Tj (y)  =− n n j 1 − y2 −1 j =0 j =1
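The last display can be turned directly into a small routine. The following Python sketch (not code from the book) evaluates $\lambda^F_{nk}(\ln|x-\cdot|)$ for all $k$ at a fixed $x$. It relies only on the classical identities $\int_{-1}^1 \ln|x-y|\,(1-y^2)^{-1/2}\, dy = -\pi\ln 2$ and $\int_{-1}^1 \ln|x-y|\, T_j(y)\,(1-y^2)^{-1/2}\, dy = -\pi T_j(x)/j$ for $j \ge 1$, together with $\sigma_{nk}(y) = \frac1n + \frac2n \sum_{j=1}^{n-1} T_j(x^\sigma_{nk})\, T_j(y)$, so the factor conventions are the ones implied by these identities rather than a verbatim copy of the book's display.

```python
import numpy as np

def log_kernel_weights(n, x):
    """Modified moments lambda^F_nk(ln|x - .|), k = 1..n, for the product rule based
    on Lagrange interpolation at the Chebyshev nodes of the first kind."""
    k = np.arange(1, n + 1)
    x_nk = np.cos((2 * k - 1) * np.pi / (2 * n))          # Chebyshev nodes of the first kind
    j = np.arange(1, n)                                    # j = 1, ..., n-1
    T_at_nodes = np.cos(np.outer(j, np.arccos(x_nk)))      # T_j(x_nk)
    T_at_x = np.cos(j * np.arccos(np.clip(x, -1.0, 1.0)))  # T_j(x)
    # lambda_k = -(pi/n) * [ ln 2 + 2 * sum_j T_j(x_nk) T_j(x) / j ]
    lam = -(np.pi / n) * (np.log(2.0) + 2.0 * (T_at_nodes.T @ (T_at_x / j)))
    return x_nk, lam
```

A simple consistency check: since the fundamental polynomials sum to one, the weights satisfy $\sum_k \lambda^F_{nk}(\ln|x-\cdot|) = -\pi\ln 2$ for every $x \in [-1,1]$, which the sketch reproduces because $\sum_k T_j(x^\sigma_{nk}) = 0$ for $1 \le j \le n-1$.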


6.4 Integral Equations with Logarithmic Kernels

Here we study some collocation and collocation-quadrature methods for Fredholm integral equations of the first kind with logarithmic kernel functions, where we use the respective mapping properties presented in Sect. 5.4 (see also [82] and [77]). Furthermore, we also discuss the possibility of constructing, based on the previous results, fast algorithms for such equations (see [55]).

6.4.1 The Well-posed Case

Let us first consider the simple equation
\[
\frac1\pi \int_{-1}^{1} \ln|y-x|\, \frac{u(y)}{\sqrt{1-y^2}}\, dy = f(x) \,, \qquad -1 < x < 1 \,, \qquad (6.4.1)
\]

where $u \in \mathbf L^2_\sigma$ is sought and $f \in \mathbf L^{2,1}_\sigma$ is given. With the operator $\mathcal W_0 : \mathbf L^2_\sigma \longrightarrow \mathbf L^{2,1}_\sigma$, where $\mathcal W_0 = \mathcal W_{-\frac12,-\frac12}$ (cf. (5.4.1) and (5.4.4)), we write (6.4.1) as the operator equation
\[
\mathcal W_0 u = f \,. \qquad (6.4.2)
\]
If we consider this equation in the pair of spaces $\left( \mathbf L^{2,s}_\sigma, \mathbf L^{2,s+1}_\sigma \right)$, then it can be written equivalently as
\[
\mathbf W_0 \xi = \eta \qquad (6.4.3)
\]
with $\eta = J_{s+1} f \in \ell^2_{s+1}$ and $\xi = J_s u \in \ell^2_s$. Here
\[
\mathbf W_0 = J_{s+1} \mathcal W_0 J_s^{-1} = \operatorname{diag}\left( \alpha_n \right)_{n=0}^\infty : \ell^2_s \longrightarrow \ell^2_{s+1} \,, \qquad
(\xi_n)_{n=0}^\infty \mapsto (\alpha_n \xi_n)_{n=0}^\infty \,,
\]
where $\alpha_0 = -\ln 2$, $\alpha_n = -\frac1n$, $n \in \mathbb N$, and where $J_s = J_s^{-\frac12,-\frac12}$ (cf. (5.4.7), (5.4.8), and (5.4.4)). Define
\[
\mathbf P_n : \ell^2_s \longrightarrow \ell^2_s \,, \qquad (\xi_k)_{k=0}^\infty \mapsto (\xi_0, \ldots, \xi_{n-1}, 0, 0, \ldots) \,, \qquad (6.4.4)
\]
and let us look for an approximate solution $\xi^n = \left( \xi^n_k \right)_{k=0}^\infty \in \operatorname{im} \mathbf P_n$ by solving $\mathbf W_0^n \xi^n = \eta^n \in \operatorname{im} \mathbf P_n$ with $\mathbf W_0^n = \mathbf P_n \mathbf W_0 |_{\operatorname{im} \mathbf P_n} = \mathbf W_0 |_{\operatorname{im} \mathbf P_n}$. Of course, $\xi^n = \mathbf W_0^{-1} \eta^n$, which implies
\[
\left\| \xi^* - \xi^n \right\|_{\ell^2_t} \le \left\| \mathbf W_0^{-1} \right\|_{\ell^2_{t+1} \to \ell^2_t} \left\| \eta - \eta^n \right\|_{\ell^2_{t+1}} \,,
\]
where $\left\| \mathbf W_0^{-1} \right\|_{\ell^2_{t+1} \to \ell^2_t} = (\ln 2)^{-1}$ for all $t \in \mathbb R$ and where $\xi^*$ is the solution of (6.4.3). Consequently, we can approximate the solution $u^* \in \mathbf L^{2,s}_\sigma$ of (6.4.2) by $u^*_n = \sum_{k=0}^{n-1} \xi^n_k p_k^\sigma$, if $f_n = \sum_{k=0}^{n-1} \eta^n_k p_k^\sigma$ is an appropriate approximation of $f$. For example, $f_n(x)$ can be chosen as the interpolation polynomial $(L_n^\sigma f)(x)$ of $f(x)$ with respect to the Chebyshev nodes of the first kind $x_{nk}^\sigma = \cos\frac{(2k-1)\pi}{2n}$, $k = 1,\ldots,n$. If $f \in \mathbf L^{2,s}_\sigma$ for some $s > \frac12$, then (see Lemma 3.2.39)
\[
\lim_{n\to\infty} \left\| L_n^\sigma f - f \right\|_{\sigma,s} = 0 \quad\text{and}\quad
\left\| L_n^\sigma f - f \right\|_{\sigma,t} \le c\, n^{t-s} \| f \|_{\sigma,s} \,, \quad 0 \le t \le s \,. \qquad (6.4.5)
\]
It follows, for $s > -\frac12$ and $f \in \mathbf L^{2,s+1}_\sigma$,
\[
\left\| u^* - u^*_n \right\|_{\sigma,t} \le c\, n^{t-s} \| f \|_{\sigma,s+1} \,, \quad -1 \le t \le s \,. \qquad (6.4.6)
\]
The numbers $\eta^n_k = \left\langle L_n^\sigma f, p_k^\sigma \right\rangle_\sigma$ can be computed efficiently by
\[
\eta^n_k = \frac{\pi}{n} \sum_{j=1}^{n} f(x_{nj}^\sigma)\, p_k^\sigma(x_{nj}^\sigma)
= \begin{cases}
\dfrac{\sqrt\pi}{n} \displaystyle\sum_{j=1}^{n} f(x_{nj}^\sigma) & : \; k = 0, \\[2ex]
\dfrac{\sqrt{2\pi}}{n} \displaystyle\sum_{j=1}^{n} f(x_{nj}^\sigma) \cos\dfrac{k(2j-1)\pi}{2n} & : \; k = 1,\ldots,n-1,
\end{cases} \qquad (6.4.7)
\]

6.4.2 The Ill-posed Case Equation (6.4.1) is a special case of integral equations of the form 1 π



1 −1

!

κ  " u(y) dy (y − x)κ ln |y − x| + h(x, y)  = f (x) + αj pjσ−1 (x) , 1 − y2 j =1 (6.4.8)


where −1 < x < 1, where κ = 0, 1, 2, and where αj are unknown constants to be determined together with u(x). With the notations from Sect. 5.4, we can write these equations as (Wk + H)u = f +

κ−1 

ζj pjσ (x) ,

(6.4.9)

j =0

where the operator H is given by 1 (Hu)(x) = π





h(x, y)u(y) 

0

dy

−1 < x < 1 .

,

1 − y2

(6.4.10)

Note, that there is an essential difference between the case κ = 0 and the −→ L2,s+1 is cases κ = 1, 2. While in the first case the operator W0 : L2,s σ σ 2,s+1+κ , κ = 1, 2, have no closed image invertible, the operators Wκ : L2,s −→ L σ (cf. Corollary 5.4.5, Remark 5.4.6, and Corollary 5.4.14). That means, solving Eq. (6.4.8) in case κ = 1, 2 w.r.t. the mentioned spaces is an ill-posed problem. Our aim is to solve (6.4.8) resp. (6.4.9) numerically by means of a collocationquadrature method and to derive convergence rates. The main ideas are explicitly presented in case κ = 1, since this is then easily translated to case κ = 2. Let us start with the so-called dominant (or unperturbed) equation 1 π



1 −1

(y−x) ln |y−x| u(y) 

dy

ζ0 = f (x)+ √ , π 1 − y2

−1 < x < 1 ,

(6.4.11)

where the function u ∈ L2σ and the constant ζ0 ∈ C are unknown. Here we have to look for an additional unknown ζ0 ∈ C, which is suggested by the statements of Corollary 5.4.8. Comparing with (6.4.9) we can write (6.4.11) shortly as W1 u = f + ζ0 p0σ .

(6.4.12)

This is equivalent to the equation (cf. Corollary 5.4.4) W1 ξ = η + (ζ0 , 0, 0, . . .) ,

(6.4.13)

where η = Js+2 f , ξ = Js u, and W1 = Js+2 W1 Js−1 . By Pn : L2σ −→ L2σ we refer to the orthoprojection Pn f =

n−1 

k=0

f, pkσ

σ

pkσ .


We look for an approximate solution un ∈ im Pn−1 by solving the collocation equations σ σ ) = f (xnj ) + ζ0n p0σ , (W1 un ) (xnj

j = 1, . . . , n .

(6.4.14)

Taking into account that W1 un ∈ im Pn for all un ∈ im Pn−1 (cf. (5.4.11)), system (6.4.14) can be written equivalently as W1 un = Lσn f + ζ0n p0σ ,

(6.4.15)

or as n , 0, 0, . . .) , W1 ξ n = (η0n + ζ0n , η1n , . . . , ηn−1

ξ n = Js un ∈ im Pn−1 ,

(6.4.16)

with the projection Pn defined in (6.4.4) and with ηkn given in (6.4.7). Due to Corollary 5.4.7 and the fact, that ηjn = 0 for j ≥ n, for the solution of (6.4.16) 

n−1

2 

we have that ζ0n = −4β0

n m η2m − η0n ,

m=1  n ξ2j +1

n−1

2 

= −4(2j + 1)

-

n m η2m ,

m=j +1

. n−1 j = 0, . . . , − 1, 2

(6.4.17)

n and that ξ2(j +1) is equal to

⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨

[ n2 ]−1 1  n − (2m + 1)η2m+1 β0

:

j = −1 ,

m=0

n [ ⎪ 2 ]−1 ⎪ ! " ⎪ n ⎪ ⎪ (2m + 1)η2m+1 , : j = 0, . . . , n2 − 2 . −4(j + 1) ⎪ ⎩

(6.4.18)

m=j +1

Consequently, provided that the assumptions of Corollary 5.4.7 are fulfilled, lim ζ0n = ζ0 , where, taking into account (6.4.5),

n→∞

⎛

n−1 2

⎜ n 2(m + 1) |η2m − η2m | + |ζ0n − ζ0 | ≤ c ⎜ ⎝ m=0

⎞ ∞   m= n−1 2 +1

  1  3 ≤ c n 2 Lσn f − Pn f σ,1 + n 2 −s f σ,s ≤ c n 2 −s f σ,s . 3

⎟ 2m |η2m |⎟ ⎠


. n−1 − 1, Moreover, for j = 0, . . . , 2 -



1 2j + 1

n−1

2     n  n 2m |η2m − η2m | + 2 ξ2j +1 − ξ2j +1  = 2

m=j +1

≤c



∞   m= n−1 2 +1

2m |η2m |

  1 3 n 2 Lσn f − Pn f σ,1 + n 2 −s f σ,s

3

≤ c n 2 −s f σ,s and, analogously,   3 1  n  ξ2j − ξ2j  ≤ c n 2 −s f σ,s , 2(j + 1)

j = 0, . . . ,

n 2

−1

such that un − Pn uσ,t

F Gn−1 G = H (k + 1)2t |ξkn − ξk |2 ≤ c n3+t −s f σ,s ,

0≤t.

k=0

Taking into account u − Pn uσ,t ≤ c nt −τ uσ,τ , t ≤ τ , we get   un − uσ,t ≤ c n3+t −s f σ,s + nt −τ uσ,τ ,

0≤t ≤τ.

Now one can use s > τ + 3 and uσ,τ ≤ c f σ,s (cf. Lemma 6.4.4(a) below). Hence, the following proposition has been proved. Proposition 6.4.1 Assume that f ∈ L2,s σ for some s > τ + 3, τ ≥ 0. Then the unique solution (un , ζ0n ) of Eq. (6.4.15) converges in L2σ × C to the unique solution (u, ζ0 ) ∈ L2σ × C of Eq. (6.4.12), where, for 0 ≤ t ≤ τ ,   n ζ − ζ0  ≤ c n 32 −s f σ,s 0

and un − uσ,t ≤ c nt −τ f σ,s

with a constant c = c(n, f ). Now we turn back to the complete Eq. (6.4.8) resp. (6.4.9) in case κ = 1. Assume that, for some s0 > 3, the continuous function h : [−1, 1]2 −→ C satisfies 2,s0

(A) h(·, y) ∈ Lσ

uniformly w.r.t. y ∈ [−1, 1].


2 2 Then,  operator H : Lσ −→ Lσ defined by (6.4.10), we have H ∈  for the 2,s L L2σ , Lσ 0 (see Lemma 2.4.6). At first we look for an approximate solution (un , ζ0n ) ∈ im Pn−1 × C of the equation

(W1 + H)u = f + ζ0 p0σ

(6.4.19)

by solving the collocation equations  ζn 1 1 un (y) dy σ σ σ σ ) ln |y − xnj | + h(xnj , y)  = f (xnj ) + √0 , (y − xnj π −1 π 1 − y2

j = 1, . . . , n .

We write this system of equations in the form ,n )(un , ζ0 ) = Lσn f , ,1 + H (W

(un , ζ0n ) ∈ im Pn−1 × C ,

(6.4.20)

, ζ0 ). , ζ0 ) = Hu, and H ,n (u, ζ0 ) = Lσn H(u, ,1 (u, ζ0 ) = W1 u − ζ0 pσ , H(u, where W 0 2,s 2,s Furthermore, we equip , Lσ := Lσ × C with the norm (u, ζ0 )σ,s = 6 u2σ,s + |ζ0 |2 . In what follows we will assume that (B) equation (6.4.19) with f ≡ 0 has only the trivial solution in , L2σ . , 0 given by Corollary 5.4.7 and define, for 2 < Remember the solution operator W 1 0 2,s , ,0 ,−1 , 0 , t +3 < s, the operator W1 : Lσ −→ , L2,t σ as W1 = Jt W1 Js , where Jt (u, ζ0 ) := (Jt u, ζ0 ). , 0 : L2,s ,2,t Lemma 6.4.2 Let 2 < t + 3 < s. The above defined operator W σ −→ Lσ 1 is linear and bounded and satisfies ,1 W , 0f = f W 1

∀ f ∈ L2,s σ

and , 0W , W 1 1 (u, ζ0 ) = (u, ζ0 )

∀ (u, ζ0 ) ∈ , L2,−1 : W1 u ∈ L2,s σ σ .

,1 (ξ, ζ0 ) := W1 ξ − (ζ0 , 0, . . .), from Corollary 5.4.7 we have Proof With W , 1W , 0 η = η for all η ∈ 2s . Moreover, for η ∈ 2s , W 1

 ∞ 2 ∞         , 0 2 (n + 1)2t +2  (m + 1)ηm  W1 η 2 ≤ c m=n  t n=0

≤c

∞  (n + 1)2t +2(n + 1)1+2(1−s)η2 2 ≤ cη2 2 , n=0

s

s

, 1W , 0W , since 2t + 5 − 2s < −1. If ξ ∈ 2−1 such that W1 ξ ∈ 2s , then W 1 1 (ξ, ζ0 ) = 0, , ,   W1 (ξ, ζ0 ) and consequently, due to Corollary 5.4.7, W1 W1 (ξ, ζ0 ) = (ξ, ζ0 ).


Remark 6.4.3 From the proof of Lemma 6.4.2 we see that ∞ 1 2   0 2(t −s)+5 W ,  2,s ,2,t ≤ c (n + 1) =: ρs−t . 1 L →L σ

σ

n=0

Lemma 6.4.4 Assume that 3 ≤ t + 3 < s < s0 and that conditions (A) and (B) are fulfilled. (a) There exists a constant ct s > 0 such that    (W ,1 + H)(u, , ζ 0 )

σ,s

≥ ct s (u, ζ0 )σ,t

∀ (u, ζ0 ) ∈ , L2σ : W1 u ∈ L2,s σ .

(b) For all sufficiently large n,   (W ,1 + H ,n )(u, ζ0 ) ≥ ct s (u, ζ0 )σ,t σ,s 2

∀ (u, ζ0 ) ∈ , L2σ : W1 u ∈ L2,s σ .

In particular, Eq. (6.4.20) is uniquely solvable for such n. Proof 2,t (a) By Lemma 6.4.2 we have that u ∈ L2σ and W1 u ∈ L2,s σ imply u ∈ Lσ . Thus, if we assume that such a constant ct s > 0 does not exist, then n  2,s  there are (un , ζ0n ) ∈ , L2,t σ with (un , ζ0 ) σ,t = 1, W1 un ∈ Lσ , and   0 ,1 + H)(u , n , ζ n ) = 0. Due to the compact embedding L2,s ⊂ lim (W σ 0

n→∞ L2,s σ (see

σ,s

Lemma 2.4.7), the operator H : L2,t −→ L2,s σ σ is compact, and n , n , ζ ) converges to some f ∈ L2,s we can assume that H(u σ w.r.t. .σ,s . 0  n   , Hence, lim W1 (un , ζ0 ) + f σ,s = 0. Using Lemma 6.4.2 we conclude that   n→∞  , 0f  lim (un , ζ0n ) + W = 0. Together with 1 

n→∞

σ,t

   M ,)W M10 f  (W1 + H 

σ,s

   ,W M10 f  = f + H 

σ,s

    M1 (un , ζ0n ) +  M1 (un , ζ0n ) M10 f − W ,W ≤ f + W H  σ,s

σ,s

  M1 (un , ζ0n ) ≤ f + W σ,s

   , M0 ,(un , ζ0n ) + H W1 f + H 

σ,s

    M1 (un , ζ0n ) + c M10 f + (un , ζ0n ) ≤ f + W W  σ,s

σ,t

  M1 + H ,)(un , ζ0n ) + (W σ,s

  M1 + H ,)(un , ζ0n ) + (W σ,s

 0  ,1 + H) ,W , 0 f = 0 in contradiction to condition , f  = 1 and (W we get W 1 1 σ,t (B).


2,s , ∈ L(, (b) Due to assumption (A) we have H L2σ , Lσ 0 ). Consequently, taking into account (6.4.5),

   (H ,− H ,n )(u, ζ0 )





r−s0 H(u, r−s0 (u, ζ ) , , ζ0 ) 0 σ σ,r ≤ c n σ,s0 ≤ c n

(6.4.21)

0 ≤ r ≤ s0 , which implies together with (b)     ,1 + H ,− H ,n )(u, ζ0 ) ,n )(u, ζ0 ) + (H ct s (u, ζ0 )σ,t ≤ (W σ,s σ,s   ,1 + H ,n )(u, ζ0 ) + ct s (u, ζ0 )σ,t ≤ (W σ,s 2 for (u, ζ0 ) ∈ , L2σ with Bu ∈ L2,s σ and for all sufficiently large n.

 

Proposition 6.4.5 Let 3 ≤ 3 + τ < s < s0 , 0 ≤ t ≤ τ , f ∈ L2,s σ , and assume that conditions (A) and (B) are fulfilled. Then, for all sufficiently large n, Eq. (6.4.20) has a unique solution (un , ζ0n ). Moreover, there is a constant c = c(f, t) such that  n  ζ − ζ ∗  ≤ c n−τ f σ,s 0

0

  and un − u∗ σ,t ≤ c nt −τ f σ,s

where (u∗ , ζ0∗ ) ∈ , L2,τ σ is the unique solution of (6.4.19). Proof Choose some ε ∈ (0, s − τ − 3) and let (un , ζ0n ) ∈ , L2,τ σ be the solution of (6.4.20), which exists uniquely for all sufficiently large n (cf. Lemma 6.4.4(b)) n u ∈ im Pn . With and which belongs automatically to im Pn−1 × C since Lσn f − H the help of Lemma 6.4.4(b) we get, for all sufficiently large n and m,      ct,3+t+ε  (un , ζ n ) − (um , ζ m ) ≤ Lσ f − Lσ f  m − H n )(um , ζ m ) +  (H . n m 0 0 0 σ,t σ,3+t+ε σ,3+t+ε 2

(6.4.22)   Since the norms (um , ζ0m )σ are bounded (cf. Lemma 6.4.4(b)), the estimate (6.4.22) together with (6.4.21) implies that (un , ζ0n ) converges in the norm of ∗ ∗ ∗ ,2,τ , , ∗ ∗ , L2,τ σ to some (u , ζ0 ) ∈ Lσ . Set f = (W1 + H)(u , ζ0 ). The estimation   ∗ f − f 













n  n   ∗  , ,  σ  M , σ,2 ≤ f − (W1 + H)(un , ζ0 ) σ,2 + (H − Hn )(un , ζ0 ) σ,2 + Ln f − f σ,2

shows that f ∗ = f . Relations (6.4.22), (6.4.5), and (6.4.21) lead to   (un , ζ n ) − (u∗ , ζ ∗ ) ≤ 0 σ 0

2 c0,3+ε

       Lσ f − f  ,−H ,n ,2 2,3+ε (u∗ , ζ ∗ ) + H n 0 σ,3+ε σ L →L

≤ c n3+ε−s f σ,s .

σ

σ


Since W1 u∗ ∈ L2,s σ we can apply Lemma 6.4.4(a) and obtain       un − u∗  ≤ un − Pn−1 u∗  + Pn−1 u∗ − u∗  σ,t σ,t σ,t     ≤ nt un − Pn−1 u∗ σ + nt −τ u∗ σ,τ   nt −τ f σ,s ≤ nt un − u∗ σ + cτ,s   ≤ c n3+t +ε−s + nt −τ f σ,s ,  

which completes the proof.

In what follows we improve the convergence rate of ζ0n given by Proposition 6.4.5. Lemma 6.4.6 Let the conditions (A) and (B) be fulfilled and let 3 < r < s0 . Then dim NL2,−r (W1 + H)∗ = 1 , σ

−→ Lσ2,2−r denotes the ·, ·-adjoint operator to W1 +H : where (W1 +H)∗ : L2,−r σ 2,r−2 2,r Lσ −→ Lσ . L2,r σ

2,r−2 Proof We show that p0σ ∈  (W ) . Assume that there are ele1 + H)(Lσ 2,r−2  ments un ∈ Lσ with lim (W1 + H)un − p0σ σ,r = 0. Since, in view of n→∞

∞ Lemma 6.4.4(a), the sequence (u)n=1 is bounded in L2σ , we can assume, due to the 2 2,r compactness of H : Lσ −→ Lσ , that lim Hun − f σ,r = 0 for some f ∈ L2,r σ . n→∞   σ Hence, lim W1 un − p0 + f σ,r = 0, which implies together with Lemma 6.4.2  n→∞   , 0f  ,0 lim (un , 1) + W 1  = 0. Consequently, W1 f = (0, 0) and (cf. the proof of n→∞ σ Lemma 6.4.4(a))

   M ,)W M10 f  (W1 + H 

σ,r

      M1 (un , 1) +c M1 + H ,)(un , 1) −→ 0 , M10 f + (un , 1) ≤ f + W W  +(W σ,r σ,r σ

,1 + H) ,W , 0 f = 0 in contradiction to condition (B). So we can such that (W 1 −→ Lσ2,2−r is nontrivial. Let conclude that the nullspace of (W1 + H)∗ : L2,−r σ ∗ ∗ ∗ v1 , v2 ∈ NL2,−r (W1 + H) \ {0} and let (un , μn ) ∈ , L2σ , n = 0, 1, 2, . . ., be the σ solutions of (W1 + H)un = pnσ + μn p0σ , which exist uniquely in accordance with Proposition 6.4.5. It follows μ∗n vj (p0σ ) = −vj (pnσ ), n = 0, 1, 2, . . ., j = 1, 2. This implies vj (p0σ ) = 0, j = 1, 2, and v1 (pnσ ) = α v2 (pnσ ), n = 0, 1, 2, . . ., where v (p σ )

α = v12 (p0σ ) . Consequently, since the set of polynomials is dense in L2,r σ , v1 = α v2 , 0 i.e., dim NL2,−r (W1 + H)∗ = 1.   σ


Corollary 6.4.7 Let the conditions of Proposition 6.4.5 and Lemma 6.4.6 be fulfilled and assume that 0 NL2,−r (W1 + H)∗ ⊂ L2,−r σ

(6.4.23)

|ζ0n − ζ0∗ | ≤ c nr0 −s f σ,s ,

(6.4.24)

σ

for some r0 ∈ [0, 3]. Then

which improves the estimate in Proposition 6.4.5. . Proof Let v ∈ Lσ 0 with vσ,−r0 = 1 span the nullspace of (W1 + H)∗ in L2,−r σ Then (cf. the proof of Lemma 6.4.6) v(p0σ ) = 0 and, since (W1 +H)u∗ = f +ζ0∗ p0σ and (W1 + Hn )un = Lσn f + ζ0n p0σ , 2,−r

v(f + ζ0∗ p0σ ) = v(Lσn f + (H − Hn )un + ζ0n p0σ ) = 0 . This implies ζ0n − ζ0∗ =

v(f − Lσn f + (Hn − H)un ) v(p0σ )

and, taking into account (6.4.21) together with Lemma 6.4.4(b), |ζ0n



ζ0∗ |



  f − Lσ f  + (Hn − H)un σ,r0 n σ,r 0

|v(p0σ )|

  ≤ c nr0 −s f σ,s + nr0 −s0 un σ ≤ c nr0 −s f σ,s ,  

which completes the proof.

In the following proposition we present conditions which ensure that (6.4.23) is  satisfied. For this we use the notations ϕ(y) = 1 − y 2 as well as hj (x, y) = [ϕ(y)]j

∂ j h(x, y) , ∂j y

−1 < y < 1,

and in what follows we assume that hj : [−1, 1]2 −→ C is continuous for j = 1, . . . , j0 and for some j0 ∈ N. 1 uniformly with respect to Proposition 6.4.8 Assume (A), (B), and hj (·, y) ∈ L2,r σ y ∈ [−1, 1] for every j = 1, . . . , j0 and for some r1 ∈ (3, s0 ) and j0 ≥ 3. Then condition (6.4.23) is fulfilled for r0 > 3 − j0 , i.e., the estimate (6.4.24) holds true for r0 = 0 if j0 > 3 and r0 = ε > 0 arbitrarily small if j0 = 3.


Proof Since H ∈ L(L2σ ,Lσ 0 ) we have H∗ ∈ L(L2,−r , L2σ ) for all r ∈ [0, s0 ], σ ∞ 2,−r σ 2 2 where, for v ∈ Lσ , i.e. v(pn ) n=0 ∈ −r , and u ∈ Lσ , 2,s

(H∗ v)(u) = v(Hu) =

∞ 

Hu, pkσ σ v(pkσ ) . k=0

Consequently, for each v ∈ L2,−r there exists an element uv ∈ L2σ such that σ N ∗

u, uv σ = (H v)(u) = lim

n→∞

u,

n 

O v(pkσ )H∗ pkσ

k=0

, σ

where (H∗ pkσ )(y)

1 = π



1 −1

Since, due to condition (A), ∞ 

h(x, y)pkσ (x) √

dx 1 − x2

=

1 σ pk , h(·, y) σ . π

∞  

  h(·, y), pσ 2 (k + 1)2r ≤ c2 < ∞, the series r k σ k=0

v(pkσ )(H∗ pkσ )(y) converges uniformly w.r.t. y ∈ [−1, 1], where

k=0 ∞    v(pσ )(H∗ pσ )(y) ≤ cr vσ,−r . k k π k=0

Hence, uv =

∞ 

v(pkσ )H∗ pkσ in the .∞ -sense, and uv : [−1, 1] −→ C is

k=0

continuous, since the functions y → pkσ , h(·, y) σ , k = 0, 1, 2, . . ., are continuous for y ∈ [−1, 1]. Moreover, due to our assumptions and

  ∞     hj (·, y) vσ,−r  1   σ,r1 σ j j ∗ σ . v(pk ) ϕ D H pk (y) ≤ π k=0

we have ϕ j D j uv =

∞ 

v(pkσ )ϕ j D j H∗ pkσ

1 for v ∈ L2,−r , j = 1, . . . , j0 , σ

k=0 2,j

1 in the .∞ -sense. Hence, by Lemma 2.4.7, uv ∈ Lσ 0 . Now, let v ∈ L2,−r with σ ∗ vσ,r1 = 1 span the nullspace N 2,−r1 (W1 + H) (cf. Lemma 6.4.6). In virtue of L σ


W1∗ = −W1 we have W1 v = H∗ v = uv ∈ Lσ 0 and, in view of Lemma 6.4.2, , 0 H∗ v ∈ , (v, 0) = W L2,t   σ for t < j0 − 3, which completes the proof. 1 2,j

6.4.3 A Collocation-Quadrature Method

In this section we use the results from the previous section in order to design a completely discretized method by applying a quadrature rule to the integral operator with the kernel function $h(x,y)$. For $(u,\zeta_0) \in \widetilde{\mathbf L}^2_\sigma$ we define $\widetilde{\mathcal H}_n^0(u,\zeta_0) = \mathcal H_n^0 u$ by $(\mathcal H_n^0 u)(x) = (L_n^\sigma g_n)(x)$ with
\[
g_n(x) = \frac1\pi \int_{-1}^{1} \left[ L_{n-1}^\sigma h(x,\cdot) \right](y)\, \frac{u(y)}{\sqrt{1-y^2}}\, dy \,, \qquad -1 < x < 1 \,. \qquad (6.4.25)
\]

We again restrict ourselves to the case $\kappa = 1$ and look for an approximate solution $(u_n, \zeta_0^n) \in \operatorname{im} P_{n-1} \times \mathbb C$ of Eq. (6.4.8) resp. (6.4.9) by solving

(6.4.26)

Taking into account the algebraic accuracy of the Gauss-Chebyshev quadrature we can write Eq. (6.4.26) as the system 1 π



ζn 1  σ σ un (y) dy σ σ σ (y−xnj ) ln |y−xnj | + h(xnj , xn−1,k )χkn = f (xnj )+ √0 , π −1 1 − y 2 n − 1 k=1 n−1

1

σ j = 1, . . . , n, where χkn = un (xn−1,k ). Assume that 0 (C) h(x, .) ∈ L2,s uniformly w.r.t. x ∈ [−1, 1]. σ

Proposition 6.4.9 Let, together with (C), all the conditions of Proposition 6.4.5 be satisfied. Then, for all sufficiently large n, Eq. (6.4.26) has a unique solution (un , ζ0n ) and all assertions of Proposition 6.4.5 remain true. Of course, also Corollary 6.4.7 is in force. Proof Due to assumption (C), we can apply (6.4.21) and Lemma 7.1.3 to get, for 0 ≤ t ≤ s,     , ,0   H − Hn (u, ζ0 )

σ,t

      0 u ,−H ,n (u, ζ0 ) + ≤ H H − H   n n σ,t

σ,t

≤ c nt −s (u, ζ0 )σ .

,n is replaced by H ,n0 (cf. the proof of Hence, Lemma 6.4.4(b) remains true if H Lemma 6.4.4(b)), and the proof of Proposition 6.4.9 goes on the same lines as the proof of Proposition 6.4.5.  


Remark 6.4.10 As mentioned in the proof of Proposition 6.4.9, analogously to Lemma 6.4.4(b) we can state that, under the conditions (A), (B), and (C), there exists a positive constant ct s such that    W ,1 + H ,n0 ((u, ζ0 )

σ,s



ct s (u, ζ0 )σ,t 2

∀ (u, ζ0 ) ∈ , L2σ : W1 u ∈ L2,s σ .

Remark 6.4.11 Using Corollaries 5.4.13 and 5.4.16, one can formulate and prove propositions like Propositions 6.4.1, 6.4.5, and 6.4.9 for the equation 1 π



1

u(y) dy [(y − x)2 ln |y − x| + h(x, y)]  = f (x) + ζ0 p0σ (x) + ζ1 p1σ (x) . 1 − y2 −1

Remark 6.4.12 Write un in (6.4.26) in the form un (x) =

n−1 

ξkn pkσ (x). Since

k=0

σ χin = un (xn−1,i )=

n−2 

σ pkσ (xn−1,i )ξkn ,

k=0

a matrix version of Eq. (6.4.26) is given by (W1n + Dn C2n Hn C3n−1 Dn−1 )ξ n = ηn + (ζ0n , 0, . . . , 0)

(6.4.27)

with the n × (n − 1) matrices ⎡

W1n

Dn =

0

⎢ ⎢ −β0 ⎢ ⎢ ⎢ ⎢ =⎢ ⎢ ⎢ 0 ⎢ ⎢ ⎢ 0 ⎣ 0

β0

0

···

0

0

β1

···

0

..

..

..

.

.

· · · −βn−4

.

0

βn−3

···

0

−βn−3

0

···

0

0

−βn−2

√ " ! √ 1 diag 1 2 · · · 2 , n

⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦

 σ , xσ Hn = h(xnj n−1,k )

.n−1, n j (2k − 1)π C2n = cos , 2n j =0,k=1

(6.4.28)

n , n−1 j =1,k=1

T  C3n−1 = C2n−1 ,

! "n and the vector ηn = ηkn k=1 defined by (6.4.7).

,

(6.4.29)

(6.4.30)


As an example let us consider the equation 1 π



1

u(y) dy ζ0 = f (x) + √ [(y − x) ln |y − x| − cos(xy + 2y)]  2 π 1−y −1

(6.4.31)

for different right hand sides f (x) and apply the collocation quadrature method (6.4.26) for its numerical solution. In the tables we present  following ∞ computed Fourier coefficients (w.r.t. the system pnσ n=0 ) and function values of un (x) as well as ζ0n for right-hand sides f (x) with different smoothness. n

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

−5.40112961 −8.43254040 −8.47596429 −8.47857251 −8.47873350 −8.47874353 −8.47874416 −8.47874420

−0.48449341 −1.11412102 −1.12423470 −1.12485039 −1.12488837 −1.12489074 −1.12489088 −1.12489089

−5.21815407 −6.97308165 −7.02314233 −7.02604962 −7.02622831 −7.02623943 −7.02624013 −7.02624017

0.00000000 0.14453160 0.14623119 0.14632202 0.14632760 0.14632795 0.14632797 0.14632797

0.00000000 −1.30716032 −1.38230256 −1.38671366 −1.38698578 −1.38700273 −1.38700379 −1.38700386

Equation (6.4.31), Fourier coefficients of un , f (x) = −x 3 |x| n

ζ0n

un (-0.9)

un (0)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512

−1.56360580 −1.35009089 −1.36028283 −1.36084850 −1.36088304 −1.36088519 −1.36088532 −1.36088533

−5.28070867 −7.46665743 −7.33128912 −7.34419883 −7.34654510 −7.34633790 −7.34636881 −7.34637260

1.11622350 −0.53942159 −0.62465602 −0.64187474 −0.64594776 −0.64695190 −0.64720206 −0.64726455

−1.15880369 −1.71255964 −1.80938307 −1.79710269 −1.79917382 −1.79892639 −1.79895946 −1.79895542

−6.76610747 −10.42210437 −10.46165518 −10.43954148 −10.43672800 −10.43710217 −10.43711238 −10.43711063

Equation (6.4.31), ζ0n and function values of un , f (x) = −x 3 |x| n
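The function values reported in these tables are obtained by summing the computed Chebyshev-Fourier expansion $u_n = \sum_k \xi^n_k p^\sigma_k$. A minimal Python sketch (not from the book) of this evaluation, using the normalization $p_0^\sigma = \pi^{-1/2}$, $p_k^\sigma = (2/\pi)^{1/2} T_k$ consistent with (6.4.7), is given below; fed with the $n = 4$ coefficients of the first table, it reproduces, up to rounding effects of the printed digits, the $n = 4$ row of the function-value table.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def eval_un(xi, x):
    """Evaluate u_n(x) = sum_k xi_k p_k^sigma(x) from the computed Fourier
    coefficients xi_k with respect to the normalized Chebyshev system."""
    coeff = np.asarray(xi, dtype=float) * np.sqrt(2.0 / np.pi)
    coeff[0] = xi[0] / np.sqrt(np.pi)
    return C.chebval(x, coeff)

# n = 4 coefficients from the table above (f(x) = -x^3|x|), trailing ones are zero:
xi4 = [-5.40112961, -0.48449341, -5.21815407, 0.0, 0.0]
print(eval_un(xi4, [-0.9, 0.0, 0.5, 0.95]))
```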

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000

−2.35696767 −4.27093405 −4.25553787 −4.25538652 −4.25538436 −4.25538432 −4.25538432 −4.25538432

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000

0.00000000 −5.51829717 −5.47167380 −5.47121507 −5.47120852 −5.47120842 −5.47120842 −5.47120842

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000

Equation (6.4.31), Fourier coefficients of un , f (x) = −x 4 |x|


n

ζ0n

un (−0.9)

un (0)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512

0.09237946 −0.32487689 −0.32155438 −0.32152179 −0.32152133 −0.32152132 −0.32152132 −0.32152132

1.69252930 3.46909238 3.57407640 3.57224821 3.57217606 3.57217755 3.57217748 3.57217748

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000

−0.94029406 2.26497025 2.34797959 2.34560345 2.34571196 2.34570873 2.34570884 2.34570883

−1.78655871 −5.77407940 −5.77976528 −5.78331281 −5.78339496 −5.78339207 −5.78339205 −5.78339205

Equation (6.4.31), ζ0n and function values of un , f (x) = −x 4 |x|

We observe that the convergence (of the Fourier coefficients and of the function values) becomes better if the right hand side becomes smoother. In case of 1 f (x) = − π

the

-

 .   2 sin(x + 2) 1 1 1 1 2 2 − (1 − x) ln(1 − x) − + (1 + x) ln(1 + x) − x+2 2 2 2 2

function u∗ (x)

(6.4.32)

√ = 1 − x 2 is the exact solution, where ζ0∗ = 0.

n

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

1.44851367 1.13963910 1.12978800 1.12855539 1.12840120 1.12838192 1.12837951 1.12837921

0.07966143 0.00265930 0.00033218 0.00004156 0.00000520 0.00000065 0.00000008 0.00000001

−0.37489673 −0.51943238 −0.53036099 −0.53172768 −0.53189862 −0.53191999 −0.53192266 −0.53192299

0.00000000 −0.00038250 −0.00004873 −0.00000610 −0.00000076 −0.00000010 −0.00000001 −0.00000000

0.00000000 −0.08772781 −0.10401085 −0.10608699 −0.10634738 −0.10637995 −0.10638403 −0.10638454

Equation (6.4.31), Fourier coefficients of un , f (x) from (6.4.32) n

ζ0n

un (−0.9)

un (0)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512 u∗ (x)

−0.03101251 0.00242665 0.00030175 0.00003770 0.00000471 0.00000059 0.00000007 0.00000001

0.57457469 0.41433610 0.43740970 0.43596566 0.43587373 0.43588962 0.43588980 0.43588988 0.43588989

1.11636063 1.00283824 1.00028530 1.00003272 1.00000399 1.00000050 1.00000006 1.00000001 1.00000000

0.99857879 0.87114862 0.86633439 0.86609941 0.86603259 0.86602645 0.86602553 0.86602542 0.86602540

0.63682384 0.29552061 0.30846637 0.31170267 0.31226353 0.31224771 0.31224956 0.31224986 0.31224990

Equation (6.4.31), ζ0n and function values of un , f (x) from (6.4.32)


As a second example we deal with the equation 1 π



1



−1

√ 2ζ1 x u(y) dy ζ0 (y − x) ln |y − x| − cos(xy + 2y)  = f (x) + √ + √ 2 π π 1−y (6.4.33) 2

again using the collocation quadrature method analogous to (6.4.26). The numerical results for f (x) = −x 3 |x| and f (x) = −ex are presented in the following four tables. Moreover, the last two tables are concerned with the case -

 .   2 sin(x + 2) 1 1 1 1 − (1 − x)3 ln(1 − x) − − (1 + x)3 ln(1 + x) − x+2 3 3 3 3

f (x) = −

1 π

in which

u∗ (x)

(6.4.34)

√ = 1 − x 2 is the exact solution, where ζ0∗ = ζ1∗ = 0.

n

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000001

3.89516842 6.06352352 6.30796573 6.36442444 6.37841811 6.38191213 6.38278541 6.38300375

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 −0.00000002

0.00000000 2.99539862 3.61165572 3.77432865 3.81589808 3.82635452 3.82897278 3.82962770

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 −0.00000004

Equation (6.4.33), Fourier coefficients of un , f (x) = −x 3 |x| n

ζ0n

ζ1n

un (−0.9)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512

0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 −0.00000001

−3.53746377 −2.77500388 −2.96250258 −3.00749821 −3.01875475 −3.02157177 −3.02227625 −3.02245241

−2.79710527 −5.82901534 −5.74292291 −5.47426811 −5.46727406 −5.47955038 −5.47504222 −5.47562556

1.55394737 −0.72916989 −0.58450110 −0.54753047 −0.54075715 −0.53760774 −0.53698198 −0.53680014

2.95250001 6.00687279 7.41386459 6.51404218 6.54254851 6.52001095 6.52890323 6.52615747

Equation (6.4.33), ζ0n and function values of un , f (x) = −x 3 |x|


n

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

−0.58280657 −1.40188257 −1.40191993 −1.40191993 −1.40191993 −1.40191993 −1.40191993 −1.40191994

0.72882972 0.85249743 0.85249555 0.85249555 0.85249555 0.85249555 0.85249556 0.85249559

−2.14508166 0.32708937 0.32721284 0.32721284 0.32721284 0.32721284 0.32721284 0.32721287

0.00000000 0.06894267 0.06896456 0.06896456 0.06896456 0.06896456 0.06896457 0.06896467

0.00000000 0.01413619 0.01432215 0.01432215 0.01432215 0.01432215 0.01432214 0.01432221

Equation (6.4.33), Fourier coefficients of un , f (x) = −ex n

ζ0n

ζ1n

un (0)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512

2.14508166 2.39164719 2.39172319 2.39172319 2.39172319 2.39172319 2.39172319 2.39172321

0.32474949 0.47532359 0.47532077 0.47532077 0.47532077 0.47532077 0.47532076 0.47532074

−0.32881339 −1.04062806 −1.04072747 −1.04072747 −1.04072747 −1.04072747 −1.04072747 −1.04072749

−0.03805240 −0.64119672 −0.64121781 −0.64121781 −0.64121781 −0.64121781 −0.64121781 −0.64121787

0.22363249 0.10053606 0.10059761 0.10059761 0.10059761 0.10059761 0.10059755 0.10059716

Equation (6.4.33), ζ0n and function values of un , f (x) = −ex n

ξ0n

ξ1n

ξ2n

ξ3n

ξ4n

4 8 16 32 64 128 256 512

0.40998695 1.11994191 1.12757437 1.12828511 1.12836761 1.12837773 1.12837899 1.12837914

0.04201924 −0.00200748 −0.00020515 −0.00002445 −0.00000302 −0.00000038 −0.00000005 −0.00000001

−0.38240806 −0.50425705 −0.52906441 −0.53158241 −0.53188097 −0.53191780 −0.53192239 −0.53192294

0.00000000 0.00042534 0.00004660 0.00000562 0.00000070 0.00000009 0.00000001 −0.00000001

0.00000000 −0.06204493 −0.10115735 −0.10574444 −0.10630502 −0.10637467 −0.10638337 −0.10638442

Equation (6.4.33), Fourier coefficients of un , f (x) from (6.4.34)


n

ζ0n

ζ1n

un (0)

un (0.5)

un (0.95)

4 8 16 32 64 128 256 512 u∗ (x)

0.38240806 0.01709214 0.00173656 0.00020612 0.00002543 0.00000317 0.00000040 0.00000006

−0.18170011 0.00070921 0.00006887 0.00000812 0.00000100 0.00000012 0.00000002 0.00000000

0.23131037 0.98469378 0.99875332 0.99986381 0.99998359 0.99999797 0.99999975 0.99999996 1.00000000

0.24807362 0.85663685 0.86470886 0.86586928 0.86600576 0.86602295 0.86602510 0.86602537 0.86602540

0.26316054 0.29199603 0.29949307 0.31155025 0.31216344 0.31223920 0.31224851 0.31224977 0.31224990

Equation (6.4.33), ζ0n and function values of un , f (x) from (6.4.34)

6.4.4 A Fast Algorithm

Usually, the solution of a linear system of equations like (6.4.27) needs a number of operations of order $O(n^3)$ or, in the case of an iterative procedure like a Krylov subspace method, at least of order $O(n^2)$ times the number of iterations. Here we design a fast algorithm for the solution of (6.4.9), which we can also consider as a fast algorithm for solving (6.4.27) (approximately). We pursue two targets. First, the method should have complexity of order $O(n \log n)$ and, second, we want to keep the convergence rate stated in Proposition 6.4.5 for the collocation method and in Proposition 6.4.9 for the collocation-quadrature method, at least in a smaller range for the parameter $t$ (cf. Proposition 6.4.15 below). For this, we use the structural properties of that part of the operator in (6.4.8) which is given by the kernel $(y-x)^\kappa \ln|y-x|$ (cf. Corollary 5.4.4), and the smoothing properties of the remaining part, which are due to the assumptions (A) and (C) from Sect. 6.4.2 and which make it possible to realize a compression idea. The main ingredients and ideas involved in the construction of the fast algorithm presented in this section can also be found in Sect. 7.1.3, where they are applied to operator equations with invertible operators. Here we show how to use the experience collected in the previous two sections, where we proved the applicability of a collocation and a collocation-quadrature method in the present ill-posed case, in order to construct a fast algorithm. As in the previous two sections, we restrict ourselves to the case $\kappa = 1$ in (6.4.8) resp. (6.4.9). In what follows we assume that the conditions (A) and (B) in Sect. 6.4.2 and condition (C) in Sect. 6.4.3 are fulfilled. Again we search for an approximate solution $(u_n, \zeta_0^n) \in \operatorname{im} P_{n-1} \times \mathbb C$ of Eq. (6.4.19). But now we represent $u_n$ in the split form $u_n =$

m−2  k=0

ξkn pkσ

+

n−2  k=m−1

ξkn pkσ ,

(6.4.35)


where 2 < m < n. For k = m − 1, . . . , n − 2, we set ξkn := vn , pkσ σ , where (vn , ζ0n ) ∈ im Pn−1 × C is the unique solution of (cf. (6.4.15) and (6.4.16)) W1 vn = Lσn f + ζ0n p0σ .

(6.4.36)

Equation (6.4.36) is equivalent to "T ! n W1n ξ n = ηn + ζ0 0 . . . 0

! "n−2 with ξ n = ξkn k=0 ,

where the matrix W1n ∈ Cn×(n−1) is given by (6.4.28) and where the entries of ! "n−1 ηn = ηkn k=0 =

√  π σ ) Dn C2n f (xnj n

n j =1

(cf. (6.4.29), (6.4.30)) can be computed with O(n log n) complexity. Due to the structure of the matrix W1n we have the formulas n ξn−2 =−

n ηn−1

βn−2

,

n ξn−3 =−

n ηn−2

βn−3

,

ξkn =

n n βk+1 ξk+2 − ηk+1

βk

, k = n−2, . . . , m−1 ,

which can be realized with O(n) complexity. Moreover, due to Lemma 6.4.2,   , 0 Lσn f . vn , ζ0n = W 1 Define Qn := I − Pn : L2σ −→ L2σ , i.e. Qn u =

∞ 

(6.4.37) u, pkσ

σ

pkσ . Note that

k=n

Qn uσ,t ≤ (n + 1)t −s uσ,s ,

0 ≤ t ≤ s , u ∈ L2,s σ ,

(6.4.38)

because of Qn u2σ,t =

∞ 

∞ 



2  2 (k+1)2t  u, pkσ σ  = (k+1)2(t−s) (k+1)2s  u, pkσ σ  ≤ (n+1)2(t−s) u22,s .

k=n

k=n

Proposition 6.4.13 Let 3 ≤ τ + 3 < s < s0 and f ∈ L2,s σ . Furthermore, let (u∗ , ζ0∗ ) ∈ L2,τ × C be the unique solution of (6.4.9) and (v , ζ0n ) ∈ im Pn−1 × C n σ be the unique solution of (6.4.36). Then, for every ε ∈ (0, s − τ − 3) there is a constant c = c(m, n, f, t) such that, for 0 ≤ t ≤ τ + s0 − s + ε,   Qm−1 u∗ − Qm−1 vn 

σ,t

    ≤ c mt −τ −s0 +s−ε u∗ σ,0 + nt −τ −ε f σ,s .


2,t ∗ ,:, Proof Set θ := s0 − s and define R L2,t σ −→ Lσ , (u, ζ ) → u. Since W1 u = ∗ σ ∗ 2,s f + ζ0 p0 − Hu ∈ Lσ we have, in view of Lemma 6.4.2,

    , 0f = W , 0 W1 u∗ + Hu∗ − ζ ∗ pσ = u∗ , ζ ∗ + W , 0 Hu∗ . W 1 1 0 0 0 1 Consequently, for 0 ≤ t ≤ τ ,   Qm−1 u∗ − Qm−1 vn 

σ,t

  ,(u∗ , ζ ∗ ) − Qm−1 R ,(vn , = Qm−1 R ζ0n )σ,t 0

(6.4.37)  =

 M0 f − W M0 Hu∗ ) − Qm−1 R ,(W ,W M0 Lσ f  Qm−1 R 1 1 1 n σ,t

    ,W M0 Hu∗  + Qm−1 R ,W M0 (f − Lσ f ) ≤ Qm−1 R n 1 1 σ,t σ,t

(6.4.38) ≤

 0 ∗  0  M Hu  M (f − Lσ f ) mt−(τ +θ +ε) W + W n 1 1 σ,τ +θ +ε,∼ σ,t,∼

Lemma 6.4.2



 0 M mt−τ −θ −ε W 1  0 M + W 1

2,s0 +θ+ε → L2,τ σ



2,t+3+ε1



→ L2,t σ

 ∗ Hu 

σ,s0

  f − L σ f  n σ,t+3+ε

1

 (6.4.5)  t−τ −θ −ε  ∗  u  + nt+3+ε1 −s f σ,s , ≤ c m σ,0

where ε1 := s − τ − 3 − ε > 0, so that t + 3 + ε1 − s = t − τ − ε. Moreover, M0  2,t+3+ε1 2,t ≤ ρ3+ε and we took into account that, due to Remark 6.4.3, W 1 1 Lσ →, Lσ  0 W M  2,s0 2,τ +θ+ε ≤ ρs −τ −θ −ε = ρs−τ −ε independently from t .   0 1 L →, L σ

σ

The numbers ξkn , k = 0, . . . , m − 2 in (6.4.35) we take from the solution wm ∈ im Pm−1 of (cf. (6.4.26))   W1 + Hn0 wm = Lσn f + , ζ0m p0σ − Lσm W1 Qm−1 vn ,

(6.4.39)



where vn ∈ im Pn−1 is the solution of (6.4.36), i.e., ξkn := wm , pkσ σ for k = 0, . . . , m − 2. Moreover, we set ζ0n := , ζ0m . In view of Remark 6.4.12 solving Eq. (6.4.39) is equivalent to solving the system   ! m "T W1m + Dm C2m Hm C3m−1 Dm−1 , ηm + , ξm = , ζ0 0 · · · 0 ,


which can be done with O(m3 ) complexity if the numbers

, ηkm = Lσm f − Lσm W1 Qm−1 vn , pkσ σ ,

k = 0, . . . , m − 1 ,

are computed. In order to do this like in (6.4.7) we need the function values σ ), j = 1, . . . , m. To get these values effectively we can choose (W1 Qm−1 vn ) (xmj m and n in such a way that m n is an odd integer, since in this case we have    σ σ xmj : j = 1, . . . , m ⊂ xnj : j = 1, . . . , n . Let us write W1 Qm−1 vn =

n−1 

χkn pkσ .

k=0 n In view of (5.4.11) we get, setting ξn−1 = ξnn = 0,

W1 Qm−1 vn =

n−2 

  σ σ ξkn βk−1 pk−1 − βk pk+1

k=m−1 n σ σ pm−1 + βm−1 ξmn pm + = βm−2 ξm−1

n−1  

 σ n n βk ξk+1 pk , − βk−1 ξk−1

k=m

such that, due to (6.4.36),

χkn

:=

⎧ ⎪ ⎪ ⎨

0

n βk ξk+1 ⎪ ⎪ ⎩ ηkn

: 0 ≤ k ≤ m−3, : k = m − 2, m − 1 , : m ≤ k ≤ n−1.

Finally, σ )= (BQm−1 vn ) (xnj

n−1 

σ χkn pkσ (xnj ),

j = 1, . . . , n ,

k=0

which can be realized with O(n log n) complexity using the discrete cosine transform C2n−1 . Consequently, (6.4.39) can be solved with O(m3 ) + O(n log n) = O(n log n) complexity if

m3 n

≤ γ for some constant γ .
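A possible way to balance the two costs in code is simply to take the largest coarse dimension $m$ compatible with this constraint; the following sketch is illustrative only, and the constant gamma is a placeholder, not a value from the book.

```python
def choose_m(n, gamma=8.0):
    """Largest coarse dimension m with m**3 <= gamma*n, so that the O(m^3) solve of
    (6.4.39) stays below the O(n log n) budget of the overall fast algorithm."""
    m = int((gamma * n) ** (1.0 / 3.0)) + 1
    while m > 1 and m ** 3 > gamma * n:
        m -= 1
    return m
```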


Proposition 6.4.14 Assume that 3 ≤ τ + 3 < s < s0 , s0 > 4, as well as f ∈ L2,s σ , and choose ε ∈ (0, s − τ − 3) such that τ + s0 − s − 1 + ε > 0. Then, for every δ ∈ (0, τ + s0 − s − 1 + ε], there exists a constant c = c(m, n, f ) such that, for all sufficiently large m and n with 0 < m < n, 6   2    m wm − Pm−1 u∗ 2σ,0 + , ζ0 − ζ0∗  ≤ c m1+δ−τ −s0 +s−ε u∗ σ,0 + n1+δ−τ −ε f σ,s ,

where (u∗ , ζ0∗ ) and (wm , , ζ0m ) are the solutions of (6.4.9) and (6.4.39), respectively. Proof We compute 

W1 + H0m



wm − Pm−1 u∗



  ζ0m p0σ − Lσm W1 Qm−1 vn − W1 + H0m Pm−1 u∗ = Lσm f + ,   = Lσm (W1 + H)u∗ − ζ0∗ p0σ + , ζ0m p0σ − Lσm BQm−1 vn − W1 + H0m Pm−1 u∗   m   = Lσm − I W1 Pm−1 u∗ + , ζ0 − ζ0∗ p0σ   + Lσm W1 Qm−1 (u∗ − vn ) + Lσm H − H0m u∗ + H0m Qm−1 u∗    m  = Lσm W1 Qm−1 (u∗ − vn ) + , ζ0 − ζ0∗ p0σ + Lσm H − H0m u∗ ,

  where we took into account that Lσm − I W1 Pm−1 u∗ = 0 because of 0Q ∗ σ W1 Pm−1 u∗ ∈ im Pm and that Hm m−1 u = Lm gm with (cf. (6.4.25)) 

1

  [Lm−1 h(x, .)] (y) Qm−1 u∗ (y)σ (y) dy

gm (x) =

1 π

=

1 π

=

m−1

1  σ h(x, xm−1,k ) Qm−1 u∗ , σm−1,k σ = 0 , π

−1



1 m−1  −1 k=1

k=1

  σ h(x, xm−1,k ) σm−1,k (y) Qm−1 u∗ (y)σ (y) dy


where σm−1,k denote the respective fundamental Lagrange interpolation polynomials of degree m − 2. With the help of Remark 6.4.10, Corollary 5.4.4, Proposition 6.4.13, and Lemma 7.1.3 we get   (wm − Pm−1 u∗ , , ζ0m − ζ0∗ )σ,0,∼ ≤

2

    m     0 ζ0 − ζ0∗ p0σ  wm − Pm−1 u∗ − ,  W1 + Hm

σ,3+δ γ0,3+δ           0 ≤ c Lσm W1 Qm−1 (u∗ − vn )σ,3+δ +  Lσm H − Hm u∗  σ,3+δ

        0 u∗  ≤ c Qm−1 (u∗ − vn )σ,1+δ +  Lσm H − Hm

 σ,3+δ

      ≤ c m1+δ−τ −s0 +s−ε u∗ σ,0 + n1+δ−τ −ε f σ,s + m3+δ−s0 u∗ σ,0     ≤ c m1+δ−τ −s0 +s−ε u∗ σ,0 + n1+δ−τ −ε f σ,s ,  

and the assertion is proved.

The following proposition shows that, for the O(n log n)-algorithm, we can attain the same convergence rate (in a restricted interval for t) as in Proposition 6.4.5 under the assumptions that s0 > s > 4. ζ0m be determined by Proposition 6.4.15 Let un in (6.4.35) together with ζ0n = , ∗ ∗ the above described algorithm and let (u , ζ0 ) be the solution of (6.4.9), where we assume that there is a constant γ > 0 such that mn3 ≤ γ and that 4 ≤ τ +4 < s < s0 , f ∈ L2,s σ . Then   un − u∗  ≤ c nt −τ , σ,t

$ # s0 − s ≤t≤τ. max 0, τ − 2

with a constant c = c(m, n, t). Moreover, if additionally τ ≤  n  ζ − ζ ∗  ≤ c n−τ . 0 0

s0 −s 2 ,

then


Proof Choose ε ∈ (1, s − τ − 3) and δ = ε − 1. Then 0 < δ < τ + s0 − s − 1 + ε, and we can apply Propositions 6.4.13 and 6.4.14 to estimate  ∗      u − un  ≤ Pm−1 u∗ − Pm−1 un  + Qm−1 u∗ − Qm−1 un  σ,t σ,t σ,t     = Pm−1 u∗ − wm σ,t + Qm−1 u∗ − Qm−1 vn σ,t       ≤ c mt Pm−1 u∗ − wm σ,0 + mt−τ −s0 +s−ε u∗ σ,0 + nt−τ −ε f σ,s     ≤ c mt−τ −s0 +s u∗ σ,0 + mt n−τ f σ,s + nt−τ −ε f σ,s ≤ c nt−τ

if τ −

s0 −s 2

≤ t. Furthermore, due to Proposition 6.4.14,

    m     n ∗ −τ −s0 +s  ∗  −τ −τ ζ − ζ ∗  = , f  ζ ≤ c m u − ζ + n σ,s ≤ c n 0 0 0 0 σ,0 if t ≤

s0 −s 2 .

 

As an example we take the equation −

1 π



1

u(y) dy ζ0 = f (x)+ √ [(y − x) ln |y − x| − cos(xy + 2y)]  2 π −1 1−y

(6.4.40)

which was already considered in Sect. 6.4.3. For the right-hand side function f (x) we choose f (x) = −x 3 |x|, f (x) = −x 4 |x|, and f (x) = −

1 π

-

   . 1 1 2 sin(x + 2) 1 1 − (1 − x)2 ln(1 − x) − + (1 + x)2 ln(1 + x) − , x+2 2 2 2 2

(6.4.41) √ which were also used in Sect. 6.4.3. In case of (6.4.41), u∗ (x) = 1 − x 2 with ζ0∗ = 0 is the exact solution. In the following tables one can see numerical results of the described fast algorithm (FA) for various choices of n and m in comparison with the results of the collocation-quadrature method (CQM) presented in Sect. 6.4.3.

        n    m   ξ0^n          ξ1^n          ξ2^n          ξ3^n         ξ4^n
FA     256   8   −8.47904757   −1.12502515   −7.02640549   0.14626959   −1.38702022
FA     256  16   −8.47874416   −1.12489088   −7.02624013   0.14632797   −1.38700379
QM     256       −8.47874416   −1.12489088   −7.02624013   0.14632797   −1.38700379
FA     512   8   −8.47904761   −1.12502516   −7.02640553   0.14626959   −1.38702028
FA     512  16   −8.47874420   −1.12489089   −7.02624017   0.14632797   −1.38700386
QM     512       −8.47874420   −1.12489089   −7.02624017   0.14632797   −1.38700386
FA    1024   8   −8.47904761   −1.12502516   −7.02640554   0.14626959   −1.38702029
FA    1024  16   −8.47874420   −1.12489089   −7.02624017   0.14632797   −1.38700386

Equation (6.4.40), Fourier coefficients of u_n, f(x) = −x³|x|

        n    m   ζ0^n          u_n(−0.9)     u_n(0)        u_n(0.5)      u_n(0.95)
FA     256   8   −1.36088509   −7.34653331   −0.64725315   −1.79908541   −10.4375062
FA     256  16   −1.36088532   −7.34636880   −0.64720210   −1.79895947   −10.4371124
QM     256       −1.36088532   −7.34636881   −0.64720206   −1.79895946   −10.4371124
FA     512   8   −1.36088510   −7.34653709   −0.64731563   −1.79908137   −10.4375044
FA     512  16   −1.36088533   −7.34637260   −0.64726455   −1.79895542   −10.4371106
QM     512       −1.36088533   −7.34637260   −0.64726455   −1.79895542   −10.4371106
FA    1024   8   −1.36088510   −7.34653652   −0.64733128   −1.79908188   −10.4375040
FA    1024  16   −1.36088533   −7.34637202   −0.64728020   −1.79895593   −10.4371102

Equation (6.4.40), ζ0^n and function values of u_n, f(x) = −x³|x|

In the cases n = 256 and n = 512 the fast algorithm leads to the same results (with respect to 9 decimal digits) as the collocation-quadrature method already for m = 16, and this remains true if we increase n. In the following two tables we consider a smoother right-hand side, with the effect that the fast algorithm gives the same results as the quadrature method already for m = 8.

        n    m   ξ0^n         ξ1^n          ξ2^n         ξ3^n          ξ4^n
FA     256   8   0.00000000   −4.25538432   0.00000000   −5.47120842   0.00000000
QM     256       0.00000000   −4.25538432   0.00000000   −5.47120842   0.00000000
FA     512   8   0.00000000   −4.25538432   0.00000000   −5.47120842   0.00000000
QM     512       0.00000000   −4.25538432   0.00000000   −5.47120842   0.00000000
FA    1024   8   0.00000000   −4.25538432   0.00000000   −5.47120842   0.00000000

Equation (6.4.40), Fourier coefficients of u_n, f(x) = −x⁴|x|

        n    m   ζ0^n          u_n(−0.9)    u_n(0)       u_n(0.5)     u_n(0.95)
FA     256   8   −0.32152132   3.57217748   0.00000000   2.34570884   −5.78339205
QM     256       −0.32152132   3.57217748   0.00000000   2.34570884   −5.78339205
FA     512   8   −0.32152132   3.57217748   0.00000000   2.34570883   −5.78339205
QM     512       −0.32152132   3.57217748   0.00000000   2.34570883   −5.78339205
FA    1024   8   −0.32152132   3.57217748   0.00000000   2.34570883   −5.78339205

Equation (6.4.40), ζ0^n and function values of u_n, f(x) = −x⁴|x|

In the last example (see the following two tables), the right-hand side is less smooth. But, beginning with n = 1024, the results are exact (with respect to 9 or 8 decimal digits) already for m = 16.

        n    m   ξ0^n         ξ1^n          ξ2^n          ξ3^n          ξ4^n
FA     256   8   1.12848920   0.00004609    −0.53186634    0.00001504   −0.10638087
FA     256  16   1.12837951   0.00000008    −0.53192266   −0.00000001   −0.10638403
QM     256       1.12837951   0.00000008    −0.53192266   −0.00000001   −0.10638403
FA     512   8   1.12848890   0.00004602    −0.53186667    0.00001505   −0.10638138
FA     512  16   1.12837921   0.00000001    −0.53192299    0.00000000   −0.10638454
QM     512       1.12837921   0.00000001    −0.53192299    0.00000000   −0.10638454
FA    1024   8   1.12848886   0.00004601    −0.53186671    0.00001505   −0.10638144
FA    1024  16   1.12837917   0.00000000    −0.53192303    0.00000000   −0.10638460
u*               1.12837917   0.00000000    −0.53192304    0.00000000   −0.10638461

Equation (6.4.40), Fourier coefficients, f(x) from (6.4.41)

        n    m   ζ0^n          u_n(−0.9)    u_n(0)       u_n(0.5)     u_n(0.95)
FA     256   8   −0.00000237   0.43594383   1.00001952   0.86607051   0.31238988
FA     256  16    0.00000007   0.43588980   1.00000006   0.86602553   0.31224956
QM     256        0.00000007   0.43588980   1.00000006   0.86602553   0.31224956
FA     512   8   −0.00000243   0.43594390   1.00001947   0.86607040   0.31239018
FA     512  16    0.00000001   0.43588988   1.00000001   0.86602542   0.31224986
QM     512        0.00000001   0.43588988   1.00000001   0.86602542   0.31224986
FA    1024   8   −0.00000244   0.43594392   1.00001947   0.86607039   0.31239022
FA    1024  16    0.00000000   0.43588989   1.00000001   0.86602541   0.31224990
u*(x)                          0.43588989   1.00000000   0.86602540   0.31224990

Equation (6.4.40), ζ0^n and function values, f(x) from (6.4.41)

In the following two tables, let us show some results with 15 decimal digits for the third example. One can see that, due to rounding errors, the precision of the results decreases when going from n = 16,384 to n = 32,768.



        n     m   ξ0^n                ξ4^n
FA     8192   8   1.12848885640610    −0.10638144804001
FA     8192  16   1.12837916711157    −0.10638460808187
FA     8192  32   1.12837916711008    −0.10638460808242
FA    16384   8   1.12848885640011    −0.10638144805023
FA    16384  16   1.12837916710552    −0.10638460809210
FA    16384  32   1.12837916710401    −0.10638460809269
FA    32768   8   1.12848885641034    −0.10638144803278
FA    32768  16   1.12837916711585    −0.10638460807464
FA    32768  32   1.12837916711433    −0.10638460807525
u*                1.12837916709551    −0.10638460810705

Equation (6.4.40), Fourier coefficients, f(x) from (6.4.41)

        n     m   u_n(−0.9)           u_n(0.5)
FA     8192   8   0.43594391928916    0.86607039879272
FA     8192  16   0.43588989107899    0.86602541682333
FA     8192  32   0.43588989107818    0.86602541682336
FA    16384   8   0.43594401611548    0.86607034614225
FA    16384  16   0.43588998790529    0.86602536417281
FA    16384  32   0.43588998790464    0.86602536417291
FA    32768   8   0.43594421947381    0.86607061508862
FA    32768  16   0.43589019126367    0.86602563311924
FA    32768  32   0.43589019126272    0.86602563311909
u*(x)             0.43588989435407    0.86602540378444

Equation (6.4.40), ζ0^n and function values, f(x) from (6.4.41)
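The rows labelled u* in the preceding tables can be checked independently. Assuming that the tabulated coefficients ξ_j^n are Fourier coefficients with respect to the orthonormal Chebyshev system for the weight σ(y) = (1−y²)^(−1/2) (an assumption on our part, but consistent with the tabulated u* row), the following sketch, which is ours and not part of the original text, computes these coefficients for u*(y) = √(1−y²) by Gauss–Chebyshev quadrature.

```python
import numpy as np

def chebyshev_fourier_coefficients(u, jmax, n_quad=5000):
    # Gauss-Chebyshev rule for the weight (1 - y^2)^(-1/2): nodes and constant weight.
    k = np.arange(1, n_quad + 1)
    y = np.cos((2 * k - 1) * np.pi / (2 * n_quad))
    w = np.pi / n_quad
    coeffs = []
    for j in range(jmax + 1):
        # Orthonormal Chebyshev polynomials: p_0 = 1/sqrt(pi), p_j = sqrt(2/pi) T_j for j >= 1.
        scale = 1.0 / np.sqrt(np.pi) if j == 0 else np.sqrt(2.0 / np.pi)
        pj = scale * np.cos(j * np.arccos(y))
        coeffs.append(np.sum(w * u(y) * pj))
    return np.array(coeffs)

u_star = lambda y: np.sqrt(1.0 - y * y)
print(chebyshev_fourier_coefficients(u_star, 4))
# ~ [ 1.1283792, 0, -0.5319230, 0, -0.1063846 ] -- compare the u* row above
```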

The last table presents the comparison of the CPU-time used for establishing and solving the system by the quadrature method and by the fast algorithm (in case of m = 16).

  n    64     128    256    512    1024   2048   4096   8192   16,384   32,768
 QM    0.02   0.16   1.25   9.52
 FA    0.00   0.00   0.00   0.00   0.00   0.01   0.01   0.02   0.03     0.06

CPU-time in seconds for QM and FA (m = 16)
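The timings above refer to the authors' implementations of QM and FA, which are not reproduced here. Purely as an illustration of the complexity gap between an O(n³) dense elimination and an O(n log n) transform, one can time the two building blocks as in the following sketch (ours; it uses generic data, not the actual systems).

```python
import time
import numpy as np

rng = np.random.default_rng(0)
for n in [256, 512, 1024, 2048]:
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    np.linalg.solve(A, b)                  # O(n^3), like a plain dense solve
    t1 = time.perf_counter()
    np.fft.fft(rng.standard_normal(n))     # O(n log n), like one fast transform step
    t2 = time.perf_counter()
    print(n, f"dense solve {t1 - t0:.4f}s", f"FFT {t2 - t1:.6f}s")
```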

Chapter 7

Collocation and Collocation-Quadrature Methods for Strongly Singular Integral Equations

In this chapter collocation and collocation-quadrature methods, based on some interpolation and quadrature processes considered in Chap. 3, are applied to strongly singular integral equations like linear and nonlinear Cauchy singular integral equations, integral equations with strongly fixed singularities, and hypersingular integral equations.

7.1 Cauchy Singular Integral Equations on an Interval

We use the notations introduced in Sect. 5.2, in particular in Proposition 5.2.5. We are interested in the numerical solution of the Cauchy singular integral equation
$$a\,v^{\alpha,\beta}(x)u(x) + \frac{b}{\pi}\int_{-1}^{1}\frac{v^{\alpha,\beta}(y)u(y)}{y-x}\,dy + \int_{-1}^{1} K(x,y)\,v^{\alpha,\beta}(y)u(y)\,dy = f(x), \quad -1 < x < 1, \tag{7.1.1}$$
which we can write shortly in the form
$$(A + K)u = f, \tag{7.1.2}$$
where $A$ is defined in (5.2.11) and $K$ in (5.3.1).





7.1.1 Collocation and Collocation-Quadrature Methods

Let us consider the case $\kappa = -(\alpha+\beta) \ge 0$. We look for a solution $u \in \mathbf{L}^2_{\alpha,\beta}$ of (7.1.2), which satisfies the additional condition
$$\int_{-1}^{1} u(x)\,v^{\alpha,\beta}(x)\,dx = 0 \tag{7.1.3}$$
in case $\kappa = 1$. Choose $w = v^{\gamma,\delta}$, $\gamma, \delta > -1$, and let $L_n^w$ be the Lagrange interpolation operator with respect to the zeros $x_{nj}^w$ of the $n$th orthonormal polynomial $p_n^w(x)$,
$$\big(L_n^w f\big)(x) = \sum_{j=1}^{n} f(x_{nj}^w)\,\ell_{nj}^w(x), \qquad \ell_{nj}^w(x) = \prod_{k=1,\,k\ne j}^{n}\frac{x - x_{nk}^w}{x_{nj}^w - x_{nk}^w}. \tag{7.1.4}$$
We search for an approximate solution $u_n \in \mathcal{P}_{n+\kappa}$ of Eq. (7.1.2) by solving the collocation equation
$$L_n^w(A + K)u_n = L_n^w f \tag{7.1.5}$$
or the collocation-quadrature equation
$$L_n^w\big(A + K_n^0\big)u_n = L_n^w f, \tag{7.1.6}$$
where $K_n^0$ is defined by applying the Gaussian rule with respect to the Jacobi weight $v^{\alpha,\beta}(x)$ to the second integral in (7.1.1),
$$\big(K_n^0 u_n\big)(x) = \sum_{k=1}^{n+\kappa}\lambda_{n+\kappa,k}^{\alpha,\beta}\,K\big(x, x_{n+\kappa,k}^{\alpha,\beta}\big)\,u_n\big(x_{n+\kappa,k}^{\alpha,\beta}\big),$$
and where, in case $\kappa = 1$, the additional condition
$$\sum_{k=1}^{n+1}\lambda_{n+1,k}^{\alpha,\beta}\,u_n\big(x_{n+1,k}^{\alpha,\beta}\big) = 0 \tag{7.1.7}$$
has to be satisfied. Let $p, r \in (1,\infty)$. We agree that, in what follows, Eq. (7.1.2) is considered together with condition (7.1.3) if $\kappa = 1$. Analogously, we agree that the collocation method (7.1.5) as well as the collocation-quadrature method (7.1.6) include Eq. (7.1.7) if $\kappa = 1$. As a consequence of (3.2.53) and (3.2.54), we have the following property.

(L1) The estimate $\big\|f - L_n^w f\big\|_{-\alpha,-\beta,(r)} \le c\,E_n(f)_\infty$ holds true for all functions $f \in \mathbf{C}[-1,1]$, where $c \ne c(n,f)$, if and only if $\alpha < 1 - \frac{2}{r}\big(\gamma + \frac{1}{2}\big)$ and $\beta < 1 - \frac{2}{r}\big(\delta + \frac{1}{2}\big)$.



This means that, under these conditions, the operators $L_n^w : \mathbf{C}[-1,1] \longrightarrow \mathbf{L}^r_{-\alpha,-\beta}$ converge strongly to the embedding operator $E : \mathbf{C}[-1,1] \longrightarrow \mathbf{L}^r_{-\alpha,-\beta}$. If $K : [-1,1]^2 \longrightarrow \mathbb{C}$ fulfils the assumptions (a) and (b) (on $K_0(x,y)$) of Proposition 5.3.3 (for example, $K$ is continuous), then $K : \mathbf{L}^p_{\alpha,\beta} \longrightarrow \mathbf{C}[-1,1]$ is compact. Hence, by applying Corollary 2.3.12, in this case we obtain the following conclusion.

(L2) If $\alpha < 1 - \frac{2}{r}\big(\gamma + \frac{1}{2}\big)$ and $\beta < 1 - \frac{2}{r}\big(\delta + \frac{1}{2}\big)$, then
$$\lim_{n\to\infty}\big\|L_n^w K - K\big\|_{\mathbf{L}^p_{\alpha,\beta}\to\mathbf{L}^r_{-\alpha,-\beta}} = 0.$$

We apply (L1) and (L2) in case $p = r = 2$ to get the following proposition.

Proposition 7.1.1 Let $\kappa \in \{0,1\}$, $-1 < \gamma < \frac{1}{2} - \alpha$, $-1 < \delta < \frac{1}{2} - \beta$, and $f \in \mathbf{C}[-1,1]$. Moreover, assume that $K : [-1,1]^2 \longrightarrow \mathbb{C}$ satisfies

(a) $K(x,\cdot) \in \mathbf{L}^2_{\alpha,\beta}$ for all $x \in [-1,1]$,
(b) $\lim_{x\to x_0}\big\|K(x,\cdot) - K(x_0,\cdot)\big\|_{\alpha,\beta} = 0$ for all $x_0 \in [-1,1]$.

Suppose that Eq. (7.1.2) has only the trivial solution in $\mathbf{L}^2_{\alpha,\beta}$ in case $f \equiv 0$. Then the collocation equation (7.1.5) (together with (7.1.7) in case $\kappa = 1$) has a unique solution $u_n^* \in \mathcal{P}_{n+\kappa}$ for all sufficiently large $n$, where
$$\big\|u_n^* - u^*\big\|_{\alpha,\beta} \le c\left(E_n(f)_\infty + \big\|L_n^w K - K\big\|_{\mathbf{L}^2_{\alpha,\beta}\to\mathbf{L}^2_{-\alpha,-\beta}}\,\big\|u^*\big\|_{\alpha,\beta}\right) \tag{7.1.8}$$
with $c \ne c(n,f,u^*)$, and $u^* \in \mathbf{L}^2_{\alpha,\beta,\kappa}$ is the unique solution of (7.1.2).

Proof Since, due to (5.2.12), $L_n^w A u_n = A u_n$ for all $u_n \in \mathcal{P}_{n+\kappa}$, the operator $A : \mathbf{L}^2_{\alpha,\beta,\kappa} \longrightarrow \mathbf{L}^2_{-\alpha,-\beta}$ is invertible, and $L_n^w(f - Ku) \in \mathcal{P}_n$ for all $u \in \mathbf{L}^2_{\alpha,\beta}$, the collocation method (7.1.5) is equivalent to
$$\big(A + L_n^w K\big)u = L_n^w f, \qquad u \in \mathbf{L}^2_{\alpha,\beta,\kappa}. \tag{7.1.9}$$
Thus, we can apply Proposition 2.6.6 in case of $X = X_n = \mathbf{L}^2_{\alpha,\beta,\kappa}$, $Y = Y_n = \mathbf{L}^2_{-\alpha,-\beta}$ (i.e., $P_n = I_X$ and $Q_n = I_Y$), and $E = E_n = I$. The operator $A + K : X \longrightarrow Y$ is invertible, since, due to our assumptions, the nullspace $\mathcal{N}(A+K)$ is trivial and the operator $K : X \longrightarrow Y$ is compact. Hence, Proposition 2.5.1,(b) is applicable. Consequently, the sequence of operators $A + L_n^w K : X \longrightarrow Y$ is stable and converges even in operator norm to $A + K$ by (L2) ($p = r = 2$). Finally, from (2.6.4) we infer
$$\big\|u_n^* - u^*\big\|_{\alpha,\beta} \le c\left(\big\|L_n^w f - f\big\|_{-\alpha,-\beta} + \big\|(L_n^w K - K)u^*\big\|_{-\alpha,-\beta}\right). \tag{7.1.10}$$
It remains to use (L1) to get (7.1.8).



Let us turn to the collocation-quadrature method (7.1.6). Since the Gaussian rule with $n+\kappa$ nodes is exact for polynomials of degree less than $2(n+\kappa)$, we can write, for $L_n^{\alpha,\beta} := L_n^{v^{\alpha,\beta}}$ and $u_n \in \mathcal{P}_{n+\kappa}$,
$$\big(K_n^0 u_n\big)(x) = \int_{-1}^{1}\big[L_{n+\kappa}^{\alpha,\beta}\big(K(x,\cdot)\,u_n\big)\big](y)\,v^{\alpha,\beta}(y)\,dy = \int_{-1}^{1} u_n(y)\,\big[L_{n+\kappa}^{\alpha,\beta}K(x,\cdot)\big](y)\,v^{\alpha,\beta}(y)\,dy, \tag{7.1.11}$$
which implies that $K_n^0 : \mathbf{L}^2_{\alpha,\beta} \longrightarrow \mathbf{L}^2_{-\alpha,-\beta}$ is well defined and that
$$\big|\big(K_n^0 u\big)(x) - \big(Ku\big)(x)\big| \le \big\|L_{n+\kappa}^{\alpha,\beta}K(x,\cdot) - K(x,\cdot)\big\|_{\alpha,\beta}\,\|u\|_{\alpha,\beta} \qquad \forall\, u \in \mathbf{L}^2_{\alpha,\beta}.$$
Hence, the collocation-quadrature method (7.1.6) is equivalent to (cf. (7.1.9))
$$\big(A + L_n^w K_n^0\big)u = L_n^w f, \qquad u \in \mathbf{L}^2_{\alpha,\beta,\kappa}. \tag{7.1.12}$$
If we assume that $K : [-1,1]^2 \longrightarrow \mathbb{C}$ is a continuous function and that the conditions on $\alpha, \beta$ and $\gamma, \delta$ of Proposition 7.1.1 are satisfied, then
$$\begin{aligned}
\big\|L_n^w\big(K_n^0 - K\big)u\big\|_{-\alpha,-\beta}
&\le \big\|L_n^w\big\|_{\mathbf{C}[-1,1]\to\mathbf{L}^2_{-\alpha,-\beta}}\,\sup\Big\{\big\|L_{n+\kappa}^{\alpha,\beta}K(x,\cdot) - K(x,\cdot)\big\|_{\alpha,\beta} : x \in [-1,1]\Big\}\,\|u\|_{\alpha,\beta}\\
&\le c\,\sup\big\{E_n(K(x,\cdot))_\infty : x \in [-1,1]\big\}\,\|u\|_{\alpha,\beta} \qquad \text{with} \quad c \ne c(n,K,u),
\end{aligned}$$
where we took into account (3.2.53) and (3.2.54) for $u = v^{\alpha,\beta}$, $w = \sqrt{v^{\alpha,\beta}}$, and $p = 2$. This means that the norms of the operators $L_n^w\big(K_n^0 - K\big) : X \longrightarrow Y$ converge to zero, so that the operator sequence $\big(A + L_n^w K_n^0\big)$ of the collocation-quadrature method (7.1.12) can be considered as a perturbation of the sequence $\big(A + L_n^w K\big)$ of the collocation method (7.1.9) by an operator sequence tending to zero in norm. Thus, combining Proposition 7.1.1 and Lemma 2.6.4 leads to the following proposition.

Proposition 7.1.2 Let $\kappa \in \{0,1\}$, $-1 < \gamma < \frac{1}{2} - \alpha$, $-1 < \delta < \frac{1}{2} - \beta$, $f \in \mathbf{C}[-1,1]$, and $K : [-1,1]^2 \longrightarrow \mathbb{C}$ be continuous. Moreover, assume that Eq. (7.1.2) has only the trivial solution in $\mathbf{L}^2_{\alpha,\beta}$ in case $f \equiv 0$. Then the collocation-quadrature equation (7.1.6) (together with (7.1.7) in case $\kappa = 1$) has a unique solution $u_n^* \in \mathcal{P}_{n+\kappa}$ for all sufficiently large $n$, where
$$\big\|u_n^* - u^*\big\|_{\alpha,\beta} \le c\left(E_n(f)_\infty + \big\|K - L_n^w K_n^0\big\|_{\mathbf{L}^2_{\alpha,\beta}\to\mathbf{L}^2_{-\alpha,-\beta}}\,\big\|u^*\big\|_{\alpha,\beta}\right) \tag{7.1.13}$$
with $c \ne c(n,f,u^*)$, and $u^* \in \mathbf{L}^2_{\alpha,\beta,\kappa}$ is the unique solution of (7.1.2).
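For a continuous kernel, the operator K_n^0 appearing in (7.1.6) is nothing but the Gauss–Jacobi discretization of the integral term in (7.1.1). The following sketch illustrates this definition; it is our own code (not from the book), uses SciPy's Gauss–Jacobi rule for the weight v^{α,β}(y) = (1−y)^α(1+y)^β, and writes n_nodes for what is n+κ in the text.

```python
import numpy as np
from scipy.special import roots_jacobi

def apply_K0(K, u, x, n_nodes, alpha, beta):
    """Evaluate (K_n^0 u)(x) = sum_k lambda_k K(x, y_k) u(y_k) at the points x,
    where (y_k, lambda_k) is the Gauss-Jacobi rule for (1-y)^alpha (1+y)^beta."""
    y, lam = roots_jacobi(n_nodes, alpha, beta)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Weighted kernel sums, one per evaluation point.
    return np.array([np.sum(lam * K(xj, y) * u(y)) for xj in x])

# Example with an illustrative smooth kernel and the Chebyshev weight (alpha = beta = -1/2).
vals = apply_K0(lambda x, y: np.exp(x * y), lambda y: y ** 2, [0.0, 0.5], 32, -0.5, -0.5)
print(vals)
```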



We are now interested in convergence rates for $u_n^* \longrightarrow u^*$ as considered in Propositions 7.1.1 and 7.1.2, assuming that the respective assumptions of these propositions are satisfied. For this, at first we prove the following lemma.

Lemma 7.1.3 Let $s > \frac{1}{2}$ and $K(x,\cdot) \in \mathbf{L}^{2,s}_{\alpha,\beta}$ uniformly with respect to $x \in [-1,1]$. Then, for $0 \le t \le s$ and $u \in \mathbf{L}^2_{\alpha,\beta}$,
$$\big\|L_m^{\gamma,\delta}\big(K_n^0 - K\big)u\big\|_{\gamma,\delta,t} \le c\,m^t n^{-s}\,\|u\|_{\alpha,\beta} \qquad \text{with} \quad c \ne c(n,m,t,u).$$

Proof Using $\|v_m\|_{\gamma,\delta,t} \le m^t\|v_m\|_{\gamma,\delta}$ for all $v_m \in \mathcal{P}_m$ and the algebraic accuracy of the Gaussian rule, we can estimate
$$\begin{aligned}
\big\|L_m^{\gamma,\delta}\big(K_n^0 - K\big)u\big\|_{\gamma,\delta,t}^2
&\le m^{2t}\,\big\|L_m^{\gamma,\delta}\big(K_n^0 - K\big)u\big\|_{\gamma,\delta}^2\\
&= m^{2t}\sum_{j=1}^{m}\lambda_{mj}^{\gamma,\delta}\left|\int_{-1}^{1}\Big(\big[L_{n+\kappa}^{\alpha,\beta}K(x_{mj}^{\gamma,\delta},\cdot)\big](y) - K(x_{mj}^{\gamma,\delta},y)\Big)u(y)\,v^{\alpha,\beta}(y)\,dy\right|^2\\
&\le m^{2t}\sum_{j=1}^{m}\lambda_{mj}^{\gamma,\delta}\,\big\|L_{n+\kappa}^{\alpha,\beta}K(x_{mj}^{\gamma,\delta},\cdot) - K(x_{mj}^{\gamma,\delta},\cdot)\big\|_{\alpha,\beta}^2\,\|u\|_{\alpha,\beta}^2\\
&\le c\,m^{2t}n^{-2s}\sum_{j=1}^{m}\lambda_{mj}^{\gamma,\delta}\,\big\|K(x_{mj}^{\gamma,\delta},\cdot)\big\|_{\alpha,\beta,s}^2\,\|u\|_{\alpha,\beta}^2 \le c\,m^{2t}n^{-2s}\,\|u\|_{\alpha,\beta}^2,
\end{aligned}$$
where we also took into account Lemma 3.2.39,(b).

Corollary 7.1.4 Let the conditions of Proposition 7.1.1 or 7.1.2 be satisfied in case of considering the collocation or the collocation-quadrature method, respectively. If, for some $s > \frac{1}{2}$, $f \in \mathbf{L}^{2,s}_{-\alpha,-\beta}$ and $K(\cdot,y) \in \mathbf{L}^{2,s}_{-\alpha,-\beta}$ uniformly with respect to $y \in [-1,1]$, then, in case of $\gamma = -\alpha$ and $\delta = -\beta$, for the solution $u_n^*$ of the collocation method (7.1.5) we have
$$\big\|u_n^* - u^*\big\|_{\alpha,\beta,t} \le c\,n^{t-s}\,\big\|u^*\big\|_{\alpha,\beta,s}, \qquad 0 \le t \le s, \tag{7.1.14}$$
with $c \ne c(n,t,u^*)$. If additionally $K(x,\cdot) \in \mathbf{L}^{2,s}_{\alpha,\beta}$ uniformly with respect to $x \in [-1,1]$, then the estimate (7.1.14) remains true for the solution $u_n^*$ of the collocation-quadrature method (7.1.6).

Proof Note that, due to $Au^* = f - Ku^*$, Lemma 2.4.6, and Corollary 5.2.8, we have $u^* \in \mathbf{L}^{2,s}_{\alpha,\beta}$. We use the orthoprojections
$$P_n : \mathbf{L}^2_{\alpha,\beta} \longrightarrow \mathbf{L}^2_{\alpha,\beta}, \qquad u \mapsto \sum_{k=0}^{n-1}\big\langle u, p_k^{\alpha,\beta}\big\rangle_{\alpha,\beta}\,p_k^{\alpha,\beta},$$



and $Q_n = I - P_n$, for which we have (cf. (6.4.38))
$$\|Q_n u\|_{\alpha,\beta,t} \le n^{t-s}\,\|u\|_{\alpha,\beta,s}, \qquad 0 \le t \le s,\ u \in \mathbf{L}^{2,s}_{\alpha,\beta}.$$
Consequently, since $\|p_n\|_{\alpha,\beta,t} \le n^t\|p_n\|_{\alpha,\beta}$ for $p_n \in \mathcal{P}_n$, we get
$$\begin{aligned}
\big\|u_n^* - u^*\big\|_{\alpha,\beta,t}
&\le \big\|u_n^* - P_n u^*\big\|_{\alpha,\beta,t} + n^{t-s}\big\|u^*\big\|_{\alpha,\beta,s}\\
&\le n^t\,\big\|P_n(u_n^* - u^*)\big\|_{\alpha,\beta} + n^{t-s}\big\|u^*\big\|_{\alpha,\beta,s}\\
&\le n^t\,\big\|u_n^* - u^*\big\|_{\alpha,\beta} + n^{t-s}\big\|u^*\big\|_{\alpha,\beta,s}.
\end{aligned}$$
In case of the collocation method, we use (7.1.10) together with Lemma 3.2.39,(b) and get
$$\begin{aligned}
\big\|u_n^* - u^*\big\|_{\alpha,\beta}
&\le c\,\big\|L_n^{-\alpha,-\beta}f - f\big\|_{-\alpha,-\beta}\\
&\quad + c\left(\int_{-1}^{1} v^{-\alpha,-\beta}(x)\left|\int_{-1}^{1}\Big(\big[L_n^{-\alpha,-\beta}K(\cdot,y)\big](x) - K(x,y)\Big)u^*(y)\,v^{\alpha,\beta}(y)\,dy\right|^2 dx\right)^{\frac{1}{2}}\\
&\le c\,\big\|L_n^{-\alpha,-\beta}f - f\big\|_{-\alpha,-\beta}\\
&\quad + c\left(\int_{-1}^{1} v^{-\alpha,-\beta}(x)\int_{-1}^{1}\Big|\big[L_n^{-\alpha,-\beta}K(\cdot,y)\big](x) - K(x,y)\Big|^2 v^{\alpha,\beta}(y)\,dy\,dx\right)^{\frac{1}{2}}\big\|u^*\big\|_{\alpha,\beta}\\
&\le c\left(n^{-s}\big\|f\big\|_{-\alpha,-\beta,s} + \sup\Big\{\big\|L_n^{-\alpha,-\beta}K(\cdot,y) - K(\cdot,y)\big\|_{-\alpha,-\beta} : y \in [-1,1]\Big\}\,\big\|u^*\big\|_{\alpha,\beta}\right)\\
&\le c\,n^{-s}\left(\big\|u^*\big\|_{\alpha,\beta,s} + \sup\Big\{\big\|K(\cdot,y)\big\|_{-\alpha,-\beta,s} : y \in [-1,1]\Big\}\,\big\|u^*\big\|_{\alpha,\beta}\right) \le c\,n^{-s}\,\big\|u^*\big\|_{\alpha,\beta,s}.
\end{aligned}$$
In case of the collocation-quadrature method we can proceed analogously using (7.1.13) and
$$\big\|\big(L_n^{\alpha,\beta}K_n^0 - K\big)u^*\big\|_{-\alpha,-\beta} \le \big\|L_n^{\alpha,\beta}\big(K_n^0 - K\big)u^*\big\|_{-\alpha,-\beta} + \big\|\big(L_n^{\alpha,\beta}K - K\big)u^*\big\|_{-\alpha,-\beta},$$
as well as Lemma 7.1.3 for $\gamma = -\alpha$, $\delta = -\beta$, and $m = n$.

7.1.2 Weighted Uniform Convergence

Now let $\alpha^\pm$ and $\beta^\pm$ be nonnegative constants satisfying $\alpha = \alpha^+ - \alpha^-$ and $\beta = \beta^+ - \beta^-$, as well as $\alpha^+ + \alpha^- < 1$ and $\beta^+ + \beta^- < 1$, and let $\gamma, \delta \ge 0$. Note that, under these conditions, we have the continuous embeddings
$$\mathbf{C}_{\alpha^+,\beta^+} \subset \mathbf{L}^2_{\alpha,\beta} \quad \text{and} \quad \mathbf{C}_{\alpha^-,\beta^-} \subset \mathbf{L}^2_{-\alpha,-\beta}. \tag{7.1.15}$$
The considerations in this section are mainly based on [74], where the parameters $\alpha^\pm$ and $\beta^\pm$ are taken as $\alpha^\pm = \max\{\pm\alpha, 0\}$ and $\beta^\pm = \max\{\pm\beta, 0\}$, respectively. Assume that $(X_n)_{n=1}^\infty$ is a sequence of partitions of the interval $[-1,1]$ with $X_n = \{x_{1n}, \ldots, x_{m_n n}\}$, $-1 \le x_{m_n n} < \ldots < x_{2n} < x_{1n} \le 1$, and
$$x_{1n} = 1 \ \text{if}\ \gamma > 0, \qquad x_{m_n n} = -1 \ \text{if}\ \delta > 0. \tag{7.1.16}$$
By $\mathcal{L}_n = \mathcal{L}_{X_n}$ we denote the Lagrange interpolation operator with respect to $X_n$, i.e., for a function $f : [-1,1] \longrightarrow \mathbb{C}$, the polynomial $\mathcal{L}_n f \in \mathcal{P}_{m_n}$ is defined by $(\mathcal{L}_n f)(x_{jn}) = f(x_{jn})$, $j = 1, \ldots, m_n$. The respective weighted Lebesgue constant $\|\mathcal{L}_n\|_{\gamma,\delta}$ is given by
$$\|\mathcal{L}_n\|_{\gamma,\delta} = \|\mathcal{L}_n\|_{\mathbf{C}_{\gamma,\delta}\to\mathbf{C}_{\gamma,\delta}} = \sup\big\{\|\mathcal{L}_n f\|_{\gamma,\delta,\infty} : f \in \mathbf{C}_{\gamma,\delta},\ \|f\|_{\gamma,\delta,\infty} \le 1\big\}. \tag{7.1.17}$$

Exercise 7.1.5 Prove that the weighted Lebesgue constant in (7.1.17) can be defined equivalently by
$$\|\mathcal{L}_n\|_{\gamma,\delta} = \sup\big\{\|\mathcal{L}_n f\|_{\gamma,\delta,\infty} : f \in \mathbf{C}_{\gamma,\delta},\ \|f\|_{\gamma,\delta,\infty} = 1\big\}$$
as well as, in case of $x_{1n} = 1$ and $x_{m_n n} = -1$, by
$$\|\mathcal{L}_n\|_{\gamma,\delta} = \sup\big\{\|\mathcal{L}_n f\|_{\gamma,\delta,\infty} : f \in \widetilde{\mathbf{C}}_{\gamma,\delta},\ \|f\|_{\gamma,\delta,\infty} = 1\big\}.$$

Let us again consider the case $\kappa \ge 0$. We look for a solution $u \in \mathbf{C}_{\alpha^+,\beta^+}$ of (7.1.2), which satisfies the additional condition (7.1.3) in case $\kappa = 1$. We assume that $K$ maps $\mathbf{C}_{\alpha^+,\beta^+}$ into $\mathbf{L}^2_{-\alpha,-\beta}$ and $f$ belongs to $\mathbf{C}_{\alpha^-,\beta^-}$. If we are interested in solutions $u \in \mathbf{C}_{\alpha^+,\beta^+}$ satisfying (7.1.3), then, due to (7.1.15) and Corollary 5.2.6, we can consider the equation
$$\big(I + \widetilde{A}K\big)u = \widetilde{A}f \tag{7.1.18}$$
instead of (7.1.2), (7.1.3). Moreover, if $K \in \mathcal{L}^\xi\big(\mathbf{C}_{\alpha^+,\beta^+}, \mathbf{C}_{\alpha^-,\beta^-}\big)$ with $\xi_n = \mathcal{O}(n^{-\rho})$ for some $\rho > 0$, then, due to Proposition 5.2.16, the operator on the left-hand side of (7.1.18) maps $\mathbf{C}_{\alpha^+,\beta^+}$ into $\mathbf{C}_{\alpha^+,\beta^+}$, where $\widetilde{A}K : \mathbf{C}_{\alpha^+,\beta^+} \longrightarrow \mathbf{C}_{\alpha^+,\beta^+}$ is compact.

426

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

, postulated at the beginning of this section. Indeed, the relations AAf = f and ξ ξ , AAg = g remain true for all f ∈ Cα + ,β + and g ∈ Cα − ,β − , as one can see from the proof of [76, Proposition 3.1]. Lemma 7.1.7 If f ∈ Cγ ,δ and γ ≤ α − , δ ≤ β − , then ρ,τ

τ +1   n A(f , − Ln f ) + + ≤ c f γ ,δ,ρ,τ ln Ln γ ,δ , α ,β ,∞ ρ n

n = 2, 3, . . . ,

where c = c(n, f ). γ ,δ

Proof Let pn ∈ Pmn satisfy f − pn γ ,δ,∞ = Emn (f ). We get   f − Ln f α − ,β − ,∞ ≤ cf − Ln f γ ,δ,∞ ≤ c f − pn γ ,δ,∞ + Ln (pn − f )γ ,δ,∞ γ ,δ

≤ c Ln γ ,δ Emn (f ) ≤ c Ln γ ,δ f γ ,δ,ρ,τ

lnτ n , nρ

n ≥ 2,

which implies, for m = 2, 3, . . ., ⎧ lnτ n ⎪ ⎨ ≤ f − Ln f α− ,β − ,∞ ≤ c Ln γ ,δ f γ ,δ,ρ,τ ρ : m < mn , − − α ,β m Em (f − Ln f ) lnτ m ⎪ γ ,δ ⎩ = E α− ,β − (f ) ≤ c Em (f ) ≤ c f γ ,δ,ρ,τ : m ≥ mn . m mρ

Consequently, f − Ln f α − ,β − ,ρ,τ ≤ c Ln γ ,δ f γ ,δ,ρ,τ lnτ n, and the assertion follows from Proposition 5.2.13 with m0 ≥ ρ.   Let Kn : Cα + ,β + −→ Cbγ ,δ be an approximation for the operator K in a sense, which has to be described later. To find an approximate solution of (7.1.2),(7.1.3) we look for a polynomial un ∈ Pmn +κ satisfying (Aun + Kn un ) (xj n ) = f (xj n ) ,

j = 1, . . . , mn ,

(7.1.19)

and, in case of κ = 1, the additional condition 

1 −1

un (x)v α,β (x) dx = 0 .

(7.1.20)

With the help of A(Pmn +κ ) = Pmn (cf. Proposition 5.2.5), we see that problem (7.1.19),(7.1.20) is equivalent to looking for a solution un ∈ Cα + ,β + of 

 , n Kn un = AL , nf , I + AL

where we also take into account Corollary 5.2.6.

(7.1.21)



Proposition 7.1.8 Suppose that  ρ,τ +σ  ρ,τ (a) K ∈ L Cα + ,β + , Cα − ,β − for some σ ≥ 0 and f ∈ Cγ ,δ for some γ ≤ α − , δ ≤ β −,  ρ,τ  (b) the operators Kn ∈ L Cα + ,β + , Cγ ,δ are uniformly bounded, lnτ0 n (c) (K − Kn )pα − ,β − ,∞ ≤ c ρ pα + ,β + ,∞ for all polynomials p ∈ Pn , n 0 where c = c(n, p), τ0 ≥ 0, as well as ρ0 > 0, lnτ +1 n Ln γ ,δ = 0. (d) lim n→∞ nρ   , n Kn is collectively compact with AL , in , n Kn −→ AK Then the sequence AL Cα + ,β + and %

    A , Ln Kn − K u

α + ,β + ,∞

≤c

&  lnτ +1 n lnτ0 +1 n α + ,β + uα + ,β + ,∞ + En Ln γ ,δ + (u) nρ nρ0

(7.1.22)   , = 0, for all u ∈ Cα + ,β + , where c = c(n, u). Furthermore, if dim NCα+ ,β + I + AK then, for all sufficiently large n, Eq. (7.1.21) is uniquely solvable in Cα + ,β + and the solution u∗n converges in the norm of Cα + ,β + to the unique solution u∗ of (7.1.18), where & % τ +1 n    ∗  lnτ0 +1 n ln ∗ σ u − u  + + ≤ c Ln γ ,δ + ln n + f γ ,δ,ρ,τ n α ,β ,∞ nρ nρ0 (7.1.23) with a constant c = c(n, f ). Proof From γ ≤ α − , δ ≤ β − , and (b), we get that the operators Kn ∈  ρ,τ L Cα + ,β + , Cα − ,β − are uniformly bounded. Consequently, for u ∈ Cα + ,β + and α + ,β +

pn ∈ Pn with u − pn α + ,β + ,∞ = En   A ,(Kn − K)u

(u), we conclude

α + ,β + ,∞

      , (pn − u) + + , n (u − pn ) + + + A ,(Kn − K)pn  + + AK ≤ AK α ,β ,∞ α ,β ,∞ α ,β ,∞   (Kn − K)pn α − ,β − ,ρ,τ +σ ≤ c u − pn α + ,β + ,∞ + (Kn − K)pn α − ,β − ,∞ ln n + nm0 % + + Enα ,β (u)

≤c

pn α + ,β + ,∞ lnτ0 +1 n pn α + ,β + ,∞ + + nρ0 nm0

% ≤ c Enα

+ ,β +

(u) +

lnτ0 +1 n uα + ,β + ,∞ nρ0

& ,

&



where we took into account Proposition 5.2.14, assumption (c), and Proposition 5.2.13 with m0 ∈ N, m0 ≥ ρ0 , as well as pn α + ,β + ,∞ ≤ uα + ,β + ,∞ + Enα

+ ,β +

(u) ≤ 2 uα + ,β + ,∞ .

With the help of Lemma 7.1.7 and assumption (b) we obtain            A , Ln Kn − K u + + ≤ A , Ln Kn u − Kn u  + + + A , Kn − K u + + α ,β ,∞ α ,β ,∞ α ,β ,∞ % ≤c

lnτ +1 n lnτ0 +1 n Ln γ ,δ + nρ nρ 0

&

 uα+ ,β + ,∞ +

+ + Enα ,β (u)

.

, n Kn −→ AK , in Cα + ,β + , since the set This implies the strong convergence AL P of polynomials is dense  in Cα + ,β + (see Exercises 2.4.23 and 6.2.5).  In view of assumption (b), the set Kn u : u ∈ Cα + ,β + , uα + ,β + ,∞ ≤ 1, n ∈ N is bounded in ρ,τ ρ,τ ρ,τ Cγ ,δ . Due to the continuous embedding Cγ ,δ ⊂ Cα − ,β − (see Proposition 2.4.28,(e)) and Proposition 5.2.14, this implies the boundedness of the set   , n u : u ∈ Cα + ,β + , uα + ,β + ,∞ ≤ 1, n ∈ N A := AK ρ,τ +1

in Cα + ,β + . Hence, in view of Proposition 2.4.28,(b), A is a relatively compact subset of Cα + ,β + , which implies together with Lemma 7.1.7 and assumptions (b) and (d)   , n Kn . Indeed, for all the collective compactness of AL   u ∈ SCα+ ,β + := u ∈ Cα + ,β + : uα + ,β + ,∞ ≤ 1 , we have    A , Kn u − Ln Kn u 

Lemma 7.1.7 α + ,β + ,∞



(b)



c Kn uγ ,δ,ρ,τ

c

lnτ +1 n Ln γ ,δ nρ

lnτ +1 n (d) Ln γ ,δ −→ 0 nρ

if n −→ ∞ .

, , Consequently, onecan apply  Exercise 2.6.11 with Tn := AKn and Mn := ALn Kn . , If dim NCα+ ,β + I + AK = 0, then Proposition 2.6.8 states the unique solvability of (7.1.21) for all sufficiently large n together with  ∗      u − u∗  + + ≤ c A , Ln Kn − K u∗  + + . n α ,β ,∞ α ,β ,∞

(7.1.24)



Proposition 5.2.14, assumption (a), and f α − ,β − ,ρ,τ +σ ≤ c f γ ,δ,ρ,τ yield u∗ = +σ +1 , − Ku∗ ) ∈ Cρ,τ and A(f + + α ,β

 ∗ u 

α + ,β + ,ρ,τ +σ +1

    ≤ c f γ ,δ,ρ,τ + u∗ α + ,β + ,∞ ≤ c f γ ,δ,ρ,τ .

Hence Enα

+ ,β +

(u∗ ) ≤

 lnτ +σ +1 n  lnτ +1 n σ u∗  + + ≤c ln n f γ ,δ,ρ,τ . α ,β ,ρ,τ +σ +1 ρ n nρ  

This, together with (7.1.22) and (7.1.24), leads to (7.1.23). Proposition 7.1.9 Assume that

(a) 0 ≤ γ ≤ α − and 0 ≤ δ ≤ β − , ρ,τ ρ,τ (b) K ∈ L(Cα + ,β + , Cγ ,δ ) and f ∈ Cγ ,δ , Cbγ ,δ ), where we consider Pmn +κ as a subspace of Cα + ,β + , and (c) Kn ∈ L(Pmn +κ , (Kn − K)pγ ,δ,∞ ≤ c

lnτ n pα + ,β + ,∞ nρ

∀ p ∈ Pmn +κ ,

c = c(n, p) ,

lnτ +1 n Ln γ ,δ = 0, n→∞ nρ , = 0. (e) dim NCα+ ,β + (I + AK)

(d) lim

Then, for all sufficiently large n, Eq. (7.1.21) is uniquely solvable and the solution u∗n converges in the norm of Cα + ,β + to the unique solution u∗ of (7.1.18), where τ +1   ∗ u − un  + + ≤ c ln Ln γ ,δ f γ ,δ,ρ,τ n α ,β ,∞ nρ

(7.1.25)

with a constant c = c(n, f ). , and Proof We apply Corollary 2.6.7 with X = Cα + ,β + , Xn = Pmn +κ , T = AK, , Tn = ALn Kn , as well as zero operators E and En . From Corollary 5.2.6 it follows Tn ∈ L(Xn ). Moreover, due to (5.2.31), we get   AL , n (Kn − K)un  + + ≤ cLn (Kn − K)un α − ,β − ,∞ ln n α ,β ,∞ (a)

≤ cLn (Kn − K)un γ ,δ,∞ ln n

≤ cLn γ ,δ (Kn − K)un γ ,δ,∞ ln n (c)

≤c

lnτ +1 n Ln γ ,δ un α + ,β + ,∞ , nρ

u n ∈ Xn , (7.1.26)



and, due to Lemma 7.1.7   A ,(Ln K − K)u

≤ c Kuγ ,δ,ρ,τ

α + ,β + ,∞

(b) lnτ +1 lnτ +1 n Ln γ ,δ ≤ c ρ Ln γ ,δ uα + ,β + ,∞ nρ n

(7.1.27) for all u ∈ X. Hence, lim Tn − T Xn →X = 0. Moreover, in view of Propon→∞

sition 5.2.16 together with assumptions (a) and (b), the operator T : X −→ X is compact. Thus, all conditions of Corollary 2.6.7 are satisfied, and it remains to prove , n f of (7.1.21) belongs to Xn . The the estimate (7.1.25), since the right-hand side AL , nK : estimate (7.1.27) together with Corollary 2.6.7 of I + AL   give the invertibility  , n K)−1  X −→ X for all sufficiently large n and sup (I + AL : n ≥ n 0 −1 as well as β + β1 = −β − − δ0 > −1. Moreover, if we set kn = n, κ(n) = 2n, w(x) = v α,β (x), and u1 (x) = + + v α +γ0 ,β +δ0 (x), then Corollary 6.2.1,(b) yields the second assertion (7.1.39).   ρ,τ

ρ ,τ

Proposition 7.1.15 Let Kn be defined as in (7.1.37), where K ∈ Cγ ,δ,x ∩ Cγ00,δ00,y with nonnegative constants γ , δ, γ0 , δ0 satisfying γ ≤ α − , δ ≤ β − , γ0 + α − < 1 , and δ0 + β − < 1 .   lnτ +1 n , = 0, then, Ln γ ,δ = 0, and dim NCα+ ,β + I + AK n→∞ nρ for all sufficiently large n, Eq. (7.1.21) has a unique solution u∗n ∈ Cα + ,β + , which converges in the norm of the space Cα + ,β + to the unique solution u∗ ∈ Cα + ,β + of (7.1.18), where ρ,τ

If f ∈ Cγ ,δ , lim

  ∗  u − u∗  n

with c = c(n, f ).

α + ,β + ,∞



 lnτ +1 n lnτ0 +1 n f γ ,δ,ρ,τ Ln γ ,δ + ≤c nρ nρ0



ρ,τ Proof From Proposition 5.3.13 it follows K ∈ L( Cα + ,β + , Cγ ,δ ). Referring to the definition (7.1.37) of the operators Kn and to (7.1.38) we estimate, for u ∈ Cα + ,β + ,

Kn uγ ,δ,ρ,τ

 n     α,β γ0 ,δ0 −γ0 −α + ,−δ0 −β + α,β α + ,β + α,β α,β  = λnk K α,β v (xnk )v (xnk )u(xnk ) xnk   k=1

#   γ ,δ  ≤ c sup Ky 0 0 

γ ,δ,ρ,τ

$ γ ,δ,ρ,τ

: y ∈ [−1, 1] uα+ ,β + ,∞ ≤ c uα+ ,β + ,∞ ,

ρ,τ where we also took into account the assumption K ∈ Cγ ,δ,x ∩ Cγ0 ,δ0 ,y . Thus, ρ,τ the operators Kn ∈ L(Cα + ,β + , Cγ ,δ ) are uniformly bounded. Let p ∈ Pn and  γ ,δ  γ ,δ γ ,δ = En0 0 (Kx ), x ∈ [−1, 1]. Then, in virtue of px ∈ Pn with px − Kx  n

n

γ0 ,δ0 ,∞

Lemma 7.1.14,  

 1



−1

 α,β  γ ,δ  |(Kn p − Kp)(x)| v γ ,δ (x) = Qn Kx p −

 

 γ ,δ Kx (y)p(y)v α,β (y) dy  

 γ ,δ  γ ,δ  Kx p ≤ c Kx p − pnx pα + +γ ,β + +δ ,∞ 0 0

α + +γ0 ,β + δ0 

≤ c E2n

γ ,δ0 

≤ c pα + ,β + ,∞ En0

γ ,δ 

Kx

≤c

lnτ0 n pα + ,β + ,∞ . nρ0

Consequently, (Kn − K)pα − ,β − ,∞ ≤ c (Kn − K)pγ ,δ ≤ c

lnτ0 n pα + ,β + ,∞ nρ0

∀ p ∈ Pn .

Thus, all assumptions of Proposition 7.1.8 are fulfilled, from which all assertions follow.   In what follows, we suppose that the kernel K(x, y) of the integral operator (7.1.30) is of the form K(x, y) =

H (x, y) − H (y, y) x−y

with

ρ,τ

ρ,τ

H ∈ C0,0,x ∩ C0,0,y .

(7.1.40)

Furthermore, we assume that mn = n + k 0

with a constant integer k0 ,

(7.1.41)

and we define α,β Q n,x (f ) :=

n  k=1,k=d(x)

α,β

α,β

λnk f (xnk ) ,

(7.1.42)



where d = d(x) ∈ {1, . . . , n} such that        α,β  α,β  x − xnd  = min x − xnk  k = 1, . . . , n . If there are two such indices d, we choose the smaller one. Instead of (7.1.37) we use the definition n 

n f )(x) = (K

α,β

α,β

α,β

(7.1.43)

λnk K(x, xnk )f (xnk ) .

k=1,k=d(x)

Recall the estimate [176, Theorem 6.3.28, Theorem 9.22] 1

α,β

λnk ≤ c

1

α,β

v α+ 2 ,β+ 2 (xnk ) , n

k = 1, . . . , n ∈ N ,

(7.1.44)

where c = c(n, k). ρ,τ

ρ,τ

Lemma 7.1.16 If the function H (x, y) belongs to C0,0,x ∩ C0,0,y , then there is a sequence of polynomials pn (x, y) =

n−1 

(n)

γj k x j y k of degree less than n in each

j,k=0

variable, such that sup {|H (x, y) − pn (x, y)| : x, y ∈ [−1, 1]} ≤ c

lnτ n , nρ

n = 2, 3, . . . ,

where c = c(n). Proof Let Pn (R2 ) be the set of all algebraic polynomials p(x, y) of degree less than n in each variable. Choose a natural number r > ρ. We have (see [44, Theorem 12.1.1]), for n ≥ r, inf

sup

p∈Pn (R2 ) x,y∈[−1,1]

  |H (x, y) − p(x, y)| ≤ r[−1,1]2 H, n−1 ,

(7.1.45)

where r[−1,1]2 (H, t)

= max

sup y∈[−1,1]

rϕ



Hy0,0, t

 , sup

∞ x∈[−1,1]

rϕ

  Hx0,0 , t

 ∞

(7.1.46) and (cf. Sects. 2.4.2 and 3.1.1) #    rϕ (f, t)∞ = rϕ (f, t)0,0,∞ = sup rhϕ f 

L∞ (−1+4r 2 h2 ,1−4r 2 h2)

$ :0 ρ .

Together with (7.1.45) and (7.1.46) we arrive at inf

sup

p∈Pn (R2 ) x,y∈[−1,1]

|H (x, y) − p(x, y)| ≤ c n−ρ lnτ n ,  

and the lemma is proved. Lemma 7.1.17 ([118], Lemma 4.1) If n  k=1,k=d

− 12

≤ γ, δ ≤

1 2,

then

    α,β √ v γ ,δ (xnk ) 1 2γ −1 √ 1 2δ−1  ≤c  1−x+ 1+x+ ln n ,  α,β  n n n x − xnk  (7.1.47)

−1 < x < 1, where c = c(n). Proposition 7.1.18 Let (7.1.40) and (7.1.41) be satisfied,   and let Kn be defined τ +2 n ln n ln ρ,τ +1 Ln α − ,β − = 0, and + by (7.1.42). If f ∈ Cα − ,β − , lim ρ n→∞ n n   , dim NCα+ ,β + I + AK = 0, then, for all sufficiently large n, the collocationquadrature equation   ,K n u = AL , nf I +A

(7.1.48)

has a unique solution u∗n ∈ Cα + ,β + converging in the norm of Cα + ,β + to the unique solution u∗ of (7.1.18), where  ∗  u − u∗  + + ≤ c n α ,β ,∞



 ln n lnτ +2 n Ln α − ,β − f α − ,β − ,ρ,τ +1 + n nρ

with a constant c = c(n, f ). Proof Without further mention, in this proof we only consider natural numbers n ≥ n0 , where n0 = max {2, 1 − k0 − κ, 2(k0 + κ − 1)} (cf. (7.1.41)), which implies



≤ 2n − mn − κ + 1 ≤ 2n. For fixed x ∈ [−1, 1], we define pnH,x ∈ P2n−mn −κ+2 by n 2

 0,0  H − H − pnH,x  = E2n−mn −κ+2 (Hx0,0 − H ) , x ∞ (y) = H (y, y). In virtue of Lemma 7.1.16, we have E2n (H ) ≤ c n−ρ lnτ n, where H 0,0 τ −ρ implying En (H ) ≤ c n ln n. Thus, E2n−mn −κ+1 (Hx − H ) ≤ c n−ρ lnτ n, since En (Hx0,0 ) ≤ c n−ρ lnτ n by assumption. Due to Hx0,0(x) − H (x, x) = 0 we obtain       sup Hx0,0(y) − H (y, y) − pnH,x (y) − pnH,x (x)  : (x, y) ∈ [−1, 1]2      ≤ sup Hx0,0(y) − H (y, y) − pnH,x (y) : (x, y) ∈ [−1, 1]2      + sup Hx0,0(x) − H (x, x) − pnH,x (x) : x ∈ [−1, 1] ≤c

lnτ n . nρ (7.1.49)

x ∈ P2n by For p ∈ Pmn +κ , we define p2n

x p2n (y)

" ! p(y) pnH,x (y) − pnH,x (x) . = x−y

With the help of (7.1.49) and the algebraic accuracy of the Gaussian rule, we deduce, for all p ∈ Pmn +κ ,   (Kp − K n p)(x)  ≤

1 −1

 0,0      α,β x  x  0,0   K p (y) − px (y)v α,β (y) dy + Qα,β (px ) − Q  α,β x n n,x (p2n ) + Qn,x (p2n − Kx p) 2n 2n

≤ c pα+ ,β + ,∞

   H 0,0 (y) − H (y, y) pnH,x (y) − pnH,x (x)  −α− ,β −  x − (y) dy v   x−y x −y −1 



1

  n  pH,x (x α,β ) − pH,x (x)  −α + ,−β + (x α,β ) τ n  −α+ ,−β + α,β α,β ln n  v  n α,β nd nd v λnk . (x )λ + +  nd nd α,β α,β   nρ x − xnd |x − xnd | k=k=d

(7.1.50)



Using the mean value theorem and the Remez inequality (2.4.13), we get    pH,x (x α,β ) − pH,x (x)    n  n   H,x   nd p = (ξ )    ≤c n α,β   x−x nd

sup |y|≤1−(2n)−2

      H,x   (y) ≤ c n  pnH,x  1 , 1 ,∞ ,  pn 2 2

(7.1.51) α,β

where ξ lies between x and xnd . From [44, Theorems 7.3.1, 7.2.4] it follows  H,x    p 1 n

1 2 , 2 ,∞

≤c

2n+1  m=1

≤c

⎧ ⎪ ⎨ ⎪ ⎩

2n+1    lnτ m ≤c Em Hx0,0 − H mρ m=2

: ρ > 1,

1



2n+1

(7.1.52)

t −ρ dt lnτ n : ρ ≤ 1

1

≤c

:ρ>1

1

c n1−ρ lnτ +1 n : ρ ≤ 1 .

   α,β α,β α,β α,β α,β In case x ∈ xnn , xn1 , i.e., xnd ∈ xnn , xn1 , we have 6

α,β 1 − (xnd )2

√ ≤ 2 max

#6 α,β 1 + xnn ,

6

$ α,β 1 − xn1

≤ c n−1 ,

where we took into account the arcsin distribution of the Jacobi nodes (cf. Proposition 3.2.24 and Exercise 3.2.25). Consequently, due to (7.1.51) and (7.1.52),    pH,x (x α,β ) − pH,x (x)   1 + n1−ρ lnτ +1 n n  n  α,β nd α,β 6 (7.1.53) , x ∈  xnn , xn1 . ≤ c   α,β   α,β x − xnd 1 − (xnd )2  α,β α,β If x ∈ xnn , xn1 , then    c 6 1  α,β  2 1−ξ + ξ − xnd  ≤ n n for ξ from (7.1.51) (see the proof of [27, Theorem 4.1]). Again due to the arcsin distribution of the Jacobi nodes, we have 6  1 α,β ≤ c 1 + xnn ≤ c 1 + ξ n

and

6  1 α,β ≤ c 1 − xn1 ≤ c 1 − ξ . n



      α,β  α,β α,β  This yields ξ − xnd  ≤ c(1 ± ξ ). Consequently, 1 ± xnd ≤ 1 ± ξ + ξ − xnd  ≤ c(1 ± ξ ) and, with ξ from (7.1.51),   H,x     pn 1 1    pnH,x   1 , 1 ,∞  H,x   2 2 2 , 2 ,∞  (ξ ) ≤ ≤c 6 .  pn 2 α,β 1−ξ 1 − (xnd )2 Hence, due to (7.1.52), the estimate (7.1.53) holds true for all x ∈ [−1, 1]. Together with (7.1.44) this leads to      pH,x (x α,β ) − pH,x (x)  τ +1 n ln 1 + + − − n  n  α,β α,β α,β nd +   v −α ,−β (xnd )λnd ≤ c v −α ,−β (xnd ) α,β   n nρ x − xnd  1 lnτ +1 n , (x) + n nρ (7.1.54) 

≤ cv

−α − ,−β −

α,β

where we used 1 ± x ≤ c(1 ± xnd ) (see the proof of [74, Theorem 4.1]). Now let us consider the first term in the square brackets on the right-hand side of (7.1.50). Lemma 2.4.32 yields that Hx0,0 ∈ C0,λ uniformly with respect to y ∈ [−1, 1] for some λ ∈ (0, 12 ). We choose a natural number m ≥ ρ+2 2λ . Using Markov’s  H,x   H,x   2     ≤ c n pn and (7.1.49), we obtain inequality pn ∞ ∞ 

   H 0,0(y) − H (y, y) pH,x (y) − pH,x (x)   x  −α − ,β − n n − (y) dy  v   x − y x − y −1 1

 ≤c

x+ x−

1−x 2n2m

1+x 2n2m

|x − y|λ−1 v −α

− ,−β −

(x)

 (y) dy + c n2

x+ x−

lnτ n +c ρ n ≤ c v −α

− ,−β −

%

x− −1

1+x 2n2m

 +

&

1 x+ 1−x 2n2m

v −α

1−x 2n2m

1+x 2n2m

v −α

− ,−β −

(y) dy

− ,−β −

(y) dy |x − y|

lnτ +1 n , nρ (7.1.55) −







where we took v −α ,−β (y) ≤ c v −α ,−β (x) for y ∈ and (5.2.26) (with nm instead of n) into account.



x−

1+x ,x 2n2m

+

1−x 2n2m



To estimate the last term in the square brackets on the right-hand side of (7.1.50) we use (7.1.44) as well as Lemma 7.1.17 and get n 

v −α,−β (xnd ) α,β

k=1,k=d

|x −

α,β xnd |

α,β

λnk

(7.1.44) ≤

n 

c

k=1,k=d

v −α

− + 1 ,−β − + 1 2 2

n|x −

α,β

(xnk )

(7.1.47)

α,β xnk |



c v −α

− ,−β −

(x) ln n .

(7.1.56) Putting (7.1.50), (7.1.54), (7.1.55), and (7.1.56) together yields   (K − K n )p

 α − ,β − ,∞

≤c

 1 lnτ +1 n pα + ,β + ,∞ + n nρ

∀ p ∈ Pmn +κ .

The application of Proposition 7.1.9 combined with Proposition 5.3.14 completes the proof.  

7.1.3 Fast Algorithms In this section we present a basic idea for the construction of fast algorithms for the numerical solution of Cauchy singular integral equations of the form (7.1.1). Here "fast algorithm" means that we can solve the system of linear algebraic equations associated with the numerical method under consideration (for example, the collocation-quadrature method (7.1.6)) with O(n^k) computational complexity, where k < 3, while the simple application of the Gaussian elimination method leads to O(n^3) complexity. In contrast to, for example, Krylov subspace methods like GMRes, we do not consider iteration methods. Instead, the fast algorithms discussed here are based on the use of two grids and a compression technique. Hence, these two-grid methods can also be considered as a starting point for the construction of multiple grid methods (see, for example, [19]). The basic idea can be described shortly as follows. In the operator A + K of Eq. (7.1.2), the part A has a lot of structure, while the operator K is compact and has the property of smoothing the function to which it is applied. If the approximation method (7.1.6) is able to restore these structural and smoothing properties (in some sense) in the approximating operators, then the equation L^w_n A u_n = L^w_n f can be solved in a very cheap manner, and the final idea is to solve the complete Eq. (7.1.2) only for a small m 12 , r > 12 , and η > 0, then we have

to x ∈ [−1, 1], and f ∈ L2,s −α,−β for some the estimate

 α,β ∗     t −s−η R (u − v ∗ ) + nt −s u∗ α,β,s , m n α,β,t ≤ c m

0≤t ≤s,

where vn∗ is the solution of (7.1.61) and c = c(m, n, t, u∗ ). ∗ ∗ Proof We estimate the L2,t α,β -norm of Rm (u − vn ) by using α,β

     −α,−β  ∗ ∗ α,β ∗ −1 α,β −1 −1 Rα,β f + Rα,β Ln f − Avn∗ , f − L−α,−β m (u − vn ) = Rm u − A f + Rm A n m A

(7.1.65) where the last term on the right-hand side of (7.1.65) vanishes because of the 2,s+η equivalence of (7.1.61) and (7.1.62). Since (A + K)u∗ = f , Ku∗ ∈ L−α,−β (see 2,s+η

Lemma 2.4.6), and A maps Lα,β u∗ − A−1 f = −A−1 Ku∗ ∈ A:

L2,t α,β

−→

L2−α,−β

2,s+η

onto L−α,−β (see Corollary 5.2.8), we have

2,s+η Lα,β .

Hence, due to (7.1.60) and the fact that

is an isometric isomorphism (see Corollary 5.2.8),

   α,β ∗  Rm (u − A−1 f )

α,β,t

    ≤ c mt −s−η Ku∗ −α,−β,s+η ≤ c mt −s−η u∗ α,β ,



where c = c(m, t, u∗ ). With the help of Lemma 3.2.39,(b) we get  α,β −1     R A f −L−α,−β f  ≤ c f −L−α,−β f −α,−β,t ≤ c nt −s f −α,−β,s , m n n α,β,t where c = c(n, t, f ), and, since f −α,−β,s ≤ cu∗ α,β,s , the lemma is proved. Lemma 7.1.20 If K(·, y) ∈ 2,s+η K(x, ·) ∈ Lα,β some s > 12 and η

2,s+η L−α,−β

 

uniformly with respect to y ∈ [−1, 1],

uniformly with respect to x ∈ [−1, 1], and f ∈ L2,s −α,−β for   1 ≥ 0, then, for every t0 ∈ 2 , s and all sufficiently large m, we

have the estimate   α,β ∗ P u − w ∗  m

m α,β,t

   ≤ c mt −s−η + nt −s u∗ α,β,s ,

max {t0 , η} ≤ t ≤ s ,

∗ is the solution of (7.1.63) and c = c(m, n, t, u∗ ). where wm ∗ is the solution of (7.1.63), we obtain Proof Taking into account that wm 0 ∗ α,β ∗ (A + Km )(wm − Pm u ) L−α,−β m

 ∗ 0 α,β ∗ f − ARα,β = L−α,−β m m vn − (A + Km )Pm u  α,β ∗ ∗ α,β ∗ 0 α,β ∗ = L−α,−β (A + K)(Pm u + Rα,β m m u ) − ARm vn − (A + Km )Pm u  0 0 α,β ∗ ∗ ∗ = L−α,−β (K − Km )u∗ + Km Rm u + ARα,β m m (u − vn ) . (7.1.66) As a consequence of Corollary 7.1.4 and the Banach-Steinhaus theorem, the operators 2,t0 0 0 (A + Km ) : L2,t L−α,−β m α,β −→ L−α,−β

are invertible for all sufficiently large m, where their inverses are uniformly bounded. Hence, in virtue of (7.1.59),   ∗   ∗ α,β ∗  w − P α,β u∗  ≤ mt −t0 wm − Pm u α,β,t m m α,β,t

0

  0 ∗ α,β ∗  ≤ c mt −t0 L−α,−β (A + Km )(wm + Pm u ) −α,−β,t m 0 (7.1.67)



with c = c(m, t, u∗ ). Lemma 7.1.3 yields  −α,−β  L (K − K0 )u∗  m

m

−α,−β,t0

  ≤ c mt0 −s−η u∗ α,β .

(7.1.68)

Looking at the definition of Kn0 in (7.1.11), we see that 0 α,β ∗ Rm u = 0 , Km

(7.1.69)

α,β

since the image space of the operator Rm is v α,β -orthogonal to the space   2,t 2,t0 as well as Pm . Finally, using Lemma 3.2.39,(a) and A ∈ L Lα,β0 , L−α,−β Lemma 7.1.19, we get  −α,−β   α,β ∗  ∗ ∗  ∗  L  ARα,β m m (u − vn ) −α,−β,t ≤ c Rm (u − vn ) α,β,t 0

0

  ≤ c mt0 −s−η + nt0 −s u∗ α,β,s . (7.1.70) 

Note that the constants (denoted by c) in (7.1.68) and (7.1.70) do not depend on m, n, t, and u∗ . Using the estimates (7.1.68), (7.1.70), and relation (7.1.69) together with (7.1.67), we conclude, for t0 ≤ t ≤ s,  ∗        w − P α,β u∗  ≤ c mt−t0 mt0 −s−η + nt0 −s u∗ α,β,s ≤ c mt−s−η + nt−s u∗ α,β,s , m m α,β,t

where c = c(m, n, t, u∗ ).

 

Now we can prove the following proposition dealing with the rate of convergence of u∗n from (7.1.64) to the solution of (7.1.2). Proposition 7.1.21 Let Eq. (7.1.2) be uniquely solvable in L2α,β and the assumptions of Lemma 7.1.19 and Lemma 7.1.20 be fulfilled with an η > 0. Choose   the number m such that mr ∼ n for some r ∈ N \ {1}. Then, for every t0 ∈ 12 , s and all sufficiently large m,     ∗  un − u∗ α,β,t ≤ c nt −s u∗n α,β,s ,

# max t0 , s −

η r−1

$ ≤t ≤s,

(7.1.71)

where u∗n is defined by (7.1.64), u∗ is the solution of (7.1.2), and c = c(m, n, t, u∗ ). Proof Putting (7.1.64), Lemma 7.1.19, and Lemma 7.1.20 together, we get  ∗    un α,β,t ≤ c mt −s−η + nt −s u∗α,β,s , which immediately yields (7.1.71) using mr ∼ n.

t0 ≤ t ≤ s ,  



Computational Complexity of the Algorithm For the following considerations we assume that all needed function values −α,−β −α,−β −α,−β α,β −α,−β α,β f (xnj ), f (xmj ), K(xnj , xnk ), and K(xmj , xmk ) as well as the −α,−β

α,β

Christoffel numbers λmk and λnk are computed. Relation (5.2.25) shows that Eq. (7.1.61) is equivalent to the system of linear equations   T α,β α,β −α,−β U  ξ = (−1)λ U−α,−β ) f (xnj n n n

n j =1

=: η ,

(7.1.72)

  n γ ,δ α,β α,β α,β , and Un where ξ = , n = diag λα,β = un (xnk ) n1 . . . λnn k=1  n−1, n γ ,δ γ ,δ . As a consequence of (5.2.24), the solution of (7.1.72) is pj (xnk ) j =0,k=1

given by

 T −α,−β −α,−β ξ = (−1)λ Uα,β Un n η, n

(7.1.73)

which can be realized with O(n2 ) complexity using the algorithms designed γ ,δ for so-called discrete orthogonal polynomial transformations Un and based on the three-term recurrence relations for the respective orthogonal polynomials (cf. [192]). Moreover, in case v α,β is a Chebyshev weight (i.e., |α| = |β| = 12 ) these transformations can be realized with O(n log n)-complexity (see also [50, 190] and the references given there). For example, if α = − 12 = −β then, due to (5.1.24) − 1 , 12

and (5.1.25), xnk2 ⎡

1

,− 12

2 = cos (2k−1)π 2n+1 , xnk

⎤ (2j +1)(2k−1)π n−1, n 2(2n+1) ⎦ , π cos (2k−1)π 2(2n+1) j =0,k=1

cos −1,1 Un 2 2 = ⎣ √

2kπ = cos 2n+1 ,

1 1 2 ,− 2

and Un

% =

+1)kπ sin (2j2n+1 √ kπ π sin 2n+1

&n−1, n . j =0,k=1

Furthermore, 1 1 2 ,− 2

n

% = diag

 1 1 & n 2 ,− 2 2π 1 − xnk 2n + 1

=

k=1

4π sin2

kπ 2n+1

2n + 1

.n . k=1

Consequently, 

− 12 , 12

Un

T

1

Un2

,− 12

1

n2

,− 12

. . (2j + 1)(2k − 1)π n ,n−1 (2k − 1)π n cos = diag cos 2(2n + 1) k=1 2(2n + 1) k=1,j =0 sin

(2j + 1)kπ 2n + 1

-

.n−1, n diag j =0,k=1

4 sin

kπ 2n+1

2n + 1

.n k=1



and (7.1.73) can be performed with the help of discrete sine and cosine transformations. We also refer the reader to the considerations concerned with the weighted uniform convergence of a fast algorithm below (see, in particular, pages 455–458). Equation (7.1.63) is equivalent to (An + Kn )ω = η,  α,β where ω = wm (xmk ) % An =

(7.1.74)

m k=1

and (see (5.2.19)) &

α,β

bλmk

 α,β −α,−β  π xmk − xmj

m

 α,β −α,−β α,β Kn = λmk K(xmj , xmk )

,

m j,k=1

,

j,k=1

as well as (see (5.2.12)) % η=

−α,−β ) − (−1)λ f (xmj

n−1 

&

n

∗ −α,−β −α,−β βnk pk (xmj )

. j =1

k=m

3 2 Thus, the solution ω  of (7.1.74) can be performed with at most O(m +n ) or, in case  of α, β ∈ 12 , − 12 , O(m3 + n ln n) complexity. For this, see also (5.2.23) for the ∗ ’s from the solution ξ of (7.1.72). In summary, the complexity computation of the βnk   2 of the algorithm is O(n2 ) if we choose m ∼ n 3 or, in case of α, β ∈ 12 , − 12 , is 1

equal to O(n ln n) if m ∼ n 3 .
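The explicit sine representation of U_n^{1/2,−1/2} and Λ_n^{1/2,−1/2} quoted above can be verified numerically. The following sketch is our own check (not code from the book): it builds both objects from the displayed formulas, confirms the discrete orthogonality U Λ Uᵀ = I from (7.1.96), and compares the nodes and Christoffel numbers with SciPy's Gauss–Jacobi rule for the weight (1−x)^{1/2}(1+x)^{−1/2}.

```python
import numpy as np
from scipy.special import roots_jacobi

n = 16
k = np.arange(1, n + 1)                      # column index k = 1,...,n
j = np.arange(0, n)[:, None]                 # row index j = 0,...,n-1
theta = k * np.pi / (2 * n + 1)

# Formulas quoted in the text for the weight v^{1/2,-1/2}:
x = np.cos(2 * theta)                                 # nodes x_{nk}^{1/2,-1/2}
lam = 4 * np.pi * np.sin(theta) ** 2 / (2 * n + 1)    # Christoffel numbers
U = np.sin((2 * j + 1) * theta) / (np.sqrt(np.pi) * np.sin(theta))

# Discrete orthogonality (7.1.96): U Lambda U^T = I.
print(np.allclose(U @ np.diag(lam) @ U.T, np.eye(n)))          # True

# Agreement with SciPy's Gauss-Jacobi rule (returned in ascending node order).
xg, wg = roots_jacobi(n, 0.5, -0.5)
order = np.argsort(x)
print(np.allclose(x[order], xg), np.allclose(lam[order], wg))  # True True
```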

Weighted Uniform Convergence Let us discuss the possibility to study the convergence of fast algorithms also in weighted uniform norms. We follow the explications in [75, Section 3]. Since, for γ ,δ γ ,δ p ∈ Pn , Em (p) = 0 ∀ m ≥ n and since Em (p) ≤ pγ ,δ,∞ , we have the Bernstein-type inequalities pγ ,δ,ρ,τ ≤

nρ pγ ,δ,∞ ≤ cτ nρ pγ ,δ,∞ , lnτ n

p ∈ Pn ,

(7.1.75)

where cτ = cτ (n, ρ, p), and 

pγ ,δ,ρ,τ ≤ nρ−ρ pγ ,δ,ρ  ,τ ,

p ∈ Pn , 0 < ρ  ≤ ρ .

(7.1.76)

As a special case of Proposition 3.2.21, we have (cf. [44, Theorem 8.4.8]) # $ 1 pγ ,δ,∞ ≤ c sup |p(x)|v γ ,δ (x) : |x| ≤ 1 − 2 , n

p ∈ Pn ,

(7.1.77)



where c = c(n, p). This implies pγ  ,δ  ,∞ ≤ c n2 max{γ −γ

 ,δ−δ 

} p

γ ,δ,∞ ,

0 ≤ γ  ≤ γ , 0 ≤ δ  ≤ δ , p ∈ Pn , (7.1.78)

where c = c(n, p) . Lemma 7.1.22 If f ∈ Cγ ,δ , pn ∈ Pn , and f − pn γ ,δ,∞ ≤ cf

lnτ n , nρ

n ∈ N \ {1} ,

then f − pn γ ,δ,ρ  ,τ ≤

cf lnτ n , lnτ 2 nρ−ρ 

0 < ρ  ≤ ρ , n ∈ N \ {1} .

Proof We take into account ⎧ lnτ m γ ,δ ⎪ ⎪ : m ≥ n, ⎨ = Em (u) ≤ cf mρ γ ,δ Em (f − pn ) τ ⎪ ⎪ ⎩ ≤ f − pn γ ,δ,∞ ≤ cf ln n : m < n , nρ and get, for m ∈ N \ {1}, ⎧ ⎪ ⎪ ⎨

cf cf lnτ n ≤  lnτ 2 nρ−ρ  mρ−ρ

: m ≥ n, m γ ,δ Em (f − pn ) ≤ τ τ ρ ⎪ lnτ m ⎪ ⎩ ≤ m cf ln n ≤ cf ln n : m < n , τ τ ln m nρ ln 2 nρ−ρ  ρ

γ ,δ

as well as E1 (f − pn ) ≤ f − pn γ ,δ,∞ ≤ cf lemma is proved.

cf lnτ n lnτ n ≤ , and the nρ lnτ 2 nρ−ρ   

For the investigation of the fast algorithm in weighted uniform norms, we have to α,β α,β study some properties of the L2α,β -orthoprojections Pn and Rn (see (7.1.58)) in the weighted spaces of continuous functions under consideration here. In that sense, the following lemma is crucial. For this, define α :=

# $ 1 1 max 0, α + 2 2

:= and β

# $ 1 1 max 0, β + . 2 2



, and set Lemma 7.1.23 Assume 0 ≤ γ < 1 + α − α, 0 ≤ δ < 1 + β − β $ # α 1 1 γ := α + max {0, γ + α − α} = max γ + , γ − α, + 2 2 4 as well as # $   β 1 1 δ := β + max 0, δ + β − β = max δ + , δ − β, + . 2 2 4 Then  α,β  R f  ≤ c Enγ ,δ (f ) ln n n γ ,δ,∞

(7.1.79)

for all f ∈ Cγ ,δ and n ∈ N \ {1}, where c = c(n, f ). If f ∈ Cγ ,δ , then ρ,τ

 α,β  R f 

γ , δ,ρ  ,τ

n

≤ c f γ ,δ,ρ,τ

lnτ +1 n  , nρ−ρ

0 < ρ  ≤ ρ , n ∈ N\{1} ,

(7.1.80)

where c = c(n, ρ  , f ).

  we obtain the Proof Using formula (4.1.114) and setting η := 1 + max α, β representation  α,β  Pn f (x) =

γn−1 γn

γn−1 = γn



α,β

α,β

α,β

α,β

1

pn (y)pn−1 (x) − pn−1 (y)pn (x)

−1

y −x



x−

1+x 2n2η

−1

 +

x+

x−

1−x 2n2η

1+x 2n2η

 +



1

x+

1−x 2n2η

α,β

f (y)v α,β (y) dy α,β

α,β

α,β

pn (y)pn−1 (x) − pn−1 (y)pn (x) y−x

f (y)v α,β (y) dy

=: I1 + I2 + I3

with (cf. (5.1.5), (5.1.11), and (5.1.12))

γn = γnv

α,β

=

 2n

  2n + α + β n (n + α + 1)(n + β + 1) 2α+β+1 2n + α + β + 1 n! (n + α + β + 1)

,



such that lim

n→∞

that

γn−1 1 = . To estimate the integrals I1 , I2 , and I3 , first we remark γn 2  α,β  p  ≤ c , n α ,β ,∞

n ∈ N0 , c = c(n) ,

(7.1.81)

which is a consequence of inequality (2.4.11). Relation (7.1.78) yields α ,β } p∞ ≤ c n2 max{ p ,∞ , α ,β

p ∈ Pn , c = c(n, p) .

(7.1.82)

Now, taking into account (7.1.81), (7.1.82), and Markov’s inequality (2.4.15), we can estimate  α,β   p (y)pα,β (x) − pα,β (y)pα,β (x)  n  n  n−1 n−1     y−x   α,β α,β   pα,β (y) − pα,β (x) pn−1 (x) − pn−1 (y) α,β  α ,−β  n n α,β α ,β α ,β (x) = pn−1 (x)v (x) + pn (x)v (x) v −   y −x y−x     α,β    α ,−β α ,−β v − ≤ n2 pnα,β ∞ pn−1  + pnα,β  (x) ≤ c n2η v − (x) α,β,∞ α,β,∞

and get  |I2 | ≤ c n f γ ,δ,∞ 2η

x+ 1−x 2η 2n

x− 1+x 2n2η

 Since, for γ  ≥ 0, δ  ≥ 0, and y ∈ x − v −γ

 ,−δ 



α −γ ,β−β −δ v α− (y) dy .

1+x ,x 2 n2η

(y) ≤ c v −γ

+

1−x 2 n2η

 ,−δ 

,

(x) ,

we have α −α},− max{0,δ+β −β } |I2 | ≤ c v − max{0,γ + (x)f γ ,δ,∞ .

To estimate I1 + I3 , we use (7.1.81) together with (5.2.26) and get |I1 + I3 | ≤ c v

− α ,−β

(x)





x− 1+x 2η

−1

2n

 +

γ ,−δ ≤ c v − (x)f γ ,δ,∞ ln n .



1

x+ 1−x 2η 2n



α −γ ,β−β −δ (y) v α− dyf γ ,δ,∞ |x − y|



Consequently,  α,β  P f 

, γ δ,∞

n

≤ cf γ ,δ,∞ ln n

(7.1.83)

and, for all p ∈ Pn ,  α,β    P f −f  ≤ P α,β (f −p) +f − p n n γ ,δ,∞ γ ,δ,∞ γ ,δ,∞

γ >γ , δ>δ



cf − pγ ,δ,∞ ln n ,

which proves (7.1.79). The second assertion (7.1.80) is a consequence of the first one and of Lemma 7.1.22.   Now let us consider a special case of a Cauchy singular integral equation of the form (7.1.1), namely 1 π



1

−1

v − 2 , 2 (y)u(y) dy + y−x 1 1



1 −1

K(x, y)u(y)v − 2 , 2 (y) dy = f (x) , 1 1

(7.1.84)

−1 < x < 1, where f (x) and K(x, y) are given functions. We will use the 1 1 1 1 abbreviations ν(x) = v − 2 , 2 (x) as well as μ(x) = v 2 ,− 2 (x) and write (7.1.84) shortly as (A + K)u = f ,

(7.1.85)

where (Au)(x) =

1 π



1 −1

ν(y)u(y) dy y−x

 and (Ku)(x) =

1 −1

K(x, y)u(y)ν(y) dy .

Note that the operator A in (7.1.85) is equal to the operator −A in Proposition 5.2.5 in case of a = 0, b = −1, i.e., β0 = 12 , and λ = −1, κ = 0, i.e., α = − 12 , β = 12 . The respective collocation-quadrature method (7.1.6) can be written as   0 μ Lμ n A + Kn un = Ln f ,

(7.1.86)

where we search for an approximate solution un ∈ Pn and n    ν ν λνnk K(x, xnk )un (xnk ) Kn0 u (x) =

(7.1.87)

k=1

with (cf. Exercise 5.1.4) λνnk =

ν ) 2π(1 + xnk 4π (2k − 1)π = cos2 2n + 1 2n + 1 4n + 2

ν and xnk = cos

(2k − 1)π . 2n + 1 (7.1.88)



If we want to apply Proposition 7.1.15 to the present situation, we have to make the following choices and assumptions: (A1) Choose α ± , β ± ∈ [0, 1) such that α + − α − = − 12 and β + − β − = 12 . ρ,τ ρ ,τ (A2) Let K ∈ Cγ ,δ,x ∩ Cγ00,δ00,y with 0 ≤ γ ≤ α − 0 ≤ δ ≤ β − , γ0 + α − < 1 , δ0 + β − < 1 . ρ,τ

(A3) The right-hand side f in (7.1.84) resp. (7.1.85) belongs to the space Cγ ,δ .     , u = 0 is trivial, where (cf. (5.2.15)) (A4) The nullspace u ∈ Cα + ,β + : I + AK 1 , (Ag)(x) =− π



1 −1

μ(y)g(y) dy . y−x

Note that, due to Proposition 5.2.5 and Corollary 5.2.6, Apnν = pnμ

, nμ = pnν , and Ap

n ∈ N0 .

(7.1.89)

Furthermore, due to (5.2.31), Proposition 5.2.14, and Remark 5.2.15,   Ap ,  + + ≤ cpα − ,β − ,∞ ln n α ,β ,∞

∀ p ∈ Pn ,

c = c(n, p) ,

(7.1.90)

and  ρ,τ  ρ,τ +1  ρ,τ +1  , ∈ L Cρ,τ , Cα + ,β + and A ∈ L Cα + ,β + , Cα − ,β − . A α − ,β −

(7.1.91)

Moreover, we have to check that the condition  lnτ +1 n  Lμ  = 0 n γ ,δ ρ n→∞ n is fulfilled. For this, let us cite the following Proposition. (A5) lim

Proposition 7.1.24 ([27], Theorem 4.1) Let r and s be nonnegative integers, α, β > −1, and α,β

α,β − 1 ≤ yns < . . . < yn1 < xnn < . . . < xn1 < znr < . . . < zn1 ≤ 1 .

(7.1.92)

Moreover, assume that xnn − yn1 ∼ n−2 , znr − xn1 ∼ n−2 , ynj − yn,j −1 ∼ n−2 , j = 2, . . . , s, znk − zn,k−1 ∼ n−2 , k = 2, . . . , r − 1, and 1 − zn1 ∼ n−2

if γ > 0

as well as 1 + yns ∼ n−2 if δ > 0



α,β

uniformly with respect to n ∈ N. By Ln,r,s f we denote the Lagrange interpolation polynomial of the function f : [−1, 1] −→ C with respect to the nodes (7.1.92). If α 1 α 5 + −γ ≤ r ≤ + −γ 2 4 2 4

and

β 1 β 5 + −δ ≤ s ≤ + −δ, 2 4 2 4

(7.1.93)

then, for f ∈ Cγ ,δ ,   γ ,δ f − Lα,β f  ≤ c En+r+s (f ) ln n , n,r,s γ ,δ,∞  α,β  where c = c(n, f ). In particular, Ln,r,s γ ,δ ≤ c ln n. μ

This proposition applied to the interpolation operators Ln , i.e., α = 12 , β = − 12 , and r = s = 0, as well as Lνn , i.e., α = − 12 , β = 12 , and r = s = 0, leads to the following corollary. Corollary 7.1.25 Let f ∈ Cγ ,δ . If

1 2

≤γ ≤

3 2

and 0 ≤ δ ≤ 1, then

  γ ,δ f − Lμ f  ≤ c En (f ) ln n , n γ ,δ,∞

c = c(n, f ) .

 μ In particular, Ln γ ,δ ≤ c ln n. If 0 ≤ γ ≤ 1 and

1 2

≤ δ ≤ 32 , then

  γ ,δ f − Lν f  ≤ c En (f ) ln n , n γ ,δ,∞

c = c(n, f ) .

  In particular, Lνn γ ,δ ≤ c ln n. Applying this together with Proposition 7.1.15 we can recognize that the following convergence result for the collocation-quadrature method (7.1.86) is true. Proposition 7.1.26 Let the conditions (A1)–(A4) be satisfied with 12 ≤ γ ≤ 32 , 0 ≤ δ ≤ 1, ρ0 = ρ, and τ0 = τ + 1. Then, for all sufficiently large n, Eq. (7.1.86) is uniquely solvable and its solution u∗n converges in the norm of the space Cα + ,β + to the unique solution u∗ of (7.1.85), where τ +2  ∗  n u − u∗  + + ≤ c ln f γ ,δ,ρ,τ n α ,β ,∞ nρ

with c = c(n, f ).

(7.1.94)



For a moment consider the more general situation ω = v α,β , α, β! > "−1, and n represent a polynomial un ∈ Pn with the help of the vectors ξ n = ξnk k=1 and "n−1 ! α n = αnj j =0 of Cn in two ways, namely un =

n 

ξnk ωnk =

n−1 

αnj pjω .

j =0

k=1

ω Of course, the vector ξ n contains the function values ξnk = un (xnk ) and the vector

ω α n the Fourier coefficients αnj = un , pj ω . Due to the algebraic accuracy of the Gaussian rule we have the relations n E D  ω ω δj i = pjω .piω = λωnk pjω (xnk )piω (xnk ) ω

and αnj =

k=1

n 

ω λωnk pj (xnk )ξnk ,

k=1

which imply (and which we already mentioned in (5.2.23) and (5.2.24))  T ξ n = Uωn α n ,

(7.1.95)

 T  T Uωn ωn Uωn = I = ωn Uωn Uωn ,

(7.1.96)

α n = Uωn ωn ξ n , and

where by the matrices Uωn and ωn we refer to the (generalized Vandermonde)  n−1, n ! " ω) pjω (xnk matrix and to the diagonal matrix diag λωn1 , . . . , λωnn = j =0,k=1 "n  μ T ! = δj k λωnk j,k=1 , respectively (cf. (5.2.21)). According to (5.2.22) we get Un .n 1   1 T νn Uνn or equivalently μ ν π xnk − xnj j,k=1



T ν Uμ Un n

1 = π

-

1 ν − xμ xnk nj

.

n

(7.1.97)

. j,k=1

With these notations, the collocation-quadrature method (7.1.86) is equivalent to (cf. (5.2.25))    T ν μ Un + Kn νn ξ n = ηn = f (xnj ) Uμ n where Kn =



μ

ν ) K(xnj , xnk

n j =1

,

(7.1.98)

n j,k=1

(cf. also the considerations in the previous

subsection on the computational complexity of the algorithm).



The fast algorithm is based on the assumption that there exists a δρ > 0 such that ρ+δρ ,τ 2 ,0,x

(A) K ∈ C 1

∩C

ρ+δρ ,τ 0, 21 ,y

ρ,τ 2 ,0

and f ∈ C 1 . 1

1

Here the first condition in (A) means that the function K(x, y)v 2 ,0 (x)v 0, 2 (y) is 0, 21

continuous on [−1, 1]2 and Ky 1 2 ,0

as well as Kx 1

K(x, y)v 0, 2 (y)

ρ+δρ ,τ 2 ,0

∈ C1

uniformly with respect to y ∈ [−1, 1]

0, 1 ρ+δρ ,τ uniformly with respect to x ∈ [−1, 1], where Ky 2 (x) 1 0, 2 1 1 ,0 and Kx2 (y) = K(x, y)v 2 ,0 (x). Moreover, we suppose that,

∈C

=

    , (B) for every α + ∈ 0, 12 and every β + ∈ 12 , 1 , the equation (I + AK)u =0 possesses only the trivial solution in Cα + ,β + . μ

μ

ν ), j, k = 1, . . . , n, Let us assume that the function values f (xnj ) and K(xnj , xnk are already computed. We look for an approximate solution un ∈ Pn of Eq. (7.1.84) represented in the form

un =

n−1 

αnj pjν

j =0

and choose an integer m with 0 < m < n such that d :=

2n+1 2m+1

is again integer.

First Step Solve Avn = Lμ nf ,

vn =

n−1 

βnj pjν ,

(7.1.99)

j =0

and set αnj = βnj for j = m, . . . , n−1. Referring to (7.1.98), (7.1.96), and (7.1.95), we see that (7.1.99) is equivalent to  μ T Un β n = η n

μ or β n = Uμ n n ηn ,

"n−1 ! where β n = βnj j =0 . Remembering (5.1.25) as well as Exercise 5.1.4, we see that μ

xnk = cos

2kπ 2n + 1

μ

μ

and λnk =

2π(1 − xnk ) 4π kπ = sin2 2n + 1 2n + 1 2n + 1

and, consequently, μ Uμ n n

√ .n . 4 π (2j + 1)kπ n−1, n kπ = diag sin . sin 2n + 1 2n + 1 k=1 2n + 1 j =0,k=1 μ

μ

Thus, the transformation of the right-hand side ηn by Un n is up to a diagonal matrix a discrete sine transformation and can be realized with O(n ln n) complexity



(see, for example, [202, 207]). We remark that the transformation is not one of the classical sine transformations, but writing sin

(2j + 1)2kπ (2j + 1)kπ = sin 2n + 1 2(2n + 1)

it can be regarded as a restriction of the sine-I-transformation of length 2n + 1 (cf. [202, pp. 41,42]). −1,1

−1,1

Let Pnν = Pn 2 2 and Rνn = Rn 2 2 = I − Pnν (cf. (7.1.58)). As a special case of Lemma 7.1.23 we obtain, for 0 ≤ γ < 12 and 0 ≤ δ < 1,  ν  R f  1 1 ≤ c Enγ ,δ (f ) ln n ≤ c f γ ,δ,∞ ln n , n γ + ,δ+ ,∞ 2

f ∈ Cγ ,δ ,

2

(7.1.100) and for, 0 < ρ  ≤ ρ, τ +1  ν  n R f  1 1  ≤ c ln  f γ ,δ,ρ,τ , n γ + 2 ,δ+ 2 ,ρ ,τ ρ−ρ n

ρ,τ

f ∈ Cγ ,δ .

(7.1.101)

Moreover, due to (7.1.83),  ν  P f  1 1 ≤ cf γ ,δ,∞ ln n , n γ + ,δ+ ,∞ 2

f ∈ Cγ ,δ .

2

 Lemma 7.1.27 Let (A), (B), and 0 < ε < min ρ

1 ρ 2, 4



(7.1.102)

be satisfied. Then, for 0 ≤

≤ ρ − 4ε,     ν ∗ 4ε+ρ  −ρ−δρ 4ε+ρ  −ρ R (u − v ∗ ) 1 f  1 ,0,ρ,τ , ≤ c m + n  m n −ε,1−ε,ε+ρ ,τ +2 2

2

where c = c(m, n, f, ρ  ) as well as u∗ and vn∗ =

n−1 

∗ ν βnj pj are the solutions

j =0

of (7.1.85) and (7.1.99), respectively. Proof Let g ∈ C

ρ , τ 0, 21

 0, 1 g g and Pm ∈ Pm such that g − Pm 0, 1 ,∞ = Em 2 (g). 2

By (7.1.75), (7.1.78), as well as Lemma 7.1.22 and u 1 −ε,1−ε,ε+ ≤ ρ  , τ 2  cu0, 1 ,ε+ ≤ρ − ε, ρ  , τ , we have, for 0 ≤ ρ 2

 ν  R g  1 m

ρ  , τ 2 −ε,1−ε,ε+

 ν  g  g ≤ Pm (g − Pm ) 1 −ε,1−ε,ε+ + g − Pm  1 −ε,1−ε,ε+ ρ  , τ ρ  , τ 2

≤ c τm

ε+ ρ

 ≤c m

2

  ν P (g − Pmg ) 1 m



3ε+ ρ 

ν Pm (g

2

 g + cg − Pm 0, 1 ,ε+ −ε,1−ε,∞ ρ  , τ 2

g  − Pm ) 1 ,1,∞ 2

 τ m ln g0, 1 , + ρ − ,  τ 2 ρ , m ρ −ε


459

where c = c(m, ρ, ρ , g). In virtue of (7.1.102) and the choice of Pm , it follows g

  τ +1 τ  ν  m m ln  ln 3ε+ ρ R g  1 g0, 1 , ≤c m + ρ −  m τ ρ  , τ 2 ρ , 2 −ε,1−ε,ε+ mρ m ρ −ε ≤c

τ +1 m ln g0, 1 , τ, ρ  −3ε 2 ρ , mρ −

(7.1.103) where c = c(m, ρ  , ρ , g). Lemma 7.1.7 yields, for γ = α − = (i.e., α + = 0, β + = 12 ),

1 2

and δ = β − = 0

   lnτ +1 n  lnτ +2 n  1 ≤ cf  1 Lμ  1 ≤ cf  1 A(f − Lμ f ) , n n 0, 2 ,∞ 2 ,0,ρ,τ 2 ,0,ρ,τ 2 ,0 nρ nρ ρ,τ 2 ,0

f ∈ C 1 , c = c(n, f ), where we also took into account Corollary 7.1.25. Since , μ AL n f ∈ Pn (see (7.1.89)), with the help of Lemma 7.1.22 we get   lnτ +2 n A(f  1  − Lμ  f ) ≤ cf , 1  n 0, 2 ,ρ +3ε,τ +2 2 ,0,ρ,τ nρ−ρ −3ε

0 < ρ  + 3ε ≤ ρ.

, − Lμ Thus, if we apply (7.1.103) to g = A(f = ρ  + 3ε, ρ  = ρ  , and n f ), ρ τ = τ + 2, then we obtain  ν    τ +3 R A(f   , − Lμ , − Lμ m A(f m n f ) 1 −ε,1−ε,ε+ρ  ,τ +2 ≤ c ln n f ) 0, 1 ,ρ  +3ε,τ +2 2

2

lnτ +2 n f  1 ,0,ρ,τ  2 nρ−ρ −3ε  ρ+δ ,τ  By assumption (A) we have, due to Proposition 5.3.13, K ∈ L C0, 1 , C 1 ρ . ≤ c lnτ +3 m

2 ,0

2

, : C 1 −→ C 1 Proposition 5.2.16 implies the compactness of the operator AK 0, 2  0,2 , −1 ∈ L C 1 . Moreover, and, due to assumption (B), the existence of (I + AK) 0, 2

ρ+δ ,τ +1 ρ ,τ , : Cρ+δ the operator A −→ C 1 ρ is continuous (see Proposition 5.2.14). 1 0, 2

2 ,0

Hence, with the help of (7.1.103) (set ρ = ρ + δρ , ρ  = ρ  , and τ = τ + 1) we can estimate  ν  R AK , u∗  1 m

2 −ε,1−ε,ρ

 +ε,τ +1

≤c

≤c

≤c

lnτ +2 m  mρ+δρ −ρ −3ε

lnτ +2 m  mρ+δρ −ρ −3ε

  AK u∗    A ,f 

0, 12 ,ρ+δρ ,τ +1

0, 12 ,∞

lnτ +2 m f  1 ,0,ρ,τ .  2 mρ+δρ −ρ −3ε

≤c

≤c

lnτ +2 m  mρ+δρ −ρ −3ε

lnτ +2 m  mρ+δρ −ρ −3ε

 ∗ u 

0, 12 ,∞

  A ,f  1 0, ,ρ,τ +1 2

460

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

, − Ln f ) − AKu , ∗ , we finally obtain Since u∗ − vn∗ = A(f μ

% & τ +2 τ +2  ν ∗  n m ln ln ∗ τ +3 R (u − v ) 1 ≤ c ln m ρ−ρ −3ε + ρ+δ −ρ  −3ε f  1 ,0,ρ,τ  m n 2 2 −ε,1−ε,ε+ρ ,τ +2 n m ρ     ≤ c n4ε+ρ −ρ + m4ε+ρ −ρ−δρ f  1 ,0,ρ,τ , 2

 

and the lemma is proved. If we define  (Kn u)(x) =

1

!

−1

" Lνn K(x, ·) (y)u(y)ν(y) dy ,

then, due to the algebraic accuracy of the Gaussian rule, Kn0 p = Kn p for all p ∈ Pn , where the quadrature operator Kn0 is defined in (7.1.87). If we additionally take into account (7.1.89), we see that Eq. (7.1.86) is equivalent to   , μ , μ I + AL (7.1.104) n Kn un = ALn f .  μ ∞ , n Kn (see also (7.1.48)). Moreover, the operator sequence AL is collectively n=1 , compact and strongly converging to AK in Cα + ,β + (see Proposition 7.1.8). Consequently, there is a positive constant c = c(n, un ) such that     I + AL  , μ (7.1.105) n Kn un α + ,β + ,∞ ≥ cun α + ,β + ,∞ ∀ un ∈ Pn . Lemma 7.1.28 If condition (A) with respect to K(x, y) is satisfied, then   K − Kn  C

→ C 1 ,0 0, 1 2

≤c

2

lnτ +1 n , nρ+δρ

where c = c(n). Proof For u ∈ C0, 1 and −1 < x < 1, we have 2

    1! √ ! ν " " 1 + y   K(x, y) − Ln K(x, ·) (y) u(y) 1−x dy   −1 1−y    0, 1  1 ,0  ≤ cLνn 0, 1 En 2 Kx2 u0, 1 ,∞ 2

2



1

−1



0, 12 

dy 1 − y2 1

and the assertion follows from Corollary 7.1.25 and Kx2 (A)).

≤ c ln n En

,0

∈C

ρ+δρ ,τ 0, 12

1 ,0

Kx2

 u0, 1 ,∞ , 2

(see condition  

7.1 Cauchy Singular Integral Equations on an Interval

461

Second Step Solve μ ν (A + Lμ m Km )wm = Lm (f − ARm vn ) ,

wm =

m−1 

γmj pjν

(7.1.106)

j =0

and set αnj = γmj , j = 0, . . . , m − 1. In view of (7.1.98), (7.1.97), and (7.1.95), this equation is equivalent to the system ν xmk

1 μ ν μ + K(xmj , xmk ) − xmj

where ωm =

!

ν ) wm (xmk

.



m

j,k=1

"m k=1

σm ωm = ηm

and γ m := γmj



and ηm =

m−1 j =0

= Uνm νm ωm ,

 μ  μ f (xmj ) − ARνm vn (xmj )

m j =1

2n + 1 2m + 1 μ = xn,d·j

system can be solved with O(m3 ) computational complexity. Since d = μ

μ

μ

ν ) with x is assumed to be an integer, the values f (xmj ) and K(xmj , xmk mj ν ν and xmk = x d(2k−1)+1 are already given. Hence, it remains to compute n,

. This

2

  μ ARνm vn (xmj )

m j =1

 T β = Rnm Uμ n n

" ! " " ! ! = 0 . . . 0 βnm . . . βn,n−1 T and Rnm ζk n = ζd·k m , which can with β k=1 k=1 n be done with O(n ln n) complexity. Lemma 7.1.29 If the assumptions (A) and (B) are fulfilled, then Eq. (7.1.106) is uniquely solvable for all sufficiently large m, where, for ρ  > 0 and sufficiently small ε > 0,  ν ∗    ε−ρ−δ ρ P u − w∗  1 + nε−ρ f  1 ,0,ρ,τ m m ,1,∞ ≤ c m 2

2

(7.1.107)

and  ν ∗  P u − w ∗  1 m

m

2 ,1,ρ

 ,0

   ≤ c mρ mε−ρ−δρ + nε−ρ f  1 ,0,ρ,τ 2

(7.1.108)

with the solutions u∗ and w∗ of (7.1.85) and (7.1.106), respectively, and a constant c = c(m, n, ρ  , f ). Proof Using (7.1.105) with α + = we get, for 0 < ε < 12 ,  ν ∗  P u − w ∗  1 m

m

2 ,1,∞

1 2

− ε and β + = 1 − ε (cf. also (A1) and (B)),

  ν ∗    ν ∗ ∗ ∗  , μ ≤ cPm u − wm ≤ c I + AL 1 m Km (Pm u −wm ) 1 −ε,1−ε,∞ . −ε,1−ε,∞ 2

2

462

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

Furthermore, since ν ∗ APm u

(7.1.89)

=

ν ∗ μ ∗ ν ∗ μ ∗ μ ν ∗ Lμ m APm u = Lm A(u − Rm u ) = Lm (f − Ku ) − Lm ARm u ,

ν u∗ = AL , m (f − Ku∗ ) − AL , m ARνm u∗ , and since Km Rνm u∗ = 0, we have i.e., Pm μ

μ

  ν ∗ ∗ ν ∗ ∗ ν ∗ , μ , μ , μ , μ I + AL m Km (Pm u − wm ) = Pm u − ALm Km u − ALm f + ALm ARm vn ν ∗ ∗ ∗ , μ , μ = AL m ARm (vn − u ) + ALm (Km − K)u ,

7.1.25, (7.1.91), and where vn∗ is the solution of (7.1.99). By (7.1.90), Corollary  Lemma 7.1.27, we can estimate, for 0 < ε < min 12 , ρ4 ,  μ  AL , m ARνm (vn∗ − u∗ ) 1

2 −ε,1−ε,∞

  ≤ c ln2 mARνm (vn∗ − u∗ )1−ε, 1 −ε,ε,τ +3 2

  ≤ c ln2 mRνm (vn∗ − u∗ ) 1 −ε,1−ε,ε,τ +2 2

  ≤ c ln2 m m4ε−ρ−δρ + n4ε−ρ f  1 ,0,ρ,τ 2

  ≤ c m5ε−ρ−δρ + n5ε−ρ f  1 ,0,ρ,τ 2

(7.1.109) and, taking into account (7.1.90), Corollary 7.1.25, and Lemma 7.1.28,   ∗ AL , μ m (Km − K)u 1

2 −ε,1−ε,∞

  lnτ +3 m   ≤ c ln2 m(Km − K)u∗  1 ,0,∞ ≤ c ρ+δ u∗ 0, 1 ,∞ ρ 2 2 m ≤c

lnτ +3 m mρ+δρ

f  1 ,0,ρ,τ ≤ c m5ε−ρ−δρ f  1 ,0,ρ,τ . 2

2

(7.1.110) By replacing 5ε by ε, the estimates (7.1.109) and (7.1.110) prove (7.1.107). Finally, the estimate (7.1.108) follows from (7.1.107) and (7.1.75).   The following proposition summarizes the results of Lemma 7.1.27 and Lemma 7.1.29. Proposition 7.1.30 Let (A) and (B) be satisfied. Then, for all sufficiently large m, n−1 n−1   ∗ ν ∗ ν αnj pj is well defined by the solutions vn∗ = βnj pj the polynomial u∗n = j =0 ∗ = and wm

m−1  j =0

j =0

∗ ν ∗ = γ∗ , γmj pj of (7.1.99) and (7.1.106), respectively, where αnj mj

7.1 Cauchy Singular Integral Equations on an Interval

463

∗ = β ∗ , j = m, . . . , n − 1. Moreover, for sufficiently j = 0, . . . , m − 1, and αnj nj small ε > 0,

 ∗     un − u∗  1 ,1,∞ ≤ c mε−ρ−δρ + nε−ρ f  1 ,o,ρ,τ 2

2

(7.1.111)

as well as, for 0 < ρ  < ρ − ε,     ∗    un − u∗  1 ,1,ρ  ,0 ≤ c mε+ρ −ρ−δρ + nε+ρ −ρ f  1 ,o,ρ,τ , 2

2

(7.1.112)

where c = c(m, n, ρ  , f ) and u∗ is the solution of (7.1.84) respective (7.1.85). In the following corollary we offer the consequences of the above considerations concerning the complexity of the fast algorithm and its error estimates (cf. also Proposition 7.1.21). Corollary 7.1.31 Let the assumptions of Proposition 7.1.30 be satisfied. If δρ ≥ 2ρ and the integers m and n are chosen in such a way that m3 ∼ n, then the algorithm is of complexity O(n ln n) and  ∗   un − u∗  1 ,1,∞ ≤ c nε−ρ f  1 ,0,ρ,τ , 2

2

 Moreover, if max 0, ρ − ε −

δρ 2



(7.1.113)

< ρ  < ρ − ε, then

  ∗   un − u∗  1 ,1,ρ  ,0 ≤ c nε+ρ −ρ f  1 ,0,ρ,τ , 2

2

c = c(m, n, f ) .

c = c(m, n, ρ  , f ) .

(7.1.114)

Proof Since the first step of the algorithm can be realized with O(n ln n) and the second with O(m3 ) computational complexity, the complete algorithm is of O(n ln n) complexity if m3 ∼ n. By (7.1.111), we get 

  ∗ max  un − u∗  1 ,1,∞ ≤ c n

ε−ρ−δρ 3

2

ε−ρ−δρ 3

and the first estimate follows from estimate is a consequence of (7.1.112).

 ,ε−ρ

f  1 ,0,ρ,τ , 2

< ε − ρ. Analogously, the second  

Remark 7.1.32 We note that the first error estimate in Corollary 7.1.31 yields almost the same convergence rate as in Proposition 7.1.26 applied with α + = 0, β + = 12 , α − = γ = 12 , and β − = δ = 0. But, the norm used to measure the error in (7.1.113) is weaker than the norm in (7.1.94). The same holds true if we compare (7.1.114) with an estimate, which we obtain by applying Lemma 7.1.22 to (7.1.94).

464

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

7.2 Hypersingular Integral Equations This section is devoted to the investigation of collocation and collocation-quadrature methods for the numerical solution of singular integro-differential equations of Prandtl’s type g(x)v(x) +

.  1  b 1 v(y) dy d av(x) + + h(x, y)v(y) dy = f (x) , dx π −1 y − x −1

(7.2.1)

−1 < x < 1, with the additional condition v(−1) = v(1) = 0 ,

(7.2.2)

where a, b ∈ R are given constants and g, f : [−1, 1] −→ C, as well as h : [−1, 1]2 −→ C are given functions. First, we will study the mentioned methods in weighted Sobolev spaces L2,s α,β defined in (2.4.1). After that, we consider weighted ρ,q Besov spaces Cγ ,δ defined at page 27 (cf. also Sect. 7.1), which are subspaces of weighted spaces of continuous functions and which give us the possibility to obtain convergence results in uniform norms. By following the considerations in [26], let us assume that the kernel function h(x, y) in (7.2.1) is of the form h(x, y) = h1 (x, y) + h2 (x, y) ln |y − x| + h3 (x, y)|y − x|−η ,

(7.2.3)

where 0 < η < 1 and the functions hj (x, y) fulfil certain smoothness conditions specified later on. According to (7.2.3) we introduce the operator H = H1 + H2 + H3 , where  (Hu)(x) =

−1

 (H1 u)(x) =

1

−1

 (H3 u)(x) =

1

−1

 (H2 u)(x) =

1

1

−1

h(x, y)v α,β (y)u(y) dy ,

h1 (x, y)v α,β (y)u(y) dy , (7.2.4) h2 (x, y) ln |y − x| v α,β (y)u(y) dy , h3 (x, y)|x − y|−η v α,β (y)u(y) dy ,

7.2 Hypersingular Integral Equations

465

−1 < x < 1. Furthermore, let DA be the operator introduced in (5.5.1) in case α + β = 1, i.e., λ = 0, κ = −1, and DAu =

∞ 



(n + 1) u, pnα,β α,β pnβ,α

∀ u ∈ L2α,β

(7.2.5)

n=0

due to (5.5.2). Let us collect some properties of these operators: −→ L2,s (O1) For each s ≥ 0, the operator DA : L2,s+1 α,β β,α is a continuous isomorphism (see Corollary 5.5.3). (O2) If h1 (x, ·) ∈ L2α,β for almost all x ∈ (−1, 1) and h1 (·, y) ∈ L2,s γ ,δ uniformly  2 2,s  with respect to y ∈ [−1, 1], then H1 ∈ L Lα,β , Lγ ,δ (see Lemma 2.4.6). To investigate the operator H2 we introduce the operator (cf. also (5.4.27)) 2 u)(x) = a (H



x −1

h2 (x, y)u(y)v α,β (y) dy −

b π



1 −1

h2 (x, y) ln |y − x| v α,β (y)u(y) dy ,

(7.2.6) −1 < x < 1. Set h2x (x, y) :=

∂h2 (x, y) . ∂x

Lemma 7.2.1 If h2 , h2x : [−1, 1]2 −→ C possess continuous partial derivatives   2 ∈ L L2,s , L2,s+1 for 0 ≤ s ≤ r. up to order r ∈ N0 , then H α,β −α,−β Proof We have 2 = H (1) + T1 + AχI , DH 2

(7.2.7)

where  (1)  u (x) = a H 2



  b T1 u (x) = π

x

−1



h2x (x, y)u(y)v α,β (y) dy −

1 −1

h2 (x, y)u(y)v α,β (y) dy

b π



1 −1

h2x (x, y) ln |y − x| u(y)v α,β (y) dy ,

h2 (x, y) − h2 (y, y) , h2 (x, y) = with y−x

χ(x) = h2 (x, x), and A is defined in (5.2.11). Due to our assumptions, h2x , h2 : [−1, 1]2 −→ C and χ : [−1, 1] −→ C are continuous functions. It follows that     ϕDH 2 u 2 u ≤ DH ≤ cuα,β −α,−β −α,−β

466

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

for all u ∈ L2α,β . Indeed, 

1

−1

  v −α,−β (x) 

2  h2 (x, y)u(y)v α,β (y) dy  dx

1

−1

 ≤c

1 −1

v −α,−β (x) dx

-

1 −1

.2 |u(y)|v α,β (y) dy

≤ cu2α,β

as well as 

1 −1

v

−α,−β

  (x) 

x −1

 ≤c

2   α,β h2x (x, y)u(y)v (y) dy 

1

−1

v

−α,−β

- (x) dx

1 −1

dx .2

|u(y)|v

α,β

(y) dy

≤ cu2α,β .

(1) Moreover, to the part of the operator  H2 with the logarithmic singularity we can apply Example 2.4.3 with η ∈ 0, 12 and p = q = 2. For the operator AχI we have to refer to Corollary 5.2.6.  2 ∈ L L2 , L2 Furthermore, H α,β −α,−β due to Example 2.4.3. Hence, Lemma 2.4.7   2 ∈ L2 , L2,1 implies H −α,−β , and the lemma is proved for r = 0 with having regard α,β to Remark 2.4.5. Assume that the assertion of the lemma is valid for r ≤ m and that the conditions of the lemma are fulfilled for r = m + 1. Then (7.2.7) holds true   (1) , T1 ∈ L L2,m , L2,m+1 (concerning T1 we refer to Lemma 2.4.6 and with H α,β −α,−β 2  2,m+1 Lemma 2.4.7) and A ∈ L L2,m+1 α,β , L−α,−β (see Corollary 5.2.8). Moreover, in   view of Corollary 2.4.11, χI ∈ L L2,m+1 . Applying Lemma 2.4.7 we obtain α,β   H 2 u

−α,−β,m+2

    m+2 m+2  ϕ 2 u 2 u H ≤ c H + D −α,−β,m+1 −α,−β       (1) + T1 + AχI u ≤ c uα,β,m + ϕ m+1 Dm+1 H 2 −α,−β    (1)   + T1 + AχI u ≤ c uα,β,m +  H 2 −α,−β,m+1 ≤ c uα,β,m+1 ,

u ∈ L2,m+1 , α,β  

and the lemma is proved by induction. Applying Lemma 7.2.1 in case α = β = following.

1 2

(i.e., a = 0, b = −1) we get the

7.2 Hypersingular Integral Equations

467

(O3) If h2 , h2x : [−1, 1]2 −→ C possess continuous partial derivatives up to order r ∈ N0 , then the operator H2 , defined in (7.2.4) for α = β = 12 , belongs to   2,s+1 for 0 ≤ s ≤ r. L L2,s ϕ , Lϕ Now we turn to the operator H3 in (7.2.4). For this, set α ± = max {0, ±α} and β ± = max {0, ±β}, and recall the definition of Cr,λ = Cr,λ [−1, 1] on page 24. Lemma 7.2.2 Let h3 : [−1, 1]2 −→ C be continuous. If h3 (·, y) belongs to C0,1−η 2,η uniformly with respect to y ∈ [−1, 1], then the operator H3 : L2,s γ ,δ −→ Lγ  ,δ  is

compact for s > 12 , γ = 2α + − 12 , δ = 2β + − 12 , γ  > 2α − − 1, δ  > 2β − − 1, and 0 ≤ η < 1 − η.

Proof We have h3 (x, y) =

H (x, y) − H (y, y) with x−y

H (x, y) = h3 (x, y)(x − y)|x − y|−η and H (·, y) belonging to C0,1−η (cf. Exercises 2.4.19 and 2.4.20). Hence, we can apply Exercise 3.1.13 together with Proposition 5.3.14 and get H3 ∈  1−η,1  L Cα + ,β + , Cα − ,β − . By Lemma 2.4.26 we have the continuous embedding L2,s ⊂ Cα + ,β + . For ε > 0 with η + ε < 1 − η, the compact embedding γ ,δ 1−η,1

η +ε,0

Cα − ,β − ⊂ Cα − ,β − follows from Proposition 2.4.28,(c). Moreover, by Lemma 2.4.27 η +ε,0

2,η

we obtain the continuous embedding Cα − ,β − ⊂ Lγ  ,δ  . Thus, the operator η +ε,0

2,η

H3 : L2,s γ ,δ −→ Cα + ,β + −→ Cα − ,β − −→ Cα − ,β − −→ Lγ  ,δ  1−η,1

 

is compact. As a consequence of Lemma 7.2.2 we get, in case of α = β = 12 , the following.

(O4) If η ∈ (0, 1), h3 (·, y) ∈ C0,1−η uniformly with respect to y ∈ [−1, 1], and 2,η 0 ≤ η < 1 − η, then the operator H3 : L2,s defined in (7.2.4) is ϕ −→ Lϕ 1 compact for s > 2 .

7.2.1 Collocation and Collocation-Quadrature Methods Having in mind the previous considerations, here we deal with the equation   2 + H3 u = f ∈ L2,s , Bu := Mψ + DA + H1 + H β,α

u ∈ L2,s+1 α,β ,

(7.2.8)

  instead of (7.2.1),(7.2.2), where Mψ u (x) = ψ(x)u(x), DA is given by (5.5.1) 2 by (7.2.6). or (7.2.5), H1 and H3 by (7.2.4) and H

468

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

A possible collocation method for (7.2.8) consists in looking for an approximate solution un ∈ Pn by solving the equation β,α Lβ,α n Bun = Ln f ,

(7.2.9)

where α and β are defined at the beginning of Sect. 7.2 (cf. the definition of DA in (5.5.1)). In view of relation (7.2.5), the collocation method (7.2.9) is equivalent to  " ! 2 + H3 un = Lβ,α Mψ + H1 + H Bn un := DA + Lβ,α n n f .

(7.2.10)

of (7.2.10) belongs to Pn . Again due to relation (7.2.5), every solution un ∈ L2,s+1 α,β   2,s Thus, we can equivalently consider Eq. (7.2.10) in the pair of spaces L2,s+1 α,β , Lβ,α . Let us assume: 2,1 (P0) For f ≡ 0 Eq. (7.2.8) has only the trivial solution in Lα,β . (P1) The continuous function ψ : (−1, 1) −→ C belongs to Crϕ for some integer r ≥ 0, where Crϕ is defined in Sect. 2.4.1. (P2) The function h1 (x, ·) belongs to L2α,β for almost all x ∈ (−1, 1) and h1 (·, y) 2,s

belongs to Lβ,α0 uniformly with respect to y ∈ [−1, 1] for some s0 ≥ 0, which has to be specified later on. (P3) The functions h2 and h2x possess continuous partial derivatives up to order r0 , where h2x =

∂h2 (x, y) . ∂x

(P4) The function h3 : [−1, 1]2 −→ C is continuous, where h3 (·, y) belongs to C0,1−η uniformly with respect to y ∈ [−1, 1] for some η ∈ (0, 1). 2 , and H3 as well In all cases, which we will consider here, the operators H1 , H 2,s+1 2,s 2,1 as Mψ are compact both from Lα,β into Lβ,α and from Lα,β into L2β,α . This is a consequence of (O2), Lemma 7.2.1, Lemma 7.2.2, Corollary 2.4.11, and Lemma 2.4.7. Thus, in view of (O1), Remark 2.4.5, and assumption (P0), the +1 2,t operator B : L2,t α,β → Lβ,α is invertible for 0 ≤ t ≤ s and Eq. (7.2.8) possesses a unique solution u∗ ∈ L2,s+1 α,β .

Proposition 7.2.3 Let s > 12 , ψ ≡ 0, h3 ≡ 0, f ∈ L2,s β,α , and assume that (P0), (P2), and (P3) are fulfilled for s0 = s and some r0 ≥ max {0, s − 1}. Then, for all sufficiently large n, Eq. (7.2.10) is uniquely solvable, and the solution u∗n converges ∗ in the norm of L2,s+1 α,β to the unique solution u of (7.2.8). Moreover,     ∗ u − u∗  ≤ c nt −s u∗ α,β,s+1 , n α,β,t +1

0≤t ≤s,

(7.2.11)

7.2 Hypersingular Integral Equations

469

where the constant c does not depend on n, t, and f . 3 : L2,s+1 −→ L2,s is compact, it follows Proof Since H1 + H α,β β,α lim Bn − BL2,s+1 →L2,s = 0

n→∞

α,β

(7.2.12)

β,α

taking into account Lemma 3.2.39,(a) and Corollary 2.3.12. With the help of (O2), Lemma 7.2.1, and Lemma 3.2.39,(b), as well as the continuous embedding 2,1 2,1 ⊂ Lβ,α we obtain L−α,−β         −1    (Bn − B)uβ,α = Lβ,α H2 u β,α,1 n − I ) H1 − H2 u β,α ≤ c H1 uβ,α,s + n   ≤ c n−s + n−1 uβ,α ≤ c n−t0 uα,β,1 ,

i.e., Bn − BL2,1 →L2 α,β

β,α

≤ c n−t0 ,

where t0 = min {s, 1}. Together with (7.2.12) and  this implies the  Remark 2.4.5, 2,t 2,t +1 −1 for 0 ≤ t ≤ s and existence and uniform boundedness of Bn ∈ L Lβ,α , Lα,β for all sufficiently large n. Consequently, in view of !   "  β,α 2 u∗ , u∗n − u∗ = Bn−1 Lβ,α H1 + H n f − f + I − Ln we have    ∗    ∗ t −s u − u∗    f  u ≤ c n + H + H 1 2 β,α,s n α,β,t +1 β,α,s taking into account Lemma 3.2.39,(b). This yields (7.2.11).

 

(i.e., a = 0, b = −1) Remark 7.2.4 In case ψ ≡ 0 and α = β = Proposition 7.2.3 remains true if we additionally assume that (P1) is fulfilled with r ≥ s. 1 2

2,s 2,s Proof We remark that, in the present case, we have L2,s α,β = Lβ,α = Lϕ . With 2,t +1 the help of Corollary 2.4.12 we see the compactness of Mψ : Lϕ −→ L2,t ϕ for    ϕ −1   0 ≤ t ≤ s. Moreover, since r ≥ 1, we have Mψ − Ln Mψ u ϕ ≤ c n uϕ,1

for all u ∈ Lϕ2,1 .

 

Remark 7.2.5 In case ψ ≡ 0, h3 ≡ 0 and α = β = 12 (i.e., a = 0, b = −1) Proposition 7.2.3 remains true for 12 < s < 1 − η, if we additionally assume that 0 < η < 12 and (P1) with r ≥ s as well as (P4) are fulfilled.

470

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

Proof At first, we refer to (O4). Furthermore, if we apply Lemma 7.2.2 with η = s   ϕ 1 + +   and α = β = 2 , we get H3 − Ln H3 L2,1 →L2 ≤ c n−s .   ϕ

ϕ

γ ,δ

By Qn we will refer to the application of the Gaussian rule with respect to the Jacobi weight v γ ,δ (x), i.e., 

1

−1

f (x)v

γ ,δ

(x) dx ≈

γ ,δ Qn (f )

:=

n 

γ ,δ f (xnk ) ,

γ ,δ λnk

 =

k=1

1 −1

γ ,δ

nk (x)v γ ,δ (x) dx .

We can approximate the operator H1 by (H1n u) (x) =

n 

α,β

α,β

α,β

λnk h1 (x, xnk )u(xnk ) = Qα,β n (h1 (x, ·)u) .

k=1

2 and H3 , we use product integration rules of the To approximate the operators H form  a

x −1

u(y)v α,β (y) dy −

b π



1 −1

ln |y − x| v α,β (y) dy ≈

n 

α,β α,β λnk (x)u(xnk )

k=1

(7.2.13) and 1 π



1 −1

|y − x|−η u(y)v α,β (y) dy ≈

n 

α,β

α,β

ωnk (x)u(xnk ) ,

k=1

where α,β λnk (x) = a



x −1

α,β nk (y)v α,β (y) dy

b − π



1 −1

α,β

nk (y) ln |y − x| v α,β (y) dy

and α,β



ωnk (x) =

1 −1

|y − x|−η nk (y)v α,β (y) dy . α,β

2 and H3 leads to The application of these quadrature rules to the operators H approximating operators given by 

n   α,β α,β α,β 2n u (x) = λnk (x)h2 (x, xnk )u(xnk ) H k=1

(7.2.14)

7.2 Hypersingular Integral Equations

471

and n    α,β α,β α,β ωnk (x)h3 (x, xnk )u(xnk ) . H3n u (x) =

(7.2.15)

k=1

The respective collocation-quadrature method consists in solving the equation  " ! 2n + H3n un = Lβ,α (7.2.16) Mψ + H1n + H DA + Lβ,α n n f . Because of relation (7.2.5) a solution un ∈ L2α,β of this equation belongs to Pn . Due to the algebraic accuracy of the Gaussian rule, for such un ∈ Pn , we have   un Lα,β (H1n un ) (x) = Qα,β n n [h1 (x, ·)]  =

1 −1

  α,β ,1n un (x) . un (y)Lα,β (y) dy =: H n [h1 (x, ·)] (y)v (7.2.17)

Thus, the collocation-quadrature equation (7.2.16) is equivalent to !  " ,1n + H 2n + H3n un = Lβ,α n un := DA + Lβ,α B Mψ + H n n f .

(7.2.18) q

Lemma 7.2.6 Let γ , δ > −1 and, for some integer q ≥ s > 12 , h2 (x, ·) ∈ Cϕ uniformly with respect to x ∈ [−1, 1]. Then, for 0 ≤ t ≤ s and u ∈ L2,s α,β ,     γ ,δ  2 u ≤ c mt n−s uα,β,s , Lm H2n − H  γ ,δ,t

where c = c(m, n, t, u). Proof Taking into account the algebraic accuracy of the Gaussian rule, we obtain  γ ,δ    Lm H 2 u2 2n − H

γ ,δ,t

 γ ,δ    2 u2 2n − H ≤ m2t Lm H γ ,δ = m2t

m  j=1



≤ 2m2t

b π

  γ ,δ % n &  xmj  γ ,δ  γ ,δ γ ,δ α,β α,β α,β λmj a h2 (xmj , xnk )u(xnk ) nk (τ ) − h2 (xmj , τ )u(τ ) v α,β (τ dτ  −1 k=1

1

% n 

−1

k=1



m 

2 &     γ ,δ   γ ,δ α,β α,β α,β γ ,δ h2 (xmj , xnk )u(xnk ) nk (τ ) − h2 (xmj , τ )u(τ ) ln xmj − τ  v α,β (τ ) dτ  

2 γ ,δ  γ ,δ  λmj (Lα,β n − I )[h2 (xmj , ·)u] α,β ·

j=1

 · a2 π

1

−1

v α,β (τ ) dτ +

b2 π



1

−1

.     γ ,δ ln2 xmj − τ  v α,β (τ ) dτ .

472

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

With the help of Lemma 3.2.39,(b), Corollary 2.4.12, and the uniform boundedness of 

1 −1

ln2 |x − τ | v α,β (τ ) dτ ,

x ∈ [−1, 1] ,  

the assertion follows.

Lemma 7.2.7 Assume that 0 < η < 12 and, for some integer q ≥ s > 12 , let q h3 (x, ·) ∈ Cϕ uniformly with respect to x ∈ [−1, 1]. Then, for 0 ≤ t ≤ s and 2,s u ∈ Lα,β ,  γ ,δ  Lm (H3n − H3 )u

γ ,δ,t

≤ c mt n−s uα,β,s ,

c = c(m, n, t, u) .

Proof As in the proof of Lemma 7.2.6, we see that  γ ,δ  Lm (H3n −H3 )u2

2t γ ,δ,t ≤ m

 1 α,β m   α,β ! "2 v (τ ) dτ γ ,δ  γ ,δ   λmj Ln −I h3 (xmj , ·)u α,β 2η .  γ ,δ   −1 x j =1 mj − τ

The assumption on η as well as the inequalities α > 0 and β > 0 guarantee the uniform boundedness of  1 1 v α,β (τ ) dτ   , x ∈ [−1, 1] . π −1 x − τ 2η Thus, the assertion follows from Lemma 3.2.39,(b) and Corollary 2.4.12.

 

Proposition 7.2.8 Let s > 12 , ψ ≡ 0, h3 ≡ 0, and f ∈ L2,s β,α . Assume that (P0), (P2), and (P3) are fulfilled for s0 = s and r0 ≥ max {0, s − 1}. Moreover, suppose q that h1 (x, ·) ∈ L2,s α,β and h2 (x, ·) ∈ Cϕ uniformly with respect to x ∈ [−1, 1] and for some integer q ≥ s. Then, for all sufficiently large n, Eq. (7.2.18) is uniquely +1 solvable, and the solution u∗n converges in the norm of L2,t α,β , 0 ≤ t < s, to the ∗ unique solution u of (7.2.8), where  ∗    u − u∗  ≤ c nt −s u∗ α,β,s+1 , c = c(n, t, u∗ ) . (7.2.19) n α,β,t +1 2 , Proof Referring to Lemmata 7.1.3, 7.2.6, and 3.2.39 we see that, for H = H1 + H ,1n + H 2n , and t0 := min {1, s}, Hn = H  β,α  L Hn − H n

2 L2,1 α,β −→Lα,β

   ≤ cLβ,α n Hn − H

2,t

Lα,β0 −→L2α,β

    ≤ c Lβ,α n (Hn − H) ≤ c n−t0 .

2,t Lα,β0 −→L2α,β

   + Lβ,α n H−H

 2,t Lα,β0 −→L2α,β

7.2 Hypersingular Integral Equations

473

2,t

2,t

Note that the operator H : Lα,β0 −→ Lβ,α0 is continuous because of the following sequences of continuous mappings and embeddings (see (O1) and Lemma 7.2.1), H1

2,t0 2,s 2,s 2 0 L2,t α,β ⊂ Lα,β −→ L−α,−β ⊂ Lβ,α ⊂ Lβ,α

and 2,t

2 H

2,t

2,1 2,1 Lα,β0 ⊂ L2α,β −→ L−α,−β ⊂ Lβ,α ⊂ Lβ,α0 .

  n − B  2,1 Consequently, lim B = 0, which implies the uniform boundedLα,β →L2β,α n→∞  2  −1 n ∈ L L , L2,1 for all sufficiently large n. With ness of the inverse operators B α,β β,α the help of this result and Lemmata 7.1.3, 7.2.6, and 3.2.39,(b) we can estimate  ∗      ∗ t ∗ α,β ∗  u − Lα,β u∗  ≤ nt u∗n − Lα,β n n n u α,β,1 ≤ c n Bn (un − Ln u ) α,β α,β,t+1        β,α α,β ∗  ∗ α,β ∗     ≤ cnt Lβ,α n f − f α,β + (H − Ln Hn )Ln u α,β + B (u − Ln u ) α,β   ≤ c nt−s u∗ α,β,s+1 ,

c = c(n, t, u∗ ) .

Thus, the estimate (7.2.19) is proved, if we remember u∗ Lemma 3.2.39,(b).

∈ L2,s+1 and α,β  

Proposition 7.2.9 In case of ψ ≡ 0 and α = β = 12 (i.e., a = 0, b = −1) Proposition 7.2.8 remains true if we additionally assume that (P1) with r ≥ s is satisfied. Proof With t0 = min {1, s} (cf. the proof of Proposition 7.2.8) we have, due to Lemma 3.2.39,(b) and Corollary 2.4.12,   12 , 12 Ln Mψ − Mψ  2,1 L

2 1 1 →L 1 , 1 2,2 2 2

≤ c n−t0 .

The proof of the estimate (7.2.19) goes along the same lines as in the proof of Propo1 1

,

1 1

,

sition 7.2.8, if we additionally take into account the relation Ln2 2 Mψ Ln2 2 u∗ = 1 1

,

Ln2 2 Mψ u∗ and again apply Corollary 2.4.12.

 

Proposition 7.2.10 In case of ψ ≡ 0, h3 ≡ 0, and α = β = 12 (i.e., a = 0, b = −1) Proposition 7.2.8 remains true if we additionally assume that h3 (x, ·) ∈ C1ϕ uniformly with respect to x ∈ [−1, 1], and that (P1) with r ≥ s and (P4) with 0 < η < 12 are valid.

474

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

Proof With the help of Lemma 7.2.7 we get  12 , 12   Ln H3n − H3  2,1 L

2 1 , 1 →L 1 , 1 2 2 2 2

≤ c n−s .

Moreover, due to (O4) and Lemma 3.2.39,(b), we also have   12 , 12 Ln H3 − H3 

2 L2,1 1 1 →L 1 ,

2 2

,1 2 2

≤ c n−s .

The proof of the estimate (7.2.11) is the same as in the proof of Proposition 7.2.8.  

7.2.2 A Fast Algorithm In this section we consider Eq. (7.2.1) in case of g(x) = γ0 (1 − x 2 )− 2 and a = 0, b = −1 (i.e., α = β = 12 ) as well as h(x, y) = h1 (x, y)− γπ1 ln |x −y|, where γ0 , γ1 √ are certain constants. Hence, using the notations ϕ(x) = 1 − x 2 and L21 1 = L2ϕ 1

as well as

2,2

L2,s 1 1 2,2

=

L2,s ϕ

and setting v(x) = ϕ(x)u(x) in (7.2.1), the hypersingular

integral equation under consideration is of the form γ0 u(x) −

d 1 dx π



1 −1

u(y)ϕ(y) dy + y−x



! " γ1 h1 (x, y) + ln |x − y| u(y)ϕ(y) dy = f (x) , π −1 1

(7.2.20) −1 < x < 1. We write this equation as (A + H1 )u = f ,

(7.2.21)

where A = Mγ0 + V + γ1 W, (V u)(x) = −

d 1 dx π



1 −1

u(y)ϕ(y) dy , y−x

(W u)(x) =

1 π



1

−1

ln |x − y| u(y)ϕ(y) dy ,

(7.2.22) and  (H1 u)(x) =

1 −1

h1 (x, y)u(y)ϕ(y) dy .

(7.2.23)

7.2 Hypersingular Integral Equations

475

  for We investigate the operator equation (7.2.21) in the pair of spaces L2,s+1 , L2,s ϕ ϕ 1 some s > 2 and make the following assumptions: (Q0) For f ≡ 0, Eqs. (7.2.21) and Au = f possess only the trivial solution in Lϕ2,1 . (Q1) For the kernel function h1 (x, y) and some δ ≥ 0, we have h1 (·, y) ∈ L2,s+δ ϕ uniformly with respect to y ∈ [−1, 1] and h1 (x, ·) ∈ Ls+δ uniformly with ϕ respect to x ∈ [−1, 1]. (Q2) The right-hand side f in (7.2.21) belongs to L2,s ϕ . We construct a fast algorithm for the numerical solution of Eq. (7.2.21) basing on the collocation-quadrature method studied in Sect. 7.2.1 and following the ideas of Sect. 7.1.3. First of all, let us list some results of the previous section, which are of interest here.   (see Lemma 2.4.6). In particular, (O5) The operator H1 belongs to L L2ϕ , L2,s+δ ϕ 2,s+1 2,s −→ Lϕ is compact (see Lemma 2.4.7). H1 : Lϕ −→ L2,s (O6) The operator Mγ0 : L2,s+1 ϕ ϕ is compact (see Lemma 2.4.7). 2,t 2,t (O7) The operator W : Lϕ −→ Lϕ +1 is continuous for all t ≥ 0. This is a consequence of Lemma 7.2.1 (in case of a = 0, b = −1) and the continuous embedding L2,s1 1 ⊂ L2,s ϕ , which follows using the equivalent norms in − 2 ,− 2

these spaces given  by Lemma  2.4.4. Hence, the operator W is compact in the 2,s (see Lemma 2.4.7). pair of spaces L2,s+1 , L ϕ ϕ (O8) The invertibility of the operator V : L2,s+1 −→ L2,s ϕ ϕ (see Corollary 5.5.3 in case of a = 0, b = −1) and the properties (O5), (O6), and (O7) together with assumption (Q0) yield the invertibility of the operators A : L2,s+1 −→ L2,s ϕ ϕ 2,s+1 2,s and A + H1 : Lϕ −→ Lϕ (see Corollary 2.5.2). The operator A + H1 is approximated by the collocation-quadrature method analogous to that one investigated in the previous section (cf. (7.2.18)). Hence, we consider the equation (An + H1n )un = Lϕn f , 1 1

(7.2.24)

,1n , and where Ln = Ln2 2 , An = Mγ0 + V + γ1 Ln W, Hn = Ln H ϕ

,

,1n u)(x) = (H

ϕ



1 −1

ϕ

Lϕn [h1 (x, ·)](y)u(y)ϕ(y) dy .

Let us reformulate Proposition 7.2.9 for the case under consideration here. Proposition 7.2.11 Let s > 12 and assume that (Q0), (Q1), and (Q2) are fulfilled. Then, for all sufficiently large n, Eq. (7.2.24) is uniquely solvable, and the error estimate     ∗  u − u∗  ≤ c nt −s u∗ ϕ,s+1 (7.2.25) n ϕ,t +1

476

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

holds true for its solution u∗n , where .ϕ,s = . 1 , 1 ,s , 0 ≤ t ≤ s , c = c(n, t, u∗ ), 2 2

and u∗ ∈ L2,s+1 is the unique solution of Eq. (7.2.21). ϕ Proof If we to take into account the relation  ϕ    L W − W u n ϕ,t

Lemma 3.2.39(b)



c nt−s W uϕ,s ≤ cuϕ,s+1 ,

0≤t ≤s,

u ∈ L2,s+1 , ϕ

 

we can proceed as in the proofs of Propositions 7.2.8 and 7.2.9.

Since every solution un ∈ L2,s+1 of (7.2.24) belongs Pn , we have, due to the ϕ algebraic accuracy of the Gaussian rule, (H1n un )(x) =

n 

ϕ

ϕ

ϕ

λnk h1 (x, xnk )un (xnk )

k=1

with ϕ λnk

1 1 2,2

= λnk

!

ϕ

ϕ(xnk ) = n+1

"2 =

π kπ sin2 , n+1 n+1

ϕ

xnk = cos

kπ , n+1

(7.2.26)

k = 1, . . . , n. We remember the formulas in (5.4.4), which one can use to solve the following exercise. Exercise 7.2.12 Show that, for the operator W defined in (7.2.22), the relations ⎧√ ⎫ 1 σ σ ⎪ ⎪ ⎪ ⎪ p 2 ln 2 p (x) − (x) : n = 0 , ⎨ ⎬ 0 2 1 2 ϕ (Wpn )(x) = − ⎪ 2⎪ ⎪ ⎪ σ ⎩ 1 pnσ (x) − 1 pn+2 (x) : n ∈ N , ⎭ n n+2 − 12 ,− 12

are valid, where pnσ = pn

ϕ

1 1 ,2

and pn = pn2

−1 < x < 1,

(see (5.1.21) and (5.1.22)).

As a consequence of 1 ϕ p0σ (x) = √ p0 (x) , 2

p1σ (x) =

1 ϕ p (x) , 2 1

pnσ (x) =

" 1! ϕ ϕ p (x) − pn−2 (x) , n = 2, 3, . . . 2 n

and Exercise 7.2.12, we deliver . 1 1 ϕ 1 1 ϕ ϕ ϕ ϕ Wp0 = − 2 ln 2 + p + · p2 =: ω00 p0 + ω02 p2 , 4 2 0 4 2 ϕ Wp1

  1 1 1 ϕ 1 ϕ ϕ ϕ =− 1+ p1 + · p3 =: ω11 p1 + ω13 p3 , 4 3 4 3

(7.2.27)

(7.2.28)

7.2 Hypersingular Integral Equations

Wpnϕ



1 ϕ 1 p = − 4n n−2 4 =:

ϕ ωn,n−2 pn−2

477

 1 1 1 ϕ + pnϕ + p n n+2 4(n + 2) n+2 ϕ + ωn,n+2 pn+2

+ ωnn pnϕ

(7.2.29)

.

Furthermore, we set ωj k = 0

for |j − k| = 0 or |j − k| = 2 .

(7.2.30)

Using the representation ϕ

ϕ

nk (x) = λnk

n−1 

ϕ

ϕ

ϕ

pj (xnk )pj (x)

(7.2.31)

j =0

of the fundamental Lagrange interpolation polynomials (cf. (3.2.31)), from Exercise 7.2.12 we obtain the formula  ϕ ϕ  λnk (x) = W nk (x) ⎡ = λnk ⎣p0 (xnk ) ϕ

ϕ



ϕ

⎤     n−1 1 1 ln 2 σ 1 σ ϕ ϕ σ √ p0 (x) − p2σ (x) + pj (xnk ) pj (x) − pj+2 (x) ⎦ 4 2j 2(j + 2) 2 j=1 1 1

,

2 2 for the weights (equal to λnk (x) in (7.2.13) in case of a = 0, b = −1) in the product integration rule

1 π



1

−1

ln |x − y| u(y)ϕ(y) dy ≈

n 

ϕ ϕ λnk (x)u(xnk ) .

k=1

Moreover, due to (5.5.3) and (7.2.31), ϕ

ϕ

V nk = λnk

n−1 

ϕ

ϕ

ϕ

pj (xnk )(j + 1)pj .

j =0

Thus, if we seek the approximate solution of Eq. (7.2.21) in the form un (x) =

n 

ϕ

ξnk nk (x) ,

(7.2.32)

k=1

then (7.2.25) can be written as (γ0 In + Vn n + γ1 Wn + H1n n ) ξ n = ηn

(7.2.33)

478

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

 "n ! ϕ with ξ n = ξnk k=1 , ηn = f (xnj )  In = δj k

Un =



n j,k=1

n−1, n ϕ ϕ pj (xnk ) j =0,k=1

%/ =

j =1

! " , and Dn = diag 1 · · · n ,

 ϕ ϕ Wn = λnk (xnj )

Vn = UTn Dn Un ,

,

n

(j +1)kπ 2 sin n+1 kπ π sin n+1

 ϕ ϕ H1n = h1 (xnj , xnk )

n j,k=1

,

&n−1, n ,

n j,k=1

,

"n ! n = diag λϕnk k=1 .

j =0,k=1

(7.2.34) n 

ϕ ϕ ϕ ϕ ϕ The orthogonality relations δj k = pk , pj ϕ = λϕnm pk (xnm )pj (xnm ) can be m=1

written as matrix equation (cf. (5.2.24)) In = Un n UTn .

(7.2.35)

We will see that it is not necessary to generate the matrix Wn in order to solve (7.2.33) (see Remark 7.2.13). Furthermore, in what follows we suppose that ϕ ϕ ϕ the function values f (xnj ) and h1 (xnj , xnk ), j, k = 1, . . . , n, are given. First Step of the Algorithm Let us choose an integer m with 0 < m < n and write un in (7.2.32) in the form un =

m−1 

ϕ

αnk pk +

k=0

n−1 

ϕ

αnk pk = Pm un + Rm un ,

k=m

where, for u ∈ L2ϕ , Pm u =

m−1 



ϕ ϕ u, pk ϕ pk

and Rm u = (I − Pm )u =

k=0

∞ 

ϕ ϕ u, pk ϕ pk .

k=m

n−1 

ϕ ∗ ϕ Set αnk = vn∗ , pk ϕ , k = m, . . . , n − 1, where vn∗ = βnk pk is the solution of k=0

An vn = Lϕn f ,

vn =

n−1  k=0

ϕ

βnk pk .

(7.2.36)

7.2 Hypersingular Integral Equations

479

Applying Proposition 7.2.11 in case h1 (x, y) ≡ 0 yields that (7.2.36) is uniquely solvable for all sufficiently large n, if (Q0) is satisfied. With the notation β n = ! "n−1 βnk k=0 we have   ϕ Mγ0 vn (xnj )

%

n j =1

=

γ0

n−1 

&n ϕ ϕ βnk pk (xnj )

= γ0 UTn β n

(7.2.37)

= UTn Dn β n .

(7.2.38)

j =1

k=0

and 

n ϕ (Vvn ) (xnj ) j =1

!

n = ωj k Set W

"

n j,k=0

=

% n−1 

&n βnk (k

ϕ ϕ + 1)pk (xnj ) j =1

k=0

n β . Then with ωj k defined in (7.2.27)–(7.2.30) and βn = W n

Lϕn Wvn =

n−1 

nk pϕ + βn,n−1 ωn,n+2 Lϕn pϕ , β k n+1

k=0 ϕ ϕ

ϕ

since Ln pn = 0. With the help of the three-term recurrence relation pn+1 (x) = ϕ ϕ ϕ ϕ ϕ ϕ ϕ 2xpn (x) − pn−1 (x), we see that Ln pn+1 = −Ln pn−1 = −pn−1 . Consequently, Lϕn Wvn =

n−2 

  nk pϕ + β n,n−1 − ωn,n+2 βn,n−1 pϕ , β k n−1

k=0

which shows that 

ϕ (Wvn )(xnj )

n j =1

=

% n−1  k=0

 ϕ ϕ βnk Wpk (xnj )

&n , nβ , = UTn W n

(7.2.39)

j =1

" n−1 ! ,n = ω where W ωn−1,n−1 = ωn−1,n−1 −ωn,n+2 and , ωj k = ωj k for all ,j k j,k=0 with ,   T ,n β = other j, k. That means, that Eq. (7.2.36) is equivalent to Un γ0 In +Dn +γ1 W n ηn or, having in mind (7.2.35), equivalent to 

 , n β = Un n η . γ0 In + Dn + γ1 W n n

(7.2.40)

! "n−1 Remark 7.2.13 Using these observations together with ξ n = UTn αnk k=0 ! "n−1 (cf. (7.2.37)), i.e., αnk k=0 = Un n ξ n (see (7.2.35)), we see that Eq. (7.2.33) can be written in the equivalent form 

  , n Un + H1n n ξ = η . UTn γ0 In + Dn + γ1 W n n

480

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

Since the transform (see (7.2.26) and (7.2.34)) √ .n .n 2π j kπ kπ diag sin Un n = sin n+1 n + 1 j,k=1 n + 1 k=1 can be applied to a vector of length n with O(n ln n) computational complexity (cf., for example, [202, 207]), we are able to compute β n (and so αm , . . . , αn−1 ) with O(n ln n) complexity, if we take into account the simple structure of the matrix on the left-hand side of (7.2.40). Lemma 7.2.14 Assume (Q0) and (Q2) as well as (Q1) with respect to h1 (·, y) to be satisfied. Let u∗ ∈ L2,s+1 be the solution of (7.2.21) and Rm un be defined with ϕ the help of the solution vn∗ of (7.2.36) (i.e., Rm un = Rm vn∗ ). Then, for 0 ≤ t ≤ s,      t −s−δ t −s  ∗  Rm un − Rm u∗  u ϕ,s+1 , c = c(n, m, t, u∗ ) . ≤ c m + n ϕ,t +1 Proof Writing   ϕ ∗ Rm un − Rm u∗ = Rm (vn∗ − u∗ ) = Rm A−1 n Ln f − u ! ϕ " Ln f − f + γ1 (W − Lϕn W)u∗ + H1 u∗ = Rm A−1 n we can estimate   Rm un − Rm u∗ 

ϕ,t+1

    ϕ ≤ Rm A−1 n (Ln f − f )

ϕ,t+1

   ϕ ∗ + |γ1 | Rm A−1 n (W − Ln W )u 

ϕ,t+1

   ∗ + Rm A−1 n H1 u 

ϕ,t+1

.

With the help of the inequality Rm uϕ,t ≤ (1 + m)t −s uϕ,s (see (7.1.60)), Lemma (O7), as well as the uniform boundedness of   −1  3.2.39,(b), and  (O5), A  2 2,1 and A−1  2,s+δ 2,s+1+δ (see the proofs of Propositions 7.2.8 n n Lϕ Lϕ →Lϕ →Lϕ and 7.2.9) we get     ϕ Rm A−1 n (Ln f − f )

ϕ,t+1

    ϕ ≤ A−1 n (Ln f − f )

ϕ,t+1

    ϕ ≤ nt A−1 n (Ln f − f )

ϕ,1

  ≤ c nt Lϕn f − f ϕ ≤ c nt−s f ϕ,s ,    ϕ ∗ Rm A−1 n (W − Ln W )u 

ϕ,t+1

   ∗ ϕ ∗  ≤ A−1 n (W u − Ln W u )

ϕ,t+1

  ≤ c nt W u∗ − Lϕn W u∗ ϕ

    ≤ c nt−s−2 W u∗ ϕ,s+2 ≤ c nt−s−2 u∗ ϕ,s+1 ,    ∗ H u  Rm A−1 1 n

ϕ,t+1

   ∗ ≤ c mt−s−δ A−1 H u  1 n

ϕ,s+1+δ

    ≤ c mt−s−δ H1 u∗ ϕ,s+δ ≤ c mt−s−δ u∗ ϕ ,

7.2 Hypersingular Integral Equations

481

 

which proves the lemma.

Second Step of the Algorithm ∗ , where w ∗ is the solution of We set Pm un = wm m

(Am + H1m )wm = Lϕm (f − An Rm vn∗ ) .

(7.2.41)

In virtue of Remark 7.2.13, Eq. (7.2.41) is equivalent to 

  , m Um + H1m m ω = UTm γ0 Im + Dm + γ1 W ηm , m

where ωm =

!

ϕ

wm (xmk )

"m k=1

and ηm =



(7.2.42)

f (xmj ) − (An Rm vn∗ )(xmj ) ϕ

ϕ

m j =1

. The

matrix Um can be generated with O(m2 ) complexity using the three-term recurrence ϕ relation of the orthogonal polynomials pj (x). Thus, for given ηm , Eq. (7.2.42) can ϕ be solved with O(m3 ) complexity. The values f (xmj ) are already given if we choose  ϕ  ϕ n+1 is an integer, which implies xmj ∈ xnk : k = 1, . . . , n m in such a way that m+1  ϕ  for j = 1, . . . , m. Analogously, we get the values An Rm vn∗ (xmj ), j = 1, . . . , m, from the entries of the vector 

 ϕ An Rm vn∗ (xnj )

n j =1

  ,n βn = UTn γ0 In + Dn + γ1 W

 T ∗ · · · β∗ (see (7.2.37), (7.2.38), and (7.2.39)), where β n = 0 · · · 0 βnm . This n,n−1 can be done with O(n ln n) complexity. The determination of the Fourier coefficients ! "m−1 αnk , k = 0, . . . , m − 1, needs O(m ln m) complexity, since αnk k=1 = Um m ωm . In the following remark, we summarize these considerations. ∗ + R v∗ , Remark 7.2.15 The computation of the Fourier coefficients of un = wm m n ∗ ∗ where vn and wm are the solutions of (7.2.36) and (7.2.41), respectively, can be done with O m3 + n ln n computational complexity.

Lemma 7.2.16 If the assumptions (Q0), (Q1), and (Q2) are fulfilled and if 12 < t0 ≤ n+1 is an integer, then, for all sufficiently large m, Eq. (7.2.41) is t ≤ s as well as m+1 ∗ and the solution u∗ ∈ L2,s+1 of (7.2.21), uniquely solvable and, with its solution wm ϕ   ∗    w − Pm u∗  ≤ c mt −s−δ + nt −s u∗ ϕ,s+1 , m ϕ,t +1

c = c(m, n, t, u∗ ) .

482

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . . ϕ

ϕ

ϕ

Proof Since Lm Ln = Lm (note that we have

  is an integer) and Mγ0 + V Pm u∗ ∈ Pm ,

n+1 m+1

∗ (Am + H1m )(wm − Pm u∗ )

= Lϕm f − Lϕm (Mγ0 + V + γ1 Lϕn W )Rm vn∗ − (Mγ0 + V + γ1 Lϕm W )Pm u∗ − H1m Pm u∗ = Lϕm (A + H1 )u∗ − Lϕm ARm vn∗ − Lϕm APm u∗ − H1m Pm u∗ ,1m Rm u∗ . ,1m )u∗ + Lϕm H = Lϕm A(Rm u∗ − Qm vn∗ ) + Lϕm (H1 − H

From Lemma 3.2.39,(a), (O8) and Lemma 7.2.14 it follows that  ϕ  L A(Rm u∗ − Rm v ∗ ) m

n

ϕ,t0

  ≤ cRm u∗ − Rm vn∗ ϕ,t

0 +1

   ≤ c mt0 −s−δ + nt0 −s u∗ ϕ,s+1 .

With the help of Lemma 7.1.3, we can estimate  ϕ     L H1 − H ,1m u∗  ≤ c mt −s−δ u∗  . m ϕ,t ϕ ,1m we see that H ,1m Rm u∗ = 0. Thus, Moreover, by the definition of the operator H   using the uniform boundedness of (Am + H1m )−1  2,t0 2,t0 +1 , we get Lϕ →Lϕ

 ∗   ∗  w − Pm u∗  ≤ mt−t0 wm − Pm u∗ ϕ,t m ϕ,t+1

0 +1

  ∗ ≤ c mt−t0 (Am + H1m )(wm − Pm u∗ )ϕ,t

0

     ≤ c mt−t0 mt0 −s−δ + nt0 −s u∗ ϕ,s+1 + mt−s−δ u∗ ϕ    ≤ c mt−s−δ + nt−s u∗ ϕ.s+1 ,

 

and the lemma is proved.

Let us summarize Lemma 7.2.14 and Lemma 7.2.16 as well as Remark 7.2.15 to the following proposition on the properties of the fast algorithm presented in this section. Proposition 7.2.17 Let s > 12 , (Q0), (Q1), (Q2), and (Q3) be satisfied as well as m n+1 and n be integers such that 0 < m < n, m+1 is an integer, and mn3 ∼ 1. Then, for all ∗ and v ∗ . The sufficiently large m, Eqs. (7.2.36) and (7.2.41) have unique solutions wm n ∗ ∗ ∗ 2,t +1 function un := wm +Rm vn converges in the norm of Lϕ , 0 ≤ t < s, to the unique   solution u∗ ∈ L2,s+1 of Eq. (7.2.21), where, for max 12 , s − 2δ < t0 ≤ t ≤ s, ϕ  ∗     un − u∗ ϕ,t +1 ≤ c nt −s u∗ ϕ,s+1 ,

c = c(m, n, t, u∗ ) .

Moreover, the solution of (7.2.36) and (7.2.41) needs O(n ln n) operations.

(7.2.43)

7.2 Hypersingular Integral Equations

483

Proof Lemma 7.2.14 and Lemma 7.2.16 yield  ∗      un − u∗ ϕ,t +1 ≤ c mt −s−δ + nt −s u∗ ϕ,s+1 . 1

Due to m ≥ c1 n 3 and t0 ≤ t ≤ s, we have mt −s−δ ≤ c1t −s−δ n

t−s−δ 3

≤ c1∗ nt −s ,

and the estimate (7.2.43) follows. Remark (7.2.15) together with m3 ≤ c2 n leads to a complexity of O(n ln n).  

A More General Situation Finally, let us discuss, what results are possible if, instead of Mγ0 or γ1 W in (7.2.20) resp. (7.2.21), operators like Mψ (see (7.2.8)) or H2 (see (7.2.4)) occur. That means, instead of Eq. (7.2.20) we consider an equation of the form ψ(x)u(x) −

d 1 dx π



1

−1

u(y)ϕ(y) dy + y −x



1 −1

!

" h1 (x, y) + h2 (x, y) ln |x − y| u(y)ϕ(y) dy = f (x) ,

(7.2.44) −1 < x < 1. We again write this equation in the form (cf. (7.2.21)) (A + H)u = f ,

(7.2.45)

but now with A = V and H = Mψ + H1 + H2 , where H1 and H2 are defined in (7.2.4) with v α,β = ϕ. The approximating operators are given by An = A and ϕ ,1n + H 2n , where (cf. (7.2.17) and (7.2.14) for the case a = 0, H(n) = Ln Mψ + H b = −1) 

n   ϕ ϕ ϕ ,1n u (x) = H λnk h1 (x, xnk )u(xnk ) k=1

and

n    ϕ ϕ ϕ 2n u (x) = λnk (x)h2 (x, xnk )u(xnk ) . H k=1

We have to check if the assertions of Lemma 7.2.14 and Lemma 7.2.16 remain valid. The crucial  point in the  proof of a lemma analogous to Lemma 7.2.14 is the estimation of Rm A−1 Hu∗ ϕ,t +1 . If we suppose that ψ belongs to Crϕ and that h2 and h2x possess continuous partial derivatives up to order r ≥ s + 1 on [−1, 1], then we can apply Corollary 2.4.12 and Property (O3) together with Corollary 5.5.3 in case a = 0, b = −1 to obtain           ≤ c mt −s−1 A−1 Mψ u∗  ≤ c mt −s−1 u∗  Rm A−1 Mψ u∗  ϕ,t +1

ϕ,s+2

ϕ,s+1

484

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

and     Rm A−1 H2 u∗ 

ϕ,t +1

    ≤ c mt −s−2 A−1 H2 u∗ 

ϕ,s+3

  ≤ c mt −s−2u∗ ϕ,s+1 .

The essential steps in the proof of an lemma analogous to Lemma 7.2.16 are the estimations of  ϕ       L H − Mψ − H ,1m − H 2m u∗  = Lϕm H1 + H2 − H ,1m − H 2m u∗  m ϕ,t ϕ,t   ϕ    2m Rm u∗  . If we assume that, for some and H(m) Rm u∗ ϕ,t = Lm Mψ + H ϕ,t q

integer q ≥ s + 1, h2 (x, ·) ∈ Cϕ uniformly with respect to x ∈ [−1, 1], then, by taking into account Lemma 7.2.6,  ϕ    L (H2 − H 2m )u∗  ≤ c mt −s−1 u∗  . m ϕ,t ϕ,s+1 1 2

Furthermore, for

< t0 ≤ t ≤ s + 1,

 ϕ        L Mψ Rm u∗  ≤ c mt−t0 Lϕ Mψ Rm u∗  ≤ c mt−t0 Rm u∗ ϕ,t ≤ c mt−s−1 u∗ ϕ,s+1 m m ϕ,t ϕ,t 0

0

and, for 1 ≤ t ≤ s + 1,  ϕ      ∗ t−s−1  ∗  L H u ϕ,s+1 + Lϕm H2 Rm u∗ ϕ,t m 2m Rm u ϕ,t ≤ c m       ≤ c mt−s−1 u∗ ϕ,s+1 + c mt−1 Rm u∗ ϕ,1 ≤ c mt−s−1 u∗ ϕ,s+1 .

Hence, the assertions of Proposition 7.2.17 remain true if we additionally assume that t0 ≥ 1 and δ ≤ 1.

7.3 Integral Equations with Mellin Type Kernels The aim of this section is to investigate collocation-quadrature methods for integral equations of the form  u(x) +



1 −1

H

1+x 1+y



u(y) dy = f (x) , 1+y

−1 < x < 1 ,

(7.3.1)

K(x, y)u(y) dy = f (x) ,

−1 < x < 1 ,

or, more general,  u(x) +



1 −1

H

1+x 1+y



u(y) dy + 1+y



1 −1

(7.3.2)

7.3 Integral Equations with Mellin Type Kernels

485

where H : (0, ∞) −→ C, K : [−1, 1]2 −→ C, f : (−1, 1) −→ C are given continuous functions and the function u : [−1, 1] −→ C is looked for. We will do this in weighted spaces C0,δ0 of continuous functions and by exploiting the mapping properties we deduced for Mellin-type operators in Sect. 5.6. We concentrate on Eq. (7.3.1), since Eq. (7.3.2) can be handled with compact perturbation principles. As starting point we take the Gauss-Legendre rule  Q(f ) :=

1

−1

f (x) dx =

n 

λnk f (xnk ) + rn (f ) =: Qn (f ) + rn (f ) ,

(7.3.3)

k=1

0,0 0,0 where xnk = xnk are the Legendre nodes and λnk = λnk the respective Christoffel numbers. We approximate the operator MH , given by

 (MH u)(x) =



1 −1

H

1+x 1+y



u(y) dy = Q(Hx u) 1+y



with

Hx (y) = H

1+x 1+y



1 , 1+y

(7.3.4) with the help of the sequence of the operators MH n , defined as (MH n u)(x) =

n 



1+x 1 + xnk

λnk H

k=1



u(xnk ) = Qn (Hx u) 1 + xnk

(7.3.5)

for x ∈ [−1, 1], i.e., we look for an approximate solution un (x) by solving un (x) +

n  k=1

 λnk H

1+x 1 + xnk



u(xnk ) = f (x) , 1 + xnk

−1 ≤ x ≤ 1 ,

(7.3.6)

a method, which is comparable with the Nyström method for Fredholm integral equations, studied in Sect. 6.2. C0,δ0 −→ C0,δ0 Since, for nontrivial kernel functions H (t), the operator MH : p p or MH : Lα,β −→ Lα,β is non compact, in the literature it is often assumed that the norm of this operator is less than 1, which we will also do here. Then the most important task is to show that, for all sufficiently large n, also the norms of certain PH approximating operators M n are uniformly less than 1, i.e.,    PH  Mn 

C0,δ0 → C0,δ0

≤ q < 1,

(7.3.7)

in order to be able to prove the stability of the method. To avoid these restrictions other techniques are needed, which are not in the focus of present book and which are based on Banach algebra techniques in case of Lp -spaces, in particular C ∗ algebra techniques in case of L2 -spaces. The interested reader is referred to [71, 72, 80, 83, 84] for example.

486

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

Let us make the following assumptions on the Mellin kernel function H : (0, ∞) −→ C: There are numbers s ∈ N0 , and δ0 , ρ ∈ R, and c0 , t0 ∈ (0, ∞) such that δ0 + ρ ≥ 0 and H : (0, ∞) −→ C is s times continuously differentiable, where  ∞  ∞   (j )  δ0 +j −1 |H (t)|t δ0 −1 dt < 1 and dt < ∞, j = 1, . . . , s, (H1) H (t) t 0  0 (j )  (H2) H (t) ≤ c0 t ρ−j , 0 < t < t0 , j = 0, 1, . . . , s, (H3) the functions  hj : [a, 1]×[−1, 1] −→ C ,

(x, y) → H (j )

1+x 1+y



(1+y)−η−j −1 ,

j = 0, 1, . . . , s ,

are continuous for some η > δ0 − 1 and for every a ∈ (−1, 1). Then, by Propositions 5.6.2 and 5.6.5 we get the following two properties: (s) (M1) The operator MH : C0,δ0 −→ C0,δ0 is bounded, where

  (j ) ∈ C0,δ0 +j , j = 1, . . . , s C(s) 0,δ0 = f ∈ C0,δ0 : f

and f  C(s) = 0,δ0

s     (j )  f  j =0

0,δ0 ,∞

.

(M2)  The norm of the operator MH : C0,δ0 −→ C0,δ0 is less or equal to ∞ |H (t)δ0 −1 dt < 1. 0

For our further investigations, we recall an estimate of the error term rn (f ) in (7.3.3). Lemma 7.3.1 ([42], Theorem 3) If the function (1 − x 2 )f (s) (x) is integrable over [−1, 1], then  |rn (f )| ≤ cs

1 −1

   (s)  f (x) ωn (x) dx ,

where cs = cs (n, f ) is a finite constant and  s  √ 1 − x2 2 s ωn (x) = min , (1 − x ) . n

7.3 Integral Equations with Mellin Type Kernels

487

Corollary 7.3.2 If δ0 < 1 and f ∈ C0,δ0 for some s ≥ 2(1 − δ0 ), then for the error term rn (f ) in (7.3.3) we have (s)

|rn (f )| ≤ cs,1 f  (s) C

0,δ0

⎧ ⎪ ⎨

ln n : s = 2(1 − δ0 ) , n2(1−δ0 ) 1 ⎪ ⎩ : s > 2(1 − δ0 ) , 2(1−δ 0) n

with a finite constant cs,1 independent of f and n. Proof We apply Lemma 7.3.1 and get ⎡⎛ |rn (f )| ≤ cs f  (s) ⎣⎝ C

6 − 1−



0,δ0

1 n2

−1

 +

⎞ 1 6 1−

⎠ (1 + x)−δ0 −s (1 − x 2 )s dx

1 n2

+n ⎡ c⎣

≤ cs f  (s) C



0,δ0

6 − 1−

1 n2

−1

(1 + x)−δ0 dx +



/

≤ cs f  (s) c ⎣ 1 − C 0,δ0

−s + cs f  (s) c n C 0,δ0

≤ cs,1 f  (s) C

0,δ0

1 1− 2 n

⎧ ⎪ ⎪ ⎪ ⎪ ⎨

−s



1−δ0

1 n2



6 1−

6 − 1−

1 n2 1 n2

⎤ −δ0 −s

(1 + x)

(1 − x ) dx ⎦ 2

s 2

(1 − x)s dx

1 n2



/

+ 1−

6  ln 1 − 1 −



1 6 1−

+n ⎡

−s

0 6 − 1−

1 1− 2 n

+1

& −δ0 − 2s

1 n2

(1 + x)

dx + 1

s+1 ⎤ ⎦

: s = 2(1 − δ0 ) ,

1−δ0 − s  / 2 ⎪ 1 ⎪ ⎪ ⎪ 1 − 1 − + 1 : s > 2(1 − δ0 ) , ⎩ n2

⎧ ⎪ ⎨

ln n : s = 2(1 − δ0 ) , n2(1−δ0 ) 1 ⎪ ⎩ : s > 2(1 − δ0 ) , n2(1−δ0 )

where we took into account

1 2n2

≤1−

6

1−

1 n2

=

1  6 n2 1+ 1−

1 n2





1 . n2

 

488

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

In order to study the error of the approximation of MH f by MH n f , we write (cf. (7.3.4) and (7.3.5)) H (MH f )(x) = (MH n f )(x) + (Rn f )(x) ,

(7.3.8)

where, due to (7.3.3),  (RH n f )(x) = rn (Hx f )

with

Hx (y) = H

1+x 1+y



1 . 1+y

We further assume that there are constants ρ1 , ρ2 , c1 ∈ (0, ∞) such that   c1 (1 + y)ρ1 −j  (j )  for j = 0, 1, . . . , s, (H4) s ≥ 2(1 − δ0 + ρ1 ) and Hx (y) ≤ (1 + x)ρ2 −1 < x, y ≤ 1. Note that the kernel functions of operators of the form (5.6.16) and (5.6.17) have this property. We also refer to the application in Sect. 8.1. Corollary 7.3.3 Assume that δ0 < 1 and (H4) are fulfilled. Then, for f ∈ C0,δ0 , (s)

H   cs f  C(s) 0,δ0  H    Rn f (x) ≤ ρ (1 + x) 2

⎧ ⎪ ⎨ ⎪ ⎩

ln n n2(1−δ0 +ρ1 ) 1

n2(1−δ0 +ρ1 )

: s = 2(1 − δ0 + ρ1 ) , : s > 2(1 − δ0 + ρ1 ) ,

where the constant csH ∈ (0, ∞) does not depend on x, f , and n. (s) C0,δ0 −ρ1 . Proof Set fx (y) = (1 + x)ρ2 Hx (y)f (y). Due to Exercise 5.6.4, fx ∈ (s) Moreover, fx  C

0,δ0 +ρ1

≤ cH,s f  (s) , −1 < x ≤ 1, with cH,s = cH,s (x), Hence, C 0,δ0

in view of Corollary 7.3.2,

⎧ ln n ⎪ ⎨ 2(1−δ +ρ ) : s = 2(1 − δ0 + ρ1 ) ,     0 1 ρ2  H n (1 + x)  Rn f (x) = |rn (fx )| ≤ cs,1 cH,s f  (s) C0,δ ⎪ 1 0 ⎩ : s > 2(1 − δ0 + ρ1 ) , n2(1−δ0 +ρ1 )

 

which yields the assertion. In what follows, we restrict our considerations to the case, where (H5) H (t) > 0 for all t ∈ (0, ∞). In that situation, due to Remark 5.6.3 and (5.6.15), we have  ∞    0,δ0 δ0 −1 0,−δ0  = H (t)t dt = M v (M3) MH   . v H C0,δ →C0,δ 0

0

0



7.3 Integral Equations with Mellin Type Kernels

489

In the estimate   n     |f (xnk )| 1+x   δ0 (1 + x)δ0  MH f (x) λ H ≤ (1 + x)  nk n 1 + xnk 1 + xnk k=1

≤ (1 + x)δ0

n 

 λnk H

k=1

1+x 1 + xnk



(1 + xnk )−δ0 f  C0,δ , 0

we have equality if f (x) = (1 + x)−δ0 . Hence, since δ0 + ρ ≥ 0 by assumption, we get the boundedness of the operator MH n : C0,δ0 → C0,δ0 and

   H Mn 

C0,δ0 → C0,δ0

= sup (1 + x)

n 

 λnk H

k=1

Thus,

   (M4) MH n C

δ0

C0,δ0 0,δ0 →

1+x 1 + xnk



 (1 + xnk )

−δ0

: −1 < x ≤ 1

.

  0,−δ0  . = v 0,δ0 MH n v ∞

In order to get an estimate of the form (7.3.7), our aim is to show that  H Pn  lim M C

n→∞

C0,δ0 0,δ0 →

= MH  C0,δ

0

(7.3.9)

→ C0,δ0

H PH holds true. If we would set M n = Mn , then by (M3), (M4), and (7.3.8)

  PH   Mn  C

0,δ0 →C0,δ0

− MH  C0,δ

 

→ C0,δ0  0

  0,δ     0,−δ0  v 0 MH v 0,−δ0   − = v 0,δ0 MH n v ∞ ∞      0,−δ  0,−δ0  0 ≤ v 0,δ0 MH = v 0,δ0 RH , n − MH v n v ∞ ∞

such that Corollary 7.3.3 is only applicable in case of δ0 ≥ ρ2 , which, in the applications, contradicts the condition δ0 < 1. That’s why we modify the operators ∗ −ρ3 − 1 and some ρ > 0, MH 3 n by following an idea of [123]. Set, for xn = n ⎧ ⎪ ⎨

 H  : xn∗ ≤ x ≤ 1 , Mn u (x)  H   Pn u (x) =  M 1 + x −δ0  H  ∗ ⎪ ⎩ Mn u (xn ) : −1 ≤ x < xn∗ . 1 + xn∗

(7.3.10)

Then       ∗   PH   ∗ 0,δ0 ∗  H = v u (x) (x ) u (xn ) max v 0,δ0 (x)  M : −1 ≤ x ≤ x M   n n n n (7.3.11)

490

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

and, in virtue of Corollary 7.3.3,  H M Pn  C

C0,δ0 0,δ0 →

  0,−δ0  = v 0,δ0 MH n v ∞,[x ∗ ,1] n

     0,−δ  0 ≤ v 0,δ0 MH + MH  n − MH v ∞,[x ∗ ,1] C n

    0,−δ0  = v 0,δ0 RH + MH  n v C0,δ ∞,[x ∗ ,1] n

≤ =

0

c (1 + xn∗ )ρ2 −δ0 n2(1−δ0 +ρ1 ) c n2(1−δ0 +ρ1 )−(ρ2 −δ0 )ρ3

C0,δ0 0,δ0 →

→ C0,δ0

  + MH  C

C0,δ0 0,δ0 →

  + MH  C

C0,δ0 0,δ0 →

,

if δ0 < ρ2 , Hence, we have proved the following lemma. PH Lemma 7.3.4 If the conditions (H1)–(H5) are satisfied and the operator M n is defined by (7.3.10), where xn∗ =

ln n −1 nρ3

and 0 < ρ3 ≤

2(1 − δ0 + ρ1 ) , ρ2 − δ0

−ρ ≤ δ0 < min {1, ρ2 } ,

then  H Pn  lim sup M C n→∞

C0,δ0 0,δ0 →

  ≤ MH  C

C0,δ0 0,δ0 →

,

from which we deliver  H M P  n

C0,δ0 C0,δ0 →

≤q 0,  for ∞ there exist constants c1 , c2 ∈ R and a sequence pn∗ n=1 of polynomials pn∗ ∈ Pn such that, for n ∈ N,  ∗     u − pn∗ C0,ε ≤ c1 n−(m+δ−ε) ln(n + 1) and pn∗ Cm,δ0 ≤ c2 ,

(7.4.33)

where cj = cj (n). Let a1,n (x) = Fz (x, σ1 (x)pn∗ (x)). Furthermore, define (cf. (7.4.11)) σ1,n (x) = 

1+x 1 + [a1,n (x)]2

- exp

1 −1

g1,n (t) dt t −x

. ,

μ1,n (x) =

1  , σ1,n (x) 1 + [a1,n (x)]2

7.4 Nonlinear Cauchy Singular Integral Equations

503

where the continuous functions g1,n : [−1, 1] −→ R are given by the relations a1,n (x) + i =

6

1 + [a1,n (x)]2 eπig1,n (x) ,

0 < g1,n (x) < 1 ,

in Case A, 1 + ia1,n (x) =

6

1 + [a1,n (x)]2 eπig1,n (x) ,

0 ≤ g1,n (x)
0. Now the assertion (7.4.34) for ν ∈ σ 0 , μ0 is a consequence of (7.4.34) for ν ∈ {σ, μ} and of the relations a1 (1) − a1 (x) a1 (x) = a1,n (x) Fz (1, σ1 (x)pn∗ (x)) − Fz (x, σ1 (x)pn∗ (x)) =

  − 1 + [a1 (ξ1 (x))]2 πg1 (−1) a1 (ξ1 (x)) = , Fxz (ξ2 (x, n), σ1 (x)pn∗ (x)) 2Fxz (ξ2 (x, n), σ1 (x)pn∗ (x))

ξ1 (x), ξ2 (x, n) ∈ (x, 1) ,

implying   πg1 (−1) 1 + [a1(−1)]2 πg1 (−1) a1 (x) ≤ ≤ . 2M2 a1,n (x) 2M1

(7.4.37)

Note that, due to (7.4.9), Fz (1, z) = 0 for all z ∈ R, and that, due to (7.4.26) together with the continuity of the function Fxz and the definition of a1 (x), the 1 (x) quotient aa1,n   (x) is positive. Corollary 7.4.7 In view of (7.4.34) and (7.4.37), all norms .σ1 , .σ1,n , .σ 0 , 1 and .σ 0 are uniformly equivalent with respect to n ∈ N, and the same is true for 1,n .μ1 , .μ1,n , .μ0 , and .μ0 . 1

1,n

Lemma 7.4.8 For a function ω : [−1, 1] −→ R satisfying ω(x) ≥ c > 0 for all x ∈ [−1, 1] and every weight function ρ, we have  ρ  L ωp ≥ cpρ n ρ

∀ p ∈ Pn .

Proof Remember the algebraic accuracy of the Gaussian rule, i.e., we have 

1 −1

p(x)ρ(x) dx =

n 

ρ  ρ  λnj p xnj ∀ p ∈ P2n .

j =1

Thus, for p ∈ Pn we observe n n       ρ 2  ρ   ρ   ρ 2 ρ   ρ  2 2 2 2 L ωp = p x λ ≥ c λ x x ω p  n nj nj nj nj nj  = c pρ , ρ j =1

j =1

 

which proves the lemma.

Lemma 7.4.9 Let τ = 1 + max {α1 , β1 } as well as 0 < ε < 1. Then there exist constants c∞,σ1 , cε,∞ ∈ R, not depending on n ∈ N, such that, for all p ∈ Pn , p∞ ≤ c∞,σ1 nτ pσ1

and pC0,ε ≤ cε,∞ n2ε p∞ .

(7.4.38)

506

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . . α ,β1

Proof For the Jacobi polynomial pn 1 Section 3, Prop. 2])  α ,β  p 1 1 

1



n

(x), we have (cf. [172, Part II, Chapter VI,

≤ c(n + 1)τ − 2 ,

n ∈ N0 ,

with some real constant c = c(n). Thus, for p(x) =

n−1 

α ,β1

εj pj 1

(x),

j =0

F F G G n−1 n−1 G G  α1 ,β1 2 G G 2 p  |p(x)| ≤ H |εj | H j ∞ j =0

j =0

≤ cpL2

α1 ,β1

F G n−1 G G H (j + 1)2τ −1 ≤ cpL2



α1 ,β1

j =0

Now the equivalence of the norms .L2

α1 ,β1

n 0

c nτ s 2τ −1 ds = √ pL2 . α1 ,β1 2τ

and .σ1 (see (7.4.12) and (7.4.13))

proves the first inequality in (7.4.38). For the second one, we follow [193, Chapter II, Section 2.1] and estimate |p(x1 ) − p(x2 )| ≤

: |x1 − x2 | ≥ n−2 ,

2p∞ |x1 − x2 |ε n2ε

n2 p∞ |x1 − x2 |ε n−2(1−ε) : |x1 − x2 | < n−2 ,

where, in the second case, we used Markov’s inequality (2.4.15) to get |p(x1 ) − p(x2 )| ≤ n2 p∞ |x1 − x2 | ≤ n2 p∞ |x1 − x2 |ε n−2(1−ε) ,  

and we are finished.

u∗ , D ∗ ) ∈ Cm,δ × R For the following lemma, recall that P( u∗ , D ∗ ) = 0, where ( (see assumption (7.4.32)). Lemma 7.4.10 For ε ∈ (0, m + δ),   Pn (p∗ , D ∗ ) ≤ c9 n−(m+δ−ε) , n μ 1

where the constant c9 ∈ R does not depend on n ∈ N. μ0

1 1 Proof Since Ln+1 P( u∗ , D ∗ ) = 0 resp. Ln+1 P( u∗ , D ∗ ) = 0, we can estimate

μ

  Pn (p∗ , D ∗ ) n

μ1

 μ1 !  μ1  " ≤ Ln+1 u∗ ) μ + Ln+1 S σ1 (pn∗ − u∗ )μ F (·, σ1 pn∗ ) − F (·, σ1 1

1

7.4 Nonlinear Cauchy Singular Integral Equations

in Case A and   Pn (p∗ , D ∗ ) n

 0   μ1  ∗ ∗  u )  μ1 ≤ Ln+1 σ1 (pn −

μ1

507

 μ01 ! " + Ln+1 S F (·, σ1 pn∗ ) − F (·, σ1 u∗ ) μ

1

in Case B. To handle the terms on the right-hand sides we use the estimates (7.4.33), the estimates  0   μ1  L g  ≤ cμ g∞ , Lμ1 g  ≤ c 0 g∞ , (7.4.39) 1 μ n+1 n+1 μ μ 1

1

1

the fact that σ1 ∈ for sufficiently small ε0 ∈ (0, ε) (see (7.4.12) and (7.4.13)), 0 and the continuity of the operator S : C0,ε −→ C (see Proposition 5.2.17). In Case 0 B, we also recall condition (7.4.9). Indeed, in Case A we obtain C0,ε0

  Pn (p∗ , D ∗ ) n

μ1

    ! " ≤ cμ1 max { 1 , 2 } σ1 ∞ pn∗ − u∗ ∞ + S C0,ε0 →C σ1 C0,ε0 pn∗ − u∗ C0,ε0 ! " ≤ cμ1 c1 max { 1 , 2 } σ1 ∞ + S C0,ε0 →C σ1 C0,ε0 n−(m+δ−ε0 ) ln(n + 1) ≤ c9 n−(m+δ−ε) .

For Case B, first we set, for s ∈ {z, xz, zz},   (s) K] , c9 := max |Fs (x, z) : (x, z) ∈ [−1, 1] × [−K, ∈ (0, ∞) is appropriately chosen (cf. (7.4.36)). Then, by setting h(x) = where K u∗ (x) and g(x) = σ1 (x)pn∗ (x), we can estimate σ1 (x)   F (x1 , h(x1 )) − F (x1 , g(x1 )) − F (x2 , h(x2 )) + F (x2 , g(x2 ))   ≤ F (x1 , h(x1 ) + [g(x2 ) − h(x2 )]) − F (x1 , h(x1 ) + [g(x1 ) − h(x1 )])   + F (x2 , g(x2 )) − F (x1 , g(x2 )) − F (x2 , h(x2 )) + F (x1 , h(x2 ))  + F (x1 , h(x2 ) + [g(x2 ) − h(x2 )]) − F (x1 , h(x1 ) + [g(x2 ) − h(x2 )])  − F (x1 , h(x2 )) + F (x1 , h(x1 ))   ≤ c9(z) g(x2 ) − h(x2 ) − g(x1 ) + h(x1 )   + 

1!

  +

1

0

 "  Fx (x2 + t (x1 − x2 ), g(x2 )) − Fx (x2 + t (x1 − x2 ), h(x2 )) dt  |x1 − x2 |

!

Fz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )] + [g(x2 ) − h(x2 )])

0

"  − Fz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )]) dt  |h(x1 − h(x2 )|

508

7 Collocation and Collocation-Quadrature Methods for Strongly Singular Integral. . .

≤ c9(z)g − hC0,ε0 |x1 − x2 |ε0 + c9(xz)g − h∞ |x1 − x2 | + c9(zz)g − h∞ hC0,ε0 |x1 − x2 |ε0    ∗ u C0,ε0 · ≤ c1 c9(z) σ1 C0,ε0 + c9(xz)σ1 ∞ 21−ε0 + c9(zz)σ1 ∞ σ1 C0,ε0  · n−(m+δ−ε0 ) ln(n + 1)|x1 − x2 |ε0 =: c9,1 n−(m+δ−ε0 ) ln(n + 1)|x1 − x2 |ε0 for x1 , x2 ∈ [−1, 1]. Moreover,     (z) F (·, σ1 p∗ ) − F (·, σ1 u∗ )∞ ≤ c9 σ1 ∞ pn∗ − u∗ ∞ n (z) ≤ c1 c9 σ1 ∞ n−(m+δ−ε0 ) ln(n + 1) =: c9,2 n−(m+δ−ε0 ) ln(n + 1) .

Hence, in Case B,   Pn (p∗ , D ∗ ) n

μ1

      ≤ cμ0 σ1 ∞ pn∗ − u∗ ∞ + S C0,ε0 →C F (·, σ1 pn∗ ) − F (·, σ1 u∗ )C0,ε0 1

  ≤ cμ0 c9,1 + c9,2 n−(m+δ−ε0 ) ln(n + 1) ≤ c9 n−(m+δ−ε) , 1

 

and the lemma is proved.

Lemma 7.4.11 If $\gamma > \tfrac12$, where $\gamma$ is defined in Lemma 7.4.6, then the operators $P_n'(p_n^*, D^*) : X_n \longrightarrow Y_n$ are invertible for all sufficiently large $n$ and
\[ \Big\| \big[ P_n'(p_n^*, D^*) \big]^{-1} \Big\|_{\mathcal{L}(Y_n, X_n)} \le c_{10} \]
for all these $n$ with a constant $c_{10} \in \mathbb{R}$.

Proof First, let us consider Case A, choose $\eta \in \big(\tfrac12, 1\big)$, and define the bilinear operators

  Mn : C0,η × C0,η −→ L L2√σ1 , L2√μ1 by  μ1  w1 (Sw2 I − w2 S)σ1,n v , [Mn (w1 , w2 )] v = I − Ln+1

wj ∈ C0,η , v ∈ L2√σ1 .


Since, for x ∈ [−1, 1] and ρ > −1, 

1

−1

 |y − x|ρ σ1 (y) dy ≤ σ1 ∞

=

x

−1



1

(x − y)ρ dy +

(y − x)ρ dy

x

σ1 ∞  σ1 ∞ 22+ρ ρ (1 + x)1+ρ + (1 − x)1+ρ ≤ =: c10,1 < ∞ , 1+ρ 1+ρ

we have (cf. (7.4.34))      1 w (x) − w (y)  1  w (x) − w (y)  σ (y)   2 2 2  2  1,n σ1,n (y)v(y) dy  ≤    σ (y) σ1 (y)|v(y)| dy  −1  y−x y − x −1 1 ≤



 1

w2 C0,η c3

−1

|y − x|η−1 σ1 (y)|v(y)| dy

6 2(η−1) w2 C0,η c10,1 c3

vσ1

and hence, due to (7.4.39),   L L2√σ ,L2√μ

Mn (w1 , w2 )

1



6 2(η−1) 2cμ1 c10,1 πc3

1

w1 ∞ w2 C0,η .

(7.4.40)

Moreover, since 

1 −1

2    w1 (x) w2 (x) − w2 (y)  σ1 (y) dy ≤ w1 2 w2 2 0,η c2(η−1) , ∞ 10,1   C y−x

we are able to apply [81, p. 216, Prop. 2], which yields   L L2√σ ,L2√μ

Mn (w1 , w2 )

1

−→ 0

if

n −→ ∞

(7.4.41)

1

μ

μ

μ

1 1 1 gI = Ln+1 gLn+1 ) for all (w1 , w2 ) ∈ C0,η × C0,η . Comparing (note that Ln+1

Pn (pn∗ , D ∗ )(vn , D) μ

1 (a1,n σ1 vn − S σ1 vn ) − D = Ln+1

!  " μ1 μ1  μ1 ωnσ a1,n σ1,n vn − S σ1,n vn + Ln+1 S − (ωnσ )−1 S ωnσ I σ1,n vn − Ln+1 (ωnσ )−1 D = Ln+1


with

  σ −1 QA a1,n σ1 v − Sσ1 v − D n (v, D) := (ωn )   = a1,n σ1,n v − Sσ1,n v + S − (ωnσ )−1 Sωnσ I σ1,n v − (ωnσ )−1 D

and applying Lemma 7.4.8 together with Lemma 7.4.6 as well as Lemma 7.4.5,(A1) we conclude   ∗ ∗  P (p , D )(vn , D) n

n

μ1

     μ1  μ1 ≥ c3 a1,n σ1,n vn − S σ1,n vn + Ln+1 S − (ωnσ )−1 S ωnσ I σ1,n vn − Ln+1 (ωnσ )−1 D        μ1 σ1 −1 σ σ −1 = c3 QA − Ln+1 (ωnσ )−1  n (vn , D) + Mn ((wn ) , ωn ) vn + D (ωn )

μ1

μ1

.

(7.4.42) According to relation (7.4.34) we get the estimate    A  Qn (v, D)

μ1



 √    c3 QA n (v, D)

μ1,n

√ ≥

√   c3  c3  a1,n σ1 v − S σ1 v − D  a1,n σ1n ωσ v − S σ1,n ωσ v − D  = n n μ1,n μ1,n c4 c4

Lemma 7.4.5,(A3)

=



c3 c4

/

  a1,n σ1,n ωσ v − S σ1,n ωσ v 2 n n μ

1,n

2 |D|2 , + cμ 1,n

where  cμ2 1,n

=

1

−1

 μ1,n (x) dx =

1 −1

cμ2 μ1 (x) dx ≥ 1. μ c4 ωn (x)

From this, by using Lemma 7.4.5,(A2), Lemma 7.4.6, and Corollary 7.4.7, we conclude √ /    c3    A ωσ v 2 + c2 |D|2 Qn (v, D) ≥ n μ1,n σ1,n μ1 c4 √ /   c3   ωσ v 2 + c2 |D|2 = n μ1,n σ1 c4  (7.4.43) √ 2 c c3 μ 1 ≥ c3 v2σ1 + |D|2 c4 c4 $ # √  c3 1 (v, D)X =: cA (v.D)X . ≥ min c3 , c4 c4


  μ1 The convergence lim w − Ln+1 wμ = 0 for all w ∈ C[−1, 1] is uniform on 1 n→∞   0,γ the compact subset Cc5 := w ∈ C0,γ [−1, 1] : wC0,γ ≤ c5 of C[−1, 1] (see Corollary 2.4.14 and Lemma 2.3.11). Consequently, due to (7.4.35),    σ −1  μ1 (ωnσ )−1  (ωn ) − Ln+1

μ1

   0,γ μ1 ≤ sup w − Ln+1 w μ : w ∈ Cc5 −→ 0 1

if n −→ ∞ .

Moreover, in view of (7.4.40), the convergence (7.4.41) is also uniform on every compact subset of C0,η × C0,η . If we again use relation (7.4.35) as well as the compact embedding C0,γ ⊂ C0,η for η < γ (see Exercise 2.4.18), then we can conclude that, for all sufficiently large n,      μ1 (ωnσ )−1   Mn ((wnσ1 )−1 , ωnσ ) vn + D (ωnσ )−1 − Ln+1

μ1



cA (v, D)X 2

∀ (v, D) ∈ X .

This finally, together with (7.4.43) and (7.4.42), yields   ∗ ∗  P (p , D )(vn , D) ≥ c3 cA (vn , D)X n n Y 2

∀ (vn , D) ∈ Xn ,

! "−1  2  i.e.,  Pn (pn∗ , D ∗ ) L(Yn ,Xn ) ≤ c3 cA for all sufficiently large n. Now let us turn to Case B. Here we write Pn (pn∗ , D ∗ )(vn , D) μ0

1 = Ln+1 (σ1 vn − S a1,n σ1 vn ) − D

  μ01 0 = Ln+1 σ1 vn − S bσ1,n vn − D .  0 μ01 μ01  μ01 0 = Ln+1 ωnσ σ1,n vn − S bσ1,n vn + Ln+1 S − (ωnσ )−1 S ωnσ I bσ1,n vn − Ln+1 (ωnσ )−1 D

with b(x) = 1 − x and define, for (v, D) ∈ X,   σ −1 0 QB v−D σ1 v − Sbωnσ σ1,n n (v, D) := (ωn )   0 0 = σ1,n v − Sbσ1,n v + S − (ωnσ )−1 Sωnσ I bσ1,n v − (ωnσ )−1 D . First, we prove that there is a constant cB ∈ (0, ∞) such that (cf. (7.4.43))    B  (7.4.44) Qn (v, D) ≥ cB (v, D)X ∀ (v, D) ∈ X . μ1


0 v = − 1 v, 1 0 Set Dn (v) = (Sb − bS)σ1,n 0 . If σ1,n v − Sbσ1,n v − D = g and σ1,n π (v, D) ∈ X, then 0 σ1,n v − bSσ1,n v = g + D + Dn (v) = g + D −

1 v, 1σ 0 , 1,n π

which, in view of Lemma 7.4.5,(B3),(B2), implies 

g + D, 1μ0 1 1,n v, 1σ 0 = 1,n π c2 0

cμ2 0 1,n

with

μ1,n

= 1, 1μ0 = 1,n

1 −1

μ01,n (x) dx

and (note that, due to (7.4.34), c2 0 ≤ c4 c2 0 ) μ1,n

μ1

  vσ 0 = g + D − π −1 v, 1σ 0 μ0 ≤ gμ0 + cμ0 |D| + 1,n

1,n

1,n

1,n

    g + D, 1μ0 

1,n

1,n

cμ0

1,n

   (7.4.34)  ≤ 2 gμ0 + cμ0 |D| ≤ cB,1 gμ0 + |D| . 1,n

1,n

1,n

0 bv − bD = bg, we also get, again by Moreover, since σ1,n bv − bSσ1,n Lemma 7.4.5,(B3),    g, bμ0  (7.4.34)  1,n  |D| =   ≤ cB,2 gμ0 . 1,n  b, 1μ0  1,n

Consequently, vσ 0 ≤ cB,3 gμ0 and, since 1,n

1,n

  σ −1 0 σ1,n ωnσ v − Sbσ1,n QB ωnσ v − D , n (v, D) = (ωn ) we have    σ   σ B (ω v, D) ≤ cB,4  Q (v, D)  ω n n n X Now (7.4.44) follows from Corollary 7.4.7.

μ01,n

.


Analogously to the proof in Case A, we proceed with   ∗ ∗  P (p , D )(vn , D) n

n

μ1

     0 μ01  μ01 0 σ −1 σ σ −1  σ ≥ c3  v − S bσ v + L S − (ω ) S ω I v − L (ω ) D bσ 1,n n n n n n n 1,n 1,n n+1 n+1       B " ! σ −1 μ01 0 σ −1 σ σ −1  = c3  Q (v , D) + M ((ω ) , ω ) bv + D (ω ) − L (ω ) n n n n n n+1 n   n n

μ1

,

μ1

where we used the notation  μ01  0 w1 (Sw2 I − w2 S)σ1,n v [M0n (w1 , w2 )]v = I − Ln+1 and Lemma 7.4.8 as well as Lemma 7.4.5,(B1). In the same manner as in Case A, we get that, for all sufficiently large n,   0  " !  M0 ((ωσ )−1 , ωσ ) bvn + D (ωσ )−1 − Lμ1 (ωσ )−1  n n n n n+1 n  

μ1



cB (v, D)X 2

∀(v, D) ∈ X .

 

The assertion is obtained in the same way as in Case A.
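In actual computations, a uniform bound of the type stated in Lemma 7.4.11 can be monitored for the assembled linear systems: the norm of the inverse equals the reciprocal of the smallest singular value of the assembled matrix. The sketch below is not part of the book's text and uses a toy placeholder matrix.

```python
import numpy as np

# Sketch (not from the book): the stability constant in (7.4.48) can be
# monitored for assembled Jacobian matrices.  A_n is a placeholder for a
# matrix representation of P_n'(p_n^*, D^*) with respect to orthonormal bases
# of X_n and Y_n; otherwise a diagonal rescaling by the discrete weights has
# to be applied before taking singular values.
def inverse_norm_bound(A_n):
    smallest_sv = np.linalg.svd(A_n, compute_uv=False).min()
    return 1.0 / smallest_sv   # spectral norm of the inverse of A_n

A_n = np.array([[2.0, 0.3], [0.1, 1.5]])   # toy placeholder matrix
print(inverse_norm_bound(A_n))
```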

Lemma 7.4.12 Let $\tau$ be defined as in Lemma 7.4.9 and $\varepsilon > 0$ be sufficiently small. Then, for all $(v_n, D) \in X_n$,
\[
\big\| P_n'(v_n, D) - P_n'(p_n^*, D^*) \big\|_{\mathcal{L}(X_n, Y_n)} \le c_{11}
\begin{cases}
n^{(\tau+2\varepsilon)(1+\gamma_2)}\,\|v_n - p_n^*\|_{\sigma_1}^{\gamma_2} & : \text{Case A},\\[1mm]
n^{2\varepsilon+\tau}\Big( n^{(2\varepsilon+\tau)\gamma_2}\,\|v_n - p_n^*\|_{\sigma_1}^{\gamma_2} + n^{\tau\gamma_3}\,\|v_n - p_n^*\|_{\sigma_1}^{\gamma_3} \Big) & : \text{Case B},
\end{cases}
\]

with a constant c11 ∈ (0, ∞). Proof In Case A, by (7.4.39), (7.4.5), and (7.4.38), we get !   "  P (vn , D) − P  (p∗ , D ∗ ) (  vn , D) n

n

n

μ1

!  " ≤ cμ1  Fz (·, σ1 vn ) − Fz (·, σ1 pn∗ ) σ1 vn ∞ γ 1+γ  ≤ cμ1 cz σ1 ∞ 2 vn − pn∗ ∞2  vn ∞  γ 1+γ 1+γ ≤ cμ1 cz σ1 ∞ 2 c∞,σ21 nτ (1+γ2 ) vn − pn∗ σ2  vn σ1 . 1

  In Case B, first of all we estimate the norm Fz (·, σ1 vn ) − Fz (·, σ1 pn∗ )C0,ε . On the one hand, in view of (7.4.7) we have   Fz (·, σ1 vn ) − Fz (·, σ1 p∗ ) n



γ γ  ≤ cz σ1 ∞2 vn − pn∗ ∞2 .


On the other hand, in the same manner as at the end of the proof of Lemma 7.4.10 (with Fz instead of F ), by setting g(x) = σ1 (x)vn (x) and h(x) = σ1 (x)pn∗ (x) and choosing ε0 ∈ (0, min {α1 , β1 , m + δ0 }) as well as ε = ε0 γ2 , we can estimate   Fz (x1 , σ1 (x1 )p∗ (x1 )) − Fz (x1 , σ1 (x1 )vn (x1 )) − Fz (x2 , σ1 (x2 )p∗ (x2 )) + Fz (x2 , σ1 (x2 )vn (x2 )) n

n

  = Fz (x1 , h(x1 )) − Fz (x1 , g(x1 )) − Fz (x2 , h(x2 )) + Fz (x2 , g(x2 ))   ≤ Fz (x1 , h(x1 ) + [g(x2 ) − h(x2 )]) − Fz (x1 , h(x1 ) + [g(x1 ) − h(x1 )])   + Fz (x2 , g(x2 )) − Fz (x1 , g(x2 )) − Fz (x2 , h(x2 )) + Fz (x1 , h(x2 ))  + Fz (x1 , h(x2 ) + [g(x2 ) − h(x2 )]) − Fz (x1 , h(x1 ) + [g(x2 ) − h(x2 )])  − Fz (x1 , h(x2 )) + Fz (x1 , h(x1 ))  γ ≤ cz g(x2 ) − h(x2 ) − g(x1 ) + h(x1 ) 2   + 

1!

  +

1

0

 "  Fxz (x2 + t (x1 − x2 ), g(x2 )) − Fxz (x2 + t (x1 − x2 ), h(x2 )) dt  |x1 − x2 |

!

Fzz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )] + [g(x2 ) − h(x2 )])

0

 "   − Fzz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )]) dt  h(x1 ) − h(x2 )  γ γ ≤ cz g − hC20,ε0 |x1 − x2 |ε0 γ2 + cxz g(x2 ) − h(x2 ) 3 |x1 − x2 |  γ   + czz g(x2 ) − h(x2 ) 3 h(x1 ) − h(x2 ) γ2 |x C0,ε0 1

≤ cz g − h

γ

γ

− x2 |ε + cxz g − h∞3 |x1 − x2 | + czz g − h∞3 hC0,ε |x1 − x2 |ε

 γ γ  γ γ  ≤ cz σ1 C20,ε0 vn − pn∗ C20,ε0 + cxz σ1 ∞3 vn − pn∗ ∞3 21−ε  γ    + czz vn − pn∗ ∞3 σ1 C0,ε pn∗ C0,ε |x1 − x2 |ε .

Taking the second relation in (7.4.33) into account we summarize   Fz (·, σ1 vn ) − Fz (·, σ1 p∗ ) n

C0,ε

  γ γ ≤ c11,1 vn − pn∗ C20,ε0 + vn − pn∗ ∞3 . (7.4.45)


Consequently, in view of (7.4.39) and Lemma 7.4.9, we get !   "  P (vn , D) − P  (p∗ , D ∗ ) (  vn , D) n n n

μ1

  vn C0,ε ≤ cμ0 SC0,ε →C Fz (·, σ1 vn ) − Fz (·, σ1 pn∗ )C0,ε σ1 C0,ε  1

 γ γ  vn C0,ε ≤ cμ0 SC0,ε →C σ1 C0,ε c11,1 vn − pn∗ C20,ε0 + vn − pn∗ ∞3  1

 γ ! γ γ2 ≤ cμ0 SC0,ε →C σ1 C0,ε c11,1 cε02,∞ c∞,σ n(2ε+τ )γ2 vn − pn∗ σ2 1

1

1

 γ " γ3 τ γ3  vn − pn∗ σ3 cε,∞ c∞,σ1 n2ε+τ  vn σ1 , + c∞,σ 1n 1

(7.4.46)  

which proves the assertion also in Case B.

The following proposition summarizes the essential results we obtained up to now in the present section (see Lemmata 7.4.10, 7.4.11, and 7.4.12). Moreover, in Case B we assume that $\gamma_3 = \gamma_2$.

Proposition 7.4.13 If the solution $(u^*, D^*) \in \mathbf{X}$ with $u^* = \sigma_1 \tilde u^*$ of problem (7.4.1),(7.4.3) resp. (7.4.2),(7.4.4) has the property $\tilde u^* \in \mathbf{C}^{m,\delta}$, if $\gamma$ defined in Lemma 7.4.6 is greater than $\tfrac12$, and if $p_n^* \in \mathbb{P}_n$ is a sequence of polynomials satisfying (7.4.33) (the existence of which is guaranteed by Lemma 7.4.4 for a sufficiently small $\varepsilon > 0$), then for the operators $P_n : X_n \longrightarrow Y_n$ defined by (7.4.28) we have that
\[ \big\| P_n(p_n^*, D^*) \big\|_{\mu_1} \le c_9\, n^{-(m+\delta-\varepsilon)} \tag{7.4.47} \]
and that, for all sufficiently large $n$, their Fréchet derivatives $P_n'(p_n^*, D^*) : X_n \longrightarrow Y_n$ at the points $(p_n^*, D^*)$ are invertible, where
\[ \Big\| \big[ P_n'(p_n^*, D^*) \big]^{-1} \Big\|_{Y_n \longrightarrow X_n} \le c_{10}. \tag{7.4.48} \]
Here the constants $c_9, c_{10} \in (0, \infty)$ do not depend on $n$. Furthermore, there is a finite constant $c_{11} \ne c_{11}(n, v_n, D)$ such that, for all $(v_n, D) \in X_n$,
\[ \big\| P_n'(v_n, D) - P_n'(p_n^*, D^*) \big\|_{\mathcal{L}(X_n, Y_n)} \le c_{11}\, n^{(2\varepsilon+\tau)(1+\gamma_2)}\, \|v_n - p_n^*\|_{\sigma_1}^{\gamma_2}, \tag{7.4.49} \]
where $\tau$ is defined in Lemma 7.4.9 and $\gamma_2 = \gamma_3$ is given by conditions (7.4.5) resp. (7.4.7),(7.4.25).

Now we are able to prove the following proposition on the applicability of the modified Newton iteration method (7.4.29) for approximating a solution of


Eq. (7.4.28), which converges to a solution of Eq. (7.4.27) representing our original problem (7.4.1) resp. (7.4.2).

Proposition 7.4.14 Assume that the solution $(u^*, D^*)$ of problem (7.4.1),(7.4.3) resp. problem (7.4.2),(7.4.4) admits the representation $u^* = \sigma_1 \tilde u^*$ with $\tilde u^* \in \mathbf{C}^{m,\delta}$ and $\sigma_1(x)$ defined in (7.4.11). Furthermore, let the assumptions (F1) resp. (F2), (F2') be fulfilled such that $\gamma_2(m+\delta) > \tau(1+\gamma_2)$ and $\gamma > \tfrac12$, where $\tau$ and $\gamma$ are defined in Lemma 7.4.9 and Lemma 7.4.6, respectively. Then, for all sufficiently large $n$, there exists an element $(v_n^{(0)}, D_n^{(0)}) \in X_n$ such that Eqs. (7.4.29) possess a unique solution $(v_n^{(k)}, D_n^{(k)})$ for all $k \in \mathbb{N}_0$. The sequence $\big(v_n^{(k)}, D_n^{(k)}\big)_{k=0}^{\infty}$ converges in the norm of $\mathbf{X}$ to a solution $(v_n^*, D_n^*)$ of Eq. (7.4.28), where
\[ \big\| (v_n^*, D_n^*) - (\tilde u^*, D^*) \big\|_{\mathbf{X}} \le c\, n^{-(m+\delta-\varepsilon)} \tag{7.4.50} \]
for all sufficiently small $\varepsilon > 0$ and a finite constant $c \ne c(n)$.

Proof Choose $v_n^{(0)} = p_n^*$ and $D_n^{(0)} = D^*$ as well as a sufficiently small $\varepsilon > 0$. We can apply Proposition 2.7.3 replacing $P$, $X$, and $Y$ by $P_n$, $X_n$, and $Y_n$, respectively, as well as setting $\eta = \gamma_2$, $M_0 = c_{10}$, $N_0 = c_{10} c_9 n^{-(m+\delta-\varepsilon)}$, and
\[ C_0 = c_{11} \begin{cases} n^{(2\varepsilon+\tau)(1+\gamma_2)} & : \text{Case A}, \\ n^{4\varepsilon+\tau(1+\gamma_2)} & : \text{Case B}. \end{cases} \]

Indeed, condition (a) of Proposition 2.7.3 is satisfied due to (7.4.48), condition (b) due to (7.4.48) and (7.4.47), and condition (c) due to (7.4.49). Moreover, η

1+γ2 γ2 c9 c11 n−[γ2 (m+δ−ε)−(2ε+τ )(1+γ2)]

h0 = M0 N0 C0 = c10



1 4

for all sufficiently large n, since γ2 (m + δ − ε) > (2ε + τ )(1 + γ2 )

if ε
0 ,

Fz (1, 0) < 0 ,

(7.4.55)


and    f (x1 ) − f  (x2 ) ≤ cf |x1 − x2 |γf ,

xj ∈ [−1, 1] .

(7.4.56)

Furthermore, there exist nonnegative constants j , j = 1, 2, with 1 2 < 1 and − 1 ≤ Fz (x, z) ≤ 2 ,

(x, z) ∈ [−1, 1] × R .

(7.4.57)

At some places we will make use of the special structure and certain properties of the integral equation under consideration in Sect. 8.3.2, namely that the nonlinearity is of the form   b0 (1 − x) F (x, z) = g +z , (7.4.58) 2 where the given function g : [0, b0] −→ R is two times continuously differentiable with g  (0) < 0 ,

g  (b0 ) > 0 ,

and g  (x) > 0 , −1 ≤ x ≤ 1 ,

(7.4.59)

and, for the solution (u∗ , D ∗ , E ∗ ), we have h (x) < 0 ,

−1 < x < 1 ,

where h(x) =

b0 (1 − x) + u∗ (x) . 2

(7.4.60)

Moreover, g : R −→ R is a two times continuously differentiable continuation of the function g : [0, b0 ] −→ R with g  (x) = g  (0)

∀x < 0

and g  (x) = g  (b0 )

∀ x > b0 .

(7.4.61)

First, we try to find a representation of the solution u∗ (x) of problem (7.4.52), (7.4.53) like in Sect. 7.4.1. For this, we define the weight function σ2 (x) = 

-

1 − x2 1 + [b2 (x)]2

exp

1 −1

g2 (t) dt t −x

. ,

where g2 (x) =

1 1 (1 − x)g2 (−1) + (1 + x)g2 (1) 2 2

and (cf. (7.4.55)) g2 (±1) =

1 arctan Fz (±1, 0) , π

(7.4.62)


as well as b2 (x) = tan[πg2 (x)]. We have the representation (cf. the proof of (7.4.12) and (7.4.13)) σ2 (x) = (1 − x)α2 (1 + x)β2 w2 (x)

(7.4.63)

with α2 = 1 + g2 (1), β2 = 1 − g2 (−1), and K w2 ∈ C0,η [−1, 1] and w2 (x) > 0 , −1 ≤ x ≤ 1 .

(7.4.64)
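For a concrete nonlinearity, the endpoint exponents of the weight σ2 follow directly from the slopes Fz(±1, 0). A minimal sketch (not from the book) using the formulas (7.4.62) and (7.4.63); the slope values in the example are hypothetical and only respect the sign conditions of (7.4.55).

```python
import math

# Sketch (not from the book): endpoint exponents of sigma_2 computed from the
# slopes F_z(+-1, 0) via g_2(+-1) = arctan(F_z(+-1, 0)) / pi and
# alpha_2 = 1 + g_2(1), beta_2 = 1 - g_2(-1), following (7.4.62) and (7.4.63).
# The slope values in the example are hypothetical; they only respect the sign
# conditions F_z(-1, 0) > 0 > F_z(1, 0) from (7.4.55).
def endpoint_exponents(Fz_minus1, Fz_plus1):
    g2_m1 = math.atan(Fz_minus1) / math.pi
    g2_p1 = math.atan(Fz_plus1) / math.pi
    return 1.0 + g2_p1, 1.0 - g2_m1     # (alpha_2, beta_2)

print(endpoint_exponents(2.0, -0.5))
```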

0 0 for all x ∈ R and |g  (x1 ) − g  (x2 )| ≤ cg |x1 − x2 |γ3 for all x1 , x2 ∈ R. Hence, F : [−1, 1] × R −→ R possesses also continuous partial derivatives Fxz , Fzz : [−1, 1] × R −→ R, satisfying |Ft z (x, z1 ) − Ft z (x, z2 )| ≤ ct |z1 − z2 |γ3 ,

(7.4.74)

(x, zj ) ∈ [−1, 1] × R , t ∈ {x, z}. Equation (7.4.52) we write as P(v, D, E) = 0 ,

(v, D, E) ∈ X ,

(7.4.75)

where u = σ2 v, X = L2√σ1 × C2 . As norm in X we take (v, D, E)X =

6

 v2σ2

+ D

+ E·2μ2

=



1

−1

 σ2 (x)|v(x)|2 dx +

1

−1

|D + Ex|2 μ2 (x) dx .

We consider P as an (nonlinear) operator from the Hilbert space X into the Hilbert space Y = L2√μ1 . The approximating equation (7.4.72) we write as Pn (vn , Dn , En ) = 0 ,

(vn , Dn , En ) ∈ Xn ,

(7.4.76)

where Xn = Pn−1 × C2 is considered as a (finite dimensional) subspace of X. Of course Pn is an operator from Xn into Yn = Pn+2 ⊂ Y. Instead of the Newton method (7.4.73) we investigate the modified Newton iteration method Pn (vn(k) , Dn(k) , En(k) ) + Pn (vn(0) , Dn(0) , En(0) )(vn(k) , Dn(k) , En(k) ) = 0 , (7.4.77) where Pn (vn(0) , Dn(0) , En(0) ) denotes the Frechet derivative of Pn at the point (0) (0) (0) (vn , Dn , En ) and vn(k+1) = vn(k) + vn(k) ,

Dn(k+1) = Dn(k) + Dn(k) ,

En(k+1) = En(k) + En(k) .
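Structurally, (7.4.77), like (7.4.29), is a Newton iteration with frozen derivative: the Fréchet derivative is assembled and factorized only once, at the initial guess, and every step solves a linear system with this fixed operator. A minimal finite-dimensional sketch follows; P and dP are placeholders for the discretized operator and its Jacobian, and this is not the book's implementation.

```python
import numpy as np

# A minimal sketch of a modified Newton iteration with frozen derivative, as
# used in (7.4.29) and (7.4.77).  P and dP are placeholders for the
# discretized operator and its Jacobian.
def modified_newton(P, dP, x0, tol=1e-12, maxit=100):
    x = np.array(x0, dtype=float)
    J0_inv = np.linalg.inv(dP(x))        # frozen derivative at the initial guess
    for _ in range(maxit):
        delta = -J0_inv @ P(x)           # solve P'(x^(0)) delta = -P(x^(k))
        x = x + delta
        if np.linalg.norm(delta) < tol:
            break
    return x

# toy example: the scalar equation x^2 - 2 = 0 written as a one-dimensional system
print(modified_newton(lambda x: np.array([x[0] ** 2 - 2.0]),
                      lambda x: np.array([[2.0 * x[0]]]),
                      [1.5]))
```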


In particular, that means, μ0



2 σ2 vn − S [Fz (·, σ2 vn(0) )σ2 vn ] − Dn − En · . Pn (vn(0) , Dn(0) , En(0) )(vn , Dn , En ) = Ln+2

(7.4.78) In what follows we assume that ( u∗ , D ∗ , E ∗ ) is a solution of (7.4.62) with u∗ ∈ m,δ C , where m ≥ 1 (see Lemma 7.4.18). If we choose δ0 ∈ (0, δ) and take into account Lemma 7.4.4, then,  ∞a sufficiently small ε > 0, there exist constants  for c1 , c2 ∈ R and a sequence qn∗ n=1 of polynomials qn∗ ∈ Pn−1 such that, for n ∈ N,   ∗   u − qn∗ C0,ε ≤ c1 n−(m−1+δ−ε) ln(n + 1)

  and qn∗ Cm−1,δ0 ≤ c2 , (7.4.79)

where cj = cj (n). Setting pn∗ (x) = u∗ (−1) +



x

−1

qn (y) dy ,

we can conclude   ∗  u (x) − pn∗ (x) ≤



x −1

    ∗    ∗   u − qn∗ ∞ u (y) − qn∗ (y) dy ≤ 2 

and   ∗    u∗ (x2 ) + pn∗ (x2 ) =  u (x1 ) − pn∗ (x1 ) −

x2 x1

     ∗   ∗   u∗ (y) − qn∗ (y) dy  ≤  u −qn ∞ |x1 −x2 | .

     (m)  ∗ (m−1) Moreover, pn∗ = qn and pn∗ ∞ ≤  u∗ ∞ + 2qn∗ ∞ . In summary,   ∗    u − pn∗ C0,1 ≤ c1 n−(m−1+δ−ε) ln(n + 1) and pn∗ Cm,δ0 ≤ c2 ,

(7.4.80)

n ∈ N, with finite constants cj = cj (n). Let b2,n (x) = Fz (x, σ2 (x)pn∗ (x)). Furthermore, define (cf. (7.4.62)) σ2,n (x) = 

-

1 − x2 1 + [b2,n (x)]2

exp

1 −1

g2,n (t) dt t −x

. ,

μ2,n (x) =

1  , σ2,n (x) 1 + [b2,n (x)]2

where the continuous functions g2,n : [−1, 1] −→ R are given by the relations 1 + ib2,n (x) =

6

1 + [b2,n (x)]2 eπig2,n (x) ,



1 1 < g2,n (x) < . 2 2

Lemma 7.4.19 For all sufficiently large n, there is one and only one xn∗ ∈ (−1, 1)  (x ∗ ) < 0. such that b2,n (xn∗ ) = 0. Moreover, b2,n n


Proof Set hn (x) = b0 (1−x) + σ2 (x)pn∗ (x), i.e., b2,n (x) = g  (hn (x)). Due to the 2 strong monotonicity of g  : [0, b0 ] −→ R, there is exactly one y ∗ ∈ (0, b) such that g  (y ∗ ) = 0. Moreover, as a consequence of (7.4.80), there is a positive constant K0 ∈ R such that  ∗ p 

n ∞

≤ K0

∀n ∈ N.

(7.4.81)

This implies, in virtue of the continuity of σ2 (x) and σ2 (±1) = 0, that there is a compact interval [a1 , a2 ] ⊂ (−1, 1) with the property . - ∗ . y + b0 y∗ ∪ , b0 hn (x) ∈ 0, 2 2

∀ x ∈ [−1, 1] \ [a1 , a2 ] , ∀ n ∈ N .

In other words, b2,n (x) = 0 for all x ∈ [−1, 1] \ [a1 , a2 ] and all n ∈ N. Hence, it is sufficient to show that, for all sufficiently large n, the function hn (x) = b0 (1−x) + 2 σ2 (x)pn∗ (x) is strongly monotone on [a1 , a2 ]. As a consequence of (7.4.60), for the function h(x) = b0 (1−x) + σ2 (x) u∗ (x), we have h (x) ≤ ρ[a1 ,a2 ] < 0 and, due 2 ρ  to (7.4.79) and (7.4.80), also hn (x) ≤ − [a12,a2 ] < 0 (note that σ2 (x) is continuous on [a1, a2 ]) for all x ∈ [a1 , a2 ] ⊂ (−1, 1) and all sufficiently large n. Because of    b2,n (x) = g  hn (x) hn (x)  (x ∗ ) < 0. and assumption (7.4.59), we also have b2,n n
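Since xn* is not known a priori, in computations it has to be located numerically; by Lemma 7.4.19 the function b2,n changes sign exactly once on (−1, 1), so a simple bisection suffices. The following sketch (not from the book) uses a strictly decreasing placeholder function in place of b2,n(x) = g'(hn(x)).

```python
# Sketch (not from the book): locating the zero x_n^* of b_{2,n} by bisection,
# using that b_{2,n}(-1) > 0 > b_{2,n}(1) and that the sign changes only once.
# The function passed in the example is a strictly decreasing placeholder.
def find_unique_zero(b, lo=-1.0, hi=1.0, tol=1e-14):
    assert b(lo) > 0.0 > b(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if b(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(find_unique_zero(lambda x: 0.3 - x))
```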

 

Taking into account b2,n (−1) = g  (b0 ) > 0, b2,n (1) = g  (0) < 0, and Lemma 7.4.19, we see that b2,n (x)

> 0 : −1 ≤ x < xn∗ , < 0 : xn∗ < x ≤ 1 ,

so that the weight functions 0 σ2,n (x) =

b2,n (x)σ2,n (x) xn∗ − x

and μ02,n (x) =

b2,n (x)μ2,n (x) xn∗ − x

are well defined and

cn (x) > 0

∀ x ∈ [−1, 1] ,

where cn (x) :=

⎧ x∗ − x ⎪ ⎪ ⎪ ⎨ b2,n (x) ⎪ ⎪ ⎪ ⎩−

  : x ∈ [−1, 1] \ xn∗ ,

1  (x ∗ ) : b2,n n

x = xn∗ .

0 (x) and μ0 (x) with that of σ 0 (x) and μ0 (x) If we compare the definition of σ2,n 2,n 1,n 1,n in Sect. 7.4.3 for the Case B, then we observe as an essential difference, that in the


present situation xn∗ depends on n, while in Sect. 7.4.3 x ∗ = 1 is fixed. In particular, this will cause some further difficulties in the convergence analysis here. That’s why, at some places, we will refer to the additional assumption (F3’) formulated on page 522 to overcome this problem. Lemma 7.4.20 We have 0 v ∈P (C1) σ2,n vm − bn∗ Sσ2,n m m+2 ∀ vm ∈ Pm , m ∈ N,   0 ∗ (C2) σ2,n v − bn Sσ2,n v μ0 = vσ 0 ∀ v ∈ L26 0 , 2,n

2,n

(C3) ∃ v ∈

L2 0 σ2,n

: σ2,n −

0 v bn∗ Sσ2,n

=g∈

σ2,n

L26

μ02,n

⇐⇒ g, p μ0 = 0 ∀ p ∈ P2 , 2,n

(C4) μ2,n p + Sμ2,n p =  0 ∀ p ∈ P2 ,  (C5) μ2,n I + Sμ2,n I σ2,n v − Sσ2,n v = v ∀ v ∈ L2σ2,n . where bn∗ (x) = xn∗ − x. Proof The assertions follow applying Proposition 5.2.25 and Proposition 5.2.26,(b)  as well as relation (5.2.63) in case of κ = −2 and B(x) ≡ −1 or B(x) = x −xn∗.  Lemma 7.4.21 There exist constants c3 , c4 , c5 ∈ R such that, for all n ∈ N and ν ∈ {σ, μ}, 0 < c3 ≤ ωnν (x) ≤ c4 ,

−1 ≤ x ≤ 1 ,

(7.4.82)

and  ν   ω  0,γ , (ων )−1  0,γ ≤ c5 , n C n C

(7.4.83)

  ν2 (x) and γ = min {α2 , β2 , m + δ0 }. For ν ∈ σ 0 , μ0 , ν2,n (x) relation (7.4.82) is also valid for all sufficiently large n.

where ωnν (x) =

Proof The representation (7.4.63) implies σ2 ∈ C0,γ . Note that   σ2 p∗  ≤ K n ∞

(7.4.84)

for some real constant K = K(n) and set Mt = sup {|Ft z (x, z)| : (x, z) ∈ [−1, 1] × [−K, K]} ,

t ∈ {x, z} .


Then Mt < ∞ and     b2,n (x1 ) − b2,n (x2 ) ≤ Fz (x1 , σ2 (x1 )p∗ (x1 )) − Fz (x2 , σ2 (x1 )p∗ (x1 )) n n   + Fz (x2 , σ2 (x1 )pn∗ (x1 )) − Fz (x2 , σ2 (x2 )pn∗ (x2 )) ≤ Mx |x1 − x2 | + Mz |σ2 (x1 )pn∗ (x1 ) − σ2 (x2 )pn∗ (x2 )|   ≤ Mt |x1 − x2 | + Mz σ2 C0,γ pn∗ C0,γ |x1 − x2 |γ ≤ c6 |x1 − x2 |γ .

  Thus, due to (7.4.57), b2,n C0,γ ≤ max { 1 , 2 } + c6 =: c7 and, consequently,   ! " g2,n  0,γ ≤ c8 , since g2,n (x) = π −1 arctan b2,n (x) by definition. Now the C estimates (7.4.82) and (7.4.83) follow from 

ωnσ (x)

1 + [b2,n (x)]2 π [S (g2 −g2,n )](x) =  e , 1 + [b2 (x)]2 

ωnμ (x) g2 − g2,n

1 + [b2,n (x)]2 π [S (g2,n −g2 )](x) =  e , 1 + [b2 (x)]2   0,γ ∈ C0 = h ∈ C0,γ : h(±1) = 0 , 0,γ

and the continuity of the operator S : C0 −→ C0,γ (see Proposition 5.2.17). 0 Finally, let ν ∈ {σ, μ} and consider ωnν (x). In this situation we have 0

ωnν (x) =

Since

ν20 (x) 0 (x) ν2,n

=

b2 (x)(xn∗ − x) ν b2 (x)ν2 (x)(xn∗ − x) = ω (x) . (x ∗ − x)b2,n (x)ν2,n (x) (x ∗ − x)b2,n (x) n

 b2 (x) − b2 (x ∗ )   b2 (x) = = b2 (ξ(x)) with ξ(x) between x ∗ and x, where ∗ ∗ x −x x −x

   π 1 + [b2 (x)]2 (2 − α2 − β2 ) π (1 + b2 ∞ ) (2 − α2 − β2 ) π(2 − α2 − β2 )    ≤ b2 (x) = ≤ , 2 2 2

it remains to show that there are constants c3 and c4 such that 0 < c3 ≤

b2,n (x) − b2,n (xn∗ ) b2,n (x) = ≤ c4 , xn∗ − x xn∗ − x

−1 ≤ x ≤ 1


holds true. We use the notations y ∗ ∈ (0, b0 ), [a1 , a2 ] ⊂ (−1, 1), hn (x) = b0 (1−x) + σ2 (x)pn∗ (x) from the proof of Lemma 7.4.19, as well as h(x) = b0 (1−x) + 2 2 ∗ (x), and define the positive constants M , M by u∗ (x) = b0 (1−x) + u σ2 (x) 1 2 2 # .$ . - ∗ y∗ y + b0 , b0 M1 = min |g  (y)| : y ∈ 0, ∪ 2 2 and # .$ . - ∗ y∗ y + b0 , b0 M2 = max |g  (y)| : y ∈ 0, . ∪ 2 2 Recalling (from the proof of Lemma 7.4.19) that b2,n (x) = g  (hn (x)) and that (for all suffciently large n) the xn∗’s lie in the  ∗compact interval [a1, a2 ] ⊂ (−1, 1), y∗ 0 where hn ([−1, 1] \ [a1 , a2 ]) ⊂ 0, 2 ∪ y +b 2 , b0 , we can estimate, for x ∈ [−1, 1] \ [ a1 , a2 ] with some fixed a1 ∈ (−1, a1) and a2 ∈ (a2 , 1), b2,n (x) M2 M1 ≤ ∗ ≤ < ∞. 2 xn − x min {a1 − a1 , a2 − a2 }   With the help of (7.4.79) and (7.4.80), we obtain hn − h ∞,[ a , a ] −→ 0. 1 2 Moreover, due to (7.4.59), (7.4.60), and (7.4.61), there are constants M1 , M2 such that a1 , a2 ] , 0 < M1 ≤ −g  (hn (x)) h (x) ≤ M2 < ∞ ∀ x ∈ [     M and there is an n0 ∈ N such that g  ∞,[0,b ] hn − h ∞,[ a , a ] < 21 for all n > n0 0 1 2     (note that g  ∞,[0,b ] = g  ∞,(−∞,∞) due to (7.4.61)). Hence, for such n and 0 x ∈ [ a1 , a2 ], we get     b2,n (x) = −g  hn (ξ(x, n)) hn ξ(x, n) ∗ xn − x   !       " = −g  hn (ξ(x, n)) h ξ(x, n) + g  hn (ξ(x, n)) h ξ(x, n) − hn ξ(x, n))  ∈

M M1 , M2 + 1 2 2



for a certain ξ(x, n) between xn∗ and x, and the proof is finished.

 

Corollary 7.4.22 Because of (7.4.82), all norms .σ2 , .σ2,n , .σ 0 , and .σ 0 2 2,n resp. .μ2 , .μ2,n , .μ0 , and .μ0 are equivalent uniformly with respect to all 2 2,n sufficiently large n.


Remark 7.4.23 From the proof of Lemma 7.4.9, it is easy to see that this lemma remains true, if we replace τ by τ = 1 + max {α2 , β2 } and σ1 by σ2 , i.e., we have p∞ ≤ c∞,σ2 nτ pσ2

and pC0,ε ≤ cε,∞ n2ε p∞

∀ p ∈ Pn .

(7.4.85)

Lemma 7.4.24 If P( u∗ , D ∗ , E ∗ ) = 0 with ( u∗ , D ∗ , E ∗ ) ∈ Cm,δ × R2 with m ≥ 1, then, for ε ∈ (0, m + δ),   Pn (p∗ , D ∗ , E ∗ ) ≤ c9 n−(m−1+δ−ε) , n μ 2

where the constant c9 ∈ R does not depend on n ∈ N and we additionally assume that the partial derivative Fzz : [−1, 1] × R −→ R exists and is continuous. μ0

2 Proof Because of Ln+2 P( u∗ , D ∗ , E ∗ ) = 0, we can estimate

 0   μ2  ∗ − ∗ ) L ≤ σ (p u  n+2 2 n  μ2

  Pn (p∗ , D ∗ , E ∗ ) n

μ2

 μ02 ! " +Ln+2 S F (·, σ2 pn∗ ) − F (·, σ2 u∗ ) μ . 2

To handle the terms on the right-hand side we use the estimates (7.4.80) and  μ02  L g  ≤ c 0 g∞ , μ n+2 μ 2

2

(7.4.86)

the fact that σ2 ∈ C0,ε0 for sufficiently small ε0 ∈ (0, ε) (see (7.4.63) and (7.4.64)), 0,ε and further the continuity of the operator S : C0 0 −→ C (see Proposition 5.2.17). Indeed, first we set, for s ∈ {z, xz, zz},   (s) K] , c9 := max |Fs (x, z) : (x, z) ∈ [−1, 1] × [−K, ∈ (0, ∞) is appropriately chosen (cf. (7.4.84)). Then, by setting h(x) = where K σ2 (x) u∗ (x) and g(x) = σ2 (x)pn∗ (x), we can estimate   F (x1 , h(x1 )) − F (x1 , g(x1 )) − F (x2 , h(x2 )) + F (x2 , g(x2 ))   ≤ F (x1 , h(x1 ) + [g(x2 ) − h(x2 )]) − F (x1 , h(x1 ) + [g(x1 ) − h(x1 )])   + F (x2 , g(x2 )) − F (x1 , g(x2 )) − F (x2 , h(x2 )) + F (x1 , h(x2 ))  + F (x1 , h(x2 ) + [g(x2 ) − h(x2 )]) − F (x1 , h(x1 ) + [g(x2 ) − h(x2 )])  − F (x1 , h(x2 )) + F (x1 , h(x1 ))


 (z)  ≤ c9 g(x2 ) − h(x2 ) − g(x1 ) + h(x1 )   + 

  Fx (x2 + t (x1 − x2 ), g(x2 )) − Fx (x2 + t (x1 − x2 ), h(x2 )) dt  |x1 − x2 |

1! 0

  +

"

1!

Fz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )] + [g(x2 ) − h(x2 )])

0

"  − Fz (x1 , h(x2 ) + t[h(x1 ) − h(x2 )]) dt  |h(x1 − h(x2 )| (z) (xz) ≤ c9 g − hC0,ε0 |x1 − x2 |ε0 + c9 g − h∞ |x1 − x2 |

+ c9(zz) g − h∞ hC0,ε0 |x1 − x2 |ε0  ∗   u C0,ε0 · ≤ c σ2 C0,ε0 + σ2 ∞ + σ2 ∞ σ2 C0,ε0  · n−(m−1+δ−ε0 ) ln(n + 1)|x1 − x2 |ε0 =: c9,1 n−(m−1+δ−ε0 ) ln(n + 1)|x1 − x2 |ε0

for x1 , x2 ∈ [−1, 1]. Moreover,   F (·, σ2 p∗ ) − F (·, σ2 u∗ ) n



  ≤ c9(z) σ2 ∞ pn∗ − u∗ ∞ ≤ cσ2 ∞ n−(m−1+δ−ε0 ) ln(n + 1) =: c9,2 n−(m−1+δ−ε0 ) ln(n + 1) .

Hence,   Pn (p∗ , D ∗ , E ∗ ) n

μ2

      ≤ cμ0 σ2 ∞ pn∗ − u∗ ∞ + S C0,ε0 →C F (·, σ2 pn∗ ) − F (·, σ2 u∗ )C0,ε0 1

  ≤ cμ0 c9,1 + S C0,ε0 →C c9,2 n−(m−1+δ−ε0 ) ln(n + 1) ≤ c9 n−(m−1+δ−ε) , 1

where we took into account F (±1, σ2 (±1)pn∗ (±1)) = F (±1, 0) = F (±1, σ2 (±1) 0,ε u∗ (±1)), which implies F (·, σ2 pn∗ ) − F (·, σ2 u ∗ ) ∈ C0 0 .   Lemma 7.4.25 If γ > 12 , where γ is defined in Lemma 7.4.21, then the operators Pn (pn∗ , D ∗ , E ∗ ) : Xn −→ Yn are invertible for all sufficiently large n and  !  ∗ ∗ ∗ "−1    P (p , D , E ) n

n

for all these n with a constant c10 ∈ R.

L(Yn ,Xn )

≤ c10


Proof We write, with b2,n (x) = Fz (x, σ2 (x)pn∗ (x)) and bn∗ (x) = xn∗ − x (cf. Lemma 7.4.20), Pn (pn∗ , D ∗ , E ∗ )(vn , D, E) μ0

2 (σ2 vn − S b2,n σ2 vn ) − D − E· = Ln+2

  μ02 0 σ2 vn − S bn∗ σ2,n vn − D − E· = Ln+2 .  μ02 μ02  μ02 0 0 ωnσ σ2,n vn − S bn∗ σ2,n vn + Ln+2 S − (ωnσ )−1 S ωnσ I bn∗ σ2,n vn − Ln+2 (ωnσ )−1 (D + E·) = Ln+2

and define, for (v, D, E) ∈ X,   0 v − D − E· Qn (v, D, E) := (ωnσ )−1 σ2 v − Sbn∗ ωnσ σ2,n   0 v + S − (ωσ )−1 Sωσ I b∗ σ 0 v − (ωσ )−1 (D + E·) . = σ2,n v − Sbn∗ σ2,n n n n 2,n n

We show that there is a constant c∗ ∈ (0, ∞) such that Qn (v, D, E) μ2 ≥ c∗ (v, D, E)X

∀ (v, D, E) ∈ X .

(7.4.87)

The technique we used to prove the analogous inequality (7.4.44) is not applicable here, because of bn∗ (x) having its zero xn∗ inside the open interval (−1, 1) and this zero is a-priori not known. The method we use to prove (7.4.87) is essentially based on the application of Lemma 7.4.20,(C3),(C4),(C5) and works also in the case of inequality (7.4.44). 0 v = − 1 v, 1 Set Dn∗ (v) = (Sbn∗ − bn∗ S)σ2,n σ 0 . If (v, D, E) ∈ X satisfies π 2,n

0 σ2,n v − Sbn∗ σ2,n v − D − E· = g ,

then 0 σ2,n v − bn∗ Sσ2,n v = g + D + Dn (v) + E· = g + D − μ0

1 v, 1σ 0 + E· 2,n π μ0

= g + Dn0 p0 2,n − π −1 v, 1σ 0 + En0 p1 2,n , 2,n


μ0

where D+E· = Dn0 p0 2,n +En0 p1 2,n . This, in view of Lemma 7.4.20,(C3),(C4),(C5), implies

μ0 Dn0 = cμ0 π −1 v, 1μ0 − g, p0 2,n μ0 , 2,n

2,n

2,n

μ0 En0 = − g, p1 2,n μ0 , 2,n

,n g . v = μ2,n g + Sb2,n μ2,n g =: A ,n : L26 Note that, in virtue of Corollary 7.4.22, the operators A

μ02,n

−→ L26

are

0 σ2,n

uniformly bounded, say   A ,n 

L26

μ0 2,n

→L26

σ0 2,n

≤ cA .

Consequently, vσ 0 ≤ cA gμ0 , 2,n 2,n    0 D  ≤ c2 0 π −1 v 0 + g 0 ≤ 1 + c2 0 π −1 cA g 0 , n σ μ μ μ μ 2,n

2,n

 0 E  ≤ g n

μ02,n

2,n

2,n

2,n

,

from which we conclude /

v2 0 σ2,n

+ D

+ E·2μ0 2,n

/  2  2 = v2σ 0 + Dn0  + En0  ≤ 2,n



2  2 + 1 + c2 π −1 c gμ2,n . 1 + cA A 0 μ2,n

Since   0 Qn (v, D, E) = (ωnσ )−1 σ2,n ωnσ v − Sbn∗ σ2,n ωnσ v − D − E· , we have /  ωnσ v 2 0 + D + E·2 ≤ 0 σ 2,n

μ2,n



 2  σ  2 + 1 + c2 π −1 c ω Qn (v, D, E) 1 + cA A 0 n μ μ2,n

Now (7.4.87) follows from Lemma 7.4.21 and Corollary 7.4.22.

2,n

.


By using Lemma 7.4.8 and Lemma 7.4.21, we proceed with    ∗ ∗ ∗ P (p , D , E )(vn , D, E) n

n

μ2

     ∗ 0 μ02  μ02 ∗ 0 σ −1 σ σ −1  σ ≥ c3  v − S b σ v + L S − (ω ) S ω I σ v − L (ω ) (D + E ·) b 2,n n n n n n n n n 2,n 2,n n+2 n+2        ! σ −1 " μ02 0 σ −1 σ ∗ σ −1  = c3  Qn (vn , D, E) − Mn ((ωn ) , ωn ) bn vn + (ωn ) − Ln+2 (ωn ) (D + E ·)  ≥ c3 Qn (vn , D, E)μ2

μ2

μ2

     ! σ −1 " μ02 0 σ −1 σ ∗ σ −1  − M ((ω ) , ω ) b v − (ω ) − L (ω ) (D + E ·) , n n n n n n n+2 n   μ2

where we used the notation  μ02  0 w1 (Sw2 I − w2 S)σ2,n v [M0n (w1 , w2 )]v = I − Ln+2 and Lemma 7.4.8 as well as Lemma 7.4.20,(C1). Analogously to (7.4.40) and (7.4.41), we have    0   ≤ c∗,1 w1 ∞ w2  0,η (7.4.88) Mn (w1 , w2 )  2 C 2 L L√σ ,L√μ 2

2

and  0  M (w1 , w2 ) n

  L L2√σ ,L2√μ 2

−→ 0

if

n −→ ∞

(7.4.89)

2

for all (w1 , w2 ) ∈ C0,η × C0,η . Due to (7.4.35) we have (M1 (x) := x) 0 0     σ −1 (ω ) − Lμ2 (ωσ )−1  , (ωσ )−1 M1 − Lμ2 (ωσ )−1 M1  n n n n+2 n n+2 μ μ 2

#

 μ02   : w ∈ C0,γ ≤ sup w − Ln+2 2c5 μ 2

2

$ −→ 0

if n −→ ∞ ,

 μ0  since the convergence lim w − Ln 2 wμ = 0 for every w ∈ C[−1, 1] is uniform 2 n→∞   0,γ on the compact subset C2c5 = w ∈ C0,γ : wC0,γ ≤ 2c5 of C[−1, 1] (see Corollary 2.4.14 and Lemma 2.3.11). Also the convergence in (7.4.89) is uniform because of the compact embedding C0,γ × C0,γ ⊂ C0,η × C0,η (if η < γ ), so that   0   ! "  M0 ((ωσ )−1 , ωσ ) b∗ v − (ωσ )−1 − Lμ2 (ωσ )−1 (D + E·) n n n n n   n+2 n

μ2



c∗ (v, D, E)X 2


for all (v, D, E) ∈ X and all sufficiently large n. This implies    ∗ ∗ P (p , D )(vn , D, E) n

n

μ2



c3 c∗ (vn , D, E)X 2

which proves the lemma with c10 =

∀ (vn , D, E) ∈ Pn × C2 = Xn ,

 

2 c3 c∗ .

Lemma 7.4.26 Let τ be defined as in Remark 7.4.23 and ε > 0 be sufficiently small. Then, for all (vn , D, E) ∈ Xn ,    P (vn , D, E) − P  (p ∗ , D ∗ , E ∗ ) n

n

n

L(Xn ,Yn )

  γ  γ  ≤ c11 n2ε+τ n(2ε+τ )γ2 vn − pn∗ σ2 + nτ γ3 vn − pn∗ σ3 1

1

with a constant c11 ∈ (0, ∞). Proof Completely analogous to (7.4.45) and (7.4.46), for ε0 ∈ (0, min {α2 , β2 , m + δ0 }) and ε = ε0 γ2 , we can show that       Fz (·, σ2 vn ) − Fz (·, σ2 p∗ ) 0,ε ≤ c11,1 vn − p∗ γ20,ε + vn − p∗ γ3 n C n C 0 n ∞ and, in view of (7.4.86) and Remark 7.4.23, !   "  P (vn , D, E) − P  (p∗ , D ∗ , E ∗ ) ( E)  vn , D, n

n

n

μ2

  vn C0,ε ≤ cμ0 SC0,ε →C Fz (·, σ2 vn ) − Fz (·, σ2 pn∗ )C0,ε σ2 C0,ε  2

 γ  γ vn C0,ε ≤ cμ0 SC0,ε →C σ2 C0,ε c11,1 vn − pn∗ C20,ε0 + vn − pn∗ ∞3  2

 γ ! γ γ2 (2ε+τ )γ2  ≤ cμ0 SC0,ε →C σ2 C0,ε c11,1 cε02,∞ c∞,σ vn − pn∗ σ2 2n

2

2

 γ " γ3 τ γ3  vn − pn∗ σ3 cε,∞ c∞,σ2 n2ε+τ  vn σ2 , + c∞,σ 2n 2

 

which proves the Lemma.

In the following proposition we summarize the essential results we obtained up to now in the present section (see Lemmata 7.4.24, 7.4.25, and 7.4.26), where we assume that γ2 = γ3 . Proposition 7.4.27 If the solution (u∗ , D ∗ , E ∗ ) ∈ X with u∗ = σ2 u∗ of prob∗ m,δ lem (7.4.62), (7.4.63) has the property u ∈ C for some m ∈ N, if γ defined in Lemma 7.4.21 is greater than 12 , and if pn∗ ∈ Pn form a sequence of polynomials satisfying (7.4.79) and (7.4.80) (the existence of which is guaranteed by Lemma 7.4.4 for a sufficiently small ε > 0), then for the operators Pn : Xn −→ Yn defined by (7.4.76) we have that   Pn (p∗ , D ∗ , E ∗ ) ≤ c9 n−(m−1+δ−ε) n μ 2

(7.4.90)


and that, for all sufficiently large n, their Frechet derivatives Pn (pn∗ , D ∗ , E ∗ ) : Xn −→ Yn at the points (pn∗ , D ∗ , E ∗ ) are invertible, where !    ∗ ∗ ∗ "−1   Pn (pn , D , E ) 

Yn −→Xn

≤ c10

(7.4.91)

and the constants c9 , c10 ∈ (0, ∞) do not depend on n. Furthermore, there is a finite constant c11 = c11 (n, vn , D) such that    P (vn , D, E) − P  (p∗ , D ∗ , E ∗ ) n

n

n

Yn →Xn

 γ ≤ c11 n(2ε+τ )(1+γ2) vn − pn∗ σ2 2 (7.4.92)

for all (vn , D, E) ∈ Xn , where τ is defined in Remark 7.4.23 and γ2 = γ3 is given by conditions (7.4.54) and (7.4.74). Now we can prove the following proposition on the applicability of the modified Newton iteration method (7.4.77) for approximating a solution of Eq. (7.4.76), which converges to a solution of Eq. (7.4.75) representing the original problems (7.4.52). Proposition 7.4.28 Assume that the solution (u∗ , D ∗ , E ∗ ) of problem (7.4.62), (7.4.63) admits the representation u∗ = σ2 u∗ with u∗ ∈ Cm,δ for some m ≥ 1 and σ2 (x) defined in (7.4.64). Furthermore, let assumptions (F3) and (F3’) (with γ3 = γ2 ) be fulfilled such that γ2 (m − 1 + δ) > τ (1 + γ2 ) and γ > 12 , where τ and γ are defined in Remark 7.4.23 and Lemma 7.4.21, respectively.   Then, for all sufficiently large n, there exists an element vn(0) , Dn(0) , En(0) ∈ Xn   (k) (k) (k) such that Eqs. (7.4.77) possess a unique solution vn , Dn , En for all  (k) (k) (k) ∞ k ∈ N0 . The sequence vn , Dn , En k=0 converges in the norm of X to a solution (vn∗ , Dn∗ , E ∗ ) of Eq. (7.4.76), where  ∗ ∗ ∗  (v , D , E ) − ( u∗ , D ∗ , E ∗ )X ≤ c n−(m−1+δ−ε) n n n

(7.4.93)

for all sufficiently small ε > 0 and a finite constant c = c(n). Proof Choose vn(0) = pn∗ , Dn(0) = D ∗ , En(0) = E ∗ and a sufficiently small ε > 0. We can apply Proposition 2.7.3 replacing P, X, and Y, by Pn , Xn , and Yn , respectively, as well as setting η = γ2 , M0 = c10 , N0 = c10 c9 n−(m−1+δ−ε) and C0 = c11 n(2ε+τ )(1+γ2) . Indeed, condition (a) of Proposition 2.7.3 is satisfied due to (7.4.91), condition (b) due to (7.4.91) and (7.4.90), and condition (c) due to (7.4.92). Moreover, 1+γ2 γ2 c9 c11 n−[γ2 (m−1+δ−ε)−(2ε+τ )(1+γ2)]

h0 = c10



1 4


for all sufficiently large n, since γ2 (m − 1 + δ − ε) > (2ε + τ )(1 + γ2 )

if ε
−1 and all n ∈ N0 . Equation (I + MH1 )u1 = f1 In virtue of (8.1.20), for 0 < δ0 < 2, equation (I + MH1 )u = f has a unique solution u∗f ∈ C0,δ0 for every f ∈ C0,δ0 . Hence, since the right-hand side f1 in (8.1.7) belongs to C0,0 , the unique solution u∗1 ∈ C0,1 of (8.1.7) belongs to K C0,δ0 . δ0 >0

Again one can easily show that (j )

H1 (t) =

(t 2

qj (t) , + 1)j +2

j ∈ N,

where qj (t) is an odd (even) polynomial of degree j + 1 if j is even (odd). The function H1 (t) satisfies the assumptions of Proposition 5.6.5 for all j ∈ N0 , when choosing −1 < δ0 < 3, ρ = 1, and η = 2, since  (j )

H1

1+x 1+y









(1 + y)j +1 (1 + y)j +3 1 = . ! "j +2 (1 + y)η+j +1 (1 + x)2 + (1 + y)2 (1 + y)η+j +1 qj

1+x 1+y


In this way, we get $u_1^* = f_1 - \mathcal{M}_{H_1} u_1^* \in \mathbf{C}^{(n)}_{0,\delta_0}$

(8.1.23)

for all δ0 > 0 and all n ∈ N0 . To get more precise information about u∗1 (x), we present it in the form u∗1 (x) = u∗∗ 1 (x) + χ0 with a constant χ0 having the goal to choose χ0 ∈ C in such a way that u∗∗ 1 ∈ C0,δ0 for all δ0 > −1. Since   4χ0 MH1 χ0 (x) = π



∞ 1+x 2

ds 2χ0 = 2 2 (1 + s ) π



t − 2 dt , (1 + t)2 1

∞ 6

1+x 2

we have, due to (8.1.19), 

lim

x→−1+0

 MH1 χ0 (x) = χ0



and

 2χ0 MH1 χ0 (x) = − π

1 1+



1+x 2

2 .2

  0 implying MH1 χ0 (−1) = − 2χ π and   MH1 χ0 (x) − χ0 2χ0 =− . x→−1+0 x+1 π lim

C0,−1 and, for χ0 = 14 , we get Consequently, MH1 χ0 − χ0 ∈ ∗∗ u∗∗ 1 + MH1 u1 =

1 − χ0 − MH1 χ0 = χ0 − MH1 χ0 ∈ C0,−1 . 2

In other words, the function u∗∗ (x) := (1 + x)−1 u∗∗ 1 (x) satisfies (I + MH )u∗∗ ∈ C0,0 = C[−1, 1] , where (MH u)(x) =



1 −1

!



(1 + y)3 u(y) dy (1 +

x)2

+ (1 +

y)2

"2 =



1 −1

H

1+x 1+y



u(y) dy 1+y

4 . The Mellin transform π(1 + t 2 )2

with H (t) = ,(z) = 4 H π

4 π

 0



2 t z−1 dt = 2 2 (1 + t ) π

 0



z

t 2 −1 dt (1 + t)2

(8.1.19)

=

1 − (z − 1) z−2 = π sin 2 (z − 2) cos π(z−1) 2


is holomorphic in the strip 0,2 = {z ∈ C : 0 < Re z < 2}. 4 and γ ∈ (0, 1) as π(1 + t 2 )2 well as δ = 2, the conditions (h1) and (h2) of Sect. 5.7 are fulfilled.

Exercise 8.1.1 Prove that, for the function h(t) =

Exercise 8.1.2 Show that the holomorphic function F : −1,1 −→ C, z → satisfies / Re F (z) > −2

1−z cos πz 2

2 > −1 for all z ∈ −1,1 . π

Due to these two exercises, we can apply Corollary 5.7.4 to conclude that, for all p ∈ (1, ∞), the equation (I + MH )u = f has a unique solution u ∈ Lp (−1, 1) for every f ∈ Lp (−1, 1). Consequently, u∗∗ ∈ Lp (−1, 1) for all p ∈ [1, ∞). Moreover, Proposition 5.6.11,(b) is applicable with γ0 = δ0 = 0, ρ = 0, and η ∈ (0, 3], as well as p ∈ (1, ∞) and δ > 0. Thus, MH u∗∗ ∈ C0,δ , which implies u∗∗ = (u∗∗ + MH u∗∗ ) − MH u∗∗ ∈ C0,δ for every δ > 0. As a consequence of ∗∗ this, the function u∗∗ 1 (x) = (1 + x)u (x) belongs to C0,δ0 for every δ0 > −1, and we have proved the representation u∗1 (x) = u∗∗ 1 (x) +

1 4

with

K

u∗∗ 1 ∈

C0,δ0

(8.1.24)

δ0 >−1

for the unique solution u∗1 ∈ C0,1 of Eq. (8.1.7). If u∗1 ∈ C0,1 is the (unique) solution of (8.1.7) then, by construction, the function u∗0 (x) = (1 + x)u∗1 (x) is the unique solution of (8.1.6) in C0,0 . Hence, we have the following conclusion. Corollary 8.1.3 The unique solution u∗0 ∈ C[−1, 1] of Eq. (8.1.6) resp. (8.1.4) admits the representation u∗0 (x) = u∗∗ 0 (x) +

1+x 4

with

u∗∗ 0 ∈

K

C0,δ0 .

(8.1.25)

δ0 >−2

Equation (I + H2 )u2 = f2 In Sect. 8.1.1, by studying problem (8.1.2),(8.1.3) in a weighted L2 -space we have translated it into a Fredholm integral equation (8.1.16), where the integral operator is defined in (8.1.17). Hence, it is possible to consider this equation in (weighted) spaces of continuous functions and to apply results from Chap. 6 for its numerical solution. In the following section we will concentrate on the application of results from Sect. 7.3.


8.1.3 A Quadrature Method In this section we want to apply the results of Sect. 7.3 to Eq. (8.1.4). First let us check whether the Mellin kernel H0 (t) fulfils the assumptions (H1)–(H5) of Sect. 7.3. As we have seen in Sect. 8.1.2, conditions (H1)–(H3) and (H5) are satisfied for −1 < δ0 < 1 ,

ρ = 2,

η = 1,

and s ∈ N0 .

 1 4(1 + x)2 (1 + y) 1+x = ! = H0 If we set G(y) = "2 , then 1+y 1+y π (1 + x)2 + (1 + y)2 one can prove by induction that, for j ∈ N0 , 

Hx0 (y)

G(j ) (y) =

⎧ j   ⎪ 2 ⎪  2 j2 −k ⎪ 2 ⎪ (1 + x) (1 + y) γj k (1 + x) (1 + y)2k ⎪ ⎪ ⎪ k=0 ⎪ ⎪ : j even , ⎪ ! "j +2 ⎪ ⎨ (1 + x)2 + (1 + y)2 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩

(1 + x)2

j+1 2





γj k (1 + x)

2

j+1 2 −j



(1 + y)2k

k=0

: j odd ,

! "j +2 (1 + x)2 + (1 + y)2

where the γj k ’s denote certain real constants. Choosing ρ1 ∈ [0, 1], we get

   (j )  G (y) ≤

⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨! ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩!

j

2 

|γj k |

k=0

(1 + x)2 + (1 + y)2 j+1 2



" j+1 2

|γj k |

k=0

(1 + x)2 + (1 + y)2

" j+1 2

⎫ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ : j even , ⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ : j odd , ⎪ ⎪ ⎭



c1 (1 + y)−j , 1+x

and condition (H4) is fulfilled for ρ1 = 0 and ρ2 = 1. So, we can look for an approximate solution vn ∈ C0,δ0 of Eq. (8.1.6) by solving 

 0 PH I+M n vn = f0 ,

(8.1.26)

548

8 Applications

0 PH where the operator M n : C0,δ0 −→ C0,δ0 is defined with the help of the Legendre 0,0 0,0 by (cf. (7.3.10)) nodes λnk = λnk and the respective Christoffel numbers xnk = xnk



⎧ ⎪ ⎨



 0 MH : xn∗ ≤ x ≤ 1 , n v (x) H   Pn 0 v (x) = M 1 + x −δ0  H0  ∗ ⎪ ⎩ Mn v (xn ) : −1 ≤ x < xn∗ , 1 + xn∗ 

where n  H   Mn 0 v (x) λnk H0 k=1



1+x 1 + xnk



v(xnk ) 1 + xnk

and xn∗ = n−ρ3 − 1 with some ρ3 > 0. If we apply Proposition 7.3.5 we obtain the following convergence result. Note that in the present situation ρ1 = 0 and ρ2 = 1. Proposition 8.1.4 If we choose −1 < δ0 < 1, 0 ≤ ρ1 ≤ 1, 0 < ρ3 ≤ 2, and, due to (8.1.22), −1 < δ1 < δ0 , then, for all sufficiently large n, Eq. (8.1.26) has a unique solution vn∗ ∈ C0,δ0 , where, for some real constant c = c(n),  ∗  v − u∗  n 0 C

0,δ0



c nχ

with χ = min {(δ0 − δ1 )ρ3 , 2(1 − δ1 ) − (1 − δ0 )ρ3 } = (δ0 − δ1 )ρ3 . That means, for ρ3 = 2, we can achieve χ = 2(δ0 + 1) − ε with ε > 0 as small as we want. To improve this result, we follow an idea of [123, pp. 89,90] and use the representation (8.1.25) of u∗0 (x), namely u∗0 = u∗∗ u0 with u0 (x) = 1+x 0 + 4 and Q ∗∗ u0 ∈ δ>−2 C0,δ . For this, we write vn∗ = vn∗∗ + u0 , look for vn∗∗ as the solution of 0 ∗∗ PH vn∗∗ + M u0 − MH0 u0 n vn = g0 := f0 −

and observe that ∗∗ u∗∗ 0 + MH0 u0 = g0 .


After computing 1 1+x − g0 (x) = 4 π



1 −1

!

(1 + x)2 (1 + y)2 dy "2 (1 + x)2 + (1 + y)2

 2 1 + x 1+x t 2 dt 1+x − 4 π (1 + t 2 )2 0 . 1 1+x 2(1 + x) 1+x 1 − arccot + = 2 2 π 2 (1 + x)2 + 4 % ∞ &   (1 + x)2 1  (−1)k 1 + x 2k 4 , = − 4 π 2k + 1 2 (1 + x)2 + 4 =

k=0

|1 + x| < 1, 2

∗∗ we see that g0 ∈ C(s) 0,−2 for all s ∈ N0 . Since, due to (8.1.25), we also have u0 = (s) g0 − MH0 u∗∗ 0 ∈ C0,δ1 for all δ1 > −2 and s ∈ N0 , we can apply Proposition 8.1.4 ε for δ1 = −2 + 2 to obtain

 ∗∗  v − u∗∗  n 0 C

0,δ0



c nχ1

with χ1 = 2(δ0 + 2) − ε and ε > 0 as small as we want.
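The quadrature sum behind (8.1.26) is easy to realize. The following minimal sketch is not the book's code: it evaluates the Gauss-Legendre discretization of the Mellin-type operator with H0(t) = 4t²/(π(1+t²)²), which is consistent with the kernel identity displayed at the beginning of this section; the modification near x = −1 involving xn* = n^(−ρ3) − 1 is omitted, and the density v is a smooth placeholder.

```python
import numpy as np

# Sketch (not the book's code) of the quadrature sum behind (8.1.26):
#   (M_n^{H_0} v)(x) = sum_k lam_k H_0((1+x)/(1+x_k)) v(x_k)/(1+x_k),
# with Gauss-Legendre nodes x_k and Christoffel numbers lam_k and with
# H_0(t) = 4 t^2 / (pi (1+t^2)^2).  The cut-off near x = -1 used in the text
# is omitted here, and v is a smooth placeholder density.
def H0(t):
    return 4.0 * t ** 2 / (np.pi * (1.0 + t ** 2) ** 2)

def mellin_quadrature(v, x, n):
    nodes, lam = np.polynomial.legendre.leggauss(n)
    return np.sum(lam * H0((1.0 + x) / (1.0 + nodes)) * v(nodes) / (1.0 + nodes))

print(mellin_quadrature(lambda y: (1.0 + y) / 4.0, x=0.2, n=64))
```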

8.2 The Drag Minimization Problem for a Wing 8.2.1 Formulation of the Problem Following [41] and [86], we consider a wing defined by a single open lifting line in the cartesian y-z plane. This is represented by a curve , having a parametric "T ! representation ψ(t) = ψ1 (t) ψ2 (t) , |ψ  (t)| = 0, t ∈ [−1, 1]. For simplicity, it is also assumed that the lifting line is sufficiently smooth. That means, it is assumed that ψi (t), i = 1, 2, are continuous functions together with their first m ≥ 2 derivatives in the interval [−1, 1] (i.e., ψi ∈ Cm [−1, 1]). A point on the lifting ! "T ∈ , line, where the aerodynamic forces are calculated, is denoted by r = y z "T ! where r = r(t) = ψ1 (t) ψ2 (t) . The expressions of the wing lift L and induced drag Dind are obtained in terms of the unknown circulation 0 , namely as  L = L(0 ) = −ρ∞ V∞

1 −1

τy (t)0 (t) dt

(8.2.1)


and  Dind = Dind (0 ) = −ρ∞

1 −1

vn (t)0 (t) dt .

(8.2.2)

The quantities ρ∞ and V∞ are given positive constants which indicate the density and free stream velocity, respectively. Further, τy (t) = y  (t) is the projection on the y-axis of the unit vector tangent to the lifting line, while vn is the so-called normalwash being represented by vn (η) =

1 4π



1 −1

0 (s)Y0 (t, s) ds ,

−1 < t < 1 ,

(8.2.3)

where Y0 (t, s) := −

d ln |r(s) − r(t)| . dt

(8.2.4)

The function Y0 (t, s) has a singularity of order 1 when t = s, and the integral in (8.2.3) is a Cauchy principal value one. The problem we want to solve is the minimization of the functional Dind (0 ) over a suitable space, subject to the prescribed lift constraint L(0 ) = Lpres , which takes the form 

1 −1

ψ1 (t)0 (t) dt = γ0 := −

Lpres . ρ∞ V∞

(8.2.5)

Moreover, from (8.2.2) and (8.2.3) we get Dind

ρ∞ = Dind (0 ) = − 4π

With ϕ(t) =





1



1

−1 −1

Y0 (t, s)0 (s) ds 0 (t) dt .

(8.2.6)

  2,1 1 − t 2 we set V = f = ϕu : u ∈ L√ ϕ , where here and in what

√ we refer the Sobolev-like space follows, for a Jacobi weight ρ = v α,β (x), by L2,s ρ 2,s 2,s √ L2,s α,β , which is defined in Sect. 2.4.1 (cf. Exercise 2.4.1), i.e., L ϕ = L 1 1 . The

norm in L2√

2,s √ ρ and L ρ

2,2

we denote by .ρ and .ρ,s , respectively, i.e., .ρ = .α,β

and .ρ,s = .α,β,s . By ., .ρ , we refer to the respective inner product in L2√ρ , i.e.,   ., .ρ = ., .α,β (see page 17). The norm in V is defined by f V = ϕ −1 f  . ϕ,1

Hereafter, by D we denote the operator of generalized differentiation. We √ spaces (see recall an important property of this operator with respect to the L2,r ρ Lemma 5.5.2).

8.2 The Drag Minimization Problem for a Wing

551

√ √ (A) For r ≥ 0, the operator D : L2,r+1 −→ L2,r ρ

ρ (1)

set

ρ (1) (t)

=

(1 − t)1+α (1 + t)1+β

is continuous, where we have

= ρ(t)(1 − t 2 ).

Lemma 8.2.1 For f ∈ V, we have f ∈ C[−1, 1] with f (±1) = 0. 2,1 2√ . Hence, for 0 < t < 1, Proof Let f = ϕg with g ∈ L√ ϕ . Due to (A), Dg ∈ L 3 ϕ

    t t   3 |g(t)| = g(0) + (Dg)(s) ds  ≤ |g(0)| + (1 − s 2 )− 2 ds Dgϕ 3 0

0

and 

t

2 − 32

(1 − s )



t

ds ≤

0

(1 − s)

0

− 32



1

ds = 2 √ −1 1−t

 .

This implies f (1) = limt →1−0 ϕ(t)g(t) = 0. Analogously, one can show that also f (−1) = 0 is true for f ∈ V.   Now the problem we aim to solve (cf. [41]) is the following: (P) Find a function 0 ∈ V, which minimizes the functional (cf. (8.2.6))  F (0 ) := −

1



1

−1 −1

Y0 (t, s)0 (s) ds 0 (t) dt

subject to the condition (cf. (8.2.5)) ψ1 , 0  = γ0 . If we define the linear operator 1 (Af ) (t) = − π



1 −1

Y0 (t, s)f  (s) ds,

−1 < s < 1,

(8.2.7)

then the problem can be reformulated as follows: (P) Find a function 0 ∈ V, which minimizes the functional F (0 ) := A0 , 0  on V subject to the condition ψ1 , 0  = γ0 . The formulation of this problem is correct, which can be seen from Lemma 8.2.3 below. Lemma 8.2.2 If ψj ∈ Cm [−1, 1] for some integer m ≥ 2 and |ψ  (t)| = 0 for t ∈ [−1, 1], then the function Y0 (t, s) admits the representation Y0 (t, s) =

1 + K(t, s), s−t

(8.2.8)

552

8 Applications

where the function K : [−1, 1]2 −→ R is continuous together with its partial ∂ j +k K(t, s) derivatives , k, j ∈ N0 , j + k ≤ m − 2. ∂t j ∂s k Proof Note that, by definition, Y0 (t, s) =

[ψ1 (s) − ψ1 (t)]ψ1 (t) + [ψ2 (s) − ψ2 (t)]ψ2 (t) . [ψ1 (s) − ψ1 (t)]2 + [ψ2 (s) − ψ2 (t)]2

Hence, K(t, s) = Y0 (t, s) −

(t, s) 1 = , s−t (t, s)

where (t, s) = G1 (t, s)g1 (t, s) + G2 (t, s)g2 (t, s), gj (t, s) =

Gj (t, s) =

ψj (s) − ψj (t) = s−t ψj (t) − gj (t, s) s−t



1

0

 = 0

(t, s) = [g1 (t, s)]2 + [g2 (t, s)]2 ,

  ψj sv + t (1 − v) dv, 1

  ψj sv + t (1 − v) (1 − v) dv,

and the assertion of the lemma follows taking into account (t, s) = 0 for all (t, s) ∈ [−1, 1]2.   Lemma 8.2.3 The operator A : V −→ L2√ϕ is linear and bounded. Consequently, Af, f  is well defined for all f ∈ V. ϕ

1 1 ,2

Proof Let Un = pn = pn2 Df 2ϕ =

− 12 ,− 12

= pn

. Then, for f ∈ V,

∞  ∞ ∞ 2      |Df, Tn |2 = |f, n Un−1 |2 Df, ϕ −1 Tn ϕ  = n=0

=

ϕ −1

and Tn = pn

∞ 

n=0

n=1

2  2      (1 + n)2 ϕ −1 f, Un ϕ  = ϕ −1 f 

ϕ,1

n=0

= f 2V ,

i.e., Df ∈ L2√ϕ . By relation (8.2.8), the operator A defined in (8.2.7) can be written in the form A = −(S + K)D with 1 (Sf )(t) := π



1 −1

f (s) ds , s−t

1 (Kf )(t) = π



1 −1

K(t, s)f (s) ds ,

−1 < t < 1 .

8.2 The Drag Minimization Problem for a Wing

553

The Cauchy singular integral operator S : L2√ϕ −→ L2√ϕ is bounded (see Lemma 5.2.1 and Corollary 5.2.6) and the integral operator K : L2√ϕ −→ L2√ϕ is compact (cf. Proposition 5.3.3 and Exercise 5.3.4). Consequently, for f = ϕu ∈ V we have that Af, f  = Af, uϕ is a finite number, since both Af and u belong to L2√ϕ .  

8.2.2 Derivation of the Operator Equation In the following lemma we give a representation of the operator A defined in (8.2.7), which is crucial for our further investigations. From this representation, it is seen that the operator A is an example of an hypersingular integral operator (cf. Sect. 5.5). Lemma 8.2.4 For all f ∈ V, the relation Af = DBf

(8.2.9)

holds true, where (Bf )(t) =

1 π



1

−1

ln |r(s) − r(t)| f  (s) ds

(8.2.10)

and where D is again the operator of generalized differentiation. Proof Since D : V −→ L2ϕ is an isometrical mapping (cf. the proof of Lemma 8.2.3), it suffices to show that −(S + K)g = DB0 g is valid for all g ∈ L2√ϕ , where (B0 g) (t) =

1 π



1 −1

ln |r(s) − r(t)| g(s) ds .

Since Z0 (t, s) := ln |r(s) − r(t)| = ln |s − t| + K0 (t, s)

(8.2.11)

∂K0 (t, s) = −K(t, s), ∂t is bounded (use Proposition 5.4.2 in case α =

with a function K0 : [−1, 1]2 −→ R being continuous and √ the operator B0 : L2√ϕ −→ L2,1 −1 ϕ

√ −→ L2√ϕ is continuous due to (A), such that on β = − 12 ). Moreover, D : L2,1 −1 ϕ

the one hand, the operator DB0 : L2ϕ −→ L2ϕ is linear and bounded. On the other

554

8 Applications

hand, the operator S + K : L2ϕ −→ L2ϕ is also linear and bounded. Thus, it remains to prove that 

1 −1

Y0 (t, s)g(s) ds = −

d dt



1 −1

Z0 (t, s)g(s) ds ,

−1 < t < 1

(8.2.12)

for all g from a linear and dense subset of L2ϕ . For this, let g : [−1, 1] −→ R be a continuously differentiable function and consider  ψ0 (t) :=  with ψε (t) := ψε (t) = −



−1

t−ε

−1

 =−

t −ε

1

−1

Z0 (t, s)g(s) ds = lim ψε (t) ε→+0



1 t +ε

Z0 (t, s)g(s) ds. For every t ∈ (−1, 1), it follows

 Y0 (t, s)g(s) ds + Z0 (t, t − ε)g(t − ε) − Z0 (t, t + ε)g(t + ε)

t+ε

t−ε

−1

 +

 +

1

 +

1

 Y0 (t, s)g(s) ds + ln ε[g(t − ε) − g(t + ε)]

t+ε

+ K0 (t, t − ε)g(t − ε) − K0 (t, t + ε)g(t + ε)  −→ −

1

−1

ε −→ +0 ,

if

Y0 (t, s)g(s) ds

where, as before, the last integral is defined in the Cauchy principal value sense. For every δ ∈ (0, 1), this convergence is uniform w.r.t. t ∈ [−1 + δ, 1 − δ]. Indeed, for 0 < ε1 < ε2 < δ and for  gε (t) :=

t −ε

−1

 +



1 t +ε

Y0 (t, s)g(s) ds ,

we have  gε1 (t) − gε2 (t) =

 Y0 (t, s)g(s) ds +

t−ε2

 =

t−ε1

t−ε1

t+ε2

Y0 (t, s)g(s) ds t+ε1

 Y0 (t, s)[g(s) − g(t)] ds +

t−ε2

t+ε2

Y0 (t, s)[g(s) − g(t)] ds

t+ε1

- + g(t)

t−ε1 t−ε2

 Y0 (t, s)g(s) ds +

.

t+ε2

Y0 (t, s)g(s) ds t+ε1

8.2 The Drag Minimization Problem for a Wing



t−ε1

=

 +

t−ε2

t+ε2 

555

g(s) − g(t) ds s−t

[1 + (s − t)K(t, s)]

t+ε1



t−ε1

+ g(t)

 +

t−ε2

t+ε2 

K(t, s) ds . t+ε1

     Consequently, gε1 (t) − gε2 (t) ≤ M(ε2 − ε1 ), where M = 2 M1 g  ∞ +M2 g∞ ) with M1 = 1 + max {|(s − t)K(t, s)| : −1 ≤ s, t ≤ 1} and M2 = max {|K(t, s)| : −1 ≤ s, t ≤ 1}. This uniform convergence implies that ψ0 (t) is differentiable for all t ∈ (−1, 1), where ψ0 (t) =

and

ψ0 (t)

d = dt

d dt



-

.  lim ψε (t) = lim ψε (t) = −

ε→+0

ε→+0

1

−1

Y0 (t, s)g(s) ds

1 −1

 

Z0 (t, s)g(s) ds, such that (8.2.12) is proved.

Lemma 8.2.5 The operator A : V −→ L2ϕ is symmetric and positive, i.e., Af, g = f, Ag ∀ f, g ∈ V and Af, f  > 0 ∀ f ∈ V \ {0}. Proof Using relation (8.2.9), Lemma 8.2.1, partial integration, and Fubini’s theorem, we get, for all f, g ∈ V, Af, g = −

1 π



1



1

−1 −1

f  (s) ln |r(s) − r(t)| ds g  (t) dt = f, Ag.

(8.2.13)

Hence, Af, f  =

1 π



1



1

−1 −1

ln

1 f  (s)f  (t) ds dt |r(s) − r(t)|

corresponds to the logarithmic energy of the function

f ,

 where

1 −1

f  (t) dt = 0

due to Lemma 8.2.1. Consequently (see [197], Section I.1 and, in particular, Lemma 1.8), Af, f  is positive if f  = 0 a.e. Thus, Af, f  = 0 implies f  (t) = 0 for almost all t ∈ (−1, 1) and, due to f (±1) = 0, also f (t) = 0 for all t ∈ [−1, 1].      For γ ∈ R, define the (affine) manifold Vγ := f ∈ V : f, ψ1  = γ and make the following observation. Proposition 8.2.6 The element 0∗ ∈ Vγ0 is a solution of Problem (P) if and only if there is a number β ∈ R such that A0∗ = βψ1 . This solution is unique, if it exists.

(8.2.14)

556

8 Applications

  Proof Assume that 0∗ ∈ Vγ0 and F (0∗ ) = min F (0 ) : 0 ∈ Vγ0 . This implies G (0) = 0 for G(α) = F (0∗ + αf ) and for all f ∈ V0 \ {0}. Since G(α) = F (0∗ ) + 2αA0∗ , f  + α 2 f, f 

(8.2.15)

and G (α) = 2A0∗ , f  + 2αf, f , 2,1  this condition gives A0∗ , gϕ = 0 for all g ∈ L√ ϕ with g, ψ1 ϕ = 0, which is equivalent to (8.2.14). On the other hand, if 0∗ ∈ Vγ0 and β ∈ R fulfil (8.2.14) and if f ∈ V0 \ {0}, then we get from (8.2.15) for α = 1

F (0∗ + f ) = F (0∗ ) + 2A0∗ , f  + f, f  = F (0∗ ) + 2A0∗ − βψ1 , f  + f, f  = F (0∗ ) + f, f  > F (0∗ ),

 

which shows the uniqueness of the solution (if it exists). Remark 8.2.7 Using relation (8.2.9), Eq. (8.2.14) can be written equivalently as B0∗ = βψ1 + γ ,

0∗ ∈ Vγ0 , β, γ ∈ R.

(8.2.16)

Moreover, by applying partial integration to the integral in (8.2.10) and taking into account f (±1) = 0 for f ∈ V (see Lemma 8.2.1), we get 1 ε→+0 π



(Bf ) (t) = lim

t −ε −1

 +



1

t +ε

ln |r(s) − r(t)| f  (s) ds

" 1! f (t − ε) ln |r(t − ε) − r(t)| − f (t + ε) ln |r(t + ε) − r(t)| ε→+0 π

= lim

1 ε→+0 π

− lim (8.2.4)

=

1 π





t −ε −1

 +

1

t +ε

 f (s)

d ln |r(s) − r(t)| ds ds

1 −1

Y0 (s, t)f (s) ds .

Hence, we obtain the identity Bf = A0 f

∀ f ∈ V,

(8.2.17)

8.2 The Drag Minimization Problem for a Wing

where (cf. (8.2.8)) (A0 f )(t) =

1 π

with



1

Y0 (s, t)f (s) ds = −(Sf )(t) + (K0 f )(t)

−1

1 (K0 f )(t) = π

557



1

−1

K(s, t)f (s) ds .

(8.2.18)

Note that Eq. (8.2.14) as well its equivalent representation obtained from (8.2.16) and (8.2.17) define the Euler-Lagrange equation for the drag minimization problem. We recall the relation (see (5.2.13)) ϕ −1

Sϕpnϕ = −pn+1 ,

n ∈ N0 ,

(8.2.19)

from which we get the following property of the operator S (cf. Corollary 5.2.6). (B) The operators S : L2√

ϕ −1

−→ L2√

are invertible, where we have

√ −→ L2,r √ and S : ϕL2,r , r > 0, ϕ ϕ −1 ,0   = f ∈ L2√ρ : f, 1ρ = 0 and

ϕ −1 ,0 set L2√ρ,0

2,r 2√ √ √ L2,r ρ,0 = L ρ ∩ L ρ,0 .

In the following proposition we discuss the solvability of (8.2.16). Proposition 8.2.8 Assume that ψj ∈ C3 [−1, 1]. Then, (a) the operator A0 : L2√

ϕ −1

−→ L2√

ϕ −1

# N(A0 ) = f ∈ L2√

has a trivial null space, i.e.,

ϕ −1

$ : A0 f = 0 = {0},

(b) Equation (8.2.16) possesses a unique solution (0∗ , β, γ ) belonging to Vγ0 ×R2 , if ψ1 (t) is not a constant function, (c) problem (P) is uniquely solvable, if ψ1 (t) is not a constant function. 2 Proof Let f0 ∈ L√

C1 [−1, 1]



and A0 f0 = 0. Due to Lemma 8.2.2, Sf0 = K0 f0 ∈

ϕ −1 √ . By L2,1 ϕ −1

√ (B), we get K0 f0 ∈ L2,1 −1 ϕ

,0

and, consequently, f0 ∈

2,1 ϕL√ ϕ = V. On the other hand, due to (8.2.9) and (8.2.17) (cf. also the proof of Lemma 8.2.5), we have

0 < Af, f  = −A0 f, Df  This implies f0 = 0, and (a) is proved. Since, by (B), the operator S : L2√

ϕ −1

∀ f ∈ V \ {0}.

2 −→ L√

ϕ −1

is Fredholm with index

−1 and since, due to the continuity of the function K(s, t) (cf. Lemma 8.2.2), the

558

8 Applications

operator K0 : L2√

ϕ −1

−→ L2√

ϕ −1

is compact, also the operator

A0 = −S + K0 : L2√

ϕ −1

−→ L2√

ϕ −1

is Fredholm with index −1. Hence, we conclude that the codimension of the image # R(A0 ) = A0 f : f ∈ L2√

$ ϕ −1

is equal to 1. Thus, the intersection W1 := R(A0 ) ∩ {βψ1 + γ : β, γ ∈ R} is at least one-dimensional. If this dimension is equal to 1 and if W1 = span {ψ0 }, then there is a unique 0 ∈ L2√ −1 , such that A0 0 = ψ0 . Again using Lemma 8.2.2, ϕ

we get S0 = K0 0 − ψ0 ∈ C1 [−1, 1] and, consequently, 0 ∈ V. We show that 0 , ψ1  = 0. If this is not the case, then, because of Proposition 8.2.6 and Remark 8.2.7, Problem (P) has only a solution for γ0 = 0. But, this (unique) solution is identically zero. This implies 0 = 0 in contradiction to ψ0 = 0. Hence, 0∗ = γ0 γ0 0 is the solution of (8.2.16) with βψ1 + γ = ψ0 .  0 , ψ1  0 , ψ1  Finally, to complete the proof of (b) we show that dim W1 = 2 is not possible. Indeed, in that case W1 = span {ψ1 , ψ0 } with ψ0 (t) = 1, and we have (cf. the j j previous considerations) two linearly independent solutions 0 ∈ V of A0 = ψj γ 0 j j,∗ j with 0 , ψ1  = 0, j = 0, 1. Hence, 0 = j 0 , j = 1, 2 are two linearly 0 , ψ1  independent solutions of (8.2.16) and, in virtue of Proposition 8.2.6, also of Problem (P) in contradiction to the uniqueness of a solution of (P). Assertion (c) is an immediate consequence of (b), together with Proposition 8.2.6 and Remark 8.2.7.  

8.2.3 A Collocation-Quadrature Method Following the considerations in [86] we describe a numerical procedure for the approximate solution of Eq. (8.2.16). For this, we write this equation in the form (cf. (8.2.17), (8.2.18)) A0 f = βψ1 + γ , with A0 = −S + K0 : L2√

ϕ −1

(Sf )(t) =

1 π



1 −1

(f, β, γ ) ∈ Vγ0 × R2

−→ L2√

f (s) ds , s−t

ϕ −1

(8.2.20)

and

(K0 f )(t) =

1 π



1 −1

K(s, t)f (s) ds .

8.2 The Drag Minimization Problem for a Wing

559

Given any integer n ≥ 1, we are looking for an approximate solution (fn , βn , γn ) ∈ R(Pn ) × R2 of (8.2.20), where R(Pn ) is the image space of the orthoprojection Pn : L2√ −1 −→ L2√ −1 defined by ϕ

ϕ

Pn f =

n−1 

ϕ ϕ f, pk ϕpk ,

k=0

by solving the collocation equations − (Sfn )(tnj ) + (Kn0 fn )(tnj ) = βn ψ1 (tnj ) + γn ,

j = 1, . . . , n + 1,

(8.2.21)

together with π  ϕ(sni )ψ1 (sni )fn (sni ) = γ0 , n+1 n

(8.2.22)

i=1

(2j − 1)π iπ and sni = cos are Chebyshev nodes of first and 2n + 2 n+1 second kind, respectively, and where

where tnj = cos

1  ϕ(sni )K(sni , t)fn (sni ). n+1 n

(Kn0 fn )(t) =

(8.2.23)

i=1

Note that fn (t) can be written with the help of the weighted Lagrange interpolation polynomials ϕ nk (t) =

ϕ

ϕ(t) nk (t) ϕ(snk )

ϕ

ϕ

with nk (t) =

pn (t) ,  ϕ  (t − snk ) pn (snk )

k = 1, . . . , n ,

in the form fn (t) =

n 

ϕ ξnk nk (t),

ξnk = fn (snk ) .

(8.2.24)

k=1 j

Let $L_n^j$, $j = 1, 2$, denote the interpolation operators which associate with a function $g : (-1,1) \longrightarrow \mathbb{R}$ the polynomials
$$(L_n^1 g)(t) = \sum_{j=1}^{n+1} \frac{g(t_{nj})\, p_{n+1}^{\varphi^{-1}}(t)}{(t - t_{nj})\,\big(p_{n+1}^{\varphi^{-1}}\big)'(t_{nj})} \qquad \text{and} \qquad (L_n^2 g)(t) = \sum_{i=1}^{n} \frac{g(s_{ni})\, p_n^{\varphi}(t)}{(t - s_{ni})\,\big(p_n^{\varphi}\big)'(s_{ni})}\,.$$
Now the system (8.2.21), (8.2.22) can be written as the operator equation
$$A_n f_n = \beta_n L_n^1 \psi_1 + \gamma_n\,, \qquad f_n \in \mathcal{R}(P_n)\,, \qquad (8.2.25)$$
together with
$$\langle L_n^2 \psi_1, f_n\rangle = \gamma_0\,, \qquad (8.2.26)$$
where $A_n = -S_n + K_n$, $S_n = L_n^1 S P_n$, and $K_n = L_n^1 K_n^0 P_n$. The equivalence of (8.2.22) and (8.2.26) follows from the algebraic accuracy of the Gaussian rule w.r.t. the Chebyshev nodes of second kind. Let us recall the following well-known facts (for (C) see [206, Theorem 14.3.1], and for (D) cf. Lemma 3.2.39).

(C) For all $f \in \mathbf{C}[-1,1]$, $\lim\limits_{n\to\infty}\|f - L_n^1 f\|_{\varphi^{-1}} = 0$ and $\lim\limits_{n\to\infty}\|f - L_n^2 f\|_{\varphi} = 0$.

(D) If $r > \frac12$, then there exists a constant $c > 0$ such that, for any real $p$ with $0 \le p \le r$, $\|f - L_n^1 f\|_{\varphi^{-1},p} \le c\, n^{p-r}\,\|f\|_{\varphi^{-1},r}$ for all $f \in \mathbf{L}^{2,r}_{\sqrt{\varphi}^{-1}}$ and $\|f - L_n^2 f\|_{\varphi,p} \le c\, n^{p-r}\,\|f\|_{\varphi,r}$ for all $f \in \mathbf{L}^{2,r}_{\sqrt{\varphi}}$, where $n \in \mathbb{N}$ and $c \ne c(n, p, f)$.

Lemma 8.2.9 Let $\psi_j \in \mathbf{C}^2[-1,1]$, $j = 1, 2$. Then,

(a) $\lim\limits_{n\to\infty}\|K_n - K_0\|_{\mathbf{L}^2_{\sqrt{\varphi}^{-1}} \to \mathbf{L}^2_{\sqrt{\varphi}^{-1}}} = 0$,

(b) there exist constants $\eta > 0$ and $n_0 \in \mathbb{N}$ such that
$$\|A_n f_n\|_{\varphi^{-1}} \ge \eta\,\|f_n\|_{\varphi^{-1}} \qquad \forall\, f_n \in \mathcal{R}(P_n)\,, \ \forall\, n \ge n_0\,.$$

Proof At first, recall that the operator A0 : L2√

ϕ −1

−→ L2√

ϕ −1

is Fredholm with

index −1 (cf. the proof of Proposition 8.2.8). By Banach’s Theorem 2.2.4, the  operator A0 : L2√ −1 −→ R(A0 ), .ϕ −1 has a bounded inverse. Hence, there ϕ

is a constant η0 > 0 with A0 f ϕ −1 ≥ η0 f ϕ −1

∀ f ∈ L2√

ϕ −1

.

(8.2.27)

By definition of Kn0 and in virtue of the algebraic accuracy of the Gaussian rule, for fn ∈ R(Pn ) we have (Kn0 fn )(t) =

1 π



1 −1



L2n K(., t)ϕ −1 fn (s)ϕ(s) ds =

1 π



1

−1

L2n [K(., t)] (s)fn (s) ds ,

8.2 The Drag Minimization Problem for a Wing

561

which implies      0  Kn − K0 Pn f 





# $  1   sup L2n [K(., t)] − K(., t) : −1 ≤ t ≤ 1 f ϕ −1 , ϕ π

where .∞ is the usual norm in C[−1, 1]. Since, due to (C) and the principle of uniform boundedness (see Theorem 2.2.1), the operator sequence L1n : C[−1, 1] −→ L2√

ϕ −1

is uniformly bounded, the last estimate together with (C) (applied to L2n ) leads to     lim Kn − L1n K0 Pn 

2 L√

n→∞

ϕ −1

2 →L√

= 0. ϕ −1

Again (C), the strong convergence of Pn = Pn∗ −→ I (the identity operator), and the compactness of the operator K0 : L2√ −1 −→ C[−1, 1] give us ϕ     lim L1n K0 Pn − K0  2 = 0, and (a) is proved. 2 L√

n→∞

ϕ −1

→L√

ϕ −1

Formula (8.2.19) implies the relation Sn = SPn . This and condition (a) lead to (An − A0 ) fn ϕ −1 ≤ αn fn ϕ −1

∀ fn ∈ R(Pn ),

(8.2.28)

where αn −→ 0. Together with (8.2.27), this yields (b).

 

Proposition 8.2.10 Assume $\psi_j \in \mathbf{C}^m[-1,1]$, $j = 1, 2$, for some integer $m > 2$, $\gamma_0 \ne 0$, and let $\psi_1(t)$ be non-constant. Then, for all sufficiently large $n$ (say $n \ge n_0$), there exists a unique solution $(f_n^*, \beta_n^*, \gamma_n^*) \in \mathcal{R}(P_n) \times \mathbb{R}^2$ of (8.2.25), (8.2.26) and
$$\lim_{n\to\infty}\sqrt{\|f_n^* - f^*\|_{\varphi^{-1}}^2 + |\beta_n^* - \beta^*|^2 + |\gamma_n^* - \gamma^*|^2} = 0\,, \qquad (8.2.29)$$
where $(f^*, \beta^*, \gamma^*)$ is the unique solution of (8.2.20). More precisely, we have
$$\sqrt{\|f_n^* - f^*\|_{\varphi^{-1}}^2 + |\beta_n^* - \beta^*|^2 + |\gamma_n^* - \gamma^*|^2} \le c\, n^{2-m} \qquad (8.2.30)$$

with a constant c > 0 independent of n. Proof Due to the Fredholmness of A0 : L2√

ϕ −1

−→ L2√

ϕ −1

to R(A0 ) = {0} (see Proposition 8.2.8, (b)), we have L2√

ϕ −1

2 (direct orthogonal sum w.r.t. ., .ϕ −1 ) for some g0 ∈ L√

ϕ −1

with index −1 and due = R(A0 ) ⊕ span {g0 } with g0 ϕ −1 = 1.

By H we denote the Hilbert space of all pairs (f, δ) ∈ L2√

ϕ −1

× R equipped

with the inner product (f1 , δ1 ), (f2 , δ2 )H = f1 , f2 ϕ −1 + δ1 δ2 . For a continuous

562

8 Applications

function g1 ∈ C[−1, 1] with g1 , g0 ϕ −1 = 0 and for n ∈ N, define the linear and bounded operators 2 B0 : H −→ L√

ϕ −1

,

(f, δ) → A0 f − δg1

and 2 Bn : H −→ L√

ϕ −1

,

(f, δ) → An f − δL1n g1 .

Let us consider the auxiliary problems B0 (f, δ) = g ∈ C[−1, 1],

2 (f, δ) ∈ L√

ϕ −1

×R

(8.2.31)

and Bn (fn , δn ) = L1n g,

(fn , δn ) ∈ R(Pn ) × R.

(8.2.32)

An immediate consequence of (8.2.28) is the relation     Bn (fn , δ) − B0 (fn , δ)ϕ −1 ≤ αn fn ϕ −1 + |δ|L1n g1 − g1 

ϕ −1

≤ βn (fn , δ)H

∀ (fn , δ) ∈ R(Pn ) × R, (8.2.33)

6

2  αn2 + L1n g1 − g1 ϕ −1 −→ 0. Equation (8.2.31) is uniquely solvable, # $ since R(A0 ) = f ∈ L2√ −1 : f, g0 ϕ −1 = 0 . Consequently, the part δ ∗ ∈ R of where βn =

ϕ

the solution (f ∗ , δ ∗ ) of (8.2.31) is uniquely determined by the condition g + δg1 , g0 ϕ −1 = 0,

i.e., δ ∗ = −

g, g0 ϕ −1 g1 , g0 ϕ −1

,

and f ∗ ∈ L2ϕ −1 is the unique solution (cf. Proposition 8.2.6,(a)) of A0 f = g + δ ∗ g1 . is Hence, in virtue of Banach’s Theorem 2.2.4, the operator B0 : H −→ L2√ ϕ −1

boundedly invertible, which implies that there is a constant η1 > 0, such that B0 (f, δ)ϕ −1 ≥ η1 (f, δ)H

∀ (f, δ) ∈ H.

(8.2.34)

8.2 The Drag Minimization Problem for a Wing

563

Putting this together with (8.2.33), we can state that there is a number n0 ∈ N, such that Bn (fn , δ)ϕ −1 ≥

η1 (fn , δ)H 2

∀ (fn , δ) ∈ R(Pn ) × R, ∀ n ≥ n0 .

(8.2.35)

 This implies that, for n ≥ n0 , the map Bn : R(Pn ) × R −→ span Tj : j = 0, 1, . . . , n} is a bijection, such that (8.2.32) is uniquely solvable for all n ≥ n0 . Moreover, if (fn∗ , δn∗ ) is the solution of (8.2.32), then   ∗ ∗ (f , δ ) − (Pn f ∗ , δ ∗ ) n n H  2   1  Ln g − Bn (Pn f ∗ , δ ∗ ) −1 ϕ η1      2   1  ≤ Ln g − B0 (Pn f ∗ , δ ∗ ) −1 + (B0 − Bn )(Pn f ∗ , δ ∗ )ϕ −1 ϕ η1      2   1  ≤ Ln g − B0 (Pn f ∗ , δ ∗ ) −1 + βn (Pn f ∗ , δ ∗ )H , ϕ η1 (8.2.36) ≤

which leads to   lim (fn∗ , δn∗ ) − (f ∗ , δ ∗ )H = 0.

n→∞

(8.2.37)

Now let (f ∗ , β ∗ , γ ∗ ) ∈ Vγ0 × R2 be the unique solution of (8.2.20) (cf. Proposition 8.2.8,(b)). There exist β1∗ , γ1∗ ∈ R such that g1 , gϕ −1 = 0 and g1 ϕ −1 = 1, where g = β ∗ ψ1 + γ ∗ and g1 = β1∗ ψ1 + γ1∗ . Because of dim (R(A0 ) ∩ {βψ1 + γ : β, γ ∈ R}) = 1 (cf. the proof of Proposition 8.2.8), we have g1 ∈ R(A0 ), i.e., g1 , g0 ϕ −1 = 0. With these notations, (f ∗ , 0) ∈ L2ϕ −1 × R is the unique solution of (8.2.31). Taking into account the previous considerations, we conclude that, for all sufficiently large n, there is a unique (fn1 , δn1 ) ∈ R(Pn ) × R satisfying An fn1 − δn1 L1n (β1∗ ψ1 + γ1∗ ) = L1n (β ∗ ψ1 + γ ∗ ) or equivalently An fn1 = (β ∗ + δn1 β1∗ )L1n ψ1 + γ ∗ + δn1 γ1∗ ,

564

8 Applications

  where, due to (8.2.37), fn1 − f ∗ ϕ −1 −→ 0 and δn1 −→ 0. It follows D

E D E E D

L2n ψ1 , fn1 = L2n ψ1 , ϕ −1 fn1 −→ ψ1 , ϕ −1 f ∗ = ψ1 , f ∗ = γ0 . ϕ

ϕ

Consequently, for all sufficiently large n, L2n ψ1 , fn1 = 0 and (fn∗ , βn∗ , γn∗ ) with fn∗ =

γ0 fn1 , 2 Ln ψ1 , fn1

βn∗ =

γ0 (β ∗ + δn1 β1∗ )

, L2n ψ1 , fn1

γn∗ =

γ0 (γ ∗ + δn1 γ1∗ )

L2n ψ1 , fn1

is a solution of (8.2.25), (8.2.26). This solution is unique, since (fn1 , δn1 ) was uniquely determined. Furthermore, fn∗ −→ f ∗ in L2√

ϕ −1

and βn∗ −→ β ∗ , γn∗ −→ γ ∗ ,

and (8.2.29) follows. To prove the error estimate (8.2.30), first we recall that ψj ∈ Cm [−1, 1], j = 1, 2 for some m > 2 implies, due to Lemma 8.2.2, the continuity ∂ k K(s, t) , k = 1, . . . , m − 2, for (s, t) ∈ [−1, 1]2. of the partial derivatives ∂t k Consequently, √ −Sf ∗ = β ∗ ψ1 + γ ∗ − K0 f ∗ ∈ Cm−2 [−1, 1] ⊂ L2,m−2 , −1 ϕ

√ i.e., in virtue of (B), f ∗ ∈ ϕL2,m−2 . Taking into account the uniform boundedness ϕ 2 1 of Ln : C[−1, 1] −→ L√ −1 (see (C)) and (D), we get, for all fn ∈ R(Pn ), ϕ

(Kn − K0 )fn ϕ −1

    ≤ L1n (K0n − K0 )fn 

    + (L1n K0 − K0 )fn 

ϕ −1

ϕ −1

# $        ≤ sup L2n K(., t) − K(., t) : −1 ≤ t ≤ 1 fn ϕ −1 + (L1n K0 − K0 )fn  ϕ

ϕ −1

    ≤ c n1−m sup K(., t)ϕ,m−1 : −1 ≤ t ≤ 1 fn ϕ −1 + K0 fn ϕ −1 ,m−1 ≤ cn1−m fn ϕ −1 ,

where we have also used that K0 : L2√

ϕ −1

√ −→ Cm−2 [−1, 1] ⊂ L2,m−2 is bounded −1 ϕ

(cf. [20, Lemma 4.2]). Hence, in (8.2.28) and (8.2.33) we have αn = O(n2−m )

8.2 The Drag Minimization Problem for a Wing

565

√ , also βn = O(n2−m ). From (8.2.36) and and, since g1 = β1∗ ψ1 + γ ∗ ∈ L2,m −1 ϕ

we obtain the bound g = β ∗ ψ1 + γ ∗ ∈ L2,m ϕ −1     1 1 (fn , δn ) − (Pn f ∗ , 0) ≤

2 η1

H

    1 Ln g − g 

ϕ −1

+ A0 L2√

→L2√ −1 ϕ −1 ϕ

 ∗    f − Pn f ∗  −1 + βn f ∗  −1 ϕ ϕ



≤ c n2−m .

 

Now (8.2.30) easily follows.

Remark 8.2.11 From the proof of Proposition 8.2.10 it is seen that the first assertion, including (8.2.29), remains true if the assumption $\psi_j \in \mathbf{C}^m[-1,1]$ with $m > 2$ is replaced by $\psi_j \in \mathbf{C}^2[-1,1]$ together with $\dim \mathcal{N}(A_0) = 0$ (cf. Proposition 8.2.8). We further remark that a generalization of the results presented in this section to a system of wings can be found in [87].

8.2.4 Numerical Examples

First, let us discuss some computational aspects. Because of $f_n \in \mathcal{R}(P_n)$ we have, taking into account (8.2.19) and $T_{n+1}(t_{nj}) = 0$,
$$(S f_n)(t_{nj}) = \sum_{k=1}^{n}\frac{f_n(s_{nk})}{\pi\,\varphi(s_{nk})\,U_n'(s_{nk})}\int_{-1}^{1}\frac{\varphi(s)\,U_n(s)\,ds}{(s-s_{nk})(s-t_{nj})}$$
$$= \sum_{k=1}^{n}\frac{f_n(s_{nk})}{\pi\,\varphi(s_{nk})\,U_n'(s_{nk})\,(s_{nk}-t_{nj})}\int_{-1}^{1}\left(\frac{1}{s-s_{nk}}-\frac{1}{s-t_{nj}}\right)\varphi(s)\,U_n(s)\,ds$$
$$= -\sum_{k=1}^{n}\frac{f_n(s_{nk})\,T_{n+1}(s_{nk})}{\varphi(s_{nk})\,U_n'(s_{nk})\,(s_{nk}-t_{nj})} = \frac{1}{n+1}\sum_{k=1}^{n}\frac{\varphi(s_{nk})\,f_n(s_{nk})}{s_{nk}-t_{nj}}\,.$$
From this, the following expression is obtained:
$$-(S f_n)(t_{nj}) + (K_n^0 f_n)(t_{nj}) = \frac{1}{n+1}\sum_{k=1}^{n}\varphi(s_{nk})\,Y_0(s_{nk},t_{nj})\,f_n(s_{nk})\,, \qquad j = 1,\ldots,n+1$$
(cf. (8.2.8), (8.2.23), and (8.2.21)). Thus, to find the solution $(f_n, \beta_n, \gamma_n)$ of (8.2.21), (8.2.22), we have to solve the algebraic linear system of equations
$$\mathbf{A}_n \xi_n = \eta_n\,, \qquad (8.2.38)$$
where $\eta_n = \big[\eta_{jn}\big]_{j=1}^{n+2} = \big[0\ \ldots\ 0\ \gamma_0\big]^{\mathrm T} \in \mathbb{R}^{n+2}$ is given,
$$\xi_n = \big[\xi_{nk}\big]_{k=1}^{n+2} = \big[f_n(s_{1n})\ \ldots\ f_n(s_{nn})\ \beta_n\ \gamma_n\big]^{\mathrm T} \in \mathbb{R}^{n+2}$$
is the vector we are looking for, and where the matrix $\mathbf{A}_n = \big[a_{jk}\big]_{j,k=1}^{n+2}$ is defined by
$$a_{jk} = \frac{\varphi(s_{nk})\,Y_0(s_{nk},t_{nj})}{n+1}\,, \qquad j = 1,\ldots,n+1\,, \ k = 1,\ldots,n\,,$$
$$a_{j,n+1} = -\psi_1(t_{nj})\,, \quad a_{j,n+2} = -1\,, \qquad j = 1,\ldots,n+1\,,$$
$$a_{n+2,k} = \frac{\pi\,\varphi(s_{nk})\,\psi_1(s_{nk})}{n+1}\,, \quad k = 1,\ldots,n\,, \qquad a_{n+2,n+1} = a_{n+2,n+2} = 0\,.$$
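As an illustration of how the system (8.2.38) can be set up in practice, here is a minimal numerical sketch (not part of the original text). It assumes $\varphi(x)=\sqrt{1-x^2}$ and, consistent with the derivation above, takes $Y_0(s,t) = \frac{1}{t-s} + K(s,t)$ with a user-supplied kernel K; the function names are placeholders, and psi1 and K are assumed to be NumPy-vectorized callables, since (8.2.8) is not restated in this section.

import numpy as np

def solve_collocation_system(n, psi1, K, gamma0=-1.0):
    # gamma_0 = -1 is used in the numerical examples below (cf. (8.2.5)).
    # Nodes: t_{nj} = cos((2j-1)pi/(2n+2)), j = 1..n+1 (first kind),
    #        s_{nk} = cos(k pi/(n+1)),      k = 1..n   (second kind).
    j = np.arange(1, n + 2)
    k = np.arange(1, n + 1)
    t = np.cos((2 * j - 1) * np.pi / (2 * n + 2))
    s = np.cos(k * np.pi / (n + 1))
    phi_s = np.sqrt(1.0 - s ** 2)

    A = np.zeros((n + 2, n + 2))
    Y0 = 1.0 / (t[:, None] - s[None, :]) + K(s[None, :], t[:, None])   # assumed form of Y0
    A[: n + 1, :n] = phi_s[None, :] * Y0 / (n + 1)        # a_{jk}
    A[: n + 1, n] = -psi1(t)                              # a_{j,n+1}
    A[: n + 1, n + 1] = -1.0                              # a_{j,n+2}
    A[n + 1, :n] = np.pi * phi_s * psi1(s) / (n + 1)      # side condition (8.2.22)

    eta = np.zeros(n + 2)
    eta[n + 1] = gamma0
    xi = np.linalg.solve(A, eta)
    return s, xi[:n], xi[n], xi[n + 1]                    # nodes, f_n values, beta_n, gamma_n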

Now let us discuss the question whether, under the assumptions of Proposition 8.2.10, the condition numbers of the matrices $\mathbf{A}_n$ are uniformly bounded or whether it is necessary to apply a preconditioning to $\mathbf{A}_n$. Note that, under the assumptions of Proposition 8.2.10, the operator sequence $\mathcal{B}_n : \mathcal{R}(P_n) \times \mathbb{R}^2 \longrightarrow \mathbb{P}_{n+1} \times \mathbb{R}$ ($\mathbb{P}_n$ being the set of all real algebraic polynomials of degree less than $n$) defined by
$$\mathcal{B}_n(f_n, \beta, \gamma) = \left(A_n f_n - \beta L_n^1\psi_1 - \gamma\,,\ \langle L_n^2\psi_1, f_n\rangle\right)$$
(cf. (8.2.25) and (8.2.26)) is bounded and stable, i.e., the norms of $\mathcal{B}_n$ and of $\mathcal{B}_n^{-1}$ (which exist for all sufficiently large $n$) are uniformly bounded (as a consequence of Proposition 8.2.10 together with Lemma 8.2.9). Hereby, the norms in $\mathbf{H}_n^1 := \mathcal{R}(P_n)\times\mathbb{R}^2$ and $\mathbf{H}_n^2 := \mathbb{P}_{n+1}\times\mathbb{R}$ are given by
$$\|(f_n,\beta,\gamma)\|_{\mathbf{H}_n^1} = \sqrt{\|f_n\|_{\varphi^{-1}}^2 + |\beta|^2 + |\gamma|^2} \qquad\text{and}\qquad \|(p_n,\delta)\|_{\mathbf{H}_n^2} = \sqrt{\|p_n\|_{\varphi^{-1}}^2 + |\delta|^2}\,,$$
respectively. Set $\omega_n = \sqrt{\dfrac{\pi}{n+1}}$ and define the operators
$$\mathbf{E}_n : \mathbf{H}_n^1 \longrightarrow \mathbb{R}^{n+2}\,, \qquad (f_n,\beta,\gamma) \mapsto \big(\omega_n f_n(s_{1n}),\ldots,\omega_n f_n(s_{nn}),\beta,\gamma\big)$$
and
$$\mathbf{F}_n : \mathbf{H}_n^2 \longrightarrow \mathbb{R}^{n+2}\,, \qquad (p_n,\delta) \mapsto \big(\omega_n p_n(t_{1n}),\ldots,\omega_n p_n(t_{n+1,n}),\delta\big)\,,$$
where the space $\mathbb{R}^{n+2}$ is equipped with the usual Euclidean inner product. These operators are unitary. To prove this, we recall the representation (8.2.24) of $f_n(t)$, in order to see that, for all $(f_n,\beta,\gamma) \in \mathbf{H}_n^1$ and $(\xi_1,\ldots,\xi_{n+2}) \in \mathbb{R}^{n+2}$,

n 

fn (snk )ξk + βξn+1 + γ ξn+2

k=1

 =

1

−1

fn (s)

n D E 1  ϕ ξk nk (s) ds + βξn+1 + γ ξn+2 = (fn , β, γ ), En−1 (ξ1 , . . . , ξn+2 ) 1 . Hn ωn k=1

Analogously, one can show that D E Fn (pn , δ), (η1 , . . . , ηn+2 ) = (pn , δ), Fn−1 (η1 , . . . , ηn+2 )

H2n

holds true for all (pn , δ) ∈ H2n and (η1 , . . . , ηn+2 ) ∈ Rn+2 . As a consequence we " n+2 ! get, that an appropriate matrix Bn = bj k j,k=1 can be defined by ⎛ Bn (ξ1 , . . . , ξn+2 ) = Fn Bn En−1 (ξ1 , . . . , ξn+2 ) = Fn Bn ⎝ωn−1

n 

⎞ ϕ nk , ξn+1 , ξn+2 ⎠ ξk

k=1

⎛ = Fn ⎝ωn−1 ⎛% =⎝

n  



n 

n  ϕ kn − ξn+1 L1n ψ1 − ξn+2 , ωn ξk An ϕ(snk )ψ1 (snk )ξk ⎠ k=1 k=1

ϕ  kn (tnj )ξk − ωn ψ1 (tnj )ξn+1 − ωn ξn+2 An

&n+1

⎛% =⎝

n 

, ωn j =1

k=1

&n+1 aj k ξk + ωn aj,n+1 ξn+1 + ωn aj,n+2 ξn+2

,

n 

n 

⎞ ϕ(snk )ψ1 (snk )ξk ⎠

k=1

⎞ ωn−1 an+2,k ξk ⎠ ,

j =1 k=1

k=1

i.e., $b_{jk} = a_{jk}$ for $j = 1,\ldots,n+1$, $k = 1,\ldots,n$, $b_{jk} = \omega_n a_{jk}$ for $j = 1,\ldots,n+1$, $k = n+1, n+2$, and $b_{n+2,k} = \omega_n^{-1} a_{n+2,k}$, $k = 1,\ldots,n$. This means that
$$\mathbf{B}_n = \mathbf{F}_n\mathbf{A}_n\mathbf{E}_n^{-1} \qquad\text{with}\qquad \mathbf{E}_n = \operatorname{diag}\big[1\ \ldots\ 1\ \omega_n^{-1}\ \omega_n^{-1}\big]\,, \quad \mathbf{F}_n = \operatorname{diag}\big[1\ \ldots\ 1\ \omega_n^{-1}\big]\,,$$
and we can solve the system $\mathbf{B}_n\widetilde\xi = \widetilde\eta$ instead of $\mathbf{A}_n\xi = \eta$, where $\widetilde\eta = \mathbf{F}_n\eta$ and $\widetilde\xi = \mathbf{E}_n\xi$.

Therefore, in the following numerical examples we can check the stability of the method by computing the condition number of the matrix $\mathbf{B}_n$ w.r.t. the Euclidean norm, which is equal to the quotient $\dfrac{s_{\max}(\mathbf{B}_n)}{s_{\min}(\mathbf{B}_n)}$ of its largest and its smallest singular values. Moreover, the left-hand side of (8.2.30) can be approximated by the following discretization of it:
$$\mathrm{err} = \sqrt{\frac{\pi}{N+1}\sum_{k=1}^{N}\big[f_n^*(s_{Nk}) - f_N^*(s_{Nk})\big]^2 + \big|\beta_n^* - \beta_N^*\big|^2 + \big|\gamma_n^* - \gamma_N^*\big|^2} \qquad (8.2.39)$$
with $N \gg n$. To test our numerical method and the associated convergence estimate (8.2.30), we consider four simple curves. The first one is the following non-symmetric part of the unit circle:
$$\psi_1(t) = \cos\frac{\pi(3t+13)}{8}\,, \qquad \psi_2(t) = \sin\frac{\pi(3t+13)}{8}\,, \qquad -1 \le t \le 1\,.$$

The second one is a symmetric part of the ellipse having semi-axes $a = 1$, $b = 0.2$ and centered at the point $(0, b)$, given by
$$\psi_1(t) = a\cos\!\left[\left(\frac{\pi}{2}+\frac{1}{100}\right)t + \frac{3\pi}{2}\right], \qquad \psi_2(t) = b\sin\!\left[\left(\frac{\pi}{2}+\frac{1}{100}\right)t + \frac{3\pi}{2}\right], \qquad -1 \le t \le 1\,.$$
The third one is the non-symmetric three times continuously differentiable curve
$$\psi_1(t) = t\,, \ -1 \le t \le 1\,, \qquad \psi_2(t) = \begin{cases}\dfrac{t^4}{4} & : \ -1 \le t \le 0\,,\\[1ex] \dfrac{t^4}{2} & : \ 0 < t \le 1\,,\end{cases}$$
while the last one is the (non-symmetric and not two times differentiable) open curve defined by the following natural (smooth) cubic spline:
$$\psi_1(t) = t\,, \ -1 \le t \le 1\,, \qquad \psi_2(t) = \begin{cases}\dfrac{a+b}{4}(1+t)^3 - \left(a + \dfrac{a+b}{4}\right)(1+t) + a & : \ -1 \le t \le 0\,,\\[1ex] \dfrac{a+b}{4}(1-t)^3 + \left(b + \dfrac{a+b}{4}\right)(t-1) + b & : \ 0 < t \le 1\,,\end{cases}$$
where we have chosen $a = 0.1$, $b = 0.25$.

In the following tables we report the (global) error defined by (8.2.39) and the errors $|\beta_N^* - \beta_n^*|$ and $|\gamma_N^* - \gamma_n^*|$ we have obtained for some values of $n$ and $N$,


as well as the condition numbers of the matrix $\mathbf{A}_n$ and the preconditioned matrix $\mathbf{B}_n$. In all examples, we take $\gamma_0 = -1$ (cf. (8.2.5)). Moreover, in the last two tables we also present some values $n^r\cdot\mathrm{err}$ for an appropriate $r$ in order to determine the convergence rate, where err is given by (8.2.39). We can see that the convergence rate is higher than that forecast by Proposition 8.2.10 (remember that $\psi$ is three times continuously differentiable in Example 3 and twice in Example 4).

  n     (8.2.39)    beta*_n      gamma*_n    |b*_N-b*_n|   |g*_N-g*_n|   cond(Bn)   cond(An)
  4     8.99e-04    -0.6926674   0.1832556   6.68e-06      1.77e-06      2.5770     3.3097
  8     3.68e-08    -0.6926607   0.1832538   1.18e-14      9.74e-15      2.5771     5.2093
  16    9.98e-14    -0.6926607   0.1832538   1.22e-14      3.97e-15      2.5771     9.1997
  256               -0.6926607   0.1832538                               2.5771     130.1899
  Example 1: non symmetric circular arc, N = 256

  n     (8.2.39)    beta*_n      gamma*_n    |b*_N-b*_n|   |g*_N-g*_n|   cond(Bn)   cond(An)
  4     6.05e-03    -0.5984153   0.0000000   1.88e-04      6.40e-16      2.6788     2.8100
  8     1.44e-04    -0.5982318   0.0000000   4.92e-06      1.21e-15      2.6919     4.0774
  16    7.15e-07    -0.5982269   0.0000000   9.14e-10      1.03e-15      2.6918     7.0137
  32    1.25e-10    -0.5982269   0.0000000   2.22e-16      8.98e-16      2.6918     13.0283
  256               -0.5982269   0.0000000                               2.6918     97.6167
  Example 2: symmetric ellipse arc, N = 256

  n     (8.2.39)    beta*_n      gamma*_n    |b*_N-b*_n|   |g*_N-g*_n|   cond(Bn)   cond(An)   n^4*err
  4     1.42e-03    -0.5671039   0.0227110   1.69e-04      8.82e-06      2.0341     2.706      0.363
  8     2.65e-05    -0.5669336   0.0227055   1.70e-06      3.22e-06      2.0339     4.438      0.108
  16    5.27e-07    -0.5669351   0.0227025   1.46e-07      2.87e-07      2.0339     8.021      0.035
  32    3.58e-08    -0.5669353   0.0227022   1.06e-08      2.13e-08      2.0339     29.634     0.038
  64    2.34e-09    -0.5669353   0.0227022   7.09e-10      1.44e-09      2.0339     15.223     0.039
  128   1.41e-10    -0.5669353   0.0227022   4.31e-11      8.75e-11      2.0339     58.458     0.040
  256               -0.5669353   0.0227022                               2.0339     116.104
  Example 3: non symmetric C^3 arc, N = 256

  n     (8.2.39)    beta*_n      gamma*_n    |b*_N-b*_n|   |g*_N-g*_n|   cond(Bn)   cond(An)   n^{5/2}*err
  4     2.10e-04    -0.6297931   0.0036251   9.66e-05      3.02e-05      2.1603     2.782      0.007
  8     2.32e-05    -0.6297061   0.0036502   9.64e-06      5.09e-06      2.1601     4.514      0.004
  16    3.50e-06    -0.6296973   0.0036545   8.24e-07      7.73e-07      2.1601     8.094      0.004
  32    6.28e-07    -0.6296965   0.0036552   6.22e-08      1.08e-07      2.1601     15.293     0.004
  64    1.16e-07    -0.6296965   0.0036553   4.34e-09      1.43e-08      2.1601     29.699     0.004
  128   2.07e-08    -0.6296965   0.0036553   2.90e-10      1.82e-09      2.1601     58.510     0.004
  512               -0.6296965   0.0036553                               2.1601     231.372
  Example 4: non symmetric C^2 arc, N = 512
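The stability check and the rate columns in these tables are easy to reproduce numerically. The following sketch (an illustration, not code from the book) computes the spectral condition number of the preconditioned matrix $\mathbf{B}_n = \mathbf{F}_n\mathbf{A}_n\mathbf{E}_n^{-1}$ for a given matrix of (8.2.38), and estimates the empirical order of convergence from the error column of Example 4.

import numpy as np

def cond_preconditioned(A):
    # A is the (n+2) x (n+2) matrix of (8.2.38); omega_n = sqrt(pi/(n+1)).
    n = A.shape[0] - 2
    omega = np.sqrt(np.pi / (n + 1))
    E_inv = np.ones(n + 2); E_inv[n:] = omega        # E_n^{-1} = diag(1,...,1, w_n, w_n)
    F = np.ones(n + 2); F[n + 1] = 1.0 / omega       # F_n     = diag(1,...,1, 1/w_n)
    B = F[:, None] * A * E_inv[None, :]              # B_n = F_n A_n E_n^{-1}
    s = np.linalg.svd(B, compute_uv=False)
    return s.max() / s.min()                         # s_max(B_n) / s_min(B_n)

# Empirical convergence order from the errors of Example 4:
# if err(n) ~ c * n^(-r), then r ~ log(err(n1)/err(n2)) / log(n2/n1).
n_vals = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0])
err = np.array([2.10e-4, 2.32e-5, 3.50e-6, 6.28e-7, 1.16e-7, 2.07e-8])
rates = np.log(err[:-1] / err[1:]) / np.log(n_vals[1:] / n_vals[:-1])
print(rates)   # roughly between 2.4 and 3.2, consistent with the n^{5/2}*err column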


8.3 Two-Dimensional Free Boundary Value Problems

In this section we give two examples for the application of the solvability theory presented in Sect. 5.8 and of the numerical methods discussed in Sect. 7.4 for nonlinear Cauchy singular integral equations. Here these integral equations (cf. (8.3.5) or (8.3.7) and (8.3.25)) have to be considered as boundary integral equations, obtained from free boundary value problems for harmonic functions by applying tools from complex function theory. To be more precise, we consider two seepage flow problems, a dam problem and a channel problem. The respective nonlinear Cauchy singular integral equations are of different kinds, both corresponding to the type studied in Sect. 5.8. The fact that in the original problems the free boundary is unknown is expressed by an additional unknown parameter $\kappa \in (0,1)$ in the integral equation together with an additional condition. As a consequence, we have to combine the numerical methods introduced in Sect. 7.4 with a regula falsi method for finding a zero of a real-valued function (a minimal sketch of such a regula falsi step is given below). We remark that further examples where the approach presented here is used can be found in [79, 185] and in [70], for an electrochemical machining problem and an associated optimal control problem, respectively. We also refer to the paper [186], devoted to the free boundary value problem of Miranda.
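For orientation, here is a minimal sketch of the regula falsi (false position) iteration used to drive the outer parameter to a zero of a real-valued function. The function E below stands for the map $\kappa \mapsto E_C(\kappa)$ (or $\kappa \mapsto L(\kappa)$ in the dam problem), each evaluation of which requires solving the discretized integral equation for that fixed $\kappa$; the code is an illustration, not taken from the book.

def regula_falsi(E, a, b, tol=1e-6, maxit=50):
    # Assumes E(a) and E(b) have opposite signs on the bracket [a, b] in (0, 1).
    Ea, Eb = E(a), E(b)
    c = a
    for _ in range(maxit):
        c = b - Eb * (b - a) / (Eb - Ea)   # secant through (a, E(a)) and (b, E(b))
        Ec = E(c)
        if abs(Ec) < tol:
            break
        if Ea * Ec < 0.0:
            b, Eb = c, Ec                  # keep the sign change in [a, c]
        else:
            a, Ea = c, Ec                  # keep the sign change in [c, b]
    return c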

8.3.1 Seepage Flow from a Dam

We consider the model of two-dimensional steady seepage from a dam investigated in [184] (see the following picture). The seepage region is bounded by the dam front OA, represented by the function $f : [0,h] \longrightarrow [0,H]$, $x \mapsto y = f(x)$, by the (unknown) seepage line AB, and by the base OB ($y = 0$), which is divided into an impermeable part OL and a permeable part LB (the drain). The function $f(x)$ is assumed to be Hölder continuously differentiable, where $f(0) = 0$, $f(h) = H$ (the gauge height), and its derivative satisfies
$$0 < m_0 \le f'(x) \le M_0 < \infty \qquad \forall\, x \in [0,h]\,. \qquad (8.3.1)$$

Outside the interval $[0,h]$ we define
$$f(x) = \begin{cases} f'(0)\,x & : \ x < 0\,,\\ f'(h)(x-h) + H & : \ x > h\,.\end{cases} \qquad (8.3.2)$$
Neglecting capillary and evaporation effects and assuming a laminar flow, Darcy's law says that the velocity vector $v$ is equal to
$$v = -\gamma\,\operatorname{grad}\left(\frac{p}{\rho g} + y\right),$$


where $p = p(x,y)$ denotes the pressure at the point $(x,y)$, $\rho$ is the density of the fluid, $g$ the acceleration constant, and $\gamma$ the permeability coefficient. From $\operatorname{div} v = 0$ it follows that the velocity potential $u(x,y) = \frac{p(x,y)}{\rho g} + y$ satisfies the Laplace equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
in the seepage region and the boundary conditions
$$u = H \ \text{on OA}\,, \qquad u = 0 \ \text{on LB}\,, \qquad \frac{\partial u}{\partial y} = 0 \ \text{on OL}\,, \qquad u = \varphi\,, \ \frac{\partial u}{\partial n} = 0 \ \text{on AB}\,,$$
where the function $y = \varphi(x)$ represents the (unknown) seepage line and $\frac{\partial u}{\partial n}$ denotes the normal derivative (w.r.t. the boundary of the seepage region).

[Figure: Steady seepage flow from a dam. The seepage region in the $(x,y)$-plane is bounded by the dam front from $O(0,0)$ to $A(h,H)$, the free boundary $S$ (seepage line) from $A$ to $B(b,0)$, the impermeable part of the base up to $L(l,0)$, and the drain LB.]

With the help of the theory of holomorphic functions, in [184, Section 3] this boundary value problem is transformed into the following nonlinear singular integral equation x(τ ) −

1 π



1 κ

1

1 f (x(σ )) dσ −D = σ −τ π



1 −1

r(κ, σ ) dσ , σ −τ

1 0. Continue with F. H. Compute the flow rate qflow by (8.3.12) and the seepage line by (8.3.11) for an appropriate set of parameter values t, for example t = cos jmπ , j = 1, . . . , m−1. For this, we again replace the integrals by Gaussian rules:

1 π



1 −1



(1 − κ 2 s 2 )(1 − s 2 ) 

1 −1

1  ln |xnk − t| 6  2 , m k=1 1 − κ 2 xnk m

ln |s − t| ds



u∗κ (s) ds s+

1+κ−2κt 1−κ



n 

∗ λnk ξnk

,

α,β

α,β

k=1

− 1 ,− 12

xnk = xnk2

xnk +

1+κ−2κt 1−κ

.

We remark that the elliptic integrals $K(\kappa)$ and $K\!\big(\sqrt{1-\kappa^2}\big)$ can also be approximated with the help of the Gauss quadrature w.r.t. the Chebyshev weight of first kind. But, in general, their numerical evaluation by the method of the arithmetic-geometric mean is more effective (see [66]); a small sketch of this method follows the table below.

As a first example, we consider the situation $h = H = 1$, $l = 2$ and present the numerical results for the flow rate $q_{\mathrm{flow}}$ obtained by the described algorithm for different values of the discretization parameter $n$, the order $m$ of the Gauss-Chebyshev quadrature rule, and the threshold parameter $\varepsilon$ in the following table.

  n    m    eps      q_flow
  10   20   0.0010   0.36179
  40   40   0.0010   0.36119
  10   60   0.0001   0.36110
  20   60   0.0001   0.36102
  40   60   0.0001   0.36101
  Linear dam front, h = H = 1, l = 2
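The arithmetic-geometric mean evaluation of the complete elliptic integral just mentioned can be sketched as follows (an illustration, not code from the book); it assumes the modulus convention $K(\kappa) = \int_0^{\pi/2}(1-\kappa^2\sin^2\vartheta)^{-1/2}\,d\vartheta$ with $0 < \kappa < 1$.

import math

def ellipk_agm(kappa, tol=1e-15):
    # K(kappa) = pi / (2 * AGM(1, sqrt(1 - kappa^2)))
    a, b = 1.0, math.sqrt(1.0 - kappa * kappa)
    while abs(a - b) > tol * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

kappa = 0.5
K = ellipk_agm(kappa)                              # K(kappa)
K_prime = ellipk_agm(math.sqrt(1.0 - kappa ** 2))  # K(sqrt(1 - kappa^2))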

For a second example, we choose $h = H = 1$, $l = 3.16$ and compare the flow rate computed by the algorithm presented here with the flow rate $q_{\mathrm{flow}} = 0.20115$ obtained in [200].

  n    m    eps      q_flow
  10   60   0.0001   0.20154
  20   60   0.0001   0.20157
  Linear dam front, h = H = 1, l = 3.16

Finally, we simulate the case of a perpendicular dam front (i.e., $h = 0$) by sending $h$ to zero in our algorithm. For this situation, in [29] (cf. also [94]) an analytic approximation of the flow rate is given by the formula
$$q_a = -3l + \sqrt{9\,l^2 + 3\,H^2}\,. \qquad (8.3.24)$$
The results obtained by the above algorithm for $H = 1$, $h = 10^{-6}$, and $\varepsilon = 10^{-5}$, in comparison to the mentioned analytic approximation $q_a$, can be seen in the following table.

  l     n    m    q_flow    q_a
  2.0   11   60   0.24506   0.24499
  2.0   15   60   0.24505
  1.0   10   40   0.46429   0.46410
  1.0   15   40   0.46424
  1.0   20   40   0.46423
  0.3   10   60   1.09457   1.05192
  0.3   15   60   1.09359
  0.3   20   60   1.09323
  0.1   10   60   1.79480   1.45784
  0.1   15   60   1.78429
  0.1   20   60   1.78065
  "Perpendicular" dam front, H = 1, h = 1e-6, eps = 1e-5

We remark that, in the above mentioned literature, also an analytic approximation for the seepage line is presented, namely
$$x = l - \frac{y^2 - q_a^2}{2\,q_a}$$
with $q_a$ from (8.3.24). But, for small quotients $\frac{l}{H}$, the "seepage lines" computed by that formula make no sense. Readers interested in the computed seepage lines are referred to [78].
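For comparison purposes, the analytic approximation is trivial to evaluate; the following sketch (an illustration, not from the book) computes $q_a$ from (8.3.24) and the corresponding approximate seepage line.

import math

def q_a(l, H):
    # Analytic approximation (8.3.24) for the perpendicular dam front
    return -3.0 * l + math.sqrt(9.0 * l * l + 3.0 * H * H)

def seepage_line_x(y, l, H):
    # x = l - (y^2 - q_a^2) / (2 q_a); unreliable for small l/H (see the remark above)
    qa = q_a(l, H)
    return l - (y * y - qa * qa) / (2.0 * qa)

print(q_a(2.0, 1.0))   # about 0.245, cf. the first row of the table above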


The Nonlinear Case The algorithm “Regula falsi for L(κ) = 0” presented above can also be applied in case of a nonlinear dam front. In Step B, we have only to replace the usage of the formulas (8.3.22) and (8.3.23) by the combination of a collocation-quadrature method with the modified Newton method described in Sect. 7.4.2 for Case A. Unfortunately, due to the end point singularities of Rκ (t) and so also of F  (t, z) (cf. (8.3.13), (8.3.14), and (8.3.15)), we are not able to refer to the convergence analysis of Sect. 7.4.3, since we also have no a-priori information on the smoothness of u∗κ in the representation u∗κ = σ1 u∗κ (cf Remark 7.4.16). Nevertheless, for certain examples, here we present respective numerical results, which show that, as in the linear case, we get good results already for relatively small n and m. More details together with pictures of computed seepage lines, some computational aspects, and comparisons with results obtained by other methods, one can find in [78]. For different parameters b ∈ R, we consider the following dam fronts: x − 2b)x , 1 − 2b (DAM2) x = g(y) = b2 + [2(1 +b) − y]y − b,  2x ln b−1 −1 b(e . (DAM3) y = f (x) = b−1 − 2 The flow rates, computed for n = 10, m = 40, and ε = 0.001, are given in the following table. (DAM1) y = f (x) =

  Example   b         q_flow
  DAM1      -5.0000   0.36351
            -0.1000   0.38063
            -0.0010   0.38483
  DAM2       1.0000   0.37903
             0.0100   0.41346
             0.0001   0.41464
  DAM3       0.1000   0.40805
             0.0100   0.43052
             0.0001   0.43972


8.3.2 Seepage Flow from a Channel

We consider a two-dimensional steady seepage problem from a channel through a homogeneous and isotropic porous medium underlain by a drain at a finite depth (see the following picture).

[Figure: Steady seepage flow from a channel. The seepage region in the $(x,y)$-plane is bounded by the channel contour between $A(0,H)$ and $B(b,H)$, the two free boundaries $S_1$ and $S_2$, and the drain between $C(c,0)$ and $D(d,0)$ on the $x$-axis.]

In [187] and [69] the problem is transformed into a singular integral equation problem which can be written in the form (cf. [69, (1.5)-(1.8)])
$$u(t) = \frac{1}{\pi}\int_{-1}^{1}\frac{F(s, u(s))}{s - t}\,ds + f(t) + D_C + E_C\,t\,, \qquad -1 < t < 1\,, \qquad (8.3.25)$$
where
$$F(t,z) = g\!\left(\frac{b(1-t)}{2} + z\right) - H\,,$$

1 Rκ (t) = π r(κ, t) = H − K = E





H K

1 κ

f (t) = Rκ (t) −

-

1

1 1 − r(κ, s) s−t s+t



t 1



ds (s 2 − 1)(1 − κ 2 s 2 )

 1 − κ 2, 1 ,

 E(κ, t) = 0

t

. ds ,

,

b(1 − t) H 1 − t + ln , 2 π 1+t (8.3.26) 0 < κ < 1,

1 0 ,

(8.3.29)

(a natural condition, due to the practical problem behind it) and
$$|g'(x_1) - g'(x_2)| \le c_g\,|x_1 - x_2|^{\eta}\,, \qquad x_1, x_2 \in [0,b]\,, \qquad (8.3.30)$$
for some positive constants $c_g$ and $\eta \in (0,1]$, and is defined outside the interval $[0,b]$ by
$$g(x) = \begin{cases} H + g'(0)\,x & : \ x < 0\,,\\ H + g'(b)(x - b) & : \ x > b\,.\end{cases} \qquad (8.3.31)$$

The unknown function $u : [-1,1] \longrightarrow \mathbb{R}$ has to satisfy the boundary conditions
$$u(-1) = u(1) = 0\,. \qquad (8.3.32)$$
Beside the unknown constants $D_C, E_C \in \mathbb{R}$, the parameter $\kappa \in (0,1)$ is also looked for, where $u$, $D_C$, $E_C$ have to be considered as functions of $\kappa$, and $\kappa$ is determined by the condition
$$E_C(\kappa) = 0\,. \qquad (8.3.33)$$
Hence, for a fixed $\kappa \in (0,1)$, problem (8.3.25), (8.3.31) is of the form (5.8.66), (5.8.67). This suggests to check whether we are able to fulfil condition (N4) in Sect. 5.8.3 and to apply Proposition 5.8.26 concerning the solvability of that problem. Let
$$\Lambda_1 = -\min\big\{g'(\xi) : \xi \in [0,b]\big\}\,, \qquad \Lambda_2 = \max\big\{g'(\xi) : \xi \in [0,b]\big\}\,,$$
and assume that
$$\Lambda_1\,\Lambda_2 < 1\,. \qquad (8.3.34)$$
Then, since $F_z(t,z) = g'\!\left(\frac{b(1-t)}{2} + z\right)$, condition (5.8.68) is satisfied. Note that, due to condition (8.3.29), both $\Lambda_1$ and $\Lambda_2$ are positive and (cf. (5.8.70))
$$F_z(-1,0) = g'(b) > 0\,, \qquad F_z(1,0) = g'(0) < 0\,.$$


Moreover, in view of (8.3.28), we also have F (±1, 0) = 0 (cf. (5.8.2)). Finally, because of (cf. (8.3.30))     b  b   b(1 − t) |Ft (t, z)| = g + z  ≤ g  ∞ , 2 2 2 it remains to show that there are constants 0 and δ, , such that (cf. (5.8.71) and (5.8.72))  −δ |f  t)| ≤ 0 1 − t 2 ,

−1 < t < 1 ,

(8.3.35)

and 0≤δ
1 sufficiently large, /

π F0 (τ ) = 2

.   1 ∞  dξ ξ τ −k ξ k dξ ln τ + ln 1 −  = ln τ −  , τ k −1 −1 1 − ξ2 1 − ξ2 k=1



1

which implies, in accordance with (8.3.53), that the constant is equal to − ln 2. Thus, √ 1 τ + τ2 − 1 . R−1 (τ ) := F0 (τ ) = √ ln 2 π Using the same notations as above we obtain now for some nodes τj > 1, j = 1, . . . , M, , a = TTR D1 T a ,  M π 2 . In view of the where TR = [Ri−1 (τj )]N−1, and D = diag 1 21 . . . N−1 1 i=0,j =1 N three-term recurrence relation (8.3.52) we can again apply the idea of fast discrete orthogonal polynomial transforms. Finally, let us consider the integral (8.3.48) in case t > 1 nearby 1. Since the σ0

values h(xnk2 ) = quadrature rule

σ0

h2 (xnk2 )−H σ0

have already been computed, our aim is to construct a

σ20 (xnk2 )



1 −1

n  h(s) 0 σ0 0 σ2 (s) ds ≈ σnk (t) h(xnk2 ) , s−t k=1

where  0 σnk (t)

=

1 −1

σ0

nk2 (s) 0 σ (s) ds . s−t 2

(8.3.54)

594

8 Applications σ0

By pk 2 (t) refer to the normalized orthogonal polynomial of degree k with respect to the weight σ20 (t), −1 < t < 1, i.e. σ0 pk 2 (t)

 =

 σ20 2 P (s) σ0 (s) ds

1

− 12

k

−1

σ0

Pk 2 (t) .

Then (comp. (8.3.42)) σ0

σ

σ0

σ

σ

σ0

0 2 2 βk+1 pk+1 (t) = (t − αk 0 ) pk 2 (t) − βk 0 pk−1 (t) ,

σ0

with p0 2 (t) ≡ we have





σ0

2 , p−1 (t) ≡ 0, and c2 0 = 0

1

σ2

2

σ0 nk2 (t)

=

σ0 λnk2

n−1 

1 −1

k = 0, 1, 2, . . . ,

σ20 (s) ds. Analogously to (8.3.51)

σ0

σ0

σ0

σ0

σ0

σ0

pj 2 (xnk2 ) pj 2 (t) ,

j =0

which implies, in view of (8.3.54), σ0

0 σnk (t) = λnk2

n−1 

pj 2 (xnk2 ) qj 2 (t) ,

j =0

where σ0 qj 2 (t)

 =

σ0

1

pj 2 (s)

−1

s−t

σ20 (s) ds ,

|t| > 1 .

The three-term recurrence relation (8.3.55) implies σ

σ0

σ

σ0

σ

σ0

0 2 2 βk+1 qk+1 (t) = (t − αk 0 ) qk 2 (t) − βk 0 qk−1 (t) ,

k = 1, 2, . . . ,

and σ0 σ0

σ0

σ0

β1 2 q1 2 (t) = (t − α0 2 ) q0 2 (t) + cσ 0 . 2

With the notations ! "M , b= , bj j =1 ,

, bj =

n  k=1

(8.3.55)

σ0

0 σnk (tj ) h(xnk2 )

8.3 Two-Dimensional Free Boundary Value Problems

595

for some nodes tj > 1 and ! "n b = bk k=1 ,

σ0

bk = h(xnk2 ) ,

we can write , b = TTq Dλ Tp b ,  0 n−1, n σ0 σ0 can be and Tq = qkσ2 (tj )]n−1, M pj 2 (xnk2 ) k=0,j =1 j =0,k=1 handled as discrete orthogonal polynomial transforms and Dλ denotes the diagonal  0 0 σ0 matrix Dλ = diag λσ2 . . . , λσnn2 . It remains to calculate cσ 0 and q0 2 (t). Write cσ 0 n1 2 2 in the form (cf. the definition of σ20 (t) in (7.4.71)) where again Tp =

c2 0 σ2

π



=−

1 π



1 −1

b2 (t) σ2 (t) dt = (A1)(x ∗ ) − σ2 (x ∗ ) , t − x∗

where Au = σ2 u − Sb2 σ2 u. Then we are able to apply Proposition 5.2.21, in particular (5.2.59) with the function X(z) defined in (7.4.65). In view of (7.4.66) we have, for |z| > 1, X(z) = x2 z + x1 z + x0 + 2

∞ 

x−n z−n

n=1

with (cf (7.4.66)) x2 = −1, x1 = α2 − β2 , and x0 = view of (5.2.59),

1+α2 +β2 3



(α2 +β2 )2 . 2

(A1)(t) = x2 t 2 + x1 t + x0 =: q(t) . Furthermore, for z ∈ R \ [−1, 1], σ0

cσ 0 q0 2 (z) 2

π

1 = π



1 −1

b2 (t) σ2 (t) dt x∗ − t t − z

1 = ∗ π(x − z)



1 −1

⎡ (5.2.62)

=



1 1 + ∗ t −z t −t c2 0

 b2 (t)σ2 (t) dt ⎤

1 ⎣ σ X(z) − q(z) + 2 ⎦ x∗ − z π

Hence, in

596

8 Applications

with - X(z) = (1−z ) exp 2

1

−1

g2 (t) dt t −z

.

  . 1 − z   + α 2 + β2 − 2 . = (1−z ) exp g2 (z) ln  1 + z 2

Numerical Results Here we present certain numerical results obtained with this combination of a regula falsi method and a Newton collocation method for different shapes of the channel: ⎧ 66 − x : 16 ≤ x < 26 , ⎪ ⎪ ⎪ ⎨ 40 : 26 ≤ x ≤ 50 , (CH1) g(x) = ⎪ ⎪ ⎪ ⎩ 5(x − 84) + 50 : 50 < x ≤ 84 . 12 ⎧ 120 − x : 20 ≤ x < 30 , ⎨ (CH2) g(x) = 5(x − 68) ⎩ + 50 : 30 ≤ x ≤ 68 . 19  56.25 − 264.0625 − (x − 35)2 : 20 ≤ x < 35 , (CH3) g(x) =  106.25 − 4389.0625 − (x − 35)2 : 35 ≤ x ≤ 70 .  56.25 − 264.0625 − (x − 35)2 : 20 ≤ x < 35 , (CH4) g(x) =  76.25 − 1314.0625 − (x − 35)2 : 35 ≤ x ≤ 60 . We compare the flow rates, computed by means of (8.3.41) and our method, with results of [195], which are based on the Baiocchi transformation and the application of a finite difference successive over-relaxation method with projection. In doing so, we also apply our method to non smooth shape functions of the channel (see examples (CH1) and (CH2)) and we document the behaviour of both iteration processes, the regula falsi method for solving Ec (κ) = 0 and the Newton iteration method for the linearized discrete equations. In all examples we applied the product integration rule (8.3.50) with respect to the Chebyscheff weight (1 − ξ 2 )−1/2 with N = 20 nodes. We refer to [69] for figures with computed seepage lines.

Example CH1 CH2 CH3 CH4

Number n + 2 of collocation points 100 100 25 100 100

Computed flow rates

Flow rate qflow 94.9238 67.3589 76.7290 76.5019 66.2439

qflow from [195] 94.9171 67.0490 77.4687 76.6511 67.0235 66.3845

8.3 Two-Dimensional Free Boundary Value Problems

597

In the last column of the previous table, the first flow rates in Examples (CH3) and (CH4) are computed in [195] with a finer grid as that for the second ones. Iteration step Number of regula falsi Parameter κ EC (κ) Newton steps 1 0.450000 −9.043380 5 2 0.600000 −3.031338 4 3 0.800000 7.863467 4 4 0.655647 −0.488888 5 5 0.666348 0.030348 9 6 0.665690 −0.001822 4 7 0.665730 0.000115 2 8 0.665727 −0.000007 2 9 0.665727 0.000000 1 Iteration information, Example (CH1), 100 collocation points

Iteration step Number of regula falsi Parameter κ EC (κ) Newton steps 1 0.450000 36.442627 7 2 0.225000 19.164337 6 3 0.112500 9.358071 4 4 0.056250 2.857352 4 5 0.028125 −1.797644 4 6 0.038986 0.219950 3 7 0.037546 −0.027089 2 8 0.037704 0.000260 2 9 0.037703 −0.000003 2 10 0.037703 0.000000 1 Iteration information, Example (CH2), 100 collocation points

In the following table there are listed the computed parameters of the Gaussian rule with respect to the weight σ20 (t) and the collocation points for n + 2 = 25 σ0 σ0 (number of collocation points) in Example (CH3) (for λnk2 x ∗ − xnk2 see (8.3.49)). For the Stieltjes procedure we used the following parameters (cf. (8.3.45)): M = 20, M1 = M20 = 40, Mi = 20, i = 2, . . . , 19, x19 = −x1 = 0.9999, x18 = −x2 = 0.999, x17 = −x3 = 0.99, xi = 0.15 · (i − 4) − 0.9 , i = 4, . . . , 16. In the last table, for Examples (CH3) and (CH4) we present the computed flow rates in dependence on the number of collocation points.

598

8 Applications

k

σ0

xnk2

σ0

λnk2

σ0 σ0 λnk2 x ∗ − xnk2

j

μ0

2 xn+2,j

1 −0.99936 2 −0.98567 3 −0.95589 4 −0.91066 5 −0.85082 6 −0.77743 7 −0.69175 8 −0.59521 9 −0.48938 10 −0.37595 11 −0.25669 12 −0.13346 13 −0.00816 14 0.11729 15 0.24096 16 0.36095 17 0.47538 18 0.58244 19 0.68041 20 0.76769 21 0.84281 22 0.90445 23 0.95150 24 0.98308 25 0.99852 Parameters of Gaussian rules, Example (CH3), 25 collocation points 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23

−0.98923 −0.96195 −0.91864 −0.85994 −0.78678 −0.70034 −0.60203 −0.49350 −0.37656 −0.25319 −0.12549 0.00438 0.13420 0.26177 0.38489 0.50147 0.60952 0.70722 0.79291 0.86516 0.92278 0.96485 0.99073

0.00015 0.00077 0.00199 0.00379 0.00609 0.00871 0.01145 0.01411 0.01649 0.01842 0.01974 0.02037 0.02026 0.01942 0.01791 0.01583 0.01333 0.01059 0.00780 0.00519 0.00296 0.00129 0.00030

0.00010 0.00047 0.00112 0.00191 0.00261 0.00298 0.00280 0.00192 0.00031 −0.00192 −0.00458 −0.00737 −0.00996 −0.01203 −0.01329 −0.01360 −0.01289 −0.01127 −0.00898 −0.00635 −0.00379 −0.00170 −0.00040

Iteration step Number of regula falsi Parameter κ EC (κ) Newton steps 1 0.450000 −1.162736 6 2 0.600000 4.720009 4 3 0.479648 −0.062450 4 4 0.481331 0.000661 2 5 0.481313 −0.000008 2 6 0.481313 0.000000 1 Iteration information, Example (CH3), 100 collocation points

8.3 Two-Dimensional Free Boundary Value Problems Iteration step Number of Parameter κ EC (κ) Newton steps regula falsi 1 0.450000 3.163046 4 2 0.225000 −4.872387 3 3 0.361432 0.021232 3 4 0.360833 0.000190 2 5 0.360828 0.000002 2 6 0.380828 0.000000 1 Iteration information, Example (CH4), 100 collocation points

Number n + 2 of collocation points Flow rate (CH1) Flow rate (CH4) 10 77.5749 67.0372 20 76.8342 66.4788 40 76.5982 66.3094 60 76.5416 66.2709 80 76.5182 66.2555 100 76.5019 66.2439 Computed flow rates, Examples (CH3) and (CH4)

599

Chapter 9

Hints and Answers to the Exercises

Exercise 2.1.1 For f ∈ X, fn ∈ X0 and fn −→ f , we have necessarily (and so uniquely) to define Af = lim A0 fn in order to guarantee the conn→∞

tinuity of A : X −→ Y. Otherwise, this definition is correct, since, due to ∞ A0 fn − A0 fm Y ≤ A0 X0 →Y fn − fm X , (A0 fn )n=1 is a Cauchy sequence in Y and hence convergent. Moreover, gn −→ f implies A0 fn − A0 gn Y ≤ AX0 →Y fn − gn X −→ 0 if n −→ ∞, such that lim A0 gn = lim A0 fn . n→∞

n−→∞

Furthermore, Af = A0 f for all f ∈ X0 (choose fn = f ). Additionally, fn −→ f and gn −→ g, with fn , gn ∈ X0 as well as α, β ∈ K imply αfn + βgn −→ αf + βg and A(αf + βg) = lim A0 (αfn + βgn ) = lim (αA0 fn + βA0 gn ) = αAf + βAg . n→∞

n→∞

The boundedness of A : X −→ Y follows by Af Y = lim A0 fn Y ≤ A0 X0 →Y lim fn X = A0 X0 →Y f X n→∞

n→∞

for fn ∈ X0 and fn −→ f .

∞  xn be absolutely convergent. Exercise 2.1.2 Let X be a Banach space and n=1    n ∞ m+j  m+j     xn , ∀ m, j ∈ N, the sequence Then, due to  xk  xn is a  ≤  k=m  k=m k=1 n=1 Cauchy sequence and, consequently, convergent. Conversely, let every absolutely ∞ convergent series in X be convergent and let (yn)n=1 be a Cauchy sequence in X.   ∞ −k   Then there is a subsequence ynk k=1 such that ynk+1 − ynk < 2 , k ∈ N. This

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 P. Junghanns et al., Weighted Polynomial Approximation and Numerical Methods for Integral Equations, Pathways in Mathematics, https://doi.org/10.1007/978-3-030-77497-4_9

601

602

9 Hints and Answers to the Exercises

yields that the series

∞  

 ynk+1 − ynk is absolutely convergent and so, due to our

k=1

assumption, convergent, i.e., there exists an y ∗ ∈ X such that ynk −→ y ∗ in X ∞ if k −→ ∞. Since (yn )n=1 is a Cauchy sequence, it follows yn −→ y ∗ in X if n −→ ∞. ∞ Exercise 2.1.3 Let Y be a Banach space and (An )n=1 be a Cauchy sequence in L(X, Y). Then, for every ε > 0, there is an n0 ∈ N such that An x − Am x ≤ An − Am x ≤ εx for all m, n > n0 and all x ∈ X. Consequently, for every x ∈ X there is a y =: Ax ∈ Y with An x −→ y = Ax in Y if n −→ ∞. Obviously, the operator A : X −→ Y defined in this way is a linear one. Moreover, since the ∞ Cauchy sequence (An )n=1 is bounded in L(X, Y), say An  ≤ M < ∞ for all n ∈ N, and since An x −→ Ax if n −→ ∞, we also have Ax ≤ Mx for all x ∈ X. Moreover, An x − Ax = lim An x − Am x ≤ εx ∀ n > n0 , ∀ x ∈ X . m→∞

Hence, An − A = sup {An x − Ax : x ≤ 1} ≤ ε for all n > n0 . ∞ Now let L(X, Y) be a Banach space and (yn )n=1 be a Cauchy sequence in Y. Fix one x0 ∈ X with x0  = 1. Then, by Corollary 2.2.10, there is an f0 ∈ X∗ such that f0 X∗ = 1 and f0 (x0 ) = 1. Define An : X −→ Y, x → f0 (x)yn , n ∈ N. Of course, these operators are linear ones and they are also bounded, since An x = |f0 (x)| yn  ≤ xyn . Moreover, (An − Am )x ≤ yn − ym  x, ∞ i.e., An − Am  ≤ yn − ym . Hence, (An )n=1 is a Cauchy sequence in L(X, Y) and so, by our assumption, convergent to some A ∈ L(X, Y). In particular, this yields Ax0 − An x0  = Ax0 − yn  −→ 0 if n −→ ∞, i.e., yn −→ Ax0 in Y if n −→ ∞. ∞ is dense in H by Exercise 2.1.4 If the system (en )n=0 is complete, then H m1 < m2 < relation (2.1.3). If fn ∈ H converges to f ∈ H, then  there is a sequence  . . . with fn ∈ Hmn , n ∈ N. Property (P1) implies f − Pmn f  ≤ f − fn  −→ 0 if n −→ ∞. Together with H1 ⊂ H2 ⊂ . . ., this leads to f − Pm f  −→ 0 if m −→ ∞ for all f ∈ H. Using (2.1.4) we get Parseval’s equality (2.1.2). ∞ Exercise 2.2.2 If An −→ A strongly, then (An f )n=1 is a Cauchy sequence  in Y and the set An f Y : n ∈ N is bounded for all f ∈ X. Hence, by the principle of uniform boundedness, sup {An X→Y : n ∈ N} < ∞. Conversely, if ∞ M := sup {An X→Y : n ∈ N} + 1 < ∞ and if (An f )n=1 is a Cauchy sequence in Y for all f from a dense subset X0 of X, then for every f ∈ X and every ε ε > 0 there is an f0 ∈ X0 and an n0 ∈ N such that f − f0 X < 3M and ε An f0 − Am f0 Y < 3 for all m, n > n0 . Consequently, for m, n > n0 , An f − Am f Y ≤ An (f − f0 )Y + An f0 − Am f0 Y + Am (f0 − f )Y < Mf − f0 X +

ε + Mf0 − f X < ε . 3

9 Hints and Answers to the Exercises

603

∞ Hence, for every f ∈ Y, the sequence (An f )n=1 is a Cauchy sequence in the Banach space Y, such that there is an operator A : X −→ Y satisfying An f −→ Af for all f ∈ X. It is easily seen that the operator A : X −→ Y is linear and bounded. Exercise 2.2.5 In order to be able to apply Theorem 2.2.3, we have to show that the inverse map A−1 : Y −→ X, which is obviously linear, is a closed operator. For this, let gn ∈ Y with gn −→ g in Y and A−1 gn −→ f in X. Since A ∈ L(X, Y), we have also gn = A(A−1 gn ) −→ Af in Y, i.e., g = Af or f = A−1 g. That means the operator A−1 : Y −→ X is closed. Exercise 2.2.11 Set X0 := {αx0 : α ∈ K} and define f (αx0 ) = αx0 X . Then |f (αx0 )| = αx0 X and, by Theorem 2.2.7, there is a functional f0 ∈ X∗ with f0 (x) = f (x) for all x ∈ X0 and |f0 (x)| ≤ x for all x ∈ X. This yields f0 X∗ = 1, and Corollary 2.2.10 is proved. Now, if x1 , x2 ∈ X with x1 = x2 , then, by the just proved Corollary 2.2.10, there is an f ∈ X∗ satisfying f (x1 ) − f (x2 ) = f (x1 − x2 ) = x1 − x2 X = 0. ∞ Exercise 2.2.12 Let f ∈ ( p )∗ and define ηk = f (ek ), where ek = (δnk )n=0 . For ∞  ∞ every ξ = (ξn )n=0 ∈ p , we have ξ = ξn en , since n=0

  n      ξk ek  ξ −   k=0

Hence, f (ξ ) =

 =

ξn f (en ) =

n=0

p

|ξk |

p

−→ 0

if

n −→ ∞ .

k=n+1

p

∞ 

1

∞ 

∞ 

ξn ηn , ∀ ξ ∈ p . Write ηn = |ηn |eiαn with

n=0

0 ≤ αn < 2π and set

ξ (n) = (|η0 |q−1 e−iα0 , . . . , |ηn |q−1 e−iαn , 0, 0, . . .) . Then ξ (n)  n 

n      ∈ p and f ξ (n) = |ηk |q , where ξ (n)  p = k=0

1



n  k=0

p

|ηk |q

. Consequently,

k=0 n  k=0

 n 1 p    (n)  q   ≤ f ( p )∗ |ηk | = f ξ |ηk | , q

k=0

1

p

|ηk |(q−1)p

=

604

9 Hints and Answers to the Exercises

which yields

 n 

1 q

|ηk |

q

≤ f ( p )∗ and, if n tends to infinity, we get η =

k=0

∞ ∈ q with η q ≤ f ( p )∗ . On the other hand, if η ∈ q then, by Hölder’s (ηn )n=0 inequality (2.2.3),

 ∞     |f (ξ )| =  ξn ηn  ≤ ξ  p η q ,   n=0

which means f ∈ ( p )∗ and f ( p )∗ ≤ η q . Exercise 2.3.1 Let A ε2 ⊂ E be a finite ε2 -net for the set A ⊂ E. Obviously, the set   Aε = x ∈ A ε2 : U ε2 (x) ∩ A = ∅ is also an 2ε -net for A. For every x ∈ Aε , choose 2 2    ε an ax ∈ U 2 (x) ∩ A. Then Bε := ax : x ∈ A ε is a finite ε-net for A. Indeed, for 2

a ∈ A there is an x ∈ Aε ∩ U ε2 (a), which implies d(ax , a) ≤ d(ax , x) + d(x, a) < 2 ε ε 2 + 2 = ε. Exercise 2.3.2

∞ (a) Let A ⊂ E be relatively compact, i.e., its closure A is compact, and let (xn )n=1 be a sequence with xn ∈ A for all n ∈ N. If the image B = {xn : n ∈ N} of this sequence is finite, then, obviously, it has a convergent subsequence. Hence, let B be infinite and assume that B has no condensation point. Thus, for every x ∈ E there is an εx > 0 such that Uεx (x) ∩ B ⊂ {x}. Due to the compactness N

of A, there exist x1 , . . . , xN ∈ E such that A ⊂ Uεxk (xk ), which leads to the k=1

contradiction that B ⊂ {x1 , . . . , xN } cannot be infinite. Consequently, B has a ∞ condensation point, which is the limit of a subsequence of (xn )n=1 . ∞ (b) Now assume that every sequence (xn )n=1 of elements xn ∈ A has a convergent subsequence. Assume that there is an ε > 0 for which no finite ε-net for A n−1

exists. Choose x0 ∈ A and, for n ∈ N, xn ∈ A \ Uε (xk ). Then d(xn , xm ) ≥ ε k=0

for all m, n ∈ N with m = n, which leads to the contradiction that the sequence ∞ has no convergent subsequence. Hence, for every ε > 0, A ⊂ E (xn )n=1

Gα of possesses a finite ε-net. Moreover, suppose there is an open covering α∈J

A, which  does not contain a finite subcovering of A. Set Aα := A ∩ Gα , α ∈ J , and let xk1 ∈ A : k = 1, . . . , m1 be a 1-net for A. Set U01 (xk1 ) := A ∩ U1 (xk1 ) and assume that, w.l.o.g., U01 (x11 ) is not covered by finitely many Aα ’s. Since   U01 (x11 ) ⊂ A, there is a finite 12 -net xk2 ∈ A : k = 1, . . . , m2 for U01 (x11 ) ⊂ A. W.l.o.g., U01 (x12 ) := U 1 (x12 ) ∩ U01 (x11 ) is not covered by finitely many Aα . 2 2  ∞ Continuing this process we get a sequence x1n n=1 with x1n ∈ A and U01 := n

9 Hints and Answers to the Exercises

605

U 1 (x1n )∩. . .∩U1 (x11 )∩A is not covered by finitely many Aα ’s. By assumption, n  ∞ there exists a convergent subsequence x1nk k=1 , the limit x ∗ = lim x1nk of k→∞

which, of course, belongs to A. Thus, there are an α ∗ ∈ J such that x ∗ ∈ Gα ∗ and an ε > 0 such that Uε (x ∗ ) ⊂ Gα ∗ . This implies the existence of a k ∗ ∈ N n with x1 k ∈ Uε (x ∗ ) for all k ≥ k ∗ . Moreover, nk0 < 2ε for some k0 ≥ k ∗ ,  nk   nk  which yields U 1 x1 0 ⊂ Uε (x ∗ ) and, consequently U0 1 x1 0 ⊂ Aα ∗ in nk 0

nk 0

contradiction to the choice of the sets U 1 (x1n ). Hence, A is compact, i.e., A is n relatively compact. (c) Now assume that E is a complete metric space and that, for every ε > 0, is a finite all n ∈ N and let there  ε-net for A ⊂ E. Let xn ∈ A for 1 1 ) contains an infinite y11 , . . . , ym (y be a finite 1-net for A. W.l.o.g., U 1 1 1  ∞ ∞ . Analogously, there is a U 1 (y12 ) containing subsequence xn1 n=1 of (xn )n=1 2  ∞  ∞ an infinite subsequence xn2 n=1 of xn1 n=1 . In this way, we get a sequence ∞  ∞  of subsequences xnk n=1 of xnk−1 n=1 lying in U 1 (y1k ), where xn0 := xn . Then k  n ∞ ∞ and turns out to be a Cauchy sequence, xn n=1 is also a subsequence of (xn )n=1 i.e., a convergent sequence. By (b), A is relatively compact. (d) If A is relatively compact, then, by (a), every sequence of elements of A has a convergent subsequence, which implies, by the considerations in (b), that A has, for every ε > 0, a finite ε-net. Exercise 2.3.6 It is obvious that the condition is necessary. Let us prove its sufficiency. Assume that for every x0 ∈ E and every ε > 0 there is a δ > 0 such that |f (x) − f (x0 )| < ε ∀ f ∈ F , ∀ x ∈ Uδ (x0 ) and that the family F ⊂ C(E) is not (globally) equicontinuous. Then there is an ε∗ > 0 such that, for all n ∈ N, there exist xn , yn ∈ E and fn ∈ F with d(xn , yn ) < n1 and |fn (xn ) − fn (yn )| ≥ ε∗ . The  ∞  ∞ compactness of E guarantees the existence of subsequences xnk k=1 and ynk k=1 ∗ converging to an x ∗ ∈ E. Now there is a δ ∗ > 0 such that |f (x) − f (x ∗ )| < ε2 for ∗ ∗ ∗ all x ∈ Uδ ∗ (x ) and for all f ∈ F . Moreover, there is a k0 with d(xnk , x ) < δ and d(ynk , x ∗ ) < δ ∗ for all k > k0 . For k > k0 , this leads to the contradiction       ε∗ ≤ fnk (xnk ) − fnk (ynk ) ≤ fnk (xnk ) − fnk (x ∗ ) + fnk (x ∗ ) − fnk (ynk ) < ε∗ . " ! " ! Exercise 2.3.7 Since the maps g : [−1, 1] −→ − π4 , π4 , g : [0, ∞] −→ 0, π2 and ! " g : [−∞, ∞] −→ − π2 , π2 with g(x) = arctan(x) are bijective, the spaces (I, db ) with I = [−1, 1], I = [0, ∞], and I = [−∞, ∞] are metric   ∞spaces. Moreover, if ∞ is a sequence in I , then there is a subsequence xnk k=1 converging (w.r.t. (xn )n=1 the |.|-metric) to a finite or infinite ! number " in x ∈ I , implying, due to the sequential continuity of g : [−∞, ∞] −→ − π2 , π2 , that lim db (xnk , x) = 0. k→∞

606

9 Hints and Answers to the Exercises

Exercise ! π π " 2.3.8 One has only to take into account that the map g : [−∞, ∞] −→ − 2 , 2 is together with its inverse sequentially continuous, i.e., xn ∈ [−∞, ∞] ! " and xn −→ x ∈ [−∞, ∞] imply g(xn ) −→ g(x) as well as yn ∈ − π2 , π2 and yn −→ y imply g −1 (yn ) −→ g −1 (y).  ∞ Exercise 2.3.9 Assume that there is an ε > 0 and a subsequence fnk k=1 with   fn − f  ≥ 2ε for all k = 1, 2, . . . For every such k, there exists xk ∈ E with k ∞,E |fnk (xk ) − f (xk )| ≥ ε, where, due to the compactness of E, we can assume that lim xk = x ∈ E. Moreover, there is a δ > 0 such that |fn (x) − fn (y)| < 3ε for k→∞

all n ∈ N and for all x, y ∈ E with d(x, y) < δ, and there is an N ∈ N such that d(xk , x) < δ for all k > N. The estimate ε ≤ |fnk (xk ) − f (xk )| ≤ |fnk (xk ) − fnk (x ∗ )| + |fnk (x ∗ ) − f (x ∗ )| + |f (x ∗ ) − f (xk | ≤

ε ε + |fnk (x ∗ ) − f (x ∗ )| + , 3 3

k>N

shows that ε3 ≤ |fnk (x ∗ ) − f (x ∗ )| for all k > N in contradiction to the assumption lim fn (x) = f (x) for all x ∈ E. n→∞ xn  ≤ 1, then there Exercise 2.3.10 IfT1 ,T2 ∈ K(X, Y) and xn ∈ X with  ∞ ∞ is a subsequence xnk k=1 such that the sequences Tj xnk k=1 , j = 1, 2 are ∞  convergent in Y. Consequently, the sequence (T1 + T2 )xnk k=1 and the sequence ∞  (αT1 )xnk k=1 for a scalar α are convergent. Exercise 2.3.2 shows that T1 + T2 and αT1 are compact operators, such that K(X, Y) is a linear subspace of L(X, Y). It remains to prove that this subspace is closed. For this, let Tn ∈ K(X, Y), T ∈ L(X, Y), and Tn − T L(X,Y) −→ 0, and let usshow that, for every ε > 0, there is a finite ε-net for the set A = T f : f X ≤ 1 (cf.  Exercise 2.3.2). Hence, let ε > 0 be arbitrary, choose n0 ∈ N such that Tn0 − T X→Y < 2ε , and  Bε =  let     Tn0 f1 , . . . , Tn0 fN be an 2ε -net for B = Tn0 f : f X ≤ 1 , where fj X ≤ 1   for all j = 1, . . . , N (cf. Exercise 2.3.1). Then Aε = T fj : j = 1, . . . , N is a finite ε-net for the set A.   Exercise 2.3.13 Denote U1X := f ∈ X : f X ≤ 1 . Let ε>0  T ∈ K(X,Y) and ε X   be arbitrary. Then there are f1 , . . . , fN ∈ U1 , such that T f − T fj Y < 3 for all ∗ f ∈ U1X and a certain j = j (f ) ∈ {1, . . . , N} (see Exercise  ∗ 2.3.1). For g ∈ Y , we ! "T define Bg := g(T f1 ) · · · g(T fN ) ∈ CN . Then B U1Y is relatively compact in CN , since it is a bounded set in a finite dimensional space. Hence, there exist ∗ ∗ g1 , . . . , gM ∈ U1Y such that Bg − Bgk CN < ε3 for all g ∈ U1Y and for a certain k = k(g) ∈ {1, . . . , M}. This means |g(T fi ) − gk (T fi )|
0 and v γ ,δ f (−1 + 0) = 0 if γδ,δ > 0. Secondly, let f ∈ Cγ ,δ and assume that γ > 0 and δ > 0, i.e., v f (±1 ∓ 0) = 0. The other cases can be handled completely analogous. Let ε > 0 bearbitrarily chosen. Since P is dense in C, there is a polynomial p1 ∈ P  such that p1 − v γ ,δ f ∞ < ε3 . If we define p2 (x) = p1 (x) − p1 (1)

1+x 1−x − p1 (−1) , 2 2

then, because of γ , δ ≤ 1, v −γ ,−δ p2 is continuous on [−1, 1], and we have p1 − p2 ∞ ≤ ε3 due to v γ ,δ f (±1 ∓ 0) = 0, so that v γ ,δ f − p2 ∞ < 2ε 3 .   Furthermore, there is a polynomial p3 ∈ P with v −γ ,−δ p2 − p3 ∞ < 3v γε,δ  . ∞ This gives     p3 − f γ ,δ,∞ ≤ v γ ,δ p3 − p2 ∞ + p2 − v γ ,δ f ∞ < ε . Hence, the set P of polynomials is dense in Cγ ,δ , which is equivalent to γ ,δ lim Em (f ) = 0 for all f ∈ Cγ ,δ . m→∞ Cbγ ,δ containing all polynomiDenote by Cγ ,δ,∗ the smallest closed subspace of als. On the one hand, since Cγ ,δ is a closed subspace of Cbγ ,δ and P ⊂ Cγ ,δ , we have Cγ ,δ,∗ ⊂ Cγ ,δ . On the other hand, due to the already proved density of P in Cγ ,δ , every f ∈ Cγ ,δ is in the closure of P in Cbγ ,δ . Thus, Cγ ,δ ⊂ Cγ ,δ,∗ . Exercise 2.4.24 For every m ∈ N there exists a polynomial pm ∈ Pn such that γ ,δ f − pm γ ,δ,∞ < En (f ) + m1 . It follows γ ,δ

pm γ ,δ,∞ ≤ f γ ,δ,∞ + En (f ) + 1 := R

∀m ∈ N.

  The bounded set UR () in the finite dimensional space Pn , .γ ,δ,∞ is relatively  ∞ f compact, such that there is a subsequence pmk k=1 converging to a polynomial pn ∞ in the norm .γ ,δ,∞ . By the construction of the sequence (pm )m=1 , this leads to   γ ,δ f − pnf  ≤ En (f ). γ ,δ,∞

9 Hints and Answers to the Exercises

613

Exercise 2.4.25 Firstly, we observe that

|ϕ(f )| ≤

N 

|αk |v −γ ,−δ (xk )f γ ,δ,∞ ,

k=1

such that ϕ( Cb

γ ,δ

)∗



N 

v −γ ,−δ (xk )|αk |. Secondly, write αk = |αk |e−iδk with

k=1

0 ≤ δk < 2π and define f0 : [−1, 1] −→ C as the continuous function, which is a polynomial of degree less than 2 on every interval [−1, xN ], [xN , xN−1 ], . . ., [x1 , 1] and fulfills f0 (xk ) = e−iδk v −γ ,−δ (xk ), k = 1, . . . , N, and f0 (±1) = 0. Then N  ϕ(f0 ) = |αk |v −γ ,−δ (xk ) and f0 γ ,δ,∞ = 1. k=1

Exercise 2.5.3 ⊥

(a) Since M ⊂ M, on the one hand we have M ⊂ M⊥ . On the other hand, if f ∈ M⊥ and x ∈ M, then there is a sequence of elements xn ∈ M such that ⊥ xn −→ x, which implies f (x) = lim f (xn ) = 0. Consequently, f ∈ M . n→∞     (b) By definition, ⊥ X⊥ = x ∈ X : f (x) = 0 ∀ f ∈ X⊥ 0 0 . Obviously, X0 ⊂   ⊥ X⊥ . Let x ∈ X \ X . Since X is a closed linear subspace of X, we have 0 0 0 d := dist(x, X0 ) > 0, and, by Corollary 2.2.9, there is a functional f ∈ X∗ such that f (x0 ) = 0 for all x0 ∈ X0 and f (x) = d. Thus,the functional f belongs ⊥ X⊥ . to X⊥ 0 , but it does not vanish on x. That means x ∈ 0 ∗ ∗ (c) The element f belongs to N(A ) if and only if (A f )(x) = f (Ax) = 0 for all x ∈ X, which is equivalent to f ∈ R(A)⊥ . Exercise 2.6.11 Set A = {Tn u : u ∈ SX , n ∈ N} and B = {Mn u : u ∈ SX , n ∈ N}, where  SX = {u ∈ X : uX ≤ 1} denotes the unit ball in X. Let ε > 0 and vj : j = 1, . . . , N ⊂ X be an ε2 -net for A. There exists an N1 ∈ N such that Tn u − Mn u < ε2 for all u ∈ SX and n > N1 . Define B1 := {Mn u : u ∈ SX , n ≤ N1 } and B2 := B \ B1 . Taking into account the compactness  of the operators Mn we conclude the existence of an ε-net wj : j = 1, . . . , N2 for B1 . If v = Mn u ∈ B2 with u ∈ SX , then there is a vj such that     Mn u − vj  ≤ Mn u − Tn u + Tn u − vj  < ε + ε = ε . 2 2     Hence wj : j = 1, . . . , N2 ∪ vj : j = 1, . . . , N is an ε-net for B.

614

9 Hints and Answers to the Exercises

Exercise 2.6.13 By Corollary 2.6.3 we get B ∈ GL(X, Y) with              fB − f  = B −1 gB − A−1 g  ≤ B −1 (gB − g) +  B −1 − A−1 Af          = B −1 (gB − g) + B −1 (A − B)f      g − g B   A + A − B f  ≤ B −1  g  −1    A  gB − g   A + A − B f  ≤ g 1 − A−1 B − A cond(A)   =  1 − A−1 B − A



 gB − g A − B f  , + g A

and the exercise is done. Exercise 3.1.1 (a) Consider rh
0 , so that ψ : [0, 1 − 2r 2 h2 ] −→ R is increasing and thus x+

6     2 rh rh 1 − 1 − 2r 2 h2 = 1−r 2 h2 2 − 1 − r 2 h2 < 1 . ϕ(x) ≤ ψ(1−2r 2 h2 ) = 1−2r 2 h2 + 2 2

Moreover, in virtue of 1−

rh 1 1 − r 2 h2 > 1 − √ − > 0, 2 2 2 2

rh 2 2 2 2 we have x − rh 2 ϕ(x) ≥ − 2 > −1 + r h > −1. The case x ∈ [−1 + r h , 0] can be handled analogously. (b) Let −1 < x < −1 + 2r 2 t 2 , 0 < h ≤ 2r 2 t 2 , and 0 < t ≤ r √1r+1 . Then x + (r − k)h > −1 and

x + (r − k)h ≤ x + rh < −1 + 2r 2 t 2 + rh ≤ −1 +

2r 2 + = 1. r +1 r +1

9 Hints and Answers to the Exercises

615

(c) Analogously, for 1 − 2r 2 t 2 < x < 1, 0 < h ≤ 2r 2 t 2 , and 0 < t < get x − kh < 1 and x − kh ≥ x − rh > 1 − 2r 2 t 2 − rh ≥ 1 −

√1 , r r+1

we

2 2r − = −1 . r +1 r +1

Exercise 3.1.2 Let −1 + 2h2 < x < 1 − 2h2 and − h2 ≤ ρ ≤ h2 . Then √ √ √ 1−x >h 2>h 1+x

and



√ √ 1+x >h 2>h 1−x.

√ √ Hence, 1 − x > h 1 − x 2 and 1 + x > h 1 − x 2 as well as   . 1−x h 1−x =1− x+ < 1 − x + ϕ(x) 2 2 2   ≤ 1 − x + ρ 1 − x2 .   1−x h 3 ≤ 1 − x − ϕ(x) < 1 − x − = (1 − x) 2 2 2 and, analogously or by replacing x by −x,   1+x 3 < 1 + x + ρ 1 − x 2 < (1 + x) . 2 2  (x) = 1 ∓ √ δx Exercise 3.1.4 The derivatives ψ±

±√ 1

1+δ 2

1−x 2

have the only zeros x± =

and are positive on (−1, x+ ) resp. (x− , 1). Moreover,   1 − δ2 = ±1 , ψ± (∓1) = ∓1 and ψ± ± 1 + δ2

2 1−δ 2 < x+ and x− < − 1−δ . This all together shows the claimed bijectivity. 1+δ 2 1+δ 2  2  2 δ 1−δ   (x) ≥ ψ  , ψ = −1, 1−δ Since ψ± = ∓ 3 , we have, for x ∈ 2 + + 1+δ 2 1+δ (1−x 2 ) 2     2 2 1+δ 2  (x) ≥ ψ  − 1−δ 2 = 1+δ > 12 and, for x ∈ − 1−δ , 1 , ψ− > 12 . − 2 2 1+δ 2 1+δ 2   !   −1 "−1 −1 Consequently, ψ± ψ (y) (y) = ψ± < 2 for all y ∈ (−1, 1). 2 2 Exercise 3.1.6 If g ∈ ACr−1 loc (−1 + η1 h , 1 − η1 h ), then

where

2 2 g ∈ ACr−1 loc (−1 + η2 h , 1 − η2 h )

616

9 Hints and Answers to the Exercises

and   w(f − g)Lp (Ih,η ) + hr wϕ r g (r) Lp (I

h,η2 )

2

η

  ≤ w(f − g)Lp (Ih,η ) + hr wϕ r g (r) Lp (I

h,η1 )

1

6

η

which yields Kr,ϕ2 (f, t r )w,p ≤ Kr,ϕ1 (f, t r )w,p . Moreover, for h0 =

η1 η2 h,

,

we have

r−1 ACr−1 loc (Ih,η1 ) = ACloc (Ih0 ,η2 ) and

  w(f − g)Lp (Ih,η ) + hr wϕ r g (r) Lp (I

h,η1 )

1

 = w(f − g)Lp (Ih  ≤

η2 η1

r  2

0 ,η2

)

+

η2 η1

w(f − g)Lp (Ih

r

0 ,η2

2

  hr0 wϕ r g (r) Lp (I

h0 ,η2 )

)

  + hr0 wϕ r g (r) Lp (I

h0 ,η2 )

.

Thus, η1 Kr,ϕ (f, t r )w,p

   = sup inf w(f − g)Lp (Ih,η ) + hr wϕ r g (r) Lp (I 1

0 0, then  (r)  r r  there exist gn ∈ ACr−1 loc such that, for all n ∈ N, w(f − gn )p + t0 ϕ wgn p < −n 2 . For every fixed j ∈ N, this leads to f − gj =

∞ 

(gn+1 − gn ) in Lpw

n=j

and

∞   r ! (r) " (r)  ϕ w g < ∞. n+1 − gn p n=j

(r)

(r) − g In virtue of Lemma 2.4.9, we get f ∈ ACr−1 loc and f j =

∞  

(r)

gn+1 − gn(r)

n=j p

in Lϕ r w . Furthermore,     r (r)  " ! ϕ wf  ≤ ϕ r w f (r) − g (r)  + ϕ r wg (r)  j j p p p ⎡ ⎤ ∞    < t0−r ⎣ 2−n−1 + 2−n + 2−j ⎦ = c 2−j −→ 0 n=j

618

9 Hints and Answers to the Exercises

  if j −→ ∞. That means ϕ r wf (r)p = 0, i.e., f (r) (x) = 0 for almost all x ∈ (−1, 1). Hence, f ∈ Pr . Exercise 3.1.11 For every k ∈ N, there is a polynomial pk ∈ Pm such that w(f − pk )p < Em (f )w,p +

1 . k

  ∞ The sequence (pk )k=1 is bounded in the finite-dimensional space Pm , w ·p , because of wpk p ≤ w(pk − f )p + wf p ≤ Em (f )w,p + 1 + wf p .  ∞ f Hence, there is a converging subsequence pkj j =1 , for the limit Pm ∈ Pm of which we have     f  f  Em (f )w,p ≤ w(f − Pm )p ≤ w(f − pkj )p + w(pkj − Pm )p ≤ Em (f )w,p +

 1 f  + w(pkj − Pm )p −→ Em (f )w,p kj

if j −→ ∞ .

Exercise 3.1.13 If f ∈ C0,λ then #  $    h h 1ϕ (f, t)1,∞ = sup sup f x + ϕ(x) − f x − ϕ(x) : −1 + 2h2 ≤ x ≤ 1 − 2h2 2 2 0 0 there is an k0 ∈ N such that sup {fn − fk  : n, k ≥ k0 } ≤ ε. From p (∗) we get, for all g, h ∈ Lw , |am (h) − am (g)| ≤ |am (h − g)| ≤ h − g. Hence, lim am (fk − fn ) = am (fk − f ) for all m ∈ N0 . Consequently, for every ∈ N and n→∞ k ≥ k0 ,  ε ≥ lim

n→∞



 q1 |am (fk − fn )|

 =

q

m=0



 q1 |am (fk − f )|

q

1 ≤ q < ∞,

,

m=0

respective ε ≥ lim

sup |am (fk − fn )| = sup |am (fk − f )| ,

n→∞ 0≤m≤

which implies fk − f  ≤ ε

0≤m≤ p,r,∗

∀ k ≥ k0 . This yields f ∈ Bq,w and lim fk = f k→∞

p,r,∗

in Bq,w . Exercise 3.1.21 For 2k ≤ n < 2k+1 , we have r  r  2k ξ2k+1 ≤ (n + 1)r ξn ≤ 2k+1 ξ2k . Firstly, we conclude ∞  

r− q1

(n + 1)

q

ξn

=

n=n0

! " ∞ 2k+1 ∞  ∞ 2k+1  −1 [(n + 1)r ξn ]q  −1 2kr ξ2k q  ≤ 2rq = 2rq 2kr ξ2k k n+1 2 k k

k=k0 n=2

k=k0 n=2

q

.

k=k0

Secondly, ∞   k=k0 +1

2kr ξ2k

q

=

∞   k=k0 +1

≤ 2rq

2kr ξ2k

q

k −1 2

n=2k−1

∞ 

21−k = 2rq

k −1 2

21−k

k=k0 +1 n=2k−1

∞ ∞    [(n + 1)r ξn ]q r− 1 ≤ 2rq+1 (n + 1) q ξn n n=n n=n 0

0

q

 r 2k−1 ξ2k

q

620

9 Hints and Answers to the Exercises

and 

2k0 r ξ2k0

q

 "q ! r− 1 = nr0 ξn0 ≤ (n0 + 1) (n0 + 1) q ξn0

q

≤ (n0 + 1)

∞  

(n + 1)

r− q1

q

ξn

.

n=n0

Exercise 3.1.22 We have to check the norm axioms. But this can be easily done, since the definition of sϕ (f, t)w,p is based on the Lp -norm and since we have, for p,r f, g ∈ Bq,w and α ∈ C, shϕ (αf ) = αshϕ f Exercise 3.2.1 Let 0 < a
For $\lambda>0$, consider the continuous function
$$f_\lambda:\Big(0,\tfrac12\Big]\longrightarrow\mathbb R,\quad x\mapsto\frac{g_\lambda(1+2x)-g_\lambda(1-2x)}{g_\lambda(1+x)-g_\lambda(1-x)},$$
where $g_\lambda(y)=y^\lambda$, $y\ge0$. Then $f_\lambda(x)>0$ for all $x\in\big(0,\tfrac12\big]$ and $\lim_{x\to+0}f_\lambda(x)=2$. Hence, there is a constant $c_\lambda>0$ such that $f_\lambda(x)\le c_\lambda$ for all $x\in\big(0,\tfrac12\big]$. For $\mu>-1$, $h>0$, and $M\ge0$, we set $I=(M-h,M+h)$ and study the quotient
$$Q_h=\frac{\int_{2I}|x|^\mu\,dx}{\int_{I}|x|^\mu\,dx}$$
for different constellations between $M$ and $h$:
1. If $0\le M\le h$, then
$$Q_h=\frac{(M+2h)^{1+\mu}+(2h-M)^{1+\mu}}{(M+h)^{1+\mu}+(h-M)^{1+\mu}}\le\Big(\frac{M+2h}{M+h}\Big)^{1+\mu}+\Big(\frac{2h-M}{M+h}\Big)^{1+\mu}\le3^{1+\mu}+2^{1+\mu}.$$
2. If $0<h\le M\le2h$, then
$$Q_h=\frac{(M+2h)^{1+\mu}+(2h-M)^{1+\mu}}{(M+h)^{1+\mu}-(M-h)^{1+\mu}}\le\frac{2\,(M+2h)^{1+\mu}}{(M+h)^{1+\mu}\Big[1-\big(\frac{M-h}{M+h}\big)^{1+\mu}\Big]}\le\frac{2\,\big(\frac32\big)^{1+\mu}}{1-\big(\frac12\big)^{1+\mu}}.$$
3. If $2h\le M$, then
$$Q_h=\frac{(M+2h)^{1+\mu}-(M-2h)^{1+\mu}}{(M+h)^{1+\mu}-(M-h)^{1+\mu}}=\frac{\big(1+\frac{2h}M\big)^{1+\mu}-\big(1-\frac{2h}M\big)^{1+\mu}}{\big(1+\frac hM\big)^{1+\mu}-\big(1-\frac hM\big)^{1+\mu}}=f_{1+\mu}\Big(\frac hM\Big)\le c_{1+\mu}.$$
With the help of these estimates one can see that every weight $|\cdot-x_0|^\mu$ with $x_0\in[-1,1]$ belongs to $DW$. It remains to apply Exercise 3.2.2.
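A minimal numerical sketch of the doubling bound just obtained (NumPy assumed; the helper names are ours): it evaluates $Q_h$ from the closed-form primitive of $|t|^\mu$ for many random pairs $(M,h)$ and confirms that the quotient stays bounded.

import numpy as np

def primitive(x, mu):
    # antiderivative of |t|^mu for mu > -1:  sign(t) |t|^(1+mu) / (1+mu)
    return np.sign(x) * np.abs(x) ** (1.0 + mu) / (1.0 + mu)

def Q(M, h, mu):
    # Q_h = (integral of |x|^mu over 2I) / (integral over I),  I = (M-h, M+h)
    num = primitive(M + 2 * h, mu) - primitive(M - 2 * h, mu)
    den = primitive(M + h, mu) - primitive(M - h, mu)
    return num / den

rng = np.random.default_rng(0)
for mu in (-0.5, 0.0, 1.5):
    samples = [Q(rng.uniform(0.0, 1.0), rng.uniform(1e-6, 1.0), mu) for _ in range(100_000)]
    print(mu, max(samples))   # stays below the case-by-case constants above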

Exercise 3.2.5 Clearly, condition (3.2.3) implies (3.2.4). Hence, let (3.2.4) be satisfied and let $I\subset(-1,1)$ with $|I|\ge\varepsilon$. Then $u\in L^p$, $u^{-1}\in L^q$, and
$$\frac1{|I|}\,\|u\|_{L^p(I)}\,\|u^{-1}\|_{L^q(I)}\le\frac1\varepsilon\,\|u\|_{L^p}\,\|u^{-1}\|_{L^q},$$
and (3.2.3) follows. The answer to the question is: Yes, in case of $\mathcal A_p(I_0)$ condition (3.2.3) can be replaced by (3.2.4) if $I_0$ is bounded or if $u\in L^p(I_0)$ and $u^{-1}\in L^q(I_0)$.

Exercise 3.2.6 By Hölder's inequality we have
$$\big\|u^\lambda v^{1-\lambda}\big\|_{L^p(I)}\,\big\|u^{-\lambda}v^{\lambda-1}\big\|_{L^q(I)}\le\|u\|_{L^p(I)}^{\lambda}\,\|v\|_{L^p(I)}^{1-\lambda}\,\|u^{-1}\|_{L^q(I)}^{\lambda}\,\|v^{-1}\|_{L^q(I)}^{1-\lambda}\stackrel{(3.2.3)}{\le}c\,|I|$$
for every interval $I\subset(-1,1)$.



Exercise 3.2.8 We start with the weight $v(x)=|x|^\lambda$ on $\mathbb R$. It is easily seen that the condition $-\frac1p<\lambda<\frac1q$ is necessary for $v(x)$ being an $\mathcal A_p(\mathbb R)$-weight. Hence, let this condition be fulfilled and let first $0\le a<b<\infty$. Then
$$\|v\|_{L^p(a,b)}\,\|v^{-1}\|_{L^q(a,b)}=\Big(\int_a^b|x|^{\lambda p}\,dx\Big)^{\frac1p}\Big(\int_a^b|x|^{-\lambda q}\,dx\Big)^{\frac1q}=\Big[\frac{b^{1+\lambda p}-a^{1+\lambda p}}{1+\lambda p}\Big]^{\frac1p}\Big[\frac{b^{1-\lambda q}-a^{1-\lambda q}}{1-\lambda q}\Big]^{\frac1q}$$
$$=\frac b{(1+\lambda p)^{\frac1p}(1-\lambda q)^{\frac1q}}\Big[1-\Big(\frac ab\Big)^{1+\lambda p}\Big]^{\frac1p}\Big[1-\Big(\frac ab\Big)^{1-\lambda q}\Big]^{\frac1q}=:\frac b{(1+\lambda p)^{\frac1p}(1-\lambda q)^{\frac1q}}\;\psi\Big(\frac ab\Big)\Big(1-\frac ab\Big).$$
Since $1+\lambda p>0$ and $1-\lambda q>0$, the function
$$\psi:[0,1)\longrightarrow\mathbb R,\quad x\mapsto\frac{\big(1-x^{1+\lambda p}\big)^{\frac1p}\big(1-x^{1-\lambda q}\big)^{\frac1q}}{1-x}$$
is continuous, where $\lim_{x\to1-0}\psi(x)=(1+\lambda p)^{\frac1p}(1-\lambda q)^{\frac1q}$. Thus,
$$\|v\|_{L^p(a,b)}\,\|v^{-1}\|_{L^q(a,b)}\le\frac{\|\psi\|_{\infty,[0,1)}}{(1+\lambda p)^{\frac1p}(1-\lambda q)^{\frac1q}}\,(b-a)=:c_{\lambda,p}\,(b-a).$$
If $-\infty<a<0<b<\infty$, with $d=\max\{|a|,b\}$ we get
$$\|v\|_{L^p(a,b)}\,\|v^{-1}\|_{L^q(a,b)}=\Big(\|v\|_{L^p(0,|a|)}^p+\|v\|_{L^p(0,b)}^p\Big)^{\frac1p}\Big(\|v^{-1}\|_{L^q(0,|a|)}^q+\|v^{-1}\|_{L^q(0,b)}^q\Big)^{\frac1q}\le2\,\|v\|_{L^p(0,d)}\,\|v^{-1}\|_{L^q(0,d)}\le2\,c_{\lambda,p}\,d\le2\,c_{\lambda,p}\,(b-a).$$
Let us turn to the case of a generalized Jacobi weight. If we can show that, for every $x_0\in[-1,1]$, the weight $u(x)=|x-x_0|^\lambda$ belongs to $\mathcal A_p$ if and only if $-\frac1p<\lambda<\frac1q$, then we can finish the exercise by applying Lemma 3.2.7. Again the necessity part of the condition is clear. We prove the sufficiency part. Let $-\frac1p<\lambda<\frac1q$ and $(a,b)\subset(-1,1)$. Then, with $v(x)$ from above,
$$\|u\|_{L^p(a,b)}\,\|u^{-1}\|_{L^q(a,b)}=\|v\|_{L^p(a-x_0,b-x_0)}\,\|v^{-1}\|_{L^q(a-x_0,b-x_0)}\le2\,c_{\lambda,p}\,(b-a),$$
and we are ready.
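The resulting bound $\|v\|_{L^p(a,b)}\,\|v^{-1}\|_{L^q(a,b)}\le2\,c_{\lambda,p}\,(b-a)$ can also be observed numerically. A small sketch, assuming NumPy and using the closed-form primitives of $|x|^{\lambda p}$ and $|x|^{-\lambda q}$ (the function name is ours):

import numpy as np

def ap_product(a, b, lam, p):
    # ||v||_{L^p(a,b)} * ||v^{-1}||_{L^q(a,b)} for v(x) = |x|^lam, 1/p + 1/q = 1
    q = p / (p - 1.0)
    F = lambda x, s: np.sign(x) * np.abs(x) ** (1.0 + s) / (1.0 + s)  # primitive of |t|^s, s > -1
    Ip = F(b, lam * p) - F(a, lam * p)
    Iq = F(b, -lam * q) - F(a, -lam * q)
    return Ip ** (1.0 / p) * Iq ** (1.0 / q)

p, lam = 3.0, 0.2                      # admissible: -1/p < lam < 1/q
rng = np.random.default_rng(1)
ratios = []
for _ in range(50_000):
    a, b = np.sort(rng.uniform(-1.0, 1.0, 2))
    if b - a > 1e-8:
        ratios.append(ap_product(a, b, lam, p) / (b - a))
print(max(ratios))                     # stays bounded, i.e. |x|^0.2 is an A_3 weight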



Exercise 3.2.11 Let $I\subset[a,b]$ and $x\in I$. If $x\in[a,c]$, then
$$u(x)\le\frac{L_u^*}{|I|}\int_I u(t)\,dt$$
implies
$$u(x)v(x)\le\frac{L_u^*\,\|v\|_{\infty,[a,c]}\,\|v^{-1}\|_{\infty,[a,c]}}{|I|}\Big[\int_{I\cap[a,c]}u(t)v(t)\,dt+L_{uv}\int_{I\cap[c,b]}v(t)\,dt\Big]$$
$$\le\frac{L_u^*\,\|v\|_{\infty,[a,c]}\,\|v^{-1}\|_{\infty,[a,c]}}{|I|}\Big[\int_{I\cap[a,c]}u(t)v(t)\,dt+L_{uv}\,\|u^{-1}\|_{\infty,[c,b]}\int_{I\cap[c,b]}u(t)v(t)\,dt\Big]\le\frac{L_1^*}{|I|}\int_I u(t)v(t)\,dt,$$
where $L_1^*=L_u^*\,\|v\|_{\infty,[a,c]}\,\|v^{-1}\|_{\infty,[a,c]}\max\big\{1,\,L_{uv}\,\|u^{-1}\|_{\infty,[c,b]}\big\}$ and $L_{uv}=\|u\|_{\infty,[c,b]}\,(b-c)\big/\int_c^b v(t)\,dt$ if $v\not\equiv0$ on $[c,b]$. In case $v\equiv0$ on $[c,b]$ we can choose $L_1^*=L_u^*\,\|v\|_{\infty,[a,c]}\,\|v^{-1}\|_{\infty,[a,c]}$. Analogously, if $x\in[c,b]$, we get
$$u(x)v(x)\le\frac{L_2^*}{|I|}\int_I u(t)v(t)\,dt.$$

Exercise 3.2.12 Let $\alpha\ge0$, $-\infty<a<b<\infty$, and $x\in[a,b]$. In case $0\le a<b<\infty$, we have
$$\frac1{b-a}\int_a^b t^\alpha\,dt=\frac1{b-a}\,\frac{b^{1+\alpha}-a^{1+\alpha}}{1+\alpha}\ge\frac{b^{1+\alpha}-a\,b^{\alpha}}{(1+\alpha)(b-a)}=\frac{b^\alpha}{1+\alpha}\ge\frac{x^\alpha}{1+\alpha}.$$
If $-\infty<a<0\le b<\infty$, then we set $c=\max\{|a|,b\}$ as well as $d=\min\{|a|,b\}$ and get
$$\frac1{b-a}\int_a^b|t|^\alpha\,dt=\frac1{c+d}\,\frac{c^{1+\alpha}+d^{1+\alpha}}{1+\alpha}\ge\frac1{2c}\,\frac{c^{1+\alpha}}{1+\alpha}=\frac{c^\alpha}{2(1+\alpha)}\ge\frac{|x|^\alpha}{2(1+\alpha)}.$$
This shows that all weights $u(x)=|x-x_0|^\alpha$ with $\alpha\ge0$ belong to $\mathcal A^*=\mathcal A^*[-1,1]$. By applying Exercise 3.2.11 successively, we can prove that all generalized Jacobi weights with nonnegative exponents are $\mathcal A^*$-weights.



Exercise 3.2.13 Choose $\alpha=\frac12$. If $u(x)$ is an $\mathcal A_\infty$-weight, then there is a $\beta>0$ (see (3.2.9)) such that (with $I$ and $2I$ as in the definition (3.2.1) of a doubling weight)
$$\int_{2I\cap(-1,1)}u(x)\,dx\le\frac1\beta\int_I u(x)\,dx,$$

since $|I|\ge\frac12\,|2I\cap(-1,1)|$.

Exercise 3.2.25 First we remark that, due to the continuity of the function $\cos^{-1}:[-1,1]\longrightarrow[0,\pi]$, relation (3.2.23) as well as relation (3.2.24) imply that there exists an $m_0\in\mathbb N$ such that $\theta_{mk}-\theta_{m,k+1}<\frac\pi8$ for all $k=0,1,\ldots,m$ and $m>m_0$. Hence we can restrict the following considerations to all $m>m_0$. Let (3.2.24) be fulfilled and consider first the case $\frac{\theta_{mk}+\theta_{m,k+1}}2\le\frac{5\pi}8$. Then $\theta_{mk}\le\frac{3\pi}4$, and in what follows we can use the inequalities
$$\theta\ge\sin\theta\ge\frac{2\sqrt2}{3\pi}\,\theta=:c_0\,\theta,$$
which are true for $0\le\theta\le\frac{3\pi}4$. The relations
$$c^{-1}\le\frac{\cos\theta_{m,k+1}-\cos\theta_{mk}}{\frac{\sin\theta_{mk}}m+\frac1{m^2}}=\frac{2\sin\frac{\theta_{mk}+\theta_{m,k+1}}2\,\sin\frac{\theta_{mk}-\theta_{m,k+1}}2}{\frac{\sin\theta_{mk}}m+\frac1{m^2}}\le c$$
and
$$\frac{\theta_{mk}\pm\theta_{m,k+1}}2\ge\sin\frac{\theta_{mk}\pm\theta_{m,k+1}}2\ge\frac{c_0\,(\theta_{mk}\pm\theta_{m,k+1})}2,\qquad\theta_{mk}\ge\sin\theta_{mk}\ge c_0\,\theta_{mk}$$
imply
$$c^{-1}\le\frac{m\,(\theta_{mk}+\theta_{m,k+1})(\theta_{mk}-\theta_{m,k+1})}{2c_0\,\theta_{mk}}\le\frac m{c_0}\,(\theta_{mk}-\theta_{m,k+1}).$$

Together with the relation $\frac1m\le$ … (A.1), it becomes clear that $c\ge$ … If $\frac{\theta_{mk}+\theta_{m,k+1}}2>\frac{5\pi}8$, then


0 and, −

1

1

consequently, $1-a_m\sim_m m^{-\frac1{\alpha+\frac12}}$.

Exercise 3.3.10 The inequality (3.3.20) is stated for the class of weights $u(x)=(1-x^2)^\beta\,e^{-(1-x^2)^{-\alpha}}$ with $\alpha>0$ and $\beta\ge0$ (cf. Proposition 3.3.6 and Corollary 3.3.7). Since the weight $u_j(x)=(1-x^2)^{\frac j2+\beta}\,e^{-(1-x^2)^{-\alpha}}$ also belongs to this class for $j\in\mathbb N_0$, we obtain from (3.3.20), for $r\in\mathbb N$ and constants $c\ne c(P_m)$,
$$\big\|P_m^{(r)}\varphi^r\sigma u\big\|_p=\big\|P_m^{(r)}\varphi\,\sigma u_{r-1}\big\|_p\le c\,m\,\big\|P_m^{(r-1)}\sigma u_{r-1}\big\|_p\le\cdots\le c\,m^r\,\big\|P_m\,\sigma u\big\|_p.$$

Exercise 5.1.1 Set $Q_n(x)=v^{-\alpha,-\beta}(x)\,\dfrac{d}{dx}\big[v^{\alpha+1,\beta+1}(x)\,P_{n-1}^{\alpha+1,\beta+1}(x)\big]$. Since the relations
$$\int_{-1}^1\frac{d}{dx}\big[v^{\alpha+1,\beta+1}(x)\,P_{n-1}^{\alpha+1,\beta+1}(x)\big]\,x^k\,dx=0$$
hold true for $k=0,\ldots,n-1$ and the leading coefficient of $Q_n(x)$ is equal to $-(n+\alpha+\beta+1)\,k_{n-1}^{\alpha+1,\beta+1}$, we have, due to the uniqueness properties of the orthogonal polynomials,
$$Q_n(x)=-\frac{(n+\alpha+\beta+1)\,k_{n-1}^{\alpha+1,\beta+1}}{k_n^{\alpha,\beta}}\,P_n^{\alpha,\beta}(x)=-2n\,P_n^{\alpha,\beta}(x).$$
This shows
$$\frac1{\sqrt{h_{n-1}^{\alpha+1,\beta+1}}}\,Q_n(x)=-2n\,\sqrt{\frac{h_n^{\alpha,\beta}}{h_{n-1}^{\alpha+1,\beta+1}}}\;p_n^{\alpha,\beta}(x)=-\gamma_n^{\alpha,\beta}\,p_n^{\alpha,\beta}(x)$$
or
$$v^{\alpha,\beta}(x)\,p_n^{\alpha,\beta}(x)=-\frac1{\gamma_n^{\alpha,\beta}}\,\frac1{\sqrt{h_{n-1}^{\alpha+1,\beta+1}}}\,v^{\alpha,\beta}(x)\,Q_n(x)=-\frac1{\gamma_n^{\alpha,\beta}}\,\frac{d}{dx}\big[v^{\alpha+1,\beta+1}(x)\,p_{n-1}^{\alpha+1,\beta+1}(x)\big].$$
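By the product rule, $Q_n(x)=\big[(\beta+1)(1-x)-(\alpha+1)(1+x)\big]P_{n-1}^{\alpha+1,\beta+1}(x)+(1-x^2)\,\frac{d}{dx}P_{n-1}^{\alpha+1,\beta+1}(x)$, so the identity $Q_n=-2n\,P_n^{\alpha,\beta}$ can be verified numerically with SciPy's classical (non-normalized) Jacobi polynomials; a minimal sketch for one admissible pair $(\alpha,\beta)$, with helper names of our choosing:

import numpy as np
from scipy.special import jacobi

def Q(n, a, b):
    # Q_n = v^{-a,-b} d/dx [ v^{a+1,b+1} P_{n-1}^{(a+1,b+1)} ], expanded by the product rule
    P = jacobi(n - 1, a + 1, b + 1)                 # numpy poly1d object
    bracket = np.poly1d([-(a + b + 2), b - a])      # (b+1)(1-x) - (a+1)(1+x)
    one_minus_x2 = np.poly1d([-1, 0, 1])            # 1 - x^2
    return bracket * P + one_minus_x2 * P.deriv()

n, a, b = 5, 0.3, -0.4
x = np.linspace(-0.99, 0.99, 7)
print(np.max(np.abs(Q(n, a, b)(x) + 2 * n * jacobi(n, a, b)(x))))   # ~1e-12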

Exercise 5.1.2 Obviously, due to (5.1.16),
$$(1-x)\,P_n^{\alpha+1,\beta}(x)=\delta_n\,P_n^{\alpha,\beta}(x)-\varepsilon_n\,P_{n+1}^{\alpha,\beta}(x)$$
for some real numbers $\delta_n$ and $\varepsilon_n$. Comparing the leading coefficients on both sides of this equality gives $k_n^{\alpha+1,\beta}=\varepsilon_n\,k_{n+1}^{\alpha,\beta}$, from which follows
$$\varepsilon_n=\frac{2(n+1)}{2n+\alpha+\beta+2}.$$
In virtue of $\delta_n\,P_n^{\alpha,\beta}(1)-\varepsilon_n\,P_{n+1}^{\alpha,\beta}(1)=0$ and (5.1.8), we get
$$\delta_n=\frac{2(n+\alpha+1)}{2n+\alpha+\beta+2}.$$
Taking into account
$$(1-x)\,p_n^{\alpha+1,\beta}(x)=\delta_n\,\sqrt{\frac{h_n^{\alpha,\beta}}{h_n^{\alpha+1,\beta}}}\;p_n^{\alpha,\beta}(x)-\varepsilon_n\,\sqrt{\frac{h_{n+1}^{\alpha,\beta}}{h_n^{\alpha+1,\beta}}}\;p_{n+1}^{\alpha,\beta}(x),$$
we obtain the claimed formulas for $\delta_n^{\alpha,\beta}$ and $\varepsilon_n^{\alpha,\beta}$. By substituting $x=-y$ in the integral
$$\int_{-1}^1 p_n^{\alpha,\beta}(x)\,p_k^{\alpha,\beta}(x)\,v^{\alpha,\beta}(x)\,dx$$
and taking into account that the leading coefficient of $p_n^{\alpha,\beta}(-x)$ is equal to $(-1)^n\big(h_n^{\alpha,\beta}\big)^{-\frac12}k_n^{\alpha,\beta}=(-1)^n\big(h_n^{\beta,\alpha}\big)^{-\frac12}k_n^{\beta,\alpha}$, one can see that the set $\big\{(-1)^n p_n^{\alpha,\beta}(-x):n\in\mathbb N_0\big\}$ is an orthonormal system of polynomials with respect to the weight function $v^{\beta,\alpha}(x)$, the $n$th polynomial of which has the same leading coefficient as $p_n^{\beta,\alpha}(x)$. Consequently, $(-1)^n p_n^{\alpha,\beta}(-x)=p_n^{\beta,\alpha}(x)$, $n\in\mathbb N_0$. In view of (5.1.17), the polynomial $(1+x)\,p_n^{\beta,\alpha+1}(x)=(1+x)(-1)^n p_n^{\alpha+1,\beta}(-x)$ is equal to
$$\delta_n^{\alpha,\beta}(-1)^n p_n^{\alpha,\beta}(-x)+\varepsilon_n^{\alpha,\beta}(-1)^{n+1}p_{n+1}^{\alpha,\beta}(-x),$$
which implies (5.1.18).
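A short numerical check of $(1-x)P_n^{\alpha+1,\beta}(x)=\delta_nP_n^{\alpha,\beta}(x)-\varepsilon_nP_{n+1}^{\alpha,\beta}(x)$ with the coefficients derived above, assuming SciPy is available:

import numpy as np
from scipy.special import eval_jacobi

n, a, b = 4, 0.25, -0.5
delta = 2 * (n + a + 1) / (2 * n + a + b + 2)
eps = 2 * (n + 1) / (2 * n + a + b + 2)
x = np.linspace(-1, 1, 9)
lhs = (1 - x) * eval_jacobi(n, a + 1, b, x)
rhs = delta * eval_jacobi(n, a, b, x) - eps * eval_jacobi(n + 1, a, b, x)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15: confirms the contiguous relation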

Exercise 5.1.3 Analogously to Exercise 5.1.2, from (5.1.19) we conclude the existence of real numbers $\delta_n$ and $\varepsilon_n$ such that
$$P_n^{\alpha,\beta}(x)=\delta_n\,P_n^{\alpha+1,\beta}(x)-\varepsilon_n\,P_{n-1}^{\alpha+1,\beta}(x).$$
Then this yields (5.1.20) for $n\in\mathbb N$. Since $\big(h_0^{\alpha+1,\beta}\big)^{-\frac12}=\big(h_0^{\alpha,\beta}\big)^{-\frac12}\,\delta_0$, relation (5.1.20) is true also for $n=0$.

Exercise 5.1.4 For the determination of the zeros $x_{nk}^{\pm\frac12,\pm\frac12}$ of the Chebyshev polynomials recall the formulas (5.1.21), (5.1.22), (5.1.24), and (5.1.25). Use (5.2.20) to compute the Christoffel numbers $\lambda_{nk}^{\pm\frac12,\pm\frac12}$.
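For the first-kind weight $v^{-\frac12,-\frac12}$ the zeros and Christoffel numbers have the classical closed forms $x_{nk}=\cos\frac{(2k-1)\pi}{2n}$ and $\lambda_{nk}=\frac\pi n$ (the other three Chebyshev cases are analogous); the following NumPy sketch, with array names of our choosing, compares them with the Gauss-Chebyshev rule shipped with NumPy:

import numpy as np

n = 8
k = np.arange(1, n + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * n))        # zeros of the Chebyshev polynomial T_n
weights = np.full(n, np.pi / n)                      # Christoffel numbers for w(x) = 1/sqrt(1-x^2)

x_ref, w_ref = np.polynomial.chebyshev.chebgauss(n)  # reference Gauss-Chebyshev rule
print(np.max(np.abs(np.sort(nodes) - np.sort(x_ref))),
      np.max(np.abs(weights - w_ref)))               # both ~1e-16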

Exercise 5.2.2 We get
$$0=\int_{\Gamma_{u,\delta,\varepsilon,R}}f(z)\,dz=\int_{-R}^{-\delta}\frac{|x|^{\nu-1}\big[\cos(\nu-1)\pi+i\sin(\nu-1)\pi\big]}{x-u}\,dx+\int_{\{\delta e^{i(\pi-t)}:\,0\le t\le\pi\}}f(z)\,dz$$
$$+\Big(\int_\delta^{u-\varepsilon}+\int_{u+\varepsilon}^{R}\Big)\frac{|x|^{\nu-1}}{x-u}\,dx+\int_{\{u+\varepsilon e^{i(\pi-t)}:\,0\le t\le\pi\}}f(z)\,dz+\int_{\{R e^{it}:\,0\le t\le\pi\}}f(z)\,dz,$$
where
$$\Big|\int_{\{\delta e^{i(\pi-t)}:\,0\le t\le\pi\}}f(z)\,dz\Big|=\Big|\int_0^\pi\frac{\delta^{\nu-1}e^{i(\pi-t)(\nu-1)}\,i\,\delta\,e^{i(\pi-t)}}{\delta e^{i(\pi-t)}-u}\,dt\Big|\le\frac{\delta^\nu\pi}{u-\delta}\longrightarrow0\quad\text{if }\delta\longrightarrow0,$$
$$\int_{\{u+\varepsilon e^{i(\pi-t)}:\,0\le t\le\pi\}}f(z)\,dz=-i\int_0^\pi\big(u+\varepsilon e^{i(\pi-t)}\big)^{\nu-1}dt\longrightarrow-i\pi u^{\nu-1}\quad\text{if }\varepsilon\longrightarrow0,$$
$$\Big|\int_{\{R e^{it}:\,0\le t\le\pi\}}f(z)\,dz\Big|=\Big|\int_0^\pi\frac{R^{\nu-1}e^{i(\nu-1)t}\,R\,i\,e^{it}}{R e^{it}-u}\,dt\Big|\le\frac{R^\nu\pi}{R-u}\longrightarrow0\quad\text{if }R\longrightarrow\infty.$$
Consequently, $0=\lim\limits_{\delta\to0,\,\varepsilon\to0,\,R\to\infty}\int_{\Gamma_{u,\delta,\varepsilon,R}}f(z)\,dz$ leads to
$$\int_0^\infty\frac{|x|^{\nu-1}\,dx}{x-u}-\big[\cos(\nu\pi)+i\sin(\nu\pi)\big]\int_{-\infty}^0\frac{|x|^{\nu-1}\,dx}{x-u}=i\pi u^{\nu-1}.$$
For real numbers $\alpha,\beta,\gamma$, satisfying $\alpha-[\cos(\nu\pi)+i\sin(\nu\pi)]\beta=i\gamma$, we have $\alpha=-\gamma\cot(\nu\pi)$ and $\beta=-\dfrac\gamma{\sin\nu\pi}$. It follows
$$\int_0^\infty\frac{x^{\nu-1}\,dx}{x-u}=-\pi u^{\nu-1}\cot(\nu\pi)$$
and
$$\int_{-\infty}^\infty\frac{|x|^{\nu-1}\,dx}{x-u}=-\pi u^{\nu-1}\,\frac{\cos(\nu\pi)+1}{\sin(\nu\pi)}=-\pi u^{\nu-1}\cot\frac{\nu\pi}2,$$
as well as
$$\int_{-\infty}^\infty\frac{|x|^{\nu-1}\operatorname{sgn}(x)\,dx}{x-u}=-\pi u^{\nu-1}\,\frac{\cos(\nu\pi)-1}{\sin(\nu\pi)}=\pi u^{\nu-1}\tan\frac{\nu\pi}2.$$
In case $u<0$, we can use the substitution $y=-x$ and get
$$\int_{-\infty}^\infty\frac{|x|^{\nu-1}\,dx}{x-u}=-\int_{-\infty}^\infty\frac{|y|^{\nu-1}\,dy}{y-(-u)}=\pi(-u)^{\nu-1}\cot\frac{\nu\pi}2$$
and
$$\int_{-\infty}^\infty\frac{|x|^{\nu-1}\operatorname{sgn}(x)\,dx}{x-u}=\int_{-\infty}^\infty\frac{|y|^{\nu-1}\operatorname{sgn}(y)\,dy}{y-(-u)}=\pi(-u)^{\nu-1}\tan\frac{\nu\pi}2.$$
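These principal value formulas lend themselves to a numerical check. A minimal sketch, assuming SciPy (the splitting of the range and all variable names are ours): the substitution $x=t^{1/\nu}$ removes the endpoint singularity at $0$, the Cauchy weight of quad handles the principal value at $x=u$, and the tail is integrated directly.

import numpy as np
from scipy.integrate import quad

nu, u = 0.4, 1.3                                                                      # 0 < nu < 1, u > 0

# p.v. of the integral of x^(nu-1)/(x-u) over (0, infinity), split into three parts
left, _ = quad(lambda t: (1.0 / nu) / (t ** (1.0 / nu) - u), 0.0, (u / 2.0) ** nu)    # x = t^(1/nu)
mid, _ = quad(lambda x: x ** (nu - 1.0), u / 2.0, 2.0 * u, weight='cauchy', wvar=u)   # p.v. near x = u
tail, _ = quad(lambda x: x ** (nu - 1.0) / (x - u), 2.0 * u, np.inf)

print(left + mid + tail, -np.pi * u ** (nu - 1.0) / np.tan(nu * np.pi))               # the two values agree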

Exercise 5.3.4 We prove Proposition 5.3.3. Let $\mathcal F:=\big\{Kf:f\in L^p_{\alpha,\beta},\ \|f\|_{\alpha,\beta,(p)}\le1\big\}$ be a uniformly bounded and equicontinuous set of functions. Then, for every $x\in[-1,1]$, the linear functional $F_x:L^p_{\alpha,\beta}\longrightarrow\mathbb C$, $f\mapsto\int_{-1}^1K_0(x,y)f(y)\,v^{\alpha,\beta}(y)\,dy$ is bounded, which implies (a) (see Remark 2.4.2). Moreover, for every $\varepsilon>0$ and every $x_0\in[-1,1]$, there is a $\delta>0$ such that, for all $x\in[-1,1]$ with $|x-x_0|<\delta$,
$$\Big|\int_{-1}^1\big[K_0(x,y)-K_0(x_0,y)\big]f(y)\,v^{\alpha,\beta}(y)\,dy\Big|<\varepsilon\qquad\forall\,f\in L^p_{\alpha,\beta}:\ \|f\|_{\alpha,\beta,(p)}\le1.$$
Remark 2.4.2 implies
$$\big\|K_0(x,\cdot)-K_0(x_0,\cdot)\big\|_{\alpha(q-1),\beta(q-1),(q)}\le\varepsilon\qquad\forall\,x\in[-1,1]:\ |x-x_0|<\delta,$$
which shows that (b) is satisfied. Conversely, if (a) and (b) are fulfilled, then the above defined set $\mathcal F$ is relatively compact in $C[-1,1]$.

/

/ 1 n + β  β−1,α β,α n − β + 1  β−1,α β,α β−1,α β,α δn δn+1 pn+1 − εnβ−1,α pnβ,α pn − εn−1 pn−1 − 2n + 1 n+1 2n + 1

β,α

β,α

ρn pn+1 = ρn pn−1 + σn pnα,β −

/ where, taking into consideration α = −β, / n−β +1 , 2n + 1 1 ρn = n σn =

/

1 n + β β−1,α = ε 2n + 1 n−1 n

1 n+1

1 ρ n = n+1

/

/

= ρn+1 .

/

n+β 1 2n + 1 n

n − β + 1 β−1,α 1 − ε 2n + 1 n n

/

/

β−1,α δn

n−β = 2n − 1



=

n2 − β 2 , 4n2 − 1

1 n + β β−1,α = δ 2n + 1 n 2n + 1

n − β + 1 β−1,α 1 = δ 2n + 1 n+1 n+1

/

n−β +1 2n + 1



n+β β−1,α and εn = 2n + 1



n+1−β n+β − n+1 n

n+1+β 1 = 2(n + 1) + 1 n+1



 ,

(n + 1)2 − β 2 4(n + 1)2 − 1

632

9 Hints and Answers to the Exercises

Exercise 5.5.1 For −1 < x < 1 and 0 < ε < min {1 − x, 1 + x}, define  I (ε, x) :=

x−ε −1

 +

1



u (y) ln |y − x| dy .

x+ε

It follows, using u(±1) = 0,  I (ε, x) = [u(x − ε) − u(x + ε)] ln ε −  i.e., lim I (ε, x) = − v.p. ε→+0

I (ε, x) we get

1 −1

x−ε −1





1

+

x+ε

u(y) dy , y−x

u(y) dy . On the other hand, from the definition of y−x

" dI (ε, x) !  = u (x − ε) − u (x + ε) ln ε − dx



x−ε −1

 +

1



x+ε

u (y) dy , y−x

so that dI (ε, x) lim = − v.p. ε→+0 dx



1 −1

u (y) dy y−x

holds  true. This convergence is uniformly w.r.t. x ∈ [−1 + 2δ, 1 − 2δ] for every δ ∈ 0, 12 , since for x ∈ [−1 + 2δ, 1 − 2δ] and 0 < ε < δ,    !  "  u (x − ε) − u (x + ε) ln ε ≤ u 

∞,[−1+δ,1−δ]

2ε |ln ε|

and    v.p. 

x+ε x−ε

     u (y) dy   x+ε u (y) − u (x)  dy  ≤ 2εu ∞,[−1+δ,1−δ] . =  y−x y−x x−ε

Hence, d dx

-

. dI (ε, x) lim I (ε, x) = lim , ε→+0 ε→+0 dx

−1 < x < 1 ,

implying d dx



1

−1

u(y) dy = y−x



1 −1

u (y) dy , y−x

−1 < x < 1 .



Moreover, by partial integration v.p.

 1  u (y) dy −1 y − x  = lim

−1

ε→+0

   . . x−ε  1 u(y) − u(x) x−ε u(y) − u(x) u(y) − u(x) 1 + + + dy y −x y −x (y − x)2 −1 x+ε −1 x+ε

-

.  1 u(x − ε) − u(x) u(x) u(x) u(x + ε) − u(x) u(y) − u(x) dy − − − + v.p. ε 1+x 1−x ε −1 (y − x)2

ε→+0

ε→+0

= v.p.

 1   u (y) dy y −x x+ε

+

= lim

= lim

x−ε



 1 u(y) − u(x) 2u(x) dy − . 1 − x2 −1 (y − x)2

Exercise 5.6.4 Use (fg)(n) =

n    n (n−j ) (j ) g and f j j =0

C0,δ2 +j ⊂ C0,δ1 +n−j C0,δ1 +δ2 +n . f (n−j ) g (j ) ∈ Exercise 5.6.6 The proof is easily done with the help of induction w.r.t. j ∈ N0 . Exercise 5.6.8 The proof is easily done with the help of induction w.r.t. j ∈ N0 . Exercise 5.7.1 Since  ∞  ∞ ξ −iη−1 , t k(t) dt = e−iηs u(s) ds = (F u)(η) h(ξ − iη) = −∞

0

  1 with u(s) = eξ s h es and ξ = 1+β p , it remains to show that u ∈ L (R). But, this is a consequence of Hölder’s inequality, which we can use to estimate 



−∞

 |u(s)| ds =



1 1 r+ 

t ξ −1 |h(t)| dt =r

0



1

|h(t)|t γ − r t ξ −γ − r  dt + 1

1

0

 ≤

1

|h(t)|r t rγ −1 dt



|h(t)|t δ− r t ξ −δ− r  dt 1

1

1

 1r 

0



1



t (ξ −γ )r −1 dt

 r1

0

 + 1



 1  |h(t)|r t rδ−1 dt



r



t (ξ −δ)r −1 dt

 1 r

.

1

where all integrals are finite due to condition (h1) and ξ −γ > 0 as well as ξ −δ < 0.



Exercise 5.8.9 The set K0R,R0 ,λ is a subset of the image of the bounded set   u ∈ C0,λ : uC0,λ ≤ R + R0 in C0,λ with respect to the embedding operator E : C0,λ −→ C, which is a compact operator due to Corollary 2.4.14. Thus, K0R0 ,R,λ is relatively compact. It is easily seen that un ∈ K0R,R0 ,λ , u ∈ C0 , and limn→∞ un − u∞ = 0 imply u ∈ K0R,R0 ,λ . That means, K0R,R0 ,λ is also a closed subset of C0 . Taking u1 and u2 from K0R,R0 ,λ as well as an arbitrary number μ ∈ (0, 1), we get, for x, x1 , x2 ∈ [−1, 1], |μu1 (x) + (1 − μ)u2 (x)| ≤ μR + (1 − μ)R = R and ! " ! "  μu1 (x1 ) + (1 − μ)u2 (x1 ) − μu1 (x2 ) + (1 − μ)u2 (x2 )  ≤ μR0 |x1 − x2 |λ + (1 − μ)R0 |x1 − x2 |λ = R0 |x1 − x2 |λ ,

which shows the convexity of the set K0R,R0 ,λ . Exercise 6.2.5 Putting Lemma 3.1.3 and Proposition 3.1.18 together, we get the density of the set P of all polynomials in Cγ ,δ . We can proceed as in the answer to Exercise 2.4.23. Exercise 6.2.9 As in the previous exercise, the density of P in Cu is essential For f ∈ Cu , we have limt →+0 ωϕ1 (f, t)u,∞ = 0. Moreover, due to Proposition 4.3.10,  √  a the estimate Em (f )u,∞ ≤ c ωϕ1 ( f, mm , m > 1, holds true, where am = am (w) is a Mhaskar-Rakhmanov-Saff √ number related to w(x) (cf. Sect. 4.3.1). Hence, it am = 0. But, this is a consequence of (4.3.6) and remains to show that lim m→∞ m β > 1. Exercise 7.1.5 Define   α∗ := sup Ln f γ ,δ,∞ : f ∈ Cγ ,δ , f γ ,δ,∞ ≤ 1 ,   α0 := sup Ln f γ ,δ,∞ : f ∈ Cγ ,δ , f γ ,δ,∞ ≤ 1 ,   Cbγ ,δ , f γ ,δ,∞ ≤ 1 , α1 := sup Ln f γ ,δ,∞ : f ∈ where, for the definition of α1 , we assume xn1 = 1 and xmn n = −1. Obviously, α0 ≤ α∗ ≤ α1 . To get the reverse inequalities, one has only to check that, in all cases under consideration, for every f1 ∈ Cbγ ,δ one can find an f∗ ∈ Cγ ,δ with Ln f∗ = Ln f1 , and that for every f∗ ∈ Cγ ,δ there is an f0 ∈ Cγ ,δ with Ln f0 = Ln f∗ . (Recall the conditions on x1n and xmn n in (7.1.16).)



Exercise 7.2.12 By (5.4.4) we have, for −1 < x < 1, 1 π



1 −1

ln |x

− y| pnσ (y)



dy 1 − y2

=

− ln 2 p0σ (x) : n = 0 , − n1 pnσ (x) : n ∈ N .

With x = cos s, 0 < s < π, from (5.1.21) and (5.1.22) we get / 1 2 2 sin s sin(n + 1)s = [cos ns − cos(n + 2)s] = π 2 π √ σ 2p0 (x) − p2σ (x) : n = 0 , 1 = 2 pnσ (x) − pσ (x) : n ∈ N . n+2 /

pnϕ (x)(1 − x 2 )

Thus, (Wpnϕ )(x) =

1 π



1 −1

ln |x − y| pnϕ (y)(1 − y 2 ) 

dy 1 − y2

⎧  1 √ 1 dy ⎪ ⎪ ⎪ ln |x − y| 2p0σ (y) − p2σ (y)  : n = 0, ⎪ 1 − y2 1 ⎨ π −1 =  2⎪ ! " ⎪ 1 1 dy ⎪ σ ⎪ ln |x − y| pnσ (y) − pn+2 (y)  : n ∈ N, ⎩ π −1 1 − y2 ⎧√ 1 ⎪ ⎪ 2 ln 2 p0σ (x) − p2σ (x) : n = 0 , ⎨ 1 2 =− 2⎪ 1 1 ⎪ ⎩ pnσ (x) − pσ (x) : n ∈ N . n n + 2 n+2 t rγ −1 are summable on (0, ∞) for all ρ ∈ (0, 2) (1 + t 2 )2r and r ∈ (1, 2], which follows from rρ > 0 and r(2 − ρ) > 0. z−2 and 0 < In order check condition (h2) for the function , h(z) = sin π2 (z − 2) z γ0 < Re z < δ0 < 2, it is sufficient to consider the function G(z) = for sin z sin z − z cos z . Let z = x + iy, 0 < γ0 < Re z < δ0 < π. We have G (z) = sin2 z 0 < γ0 < x < δ0 < π. Then, clearly Exercise 8.1.1 The functions

sin(x +iy) = sin x cosh y +i cos x sinh y

and

cos(x +iy) = cos x cosh y −i sin x sinh y .



Thus, | sin(x + iy)| =

6 sin2 x cosh2 y + cos2 x sinh2 y ≥ sin x cosh y ≥ min {sin γ0 , sin δ0 } cosh y .

This implies |G (x + iy)| ≤

c1 (1 + |y|)(cosh y + sinh |y|) 2

cosh y



c2 (1 + |y|) , cosh y

where the real and positive constant c2 only depends on γ0 and δ0 . Condition (h2) follows immediately. Exercise 8.1.2 We follow the proof of Lemma 4.1 in [166]. Let z = x + iy with −1 < x < 1 and y ∈ R. Then F (z) =

1 − x − iy (1 − x − iy)(a + ib) , πy πy = πx a 2 + b2 cos πx cosh − i sin sinh 2 2 2 2

πy πy πx where a = cos πx 2 cosh 2 and b = sin 2 sinh 2 . It follows

Re F (z) =

(1 − x)a + by by > 2 . 2 2 a +b a + b2

We note that, for w > 0, the inequality

1 cosh w ≥ holds true, since, for w ≥ 2, w 2

∞ cosh w 2  w2n 2 2w = = + +... ≥ 2 w w (2n)! w 2 n=0

and, for 0 < w ≤ 2, we have |x| ≤ 12 ,

1 1 cosh w ≥ ≥ . Thus, we can estimate, in case of w w 2

√ |by| |y| |y| 4 2 2 ≤