Operators Between Sequence Spaces and Applications [1 ed.] 9789811597411, 9789811597428

This book presents modern methods in functional analysis and operator theory along with their applications in recent research.


English Pages 366 [379] Year 2021


Table of contents :
Preface
Contents
About the Authors
Acronyms
1 Matrix Transformations and Measures of Noncompactness
1.1 Linear Metric and Paranormed Spaces
1.2 FK and BK Spaces
1.3 Matrix Transformations into the Classical Sequence Spaces
1.4 Multipliers and Dual Spaces
1.5 Matrix Transformations Between the Classical Sequence Spaces
1.6 Crone's Theorem
1.7 Remarks on Measures of Noncompactness
1.8 The Axioms of Measures of Noncompactness
1.9 The Kuratowski and Hausdorff Measures of Noncompactness
1.10 Measures of Noncompactness of Operators
References
2 Matrix Domains
2.1 General Results
2.2 Bases of Matrix Domains of Triangles
2.3 The Multiplier Space M(XΣ,Y)
2.4 The α-, β- and γ-duals of XΣ
2.5 The α- and β-duals of XΔ(m)
2.6 The β-duals of Matrix Domains of Triangles in FK Spaces
References
3 Operators Between Matrix Domains
3.1 Matrix Transformations on W(u,v;X)
3.2 Matrix Transformations on XT
3.3 Compact Matrix Operators
3.4 The Class K(c)
3.5 Compact Operators on the Space bv+
References
4 Computations in Sequence Spaces and Applications to Statistical Convergence
4.1 On Strong τ-Summability
4.2 Sum and Product of Spaces of the Form sξ, sξ0, or sξ(c)
4.3 Properties of the Sequence C(τ)τ
4.4 Some Properties of the Sets sτ(Δ), sτ0(Δ) and sτ(c)(Δ)
4.5 The Spaces wτ(λ), wτ0(λ) and wτ(c)(λ)
4.6 Matrix Transformations From wτ(λ)+wν(µ) into sγ
4.7 On the Sets cτ(λ,µ), cτ0(λ,µ) and cτ(c)(λ,µ)
4.8 Sets of Sequences of the Form [ A1,A2]
4.9 Extension of the Previous Results
4.10 Sets of Sequences that are Strongly τ-Bounded With Index p
4.11 Computations in Wτ and Wτ0 and Applications to Statistical Convergence
4.12 Calculations in New Sequence Spaces
4.13 Application to A-Statistical Convergence
4.14 Tauberian Theorems for Weighted Means Operators
4.15 The Operator C(λ)
References
5 Sequence Spaces Inclusion Equations
5.1 Introduction
5.2 The (SSIE) F ⊂ Ea + F′x with e ∈ F and F′ ⊂ M(F, F′)
5.3 The (SSIE) F ⊂ Ea + F′x with E, F, F′ ∈ {c0, c, s1, ℓp, w0, w∞}
5.4 Some (SSIE) and (SSE) with Operators
5.5 The (SSIE) F ⊂ Ea + F′x for e ∉ F
5.6 Some Applications
References
6 Sequence Space Equations
6.1 Introduction
6.2 The (SSE) Ea + Fx = Fb with e ∈ F
6.3 Some Applications
6.4 The (SSE) with Operators
6.5 Some (SSE's) with the Operators Δ and Σ
6.6 The Multiplier M((Ea)Δ, F) and the (SSIE) Fb ⊂ (Ea)Δ + Fx
6.7 The (SSE) (Ea)Δ + sx(c) = sb(c)
6.8 More Applications
References
7 Solvability of Infinite Linear Systems
7.1 Banach Algebras of Infinite Matrices
7.2 Solvability of the Equation Ax=b
7.3 Spectra of Operators Represented by Infinite Matrices
7.4 Matrix Transformations in χ(Δm)
7.5 The Equation Ax=b, Where A Is a Tridiagonal Matrix
7.6 Infinite Linear Systems with Infinitely Many Solutions
7.7 The Hill and Mathieu Equations
References
Appendix Inequalities
A.1 Inequalities
A.2 Functional Analysis
References
Index

Bruno de Malafosse Eberhard Malkowsky Vladimir Rakočević

Operators Between Sequence Spaces and Applications

Operators Between Sequence Spaces and Applications

Bruno de Malafosse Eberhard Malkowsky Vladimir Rakočević



Operators Between Sequence Spaces and Applications


Bruno de Malafosse University of Le Havre (LMAH) Le Havre, France

Eberhard Malkowsky Faculty of Management Univerzitet Union Nikola Tesla Beograd, Serbia

Vladimir Rakočević Department of Mathematics University of Niš Niš, Serbia

ISBN 978-981-15-9741-1    ISBN 978-981-15-9742-8 (eBook)
https://doi.org/10.1007/978-981-15-9742-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

The study of operators between sequence spaces is a wide field in modern summability. In general, summability theory deals with a generalization of the concept of convergence of sequences of complex numbers. One of the original ideas was to assign, in some way, a limit to divergent series by considering a transform, in many cases defined by the use of matrices, rather than the original series. A central problem of interest is the characterization of classes of all operators between sequence spaces: the first result in this area was the famous Toeplitz theorem, which established necessary and sufficient conditions on the entries of a matrix transformation for it to preserve convergence. The original proof used the analytical method of the gliding hump. The introduction on a large scale of functional analytic methods to summability in the 1940s and the development of the theory of FK and BK spaces made the study of matrix transformations and linear operators between sequence spaces a rapidly expanding field of interest in summability. More recently, in particular after 2000, the theory of measures of noncompactness was applied in the characterization of compact operators between BK spaces. This book presents recent studies on bounded and compact operators, their underlying theories and applications to the solution of infinite systems of linear equations in various sequence spaces. It consists of two parts. The first part, Chaps. 1–3, presents the modern methods in functional analysis and operator theory and their applications in recent research on the representations and characterizations of bounded and compact linear operators between BK spaces and between matrix domains of triangles and row-finite matrices in BK spaces. In the second part, Chaps. 4–7, we present applications and research results that use the results of the first three chapters. This involves the study of whether infinite matrices A = (a_nk)_{n,k=1}^∞ are injective, surjective or bijective when considered as operators between certain sequence spaces. This book is unique, since it connects and presents the topics of the two parts in one volume for the first time, to the best of the authors' knowledge.
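As a concrete illustration of the kind of matrix transformation discussed above, the short Python sketch below (an illustration added here, not part of the book) applies a finite truncation of the Cesàro matrix C1, a classical example of a regular (Toeplitz) matrix, to a convergent sequence and shows that the transform keeps the same limit.

# Illustration only: the Cesaro matrix C_1 has entries a_nk = 1/(n+1) for
# k <= n and a_nk = 0 for k > n; it satisfies the Silverman-Toeplitz
# conditions, so it maps convergent sequences to sequences with the same limit.

def cesaro_transform(x):
    """Return the A-transform (A_n x) of a finite list x for A = C_1."""
    return [sum(x[:n + 1]) / (n + 1) for n in range(len(x))]

x = [1.0 + (-0.5) ** n for n in range(60)]   # x_n tends to 1
y = cesaro_transform(x)
print(round(x[-1], 6), round(y[-1], 6))      # both approach the common limit 1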


This book contains relevant parts of several of the authors' related lectures at graduate and postgraduate level at universities in Australia, France, Germany, India, Jordan, Mexico, Serbia, Turkey, the USA and South Africa. A great number of illustrating examples and remarks were added concerning the presented topics. The book can also be used as a textbook for graduate and postgraduate courses, as a basis for and an overview of research in the fields mentioned, and is intended to address students, teachers and researchers alike. Chapter 1 is mostly introductory and recalls the basic concepts from functional analysis needed in the book, such as linear metric and paranormed spaces, FK, BK, AK and AD spaces, multiplier spaces, matrix transformations, measures of noncompactness, in particular, the Hausdorff and Kuratowski measures of noncompactness, and measures of noncompactness of operators between Banach spaces. Although most of the material is standard, almost all proofs of the vital results are presented. In Chap. 2, we study sequence spaces that have recently been introduced by the use of infinite matrices. They can be considered as matrix domains in certain sequence spaces and can be used to define almost all the classical methods of summability as special cases. We apply the results and methods of Chap. 1 to determine their topological properties, bases and various duals; in particular, we establish a general result for the determination of the β-dual of arbitrary triangles in arbitrary FK spaces. In Chap. 3, we characterize matrix transformations on the spaces of generalized weighted means and on matrix domains of triangles in BK spaces. We also establish estimates or identities for the Hausdorff measure of noncompactness of matrix transformations from arbitrary BK spaces with AK into c, c0 and ℓ1, and also from the matrix domains of an arbitrary triangle in ℓp, c and c0 into c, c0 and ℓ1. Furthermore, we determine the classes of compact operators between the spaces just mentioned. Finally, we establish the representations of the general bounded linear operators from c into itself and from the space bv⁺ of sequences of bounded variation into c, and determine the classes of compact operators between them. In Chaps. 5 and 6, we obtain new results on sequence spaces inclusion equations and on sequence space equations, using the results of the first part on matrix transformations. This study leads to many results, for instance, in statistical convergence and in the spectral theory of operators defined by infinite matrices. We also deal with the solvability of infinite systems of linear equations in various sequence spaces. Here, we use the classical sequence spaces and the generalized Cesàro and difference operators to obtain many calculations and simplifications of complicated spaces involving these operators. We also consider the sum and the product of some linear spaces of sequences involving the sets of sequences that are "strongly bounded and summable to zero". Finally, we obtain new results on "statistical convergence".


In Chap. 7, we consider a Banach algebra in which we may obtain the inverse of an infinite matrix, and we obtain a new method to calculate the "Floquet exponent". Furthermore, we determine the solutions of the infinite linear system associated with the Hill equation with a right-hand side and give a method to approximate them. Finally, we present a study of the Mathieu equation, which can be written as an infinite tridiagonal linear system of equations.

Le Havre, France
Niš, Serbia
Niš, Serbia
August 2020

Bruno de Malafosse Eberhard Malkowsky Vladimir Rakočević

Contents

1 Matrix Transformations and Measures of Noncompactness  1
  1.1 Linear Metric and Paranormed Spaces  2
  1.2 FK and BK Spaces  10
  1.3 Matrix Transformations into the Classical Sequence Spaces  15
  1.4 Multipliers and Dual Spaces  19
  1.5 Matrix Transformations Between the Classical Sequence Spaces  23
  1.6 Crone's Theorem  27
  1.7 Remarks on Measures of Noncompactness  34
  1.8 The Axioms of Measures of Noncompactness  35
  1.9 The Kuratowski and Hausdorff Measures of Noncompactness  37
  1.10 Measures of Noncompactness of Operators  42
  References  44
  Russian References  45
2 Matrix Domains  47
  2.1 General Results  47
  2.2 Bases of Matrix Domains of Triangles  55
  2.3 The Multiplier Space M(XΣ, Y)  66
  2.4 The α-, β- and γ-duals of XΣ  68
  2.5 The α- and β-duals of XΔ(m)  78
  2.6 The β-duals of Matrix Domains of Triangles in FK Spaces  90
  References  101
3 Operators Between Matrix Domains  105
  3.1 Matrix Transformations on W(u, v; X)  105
  3.2 Matrix Transformations on XT  112
  3.3 Compact Matrix Operators  128
  3.4 The Class K(c)  139
  3.5 Compact Operators on the Space bv+  145
  References  157
4 Computations in Sequence Spaces and Applications to Statistical Convergence  159
  4.1 On Strong τ-Summability  160
  4.2 Sum and Product of Spaces of the Form sξ, sξ0, or sξ(c)  162
  4.3 Properties of the Sequence C(τ)τ  166
  4.4 Some Properties of the Sets sτ(Δ), sτ0(Δ) and sτ(c)(Δ)  172
  4.5 The Spaces wτ(λ), wτ0(λ) and wτ(c)(λ)  174
  4.6 Matrix Transformations From wτ(λ) + wν(µ) into sγ  179
  4.7 On the Sets cτ(λ, µ), cτ0(λ, µ) and cτ(c)(λ, µ)  180
  4.8 Sets of Sequences of the Form [A1, A2]  183
  4.9 Extension of the Previous Results  189
  4.10 Sets of Sequences that are Strongly τ-Bounded With Index p  192
  4.11 Computations in Wτ and Wτ0 and Applications to Statistical Convergence  198
  4.12 Calculations in New Sequence Spaces  204
  4.13 Application to A-Statistical Convergence  207
  4.14 Tauberian Theorems for Weighted Means Operators  214
  4.15 The Operator C(λ)  222
  References  226
5 Sequence Spaces Inclusion Equations  229
  5.1 Introduction  230
  5.2 The (SSIE) F ⊂ Ea + F′x with e ∈ F and F′ ⊂ M(F, F′)  236
  5.3 The (SSIE) F ⊂ Ea + F′x with E, F, F′ ∈ {c0, c, s1, ℓp, w0, w∞}  238
  5.4 Some (SSIE) and (SSE) with Operators  245
  5.5 The (SSIE) F ⊂ Ea + F′x for e ∉ F  250
  5.6 Some Applications  253
  References  263
6 Sequence Space Equations  265
  6.1 Introduction  266
  6.2 The (SSE) Ea + Fx = Fb with e ∈ F  271
  6.3 Some Applications  276
  6.4 The (SSE) with Operators  284
  6.5 Some (SSE's) with the Operators Δ and Σ  289
  6.6 The Multiplier M((Ea)Δ, F) and the (SSIE) Fb ⊂ (Ea)Δ + Fx  297
  6.7 The (SSE) (Ea)Δ + sx(c) = sb(c)  300
  6.8 More Applications  307
  References  313
7 Solvability of Infinite Linear Systems  315
  7.1 Banach Algebras of Infinite Matrices  315
  7.2 Solvability of the Equation Ax = b  320
  7.3 Spectra of Operators Represented by Infinite Matrices  326
  7.4 Matrix Transformations in χ(Δm)  330
  7.5 The Equation Ax = b, Where A Is a Tridiagonal Matrix  338
  7.6 Infinite Linear Systems with Infinitely Many Solutions  343
  7.7 The Hill and Mathieu Equations  349
  References  356
Appendix: Inequalities  359
Index  363

About the Authors

Bruno de Malafosse was Full Professor at the Laboratoire de Mathématiques Appliquées Havrais, Université du Havre, France, until 2009. He has 35 years of teaching experience in most fields of analysis, probability theory, linear algebra and mathematics for informatics. He obtained the Doctorat de 3ème cycle at the Université de Toulouse III, France, in 1980, with a thesis titled "Contribution à l'étude des systèmes infinis". He completed the Habilitation à Diriger des Recherches in 2004 at the Université du Havre, France, with a thesis titled "Sur la théorie des matrices infinies et applications". His research interests include infinite matrix theory, summability, differential equations, the theory of the sum of operators in the nondifferential case, numerical analysis, convergence of numerical schemes, optimization, quasi-Newton methods, continued fractions, spectral theory and sequence spaces. He is a member of the editorial boards of two reputed journals in Serbia and Jordan and a reviewer for Mathematical Reviews. His list of publications includes 85 research papers published in international journals. He has been a supervisor/examiner for many doctoral theses in France, Algeria and India. Eberhard Malkowsky is Full Professor at the Faculty of Management, University Union Nikola Tesla, Belgrade, Serbia. He completed his Ph.D. in Mathematics at Giessen University, Germany, in 1983 with the thesis titled "Toeplitz–Kriterien für Matrizenklassen bei Räumen absolut und stark limitierbarer Folgen". He also completed his Habilitation in Mathematics at Giessen University, Germany, in 1989, with the thesis titled "Matrix transformations in a new class of sequence spaces that includes spaces of absolutely and strongly summable sequences". He has 40 years of teaching experience in all fields of analysis, differential geometry and computer science at universities in Germany, South Africa, Serbia, Jordan and Turkey, and as a visiting professor in the USA, India, Hungary, France and Iran. He was also an invited lecturer at four summer schools of the German Academic Exchange Service (DAAD). He has supervised six Ph.D. theses and a number of B.Sc. and M.Sc. theses. His research interests include functional analysis, operator


theory, summability, sequence spaces, matrix transformations and measures of noncompactness. He has also developed software for the visualization of topics in mathematics. His list of publications includes 164 research papers, 13 books and proceedings. He is a member of the editorial boards of 10 international journals and a reviewer for Mathematical Reviews and Zentralblatt für Mathematik. Furthermore, he was the main organizer of 3 research projects and a participant in 8. He was the main organizer and a member of the organizing and scientific committees of 28 international conferences and has delivered more than 100 plenary, keynote and invited talks. Moreover, he was a European Union expert for the evaluation of the Tempus Projects from 2004 to 2006. Vladimir Rakočević is Full Professor at the Department of Mathematics, Faculty of Sciences and Mathematics, University of Niš, Serbia. He is also a corresponding member of the Serbian Academy of Sciences and Arts (SANU) in Belgrade, Serbia. He received his Ph.D. in Mathematics from the Faculty of Sciences, University of Belgrade, Serbia, in 1984; the title of his thesis was "Essential spectra and Banach algebras". His research interests include functional analysis, fixed point theory, operator theory, linear algebra and summability. He was a visiting professor at several universities and scientific institutions across the world. Furthermore, he participated as an invited/keynote speaker at numerous international scientific conferences and congresses. He is a member of the editorial boards of several international journals of repute. His list of publications includes 173 research papers in international journals. He was included in the Thomson Reuters list of Highly Cited Authors in 2014. He is the co-author of 7 books and has supervised 7 Ph.D. and more than 50 B.Sc. and M.Sc. theses in mathematics.

Acronyms

ω    set of all complex sequences x = (x_k)_{k=0}^∞
d_ω    metric on ω
c0    set of all complex null sequences
c    set of all convergent complex sequences
ℓ∞    set of all bounded complex sequences
‖·‖∞    norm for c0, c and ℓ∞
ℓ1    set of all absolutely convergent complex series
ℓ_p    = {x ∈ ω : ∑_{k=0}^∞ |x_k|^p < ∞} for 1 ≤ p < ∞
‖·‖_p    norm for ℓ_p
ℓ(p)    = {x ∈ ω : ∑_{k=0}^∞ |x_k|^{p_k} < ∞}
c0(p)    = {x ∈ ω : lim_{k→∞} |x_k|^{p_k} = 0}
ℓ∞(p)    = {x ∈ ω : sup_k |x_k|^{p_k} < ∞}
g    paranorm on ℓ(p)
g0    paranorm on c0(p)
T_X|_Y    relative topology of X on Y ⊂ X
cl_Y(E)    closure of E ⊂ Y in a topological space Y
X′    continuous dual of a Fréchet space X
φ    set of finite complex sequences
A    = (a_nk)_{n,k=0}^∞, infinite matrix of complex entries
A_n    sequence in the nth row of the infinite matrix A
A^k    sequence in the kth column of the infinite matrix A
A_n x    = ∑_{k=0}^∞ a_nk x_k
Ax    = (A_n x)_{n=0}^∞
(X, Y)    class of infinite matrices that map X ⊂ ω into Y ⊂ ω
e = (e_k)_{k=0}^∞    sequence with e_k = 1 for all k
e^(n) = (e_k^(n))_{k=0}^∞    sequence with e_n^(n) = 1 and e_k^(n) = 0 for k ≠ n
x^[m]    = ∑_{n=0}^m x_n e^(n), m-section of the sequence x = (x_k)_{k=0}^∞


B̄_δ(x0), B̄_{X,δ}(x0)    closed ball of radius δ and centre x0 in a metric space (X, d)
‖a‖*_δ = ‖a‖*_{X,δ}    = sup_{x ∈ B̄_δ(0)} |∑_{k=0}^∞ a_k x_k|
B(X, Y)    space of bounded linear operators between the Banach spaces X and Y
K(X, Y)    set of compact operators in B(X, Y)
X*    continuous dual of the normed space X
cs    set of all convergent complex series
bs    set of all bounded complex series
‖·‖_bs    norm for bs and cs
M(X, Y)    multiplier of the set X ⊂ ω in the set Y ⊂ ω
X^α    α-dual of the set X ⊂ ω
X^β    β-dual of the set X ⊂ ω
X^γ    γ-dual of the set X ⊂ ω
X^f    functional dual of the set X ⊂ ω
M_X    class of bounded sets in the metric space X
M_X^c    subclass of closed sets in M_X
diam(S)    diameter of the set S in a metric space
α(Q)    Kuratowski measure of noncompactness of the set Q ∈ M_X
χ(Q)    Hausdorff measure of noncompactness of the set Q ∈ M_X
co(S)    convex hull of the subset S of a linear space
B̄_X    closed unit ball in the normed space X
B_X    open unit ball in the normed space X
S_X    unit sphere in the normed space X
‖L‖_χ    Hausdorff measure of noncompactness of the operator L
X_A    = {x ∈ ω : Ax ∈ X}, matrix domain of A in X
(X, (p_n))    vector space X with its metrizable topology given by the sequence (p_n) of seminorms in the sense of Theorem 2.1
z^{-1} * Y    = {a ∈ ω : a·z = (a_k z_k)_{k=0}^∞ ∈ Y}
z^α    = z^{-1} * ℓ1
z^β    = z^{-1} * cs
z^γ    = z^{-1} * bs
Σ = (σ_nk)_{n,k=0}^∞    triangle of the partial sums, with σ_nk = 1 for 0 ≤ k ≤ n and σ_nk = 0 for k > n (n = 0, 1, ...)
Σ^(m)    triangle of the mth iterated partial sums
Δ = (Δ_nk)_{n,k=0}^∞    triangle of the backward differences, with Δ_{n,n} = 1, Δ_{n,n−1} = −1 and Δ_nk = 0 otherwise
Δ^(m)    triangle of the mth iterated backward differences
c0(p, Δ^(m))    = (c0(p))_{Δ^(m)}
bv(p)    = (ℓ(p))_Δ
c0(Δ^(m))    = (c0)_{Δ^(m)}
c(Δ^(m))    = c_{Δ^(m)}
ℓ∞(Δ^(m))    = (ℓ∞)_{Δ^(m)}
bv_p    = (ℓ_p)_Δ
bv    = (ℓ1)_Δ = {x ∈ ω : ∑_{k=0}^∞ |x_k − x_{k−1}| < ∞}, space of sequences of bounded variation
U    = {u ∈ ω : u_k ≠ 0 for all k = 0, 1, ...}
1/u    = (1/u_k)_{k=0}^∞ for u ∈ U
C1    triangle of the arithmetic means or the Cesàro means of order 1
N̄_q    triangle of the weighted or Riesz means
(N̄, q)_0    = (c0)_{N̄_q}
(N̄, q)    = c_{N̄_q}
(N̄, q)_∞    = (ℓ∞)_{N̄_q}
c0(u, Δ)    = (u^{-1} * c0)_Δ for u ∈ U
c(u, Δ)    = (u^{-1} * c)_Δ for u ∈ U
ℓ∞(u, Δ)    = (u^{-1} * ℓ∞)_Δ for u ∈ U
E^r = (e^r_nk)_{n,k=0}^∞    Euler matrix of order r, 0 < r < 1, the triangle with e^r_nk = (n choose k)(1 − r)^{n−k} r^k for 0 ≤ k ≤ n
e^r_p    = (ℓ_p)_{E^r} for 1 ≤ p < ∞
e^r_0    = (c0)_{E^r}
e^r_c    = c_{E^r}
e^r_∞    = (ℓ∞)_{E^r}
T̃ = (t̃_nk)_{n,k=0}^∞    matrix with t̃_{n,n−1} = 1 and t̃_nk = 0 (k ≠ n − 1) for n = 0, 1, ...
Δ⁺ = (Δ⁺_nk)_{n,k=0}^∞    matrix of the forward differences, with Δ⁺_{nn} = 1, Δ⁺_{n,n+1} = −1 and Δ⁺_nk = 0 otherwise
W(u, v; X)    = v^{-1} * (u^{-1} * X)_Σ = {x ∈ ω : u·Σ(v·x) ∈ X} for u, v ∈ U, the set of generalized weighted means
n + 1    = (n + 1)_{n=0}^∞
bv⁺    = (ℓ∞)_{Δ⁺}
bv0⁺    = bv⁺ ∩ c0
U⁺    class of sequences with positive real terms
D_u    diagonal matrix with the sequence u on its diagonal
1/u    = (1/u_k)_{k=1}^∞ for u ∈ U
(1/u)^{-1} * E    = D_u E = {y = (y_n)_{n=1}^∞ ∈ ω : y/u = (y_n/u_n)_{n=1}^∞ ∈ E} for u ∈ U and E ⊂ ω


Es ss s0s

sðscÞ k x kss Ss;m k A kSs;m Ss Cs sr s0r sðrcÞ Sr Cr U 1þ c ð 1Þ E F C ðkÞ ¼ ððC ðkÞÞnk Þ1 n;k¼1 DðkÞ ¼ ððDðkÞÞnk Þ1 n;k¼1 s c1 C ^ C d Cþ 1

C Cþ ^ C c j xj ws ðkÞ  ws ðkÞ ws ðkÞ cs ðk; lÞ  cs ðk; lÞ cs ðk; lÞ ½C; C  ½C; D ½D; C  ½D; D


¼ Ds E for s 2 U þ and E  x ¼ ð1sÞ1 ‘1 for s 2 U þ ¼ ð1sÞ1 c0 for s 2 U þ ¼ ð1sÞ1 c for s 2 U þ ¼ supn jxn =sn j for x 2 ss ; s0s ; sðscÞ and s 2 U þ ¼ ðss ; sm Þ for s; m 2 U þ P þ ¼ supn ð1=mn Þ 1 k¼1 jank jsk for s; m 2 U þ ¼ Ss;s for s 2 U ¼ fA 2 Ss :k I  A kSs \1g ¼ ss for s ¼ ðr n Þ1 n¼1 and r [ 0 ¼ s0s for s ¼ ðr n Þ1 n¼1 and r [ 0 ¼ sðscÞ for s ¼ ðr n Þ1 n¼1 and r [ 0 ¼ Ss for s ¼ ðr n Þ1 n¼1 and r [ 0 n 1 ¼ Cs for s ¼ ðr Þn¼1 and r [ 0 ¼ fx 2 U þ with xk  1 for all kg ¼ fx 2 x : limk!1 xk ¼ 1g ¼ fxy ¼ ðxn yn Þ1 n¼1 2 x : x 2 E and y 2 Fg for k 2 U, where ðC ðkÞÞnk ¼ 1=kn for 1  k  n and ðC ðkÞÞnk ¼ 0 for k [ n for k 2 U, where ðDðkÞÞnn ¼ kn ðDðkÞÞn;n1 ¼ kn1 and ðDðkÞÞn;k ¼ 0 for k 6¼ n; n  1 ¼ ðsn1 =sn Þ1 Uþ n¼1 for s 2  1 P n ¼ fs 2 U þ : ðð1=sn Þ k¼1 sk Þn¼1 2 ‘1 g  P n 1 ¼ fs 2 U þ : ðð1=sn Þ k¼1 sk Þn¼1 2 cg

P  T ¼ O ð 1 Þ ð n ! 1 Þ ¼ s 2 U þ cs : ð1=sn Þ 1 s k¼n k ¼ fs 2 U þ : lim supn!1 sn \1g ¼ fs 2 U þ : lim supn!1 ðsn þ 1 =sn Þ\1g ¼ fs 2 U þ : limn!1 sn \1g ¼ fs 2 U þ : s  2 c g 1 ¼ ðjxn jÞ1 n¼1 for x ¼ ðxn Þn¼1 2 x ¼ f x 2 x : C ðkÞðj xjÞ 2 ss g ¼ x 2 x : C ðkÞðj xjÞ 2 s0s  ¼ x 2 x : x  le 2 ws ðkÞ for some l 2 C ¼ fx 2 x : CðkÞðjDðlÞxjÞ 2 ss g for sk; l 2 U þ and l 2 x ¼ x 2 x : C ðkÞðjDðlÞxjÞ 2 s0s for s 2 U þ and l 2 x ¼ x 2 x : x  le 2 cs ðklÞ for some l 2 C for s 2 U þ and l 2 x 

Pn    ð1=lm Þ Pm xk  ¼ sn Oð1Þðn ! 1Þ ¼ x 2 x : ð1=kn Þ k¼1 Pm¼1  n ¼ x 2 x : ð1=kn Þ k¼1 jlk xk  lk1 xk1 j ¼ sn Oð1Þðn ! 1Þ  P   Pn    n1   ¼ sn Oð1Þ ¼ fx 2 x : kn1 ð1=ln1 Þ k¼1 xk  þ kn ð1=ln Þ k¼1 xk

ðn ! 1Þg

¼ fx 2 x : kn1 jln1 xn1  ln2 xn2 j þ kn jln xn  ln1 xn1 j ¼ sn Oð1Þðn ! 1Þg


 D; D þ ½D; C þ 

þ  D ;D

þ  D ;C

þ þ D ;D ½C þ ; C  ½C þ ; C þ  wps ðkÞ  wsp ðkÞ wsþ p ðkÞ  ws þ p ðkÞ cps ðk; lÞ :

csþ p ðk; lÞ :

csþ p ðk; lÞ csþ p ðk; lÞ cps ðk; lÞ :

csþ p ðk; lÞ :

csþ p ðk; lÞ þp cg s ðk; lÞ

w1 ðkÞ w0 ðkÞ k x kk k A kðw1 ðkÞ;w1 ðkÞÞ Ws Ws0 Ws ðDðkÞÞ Ws ðC ðkÞÞ

xix ¼ fx : kn jln ðxn  xn þ 1 Þj  kn1 jln1 ðxn1  xn Þj ¼ sn Oð1Þðn ! 1Þg P  P1    ¼ x : kn  1 ðxi =li Þ ¼ sn Oð1Þðn ! 1Þ i¼n ðxi =li Þ  kn1  i¼n1  ¼ x : kn jln xn  ln1 xn1 j  kn ln þ 1 xn þ 1  ln xn  ¼ sn Oð1Þðn ! 1Þ n o   P   P þ 1  ¼ x : kn ð1=ln Þ ni¼1 xi   1=ln þ 1  ni¼1 xi  ¼ sn Oð1Þðn ! 1Þ ¼ x : kn ln jxn  xn þ 1 j  kn ln þ 1 jxn þ 1  xn þ 2 j ¼ sn Oð1Þðn ! 1Þ

  n P  o Pk   ¼ x: 1 k¼n ð1=kk Þð1=lk Þ i¼1 xi  ¼ sn Oð1Þðn ! 1Þ

 P1  P    ¼ s n O ð 1Þ ð n ! 1 Þ ¼ x: 1 k¼n ð1=kk Þ i¼k ðxi =li Þ ¼ fx 2 x : C ðkÞðjxjp Þ 2 ss g for 0\p\1 ¼ fx 2 x : C ðkÞðjxjp Þ 2 s0s g for 0\p\1 ¼ fx 2 x : C þ ðkÞðjxjp Þ 2 ss g for 0\p\1 ¼ fx 2 x : C þ ðkÞðjxjp Þ 2 s0s g for 0\p\1 ¼ ðwps ðkÞÞDðlÞ ¼ fx 2 x : CðkÞðjDðlÞxjp Þ 2 ss g for 0\p\1     ¼ ðwps ðkÞÞD þ ðlÞ ¼ x 2 x : C ðkÞ D þ ðlÞxjp 2 ss

for 0\p\1 ¼ ðwsþ p ðkÞÞDðlÞ ¼ fx : C þ ðkÞðjDðlÞxjp Þ 2 ss g for 0\p\1 ¼ ðwsþ p ðkÞÞD þ ðlÞ ¼ fx : C þ ðkÞðjD þ ðlÞxjp Þ 2 ss g for 0\p\1 ¼ fx ¼ ðxn Þ1 n¼1 : supn ½ð1=jkn jsn Þð for 0\p\1

Pn

¼ fx ¼ ðxn Þ1 n¼1 : sup½ð1=jkn jsn Þð n

k¼1 jlk xk

Pn

 lk1 xk1 jp Þ\1g

k¼1 jlk ðxk

 xk þ 1 Þjp Þ\1g

for 0\p\1

P1 p ¼ fx ¼ ðxn Þ1 k¼n ðð1=jkk jÞjlk xk  lk1 xk1 j Þ\1g n¼1 : supn ½ð1=sn Þ

for 0\p\1

P1 p ¼ fx ¼ ðxn Þ1 n¼1 : limn!1 ½ð1=sn Þ k¼n ðð1=jkk jÞjlk ðxk  xk þ 1 Þj Þ ¼ 0g

for 0\p\1 Pn ¼ fx ¼ ðxn Þ1 n¼1 2 x : supn ð1=kn ÞP k¼1 jxk j\1g n ¼ fx ¼ ðxn Þ1 n¼1 2 x : lim ð1=kn Þ k¼1 jxk j ¼ 0g

n!1   P ¼k x kw1 ðkÞ ¼ supn ð1=kn Þ nk¼1 jxk j for x 2 w1 ðkÞ; w0 ðkÞ and k 2 U þ

¼ supx6¼0 ðk Axjk = k x kk Þ   P ¼ fx 2 x :k x kWs ¼ supn ð1=nÞ nk¼1 jxk j=sk \1g for s 2 U þ   P ¼ x 2 x : limn!1 ð1=nÞ nk¼1 jxk j=sk ¼ 0 for s 2 Uþ   P ¼ fx 2 x : supn ð1=nÞ nk¼1 ð1=sk Þjkk xk  kk1 xk1 j \1g for k; s 2 U þ P  P   ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=km sm Þ m k¼1 xk \1g þ for k; s 2 U



Ws ðC þ ðkÞÞ

for k; l; s 2 U þ

 P  P ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=sm Þ 1 k¼m ð1=kk Þjlk xk  lk1 xk1 j \1g

½C ; DWs

k; l; s 2 U þ P  i h P  P  k  ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=sm Þ 1 \1g k¼m ð1=kk Þð1=lk Þ i¼1 xi 

for

þ

½C ; CWs

kM

clE ðbÞ I a ðE; F; F 0 Þ

Gþ U 1þ K C ^K C cs0 bvp SR;R Bðss Þ L Np;s ð AÞ b p ð sÞ B

C0p;s rðA; X Þ qðA; X Þ

k; l; s 2 U þ P1  

P  P   \1g ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=sm Þ 1 k¼m ð1=kk Þ i¼k ðxi =li Þ

for

þ

½C ; C Ws

v Bð~r ; ~sÞ

    Pm   k¼1 xk =kk \1g

P i P  P  k  \1g ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=km sm Þ m k¼1 ð1=lk Þ i¼1 xi 

þ

k RE

m¼1 ð1=sm Þ

for k; l; s 2 U þh

½C; CWs

ðan Þ1 n¼1

Pn

 P  P ¼ fx 2 x : supn ð1=nÞ nm¼1 ð1=km sm Þ m k¼1 jlk xk  lk1 xk1 j \1g

½C; DWs

þ

¼ fx 2 x : supn ð1=nÞ for k; s 2 U þ

forP k; l; s 2 U þ m ¼ 1 m¼1 2 max2m  k  2m þ 1 1 jak j equivalence relation defined for any E  x by xRE y if and only if E x ¼ E y for x; y 2 U þ equivalence class of b 2 U þ and the equivalence relation RE ¼ x 2 U þ : F  Ea þ Fx0 , where E; F; F 0  x are linear spaces and a 2 U þ ¼ fx 2 U þ : 1=x 2 vg for any v  x the bidiagonal matrix with ½Bð~r ; ~sÞnn ¼ rn for all n and ½Bð~r ; ~sÞn;n1 ¼ sn1 for all n 2 ¼ G \ U þ for any G  x ¼ fx 2 U þ : xn  1 for all ng 1=n Þ\1g ¼ fx ¼ ðxn Þ1 n¼1 2 x : supn ðjxn j 1 ¼ fx ¼ ðxn Þn¼1 2 x : limn!1 ðjxn j1=n Þ ¼ 0g ¼ fa 2 U þ : ½CðaÞan  k n for all n and some k [ 0g P ¼ ðc0 ÞR ¼ fx 2 x : ð nk¼1 xk Þ1 n¼1 2 c0 g P1 ¼ ð‘p ÞD ¼ fx 2 x : k¼1 jyk  yk1 jp \1g for 0\p\1 ðcÞ ¼ fx 2 U þ : ðs0R ÞD þ sðxcÞ ¼ sR g T ¼ Bðss Þ ðss ; ss Þ the set of lower triangular infinite matrices, that is, A 2 L if ank ¼ 0 for k [ n and all n P1 P q p1 1=p  ¼½ 1 n¼1 ð k¼1 ðjank jðsk =sn ÞÞ Þ 1 ¼ fA ¼ ðank Þn;k¼1 : Np;s ð AÞ\1g   ¼ fA ¼ ðank Þ1 n;k¼1 2 ðð‘p Þs ; ‘p Þs : Np;s ðI  AÞ\1g spectrum of an operator A 2 BðX; X Þ ¼ CnrðA; X Þ resolvent set of an operator A 2 BðX; X Þ
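Several of the triangles listed above (the partial-sum matrix Σ, the backward difference matrix Δ, the Cesàro matrix C1 and the Euler matrix E^r) are easy to generate explicitly; the following Python sketch is an illustration added here, not part of the book, and builds finite sections of them and checks two of their elementary properties.

# Illustration only: finite sections of some triangles from the list of acronyms.
import numpy as np
from math import comb

def sigma(n):
    """Partial-sum triangle: sigma_nk = 1 for k <= n, 0 otherwise."""
    return np.tril(np.ones((n, n)))

def delta(n):
    """Backward-difference triangle: 1 on the diagonal, -1 on the first subdiagonal."""
    return np.eye(n) - np.eye(n, k=-1)

def cesaro(n):
    """Cesaro means of order 1: a_nk = 1/(n+1) for k <= n, 0 otherwise."""
    return np.array([[1.0 / (i + 1) if k <= i else 0.0 for k in range(n)]
                     for i in range(n)])

def euler(r, n):
    """Euler matrix E^r: e_nk = C(n, k) (1-r)^(n-k) r^k for k <= n, 0 otherwise."""
    return np.array([[comb(i, k) * (1 - r) ** (i - k) * r ** k if k <= i else 0.0
                      for k in range(n)] for i in range(n)])

n = 6
print(np.allclose(sigma(n) @ delta(n), np.eye(n)))   # Sigma and Delta are mutually inverse
print(np.allclose(euler(0.5, n).sum(axis=1), 1.0))   # each row of E^r sums to 1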

Chapter 1

Matrix Transformations and Measures of Noncompactness

The major part of this chapter is introductory and included as a reference for the reader's convenience; it recalls the concepts and results from the theories of sequence spaces, matrix transformations in Sects. 1.1–1.3 and 1.5 and measures of noncompactness in Sects. 1.7–1.10 that are absolutely essential for the book. Although the results of this chapter may be considered as standard in the modern theories of matrix transformations, we included their proofs with the exception of those in Sect. 1.4 on the relations between various kinds of duals; these results are mainly included for the sake of completeness and not directly used in the remainder of the book. We refer the reader interested in matrix transformations to [1, 5, 14, 19, 23, 25, 27, 33–37] and the survey paper [29], and, in measures of noncompactness, to [2–4, 10, 12, 21–23, 30]. Although the concepts and results of this chapter are standard, we decided to cite from [23] in almost all cases, if necessary. Sections 1.3 and 1.5 contain results that are less standard. They concern the characterizations of matrix transformations from arbitrary FK spaces into the spaces c0, c, ℓ∞ and ℓ1 of all null, convergent and bounded sequences and of all absolutely convergent series, and all known characterizations of classes of matrix transformations between the classical sequence spaces c0, c, ℓ∞ and ℓ_p = {x = (x_k) : ∑_k |x_k|^p < ∞} for 1 ≤ p < ∞. For the sake of completeness, we also prove Crone's theorem, which characterizes the class of all matrix transformations of ℓ2 into itself, in Sect. 1.6. The underlying fundamental concepts for the theories of sequence spaces and matrix transformations are those of linear metric and paranormed spaces, of FK, BK and AK spaces, and of various kinds of dual spaces. The general results related to these concepts are used in the characterizations of matrix transformations between the classical sequence spaces. By this, we mean to give necessary and sufficient conditions on the entries of an infinite matrix to map one classical sequence space into another. We also present an axiomatic introduction of measures of noncompactness on bounded sets of complete metric spaces, recall the definitions and essential properties of the Kuratowski and Hausdorff measures of noncompactness, which are the most prominent measures of noncompactness, and the famous theorem by Goldenštein, Gohberg and Markus, which gives an estimate for the Hausdorff measure of noncompactness of bounded sets in Banach spaces with a Schauder basis. Finally, we recall the definition of the measure of noncompactness of operators and list some important related properties.

1.1 Linear Metric and Paranormed Spaces

Here, we recall the concepts of linear metric and paranormed spaces, which are fundamental in the theory of sequence spaces and matrix transformations. The concept of a linear or vector space involves an algebraic structure given by the definition of two operations, namely, the sum of any two of its elements, also called vectors, and the product of any scalar with any vector. It is clear that the set ω of all complex sequences x = (x_k)_{k=0}^∞ is a linear space with respect to the addition and scalar multiplication defined termwise, that is,

x + y = (x_k + y_k)_{k=0}^∞ and λx = (λx_k)_{k=0}^∞ for all x = (x_k)_{k=0}^∞, y = (y_k)_{k=0}^∞ ∈ ω and all λ ∈ C.

On the other hand, a topological structure of a set may be given by a metric. For instance, ω is a metric space with its metric d defined by

d(x, y) = ∑_{k=0}^∞ 2^{-k} |x_k − y_k|/(1 + |x_k − y_k|) for all x, y ∈ ω.   (1.1)
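The series in (1.1) is dominated by ∑ 2^{-k}, so it always converges; the short Python sketch below (an illustration added here, not part of the original text) evaluates a truncation of d, with truncation error at most 2^{-K}.

# Illustration only: truncated evaluation of the metric d on omega from (1.1).
def d_omega(x, y, K=60):
    """Approximate d(x, y) using the first K terms; the tail is at most 2**-K."""
    total = 0.0
    for k in range(K):
        t = abs(x(k) - y(k))
        total += (t / (1.0 + t)) / 2.0 ** k
    return total

x = lambda k: 1.0 / (k + 1)   # the sequence (1/(k+1)) in omega
zero = lambda k: 0.0
print(d_omega(x, zero))       # distance from x to the zero sequence
print(d_omega(x, x))          # 0.0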

If a set is both a linear and metric space, then it is natural to require the algebraic operations to be continuous with respect to the metric. The continuity of the algebraic operations in a linear metric space (X, d) means the following: If (xn ) and (yn ) are sequences in X and (λn ) is a sequence of scalars with xn → x, yn → y and λn → λ (n → ∞), then it follows that xn + yn → x + y and λn xn → λx (n → ∞). A complete linear metric space is called a Fréchet space. Unfortunately, this terminology is not universally agreed on. Some authors call a complete linear metric space an F space, and a locally convex F space a Fréchet space (e.g. [28, p. 8] or [15, p. 208]) which Wilansky calls an F space. We follow Wilansky’s terminology of a Fréchet space being a complete linear metric space [31–33]. We will see later that ω is a Fréchet space with the metric defined in (1.1). The concept of a paranorm is closely related to linear metric spaces. It is a generalization of that of absolute value. The paranorm of a vector may be thought of as its distance from the origin. We recall the definition of a paranormed space for the reader’s convenience. Definition 1.1 Let X be a linear space. (a) A function p : X → R is called a paranorm, if


p(0) = 0,   (P.1)
p(x) ≥ 0 for all x ∈ X,   (P.2)
p(−x) = p(x) for all x ∈ X,   (P.3)
p(x + y) ≤ p(x) + p(y) for all x, y ∈ X (triangle inequality),   (P.4)
if (λ_n) is a sequence of scalars with λ_n → λ (n → ∞) and (x_n) is a sequence of vectors with p(x_n − x) → 0 (n → ∞), then it follows that p(λ_n x_n − λx) → 0 (n → ∞) (continuity of multiplication by scalars).   (P.5)

If p is a paranorm on X, then (X, p), or X for short, is called a paranormed space. A paranorm p for which p(x) = 0 implies x = 0 is called total.
(b) For any two paranorms p and q, p is called stronger than q if, whenever (x_n) is a sequence with p(x_n) → 0 (n → ∞), then also q(x_n) → 0 (n → ∞). If p is stronger than q, then q is said to be weaker than p. If p is stronger than q and q is stronger than p, then p and q are called equivalent. If p is stronger than q, but p and q are not equivalent, then p is called strictly stronger than q, and q is called strictly weaker than p.
If p is a total paranorm for a linear space X, then it is easy to see that d(x, y) = p(x − y) (x, y ∈ X) defines a metric on X; thus every totally paranormed space is a linear metric space. The converse is also true: the metric of any linear metric space is given by some total paranorm [31, Theorem 10.4.2, p. 183].
The next well-known result shows how a sequence of paranorms may be used to define a paranorm.
Theorem 1.1 Let (p_k)_{k=0}^∞ be a sequence of paranorms on a linear space X. We define the so-called Fréchet combination of (p_k)_{k=0}^∞ by

p(x) = ∑_{k=0}^∞ 2^{-k} p_k(x)/(1 + p_k(x)) for all x ∈ X.   (1.2)

Then, we have
(a) p is a paranorm on X and satisfies

p(x_n) → 0 (n → ∞) if and only if p_k(x_n) → 0 (n → ∞) for each k.   (1.3)

(b) p is the weakest paranorm stronger than every p_k.
(c) p is total if and only if the set {p_k : k = 0, 1, ...} is total.


(We recall that a set Φ of functions from a linear space X to a linear space is said to be total if, given x ∈ X \ {0}, there exists f ∈ Φ with f(x) ≠ 0.)
We obtain the following as an immediate consequence of Theorem 1.1.
Corollary 1.1 The set ω is a complete, totally paranormed space with its paranorm p defined by

p(x) = ∑_{k=0}^∞ 2^{-k} |x_k|/(1 + |x_k|) for all x ∈ ω.   (1.4)

Thus, ω is a Fréchet space with its natural metric dω given by

dω(x, y) = ∑_{k=0}^∞ 2^{-k} |x_k − y_k|/(1 + |x_k − y_k|) for all x, y ∈ ω.   (1.5)

Furthermore, convergence in (ω, dω) and coordinatewise convergence are equivalent, that is, dω(x^{(n)}, x) → 0 (n → ∞) if and only if lim_{n→∞} x_k^{(n)} = x_k for each k.
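The equivalence just stated can be observed numerically; the sketch below (mine, for illustration only) lets a family of sequences converge coordinatewise to a limit sequence and shows the dω-distance shrinking accordingly.

# Illustration of Corollary 1.1: coordinatewise convergence forces d_omega-convergence,
# because the kth term of (1.5) is weighted by 2**-k and bounded by that weight.
def d_omega(x, y, K=80):
    total = 0.0
    for k in range(K):
        t = abs(x[k] - y[k])
        total += (t / (1.0 + t)) / 2.0 ** k
    return total

x = [1.0 / (k + 1) for k in range(80)]
for n in (1, 10, 100, 1000):
    xn = [xk + 1.0 / n for xk in x]   # x^(n) converges to x in every coordinate
    print(n, round(d_omega(xn, x), 6))   # tends to 0 as n grows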

We close this section with an example we prove in detail for the reader's convenience, since part of it will be used in Part (b) of Examples 1.2 and 2.3, and in Sect. 6.
Example 1.1 Let p = (p_k)_{k=0}^∞ be a sequence of positive reals and

ℓ(p) = {x ∈ ω : ∑_{k=0}^∞ |x_k|^{p_k} < ∞}, c0(p) = {x ∈ ω : lim_{k→∞} |x_k|^{p_k} = 0}

and

ℓ∞(p) = {x ∈ ω : sup_k |x_k|^{p_k} < ∞}.

(a) Then ℓ(p), c0(p) and ℓ∞(p) are linear spaces if and only if the sequence p is bounded.
(b) Let the sequence p be bounded and M = max{1, sup_k p_k}. Then ℓ(p) and c0(p) are complete, totally paranormed spaces with their natural paranorms g and g0 given by

g(x) = (∑_{k=0}^∞ |x_k|^{p_k})^{1/M} for all x ∈ ℓ(p)

and

g0(x) = sup_k |x_k|^{p_k/M} for all x ∈ c0(p),


and g and g0 are strictly stronger than the natural paranorm of ω on ℓ(p) and c0(p), respectively. But g0 is a paranorm for ℓ∞(p) if and only if

m = inf_k p_k > 0,   (1.6)

in which case ℓ∞(p) reduces to the classical space ℓ∞ of bounded complex sequences.
Proof (a) First we assume that the sequence p is bounded. We write X(p) for any of the sets ℓ(p), c0(p) and ℓ∞(p). Let x, y ∈ X(p) and λ ∈ C be given.
(a.i)

First we show x + y ∈ X(p). Putting α_k = p_k/M ≤ 1 for all k, we obtain |x_k + y_k|^{α_k} ≤ |x_k|^{α_k} + |y_k|^{α_k} by inequality (A.1). If X(p) = c0(p) or X(p) = ℓ∞(p), then

sup_k |x_k + y_k|^{α_k} ≤ sup_k |x_k|^{α_k} + sup_k |y_k|^{α_k},   (1.7)

which implies x + y ∈ X(p). If X(p) = ℓ(p), applying Minkowski's inequality (A.4) we get

(∑_{k=0}^∞ |x_k + y_k|^{p_k})^{1/M} = (∑_{k=0}^∞ (|x_k + y_k|^{α_k})^M)^{1/M}
  ≤ (∑_{k=0}^∞ (|x_k|^{α_k} + |y_k|^{α_k})^M)^{1/M}
  ≤ (∑_{k=0}^∞ |x_k|^{α_k M})^{1/M} + (∑_{k=0}^∞ |y_k|^{α_k M})^{1/M}
  = (∑_{k=0}^∞ |x_k|^{p_k})^{1/M} + (∑_{k=0}^∞ |y_k|^{p_k})^{1/M}.

Thus, we have shown

(∑_{k=0}^∞ |x_k + y_k|^{p_k})^{1/M} ≤ (∑_{k=0}^∞ |x_k|^{p_k})^{1/M} + (∑_{k=0}^∞ |y_k|^{p_k})^{1/M},   (1.8)

and so x + y ∈ ℓ(p). This completes Part (a.i) of the proof.
(a.ii) Now we show λx ∈ X(p). We put Λ = max{1, |λ|^M} < ∞. Then |λ|^{p_k} ≤ Λ for all k, and so

∑_{k=0}^∞ |λx_k|^{p_k} ≤ Λ ∑_{k=0}^∞ |x_k|^{p_k} < ∞

and

sup_k |λx_k|^{p_k} ≤ Λ sup_k |x_k|^{p_k} < ∞,

hence λx ∈ X(p). This completes Part (a.ii) of the proof. Thus, we have shown that if the sequence p is bounded, then ℓ(p), c0(p) and ℓ∞(p) are linear spaces.
To show the converse part, we assume that the sequence p is not bounded. Then there exists a subsequence (p_{k(j)})_{j=0}^∞ of the sequence p such that p_{k(j)} > j + 1 for all j. We define the sequence x = (x_k)_{k=0}^∞ by

x_k = 1/(j + 1)^{2/p_{k(j)}} for k = k(j) and x_k = 0 for k ≠ k(j) (j = 0, 1, ...).

Then, we have

∑_{k=0}^∞ |x_k|^{p_k} = ∑_{j=0}^∞ 1/(j + 1)^2 < ∞,

hence x ∈ ℓ(p), but

sup_k |2x_k|^{p_k} > 2^{j+1}/(j + 1)^2 for j = 0, 1, ...,

that is, 2x ∉ ℓ∞(p). This shows that if the sequence p is not bounded, then the spaces X(p) are not linear spaces. This completes the proof of Part (a).
(b) Now let the sequence p be bounded.
(b.1) First we show that g is a total paranorm for X(p) = ℓ(p) and g0 is a total paranorm for X(p) = c0(p). We write g_{X(p)} = g for X(p) = ℓ(p) and g_{X(p)} = g0 for X(p) = c0(p). Then, we obviously have g_{X(p)} : X(p) → R, g_{X(p)}(0) = 0, g_{X(p)}(x) ≥ 0, g_{X(p)}(x) = 0 implies x = 0 and g_{X(p)}(−x) = g_{X(p)}(x), and (1.8) and (1.7) are the triangle inequalities for ℓ(p) and c0(p), respectively. To show the condition in (P.5) of Definition 1.1, let (λ_n)_{n=0}^∞ be a sequence of scalars with λ_n → λ (n → ∞) and (x^{(n)})_{n=0}^∞ be a sequence of elements x^{(n)} ∈ X(p) with g_{X(p)}(x^{(n)} − x) → 0 (n → ∞). We observe that by (1.7) or (1.8)

g_{X(p)}(λ_n x^{(n)} − λx) ≤ g_{X(p)}((λ_n − λ)(x^{(n)} − x)) + g_{X(p)}(λ(x^{(n)} − x)) + g_{X(p)}((λ_n − λ)x).   (1.9)


It follows from λn → λ (n → ∞) that |λn − λ| < 1 for all sufficiently large n, hence

g_{X(p)}((λ_n − λ)(x^{(n)} − x)) ≤ g_{X(p)}(x^{(n)} − x) → 0 (n → ∞). Furthermore, we have

g_{X(p)}(λ(x^{(n)} − x)) ≤ g_{X(p)}(x^{(n)} − x) → 0 (n → ∞).

Therefore, the first two terms on the right in (1.9) tend to zero as n tends to infinity. To show that the third term on the right also tends to zero as n tends to infinity, let ε > 0 be given with ε < 1.
(α) First, we consider the case of X(p) = ℓ(p). Since x ∈ ℓ(p), we can choose a non-negative integer k0 such that

(∑_{k=k0+1}^∞ |x_k|^{p_k})^{1/M} < ε/2.

Now we choose a non-negative integer n 0 such that  |λn − λ| ≤ 1 and max |λn − λ| 0≤k≤k0

pk


0 be given. Then there exists a non-negative integer N such that |xk(n) − xk(m) |αk ≤ g0 (x (n) − x (m) ) < ε for all n, m ≥ N and for all k, and so |xk(n) − xk |αk = lim |xk(n) − xk(m) |αk ≤ ε for all n ≥ N and all k, m→∞

that is,

g0 (x (n) − x) ≤ ε for all n ≥ N .

(1.11)

Since x^{(N)} ∈ c0(p), there exists a non-negative integer k0 such that |x_k^{(N)}|^{p_k} < ε for all k ≥ k0, hence

|x_k|^{α_k} ≤ |x_k^{(N)} − x_k|^{α_k} + |x_k^{(N)}|^{α_k} ≤ ε^{1/M} + ε^{1/M} ≤ 2ε^{1/M}

and so |x_k|^{p_k} ≤ 2^M ε for all k ≥ k0. This and (1.11) together imply x^{(n)} → x (n → ∞) in c0(p). Thus, we have shown that c0(p) is complete.
(b.4) Finally, we show that g0 is a paranorm for ℓ∞(p) if and only if the condition (1.6) is satisfied.


If 0 < m ≤ p_k ≤ M for all k, then obviously ℓ∞(p) = ℓ∞, and ℓ∞ is a Banach space. Conversely, if m = 0, then there is a subsequence (p_{k(j)})_{j=0}^∞ of the sequence (p_k)_{k=0}^∞ such that p_{k(j)} < 1/(j + 1) for j = 0, 1, .... We define the sequence x = (x_k)_{k=0}^∞ by

x_k = 1 for k = k(j) and x_k = 0 for k ≠ k(j) (j = 0, 1, ...).

Then obviously x ∈ ℓ∞(p). Let λ_n = 1/n for all n. Then lim_{n→∞} λ_n = 0, but

g0(λ_n x) = sup_k |λ_n x_k|^{p_k} ≥ lim_{j→∞} (1/n)^{1/(j+1)} = 1 for all n ∈ N,

hence g0(λ_n x) ↛ 0 (n → ∞). Thus, g0 cannot be a paranorm for ℓ∞(p). This completes the proof.
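The paranorms g and g0 from Example 1.1 and the effect exploited in part (b.4) are easy to reproduce numerically; the Python sketch below is an added illustration with an arbitrarily chosen exponent sequence, not taken from the book, and shows that sup_k |λ_n x_k|^{p_k} stays away from 0 when inf_k p_k = 0.

# Illustration of Example 1.1: paranorms of l(p) and c_0(p) on finitely
# supported sequences, and the failure of (P.5) for l_infty(p) when inf p_k = 0.
def g(x, p):
    """Paranorm of l(p): (sum |x_k|^{p_k})^(1/M) with M = max(1, sup p_k)."""
    M = max(1.0, max(p))
    return sum(abs(xk) ** pk for xk, pk in zip(x, p)) ** (1.0 / M)

def g0(x, p):
    """Paranorm of c_0(p): sup_k |x_k|^{p_k/M}."""
    M = max(1.0, max(p))
    return max(abs(xk) ** (pk / M) for xk, pk in zip(x, p))

p = [1.0 / (j + 1) for j in range(200)]           # exponents with inf p_k = 0
x = [1.0] * 200                                    # truncation of the sequence from (b.4)
print(round(g(x[:5], p[:5]), 4), round(g0(x, p), 4))
for n in (10, 10 ** 3, 10 ** 6):
    sup_val = max(abs(xk / n) ** pk for xk, pk in zip(x, p))
    print(n, round(sup_val, 4))   # stays close to 1 (for the full sequence it would not tend to 0)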



1.2 FK and BK Spaces Here we give a short overview concerning the theory of F K and B K spaces which is the most powerful tool in the characterization of matrix transformations between sequence spaces. The fundamental result of this section is Theorem 1.4 which states that matrix maps between F K spaces are continuous. Most of the results in this section can be found in [21, 23, 31, 33]. We also refer the reader to [14, 34]. We provide the proofs of Theorems 1.2–1.7, and Corollary 1.2 for the reader who may not be too familiar with the theory of B K spaces. We start with a slightly more general definition. Definition 1.2 Let H be a linear space and a Hausdorff space. An FH space is a Fréchet space X such that X is a subspace of H and the topology of X is stronger than the restriction of the topology of H on X . If H = ω with its topology given by the metric dω of (1.5) in Corollary 1.1, then an F H space is called an F K space. A B H space or a B K space is an F H or F K space which is a Banach space. Remark 1.1 (a) If X is an F H space, then the inclusion map ι : X → H with ι(x) = x for all x ∈ X is continuous. Therefore, X is continuously embedded in H . (b) Since convergence in (ω, dω ) and coordinatewise convergence are equivalent by Corollary 1.1, convergence in an F K space implies coordinatewise convergence. (c) The letters F, H, K and B stand for Fréchet, Hausdorff, Koordinate, the German word for coordinate, and Banach.


(d) Some authors include local convexity in the definition of an FH space. Since most of the theory presented here can be developed without local convexity, we follow Wilansky [33] and do not include it in our definition. If local convexity, however, is needed, then it will explicitly be mentioned.
Example 1.2 (a) Trivially ω is an FK space with its natural metric of (1.5) in Corollary 1.1.
(b) If p = (p_k)_{k=0}^∞ is a bounded sequence of positive reals, then ℓ(p) and c0(p) are FK spaces with their metrics d given by their paranorms, since they are Fréchet subspaces of ω with their metrics stronger than the natural metric of ω on them by Example 1.1.
(c) We consider the spaces ℓ∞, c and c0 of all bounded, convergent and null sequences, and

ℓ_p = {x ∈ ω : ∑_{k=0}^∞ |x_k|^p < ∞} for 1 ≤ p < ∞.

It is well known that ℓ∞, c and c0 are Banach spaces with

‖x‖∞ = sup_k |x_k|;

since |x_k| ≤ ‖x‖∞ for all x and all k, those spaces are BK spaces. Since ℓ_p obviously is the special case of ℓ(p) with the sequence p = p·e, and

‖x‖_p = (∑_{k=0}^∞ |x_k|^p)^{1/p}

is a norm on ℓ_p for 1 ≤ p < ∞, ℓ_p is a BK space by Example 1.1. We refer to the spaces in this part as the classical sequence spaces.
Remark 1.2 We have seen in Example 1.1 (b) that g0 is only a paranorm for ℓ∞(p) when the condition in (1.6) holds, in which case ℓ∞(p) = ℓ∞. Grosse–Erdmann [9] determined a linear topology for ℓ∞(p). He showed in [9, Theorem 2 (ii)] that ℓ∞(p) is an IBK space, that is, it can be written as the union of an increasing sequence of BK spaces, and is endowed with the inductive limit topology.
The following results are fundamental. Their proofs use well-known theorems from functional analysis, which are included in Appendix A.2 for the reader's convenience.
Theorem 1.2 Let X be a Fréchet space, Y be an FH space, T_Y and T_H|_Y denote the FH topology on Y and the topology of H on Y, and f : X → Y be linear. Then f : X → (Y, T_H|_Y) is continuous if and only if f : X → (Y, T_Y) is continuous.


Proof (i) First, we assume that f : X → (Y, TY ) is continuous. Since Y is an F H space, we have T H |Y ⊂ TY , and so f : X → (Y, T H |Y ) is continuous. (ii) Conversely, we assume that f : X → (Y, T H |Y ) is continuous. Then it has closed graph by Theorem A.1 in Appendix A.2. Since Y is an F H space, we again have T H |Y ⊂ TY , and so f : X → (Y, TY ) has closed graph. Consequently, f : X → (Y, TY ) is continuous by the closed graph theorem, Theorem A.2 in Appendix A.2.  We obtain as an immediate consequence of Theorem 1.2. Corollary 1.2 Let X be a Fréchet space, Y be an F K space, f : X → Y be linear, and the coordinates Pn : X → C for n = 0, 1, . . . be defined by Pn (x) = xn for all x ∈ X . If Pn ◦ f : X → C is continuous for every n, then f : X → Y is continuous. Proof Since convergence and coordinatewise convergence are equivalent in ω by Corollary 1.1, the continuity of Pn ◦ f : X → C for all n implies the continuity of f : X → ω, and so f : X → Y is continuous by Theorem 1.2.  By φ, we denote the set of all finite sequences. Thus, x = (xk )∞ k=0 ∈ φ if and only if there is an integer k such that x j = 0 for all j > k. Let X be a Fréchet spaces. Then we denote by X the set of all continuous linear functionals on X ; X is called the continuous dual of X .  Theorem 1.3 Let X ⊃ φ be an F K space. If the series ∞ k=0 ak x k converge for all x ∈ X , then the linear functional f a defined by f a (x) = ∞ k=0 ak x k for all x ∈ X is continuous.  Proof We define the functionals f a[n] for all n ∈ N0 by f a[n] (x) = nk=0 ak xk for all n [n] x ∈ X . Since X is an F K space and f a = k=0 ak Pk is a finite linear combination of the continuous coordinates Pk (k = 0, 1, . . . ), we have f a[n] ∈ X for all n. By hypothesis, the limits f a (x) = limn→∞ f a[n] (x) exist for all x ∈ X , hence f a ∈ X by the Banach–Steinhaus theorem, Theorem A.3.  The next result is one of the most important ones in the theory of matrix transformations between sequence spaces. We need the following notations. Let X and Y be subsets of ω, A = (ank )∞ n,k=0 be an infinite matrix of complex numbers and x ∈ ω. Then, we write An x =

∑_{k=0}^∞ a_nk x_k for n = 0, 1, ... and Ax = (A_n x)_{n=0}^∞

provided all the series converge; Ax is called the A transform of the sequence x. We write (X, Y ) for the class of all infinite matrices that map X into Y , that is, for which the series An x converge for all n and all x ∈ X , and Ax ∈ Y for all x ∈ X . Theorem 1.4 Any matrix map between F K spaces is continuous.


Proof Let X and Y be FK spaces, A ∈ (X, Y) and L_A : X → Y be defined by L_A(x) = Ax for all x ∈ X. Since the maps P_n ∘ L_A : X → C are continuous for all n by Theorem 1.3, L_A : X → Y is continuous by Corollary 1.2.
It turns out as a consequence of Theorem 1.2 that the FH topology of an FH space is unique; more precisely, we have the following.
Theorem 1.5 Let (X, T_X) and (Y, T_Y) be FH spaces with X ⊂ Y, and T_Y|_X denote the topology of Y on X. Then

T_X ⊃ T_Y|_X;   (1.12)

T_X = T_Y|_X if and only if X is a closed subspace of Y.   (1.13)

In particular, the topology of an FH space is unique.
Proof (i) First we show the inclusion in (1.12). Since X is an FH space, the inclusion map ι : (X, T_X) → (H, T_H) is continuous by Remark 1.1 (a). Therefore, ι : (X, T_X) → (Y, T_Y) is continuous by Theorem 1.2. Thus, the inclusion in (1.12) holds.
(ii) Now we show the identity in (1.13). Let T and T′ be FH topologies for an FH space. Then it follows by what we have just shown in Part (i) that T ⊂ T′ ⊂ T.
(α) If X is closed in Y, then X becomes an FH space with T_Y|_X. It follows from the uniqueness that T_X = T_Y|_X.
(β) Conversely, if T_X = T_Y|_X, then X is a complete subspace of Y, and so closed in Y.

The next results are also useful. Theorem 1.6 Let X , Y and Z be F H spaces with X ⊂ Y ⊂ Z . If X is closed in Z , then X is closed in Y . Proof Since X is closed in (Y, T Z |Y ), it is closed in (Y, TY ) by Theorem 1.5.



Let Y be a topological space, and E ⊂ Y . Then we write clY (E) for the closure of E in Y . Theorem 1.7 Let X and Y be F H spaces with X ⊂ Y , and E be a subset of X . Then, we have clY (E) = clY ( cl X (E)), in particular cl X (E) ⊂ clY (E). Proof Since X is closed in (Y, T Z |Y ), it is closed in (Y, TY ) by Theorem 1.5.



Example 1.3 (a) Since c0 and c are closed in ℓ∞, their BK topologies are the same; since ℓ1 is not closed in ℓ∞, its BK topology is strictly stronger than that of ℓ∞ on ℓ1 (Theorem 1.5).


(b) If c is not closed in an F K space X , then X must contain unbounded sequences (Theorem 1.6). Now we recall the definition of the AD and AK properties. Definition 1.3 Let X ⊃ φ be an F K space. Then X is said to have (a) AD if cl X (φ) = X ; (b) AK if every sequence x = (xk )∞ k=0 ∈ X has a unique representation x=

∑_{k=0}^∞ x_k e^{(k)},

that is, if every sequence x is the limit of its m-sections

x^{[m]} = ∑_{k=0}^m x_k e^{(k)}.

The letters A, D and K stand for abschnittsdicht, the German word for sectionally dense, and Abschnittskonvergenz, the German word for sectional convergence.
Example 1.4 (a) Every FK space with AK obviously has AD.
(b) An example of an FK space with AD which does not have AK can be found in [33, Example 5.2.14].
(c) The spaces ω, c0(p), ℓ(p) for p = (p_k)_{k=0}^∞ ∈ ℓ∞, in particular, c0 and ℓ_p (1 ≤ p < ∞) have AK.
(d) The space c does not have AK, since e ∈ c.
Now we recall the concept of a Schauder basis. We refer the reader to [18, 24] for further studies.
Definition 1.4 A Schauder basis of a linear metric space X is a sequence (b_n) of vectors such that for every vector x ∈ X there is a unique sequence (λ_n) of scalars with

∑_{n=0}^∞ λ_n b_n = x, that is, lim_{m→∞} ∑_{n=0}^m λ_n b_n = x.
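A small numerical illustration of the AK property (my sketch, not part of the text): in ℓ1 the m-sections x^[m] converge to x, whereas the m-sections of e = (1, 1, 1, ...) keep sup-distance 1 from e, which is exactly why c fails to have AK.

# Illustration of Definition 1.3: m-sections in l_1 versus the sequence e in c.
def l1_section_error(x, m):
    """||x - x^[m]||_1 for a finitely supported sequence x (given as a list)."""
    return sum(abs(t) for t in x[m + 1:])

x = [2.0 ** (-k) for k in range(40)]                                 # x in l_1 (truncated)
print([round(l1_section_error(x, m), 6) for m in (0, 5, 10, 20)])    # tends to 0

def sup_section_error_of_e(m, K=100):
    """sup_{k <= K} |e_k - e^[m]_k| = 1 for every m, since e_k = 1 for all k > m."""
    return max(1.0 if k > m else 0.0 for k in range(K))

print([sup_section_error_of_e(m) for m in (0, 10, 50)])              # stays equal to 1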

For finite-dimensional spaces, the concepts of Schauder and algebraic bases coincide. In most cases of interest, however, the concepts differ. Every linear space has an algebraic basis, but there are linear spaces without a Schauder basis which we will soon see. We recall that a metric space (X, d) is said to be separable if it has a countable dense subset; this means there is a countable set A ⊂ X such that for all x ∈ X and all ε > 0 there is an element a ∈ A with d(x, a) < ε. The next result is well known from elementary functional analysis. Theorem 1.8 Every complex linear metric space with a Schauder basis is separable.


We close this section with two well-known important examples.
Example 1.5 The space ℓ∞ has no Schauder basis, since it is not separable.
Example 1.6 We put b^{(−1)} = e and b^{(k)} = e^{(k)} for k = 0, 1, .... Then the sequence (b^{(n)})_{n=−1}^∞ is a Schauder basis of c; more precisely, every sequence x = (x_k)_{k=0}^∞ ∈ c has a unique representation

x = ξe + ∑_{k=0}^∞ (x_k − ξ)e^{(k)}, where ξ = ξ(x) = lim_{k→∞} x_k.   (1.14)
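The representation (1.14) can be checked numerically; the sketch below (added for illustration) measures, on a long finite truncation, the sup-distance between x ∈ c and the partial sums ξe + ∑_{k≤m} (x_k − ξ)e^(k), which equals sup_{k>m} |x_k − ξ| and tends to 0.

# Illustration of (1.14): the partial sums of the basis expansion of x in c
# differ from x by sup_{k>m} |x_k - xi|, which tends to 0 as m grows.
def sup_error(x, xi, m):
    return max(abs(xk - xi) for xk in x[m + 1:])

x = [3.0 + 1.0 / (k + 1) for k in range(5000)]   # a convergent sequence with limit 3
print([round(sup_error(x, 3.0, m), 5) for m in (0, 10, 100, 1000)])   # decreases to 0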

1.3 Matrix Transformations into the Classical Sequence Spaces

Now we apply the results of the previous section to characterize the classes (X, Y) where X is an arbitrary FK space and Y is any of the spaces ℓ∞, c, c0 and ℓ1. Let (X, d) be a metric space, δ > 0 and x0 ∈ X. Then, we write

B̄_δ(x0) = B̄_{X,δ}(x0) = {x ∈ X : d(x, x0) ≤ δ}

for the closed ball of radius δ with its centre in x0. If X ⊂ ω is a linear metric space and a ∈ ω, then we write

‖a‖*_δ = ‖a‖*_{X,δ} = sup_{x ∈ B̄_δ(0)} |∑_{k=0}^∞ a_k x_k|,   (1.15)

provided the expression  on the right hand exists and is finite which is the case whenever the series ∞ k=0 ak x k converge for all x ∈ X (Theorem 1.3). Let us recall that a subset S of a linear space X is said to be absorbing if, for every x ∈ X , there is ε > 0 such that λx ∈ S for all scalars λ with |λ| ≤ ε. The next statement is well known. Remark 1.3 ([23, Remark 3.3.4 (j)]) Let (X, p) be a paranormed space. Then the open and closed neighbourhoods Nr (0) = {x ∈ X : p(x) < r } and N r (0) = {x ∈ X : p(x) ≤ r } of 0 are absorbing for all r > 0. The first result characterizes the class (X, ∞ ) for arbitrary F K spaces X . Theorem 1.9 Let X be an F K space. Then we have A ∈ (X, ∞ ) if and only if A∗δ = sup An ∗δ < ∞ for some δ > 0. n

(1.16)



Proof First, we assume that (1.16) is satisfied. Then the series A_n x converge for all x ∈ B̄_δ(0) and for all n, and Ax ∈ ℓ_∞ for all x ∈ B̄_δ(0). Since the set B̄_δ(0) is absorbing by Remark 1.3, we conclude that the series A_n x converge for all n and all x ∈ X, and Ax ∈ ℓ_∞ for all x ∈ X.
Conversely, we assume A ∈ (X, ℓ_∞). Then the map L_A : X → ℓ_∞ defined by
$$L_A(x) = Ax \quad\text{for all } x \in X \qquad(1.17)$$
is continuous by Theorem 1.4. Hence, there exist a neighbourhood N of 0 in X and a real δ > 0 such that B̄_δ(0) ⊂ N and ‖L_A(x)‖_∞ < 1 for all x ∈ N. This implies (1.16).

Let X and Y be Banach spaces. Then we use the standard notation B(X, Y) for the set of all bounded linear operators from X to Y. It is well known that B(X, Y) is a Banach space with the operator norm defined by ‖L‖ = sup{‖L(x)‖ : ‖x‖ = 1} for all L ∈ B(X, Y); we write X* for the continuous dual of X, that is, the space of all continuous linear functionals f on X, with the norm defined by ‖f‖ = sup{|f(x)| : ‖x‖ = 1} for all f ∈ X*.

Theorem 1.10 Let X and Y be B K spaces.
(a) Then (X, Y) ⊂ B(X, Y), that is, every A ∈ (X, Y) defines an operator L_A ∈ B(X, Y) by (1.17).
(b) If X has AK then B(X, Y) ⊂ (X, Y), that is, for each L ∈ B(X, Y) there exists A ∈ (X, Y) such that (1.17) holds.
(c) We have A ∈ (X, ℓ_∞) if and only if
$$\|A\|_{(X,\ell_\infty)} = \sup_{n} \|A_n\|_{X}^{*} = \sup_{n}\big(\sup\{|A_n x| : \|x\| = 1\}\big) < \infty; \qquad(1.18)$$
if A ∈ (X, ℓ_∞), then
$$\|L_A\| = \|A\|_{(X,\ell_\infty)}. \qquad(1.19)$$

Proof (a) This is Theorem 1.4. (b) Let L ∈ B(X, Y ) be given. We write L n = Pn ◦ L for all n, and put ank = (k) ) for all n and k. Let x = (xk )∞ L n (e k=0 ∈ X be given. Since X has AK , we have (k) Y is a B K space, itfollows that L n ∈ X ∗ for all n. x= ∞ k=0 x k e , and since ∞ (k) Hence, we obtain L n (x) = ∞ k=0 x k L n (e ) = k=0 ank x k = An x for all n, and so L(x) = Ax.  (c) This follows from Theorem 1.9 and the definition of A(X,∞ ) . In many cases, the characterizations of the classes (X, c) and (X, c0 ) can easily be obtained from the characterization of the class (X, ∞ ).
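For X any of the spaces c_0, c or ℓ_∞ with the sup-norm, the norm ‖A_n‖*_X in Theorem 1.10 (c) reduces to the ℓ_1-norm of the n-th row, so (1.18) becomes condition (1.1) of Theorem 1.23 below. The following hedged sketch (an illustrative aside with a finite truncation, not part of the original text) evaluates this quantity for the Cesàro matrix C_1.

```python
import numpy as np

# Row-sum norm sup_n sum_k |a_nk| for a truncation of the Cesaro matrix C_1,
# a_nk = 1/(n+1) for k <= n and 0 otherwise.

N = 200                                      # truncation size (an assumption)
C1 = np.tril(np.ones((N, N))) / np.arange(1, N + 1)[:, None]

row_sums = np.abs(C1).sum(axis=1)            # sum_k |a_nk| for each row n
print("sup_n sum_k |a_nk| =", row_sums.max())   # equals 1 for every truncation
```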



Theorem 1.11 Let X be an F K space with AD, and Y and Y_1 be F K spaces with Y_1 a closed subspace of Y. Then A ∈ (X, Y_1) if and only if A ∈ (X, Y) and Ae^(k) ∈ Y_1 for all k.

Proof First, we assume A ∈ (X, Y_1). Then Y_1 ⊂ Y implies A ∈ (X, Y), and e^(k) ∈ X for all k implies Ae^(k) ∈ Y_1 for all k.
Conversely, we assume A ∈ (X, Y) and Ae^(k) ∈ Y_1 for all k. We define the map L_A : X → Y by (1.17). Then Ae^(k) ∈ Y_1 implies L_A(φ) ⊂ Y_1. By Theorem 1.4, L_A is continuous, hence L_A(cl_X(φ)) ⊂ cl_Y(L_A(φ)). Since Y_1 is closed in Y, and φ is dense in the AD space X, we have L_A(X) = L_A(cl_X(φ)) ⊂ cl_Y(L_A(φ)) ⊂ cl_Y(Y_1) = cl_{Y_1}(Y_1) = Y_1 by Theorem 1.5.

The following result is sometimes referred to as improvement of mapping.

Theorem 1.12 (Improvement of mapping) Let X be an F K space, X_1 = X ⊕ e = {x_1 = x + λe : x ∈ X, λ ∈ C}, and Y be a linear subspace of ω. Then A ∈ (X_1, Y) if and only if A ∈ (X, Y) and Ae ∈ Y.

Proof First, we assume A ∈ (X_1, Y). Then X ⊂ X_1 implies A ∈ (X, Y), and e ∈ X_1 implies Ae ∈ Y.
Conversely, we assume A ∈ (X, Y) and Ae ∈ Y. Let x_1 ∈ X_1 be given. Then there are x ∈ X and λ ∈ C such that x_1 = x + λe, and it follows that Ax_1 = A(x + λe) = Ax + λAe ∈ Y.

We need the following lemma from [26] for the characterization of the class (X, ℓ_1). Although it is elementary, we provide its proof here.

Lemma 1.1 ([26]) Let a_0, a_1, ..., a_n ∈ C. Then the following inequality holds:
$$\sum_{k=0}^{n} |a_k| \le 4\cdot \max_{N\subset\{0,\dots,n\}}\Big|\sum_{k\in N} a_k\Big|. \qquad(1.20)$$

Proof First we consider the case when a0 , a1 , . . . , an ∈ R. We put N + = {k ∈ {0, . . . , n} : ak ≥ 0} and N − = {k ∈ {0, . . . , n} : ak < 0}, and obtain       n              |ak | =  a + a  ≤ 2 · max  ak  . N ⊂{0,...,n}    + k  − k k=0

k∈N

k∈N

k∈N

Now we assume a0 , a1 , . . . , an ∈ C. We write ak = αk + iβk for k = 0, 1, . . . , n and, for any subset N of {0, . . . , n}, we put    αk , y N = βk and z N = x N + i y N = ak . xN = k∈N

k∈N

Now we choose subsets Nr , Ni and N∗ of {0, . . . , n} such that

k∈N



  x N  = r

max

N ⊂{0,...,n}

  |x N |,  y Ni  =

max

N ⊂{0,...,n}

  |y N | and z N∗  =

max

N ⊂{0,...,n}

|z N |.

Then, we have for all subsets N of {0, . . . , n}         |x N |, |y N | ≤ z N∗  and x Nr  +  y Ni  ≤ 2 · z N∗  . Finally, it follows by the first part of the proof that n  k=0

|ak | ≤

n  k=0

|αk | +

n 

    |βk | ≤ x Nr  + x Ni 

k=0

      max  ak  . N ⊂{0,...,n}  

  ≤ 4 ·  z N∗  = 4 ·



k∈N
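The constant 4 in (1.20) is easy to probe numerically. The following randomized check is only an illustrative sketch (the sample size and dimension are assumptions); it brute-forces all subsets for small n.

```python
import random, itertools

# Randomized check of the inequality (1.20): sum_k |a_k| <= 4 * max_N |sum_{k in N} a_k|.

random.seed(0)
n = 8                                              # small, so all 2^n subsets are feasible
for trial in range(1000):
    a = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    lhs = sum(abs(z) for z in a)
    rhs = 0.0
    for r in range(n + 1):
        for N in itertools.combinations(range(n), r):
            rhs = max(rhs, abs(sum(a[k] for k in N)))
    assert lhs <= 4 * rhs + 1e-12
print("inequality (1.20) held in all trials")
```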

Now we give the characterization of the class (X, 1 ) for arbitrary F K spaces. The proof is similar to that of Theorem 1.9. Theorem 1.13 ([20, Satz 1]) Let X be an F K space. Then A ∈ (X, 1 ) if and only if  ∞ ∗      ank (1.21) A(X,1 ) = sup   < ∞ for some δ > 0.   N ⊂ N0 N finite

n∈N

k=0 X,δ

Proof (i) First we show the sufficiency of the condition in (1.21). We assume that (1.21) is satisfied. Then the series An x converge for all x ∈ Bδ [0] and for all n. Let m ∈ N0 be given. Then we have by Lemma 1.1      ∞    |An x| ≤ 4 · max  ank xk  Nm ⊂{0,...,m}   n=0 n∈Nm k=0  ⎛ ⎞     ∞   ⎝ ⎠ = 4 · max  ank xk  Nm ⊂{0,...,m}   k=0 n∈Nm ⎛ ⎞∞ ∗       ≤ 4 · max ⎝ ank ⎠   Nm ⊂{0,...,m}   n∈Nm

m 

k=0 X,δ

≤ 4 · A(X,1 ) < ∞, for all x ∈ B δ (0). Since m ∈ N0 was arbitrary, Ax ∈ 1 for all x ∈ B δ (0). Since the set B δ (0) is absorbing by Remark 1.3, the series An x converge for all n and all x ∈ X , and Ax ∈ 1 for all x ∈ X . Therefore, we have A ∈ (X, 1 ). Thus, we have shown the sufficiency of the condition in (1.21)


(ii)


Now we show the necessity of the condition in (1.21). We assume A ∈ (X, 1 ). Then the series An x converge for all n and all x ∈ X , and L A is continuous by Theorem 1.4. Hence there exist a neighbourhood U of 0 in X and a real δ > 0 such that B δ (0) ⊂ U and L A (x)1 < 1 for all x ∈ U . Since ∞              ank  =  An x  ≤ An x1 < ∞      k=0

n∈N

n∈N

for all finite subsets N of N0 , we conclude that the condition in (1.21) holds.  Remark 1.4 It follows from the proof of Theorem 1.13 that if A ∈ (X, 1 ), where X is an F K space, then A(X,1 ) ≤ L A  ≤ 4 · A(X,1 ) .

(1.22)

1.4 Multipliers and Dual Spaces Here we consider the so-called multipliers and β-duals, the latter of which are of greater interest than the continuous duals in the theory of matrix transformations. They naturally arise in the characterizations of matrix transformations in connection with the convergence of the series An x. We also consider α-, γ -, functional and continuous duals of sequence spaces and some relations between them. Finally, we recall an important result, Theorem 1.22, that connects the properties of a matrix A with those of its transpose A T . This theorem has several applications in the characterizations of matrix transformations considered in Sect. 1.5. Let cs and bs denote the sets of all convergent and bounded series. The β-duals of sequence spaces are special cases of multiplier spaces. Definition 1.5 Let X and Y be subsets of ω. The set   M(X, Y ) = a ∈ ω : a · x = (ak xk )∞ k=0 ∈ Y for all ∈ X is called the multiplier space of X in Y . Special cases are X α = M(X, 1 ), X β = M(X, cs) and X γ = M(X, bs), the α-, β- and γ -duals of X . The following simple results are fundamental, and will frequently be used in the sequel. Proposition 1.1 Let X , X˜ , Y and Y˜ be subsets of ω, and {X δ : δ ∈ I }, where I is an indexing set, be a collection of subsets of ω. Then, we have



(i) Y ⊂ Y˜ implies M(X, Y ) ⊂ M(X, Y˜ ); (ii) X ⊂ X˜ implies M( X˜ , Y ) ⊂ M(X, Y ); (iii) X ⊂ M(M(X, Y ), Y ); (iv) M(X,  Y ) = M(M(M(X, Y ), Y ), Y );   Xδ, Y = M(X δ , Y ). (v) M δ∈I

δ∈I

The following result is an immediate consequence of Proposition 1.1. Corollary 1.3 ([23, Corollary 9.4.3]) Let X and X˜ be subsets of ω, {X δ : δ ∈ I }, where I is an indexing set, be a collection of subsets of ω, and † denote any of the symbols α, β or γ . Then, we have (ii) X ⊂ X˜ implies X˜ † ⊂ X † ;

(i) X α ⊂ X β ⊂ X γ ; (iii) X ⊂ X †† = (X † )† ;

(v)

(iv) X † = X ††† ;  

† Xδ

δ∈I

=



X δ† .

δ∈I

Example 1.7 ([13, Lemma 1]) We have (i)M(c0 , c) = ∞ ; (ii)M(c, c) = c; (iii)M(∞ , c) = c0 . Proof (i) If a ∈ ∞ , then a · x ∈ c for all x ∈ c0 , and so ∞ ⊂ M(c0 , c). Conversely, we assume a ∈ / ∞ . Then there is a subsequence (ak( j) )∞ j=0 of the sequence a such that |ak( j) | > j + 1 for all j = 0, 1, . . . . We define the sequence x by ⎧ j ⎨ (−1) (k = k( j)) xk = ( j = 0, 1, . . . ). (1.23) ak( j) ⎩ 0 (k = k( j)) Then, we have x ∈ c0 and ak( j) xk( j) = (−1) j for all j = 0, 1, . . . , hence ax ∈ / c. This shows M(c0 , c) ⊂ ∞ . (ii) If a ∈ c, then a · x ∈ c for all x ∈ c, and so c ⊂ M(c, c). Conversely, if a ∈ M(c, c), then a · x ∈ c for all x ∈ c, in particular, for x = e ∈ c, a · e = a ∈ c, and so M(c, c) ⊂ c. (iii) If a ∈ c0 , then a · x ∈ c for all x ∈ ∞ , and so c0 ⊂ M(∞ , c0 ). Conversely, we assume a ∈ / c0 . Then there are a real b > 0 and a subsequence of the sequence a such that |ak( j) | > b for all j = 0, 1, . . . . We define (ak( j) )∞ j=0 the sequence x as in (1.23). Then, we have x ∈ ∞ and ak( j) xk( j) = (−1) j for all  j = 0, 1, . . . , hence a ∈ / M(∞ , c). This shows M(∞ , c) = c0 . Example 1.8 Let † denote any of the symbols α, β or γ . Then we have ω† = φ, φ † = ω, c0† = c† = †∞ = 1 , †1 = ∞ and †p = q (1 < p < ∞; q = p/( p − 1)).
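The construction (1.23) in the proof of Example 1.7 (i) can be made concrete; the short sketch below is only an illustration with a particular unbounded sequence a chosen for the demo (an assumption), not part of the original proof.

```python
# Witness sequence from (1.23): if a is unbounded, put x_{k(j)} = (-1)^j / a_{k(j)}.

N = 20
a = [(k + 1) ** 2 for k in range(N)]       # an unbounded sequence (assumption for the demo)
x = [(-1) ** k / a[k] for k in range(N)]   # here every index serves as k(j), i.e. k(j) = j

print("x_k    :", [round(t, 4) for t in x[:6]], "... -> 0, so x lies in c_0")
print("a_k x_k:", [a[k] * x[k] for k in range(6)], "... oscillates, so a.x is not in c")
```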



Another dual space frequently arises in the theory of sequence spaces.

Definition 1.6 Let X ⊃ φ be an F K space. Then the set
$$X^{f} = \{(f(e^{(n)}))_{n=0}^{\infty} : f \in X^{*}\}$$

is called the functional dual of X . Many of the results in the remainder of this section and their proofs can also be found in [33, Sects. 7.2 and 8.3]. The following results, Theorems 1.14, 1.15, 1.19 and 1.20 and Examples 1.9, 1.10 and 1.13, concern the relations between the various duals; they are not absolutely essential, but interesting in themselves and included for completeness sake. Along with Theorem 1.21 the theorems in the previous sentence are needed to prove Theorem 1.22. We start with a relation between the β- and continuous duals of F K spaces. The first part of the next result is Theorem 1.3.

Theorem 1.14 ([23, Theorem 9.5.2]) Let X ⊃ φ be an F K space. Then X β ⊂ X ; this means that there is a linear one-to-one map T : X β → X . If X has AK then T is onto. Theorem 1.15 ([23, Theorem 9.5.5]) (a) Let X ⊃ φ be an F K space. Then we have X f = (cl X (φ)) f . (b) Let X, Y ⊃ φ be F K spaces. If X ⊂ Y , then X f ⊃ Y f . If X is closed in Y , then Xf = Y f. It might be expected from X ⊂ X †† in Part (iii) of Corollary 1.3 that X is contained in X f f , but Example 1.9 will show that this is not the case, in general. We will, however, see in Theorem 1.20 that X ⊂ X f f for B K spaces with AD. If X is a linear space, S is a subset of X and y ∈ X \ S, then we write S ⊕ y = {x : x = s + λy for some s ∈ S and some scalar λ}. Example 1.9 ([23, Example 9.5.6]) Let X = c0 ⊕ z with z unbounded. Then X is a B K space, X f = 1 and X f f = ∞ , so X ⊂ X f f . Theorem 1.16 ([23, Theorem 9.5.7]) Let X ⊃ φ be an F K space. (a) We have X β ⊂ X γ ⊂ X f . (b) If X has AK , then X β = X f . (c) If X has AD, then X β = X γ . A relation between the functional and continuous duals of an F K space is given by the next result. Theorem 1.17 ([23, Theorem 9.5.8]) Let X ⊃ φ be an F K space. (a) Then the map q : X → X f given by q( f ) = ( f (e(k) ))∞ k=0 is onto. Moreover, if T : X β → X denotes the map of Theorem 1.14, then q(T a) = a for all a ∈ X β . (b) Then X f = X , that is, the map q of Part (a) is one-to-one, if and only if X has AD.



Example 1.10 ([23, Example 9.5.9]) We have cβ = c f = 1 . The map T of Theorem 1.14 is not onto. We consider lim ∈ X . If there were a ∈ X f with lim a =  ∞ (k) = 0, hence lim x = 0 for all k=0 ak x k , then it would follow that ak = lim e x ∈ c, contradicting lim e = 1. Also then map q of Theorem 1.17 is not onto, since q ◦ lim= 0. It will turn out in Theorems 1.18 and 1.19 that the multiplier spaces and the functional duals of B K spaces are again B K spaces. These results do not extend to F K spaces, in general, as we will see in Example 1.12. Theorem 1.18 ([23, Theorem 9.4.6]) Let X ⊃ φ and Y ⊃ φ be B K spaces. Then Z = M(X, Y ) is a B K space with z = sup x · z for all z ∈ Z . x∈S X

We obtain as an immediate consequence of Theorem 1.18. Corollary 1.4 ([23, Corollary 9.4.7]) The α-, β- and γ -duals of a B K space X are B K spaces with aα = sup a · x1 = sup x∈S X

x∈S X

∞ 

|ak xk |

for all a ∈ X α ,

k=0

 n      = sup sup  ak xk  for all a ∈ X β , X γ .   n x∈S X 

and aβ = sup abs x∈S X

k=0

Furthermore, X β is a closed subspace of X γ . Example 1.11 ([23, Example 9.5.3 and (4.50) and (4.56) in Theorem 4.7.2 for c∗ ]) The continuous duals c0∗ , ∗1 and ∗p for 1 < p < ∞ are norm isomorphic with 1 , ∞ and q where q = p/( p − 1), respectively. ∞ χ : c∗ → We also have f ∈ c∗ if and only ∞if f (x)(k)= χ ( f ) · ξ + k=0 ak xk where (k) C is defined by χ ( f )= f (e) − k=0 f (e ), ξ = limk→∞ xk and a = ( f (e ))∞ k=0 ∈ 1 , and we have (1.24)  f  = |χ ( f )| + a1 . β

Finally, we have a∗∞ = a1 for all a ∈ ∞ . Theorem 1.18 fails to hold for F K spaces, in general. Example 1.12 ([23, Example 9.4.8]) The space ω is an F K space, and ωα = ωβ = ωγ = φ, but φ has no Fréchet metric. Theorem 1.19 ([23, Theorem 9.5.10]) Let X ⊃ φ be a B K space. Then X f is a B K space.



The next theorem gives a sufficient condition for X ⊂ X f f to hold. Theorem 1.20 ([23, Theorem 9.5.11]) Let X ⊃ φ be a B K space. Then X f f ⊃ cl X (φ). Hence, if X has AD, then X ⊂ X f f . The condition that X has AD is not necessary for X ⊂ X f f , in general. Example 1.13 ([23, Example 9.5.12]) Let X = c0 ⊕ z with z ∈ ∞ . Then, we have f X f f = 1 = ∞ ⊃ X , but X does not have AD. The next results connect the properties of A with those of its transpose A T . This is related to but different from the adjoint map. They are useful in the characterizations of some classes of matrix transformations between classical sequence spaces. Theorem 1.21 ([23, Theorem 9.6.1]) Let X ⊃ φ be an F K space and Y be any set of sequences. If A ∈ (X, Y ) then A T ∈ (Y β , X f ). If X and Y are B K spaces and Y β has AD then we have A T ∈ (Y β , cl X f (X β )). Theorem 1.22 ([23, Theorem 9.6.3]) Let X and Z be B K spaces with AK and Y = Z β . Then we have (X, Y ) = (X ββ , Y ); furthermore, A ∈ (X, Y ) if and only if A T ∈ (Z , X β ).

1.5 Matrix Transformations Between the Classical Sequence Spaces

We apply the results of the previous sections to give necessary and sufficient conditions on the entries of a matrix A to be in any of the classes (X, Y) where X and Y are any of the classical sequence spaces ℓ_p (1 ≤ p ≤ ∞), c_0 and c, with the exceptions of (ℓ_p, ℓ_r) where both p, r ≠ 1, ∞ (the characterizations are not known), and of (ℓ_∞, c) (Schur's theorem) and (ℓ_∞, c_0) [29, 21 (21.1)] (no functional analytic proof seems to be known). The proof is purely analytical and uses the method of the gliding hump. A proof of Schur's theorem can be found, for instance, in [19] or [33]. The class (ℓ_2, ℓ_2) was characterized by Crone ([6] or [27, pp. 111–115]). We characterize the class (ℓ_2, ℓ_2) in Sect. 1.6.

Theorem 1.23 Let 1 < p, r < ∞, q = p/(p − 1) and s = r/(r − 1). Then the necessary and sufficient conditions for A ∈ (X, Y) can be read from the following table:

From \ To    ℓ_∞     c_0     c       ℓ_1     ℓ_p
ℓ_∞          1.      2.      3.      4.      5.
c_0          6.      7.      8.      9.      10.
c            11.     12.     13.     14.     15.
ℓ_1          16.     17.     18.     19.     20.
ℓ_r          21.     22.     23.     24.     unknown



1., 2., 3. (1.1) where  (1.1) supn ∞ k=0 |ank | < ∞ 4. (4.1) where (4.1) supn,k |ank | < ∞ 5.

(5.1) where (5.1) supn

∞ 

|ank |q < ∞

k=0

6. 7. 8. 9.

(6.1) where  (6.1) limn→∞ ∞ k=0 |ank | = 0 (1.1), (7.1) where (7.1) limn→∞ ank = 0 for every k (1.1), (7.1), (8.1) where (8.1) limn→∞ ∞ k=0 ank = 0 (4.1), (7.1)

10. 11.

(5.1), (7.1) (11.1), (11.2) ∞ where (11.1) k=0 |ank | converges uniformly in n (11.2) limn→∞ ank = αk exists for every k 12. (1.1), (11.2) 13. (1.1), (11.2), (13.1) where (13.1) limn→∞ ∞ k=0 ank = α exists 14. (4.1), (11.2) 15. (5.1), (11.2) 16., 17., 18. (16.1) where   (16.1) sup ( ∞ n∈N ank |) < ∞ k=0 | N ⊂ N0 N finite

19. 20.

(19.1) where  (19.1) supk ∞ n=0 |ank | < ∞ (20.1) where   q (20.1) sup ( ∞ k=0 | n∈N ank | ) < ∞ N ⊂ N0 N finite

21., 22., 23. (21.1) where   ∞ r (21.1) sup k∈K ank | < ∞ n=0 | K ⊂ IN0 K finite

24.

(24.1) where  r (24.1) supk ∞ n=0 |ank | < ∞.
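Before turning to the proof, a brief numerical illustration of the conditions in 13., that is, of the class (c, c): the Cesàro matrix C_1 with a_nk = 1/(n+1) for k ≤ n is the classical example of a matrix in (c, c). The sketch below is only an illustrative aside with a finite truncation (an assumption), not part of the original text.

```python
import numpy as np

# Conditions of class 13. in Theorem 1.23 for a truncation of the Cesaro matrix C_1:
# (1.1) bounded row sums, (11.2) convergent columns, (13.1) convergent row sums.

N = 500                                          # truncation size (an assumption)
A = np.tril(np.ones((N, N))) / np.arange(1, N + 1)[:, None]

print("(1.1)  sup_n sum_k |a_nk|        =", np.abs(A).sum(axis=1).max())   # stays 1
print("(11.2) a_nk for k = 0, large n   =", A[-1, 0])                      # tends to 0
print("(13.1) sum_k a_nk for large n    =", A[-1].sum())                   # tends to 1
```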

Proof The condition (1.1) in 1. and 2., and the conditions (4.1) and (5.1) in 4. and 5. follow from Part (c) in Theorem 1.10 and Example 1.11. Since c0 ⊂ c ⊂ ∞ , we also obtain the condition in (1.1) of 3.. Theorem 1.11 and the conditions in 2., 4. and 5. yield those in 7. and 12., 9. and 14., and in 10. and 15.. Now the conditions in 8. and 13. follow from those in 7. and 8. by Theorem 1.12.



The conditions in 6. and 11. are in [29, 21 (21.1)] and [33, Theorem 1.18 (i)]]; no functional analytic proofs seem to be known in these cases. The conditions (16.1) in 16. and 17., and (20.1) in 20. follow from Theorem 1.13 and Example 1.11. Since c0 ⊂ c ⊂ ∞ , we also obtain the condition (16.1) in 18.. Furthermore, the condition (19.1) in 19. follows from Theorem 1.22 with Z = c0 , β hence Y = c0 = 1 , and 1.. The conditions (21.1) in 21. and 22., and (24.1) in 24. follow from Theorem 1.22 and those in 20. and 1.. Since c0 ⊂ c ⊂ ∞ , we also obtain the condition in (21.1) in 23..  Remark 1.5 We would obtain from Theorem 1.13 and Example 1.11 that A ∈ (1 , 1 ) if and only if        (1.25) sup sup  ank  < ∞.   N ⊂ N0 k N finite

n∈N

It is easy to see that conditions (16.1) of Theorem 1.23 and in (1.25) are equivalent. Remark 1.6 The results of Theorem 1.23 can be found in [29, 33]; many references to the original proofs can be found in [29]. The characterizations of Parts 1., 2. and 3. of Theorem 1.23 are given in [33, Example 8.4.5A, p. 129] or [29, (1.1) in 1.], of 4. in [33, Example 8.4.1A, p. 126] or [29, (6.1) in 6.], of 5. [33, Example 8.4.5D, p. 129] or [29, (5.1) in 5.], of 6. in [33, Theorem 1.7.19, p. 17] or [29, (21.1) in 21.], of 7. in [33, Example 8.4.5A, p. 129] or [29, (1.1), (11.2) in 23.], of 8. in [33, Example 8.4.5A, p. 129] or [29, (1.1), (11.2), (22.1) in 22.], of 9. in [33, Example 8.4.1A, p. 126] or [29, (6.2), (11.2) in 28.], of 10. in [33, Example 8.4.6D, p. 129] or [29, (5.1), (11.2) in 27.] of 11. in [33, Theorem 1.17.8, p.15] or [29, (10.1), (10.4) in 10.], of 12. in [33, Example 8.4.5A, p. 129] or [29, (1.1), (10.1) in 12], of 13. in [33, Example 8.4.5A, p. 129 or Theorem 1.3.6, p. 6] or [29, (1.1), (10.1), (11.1) in 11.], of 14. in [33, Example 8.4.1A, p. 126] or [29, (6.1), (10.1) in 17.], of 15. in [33, Example 8.4.5D, p. 129] or [29, (5.1), (10.1) in 16.], of 16., 17. and 18. in [33, Example 8.4.9A, p. 130] or [29, (72.2) in 72.], of 19. in [33, Example 8.4.1D, p. 126] or [29, (77.1) in 77.], of 20. in [33, Example 8.4.1D, p. 126] or [29, (76.1) in 76.], of 21., 22. and 23. in [33, Example 8.4.8A, p. 131] or [29, (63.1) in 63.] and of 24. in [33, Example 8.4.1D, p. 126] or [29, (68.1) in 68.]. The conditions for the class (∞ , c0 ) in [33, Theorem 1.7.19, p. 17] are ∞ 

|ank | converges uniformly in n

(i)

k=0

and (7.1) in 7. of Theorem 1.23. Two more sets of equivalent conditions for the class (∞ , c) are given in [33, Theorem 1.17.8, p.15], they are



⎧ ∞  ⎪ ⎪ 11. (11.2) |ank | < ∞ for all n, ⎪ ⎪ ⎪ k=0 ⎪ ⎨ ∞ |αk | < ∞ with αk from 11. (11.2) and ⎪ k=0 ⎪ ⎪ ∞ ⎪  ⎪ ⎪ |ank − αk | = 0 ⎩ lim

(ii)

n→∞ k=0

or ⎧ ⎨ 11. (11.2) and ∞ ∞   |ank | = |αk |, both series being convergent. ⎩ lim n→∞ k=0

(iii)

k=0

Alternative equivalent conditions for the classes (∞ , 1 ), (c0 , 1 ) and (c, 1 ) are given in [33, Example 8.4.9A, p. 130], namely,      ank  < ∞ sup    K ⊂ IN0 n→∞

(iv)

k∈K

K finite

and in [29, 72.], namely, sup N , K ⊂ N0 N , K finite

       ank  < ∞,   

(72.1)

n∈N k∈K

or (72.3), which is (iv), or   ∞      ank  converges for all K ∗ ⊂ N0 .   

(72.4)

n=0 k∈K ∗

β

If we apply Theorem 1.22 with X = Z = c0 and Y = c0 = 1 then we have A ∈ (c0 , 1 ) if and only if A T ∈ (c0 , 1 ), that is, the conditions are symmetric in n and k, and (iv) follows in the same way as 16. (16.1). It can easily be shown that the other conditions given here are equivalent. An alternative condition for the class (1 ,  p ) is given in [33, Example 8.4.1D, p. 126], namely, sup k

∞ 

|ank | p < ∞,

(v)

n=0

which can be obtained by applying Theorem 1.22 with X = 1 , Z = q and Y =  p and then using 5 (5.1) with A and p replaced by A T and q.



Finally, an alternative condition for the classes (c0 , r ), (c, r ) and (∞ , r ) is given in [29, 63.], namely,   ∞      ank  converges for all K ∗ ⊂ N0 .   

(63.1)

n=0 k∈K ∗

1.6 Crone’s Theorem In this section, we prove Crone’s theorem which characterizes the class (2 , 2 ). The conditions are substantially different from those for the characterizations of the classes in the previous section. We will need some concepts and results from the theory of Hilbert spaces. We recall that if A = (ank ) is a (finite or infinite) matrix of complex entries, then ∗ ) denotes its conjugate complex transpose, that is, A∗ = (ank ∗ = a¯ kn for all n and k. ank

An eigenvalue of an n × n matrix A is a number λ for which Ax = λx for some vector x = 0; the vector x is called eigenvector of A. It is known that every n × n matrix A has at least one eigenvalue. Although the next result is well known, we include its proof for the reader’s convenience. Lemma 1.2 The norm of an n × n matrix A as a map from n-dimensional Euclidean space into itself is equal to the square root of the largest eigenvalue of A∗ A. Proof (i) First we show that each eigenvalue of A∗ A is non-negative. Let λ be an eigenvalue of A∗ A. Then there exists a vector x = 0 such that (A∗ A)x = λx. Since x = 0 it follows that λ= (ii)

1 1 ∗ ∗  Ax2 x A Ax = · x ∗ λx = ≥ 0. 2 x2 x2 x22

Now we prove the conclusion of the lemma. It is a well-known result from linear algebra that since A∗ A is Hermitian, there is a matrix C such that C −1 = C ∗ and C −1 A∗ AC = D,



where D is a diagonal matrix with the eigenvalues λk of A∗ A on its diagonal. It follows that x ∗ A∗ Ax x ∗ CC −1 A∗ ACC −1 x (C ∗ x)∗ D(C ∗ x) = sup = sup 2 2 x x x2 x=0 x=0 x=0 ⎛ ⎞ n λk |yk |2 ∗ ⎜ k=1 ⎟ y Dy ⎟ ≤ λmax , = sup = sup ⎜ (1.26) n ⎝ ⎠ 2  y=0 y y=0 |yk |2

A2 = sup

k=1

where we put y = C ∗ x and λmax is the largest eigenvalue of A∗ A. Let xmax be an eigenvector of λmax . Then, we have ∗ A∗ Axmax = λmax xmax 2 , xmax

hence λmax ≤ A2 .

(1.27)

It follows from (1.26) and (1.27) that A2 = λmax . Finally, since λmax ≥ 0 by Part (i) of the proof, we conclude A =

λmax .
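A hedged numerical aside (with a finite random matrix as an assumption, not part of the original text): Lemma 1.2 says that ‖A‖ is the square root of the largest eigenvalue of A*A, and the same diagonal entries of powers of A*A reappear in the quantity C of Crone's theorem later in this section. Both can be compared with the spectral norm computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
B = A.conj().T @ A                                   # the matrix A*A (Hermitian, positive semi-definite)

lam_max = np.linalg.eigvalsh(B).max()                # largest eigenvalue of A*A
print("sqrt(lambda_max) =", np.sqrt(lam_max))
print("||A|| (numpy)    =", np.linalg.norm(A, 2))    # agrees with the line above

Bm, C = np.eye(8, dtype=complex), 0.0
for m in range(1, 80):                               # truncation in m (an assumption)
    Bm = Bm @ B
    C = max(C, np.real(Bm.diagonal()).max() ** (1.0 / (2 * m)))
print("C = sup_m max_n [((A*A)^m)_nn]^(1/2m) ≈", C)  # approaches ||A|| as m grows
```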



In the sequel, the norm of a matrix will always mean its norm as a map in Euclidean or Hilbert space. [m] ∞ [m] = (ank )n,k=0 If A = (ank )∞ n,k=0 is an infinite matrix and m ∈ N0 , then A denotes the matrix with  ank (0 ≤ n, k ≤ m) [m] ank = 0 (n, k ≥ m + 1). Remark 1.7 One may regard A[m] either as a map from 2 into 2 , or from m + 1dimensional space into itself. The norm in either case is the same. ∞ Proof Let x = (x k )k=0 ∈ 2 be given, and y = (x 0 , x 1 , . . . , x m ). We obtain for the [m] m-section x = nk=0 xk e(k) of the sequence x

x [m]

∗

A[m]

∗

A[m] x [m] =

∞ ! 

n=0

=

m  n=0

A[m] x [m]

∗ " n

A[m] x [m]

 n

⎡⎛ ⎞ ⎤ m m ! "∗   [m] [m] ⎣⎝ x¯k a jn ⎠ ank x¯k ⎦ j=0

k=0

1.6 Crone’s Theorem

29

=

m 

⎡⎛ ⎞ ⎤ m m ! "∗   [m] ⎣⎝ ⎠ x¯ j a [m] ank xk ⎦ jn

n=0

=

j=0

m  n=0



=x

A[m] x

A

 [m] ∗

k=0

∗ n

A[m] x

 n

A[m] x.

We also obtain for the (m + 1) × (m + 1) matrix B = (bnk )m n,k=0 with bnk = ank (n, k = 0, 1, . . . , m)

∗ y ∗ B ∗ By = x ∗ A[m] A[m] x, hence B = A[m] .



We need some lemmas. We recall that, since 2 is a B K space with AK , by Theorem 1.10 (a) and (b), B(2 , 2 ) can be identified with the class (2 , 2 ), in particular, if A ∈ (2 , 2 ), then we write L A ∈ B(2 , 2 ) for the operator with L A (x) = Ax for all x ∈ 2 , and A = L A . Lemma 1.3 We have A ∈ (2 , 2 ) if and only if supm L A[m]  < ∞; in this case, L A  = sup L A[m]  and lim A[m] x = Ax. m→∞

m

Proof (i) First, we show that A ∈ (2 , 2 ) implies   sup  A[m]  < ∞.

(1.28)

m

Let A ∈ (2 , 2 ). Then there exists a constant M such that Ax2 = L A (x)2 ≤ Mx2 for all x ∈ 2 . Let x ∈ B¯ 2 = {x ∈ 2 : x2 ≤ 1} and m ∈ N0 be given. Then we have x [m] ∈ B¯ 2 and ∞ ∞   [m] 2  [m] 2 A x = |A x| ≤ |An x [m] |2 ≤ A2 · x [m] 22 ≤ M 2 , n 2 n=0

hence

This implies

n=0

 [m]   A x  ≤ M. 2    [m]   A  = sup  A[m] x  ≤ M for all m, 2 x∈ B¯ 2

and (1.28) is an immediate consequence.


(ii)


Now we show that the condition in (1.28) implies A ∈ (2 , 2 ). We assume that the condition in (1.28) is satisfied, that is, supm A[m]  = M < ∞. Let ε > 0 and x ∈ 2 be given, and x [ j] for j ∈ N0 be the j section of the sequence x. We choose an integer k0 such that   [k ] x 2 − x [k1 ]  < 2

ε for all k2 > k1 > j. M +1

(1.29)

Let n be an arbitrary integer. We choose k0 ≥ n. Then it follows from (1.29) that  k  2       [k ] [k ]    1]   2 x 2 − x [k1 ]  ank xk  =  An[k2 ] x − A[k  n x = An   k=k1 +1     ε ≤  A[k2 ]  · x [k2 ] − x [k1 ] 2 < M · k1 > k0 . β Thus, An x exists for all n so that An = (ank )∞ k=0 ∈ 2 = 2 for all n. Since e(k) ∈ B¯ 2 for k = 0, 1, . . . , it follows that  m 

1/2 |ank |

2

∞ 1/2    2   [m] (k) A e  = =  A[m] e(k) 2 ≤  A[m]  ≤ M n

n=0

n=0

for all m ≥ k, hence



∞ 

1/2 |ank |

2

≤ M,

n=0

and so also Ak = (ank )∞ n=0 ∈ 2 for the columns of the matrix A. This implies  [m] (k)   A e − Ae(k)  = 2



∞ 

1/2 |ank |

2

→ 0 (m → ∞),

n=m+1

that is,

lim A[m] e(k) = Ae(k) for k = 0, 1, . . . .

m→∞

Let j and m be arbitrary integers. Then we have for j > m  j 2 ⎞1/2 m        ⎝ ank xk  ⎠ ≤  A[ j]  · x2 ≤ M · x2 .    ⎛

n=0 k=0

β

It follows from An ∈ 2 for n = 0, 1, . . . that, for all m,

(1.30)


 m 


1/2 |An x|2

n=0

⎛  j 2 ⎞1/2 m      = lim ⎝ ank xk  ⎠ ≤ M · x2 ,  →∞   n=0 k=0

and so Ax2 =

∞ 

1/2 |An x|

≤ M · x2 ,

2

(1.31)

n=0

thus Ax ∈ 2 . This completes Part (ii) of the proof. (iii) Now we show that supm A[m]  < ∞ implies lim A[m] x = Ax for all x ∈ 2 .

(1.32)

m→∞

We assume that supm A[m]  < ∞. Then A ∈ (2 , 2 ) by Part (ii) of the proof, j and so L A ∈ B(2 , 2 ). Let x ∈ φ, that is, x = k=0 xk e(k) for some j ∈ N0 . It follows from (1.30) that lim A

m→∞

[m]

x = lim

m→∞

=

j 

 j 

xk A

[m] (k)

e

=

k=0

j  k=0

xk lim A[m] e(k) m→∞

xk Ae(k) = Ax.

k=0

Since 2 has AK , φ is dense in 2 , and since L A is continuous on 2 , we obtain (1.32). (iv) Finally, we show that supm A[m]  < ∞ implies   A = sup  A[m]  .

(1.33)

m

We assume supm A[m]  < ∞. First, it follows from Part (ii) of the proof that A ∈ (2 , 2 ), and so we have for all x ∈ 2 and all m ∈ N0    [m]     A x  =  A[m] x [m]  ≤  Ax [m]  ≤ A · x2 , 2 2 2 hence

  sup  A[m]  ≤ A. m

By (1.31), we also have   A ≤ sup  A[m]  ≤ A, m



and so (1.33) follows. This concludes the proof of the lemma.



Lemma 1.4 We have A ∈ (2 , 2 ) if and only if A∗ A exists and  [m]    sup  A∗ A  < ∞.

(1.34)

m

Proof (i) First, we assume A ∈ (2 , 2 ). Then we A∗ ∈ (2 , 2 ) for the adjoint matrix, so A∗ A exists and A∗ A ∈ (2 , 2 ). Now (1.34) follows by Lemma 1.3. (ii) Conversely, we assume that (1.34) is satisfied. (k) for some m ∈ N0 . Then, we Let x ∈ φ with x2 ≤ 1, that is, x = m k=0 x k e have ∞ ! 

[m] ∗ ∗ " ∗

x Ax22 = x [m] A∗ Ax [m] = A A j x [m]

=

 m ∞   j=0

=

j=0

x¯k ak∗j

 m  n=0

k=0

m 

x¯k

!

A∗ A

[m] " kn

k,n=0

a jn xn

j

=

m 

⎛ ⎞ ∞  ⎝ ak∗j a jn ⎠ x¯k xn

k,n=0

j=0

[m] x n = x ∗ A∗ A x

  [m]  [m]      ≤ x ∗ 2 ·  A∗ A x  ≤  A∗ A  · x2 2   [m]  [m]      ≤  A∗ A  ≤ sup  A∗ A  < ∞. m

Thus, A defines a bounded linear operator L : (φ, ‖·‖_2) → ℓ_2, where L(x) = Ax for all x ∈ φ. Since φ is dense in ℓ_2, it is easy to see that L can be extended to an operator from all of ℓ_2 into ℓ_2. Now we establish the characterization of the class (ℓ_2, ℓ_2). Theorem 1.24 (Crone) ([6]) We have A ∈ (ℓ_2, ℓ_2) if and only if

and

A∗ A

m

exists for all m = 0, 1, . . .

' (

m  1/2m < ∞; C = sup sup A∗ A nn m

(1.35)

(1.36)

n

in this case, A = C. Proof (i) First, we show the necessity of the conditions in (1.35) and (1.36). Let A ∈ (2 , 2 ). This implies A∗ ∈ (2 , 2 ) and so (A∗ A)m exists for each m and maps 2 into itself. We obtain for all m, k = 0, 1, . . .





A∗ A

m  kk

∗ ∗ m (k) A A e , = e(k)

for if we put B = (A∗ A)m , then we have ∞

(k) ∗ (k) 

(k) ∗ e e j B j e(k) = Bk e(k) = bkk . Be = j=0

It follows that  

∗ m    (k) ∗ ∗ m (k)∗  

m     (k) ∗    A A = e e ≤ A A e    ·  A∗ A  · e(k) 2 kk 2  m  m ≤  A∗ A ≤  A∗  · Am = A2m . Since

A∗ A

 kk

=

∞ 

a ∗ k j ∗ a jk =

j=0

∞ 

a¯ jk a jk =

j=0

∞ 

|a jk |2 ≥ 0 for k = 0, 1, . . . ,

j=0

it follows that sup

)

A∗ A

m  *1/2m kk

k



m  1/2m = sup  A∗ A kk  ≤ A for m = 0, 1, . . . k

'

and

C = sup sup m

(ii)

)



A A

m  *1/2m

k

kk

( ≤ A.

Now we show the sufficiency of the conditions in (1.35) and (1.36). It is easy to see that ! 2 "[m] ! ∗ [m] "2 A∗ A − A A

(1.37)

is a Hermitian matrix with non-negative entries on its diagonal and it is known from linear algebra that the matrix in (1.37) is positive and semi-definite. Thus, we have '! (

∗ 2 "[m] ! ∗ [m] "2 ∗ A A ≥ 0 for all x ∈ 2 − A A x so that for x ∈ B¯ 2 x and



!



A A

[m] "2

x≤x



!



A A

2 "[m]

 !  ∗ 2 "[m]    x ≤ A A 

!     ∗ [m] "2  ! ∗ 2 "[m]   A A ≤ A A .    



Furthermore x∗ implies

!

A∗ A

[m] "2

x = x∗

!

A∗ A

[m] "∗

A∗ A

[m]

x

  ! !  ∗ [m] "2  

∗ [m] 

∗ 2 "[m]  2  = .   A A   A A  ≤ A A  

It follows by mathematical induction that    j  ! ∗ 2 j "[m]   ∗ [m] 2  for j = 1, 2, . . . . A A  A A  ≤  

(1.38)

The sum of the diagonal entries of an m × m Hermitian matrix is equal to the sum of the eigenvalues of the matrix. Thus, for an m × m Hermitian matrix B (for which the entries on the diagonal are non-negative), we have B = max{λ : λ is an eigenvalue of B} ≤ m · max bkk . k

We conclude from this and (1.38)  −j !  (2− j '   !

∗ 2 j "[m] 2 j "[m] 2  ∗ [m]   ∗ 2− j   A A ≤ A A A ≤ (m + 1) · max A     k

2− j

(m + 1)

kk

· C for j = 0, 1, . . . . 2

Since this holds for all j, it follows that    ∗ [m]   A A  ≤ C 2 for m = 0, 1, . . . and so A ∈ (2 , 2 ) by Lemma 1.4.



1.7 Remarks on Measures of Noncompactness The known characterizations of matrix transformations between the classical sequence spaces were given in Theorem 1.23. This was mainly achieved by applying the theory of F K and B K spaces; the fundamental result was Theorem 1.4 which states that matrix transformations between F K spaces are continuous. The characterization of compact matrix operators can be achieved by applying the theory of measures of noncompactness, in particular, the Hausdorff measure of noncompactness and its properties. In the remainder of this section, we list the essential concepts and and results in this field.

1.7 Remarks on Measures of Noncompactness


First, we recall that a linear operator L from Banach space X to a Banach space Y is said to be compact, or completely continuous, if D L = X for the domain D L of L and, for every bounded sequence (xn ) ∈ X , the sequence (L(xn )) has a convergent subsequence in Y . We write K(X, Y ) for the set of all compact operators from X to Y , and K(X ) = K(X, X ), for short. It is clear that a compact operator is bounded, that is, K(X, Y ) ⊂ B(X, Y ). Measures of noncompactness are also very useful tools in functional analysis, for instance, in metric fixed point theory and the theory of operator equations in Banach spaces. They are also used in the studies of functional equations, ordinary and partial differential equations, fractional partial differential equations, integral and integrodifferential equations, optimal control theory and in the characterizations of compact operators between Banach spaces. In the next section, we will give an axiomatic introduction to measures of noncompactness of bounded sets in complete metric spaces, and establish some of their most import properties. In particular, we study the Kuratowski, Hausdorff and separation measures of noncompactness. Furthermore, we prove the famous Goldenštein– Goh’berg–Markus theorem which gives an estimate for the Hausdorff measure of noncompactness of bounded sets in Banach spaces with a Schauder basis. The first measure of noncompactness, denoted by α, was defined and studied by Kuratowski [16] in 1930. In 1955, Darbo [7] used the function α to prove his fixed point theorem, which is a very important generalization of Schauder’s fixed point theorem, and includes the existence part of Banach’s fixed point theorem. Other important measures of noncompactness are the ball or Hausdorff measure of noncompactness χ introduced by Goldenštein, Goh’berg and Markus [39] and later studied by Goldenštein and Markus [40] in 1968, and the separation measure of noncompactness β introduced by Istr˘a¸tescu [11] in 1972. These measures of noncompactness are studied in detail and their use is discussed, for instance, in the monographs [2–4, 12, 17, 21–23, 30, 38].

1.8 The Axioms of Measures of Noncompactness In this section, we introduce the axioms of a measure of noncompactness on the class M X of bounded subsets of a complete metric space (X, d). It seems that the axiomatic approach is the best way of dealing with measures of noncompactness. It is possible to use several systems of axioms which are not necessarily equivalent. However, the notion of a measure of noncompactness was originally introduced in metric spaces. So we are going to give our axiomatic definition in this class of spaces. This approach is convenient for our purposes, and can be found in [21, 23, 30]. In the books [2, 3], for instance, two different patterns are provided for the axiomatic introduction of measures of noncompactness in Banach spaces. We are



going to work mainly with the Hausdorff measure of noncompactness in Banach spaces which satisfies the axioms of either axiomatic approach. We recall that a set S in a topological space is said to be relatively compact or pre-compact, if its closure S¯ is compact. Definition 1.7 ([23, Definition 7.5.1]) Let (X, d) be a complete metric space. A set function φ : M X → [0, ∞) is called a measure of noncompactness on M X , if it satisfies the following conditions for all Q, Q 1 , Q 2 ∈ M X : (MNC.1) φ(Q) = 0 if and only if Q is relatively compact (regularity); (MNC.2) φ(Q) = φ(Q) (invariance under closure); (MNC.3) φ(Q 1 ∪ Q 2 ) = max{φ(Q 1 ), φ(Q 2 )} (semi-additivity). The number φ(Q) is called the measure of noncompactness of the set Q. The following properties can easily be deduced from the axioms in Definition 1.7. By McX , we denote the subclass of all closed sets in M X . Proposition 1.2 ([23, Proposition 7.5.3]) Let φ be a measure of noncompactness on a complete metric space (X, d). Then φ has the following properties : Q 1 ⊂ Q 2 implies φ(Q 1 ) ≤ φ(Q 2 ) (monotonicity).

(1.39)

φ(Q 1 ∩ Q 2 ) ≤ min{φ(Q 1 ), φ(Q 2 )} for all Q 1 , Q 2 ∈ M X .

(1.40)

If Q is finite then φ(Q) = 0 (non-singularit y).

(1.41)

⎧ Generalized Cantor’s intersection property ⎪ ⎪ ⎪ ⎪ ⎨ If (Q n ) is a decreasing sequence of nonempty sets in McX and then the intersection limn→∞ φ(Q n ) = 0,  ⎪ ⎪ = Q n = ∅ Q ⎪ ∞ ⎪ ⎩ is compact.

(1.42)

Remark 1.8 If X is a Banach space then a measure of noncompactness may have some additional properties related to the linear structure of a normed space such as homogeneity, subadditivity, translation invariance and invariance under the passage to the convex hull (cf. (8)–(11) in Part of Theorem 1.25).

1.9 The Kuratowski and Hausdorff Measures of Noncompactness


1.9 The Kuratowski and Hausdorff Measures of Noncompactness

In this section, we recall the definitions and some properties of the Kuratowski and Hausdorff measures of noncompactness. As usual, diam(S) = sup_{x,y∈S} d(x, y) is the diameter of a set S in a metric space (X, d).

Definition 1.8 Let (X, d) be a complete metric space.
(a) The function α : M_X → [0, ∞) with α(Q) = inf{ε > 0 : Q ⊂

n 

 Sk , Sk ⊂ X, diam(Sk ) < ε (k = 1, 2, . . . , n ∈ N)

k=1

is called the Kuratowski measure of noncompactness (KMNC), and the real number α(Q) is called the Kuratowski measure of noncompactness of Q. (b) The function χ  : M X → [0, ∞) with χ(Q) = inf ε > 0 : Q ⊂

n 



Brk (xk ), xk ∈ X, rk < ε (k = 1, 2, . . . , n ∈ N

k=1

is called the Hausdorff or ball measure of noncompactness, and the real number χ (Q) is called the Hausdorff or ball measure of noncompactness of Q. Remark 1.9 Therefore, α(Q) is the infimum of all positive real numbers ε such that Q can be covered by a finite number of sets of diameters less than ε, and χ (Q) is the infimum of all positive real numbers ε such that Q can be covered by a finite number of open balls of radius less than ε. Similarly, χ (Q) is the infimum of all positive real numbers ε such that Q has a finite ε-net. It is not supposed in the definition of χ (Q) that the centres of the balls which cover Q belong to Q. If S is a subset of a linear space then co(S) =



{C : C ⊃ S convex }

denotes the convex hull of S. Theorem 1.25 ([30, Proposition II.2.3 and Theorem II.2.4] or [21, Lemma 2.11 and Theorem 2.12]) (a) Let (X, d) be a complete metric space and Q, Q 1 , Q 2 ∈ M X and ψ = α or ψ = χ . Then, we have



ψ(Q) = 0 if and only if Q is relatively compact

(1)

ψ(Q) = ψ(Q)

(2)

ψ(Q 1 ∪ Q 2 ) = max {ψ(Q 1 ), ψ(Q 2 )}

(3)

Q 1 ⊂ Q 2 implies ψ(Q 1 ) ≤ ψ(Q 2 )

(4)

ψ(Q 1 ∩ Q 2 ) ≤ min {ψ(Q 1 ), ψ(Q 2 )}

(5)

If Q is finite then ψ(Q) = 0 ⎧ Generalized Cantor’s intersection property ⎪ ⎪ ⎪ ⎪ If (Q ) is a decreasing sequence of nonempty sets in McX ⎪ n ⎨ and limn→∞  ψ(Q n ) = 0 , then the intersection ⎪ = Q n = ∅ Q ⎪ ∞ ⎪ ⎪ n ⎪ ⎩ is compact.

(6)

(7)

(b) If X is a Banach space, then we have for all Q, Q 1 , Q 2 ∈ M X



ψ(Q 1 + Q 2 ) ≤ ψ(Q 1 ) + ψ(Q 2 )

(8)

ψ(x + Q) = ψ(Q) for all x ∈ X

(9)

ψ(λQ) = |λ|ψ(Q) for each scalar λ

(10)

invariance under the passage to the convex hull ψ(co(Q)) = ψ(Q).

(11)

Remark 1.10 The conditions in (1), (2) and (3) show that the Kuratowski and Hausdorff measures of noncompactness are a measure of noncompactness in the sense of Definition 1.7, the conditions in (4), (5), (6) and (7) are analogous to those of Proposition 1.2, and the conditions in (8), (9), (10) and (11) are analogous to those mentioned in Remark 1.8. The following result is well known. Theorem 1.26 Let X be an infinite-dimensional Banach space and B X denote the closed unit ball in X . Then, we have α(B X ) = 2 and χ (B X ) = 1. Proof We write B¯ = B X , for short. ¯ = 2. (i) First, we show α( B) ¯ ≤ 2. We clearly have α( B) ¯ < 2, then, by Definition 1.8 (b), there exist bounded, closed subsets If α( B)  Q k of X with diam(Q k ) < 2 for k = 1, 2, . . . , n such that B¯ ⊂ nk=1 Q k . Let X n = {x1 , x2 , . . . , xn } be a linearly independent subset of X , and E n be the set of all real linear combinations of elements of X n . Clearly E n is a real



n-dimensional normed space, where the norm on E n is, of course, the restriction Let Sn = {x ∈ E n : x = 1} denote the unit sphere on of the norm of X on E n .  E n . We note that Sn ⊂ nk=1 (Sn ∩ Q k ), diam(Sn ∩ Q k ) < 2 and Sn ∩ Q k is a closed subset of E n for each k = 1, 2, . . . , n. This is a contradiction to the wellknown Ljusternik–Šnirleman–Borsuk theorem [8, pp. 303–307], which states that if Sn is the unit sphere of an n-dimensional real normed  space E n , Fk is a closed subset of E n for each k = 1, 2, . . . , n and Sn ⊂ nk=1 Fk , then there exists a k0 ∈ {1, 2, . . . , n} such that the set Sn ∩ Fk0 contains a pair of antipodal points, that is, there exists x0 ∈ Sn ∩ Fk0 such that {x0 , −x0 } ⊂ Sn ∩ Fk0 . ¯ = 1. (ii) Now we show χ ( B) ¯ ≤ 1. We clearly have χ ( B) ¯ = q < 1, then we choose ε > 0 such that q + ε < 1. By Definition 1.8 If χ ( B) ¯ say {x1 , x2 , . . . , xn }. Hence (b), there exists a (q + ε)-net of B, B¯ ⊂

n 

¯ (xk + (q + ε) B).

(1.43)

k=1

Now it follows from Theorem 1.25, (4), (3) and (9) that

 ¯ ≤ max χ (xk + (q + ε) B) ¯ = (q + ε)q. q = χ ( B) 1≤k≤n

Since q + ε < 1, we have q = 0 by (1.43), that is, B¯ is a totally bounded set. But this is impossible since X is is an infinite-dimensional space. ¯ = 1. Thus, we have χ ( B)  Now we are going to prove the famous Goldenštein–Goh’berg–Markus theorem which gives an estimate for the Hausdorff measure of noncompactness of bounded sets in Banach spaces with Schauder bases. Theorem 1.27 (Goldenštein, Goh’berg, Markus) ([39]) Let X be a Banach space with a Schauder basis (bn ), Pn : X → X for n ∈ N be the projector onto the linear span of {b1 , b2 , . . . , bn }, that is, Pn (x) =

n 

λk bk for all x =

k=1

∞ 

λk bk ∈ X,

k=1

and Rn = I − Pn . Furthermore, let the function μ : M X → [0, ∞) be defined by 



μ(Q) = lim sup sup Rn (x) . n→∞

x∈Q

Then the following inequalities hold for all Q ∈ M X :

(1.44)



 1 · μ(Q) ≤ χ (Q) ≤ inf sup Rn (x) ≤ μ(Q), n a x∈Q

(1.45)

where a = lim supn→∞ Rn  denotes the basis constant of the Schauder basis. Proof We obviously have for any n ∈ N Q ⊂ Pn (Q) + Rn (Q) for all Q ∈ M X .

(1.46)

It follows from Theorem 1.25 (4) and (8) χ (Q) ≤ χ (Pn (Q)) + χ (Rn (Q)) ≤ sup Rn (x) . x∈Q



Now we obtain



χ (Q) ≤ inf sup Rn (x) ≤ μ(Q). n

x∈Q

Hence, it suffices to show the first inequality in (1.45). Let ε > 0 be given and {y1 , y2 , . . . , yn } be a (χ (Q) + ε)-net of Q. Then it follows that ¯ Q ⊂ {y1 , y2 , . . . , yn } + (χ (Q) + ε) B, where B¯ denotes the closed unit ball in X . This implies that, for any x ∈ Q, there exist y ∈ {y1 , y2 , . . . , yn } and z ∈ B¯ such that x = y + (χ (Q) + ε)z, and so sup Rn (x) ≤ sup Rn (yk ) + (χ (Q) + ε) Rn  , x∈Q

1≤k≤n



hence



lim sup sup Rn (x) ≤ (χ (Q) + ε) lim sup Rn . n→∞

n→∞

x∈Q

Since ε > 0 was arbitrary, this yields the first inequality in (1.45).



Remark 1.11 ([23, Remark 7.9.4]) If we write in Theorem 1.27



μ(Q) ˜ = inf sup Rn (x) , n

x∈Q

then we obtain from the inequalities in (1.45) 1 μ(Q) ˜ ≤ χ (Q) ≤ μ(Q) ˜ for all Q ∈ M X . a

(1.47)



Remark 1.12 ([23, Remark 7.9.5]) The measure of noncompactness μ in Theorem 1.27 is equivalent to the Hausdorff measure of noncompactness by (1.45). If a = 1 then we obtain from (1.45) μ(Q) ˜ = μ(Q) = χ (Q) for all Q ∈ M X . If X =  p (1 ≤ p < ∞) or X = c, c0 , then the limit in (1.44) exists. Example 1.14 (a) If X = c0 or X =  p for 1 ≤ p < ∞ with the standard Schauder basis (e(n) )∞ n=1 and the natural norms, then a = 1 and we have ⎧    ⎪ ⎪ ⎪ lim sup sup |xk | ⎪ ⎨n→∞ x∈Q k≥n+1  χ (Q) =  ∞ 1/ p ⎪  ⎪ p ⎪ lim sup |xk | ⎪ ⎩n→∞ x∈Q

(Q ∈ Mc0 ) (Q ∈ M p ).

k=n+1

(b) If X = c, then a = 2 and we have  1 1 · μ(Q) = · lim sup Rn (x)∞ ≤ χ (Q) ≤ μ(Q) for every Q ∈ M X , 2 2 n→∞ x∈Q (1.48) where for each x ∈ c with ξx = limk→∞ xk Rn (x)∞ = sup |xk − ξx |. k≥n+1

We close this section with defining one more measure of noncompactness which is also equivalent to the Hausdorff measure of noncompactness. Theorem 1.28 ([23, Theorem 7.9.8]) Let X be a Banach space with a Schauder basis. Then the function ν : M X → [0, ∞) with



ν(Q) = lim inf sup Rn (x) n→∞

for all Q ∈ M X

x∈Q

is a measure of noncompactness on X which is invariant under the passage to the convex hull. Moreover, the following inequalities hold: 1 · ν(Q) ≤ χ (Q) ≤ ν(Q) for all Q ∈ M X . a The properties of ν now follow from those of μ, or can be proved similarly.



Remark 1.13 ([23, Remark 7.9.9]) It is obvious from Theorem 1.28 that ν = μ if





 lim

n→∞

sup Rn (x) x∈Q

exists for all Q ∈ M X , for instance, for X = c0 or X =  p (1 ≤ p < ∞) with the standard basis by Part (a) of Example 1.14. When X = c with the supremum norm and the standard basis, then we have ν = μ by Part (b) of Example 1.14.

1.10 Measures of Noncompactness of Operators In this section, we recall the definition of measures of noncompactness of operators between Banach spaces and list some important properties of the Hausdorff measure of noncompactness of such operators. Definition 1.9 Let φ and ψ be measures of noncompactness on the Banach spaces X and Y , respectively. (a) An operator L : X → Y is said to be (φ, ψ)-bounded, if L(Q) ∈ MY for all Q ∈ M X ,

(1.49)

and if there is a non-negative real number c such that ψ(L(Q)) ≤ c · φ(Q) for all Q ∈ M X .

(1.50)

(b) If an operator L is (φ, ψ)-bounded, then the number L(φ,ψ) = inf{c ≥ 0 : (1.50) holds}

(1.51)

is called the (φ, ψ)-operator norm of L, or (φ, ψ)-measure of noncompactness of L. If ψ = φ, we write Lφ = L(φ,φ) , for short. First, we recall the formula for the Hausdorff measure of noncompactness of a bounded linear operator between infinite-dimensional Banach spaces. Theorem 1.29 Let X and Y be infinite-dimensional Banach spaces and L : X → Y be a bounded linear operator. Then, we have Lχ = χ (L(S X )) = χ (L(B X )), χ (L(B X ))

(1.52)

where B X and B X and S X denote the open and closed unit balls, and the unit sphere in X .
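Before the proof, a hedged illustration of how (1.52) is used in practice (the diagonal operator and the truncation are assumptions of the sketch, not part of the original text): for a diagonal operator D on ℓ₂ with (Dx)_k = d_k x_k, the image of the closed unit ball is the ellipsoid {y : Σ|y_k/d_k|² ≤ 1}, whose tail supremum is sup_{k>n}|d_k|; combining (1.52) with Example 1.14 (a) one therefore expects ‖L_D‖_χ = lim_n sup_{k>n}|d_k|, so that D is compact precisely when d_k → 0 (cf. (1.54) below).

```python
# Tail suprema governing ||L_D||_chi for two diagonal operators on l_2.

N = 10_000                                            # truncation (an assumption)
d_compact = [1.0 / (k + 1) for k in range(N)]         # d_k -> 0: compact, ||L_D||_chi = 0
d_noncpt  = [1.0 + 1.0 / (k + 1) for k in range(N)]   # d_k -> 1: not compact, ||L_D||_chi = 1

for n in (10, 100, 1000):
    print(f"n = {n:5d}   tail sup = {max(d_compact[n+1:]):.5f} (compact case), "
          f"{max(d_noncpt[n+1:]):.5f} (non-compact case)")
```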

1.10 Measures of Noncompactness of Operators


Proof We write B¯ = B X and S = S X for the closed unit ball and the unit sphere in X. Since co(S) = B¯ and L(co(S)) =co(L(S))), it follows from Theorem 1.25 (2) and (11) that ¯ = χ (L(co(S))) = χ (co(L(S))) = χ (L(S)). χ (L( B)) Hence, we have by (1.51) and Theorem 1.26 ¯ = Lχ · 1 = Lχ . ¯ ≤ Lχ · χ ( B) χ (L( B)) Now, we show

¯ Lχ ≤ χ (L( B)).

Let Q ∈ M X and {x1 , x2 , . . . , xn } be a finite r -net of Q. Then Q ⊂ obviously n  L (Br (xk )) . L(Q) ⊂

(1.53) n k=1

Br (xk ) and

k=1

It follows from this Theorem 1.25 (4), (3), the additivity of L, (8), (6), (10)  χ (L(Q)) ≤ χ

n 

k=1

L(Br (xk )) ≤ max χ (L(Br (xk ))) 1≤k≤n

= max χ (L({xk } + Br (0))) = max χ (L({xk }) + L (Br (0))) 1≤k≤n

1≤k≤n

≤ max χ (L({xk })) + (L (Br (0))) = χ (L (Br (0))) 1≤k≤n

= χ(L(r B_1(0))) = χ(r L(B)) = r χ(L(B)). Since r may be chosen arbitrarily close to χ(Q), this implies χ(L(Q)) ≤ χ(Q) · χ(L(B)),

and so (1.53). Finally, we need the following useful known results.

Theorem 1.30 Let X , Y and Z be Banach spaces, L ∈ B(X, Y ) and L˜ ∈ B(Y, Z ). Then Lχ is a seminorm on B(X, Y ) and Lχ = 0 if and only if L ∈ K(X, Y ); Lχ ≤ L; ˜ χ · Lχ .  L˜ ◦ Lχ ≤  L Proof (i) The proof that  · χ is a seminorm is elementary.

(1.54) (1.55) (1.56)



The statement in (1.54) follows from the observation that an operator L : X → Y is compact if and only if it is continuous and maps bounded sets into relatively compact sets. (iii) Now we show (1.55). We have by (1.52) in Theorem 1.29 (ii)

Lχ = χ (L(S X )) ≤ sup L(x) = L. x∈S X

(iv)

Finally, we show the inequality in (1.56). We have for all Q ∈ M X by Definition 1.9

! " ! " ˜ χ · Lχ · χ (Q), ˜ ˜ χ · χ ((L(Q))) ≤  L χ ( L˜ ◦ L)(Q) = χ L(L(Q)) ≤  L and the inequality in (1.56) is an immediate consequence.



References 1. Aasma, A., Dutta, H., Natarajan, P.N.: An Introductory Course in Summability Theory. Wiley, New York (2017) 2. Akhmerov, R.R., Kamenskii, M.I., Potapov, A.S., Rodkina, A.E., Sadovskii, B.N.: Measures of Noncompactness and Condensing Operators. Operator Theory. Advances and Applications, vol. 55. Springer, Basel (1992) 3. Bana´s, J., Goebel, K.: Measures of Noncompactness in Banach Spaces. Lecture Notes in Pure and Applied Mathematics, vol. 60. Marcel Dekker Inc., New York (1980) 4. Bana´s, J., Mursaleen, M.: Sequence Spaces and Measures of Noncompactness with Applications to Differential and Integral Equations. Springer, New Delhi (2014) 5. Boos, J.: Classical and Modern Methods in Summability. Oxford University Press, Oxford (2000) 6. Crone, L.: A characterization of matrix mappings on 2 . Math. Z. 123, 315–317 (1971) 7. Darbo, G.: Punti uniti in transformazioni a condominio non compatto. Rend. Sem. Math. Univ. Padova 24, 84–92 (1955) 8. Furi, M., Vignoli, A.: On a property of the unit sphere in a linear normed space. Bull. Acad. Pol. Sci. Sér. Sci. Math. Astron. Phys. 18(6), 333–334 (1970) 9. Grosse-Erdmann, K.-G.: The structure of sequence spaces of Maddox. Can. J. Math. 44(2), 298–307 (1992) 10. Hadži´c, O.: Fixed point theory in topological vector spaces. University of Novi Sad, Institute of Mathematics, Novi Sad, Serbia (1984) 11. Istrˇa¸tescu, V.: On a measure of noncompactness. Bull. Math. Soc. Sci. Math. R. S. Roumanie (N.S.) 16, 195–197 (1972) 12. Istrˇa¸tescu, V.: Fixed Point Theory, an Introduction. Reidel Publishing Company, Dordrecht (1981) 13. Jarrah, A.M., Malkowsky, E.: BK spaces, bases and linear operators. Rend Circ. Mat. Palermo, Ser. II, Suppl. 52, 177–191 (1998) 14. Kamthan, P.K., Gupta, M.: Sequence Spaces and Series. Marcel Dekker, New York (1981) 15. Köthe, G.: Topologische lineare Räume I. Springer, Heidelberg (1966) 16. Kuratowski, K.: Sur les espaces complets. Fund. Math. 15, 301–309 (1930)

References


17. Kuratowski, K.: Topologie. Warsaw (1958) 18. Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces I, Sequence Spaces. Springer, Heidelberg (1977) 19. Maddox, I.J.: Elements of Functional Analysis. Cambridge University Press, Cambridge (1971) 20. Malkowsky, E.: Klassen von Matrixabbildungen in paranormierten F K - Räumen. Analysis 7, 275–292 (1987) 21. Malkowsky, E., Rakoˇcevi´c, V.: An Introduction into the Theory of Sequence Spaces and Measures of Noncompactness. Zbornik radova, Matematˇcki institut SANU, vol. 9(17), pp. 143–234. Mathematical Institute of SANU, Belgrade (2000) 22. Malkowsky, E., Rakoˇcevi´c, V.: On some results using measures of noncompactness. Advances in Nonlinear Analysis via the Concept of Measure of Noncompactness, pp. 127–180. Springer, Berlin (2017) 23. Malkowsky, E., Rakoˇcevi´c, V.: Advanced Functional Analysis. CRC Press, Taylor & Francis Group, Boca Raton (2019) 24. Marti, J.M.: Introduction to the Theory of Bases. Springer, Heidelberg (1967) 25. Mursaleen, M.: Applied Summability Methods. Springer, Cham (2014) 26. Peyerimhoff, A.: Über ein Lemma von Herrn Chow. J. Lond. Math. Soc. 32, 33–36 (1957) 27. Ruckle, W.H.: Sequence Spaces. Research Notes in Mathematics, vol. 49. Pitman, Boston (1981) 28. Rudin, W.: Functional Analysis. McGraw Hill, New York (1973) 29. Stieglitz, M., Tietz, H.: Matrixtransformationen in Folgenräumen. Eine Ergebnisübersicht. Math. Z. 154, 1–16 (1977) 30. Ayerbe Toledano, J.M., Dominguez Benavides, T., Lopez Acedo, G.: Measures of Noncompactness in Metric Fixed Point Theory. Operator Theory Advances and Applications, vol. 99. Birkhäuser Verlag, Basel (1997) 31. Wilansky, A.: Functional Analysis. Blaisdell Publishing Company, New York (1964) 32. Wilansky, A.: Modern Methods in Topological Vector Spaces. McGraw Hill, New York (1978) 33. Wilansky, A.: Summability Through Functional Analysis. Mathematical Studies, vol. 85. North-Holland, Amsterdam (1984) 34. Zeller, K.: Abschnittskonvergenz in FK-Räumen. Math. Z. 55, 55–70 (1951) 35. Zeller, K.: Allgemeine Eigenschaften von Limitierungsverfahren. Math. Z. 53, 463–487 (1951) 36. Zeller, K.: Matrixtransformationen von Folgenräumen. Univ. Rend. Mat. 12, 340–346 (1954) 37. Zeller, K., Beekmann, W.: Theorie der Limitierungsverfahren. Springer, Heidelberg (1968)

Russian References 38. R. R. Ahmerov, M. I. Kamenski, A. S. Potapov i dr., Mery nekompaktnosti i uplotnwie operatory, Novosibirsk, Nauka, 1986 39. L. S. Goldenxten, I. C. Gohberg i A. S. Markus, Issledovanie nekotoryh svostv linenyh ograniqennyh operatorov v svzi s ih q-normo, Uq. zap. Kixinevskogo gos. un-ta, 29, (1957), 29–36 40. L. S. Goldenxten, A. S. Markus, O mere nekompaktnosti ograniqennyh mnoestv i linenyh operatorov, V kn.: Issledovanie po algebre i matematiqeskomu analizu, Kixinev: Kart Moldavenske (1965) 45–54

Chapter 2

Matrix Domains

In Chap. 2, we study sequence spaces that have recently been introduced by the use of infinite matrices. They can be considered as the matrix domains of particular triangles in certain sequence spaces. This seems to be natural in view of the fact that most classical methods of summability are given by triangles. There are a large number of publications in this area. Here we list some some of them related to the topics of this section and Sect. 3 [3, 4, 6, 7, 9, 14, 14, 15, 20, 22–24, 28–33, 35, 46–48, 50, 51, 58–64]. We also mention more of the following recent publications related to the topics covered in this chapter, for instance, [1, 8, 17, 21, 26, 34, 35, 45, 49, 52–56]. The results in these publications are proved for each sequence space separately. We use the relevant theory presented in Chap. 1 to provide a general, unified approach for matrix domains of arbitrary triangles in arbitrary F K spaces and some special cases. In particular, we obtain some fundamental results concerning topological properties, Schauder bases, the β-duals of matrix domains of triangles in the classical sequence spaces and the characterizations of matrix transformations between these spaces. We also consider some important special cases. Our general results reduce the determination of the β-duals of the matrix domains in the classical sequence space and the characterizations of matrix transformations between them to the β-duals of the spaces and the matrix transformations between the spaces themselves. For further related studies, we refer the interested reader to the text book [25].

2.1 General Results

In this section, we study the most important fundamental general results for matrix domains related to their topological properties. Many of the results of this section can be found in [65, Sects. 4.2 and 4.3].




It turns out that the theory of F K spaces of the previous section applies to most matrix domains. Throughout, we assume that all the F K spaces are locally convex. Definition 2.1 Let X be a subset of ω and A be an infinite matrix. Then the set X A = {x ∈ ω : Ax ∈ X } is called the matrix domain of A in X , in particular, the set c A is called the convergence domain of A. We will frequently use the following well-known result from functional analysis. Theorem 2.1 ([44, Corollary 3.10.9] or [65, 4.0.1]) Every locally convex metrizable space X has its topology defined by a finite or infinite sequence ( pn ) of seminorms pn ; this means that if (x (k) ) is a sequence in X , then x (k) → 0 (k → ∞) if and only if pn (x (k) ) → 0 (k → ∞) for each n. We use the notation (X, ( pn )) for a vector space X with its metrizable topology given by the sequence ( pn ) of semi-norms pn in the sense of Theorem 2.1. Example 2.1 (a) The space (ω, (|Pn |)) is an F K space where (Pn ) is the sequence of coordinates, and x (m) → x (m → ∞) in ω if and only if xn(m) → xn (m → ∞) for each n (Corollary 1.1). (b) The space c is a B K space with p(x) = x∞ ; there is only one semi-norm, a norm in this case, and x (m) → 0 (m → ∞) in c if and only if xm ∞ → 0 (m → ∞) (Example 1.2). The next result is fundamental. Theorem 2.2 Let (X, ( pn )) and (Y, (qn )) be F K spaces, A be a matrix defined on X , that is, X ⊂ ω A , and Z = X ∩ Y A = {x ∈ ω : Ax ∈ Y } . Then Z is an F K space with ( pn ) ∪ (qn ◦ A); this means that the topology of Z is given by the sequences ( pn ) and (qn ◦ A) of all the seminorms pn and qn ◦ A. Proof The countable set { pn , qn ◦ A : n ∈ N} of seminorms yields a metrizable topology larger than that of X , hence of ω, since (X, ( pn )) is an F K space. We have to show that Z is complete. Let (x (m) )∞ m=0 be a Cauchy sequence in Z . Then it clearly is a Cauchy sequence in X which is convergent by the completeness of X , lim x (m) = t in X, say,

that is,
\[ \lim_{m\to\infty} p_n\bigl(x^{(m)} - t\bigr) = 0 \quad \text{for each } n. \]
Since x^{(m)} ∈ Y_A, it follows that Ax^{(m)} ∈ Y, and so (Ax^{(m)}) is a Cauchy sequence in Y, since q_n(Ax^{(m)}) = (q_n ∘ A)(x^{(m)}). So the sequence (Ax^{(m)}) is convergent by the completeness of Y,
\[ \lim_{m\to\infty} Ax^{(m)} = b \ \text{in } Y, \ \text{say}, \]
that is,
\[ \lim_{m\to\infty} q_n\bigl(Ax^{(m)} - b\bigr) = 0 \quad \text{for each } n. \]
Since X ⊂ ω_A and the matrix map A from X to ω is continuous by Theorem 1.4, it follows that
\[ \lim_{m\to\infty} Ax^{(m)} = At \ \text{in } \omega. \]
We also have
\[ \lim_{m\to\infty} Ax^{(m)} = b \ \text{in } \omega, \]
since the topology of the FK space Y is stronger than that of ω on Y. This yields b = At, since ω is a metric space, and so
\[ t \in Z, \qquad \lim_{m\to\infty} p_n\bigl(x^{(m)} - t\bigr) = 0 \]
and
\[ \lim_{m\to\infty} (q_n \circ A)\bigl(x^{(m)} - t\bigr) = \lim_{m\to\infty} q_n\bigl(Ax^{(m)} - b\bigr) = 0 \quad \text{for each } n. \qquad \square \]

Let A = (a_{nk})_{n,k=0}^{∞} be an infinite matrix and A_n = (a_{nk})_{k=0}^{∞} and A^k = (a_{nk})_{n=0}^{∞} denote the sequences in the n-th row and the k-th column of A. Then A is said to be row finite if A_n ∈ φ for all n; it is said to be column finite if A^k ∈ φ for all k. An infinite matrix T = (t_{nk})_{n,k=0}^{∞} is called a triangle if t_{nn} ≠ 0 and t_{nk} = 0 (k > n) for all n = 0, 1, .... It is known that any triangle T has a unique inverse S which also is a triangle and x = S(Tx) = T(Sx) ([65, 1.4.8, p. 9], [13, Remark 22 (a), p. 22]). The next result is a consequence of Theorem 2.2.

Theorem 2.3 Let A be a row-finite matrix. Then (c A , ( pn )) is an F K space where p−1 =  ·  A , that is, p−1 (x) = Ax∞ , and pn (x) = |xn | for n = 0, 1, 2, . . . . If A is a triangle, then (c A , p−1 ) is a B K space. Proof We apply Theorem 2.2 with X = ω and Y = c so that Z = c A . The seminorms on X = ω are |Pn | with Pn (x) = xn for n = 0, 1, . . . by Part (a) of Example 2.1. Also Y = c is a B K space with q = x∞ by Part (b) of Example 2.1. We put p−1 = q ◦ A. Then the statement is clear from Theorem 2.2. If A is a triangle, then the matrix map A : c A → x is one to one, linear and onto. So c A becomes a Banach space equivalent to c with the norm  · c A defined by



\[ \|x\|_{c_A} = \|Ax\|_\infty \quad \text{for all } x \in c_A. \]
To see that the coordinates are continuous, let B = A^{-1} be the inverse of A, also a triangle. It follows that
\[ |x_n| = |B_n(Ax)| = \Bigl|\sum_{k=0}^{n} b_{nk} A_k x\Bigr| \le \sum_{k=0}^{n} |b_{nk}| \cdot \|Ax\|_\infty = M_n \cdot \|x\|_{c_A} \quad \text{for all } n, \]
where M_n = \sum_{k=0}^{n} |b_{nk}|. This shows that the coordinates are continuous. \square
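The estimate above is easy to check numerically on finite sections. The following Python sketch (an added illustration, not part of the original text; the section size N and the test vector are arbitrary choices) takes a finite section of the Cesàro triangle C_1 of Example 2.5 below, computes its inverse, and verifies x = S(Tx) together with the coordinate bound |x_n| ≤ M_n ‖Ax‖_∞.

import numpy as np

# Added illustration: finite section of a triangle (here the Cesaro matrix C_1),
# its inverse, and the coordinate bound |x_n| <= M_n * sup_k |(Ax)_k| from the proof above.
N = 6
A = np.array([[1.0 / (n + 1) if k <= n else 0.0 for k in range(N)] for n in range(N)])
S = np.linalg.inv(A)                        # the inverse of a triangle is again a triangle

x = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])
assert np.allclose(S @ (A @ x), x)          # x = S(Tx) on the finite section

M = np.abs(S).sum(axis=1)                   # M_n = sum_k |b_nk|, where B = A^{-1}
print(np.all(np.abs(x) <= M * np.max(np.abs(A @ x)) + 1e-12))   # True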



Let z be any sequence and Y be a subset of ω. Then we write   z −1 ∗ Y = a ∈ ω : a · z = (ak z k )∞ k=0 ∈ Y . Part (a) of the next result is a special case of Theorem 2.2 when A is a diagonal matrix with the sequence z on the diagonal. Theorem 2.4 Let (Y, q) be an F K space and z be a sequence. (a) Then z −1 ∗ Y is an F K space with ( pn ) ∪ (h n ) where pn = |xn | and h n = qn (z · x) for all n. (b) If Y has AK then z −1 ∗ Y has AK also. Proof (a) We define the diagonal matrix D = (dnk )∞ n,k=0 by dnn = z n and dnk = 0 (k = n) for n = 0, 1, . . . , and apply Theorem 2.2 with X = ω. (b) We fix n. Then we have pn (x − x [m] ) = 0 for all m > n, that is, lim pn (x − x [m] ) = 0.

m→∞

Since Y has AK, we also obtain h_n(x − x^{[m]}) = q_n(z · (x − x^{[m]})) = q_n(z · x − z · x^{[m]}) → 0 (m → ∞). Therefore, it follows that x^{[m]} → x in z^{−1} ∗ Y, and so z^{−1} ∗ Y has AK. \square
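The proof of Part (a) of Theorem 2.4 realizes z^{−1} ∗ Y as the matrix domain of the diagonal matrix D with d_{nn} = z_n. The short sketch below (an added illustration; the sequences z and x are arbitrary choices) makes this concrete on a finite section.

import numpy as np

# Added illustration: D x = z . x for the diagonal matrix with d_nn = z_n, so on a finite
# section the defining condition of z^{-1} * Y is just a condition on the products z_n x_n.
N = 8
z = np.array([2.0 ** n for n in range(N)])
D = np.diag(z)
x = np.array([3.0 ** (-n) for n in range(N)])
print(np.allclose(D @ x, z * x))     # True
print((z * x)[:4])                   # (2/3)^n -> 0, so this x would belong to z^{-1} * c_0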



Example 2.2 Let α = (αk )∞ k=0 be a sequence of positive reals, and z = 1/α = (1/αk )∞ k=0 . The spaces



αp

=z

−1

∗ p = x ∈ ω :



 |xk | p k=0

αk

n (n = 0, 1, . . . ), then we obtain |xk | =

|Ak x − Ak−1 x| p−1 (x) ≤2 , |z k | |z k |

and so pk is redundant. If z ∈ φ then z β = ω.



We need the following result from functional analysis: (I) ([65, 4.0.9]) Let a vector space X have a collection C of locally convex topologies. For each T ∈ C let P(T  ) be the set of seminorms that generates T as in Theorem 2.1. Let P = {P(T ) : T ∈ C}. Then P generates the so-called sup − topology sup C with the property that x → 0 if and only if p(x) → 0 for all p ∈ P(T ) and all T ∈ C. Thus, x → 0 if and only if x → 0 in (X, T ) for each T ∈ C. If each T ∈ C is a norm topology and C has finitely many members, sup C is given by a norm, namely, the sum of these norms. Theorem 2.6 The intersection of countably many  F H spaces is an F H space. If each X n is an F K space with AK , then X = ∞ n=1 X n has AK . Proof Let Tn be the topology on X n . We place on X the topology T = sup{Tn : X n }. If each X n is an F K space, then the coordinates are continuous in each Tn , hence in the larger topology T . If (xm ) is a T -Cauchy sequence in X , then it is a Tn -Cauchy sequence for each n, and hence convergent by the completeness of X n , x (m) → tn ∈ X n , say (m → ∞) for each n. Then, we have

x (m) → tn ∈ H (m → ∞) for each n,

since the F H topology of X n is stronger than that of H on X n . Thus, all the tn are the same, since H is a Hausdorff space, tn = t, say, for all n. Thus, we clearly have  t ∈ X and x (m) → t (m → ∞) in Tn for each n, so x → t in (X, T ) by (I). Now we are able to prove the next result. Theorem 2.7 Let A be a matrix. Then (ω A , ( pn ) ∪ (h n )) is an AK space with  m      ank xk  for all n. pn (x) = |xn | and h n (x) = sup    m k=0



For any k such that A^k has at least one nonzero term, p_k may be omitted. For any n such that A_n ∈ φ, h_n may be omitted.

Proof We observe that ω_A = ⋂_{n=0}^{∞} (A_n)^β, and each space (A_n)^β is an AK space with p_n(x) = |x_n| and h_n(x) = sup_m |∑_{k=0}^{m} a_{nk} x_k| by Theorem 2.5. Also, the intersection of countably many AK spaces is an AK space by Theorem 2.6. If a_{nk} ≠ 0 then we have |x_k| ≤ 2 · h_n(x)/|a_{nk}| as in the proof of Theorem 2.5, so p_k is redundant. If A_n ∈ φ then h_n can be omitted by the last part of Theorem 2.5. \square

Theorem 2.8 Let (Y, (q_n)) be an FK space and A be a matrix. Then Y_A is an FK space with (p_n) ∪ (h_n) ∪ (q_n ∘ A), where the sequences (p_n) and (h_n) are as in Theorem 2.7. For any k such that A^k has at least one nonzero term, p_k may be omitted. For any n such that A_n ∈ φ, h_n may be omitted. If A is a triangle, only (q_n ∘ A) is needed.

Proof We apply Theorem 2.2 with X = ω_A, which is an FK space by Theorem 2.7. Then Z = Y_A and the seminorms are obtained from Theorems 2.2 and 2.7. The remaining parts follow from Theorem 2.7 and the fact that if A is a triangle then the map A : Y_A → Y is an equivalence. \square

Example 2.3 We write Σ^{(1)} = Σ, where Σ is the triangle of the proof of Theorem 2.5, and Δ = Δ^{(1)} = (δ_{nk})_{n,k=0}^{∞} for the matrix with δ_{nn} = 1, δ_{n,n−1} = −1 and δ_{nk} = 0 otherwise. Let m ∈ ℕ ∖ {1}. Then we write Σ^{(m)} = Σ^{(m−1)} · Σ and Δ^{(m)} = Δ^{(m−1)} · Δ. Since the matrices Σ and Δ are obviously inverse to one another and matrix multiplication is associative for triangles ([65, Corollary 1.4.5]), the matrices Σ^{(m)} and Δ^{(m)} are also inverse to one another. It is well known that
\[ \Delta^{(m)}_n x = \sum_{k=0}^{m} (-1)^k \binom{m}{k} x_{n-k} = \sum_{k=\max\{0,\,n-m\}}^{n} (-1)^{n-k} \binom{m}{n-k} x_k \tag{2.2} \]
and
\[ \Sigma^{(m)}_n x = \sum_{k=0}^{n} \binom{m+n-k-1}{n-k} x_k \quad \text{for } n = 0, 1, \dots. \tag{2.3} \]
(a)

Let p = ( pk )∞ k=0 be a bounded sequence of positive reals, M = max{1, supk pk } and m ∈ N. By Theorem 2.8, Part (b) of Example 1.2, (2.3) and (2.2), the spaces (( p)) (m) , (( p)) (m) and c0 (( p), (m) ) = (c0 ( p)) (m) are F K spaces with the total paranorms



  ⎞

 pk 1/M ∞    k m+k− j −1   x j  ⎠ g(( p)) (m) (x) = ⎝ ,  k− j  k=0  j=0   ⎞ ⎛

 pk 1/M ∞  k     m  x j  ⎠ (−1)k− j g(( p)) (m) (x) = ⎝  k− j  k=0  j=max{0,k−m} ⎛

and   ⎞ 

 pk 1/M k    m x j  ⎠ gc0 (( p), (m) ) (x) = ⎝sup  (−1)k− j . k − j k  j=max{0,k−m}  ⎛

If m = 1, then we write bv( p) = (( p)) , and the spaces (( p)) , bv( p) and c0 (( p), ) are F K spaces with the total paranorms   pk ⎞1/M ∞ 1/M  ∞     k  pk   ⎝ ⎠ |xk − xk−1 | xj and gbv( p) (x) = g(( p)) (x) =   k=0  j=0 k=0 ⎛

and

1/M

gc0 (( p), ) (x) = sup |xk − xk−1 | pk . k

(b)

Since c0 , c and ∞ are B K spaces with  · ∞ by Part (c) of Example 1.2, the sets c0 ( (m) ) = (c0 ) (m) , c( (m) ) = c (m) and ∞ ( (m) ) = (∞ ) (m) are B K spaces with x(∞ ) (m)

  

 k    m x j  = sup  (−1)k− j k− j k  j=max{0,k−m} 

(2.4)

and the sets c (m) and (∞ ) (m) are B K spaces with x(∞ ) (m)

  

  k m+k− j −1  x j  . = sup  k− j k  j=0 

(2.5)

Also, the sets ( p ) (m) and ( p ) (m) are B K spaces with x( p ) (m)

  ⎞ ⎛

 p 1/ p ∞  k     m  x j  ⎠ =⎝ (−1)k− j  k− j  k=0  j=max{0,k−m}

(2.6)



and

  ⎞

 p 1/ p ∞    k m+k− j −1   x j  ⎠ . =⎝  k− j  k=0  j=0 ⎛

x( p ) (m)

(2.7)

For m = 1, the norms in (2.4)–(2.7) reduce to

x(∞ ) = sup |xk − xk−1 | , x(∞ ) k

xbv p =

∞ 

     k  = sup  x j  = xbs k  j=0 

|xk − xk−1 | p

  p ⎞1/ p  ∞    k   =⎝ x j  ⎠ .   k=0  j=0 ⎛

1/ p and x( p )

k=0

If p = 1, we write bv = bv1 for the set of sequences of bounded variation. The next result is an immediate consequence of Theorem 2.8; it is a generalization of Theorem 2.3. Corollary 2.1 Let A be a matrix. Then c A is an F K space with ( pn ) ∪ (h n ) where p−1 (x) = Ax∞ and pn and h n (n = 0, 1, . . . ) are defined as in Theorem 2.7. For any k such that Ak has at least one nonzero term, pk may be omitted. For any n such that An ∈ φ, h n may be omitted (Theorem 2.3). Theorem 2.9 Let X and Y be F K spaces, A be a matrix, and X be a closed subspace of Y . Then X A is a closed subspace of Y A . Proof Since Y is an F K space, Y A is an F K space by Theorem 2.8. So the map L : Y A → Y with L(x) = Ax for all x ∈ Y A is continuous by Theorem 1.4, and so  X A = L −1 (X ) is closed. Example 2.4 We apply Theorem 2.9 and obtain that (c0 ) (m) is a closed subspace of c (m) and c (m) is a closed subspace of (∞ ) (m) ; also (c0 ) (m) is a closed subspace of c (m) and c (m) is a closed subspace of (∞ ) (m) .
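The matrices Δ^{(m)} and Σ^{(m)} of Example 2.3 and the formulas (2.2) and (2.3) are easy to check on finite sections. The following sketch (an added illustration, not part of the original text; the section size N and the order m are arbitrary choices) verifies that the two triangles are inverse to one another and that their entries agree with the binomial formulas.

import numpy as np
from math import comb

# Added illustration: finite sections of Delta^(m) and Sigma^(m) and a check against (2.2), (2.3).
N, m = 8, 3
D1 = np.eye(N) - np.eye(N, k=-1)            # Delta: 1 on the diagonal, -1 on the subdiagonal
S1 = np.tril(np.ones((N, N)))               # Sigma: partial sums, the inverse of Delta
Dm = np.linalg.matrix_power(D1, m)          # Delta^(m)
Sm = np.linalg.matrix_power(S1, m)          # Sigma^(m)
assert np.allclose(Dm @ Sm, np.eye(N))      # mutually inverse triangles

Dm_formula = np.array([[(-1) ** (n - k) * comb(m, n - k) if 0 <= n - k <= m else 0
                        for k in range(N)] for n in range(N)], dtype=float)
Sm_formula = np.array([[comb(m + n - k - 1, n - k) if k <= n else 0
                        for k in range(N)] for n in range(N)], dtype=float)
print(np.allclose(Dm, Dm_formula) and np.allclose(Sm, Sm_formula))   # True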

2.2 Bases of Matrix Domains of Triangles In this section, we study the bases of matrix domains of triangles. The main results are Theorem 2.12 and Corollary 2.5. Theorem 2.12 expresses the Schauder bases of matrix domains X T of triangles T in linear metric sequence



spaces X with a Schauder basis in terms of the Schauder basis of X . Corollary 2.5 is the special case of Theorem 2.12, where X is an F K space with AK and gives explicit representations of the sequences in X T , X T ⊕ e and (X ⊕ e)T with respect to the sequences e(n) (n = 0, 1, . . . ) and e. These results would yield the Schauder bases for the matrix domains in the classical sequence spaces (with the exception of ∞ (Corollary 2.4 and Example 1.5)) and in c0 ( p) and ( p) (Example 2.7) in a great number of recent publications. We start with some special results of interest. Let U = {u ∈ ω : u k = 0 for all k = 0, 1, . . . }. If u ∈ U then we write 1/u = (1/u k )∞ k=0 . Theorem 2.10 ([27, Theorem 2]) Let X be a B K space and u ∈ U. (k) If (b(k) )∞ = (1/u) · b(k) for k = 0, 1, . . . , then k=0 is a basis of X and c (k) ∞ −1 (c )k=0 is a basis of Y = u ∗ X . (b) Let |u 0 | ≤ |u 1 | ≤ . . . and |u n | → ∞ (n → ∞),

(a)

and T = (tnk )∞ n,k=0 be the triangle with

tnk

⎧ ⎨1 = un ⎩ 0

(0 ≤ k ≤ n) (k > n)

(n = 0, 1, . . . ).

Then (c0 )T has AK . Proof (a) Let  ·  be the B K norm of X . Then Y is a B K space with respect to  · u defined by yu = u · y (y ∈ Y ) by Theorem 2.8 and Part (a) of Theorem 2.4. Also u · c(k) = b(k) ∈ X implies c(k) ∈ Y for all k. Finally, let y ∈ Y be given. Then u · y = x ∈ X and there exists a unique sequence (λk )∞ k=0 of scalars such that m  λk b(k) → x (m → ∞) in X. x = k=0

We put y = (1/u) · x =

m 

λk c(k) for m = 0, 1, . . . .

k=0

Then we have u · y^{[m]} → x = u · y (m → ∞) in X, hence y^{[m]} → y (m → ∞) in Y, that is, y = \sum_{k=0}^{\infty} \lambda_k c^{(k)}. Obviously this representation is unique.
(b) By Theorem 2.1, (c_0)_T is a BK space with respect to the norm ‖·‖_{(c_0)_T} defined by
\[ \|x\|_{(c_0)_T} = \sup_n |T_n x| = \sup_n \Bigl| \frac{1}{u_n} \sum_{k=0}^{n} x_k \Bigr| \quad \text{for all } x \in (c_0)_T. \]



Also |u_n| → ∞ (n → ∞) implies φ ⊂ (c_0)_T. Let ε > 0 and x ∈ (c_0)_T be given. Then there is a non-negative integer n_0 such that |T_m x| < ε/2 for all m > n_0. Then it follows that
\[ \|x - x^{[m]}\|_{(c_0)_T} = \sup_{n \ge m+1} \Bigl| \frac{1}{u_n} \sum_{k=m+1}^{n} x_k \Bigr| \le \sup_{n \ge m+1} \bigl( |T_n x| + |T_m x| \bigr) < \varepsilon \quad \text{for all } m > n_0, \]
hence
\[ x = \sum_{k=0}^{\infty} x_k e^{(k)}. \tag{2.8} \]
Obviously, the representation in (2.8) is unique. \square

We apply Theorem 2.10 to determine the expansions of sequences in the matrix domains of the arithmetic means in c_0 and c.

Example 2.5 Let C_1 = (a_{nk})_{n,k=0}^{\infty} be the matrix of the arithmetic means or Cesàro matrix of order 1, that is,
\[ a_{nk} = \begin{cases} \dfrac{1}{n+1} & (0 \le k \le n) \\ 0 & (k > n) \end{cases} \quad (n = 0, 1, \dots). \]
We write C_0^{(1)} = (c_0)_{C_1}, C^{(1)} = c_{C_1} and C_\infty^{(1)} = (\ell_\infty)_{C_1} for the sets of sequences that are summable to 0, summable and bounded by the C_1 method. Then the space C_0^{(1)} has AK and every sequence x = (x_k)_{k=0}^{\infty} ∈ C^{(1)} has a unique representation
\[ x = \xi \cdot e + \sum_{k=0}^{\infty} (x_k - \xi) e^{(k)}, \quad \text{where } \xi = \lim_{n\to\infty} \frac{1}{n+1} \sum_{k=0}^{n} x_k. \tag{2.9} \]
Proof We apply Part (b) of Theorem 2.10 with u_n = n + 1 (n = 0, 1, \dots) and obtain that C_0^{(1)} has AK. If x = (x_k)_{k=0}^{\infty} ∈ C^{(1)}, then there exists a unique complex number ξ such that
\[ \frac{1}{n+1} \sum_{k=0}^{n} x_k - \xi = \frac{1}{n+1} \sum_{k=0}^{n} (x_k - \xi) \to 0 \quad (n \to \infty), \]
hence x^{(0)} = x − ξ · e = (x_k − ξ)_{k=0}^{\infty} ∈ C_0^{(1)}, and so x^{(0)} = \sum_{k=0}^{\infty} (x_k - \xi) e^{(k)}, since C_0^{(1)} has AK. Therefore, it follows that
\[ x = x^{(0)} + \xi \cdot e = \xi \cdot e + \sum_{k=0}^{\infty} (x_k - \xi) e^{(k)}, \]
which is (2.9). \square
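The limit ξ in (2.9) is simply the limit of the arithmetic means, which can be approximated numerically. The sketch below (an added illustration; the test sequence x_n = 5 + (−1)^n is an arbitrary choice) computes the Cesàro means of a bounded, non-convergent sequence and recovers ξ = 5.

import numpy as np

# Added illustration: the Cesaro means of x_n = 5 + (-1)^n converge to xi = 5, although x diverges.
N = 100_000
x = 5.0 + (-1.0) ** np.arange(N)
means = np.cumsum(x) / np.arange(1, N + 1)   # (C_1 x)_n = (x_0 + ... + x_n)/(n + 1)
print(round(means[-1], 6))                   # 5.0, the value of xi in (2.9)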

Now we consider some spaces related to the so-called weighted or Riesz means. Example 2.5 and the results presented following Definition 2.2 were applied in [41] to simplify the results in [11, 12, 18, 19, 43]. Definition 2.2 Letq = (qk )∞ k=0 be a sequence of non-negative real numbers with q0 > 0 and Q n = nk=0 qk for n = 0, 1, . . . . Then the matrix N¯ q = (( N¯ q )nk )∞ n,k=0 of the weighted means or Riesz means is defined by 

N¯ q

 nk

⎧ ⎨ qk = Qn ⎩0

(0 ≤ k ≤ n) (k > n)

(n = 0, 1, . . . ).

We write 

N¯ , q

 0

    = (c0 ) N¯ q , N¯ , q = c N¯ q and N¯ , q ∞ = (∞ ) N¯ q

for the sets of all sequences that are summable to 0, summable and bounded by the (1) of Example method N¯ q . If p = e, then these sets reduce to the sets C0(1) , C (1) and C∞ 2.5. Corollary 2.2 ([27, Corollary 1]) Each of the sets ( N¯ , q)0 , ( N¯ , q) and ( N¯ , q)∞ is a B K space with respect to the norm  ·  N¯ ,q defined by x N¯ q

  n  1     = sup  qk x k  ,   Q n n k=0

( N¯ , q)0 is a closed subspace of ( N¯ , q) and ( N¯ , q) is a closed subspace of ( N¯ , q)∞ . Furthermore, if Q n → ∞ (n → ∞), then ( N¯ , q)0 has AK , and every sequence ¯ x = (xk )∞ k=0 ∈ ( N , q) has a unique representation ∞  x =ξ ·e+ (xk − ξ )e(k) , where ξ = lim

n→∞

k=0



 n 1  qk x k . Q n k=0

(2.10)
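Before turning to the proof, here is a brief numerical illustration (added here, not part of the original text; the weights q_k = k + 1 and the test sequence are arbitrary choices, for which Q_n → ∞ holds) of the transform N̄_q and of the limit ξ in (2.10).

import numpy as np

# Added illustration: the weighted means of Definition 2.2 with q_k = k + 1 and a sequence x -> 3;
# the transformed sequence, and hence xi in (2.10), tends to 3 as well.
N = 50_000
q = np.arange(1.0, N + 1)             # q_k = k + 1 > 0
Q = np.cumsum(q)                      # Q_n = q_0 + ... + q_n
x = 3.0 + 1.0 / np.arange(1.0, N + 1)
weighted_means = np.cumsum(q * x) / Q
print(round(weighted_means[-1], 4))   # approximately 3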

Proof The first part follows from Theorem 2.9 and the facts that c0 is closed in c and c is closed in ∞ . We define the matrix T = (tnk )∞ n,k=0 by

tnk

⎧ ⎨ 1 = Qn ⎩ O

(0 ≤ k ≤ n) (k > n)

(n = 0, 1, . . . ).

Then we have ( N¯ , q)0 = q −1 ∗ (c0 )T and ( N¯ , q)0 has AK by Part (b) of Theorem 2.10 and Theorem 2.4.



¯ Finally, let x = (xk )∞ k=0 ∈ ( N , q). Then there exists a unique complex number ξ such that     n n 1  1  qk xk − ξ = lim qk (xk − ξ ) = 0, lim n→∞ n→∞ Q n k=0 Q n k=0 that is, x (0) = x − ξ · e ∈ ( N¯ , q)0 . Hence x (0) = has AK . Therefore, it follows that x = x (0) + ξ · e = ξ · e +

∞

k=0 (x k

− ξ )e(k) , since ( N¯ , q)0

∞  (xk − ξ )e(k) , k=0



which is (2.10). Now we consider some generalized difference sequence spaces.

Definition 2.3 For any u ∈ U, we define the generalized difference sequence spaces       c0 (u ) = u −1 ∗ c0 , c(u ) = u −1 ∗ c and ∞ (u ) = u −1 ∗ ∞ ; if u = e then we obtain the following spaces introduced and studied in [36] c0 ( ) = (c0 ) , c( ) = c and ∞ ( ) = (∞ ) . Corollary 2.3 ([27, Corollary 2]) Let u ∈ U. Then each of the spaces c0 (u ), c(u ) and ∞ (u ) is a B K space with respect to the norm  · ∞ (u ) defined by x∞ (u ) = sup |u k (xk − xk−1 )| where x−1 = 0; k

c0 (u ) is a closed subspace of c(u ) and c(u ) is a closed subspace of ∞ (u ). 

Proof This follows from Theorems 2.8 and 2.9. Now we determine a basis for each of the spaces c0 (u ) and c(u ). Theorem 2.11 ([27, Theorem 3]) Let u ∈ U, b

(−1)

 n ∞  1 = (1/u) = u k=0 k

and b(k) = e −

k−1 

e( j) for k ≥ 0.

j=0

n=0

Then (a)

∞ (b(k) )∞ k=0 is a basis of c0 (u ) and every sequence x = (x k )k=0 ∈ c0 (u ) has a unique representation

x=

∞  k=0

( k x) b(k) =

∞  (xk − xk−1 )b(k) ; k=0

(2.11)



∞ (b) (b(k) )∞ k=−1 is a basis of c(u ) and every sequence x = (x k )k=0 ∈ c(u ) has a unique representation

x = ξ · b(−1) +





xk − xk−1 −

k=0

1 uk



b(k) , where ξ = lim u k (xk − xk−1 ). k→∞

(2.12) Proof (a) We obviously have b(k) ∈ c0 (u ) for k = 0, 1, . . . . Let x = (xk )∞ k=0 ∈ c0 (u ) and ε > 0 be given. Then there is a non-negative integer m 0 such that |u m (xm − xm−1 )| < ε for all m ≥ m 0 . Let m ≥ m 0 and x =

m  (xk − xk−1 )b(k) . k=0

Since

xn =

m 

⎛ (xk − xk−1 ) ⎝en −

k=0

=

xm xn

k−1 

⎞ en( j) ⎠ = xm −

j=0

m−1  j=0

en( j)

m 

(xk − xk−1 )

k= j+1

(n ≥ m) (n ≤ m − 1),

we have

= xn − xn−1

0 xn − xn−1

(n ≥ m + 1) (n ≤ m),

and so x − x n + m)

if n ≥ m.

(2.18)

m  = 0 for k ≥ m + n + 1, the sequences c(n) are given by (2.18) But since k−n for all n. If m = 1 then we obtain c(n) = e(n) − e(n+1) for all n = 0, 1, . . . and every sequence x = (xn )∞ n=0 ∈ (( p)) has a unique representation x=

 n ∞   n=0

(c)

 xk (e(n) − e(n+1) )

k=0

by (2.14) in Part (a) of Corollary 2.5. We consider the B K space c( ) = (c0 ⊕ e) of Part (b) of Example 2.3. Then the sequence c(−1) in (2.13) of Corollary 2.5 is obviously given by ck(−1) =

k  j=0

sk j = k + 1 for k = 0, 1, . . . .



If we write (k + 1) for the sequence (k + 1)∞ k=0 , then every sequence w ∈ c( ) has a unique representation w = lim wn · (k + 1) + n→∞

∞ 

( wn − lim wn )(e − e[n−1] ) n→∞

n=0

by (2.16) in part c of Corollary 2.5. (d) Finally, we consider the space cs = (c0 ⊕ e) . Then we obviously have c(−1) = e(0) for the sequence in (2.13) of Corollary 2.5. Now every sequence w ∈ cs has a unique representation by (2.16) in Part (c) of Corollary 2.5

w= =

∞ 

wn · e

(0)

+

 n ∞  

n=0

n=0

∞ 

∞ 

wn · e(0) +

n=0

n=0



wk −

k=0 ∞ 

∞ 

 wk (e(n) − e(n+1) )

k=0

 wk

· (e(n+1) − e(n) ).

(2.19)

k=n+1

 We write y = T w − ξ · ewhere ξ = limn→∞ Tn w = limn→∞ nk=0 wk . Then  (n) (n+1) and ∞ converge (in the ∞ y ∈ c0 and so the series ∞ n=0 yn e n=0 yn e norm), and it follows from (2.19)

w = ξ · e(0) +

∞ 

yn (e(n) − e(n+1) ) = ξ · e(0) +

n=0

= ξ · e(0) + y0 · e(0) + =

yn e(n) −

n=0 ∞  n=1

∞ 

∞ 

(yn − yn−1 )e(n) = w0 · e(0) +

∞ 

yn e(n+1)

n=0 ∞ 

wn · e(n)

n=1

wn · e(n) ,

n=0

that is, cs has AK . We close this section with one more example to obtain the results given for the Schauder bases of the spaces in [2, 5, 12]. Example 2.8 (a) The Euler sequence spaces were studied in [2, 5]; they are r ∞ defined as follows: Let 0 < r < 1 and E r = (enk )n,k=0 be the Euler matrix of order r with   n (1 − r )n−k r k (0 ≤ k ≤ n) r enk = k (n = 0, 1, . . . ). 0 (k > n) Then the Euler sequence spaces are defined by erp = ( p ) E r for 1 ≤ p < ∞, r = (∞ ) E r . Writing T = E r , for short, we observe e0r = (c0 ) E r , ecr = c E r and e∞



that the inverse S = (snk )∞ n,k=0 of the triangle T is given by snk

  n (r − 1)n−k r −n = k 0

(0 ≤ k ≤ n) (k > n)

(n = 0, 1, . . . ).

(2.20)

Now [2, Theorem (i), (ii)] is an immediate consequence of (2.14) and (2.15) in Parts (a) and (b) of Corollary 2.5; Part (a) also yields Schauder bases for the spaces erp (1 ≤ p < ∞). (b) The spaces a0r ( ) and acr ( ) were introduced and studied in [12] as follows. If T = (tnk )∞ n,k=0 is the triangle defined by

tnk

⎧ 1 ⎪ ⎪ (r k − r k+1 ) ⎪ ⎪ ⎨n + 1 n+1 = r ⎪ ⎪ n+1 ⎪ ⎪ ⎩ 0

(0 ≤ k ≤ n − 1) (k = n)

(n = 0, 1, . . . ),

(k > n)

then a0r ( ) = (c0 )T and acr = cT . Since the inverse matrix S = (snk )∞ n,k=0 of T is given by

snk



1 1 ⎪ ⎪ (k + 1) − ⎪ ⎪ ⎨ 1 + rk 1 + r k+1 = n+1 ⎪ ⎪ 1 + rn ⎪ ⎪ ⎩ 0

(0 ≤ k ≤ n − 1) (k = n)

(n = 0, 1, . . . ),

(k > n)

(2.21) [12, Theorem 3 (a), (b)] is an immediate consequence of (2.8) and Parts (a) and (b) of Corollary 2.5. Remark 2.1 We write 1/(n + 1) = (1/(n + 1))∞ n=0 . It was shown in [41, Corollary 2.5] that a0r ( ) = (1/(n + 1))−1 ∗ c0 , acr ( ) = (1/(n + 1))−1 ∗ c and

r ( ) = (1/(n + 1))−1 ∗ ∞ . a∞

r By [41, Remark 2.6] a0r ( ) has AK , and every sequence x = (xk )∞ k=0 ∈ ac ( ) has a unique representation

x = (n + 1) · ξ −

∞  (xn − (n + 1)ξ )e(n) , where ξ = lim n=0

n→∞

xn . n+1

We note that the last two statements are special cases of αn = n + 1 (n = 0, 1, . . . ) of the corresponding statements in Example 2.2.
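The inversion formula (2.20) of Example 2.8 (a) can be checked directly on finite sections. The following sketch (an added illustration; the section size N and the value of r are arbitrary choices) builds E^r and the matrix S given in (2.20) and verifies that they are inverse to one another.

import numpy as np
from math import comb

# Added illustration: a finite section of the Euler matrix E^r and of its inverse from (2.20).
N, r = 7, 0.4
E = np.array([[comb(n, k) * (1 - r) ** (n - k) * r ** k if k <= n else 0.0
               for k in range(N)] for n in range(N)])
S = np.array([[comb(n, k) * (r - 1) ** (n - k) * r ** (-n) if k <= n else 0.0
               for k in range(N)] for n in range(N)])
print(np.allclose(E @ S, np.eye(N)) and np.allclose(S @ E, np.eye(N)))   # True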



2.3 The Multiplier Space M(X  , Y ) In this and the following sections, we determine some dual spaces of matrix domains of triangles. Among other things, we reduce the determination of (X T )β for certain F K spaces to that of the determination of X β and the characterization of the class (X, c0 ). We start with an almost trivial, but useful result that gives the multiplier of the matrix domain of a diagonal matrix and a sequence space. Proposition 2.1 Let u, v ∈ U. Then we have for arbitrary subsets X and Y of ω M(u −1 ∗ X, Y ) = (1/u)−1 ∗ M(X, Y ), in particular, if † denotes any of the symbols α, β or γ , then (u −1 ∗ X )† = (1/u)−1 ∗ X † . We also have A ∈ (u −1 ∗ X, v−1 ∗ Y ) if and only if B ∈ (X, Y ), where bnk =

vn ank for n, k = 0, 1, . . . . uk

Proof Since x ∈ u −1 ∗ X if and only if y = u · x ∈ X and a · x = b · y where b = a/u = a · (1/u), it follows that a ∈ M(u −1 ∗ X, Y ) if and only if b ∈ M(X, Y ), that  is, a ∈ (1/u)−1 ∗ M(X, Y ). First we determine the multiplier space of X in Y . From this, we will easily obtain the α-, β- and γ -duals of X T in the special case T = . A subset X of ω is said to be normal, if x ∈ X and |yk | ≤ |xk | for all k together imply y ∈ X . − ∞ − − )n,k=0 be the matrix with tn,n−1 = 1 and tnk = 0 (k = 0) for n = Let T − = (tnk − 0, 1, . . . , that is, T is the matrix of the right shift operator. We also write + = ∞ ( + k )n,k=0 for the matrix of the forward differences with

+ nk

⎧ ⎪ ⎨1 = −1 ⎪ ⎩ 0

(k = n) (k = n + 1) (k = n, n + 1)

(n = 0, 1, . . . ).

We obviously have = − + · T − = −T − · + . Theorem 2.13 ([39, Theorem 2.3]) Let X be any set of sequences, Y be a linear subspace of ω and Y ⊂ YT − . We put



Z 1 = (M(X, Y )) + , Z 2 = M(X, Y ) and Z 3 = M(X, Y ). Then, we have Z 1 ∩ Z 2 ⊂ M (X , Y ) .

(2.22)

If, in addition, X and Y are normal and YT − ⊂ Y then M (X , Y ) = Z 1 ∩ Z 3 .

(2.23)

Proof We write Z = X and observe that z ∈ Z if and only if x = z ∈ X ; furthermore, we have z = x. We obtain T − (x · + a) + (a · x) = (T − x) · (T − ( + a)) + a · x − T − (a · x) = −(T − x) · ( a) + a · x − (T − a) · (T − x) = −(T − x) · ( a + T − a) + a · x = a · (x − T − x) = a · x = a · z, that is,

a · z = T − (x · + a) + (a · x).

(2.24)

(i) First we show the inclusion in (2.22) We assume a ∈ Z 1 ∩ Z 2 . Let z ∈ Z be given. Then x ∈ X , and a ∈ Z 1 implies + ∈ M(X, Y ), hence x · + a ∈ Y ⊂ YT − , that is, T − (x · + a) ∈ Y . Also, a ∈ Z 2 implies a · x ∈ Y , hence (a · x) ∈ Y . Since Y is a linear space, (2.22) follows from (2.24). (ii) Now we show that Y ⊂ YT − implies Z3 ⊂ Z2.

(2.25)

Let a ∈ Z 3 and x ∈ X be given. Then we have a · x ∈ Y ⊂ YT − , hence T − (a · x) ∈ Y , and so (a · x) = a · x − T − (a · x) ∈ Y, since Y is a linear space.

(iii)

Thus, we have a ∈ M(X, Y ) = Z 2 and we have shown (2.25). Now we show that if X and Y are normal and YT − ⊂ Y , then M(Z , Y ) ⊂ Z 1 ∩ Z 3 .

(2.26)

We assume a ∈ M(Z , Y ). Let x ∈ X be given. Then we have a · x = a · z ∈ Y . We define the sequence x˜ by x˜n = (−1)n |xn | for n = 0, 1, . . . .



Since X is normal, it follows that x˜ ∈ X , hence z˜ = x˜ ∈ Z , and consequently ∞  a · z˜ = (−1)n an (|xn | + |xn−1 |) n=0 ∈ Y. Furthermore, |an xn | ≤ |an z˜ n | for all n implies a · x ∈ Y , since Y is normal. This shows a ∈ Z 3 = M(X, Y ). By (2.25), this implies a ∈ Z 2 , that is, (a · x) ∈ Y . Therefore, we obtain from (2.24) T − (x · + a) ∈ Y, since Y is a linear space, and thus x · ( a) ∈ YT − ⊂ Y , that is, a ∈ Z 1 . Thus, we have shown (2.26). 

Finally, we obtain (2.23) from (2.26), (2.25) and (2.22).
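Identity (2.24) can also be verified numerically. The sketch below (an added illustration; it spells out our reading of (2.24), with the operators T^−, Δ^+ and Δ written out explicitly, and uses arbitrary random data) checks the identity componentwise on a finite section.

import numpy as np

# Added illustration: check of a.z = T^-(x . Delta^+ a) + Delta(a.x), where x is the sequence of
# partial sums of z, (T^- w)_n = w_{n-1} with w_{-1} = 0, (Delta^+ a)_n = a_n - a_{n+1},
# and (Delta w)_n = w_n - w_{n-1}.
rng = np.random.default_rng(0)
N = 10
a, z = rng.normal(size=N), rng.normal(size=N)
x = np.cumsum(z)                                    # x with Delta x = z

def shift(w):                                       # T^-: shift to the right, leading zero
    return np.concatenate(([0.0], w[:-1]))

delta_plus_a = a - np.concatenate((a[1:], [0.0]))   # last entry is irrelevant after shifting
lhs = a * z
rhs = shift(x * delta_plus_a) + (a * x - shift(a * x))
print(np.allclose(lhs, rhs))                        # True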

2.4 The α-, β- and γ -duals of X  Now we apply Theorem 2.13 to determine the α-, β- and γ -duals of X . Since 1 is a normal linear space with (1 )T − = 1 we immediately obtain from Theorem 2.13. Corollary 2.6 ([39, Corollary 2.1]) For any subset X of ω, we have (X α ) + ∩ X α ⊂ (X )α .

(2.27)

(X )α = (X α ) + ∩ X α .

(2.28)

If X is normal, then we have

Remark 2.2 ([39, Remark 2.1]) If X is normal with X T − = X , then we have (X )α = X α . Proof We show that X ⊂ X T − implies X α ⊂ (X α ) + . Then statement follows from (2.28). Let a ∈ X α and x ∈ X be given. Then, we have a · x ∈ 1 , and a · T − x ∈ 1 , since X ⊂ X T − . It also follows from − − Tn− (x · + a) = xn−1 + n−1 = x n−1 an−1 − x n−1 an = Tn (x · a) − an Tn x

that

      − T (x · + a) ≤ T − (x · a) + an T − x  for n = 0, 1, . . . . n n n

Therefore, we a ∈ (X α ) + .

obtain

T − (x · + a) ∈ 1 ,

hence

x · + a ∈ 1

and

thus 



Example 2.9 We obtain from Remark 2.2   α ∞ p = q

( p = 1) (1 < p < ∞; q = p/( p − 1)),

((∞ ) )α = bs α = α∞ = 1 = c0α = ((c0 ) )α and 1 = bs α ⊂ cs α ⊂ ((c0 ) )α = 1 implies cs α = 1 . Now we apply Theorem 2.13 to determine the β- and γ -duals of X . Corollary 2.7 ([39, Corollary 2.2]) Let X be any subset of ω. We write   Z 1† = X †

β

+

γ

for † = β, γ , Z 2 = M(X, c), Z 2 = M(X, ∞ ) and Z 3 = M(X, c0 ).

Then we have Z 1† ∩ Z 2† ⊂ (X )† for † = β, γ .

(2.29)

If, in addition, X is normal, then we have β

γ

γ

(X )β = Z 1 ∩ Z 3 and (X )γ = Z 1 ∩ Z 2 .

(2.30)

If a ∈ (X )β , then we have ∞  k=0

ak z k =

∞   +  k a ( k z) for all z ∈ X .

(2.31)

k=0

Proof We put Z = X , Y (1; β) = cs, Y (1; γ ) = bs and Y (2; †) = (Y (1; †)) . Since cs and bs are linear spaces with cs ⊂ csT − and bs ⊂ bsT − , and since cs = c and bs = ∞ , (2.22) in Theorem 2.13 implies for † = β, γ 

X†

 +

  ∩ M (X, (Y (1; †)) ) = X † + ∩ M (X, Y (2; †)) = Z 1† ∩ Z 2† ⊂ Z † .

Now let X be normal and a ∈ Z † . We write Y˜ (2; β) = c0 and Y˜ (2; γ ) = ∞ . First a ∈ Z † implies a · z ∈ Y˜ (2; †) for all z ∈ Z . Since Y˜ (2; †) is normal, we conclude a ∈ M(X, Y˜ (2; †)) by (2.23) in Theorem 2.13. We obtain from (2.24) with x = z   (a · z) = T − x · + a + a · x.

(2.32)

Now (a · z) ∈ Y (2; †) and a · x ∈ Y (2; †) together imply ( + a) · x ∈ Y (1; †) for all x ∈ X , that is, γ γ β (2.33) Z β ⊂ Z 1 ∩ Z 3 and Z γ ⊂ Z 1 ∩ Z 2 .



Since Z 3 = M(X, c0 ) ⊂ M(x, c) = Z 2 , the identities in (2.30) follow from (2.33) and (2.29). Finally, (2.32) and (2.30) together imply (2.31).  As a first application of Corollary 2.7, we determine the α-, β- and γ -duals of the matrix domain of in the classical sequence spaces  p (1 ≤ p ≤ ∞), c0 and c. We need the following result. Proposition 2.2 ([39, Lemma 3.1]) We have (a) (b) (c) (d) (e) (f)

M(c0 , c0 ) = ∞ , M(c, c) = c, M(∞ , c0 ) = c0 , M( p , c0 ) = ∞ (1 ≤ p < ∞), M( p , ∞ ) = ∞ (1 ≤ p ≤ ∞), M(c0 , ∞ ) = ∞ . Proof (b) This is Part (ii) in Example 1.7. (a), (c) We have by Part (i) in Proposition 1.1, Parts (i) and (iii) in Example 1.7 M(c0 , c0 ) ⊂ M(c0 , c) = ∞ and M(∞ , c0 ) ⊂ M(∞ , c) = c0 . Conversely, if a ∈ ∞ , then we have a · x ∈ c0 for all x ∈ c0 , that is, a ∈ M(c0 , c0 ); and if a ∈ c0 then a · x ∈ c0 for all x ∈ ∞ , that is, a ∈ M(∞ , c0 ). Thus, we have shown Parts (a) and (c). (d) Let 1 ≤ p < ∞. Since  p ⊂ c0 , it follows from Part (ii) of Proposition 1.1 and Part (a) that M( p , c0 ) ⊃ M(c0 , c0 ) = ∞ .

(2.34)

Conversely, if a ∈ / ∞ , then there exists a subsequence (ak( j) )∞ j=0 of the sequence a such that |ak( j) | > ( j + 1)2 for j = 0, 1, . . . . We put ⎧ ⎨ 1 xk = ak( j) ⎩ 0

(k = k( j)) (k = k( j))

( j = 0, 1, . . . ).

Then we have ∞  k=0

|xk | p =



 j=0

1 |ak( j) |

p
2 for j = 0, 1, . . . . We put ⎧ 1 ⎨ |ak( j) | xk = ⎩ 0

(k = k( j)) (k = k( j))

(k = 0, 1, . . . )

Then we have ∞ 

|xk | =

k=0

∞  j=0





 1 1 < √  j < ∞, |ak( j) | j=0 2

that is, x ∈ 1 ⊂  p (1 ≤ p ≤ ∞). But ak( j) xk( j) =

√  j  |ak( j) | > 2 for j = 0, 1, . . . ,

/ M( p , c0 ). Thus, we have shown hence a · x ∈ / ∞ , that is, a ∈ M( p , ∞ ) ⊂ ∞ for 1 ≤ p ≤ ∞.

(2.37)

Now Part (e) follows from (2.36) and (2.37). (f) We obtain from Part (ii) in Proposition 1.1 and Part (e) ∞ ⊂ M(∞ , ∞ ) ⊂ M(c0 , ∞ ) ⊂ M(1 , ∞ ) = ∞ .  Now we can determine the duals (X ) for † = β, γ and X =  p , c0 , c (1 ≤ p ≤ ∞). †

Corollary 2.8 We have (a)  β ∞ ( p ) = (q ) + ∩ ∞

( p = 1) (1 < p < ∞; q = p/( p − 1)),

((c0 ) )β = (c )β = cs β = bv ([65, Theorem, 7.3.5 (v))], ((∞ ) )β = bs β = bv ∩ c0 ([65, Theorem 7.3.5 (vi)]);



(b)



( p )



β  = ( p ) for 1 ≤ p < ∞

and ((c0 ) )γ = cs γ = bs γ = bv ([65, Theorem 7.3.5 (v), (vii)]). Proof Since  p (1 ≤ p ≤ ∞) and c0 are normal, we can apply (2.30) in Corollary 2.7 to obtain    β   β ( p ) = βp + ∩ M( p , c0 ) and ((c0 ) )β = c0

+

∩ M(c0 , c0 )

(2.38)

and γ    γ  ( p ) = γp + ∩ M( p , ∞ ) and ((c0 ) )γ = c0 + ∩ M(c0 , ∞ ). (2.39) β

γ

We observe that  p =  p = q for 1 ≤ p ≤ ∞ where q = ∞ for p = 1 and q = 1 γ β for p = ∞, c0 = c0 = 1 , M( p , c0 ) = M( p , ∞ ) = ∞ for 1 ≤ p < ∞ by Parts (d) (e) in Proposition 2.2, and M(c0 , c0 ) = M(c0 , ∞ ) = ∞ by Parts (a) and (f) of Proposition 2.2. Hence, it follows from (2.38) and (2.39) that β  γ    ( p ) = ( p ) = q + ∩ ∞ for 1 ≤ p < ∞ and

((c0 ) )β = ((c0 ) )γ = (1 ) + ∩ ∞ = bv ∩ ∞ .

Since obviously ∞ ⊂ (∞ ) + and bv ⊂ ∞ , we have ((1 ) )β = ∞ and ((c0 ) )β = bv. Now if p = ∞ then we have by (2.38) and Part (c) of Proposition 2.2 ((∞ ) )β = (1 ) + ∩ M(∞ , c0 ) = bv ∩ c0 and by (2.39) and Part (e) of Proposition 2.2 ((∞ ) )γ = (1 ) + ∩ M(∞ , ∞ ) = bv ∩ ∞ = bv. It remains to show the statements for cs. We obtain from (2.29) in Theorem 2.7 and Part (b) of Proposition 2.2 cs β = (c )β ⊃ (cβ ) + ∩ M(c, c) = bv ∩ c, but bv ⊂ c, so cs β ⊃ bv. Conversely, we have cs β = (c )β ⊂ ((c0 ) )β = bv. Finally,



bv = bs γ ⊃ cs γ ⊃ ((c0 ) )γ = bv yields cs γ = bv.



Unlike as in the case of p = 1, the β- and γ -duals of ( p ) cannot be reduced for 1 < p < ∞. Remark 2.3 We have      p + ⊂ ∞ and ∞ ⊂  p + for 1 < p < ∞. Proof We define the sequence x by xk = k 1/2q (k = 0, 1, . . . ), where q = p/( p − 1). Then, by the inequality in (A.5), there exists a constant M such that    +   k x  = |xk − xk+1 | ≤ M · (k + 1)1/2q−1 = M · (k + 1)1/q−1−1/2q for k = 1, 2, . . . ,

hence  + p  x  ≤ M p · (k + 1) p(−(1−1/q))− p/2q = M p · (k + 1)−1− p/2q for k = 1, 2, . . . . k Since p/2q > 0, it follows that ∞   + p  x  < ∞, k k=1

  / ∞ . This shows  p + ⊂ ∞ . hence x ∈ ( p ) + , but clearly x ∈ Now we define the sequence x by xk = (−1)k (k = 0, 1, . . . ). Then clearly x ∈ + / ( p ) + . This shows ∞ ⊂ ∞ , but | k x| = |xk − xk+1 | = 2 for all k, hence x ∈   p + . Now we consider the so-called spaces of generalized weighted means. Definition 2.4 Let u, v ∈ U and X ∈ ω. Then the sets W (u, v; X ) of generalized weighted means are defined by W (u, v; X ) = v−1 ∗ (u −1 ∗ X ) = {x ∈ ω : u ∗ (v · x) ∈ X } . There are the following special cases. Example 2.10 (a) We have W (e, e; c) = cs and W (e, e; ∞ ) = bs. (b)  If v = q is a non-negative sequence with q0 > 0 and u = 1/Q with Q n = n k=0 qk for n = 0, 1, . . . , then we obtain the sets of weighted means of Definition 2.2 W (1/Q, q; c0 ) = ( N¯ , q)0 , W (1/Q, q; c) = ( N¯ , q) and W (1/Q, q; ∞ ) = ( N¯ , q)∞ .



(c)

∞ If v = (1 + r k )∞ k=0 for some fixed r with 0 < r < 1 and u = (1/(n + 1))n=0 , then we obtain the sets

a rp = W (u, v;  p ) (1 ≤ p < ∞) ([10]), and r = W (u, v; ∞ ) ([11]). a0r = W (u, v; c0 ), a r = W (u, v; c) and a∞

(d) If v = e and u = (1/(n + 1))∞ n=0 , then we obtain the sets of Example 2.5 (1) , W (u, v; c0 ) = C0(1) , W (u, v; c) = C (1) and W (u, v; ∞ ) = C∞

and the Cesàro sequence spaces of non-absolute type X p = W (u, v;  p ) for 1 ≤ p ≤ ∞ ([57]). Now we give the α-, β- and γ -duals of the generalized weighted means. Corollary 2.9 ([39, Theorem 2.4]) Let u, v ∈ U, b = (1/u) · + (a/v) and d = a/(u · v). Then, we have (W (u, v; X ))α ⊃ {a ∈ ω : b ∈ X α and d ∈ X α },

(a)

(W (u, v; X ))β ⊃ {a ∈ ω : b ∈ X β and d ∈ M(X, c)}, (W (u, v; X ))γ ⊃ {a ∈ ω : b ∈ X γ and d ∈ M(X, ∞ )} . (b)

If, in addition, X is normal, then (W (u, v; X ))α = {a ∈ ω : b ∈ X α and d ∈ X α }, (W (u, v; X ))β = {a ∈ ω : b ∈ X β and d ∈ M(X, c0 )}, (W (u, v; X ))γ = {a ∈ ω : b ∈ X γ and d ∈ M(X, ∞ )} . If a ∈ (W (u, v; X ))β , then we have ∞  k=0

ak z k =

∞ 

+ k (a/v) k (v · z) for all z ∈ W (u, v; X ).

k=0

Proof Writing W = W (u, v; X ) we have by Proposition 2.1 W † = (1/v)−1 ∗ ((u −1 ∗ X ) )† for † = α, β, γ .

(2.40)



(i) We put Y = u −1 ∗ X and obtain by the inclusion in (2.27) of Corollary 2.6 and Proposition 2.1   (Y )α ⊃ (Y α ) + ∩ Y α = (1/u)−1 ∗ X α + ∩ (1/u)−1 ∗ X α .

(2.41)

Hence if a ∈ (1/v)−1 ∗



(1/u)−1 ∗ X α

 +

 ∩ ((1/u)−1 ∗ X α ) ,

that is, (1/u) · + (a/v) = b ∈ X α and (1/v) · (1/u) = 1/(u · v) = d ∈ X α , then a ∈ W α . This shows W α ⊃ {a ∈ ω : b ∈ X α and d ∈ X α }. Also, if X is normal then there is equality in (2.41) by (2.28) in Corollary 2.6, and so (1/v) · (1/u) = 1/(u · v) = d ∈ X α . Thus, we have shown the first statements in Parts (a) and (b). (ii) Now we obtain by the inclusion in (2.29) of Corollary 2.7 and Proposition 2.1     (Y )† ⊃ (Y † ) + ∩ M(Y, c) = (1/u)−1 ∗ X † + ∩ (1/u)−1 ∗ M(X, c) . Arguing as at the end of the proof of Part (i), we obtain the second and third inclusions in Part (a). Furthermore, if X is normal then we obtain by the first identity in (2.30) of Corollary 2.7 and Proposition 2.1     (Y )β = (Y β ) + ∩ M(Y, c0 ) = (1/u)−1 ∗ X β ∩ (1/u)−1 ∗ M(X, c0 ) , and by the second identity in (2.30) of Corollary 2.7 and Proposition 2.1     (Y )γ = (Y γ ) + ∩ M(Y, ∞ ) = (1/u)−1 ∗ X γ ∩ (1/u)−1 ∗ M(X, ∞ ) .

(iii)

Now the second and third identities in Part (b) follow as before. Finally, we show (2.40). Let a ∈ W β and z ∈ W be given. Then we have x = u · (v · z) ∈ X , z = (1/v) · (x/u) and n 

ak z k =

k=0

n  ak k=0 n−1 

vk

k (x/u) =

n−1  k=0

+ k (a/v) ·

xk an x n + uk u n vn

+ k (a/v) · k (v · z) + dn x n for n = 0, 1, . . . .

(2.42)

It follows from d ∈ M(X, c0 ) that d · x ∈ c0 , and so (2.40) follows.



=

k=0



Now we consider the special case of Corollary 2.9 when X is any of the classical sequence spaces  p (1 ≤ p ≤ ∞), c0 and c. Corollary 2.10 ([39, Theorem 3.1]) We write q = ∞ for p = 1, q = p/( p − 1) for 1 < p < ∞ and q = 1 for p = ∞. Using the notations of Corollary 2.9, we obtain 

  = a ∈ ω : b ∈ q and d ∈ q for 1 ≤ p ≤ ∞, (W (u, v; c))α = (W (u, v; c0 ))α = {a ∈ ω : b ∈ 1 and d ∈ 1 } ,   β  W (u, v;  p ) = a ∈ ω : b ∈ q and d ∈ ∞ for 1 ≤ p < ∞, W (u, v;  p )



(W (u, v; ∞ ))β = {a ∈ ω : b ∈ 1 and d ∈ c0 } , (W (u, v; c0 ))β = {a ∈ ω : b ∈ 1 and d ∈ ∞ } ,  and

(W (u, v; c))β = {a ∈ ω : b ∈ 1 and d ∈ c} ,  γ  W (u, v;  p ) = a ∈ ω : b ∈ q and d ∈ ∞ for 1 ≤ p ≤ ∞

(W (u, v; c))γ = (W (u, v; c0 ))γ = {a ∈ ω : b ∈ 1 and d ∈ ∞ }.

Proof All the assertions except for the duals of W (u, v; c) are immediate consequences of Corollary 2.9 and Proposition 2.2. Furthermore, W (u, v; c0 ) ⊂ W (u, v; c) ⊂ W (u, v; ∞ ) and (W (u, v; c0 ))α = (W (u, v; ∞ ))α imply (W (u, v; c0 ))α = (W (u, v; c))α . We also have by Part (ii) of Proposition 1.1 and the second identity of Part (b) of Corollary 2.9   (W (u, v; c))β ⊂ (W (u, v, c0 ))β = a ∈ ω : (1/u) · + (a/v) ∈ 1 and d ∈ ∞   ⊂ a ∈ ω : (1/u) · + (a/v) ∈ c0 . So a ∈ (W (u, v; c))β implies d ∈ c by (2.42), hence   (W (u, v; c))β ⊂ (W (u, v, c0 ))β = a ∈ ω : (1/u) · + (a/v) ∈ 1 and d ∈ c . Conversely, we have by (2.29) in Corollary 2.7   (W (u, v; c))β ⊃ a ∈ ω : (1/u) · + (a/v) ∈ 1 and d ∈ c . The part concerning the γ -duals is proved similarly. Corollary 2.10 yields many special cases.





Example 2.11 ([39, Example 3.1]) (a)

([27, Theorem 6] for the β-duals) n Let (qk )∞ k=0 qk for k=0 be a non-negative sequence with q0 > 0 and Q n = n = 0, 1, . . . . We put

bk = Q k

ak ak+1 − qk qk+1

and dk =

Q k ak for k = 0, 1, . . . . qk

Then it follows from Corollary 2.10 that  α ( N¯ , q)0  β ( N¯ , q)0  β N¯ , q  β ( N¯ , q)∞  γ ( N¯ , q)0

α  = ( N¯ , q)α = ( N¯ , q)∞ = {a ∈ ω : b ∈ 1 and d ∈ 1 } , = {a ∈ ω : b ∈ 1 and d ∈ ∞ } , = {a ∈ ω : b ∈ 1 and d ∈ c} , = {a ∈ ω : b ∈ 1 and d ∈ c0 } , γ  = ( N¯ , q)γ = ( N¯ , q)∞ = {a ∈ ω : b ∈ 1 and d ∈ ∞ } . β

(b) ([57, Theorem 6] for X p (1 < p ≤ ∞)) If v = e and u = (1/(n + 1))∞ n=1 , then we put bk = (k + 1)(ak − ak+1 ) and dk = (k + 1)ak for k = 0, 1, . . . , and obtain from Corollary 2.10   X αp = a ∈ ω : d ∈ q (1 ≤ p ≤ ∞),   X βp = X γp = a ∈ ω : b ∈ q and d ∈ ∞ , (1 ≤ p < ∞) β β X∞ = C∞ = {a ∈ ω : b ∈ 1 and d ∈ c0 } , β

C0 = {a ∈ ω : b ∈ 1 and d ∈ ∞ } , C β = {a ∈ ω : b ∈ 1 and d ∈ c} and

(c)

γ γ = C∞ = {a ∈ ω : b ∈ 1 and d ∈ c0 } . X∞

([65, Theorem 7.3.5 (ii), (v), (vii) and (v)] for the β- and γ -duals) Let cs0 = (c0 ) . Then cs0α = 1 . Furthermore, bs α = 1 , and so cs α = 1 ; bs α = bv0 = bv ∩ c0 ; cs β = bv, since bv ⊂ c; bs γ = bv ∩ ∞ = bv and cs γ = bv.



2.5 The α- and β-duals of X (m) In this section, we study the α-, β- and γ -duals of the matrix domains of the difference operator. The situation is more complicated than in the case of the duals of matrix domains of the sum operator in Sect. 2.4. We start, however, with an almost trivial result. Proposition 2.3 Let a be a sequence, T be a triangle and S be its inverse. We define the matrix B = B(T ; a) = (bnk )∞ n,k=0 by Bn = an · Sn for all n, that is, bnk = an snk for all n, k = 0, 1, . . . . Then we have for all subsets X and Y of ω, a ∈ M(X T , Y ) if and only if B ∈ (X, Y ).

(2.43)

Proof Since z ∈ Z = X T if and only if x = T z ∈ X , and z = Sx, it follows that an z n = an (Sn x) = an

n 

snk xk =

k=0

n 

an snk xk = Bn x for n = 0, 1, . . . ,

(2.44)

k=0



and the statement in (2.43) is an immediate consequence. We apply Proposition 2.3 to determine the α-dual of the set bv = (1 ) .

Example 2.12 Let T = . Then we have S = (Example 2.3), and so the matrix B of Proposition 2.3 is defined by [n]

Bn = an e , that is, bnk

an = 0

(0 ≤ k ≤ n) (k > n)

(n = 0, 1, . . . ).

Therefore, we have by (2.43) a ∈ bvα = ((1 ) )α if and only if B ∈ (1 , 1 ), and by 19. in Theorem 1.23, this is the case if and only if the condition in (19.1) is satisfied, that is, if and only if sup k

∞ 

|bnk | = sup

n=0

k

∞ 

|an | < ∞,

n=k

hence a ∈ 1 . Therefore, we have bvα = 1 .

(2.45)



Now we determine the β- and γ -duals of the set bv. Example 2.13 Using the notations of Example 2.12, we obtain from (2.44), n 

ak z k =

k=0

n  k 

⎛ ⎞ n n   ⎝ bk j x j = b jk ⎠ x j

k=0 j=0

j=0

k= j

for all z ∈ X T , that is, for all x = Sz ∈ X . We write C = (cnk )∞ n,k=0 for the matrix with ⎧ n ⎨ b jk (0 ≤ k ≤ n) cnk = j=k (n = 0, 1, . . . ), ⎩ 0 (k > n) and obtain a ∈ M(X T , Y ) if and only if C ∈ (X, Y ).

(2.46)

If T = , then we have cnk =

⎧ n ⎨ aj ⎩

(0 ≤ k ≤ n)

j=k

(n = 0, 1, . . . )

(k > n)

0

and (2.46) yields for X = 1 and Y = c a ∈ bvβ = M(bv, cs) = M ((1 ) , c ) if and only if C ∈ (1 , c). By 9. in Theorem 1.23, this is the case if and only if the conditions in (4.1) and (7.1) are satisfied, that is, if and only if sup |cnk | < ∞ and lim cnk = n,k

n→∞

∞ 

a j exists for each k.

j=k

Obviously these conditions are equivalent to a ∈ cs and consequently bvβ = cs ([65, Theorem 7.3.5 (iii)]).

(2.47)

If Y = ∞ then we similarly obtain a ∈ bvγ = M(bv, bs) = M ((1 ) , (∞ ) ) if and only if C ∈ (1 , ∞ ). By 4. in Theorem 1.23, this is the case if and only if the condition in (4.1) is satisfied, that is, if and only if



   n     sup |cnk | = sup  a j  < ∞. n,k k,n≥k  j=k  Obviously this condition is equivalent to a ∈ bs, and so bvγ = bs ([65, Theorem 7.3.5 (iv)]).

(2.48)

Remark 2.4 The determination of (X T )α for arbitrary F K spaces X and triangles T is equivalent to B(X, 1 ) by (2.43), and by (1.21) in Theorem 1.13, this is equivalent to  sup N ⊂ N0 N finite



n∈N

∞

∞         = sup bnk xk  sup    N ⊂ N0 x∈Bδ (0) k=0 n∈N N finite ∞          = sup an snk xk  < ∞. sup    N ⊂ N0 x∈Bδ (0) 



bnk k=0 X,δ

N finite

k=0

n∈N ,n≥k

This condition is not particularly practicable, in general, for instance, we obtain the following condition in the simple case of X = ∞ and T = :  ∞       sup an  < ∞.    N ⊂ N0 N finite

k=0 n∈N ,n≥k

Therefore, we would want to find more applicable conditions. But it seems there is no general approach in the case of α-duals. Let m ∈ M, u ∈ U and (m) be the triangle of the m th -order difference operator (Example 2.3). We define the sets   X (u (m) ) = x ∈ ω : u · (m) x ∈ X for X = c0 , c, ∞ . For m = 1, these sets reduce to the generalized difference spaces c0 (u ), c(u ) and ∞ (u ) of Definition 2.3. The following result holds. Theorem 2.14 Let m ∈ N, u ∈ U and



 −1 M α,(m) (u) = (m) 1/|u| ∗ 1 ⎫ ⎧ ∞ k

⎬ ⎨   m+k− j −1 1 n)

0

and Cn(m) = bn,(m) and Dn(m) = (1/u) · Cn(m) for n = 0, 1, . . . , that is, (m) dnk

⎧1  n   m+ j−k−1 ⎨ aj j−k u = k j=k ⎩ 0

(0 ≤ k ≤ n)

(n = 0, 1, . . . ).

(k > n)

We note that, by Proposition 2.3, a ∈ (X (u (m) ))β if and only if C (m) ∈ (u −1 ∗ X, c) and this is the case by Proposition 2.1 if and only if D ∈ (X, c).

(a)

(2.58)

First we show Part (a). (a.i)

First we show that if a ∈ (c0 (u (m) ))β then the conditions in (2.53) and (2.54) are satisfied. Let a ∈ (c0 (u (m) ))β be given. Then we have D (m) ∈ (c0 , c) by (2.58) and by 12. in Theorem 1.23 this is the case if and only if the conditions in (1.1) and (11.2) hold, that is, K D = sup n

∞ 

(m) |dnk | = sup n

k=0

n  |bn,(m) | k

k=0

|u k |

n)

Let x ∈ c0 (u (m) ). Then we have y = m x ∈ u −1 ∗ c0 , and interchanging the order of summation and noting that n−1

 m+ j −k−1 a j = 0 for all n = 0, 1, . . . , j −k j=n

bnn−1,(m) = we obtain n−1  k=0

k

 m+k− j −1 yj k− j k=0 k=0 j=0 ⎛ ⎞ n−1  n−1

 m + k − j − 1 ⎝ ak ⎠ y j = k− j j=0 k= j ⎛ ⎞ n−1  n−1

 m + j − k − 1 ⎝ a j ⎠ yk = j −k k=0 j=k

ak x k =

=

n−1 

n−1  k=0

=

n 

ak k(m) y =

bkn−1,(m) yk ⎛ ⎝

k=0

=

n   k=0

n−1 

=

ak

n 

bkn−1,(m) yk

k=0

⎞ ∞

 m+ j −k−1 m+ j −k−1 aj − a j ⎠ yk j −k j − k j=n



 j=k

n   (m) bk(m) − wnk yk = bk(m) yk − Wn(m) y, k=0



that is,

n−1 

ak x k =

k=0

n 

bk(m) yk − Wn(m) y for n = 0, 1, . . . .

(2.61)

k=0

Now since a ∈ (c0 (u (m) )β and b(m) ∈ (u −1 ∗ c0 )β , it follows from (2.61) that W (m) ∈ (u −1 ∗ c0 , c), and so V (m) ∈ (c0 , c) where the matrix V (m) = (m) ∞ (m) (m) )n,k=0 is given by vnk = wnk /u k for n, k = 0, 1, . . . . It follows from (vnk (1.1) in Part 12. of Theorem 1.23 that V (m) ∈ (c0 , c) implies

sup n

∞  k=0

(m) |vnk | = sup n

n  |w(m) | nk

k=0

|u k |

   ∞

n   1   m + j − k − 1  a j  < ∞ = sup  u j −k n  k=0  k j=n

(2.62)

which is (2.54). Thus, we have shown that a ∈ (c0 (u (m) ))β implies the conditions in (2.53) and (2.54). This completes the proof of Part (a.i). (a.ii) Now we show that a ∈ (c0 (u (m) ))β implies (2.55). It follows from (2.60) and the definition of the matrix V (m) that lim v(m) n→∞ nk



(m) wnk 1  m+ j −k−1 a j = 0 (2.63) = lim = lim n→∞ u k n→∞ u k j −k j=n

for each k. Therefore, if a ∈ (c0 (u (m) ))β then we have b(m) ∈ (u −1 ∗ c0 )β and (2.62) and (2.63) imply W (m) ∈ (u −1 ∗ c0 , c0 ) by (1.1) and (7.1) in 7. of Theorem 1.23. Thus, (2.55) follows from (2.61). This completes the proof of Part (a.ii). (a.iii) Now we show that (2.53) and (2.54) imply a ∈ (c0 (u (m) ))β . We assume that the conditions in (2.53) and (2.54) are satisfied. Then again (2.63) holds and this and the condition in (2.54) together imply W (m) ∈ (u −1 ∗ c0 , c0 ) by (1.1) and (7.1) in 7. of Theorem 1.23. Furthermore, the condition in (2.53) obviously implies b(m) ∈ (u −1 ∗ c0 )β . Now it follows from (2.61) that a · x ∈ cs for all x ∈ c0 (u (m) ), that is, a ∈ (c0 (u (m) ))β . This completes the proof of Part (a.iii). Thus, we have shown Part (a). (b) Now we show Part (b). (b.i)

First, we show that if a ∈ (c(u (m) ))β , then the conditions in (2.53), (2.54) and (2.56) are satisfied.



Let a ∈ (c(u (m) ))β be given. Then we have a ∈ (c0 (u (m) ))β and the conditions in (2.53) and (2.54) are satisfied by Part (a). Furthermore, we have seen in Part (a.i) of the proof of Part (a) that a ∈ (c0 (u (m) ))β implies b(m) ∈ (u −1 ∗ c0 )β and W (m) ∈ (u −1 ∗ c0 , c0 ). Now let x ∈ c(u (m) ). Then there exists a complex number ξ such that u · (m) x − ξ e ∈ c0 . We put y = x − ξ (m) (1/u). Then (m) y = (m) x − ξ(1/u) and u · (m) y = u · (m) x − ξ e ∈ c0 , that is, y ∈ c0 (u (m) ), and we obtain as in (2.61) with z = (m) y ∈ u −1 ∗ c0 for n = 0, 1, . . . n−1 

ak x k =

k=0

n−1 

ak yk + ξ

k=0

=

n 

n−1 

ak k(m) (1/u)

k=0

bk(m) z k − Wn(m) z + ξ

k=0

n−1 

ak k(m) (1/u).

(2.64)

k=0

Now, by (2.64), a ∈ (c0 (u (m) ))β , b(m) ∈ (u −1 ∗ c0 )β and W (m) ∈ (u −1 ∗ c0 , c0 ) imply a ∈ ( (m) (1/u))−1 ∗ cs which is (2.56). Thus, we have shown that a ∈ (c(u (m) ))β implies that the conditions in (2.53), (2.54) and (2.56) are satisfied. This completes the proof of Part (b.i). (b.ii) Now we show that the conditions in (2.53), (2.54) and (2.56) imply a ∈ (c(u (m) ))β . We assume that the conditions in (2.53), (2.54) and (2.56) are satisfied. Then the conditions in (2.53) and (2.54) together imply W (m) ∈ (u −1 ∗ c0 , c0 ) and b(m) ∈ (u −1 ∗ c0 ) as in Part (a.ii) of the proof of Part (a). Finally, it follows from (2.64) and the condition in (2.56) that a · x ∈ cs for all x ∈ c(u (m) ), that is, a ∈ (c(u (m) ))β . This completes the proof of Part (b.ii). (c)

Thus, we have shown Part (b). Now we show Part (c). (c.i)

First, we show that if a ∈ (∞ (u (m) ))β then the conditions in (2.53) and (2.57) are satisfied. We assume that a ∈ (∞ (u (m) ))β . Then it follows that a ∈ (c0 (u (m) ))β , and so b(m) ∈ (u −1 ∗ c0 )β as in the proof of Part (b.i), but obviously (u −1 ∗ c0 )β = (u −1 ∗ ∞ )β . Now, since a ∈ (∞ (u (m) ))β and b(m) ∈ (u −1 ∗ ∞ )β , it follows from (2.61) and (2.63) that W (m) ∈ (u −1 ∗ ∞ , c0 ). By (6.1) in 6. of Theorem 1.23, W (m) ∈ (u −1 ∗ ∞ , c0 ) implies      ∞

∞  (m)  n       w 1 m + j − k − 1  nk   lim a j  = 0,   = lim  n→∞  u k  n→∞ j −k  uk  k=0

k=0

j=n



which is (2.57). Thus, we have shown that a ∈ (∞ (u (m) ))β implies that the conditions in (2.53) and (2.57) are satisfied. This completes the proof of Part (c.i). (c.ii) Now we show that if a ∈ (∞ (u (m) ))β then (2.55) holds for all x ∈ ∞ (u (m) ). If a ∈ (∞ (u (m) ))β , then, as we have seen in the proof of Part (c.i), b(m) ∈ (u −1 ∗ ∞ )β and W (m) ∈ (u −1 ∗ ∞ , c0 ). Therefore, it follows from (2.61) that (2.55) holds for all x ∈ ∞ (u (m) ). This completes the proof of Part (c.ii). (c.iii) Finally, we show that if the conditions in (2.53) and (2.57) are satisfied, then a ∈ (∞ (u (m) ))β . We assume that the conditions in (2.53) and (2.57) are satisfied. Then, as β before, (2.53) implies b(m) /u ∈ 1 = ∞ , that is, b(m) ∈ (u −1 ∗ ∞ )β , and (2.57) is n (m)  |wnk | =0 lim n→∞ |u k | k=0 which implies W (m) ∈ (u −1 ∗ ∞ , c0 ) by (6.1) in 6. of Theorem 1.23. Now it follows from (2.61) that a · x ∈ cs for all x ∈ ∞ (u (m) ), hence a ∈ (∞ (u (m) ))β . Thus, we have shown that the conditions in (2.53) and (2.57) imply a ∈ (∞ (u (m) ))β . This completes the proof of Part (c.iii). This completes the proof of Part (c).  Now we consider the special case m = 1 and u = e of Theorem 2.15. We need the following result. Lemma 2.1 (a) ([38, Lemma 1]) If b = (bk )∞ k=0 ∈ cs then ∞  ck = 0. lim (n + 1) n→∞ k +1 k=n

(2.65)

(b) ([38, Corollary]) If a ∈ (n + 1)−1 ∗ cs then R ∈ (n + 1)−1 ∗ c0 where R = (Rk )∞ k=0 is the sequence with Rk =

∞  j=k

a j for k = 0, 1, . . . .


Proof (a) that

89

We assume b ∈ cs. Let ε > 0 be given. Then there exists n 0 ∈ N0 such    n+k    ε  b j  < for all n ≥ n 0 and for all k ∈ N0 .   j=n  2

Let n ≥ m and m ∈ N0 be given. Then we obtain by Abel’s summation by parts n+m  k=n

 bn+k bk = k+1 n+k+1 k=0

 m−1 k m   1 1 1 − bn+ j + bn+k , = n + k + 1 n + k + 2 j=0 n + m + 1 k=0 k=0 m

hence   n+m  n+m     b  m−1   

 n+k  1 1 1 k     +  − b bk   ≤ j  n + m + 1      k + 1 n + k + 1 n + k + 2  j=n  k=n k=0 k=n

1 ε 1 1 ε < · = − + . n+1 n+m+1 n+m+1 2 2(n + 1) This shows that b/(n + 1) ∈ cs and  ∞  b  ε ε k   < for all n ≥ n 0 , ≤   2(n + 1)  k + 1 n + 1 k=n that is,

  ∞   bk   lim (n + 1)  = 0. n→∞  k + 1 k=n

Thus, we have shown that b ∈ cs implies (2.65). This completes the proof of Part (a). (b) We apply Part (a) with bk = (k + 1)ak for all k = 0, 1, . . . .  Corollary 2.11 ([38, Theorem 2 (a) and Corollary 3] for Parts (b) and (c) We have (a)

a ∈ (c0 ( ))β if and only if   ⎞ ⎛     ∞   ∞  ∞     ⎠ < ∞; ⎝ < ∞ and sup a a n j j   n  j=n   k=0  j=k

(2.66)

90

(b)

2 Matrix Domains

a ∈ (∞ ( ))β if and only if   ⎞ ⎛  ∞   ∞     ∞   a j  < ∞ and lim ⎝n  a j ⎠ = 0;  n→∞     k=0 j=k j=n

(c)

(2.67)

(c( ))β = (∞ ( ))β .

Proof (a) It is easy to see that the conditions in (2.66) are the special cases of m = 1 and u = e of the conditions (2.53) and (2.54), and so Part (a) follows from Part (a) of Theorem 2.15. (b) It is easy to see that the conditions in (2.67) are the special cases of m = 1 and u = e of the conditions (2.53) and (2.57), and so Part (b) follows from Part (c) of Theorem 2.15. (c) First, c( ) ⊂ ∞ ( ) implies (∞ ( ))β ⊂ (c( ))β . Conversely, if a ∈ (c( ))β then it follows from Part (b) of Theorem 2.15 that the first condition in (2.67) holds and a ∈ (n + 1)−1 ∗ cs by (2.56). Now a ∈ (n + 1)−1 ∗ cs implies (n + 1) · R ∈ c0 by Part (b) of Lemma 2.1, and this is the second condition in (2.67). Thus a ∈ (∞ ( ))β by Part (b).  t Remark 2.6 Writing R = (rnk )∞ n,k=0 = for the transpose of the matrix , that is, 1 (k ≥ n) rnk = (n = 0, 1, . . . ), 0 (0 ≤ k < n)

we obtain from Corollary 2.11   (c0 ( ))β = 1 ∩ (n + 1)−1 ∗ ∞ R and

  (c( ))β = (∞ ( ))β = 1 ∩ (n + 1)−1 ∗ c0 R .

2.6 The β-duals of Matrix Domains of Triangles in F K Spaces In this section, we establish a general result which reduces the determination of the matrix domain of a triangle T in an F K space X to the determination of the β-dual of X and the characterization of the class (X, c0 ). Throughout T will always be a triangle, S be its inverse and R = S t be the transpose of S. We will frequently use the following result.

2.6 The β-duals of Matrix Domains of Triangles in F K Spaces

91

Proposition 2.4 ([37, Theorem 1]) Let X and Y be subsets of ω. Then we have A ∈ (X, YT ) if and only if T · A ∈ (X, Y ). Proof (i) First we show that A ∈ (X, YT ) implies T · A ∈ (X, Y ).  Let A ∈ (X, YT ). Then we have An ∈ X β for all n, and so (T · A)n = nj=0 tn j A j ∈ X β , since the β-dual of any subset of ω obviously is a linear space. Let x ∈ X be given. Since A j ∈ X β for all j, the series ∞ k=0 a jk x k converge for all j, and so

(T · A)n x =

∞ 

⎛ ⎞ ∞ n   ⎝ (T · A)nk xk = tn j a jk ⎠ xk

k=0

=

n 

tn j

j=0

∞ 

k=0

 a jk xk

k=0

=

j=0 n 

tn j A j x

j=0

= Tn (Ax) for n = 0, 1, . . . , that is, (T · A)x = T (Ax). Since A ∈ (X, YT ), that is, T (Ax) ∈ Y for all x ∈ X , it follows that (T · A) ∈ Y for all x ∈ X , hence T · A ∈ (X, Y ). Thus, we have shown that A ∈ (X, YT ) implies T · A ∈ (X, Y ). (ii) Now we show that T · A ∈ (X, Y ) implies A ∈ (X, YT ). We assume T · A ∈ (X, Y ) = (X, (YT ) S ), since matrix multiplication of triangles is associative. It follows by Part (i) that B = S · (T · A) ∈ (X, YT ). Since S and T are triangles, it follows that B = S · (T · A) = (S · T ) · A = A ∈ (X, Y ). Thus, we have shown that T · A ∈ (X, Y ) implies A ∈ (X, YT ).  We also need the following result. Lemma 2.2 ([43, Lemma 3.1]) Let X be an F K space with AK . We write R = S t for the transpose of S. Then we have (X T )β ⊂ (X β ) R .

(2.68)

Proof We write Z = X T and assume a ∈ Z β . Then we have B ∈ (X, cs) by Proposition 2.3, where Bn is the matrix with the rows Bn = an Sn for n = 0, 1, . . . . We write C = · B, that is, the matrix C has the entries cnk =

⎧ n ⎨ a j s jk ⎩

(0 ≤ k ≤ n)

j=k

(k > n)

(n = 0, 1, . . . ).

92

2 Matrix Domains

Then C ∈ (X, c) by Proposition 2.4, since cs = c . Since X is an F K space with AK , it follows from (1.16) in Theorem 1.9 and Theorem 1.11 that C ∈ (X, c) if and only if (2.69) sup Cn ∗X,δ < ∞ for some δ > 0 n

and Rk a = lim cnk = n→∞

∞ 

a j s jk exists for each k.

(2.70)

j=k

By (2.69), there exists a constant K such that   n     |Cn x| =  cnk xk  ≤ K for all n and all x ∈ B δ (0).  

(2.71)

k=0

Let x ∈ X be given and ρ = δ/2. Since B δ (0) is absorbing by Remark 1.3, and X has AK , there are a real λ > 0 and a non-negative integer m 0 such that y [m] = λ−1 · x [m] ∈ B ρ (0) for all m ≥ m 0 . Let m ≥ m 0 be given. Then we have for all n ≥ m by (2.71)  m    m          [m]  cnk xk  = λ  cnk yk  = λ Cn y [m]  ≤ λ · K ,      k=0

and so by (2.70)

k=0

 m         (Rk a)xk  ≤ λ · lim Cn y [m]  ≤ λ · K . n→∞   k=0

Since m ≥ m 0 was arbitrary, it followsthat (Ra) · x ∈ bs, that is, Ra ∈ x γ , and since x ∈ X was arbitrary, we have Ra ∈ x∈X x γ = X γ . Finally, since X has AK , we obtain X β = X γ by Part (c) of Theorem 1.16. Therefore, we have Ra ∈ X β , that is, a ∈ (X β ) R . Thus, we have shown the inclusion in (2.68).  Now we reduce the determination of the β-dual of a triangle in X to the determination of the β-dual of X itself and the characterization of the class (X, c0 ). Theorem 2.16 ([43, Theorem 3.2]) Let X be an F K space with AK . Then we have a ∈ (X T )β if and only if   a ∈ X β R and W ∈ (X, c0 ), where the matrix W = (wnk )∞ n,k=0 is defined by

(2.72)

2.6 The β-duals of Matrix Domains of Triangles in F K Spaces

wnk =

⎧ ∞ ⎨ a j s jk

(0 ≤ k ≤ n)

(n = 0, 1, . . . ).

j=n

⎩ 0

93

(k > n)

Moreover, if a ∈ (X T )β then we have ∞ 

ak z k =

k=0

∞  (Rk a)(Tk z) for all z ∈ Z = X T .

(2.73)

k=0

Proof We write Z = X T . (i) First we show that a ∈ Z β implies that the conditions in (2.72) hold. We assume a ∈ Z β . Then it follows by Lemma 2.2 that Ra ∈ X β . Hence that ∞ the series j=n a j s jk converge for all n and k, that is, the matrix W is defined. Furthermore, we have n n n    (Rk a)(Tk z) − wnk Tk z = (Rk a − wnk ) Tk z k=0

k=0

=

n 

⎛ ⎝

k=0 ∞ 

k=0

j=k

k=0

j=k

a j s jk −

∞ 

⎞ a j s jk ⎠ Tk z)

j=n

⎛ ⎞   j+1 n n−1 n−1     ⎝ = a j s jk ⎠ Tk z = s jk Tk z aj =

n−1 

j=0

a j S j (T z) =

j=0

n−1 

k=0

ajz j,

j=0

that is, n−1  j=0

ajz j =

n 

(Rk a)(Tk z) − Wn (T z) for all n and all z ∈ Z .

(2.74)

k=0

Let x ∈ X be given. Then we have z = Sx ∈ Z , and so a ∈ z β and a ∈ (x β ) R . This implies W x = W (T z) ∈ c by (2.74). Since x ∈X was arbitrary, we have W ∈ (X, c) ⊂ (X, ∞ ). Furthermore, since Rk a = ∞ j=l a j s jk exists for each k, we have ∞  lim wnk = lim a j s jk = 0 for each k. (2.75) n→∞

n→∞

j=n

94

2 Matrix Domains

Now W ∈ (X, ∞ ) and (2.75) imply W (X, c0 ) by Theorem 1.11. This completes the proof of (i). (ii) Now we show that if a ∈ Z β then (2.73) holds. Let a ∈ Z β . Then it follows by (i) that the conditions (2.72) hold, and so (2.73) follows from (2.74). (iii) Finally, we show that the conditions in (2.72) imply a ∈ Z β . We assume that the conditions in (2.72) are satisfied. If z ∈ Z then x = T z ∈ X , and so a · z ∈ cs by (2.74), that is, a ∈ Z β .  Corollary 2.12 ([40, Proposition 3.1]) Let X be an F K space with AK . Then we have a ∈ (X T )β if and only if   a ∈ X β R and W ∈ (X, ∞ ).

(2.76)

Moreover, if a ∈ (X T )β then the identity in (2.73) holds. Proof First we assume a ∈ (X T )β . Then it follows from (2.72) in Theorem 2.16 that a ∈ (X β ) R and W ∈ (X, c0 ), but (X, c0 ) ⊂ (X, ∞ ). Conversely, we assume a ∈ (X β ) R and W ∈ (X, ∞ ). It follows from Ra ∈ X β that wmk exists for each m and k and limm→∞ wmk = 0 for each k. Since X has AK , this and W ∈ (X, ∞ ) together imply W ∈ (X, c0 ) by Theorem 1.11. Finally, a ∈ (X β ) R and W ∈ (X, c0 ) together imply a ∈ (X T )β by Theorem 2.16.  The result of Theorem 2.16 can be extended to the spaces ∞ and c. Corollary 2.13 ([43, Remark 3.3] for (a) and (b)) (a) The statement of Theorem 2.16 also holds when X = ∞ . (b) We have a ∈ (cT )β if and only if a ∈ (1 ) R and W ∈ (c, c). Moreover, if a ∈ (cT )β then we have for all z ∈ cT ∞ 

ak z k =

k=0

∞  k=0

(Rk a)(Tk z) − ξ α, where ξ = lim Tk z and α = lim k→∞

n→∞

n 

wnk .

k=0

(2.77) (c) We have

a∗cT = Ra1 + |α| for all a ∈ (cT )β .

(2.78)

Proof (i) First we show that if X = c or X = ∞ then a ∈ (X T )β if and only if a ∈ (1 ) R and W ∈ (X, c). Let X = c or X = ∞ . Then X ⊃ c0 implies X T ⊃ (c0 )T , hence (X T )β ⊂ ((c0 )T )β . Since c0 is a B K space with AK , it follows from Lemma 2.2 that β a ∈ (X T )β ⊂ ((c0 )T )β implies a ∈ (c0 ) R = (1 ) R = (X β ) R . We also obtain W ∈ (X, c) from (2.74). Conversely, if a ∈ (X β ) R and W ∈ (X, c) then it follows from (2.74) that a ∈ (X T )β .

(a)

Now let X = ℓ_∞. We have to show that W ∈ (ℓ_∞, c) implies W ∈ (ℓ_∞, c_0). If W ∈ (ℓ_∞, c), then it follows from (11.1) in Part 11. of Theorem 1.23 that

$$\sum_{k=0}^{n} |w_{nk}| \ \text{ is uniformly convergent in } n. \qquad (2.79)$$

But, as before, in Part (i) of the proof of Theorem 2.16, we also have (2.75), and this and (2.79) imply

$$\lim_{n\to\infty} \sum_{k=0}^{n} |w_{nk}| = 0.$$

From this we obtain W ∈ (ℓ_∞, c_0) by (6.1) in Part 6. of Theorem 1.23. Thus, we have proved Part (a).
(b) It remains to show that a ∈ (c_T)^β implies (2.77). Let a ∈ (c_T)^β and z ∈ c_T be given. Then we have x = Tz ∈ c and ξ = lim_{k→∞} x_k exists. Hence there is x^{(0)} ∈ c_0 such that x = x^{(0)} + ξ·e. We put z^{(0)} = Sx^{(0)}. Then it follows that z^{(0)} ∈ (c_0)_T and z = Sx = S(x^{(0)} + ξ·e) = z^{(0)} + ξ·Se, and we obtain as in (2.74)

$$\sum_{k=0}^{n-1} a_k z_k = \sum_{k=0}^{n} (R_k a)(T_k z) - W_n\bigl(T(z^{(0)} + \xi \cdot Se)\bigr) = \sum_{k=0}^{n} (R_k a)(T_k z) - W_n(Tz^{(0)}) - \xi \cdot W_n e \quad\text{for all } n.$$

The first term in the last equality converges as n → ∞, since Ra ∈ ℓ_1 by Part (i). The second term in the last equality tends to zero as n → ∞, since a ∈ (c_T)^β ⊂ ((c_0)_T)^β implies W ∈ (c_0, c_0) by Theorem 2.16. Finally, we also have W ∈ (c, c) by Part (i), and this implies by (13.1) in Part 13. of Theorem 1.23 that α = lim_{n→∞} W_n e exists. Now the identity in (2.77) follows.
(c) If a ∈ (c_T)^β, then it follows from Part (b) that Ra ∈ ℓ_1 and (2.77) holds. Since z ∈ c_T if and only if x = Tz ∈ c and ‖z‖_{c_T} = ‖x‖_∞, the right-hand side of (2.78) defines a functional f ∈ c^* with its norm ‖f‖ given by the right-hand side in (2.78) by (1.24) in Example 1.11. □

Example 2.14 Let 1 < p < ∞ and q = p/(p − 1). We write bv_p = (ℓ_p)_Δ. Then we have a ∈ (bv_p)^β if and only if

$$\sum_{k=0}^{\infty} \Bigl| \sum_{j=k}^{\infty} a_j \Bigr|^{q} < \infty \quad\text{and}\quad \sup_{n} \Bigl( n^{1/q} \cdot \Bigl| \sum_{k=n}^{\infty} a_k \Bigr| \Bigr) < \infty. \qquad (2.80)$$


Moreover, if a ∈ (bv_p)^β, then

$$\|a\|^{*}_{bv_p} = \Bigl( \sum_{k=0}^{\infty} \Bigl| \sum_{j=k}^{\infty} a_j \Bigr|^{q} \Bigr)^{1/q} \quad\text{for all } a \in (bv_p)^\beta. \qquad (2.81)$$

Proof (i) First we show that a ∈ (bv_p)^β if and only if the conditions in (2.80) hold.
Since ℓ_p (1 < p < ∞) is a BK space with AK by Part (c) of Example 1.4, we can apply Theorem 2.16 with T = Δ, S = Σ and R = Σ^t, that is, r_{nk} = 1 for k ≥ n and r_{nk} = 0 for 0 ≤ k < n (n = 0, 1, ...). We have by (2.72) that a ∈ (bv_p)^β if and only if Ra ∈ ℓ_p^β = ℓ_q, that is,

$$\sum_{k=0}^{\infty} \Bigl| \sum_{j=k}^{\infty} a_j \Bigr|^{q} < \infty,$$

which is the first condition in (2.80), and W ∈ (ℓ_p, c_0), where W = (w_{nk})_{n,k=0}^∞ is the matrix with

$$w_{nk} = \begin{cases} \sum_{j=n}^{\infty} a_j & (0 \le k \le n) \\ 0 & (k > n) \end{cases} \qquad (n = 0, 1, \dots).$$

Now we have W ∈ (ℓ_p, c_0) by (5.1) and (7.1) in Part 10. of Theorem 1.23, if and only if

$$\sup_{n} \Bigl( \sum_{k=0}^{\infty} |w_{nk}|^{q} \Bigr)^{1/q} = \sup_{n} \Bigl( (n+1)^{1/q} \cdot \Bigl| \sum_{j=n}^{\infty} a_j \Bigr| \Bigr) < \infty,$$

which is the second condition in (2.80), and

$$\lim_{n\to\infty} w_{nk} = \lim_{n\to\infty} \sum_{j=n}^{\infty} a_j = 0,$$

which is redundant, since the series Σ_{k=0}^∞ a_k converges. Thus, we have shown that a ∈ (bv_p)^β if and only if the conditions in (2.80) hold. This completes the proof of Part (i).
(ii) Now we show that if a ∈ (bv_p)^β then the identity in (2.81) holds.
Let a ∈ (bv_p)^β be given. We observe that x ∈ bv_p if and only if y = Δx ∈ ℓ_p, and ‖x‖_{bv_p} = ‖Δx‖_p = ‖y‖_p. Then we have by (2.73) in Theorem 2.16

$$\sum_{k=0}^{\infty} a_k x_k = \sum_{k=0}^{\infty} (R_k a) y_k.$$

Now ‖x‖_{bv_p} = ‖y‖_p implies ‖a‖^{*}_{bv_p} = ‖Ra‖^{*}_{ℓ_p}, and (2.81) follows from the fact that ℓ_p^* and ℓ_q are norm isomorphic. This completes the proof of Part (ii). □

Remark 2.7 (a) We have by Example 2.14

$$(bv_p)^\beta = \Bigl[ \ell_q \cap \bigl( (n^{1/q})_{n=0}^{\infty} \bigr)^{-1} * \ell_\infty \Bigr]_R.$$

(b) We observe that neither ℓ_q ⊂ ((n^{1/q})_{n=0}^∞)^{-1} * ℓ_∞ nor ((n^{1/q})_{n=0}^∞)^{-1} * ℓ_∞ ⊂ ℓ_q. To see this we define the sequences y and ỹ by

$$y_k = \begin{cases} \dfrac{1}{\nu+1} & (k = 2^{\nu}) \\ 0 & (k \ne 2^{\nu}) \end{cases} \quad (\nu = 0, 1, \dots) \qquad\text{and}\qquad \tilde y_k = \frac{1}{(k+1)^{1/q}} \quad (k = 0, 1, \dots).$$

Then we have y ∈ ℓ_q \ [((n^{1/q})_{n=0}^∞)^{-1} * ℓ_∞] and ỹ ∈ [((n^{1/q})_{n=0}^∞)^{-1} * ℓ_∞] \ ℓ_q, since

$$\sum_{k=0}^{\infty} |y_k|^{q} = \sum_{\nu=0}^{\infty} \frac{1}{(\nu+1)^{q}} < \infty, \quad\text{but}\quad (2^{\nu})^{1/q} |y_{2^{\nu}}| = \frac{2^{\nu/q}}{\nu+1} \to \infty \ (\nu \to \infty),$$

and

$$k^{1/q} |\tilde y_k| = \Bigl( \frac{k}{k+1} \Bigr)^{1/q} \le 1 \ \text{ for } k = 0, 1, \dots, \quad\text{but}\quad \sum_{k=0}^{\infty} |\tilde y_k|^{q} = \sum_{k=0}^{\infty} \frac{1}{k+1} = \infty.$$
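The behaviour of the two sequences in Remark 2.7 (b) can also be observed numerically on an initial segment. The sketch below is added for illustration only; the choice q = 2 and the segment length are arbitrary, and the variable names are ad hoc.

```python
import numpy as np

q = 2.0
K = 2 ** 12  # length of the initial segment

y = np.zeros(K)
for nu in range(12):               # y_{2^nu} = 1/(nu + 1), zero elsewhere
    y[2 ** nu] = 1.0 / (nu + 1)
y_tilde = 1.0 / (np.arange(K) + 1.0) ** (1.0 / q)

n = np.arange(1, K)
print(np.sum(np.abs(y) ** q))                        # partial sum of a convergent series
print(np.max(n ** (1.0 / q) * np.abs(y[1:])))        # grows like 2^(nu/q)/(nu+1) along k = 2^nu
print(np.max(n ** (1.0 / q) * np.abs(y_tilde[1:])))  # stays <= 1 for all k
print(np.sum(np.abs(y_tilde) ** q))                  # harmonic partial sum, diverges slowly
```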

Remark 2.8 We have seen at the beginning of the proof of Lemma 2.2 that a ∈ (X_T)^β if and only if C ∈ (X, c), where the matrix C is defined by

$$c_{nk} = \begin{cases} \sum_{j=k}^{n} a_j s_{jk} & (0 \le k \le n) \\ 0 & (k > n) \end{cases} \qquad (n = 0, 1, \dots).$$

(a) Therefore it follows from Theorem 2.16 or Part (a) of Corollary 2.13 that if X is an FK space with AK or X = ℓ_∞, then C ∈ (X, c) if and only if Ra ∈ X^β and W ∈ (X, c_0).
(b) It follows from Part (b) of Corollary 2.13 that C ∈ (c, c) if and only if


Ra ∈ ℓ_1 and W ∈ (c, c).

We close this section with an application.

Corollary 2.14 We consider the Euler sequence spaces e_p^r, e_0^r, e_c^r of Part (a) of Example 2.8 introduced in [2, 5]. Then we have

(a) ([5, Theorem 4.5]) a ∈ (e_1^r)^β if and only if

$$\sup_{n,k} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr| < \infty \qquad (2.82)$$

and

$$\sum_{j=k}^{\infty} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \ \text{ converges for each } k; \qquad (2.83)$$

(b) ([5, Theorem 4.5]) a ∈ (e_p^r)^β for 1 < p < ∞ and q = p/(p − 1) if and only if the condition in (2.83) holds and

$$\sup_{n} \sum_{k=0}^{n} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr|^{q} < \infty; \qquad (2.84)$$

(c) ([2, Theorem 4.2]) a ∈ (e_0^r)^β if and only if the condition in (2.83) holds and

$$\sup_{n} \sum_{k=0}^{n} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr| < \infty; \qquad (2.85)$$

(d) ([2, Theorem 4.5]) a ∈ (e_c^r)^β if and only if the conditions in (2.83) and (2.85) hold and

$$\lim_{n\to\infty} \Bigl( \sum_{k=0}^{n} \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr) \ \text{ exists}; \qquad (2.86)$$

(e) a ∈ (e_∞^r)^β if and only if

$$\sum_{n=0}^{\infty} \Bigl| \sum_{k=n}^{\infty} \binom{k}{n} (r-1)^{k-n} r^{-k} a_k \Bigr| < \infty \qquad (2.87)$$

and

$$\lim_{m\to\infty} \sum_{n=0}^{m} \Bigl| \sum_{k=m}^{\infty} \binom{k}{n} (r-1)^{k-n} r^{-k} a_k \Bigr| = 0. \qquad (2.88)$$

99

Proof (i) We verify that the matrix S defined by (2.20) is the inverse matrix of the Euler matrix T = E r . (i.α)

In the proof of Part (i), we need the identity



n j n n−k = for 0 ≤ k ≤ j ≤ n. j k k j −k

(2.89)

We have for 0 ≤ k ≤ j ≤ n



n n−k n · · · (n − k + 1) (n − k) · · · (n − k − ( j − k) + 1) = · k j −k k! ( j − k)! n · · · (n − j + 1) j! = · j! k!( j − k)!

n j = , j k that is, we have shown (2.89). We write y = E r x, that is, y j = E rj x =

j  j (1 − r ) j−k r k xk for n = 0, 1, . . . . k k=0

Using (2.20) and (2.89), we obtain n 

n  n

n− j −n

j  j

(r − 1) r (1 − r ) j−k r k xk j k j=0 j=0 k=0 n n   n j (−1)n− j (1 − r )n−k = r k−n xk j k k=0 j=k n

n

 n 1 − r n−k  n − k (−1)n− j = xk k j − k r k=0 j=k

Sn y =

=

n

 n 1 − r n−k k=0

Since

sn j y j =

k

r

n−k

 n−k (−1)n−k− j . xk j j=0

n−k

 n−k 1 n−k− j n−k (−1) = (1 − 1) = j 0 j=0

(k = n) (k = n),

100

2 Matrix Domains

it follows that Sn y =





n 1 − r n−n xn = xn for n = 0, 1, . . . . r n

Thus, we have shown S(E r x) = x. Since matrix multiplication of triangles is associative it follows that S is the inverse matrix of the Euler matrix E r . This completes the proof of Part (i). To prove Parts (a) to (d) we apply Remark 2.8 to obtain a ∈ X β if and only if C ∈ (X, c) in the corresponding cases, where

cnk

⎧ n n   ⎨ s jk a j =  j (r − 1) j−k r − j a j k = j=k j=k ⎩ 0

(0 ≤ k ≤ n)

(n = 0, 1, . . . ).

(k > n) (2.90)

(a)

We have C ∈ (ℓ_1, c) by (4.1) and (11.2) in Part 11. of Theorem 1.23 and by (2.90) if and only if

$$\sup_{n,k} |c_{nk}| = \sup_{n,k} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr| < \infty,$$

which is the condition in (2.82), and

$$\lim_{n\to\infty} c_{nk} = \sum_{j=k}^{\infty} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \ \text{ exists for each } k,$$

which is the condition in (2.83). This completes the proof of Part (a).
(b) We have C ∈ (ℓ_p, c) for 1 < p < ∞ by (5.1) and (11.2) of Part 15. of Theorem 1.23, and by (2.90) if and only if

$$\sup_{n} \sum_{k=0}^{n} |c_{nk}|^{q} = \sup_{n} \sum_{k=0}^{n} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr|^{q} < \infty,$$

which is the condition in (2.84), and (11.2) is the condition in (2.83) as in Part (a) of the proof. This completes the proof of Part (b).
(c) We have C ∈ (c_0, c) by (1.1) and (11.2) of Part 12. of Theorem 1.23, and by (2.90) if and only if

$$\sup_{n} \sum_{k=0}^{n} |c_{nk}| = \sup_{n} \sum_{k=0}^{n} \Bigl| \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr| < \infty,$$

which is the condition in (2.85), and (11.2) is the condition in (2.83) as in Part (a) of the proof. This completes the proof of Part (c).
(d) By Theorem 1.12, we have C ∈ (c, c) if and only if C ∈ (c_0, c), which yields the conditions in (2.83) and (2.85) by Part (c), and Ce ∈ c, that is,

$$\lim_{n\to\infty} \sum_{k=0}^{\infty} c_{nk} = \lim_{n\to\infty} \Bigl( \sum_{k=0}^{n} \sum_{j=k}^{n} \binom{j}{k} (r-1)^{j-k} r^{-j} a_j \Bigr) \ \text{ exists},$$

which is the condition in (2.86). This completes the proof of Part (d).
(e) Now we apply Theorem 2.16 and Part (a) of Corollary 2.13. We have a ∈ (e_∞^r)^β if and only if Ra ∈ ℓ_∞^β = ℓ_1, which is the condition in (2.87), and W ∈ (ℓ_∞, c_0), that is, by (6.1) of Part 6. of Theorem 1.23,

$$\lim_{m\to\infty} \sum_{k=0}^{\infty} |w_{mk}| = \lim_{m\to\infty} \sum_{n=0}^{m} \Bigl| \sum_{k=m}^{\infty} \binom{k}{n} (r-1)^{k-n} r^{-k} a_k \Bigr| = 0,$$

which is (2.88). This completes the proof of Part (e). □
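The computation S(E^r x) = x in Part (i) of the proof is easy to confirm numerically on finite sections. The sketch below is added here as an illustration only; the value of r, the section size and the function names are arbitrary choices. It builds the triangles E^r and S with the entries used above and checks that their finite sections multiply to the identity.

```python
import numpy as np
from math import comb

def euler_matrix(r, N):
    # (E^r)_{jk} = C(j, k) (1 - r)^(j-k) r^k for 0 <= k <= j
    E = np.zeros((N, N))
    for j in range(N):
        for k in range(j + 1):
            E[j, k] = comb(j, k) * (1 - r) ** (j - k) * r ** k
    return E

def inverse_euler_matrix(r, N):
    # s_{nj} = C(n, j) (r - 1)^(n-j) r^(-n) for 0 <= j <= n
    S = np.zeros((N, N))
    for n in range(N):
        for j in range(n + 1):
            S[n, j] = comb(n, j) * (r - 1) ** (n - j) * r ** (-n)
    return S

if __name__ == "__main__":
    r, N = 0.5, 12
    E, S = euler_matrix(r, N), inverse_euler_matrix(r, N)
    # S(E^r x) = x for every x is equivalent to S @ E = I on finite sections of triangles
    print(np.allclose(S @ E, np.eye(N)))  # expected: True
```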

References 1. Alp, Z., Ilkhan, M.: On the difference sequence space  p (Tˆ q ). Math. Sci. Appl. E-Notes 7(2), 161–173 (2019) 2. Altay, B., Ba¸sar, F.: Some Euler sequence spaces on non–absolute type. Ukr. Math. J. 57(1) (2005) 3. Altay, B., Ba¸sar, F.: Some paranormed sequence spaces of non-absolute type derived by weighted mean. J. Math. Anal. Appl. 319(2), 494–508 (2006) 4. Altay, B., Ba¸sar, F.: Generalization of the sequence space ( p) derived by weighted mean. J. Math. Anal. Appl. 330(1), 174–185 (2007) 5. Altay, B., Ba¸sar, F., Mursaleen, M.: On the Euler sequence spaces which include  p and ∞ I. Inform. Sci. 176, 1450–1462 (2006) 6. Ba¸sarir, M., Öztürk, M.: On the Riesz difference sequence space. Rend. Circ. Mat. Palermo 2(57), 377–389 (2008) 7. Candan, M.: Domain of the double sequential band matrix in the classical sequence spaces. J. Inequal. Appl. 2012, 281 (2012). http://www.journalofinequalitiesandapplications.com/ content/2012/1/281


8. Candan, M.: Almost convergence and double sequential band matrix. Acta Math. Sci. Ser. B Engl. Ed. 34(2), 354–366 (2014) 9. Candan, M.: Domain of the double sequential band matrix in the spaces of convergent and null sequences. J. Inequal. Appl. 2014, 18: Adv. Differ. Equ. (2014) 10. Aydın, Ç.: Isomorphic sequence spaces and infinite matrices. PhD thesis, Inönü University, Malatya, Turkey (2001) 11. Aydın, Ç., Ba¸sar, F.: On the new sequence spaces which include the spaces c0 and c. Hokkaido Math. J. 33, 338–398 (2004) 12. Aydın, Ç., Ba¸sar, F.: Some new difference sequence spaces. Appl. Math. Comput. 157, 677–693 (2004) 13. Cooke, R.C.: Infinite Matrices. MacMillan and Co. Ltd, London (1950) 14. Sengönül, ¸ M., Ba¸sar, F.: Some new Cesàro sequence spaces of nonabsolute type which include the spaces c0 and c. Soochow J. Math. 31(1), 107–119 (2005) 15. Sim¸ ¸ sek, N., Karakaya, V.: Structure and some geometric properties of generalized Cesàro sequence space. Int. J. Contemp. Math. Sci. 3(8), 389–399 (2008) 16. de Malafosse, B., Rakoˇcevi´c, V.: Applications of measure of noncompactness in operators on (c) p the spaces sα , sα◦ , sα , α . J. Math. Anal. Appl. 323, 131–145 (2006) 17. Demiriz, S., Ilkhan, M., Kara, E.E.: Almost convergence and Euler totient matrix. Math. Sci. Appl. E-Notes (2020). https://doi.org/10.1007/s43034-019-00041-0 18. Djolovi´c, I.: Compact operators on the spaces a0r ( ) and acr ( ). J. Math. Anal. Appl. 318, 658–666 (2006) 19. Djolovi´c, I., Malkowsky, E.: A note on compact operators on matrix domains. J. Math. Anal. Appl. 340, 291–303 (2008) 20. Erfanmanesh, S., Foroutannia, D.: Some new semi-normed spaces of non-absolute type and matrix transformations. Proc. Inst. Appl. Math. 4(2), 96–108 (2015) 21. Et, M.: On some difference sequence spaces. Turkish J. Math. 17, 18–24 (1993) 22. Foroutannia, D.: On the block sequence space  p (e) and related matrix transformations. Turk. J. Math. 39, 830–841 (2015) 23. Grosse-Erdmann, K.-G.: The structure of sequence spaces of Maddox. Can. J. Math. 44(2), 298–307 (1992) 24. Grosse-Erdmann, K.-G.: Matrix transformations between the sequence spaces of Maddox. J. Math. Anal. Appl. 180, 223–238 (1993) 25. Grosse-Erdmann, K.-G.: The Blocking Technique, Weighted Mean Operators and Hardy’s Inequality. Springer Lecture Notes in Mathematics, vol. 1679 (1999) 26. Ilkhan, M., Demiriz, S., Kara, E.E.: A new paranormed sequence space defined by Euler totient matrix. Karaelmas Sci. Eng. J. 9(2), 277–282 (2019) 27. Jarrah, A.M., Malkowsky, E.: BK spaces, bases and linear operators. Rend Circ. Mat. Palermo, Ser. II, Suppl. 52, 177–191 (1998) 28. Kara, E.E., Ba¸sarır, M., Mursaleen, M.: Compactness of matrix operators on some sequence spaces derived by Fibonacci numbers. Kragujevac J. of Math. 39(2), 217–230 (2015) 29. Karakaya, V.: Some geometric properties of sequence spaces involving lacunary sequence. J. Inequal. Appl., Article ID 81028 (2007).https://doi.org/10.1155/2007/81028 30. Karakaya, V., Altun, M.: Fine spectra of upper triangular double-band matrices. J. Comput. Appl. Math. 234, 1387–1394 (2010) 31. Karakaya, V., Altun, M.: On some geometric properties of a new paranormed sequence space. J. Funct. Spaces Appl., Article ID 685382, 8 (2014) 32. Kiri¸sci, M.: The application domain of infinite matrices on classical sequence spaces. arXiv:1611.06138v1 [math.FA], 18 Nov 2016 33. Kiri¸sci, M.: The sequence space bv and some applications. Math. Aeterna 4(3), 207–223 (2014) 34. 
Kiri¸sci, M.: Riesz type integrated and differentiated sequence spaces. Bull. Math. Anal. Appl. 7(2), 14–27 (2015) 35. Kiri¸sci, M., Ba¸sar, F.: Some new sequence spaces derived by the domain of generalized difference matrix. Comput. Math. Appl. 60(5), 1299–1309 (2010) 36. Kızmaz, H.: On certain sequence spaces. Canad. Math. Bull. 24(2), 169–176 (1981)


37. Malkowsky, E.: Linear operators in certain BK spaces. Bolyai Soc. Math. Stud. 5, 259–273 (1996) 38. Malkowsky, E.: A note on the Köthe-Toeplitz duals of generalized sets of bounded and convergent difference sequences. J. Analysis 4, 81–91 (1996) 39. Malkowsky, E.: Linear operators between some matrix domains. Rend. Circ. Mat. Palermo, Serie II, Suppl. 68, 641–655 (2002) 40. Malkowsky, E., Nergiz, H.: Matrix transformations and compact operators on spaces of strongly Cesàro summable and bounded sequences of order α. Contemp. Anal. Appl. Math. (CAAM) 3(2), 263–279 (2015) 41. Malkowsky, E., Özger, F.: A note on some sequence spaces of weighted means. Filomat 26(3), 511–518 (2012) 42. Malkowsky, E., Parashar, S.D.: Matrix transformations in spaces of bounded and convergent difference sequences of order m. Analysis 17, 187–196 (1997) 43. Malkowsky, E., Rakoˇcevi´c, V.: On matrix domains of triangles. Appl. Math. Comput. 189, 1146–1163 (2007) 44. Malkowsky, E., Rakoˇcevi´c, V.: Advanced Functional Analysis. CRC Press, Taylor & Francis Group, Boca Raton, London, New York (2019) 45. Mohiuddine, S.A., Alotaibi, A.: Weighted almost convergence and related infinite matrices. J. Inequal. Appl. 2018(1), 15 (2018) 46. Mursaleen, M.: Generalized spaces of difference sequences. J. Math. Anal. Appl. 203(3), 738– 745 (1996) 47. Mursaleen, M.: On some geometric properties of a sequence space related to  p . Bull. Australian Math. Soc. 67(2), 343–347 (2003) 48. Mursaleen, M., Ba¸sar, F., Altay, B.: On the euler sequence spaces which include the spaces  p and ∞ II. Nonlinear Anal. 65(3), 707–717 (2006) 49. Mursaleen, M., Noman, A.K.: Some new sequence spaces derived by the domain of generalized difference matrix. Math. Comput. Modelling 52(3–4), 603–617 (2010) 50. Mursaleen, M., Noman, A.K.: On generalized means and some related sequence spaces. Comput. Math. Appl. 61, 988–999 (2011) 51. Mursaleen, M., Noman, A.K.: On some new sequence spaces of non-absolute type related to the spaces  p and ∞ II. Math. Commun. 16, 383–398 (2011) 52. Natarajan, P.N.: Cauchy multiplication of (M, λn ) summable series. Adv. Dev. Math. Sci. 3(12), 39–46 (2012) 53. Natarajan, P.N.: On the (M, λn ) method of summability. Analysis (München) 33(2), 51–56 (2013) 54. Natarajan, P.N.: A product theorem for the Euler and Natarajan methods of summability. Analysis (München) 33(2), 189–195 (2013) 55. Natarajan, P.N.: New properties of the Natarajan method of summability. Comment. Math. 55(1), 9–15 (2015) 56. Natarajan, P.N.: Classical Summability Theory. Springer Nature Singapore Pte Ltd. (2017) 57. Ng, P.-N., Lee, P.-Y.: Cesàro sequence spaces of non-absolute type. Commentat. Math, XX (1978) 58. Polat, H., Ba¸sar, F.: Some Euler spaces of difference sequences of order m. Acta Math. Sci. Ser. B Engl. Ed. 27B(2), 254–266 (2007) 59. Ba¸sarır, M., Kara, E.E.: On compact operators on the Riesz B(m)–difference sequence space. Iran J. Sci. Technol. Trans. A Sci. 35(4), 279–285 (2011) 60. Roopaefi, H., Foroutannia, D.: A new sequence space and norm of certain matrix operators on this space. Sahand Commun. Math. Anal 3(1), 1–12 (2016) 61. Sanhan, W., Suantai, S.: Some geometric properties of Cesàro sequence space. Kyungpook Math. J. 43, 191–197 (2003) 62. Sönmez, A., Ba¸sar, F.: Generalized difference spaces of non-absolute type of convergent and null sequences. Abstr. Appl. Anal. 20 (2012). https://doi.org/10.1155/2012/435076


63. Talebi, G.: On multipliers of matrix domains. J. Inequal. Appl. 2018, 296 (2018). https://doi. org/10.1186/s13660-018-1887-4 64. Tuˇg, O., Rakoˇcevi´c, V., Malkowsky, E.: On the domain of the four–dimensional sequential band matrix in some double sequence spaces. Mathematics 8 (2020). https://doi.org/10.3390/ math8050789 65. Wilansky, A.: Summability Through Functional Analysis. Mathematical Studies, vol. 85. North-Holland, Amsterdam (1984)

Chapter 3

Operators Between Matrix Domains

In this chapter, we apply the results of the previous chapters to characterize matrix transformations on the spaces of generalized weighted means and on matrix domains of triangles in B K spaces. We also establish estimates or identities for the Hausdorff measure of noncompactness of matrix transformations from arbitrary B K spaces with AK into c, c0 and 1 , and also from the matrix domains of an arbitrary triangle in  p , c and c0 into c, c0 and 1 . Furthermore we determine the classes of compact operators between the spaces just mentioned. Finally we establish the representations of the general bounded linear operators from c into itself and from the space bv+ of sequences of bounded variation into c, and the determination of the classes of compact operators between them. Throughout this chapter, T will always be a triangle, S be its inverse and R = S t be the transpose of S. Section 3.5 is related to the results recently published in [7]. We also list some more recent publications concerning measures of noncompactness and its applications such as [1, 4, 5, 8–11, 16–19].

3.1 Matrix Transformations on W(u, v; X)

In this section, we establish a result that characterizes the classes (W(u, v; X), Y) of matrix transformations, where the sets W(u, v; X) are the spaces of generalized weighted means of Definition 2.4 in Sect. 2.4. Throughout, let u, v ∈ U.

Theorem 3.1 ([12, Theorem 2.6]) Let X and Y be subsets of ω and X be normal.
(a)

Then we have A ∈ (X_Σ, Y) if and only if

$$A_n \in M(X, c_0) \quad\text{for all } n = 0, 1, \dots \qquad (3.1)$$


and

$$B \in (X, Y), \quad\text{where } B_n \text{ is given by } b_{nk} = a_{nk} - a_{n,k+1} \ (k = 0, 1, \dots), \ \text{for } n = 0, 1, \dots. \qquad (3.2)$$

(b) Then we have A ∈ (W(u, v; X), Y) if and only if

$$A_n \in \bigl( 1/(u \cdot v) \bigr)^{-1} * M(X, c_0) \quad\text{for } n = 0, 1, \dots \qquad (3.3)$$

and

$$\hat B \in (X, Y), \quad\text{where } \hat B_n \text{ is given by } \hat b_{nk} = \frac{1}{u_k} \Bigl( \frac{a_{nk}}{v_k} - \frac{a_{n,k+1}}{v_{k+1}} \Bigr) \ (k = 0, 1, \dots), \ \text{for } n = 0, 1, \dots. \qquad (3.4)$$
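To make the transformed matrices in (3.2) and (3.4) concrete, here is a small finite-section sketch. It is added for illustration only; the matrix A, the weights u, v and all names are ad hoc, and entries beyond the section are treated as zero. It forms the rows b_{nk} = a_{nk} − a_{n,k+1} and the weighted rows of (3.4), the quantities that reappear in the characterizations below.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
A = rng.normal(size=(N, N))          # finite section of a matrix A
u = rng.uniform(1.0, 2.0, size=N)    # u, v in U: sequences with nonzero terms
v = rng.uniform(1.0, 2.0, size=N)

def forward_diff_rows(M):
    """Row-wise differences m_{nk} - m_{n,k+1}; the last column uses 0 beyond the section."""
    padded = np.hstack([M, np.zeros((M.shape[0], 1))])
    return padded[:, :-1] - padded[:, 1:]

B = forward_diff_rows(A)                      # b_{nk} = a_{nk} - a_{n,k+1}, cf. (3.2)
B_hat = (1.0 / u) * forward_diff_rows(A / v)  # bhat_{nk} = (1/u_k)(a_{nk}/v_k - a_{n,k+1}/v_{k+1}), cf. (3.4)
print(np.max(np.sum(np.abs(B_hat), axis=1)))  # e.g. the sup_n sum_k |.| occurring in Corollary 3.3 below
```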

Proof (a) We write Z = X  . (i)

First we show that A ∈ (Z , Y ) implies that the conditions in (3.1) and (3.2) hold. We assume A ∈ (Z , Y ). Then we have An ∈ Z β for n = 0, 1, . . . , hence An ∈ M(X, c0 ) by the first part of (2.30) in Corollary 2.7. Let x ∈ X be given. Then z = x ∈ Z and x = z, and it follows from An ∈ Z β and (2.31) in Corollary 2.7 that (3.5) Bn x = An z for n = 0, 1, . . . , that is, Bx = Az.

Therefore, Az ∈ Y implies Bx ∈ Y . Thus we have shown that A ∈ (Z , Y ) implies that the conditions in (3.1) and (3.2) hold. This completes the proof of Part (i). (ii) Now we show that if the conditions in (3.1) and (3.2) are satisfed, then A ∈ (Z , Y ). We assume that the conditions in (3.1) and (3.2) are satisfied. First we have Bn ∈ X β for n = 0, 1, . . . by (3.2) which means An ∈ (X β )+ by the definition of the matrix B. Now this and An ∈ M(X, c0 ) in (3.1) together imply An ∈ Z β for n = 0, 1, . . . by the first part of (2.30) in Corollary 2.7. Again it follows that (3.5), and therefore Az ∈ Y for all z ∈ Z . Thus we have shown that if the conditions in (3.1) and (3.2) are satisfed, then A ∈ (Z , Y ). This completes the proof of Part (ii). (b) Since, by Proposition 2.1, A ∈ (u −1 ∗ X, Y ) if and only if D ∈ (X, Y ) where Dn = An /u for n = 1, 2, . . . , Part (b) is an immediate consequence of Part (a).  We apply Theorem 3.1 to characterize matrix transformations from bs and cs into ∞ , c0 and c. Corollary 3.1 Then the necessary and sufficient conditions for A ∈ (X, Y ) can be read from the following table:

3.1 Matrix Transformations on W (u, v; X )

107

From bs cs To ∞ 1. ([20, 8.4.5C]) 2. ([20, 8.4.5B]) c0 3. ([20, 8.5.6E]) 4. ([20, 8.4.5B]) c 5. ([20, 8.5.6D]) 6. ([20, 8.4.5B]) where 1. (1.1), (1.2) where  (1.1) sup ∞ k=0 |ank − an,k−1 | < ∞ n

(1.2) limk→∞ ank = 0 for each n 2. (1.1) 3. (1.2), (3.1), ∞(3.2) where (3.1) k=0 |ank − an,k+1 | converges uniformly in n (3.2) limn→∞ ank = 0 for each k 4. (1.1), (3.2) 5. (1.2), (3.1), (5.1) where (5.1) lim ank = αk exists for each k n→∞

6. (1.1), (5.1). Proof 1. Since X = ∞ obviously is normal, we have by (3.1) and (3.2) in Theorem 3.1 that A ∈ (bv, ∞ ) if and only if An ∈ M(∞ , c0 ) for all n, that is, An ∈ c0 by Part (b) of Proposition 2.2, which is the condition in (1.2). Also B = + An ∈ (∞ , ∞ ), that is, by (1.1) of Part 1. of Theorem 1.23 sup n

∞ 

|bnk | = sup n

k=0

∞ 

|ank − an,k+1 | < ∞,

k=0

which, together with (1.2), obviously is equivalent to the condition in (1.1). This completes the proof of Part 1. 2. Since X = cs is a B K space with AK by Part (d) of Example 2.7, Z = 1 is a β B K space with AK by Part (c) of Example 1.4 with Z β = ∞ = 1 by Example β 1.8, and cs = bv by Part (a) of Corollary 2.8, we obtain from Theorem 1.22 that A ∈ (cs, ∞ ) if and only if D = At ∈ (1 , bv). This is the case by Proposition 2.4 if and only if C =  · D ∈ (1 , 1 ), that is, by (19.1) of Part 19. of Theorem 1.23, if and only if ∞  sup |cnk | < ∞. (3.6) k

n=0

Since cnk = ( · D)nk = dnk − dn,k−1 = akn − ak,n−1 for all n and k,

108

3 Operators Between Matrix Domains

we obtain from (3.6) that sup k

∞ 

|akn − ak,n−1 | < ∞,

n=0

which is the condition in (1.1). This completes the proof of Part 2.. 5. (i) First we show the sufficiency of the conditions in 5. We assume that the conditions in (1.2), (5.1) and (3.2) are satisfied. First we note that the condition in (1.2) is equivalent to An ∈ M(∞ , c0 ) for all n by Part (b) of Proposition 2.2, which is the condition in (3.1) of Part (a) of Theorem 3.1. Also the condition in (5.1) obviously implies lim bnk = lim (ank − an,k+1 ) exists for each k

n→∞

n→∞

and this and the condition in (3.2) together imply B ∈ (∞ , c) by the conditions (11.1) and (11.2) of Part 11. of Theorem 1.23. Now B ∈ (∞ , c) is the condition in (3.2) which together with the condition in (3.1) implies A ∈ (bs, c) by Part (a) of Theorem 3.1. Thus we have shown the sufficiency of the conditions in (1.2), (5.1) and (3.2). (ii) Now we show the necessity of the conditions in 5.. We assume A ∈ (bs, c). Since ∞ is a normal set it follows as in the proof of Part 1. that An ∈ M(∞ , c0 ) which is the condition in (1.2), and B ∈ (∞ , c) which, by the conditions in (11.1) and (11.2) of Part 11. of Theorem 1.23, yields the condition in (3.1). Finally, since e(k) ∈ bs for all k, we obtain Ae(k) ∈ c for all k which is the condition in (5.1). Thus we have shown that A ∈ (bs, c) implies that the conditions in 5. hold. 3. The proof of Part 3. is analogous to that of Part 5. 4. Since cs has AK it follows by Theorem 1.11 that A ∈ (cs, c0 ) if and only A ∈ (cs, ∞ ) and Ae(k) ∈ c for all k, which is the condition in (3.2); A ∈ (cs, ∞ ) is to equivalent the condition in (1.1) by Part 2. 6. The proof of Part 6. is very similar to that of Part 4.  Now we establish the characterizations of the classes (bs, bs), (cs, bs), (bs, cs) and (cs, cs). Corollary 3.2 The necessary and sufficient conditions for A ∈ (X, Y ), when X ∈ {bs, cs} and Y ∈ {∞ , c0 , c}, can be read from the following table: From bs cs To bs 1. ([20, 8.4.6C]) 2. ([20, 8.4.6B]) cs 3. 4. ([20, 8.4.6B])

3.1 Matrix Transformations on W (u, v; X )

109

where 1. (1.1), (1.2) where  n (1.1) sup ∞ k=0 | j=0 (a jk − a j,k−1 | < ∞ n

(1.2) limk→∞ ank = 0 for each n 2. (1.1) 3. (1.2), (3.1), ∞(3.2) where n (3.1) | j=0 (a jk − a j,k+1 )| converges uniformly in n k=0 ∞ (3.2) a exists for each k j=0 jk 4. (1.1), (3.2). Proof We write C =  · A, that is, n 

cnk =

a jk for n, k = 0, 1, . . . .

j=0

Applying Proposition 2.4, we obtain Parts 1., 2., 3. and 4. by replacing the entries of the matrix A in the conditions in Parts 1., 2., 5. and 6. of Corollary 3.1 by those of the matrix C and observing that the condition lim cnk = lim

k→∞

k→∞

n 

a jk = 0 for each n

j=0

obviously is equivalent to the condition lim an = 0 for each n,

k→∞

which is the condition in (1.2).



Remark 3.1 By [20, 8.5.9], A ∈ (bs, cs) if and only if    ∞    ∞  (a jk − a j,k+1 ) = 0 and (1.2) in Corollary 3.2. lim   n→∞  k=0  j=n

(3.7)

It can be shown that the conditions in (3.7) are equivalent to those in (1.2), (3.1) and (3.2) in Part 3. of Corollary 3.2. Now we characterize matrix transformations from the spaces of generalized weighted means into the classical sequence spaces and into spaces of generalized weighted means. We write sup N , sup K and supk,N for the suprema taken over all finite subsets N and K of N0 and over all non-negative integers k and all finite subsets N of N0 , respectively.

110

3 Operators Between Matrix Domains

Corollary 3.3 ([12, Theorem 3.3]) The necessary and sufficient conditions for A ∈ (W (u, v; X ), Y ), when X ∈ { p (1 ≤ p ≤ ∞), c0 , c} and Y ∈ {∞ , c0 , c, } and Y = r (1 < r < ∞) for X = 1 can be read from the following table: To From W (u, v;  p ) (1 ≤ p < ∞) W (u, v; c0 ) W (u, v; c) W (u, v; ∞ )

∞ c0 c 1 r (1 < r < ∞) 1. 5. 9. 13.

2. 6. 10. 14.

3. 7. 11. 15.

4. 17. only p = 1 8. 18. 12. 15. 16. 20.

where 1. (1.1),(1.2) where ∞ (1.1) sup k=0 |ank /u k vk | < ∞ for all n  k  q supn ∞ k=0 |(1/u k )(ank /vk − an,k+1 /vk+1 )| < ∞ (1 < p < ∞) (1.2) ( p = ∞) supn,k |(1/u k )(ank /vk − an,k+1 /vk+1 )| < ∞ 2. (1.1),(1.2), (2.1) where (2.1) limn→∞ ((1/u k )(ank /vk − an,k+1 /vk+1 )) = 0 for each k 3. (1.1),(1.2), (3.1) where (3.1) limn→∞ ((1/u k )(ank /vk − an,k+1 /vk+1 )) = αk for each k 4. (1.1), (4.1) ⎧ where   q sup N ∞ ⎪ n∈N (ank /vk − an,k+1 /vk+1 )| < ∞ k=0 |(1/u k ) ⎪ ⎨ (1 < p < ∞) (4.1) ∞  ⎪ ⎪ |ank /vk − an,k+1 /vk+1 | < ∞ ( p = 1) ⎩ supk |1/u k | n=0

5. (1.1), (5.1) where  (5.1) supn ∞ k=0 |(1/u k )(ank /vk − an,k+1 /vk+1 )| < ∞ 6. (1.1),(2.1),(5.1) 7. (1.1),(3.1),(5.1) 8. (1.1), (8.1) where   (8.1) sup N ∞ k=0 |(1/u k ) n∈N (ank /vk − an,k+1 /vk+1 )| < ∞ 9. (5.1), (9.1), (9.2) where (9.1) limk→∞ ank /(u k vk ) = βn for all n (9.2) supn βn < ∞ 10. (2.1),(5.1), (9.1),(9.2),  (10.1) where (10.1) limn→∞ ∞ k=0 ank /vk (1/u k − 1/u k−1 ) = 0 11. (3.1),(5.1), (9.1),(9.2),  (11.1) where (11.1) limn→∞ ∞ k=0 ank /vk (1/u k − 1/u k−1 ) = γ 12. (8.1),(9.1), where (12.1) ∞ (12.1) |β | m)

(m = 0, 1, . . . ).

3.2 Matrix Transformations on X T

113

Moreover, if A ∈ (X T , Y ) then ˆ z) for all z ∈ Z = X T . Az = A(T

(3.9)

Proof We write W (n) = W (An ) , for short. (i) First we show that A ∈ (X T , Y ) implies the conditions in (3.8) and (3.9). We assume A ∈ (X T , Y ). Then it follows that An ∈ Z β , hence, by (2.72) in Theorem 2.16, Aˆ n ∈ X β and W (n) ∈ (X, c0 ) for all n, which is the second condition in (3.8). Let x ∈ X be given, hence z = Sx ∈ Z . Since An ∈ Z β implies An z = Aˆ n (T z) = Aˆ n x for all n by (2.73) in Theorem 2.16, and Az ∈ Y ˆ = Az ∈ Y , we have Aˆ ∈ (X, Y ). Moreover (3.9) holds. for all z ∈ Z implies Ax (ii) Now we show that the conditions in (3.8) imply A ∈ (Z , Y ). We assume Aˆ ∈ (X, Y ) and W (n) ∈ (X, c0 ) for all n. Then we have Aˆ n ∈ X β for all n, and this and W (n) ∈ (X, c0 ) together imply An ∈ Z β for all n by (2.72) in Theorem 2.16. Now let z ∈ Z be given, hence x = T z ∈ X . Again we have ˆ ∈ Y for all x ∈ X An z = Aˆ n x for all n by (2.73) in Theorem 2.16, and Ax implies ˆ ∈ Y. Az = Ax Hence we have A ∈ (X, Y ).  Remark 3.3 Similarly as in Corollary 2.12, the condition W (An ) ∈ (X, c0 ) in (3.8) of Theorem 3.2 may be replaced by W (An ) ∈ (X, ∞ ). The result of Theorem 3.2 can be extended to matrix transformations on the spaces (∞ )T and cT . Corollary 3.4 ([14, Remark 3.5]) (a) The statement of Theorem 3.2 also holds for X = ∞ . (b) Let Y ⊂ ω be a linear space. Then we have A ∈ (cT , Y ) if and only if Aˆ ∈ (c0 , Y ), W (An ) ∈ (c, c) for all n,

(3.10)

and m 

(n) ∞ (An ) (n) ˆ wmk (n = 0, 1, . . . ). Ae − α n=0 ∈ Y, where α = lim m→∞

(3.11)

k=0

Moreover, if A ∈ (cT , Y ) then we have

ˆ z) − ξ α (n) ∞ for all z ∈ cT , where ξ = lim Tk z. Az = A(T n=0 k→∞

(3.12)

114

3 Operators Between Matrix Domains

Proof (b)

(a) Part (a) is obvious from Part (a) of Corollary 2.13.

First we show that A ∈ (cT , Y ) implies the conditions in (3.10)–(3.12). First we assume A ∈ (cT , Y ). Since cT ⊃ (c0 )T , it follows that A ∈ ((c0 )T , Y ) and so, by the first condition in (3.9) of Theorem 3.9, Aˆ ∈ (c0 , Y ) which is the first condition in (3.10). Also, by Part (b) of Corollary 2.13, An ∈ (cT )β for all n implies W (An ) ∈ (c, c) for all n, which is the second condition in (3.10). Furthermore we obtain (3.11) from (2.77) in Part (b) of Corollary 2.13. Moreover, if A ∈ (cT , Y ) then (3.11) follows from (2.77). This completes the proof of (i). (ii) Now we show that the conditions in (3.10) and (3.11) imply A ∈ (cT , Y ). We assume that the conditions in (3.10) and (3.11) are satisfied. Then Aˆ n = β R An ∈ c0 = 1 and W (An ) ∈ (c, c) for all n, the conditions in (3.10), together imply An ∈ (cT )β for all n by Part (b) of Corollary 2.13. Let z ∈ cT be given. Then we have x = T z ∈ c. We put x (0) = x − ξ · e where ξ = limk→∞ xk . Then we have x (0) ∈ c0 and, by (2.77) in Part (b) of Corollary 2.13 (i)





ˆ (0) + ξ · Ae ˆ − α (n) ∞ ∈ Y, ˆ z) − ξ α (n) ∞ = Ax Az = A(T n=0 n=0 ˆ − (α (n) )∞ since Aˆ ∈ (c0 , Y ), Ae n=0 ∈ Y and Y is a linear space. This completes the proof of Part (ii). 

This completes the proof of Part (b).

Now we establish an relation between the operator norm of the matrix operators of A ∈ (X T , Y ) and Aˆ ∈ (X, Y ). Theorem 3.3 ([14, Theorem 3.6]) Let X and Y be B K spaces and X have AK or X = ∞ . If A ∈ (X T , Y ) then we have

L A = L Aˆ ,

(3.13)

ˆ for all x ∈ X (Part (b) of where L A (z) = Az for all z ∈ Z = X T and L Aˆ (x) = A(x) Theorem 1.10). Proof We assume A ∈ (X T , Y ). Since X is a B K space, so is Z = X T with the norm · Z = T (·) X by Theorem 2.8. This also means that x ∈ B X (0, 1) if and only if z = Sx ∈ B Z (0, 1). It follows by Theorem 1.4 that L A ∈ B(Z , Y ), and so L Aˆ ∈ B(X, Y ) by Theorem 3.2. We have by (3.9) in Theorem 3.2 or in Part (a) of Corollary 3.4 for X = ∞   L ˆ  = A =

sup

   L ˆ (x) = A

sup

Az =

x∈B X (0,1) z∈B Z (0,1)

sup

x∈B X (0,1)

sup

z∈B Z (0,1)

  ˆ   Ax 

L A (x) = L A ,

3.2 Matrix Transformations on X T

115



which implies (3.13). We obtain the next result as a corollary of Theorem 3.13.

Corollary 3.5 ([6, Theorem 2.8]) Let X =  p (1 ≤ p ≤ ∞) or X = c0 , and q be the conjugate number of p. (a)

Let Y ∈ {c0 , c, ∞ }. If A ∈ (X T , Y ), then we put

A (X T ,∞)

⎧ ∞  ⎪ ⎪ sup |aˆ nk | ⎪ ⎪ ⎪ n k=0 ⎪ ∞  ⎨      q 1/q = sup  Aˆ n  = sup | a ˆ | nk ⎪ q n ⎪ n k=0 ⎪ ⎪ ⎪ ⎪ ⎩sup |aˆ nk |

(X ∈ {c0 , ∞ }) (X ∈  p for 1 < p < ∞) (X = 1 ).

n,k

Then we have

L A = A (X T ,∞) . (b)

(3.14)

Let Y = 1 and sup N denote the supremum taken over all finite subsets N ˆ (N ) ˆ (N ) ∞ ˆ (N ) of N0 . If A ∈ (X T , 1 ) and b = (bk )k=0 denotes the sequence with bk = ˆ nk (k = 0, 1, . . . ) for any finite subset N of N0 , then we put n∈N a    

A (X T ,1) = sup bˆ (N ) 

q

N

  ⎧ ∞    ⎪  ⎪ aˆ nk  ⎨sup N  k=0 n∈N q 1/q ∞  =     ⎪ ⎪ ⎩sup N aˆ nk  

(X ∈ {c0 , ∞ } (X =  p for 1 < p < ∞)

k=0 n∈N

and ∞     ∞   

A ((1 )T ,1 ) ) = sup  Aˆ k  = sup  aˆ nk n=0 1 = sup |aˆ nk |. k

If X = 1 then

1

k

k

n=0

L A = A ((1 )T ,1)

(3.15)

A (X T ,1) ≤ L A ≤ 4 · A (X T ,1) .

(3.16)

holds; otherwise we have

Proof Let A ∈ (X T , Y ). Since X is a BK space with AK or X = ∞ , A ∈ (X T , Y ) implies Aˆ ∈ (X, Y ) by Theorem 3.3 and Part (a) of Corollary 3.4 for X = ∞ , and (3.9) in Theorem 3.2 holds in each case.

116

3 Operators Between Matrix Domains

(a) If Y ∈ {c0 , c, ∞ }, then we have L A = L Aˆ = supn Aˆ n ∗ by (1.18) and (1.19) in Part (c) of Theorem 1.10 and Theorems 1.11 and 2.8. Now (3.14) follows from the definition of the norm · (X,∞ ) and the fact that · ∗X = · q by Example 1.11. (b) Let Y = 1 . The cases X =  p (1 < p ≤ ∞) and X = c0 are proved in exactly the same way as in Part (a) of the proof by applying (1.22) in Remark 1.4. Finally, let X = 1 . Again we have Aˆ ∈ (1 , 1 ) and (3.13) in Theorem 3.3 holds. Since L (A) ˆ ∈ B(1 , 1 ) by Part (b) of Theorem 1.10, it easily follows that ∞  ∞       ˆk  L ˆ (x) ≤ | a ˆ x | ≤ sup  A  · x 1 ≤ A ((1 )T ,1) · x 1 , nk k A 1 k

n=0 k=0

1

hence L Aˆ ≤ A ((1 )T ,1) . We also have for each k         L ˆ (e(k) ) =   Aˆ k  ≤  L Aˆ  , A 1 1

hence A ((1 )T ,1) = supk Aˆ k 1 ≤ L Aˆ . Thus we have shown (3.15).  Because of (3.12) in Part (b) of Corollary 3.4 the case X = c has to be treated separately for the estimates of the norm of operators given by matrix transformations from cT into any of the spaces c0 , c, ∞ and 1 . Theorem 3.4 ([6, Theorem 2.9]) (a)

Let A ∈ (cT , Y ) where Y ∈ {c0 , c, ∞ }. Then we have

L A = A (cT ,∞) = sup n

(b)

∞ 

 (n)

|aˆ n,k | + |α |

with α (n) from 3.11.

(3.17)

k=0

Let A ∈ (cT , 1 ). Then we have    ∞       (n)  aˆ nk  +  α  ≤ L A ≤ 4 · A (cT ,1) . (3.18)

A (cT ,1) = sup      N k=0 n∈N

n∈N

Proof We write L = L A , for short, and assume A ∈ (cT , Y ) for Y ∈ {c0 , c, ∞ , 1 }. Then An ∈ (cT )β for all n. (a) Let Y ∈ {c0 , c, ∞ }. First A ∈ (cT , Y ) implies Aˆ ∈ (c 0 , Y ) by the first part of ˆ nk | < ∞ by (1.1) (3.10) in Part (b) of Corollary 3.4 and so we have supn ∞ k=0 |a ˆ ˆ in 1.–3. of Theorem 1.23, that is, Ae ∈ ∞ . Since also Ae − (α (n) )∞ n=0 ∈ Y ⊂ ∞ by (3.11) in Part (b) of Corollary 3.4, we obtain (α (n) )∞ n=0 ∈ ∞ . Therefore the

3.2 Matrix Transformations on X T

117

third term in (3.17) is defined and finite. Since cT is a B K space by Theorem 2.8 it follows from (1.18) and (1.19) in Part (c) of Theorems 1.10, 1.11 and 2.8 (as in Part (a) of the proof of Corollary 3.5) that

L = sup An ∗cT .

(3.19)

n

Also An ∈ (cT )β for n = 0, 1, . . . implies by (1.24) in Example 1.11

An ∗cT = R An 1 + |α (n) | =

∞ 

|aˆ nk | + |α (n) | for all n = 0, 1 . . . .

(3.20)

k=0

Now (3.17) follows from (3.19) and (3.20). (b) Now let Y = 1 . Since cT is a B K space, it follows by (1.21) in Theorem 1.13 and (1.22) in Remark 1.4 that

A (cT ,1)

   ∗   = sup  An  ≤ L ≤ 4 · A (cT ,1) .  N  n∈N

(3.21)

cT

 Let N be a finite subset of N0 and the sequence b(N ) = n∈N An defined as the sequence bˆ (N ) in Part (b) of Corollary 3.5 with the entries aˆ nk of Aˆ replaced by the entries (ank ) of A. Since An ∈ (cT )β for all n, it follows by (2.68) in Lemma 2.2 that the series Rk An , and consequently Rk b(N ) converge for all k, and we obtain Rk b(N ) =

∞ 

) s jk b(N = j

j=k

=



j=k

Rk A n =

n∈N

for all k, that is,

(N )

(b wmk

Hence

)

⎧ ∞ ⎪ ⎨  s b(N ) jk j = j=m ⎪ ⎩ 0

(b(N ) ) wmk

=



m→∞



s jk



an j =

n∈N

∞ 

aˆ nk = bˆk(N )

Rb(N ) = bˆ (N ) ,   ∞   = s jk an j (0 ≤ k ≤ m)

m  k=0

s jk an j

n∈N j=k

n∈N

n∈N

j=m

(m = 0, 1, . . . ).

(k > m)

(An ) n∈N wmk

β (N ) = lim

∞ 

(3.22) for all m and k, and (N )

(b wmk

)

= lim

m→∞

m  n∈N k=0

(An ) wmk =

 n∈N

α (n) .

(3.23)

118

3 Operators Between Matrix Domains

ˆ − (α (n) )∞ Furthermore, A ∈ (cT , 1 ) implies Aˆ ∈ (c0 , 1 ) and Ae n=0 ∈ 1 by the first part of (3.10) and by (3.10) in Part (b) of Corollary 3.4. It follows from (1.21) in Theorem 1.13, the fact that · ∗c0 = 1 by Example 1.11 and from (3.22) that   ∞            M1 = sup aˆ nk  = sup  Rb(N ) 1 = sup bˆ (N )  < ∞.    1 N N N k=0 n∈N

ˆ − (α (n) )∞ Furthermore, Ae n=0 ∈ 1 and (3.23) yield, if we put M2 = (n) ∞ ˆ

Ae − (α )n=0 1 ,       

   (N )   (n)      (n) β  =  Aˆ n e − α α ≤ Aˆ n e +       n∈N n∈N n∈N   ∞          ≤ M2 +  aˆ nk  ≤ M2 + bˆ (N )  ≤ M1 + M2 < ∞.   1 n∈N k=0

This implies that A (cT ,1) is defined and finite. Finally, An ∈ (cT )β for all n implies b(N ) ∈ (cT )β for all finite subsets N of N0 , and so we obtain from (2.78) in Part (c) of Corollary 2.78, (3.22) and (3.23) that     ∞        (N )   (N )        ˆ (N ) ∗ (n) aˆ nk  +  α .  b  =  Rb 1 + β  =     cT k=0 n∈N

n∈N

Now (3.18) follows from (3.21).  In view of Remark 2.8 and Theorem 3.2, we obtain Theorem 3.5 ([14, Theorem 3.9]) Let X be an F K space with AK amd Y be an arbitrary subset of ω. Then we have A ∈ (X T , Y ) if and only if Aˆ ∈ (X, Y ) and V (An ) ∈ (X, c) for n = 0, 1, . . . ,

(3.24)

(An ) ∞ (An ) = (vmk )m,k=0 (n = 0, 1, . . . ) are where the matrices Aˆ = (aˆ nk )∞ n,k=0 and V defined by Aˆ n = R An and

(An ) vmk =

⎧ m ⎨ s jk an j

(0 ≤ k ≤ n)

j=k

⎩ 0

(m = 0, 1, . . . ).

(k > n)

Proof First we assume that the conditions in (3.24) are satisfied. By Remark 2.8, V (An ) ∈ (X, c) implies W (An ) ∈ (X, c0 ), and this and Aˆ ∈ (X, Y ) together imply A ∈ (X T , Y ) by Theorem 3.2.

3.2 Matrix Transformations on X T

119

Conversely, if A ∈ (X T , Y ) then the first condition in (3.24) holds by Theorem  3.2; also An ∈ (X T )β implies V (An ) ∈ (X, c) by Remark 2.8. Remark 3.4 (a) ([14, Remark 3.10 (a)]) The statement of Theorem 3.5 also holds for X = ∞ by Part (a) of Remark 2.8. (b) ([14, Remark 3.10 (c)]) Let Y be a linear subspace of ω. Then it follows by Part (b) of Remark 2.8 and Part (b) of Corollary 3.4 that A ∈ (cT , Y ) if and only if

ˆ − α (n) ∞ ∈ Y. Aˆ ∈ (c0 , Y ), V (An ) ∈ (c, c) for all n and Ae n=0

(3.25)

We apply Theorem 3.5 to obtain the following result Part (b) is due to K. Zeller in [21]. Example 3.1 We have (a)

A ∈ (bv, ∞ ) if and only if     ∞  sup  an j  < ∞; n,k  j=k 

(b)

(3.26)

A ∈ (bv, c) if and only if (3.26) holds and α = lim

∞ 

n→∞

ank exists

(3.27)

k=0

and αk = lim ank exists for each k; n→∞

(c)

(3.28)

A ∈ (bv, c0 ) if and only if (3.26) holds and lim

n→∞

∞ 

ank = 0

(3.29)

k=0

and lim ank = 0 for each k.

n→∞

(3.30)

Proof Here we have T =  and bv = (1 ) . Since 1 is a B K space with AK , we can apply Theorem 3.5. We obtain aˆ nk =

∞  j=0

r k j an j =

∞  j=k

an j (n, k = 0, 1, . . . )

120

3 Operators Between Matrix Domains

and (n) vmk =

⎧ m ⎨ an j

(0 ≤ k ≤ m)

j=k

⎩ 0

(m = 0, 1, . . . )

(k > m)

(a) We have by Theorem 3.5 A ∈ (bv, ∞ ) if and only if Aˆ ∈ (1 , 1 ), which is the case by (4.1) in Part 4. of Theorem 1.23 is the case if and only if     ∞  sup |aˆ nk | = sup  an j  < ∞, n,k n,k  j=k  which is (3.26), and also V (An ) ∈ (1 , c), which is the case by (4.1) and (7.1) in Part 7. of Theorem 1.23 if and only if    m     (An ) sup |vmk | = sup  an j  < ∞ m,k m≥k,k  j=k 

(3.31)

and lim v(An ) m→∞ mk

= lim

m 

m→∞

an j =

j=k

∞ 

an j = βk(n) exists for all n and k.

(3.32)

j=k

Obviously the conditions in (3.31) and (3.32) are redundant. So we have A ∈ (bv, ∞ ) if and only if the condition in (3.26) is satisfied. (b) We have by Theorem 3.5 A ∈ (bv, c) if and only if Aˆ ∈ (1 , c), which is the case by (4.1) and (11.2) in Part 14. of Theorem 1.23 if and only if (3.26) holds and lim aˆ nk = lim

n→∞

n→∞

∞ 

an j = αˆ k exists for each k,

(3.33)

j=k

and also V (An ) ∈ (1 , c), which is redundant as in Part (a) of the proof. So we have A ∈ (bv, c) if and only if the conditions in (3.26) and (3.33) hold. We show that the condition in (3.33) is equivalent to those in (3.27) and (3.28). If αˆ k in (3.33) exists for each k, then, in particular, αˆ 0 exists, which is the condition in (3.27); we also have for each fixed k ∈ N0 αk = lim ank n→∞

⎛ ⎞ ∞ ∞   = lim ⎝ an j − an j ⎠ = αˆ k − αˆ k+1 , n→∞

j=k

j=k+1

3.2 Matrix Transformations on X T

121

that is, (3.28) holds. Conversely  α and αk for k ∈ N0 exist in (3.27) and (3.28), then  if the limits limn→∞ kj=0 an j = kj=0 α j for each k, and the limits αˆ k = lim

n→∞

∞ 

⎛ an j

= lim ⎝

j=k

n→∞

∞ 

an j −

j=0

k−1 

⎞ an j ⎠ = αˆ 0 −

j=1

k−1 

α j exist for all k.

j=0

(c) The proof of Part (c) is similar to Part (b) of the proof.  Remark 3.5 It is easy to see that the condition in (3.26) of Example 3.1 is equivalent to the conditions ∞   m          (3.34) sup  ank  < ∞ and sup  ank  < ∞.     n n,m k=0

k=0

Now we apply Theorems 3.5 and 3.2, Remark 3.4 and Corollary 3.4 to characterize some matrix transformations on the spaces erp for 1 ≤ p ≤ ∞, e0r and ecr . ˆ V (n) = V (An ) and It follows from (2.20) in Corollary 2.14 that the matrices A, W (n) = W (An ) (n = 0, 1, . . . ) of Theorems 3.5 and 3.2 are given by aˆ nk = Rk An = (n) vmk =

m 

∞    j (r − 1) j−k r − j an j (n, k = 0, 1, . . . ), k j=k

(3.35)

an j s jk

j=k

⎧ m j ⎨ (r − 1) j−k r − j an j = j=k k ⎩ 0

(0 ≤ k ≤ m)

(m = 0, 1, . . . ),

(3.36)

(m = 0, 1, . . . ).

(3.37)

(k > m)

and (n) wmk =

∞ 

an j s jk

j=m

⎧ ∞ j ⎨ (r − 1) j−k r − j an j = j=m k ⎩ 0

(0 ≤ k ≤ m) (k > m)

Corollary 3.6 We have (a)

([2, Theorem 2.2 (i)]) A ∈ (e1r , ∞ ) if and only if

122

3 Operators Between Matrix Domains

  ∞     j  j−k − j  sup  (r − 1) r an j  < ∞; n,k  j=k k 

(3.38)

A ∈ (e1r , c0 ) if and only if the condition in (3.38) holds and ⎛

⎞ ∞    j (r − 1) j−k r − j an j ⎠ = 0 for each k; lim ⎝ n→∞ k j=k

(3.39)

A ∈ (e1r , c) if and only if the condition in (3.38) holds and ⎛ ⎞ ∞    j j−k − j (r − 1) r an j ⎠ exists for each k; αˆ k = lim ⎝ n→∞ k j=k

(3.40)

(b) ([2, Theorem 2.2 (ii)]) A ∈ (erp , ∞ ) for 1 < p < ∞ and q = p/) p − 1) if and only if  q    ∞   ∞ j  j−k − j   (r − 1) r a (3.41) sup n j  < ∞;  k n  k=0  j=k and

 q    m    m j  j−k − j  (r − 1) r an j  < ∞ for all n; sup  k m  k=0  j=k

(3.42)

A ∈ (erp , c0 ) if and only if the conditions in (3.41), (3.42) and (3.39) hold; A ∈ (erp , c) if and only if the conditions in (3.41), (3.42) and (3.40) hold; r , ∞ ) if and only if (c) A ∈ (e∞      ∞   ∞ j  j−k − j   (r − 1) sup r a n j  < ∞;  k n  k=0  j=k and

     m   ∞ j  j−k − j  lim (r − 1) r an j  = 0 for all n;  m→∞ k  k=0  j=m

(3.43)

(3.44)

r A ∈ (e∞ , c0 ) if and only if the condition in (3.44) holds and

     ∞   ∞ j  j−k − j   (r − 1) lim r a n j  = 0;  n→∞ k  k=0  j=k

(3.45)

3.2 Matrix Transformations on X T

123

r A ∈ (e∞ , c) if and only if the conditions in (3.40), (3.44) and (3.45) hold and

     ∞   ∞ j  j−k − j   (r − 1) r a n j  converges uniformly in n.  k  k=0  j=k

(3.46)

Proof In the proof of Parts (a) and (b), we use the conditions in (3.24) of Theorem 3.5, and in the proof of Part (c), the conditions in (3.8) of Theorem 3.2 for the characterizations of the respective classes of matrix transformations. (a) The conditions for A ∈ (e1r , ∞ ) are Aˆ ∈ (1 , ∞ ), which is the condition in (3.38) by (4.1) in Part 4. of Theorem 1.23 and (3.35), and V (n) ∈ (1 , c) for all n which is equivalent to the conditions in (3.39) and lim v(n) m→∞ mk

exists for each k

(3.47)

by (4.1) and (11.2) in Part 14. of Theorem 1.23 and (3.36); the condition in (3.47) is redundant in view of (3.35) and (3.38). The conditions for A ∈ (e1r , c0 ) are Aˆ ∈ (1 , c0 ), which means by Theorem 1.11 that we have to add the condition limn→∞ aˆ nk = 0 for each k, that is, the condition in (3.39), to the condition in (3.38) for Aˆ ∈ (1 , ∞ ), and again V (n) ∈ (1 , c) for all n. For A ∈ (e1r , c) we have to replace the condition in (3.39) for Aˆ ∈ (1 , c0 ) by αˆ k = limn→∞ aˆ nk exists for each k, which is the condition in (3.40), for Aˆ ∈ (1 , c). (b) The proof of Part (b) is very similar to that of Part (b). Now we obtain the conditions in (3.41) and (3.42) for Aˆ ∈ (r , ∞ ) and V (n) ∈ (r , c) (n = 0, 1, . . . ) from the condition in (5.1) in Parts 5. and 15. of Theorem 1.23; the condition in (3.47) that comes from (11.2) in Part 15. of Theorem 1.23 again is redundant. (c) Now we use the conditions in (3.8) of Theorem 3.2 and (3.37) for the entries of the matrices W (n) (n = 0, 1, . . . ). r , ∞ ) are Aˆ ∈ (1 , 1 ), which is the condition in (3.43) The conditions for A ∈ (e∞ by (1.1) in Part 1. of Theorem 1.23, and W (n) ∈ (∞ , c0 ) (n = 0, 1, . . . ),(n)which, by (6.1) in Part 6. of Theorem 1.23, is equivalent to limm→∞ m k=0 |wmk | = 0 r ˙ that is, to the condition in (3.44). The conditions for A ∈ (e∞ , c0 ) (n = 0, 1, ), , c ), which, by (6.1) in Part 6. of Theorem 1.23, is equivalent to are Aˆ ∈ ( ∞ 0 (n) | a ˆ | = 0, that is, to the condition in (3.45), and W ∈ ( , c ) limm→∞ ∞ mk ∞ 0 k=0 r , c) which again is equivalent to the condition in (3.44). The conditions for A ∈ (e∞ , c) which, by (11.1) are W (n) ∈ (∞ , c0 ), that is, (3.44) as before, and Aˆ ∈ (∞ ˆ nk | converges and (11.2) in Part 11. of Theorem 1.23, is equivalent to ∞ k=0 |a uniformly in n, that is, the condition in (3.46), and limn→∞ aˆ nk = aˆ k exists for each k, which is the condition in (3.40). 

124

3 Operators Between Matrix Domains

Finally we characterize matrix transformations on the matrix domains of  in the classical sequence spaces. Now we have T =  and S = , hence the matrices Aˆ and W (n) (n = 0, 1, . . . ) of Theorem 3.2 are given by aˆ nk = Rk An =

∞ 

an j (n, k = 0, 1, . . . ),

(3.48)

j=k

and (n) wmk =

∞  j=m

an j s jk

⎧ ∞ ⎨ an j = j=m ⎩ 0

(0 ≤ k ≤ m)

(m = 0, 1, . . . ).



(k > m) (3.49)

We need the following results the first of which is a generalization of Theorem 1.12. Proposition 3.1 Let X be an F K space, b ∈ / X be a sequence, X 1 = X ⊕ b = {x1 = x + λb : x ∈ X, λ ∈ C}, and Y be a linear subspace of ω. Then A ∈ (X 1 , Y ) if and only if A ∈ (X, Y ) and Ab ∈ Y . Proof First, we assume A ∈ (X 1 , Y ). Then X ⊂ X 1 implies A ∈ (X, Y ), and b ∈ X 1 implies Ab ∈ Y . Conversely, we assume A ∈ (X, Y ) and Ab ∈ Y . Let x1 ∈ X 1 be given. Then there are x ∈ X and λ ∈ C such that x1 = x + λb, and it follows that Ax1 = A(x + λb) = Ax + λAb ∈ Y .  Proposition 3.2 Let Y be a linear subspace of ω. Then we have A ∈ (c(), Y ) if and only if Aˆ ∈ (c0 , Y ), (3.50)   ∞    (3.51) an j  < ∞ for n = 0, 1, . . . sup(m + 1)  m  j=m  and A(e) ∈ Y.

(3.52)

Proof Since x ∈ c() if and only if x (0) = x − ξ e ∈ c0 (), applying Proposition 3.1 with X = c0 (), X 1 = c() and b = e, we obtain A ∈ (c(), Y ) if and only if A ∈ (c0 , ) and A(e) ∈ Y which is the condition in (3.52). Furthermore, we

3.2 Matrix Transformations on X T

125

have, by Theorem 3.2, A ∈ (c0 , ) if and only if Aˆ ∈ (c0 , Y ), which is the condition in (3.50), and W (n) ∈ (c0 , c0 ) for all n, which by the conditions (1.1) (7.1) in Part 7. of Theorem 1.23 and by (3.49) is equivalent to     ∞   ∞ m        (n)    sup |wmk | = sup an j  = sup(m + 1)  an j  < ∞ for n = 0, 1, . . . ,  m m m  j=m   k=0 k=0  j=m ∞ 

which is the condition in (3.51), and (n) lim wmk = lim

m→∞

∞ 

m→∞

an j = 0 for n = 0, 1, . . . ,

(3.53)

j=m



which is redundant.

Corollary 3.7 Let 1 < p, r < ∞, q = p/( p − 1) and s = r/(r − 1). Then the necessary and sufficient conditions for A ∈ (X, Y ) can be read from the following table: From ∞ () To ∞ 1. c0 6. c 11. 1 16. r 21.

c0 () c() bv 2. 7. 12. 17. 22.

3. 8. 13. 18. 23.

bv p

4. 5. 9. 10. 14. 15. 19. 20. 24. unknown

where 1. where  ∞ (1.1) supn ∞ an j | < ∞ k=0 | j=k (1.2) limm→∞ (m + 1)| ∞ j=m an j | = 0 for all n 2. (1.1), (2.1) where  (2.1) supm (m + 1)| ∞ j=m an j | < ∞ for all n 3. (1.1), (2.1), (3.1)  where (3.1) supn | ∞ k=0 (k + 1)ank | < ∞ for all n 4. (4.1) where  (4.1) supn,k | ∞ j=k an j | < ∞

126

3 Operators Between Matrix Domains

5. (5.1),(5.2) where ∞   q | ∞ (5.1) supn j=k ank | < ∞ k=0  s (5.2) supm (m + 1)| ∞ j=m an j | < ∞ for all n 6.

(1.2), (6.1) where ∞ (6.1) limn→∞ ∞ k=0 | j=k ank | = 0

7.

(1.1), (2.1), (7.1) where (7.1) limn→∞ ∞ j=k an j = 0 for every k

8. (1.1), (2.1), (7.1),(8.1) where (8.1) limn→∞ ∞ k=0 (k + 1)ank = 0 9.

(4.1), (7.1)

10. (5.1), (5.2), (7.1) 11. (1.2), (11.1), ∞(11.2) ∞where (11.1) | a | converges uniformly in n k=0 j=k n j (11.2) limn→∞ ∞ ˆ k exists for every k j=k ank = α 12. (1.1), (2.1), (11.2) 13. (1.1), (2.1), (11.2),(13.1) where (13.1) limn→∞ ∞ k=0 (k + 1)ank = γ exists 14. (4.1), (11.2) 15. (5.1), (5.2), (11.2) 16. (1.2), (16.1) where ∞   (16.1) sup ( ∞ n∈N k=0 | j=k an j |) < ∞ N ⊂ N0 N finite

17. (2.1),(16.1) 18. (2.1), (16.1), ∞(18.1) ∞where (18.1) | n=1 k=0 (k + 1)ank | < ∞ 19. (19.1) where  ∞ (19.1) supk ∞ n=0 | j=k an j | < ∞ 20. (5.2) (20.1) where n   q (20.1) sup ( ∞ k=0 | j=k an j | ) < ∞ n∈N N ⊂ N0 N finite

21. (1.2), (21.1) where ∞  ∞ r (21.1) sup k∈K n=0 | j=k ank | < ∞ K ⊂ IN0 K finite

22. (2.1), (21.1) 23. (2.1), (21.1), where ∞(23.1) ∞ (23.1) | |(k + 1)a |r ) < ∞ nk n=0 k=0 24. (24.1) where  ∞ r (24.1) supk ∞ n=0 | j=k an j | < ∞.

3.2 Matrix Transformations on X T

127

Proof (i) First we show that if X is an F K space with AK and Y is any subset of ω, then A ∈ (X T , Y ) if and only if Aˆ ∈ (X, Y ) and W (n) ∈ (X, ∞ ) for n = 0, 1, . . . .

(3.54)

We assume A ∈ (X T , Y ). Then, by the condition in (3.8) of Theorem 3.8, Aˆ ∈ (X, Y ) and W (n) ∈ (X, ∞ ) for n = 0, 1, · · · , and these conditions clearly imply those in (3.54), since (X, c0 ) ⊂ (X, ∞ ). Conversely, we assume that the conditions in (3.54) hold. Then the series Rk An = ∞ s a converge for all n and k, hence jk n j j=k lim w(n) m→∞ mk

= lim

m→∞

∞ 

s jk an j = 0 for each k and all n.

j=m

This condition and W (n) ∈ (X, ∞ ) (n = 0, 1, . . . ) together imply W (n) ∈ (X, c0 ) (n = 0, 1, . . . ). So the conditions in (3.8) of Theorem 3.8 are satisfied and consequently we have A ∈ (X T , Y ). This completes the proof of Part (i) (ii) We prove Parts 1., 6., 11., 16. and 21. By Part (a) of Remark 3.4, we have A ∈ (∞ , Y ) if and only if Aˆ ∈ (∞ , Y ) and W (n) ∈ (∞ , c0 ) (n = 0, 1, . . . ). For Y = ∞ , the condition A ∈ (∞ , Y ) yields (1.1) in Part 1. by (1.1) in Part 1. of Theorem 1.23; for Y = c0 , the condition in A ∈ (∞ , Y ) yields (6.1) in Part 6. by (6.1) in Part 6. of Theorem 1.23; for Y = c, the condition in A ∈ (∞ , Y ) yields (11.1) and (11.2) in Part 11. by (11.1) and (11.2) in Part 11. of Theorem 1.23; for Y = 1 , the condition in A ∈ (∞ , Y ) yields (16.1) in Part 16. by (16.1) in Part 16. of Theorem 1.23; for Y = r (1 < r < ∞), the condition in A ∈ (∞ , Y ) yields (21.1) in Part 21. by (21.1) in Part 21. of Theorem 1.23. Also the condition W (n) ∈ (∞ , c0 ) yields (1.2) in Parts 1., 6., 11., 16. and 21. by (6.1) in Part 6. of Theorem 1.23. This completes the proof of Part (ii). (iii) We prove Parts 2., 7., 12. 17. and 22.. Since (c0 , Y ) = (∞ , Y ) for Y = ∞ , 1 , r (1 < r < ∞) by Parts 2., 17. and 22. of Theorem 1.23, we obtain (1.1) in Part 2., (16.1) in Part 17. and (21.1) in Part 22.. For Y = c0 , c, the condition Aˆ ∈ (c0 , Y ) yields Aˆ ∈ (c0 , ∞ ) and Ae(k) ∈ Y for all k, that is, (1.1) and (7.1) in Part 7., and (1.1) and (11.2) in Part 12.. Also the second condition in (3.54), that is, W (n) ∈ (c0 , ∞ ) for n = 0, 1, . . . yields (2.1) in Parts 2., 7., 12. 17. and 22. by (1.1) in Part 2. of Theorem 1.23 This completes the proof of Part (iii). (iv) We prove Parts 3., 8., 13. 18. and 23.. By Proposition 3.1, we have to add the condition A(e) =

∞  k=0

∞ (k + 1)ank

∈Y n=0

128

3 Operators Between Matrix Domains

to the conditions for A ∈ (c0 , Y ) in each case. This yields (3.1), (8.1), (13.1), (18.1) and (23.1) in Parts 3., 8., 13. 18. and 23.. This completes the proof of Part (iv). (v) We prove Parts 4., 9., 14. 19. and 24.. For Y = ∞ , the condition A ∈ (1 , Y ) yields (4.1) in Part 4. by (4.1) in Part 4. of Theorem 1.23; for Y = c0 , c we have to add the condition Ae(k) ∈ c0 , c for each k, that is, (7.1) in Part 9., and (11.2) in Part 14.; for Y = 1 , the condition A ∈ (1 , Y ) yields (19.1) in Part 19. by (19.1) in Part 19. of Theorem 1.23; for Y = r , the condition A ∈ (1 , Y ) yields (24.1) in Part 24. by (24.1) in Part 24. of Theorem 1.23. The condition W (n) ∈ (1 , ∞ ) for n = 0, 1, . . . in (3.54) is     ∞   (3.55) sup  an j  < ∞ for all n m≥k,k  j=m  by (4.1) in Part 4. of Theorem 1.23. It is obvious that the condition in (3.55) is contained in (4.1) in Parts 4., 9. and 14., in (19.1) and (24.1) of Parts 19. and 24.. This completes the proof of Part (v). (vi) Finally, we prove Parts 5., 10., 15. and 20.. For Y = ∞ , the condition A ∈ (r , Y ) (1 < r < ∞) yields (5.1) in Part 5. by (5.1) in Part 5. of Theorem 1.23; for Y = c0 , c we have to add the condition Ae(k) ∈ c0 , c for each k, that is, (7.1) in Part 10., and (11.2) in Part 15.; For Y = 1 , the condition A ∈ (r , Y ) (1 < r < ∞) yields (20.1) in Part 20. by (20.1) in Part 20.. of Theorem 1.23. Also the W (n) ∈ (r , ∞ ) (1 < r < ∞; s = r/(r − 1)) for n = 0, 1, . . . in (3.54) is  s  ∞ ∞     (n) s  sup |wmk | = sup an j   m m  k=0 j=m  j=m  s   ∞   = sup(m + 1)  an j  < ∞ for all n, m  j=m  ∞ 

(3.56)

by (5.1) in Part 5. of Theorem 1.23, and (3.56) obviously is (5.2) in Parts 5., 10., 15. and 20.. This completes the proof of Part (vi). 

3.3 Compact Matrix Operators In this section, we establish some identities or estimates for the Hausdorff measure of noncompactness of matrix operators from B K with AK into the spaces c0 , c and 1 . This is achieved by applying Theorems 1.10, 1.27 and 1.29. The identities or

3.3 Compact Matrix Operators

129

estimates and Theorem 1.30 yield the characterizations of the corresponding classes of compact matrix operators. We need the following concept and result. A norm · on a sequence space X ˜ is said to be monotonous, if x, x˜ ∈ X with |xk | ≤ |x˜k | for all k implies x ≤ x . Part (a) of the following lemma generalizes Example 1.14. Lemma 3.1 ([15, Lemma 9.8.1]) (a) Let X be a monotonous B K space with AK , Pn : X → X be the projector onto the linear span of {e(1) , e(2) , . . . , e(n) } and Rn = I − Pn , where I is the identity on X . Then we have L = lim Rn = 1 n→∞





and χ (Q) = lim

n→∞

(3.57)

sup Rn (x)

for all Q ∈ M X

(3.58)

x∈Q

(b) Let Pn : c → c be the projector onto the linear span of {e, e(1) , e2 , . . . , e(n) } and Rn = I − Pn . Then (3.59) L = lim Rn = 2. n→∞

and the estimate in (1.48) of Part (b) in Example 1.14 holds. Proof It is clear that μn (Q) = sup Rn (x) < ∞ for all n and all Q ∈ M X or Q ∈ Mc . x∈Q

(a) Since X is a monotonous B K space with AK , we obtain for all n and all x ∈ X

Rn (x) = x − x [n] ≥ x − x [n+1] = Rn+1 (x) .

(3.60)

Let n ∈ N0 and ε > 0 be given. Then there exists a sequence x (0) ∈ Q such that

Rn (x (0) ) ≥ μn+1 (Q) − ε and it follows from (3.60) that μn (Q) ≥ Rn (x (0) ) ≥ Rn+1 (x (0) ) ≥ μn+1 (Q) − ε.

(3.61)

Since n ∈ N0 and ε > 0 were arbitrary, we have μn (Q) ≥ μn+1 (Q) ≥ 0 for all n, and so limn→∞ μn (Q) exists. This shows that the limit in (3.58) exists. Furthermore, since the norm · is monotonous, we have Rn (x) = x − x [n] ≤ x for all x ∈ X and all n, hence

Rn ≤ 1 for all n.

(3.62)

130

3 Operators Between Matrix Domains

To prove the converse inequality, given n ∈ N0 , we have Rn (e(n+1) ) =

e(n+1) = 0, and consequently

Rn ≥ 1 for all n.

(3.63)

Finally (3.62) and (3.63) imply (3.57) and the equality in (3.58) follows from Remark 1.12. (b) (This is Part (b) of Example 1.14.) The sequence (e, e(0) , e(1) , . . . ) is a Schauder basis of the space c by Example 1.6 and every sequence x = (xk )∞ k=0 ∈ c has a unique representation x =ξe+

∞  (xk − ξ )e(k) where ξ = ξ(x) = lim xk k→∞

k=0

by (1.14) in Example 1.6. Then we have Rn (x) =

∞ 

(xk − ξ )e(k) for all n ∈ N.

k=n+1

Since |xk − ξ | ≤ x ∞ + |ξ | ≤ 2 · x ∞ for all k and all x ∈ c, we obtain

Rn (x) ∞ ≤ sup |xk − ξ | ≤ 2 · x ∞ , k≥n+1

that is,

Rn ≤ 2 for all n ∈ N.

(3.64)

On the other hand, if n ∈ N is given, then we have for x = −e + 2 · e(n+1) ξ = −1, x ∞ = 1 and Rn (x) = 2 · e(n+1) , that is, Rn ≥ 2. This and (3.64) together imply Rn = 2 for all n ∈ N, and so a = lim Rn = 2. n→∞

Similarly as in the proof of Part (a) it can be shown that  lim

n→∞

 sup Rn (x) ∞ x∈Q

exists, and so (1.48) follows from (1.44) and (1.45) in Theorem 1.27.



Example 3.2 The Hausdorff measure of noncompactness of any operator L ∈ B(ℓ1) is given by
$$\|L\|_\chi=\lim_{m\to\infty}\Big(\sup_k\sum_{n=m}^{\infty}|a_{nk}|\Big),\tag{3.65}$$

where A = (a_{nk})_{n,k=0}^∞ is the matrix that represents L (Part (b) of Theorem 1.10).
Proof We write B_{ℓ1} for the closed unit ball in ℓ1 and Ã for the matrix with the rows Ã_n = 0 for n ≤ m and Ã_n = A_n for n ≥ m + 1. Then we obviously have (R_m ∘ L)(x) = Ã(x) for all x ∈ ℓ1 and we obtain by (1.52) in Theorem 1.29, Remark 1.12, (3.58) and (3.57) in Lemma 3.1, and (19.1) in 19. of Theorem 1.23
$$\|L\|_\chi=\chi\big(L(B_{\ell_1})\big)=\lim_{m\to\infty}\Big(\sup_{\|x\|=1}\|(R_m\circ L)(x)\|\Big)=\lim_{m\to\infty}\|\tilde A\|=\lim_{m\to\infty}\Big(\sup_k\sum_{n=m+1}^{\infty}|a_{nk}|\Big),$$
that is, (3.65) holds. □

Remark 3.6 It follows from (3.65) and (1.54) in Theorem 1.30 that L ∈ K(ℓ1) if and only if
$$\lim_{m\to\infty}\Big(\sup_k\sum_{n=m}^{\infty}|a_{nk}|\Big)=0.\tag{3.66}$$
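As a rough numerical illustration of (3.65) and (3.66) — my own sketch, not part of the original text — the quantity sup_k ∑_{n≥m}|a_{nk}| can be evaluated on a finite truncation of the matrix for increasing m; the two sample matrices below are hypothetical choices.

```python
import numpy as np

def chi_estimate_l1(A):
    """Approximate ||L_A||_chi for L_A in B(l_1) via (3.65):
    the quantity sup_k sum_{n>=m} |a_nk| for m = 0, 1, ..., on a finite section."""
    N = A.shape[0]
    return [np.abs(A[m:, :]).sum(axis=0).max() for m in range(N)]

N = 20
D = np.diag([2.0 ** -n for n in range(N)])  # tails tend to 0, so L_D is compact by (3.66)
I = np.eye(N)                               # tails stay equal to 1, so L_I is not compact

print(chi_estimate_l1(D)[-5:])  # values close to 0
print(chi_estimate_l1(I)[-5:])  # values equal to 1
```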

Theorem 3.6 ([15, Theorem 9.8.4]) Let X and Y be BK spaces, X have AK, L ∈ B(X, Y) and A ∈ (X, Y) be the matrix that represents L (Part (b) of Theorem 1.10).
(a) If Y = c, then we have
$$\frac12\cdot\lim_{m\to\infty}\Big(\sup_{n\ge m}\big\|A_n-(\alpha_k)_{k=0}^{\infty}\big\|_X^{*}\Big)\le\|L\|_\chi\le\lim_{m\to\infty}\Big(\sup_{n\ge m}\big\|A_n-(\alpha_k)_{k=0}^{\infty}\big\|_X^{*}\Big),\tag{3.67}$$
where
$$\alpha_k=\lim_{n\to\infty}a_{nk}\quad\text{for all }k.\tag{3.68}$$
(b) If Y = c_0, then we have
$$\|L\|_\chi=\lim_{m\to\infty}\Big(\sup_{n\ge m}\|A_n\|_X^{*}\Big).\tag{3.69}$$
(c) If Y = ℓ1, then we have
$$\lim_{m\to\infty}\Bigg(\sup_{N_m}\Big\|\sum_{n\in N_m}A_n\Big\|_X^{*}\Bigg)\le\|L\|_\chi\le 4\cdot\lim_{m\to\infty}\Bigg(\sup_{N_m}\Big\|\sum_{n\in N_m}A_n\Big\|_X^{*}\Bigg),\tag{3.70}$$

where the supremum is taken over all finite subsets of integers ≥ m. Proof We write · = · ∗X and A = A (X,∞ ) = supn An , for short. (a) Let A = (ank )∞ n,k=0 ∈ (X, c). Then A < ∞, αk in (3.68) exists for all k by (11.2) in 13. of Theorem 1.23, and L = A by (1.19) in Theorem 1.10. β (i) We show (αk )∞ k=0 ∈ X . Let x ∈ X be given. Since X has AK , there exists a positive constant C such that x [m] ≤ C x for all m ∈ N0 and it follows that   m       ank xk  =  An x [m]  ≤ C An ∗ x ≤ C A · x for all n and m,    k=0

hence by (3.68)   m   m         αk xk  = lim  ank xk  ≤ C A · x for all m.   n→∞    k=0

(3.71)

k=0

This implies (αk xk )∞ k=0 ∈ bs, and since x ∈ X was arbitrary, we conclude γ γ ∈ X , and X = X β by Theorem 1.16 (c), since X has AK and so (αk )∞ k=0 AD. ∞ β ∗ Also (αk )∞ k=0 ∈ X implies (αk )k=0 < ∞ by Theorem 1.14.

(ii)

Now we show lim An x =

n→∞

∞ 

αk xk for all x ∈ X.

(3.72)

k=0

Let x ∈ X and ε > 0 be given. Since X has AK , there exists k0 ∈ N0 such that

x − x [k0 ] ≤

ε ∗ , where M = A + (αk )∞ k=0 . 2(M + 1)

(3.73)

It also follows from (3.68) that there exists n 0 ∈ N0 such that k  0   ε    (ank − αk )xk  < for all n ≥ n 0 .   2 k=0 Let n ≥ n 0 be given. Then it follows from (3.73) and (3.74) that

(3.74)


  k   ∞  ∞ 0               αk xk  ≤  (ank − αk )xk  +  (ank − αk )  An x −       k=0

(iii)

k=k0 +1

k=0

ε ε ε ∗ [k0 ] < + An − (αk )∞

< + = ε. k=0 x − x 2 2 2

Thus we have shown (3.72). Now we show the inequalities in (3.67). ∈ c be given. Then, by Example 1.6, y has unique represenLet y = (yn )∞ n=0  ∞ (n) tation y = ηe + n=0 (yn − η)e , where η = lim n→∞ yn . We obtain Rm y = ∞ (n) for all m. Writing yn = An x for n = 0, 1, . . . and B = n=m+1 (yn − η)e for the matrix with bnk = ank − αk for all n and k, we obtain from (bnk )∞ n,k=0 (3.72)   ∞      αk xk  = sup |Bn x|,

Rm (Ax) = sup |yn − η| = sup  An x −  n≥m+1 n≥m+1 n≥m+1  k=0

whence

sup Rm (Ax) = sup Bn ∗ for all m. x∈B X

n≥m+1

Now the inequalities in (3.67) follow from (1.52) in Theorem 1.29, (3.58) and (3.59) in Lemma 3.1, and (1.45) in Theorem 1.27. Thus we have shown Part (a). (b) Part (b) follows from (a) with αk = 0 and L = limm→∞ Rm = 1. (c) We note that we have by Theorem 1.10 (d) and the fact that Rm is given by the matrix A    ∗         

 = sup sup  A A   ≤ Rm ◦ L ∗X = n n    N ⊂ N Nm+1 n∈N  0 n∈N m+1 X N finite    ∗         



A

(X,1 ) ≤ 4 · sup  An An   = 4 · sup   . (3.75)   N ⊂ N0 Nm+1 n∈N  n∈N m+1 X N finite Now the inequalities in (3.70) follow from (1.52) in Theorem 1.29, (3.58) and (3.57) in Lemma 3.1, and Remark 1.13.  Theorem 3.7 ([14, Theorem 4.2]) Let X be a complete linear metric sequence space with a translation invariant metric, T be a triangle, and χT and χ denote the Hausdorff measures of noncompactness on X T and X , respectively. Then we have χT (Q) = χ (T (Q)) for all Q ∈ M X T .

(3.76)


Proof We write Z = X T , B(x, r ) and BT (z, r ) for the open balls of radius r and X and Z , centred at x and z, and observe that Q ∈ M Z if and only if T (Q) ∈ M X by the definition of the metric on Z (Theorem 2.8). (i) First we show that χT (Q) ≤ χ (T (Q)) for all Q ∈ M Z .

(3.77)

We assume that t = χ_T(Q) > χ(T(Q)) = s for some Q ∈ M_Z. Then there are a real ε with s < ε < t, x_1, x_2, ..., x_n ∈ X and r_1, r_2, ..., r_n < ε such that
$$T(Q)\subset\bigcup_{k=1}^{n}B(x_k,r_k)$$

by the definition of χT (Q) in Part (b) of Definition 1.8. Let v ∈ Q be given. We put u = T v ∈ X , and so there are z j ∈ Z with x j = T z j and r j < ε such that u ∈ B(x j , r j ), that is, d(u, x j ) = d(T v, T z j ) = d(T v − T z j , 0) = d(T (v − z j ), 0) = dT (v − z j , 0) = dT (v, z j ) < r j ,

hence
$$v\in B_T(z_j,r_j)\subset\bigcup_{k=1}^{n}B_T(z_k,r_k).$$

Since v ∈ Q was arbitrary, we have
$$Q\subset\bigcup_{k=1}^{n}B_T(z_k,r_k),$$

and so χT (Q) ≤ ε < t, which is a contradiction to χT (Q) = t. Therefore (3.77) must hold. (ii) Now we show χT (Q) ≥ χ (T (Q)) for all Q ∈ M Z .

(3.78)

Applying with X and Z replaced by Z and Z S = X , respectively, where S = T −1 , we obtain χ (T (Q)) = (χT ) S (T (Q)) ≤ χT (S(T (Q))) = χT (Q) for all Q ∈ M Z , that is, (3.78) is satisfied. Finally (3.77) and (3.78) imply (3.76).



Now we apply the results above to establish identities or estimates of the Hausdorff measure of noncompactness of continuous linear operators L A on matrix domains


of triangles in the classical sequence spaces. We write N_r (r ∈ N_0) for subsets of N_0 with elements that are greater than or equal to r, and sup_{N_r} for the supremum taken over all finite sets N_r.
Corollary 3.8 ([6, Corollary 3.6]) Let 1 ≤ p ≤ ∞ and q be the conjugate number of p.
(a) If A ∈ ((ℓ_p)_T, c_0) or A ∈ ((c_0)_T, c_0), then we have
$$\|L_A\|_\chi=\lim_{r\to\infty}\Big(\sup_{n\ge r}\|\hat A_n\|_q\Big)=
\begin{cases}
\displaystyle\lim_{r\to\infty}\Big(\sup_{n\ge r}\sum_{k=0}^{\infty}|\hat a_{nk}|\Big) & (p=\infty\ \text{or}\ X=c_0)\\[2mm]
\displaystyle\lim_{r\to\infty}\Bigg(\sup_{n\ge r}\Big(\sum_{k=0}^{\infty}|\hat a_{nk}|^{q}\Big)^{1/q}\Bigg) & (1<p<\infty)\\[2mm]
\displaystyle\lim_{r\to\infty}\Big(\sup_{n\ge r,\,k\ge 0}|\hat a_{nk}|\Big) & (p=1).
\end{cases}\tag{3.79}$$
(b) If A ∈ ((ℓ_p)_T, ℓ1) for 1 < p ≤ ∞ or A ∈ ((c_0)_T, ℓ1), then we have
$$\lim_{r\to\infty}\Bigg(\sup_{N_r}\Big\|\sum_{n\in N_r}\hat A_n\Big\|_q\Bigg)\le\|L_A\|_\chi\le 4\cdot\lim_{r\to\infty}\Bigg(\sup_{N_r}\Big\|\sum_{n\in N_r}\hat A_n\Big\|_q\Bigg).\tag{3.80}$$
If A ∈ ((ℓ1)_T, ℓ1), then we have
$$\|L_A\|_\chi=\lim_{r\to\infty}\Big(\sup_k\sum_{n=r}^{\infty}|\hat a_{nk}|\Big).\tag{3.81}$$
(c) If A ∈ ((ℓ_p)_T, c) or A ∈ ((c_0)_T, c), then we have
$$\frac12\cdot\lim_{r\to\infty}\Big(\sup_{n\ge r}\|\hat A_n-\hat\alpha\|_q\Big)\le\|L_A\|_\chi\le\lim_{r\to\infty}\Big(\sup_{n\ge r}\|\hat A_n-\hat\alpha\|_q\Big),\tag{3.82}$$
where α̂ = (α̂_k)_{k=0}^∞ with α̂_k = lim_{n→∞} â_{nk} for every k.

Proof (a) This follows from Part (a) of Lemma 3.1, (1.52) in Theorem 1.29 and (3.58) in Part (a) in Corollary 3.5. (b) This follows from (3.58) in Part (a) of Lemma 3.1, (1.52) in Theorem 1.29 and Part (b) in Corollary 3.5. (c) This follows from (3.58) in Part (a) of Lemma 3.1, Part (a) Theorem 3.6 and (3.58) in Part (a) in Corollary 3.5. 


Example 3.3 If T = Δ and A ∈ (bv_p, c) = (ℓ_p(Δ), c) for 1 ≤ p ≤ ∞, then
$$\hat A_n=\Bigg(\sum_{j=k}^{\infty}a_{nj}\Bigg)_{k=0}^{\infty}\ \text{for all }n\quad\text{and}\quad\hat\alpha_k=\lim_{n\to\infty}\hat a_{nk}=\lim_{n\to\infty}\Bigg(\sum_{j=k}^{\infty}a_{nj}\Bigg)\ \text{for all }k,\tag{3.83}$$
and we obtain from (3.82) in Part (c) of Corollary 3.8
$$\frac12\cdot\lim_{r\to\infty}\Bigg(\sup_{n\ge r}\Bigg\|\Bigg(\sum_{j=k}^{\infty}a_{nj}-\lim_{n\to\infty}\sum_{j=k}^{\infty}a_{nj}\Bigg)_{k=0}^{\infty}\Bigg\|_q\Bigg)\le\|L_A\|_\chi\le\lim_{r\to\infty}\Bigg(\sup_{n\ge r}\Bigg\|\Bigg(\sum_{j=k}^{\infty}a_{nj}-\lim_{n\to\infty}\sum_{j=k}^{\infty}a_{nj}\Bigg)_{k=0}^{\infty}\Bigg\|_q\Bigg)\tag{3.84}$$
with ‖·‖_q as in (3.79) in Part (a) of Corollary 3.8.
Example 3.4 Now we determine identities or estimates for the Hausdorff measure of noncompactness of L_A when A ∈ (X, c_0) and A ∈ (X, c), where X = e^r_p for 1 ≤ p ≤ ∞ or X = e^r_0 for the Euler sequence spaces e^r_p and e^r_0 of Part (a) in Example 2.8. It follows from (2.21) in Part (a) of Example 2.8 that
$$\hat a_{nk}=R_kA_n=\sum_{j=k}^{\infty}\binom{j}{k}(r-1)^{j-k}r^{-j}a_{nj}\quad\text{for }n,k=0,1,\dots\tag{3.85}$$
and
$$\hat\alpha_k=\lim_{n\to\infty}\hat a_{nk}\quad\text{for }k=0,1,\dots.\tag{3.86}$$
Then the identities for ‖L_A‖_χ when A ∈ (e^r_p, c_0) for 1 ≤ p ≤ ∞ and A ∈ (e^r_0, c_0) are given by (3.79) in Part (a) of Corollary 3.8 with â_nk from (3.85), and the estimates for ‖L_A‖_χ when A ∈ (e^r_p, c) for 1 ≤ p ≤ ∞ and A ∈ (e^r_0, c) are given by (3.82) in Part (c) of Corollary 3.8 with â_nk from (3.85) and α̂_k from (3.86).
Now we establish estimates and an identity for ‖L_A‖_χ when A ∈ (c_T, Y), where Y ∈ {c_0, c, ℓ1}.
Theorem 3.8 ([6, Theorem 3.7])
(a)

Let A ∈ (c_T, c),
$$\hat\alpha_k=\lim_{n\to\infty}\hat a_{nk}\quad\text{for }k=0,1,\dots,\tag{3.87}$$
$$\gamma_n=\lim_{m\to\infty}\sum_{k=0}^{m}w^{(A_n)}_{mk}\quad\text{for }n=0,1,\dots\tag{3.88}$$
and
$$\beta=\lim_{n\to\infty}\Bigg(\sum_{k=0}^{\infty}\hat a_{nk}-\gamma_n\Bigg).\tag{3.89}$$
Then we have
$$\frac12\cdot\lim_{r\to\infty}\sup_{n\ge r}\Bigg(\sum_{k=0}^{\infty}|\hat a_{nk}-\hat\alpha_k|+\Bigg|\sum_{k=0}^{\infty}\hat\alpha_k-\beta-\gamma_n\Bigg|\Bigg)\le\|L_A\|_\chi\le\lim_{r\to\infty}\sup_{n\ge r}\Bigg(\sum_{k=0}^{\infty}|\hat a_{nk}-\hat\alpha_k|+\Bigg|\sum_{k=0}^{\infty}\hat\alpha_k-\beta-\gamma_n\Bigg|\Bigg).\tag{3.90}$$
(b) Let A ∈ (c_T, c_0). Then we have
$$\|L_A\|_\chi=\lim_{r\to\infty}\Bigg(\sup_{n\ge r}\Bigg(\sum_{k=0}^{\infty}|\hat a_{nk}|+|\gamma_n|\Bigg)\Bigg).\tag{3.91}$$
(c) Let A ∈ (c_T, ℓ1). Then we have
$$\lim_{r\to\infty}\Bigg(\sup_{N_r}\Bigg(\sum_{k=0}^{\infty}\Big|\sum_{n\in N_r}\hat a_{nk}\Big|+\Big|\sum_{n\in N_r}\gamma_n\Big|\Bigg)\Bigg)\le\|L_A\|_\chi\le 4\cdot\lim_{r\to\infty}\Bigg(\sup_{N_r}\Bigg(\sum_{k=0}^{\infty}\Big|\sum_{n\in N_r}\hat a_{nk}\Big|+\Big|\sum_{n\in N_r}\gamma_n\Big|\Bigg)\Bigg).\tag{3.92}$$

Proof (a) Let A ∈ (cT , c). Then it follows by (3.10) and (3.11) in Part (b) of Corollary 3.4 that Aˆ ∈ (c0 , c), which implies that the limits αk in (3.87) exist β for all k, and by Part (i) of the proof of Theorem 3.6, (αˆ k )∞ k=0 ∈ c0 = 1 ⊂ cs, (An ) ∈ (c, c) for all n, which implies that the limits γn in (3.88) exist for all W ˆ − (γn )∞ n, and Ae n=0 ∈ c, which implies that the limit β in (3.89) exists. Let z ∈ cT be given and ξT = limk→∞ Tk z. Then we have by (3.12) in Part (b) of Corollary 3.4 yn = An z = Aˆ n (T z) − ξT γn = Aˆ n (T z − ξT e) + ξT ( Aˆ n e − γn ) for all n. (3.93) First, Aˆ ∈ (c0 , c) and T z − ξT e ∈ c0 together imply by (3.72) in Part (ii) of the proof of Theorem 3.6 ηˆ 0 = lim Aˆ n (T z − ξT e) = n→∞

∞ 

αˆ k (Tk z − ξT ) =

k=0

and so by (3.93) and (3.89) ηˆ = lim yn = ηˆ 0 − ξT β. n→∞

∞  k=0

αˆ k Tk z − ξT

∞  k=0

αˆ k ,


Therefore we have yn − ηˆ = =

∞ 

aˆ nk Tk z − ξT γn − (ηˆ 0 + ξT β)

k=0 ∞ 

∞ 

k=0

k=0

(aˆ nk − αˆ k )Tk z + ξT

 αˆ k − β − γn .

Finally, since z cT = T z ∞ , we obtain from (3.17) in Part (a) of Theorem 3.4  ∞ ∞      |aˆ nk − αˆ k | +  αˆ k − β − γn  sup Rr −1 (Az) ∞ = sup   n≥r z∈ScT k=0

(b)

and (3.90) follows. Let A ∈ (cT , c0 ). Then we obtain as in Part (a) of the proof αˆ k = 0 for all k, ˆ z) (n = 0, 1, . . . ) for all z ∈ cT . Therefore it follows β = 0 and then yn = A(T that  ∞     aˆ nk + |γn | sup Rr −1 (Az) ∞ = sup (3.94) n≥r

z∈ScT

(c)

k=0

k=0

and (3.91) follows. Let A ∈ (cT , 1 ). As in Part (b) of the proof, we obtain (3.94), and (3.92) follows from (3.18) in Part (b) of Theorem 3.4 

Remark 3.7 If A ∈ (X_T, Y), then we can easily obtain necessary and sufficient conditions for L_A ∈ K(X_T, Y) from our results above and (1.54) in Theorem 1.30. For instance, if A ∈ (c_T, c), then L_A ∈ K(c_T, c) if and only if
$$\lim_{r\to\infty}\sup_{n\ge r}\Bigg(\sum_{k=0}^{\infty}|\hat a_{nk}-\hat\alpha_k|+\Bigg|\sum_{k=0}^{\infty}\hat\alpha_k-\beta-\gamma_n\Bigg|\Bigg)=0$$
by (3.90) in Part (a) of Theorem 3.8.
Example 3.5 Let T = Δ and A ∈ (X_T, Y) for X, Y ∈ {c_0, c}, let Â_n (n = 0, 1, ...) and α̂_k (k = 0, 1, ...) be defined by (3.83) in Example 3.3, and let
$$\gamma_n=\lim_{m\to\infty}\sum_{k=0}^{m}w^{(A_n)}_{mk}=\lim_{m\to\infty}\Bigg(\sum_{k=0}^{m}\sum_{j=m}^{\infty}a_{nj}\Bigg)\quad\text{for all }n\tag{3.95}$$
and
$$\beta=\lim_{n\to\infty}\Bigg(\sum_{k=0}^{\infty}\hat a_{nk}-\gamma_n\Bigg)=\lim_{n\to\infty}\Bigg(\sum_{k=0}^{\infty}\sum_{j=k}^{\infty}a_{nj}-\lim_{m\to\infty}\Bigg(\sum_{k=0}^{m}\sum_{j=m}^{\infty}a_{nj}\Bigg)\Bigg).\tag{3.96}$$
Then the identities or estimates for ‖L_A‖_χ can be read from the following table:

From:    c_0(Δ)   c(Δ)
To c_0:  1.       2.
To c:    3.       4.

where
1. (1.1), that is, the case X = c_0 of (3.79) in Part (a) of Corollary 3.8
2. (2.1), that is, (3.91) in Part (b) of Theorem 3.8
3. (3.1), that is, (3.84) with q = 1 in Example 3.3 (by Part (c) of Corollary 3.8)
4. (4.1), that is, (3.90) in Part (a) of Theorem 3.8.
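To make the Δ-case of the table above more tangible, here is a small numerical sketch (my own, not from the text, and the sample matrix a_{nk} = 2^{-(n+1)}3^{-(k+1)} is a hypothetical choice) of the quantities â_{nk} and γ_n and of the expression in (3.91), evaluated on a finite truncation.

```python
import numpy as np

N = 30
n_idx = np.arange(N)[:, None]
k_idx = np.arange(N)[None, :]
A = 2.0 ** -(n_idx + 1) * 3.0 ** -(k_idx + 1)      # hypothetical sample matrix

# a^_nk = sum_{j>=k} a_nj (row tail sums), cf. (3.83)
A_hat = np.cumsum(A[:, ::-1], axis=1)[:, ::-1]

# gamma_n ~ (m+1) * sum_{j>=m} a_nj for a large m, cf. (3.95) (truncated limit)
m = N - 1
gamma = (m + 1) * A[:, m:].sum(axis=1)

# the expression sup_{n>=r} ( sum_k |a^_nk| + |gamma_n| ) of (3.91), for increasing r
q = np.abs(A_hat).sum(axis=1) + np.abs(gamma)
print([q[r:].max() for r in range(N - 5, N)])      # tends to 0: L_A would be compact here
```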

3.4 The Class K(c)

The vital assumption in Theorem 3.6 is that the initial space has AK. Hence it cannot be applied in the case of B(c) = B(c, c). The next result gives the representation of the general operator L ∈ B(c), a formula for its norm ‖L‖, an estimate for its Hausdorff measure of noncompactness ‖L‖_χ and a formula for the limit of the sequence L(x).

Theorem 3.9 ([15, Theorem 9.9.1]) We have L ∈ B(c) if and only if there exist a sequence b ∈ ℓ∞ and a matrix A = (a_nk)_{n,k=0}^∞ ∈ (c_0, c) such that
$$L(x)=b\xi+Ax\quad\text{for all }x\in c,\ \text{where }\xi=\lim_{k\to\infty}x_k,\tag{3.97}$$
$$a_{nk}=L_n(e^{(k)}),\quad b_n=L_n(e)-\sum_{k=0}^{\infty}a_{nk}\quad\text{for all }n\text{ and }k,\tag{3.98}$$
$$\beta=\lim_{n\to\infty}\Bigg(b_n+\sum_{k=0}^{\infty}a_{nk}\Bigg)\quad\text{exists}\tag{3.99}$$
and
$$\|L\|=\sup_n\Bigg(|b_n|+\sum_{k=0}^{\infty}|a_{nk}|\Bigg).\tag{3.100}$$
Moreover, if L ∈ B(c), then we have
$$\frac12\cdot\limsup_{n\to\infty}\Bigg(\Bigg|b_n-\beta+\sum_{k=0}^{\infty}\alpha_k\Bigg|+\|\tilde A_n\|_1\Bigg)\le\|L\|_\chi\le\limsup_{n\to\infty}\Bigg(\Bigg|b_n-\beta+\sum_{k=0}^{\infty}\alpha_k\Bigg|+\|\tilde A_n\|_1\Bigg),\tag{3.101}$$
where
$$\alpha_k=\lim_{n\to\infty}a_{nk}\quad\text{for all }k\in\mathbb{N}_0\tag{3.102}$$
and Ã = (ã_nk)_{n,k=0}^∞ is the matrix with ã_nk = a_nk − α_k for all n and k; we also have for all x ∈ c
$$\eta=\lim_{n\to\infty}L_n(x)=\xi\beta+\sum_{k=0}^{\infty}\alpha_k(x_k-\xi)=\Bigg(\beta-\sum_{k=0}^{\infty}\alpha_k\Bigg)\xi+\sum_{k=0}^{\infty}\alpha_kx_k.\tag{3.103}$$
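Before the proof, a short numerical sketch of the representation (3.97) and of the limit formula (3.103); it is my own illustration, not part of the original text: the Cesàro-type matrix, the sequence b and the test sequence x below are hypothetical choices, and everything is truncated to a finite section.

```python
import numpy as np

N = 2000
n = np.arange(N)
b = 1.0 / (n + 1)                                # b in l_infty
A = np.tril(np.ones((N, N))) / (n[:, None] + 2)  # a_nk = 1/(n+2) for k <= n, so A is in (c_0, c)

alpha = np.zeros(N)                              # alpha_k = lim_n a_nk = 0, cf. (3.102)
beta = 1.0                                       # beta = lim_n (b_n + sum_k a_nk) = 1, cf. (3.99)

x = 1.0 + 1.0 / (n + 1)                          # x in c with xi = lim_k x_k = 1
xi = 1.0

L_x = b * xi + A @ x                             # L(x) = b*xi + Ax, cf. (3.97)
eta = xi * beta + np.sum(alpha * (x - xi))       # right-hand side of (3.103)

print(L_x[-1], eta)                              # the tail of L(x) approaches eta (slowly)
```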


Proof (i) First we assume that L ∈ B(c) and show that L has the given representation and satisfies (3.100). We assume L ∈ B(c). We write L n = Pn ◦ L for n = 0, 1, . . . where Pn : c → C is the n th coordinate with Pn (x) = xn for all x = (xk )∞ k=0 ∈ c. Since c is a B K space, we have L n ∈ c∗ for all n ∈ N, that is, by Example 1.11 (with χ ( f ) and a replaced by bn and An , respectively) L n (x) = bn · lim xk + k→∞

∞ 

ank xk for all x ∈ c

(3.104)

k=0

with bn and ank from (3.98); we also have by Example 1.11

L n = |bn | +

∞ 

|ank | for all n.

(3.105)

k=0

Now (3.104) yields the representation of the operator L in (3.97). Furthermore, since L(x (0) ) = Ax (0) for all x (0) ∈ c0 , we have A ∈ (c0 , c) and so ∞ 

A = sup |ank | < ∞ n

k=0

by (1.1) in Theorem 1.23 12.. Also L(e) = b + Ae by (3.97), and so L(e) ∈ c yields (3.99), and we obtain

b ∞ ≤ L(e) ∞ + A < ∞,


that is, b ∈ ∞ . Consequently, we have  sup L n = sup |bn | + n

n

∞ 

 |ank | < ∞.

k=0

Since | limk→∞ xk | ≤ supk |xk | = x ∞ for all x ∈ c, we obtain by (3.99)

L(x) ∞

  ∞      = sup bn · lim xk + ank xk  k→∞   n k=0    ∞  ≤ sup |bn | + |ank | · x ∞ = sup L n · x ∞ , n

n

k=0

hence L ≤ supn L n . We also have |L n (x)| ≤ L(x) ∞ ≤ L for all x ∈ Sc and all n, that is, supn L n ≤ L , and we have shown (3.100). This completes Part (i) of the proof. (ii) Now we show that if L has the given representation then L ∈ B(c). We assume A ∈ (c0 , c) and b ∈ ∞ and that the conditions in (3.97), (3.99) and (3.100) are satisfied. Then we obtain from A ∈ (c0 , c) by (1.1) in Theorem 1.23 12. that A < ∞, and this and b ∈ ∞ imply L < ∞ by (3.100), hence L ∈ B(c, ∞ ). Finally, let x ∈ c be given and ξ = limk→∞ xk . Then x − ξ · e ∈ c0 , so by (3.97) L n (x) = bn · ξ +

∞  k=0

 ank xk = bn +

∞ 

 ank

· ξ + An (x − ξ · e) for all n,

k=0

and it follows from (3.99) and A ∈ (c0 , c) that limn→∞ L n (x) exists. Since x ∈ c was arbitrary, we have L ∈ B(c). This completes Part (ii) of the proof. (iii) Now we show that if L ∈ B(c) then L χ satisfies the inequalities in (3.101). We assume L ∈ B(c). Let x ∈ c be given, ξ = limk→∞ xk and y = L(x). Then we have by Part (i) of the proof y = b · ξ + Ax, where A ∈ (c0 , c) and b ∈ ∞ , and we note that the limits αk in (3.102) exist for all k by (11.2) in Theorem 1.23 12., and (αk )∞ k=0 ∈ 1 by Part (a) (i) of the proof of Theorem 3.6. So we can write


 yn = bn · ξ + An x = ξ · bn +

∞ 

 + An (x − ξ · e) for all n ∈ N.

ank

k=0

(3.106) Since A ∈ (c0 , c) it follows from (3.72) in Part (a) (ii) of the proof of Theorem 3.6 that lim An (x − ξ · e) =

n→∞

∞ 

αk (xk − ξ ) =

∞ 

k=0

∞ 

αk xk − ξ

k=0

αk .

(3.107)

k=0

So we obtain by (3.106), (3.99) and (3.107)   η = lim yn = lim n→∞

ξ bn +

n→∞

 bn +

= ξ · lim

n→∞

=ξ ·β +

∞ 

∞ 



k=0

(3.108)

+ lim An (x − ξ · e)

ank

αk xk − ξ

+ An (x − ξ · e)

ank

k=0

n→∞

k=0

∞ 





∞ 

 αk = ξ β −

k=0

∞ 

 αk

+

k=0

∞ 

αk xk ,

k=0

that is, we have shown (3.103). For each m, we have Rm (y) =

∞ 

(yn − η)e(n) for y ∈ c and η = lim yn . n→∞

n=m+1

Writing

f n(m) (x) = (Rm (L(x)))n ,

we obtain for n ≥ m + 1 by (3.106) and (3.108)   f n(m) (x) = yn − η = ξ · bn + An (x) − ξ β −  = ξ · bn − β +

∞ 

 αk

+

k=0

 αk

∞ 

+

k=0

∞ 

(ank − αk )xk .

k=0

Since f n(m) ∈ c∗ , we have by Example 1.11   ∞ ∞     (m)     f  = bn − β + + α |ank − αk |,  k n   k=0

k=0

∞  k=0

 αk xk


and it follows that sup Rm (L(x)) ∞ = sup f n(m)

n≥m+1

x∈Sc

   ∞ ∞       = sup bn − β + αk  + |ank − αk | .  n≥m+1  k=0

k=0

Now the inequalities in (3.101) follow from (1.52) in Theorem 1.29, (3.58) and (3.59) in Lemma 3.1, and (1.47) in Remark 1.11. □
We obtain the following result for L ∈ B(c, c_0) as in Theorem 3.9 with β = α_k = 0 for all k and by replacing the factor 1/2 in (3.101) by 1:
Corollary 3.9 ([15, Corollary 9.9.2]) We have L ∈ B(c, c_0) if and only if there exist a sequence b ∈ ℓ∞ and a matrix A = (a_nk)_{n,k=0}^∞ ∈ (c_0, c) such that (3.97) holds, where the sequence b and the matrix A are given by (3.98), and the norm of L satisfies (3.100). Moreover, if L ∈ B(c, c_0), then we have
$$\|L\|_\chi=\limsup_{n\to\infty}\big(|b_n|+\|A_n\|_1\big).\tag{3.109}$$

Now we give the estimates or identities for the Hausdorff measures of noncompactness of bounded linear operators between the spaces c_0 and c.
Theorem 3.10 ([15, Theorem 9.9.3]) Let L ∈ B(X, Y), where X and Y are any of the spaces c_0 and c. Then the estimates or identities for the Hausdorff measure of noncompactness of the operator L can be obtained from the following table:

From:    c_0   c
To c_0:  1.    2.
To c:    3.    4.

where
1. ‖L‖_χ = limsup_{n→∞} ∑_{k=0}^∞ |a_nk|
2. ‖L‖_χ = limsup_{n→∞} (|b_n| + ∑_{k=0}^∞ |a_nk|)
3. (1/2)·limsup_{n→∞} ∑_{k=0}^∞ |a_nk − α_k| ≤ ‖L‖_χ ≤ limsup_{n→∞} ∑_{k=0}^∞ |a_nk − α_k|
4. (1/2)·limsup_{n→∞} (|b_n − β + ∑_{k=0}^∞ α_k| + ∑_{k=0}^∞ |a_nk − α_k|) ≤ ‖L‖_χ ≤ limsup_{n→∞} (|b_n − β + ∑_{k=0}^∞ α_k| + ∑_{k=0}^∞ |a_nk − α_k|).

Proof 1. and 3. The conditions follow from Theorem 3.6 (b) for 1. and (a) for 3., and from (1.47) in Remark 1.11 and the fact that ‖·‖_X^* = ‖·‖_1 for X = c_0 and X = c by Example 1.11.
2. and 4. The conditions follow from (3.109) in Corollary 3.9 and from (3.102) in Theorem 3.9 (b). □
Applying (1.54) in Theorem 1.30 we obtain the following characterizations for compact operators between the spaces c_0 and c.
Corollary 3.10 ([15, Corollary 9.9.4]) Let L ∈ B(X, Y), where X and Y are any of the spaces c_0 and c. Then the necessary and sufficient conditions for L ∈ K(X, Y) can be obtained from the following table:

From:    c_0   c
To c_0:  1.    2.
To c:    3.    4.

where
1. ‖L‖_χ = lim_{n→∞} ∑_{k=0}^∞ |a_nk| = 0
2. ‖L‖_χ = lim_{n→∞} (|b_n| + ∑_{k=0}^∞ |a_nk|) = 0
3. lim_{n→∞} ∑_{k=0}^∞ |a_nk − α_k| = 0
4. lim_{n→∞} (|b_n − β + ∑_{k=0}^∞ α_k| + ∑_{k=0}^∞ |a_nk − α_k|) = 0.


The following example will show that a regular matrix cannot be compact; this is a well-known result by Cohen and Dunford ([3]).
Example 3.6 ([15, Example 9.9.5]) Let A ∈ (c, c). It follows from the condition in 4. in Corollary 3.10 with b_n = 0 for all n and β = lim_{n→∞} ∑_{k=0}^∞ a_nk that L_A is compact if and only if
$$\lim_{n\to\infty}\Bigg(\Bigg|\sum_{k=0}^{\infty}\alpha_k-\beta\Bigg|+\sum_{k=0}^{\infty}|a_{nk}-\alpha_k|\Bigg)=0.\tag{3.110}$$
If A is regular, then α_k = 0 for k = 0, 1, ... and β = 1, and so
$$\Bigg|\sum_{k=0}^{\infty}\alpha_k-\beta\Bigg|+\sum_{k=0}^{\infty}|a_{nk}-\alpha_k|=1+\sum_{k=0}^{\infty}|a_{nk}|\ge 1\quad\text{for all }n.$$
Thus a regular matrix cannot satisfy (3.110) and consequently cannot be compact.
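A quick numerical illustration of this example — my own sketch, under the assumption that a finite section of the (regular) Cesàro matrix C_1 is representative — shows that the quantity in (3.110) stays bounded away from 0.

```python
import numpy as np

N = 200
n = np.arange(N)
A = np.tril(np.ones((N, N))) / (n[:, None] + 1)   # Cesaro matrix: a_nk = 1/(n+1) for k <= n

alpha = np.zeros(N)                               # column limits alpha_k = 0 (A is regular)
beta = 1.0                                        # lim_n sum_k a_nk = 1
quantity = np.abs(alpha.sum() - beta) + np.abs(A - alpha).sum(axis=1)

print(quantity.min())  # equals 2 here, in particular >= 1, so L_A cannot be compact
```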

3.5 Compact Operators on the Space bv+

In this section, we define and study the spaces bv_0^+ and bv^+, characterize the classes (X, Y) where X ∈ {bv_0^+, bv^+} and Y ∈ {ℓ∞, c, c_0}, establish the representations of the general operators in the classes B(bv^+, Y) where Y ∈ {c_0, c, ℓ∞}, determine the classes B(bv^+, Y), give estimates for the Hausdorff measures of noncompactness ‖L‖_χ for L ∈ B(bv^+, c_0) and L ∈ B(bv^+, c), and characterize the classes K(bv^+, c_0) and K(bv^+, c).

Definition 3.1 We define the matrix Δ^+ = (δ^+_{nk})_{n,k=0}^∞ of the forward differences by
$$\delta^+_{nk}=\begin{cases}1 & (k=n)\\ -1 & (k=n+1)\\ 0 & (k\ne n,\,n+1)\end{cases}\qquad(n=0,1,\dots)$$
and define the sets
$$bv^+=(\ell_1)_{\Delta^+}=\Bigg\{x\in\omega:\sum_{k=0}^{\infty}|x_k-x_{k+1}|<\infty\Bigg\}\quad\text{and}\quad bv_0^+=bv^+\cap c_0.$$
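A minimal numerical sketch of Definition 3.1 on a finite section follows (the truncation size and the test sequence are my own, hypothetical choices).

```python
import numpy as np

def delta_plus(N):
    """Finite section of the forward difference matrix Delta^+ of Definition 3.1."""
    D = np.eye(N)
    D -= np.eye(N, k=1)          # 1 on the diagonal, -1 on the superdiagonal
    return D

N = 1000
k = np.arange(N)
x = 1.0 + 2.0 ** -k              # x_k = 1 + 2^{-k}: summable forward differences, limit 1
y = delta_plus(N) @ x            # y_n = x_n - x_{n+1} (the last entry is only a truncation effect)

print(np.abs(y[:-1]).sum())      # bounded: x lies in bv^+, but not in bv_0^+ since lim x_k = 1
```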

The definition of bv+ is very similar to that of bv, but + is not a triangle, so we cannot apply the previous results for matrix domains of triangles. Remark 3.8 We have bv+ ⊂ c.


Proof Let x ∈ bv^+ and ε > 0 be given. Then there exists n_0 ∈ N_0 such that
$$\sum_{k=n}^{\infty}|x_k-x_{k+1}|<\varepsilon\quad\text{for all }n\ge n_0,$$
hence
$$|x_n-x_m|\le\sum_{k=n}^{m-1}|x_k-x_{k+1}|<\varepsilon\quad\text{for all }m>n\ge n_0.$$

Thus x = (xk )∞ k=0 is a Cauchy sequence of complex numbers, hence convergent.  We need the following general result. Theorem 3.11 ([20, Theorem 4.5.1]) Let (X, ( pn )) and (Y, (qn )) be F H spaces (in H ) (Definition 1.2 and the notation introduced after Theorem 2.1), X ∩ Y = {0} and Z = X + Y = {x + y : x ∈ X, y ∈ Y }. Then Z is an F H space and X and Y are closed in Z . If X and Y are B H spaces, then Z is a B H space. Proof Each z ∈ Z can be written uniquely as z = x + y, where x ∈ X and y ∈ Y . We define rn (z) = pn (z) + qn (y) for each n. (i)

First we show that (Z , (rn )) is complete. If z is a Cauchy sequence in (Z , (rn )), then x and y are Cauchy sequences in (X, ( pn )) and (Y, (qn )), respectively, hence convergent by the completeness of X and Y . So there are s ∈ X and t ∈ Y such that x → s and y → t. We put u = s + t. Then we have rn (z − u) = pn (x − s) + qn (y − t) → 0 for all n,

hence z → u in (Z , (rn )). Thus we have shown that (Z , (rn )) is complete. (ii) Now we show that (Z , (rn )) is an F H space. We assume z → 0 in (Z , ( pn )). Let z = x + y with x ∈ X and y ∈ Y . Then z → 0 implies pn (x) ≤ pn (x) + qn (y) = rn (z) for all n, hence x → 0 in (X, ( pn )) and so x → 0 in H , since X is an F H space. Similarly we obtain y → 0 in (Y, (qn )); and so z = x + y → 0 in H . Thus we have shown that (Z , (rn )) is an F H space.


(iii)


Now we show that X is closed in Z . For x ∈ X , we have rn (x) = pn (x) + qn (0) = pn (x) for all n.

Hence X has the relative topology of Z, and consequently X is closed in Z by Theorem 1.5.
(iv) The last statement is trivial. □
Example 3.7 ([20, Example 4.5.5]) Let X be an FH space (in H), y ∈ H \ X and Y = span({y}). Then we write X ⊕ y = X + Y. By Theorem 3.11, X ⊕ y is an FH space and X is a closed subspace.
Lemma 3.2 We put
$$\|x\|_{bv^+}=\Big|\lim_{k\to\infty}x_k\Big|+\sum_{k=0}^{\infty}|x_k-x_{k+1}|\quad\text{for all }x=(x_k)_{k=0}^{\infty}\in bv^+.\tag{3.111}$$
Then we have
(a) bv_0^+ is a BK space with AK;
(b) bv^+ is a BK space.
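Before the proof, here is a quick numerical check — my own sketch on a finitely truncated, hypothetical sequence — of the norm (3.111) and of the map T used in part (i) of the proof below.

```python
import numpy as np

k = np.arange(2000)
x = 3.0 ** -k                     # x in bv_0^+: it tends to 0 and its differences are summable

limit = 0.0                       # lim_k x_k
diffs = x[:-1] - x[1:]            # T x = (x_k - x_{k+1})_k, the map of part (i) of the proof
norm_bv_plus = abs(limit) + np.abs(diffs).sum()    # the norm (3.111), up to truncation

print(norm_bv_plus, np.abs(diffs).sum())           # for x in bv_0^+ the two values agree
```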

Proof We note that since bv+ ⊂ c by Remark 3.8, · bv+ in (3.111) is defined on bv+ . (i) We show that bv0+ and 1 are norm isomorphic. We define the map T on bv0+ by + ∞ y = T x = (xk − xk+1 )∞ k=0 for all x = (x k )k=0 ∈ bv0 .

Then obviously T ∈ L(bv0+ , 1 ). If T x = 0, then xk = xk+1 for all all k, and x ∈ c0 implies x = 0. This shows that T is one to one. If y ∈ 1 is given, we put x=

∞  k=n

∞ .

yk n=0

Since 1 ⊂ cs, we have x ∈ c0 ; also ∞  ∞ ∞  ∞ ∞        |xn − xn+1 | = yk − yk  = |yn | < ∞,    n=0

n=0 k=n

k=n+1

n=0


hence x ∈ c0 ∩ bv+ = bv0+ and T x = (xn − xn+1 )∞ n=0 =

∞ 

yk −

∞ 

∞ = y.

yk

n=n+1

k=n

n=0

This shows that T : bv0+ → 1 is also onto. Consequently bv0+ and 1 are isomorphic. Finally

x bv+ =

∞  k=0

|xk − xk+1 | =

∞    (T x)∞  = T x 1 for all x ∈ bv+ . k=0 0 k=0

Thus we have shown that bv0+ is a Banach space with · bv+ . (ii) Now we show that bv0+ is a B K space with AK . Let x ∈ bv0+ be given. Since x ∈ c0 and + x ∈ 1 ⊂ cs, it follows that xn =

m 

(xk − xk+1 ) + xm+1 →

k=n

∞ 

(xk − xk+1 ) (m → ∞),

k=n

hence xn ≤ x bv+ for all n, and so bv0+ is a B K space. Furthermore we have ∞    x − x [m]  + = |xm+1 | + |xk − xk+1 | → 0 (m → ∞), bv k=m+1

hence bv0+ has AK . (iii) Finally, since bv+ = bv0+ ⊕ e, bv+ is a B K space by Example 3.7.  Lemma 3.3 We have

and

(bv0+ )β = bs ([20 Theorem 7.3.5 (ii)])

(3.112)

(bv+ )β = cs ([20 Theorem 7.3.5 (iii)]).

(3.113)

Proof Let a, x ∈ ω and m ∈ N0 be given. Then we have

3.5 Compact Operators on the Space bv + m 

(xk − xk+1 )

k=0

k 

aj =

j=0

m 

xk


k 

k=0

aj −

j=0

= a0 x 0 +

m  k=1

= a0 x 0 +

m 

m 

xk+1

k=0

xk

k 

m 

xk

k=1

ak xk − xm+1

k=1

aj

j=0

aj −

j=0

k 

m 

k−1 

a j − xm+1

j=0

m 

aj

j=0

aj,

j=0

that is, m 

ak x k =

k=0

m 

(xk − xk+1 )

k=0

k−1 

a j + xm+1

j=0

(i) First we show

and

m 

a j for all m.

(3.114)

j=0

bs ⊂ (bv0+ )β

(3.115)

cs ⊂ (bv+ )β .

(3.116)

First we observe that the first term on the right-hand side of (3.114) converges as m → ∞ for all x ∈ bv+ ⊃ bv0+ and all a ∈ bs ⊃ cs, since      k−1  |xk − xk+1 |  a j  ≤ a bs · x bv+ .  j=0  k=0

∞ 

(3.117)

Furthermore, if a ∈ bs and x ∈ bv0+ ⊂ c0 , then the second term on the righthand side of (3.114) converges to 0 as m → ∞, so we have established the inclusion in (3.115); we also note that in this case ∞      ak xk  ≤ a bs · x bv+ . (3.118)    k=0

Finally, if a ∈ cs and x ∈ bv+ ⊂ c the second term on the right-hand side of (3.114) converges as m → ∞, so we have established the inclusion in (3.116). (ii) Conversely we show (3.119) (bv0+ )β ⊂ bs. Let a ∈ (bv0+ )β . Then f (x) =

∞  k=0

ak xk for all x ∈ X defines a functional f ∈

(bv0+ )∗ by Theorem 1.3. If m ∈ N0 is given, then we have for x = e[m] ∈ bv0+ that e[m] bv+ = 1 and


 m      ak  ≤ f · e[m] bv+ ≤ f < ∞,    k=0

and since m ∈ N0 was arbitrary, we obtain

a bs ≤ f < ∞,

(3.120)

hence a ∈ bs. Thus we have shown the inclusion in (3.119). (iii) If a ∈ (bv+ )β , then we obtain, since e ∈ bv+ , a = a · e ∈ cs, and we have shown (3.121) (bv+ )β ⊂ cs. Now (3.112) is an immediate consequence of (3.115) and (3.119), and (3.113) is an immediate consequence of (3.116) and (3.121).  The following result is useful for the representation of continuous linear functionals on bv+ . Lemma 3.4 Let X be a B K space and Y = b ⊕ X , where b ∈ ω \ X . Then f ∈ Y ∗ if and only if there exist c ∈ C and g ∈ X ∗ such that f (y) = λ · c + g(x) for all y = λb + x ∈ Y, where x ∈ X. Moreover

1 · (|c| + g ) ≤ f ≤ |c| + g . 2

(3.122)

(3.123)

Proof We note that Y is a B K space with

y Y = |λ| + x X for y = λ · b + x by Example 3.7. (i) We show: If there exist c ∈ C and g ∈ X ∗ such that (3.122) holds then f ∈ Y ∗ . (α)

We show that f is linear (trivial). Let μ ∈ C and y, y˜ ∈ Y . Then there are unique λ, λ˜ ∈ C and x, x˜ ∈ X such that y = λ · b + x and λ˜ · b + x. ˜ Then we have ˜ · b + μx + x˜ μy + y˜ = μ(λ · b + x) + λ˜ · b + x˜ = (μλ + λ) and it follows from (3.122) and the linearity of g that f (μy + y˜ ) = (μλ + λ˜ ) · c + g(μx + x) ˜ ˜ = μ(λ · c + g(x)) + λ · c + g(x) ˜ = μf (y) + f ( y˜ ),


(β)


that is, f is linear. We show f is continuous. Since g ∈ X ∗ , we obtain for y = λ · b + x | f (y)| = |λ · c + g(x)| ≤ |λ| · |c| + g · x X ≤ (|c| + g ) · (|λ| + x X ) = (|c| + g ) · y Y , that is,

f ≤ |c| + g .

(3.124)

Hence we have f ∈ Y ∗ . (ii)

We show that if f ∈ Y ∗ then there exist c ∈ C and g ∈ X ∗ such that (1) holds. Let f ∈ Y ∗ . We put c = f (b) and g = f | X , the restriction of f on X . Then clearly g ∈ X ∗ and we obtain for all y = λ · b + x ∈ Y f (y) = f (λ · b + x) = λ · f (b) + f (x) = λ · c + g(x),

(iii)

that is, f has the representation in (3.122). It remains to show the left-hand side of (3.123). Let ε > 0 be given. Then there exists x0 ∈ S X , where S X denotes the unit sphere in X , such that g(x0 ) ≥ g − ε. We put y0 =sgn(c) · b + g(x0 ) with c = 0 and obtain | f (y0 )| = | |c| + g(x0 )| ≥ |c| + g − ε and

y0 = | |sgn(c)| + 1| = 2. Since ε > 0 was arbitrary, we obtain

f ≥

1 · (|c| + g ) . 2 

Theorem 3.12 (a) The space (bv0+ )∗ is norm isomorphic to bs. (b) We have f ∈ bv∗ if and only if there exist c ∈ C and a sequence a = (ak )∞ k=0 ∈ bs such that f (y) = η · c +

∞ 

ak (yk − ξ ) for all y = (yk )∞ k=0 ∈ bv, where η = lim yk ;

k=0

moreover, if f ∈ bv∗ then

k→∞

(3.125)


1 · (|c| + a bs ) ≤ f ≤ |c| + a bs . 2

(3.126)

Proof (a) Since bv0+ is a B K space with AK by Part (a) of Lemma 3.2, (bv0+ )∗ and (bv0+ )β are isomorphic by Theorem 1.14, and (bv0+ )β = bs by (3.112) in Lemma 3.3.  + Furthermore if f (x) = ∞ k=0 ak x k (x ∈ bv0 ) for some sequence a ∈ bs, then it follows from (3.118) and (3.120) in the proof of Lemma 3.3 that

f = a bs .

(3.127)

We note that (bv+ , · bv+ ) is a B K space with

(b)

y bv+ = |ξ | +

∞ 

+ |yk − yk+1 | for all y = (yk )∞ k=0 ∈ bv , where η = lim yk . k→∞

k=0

and bv+ = bv0+ ⊕ e by Part (b) of Lemma 3.2. Applying Lemma 3.4 with b = e and  λ = η, we obtain by (3.122) f ∈ (bv+ )∗ if and only if, writing x = (k) ∈ bv0+ y − ηe = ∞ k=0 (yk − η)e f (y) = η · c + g(x), where c = f (e) and g ∈ (bv0+ )∗ . Furthermore we have by Part (a) that g ∈ (bv0+ )∗ if and only if there exists a sequence a ∈ bs such that g(x) =

∞ 

ak xk for all x ∈ bv0+ ,

k=0

hence we have established the representation in (3.125). Moreover, since g = a bs , we obtain the inequalities in (3.126) from (3.123).  Theorem 3.13 Let X ∈ {bv0+ , bv+ } and Y ∈ {∞ , c, c0 , 1 }. Then the necessary and sufficient conditions for A ∈ (X, Y ) can be read from the following table From To ∞ c0 c 1 where

bv0+ bv+ 1. 3. 5. 7.

2. 4. 6. 8.



    k   where (1.1) A (bv+ ,∞) = sup  an j  < ∞  n,k  j=0 ∞    where (2.1) sup  ank  < ∞ n

1. (1.1), 2. (1.1) and (2.1),

k=0

3. (1.1) and (3.1),

where (3.1) lim ank = 0 for each k n→∞ ∞  4. (1.1), (2.1) and (4.1), where (4.1) lim ank = 0 n→∞ k=0

5. (1.1) and (5.1),

where (5.1) αk = lim ank exsists for each k n→∞ ∞  6. (1.1), (2.1) and (6.1), where (6.1) α = lim ank exists n→∞ k=0     k    7. (7.1), where (7.1) A (bv+ ,1) = sup N sup  an j  < ∞   k j=0 n∈N ∞  ∞     ank  < ∞. 8. (7.1) and (8.1), where (8.1)   n=0 k=0

Moreover if A ∈ (X, Y ) for X ∈ {bv0+ , bv+ } and Y ∈ {∞ , c, c0 }, then

L A = A (bv+ ,∞)

(3.128)

A (bv+ ,1) ≤ L A = 4 · A (bv+ ,1) .

(3.129)

and if A ∈ (X, 1 ), then

Proof 1. and 7. Since bv0+ is a B K space by Part (a) in Lemma 3.2, it follows by (1.18) in Part (b) of Theorem 1.10 that A ∈ (bv0+ , ∞ ) if and only if supn An ∗bv+ < ∞, and by (1.21) in Theorem 1.13 that A ∈ (bv0+ , 1 ) if and 0 only if  sup

An ∗bv+ < ∞. N

0

n∈N

Since · ∗bv+ = · bs by (3.112) in Lemma 3.3, these conditions yield (1.1) in 0 1. and (7.1) in 7., respectively. 3. and 5. Since bv0+ has AK by Part (a) in Lemma 3.2, Theorem 1.11 yields the conditions Ae(k) ∈ c0 for each k, that is, (3.1) in 3., and Ae(k) ∈ c for each k, that is, (5.1) in 5.. 2., 4., 6. and 8. Since bv+ = bv0+ ⊕ e, Theorem 1.12 yields the conditions Ae ∈ ∞ , that is, (2.1) in 2., Ae ∈ c0 , that is, (4.1) in 4., Ae ∈ c, that is, (6.1) in 6. and Ae ∈ 1 , that is, (8.1) in 8.. Finally, the conditions in (3.128) and (3.129) follow from (1.19) in Part (b) of Theorem 1.10 and (1.22) in Remark 1.4, respectively.  Remark 3.9 It is easy to see that the conditions (1.1) in 1. and (2.1) in 2. Theorem 3.11 are equivalvent to


  ∞     sup  an j  < ∞. n,k  j=k 

(3.130)

Theorem 3.14 (a) We have L ∈ B(bv+ , ∞ ) if and only if there exists a matrix A ∈ (bv0+ , ∞ ) and a sequence b ∈ ∞ such that L(x) = b · ξ + Ax (0) for all x ∈ bv+ ,

(3.131)

where ξ = limk→∞ xk and x (0) = x − ξ · e ∈ bv0+ . (b) We have L ∈ B(bv+ , c) if and only if there exists a matrix A ∈ (bv0+ , c) and a sequence b ∈ c such that (3.131) holds. (c) We have L ∈ B(bv+ , c0 ) if and only if there exists a matrix A ∈ (bv0+ , c0 ) and a sequence b ∈ c0 with such that (3.131) holds. (d) If L ∈ B(bv+ , Y ), where Y ∈ {∞ , c, c0 }, then 1 · sup (|bn | + An bs ) ≤ L ≤ sup (|bn | + An bs ) . 2 n n

(3.132)

Proof (a)(a.i) First we assume L ∈ B(bv+ , ∞ ). Trivially L ∈ B(bv+ , ∞ ) implies L ∈ B(bv0+ , ∞ ) and since bv0+ has AK by Part (a) of Lemma 3.2, there exists a matrix A ∈ (bv0+ , ∞ ) by Part (b) of Theorem 1.10 such that L(x (0) ) = Ax (0) for all x (0) ∈ bv0+ . Now, for x ∈ bv+ , there exists x (0) ∈ bv0+ such that x = x (0) + ξ · e ∈ bv+ where ξ = limk→∞ xk and we obtain L(x) = L(x (0) + ξ · e) = L(x (0) ) + ξ · L(e), L(x (0) ) = Ax (0) ∈ ∞ and L(e) ∈ ∞ . Putting = L(e) we obtain b ∈ ∞ and (3.131) is satisfied. (a.ii) Conversely, we assume that there exist a matrix A ∈ (bv0+ , ∞ ) and a sequence b ∈ ∞ such that (3.131) holds. Clearly L maps bv+ into ∞ and is linear. Also A ∈ (bv0+ , ∞ ) implies + β + β An = (ank )∞ k=0 ∈ (bv0 ) for all n, and (bv0 ) = bs by (3.112) in Lemma 3.3. Now we obtain from (3.131) for each n L n (x) = bn · ξ + An (x (0) ) for all x = x (0) + ξ · e ∈ bv+ , and so L n ∈ (bv+ )∗ for all n by Part (b) of Theorem 3.12. (b) (b.i) First we assume L ∈ B(bv+ , c). This implies L ∈ B(bv+ , ∞ ) and by Part (a), there exist a matrix A ∈ (bv0+ , ∞ ) and a sequence b ∈ ∞ such that (3.131) holds. It follows that L(e(k) ) = Ae(k) = (ank )∞ k=0 ∈ c for each k,



hence the limits αk = lim ank exist for all k. n→∞

(3.133)

Now A ∈ (bv0+ , ∞ ) and (3.133) together imply A ∈ (bv0+ , c) by 5. in Theorem 3.11. Furthermore, we have L(e) = b ∈ c. (b.ii) Conversely, we assume that there exist a matrix A ∈ (bv0+ , c) and a sequence b ∈ c such that (3.131) holds. Then we have L ∈ B(bv+ , ∞ ) by Part (a). Let x ∈ bv+ be given. Then x = x (0) + ξ · e with x (0) ∈ bv0+ and ξ = limk→∞ xk and L(x) = bξ + Ax (0) ∈ c. (c) The proof is similar to that in Part (b). (d) In each case, L ∈ B(bv+ , Y ) implies L n ∈ (bv+ )∗ for n = 0, 1, . . . , and it follows from the right-hand side of (3.126) in Part (b) of Theorem 3.12 that

L n ≤ |bn | + An bs for each n. Since b ∈ ∞ and A ∈ (bv+ , ∞ ) implies supn An bs < ∞ by 1. in Theorem 3.11, we obtain

L(x) ∞ = sup |L n (x)| ≤ sup L n · x bv+ ≤ sup (|bn | + An bs ) · x bv+ , n

n

n

hence

L ≤ sup (|bn | + An bs ) , n

which is the second inequality in (3.132). Also by the left-hand side of (3.126) in Part (b) of Theorem 3.12, given ε > 0, for each n ∈ N0 there exists x (n) ∈ Sbv+ such that    L n (x (n) ) ≥ 1 · (|bn | + An bs ) − ε, 2 whence

L ≥

1 · (|bn | + An bs ) − ε. 2

Since ε > 0 was arbitrary, we have

L ≥

1 · (|bn | + An bs ) , 2

which is the first inequality in (3.132).  Now we are able to establish an estimate for the Hausdorff neasure of noncompactness of an arbitrary operator L ∈ B(bv+ , c). We use the notations of Theorem 3.14. Theorem 3.15 Let L ∈ B(bv+ , c). Then we have


 m     1   · lim sup |bn − β| + sup  ank − αk  ≤ L χ ≤  4 n→∞ m  k=0  m       ank − αk  , ≤ lim sup |bn − β| + sup    n→∞ m

(3.134)

k=0

where αk = lim ank for all k and β = lim bn . n→∞

(3.135)

n→∞

Proof Let L ∈ B(bv+ , c). Then L(x) ∈ c for all x ∈ bv+ , hence by (1.14) in Example 1.6 ∞  L(x) = η(x) · e + (3.136) (L m (x) − η(x)) e(m) , m=0

where η(x) = lim j→∞ L j (x). Also, by Part (b) of Theorem 3.14, there exists a matrix A ∈ (bv0+ , c) and a sequence b ∈ c such that L(x) = b · ξ + Ax (0) for all x = x 0 + ξ · e,

(3.137)

where x (0) ∈ bv0+ and ξ = limk→∞ xk . We observe that the limits β and αk (k = 0, 1, . . . ) in (3.135) exist, and b = L(e) and ank = L n (e(k) ) for all n and k. We write Q (r ) = Rr ◦ L for all r and obtain by (3.136) ∞ 

Q (r ) (x) = Rr (L(x)) =

(L m (x) − η(x))e(m) for each r.

(3.138)

m=r +1

On the other hand by (3.137), for each r there exists a sequence d (r ) = Q (r ) (e) ∈ c (r ) (r ) ) (k) ∈ (bv0+ , c) with cnk = Q (r and a matrix C (r ) = (cnk n (e ) for all n and k such that Q (r ) (x) = d (r ) · ξ + C (r ) x (0) .

(3.139)

Now it follows from (3.139) and (3.138) that d

(r )

(r )

= Q (e) =

∞ 

(L m (e) − η(e)) e

m=r +1

=

∞  m=r +1

that is,

(bm − β) e(m) ,

(m)

=

∞   m=r +1



bm − lim b j e(m) j→∞


 dn(r )

=


0 (n ≤ r ) bn − β (n ≥ r + 1),

and (r ) cnk

=

) (k) Q (r n (e )

∞ 

=

(k)

(k)

L n (e ) − η(e )



en(m)

=

m=r +1

(r ) cnk

=

(ank − αk ) en(m) ,

m=r +1



hence

∞ 

0 ank − αk

(n ≤ r ) (n ≥ r + 1).

Since Q (r ) ∈ B(bv0+ , c) for each r , we obtain by (3.132) in Part (d) of Theorem 3.14  m     1   · sup |bn − β| + sup  ank − αk  ≤ Q (r ) ≤  2 n≥r +1 m  k=0  ≤ sup

n≥r +1

 m      ank − αk  . |bn − β| + sup    m k=0

Now the inequalities in (3.134) follow from (1.52) in Theorem 1.29, (3.59) in Lemma 3.1 and (1.47) in Remark 1.11.  Now the characterization of K(bv+ , c) is an immediate consequence of (3.134) in Theorem 3.15 and (1.54) in Theorem 1.30. Corollary 3.11 Let L ∈ B(bv+ , c). Then we have L ∈ K(bv+ , c) if and only if  m      ank − αk  = 0. |bn − β| + sup   m 

 lim

n→∞

k=0

References
1. Alotaibi, A., Mursaleen, M., Alamiri, B.A.S., Mohiuddine, S.A.: Compact operators on some Fibonacci difference sequence spaces. J. Inequal. Appl. 2015(1), 1–8 (2015)
2. Altay, B., Başar, F., Mursaleen, M.: On the Euler sequence spaces which include ℓp and ℓ∞ I. Inf. Sci. 176, 1450–1462 (2006)
3. Cohen, L.W., Dunford, N.: Transformations on sequence spaces. Duke Math. J. 3(4), 689–701 (1937)
4. Demiriz, S.: Applications of measures of noncompactness to the infinite system of differential equations in bv_p spaces. Electron. J. Math. Anal. Appl. 5(1), 313–320 (2017)
5. Demiriz, S., Kara, E.E.: On compact operators on some sequence spaces related to matrix B(r, s, t). Thai J. Math. 14(3), 651–666 (2016)
6. Djolović, I., Malkowsky, E.: A note on compact operators on matrix domains. J. Math. Anal. Appl. 340, 291–303 (2008)
7. Djolović, I., Petković, K., Malkowsky, E.: Matrix mappings and general bounded linear operators on the space bv. Math. Slovaca 68(2), 405–414 (2018)
8. Ilkhan, M., Şimşek, Kara, E.E.: A new regular infinite matrix defined by Jordan totient function and its matrix domain in ℓp. Math. Methods Appl. Sci. (2020). https://doi.org/10.1002/mma.6501
9. Ilkhan, M., Kara, E.E.: A new Banach space defined by Euler totient matrix operator. Oper. Matrices 13(2), 527–544 (2019)
10. Kara, E.E., Ilkhan, M., Şimşek, N.: A study on certain sequence spaces using Jordan totient function. In: 8th International Eurasian Conference on Mathematical Sciences and Applications, Baku, Azerbaijan (2019)
11. Kara, E.E., Başarır, M.: On compact operators and some Euler B(m)-difference sequence spaces. J. Math. Anal. Appl. 379(2), 499–511 (2011)
12. Malkowsky, E.: Linear operators between some matrix domains. Rend. Circ. Mat. Palermo, Serie II, Suppl. 68, 641–655 (2002)
13. Malkowsky, E., Rakočević, V.: The measure of noncompactness of linear operators between certain sequence spaces. Acta Sci. Math. (Szeged) 64, 151–170 (1998)
14. Malkowsky, E., Rakočević, V.: On matrix domains of triangles. Appl. Math. Comput. 189, 1146–1163 (2007)
15. Malkowsky, E., Rakočević, V.: Advanced Functional Analysis. CRC Press, Taylor & Francis Group, Boca Raton, London, New York (2019)
16. Mursaleen, M., Mohiuddine, S.A.: Applications of measures of noncompactness to the infinite system of differential equations in ℓp spaces. Nonlinear Anal. 74(4), 2111–2115 (2012)
17. Mursaleen, M., Noman, A.K.: Compactness by the Hausdorff measure of noncompactness. Nonlinear Anal. 73(8), 2541–2557 (2010)
18. Başarır, M., Kara, E.E.: On compact operators on the Riesz B(m)-difference sequence space. Iran. J. Sci. Technol. Trans. A Sci. 35(4), 279–285 (2011)
19. Başarır, M., Kara, E.E.: On the B-difference sequence space derived by generalized weighted mean and compact operators. J. Math. Anal. Appl. 379(2), 499–511 (2011)
20. Wilansky, A.: Summability through Functional Analysis. North-Holland Mathematical Studies, vol. 85. North-Holland, Amsterdam (1984)
21. Zeller, K.: Faktorfolgen bei Limitierungsverfahren. Math. Z. 56, 134–151 (1952)

Chapter 4

Computations in Sequence Spaces and Applications to Statistical Convergence

In this chapter, we define elementary sets that are used to simplify some sequence spaces; in particular, we are interested in applications of the theory of infinite matrices in various topics of mathematics. Since we know the characterizations of the classes (X, Y) when X and Y are any of the sets ℓ∞, c or c0 (Theorem 1.23), it is interesting to deal with the sets s_τ, s_τ^0 and s_τ^(c) introduced in Example 2.2, where τ = (τ_n)_{n=1}^∞ is a sequence with τ_n > 0 for all n; these may also be considered as the matrix domains in ℓ∞, c and c0 of the diagonal matrix with the sequence τ on its diagonal. So we obtain characterizations of the sets (X_1, Y_1) where X_1 ∈ {s_τ, s_τ^0, s_τ^(c)} and Y_1 ∈ {s_ν, s_ν^0, s_ν^(c)}, where τ and ν are real sequences with positive terms. We also obtain simplifications of some sets of sequences, such as w_τ^p(λ), w_τ^{◦p}(λ), w_τ^{•p}(λ) and c_τ^p(λ, μ) with p > 0. These sets generalize in a certain sense the sets w_∞^p, w^p, c_0^p, c_∞^p and c^p. So we may do calculations on these sets and extend some well-known results. In particular, we consider the sum and the product of some linear spaces of sequences, and present many computations involving the sets w_0(λ) and w_∞(λ), which are generalizations of w_0 and w_∞, to obtain new results on statistical convergence. This notion was first introduced by Steinhaus in 1949 [35], and studied by several authors such as Fast [21], Fridy [24] and Connor [1], and more recently Mursaleen and Mohiuddine [33], Mursaleen and Edely [32], Patterson and Savaş [34], and de Malafosse and Rakočević [19]. Then Fridy and Orhan [23] introduced the notion of lacunary statistical convergence, which can be considered as a special case of statistical convergence. The notion of statistical convergence was also used for Hardy's Tauberian theorem for Cesàro means [25]. It was shown by Fridy and Khan [22] that the hypothesis of the convergence of the arithmetic means of the sequence x can be replaced by the weaker assumption of its statistical convergence. Here, we extend Hardy's Tauberian theorem for Cesàro means to the cases where the arithmetic means are successively replaced by the operator of weighted means N_q and the generalized arithmetic means C(λ).





4.1 On Strong τ-Summability

Let U^+ ⊂ ω be the set of all sequences u = (u_n)_{n=1}^∞ with u_n > 0 for all n. We recall that U is the set of all nonzero sequences (beginning of Sect. 2.2). Then, for a given sequence u = (u_n)_{n=1}^∞ ∈ ω, we define the infinite diagonal matrix D_u with (D_u)_{nn} = u_n for all n. Let E be any subset of ω and u ∈ U; using Wilansky's notations [36] we write
$$\Big(\frac{1}{u}\Big)^{-1}*E=D_uE=\Big\{y=(y_n)_{n=1}^{\infty}\in\omega:\frac{y}{u}=\Big(\frac{y_n}{u_n}\Big)_{n=1}^{\infty}\in E\Big\}.$$
We note that we can also write E_τ = D_τ E for τ ∈ U^+ and for any subset E of ω. We recall the definition in Example 2.2 of the following sets for any sequence τ ∈ U^+:
$$s_\tau=\Big(\frac{1}{\tau}\Big)^{-1}*\ell_\infty=\Big\{x\in\omega:\Big(\frac{x_n}{\tau_n}\Big)_{n=1}^{\infty}\in\ell_\infty\Big\},\qquad s_\tau^{0}=\Big(\frac{1}{\tau}\Big)^{-1}*c_0=\Big\{x\in\omega:\Big(\frac{x_n}{\tau_n}\Big)_{n=1}^{\infty}\in c_0\Big\}$$
and
$$s_\tau^{(c)}=\Big(\frac{1}{\tau}\Big)^{-1}*c=\Big\{x\in\omega:\Big(\frac{x_n}{\tau_n}\Big)_{n=1}^{\infty}\in c\Big\}.$$
It was also noted in Example 2.2 that each of the sets s_τ, s_τ^0 and s_τ^(c) is a BK space normed by
$$\|x\|_{s_\tau}=\sup_n\Big(\frac{|x_n|}{\tau_n}\Big)$$
and s_τ^0 has AK. Now let τ, ν ∈ U^+. We write S_{τ,ν} for the class of infinite matrices A = (a_{nk})_{n,k=1}^∞ such that
$$\|A\|_{S_{\tau,\nu}}=\sup_n\Big(\frac{1}{\nu_n}\sum_{k=1}^{\infty}|a_{nk}|\tau_k\Big)<\infty.\tag{4.1}$$
The set S_{τ,ν} is a Banach space with the norm ‖A‖_{S_{τ,ν}}. It was noted in Proposition 2.1 that A ∈ (s_τ, s_ν) if and only if A ∈ S_{τ,ν}.
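A small numerical sketch of the norm in (4.1) on a finite truncation follows; it is my own illustration, and the matrix A as well as the weight sequences τ and ν below are hypothetical choices.

```python
import numpy as np

N = 400
n = np.arange(N)
tau = 2.0 ** n                      # tau_n = 2^n
nu = 3.0 ** n                       # nu_n = 3^n
A = np.tril(np.ones((N, N)))        # a_nk = 1 for k <= n, 0 otherwise (the summation matrix)

# ||A||_{S_{tau,nu}} = sup_n (1/nu_n) * sum_k |a_nk| * tau_k, cf. (4.1), on the finite section
norm = np.max((np.abs(A) * tau[None, :]).sum(axis=1) / nu)
print(norm)   # finite (= 1 here), so this A belongs to S_{tau,nu}, i.e. A is in (s_tau, s_nu)
```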

 −1   ∞  1 xn = ∗ c = xn ∈ ω : ∈c . τ τn n=1

It was also noted in Example 2.2 that each of the sets sτ , sτ0 and sτ(c) is a B K space normed by   |xn | xsτ = sup τn n and sτ0 has AK. Now let τ, ν ∈ U + . We write Sτ,ν for the class of infinite matrices A = (ank )∞ n,k=1 such that   ∞ 1  A Sτ,ν = sup |ank |τk < ∞. (4.1) νn k=0 n The set Sτ,ν is a Banach space with the norm A Sτ,ν . It was noted in Proposition 2.1 that A ∈ (sτ , sν ) if and only if A ∈ Sτ,ν .

4.1 On Strong τ -Summability

161

So we can write (sτ , sν ) = Sτ,ν . In this way, we may state the next extension. Lemma 4.1 Let τ, ν ∈ U + and E, F ⊂ ω. Then we have  A ∈ (E τ , Fν ) if anf only if D1/ν ADτ =

1 ank τk νn

∞

∈ (E, F).

n,k=1

Now by Parts 1.–3. of Theorem 1.23, we have (s1 , s1 ) = (c0 , s1 ) = (c, s1 ), and so by Lemma 4.1 (sτ , sν ) = (sτ0 , sν ) = (sτ(c) , sν ) = Sτ,ν . When sτ = sν we obtain the Banach algebra with identity Sτ,ν = Sτ , (see [2, 3, 5, 6, 8]) normed by A Sτ = A Sτ,τ . We also have A ∈ (sτ , sτ ) of and only if A ∈ Sτ . If I − A Sτ < 1, we say that A ∈ τ . The set Sτ is a Banach algebra with identity. We have the useful result: if A ∈ τ then A is bijective from sτ into itself. If τ = 0 (c) 0 (c) (r n )∞ n=1 , we write r , Sr , sr , sr and sr for τ , Sτ , sτ , sτ and sτ , respectively [10]. 0 (c) When r = 1, we obtain s1 = ∞ , s1 = c0 , sr = c and S1 = Se . We have by 1.–3. of Theorem 1.23 (s1 , s1 ) = (c0 , s1 ) = (c, s1 ) = S1 . For any subset E of ω, we write, AE = {y ∈ ω : y = Ax for some x ∈ E}. Then the matrix domain of A in F ⊂ ω is denoted by F(A) or FA = {x ∈ ω : Ax ∈ F}. So for any given a ∈ U we have Fa = FD1/a . First, we need to recall some results. Lemma 4.2 (Parts (i) and (ii) of Proposition 1.1) Let E and F be arbitrary subsets of ω and let u ∈ U. Then we have F) for all E ⊂ E, (i) M(E, F) ⊂ M( E, for all F ⊂ F. (ii) M(E, F) ⊂ M(E, F) Lemma 4.3 ([29, Lemma 3.1, p. 648] and [31, Example 1.28, p. 157]) We have (i) M(c0 , F) = ∞ for F = c0 , or c, or ∞ ; (ii) M(∞ , G) = ∞ for G = c0 , or c; (iii) M(c, c) = c and M(∞ , c) = c0 . We deduce the next corollary from Lemma 4.1. Corollary 4.1 Let α, β ∈ U + . Then we have (i) M(sα0 , H ) = sβ/α for H = sβ0 , or sβ(c) , or sβ ; (ii) M(K , sβ ) = sβ/α if K = sα0 , sα(c) ; (c) 0 and M(sα(c) , sβ(c) ) = sβ/α . (iii) M(sα , sβ(c) ) = sβ/α

162

4 Computations in Sequence Spaces and Applications to Statistical Convergence

Proof This result follows from Lemma 4.1. Indeed, for any given sets E, F ⊂ ω, we have a ∈ M(Dα ∗ E, Dβ ∗ F) if and only if we successively have Da ∈ (Dα ∗ E, Dβ ∗ F), Daα/β ∈ (E, F) and aα/β ∈ M(E, F). 

4.2 Sum and Product of Spaces of the Form sξ , sξ0 , or sξ(c) In the following, we use the next two properties, where we write U1+ (resp. c(1)) for the set of all sequences ξ that satisfy 0 < ξn ≤ 1 for all n, (resp. that satisfy ξn → 1 (n → ∞)). For any linear space of sequences χ , we consider the conditions χ ⊂ χ (Dξ ) for all ξ ∈ U1+

(4.2)

χ ⊂ χ (Dξ ) for all ξ ∈ c(1).

(4.3)

and

Lemma 4.4 ([16, Proposition 5.1]) Let E be a linear space of sequences that satisfies condition (4.2). Then we have E α+β = E α + E β for all α, β ∈ U + .

(4.4)

Proof Let y = (α + β)z = αz + βz with z ∈ E. Then we have y ∈ E α + E β and E α+β ⊂ E α + E β . Now let y ∈ E α + E β . Then there are u, v ∈ E, such that  y = αu + βv = (α + β)

 β α u+ v , α+β α+β

since E is a linear space by the condition in (4.2) we have y ∈ E α+β and E α + E β ⊂  E α+β . So we have shown E α+β = E α + E β . Remark 4.1 The condition in (4.2) is true when E is either of the sets c0 or ∞ . 0 for α, β ∈ U + . This implies sα + sβ = sα+β and sα0 + sβ0 = sα+β Remark 4.2 We note that for any linear space of sequences E we have E α+β ⊂ E α + E β for all α, β ∈ U + . So if y ∈ E implies αy ∈ E for all y and for all α ∈ U1+ , then the condition in (4.4) is equivalent to E α + E β ⊂ E α+β . Now we state some results concerning the sum of particular sequence spaces. Theorem 4.1 ([11, Theorem 4, p. 293]) Let α, β ∈ U + . Then we have

(c)


(i) (a)

163

sα = sβ if and only if there are K 1 , K 2 > 0 such that K1 ≤

βn ≤ K 2 for all n; αn

(b) sα + sβ = sα+β = smax{α,β} ; (c) sα + sβ = sα if and only if β/α ∈ ∞ . (ii) (a) sα0 ⊂ sβ0 if and only if α/β ∈ ∞ ; (b) sα0 = sβ0 if and only if sα = sβ ; 0 (c) sα0 + sβ0 = sα+β ; 0 0 0 (d) sα + sβ = sα if and only if β/α ∈ ∞ ; (e) sα(c) ⊂ sβ(c) if and only if α/β ∈ c; (f) The condition αn /βn → l = 0 for some l ∈ C is equivalent to sα(c) = sβ(c) , and if αn /βn → l = 0, then sα = sβ , sα0 = sβ0 and sα(c) = sβ(c) . (iii) (a) (b)

(c) sα+β ⊂ sα(c) + sβ(c) ; The condition α/(α + β) ∈ c is equivalent to (c) ; sα(c) + sβ(c) = sα+β

(c) (c) The condition β/α ∈ c is equivalent to sα(c) + sβ(c) = sα+β = sα(c) ; (c) (c) 0 (iv) sα + sβ = sβ is equivalent to α/β ∈ ∞ , and the condition β/α ∈ c0 is equivalent to sα0 + sβ(c) = sα0 ; (v) (a) sα + sβ0 = sα is equivalent to β/α ∈ ∞ ; (b) sα + sβ0 = sβ0 is equivalent to α/β ∈ c0 ; (vi) (a) sα(c) + sβ = sα(c) is equivalent to β/α ∈ c0 ; (b) sα(c) + sβ = sβ is equivalent to α/β ∈ ∞ ; (c) sα(c) + sβ(c) = sα(c) is equivalent to β/α ∈ c.

Proof

(i)

(a) follows from [9, Proposition 1, p. 244]. (b) follows from Remark 4.1. Then from the inequalities max{αn , βn } ≤ αn + βn ≤ 2 max{αn , βn } for all n we obtain by (i) (a) the identity smax(α,β) = sα+β . This concludes the proof of Part (i) (b).



(c) We have sα + sβ = sα if and only if sβ ⊂ sα , that is, β/α ∈ M(s1 , s1 ) = ∞ . (ii) (a) follows from the equivalence of sα0 ⊂ sβ0 and α/β ∈ M(c0 , c0 ) = ∞ . (b) We deduce from (ii) (a) that sα0 = sβ0 is equivalent to α/β and β/α ∈ ∞ , that is, sα = sβ . (c) follows from Remark 4.1. 0 (d) The identity sα0 + sβ0 = sα0 is equivalent to sα+β = sα0 and to sα+β = sα + sβ = sα and we conclude using (i) (c). (e) We have sα(c) ⊂ sβ(c) if and only if α/β ∈ M(c, c) = c and we have shown Part (ii) (e). (f) follows from the equivalence of sα(c) = sβ(c) and α/β, β/α ∈ c, that is, αn → l (n → ∞) for some scalar l = 0. βn

(4.5)

Furthermore, the condition in (4.5) implies α/β and β/α ∈ ∞ . This means sα = sβ and we conclude by (ii) (b) and the first part of (ii) (f). (iii) (c) (a) For any given x ∈ sα+β there is ϕ ∈ c such that

x = (α + β)ϕ = αϕ + βϕ ∈ sα(c) + sβ(c) , which implies

(c) ⊂ sα(c) + sβ(c) . sα+β

(b) Necessity. Let α/(α + β) ∈ c. Then we have β/(α + β) = 1 − α/(α + (c) (c) (c) and sβ(c) ⊂ sα+β and sα(c) + sβ(c) ⊂ sα+β and we β) ∈ c. So we obtain sα(c) ⊂ sα+β conclude by (iii) (a) (c) = sα(c) + sβ(c) . sα+β (c) Sufficiency. Conversely, if sα+β = sα(c) + sβ(c) , then we have

α ∈ sα(c) + sβ(c) (c) and α ∈ sα+β which imply α/(α + β) ∈ c. This concludes the proof of (b).

(c)



(c) (c) By (ii) (f), the identity sα+β = sα(c) is equivalent to

αn + βn βn =1+ → l = 0 (n → ∞), αn αn that is, β/α ∈ c. We conclude, since the condition β/α ∈ M(c, c) = c is equivalent to sα(c) + sβ(c) = sα(c) . (iv) The condition sα0 + sβ(c) = sβ(c) is equivalent to sα0 ⊂ sβ(c) . Then the inclusion sα0 ⊂ sβ(c) is equivalent to α/β ∈ M(c0 , c) = ∞ , and we conclude sα0 + sβ(c) = sβ(c) if and only if α/β ∈ ∞ . Similarly, sα0 + sβ(c) = sα0 is equivalent to sβ(c) ⊂ sα0 , and since β/α ∈ M(c, c0 ) = c0 we conclude sα0 + sβ(c) = sα0 is equivalent to β/α ∈ c0 . (v) (a) follows from the equivalence of sα + sβ0 = sα and β/α ∈ M(c0 , ∞ ) = ∞ . 

Parts (v) (b) and (vi) can be obtained in the similar way.

As a direct consequence of the preceding results, we obtain the following corollary. Corollary 4.2 ([11, Corollary 5, p. 296]) The following conditions are equivalent: (i) β/α ∈ ∞ ; (ii) sα0 + sβ0 = sα0 ; (iii) sα + sβ = sα ; (vi) sα + sβ(c) = sα . Remark 4.3 It can easily be shown that if β = e, and α is defined by α2n = 1 and α2n+1 = n for all n, then we have sα(c) + s1 = s1 and = sα(c) . We easily deduce from the preceding results the following proposition. Proposition 4.1 ([11, Proposition 12, p. 306]) Let α, β, γ ∈ U + . Then we have

(i) Sα+β,γ = Sα,γ Sβ,γ = Smax{α,β},γ . (ii) (E + F, sγ ) = Sα+β,γ = Smax{α,β},γ for E = sα , sα0 , or sα(c) and F = sβ , sβ0 , or sβ(c) . Now we deal with some properties of the product E ∗ F of certain subsets E and F of ω. These results generalize some of those given in [9]. For any given sets of sequences E and F, we write E ∗ F = {x y = (xn yn )∞ n=1 ∈ ω : x ∈ E and y ∈ F}.

(4.6)

We state the following results without proof. Proposition 4.2 ([11, Proposition 7, p. 298]) Let α, β, γ ∈ U + . Then we have



(i) sα ∗ sβ = sα ∗ sβ(c) = sαβ ; 0 ; (ii) sα ∗ sβ0 = sα0 ∗0β = sα(c) ∗ sβ0 = sαβ (c) (c) (c) (iii) sα ∗ sβ = sαβ ; (iv) sα(c) ∗ sβ = sαβ ; (v) Let E be any of the sets sα or sα0 or sα(c) . Then E ∗ sβ0 = sγ0 if and only if there are K 1 , K 2 > 0 such that K 1 γn ≤ αn βn ≤ K 2 γn for all n; (vi) (a) (b)

sα ∗ sβ = sα ∗ sγ if and only if sβ = sγ and sα0 ∗ sβ0 = sα0 ∗ sγ0 if and only if sβ0 = sγ0 .

4.3 Properties of the Sequence C(τ)τ

Since we intend to simplify certain sets of sequences, in particular, those of τ-strongly summable sequences, we need to study the sequence C(τ)τ. First, we deal with the operators represented by C(λ) and Δ(λ). We define C(λ) for λ = (λ_n)_{n=1}^∞ ∈ U by
$$(C(\lambda))_{nk}=\begin{cases}\dfrac{1}{\lambda_n} & \text{if }k\le n\\ 0 & \text{otherwise.}\end{cases}\tag{4.7}$$
It can easily be shown that the matrix Δ(λ) defined by
$$(\Delta(\lambda))_{nk}=\begin{cases}\lambda_n & \text{if }k=n\\ -\lambda_{n-1} & \text{if }k=n-1\text{ and }n\ge 1\\ 0 & \text{otherwise}\end{cases}\tag{4.8}$$

is the inverse of the triangle C(λ), see [13]. If λ = e, we obtain the operator of the first difference represented by (e) = . In the following, we use the well-known notation  = C(e). Note that  =  −1 and ,  ∈ S R for any real R > 1. We let (λ)T = (λ)+ and C(λ)T = C(λ)+ . Then we have + (λ) = Dλ + . Now we consider the following sets defined in [6, 7] where cs is the set of all convergent + series and we write τ • = (τn−1 /τn )∞ n=1 for τ ∈ U .
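A quick finite-section check — my own sketch, not from the text, with a hypothetical choice of λ — that the triangles in (4.7) and (4.8) are mutually inverse:

```python
import numpy as np

def C(lam):
    """Finite section of the triangle C(lambda) of (4.7): 1/lambda_n for k <= n."""
    N = len(lam)
    return np.tril(np.ones((N, N))) / lam[:, None]

def Delta(lam):
    """Finite section of the triangle Delta(lambda) of (4.8)."""
    N = len(lam)
    D = np.diag(lam).astype(float)
    D[1:, :-1] -= np.diag(lam[:-1])     # -lambda_{n-1} on the subdiagonal
    return D

lam = np.arange(1.0, 11.0)              # lambda_n = n for n = 1, ..., 10
print(np.allclose(C(lam) @ Delta(lam), np.eye(10)))   # True: Delta(lambda) inverts C(lambda)
```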





 n    1 1 = τ ∈ U : τk = O(1) (n → ∞) , C τn k=1   n  ∞  1  +  τk ∈c , C = τ ∈U : τn k=1 n=1  ∞     1 + = τ ∈ U + C τk = O(1) (n → ∞) , cs : 1 τn k=n    = τ ∈ U + : lim τn• < 1 and n→∞     τn+1 1 is a real number. It can easily be seen that τ ∈ C n→∞

We obtain the following results in which we let [C(τ )τ ]n = (

n

k=1 τk )/τn .

Proposition 4.3 ([8, Proposition 2.1. p. 1656], [18]) Let τ ∈ U + . Then we have (i)

τn• → 0 if and only if [C(τ )τ ]n → 1 (n → ∞).

(ii) [C(τ )τ ]n → l if and only if τn• → 1 −

1 (n → ∞) for some scalar l > 0. l

(iii) 1 implies τn ≥ K γ n for some K > 0 and γ > 1 for all n. τ ∈C (iv)

(4.11)

1 and there exists a real b > 0 such The condition τ ∈  implies that τ ∈ C that  n 1 + b γq (τ ) for n ≥ q + 1. (4.12) [C(τ )τ ]n ≤ 1 − γq (τ )

(v) Proof

+ . We have  + ⊂ C 1 (i) We assume τn• → 0 (n → ∞). Then there is an integer N such that τn• ≤

1 for all n ≥ N + 1. 2

(4.13)



So there exists a real K > 0 such that τn ≥ K 2n for all n and τk • = τk+1 . . . τn• ≤ τn

 n−k 1 for N ≤ k ≤ n − 1. 2

(4.14)

Then we have 1 τn

 n−1  k=1

 τk

1 = τn

 N −1 

n−1 

τk

+

 N −1 



k=1

1 ≤ K 2n and since



k=1

τk

n−1  τk τ k=N n

+

(4.15)

n−1  n−k  1

k=N

2

,

(1/2)n−k = 1 − (1/2)n−N → 1 (n → ∞),

k=N

we deduce

n−1 

τk

k=1

τn and

= O(1) (n → ∞)

([C(τ )τ ]n )∞ n=1 ∈ ∞ .

From the identities τ1 + · · · + τn−1 • τn + 1 τn−1 = [C(τ )τ ]n−1 τn• + 1 for all n ≥ 1,

[C(τ )τ ]n =

(4.16)

we obtain [C(τ )τ ]n → 1 (n → ∞). This proves the necessity. Conversely, assume [C(τ )τ ]n → 1 (n → ∞). By the identities given in (4.16), we have [C(τ )τ ]n − 1 , (4.17) τn• = [C(τ )τ ]n−1 and we conclude τn• → 0 (n → ∞). ˆ (ii) The necessity is a direct consequence of the identity in (4.17) and Cˆ ⊂ . • Conversely, we assume τn → 1 − 1/l (n → ∞). Let L = 1 − 1/l and σn = n k=1 τk and note that l ≥ 1, since σn /τn = 1 + σn−1 /τn ≥ 1 for all n. By Lemma



= 4.5 we have that C  , so we can write σn /τn → l1 (n → ∞) for some scalar l1 , and we must show that l1 = l. We have for every n > 2 τn• = and

σn−1 − σn−2 σn−1 • σn−2 τn−2 • = τ − τ τn τn−1 n τn−2 τn−1 n

σn−1 − σn−2 → l1 L − l1 L 2 = L (n → ∞). τn

If L = 0 then we have l1 = 1/ (1 − L) and since L = 1 − 1/l, we conclude 

l1 =

1

1− 1−

1 l

 = l.

If L = 0 then we have l = 1 and σn σn−1 • = τ + 1 → 1 (n → ∞). τn τn−1 n (iii) For a real M > 1 we have [C(τ )τ ]n =

σn ≤ M for all n ≥ 2. σn − σn−1

So we have σn ≥ and

 σn ≥

M σn−1 M −1

M M −1

n−1 τ1 for all n.

Then from the inequalities τ1 τn



M M −1

n−1 ≤ [C(τ )τ ]n =

σn ≤ M, τn

we conclude τn ≥ K γ n for all n, with K = (M − 1)τ1 /M 2 and γ = M/(M − 1) > 1. (iv) If τ ∈  there is an integer q ≥ 1 for which



τk• ≤ χ < 1 for k ≥ q + 1 with χ = γq (τ ). So there is a real M > 0 for which τn ≥ Writing σnq = (

q

k=1 τk )/τn

M for all n ≥ q + 1. χn

(4.18)

and dn = [C(τ )τ ]n − σnq , we obtain

⎛ ⎞ n− j  n n−1 n    1 ⎝ ⎠ 1 • . dn = τk = 1 + τn−k+1 ≤ χ n− j ≤ τn k=q+1 1 − χ j=q+1 k=1 j=q+1 We obtain by the condition in (4.18) σnq

 q  1 n  ≤ χ τk . M k=1

We conclude [C(τ )τ ]n ≤ a + bχ n with 1 1 and b = a= 1−χ M

 q 

 τk .

k=1

(v) If τ ∈  + , there are χ ∈ (0, 1) and an integer q ≥ 1 such that τk /τk−1 ≤ χ for k ≥ q . Then for n ≥ q , we have 1 τn

∞  k=n

 τk

=

 ∞   τk k=n

τn

≤1+

∞ 

k−n−1  

k=n+1 i=0

τk−i τk−i−1

 ≤

∞ 

χ k−n = O(1)

k=n

(n → ∞). 

This completes the proof.

Now we state a result where we let $c^\bullet = \{\tau \in U^+ : \tau^\bullet \in c\}$.

Lemma 4.6 We have
$$\widehat{C}_1 \cap c^\bullet = \widehat{\Gamma}. \qquad(4.19)$$

Proof Since we have $\widehat{\Gamma} \subset \widehat{C}_1 \cap c^\bullet$, it is enough to show $\widehat{C}_1 \cap c^\bullet \subset \widehat{\Gamma}$. For this, we assume that there is $\tau \in \widehat{C}_1 \cap c^\bullet$ such that $\tau \notin \widehat{\Gamma}$. Then we have $\lim_{n\to\infty}\tau_n^\bullet \ge 1$. So for any given $\varepsilon > 0$, there is an integer $q > 0$ such that $\tau_n^\bullet \ge 1 - \varepsilon$ for all $n \ge q+1$ and
$$[C(\tau)\tau]_{2q} \ge \frac{1}{\tau_{2q}}\left(\sum_{k=q}^{2q}\tau_k\right) \ge \sum_{k=q}^{2q-1}\tau_{k+1}^\bullet\cdots\tau_{2q}^\bullet + 1 \ge (1-\varepsilon)^q + \cdots + (1-\varepsilon) + 1 \ge \frac{1-(1-\varepsilon)^{q+1}}{\varepsilon}.$$
Then we have
$$\frac{1-(1-\varepsilon)^{q+1}}{\varepsilon} \sim \frac{1-[1-(q+1)\varepsilon]}{\varepsilon} \sim q+1 \ (\varepsilon \to 0),$$
and $([C(\tau)\tau]_{2q})_q \notin \ell_\infty$, which implies $\tau \notin \widehat{C}_1$. This leads to a contradiction and we have shown the lemma.
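As a quick added illustration of (4.19), with a sequence of our own choosing: for $\tau_n = r^n$ and $r > 1$ we have $\tau_n^\bullet = 1/r$ for all $n \ge 2$, so $\tau \in c^\bullet$ with $\lim_n \tau_n^\bullet = 1/r < 1$, and
$$[C(\tau)\tau]_n = \frac{1}{r^n}\sum_{k=1}^n r^k = \frac{r}{r-1}\left(1 - r^{-n}\right) \le \frac{r}{r-1},$$
so $\tau \in \widehat{C}_1 \cap c^\bullet$; both sides of (4.19) indeed contain $\tau$.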

4.4 Some Properties of the Sets $s_\tau(\Delta)$, $s_\tau^0(\Delta)$ and $s_\tau^{(c)}(\Delta)$

In this section, we consider $\Delta$ as an operator from $X$ into itself, where $X$ is any of the sets $s_\tau$, $s_\tau^0$ or $s_\tau^{(c)}$. Then we obtain necessary and sufficient conditions for $\Delta \in (X,X)$ to be bijective. In this way, we have the following results.

Theorem 4.2 ([6, Theorem 2.6, p. 1789]) Let $\tau \in U^+$. Then we have
(i) $s_\tau(\Delta) = s_\tau$ if and only if $\tau \in \widehat{C}_1$;
(ii) $s_\tau^0(\Delta) = s_\tau^0$ if and only if $\tau \in \widehat{C}_1$;
(iii) $s_\tau^{(c)}(\Delta) = s_\tau^{(c)}$ if and only if $\tau \in \widehat{C}$;
(iv) $\Delta_\tau = D_{1/\tau}\Delta D_\tau$ is bijective from $c$ into itself with $\lim x = \Delta_\tau\text{-}\lim x$, if and only if $\tau_n^\bullet = \tau_{n-1}/\tau_n \to 0$.

Proof (i) We have $s_\tau(\Delta) = s_\tau$ if and only if $\Delta, \Sigma \in (s_\tau, s_\tau)$. This means that $\Delta, \Sigma \in S_\tau$, that is,
$$\|\Delta\|_{S_\tau} = \sup_n\,(1 + \tau_n^\bullet) < \infty \quad\text{and}\quad \|\Sigma\|_{S_\tau} = \sup_n\,[C(\tau)\tau]_n < \infty.$$
Since $0 < \tau_n^\bullet \le [C(\tau)\tau]_n$ for all $n$, we deduce $\Delta, \Sigma \in S_\tau$ if and only if $\|\Sigma\|_{S_\tau} < \infty$, that is, $\tau \in \widehat{C}_1$.
(ii) The condition $s_\tau^0(\Delta) = s_\tau^0$ is equivalent to


τ = D1/τ Dτ and τ = D1/τ  Dτ ∈ (c0 , c0 ). Since (τ )nk

⎧ ⎨ τk = τn ⎩0

if k ≤ n if k > n,

the condition sτ0 () = sτ0 is equivalent to sτ () = sτ and τk /τn = o(1) (n → ∞) for all k. Part (i) and by Part (iii) of Proposition 4.3, the condition sτ () = sτ 1 and τn → ∞ (n → ∞). implies τ ∈ C So we have shown Part (ii). (iii) As above we obtain sτ(c) () = sτ(c) if and only if τ and τ ∈ (c, c). We have τ ∈ (c, c) if and only if τ • = (τn−1 /τn )∞ n=1 ∈ c. Indeed, we have τ ∈ S1 and n  (τ )nk = 1 − τn• k=1

tends to a limit as n → ∞. Then we have τ ∈ (c, c) if and only if each of the next conditions is satisfied, where 1 ; τ ∈ S1 , that is, τ ∈ C lim

n→∞

and

(a)

τk = 0 for all k, τn

(b)

 τ ∈ C.

(c)

From Proposition 4.3, the condition in (c) implies limn→∞ τn = ∞, so (c) implies  implies (a) and (b). Finally, from Part (i) of Proposition 4.3, we conclude that τ ∈ C • (c) (c)  τ ∈ c. We have shown sτ () = sτ if and only if τ ∈ C and we conclude, since = C  by Parts (i) and (ii) of Proposition 4.3. This completes the proof of Part (iii). (iv) We have τ ∈ (c, c) and lim x = τ − lim x if and only if limn→∞ τn• = 0.   We conclude using Part (iii), since limn→∞ τn• = 0 implies τ ∈ C. Remark 4.5 In Part (iv) of Theorem 4.2, we saw that τ ∈ (c, c) and lim x = τ − lim x if and only if τn• → 0 (n → ∞). Indeed, we must have σnk = τk /τn = o(1) (n → ∞) for all k and lim

n→∞

 n  k=1

 σnk

 = lim

n→∞

n−1  τk 1+ τ k=1 n

 = 1;

and from Part (i) of Proposition 4.3, the previous property is satisfied if and only if τn• → 0 (n → ∞).


Remark 4.6 It can be seen that the condition $\tau^\bullet \in c$ does not imply $\tau \in \widehat{C}_1$. To see this, it is enough to notice that $C(e)e = (n)_{n=1}^\infty \notin \ell_\infty$.
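The bijectivity criterion of Theorem 4.2 (i) is easy to probe numerically. The following small Python sketch (with sample data chosen only for illustration) applies $\Delta$ and $\Sigma = \Delta^{-1}$ to a concrete sequence and measures the quantity $\sup_n |y_n|/\tau_n$.

```python
# Apply Delta and Sigma = Delta^{-1} and measure the s_tau-"size" sup_n |y_n| / tau_n.
def delta(x):
    return [x[0]] + [x[n] - x[n - 1] for n in range(1, len(x))]

def sigma(x):
    out, s = [], 0.0
    for v in x:
        s += v
        out.append(s)
    return out

def s_tau_bound(y, tau):
    return max(abs(v) / t for v, t in zip(y, tau))

N = 25
x_geo = [2.0 ** n for n in range(1, N + 1)]   # an extreme element of s_tau for tau_n = 2^n
tau_geo = x_geo
tau_e = [1.0] * N

print(s_tau_bound(sigma(x_geo), tau_geo))  # bounded (< 2): Sigma maps s_tau into itself, tau in C_1^hat
print(s_tau_bound(sigma(tau_e), tau_e))    # equals N: for tau = e, Sigma leaves s_1 = l_infty
print(s_tau_bound(delta(x_geo), tau_geo))  # Delta causes no trouble (bounded by 1 + tau^bullet)
```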



4.5 The Spaces $w_\tau(\lambda)$, $w_\tau^\circ(\lambda)$ and $w_\tau^\bullet(\lambda)$

In this section, we establish some properties of the sets wτ (λ), wτ (λ), wτ• (λ). We also consider the sum of sets of the form wτ (λ), and show that under some conditions, ◦ the spaces wτ (λ), wτ (λ) and wτ• (λ) can be written in the form sξ or sξ0 . We need to recall some definitions and properties. For every sequence x = (xn )∞ n=1 , we write and define |x| = (|xn |)∞ n=1 wτ (λ) = {x ∈ ω : C(λ)(|x|) ∈ sτ } , ! ◦ wτ (λ) = x ∈ ω : C(λ)(|x|) ∈ sτ0 ,

! ◦ wτ• (λ) = x ∈ ω : x − le ∈ wτ (λ) for some l ∈ C .

For instance, we see that
$$w_\tau(\lambda) = \left\{x = (x_n)_{n=1}^\infty \in \omega : \sup_n\left(\frac{1}{|\lambda_n|\tau_n}\sum_{k=1}^n |x_k|\right) < \infty\right\}.$$

If there exist A, B > 0, such that A ≤ τn ≤ B for all n, we obtain the well-known ◦ spaces wτ (λ) = w∞ (λ), wτ (λ) = w0 (λ) and wτ• (λ) = w(λ). It was proved that if λ is a strictly increasing sequence of reals tending to infinity the sets w0 (λ) and w∞ (λ) are B K spaces and w0 (λ) has AK with respect to the norm  n 1  x = C(λ) |x| s1 = sup |xk | . λn k=1 n 

We have the next result.

Theorem 4.3 ([7, Theorem 1, p. 18]) Let $\tau, \lambda \in U^+$. Then we have
(i) We consider the following conditions:
(a) $\lim_{n\to\infty}(\tau\lambda)_n^\bullet = 0$;
(b) $s_\tau^{(c)}(C(\lambda)) = s_{\tau\lambda}^{(c)}$;
(c) $\tau\lambda \in \widehat{C}_1$;
(d) $w_\tau(\lambda) = s_{\tau\lambda}$;
(e) $w_\tau^\circ(\lambda) = s_{\tau\lambda}^0$;
(f) $w_\tau^\bullet(\lambda) = s_{\tau\lambda}^0$.
Then (a) is equivalent to (b), (c) is equivalent to (d), and (c) implies (e) and (f).



(ii) If $\tau\lambda \in \widehat{C}_1$, then $w_\tau(\lambda)$, $w_\tau^\circ(\lambda)$ and $w_\tau^\bullet(\lambda)$ are BK spaces with respect to the norm
$$\|x\|_{s_{\tau\lambda}} = \sup_n\left(\frac{|x_n|}{\tau_n\lambda_n}\right), \qquad(4.20)$$



and wτ (λ) = wτ• (λ) has AK. Proof

(i) We have sτ(c) (C(λ) = (λ)sτ(c) = Dλ sτ(c) = sτ(c) λ.

(c) This shows sτ(c) (C(λ)) = sτ(c) is equivalent to (sτ(c) λ ) = sτ λ and the equivalence of (a) and (b) follows from Part (iii) of Theorem 4.2. Now we show the equivalence of (c) and (d). First, we assume that (c) holds. Then we have

wτ (λ) = {x : |x| ∈ (λ)sτ } . From the identity (λ) = Dλ , we obtain (λ)sτ = sτ λ . Now using (c) we see that  is bijective from sτ λ into itself and wτ (λ) = sτ λ . Conversely, assume wτ (λ) = sτ λ . Then τ λ ∈ sτ λ implies C(λ)(τ λ) ∈ sτ , and since D1/τ C(λ)(τ λ) ∈ s1 = ∞ we conclude C(τ λ)(τ λ) ∈ ∞ . The proof that (c) implies (e) follows the same lines as that of (c) implies (d) by replacing sτ λ by sτ0λ . Now we show that (c) implies (f). Let x ∈ wτ• (λ). There is l ∈ C such that C(λ)(|x − le|) ∈ sτ0 . So we have |x − le| ∈ (λ)sτ0 = sτ0λ , and from Part (ii) of Theorem 4.2, we have sτ0λ = sτ0λ . Now, since (c) holds we deduce from Part (iii) of Proposition 4.3 that τn λn → ∞ (n → ∞) and le ∈ sτ0λ . We conclude x ∈ wτ• (λ) if and only if x ∈ le + sτ0λ = sτ0λ . (ii) is a direct consequence of (i).  Now we establish some conditions to obtain wτ (λ) + wτ (μ) = wτ (λ + μ), wτ (λ) + wν (λ) = wτ +ν (λ) or wτ ν (λ · μ) = wτ (λ) ∗ wν (μ). We obtain the next result. Lemma 4.7 ([9, Proposition 4, p. 251]) Let τ , ν ∈ U + . Then we have (i) The condition sτ () = sν is equivalent to 1 . sτ = sν and τ ∈ C (ii) (a)

For any given n, we have


[C(τ + ν)(τ + ν)]n ≤ [C(τ )τ ]n + [C(ν)ν]n

(4.21)

and [C(τ ν)(τ ν)]n ≤ ([C(τ )τ ]n )([C(ν)ν]n ) for all n.

(iii)

1 . 1 , then τ + ν, τ ν ∈ C (b) If τ, ν ∈ C We assume sτ = sν . Then

(a) (b)

1 if and only if τ + ν ∈ C 1 ; τ, ν ∈ C   τ ∈ C1 if and only if ν ∈ C1 .

Proof (i) If  is bijective from sτ into sν , then we have  ∈ (sτ , sν ) and  = −1 ∈ (sν , sτ ). So (τn−1 + τn )/νn = O(1) (n → ∞), and there is a real M > 0 such that  n  νn 1  ≤ νk ≤ M for all n. τn τn k=1 Then τn = O(νn ) and νn = O(τn ) (n → ∞) and from Part (i) of Theorem 4.1, we conclude sτ = sν and C(τ )τ ∈ ∞ . The converse has been shown in Part (i) of Theorem 4.2. (ii) (a) The inequality in (4.21) is obvious, and 1 [C(τ ν)(τ ν)]n = τn νn

 n  k=1

 τk νk

 ≤

n 1  τk τn k=1



n 1  νk νn k=1



≤ ([C(τ )τ ]n )([C(ν)ν]n ) for all n. (b) is a direct consequence of (a). (iii) (a) The necessity follows from (ii) (a). Conversely, since sτ = sν there are K 1 , K 2 > 0 such that K1 ≤

τn ≤ K 2 for all n. νn

So τn + νn ≤ (K 2 + 1)νn and n  1 (τk + νk ) ≤ [C(τ + ν)(τ + ν)]n for all n. (K 2 + 1)νn k=1

From (4.22), we get 1/νn ≥ K 1 /τn and we conclude

(4.22)




K 1 [C(τ )τ ]n + [C(ν)ν]n ≤ (K 2 + 1)[C(τ + ν)(τ + ν)]n for all n and C(τ )τ , C(ν)ν ∈ ∞ . So we have shown (a). (b) Finally, since the condition in (4.22) implies K1 K2 [C(ν)ν]n ≤ [C(τ )τ ]n ≤ [C(ν)ν]n for all n, K2 K1 we obtain (b). 

This concludes the proof. We then have the next theorem. Theorem 4.4 ([9, Theorem 1, p. 257]) Let λ, μ, τ , ν ∈ U + . Then we have 1 imply (i) The conditions τ λ, τ μ ∈ C wτ (λ + μ) = wτ (λ) + wτ (μ) = sτ (λ+μ) . (ii)

(4.23)

1 imply The conditions τ λ, νλ ∈ C wτ +ν (λ) = wτ (λ) + wν (λ) = s(τ +ν)λ .

(iii)

1 imply The conditions τ λ, νμ ∈ C wτ ν (λ · μ) = wτ (λ) ∗ wν (μ) = sτ νλμ .

(iv) (a)

If τ λ, τ μ ∈  then wτ (λ + μ) = wτ (λ) + wτ (μ) = sτ (λ+μ) .

(b)

If τ λ, νμ ∈  then wτ ν (λ · μ) = wτ (λ) ∗ wν (μ) = sτ νλμ .

1 . By Part (i) of Theorem 4.3, we obtain wτ (λ) = sτ λ Proof (i) Let τ λ, τ μ ∈ C and wτ (μ) = sτ μ . So we have wτ (λ) + wτ (μ) = sτ (λ+μ) , 1 . So wτ (λ + μ) = and using Part (ii) of Lemma 4.7, we obtain τ (λ + μ) ∈ C sτ (λ+μ) and the condition in (4.23) holds. (ii) We obtain from Part (i) of Theorem 4.3 wτ (λ) + wν (λ) = sτ λ + sνλ = s(τ +ν)λ .


1 , then we have Furthermore, if τ λ, νλ ∈ C C((τ + ν)λ)((τ + ν)λ) ∈ ∞ and wτ +ν (λ) = s(τ +ν)λ = wτ (λ) + wν (λ). This completes the proof of Part (ii). 1 . It follows from Part (i) of Theorem 4.3 and Part (ii) of (iii) Let τ λ, νμ ∈ C 1 and wτ ν (λ · μ) = sτ νλμ . We conclude Lemma 4.7 that τ λνμ ∈ C wτ (λ) ∗ wν (μ) = sτ λ ∗ sνμ = sτ νλμ = wτ ν (λ · μ). (iv) (a) (iv) (b)

1 . follows from Part (i), since τ λ, τ μ ∈  together imply τ λ, τ μ ∈ C can be obtained similarly using Part (iii). 

Remark 4.7 We note that wτ (λ) ∗ wν (μ) ⊂ wτ ν (λ · μ) for all λ, μ ∈ U + . Indeed, ∞ let x = y · z ∈ wτ (λ) ∗ wν (μ) with y = (yn )∞ n=1 , z = (z n )n=1 . We obtain C(λ)(|y|) ∈ sτ and C(μ)(|z|) ∈ sν and since 1 λn μn

 n 



"

|yk z k | ≤

k=1

1 λn

 n  k=1

# "

|yk |

1 μn

 n 

#

|z k |

= τn νn O(1) (n → ∞),

k=1

we conclude x ∈ wτ ν (λμ). Using some results given in Lemma 4.7, we obtain the following results. Proposition 4.4 ([9, Proposition 6, p. 258]) Let λ, μ, τ, ν ∈ U + . Then we have (i) We assume sτ = sν . (a)

1 is equivalent to The condition τ λ, νλ ∈ C wτ +ν (λ) = wτ (λ) + wν (λ) = sτ λ .

(b)

1 is equivalent to If λn ∼ μn (n → ∞), then the condition τ λ, νμ ∈ C wτ +ν (μ) = wτ (λ) + wν (μ) = sτ λ .

1 , then If λn ∼ μn (n → ∞) and τ λ, νμ ∈ C

(ii)

wτ +ν (μ) = wτ (λ) + wν (μ) = s(τ +ν)λ . Proof

(i)

(a) The necessity was shown in Theorem 4.4. Conversely, assume wτ +ν (λ) = sτ λ . The condition sτ = sν implies sτ λ = sνλ 1 . From Part (iii) and sτ λ = s(τ +ν)λ . So wτ +ν (λ) = s(τ +ν)λ and (τ + ν)λ ∈ C  (a) of Lemma 4.7, we deduce τ λ, νλ ∈ C1 , so wτ (λ) = sτ λ , wν (λ) = sνλ and wτ +ν (μ) = wτ (λ) + wν (λ) = s(τ +ν)λ .




1 together imply (b) Necessity. The conditions τ λ, νμ ∈ C wτ (λ) + wν (μ) = sτ λ + sνμ , and since λn ∼ μn (n → ∞), we obtain sνμ = sνλ . So we have wτ (λ) + wν (μ) = wτ +ν (μ) = s(τ +ν)λ and we conclude since sτ = sν implies sτ λ = s(τ +ν)λ . Conversely, as above, the condition sτ = sν implies sτ λ = s(τ +ν)λ , and the iden1 , and using Part (ii) of Lemma tity wτ +ν (μ) = s(τ +ν)λ implies (τ + ν)λ ∈ C 1 . Furthermore, the condition λn ∼ μn as n → ∞ 4.7, we deduce τ λ, νλ ∈ C implies sλ = sμ and sνλ = sνμ . So, by Part (iii) (b) of Lemma 4.7, the condition 1 . 1 implies νμ ∈ C νλ ∈ C This concludes the proof of Part (i). 1 imply τ λ + νμ ∈ C 1 . Since sλ = sμ , we have (ii) The conditions τ λ, νμ ∈ C sτ λ = sτ μ and sτ λ+νμ = s(τ +ν)μ . So using Part (iii) of Lemma 4.7, we have τ λ + 1 holds. Finally, we obtain 1 if and only if (τ + ν)λ ∈ C νμ ∈ C wτ (λ) + wν (μ) = sτ λ+νμ = s(τ +ν)μ = sτ λ+νμ .  Remark 4.8 In Part (ii) of Proposition 4.4, we can replace the condition τ λ, νμ 1 by τ λ, νμ ∈ . ∈C

4.6 Matrix Transformations From wτ (λ) + wν (μ) into sγ We obtain the following result by Proposition 4.4. Proposition 4.5 ([9, Proposition 7, p. 262]) Let τ, ν, γ , λ, μ ∈ U + . We assume 1 . Then we have λn ∼ μn (n → ∞) and τ λ, νμ ∈ C (i) A ∈ (wτ (λ) + wν (μ), sγ ) if and only if 

∞ 1  sup |ank |(τk + νk )μk γn k=1 n

 < ∞;

(4.24)

(ii) A ∈ (c + wτ (λ) + wν (μ), sγ ) if and only if (4.24) holds. Proof

(i) By Part (ii) of Proposition 4.4, we obtain wτ (λ) + wν (μ) = s(τ +ν)λ and (wτ (λ) + wν (μ), sγ ) = S(τ +ν)λ,γ .


1 . Then the condition (ii) By Part (ii) (b) of Lemma 4.7, we have τ λ + νμ ∈ C λn ∼ μn (n → ∞) implies sτ λ = sτ μ and sτ λ+νμ = s(τ +ν)μ , and by Part (iii) (b) 1 . So there is M > 0 such that of Lemma 4.7, we obtain (τ + ν)μ ∈ C (τ1 + ν1 )μ1 ≤ [C((τ + ν)μ)((τ + ν)μ)]n ≤ M for all n, (τn + νn )μn which implies 1/(τ + ν)μ ∈ ∞ . Using Part (vi) (b) of Theorem 4.1, we deduce  c + wτ (λ) + wν (μ) = wτ (λ) + wν (μ) and we conclude applying Part (i). Example 4.1 ([9, Example 1, p. 262]) Under the hypotheses of the previous proposition it can easily be seen that (wτ (λ) + wν (μ), (c + sτ ) ∗ sν ) = S(τ +ν)λ(τ +e)ν . Indeed, from Part (iv) of Proposition 4.2, we have (c + sτ ) ∗ sν = c ∗ sν + sτ ∗ sν = sν + sτ ν and as above we obtain wτ (λ) + wν (μ) = s(τ +ν)λ . So we have shown Ax + Ay ∈ (c + sτ ) ∗ sν for all x ∈ wτ (λ) and y ∈ wν (μ) if and only if



∞  1 |ank |(τk + νk )λk sup (τn + 1)νn k=1 n

 < ∞.



4.7 On the Sets cτ (λ, μ), cτ (λ, μ) and cτ• (λ, μ) In this section, we deal with spaces that generalize the sets c0 (λ), c(λ) and c∞ (λ). ◦ We will see that under some conditions, the spaces cτ (λ, μ), cτ (λ, μ) and cτ• (λ, μ) can be written in the form sξ , sξ0 , or sξ(c) . + Let τ = (τn )∞ n=1 ∈ U and let λ ∈ U and μ ∈ ω. We consider the set cτ (λ, μ) = (wτ (λ))(μ) = {x ∈ ω : (μ)x ∈ wτ (λ)} . It is easy to see that cτ (λ, μ) = {x ∈ ω : C(λ)(|(μ)x|) ∈ sτ } , that is,  cτ (λ, μ) = x = (xn )∞ n=1

  n 1  ∈ ω : sup |μk xk − μk−1 xk−1 | < ∞ , |λn |τn k=1 n 




with the convention x−1 = 0. Similarly, we define the following sets: ! ◦ cτ (λ, μ) = x ∈ ω : C(λ)(|(μ)x|) ∈ sτ0 ,

! ◦ cτ• (λ, μ) = x ∈ ω : x − le ∈ cτ (λμ) for some l ∈ C .

We recall that for λ = μ, we write c0 (λ) = (w0 (λ))(λ) , c(λ) = {x ∈ ω : x − le ∈ c0 (λ) for some l ∈ C} , and c∞ (λ) = (w∞ (λ))(λ) , see [28, 31]. It can easily be seen that ◦

c0 (λ) = ce (λ, λ), c∞ (λ) = ce (λ, λ) and c(λ) = ce• (λ, λ). These sets of sequences are called strongly convergent to 0, strongly convergent and strongly bounded. If λ ∈ U + is a sequence strictly increasing to infinity, c(λ) is a Banach space with respect to 

xc∞ (λ)

n 1  = sup |λk xk − λk−1 xk−1 | λn k=1 n



with the convention x0 = 0. Each of the spaces c0 (λ), c(λ) and c∞ (λ) is a B K space with the previous norm (see [28]); c0 (λ) has AK and every x ∈ c(λ) has a unique representation given by ∞  x = le + (xk − l)e(k) , (4.25) k=1

where x − le ∈ c0 (λ). The number l is called the strong c(λ)-limit of the sequence x. We obtain the next result. Theorem 4.5 ([6, Theorem 4.2, p. 1798]) Let τ, λ, μ ∈ U + . (i) We consider the following properties: (a) (b)

(ii)

1 , τλ ∈ C cτ (λ, μ) = sτ μλ , ◦

(c)

cτ (λ, μ) = sτ0 λ ,

(d)

cτ• (λ, μ) = {x ∈ ω : x − le ∈ sτ0 λ for some l ∈ C}.

μ

μ

Then (a) is equivalent to (b), (a) implies (c) and (a) implies (d). 1 , then cτ (λ, μ), cτ◦ (λ, μ) and cτ• (λ, μ) are BK spaces with respect If τ λ ∈ C to the norm   |xn | ; xsτ λ = sup μn μ τn λn n


cτ (λ, μ) has AK and every x ∈ cτ• (λ, μ) has a unique representation (4.25), where x − le ∈ sτ0λ/μ . Proof (i) We show that (a) implies (b). Let x ∈ cτ (λ, μ). We have (μ)x ∈ wτ (λ), which is equivalent to x ∈ C(μ)sτ λ = D1/μ sτ λ , and by Part (i) of Theorem 4.2, each of the operators represented by  and  = −1 is bijective from sτ λ into itself. So we have sτ λ = sτ λ and x ∈ D1/μ sτ λ = sτ λ/μ . Hence (b) holds. Now we show that (b) implies (a). First we let τλ,μ = ((−1)n τn λn /μn )∞ n=1 . We have τλ,μ ∈ sτ λ/μ = cτ (λ, μ). Since τλ,μ = ((−1)n τn λn )∞ (μ) = Dμ and Dμ n=1 , we obtain |(μ) τλ,μ | = (ξn )∞ n=1 , with  λ1 τ1 ξn = λn−1 τn−1 + λn τn

if n = 1 if n ≥ 2.

We deduce |(μ) τλ,μ | ∈ sτ λ . This means Cn

1 = τn λn

From the inequality

 n  (λk−1 τk−1 + λk τk ) = O(1)(n → ∞). λ1 τ1 +



k=2

[C(τ λ)(τ λ)]n ≤ Cn for all n

we obtain (a). The proof of (a) implies (c) follows the same lines as that of (a) implies (b) with sτ replaced by sτ0 . Now we show that (a) implies (d). Let x ∈ cτ• (λ, μ). Then there exists l ∈ C such that ◦ (μ)(x − le) ∈ wτ (λ) ◦

and since (c) implies Part (e) in Theorem 4.3, we have wτ (λ) = sτ0λ . So we have x − le ∈ C(μ)sτ0λ = D μ1 sτ0λ ,




and from Theorem 4.2, we have sτ0λ = sτ0λ and D1/μ sτ0λ = sτ0λ/μ . We conclude that x ∈ cτ• (λ, μ) if and only if x ∈ le + sτ0λ/μ for some l ∈ C. (ii) is a direct consequence of (i) and of the fact that for every x ∈ cτ• (λ, μ), we have $ $   N $ $  |xn − l| $ (k) $ (xk − l)e $ = sup μn = o(1)(N → ∞). $x − le − $ $ τn λn n≥N +1 k=1 sτ

λ μ



This completes the proof. We immediately obtain the following Corollary.

Corollary 4.3 ([6, Theorem 4.3, pp. 1799–1800]) Let τ, λ, μ ∈ U + . Then we have 1 and μ ∈ ∞ , then (i) If τ λ ∈ C cτ• (λ, μ) = sτ0 λ . μ

(ii)

(4.26)

1 , and λ ∈ C 1 implies c0 (λ) = sλ0 and c∞ (λ) = sλ . We have λ ∈  implies λ ∈ C

Proof (i) Since μ ∈ ∞ , by Part (iii) of Proposition 4.3, we deduce that there are K > 0 and γ > 1 such that τn λn ≥ K γ n for all n. μn So le ∈ sτ0λ/μ and (4.26) holds. 1 . (ii) follows from Theorem 4.5, since  ⊂ C



4.8 Sets of Sequences of the Form [ A1 , A2 ] In this section, we consider spaces that generalize the sets given in Sect. 4.7 and establish necessary conditions to reduce them to the form sξ or sξ0 . In this section, we deal with the sets [A1 (λ), A2 (μ)] = {x ∈ ω : A1 (λ)(|A2 (μ)x|) ∈ sτ } , where A1 and A2 are of the form C(ξ ), C + (ξ ), (ξ ) or + (ξ ) and we give necessary conditions to obtain [A1 (λ), A2 (μ)] in the form sγ . Let λ, μ ∈ U + . Throughout this section, we write


[A1 , A2 ] = [A1 (λ), A2 (μ)] for any matrices ! ! A1 (λ) ∈ (λ), + (λ), C(λ), C + (λ) and A2 (μ) ∈ (μ), + (μ), C(μ), C + (μ)

to simplify. So we have, for instance, [C, ] = {x ∈ ω : C(λ)(|(μ)x|) ∈ sτ } = (wτ (λ))(μ) , etc. Now we consider the spaces [C, C], [C, ], [, C] and [, ]. For the reader’s convenience, we obtain the next identities, where A1 (λ) and A2 (μ) are triangles and we use the convention μ−1 = 0.  [C, C] =

 [C, ] =

 [, C] =

1 x ∈ω: λn

1 x ∈ω: λn



%   m % n % %  % % 1  xk % = τn O(1) (n → ∞) , % % % μm

m=1

 n 

k=1





|μk xk − μk−1 xk−1 | = τn O(1) (n → ∞) ,

k=1

% %  n−1 %  n % % % % 1 % 1   % % % % x ∈ ω : −λn−1 % x k % + λn % xk % = τn O(1) % % % μn−1 % μn k=1

k=1

(n → ∞)} and [, ] = {x ∈ ω : −λn−1 |μn−1 xn−1 − μn−2 xn−2 | + λn |μn xn − μn−1 xn−1 | = τn O(1) (n → ∞)} . We note that for τ = e and λ = μ, [C, ] is the well-known set of sequences that are strongly bounded, denoted c∞ (λ), see [28]. We obtain the following results from Proposition 4.3 and Theorem 4.5. Theorem 4.6 ([8, Theorem 3.1, p. 1665]) Let τ, λ, μ ∈ U + . Then we have (i) If τ λ, τ λμ ∈ , then

(ii)

If τ λ ∈ , then

[C, C] = s(τ λμ) .


[C, ] = s(τ μλ ) . (iii)

(iv)

Proof

If τ, τ μ/λ ∈ , then If τ, τ/λ ∈ , then

[, C] = s(τ μλ ) . [, ] = s( λμτ ) .

(i) For any given x we have C(λ)(|C(μ)x|) ∈ sτ if and only if C(μ)x ∈ sτ (C(λ)).

Since τ λ ∈  we have sτ (C(λ)) = s(τ λ) . So we obtain x ∈ (μ)sτ λ and the condition τ λμ ∈  implies (μ)sτ λ = s(τ λμ) . So we have shown Part (i) (ii) is a direct consequence of Theorem 4.5 (i). (iii) Similarly, we have (λ)(|C(μ)x|) ∈ sτ if and only if |C(μ)x| ∈ sτ ((λ)) = C(λ)sτ . Since τ ∈ , we obtain C(λsτ = D1/λ sτ = s(τ/λ) . So we have for any given x x ∈ (μ)s( τλ ) = s( τλμ ) . We conclude, since τ μ/λ ∈  implies s(τ μ/λ) = s(τ μ/λ) . (iv) Since τ ∈ , we have as above C(λ)sτ = s(τ/λ) . So we obtain (λ)(|(μ)x|) ∈ sτ if and only if (μ)x ∈ C(λ)sτ = s( τλ ) . Then we have x ∈ C(μ)s( τλ ) = s( λμτ ) , since τ/λ ∈ . So Part (iv) holds. This concludes the proof. Remark 4.9 We have by Theorem 4.5 1 if and only if [C, ] = s(τ λ ) . τλ ∈ C μ We obtain as direct consequence of Lemma 4.6 Corollary 4.4 Under the condition τ λ ∈ c• , we have τ λ ∈  if and only if [C, ] = s(τ μλ ) .




Remark 4.10 We obtain the same results as in Theorem 4.6 for the set ! [A1 , A2 ]0 = x ∈ ω : A1 (λ)(|A2 (μ)x|) ∈ sτ0 , replacing in each of the cases (i), (ii), (iii) and (iv) the set sξ by the set sξ0 . Now we study the sets [, + ], [, C + ], [C, + ], [+ ], [+ , C], [+ + ], [C , C], [C + , ], [C + , + ] and [C + , C + ]. We immediately obtain the following from the definitions of the operators (ξ ), + (η), C(ξ ) and C + (η): +

= {x : λn |μn (xn − xn+1 )| − λn−1 |μn−1 (xn−1 − xn )| = τn O(1) (n → ∞)} , %∞ % % ∞ %   % x % % x % i % i % % % + [, C ] = x : λn % % − λn−1 % % = τn O(1) (n → ∞) , % % μi % μi % i=n

i=n−1

+

[ , ] = {x : λn |μn xn − μn−1 xn−1 | − λn |μn+1 xn+1 − μn xn | = τn O(1) (n → ∞)} , % n % % n+1 %    1 %% %% 1 %% %% + [ , C] = x : λn xi % − xi % = τn O(1) (n → ∞) , % % % μn+1 % % μn % i=1

i=1

[+ , + ] = {x : λn μn |xn − xn+1 | − λn μn+1 |xn+1 − xn+2 | = τn O(1) (n → ∞)} , %    % ∞ k % 1  %  1 % % + [C , C] = x : xi % = τn O(1) (n → ∞) , % λk % μk i=1 % k=n %    %∞ ∞  1 %% xi %% + + [C , C ] = x : % % = τn O(1) (n → ∞) . λk % μi % k=n

i=k

The spaces [C + , ], [C + , + ] and [C, + ] will be generalized in Sect. 4.9, where we will see that [C + , ] = cτ+.1 (λ, μ), [C + , + ] = cτ+1 (λ, μ) and [C, + ] = cτ.+1 (λ, μ). ∞ + − Given any sequence ν = (νn )∞ n=1 ∈ U we write ν = (νn−1 )n=1 with the convention ν0 = 0. We state the following result, in which we use the convention τn = μn = νn = 1 for n < 0. Theorem 4.7 ([8, Theorem 3.3, pp. 1666–1667]) (i) Let τ ∈ . Then we have [, + ] = s( λμτ )− if

τ ∈ , λμ

(a)

λ ∈ . τ

(b)

[, C + ] = s(τ μλ ) if τ, (ii)

If τ λ and τ λ/μ ∈ , then


[C, + ] = s(τ μλ )− . If τ/λ ∈ , then

(iii)

[+ , ] = s( (iv)

τn−1 μn λn−1 )n

= s( μ1 ( τλ )− ) .

If τ/λ, μ(τ/λ)− = (μn τn−1 /λn−1 )n ∈ , then [+ , C] = sμ( τλ )− .

(v)

If τ/λ, (τ/λ)− μ−1 = (τn−1 (μn λn−1 )−1 )n ∈ , then [+ , + ] = s

(vi)

If 1/τ, τ λμ ∈ , then

(vii)

If 1/τ, τ λ ∈ , then

(viii)

If 1/τ, τ λ/μ ∈ , then

(

( τλ )− − μ )

= s(

τn−2 λn−2 μn−1 )n

.

[C + , C] = s(τ λμ) . [C + , ] = s(τ μλ ) .

[C + , + ] = s(τ μλ )− . (ix)

If 1/τ, 1/τ λ ∈ , then [C + , C + ] = s(τ λμ) .

Proof These results follow from the next properties. If α ∈ U + , then we have sα (+ ) = sα− () and the condition α ∈  implies sα (+ ) = sα− . Then it can easily be shown that if α ∈  + then we have sα ( + ) = sα . Indeed, the inclusion sα ( + ) ⊂ sα is immediate, since ( + )−1 = + ∈ (sα , sα ). Then by Proposition 4.3 + which implies  + ∈ (s , s ) and s ⊂ s ( + ). (v), we have α ∈ C α α α α 1 (i) (a) to

First, for any given x, the condition (λ)(|+ (μ)x|) ∈ sτ is equivalent |+ (μ)x| ∈ sτ ((λ)).

Then we have sτ ((λ)) = D1/λ sτ (), since the condition y ∈ sτ ((λ)) means Dλ y ∈ sτ and y ∈ D1/λ sτ () for all y. Finally, the condition τ ∈  implies sτ ((λ)) = s(τ/λ) , and using the identity + (μ) = Dμ + , we conclude [, + ] = s( λμτ )− if τ/λμ ∈ . So we have shown (i) (a). (i) (b) We have (λ)(|C + (μ)x|) ∈ sτ if and only if


|C + (μ)x| ∈ C(λ)sτ = D λ1 sτ . Since τ ∈ , we have sτ = sτ and D1/λ sτ = s(τ/λ) . Then, for τ/λ ∈  + , we have x ∈ [, C + ] if and only if   x ∈ s(τ/λ) C + (μ) . Since C + (μ) =  + D1/μ , we conclude [, C + ] = s(τ μλ ) . (ii) We have C(λ)(|+ (μ)|) ∈ sτ if and only if |+ (μ)x| ∈ (λ)sτ = sτ λ . Since τ λ ∈ , we have sτ λ = sτ λ and the condition C(λ)(|+ (μ)x|) ∈ sτ is equivalent to |+ (μ)x| ∈ sτ λ . Now, by the identity + (μ) = Dμ + and since τ λ/μ ∈ , the condition |+ (μ)x| ∈ sτ λ is equivalent to x ∈ s(τ λ/μ) (+ ) = s(τ λ/μ)− . This concludes the proof of Part (ii). (iii) Here, we have + (λ)(|(μ)x|) ∈ sτ if and only if |(μ)x| ∈ sτ (+ (λ)) = s( τλ )− , since τ/λ ∈ . Thus, we obtain x ∈ C(μ)s( τλ )− = D μ1 s( τλ )− = s(

τn−1 λn−1 μn

)

if (τ/λ)− ∈ , that is, τ/λ ∈ . (iv) If τ/λ ∈ , then we have + (λ)(|C(μ)x|) ∈ sτ if and only if   |C(μ)x| ∈ sτ + (λ) = s( τλ )− , that is, x ∈ (μ)s(τ/λ)− . Since μ(τ/λ)− ∈ , we conclude [+ , C] = s(μ(τ/λ)− ) . (v) We have ! [+ , + ] = x : |+ (μ)x| ∈ sτ (+ (λ)) , and since τ/λ ∈ , we obtain sτ (+ (λ)) = s( τλ )− . So the condition τ/λ ∈  implies [+ , + ] = s( τλ )− (+ (μ)). Then the condition (τ/λ)− μ−1 ∈  implies   s( τλ )− + (μ) = s

(

( τλ )− − μ )

= s(

τn−2 λn−2 μn−1 )n

.


This concludes the proof of (v). (vi) Here we have C + (λ)(|C(μ)x|) ∈ sτ if and only if |C(μ)x| ∈ sτ (C + (λ)) and since τ ∈  + , we have sτ (C + (λ)) = sτ λ . Then for τ λμ ∈ , we have x ∈ [C + , C] if and only if x ∈ (μ)sτ λ = s(τ λμ) and [C + , C] = s(τ λμ) . (vii) , (viii) and (ix) can be shown using similar arguments as those used above.  Remark 4.11 In each of the conditions in Theorem 4.7, we have sτ (A1 A2 ) = [A1 , A2 ] = (sτ (A1 )) A2 , for A1 ∈ {(λ), + (λ), C(λ), C + (λ)} and A2 ∈ {(μ), + (μ), C(μ), C + (μ)}. For instance, we have  [, C] = x :



λn λn−1 − μn μn−1

 n−1 i=1



λn xi + xn = τn O(1) (n → ∞) μn

for

τμ ∈ . λ

Under conditions similar to those given in Theorems 4.6 and 4.7, we may explicitly calculate the sets [, ] = {x : −λn−1 μn−2 xn−2 + μn−1 (λn + λn−1 )xn−1 − λn μn xn = τn O(1) (n → ∞)},   ∞  xm λn + xn + (λn − λn−1 ) = τn O(1) (n → ∞) , [, C ] = x : μn μ m=n−1 m [, + ] = {x : −λn−1 μn−1 xn−1 + (μn λn + μn−1 λn−1 )xn − λn μn xn+1 = τn O(1) (n → ∞)} and [+ , ] = {x : −λn μn+1 xn−1 + 2λn μn xn − λn μn+1 xn+1 = τn O(1) (n → ∞)} .

4.9 Extension of the Previous Results Here we study the sets X ((μ)), X (+ (μ)) for X ∈ {sτ , sτ0 , sτ(c) }, and the sets wτ (λ), ◦ ◦ p +p +p +p wτ (λ), wτ (λ), wτ (λ) and wτ (λ) for p > 0. First, we deal with the sets X ((μ)), X (+ (μ)) for X ∈ {sτ , sτ0 , sτ(c) }, and the ◦ ◦ p p +p +p sets wτ (λ), wτ (λ), wτ (λ) and wτ (λ). We use some properties of + (λ) for λ ∈ U. We see that for any given τ ∈ U + , the condition p

λ•n = O(1) (n → ∞) τn•


implies

+ (λ) = Dλ + ∈ (s(τ/|λ|) , sτ ).

To state some new results we need the following lemmas. Lemma 4.8 If + is bijective from sτ into itself, then τ ∈ cs.  Proof We assume τ ∈ / cs, that is, ∞ n=1 τn = ∞. We are led to deal with two cases.

+ + 1- e ∈ Ker sτ . Then  is not bijective from sτ into itself. / s1 and there is a strictly increasing sequence of 2- e ∈ / Ker+ sτ . Then 1/τ ∈ ∞ such that 1/τni → ∞ (i → ∞). We assume that the equation integers (n i )i=1 + x = τ has a solution x = (xn,0 )∞ n=1 in sτ . Then there is a unique scalar x 0 such that n−1  xn,0 = x0 − τk . k=1

So we obtain %  % n i −1 % |xni ,0 | %% 1 % =% τk % → ∞ (i → ∞), x0 − % % τni τni k=1 and x ∈ / sτ . So we are led to a contradiction.



We conclude that each of the properties e ∈ Ker+ sτ and e ∈ / Ker+ sτ is not  satisfied and + is not bijective from sτ into itself. This completes the proof. We also need to state the following elementary result. Lemma 4.9 We have  + (+ x) = x for all x ∈ c0 and + ( + x) = x for all x ∈ cs. By Lemmas 4.8 and 4.9 we obtain the next proposition, (cf. [8, Theorem 2.7, p. 1659]). Theorem 4.8 Let τ ∈ U + . The operator + ∈ (sτ , sτ ) is bijective if and only if + . τ ∈C 1 Now let  !  wτp (λ) = x ∈ ω : C(λ) |x| p ∈ sτ  !  ◦ wτ p (λ) = x ∈ ω : C(λ) |x| p ∈ sτ0  !  wτ+ p (λ) = x ∈ ω : C + (λ) |x| p ∈ sτ  !  ◦ wτ+ p (λ) = x ∈ ω : C + (λ) |x| p ∈ sτ0 , see [7]. We obtain the following theorem from [7, Theorem 2, pp. 19–20].

(4.27) (4.28) (4.29) (4.30)


Theorem 4.9 Let τ ∈ U + , λ, μ ∈ U and p > 0. Then we have (i) (a) (b)

1 ; sτ ((μ)) = s(τ/|μ|) if and only if τ ∈ C 0 0 1 ; sτ ((μ)) = s(τ/|μ|) if and only if τ ∈ C (c) sτ(c) ((μ)) = s(τ/|μ|) if and only if τ ∈  ;

(c) 1 . (d) sτ (+ (μ)) = s(τ/|μ|)− if and only if τ/|μ| ∈ C + ; (ii) (a) sτ ( + ) = sτ if and only if τ ∈ C 1 + ; 0 + 0 (b) sτ ( ) = sτ if and only if τ ∈ C 1 + if and only if wτ+ p (λ) = s ; (c) τ ∈ C 1 (τ |λ|)1/ p ◦ + p +  , then wτ (λ) = s 0 (d) if τ ∈ C ; 1

(e) (f) Proof

(τ |λ|)1/ p

1 if and only if wτ (λ) = s 1/ p . τ |λ| ∈ C (τ |λ|) ◦ 1 , then wτ p (λ) = s 0 If τ |λ| ∈ C . p

1/ p (τ |λ|)

(i)

(a), (b), (c) can be shown as in Theorem 4.2. (d) We write ν = τ/|μ| and we show sτ (+ (μ)) = sν − ().

(4.31)

We have x ∈ sτ (+ (μ)) if and only if Dμ + x ∈ sτ , that is, + x ∈ sν . But the condition + x ∈ sν means x ∈ sν − . We conclude that (4.31) holds. Now by 1 , that is, ν ∈ C 1 . Theorem 4.2, we obtain sν − () = sν − if and only if ν − ∈ C This concludes the proof of Part (i) (d). (ii) (a), (b) follow from the equivalence of each of the inclusions sτ ⊂ sτ ( + ), or + . sτ0 ⊂ sτ0 ( + ) and the condition τ ∈ C 1 + . Since C + (λ) =  + D , we have (c) Let τ ∈ C 1/λ 1     wτ+ p (λ) = x ∈ ω : ( + D λ1 )(|x| p ) ∈ sτ = x : D λ1 (|x| p ) ∈ sτ ( + ) ; + implies s ( + ) = s , we conclude and since τ ∈ C τ τ 1 ! wτ+ p (λ) = x ∈ ω : |x| p ∈ Dλ sτ = sτ |λ| = s

1 (τ |λ|) p

+p

Conversely, we have (τ |λ|)1/ p ∈ s(τ |λ|)1/ p = wτ (λ). So we have ∞ ∞ 'p &  τk |λk | 1 ∈ sτ , C (λ) (τ |λ|) p = |λk | k=n +

n=1

.


+ and we have shown Part (ii) (c). that is, τ ∈ C 1 (d) We obtain Part (ii) (d) using similar arguments as above. (e), (f) We only need to show Part (ii) (e), since the proof of Part (ii) (f) is similar. 1 . Then we have We assume τ |λ| ∈ C ! ◦ wτ p (λ) = x ∈ ω : |x| p ∈ (λ)sτ0 . Since (λ) = Dλ , we obtain (λ)sτ0 = sτ0|λ| . Now, Part (i) (b), the condition ◦ 1 implies that  is bijective from sτ0|λ| into itself and wτ p (λ) = s 0 1/ p . τ |λ| ∈ C (τ |λ|) This concludes the proof.  As a direct consequence of Theorem 4.9 we obtain the following results. Corollary 4.5 ([7, Corollary 1, p. 23]) Let r > 0 be any real. Then the next statements are equivalent, where (i) r > 1, (ii) sr () = sr , (iii) sr0 () = sr0 and (iv) sr (+ ) = sr . Proof Indeed, we see from Parts (i) and (ii) of Theorem 4.9 that it is enough to n ∞   show that τ = (r n )∞ n=1 ∈ C 1 if and only if r > 1. We have (r )n=1 ∈ C 1 if and only if r = 1 and r

−n

 n  k=1

 r

k

=

r 1 −n+1 r = O(1) (n → ∞). − 1−r 1−r

This implies r > 1 and concludes the proof.



4.10 Sets of Sequences that are Strongly τ-Bounded With Index p

Now we consider the spaces $c_\tau^{p}(\lambda,\mu)$, $c_\tau^{\cdot+p}(\lambda,\mu)$, $c_\tau^{+\cdot p}(\lambda,\mu)$ and $c_\tau^{+p}(\lambda,\mu)$ that generalize the sets $[C,\Delta]$, $[C,\Delta^+]$, $[C^+,\Delta]$ and $[C^+,\Delta^+]$ studied in the previous section. These sets also generalize the well-known spaces of sequences $c_\infty^{p}(\lambda)$ and $c_0^{p}(\lambda)$ that are strongly bounded and strongly convergent to naught with index $p$. They also generalize the sets $c_\infty(\lambda,\mu)$ and $c_0(\lambda,\mu)$ that are strongly τ-bounded and convergent to naught, which we studied in the previous section. For given real $p > 0$ and $\lambda, \mu \in U$, we write
$$c_\tau^{p}(\lambda,\mu) = \left(w_\tau^{p}(\lambda)\right)_{\Delta(\mu)} = \left\{x : C(\lambda)\left(|\Delta(\mu)x|^p\right) \in s_\tau\right\},$$
$$c_\tau^{\cdot+p}(\lambda,\mu) = \left(w_\tau^{p}(\lambda)\right)_{\Delta^+(\mu)} = \left\{x : C(\lambda)\left(|\Delta^+(\mu)x|^p\right) \in s_\tau\right\},$$
$$c_\tau^{+\cdot p}(\lambda,\mu) = \left(w_\tau^{+p}(\lambda)\right)_{\Delta(\mu)} = \left\{x : C^+(\lambda)\left(|\Delta(\mu)x|^p\right) \in s_\tau\right\},$$
$$c_\tau^{+p}(\lambda,\mu) = \left(w_\tau^{+p}(\lambda)\right)_{\Delta^+(\mu)} = \left\{x : C^+(\lambda)\left(|\Delta^+(\mu)x|^p\right) \in s_\tau\right\}. \qquad(4.32)$$
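The defining condition of $c_\tau^{p}(\lambda,\mu)$, namely $\sup_n \frac{1}{\tau_n\lambda_n}\sum_{k=1}^n |\mu_kx_k - \mu_{k-1}x_{k-1}|^p < \infty$, is easy to test on finitely many terms. The following Python sketch is only a heuristic finite-section check with parameters chosen for illustration.

```python
# Finite-section test of the defining condition of c_tau^p(lambda, mu):
#   sup_n (1 / (tau_n * lambda_n)) * sum_{k<=n} |mu_k x_k - mu_{k-1} x_{k-1}|^p  <  infinity.
# Only finitely many terms are inspected, so this is a heuristic check, not a proof.

def c_tau_p_bound(x, tau, lam, mu, p):
    best, partial, prev = 0.0, 0.0, 0.0   # prev = mu_0 x_0 = 0 by the usual convention
    for xk, tk, lk, mk in zip(x, tau, lam, mu):
        partial += abs(mk * xk - prev) ** p
        prev = mk * xk
        best = max(best, partial / (tk * lk))
    return best

N = 40
tau = [1.0] * N                              # tau = e
lam = [float(n) for n in range(1, N + 1)]
mu = [1.0] * N
x = [(-1.0) ** n for n in range(1, N + 1)]   # bounded oscillation: |Delta x_k| = 2

print(c_tau_p_bound(x, tau, lam, mu, p=1))   # about 2: x lies in c_tau^1(lambda, mu) for these parameters
```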


.+ p p When sτ is replaced by sτ0 in the previous definitions, we write c τ (λ, μ),  cτ (λ, μ), . +. p +p p +p +. p +p  c (λ, μ) and c( (λ, μ), instead of c (λ, μ), c (λ, μ), c (λ, μ) and c (λ, μ).

τ

τ

τ

τ

τ

τ

For instance, it can easily be seen that  cτp (λ, μ)

= x=

"

(xn )∞ n=1

 .

cτ+ p (λ, μ)

= x = (xn )∞ n=1 

. cτ+ p (λ, μ)

= x = (xn )∞ n=1

 +p ( cτ (λ, μ) = x = (xn )∞ n=1

 n #   1 p : sup |μk xk − μk−1 xk−1 | 0 1 lim n→∞ n

%⎧ %⎨ % % k≤n %⎩ %

⎫% % ⎛ ⎞% % ∞ % i ⎬%% % k 1  % 0 % % ⎝ : %2 x j ⎠% ≥ ε %% = 0 for all x ∈ W3/2 . i ⎭% % i=1 3 % j=1

It is enough to apply Part (ii) of Theorem 4.14 with τk = 2−k , μk = 3k and λk = 1 for all k. We also have the next example. Example 4.5 By Part (iii) of Theorem 4.14 with λk = μk = k and τk = 2−k , the condition n 1  k |xk | 2 2 =0 lim n→∞ n k k=1 implies that for each ε > 0 1 lim n→∞ n

%⎧ %⎨ % % k≤n %⎩ %

% ⎫% ⎛ ⎞% %  % ∞ ⎬%% % k ∞ 1  % x j % % % ⎝ ⎠ : %2 % ≥ ε⎭% = 0. % i=k i j=i j % %

4.14 Tauberian Theorems for Weighted Means Operators

We start from results on Hardy's Tauberian theorem for Cesàro means. This was formulated as follows: if the sequence $x = (x_n)_{n=1}^\infty$ satisfies $\lim_{n\to\infty}C_1x = L$ and $x_n - x_{n-1} = O(1/n)$, then $\lim_{n\to\infty}x_n = L$. It was shown by Fridy and Khan [22] that the hypothesis $\lim_{n\to\infty}C_1x = L$ can be replaced by the weaker assumption of the existence of the statistical limit $st\text{-}\lim C_1x = L$, that is, for every $\varepsilon > 0$
$$\lim_{n\to\infty}\frac{1}{n}\left|\left\{k \le n : |[C_1x]_k - L| \ge \varepsilon\right\}\right| = 0.$$

Here it is our aim to show that Hardy's Tauberian theorem for Cesàro means can be extended to the cases when $C_1$ is successively replaced by the operator of weighted means $\overline{N}_q$ defined in Definition 2.2 and by $C(\lambda)$. In this way we show in Theorem 4.15 that, under some conditions, if $x = (x_n)_{n=1}^\infty$ satisfies $\lim_{n\to\infty}\overline{N}_qx = L_1$ and $\lim_{n\to\infty}Q_n(q_nx_n - q_{n-1}x_{n-1}) = L_2$, then
$$\lim_{n\to\infty}x_n = L_1. \qquad(4.57)$$
Similarly, in Theorem 4.16 we show that, under some other conditions, (4.57) holds for sequences $x$ that satisfy the conditions
$$\lim_{n\to\infty}\overline{N}_qx = L_1 \quad\text{and}\quad \lim_{n\to\infty}\frac{Q_n}{q_n}(x_n - x_{n-1}) = L_2.$$

Then in Proposition 4.8, we give an extension of Hardy’s Tauberian theorem, where C1 is replaced by C(λ) and we determine sequences μ for which x is convergent when the sequences C(λ)x and (μn xn )∞ n=1 are convergent. We recall the characterization of (c, c), which we use in all that follows. Lemma 4.14 (Part 13. of Theorem 1.23) We have A = (ank )∞ n,k=1 ∈ (c, c) if and only if (i) A ∈ S1 ;  (ii) limn→∞ ∞ k=1 ank = l for some l ∈ C; (iii) limn→∞ ank = lk for some lk ∈ C and for all k ≥ 1. We recall that a matrixA = (ank )∞ n,k=1 ∈ (c, c) is said to be regular if x n → l (n → ∞) implies An x = ∞ k=1 ank x k is convergent for all n and converges to the same limit. We write xn → l implies An x → l (n → ∞). We recall that A is regular if and only if A satisfies the condition in (i) of Lemma 4.14 and limn→∞ An e = 1 and limn→∞ ank = 0 for all k ≥ 1. ∞ nLet q = (qn )n=1 be a positive sequence, Q be the sequence defined by Q n = q for all n ≥ 1. The operator of weighted means N q is defined by the matrix k=1 k N q of Definition 2.2. It can easily be seen that N q = D1/Q  Dq . In all that follows, we write xn = 0 for any term of sequence with negative subscript. Now we will give two versions of Tauberian theorems concerning the operator of weighted means N q . Then we deal with the operator C(λ). Now we give a first version of a Tauberian theorem for N q . Theorem 4.15 ([12, Theorem 2.1, pp. 3–4]) (i) The following statements are equivalent: Q/q ∈ ∞ , for any given sequence (xn )∞ n=1 , we have
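Before the theorem, a brief numerical sketch of $\overline{N}_qx = \left(\frac{1}{Q_n}\sum_{k=1}^n q_kx_k\right)_n$ may help; the choice $q_n = 2^n$ (so that $Q/q$ is bounded, as required in part (i) below) and the test sequences are ours, picked only for illustration.

```python
# Weighted means N_q with q_n = 2^n, so Q_n / q_n = 2 - 2^(1-n) stays bounded.
# A convergent input gives weighted means converging to the same limit; an
# oscillating input keeps oscillating, illustrating the equivalence in part (i).

def weighted_means(x, q):
    out, num, Q = [], 0.0, 0.0
    for xk, qk in zip(x, q):
        num += qk * xk
        Q += qk
        out.append(num / Q)
    return out

N = 30
q = [2.0 ** n for n in range(1, N + 1)]
x_convergent = [1.0 + (-1.0) ** n / n for n in range(1, N + 1)]
x_oscillating = [(-1.0) ** n for n in range(1, N + 1)]

print(weighted_means(x_convergent, q)[-1])    # close to 1
print(weighted_means(x_oscillating, q)[-3:])  # alternates near -1/3 and 1/3, no limit
```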

(a) (b)

lim

n→∞

q1 x 1 + · · · + qn x n = L 1 if and only if lim xn = L 1 n→∞ Qn

for some L 1 ∈ C. (ii)

We assume

n 1  k =L n→∞ nqn Qk k=1

lim

and lim

n→∞

Qn = L = 0 nqn

(4.58)

(4.59)

for some scalars L and L . Then, for any given sequence (xn )∞ n=1 , the conditions

216

4 Computations in Sequence Spaces and Applications to Statistical Convergence

lim

n→∞

q1 x 1 + · · · + qn x n = L 1 and lim Q n (qn xn − qn−1 xn−1 ) = L 2 n→∞ Qn

for some L 1 , L 2 ∈ C imply together limn→∞ xn = L 1 . Proof

−1

−1

(i) We have N q = (D1/Q  Dq )−1 = D1/q D Q , that is, [N q ]n,n−1 = −1

−Q n−1 /qn , [N q ]nn = Q n /qn for all n ≥ 1 (with the convention Q 0 = 0) and −1

[N q ]nn = 0 otherwise. Since Q is increasing and Q/q ∈ ∞ , we have $ $ $ −1 $ $N q $

 S1

Q n + Q n−1 qn

= sup n

 ≤ 2 sup n

Qn < ∞. qn −1

Then limn→∞ (Q n − Q n−1 )/qn = 1 and we conclude N q is regular. This shows −1

the condition in (a) holds if and only if N q is regular. So (a) means that, for any y = (yn )∞ n=1 , the condition yn = [N q x]n → L 1 implies −1

xn = [N q y]n → L 1 (n → ∞). Since N q is regular, we conclude xn → L 1 implies yn → L 1 (n → ∞). So we have shown Part (i). ∞ ∞ (ii) Let x = (xn )∞ n=1 ∈ ω and y = (yn )n=1 = N q x. Writing z = (z n )n=1 = ∞ (Q n (qn xn − qn−1 xn−1 ))n=1 , we easily see that z = D Q Dq x.

(4.60)

We have (D Q Dq )−1 = D1/q  D1/Q and by (4.60) we obtain x = (D Q Dq )−1 z = D1/q  D1/Q z. Then y = N q x = N q D1/q  D1/Q z = D1/Q  Dq D1/q  D1/Q z = D1/Q  2 D1/Q z and the infinite matrix  2 is the triangle defined by [ 2 ]nk = n + 1 − k for k ≤ n and [ 2 ]nk = 0 otherwise. So we easily obtain n n n 1 n+1−k n + 1  zk 1  k zk = − . yn = Q n k=1 Qk Q n k=1 Q k Q n k=1 Q k z k

Since

n   1  zk , xn = D1/q  D1/Q z n = qn k=1 Q k

we obtain yn =

n n+1 1  k qn x n − zk . Qn Q n k=1 Q k

Now we consider the triangle ⎞



. ⎜. =⎜ Q ⎜ ⎝.

.

0 ⎟ ⎟ k 1 ⎟. . ⎠ (n + 1)qn Q k . . . .

The condition in (4.59) implies 1/nqn ∼ L /Q n (n → ∞) and since Q n is increasing we have 0 < lim Q n ≤ ∞ n→∞

 and (1/nqn )∞ n=1 ∈ c. So, for each fixed k, the sequence [ Q]nk tends to a limit as  ∈ (c, c) and, since n tends to infinity. This and the condition in (4.58) imply Q z ∈ c, we have n  k 1 z k → l (n → ∞) for some l ∈ C. (n + 1)qn k=1 Q k

Using (4.59) we deduce that if yn → L 1 and z n → L 2 (n → ∞), then we have xn =

n  k Qn 1 yn + zk → L L 1 + l (n + 1)qn (n + 1)qn k=1 Q k

and x ∈ c. Now since N q is regular and yn = [N q x]n , we have yn → L 1 = L L 1 + l (n → ∞). We conclude xn → L 1 (n → ∞).



As a consequence of Theorem 4.15 (i), we obtain the next result. Corollary 4.10 ([12, Corollary 2.2, p. 5]) Let x = (xn )∞ n=1 be any given sequence. The condition (4.61) [N q x]n → L implies xn → L (n → ∞)


for some L ∈ C implies there are γ > 1 and K > 0 such that qn ≥ K γ n for all n. −1

Proof The condition in (4.61) implies that N q is regular, that is, Q n + Q n−1 = O(1) (n → ∞). qn  1 , where C 1 = {x : (( nk=1 xk )/xn )∞ and Q/q ∈ ∞ . Then q ∈ C n=1 ∈ ∞ } was defined in (4.9), (cf. [6]). We conclude by Part (iii) of Proposition 4.3 (see also [7, Proposition 2.1, p. 1786]).  As a direct consequence of Part (ii) of Theorem 4.15, we obtain the next corollary. Corollary 4.11 ([12, Corollary 2.3, p. 6]) Let α ≥ 0 and let (xn )∞ n=1 be a sequence that satisfy x 1 + 2α x 2 + · + n α x n → L 1 and n  kα

 n 

 k

α

(n α xn − (n − 1)α xn−1 ) → L 2

k=1

k=1

for some L 1 , L 2 ∈ C. Then xn → L 1 (n → ∞). Proof If α = 0 then the conditions in (4.58) and (4.59) are trivially satisfied. Now we let qn = n α with α > 0 and α = 1. We obtain

1

2n

n α+1

n 

kα 2n+1 Qn 1 k=1 x dx ≤ = α+1 ≤ α+1 x α d x, nqn n n α

0

1

3n 3 n+1 and since the sequences (1/n α+1 ) 0 x α d x and (1/n α+1 ) 1 x α d x tend to the same limit 1/(α + 1) as n tends to infinity, we conclude limn→∞ Q n /nqn = 1/(α + 1) and (4.59) holds. Now we deal with the condition in (4.58). For this, we note that for every k ≥ 2 k k α+1 ≤ k . = 3 Qk kα α x dx 0

Then


n n   k 1 ≤ 1 + (α + 1) Qk kα k=1 k=2

2n ≤ 1 + (α + 1)

dx xα

1

 α + 1  1−α n ≤1+ −1 . 1−α Thus

  n 1 1 1  k 1+α 1 ≤ − α+1 + α+1 α+1 2α n Qk 1−α n n n k=1

and

n 1  k → 0 (n → ∞). n α+1 k=1 Q k

We conclude applying Theorem 4.15. For α = 1, we obtain 2n n n 1  k 2 2 2 dx 1  ≤ 2 = 2 log (n + 1) = 2 nqn k=1 Q k n k=1 (k + 1) n x +1 n 0

and

n 1  k → 0 (n → ∞). nqn k=1 Q k

Since (4.59) trivially holds with L = 1/2, we can apply Theorem 4.15 and conclude xn → L 1 (n → ∞). This completes the proof.  We immediately deduce the following corollary from the previous proof. Corollary 4.12 Let (xn )∞ n=1 be any sequence. If x1 + 2x2 + · · · + nxn → L 1 and n 2 [nxn − (n − 1)xn−1 ] → L 2 , n2 then xn → 2L 1 (n → ∞). Now we consider another statement of a Tauberian theorem, where the conditions in (4.58) and (4.59) in Theorem 4.15 are replaced by the convergence of n 1  qk Q k−1 Q n k=2 Q k


and the condition on Q n (qn xn − qn−1 xn−1 ) is replaced by a similar condition on another sequence defined by Q n (xn − xn−1 )/qn . Theorem 4.16 ([12, Theorem 2.5, p. 7]) We assume lim

n→∞

n 1  Q k−1 qk =L Q n k=2 Qk

(4.62)

for some scalar L. For any given sequence (xn )∞ n=1 , the conditions lim

n→∞

q1 x 1 + · · · + qn x n Qn = L 1 and lim (xn − xn−1 ) = L 2 n→∞ qn Qn

(4.63)

for some L 1 , L 2 ∈ C imply limn→∞ xn = L 1 . Proof We let

y = (yn )∞ n=1 = N q x

(4.64)

and z = D Q/q x. Then we have x =  Dq/Q z

(4.65)

and y = N q  Dq/Q z = D1/Q  Dq  Dq/Q z. Since [ Dq ]nk =

⎧n ⎨ q

i

for k ≤ n

i=k



0

otherwise,

we obtain  n  n 1   qk yn = qi zk Q n k=1 i=k Qk =

n 1  qk (Q n − Q k−1 ) zk Q n k=1 Qk

=

n n  qk 1  Q k−1 zk − qk z k . Qk Q n k=1 Q k k=1

Using (4.65) we deduce yn = xn −

n 1  Q k−1 qk z k Q n k=1 Q k


and xn = yn +


n 1  Q k−1 qk z k . Q n k=1 Q k

with ( Q) nk = qk Q k−1 /Q n Q k for 2 ≤ k ≤ n and ( Q) nk = Now consider the matrix Q nk 0 otherwise. Since Q is increasing, we have 1/Q ∈ c and for each fixed k, ( Q) tends to a limit as n tends to infinity. This result and the conditions in (4.62) imply ∈ (c, c). Now we consider the sequence w defined by Q wn =

n 1  Q k−1 qk z k . Q n k=1 Q k

The conditions given in (4.63) mean that yn → L 1 and z n → L 2 (n → ∞) and since ∈ (c, c), we have Q n → L 1 + l for some l ∈ C. xn = yn + wn = yn + [ Q] To complete the proof we need to show l = 0. For this it is enough to notice that, since N q is regular, the condition xn → L 1 + l implies yn = [N q x]n → L 1 + l = L 1 (n → ∞) and xn → L 1 (n → ∞). This concludes the proof.



This result leads to the next corollary. Corollary 4.13 Let (xn )∞ n=1 be a sequence with 1 n→∞ log n lim



1 1 x1 + x2 + · · · + xn 2 n

 = L 1 and lim n log n(xn − xn−1 ) = L 2 . n→∞

Then limn→∞ xn = L 1 . Proof We have qn = 1/n for all n, Q n = un =

n

k=1 (1/k)

and

n n 1  Q k−1 1 1 − σn qk = Q n k=2 Qk Q n k=2 k

with σn =

n 1  1 . Q n k=2 k 2 Q k


 Since Q n tends to infinity as n tends to infinity and σn ≤ (1/Q n ) nk=2 (1/k 2 ), we have lim σn = 0. Then u n tends to 1 as n tends to infinity and the condition in (4.62) n→∞ of Theorem 4.16 is satisfied. Finally, since Q n ∼ log n, we have Q n /qn ∼ n log n (n → ∞) and we conclude by Theorem 4.16.  Remark 4.16 We note that neither of Theorems 4.15 and 4.16 implies the other. Indeed, we consider the case when qn = n for all n. Then the sequence x = e satisfies Theorem 4.16, since (4.62) is satisfied with L = 1 and (Q n /qn )(xn − xn−1 ) = 0 for all n, but n2 (n → ∞), Q n (qn xn − qn−1 xn−1 ) ∼ 2 so Q n (qn xn − qn−1 xn−1 ) → ∞ (n → ∞) and Theorem 4.15 cannot hold. Furthermore, in the case when qn = 1/n, we have seen in Corollary 4.13 that the condition (4.62) in Theorem 4.16 is satisfied, but (4.59) in Theorem 4.15 is not satisfied.

4.15 The Operator C(λ) Now we consider the case when N q is replaced by C(λ) and we obtain results that extend some of those given in the previous sections. By similar arguments as those used in Theorems 4.15 and 4.16, we can state the next proposition, where N q is replaced by C(λ). We will see that in the case of λ = μ the sequence λ plays the role of Q with q = e. Proposition 4.8 ([12, Proposition 2.8, p. 9]) Let λ, μ ∈ U + and n 1 k =L n→∞ n μk k=1

lim

and lim

n→∞

(4.66)

λn = L n

for some scalars L and L . Then for any given sequence (xn )∞ n=1 the conditions x1 + · · · + xn → l and μn (xn − xn−1 ) → l (n → ∞) λn for some l, l ∈ C together imply that (xn )∞ n=1 is convergent and xn → L l (n → ∞).

(4.67)


Proof We let yn = (x1 + · · · + xn )/λn and z n = μn (xn − xn−1 ). Then we have y = C(λ)x, z = Dμ x and y = C(λ) D1/μ z. Since C(λ) = D1/λ  and (Dμ )−1 =  D1/μ , we obtain y = D1/λ  2 D1/μ z and x =  D1/μ z. As we have seen in the proof of Theorem 4.16, we obtain by the calculation of D1/λ  2 D1/μ n 1 n−k+1 yn = zk . λn k=1 μk Similarly, we have

n    zk . xn =  D1/μ z n = μ k k=1

So, we successively have n n+1 1  k xn − zk yn = λn λn k=1 μk

and

λn 1  k yn + zk . n+1 n + 1 k=1 μk

(4.68)

n

xn =

(4.69)

Now let  be the triangle defined by nk = k/((n + 1)μk ) for 2 ≤ k ≤ n and nk = 0 for k = 1 or k > n and for all n and k. By (4.66), we have  ∈ (c, c). Now the conditions in (4.67) mean that yn → l and z n → l (n → ∞) and since  ∈ (c, c), we deduce xn =

λn yn + [z]n → L l + χ (n → ∞) for some χ ∈ C. n+1

Now we are led to deal with the cases L = 0 and L = 0. 1- We assume L = 0. Here xn → L l + χ = χ and x1 + · · · + xn → χ. n Since n/λn → ∞ and x1 + · · · + xn x1 + · · · + xn n = → l (n → ∞), λn n λn we conclude χ = 0 and xn → 0 = L l (n → ∞).


2- We assume L = 0. Here, since xn → L l + χ (n → ∞), we also have x1 + · · · + xn → L l + χ (n → ∞). n Then x1 + · · · + xn x1 + · · · + xn n L l + χ = → = l (n → ∞). λn n λn L So χ = 0 and xn → L l (n → ∞). This concludes the proof.



We deduce the following result. Corollary 4.14 Let x = (xn )∞ n=1 be a sequence with (x1 + · · · + xn )/n → l and n(xn − xn−1 ) → l (n → ∞) for some l, l ∈ C. Then we have (i) xn → l (n → ∞), (ii) l = 0. Proof The condition in (4.66) is trivially satisfied and L = L = 1. Since /L =  defined in the proof of Proposition 4.8 is regular, we have here l = χ . As we have just seen we successively obtain xn → l + χ (n → ∞), l + χ = l + l = l and l = 0.  Remark 4.17 It can be seen that Proposition 4.8 is an extension of Hardy’s Tauberian theorem. For this, we show that there is μ ∈ U + with (n/μn )n ∈ U + \∞ such that (4.66) holds. We take, for instance, μn = 2i /i when n = 2i for i ≥ 1 and μn = n 3 i i otherwise.
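As a quick added check of Proposition 4.8 with toy data of our own choosing: take $\lambda_n = 2n$ and $\mu_n = n$, so (4.66) holds with $\lim_n \frac{1}{n}\sum_{k=1}^n \frac{k}{\mu_k} = 1$ and $\lim_n \frac{\lambda_n}{n} = 2 \ne 0$. For $x = e$ we have $\frac{x_1+\cdots+x_n}{\lambda_n} = \frac{n}{2n} \to \frac12 = l$ and $\mu_n(x_n - x_{n-1}) = 0 \to 0 = l'$, and indeed $x_n \to 1 = \left(\lim_n \frac{\lambda_n}{n}\right)l$, as the proposition asserts.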

Let n be any given integer and In = {2 : 2 ≤ n}. Using the notation In = In [1, n] we successively have sn =

n 1 k 1 k 1 k + = n k=1 μk n k∈I μk n μk n



1 n

 k∈{i:2i ≤n}

1 1 k+ . n k2

Putting N = max{i : 2i ≤ n} and S = sn ≤

k∈In

k∈In

∞ k=1

1/k 2 we deduce

S 1 N (N + 1) + . n 2 n


Since 2 N ≤ n, we obtain sn ≤

1 2 N +1

N (N + 1) +

S . n

Finally, from the definition of N and the inequality N ≤ log n/ log 2, we obtain N = E(log n/ log 2). So N tends to infinity as n tends to infinity and sn tends to zero. Since we have n μn (xn − xn−1 ) n(xn − xn−1 ) = μn / ∞ , the condition μn (xn − xn−1 ) → l (n → ∞) does not imply and (n/μn )∞ n=1 ∈ n(xn − xn−1 ) = O(1) (n → ∞) and we have shown that Proposition 4.8 is an extension of Hardy’s Tauberian theorem. More precisely we can state the following result when C(λ)x ∈ c0 , where we use the same notations as in the proof of Proposition 4.8. Proposition 4.9 ([12, Proposition 2.12, p. 11]) Let λ, μ ∈ U + and 

n 1 k sup n k=1 μk n

and sup n

 0. In [15], we determined the set of all positive sequences x for which the (SSIE) (sx(c) ) B(r,s) ⊂ (sx(c) ) B(r  ,s  ) holds, where r, r  , s  and s are real numbers, and B(r, s) is the generalized operator of the first difference defined by (B(r, s)y)n = r yn + syn−1 for all n ≥ 2 and (B(r, s)y)1 = r y1 . In this way we determined the set of all positive sequences x for which (r yn + syn−1 )/xn → l implies (r  yn + s  yn−1 )/xn → l (n → ∞) for all sequences y and some scalar l. In this chapter, we recall some results stated [8–12, 14]. So we study the sequences spaces inclusions (SSIE) of the form F ⊂ E a + Fx in each of the cases e ∈ F and e∈ / F.

5.1 Introduction In this section, we deal with the (SSIE) F ⊂ E a + Fx , where E, F and F  are sequence spaces with e ∈ F, and a is a positive sequence. We obtain the solvability of these (SSIE)’s for a = (r n )∞ n=1 . We consider the (SSIE) F ⊂ E a + Fx as a perturbed inclusion equation of the elementary inclusion equation F ⊂ Fx . In this way it is interesting to determine the set of all positive sequences a for which the elementary and the perturbed inclusions equations have the same solutions. Then writing, as usual, Dr for the diagonal matrix with (Dr )nn = r n , we study the solvability of the (SSIE) c ⊂ Dr ∗ E  + cx with E = c0 or s1 , where  is operator of the first difference. Then we consider the (SSIE) c ⊂ Dr ∗ E C1 + sx(c) with E = c0 , c or s1 , and s1 ⊂ Dr ∗ (s1 )C1 + sx with E = c or s1 , where C1 is the Cesàro operator of order 1. For instance, the (SSIE) s1 ⊂ Dr ∗ cC1 + sx is associated with the next statement: The condition supn |yn | < ∞ implies that there are sequences  u, v ∈ ω such that y = u + v for which n −1 nk=1 u k r −k → l and supn (|vn |/xn ) < ∞ for some scalar l and for all y. For the reader’s convenience, we list the characterizations of the classes (c0 , c0 ), (c0 , c), (c, c0 ), (c, c), (s1 , c) and ( p , F), where F = c0 , c or ∞ . Lemma 5.1 We have (i) (Theorem 1.23 7) A ∈ (c0 , c0 ) if and only if

A (∞ ,∞ ) = sup n

holds and

∞  k=1

|ank | < ∞

(5.1)


lim ank = 0 for all k;

n→∞

(5.2)

(ii) (Theorem 1.23 12) A ∈ (c0 , c) if and only if (5.1) holds and lim ank = lk for all k and some scalar lk ;

n→∞

(5.3)

(iii) (Theorem 1.23 8) A ∈ (c, c0 ) if and only if (5.1) and (5.2) hold and lim

∞ 

n→∞

ank = 0;

k=1

(iv) (Theorem 1.23 13) A ∈ (c, c) if and only if (5.1) and (5.3) hold, and lim

n→∞

∞ 

ank = l for some scalar l;

(5.4)

k=1

(v) (Part (iii) of Remark 1.6) A ∈ (s1 , c) if and only if (5.3) holds and lim

n→∞

∞  k=1

|ank | =

∞ 

|lk |.

(5.5)

k=1

Lemma 5.2 Let p ≥ 1. We write

A ( p ,∞ )

⎧ ⎪ |ank | ⎪ ⎨sup n,k ∞ 1/q =  ⎪ q ⎪ sup |a | ⎩ nk n

for p = 1 for 1 < p < ∞ and q = p/( p − 1).

k=1

Then we have (i) (Theorem 1.23 4 and 5) A ∈ ( p , ∞ ) if and only if

A ( p ,∞ ) < ∞;

(5.6)

(ii) (Theorem 1.23 9 and 10) A ∈ ( p , c0 ) if and only if the conditions in (5.6) and (5.2) are satisfied; (iii) (Theorem1.23 14 and 15) A ∈ ( p , c) if and only if the conditions in (5.6) and (5.3) are satisfied. We also use the well-known properties, stated as follows. Lemma 5.3 (Proposition 2.1) Let a, b ∈ U + and let E, F ⊂ ω be any linear spaces. We have A ∈ (E a , Fb ) if and only if D1/b ADa ∈ (E, F).


Lemma 5.4 ([4, Lemma 9, p. 45]) Let T  and T  be any given triangles and let E, F ⊂ ω. Then, for any given operator T represented by a triangle, we have T ∈  (E T  , FT  ) if and only if T  T T −1 ∈ (E, F). Now we recall that for λ ∈ U, the infinite matrices C(λ) and (λ) defined in (4.7) and (4.8) are triangles, and (λ) is the inverse of C(λ), that is, C(λ)((λ)) = (λ)(C(λ)y) = y for all y ∈ ω (for the use of these matrices, see, for instance, [17, 22]). If λ = e, then we obtain the well-known operator of the first difference represented by (e) = . We then have n y = yn − yn−1 for all n ≥ 1, with the convention y0 = 0. Usually the notation  = C(e) is used and then we may write C(λ) = D1/λ . We note that  =  −1 . The Cesàro operator is defined by C1 = 0 C((n)∞ n=1 ). We also use the sets Wa and Wa of sequences that are a-strongly bounded and a-strongly convergent to zero defined for a ∈ U + in (4.44) and (4.45) (cf. [8, 16]). It can easily be seen that Wa = {y ∈ ω : C1 D1/a |y| ∈ s1 }. If a = (r n )∞ n=1 the sets Wa and Wa0 are denoted by Wr and Wr0 . For r = 1 we obtain the well-known sets



n 1 |yk | < ∞ w∞ = y ∈ ω : y w∞ = sup n k=1 n and

w0 =

y ∈ ω : lim

n→∞



n 1 |yk | = 0 n k=1

called the spaces of sequences that are strongly bounded and strongly summable to zero sequences by the Cesàro method of order 1. We will use Lemmas 4.1 and 4.2 on multipliers. The α- and β-duals of a set of sequences E are defined as E α = M(E, 1 ) and E β = M(E, cs), respectively, where cs = c is the set of all convergent series. Now we recall some results that are direct consequence of [20, Theorem 2.4], where ∞    (an )∞  = 2ν max |ak |. (5.7) n=1 M 2ν ≤k≤2ν+1 −1

ν=1

∞ Writing An = (ank )∞ k=1 for the sequence in the n-th row of the matrix A = (ank )n.k=1 obtain the following result:

Lemma 5.5 ([20]) (i) We have (w0 , ∞ ) = (w∞ , ∞ ) and A ∈ (w∞ , ∞ ) if and only if sup ( An M ) = sup n

n

∞  ν=1

2

ν

max

2ν ≤k≤2ν+1 −1

|ank | < ∞;

(5.8)


(ii) A ∈ (w∞ , c0 ) if and only if lim An M = lim

n→∞

n→∞

∞  ν=1

2

ν

max

2ν ≤k≤2ν+1 −1

|ank | = 0;

(iii) A ∈ (w0 , c0 ) if and only if (5.8) holds and lim ank = 0 for all k.

n→∞

In the following we use the results stated below. Lemma 5.6 Let p ≥ 1. We have (i) (ii) (iii) (iv)

M(c, c0 ) = c0 ; M(c, ∞ ) = ∞ ; M(c0 ,  p ) = M(c,  p ) = M(∞ ,  p ) =  p ; M( p , F) = ∞ for F ∈ {c0 , c, s1 ,  p }.

Proof (i), (ii), (iv) with F ∈ {c0 , c, ∞ } follow from [21, Lemma 3.1, p. 648], Lemma 5.2 and [22, Example 1.28, p. 157]. In Part (iv), it remains to show M( p ,  p ) = ∞ . We have ∞ ⊂ M( p ,  p ), since for any given sequence a ∈ ∞ , we have ∞ ∞   |ak yk | p ≤ sup |ak | p |yk | p < ∞ for all y ∈  p . k=1

k

k=1

Then we have by Lemma 4.2 M( p ,  p ) ⊂ M( p , ∞ ) = ∞ and we have shown M( p ,  p ) = ∞ . (iii) Since a ∈ M(E, F) if and only if Da ∈ (E, F), we apply Theorem 1.23 to obtain M(E,  p ) =  p for E ∈ {c0 , c, ∞ } immediately from 16, 17, 18 for p = 1 and 21, 22, 23 for 1 < p < ∞.  Now we determine the multiplier M(E, F) involving the sets w0 and w∞ . Lemma 5.7 We have (i) (ii) (iii) (iv) (v)
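The block quantity introduced in (5.7) is easy to evaluate numerically, and it is the tool behind Lemmas 5.5 and 5.7 below. The following Python sketch (truncation length and sample sequences chosen only for illustration) computes $\|a\|_{\mathcal M} = \sum_\nu 2^\nu \max_{2^\nu \le k \le 2^{\nu+1}-1}|a_k|$ for two truncated sequences.

```python
# The block quantity from (5.7): ||a||_M = sum_nu 2^nu * max_{2^nu <= k <= 2^(nu+1)-1} |a_k|,
# evaluated on a truncation, so the printed values only indicate convergence or divergence.

def block_norm(a):
    total, nu = 0.0, 0
    while 2 ** nu <= len(a):
        lo, hi = 2 ** nu, min(2 ** (nu + 1) - 1, len(a))
        total += 2 ** nu * max(abs(a[k - 1]) for k in range(lo, hi + 1))
        nu += 1
    return total

N = 2 ** 12
print(block_norm([1.0 / k for k in range(1, N + 1)]))       # roughly one unit per dyadic block: grows with N
print(block_norm([1.0 / k ** 2 for k in range(1, N + 1)]))  # about 2: stays bounded as N grows
```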

for F = c0 , c, ∞ ; M(w0 , F) = M(w∞ , ∞ ) = s(1/n)∞ n=1 0 M(w∞ , c0 ) = M(w∞ , c) = s(1/n) ∞ ; n=1 0 M(1 , w∞ ) = s(n)∞ and M(1 , w0 ) = s(n) ∞ ; n=1 n=1 M(E, w0 ) = w0 for E = s1 , c; M(E, w∞ ) = w∞ for E = c0 , s1 , c.

Proof (i) First we show M(w∞ , ∞ ) = s(1/n)∞ . n=1 We have a ∈ M(w∞ , ∞ ) if and only if Da = (dnk )∞ n,k=1 ∈ (w∞ , ∞ ). Now we apply formula (5.8). We have dnk = an for k = n and ank = 0 for k = n, and, for any given integer n, let νn denote the uniquely determined integer for which 2νn ≤


n ≤ 2νn +1 − 1. Then (n + 1)/2 ≤ 2νn ≤ n, and, by Lemma Part (i) of Lemma 5.5, we have Da ∈ (w∞ , ∞ ) if and only if σn =

∞  ν=0

Since



max

2ν ≤m≤2ν+1 −1

|dnk | = 2νn |an | = O(1) (n → ∞).

n+1 |an | ≤ 2νn |an | ≤ n|an for all n, 2

we conclude supn σn < ∞ if and only if supn (n|an |) < ∞ and M(w∞ , ∞ ) = . Then we have by Part (iii) of Lemma 5.5 M(w0 , c0 ) = s(1/n)∞ . Finally s(1/n)∞ n=1 n=1 we have = M(w0 , c0 ) ⊂ M(w0 , c) ⊂ M(w0 , ∞ ) ⊂ M(w∞ , ∞ ) = s(1/n)∞ . s(1/n)∞ n=1 n=1 Thus we have shown Part (i). (ii) can be shown as above by the use of Part (ii) of Lemma 5.5. (iii) By [23, Theorem 1], Da ∈ (1 , w∞ ) if and only if supn (|an |/n) < ∞ which . Again by [23, Theorem 1], we have Da ∈ (1 , w0 ) if and means that a ∈ s(n)∞ n=1 0 only if limn→∞ (|an |/n) = 0 and a ∈ s(n) ∞ . n=1 (iv) We have M(s1 , w0 ) ⊂ w0 , since e ∈ s1 . Now we show w0 ⊂ M(s1 , w0 ). For this let a ∈ w0 . Then we have for every y ∈ s1 n n 1 1 |ak yk | ≤ sup |yk | |ak | for all n n k=1 n k=1 k and ay ∈ w0 . So we have shown w0 ⊂ M(s1 , w0 ) and M(s1 , w0 ) = w0 . Then we have w0 = M(s1 , w0 ) ⊂ M(c, w0 ) ⊂ w0 and M(c, w0 ) = w0 . This completes the proof of Part (iv). (v) It remains to show M(c0 , w∞ ) = w∞ . By [22, Lemma 3.56, p. 218], the set β ββ M = w∞ is a B K space with AK and is β-perfect, that is, w∞ = w∞ . By Theorem 1.22 with X = c0 and Z = M, we obtain


ββ M(c0 , w∞ ) = M(c0 , w∞ ) = M(M, c0 ). β

Since c0 = 1 , we conclude M(c0 , w∞ ) = Mβ = w∞ . 0 Now we show the identity M(w∞ , c) = s(1/n) ∞ . First we note that n=1

0 M(w∞ , c) ⊃ M(w∞ , c0 ) = s(1/n) ∞ . n=1 0 It remains to show the inclusion M(w∞ , c) ⊂ s(1/n) ∞ . Since c ⊂ (c0 ) , we have n=1

M(w∞ , c) ⊂ M(w∞ , (c0 ) ). So a ∈ M(w∞ , (c0 ) ) implies Da ∈ (w∞ , c0 ). The matrix Da = (αnk )∞ n,k=1 is the triangle whose the nonzero entries are defined by ann = −αn,n−1 = an for all n with α10 = 0. By the characterization of the class (w∞ , c0 ), we have that a ∈ (w∞ , c0 ) implies (a)n M → 0 (n → ∞), where

(a)n M =

ν n −1 ν=0



max

2ν ≤k≤2ν+1 −1

|αnk | + 2νn νmax |αnk | ≥ 2νn |an | 2 n ≤k≤n

and νn is an integer uniquely defined by 2νn ≤ n ≤ 2νn +1 − 1 for all n. Since 2νn ≥ (n + 1)/2, the condition (a)n M → 0 implies n|an | → 0 (n → ∞). So we have shown the inclusion 0 M(w∞ , c) ⊂ s(1/n) ∞ . n=1

This completes the proof.  We need some results on the equivalence relation RE which is defined using the multiplier of sequence spaces in the following way. Definition 5.1 For b ∈ U + and for any subset E of ω, we denote by cl E (b) the equivalence class for the equivalence relation RE defined by x RE y if Ex = E y for x, y ∈ U + . It can easily be seen that cl E (b) is the set of all x ∈ U + such that x/b ∈ M(E, E) and b/x ∈ M(E, E), (cf. [18]). Then we have

cl E (b) = cl M(E,E) (b).


For instance, cl c (b) is the set of all x ∈ U + such that sx(c) = sb(c) . This is the set of all sequences x ∈ U + such that xn ∼ Cbn (n → ∞) for some C > 0. In [18] we denoted by cl ∞ (b) the class cl ∞ (b). We recall that cl ∞ (b) is the set of all x ∈ U + , such that K 1 ≤ xn /bn ≤ K 2 for all n and for some K 1 , K 2 > 0. Since M(c0 , c0 ) = ∞ , we have cl c0 (b) = cl ∞ (b).
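As a concrete illustration of these equivalence classes (a routine instance of the definitions just given; the choice $b = (n)_{n=1}^\infty$ is ours, for illustration only), one may note
$$cl_c\big((n)_{n=1}^\infty\big) = \{x \in U^+ : x_n \sim Cn \ (n\to\infty) \text{ for some } C>0\}, \qquad cl_\infty\big((n)_{n=1}^\infty\big) = \{x \in U^+ : K_1 n \le x_n \le K_2 n \text{ for all } n \text{ and some } K_1, K_2 > 0\}.$$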

5.2 The (SSIE) F ⊂ Ea + Fx with e ∈ F and F  ⊂ M(F, F  ) Here we are interested in the study of the set of all positive sequences x that satisfy the inclusion F ⊂ E a + Fx , where E, F and F  are linear spaces of sequences and a is a positive sequence. For instance, a positive sequence x satisfies the inclusion c ⊂ sa0 + sx(c) if and only if the next statement holds The condition yn → l implies that there are sequences u, v ∈ ω with y = u + v such that u n /an → 0 and vn /xn → l  (n → ∞) for some scalars l and l  and all sequences y. We may consider this problem as a perturbation problem. If we know the set M(F, F  ), then the solutions of the elementary inclusionFx ⊃ F are determined by 1/x ∈ M(F, F  ). Now the question is: If E is a linear space of sequences, what are the solutions of the perturbed inclusion Fx + E ⊃ F? An additional question may be the following one: What are the conditions on E under which the solutions of the elementary and the perturbed inclusions are the same? The solutions of the perturbed inclusion F ⊂ E a + Fx where E, F and F  are linear spaces of sequences cannot be obtained in the general case. So are led to deal with the case when a = (r n )∞ n=1 for r > 0, for which most of these (SSIE) can be totally solved. In the following we write   Ia (E, F, F  ) = x ∈ U + : F ⊂ E a + Fx , where E, F and F  are linear spaces of sequences and a ∈ U + . For any set χ of sequences, we put   χ = x ∈ U + : 1/x ∈ χ . In the following we use the set = {c0 , c, s1 ,  p , w0 , w∞ } with p ≥ 1. We recall that c(1) is the set of all sequences α ∈ U + that satisfy limn→∞ αn = 1 and consider the condition (5.9) G ⊂ G 1/α for all α ∈ c(1) (cf. (4.3)) for any given linear space G of sequences. We note that condition (5.9) is satisfied for all G ∈ . Theorem 5.1 ([12, Theorem 1, p. 1045]) Let a ∈ U + and let E, F and F  be linear spaces of sequences. We assume


(a) e ∈ F, (b) F  ⊂ M(F, F  ), (c) F  satisfies (5.9). Then we have (i) a ∈ M(E, c0 ) implies Ia (E, F, F  ) = F  ; (ii) 1/a ∈ M(F, E) implies Ia (E, F, F  ) = U + . Proof (i) Let x ∈ Ia (E, F, F  ). Then there are ξ ∈ E and f  ∈ F  such that 1 = an ξn + xn f n , hence 1 − a n ξn = f n for all n. xn Since a ∈ M(E, c0 ), we have 1 − an ξn → 1 (n → ∞) and 1 1 = f  for all n. xn 1 − a n ξn n By the condition in (c), we conclude x ∈ F  . Conversely, the condition x ∈ F  implies 1/x ∈ F  , and the condition in (b) implies 1/x ∈ M(F, F  ). We conclude F ⊂ Fx and x ∈ Ia (E, F, F  ). So we have shown Part (i). (ii) follows from the equivalence of 1/a ∈ M(F, E) and F ⊂ E a . This concludes the proof.  We immediately deduce the following. Corollary 5.1 ([12, Corollary 1, p. 1045]) Let E, F and F  be linear spaces of sequences. We assume (a) e ∈ F, (b) F  ⊂ M(F, F  ), (c) E ⊂ c0 . Then the next statements are equivalent (i) F ⊂ E + Fx , (ii) F ⊂ Fx , (iii) x ∈ F  . In some cases where E = cs or 1 , and F  = 1 , we obtain the next results using the α- and β-duals. Corollary 5.2 ([12, Corollary 2, p. 1045]) Let a ∈ U + and let F and F  be linear spaces of sequences. We assume that the conditions in (a), (b), (c) in Theorem 5.1 are satisfied. Then the set Ia (cs, F, F  ) of all positive sequences x such that F ⊂ csa + Fx satisfies the next properties. (i) a ∈ s1 implies Ia (cs, F, F  ) = F  . (ii) 1/a ∈ F β implies Ia (cs, F, F  ) = U + .


Proof It is enough to show M(cs, c0 ) = s1 . We have a ∈ M(cs, c0 ) if and only if Da  ∈ (c, c0 ), and Da  is the infinite matrix whose nonzero entries are [Da ]nn = −[Da ]n,n−1 = an for all n ≥ 2, with the convention [Da ]1,1 = a1 . By the characterization of (c, c0 ) in Lemma 5.1 we conclude M(cs, c0 ) = s1 . This completes the proof.  Corollary 5.3 ([12, Corollary 3, p. 1046]) Let a ∈ U + and let F and F  be linear spaces of sequences. We assume that the conditions in (a), (b), (c) in Theorem 5.1 are satisfied. Then the set Ia (1 , F, F  ) of all positive sequences x such that F ⊂ (1 )a + Fx satisfies the next properties. (i) a ∈ s1 implies Ia (1 , F, F  ) = F  ; (ii) 1/a ∈ F α implies Ia (1 , F, F  ) = U + . Proof This result follows from the identity M(1 , c0 ) = s1 .


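As a quick sanity check of the identity $M(\ell_1, c_0) = s_1$ used in the proof of Corollary 5.3 (an informal verification only, not part of the original argument): for bounded $a$ and summable $y$,
$$|a_n y_n| \le \Big(\sup_k |a_k|\Big)\,|y_n| \to 0 \quad (n \to \infty),$$
so $\ell_\infty \subset M(\ell_1, c_0)$; conversely, an unbounded $a$ admits $y \in \ell_1$ with $a_{n_k} y_{n_k} = 1$ along a subsequence with $|a_{n_k}| \ge 2^k$, so such an $a$ cannot be a multiplier.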

Corollary 5.4 ([12, Corollary 4, p. 1046]) Let a ∈ U + and let E and F be linear spaces of sequences. We assume e ∈ F and 1 ⊂ F α . Then the set Ia (E, F, 1 ) of all positive sequences x such that F ⊂ E a + (1 )x satisfies the implications in (i) and (ii) of Theorem 5.1 with F  = 1 . Remark 5.1 We obtain similar results to those stated above for the (SSIE) defined by Fb ⊂ E a + Fx with a, b ∈ U + . Let Ia,b (E, F, F  ) be the set of all positive sequences that satisfy the previous (SSIE). Under the conditions (a), (b), (c) in Theorem 5.1 we obtain that a/b ∈ M(E, c0 ) implies Ia,b (E, F  ) = Fb , and b/a ∈ M(F, E) implies Ia,b (E, F, F  ) = U + . This result is a direct consequence of the equivalence of Fb ⊂  . For instance, we consider the (SSIE) defined by E a + Fx and F ⊂ E a/b + Fx/b c R ⊂ c0 + cx with R > 0. A positive sequence x satisfies the (SSIE) c R ⊂ c0 + cx if and only if the next statement holds The condition yn /R n → l implies there are sequences u, v ∈ ω such that y = u + v and u n → 0 and vn /xn → l  (n → ∞) for some scalars l and l  and all sequences y. (c) 0 + s(x We have that c R ⊂ c0 + cx is equivalent to c ⊂ s1/R n . Since M(c0 , c0 ) = n /R )n ∞ s1 , we have (R −n )∞ n=1 ∈ s1 if and only if R ≥ 1, and we obtain Ie,(R −n )n=1 (c0 , c) = c R + for R ≥ 1, and since M(c, c0 ) = c0 , we obtain Ie,(R −n )∞ (c , c) = U for all R < 1. 0 n=1

5.3 The (SSIE) F ⊂ Ea + Fx with E, F, F  ∈ {c0 , c, s1 ,  p , w0 , w∞ } In this section, we use the set  = ({s1 } × ( \ {c})) ∪ ({c} × ) with p ≥ 1 and deal with the perturbed inclusions of the form F ⊂ E a + Fx where E = c0 , s1 ,  p , w0 , or w∞ and (F, F  ) ∈ . As a direct consequence of Lemmas 5.6 and 5.7 we obtain.


Lemma 5.8 We have that (F, F  ) ∈  implies F  ⊂ M(F, F  ). By Corollary 5.1 and Lemma 5.8 we obtain the following result. Proposition 5.1 ([12, Proposition 1, p. 1046]) Let E ⊂ c0 be a linear space of sequences and let (F, F  ) ∈ . Then the next statements are equivalent (i) F ⊂ E + Fx , (ii) F ⊂ Fx , (iii) x ∈ F  . Example 5.1 We consider the next statement, where  r = (rn )∞ n=1 ∈ c ∩ U with ∞ s = (sn )n=1 ∈ c. By Ir ,s we denote the set of all positive limn→∞ rn = 0 and  sequences x that satisfy the next statement. For every y ∈ ω, yn → L implies that there are sequences u, v ∈ ω with y = u + v such that rn u n + sn−1 u n−1 → 0 and xn vn → L  (n → ∞) for some scalars L and L  . This statement is equivalent to the (SSIE) defined by c ⊂ (c0 ) B(r ,s) + c1/x , where B( r , s) is the bidiagonal infinite matrix defined by [B( r , s)]nn = rn for all n and [B( r , s)]n,n−1 = sn−1 for all n ≥ 2, the other entries being equal to zero. By [7, Corollary 5.2.1, p. 15], the operator B( r , s) ∈ (c0 , c0 ) is bijective if and only if | lim sn | < | lim rn |. n→∞

n→∞

(5.10)

So if the condition in (5.10) holds then we obtain (c0 ) B(r ,s) = c0 and we conclude Ir ,s = c+ . Proposition 5.2 ([12, Proposition 2, p. 1046]) Let a ∈ U + and let (F, F  ) ∈ . We have (i) (ii) (iii) (iv) (v)

$I_a(c_0, F, F') = \overline{F'}$ if $a \in s_1$, and $I_a(c_0, F, F') = U^+$ if $1/a \in c_0$;
$I_a(s_1, F, F') = \overline{F'}$ if $a \in c_0$, and $I_a(s_1, F, F') = U^+$ if $1/a \in s_1$;
$I_a(\ell_p, F, F') = \overline{F'}$ if $a \in s_1$, and $I_a(\ell_p, F, F') = U^+$ if $1/a \in \ell_p$ for $p \ge 1$;
$I_a(w_0, F, F') = \overline{F'}$ if $a \in s_{(1/n)_{n=1}^\infty}$, and $I_a(w_0, F, F') = U^+$ if $1/a \in w_0$;
$I_a(w_\infty, F, F') = \overline{F'}$ if $a \in s^0_{(1/n)_{n=1}^\infty}$, and $I_a(w_\infty, F, F') = U^+$ if $1/a \in w_\infty$.

Proof The proof is a direct consequence of Theorem 5.1 and Lemmas 5.6 and 5.7. Indeed, we successively have M(E, c0 ) = s1 , for E = c0 , or  p ; M(E, c0 ) = c0 , 0 and M(w∞ , c0 ) = s(1/n) for E = c, or s1 ; M(w0 , c0 ) = s(1/n)∞ ∞ . Then we have n=1 n=1 M(F, E) = M(s1 , E) = M(c, E) for E ∈ \ {c}, and M(s1 , c0 ) = c0 , M(s1 , s1 ) =  s1 , M(s1 ,  p ) =  p , M(s1 , w0 ) = w0 and M(s1 , w∞ ) = w∞ . In all that follows we write G + = G ∩ U + for any set G of sequences. We may illustrate these results by the next examples. Example 5.2 Let p ≥ 1. We consider the system  (S)

sx ⊂ ( p )x + w∞ , w∞ ⊂ Wx .


We have sx ⊂ ( p )x + w∞ if and only if s1 ⊂  p + W1/x . Then by Part (iii) of Proposition 5.2, the solutions of the (SSIE) s1 ⊂  p + W1/x are determined by x ∈ w∞ . Then by Lemma 5.7, we have w∞ = M(s1 , w∞ ) and x ∈ w∞ if and only if sx ⊂ w∞ . Furthermore, we have M + (w∞ , w∞ ) = + ∞ (cf. [14, Remark 3.4, p. 597]), and w∞ ⊂ Wx if and only if s1 ⊂ sx . So x ∈ U + satisfies  system (S) if and only if s1 ⊂ sx ⊂ w∞ . This means that xn ≥ K 1 and n −1 nk=1 xk ≤ K 2 for all n and for some K 1 and K 2 > 0. Example 5.3 Let p ≥ 1, and consider the next statement. The condition yn →  l implies that there are two sequences u, v ∈ ω with n p −1 y = u + v such that ∞ k=1 |u k | < ∞ and n k=1 |vk |/x k ≤ K for all n, for some scalars l and K, with K > 0, and for all y. This statement is associated with the (SSIE) c ⊂  p + Wx and with the set Ie ( p , c, w∞ ). By Part (iii) of Proposition 5.2, we conclude Ie ( p , c, w∞ ) = w∞ . Example 5.4 We consider the next statement. The condition yn → l implies that  there are two sequences u, v ∈ n −1 ω, with y = u + v such that n k=1 |ku k | → 0 (n → ∞) and ∞ p (|v |/x ) < ∞ for some scalar K > 0 and all y. k k k=1 0 This statement corresponds to the (SSIE) c ⊂ W(1/n) ∞ + ( p ) x , and by Part (iv) of n=1 Proposition 5.2, the set of all positive sequences that satisfy this (SSIE) is equal to ∞ I(n)n (w0 , c,  p ) =  p , since (1/n)∞ n=1 ∈ s(1/n)n=1 .

Remark 5.2 Let a ∈ U + and (F, F  ) ∈ . Then the conditions (a), (b), (c) of Theorem 5.1 hold. The set Ia (cs, F, F  ) of all positive sequences x such that F ⊂ csa + Fx satisfies the next properties. (i) a ∈ s1 implies Ia (cs, F, F  ) = F  . (ii) 1/a ∈ F β = 1 implies Ia (cs, F, F  ) = U + . When E = c, we obtain the following result. Proposition 5.3 ([12, Proposition 3, p. 1046]) Let a ∈ U + and F  ∈ . Then we have (i) Ia (c, c, F  ) = F  if a ∈ c0 , and Ia (c, c, F  ) = U + if 1/a ∈ c. (ii) Ia (c, s1 , F  ) = F  if a ∈ c0 , and Ia (c, s1 , F  ) = U + if 1/a ∈ c0 . Proof The proof follows from Theorem 5.1 and Lemma 5.6. Here we have M(E, c0 ) = M(c, c0 ) = c0 and M(F, E) = M(F, c) = c0 for F = s1 and M(F, c) = c for F = c.  Now we study the solvability of the F ⊂ Er + Fx with E, F  ∈ {c0 , c, s1 ,  p , w0 , w∞ }.

(SSIE)


For $a = (r^n)_{n=1}^\infty$, we write $I_r(E, F, F')$ for the set $I_a(E, F, F')$. Then we solve the perturbed inclusions $F \subset E_r + F'_x$ where $F$ is either $c$ or $s_1$, $E$ is any of the sets $c_0, c, s_1, \ell_p, w_\infty$, and $F'$ is any of the sets $c_0, c, s_1, \ell_p, w_0, w_\infty$. It can easily be seen that in most of the cases the set $I_r(E, F, F')$ may be determined by
$$I_r(E, F, F') = \begin{cases} \overline{F'} & \text{if } r < 1, \\ U^+ & \text{if } r \ge 1, \end{cases} \qquad (5.11)$$
or by
$$I_r(E, F, F') = \begin{cases} \overline{F'} & \text{if } r \le 1, \\ U^+ & \text{if } r > 1. \end{cases} \qquad (5.12)$$

We see that in the first case the elementary inclusion F ⊂ Fx and the perturbed inclusion F ⊂ Er + Fx have the same solutions if and only if r < 1. Similarly, in the second case the equivalence of the elementary and the perturbed inclusions is satisfied for r ≤ 1. As a direct consequence of Propositions 5.2 and 5.3 we obtain the following result. Proposition 5.4 ([12, Proposition 4, p. 1047]) Let a ∈ U + and (F, F  ) ∈ . Then we have (i) The sets Ir (s1 , F, F  ), Ir (c, c, F  ) and Ir (w∞ , F, F  ) are determined by (5.11). (ii) The sets Ir (c0 , F, F  ) and Ir ( p , F, F  ) for p ≥ 1 are determined by (5.12). Remark 5.3 From Part (ii) in Proposition 5.4, we deduce the equivalence of the (SSIE) s1 ⊂ Fx and the perturbed inclusion s1 ⊂ c0 + Fx , where F  is any of the sets c0 , s1 ,  p for p ≥ 1, w0 , or w∞ . This is also the case for the (SSIE) c ⊂ Fx and the perturbed inclusion c ⊂  p + Fx , where F  is any of the sets c0 , c, s1 ,  p for p ≥ 1, w0 , or w∞ . Rewriting Proposition 5.4 we obtain. Corollary 5.5 Let r > 0. Then we have (i) Let F  ∈ . Then (a) the solutions of the (SSIE) c ⊂ Er + Fx with E = c, s1 , or w∞ are determined by (5.11); (b) the solutions of the (SSIE) c ⊂ Er + Fx with E = c0 , or  p for p ≥ 1 are determined by (5.12); (ii) Let F  ∈ \ {c}. Then (a) the solutions of the (SSIE) s1 ⊂ Er + Fx with E = s1 , or w∞ are determined by (5.11); (b) the solutions of the (SSIE) s1 ⊂ Er + Fx with E = c0 , or  p for p ≥ 1 are determined by (5.12).
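For the reader's convenience, here is the computation behind the dichotomy in Proposition 5.4 in two typical cases (this merely restates Proposition 5.2 with $a = (r^n)_{n=1}^\infty$):
$$E = s_1:\quad (r^n)_{n=1}^\infty \in c_0 \iff r < 1, \qquad (r^{-n})_{n=1}^\infty \in s_1 \iff r \ge 1,$$
$$E = c_0:\quad (r^n)_{n=1}^\infty \in s_1 \iff r \le 1, \qquad (r^{-n})_{n=1}^\infty \in c_0 \iff r > 1,$$
so the boundary value $r = 1$ falls on the $U^+$ side in the first case, which gives (5.11), and on the $\overline{F'}$ side in the second, which gives (5.12).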


Remark 5.4 The set $I_r(w_0, c, F')$ of all the solutions of the (SSIE) $c \subset W_r^0 + F'_x$, where $F'$ is any of the sets $c_0, c, s_1, \ell_p, w_0, w_\infty$, is determined for all $r \ne 1$. We obtain
$$I_r(w_0, c, F') = \begin{cases} \overline{F'} & \text{for } r < 1, \\ U^+ & \text{for } r > 1. \end{cases}$$
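A brief word on why the value $r = 1$ is excluded here (an observation that only unwinds Lemma 5.7 and Theorem 5.1, not an additional claim): for $a = e$, that is $r = 1$, neither hypothesis of Theorem 5.1 is available, since
$$e \notin M(w_0, c_0) = s_{(1/n)_{n=1}^\infty} \quad\text{and}\quad e \notin M(c, w_0) = w_0,$$
because $\sup_n (n \cdot 1) = \infty$ and $n^{-1}\sum_{k=1}^{n} 1 = 1 \not\to 0$.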

Example 5.5 By Part (i) (a) of Corollary 5.5, the solutions of the (SSIE) defined by c ⊂ sr + sx0 are determined by the set of all positive sequences x that satisfy xn → ∞ (n → ∞) if r < 1. Then the (SSIE) holds for all positive sequences x for r ≥ 1. Similarly, by Part (i) (b) of Corollary 5.5, the solutions of the (SSIE) defined by c ⊂ ( p )r + Wx for r > 0 and p ≥ 1 are determined by n −1 nk=1 xk−1 ≤ K for all n and for some K > 0, for r ≤ 1, and if r > 1 the (SSIE) holds for all positive sequences x. Example 5.6 By Part (ii) (a) of Corollary 5.5, the solutions of the (SSIE) defined by s1 ⊂ Wr + sx are determined by 1/x ∈ s1 if r < 1, and if r ≥ 1 the (SSIE) holds for F  = w0 , the all positive sequences x. By Part (ii) (b) of Corollary 5.5 with E = c0 and solutions of the (SSIE) defined by s1 ⊂ sr0 + Wx0 are determined by n −1 nk=1 xk−1 → 0 (n → ∞), for r ≤ 1, and if r > 1 the (SSIE) holds for all positive sequences x. From Theorem 5.1, we obtain the next corollary. Corollary 5.6 Let a ∈ U + and E and F be two linear spaces of sequences. We assume (a) e ∈ F (b) F ⊂ M(F, F) and (c) F satisfies (5.9). Then we have (i) a ∈ M(E, c0 ) implies Ia (E, F) = F; (ii) 1/a ∈ M(F, E) implies Ia (E, F) = U + . Now we deal with the (SSIE) F ⊂ E a + Fx where F is either c, or s1 and E ∈ . We obtain the following by Corollary 5.6 and Lemmas 5.6 and 5.7. Corollary 5.7 Let a ∈ U + . Then we have (i)

(a) $I_a(c_0, c) = \begin{cases} \overline{c} & \text{if } a \in s_1 \\ U^+ & \text{if } 1/a \in c_0; \end{cases}$
(b) $I_a(c, c) = \begin{cases} \overline{c} & \text{if } a \in c_0 \\ U^+ & \text{if } 1/a \in c; \end{cases}$
(c) $I_a(s_1, c) = \begin{cases} \overline{c} & \text{if } a \in c_0 \\ U^+ & \text{if } 1/a \in s_1; \end{cases}$
(d) $I_a(\ell_p, c) = \begin{cases} \overline{c} & \text{if } a \in s_1 \\ U^+ & \text{if } 1/a \in \ell_p \text{ for } p \ge 1; \end{cases}$
(e) $I_a(w_0, c) = \begin{cases} \overline{c} & \text{if } a \in s_{(1/n)_{n=1}^\infty} \\ U^+ & \text{if } 1/a \in w_0; \end{cases}$
(f) $I_a(w_\infty, c) = \begin{cases} \overline{c} & \text{if } a \in s^0_{(1/n)_{n=1}^\infty} \\ U^+ & \text{if } 1/a \in w_\infty. \end{cases}$
(ii)
(a) $I_a(c_0, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in s_1 \\ U^+ & \text{if } 1/a \in c_0; \end{cases}$
(b) $I_a(c, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in c_0 \\ U^+ & \text{if } 1/a \in c_0; \end{cases}$
(c) $I_a(s_1, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in c_0 \\ U^+ & \text{if } 1/a \in s_1; \end{cases}$
(d) $I_a(\ell_p, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in s_1 \\ U^+ & \text{if } 1/a \in \ell_p \text{ for } p \ge 1; \end{cases}$
(e) $I_a(w_0, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in s_{(1/n)_{n=1}^\infty} \\ U^+ & \text{if } 1/a \in w_0; \end{cases}$
(f) $I_a(w_\infty, s_1) = \begin{cases} \overline{s_1} & \text{if } a \in s^0_{(1/n)_{n=1}^\infty} \\ U^+ & \text{if } 1/a \in w_\infty. \end{cases}$
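To illustrate Part (i) of Corollary 5.7 with concrete sequences (these particular choices of $a$ are ours, for illustration only):
$$a = (n^{-2})_{n=1}^\infty:\quad n a_n = \tfrac{1}{n} \to 0, \text{ so } a \in s^0_{(1/n)_{n=1}^\infty} \text{ and } I_a(w_\infty, c) = \overline{c};$$
$$a = e:\quad 1/a = e \in w_\infty, \text{ so } I_e(w_\infty, c) = U^+,$$
the latter being consistent with the trivial inclusion $c \subset w_\infty \subset w_\infty + c_x$ for every $x \in U^+$.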

Example 5.7 Let $r > 0$. Then by Part (ii) (c) of Corollary 5.7, the solutions of the (SSIE) defined by $s_1 \subset s_r + s_x$ are determined by (5.11) with $F' = F = s_1$. The solutions of the (SSIE) defined by $s_1 \subset E_r + s_x$ where $E \in \{c_0, \ell_p\}$ with $p \ge 1$ are determined by (5.12) with $F' = F = s_1$. The solutions of the (SSIE) defined by $c \subset E_r + c_x$ with $E \in \{c, s_1\}$ are determined by (5.11) with $F' = F = c$. The solutions of the (SSIE) $c \subset E_r + c_x$ where $E \in \{c_0, \ell_p\}$ with $p \ge 1$ are determined by (5.12) with $F' = F = c$.
Now we consider the case when $a = (r^n)_{n=1}^\infty$ for $r > 0$. We write $E_r$ for $E_{(r^n)_{n=1}^\infty}$, and the set of all positive sequences $x$ that satisfy

F ⊂ Er + Fx

(5.13)

is denoted by Ir (E, F). Here we explicitly calculate the solutions of the (SSIE) defined by (5.13), where F is either c, or s1 and E ∈ . For instance, a positive sequence x satisfies the (SSIE) c ⊂ sr + cx if the next statement holds.


The condition $y_n \to l$ implies that there are sequences $u, v \in \omega$ such that $y = u + v$ for which $\sup_n (|u_n|/r^n) < \infty$ and $v_n/x_n \to l'$ $(n \to \infty)$ for some scalars $l$ and $l'$ and for all $y$.
We consider the conditions
$$(R^n)_{n=1}^\infty \in M(E, c_0) \text{ for all } 0 < R < 1, \qquad (5.14)$$
$$(R^n)_{n=1}^\infty \in M(E, c_0) \text{ for all } 0 < R \le 1, \qquad (5.15)$$
$$(R^{-n})_{n=1}^\infty \in M(F, E) \text{ for all } R \ge 1, \qquad (5.16)$$
$$(R^{-n})_{n=1}^\infty \in M(F, E) \text{ for all } R > 1. \qquad (5.17)$$

We will use the next result which is a direct consequence of Corollary 5.6. Corollary 5.8 Let r > 0 and let E and F be linear spaces of sequences. We assume that F satisfies the conditions (a), (b), (c) of Corollary 5.6. Then we have (i) If the conditions in (5.14) and (5.16) hold, then the solutions of the (SSIE) defined by (5.13) are determined by (5.11) with F  = F. (ii) If the conditions in (5.15) and (5.17) hold, then the solutions of the (SSIE) defined by (5.13) are determined by (5.12) with F  = F. We obtain the following corollary as a direct consequence of the preceding results. Corollary 5.9 Let r > 0. Then we have (i) (a) The solutions of the (SSIE) c ⊂ Er + cx for E = c, s1 , or w∞ are determined by (5.11) with F  = c. (b) The solutions of the (SSIE) c ⊂ Er + cx with E = c0 or  p for p ≥ 1 are determined by (5.12) with F  = c. (ii) (a) The solutions of the (SSIE) s1 ⊂ Er + sx with E = s1 or w∞ are determined by (5.11) with F  = s1 . (b) The solutions of the (SSIE) s1 ⊂ Er + sx with E = c0 or  p for p ≥ 1 are determined by (5.12) with F  = s1 . It is interesting to state the next remarks on the elementary (SSIE) successively defined by c ⊂ c0 + cx , c ⊂  p + cx , s1 ⊂ c0 + sx and s1 ⊂  p + sx . Remark 5.5 Obviously we have c  c0 , but we may determine the set Ic,c0 of all positive sequences x for which c ⊂ c0 + cx . So by Corollary 5.9, we have x ∈ Ic,c0 if and only if xn → L (n → ∞) for some scalar L with 0 < L ≤ ∞. This means that the (SSIE) c ⊂ cx and the perturbed equation c ⊂ c0 + cx are equivalent. We obtain a similar result concerning the perturbed (SSIE) defined by c ⊂  p + cx for p ≥ 1. Remark 5.6 By Part (ii) (b) of Corollary 5.9, each of the (SSIE) s1 ⊂ c0 + sx and s1 ⊂  p + sx , for p ≥ 1 is equivalent to 1/x ∈ s1 . This means that the solutions of each of the previous (SSIE) are determined by xn ≥ K for all n and for some K > 0. It is interesting to notice that the (SSIE) sx ⊃ s1 and each of the perturbed (SSIE) determined by c0 + sx ⊃ s1 and  p + sx ⊃ s1 have the same set of solutions.


5.4 Some (SSIE) and (SSE) with Operators
In this section, we study the (SSIE) of the form
$$F \subset D_r * E_\Delta + F_x$$

(5.18)

associated with the operator , where the inclusion in (5.18) is considered for any of the cases c ⊂ Dr ∗ (c0 ) + cx , c ⊂ Dr ∗ c + cx , c ⊂ Dr ∗ (s1 ) + cx , s1 ⊂ sr + sx and s1 ⊂ Dr ∗ (s1 ) + sx for r > 0. Then we consider the (SSIE) c ⊂ Dr ∗ E C1 + sx(c) with E ∈ {c, s1 } and s1 ⊂ Dr ∗ (s1 )C1 + sx , where C1 is the Cesàro operator. Then we solve the (SSE) Dr ∗ E C1 + sx(c) = c with E ∈ {c0 , c, s1 }, and Dr ∗ (s1 )C1 + sx = s1 . We note that since Da ∗ E T = E T D1/a , where T D1/a is a triangle, for any linear space E of sequences and any triangle T , the previous inclusions and identities can be considered as (SSIE) and (SSE). More precisely, the previous (SSE) can be considered as the perturbed equations of the equations Fx = F with F = c, or s1 . Now we study the (SSIE) of the form F ⊂ Dr ∗ E  + Fx . In the next result, among other things, we deal with the (SSIE) c ⊂ Dr ∗ (c0 ) + cx , which is associated with the next statement. The condition yn → l (n → ∞) implies that there are sequences u, v ∈ ω such that y = u + v and u n r −n − u n−1r −(n−1) → 0 and vn /xn → l  (n → ∞) for some scalars l and l  and for all y. The corresponding set of sequences is denoted by Ir ((c0 ) , c). In a similar way, Ir ((s1 ) , c) is the set of all sequences that satisfy the (SSIE) c ⊂ Dr ∗ (s1 ) + cx . We obtain the following result. Corollary 5.10 Let r > 0. We have (i) Ir (E, c) = Ir (s1 , c) for E = c, (c0 ) , c or (s1 ) , and Ir (s1 , c) is determined by (5.11) with F  = c. (ii) Ir ((s1 ) , s1 ) = Ir (s1 , s1 ) is determined by (5.11) with F  = s1 .
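The identity invoked in the preceding paragraph can be checked directly from the definitions (a routine verification, written out only for convenience): for $a \in U^+$, a triangle $T$ and a linear space of sequences $E$,
$$D_a * E_T = \{y : y/a \in E_T\} = \{y : T(D_{1/a} y) \in E\} = E_{T D_{1/a}},$$
and $T D_{1/a}$ is again a triangle, since $T$ is a triangle and $D_{1/a}$ is a diagonal matrix with nonzero diagonal entries.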


Proof (i) The identity Ir (s1 , c) = Ir (c, c) is a direct consequence of Corollary 5.8, and Ir (s1 , c) is determined by (5.11) with F = c. Indeed, we have M(E, c0 ) = M(c, c0 ) = M(s1 , c0 ) = c0 . Then we obtain M(E, c) = c for E = c, and M(E, s1 ) = s1 for E = s1 . Now we deal with I((c0 ) , c). We have (R n )∞ n=1 ∈ M((c0 ) , c0 ) if and only if D R  ∈ (c0 , c0 ). The operator D R  is the triangle defined by (D R )nk = R n for k ≤ n and for all n. Then from the characterization of (c0 , c0 ), the condition D R  ∈ (c0 , c0 ) is equivalent to n R n = O(1) (n → ∞), and R < 1. Then the condition (R −n )∞ n=1 ∈ M(c, (c0 ) ) implies D1/R ∈ (c, c0 ). The nonzero entries of D1/R are given by [D1/R ]nn = R −n and [D1/R ]n,n−1 = −R −n+1 for all n ≥ 1, and from the characterization of (c, c0 ), we conclude R ≥ 1. By similar arguments as those used above we obtain Ir ((s1 ) , c) = Ir (c , c) = Ir (s1 , c). (ii) The proof of (ii) is similar and left to the reader.  We may state an elementary application which can be considered as an exercise and is based on the notion of perturbed sequence spaces inclusions. Example 5.8 Let r, ρ > 0. We consider the system of perturbed (SSIE) defined by  (S1 )

$$\begin{cases} c \subset D_r * (c_0)_\Delta + s_x^{(c)}, \\ \ell_\infty \subset D_\rho * (s_1)_\Delta + s_{1/x}. \end{cases}$$

By Corollary 5.10, the system (S1 ) is equivalent to c ⊂ sx(c) and ∞ ⊂ s1/x , that is, 1/x ∈ c and x ∈ ∞ for r and ρ < 1. Since we have 1/x ∈ c if and only if xn → L (n → ∞) for some scalar L with 0 < L ≤ ∞, we conclude that the set S of all the positive sequences that satisfy system (S1 ) is determined by

$$S = \begin{cases} cl_c(e) & \text{if } r, \rho < 1, \\ \varnothing & \text{otherwise.} \end{cases}$$
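Recall from Sect. 5.1 that $cl_c(e)$ can be described explicitly, which makes the solution set $S$ above completely concrete (this only unwinds the earlier definition of the classes $cl_E(b)$):
$$cl_c(e) = \{x \in U^+ : x_n \to C \ (n \to \infty) \text{ for some } C > 0\}.$$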

Now we consider the (SSIE) of the form F ⊂ Dr ∗ E C1 + Fx , where C1 is the Ces`aro operator and E, F ∈ {c, s1 }. For instance, the (SSIE) c ⊂ Dr ∗ cC1 + sx(c) is associated with the next statement. The condition yn → l (n → ∞) implies that there are sequences u, v ∈ ω such that y = u + v and n 1  uk vn → l  and → l  (n → ∞) n k=1 r k xn

for some scalars l, l  and l  and for all y. We obtain the following result.


Proposition 5.5 ([12, Proposition 5, p. 1049]) Let r > 0. Then we have (i) The solutions of the (SSIE) defined by c ⊂ Dr ∗ E C1 + sx(c) with E ∈ {c, s1 } are determined by (5.11) with F  = c. (ii) The solutions of the (SSIE) s1 ⊂ Dr ∗ (s1 )C1 + sx are determined by (5.11) with F  = s1 . Proof (i) Case E = c. −1 Let R > 0. Then we have (R n )∞ n=1 ∈ M(cC1 , c0 ) if and only if D R C 1 ∈ (c, c0 ). −1 It can easily be seen that the entries of the matrix C1 are defined by [C1−1 ]nn = n, [C1−1 ]n,n−1 = −(n − 1) for all n ≥ 2 and [C1−1 ]1,1 = 1. Then D R C1−1 is the triangle whose nonzero entries are given by [D R C1−1 ]nn = n R n , [D R C1−1 ]n,n−1 = −(n − 1)R n for all n ≥ 2 and [D R C1−1 ]1,1 = R. By the characterization of (c, c0 ), this means limn→∞ (R n [n − (n − 1)]) = limn→∞ R n = 0, and (2n − 1)R n ≤ K for some K > 0 and all n. We conclude (R n )∞ n=1 ∈ M(cC1 , c0 ) if and only if R < 1. ∈ M(c, c ) if and only if C1 D1/R ∈ (c, c). But C1 D1/R is Then we have (R −n )∞ C1 n=1 for the triangle defined by [C1 D1/R ]nk = n −1 R −k  k ≤ n and for all n. So the condition C1 D1/R ∈ (c, c) is equivalent to n −1 nk=1 R −k → L (n → ∞) for some scalar L and R ≥ 1. We conclude by Corollary 5.8. Case E = s1 . −1 We have (R n )∞ n=1 ∈ M((s1 )C1 , c0 ) if and only if D R C 1 ∈ (s1 , c0 ), and from the characterization of (s1 , c0 ), that is, limn→∞ [(2n − 1)R n ] = 0 and R < 1. Then we  have (R −n )∞ n=1 ∈ M(c, (s1 )C1 ) if and only if C 1 D1/R ∈ (c, s1 ), that is, n −1 −k R ) < ∞ and R ≥ 1. Again we conclude by Corollary 5.8. supn (n k=1 (ii) As we have just seen above, we have (R n )∞ n=1 ∈ M((s1 )C1 , c0 ) if and only if −n ∞ ) ∈ M(s , (s ) R < 1. Then we obtain (R 1 1 C1 ) if and only if C 1 D1/R ∈ (s1 , s1 ), n=1  that is, supn (n −1 nk=1 R −k ) < ∞ and we conclude R ≥ 1.  Remark 5.7 We may deal with the set Ia (cC1 , c) of all solutions of the (SSIE) c ⊂ as those used above, we obtain Ia (cC1 , c) = c Da ∗ cC1 + sx(c) . By similar arguments n ∞ if a ∈ c0 and Ia (cC1 , c) = U + if (n −1 nk=1 1/ak )∞ n=1 ∈ c. For a = (r )n=1 we obtain α ∞ Part (i) of Proposition 5.5. In the case of a = (n )n=1 with α ∈ R, it can easily be shown that

$$I_a(c_{C_1}, c) = \begin{cases} \overline{c} & \text{if } \alpha < 0 \\ U^+ & \text{if } \alpha \ge 0. \end{cases}$$
We obtain the following result concerning the (SSIE) $c \subset D_r * (c_0)_{C_1} + s_x^{(c)}$.
Corollary 5.11 The solutions of the (SSIE) $c \subset D_r * (c_0)_{C_1} + s_x^{(c)}$ are determined by
$$I_r((c_0)_{C_1}, c) = \begin{cases} \overline{c} & \text{if } r < 1 \\ U^+ & \text{if } r > 1. \end{cases}$$
Proof We apply Theorem 5.1. We have $(R^n)_{n=1}^\infty \in M((c_0)_{C_1}, c_0)$ if and only if $D_R C_1^{-1} \in (c_0, c_0)$. We conclude $D_R C_1^{-1} \in (c_0, c_0)$ if and only if $((2n-1)R^n)_{n=1}^\infty \in$


∞ and R < 1. Thenwe have (R −n )∞ n=1 ∈ M(c, (c0 )C1 ) if and only if C 1 D1/R ∈ (c, c0 ), that is, n −1 nk=1 R −k → 0 (n → ∞), and R > 1. This completes the proof.  Now we consider an application to the solvability of the following (SSE’s) with an operator Dr ∗ E C1 + sx(c) = c for E ∈ {c0 , c, s1 } and Dr ∗ E C1 + sx = s1 for E ∈ {c, s1 }. For instance, the (SSE) defined by Dr ∗ (s1 )C1 + cx = c is associated with the next statement. The condition yn → l (n → ∞) holds if and only if there are sequences u, v ∈ ω such that y = u + v and   n vn 1  u k  sup → l  (n → ∞)  < ∞ and  k  n r x n n k=1 for some scalars l and l  and for all y. Here we also use the (SSIE) defined by Dr ∗ (c0 )C1 + cx ⊂ c which is associated with the next statement.  The conditions n −1 ( nk=1 u k /r k ) → 0 and vn /xn → l together imply  u n + vn → l (n → ∞) for all sequences u and v ∈ ω and for some scalars l and l  . We write Ia (E, F) = {x ∈ U + : E a + Fx ⊂ F}. We note that since E and F are linear spaces of sequences, we have x ∈ Ia (E, F) if and only if and E a ⊂ F and Fx ⊂ F. This means that x ∈ Ia (E, F) if and only if a ∈ M(E, F) and x ∈ M(F, F). Then we have S(E, F) = Ia (E, F) ∩ Ia (E, F) = {x ∈ U + : E a + Fx = F}, (see [11, pp. 222–223]). From Proposition 5.5 and Corollary 5.11 we obtain the next results on the (SSE) Dr ∗ E C1 + sx(c) = c with E ∈ {c0 , c, s1 }, and Dr ∗ E C1 + sx = s1 with E ∈ {c, s1 }. Proposition 5.6 ([12, Proposition 6, pp. 1050–1051]) Let r > 0. Then we have (i) The solutions of the (SSIE) Dr ∗ E C1 + sx(c) ⊂ c with E ∈ {c0 , c, s1 } and Dr ∗ E C1 + sx ⊂ s1 with E ∈ {c, s1 } are determined by  

$$I_r(E_{C_1}, c) = \begin{cases} c^+ & \text{if } r < 1 \\ \varnothing & \text{if } r \ge 1 \end{cases} \quad \text{for } E \in \{c_0, c, s_1\}, \qquad (5.19)$$
and
$$I_r(E_{C_1}, s_1) = \begin{cases} s_1^+ & \text{if } r < 1 \\ \varnothing & \text{if } r \ge 1 \end{cases} \quad \text{for } E \in \{c, s_1\}.$$

(ii) The solutions of the perturbed equations Dr ∗ E C1 + sx(c) = c with E ∈ {c0 , c, s1 } and Dr ∗ E C1 + sx = s1 with E ∈ {c, s1 } are determined by 



$$S_r(E_{C_1}, c) = \begin{cases} cl_c(e) & \text{if } r < 1 \\ \varnothing & \text{if } r \ge 1 \end{cases} \quad \text{for } E \in \{c_0, c, s_1\},$$
and
$$S_r(E_{C_1}, s_1) = \begin{cases} cl_\infty(e) & \text{if } r < 1 \\ \varnothing & \text{if } r \ge 1 \end{cases} \quad \text{for } E \in \{c, s_1\}.$$

Proof (i) The inclusion Dr ∗ (c0 )C1 ⊂ c is equivalent to Dr C1−1 ∈ (c0 , c), that is, Dr C1−1 ∈ S1 and (n + n − 1)r n ≤ K for all n and some K > 0. So we have Dr ∗ (c0 )C1 ⊂ c if and only if r < 1. Then the inclusion sx(c) ⊂ c is equivalent to x ∈ c. So the (SSIE) Dr ∗ (c0 )C1 + sx(c) ⊂ c is equivalent to r < 1 and x ∈ c and the identity in (5.19) holds for E = c0 .

Case E = c. The inclusion Dr ∗ cC1 ⊂ c is equivalent to Dr C1−1 ∈ (c, c), that is, [n − (n − 1)]r n = r n → L (n → ∞) for some scalar L, and [n + (n − 1)]r n = O(1) (n → ∞). This implies r < 1. Using similar arguments as those above we conclude that (5.19) holds for E = c. The proof of the case E = s1 is similar and left to the reader. (ii) is obtained from Part (i) and Part (i) of Proposition 5.5 for Sr (E C1 , c) with E = c or s1 and is obtained from Part (i) and Corollary 5.11 for Sr ((c0 )C1 , c). Then the determination of the set Sr ((s1 )C1 , s1 ) is obtained from Part (i) and Part (ii) of Proposition 5.5. It remains to determine the set Sr (cC1 , s1 ). For this we deal with the solvability of the (SSIE) s1 ⊂ Dr ∗ cC1 + sx . As we have seen in the proof of Propo−1 sition 5.5, we have (r n )∞ n=1 ∈ M(cC1 , c0 ) if and only if Dr C 1 ∈ (c, c0 ), that −n ∞ is, r < 1. Then we have (r )n=1 ∈ M(s1 , cC1 ) if and only if C1 D1/r ∈ (s1 , c). Since limn→∞ [C1 D1/r ]nk = 0 for all k ≥ 1, we have by Part (v) of Lemma 5.1 C1 D1/r ∈ (s1 , c) if and only if n −1 nk=1 r −k → 0 (n → ∞) and r > 1. So we have shown Ir (cC1 , s1 ) = s1 for r < 1 and Ir (cC1 , s1 ) = s1 for r > 1. We conclude for the set Sr (cC1 , s1 ) using the determination of Ir (cC1 , s1 ) given in Part (i). This completes the proof.  Example 5.9 For instance, we have x ∈ Sr (cC1 , s1 ) if and only if the next statement holds.


The condition supn |yn | < ∞ holds if and  only if there are sequences u, v ∈ ω such that y = u + v for which n −1 nk=1 u k /r k → l (n → ∞) and supn (|vn |/xn ) < ∞ for some scalar l and all sequences y. We have Sr (cC1 , s1 ) = ∅ if and only if r < 1, and then Sr (cC1 , s1 ) = cl ∞ (e), that is, x ∈ Sr (cC1 , s1 ) if and only if there are K 1 , K 2 > 0 such that K 1 ≤ xn ≤ K 2 for all n. Remark 5.8 By similar arguments as those used above, the set Ia (cC1 , c) of all sequences x ∈ U + for which Da ∗ cC1 + sx(c) ⊂ c is determined by Ia (cC1 , c) = c+ if a ∈ s(1/n)∞ and Ia (cC1 , c) = ∅ if a ∈ / s(1/n)∞ . Indeed, we have a ∈ M(cC1 , c) if n=1 n=1 and only if Da C1−1 ∈ (c, c). But the matrix Da C1−1 is the triangle whose the nonzero entries are given by [Da C1−1 ]nn = nan , [Da C1−1 ]n,n−1 = −(n − 1)an for all n ≥ 2 and [Da C1−1 ]1,1 = a1 . From the characterization of (c, c), the condition Da C1−1 ∈ (c, c) is equivalent to lim (an [n − (n − 1)]) = lim an = L

n→∞

n→∞

for some scalar L, and (2n − 1)an ≤ K for some K > 0 and all n. We conclude a ∈ . Since s(1/n)∞ ⊂ c0 , this result and Remark 5.7 M(cC1 , c) if and only if a ∈ s(1/n)∞ n=1 n=1 , and Sa (cC1 , c) = ∅ if a ∈ / s(1/n)∞ . together imply Sa (cC1 , c) = cl c (e) if a ∈ s(1/n)∞ n=1 n=1 . In this way, So we obtain the result of Part (i) in Proposition 5.6, where a = (r n )∞ n=1 (cC1 , c) = cl c (e) if α ≥ 1 and S(n −α )∞ (cC1 , c) = ∅ if α < 1. we also obtain S(n −α )∞ n=1 n=1
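The entries of $C_1^{-1}$ used repeatedly above can be recovered by a one-line computation (a standard check, recorded here only as a reminder): if $s_n = \frac{1}{n}\sum_{k=1}^{n} y_k$ denotes the Cesàro means of $y$, then
$$y_n = n s_n - (n-1) s_{n-1} \quad (n \ge 2), \qquad y_1 = s_1,$$
which is exactly the statement that $[C_1^{-1}]_{nn} = n$, $[C_1^{-1}]_{n,n-1} = -(n-1)$ and $[C_1^{-1}]_{1,1} = 1$.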

Conclusion. In this section, we studied the solvability of the (SSIE) of the form F ⊂ (E a )T + Fx for some triangle T and e ∈ F. In future, we will be led to deal with the (SSIE) with operators of the form F ⊂ (E a ) A + Fx in each of the cases e ∈ F and e ∈ / F, where A is an infinite matrix which can be a triangle, or a band matrix, or the inverse of a band matrix.

5.5 The (SSIE) F ⊂ Ea + Fx for e ∈ / F In this section, we consider a class of sequences spaces inclusions equations that are direct applications of Theorem 5.2 and involve the sets c0 , c,  p for 1 ≤ p ≤ ∞, w0 and w∞ . As we have seen above, the solutions of the perturbed inclusion F ⊂ E a + Fx , where E, F and F  are linear spaces of sequences, cannot be obtained in the general case. So we are led to deal with the case when a = (r n )∞ n=1 with r > 0 for which most of these (SSIE) can be totally solved. Here also we write Ia (E, F, F  ) = {x ∈ U + : F ⊂ E a + Fx }, where E, F and F  are linear spaces of sequences and a ∈ U + . For any set χ of sequences, we let χ = {x ∈ U + : 1/x ∈ χ }. is the set So we have Fx ⊃ F if and only if x ∈ M(F, F  ). For instance, s(1/n)∞ n=1 whose elements are determined by xn ≥ Cn for all n and some C > 0. We note that for any given b ∈ U + , we also have Fx ⊃ Fb if and only if b/x ∈ M(F, F  ), that is, x ∈ [M(F, F  )]b or x ∈ M(F, F  )1/b . We recall that = {c0 , c, ∞ ,  p , w0 , w∞ } with p ≥ 1.


Now we state a general result for the study of the (SSIE) of the form F ⊂ E a + Fx where e does not necessarily belong to F. Then among other things, we deal with the solvability of the next (SSIE) c0 ⊂ E a + Fx for E, F  ∈ {c0 , c, s1 },

(1-)

c0 ⊂ ( p )a + Fx for F  ∈ {c0 , c, s1 } and p ≥ 1,

(2-)

c0 ⊂ E a + Wx for E ∈ ,

(3-)

1 ⊂ E a + Wx for E ∈ ,

(4-)

 p ⊂ E a + Fx with E, F  ∈ {c0 , c, s1 ,  p },

(5-)

w0 ⊂ E a + Fx with E, F  ∈ {c0 , c, s1 },

(6-)

w0 ⊂ E a + Fx with E ∈ {c0 , c, s1 , w0 , w∞ }, F  ∈ {w0 , w∞ },

(7-)

w∞ ⊂ E a + sx with E ∈ {c0 , s1 },

(8-)

w∞ ⊂ E a + Wx with E ∈ {c0 , s1 , w∞ },

(9-)

c0 ⊂ Wa + Fx for F  ∈ {c0 , c, s1 }.

(10-)

We denote by U1+ the set of all sequences α with 0 < αn ≤ 1 for all n and consider the condition (5.20) G ⊂ G 1/α for all α ∈ U1+ for any given linear space G of sequences. Then we introduce the linear space of sequences H which contains the spaces E and F  . The proof of the next theorem is based on the fact that if H satisfies the condition in (5.20) we then have Hα + Hβ = Hα+β for all α, β ∈ U + (cf. [14, Proposition 5.1, pp. 599–600]). We note that c does not satisfy this condition, but each of the sets c0 , ∞ ,  p , ( p ≥ 1), w0 and w∞ satisfies the condition in (5.20).
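The failure of (5.20) for $G = c$ can be seen with a two-line example (our own illustration; the positive assertions for $c_0, \ell_\infty, \ell_p, w_0$ and $w_\infty$ are the ones used in the sequel): take $y = e \in c$ and $\alpha \in U_1^+$ with $\alpha_n = 1$ for even $n$ and $\alpha_n = 1/2$ for odd $n$; then
$$y \in c \quad\text{but}\quad \alpha y = \alpha \notin c, \quad\text{that is, } c \not\subset c_{1/\alpha},$$
whereas for $G = c_0$, say, $|\alpha_n y_n| \le |y_n| \to 0$ shows $c_0 \subset (c_0)_{1/\alpha}$ for every $\alpha \in U_1^+$.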


In the following, we write M(F, F  ) = χ . The next result is used to determine a new class of (SSIE). Theorem 5.2 ([11, Theorem 9, p. 216]) Let a ∈ U + and let E, F and F  be linear subspaces of ω. We assume that (a) χ satisfies condition (5.9). (b) There is a linear space of sequences H that satisfies the condition in (5.20) and the conditions (α) E, F  ⊂ H

and (β) M (F, H ) = χ .

Then we have (i) a ∈ M(χ , c0 ) implies Ia (E, F, F  ) = χ; (ii) a ∈ M(F, E) implies Ia (E, F, F  ) = U + . Proof (i) Let a ∈ M(χ , c0 ) and let x ∈ Ia (E, F, F  ). As we have just seen, we have F ⊂ Ha + Hx ⊂ Ha+x , which implies ξ=

1 ∈ χ. a+x

Then it can easily be shown that 1 ξ = . x 1 − aξ Then the conditions ξ ∈ χ + and a ∈ M(χ , c0 ) imply 1 − an ξn → 1 (n → ∞). By the condition in (a), we conclude x ∈ χ. Conversely, let x ∈ χ . Then we successively obtain 1/x ∈ χ = M(F, F  ), F ⊂ Fx , F ⊂ E a + Fx and x ∈ Ia (E, F, F  ). We conclude Ia (E, F, F  ) = χ . (ii) follows from the equivalence of a ∈ M(F, E) and F ⊂ E a . This completes the proof.  Corollary 5.12 Let a ∈ U + , and E, F and F  be linear subspaces of ω. We assume χ satisfies condition (5.9) and E ⊂ F  , where F  satisfies the condition in (5.20). Then we have (i) a ∈ M(χ , c0 ) implies Ia (E, F, F  ) = χ; (ii) a ∈ M(F, E) implies Ia (E, F, F  ) = U + . In the next remarks we compare Theorems 5.2 and 5.1 when e ∈ F.


Remark 5.9 Under some conditions we may compare Theorems 5.2 and 5.1. Let (E, F) the set a ∈ U + and let E and F be linear subspaces of ω. We denote by Ia of all positive sequences x that satisfy F ⊂ E a + E x . We assume that the conditions e ∈ F and E ⊂ M(F, E) hold and that E satisfies the condition in (5.9). Then we have χ = M(F, E) = E and trivially M(χ , c0 ) = M(E, c0 ), so the condition (i) in Theorem 5.2 is equivalent to the condition (i) in Theorem 5.1. Remark 5.10 Let a ∈ U + and let E, F and F  be linear subspaces of ω. We assume (a) e ∈ F, (b) E ⊂ F  , (c) F  ⊂ M(F, F  ) and (d) F  satisfies (5.9). Then we have χ = M(F, F  ) = F  and M(χ , c0 ) ⊂ M(E, c0 ). So it can easily be seen that Theorem 5.1 implies Theorem 5.2. This result can be illustrated by the next example. We consider the (SSIE) c ⊂ sa0 + sx . By Theorem 5.1, we obtain that a ∈ ∞ implies Ia (c0 , c, s1 ) = s1 , and since χ = M(c, s1 ) = s1 , we obtain that a ∈ c0 implies Ia (c0 , c, s1 ) = χ = s1 in Theorem 5.2. We obtain similar results for the (SSIE) c ⊂ sa0 + Fx where F  ∈ {∞ , w0 , w∞ }.
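Before passing to the applications, it may help to record the small algebraic identity on which the proof of Theorem 5.2 rests (merely a verification of the step stated there): if $\xi = 1/(a+x)$ componentwise, then
$$x = \frac{1}{\xi} - a = \frac{1 - a\xi}{\xi}, \qquad\text{hence}\qquad \frac{1}{x} = \frac{\xi}{1 - a\xi},$$
so the behaviour of $1/x$ is governed by that of $\xi$ once $a_n \xi_n \to 0$ $(n \to \infty)$.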

5.6 Some Applications Now we apply the results of the previous sections. We deal with the inclusion / F or F   M(F, F  ), F ⊂ E a + Fx , where either e ∈ since the case of e ∈ F and F  ⊂ M(F, F  ) was studied in [12]. The (SSIE) c0 ⊂ E a + Fx . Now we deal with the set Ia (E, c0 , F  ) of all solutions of the (SSIE) c0 ⊂ E a + for E, F  ∈ {c0 , c, s1 }. Then we deal with the (SSIE) c0 ⊂ ( p )a + Fx for F  ∈ {c0 , c, s1 } and p ≥ 1. Then we study the (SSIE) c0 ⊂ E a + Wx for E ∈ . We obtain the next results as a direct consequence of Theorem 4.16 and Lemmas 5.6 and 5.7. Fx

Proposition 5.7 Let a ∈ U + and let p ≥ 1. Then we have (i) Let E, F  ∈ {c0 , c, s1 }. Then (a) the condition a ∈ c0 implies Ia (E, c0 , F  ) = s1 ; (b) the condition a ∈ s1 implies Ia (E, c0 , F  ) = U + . (ii) Let F  ∈ {c0 , c, s1 }. Then (a) the condition a ∈ c0 implies Ia ( p , c0 , F  ) = s1 ; (b) the condition a ∈  p implies Ia ( p , c0 , F  ) = U + . (iii) Let E ∈ . Then


(a) the condition $a \in s^0_{(1/n)_{n=1}^\infty}$ implies $I_a(E, c_0, w_\infty) = \overline{w_\infty}$;
(b) $I_a(E, c_0, w_\infty) = U^+$ in the next cases

(α) a ∈ s1 for E ∈ {c0 , c, s1 }, (β) a ∈ E for E ∈ { p , w0 , w∞ }. Proof (i) We take H = s1 in Theorem 5.2. Then we have χ = M(F, F  ) = M(F, H ) = s1 for F  ∈ {c0 , c, s1 } and we obtain Ia (E, c0 , F  ) = s1 if a ∈ c0 . Then we have M(F, E) = M(c0 , E) = s1 for E ∈ {c0 , c, s1 } which shows Part (i). (ii) Again, with H = s1 , we obtain χ = M(c0 , F  ) = M(c0 , H ) = s1 for F  ∈ {c0 , c, s1 }. Since M(χ , c0 ) = c0 , we obtain Ia ( p , c0 , F  ) = s1 if a ∈ c0 for F  ∈ {c0 , c, s1 }. Then by Lemma 5.6, we obtain M(F, E) = M(c0 ,  p ) =  p and Ia ( p , c0 , F  ) = U + if a ∈  p . (iii) (iii) (a) Here we take H = F  = w∞ . So we obtain χ = M(c0 , F  ) = w∞ , and 0 since M(w∞ , c0 ) = s(1/n) ∞ , we obtain the statement in (iii) (a). n=1 (iii) (b) The cases (α) and (β) follow from Lemma 5.6, where M(c0 , E) = s1 for E ∈ {c0 , c, s1 } and M(c0 , E) = E for E ∈ { p , w0 , w∞ }.  The (SSIE) 1 ⊂ E a + Wx and  p ⊂ E a + Fx . Now we deal with the set Ia (E, 1 , w∞ ) of all solutions of the (SSIE) 1 ⊂ E a + Wx for E ∈ . Then we consider the set Ia (E,  p , F  ) of all solutions of the (SSIE)  p ⊂ E a + Fx with E, F  ∈ {c0 , c, s1 ,  p }. We obtain the following result. Proposition 5.8 Let a ∈ U + and let p ≥ 1. Then we have (i) Let E ∈ = {c0 , c, s1 ,  p , w0 , w∞ }. Then (a) the condition a ∈ s(1/n)∞ implies Ia (E, 1 , w∞ ) = s(n)∞ ; n=1 n=1 + (b) Ia (E, 1 , w∞ ) = U in the next cases 0 (α) a ∈ s1 and E ∈ {c0 , c, s1 ,  p }, (β) a ∈ s(n) and E = w0 ∞ n=1

and and E = w∞ . (γ ) a ∈ s(n)∞ n=1 (ii) Let E, F  ∈ {c0 , c, s1 ,  p }. Then

and

a ∈ c0 implies Ia (E,  p , F  ) = s1

(5.21)

a ∈ s1 implies Ia (E,  p , F  ) = U + .

(5.22)

5.6 Some Applications

255

Proof (i) (a) We take F  = H = w∞ in Theorem 5.2, so we have χ = M(1 , H ) = 0 by Lemma 5.7. Then we have M(χ , c0 ) = s(1/n) s(n)∞ ∞ . This shows (i) (a). n=1 n=1 (b) Now we show (i) (b) Case (α). By Lemma 5.7, we have M(1 , E) = s1 for E ∈ {c0 , c, s1 }. Then we consider the case E =  p . By Theorem 1.22 with X = 1 and Z = q we have β M(1 ,  p ) = M(1 , q ) = M(q , s1 ) = s1 . This concludes the proof of (i) (b) (α). Cases (β) and (γ ). We obtain these cases from Lemma 5.7, where M(1 , w0 ) = w0 and M(1 , w∞ ) = w∞ . Thus we have shown Part (i). (ii) We take H = s1 . So we obtain χ = M(1 , F  ) = M(1 , H ) = s1 for F  ∈ {c0 , c, s1 ,  p }, and since M(χ , c0 ) = c0 we obtain (5.21). Then by Lemma 5.6, we have M( p , E) = s1 for E ∈ {c0 , c, s1 ,  p }, which shows (5.22). This completes the proof.  The (SSIE) of w0 ⊂ E a + Fx . Now we study the set Ia (E, w0 , F  ) of all solutions of the (SSIE) w0 ⊂ E a + Fx in each of the cases E, F  ∈ {c0 , c, s1 } and E ∈ {c0 , c, s1 , w0 , w∞ } and F  ∈ {w0 , w∞ }. Proposition 5.9 Let a ∈ U + . Then we have (i) Let E, F  ∈ {c0 , c, s1 }. Then 0 (a) the condition a ∈ s(n) implies Ia (E, w0 , F  ) = s(1/n)∞ ; ∞ n=1 n=1  + ∞ (b) the condition a ∈ s(1/n)n=1 implies Ia (E, w0 , F ) = U .

(ii) Let E ∈ {c0 , c, s1 , w0 , w∞ } and F  ∈ {w0 , w∞ }. Then (a) the condition a ∈ c0 implies Ia (E, w0 , F  ) = s1 ; (b) the identity Ia (E, w0 , F  ) = U + holds in the next cases for E ∈ {c0 , c, s1 }; (α) a ∈ s(1/n)∞ n=1 (β) a ∈ s1 for E ∈ {w0 , w∞ }. Proof

(i) follows from Lemma 5.7, where we have M(w0 , Y ) = s(1/n)∞ for Y ∈ {c0 , c, s1 }. n=1

(5.23)

Then it is enough to apply Theorem 5.2 with H = s1 . (ii) Here we take H = w∞ . Then we have χ = M(w0 , F  ) = M(w0 , H ) = s1 for F  ∈ {w0 , w∞ }. The result is obtained from Lemma 5.7, where (5.23) holds and M + (w0 , Y ) = s1+ for Y ∈ {w0 , w∞ }. 

256

5 Sequence Spaces Inclusion Equations

The (SSIE) w∞ ⊂ E a + Fx . Now we study the set Ia (E, w∞ , s1 ) of all solutions of the (SSIE) w∞ ⊂ E a + sx with E ∈ {c0 , s1 }. Then we consider the (SSIE) w∞ ⊂ E a + Wx with E ∈ {c0 , s1 , w∞ }. We note that although e ∈ w∞ , Theorem 5.1 cannot be applied, since we have w∞  M(w∞ , F  ), where F  is either of the sets s1 or w∞ , and we have M(w∞ , s1 ) = s(1/n)∞ and M + (w∞ , w∞ ) = s1+ . We obtain the following result. n=1 Proposition 5.10 Let a ∈ U + . Then we have (i) Let E ∈ {c0 , s1 }. Then 0 (a) the condition a ∈ s(n) implies Ia (E, w∞ , s1 ) = s(1/n)∞ ; ∞ n=1 n=1 (b) the identity Ia (E, w∞ , s1 ) = U + holds in the next cases 0 (α) a ∈ s(1/n) for E = c0 ; ∞ n=1 ∞ (β) a ∈ s(1/n)n=1 for E = s1 .

(ii) Let E ∈ {c0 , s1 , w∞ }. Then (a) the condition a ∈ c0 implies Ia (E, w∞ , w∞ ) = s1 ; (b) the identity Ia (E, w∞ , w∞ ) = U + holds in the next cases 0 (α) a ∈ s(1/n) for E = c0 ; ∞ n=1 for E = s1 ; (β) a ∈ s(1/n)∞ n=1 (γ ) a ∈ s1 for E = w∞ . by Lemma 5.7. Proof (i) We take F  = H = s1 and χ = M(w∞ , H ) = s(1/n)∞ n=1 0 Then we have M(χ , c0 ) = s(n) ∞ . This shows (i) (a). The statements in (i) (a) and (i) n=1 0 (b) are direct consequences of Lemma 5.7, where we have M(w∞ , c0 ) = s(1/n) ∞ n=1 ∞ and M(w∞ , s1 ) = s(1/n)n=1 . (ii) Here we take H = w∞ , then by [14, Remark 3.4, p. 597], we have χ + = M + (w∞ , H ) = s1+ and Ia (E, w∞ , w∞ ) = s1 if a ∈ M(χ + , c0 ) = c0 for 0 E ∈ {c0 , s1 , w∞ , w∞ }. Then we have M(w∞ , c0 ) = s(1/n) ∞ , M(w∞ , s1 ) = s(1/n)∞ n=1 n=1 + + and M (w∞ , w∞ ) = s1 and we conclude by Theorem 5.2. This completes the proof.  The (SSIE) defined by c0 ⊂ Wa + Fx . We may add to the previous classes the (SSIE) c0 ⊂ Wa + Fx for which the previous theorems cannot be applied. We state the next result. Proposition 5.11 Let a ∈ U + and Ia (w∞ , c0 , F  ) be the set of all solutions of the (SSIE) c0 ⊂ Wa + Fx . We assume F  ⊂ s1 and M(c0 , F  ) = ∞ . Then we have

and

nan → 0 (n → ∞) implies Ia (w∞ , c0 , F  ) = s1

(5.24)

a ∈ w∞ implies Ia (w∞ , c0 , F  ) = U + .

(5.25)

5.6 Some Applications

257

Proof Let x ∈ Ia (w∞ , c0 , F  ) and assume nan → 0 (n → ∞). Then, since w∞ ⊂ and F  ⊂ s1 , we deduce c0 ⊂ s(nan )∞ + sx = s(nan +xn )∞ , which implies ((nan + s(n)∞ n=1 n=1 n=1 −1 ∞ xn ) )n=1 ∈ M(c0 , s1 ) = s1 . So there is K > 0 such that xn ≥ K − nan , and since nan → 0 (n → ∞), there is C > 0 such that xn ≥ C for all n. So we obtain the statement in (5.24).  Then a ∈ w∞ implies c0 ⊂ Wa and (5.25) holds. Corollary 5.13 Let a ∈ U + and F  ∈ {c0 , c, s1 }. Then (5.24) and (5.25) hold. From the results of Propositions 5.7, 5.8, 5.9 and 5.10, we obtain the next corollaries where r is a positive real. Corollary 5.14 (i) Let E, F  ∈ {c0 , c, s1 }. Then the solutions of the (SSIE) c0 ⊂ Er + Fx are determined by (5.11) with χ = s1 . (ii) Let E ∈ {c0 , c, s1 , w∞ }. Then the solutions of the (SSIE) c0 ⊂ Er + Wx are determined by (5.11) with χ = w∞ . Remark 5.11 We may illustrate Part (i) of Corollary 5.14 as follows. Let E, F  ∈ {c0 , c, s1 }. Then each of the next statements is equivalent to r < 1, where (a) Ir (E, c0 , F  ) = s1 . (b) For every x ∈ U + , we have c0 ⊂ Er + Fx if and only if c0 ⊂ Fx . (c) For every x ∈ U + , we have c0 ⊂ Er + Fx if and only if xn ≥ K for all n. Corollary 5.15 Let p ≥ 1. Then we have (i) Let E ∈ = {c0 , c, s1 ,  p , w0 , w∞ }. Then the solutions of the (SSIE) 1 ⊂ Er + . Wx are determined by (5.11) with χ = s(n)∞ n=1 (ii) Let E, F  ∈ {c0 , c, s1 ,  p }. Then the solutions of the (SSIE)  p ⊂ Er + Fx are determined by (5.11) with χ = s1 . Corollary 5.16 (i) Let E, F  ∈ {c0 , c, s1 }. Then the solutions of the (SSIE) w0 ⊂ . Er + Fx are determined by (5.12) with χ = s(1/n)∞ n=1 (ii) Let E, F  ∈ {w0 , w∞ }. Then the solutions of the (SSIE) w0 ⊂ Er + Fx are determined by (5.11) with χ = s1 . Corollary 5.17 (i) Let E ∈ {c0 , s1 }. Then the solutions of the (SSIE) w∞ ⊂ Er + . sx are determined by (5.12) with χ = s(1/n)∞ n=1 (ii) The solutions of the (SSIE) w∞ ⊂ Wr + Wx are determined by (5.11) with χ = s1 . Remark 5.12 By Corollaries 5.16 and 5.17, we easily see that the solutions of each of the (SSIE) w0 ⊂ c0 + Fx , w0 ⊂ c + Fx , w0 ⊂ ∞ + Fx , with F  ∈ {c0 , c, s1 }, and w∞ ⊂ c0 + sx , w∞ ⊂ ∞ + sx are determined by xn ≥ K n for all n and some K > 0. · Example 5.10 Let I1/2 (c, 1 , w∞ ) be the set of all positive sequences x that satisfy the next statement.

258

5 Sequence Spaces Inclusion Equations

 The condition ∞ k=1 |yk | < ∞ implies that there are sequences  u, v ∈ ω with y = u + v such that 2n u n → L (n → ∞) and n −1 nk=1 xk |vk | ≤ K for all n and for all sequences y and for some scalars L and K with K > 0. · (c, 1 , w∞ ) is associated with the (SSIE) 1 ⊂ c1/2 + W1/x , which is The set I1/2 · . This means that I1/2 (c, 1 , w∞ ) is the set of all positive equivalent to x ∈ s(n)∞ n=1  sequences x that satisfy xn ≤ K n for all n and some K  > 0. By Propositions 5.7, 5.9 and 5.10 and Corollary 5.13, we obtain the next corollary concerning the solvability of some (SSIE) of the form F ⊂ Er + Fx for r = 1. Corollary 5.18 Let p ≥ 1. Then we have (i) The solutions of each of the (SSIE) defined by c0 ⊂ ( p )r + Fx with F  ∈ {c0 , c, s1 } and by w∞ ⊂ Er + Wx with E ∈ {c0 , s1 } are determined by the set

s1 I = U+ 1

if r < 1 if r > 1.

(ii) The solutions of each of the (SSIE) defined by c0 ⊂ Er + Wx with E ∈ { p , w0 , w∞ } are determined by

I = 2

w∞ U+

if r < 1 if r > 1.

(iii) The solutions of each of the (SSIE) defined by w0 ⊂ Er + Fx with E ∈ {c0 , c, s1 }, F  ∈ {w0 , w∞ } are determined by

s1 I = U+ 3

if r < 1 if r > 1.

Example 5.11 We consider the system 

0 w0 ⊂ c + s1/x 1 ⊂ W1/2 + Wx .

Let S be the set of all positive sequences that satisfy the system. By Part (i) of Corollary 5.16 and Part (i) of Corollary 5.15, the solutions of the (SSIE) w0 ⊂ c + 0 are determined by x ∈ s(1/n)∞ and those of the (SSIE) 1 ⊂ W1/2 + Wx are s1/x n=1 . Since we have x ∈ s(n)∞ if and only if 1/x ∈ s(n)∞ and determined by x ∈ s(n)∞ n=1 n=1 n=1 ∈ s , we conclude (1/nxn )∞ 1 n=1 ∩ s(n)∞ = cl S = s(1/n)∞ n=1 n=1



 ∞ 1 n n=1

5.6 Some Applications

259

  K1 K2 ≤ xn ≤ for all n and some K 1 , K 2 > 0 . = x ∈ U+ : n n Now we deal with the next (SSIE’s) c0 ⊂ E a + sx0 and  p ⊂ E a + ( p )x for E ∈ {c0 , c, s1 ,  p }, w0 ⊂ E a + Wx0 for E ∈ {w0 , w∞ } and w∞ ⊂ Wa + Wx for E ∈ {c0 , c, s1 ,  p }. Then we determine the solvability of the next (SSIE’s) c0 ⊂ Er + sx0 for E ∈ {c0 , c, s1 , w∞ } and  p ⊂ Er + ( p )x for E ∈ {c0 , c, s1 ,  p }. For instance, the (SSIE)  p ⊂ sr + ( p )x is equivalent to the next statement.  p The condition ∞ are sequences u, v ∈ ω k=1 |yk | < ∞ implies that there p with y = u + v such that supn (|u n |/r n ) < ∞ and ∞ k=1 |vk |/x k ) < ∞ for all sequences y. In the case of F  = F and M(F, F) = s1 in Theorem 5.2, we immediately obtain the next corollary. Corollary 5.19 Let a ∈ U + , and let E, F be linear spaces of sequences. We assume the next statements hold (a) There is a linear space H ⊂ ω that satisfies the condition in (5.20) such that E, F ⊂ H ; (b) M(F, F) = M(F, H ) = s1 . Then we have (i) a ∈ c0 implies Ia (E, F) = s1 ; (ii) a ∈ M(F, E) implies Ia (E, F) = U + . Example 5.12 Here we use the set cs = c of all convergent series. Let a ∈ c0 and denote by Ia (cs, c0 ) the set of all positive sequences x that satisfy the next statement. The condition yn → 0 (n → ∞) implies that  there are sequences u, v ∈ ω such that y = u + v for which the series ∞ k=1 u k /ak is convergent and vn /xn → 0 (n → ∞) for all sequences y. This statement is associated with the (SSIE) c0 ⊂ csa + sx0 . Corollary 5.19 can be applied  with H = c0 , χ = s1 and Ia (cs, c0 ) = s1 . We have 1/a ∈ M(c0 , cs) if and 0 only if ∞ k=1 1/ak < ∞. So if 1/a ∈ cs, then the (SSIE) c0 ⊂ csa + sx is satisfied for all positive sequences x.

260

5 Sequence Spaces Inclusion Equations

When F = H , by Corollary 5.19 we obtain the next results. Corollary 5.20 Let a ∈ U + , and let E, F be linear spaces of sequences with E ⊂ F. We assume F satisfies the condition in (5.20), and M(F, F) = s1 . Then we have (i) a ∈ c0 implies Ia (E, F) = s1 ; (ii) a ∈ M(F, E) implies Ia (E, F) = U + . Corollary 5.21 Let a ∈ U + . Then (i) (a) Let E ∈ {c0 , c, s1 }. Then the solutions of the (SSIE) c0 ⊂ E a + sx0 are determined by

s1 if a ∈ c0 Ia (E, c0 ) = (5.26) U + if a ∈ s1 . (b) The solutions of the (SSIE) c0 ⊂ ( p )a + sx0 for p ≥ 1 are determined by

Ia ( p , c0 ) =

s1 U+

if a ∈ c0 if a ∈  p .

(ii) Let E ∈ {c0 , c, s1 ,  p }. Then the solutions of the next (SSIE)  p ⊂ E a + ( p )x are determined by

if a ∈ c0 s1 Ia (E,  p ) = + U if a ∈ s1 . (iii) Let E ∈ {w0 , w∞ }. Then the sets Ia (E, w0 ) and Ia (w∞ , w∞ ) of all solutions of each of the (SSIE) w0 ⊂ E a + Wx0 and w∞ ⊂ Wa + Wx satisfy Ia (E, w0 ) = Ia (w∞ , w∞ ) = Ia (E, c0 ) determined in (5.26). Proof (i) (a) It is enough to take H = s1 , in Corollary 5.19. So we have M(c0 , c0 ) = M(c0 , H ) = H . (b) It is enough to take H = c0 . (ii) We take H = s1 in Corollary 5.19. Again we obtain M( p ,  p ) = M( p , H ) = H . Since by Lemma 5.6, we have M( p , E) = s1 for E ∈ {c0 , c, s1 ,  p } we conclude for Part (ii). (iii) Here we take H = w∞ and apply Corollary 5.19 using the identities M(w0 , w0 ) = M(w0 , w∞ ) = M(w∞ , w∞ ) = s1 (cf. [14, Remark 3.4, p. 597]). This concludes the proof.  As a direct consequence of Corollary 5.21 and Proposition 5.11, we obtain the following.

5.6 Some Applications

261

Corollary 5.22 Let a ∈ c0+ and let p ≥ 1. Then each of the next (SSIE) c0 ⊂ E a + sx0 with E ∈ {c0 , c, s1 ,  p },

(i)

 p ⊂ E a + ( p )x with E ∈ {c0 , c, s1 ,  p },

(ii)

w0 ⊂ E a + Wx0 with E ∈ {w0 , w∞ }

(iii)

w∞ ⊂ Wa + Wx

(iv)

and

has the same set of solutions which is determined by Ia = s1 . Remark 5.13 We note that by Corollary 5.13, the solutions of the (SSIE) c0 ⊂ Wa + 0 sx0 are determined by Ia = s1 if a ∈ s(1/n) ∞ . n=1

We immediately deduce the following result from Corollaries 5.15, 5.16, 5.17, 5.13 and 5.14. Corollary 5.23 Let r > 0 and p ≥ 1. Then the solutions of the next (SSIE) (i) (ii) (iii) (iv) (v)

c0 ⊂ Er + sx0 for E ∈ {c0 , c, s1 }, c0 ⊂ Wr + Fx for F  ∈ {c0 , c, s1 },  p ⊂ Er + ( p )x for E ∈ {c0 , c, s1 ,  p }, w0 ⊂ Er + Wx0 for E ∈ {w0 , w∞ }, w∞ ⊂ Wr + Wx

are determined by (5.11) with χ = s1 . Remark 5.14 Let r > 0 and assume that F satisfies the condition in (5.20), E ⊂ F and M(F, F) = s1 . Then we have (i) the condition r < 1 implies Ir (E, F) = s1 ; + (ii) the condition (r −n )∞ n=1 ∈ M(F, E) implies Ir (E, F) = U . Remark 5.15 We easily deduce that under the hypotheses given in Remark 5.14 and the condition M(F, E) = M(F, F) = s1 , the (SSIE) defined by F ⊂ Er + Fx is totally solved and Ir (E, F) = Ir (E, s1 ) is defined by (5.11) with χ = s1 . It is the case of the (SSIE) defined by E ⊂ Er + E x for r > 0, E ∈ {c0 , s1 ,  p , w0 , w∞ } with p ≥ 1. We also note that, by Lemma 5.1, where E = c, the solutions of the (SSIE) c ⊂ cr + cx are defined by (5.11) with χ = c. Example 5.13 In this example, we use the set bv of sequences of bounded variation defined by

∞  bv = (1 ) = y ∈ ω : |yk − yk−1 | < ∞ k=1

262

5 Sequence Spaces Inclusion Equations

with the convention y0 = 0. For a ∈ c0 , we consider the (SSIE) c0 ⊂ bva + sx0 , which is associated with the next statement. The condition yn → 0 (n → ∞) implies that there are sequences u, v ∈ ω such that y = u + v for which  ∞    uk   − u k−1  < ∞ and vn → 0 (n → ∞) a ak−1  xn k k=1 for all sequences y. Since  ∈ (1 , s1 ), we deduce bv ⊂ s1 and we can apply Corollary 5.19 with H = s1 . Then we have Ia (bv, c0 ) = s1 . Finally we state a result on the solvability of the (SSE) Er + ( p )x =  p with E ∈ {c0 , c, s1 ,  p }. We use the set Ir (E, F) of all positive sequences x that satisfy Er + Fx ⊂ F. We recall that x ∈ Ir (E, F) if and only if (r n )∞ n=1 ∈ M(E, F) and x ∈ M(F, F). Proposition 5.12 Let r > 0. Then the set Sr (E,  p ) of all solutions of the (SSE) defined by Er + ( p )x =  p with E = c0 , c, s1 or  p for p ≥ 1 is determined by

Sr (E,  p ) = and

cl ∞ (e) if r < 1 for E = c0 , c or s1 ∅ if r ≥ 1

⎧ ∞ ⎪ ⎨cl (e) if r < 1 Sr ( p ,  p ) = s1+ if r = 1 ⎪ ⎩ ∅ if r > 1.

(5.27)

(5.28)

Proof We have Er + ( p )x ⊂  p if and only if (r n )∞ n=1 ∈ M(E,  p ) and x ∈ M( p ,  p ) = s1 . Now it can easily be seen that M(E,  p ) =  p for E = c0 , c, s1 , and M( p ,  p ) = s1 . Then we have Ir (E,  p )

s+ = 1 ∅

if r < 1 for E ∈ {c0 , c, s1 }. if r ≥ 1

Then we have Ir ( p ,  p )

=

s1+ ∅

if r ≤ 1 if r > 1.

Since Sr (E,  p ) = Ir (E,  p ) ∩ Ir (E,  p ) and using Corollary 5.23, we obtain the statements in (5.27) and (5.28). This concludes the proof. 

References

263

References 1. Farés, A., de Malafosse, B.: Sequence spaces equations and application to matrix transformations. Int. Math. Forum 3(19), 911–927 (2008) 2. de Malafosse, B.: On some BK space. Int. J. Math. Math. Sci. 58, 1783–1801 (2003) 3. de Malafosse, B.: Sum of sequence spaces and matrix transformations. Acta Math. Hung. 113(3), 289–313 (2006) 4. de Malafosse, B.: Application of the infinite matrix theory to the solvability of certain sequence spaces equations with operators. Mat. Vesn. 54(1), 39–52 (2012) 5. de Malafosse, B.: Applications of the summability theory to the solvability of certain sequence spaces equations with operators of the form B(r, s). Commun. Math. Anal. 13(1), 35–53 (2012) 6. de Malafosse, B.: Solvability of certain sequence spaces equations with operators. Demonstr. Math. 46(2), 299–314 (2013) 7. de Malafosse, B.: On some Banach algebras and applications to the spectra of the operator B( r , s) mapping in new sequence spaces. Appl. Math. Lett. 2, 7–18 (2014) 8. de Malafosse, B.: On the spectra of the operator of the first difference on the spaces Wτ and Wτ0 and application to matrix transformations. Gen. Math. Notes 22(2), 7–21 (2014) 9. de Malafosse, B.: Solvability of sequence spaces equations using entire and analytic sequences and applications. J. Ind. Math. Soc. 81(1–2), 97–114 (2014) 10. de Malafosse, B.: Solvability of sequence spaces equations of the form (E a ) + F x = Fb . Fasc. Math. 55, 109–131 (2015) 11. de Malafosse, B.: On new classes of sequence spaces inclusion equations involving the sets c0 , c,  p (1 ≤ p ≤ ∞), w0 and w∞ . J. Ind. Math. Soc. 84(3–4), 211–224 (2017) 12. de Malafosse, B.: Application of the infinite matrix theory to the solvability of sequence spaces inclusion equations with operators. Ukr. Math. Zh. 71(8), 1040–1052 (2019) 13. de Malafosse, B., Malkowsky, E.: Matrix transformations between sets of the form Wξ and operator generators of analytic semigroups. Jordanian J. Math. Stat. 1(1), 51–67 (2008) 14. de Malafosse, B., Malkowsky, E.: On sequence spaces equations using spaces of strongly bounded and summable sequences by the Cesàro method. Antarct. J. Math. 10(6), 589–609 (2013) 15. de Malafosse, B., Malkowsky, E.: On the solvability of certain (SSIE) with operators of the form B(r, s). Math. J. Okayama Univ. 56, 179–198 (2014) 16. de Malafosse, B., Rakoˇcevi´c, V.: Calculations in new sequence spaces and application to statistical convergence. Cubo A 12(3), 117–132 (2010) 17. de Malafosse, B., Rakoˇcevi´c, V.: Series summable (C, λ, μ) and applications. Linear Algebra Appl. 436(11), 4089–4100 (2012) 18. de Malafosse, B., Rakoˇcevi´c, V.: Matrix transformations and sequence spaces equations. Banach J. Math. Anal. 7(2), 1–14 (2013) 19. Maddox, I.J.: On Kuttner’s theorem. J. Lond. Math. Soc. 43, 285–290 (1968) 20. Malkowsky, E.: The continuous duals of the spaces c0 (λ) and c(λ) for exponentially bounded sequences λ. Acta Sci. Math. (Szeged) 61, 241–250 (1995) 21. Malkowsky, E.: Linear operators between some matrix domains. Rend. Circ. Mat. Palermo, Serie II, Suppl. 68, 641–655 (2002) 22. Malkowsky, E., Rakoˇcevi´c, V.: An introduction into the theory of sequence spaces and measures of noncompactness. Zbornik radova (Matematˇcki institut SANU) 9(17), 143–234 (2000). Mathematical Institute of SANU, Belgrade 23. Malkowsky, E., Rakoˇcevi´c, V.: The measure of noncompactness of linear operators between spaces of strongly C1 summable and bounded sequences. Acta Math. Hung. 89(1–2), 29–45 (2000)

Chapter 6

Sequence Space Equations

In this chapter, we extend the results of Chap. 5, and first consider the solvability of the equation F_x = F_b for a given positive sequence b, where x = (x_n)_{n=1}^∞ is the unknown positive sequence. The question then is what the solutions become when we add a new linear space of sequences E to the set F_x on the left-hand side of the above equation. So the new problem consists in solving the equation E + F_x = F_b and in determining whether or not its solutions are the same as above. Then we will consider the case when E is replaced by the matrix domain of an operator A in E. We refer to [5–7, 9] for more related results.

6.1 Introduction

We also solve the sequence spaces equations (SSE) E_a + F_x = F_b, where E and F are linear spaces of sequences of the form c_0, c, ℓ_∞, ℓ_p (p ≥ 1), w_0, w_∞, Γ or Λ. We recall that

Λ = {y ∈ ω : sup_n |y_n|^{1/n} < ∞} and Γ = {y ∈ ω : lim_{n→∞} |y_n|^{1/n} = 0}.

We frequently use the next elementary lemma.

Lemma 6.1 (i) For any α, K > 0, there is L > 0 such that α K^n ≤ L^n for all n.
(ii) For any α, K, K′ > 0, there is L′ > 0 such that K^n + α K′^n ≤ L′^n for all n.

Proof (i) It is enough to take L = αK if α > 1, and L = K if α ≤ 1.
(ii) is a consequence of Part (i), since there is L > 0 such that K^n + α K′^n ≤ K^n + L^n ≤ (K + L)^n for all n. □

We have for any given a ∈ U^+

[C(a)a]_n = (a_1 + · · · + a_n)/a_n for all n.

We recall that

Ĉ_1 = {a ∈ U^+ : sup_n [C(a)a]_n < ∞},
Ĉ = {a ∈ U^+ : C(a)a ∈ c}


and also use the set  = {a ∈ U + : [C(a)a]n ≤ k n for all n and some k > 0}. C
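For readers who want to experiment, here is a small numerical sketch (not part of the original text; the helper function and the geometric test sequences are our own illustrative choices) of the quotients [C(a)a]_n. It shows the behaviour behind the remark made just below that (R^n)_{n=1}^∞ belongs to Ĉ_1 if and only if R > 1: the quotients stay bounded for R > 1 and grow without bound for R ≤ 1.

```python
# Illustrative sketch (not from the book): compute [C(a)a]_n = (a_1 + ... + a_n) / a_n
# for a_n = R**n and inspect boundedness, which characterizes membership in C^_1.

from itertools import accumulate

def c_of_a_times_a(a):
    """Return the quotients [C(a)a]_n = (a_1 + ... + a_n) / a_n."""
    return [s / an for s, an in zip(accumulate(a), a)]

N = 30
for R in (0.5, 1.0, 2.0):
    a = [R**n for n in range(1, N + 1)]
    q = c_of_a_times_a(a)
    print(f"R = {R}: [C(a)a]_n at n = 10, 20, 30 -> {q[9]:.3f}, {q[19]:.3f}, {q[29]:.3f}")

# For R = 2 the quotients stay below 2 (bounded), for R = 1 they equal n,
# and for R = 0.5 they grow like 2**n, so a is not in C^_1 in the last two cases.
```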

1 , then a + b and ab ∈ C

1 . For any given R > 0, it can We note that if a, b ∈ C

∈ C if and only R > 1. By (4.11 in Proposition 4.3), easily be seen that (R n )∞ 1 n=1

1 implies that there are K > 0 and γ > 1 such that the condition a ∈ C an ≥ K γ n for all n. is equal to the set It is known that C  of all x ∈ U + with limn→∞ (xn−1 /xn ) < 1. Here we state some results that are consequences of [1, Proposition 2.1, p. 1786], [3, Proposition 9, p. 300], and of the fact that (sa ) ⊂ sa is equivalent to D1/a  Da ∈ S1

1 . and a ∈ C We also have the next elementary result.

1 Lemma 6.2 Let a, b ∈ U + and assume E a = E b , where E = c0 or ∞ . Then a ∈ C

1 . if and only if b ∈ C Now we state the next result. Proposition 6.1 Let a, b ∈ U + . Then (i) E a+b = E a + E b , where E is any of the sets c0 , ∞ ,  or . (ii) The following statements are equivalent where E = c0 or ∞ .

1 , (a) a ∈ C (b) (E a ) ⊂ E a , (c) (E a ) = E a . (iii) (iv)

(sa(c) ) = sa(c) if and only if a ∈ . (a)

We have a = b if and only if  a =  b , and the equality a = b is equivalent to the statement k1n ≤

an ≤ k2n for all n and some k1 , k2 > 0, bn

(6.3)

 . (b) a () = b if and only if a = b and a ∈ C Proof (i) As we have seen above in [9, Proposition 5.1, pp. 599–600], it was stated that if E satisfies the condition in (5.20), then E a + E b = E a+b for all a, b ∈ U + . (ii) and (iii) follow from Theorem 4.2. (iv) (a) We will see in Proposition 6.2 that M(,) = . So the condition  a =  b is equivalent to a/b, b/a ∈ M(,) = . This shows a = b if and only if a = b .


Now we show a = b if and only if (6.3) holds. We have that a ⊂b implies a ∈b and a/b ∈. Similarly b ⊂a implies b/a ∈. So we have shown that a = b implies (6.3). Now we show that (6.3) implies a = b . First (6.3) implies a/b ∈. Then, for any y ∈a , there is ξ ∈ such that y = aξ = b[(a/b)ξ ] ∈b and we have shown that a/b ∈ implies a ⊂b . Similarly we obtain that b/a ∈ implies b ⊂a . We conclude that (6.3) is equivalent to a = b . (b) Necessity. First, the identity (a ) = b is equivalent to −1 a = b , where −1 = .  We deduce ((−1)n bn )∞ n=1 ∈ b = (a ) and a ∈b . So there are k1 and k2 > 0 such that bn−1 + bn a1 + · · · + an ≤ k1n , ≤ k2n for all n an bn

(6.4)

with b0 = 0, and (6.3) holds with k1 = 1/k1 . So we have shown that (a ) = b implies a = b .  . Now we show that (a ) = b implies a ∈ C We have seen that (a ) = b implies (6.4). So we have 1 a1 + · · · + an a1 + · · · + an ≤ ≤ k2n for all n k1n an bn  . This shows the necessity of Part (b). and a ∈ C  and a = b . Sufficiency. We assume a ∈ C Then let y ∈ (a ) . We have y = z for some z ∈a . So there is h ∈ ∞ such that z n = an h nn and |yn | = an

n zk

a1 |h 1 | + · · · + an |h nn | a1 K + · · · + an K n ≤ an an an a1 + · · · + an ≤ max{K , K n } for all n and some K > 0. an k=1



 , we obtain Now by Lemma 6.1 and since a ∈ C |yn | ≤ k n max{K , K n } ≤ k n for some k and k  and all n, an and y ∈a . Conversely, let y ∈a . Then y ∈a . Again, by Lemma 6.1, we obtain


|y_n − y_{n−1}|/a_n ≤ (a_n|h_n|^n + a_{n−1}|h_{n−1}|^{n−1})/a_n ≤ K′^n + k^n K′^{n−1} = K′^n + (1/K′)(kK′)^n ≤ K″^n for all n and some K″ > 0.

This shows the sufficiency and concludes the proof. □

Now we recall the characterizations of the classes (c_0(p), c_0(q)) and (c_0(p), ℓ_∞(q)) and determine the sets M(Γ, F), M(Λ, F), M(E, Γ) and M(E, Λ) for E, F ∈ {c_0, c, ℓ_∞, Γ, Λ}. Let p = (p_n)_{n=1}^∞ ∈ U^+ ∩ ℓ_∞ be a sequence. In Example 1.1, we introduced the sets

ℓ_∞(p) = {y = (y_n)_{n=1}^∞ : sup_n |y_n|^{p_n} < ∞},
c_0(p) = {y = (y_n)_{n=1}^∞ : lim_{n→∞} |y_n|^{p_n} = 0}.

It was shown there that c_0(p) is a complete paranormed space with g(y) = sup_n |y_n|^{p_n/L}, where L = max{1, sup_n p_n} ([12, Theorem 6]), and that ℓ_∞(p) is a paranormed space with g only if inf_n p_n > 0, in which case ℓ_∞(p) = ℓ_∞ ([14, Theorem 9]). So we can state the next lemma, where for any given integer k, we denote by N_k the set of all integers n ≥ k.

Lemma 6.3 ([11, Theorem 5.1.13]) Let p, q ∈ U^+ ∩ ℓ_∞. Then
(i) A ∈ (c_0(p), c_0(q)) if and only if for all N ∈ N_1 there is M ∈ N_2 such that

sup_n N^{1/q_n} Σ_{k=1}^∞ |a_{nk}| M^{−1/p_k} < ∞ and lim_{n→∞} |a_{nk}|^{q_n} = 0 for all k.

(ii) A ∈ (c_0(p), ℓ_∞(q)) if and only if there is M ∈ N_2 such that

sup_n ( Σ_{k=1}^∞ |a_{nk}| M^{−1/p_k} )^{q_n} < ∞.


Example 6.1 In this way we have A ∈ (Γ, Λ) if and only if there is an integer M ≥ 2 such that sup_n ( Σ_{k=1}^∞ |a_{nk}| M^{−k} )^{1/n} < ∞, since Γ = c_0(p) and Λ = ℓ_∞(p) with p_n = 1/n for all n.

To show the next proposition we need the following lemma.

Lemma 6.4 ([5, Proposition 4.1, p. 104]) We have Λ = ∪_{r>0} s_r and Γ = ∩_{r>0} s_r.

Proof The proof of Λ = ∪_{r>0} s_r was given in [4]. We show Γ ⊂ ∩_{r>0} s_r. For this let a ∈ Γ. Then there is a sequence ε = (ε_n)_{n=1}^∞ ∈ c_0 such that a_n = ε_n^n for all n and

|a_n|/r^n = (ε_n/r)^n = o(1) (n → ∞) for all r > 0,

so a ∈ ∩_{r>0} s_r and Γ ⊂ ∩_{r>0} s_r. Conversely, let a ∈ ∩_{r>0} s_r. Then a ∈ s_r and |a_n| ≤ M_r r^n for all n, for all r > 0 and for some M_r > 0. Since for every r > 0 we have M_r^{1/n} → 1 (n → ∞), there is an integer N_r such that M_r^{1/n} ≤ 2 and |a_n|^{1/n} ≤ 2r for all n ≥ N_r. We conclude |a_n|^{1/n} → 0 (n → ∞) and a ∈ Γ.



Now we state the next result, where we have Γ_1 = Γ and Λ_1 = Λ.

Proposition 6.2 ([5, Proposition 4.1, p. 104]) We have
(i) M(Γ, F) = Λ for F ∈ {c_0, c, ℓ_∞, Γ, Λ};
(ii) M(Λ, F) = Γ for F ∈ {c_0, c, ℓ_∞, Γ};
(iii) M(E, Λ) = Λ for E ∈ {c_0, c, ℓ_∞, Γ, Λ};
(iv) M(E, Γ) = Γ for E ∈ {c_0, c, ℓ_∞, Λ}.

Proof (i) We show M(Γ, Γ) = Λ. By Part (i) of Lemma 6.3, we have a ∈ M(Γ, Γ), that is, D_a ∈ (Γ, Γ), if and only if for all integers N ≥ 1 there is an integer M ≥ 2 such that

|a_n|^{1/n} ≤ K M/N for all n and some K > 0.

This means that a ∈ Λ and we have shown M(Γ, Γ) = Λ. Now we show M(Γ, Λ) = Λ. By Part (ii) of Lemma 6.3, we have D_a ∈ (Γ, Λ) if and only if there is an integer M ≥ 2 such that

sup_n ( |a_n| M^{−n} )^{1/n} < ∞.

This means a ∈ Λ. So we have M(Γ, Λ) = Λ. Since Γ ⊂ c_0 ⊂ c ⊂ s_1 ⊂ Λ, we conclude

Λ = M(Γ, Γ) ⊂ M(Γ, F) ⊂ M(Γ, Λ) = Λ,

where F is any of the sets c_0, c or ℓ_∞. Thus we have shown Part (i).
(ii) We show Γ ⊂ M(Λ, Γ). Let a ∈ Γ. Then, for each y ∈ Λ, we have |a_n y_n|^{1/n} ≤ K |a_n|^{1/n} for all n and some K > 0. Since |a_n|^{1/n} → 0 (n → ∞), we conclude ay ∈ Γ. Now we show M(Λ, ℓ_∞) ⊂ Γ. By Lemma 6.4, we have Λ = ∪_{r>0} s_r and M(Λ, ℓ_∞) ⊂ M(s_r, ℓ_∞) for all r > 0. Hence we obtain that a ∈ M(Λ, ℓ_∞) implies |a_n| r^n ≤ K for all n ≥ 1, r > 0 and some K. This shows a ∈ ∩_{r>0} s_r = Γ. Then we have

Γ ⊂ M(Λ, Γ) ⊂ M(Λ, F) ⊂ M(Λ, ℓ_∞) ⊂ Γ,

where F is any of the sets c_0 or ℓ_∞.
(iii) We have Λ ⊂ M(Λ, Λ). Let a ∈ Λ. Then we have ay ∈ Λ for each y ∈ Λ, since |a_n y_n|^{1/n} = |a_n|^{1/n} |y_n|^{1/n} ≤ K K′ for all n. Now we obtain

Λ ⊂ M(Λ, Λ) ⊂ M(E, Λ) ⊂ M(Γ, Λ) = Λ,

where E is any of the sets c_0, c or ℓ_∞. This concludes the proof of Part (iii).
(iv) First we show M(c_0, Γ) ⊂ Γ. Let a ∈ M(c_0, Γ). Since Γ = ∩_{r>0} s_r by Lemma 6.4, we have a ∈ M(c_0, s_r) and (a_n r^{−n})_{n=1}^∞ ∈ M(c_0, s_1) = s_1 for all r > 0. This shows a ∈ ∩_{r>0} s_r = Γ and M(c_0, Γ) ⊂ Γ. Now we can write

Γ ⊂ M(Λ, Γ) ⊂ M(E, Γ) ⊂ M(c_0, Γ) ⊂ Γ

for E = c_0, c or ℓ_∞. This concludes the proof of Part (iv). □

6.2 The (SSE) Ea + Fx = Fb with e ∈ F In this section, we apply the previous results to the solvability of the (SSE) E a + Fx = Fb with e ∈ F and we state the next result, where we do not use the homomorphism in U + defined by a → E a . We also need the following conditions in the next result. e ∈ F,

(6.5)


and F ⊂ M(F, F).

(6.6)

Theorem 6.1 ([5, Theorem 5.1, pp. 106–107]) Let a, b ∈ U + and E and F be two linear subspaces of ω. We assume F satisfies the conditions in (5.9), (6.5) and (6.6), and (6.7) M(E, F) ⊂ M(E, c0 ). 

Then S(E, F) =

cl F (b) if a/b ∈ M(E, F) ∅ otherwise.

(6.8)

E a + Fx = Fb

(6.9)

E a + Fx ⊂ Fb

(6.10)

Fb ⊂ E a + Fx .

(6.11)

Proof First, trivially the equation

is equivalent to

and The identity in (6.9) implies (6.10), which is equivalent to a/b ∈ M(E, F) and x/b ∈ M(F, F). We show that if a/b ∈ M(E, F), then the inclusion in (6.11) is equivalent to Fb ⊂ Fx . Necessity. By the condition in (6.5), there are sequences ξ ∈ E and ξ  ∈ F such that b = aξ + xξ  , x  a ξ =e− ξ b b and b ξ = a . x e− ξ b By (6.7), we have a/b ∈ M(E, F) ⊆ M(E, c0 ) and (a/b)ξ ∈ c0 . Then we obtain bn ξn = ∼ ξn (n → ∞) xn 1 − abnn ξn and by (5.9) we have b/x ∈ F, but (6.6) successively implies b/x ∈ M(F, F) and Fb ⊂ Fx . Sufficiency. The converse is trivial since Fb ⊂ Fx implies Fb ⊂ E a + Fx . So we


have shown that for a/b ∈ M(E, F) the inclusion Fb ⊂ E a + Fx is equivalent to Fb ⊂ Fx . We conclude that the identity in Eq. (6.9) is equivalent to a/b ∈ M(E 1 , F) and Fx = Fb . Finally we have S(E, F) = ∅ if a/b ∈ / M(E 1 , F), since E a ⊂ E a + Fx = Fb implies E a ⊂ Fb and a/b ∈ M(E, F). This completes the proof.  We immediately deduce the following. Corollary 6.1 Let E and F be any of the sets c0 , c, ∞ ,  or  and assume e ∈ F, and M(E, F) ⊆ M(E, c0 ). Then S(E, F) is defined by (6.8). Proof The proof follows the same lines as above. Here we have F ⊂ M(F, F),  where F is any of the sets c0 , c, ∞ ,  or . Now we give an application. Let a, b ∈ U + . We consider the sets S(E, c), where E is any of the sets c0 , c, ∞ ,  or . We obtain the next results as a direct consequence of Theorem 6.1 and Corollary 6.1. Proposition 6.3 ([5, Proposition 5.1, p. 108]) Let a, b ∈ U + . Then  S(c0 , c) =  S(∞ , c) =  S(, c) =  S(, c) =

cl c (b) if a/b ∈ ∞ ∅ otherwise; cl c (b) if a/b ∈ c0 ∅ otherwise; cl c (b) if a/b ∈  ∅ otherwise; cl c (b) if a/b ∈  ∅ otherwise.

Now we give an example that illustrates the previous proposition. Example 6.2 We consider the set of all x ∈ U + such that for every sequence y we have yn → l1 (n → ∞) if and only if there are sequences u, v ∈ ω for which y = u + v and |u n |1/n → 0 and xn vn → l2 (n → ∞) for some l1 and l2 . (c) = c, by Proposition 6.3 it is equal Since this set corresponds to the equation  + s1/x to the set of all sequences that tend to a positive limit.
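The conditions in Proposition 6.3 are asymptotic conditions on the quotient a/b. The following sketch (the sequences and the prefix length are our own illustrative assumptions, not taken from the text) merely inspects such a quotient on a finite prefix; a finite prefix can of course only suggest, never prove, membership in ℓ_∞ or c_0.

```python
# Illustrative sketch: inspect a_n / b_n on a finite prefix.  Proposition 6.3
# reduces the solvability of E_a + c_x = c_b to conditions such as a/b in l_inf
# (for E = c_0) or a/b in c_0 (for E = l_inf).

N = 1000
a = [n for n in range(1, N + 1)]          # a_n = n
b = [n * n for n in range(1, N + 1)]      # b_n = n**2

q = [an / bn for an, bn in zip(a, b)]     # a_n / b_n = 1/n
print("sup over the prefix:", max(q))                      # bounded
print("last few values    :", [round(t, 5) for t in q[-3:]])  # tending to 0

# Here a/b = (1/n) appears to lie in c_0 (hence also in l_inf), so both
# S(c_0, c) and S(l_inf, c) would be cl^c(b), i.e. all x with x_n / b_n
# tending to a positive limit.
```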

In the case when E = F = c, we obtain the next theorem.


Theorem 6.2 ([5, Theorem 5.2, p. 108]) Let a, b ∈ U^+. Then

S(c, c) =
  cl^c(b)            if a/b ∈ c_0,
  s_b^{(c)} ∩ U^+    if a/b ∈ c \ c_0,
  ∅                  if a/b ∉ c.

Proof First let a/b ∈ c0 . Then x ∈ S(c, c) implies sx(c) ⊂ sb(c) and x ∈ sb(c) . Then since b ∈ sb(c) , the condition x ∈ S(c, c) also implies that there are sequences ϕ, ψ ∈ c such that b = aϕ + xψ. So we have ψ b = . x e − ab ϕ But since a/b ∈ c0 , it follows that (a/b)ϕ ∈ c0 and b/x ∈ c. Since x ∈ sb(c) , we conclude x ∈ cl c (b). Conversely, if sx(c) = sb(c) we deduce sa(c) + sxc) = sa(c) + sb(c) = sb(c) . So we have shown S(s (c) , s (c) ) = cl c (b) if a/b ∈ c0 . Now let a/b ∈ c \ c0 . Then an /bn → L = 0 (n → ∞) and sa(c) = sb(c) . So x ∈ S(c, c) implies (6.12) sa(c) + sx(c) = sb(c) + sx(c) = sb(c) . Then we have sx(c) ⊂ sb(c) and x ∈ sb(c) , hence S(c, c) ⊂ sb(c) . Conversely, if x ∈ sb(c) then sx(c) ⊂ sb(c) and as above we obtain (6.12) and x ∈ S(c, c). Thus we have shown S(c, c) = sb(c) . Now since sa(c) ⊂ sa(c) + sx(c) = sb(c) , we deduce a/b ∈ c. So if a/b ∈ / c then S(c, c) = ∅. This concludes the proof.  We also have the next proposition, where we write cl ∞ (b) = cl ∞ (b). Proposition 6.4 ([5, Proposition 5.2, p. 109]) Let a, b ∈ U + . Then  S(c0 , ∞ ) =  S(, ∞ ) =  S(, ∞ ) =

cl ∞ (b) if a/b ∈ ∞ ∅ otherwise; cl ∞ (b) if a/b ∈  ∅ otherwise; cl ∞ (b) if a/b ∈  ∅ otherwise.

We conclude with the next proposition. Proposition 6.5 ([5, Proposition 5.3, p. 109]) Let a, b ∈ U + . Then we have


 S(, ) =

cl  (b) if a/b ∈  ∅ otherwise.

We obtain the following result. Theorem 6.3 Let a, b ∈ U + and E and F be linear subspaces of ω. We assume (i) M(F, F) = χ satisfies condition (5.9), (ii) M(E, F) ⊂ M(χ , c0 ), (iii) There is a linear subspace H of ω that satisfies the condition in (5.20) and the conditions (a) and (b), where (a) (b)

E, F ⊂ H , M(F, F) = M(F, H ).

Then S(E, F) is defined by (6.8). Proof Let x ∈ S(E, F). Then we have E a + Fx ⊂ Fb and a/b ∈ M(E, F) and x/b ∈ M(F, F).

(6.13)

Then we have Fb ⊂ E a + Fx and by Part (iii) (a), we obtain Fb ⊂ Ha + Hx ⊂ Ha+x . This implies ξ=

b ∈ χ. a+x

Then it can easily be shown that b ξ = . x e − ab ξ So the conditions ξ ∈ M(F, F) and a/b ∈ M(E, F) ⊂ M(χ , c0 ) imply  a n ξn 1− =1 n→∞ bn lim

and b/x ∈ M(F, F). Using the condition in (6.13), we conclude a/b ∈ M(E, F) and x ∈ cl M(F,F) (b). Conversely, we assume a/b ∈ M(E, F) and x ∈ cl M(F,F) (b). Then we have E a + Fx = E a + Fb = Fb and x ∈ S(E, F). This concludes the proof.  We also have the following corollary.


Corollary 6.2 Let a, b ∈ U + and E and F be linear subspaces of ω with E ⊂ F. We assume (i) F satisfies the condition in (5.20), (ii) M(F, F) = χ satisfies the condition in (5.9), (iii) M(E, F) ⊂ M(χ , c0 ). Then the set of all the solutions of the (SSE) E a + Fx = Fb is determined by (6.8).

6.3 Some Applications In this section, we provide some more applications of the results in the previous section. First we need the next lemma. Lemma 6.5 Let p ≥ 1. Then we have (i) M(w∞ ,) = M( p , ) =  (ii) M(,w∞ ) = M( ,  p ) = . Proof (i) First we show M(w∞ ,) = . Since ∞ ⊂ w∞ ⊂, we have by Lemma 6.2  = M(, ) ⊂ M(w∞ , ) ⊂ M(∞ , ) =  and M(w∞ ,) = . Now we show M( p ,) = . First we have  = M(c0 , ) ⊂ M( p , ), since  p ⊂ c0 . Now we show M( p ,) ⊂ . ∈ M( p , s1 ), For this let r > 0. We have a ∈ M( p , sr ) if and only if D(an /r n )∞ n=1 ∈ s and a ∈ s . So we have M( , s ) = s . Since = ∩r >0 sr , that is, (an /r n )∞ 1 r p r r n=1 we conclude M( p ,) ⊂ M( p , sr ) for all r > 0 and M( p ,) ⊂. We conclude M( p ,) = . (ii) Since ∞ ⊂ w∞ ⊂, we obtain  = M(, ∞ ) ⊂ M(, w∞ ) ⊂ M(, ) = . This shows M(, w∞ ) = . Now we show M(,  p ) = . By Lemma 6.2, we have M(,) = M( ,) = . Then we have  = M(, ) ⊂ M(,  p ) ⊂ M(, ) = .


This completes the proof of Part (ii) and concludes the proof of the lemma.



As a direct consequence of Theorem 6.3 we obtain the next corollary. Corollary 6.3 Let a, b ∈ U + and p ≥ 1. Then we have (i) The sets of all solutions of the (SSE) E a + x =  b , where c0 ⊂ E ⊂ are determined by  cl  (b) if a/b ∈  S(E, ) = ∅ otherwise. (ii) The sets of all solutions of the (SSE) defined by sa(c) + sx0 = sb0 and sa + sx0 = sb0 satisfy S(c, c0 ) = S(s1 , c0 ) and are determined by  S(s1 , c0 ) = (iii)

cl ∞ (b) if a/b ∈ c0 ∅ otherwise.

Let p > 1 and let q = p/( p − 1). Then the solutions of the (SSE) ( p )a + (1 )x = (1 )b are determined by  cl ∞ (b) if a/b ∈ q S( p , 1 ) = ∅ otherwise.

Proof (i) We have χ = M(,) = ,  satisfies the condition in (5.9). Since c0 ⊂ E, we have M(E, ) ⊂ M(c0 , ) =  = M(χ , c0 ). For H = , we have χ = M(F, H ) = M( ,) =  and Theorem 6.3 can be applied. So we obtain Part (i). (ii) We limit our study to the case of the (SSE) sa(c) + sx0 = sb0 . Then we have χ = M(F, F) = M(c0 , c0 ) = ∞ and M(E, F) = M(c, c0 ) = c0 = M(χ , c0 ). Taking H = ∞ , we obtain M(F, H ) = M(c0 , ∞ ) = M(F, F) = c0 and Theorem 6.3 can be applied. The case of the (SSE) sa + sx0 = sb0 can be obtained in a similar way. So we have shown Part (ii). (iii) We apply Corollary 6.2 with χ = M(1 , 1 ) = ∞ and M(E, F) = M( p , 1 ) = q .

278

6 Sequence Space Equations

Then we obtain M(E, F) ⊂ c0 = M(χ , c0 ). If we take H = ∞ , then we have M(F, H ) = M( p , ∞ ) = ∞ = M(F, F). 

This completes the proof.

Example 6.3 We consider the statement n −1 |yn |1/n → 0 (n → ∞) if and only if there are sequences u and v for which y = u + v  n |vn | 1/n 1 |u k | → 0 and → 0 (n → ∞) for all y. n k=1 xn By Stirling’s formula, we have (n!)1/n ∼ ne−1 (n → ∞) and n −1 |yn |1/n → 0 if and . This statement is equivalent to only if (|yn |/n!)1/n → 0, that is, y ∈ (n!)∞ n=1 w0 +  x =  (n!)∞ n=1

(6.14)

and to x ∈ S(w0 ,). By Part (ii) of Corollary 6.3,the solutions of equation (6.14) are determined by K 1n n! ≤ xn ≤ K 2n n! for all n and some K 1 , K 2 > 0. Again as a direct consequence of Theorem 6.3 we obtain the next proposition. Proposition 6.6 Let a, b ∈ U + and p ≥ 1. Each of the (SSE) E a + ( p )x = ( p )b where E = c0 , c or ∞ is regular and  S(E,  p ) =

cl ∞ (b) if a/b ∈  p ∅ otherwise.

Proof We only consider the case E = ∞ , since the proofs of the other cases are similar. By Lemma 5.6, we have M( p ,  p ) = ∞ and by Lemma 5.7, we have M(E, F) = M(∞ ,  p ) =  p . and then M(E, F) ⊂ M(M(F, F), c0 ) = c0 . We take H = ∞ and obtain M(F, H ) = M( p , ∞ ) = ∞ = M( p ,  p ). This completes the proof. Now we consider the (SS E) E a + Wx0 = Wb0 with E ∈ {s1 , c}.




Here for given a, b ∈ U + , we determine the set of all x ∈ U + that satisfy the statement y ∈ Wb0 if and only if there are sequences u, v ∈ ω for which y = u + v, n 1  |vk | un → l and → 0 (n → ∞) an n k=1 xk

for some scalar l and all sequences y. Let  denote the set of all sequences τ ∈ U + such that τ ∈ w0 implies τ ∈ c0 . We note that (r n )∞ n=1 ∈  for all r > 0. Now we state the next theorem. Theorem 6.4 ([9, Theorem 7.1, pp. 604–605]) Let E = s1 or c and a/b ∈  . Then the set S(E, w0 ) of all positive sequences such that E a + Wx0 = Wb0 is determined by  cl ∞ (b) if a/b ∈ c0 S(E, w0 ) = ∅ if a/b ∈ / c0 . Proof Let x ∈ S(E, w0 ). Then we have E a + Wx0 ⊂ Wb0

(6.15)

Wb0 ⊂ E a + Wx0 .

(6.16)

a ∈ M(E 1 , w0 ) = w0 b

(6.17)

Wx0 ⊂ Wb0 .

(6.18)

and

The inclusion in (6.15) implies

and Since a/b ∈  , the condition in (6.17) implies a/b ∈ c0 . Then we have by [9, Remark 3.4, p. 597] M(w0 , w0 ) = ∞ , and the condition in (6.18) implies sx ⊂ sb and x ∈ sb . Now we consider the inclusion in (6.16). We have Wb0 ⊂ E a + Wx0 ⊂ Wa + Wx = Wa+x . Thus the condition in (6.19) implies Wb0 ⊂ Wa+x and 

bn an + x n

So there is K > 0 such that

∞ n=1

∈ M(w0 , w∞ ) = ∞ .

(6.19)


xn an ≥K− . bn bn Since a/b ∈ c0 , there is K 1 > 0 such that xn /bn ≥ K 1 > 0 for all n. We conclude that x ∈ S(E, w0 ) implies x ∈ cl ∞ (b) and a/b ∈ c0 . Conversely, we assume x ∈ cl ∞ (b) and a/b ∈ c0 . Then sx = sb which implies Wx0 = Wb0 and E a + Wx0 = E a + Wb0 . Now since M(E 1 , w0 ) = w0 for E = s1 or c by Lemma 5.7, it follows that a/b ∈ c0 implies a/b ∈ M(E 1 , w0 ) for E = s1 or c. Then E a ⊂ Wb0 and E a + Wb0 = Wb0 . This concludes the proof.  0 ∞ Example 6.4 Since (1/n)∞ n=1 ∈  , the solutions of the (SSE) c + W x = W(n)n=1 are determined by K 1 n ≤ xn ≤ K 2 n for all n and some K 1 , K 2 > 0.
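The defining condition of the sets W_x^0 used here and in Example 6.4 is a strong summability condition. The next sketch (our own illustration; the particular y and x are assumptions made only for the example) computes the means (1/n) Σ_{k≤n} |y_k|/x_k on a finite prefix.

```python
# Illustrative sketch: y belongs to W^0_x when (1/n) * sum_{k=1}^{n} |y_k| / x_k -> 0.
# With x_n = n and y_n = sqrt(n) the means behave like 2/sqrt(n), so y lies in W^0_x.
# Finite prefixes only show the trend; they do not prove the limit.

from math import sqrt

def strong_means(y, x):
    s, means = 0.0, []
    for n, (yn, xn) in enumerate(zip(y, x), start=1):
        s += abs(yn) / xn
        means.append(s / n)
    return means

N = 10_000
x = [float(n) for n in range(1, N + 1)]
y = [sqrt(n) for n in range(1, N + 1)]
m = strong_means(y, x)
print([round(m[i], 4) for i in (9, 99, 999, 9999)])   # visibly decreasing towards 0
```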

Now we consider the (SS E) Er + Wx0 = Wu0 . = For any real r > 0 and for any linear space E 1 , let Er denote the set E (r n )∞ n=1 0 0 E : for instance, we have W = W . Similarly we use the notation D(r n )∞ ∞ n e r (r )n=1 n=1 cl ∞ (u) = cl ∞ ((u n )∞ n=1 ). Theorem 6.5 ([9, Theorem 7.3, pp. 606–607]) Let r, u > 0 and E be a linear subset of ω. Also let  S(E, w0 ) be the set of the solutions of the (SSE) Er + Wx0 = Wu0 . We assume (i) E 1 ⊂ w∞ . (ii) For any real ρ > 0, the condition (ρ n )∞ n=1 ∈ M(E 1 , w0 ) holds if and only if ρ < 1. Then

 cl ∞ (u) if r < u  S(E, w0 ) = ∅ if r ≥ u.

Proof Let x ∈  S(E 1 , w0 ). Then we have Er + Wx0 ⊂ Wu0

(6.20)

Wu0 ⊂ Er + Wx0 .

(6.21)

and


The inclusion in (6.20) implies ∞  (r/u)n n=1 ∈ M(E 1 , w0 )

(6.22)

Wx0 ⊂ Wu0 .

(6.23)

and By (ii), the condition in (6.22) implies ρ = r/u < 1 and r < u, and as we have just seen, since M(w0 , w0 ) = ∞ , we obtain sx ⊂ su and x ∈ su . Now we consider the inclusion in (6.21). Then Wu0 ⊂ Er + Wx0 ⊂ Wr + Wx . Since , Wr + Wx = W(r n +xn )∞ n=1 we obtain Wu0 ⊂ W(r n +xn )∞ n=1 and



un r n + xn



∈ M(w0 , w∞ ) = ∞ .

n=1

So there is K > 0 such that xn /u n ≥ K − ρ n . Since ρ < 1, there is K 1 > 0 such that S(E 1 , W 0 ) implies x ∈ cl ∞ (u) and xn /u n ≥ K 1 > 0 for all n. We conclude that x ∈  r < u. Conversely, we assume x ∈ cl ∞ (u) and r < u. Then sx = su which implies Wx0 = Wu0 and Er + Wx0 = Er + Wu0 . Now by (ii), since ρ = r/u < 1, we have (ρ n )∞ n=1 ∈ M(E 1 , w0 ), Er ⊂ Wu0 and Er + Wu0 = Wu0 . This concludes the proof.  Corollary 6.4 ([9, Corollary 7.4, pp. 607–608]) Let r, u > 0. Then the sets  S(E, w0 ) of solutions of the (SSE) Er + Wx0 = Wu0 with E = ∞ , c or w∞ are given by



cl ∞ (u) if r < u  S(E, w0 ) = ∅ if r ≥ u and for E = w0 , we have ⎧ ∞ ⎪ ⎨cl (u)  S(w0 , w0 ) = su ∩ U + ⎪ ⎩ ∅

if r < u if r = u if r > u.

(6.24)


Proof 1- Case E = ∞ or c. It is enough to apply Theorem 6.5. First we have E 1 ⊂ w∞ and the condition in (i) of Theorem 6.5 is satisfied. Then we have a/b = (ρ n )∞ n=1 ∈  , since n −1 nk=1 ρ k → 0 implies 0 < ρ < 1. 2- Case E = w∞ . It is enough to show Theorem 6.5 with E = w∞ . We have (ρ n )∞ n=1 ∈ M(w∞ , w0 ). Since M(w∞ , w0 ) ⊂ M(∞ , (c0 )C1 ), we have C1 Dρ ∈ (∞ , c0 ). The matrix C1 Dρ is the triangle defined by [C1 Dρ ]nk = ρ k /n, for k ≤ n, and the condition C1 Dρ ∈ (∞ , c0 ) implies n 1 k ρ → 0 (n → ∞) n k=1

and ρ < 1. Conversely, if 0 < ρ < 1, then we have nρ n → 0 (n → ∞) and +  0 = M + (w∞ , w0 ) ⊂ M + (w∞ , w∞ ) . (ρ n )∞ n=1 ∈ s(1/n)∞ n=1 We conclude the condition in (ii) of Theorem 6.5 is satisfied for E = w∞ . 3- Case E = w0 . The (SSE) Wr0 + Wx0 = Wu0 is equivalent to Wu0 = W(r0 n +xn )∞ n=1 and to su = s(r n +xn )∞ = sr + sx . We conclude by [10, Corollary 12, p. 918]. n=1  Example 6.5 It can easily be shown that the solutions of the (SSE) w∞ + Wx0 = W20 are determined by K 1 2n ≤ xn ≤ K 2 2n for all n and for some K 1 and K 2 > 0. Example 6.6 For u > 1, the solutions of the (SSE) c + Wx0 = Wu0 are determined by K 1 u n ≤ xn ≤ K 2 u n for all n and for some K 1 and K 2 > 0. If u ≤ 1, then the (SSE) has no solution. Example 6.7 The statement y ∈ w0 if and only if there are sequences u and v ∈ ω for which y = u + v, n 1  |vk | n 2 u n → l and → 0 (n → ∞) n k=1 xk for some scalar l and for all sequences y (c) is equivalent to the (SSE) s1/2 + Wx0 = w0 . The solutions of the (SSE) are determined by K 1 ≤ xn ≤ K 2 for all n and for some K 1 and K 2 > 0.
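Conditions of the form K_1 u^n ≤ x_n ≤ K_2 u^n, as in Examples 6.5 and 6.6, amount to saying that the ratios x_n/u^n stay between two positive constants. A small sketch makes this check explicit; the candidate sequence x below is our own choice, used only to show the computation.

```python
# Illustrative sketch: two-sided estimates K1 * u**n <= x_n <= K2 * u**n just say
# that the ratios x_n / u**n stay between two positive constants.

def two_sided_constants(x, u):
    ratios = [xn / u**n for n, xn in enumerate(x, start=1)]
    return min(ratios), max(ratios)

N = 60
u = 2.0
x = [(3.0 + (-1) ** n) * u**n for n in range(1, N + 1)]   # x_n = (3 +/- 1) * 2**n

K1, K2 = two_sided_constants(x, u)
print(f"ratios stay in [{K1}, {K2}]")
# Here the ratios stay in [2.0, 4.0], so this x would be a solution of the (SSE)
# of Example 6.5 according to the description given there.
```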

We have the next result. Lemma 6.6 Let a, b ∈ U + and E and F be any of the sets c0 , c, ∞ ,  or . Then the set S of all x ∈ U + , which satisfy the system E a + Fx = Fb and Fb ⊂ Fx , is determined by S = S(E, F) defined in (6.8).


Proof First the (SSE) E a + Fx = Fb implies E a ⊂ Fb and Fx ⊂ Fb , that is, a/b ∈ M(E, F) and x/b ∈ M(F, F). Since Fb ⊂ Fx , we conclude that x ∈ S implies a/b ∈ M(E, F) and x ∈ cl F (b). Conversely, we assume a/b ∈ M(E, F) and x ∈ cl F (b). Then we have E a + Fx = E a + Fb = Fb , and x ∈ S. This completes the proof.



We close this section with the solvability of the (SS E) E a + (Fx ) = Fb , where Fb ⊂ Fx and E, F ∈ {c0 , c, ∞ , , }. For a, b ∈ U + , let   S(E, F) = x ∈ U + : E a + (Fx ) = Fb and Fb ⊂ Fx , where E and F are any of the sets c0 , c, ∞ ,  or . For instance, if E = F = c, then the condition x ∈ S(c, c) means that the statements in (a) and (b) hold, where (a) bn /xn → l1 (n → ∞), for some l1 ∈ C, (b) yn /bn → l (n → ∞) if and only if there are sequences u and v ∈ ω such that y = u + v and vn un → l  and → l  (n → ∞) an xn for some scalars l, l  and l  and for all sequences y ∈ ω. We obtain the next result. Theorem 6.6 Let E and F be any of the sets c0 , c, ∞ ,  or . (i) We assume M(F, F) = ∞ . 1 , then S(E, F) = ∅. (a) If b ∈ /C 1 , then (b) If b ∈ C  S(E, F) = (ii) We assume F = c. (a) If b ∈ /  , then S(E, F) = ∅. (b) If b ∈  , then

cl ∞ (b) if a/b ∈ M(E, F) ∅ otherwise.


 S(E, F) = (iii)

cl c (b) if a/b ∈ M(E, c) ∅ otherwise.

We assume F = .

 , then S(E, F) = ∅. (a) If b ∈ /C  , then (b) If b ∈ C  S(E, F) =

cl  (b) if a/b ∈ M(E, ) =  ∅ otherwise.

Proof (i) First we note that the condition M(F, F) = ∞ implies F = s1 or c0 . Let 1 . By x ∈ S(E, F). We successively obtain (Fx ) ⊂ Fx , D1/x  Dx ∈ S1 and x ∈ C Part (ii) of Proposition 6.1, we have (Fx ) = Fx , E a + Fx = Fb and Fb ⊂ Fx . By 1 , a/b ∈ Lemma 6.6, we have a/b ∈ M(E, F) and x ∈ cl ∞ (b). We conclude x ∈ C M(E, F) and Fx = Fb . Since F = c0 or ∞ , the identity Fx = Fb is equivalent 1 implies b ∈ C 1 . So we have shown that to sx = sb and by Lemma 6.2, x ∈ C x ∈ S(E, F) implies 1 , b∈C

a ∈ M(E, F) and Fx = Fb . b

(6.25)

1 and Fx = Fb Now we show that (6.25) implies x ∈ S(E, F). The conditions b ∈ C together imply x ∈ C1 . So we have E a + (Fx ) = E a + Fx = E a + Fb , and since a/b ∈ M(E, F), we successively obtain E a ⊂ Fb , E a + Fb = Fb and E a + (Fx ) = Fb . We conclude x ∈ S(E, F) if and only if the conditions in (6.25) hold. So we have shown Part (i). (ii), (iii) Parts (ii) and (iii) can be shown similarly. We note that for Part (ii), we have F = c and M(F, F) = c. 

6.4 The (SSE) with Operators In this section, we deal with a class of (SSE) with operators of the form E T + Fx = Fb , where T is either  or  and E is any of the sets c0 , c, ∞ ,  p for p ≥ 1, w0 ,  or  and F = c, ∞ or . For instance, the solvability of the (SSE) with operator defined by the equation   +x = b consists in determining the set of all positive sequences x = (xn )∞ n=1 that satisfy the statement:


supn ((|yn |/bn )1/n < ∞ if and only if there are sequences u and v ∈ ω with y = u + v such that n    1/n |vn | 1/n u k = 0 and sup lim 0. We also note that the set S (c, c) is not regular since we have by Theorem 6.2 S(c, c) = cl c (b) for 1/b ∈ c0 ; S(c, c) = cb for 1/b ∈ c \ c0 , and S(c, c) = ∅ for 1/b ∈ / c. As a direct consequence of Theorem 6.1 we obtain the following result. Lemma 6.8 Let b ∈ U + and E and F be two linear subspaces of ω. We assume that F satisfies the conditions in (5.9), (6.5), (6.6) and that (6.7) holds. Then S(E, F) is regular, that is,  cl F (b) if 1/b ∈ M(E, F) S(E, F) = ∅ if 1/b ∈ / M(E, F). As a direct consequence of the preceding results and using Lemma 6.8, we obtain the next results. Lemma 6.9 Let b ∈ U + and p ≥ 1. Then each of the next (SSE) is regular, where (i) +x = b ; (ii) E + cx = cb for E = , , c0 , ∞ w0 and  p ; (iii) E + sx = sb for E = , , c0 , w0 and  p . Proof (i) follows from Proposition 6.5. (ii) follows from Proposition 6.3 for E = , , c0 , ∞ . The case E = w0 is a direct consequence of Theorem 6.1 and of the equalities M(w0 , c) = M(w0 , c0 ) = . The case E =  p follows from Theorem 6.1 and the identity M( p , c) = s(1/n)∞ n=1 ∞ , where p ≥ 1. (iii) follows from Proposition 6.4 for E = , , c0 . The case E = w0 is a direct , and we conclude by Theorem consequence of the identity M(w0 , s1 ) = s(1/n)∞ n=1 6.1. The case E =  p follows from Theorem 6.1 and the identity M( p , ∞ ) = ∞ where p ≥ 1. This concludes the proof.  More precisely we obtain the following lemma which is a direct consequence of Lemma 6.9. Lemma 6.10 Let b ∈ U + . Then we have (i)

 S(∞ , c) =

cl c (b) if 1/b ∈ c0 ∅ otherwise.

(b) Let F be any of the sets c, s1 or . Then we have  S(, F) = (ii) We have for F = c or s1 :

cl F (b) if 1/b ∈  ∅ otherwise.

(a)


(a) Let p ≥ 1. We have S( p , F) = S(c0 , F) and  S(c0 , F) =  S(w0 , F) =

cl F (b) if 1/b ∈ s1 ∅ otherwise.

cl F (b) if 1/b ∈ s(1/n)∞ n=1 ∅ otherwise;

 cl F (b) if 1/b ∈  S(, F) = ∅ otherwise;

(b)

(b)

Remark 6.1 The results for S( p , c) and S( p , ∞ ) follow from Lemma 5.6, where M( p , c) = M( p , ∞ ) = ∞ . Example 6.8 We consider the set of all x ∈ U + that satisfy the statement: for every sequence y, we have yn → l1 (n → ∞) if and only if there are sequences u and v ∈ ω for which y = u + v and |u n |1/n → 0 and xn vn → l2 (n → ∞) for some l1 and l2 . (c) = c, by Lemma 6.3 it is equal to Since this set corresponds to the equation +s1/x the set of all sequences that tend to a positive limit.

Example 6.9 It can easily be shown that the solutions of the (SSE) w∞ + Wx0 = W20 are determined by K 1 2n ≤ xn ≤ K 2 2n for all n and for some K 1 and K 2 > 0. Example 6.10 For u > 1, the solutions of the (SSE) c + Wx0 = Wu0 are determined by K 1 u n ≤ xn ≤ K 2 u n for all n and some K 1 and K 2 > 0, and if u ≤ 1, then the (SSE) has no solution. (c) Example 6.11 The (SSE) s1/2 + Wx0 = w0 is equivalent to the statement

y ∈ w0 if and only if there are sequences u and v ∈ ω for which y = u + v, n 1  |vk | 2n u n → l and → 0 (n → ∞) n k=1 xk for some scalar l and for all sequences y. The solutions of the (SSE) are determined by K 1 ≤ xn ≤ K 2 for all n and some K 1 and K 2 > 0.


Now we give an application to the solvability of the (SSE) E T + Fx = Fb with e ∈ F. Let b ∈ U + , and E and F be two subsets of ω. We deal with the (SSE) with operators (6.27) E T + Fx = Fb , where T is a triangle and x ∈ U + is the unknown. The equation in (6.27) means for every y ∈ ω, we have y/b ∈ F if and only if there are sequences u and v ∈ ω such that y = u + v such that T u ∈ E and v/x ∈ F. We assume e ∈ F. By S(E T , F), we denote the set of all x ∈ U + that satisfy the (SSE) in (6.27). We obtain the next result which is a direct consequence of Lemma 6.8, where we replace E by E T . Proposition 6.7 ([6, Proposition 6.1, p. 94–95]) Let b ∈ U + and let E and F be linear vector spaces of sequences. We assume F satisfies the conditions (i) e ∈ F, (ii) F ⊂ M(F, F), (iii) F satisfies condition (5.9) and M(E T , F) ⊂ M(E T , c0 ).

(6.28)

Then the set S(E T , F) is regular, that is,  S(E T , F) =

cl F (b) if 1/b ∈ M(E T , F) ∅ if 1/b ∈ / M(E T , F).

We may adapt the previous result using the notations of matrix transformations instead of the multiplier of sequence spaces. So we obtain the following. Corollary 6.5 ([6, Corollary 6.1, p. 94–95]) Let b ∈ U + and let E and F be linear spaces of sequences. We assume F satisfies conditions (i), (ii) and (iii) in Proposition 6.7 and that Dα T −1 ∈ (E, F) implies Dα T −1 ∈ (E, c0 ) for all α ∈ ω. 

Then we have S(E T , F) =

cl F (b) if D1/b T −1 ∈ (E, F) / (E, F). ∅ if D1/b T −1 ∈

(6.29)


Proof This result is a direct consequence of Proposition 6.7 and of the fact that the condition 1/b ∈ M(E T , F) is equivalent to D1/b ∈ (E T , F) and to D1/b T −1 ∈ (E, F). 

6.5 Some (SSE’s) with the Operators  and  In this section, we give some applications to the solvability of the (SS E  s) E  + Fx = Fb and E  + Fx = Fb . We apply Proposition 6.7 and Lemma 6.10 to solve (SSE) of the form E T + Fx = Fb in each of the cases T =  and T = . We obtain a class of (SSE) that are regular, that is, for which S(E, F) is regular. Now we solve each of the (SS E  s) (c0 ) + cx = cb and (c0 ) + sx = sb . The solvability of the first (SSE) means that for every y ∈ ω we have yn /bn → l1 (n → ∞) if and only if there are sequences u and v ∈ ω such that y = u + v and u n − u n−1 → 0 and

vn → l2 (n → ∞) xn

for some scalars l1 and l2 . Proposition 6.8 ([6, Proposition 7.1, p. 95]) Let b ∈ U + and F = c or ∞ . Then we have  cl F (b) if 1/b ∈ s(1/n)∞ n=1 S((c0 ) , F) = ∅ if 1/b ∈ / s(1/n)∞ . n=1 Proof The condition α ∈ M((c0 ) , s1 ) means Dα  ∈ (c0 , s1 ) = S1 and is equiva. In the same way, by the lent to nαn = O(1) (n → ∞). So M((c0 ) , s1 ) = s(1/n)∞ n=1 . Then we have characterization of (c0 , c0 ), we obtain M((c0 ) , c0 ) = s(1/n)∞ n=1 = M((c0 ) , c0 ) ⊂ M((c0 ) , c) ⊂ M((c0 ) , s1 ) = s(1/n)∞ , s(1/n)∞ n=1 n=1 for F = s1 , c or c0 . We conclude by Proposition 6.7.  and M((c0 ) , F) = s(1/n)∞ n=1 Example 6.12 Let α ≥ 0. The (SSE) (c0 ) + cx = c(n α )∞ has solutions if and only n=1 if α ≥ 1. These solutions are determined by limn→∞ xn /n α > 0 (n → ∞). If 0 ≤ α < 1, then the (SSE) has no solution. We note that the (SSE) (c0 ) + cx = c has no solution.
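A quick numerical illustration of Example 6.12 (the code and the prefix length are our own choices): the condition 1/b ∈ s_{(1/n)} from Proposition 6.8 says that n/b_n is bounded, which for b_n = n^α holds exactly when α ≥ 1.

```python
# Illustrative sketch for Example 6.12: the solvability condition behind
# (c_0)_Delta + c_x = c_b is 1/b in s_{(1/n)}, i.e. sup_n n / b_n < infinity.
# For b_n = n**alpha this holds exactly when alpha >= 1.  A finite prefix can
# only hint at (un)boundedness, so this is a heuristic check, not a proof.

N = 100_000
for alpha in (0.5, 1.0, 1.5):
    sup_prefix = max(n / n**alpha for n in range(1, N + 1))
    print(f"alpha = {alpha}: sup of n / n**alpha over the prefix = {sup_prefix:.1f}")

# alpha = 0.5 -> grows like sqrt(N) (unbounded, no solutions),
# alpha = 1.0 or 1.5 -> bounded by 1 (solutions are the x with x_n / n**alpha -> l > 0).
```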


Example 6.13 Let u > 0. Then the set of all positive sequences x that satisfy the (SSE) (c0 ) + sx = su is empty if u ≤ 1, and if u > 1 it is equal to the set of all sequences that satisfy K 1 u n ≤ xn ≤ K 2 u n for all n and some K 1 and K 2 > 0. Now we solve each of the (SS E  s) bv p + cx = cb and bv p + sx = sb , where   bv p =  p  =

 y∈ω:

∞ 

 |yk − yk−1 | p < ∞

for 0 < p < ∞

k=1

is the set of all sequences of p-bounded variation. The solvability of the second (SSE) consists in determining the set of all positive sequences x, such that the next statement holds. For every y ∈ ω, we have supn (|yn |/bn ) < ∞ if and only if there are sequences u and v ∈ ω with y = u + v such that ∞ 

 |u n − u n−1 | p < ∞ and sup n

k=1

|vn | xn

< ∞.

We obtain the next proposition. Proposition 6.9 ([6, Proposition 7.2, p. 96]) Let b ∈ U + , p > 1 and q = p/( p − 1). We have for F = c or ∞ ⎧  1/q ∞ n ⎪ F ⎪ ⎪ cl (b) if ∈ s1 ⎨ bn n=1 S(bv p , F) =  1/q ∞ ⎪ n ⎪ ⎪ ∅ if ∈ / s1 . ⎩ bn n=1 Proof We have α ∈ M(bv p , ∞ ) if and only if Dα  ∈ ( p , ∞ ). We obtain from the characterization of ( p , ∞ ) in (1.1) of 1 in Theorem 1.23 n|αn |q = O(1) (n → ∞).

(6.30)

. Now we obtain α ∈ M(bv p , c0 ) if and only if So we have M(bv p , ∞ ) = s(n −1/q )∞ n=1 (6.30) holds and (6.31) αn → 0 (n → ∞). But trivially the condition in (6.30) implies that in (6.31). So we have = M(bv p , c0 ) ⊂ M(bv p , c) ⊂ M(bv p , ∞ ) = s(n −1/q )∞ s(n −1/q )∞ n=1 n=1


and M(bv p , F) = s(n −1/q )∞ for F = c0 , c or ∞ . We apply Proposition 6.7, where n=q means (n 1/q /bn )∞ the condition 1/b ∈ M(bv p , ∞ ) = s(n −1/q )∞ n=1 ∈ s1 . n=1 This concludes the proof.  Example 6.14 The (SSE) defined by bv2 + cx = c has no solution since q = 2 and √ / s1 . ( n/bn )∞ n=1 ∈ Example 6.15 Let p > 1 and r > 0. The set S = S(bv p , c) of all solutions of the is empty if r < ( p − 1)/ p. If r ≥ ( p − 1)/ p, then it is (SSE) bv p + cx = c(nr )∞ n=1 determined by limn→∞ (xn /n r ) > 0. For any given r = 1, we have S = ∅ if and only if p ≤ 1/(1 − r ). Solvability of the (SSE) (w0 ) + Fx = Fb . Here a positive sequence x is a solution of the (SSE) (w0 ) + cx = cb if the next statement holds. For every y ∈ ω, we have yn /bn → l1 (n → ∞) if and only if there are sequences u and v ∈ ω with y = u + v such that n 1 vn |u k − u k−1 | → 0 and → l2 (n → ∞) n k=1 xn

for some scalars l1 and l2 . We obtain a similar statement for the (SSE) (w0 ) + sx = sb . The next proposition holds. Proposition 6.10 ([6, Proposition 7.3, p. 98]) Let b ∈ U + . Then we have for F = c or ∞  cl F (b) if 1/b ∈ s(1/n)∞ n=1 S((w0 ) , F) = S((c0 ) , F) = S(w0 , F) = ∅ otherwise. Proof We have α ∈ M((w0 ) , ∞ ) if and only if Dα  ∈ (w0 , ∞ ).

(6.32)

2νn ≤ n ≤ 2νn +1 − 1.

(6.33)

Now we define the integer νn by

Then, by the characterization of (w0 , ∞ ), the condition in (6.32) means there is K > 0 such that


σn =

∞  ν=0



max

2ν ≤k≤2ν+1 −1

|(Dα )nk | = |αn |

νn 

  2ν = αn | 2νn +1 − 1 ≤ K for all n.

ν=0

(6.34)

Then we have from (6.33) Dα  ∈ (w0 , ∞ ) if and only if   n|αn | ≤ 2νn +1 − 1 |αn | ≤ K for all n and some K > 0, . and we obtain M((w0 ) , ∞ ) ⊂ s(1/n)∞ n=1 ⊂ M((w ) ,  ). Now we show s(1/n)∞ 0  ∞ n=1 . Then we have n|αn | ≤ K for all n, and it follows from by (6.34) Let α ∈ s(1/n)∞ n=1 and (6.33) that   σn = 2νn +1 − 1 |αn | ≤ (2n − 1)|αn | ≤ 2K for all n. ⊂ M((w0 ) , ∞ ) and M((w0 ) , ∞ ) = s(1/n)∞ . This shows s(1/n)∞ n=1 n=1 . Then we have We obtain by similar arguments as above M((w0 ) , c0 ) = s(1/n)∞ n=1 = M ((w0 ) , c0 ) ⊂ M ((w0 ) , c) ⊂ M ((w0 ) , ∞ ) = s(1/n)∞ . s(1/n)∞ n=1 n=1 for F = c0 c or ∞ . We conclude by PropoFinally, we have M((w0 ) , F) = s(1/n)∞ n=1 sition 6.7 and Lemma 6.10. This completes the proof.  Example 6.16 The (SSE) (w0 ) + cx = c has no solution. Example 6.17 The solutions of the (SSE) (w0 ) + sx = s(n)∞ are determined by n=1 K 1 n ≤ xn ≤ K 2 n for all n and some K 1 , K 2 > 0. Now we deal with the solvability of the (SSE) with an operator  + Fx = Fb . Proposition 6.11 ([6, Proposition 7.4, p. 99]) Let b ∈ U + . We have for F = c or s1  cl F (b) if 1/b ∈  S( , F) = S(, F) = ∅ otherwise.  . Indeed, we have n ≤ Proof By Lemma 6.1,  ∈ (,) is bijective, since e ∈ C K n for all n and some K > 1, hence  = . Also it follows from Lemma 6.2 that M( , F) = M(, F) =  for F = c0 c or s1 and we apply Lemma 6.10. This concludes the proof.  Now we solve the (SSE’s) E  + Fx = Fb , where E ∈ {c, c0 , w0 , , ,  p } for p > 1, and F ∈ {c, ∞ },


  + x = b and (∞ ) + cx = cb . First we deal with the (SSE’s) χ + Fx = Fb , where χ ∈ {cs, bs, cs0 } and ( p ) + Fx = Fb , where F ∈ {c, ∞ }. For instance, x is a solution of the (SSE) cs + cx = cb if the next statement holds. For every y ∈ ω, we have yn /bn → l1 (n → ∞) if and only

if there are sequences u and v ∈ ω with y = u + v and the series ∞ k=1 u k is convergent and vn /xn → l2 (n → ∞) for some scalars l1 and l2 . Proposition 6.12 ([6, Proposition 7.5, p. 99–100]) Let b ∈ U + . Then we have  S(bs, c) = S(∞ , c) =

cl c (b) if 1/b ∈ c0 ∅ otherwise.

(i)

For F = c or ∞ , we have S(cs, F) = S(cs0 , F) = S(( p ) , F) = S(c0 , F) with p ≥ 1, and

 S(c0 , F) =

cl F (b) if 1/b ∈ s1 ∅ otherwise.

Proof (i) We have α ∈ M(bs, c) if and only if Dα ∈ (∞ (), c) and Dα  ∈ (∞ , c). The matrix Dα  is the triangle with (Dα )nn = −(Dα )n,n−1 = αn for all n, with the convention (Dα )1,0 = 0, the other entries being

equal to zero. Trivially we have limn→∞ (Dα )nk = 0 for all k and limn→∞ ∞ k=1 |(Dα )nk | = 0 which implies M(bs, c) = c0 . Since bs = ∞ () ⊂ ∞ , we conclude c0 = M(∞ , c0 ) ⊂ M(bs, c0 ) ⊂ M(bs, c) = c0 , and Proposition 6.7 and Lemma 6.10 can be applied. (ii) Case of S(cs0 , F). Since c0 ⊂ c ⊂ ∞ and cs0 = (c0 ) ⊂ c0 , we obtain s1 = M(c0 , c0 ) ⊂ M(cs0 , c0 ) ⊂ M(cs0 , c) ⊂ M(cs0 , ∞ ).

(6.35)

Now α ∈ M(cs0 , ∞ ) if and only if Dα ∈ (cs0 , ∞ ) and Dα  ∈ (c0 , ∞ ). Since (c0 , ∞ ) = S1 , we have |αn | + |αn−1 | ≤ K for all n and some K > 0 and α ∈ s1 . So M(cs0 , ∞ ) = s1 . Using (6.35), we conclude M(cs0 , F) = s1 for F = c0 , c or ∞ , and Proposition 6.7 and Lemma 6.10 can be applied. Case of S(cs, F).


By similar arguments as those used above and noting that cs = c , we obtain s1 = M(cs, c0 ) ⊂ M(cs, c) ⊂ M(cs, ∞ ) = s1 .

(6.36)

Case of S(( p ) , F). Let p > 1. First α ∈ M(( p ) , ∞ ) implies Dα  ∈ ( p , ∞ ). By the characterization of ( p , ∞ ) in (5.1) of 5 in Theorem 1.23, we obtain |αn |q = O(1) (n → ∞) and α ∈ s1 . This means M(( p ) , ∞ ) ⊂ s1 . We have ( p ) ⊂  p , since  ∈ ( p ,  p ) and       s1 = M( p , c0 ) ⊂ M ( p ) , c0 ⊂ M ( p ) , c ⊂ M ( p ) , ∞ ⊂ s1 . So Proposition 6.7 and Lemma 6.10 can be applied. In the case p = 1, using similar arguments as those above and by the characterizations of (1 , ∞ ) in (4.1) of 4 in Theorem 1.23 and (1 , c0 ) in (4.1) and (7.1) of 9 in Theorem 1.23, we obtain M((1 ) , F) = s1 , where F = c0 c or ∞ . This concludes the proof of Part (ii).  Now we solve the following (SSE’s) with an operator (w0 ) + sx = sb and (w0 ) + cx = cb . We note that x is a solution of the second (SSE) if for every y ∈ ω, we have yn /bn → l1 (n → ∞) if and only if there are sequences u and v ∈ ω such that y = u + v and k n vn 1   u i → 0 and → l2 (n → ∞) n k=1 i=1 xn for some scalars l1 and l2 . First we state a lemma. Lemma 6.11 We have . M((w0 ) , ∞ ) = M((w0 ) , c0 ) = M((w∞ ) , ∞ ) = s(1/n)∞ n=1 Proof We have M((w0 ) , c0 ) = M((w0 ) , ∞ ). Indeed, α ∈ M ((w0 ) , c0 ) if and only if Dα  ∈ (w0 , c0 ). But, by Lemma 5.5, Dα  ∈ (w0 , c0 ) if and only if Dα  ∈ (w0 , ∞ ). So α ∈ M((w0 ) , c0 ) if and only if α ∈ M((w0 ) , ∞ ) and M((w0 ) , c0 ) = M((w0 ) , ∞ ). Now we show M((w∞ ) , ∞ ) = s(1/n)∞ . For this let α ∈ M((w∞ ) , ∞ ). Then we n=1


have Dα  ∈ (w∞ , ∞ ). Let νn for each non-negative integer n be the uniquely defined integer with 2νn ≤ n ≤ 2νn +1 − 1. Then we obtain σn =

∞  ν=0



max

2ν ≤k≤2ν+1 −1

|(Dα )nk | ≥ |αn |2νn ≥

n+1 |αn |. 2

But by Lemma 5.5, Dα  ∈ (w∞ , ∞ ) implies σ ∈ ∞ and α ∈ s(1/n)∞ . So we have n=1 . shown M((w∞ ) , ∞ ) ⊂ s(1/n)∞ n=1 ⊂ M((w∞ ) , ∞ ). We have w∞ ⊂ s(1/n)∞ . Since  ∈ Conversely, we show s(1/n)∞ n=1 n=1 ∞ ), we obtain (s ∞ ) ∞ , s ⊂ s and (s(1/n)∞ (1/n)n=1 (1/n)n=1  (1/n)n=1 n=1 M ((w∞ ) , ∞ ) ⊃ M

     , ∞ ⊃ M s(1/n)∞ , ∞ = s(1/n)∞ . s(1/n)∞ n=1  n=1 n=1

and obtain M((w0 ) , c0 ) = s(1/n)∞ using We conclude M((w∞ ) , ∞ ) = s(1/n)∞ n=1 n=1 a similar arguments as those above. This concludes the proof.  Proposition 6.13 ([6, Proposition 7.6, p. 102–103]) Let b ∈ U + and F = c or ∞ . Then we have  cl F (b) if 1/b ∈ s(1/n)∞ n=1 S ((w0 ) , F) = S(w0 , F) = ∅ otherwise. Proof First (w0 ) ⊂ w0 implies M(w0 , c0 ) ⊂ M((w0 ) , c0 ) and we obtain by Lemma 6.11 = M (w0 , c0 ) ⊂ M ((w0 ) , c0 ) ⊂ M ((w0 ) , c) s(1/n)∞ n=1 ⊂ M ((w0 ) , ∞ ) = s(1/n)∞ . n=1 for F = c0 , c or ∞ , and Proposition 6.7 and Then we have M((w0 ) , F) = s(1/n)∞ n=1 Lemma 6.10 can be applied.  Remark 6.2 From Propositions 6.8, 6.10 and 6.13, we have S(χ , F) = S(w0 , F) for χ ∈ {(w0 ) , (w0 ) , (c0 ) }. Finally, we deal with the (SSE’s)   + Fx = Fb , where F ∈ {c, ∞ , }, and  + Fx = Fb for ∈ {c, ∞ }. A positive sequence x is a solution of the (SSE)  + cx = cb if the next statement holds


limn→∞ yn /bn = l if and only if there are sequences u and v ∈ ω with y = u + v such that limn→∞ | nk=1 u k |1/n = 0 and limn→∞ vn /xn = l  for some scalars l and l  and for all y ∈ ω. We obtain the next result. Theorem 6.7 ([6, Theorem 7.1, p. 102]) Let b ∈ U + . Then (i) for F = c, ∞ or , we have  S(  , F) = S(, F) =

cl F (b) if 1/b ∈  ∅ otherwise;

(ii) for F = c or ∞ , we have  S( , F) = S(, F) = Proof 4.14

cl F (b) if 1/b ∈  ∅ otherwise.

(i) First let α ∈ M(  ,). Then Dα  ∈ (,), and we obtain by Lemma |αn |(M −n + M −n+1 )

!1/n

≤ K for all n and some K > 0 and M ≥ 2.

This implies |αn |1/n ≤

KM ≤ K  for all n and some K  > 0. (1 + M)1/n

We conclude M(  ,) ⊂. Then it can easily be seen that   ⊂, since  ∈ (,). Then we have by Lemma 6.2  = M(, c0 ) ⊂ M (  , c0 ) ⊂ M (  , c) ⊂ M (  , ∞ ) ⊂ M (  , ) ⊂ . So we obtain M(  , F) =  for F = c0 c, ∞ or  and Proposition 6.7 and Lemma 6.10 can be applied. This concludes the proof of Part (i). (ii) As we have seen in Proposition 6.11, the operator  ∈ (,) is bijective and this is also the case for  ∈ (,). Then we have  =  and we conclude by Lemma 6.10 that Part (ii) holds.  Example 6.18 The solutions of the (SSE)   +x = u with u > 0 are determined by k1n ≤ xn ≤ k2n for all n and some k1 , k2 > 0. Example 6.19 Each of the (SSE)  + Fx = Fu , where F = c or s1 has no solution for any given u > 0.


6.6 The Multiplier M((Ea ) , F) and the (SSIE) Fb ⊂ (Ea ) + Fx In this section, we determine the multiplier M((E a ) , F), where E ∈ {c0 ,  p } and F ∈ {c0 , c, s1 }. Then we deal with the (SSIE) Fb ⊂ (E a ) + Fx , where E and F are sequence spaces with E ⊂ s1 and c0 ⊂ F ⊂ s1 . In the following we use the factorable matrix Dα  Dβ with α, β ∈ ω defined by (Dα  Dβ )nk = αn βk for k ≤ n for all n, the other entries being equal to zero. Lemma 6.12 ([7, Lemma 6, p. 115]) Let a ∈ U + and let p > 1. Then we have (i) The condition a ∈ / cs implies M

  0  sa  , F = s



n 1 k=1 ak

for F = c0 , c or s1 .

(6.37)

n=1

(ii) The condition a q ∈ / cs implies with q = p/( p − 1) M

  p  for F = c0 , c or s1 . a  , F = s(( nk=1 akq )−1/q )∞ n=1

(6.38)

Proof (i) We have α ∈ M((sa0 ) , c0 ) if and only if Dα  Da ∈ (c0 , c0 ). It follows from the characterization of (c0 , c0 ) by (1.1) and (7.1) in 7 of Theorem 1.23 that |αn |

n 

ak ≤ K for all n and some K > 0

(6.39)

k=1

and α ∈ c0 .

(6.40)

But since a ∈ / cs, the condition in (6.39) implies (6.40), and so α ∈ M((sa0 ) , c0 ) if and only if (6.39) holds. This shows the identity in (6.37) for F = c0 . In a similar way, the identity (6.37) for F = s1 can easily be shown. From the inclusions M((sa0 ) , c0 ) ⊂ M((sa0 ) , c) ⊂ M((sa0 ) , s1 ), we conclude that the identity in (6.37) holds for F = c. p (ii) We have α ∈ M((a ) , c0 ) if and only if Dα  Da ∈ ( p , c0 ). We have by the p characterization of ( , c0 ) in (5.1) and (7.1) of 10 in Theorem 1.23 |αn |q

n  k=1

q

ak ≤ K for all n and some K > 0

(6.41)


and (6.40) holds. But since a q ∈ / cs, the condition in (6.41) implies (6.40), and p we so α ∈ M((a ) , c0 ) if and only if (6.41) holds. So we have shown that the identity in (6.38) holds for F = c0 . In a similar way the identity in (6.38) with F = s1 can easily be shown. We conp p p clude the proof using the inclusions M((a ) , c0 ) ⊂ M((a ) , c) ⊂ M((a ) , s1 ).  Now we study some properties of the (SSIE) Fb ⊂ (E a ) + Fx . Let E and F be two linear subspaces of ω. We write I((E a ) , F, F  ) for the set of all x ∈ U + such that Fb ⊂ (E a ) + Fx . It can easily be seen that the sets (E a ) and Fx are linear spaces of sequences, and we have z ∈ (E a ) + Fx if and only if   there are ξ ∈ E and f ∈ F such that z n = nk=1 ak ξk + f n xn . To simplify we write    I EF,F = I (E a ) , F, F  . We need the next lemma. Lemma 6.13 ([8]) Let a, b ∈ U + . Then we have (χa ) + (χb ) = (χa+b ) for χ = s1 or c0 . Proof Since the inclusion (χa+b ) ⊂ (χa ) + (χb ) is trivial, it is enough to show (χa ) + (χb ) ⊂ (χa+b ) . For this, let y ∈ (χa ) + (sb ) . Since (χα ) = ( Dα )χ with α ∈ U + , there are sequences u and v ∈ χ such that yn =

n  k=1

ak u k +

n 

n  bk vk = (ak + bk )z k = ( Da +  Db )n z,

k=1

k=1

where z k = (ak u k + bk vk )/(ak + bk ) for all k. Since 0 < ak /(ak + bk ) < 1 and 0 < bk /(ak + bk ) < 1, we have |z k | ≤ |u k | + |vk | for all k, (|u k | + |vk |)∞ k=1 ∈ ∞ for χ = ∈ c for χ = c . This shows y ∈ ( D )χ = (χa+b ) and s1 and (|u k | + |vk |)∞ 0 0 a+b k=1 (χa ) + (χb ) ⊂ (χa+b ) . This completes the proof.  Remark 6.3 As a direct consequence of the preceding lemma we have  Da χ +  Db χ = ( Da+b )χ for χ = s1 or c0 . + In the following, we use the sequence σ = (σn )∞ n=1 defined for a, b ∈ U by

σn =

n 1  ak for all n. bn k=1


We recall that, for any given b ∈ U + , s b = s1/b is the set of all sequences x such that xn ≥ K bn for all n and some K > 0. We note that sb ∩ s1/b = cl ∞ (b). First we state the next lemma (cf. [8]). Lemma 6.14 Let a, b ∈ U + and E, F and F  be linear spaces of sequences that satisfy F ⊃ c0 and E, F  ⊂ ∞ . Then we have 

(i) If σ ∈ c0+ , then I EF F ⊂ s1/b .  (ii) If a ∈ c0+ and b = e, then I EF F ∩ c ⊂ cl c (e). 

Proof (i) Let x ∈ I EF F . Then we have Fb ⊂ (E a ) + Fx . Since E, F  ⊂ s1 , we obtain (E a ) + Fx = ( Da )E + Dx F  ⊂ ( Da )s1 + Dx s1 .

n

Now we let τ = (

k=1

a k )∞ n=1 . We obtain by elementary calculations D1/τ  Da ∈ (s1 , s1 )

and ( Da )s1 ⊂ sτ . Then we have sb0 ⊂ (sa ) + sx ⊂ sτ +x and

b ∈ M(c0 , s1 ) = s1 . τ +x

So there is K > 0 such that n 

ak + xn ≥ K bn for all n.

k=1

Hence we have xn ≥ bn (K − σn ) for all n and since σ ∈ c0 , there is K  > 0 such that xn ≥ K  bn for all n and consequently x ∈ s1/b . (ii) First we show sx ⊂ (sx+x − ) . This inclusion is equivalent to D1/(x+x − ) Dx ∈ (s1 , s1 ), where D1/(x+x − ) Dx

! nn

=

xn xn−1 + xn

and D1/(x+x − ) Dx

! n,n−1

=−

xn−1 for all n, xn−1 + xn


the other entries being naught. Now let x ∈ I EF F ∩ c with b = e. Then we have x ∈ c and c0 ⊂ (sa ) + sx . This inclusion implies c0 ⊂ ( Da )s1 + ( Dx+x − )s1 and we obtain by Lemma 6.13 ( Da )s1 + ( Dx+x − )s1 = ( Da+x+x − )s1 = (sa+x+x − ) . We deduce c0 ⊂ (sa+x+x − ) . So there is K > 0 such that 1 ≤K an + xn + xn−1 and xn + xn−1 ≥

1 − an for all n. K

Since a ∈ c0 , there is M > 0 such that xn + xn−1 ≥ M for all n.

(6.42)

Then x ∈ c implies lim (xn + xn−1 ) = 2 lim xn ≥ M

n→∞

n→∞



and limn→∞ xn > 0, which implies sx(c) = c. So we have shown I EF F ∩ c ⊂ cl c (e). This concludes the proof. 

6.7 The (SSE) (Ea ) + sx(c) = sb(c) Here we solve the (SSE) (E a ) + sx(c) = sb(c) where E ∈ {c0 ,  p } for p > 1. For instance, the (SSE)

is equivalent to the statement

(sa0 ) + sx(c) = sb(c)




yn → l1 (n → ∞) bn if and only if there are two sequences u and v with y = u + v such that vn n u → 0 and → l2 (n → ∞) an xn for all sequences y and for some scalars l1 and l2 . For any given a, b ∈ U + we write S((E a ) , F) for the set of all solutions of the (SSE) (E a ) + Fx = Fb , where E and F are linear spaces. These results extend some results stated in Propositions 6.8 and 6.9, since the (SSE) (sa0 ) + sx(c) = sb(c) reduces p to (c0 ) + sx(c) = sb(c) , and the (SSE) (a ) + sx(c) = sb(c) reduces to ( p ) + sx(c) = sb(c) for a = e. We have the following. Theorem 6.8 ([7, Theorem 1, pp. 117–118]) Let a, b ∈ U + . Then we have (i) The set S((sa0 ) , c) of all solutions of the (SSE) (sa0 ) + sx(c) = sb(c) is determined in the following way:

(a) If a ∈ / cs, (that is, k ak = ∞), then we have  S((sa0 ) , c)

=

cl c (b) if σ ∈ s1 ∅ if σ ∈ / s1 .

(b) If a ∈ cs, then we have ⎧ 1 ⎪ ⎪ cl c (b) if ∈ c0 ⎪ ⎪ b ⎪ ⎪ ⎨ 1 0 S((sa ) , c) = cl c (e) if ∈ c \ c0 ⎪ b ⎪ ⎪ ⎪ ⎪ 1 ⎪ ⎩∅ / c. if ∈ b

(6.43)

(ii) The set S((a ) , c) with p > 1 of all solutions of the (SSE) (a ) + sx(c) = sb(c) is determined in the following way: p

p

(a) If a q ∈ / cs, then ⎧  q q ∞ a1 + · · · + an ⎪ c ⎪ ⎪ ∈ s1 q ⎨cl (b) if bn n=1 S((ap ) , c) =  q q ∞ ⎪ a1 + · · · + an ⎪ ⎪ if ∈ / s1 . ⎩∅ q bn n=1 p

(b) If a q ∈ cs, then S((a ) , c) = S((sa0 ) , c) is defined by (6.43).


Proof

(i)

(a) First consider the case a ∈ / cs. By Lemma 6.12, we have M((sa0 ) , c) = M((sa0 ) , c0 ) and we can apply Proposition 6.7, where 1/b ∈ M((sa0 ) , c) if and only if σ ∈ s1 . (a) Case a ∈ cs. We deal with the three cases (α)1/b ∈ / c, (β)1/b ∈ c0 and (γ )1/b ∈ c \ c0 . (α) We have S((sa0 ) , c) = ∅. Indeed, if we assume there is x ∈ S((sa0 ) , c), then (sa0 ) ⊂ sb(c) and D1/b  Da ∈ (c0 , c). From the characterization of (c0 , c) in (1.1) and (11.2) of 12 in Theorem 1.23, we deduce 1/b ∈ c, which is a contradiction. We conclude S((sa0 ) , c) = ∅. (β) Let 1/b ∈ c0 . Then x ∈ S((sa0 ) , c) implies x ∈ sb(c)

(6.44)

and sb(c) ⊂ (sa0 ) + sx(c) . Since b ∈ sb(c) , there are sequences ε ∈ c0 and ϕ ∈ c such that   n bn 1  ak εk = ϕn for all n. 1− xn bn k=1 We deduce b/x ∈ c, since σ ∈ c0 . Using the condition in (6.44) we conclude that x ∈ S((sa0 ) , c) implies sx(c) = sb(c) . Conversely, we assume sx(c) = sb(c) . Then we have (sa0 ) + sx(c) = (sa0 ) + sb(c) = sb(c) , since σ ∈ s1 and 1/b ∈ c. We conclude S((sa0 ) , c) = cl c (b). (γ ) Here we have limn→∞ bn = L > 0 and sb(c) = c and we are led to study the (SSE) (sa0 ) + sx(c) = c. We have that x ∈ S((sa0 ) , c) implies x ∈ c. Then, by Part (ii) of Lemma 6.14 with E = c0 and F = F  = c, we obtain S((sa0 ) , c) ⊂ cl c (e). Conversely, x ∈ cl c (e) implies sx(c) = c, and since a ∈ cs, we have (sa0 ) ⊂ c and (sa0 ) + sx(c) = c. We conclude S((sa0 ) , c) = cl c (e). This completes the proof of Part (i). (ii) (a) Case a q ∈ / cs. p p By Lemma 6.12, we have M((a ) , c) = M((a ) , c0 ). Then we can apply p Proposition 6.7, where 1/b ∈ M((a ) , c) if and only if




⎛ n

q

⎞∞

ak

⎜ k=1 ⎟ ⎜ q ⎟ ⎝ bn ⎠

∈ s1 .

n=1

(b) Case a q ∈ cs. As above we deal with the three (α)1/b ∈ / c, (β)1/b ∈ c0 and (γ )1/b ∈ c \ c0 . (α) We have that x ∈ S((a ) , c) implies (a ) ⊂ sb(c) and D1/b  Da ∈ ( p , c). From the characterization of ( p , c) in (5.1) and (11.2) of 15 in Theorem 1.23, we deduce 1/b ∈ c. We conclude that if 1/b ∈ / c, then p S((a ) , c) = ∅. p (β) We have that x ∈ S((a ) , c) implies p

and

p

x ∈ sb(c)

(6.45)

  sb(c) ⊂ ap  + sx(c) .

(6.46)

Since b ∈ sb(c) , there are sequences λ ∈  p and ϕ ∈ c such that bn xn



n 1  a k λk 1− bn k=1

 = ϕn for all n.

From the characterization of ( p , c0 ) in (5.1) and (7.1) of 10 in Theorem 1.23, we have D1/b  Da ∈ ( p , c0 ), since 1/b ∈ c0 and a q ∈ cs together imply 

q

q

a1 + · · · + an q bn

∞ ∈ s1 . n=1

We deduce 

D1/b  Da



λ= n

n 1  ak λk → 0 (n → ∞), bn k=1 p

and b/x ∈ c. Using the condition in (6.45), we conclude that x ∈ S((a ) , c) implies sx(c) = sb(c) . Conversely, we assume sx(c) = sb(c) . Since 1/b ∈ c0 and a q ∈ cs together imply D1/b  Da ∈ ( p , c), 10 in Theorem 1.23, we successively obtain p (a ) ⊂ sb(c) , (ap ) + sx(c) = (ap ) + sb(c) = sb(c)



and x ∈ S((a ) , c). We conclude S((a ) , c) = cl c (b). (γ ) Here we have sb(c) = c and we are led to study the (SSE) (ap ) + sx(c) = c. p

We have that x ∈ S((a ) , c) implies x ∈ c, that is, xn → l (n → ∞). Then by Part (ii) of Lemma 6.14 with E =  p and F = F  = c, we obtain p p S((a ) , c) ⊂ cl c (e). Since a q ∈ cs, we have  Da ∈ ( p , c) and (a ) ⊂ c. p p c c This implies cl (e) ⊂ S((a ) , c). We conclude S((a ) , c) = cl (e). This completes the proof.  Now we study the equation

and the perturbed equation

sx(c) = sb(c) (sa0 ) + sx(c) = sb(c) .

In view of perturbed equations we state the following. Let b be a positive sequence. Then the equation (6.47) sx(c) = sb(c) is equivalent to limn→∞ xn /bn = l (n → ∞) for some l > 0. Then the (SSE) (sa0 ) + sx(c) = sb(c)

(6.48)

can be considered as a perturbed equation of (6.47), and the question is what are the conditions on a for which the perturbed equation and the (SSE) defined by (6.47) have the same solutions. As a direct consequence of Theorem 6.8 we obtain the next corollary. Corollary 6.6 Let a, b ∈ U + . Then we have (i) If 1/b ∈ c, then the equations in (6.47) and (6.48) are equivalent if and only if a ∈ cs ∪ (ω \ cs ∩ (sb ) ). (ii) If 1/b ∈ / c, then the perturbed equation in (6.48) has no solutions. Proof (i) is an immediate consequence of Theorem 6.8.

(ii) Let a ∈ / cs. The condition σ ∈ s1 should imply 1/bn ≤ K ( nk=1 ak )−1 for all n and some K > 0 and 1/b ∈ c0 , which is contradictory. So the perturbed equation in (6.48) has no solutions. The case a ∈ cs is a direct consequence of Part (i) (b) of Theorem 6.8.  Remark 6.4 We may state a similar result for the perturbed equation (a ) + sx(c) = sb(c) . p
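To see how the sequence σ of Theorem 6.8 is used in practice, here is a small sketch with sequences of our own choosing; it only evaluates σ_n on a finite prefix and is meant as an illustration, not as a verification of the hypotheses.

```python
# Illustrative sketch: sigma_n = (1/b_n) * (a_1 + ... + a_n) governs Theorem 6.8 (i)(a).
# For a_n = 1 (so a is not in cs) and b_n = n we get sigma_n = 1 for all n, hence
# sigma is in s_1 and the solutions of (s_a^0)_Delta + s_x^(c) = s_b^(c) are the x
# with x_n / b_n tending to a positive limit.

from itertools import accumulate

def sigma(a, b):
    """Return the quotients sigma_n = (a_1 + ... + a_n) / b_n."""
    return [s / bn for s, bn in zip(accumulate(a), b)]

N = 20
a = [1.0] * N                          # partial sums equal n, so a is not in cs
b = [float(n) for n in range(1, N + 1)]
print(sigma(a, b)[:5], "...")          # constantly 1.0, so sigma stays bounded
```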




1 . Now we state the next elementary results, where b, bq ∈ C Corollary 6.7 ([7, Corollary 4, p. 121]) Let a, b ∈ U + . Then we have

1 . Then the set S((sa0 ) , c) of all positive x ∈ U + such that (sa0 ) + (i) Let b ∈ C (c) (c) sx = sb is determined as follows. (a) Let a ∈ / cs. Then we have  S((sa0 ) , c)

=

cl c (b) if a/b ∈ s1 ∅ if a/b ∈ / s1 .

(6.49)

(b) Let a ∈ cs. Then we have S((sa0 ) , c) = cl c (b).

1 with q = p/( p − 1). Then the set S((ap ) , c) of all Let p > 1 and bq ∈ C p x ∈ U + such that (a ) + sx(c) = sb(c) is determined in the following way:

(ii)

p

(a) Let a q ∈ / cs. Then S((a ) , c) = S((sa0 ) , c) defined by (6.49). p q (b) Let a ∈ cs. Then S((a ) , c) = S((sa0 ) , c) = cl c (b). Proof

(i) (a) We have $\sigma \in s_1$ if and only if $a \in (s_b)_\Sigma$. But as we saw in Proposition 6.1, we have $b \in \widehat{C}_1$ if and only if $(s_b)_\Sigma = s_b$. This implies that $\Sigma \in (s_b, s_b)$ is bijective, and so is $\Delta = \Sigma^{-1}$. So we have $(s_b)_\Sigma = s_b$. We have $\sigma \in s_1$ if and only if $a/b \in s_1$, and we conclude by Theorem 6.8. This completes the proof of Part (i) (a).
(b) follows from the fact that $b \in \widehat{C}_1$ implies $1/b \in c_0$.
(ii) (a) Here we have
$$\left(\frac{a_1^q + \cdots + a_n^q}{b_n^q}\right)_{n=1}^{\infty} \in s_1 \ \text{if and only if} \ a^q \in (s_{b^q})_\Sigma,$$
and, as we have just seen, we have $(s_{b^q})_\Sigma = s_{b^q}$, since $b^q \in \widehat{C}_1$. So we obtain Part (a).
(b) The condition $b^q \in \widehat{C}_1$ implies that there are $C > 0$ and $\gamma > 1$ such that $b_n^q \ge C\gamma^n$ for all $n$. So we have $b_n \ge C^{1/q}\gamma^{n/q}$ for all $n$, and $1/b \in c_0$. We conclude by Theorem 6.8. This completes the proof. □

Remark 6.5 We note that for $b \in \widehat{C}_1$ we have $S\bigl((s_a^0)_\Delta, c\bigr) \ne \emptyset$ if and only if $a \in \bigl(cs \cup ((\omega \setminus cs) \cap s_b)\bigr) \cap U^+$.




Example 6.20 We consider the (SSE) with operator defined by
$$\bigl(s^0_{(n^{-\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s_x^{(c)} = s_b^{(c)}, \qquad (6.50)$$
with $0 < \alpha \le 1$ and $b \in \widehat{C}_1$. We have $a/b = (n^{-\alpha}/b_n)_n$. The condition $b \in \widehat{C}_1$ implies that there are $K > 0$ and $\gamma > 1$ such that $b_n \ge K\gamma^n$ for all $n$. This implies $a/b \in c_0$. We apply Corollary 6.7 and conclude that the solutions of the (SSE) in (6.50) satisfy the condition $x_n \sim C b_n$ $(n \to \infty)$ for some $C > 0$.

Example 6.21 Let $b^q \in \widehat{C}_1$. It can easily be shown that the solutions of the (SSE) $\bigl(\ell^p_{(n^{\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s_x^{(c)} = s_b^{(c)}$ are defined by $x_n \sim C b_n$ $(n \to \infty)$ for some $C > 0$ and all reals $\alpha$.

Remark 6.6 We note that if $a \in \widehat{C}_1$, the set $S\bigl((s_a^0)_\Delta, c\bigr)$ is determined by Part (i) of Corollary 6.7. Indeed, by Part (ii) of Theorem 4.2, the condition $a \in \widehat{C}_1$ implies $(s_a^0)_\Delta = s_a^0$, and we conclude from the solvability of the (SSE) $s_a^0 + s_x^{(c)} = s_b^{(c)}$ given in Proposition 6.3.

Remark 6.7 If $\lim_{n\to\infty}(a_{n-1}/a_n) < 1$, then we have $(\ell^p_a)_\Delta = \ell^p_a$ (cf. [2, Theorem 6.5, p. 3200]). So we obtain $S\bigl((\ell^p_a)_\Delta, c\bigr) = \mathrm{cl}^{c}(b)$ if $a/b \in s_1$, and $S\bigl((\ell^p_a)_\Delta, c\bigr) = \emptyset$ if $a/b \notin s_1$. We obtain the next corollary, the proof of which is elementary and left to the reader.

Corollary 6.8 ([7, Corollary 5, pp. 122–123]) Let $R, R' > 0$, and denote by $S_{R,R'}$ the set of all positive sequences $x$ that satisfy the (SSE) $(s_R^0)_\Delta + s_x^{(c)} = s_{R'}^{(c)}$. Then we obtain:
(i) Case $R < 1$. We have



$S_{R,R'} = \mathrm{cl}^{c}(R')$ if $R' \ge 1$, and $S_{R,R'} = \emptyset$ if $R' < 1$.
(ii) Case $R = 1$. We have $S_{R,R'} = \mathrm{cl}^{c}(R')$ if $R' > 1$, and $S_{R,R'} = \emptyset$ if $R' \le 1$.
(iii) Case $R > 1$. We have $S_{R,R'} = \mathrm{cl}^{c}(R')$ if $R \le R'$, and $S_{R,R'} = \emptyset$ if $R > R'$.

As a direct consequence of the preceding we can state the next remark.


Remark 6.8 Let $R, R' > 0$. Then $S_{R,R'} \ne \emptyset$ if and only if $R = 1 < R'$, or $1 < R \le R'$, or $R < 1 \le R'$. For instance, the set of all positive sequences $x$ that satisfy the (SSE) $(s_R^0)_\Delta + s_x^{(c)} = s_2^{(c)}$ is nonempty if and only if $R \le 2$.

Now we consider the next statement: the condition $n^{\alpha} y_n \to l_1$ $(n \to \infty)$ holds if and only if there are two sequences $u$ and $v$ with $y = u + v$ such that $r^n(u_n - u_{n-1}) \to 0$ and $x_n v_n \to l_2$ $(n \to \infty)$ for some scalars $l_1$ and $l_2$ and for all $y \in \omega$. The set of all $x$ that satisfy the previous statement is determined by the (SSE)
$$\bigl(s_{1/r}^0\bigr)_\Delta + s_{1/x}^{(c)} = s_{(1/n^{\alpha})_{n=1}^{\infty}}^{(c)}. \qquad (6.51)$$

6.8 More Applications

In this section, we study some more applications of the results of the previous section. We obtain the following corollary.

Corollary 6.9 ([7, Corollary 6, pp. 123–124]) Let $r > 0$, let $\alpha$ be a real and let $\widetilde{S}_{r,\alpha}$ denote the set of all positive sequences $x$ that satisfy the (SSE) defined by (6.51). Then we obtain:
(i) If $r < 1$, then $\widetilde{S}_{r,\alpha} = \emptyset$.
(ii) If $r = 1$, then $\widetilde{S}_{r,\alpha} = \mathrm{cl}^{c}\bigl((n^{\alpha})_{n=1}^{\infty}\bigr)$ if $\alpha \le -1$, and $\widetilde{S}_{r,\alpha} = \emptyset$ if $\alpha > -1$.
(iii) If $r > 1$, then $\widetilde{S}_{r,\alpha} = \mathrm{cl}^{c}\bigl((n^{\alpha})_{n=1}^{\infty}\bigr)$ if $\alpha \le 0$, and $\widetilde{S}_{r,\alpha} = \emptyset$ if $\alpha > 0$.

Proof We note that $r < 1$ implies $a = (r^{-n})_{n=1}^{\infty} \notin cs$. So the statement in Part (i) follows from the equivalence $\sigma_n \sim (1-r)^{-1} n^{\alpha} r^{-n}$ $(n \to \infty)$ and $\sigma \notin s_1$ for $r < 1$. Let $r = 1$. Then we have $\sigma_n \sim n^{\alpha+1}$ $(n \to \infty)$ and $\sigma \in s_1$ if and only if $\alpha \le -1$, and we conclude by Theorem 6.8. This shows Part (ii). Finally, for $r > 1$, we have $a \in cs$ and $1/b = (n^{\alpha})_{n=1}^{\infty} \in c$ if and only if $\alpha \le 0$, and we conclude by Theorem 6.8. This completes the proof. □

We immediately deduce the next remark.


Remark 6.9 We have $\widetilde{S}_{r,\alpha} \ne \emptyset$ if and only if $r = 1$ and $\alpha \le -1$, or $r > 1$ and $\alpha \le 0$. We also have $\widetilde{S}_{r,0} \ne \emptyset$ if and only if $r > 1$.

Example 6.22 We consider the statement: $y_n/n \to l_1$ $(n \to \infty)$ holds if and only if there are two sequences $u$ and $v$ with $y = u + v$ such that $u_n - u_{n-1} \to 0$ and $x_n v_n \to l_2$ $(n \to \infty)$ for some scalars $l_1$ and $l_2$ and for all $y \in \omega$. This statement holds if and only if $x \in \widetilde{S}_{1,-1}$, that is, $n x_n \to L$ $(n \to \infty)$ with $L > 0$.

For the (SSE) $\bigl(s^0_{(1/n^{\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = s^{(c)}_{1/r}$, we obtain the next result by Theorem 6.8.

Corollary 6.10 ([7, Corollary 7, p. 124]) Let $r > 0$, let $\alpha$ be a real and let $S_{\alpha,r}$ be the set of all $x \in U^+$ such that $\bigl(s^0_{(1/n^{\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = s^{(c)}_{1/r}$. Then the next statements are equivalent:
(i) $S_{\alpha,r} \ne \emptyset$;
(ii) $S_{\alpha,r} = \mathrm{cl}^{c}\bigl((r^n)_{n=1}^{\infty}\bigr)$;
(iii) $r \le 1 < \alpha$, or $\alpha \le 1$ and $r < 1$.

Proof This result is a direct consequence of the equivalences
$$\textstyle\int_n = \sum_{k=1}^{n} k^{-\alpha} \sim \frac{n^{1-\alpha}}{1-\alpha} \ (n \to \infty) \ \text{if } \alpha \ne 1, \quad\text{and}\quad \int_n \sim \log n \ (n \to \infty) \ \text{if } \alpha = 1.$$
Then if $\alpha \ne 1$, we have $(r^n n^{1-\alpha})_{n=1}^{\infty} \in \ell_\infty$ if and only if $r \le 1 < \alpha$, or $\alpha < 1$ and $r < 1$. If $\alpha = 1$, we have $(r^n \log n)_{n=1}^{\infty} \in \ell_\infty$ if and only if $r < 1$. This concludes the proof. □

Example 6.23 For $r = 1/2$, we have $S_{\alpha,1/2} = \mathrm{cl}^{c}\bigl((2^{-n})_{n=1}^{\infty}\bigr)$ for all reals $\alpha$.

Now let $S_{\alpha,\beta}$, for all reals $\alpha$ and $\beta$, be the set of all positive sequences $x = (x_n)_{n=1}^{\infty}$ that satisfy the following statement: for every $y$, the condition $n^{\beta} y_n \to l_1$ $(n \to \infty)$ holds if and only if there are two sequences $u$ and $v$ with $y = u + v$ such that $n^{\alpha}(u_n - u_{n-1}) \to 0$ and $x_n v_n \to l_2$ $(n \to \infty)$ for some scalars $l_1$ and $l_2$. This statement leads to the solvability of the (SSE) $\bigl(s^0_{(1/n^{\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = s^{(c)}_{(1/n^{\beta})_{n=1}^{\infty}}$. We obtain the next result, which can be proved by similar arguments as those above.
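The proof above rests on the elementary asymptotics of the partial sums $\int_n = \sum_{k=1}^{n} k^{-\alpha}$. The following short script (ours, not part of the book) checks them numerically for a few sample values of $\alpha$; the function names and the sample truncation points are arbitrary choices.

```python
# Numerical sanity check (ours) of the asymptotics used in the proof of
# Corollary 6.10: sum_{k<=n} k^(-alpha) ~ n^(1-alpha)/(1-alpha) for alpha < 1,
# and ~ log n for alpha = 1.
import math

def partial_sum(n, alpha):
    """Partial sum of k^(-alpha) for k = 1..n."""
    return sum(k ** (-alpha) for k in range(1, n + 1))

def predicted(n, alpha):
    """Leading-order prediction used in the proof."""
    return math.log(n) if alpha == 1 else n ** (1 - alpha) / (1 - alpha)

n = 200_000
for alpha in (0.25, 0.5, 0.75, 1.0):
    ratio = partial_sum(n, alpha) / predicted(n, alpha)
    print(f"alpha={alpha}: partial_sum/predicted at n={n} is {ratio:.4f}")
```

For the divergent cases shown, the ratio approaches 1, which is exactly the equivalence invoked in the proof.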

Corollary 6.11 ([7, Corollary 8, p. 125]) Let $\alpha$ and $\beta$ be reals. Then we have:
(i) If $\alpha < 1$, then $S_{\alpha,\beta} = \mathrm{cl}^{c}\bigl((n^{\beta})_{n=1}^{\infty}\bigr)$ if $\beta \le \alpha - 1$, and $S_{\alpha,\beta} = \emptyset$ if $\beta > \alpha - 1$.
(ii) If $\alpha = 1$, then $S_{\alpha,\beta} = \mathrm{cl}^{c}\bigl((n^{\beta})_{n=1}^{\infty}\bigr)$ if $\beta < 0$, and $S_{\alpha,\beta} = \emptyset$ if $\beta \ge 0$.
(iii) If $\alpha > 1$, then $S_{\alpha,\beta} = \mathrm{cl}^{c}\bigl((n^{\beta})_{n=1}^{\infty}\bigr)$ if $\beta \le 0$, and $S_{\alpha,\beta} = \emptyset$ if $\beta > 0$.

Corollary 6.12 We have $S_{\alpha,\beta} \ne \emptyset$ if and only if $\beta \le \alpha - 1 < 0$, or $\alpha = 1$ and $\beta < 0$, or $\alpha > 1$ and $\beta \le 0$.

Example 6.24 As a direct consequence of the preceding results, note that the (SSE) $\bigl(s^0_{(n^{-\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = c$ is equivalent to $x_n \to L$ $(n \to \infty)$ with $L > 0$ for all $\alpha > 1$.
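The case distinctions of Corollaries 6.11 and 6.12 can be packaged as a small decision function. The sketch below is our own transcription of the stated conditions; the function name and the sample pairs $(\alpha, \beta)$ are ours.

```python
# A direct transcription (ours, not from the book) of Corollaries 6.11/6.12:
# report whether S_{alpha,beta} is empty and, if not, that every solution x
# satisfies x_n ~ C * n**beta for some C > 0.
def s_alpha_beta_nonempty(alpha: float, beta: float) -> bool:
    """Return True iff S_{alpha,beta} is nonempty, following Corollary 6.12."""
    if alpha < 1:
        return beta <= alpha - 1   # then beta < 0 automatically, since alpha - 1 < 0
    if alpha == 1:
        return beta < 0
    return beta <= 0               # case alpha > 1

for alpha, beta in [(0.5, -1.0), (0.5, -0.25), (1.0, -0.5), (2.0, 0.0), (2.0, 0.5)]:
    status = "x_n ~ C*n**beta" if s_alpha_beta_nonempty(alpha, beta) else "no solution"
    print(f"alpha={alpha}, beta={beta}: {status}")
```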

In the next corollary, we deal with the following statement for reals $\alpha$ and $\beta$ and $p > 1$: the condition $n^{\beta} y_n \to l_1$ holds if and only if there are sequences $u$ and $v \in \omega$ with $y = u + v$ such that
$$\sum_{k=1}^{\infty} \bigl(k^{\alpha}\,|u_k - u_{k-1}|\bigr)^p < \infty \quad\text{and}\quad x_n v_n \to l_2 \ (n \to \infty)$$
for all $y \in \omega$ and some scalars $l_1$ and $l_2$. This is equivalent to the (SSE)
$$\bigl(\ell^p_{(n^{-\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = s^{(c)}_{(n^{-\beta})_{n=1}^{\infty}}. \qquad (6.52)$$
We obtain the next result.

Corollary 6.13 ([7, Corollary 10, pp. 125–126]) Let $\alpha$ and $\beta$ be reals and $S_p(c)$ be the set of all solutions of the (SSE) (6.52). Then we have:
(i) If $\alpha q \ge 1$, then $S_p(c) = \mathrm{cl}^{c}\bigl((n^{\beta})_{n=1}^{\infty}\bigr)$ if $\beta < 0$, and $S_p(c) = \emptyset$ if $\beta \ge 0$.
(ii) If $\alpha q < 1$, then $S_p(c) = \mathrm{cl}^{c}\bigl((n^{\beta})_{n=1}^{\infty}\bigr)$ if $\alpha - \beta \ge 1/q$, and $S_p(c) = \emptyset$ if $\alpha - \beta < 1/q$.


Proof The proof follows from the fact that $\sigma_n \sim n^{(\beta-\alpha)q+1}/(1-\alpha q)$ $(n \to \infty)$ if $\alpha q \ne 1$, and $\sigma_n \sim n^{\beta q} \log n$ $(n \to \infty)$ if $\alpha q = 1$. Then it can easily be seen that $\sigma \in \ell_\infty$ if and only if $\alpha - \beta \ge 1/q$ for $\alpha q < 1$, or $\beta < 0$ for $\alpha q \ge 1$. We conclude by Theorem 6.8. □

We deal for reals $\beta$ with the statement: $n^{\beta} y_n \to l_1$ if and only if $y = u + v$ with
$$\sum_{k=1}^{\infty} \left(\frac{|u_k - u_{k-1}|}{k}\right)^2 < \infty \quad\text{and}\quad x_n v_n \to l_2 \ (n \to \infty)$$
for all sequences $y$ and some scalars $l_1$ and $l_2$. This statement is equivalent to the (SSE) $\bigl(\ell^2_{(n)_{n=1}^{\infty}}\bigr)_\Delta + s^{(c)}_{1/x} = s^{(c)}_{(n^{-\beta})_{n=1}^{\infty}}$, and this (SSE) has solutions if and only if $\beta \le -3/2$.

Example 6.25 We note that the solutions of the (SSE) $\bigl(\ell^2_{(1/\sqrt{n})_{n=1}^{\infty}}\bigr)_\Delta + s_x^{(c)} = s^{(c)}_{(\log n)_{n=1}^{\infty}}$ are determined by $\lim_{n\to\infty}(x_n/\log n) > 0$. This result comes from the equivalence $\sum_{k=1}^{n} (1/\sqrt{k})^2 \sim \log n$ $(n \to \infty)$.
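The equivalence behind Example 6.25 is easy to confirm numerically; the snippet below (ours, with arbitrary sample sizes) compares the harmonic sum with $\log n$.

```python
# Quick numerical check (ours) of sum_{k=1}^{n} (1/sqrt(k))**2 = H_n ~ log n.
import math

for n in (10**3, 10**5, 10**6):
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"n={n}: H_n={harmonic:.4f}, log n={math.log(n):.4f}, ratio={harmonic / math.log(n):.4f}")
```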

Now we solve the (SSE) $(E_a)_\Delta + s_x^0 = s_b^0$, where $E \in \{c, \ell_\infty\}$. For $E = c$, the solvability of the previous (SSE) consists in determining the set of all positive sequences $x = (x_n)_{n=1}^{\infty}$ that satisfy the next statement: for every sequence $y$, the condition $y_n/b_n \to 0$ $(n \to \infty)$ holds if and only if there are two sequences $u$ and $v$ with $y = u + v$ such that
$$\frac{u_n - u_{n-1}}{a_n} \to l \quad\text{and}\quad \frac{v_n}{x_n} \to 0 \ (n \to \infty)$$
for some scalar $l$. We also consider the (SSE) $(s_a^{(c)})_\Delta + s_x^0 = s_b^0$ as a perturbed equation of the equation $s_x^0 = s_b^0$, which is equivalent to $K_1 \le x_n/b_n \le K_2$ for all $n$ and some $K_1, K_2 > 0$. We obtain the equivalence of these two equations under some conditions on $a$ and $b$.


By Lemma 6.14, we obtain the solvability of the (SSE's) $(s_a^{(c)})_\Delta + s_x^0 = s_b^0$ and $(s_a)_\Delta + s_x^0 = s_b^0$.

Theorem 6.9 ([7, Theorem 2, p. 127]) The set $S_0^E$ of all solutions of the (SSE) $(E_a)_\Delta + s_x^0 = s_b^0$, where $E = c$ or $\ell_\infty$, is determined by
$$S_0^E = \mathrm{cl}^{\infty}(b) \ \text{if } \sigma \in c_0, \qquad S_0^E = \emptyset \ \text{if } \sigma \notin c_0.$$

Proof Let $x \in S_0^E$. Then the inclusion $(E_a)_\Delta + s_x^0 \subset s_b^0$ holds. This implies $(E_a)_\Delta \subset s_b^0$ and $D_{1/b}\Sigma D_a \in (E, c_0)$, whence $D_{1/b}\Sigma D_a \in (c, c_0)$ since $E \supset c$, and
$$\sigma_n \to 0 \ (n \to \infty). \qquad (6.53)$$
Now we have $s_x^0 \subset s_b^0$ and
$$x \in s_b. \qquad (6.54)$$
Then we consider the (SSIE)
$$s_b^0 \subset (E_a)_\Delta + s_x^0. \qquad (6.55)$$
By Part (i) of Lemma 6.14 with $E \subset s_1$ and $F = F' = c_0$, we obtain $b/x \in s_1$. Using the condition in (6.54), we conclude that $x \in S_0^E$ implies $x \in \mathrm{cl}^{\infty}(b)$. Conversely, we assume $x \in \mathrm{cl}^{\infty}(b)$ and that (6.53) holds. Since $1/b \in c_0$, we have $(E_a)_\Delta \subset s_b^0$ for $E = c_0$ or $s_1$, and obtain $(E_a)_\Delta + s_x^0 = (E_a)_\Delta + s_b^0 = s_b^0$ and $x \in S_0^E$. We conclude $S_0^E = \mathrm{cl}^{\infty}(b)$. □



For $a = e$ we easily obtain the next result.

Corollary 6.14 The set $S(E_\Delta, c_0)$ of all solutions of the (SSE) $E_\Delta + s_x^0 = s_b^0$, where $E = c$ or $\ell_\infty$, is determined by $S(E_\Delta, c_0) = \mathrm{cl}^{\infty}(b)$ if $(n/b_n)_{n=1}^{\infty} \in c_0$, and $S(E_\Delta, c_0) = \emptyset$ if $(n/b_n)_{n=1}^{\infty} \notin c_0$.

Example 6.26 The equation
$$E_\Delta + s_x^0 = s^0_{(n^{\alpha})_{n=1}^{\infty}}, \quad\text{where } E \in \{c, \ell_\infty\},$$


has solutions if and only if $\alpha > 1$. So the equation $E_\Delta + s_x^0 = c_0$ has no solution, and the solutions of the equation $E_\Delta + s_x^0 = s^0_{(n^2)_{n=1}^{\infty}}$ are determined by $K_1 n^2 \le x_n \le K_2 n^2$ for all $n$ and some $K_1, K_2 > 0$.

We close this section with the solvability of the (SSE) $(E_a)_\Delta + s_x^0 = s_b^0$ for special sequences $a$ and $b$; in particular, we consider the (SSE's)
$$\bigl(s^{(c)}_{(n^{-\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^0_{1/x} = s^0_{(n^{-\beta})_{n=1}^{\infty}} \qquad (6.56)$$
$$\bigl(s^{(c)}_{R}\bigr)_\Delta + s^0_{x} = s^0_{R'} \qquad (6.57)$$
$$\bigl(s^{(c)}_{(n^{-\alpha})_{n=1}^{\infty}}\bigr)_\Delta + s^0_{1/x} = s^0_{1/R} \qquad (6.58)$$
$$\bigl(s^{(c)}_{1/R}\bigr)_\Delta + s^0_{1/x} = s^0_{(n^{-\beta})_{n=1}^{\infty}} \qquad (6.59)$$
with reals $\alpha$ and $\beta$ and $R, R' > 0$. The following result holds, which we state without proof.

Proposition 6.14 ([7, Proposition 1, p. 129])
(i) The (SSE) (6.56) has solutions if and only if $\beta < \alpha - 1 < 0$, or $\alpha \ge 1$ and $\beta < 0$.
(ii) The (SSE) (6.57) has solutions if and only if $R \le 1 < R'$, or $1 < R < R'$.
(iii) The (SSE) (6.58) has solutions if and only if $R < 1 < \alpha$, or $R < \alpha = 1$, or $\alpha < 1$ and $R < 1$.
(iv) The (SSE) (6.59) has solutions if and only if $R = 1$ and $\beta < -1$, or $R > 1$ and $\beta < 0$.

Example 6.27 The (SSE) $\bigl(s^{(c)}_{1/2}\bigr)_\Delta + s_x^0 = s_R^0$ has solutions if and only if $R > 1$.

Example 6.28 Let $\tau$ and $\tau'$ be reals. Then the system of (SSE's)
$$c_\Delta + s_x^0 = s^0_{(n^{\tau})_{n=1}^{\infty}}, \qquad \bigl(s^{(c)}_{1/2}\bigr)_\Delta + s_x^0 = s^0_{(n^{\tau'})_{n=1}^{\infty}},$$
where $x$ is the unknown, has solutions if and only if $\tau = \tau' > 1$. Then $x$ is a solution of the system if and only if $C_1 n^{\tau} \le x_n \le C_2 n^{\tau}$ for all $n$ and some $C_1, C_2 > 0$. This is a direct consequence of Part (iv) of Proposition 6.14 and of the elementary fact that $s_{(n^{\tau})_{n=1}^{\infty}} = s_{(n^{\tau'})_{n=1}^{\infty}}$ if and only if $\tau = \tau'$.

Example 6.29 Let $S_0^c$ be the set of all positive sequences that satisfy the following statement.


For every sequence $y$, the condition $y_n/n \to 0$ $(n \to \infty)$ holds if and only if there are two sequences $u$ and $v$ with $y = u + v$ such that $\sqrt{n}\,(u_n - u_{n-1}) \to L$ and $x_n v_n \to 0$ $(n \to \infty)$ for some scalar $L$. By Part (i) of Proposition 6.14, we have $x \in S_0^c$ if and only if $K_1/n \le x_n \le K_2/n$ for all $n$ and some $K_1, K_2 > 0$.

References

1. de Malafosse, B.: On some BK space. Int. J. Math. Math. Sci. 58, 1783–1801 (2003)
2. de Malafosse, B.: On the Banach algebra B(ℓ_p(α)). Int. J. Math. Math. Sci. 60, 3187–3203 (2004)
3. de Malafosse, B.: Sum of sequence spaces and matrix transformations. Acta Math. Hung. 113(3), 289–313 (2006)
4. de Malafosse, B.: On the sets of ν-analytic and ν-entire sequences and matrix transformations. Int. Math. Forum 2(36), 1795–1810 (2007)
5. de Malafosse, B.: Solvability of sequence spaces equations using entire and analytic sequences and applications. J. Ind. Math. Soc. 81(1–2), 97–114 (2014)
6. de Malafosse, B.: On sequence spaces equations of the form E_T + F_x = F_b for some triangle T. Jordan J. Math. Stat. 8(1), 79–105 (2015)
7. de Malafosse, B.: Solvability of sequence spaces equations of the form (E_a)_Δ + F_x = F_b. Fasc. Math. 55, 109–131 (2015)
8. de Malafosse, B.: Extension of the results on the (SSIE) and the (SSE) of the form F ⊂ E + F_x and E + F_x = F. Fasc. Math. 59, 107–123 (2017)
9. de Malafosse, B., Malkowsky, E.: On sequence spaces equations using spaces of strongly bounded and summable sequences by the Cesàro method. Antarctica J. Math. 10(6), 589–609 (2013)
10. Farés, A., de Malafosse, B.: Sequence spaces equations and application to matrix transformations. Int. Math. Forum 3(19), 911–927 (2008)
11. Grosse-Erdmann, K.-G.: Matrix transformations between the sequence spaces of Maddox. J. Math. Anal. Appl. 180, 223–238 (1993)
12. Maddox, I.J.: Some properties of paranormed sequence spaces. London J. Math. Soc. 2(1), 316–322 (1969)
13. Rao, K.C., Srinivasalu, T.C.: Matrix operators on analytic and entire sequences. Bull. Malays. Math. Sci. Soc. (Second Series) 14, 422–436 (1991)
14. Simons, S.: The sequence spaces ℓ(p_ν) and m(p_ν). London Math. Soc. 3(15), 422–436 (1965)

Chapter 7

Solvability of Infinite Linear Systems

Here we apply the theory of matrix transformations to the solvability of infinite systems of linear equations. So, we obtain some results in the spectral theory. Many results were published in this domain by Bilgiç and Furkan in 2008 [7], Furkan, Bilgiç and Ba¸sar in 2010 [27], Akhmedov and Ba¸sar in 2006 and 2007 [2, 3], Akhmedov and El-Shabrawy in 2011 [4], de Malafosse in 2014 [22], Srivastava and Kumar in 2010, 2012 and 2018 [39–41], Ba¸sar and Karaisa in 2013 and 2014 [6, 30] and Das in 2017 [10]. More related results on the spectra of operators on sequence spaces can be found in [31, 42–44]. We also deal with the Hill equation which was studied by Brillouin [8], Ince [29], and more recently by Hochstadt (1963) [28], Magnus and Winkler (1966) [33] and Rossetto [38]. These authors used an infinite determinant and showed that the Hill equation has a nonzero solution if the determinant is zero. B. Rossetto provided an algorithm to calculate the Floquet exponent from the generalization of the notion of a characteristic equation and of a truncated infinite determinant. Here, we consider a Banach algebra in which we may obtain the inverse of an infinite matrix and obtain a new method to calculate the Floquet exponent. Furthermore, we determine the solutions of the infinite linear system associated with the Hill equation with a second member and give a method to approximate them. Finally, we present a study of the Mathieu equation which can be written as an infinite tridiagonal linear system of equations.

7.1 Banach Algebras of Infinite Matrices In the following, we consider infinite linear systems given by the matrix equation Ax = b,

(7.1)

T T where A=(ank )∞ n,k=1 is an infinite matrix and x = (x 1 , x 2 , . . . ) and b = (b1 , b2 . . . ) ∈ ω are column matrices, that is,




a11 ⎜ . ⎜ Ax = ⎜ ⎜ an1 ⎝ . .

. . . . .

. a1k . . . ank . . . .

⎞⎛ ⎞ ⎛ ⎞ b1 x1 . ⎜ . ⎟ ⎜ . ⎟ .⎟ ⎟⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ .⎟ ⎟ ⎜ xk ⎟ = ⎜ bn ⎟ ⎠ . ⎝ . ⎠ ⎝ . ⎠ . . .

 whenever the series defined by An x = ∞ k=1 ank x k are convergent for all n. When A ∈ (X, Y ) for given X, Y ⊂ ω, the question is ”if b = (b1 , b2 , . . . )T ∈ Y is a given sequence, does the equation in (7.1) have a solution in X ” ? In this way we are led to determine if this solution is unique in the set X and if it is possible to explicitly compute this solution or give an approximation for it. There is no direct answer to these problems. We can write the equation in (7.1) in the form of the infinite linear system ∞ ank xk = bn for n = 1, 2, . . . . k=1

We note that if X is a BK space with AK, X has a Schauder basis with the elements e(n) for n = 1, 2, . . . . We recall that in this case every operator L ∈ B(X ) = B(X, X ) is given by the infinite A = (ank )∞ n,k=1 such that L(x) = Ax for every x ∈ X , where ank = L(e(k) )n for all n and k. So we are led to consider the left or right inverse of an infinite matrix and we need to determine an adapted Banach algebra in which we can define the product of infinite matrices. It is useful to use the next well-known elementary property. For any Banach space X of sequences, the condition I − A∗B(X ) < 1 implies that A is invertible and A−1 =



(I − A)i ∈ B(X ).

i=0

Let, for instance, ⎞ 1 1/2 1/2 . . 1/2 . ⎜ 1/2 1/3 1/3 . 1/3 .⎟ ⎟ ⎜ ⎜ . . . . .⎟ ⎟ ⎜ 1/n 1/ (n + 1) 1/ (n + 1) . ⎟ A=⎜ ⎟. ⎜ ⎟ ⎜ 0 . . . ⎟ ⎜ ⎝ . .⎠ . ⎛

We will see in Sect. 7.2 that using a convenient Banach algebra the system Ax = b for b has a solution in a well-chosen space. Then we will see in Sect. 7.2 that if C1T

7.1 Banach Algebras of Infinite Matrices

317

denotes the transpose of the matrix of the Cesàro operator, the equation C1T x = b also has a solution in a well-chosen space. Now we consider the Banach algebra Sτ . We recall that the set sτ is a B K space with the norm xsτ = supn |xn |/τn (Example 2.2). Also the set

Sτ =



A=

(ank )∞ n,k=1

: A Sτ

∞ 1 = sup |ank |τk τn k=1 n



n and all n. Lemma 7.2 (i) (Theorem 2.3) Let T ∈ L be a triangle, that is, tnn = 0 for all n and X a BK space. Then X (T ) = X T is a BK space with the norm x X (T ) = T x X .

(7.3)

(ii) (Part (b) of Theorem 2.4). Let T = Da = (an δnk )∞ n,k=1 be a diagonal matrix with a ∈ U and X be a B K space with AK . Then X (T ) has AK with the norm given by (7.3). We summarize; Part (i) of the next lemma is an immediate consequence of Lemma 7.2, and Part (ii) is obvious. Lemma 7.3 (i) Let τ ∈ U + . Then sτ , sτ0 and sτ(c) are B K spaces with the norm  · sτ and sτ0 has AK . The set ( p )τ for 1 ≤ p < ∞ is a B K space with AK with the norm  · ( p )τ . (ii) Let χ be any of the spaces sτ , sτ0 or sτ(c) . Then we have |Pn (x)| = |xn | ≤ τn xsτ for all n ≥ 1 and all x ∈ χ . Remark 7.1 We recall that if X is a B K space with the norm  ·  X , then by Part 1.10(a) Theorem 1.10, (X, X ) ⊂ B(X ).

318

7 Solvability of Infinite Linear Systems

We recall some more definitions and results. We write B¯ X (0, 1) = {x ∈ X : x X ≤ 1} for the closed unit ball in a B K space X . Thus we obtain A∗B(X )



Ax X = sup x X x=0

 =

sup

x∈ B¯ X (0,1)

Ax X for all A ∈ B(X ).

∞ Also for every a = (an )∞ n=1 ∈ X such that the series n=1 an x n is convergent for all x ∈ X , the identity ∞      ∗ a X = sup  an x n    ¯ x∈ B X (0,1) n=1

is defined and finite (cf. (1.15) and subsequent remark). Furthermore, the following results hold. Lemma 7.4 (Theorem 1.9) Let X be a B K space. Then A ∈ (X, ∞ ) if and only if   sup An ∗X = sup n≥1

n≥1



∞      ank xk  < ∞ sup    ¯ x∈ B X (0,1) k=1

Lemma 7.5 (Example 1.11 or [35, Theorem 1.2.3, p. 155]) For every a ∈ 1 , a∗c0 = a∗c = a∗∞ = a1 = ∞ n=1 |an |. It is clear that if X is a B K space, then A ∈ (X, sβ ) if and only if  ∗ 1   A sup  n  < ∞.  βn n X Since there is no characterization of the class (( p )τ , ( p )τ ) for 1 < p < ∞ and p (τ ) of (( p )τ , ( p )τ ) permitting us to obtain p = 2, we need to define a subset B the inverse of some well-chosen matrix map A ∈ (( p )τ , ( p )τ ). Thus we are led to define the number ⎤ ∞  q p−1 1/ p ∞ τk ⎦ |ank | N p,τ (A) = ⎣ τ n n=1 k=1 ⎡

for 1 < p < ∞ and q = p/( p − 1). Now we can state the following proposition. Proposition 7.1 ([19, Proposition 7, pp. 45–46]) Let τ ∈ U + . Then (i) for every A ∈ Sτ

A∗B(sτ ) = A Sτ

∞ 1 = sup |ank |τk ; τn k=1 n


(ii) (a) B((1 )τ ) = ((1 )τ , (1 )τ ) and A ∈ B((1 )τ ) if and only if A T ∈ S1/τ ; (b) A∗B((1 )τ ) = A T  S1/τ for all A ∈ B((1 )τ ). p (τ ) ⊂ B(( p )τ ), where (iii) For 1 < p < ∞, we have B   p (τ ) = A = (ank )∞ B n,k=1 : N p,τ (A) < ∞ , p (τ ) and for every A ∈ B A∗B(( p )τ ) ≤ N p,τ (A). Proof

(i) First we have

1 Axsτ = sup τn n

∞      1   ank xk  = sup |An x| for all x ∈ sτ ,    τn n

(7.4)

k=1

then A∗B(sτ )

   1 1 sup = sup |An x| = sup τn τn n n x∈Bsτ (0,1)

sup

x∈Bsτ (0,1)

|An x| .

Writing x = τ y in (7.4), we obtain

A∗B(sτ )

1 = sup τn n





 1 ∗ sup (|An (τ y)|) = sup An Dτ ∞ . τn n y∈Bs1 (0,1)

Now we have ∞    = An Dτ ∗∞ = (ank τk )∞ |ank |τk for all n. k=1 1 k=1

We conclude A∗B(sτ )



τk = sup |ank | τn n k=1

.

(ii) By Theorem 1.10, we have B(1 ) = (1 , 1 ) and the condition A ∈ ((1 )τ , (1 )τ ) is equivalent to A T ∈ S1/τ by Theorem 1.22, and it is clear that the identity in (b) holds. p (τ ) be a given infinite matrix and take any x ∈  p . Then (iii) Let A ∈ B


p Ax p

 ∞ ∞   = ank xk  k=1

p ∞ p p ∞ ∞  ∞      ank xk  ≤ |ank xk | ,  =     n=1 k=1

n=1  p

n=1

k=1

and from the Hölder inequality ((A.3)), we obtain for all n ∞

|ank xk | ≤

k=1



|ank |

q

1/q ∞

k=1

1/ p |xk |

p

=



k=1

1/q |ank |

q

· x p ,

k=1

with q = p/(1 − p). We deduce p

Ax p ≤



⎡ ⎤p 1/q ∞ p/q ∞ ∞ p q q ⎣ |ank | x p ⎦ = |ank | x p

n=1

n=1

k=1

k=1

and since p/q = p − 1, we have Ax p

⎡ ∞ p−1 ⎤1/ p ∞ ⎦ x p ≤⎣ |ank |q n=1

and A∗B( p )



Ax p = sup x p n

k=1



⎡ ∞ p−1 ⎤1/ p ∞ ⎦ . ≤⎣ |ank |q n=1

k=1

p (e), then A ∈ B( p ). So if A ∈ B p (τ ) and We have shown that if A ∈ B  D1/τ ADτ ∈ B p (e), then we have D1/τ ADτ ∈ B( p ) and A ∈ B(( p )τ ). This concludes the proof.  Remark 7.2 In [18], we obtained additional results on the Banach algebra B( p ) and in [24], we dealt with the Banach algebra (w∞ (), w∞ ()) and gave some applications to the solvability of matrix equations in w∞ ().

7.2 Solvability of the Equation Ax = b We use Banach algebra techniques to solve infinite linear systems. We start with the following elementary but very useful result for a = (an )∞ n=1 ∈ U (cf. [19, Proposition 8, p. 47]). Proposition 7.2 Let X ⊂ ω be a B K space. We assume D1/a A ∈ (X, X ) and

7.2 Solvability of the Equation Ax = b

321

I − D1/a A∗B(X ) < 1.

(7.5)

Then the equation Ax = b with D1/a b = (bn /an )∞ n=1 ∈ X has a unique solution in X given by x = (D1/a A)−1 D1/a b. Proof First we see that, since D1/a A ∈ (X, X ) and condition (7.5) holds, D1/a A is invertible in B(X ). Since B(X ) is a Banach algebra of operators with identity,   (D1/a A)−1 (D1/a Ax) = (D1/a A)−1 o(D1/a A) x = I x = x for all x ∈ X. Thus the equation Ax = b with D1/a b ∈ X is equivalent to D1/a Ax = D1/a b which  in turn is x = (D1/a A)−1 (D1/a b) ∈ X . This concludes the proof. We can state a similar result in a more general case. As we have seen in Sect. 4, we let   τ = A = (ank )∞ n,k=1 ∈ Sτ : I − A Sτ < 1 , . Then, for 1 < p < ∞, we let and let r = (r n )∞ n=1   p,τ = A = (ank )∞ n,k=1 ∈ (( p )τ , ( p )τ ) : N p,τ (I − A) < 1 . We note that, since Sτ is a Banach algebra, the condition A ∈ τ means that A is invertible and A−1 ∈ Sτ . ∞ In the following we let |a| = (|an |)∞ n=1 for any given sequence a = (an )n=1 . Corollary 7.1 ([19, Corollary 9, p. 47]) Let τ ∈ U + and A be an infinite matrix. We assume D1/a A ∈ τ with a ∈ U. Then we have (i) (a)

For any given b ∈ s|a|τ , the equation Ax = b has a unique solution in sτ given by (7.6) x 0 = (D1/a A)−1 (D1/a b) = A−1 b

with A−1 ∈ (s|a|τ , sτ ). 0 , the (b) If limn→∞ (ank /ann τn ) = 0 for all k ≥ 1, then, for any given b ∈ s|a|τ 0 equation Ax = b has a unique solution in sτ given by (7.6), and A−1 ∈ 0 , sτ0 ). (s|a|τ (c) If limn→∞ (ank /ann τn ) = lk for some lk for all k ≥ 1, and lim

n→∞

∞ 1 ank τk τn ann k=1

= l for some l

(c) then for any given b ∈ s|a|τ , the equation Ax = b has a unique solution in (c) (c) −1 , sτ(c) ). sτ given by (7.6) with A ∈ (s|a|τ T (ii) (a) If A D1/a ∈ 1/τ , then for any given b ∈ (1 )|a|τ , the equation Ax = b has a unique solution in (1 )τ given by (7.6) with A−1 ∈ (1 (|a|τ ), 1 (τ )).


(b) Let p > 1. If D1/a A ∈ p,τ , then for any given b ∈ ( p )|a|τ , the equation Ax = b has a unique solution in ( p )τ given by (7.6) with A−1 ∈ (( p )|a|τ , ( p )τ ). Proof

(i)

(a) If D1/a A ∈ τ , then we have by Part (i) of Proposition 7.1, I − D1/a A Sτ = I − D1/a A∗B(sτ ) < 1.  Thus D1/a A is invertible in B(sτ ) Sτ . Then (D1/a A)−1 ∈ Sτ , that is, A−1 ∈ (s|a|τ , sτ ) and we conclude by Proposition 7.2. (ii) Since the set sτ0 is a B K space with AK , we have B(sτ0 ) = (sτ0 , sτ0 ). Also, since D1/a A ∈ τ and limn→∞ (ank /ann τn ) = 0 for all k ≥ 1, we deduce D1/τ (D1/a A)Dτ ∈ (c0 , c0 ). So D1/a A ∈ (sτ0 , sτ0 ). Then we have   I − D1/a A∗B(sτ0 ) = sup (I − D1/a A)xsτ = I − D1/a A Sτ < 1 x∈Bsτ

and we conclude by Proposition 7.2. (c) Here we have I − D1/a A∗B(s (c) ) = I − D1/a A Sτ < 1. τ



Then (D1/a A)−1 ∈ Sτ B(sτ(c) ), so (D1/a A)−1 is an operator represented by an infinite matrix, since (D1/a A)−1 ∈ Sτ and the condition (D1/a A)−1 ∈ B(sτ(c) ) implies (D1/a A)−1 ∈ (sτ(c) , sτ(c) ). We conclude again by Proposition 7.2. (ii) (a) By Part (ii) (b) of Proposition 7.1, the condition A T D1/a ∈ 1/τ implies I − D1/a A∗B(1 (τ )) = I − (D1/a A)t  S1/τ < 1 and

(D1/a A)−1 ∈ B(1 (τ )) = ((1 )τ , (1 )τ ) .

Thus A−1 ∈ ((1 )|a|τ , (1 )τ ) and we conclude by Proposition 7.2. (b) By Part (iii) of Proposition 7.1, we have     (D1/a A)−1 ∈ B ( p )τ = ( p )τ , ( p )τ , and so A−1 ∈ (( p )|a|τ , ( p )τ ). Again we conclude by Proposition 7.2. This completes the proof. As direct consequences of the preceding we consider the next examples.
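A finite-section version of the solution scheme of Proposition 7.2 and Corollary 7.1 might look as follows; the matrix A, the sequence a, the weight τ and the right-hand side are made-up data, and the Γ_τ condition is only checked on the truncated section.

```python
# A finite-section sketch (ours) of Corollary 7.1: scale the system by D_{1/a},
# check ||I - D_{1/a}A||_{S_tau} < 1 on the section, then solve for x.
import numpy as np

def solve_scaled(A, a, b, tau):
    D_inv_a = np.diag(1.0 / a)
    M = D_inv_a @ A                                   # the matrix D_{1/a} A
    gap = np.max((np.abs(np.eye(len(a)) - M) @ tau) / tau)
    if gap >= 1.0:
        raise ValueError("||I - D_{1/a}A||_{S_tau} < 1 fails on this section")
    return np.linalg.solve(M, D_inv_a @ b)            # x = (D_{1/a}A)^{-1} D_{1/a} b

n = 25
a = np.full(n, 2.0)                                   # hypothetical scaling sequence
A = 2.0 * np.eye(n) + np.diag(0.5 * np.ones(n - 1), -1)   # sample lower bidiagonal matrix
tau = 2.0 ** np.arange(n)                             # weight tau_n = 2^n
b = tau * np.abs(a)                                   # a right-hand side taken in s_{|a| tau}
x = solve_scaled(A, a, b, tau)
print("residual:", np.max(np.abs(A @ x - b)))
```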




Example 7.1 We consider the matrix ⎛

1 ⎜c ⎜ 2 A1 = ⎜ ⎜c ⎝.

a 1 c .

a2 a 1 .

. a2 a .

.



⎟ ⎟ ⎟ ⎟ .⎠

([32, Example 3, p. 158]). We easily see that if 0 < a < 1 for 0 < c < 1 and 2a + 2 − 3ac < 1,

(7.7)

then A1 ∈ 1 . Indeed, we have I − A1  S1 ≤



a + n

n=1



cn =

n=1

c a + < 1. 1−a 1−c

We conclude the equation A1 x = b for any b ∈ s1 has a unique solution in s1 given by ∞ x0 = (I − A1 )i b = A−1 (7.8) 1 b. i=0

Now if (7.7) is satisfied, then we also have I − A1T  S1 ≤

c a + < 1, 1−a 1−c

and for any b ∈ 1 , the equation A1 x = b has a unique solution in 1 given by (7.8). Example 7.2 For the matrix A given in Sect. 7.1, it can easily be verified that A ∈ (sr , sr ) = Sr and A ∈ / r for all r < 1. If we define A + , A = D(n)∞ n=1 then

and



⎞ 1 −1/2 0 ⎜ ⎟ . . ⎜ ⎟

⎜ 1 −1/ (n + 1) ⎟ A =⎜ ⎟ ⎝ 0 ⎠

I − A  Sr =

r . 2

Let r < 1. Then + is bijective from sr to itself. Since A and + ∈ Sr , we have


  A( + x ) = A + x for all x ∈ sr . So the equation Ax = b with x = + x is equivalent to     A( + x ) = D(n)∞ (A + )x = (D(n)∞ A + )x = D(n)∞ b. D(n)∞ n=1 n=1 n=1 n=1 We conclude that the equation Ax = b has a solution given by  −1 b = + D(n)∞ A + D(n)∞ b x = A −1 D(n)∞ n=1 n=1 n=1 in the set X1 =



sr

r 0, we have C1T ∈ / r , but for every r < 1, we obtain  ∞   1 1 T + r k−n+1 < 1. C1  Sr = sup n I − D(n)∞ (7.9) − n=1 k k + 1 n k=n So the equation C1T x = b

(7.10)

with x = + x is equivalent to  T +  C1 ( x ) = D(n)∞ b. D(n)∞ n=1 n=1 Now since C1T , + ∈ Sr , we have C1T ( + x ) = (C1T + )x and  T  +    C1 x = D(n)∞ C1T + x for all x ∈ sr . D(n)∞ n=1 n=1

(7.11)

C1T + )−1 D(n)∞ b is the unique solution of (7.11). So (7.9) implies x = (D(n)∞ n=1 n=1 + Using the fact that is bijective from sr to itself for all r < 1, we conclude that


equation (7.10) has a unique solution in the set X 1 =

!

r 0, 0 < ρ < 1 and 1 < p < ∞ satisfy the inequality p ρp < τ p − 1 with q = ; (1 − ρ q ) p−1 p−1 and we assume

(7.12)

⎧ 1 ⎪ ⎨ |ank | ≤ τ for 1 ≤ k < n − 1 n ⎪ ⎩ ann = 1 for n ≥ 1 ank = 0 otherwise.

Then A ∈ (( p )1/ρ , ( p )1/ρ ) and for any b ∈ ( p )1/ρ , the equation Ax = b has a unique solution in ( p )1/ρ . Proof We have  (k−n)q n−1   1 ρq 1 1 (k−n)q 1 σn = |ank | ≤ τq ≤ τq , ρ n k=1 ρ n 1 − ρq k=1 ∞

q

k=n

and so



σnp−1 ≤

n=1

∞ 1 ρp , q p−1 (1 − ρ ) nτ p n=2

since p − 1 = p/q and τ p − 1 > 0 in (7.12). Now from the inequality & ∞ 1 dx 1 , ≤ = τ p τ p n x τp − 1 n=2 ∞

1

and using (7.12), we obtain 

N p,1/ρ (I − A)

p

=

∞ n=1

σnp−1 ≤

ρp 1 < 1. · q p−1 (1 − ρ ) τp − 1

We conclude applying Part (ii) of Corollary 7.1.




Remark 7.3 If we put ρ = 1/2, p = q = 2 and τ > 2/3, then (7.12) holds and A is bijective from (2 )2 to itself.

7.3 Spectra of Operators Represented by Infinite Matrices In this section, we give some applications of the preceding results to various domains, where the solvability of linear equations is used. We recall that the spectrum of an operator A ∈ (X, X ) is the set σ (A, X ) of all complex numbers λ such that A − λI is not invertible. We denote by ρ(A, X ) = C \ σ (A, X ) the resolvent set of A. There are many results on the spectrum and the fine spectrum of operators such as , the Cesàro operator C1 and the operator of weighted means mapping in sets of the form s1 , c0 , c, or  p (cf. ,for instance, Altay, Basar et al [5]). Here we focus on the cases when the operators , T = + , and N q are mapping between sets of the form sτ . In the following, we need to explicitly compute the inverse of any Toeplitz operator mapping between sets of the form sr . So we use the isomorphism ϕ : f → A from the algebra of the power series into the algebra of the corresponding matrices. A recent paper [25] contains results on the spectra of special operators between generalized spaces of sequences that are strongly bounded or strongly convergent to zero.  k First we recall that we can associate with any power series f (z) = ∞ k=0 ak z defined in the open disc |z| < R the upper triangular infinite matrix A = ϕ( f ) ∈ ! Sr defined by 0 0. Then we have (i) σ ( , sr ) = D(1, 1/r ); (ii) σ ( + , sr ) = D(1, r ). Proof

(i) First we let λ (A) =

1 (λI − A), λ−1

where A is a given matrix and λ = 1 is any complex number. We suppose that λ∈ / σ ( , sr ). This means that λI − for λ = 1 is bijective from sr into itself. Then λI − is invertible, since it is a triangle. The infinite matrix (λI − )−1 is also bijective from sr into itself, and by Lemma 7.1, we have (sr , sr ) = Sr and (λI − )−1 ∈ Sr . We are led to explicitly compute the inverse of λI − . We have (λI − )T = ϕ(λ − 1 + z) and −1  (λI − )T =ϕ Since



1 λ−1+z

 .


1 zk = (−1)k for |z| < |λ − 1|, λ−1+z (λ − 1)k+1 k=0 we obtain (λI − )−1 = (ξnk )∞ n,k=1 , where ξnk

⎧ ⎨

(−1)n−k = (λ − 1)n−k+1 ⎩ 0

if k ≤ n otherwise.

The condition (λI − )−1 ∈ Sr is then equivalent to

χ = r sup n

n k=1

1 (|λ − 1|r )n−k+1



⎡( ⎢ = r sup ⎣

1 |λ−1|r

n

)n+1

1 |λ−1|r



1 |λ−1|r

−1

⎤ ⎥ ⎦ < ∞.

(7.15) We see that (7.15) is equivalent to 1/(|λ − 1|r ) < 1. This shows that λ ∈ / D(1, 1/r ). Conversely, we assume λ ∈ / D(1, 1/r ). Since ⎛

⎞ 1 ⎜ 1/(λ − 1) 1 0 ⎟ ⎜ ⎟ 1 ⎜ 1/(λ − 1) 1 ⎟ (λI − ) = ⎜ λ ( ) = ⎟, λ−1 ⎝0 . . ⎠ we obtain λ ( ) − I  Sr =

1 < 1, |λ − 1|r

and λI − is bijective from sr into itself. + ∞ )n,k=1 , (ii) If λI − + is bijective from sr into itself, we have (λI − + )−1 = (ξnk where ⎧ k−n ⎨ (−1) if k ≥ n + ξnk = (λ − 1)k−n+1 ⎩ 0 otherwise. The condition (λI − + )−1 ∈ Sr is equivalent to χ = sup n

∞ k=n

1 r k−n |λ − 1|k−n+1

< ∞,

and to r/|λ − 1| < 1, which shows λ ∈ / D(1, r ). Conversely, we take λ ∈ / D(1, r ). By similar arguments as those above, we have λI − + = ϕ(λ − 1 + z) and


      r λ ( + ) − I  =  1 ϕ(−z) = < 1,  λ − 1 Sr |λ − 1| Sr which shows that λI − + is bijective from sr into itself. This completes the proof.



Concerning the triangle  = −1 we recall that nk = 1 for k ≤ n. We can state the following result. Proposition 7.3 ([14, Theorem 5, p. 292]) Let r > 1. Then (i) 1/λ ∈ D(1, 1/r ) if and only if λ ∈ σ (, sr ). (ii) For all λ ∈ / σ (, sr ), λI −  is bijective from sr into itself and

[(λI − )−1 ]nk

⎧ 1 ⎪ ⎪ ηnn = ⎪ ⎪ 1 − λ ⎨   1 −λ n−k−1 = η = · nk ⎪ ⎪ 1 − λ)2 1−λ ⎪ ⎪ ⎩ ηnk = 0

for all n for k ≤ n

(7.16)

otherwise.

Proof (i) follows from Part (i) of Lemma 7.7. (ii) The operators  and are bijective from sr into itself for r > 1. We have ( − λI )T = ϕ(1 − λ + z/(1 − z)) and 1 1−λ+

z 1−z

 ∞  −λ n−1 n 1 1 1−z = − = z . 1 − λ + λz 1 − λ (1 − λ)2 n=1 1 − λ

So we obtain the entries of (λI − )−1 in the statement in (7.16).



Now we are interested in the study of the spectrum of the operator N q considered as operator from sτ into itself, or from sτ0 into itself. We state the following theorem without proof and use the notation + Jq =

, qn :n≥1 . Qn

Theorem 7.1 ([20, Theorem 11, p. 417]) We assume q is nondecreasing, q/Q ∈ c0 and τ ∈ . Then we have (i) sτ (N q − λI ) = sτ for all λ ∈ / Jq ∪ {0}; / Jq ∪ {0} and (ii) sτ0 (N q − λI ) = sτ0 for all λ ∈     σ N q , sτ = σ N q , sτ0 = Jq ∪ {0}.


We immediately deduce from the preceding the following result, see also [12, Proposition 1, pp. 57–58]. Corollary 7.3 Let r > 1. Then we have   σ (C1 , sr ) = σ C1 , sr0 = {0} ∪ {1/n : n ≥ 1}.
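Corollary 7.3 can be illustrated on a finite section: the N × N truncation of the Cesàro matrix C₁ is lower triangular with diagonal entries 1, 1/2, …, 1/N, so its eigenvalues are exactly these values, while the spectrum of the infinite operator on s_r adds the accumulation point 0. The script below (ours, with an arbitrary section size) confirms this.

```python
# Illustration (ours) of Corollary 7.3 on a finite section of the Cesaro matrix:
# row n of C1 equals 1/n in its first n columns, so the section is lower
# triangular and its eigenvalues are the diagonal entries 1/n.
import numpy as np

N = 8
C1 = np.tril((1.0 / np.arange(1, N + 1))[:, None] * np.ones((N, N)))
eigs = np.sort(np.linalg.eigvals(C1).real)[::-1]
print("eigenvalues of the section:", np.round(eigs, 6))
print("1/n for n = 1..N:        ", np.round(1.0 / np.arange(1, N + 1), 6))
```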

7.4 Matrix Transformations in χ(m ) In this section, we establish some properties of the set χ ( ) for χ = sτ , sτ0 , sτ(c) or ( p )τ . The characterization of the class (χ ( m ), χ ( m )) given in [14] is complicated. So we deal with the subset (χ ( m ), χ ( m )L of infinite Toeplitz triangles that map χ ( m ) into itself. First, we consider the sets X = χ ( m ) for χ = sτ , sτ0 , sτ(c) or ( p )τ with 1 ≤ p < ∞. We recall that   , + τn−1 n 0 such that the matrix ⎛

a1,αn0 ⎜ .

1 A1 = ⎜ ⎝ . ak1 ,αn0

. . . .

⎞ . a1,αn0 +k1 −1 ⎟ . . ⎟ ⎠ . . . ak1 ,αn0 +k1 −1

is invertible. So we obtain by induction a strictly increasing sequence of integers k1 < k2 < · · · < kn < . . . and a sequence of corresponding matrices. By similar arguments as those above, we can determine a solution satisfying Theorem 7.2. In fact, a Pòlya system has infinitely many solutions, see [9]. The matrix A = (k n )∞ n,k=1 is an important example of a Pòlya matrix. The matrix A = (a (k−n) )∞ n,k=1 , 2

where a is a real with 0 < a < 1, is also a Pòlya matrix and satisfies A ∈ 1 . So, for any given bounded sequence b, the equation Ax = b has a unique bounded solution x = A−1 b. 4 3 Now we define the matrix A t1 , . . . , t p with integer p ≥ 1 obtained from A by adding the following rows


t1 = (t11 , t12 , . . . ), t2 = (t21 , t22 , . . . ), . . . t p = (t p1 , t p2 , . . . ) with tkk = 0 for k = 1, 2, . . . , p, where tnk ∈ C, that is, ⎡

t11 ⎢ . ⎢ ⎢ t p1 ⎢ 3 4 ⎢ a11 A t1 , . . . , t p = ⎢ ⎢ . ⎢ ⎢ an1 ⎢ ⎣ . We let

. . t1k .. . . . t pk . . a1k .. . . . ank .. .

⎤ .. . .⎥ ⎥ . .⎥ ⎥ . .⎥ ⎥. . .⎥ ⎥ . .⎥ ⎥ . .⎦

3 4 b u 1 , . . . , u p = (u 1 , . . . , u p , b1 , . . . )T .

If b = 0, we write

4 3  u 1 , . . . , u p = (u 1 , ...u p , 0, . . . )T .

−1 ∞

−1 We also write a −1 3 = (ann 4 )n=1 where ann are the inverses of the diagonal elements of the matrix A t1 , . . . , t p . Then we have the following result (cf. [11, Theorem 1, pp. 51–52] and [32, Proposition 3, pp.158–159]).

Proposition 7.10 Let τ ∈ U + and 4 4 3 3 Da −1 A t1 , . . . , t p ∈ τ and Da −1 b u 1 , . . . , u p ∈ sτ .

(7.34)

Then we have (i) The solutions of Ax = b in the space sτ are given by  4−1 4 3 3 x = Da −1 A t1 , . . . , t p Da −1 b u 1 , . . . , u p for u 1 , u 2 , . . . , u p ∈ C. (ii)

 The linear space K er (A) sτ of the solutions of Ax = 0 in the space sτ is of dimension p and is given by K er (A) =

5

  sτ = span x1 , . . . , x p ,

where 4−1  3  0, 0, . . . , 1, 0, . . . , 0 for k = 1, 2, . . . , p xk = A t1 , . . . , t p with 1 being the k-th term of the p-tuple.


Proof Since Part (i) is a direct consequence of (ii), we only show (ii). For any scalars u 1 , . . . , u p , the system 4 4 3 3 Da −1 A t1 , . . . , t p x = Da −1  u 1 , . . . , u p has one solution in sτ given by 4−1 4 3 3  Da −1  u 1 , . . . , u p = u k xk , x = Da −1 A t1 , . . . , t p p

k=1

which is also a solution of Ax = 0. It is clear that (xk )1≤k≤ 3 p are linearly 4 −1 independent,

for k = 1, 2, . . . , p) is the k-th column of (A t , . . . , t ) and by (7.34), since x 1 p k 4 3 (A t1 , . . . , t p )−1 is injective in φ. Therefore the identity  3 4−1 3 4 A t1 , . . . , t p  u1, . . . , u p = u k xk = 0 for u 1 , . . . , u p ∈ C p

k=1

implies u 1 = · · · = u p = 0. Now we assume dim (K er (A) ∩ sτ ) > p and consider x p+1 , x p+2 , . . . , x p+q such that (x1 , . . . , x p , x p+1 , x p+2 , . . . , x p+q ) are linearly independent in the space K er A ∩ sτ . Then replacing some linear combination p+q λk xk

x= k=1

(where λk is the unknown) in the system 4 3 4 3 A t1 , . . . , t p x =  u 1 , . . . , u p , we obtain the finite system p+q

λk tn xk = u n for n = 1, 2, . . . , p,

k=1



∞ where tn xk = ∞ j=1 tn j x jk , x k = (x nk )n=1 . This system cannot have a unique solution since the number of unknowns is strictly larger than the number of equations. So the hypothesis dim(K er A ∩ sτ ) > p leads to a contradiction. We note that this result is independant of the choice of the rows t1 , t2 , . . . , t p . This concludes the proof.   p Remark 7.7 The solutions given by (i) can be written as x = x ∗ + i=1 u i xi , where


$$x^* = \bigl(D_{a^{-1}} A\bigl[t_1, \dots, t_p\bigr]\bigr)^{-1} D_{a^{-1}}\, b\bigl[0, 0, \dots, 0\bigr] \qquad (7.35)$$
is a particular solution of $Ax = b$.
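A finite-dimensional sketch (ours, with hypothetical data) of the bordering construction of Proposition 7.10: we truncate a matrix of the type of Example 7.4 below, drop p rows so that the square section has a p-dimensional kernel, add the bordering rows t₁, …, t_p, and solve the bordered system; all numerical values are made up.

```python
# Finite-section sketch (ours) of Proposition 7.10: solve A[t_1,...,t_p] x = b[u_1,...,u_p]
# to obtain a particular solution (u = 0) and a kernel basis (b = 0, u = unit vectors).
import numpy as np

def bordered_solve(A_rows, t_rows, b, u):
    M = np.vstack([t_rows, A_rows])          # the bordered matrix A[t_1,...,t_p]
    rhs = np.concatenate([u, b])             # the bordered right-hand side b[u_1,...,u_p]
    return np.linalg.solve(M, rhs)

n, p = 12, 1
a_, b_ = 0.2, 0.4                            # hypothetical parameters, as in Example 7.4 below
A_full = np.zeros((n, n))
for i in range(n):
    for k in range(n):
        if k > i:
            A_full[i, k] = a_ ** (k - i - 1)
        elif k < i:
            A_full[i, k] = b_ ** (i - k + 1)
A_rows = A_full[: n - p, :]                  # drop p rows: the section now has a p-dimensional kernel
t_rows = np.array([[a_ ** j for j in range(n)]])   # bordering row t_1 = (1, a, a^2, ...)
b = np.ones(n - p)

x_particular = bordered_solve(A_rows, t_rows, b, u=np.zeros(p))
kernel_vector = bordered_solve(A_rows, t_rows, np.zeros(n - p), u=np.ones(p))
print("||A x* - b|| =", np.max(np.abs(A_rows @ x_particular - b)))
print("||A x_1||    =", np.max(np.abs(A_rows @ kernel_vector)))
```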

Example 7.4 ([32, Example 5, p. 159]) Let $A$ be the infinite matrix
$$A = \begin{pmatrix} 0 & 1 & a & a^2 & \cdots \\ b^2 & 0 & 1 & a & \cdots \\ b^3 & b^2 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

We assume $0 < b < 1 < 1/a$. So $A \in S_1$ and it is clear that $A \notin \Gamma_1$, since the diagonal entries vanish. Now we consider the sequence $t_1 = (1, a, a^2, \dots)$. We have
$$\|I - A[t_1]\|_{S_1} = \frac{b^2}{1-b} + \frac{a}{1-a},$$
and $A[t_1] \in \Gamma_1$ for infinitely many pairs of reals $(a, b)$ (for instance, if $0 < a < 1/3$ and $0 < b < 1/2$). Then the system $Ax = b$ with $b \in s_1$ has infinitely many solutions in $s_1$ given by $x = (A[t_1])^{-1} b[u_1] + u x_1$ for $u \in \mathbb{C}$.

Example 7.5 ([32, Example 6, p. 160]) Let $A$ be the following matrix

10 ⎜ 1 ⎜ ⎜ A=⎜ ⎜ 0 ⎜ ⎝

Since

⎞ ... 1 0 (2!) 0 ... ⎟ 2 ⎟ 1 1 0 (2!)2 ... ⎟ ⎟. 1 0 ... ⎟ ⎟ . ... ⎠ .

1 (2!)2

0

1 (4!)2

 ∞  rn 2 I − A Sr = , (2n!) n=1

√ we deduce A ∈ r for r < 2. Thus the equation Ax = b for b ∈ sr has a unique solution in sr . Now we let   1 1 1 t1 = , 0, , 0, , 0, . . . , (2!)2 (4!)2 (6!)2   1 1 1 , 0, , 0, ,... . t2 = 0, (2!)2 (4!)2 (6!)2


Then we have 4 A t1 , t2  ∈ r for r = 3. So the system Ax = b with b ∈ s3 has infinitely many solutions in s3 given by x = (A t1 , t2 )−1 b 0, 0 + u 0 x0 + u 1 x1 for u 0 , u 1 ∈ C. We also note that using the map ϕ we then can locate the zeros of the series f (z) =

∞ z 2n . (2n!)2 n=0

So we easily see that the equation f (z) = 0 has two zeros in the disc |z| ≤ 3, four zeros in the disc |z| < 6, etc.…. Now, we can study the properties of matrices obtained by deleting some rows from A. This problem can be formulated as follows: Let the matrix A and the sequence b satisfy (7.34). For a given integer q, let tn

∞ = (tnk )k=1 for n = 1, . . . , q and ∞

|tnk |τk < ∞ for n = 1, 2, . . . , q.

(7.36)

k=1

3 4 We note that we have not necessarily D A t1 , . . . , tq ∈ τ , where D is the diagonal 4 3 matrix whose entries are the inverses of the diagonal elements of A t1 , . . . , tq . Then we study the new equation for given u 1 , u 2 , . . . , u q ∈ C 4 3 4 3 A t1 , . . . , tq x = b u 1 , . . . , u q .

(7.37)

This equation is equivalent to the finite linear system p

a˜ nk u k = vn for n = 1, 2, . . . , q,

(7.38)

k=1

where u 1 , . . . , u p are the unknowns and a˜ nk = tn xk . Now we state the following result. Corollary 7.6 Let ζ denote the rank of the system in (7.38). We assume that hypotheses given (7.34) hold. Then we have (i) If ζ = p = q, then the system in (7.37) has a unique solution in sτ . (ii) If ζ = q < p, then 4 3 dim K er (A t1 , . . . , tq ) ∩ sτ = p − ζ


p and the solutions of the system in (7.37) are given by x = x ∗ + i=1 u i xi , where the scalars p − ζ among u 1 , . . . , u p are arbitrary. (iii) If ζ < q, then there is a scalar χ and an integer i ∈ {ζ + 1, . . . , q} such that the system in (7.37) has no solution in sτ for u i = χ . + Remark 7.83 Let d ∈ U 4 . If ζ < q, then we see that for a given sequence b ∈ ω



such that b u 1 , . . . , u q ∈ sd , the system in (7.37) has no solution in sτ . This means

4 3 sd  A t1 , . . . , tq sτ . This method is used in the following for the study of the Hill equation with a second member.

7.7 The Hill and Mathieu Equations Another application of the results of Sect. 7.6 is the Hill equation, which was studied by Brillouin [8], Ince [29], and more recently by Magnus and Winkler [33], Hochstadt [28] and Rossetto [38]. These authors used an infinite determinant and showed that the Hill equation has a nonzero solution if the determinant is zero. B. Rossetto provided an algorithm to calculate the Floquet exponent from the generalization of the notion of a characteristic equation and of a truncated infinite determinant. Here we apply our results on infinite linear systems to the Hill equation and obtain a new method to calculate the Floquet exponent. Then we determine the solutions of the equation and give a method to approximate them. Then we consider the Hill equation with a second member. The Hill equation is the well-known differential equation y

(z) + J (z)y(z) = 0,

(7.39)

where z ∈  and  ⊂ C is an open set containing the real axis. The coefficient J (z) can be written as an absolutely convergent Fourier series, that is, J (z) =

+∞

θk e2ikz ,

(7.40)

k=−∞

with θn = θ−n for every n. Floquet’s theorem [36], shows that a solution of (7.39) can be written in the form y(z) = eμz

+∞ k=−∞

xk e2ikz ,

(7.41)


where μ is a complex number, called referred to as Floquet exponent. To every μ for which the equation in (7.39) has a nonzero solution, we associate the series b(z) = eμz

+∞

bk e2ikz

(7.42)

k=−∞

and study the equation

y

(z) + J (z)y(z) = b(z).

(7.43)

 2ikz has a second derivative It is natural to assume that the sum of the series +∞ k=−∞ x k e which is obtained by a term by termwise differentiation and that the corresponding series are absolutely convergent for all z ∈ . Substituting (7.41) and (7.42) in (7.43), we obtain the infinite linear system (μ + 2in) xn + 2



θ|n−k| xk = bn for n ∈ Z,

(7.44)

k=−∞

which can be written in the form Ax = b, where the matrix A = (ank )∞ n,k=0 is defined by

$$a_{nk} = \begin{cases} \theta_0 + (\mu + 2ni)^2 & \text{for } n = k \in \mathbb{Z}, \\ \theta_{|n-k|} & \text{for } k \ne n, \end{cases}$$
and $x^T = (\dots, x_{-1}, x_0, x_1, \dots)$, $b^T = (\dots, b_{-1}, b_0, b_1, \dots)$. To solve the system in (7.44) we have to determine $\mu \in \mathbb{C}$ for which the equation $Ax = 0$ has a nonzero solution in $s_r$. We note that Proposition 7.10 can be applied here. In this part, we use natural extensions of the sets $s_r$ and $S_r$ for $r > 0$ and denote the elements of $s_r$ and $S_r$ by $x = (x_n)_{n \in \mathbb{Z}}$ (or $(x_n)_{n=-\infty}^{\infty}$) and $(a_{nk})_{n,k=-\infty}^{\infty}$, respectively. We recall that when the product $Ax$ exists, it is defined by the one-column matrix $\bigl(\sum_{k=-\infty}^{+\infty} a_{nk} x_k\bigr)_{n \in \mathbb{Z}}$. As suggested by the numerical application below, we consider the next hypothesis: there are reals $\rho, r_0 > 0$ with $0 < \rho < 1$ and an integer $s$ such that $\theta_0 \ne -(\mu + 2is)^2$ and, for $k \ne s$,
$$\bigl|\theta_0 + (\mu + 2is)^2\bigr| \le \sum_{n=1}^{\infty} |\theta_n|\bigl(r_0^{\,n} + r_0^{-n}\bigr) \le \rho\,\bigl|\theta_0 + (\mu + 2ik)^2\bigr|. \qquad (7.45)$$
Then $a_{nn} \ne 0$ for every $n$, and the first inequality of (7.45) means


D1/a A ∈ / r0 . Now we consider the second member of the inequalities. The numerical application leads to deal with the case s = 0. It is also convenient to consider s = −1 or s = 1. For instance, we choose s = 1. Then it is necessary to consider the matrix A∗ obtained by deleting the rows with indices 0 and 1 from A. This implies that the row with index −1 of A becomes the row with index 0 of A∗ (and the row with index 2

( p) of A becomes the row with index 13 of A∗ ). Now 4 let∗e ∞ = (. . . , 0, 1, 0, . . . ), where ∗ (0) (1) = (ank )n,k=−∞ is the matrix obtained 1 is in the p-th position. Then A e , e by insering the rows e (0) and e (1) between the rows with indices 0 and 1 of A∗ , that is, ⎛ ⎞ . ⎜ . ⎟ ⎜ ⎟ ⎜ . θ1 θ0 + (μ − 2i)2 θ1 θ2 . . ⎟ ⎜ ⎟ 3 4 0 1 0 . . ⎟ A∗ e (0) , e (1) = ⎜ ⎜. . ⎟. ⎜ ⎟ . 0 1 0 . ⎜ ⎟ 2 ⎝ θ2 θ1 θ0 + (μ + 4i) θ1 ⎠ θ3 . Now let xq =

∞ 3 4 T [I − D ∗ A∗ e (0) , e (1) ]n D ∗ e (q) for q = 0, 1

(7.46)

n=0

with D ∗ = D(1/ann∗ )n∈Z . Then it can easily be seen by Proposition 7.10 that {x0 , x1 } is a basis of K er A∗ ∩ sr0 . This result follows from Corollary3 7.6 and the 4 fact that the second member of the inequalities in (7.45) means D ∗ A∗ e (0) , e (1) ∈ r0 . We are led to state the next remark. Remark 7.9 It can3 easily be4 seen that the identities in (7.45) may be written in the form xq = (D ∗ A∗ e (0) , e(1) )−1 e (q)T for q = 0, 1. We note that we obtain similar results by deleting the rows with indices −1 and 0 from A instead of those with indices 0 or 1. Now we put

$$\tilde a_{pq} = A_p x_q = \sum_{k=-\infty}^{\infty} a_{pk}\, x_{kq}, \qquad (7.47)$$
if $x_q = (x_{kq})_{k=-\infty}^{\infty}$. By the second inequality in (7.45), the series defined in (7.47) are convergent for $p = 0, 1$ and for all $x \in \operatorname{Ker} A^* \cap s_{r_0}$. We will see in the numerical application that there is $\mu = \mu_1$ which satisfies (7.45) and the condition
$$D(\mu) = \begin{vmatrix} \tilde a_{00} & \tilde a_{01} \\ \tilde a_{10} & \tilde a_{11} \end{vmatrix} = 0.$$
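One simple way (ours, not the authors' Banach-algebra method) to approximate a Floquet exponent numerically is to truncate the system (7.44) symmetrically and look for a purely imaginary μ = iν at which the truncated matrix becomes singular. The Mathieu-type data θ₀ = 0.5, θ₁ = 0.05, the truncation size and the search interval below are arbitrary.

```python
# Truncation-based sketch (ours) for a Floquet exponent mu = i*nu of (7.44):
# for mu = i*nu the truncated matrix is real, so we bisect a sign change of its
# determinant; for small theta_1 one expects nu close to sqrt(theta_0).
import numpy as np

def hill_matrix(nu, theta, N):
    """Truncated matrix of (7.44) for mu = i*nu and Fourier data theta[0], theta[1], ..."""
    idx = np.arange(-N, N + 1)
    A = np.zeros((2 * N + 1, 2 * N + 1))
    for i, n in enumerate(idx):
        for j, k in enumerate(idx):
            if n == k:
                A[i, j] = theta[0] - (nu + 2 * n) ** 2
            else:
                A[i, j] = theta[abs(n - k)] if abs(n - k) < len(theta) else 0.0
    return A

def floquet_exponent(theta, N=6, lo=0.01, hi=0.99, steps=200):
    """Bisect a sign change of det A_N(i*nu) on (lo, hi); return nu or None."""
    det = lambda nu: np.linalg.det(hill_matrix(nu, theta, N))
    grid = np.linspace(lo, hi, steps)
    for a, b in zip(grid[:-1], grid[1:]):
        if det(a) * det(b) < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                a, b = (m, b) if det(a) * det(m) > 0 else (a, m)
            return 0.5 * (a + b)
    return None

delta, eps = 0.5, 0.05          # hypothetical Mathieu-type data: theta_0, theta_1
nu = floquet_exponent([delta, eps])
print("nu =", nu, " (for eps -> 0 one expects nu -> sqrt(delta) =", np.sqrt(delta), ")")
```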


In the next results we write A = Aμ . We obtain the following result as a direct consequence of Corollary 7.6. Proposition 7.11 (i) If there is a pair ( p, q) ∈ {0, 1}2 such that a˜ pq = 0, then dim(K er Aμ1 ∩ sr0 ) = 1. If for every pair ( p, q) ∈ {0, 1}2 , we have a˜ pq = 0, then

(ii)

dim(K er Aμ1 ∩ sr0 ) = 2. Proof (i) The substitution of x = ux0 + vx1 with u, v ∈ C in the equation Ax = 0 gives a finite linear system, whose terms a˜ pq with p, q ∈ {0, 1} are the coefficients



and u and v are the  unknowns. Then a special combination of x0 and x1 yields a basis of K er Aμ1 sr0 . (ii) can be obtained similarly.  We immediately deduce the following result. Corollary 7.7 (i) Given any integer k ∈ Z, Proposition 7.11 is still true if we replace μ1 by μ1 + 2ik. (ii) The equation in (7.39) has infinitely many solutions of the form y(z) = eμ1 z

+∞

xk e2ikz ,

k=−∞

where (xn )∞ n=−∞ ∈ sr . The space of these solutions is one dimensional if there exist a pair ( p, q) for which a˜ pq = 0, and two dimensional otherwise. Proof

(i) follows from the equivalence of x = (xn )∞ n=−∞ ∈ K er Aμ1

and

5

sr0

xk = (xn+k )∞ n=−∞ ∈ K er (Aμ1 +2ki ) ∩ sr0 .

(ii) is a direct consequence of Proposition 7.11. This concludes the proof.



Remark 7.10 The number μ1 is the unique value of μ for which (7.45) is satisfied. Corollary 7.8 Proposition 7.11 is still satisfied when μ1 is replaced by ±μ1 + 2ik for k ∈ Z.


Proof We write

 Aμ =

Cμ C

C

Cμ ,



where Cμ = (ank )n≥0 , Cμ = (ank )n