Fractional Differential Equations: An Approach via Fractional Derivatives (Applied Mathematical Sciences 206) [1st ed. 2021], ISBN 3030760421, 9783030760427

This graduate textbook provides a self-contained introduction to modern mathematical theory on fractional differential equations.


English Pages 382 [377] Year 2021


Table of contents :
Preface
Contents
Acronyms
Part I Preliminaries
1 Continuous Time Random Walk
1.1 Random Walk on a Lattice
1.2 Continuous Time Random Walk
1.3 Simulating Continuous Time Random Walk
2 Fractional Calculus
2.1 Gamma Function
2.2 Riemann-Liouville Fractional Integral
2.3 Fractional Derivatives
2.3.1 Riemann-Liouville fractional derivative
2.3.2 Djrbashian-Caputo fractional derivative
2.3.3 Grünwald-Letnikov fractional derivative
3 Mittag-Leffler and Wright Functions
3.1 Mittag-Leffler Function
3.1.1 Basic analytic properties
3.1.2 Mittag-Leffler function Eα,1(-x)
3.2 Wright Function
3.2.1 Basic analytic properties
3.2.2 Wright function Wρ,µ(-x)
3.3 Numerical Algorithms
3.3.1 Mittag-Leffler function Eα,β(z)
3.3.2 Wright function Wρ,µ(x)
Part II Fractional Ordinary Differential Equations
4 Cauchy Problem for Fractional ODEs
4.1 Gronwall's Inequalities
4.2 ODEs with a Riemann-Liouville Fractional Derivative
4.3 ODEs with a Djrbashian-Caputo Fractional Derivative
5 Boundary Value Problem for Fractional ODEs
5.1 Green's Function
5.1.1 Riemann-Liouville case
5.1.2 Djrbashian-Caputo case
5.2 Variational Formulation
5.2.1 One-sided fractional derivatives
5.2.2 Two-sided mixed fractional derivatives
5.3 Fractional Sturm-Liouville Problem
5.3.1 Riemann-Liouville case
5.3.2 Djrbashian-Caputo case
Part III Time-Fractional Diffusion
6 Subdiffusion: Hilbert Space Theory
6.1 Existence and Uniqueness in an Abstract Hilbert Space
6.2 Linear Problems with Time-Independent Coefficients
6.2.1 Solution representation
6.2.2 Existence, uniqueness and regularity
6.3 Linear Problems with Time-Dependent Coefficients
6.4 Nonlinear Subdiffusion
6.4.1 Lipschitz nonlinearity
6.4.2 Allen-Cahn equation
6.4.3 Compressible Navier-Stokes problem
6.5 Maximum Principles
6.6 Inverse Problems
6.6.1 Backward subdiffusion
6.6.2 Inverse source problems
6.6.3 Determining fractional order
6.6.4 Inverse potential problem
6.7 Numerical Methods
6.7.1 Convolution quadrature
6.7.2 Piecewise polynomial interpolation
7 Subdiffusion: Hölder Space Theory
7.1 Fundamental Solutions
7.1.1 Fundamental solutions
7.1.2 Fractional θ-functions
7.2 Hölder Regularity in One Dimension
7.2.1 Subdiffusion in ℝ
7.2.2 Subdiffusion in ℝ+
7.2.3 Subdiffusion on bounded intervals
7.3 Hölder Regularity in Multi-Dimension
7.3.1 Subdiffusion in ℝd
7.3.2 Subdiffusion in ℝd+
7.3.3 Subdiffusion on bounded domains
A Mathematical Preliminaries
A.1 AC Spaces and Hölder Spaces
A.1.1 AC spaces
A.1.2 Hölder spaces
A.2 Sobolev Spaces
A.2.1 Lebesgue spaces
A.2.2 Sobolev spaces
A.2.3 Fractional Sobolev spaces
A.2.4 Ḣs(Ω) spaces
A.2.5 Bochner spaces
A.3 Integral Transforms
A.3.1 Laplace transform
A.3.2 Fourier transform
A.4 Fixed Point Theorems
References
Index


Applied Mathematical Sciences

Bangti Jin

Fractional Differential Equations An Approach via Fractional Derivatives

Applied Mathematical Sciences Volume 206

Series Editors: Anthony Bloch, Department of Mathematics, University of Michigan, Ann Arbor, MI, USA; C. L. Epstein, Department of Mathematics, University of Pennsylvania, Philadelphia, PA, USA; Alain Goriely, Department of Mathematics, University of Oxford, Oxford, UK; Leslie Greengard, New York University, New York, NY, USA

Advisory Editors: J. Bell, Center for Computational Sciences and Engineering, Lawrence Berkeley National Laboratory, Berkeley, CA, USA; P. Constantin, Department of Mathematics, Princeton University, Princeton, NJ, USA; R. Durrett, Department of Mathematics, Duke University, Durham, NC, USA; R. Kohn, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA; R. Pego, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA, USA; L. Ryzhik, Department of Mathematics, Stanford University, Stanford, CA, USA; A. Singer, Department of Mathematics, Princeton University, Princeton, NJ, USA; A. Stevens, Department of Applied Mathematics, University of Münster, Münster, Germany; S. Wright, Computer Sciences Department, University of Wisconsin, Madison, WI, USA

Founding Editors: F. John, New York University, New York, NY, USA; J. P. LaSalle, Brown University, Providence, RI, USA; L. Sirovich, Brown University, Providence, RI, USA

The mathematization of all sciences, the fading of traditional scientific boundaries, the impact of computer technology, the growing importance of computer modeling and the necessity of scientific planning all create the need both in education and research for books that are introductory to and abreast of these developments. The purpose of this series is to provide such books, suitable for the user of mathematics, the mathematician interested in applications, and the student scientist. In particular, this series will provide an outlet for topics of immediate interest because of the novelty of its treatment of an application or of mathematics being applied or lying close to applications. These books should be accessible to readers versed in mathematics or science and engineering, and will feature a lively tutorial style, focus on topics of current interest, and present a clear exposition of broad appeal. A complement to the Applied Mathematical Sciences series is the Texts in Applied Mathematics series, which publishes textbooks suitable for advanced undergraduate and beginning graduate courses.

More information about this series at http://www.springer.com/series/34

Bangti Jin

Fractional Differential Equations An Approach via Fractional Derivatives


Bangti Jin Department of Computer Science University College London London, UK

ISSN 0066-5452 ISSN 2196-968X (electronic) Applied Mathematical Sciences ISBN 978-3-030-76042-7 ISBN 978-3-030-76043-4 (eBook) https://doi.org/10.1007/978-3-030-76043-4 Mathematics Subject Classification: 26A33, 33E12, 35R11, 34A08, 35R30 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Fractional differential equations (FDEs), i.e., differential equations involving fractional-order derivatives, have received much recent attention in engineering, physics, biology and mathematics, due to their extraordinary modeling capability for describing certain anomalous transport phenomena observed in the real world. However, the relevant mathematical theory of FDEs is still far from complete when compared with the more established integer-order counterparts. The main modeling tools, i.e., fractional integrals and fractional derivatives, have a history nearly as old as calculus itself. In 1695, in a letter to Leibniz, L’Hospital asked about the possible meaning of a half derivative. Leibniz, in a letter dated September 30, 1695—now deemed the birthday of fractional calculus—replied “It will lead to a paradox, from which one day useful consequences will be drawn.” The first definitions of general fractional derivatives, e.g., the Riemann-Liouville fractional integral and derivatives, that supported rigorous analysis were proposed in the nineteenth century. By the middle of the twentieth century, fractional calculus was already an almost fully developed field. The practical applications of these operators, as conjectured by Leibniz, were realized only much later. Recent experimental observations of “fractional” diffusion have led to a flurry of studies in physics and engineering. By now FDEs have been established as the lingua franca in the study of anomalous diffusion processes. This motivates a mathematical study of differential equations containing one or more fractional derivatives. This is the primary purpose of this textbook: we give a self-contained introduction to the basic mathematical theory of FDEs, e.g., well-posedness (existence, uniqueness and regularity of the solutions) and other analytic properties, e.g., comparison principle, maximum principle and analyticity, and contrast them with the integer-order counterparts. The short answer is that there is much in common, but with some significant and important differences and additional technical challenges. One distinct feature is that the solution usually contains weak singularities, even if the problem data are smooth, due to the nonlocality of the operators and the weak singularity of the associated kernels. These new features have raised many outstanding challenges for relevant fields, e.g., mathematical analysis, numerical analysis, inverse problems and optimal control, which have


witnessed exciting developments in recent years. We will also briefly touch upon these topics to give a flavor, illustrating distinct influences of the nonlocality of fractional operators. The term FDE is a big umbrella for any differential equation involving one or more fractional derivatives. This textbook only considers models that involve one fractional derivative in either the spatial or the temporal variable, mostly of the Djrbashian-Caputo type, and only briefly touches upon the Riemann-Liouville type. This class of FDEs has direct physical meaning and allows initial/boundary values as for the classical integer-order differential equations, and thus has been predominant in the mathematical modeling of many practical applications. The book consists of seven chapters and one appendix, which are organized as follows. Chapter 1 briefly describes continuous time random walk, which shows that the probability density function of a certain stochastic process satisfies an FDE. Chapters 2 and 3 describe, respectively, fractional calculus (various fractional integrals and derivatives) and two prominent special functions, i.e., the Mittag-Leffler function and the Wright function. These represent the main mathematical tools for FDEs, and thus Chapters 2 and 3 build the foundation for the study of FDEs in Chapters 4–7. Chapters 4–5 are devoted to initial and boundary value problems for fractional ODEs, respectively. Chapters 6–7 describe the mathematical theory for time-fractional diffusion (subdiffusion) in Hilbert spaces and Hölder spaces, respectively. In the appendix, we recall basic facts on function spaces, integral transforms and fixed point theorems that are used extensively in the book. Note that Chapters 4–7 are organized in a manner such that they only loosely depend on each other, and the reader can directly proceed to the chapter of interest and study each chapter independently, after going through Chapters 2–3 (and the respective parts of the appendix). The selected materials focus mainly on solution theory, especially the issues of existence, uniqueness and regularity, which are also important in other related areas, e.g., numerical analysis. Needless to say, these topics represent only a few small tips of the current exciting developments within the FDE community, and many important topics and results have had to be left out due to page limitations. Also, we do not aim for a comprehensive treatment or for describing the results in the most general form; instead we only state the results that are commonly used, in order to avoid unnecessary technicalities. (Actually each chapter could be expanded into a book itself!) Whenever known, references for the stated results are provided, and pointers to directly relevant further extensions are also given. Nonetheless, the reference list is nowhere close to complete and is clearly biased by the author’s knowledge. There are many excellent books available, especially at the research level, focusing on various aspects of fractional calculus or FDEs, notably [SKM93] on the Riemann-Liouville fractional integral/derivatives, [Pod99, KST06] on fractional derivatives and ODEs, and [Die10] on ODEs with the Djrbashian-Caputo fractional derivative. Our intention is to take a broader scope than the more focused research monographs and to provide a relatively self-contained gentle introduction to the current


mathematical theory, especially its distinct features when compared with the integer-order counterparts. Besides, it contains discussions on numerical algorithms and inverse problems, which have so far been only scarcely discussed in textbooks on FDEs, despite their growing interest in the community. Thus, it may serve as an introductory reading for young researchers entering the field. It can also be used as the textbook for a single-semester course. Many exercises (of different degrees of difficulty) are provided at the end of each chapter, and quite a few of them give further extensions of the results stated in the book. The typical audience would be senior undergraduate and graduate students with a solid background in real analysis and the basic theory of ordinary and partial differential equations, especially for Chapters 6–7. The book project was initiated in 2015 but stopped in Summer 2016. The writing was resumed in late 2019, and the scope and breadth have grown significantly, in order to properly reflect several important developments in the last few years. The majority of the writing was done during the COVID-19 pandemic, and partly during an extended research stay in 2020 at the Department of Mathematics, The Chinese University of Hong Kong, when the book was being finalized. The hospitality is greatly appreciated. The writing has benefited enormously from collaborations with several researchers on the topic of FDEs during the past decade, especially Raytcho Lazarov, Buyang Li, Yikan Liu, Joseph Pasciak, William Rundell, Yubin Yan and Zhi Zhou. Furthermore, several researchers have kindly provided constructive comments on the book at different stages. In particular, Yubin Yan and Lele Yuan kindly proofread several chapters of the book and gave many useful comments, which have helped eliminate many typos. Last but not least, I would like to take this opportunity to thank my family for their constant support during the writing of the book, which has taken much of the time that I should have spent with them.

London, UK
March 2021

Bangti Jin


Acronyms

Γ(z)  Gamma function
Γθ,δ  contour in the complex plane
θα, θ̄α  fractional θ-functions
ACⁿ(D)  space of absolutely continuous functions
B(α, β)  Beta function
ℂ₊  the set {z ∈ ℂ : ℜ(z) > 0}
ℂ₋  the set {z ∈ ℂ : ℜ(z) < 0}
C(α, β)  binomial coefficient
C^{k,γ}(Ω)  Hölder spaces
${}^{C}_{a}D_x^\alpha$  left-sided Djrbashian-Caputo fractional derivative of order α
${}^{C}_{x}D_b^\alpha$  right-sided Djrbashian-Caputo fractional derivative of order α
${}^{C}_{a}\widetilde D_x^\alpha$  regularized left-sided Djrbashian-Caputo fractional derivative of order α
${}^{R}_{a}D_x^\alpha$  left-sided Riemann-Liouville fractional derivative of order α
${}^{R}_{x}D_b^\alpha$  right-sided Riemann-Liouville fractional derivative of order α
${}^{GL}_{a}D_x^\alpha$  left-sided Grünwald-Letnikov fractional derivative
E_{α,β}(z)  two-parameter Mittag-Leffler function
𝔼  expectation
₂F₁(a, b; c; x)  hypergeometric function
F  Fourier transform
Gα, Ḡα  fundamental solutions for subdiffusion
Ḣ^s(Ω)  (fractional order) Sobolev space
H^s(Ω)  Sobolev spaces
${}_{a}I_x^\alpha$  left-sided Riemann-Liouville fractional integral of order α
${}_{x}I_b^\alpha$  right-sided Riemann-Liouville fractional integral of order α
L  Laplace transform
M_μ  Mainardi function
ℕ  the set of natural numbers
ℕ₀  the set of nonnegative integers, i.e., ℕ₀ = ℕ ∪ {0}
ℝ₊  the set (0, ∞)
ℝ₋  the set (−∞, 0)
W_{ρ,μ}(z)  Wright function
W^{k,p}(Ω)  Sobolev spaces
∂_t^α  left-sided Djrbashian-Caputo fractional derivative of order α (in time)
${}^{R}\partial_t^\alpha$  left-sided Riemann-Liouville fractional derivative of order α (in time)
a.e.  almost everywhere
erf  error function
erfc  complementary error function
BDF  backward differentiation formula
BVP  boundary value problem
CQ  convolution quadrature
CTRW  continuous time random walk
FDE  fractional differential equation
ODE  ordinary differential equation
PDE  partial differential equation
PDF  probability density function

Part I

Preliminaries

Chapter 1

Continuous Time Random Walk

The classical diffusion equation
$$\partial_t u(x,t)-\kappa\Delta u(x,t)=0 \qquad (1.1)$$

is widely used to describe transport phenomena observed in nature, where u denotes the concentration of a substance, κ is the diffusion coefficient, and (1.1) describes how the concentration evolves over time. The model (1.1) can be derived from a purely macroscopic argument, based on conservation of mass, i.e., ∂t u + ∇ · J = 0 (J is the flux), and Fick's first law of diffusion, i.e., J = −κ∇u. Alternatively, following the work of Albert Einstein in 1905 [Ein05], one can also derive it from the underlying stochastic process, under a Brownian motion assumption on the particle movement. Then the probability density function (pdf) p(x, t) of the particle satisfies (1.1). In this chapter, with continuous time random walk, we show that the pdf p(x, t) of the particle satisfies fractional differential equations (fdes) of the form
$${}^{C}_{0}D_t^\alpha p(x,t)=\kappa_\alpha\,\partial^2_{xx}p(x,t),$$
$$\partial_t p(x,t)=\frac{\kappa_\mu}{2}\Big((1-\beta)\,{}^{R}_{-\infty}D_x^\mu p(x,t)+(1+\beta)\,{}^{R}_{x}D_\infty^\mu p(x,t)\Big),$$
where 0 < α < 1, 1 < μ < 2, −1 ≤ β ≤ 1, and the notation ${}^{C}_{0}D_t^\alpha$ and ${}^{R}_{-\infty}D_x^\mu$, etc., denotes the Djrbashian-Caputo and Riemann-Liouville fractional derivatives in time and space, respectively; see Definitions 2.2 and 2.3 in Chapter 2 for the precise definitions. In physics, these models are used to describe so-called fractional kinetics [MK00, MK04, ZDK15], and they will represent the main objects of this book.

1.1 Random Walk on a Lattice

Perhaps the simplest stochastic approach to derive equation (1.1) is to consider the random walk framework on a lattice, which is also known as a Brownian walk. At each time step (with a time step size Δt), the walker randomly jumps to one of its

four nearest neighboring sites on a square lattice (with a space step Δx); see Fig. 1.1 for a schematic illustration.

Fig. 1.1 Random walk on a square lattice, starting from the circle.
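The jump rule just described is simple to simulate directly. The following minimal Python sketch (an illustration of my own, not from the book; the number of steps is an arbitrary choice) generates one trajectory of the walk in Fig. 1.1 on a square lattice.

```python
import random

def lattice_walk_2d(n_steps, dx=1):
    """Random walk on a square lattice: at each step the walker moves to one of
    its four nearest neighbouring sites with equal probability 1/4."""
    moves = [(dx, 0), (-dx, 0), (0, dx), (0, -dx)]
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(n_steps):
        sx, sy = random.choice(moves)
        x, y = x + sx, y + sy
        path.append((x, y))
    return path

if __name__ == "__main__":
    path = lattice_walk_2d(1000)
    print("final position after 1000 steps:", path[-1])
```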

In the one-dimensional case, such a process can be modeled by
$$p_j(t+\Delta t)=\tfrac12 p_{j-1}(t)+\tfrac12 p_{j+1}(t),$$
where the index j denotes the position on the lattice (grid), and p_j(t) the probability that the walker is at grid j at time t. It relates the probability of being at position j at time t + Δt to that of the two adjacent sites j ± 1 at time t. The factor ½ expresses the directional isotropy of the jumps: the jumps to the left and right are equally likely. A rearrangement gives
$$\frac{p_j(t+\Delta t)-p_j(t)}{(\Delta x)^2}=\frac12\,\frac{p_{j-1}(t)-2p_j(t)+p_{j+1}(t)}{(\Delta x)^2}.$$
The right-hand side is the central finite difference approximation of the second-order derivative $\partial^2_{xx}p(x,t)$ at grid j. In the limit Δt, Δx → 0⁺, if the pdf p(x, t) is smooth in x and t, discarding the higher-order terms in Δt and Δx in the following Taylor expansions
$$p(x,t+\Delta t)=p(x,t)+\Delta t\,\partial_t p(x,t)+O((\Delta t)^2),$$
$$p(x\pm\Delta x,t)=p(x,t)\pm\Delta x\,\partial_x p(x,t)+\frac{(\Delta x)^2}{2}\partial^2_{xx}p(x,t)+O((\Delta x)^3)$$
leads to
$$\partial_t p(x,t)=\kappa\,\partial^2_{xx}p(x,t). \qquad (1.2)$$
The limit is taken such that the quotient
$$\kappa=\lim_{\Delta x\to 0^+,\,\Delta t\to 0^+}\frac{(\Delta x)^2}{2\Delta t} \qquad (1.3)$$
is a positive constant, and κ is called the diffusion coefficient, which connects the spatial and time scales.
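As a quick numerical sanity check of the scaling (1.3), one can simulate many independent one-dimensional walks with steps ±Δx taken every Δt and compare the empirical variance of the displacement at time T with 2κT, where κ = (Δx)²/(2Δt). A minimal sketch (the parameter values are arbitrary illustrative choices, not from the book):

```python
import random

dt, dx = 1e-2, 1e-1               # time step and space step
kappa = dx**2 / (2 * dt)          # diffusion coefficient from (1.3)
T = 1.0
n_steps, n_walkers = int(T / dt), 5000

final_positions = []
for _ in range(n_walkers):
    x = 0.0
    for _ in range(n_steps):
        x += dx if random.random() < 0.5 else -dx   # jump left/right with prob. 1/2
    final_positions.append(x)

empirical_var = sum(x * x for x in final_positions) / n_walkers
print("empirical variance :", empirical_var)
print("2 * kappa * T      :", 2 * kappa * T)
```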


Equation (1.2) can also be regarded as a consequence of the central limit theorem. Suppose that the jump length Δx has a pdf given by λ(x) so that for a < b, $P(a<\Delta x<b)=\int_a^b\lambda(x)\,\mathrm{d}x$. Then Fourier transform gives
$$\widehat\lambda(\xi)=\int_{-\infty}^{\infty}e^{-\mathrm{i}\xi x}\lambda(x)\,\mathrm{d}x=\int_{-\infty}^{\infty}\big(1-\mathrm{i}\xi x-\tfrac12\xi^2x^2+\ldots\big)\lambda(x)\,\mathrm{d}x=1-\mathrm{i}\xi\mu_1-\tfrac12\xi^2\mu_2+\ldots,$$
where μj is the jth moment $\mu_j=\int_{-\infty}^{\infty}x^j\lambda(x)\,\mathrm{d}x$, provided that these moments do exist, which holds if λ(x) decays sufficiently fast to zero as x → ±∞. Further, assume that the pdf λ(x) is normalized and even, i.e., μ1 = 0, μ2 = 1 and μ3 = 0. Then
$$\widehat\lambda(\xi)=1-\tfrac12\xi^2+O(\xi^4).$$
Denote by Δxi the jump length taken at the ith step. In the random walk model, the steps Δx1, Δx2, . . . are independent. The sum of the independent and identically distributed (i.i.d.) random variables Δxn (according to the pdf with a rescaled λ(x))
$$x_n=\Delta x_1+\Delta x_2+\ldots+\Delta x_n$$
gives the position of the walker after n steps. It is also a random variable. Now we recall a standard result on the sum of two independent random variables; here f ∗ g denotes the convolution of f and g.

Theorem 1.1 If X and Y are independent random variables with a pdf given by f and g, respectively, then the sum Z = X + Y has a pdf f ∗ g.

By Theorem 1.1, the random variable xn has a Fourier transform $\widehat p_n(\xi)=(\widehat\lambda(\xi))^n$, and the normalized sum $n^{-1/2}x_n$ has the Fourier transform
$$\widehat p_n\big(n^{-1/2}\xi\big)=\big(1-\tfrac{1}{2n}\xi^2+O(n^{-2})\big)^n.$$
Taking the limit as n → ∞ gives $\widehat p(\xi)=e^{-\xi^2/2}$, and inverting the Fourier transform gives the standard Gaussian distribution $p(x)=(2\pi)^{-1/2}e^{-x^2/2}$, cf. Example A.4 in the appendix. This is precisely the central limit theorem, asserting that the long-term average behavior of i.i.d. random variables is Gaussian. One requirement for the whole procedure to work is the finiteness of the second moment μ2 of λ(x).

Now we interpret xn as the particle position after n time steps, by correlating the time step size Δt with the variance of Δx according to the ansatz (1.3). This can easily be achieved by rescaling the variance of λ(x) to 2κt. Then by the scaling rule for the Fourier transform, $\widehat p_n(\xi)$ satisfies
$$\widehat p_n\big(n^{-1/2}\xi\big)=\big(1-n^{-1}\kappa t\xi^2+O(n^{-2})\big)^n,$$
and taking the limit as n → ∞ gives $\widehat p(\xi)=e^{-\kappa\xi^2 t}$. By inverse Fourier transform, the pdf of being at a certain position x at time t is governed by the standard diffusion model (1.2), and is given by the Gaussian
$$p(x,t)=(4\pi\kappa t)^{-1/2}e^{-\frac{x^2}{4\kappa t}}.$$

respectively. Now the goal is to determine the probability that the walker lies in a given spatial interval at time t. For given pdfs ψ and λ, the position x of the walker can be regarded as a step function of t. Example 1.1 Suppose that the distribution ψ(t) is exponential with a parameter τ > 0, t i.e., ψ(t) = τ −1 e− τ for t ∈ R+ , and the jump length distribution λ(x) is Gaussian with mean zero and variance σ 2 , i.e., λ(x) = (2πσ 2 )− 2 e waiting time Δtn and jump length Δxn satisfy 1

E[Δtn ] = τ,

E[Δtn2 ] = τ 2,

E[Δxn ] = 0,



x2 2σ 2

for x ∈ R. Then the

E[Δxn2 ] = σ 2,

where E denotes taking expectation with respect to the underlying distribution. The position x(t) of the walker is a step function (in time t). Two sample trajectories of ctrw with τ = 1 and σ = 1 are given in Fig. 1.2.

7

20

20

0

0

x

x

1.2 Continuous Time Random Walk

-20

-20 0

20

40

60

80

100

0

20

40

t

60

80

100

t

Fig. 1.2 Two realizations of 1D ctrw, with exponential waiting time distribution (with τ = 1) and Gaussian jump length distribution (with σ = 1), starting from x0 = 0.

Now we derive the pdf of the total waiting time tn after n steps and the total jump length xn − x0 after n steps. The main tools in the derivation are the Laplace and Fourier transforms in Appendices A.3.1 and A.3.2, respectively. We denote by ψn(t) the pdf of tn = Δt1 + Δt2 + . . . + Δtn, and ψ1 = ψ. By Theorem 1.1, we have
$$\psi_n(t)=(\psi_{n-1}*\psi)(t)=\int_0^t\psi_{n-1}(s)\,\psi(t-s)\,\mathrm{d}s.$$
Then the characteristic function $\widehat\psi_n(z)$ of ψn(t) (i.e., its Laplace transform $\widehat\psi_n(z)=\mathcal{L}[\psi_n](z)=\int_0^\infty e^{-zt}\psi_n(t)\,\mathrm{d}t$), by the convolution rule for the Laplace transform in (A.8), is given by $\widehat\psi_n(z)=(\widehat\psi(z))^n$. Let Ψ(t) denote the survival probability, i.e., the probability of the walker not jumping within a time t (or equivalently, the probability of remaining stationary for at least a duration t). Then
$$\Psi(t)=\int_t^\infty\psi(s)\,\mathrm{d}s=1-\int_0^t\psi(s)\,\mathrm{d}s,\qquad 0<t<\infty.$$
The characteristic function for the survival probability Ψ is
$$\widehat\Psi(z)=z^{-1}-z^{-1}\widehat\psi(z).$$
Then the probability of taking exactly n steps up to time t is given by
$$\chi_n(t)=\int_0^t\psi_n(s)\,\Psi(t-s)\,\mathrm{d}s=(\psi_n*\Psi)(t).$$
The convolution rule for the Laplace transform yields
$$\widehat\chi_n(z)=\widehat\psi_n(z)\,\widehat\Psi(z)=\widehat\psi(z)^n\,z^{-1}\big(1-\widehat\psi(z)\big).$$
Next we derive the pdf of the jump length xn − x0 after n steps. We denote by λn(x) the pdf of the random variable xn − x0 = Δx1 + Δx2 + . . . + Δxn. Appealing to Theorem 1.1 again yields
$$\lambda_n(x)=(\lambda_{n-1}*\lambda)(x)=\int_{-\infty}^{\infty}\lambda_{n-1}(y)\,\lambda(x-y)\,\mathrm{d}y,$$
and by the convolution rule for the Fourier transform, $\widehat\lambda_n(\xi)$ is given by
$$\widehat\lambda_n(\xi)=(\widehat\lambda(\xi))^n,\qquad n\geq 0.$$
We denote by p(x, t) the pdf of the walker at the position x at time t. Since χn(t) is the probability of taking n steps up to time t,
$$p(x,t)=\sum_{n=0}^{\infty}\lambda_n(x)\,\chi_n(t).$$
We denote the Fourier-Laplace transform of p(x, t) by
$$p(\xi,z)=\mathcal{L}\mathcal{F}[p](\xi,z)=\int_0^\infty\!\!\int_{-\infty}^{\infty}e^{-zt}e^{-\mathrm{i}\xi x}p(x,t)\,\mathrm{d}x\,\mathrm{d}t.$$
Then p(ξ, z) is given by
$$p(\xi,z)=\sum_{n=0}^{\infty}\widehat\lambda_n(\xi)\,\widehat\chi_n(z)=\frac{1-\widehat\psi(z)}{z}\sum_{n=0}^{\infty}\big[\widehat\lambda(\xi)\widehat\psi(z)\big]^n. \qquad (1.4)$$
Since λ and ψ are pdfs,
$$\widehat\lambda(0)=\int_{-\infty}^{\infty}\lambda(x)\,\mathrm{d}x=1\qquad\text{and}\qquad\widehat\psi(0)=\int_0^{\infty}\psi(t)\,\mathrm{d}t=1. \qquad (1.5)$$
If ξ ≠ 0 or z > 0, then $|\widehat\lambda(\xi)\widehat\psi(z)|<1$, so the series in (1.4) is absolutely convergent:
$$\sum_{n=0}^{\infty}\big[\widehat\lambda(\xi)\widehat\psi(z)\big]^n=\big(1-\widehat\lambda(\xi)\widehat\psi(z)\big)^{-1},$$
and hence we obtain the following fundamental relation for the pdf p(x, t) in the Laplace-Fourier domain
$$p(\xi,z)=\frac{1-\widehat\psi(z)}{z}\,\frac{1}{1-\widehat\lambda(\xi)\widehat\psi(z)}. \qquad (1.6)$$

Example 1.2 For the waiting time distribution $\psi(t)=\tau^{-1}e^{-t/\tau}$ and the jump length distribution $\lambda(x)=(2\pi\sigma^2)^{-1/2}e^{-x^2/(2\sigma^2)}$ in Example 1.1, we have $\widehat\psi(z)=(1+\tau z)^{-1}$ and $\widehat\lambda(\xi)=e^{-\sigma^2\xi^2/2}$. Thus, the Fourier-Laplace transform p(ξ, z) of the pdf p(x, t) of the walker at position x (relative to x0) at time t is given by
$$p(\xi,z)=\tau\big(1+\tau z-e^{-\sigma^2\xi^2/2}\big)^{-1}.$$
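The closed-form expression of Example 1.2 lends itself to a Monte Carlo check: for a piecewise-constant ctrw path X(t), the Fourier-Laplace transform equals the expectation of a finite sum over the jump times, $\mathbb{E}\big[\sum_n e^{-\mathrm{i}\xi x_n}(e^{-zt_n}-e^{-zt_{n+1}})/z\big]$. The following sketch (my own illustration with arbitrary values of ξ and z, not part of the book) compares the estimate with the formula above.

```python
import cmath
import math
import random

def mc_fourier_laplace(xi, z, tau=1.0, sigma=1.0, n_paths=20000, t_max=60.0):
    """Monte Carlo estimate of the Fourier-Laplace transform of the ctrw pdf."""
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        t, x, acc = 0.0, 0.0, 0.0 + 0.0j
        while t < t_max:
            t_next = t + random.expovariate(1.0 / tau)
            # contribution of the constant piece X(s) = x for s in [t, t_next)
            acc += cmath.exp(-1j * xi * x) * (math.exp(-z * t) - math.exp(-z * min(t_next, t_max))) / z
            x += random.gauss(0.0, sigma)
            t = t_next
        total += acc
    return total / n_paths

xi, z, tau, sigma = 0.7, 0.5, 1.0, 1.0
exact = tau / (1.0 + tau * z - math.exp(-sigma**2 * xi**2 / 2.0))
print("Monte Carlo estimate:", mc_fourier_laplace(xi, z, tau, sigma))
print("closed-form formula :", exact)
```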


Different types of ctrw processes can be classified according to the characteristic waiting time and the jump length variance ς², defined by
$$\mathbb{E}[\Delta t_n]=\int_0^\infty t\psi(t)\,\mathrm{d}t \qquad\text{and}\qquad \varsigma^2:=\mathbb{E}[(\Delta x_n)^2]=\int_{-\infty}^{\infty}x^2\lambda(x)\,\mathrm{d}x,$$
being finite or diverging. Below we shall discuss the following three scenarios: (i) finite characteristic waiting time and finite ς²; (ii) diverging characteristic waiting time and finite ς²; and (iii) diverging ς² and finite characteristic waiting time.

(i) Finite characteristic waiting time and jump length variance. When both the characteristic waiting time and the jump length variance ς are finite, ctrw does not lead to anything new. Specifically, assume that the pdfs ψ(t) and λ(x) are normalized:
$$\int_0^\infty t\psi(t)\,\mathrm{d}t=1,\qquad \int_{-\infty}^{\infty}x\lambda(x)\,\mathrm{d}x=0,\qquad \int_{-\infty}^{\infty}x^2\lambda(x)\,\mathrm{d}x=1. \qquad (1.7)$$
These conditions are satisfied after suitable rescaling if the waiting time pdf ψ(t) has a finite mean, and the jump length pdf λ(x) has a finite variance. Recall the relation (1.5): $\widehat\psi(0)=1=\widehat\lambda(0)$. Since
$$\frac{\mathrm{d}^k\widehat\psi}{\mathrm{d}z^k}=\int_0^\infty e^{-zt}(-t)^k\psi(t)\,\mathrm{d}t,$$
using the normalization condition (1.7), we have
$$\widehat\psi'(0)=-\int_0^\infty t\psi(t)\,\mathrm{d}t=-1.$$
Similarly, we deduce
$$\widehat\lambda'(0)=-\mathrm{i}\int_{-\infty}^{\infty}x\lambda(x)\,\mathrm{d}x=0,\qquad \widehat\lambda''(0)=-\int_{-\infty}^{\infty}x^2\lambda(x)\,\mathrm{d}x=-1.$$
Next, for τ > 0 and σ > 0, let the random variables Δtn and Δxn follow the rescaled pdfs
$$\psi_\tau(t)=\tau^{-1}\psi(\tau^{-1}t)\qquad\text{and}\qquad \lambda_\sigma(x)=\sigma^{-1}\lambda(\sigma^{-1}x). \qquad (1.8)$$
Now consider the limit of p(ξ, z; σ, τ) as τ, σ → 0⁺. Simple computation shows that the characteristic waiting time is 𝔼[Δtn] = τ, 𝔼[Δxn] = 0, and ς² := 𝔼[Δxn²] = σ². By the scaling rules for the Fourier and Laplace transforms,
$$\widehat\psi_\tau(z)=\widehat\psi(\tau z)\qquad\text{and}\qquad \widehat\lambda_\sigma(\xi)=\widehat\lambda(\sigma\xi).$$
Consequently,
$$p(\xi,z;\sigma,\tau)=\frac{1-\widehat\psi(\tau z)}{z}\,\frac{1}{1-\widehat\psi(\tau z)\widehat\lambda(\sigma\xi)}.$$
The Taylor expansion of $\widehat\psi(z)$ around z = 0 yields
$$\widehat\psi(z)=\widehat\psi(0)+\widehat\psi'(0)z+\ldots=1-z+O(z^2)\qquad\text{as } z\to 0.$$
Further we assume that the pdf λ(x) is even, i.e., λ(−x) = λ(x). Then $\widehat\lambda'(0)=0$ and the Taylor expansion of $\widehat\lambda(\xi)$ around ξ = 0 is given by
$$\widehat\lambda(\xi)=\widehat\lambda(0)+\widehat\lambda'(0)\xi+\tfrac12\widehat\lambda''(0)\xi^2+\ldots=1-\tfrac12\xi^2+O(\xi^4).$$
Using the last two relations, simple algebraic manipulation gives
$$p(\xi,z;\sigma,\tau)=\frac{\tau}{\tau z+\tfrac12\sigma^2\xi^2}\,\frac{1+O(\tau z)}{1+O(\tau z+\sigma^2\xi^2)}. \qquad (1.9)$$
Now in the formula (1.9), we let σ, τ → 0⁺ while keeping the relation $\lim_{\sigma\to0^+,\tau\to0^+}(2\tau)^{-1}\sigma^2=\kappa$, for some κ > 0, and obtain
$$p(\xi,z)=\lim\frac{\tau}{\tau z+\tfrac12\sigma^2\xi^2}=\frac{1}{z+\kappa\xi^2}. \qquad (1.10)$$
The scaling relation here for κ is identical to that in the random walk framework, cf. (1.3). Upon inverting the Laplace transform, we find
$$p(\xi,t)=\frac{1}{2\pi\mathrm{i}}\int_{C}\frac{e^{zt}}{z+\kappa\xi^2}\,\mathrm{d}z=e^{-\kappa\xi^2 t},$$
and then inverting the Fourier transform gives the familiar Gaussian pdf $p(x,t)=(4\pi\kappa t)^{-1/2}e^{-\frac{x^2}{4\kappa t}}$. Last we verify that the pdf p(x, t) satisfies (1.2):
$$\mathcal{L}\mathcal{F}[\partial_t p-\kappa\partial^2_{xx}p](\xi,z)=z\,p(\xi,z)-p(\xi,0)+\kappa\xi^2 p(\xi,z)=(z+\kappa\xi^2)\,p(\xi,z)-p(\xi,0)=1-p(\xi,0)=0,$$
where the last step follows from $p(\xi,0)=\widehat\delta(\xi)=1$. Hence the pdf p(x, t) of x(t) − x0, the position of the walker at time t relative to x0, indeed satisfies (1.2). The ctrw framework recovers the classical diffusion model, as long as the waiting time pdf ψ(t) has a finite mean and the jump length pdf λ(x) has finite first and second moments. Further, the displacement variance is given by
$$\mu_2(t)=\mathbb{E}_{p(t)}\big[(x(t)-x_0)^2\big]=\int_{-\infty}^{\infty}x^2 p(x,t)\,\mathrm{d}x=2\kappa t,$$

10 5 0 -5 0

1000

2000

3000

t

(a)

(b)

Fig. 1.3 ctrw with a power law waiting time pdf. Panel (a) gives one sample trajectory with waiting time pdf ψ(t) = α(1 + t)−1−α , α = 34 , and standard Gaussian jump length pdf; Panel (b) is a zoom-in of the first cluster of Panel (a).

In Fig. 1.3, we show one sample trajectory of ctrw with a power law waiting time pdf ψ(t) = α(1 + t)−1−α , α = 34 , and the standard Gaussian jump length pdf, and an enlarged version of the initial time interval. One observes clearly the occasional but large waiting time appearing at different time scales, cf. Fig. 1.3(b). This behavior is dramatically different from the case of finite mean waiting time in Fig. 1.2. (z) for z → 0. The notation The following result bounds the Laplace transform ψ Γ(z) denotes the Gamma function; see (2.1) in Section 2.1 for its definition. Theorem 1.2 Let α ∈ (0, 1), and a > 0. If ψ(t) = at −1−α + O(t −2−α ) as t → ∞, then (z) = 1 − Bα z α + O(z) ψ with Bα = aα−1 Γ(1 − α).

as z → 0

12

1 Continuous Time Random Walk

Proof By the assumption on ψ, there exists t0 > 0 such that |t 1+α ψ(t) − a| ≤ ct −1,

∀t0 ≤ t < ∞

(1.12)

(0) = 1, we have and we consider 0 < z < t0−1 so that zt0 < 1. Since ψ ∫ ∞ ∫ t0 (z) = 1−ψ (1 − e−zt )ψ(t)dt = (1 − e−zt )(ψ(t) − at −1−α )dt 0



+

0 ∞

(1 − e

−zt

)(ψ(t) − at

−1−α

∫ )dt +

t0



(1 − e−zt )at −1−α dt =:

0

3

Ii .

i=1

Since 0 ≤ 1 − e−s ≤ s for all s ∈ [0, 1], the term I1 can be bounded by ∫ t0 ∫ t0 ztψ(t)dt + ztat −1−α dt |I1 | ≤ 0 0 ∫ t0 ∫ ∞ ψ(t)dt + az t −α dt ≤ cα,t0 z. ≤ zt0 0

0

For the term I2 , using (1.12) and changing variables s = zt, we deduce ∫ ∞ |I2 | ≤ (1 − e−zt )ct −2−α dt t0

≤ cz 1+α ≤ cz

1+α

∫

1

t0 z −1

s−1−α ds +

(α ((t0 z)





s−2−α ds



1 −α

− 1) + (1 + α)−1 ) ≤ cα,t0 z.

Last, for the term I3 , by changing variables s = zt, we have ∫ ∞ ∫ ∞ 1 − e−s −zt −1−α α I3 = (1 − e )at dt = az ds. s1+α 0 0 Integration by parts and the identity (2.2) below give ∫ ∞ ∫ ∞ 1 − e−s −1 ds = α e−s s−α ds = α−1 Γ(1 − α). s1+α 0 0 Combining the preceding estimates yields the desired inequality.



Now we introduce the following rescaled pdfs for the incremental waiting time Δtn and jump length Δxn : ψτ (t) = τ −1 ψ(τ −1 t) and λσ (x) = σ −1 λ(σ −1 x). By (1.6), p(ξ, z; σ, τ) is given by (τz) 1−ψ 1 p(ξ, z; σ, τ) = .  z (τz)λ(σξ) 1−ψ

1.2 Continuous Time Random Walk

13

(z) = 1 − Under the power law waiting time pdf, from Theorem 1.2 we deduce ψ  = 1 − 1 ξ 2 + O(ξ 4 ) as ξ → 0. Hence, Bα z α + O(z) as z → 0 and λ(ξ) 2 p(ξ, z; σ, τ) =

Bα τ α z α−1

1 + O(τ 1−α z 1−α ) . Bα τ α z α + 12 σ 2 ξ 2 1 + O(τ 1−α z 1−α + τ α z α + σξ)

(1.13)

Once again, in (1.13), we find the Fourier-Laplace transform p(ξ, z) by letting σ, τ → 0+ , while keeping limσ→0+,τ→0+ (2Bα τ α )−1 σ 2 = κα for some κα > 0. Thus we obtain Bα τ α z α−1

p(ξ, z) = lim p(ξ, z; σ, τ) = lim

Bα τ α z α + 12 σ 2 ξ 2

=

z α−1 . z α + κα ξ 2

(1.14)

Formally, we recover the formula (1.10) by setting α = 1. Last, we invert the Fourier-Laplace transform p(ξ, z) using the Mittag-Leffler function Eα,1 (z) (cf. (3.1) in Chapter 3) and Wright function Wρ,μ (z) (cf. (3.24) in Chapter 3). Using the Laplace transform formula of the Mittag-Leffler function Eα,1 (z) in Lemma 3.2 below, we deduce p(ξ, t) = Eα,1 (−κα t α ξ 2 ), and then by the Fourier transform of the Wright function Wρ,μ (z) in Proposition 3.4, we obtain an explicit expression of the pdf p(x, t) p(x, t) = (4κα t α )− 2 W− α2 ,1− α2 (−(κα t α )− 2 |x|). 1

1

With α = 1, this formula recovers the familiar Gaussian density. Then Lemma 2.9 implies that the pdf p(x, t) satisfies α 2 α (ξ, z) − z α−1 p(ξ, 0) + κα ξ 2 p(ξ, z) LF C 0 Dt p(x, t) − κα ∂xx p(x, t) = z p = (z α + κα ξ 2 ) p(ξ, z) − z α−1 p(ξ, 0) ≡ 0, since p(ξ, 0) =  δ(ξ) = 1. Thus the pdf p(x, t) satisfies the following time-fractional α diffusion equation with a Djrbashian-Caputo fractional derivative C 0 Dt p in time t (see Definition 2.3 in Chapter 2 for the precise definition) C α 0 Dt p(x, t)

2 − κα ∂xx p(x, t) = 0,

0 < t < ∞, −∞ < x < ∞.

It can be rewritten using a Riemann-Liouville fractional derivative 0RDt1−α p as 2 p(x, t). ∂t p(x, t) = κα 0RDt1−α ∂xx

In this book, we focus on the approach based on the Djrbashian-Caputo one. ∫∞ Now we compute the mean square displacement μ2 (t) = −∞ x 2 p(x, t) dx for t > 0. To derive an explicit formula, we resort to the Laplace transform

14

1 Continuous Time Random Walk

∫ μ2 (z) =



−∞ d2

=−

x 2 p (x, z)dx = −

dξ 2

d2 p(ξ, z)|ξ=0 dξ 2

(z + κα z 1−α ξ 2 )−1 |ξ=0 = 2κα z −1−α,

which upon inverse Laplace transform yields μ2 (t) = 2Γ(1 + α)−1 κα t α . Thus the mean square displacement grows only sublinearly with the time t, which, at large time t, is slower than that in normal diffusion, and as α → 1− , it recovers the formula for normal diffusion. Such a diffusion process is called subdiffusion or anomalously slow diffusion in the literature. Subdiffusion has been observed in a large number of physical applications, e.g., column experiments [HH98], thermal diffusion in fractal domains [Nig86], non-Fickian transport in geological formations [BCDS06], and protein transport within membranes [Kou08]. The mathematical theory for the subdiffusion model will be discussed in detail in Chapters 6 and 7. (iii) Diverging jump length variance. Last we turn to the case of diverging jump length variance ς 2 , but finite characteristic waiting time . To be specific, we assume an exponential waiting time ψ(t), and that the jump length follows a (possibly asymmetric) Lévy distribution. See the monograph [Nol20] for an extensive treatment of univariate stable distributions and [GM98] for the use of stable distributions within fractional calculus. The most convenient way to define Lévy random variables is via their characteristic function (Fourier transform)

−|ξ | μ (1 − iβ sign(ξ) tan μπ 2 ), μ  1,  μ, β) = ln λ(ξ; −|ξ |(1 + iβ sign(ξ) π2 ln |ξ |), μ = 1, where μ ∈ (0, 2] and β ∈ [−1, 1]. The parameter μ determines the rate at which the tails of the pdf taper off. When μ = 2, it recovers a Gaussian distribution, irrespective of the value of β, whereas with μ = 1 and β = 0, the stable density is identical to the Cauchy distribution, i.e., λ(x) = (π(1 + x 2 ))−1 . The parameter β determines the degree of asymmetry of the distribution. When β is negative (respectively positive), the distribution is skewed to the left (respectively right). When β = 0, the expression  μ) = e− |ξ | μ . simplifies to λ(ξ; Generally, the pdf of a Lévy stable distribution is not available in closed form. However, it is possible to compute the density numerically [Nol97] or to simulate random variables from such stable distributions, cf. Section 1.3. Nonetheless, following the proof of Theorem 1.2, when 1 < μ < 2, one can obtain an inverse power law asymptotic [Nol20, Theorem 1.2, p. 13] λ(x) ∼ aμ,β |x| −1−μ

as |x| → ∞

(1.15)

for some aμ,β > 0, from which it follows directly that the jump length variance diverges, i.e., ∫ ∞ x 2 λ(x) dx = ∞. (1.16) −∞

1.2 Continuous Time Random Walk

15

In general, the pth moment of a stable random variable is finite if and only if p < μ.

50 40

x

30 20 10 0 0

50

100

150

200

250

t

(a)

(b)

Fig. 1.4 ctrw with a Lévy jump length pdf. Panel (a) is one sample trajectory with an exponential waiting time pdf ψ(t) = e−t , and a Lévy jump length pdf with μ = 32 and β = 0. Panel (b) is a zoom-in of Panel (a).

For ctrw, one especially relevant case is λ(x; μ, β) with μ ∈ (1, 2] for the jump length distribution. In Fig. 1.4, we show one sample trajectory of ctrw with a Lévy jump length distribution. Due to the asymptotic property (1.15) of the jump length pdf, very long jumps may occur with a much higher probability than for an exponentially decaying pdf like a Gaussian. The scaling nature of the Lévy jump length pdf leads to clusters along the trajectory, i.e., local motion is occasionally interrupted by long sojourns, at all length scales. Below we derive the diffusion limit using the scaling technique, by appealing to the scaled pdfs ψ∫τ (t) and λσ (x), for some τ, σ > 0. The pdf ψ is assumed to ∞ be normalized, i.e., 0 tψ(t)dt = 1. Using Taylor expansion, for small ξ, we have (μ  1)  μπ   + O(|ξ | 2μ ). e−λ(ξ;μ,β) = 1 − |ξ | μ 1 − iβ sign(ξ) tan 2 Hence, for μ  1, we have τ (z) = 1 − τz + O(τ 2 z 2 ) as τ → 0, ψ   σ (ξ) = 1 − σ μ |ξ | μ 1 − iβ sign(ξ) tan μπ + O(σ 2μ |ξ | 2μ ) as σ → 0. λ 2 These two identities and Fourier-Laplace convolution formulas yield p(ξ, z) = lim p(ξ, z; σ, τ) =

z + κμ

|ξ | μ



1 , 1 − iβ sign(ξ) tan μπ 2

(1.17)

where the limit is taken under the condition κμ = limσ→0+,τ→0+ τ −1 σ μ for some κμ > 0, which represents the diffusion coefficient. Next we invert the Fourier transform of p(ξ, z) and obtain

16

1 Continuous Time Random Walk

p(ξ, t) = e−κμ |ξ |

μ (1−iβ sign(ξ) tan μ π 2

)t

,

which is the characteristic function of a Lévy stable distribution, for which an explicit expression is generally unavailable. This can be regarded as a generalized central limit theorem for stable distributions. One can verify by Fourier and Laplace inversion that the pdf p(x, t) satisfies  κ  μ μ ∂t p(x, t) = 2μ (1 − β)−∞RDx p(x, t) + (1 + β)Rx D∞ p(x, t) , in view of Theorem 2.8 below. In the symmetric case β = 0, it simplifies to μ

∂t p(x, t) = −κμ (−Δ)R2 p(x, t), μ

where (−Δ)R2 denotes the Riesz fractional operator μ

μ

μ

−(−Δ)R2 f = 12 ( −∞RDx f + R x D∞ f ), μ

μ

where the notation −∞RDx and R x D∞ denote the left-sided and right-sided RiemannLiouville fractional derivatives, cf. Definition 2.2 in Chapter 2 for the precise definitions. This operator can be viewed as the one-dimensional version of fractional Laplacian (see the survey [Gar19] for a nice introduction to fractional Laplacian). It recovers the Gaussian density as μ → 2. Using (1.15), we deduce the following power law asymptotic as |x| → ∞ p(x, t) ∼ κμ t|x| −1−μ,

μ < 2.

∫∞ Due to this property, the mean squared displacement diverges −∞ |x| 2 p(x, t) dx = ∞. That is, the particle following such a random walk model spreads faster than normal diffusion, and it is commonly known as superdiffusion in the literature. It has been proposed for several applications, including solute transport [BWM00] and searching patterns of animals (e.g., albatross, bacteria, plankton, jackals, spider monkey) [RFMM+04]. The divergent mean squares displacement has caused some controversy in practice, and various modifications have been proposed. Boundary value problems (bvps) for superdiffusion are involved, since the long jumps make the very definition of a proper boundary condition actually quite intricate. In Chapter 5, we present mathematical theory for 1D stationary superdiffusion models.

1.3 Simulating Continuous Time Random Walk To simulate ctrw, one needs to generate random numbers for a given pdf, e.g., power laws and Lévy stable distributions. The task of generating random numbers for an arbitrary pdf is often highly nontrivial, especially in high-dimensional spaces. There are a number of possible methods, and we describe only the transformation

1.3 Simulating Continuous Time Random Walk

17

method, which is simple and easy to implement. For more advanced methods, we refer to the monograph [Liu08]. The transformation method requires that we have access to random numbers uniformly distributed in the unit interval 0 ≤ r ≤ 1, which can be generated by any standard pseudo-random number generators. Now suppose that p(x) is a continuous pdf from which we wish to draw samples. The pdfs p(x) and p(r) (the uniform distribution on [0, 1]) are related by p(x) = p(r)

dr dr = , dx dx

where the second equality follows because p(r) = 1 over the interval [0, 1]. Integrating both sides with respect to x, we obtain the complementary cumulative distribution function F(x) in terms of r: ∫ 1 ∫ ∞



p(x ) dx = dr = 1 − r, F(x) = x

F −1 (1 − r), where

r

F −1

denotes the inverse function of the function or equivalent x = F(x). For many pdfs, this function can be evaluated in closed form. Example 1.3 In this example we illustrate the method on the power law density ψ(x) = α(1 + x)−1−α . One can compute directly ∫ ∞ ∫ ∞ α 1 F(x) = ψ(x ) dx = dx = ,

)1+α (1 + x)α (1 + x x x and the inverse function F −1 (x) is given by F −1 (x) = x − α − 1. This last formula provides an easy way to generate random variables from the power law density ψ(t), needed for simulating the subdiffusion model. 1

Generating samples from the Lévy stable distribution λ(x; μ, β) is more delicate. The main challenge lies in the fact that there are no analytic expressions for the inverse F −1 , and thus the transformation method is not easy to apply. Two wellknown exceptions are the Gaussian (μ = 2) and Cauchy (μ = 1, β = 0). One popular algorithm is from [CMS76, WW95]. For μ ∈ (0, 2] and β ∈ [−1, 1], it generates a random variable X ∼ λ(x; μ, β) as in Algorithm 1.

Exercises Exercise 1.1 Show formula (1.13). Exercise 1.2 Prove the asymptotic (1.15). (Hint: [Nol20, Section 3.6]) Exercise 1.3 Show the jump length variance formula (1.16).

18

1 Continuous Time Random Walk

Algorithm 1 Generating stable random variables.

1: Generate a random variable V uniformly distributed on (− π2 , tial random variable W with mean 1. 2: For μ  1, compute X = Cμ, β

and an independent exponen-

  1−μ sin(μ(V + Bμ, β )) cos(V − μ(V + Bμ, β )) μ , 1 W (cos V ) μ

where Bμ, β =

π 2)

arctan(β tan μ

μπ 2 )

 μπ  2μ1 and Cμ, β = 1 + β 2 tan2 . 2

3: For μ = 1, compute X=

   2 π W cos V + βV tan V − β ln π . π 2 2 + βV

Exercise 1.4 Prove that the pth moment of a stable random variable is finite if and only if p < μ. Exercise 1.5 Derive the representation (1.17). Exercise 1.6 Develop an algorithm for generating random variables from the Cauchy distribution λ(x) = (π(1 + x 2 ))−1 and implement it. Exercise 1.7 Implement the algorithm for generating random variables from Lévy stable distribution, with μ = 1.5, and β = 0. Exercise 1.8 Derive the diffusion limit for the ctrw with a power law waiting time and Lévy jump length distribution, and the corresponding governing equation.

Chapter 2

Fractional Calculus

In this chapter, we describe basic properties of fractional-order integrals and derivatives, including Riemann-Liouville fractional integral and derivative, DjrbashianCaputo and Grünwald-Letnikov fractional derivatives. They will serve as the main modeling tools in fdes.

2.1 Gamma Function This function will appear in nearly every formula in this book! The usual way to define the Gamma function Γ(z) is ∫ ∞ t z−1 e−t dt, (z) > 0. (2.1) Γ(z) = 0

It is analytic in the right half complex plane C+ := {z ∈ C : (z) > 0} and integration by parts shows the following recursion formula Γ(z + 1) = zΓ(z).

(2.2)

Since Γ(1) = 1, (2.2) implies that for any positive integer n ∈ N, Γ(n + 1) = n!. The following reflection formula [AS65, 6.1.17, p. 256] Γ(z)Γ(1 − z) =

π , sin(πz)

0 < (z) < 1,

and Legendre’s duplication formula [AS65, 6.1.18, p. 256] √ Γ(z)Γ(z + 12 ) = π21−2z Γ(2z)

(2.3)

(2.4)

are very useful in practice. For example, one only needs to approximate Γ(z) for √ (z) ≥ 12 and uses (2.3) for (z) < 12 . It also gives directly Γ( 12 ) = π. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 B. Jin, Fractional Differential Equations, Applied Mathematical Sciences 206, https://doi.org/10.1007/978-3-030-76043-4_2


It is possible to continue Γ(z) analytically into the left half complex plane C₋ := {z ∈ C : ℜ(z) < 0}. This is often done by representing the reciprocal Gamma function 1/Γ(z) as an infinite product (cf. Exercise 2.2)
$$\frac{1}{\Gamma(z)} = \lim_{n\to\infty}\frac{n^{-z}}{n!}\,z(z+1)\cdots(z+n), \quad \forall z \in \mathbb{C}. \tag{2.5}$$
Hence, 1/Γ(z) is an entire function of z with zeros at z = 0, −1, −2, . . ., and accordingly, Γ(z) has poles at z = 0, −1, . . .. The integral representation for 1/Γ(z) due to Hermann Hankel [Han64] is very useful. It is often derived using Laplace transform (cf. Appendix A.3). Substituting t = su in (2.1) yields
$$\frac{\Gamma(z)}{s^z} = \int_0^\infty u^{z-1}e^{-su}\,du,$$
which can be regarded as the Laplace transform of u^{z−1} for a fixed z ∈ C. Hence u^{z−1} can be interpreted as an inverse Laplace transform
$$u^{z-1} = L^{-1}\Big[\frac{\Gamma(z)}{s^z}\Big] = \frac{\Gamma(z)}{2\pi i}\int_C e^{su}s^{-z}\,ds,$$
where C is any deformed Bromwich contour that winds around the negative real axis R₋ = (−∞, 0) in the anticlockwise sense. Now substituting ζ = su yields
$$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_C \zeta^{-z}e^{\zeta}\,d\zeta. \tag{2.6}$$
The integrand ζ^{−z} has a branch cut on R₋ but is analytic elsewhere, so that (2.6) is independent of the contour C. The formula (2.6) is very useful for deriving integral representations of various special functions, e.g., Mittag-Leffler and Wright functions (cf. (3.1) and (3.24) in Chapter 3). In practice, C can be deformed to facilitate the numerical evaluation of the integral, which makes 1/Γ(z) actually easier to compute than Γ(z) itself [ST07]. For large argument, the following Stirling's formula holds (for any ε > 0)
$$\Gamma(z) = (2\pi)^{\frac12}e^{-z}z^{z-\frac12}(1 + O(z^{-1})) \quad\text{as } |z|\to\infty,\ |\arg(z)| < \pi - \epsilon. \tag{2.7}$$
Further, for any two real numbers x and s, the following identity holds
$$\lim_{x\to\infty}(x^s\Gamma(x))^{-1}\Gamma(x+s) = 1, \tag{2.8}$$
often known as Wendel's limit [Wen48], due to his elegant proof using only Hölder's inequality and the recursion identity (2.2); see Exercise 2.4 for the detail. A closely related function, the Beta function B(α, β) for α, β > 0, is defined by
$$B(\alpha,\beta) = \int_0^1 (1-s)^{\alpha-1}s^{\beta-1}\,ds. \tag{2.9}$$
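Stirling's formula (2.7) and Wendel's limit (2.8) are easy to check numerically; the following sketch (standard library only) compares both sides at a moderately large argument.

```python
import math

x, s = 50.0, 0.3
# Stirling's formula (2.7), leading term only
print(math.gamma(x), math.sqrt(2 * math.pi) * math.exp(-x) * x ** (x - 0.5))
# Wendel's limit (2.8): Gamma(x+s) / (x^s * Gamma(x)) -> 1 as x -> infinity
print(math.gamma(x + s) / (x ** s * math.gamma(x)))
```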


It can be expressed by the Gamma function Γ(z) as
$$B(\alpha,\beta) = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}. \tag{2.10}$$
The (generalized) binomial coefficient C(α, β) is defined by
$$C(\alpha,\beta) = \frac{\Gamma(\alpha+1)}{\Gamma(\beta+1)\Gamma(\alpha-\beta+1)},$$
provided β ≠ −1, −2, . . . and α − β ≠ −1, −2, . . .. For α, β ∈ N₀, it recovers the usual binomial coefficient. The following identities hold for the binomial coefficients.

Lemma 2.1 Let j, k ∈ N₀, and α, β ≠ 0, −1, . . .. Then the following identities hold:
$$(-1)^j C(\alpha,j) = C(j-\alpha-1,j), \tag{2.11}$$
$$C(\alpha,k+j)C(k+j,k) = C(\alpha,k)C(\alpha-k,j), \tag{2.12}$$
$$\sum_{j=0}^k C(\alpha,j)C(\beta,k-j) = C(\alpha+\beta,k). \tag{2.13}$$

Proof The identities (2.11) and (2.12) are direct from the definition. Indeed,
$$C(\alpha,k+j)C(k+j,k) = \frac{\Gamma(\alpha+1)}{\Gamma(k+j+1)\Gamma(\alpha-k-j+1)}\,\frac{\Gamma(k+j+1)}{\Gamma(k+1)\Gamma(j+1)} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\Gamma(\alpha-k+1)}\,\frac{\Gamma(\alpha-k+1)}{\Gamma(j+1)\Gamma(\alpha-k-j+1)} = C(\alpha,k)C(\alpha-k,j),$$
showing (2.12). The formula (2.13) can be found at [PBM86, p. 616, 4.2.15.13]. □

2.2 Riemann-Liouville Fractional Integral

The Riemann-Liouville fractional integral generalizes the well-known Cauchy's iterated integral formula. Let finite a, b ∈ R, with a < b, and we denote by D = (a, b). Then for any integer k ∈ N, the k-fold integral aI_x^k, based at the left end point x = a, is defined recursively by aI_x^0 f(x) = f(x) and
$${}_aI_x^k f(x) = \int_a^x {}_aI_s^{k-1} f(s)\,ds, \quad k = 1, 2, \ldots.$$
We claim the following Cauchy's iterated integral formula for k ≥ 1
$${}_aI_x^k f(x) = \frac{1}{(k-1)!}\int_a^x (x-s)^{k-1} f(s)\,ds \tag{2.14}$$


and verify it by mathematical induction. Clearly, it holds for k = 1. Now assume that the formula (2.14) holds for some k ≥ 1, i.e.,
$${}_aI_x^k f(x) = \frac{1}{(k-1)!}\int_a^x (x-s)^{k-1} f(s)\,ds.$$
Then by definition and changing integration order, we deduce
$${}_aI_x^{k+1} f(x) = \int_a^x {}_aI_s^k f(s)\,ds = \frac{1}{(k-1)!}\int_a^x\!\int_a^s (s-t)^{k-1} f(t)\,dt\,ds = \frac{1}{(k-1)!}\int_a^x\!\int_t^x (s-t)^{k-1}\,ds\, f(t)\,dt = \frac{1}{k!}\int_a^x (x-s)^k f(s)\,ds.$$
This shows the formula (2.14). Likewise, there holds
$${}_xI_b^k f(x) = \frac{1}{(k-1)!}\int_x^b (s-x)^{k-1} f(s)\,ds. \tag{2.15}$$

One way to generalize these formulas to any α > 0 is to use the Gamma function Γ(z): the factorial (k − 1)! in the formula can be replaced by Γ(k). Then for any α > 0, by replacing k with α, and (k − 1)! with Γ(α), we arrive at the following definition of Riemann-Liouville fractional integrals. It is named after Bernhard Riemann's work in 1847 [Rie76] and Joseph Liouville's work in 1832 [Lio32], the latter of whom was the first to consider the possibility of fractional calculus [Lio32].

Definition 2.1 For any f ∈ L¹(D), the left-sided Riemann-Liouville fractional integral of order α > 0 based at x = a, denoted by aI_x^α f, is defined by
$$({}_aI_x^\alpha f)(x) = \frac{1}{\Gamma(\alpha)}\int_a^x (x-s)^{\alpha-1} f(s)\,ds, \tag{2.16}$$
and the right-sided Riemann-Liouville fractional integral of order α > 0 based at x = b, denoted by xI_b^α f, is defined by
$$({}_xI_b^\alpha f)(x) = \frac{1}{\Gamma(\alpha)}\int_x^b (s-x)^{\alpha-1} f(s)\,ds. \tag{2.17}$$

When α = k ∈ N, they recover the k-fold integrals aI_x^k f and xI_b^k f. Further, we define the value for x → a⁺ by
$$({}_aI_x^\alpha f)(a^+) := \lim_{x\to a^+}({}_aI_x^\alpha f)(x),$$
if the right-hand side exists, and likewise (xI_b^α f)(b⁻). In case α = 0, we adopt the convention aI_x^0 f(x) = f(x), in view of the following result [DN68, (1.3)].

Lemma 2.2 Let f ∈ L¹(D). Then for almost all x ∈ D, there hold
$$\lim_{\alpha\to 0^+} {}_aI_x^\alpha f(x) = \lim_{\alpha\to 0^+} {}_xI_b^\alpha f(x) = f(x).$$


Proof Let x ∈ D be a Lebesgue point of f, i.e., lim_{h→0} (1/h)∫_0^h |f(x + s) − f(x)| ds = 0. Then let F_x(t) = ∫_{x−t}^x f(s) ds, for any t ∈ [0, x − a]. Then it can be represented as
$$F_x(t) = t[f(x) + \omega_x(t)], \quad 0 \le t \le x-a,$$
where ω_x(t) → 0 as t → 0. Thus for each ε > 0, there exists δ = δ(ε) > 0 such that
$$|\omega_x(t)| < \epsilon, \quad \forall t \in (0,\delta). \tag{2.18}$$
Then by the definition of F_x and integration by parts, we deduce
$${}_aI_x^\alpha f(x) := \frac{1}{\Gamma(\alpha)}\int_0^{x-a} s^{\alpha-1} f(x-s)\,ds = \frac{F_x(x-a)(x-a)^{\alpha-1}}{\Gamma(\alpha)} - \frac{\alpha-1}{\Gamma(\alpha)}\int_0^{x-a} F_x(s)s^{\alpha-2}\,ds = \frac{F_x(x-a)(x-a)^{\alpha-1}}{\Gamma(\alpha)} - \frac{\alpha-1}{\Gamma(\alpha+1)}(x-a)^\alpha f(x) - \frac{\alpha-1}{\Gamma(\alpha)}\Big[\int_0^\delta \omega_x(s)s^{\alpha-1}\,ds + \int_\delta^{x-a} \omega_x(s)s^{\alpha-1}\,ds\Big].$$
Then it follows from (2.18) and the fact that 0 is a pole of Γ(z) that lim_{α→0⁺} |aI_x^α f(x) − f(x)| ≤ ε. Since ε > 0 is arbitrary, it proves the first identity for x. This completes the proof, since the set of Lebesgue points of f ∈ L¹(D) is dense in D. □

The integral operators aI_x^α and xI_b^α inherit certain properties from integer-order integrals. Consider power functions (x − a)^γ with γ > −1 (so that (x − a)^γ ∈ L¹(D)). It can be verified directly that for α > 0 and x > a,
$${}_aI_x^\alpha (x-a)^\gamma = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+\alpha+1)}(x-a)^{\gamma+\alpha}, \tag{2.19}$$
and similarly for x < b
$${}_xI_b^\alpha (b-x)^\gamma = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+\alpha+1)}(b-x)^{\gamma+\alpha}.$$
Indeed, (2.19) follows from changing variables s = a + t(x − a) and (2.10):
$${}_aI_x^\alpha (x-a)^\gamma = \frac{1}{\Gamma(\alpha)}\int_a^x (x-s)^{\alpha-1}(s-a)^\gamma\,ds = \frac{(x-a)^{\gamma+\alpha}}{\Gamma(\alpha)}\int_0^1 (1-t)^{\alpha-1}t^\gamma\,dt = \frac{B(\alpha,\gamma+1)}{\Gamma(\alpha)}(x-a)^{\gamma+\alpha} = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+\alpha+1)}(x-a)^{\gamma+\alpha}.$$
Clearly, for any integer α ∈ N, these formulas recover the integral counterparts.

Example 2.1 We compute the Riemann-Liouville fractional integral 0I_x^α f(x), α > 0, of the exponential function f(x) = e^{λx}, λ ∈ R. By (2.19), we have

$${}_0I_x^\alpha f(x) = {}_0I_x^\alpha\sum_{k=0}^\infty \frac{(\lambda x)^k}{\Gamma(k+1)} = \sum_{k=0}^\infty \lambda^k\frac{{}_0I_x^\alpha x^k}{\Gamma(k+1)} = \sum_{k=0}^\infty \frac{\lambda^k x^{k+\alpha}}{\Gamma(k+\alpha+1)} = x^\alpha\sum_{k=0}^\infty \frac{(\lambda x)^k}{\Gamma(k+\alpha+1)} = x^\alpha E_{1,\alpha+1}(\lambda x), \tag{2.20}$$
where the Mittag-Leffler function E_{α,β}(z) is defined by $E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^k}{\Gamma(k\alpha+\beta)}$, for z ∈ C, cf. (3.1) in Chapter 3. The interchange of summation and integral is legitimate since E_{α,β}(z) is entire, cf. Proposition 3.1.

The following semigroup property holds for k-fold integrals: aI_x^k aI_x^ℓ f = aI_x^{k+ℓ} f, k, ℓ ∈ N₀, which holds also for aI_x^α and xI_b^α.

Theorem 2.1 For f ∈ L¹(D), α, β ≥ 0, there holds
$${}_aI_x^\alpha\,{}_aI_x^\beta f = {}_aI_x^\beta\,{}_aI_x^\alpha f = {}_aI_x^{\alpha+\beta} f, \qquad {}_xI_b^\alpha\,{}_xI_b^\beta f = {}_xI_b^\beta\,{}_xI_b^\alpha f = {}_xI_b^{\alpha+\beta} f.$$
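Before turning to the proof, (2.19), Example 2.1 and the semigroup property just stated can be checked numerically by evaluating (2.16) with adaptive quadrature. The sketch below assumes SciPy is available; rl_integral is our own helper, and the integrable endpoint singularity is handled only crudely by quad.

```python
import math
from scipy.integrate import quad

def rl_integral(f, alpha, x, a=0.0):
    """Left-sided Riemann-Liouville integral (2.16), evaluated by adaptive quadrature."""
    val, _ = quad(lambda s: (x - s) ** (alpha - 1) * f(s), a, x)
    return val / math.gamma(alpha)

alpha, beta, gamma, x = 0.6, 0.7, 1.5, 0.8
# (2.19): 0I_x^alpha x^gamma = Gamma(gamma+1)/Gamma(gamma+alpha+1) * x^(gamma+alpha)
print(rl_integral(lambda s: s ** gamma, alpha, x),
      math.gamma(gamma + 1) / math.gamma(gamma + alpha + 1) * x ** (gamma + alpha))
# Theorem 2.1: I^alpha(I^beta f) = I^(alpha+beta) f, here with f = exp
print(rl_integral(lambda s: rl_integral(math.exp, beta, s), alpha, x),
      rl_integral(math.exp, alpha + beta, x))
```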

Proof The case α = 0 or β = 0 is trivial, since a Ix0 is the identity operator. It suffices to consider α, β > 0. By changing integration order, we have ∫ s ∫ x 1 α β α−1 (x − s) (s − t)β−1 f (t) dtds (a Ix a Ix f )(x) = Γ(α)Γ(β) a a ∫ x ∫ x 1 α−1 β−1 = f (t) (x − s) (s − t) ds dt. Γ(α)Γ(β) a t Now by changing variables, the integral in the bracket can be simplified to ∫ x ∫ 1 (x − s)α−1 (s − t)β−1 ds = (x − t)α+β−1 (1 − s)α−1 s β−1 ds t

= B(α, β)(x − t)

0 α+β−1

.

By (2.10), we have β

(a Ixα a Ix f )(x) =

B(α, β) Γ(α)Γ(β)

∫ a

x

α+β

(x − s)α+β−1 f (s) ds = (a Ix

f )(x).

The other identity follows similarly.



The next result collects several mapping properties of aI_x^α; similar results hold for xI_b^α. Part (i) indicates that aI_x^α and xI_b^α are bounded on the L^p(D) spaces, 1 ≤ p ≤ ∞. Thus for any f ∈ L¹(D), aI_x^α f and xI_b^α f of order α > 0 exist almost everywhere (a.e.). The second part of (ii) is commonly known as the Hardy-Littlewood inequality. The spaces AC(D) and C^{k,γ}(D) are defined in Section A.1.1 in the appendix.

Theorem 2.2 The following mapping properties hold for aI_x^α, α > 0.
(i) aI_x^α is bounded on L^p(D) for any 1 ≤ p ≤ ∞.


(ii) For 1 ≤ p < α⁻¹, aI_x^α is a bounded operator from L^p(D) into L^r(D) for 1 ≤ r < (1 − αp)⁻¹. If 1 < p < α⁻¹, then aI_x^α is bounded from L^p(D) to L^{p/(1−αp)}(D).
(iii) For p > 1 and p⁻¹ < α < 1 + p⁻¹, or p = 1 and 1 ≤ α < 2, aI_x^α is bounded from L^p(D) into C^{0,α−1/p}(D), and moreover, (aI_x^α f)(a) = 0 for f ∈ L^p(D).
(iv) aI_x^α maps AC(D) into AC(D).
(v) If f ∈ L¹(D) is nonnegative and nondecreasing, then aI_x^α f is nondecreasing.

showing the assertion for a Ixα . The first part of (ii) follows similarly, and the proof of the second part is technical and lengthy, and can be found in [HL28, Theorem 4]. To see (iii), consider first the case p > 1. Suppose first a + h ≤ x < x + h ≤ b. Then Γ(α)(a Ixα f (x + h) − a Ixα f (x)) ∫ x+h ∫ x = (x + h − s)α−1 f (s)ds − (x − s)α−1 f (s)ds a

∫ =

a

x+h

x



+

(x + h − s)α−1 f (s)ds +

x−h

a



x

x−h

((x + h − s)α−1 − (x − s)α−1 ) f (s)ds

((x + h − s)α−1 − (x − s)α−1 ) f (s)ds :=

3 

Ii .

i=1

By Hölder’s inequality, since (α−1)p > −(p−1) and ((p−1)−1 (α−1)p+1)p−1 (p−1) = α − p−1 ,

∫ x+h p1 ∫ x+h p−1 p(α−1) 1 p |I1 | ≤ | f (s)| p ds (x + h − s) p−1 ds ≤ c f L p (D) hα− p . x

x

This argument also yields |I2 | ≤ c f L p (D) hα− p . Now for the term I3 , since 1

|(x + h − s)α−1 − (x − s)α−1 | ≤ h|1 − α|(x − s)α−2, we have ∫ |I3 | ≤ h|1 − α| ≤ h|1 − α|

x−h

a

∫ a

| f (s)|(x − s)α−2 ds

x−h

| f (s)| ds

p−1 ≤ |1 − α| p + 1 − αp

p

p−1 p

p1 ∫ a

x−h

(x − s)

hα− f L p (D) . 1 p

p(α−2) p−1

ds

p−1 p


Thus, the assertion follows. When x ≤ a + h, the estimate is direct. If p = 1, we have α − 1 ≥ 0, and so ∫ x+h | f (s)|ds, |I1 | ≤ hα−1 x

and similarly |I3 | ≤ h(α − 1)hα−2

∫ a

x−h

| f (s)|ds.

Thus the estimate also holds. This shows (iii). Next, if f ∈ AC(D), then f ∈ L 1 (D) exists a.e. and f (x) − f (a) = (a Ix1 f )(x) for all x ∈ D. Then α a Ix

f (x) = a Ixα a Ix1 f (x) + (a Ixα f (a))(x) = a Ix1 (a Ixα f )(x) + f (a)

(x − a)α . Γ(α + 1)

Both terms on the right-hand side belong to AC(D), showing (iv). Last, ∫ (x − a)α 1 α (1 − s)α−1 f ((x − a)s + a)ds, a I x f (x) = Γ(α) 0 which is nondecreasing in x when f is nonnegative and nondecreasing.



Remark 2.1 The mapping properties of the Riemann-Liouville fractional integral were systematically studied by Hardy and Littlewood [HL28, HL32]. (ii) can be found in [HL28, Theorem 4]. They also showed that the result does not hold if p = 1 for any α ∈ (0, 1), and that if 0 < α < 1 and p = α⁻¹, aI_x^α f is not necessarily bounded. (iii) was proved in [HL28, Theorem 12], where it was also pointed out that the result is false in the cases p > 1, α = p⁻¹ and α = 1 + p⁻¹.

The next result asserts that aI_x^α and xI_b^α are adjoint to each other in L²(D). It is commonly referred to as the "fractional integration by parts" formula [SKM93, Corollary, p. 67].

Lemma 2.3 For any f ∈ L^p(D), g ∈ L^q(D), p, q ≥ 1 with p⁻¹ + q⁻¹ ≤ 1 + α (p, q ≠ 1 when p⁻¹ + q⁻¹ = 1 + α), the following identity holds
$$\int_a^b g(x)({}_aI_x^\alpha f)(x)\,dx = \int_a^b f(x)({}_xI_b^\alpha g)(x)\,dx.$$

Proof For f, g ∈ L²(D), the proof is direct: by Theorem 2.2(i), for f ∈ L²(D), aI_x^α f ∈ L²(D), and hence $\int_a^b |g(x)||({}_aI_x^\alpha f)(x)|\,dx \le c\|f\|_{L^2(D)}\|g\|_{L^2(D)}$. Now the desired formula follows from Fubini's theorem. The general case follows from Theorem 2.2(ii) instead. □

The next result discusses the mapping to AC(D) [Web19a, Proposition 3.6]. It is useful for characterizing fractional derivatives and solution concepts for fdes.

Lemma 2.4 Let f ∈ L¹(D) and α ∈ (0, 1). Then aI_x^{1−α} f ∈ AC(D) and aI_x^{1−α} f(a) = 0 if and only if there exists g ∈ L¹(D) such that f = aI_x^α g.


Proof If there exists g ∈ L¹(D) such that f = aI_x^α g, then by Theorem 2.1,
$${}_aI_x^{1-\alpha} f = {}_aI_x^{1-\alpha}\,{}_aI_x^\alpha g = {}_aI_x^1 g \in AC(D),$$
and since g ∈ L¹(D), by the absolute continuity of the Lebesgue integral, aI_x^{1−α} f(a) = lim_{x→a⁺} ∫_a^x g(s) ds = 0. Conversely, suppose aI_x^{1−α} f ∈ AC(D) and aI_x^{1−α} f(a) = 0. Let G(x) = aI_x^{1−α} f, so that G ∈ AC(D) and G(a) = 0. Then g := G′ exists for a.e. x ∈ D with g ∈ L¹(D), and G(x) = aI_x^1 g. Furthermore, aI_x^α G = aI_x^1 f and aI_x^α G = aI_x^α aI_x^1 g = aI_x^1 aI_x^α g. Since f, aI_x^α g ∈ L¹(D), aI_x^1 f, aI_x^1 aI_x^α g ∈ AC(D) and their derivatives exist a.e. as L¹(D) functions and are equal, that is, f = aI_x^α g. □

Last, we generalize the mean value theorem in calculus: if f ∈ C(D), g is Lebesgue integrable on D and g does not change sign in D, then
$$\int_a^b f(x)g(x)\,dx = f(\xi)\int_a^b g(x)\,dx, \tag{2.21}$$
for some ξ ∈ (a, b). One fractional version is as follows [Die12, Theorem 2.1] (see also [Die17]). The classical case is recovered by setting α = 1 and x = b.

Theorem 2.3 Let α > 0 and f ∈ C(D), and let g be Lebesgue integrable on D and not change its sign in D. Then for almost every x ∈ (a, b], there exists some ξ ∈ (a, x) ⊂ D such that aI_x^α(fg)(x) = f(ξ) aI_x^α g(x). If additionally α ≥ 1 or g ∈ C(D), then this result holds for every x ∈ (a, b].

Proof Note that $ {}_aI_x^\alpha(fg)(x) = \frac{1}{\Gamma(\alpha)}\int_a^x (x-s)^{\alpha-1}f(s)g(s)\,ds$. If α ≥ 1 and x ∈ (a, b], then (x − s)^{α−1} is continuous in s. Hence, $\tilde g(s) = \frac{(x-s)^{\alpha-1}}{\Gamma(\alpha)}g(s)$ is integrable on [a, x] and does not change sign on [a, x]. Thus by (2.21),
$${}_aI_x^\alpha(fg)(x) = \int_a^x f(s)\tilde g(s)\,ds = f(\xi)\int_a^x \tilde g(s)\,ds = f(\xi)\,{}_aI_x^\alpha g(x).$$
If 0 < α < 1 and g is continuous, the same line of proof works. Finally, if 0 < α < 1 and g is only integrable, then one can still argue in a similar way, but the integrability of $\tilde g$ holds only for almost all x ∈ D, cf. Theorem 2.2(i). □

2.3 Fractional Derivatives

Now we discuss fractional derivatives, for which there are several different definitions. We only discuss the Riemann-Liouville and Djrbashian-Caputo fractional derivatives, which represent the two most popular choices in practice, and briefly mention the Grünwald-Letnikov fractional derivative. There are several popular textbooks with extensive treatment on fractional derivatives [OS74, MR93, Pod99, KST06, Die10] and also monographs [Dzh66, SKM93]. An encyclopedic treatment of the Riemann-Liouville fractional integral and derivatives is given in the monograph [SKM93],


and the book [Die10] contains detailed discussions on the Djrbashian-Caputo fractional derivative. Like before, for any fixed a, b ∈ R, a < b, which are assumed to be finite unless otherwise stated, we denote by D = (a, b) the open interval and by D̄ = [a, b] its closure. Further, for k ∈ N, f^{(k)} denotes the kth order derivative of f.

2.3.1 Riemann-Liouville fractional derivative

We begin with the definition of the Riemann-Liouville fractional derivative.

Definition 2.2 For f ∈ L¹(D) and n − 1 < α ≤ n, n ∈ N, its left-sided Riemann-Liouville fractional derivative of order α (based at x = a), denoted by R_aD_x^α f, is defined by
$${}^R_aD_x^\alpha f(x) := \frac{d^n}{dx^n}({}_aI_x^{n-\alpha} f)(x) = \frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\int_a^x (x-s)^{n-\alpha-1} f(s)\,ds,$$
if the integral on the right-hand side exists. Its right-sided Riemann-Liouville fractional derivative of order α (based at x = b), denoted by R_xD_b^α f, is defined by
$${}^R_xD_b^\alpha f(x) := (-1)^n\frac{d^n}{dx^n}({}_xI_b^{n-\alpha} f)(x) = \frac{(-1)^n}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\int_x^b (s-x)^{n-\alpha-1} f(s)\,ds,$$
if the integral on the right-hand side exists.

Definition 2.2 does not give the condition for the existence of a Riemann-Liouville fractional derivative. By Lemma 2.4, the condition aI_x^{n−α} f ∈ AC^n(D) is needed to ensure R_aD_x^α f ∈ L¹(D), and similarly xI_b^{n−α} f ∈ AC^n(D) for R_xD_b^α f ∈ L¹(D). We denote by
$${}^R_aD_x^\alpha f(a^+) = \lim_{x\to a^+}{}^R_aD_x^\alpha f(x),$$
if the limit on the right-hand side exists. The quantity R_xD_b^α f(b⁻) is defined similarly. Due to the presence of aI_x^{n−α}, R_aD_x^α f is inherently nonlocal: the value of R_aD_x^α f at x > a depends on the values of f from a to x. The nonlocality dramatically influences its analytical properties.

We compute the derivatives R_aD_x^α (x − a)^γ, α > 0, of the function (x − a)^γ with γ > −1. The identities (2.19) and (2.2) yield
$${}^R_aD_x^\alpha (x-a)^\gamma = \frac{d^n}{dx^n}\frac{\Gamma(\gamma+1)(x-a)^{\gamma+n-\alpha}}{\Gamma(\gamma+1+n-\alpha)} = \frac{\Gamma(\gamma+1)(x-a)^{\gamma-\alpha}}{\Gamma(\gamma+1-\alpha)}. \tag{2.22}$$
One can draw a number of interesting observations. First, for α ∉ N, R_aD_x^α f of the constant function f(x) ≡ 1 (i.e., γ = 0) is not identically zero, since for x > a:
$${}^R_aD_x^\alpha 1 = \frac{(x-a)^{-\alpha}}{\Gamma(1-\alpha)}. \tag{2.23}$$


This fact is inconvenient for practical applications involving initial/boundary conditions, where the physical meaning of (2.23) then becomes unclear. Nonetheless, when α ∈ N, since 0, −1, −2, . . . are the poles of Γ(z), the right-hand side of (2.23) vanishes, and thus it recovers the familiar identity d^n/dx^n 1 = 0. Second, given α > 0, for any γ = α − 1, α − 2, . . . , α − n, γ + 1 − α is a negative integer or zero, and we have
$${}^R_aD_x^\alpha (x-a)^{\alpha-j} \equiv 0, \quad j = 1, \ldots, n. \tag{2.24}$$
In particular, for α ∈ (0, 1), (x − a)^{α−1} belongs to the kernel of the operator R_aD_x^α and plays the same role as a constant function for the first-order derivative. Generally, it implies that if f, g ∈ L¹(D) with aI_x^{n−α} f, aI_x^{n−α} g ∈ AC(D) and n − 1 < α ≤ n, then
$${}^R_aD_x^\alpha f(x) = {}^R_aD_x^\alpha g(x) \iff f(x) = g(x) + \sum_{j=1}^n c_j(x-a)^{\alpha-j},$$
where c_j, j = 1, 2, . . . , n, are arbitrary constants. Hence, we have the following useful result, where the notation (R_aD_x^{α−n} f)(a⁺) is identified with (aI_x^{n−α} f)(a⁺). When α ∈ N, the result recovers the familiar Taylor polynomial.

Theorem 2.4 If n − 1 < α ≤ n, n ∈ N, f ∈ L¹(D), aI_x^{n−α} f ∈ AC^n(D), and f satisfies R_aD_x^α f(x) = 0 in D, then it can be represented by
$$f(x) = \sum_{j=1}^n \frac{({}^R_aD_x^{\alpha-j} f)(a^+)}{\Gamma(\alpha-j+1)}(x-a)^{\alpha-j}.$$

Proof By the preceding discussion, we have $f(x) = \sum_{j=1}^n c_j(x-a)^{\alpha-j}$. Applying R_aD_x^{α−k}, k = 1, . . . , n, to both sides gives
$$({}^R_aD_x^{\alpha-k} f)(a^+) = \lim_{x\to a^+}({}^R_aD_x^{\alpha-k} f)(x) = \sum_{j=1}^n c_j\lim_{x\to a^+}{}^R_aD_x^{\alpha-k}(x-a)^{\alpha-j} = c_k\Gamma(\alpha-k+1),$$
in view of the identities (2.22) and (2.24), which gives directly the assertion. □

Example 2.2 The integer-order derivative of e^{λx}, λ ∈ R, is still a multiple of e^{λx}. Now we compute R_0D_x^α f of f(x) = e^{λx}. From (2.22), we deduce
$${}^R_0D_x^\alpha e^{\lambda x} = {}^R_0D_x^\alpha\sum_{k=0}^\infty \frac{(\lambda x)^k}{\Gamma(k+1)} = \sum_{k=0}^\infty \lambda^k\frac{{}^R_0D_x^\alpha x^k}{\Gamma(k+1)} = \sum_{k=0}^\infty \frac{\lambda^k x^{k-\alpha}}{\Gamma(k+1-\alpha)} = x^{-\alpha}\sum_{k=0}^\infty \frac{(\lambda x)^k}{\Gamma(k+1-\alpha)} = x^{-\alpha}E_{1,1-\alpha}(\lambda x). \tag{2.25}$$
Generally the right-hand side is no longer an exponential function if α ∉ N.


Example 2.3 For α > 0 and λ ∈ C, let f(x) = x^{α−1}E_{α,α}(λx^α). Then the Riemann-Liouville fractional derivative R_0D_x^α f is given by
$${}^R_0D_x^\alpha f(x) = {}^R_0D_x^\alpha\sum_{k=0}^\infty \frac{\lambda^k x^{(k+1)\alpha-1}}{\Gamma(k\alpha+\alpha)} = \sum_{k=0}^\infty \lambda^k\frac{{}^R_0D_x^\alpha x^{(k+1)\alpha-1}}{\Gamma(k\alpha+\alpha)} = \sum_{k=0}^\infty \frac{\lambda^k x^{k\alpha-1}}{\Gamma(k\alpha)} = \lambda x^{\alpha-1}E_{\alpha,\alpha}(\lambda x^\alpha).$$

The interchange of the fractional derivative and summation is legitimate since E_{α,β}(z) is an entire function, cf. Proposition 3.1 in Chapter 3. That is, x^{α−1}E_{α,α}(λx^α) is invariant under R_0D_x^α, and is a candidate for an eigenfunction of the operator R_0D_x^α (when equipped with suitable boundary conditions).

Remark 2.2 Note that in Examples 2.2 and 2.3, the obtained results depend on the starting value a. For example, if we take a = −∞, then direct computation shows
$${}^R_{-\infty}D_x^\alpha e^{\lambda x} = \lambda^\alpha e^{\lambda x}, \quad \lambda > 0.$$
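A simple way to check (2.23) and (2.25) numerically is the Grünwald-Letnikov difference discussed in Section 2.3.3, which agrees with the Riemann-Liouville derivative for such functions. The sketch below (NumPy assumed; the helper names gl_derivative and ml are ours) uses the recursion w_j = w_{j−1}(1 − (α+1)/j) for the weights (−1)^j C(α, j) and a truncated Mittag-Leffler series.

```python
import math
import numpy as np

def gl_derivative(f, alpha, x, N=4000):
    """Grunwald-Letnikov approximation of R_0D_x^alpha f(x), alpha in (0,1)."""
    h = x / N
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):                 # weights (-1)^j C(alpha, j)
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return (w * f(x - h * np.arange(N + 1))).sum() / h ** alpha

def ml(a, b, z, K=100):
    """Truncated Mittag-Leffler series E_{a,b}(z), cf. Chapter 3."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(K))

alpha, lam, x = 0.4, 1.3, 0.7
print(gl_derivative(lambda s: np.ones_like(s), alpha, x),
      x ** (-alpha) / math.gamma(1 - alpha))                  # (2.23)
print(gl_derivative(lambda s: np.exp(lam * s), alpha, x),
      x ** (-alpha) * ml(1.0, 1.0 - alpha, lam * x))          # (2.25)
```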

Next we examine the analytical properties of R_aD_x^α more closely. In calculus, both integral and differential operators satisfy the commutativity and semigroup property. This is also true for aI_x^α. However, the operator R_aD_x^α satisfies neither property. The composition of two Riemann-Liouville fractional derivatives may lead to totally unexpected results. Further, for f ∈ L¹(D) and α, β > 0 with α + β = 1, the following statements hold:
(i) If R_aD_x^α(R_aD_x^β u) = f, then u(x) = aI_x^1 f(x) + c₀ + c₁(x − a)^{β−1};
(ii) If R_aD_x^β(R_aD_x^α v) = f, then v(x) = aI_x^1 f(x) + c₂ + c₃(x − a)^{α−1};
(iii) If R_aD_x^1 w = f, then w(x) = aI_x^1 f(x) + c₄,
where c_i, i = 0, . . . , 4, are arbitrary constants. Indeed, for (i), one first applies aI_x^α to both sides, and then aI_x^β. Thus two extra terms have to be included: (i) and (ii) include two constants, whereas (iii) includes only one. Note that if we restrict f ∈ C(D), then the singular term (x − a)^{α−1} (or (x − a)^{β−1}) must disappear, indicating that a composition rule might still be possible on suitable subspaces. Indeed, it does hold on the space aI_x^γ(L^p(D)), γ > 0 and p ∈ [1, ∞], defined by
$${}_aI_x^\gamma(L^p(D)) = \{ f \in L^p(D) : f = {}_aI_x^\gamma\varphi \text{ for some } \varphi \in L^p(D)\},$$
and similarly the space xI_b^γ(L^p(D)). A function in aI_x^γ(L^p(D)) has the property that its function value and a sufficient number of derivatives vanish at x = a.

Lemma 2.5 For any α, β ≥ 0, there holds
$${}^R_aD_x^\alpha\,{}^R_aD_x^\beta f = {}^R_aD_x^{\alpha+\beta} f, \quad \forall f \in {}_aI_x^{\alpha+\beta}(L^1(D)).$$

Proof Since f ∈ aI_x^{α+β}(L¹(D)), f = aI_x^{α+β}φ for some φ ∈ L¹(D). Now the desired assertion follows directly from Theorem 2.6(i) below and Theorem 2.1. □


There is no simple formula for the Riemann-Liouville fractional derivative of the product of two functions. In the integer-order case, the Leibniz rule asserts
$$(fg)^{(k)} = \sum_{i=0}^k C(k,i)f^{(i)}g^{(k-i)}.$$
This formula is fundamental to many powerful tools in pde analysis. In contrast, generally, for any α ∈ (0, 1), R_aD_x^α(fg) ≠ (R_aD_x^α f)g + f R_aD_x^α g, although there is a substantially modified version. Hence, many useful tools derived from this, e.g., the integration by parts formula, are either invalid or require substantial modification. The next result (Theorem 2.5) directly implies
$${}^R_aD_x^\alpha((x-a)f) = (x-a)({}^R_aD_x^\alpha f) + C(\alpha,1)({}^R_aD_x^{\alpha-1} f).$$

Theorem 2.5 Let f and g be analytic on D. Then for any α > 0 and β ∈ R,
$${}^R_aD_x^\alpha(fg) = \sum_{k=0}^\infty C(\alpha,k)({}^R_aD_x^{\alpha-k} f)\,g^{(k)},$$
$${}^R_aD_x^\alpha(fg) = \sum_{k=-\infty}^\infty C(\alpha,k+\beta)({}^R_aD_x^{\alpha-\beta-k} f)({}^R_aD_x^{\beta+k} g).$$

Proof Since f , g are analytic, the product h = f g is also analytic. Hence, there exists a power series representation of h based at x, h(s) =

∞ 

(−1)k

k=0

h(k) (x) (x − s)k , k!

and then applying the operator RaDxα termwise (which is justified by the uniform convergence of the corresponding series), we obtain for n − 1 < α ≤ n, n ∈ N, R α aDx h(x)

dn 1 = Γ(n − α) dx n

∫ a

x

(x − s)n−α−1

∞  (−1)k (x − s)k k=0

Γ(k + 1)

h(k) (x)ds

∫ x ∞  dn (−1)k h(k) (x) = (x − s)k+n−α−1 ds n Γ(n − α)Γ(k + 1) dx a k=0 =

∞  dn (−1)k (x − a)k+n−α h(k) (x) . dx n Γ(n − α)Γ(k + 1)(k + n − α) k=0

Next we simplify the constant on the right-hand side. By the identity (2.2), C(k + n − α − 1, k) 1 = . Γ(n − α)Γ(k + 1)(k + n − α) Γ(k + n − α + 1) Now by the combinatorial identity (2.11), we further deduce


C(α − n, k) (−1)k = . Γ(n − α)Γ(k + 1)(k + n − α) Γ(k + n − α + 1) Consequently, R α aDx h(x)

=

∞ 

C(α − n, k)

k=0

dn (x − a)k+n−α h(k) (x) . dx n Γ(n + k − α + 1)

Then by the standard Leibniz rule, we obtain R α aDx h(x)

=

∞ 

C(α − n, k)

C(n, j)

j=0

k=0

=

n 

∞ ∞  

C(α − n, k)C(n, j)

k=0 j=0

(x − a) j+k−α h(k+j) (x) Γ( j + k − α + 1)

(x − a) j+k−α h(k+j) (x) . Γ( j + k − α + 1)

Now introducing a new variable = j + k of summation and changing the order of summation lead to R α aDx h(x)

= =

∞    =0 ∞ 

C(α − n, k)C(n, − k)

(x − a)−α h() (x)

k=0

C(α, )

=0

Γ( − α + 1)

(x − a)−α h() (x) , Γ( − α + 1)

(2.26)

where we have used the identity (2.13). Applying the classical Leibniz rule, interchanging of summation order and the identity (2.12) give R α aDx ( f g)

=

∞  k=0

=

∞  k=0

g (k) (x)

∞ 

C(α, k + j)C(k + j, k)

j=0

C(α, k)g (k) (x)





C(α − k, j)

j=0

(x − a)k+j−α f (j) (x) Γ(k + j + 1 − α)

(x − a)k+j−α f (j) (x) . Γ(k + j + 1 − α)

Now using the identity (2.26) completes the proof of the first assertion. The second assertion can be proved in a similar but more tedious manner, and thus we refer to [SKM93, Section 15] or [Osl70] for a complete proof.  The next result gives the fundamental theorem of calculus for RaDxα : it is the left inverse of a Ixα on L 1 (D), but generally not a right inverse. Theorem 2.6 Let α > 0, n − 1 < α ≤ n, n ∈ N. Then (i) For any f ∈ L 1 (D), RaDxα (a Ixα f ) = f . (ii) If a Ixn−α f ∈ AC n (D), then


$${}_aI_x^\alpha\,{}^R_aD_x^\alpha f = f - \sum_{j=0}^{n-1}{}^R_aD_x^{\alpha-j-1} f(a^+)\,\frac{(x-a)^{\alpha-j-1}}{\Gamma(\alpha-j)}.$$

Proof (i) If f ∈ L 1 (D), by Theorem 2.2(i), a Ixα f ∈ L 1 (D), and by Theorem 2.1, n−α α a Ix a Ix

f = a Ixn f ∈ AC n (D).

Upon differentiating, we obtain the desired assertion. (ii) If a Ixn−α f ∈ AC n (D), by the characterization of the space AC n (D) (cf. Theorem A.1 in the appendix), we deduce n−α a Ix

f =

n−1 

cj

j=0 j−n+α

for some ϕ f ∈ L 1 (D), with c j = RaDx

αR α a I x aDx

(x − a) j + a Ixn ϕ f , Γ( j + 1)

(2.27)

f (a+ ). Consequently,

f = a Ixα ϕ f .

(2.28)

Now applying a Ixα to both sides of (2.27) and using Theorem 2.1, we obtain n a Ix f =

n−1  j=0

cj

(x − a)α+j + a Ixα+n ϕ f . Γ(α + j + 1)

Differentiating both sides n times gives f =

n−1 

cj

j=0

(x − a)α+j−n + a Ixα ϕ f . Γ(α + j − n + 1)

This together with (2.28) yields assertion (ii). Alternatively, by definition, we have ∫ x dn 1 αR α (x − s)α−1 n (a Isn−α f )(s)ds a I x aDx f (x) = Γ(α) a ds ∫ x   d dn 1 = (x − s)α n (a Isn−α f )(s)ds . dx Γ(α + 1) a ds Then applying integration by parts n times to the term in bracket gives ∫ x n  dn−j n−α (x − a)α−j 1 . (x − s)α−n−1 a Isn−α f (s)ds − I f (s)| a s=a Γ(α − n) 0 Γ(α + 1 − j) ds n−j s j=1 Note that the expression makes sense due to the conditions on f (x). The next result gives a generalization of Theorem 2.6 [DN68, (1.13)–(1.15)]. Proposition 2.1 Let f ∈ L 1 (D).



34

2 Fractional Calculus α−β

(i) For α, β ≥ 0, if RaDx

f ∈ L 1 (D), then R α β aDx a I x

α−β

f (x) = RaDx

f (x).

β

(ii) If RaDx f ∈ L 1 (D), with 0 ≤ n − 1 ≤ β ≤ n, then for any α ≥ 0, there holds αR β a I x aDx

β−α

f (x) = RaDx

f (x) −

n 

R β−j aDx

j=1

f (a)

(x − a)α−j . Γ(α + 1 − j)

(iii) If α, β ∈ (0, 1], then R αR β aDx aDx

f (x) =

d R α+β−1 (x − a)−α  1−β , D f − I f (a) a x x dx a Γ(1 − α)

provided that all the terms make sense in L 1 (D). Proof Not that for β ≥ α, (i) follows directly from Theorems 2.1 and 2.6(i) by R α β aDx a I x

β−α

f (x) = RaDxα a Ixα a Ix

f (x).

Meanwhile, for β < α, there exist integers m, n > 0 such that α − β = m + θ, 0 ≤ θ < 1, α − m = n − r, r ≥ 0. Then the preceding discussion and the definition of the Riemann-Liouville fractional derivative give R α−β aDx

dm θ dm R α−m β I f (x) = D a I x f (x) a x dx m dx m a x m n d d β β = m n a Ixr a Ix f (x) = RaDxα a Ix f (x). dx dx

f (x) =

β

β−α

The identity in (ii) follows from Theorem 2.6(ii), since RaDx f (x) = RaDxαRaDx Last, (iii) follows from (ii) since by definition R αR β aDx aDx

f (x) =

f (x).

d β (a I 1−αRD f )(x). dx x a x

This completes the proof of the proposition.



The next result, due to Tamarkin [Tam30], is about the inversion of Abel's integral equation. Abel [Abe81, Abe26] solved the integral equation now named after him in connection with the tautochrone problem (corresponding to α = 1/2), and gave the solution for any α ∈ (0, 1). The proper history of fractional calculus began with the papers by Abel and Liouville.

Theorem 2.7 Let g ∈ L¹(D). The integral equation with n − 1 < α < n, n ∈ N,
$$g(x) = \frac{1}{\Gamma(\alpha)}\int_a^x (x-s)^{\alpha-1} f(s)\,ds, \tag{2.29}$$
has a solution f ∈ L¹(D) if and only if


(i) The function ω(x) = aI_x^{n−α} g ∈ AC^n(D).
(ii) ω(a) = ω′(a) = . . . = ω^{(n−1)}(a) = 0.
If conditions (i) and (ii) are satisfied, then the solution to (2.29) is unique and can be represented a.e. on D by
$$f(x) = \frac{d^n}{dx^n}\,{}_aI_x^{n-\alpha} g(x). \tag{2.30}$$

Proof If (2.29) has a solution f ∈ L 1 (D), then applying to both sides the integral operator a Ixn−α and then appealing to Theorem 2.1 give ω(x) = a Ixn−α g(x) = n−α I α f (x) = I n f (x). Thus the formula (2.30) and conditions (i) and (ii) hold. a x a Ix a x Next we prove the sufficiency of the conditions. It follows from (i) that dn dn n−α ω(x) = g(x) ∈ L 1 (D). aI dx n dx n x Meanwhile, the semigroup property in Theorem 2.1 implies ∫ x 1 1 α−n+1 I g(x) = I ω(x) = (x − s)α−n ω(s)ds. a x a x Γ(α − n + 1) a By integration by parts and condition (ii), the last integral can be rewritten as ∫ x ∫ x 1 g(s)ds = (x − s)α ω(n) ds. Γ(α + 1) a a Thus the function f (x) of the form (2.30) satisfies equation ∫ x ∫ x 1 g(s)ds = (x − s)α f (s)ds. Γ(α + 1) a a 

Thus, the sufficiency part follows. We have the following fractional version of integration by parts formula.

Lemma 2.6 Let α > 0, p, q ≥ 1, and p−1 + q−1 ≤ 1 + α (p, q  1 when p−1 + q−1 = 1 + α). Then if f ∈ a Ixα (L p (D)) and g ∈ x Ibα (L q (D)), there holds ∫

b

a

g(x)RaDxα

∫ f (x) dx =

b

a

f (x)RxDbα g(x) dx.

Proof Since f ∈ a Ixα (L p (D)), f = a Ixα ϕ f for some ϕ f ∈ L p (D), and g = x Ibα ϕg for some ϕg ∈ L q (D). By Theorem 2.6(i), it suffices to show ∫ a

b

ϕ f (x)x Ibα ϕg (x) dx =

which holds in view of Lemma 2.3.

∫ a

b

similarly

α a I x ϕ f ϕg (x) dx,




The next result gives the Laplace transform of the Riemann-Liouville fractional integral and derivatives. See Section A.3.1 of the appendix for the Laplace transform. The Laplace transform L[R_0D_x^α f] involves R_0D_x^{α+k−n} f(0⁺). Thus applying the Laplace transform to solve relevant fdes requires such initial conditions.

Lemma 2.7 Let α > 0, f ∈ L¹(0, b) for any b > 0, and |f(x)| ≤ c e^{p₀x} for all x > b > 0, for some constant c and p₀ > 0.
(i) The relation L[₀I_x^α f](z) = z^{−α}L[f](z) holds for any ℜ(z) > p₀.
(ii) If n − 1 < α ≤ n, n ∈ N, f ∈ AC^n[0, b] for any b > 0, and lim_{x→∞} R_0D_x^{α+j−n} f(x) = 0, j = 0, 1, . . . , n − 1, then for any ℜ(z) > p₀, there holds
$$L[{}^R_0D_x^\alpha f](z) = z^\alpha L[f](z) - \sum_{j=0}^{n-1} z^{n-j-1}({}^R_0D_x^{\alpha+j-n} f)(0^+).$$
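Part (i) of Lemma 2.7 can be illustrated numerically: for f ≡ 1 one has 0I_x^α 1 = x^α/Γ(α+1) and L[1](z) = 1/z. The sketch below assumes SciPy; laplace is our own helper.

```python
import math
import numpy as np
from scipy.integrate import quad

def laplace(f, z):
    """Numerical Laplace transform L[f](z) = int_0^infty exp(-z*x) f(x) dx."""
    val, _ = quad(lambda x: math.exp(-z * x) * f(x), 0.0, np.inf)
    return val

alpha, z = 0.7, 2.0
print(laplace(lambda x: x ** alpha / math.gamma(alpha + 1), z),  # L[0I_x^alpha 1](z)
      z ** (-alpha) / z)                                         # z^{-alpha} * L[1](z)
```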

Proof The condition on f ensures that the Laplace transform is well defined. We write 0 Ixα in a convolution form as 0 Ixα f (x) = (ωα ∗ f )(x), with ωα (x) = Γ(α)−1 max(0, x)α−1 . Since L[ωα ] = z −α (cf. Example A.2), part (i) follows from the convolution rule (A.8) for Laplace transform. Likewise, R0Dxα f (x) = g (n) (x) with g(x) = 0 Ixn−α f (x), Then L[g](z) = z α−n L[ f ](z). By the convolution rule (A.8), L[R0Dxα f (x)](z) = L[g (n) (x)] = z n L[g](z) −

n−1 

z n−j−1 g (j) (0+ )

j=0

= z α L[ f ](z) −

n−1 

α+j−n

z n−j−1R0Dx

f (0+ ),

j=0



which shows the desired formula.

A similar relation holds for the Fourier transform (see Appendix A.3.2) of the Liouville fractional integrals −∞I_x^α f and xI_∞^α f and derivatives R_{−∞}D_x^α f and R_xD_∞^α f.

Theorem 2.8 For 0 < α < 1 and f sufficiently regular, the following relations hold
$$F[{}_{-\infty}I_x^\alpha f](\xi) = (i\xi)^{-\alpha}F[f](\xi) \quad\text{and}\quad F[{}_xI_\infty^\alpha f](\xi) = (-i\xi)^{-\alpha}F[f](\xi), \tag{2.31}$$
where the notation (±iξ)^α, ξ ∈ R, denotes $|\xi|^\alpha e^{\pm\alpha\pi i\,\mathrm{sign}(\xi)/2}$. Similarly, for any α > 0, the following relations hold for the Liouville fractional derivatives:
$$F[{}^R_{-\infty}D_x^\alpha f](\xi) = (i\xi)^{\alpha}F[f](\xi) \quad\text{and}\quad F[{}^R_xD_\infty^\alpha f](\xi) = (-i\xi)^{\alpha}F[f](\xi). \tag{2.32}$$

Proof Note that


∫ x ∫ ∞ 1 e−iξ x (x − t)α−1 f (t)dtdx Γ(α) −∞ −∞ ∫ ∞ ∫ x 1 − e−iξ x 1 d (x − t)α−1 f (t)dtdx = Γ(α) dξ −∞ ix −∞ ∫ ∞ ∫ ∞ 1 − e−iξ x 1 d (x − t)α−1 dxdt. f (t) = Γ(α) dξ −∞ ix t

F [−∞ Ixα f ](ξ) =

The interchange of order of integration is made possible by Fubini’s theorem. Further, after differentiation, we obtain ∫ ∞ ∫ ∞ 1 F [−∞ Ixα f ](ξ) = f (t) e−ixξ (x − t)α−1 dxdt Γ(α) −∞ t ∫ ∞ ∫ ∞ ∫ 1 F [ f ](ξ) ∞ α−1 −ixξ = f (t)e−iξt dt e−iξ x x α−1 dx = x e dx. Γ(α) −∞ Γ(α) 0 0 Hence, we obtain the desired assertion for −∞ Ixα from the following identity ∫ ∞ 1 x α−1 e−ixξ dx = (iξ)−α, ∀α ∈ (0, 1). Γ(α) 0 Indeed, the condition α ∈ (0, 1) provides the convergence of the left-hand side integral at infinity. Then with the substitution ixξ = s, ∫ ∞ ∫ x α−1 e−ixξ dx = (iξ)−α s α−1 e−s ds, 0

L

where L is the imaginary half-axis (0, i∞) for ξ > 0 and the half-axis (−i∞, 0) for −s vanishes in the right half plane as |ξ | → ∞, we have ∫ξ < 0. Since e ∫exponentially ∞ α−1 −s α−1 −s s e ds = s e ds = Γ(α) by the Cauchy integral theorem. The other 0 L assertions follow in a similar manner, and thus the proof is omitted.  Remark 2.3 For α ∈ (0, 1), f ∈ L 1 (R) is sufficient for the identities in (2.31). For α ≥ 1, the left-hand side in may not exist even ∫ x for very smooth functions, e.g., f ∈ C0∞ (R). Indeed, if α = 1, then −∞ Ix1 f (x) = −∞ f (t)dt, so it tends to a constant as x → +∞, and thus the Fourier transform F [−∞ Ix f ] does not exist in the usual sense. In contrast, (2.32) is valid for all sufficiently smooth functions, e.g., those which are differentiable up to order n, and vanish sufficiently rapidly at infinity together with their derivatives. One may verify this by writing −∞RDxα f as −∞RDxα f = −∞ Ixn−α f (n) and then applying (2.31). The next nonnegativity result, direct from Theorem 2.8, is useful. Proposition 2.2 For any f ∈ L p (D), with p ≥ 2(1 + α)−1 , and α ∈ (0, 1), there holds ∫ b α απ a Ix2 f L2 2 (D) . (a Ixα f )(x) f (x)dx ≥ cos 2 a


Proof Let f¯ be the zero extension of f to R. Then by Theorem 2.1 and the regularity assumption on f (cf. Lemma 2.3) ∫ ∞ ∫ ∞ ∫ b α α (a Ixα f ) f dx = (−∞ Ixα f¯) f¯dx = (−∞ Ix2 f¯)(x I∞2 f¯)dx. a

−∞

−∞

Then by Theorem 2.2(ii), Theorem 2.8 and Parseval’s theorem ∫ ∞ ∫ ∞ ∫ b cos απ 1 2 α −α  2 (a Ix f ) f dx = (iξ) | f (ξ)| dξ = |ξ | −α |  f (ξ)| 2 dξ. 2π −∞ 2π a −∞ 

Then the desired result follows from Theorem 2.8.

Now we discuss mapping properties in Sobolev spaces [JLPR15, Theorems 2.1 and 3.1]. See also [BLNT17] for relevant results in the space of bounded variations. These properties are important for the study of related bvps in Chapter 5. We use the spaces H0s (D) and H0,s L (D), etc. defined in Section A.2.3 in the appendix. The following smoothing property holds for the operators a Ixα and x Ibα . Theorem 2.9 For any s ≥ 0 and 0 < α < 1, the operators a Ixα and x Ibα are bounded s+α from H0s (D) into H0,s+α L (D) and H0,R (D), respectively. Proof It suffices to prove the result for D = (0, 1). The key idea of the proof is to extend f ∈ H0s (D) to a function f¯ ∈ H0s (0, 2) whose moments up to (k − 1)th order vanish with k > α − 12 . To this end, we employ orthogonal polynomials {p0, p1, . . . , pk−1 } with respect to the inner product ·, · defined by ∫ 2  f , g = ((x − 1)(2 − x)) f (x)g(x)dx, 1

where the integer satisfies > s − 12 so that ((x − 1)(2 − x)) pi ∈ H0s (1, 2), i = 0, . . . , k − 1. Then we set w j = γ j ((x − 1)(2 − x)) p j with γ j chosen so that ∫

2

∫ w j p j dx = 1 so that

1

2

1

w j p dx = δ j,,

j, = 0, . . . , k − 1,

where δ j, is the Kronecker symbol. Next we extend both f and w j , j = 0, . . . , k − 1 by zero to (0, 2) by setting fe = f −

k−1 ∫  j=0

1

f p j dx w j .

0

The resulting function fe has vanishing moments for j = 0, . . . , k − 1 and by construction it is in the space H0s (0, 2). Further, obviously the inequality fe L 2 (0,2) ≤ c f L 2 (D) holds, i.e., the extension is bounded in L 2 (D). We denote by f¯e the extension of fe to R by zero. Now for x ∈ (0, 1), we have (0 Ixα f )(x) = (−∞ Ixα f¯e )(x),


By Theorem 2.8, we have F [−∞ Ixα f¯e ](ξ) = (iξ)−α F ( f¯e )(ξ) and hence by Plancherel’s identity (A.9), ∫ 1 α ¯ 2 −∞ Ix fe L 2 (R) = |ξ | −2α |F ( f¯e )(ξ)| 2 dξ. 2π R By Taylor expansion centered at 0, there holds (−iξ)k−1 x k−1 (k − 1)! (−iξ x)k (−iξ x)k+1 (−iξ x)k+2 + + + · · · = (−iξ)k 0 Ixk (e−iξ x ). = k! (k + 1)! (k + 2)!

e−iξ x −1 − (−iξ)x − · · · −

Clearly, there holds |0 Ixk (e−iξ x )| ≤ 0 Ixk (1) = (k!)−1 x k ,

∀x ∈ (0, 1).

Since the first k moments of f¯e vanish, multiplying the identity by f¯e and integrating over R gives ∫ k −iξ x ¯ ) fe (x) dx, F ( f¯e )(ξ) = (−iξ)k 0 I x (e R

and upon noting supp( f¯e ) ⊂ (0, 2), we deduce |F ( f¯e )(ξ)| ≤ 2k (k!)−1 |ξ | k fe L 2 (0,2) ≤ c|ξ | k f L 2 (D) . We then have

∫ 2 −∞ Ixα f¯e H = (1 + |ξ | 2 )α+s |ξ | −2α |F ( f¯e )| 2 dξ α+s (R) R ∫ ∫ −2α+2k |ξ | dξ + c2 |ξ | 2s |F ( f¯e )| 2 dξ ≤ c f H s (D) . ≤c1 f L 2 (D) |ξ |1

The desired assertion follows from this and the inequality 2 0 Ixα f H α+s (D) ≤ −∞ Ixα f¯e H α+s (R) .

This completes the proof of the theorem.



Remark 2.4 The restriction α ∈ (0, 1) in Theorem 2.9 stems from Theorem 2.8: for α ≥ 1, −∞ Ixα f has to be understood as a distribution, due to the slow decay at infinity; see [SKM93, Section 2.8]. Nonetheless, 0 Ixα is bounded from H0,s L (D) to H0,α+s L (D), s (D) to H α+s (D), and repeatedly applying Theorem and x I1α is bounded from H0,R 0,R 2.9 shows that the statement actually holds for any α ≥ 0. We have the following immediate corollary. Corollary 2.1 For γ ≥ 0, the functions (x − a)γ and (b − x)γ belong to H0,αL (D) and α (D), respectively, for any 0 ≤ α < γ + 1 . H0,R 2


2 Fractional Calculus γ

γ

Proof Since (x − a)γ = cγ a Ix (1) and (b − x)γ = cγ x Ib (1), the assertion follows δ from Theorem 2.9 and the fact that 1 ∈ H0,δ L (D) and 1 ∈ H0,R (D) for any δ ∈ [0, 12 ).  The next result gives the mapping property of the operators R0Dxα and RxD1α . Theorem 2.10 For any α ∈ (n − 1, n), the operators RaDxα f and RxDbα f defined for f ∈ C0∞ (D) extend continuously from H0α (Ω) to L 2 (D). Proof We consider the left-sided case, since the right side case follows similarly. For f ∈ C0∞ (R), Theorem 2.8 and Plancherel’s identity (A.9) imply −∞RDxα f L 2 (R) = (2π)− 2 F 1



f L 2 (R) ≤ c f H α (R) .

R α −∞ Dx

Thus, we can continuously extend −∞RDxα to an operator from H α (R) into L 2 (R). Note that for f ∈ C0∞ (D), there holds (with f¯ the zero extension of f to R) R α aDx

f =

R α −∞ Dx

f¯|D .

(2.33)

By definition, f ∈ H0α (D) implies f¯ ∈ H α (R), and hence −∞RDxα f¯ L 2 (R) ≤ c f H0α (D) . Thus, formula (2.33) provides an extension of the operator RaDxα defined on C0∞ (D) to a bounded operator from the space H0α (D) into L 2 (D).  The next result slightly relaxes the condition in Theorem 2.10. α (D) = {v ∈ C n (D) : v (j) (0) = Corollary 2.2 The operator RaDxα f defined for f ∈ C 0 1 α 0, j = 0, . . . , [α − 2 ]} extends continuously from H0, L (D) to L 2 (D). Proof It suffices to consider D = (0, 1). For any f ∈ H0,αL (D), let fe be a bounded extension of f to H0α (0, 2) and f¯e be its extension by zero to R, and then set RD α f = RD α f¯ | . Note that RD α f is independent of the extension f and coincides e a x a x e D a x α (D). Obviously we have with the formal definition of RaDxα f when f ∈ C 0 RaDxα f L 2 (D) ≤ −∞RDxα f¯e L 2 (R) ≤ c f H0,αL (D) . 

This completes the proof.

Last, we give an alternative representation of RaDxα f . It is often called the left-sided Marchaud fractional derivative. Lemma 2.8 If f ∈ C 0,γ (D) for some γ ∈ (0, 1], then for any α ∈ (0, γ), ∫ x α (x − a)−α R α f (x) + D f (x) = (x − s)−α−1 ( f (x) − f (s)) ds. a x Γ(1 − α) Γ(1 − α) a Proof Note the identity 1−α f (x) = a Ix

1 (x − a)1−α f (x) + Γ(2 − α) Γ(1 − α)

∫ a

x

(x − s)−α ( f (s) − f (x)) ds.


The derivative of the integral is given by ∫ x f (s) − f (x) (x − a)1−α

f (x). − α (x − s)−α−1 ( f (s) − f (x)) ds − lim− α s→x (x − s) 1−α a Since f ∈ C 0,γ (D), and α < γ, we deduce lims→x − (x − s)−α ( f (s) − f (x)) = 0. Then collecting the terms gives desired identity.  This result can be used to derive an extremal principle for the Riemann-Liouville fractional derivative at an extremum; see Exercise 2.16.

2.3.2 Djrbashian-Caputo fractional derivative

Now we turn to the Djrbashian-Caputo fractional derivative, often known as the Caputo fractional derivative in the engineering and physics literature. It is one of the most popular fractional derivatives for time-fractional problems due to its ability to handle initial conditions in a physically transparent way. The possibility of this derivative was known from the nineteenth century. Such a form can even be found in a paper by Joseph Liouville himself [Lio32, p. 10, formula (B)], but Liouville, not recognizing its role, disregarded this notion. It was extensively studied by the Armenian mathematician Mkhitar M. Djrbashian (also spelt as Dzhrbashyan, Dzsrbasjan, Džarbašjan, and Džrbašjan, etc., translated from Russian) in the late 1950s and his students (see the monograph [Dzh66] (in Russian) for a summary; see also [Djr93]). The Italian geophysicist Michele Caputo rediscovered this version in 1967 [Cap67] as a tool for understanding seismological phenomena, and later together with Francesco Mainardi in viscoelasticity [CM71a, CM71b].

Definition 2.3 For f ∈ L¹(D) and n − 1 < α ≤ n, n ∈ N, its left-sided Djrbashian-Caputo fractional derivative C_aD_x^α f of order α based at x = a is defined by
$${}^C_aD_x^\alpha f(x) := ({}_aI_x^{n-\alpha} f^{(n)})(x) = \frac{1}{\Gamma(n-\alpha)}\int_a^x (x-s)^{n-\alpha-1} f^{(n)}(s)\,ds,$$
if the integral on the right-hand side exists. Likewise, its right-sided Djrbashian-Caputo fractional derivative C_xD_b^α f of order α based at x = b is defined by
$${}^C_xD_b^\alpha f(x) := (-1)^n({}_xI_b^{n-\alpha} f^{(n)})(x) = \frac{(-1)^n}{\Gamma(n-\alpha)}\int_x^b (s-x)^{n-\alpha-1} f^{(n)}(s)\,ds,$$
if the integral on the right-hand side exists.

The Djrbashian-Caputo fractional derivative is more restrictive than the Riemann-Liouville one, since the definition requires f ∈ AC^n(D), which is always implicitly assumed when using this definition of the fractional derivative. Note that the limits of C_aD_x^α f as α approaches (n − 1)⁺ and n⁻ are not quite as expected. Actually, by integration by parts, for sufficiently regular f, we deduce

$${}^C_aD_x^\alpha f(x) \to ({}_aI_x^1 f^{(n)})(x) = f^{(n-1)}(x) - f^{(n-1)}(a^+) \quad\text{as } \alpha\to(n-1)^+,$$
$${}^C_aD_x^\alpha f(x) \to f^{(n)}(x) \quad\text{as } \alpha\to n^-. \tag{2.34}$$
The first limit is direct, and similarly, for f ∈ C^{n+1}(D), integration by parts gives
$${}^C_aD_x^\alpha f(x) = \frac{1}{\Gamma(n-\alpha)}\int_a^x (x-s)^{n-\alpha-1} f^{(n)}(s)\,ds = \frac{f^{(n)}(a)(x-a)^{n-\alpha}}{\Gamma(n+1-\alpha)} + {}_aI_x^{n+1-\alpha} f^{(n+1)}(x) \to f^{(n)}(x) \quad\text{as } \alpha\to n^-.$$
For n − 1 < α ≤ n, we compute C_aD_x^α (x − a)^γ, γ > n − 1, to gain some insight. The condition γ > n − 1 ensures that the nth derivative inside the integral is integrable, so that the operator aI_x^{n−α} can be applied. Then direct computation shows
$${}^C_aD_x^\alpha (x-a)^\gamma = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1-\alpha)}(x-a)^{\gamma-\alpha}.$$
For the case γ ≤ n − 1, C_aD_x^α (x − a)^γ is generally undefined, except for γ = 0, 1, . . . , n − 1, for which it vanishes identically, i.e.,
$${}^C_aD_x^\alpha (x-a)^j = 0, \quad j = 0, 1, \ldots, n-1.$$
Thus, for α ∈ (0, 1), f(x) ≡ 1 lies in the kernel of C_aD_x^α, exactly as for the first-order derivative, which contrasts sharply with the Riemann-Liouville case. The following result holds; the proof is identical with that of Theorem 2.4 and hence omitted.

Theorem 2.11 If n − 1 < α ≤ n, n ∈ N, f ∈ AC^n(D), and f satisfies C_aD_x^α f(x) = 0 in D, then it can be represented by $f(x) = \sum_{k=0}^{n-1}\frac{f^{(k)}(a^+)}{k!}(x-a)^k$.

Example 2.4 Let us repeat Example 2.2 for the Djrbashian-Caputo fractional derivative. With f(x) = e^{λx}, λ ∈ R, for n − 1 < α ≤ n, n ∈ N,
$${}^C_0D_x^\alpha f(x) = {}^C_0D_x^\alpha\sum_{k=0}^\infty \frac{\lambda^k x^k}{k!} = \sum_{k=0}^\infty \lambda^k\frac{{}^C_0D_x^\alpha x^k}{k!} = \sum_{k=n}^\infty \frac{\lambda^k x^{k-\alpha}}{\Gamma(k-\alpha+1)} = \lambda^n x^{n-\alpha}\sum_{k=0}^\infty \frac{(\lambda x)^k}{\Gamma(k+n-\alpha+1)} = \lambda^n x^{n-\alpha}E_{1,n-\alpha+1}(\lambda x).$$
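For α ∈ (0, 1) the formula of Example 2.4 reads C_0D_x^α e^{λx} = λ x^{1−α}E_{1,2−α}(λx), which can be checked against Definition 2.3 by quadrature. The sketch below assumes SciPy; caputo_exp and ml are our own helpers.

```python
import math
from scipy.integrate import quad

def caputo_exp(alpha, lam, x):
    """C_0D_x^alpha e^{lam*x} for alpha in (0,1), from Definition 2.3 by quadrature."""
    val, _ = quad(lambda s: (x - s) ** (-alpha) * lam * math.exp(lam * s), 0.0, x)
    return val / math.gamma(1 - alpha)

def ml(a, b, z, K=80):
    """Truncated Mittag-Leffler series E_{a,b}(z)."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(K))

alpha, lam, x = 0.6, 1.2, 0.9
print(caputo_exp(alpha, lam, x), lam * x ** (1 - alpha) * ml(1.0, 2.0 - alpha, lam * x))
```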

This result is completely different from that in Example 2.2, except in the case α = 1. For the operator C_aD_x^α, neither the composition rule nor the product rule holds, and there is also no known analogue of Theorem 2.5. We leave the details to the exercises. Generally, we have (R_aD_x^α f)(x) ≠ (C_aD_x^α f)(x), even when both fractional derivatives are defined: for f(x) ≡ 1, C_aD_x^α f(x) = 0, but R_aD_x^α f(x) is nonzero by (2.23). Nonetheless, they are closely related to each other.


Theorem 2.12 Let n − 1 < α ≤ n, n ∈ N, and f ∈ AC^n(D). Then for the left-sided fractional derivatives, there holds
$$({}^R_aD_x^\alpha f)(x) = ({}^C_aD_x^\alpha f)(x) + \sum_{j=0}^{n-1}\frac{(x-a)^{j-\alpha}}{\Gamma(j-\alpha+1)}f^{(j)}(a^+), \tag{2.35}$$
and likewise for the right-sided fractional derivatives, there holds
$$({}^R_xD_b^\alpha f)(x) = ({}^C_xD_b^\alpha f)(x) + \sum_{j=0}^{n-1}\frac{(-1)^j(b-x)^{j-\alpha}}{\Gamma(j-\alpha+1)}f^{(j)}(b^-).$$

Proof Let D denote taking the usual first-order derivative. Then we claim that for β β any β > 0, f ∈ AC(D), there holds (D a Ix − a Ix D) f (x) = Γ(β)−1 f (a)(x − a)β−1 . Indeed, by the fundamental theorem of calculus, a Ix1 D f (x) = f (x) − f (a), i.e., β f (x) = a Ix1 D f (x) + f (a). Thus, by applying the operator a Ix , β > 0, to both sides β β+1 β and using Theorem 2.1, we deduce a Ix f (x) = a Ix D f (x) + f (a)(a Ix 1)(x), which upon differentiation yields the desired claim. The case α ∈ (0, 1) is trivial by setting β = 1 − α in the claim. Now for α ∈ (1, 2), the claim with β = 2 − α yields R α aDx

(x − a)1−α Γ(2 − α) 1−α x −α (x − a) + f (a) , = a Ix2−α D2 f (x) + D f (a) Γ(2 − α) Γ(1 − α)

f (x) = D2 a Ix2−α f (x) = D



2−α a I x D f (x)

+ f (a)

thereby showing (2.37) for n = 2. The general case follows in a similar manner.  Together with the formula (2.22), we have n − 1 < α ≤ n, C α a Dx

f (x) = RaDxα ( f − Tn−1 f )(x),

with Tn−1 f (x) =

n−1  (x − a)k k=0

CD α a x

k!

f (k) (a+ ).

RD α a x

Hence f can be regarded as f with an initial correction of its Taylor expansion Tn−1 f (x) of order n − 1 (at the base point x = a). This is a form of regularization to remove the singular behavior at the base point x = a, cf. (2.23). Very often, it is also employed as the defining identity for the Djrbashian-Caputo α fractional derivative C a Dx f (see, e.g., [EK04, Die10]), and below it is termed as the regularized Djrbashian-Caputo fractional derivative. Definition 2.4 For n − 1 < α ≥ n, n ∈ N, andf ∈ C n−1 (D) with a Ixn−α ( f − α∗ Tn−1 f )(x) ∈ AC n (I), the regularized Djrbashian-Caputo fractional derivative C a Dx f is defined by C α∗ R α (2.36) a Dx f (x) = aDx ( f − Tn−1 f )(x),


and similarly one can define the right-sided version by C α∗ x Db

f (x)

= RxDbα

 f (x) −

n−1  (−1)k (b − x)k k=0

k!

f

(k)





(b ) .

α∗ C α n C α If n − 1 < α < n, n ∈ N, then C a Dx f = a Dx f for f ∈ AC (D). Generally, a Dx f C α∗ and a Dx f can be different, and the latter requires less regularity assumption on f . We shall use the two definitions interchangeably. The relations also imply C α a Dx

f (x) = RaDxα f (x),

if f (j) (a) = 0, j = 0, 1, . . . , n − 1.

(2.37)

Note that there are several different proposals to relax the regularity assumption, e.g., operator interpolation [GLY15, KRY20] and convolution groups [LL18a]. α∗ The next result summarizes several properties of C a Dx [Web19a, Section 4]. (i) α∗ is the left inverse of I α for continuous functions. CD α∗ generally D shows that C a x a x a x do not commutes, but there is a positive result due to [Die10, Lemma 3.13], given in (ii), for which the existence of is important, without which the assertion can fail. Proposition 2.3 The following statements hold. α∗ α (i) For n − 1 < α < n, n ∈ N, and f ∈ C(D), C a Dx a I x f (x) = f (x) for all x ∈ D. ∞ C α∗ α Similarly, if f ∈ L (D), then a Dx a Ix f = f a.e. D. (ii) Let f ∈ C k (D) for some k ∈ N. Moreover let α, β > 0 be such that there exists some ∈ N with ≤ k and β, α + β ∈ [ − 1, ]. Then C α∗ C β∗ a Dx a Dx

α+β∗

f = C a Dx

(iii) For n − 1 < α < n, n ≥ 2, if f ∈ AC(D), then whenever both fractional derivatives exist.

f. CD α∗ a x

f (x) =

CD α−1∗ f (x), a x

Proof (i) Since f ∈ C(D), a Ixα f is continuous, and (a Ixα f )(j) (a) = 0, j = 0, 1, . . . , n− 1. Thus, C α∗ α a Dx a I x

f (x) = RaDxα (a Ixα f − Tn−1 (a Ixα f ))(x) = (a Ixn−α a Ixα f )(n) (x) = (a Ixn f )(n) (x) = f (x).

This identity holds for any x ∈ D, since f ∈ C(D). The argument also works for f ∈ L ∞ (D) with a.e. x ∈ D. Note that requirement f ∈ L ∞ (D) can be relaxed to 1 . f ∈ L p (D), with p > α−n+1 (ii) The statement is trivial in the case β = − 1 and α + β = . So it suffices to treat the remaining cases. Note that the assumption implies α ∈ (0, 1). There are three possible cases (a) := α + β ∈ N, (b) β ∈ N and (c) − 1 < β < β + α < . For (a), β∗ we have α = n − β ∈ [0, 1). Since f ∈ C k (D), k ≥ , C a Dx f (a) = 0, and thus C α∗ C β∗ a Dx a Dx β∗

β∗

α+β∗

R α α () f = RaDxα C = f () = C a Dx f = aDx a I x f a Dx α+β∗

α∗ C C α∗ (β) = I 1−α f (β+1) = CD For (b), C a x a Dx a Dx f = a Dx f a x

f.

f . For (c), there holds

2.3 Fractional Derivatives C α∗ C β∗ a Dx a Dx

45 β∗

−β ()

R α f = RaDxα C a Dx f = aDx a I x −(β+α) ()

= (a Ix1 a Ix

f

−β ()

= (a Ix1−α a Ix

α+β∗

) = C a Dx

f

f

)

f.

α∗ (iii) By the definition of C a Dx f , the assertion follows by C α∗ n−α ( f − Tn−1 f ))(n) (x) a Dx f (x) = ( a I x =(a Ixn−α a Ix1 ( f − Tn−2 f ))(n) (x) = (a Ix1 a Ixn−α ( f

α−1∗

=(a Ixn−α ( f − Tn−2 f ))(n−1) (x) = C f (x). a Dx

− Tn−2 f ))(n) (x) 

This completes the proof of the proposition.

α The next result gives the fundamental theorem of calculus for C a Dx . (i) shows that C α α α−n+1 n f ∈ AC (D). However, it when α  N, a Dx provides a left inverse to a Ix , if a Ix is not a left inverse when α ∈ N. The conditions in Theorem 2.13 are more stringent than that in Theorem 2.6; this is to be expected from its more restrictive definition.

Theorem 2.13 Let α > 0, n − 1 < α < n, n ∈ N. Then the following statements hold. (i) If f ∈ L 1 (D) with a Ixα−n+1 f ∈ AC(D) and a Ixα−n+1 f (a) = 0, then C α α a Dx a I x

f = f,

a.e. D.

(ii) If f ∈ AC n (D), then αC α a I x a Dx

f = f − Tn−1 f ,

a.e. D.

If f ∈ C n (D), then the identity holds for all x ∈ D. Proof (i) Since f ∈ L 1 (D) with a Ixα−n+1 f ∈ AC(D), by Lemma 2.4, there exists a ϕ f ∈ L 1 (D) such that f = a Ixn−α ϕ f . Then by Theorem 2.1, α a Ix

f = a Ixα a Ixn−α ϕ f = a Ixn ϕ f ∈ AC n (D).

Differentiating both sides n times gives assertion (i). (ii) If f ∈ AC n (D), then by Theorem 2.1, we deduce α α n−α (n) (a Ixα C f )(x) = (a Ixn f (n) )(x) a Dx ) f (x) = ( a I x a I x

from which assertion (ii) follows. The proof for f ∈ C n (D) follows similarly.



The following lemma gives the Laplace transform of C_0D_x^α f.

Lemma 2.9 Let α > 0, n − 1 < α ≤ n, n ∈ N, and let f ∈ C^n(R₊) with |f(x)| ≤ c e^{p₀x} for large x, for some p₀ ∈ R, and f ∈ AC^n[0, b] for any b, such that the Laplace transforms of f and f^{(n)} exist, and lim_{x→∞} f^{(k)}(x) = 0, k = 0, 1, . . . , n − 1. Then the following relation holds for ℜ(z) > p₀:
$$L[{}^C_0D_x^\alpha f](z) = z^\alpha L[f](z) - \sum_{k=0}^{n-1} z^{\alpha-k-1}f^{(k)}(0).$$
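As a small numerical illustration of Lemma 2.9 (SciPy assumed; the helper is ours): for f(x) = x² and α ∈ (0, 1) one has C_0D_x^α x² = 2x^{2−α}/Γ(3−α), L[x²](z) = 2/z³ and f(0) = 0, so both sides equal 2z^{α−3}.

```python
import math
import numpy as np
from scipy.integrate import quad

def laplace(f, z):
    """Numerical Laplace transform L[f](z)."""
    val, _ = quad(lambda x: math.exp(-z * x) * f(x), 0.0, np.inf)
    return val

alpha, z = 0.6, 2.0
lhs = laplace(lambda x: 2.0 * x ** (2 - alpha) / math.gamma(3 - alpha), z)  # L[C_0D^alpha x^2]
rhs = z ** alpha * laplace(lambda x: x ** 2, z) - z ** (alpha - 1) * 0.0    # z^a L[f] - z^(a-1) f(0)
print(lhs, rhs, 2.0 * z ** (alpha - 3))
```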

α −(n−α) α−n n z L[ C D f ](z) = z L[g](z) = z L[ f ](z) − z n−1−k f (k) (0) x 0 k=0

= z α L[ f ](z) −

n−1 

z α−1−k f (k) (0),

k=0



which shows the desired formula.

α The classical Djrbashian-Caputo fractional derivative C a Dx f is defined in terms (n) n of f (x), requiring f ∈ AC (D). One can give an alternative representation using f (n−1) (x), analogous to Lemma 2.8. The right-hand side is well defined for any f ∈ C n−1,γ (D). This representation appeared several times; see, e.g., [ACV16].

Lemma 2.10 Let n − 1 < α < n, n ∈ N. If f ∈ C n−1,γ (D) ∩ AC n (D), γ ∈ (0, 1], then for any α − n + 1 ∈ (0, γ) ∫ f (n−1) (x) − f (n−1) (a) α − n + 1 x f (n−1) (x) − f (n−1) (s) C α D f (x) = + ds. a x Γ(n − α) a Γ(n − α)(x − a)α−n+1 (x − s)α−n+2 α Proof Applying integration by parts to the definition of C a Dx f gives ∫ x d C α Γ(n − α) a Dx f (x) = (x − s)n−α−1 ( f (n−1) (s) − f (n−1) (x)) ds ds a ∫ x (n−1) f (n−1) (s) − f (n−1) (x) s=x f (s) − f (n−1) (x) = + (n − α − 1) ds.  s=a (x − s)α−n+1 (x − s)n−α−2 a

Since f ∈ C n−1,γ (D), with γ > α−n+1, lims→x − (x−s)n−α−1 ( f (n−1) (s)− f (n−1) (x)) = 0. This shows the assertion.  The next lemma due to Alikhanov [Ali10, Lemma 1] is very useful. In the case α = 1, the identity holds trivially: f f = 12 ( f 2 ) . Lemma 2.11 For α ∈ (0, 1), and f ∈ AC(D), there holds α f (x) C a Dx f (x) ≥

Proof We rewrite the inequality as

1C α 2 2 a Dx f (x).


  α 1C α 2 Γ(1 − α) f (x) C a Dx f (x) − 2 a Dx f (x) ∫ x ∫ x (x − s)−α f (s) ds − (x − s)−α f (s) f (s) ds = f (x) a a ∫ x ∫ x ∫ x −α

−α

= (x − s) f (s)( f (x) − f (s)) ds = (x − s) f (s) f (η)dηds a a s ∫ x

∫ η f (η) (x − s)−α f (s) ds dη =: I. = a

a

Then by changing the integration order and integration by parts, we have ∫ η ∫ x f (η) (x − η)α (x − s)−α f (s) dsdη I= (x − η)α a a 2 ∫ η ∫ 1 x α d −α

= (x − η) (x − s) f (s) ds dη 2 a dη a 2 ∫ η ∫ α x = (x − η)α−1 (x − s)−α f (s) ds dη ≥ 0, 2 a a which completes the proof of the lemma.



The next result extends Lemma 2.11 to convex energies [LL18b, Lemma 2.4], where ·, · denotes the duality pairing between a Banach space X and its dual X . Lemma 2.12 If f ∈ C 0,γ (D; X), 0 < γ < 1 and f → E( f ) is a C 1 convex function on X. Then for α ∈ (0, γ), C α a Dx E( f (x))

α ≤ ∇ f E( f (x)), C a Dx f (x).

Proof By the convexity of E( f ), we have E( f (x)) − E( f0 ) ≤ ∇u E( f (x)), f (x) − f0  for any f0 ∈ X. This, Lemma 2.10 and the assumption f ∈ C 0,γ (D) imply ∫ x

E( f (x)) − E( f (a)) E( f (x)) − E( f (s)) 1 C α D E( f (x)) = + α ds a x Γ(1 − α) (x − a)α (x − s)α+1 a α ≤ ∇ f E( f (x)), C a Dx f (x). This completes the proof of the lemma.



Below we discuss a few fractional versions of properties from calculus. First, for a α + smooth f , the starting value C a Dx f (a ) is implied by the smoothness. This property has far-reaching impact on the study of fdes: if one assumes the solution is smooth, the initial- or boundary-condition can only be trivial. This result is well known, greatly popularized by Stynes [Sty16, Lemma 2.1] in the context of subdiffusion. Lemma 2.13 Let γ ∈ (0, 1] and α ∈ (n − 1, n − 1 + γ), n ∈ N. Then for f ∈ α C n−1,γ (D) ∩ AC n (D), limx→a+ C a Dx f (x) = 0.


Proof We prove only for α ∈ (0, 1). Since f ∈ C 0,γ (D) ∩ AC(D), | f (x) − f (s)| ≤ L|x − s|γ for any x, s ∈ D, and thus by Lemma 2.10, for any x ∈ (a, b],  ∫ x | f (x) − f (s)| | f (x) − f (a)| 1 α |C D f (x)| ≤ + α ds a x Γ(1 − α) (x − a)α (x − s)α+1 a  ∫ x L ≤ (x − s)γ−α−1 ds (x − a)γ−α + α Γ(1 − α) a L γ ≤ (x − a)γ−α → 0 as x → a+ . Γ(1 − α) γ − α This completes the proof of the lemma.



The next result characterizes the monotonicity of f [Web19a, Proposition 7.2]. Proposition 2.4 Let 0 < α < 1. Suppose that f ∈ C(I) with a Ix1−α u ∈ AC(I). Then the following statements hold. α∗ (i) If f is nondecreasing, then C a Dx f (x) ≥ 0 for a.e. x ∈ D. C α∗ C α∗ (ii) If a Dx f ∈ C(D) and a Dx f (x) ≥ 0 for every x ∈ D, then f (x) ≥ f (a) for α∗ every x ∈ D. If C a Dx f (x) > 0 for x ∈ D, then f (x) > f (a) for every x > a. α∗ 1−α

1−α Proof (i) Let fa ≡ f (a) in D. Then C a Dx f (x) = ( a I x ( f − fa )) (x) and a I x ( f − fa )(x) is nondecreasing in x, by Theorem 2.2(v), since f − fa is nonnegative and α∗ nondecreasing. Thus, C a Dx f ≥ 0 a.e. C α∗ (ii) Let g(x) = a Dx f . By assumption, f (x) = f (a) + (a Ixα g)(x), and thus f (x) ≥ f (a), since g(x) ≥ 0 for a.e. x ∈ D. The strict inequality follows similarly. 

The next result gives an extremal principle for the Djrbashian-Caputo derivative; see [Die16, Theorem 2.2] for (iii). It partially generalizes Fermat's theorem in calculus: at every local extremum $x_0$ of $f$, $f'(x_0)=0$. However, one cannot determine the local behavior of $f$ near $x=x_0$ from ${}^C_aD_x^\alpha f(x_0)$, and (ii) gives a partial converse to (i). If $f\in AC(D)$, then ${}^C_aD_x^\alpha f={}^C_aD_x^{\alpha*}f$, and hence the statements in Proposition 2.4 hold also for ${}^C_aD_x^\alpha f$, and (iii) gives a partial converse to this statement. A slightly weaker version of the extremal principle was first given by Luchko [Luc09, Theorem 1]; see also [AL14, Theorem 2.1] for an analogue in the Riemann-Liouville case.

Proposition 2.5 Let $f\in C^{0,\gamma}(D)\cap AC(D)$, $0<\alpha<\gamma\le1$.
(i) If $f(x)\le f(x_0)$ for all $x\in[a,x_0]$ for some $x_0\in(a,b)$, then
$$ {}^C_aD_x^\alpha f(x_0)\ge\frac{(x_0-a)^{-\alpha}}{\Gamma(1-\alpha)}(f(x_0)-f(a))\ge0. $$
(ii) If $f(x)\le f(x_0)$ for all $a\le x\le x_0\le b$ and ${}^C_aD_x^\alpha f(x_0)=0$, then $f(x)=f(x_0)$ for all $x\in[a,x_0]$.
(iii) If $f\in C^1(D)$ satisfies ${}^C_aD_x^\alpha f(x)\ge0$ for all $x\in D$ and all $\alpha\in(\alpha_0,1)$ for some $\alpha_0\in(0,1)$, then $f$ is monotonically increasing.


Proof (i) By Lemma 2.10, for $f\in C^{0,\gamma}(D)$, there holds for any $x\in(a,b]$,
$$ {}^C_aD_x^\alpha f(x)=\frac{1}{\Gamma(1-\alpha)}\Big(\frac{f(x)-f(a)}{(x-a)^\alpha}+\alpha\int_a^x\frac{f(x)-f(s)}{(x-s)^{\alpha+1}}\,ds\Big). $$
Since $f(x_0)-f(s)\ge0$ for all $a\le s\le x_0$, the integral is nonnegative.
(ii) Under the given conditions, we obtain
$$ \frac{f(x_0)-f(a)}{(x_0-a)^\alpha}=0 \quad\text{and}\quad \int_a^{x_0}(x_0-s)^{-\alpha-1}(f(x_0)-f(s))\,ds=0. $$
Since $f$ is continuous, it follows that $f(x_0)-f(s)=0$ for all $s\in[a,x_0]$.
(iii) By assumption, $0\le{}^C_aD_x^\alpha f(x)={}_aI_x^{1-\alpha}f'(x)$. Since $f'$ is continuous, we apply (2.34) to find the limit of the right-hand side as $\alpha\to1^-$, which yields
$$ 0\le\lim_{\alpha\to1^-}{}^C_aD_x^\alpha f(x)=\lim_{\alpha\to1^-}{}_aI_x^{1-\alpha}f'(x)=f'(x). $$
Thus $f'$ is nonnegative on $(a,b]$. The continuity of $f'$ on the closed interval $D$ implies $f'(a)\ge0$. Hence $f'$ is nonnegative on $D$, and $f$ is monotonically increasing. □

The extremal property can be generalized to the case $\alpha\in(1,2)$ [Al-12, Theorem 2.1]. It generalizes the criterion that at every local minimum $x_0$ of a smooth $f$, the second derivative is nonnegative, i.e., $f''(x_0)\ge0$.

Lemma 2.14 Let $\alpha\in(1,2)$, and let $f\in C^2(D)$ attain its minimum at $x_0\in D$. Then
$$ {}^C_aD_x^\alpha f(x_0)\ge\frac{(x_0-a)^{-\alpha}}{\Gamma(2-\alpha)}\big((\alpha-1)(f(a)-f(x_0))-(x_0-a)f'(a)\big). $$

Proof Let $g(x)=f(x)-f(x_0)$. Then $g(x)\ge0$ on $[a,x_0]$, $g(x_0)=g'(x_0)=0$, $g''(x_0)\ge0$ and ${}^C_aD_x^\alpha g(x)={}^C_aD_x^\alpha f(x)$. Then integration by parts yields
$$ \Gamma(2-\alpha)\,{}^C_aD_x^\alpha g(x_0)=(x_0-s)^{1-\alpha}g'(s)\big|_a^{x_0}-(\alpha-1)\int_a^{x_0}(x_0-s)^{-\alpha}g'(s)\,ds. $$
Since $g(x_0)=g'(x_0)=0$ and $g''(x_0)$ is bounded, we have
$$ \lim_{x\to x_0^-}(x_0-x)^{1-\alpha}g'(x)=\lim_{x\to x_0^-}(x_0-x)^{-\alpha}g(x)=0, $$
and thus
$$ \Gamma(2-\alpha)\,{}^C_aD_x^\alpha g(x_0)=-(x_0-a)^{1-\alpha}g'(a)-(\alpha-1)\int_a^{x_0}(x_0-s)^{-\alpha}g'(s)\,ds. $$
Integrating by parts again, since $g(x)\ge0$ on $[a,x_0]$, yields
$$ \int_a^{x_0}(x_0-s)^{-\alpha}g'(s)\,ds=-(x_0-a)^{-\alpha}g(a)-\alpha\int_a^{x_0}(x_0-s)^{-\alpha-1}g(s)\,ds\le-(x_0-a)^{-\alpha}g(a). $$
Last, combining the preceding relations yields
$$ \Gamma(2-\alpha)\,{}^C_aD_x^\alpha g(x_0)\ge-(x_0-a)^{1-\alpha}f'(a)+(\alpha-1)(x_0-a)^{-\alpha}(f(a)-f(x_0)), $$
from which the desired result directly follows. □

The result is weaker than ${}^C_aD_x^\alpha f(x_0)\ge0$, which holds in the classical case: at a local minimum $x_0$, $f''(x_0)\ge0$. Unfortunately, this is generally not true for ${}^C_aD_x^\alpha f(x_0)$, $\alpha\in(1,2)$.

Example 2.5 Consider the function $f(x)=x(x-\tfrac12)(x-1)$ on $[0,1]$. Clearly, $f(x)$ attains its absolute minimum at $x_0=\frac{3+\sqrt3}{6}<1$. For any $\alpha\in(1,2)$, we have
$$ {}^C_0D_x^\alpha f(x)=\frac{\Gamma(4)}{\Gamma(4-\alpha)}x^{3-\alpha}-\frac32\frac{\Gamma(3)}{\Gamma(3-\alpha)}x^{2-\alpha}. $$
Thus ${}^C_0D_x^{1.1}f(x_0)=-0.427<0$, i.e., at the global minimum $x_0$, ${}^C_aD_x^\alpha f(x_0)\ge0$ does not hold for all $\alpha\in(1,2)$. See Fig. 2.1 for an illustration of $f$ and ${}^C_0D_x^\alpha f$.

Fig. 2.1 The plot of $f$ and ${}^C_0D_x^\alpha f$ at $\alpha$ = 1.1, 1.5 and 1.9.

Next we generalize the mean value theorem: if $f\in C^1(D)\cap C(\overline D)$, then $\frac{f(b)-f(a)}{b-a}=f'(\xi)$ for some $\xi\in(a,b)$. One fractional version reads [Die12, Theorem 2.3]:

Theorem 2.14 Let $n-1<\alpha<n$, $n\in\mathbb{N}$, and $f\in C^n(D)$ with ${}^C_aD_x^\alpha f\in C(D)$. Then there exists some $\xi\in(a,b)$ such that
$$ \frac{f(b)-\sum_{k=0}^{n-1}\frac{(b-a)^k}{k!}f^{(k)}(a)}{(b-a)^\alpha}=\frac{{}^C_aD_x^\alpha f(\xi)}{\Gamma(\alpha+1)}. $$


α Proof By Theorem 2.13(ii), there holds f (b) − (Tn−1 f )(b) = a Ixα C a Dx f (b), and by α f is continuous. Thus applying Theorem 2.3 to the right-hand side, D assumption, C a x α  with C a Dx f in place of f , the claim follows.

The existence of a Djrbashian-Caputo fractional derivative implies the existence of a limit at $x=a$ in the sense of a Lebesgue point [LL18b, Corollary 2.16]. The limit is sometimes called the assignment of the initial value for $f$.

Theorem 2.15 For $\alpha\in(0,1)$, if ${}^C_aD_x^{\alpha*}f\in L^1_{loc}(D)$, then $a$ is a Lebesgue point, i.e., $f(a^+)=c$ for some $c\in\mathbb{R}$ in the sense of
$$ \lim_{x\to a^+}(x-a)^{-1}\int_a^x|f(s)-c|\,ds=0. $$

Proof With the relation $f(x)-c={}_aI_x^\alpha\,{}^C_aD_x^{\alpha*}f$, we deduce
$$ \int_a^x|f(s)-c|\,ds\le\frac{1}{\Gamma(\alpha)}\int_a^x\int_a^s(s-\xi)^{\alpha-1}|{}^C_aD_x^{\alpha*}f(\xi)|\,d\xi\,ds=\frac{1}{\Gamma(\alpha)}\int_a^x|{}^C_aD_x^{\alpha*}f(\xi)|\int_\xi^x(s-\xi)^{\alpha-1}\,ds\,d\xi. $$
Since $\int_\xi^x(s-\xi)^{\alpha-1}\,ds=\alpha^{-1}(x-\xi)^\alpha$,
$$ (x-a)^{-1}\int_a^x|f(s)-c|\,ds\le\frac{1}{(x-a)\Gamma(1+\alpha)}\big\|(x-\xi)^\alpha\big\|_{L^{\frac{1}{1-\alpha}}(a,x)}\big\|{}^C_aD_x^{\alpha*}f\big\|_{L^{\frac1\alpha}(a,x)}=\Gamma(1+\alpha)^{-1}(1-\alpha)^{1-\alpha}\big\|{}^C_aD_x^{\alpha*}f\big\|_{L^{\frac1\alpha}(a,x)}\to0, $$
as $x\to a^+$, due to the absolute continuity of the Lebesgue integral. □



2.3.3 Grünwald-Letnikov fractional derivative

The derivative $f'(x)$ of a function $f:D\to\mathbb{R}$ at a point $x\in D$ is defined by
$$ f'(x)=\lim_{h\to0}h^{-1}\Delta_hf(x), \quad\text{with } \Delta_hf(x)=f(x)-f(x-h). $$
Similarly, one can construct higher-order derivatives directly by higher-order backward differences. By induction on $k$, one can verify that
$$ \Delta_h^kf(x)=\sum_{j=0}^kC(k,j)(-1)^jf(x-jh) \quad\text{for } k\in\mathbb{N}_0. $$
An induction on $k$ shows the following identity
$$ \Delta_h^kf(x)=\int_0^h\cdots\int_0^hf^{(k)}(x-s_1-\cdots-s_k)\,ds_1\cdots ds_k, \qquad (2.38) $$
and thus by the mean value theorem, we obtain that for $f\in C^k(D)$:
$$ f^{(k)}(x)=\lim_{h\to0}h^{-k}\Delta_h^kf(x). \qquad (2.39) $$

It is natural to ask whether one can define a fractional derivative (or integral) in a similar manner, without resorting to integer-order derivatives and integrals. This line of pursuit leads to the Grünwald-Letnikov definition of a fractional derivative, introduced by Anton Karl Grünwald from Prague in 1867 [Grü67], and independently by Aleksey Vasilievich Letnikov in Moscow in 1868 [Let68]. It proceeds as in the case of the Riemann-Liouville fractional integral, by replacing the integer $k$ with a real number $\alpha$. Specifically, one may define the fractional backward difference of order $\alpha\in\mathbb{R}$, denoted by $\Delta_{h,k}^\alpha f$, by
$$ \Delta_{h,k}^\alpha f(x)=\sum_{j=0}^kC(\alpha,j)(-1)^jf(x-jh). $$
In view of the identity (2.39), given $x$ and $a$, we therefore define
$$ {}^{GL}_{\ \,a}D_x^\alpha f(x)=\lim h^{-\alpha}\Delta_{h,k}^\alpha f(x), $$
where the limit is obtained by sending $k\to\infty$ and $h\to0^+$ while keeping $h=\frac{x-a}{k}$, so that $kh=x-a$ is constant. The next result shows that, for negative order, the Grünwald-Letnikov fractional derivative coincides with the Riemann-Liouville fractional integral.

Theorem 2.16 For $\alpha>0$ and $f\in C(D)$, there holds
$$ {}^{GL}_{\ \,a}D_x^{-\alpha}f(x)={}_aI_x^\alpha f(x). $$

k  j=0

1  B j Ak, j Γ(α) j=0 k

C( j + α − 1, j) f (x − j h) =

with B j = Γ(α) j 1−α C( j + α − 1, j) and Ak, j = h( j h)α−1 f (x − j h). By the definition of C( j + α − 1, j) and Wendel’s limit (2.8), lim B j = lim

j→∞

j→∞

Meanwhile, for f ∈ C(D),

Γ(α) Γ( j + α) Γ( j + α) = lim = 1. j α−1 Γ( j + 1)Γ(α) j→∞ Γ( j + 1) j α−1

2.3 Fractional Derivatives

lim

k→∞

k 

53

Ak, j = lim

j=0

k 

h→0

h( j h)

α−1

∫ f (x − j h) =

j=0

x

a

(x − s)α−1 f (s)ds. 

Now the desired assertion follows from the property of limit.

The next result shows that the Grünwald-Letnikov fractional derivative of order $\alpha>0$ coincides with the Riemann-Liouville one.

Theorem 2.17 If $n-1<\alpha<n$, $n\in\mathbb{N}$, and $f\in C^n(D)$, then
$$ {}^{GL}_{\ \,a}D_x^\alpha f(x)=\sum_{j=0}^{n-1}f^{(j)}(a)\frac{(x-a)^{j-\alpha}}{\Gamma(j+1-\alpha)}+\int_a^x\frac{(x-s)^{n-\alpha-1}}{\Gamma(n-\alpha)}f^{(n)}(s)\,ds. $$

Proof It follows from the identity $C(\alpha,j)=C(\alpha-1,j)+C(\alpha-1,j-1)$ that
$$ \Delta_{h,k}^\alpha f(x)=\sum_{j=0}^k(-1)^jC(\alpha-1,j)f(x-jh)+\sum_{j=1}^k(-1)^jC(\alpha-1,j-1)f(x-jh)=(-1)^kC(\alpha-1,k)f(a)+\sum_{j=0}^{k-1}(-1)^jC(\alpha-1,j)\Delta_hf(x-jh). $$
Repeating the procedure $n$ times yields
$$ \Delta_{h,k}^\alpha f(x)=\sum_{j=0}^{n-1}(-1)^{k-j}C(\alpha-j-1,k-j)\Delta_h^jf(a+jh)+\sum_{j=0}^{k-n}(-1)^jC(\alpha-n,j)\Delta_h^nf(x-jh). $$
It remains to find the limits of the two terms. First, by taking the limit as $h\to0^+$, with $kh=x-a$, and using the identity (2.11), we deduce
$$ \lim h^{-\alpha}(-1)^{k-j}C(\alpha-j-1,k-j)\Delta_h^jf(a+jh)=\lim\frac{C(k-\alpha,k-j)}{(k-j)^{j-\alpha}}\,\Big(\frac{k-j}{k}\Big)^{j-\alpha}(kh)^{j-\alpha}\,\frac{\Delta_h^jf(a+jh)}{h^j}=\frac{(x-a)^{j-\alpha}}{\Gamma(j+1-\alpha)}f^{(j)}(a^+), $$
where the last identity follows from
$$ \lim_{k\to\infty}\frac{C(k-\alpha,k-j)}{(k-j)^{j-\alpha}}=\lim_{k\to\infty}\frac{\Gamma(k-\alpha+1)(k-j)^{\alpha-j}}{\Gamma(k-j+1)\Gamma(j-\alpha+1)}=\frac{1}{\Gamma(j+1-\alpha)}, $$
cf. Wendel's limit (2.8). Using the identity (2.11), we have
$$ h^{-\alpha}\sum_{j=0}^{k-n}(-1)^jC(\alpha-n,j)\Delta_h^nf(x-jh)=\sum_{j=0}^{k-n}B_jA_{k,j} $$
with $B_j=C(j+n-\alpha-1,j)j^{-n+\alpha+1}$ and $A_{k,j}=h(jh)^{n-\alpha-1}\frac{\Delta_h^nf(x-jh)}{h^n}$. Then a direct computation with Wendel's limit (2.8) gives
$$ \lim_{j\to\infty}B_j=\lim_{j\to\infty}\frac{\Gamma(j+n-\alpha)}{\Gamma(j+1)\Gamma(n-\alpha)}j^{-n+\alpha+1}=\frac{1}{\Gamma(n-\alpha)}. $$
Meanwhile, we have
$$ \lim_{k\to\infty}\sum_{j=0}^{k-n}A_{k,j}=\int_a^x(x-s)^{n-\alpha-1}f^{(n)}(s)\,ds. $$
Combining these identities with the basic property of limits yields the assertion. □



By Theorems 2.16 and 2.17, the Grünwald-Letnikov fractional integral and derivative coincide with their Riemann-Liouville counterparts, under a suitable regularity condition on $f$ that is slightly stronger than that needed for the Riemann-Liouville definitions. Nonetheless, the Grünwald-Letnikov definition, when fixed at a finite $h>0$, serves as one simple way to construct numerical approximations of the Riemann-Liouville fractional integrals and derivatives. See Fig. 2.2 for an illustration on $f(x)=x^3$. Generally, the accuracy of the approximation is at best first-order $O(h)$.

Fig. 2.2 The Grünwald-Letnikov approximation of ${}^R_0D_x^\alpha f$ for $f(x)=x^3$ over $[0,1]$, with $\alpha=0.5$.
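To make the remark concrete, the following short Python sketch (an illustration of the idea, not code from the book) evaluates the truncated Grünwald-Letnikov sum at a fixed step $h=(x-a)/k$ for $f(x)=x^3$ and compares it with the exact value $\frac{\Gamma(4)}{\Gamma(4-\alpha)}x^{3-\alpha}$; the weight recursion $w_j=w_{j-1}\frac{j-1-\alpha}{j}$ is a standard way of generating $(-1)^jC(\alpha,j)$, and all function names are illustrative.

```python
# Minimal sketch: Grünwald-Letnikov approximation of the Riemann-Liouville
# derivative R_0 D_x^alpha f for f(x) = x^3, compared with the exact value.
import math

def gl_weights(alpha, k):
    # w_j = (-1)^j C(alpha, j), generated by the recursion w_j = w_{j-1}(j-1-alpha)/j
    w = [1.0]
    for j in range(1, k + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def gl_derivative(f, alpha, x, a=0.0, k=1000):
    """Approximate (GL_a D_x^alpha f)(x) with step h = (x - a)/k."""
    h = (x - a) / k
    w = gl_weights(alpha, k)
    return sum(w[j] * f(x - j * h) for j in range(k + 1)) / h**alpha

if __name__ == "__main__":
    alpha, x = 0.5, 1.0
    exact = math.gamma(4) / math.gamma(4 - alpha) * x**(3 - alpha)
    for k in (10, 100, 1000):
        approx = gl_derivative(lambda t: t**3, alpha, x, k=k)
        print(f"k={k:5d}: GL={approx:.6f}, exact={exact:.6f}, error={abs(approx-exact):.2e}")
```

The observed error decreases roughly in proportion to $h$, in line with the first-order accuracy mentioned above.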


Exercises

Exercise 2.1 Show the identity (2.2) by integration by parts.

Exercise 2.2 Using the identity $\lim_{n\to\infty}(1-\frac tn)^n=e^{-t}$, prove the identity
$$ \Gamma(z)=\lim_{n\to\infty}\frac{n!\,n^z}{z(z+1)\cdots(z+n)}. $$
Euler originally defined the Gamma function by this product formula in a letter to Goldbach, dated October 13th, 1729 [Eul29].

Exercise 2.3 Prove that $\Gamma(x)$ is log-convex on $\mathbb{R}_+$, i.e., for any $x_1,x_2\in\mathbb{R}_+$ and $\theta\in[0,1]$, there holds $\Gamma(\theta x_1+(1-\theta)x_2)\le\Gamma(x_1)^\theta\Gamma(x_2)^{1-\theta}$.

Exercise 2.4 James Wendel [Wen48] presented a strengthened version of the identity (2.8) by proving the double inequality
$$ \Big(\frac{x}{x+s}\Big)^{1-s}\le\frac{\Gamma(x+s)}{x^s\Gamma(x)}\le1, $$
for $0<s<1$ and $x>0$. The proof employs only Hölder's inequality and the recursion formula (2.2). This exercise outlines the proof.
(i) Use Hölder's inequality to prove $\Gamma(x+s)\le\Gamma(x+1)^s\Gamma(x)^{1-s}$. This and the recursion formula yield
$$ \Gamma(x+s)\le x^s\Gamma(x). \qquad (2.40) $$
(ii) Substitute $s$ by $1-s$ in (2.40) and then $x-s$ by $x$ to get $\Gamma(x+1)\le(x+s)^{1-s}\Gamma(x+s)$. This proves the double inequality, and also the identity (2.8).
(iii) Extend the proof to $s\in\mathbb{R}$ by the recursion formula (2.2).

Exercise 2.5 Use the log-convexity of $\Gamma(x)$ to prove Gautschi's inequality [Gau60] for the Gamma function: for any $x\in\mathbb{R}_+$ and any $s\in(0,1)$, there holds
$$ x^{1-s}<\frac{\Gamma(x+1)}{\Gamma(x+s)}<(x+1)^{1-s}. $$
This inequality is closely connected with Wendel's identity; see [Qi10] for a comprehensive survey of inequalities related to the ratio of two Gamma functions.


Exercise 2.6 Prove the identity (2.10), using the convolution rule for the Laplace transform on the function $\int_0^t(t-s)^{\alpha-1}s^{\beta-1}\,ds$.

Exercise 2.7 Prove the identity for $f\in C(D)$:
$$ f(x)=\lim_{\alpha\to0^+}{}_aI_x^\alpha f(x), \quad\forall x\in D. $$

Exercise 2.8 Let $f\in L^1(D)$. Prove the following identity
$$ \lim_{\alpha\to0^+}\|{}_aI_x^\alpha f-f\|_{L^1(D)}=0. $$

Exercise 2.9 Repeat Examples 2.1 and 2.2 for any fixed $a\in\mathbb{R}$.

Exercise 2.10 Find (i) ${}_0I_x^\alpha\cos(\lambda x)$, $\lambda\in\mathbb{R}$, and (ii) ${}_0I_x^\alpha\Gamma(x)$.

Exercise 2.11 Prove Theorem 2.2(iii).

Exercise 2.12 Find ${}^R_0D_x^\alpha\sin(\lambda x)$ and ${}^R_0D_x^\alpha\cos(\lambda x)$, $\lambda\in\mathbb{R}$.

Exercise 2.13 Prove that for $\alpha\in(0,1)$ and $p\in[1,\alpha^{-1}]$, if $f\in AC(D)$, then ${}^R_aD_x^\alpha f\in L^p(D)$.

Exercise 2.14 This exercise generalizes Lemma 2.6. Let $f,g\in L^1(D)$ and $\alpha\in(0,1)$ be such that (i) ${}^R_aD_x^\alpha f$ and ${}^R_xD_b^\alpha g$ exist and $g\,{}^R_aD_x^\alpha f\in L^1(D)$; and (ii) the functions $g$, ${}_aI_x^{1-\alpha}f$ are continuous at $x=a$ and the functions $f$, ${}_xI_b^{1-\alpha}g$ are continuous at $x=b$. Then prove the following identity
$$ \int_Dg\,{}^R_aD_x^\alpha f\,dx=\int_Df\,{}^R_xD_b^\alpha g\,dx+(f\,{}_xI_b^{1-\alpha}g)|_{x=b}-(g\,{}_aI_x^{1-\alpha}f)|_{x=a}. $$

Exercise 2.15 Generalize Lemma 2.8 to the case $\alpha\in(1,2)$.

Exercise 2.16 [Al-12] If $f\in C^1(D)$ attains its maximum at $x_0\in(a,b]$, then for any $\alpha\in(0,1)$, there holds
$$ ({}^R_aD_x^\alpha f)(x_0)\ge\frac{(x_0-a)^{-\alpha}}{\Gamma(1-\alpha)}f(x_0). $$

Exercise 2.17 This exercise is about the continuity of the Riemann-Liouville fractional derivative in the order $\alpha$. Let $n-1<\alpha<n$, $n\in\mathbb{N}$.
(i) For $f\in C^n(D)$,
$$ \lim_{\alpha\to(n-1)^+}{}^R_aD_x^\alpha f(x)=f^{(n-1)}(x), \quad\forall x\in D. $$
(ii) For $f\in AC^n(D)$,
$$ \lim_{\alpha\to(n-1)^+}{}^R_aD_x^\alpha f(x)=f^{(n-1)}(x), \quad\text{a.e. in } D. $$
Discuss how to further relax the regularity on $f$.


Exercise 2.18 [RSL95] Consider the Weierstrass function defined by (with $s>0$)
$$ W_\lambda(x)=\sum_{k=0}^\infty\lambda^{(s-2)k}\sin\lambda^kx, \quad\lambda>1. $$
It is known that for $s\in(1,2)$, $W_\lambda$ is continuous but nowhere differentiable. Prove the following two statements.
(i) ${}^R_0D_x^\alpha W_\lambda(x)$ exists for every order $\alpha<2-s$.
(ii) ${}^R_0D_x^\alpha W_\lambda(x)$ does not exist for any order $\alpha>2-s$.

Exercise 2.19 The way Djrbashian introduced his version of the fractional derivative (see, e.g., [DN68]) is actually more general than that given in Definition 2.3. It is defined for a set of indices $\{\gamma_k\}_{k=0}^n\subset(0,1]$. Denote
$$ \sigma_k:=\sum_{j=0}^k\gamma_j-1, $$
and suppose $\sigma_n=\sum_{j=0}^n\gamma_j-1>0$. Then for a function $f$ defined on $D$, he introduced the following differential operations associated with $\{\gamma_k\}_{k=0}^n$, using the Riemann-Liouville fractional derivative:
$$ {}_aD_x^{\sigma_0}f(x)={}_aI_x^{1-\gamma_0}f'(x),\qquad {}_aD_x^{\sigma_k}f(x)={}_aI_x^{1-\gamma_k}\,{}^R_aD_x^{\gamma_{k-1}}\cdots{}^R_aD_x^{\gamma_1}\,{}^R_aD_x^{\gamma_0}f,\quad k=1,\ldots,n, $$
supposing that these operations make sense at least a.e. on $D$. Clearly, with the choice $\gamma_0=\gamma_1=\ldots=\gamma_{n-1}=1$, it recovers the classical Djrbashian-Caputo fractional derivative ${}^C_aD_x^\alpha f$ with $\alpha=\sigma_n$ in Definition 2.3 (associated with the set $\{\gamma_k\}_{k=0}^n$). With this definition, he and his collaborators studied the Cauchy problem [DN68] and boundary value problems [Džr70] for the associated odes (see the book [Djr93, Chapter 10] for some relevant results).
(i) Prove the following identity
$$ {}_aD_x^{\sigma_s}(x-a)^{\sigma_k}=\begin{cases}0, & 0\le k\le s-1,\ 0\le s\le n,\\[1mm] \dfrac{\Gamma(\sigma_k+1)}{\Gamma(\sigma_k-\sigma_s+1)}(x-a)^{\sigma_k-\sigma_s}, & s\le k\le n.\end{cases} $$
(ii) Derive the Laplace transform relation for ${}_0D_x^{\sigma_n}f$.
(iii) Derive the fundamental theorem for ${}_aD_x^{\sigma_n}$, in a manner similar to Theorem 2.6.

Exercise 2.20 Show the following identity for $\alpha>0$ and $\lambda\in\mathbb{R}$:
$$ {}^C_0D_x^\alpha E_{\alpha,1}(\lambda x^\alpha)=\lambda E_{\alpha,1}(\lambda x^\alpha). $$

Exercise 2.21 Find one function $f(x)$ such that for $0<\alpha,\beta<1$
$$ {}^C_aD_x^\alpha\,{}^C_aD_x^\beta f(x)={}^C_aD_x^\beta\,{}^C_aD_x^\alpha f(x)\ne{}^C_aD_x^{\alpha+\beta}f(x). $$

Exercise 2.22 [NSY10] Prove that for $f\in C^2(D)$ and $0<\alpha,\beta<1$ with $\alpha+\beta<1$,
$$ {}^C_aD_x^\beta({}^C_aD_x^\alpha f)={}^C_aD_x^{\alpha+\beta}f. $$

Exercise 2.23 Prove that for $f\in C^2(D)$ and $\alpha\in(1,2)$
$$ {}^C_aD_x^\alpha f(x)=({}^C_aD_x^{\frac\alpha2})^2f(x)-\frac{f'(a)}{\Gamma(2-\alpha)}(x-a)^{1-\alpha}. $$

Exercise 2.24 Let $\alpha\in(0,1)$, $f\in AC(D)$, and $f_+=\max(f,0)$. Prove the following inequality
$$ f_+(x)\,{}^C_aD_x^\alpha f(x)\ge\tfrac12\,{}^C_aD_x^\alpha f_+^2(x). $$
This inequality generalizes Alikhanov's inequality in Lemma 2.11.

Exercise 2.25 In Proposition 2.5(iii), can the condition $\alpha\in(\alpha_0,1)$ be relaxed to $\alpha\in(\alpha_0,\alpha_1)$ for some $0<\alpha_0<\alpha_1<1$?

Exercise 2.26 Show the formula (2.38).

Exercise 2.27 Give the full argument for "the basic property of limits" in the proof of Theorem 2.16.

Exercise 2.28 One generalization of the fractional integral operator ${}_aI_x^\alpha$ is the Erdélyi-Köber fractional integral operator, for $\alpha>0$ and $\eta>-\frac12$, defined as
$$ (I^{\eta,\alpha}f)(x)=\frac{2x^{-2\eta-2\alpha}}{\Gamma(\alpha)}\int_0^x(x^2-s^2)^{\alpha-1}s^{2\eta+1}f(s)\,ds. $$
(i) Show that if $f$ is integrable, the integral on the right-hand side exists.
(ii) Show that the semigroup-type relation $I^{\eta,\alpha}I^{\eta+\alpha,\beta}=I^{\eta,\alpha+\beta}$ holds.

Chapter 3

Mittag-Leffler and Wright Functions

The exponential function $e^z$ plays an extremely important role in the theory of integer-order differential equations. For fdes, its role is subsumed by the Mittag-Leffler and Wright functions. In this chapter, we discuss their basic analytic properties and numerical computation.

3.1 Mittag-Leffler Function

Magnus Gustaf Mittag-Leffler treated the function that bears his name in a series of five papers, in connection with his method of summation of divergent series; the two most frequently referenced ones are [ML03, ML05]. These considered the one-parameter version $E_{\alpha,1}(z)$, initially for $\alpha\in\mathbb{R}$ and later for $\alpha\in\mathbb{C}$. The two-parameter version was introduced by Wiman [Wim05] in 1905, but the majority of the analysis for this case appeared only nearly 50 years later in the (independent) works of Agarwal and Humbert [Aga53, Hum53, HA53] and also Djrbashian [Džr54a, Džr54b, Džr54c] (see the monograph by Djrbashian [Dzh66] for an overview of many results). This function was initially treated under "Miscellaneous Functions" in Volume III of the classical Bateman project [EMOT55, pp. 206–212], but its central role in fdes is now widely recognized, and the function has become so prominent that it is sometimes called the "queen function of fractional calculus" [MG07]. The survey [HMS11] and the monograph [GKMR14] provide a comprehensive treatment of Mittag-Leffler-type functions. For $\alpha>0$ and $\beta\in\mathbb{R}$, the two-parameter Mittag-Leffler function $E_{\alpha,\beta}(z)$ is defined by
$$ E_{\alpha,\beta}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(\alpha k+\beta)}, \quad z\in\mathbb{C}. \qquad (3.1) $$
Often we write $E_\alpha(z)=E_{\alpha,1}(z)$ when $\beta=1$. We begin with several elementary facts about the function $E_{\alpha,\beta}(z)$. It generalizes the familiar exponential function $e^z$:


$$ E_{1,1}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(k+1)}=\sum_{k=0}^\infty\frac{z^k}{k!}=e^z. $$
There are a few other easy-to-verify special cases
$$ E_{1,2}(z)=z^{-1}(e^z-1),\quad E_{2,2}(z)=z^{-\frac12}\sinh z^{\frac12},\quad E_{2,1}(z)=\cosh(z^{\frac12}), \qquad (3.2) $$
where $\cosh z=\frac{e^z+e^{-z}}2$ and $\sinh z=\frac{e^z-e^{-z}}2$ denote the hyperbolic cosine and sine, respectively. Here $z^{\frac12}$ means the principal value of the square root of $z\in\mathbb{C}$ cut along $\mathbb{R}_-=(-\infty,0)$. The relation to the complementary error function is also well known:
$$ E_{\frac12,1}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(\frac k2+1)}=e^{z^2}(1+\operatorname{erf}(z))=e^{z^2}\operatorname{erfc}(-z), \qquad (3.3) $$
where erf and erfc denote the error function and the complementary error function, respectively, defined for any $z\in\mathbb{C}$ by
$$ \operatorname{erf}(z):=\frac{2}{\sqrt\pi}\int_0^ze^{-s^2}\,ds \quad\text{and}\quad \operatorname{erfc}(z):=1-\operatorname{erf}(z)=\frac{2}{\sqrt\pi}\int_z^\infty e^{-s^2}\,ds. \qquad (3.4) $$
The Riemann-Liouville fractional integrals and derivatives of a Mittag-Leffler function are still Mittag-Leffler functions: for any $\gamma>0$ and $\lambda\in\mathbb{R}$,
$$ {}_0I_x^\gamma x^{\beta-1}E_{\alpha,\beta}(\lambda x^\alpha)=x^{\beta+\gamma-1}E_{\alpha,\beta+\gamma}(\lambda x^\alpha), \qquad (3.5) $$
$$ {}^R_0D_x^\gamma x^{\beta-1}E_{\alpha,\beta}(\lambda x^\alpha)=x^{\beta-\gamma-1}E_{\alpha,\beta-\gamma}(\lambda x^\alpha). \qquad (3.6) $$
Indeed, since $E_{\alpha,\beta}(z)$ is entire (cf. Proposition 3.1), we can integrate termwise and use (2.19) to obtain
$$ {}_0I_x^\gamma x^{\beta-1}E_{\alpha,\beta}(\lambda x^\alpha)={}_0I_x^\gamma\sum_{k=0}^\infty\frac{\lambda^kx^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}=\sum_{k=0}^\infty\frac{\lambda^k\,{}_0I_x^\gamma x^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}=\sum_{k=0}^\infty\frac{\lambda^kx^{k\alpha+\beta+\gamma-1}}{\Gamma(k\alpha+\beta+\gamma)}=x^{\beta+\gamma-1}E_{\alpha,\beta+\gamma}(\lambda x^\alpha). $$

The identity (3.6) follows similarly.

Recall that the order $\rho$ and the corresponding type of an entire function $f(z)$ are defined, respectively, by
$$ \rho:=\limsup_{R\to\infty}\frac{\ln\ln\max_{|z|\le R}|f(z)|}{\ln R} \quad\text{and}\quad \limsup_{R\to\infty}\frac{\ln\max_{|z|\le R}|f(z)|}{R^\rho}. $$

Proposition 3.1 For $\alpha>0$ and $\beta\in\mathbb{R}$, $E_{\alpha,\beta}(z)$ is entire of order $\frac1\alpha$ and type 1.


Proof It suffices to show that the series (3.1) converges for any $z\in\mathbb{C}$. Using Stirling's formula (2.7), we have
$$ \frac{\Gamma((k+1)\alpha+\beta)}{\Gamma(k\alpha+\beta)}=(k\alpha)^\alpha(1+O(k^{-1})) \quad\text{as } k\to\infty. $$
Thus, by Cauchy's criterion, the series (3.1) converges for any $z\in\mathbb{C}$. The order and type of $E_{\alpha,\beta}(z)$ follow similarly. □

3.1.1 Basic analytic properties

Integral representations play a prominent role in the study of entire functions, e.g., for analytic properties and practical computations. For $E_{\alpha,\beta}(z)$, such representations in the form of an improper integral along the Hankel contour were treated for $\beta=1$ by [EMOT55] and for arbitrary $\beta$ by Djrbashian [Dzh66]. We shall focus on the case $\alpha\in(0,2)$, since the case $\alpha\ge2$ can be reduced to $\alpha\in(0,2)$, cf. Lemma 3.1. First, we define the contour $\Gamma_{\epsilon,\phi}$ by
$$ \Gamma_{\epsilon,\phi}=\{z\in\mathbb{C}:z=re^{\pm i\phi},\,r\ge\epsilon\}\cup\{z\in\mathbb{C}:z=\epsilon e^{i\theta},\,|\theta|\le\phi\}, $$
oriented with increasing imaginary part. The contour $\Gamma_{\epsilon,\phi}$ divides $\mathbb{C}$ into two parts $G^\pm(\epsilon,\phi)$, where $G^-(\epsilon,\phi)$ and $G^+(\epsilon,\phi)$ are the regions to the left and right of the contour path, respectively; see Fig. 3.1 for an illustration. The derivation is based on the representation (2.6) of the reciprocal Gamma function $\frac1{\Gamma(z)}$. First, we perform the substitution $\tilde\zeta=\zeta^{\frac1\alpha}$, $\alpha\in(0,2)$, in (2.6) and, for $1\le\alpha<2$, consider only the contours $\Gamma_{\epsilon,\phi}$ for which $\frac\pi2<\phi<\frac\pi\alpha$. Since $\epsilon$ is arbitrary, we obtain
$$ \frac1{\Gamma(z)}=\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-z-\alpha}{\alpha}}\,d\zeta, \qquad (3.7) $$
where $\alpha\in(0,2)$ and $\frac\alpha2\pi<\phi<\min(\pi,\alpha\pi)$. For $\alpha\in(0,2)$, $E_{\alpha,\beta}(z)$ admits the following integral representations.

Theorem 3.1 Let $\alpha\in(0,2)$ and $\beta\in\mathbb{C}$. Then for any $\epsilon>0$ and $\phi$ such that $\frac{\pi\alpha}2<\phi<\min(\pi,\alpha\pi)$, there hold
$$ E_{\alpha,\beta}(z)=\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}\frac{e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}}}{\zeta-z}\,d\zeta, \quad\forall z\in G^-(\epsilon,\phi), \qquad (3.8) $$
$$ E_{\alpha,\beta}(z)=\frac1\alpha z^{\frac{1-\beta}{\alpha}}e^{z^{\frac1\alpha}}+\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}\frac{e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}}}{\zeta-z}\,d\zeta, \quad\forall z\in G^+(\epsilon,\phi). \qquad (3.9) $$

Proof The proof employs the integral representation (3.7) of $\frac1{\Gamma(z)}$. If $|z|<\epsilon$, then $|\frac z\zeta|<1$ for $\zeta\in\Gamma_{\epsilon,\phi}$. It follows from (3.7) that for $0<\alpha<2$ and $|z|<\epsilon$, there holds


Fig. 3.1 The contour $\Gamma_{\epsilon,\phi}$, which divides $\mathbb{C}$ into two regions $G^\pm(\epsilon,\phi)$.

$$ E_{\alpha,\beta}(z)=\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}-1}\sum_{k=0}^\infty\Big(\frac z\zeta\Big)^k\,d\zeta=\sum_{k=0}^\infty\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}-k-1}\,d\zeta\,z^k=\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}\frac{e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}}}{\zeta-z}\,d\zeta. $$
Under the condition on $\phi$, the last integral is absolutely convergent and defines a function of $z$ which is analytic in both $G^-(\epsilon,\phi)$ and $G^+(\epsilon,\phi)$. Since for every $\phi\in(\frac\alpha2\pi,\min(\pi,\alpha\pi))$ the circle $\{z\in\mathbb{C}:|z|<\epsilon\}$ lies in the region $G^-(\epsilon,\phi)$, by analytic continuation, the integral is equal to $E_{\alpha,\beta}(z)$ not only in the circle $\{z\in\mathbb{C}:|z|<\epsilon\}$, but in the entire domain $G^-(\epsilon,\phi)$, and thus (3.8) follows. Now we turn to $z\in G^+(\epsilon,\phi)$. For any $\epsilon_1>|z|$, we have $z\in G^-(\epsilon_1,\phi)$, and thus (3.8) holds. However, if $\epsilon<|z|<\epsilon_1$ and $|\arg(z)|<\phi$, then the Cauchy theorem gives
$$ \frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon_1,\phi}-\Gamma_{\epsilon,\phi}}\frac{e^{\zeta^{\frac1\alpha}}\zeta^{\frac{1-\beta}{\alpha}}}{\zeta-z}\,d\zeta=\frac1\alpha z^{\frac{1-\beta}{\alpha}}e^{z^{\frac1\alpha}}. $$
From the preceding two identities, the representation (3.9) follows directly. □



Remark 3.1 For the special case $\beta=1$, the integral representation is given by
$$ E_{\alpha,1}(z)=\frac{1}{2\alpha\pi i}\int_{\Gamma_{\epsilon,\phi}}\frac{e^{\zeta^{\frac1\alpha}}}{\zeta-z}\,d\zeta, \quad\forall z\in G^-(\epsilon,\phi). \qquad (3.10) $$
This formula was known to Mittag-Leffler in 1905 [ML05, (56), p. 135]. There are alternative representations, obtained by deforming the contour properly. The following is one variant:
$$ E_{\alpha,\beta}(z)=\frac{1}{2\pi i}\int_C\frac{\zeta^{\alpha-\beta}e^\zeta}{\zeta^\alpha-z}\,d\zeta, \qquad (3.11) $$


where the contour $C$ is a loop which starts and ends at $-\infty$ and encircles the disk $|\zeta|\le|z|^{1/\alpha}$ in the positive sense, with $|\arg(\zeta)|\le\pi$ on $C$. The integrand has a branch point at $\zeta=0$. The complex plane is cut along $\mathbb{R}_-$, and in the cut plane the integrand is single-valued: the principal branch of $\zeta^\alpha$ is taken in the cut plane. Theorem 3.1 is fundamental to the study of $E_{\alpha,\beta}(z)$. One important property of $E_{\alpha,\beta}(z)$ is its asymptotic behavior as $z\to\infty$ in various sectors of $\mathbb{C}$. It was first derived by Djrbashian [Dzh66], and subsequently refined [WZ02, Par02]. The asymptotics of $E_{\alpha,\beta}(z)$ with $\alpha\in(0,2)$ are given below.

(3.12)

and for ϕ ≤ |arg(z)| ≤ π with |z| → ∞ Eα,β (z) = −

N  k=1

  1 1 1 + O N +1 . Γ(β − αk) z k z

(3.13)

Proof To show (3.12), we take φ ∈ (ϕ, min(π, απ)). By the identity N  1 ζ k−1 ζN =− + ζ−z z N (ζ − z) zk k=1

and the representation (3.8) with  = 1, for any z ∈ G+ (1, φ), there holds 1 1−β 1  1 Eα,β (z) = z α ez α − α 2απi k=1 N



1

Γ1, φ

eζ α ζ

1−β α +k−1

dζ z −k + I N ,

where the integral I N is defined by IN =

1 2απiz N

∫ Γ1, φ

1

eζ α

ζ

1−β α +N

ζ−z

dζ .

By (3.7), the first integral can be evaluated by ∫ 1 1−β 1 1 , e ζ α ζ α +k−1 dζ = 2απi Γ1, φ Γ(β − αk)

(3.14)

k ≥ 1.

It remains to bound I N . For sufficiently large |z| and | arg(z)| ≤ ϕ, we have minζ ∈Γ1, ϕ |ζ − z| = |z| sin(φ − ϕ), and hence |I N (z)| ≤

|z| −N −1 2απ sin(φ − ϕ)

∫ Γ1, φ

1

|e ζ α ||ζ

1−β α +N

|dζ .

64

3 Mittag-Leffler and Wright Functions 1

1

φ

For ζ ∈ Γ1,φ , arg(ζ) = ±φ and |ζ | ≥ 1, we have |e ζ α | = e |ζ | α cos α and due to the choice of φ, cos( αφ ) < 0. Hence, the last integral converges. These estimates together yield (3.12). To show (3.13), we take φ ∈ ( α2 π, ϕ), and the representation (3.8) with  = 1. Then, we obtain Eα,β (z) = −

N  k=1

z −k + I N (z), Γ(β − αk)

z ∈ G− (1, φ),

where I N is defined in (3.14). For large |z| with ϕ ≤ | arg(z)| ≤ π, there holds minζ ∈Γ1, φ |ζ − z| = |z| sin(ϕ − φ), and consequently |I N (z)| ≤

|z| −1−N 2πα sin(ϕ − φ)

∫ Γ1, φ

1

|e ζ α ||ζ

1−β α +N

|dζ,

where the integral converges like before, thereby showing (3.13).



Remark 3.2 In the area around the Stokes line | arg(z)| ≤ απ±δ, with δ < α2 π, Eα,β (z) exhibits the so-called Stokes phenomenon and there exists Berry-type smoothing [WZ02, Section 2]. With the function erfc(z), cf. (3.4), it is given by     ∞ z −k 1 1−β z α1 1 z α e erfc −c(θ) 12 |z| α − , (3.15) Eα,β (z) ∼ 2α Γ(β − αk) k=1 α around the lower Stokes line for − 3α 2 π < arg(z) < − 2 π, where the parameter c(θ) is 2 1 given by the relation c2 = 1 + iθ − eiθ , with θ = arg(z α ) + π and the principal branch 2 3 of c is chosen such that c ≈ θ + i θ6 − θ36 for small θ. Around the upper Stokes line α 3α 2 π < arg(z) < 2 π, one finds

Eα,β (z) ∼ 2

    ∞ z −k 1 1−β z α1 1 z α e erfc c(θ) 12 |z| α − , 2α Γ(β − αk) k=1

(3.16)

1

with c2 = 1 + iθ − eiθ , θ = arg(z α ) − π, and the same condition as before for small θ. This exponentially improved asymptotic series converges very rapidly for most value of z ∈ C with |z| > 1. See also the work [Par20] for discussions on the Mittag-Leffler function Eα,1 (z) on R− as α → 1− . Theorem 3.2 indicates that Eα,β (z), with α ∈ (0, 2) and β − α  Z− ∪ {0}, decays only linearly on R− , much slower than the exponential decay for ez . However, on R+ , it can grow super-exponentially, and the growth rate increases dramatically as α → 0+ . We show some plots of Eα,1 (x), x ∈ R, in Figs. 3.2 and 3.3. On R+ , Eα,1 (x) is monotonically increasing, and for fixed x, it is decreasing in α. The behavior on R− is more complex. For α ∈ (0, 1], Eα,1 (x) is monotonically decreasing, but the rate of decay differs substantially with α: for α = 1, it decays exponentially, but for any other α ∈ (0, 1), it decays only linearly, concurring with Theorem 3.2, with the monotonicity to be proved in Theorem 3.5. For α ∈ (1, 2], Eα,1 (x) is no longer


positive anymore on $\mathbb{R}_-$, and instead it oscillates wildly. For $\alpha$ away from 2, $E_{\alpha,1}(x)$ decays as $x\to-\infty$, and the closer $\alpha$ is to one, the faster the decay is. These behaviors underpin many important properties of the related fdes. In the limiting case $\alpha=2$ there is no decay: by the identity (3.2), $E_2(x)=\cosh(i\sqrt{-x})=\cos\sqrt{-x}$, i.e., it recovers the cosine function, and thus Theorem 3.2 does not apply to $\alpha=2$.

Fig. 3.2 Plots of the function $E_{\alpha,1}(x)$ for $\alpha\in(0,1]$.

Fig. 3.3 Plots of the function $E_{\alpha,1}(x)$ for $\alpha\in(1,2]$.

The next result is a direct corollary of Theorem 3.2.

Corollary 3.1 For $0<\alpha<2$, $\beta\in\mathbb{R}$ and $\frac\alpha2\pi<\phi<\min(\pi,\alpha\pi)$, there holds
$$ |E_{\alpha,\beta}(z)|\le c_1(1+|z|)^{\frac{1-\beta}{\alpha}}e^{\Re(z^{1/\alpha})}+c_2(1+|z|)^{-1}, \quad |\arg(z)|\le\phi, $$
$$ |E_{\alpha,\beta}(z)|\le c(1+|z|)^{-1}, \quad \phi\le|\arg(z)|\le\pi. $$

Last, the case $\alpha\ge2$ can be reduced to the case $\alpha\in(0,2)$ via the following duplication formula. The notation $[\cdot]$ denotes taking the integral part of a real number.

Lemma 3.1 For all $\alpha>0$, $\beta\in\mathbb{R}$, $z\in\mathbb{C}$ and $m=[\frac{\alpha-1}{2}]+1$, there holds
$$ E_{\alpha,\beta}(z)=\frac{1}{2m+1}\sum_{j=-m}^mE_{\frac{\alpha}{2m+1},\beta}\big(z^{\frac{1}{2m+1}}e^{\frac{2\pi ij}{2m+1}}\big). \qquad (3.17) $$

Proof Recall the following elementary identity:
$$ \sum_{j=-m}^me^{\frac{2k\pi ij}{2m+1}}=\begin{cases}2m+1, & \text{if } k\equiv0\ (\mathrm{mod}\ 2m+1),\\ 0, & \text{if } k\not\equiv0\ (\mathrm{mod}\ 2m+1).\end{cases} $$
This and the definition of the function $E_{\alpha,\beta}(z)$ give
$$ \sum_{j=-m}^mE_{\alpha,\beta}\big(ze^{\frac{2\pi ij}{2m+1}}\big)=(2m+1)E_{(2m+1)\alpha,\beta}(z^{2m+1}). $$
Substituting $(2m+1)\alpha$ by $\alpha$ and $z^{2m+1}$ by $z$ gives the desired identity. □



Using Lemma 3.1, the following result holds. The statement is taken from [PS11, Theorem 1.2.2], which corrects a minor error in the summation range that has appeared in many popular textbooks (see [PS11, pp. 220–221] for a counterexample).

Theorem 3.3 Let $\alpha\ge2$, $\beta\in\mathbb{R}$ and $N\in\mathbb{N}$. Then the following asymptotic behavior holds:
$$ E_{\alpha,\beta}(z)=\frac1\alpha\sum_m\big(z^{\frac1\alpha}e^{\frac{2m\pi i}{\alpha}}\big)^{1-\beta}e^{z^{\frac1\alpha}e^{\frac{2m\pi i}{\alpha}}}-\sum_{k=1}^N\frac{z^{-k}}{\Gamma(\beta-k\alpha)}+O(|z|^{-N-1}), $$
as $z\to\infty$, where the first sum is taken over all integers $m$ satisfying the condition $|\arg(z)+2m\pi|\le\frac{3\alpha\pi}4$.

Last, we discuss the distribution of the zeros of $E_{\alpha,\beta}(z)$, which is of independent interest because of its crucial role in the study of Fourier-Laplace-type integrals with Mittag-Leffler kernels [Dzh66] and the spectral theory of fdes [Nak77], to name a few. Thus the issue has been extensively discussed. Actually, shortly after the appearance of the work of Mittag-Leffler [ML03] in 1903, Wiman [Wim05] showed in 1905 that for $\alpha\ge2$, all zeros of the function $E_{\alpha,1}(z)$ are real, negative and simple, and later Pólya [Pól18] reproved this result for $2\le\alpha\in\mathbb{N}$ by a different method. It was revisited by Djrbashian [Dzh66], and many deep results were derived. There are many further refinements [Sed94, OP97, Sed00, Sed04, Psk05a]; see [PS11] for


an overview. A general result for the case α ∈ (0, 2) and β ∈ R is given below, see [PS11, Theorem 2.1.1] for the technical proof. Theorem 3.4 Let α ∈ (0, 2), and β ∈ R, where β  1, 0, −1, −2, . . . for α = 1. Then all sufficiently large zeros zn (in modulus) of the function Eα,β (z) are simple and the following asymptotic formula holds: 1

znα = 2πin + ατβ ln 2πin + ln cβ +

cβ dβ ln 2πin − ατβ + rn, + (ατβ )2 cβ (2πin)α 2πin 2πin

as n → ±∞, where the constants cβ , dβ and τβ are defined by α α 1−β , dβ = , τβ = 1 + , if β  α − ,  ∈ N, Γ(β − α) Γ(β − 2α) α α α 1−β , dβ = , τβ = 2 + , if β = α − ,  ∈ N, α  N, cβ = Γ(β − 2α) Γ(β − 3α) α cβ =

and the remainder rn is given by      2  ln |n| 1 ln |n| rn = O +O +O . |n| 1+α |n| 2α n2 The following result specializes Theorem 3.4. We sketch a short derivation to give a flavor of the proof of Theorem 3.4, see [JR12]. Proposition 3.2 Let α ∈ (0, 2) and α  1. Then all sufficiently large zeros {zn } (in modulus) of the function Eα,2 (z) are simple, and as n → ±∞, there holds

1 α π znα = 2nπi − (α − 1) ln 2π|n| + sign(n) i + ln + rn, 2 Γ(2 − α)

where the remainder rn is O ln|n|n| | . Proof Taking N = 1 in (3.12) gives Eα,2 (z) =

1 1 − 1 z α1 1 z αe − + O z −2 , α Γ(2 − α) z 1

as |z| → ∞.

α Hence we have z 1− α ez α = Γ(2−α) +O(z −1 ). Next with ζ = z α and w = ζ +(α−1) ln ζ, α w it can be rewritten as e = Γ(2−α) + O(w −α ). The roots wn for all sufficiently large n satisfy α + O(|n| −α ), wn = 2πn i + ln Γ(2 − α) 1

1

or equivalently ζn + (α − 1) ln ζn = 2πn i + ln

α + O(|n| −α ). Γ(2 − α)

Then the desired assertion follows from this identity.



68

3 Mittag-Leffler and Wright Functions

In Fig. 3.4, we show the contour plot of |Eα,2 (z)| and |Eα,α (z)|, with α = 43 , over C− in a logarithmic scale. They are eigenfunctions of the fractional Sturm-Liouville problem, and thus the behavior of the roots is important, see Section 5.3 in Chapter 5. The deep wells in the plots correspond to the zeros. In both cases, the zeros always appear in complex conjugate pairs, and there is only a finite number of real zeros. This latter is predicted by Theorem 3.4: for any α ∈ (0, 2), asymptotically the zeros with large magnitude must lie on two rays in the complex plane C. However, the existence of a real zero for general Eα,β (z) remains poorly understood. The wells run much deeper for Eα,α (z) than Eα,2 (z), which agrees with the faster asymptotic decay predicted by Theorem 3.2.

100

100

10

50

5

5

50

0

0

−5

−50

0

0

−5 −50 −10

−100 −100

−50

0

−10 −100 −100

−50

0

Fig. 3.4 The contour plots for ln |Eα,2 (z)| and ln |Eα, α (z)|, with α = 43 .

Besides the distribution of roots, the basis property or completeness of MittagLeffler functions has also been extensively studied in the Russian literature [Džr74, DM75, Hač75, DM77, DM83], and a necessary and sufficient condition generalizing the classical Müntz theorem to Mittag-Leffler functions was established [Džr74].

3.1.2 Mittag-Leffler function $E_{\alpha,1}(-x)$

The function $E_{\alpha,1}(-x)$ with $x\in\mathbb{R}_+$ is especially important in practice. We describe its complete monotonicity and asymptotics. A function $f(x):\mathbb{R}_+\to\mathbb{R}$ is called completely monotone if $(-1)^nf^{(n)}(x)\ge0$, $n=0,1,\ldots$. For example, $e^{-x}$ and


$f(x)=(x+c_0)^\beta$, with $c_0>0$ and $\beta\in[-1,0)$, are completely monotone. The next result, due to Pollard [Pol48], gives the complete monotonicity of $E_{\alpha,1}(-x)$.

Theorem 3.5 For $\alpha\in[0,1]$, $E_{\alpha,1}(-x)$ is completely monotone, and
$$ E_{\alpha,1}(-x)=\int_0^\infty e^{-sx}F_\alpha(s)\,ds, $$

with Fα (s) =

∞ 1  (−1)k sin παkΓ(αk + 1)s k−1 . απ k=1 k!

1 Proof Since E0,1 (−x) = 1+x and E1,1 (−x) = e−x , these two cases hold trivially. It suffices to show the case 0 < α < 1. By the integral representation (3.10), we have

1 Eα,1 (−x) = 2απi



1

eζ α dζ, ζ+x

Γ, μ

(3.18)

α −1 with ∫ ∞ μ ∈ ( 2 π, απ) and arbitrary but fixed  > 0. Using the identity (x + ζ) = e−(ζ +x)s ds in (3.18) and the absolute convergence of the double integral, it follows 0 from Fubini’s theorem that ∫ ∞ Eα,1 (−x) = e−xs Fα (s)ds 0

with Fα (s)

1 = 2απi



1

Γ, μ

e ζ α e−ζ s dζ .

By Bernstein’s theorem in Theorem A.8, it remains to show Fα (s) ≥ 0 for all s ≥ 0. Integration by parts yields Fα (s) =

1 2απis



ζ α −1 ζ α1 e dζ . α 1

Γ, μ

e−ζ s

Substituting ζ s = z α gives s−1− α 1 α 2πi 1

Fα (s) =



Γ, μ

α

e−z ezs

1 −α

dz,

(3.19)

∫ 1

is the image of Γ −z α ezζ dz, where Γ,μ ,μ under the mapping. Let φα (ζ) = 2πi Γ e , μ ∫∞ α which is the inverse Laplace transform of e−z = 0 e−zζ φα (ζ)dζ, which is completely monotone for any α ∈ (0, 1) [Pol46], see also Exercise 3.13 for related discussions. Hence 1 1 Fα (s) = α−1 s−1− α φα (s− α ) ≥ 0, and Eα,1 (−x) is completely monotone. Finally, we derive the expression of Fα (s). π

to Γ Indeed, one can deform the contour Γ,μ ,ϕ , with  > 0 and ϕ ∈ ( 2 , π), without

70

3 Mittag-Leffler and Wright Functions

traversing any singularity of the integrand, ∫ α 1 φα (s) = e−z ezs dz. 2πi Γ, ϕ Now substituting z =  eiθ on the circular arc (with θ ∈ [−ϕ, ϕ]) and z = re±iϕ (with r ≥ ) on the two rays gives ∫ ϕ ∫ ∞ α iαϕ iϕ 1 1 − α e αθ i  eiθ s iθ φα (s) = e e ie dθ + e−r e er e s eiϕ dr 2πi −ϕ 2πi  ∫ ∞ α −iαϕ −iϕ 1 + e−r e er e s e−iϕ dr. 2πi  Letting  → 0+ and ϕ → π − yields 1

φα (s) =  π =−





e

−sr −(e−iπ r)α

e

0

1  (−1)k −iπαk Γ(kα + 1) e dr =  π k! s kα+1 k=0



∞ 1  (−1)k Γ(kα + 1) sin(kαπ) kα+1 . π k=0 k! s

Substituting this expression, we find the formula of Fα (s).



Schneider [Sch96] improved Theorem 3.5 that Eα,β (−x) is completely monotone on R+ for α > 0, β > 0 if and only if α ∈ (0, 1] and β ≥ α. His proof employs the corresponding probability measures and the Hankel contour integration. Miller and Samko [MS98] presented an alternative (and shorter) proof using Abelian transformation given below, see also [Djr93, Theorem 1.3-8]. These results were extended by [Sim15] to several functions related to Mittag-Leffler functions. Corollary 3.2 For α ∈ [0, 1] and β ≥ α, Eα,β (−x) is completely monotone on R+ . Proof Observe that E0,β (−x) = (Γ(β)(1 + x)−1 and E0,0 (−x) = 0. In either case, E0,β (−x) is completely monotone. More generally, for β > α > 0, there holds 1 Eα,β (−x) = αΓ(β − α)

∫ 0

1

(1 − t α )β−α−1 Eα,α (−t x)dt. 1

(3.20)

Indeed, with the series representation of Eα,α (−t x), changing variables t = s α and the identity (2.10), the right-hand side can be evaluated to be

3.1 Mittag-Leffler Function

71

∫ 1 ∞  1 1 (−x)k (1 − t α )β−α−1 t k dt αΓ(β − α) k=0 Γ(kα + α) 0 ∫ 1 ∞  (−x)k 1 = (1 − s)β−α−1 s kα+α−1 ds Γ(β − α) k=0 Γ(kα + α) 0

 B(β − α, (k + 1)α)(−x)k  (−x)k 1 = = Eα,β (−x). Γ(β − α) k=0 Γ(kα + α) Γ(kα + β) k=0 ∞

=



This shows (3.20). Next since Eα,1 (−x) is entire, using the recursion formula (2.2) for Gamma function Γ(z), we deduce  d (−x)k  (−1)k k x k−1 d Eα,1 (−x) = = = −α−1 Eα,α (−x). dx dx Γ(kα + 1) kαΓ(kα) k=0 k=1 ∞



(3.21) 

The identities (3.20) and (3.21) and Theorem 3.5 complete the proof. The positivity of Eα,α (−x) is a simple consequence of Theorem 3.5. Corollary 3.3 For 0 < α < 1, we have Eα,α (−x) > 0 on R+ .

Proof Assume the contrary that it vanishes at some x = x0 > 0. By (3.21) and Theorem 3.5, Eα,α (−x) is completely monotone, and thus it vanishes for all x ≥ x0 . The analyticity of Eα,α (−x) in x implies then it vanishes identically over R+ , which  contradicts the fact that Eα,α (0) = Γ(α)−1 . The following two-sided bounds on Eα,1 (−x) are useful. Simon [Sim14, Theorem 4] derived the bounds using a probabilistic argument. The current proof is due to [VZ15, Lemma 6.1], which applies to more general completely positive kernels, using the corresponding resolvent. Theorem 3.6 For every α ∈ (0, 1), the following bounds hold: (1 + Γ(1 − α)x)−1 ≤ Eα,1 (−x) ≤ (1 + Γ(1 + α)−1 x)−1,

∀x ∈ R+ .

Proof By changing variables x = t α and letting g(t) = Eα,1 (−t α ), it suffices to prove (1 + Γ(1 − α)t α )−1 ≤ g(t) ≤ (1 + Γ(1 + α)−1 t α )−1,

∀t ≥ 0.

For the function g(t), it follows directly from (3.5) that g(t) + 0 Itα g(t) = 1,

t ≥ 0.

(3.22)

By Theorem 3.5, g(t) is monotonically decreasing, which implies ∫ t ∫ t 1 g(t) tα α α−1 g(t), I g(t) = (t − s) g(s)ds ≥ (t − s)α−1 ds = 0 t Γ(α) 0 Γ(α) 0 Γ(α + 1) and therefore

72

3 Mittag-Leffler and Wright Functions

1 = g(t) + 0 Itα g(t) ≥ [1 + Γ(α + 1)−1 t α ]g(t). This gives the desired upper bound. Next, let h(t) = −g (t) = t α−1 Eα,α (−t α ) > 0,

(3.23)

cf. Corollary 3.3. Consequently, using (3.5) again, we deduce 1−α h(t) 0 It

= 0 It1−α [t α−1 Eα,α (−t α )] = g(t).

Then by Theorem 2.1 and the identity (3.22), ∫ t ∫ t 1 t −α 1−α −α g(t) = 0 It h(t) = (t − s) h(s)ds ≥ h(t − s)ds Γ(1 − α) 0 Γ(1 − α) 0 t −α 0 Itα g(t) t −α (1 − g(t)) t −α α 1−α = , = h(t) = 0 It 0 It Γ(1 − α) Γ(1 − α) Γ(1 − α) where the inequality follows from the positivity of h over R+ in (3.23). Rearranging this inequality gives the lower bound. 
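The two-sided bound can also be checked numerically. The sketch below (my own illustration, not the algorithm of Section 3.3) sums the defining series (3.1) in high precision with mpmath, which is adequate for moderate $x$ on the negative real axis despite the cancellation, and compares the value with the bounds of Theorem 3.6; the truncation length and working precision are ad hoc choices.

```python
# Numerical check of Theorem 3.6: 1/(1 + Gamma(1-a) x) <= E_{a,1}(-x) <= 1/(1 + x/Gamma(1+a)).
from mpmath import mp, mpf, gamma

mp.dps = 60  # extra digits to absorb the cancellation in the alternating series

def ml_neg(alpha, x, kmax=300):
    """E_{alpha,1}(-x) for x >= 0 via the defining series (3.1)."""
    s, p = mpf(0), mpf(1)       # p holds x**k
    for k in range(kmax):
        s += (-1) ** k * p / gamma(alpha * k + 1)
        p *= x
    return s

alpha = 0.6
for x in [0.1, 1.0, 5.0, 10.0]:
    e = ml_neg(alpha, mpf(x))
    lower = 1 / (1 + gamma(1 - alpha) * mpf(x))
    upper = 1 / (1 + mpf(x) / gamma(1 + alpha))
    print(f"x={x:5.1f}: {float(lower):.6f} <= {float(e):.6f} <= {float(upper):.6f}",
          lower <= e <= upper)
```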

1

1 E ,1 (-x) lower upper

E ,1 (-x) lower upper

0.5

0.5

0

0 0

2

4

6

8

10

0

2

4

x

6

8

10

x

Fig. 3.5 The function Eα,1 (−x) and its lower and upper bounds

The result can be viewed as rational approximations in $t^\alpha$:
$$ \frac{1}{1+\Gamma(1+\alpha)^{-1}t^\alpha}\sim1-\frac{t^\alpha}{\Gamma(1+\alpha)}\sim E_{\alpha,1}(-t^\alpha), \quad\text{as } t\to0^+, $$
$$ \frac{1}{1+\Gamma(1-\alpha)t^\alpha}\sim\frac{t^{-\alpha}}{\Gamma(1-\alpha)}\sim E_{\alpha,1}(-t^\alpha), \quad\text{as } t\to\infty. $$
The asymptotics as $t\to0^+$ and $t\to\infty$ are also known as stretched exponential decay and negative power law, respectively. The lower and upper bounds were also empirically observed by Mainardi [Mai14]. In Fig. 3.5, we plot $E_{\alpha,1}(-x)$ and its

3.1 Mittag-Leffler Function

73

lower and upper bounds given in Theorem 3.6. The approximations are quite sharp for α close to zero, but they are less sharp for α close to one. In general, the Laplace transform of Eα,β (x) is a Wright function Wρ,μ (z), but that of x β−1 Eα,β (−λx α ) does take a simple form. The minus sign in the argument is to ensure the existence of Laplace transform, in view of the exponential asymptotics of Eα,β (z) in Theorem 3.2. The functions Eα,1 (−λx α ) and x α−1 Eα,α (−λx α ), λ ∈ R often appear in the study of fractional odes. Their Laplace transforms are given by L[Eα,1 (−λx α )](z) =

z α−1 λ + zα

L[x α−1 Eα,α (−λx α )](z) =

and

1 . λ + zα

Lemma 3.2 Let λ ≥ 0, α > 0 and β > 0. Then z α−β , (z) > 0, zα + λ L[Eα,β (−λx)](z) = z −1Wα,β (−λz −1 ).

L[x β−1 Eα,β (−λx α )](z) =

Proof By the series expansion of x β−1 Eα,β (−λx α ), we have L[x β−1 Eα,β (−λx α )](z) = =

∞  k=0

∫ (−λ)k





e−zx x β−1

0

∞  (−λx α )k dx Γ(kα + β) k=0

 x kα+β−1 −zx z α−β e dx = . (−λ)k z −kα−β = α Γ(kα + β) z +λ k=0 ∞



0

1

Clearly, this identity holds for all (z) > ∫λ α , for which changing the order of ∞ summation and integral is justified. Since 0 e−zx x β−1 Eα,β (−λx α )dx is analytic with respect to z ∈ C+ , analytic continuation yields the identity for z ∈ C+ . Similarly, by the series expansion of Eα,β (−λx), we have ∫ L[Eα,β (−λx)](z) = =

∞  k=0



0

e−zx

∫ ∞ ∞ ∞   (−λx)k x k e−zx dx = dx (−λ)k Γ(kα + β) Γ(kα + β) 0 k=0 k=0

(−λ)k = z−1Wα,β (−λz −1 ). z k+1 Γ(k + 1)Γ(kα + β) 

This completes the proof of the lemma. The next example illustrates the occurrence of the function Eα,1

(−λx α ).

Example 3.1 Consider the following fractional ode, with $0<\alpha<1$ and $\lambda>0$: find $u$ satisfying
$$ {}^C_0D_x^\alpha u(x)+\lambda u(x)=0 \quad\text{for } x>0, \quad\text{with } u(0)=1. $$

It is commonly known as the fractional relaxation equation. We claim that the solution u(x) is given by u(x) = Eα,1 (−λx α ). In fact, the initial condition u(0) = 1


holds trivially, and further, we have
$$ {}^C_0D_x^\alpha u(x)={}^C_0D_x^\alpha\sum_{k=0}^\infty(-\lambda)^k\frac{x^{k\alpha}}{\Gamma(k\alpha+1)}=\sum_{k=1}^\infty(-\lambda)^k\frac{x^{(k-1)\alpha}}{\Gamma((k-1)\alpha+1)}=-\lambda\sum_{k=0}^\infty(-\lambda)^k\frac{x^{k\alpha}}{\Gamma(k\alpha+1)}=-\lambda u(x). $$

3.2 Wright Function For μ, ρ ∈ R with ρ > −1, the Wright function Wρ,μ (z) is defined by Wρ,μ (z) =

∞  k=0

zk , k!Γ(k ρ + μ)

z ∈ C.

(3.24)

It was first introduced in connection with a problem in number theory (the asymptotic theory of partitions) by Edward M. Wright in a series of notes starting from 1933 [Wri33, Wri35b, Wri35a]. Originally, Wright assumed that ρ ≥ 0 and, only in 1940 [Wri40], did he consider the case −1 < ρ < 0, for which the function is still entire but exhibits certain different features. An early overview of the properties of Wρ,μ (z) is given by Stankovic [Sta70], including integral representations, various special cases, functional relation, sign and majorants, etc. The function Wρ,μ (z) generalizes the exponential function W0,1 (z) = ez , and the Bessel function of first kind Jμ and the modified Bessel function of first kind Iμ :

z 2 z −μ = W1,μ+1 − Jμ (z), 4 2

W1,μ+1

z2 4

=

z −μ 2

Iμ (z).

(3.25)

Indeed, they follow from direct computation (and the series definitions of Jμ and Iμ )

z μ 2



z μ 

z2 z 2k := W1,μ+1 ∓ (∓1)k k . 4 2 k=0 4 k!Γ(k + μ + 1)

Hence in the literature it is often called the generalized Bessel function. Further, it follows from direct computation that the following differentiation relations hold:

3.2 Wright Function

75

d Wρ,μ (z) = Wρ,μ+ρ (z), dz

(3.26)

dn μ−1 (t Wρ,μ (ct ρ )) = t μ−1−n Wρ,μ−n (ct ρ ). dt n

(3.27)

Actually, since the Wright function is entire, termwise differentiation leads to  d  zk zk d Wρ,μ (z) = = = Wρ,ρ+μ (z), dz dz k!Γ(k ρ + μ) k=0 k!Γ(k ρ + ρ + μ) k=0 ∞



 dn ck t kρ+μ−1 dn μ−1 ρ (t W (ct )) = ρ,μ dt n dt n k!Γ(k ρ + μ) k=0 ∞

=

∞  k=0

ck t kρ+μ−1−n = t μ−n−1Wρ,μ−n (ct ρ ). k!Γ(k ρ + μ − n)

More generally we have the following identity: for all α, μ ∈ R, ρ ∈ (−1, 0) and c > 0, (for α < 0, R0Dxα is identified with 0 Ix−α ) R α μ−1 Wρ,μ (−cx ρ )) 0Dx (x

= x μ−α−1Wρ,μ−α (−cx ρ ).
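The Bessel relation (3.25) is easy to test numerically. The following sketch (an illustration under the stated identification, not code from the book) sums the series (3.24), which converges rapidly for $\rho=1>0$, and compares $(z/2)^\mu W_{1,\mu+1}(-z^2/4)$ with scipy's $J_\mu(z)$:

```python
# Check of (3.25): (z/2)^mu * W_{1, mu+1}(-z^2/4) should equal J_mu(z).
import math
from scipy.special import jv

def wright(rho, mu, z, kmax=60):
    """W_{rho,mu}(z) by the defining series (3.24), for rho > 0 (rapid convergence)."""
    return sum(z**k / (math.factorial(k) * math.gamma(rho * k + mu))
               for k in range(kmax))

mu = 0.5
for z in [0.5, 1.0, 3.0]:
    lhs = (z / 2) ** mu * wright(1.0, mu + 1.0, -z * z / 4.0)
    print(f"z={z}: via Wright = {lhs:.10f}, J_mu(z) = {jv(mu, z):.10f}")
```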

(3.28)

3.2.1 Basic analytic properties The next result gives the order of Wρ,μ (z). Note that for ρ ∈ (−1, 0), the function is not of exponential order. Lemma 3.3 For any ρ > −1 and μ ∈ R, the function Wρ,μ (z) is entire of order

1 1+ρ

and type (1 + ρ)| ρ| − 1+ρ . 1

Proof We sketch the proof only for ρ ∈ (−1, 0), since the case ρ ≥ 0 is simpler. Using the reflection formula (2.3), we rewrite Eρ,μ (z) as Wρ,μ (z) =

∞ 1  Γ(1 − k ρ − μ) sin(π(k ρ + μ)) k z . π k=0 k!

 |Γ(1−kρ−μ) | k Next we introduce a majorizing sequence π1 ∞ . By k=0 ck |z| , with ck = k! ck Stirling’s formula (2.7), limk→∞ ck+1 = limk→∞ |ρk+1 = ∞. Hence the auxiliary | ρ k −ρ sequence and also the series (3.24) converge over the whole complex plane C. The order and type follow analogously.  The asymptotic expansions are based on a suitable integral representation [Wri33] ∫ −ρ 1 Wρ,μ (z) = e ζ +zζ ζ −μ dζ, (3.29) 2πi C where C is any deformed Hankel contour. It follows from (2.6) by

76

3 Mittag-Leffler and Wright Functions

Wρ,μ (z) =

∫ ∫ ∞  −ρ zk 1 1 e ζ ζ −ρk−μ dζ = e ζ +zζ ζ −μ dζ . k! 2πi C 2πi C k=0

The interchange between series and integral is legitimate by the uniform convergence of the series, since Wρ,μ (z) is an entire function. We state only the asymptotic expansions without the detailed proof, taken from [Luc08, Theorems 3.1 and 3.2]. Such expansions were first studied by Wright, and then later refined [WZ99a, WZ99b] by including Stokes phenomena. These estimates can be used to deduce the distribution of its zeros [Luc00]. Theorem 3.7 The following asymptotic formulae hold: (i) Let ρ > 0, arg(z) = θ, |θ| ≤ π − ,  > 0. Then  M  (−1)m am 1+ρ 1 Z −μ −M−1 + O(|Z | ) , Wρ,μ (z) = Z 2 e ρ Zm m=0 1

Z → ∞,

θ

where Z = (ρ|z|) ρ+1 ei ρ+1 and the coefficients am , m = 0, 1, . . ., are defined as the coefficients of v 2m in the expansion Γ(m+ 12 ) 2π



2 ρ+1

m+ 12

(1 − v)−β 1 +

ρ+2 3 v

+

(ρ+2)(ρ+3) 2 v 3·4

+...

− 2m+1 2

.

(ii) Let −1 < ρ < 0, y = −z, arg(z) ≤ π, −π < arg(y) ≤ π, |arg(y)| ≤ min( 3(1+ρ) 2 π, π) − ,  > 0. Then   M−1  1 −μ −Y −m −M Wρ,μ (z) = Y 2 e AmY + O(Y ) , Y → ∞, m=0 1

with Y = (1 + ρ)((−ρ)−ρ y) 1+ρ and the coefficients Am , m = 0, 1, . . ., defined by Γ(1 − μ − ρt) 2π(−ρ)−ρt (1 + ρ)(1+ρ)(t+1) Γ(t + 1)  M−1  1 (−1)m Am +O = 1 Γ((1 + ρ)t + β + m=0 Γ((1 + ρ)t + μ + 2 + m)

 1 2

+ M)

,

valid for arg(t), arg(−ρt) and arg(1 − μ − ρt) all lying between −π and π and t tending to infinity. Remark 3.3 For ρ > 0, the case z = −x, x > 0 is not covered by Theorem 3.7. Then the following asymptotic formula holds: Wρ,μ (−x) = x p( 2 −μ) eσx 1

p

cos pπ

cos(( 12 − μ)pπ + σx p sin pπ)(c1 + O(x −p )), ρ

where p = (1 + ρ)−1 , σ = (1 + ρ)ρ− 1+ρ and the constant c1 can be evaluated exactly. Similarly, for − 13 < ρ < 0, the asymptotic expansion as z = x → +∞ is not covered,

3.2 Wright Function

77

and it is given by Wρ,μ (−x) = x p( 2 −μ) e−σx 1

p

cos pπ

cos(( 12 − μ)pπ − σx p sin pπ)(c2 + O(x −p )), ρ

where p = (1 + ρ)−1 , σ = (1 + ρ)(−ρ)− 1+ρ and the constant c2 can be evaluated exactly. It follows from Theorem 3.7 that for z ∈ C and | arg(z)| ≤ π −  (with 0 <  < π), 1−μ

Wρ,μ (z) = (2π(1 + ρ))− 2 (ρz) 1+ρ e 1

1 1+ρ 1+ρ ρ (ρz)

(1 + O(z − 1+ρ )) as z → ∞. 1

The Laplace transform of a Wright function is a Mittag-Leffler function (cf. also Lemma 3.2). The relation (3.31) can be found in [Sta70, (3), p. 114]. Note that for ρ > 0, the series representation of Wρ,μ (z) can be transformed term by term. However, for ρ ∈ (−1, 0), Wρ,μ (z) is an entire function of order great than 1, so that this approach is no longer legitimate. Thus care is required in this case in establishing the existence of Laplace transform, which the minus sign in the argument is to ensure, by the asymptotic formula valid about R− in Theorem 3.7. Proposition 3.3 The following Laplace transform relations hold:  z −1 Eρ,μ (−z −1 ), ρ ≥ 0, L[Wρ,μ (−x)](z) = E−ρ,μ−ρ (−z), − 1 < ρ < 0, −ρ

L[x μ−1Wρ,μ (−cx ρ )](z) = z −μ e−cz ,

∀c > 0.

(3.30) (3.31)

Proof The identity (3.30) is direct from the integral representation (3.29): ∫ ∞

1 ∫ −ρ L[Wρ,μ (−x)](z) = e−zx e ζ −xζ ζ −μ dζ dx 2πi C 0 ∫ ∫

∫ ∞ −ρ ζ −μ 1 1 = e ζ ζ −μ e−(z+ζ )x dx dζ = eζ dζ 2πi C 2πi C z + ζ −ρ 0  ∫ z −1 Eρ,μ (−z −1 ), ρ > 0, ζ ρ−μ 1 = eζ ρ dζ = 2πiz C ζ + z −1 E−ρ,μ−ρ (−z), 0 > ρ > −1, where changing integration order is justified by the asymptotics of the Wright function in Theorem 3.7, and the last identity is due to the integral representation (3.11) e−x , and thus of Eα,β (z). When ρ = 0, Wρ,μ (−x) = Γ(μ) L[Wρ,μ (−x)](z) = (Γ(μ)(z + 1))−1 = z −1 Eρ,μ (−z −1 ), i.e., the desired identity also holds. The identity (3.31) follows similarly by changing variables ξ = x −1 ζ and then applying the Cauchy integral formula

78

3 Mittag-Leffler and Wright Functions

∫ ∞

1 ∫ ρ −ρ L[x μ−1Wρ,μ (−cx ρ )](z) = e−zx x μ−1 e ζ −cx ζ ζ −μ dζ dx 2πi C 0 ∫ ∞ ∫ ∫ ∞

1 ∫ −ρ −ρ 1 = e−zx e xξ−cξ ξ −μ dξ dx = e(ξ−z)x dx e−cξ ξ −μ dξ 2πi C 2πi C 0 0 ∫ −ρ 1 −cξ −ρ −μ 1 e = ξ dξ = z −μ e−cz . 2πi C z − ξ 

This completes the proof of the proposition. The following convolution rule holds for x μ−1Wρ,μ (−cx ρ ). Corollary 3.4 For any c1, c2 > 0, the following convolution identity holds: x μ−1Wρ,μ (−c1 x ρ ) ∗ xν−1Wρ,ν (−c2 x ρ ) = x μ+ν−1Wρ,μ+ν (−(c1 + c2 )x ρ ). Proof By the Laplace transform relation (3.31), we have L[x μ−1Wρ,μ (−c1 x ρ ) ∗ xν−1Wρ,ν (−c2 x ρ )](z) ρ

=L[x μ−1Wρ,μ (−c1 x ρ )](z)L[xν−1Wρ,ν (−c2 x ρ )](z) = z −μ−ν e−(c1 +c2 )z . 

This directly shows the desired identity. We also have a Fourier transform relation. Proposition 3.4 For any ρ ∈ (−1, 0) and μ ∈ R, the following identity holds: F [Wρ,μ (−|x|)](ξ) = 2E−2ρ,μ−ρ (−ξ 2 ). Proof It follows from the series expansion cos ξ x = ∫ F [Wρ,μ (−|x|)](ξ) = =2



e−iξ x Wρ,μ (−|x|)dx = 2

−∞ ∞ 

(−1)k

k=0

Now we claim the identity ∫ ∞ 0

∞

ξ 2k (2k)!





0

xν−1Wρ,μ (−x)dx =

k=0 (−1)

∫ 0



2k

k (ξ x) (2k)!

that

Wρ,μ (−x) cos ξ xdx

x 2k Wρ,μ (−x)dx.

Γ(ν) . Γ(−ρν + μ)

(3.32)

Indeed, it follows from the integral representations of the Wright function Wρ,μ (−x) 1 , cf. (3.29) and (2.6), that and the reciprocal Gamma function Γ(z) ∫ −ρ 1 x Wρ,μ (−x)dx = x e ζ −xζ ζ −μ dζdx 2πi 0 0 C ∫ ∫ ∫ ∞  −ρ 1 Γ(ν) Γ(ν) ζ −μ −xζ ν−1 . = e ζ e x dx dζ = e ζ ζ νρ−μ dζ = 2πi C 2πi Γ(−ρν + μ) C 0 ∫



ν−1





ν−1

3.2 Wright Function

79

The change of integration order is legitimate due to the asymptotics of Wρ,μ (−|x|) in Theorem 3.7. Thus, F [Wρ,μ (−|x|)](ξ) = 2

∞  (−1)k k=0

ξ 2k = 2E−2ρ,μ−ρ (−ξ 2 ). Γ(−(2k + 1)ρ + μ) 

This completes the proof of the proposition.

3.2.2 Wright function Wρ,μ (−x) The function Wρ,μ (−x), with −1 < ρ < 0 and x ∈ R+ , is of independent interest in the study of the subdiffusion model in Chapter 7. This case has been extensively studied in Pskhu [Psk05b] (in Russian). Most of the estimates in Theorem 3.8(i) (and relevant lemmas) are taken from [Psk05b, Section 2.2.7]. We begin with a useful integral representation of Wρ,μ (−x). It allows deriving various useful upper bounds. Lemma 3.4 For x ≥ 0 and ρ ∈ (−1, 0), there holds with k(ϕ) = Wρ,μ (−x) =

1 π(1 + ρ)



π

r 1−μ

−ρ sin(1 + ρ − μ)ϕ

0

sin(−ρϕ)

+

sin(1+ρ)ϕ sin ϕ ,

sin μϕ −xr −ρ k(ϕ) e dϕ. sin ϕ

Proof We take a contour Γ = {z : z = r(ϕ)eiϕ, −π < ϕ < π}, with r(ϕ) = 1  x sin(−ρϕ)  1+ρ . Then we deform the Hankel contour C to Γ and obtain sin ϕ ∫ ∫ −ρ z −μ −xz −ρ e z e dz = ez z −μ e−xz dz. Γ

C

The parametrization of the contour Γ implies dz = (r (ϕ) + ir(ϕ))eiϕ dϕ, and by the choice of r(ϕ), r sin ϕ − xr −ρ sin(−ρϕ) = 0. Consequently, z − xz −ρ = r cos ϕ − xr −ρ cos(−ρϕ) = r(cos ϕ − sin ϕ cot(−ρϕ)) sin(1 + ρ)ϕ sin(1 + ρ)ϕ = xr −ρ . = −r sin(−ρϕ) sin ϕ Then by the definition of k(ϕ), Wρ,μ (−x) =

1 2πi



π

−π

r −μ eiϕ(1−μ) (r + ir)e−xr

−ρ k(ϕ)

dϕ.

Since r(ϕ) is even, and r (ϕ) is odd, we have eiϕ(1−μ) (r (ϕ) + ir(ϕ)) + e−iϕ(1−μ) (r (−ϕ) + ir(−ϕ)) =2i(eiϕ(1−μ) (r (ϕ) + ir(ϕ))) = 2i(r (ϕ) sin(1 − μ)ϕ + r(ϕ) cos(1 − μ)ϕ).

80

3 Mittag-Leffler and Wright Functions

Thus, the last integral can be reduced to ∫ −ρ 1 π −μ r (r sin(1 − μ)ϕ + r cos(1 − μ)ϕ)e−xr k(ϕ) dϕ. Wρ,μ (−x) = π 0 Since r (ϕ) =

r(ϕ) 1+ρ (−ρ cot(−ρϕ)

− cot ϕ), we obtain

r sin(1 − μ)ϕ + r cos(1 − μ)ϕ =

r −ρ sin(1 + ρ − μ)ϕ sin μϕ + . 1+ρ sin(−ρϕ) sin ϕ

Upon substituting the expression, we obtain the desired integral representation.  The next result gives the positivity and monotonicity of Wρ,μ (−x). (i) can also be found in [Sta70, Theorem 8]. Lemma 3.5 For ρ ∈ (−1, 0), the following assertions hold: (i) If μ ≥ 0, then Wρ,μ (−x) > 0 for all x ∈ R+ . (ii) If μ ≥ −ρ, Wρ,μ (−x) is monotonically decreasing in x over R+ . (iii) Let ρ ∈ (−1, 0), and γ ≥ μ ≥ −ρ. Then for any x ≥ 0, Wρ,γ (−x) ≤

Γ(μ) Wρ,μ (−x). Γ(γ)

Proof It is direct to verify the following identity: μ

x μ−1Wρ,μ (−x ρ ) = 0 Ix x −1Wρ,0 (−x ρ ), cf. (3.28). By the integral representation in Lemma 3.4, Wρ,0 (−x ρ ) > 0. This shows d Wρ,μ (−x) = −Wρ,μ+ρ (−x), cf. (3.26), (i). Similarly, (ii) follows from the identity dx 1

and the assertion of (i). Last, by changing variables y = x ρ , γ−μ μ−1

Wρ,γ (−x) = Wρ,γ (−y ρ ) = y 1−γ 0 Iy

y

Wρ,μ (−y ρ ),

cf. (3.28). Since μ ≥ −ρ, by (ii), the function Wρ,μ (−x) is monotonically decreasing, and hence for ρ ∈ (−1, 0), γ−μ μ−1 y Wρ,μ (−y ρ ) 0 Iy

γ−μ μ−1

≤ Wρ,μ (−y ρ )0 Iy

y

= Wρ,μ (−y ρ )yγ−1 Γ(γ)−1 Γ(μ).

The last two relations together imply (iii). Lemma 3.6 Let ρ ∈ (−1, 0). Then for any μ ≥ 0, x ≥ 0, there holds Wρ,μ (−x) ≤ Pn (x)Wρ,1 (−x) with Pn given by Pn (x) =

n  k=0

(−ρx)k , Γ(k(1 + ρ) + μ)

and n ∈ N0 is such that the condition μ + n(1 + ρ) ≥ 1 holds.



3.2 Wright Function

81

Proof The proof is based on mathematical induction. The case n = 0 follows directly from Lemma 3.5(iii). Next assume that the assertion holds for n = N − 1. Suppose μ+ N(1+ ρ) ≥ 1 for some integer N. Let μ = μ+1+ ρ so that μ +(N −1)(1+ ρ) ≥ 1. First we claim the recursion identity Wρ,μ (−x) = −ρxWρ,μ+1+ρ (−x) + μWρ,μ+1 (−x).

(3.33)

Actually, by the definition of the function Wρ,μ (z), − ρxWρ,μ+1+ρ (−x) + μWρ,μ+1 (−x) ∞ ∞   ρ(−x)k+1 μ(−x)k = + k!Γ(k ρ + μ + 1 + ρ) k=0 k!Γ(k ρ + μ + 1) k=0  ∞   ρ μ(−x)0 μ = + + (−x)k Γ(μ + 1) k=1 (k − 1)!Γ(k ρ + μ + 1) k!Γ(k ρ + μ + 1) (−x)k (−x)0  + = Wρ,μ (−x), Γ(μ) k=1 k!Γ(k ρ + μ) ∞

=

since by the recursion formula (2.2), μ 1 kρ + μ 1 ρ + = = . (k − 1)!Γ(k ρ + μ + 1) k!Γ(k ρ + μ + 1) k! Γ(k ρ + μ + 1) k!Γ(k ρ + μ) It follows from this identity, and the induction hypothesis that 

Wρ,μ (−x) ≤ (−ρx) =

N  k=1

N −1  k=0

 μ (−ρx)k + Wρ,1 (−x) Γ(k(1 + ρ) + μ + 1 + ρ) Γ(1 + μ)

1  (−ρx)k + Wρ,1 (−x). Γ(k(1 + ρ) + μ) Γ(μ)

This shows the assertion for n = N, completing the induction step.



Proposition 3.5 If ρ ∈ (−1, 0) and μ < 1, then for all x, y ≥ 0, the following 1 )): statements hold for any a ∈ (0, x), ξ ∈ [−ρ, 1] and ω ∈ ( 12 , min(1, −2ρ μ−1  1−ξ ξ +ρ μ−1  1 1+ρ y 1+ρ ] ξ , [cρ (ξ, ω)] ξ Γ 1−μ ξ [x ξπ μ−1 μ−1   1 (x − a) −ρ Wρ,1 (−ay ρ ), [cρ (ρ, ω)] −ρ Γ 1−μ |y μ−1Wρ,μ (−xy ρ )| ≤ −ρ −ρπ

|y μ−1Wρ,μ (−xy ρ )| ≤

where cρ (ξ, ω) = (1 + ρ)



1−ξ

− cos(ωπ) ξ1+ρ cos(−ρωπ) 1+ρ

ξ+ρ

1−ξ

.

82

3 Mittag-Leffler and Wright Functions

Proof By deforming the contour C in (3.29) to Γ,ωπ , Wρ,μ (−x) is represented by ∫ ωπ iϕ −ρ −iρϕ 1  1−μ e e −x e +(1−μ)ϕ dϕ Wρ,μ (−x) = 2πi −ωπ ∫ ∞ iω π −ρ −iρω π +(1−μ)ωπ −iω π −xr −ρ eiρω π −(1−μ)ωπ 1 + (r −μ er e −xr e − r −μ er e )dr. 2πi  Since μ < 1, upon sending  → 0+ , the first term vanishes. Thus, we obtain ∫ 1 ∞ −μ r cos ωπ−xr −ρ cos(−ρωπ) Wρ,μ (−x) = r e π 0 × sin(r sin ωπ − xr −ρ sin(−ρωπ) + (1 − μ)ωπ)dr. This directly implies |Wρ,μ (−x)| ≤

1 π





r −μ er cos ωπ−xr

−ρ

cos(−ρωπ)

dr.

0

B 1−γ ) , for all A, B ≥ 0 and γ ∈ [0, 1]. Now recall Young’s inequality, A+ B ≥ ( γA )γ ( 1−γ 1 1 Thus, with ω ∈ ( 2 , min(1, −2ρ )), there holds

cos ωπ γ x cos(−ρωπ) 1−γ r cos ωπ − xr −ρ cos(−ρωπ) ≤ − − r γ+(1−γ)(−ρ) . γ 1−γ γ

1−γ

x cos(−ρωπ) Hence, with ξ = γ + (1 − γ)(−ρ) and K = − cosγωπ r γ+(1−γ)(−ρ) , 1−γ 1 |Wρ,μ (−x)| ≤ π





r

−μ −Kr ξ

0

e

1 dr = ξπ



K ξ 1 − μ Γ . ds = ξπ ξ μ−1



s

1−μ ξ −1

e

−K s

0

Now the first assertion follows by substituting x with xy ρ . In particular, with the choices ξ = −ρ and ξ = 1, the inequality implies

1 − μ μ−1 μ−1 1 [cρ (ρ, ω)] −ρ Γ x −ρ , |y μ−1Wρ,μ (−xy ρ )| ≤ −ρπ −ρ 1 |y μ−1Wρ,μ (−xy ρ )| ≤ [cρ (1, ω)]μ−1 Γ(1 − μ)y μ−1, π with cρ (ρ, ω) =

cos(−ρωπ) −ρπ

and

cρ (1, ω) = −

cos(ωπ) . π

Next, using Corollary 3.4, we rewrite y μ−1Wρ,μ (−xy ρ ) as y μ−1Wρ,μ (−xy ρ ) = y μ−1Wρ,μ (−(x − a)y ρ ) ∗ y −1Wρ,0 (−ay ρ ). Consequently,

3.2 Wright Function

83

1 − μ μ−1 μ−1 1 [cρ (ρ, ω)] −ρ Γ (x − a) −ρ −ρπ −ρ

|y μ−1Wρ,μ (−xy ρ )| ≤



y

0

t −1Wρ,0 (−at ρ )dt. 

This directly gives the second assertion.

The next result gives useful bounds on Wρ,μ (−x), see also [KV13, Lemma 3.1] for (ii) and [Kra16, Lemma 2.1] for (iv). Theorem 3.8 For ρ ∈ (−1, 0), the following assertions hold for x ≥ 0: (i) If μ ≥ 1, then 0 < Wρ,μ (−x) ≤ Γ(μ)−1 e−(−ρ)

−ρ 1+ρ

1

(1+ρ)x 1+ρ

.

(ii) If μ ∈ (−ρ, 1], then −ρ 1  1+ρ  1+ρ 1+ρ 0 < Wρ,μ (−x) ≤ 1 + 21+ρ Γ(1 + μ + ρ)−1 (−ρ)1+ρ e−(1+ρ) e− 2 (−ρ) x .

(iii) For any μ ∈ (0, 1), there holds for all x > 0, |Wρ,μ (−x)| ≤ c(1 + x (iv) If μ ∈ R, there holds for any x > 0 |Wρ,μ (−x)| ≤ ce

1 −σx 1+ρ



μ−1 ρ

)−1 .

1,

μ  Z− ∪ {0},

x,

μ ∈ Z− ∪ {0}.

Proof The positivity in (i) and (ii) is direct from Lemma 3.5, and thus we prove only d sin(γϕ) upper bounds. We claim that for all γ ∈ (0, 1), dϕ sin ϕ ≥ 0 for ϕ ∈ [0, π]. Indeed, d sin(γϕ) γ cos(γϕ) sin ϕ − sin(γϕ) cos ϕ . = dϕ sin ϕ sin2 ϕ Clearly, γ cos(γϕ) sin ϕ − sin(γϕ) cos ϕ|ϕ=0 = 0, and

d (γ cos(γϕ) sin ϕ − sin(γϕ) cos ϕ) = (1 − γ 2 ) sin(γϕ) sin ϕ ≥ 0. dϕ

Thus the desired claim follows. Now by Lemma 3.4, for μ = 1, there holds ∫ −ρ 1 1 π −xr −ρ k(ϕ) 1+ρ 1+ρ Wρ,1 (−x) = e dϕ ≤ e−x (−ρ) (1+ρ) . π 0 The case μ > 1 follows from this and Lemma 3.5(iii), showing (i). Part (ii), i.e., μ ∈ [−ρ, 1]. We use the recursion formula (3.33). Since μ + ρ ≥ 0 and μ > 0, 1 + μ > 1 and 1 + μ + ρ ≥ 1. Thus, we can apply (i) to the two terms on the −ρ right-hand side and obtain with κ(ρ) = (−ρ) 1+ρ (1 + ρ),

84

3 Mittag-Leffler and Wright Functions 1 1+ρ

1 1+ρ

1 1+ρ

−ρxe−κ(ρ)x e−κ(ρ)x μe−κ(ρ)x + = Wρ,μ (−x) ≤ Γ(μ + 1) Γ(1 + μ + ρ) Γ(μ)

1 1+ρ

−ρxe−κ(ρ)x + . Γ(1 + μ + ρ)

For μ ∈ [−ρ, 1), there holds Γ(μ)−1 ≤ 1. Then direct computation leads to sup xe−

1 κ(ρ) 1+ρ 2 x

≤ 21+ρ (−ρ)ρ e−(1+ρ) .

x ∈R+

The last inequalities show (ii). (iii) is direct from the first estimate in Proposition 3.5. Last, for (iv), the estimate |Wρ,μ (−x)| ≤ ce−σx

1 1+ρ

,

∀μ ∈ R,

follows from (i)-(ii) and repeatedly applying the recursion (3.33). Indeed, for −1 < μ < 0, with m being the smallest integer such that μ + μ(1 + ρ) > 0, Wρ,μ (−x) = (−ρx)2Wρ,μ+2(1+ρ) (−x) − ρx μWρ,μ+1+ρ+1 (−x)) + μWρ,μ+1 (−x) = (−ρx)mWρ,μ+m(1+ρ) (−x) +

m−1 

μ(−ρx)k Wρ,μ+k(1+ρ) (−x).

k=0

The terms now can be bounded using triangle inequality and (i)-(ii). The case of general μ follows similarly. Thus the desired inequality for μ  Z− ∪ {0} is proved. For μ ∈ Z− ∪ {0}, we use the recursion formula (cf. (3.33)) Wρ,−k (−x) = −ρxWρ,−k+1+ρ (−x) − kWρ,−k+1 (−x). Using this formula and mathematical induction, we deduce Wρ,−k (−x) = −ρx

k 

(−1)k−m

m=0

k! Wρ,−m+1+ρ (−x). m!

Then the desired assertion follows.



3.3 Numerical Algorithms In this section, we describe algorithms for evaluating Eα,β (z) and Wρ,μ (z).

3.3.1 Mittag-Leffler function Eα,β (z) The computation of Eα,β (z) over the whole complex plane C is delicate, and has received much attention [GLL02, SH09]. We describe an algorithm from [SH09] and

3.3 Numerical Algorithms

85

the relevant error estimates. See also the work [Gar15, McL21] for the computation of Mittag-Leffler functions, by the inverse Laplace transform with deformed contours and suitable quadrature rules. Since in different regions of C, Eα,β (z) exhibits very different behavior, different numerical schemes are needed. First, we introduce some notation. We denote by D(r) = {z ∈ C : |z| ≤ r } the closed disk of radius r centered at the origin, and by W(φ1, φ2 ) and W(φ1, φ2 ) the wedges W(φ1, φ2 ) = {z ∈ C : φ1 < arg(z) < φ2 }, W(φ1, φ2 ) = {z ∈ C : φ1 ≤ arg(z) ≤ φ2 }, where φ2 −φ1 is the opening angle measured in a positive sense, and φ1, φ2 ∈ (−π, π). The complex plane C is divided into seven regions {Gi, i = 0, . . . , 6}, cf. Fig. 3.6.

Fig. 3.6 The partition of C, with the thick dashed lines being the Stokes line at arg(z) = απ.

On a disk of radius r0 < 1, i.e., G0 = D(r0 ), for any α > 0, the following Taylor series expansion N −1  zk (3.34) TN (z) = Γ(αk + β) k=0 gives a good approximation, provided that the truncation point N is determined suitably, e.g., by (3.40). In the algorithm, one chooses r0 = 0.95. Below we consider only the case α ∈ (0, 1), since the case α ≥ 1 can be reduced to the case α < 1 using a recursion relation, cf. Lemma 3.1. For large values of z ∈ C, the exponential asymptotics and Berry-type smoothing can be used to compute Eα,β (z). Specifically, we choose the constant r1 > r0 defined in (3.41), and employ the exponential asymptotic for |z| ≥ r1 . Away from the Stokes lines | arg(z)| = απ, the exponential asymptotics (3.12) and (3.13) are very accurate.

86

3 Mittag-Leffler and Wright Functions

However, close to the Stokes lines, these approximations become less stable, and so the Berry-type smoothed asymptotic series (3.15) and (3.16) are employed instead. Hence, the region C \ D(r1 ) is further divided into four subregions. Let δ and δ˜ be positive numbers smaller than α2 π. In the algorithm, δ and δ˜ are chosen to be δ = α8 π and δ˜ = min( α8 π, α+1 2 π). Then in different regions of C, the scheme employs different approximations as follows: ˜ −απ − δ). ˜ • The asymptotics (3.13) in the wedge G1 = [C \ D(r1 )] ∩ W(απ + δ, • The asymptotics (3.12) in the wedge G2 = [C \ D(r1 )] ∩ W(−απ + δ, απ − δ). • The Berry smoothing (3.16) in the area around the upper Stokes line G3 = ˜ [C \ D(r1 )] ∩ W(απ − δ, απ + δ). • The Berry smoothing (3.15) in the area around the lower Stokes line G4 = ˜ −απ + δ). [C \ D(r1 )] ∩ W(−απ − δ, In the exponentially improved asymptotics, there are two parameters, i.e., truncation point N of the series and a lower limit r1 of |z|, that have to be determined such that the error is smaller than  for |z| > r1 . The error bound in Theorem 3.9 suggests the optimal parameters. The optimal truncation for the series will increase like 1 ∼ α−1 |z| α . However, for z → ∞, the asymptotic series becomes better and better, so that we apply an upper limit of N < 100. In the algorithm, the constant c is set to   1 1 1 1 + , (3.35) c= ≈ 2π sin απ min{sin απ, sin ξ} π sin απ where the angle ξ describes the distance of z to the Stokes line. In the transition region D(r1 ) \ D(r0 ), we employ the integral representations (3.8) and (3.9) with properly deformed contours. We use the following two regions: • G5 = D(r1 ) ∩ W(− 56 απ, 56 απ) \ G0 • G6 = D(r1 ) ∩ W( 56 απ, − 56 απ) \ G0 . However, these formulas become numerically unstable when z is close to the contour path, due to the singularity of the integrand at r = |z|e±iθ . Hence, we take (3.9) with μ = απ and (3.8) with μ = 23 απ such that G+ (, απ) and G− (, 23 απ) have nonempty overlap, which allows one to avoid their use close to the contour path. The value of  is set to 12 , and thus lies in G1 , where the Taylor series is used for the calculation. Equation (3.9) is used with μ = απ for z ∈ G5 , and (3.8) with μ = 23 απ for z ∈ G6 . When evaluating the integrals, several cases arise. We distinguish the cases β ≤ 1 and β > 1. Next we give the explicit forms of the integrals involved in the computation. For z ∈ G5 , upon inserting the parametrization of the contour path for μ = απ and  = 12 yields for z ∈ G+ (, μ) that for β ≤ 1 ∫ ∞ Eα,β (z) = A(z; α, β, 0) + B(r; α, β, z, απ)dr, (3.36) 0

and for β > 1

3.3 Numerical Algorithms

87

∫ Eα,β (z) = A(z; α, β, 0) +

∞ 1 2

∫ B(r; α, β, z, απ)dr +

πα

−πα

C(ϕ; α, β, z, 12 )dϕ, (3.37)

where A(z; α, β, x) = α−1 z

1−β α

1

ez α

cos( αx )

, r sin[ω(r, φ, α, β) − φ] − z sin[ω(r, φ, α, β)] , B(r; α, β, z, φ) = π −1 A(r; α, β, φ) r 2 − 2r z cos φ + z 2 cos[ω(, ϕ, α, β)] + i sin[ω(, ϕ, α, β)]  A(; α, β, ϕ) , C(ϕ; α, β, z, ) = 2π (cos ϕ + i sin ϕ) − z ω(x, y, α, β) = x α sin(α−1 y) + α−1 (1 + α − β)y. 1

In the case β ≤ 1, we have applied the limit  → 0. These equations are used to calculate Eα,β (z) for z ∈ G5 . For z ∈ G6 , with a contour with θ = 23 απ, the integral representations read ∫ ∞ Eα,β (z) = B(r; α, β, z, 23 απ)dr, β ≤ 1, (3.38) 0

∫ Eα,β (z) =

∞ 1 2

∫ B(r; α, β, z, 23 απ)dr +

2 3 απ

− 23 απ

C(ϕ; α, βz, 12 )dϕ,

β > 1.

(3.39)

The integrand C(ϕ; α, β, z, ) is oscillatory but bounded over the integration interval. Thus the integrals over C(ϕ; α, β, z, ) can be evaluated numerically using an appropriate quadrature formula. The integrals over B(r; α, β, z, φ) involve unbounded intervals and have to be treated more carefully, especially suitable truncation of the integration interval is needed, which is specified in Theorem 3.9. The complete procedure is listed in Algorithm 2, which is used throughout this book. The following result summarizes the recommended algorithmic parameters for the given accuracy and also the errors incurred in the approximation [SH09]. The notation [·] denotes taking integral part of a real number. Theorem 3.9 The following error estimates hold. (i) Let  > 0. If |z| < 1 and N ≥ max



2−β α



+ 1,



ln( (1− |z |)) ln( |z |)



+1 ,

(3.40)

then the error RN := |Eα,β (z) − TN (z)| of the Taylor series approximation (3.34) is smaller than the given accuracy . (ii) Let α ∈ (0, 1). For N ≈ α−1 |z| α and 1

α  |z| ≥ r1 := − 2 ln c

(3.41)

the error term of the asymptotic series (3.12) and (3.13) fulfills the condition

88

3 Mittag-Leffler and Wright Functions −1 

N   RN (z) = Eα,β (z) − − k=1

 z −k   ≤ , Γ(β − αk)

where c is a constant dependent only on α, β, chosen according to (3.35). (iii) Suppose for φ = απ Rmax

 α ⎧ ⎪ , ⎨ max 1, 2|z|, − ln 6π ⎪

≥  α α π ⎪ , ⎪ max (| β| + 1) , 2|z|, − 2 ln 6( |β |+2)(2 |β |)|β | ⎩

β ≥ 0, β 1 then Compute Eα, β (z) by the recursive formula (3.17). end if Compute r1 by (3.41). if |z | < r1 then if | arg(z)| < 5απ/6 then Compute Eα, β (z) by (3.36) or (3.37). else Compute Eα, β (z) by (3.38) or (3.39). end if else if | arg(z)| ≥ απ + δ then Compute Eα, β (z) by (3.13). else if | arg(z)| ≤ απ − δ then Compute Eα, β (z) by (3.12). else if arg(z) > απ − δ and arg(z) < απ + δ then Compute Eα, β (z) by (3.16). else Compute Eα, β (z) by (3.15). end if end if

β ≥ 0, β < 0.

3.3 Numerical Algorithms

89

3.3.2 Wright function Wρ,μ (x) The computation of Wρ,μ (z) is fairly delicate. In principle, like Eα,β (z), it can be computed using power series for small values of the argument and a known asymptotic formula for large values, while for the intermediate case, using an integral representation. However, an algorithm that works for any z ∈ C with rigorous error bounds is still unavailable. For z ∈ R, some preliminary analysis has been done [Luc08], which we describe below. We have the following representation formulas, derived from the representation (3.29), by suitably deforming the Hankel contour C. Theorem 3.10 The following integral representations hold. (i) Let z = −x, x > 0. Then Wρ,μ (−x) is given by ∫ 1 ∞ Wρ,μ (−x) = K(ρ, μ, −x, r)dr, π 0 if −1 < ρ < 0 and μ < 1, or 0 < ρ < 12 , or ρ = 12 and μ < 1 + ρ, ∫ 1 ∞ Wρ,μ (−x) = e + K(ρ, μ, −x, r)dr. π 0 if −1 < ρ < 0 and μ = 1, and ∫ ∫ 1 ∞ 1 π ˜ Wρ,μ (−x) = K(ρ, μ, −x, r)dr + P(ρ, μ, −x, ϕ)dϕ, π 1 π 0 in all other cases, with −ρ

K(ρ, μ, x, r) = e−r+xr cos ρπ r −μ sin(xr −ρ sin ρπ + μπ), ˜ μ, x, ϕ) = ecos ϕ+x cos ρϕ+cos(μ−1)ϕ cos(sin ϕ − x sin ρϕ − sin(μ − 1)ϕ). P(ρ, (ii) Let z = x > 0. Then Wρ,μ (x) has the following integral representation: ∫ ⎧ 1 ∞ ⎪ −1 < ρ < 0, μ < 1, ⎪ ⎨ π 0 1 K(ρ, ⎪ ∫ ∞ μ, x, r)dr, Wρ,μ (x) = e +∫ π 0 K(ρ, μ, x, r)dr, ∫ −1 < ρ < 0, μ = 1, ⎪ ⎪ 1 ∞ K(ρ, μ, x, r)dr + 1 π P(ρ,  μ, x, ϕ)dϕ, otherwise, ⎪ π 0 ⎩π 1  are the same as before. where the functions K and P The integral representations in Theorem 3.10 give one way to compute Wρ,μ (x), x ∈ R. In principle, these integrals can be evaluated by any quadrature rule. However, the kernel K(ρ, μ, x, r) is singular with a leading order r −μ , with successive singularities. Hence, a direct treatment via numerical quadrature can be inefficient. Dependent on the parameter ρ, a suitable transformation might be useful. For example, for ρ > 0, a more efficient approach is to use the change of variable s = r −ρ , 1 i.e., r = s− ρ , and the transformed kernel is given by

90

3 Mittag-Leffler and Wright Functions

 μ, x, s) = (−ρ)−1 s(−ρ)−1 (−μ+1)−1 e−s K(ρ,

1 −ρ

+x cos(πρ)s

sin(x sin(πρ)s + π μ).

Example 3.2 The case directly relevant to subdiffusion is ρ = − α2 < 0, 0 < μ = 1 + ρ < 1 (and z = −x, x > 0). Then the kernel is given by  ρ, 1 + ρ, s) = (−ρ)−1 e−s K(x,

1 −ρ

+x cos(πρ)s

sin(x sin(πρ)s + (1 + ρ)π).

This kernel is free from grave singularities. The integral can be computed efficiently −1 using Gauss-Jacobi quadrature with the weight function s(−ρ) (−μ+1)−1 . In Fig. 3.7, we plot the function W−μ,1−μ (−|x|) over the interval [−5, 5], which is the basic building block of the fundamental solution for subdiffusion, see Proposition 7.1 in Chapter 7 for detailed discussions. Except for the case μ = 12 , for which it is infinitely differentiable, generally it shows a clear “kink” at the origin, indicating the limited smoothing property of related solution operators.

1

1 =1/8 =1/4

(-|x|) ,1-

0.5

0.5

W-

W-

,1-

(-|x|)

=3/8 =1/2

0 -5 -4 -3 -2 -1

0 x

1

2

3

4

5

0 -5 -4 -3 -2 -1

0 x

1

2

3

Fig. 3.7 The Wright function W−μ,1−μ (−|x |).

Exercises Exercise 3.1 Verify the identities in (3.2). Exercise 3.2 Prove the identity (3.3), and that it can be rewritten as ∫ z

2 2 2 e−s ds . E 1 ,1 (z) = ez 1 + √ 2 π 0 Exercise 3.3 Prove the following duplication formula for α > 0 and β ∈ R: E2α,β (z 2 ) = 12 (Eα,β (z) + Eα,β (−z)),

∀z ∈ C.

4

5

3.3 Numerical Algorithms

91

Exercise 3.4 Prove the order and type in Proposition 3.1. Exercise 3.5 Similar to Lemma 3.1, prove the following version of reduction formula: m−1 j 1 1  E mα ,β (z m e2πi m ), m ≥ 1, Eα,β (z) = m j=0 from the elementary formula m−1 

e

2πi jmk

j=0

=

m, 0,

if k ≡ 0 (mod m), if k  0 (mod m).

Exercise 3.6 Show that the condition 0 ≤ α ≤ 1 in Theorem 3.5 cannot be dropped. Exercise 3.7 Prove that for λ > 0, α > 0 and m ∈ N, dm Eα,1 (−λx α ) = −λx α−m Eα,α−m+1 (−λx α ). dx m β

Is there an analogous identity for R0Dx Eα,1 (−λx α ) of order β ≥ 0? Exercise 3.8 By Laplace transform, there holds for x > 0 ∫ ezx 1 α Eα,1 (−λx ) = dz. 2πi C z + λz 1−α (i) By collapsing the Hankel contour C, show ∫ e−r x λr α−1 sin απ ∞ Eα,1 (−λx α ) = dr. π (r α + λ cos απ)2 + (λ sin απ)2 0 (ii) Using part (i) to show Eα,1 (−λx α ) > 0

and

d Eα,1 (−λx α ) < 0, dx

∀x > 0.

(iii) Use Bernstein theorem in Theorem A.8 to prove that Eα,1 (−x α ) is completely monotone. Exercise 3.9 The exponential function et satisfies the identity eλ(s+t) = eλs eλt . Is there a similar relation for Eα,1 (λt), i.e., Eα,1 (λ(s + t)) = Eα,1 (λs)Eα,1 (λt)? Exercise 3.10 [BS05] Prove that for any α ∈ [0, 1], there holds ∫ 2x ∞ E2α,1 (−t 2 ) dt. Eα,1 (−x) = π 0 x2 + t2 Exercise 3.11 Show the following identity for β > 0 and t > 0:

92

3 Mittag-Leffler and Wright Functions





0

x2

e− 4t x β−1 Eα,β (x α )dx =



β

α

πt 2 E α , β+1 (t 2 ). 2

2

Exercise 3.12 Show the following Cristoffel-Darboux-type formula: ∫ t sγ−1 Eα,γ (ys α )(t − s)β−1 Eα,β (z(t − s)α )ds 0

yEα,γ+β (yt α ) − zEα,γ+β (zt α ) γ+β−1 t = , y−z where y  z are any complex numbers. Exercise 3.13 This exercise is concerned with completely monotone functions. (i) Let f be completely monotone, and h be nonnegative with a complete monotone derivative (which is sometimes called a Bernstein function). Prove that f ◦ h is completely monotone. (ii) Using the fact that x α is a Bernstein function, prove that the function f (x) = α e− |x | , α ∈ (0, 1), is completely monotone. (iii) Prove that for α ∈ (0, 1), Eα,1 (−x α ) is completely monotone using the complete monotonicity of Eα,1 (−x) in Theorem 3.5. Exercise 3.14 [MS01, Theorem 11] Prove that for any α, β > 0, the function Eα,β (x −1 ) is completely monotone on R+ . Exercise 3.15 [MS18a] This exercise is concerned with Turán-type inequality for Mittag-Leffler functions. Suppose α, β > 0 and x ∈ R+ . Let Eα,β (x) = Γ(β)Eα,β (x). (i) Show that the digamma function ψ(x) = (ln Γ(x)) =

Γ (x) Γ(x)

is concave.

(ii) Show that the function β → Eα,β (x) is logconvex on R+ . (iii) Prove the following Turán-type inequality Eα,β (z)Eα,β+2 (z) ≥ Eα,β+1 (z)2 .

Exercise 3.16 [MS18b] This exercise is concerned with the log-convexity of MittagLeffler function, which arises in the study of the maximum principle for two-point boundary value problems with a Djrbashian-Caputo fractional derivative. (i) For 0 < α ≤ 1, z ∈ C, z  0 and | arg z| > απ, there holds sin απ Eα,α+1 (z) = − απ

∫ 0

+∞

1

e−r α 1 dr − . z r 2 − 2r z cos απ + z 2

(ii) Prove the following integral identities for 0 < α ≤ 1 and x > 0:

3.3 Numerical Algorithms

93

∫ r α−1 e−xr sin απ ∞ dr + 1, x Eα,α+1 (−x ) = − π r 2α + 2r α cos(απ) + 1 0 ∫ ∞ r α e−xr sin απ x α−1 Eα,α (−x α ) = dr, π r 2α + 2r α cos(απ) + 1 0 ∫ ∞ r α−1 e−xr sin απ Eα,1 (−x α ) = dr. π r 2α + 2r α cos(απ) + 1 0 α

α

(iii) Using (ii) of this exercise to prove that for α ∈ (0, 1], the function x α Eα,α+1 (−x α ) is log concave, and Eα,1 (−x α ) and x α−1 Eα,α (−x α ) are log convex. Exercise 3.17 Prove the following identity: d 1 Wρ,μ (z) = (Wρ,μ−1 (z) + (1 − μ)Wρ,μ (z)). dz ρz Exercise 3.18 Determine the values for lim Wρ,μ (z) and

|z |→∞

lim zWρ,μ (z),

|z |→∞

where arg(z) > π − , for small . Exercise 3.19 The following version of the Wright function: Mμ (z) = W−μ,1−μ (−z) =

∞  k=0

(−1)k z k , k!Γ(1 − μ(k + 1))

with μ ∈ (0, 1), was first introduced by Francesco Mainardi [Mai96] in his study of time-fractional diffusion, unaware of the prior work of Wright [Wri40]. It is often called the Mainardi M-function or M-Wright function in the literature, and serves the family of similarity solutions for one-dimensional subdiffusion [GLM99, GLM00, MMP10]. Using the reflection identity (2.3) of Γ(z), it is equivalent to Mμ (z) =

∞ 1  (−z)k−1 Γ(k μ) sin(k μπ). π k=0 (k − 1)!

Prove that this function has the following properties: (i) The case μ =

1 2

recovers a Gaussian function M 1 (z) = 2

2

z √1 e− 4 π

.

(ii) The following asymptotic behavior holds [MT95] c

Mμ (x) ∼ Ax a e−bx , 1−2μ

x→∞ μ

with A = (2π(1 − μ)μ 1−μ )− 2 , a = (2μ − 1)(2 − 2μ)−1, b = (1 − μ)μ 1−μ and c = (1 − μ)−1 . 1

(iii) For μ ∈ (0, 1), there holds L[Mμ (x)](z) = Eμ,1 (−z).

94

3 Mittag-Leffler and Wright Functions

(iv) For μ ∈ (0, 1), the Fourier transform of Mμ (|x|) is given by F [Mμ (|x|)](ξ) = 2E2μ,1 (−ξ 2 ). Exercise 3.20 Prove Theorem 3.9(i).

Part II

Fractional Ordinary Differential Equations

Chapter 4

Cauchy Problem for Fractional ODEs

In this chapter, we discuss the existence, uniqueness and regularity of solutions to fractional ordinary differential equations (odes). The existence and uniqueness can be analyzed using the method of successive approximations and fixed point argument. The former, pioneered by Cauchy, Lipschitz, Peano and Picard for classical odes in various settings, shows the existence by constructing suitable approximations, and lends itself to a constructive algorithm, although not necessarily efficient. The latter is an abstraction of the former and converts the existence issue into the existence of a fixed point for a certain (nonlinear) map, often derived via suitable integral transform. It is often easier to analyze but less constructive. The equivalence between an ode and a suitable Volterra integral equation is key to both approaches. While these arguments largely parallel that of the classical ode theory, extra care is needed in the fractional case due to the often limited solution regularity. Generally, the regularity issue in the fractional case is more delicate and not yet fully understood. Notation: Throughout, the notation R∂tα and ∂tα denote the (left sided) RiemannLiouville and Djrbashian-Caputo fractional derivative, respectively, bases at t = 0, I = (0, T] for some T > 0, which will be evident from the context, and I its closure, L p (I) = L p (0, T), and C(I) = C(0, T].

4.1 Gronwall’s Inequalities Integral inequalities play an important role in the qualitative analysis of the solutions to (nonlinear) differential equations. The classical Gronwall inequality provides explicit bounds on solutions of a class of linear integral / differential inequalities. It is often used to establish a priori bounds which are then used in proving global existence, uniqueness and stability results. There are two different forms, the differential form of Thomas Gronwall [Gro19], and the integral form of Richard Bellman [Bel43]. Below we state only the integral form. First we recall the standard version: if the functions u, a, k ∈ C(I) for some 0 < T ≤ ∞ and are nonnegative, and satisfy © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 B. Jin, Fractional Differential Equations, Applied Mathematical Sciences 206, https://doi.org/10.1007/978-3-030-76043-4_4

97

98

4 Cauchy Problem for Fractional ODEs



t

u(t) ≤ a(t) +

k(s)u(s)ds,

t ∈ I,

0

then u(t) satisfies

∫ u(t) ≤ a(t) +

t

a(s)k(s)e

∫t s

k(τ)dτ

ds,

t ∈ I.

(4.1)

0

If, in addition, a(t) is nondecreasing, then u(t) ≤ a(t)e

∫t 0

k(s)ds

,

t ∈ I.

By an approximation argument, the condition k ∈ C(I) can be relaxed to k ∈ L 1 (I), and we have the following result. Below for any 1 ≤ p ≤ ∞, we denote by p

L+ (I) = {u ∈ L p (I) : u(t) ≥ 0 a.e. I}, and the inequality should be understood in almost everywhere (a.e.) sense. There is also a uniform Gronwall’s inequality; see Exercise 4.2. Theorem 4.1 Let a(t) ∈ L+∞ (I) be nondecreasing, and k(t) ∈ L+1 (I). If u ∈ L+∞ (I) satisfies ∫ t k(s)u(s)ds, a.e. t ∈ I. u(t) ≤ a(t) + 0

Then there holds u(t) ≤ a(t)e

∫t 0

k(s)ds

a.e. t ∈ I.

Now we develop several variants of the inequality involving weak singularities, e.g., a(t) = t −α , with α ∈ (0, 1), which appears often in the study of fractional odes. Such inequalities were first systematically developed by Daniel Henry [Hen81, Chapter 7], using an iterative process and Mittag-Leffler function Eα,β (z), cf. (3.1) in Chapter 3. The next result [YGD07, Theorem 1] is one useful version. Case (ii) can be found in Henry [Hen81, Lemma 7.1.1] and is well known. There is also an extended version involving an additional L p integrable factor inside the integral [LY20, Lemma 2.4]; see Exercise 4.4 for details. Theorem 4.2 Let β > 0, a(t) ∈ L+1 (I), and 0 ≤ b(t) ∈ C(I) be nondecreasing. Let u(t) ∈ L+1 (I) satisfy β u(t) ≤ a(t) + b(t)0 It u(t), t ∈ I. Then there holds ∫ t ∞ (b(t))n (t − s)nβ−1 a(s)ds, u(t) ≤ a(t) + Γ(nβ) 0 n=1 In particular, the following assertions hold. (i) If a(t) is nondecreasing on I and b(t) ≡ b, then

t ∈ I.

4.1 Gronwall’s Inequalities

99

u(t) ≤ a(t)Eβ,1 (bt β ) on I. (ii) If a(t) ≡ at −α , with α ∈ (0, 1) and a > 0, and b(t) ≡ b > 0, then u(t) ≤ aΓ(1 − α)Eβ,1−α (bt β )t −α

on I.

β

Proof For any v ∈ L+1 (I), let Av(t) = b(t)0 It v(t). Then the condition u(t) ≤ a(t) + Au(t) and the nonnegativity of a and b imply u(t) ≤

n−1 

Ak a(t) + An u(t).

k=0 nβ

The assertion follows from the claims that An u(t) ≤ b(t)n 0 It u(t) and An u(t) → 0 as n → ∞ for each t ∈ I. The first claim holds for n = 1. Assume that it is true for some n ≥ 1. Then for n + 1, the induction hypothesis implies An+1 u(t) ≤ β nβ b(t)0 It (b(·)n 0 It u). Since b(t) is nondecreasing, by Theorem 2.1, β



An+1 u(t) ≤ b(t)n+1 0 It 0 It u = b(t)n+1 0 It

(n+1)β

u(t).



This shows the first claim. Further, since An u(t) ≤ b nL ∞ (I) 0 It u(t) → 0 as n → ∞ for any t ∈ I, this shows the second claim. (i) follows similarly as u(t) ≤ a(t)

∞ 



bn (0 It 1)(t) = a(t)

n=0

∞  n=0

(bt β )n = a(t)Eβ,1 (bt β ), Γ(nβ + 1)

using the definition of Eβ,1 (z). Similarly, (ii) follows from the elementary identity ∫ t Γ(1 − α) 1 t kβ−α (t − s)kβ−1 s−α ds = Γ(k β) 0 Γ(k β − α + 1) and the definition of the Mittag-Leffler function Eβ,α (z), cf. (3.1) in Chapter 3.



Next we give several Gronwall type inequalities involving double singularities [Web19b, Theorem 3.2]. We begin with case of constant coefficients. For a function u ∈ L ∞ (I), the nondecreasing function u∗ is defined by u∗ (t) = esssups ∈[0,t] u(s). Theorem 4.3 Let a ≥ 0 and b > 0, and suppose that β > 0, γ ≥ 0 and β + γ < 1, and the function u ∈ L+∞ (I) satisfies ∫ t u(t) ≤ a + b (t − s)−β s−γ u(s)ds, t ∈ I. 0

For r > 0, let tr = [bB(1 − β, 1 − γ)]− 1−β−γ r 1−β−γ . Then, if r ≤ bB(1 − β, 1 − γ)T 1−β−γ and also r < 1, there holds 1

1

−β btr

u(t) ≤ a(1 − r)−1 e (1−r )(1−γ) t

1−γ

,

t ∈ I.

(4.2)

100

4 Cauchy Problem for Fractional ODEs

Further, for r∗ = (1 − γ)−1 β, there holds −1

u(t) ≤ a(1 − γ)(1 − β − γ) e

−β btr∗ 1−γ 1−β−γ t

,

t ∈ I.

(4.3)

−β

Moreover, the exponent (1 − β − γ)−1 btr∗ is optimal in the sense that it is the smallest possible for admissible choices of r. ∫t Proof Let cβ,γ = B(1 − β, 1 − γ) and v(t) = a + b 0 (t − s)−β s−γ u(s)ds. Since u ∈ L ∞ (I) and β + γ < 1, v is continuous, u(t) ≤ v(t) for a.e. t ∈ I and ∫ t v(t) ≤ a + b (t − s)−β s−γ v(s)ds, t ∈ I. 0

Clearly, it suffices to prove (4.2) and (4.3) for v, and we may assume that u is continuous. Choose r ≤ bcβ,γ T 1−β−γ with r < 1, hence tr ≤ T. Let t ∈ I and fix any ξ ∈ (0, t]. If ξ ≤ tr , then from the identity ∫ ξ (ξ − s)−β s−γ ds = cβ,γ ξ 1−β−γ (4.4) 0

u∗

and the definitions of ∫ u(ξ) ≤ a + b

ξ

and tr , we deduce ∫

−β −γ

ξ

(ξ − s) s u(s)ds ≤ a + b

0

(ξ − s)−β s−γ u∗ (t)ds

0

= a + bcβ,γ ξ 1−β−γ u∗ (t) ≤ a + bcβ,γ tr

1−β−γ ∗

u (t) = a + ru∗ (t).

Next, if tr ≤ ξ ≤ t, then ∫ ξ−tr ∫ −β −γ u(ξ) ≤ a + b (ξ − s) s u(s)ds + b

ξ

ξ−tr

0

(4.5)

(ξ − s)−β s−γ u(s)ds.

−β

Using the inequalities (ξ − s)−β ≤ tr and s−γ ≤ (s − ξ + tr )−γ in the first and second integrals, respectively, and applying the identity (4.4), we obtain ∫ ξ ∫ ξ−tr −β u(ξ) ≤ a + btr s−γ u(s)ds + b (ξ − s)−β (s − ξ + tr )−γ u∗ (t)ds ξ−tr

0

−β



≤ a + btr

0

ξ−tr

s−γ u∗ (s)ds + bcβ,γ tr

1−β−γ ∗

u (t),

and consequently, u(ξ) ≤ a +

−β btr



t

s−γ u∗ (s)ds + ru∗ (t).

(4.6)

0

It follows from (4.5) that (4.6) holds for t ∈ I and for all ξ ∈ I. Hence by taking the supremum for ξ ∈ [0, t], since r < 1, we obtain

4.1 Gronwall’s Inequalities

101 ∗

(1 − r)u (t) ≤ a +

−β btr



t

s−γ u∗ (s)ds,

0

i.e., −β

u∗ (t) ≤ a(1 − r)−1 + b(1 − r)−1 tr



t

s−γ u∗ (s)ds.

0

Then Theorem 4.1 implies −β btr

u(t) ≤ u∗ (t) ≤ a(1 − r)−1 e (1−r )(1−γ) t

1−γ

,

t ∈ I.

Now we show that the optimality of the choice r∗ := (1 − γ)−1 β in the sense that −β the exponent [(1 − r)(1 − γ)]−1 btr is smallest possible. The set of admissible r β 1−β−γ ]. Define a function f by f (r) := (1 − r)tr = (1 − is the interval (0, bcβ,γ T β

β

r)r 1−β−γ (bcβ,γ )− 1−β−γ . By differentiating f , we deduce f (r) = 0 at r∗ = (1 − γ)−1 β and it is a maximum of f , and we have (1 − r∗ )(1 − γ) = 1 − β − γ. Note that the choice r = r∗ is only valid if r∗ ≤ bcβ,γ T 1−β−γ , i.e., bcβ,γ T 1−β−γ ≥ (1 − γ)−1 β. But, if bcβ,γ T 1−β−γ < (1 − γ)−1 β, we have for any ξ ≤ t ≤ T, ∫ ξ (ξ − s)−β s−γ u(s)ds = a + bcβ,γ ξ 1−β−γ u∗ (t) u(ξ) ≤ a + b 0

≤ a + bcβ,γ T 1−β−γ u∗ (t) < a + (1 − γ)−1 βu∗ (t). Taking the supremum for ξ ∈ [0, t] gives u(t) ≤ u∗ (t) ≤ a(1 − β − γ)−1 (1 − γ) for  t ∈ I. Thus the conclusion holds also in this case. Next we relax a, b to be bounded functions instead of constants using a∗ and b∗ . Corollary 4.1 Let β > 0, γ ≥ 0 and β + γ < 1, and a, b ∈ L+∞ (I). If u ∈ L+∞ (I) satisfies ∫ t (t − s)−β s−γ u(s)ds, t ∈ I, u(t) ≤ a(t) + b(t) 0

then with r∗ = (1 − β −

γ)−1 (1

− γ),

u(t) ≤ r∗ a∗ (t)e

−β b ∗ (t )tr∗ 1−γ 1−β−γ t

,

t ∈ I.

Proof For any t1 ∈ I, the following inequality holds: ∫ t (t − s)−β s−γ u(s)ds, u(t) ≤ a∗ (t1 ) + b∗ (t1 )

t ∈ [0, t1 ].

0 −β b ∗ (t1 )tr∗ 1−γ

Then Theorem 4.3 gives u(t) ≤ r∗ a∗ (t1 )e 1−β−γ t , for t ∈ [0, t1 ]. This estimate holds with u∗ (t1 ) on the left and t replaced by t1 on the right. Since t1 is arbitrary in I, the desired assertion follows.  Example 4.1 If a function u ∈ L+∞ (I) satisfies

102

4 Cauchy Problem for Fractional ODEs

∫ u(t) ≤ a + b

t

(t − s)− 2 u(s)ds 1

on I,

0

then by Theorem 4.3, we have u(t) ≤ 2ae8b t for a.e. t ∈ I. In fact, with β = 12 and γ = 1 −β 0, tr∗ = (16b2 )−1 and (1− β)−1 btr∗ = 8b2 . Theorem 4.2 gives u(t) ≤ aE 1 ,1 (bt 2 Γ( 12 )). 2

2

Using a property of E 1 ,1 (z) in (3.3), we have u(t) ≤ aE 1 ,1 (bt 2 Γ( 12 )) ≤ 2ae πb t . This 1

2

2

2

bound is sharper than u(t) ≤ 2ae8b t , but of the same type. 2

Next we discuss the case where the function u may be singular. The next result is immediate from Theorem 4.3. Corollary 4.2 Let a, b ≥ 0, and c > 0. Let 0 < α, β < 1, γ ≥ 0 with α + β + γ < 1. Suppose that a function u(t)t α ∈ L+∞ (I) and u satisfies ∫ t (t − s)−β s−γ u(s)ds, t ∈ I. u(t) ≤ at −α + b + c 0

Then with r∗ = (1 − α − γ)−1 β and cα,β,γ = (1 − γ − α)(1 − α − β − γ)−1 , −β c t r∗

u(t) ≤ cα,β,γ (at −α + b)e 1−α−β−γ t

1−γ

,

t ∈ I.

Proof Let v(t) = t α u(t) so that v ∈ L ∞ (I) and v satisfies ∫ t v(t) ≤ a + bt α + ct α (t − s)−β s−(γ+α) v(s)ds. 0

Then applying Corollary 4.1 with α + γ in place of γ, we obtain α

v(t) ≤ cα,β,γ (a + bt )e

t

−β

r∗ ct α 1−α−β−γ t 1−γ−α

.

Then the desired assertion follows by dividing both sides by t α .



The next result refines Corollary 4.2 with a few leading singular terms, under weaker conditions on the exponents α, β and γ [Web19b, Theorem 3.9]. Theorem 4.4 Let a, b ≥ 0 and c > 0 be constants. Let 0 < α, β, γ < 1 with α + γ < 1 and β + γ < 1. Suppose that a function u(t)t α ∈ L+∞ (I) satisfies ∫ t u(t) ≤ at −α + b + c (t − s)−β s−γ u(s)ds, t ∈ I. 0

Then, for t ∈ I, u(t) ≤ at −α + acb1 t −α+1−β−γ + ac2 b1 b2 t −α+2(1−β−γ) + . . . −β c t r∗

+ (1 − γ)(1 − β − γ)−1 (b + ac m b1 b2 . . . bm t −α+m(1−β−γ) )e− 1−β−γ t

1−γ

4.1 Gronwall’s Inequalities

103

with m = (1 − β − γ)−1 α , r∗ = (1 − γ)−1 β, and for n ∈ N, bn := b(1 − β, 1 − α − γ + (n − 1)(1 − β − γ)). Proof The proof is based on mathematical induction. Let w0 (t) = u(t), c0 = ∫t w0 (t)t α  L ∞ (I) and w1 (t) := b + c 0 (t − s)−β s−γ w0 (s)ds. Then w0 (t) ≤ at −α + w1 (t),

t ∈ I.

(4.7)

First, direct computation gives ∫ t w1 (t) ≤ b + cc0 (t − s)−β s−(α+γ) ds = b + cc0 t 1−β−α−γ b1 . 0

Thus, w1 ∈ L ∞ (I) if α + β + γ ≤ 1, and t α+β+γ−1 w1 (t) ∈ L ∞ (I) when α + β + γ > 1. If α + β + γ ≤ 1, then w1 satisfies ∫ t w1 (t) ≤ b + c (t − s)−β s−γ (as−α + w1 (s))ds 0 ∫ t 1−α−β−γ b1 + c (t − s)−β s−γ w1 (s)ds. (4.8) = b + act 0

By Corollary 4.1 (since 1 − α − β − γ ≥ 0), we obtain −β c t r∗

w1 (t) ≤ (1 − γ)(1 − β − γ)−1 (b + act 1−α−β−γ b1 )e 1−β−γ t

1−γ

,

t ∈ I.

Thus, when α + β + γ ≤ 1, we obtain the desired assertion. Meanwhile, when ∫t α + β + γ > 1, we define w2 by w2 (t) := b + c 0 (t − s)−β s−γ w1 (s) ds, and we have t α+β+γ−1 w1 (t) ≤ c1 for some c1 > 0, i.e., w1 (t) ≤ c1 t 1−α−β−γ, and so w2 (t) ≤ b + cc1 b2 t −α+2(1−β−γ) . Thus, if 2(β+γ)+α ≤ 2, then w2 ∈ L ∞ (I), and if 2β+2γ +α > 2, then t 2β+2γ+α−2 w2 ∈ L ∞ (I). When 2β + 2γ + α ≤ 2, i.e., −α + 2(1 − β − γ) ≥ 0, we deduce (4.9) w1 (t) ≤ acb1 t −α+1−β−γ + w2 (t). Therefore, w2 satisfies ∫

t

(t − s)−β s−γ w1 (s)ds ∫ t 2 −α+2(1−β−γ) +c (t − s)−β s−γ w2 (s)ds. ≤ b + ac b1 b2 t

w2 (t) ≤ b + c

0

0

Since −α + 2(1 − β − γ) ≥ 0 and w2 ∈

L ∞ (I),

by Corollary 4.1, we deduce −β c t r∗

w2 (t) ≤ (1 − γ)(1 − β − γ)−1 (b + ac2 b1 b2 t −α+2(1−β−γ) )e 1−β−γ t

1−γ

,

t ∈ I.

From (4.7) and (4.9), this gives, for −α + 2(1 − β − γ) ≥ 0, the desired assertion. Note that the procedure can be repeated. At each step, we gain 1 − β − γ in the exponent

104

4 Cauchy Problem for Fractional ODEs

and the process can be continued for a finite number of steps until the power of t α . Thus we obtain the desired assertion.  becomes nonnegative, i.e., for m = 1−β−γ

4.2 ODEs with a Riemann-Liouville Fractional Derivative In this section, we describe the solution theory for the following Cauchy problem with a Riemann-Liouville fractional derivative (with n − 1 < α < n, n ∈ N)  R α ∂t u = f (t, u), t ∈ I, (4.10) R α−j ( ∂t u)(0) = c j , j = 1, . . . , n with {c j } nj=1 ⊂ R. The notation R∂tα−n u(0) should be identified with 0 Itn−α u(0), and α−j

α−j

(R∂t u)(0) = limt→0+ (R∂t u)(t). The initial conditions are inherently nonlocal, which are needed for the well-posedness of the problem, but their physical interpretation is generally unclear. The Djrbashian-Caputo fractional derivative ∂tα u does not have this drawback. First we characterize the nonlocal initial condition 0 It1−α u(0) = c [KST06, Lemmas 3.2 and 3.5] and [BBP15, Theorems 6.1 and 6.2]. Proposition 4.1 Let 0 < α < 1, and u ∈ L 1 (I) ∩ C(I). (i) If there exists a limit limt→0+ t 1−α u(t) = cΓ(α)−1 , then there also exists a limit limt→0+ 0 It1−α (t) = c. (ii) If there exists a limit limt→0+ 0 It1−α (t) = c and if there exists the limit limt→0+ t 1−α u(t), then limt→0+ t 1−α u(t) = cΓ(α)−1 . Proof (i) If limt→0+ t 1−α u(t) = cΓ(α)−1 , then for any  > 0, there exists δ > 0 such that t ∈ (0, δ) implies |t 1−α u(t) − cΓ(α)−1 | <  Γ(α)−1 , or (c − )Γ(α)−1 t α−1 < u(t) < (c + )Γ(α)−1 t α−1 , for t ∈ (0, δ). This implies |u(t)| < Γ(α)−1 (|c| + )t α−1 for any −1 α−1 −α t ∈ (0, δ). Hence, for 0 < s < t < δ, (t − s)−α |u(s)| < (|c| ∫ t + )Γ(α) s (t − s) . −α Since the right-hand side is integrable, the integral 0 (t − s) u(s)ds converges absolutely for any t ∈ (0, δ). Hence, ∫ t ∫ ∫ c− t c+ t (t − s)−α s α−1 ds ≤ (t − s)−α u(s)ds ≤ (t − s)−α s α−1 ds. Γ(α) 0 Γ(α) 0 0 Direct computation gives that for any t ∈ (0, δ), c −  ≤ 0 It1−α u(t) ≤ c + , i.e., | 0 It1−α u(t) − c| <  . (ii) Since u ∈ C(I) ∩ L 1 (I) and α ∈ (0, 1), we deduce that 0 It1−α u(t) exists at each t ∈ [0, T]. Since limt→0+ 0 It1−α u(t) = c, for each  > 0, there exists δ > 0 such that | 0 It1−α u(t) − c| <  or c −  < 0 It1−α u(t) < c + , for t ∈ (0, δ). Applying the operator α −1 α 0 It to both sides, by Theorem 2.1, gives that for any t ∈ (0, δ), (c − )Γ(α + 1) t ≤ 1 −1 α 0 It u(t) ≤ (c + )Γ(α + 1) t , i.e., ∫ t −α u(s)ds = c. lim+ Γ(α + 1)t t→0

0

4.2 ODEs with a Riemann-Liouville Fractional Derivative

105

∫t Since u ∈ L 1 (I), 0 u(s)ds → 0 as t → 0+ , and for α ∈ (0, 1), t α → 0 as t → 0+ . Since u ∈ C(I), and t α−1  0 in a neighborhood of t = 0, L’Hôpital’s rule gives ∫ t −α u(s)ds = Γ(α) lim+ t 1−α u(t). lim+ Γ(α + 1)t t→0

t→0

0



This completes the proof of the proposition.

Remark 4.1 The sufficiency part requires only u ∈ L 1 (I). The necessity employs the L’Hôpital’s rule, which assumes the limit exists and thus stronger conditions. In view of Proposition 4.1, for α ∈ (0, 1), problem (4.10) may be rewritten as R α ∂t u

= f (t, u),

t ∈ I,

with lim+ t 1−α u(t) = c. t→0

The theory developed below is also applicable to this problem. Now we prove existence and uniqueness for problem (4.10), for which, parallel to classical odes, there are several different ways: reduction to Volterra integral equations, Laplace transform and operational calculus. We employ the reduction to Volterra integral equations and then apply a version of fixed point theorem (e.g., Banach contraction mapping theorem) to obtain a unique solution. This key step is to establish the equivalence between the fde and the Volterra integral equation. This idea, if f (t, u) is Lipschitz in u and bounded, goes back to Pitcher and Sewell [PS38] in 1938 for R∂tα u = f (t, u) (with 0 < α < 1). However, the integral operator used in [PS38] to invert the fractional derivative omitted an essential term from the initial value. In 1965, Al-Bassam [AB65] established the existence of a global continuous solution, using the correct Volterra integral equation and the method of successive approximations. Nonetheless, the boundedness assumption on f is restrictive, excluding even f (t, u) = u. In 1996, Delbosco and Rodino [DR96] considered a problem with u(0) = c0 instead of 0 It1−α u(0+ ) = c0 , and proved local existence under a continuity assumption and also uniqueness under a global Lipschitz continuity assumption. The work [HTR+99] obtained the same result for the initial condition 1−α u(0+ ) = c , by the same argument. Kilbas et al [KBT00] established existence 0 It 0 and uniqueness results in spaces of integrable functions. One main step is to prove the equivalence between the solutions of the formulations, which is closely connected with mapping properties of fractional integral and derivative in the spaces of interest. Below we state two equivalence results under slightly different assumptions on f (t, u(t)), under which problem (4.10) is equivalent to the following Volterra integral equation ∫ t n  c j t α−j 1 + (t − s)α−1 f (s, u(s)) ds. (4.11) u(t) = Γ(α − j + 1) Γ(α) 0 j=1 The first result works in the space L 1 (I): if u ∈ L 1 (I) with 0 Itn−α u ∈ AC n (I) and f satisfies t → f (t, u(t)) ∈ L 1 (I). The result imposes only integrability of f ,

106

4 Cauchy Problem for Fractional ODEs

needed for defining 0 Itn−α f (·, u(·)), but no boundedness assumption. The latter is problematic for problem (4.10), due to the presence of the factor O(t α−n ). Proposition 4.2 Let n − 1 < α < n, n ∈ N, u ∈ L 1 (I) with 0 Itn−α u ∈ AC n (I) and t → f (t, u(t)) ∈ L 1 (I). Then u solves problem (4.10) if and only if it solves (4.11). Proof Given a function u ∈ L 1 (I) with 0 Itn−α u ∈ AC n (I), by assumption, g(t) := f (t, u(t)) ∈ L 1 (I). Thus, 0 Itn−α u(t) is n times differentiable a.e. on I, and by (4.10), g(t) = R∂tα u(t) = (0 Itn−α u(t))(n) ∈ L 1 (I). Integrating the identity yields ∫ t g(s)ds = (0 Itn−α u)(n−1) (t) − (0 Itn−α u)(n−1) (η), 0 < η < t ≤ T . η

Taking the limit η → 0+ and using the initial condition lead to 0 It1 g(t) = (0 Itn−α u)(n−1) (t) − c1 . Performing the integration n times leads to n−α u(t) = 0 It

n  j=1

c j t n−j + 0 Itn g(t). Γ(n − j + 1)

Since g(t) ∈ L 1 (I), applying 0 Itα to both sides and using Theorem 2.1 give n 0 It u(t)

=

n  j=1

c j t α+n−j + 0 Itn+α g(t). Γ(α + n − j + 1)

Differentiating both sides n times gives (4.11). Next we show the converse. By hypothesis, u(t) solves (4.11). Applying 0 It1−α to both sides of (4.11) gives n−α u(t) 0 It

= 0 Itn−α

n  j=1

=

n  j=1

c j t α−j  + 0 Itn−α 0 Itα f (t, u(t)) Γ(α − j + 1)

c j t n−j + 0 Itn f (t, u(t)). Γ(n − j + 1)

Since f (t, u(t)) ∈ L 1 (I), 0 Itn−α u(t) ∈ AC n (I). By the absolute continuity of Lebesgue integral and repeated differentiation, we deduce limt→0+ (0 Itn−α )n−j u(t) = c j , j = 1, . . . , n. Further, differentiating both sides n times yields R∂tα u(t) = f (t, u(t)) a.e. I, and thus it solves (4.10). This completes the proof of the proposition.  In Proposition 4.2, the conditions f → f (t, u(t)) ∈ L 1 (I) for u ∈ L 1 (I) and ∈ AC n (I) are central. The former holds for every u ∈ L 1 (I) if and only if | f (t, u)| ≤ a(t) + b|u| for all (t, u) for some a ∈ L 1 (I) and b > 0. Now we can give a unique existence result for problem (4.10).

n−α u 0 It

Theorem 4.5 Let n − 1 < α < n, n ∈ N, J ⊂ R be an open subset, f : I × J → R be a function such that f (t, u) ∈ L 1 (I) for any u ∈ J and Lipschitz in u: | f (t, u) − f (t, v)| ≤ L|u − v|,

∀u, v ∈ J, t ∈ I,

(4.12)

4.2 ODEs with a Riemann-Liouville Fractional Derivative

107

where L > 0. Then there exists a unique solution u ∈ L 1 (I) to problem (4.10). Proof We give two slightly different proofs, both based on Banach fixed point theorem, cf. Theorem A.10 in the appendix. (i) By Proposition 4.2, it suffices to show the unique existence of a solution u ∈ L 1 (I) to problem (4.11). Clearly, (4.11) makes sense for any [0, t1 ], t1 < T. Choose t1 > 0 such that Lt1α < Γ(α + 1). We define an operator A : L 1 (I) → L 1 (I) by Au = u0 (t) + 0 Itα f (t, u(t)),

with u0 (t) =

n  j=1

cj t α−j . Γ(α − j + 1)

(4.13)

Clearly, u0 ∈ L 1 (0, t1 ). By assumption, f (t, u(t)) ∈ L 1 (I), 0 Itα f (t, u(t)) ∈ L 1 (I), cf. Theorem 2.2(i). Hence Au ∈ L 1 (I). Next, by the Lipschitz continuity of f (t, u) in u in (4.12) and the estimate in Theorem 2.2(i), we deduce  Au1 − Au2  L 1 (0,t1 ) ≤  0 Itα ( f (t, u1 (t)) − f (t, u2 (t))) L 1 (0,t1 ) ≤ L 0 Itα |u1 (t) − u2 (t)| L 1 (0,t1 ) ≤ Γ(α + 1)−1 Lt1α u1 − u2  L 1 (0,t1 ) . By the choice of t1 , A is contractive on L 1 (0, t1 ). Hence, by Banach fixed point theorem, there exists a unique solution u ∈ L 1 (0, t1 ) to (4.11) on [0, t1 ]. (Further, the solution u can be obtained as the limit of the convergence sequence Am u0 .) Next consider the interval [t1, 2t1 ] (assuming 2t1 ≤ T). Then we rewrite (4.11) as ∫ t 1 (t − s)α−1 f (s, u(s))ds u(t) = u1 (t) + Γ(α) t1 ∫ t1 1 with u1 (t) = u0 (t) + Γ(α) (t − s)α−1 f (s, u(s))ds. Note that the function u1 belongs 0 to L 1 (Ω) and is known, since u(t) is already uniquely defined on the interval [0, t1 ]. Then the preceding argument shows that there exists a unique solution u ∈ L 1 (t1, 2t1 ) to problem (4.11) on the interval [t1, 2t1 ]. We complete the proof by repeating the process for a finite number of steps. By the very construction, it can be verified that the constructed solution u does belong to L 1 (I) with 0 Itn−α u ∈ AC n (I). (ii). We employ the approach of Bielecki [Bie56]. We equip the space L 1 (I) with ∫T a family of weighted norms  · λ , defined by uλ = 0 e−λt |u(t)|dt, with λ > 0. For any fixed λ > 0,  · λ is equivalent to  ·  L 1 (I) . It has already been verified that the operator A defined in (4.13) maps L 1 (I) into itself. Further, by the Lipschitz continuity of f (t, u) in u in (4.12), there holds ∫ t ∫ T 1  Au1 − Au2 λ ≤ e−λt (t − s)α−1 | f (s, u1 (s)) − f (s, u2 (s))|dsdt Γ(α) 0 0 ∫ t ∫ T L ≤ e−λt (t − s)α−1 |u1 (s) − u2 (s)|dsdt Γ(α) 0 0 ∫ T ∫ T L = e−λs |u1 (s) − u2 (s)| e−λ(t−s) (t − s)α−1 dt ds. Γ(α) 0 s

108

4 Cauchy Problem for Fractional ODEs

∫∞ ∫T Direct computation shows s e−λ(t−s) (t − s)α−1 dt ≤ λ−α 0 e−t t α−1 dt = λ−α Γ(α). Combining the preceding two estimates yields  Au1 − Au2 λ ≤ λ−α Lu1 − u2 λ . For λ sufficiently large, A is contractive on L 1 (I) in the norm  · λ , and by Banach fixed point theorem, there exists a unique solution u ∈ L 1 (I) to problem (4.11).  In view of the Volterra reformulation (4.11), the solution u(t) to problem (4.10) can be singular at t = 0, i.e., u(t) ∼ cn t α−n as t → 0+ . The singularity precludes working in the space C(I) (unless cn = 0). Nonetheless, one may employ a weighted space Cγ (D) (of continuous functions), γ ≥ 0, defined by Cγ (I) = {u ∈ C(I) : tγ u(t) ∈ C(I)}, equipped with the weighted norm uCγ (I) = u(t)tγ C(I) . Then Cγ (I) endowed with the norm  · C(I) is a Banach space. It can be shown that any element u ∈ Cγ (I) can be written as u(t) = t −γ v(t), with v(t) ∈ C(I). The following mapping property of the operator 0 Itα holds on the space Cγ (I). Lemma 4.1 Let 0 ≤ γ < 1, and α > 0, then 0 Itα is bounded on Cγ (I):  0 Itα vCγ (I ) ≤ Γ(1 − γ)Γ(1 + α − γ)−1T α vCγ (I) . Proof By the definition of the norm  · Cγ (I ) ,  0 Itα vCγ (I) = sup t ∈I

tγ Γ(α)

tγ ≤ sup Γ(α) t ∈I This and the identity

∫t 0



t

(t − s)α−1 v(s)ds

0



0

t

(t − s)α−1 s−γ dsvCγ (I ) .

(t − s)α−1 s−γ ds = B(α, 1 − γ)t α−γ imply the desired estimate. 

The next two results are Cγ (I) analogues of Proposition 4.2 and Theorem 4.5. The proof of the proposition is omitted since it is identical with Proposition 4.2. Proposition 4.3 Let n − 1 < α < n, n ∈ N. Let J ⊂ R be an open set, and f : I × J → R be a function such that f (t, u(t)) ∈ Cn−α (I) for any u ∈ Cn−α (I). Then u ∈ Cn−α (I) with (0 Itn−α )(n) u ∈ Cn−α (I) solves (4.10) if and only if u ∈ Cn−α (I) satisfies (4.11). Theorem 4.6 Let n − 1 < α < n, n ∈ N, J ⊂ R an open set and the function f : I × J → R be such that f (t, u) ∈ Cn−α (I) for any u ∈ J and satisfy the condition (4.12). Then there exists a unique solution u ∈ Cn−α (I) to problem (4.10) with R∂ α u ∈ C n−α (I). t

4.2 ODEs with a Riemann-Liouville Fractional Derivative

109

Proof The proof is similar to Theorem 4.5, and employs Banach fixed point theorem. By Proposition 4.3, it suffices to prove the assertion for (4.11). Equation (4.11) makes

sense in any interval [0, t1 ] ⊂ I. Choose t1 such that Lt1α max Γ(α − n + 1)Γ(2α − n + 1)−1, Γ(α + 1)−1 < 1. We rewrite (4.11) as u(t) = Au(t), with A : Cn−α (I) → Cn−α (I) defined in (4.13). Clearly, if u ∈ Cn−α [0, t1 ], since f (t, u(t) ∈ Cn−α [0, t1 ] by assumption, then by Lemma 4.1, 0 Itα f (t, u(t)) ∈ Cn−α [0, t1 ] and also Au ∈ Cn−α [0, t1 ]. Next, by the Lipschitz continuity of f and Lemma 4.1, we deduce that for any u1, u2 ∈ Cn−α [0, t1 ],  Au1 − Au2 Cn−α [0,t1 ] =  0 Itα ( f (t, u1 (t)) − f (t, u2 (t)))Cn−α [0,t1 ] ≤ L 0 Itα |u1 − u2 |Cn−α [0,t1 ] ≤ Lt1α Γ(α − n + 1)Γ(2α − n + 1)−1 u1 − u2 Cn−α [0,t1 ] . By the choice of t1 , the operator A is contractive on Cn−α [0, t1 ], and Banach fixed point theorem implies that there exists a unique solution u ∈ Cn−α [0, t1 ] to (4.11) over [0, t1 ]. Next we continue the solution u ∈ Cn−α [0, t1 ] to [t1, 2t1 ] so that u ∈ Cn−α [0, 2t1 ] (assuming 2t1 ≤ T). Then for any t ∈ [t1, 2t1 ], (4.11) is equivalent to ∫ t 1 (t − s)α−1 f (s, u(s))ds + u1 (t) u(t) = Γ(α) t1 ∫ t1 1 with u1 (t) = u0 (t)+ Γ(α) (t−s)α−1 f (s, u(s))ds ∈ Cn−α [0, t1 ], which is known, since 0 u ∈ Cn−α [0, t1 ] is known. Further, it is continuous over the interval [t1, 2t1 ]. Then repeating the argument but with C[t1, t2 ] shows that there exists a unique solution u ∈ C[t1, 2t1 ] and further u ∈ Cn−α [0, 2t1 ]. By repeating the process for a finite number of steps, we deduce that there exists a solution u ∈ C[t1, T] with u ∈ Cn−α (I).  The construction also shows that u ∈ Cn−α (I) satisfies R∂tα u ∈ Cn−α (I). In the preceding discussions, we give sufficient conditions for unique global existence in L 1 (I) and Cγ (I). Now we discuss the existence of a local solution. There are several possible versions; See Exercise 4.9 for another result. Theorem 4.7 Let n − 1 < α ≤ n, n ∈ N, let K > 0, 0 < χ∗ ≤ T and c j ∈ R, j = 1, . . . , n. Let D be the set given by n



D = (t, u) ∈ R2 : 0 < t ≤ χ∗ : t n−α u − j=1

 b j t n−j

≤K . Γ(α − j + 1)

Let f (t, u) : D → R be such that t n−α f (t, u) ∈ C(D) and | f (t, u1 ) − f (t, u2 )| ≤ L|u1 − u2 |,

∀(t, u1 ), (t, u2 ) ∈ D.

Then there exists an χ ≤ χ∗ , such that problem (4.10) has a unique solution u in the space Cn−α [0, χ] with R∂tα u ∈ Cn−α [0, χ]. Proof By Proposition 4.3, it suffices to prove it for the Volterra equation (4.11).

Let c f = max(t,u)∈ D |t n−α f (t, u)|. We choose χ such that χ = min( χ∗, Γ(2α − n +

110

4 Cauchy Problem for Fractional ODEs

−1 1)c−1 f Γ(α + 1 − n)

α1

) and LΓ(α − n + 1)Γ(2α − n + 1)−1 χ α < 1. Let

n



U = u ∈ C(0, χ] : sup t n−α u(t) − cj t ∈[0, χ]

j=1

 t n−j

≤ K ⊂ Cn−α [0, χ]. Γ(α − j + 1)

Then U is a complete subset of Cn−α [0, χ]. On the set U, we define an operator A by (4.13). Let u ∈ U. Then for any t ∈ [0, χ], we have ∫ t n

 c j t n−j

t n−α

n−α Au(t) − (t − s)α−1 | f (s, u(s))|ds



t Γ(α − j + 1) Γ(α) 0 j=1 ∫ t t n−α c f t α c f Γ(α + 1 − n) Γ(α + 1 − n) ≤ ≤ c f hα . (t − s)α−1 s α−n ds = Γ(α) 0 Γ(2α − n + 1) Γ(2α − n + 1) By the choice of χ, Au ∈ U. Next, by the Lipschitz condition, ∫ t t n−α |t n−α (Au1 (t) − Au2 (t))| ≤ L (t − s)α−1 |u1 (s) − u2 (s)|ds Γ(α) 0 ∫ Lt n−α t α−n s (t − s)α−1 dsu1 − u2 Cn−α [0,h] ≤ Γ(α) 0 ≤ L χ α Γ(α − n + 1)Γ(2α − n + 1)−1 u1 − u2 Cn−α [0,h] . By the choice of χ, A is a contraction on Cn−γ [0, χ], and by Banach contraction mapping theorem, there exists a unique solution u ∈ Cn−γ [0, χ]. Similarly, one can  prove R∂tα u ∈ Cn−γ [0, χ]. For linear Cauchy problems with constant coefficients, explicit solutions can be obtained using Mittag-Leffler functions and the method of successive approximations. This problem was first considered by Barrett [Bar54] in 1954, who showed the existence and uniqueness by essentially the argument given below. Proposition 4.4 Let n − 1 < α ≤ n, n ∈ N, λ ∈ R and g ∈ C(I). Then the solution u to  R α ∂t u = λu + g, t ∈ I, (4.14) α−j (R∂t u)(0) = c j , j = 1, . . . , n, with {c j } nj=1 ⊂ R, is given by u(t) =

n  j=1

c j t α−j Eα,α−j+1 (λt α ) +

∫ 0

t

(t − s)α−1 Eα,α (λ(t − s)α )g(s) ds.

(4.15)

Proof By the linearity, we can split u into u = uh + ui , with uh and ui solving   R α R α ∂t uh = λuh, t ∈ I, ∂t ui = λui + g, t ∈ I, and R α−j R α−j ( ∂t uh )(0) = c j , j = 1, . . . , n, ( ∂t ui )(0) = 0, j = 1, . . . , n,

4.2 ODEs with a Riemann-Liouville Fractional Derivative

111

respectively. By Proposition 4.2, it suffices to apply the  method of successive cj t α−j , and approximations to (4.11) to find uh and ui . Let uh0 (t) = nj=1 Γ(α−j+1) m 0 α m−1 uh (t) = uh (t) + λ0 It uh (t), m = 1, 2, . . . , By the identity (2.19), we deduce uh1 (t) =

n 2  

cj

k=1 j=1

λ k−1 t αk−j , Γ(αk − j + 1)

and by mathematical induction, we obtain uhm (t) =

n m+1  k=1 j=1

cj

λ k−1 t αk−j . Γ(αk − j + 1)

Taking the limit as m → ∞, we get uh (t) =

n  j=1

cj

∞  k=1

 λ k−1 t αk−j = c j t α−j Eα,α−j+1 (λt α ), Γ(αk − j + 1) j=1 n

where, by definition, the inner series is Eα,α−j+1 (z), which converges uniformly over C. To find ui , for any g ∈ C(I), let ui1 (t) = (0 Itα g)(t). Then by Theorem 2.1, ui2 (t) = λ(0 Itα ui1 )(t) + (0 Itα g)(t) =

2 

λ k−1 0 Itkα g(t).

k=1

Like before, continuing the procedure, we obtain ∫ t m m  λ k−1 m k−1 kα (t − s)αk−1 g(s)ds. λ (0 It g)(t) = ui (t) = 0 k=1 Γ(kα) k=1 Taking the limit as m → ∞ and using the definition of Eα,α (λt α ) give ui (t) =

∫ t ∫ t ∞ λ k−1 (t − s)αk−1 g(s) ds = (t − s)α−1 Eα,α (λ(t − s)α )g(s) ds. 0 k=1 Γ(kα) 0

Combining the representations of uh (t) and ui (t) gives the desired result.



Example 4.2 When g ≡ 0, α ∈ (0, 1) and c0 = 1, the solution u(t) is given by u(t) = t α−1 Eα,1 (λt α ). When α = 1, E1,1 (λt) = eλt , which recovers the familiar result for classical odes. For λ > 0, u is exponentially growing like the case α = 1, for large t, but at a faster rate. However, u is no longer monotone in t. For λ < 0, u(t) is monotonically decreasing, by Theorem 3.5, and the smaller is α, the slower is the asymptotic decay, but the behavior at small time is quite the reverse. This behavior differs drastically from the case α = 1. See Fig. 4.1 for a schematic illustration. So far we have discussed only global/local existence and uniqueness of solutions using fixed point theorems. One important issue in the study of odes is

112

4 Cauchy Problem for Fractional ODEs

5

E -1

E

t

20

t

-1

=1/4 =1/2 =3/4 =1

4 3

,

40

( t )

=1/4 =1/2 =3/4 =1

,

( t )

60

2 1 0

0 0

1

2

3

0

2

4

6

t

t Fig. 4.1 The solution t α−1 Eα, α (λt α ) to the homogeneous problem.

the smoothness of the solution with respect to the problem data, and for classical odes, the solution smoothness improves steadily with that of f : loosely speaking, f ∈ C k ([0, χ] × R) implies u ∈ C k+1 [0, χ] for first-order odes. The solution uh (t) in Proposition 4.4 indicates that with nonzero cn , uh (t) ≈ cn t α−n as t → 0+ , which is singular around t = 0, and thus the classical counterpart generally does not hold. Of course this does not preclude a smooth solution for special data, e.g., cn = 0. This observation holds also for problem (4.10), provided that f is smooth in u. It is conjectured that for α ∈ (0, 1), additional singular terms include t α−1+mα , m = 0, 1, . . ., even though a precise characterization seems missing from the literature. Theorem 4.8 For n − 1 < α < n, n ∈ N, and f ∈ C(I × R) satisfy the condition | f (t, u)| ≤ a(t) + b|u| for some a(t) ∈ C(I) and b ≥ 0. Then the solution u to problem  (4.10) belongs to C(I) and |u(t)| ≤ c nj=1 t α−j , as t → 0+ . Further u ∈ C(I) if and only if cn = 0. Proof Under the given conditions, Proposition 4.2 holds, and there exists a unique solution u. It follows from (4.11) that u(t) = u0 (t) + 0 Itα f (t, u(t)),

with u0 =

n  j=1

c j t α−j ∈ C n−1 (I). Γ(α − j + 1)

Under the given assumption on f , for t ∈ I, ∫ t n  |c j |t α−j 1 + (t − s)α−1 (a(s) + b|u(s)|)ds. |u(t)| ≤ Γ(α − j + 1) Γ(α) 0 j=1 Then Gronwall’s inequality implies the bound on |u(t)|. Further, for 0 Itα f (t, u(t)), |(0 Itα f (·, u(·)))(n−1) (t)| = |(0 Itα−n+1 f (·, u(·)))(t)| ∈ C(I). Thus, u ∈ C n−1 (I). The second assertion follows similarly.



4.3 ODEs with a Djrbashian-Caputo Fractional Derivative

113

4.3 ODEs with a Djrbashian-Caputo Fractional Derivative This section is devoted to the following fractional odes with a Djrbashian-Caputo fractional derivative (with n − 1 < α ≤ n, n ∈ N):  α ∂t u(t) = f (t, u(t)), t ∈ I, (4.16) u(j) (0) = c j , j = 0, 1, . . . , n − 1, where I ⊂ R+ is an interval (to be determined), and f is a measurable function. Note that there are several different definitions of the Djrbashian-Caputo fractional derivative ∂tα u, e.g., the classical version ∂tα u for u ∈ AC n (I), and also the regularized version ∂tα∗ u, cf. Definitions 2.3 and 2.4 in Chapter 2. The concept of a "solution" will change accordingly. In spite of this, the difference is often not explicitly specified in the literature, and mostly we also do not distinguish between ∂tα and ∂tα∗ below. First we give an explicit solution for linear problems with constant coefficients. This result was already obtained by Djrbashian and Nersesyan [DN68], who also generalized this to include coefficients pk (t) ∈ C(I) coupling each fractional derivative, which represents one of the earliest studies on fractional odes. Proposition 4.5 Let n − 1 < α < n, n ∈ N, λ ∈ R, g ∈ C(I). Then the solution u to  ∂tα u = λu + g, t ∈ I, (4.17) u(j) (0) = c j , j = 0, . . . , n − 1 is given by u(t) =

n−1  j=0

c j t j Eα, j+1 (λt α ) +

∫ 0

t

(t − s)α−1 Eα,α (λ(t − s)α )g(s) ds.

Proof Clearly, it can be equivalently converted into a Volterra integral equation as in Proposition 4.2, and then solved using the method of successive approximations. We employ Laplace transform instead. We apply Laplace transform in t, denoted  α−1−j +  u(z) = n−1 g (z), that is,  u(z) = by . Then by Lemma 2.9, (λ + z α ) j=0 c j z n−1 z α−1− j zγ 1 g (z). It remains to invert the functions z α +λ with γ set to be j=0 c j λ+z α + λ+z α  γ = 1, α − 1, . . . , α − n. These are available directly from Lemma 3.2. Together with the convolution rule for Laplace transform, we obtain the representation.  Example 4.3 When g ≡ 0, α ∈ (0, 1) and c0 = 1, the solution u to problem (4.17) is given by u(t) = Eα,1 (λt α ), cf. Fig. 4.2. With α = 1, it recovers the well known result for odes. Depending on the sign of λ, u(t) is either increasing or decreasing in t: for λ > 0, u is exponentially growing like the case α = 1, but at a faster rate, whereas for λ < 0, it is only polynomially decaying, and the smaller is α, the slower is the asymptotic decay. This behavior is drastically different from the case α = 1. Example 4.4 Let α ∈ (0, 1). Consider the fractional ode ∂tα u(t) = λu + Γ(γ)−1 tγ−1 with u(0) = 0, and γ > 0. By Proposition 4.5, the solution u(t), if there is one, is

114

4 Cauchy Problem for Fractional ODEs 1

60

( t )

=1/4 =1/2 =3/4 =1

,1

40

=1/4 =1/2 =3/4 =1

0.5

E

E ,1( t )

80

20 0

0 0

1

2

0

3

2

4

6

t

t Fig. 4.2 The solution Eα,1 (λt α ) to the homogeneous problem.

given by u(t) = t α+γ−1 Eα,α+γ (λt α ). Hence, limt→0+ u(t) = 0 if and only α + γ > 1. Thus u(t) is indeed a solution to the ode only if α + γ > 1. However, for any γ > 0, u(t) does satisfy the following ode with a Riemann-Liouville fractional derivative: R∂ α u(t) = λu + Γ(γ)−1 t γ−1 with R∂ α−1 u(0) = 0, since γ > 0. This shows the t t difference of the odes with the Riemann-Liouville and Djrbashian-Caputo cases; and the condition on f is more restrictive in the latter case. Now we turn to the well-posedness of problem (4.16). The analysis strategy is similar to the Riemann-Liouville case, based on reduction to a Volterra integral equation and the application of fixed point theorems. The following result gives the equivalence of problem (4.18) with a Volterra integral equation, in either classical or regularized sense [Web19a, Theorem 4.6], where Tn u denotes the Taylor expansion of u up to order n at t = 0. The factor t −γ allows weakly singular source f . It is frequently asserted that (when γ = 0) problem (4.18) is equivalent to (4.19), but the continuity of f is not enough to ensure u ∈ AC n (I) in (4.19), and thus the assertion is generally not valid then. The condition 0 Itn−α (u − Tn−1 u) ∈ AC n (I) in (iii) ensures the validity of the integration. This is also implicit in the notion of “solution” for the problem, because if ∂tα∗ u exists and ∂tα∗ u(t) = t −γ f (t, u(t)), when u and f are continuous, then t −γ f (t, u(t)) ∈ L 1 (I) so that (0 Itn−α (u − Tn−1 u))(n) ∈ L 1 (I) i.e., n−α (u −T n −γ is absent, and (4.20) is satisfied, then 0 It n−1 u) ∈ AC (I). When the term t n−α n (u − Tn−1 u) ∈ C (I). 0 It Proposition 4.6 Let f be continuous on I × R, n − 1 < α < n, n ∈ N, and 0 ≤ γ < α − n + 1. Then the following statements hold. (i) If a function u ∈ AC n (I) satisfies  α ∂t u(t) = t −γ f (t, u(t)), u (0) = c j , (j)

t ∈ I,

j = 0, . . . , n − 1,

then u satisfies the Volterra integral equation

(4.18)

4.3 ODEs with a Djrbashian-Caputo Fractional Derivative

u(t) =

n−1  cj t j j=0

j!

115

+ 0 Itα (t −γ f (t, u(t)))(t),

t ∈ I.

(4.19)

(ii) If $u \in C(I)$ satisfies (4.19), then $u \in C^{n-1}(I)$, $\partial_t^{\alpha*} u$ exists a.e. and $u$ satisfies
$$\begin{cases} \partial_t^{\alpha*} u(t) = t^{-\gamma} f(t, u(t)), & \text{a.e. } t \in I,\\ u^{(j)}(0) = c_j, & j = 0, \ldots, n-1. \end{cases} \qquad (4.20)$$
Moreover, ${}_0I_t^{n-\alpha}(u - T_{n-1}u) \in AC^n(I)$.
(iii) If $u \in C^{n-1}(I)$ and ${}_0I_t^{n-\alpha}(u - T_{n-1}u) \in AC^n(I)$ and if $u$ solves (4.20), then $u$ solves (4.18).

Proof (i) Let $g(t) = t^{-\gamma} f(t, u(t))$, $t \in I$. Then $g \in L^1(I)$, since $0 \le \gamma < \alpha - n + 1 < 1$ and $f$ is continuous. Suppose $u \in AC^n(I)$, i.e., $u^{(n)} \in L^1(I)$, and
$$\partial_t^{\alpha} u(t) = t^{-\gamma} f(t, u(t)) = g(t) \quad \text{a.e. } t \in I.$$
Thus ${}_0I_t^{n-\alpha}(u^{(n)}) = g$ holds in $L^1(I)$, and ${}_0I_t^{\alpha}\,{}_0I_t^{n-\alpha} u^{(n)} = {}_0I_t^{\alpha} g$. By Theorem 2.1, ${}_0I_t^{n} u^{(n)} = {}_0I_t^{\alpha} g$, and since ${}_0I_t^{n} u^{(n)} = u - T_{n-1}u$, this leads to $u(t) - T_{n-1}u(t) = {}_0I_t^{\alpha} g$, i.e., (4.19) holds.
(ii) Let $u \in C(I)$ satisfy (4.19). Then for every $\beta \ge \alpha - n + 1 > \gamma$,
$${}_0I_t^{\beta} g(t) = \frac{t^{\beta-\gamma}}{\Gamma(\beta)} \int_0^1 (1 - s)^{\beta-1} s^{-\gamma} f(ts, u(ts))\,ds$$
exists for every $t$, since $f(ts, u(ts))$ is bounded by the continuity of $u$ and $f$. Thus ${}_0I_t^{\beta} g \in C(I)$, and in particular ${}_0I_t^{\alpha} g \in C^{n-1}(I)$ and also $u \in C^{n-1}(I)$. Further, for any $\beta > \gamma$, ${}_0I_t^{\beta} g(0) = 0$, and then taking $\beta = \alpha$ in (4.19) gives $u(0) = c_0$. By differentiating (4.19) and taking $\beta = \alpha - 1$, we obtain $u'(0) = c_1$. Similarly, we obtain $u^{(j)}(0) = c_j$, $j = 2, \ldots, n-1$. Thus, we have
$$\partial_t^{\alpha*} u = {}^R\partial_t^{\alpha}(u - T_{n-1}u) = {}^R\partial_t^{\alpha}\Big(u - \sum_{j=0}^{n-1} \frac{c_j}{j!} t^j\Big) = {}^R\partial_t^{\alpha}\,{}_0I_t^{\alpha} g = g.$$

Thus, $u$ satisfies (4.20). Moreover, from ${}_0I_t^{n-\alpha}(u - T_{n-1}u) = {}_0I_t^{n-\alpha}\,{}_0I_t^{\alpha} g = {}_0I_t^{n} g$, it follows that ${}_0I_t^{n-\alpha}(u - T_{n-1}u) \in AC^n(I)$.
(iii) Let $g(t) = t^{-\gamma} f(t, u(t))$. Then by the assumption on $f$, $g \in L^1(I)$. Since ${}_0I_t^{n-\alpha}(u - T_{n-1}u) \in AC^n(I)$, integrating the identity $({}_0I_t^{n-\alpha}(u - T_{n-1}u))^{(n)} = g(t)$ $n$ times gives
$${}_0I_t^{n-\alpha}(u - T_{n-1}u) = {}_0I_t^{n} g(t) + \sum_{j=0}^{n-1} a_j t^{j},$$
with $a_j \in \mathbb{R}$. Applying the operator ${}_0I_t^{\alpha-n+1}$ to both sides of the identity yields
$${}_0I_t^{1}(u - T_{n-1}u) = {}_0I_t^{\alpha+1} g + \sum_{j=0}^{n-1} b_j t^{j+\alpha-n+1},$$

with $b_j \in \mathbb{R}$. It follows from the assumption $u \in C^{n-1}(I)$ that ${}_0I_t^{1}(u - T_{n-1}u) \in C^n(I)$ and ${}_0I_t^{\alpha+1} g \in C^n(I)$, both with their derivatives up to order $n-1$ vanishing at $t = 0$, and hence there holds $b_j = 0$, $j = 0, 1, \ldots, n-1$. Then differentiating the identity gives $u - T_{n-1}u = {}_0I_t^{\alpha} g$. This completes the proof of the proposition. □

Example 4.5 For $\alpha \in (1, 2)$, consider the initial value problem
$$\partial_t^{\alpha} u(t) = (u(t) - 1)^{-1}, \quad t > 0, \qquad u(0) = 1, \quad u'(0) = 0.$$
If we ignore the discontinuity of $f(t, u)$ at $(0, u(0)) = (0, 1)$ and blindly apply Proposition 4.6 with $\gamma = 0$ (which is valid for continuous $f$), we get $u(t) = 1 - {}_0I_t^{\alpha}(1 - u)^{-1}$. Clearly, it is satisfied by $u(t) = 1 \pm \Gamma(1 - \tfrac{\alpha}{2})^{\frac12}\Gamma(1 + \tfrac{\alpha}{2})^{-\frac12}\, t^{\frac{\alpha}{2}}$. However, either function has an unbounded first derivative at $0$, and cannot be a solution of the ode. This partly shows the necessity of the continuity of $f$ in order to have the equivalence.

Now we establish the local unique existence for problem (4.16). Our discussion shall focus on the case $0 < \alpha < 1$, since the general case can be analyzed in a similar manner. The next result gives the local well-posedness for $\alpha \in (0, 1)$ and $c_0 \in \mathbb{R}$, when $f(t, u)$ is a continuous and locally Lipschitz function. It was first analyzed systematically by Diethelm and Ford [DF02, Section 2]. Kilbas and Marzan [KM04, KM05] also studied the problem via its integral formulation and proved existence and uniqueness of a global continuous solution in the case of a continuous and globally Lipschitz continuous function $f$.

Theorem 4.9 Suppose there exist $\chi^* > 0$ and $\gamma > 0$ such that $f(t, u)$ is continuous on $D = [0, \chi^*] \times [c_0 - \gamma, c_0 + \gamma]$ and there exists an $L > 0$ with
$$|f(t, u_1) - f(t, u_2)| \le L|u_1 - u_2|, \quad \forall (t, u_1), (t, u_2) \in D.$$
Let $\chi = \min\big(\chi^*, \sup\{t \ge 0 : \|f\|_{L^\infty(D)}\Gamma(1 + \alpha)^{-1} t^{\alpha} E_{\alpha,1}(L t^{\alpha}) \le \gamma\}\big)$. Then problem (4.16) has a unique solution $u \in C[0, \chi]$, which depends continuously on $c_0$.

Proof The proof employs the Picard iteration, as for classical odes. In view of Proposition 4.6, it suffices to prove the result for the Volterra integral equation (4.19). We define a sequence of functions in $C[0, \chi]$ by $u_0(t) \equiv c_0$ and
$$u_n(t) = c_0 + ({}_0I_t^{\alpha} f(t, u_{n-1}))(t), \quad n = 1, 2, \ldots.$$

Let $d_n = |u_n - u_{n-1}|$. Then for any $t \in [0, \chi]$, $d_1(t) = |{}_0I_t^{\alpha} f(t, u_0)| \le \|f\|_{L^\infty(D)}\Gamma(1 + \alpha)^{-1}\chi^{\alpha} =: c_f$. By the choice of $\chi$, $c_f \le \gamma$, i.e., $(t, u_1) \in D$. Now assume that $\sum_{m=1}^{n-1} d_m \le \gamma$, so that $|u_{n-1} - c_0| \le \gamma$. We show this for $d_n$. By the induction hypothesis, we have that for $t \in [0, \chi]$ and $m = 2, 3, \ldots, n$, there holds $d_m \le L\,{}_0I_t^{\alpha} d_{m-1}$, and $d_m \le c_f L^{m-1}({}_0I_t^{(m-1)\alpha}1)(t)$, $m = 1, 2, \ldots, n$. It then follows that
$$\sum_{m=1}^{n} d_m < c_f \sum_{m=1}^{\infty} \frac{L^{m-1} t^{(m-1)\alpha}}{\Gamma((m-1)\alpha + 1)} = c_f E_{\alpha,1}(L t^{\alpha}).$$
By the definition of $\chi$,
$$\sum_{m=1}^{n} d_m \le \gamma, \quad \forall t \in [0, \chi].$$
Hence, $(t, u_n(t)) \in D$ for all $t \in [0, \chi]$ and $n \ge 0$, and $\sum_n |u_n - u_{n-1}|$ converges uniformly on $[0, \chi]$. Thus, $u_n$ converges to some $u$ uniformly on $[0, \chi]$, and $u$ is continuous. Passing to the limit gives $u(t) = c_0 + {}_0I_t^{\alpha} f(t, u)$ for $t \in [0, \chi]$, i.e., $u$ is indeed a solution. Next we show the uniqueness. Let $u_1, u_2$ be two solutions on the interval $[0, \chi]$, and $w = u_1 - u_2$. Then both $u_1$ and $u_2$ fall into $[c_0 - \gamma, c_0 + \gamma]$ for $t \le \chi$, and $\partial_t^{\alpha} w = f(t, u_1) - f(t, u_2)$, with $w(0) = 0$. Since $f$ is Lipschitz on $D$ and $u_1, u_2 \in L^1(0, \chi)$, we have $f(t, u_1) - f(t, u_2) \in L^1(0, \chi)$. Thus, for $t \in (0, \chi)$, $w(t) = ({}_0I_t^{\alpha}(f(t, u_1) - f(t, u_2)))(t)$. For all $t \le \chi$, $|w(t)| \le L({}_0I_t^{\alpha}|w|)(t)$, and Theorem 4.1 implies $|w| = 0$ on $[0, \chi]$. The continuity in $c_0$ follows similarly. □

The next result gives a global existence.

Corollary 4.3 Let $f \in C(\mathbb{R}_+ \times \mathbb{R})$, and suppose that for any $T > 0$ there exists $L_T$ such that $|f(t, u_1) - f(t, u_2)| \le L_T|u_1 - u_2|$ for any $t \in [0, T]$, $u_1, u_2 \in \mathbb{R}$. Then there exists a unique solution $u$ on $\mathbb{R}_+$.

Proof This is direct from the proof of Theorem 4.9: for any $\chi^* > 0$, $\gamma$ can be chosen arbitrarily large so that $\chi = \chi^*$. Hence, the solution $u$ exists and is unique on $[0, \chi]$. Since $\chi^*$ is arbitrary, the claim follows.

Next we present an alternative proof using the Bielecki norm [Bie56]. For any fixed $T > 0$ and any $u \in C(I)$, we define a family of weighted norms on $C(I)$ by $\|u\|_{\lambda} = \sup_{t \in I} e^{-\lambda t}|u(t)|$, where $\lambda > 0$ is to be chosen. The norm $\|\cdot\|_{\lambda}$ is equivalent to the standard norm. For any $u \in C(I)$, let $Au = c_0 + {}_0I_t^{\alpha} f(t, u)$. Then $Au \in C(I)$. Next we claim that $A$ is a contraction on $C(I)$. Indeed, for any $u, v \in C(I)$, $Au(t) - Av(t) = {}_0I_t^{\alpha}[f(\cdot, u(\cdot)) - f(\cdot, v(\cdot))](t)$. By the Lipschitz continuity of $f$ in $u$, we deduce
$$e^{-\lambda t}|(Au - Av)(t)| \le \frac{e^{-\lambda t}}{\Gamma(\alpha)}\int_0^t (t - s)^{\alpha-1} e^{\lambda s}\, e^{-\lambda s} L_T|u(s) - v(s)|\,ds \le \frac{L_T\|u - v\|_{\lambda}}{\Gamma(\alpha)}\int_0^t (t - s)^{\alpha-1} e^{-\lambda(t-s)}\,ds \le \lambda^{-\alpha} L_T\|u - v\|_{\lambda}.$$


Thus, by choosing $\lambda$ sufficiently large, $A$ is contractive on $C(I)$ in the norm $\|\cdot\|_{\lambda}$, and by Theorem A.10, there exists a unique solution $u \in C(I)$ to $Au = u$, or equivalently problem (4.16). Since the choice of $T$ is arbitrary, the desired assertion follows. □

The next result gives the following global behavior of the solution [LL18a, Proposition 4.6]. Throughout, $t_*$ denotes the largest time of existence (or the blow-up time).

Proposition 4.7 Let $J \equiv (a, b) \subset \mathbb{R}$, and let $f(t, u) \in C(\mathbb{R}_+ \times J)$ be continuous and locally Lipschitz in $u$, i.e., for any compact $K \subset J$ and $\chi > 0$, there exists $L_{\chi,K} > 0$ such that $|f(t, u_1) - f(t, u_2)| \le L_{\chi,K}|u_1 - u_2|$ for any $t \in [0, \chi]$, $u_1, u_2 \in K$. Then, for any $c_0 \in J$, problem (4.16) has a unique solution $u \in C[0, t_*)$, with
$$t_* = \sup\{h > 0 : \text{the solution } u \in C[0, h) \text{ and } u(t) \in J, \ \forall t \in [0, h)\}.$$
If $t_* < \infty$, then either $\liminf_{t \to t_*^-} u(t) = a$ or $\limsup_{t \to t_*^-} u(t) = b$.

Proof By Theorem 4.9, the solution with $[\liminf_{t\to 0^+} u(t), \limsup_{t\to 0^+} u(t)] \subset J$ is unique, and it exists locally and is continuous. Hence, $t_* > 0$. It suffices to show that if $t_* < \infty$ and $a < k_1 := \liminf_{t\to t_*^-} u(t) \le k_2 := \limsup_{t\to t_*^-} u(t) < b$, then the solution can be extended to a larger interval. Then $[k_1, k_2] \subset J$ is compact. Pick $\delta > 0$ so that $[k_1 - \delta, k_2 + \delta] \subset J$, and define $\tilde f(t, u)$, extending $f$ from $D_\delta = [0, t_* + \delta] \times [k_1 - \delta, k_2 + \delta]$, by
$$\tilde f(t, u) = \begin{cases} f(t, u), & (t, u) \in D_\delta,\\ f(t, k_2 + \delta), & t \le t_* + \delta,\ u \ge k_2 + \delta,\\ f(t_* + \delta, k_2 + \delta), & t \ge t_* + \delta,\ u \ge k_2 + \delta,\\ f(t_* + \delta, u), & t \ge t_* + \delta,\ u \in [k_1 - \delta, k_2 + \delta],\\ f(t, k_1 - \delta), & t \le t_* + \delta,\ u \le k_1 - \delta,\\ f(t_* + \delta, k_1 - \delta), & t \ge t_* + \delta,\ u \le k_1 - \delta. \end{cases}$$

It agrees with $f(t, u)$ on $D_\delta$ and is globally Lipschitz. Thus, by Corollary 4.3, there exists a unique continuous solution $\tilde u$ to $\partial_t^{\alpha}\tilde u = \tilde f(t, \tilde u)$ on $\mathbb{R}_+$. Since $f$ and $\tilde f$ agree on $D_\delta$, $u$ also solves $\partial_t^{\alpha} u = \tilde f(t, u)$ on $[0, t_*)$. Hence $\tilde u = u$ on $[0, t_*)$. It follows that there exists $\delta_1 \in (0, \delta)$ such that $(t, \tilde u(t)) \in D_\delta$ for any $t \le t_* + \delta_1$. On the interval $[0, t_* + \delta_1]$, $\tilde u$ also solves $\partial_t^{\alpha} u = f(t, u)$, contradicting the definition of $t_*$. □

The following example shows the necessity of the Lipschitz continuity of $f$.

Example 4.6 In the absence of the Lipschitz assumption on $f$ in $u$, the solution $u$ is not necessarily unique. To see this, given $0 < \alpha < 1$, consider the problem $\partial_t^{\alpha} u = u^q$ for $t > 0$, with $u(0) = 0$. Consider $0 < q < 1$, so that the function on the right-hand side is continuous but not Lipschitz. Obviously, $u \equiv 0$ is a solution of the problem. However, since $\partial_t^{\alpha} t^{\gamma} = \Gamma(\gamma + 1)\Gamma(\gamma + 1 - \alpha)^{-1} t^{\gamma-\alpha}$, the function
$$u(t) = \Gamma(\gamma + 1 - \alpha)^{\frac{1}{1-q}}\,\Gamma(\gamma + 1)^{-\frac{1}{1-q}}\, t^{\gamma}, \quad \text{with } \gamma = \frac{\alpha}{1-q},$$
is also a solution, indicating nonuniqueness.
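A quick numerical sanity check of this closed-form solution is given below. The sketch evaluates both sides of $\partial_t^{\alpha} u = u^q$ using the power rule $\partial_t^{\alpha} t^{\gamma} = \frac{\Gamma(\gamma+1)}{\Gamma(\gamma+1-\alpha)} t^{\gamma-\alpha}$ quoted above, so no discretization of the fractional derivative is needed; the sample values of $\alpha$ and $q$ are arbitrary illustrative choices.

```python
import math

def check_nonuniqueness(alpha, q, ts=(0.5, 1.0, 2.0)):
    """Verify that u(t) = c * t^gamma with gamma = alpha/(1-q) and
    c = (Gamma(gamma+1-alpha)/Gamma(gamma+1))^(1/(1-q)) solves d_t^alpha u = u^q."""
    gamma_ = alpha / (1.0 - q)
    c = (math.gamma(gamma_ + 1 - alpha) / math.gamma(gamma_ + 1)) ** (1.0 / (1.0 - q))
    for t in ts:
        u = c * t**gamma_
        # left-hand side: fractional derivative of c*t^gamma via the power rule
        lhs = c * math.gamma(gamma_ + 1) / math.gamma(gamma_ + 1 - alpha) * t**(gamma_ - alpha)
        rhs = u**q
        print(f"alpha={alpha}, q={q}, t={t}: lhs={lhs:.6f}, rhs={rhs:.6f}")

check_nonuniqueness(alpha=0.5, q=0.5)
check_nonuniqueness(alpha=0.7, q=0.3)
```

Both columns agree to machine precision, confirming that the nonzero power-type function above is indeed a second solution.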

However, it is possible to recover the uniqueness if the Lipschitz condition is properly modified. One possible choice for 0 < α < 1 is a fractional analogue of the


classial Nagumo’s condition [Nag26]. The next result is taken from [Die12, Theorem 3.1] (see [Die17] for the erratum and [LL09] for the Riemann-Liouville case). Theorem 4.10 Let T > 0 and c0 ∈ R. If the function f : I × R → R is continuous at (0, c0 ) and satisfies the inequality t α | f (t, u1 ) − f (t, u2 )| ≤ Γ(α + 1)|u1 − u2 | for all t ∈ I and all u1, u2 ∈ R, then problem (4.16) has at most one continuous solution u ∈ C(I) with ∂tα u ∈ C(I). Proof The proof proceeds by contradiction, following the approach developed in [DW60], using the mean value theorem (cf. Theorem 2.14). Assume that equation (4.16) has two continuous solutions u and u˜ on I. Let w : I → R be defined by  −α t |u(t) − u(t)|, ˜ t ∈ I, w(t) = 0, t = 0. Then w is continuous on I. Since both u and u˜ solve (4.16) and by Theorem 2.14, w(t) = t −α |(u(t) − u(0)) − (u(t) ˜ − u(0))| ˜ = t −α |(u(t) − u(t)) ˜ − (u(0) − u(0))| ˜ = Γ(α + 1)−1 |(∂tα (u − u))(ξ)| ˜ = Γ(α + 1)−1 | f (ξ, u(ξ)) − f (ξ, u(ξ))|, ˜ with some ξ ∈ (0, t). Thus, as t → 0+ , ξ → 0+ , and since u, u˜ are continuous, we ˜ → u(0) ˜ = c0 . These observations and the also have u(ξ) → u(0) = c0 and u(ξ) continuity of f at the point (0, c0 ) imply w(t) = f (ξ, u(ξ)) − f (ξ, u(ξ)) ˜ → f (0, c0 ) − f (0, c0 ) = 0

as t → 0+ .

Thus, limt→0+ w(t) exists and coincides with w(0). Thus w is continuous at the  origin. Now assume that w is not identically zero on I, and let η := inf t ∗ ∈ [0, T] :  w(t ∗ ) = supt ∈I w(t) . Since w is continuous and nonnegative and w(0) = 0, we have w(η) = supt ∈I w(t) and w(s) < w(η),

∀s ∈ [0, η).

(4.21)

By Theorem 2.14 and Nagumo’s condition, we derive w(η) = η−α |[u(η) − u(0)] − [u(η) ˜ − u(0)]| ˜ = Γ(α + 1)−1 |∂tα u(s) − ∂tα u(s)| ˜ = Γ(α + 1)−1 | f (s, u(s)) − f (s, u(s))| ˜ ≤ w(s) with some s ∈ (0, η), which contradicts (4.21). Thus w vanishes identically.



Example 4.7 The continuity of $f$ at $(0, c_0)$ is essential. Let $c_0 = 0$ and define
$$f(t, u) = \begin{cases} \Gamma(2 - \alpha), & u > t,\\ \Gamma(2 - \alpha)\, t^{-\alpha} u, & 0 \le u \le t,\\ 0, & u < 0. \end{cases}$$

In many applications, the source term behaves like $t^{\gamma-1}$ for some $\gamma > 0$ (cf. Example 4.4), and can be singular at $t = 0$. Hence, it makes sense to allow $\partial_t^{\alpha} u(t)$ to be singular at $t = 0$ in the relevant odes. This leads to the following initial value problem (with $0 < \alpha < 1$):
$$\partial_t^{\alpha} u(t) = t^{-\gamma} f(t, u(t)), \quad t \in I, \quad \text{with } u(0) = c_0, \qquad (4.22)$$

where 0 ≤ γ < α, and f ≥ 0 is continuous. By Proposition 4.6, it is equivalent to ∫ t 1 u(t) = c0 + (t − s)α−1 s−γ f (s, u(s))ds. Γ(α) 0 Below we show the existence of a nonnegative solution [Web19b, Theorem 4.8]. Theorem 4.11 Let f : I × R+ → R+ be continuous, 0 ≤ γ < α < 1 and c0 > 0, and there is a c f > 0 such that f (t, u) ≤ c f (1 + u) for all t ∈ I and u ≥ 0. Then problem (4.22) has a nonnegative solution. Further, if there exists L > 0 such that | f (t, u) − f (t, v)| ≤ L|u − v| for all t ∈ I, u, v ≥ 0, then the solution is unique. Proof Let P = {u ∈ C(I) : u(t) ≥ 0, t ∈ I}, and define A : P → P by ∫ t 1 (t − s)α−1 s−γ f (s, u(s))ds. Au(t) := c0 + Γ(α) 0 First, we prove that there is a bounded open ball UR of radius R (centered at 0) containing 0 such that Au  λu for all u ∈ ∂U ∩ P and all λ ≥ 1. In fact, if there exists λ ≥ 1 and u  0 such that λu(t) = Au(t), then ∫ t 1 u(t) ≤ λu(t) = c0 + (t − s)α−1 s−γ f (s, u(s))ds Γ(α) 0 ∫ t cf (t − s)α−1 s−γ (1 + u(s))ds ≤ c0 + Γ(α) 0 ∫ t cf cf B(α, 1 − γ)T α−γ + (t − s)α−1 s−γ u(s)ds. ≤ c0 + Γ(α) Γ(α) 0 Since 1 − α + γ < 1, by Theorem 4.3, there is c > 0 such that uC(I ) ≤ c. By choosing R > c, then Au  λu for all u ∈ ∂UR ∩ P and all λ ≥ 1. Next we prove A is compact. Let cR = sup(t,u)∈I ×[0,R] f (t, u). Clearly, A(U R ) is bounded. For any 0 ≤ t1 < t2 ≤ T,


Γ(α)| Au(t1 ) − Au(t2 )| ∫ t1 ∫ t2 (t2 − s)α−1 s−γ f (s, u(s))ds − (t1 − s)α−1 s−γ f (s, u(s))ds| =| 0 0 ∫ t1 ∫ t2 ((t1 − s)α−1 − (t2 − s)α−1 )s−γ ds + cR (t2 − s)α−1 s−γ ds. ≤ cR t1

0

Since the second term has an L 1 integrand, it is smaller than 3 for |t1 − t2 | < δ1 . For ∫η ∫t any 0 < η < t1 , we split the first term as 0 + η 1 . By the integrability, we can fix η ∫η small so that 0 |(t2 − s)α−1 − (t1 − s)α−1 |s−γ cR ds < 3 . Last, ∫

t1

|(t2 − s)

η −1 −γ

α−1

− (t1 − s)

α−1

−γ

|s ds ≤ η

−γ



t1

η

((t1 − s)α−1 − (t2 − s)α−1 )ds

= α η ((t1 − η)α − (t2 − η)α + (t2 − t1 )α ) ≤ α−1 η−γ )(t2 − t1 )α . It is then smaller than 3 for |t1 − t2 | < δ2 . This shows the equicontinuity, and by Arzela-Ascoli theorem, the operator A : P → P is compact. This and Theorem A.14 prove that A has a fixed point in P, i.e., the existence of a nonnegative solution. If f satisfies the Lipschitz condition and let u, v be two solutions. Then ∫ t 1 (t − s)α−1 s−γ ( f (s, u(s)) − f (s, v(s)))ds, u(t) − v(t) = Γ(α) 0 ∫t and hence |u(t) − v(t)| ≤ LΓ(α)−1 0 (t − s)α−1 s−γ |u(s) − v(s)|ds. By Theorem 4.3, |u(t) − v(t)| ≡ 0.  The next result states a comparison principle [FLLX18b, Theorem 2.2], which is an extension of the argument in [RV12, Theorem 2.3]. See also [LL18a, Theorem 4.10] and [VZ15, Lemma 2.6] for results under a monotonicity condition on f (t, ·). Proposition 4.8 Let f (t, u) be continuous and locally Lipschitz in u, and v(t) be continuous. If ∂tα v ≤ f (t, v), and ∂tα u = f (t, u), with v0 ≤ u0 . Then, v ≤ u on the common interval of existence. Proof By Proposition 4.7, problem (4.16) has a unique continuous solution u on [0, t∗ ), with t∗ being the blow-up time. Now fix T ∈ (0, t∗ ), and I = (0, T]. Pick γ large enough so that u(t) and v(t) fall into I × [−γ, γ]. Let L be the Lipschitz constant of f (t, ·) for the region I × [−2γ, 2γ]. Let v = v −  w, with w = Eα,1 (2Lt α ). If  is sufficiently small, v (t) falls into I × [−2γ, 2γ]. Thus, ∂tα v = ∂tα v − 2Lw ≤ f (t, v) − 2Lw ≤ f (t, v ) −  Lw. We claim that for all small , v (t) ≤ u(t),

∀t ∈ I.

(4.23)


If not, define t1 = sup{t ∈ I : v (s) ≤ u(s), ∀s ∈ [0, t]}. Since v (0) = v0 −  < u0 , by continuity, we have t1 > 0. Since (4.23) does not hold, t1 < T. Hence, there exists δ1 > 0, such that v (t1 ) = u(t1 ) and v (t) > u(t) for t ∈ (t1, t1 + δ1 ). Moreover, ∂tα (v − u) ≤ f (t, v ) −  Lw − f (t, u). By continuity, for some δ2 ∈ (0, δ1 ), ∂tα (v − u) ≤ 0 on the interval (t1, t1 + δ2 ). Thus, we have v (t) ≤ u(t) for t ∈ (t1, t1 + δ2 ), which is a contradiction. This shows the claim. Taking  → 0 yields the result on I. Since T is arbitrary, the result follows.  Last we discuss solution regularity. One important issue in the study of classical odes is the smoothness of the solution u to problem (4.16) with 0 < α < 1, under suitable assumptions on f . For classical odes, the smoothness of u improves with that of f : if f ∈ C k−1 ([0, χ] × R), then the solution u to the ode u (t) = f (t, u) with u(0) = c0 belongs to C k [0, χ]. This generally does not hold in the fractional case. A first result gives local regularity of u to problem (4.16) (assuming the unique existence over [0, χ] and D being the domain for f ). Thus, even for a smooth f , u can have a weak singularity at t = 0, but away from the origin, it is indeed smooth. Theorem 4.12 If f ∈ C(D), then u ∈ C[0, χ]. Further, if f ∈ C k (D), then u ∈ C k (0, χ] ∩ C[0, χ], with |u(k) (t)| = O(t α−k ) as t → 0+ . Proof By Proposition 4.6, the solution u satisfies u(t) = c0 + 0 Itα f (·, u(·)). The first term is C[0, χ], and 0 Itα f (·, u(·)) ∈ C[0, χ], since u ∈ C[0, χ] and f (t, u(t)) ∈ C[0, χ]. This shows the first assertion. The technical proof of the second assertion can be found in [BPV99, Theorem 2.1 and Section 4], using theory of Fredholm integral equations developed in [Vai89, PV94].  The next result gives the asymptotic expansion of the solution u at the origin, when α is rational [Lub83, Section 2] (see also [MF71, Theorem 6] for related analyticity results). The proof below is taken from [Die10, Section 6.4]. Theorem 4.13 Let α =

p q

∈ Q, with p < q being relative prime, and f be of the form

1

f (t, u) = f˜(t q , u), with f analytic in a neighborhood of (0, c0 ). Then there exists a uniquely determined analytic function u¯ : (−r, r) for some r > 0 such that 1

u(x) = u(t ¯ q ),

∀t ∈ [0, r).

Proof The proof is based on constructing a formal solution using Puiseux series, and proving the convergence of the series using Lindelöf’s majorant method. Let i  q u(t) = ∞ i=0 ui x with u(0) = c0 . Next we show that there are coefficients ui such that the series converges and the function u solves problem ∫ t 1 (t − s)α−1 f (s, u(s))ds. (4.24) u(t) = c0 + Γ(α) 0 Substituting the series into (4.24) and using the series expansion of f˜ near (0, c0 ):

4.3 ODEs with a Djrbashian-Caputo Fractional Derivative ∞ 

1

f (t, u(t)) = f˜(t q , u(t)) =

123

1

f1 2 t q (u − c0 )2 =

1,2 =0

∞ 

1

f1 2 t q

∞ 

1,2 =0

 2

i

ui t q

,

i=1

we deduce ∞ 

i

ui t q = c0 +

i=0

1 Γ(α)



t

(t − s)α−1

0

∞ 

1

f1 2 s q

1,2 =0

∞ 

i

ui s q

2

ds.

i=1

Next we rearrange the terms in the square bracket as ∞ 

i

ui s q

2

=

i=1

∞   3 =0

 i1 +...+i2 =3

 3 ui1 . . . ui2 s q .

Since i ≥ 1, the case 3 = 0 occurs in the first sum only occurs for 2 = 0, for which we set the coefficient to 1, by convention. Assuming uniform convergence, we can exchange the order of summation and integration and integrate termwise, and obtain ∞ 

i

ui t q = c0 +

i=0

∞ 

 f1 2 c1,3

 i1 +...+i2 =3

1,2,3 =0

 1 +3 +p ui1 . . . ui2 t q

(4.25) i

1 +3 +p 3 with c1,3 = Γ( 1 + + 1)−1 . Comparing the coefficients of t q on both q + 1)Γ( q sides gives

ui =

⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪

∞     ⎪ ⎪ ⎪ c f    , ⎪ 1 2 1 3 ⎪ ⎪ 1 +3 +p=i 2 =0 i1 +...+i2 =3 ⎩

c0, i = 0, 0, i = 1, . . . , p − 1,  ui1 . . . ui2 , i ≥ p.

Thus, for 0 ≤ i < p, ui s are uniquely determined, and for i ≥ p, it satisfies a recurrence relation: ui depends only on coefficients ui , with i ≤ i1 + . . . + i2 =

3 = i − p − 1 < i, since p > 0. Thus, there exists a unique formal solution, and it suffices to show the series converges locally absolutely and uniformly in a neighborhood of t = 0 using Lindelöf’s majorant method. Let 1

F(t q , u(t)) =

∞ 

1

| f1 2 |t q (u − |c0 |)2 .

1,2 =0

This series converges since f˜ is analytic. Next, we study the Volterra integral equation ∫ t 1 (t − s)α−1 F(s, U(s))ds. U(t) = |c0 | + Γ(α) 0


The formal solution U(t) can be computed as u(t). Then U(t) is a majorant of u, and all coefficients of U are positive. Thus, it suffices to prove that the series expansion of U converges for some r > 0. The nonnegativity of the expansion coefficients of U implies that the series expansion of U converges uniformly over [0, r]. Let P+1 (t) =

+1 

i

Ui t q .

i=0

Then the nonnegativity of the coefficients and the recurrent relation imply ∫ t 1 P+1 (t) ≤ |c0 | + (t − s)α−1 F(s, P (s))ds. Γ(α) 0 Choose b > 0, and let c = sup(x,u)∈[0,b]×[0,2c0 ] Γ(α + 1)−1 F(t, u),and r :=

min(b, |c0 | α (c )− α ). We claim |P (t)| ≤ 2|c0 | for all = 0, 1, . . . and all t ∈ [0, r], and prove it by mathematical induction. The case = 0 is obvious by the choice of r. The induction step from to + 1 follows by ∫ t 1 (t − s)α−1 F(s, P (s))ds |P+1 (t)| ≤ |c0 | + Γ(α) 0 1

1

≤ |c0 | + Γ(α + 1)−1 r α sup F(t, P (t)) ≤ |c0 | + r α c ≤ 2|c0 |. t ∈[0,r]

Thus, the sequence {P } is uniformly bounded on [0, r] and monotone, and hence uniformly convergent. Since it has the form of a power series, it converges uniformly on compact subsets of [0, r). This justifies the interchange of summation and integration and completes the proof of the theorem.  When α is irrational, the result is slightly different. Corollary 4.4 Let α ∈ (0, 1) be irrational, and f (t, u) = f˜(t, t α, u), where f˜ be analytic in a neighborhood of (0, 0, c0 ). Then there exists a uniquely determined ˜ tα) analytic function u˜ : (−r, r)×(−r α, r α ) → R with some r > 0 such that u(t) = u(t, for t ∈ [0, r). Proof The assumption implies the local existenceof a unique continuous solution. i1 +i2 α into (4.24) and Then we substitute the formal expansion u(t) = ∞ i1 i2 =0 ui1 i2 t repeat the argument in Theorem 4.13 to get u0,0 = c0 , and   ui1 i2 = γm1 m2 j1 j2 fm1 m2  un1 k1 . . . un k m1 +j1 =i1 m2 +j2 =i2

n1 +...+n =j1 k1 +...+k =j2

for the remaining cases, with the coefficients γm1 m2 j1 j2 = Γ(m1 + j1 + (m2 + j2 )α + 1)Γ(m1 + j1 + (m2 + j2 + 1)α + 1)−1 . The coefficient γm1 m2 j1 j2 is uniquely determined by the coefficients with smaller indices. The rest of the proof is analogous to Theorem 4.13, and thus it is omitted. 
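The structure asserted in Theorem 4.13 can be seen explicitly for the simple equation $\partial_t^{\alpha} u = u$, $u(0) = 1$, whose solution is $E_{\alpha,1}(t^{\alpha})$: for rational $\alpha = p/q$ the series is a power series in $t^{1/q}$. The Python sketch below is an illustrative check (not code from the text); it prints the first few exponents and coefficients for $\alpha = 1/2$.

```python
import math

def expansion_terms(alpha_p, alpha_q, nterms=6):
    """Leading terms of u(t) = E_{alpha,1}(t^alpha) for alpha = p/q:
    u(t) = sum_k t^{k*alpha} / Gamma(k*alpha + 1), a series in powers of t^{1/q}."""
    alpha = alpha_p / alpha_q
    for k in range(nterms):
        exponent = k * alpha                       # a multiple of 1/q
        coeff = 1.0 / math.gamma(k * alpha + 1.0)
        print(f"t^({k}*{alpha_p}/{alpha_q}) = t^{exponent:.2f}:  coefficient {coeff:.6f}")

expansion_terms(1, 2)   # alpha = 1/2: exponents 0, 1/2, 1, 3/2, ...
```

The smallest noninteger exponent appearing is $\alpha$ itself, in agreement with Corollary 4.5(i) below.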


The proofs give also results under weaker assumptions on f . Corollary 4.5 Let k ∈ N, and 0 < α < 1. (i) If α =

p q

∈ Q, with p < q being relative prime, and f be of the form f (t, u) =

1 q

f˜(t , u), with f˜ ∈ C k (D). Then the solution of problem (4.16) has an asymptotic 1 expansion in powers of t q as t → 0, and the smallest noninteger exponent is α. (ii) If α is irrational, and f is of the form f (t, u) = f˜(t, t α, u) with f˜ ∈ C k ([0, χ] × [0, χ α ] × [c0 − r, c0 + r]) for some r > 0. Then the solution u to problem (4.16) has an asymptotic expansion in mixed powers of t and t α as t → 0. In the rest of the section, we discuss autonomous fractional odes (with 0 < α < 1) ∂tα u = f (u),

in I,

with u(0) = c0

(4.26)

with c0 ∈ R. This problem was studied in detail in [FLLX18a], from which all the results stated below are taken, unless otherwise stated. Similar to Proposition 4.7, if f (u) is locally Lipschitz on an interval (a, b) ⊂ R, then for any c0 ∈ (a, b), there is a unique continuous solution u to problem (4.26) with u(0) = c0 . Either it exists globally on R+ or there exists t∗ > 0 such that either lim inf t→t∗− u(t) = a or lim supt→t∗− u(t) = b. The next result shows that f (u(t)) does not change sign. Theorem 4.14 Let f be locally Lipschitz. If f (c0 )  0, then f (u(t)) f (c0 ) ≥ 0, for all t ∈ (0, t∗ ), and f (u(t)) f (c0 ) = 0 can hold only on a nowhere dense set. Proof We may assume f (c0 ) > 0. Let t ∗ = inf{t > 0 : ∃δ > 0, s.t. f (u(s)) ≤ 0, ∀s ∈ [t, t + δ]}. Since f (c0 ) > 0, t ∗ > 0. It suffices to show t ∗ = ∞. We argue by contradiction. Suppose t ∗ < ∞, then there exists a δ > 0 such that f (u(t)) ≤ 0 for all t ∈ [t ∗, t ∗ + δ]. We claim that u(t) < u(t ∗ ) for any t ∈ (t ∗, t ∗ + δ). Indeed, u(t) − u(t ∗ ) = (0 Itα f (u))(t) − (0 Itα f (u))(t ∗ )  ∫ t ∗ ∫ t 1 ((t − s)α−1 − (t ∗ − s)α−1 ) f (u(s))ds + (t − s)α−1 f (u(s))ds . = Γ(α) 0 t∗ Note that f (u(t)) ≥ 0 for t ∈ (0, t ∗ ), and f (u(t)) is strictly positive when t is close to 0. In addition, f (u(t)) ≤ 0 for t ∈ [t ∗, t ∗ + δ]. Thus, the right-hand side is strictly negative. Hence u(t) < u(t ∗ ) and the claim is proved. By the continuity of u, there exist t1, t2 , 0 ≤ t1 < t2 ≤ t ∗ such that for any s ∈ [t1, t2 ], there exists ts ∈ [t ∗, t ∗ + δ] such that u(s) = u(ts ). Then for any s ∈ [t1, t2 ], f (u(s)) = f (u(ts )) ≤ 0, which  contradicts with the definition of t ∗ . For the usual derivative, the fact f (u(t)) has a definite sign implies that the solution u is monotone. For fractional derivatives, this is less obvious, however it does hold if f is close to C 2 . First we prove the positivity of the solution to an integral equation, which is a slightly different version of [Wei75, Theorem 1]. Lemma 4.2 Let h ∈ L 1 (I), h > 0 a.e. satisfy

$$h(t) - \int_0^t r_{\lambda}(t - s)h(s)\,ds > 0 \quad \text{a.e. } t \in I$$
for any $\lambda > 0$, where $r_{\lambda}$ is the resolvent for the kernel $\lambda t^{\alpha-1}$, satisfying
$$r_{\lambda}(t) + \lambda\int_0^t (t - s)^{\alpha-1} r_{\lambda}(s)\,ds = \lambda t^{\alpha-1}. \qquad (4.27)$$
Then for $v \in C(I)$, the integral equation
$$y(t) + \int_0^t (t - s)^{\alpha-1} v(s) y(s)\,ds = h(t) \qquad (4.28)$$
has a unique solution $y(t) \in L^1(I)$, which satisfies $y(t) > 0$ a.e.

Proof It can be verified directly that
$$r_{\lambda}(t) = -\frac{\mathrm{d}}{\mathrm{d}t} E_{\alpha,1}(-\lambda\Gamma(\alpha)t^{\alpha}) \in L^1(I) \cap C(0, T]$$

and rλ > 0. The existence and uniqueness of a solution y ∈ L 1 (I) to (4.28) follow similar as Theorem 4.5. We prove only y > 0 a.e. Convolving (4.28) with rλ gives ∫ t ∫ t−s ∫ t v(s) y(s)ds rλ (t − s)y(s)ds + λ(t − s − ξ)γ−1 rλ (ξ)dξ λ 0 0 0 ∫ t = rλ (t − s)h(s)ds. 0

Subtracting this identity from (4.28) and using (4.27) yield ∫ t ∫ t ∫ t v rλ (t − s)y(s)ds + rλ (t − s) y(s)ds = h(t) − rλ (t − s)h(s)ds. y(t) − λ 0 0 0 Thus, y also solves ∫ t   ∫ t y(t) = h(t) − rλ (t − s)h(s)ds + rλ (t − s)(1 − λ−1 v(s))y(s)ds. 0

0

∫t

By assumption, h(t) − 0 rλ (t − s)h(s)ds > 0. Since v ∈ C(I), there exists a cv > 0 such that |v| ≤ cv on I. Picking λ > cv , 1 − λv > 0 and then y ≥ 0 a.e. on [0, T) follows.  Theorem 4.15 Let f ∈ C 1 (a, b) for (a, b) ⊂ R and f be locally Lipschitz on (a, b). Then, the solution u to problem (4.26) with u(0) = c0 ∈ (a, b) is monotone on the interval (0, t∗ ) of existence. If f (c0 )  0, the monotonicity is strict. Proof Clearly, if f (c0 ) = 0, then u = c0 is the solution by the uniqueness and is monotone. Now, we assume f (c0 ) > 0. Then by the regularity theory, u ∈ C 1 (0, t∗ ) ∩ C[0, t∗ ). Now, we fix T ∈ (0, t∗ ). The derivative y = u satisfies

$$y(t) = \frac{f(c_0)}{\Gamma(\alpha)}\, t^{\alpha-1} + \frac{1}{\Gamma(\alpha)}\int_0^t (t - s)^{\alpha-1} f'(u(s))\, y(s)\,ds.$$
Since $f'(u(t))$ is continuous on $I$ and $f(c_0) > 0$, Lemma 4.2 implies that $y$ is positive on $(0, T)$. Since $T$ is arbitrary, $y > 0$ on $(0, t_*)$, and $u$ is increasing. The argument for $f(c_0) < 0$ is similar. □

Example 4.8 Given $0 < \alpha < 1$ and $c_0 \in \mathbb{R}$, consider
$$\partial_t^{\alpha} u(t) = -u(t)(1 - u(t)), \quad t > 0, \quad \text{with } u(0) = c_0. \qquad (4.29)$$

For $\alpha = 1$, it is known that (i) if $c_0 \in (0, 1)$, the solution $u$ exists globally, and (ii) if $c_0 > 1$, $u$ does not exist globally. For $0 < \alpha < 1$, by Theorem 4.9, there exists locally a unique continuous solution $u$. If $0 < c_0 < 1$, by Proposition 4.5, $u$ satisfies
$$u(t) = E_{\alpha,1}(-t^{\alpha})c_0 + \int_0^t (t - s)^{\alpha-1} E_{\alpha,\alpha}(-(t - s)^{\alpha})\, u^2(s)\,ds.$$
Since $E_{\alpha,1}(-t^{\alpha}) > 0$ and $E_{\alpha,\alpha}(-t^{\alpha}) > 0$, we have $u(t) > 0$ for $c_0 > 0$. Let $\bar u(t) \equiv 1$ for $t > 0$. Since $0 < c_0 < 1$, then $c_0 < \bar u(0)$, and $\partial_t^{\alpha}\bar u(t) = 0 = -\bar u(t)(1 - \bar u(t))$. Meanwhile, $u(t) < \bar u(t) = 1$, and thus $0 < u(t) < 1$ for all $t > 0$. See Fig. 4.3 for an illustration. Note that in all cases, $u(t)$ decays to zero as $t$ grows. However, the decay rate differs markedly with $\alpha$.


Fig. 4.3 The solution $u(t)$ for Example 4.8 with different $c_0$, for $\alpha = 0.3, 0.5, 0.7$.
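A simple way to reproduce Fig. 4.3 numerically is to discretize the equivalent Volterra equation $u(t) = c_0 + {}_0I_t^{\alpha} f(u)$ with a product-rectangle (piecewise-constant) rule. The sketch below is a minimal illustration under these assumptions; it is a basic first-order scheme, not one of the methods of Section 6.7, and the step size, final time and initial datum are arbitrary choices.

```python
import math

def solve_fode(f, c0, alpha, T=10.0, n=2000):
    """Solve d_t^alpha u = f(u), u(0) = c0, by a product-rectangle rule for
    u(t) = c0 + I^alpha f(u): u_k = c0 + sum_{j<k} w_{k,j} f(u_j)."""
    h = T / n
    u = [c0]
    for k in range(1, n + 1):
        acc = 0.0
        for j in range(k):
            # exact integral of (t_k - s)^(alpha-1)/Gamma(alpha) over [t_j, t_{j+1}]
            w = (((k - j) * h) ** alpha - ((k - j - 1) * h) ** alpha) / math.gamma(alpha + 1)
            acc += w * f(u[j])
        u.append(c0 + acc)
    return [i * h for i in range(n + 1)], u

f = lambda v: -v * (1.0 - v)           # right-hand side of (4.29)
for alpha in (0.3, 0.5, 0.7):
    ts, us = solve_fode(f, c0=0.8, alpha=alpha)
    print(f"alpha={alpha}: u(2)={us[400]:.4f}, u(10)={us[-1]:.4f}")
```

The printed values exhibit the slow, $\alpha$-dependent algebraic decay visible in Fig. 4.3: the smaller $\alpha$ is, the slower the solution approaches zero.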

For classical odes, the solution curves do not intersect, which holds also for problem (4.26). (See also [DF12, Theorem 4.1] for problem (4.16) with α ∈ (0, 1).) Proposition 4.9 If f (u) is locally Lipschitz and nondecreasing, then the solution curves of (4.26) do not intersect with each other. Now we present some results regarding the blow-up behavior.


Lemma 4.3 Let f (u) be locally Lipschitz and nondecreasing on (0, ∞), c0 > 0 and f (c0 ) > 0. Then, the solution u to problem (4.26) is nondecreasing on (0, t∗ ) and limt→t∗− u(t) = +∞, with t∗ ∈ (0, ∞]. Proof Note that f is less regular than that in Theorem 4.15, and thus it cannot be applied directly. Let u0 = c0 , and for n ≥ 1, define ∂tα un = f (un−1 ), with un (0) = c0 . Then un is continuous on R+ . Since f (u0 ) > 0, we have u1 (t) = c0 + 0 Itα f (u0 )(t) ≥ c0 = u0 (t) for t ∈ R+ . Thus, f (u1 (t)) ≥ f (u0 (t)) and u2 (t) = c0 + 0 Itα f (u1 )(t) ≥ c0 + 0 Itα f (u0 )(t) = u1 (t). By induction, un (t) ≥ un−1 (t) for all n ≥ 1. Next, we claim u(t) > u0, ∀t ∈ (0, t∗ ). Let t ∗ = sup{t¯ ∈ (0, t∗ ) : f (u(t)) > 0, ∀t ∈ (0, t¯)}, and we prove t ∗ = t∗ . By the continuity of u(t) and f (u), the fact f (c0 ) > 0 implies t ∗ > 0. Indeed, if t ∗ < t∗ , then f (u(t ∗ )) = 0 by the continuity of f and u. In addition, by the definition of t ∗ , u(t ∗ ) = c0 + (0 Itα f (u))(t ∗ ) > u0 . Since f is nondecreasing, f (u(t ∗ )) ≥ f (u0 ) > 0, which is a contradiction. Using u(t) ≥ u0 , we obtain u(t) = c0 + (0 Itα f (u))(t) ≥ c0 + (0 Itα f (u0 ))(t) = u1 (t) for t ∈ [0, t∗ ), and by induction, u(t) ≥ u2 (t) and u(t) ≥ u3 (t), etc. Moreover, since f is nondecreasing and f (un ) is positive, for any 0 ≤ t1 < t2 < ∞: u1 (t2 ) = c0 + (0 Itα f (u0 ))(t2 ) ≥ c0 + (0 Itα f (u0 ))(t1 ) = u1 (t1 ). Thus, u1 is nondecreasing on R+ . Similarly, u2 (t2 ) = c0 + (0 Itα f (u1 ))(t2 ) ≥ c0 + ≥ c0 +

1 Γ(α)



t2

t2 −t1

1 Γ(α)



t2

t2 −t1

(t2 − s)α−1 f (u1 (s))ds

(t2 − s)α−1 f (u1 (s − (t2 − t1 )))ds = u2 (t1 ),

i.e., u2 is nondecreasing on R+ . By induction, un is nondecreasing. Hence the ¯ for any t ∈ [0, t∗ ). By sequence {un (t)} converges to a nondecreasing function u(t) ¯ This and the monotone convergence theorem, u¯ satisfies u(t) ¯ = c0 + (0 Itα f (u))(t). the uniqueness shows u¯ = u. Hence, u is nondecreasing. If t∗ < ∞, by the definition of t∗ and the monotonicity, we have limt→t∗− u(t) = ∞. If t∗ = ∞, we find u(t) = c0 + (0 Itα f (u))(t) ≥ c0 + f (u0 )(0 Itα 1)(t) → ∞. This completes the proof of the lemma.




Now we can give an Osgood type criterion for blow up. Proposition 4.10 Let f (u) be locally Lipschitz, nondecreasing ∫on R+ , c0 > 0 and 1 ∞ u ) α du f (c0 ) > 0. Then, t∗ < ∞ if and only if there exists U > 0 such that U ( f (u) u < ∞. 1

Proof By Lemma 4.3, u is increasing and u(t) → ∞ as t → t∗− . Pick r > max(1, c0α ). There exists tn < t∗ so that u(tn ) = r nα for n = 1, 2, . . .. Then we have ∫ f (u(tn−1 )) tn α u(tn ) = c0 + (0 It f (u))(tn ) ≥ (tn − s)α−1 ds Γ(α) tn−1 = Γ(1 + α)−1 (tn − tn−1 )α f (u(tn−1 )). Thus, there exist constants c1 (α) > 0 and c2 (α, r) > 0 such that tn − tn−1 ≤ c1 (α)u(tn ) α f (u(tn−1 ))− α = c1 (α)r 2 (r − 1)−1 (r n−1 − r n−2 ) f (r (n−1)α )− α ∫ r n−1 1 ≤ c2 (α, r) f (s α )− α ds. 1

1

1

r n−2

Meanwhile, ∫ tn−1 1 (tn−1 − s)α−1 f (u(s))ds u(tn ) =c0 + (0 Itα f (u))(tn ) ≤ c0 + Γ(α) 0 ∫ tn f (u(tn )) 1 (tn − tn−1 )α . (tn − s)α−1 f (u(s))ds ≤ u(tn−1 ) + + Γ(α) tn−1 Γ(1 + α) Thus, there exist c¯1 (α) > 0 and c¯2 (α, r) > 0 such that tn −tn−1 ≥ c¯1 (α)(r−1) α (r(r−1))−1 (r n+1 −r n ) f (r nα )− α ≥ c¯2 (α, r) 1

1

∫∞



r n+1

rn

f (s α )− α ds. 1

Hence, $t_* < \infty$ if and only if $\int^{\infty} f(\tau^{\frac{1}{\alpha}})^{-\frac{1}{\alpha}}\,d\tau < \infty$, or equivalently, there exists some $U > 0$ such that $\int_U^{\infty} u^{\frac{1}{\alpha}-1} f(u)^{-\frac{1}{\alpha}}\,du < \infty$. This completes the proof. □
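For the power nonlinearity $f(u) = u^{\gamma}$ the Osgood-type integral can be evaluated directly: the integrand is $u^{(1-\gamma)/\alpha - 1}$, so the integral is finite exactly when $\gamma > 1$, matching the blow-up dichotomy discussed below. The short sketch (an illustrative check with arbitrarily chosen parameters) confirms this numerically by truncating the integral at a large cutoff.

```python
import math

def osgood_integral(alpha, gamma_, U=1.0, cutoff=1e8, n=20000):
    """Approximate int_U^cutoff u^(1/alpha-1) * (u^gamma)^(-1/alpha) du by the
    midpoint rule on a geometric grid; the integrand is u^((1-gamma)/alpha - 1)."""
    total = 0.0
    ratio = (cutoff / U) ** (1.0 / n)
    a = U
    for _ in range(n):
        b = a * ratio
        mid = math.sqrt(a * b)
        total += mid ** ((1.0 - gamma_) / alpha - 1.0) * (b - a)
        a = b
    return total

for gamma_ in (0.5, 1.0, 2.0):
    val = osgood_integral(alpha=0.6, gamma_=gamma_)
    tag = "finite (blow-up)" if gamma_ > 1 else "divergent (global existence)"
    print(f"gamma={gamma_}: truncated integral = {val:.4e}  ({tag})")
```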

To gain further insights, below we analyze the power nonlinearity: given c0 > 0, α ∈ (0, 1), γ > 0 and ν ∈ R, ∂tα u = νuγ,

t > 0,

with u(0) = c0 .

(4.30)

This problem is representative among autonomous problems. For example, if $\gamma > 0$ and there exist $c_1, c_2 > 0$ such that $c_1 u^{\gamma} \le f(u) \le c_2 u^{\gamma}$, then the solution $u$ is under control, according to the comparison principles. Several properties can be deduced directly from the preceding discussions: the solution curves do not intersect; for $\gamma \in [0, 1]$ and $\nu > 0$, the solution $u$ exists globally, but for $\nu > 0$ and $\gamma > 1$, the solution blows up in finite time. Below we discuss two cases, i.e., $\nu < 0$, $\gamma > 0$, and $\nu > 0$, $\gamma > 1$, and refer to the exercises for several other cases. The next result [VZ15, Theorem 7.1] analyzes the case $\nu < 0$ and $\gamma > 0$: the case $\alpha \in (0, 1)$ differs markedly from $\alpha = 1$, which has an algebraic decay $u(t) \sim c\, t^{-\frac{1}{\gamma-1}}$ for $\gamma > 1$, exponential decay for $\gamma = 1$, and extinction in finite time for $\gamma < 1$.

Proposition 4.11 Let $\gamma > 0$, $\nu < 0$, $c_0 > 0$, and let $u \in W^{1,1}_{loc}(\mathbb{R}_+)$ be the solution to problem (4.30). Then there exist constants $c_1, c_2 > 0$ such that
$$c_1\big(1 + t^{\frac{\alpha}{\gamma}}\big)^{-1} \le u(t) \le c_2\big(1 + t^{\frac{\alpha}{\gamma}}\big)^{-1}, \quad t \ge 0.$$

Proof The proof employs Proposition 4.8. Let ωα (t) = Γ(α)−1 t α−1 . First we con1 1 γ struct a subsolution. Let μ := −cα νc0 and ε := (c0 Γ(1 + α)) α (2μ)− α , with απ cα = Γ(1 − α)Γ(1 + α) = sin απ > 1, in view of (2.2) and (2.3). Now let α α v(t) = c0 − μω1+α (t) for t ∈ [0, ε] and v(t) = ct − γ for t > ε, with c = ε γ c20 . 1,1 Then v ∈ Wloc (R+ ), v(0) = c0 , v(ε) = c20 , v is nonincreasing, and v(t) > 0 for all t ≥ 0. For any t ∈ (0, ε), since ∂tα ω1+α (t) = 1, we have ∂tα v − νvγ = −μ∂tα ω1+α − ν(c0 − μω1+α )γ γ γ ≤ −μ − νc0 = −(−cα + 1)νc0 ≤ 0, by the definition of μ. Meanwhile, since v (t) ≤ 0 and ω1−α is nonnegative and decreasing, for t > ε, there holds ∫ ε ∫ ε c0 ω1−α (t − s)v (s) ds ≤ ω1−α (t) v (s) ds = −ω1−α (t) . ∂tα v(t) ≤ 2 0 0 This, using the definitions of ω1−α , μ, ε and c, we deduce ∂tα v − νvγ ≤ −ω1−α (t) c20 − νcγ t −α = −ω1−α (t) c20 (1 − 2−γ ) ≤ 0. Hence v is a subsolution of problem (4.30). Next we construct a supersolution. α 1−γ

Define t0 > 0 by νt0α = c0 ω1−α ( 12 ) + 2α+ γ cα , with cα = α(γΓ(2 − α))−1 . Let α

α

w(t) = c0 for t ∈ [0, t0 ] and w(t) = ct − γ for t ≥ t0 with c = c0 t0γ . For t < t0 , ∂tα w − νwγ = −νwγ ≥ 0. Note that for t > t0 , ∫ α α t ∂tα w(t) = −c ω1−α (t − s)s− γ −1 ds. γ t0 We denote the integral by I(t), and further analyze the cases t ∈ [t0, 2t0 ] and t > 2t0 separately. For t ∈ [t0, 2t0 ], we have ∫ t −α − α −1 − α −1 γ −1 ω1−α (t − s)ds = t0 γ ω2−α (t − t0 ) ≤ t0 γ ω2−α (t0 ). I(t) ≤ t0 t0

This and the value of c imply ∂tα w(t) ≥ −cα c0 t0−α ≥ −2α cα c0 t −α . Now the definition of w and the choice of t0 lead to

1−γ ∂tα w(t) ≥ νw(t)γ 2α cα c0 (νt0α )−1 ≥ νw(t)γ .


Further, for t > 2t0 , by changing variables, we have ∫ 1 α α I(t) = t −α− γ t ω1−α (1 − s)s− γ −1 ds. 0 t

Next we split the integral over [ tt0 , 12 ] and [ 12 , 1], denoted by I1 (t) and I2 , respectively. α α Then direct computation gives I1 ≤ ω1−α ( 12 )γα−1 ( tt0 )− γ and I2 ≤ ω2−α ( 12 )2 γ +1 . These estimates and the choice of t0 give α

∂tα w(t) ≥ νw(t)γ c0 (νt0α )−1 (ω1−α ( 12 ) + cα 2α+ γ ) = νw(t)γ . 1−γ

Thus, $w$ is a supersolution of (4.30). The assertion follows from Proposition 4.8. □

Now we derive suitable bounds on the blow-up time $t_*$, for $\nu > 0$ and $\gamma > 1$, to understand the effects of the memory.

Theorem 4.16 Let $\gamma > 1$, $\nu > 0$ and $c_0 > 0$. Then for the blow-up time $t_*$ of the solution to problem (4.30), there holds
$$\Gamma(1 + \alpha)^{\frac{1}{\alpha}}\big(\nu c_0^{\gamma-1} G(\gamma)\big)^{-\frac{1}{\alpha}} \le t_* \le \Gamma(1 + \alpha)^{\frac{1}{\alpha}}\big(\nu c_0^{\gamma-1} H(\gamma, \alpha)\big)^{-\frac{1}{\alpha}}$$

with G(γ) = min(2γ, γγ (γ − 1)1−γ ) and H(γ, α) = max(γ − 1, 2− γ−1 ). Hence, with ν > 0, γ > 1 fixed, there exist c02 > c01 > 0 such that whenever c0 < c01 , limα→0+ t∗ = ∞, while c0 > c02 implies limα→0+ t∗ = 0. Proof Let r > 1, and choose tn such that u(tn ) = c0 r nα . Then 0 = t0 < t1 < t2 . . .. Let ωα (t) = Γ(α)−1 t α−1 . The following relation ∫ tn−1 ∫ tn ωα (tn − s) f (u(s))ds + ωα (tn − s) f (u(s))ds u(tn ) = c0 + tn−1

0

∫ ≤ c0 +

tn−1

0

ωα (tn−1 − s) f (u(s))ds + ωα+1 (tn − tn−1 ) f (u(tn ))

yields ωα+1 (tn − tn−1 ) ≥ c0 r nα (1 − r −α ) f (c0 r nα )−1 . Hence γ−1

tn − tn−1 ≥ Γ(1 + α) α (νc0 )− α (1 − r −α ) α r −n(γ−1) . 1

1

1

Then the lower bound follows by t∗ =

∞  n=1

γ−1

tn − tn−1 ≥ Γ(1 + α) α (νc0 )− α (r α − 1) α r −1 (r γ−1 − 1)−1 . 1

1

1

To obtain the upper bound, we fix m ≥ 1, and then deduce ∫ t u(t) ≥ c0 + ωα (tm − s) f (u(s))ds := v(t), t ∈ (0, tm ). 0


Then v(tm ) = u(tm ), and v (t) = Γ(α)−1 (tm −t)α−1 f (u(t)) ≥ Γ(α)−1 (tm −t)α−1 f (v(t)) ∫ u(t ) −1 α and c m fdv (v) ≥ Γ(1 + α) tm, implying 0

γ−1

tm ≤ Γ(1 + α) α (ν(γ − 1)c0 )− α (1 − r mα(1−γ) ) α . 1

For n ≥ m + 1, we have



u(tn ) ≥ c0 +

tn

tn−1

1

1

(4.31)

ωα (tn − s) f (u(tn−1 ))ds,

and thus Γ(1 + α)−1 (tn − tn−1 )α f (u(tn−1 )) ≤ c0 r nα − c0 ≤ u0 r nα .

(4.32)

Combining (4.31) and (4.32) gives an upper bound t∗ =

∞  n=m+1

γ−1

(tn − tn−1 ) + tm ≤ Γ(1 + α) α (νc0 )− α r γ (r (m+1)(γ−1) − r m(γ−1) )−1 + tm 1

γ−1

1

= Γ(1 + α) α (νc0 )− α (r γ (r (m+1)(γ−1) − r m(γ−1) )−1 + (1 − r mα(1−γ) ) α (γ − 1)− α ). 1

1

1

1

Next we optimize the bounds with respect to the parameters r and m to obtain 1 the desired bound. For the lower bound, picking r = 2 α > 1 gives supr >1 (r α − γ γ 1 1 1 1 1) α (r γ − r)−1 ≥ (2 α − 2 α )−1 ≥ 2− α . Similarly, picking r = γ α (γ − 1)− α yields 1 1 supr >1 (r α − 1) α (r γ − r)−1 ≥ [(γ − 1)γ−1 γ −γ ] α . This shows the lower bound. For the upper bound, we fix m > (γ − 1)−1 , and let r → ∞: r γ (r (m+1)(γ−1) − r m(γ−1) )−1 + [(1 − r mα(1−γ) )(γ − 1)−1 ] α → (γ − 1)− α . 1

1

1

Instead choosing m = 1 and r = 2 γ−1 > 1 gives γ

r γ [r (m+1)(γ−1) −r m(γ−1) ]+(1−r mα(1−γ) ) α (γ −1)− α = 2 γ−1 −1 +2−1 (2α −1) α (γ −1)− α . 1

γ

1

1

1

Let Q(γ) := 2 γ−1 ( 2γ−1 α −1 ) α . By elementary calculus, we have 1

Q (γ) = Q(γ)(ln Q(γ)) = Q(γ)(γ − 1)−2 α−1 (γ − 1 − α ln 2). Hence, Q(γ) ≥ Q(α ln 2 + 1) = 2(αe log 2) α (2α − 1)− α ≥ 2(e ln 2) α ≥ 2. 1

1

1

For the second inequality, since α − 2α + 1 is concave on (0, 1) and equals zero at α = 0, 1, α > 2α − 1 for α ∈ (0, 1). We find γ

γ

t∗ ≤ 2−1 2 γ−1 + 2−1 (2α − 1) α (γ − 1)− α < 2 γ−1 , 1

1

4.3 ODEs with a Djrbashian-Caputo Fractional Derivative

133 γ

γ

and the upper bound follows. Then we can pick c01 = ν − γ−1 max(2− γ−1 , (γ − 1)γ 1−γ ) 1 1 and c02 = ν − γ−1 min(1, (γ − 1)− γ−1 ) to obtain the last assertion.  1

Remark 4.2 Theorem 4.16 shows the role of memory, which becomes stronger as $\alpha \to 0^+$. When $c_0$ is very small, the memory defers the blow-up. If $c_0$ is large, the memory accelerates the blow-up. The critical value of $c_0$ might be determined by the limiting case $\alpha = 0$: $u - c_0 = \nu u^{\gamma}$. If $c_0 > \gamma^{-1}(\gamma - 1)(\nu\gamma)^{-\frac{1}{\gamma-1}}$, this algebraic equation has no solution, which means that the blow-up time is zero. If $c_0 < \gamma^{-1}(\gamma - 1)(\nu\gamma)^{-\frac{1}{\gamma-1}}$, there is a constant solution for $t > 0$, i.e., the blow-up time is infinity.
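The bounds of Theorem 4.16 and the threshold in Remark 4.2 are easy to tabulate. The sketch below evaluates the lower and upper bounds on $t_*$ for a few values of $\alpha$, reading the theorem's constants as $G(\gamma) = \min(2^{\gamma}, \gamma^{\gamma}(\gamma-1)^{1-\gamma})$ and $H(\gamma,\alpha) = \max(\gamma-1, 2^{-\gamma\alpha/(\gamma-1)})$; the parameter values are illustrative only.

```python
import math

def blowup_bounds(alpha, gamma_, nu, c0):
    """Lower/upper bounds on the blow-up time t_* from Theorem 4.16."""
    G = min(2.0**gamma_, gamma_**gamma_ * (gamma_ - 1.0)**(1.0 - gamma_))
    H = max(gamma_ - 1.0, 2.0**(-gamma_ * alpha / (gamma_ - 1.0)))
    pref = math.gamma(1.0 + alpha) ** (1.0 / alpha)
    lower = pref * (nu * c0**(gamma_ - 1.0) * G) ** (-1.0 / alpha)
    upper = pref * (nu * c0**(gamma_ - 1.0) * H) ** (-1.0 / alpha)
    return lower, upper

gamma_, nu = 2.0, 1.0
crit = (gamma_ - 1.0) / gamma_ * (nu * gamma_) ** (-1.0 / (gamma_ - 1.0))  # Remark 4.2 threshold
print(f"alpha -> 0 threshold for c0 (Remark 4.2): {crit:.4f}")
for c0 in (0.1, 1.0):
    for alpha in (0.8, 0.4, 0.1):
        lo, up = blowup_bounds(alpha, gamma_, nu, c0)
        print(f"c0={c0}, alpha={alpha}: {lo:.3e} <= t_* <= {up:.3e}")
```

For small $c_0$ (below the threshold) the lower bound grows rapidly as $\alpha$ decreases, consistent with the memory deferring the blow-up.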

Exercises Exercise 4.1 Prove the standard Gronwall’s inequality in (4.1). Exercise 4.2 Prove the following uniform version of Gronwall’s∫inequality. Let g, t+r 1 h, u ∈ Lloc,+ (0, ∞) such that u ≤ gu + h, for all t ≥ 0, and t g(s)ds ≤ a1 , ∫ t+r ∫ t+r h(s)ds ≤ a2 , t u(s)ds ≤ a3 , for any t ≥ 0, with r, a1 , a2 , a3 > 0. Then t u(t + r) ≤ (a3 r −1 + a2 )e a1 ,

∀t ≥ 0.

Exercise 4.3 Let a, b > 0, and u ∈ L+∞ (I) satisfies ∫ t (t − s)−β u(s)ds, u(t) ≤ a + b

∀t ∈ I.

0

Then Theorems 4.2 and 4.3 can both be applied. Which theorem gives a sharper estimate? Exercise 4.4 [LY20, Lemma 2.4] This exercise is about an extended Gronwall’s q q

inequality. Let α ∈ (0, 1) and q > α−1 . Let L ∈ L+ (I) and a, u ∈ L+q−1 (I). Suppose ∫ t (t − s)α−1 L(s)u(s)ds, a.e. t ∈ I. u(t) ≤ a(t) + 0

Then there exists a constant c > 0 such that ∫ t u(t) ≤ a(t) + c (t − s)α−1 L(s)a(s)ds,

a.e. t ∈ I.

0

Exercise 4.5 For problem (4.14), it is very tempting to specify the initial conditions in the classical manner: u(j) (0) = c j , j = 0, 1, . . . , n − 1. For the case 0 < α < 1 and u ∈ C(I), show that (0 It1−α u)(0) = 0. Discuss the implications of this observation.


Exercise 4.6 Prove that if the mapping f → f (t, u(t)) ∈ L 1 (I) holds for every u ∈ L 1 (I), then it is necessary and sufficient that there exist a ∈ L 1 (I) and some constant b > 0 such that | f (t, u)| ≤ a(t) + b|u| for all (t, u). Exercise 4.7 In Theorem 4.5, can the Lipschitz continuity of f (t, u) in u be relaxed to a Hölder continuity with exponent γ ∈ (0, 1), i.e., | f (t, u1 ) − f (t, u2 )| ≤ Lγ |u1 − u2 |γ ? Exercise 4.8 Prove Theorem 4.6 using Bielecki’s weighted norm. Exercise 4.9 In this exercise, we consider a nonlinear problem  R α ∂t u = f (t, u), t ∈ I α−j

(R∂t

u)(0) = c j ,

j = 1, . . . , n

with {c j } nj=1 ⊂ R. Suppose that f is continuous and uniformly bounded by c ∈ R, and it is Lipschitz with respect to u with a constant L. Show that there exists a unique continuous solution to the problem in the region D defined by  D = (t, u) : 0 < t ≤ h, |t n−α u(t) − cn Γ(α − n + 1)−1 | ≤ a with the constant a >

n−1

h n− j c j j=0 Γ(α−j+1) .

Exercise 4.10 Prove Proposition 4.4 using Laplace transform. Exercise 4.11 Determine the first three leading terms in the solution for problem (4.10) with 1 < α < 2. Exercise 4.12 Under what conditions on f does the solution u to problem (4.10) with 0 < α < 1 belong to u ∈ C 1 (I)? Exercise 4.13 Consider the following singular Cauchy problem for the RiemannLiouville fractional derivative with 0 < γ < α < 1:  R α γ ∂t u(t) = f (t, u, R∂t u), in I, 1−α u(0) 0 It

= c0

with c0 > 0, and let f : I ×R×R → R be continuous, for t ∈ I, with f (t, u, v) ∈ L 1 (I) for u, v ∈ L 1 (I). Prove the following results. (i) u ∈ L 1 (I) with 0 It1−α u ∈ AC(I) solves the problem if and only if u satisfies ∫ t c0 α−1 1 γ t u(t) = + (t − s)α−1 f (s, u(s), R∂s u(s))ds. Γ(α) Γ(α) 0 (ii) If, in addition, f satisfies the Lipschitz condition | f (t, u, p) − f (t, v, q)| ≤ L(|u − v| + |p − q|) for some L > 0 for all t ∈ I and all u, v, p, q ∈ R, then the problem has a unique solution u ∈ L 1 (I).


Exercise 4.14 Consider the following Cauchy problem for the Riemann-Liouville fractional derivative with 0 < α < 1:  R α ∂t u(t) = t −γ f (t, u), in I, 1−α u(0) 0 It

= c0

with 0 < γ < 1 − α, c0 > 0. Let f : I × [0, ∞) → [0, ∞) be continuous, for t ∈ I, and there is a constant c f such that f (t, u) ≤ c f (1 + u) for all t ∈ I and u ≥ 0. Prove the following results. (i) The problem has a nonnegative fixed point in the set X = {v ∈ C(I) : v ≥ 0, t 1−α v ∈ C(I)} with the norm vX = supt ∈I t 1−α |v(t)|. (ii) If, in addition, f satisfies the Lipschitz condition | f (t, u) − f (t, v)| ≤ L|u − v| for some L > 0 for all t ∈ I and all u, v ≥ 0, then the fixed point is unique. Exercise 4.15 Prove Theorem 4.9 for general α > 0. Exercise 4.16 Let 0 < α < 1, and λ ∈ R, with g, g¯ ∈ C(I), and u and u¯ solve, respectively, ∂tα u1 = λu1 + g1,

∂tα u2 = λu2 + g2,

t ∈ I,

with u1 (0) = c1,

t ∈ I,

with u2 (0) = c2 .

(i) Using Proposition 4.5 and properties of the Mittag-Leffler function Eα,β (−x), prove that c1 ≤ c2 and g1 ≤ g2 imply u1 (t) ≤ u2 (t) for all t ∈ I. (ii) Is there an analogue of (i) for α ∈ (1, 2)? Exercise 4.17 Consider the following Cauchy problem (with 1 < α < 2) ⎧ ∂ α u(t) = f (t), ⎪ ⎪ ⎨ t ⎪ u(0) = c0, ⎪ ⎪ ⎪ u (0) = c1 . ⎩

in I,

Prove the following regularity results. (i) For f ∈ AC(I), the solution u can be written as u(t) = c0 + c1 t + f (0)Γ(α + 1)−1 t α + v(t) with v ∈ AC 2 (I). (ii) For f ∈ C 1 (I), the expansion remains valid, with v ∈ C 2 (I). Exercise 4.18 Prove Corollary 4.5. Exercise 4.19 Prove the existence and uniqueness of a solution y ∈ L 1 (I) to (4.28). Exercise 4.20 This exercise is concerned with equation (4.29) with c0 > 1. For c0 > 1, let w = u − 1. Then the function w satisfies ∂tα w(t) = w(t)(1 + w(t)),

w(0) = w0 := c0 − 1 > 0.


Then by Proposition 4.6, we have the solution representation ∫ t (t − s)α−1 Eα,α ((t − s)α )w 2 (s)ds. w(t) = Eα,1 (t α )w0 + 0

Note that t α−1 Eα,α (t α ) is now exponentially growing, indicating that the solution w(t) may blow up in finite time. Establish this blow-up behavior, and derive the upper and lower bounds on the blow-up time. Exercise 4.21 [FLLX18b] Prove the following bounds for the solution u(t) to problem (4.30) with ν > 0 and 0 < γ < 1: there exist c1, c2 > 0 such that α

α

c1 t 1−γ ≤ u(t) ≤ c2 t 1−γ ,

t ≥ 1.

Exercise 4.22 Let 0 < α < β < 1, and λ ∈ R. Consider the following Cauchy problem with two Djrbashian-Caputo fractional derivatives  β ∂t u + λ∂tα u = f , t ∈ I, u(0) = c0 . (i) For λ ≥ 0, derive the explicit series solution using Laplace transform and the method of successive approximations separately. (ii) Discuss the asymptotics of the solution in (i) as t → 0+ and t → ∞. [Hint: use Theorem A.9.] (iii) Discuss the well-posedness of the problem when λ < 0.

Chapter 5

Boundary Value Problem for Fractional ODEs

This chapter describes the basic mathematical theory for two-point boundary value problems (bvps) for one-dimensional stationary superdiffusion derived in Chapter 1: α 0Dx u

= f (x, u) in D,

(5.1)

with α ∈ (1, 2) and D = (0, 1), where the notation 0Dxα denotes either DjrbashianCaputo or Riemann-Liouville fractional derivative of order α. We describe the solution theory via Green’s function and variational formulation, and also discuss the related Sturm-Liouville problem. These approaches resemble closely that for the classical two-point bvps, but the analysis is more intricate, due to nonlocality of fractional derivatives.

5.1 Green’s Function First, we analyze problem (5.1) using the associated Green’s function, which has been extensively studied with the focus on the existence of positive solutions of (nonlinear) fractional bvps. Djrbashian [DN61] (see [Djr93] for the survey of some of these results) probably is the first researcher to study Dirichlet-type problems for fdes, and also related results for the fractional Sturm-Liouville problem in Section 5.3. Other more recent works include [Zha00, BL05, Zha06a, Zha06b]. We discuss the Riemann-Liouville and Djrbashian-Caputo cases separately, under a relatively strong condition on f .

5.1.1 Riemann-Liouville case

First, we comment on the boundary condition. Clearly, $-{}^R_0D_x^{\alpha} u = f$ is equivalent to $-({}_0I_x^{2-\alpha} u)'' = f$, and the condition ${}_0I_x^{2-\alpha} u \in AC^2(D)$ is needed for the equation to


make sense in $L^1(D)$. Upon letting $v = {}_0I_x^{2-\alpha} u$, $v$ satisfies $-v'' = f$, for which the natural boundary conditions are given in terms of $v$ or $v'$ or their combinations at $x = 0, 1$. Thus, the most "natural" boundary conditions in the Riemann-Liouville case should specify ${}_0I_x^{2-\alpha} u$ and ${}^R_0D_x^{\alpha-1} u$ at $x = 0, 1$, whose physical interpretation is however not obvious and thus has not been extensively studied. Below, we employ a Dirichlet-type boundary condition. The next result gives the equivalence between the bvp and its Fredholm integral reformulation via Green's function. The derivation employs Theorem 2.6 and (2.24), similar to Proposition 4.2 in Chapter 4.

Theorem 5.1 Let $f: D \times \mathbb{R} \to \mathbb{R}$ be such that $f(x, u(x)) \in L^1(D)$ for $u \in L^1(D)$. Then a function $u \in L^1(D)$ with ${}_0I_x^{2-\alpha} u \in AC^2(D)$ solves
$$\begin{cases} -{}^R_0D_x^{\alpha} u(x) = f(x, u(x)), & \text{in } D,\\ ({}^R_0D_x^{\alpha-2} u)(0) = u(1) = 0, \end{cases} \qquad (5.2)$$
if and only if $u \in L^1(D)$ satisfies
$$u(x) = \int_0^1 G(x, s) f(s, u(s))\,ds, \qquad (5.3)$$
where the Green's function $G(x, s)$ is given by
$$G(x, s) = \frac{1}{\Gamma(\alpha)}\begin{cases} (x(1 - s))^{\alpha-1} - (x - s)^{\alpha-1}, & 0 \le s \le x \le 1,\\ x^{\alpha-1}(1 - s)^{\alpha-1}, & 0 \le x \le s \le 1. \end{cases}$$
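As a quick numerical illustration of this kernel (a sketch with arbitrarily chosen grid and orders, independent of the proof that follows), the code below evaluates $G(x, s)$ on a coarse grid, checks that it is nonnegative, and prints its maximum along the diagonal $x = s$; compare Fig. 5.1 below.

```python
import math

def green_rl(x, s, alpha):
    """Green's function for the Riemann-Liouville bvp (5.2) on D = (0, 1)."""
    if s <= x:
        val = (x * (1.0 - s)) ** (alpha - 1.0) - (x - s) ** (alpha - 1.0)
    else:
        val = x ** (alpha - 1.0) * (1.0 - s) ** (alpha - 1.0)
    return val / math.gamma(alpha)

for alpha in (1.25, 1.75):
    grid = [i / 20.0 for i in range(1, 20)]
    vals = [green_rl(x, s, alpha) for x in grid for s in grid]
    diag = [green_rl(x, x, alpha) for x in grid]
    print(f"alpha={alpha}: min G = {min(vals):.4f} (nonnegative), "
          f"max on diagonal = {max(diag):.4f} at x = {grid[diag.index(max(diag))]:.2f}")
```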

Proof By Theorem 2.6, for u ∈ L 1 (D) with 0 Ix2−α u ∈ AC 2 (D), by the assumption on f , g(x) := f (x, u(x)) ∈ L 1 (D). Integrating both sides of (5.2) gives u(x) = c0 x α−2 + c1 x α−1 − 0 Ixα g(x), for some c0, c1 ∈ R. The boundary conditions are fulfilled if c0 = 0 and c1 = (0 Ixα g)(1) which, upon substitution, directly gives (5.3). Conversely, if u ∈ L 1 (D) satisfies (5.3), let g = f (x, u(x)) so g ∈ L 1 (D). Then u in (5.3) satisfies u(x) = (0 Ixα g)(1)x α−1 − (0 Ixα g)(x). Clearly, R0Dxα−2 u(0) = 0 and u(1) = 0. Furthermore, it follows that 0 Ix2−α u ∈ AC 2 (D) and (0 Ix2−α u)(x) exists almost everywhere with R α 0Dx u(x)

= R0Dxα ((0 Ixα g)(1)x α−1 − 0 Ixα g) = g

a.e. D.

So the identity −R0Dxα u = f (x, u(x)) holds for a.e. x ∈ D. This proves the theorem.  It is evident from the expression of the general solution u(x) that one cannot specify directly u(0) = c0  0. The proper choice is to specify R0Dxα−2 u(0) = c0 or RD α−1 u(0) = c , which, respectively, resembles a Dirichlet /Neumann-type boundary 0 0 x condition in classical two-point bvps; see [BKS18] for further discussions. Nonethe-

5.1 Green’s Function

139

less, the condition R0Dxα−2 u(0) = 0 may be replaced by u(0) = 0. Note also that a similar issue arises if one seeks a solution u ∈ C(D). Since we focus on the zero “Dirichlet” boundary condition below, we will adopt this convention and write  −R0Dxα u(x) = f (x, u(x)), in D, (5.4) u(0) = u(1) = 0. Formally, for any x ∈ D, the Green’s function G(x, s) satisfies  −R0Dsα G(x, s) = δx (s), in D, G(x, 0) = G(x, 1) = 0, where δx (s) is the Dirac Delta function at s = x. Surely one has to make proper sense of the equation since fractional calculus in Chapter 2 is only defined for L 1 (D) functions. A rigorous treatment requires extending fractional operators to distributions. When α = 2, it recovers classical two-point bvps, and accordingly, the Green’s function G(x, s) also converges.

Fig. 5.1 Green’s function G(x, s) in the Riemann-Liouville case.

The Green’s function G(x, s) is shown in Fig. 5.1 for α = 54 and α = 74 . Clearly, it is nonnegative and continuous, and positive in D × D and achieves its maximum along with the wedge x = s. For α close to unit, it has a sharp gradient along the boundary and the line x = s, indicating a possible limited smoothing property of the solution operator. When α tends to two, it becomes increasingly smoother. The following lemma collects a few properties [BL05, Lemma 2.4]. Lemma 5.1 The Green’s function G(x, s) has the following properties. (i) G(x, s) > 0 for any x, s ∈ D. (ii) The following bound holds: 1 4

min G(x, s) ≥ γ(s) max G(x, s) = γ(s)G(s, s), ≤x ≤ 34

0≤x ≤1

s ∈ D,

140

5 Boundary Value Problem for Fractional

ODEs

where the positive function γ ∈ C(D) is given by  3 (( 4 (1 − s))α−1 − ( 34 − s)α−1 )(s(1 − s))1−α, γ(s) = (4s)1−α,

s ∈ (0, r], s ∈ [r, 1),

with r defined in the proof. Proof Part (i) is obvious. For part (ii), note that for any fixed s ∈ D, G(x, s) is decreasing in x on [s, 1), and increasing on (0, s]. Let g1 (x, s) = Γ(α)−1 ((x(1 − s))α−1 − (x − s)α−1 ), g2 (x, s) = Γ(α)−1 (x(1 − s))α−1 . Then there holds ⎧ ⎪ g1 ( 34 , s), ⎪ ⎪ ⎨ ⎪ min G(x, s) = min(g1 ( 34 , s), g2 ( 14 , s)), 1 3 ⎪ ⎪ 4 ≤x ≤ 4 ⎪ ⎪ g2 ( 1 , s), 4 ⎩

s ∈ (0, 14 ], s∈ s∈

( 14 , 34 ), [ 34 , 1)

 =

g1 ( 34 , s),

s ∈ (0, r],

g2 ( 14 , s),

s ∈ [r, 1),

where r ∈ ( 14 , 34 ) is the unique root to η(s) := g1 ( 34 , s) − g2 ( 14 , s), i.e., α−1 η(s) := ( 34 (1 − s))α−1 − ( 34 − s)α−1 − ( 1−s = 0. 4 )

See Fig. 5.2(a) for an illustration. One can show that for α → 1, r → α → 2, r → 12 (for α = 2, r = 12 ). Note that max G(x, s) = G(s, s) = Γ(α)−1 (s(1 − s))α−1,

0≤x ≤1

3 4

and for

s ∈ D.

Hence, (ii) follows by the choice of γ(s). See also Fig. 5.2(b) for an illustration.  0.5

1

=1.25 =1.50 =1.75

(s)

(s)

0

=1.25 =1.50 =1.75

0.5

-0.5

-1

0.3

0.4

0.5 s

0.6

0.7

0

0.3

0.4

Fig. 5.2 The plots of the functions η(s) and γ(s) over the interval [ 14 , 34 ].

0.5 s

0.6

0.7

5.1 Green’s Function

141

The solution u is generally only Hölder continuous, and not Lipschitz. This lack of regularity is characteristic of bvps involving R0Dxα u, similar to Cauchy problems, cf. Theorem 4.8. Proposition 5.1 If f ∈ C(D × R), then the solution u to problem (5.4), if it exists, belongs to C α−1 (D). Proof It follows from Theorem 5.1 that the solution u, if it exists, is given by u(x) = x α−1 (0 Ixα f (s, u(s)))(1) − (0 Ixα f (s, u(s))(x). It can be verified directly that x α−1 ∈ C α−1 (D). Further, by the mapping property of α α 1,α−1 (D). Thus, the solution u belongs to C α−1 (D).  0 I x , 0 I x f (s, u(s)) ∈ C Remark 5.1 When f is independent of u, u is Lipschitz continuous if and only if ∫1 f satisfies the nonlocal condition 0 (1 − s)α−1 f (s)ds = 0, of which there is no analogue in the integer-order derivative case. Generally, it is unclear under what condition(s) that u is smooth. Now we analyze the existence issue for problem (5.4). The basic strategy is to reformulate the problem into a Fredholm integral equation using Green’s function, then apply a suitable version of fixed point theorem; see Appendix A.4 for several versions. If f is continuous, by Theorem 5.1, problem (5.4) is equivalent to the following Fredholm integral equation: ∫ 1 G(x, s) f (s, u(s))ds := Tu(x). (5.5) u(x) = 0

Then problem (5.4) is equivalent to an operator equation u = Tu in suitable function spaces. We below describe one existence result in the space C(D) [BL05, Theorem 3.1]. Let P = {u ∈ C(D) : u(x) ≥ 0} ⊂ C(D). Note that the constant M can be evaluated explicitly to be M = Γ(2α) Γ(α) , in view of the identity ∫ 0

1

G(s, s)ds =

1 Γ(α)



1

s α−1 (1 − s)α−1 ds =

0

Γ(α) B(α, α) = . Γ(α) Γ(2α)

Assumption 5.1 Let f (x, u) : D × [0, ∞) → R+ be continuous, and there exist two constants r2 > r1 > 0 such that (i) f (x, u) ≤ Mr2 for all (x, u) ∈ D × [0, r2 ]; (ii) f (x, u) ≥ Nr1 for all (x, u) ∈ D × [0, r1 ], ∫ 1

−1 ∫ 3

−1 with M = 0 G(s, s) ds and N = 14 γ(s)G(s, s) ds . 4

Theorem 5.2 Let f (x, u) : D × [0, ∞) → R+ be continuous. Then under Assumption 5.1, problem (5.4) has at least one positive solution u such that r1 ≤ u C(D) ≤ r2 .

142

5 Boundary Value Problem for Fractional

ODEs

Proof By Theorem 5.1, problem (5.4) has a solution u if and only if it solves u = Tu. Next, we show that the operator T : P → P is compact. Since both G(x, s) and f (x, u) are nonnegative and continuous, T is continuous. Let P  ⊂ P be a bounded subset with u C(D) ≤ M for all u ∈ P . Then for u ∈ P , by Theorem 5.1, we have ∫

1

|Tu(x)| ≤ 0

∫ G(x, s)| f (s, u(s))| ds ≤ c f

1

G(s, s) ds =

0

Γ(α) cf , Γ(2α)

with c f = f L ∞ (D×[0, M]) . Hence, the set T(P ) is bounded. Next, we claim that for every u ∈ P , x1, x2 ∈ D with x1 < x2 , there holds |Tu(x2 ) − Tu(x1 )| ≤ c f Γ(α + 1)−1 (x2α−1 − x1α−1 + x2α − x1α ).

(5.6)

Indeed, using the expression of the Green’s function G(x, s) in Theorem 5.1, we have |Tu(x2 ) − Tu(x1 )| ≤|(0 Ixα f (·, u(·)))(1)|(x2α−1 − x1α−1 ) ∫ x2 1 (x2 − s)α−1 | f (s, u(s))|ds + Γ(α) x1 ∫ x1 1 + ((x2 − s)α−1 − (x1 − s)α−1 )| f (s, u(s))|ds. Γ(α) 0 Next, we bound the three terms, denoted by Ii , i = 1, 2, 3. Actually, ∫ 1 cf cf (x α−1 − x1α−1 ), I1 ≤ (1 − s)α−1 ds(x2α−1 − x1α−1 ) = Γ(α) 0 Γ(α + 1) 2 ∫ x2 cf cf (x2 − x1 )α, (x2 − s)α−1 ds = I2 ≤ Γ(α) x1 Γ(α + 1) ∫ x1 cf cf I3 ≤ (x α − (x2 − x1 )α − x1α ). ((x2 − s)α−1 − (x1 − s)α−1 )ds = Γ(α) 0 Γ(α + 1) 2 Collecting these estimates yields (5.6). Thus, the set T(P ) is equicontinuous. By Arzelá-Ascoli theorem, T : P → P is compact. Below, we verify the two conditions in Theorem A.13. Let P1 := {u ∈ P : u C(D) < r1 }. For u ∈ ∂P1 , we have 0 ≤ u(x) ≤ r1 for all x ∈ D. By Assumption 5.1(ii) and Lemma 5.2, for x ∈ [ 14 , 34 ], there holds ∫ 1 ∫ 1 Tu(x) = G(x, s) f (s, u(s)) ds ≥ γ(s)G(s, s) f (s, u(s)) ds 0

≥ Nr1

0



3 4 1 4

γ(s)G(s, s) ds = r1 = u C(D) .

So Tu C(D) ≥ u C(D) for u ∈ ∂P1 . Next, let P2 := {u ∈ P : u C(D) ≤ r2 }. For u ∈ ∂P2 , we have 0 ≤ u(x) ≤ r2 for all x ∈ D. By Assumption 5.1(i), for x ∈ D

5.1 Green’s Function

143

∫ Tu C(D) = sup

x ∈D

1

∫ G(x, s) f (s, u(s)) ds ≤ Mr2

0

0

1

G(s, s) ds = r2 = u C(D) .

Thus, the two conditions in Theorem A.13 hold, and this completes the proof. Example 5.1 Consider the following problem  3 −R0Dx2 u = u2 + sin4 x + 1,



in D,

u(0) = u(1) = 0. One can verify M =

√4 π

f (x, u) = 1 + f (x, u) = 1 +

and N = 13.6649. Choosing r1 = sin x 4 sin x 4

+ u2 ≤ 2.2107 ≤ Mr2, + u ≥ 1 ≥ Nr1, 2

1 14

and r2 = 1 gives

∀(x, u) ∈ D × [0, 1],

1 ∀(x, u) ∈ D × [0, 14 ].

Thus, Assumption 5.1 holds. Then by Theorem 5.2, this problem has at least one 1 solution u such that 14 ≤ u ≤ 1. Note that Theorem 5.2 only ensures the existence of a positive solution, but it does not say anything about the uniqueness or existence of other solutions, which have to be established by other methods. The next result gives the uniqueness under a Lipschitz assumption on f (x, u) in u [CMSS18, Theorem 1]; see also [Bai10] for related results. Theorem 5.3 If f : D × R → R is continuous and there exists an L > 0 such that | f (x, u) − f (x, v)| ≤ L|u − v|,

Note that Theorem 5.2 only ensures the existence of a positive solution, but it does not say anything about the uniqueness or existence of other solutions, which have to be established by other methods. The next result gives the uniqueness under a Lipschitz assumption on $f(x,u)$ in $u$ [CMSS18, Theorem 1]; see also [Bai10] for related results.

Theorem 5.3 If $f: D\times\mathbb{R}\to\mathbb{R}$ is continuous and there exists an $L>0$ such that
$$|f(x,u) - f(x,v)| \le L|u-v|, \quad \forall x\in D,\ u,v\in\mathbb{R},$$
with $L < \alpha^{1+\alpha}(\alpha-1)^{1-\alpha}\Gamma(\alpha)$, then problem (5.4) has a unique solution $u\in C(\bar D)$.

Proof Note that we can bound $\int_0^1 G(x,s)\,ds$ by
$$\Gamma(\alpha)\int_0^1 G(x,s)\,ds = x^{\alpha-1}\int_0^1(1-s)^{\alpha-1}\,ds - \int_0^x(x-s)^{\alpha-1}\,ds = \alpha^{-1}(x^{\alpha-1}-x^{\alpha}) \le \alpha^{-1}(x^{\alpha-1}-x^{\alpha})\big|_{x=\alpha^{-1}(\alpha-1)} = \alpha^{-1-\alpha}(\alpha-1)^{\alpha-1},$$
since the function $x^{\alpha-1}-x^{\alpha}$ attains its maximum over $\bar D$ at $x = \alpha^{-1}(\alpha-1)$. This and the nonnegativity of $G(x,s)$ in Lemma 5.1(i) imply that for any $x\in\bar D$, there holds
$$|Tu(x) - Tv(x)| \le \int_0^1 G(x,s)\,|f(s,u(s)) - f(s,v(s))|\,ds \le L\int_0^1 G(x,s)\,ds\,\|u-v\|_{C(\bar D)} \le L\,\Gamma(\alpha)^{-1}\alpha^{-1-\alpha}(\alpha-1)^{\alpha-1}\|u-v\|_{C(\bar D)}.$$
Under the condition on $L$, $T$ is a contraction on $C(\bar D)$, and Theorem A.10 implies the existence of a unique solution $u\in C(\bar D)$ to $u = Tu$, or equivalently, problem (5.4). □
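For orientation, the admissible Lipschitz range in Theorem 5.3 is easy to tabulate. The short Python sketch below (added for illustration, not from the original text) evaluates the threshold $\alpha^{1+\alpha}(\alpha-1)^{1-\alpha}\Gamma(\alpha)$ for a few representative orders.

```python
from math import gamma

def lipschitz_threshold(alpha):
    # upper bound on the admissible Lipschitz constant L in Theorem 5.3
    return alpha**(1 + alpha) * (alpha - 1)**(1 - alpha) * gamma(alpha)

for alpha in (1.25, 1.5, 1.75):
    print(alpha, lipschitz_threshold(alpha))   # roughly 2.1, 3.5 and 5.3
```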


5.1.2 Djrbashian-Caputo case

Now we turn to the Djrbashian-Caputo case, which is analytically more delicate, especially in the classical sense. First, we state the solution representation in the case of general separated boundary conditions (also known as Sturm-Liouville boundary conditions), when the Djrbashian-Caputo fractional derivative is in the regularized sense, cf. Definition 2.4. The classical version in Definition 2.3 requires the condition $u\in AC^2(\bar D)$, which, however, is not direct from the integral representation (5.8).

Theorem 5.4 Let $a,b,c,d\in\mathbb{R}$ with $ad+bc+ac\neq 0$, and $f: D\times\mathbb{R}\to\mathbb{R}$ be continuous. Then a function $u\in C^1(\bar D)$ solves
$$\begin{cases} -{}^C_0D_x^{\alpha} u(x) = f(x,u(x)), & \text{in } D,\\ au(0) - bu'(0) = 0,\\ cu(1) + du'(1) = 0, \end{cases} \tag{5.7}$$
if and only if $u\in C^1(\bar D)$ satisfies
$$u(x) = \int_0^1 G(x,s)f(s,u(s))\,ds, \tag{5.8}$$
where the Green's function $G(x,s): \bar D^2\to\mathbb{R}$ is given by (with $\Lambda = ad+bc+ac$)
$$G(x,s) = \frac{1}{\Lambda\Gamma(\alpha)}\begin{cases} (b+ax)\,[d(\alpha-1) + c(1-s)](1-s)^{\alpha-2} - \Lambda(x-s)^{\alpha-1}, & s\le x,\\ (b+ax)\,[d(\alpha-1) + c(1-s)](1-s)^{\alpha-2}, & x\le s. \end{cases}$$

Proof By Theorem 2.13, for $u\in C^1(\bar D)$ satisfying (5.7), we have $u(x) = c_0 + c_1 x - {}_0I_x^{\alpha}g(x)$, with $g = f(x,u(x))$ and $c_0, c_1\in\mathbb{R}$ to be determined. The boundary conditions are fulfilled if
$$ac_0 - bc_1 = 0, \qquad cc_0 + cc_1 - c({}_0I_x^{\alpha}g)(1) + dc_1 - d({}_0I_x^{\alpha-1}g)(1) = 0.$$
These two equations can uniquely determine $c_0$ and $c_1$ if and only if
$$\det\begin{pmatrix} a & -b\\ c & c+d \end{pmatrix} = ad + bc + ac \equiv \Lambda \neq 0.$$
Since $\Lambda = ad+bc+ac\neq 0$ by assumption, $c_0$ and $c_1$ are uniquely determined by
$$c_0 = \Lambda^{-1}b\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big), \qquad c_1 = \Lambda^{-1}a\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big).$$
Substituting the values of $c_0$ and $c_1$ gives (5.8). Conversely, if $u\in C^1(\bar D)$ satisfies (5.8), let $g(x) = f(x,u(x))$ so that $g\in L^1(D)$. Then $u$ in (5.8) satisfies
$$u(x) = \Lambda^{-1}(b+ax)\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big) - ({}_0I_x^{\alpha}g)(x).$$


It follows that
$$u(0) = \Lambda^{-1}b\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big), \qquad u'(0) = \Lambda^{-1}a\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big),$$
$$u(1) = \Lambda^{-1}(a+b)\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big) - ({}_0I_x^{\alpha}g)(1), \qquad u'(1) = \Lambda^{-1}a\big(c({}_0I_x^{\alpha}g)(1) + d({}_0I_x^{\alpha-1}g)(1)\big) - ({}_0I_x^{\alpha-1}g)(1).$$
Clearly, $au(0) - bu'(0) = 0$ and $cu(1) + du'(1) = 0$. Furthermore,
$${}^C_0D_x^{\alpha}u(x) = {}^R_0D_x^{\alpha}\big(u - u(0) - u'(0)x\big) = -{}^R_0D_x^{\alpha}\,{}_0I_x^{\alpha}g = -g \quad \text{a.e. in } D.$$
So, the identity $-{}^C_0D_x^{\alpha}u = f(x,u(x))$ holds for a.e. $x\in D$. This completes the proof. □
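As a quick sanity check on the representation derived in the proof, one can verify the two boundary conditions numerically for the constant right-hand side $g\equiv 1$, for which $({}_0I_x^{\alpha}1)(x) = x^{\alpha}/\Gamma(\alpha+1)$ and $({}_0I_x^{\alpha-1}1)(x) = x^{\alpha-1}/\Gamma(\alpha)$. The Python sketch below is an added illustration; the values of $\alpha$, $a$, $b$, $c$, $d$ are arbitrary test data.

```python
from math import gamma

alpha, a, b, c, d = 1.6, 1.0, 2.0, 3.0, 0.5   # arbitrary test data with Lambda != 0
Lam = a * d + b * c + a * c

# for g = 1: R = c*(I^alpha g)(1) + d*(I^(alpha-1) g)(1)
R = c / gamma(alpha + 1.0) + d / gamma(alpha)

u  = lambda x: (b + a * x) * R / Lam - x**alpha / gamma(alpha + 1.0)
du = lambda x: a * R / Lam - x**(alpha - 1.0) / gamma(alpha)

print(a * u(0.0) - b * du(0.0))   # = 0 up to rounding: boundary condition at x = 0
print(c * u(1.0) + d * du(1.0))   # = 0 up to rounding: boundary condition at x = 1
```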

The next result from [LL13, Theorems 2.4 and 2.7] discusses linear bvps, i.e., $f = f(x)$, in the classical sense, cf. Definition 2.3. Note that in (iii), the condition $f\in AC(\bar D)$ cannot be relaxed to $C(\bar D)$, since $T$ does not map $C(\bar D)$ to $AC^2(\bar D)$. But it can be relaxed to $f\in C(\bar D)$ with ${}_0I_x^{\alpha-1}f\in AC(\bar D)$, cf. Lemma 2.4.

Theorem 5.5 Let $a,b,c,d\in\mathbb{R}$ satisfy $ad+bc+ac\neq 0$, and $T$ be defined by
$$Tf(x) = \int_0^1 G(x,s)f(s)\,ds,$$
where the Green's function $G(x,s)$ is given in Theorem 5.4. Then the following statements hold.
(i) If $f\in L^{\infty}(D)$ and $u = Tf$, then $u'(x)$ exists for each $x\in\bar D$, and $\|u'\|_{L^{\infty}(D)} < \infty$.
(ii) The integral operator $T$ maps $AC(\bar D)$ into $AC^2(\bar D)$.
(iii) If $f\in AC(\bar D)$, then $u = Tf$ is a solution of
$$\begin{cases} -{}^C_0D_x^{\alpha}u(x) = f(x), & \text{in } D,\\ au(0) - bu'(0) = 0,\\ cu(1) + du'(1) = 0. \end{cases} \tag{5.9}$$
(iv) If $u\in AC^2(\bar D)$ and $f\in L^1(D)$ satisfy (5.9), then $u = Tf$.

Proof (i) Let $\Lambda = ad+bc+ac$ and $c_1 = \Lambda^{-1}a\big(c({}_0I_x^{\alpha}f)(1) + d({}_0I_x^{\alpha-1}f)(1)\big)$. If $f\in L^{\infty}(D)$, then by Theorem 2.2(i),
$$|c_1| \le |\Lambda|^{-1}|a|\big(|c||({}_0I_x^{\alpha}f)(1)| + |d||({}_0I_x^{\alpha-1}f)(1)|\big) \le (|\Lambda|\Gamma(\alpha))^{-1}|a|\big(|c|\alpha^{-1} + |d|\big)\|f\|_{L^{\infty}(D)} < \infty.$$
Now, by the identity
$$u'(x) = c_1 - {}_0I_x^{\alpha-1}f(x), \tag{5.10}$$
we deduce


$$|u'(x)| \le (|\Lambda|\Gamma(\alpha))^{-1}\big(|a|(|c|\alpha^{-1} + |d|) + |\Lambda|\big)\|f\|_{L^{\infty}(D)} < \infty, \quad \forall x\in\bar D.$$
Since $u'(x)$ exists for every $x\in\bar D$, $u'\in C(\bar D)$ follows from (5.10).
(ii) This is direct from (5.10) and Theorem 2.2(iv).
(iii) Since $f\in AC(\bar D)$, Theorem 2.2(iv) indicates ${}_0I_x^{\alpha-1}f\in AC(\bar D)$ with ${}_0I_x^{\alpha-1}f(0) = 0$. Then by (5.10), $u''(x) = -({}_0I_x^{\alpha-1}f)'$. Applying ${}_0I_x^{2-\alpha}$ to both sides leads to
$${}^C_0D_x^{\alpha}u = -{}_0I_x^{2-\alpha}({}_0I_x^{\alpha-1}f)' = -f.$$
The boundary conditions can be verified directly. Thus, $u = Tf$ solves (5.9).
(iv) Since $-{}^C_0D_x^{\alpha}u = f$ for a.e. $x\in D$, we have $-{}_0I_x^{\alpha}({}^C_0D_x^{\alpha}u) = {}_0I_x^{\alpha}f(x)$ for a.e. $x\in D$. Since $u\in AC^2(\bar D)$, by Theorem 2.13,
$${}_0I_x^{\alpha}\,{}^C_0D_x^{\alpha}u(x) = u(x) - u(0) - u'(0)x \in C(\bar D), \quad \text{a.e. } x\in D.$$
Since $f\in L^1(D)$ and $\alpha\in(1,2)$, by Theorem 2.2(iii), ${}_0I_x^{\alpha}f\in C(\bar D)$. Thus, we have
$$u(x) = u(0) + u'(0)x - {}_0I_x^{\alpha}f(x), \quad \forall x\in\bar D.$$
The expression of the Green's function follows as in Theorem 5.4, and thus it is omitted. □

The next result [LL13, Proposition 3.1] gives the equivalence for ${}^C_0D_x^{\alpha}u$ in the classical sense, under a Lipschitz condition on $f$, thereby extending Theorem 5.4.

Theorem 5.6 Let the conditions in Theorem 5.4 be fulfilled, and $f(x,u)$ be Lipschitz, i.e.,
$$|f(x_2,u_2) - f(x_1,u_1)| \le L_r\max(|x_2-x_1|, |u_2-u_1|),$$
for all $x_1,x_2\in\bar D$ and $u_1,u_2\in\mathbb{R}$ with $|u_1| < r$, $|u_2| < r$. If $u\in C(\bar D)$ satisfies (5.8), then $u\in AC^2(\bar D)$ and it solves (5.7).

Proof If $u\in C(\bar D)$ solves (5.8), i.e., $u = \int_0^1 G(x,s)f(s,u(s))\,ds$ for $x\in\bar D$, let $g = f(x,u(x))$. Then $g\in C(\bar D)$, and by Theorem 5.5(i), $u'$ exists for every $x\in\bar D$, and $c := \|u'\|_{L^{\infty}(D)} < \infty$. It follows that $|u(x_2)-u(x_1)|\le c|x_2-x_1|$ for any $x_1,x_2\in\bar D$. This and the Lipschitz continuity of $f$ imply that for $x_1,x_2\in\bar D$,
$$|g(x_2) - g(x_1)| = |f(x_2,u(x_2)) - f(x_1,u(x_1))| \le L_r\max(|x_2-x_1|, |u(x_2)-u(x_1)|) \le L_r(1+c)|x_2-x_1|.$$
Hence, $g\in AC(\bar D)$. By Theorem 5.5(iii), $u\in AC^2(\bar D)$, and $-{}^C_0D_x^{\alpha}u(x) = g(x) = f(x,u(x))$ a.e. $x\in D$, i.e., $u$ solves problem (5.7). □

The next result is an immediate corollary of Theorem 5.4 in the Dirichlet case.

Corollary 5.1 In the Djrbashian-Caputo case, let $f: D\times\mathbb{R}\to\mathbb{R}$ be continuous. Then a function $u\in C^1(\bar D)$ solves
$$\begin{cases} -{}^C_0D_x^{\alpha}u(x) = f(x,u(x)), & \text{in } D,\\ u(0) = u(1) = 0, \end{cases} \tag{5.11}$$


if and only if $u\in C^1(\bar D)$ satisfies $u(x) = \int_0^1 G(x,s)f(s,u(s))\,ds$, where the Green's function $G(x,s)$ is given by
$$G(x,s) = \frac{1}{\Gamma(\alpha)}\begin{cases} x(1-s)^{\alpha-1} - (x-s)^{\alpha-1}, & 0\le s\le x\le 1,\\ x(1-s)^{\alpha-1}, & 0\le x\le s\le 1. \end{cases}$$

Fig. 5.3 Green's function G(x, s) in the Djrbashian-Caputo case.

The Green's function $G(x,s)$ in Corollary 5.1 is shown in Fig. 5.3. It is not necessarily positive in $\bar D\times\bar D$, which implies that the solution operator is not positivity preserving, and lacks the comparison principle, etc. It is continuous, but there is a very steep change around the diagonal $x = s$ for $\alpha$ close to unity, similar to the Riemann-Liouville case. The magnitude of the negative part decreases to zero as $\alpha$ approaches two. One possible strategy to regain the positivity is to use a Robin-type boundary condition at $x = 0$; see [LL13] for detailed discussions.

Example 5.2 Let $\alpha = \tfrac{6}{5}$, and $u(x) = x^3 - \tfrac{5}{3}x^2 + \tfrac{2}{3}x$ on $\bar D$. Clearly, $u(0) = u(1) = 0$. Further, for any $x\in D$, $-{}^C_0D_x^{\alpha}u(x) > 0$. However, $u(\tfrac{4}{5}) = -\tfrac{8}{375} < 0$, so the bvp does not preserve the positivity, and lacks a comparison principle. See Fig. 5.4 for a graphical illustration. This contrasts sharply with the Riemann-Liouville case.
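Both observations are easy to reproduce numerically. The Python sketch below is an added illustration (not from the original text): it samples the Green's function of Corollary 5.1 on a grid to locate its negative part, and it checks Example 5.2 using the fact that the Djrbashian-Caputo derivative of order $\alpha\in(1,2)$ annihilates affine functions and maps $x^k$ (for $k\ge 2$) to $\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}x^{k-\alpha}$.

```python
import numpy as np
from math import gamma

alpha = 6.0 / 5.0

def G(x, s):
    # Green's function of Corollary 5.1 (Djrbashian-Caputo case, Dirichlet bc)
    val = x * (1.0 - s)**(alpha - 1.0)
    if s <= x:
        val -= (x - s)**(alpha - 1.0)
    return val / gamma(alpha)

grid = np.linspace(0.0, 1.0, 201)
print("min G:", min(G(x, s) for x in grid for s in grid))   # negative: no positivity

# Example 5.2: u(x) = x^3 - (5/3) x^2 + (2/3) x; the linear term drops out
def neg_caputo_u(x):
    d3 = gamma(4.0) / gamma(4.0 - alpha) * x**(3.0 - alpha)
    d2 = gamma(3.0) / gamma(3.0 - alpha) * x**(2.0 - alpha)
    return -(d3 - (5.0 / 3.0) * d2)

xs = np.linspace(0.01, 0.99, 99)
print("min of -C D^alpha u:", min(neg_caputo_u(x) for x in xs))      # positive on D
print("u(4/5) =", 0.8**3 - (5.0 / 3.0) * 0.8**2 + (2.0 / 3.0) * 0.8)  # -8/375 < 0
```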


Fig. 5.4 The solution u and its Djrbashian-Caputo fractional derivative $-{}^C_0D_x^{\alpha}u$ with $\alpha = \tfrac{6}{5}$.

The next result gives the solution regularity, which is better than the Riemann-Liouville case. Generally, it cannot be smoother than $C^{1,\alpha-1}(\bar D)$, since the term ${}_0I_x^{\alpha}f$ has limited regularity, even when $f$ is smooth (e.g., $f\equiv 1$). Higher regularity can only be obtained under extra compatibility conditions on $f$. More generally, the regularity may be derived from the theory of Fredholm integral equations [Gra82, PV06], but a systematic study seems still missing (for either fractional derivative).

Theorem 5.7 If $f\in C(\bar D\times\mathbb{R})$, then the solution $u$ to problem (5.11), if it exists, belongs to $C^{1,\alpha-1}(\bar D)$.

Proof It follows from Corollary 5.1 that the solution $u$ is given by
$$u(x) = \frac{x}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}f(s,u(s))\,ds - \frac{1}{\Gamma(\alpha)}\int_0^x(x-s)^{\alpha-1}f(s,u(s))\,ds.$$
For $f\in C(\bar D\times\mathbb{R})$, ${}_0I_x^{\alpha}f(s,u(s))\in C^{1,\alpha-1}(\bar D)$. Thus, $u$ belongs to $C^{1,\alpha-1}(\bar D)$. □

Now we give an existence result for problem (5.11) [Zha06a, Theorem 4.1]. The proof relies on the Leray-Schauder alternative in Appendix A.4.

Theorem 5.8 Let $f: D\times\mathbb{R}\to\mathbb{R}$ be continuous, and $|f(x,u)|\le c_{\mu}|u|^{\mu} + c$, for some $c_{\mu}\in(0, \frac{\Gamma(1+\alpha)}{6})$, $c > 0$ and $0 < \mu\le 1$. Then problem (5.11) has a solution.

Proof We define an operator $T: C(\bar D)\to C(\bar D)$ by $Tu = \int_D G(x,s)f(s,u(s))\,ds$. It suffices to show that $T$ has a fixed point. It is continuous. For $u\in Z := \{u\in C(\bar D): \|u\|_{C(\bar D)}\le M\}$, with $c_f = \|f\|_{L^{\infty}(D\times[0,M])}$, we have
$$|Tu(x)| \le \frac{c_f}{\Gamma(\alpha)}\Big(\int_0^x(x-s)^{\alpha-1}\,ds + x\int_0^1(1-s)^{\alpha-1}\,ds\Big) \le \frac{c_f(x + x^{\alpha})}{\Gamma(\alpha+1)}.$$
Thus, the set $T(Z)$ is bounded. Next, we show the equicontinuity of $T(Z)$. For any $u\in Z$ and $x_1,x_2\in\bar D$ with $x_1 < x_2$, by the triangle inequality,
$$|Tu(x_2) - Tu(x_1)| \le \frac{1}{\Gamma(\alpha)}\int_0^{x_1}\big|(x_2-s)^{\alpha-1} - (x_1-s)^{\alpha-1}\big|\,|f(s,u(s))|\,ds + \frac{1}{\Gamma(\alpha)}\int_{x_1}^{x_2}(x_2-s)^{\alpha-1}|f(s,u(s))|\,ds + \frac{x_2-x_1}{\Gamma(\alpha)}\int_0^1(1-s)^{\alpha-1}|f(s,u(s))|\,ds \le c_f\,\Gamma(\alpha+1)^{-1}\big((x_2-x_1) + (x_2^{\alpha}-x_1^{\alpha})\big).$$
Thus, the set $T(Z)$ is equicontinuous, and by the Arzelà-Ascoli theorem, $T$ is compact. Let $P = \{u\in C(\bar D): \|u\|_{C(\bar D)} < R\}\subset C(\bar D)$, with $R > \max(1, 6c\,\Gamma(\alpha+1)^{-1})$. Suppose


that there is $u\in\partial P$ and $\lambda\in(0,1)$ such that $u = \lambda Tu$. Then for any $u\in\partial P$, we have
$$\lambda|Tu(x)| \le \frac{1}{\Gamma(\alpha)}\Big(\int_0^x(x-s)^{\alpha-1}\big(c_{\mu}|u|^{\mu} + c\big)\,ds + x\int_0^1(1-s)^{\alpha-1}\big(c_{\mu}|u(s)|^{\mu} + c\big)\,ds\Big) \le (x^{\alpha} + x)\,\Gamma(\alpha+1)^{-1}\big(c_{\mu}R^{\mu} + c\big) \le 2\Gamma(\alpha+1)^{-1}\big(c_{\mu}R + c\big) < R,$$
since $\mu\in(0,1]$, which implies $\lambda\|Tu\|_{C(\bar D)}\neq R = \|u\|_{C(\bar D)}$, contradicting the assumption. Then Theorem A.15 implies that $T$ has a fixed point in $C(\bar D)$. □

Example 5.3 For the bvp (with $\gamma > 0$ and $\mu\in(0,1)$)
$$\begin{cases} -{}^C_0D_x^{\alpha}u = (1+u^2)^{-1}u^{\mu} + x^{\gamma}, & \text{in } D,\\ u(0) = u(1) = 0, \end{cases}$$
the conditions in Theorem 5.8 hold, and thus the problem has a solution.

The uniqueness of the solution holds under a suitable Lipschitz condition on $f$.

Theorem 5.9 If $f: D\times\mathbb{R}\to\mathbb{R}$ is continuous and satisfies $|f(x,u) - f(x,v)|\le L|u-v|$ for any $x\in D$, $u,v\in\mathbb{R}$, with $L$ satisfying $L\le(\sup_{x\in\bar D}\int_0^1|G(x,s)|\,ds)^{-1}$, then problem (5.11) has a unique solution in $C(\bar D)$.

Proof The proof is identical with that for Theorem 5.3, except that $G(x,s)$ in the Djrbashian-Caputo case is not always nonnegative. □

5.2 Variational Formulation

Now we develop the variational solution theory for
$$\begin{cases} -{}_0D_x^{\alpha}u + qu = f, & \text{in } D,\\ u(0) = u(1) = 0, \end{cases} \tag{5.12}$$
with $f\in L^2(D)$ or a suitable Sobolev space, and the potential $q\in L^{\infty}(D)$. Variational formulations are useful for constructing numerical approximations. The study of variational formulations for bvps with a fractional derivative was initiated by [ER06], where the Riemann-Liouville case (involving both left-sided and right-sided versions) was studied and the well-posedness of the variational problem was shown. We follow the description in [JLPR15], where both the Riemann-Liouville and Djrbashian-Caputo cases were analyzed, with proven regularity estimates. We refer to [WY13] for the case of variable coefficients, and [JLZ16b] for Petrov-Galerkin formulations, which are useful for convection-diffusion problems. These investigations mostly focus on homogeneous Dirichlet boundary conditions, which greatly facilitates the analysis. It is not always trivial to extend to nonzero boundary conditions or other types of boundary conditions, e.g., Neumann type or Robin type.


5.2.1 One-sided fractional derivatives

5.2.1.1 Variational formulations

First, we derive the strong solution representation for the case $q = 0$. The procedure is similar to the derivation of Green's function in Section 5.1, but with weaker regularity assumptions on $f$. First, consider the Riemann-Liouville case. Fix $f\in L^2(D)$ and set $g = {}_0I_x^{\alpha}f\in H^{\alpha}_{0,L}(D)$. By Corollary 2.2, the fractional derivative ${}^R_0D_x^{\alpha}g$ is well defined. Now by Theorem 2.1, we deduce
$${}_0I_x^{2-\alpha}g = {}_0I_x^{2-\alpha}\,{}_0I_x^{\alpha}f = {}_0I_x^{2}f\in H^2_{0,L}(D).$$
Clearly, $({}_0I_x^{2}f)'' = f$ holds for $f\in L^2(D)$. This yields the fundamental relation, cf. Theorem 2.6, ${}^R_0D_x^{\alpha}g = f$. Consequently, the representation
$$u = -{}_0I_x^{\alpha}f + ({}_0I_x^{\alpha}f)(1)\,x^{\alpha-1} \tag{5.13}$$
is a solution of problem (5.12) in the Riemann-Liouville case when $q = 0$, since $u$ satisfies the correct boundary condition and ${}^R_0D_x^{\alpha}x^{\alpha-1} = (c_{\alpha}x)'' = 0$.

Next consider the Djrbashian-Caputo case. To this end, we choose $s\ge 0$ so that $\alpha + s\in(\tfrac{3}{2},2)$. For smooth $u$ and $\alpha\in(1,2)$, the Djrbashian-Caputo and Riemann-Liouville derivatives are related by (cf. Theorem 2.12)
$${}^C_0D_x^{\alpha}u = {}^R_0D_x^{\alpha}u - \frac{u(0)}{\Gamma(1-\alpha)}x^{-\alpha} - \frac{u'(0)}{\Gamma(2-\alpha)}x^{1-\alpha}.$$
Applying it to $g = {}_0I_x^{\alpha}f\in H^{\alpha+s}_{0,L}(D)$, which by Theorem 2.9 satisfies $g(0) = g'(0) = 0$ for $f\in H^s_0(D)$, shows that ${}^C_0D_x^{\alpha}g$ makes sense and equals ${}^R_0D_x^{\alpha}g = f$. Thus, a solution $u$ of problem (5.12) in the Djrbashian-Caputo case with $q = 0$ is given by
$$u = -{}_0I_x^{\alpha}f + ({}_0I_x^{\alpha}f)(1)\,x. \tag{5.14}$$

Remark 5.2 The representation (5.14) of the model (5.12) with ${}^C_0D_x^{\alpha}u$ and $f\in H^s_0(D)$ such that $\alpha + s\le\tfrac{3}{2}$ remains unclear. The expressions (5.13) and (5.14) are suggestive: their difference in regularity stems from the kernel of the corresponding differential operator. In the Riemann-Liouville case, the kernel consists of the weakly singular functions $cx^{\alpha-1}$, whereas in the Djrbashian-Caputo case, the kernel consists of smooth functions $cx$.
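For a concrete feel of the two representations, take $f\equiv 1$ and $q = 0$: then $({}_0I_x^{\alpha}1)(x) = x^{\alpha}/\Gamma(\alpha+1)$, so (5.13) and (5.14) reduce to $u = (x^{\alpha-1}-x^{\alpha})/\Gamma(\alpha+1)$ and $u = (x-x^{\alpha})/\Gamma(\alpha+1)$, respectively. The short Python sketch below (an added illustration, not from the original text) evaluates both closed forms; it makes the weakly singular $x^{\alpha-1}$ behaviour of the Riemann-Liouville solution near $x = 0$ visible next to the Lipschitz behaviour of the Djrbashian-Caputo one.

```python
import numpy as np
from math import gamma

alpha = 1.5
x = np.linspace(0.0, 1.0, 11)

u_rl = (x**(alpha - 1.0) - x**alpha) / gamma(alpha + 1.0)   # (5.13) with f = 1
u_c  = (x - x**alpha) / gamma(alpha + 1.0)                  # (5.14) with f = 1

for xi, a_, b_ in zip(x, u_rl, u_c):
    print(f"x = {xi:.1f}   Riemann-Liouville: {a_: .4f}   Djrbashian-Caputo: {b_: .4f}")
```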

Now we develop variational formulations of problem (5.12) and establish regularity pickup in Sobolev spaces for the variational solution. We shall first consider the case $q = 0$, where we have an explicit representation of the solutions, and then the case of a general $q\neq 0$. We begin with a useful lemma.

Lemma 5.2 For $u\in H^1_{0,L}(D)$ and $\beta\in(0,1)$, ${}^R_0D_x^{\beta}u = {}_0I_x^{1-\beta}u'$. Similarly, for $u\in H^1_{0,R}(D)$ and $\beta\in(0,1)$, ${}^R_xD_1^{\beta}u = -{}_xI_1^{1-\beta}u'$.

Proof It suffices to prove the result for the left-sided derivative ${}^R_0D_x^{\beta}$. For $u\in C^1(\bar D)$ with $u(0) = 0$, Theorem 2.12 implies that
$${}^R_0D_x^{\beta}u = {}^C_0D_x^{\beta}u = {}_0I_x^{1-\beta}(u'). \tag{5.15}$$
Corollary 2.2 implies that the left-hand side extends to a continuous operator on $H^{\beta}_{0,L}(D)$, and hence $H^1_{0,L}(D)$, into $L^2(D)$. Meanwhile, Theorem 2.9 implies that the right-hand side of (5.15) extends to a bounded operator from $H^1(D)$ into $L^2(D)$. The lemma now follows by a density argument. □



1− α2  1− α  g , x I1 2 v  . 0 Ix

(5.17)

Since g ∈ H0,αL (D) and v ∈ H01 (D), we can apply Lemma 5.2 again to conclude α

α

(R0Dxα u, v) = −(R0Dx2 g, RxD12 v). α

α

Further, by noting the identity R0Dx2 x α−1 = cα x 2 −1 ∈ L 2 (D) and v  ∈ L 2 (D) from the assumption v ∈ H01 (D), since v ∈ H01 (D), we apply Theorem 2.1 to deduce α

α

α

1− α2 

(R0Dx2 x α−1, RxD12 v) = (cα x 2 −1, x I1

1− α2

v ) = (cα0 Ix

Consequently,

α

x 2 −1, v ) = cα (1, v ) = 0.

α

α

A(u, v) := −(R0Dxα u, v) = −(R0Dx2 u, RxD12 v).

(5.18)

Thus, the variational formulation of problem (5.12) (with q = 0) in the Riemannα Liouville case is to find u ∈ U := H02 (D) satisfying A(u, v) = ( f , v),

∀v ∈ U.

We next consider the Djrbashian-Caputo case. Following the definition of the β solution in (5.14), we choose β ≥ 0 so that α + β ∈ ( 32 , 2) and f ∈ H0 (D). α+β By Theorem 2.9, the function g = 0 Ixα f lies in H0, L (D) and hence g (0) = 0. Differentiating both sides of (5.14) and setting x = 0 yields the identity u (0) = (0 Ixα f )(1), and thus, the solution representation (5.14) can be rewritten as u = −0 Ixα f + xu (0) := −g(x) + xu (0).

152

5 Boundary Value Problem for Fractional

ODEs

α

Meanwhile, for v ∈ C 2 (D) with v(1) = 0, (5.16) implies     −(R0Dxα g, v) = (0 Ix2−α u), v = (0 Ix2−α g), v  , where we have again used the relation g (0) = 0 in the last step. Analogous to the C α α R α derivation of (5.17) and the identities C 0 Dx g = 0Dx g and 0 Dx x = 0, we then have α R α  −( C 0 Dx u, v) = (0Dx g, v) = −A(g, v) = A(u, v) − u (0)A(x, v).

(5.19) α 2

The term involving u (0) cannot appear in a variational formulation in H0 (D). To circumvent this issue, we reverse the preceding steps and arrive at   A(x, v) = 0 Ix2−α 1, v  = (Γ(3 − α))−1 (x 2−α, v ) = −(Γ(2 − α))−1 (x 1−α, v). Hence, in order to get rid of the troublesome term −u (0)A(x, v) in (5.19), we require our test functions v to satisfy (x 1−α, v) = 0. Thus, the variational formulation of (5.12) in the Djrbashian-Caputo case (with q = 0) is to find u ∈ U satisfying A(u, v) = ( f , v),

∀v ∈ V,

with the test space   α 2 V = φ ∈ H0,R (D) : (x 1−α, φ) = 0 .

(5.20)

Below we discuss the stability of the variational formulations. Throughout, we denote by U ∗ (respectively, V ∗ ) the set of bounded linear functionals on U (respectively, V), and slightly abuse ·, · for duality pairing between U and U ∗ (or between V and V ∗ ). Further, we will denote by · U the norm on the space U, etc. Remark 5.3 We have seen that when f is in H0s (D), with α + s > 32 , then the solution u constructed by (5.14) satisfies the variational equation and hence coincides with the unique solution to the variational equation. This may not be the case when f is only in L 2 (D). Indeed, for α ∈ (1, 32 ), the function f = x 1−α is in L 2 (D). However, the variational solution (with q = 0) in this case is u = 0 and clearly does not satisfy α the strong form of the differential equation C 0 Dx u = f . 5.2.1.2 Variational stability in the Riemann-Liouville case We now establish the stability of the variational formulations. The following lemma implies the variational stability in the Riemann-Liouville case with q ≡ 0. Lemma 5.3 For γ ∈ ( 12 , 1), there exists c = c(γ) > 0 satisfying γ

γ

2 c u H ≤ −(R0Dx u, RxD1 u) = A(u, u), γ (D) 0

γ

∀u ∈ H0 (D).

(5.21)

5.2 Variational Formulation

153

Proof For u ∈ C0∞ (D) (with u˜ being the zero extension of u to R), by Plancherel’s identity (A.9), there holds ∫ ∞ 1 γ γ − (R0Dx u, RxD1 u) = − (iξ)2γ |F (u)| ˜ 2 dξ 2π −∞ ∫ ∫ cos((1 − γ)π) ∞ 2γ cos((1 − γ)π) ∞ 2γ 2 ξ |F (u)| ˜ dξ = |ξ | |F (u)| ˜ 2 dξ. = π 2π 0 −∞ Now suppose that there does not exist a constant satisfying (5.21). Then by the γ compactness of the embedding from H0 (D) into L 2 (D), there is a sequence {u j } ⊂ γ H0 (D) with u j H γ (D) = 1 convergent to some u ∈ L 2 (D) and satisfying 0

γ

γ

2 > − j(R0Dx u j , RxD1 u j ). u j H γ (D) 0

∫∞ Thus, the sequence {F (u˜ j )} converges to zero in −∞ |ξ | 2γ | · | 2 dξ, and by the conver2 gence ∫ ∞ of the sequence {u j } in L (D) and Plancherel’s theorem, {F (u˜ j )} converges 2 ˜ in −∞ | · | dξ. Hence, {F (u˜ j )} is a Cauchy sequence and hence converges to F (u) ∫∞ 1 γ 2 γ 2 2 in the norm ( −∞ (1 + ξ ) | · | dξ) . This implies that u j converges to u in H0 (D). γ γ Theorem 2.10 implies −(R0Dx u, RxD1 u) = 0, from which it follows that F (u) ˜ = u˜ = 0.  This contradicts the assumption u j H γ (D) = 1, completing the proof. 0

We now return to problem (5.12) with q  0 and define a(u, v) = A(u, v) + (qu, v). We make the following uniqueness assumption on the bilinear form. Assumption 5.2 Let the bilinear form a(u, v) with u, v ∈ U satisfy (a) The problem of finding u ∈ U such that a(u, v) = 0 for all v ∈ U has only the trivial solution u ≡ 0. (a∗ ) The problem of finding v ∈ U such that a(u, v) = 0 for all u ∈ U has only the trivial solution v ≡ 0. The following result, the Petree-Tartar lemma, is a useful tool; the proof can be found in [EG04, pp. 469, Lemma A.38]. Lemma 5.4 Let X, Y and Z be Banach spaces with A a bounded linear, injective operator from X → Y and B a compact linear operator from X → Z. If there exists a constant γ > 0 such that γ x X ≤ Ax Y + Bx Z , then for some c > 0,

c x X ≤ Ax Y ,

We then have the following existence result.

∀x ∈ X.

154

5 Boundary Value Problem for Fractional

ODEs

Theorem 5.10 Let Assumption 5.2 hold and q ∈ L ∞ (D). Then for any given F ∈ U ∗ , there exists a unique solution u ∈ U solving a(u, v) = F, v,

∀v ∈ U.

(5.22)

Proof We define, respectively, S : U → U ∗ and T : U → U ∗ by

Su, v = a(u, v),

and

Tu, v = −(qu, v),

∀v ∈ U.

Assumption 5.2(a) implies that S is injective. Further, Lemma 5.3 implies 2 u U ≤ cA(u, u) = c( Su, u + Tu, u)

≤ c( Su U ∗ + Tu U ∗ ) u U ,

∀u ∈ U.

Meanwhile, the compactness of T follows from the fact q ∈ L ∞ (D) and the compactness of U in L 2 (D). Now Lemma 5.4 implies that there exists γ > 0 satisfying γ u U ≤ sup v ∈U

a(u, v) . v U

(5.23)

This together with Assumption 5.2(a∗ ) shows that the operator S : U → U ∗ is bijective, i.e., there is a unique solution of (5.22), by Lemma 5.4.  We now show that the variational solution u in Theorem 5.10, in fact, is a strong solution when F, v = ( f , v) for some f ∈ L 2 (D). We consider the problem −R0Dxα w = f − qu.

(5.24)

A strong solution is given by (5.13) with a right-hand side  f = f − qu. It satisfies the variational equation and hence coincides with the unique variational solution. Theorem 5.11 Let Assumption 5.2 hold, and q ∈ L ∞ (D). Then with a right-hand side α

F, v = ( f , v) for some f ∈ L 2 (D), the solution u to (5.22) is in H α−1+β (D)∩H02 (D) for any β ∈ (1 − α2 , 12 ), and it satisfies u H α−1+β ≤ c f L 2 (D) . Proof It follows from Theorem 5.10 and Assumption 5.2 that there exists a solution α u ∈ H02 (D). Next we rewrite into (5.24) with  f = f − qu. Since q ∈ L ∞ (D) and α u ∈ H02 (D), we have qu ∈ L 2 (D), and hence  f ∈ L 2 (D). Now the desired assertion follows from (5.13), Theorem 2.9 and Corollary 2.1.  Remark 5.4 In general, the best possible regularity of the solution u to problem (5.12) with a Riemann-Liouville fractional derivative is H α−1+β (D) for any β ∈ (1 − α2 , 12 ), due to the presence of the singular term x α−1 . The only possibility of an improved regularity is the case (0 Ixα f )(1) = 0 (for q ≡ 0).

5.2 Variational Formulation

155

Now we turn to the adjoint problem: given F ∈ U ∗ , find w ∈ U such that a(v, w) = v, F,

∀v ∈ U,

(5.25)

there exists a unique solution w ∈ U to the adjoint problem. Indeed, Assumption 5.2 and (5.23) imply that the inf-sup condition for the adjoint problem holds with the same constant. Now, by proceeding as in (5.13), for q = 0 and F, v ≡ ( f , v) with f ∈ L 2 (D), we have w = −x I1α f + (x I1α f )(0)(1 − x)α−1 . This implies a similar regularity pickup, i.e., w ∈ H α−1+β (D), for (5.25), provided that q ∈ L ∞ (D). Further, we can repeat the arguments in the proof of Theorem 5.11 for a general q to deduce the regularity of the adjoint solution w. Theorem 5.12 Let Assumption 5.2 hold, and q ∈ L ∞ (D). Then with F, v = ( f , v) α for some f ∈ L 2 (D), the solution w to (5.25) is in H α−1+β (D) ∩ H02 (D) for any β ∈ (1 − α2 , 12 ), and it satisfies w H α−1+β (D) ≤ c f L 2 (D) . The time-dependent counterpart of problem (5.12) with a Riemann-Liouville fractional derivative was studied in [JLPZ14], where the existence of a weak solution and the solution regularity was established, and also numerical schemes were developed. 5.2.1.3 Variational stability in the Djrbashian-Caputo case Now we consider the Djrbashian-Caputo case, which, unlike the Riemann-Liouville case, involves a test space V different from the solution space U. We set φ0 = α 2 (1 − x)α−1 . By Corollary 2.1, φ0 is in the space H0,R (D). Further, we observe A(u, φ0 ) = 0,

∀u ∈ U.

(5.26)

Indeed, for u ∈ H01 (D), by Theorem 2.1, we have 1− α2

A(u, (1 − x)α−1 ) = −(0 Ix

α

u , RxD12 (1 − x)α−1 ) 1− α2

= cα (u , x I1

α

(1 − x) 2 −1 ) = cα (u , 1) = 0.

Now for a given u ∈ U, we set v = u − γu φ0 , where γu is defined by γu =

(x 1−α, u) . (x 1−α, φ0 )

(5.27)

156

5 Boundary Value Problem for Fractional

Clearly, the norms · H α2 (D) and · α

α

H0,2R (D)

ODEs

for α ∈ (1, 2) are equivalent on the

2 space H0,R (D). Thus,

|γu | ≤ c|(x 1−α, u)| ≤ c u L ∞ (D) x 1−α L 1 (D) ≤ c u

α

,

H02 (D)

α

and the function v ∈ H 2 (D) and v(1) = 0, i.e., it is in the space V. By Lemma 5.3, A(u, v) = A(u, u) ≥ c u 2

α

H02 (D)

.

Finally, there also holds v

α

H0,2R (D)

≤ u

α

H02 (D)

+ c|γu | ≤ c u

α

,

H02 (D)

and thus the inf-sup condition follows immediately u

α

H02 (D)

≤ c sup v ∈V

A(u, v) v α2

∀u ∈ U.

(5.28)

H0, R (D)

Now given any v ∈ V, we set u = v − v(0)φ0 . Obviously, u is nonzero whenever v  0 and by Lemma 5.3, we get A(u, v) = A(u, u) > 0. Thus, if A(u, v) = 0 for all u ∈ U, then v = 0. This and (5.28) imply that the variational problem is stable. We next consider the case q  0 and make the following assumption. Assumption 5.3 Let the bilinear form a(u, v) with u ∈ U, v ∈ V satisfy (b) The problem of finding u ∈ U such that a(u, v) = 0 for all v ∈ V has only the trivial solution u ≡ 0. (b∗ ) The problem of finding v ∈ V such that a(u, v) = 0 for all u ∈ U has only the trivial solution v ≡ 0. We then have the following existence result. Theorem 5.13 Let Assumption 5.3 hold and q ∈ L ∞ (D). Then for any given F ∈ V ∗ , there exists a unique solution u ∈ U to a(u, v) = F, v,

∀v ∈ V .

(5.29)

Proof The proof is similar to that of Theorem 5.10. In this case, we define S : U → V ∗ and T : U → V ∗ by

Su, v = a(u, v)

and

Tu, v = −(qu, v),

∀v ∈ V .

By Assumption 5.3(b), S is injective. By applying (5.28), we get for any u ∈ U u U ≤ c sup v ∈V

A(u, v) a(u, v) −(qu, v) ≤ c sup + c sup = c( Su V ∗ + Tu V ∗ ). v V v V v ∈V v ∈V v V

5.2 Variational Formulation

157

The rest of the proof, including verifying the inf-sup condition, γ u U ≤ sup v ∈V

a(u, v) , v U

∀u ∈ U,

(5.30) 

is essentially identical with that of Theorem 5.10.

Theorem 5.14 Let s ∈ [0, 12 ) and Assumption 5.3 be fulfilled. Suppose that F, v = ( f , v) for some f ∈ H0s (D) with α + s > 32 , and q ∈ L ∞ (D) ∩ H s (D). Then the α

variational solution u ∈ U of (5.29) is in H02 (D) ∩ H α+s (D) and it is a solution of (5.12). Further, it satisfies u H α+s (D) ≤ c f H0s (D) . Proof Let u be the solution of (5.29). We consider the problem α −C 0 Dx w = f − qu.

(5.31)

Under the given assumption, qu ∈ H0s (D) (exercise). A strong solution of (5.31) is given by (5.14) with a right-hand side  f = f − qu ∈ H0s (D). Since it satisfies problem (5.29), it coincides with u. The regularity u ∈ H α+s (D) is direct from Theorem 2.9.  Remark 5.5 The solution to problem (5.12) in the Djrbashian-Caputo case can achieve full regularity, which contrasts sharply with the Riemann-Liouville case since for the latter, generally the best possible regularity is H α−1+β (D), for any β ∈ (1 − α2 , 12 ), due to the presence of the singular term x α−1 . Last we discuss the adjoint problem: find w ∈ V such that a(v, w) = v, F,

∀v ∈ U,

for some F ∈ U ∗ . If F, v = ( f , v) for some f ∈ L 2 (D), the strong form reads −RxD1α w + qw = f , with w(1) = 0 and (x 1−α, w) = 0. By repeating the steps leading to (5.14), we deduce that for q ≡ 0, the solution w can be expressed as w = c f (1 − x)α−1 − x I1α f , with the prefactor c f given by cf =

(0 Ixα x 1−α, f ) 1 (x 1−α, x I1 α f ) = (x, f ). = 1−α α−1 B(2 − α, α) Γ(α) (x , (1 − x) ) α

Clearly, there holds |c f | ≤ c|(x 1−α, x I1α f )| ≤ c f L 2 (D) . Hence, w ∈ H02 (D) ∩ H α−1+β (D), for any β ∈ (1− α2 , 12 ). The case of a general q can be deduced analogously to the proof of Theorem 5.11. Hence, we have the following improved regularity estimate for the adjoint solution w.

158

5 Boundary Value Problem for Fractional

ODEs

Theorem 5.15 Let Assumption 5.3 hold, and q ∈ L ∞ (D). Then with a right-hand side α

F, v = ( f , v) for some f ∈ L 2 (D), the solution w to (5.29) is in H02 (D)∩H α−1+β (D) for any β ∈ (1 − α2 , 12 ), and further there holds w H α−1+β (D) ≤ c f L 2 (D) . Remark 5.6 The adjoint problem for both Djrbashian-Caputo and Riemann-Liouville cases is of Riemann-Liouville type, albeit with slightly different boundary conditions, and thus shares the same singularity. Example 5.4 Consider problem (5.12) with q(x) = x and f ≡ 1. The solution u is shown in Fig. 5.5 for the Riemann-Liouville and Djrbashian-Caputo case separately. Clearly, in the Riemann-Liouville case, the solution u has a weak singularity at x = 0, which contrasts sharply with the Djrbashian-Caputo case. Interestingly, the change of the solution with respect to the fractional-order α also differs markedly: in the Riemann-Liouville case, the magnitude of the solution u decreases with α, and this trend is reversed in the Djrbashian-Caputo case.

0.5

0.15

=1.25 =1.5 =1.75

0.4

=1.25 =1.5 =1.75

0.1

u

u

0.3 0.2

0.05

0.1 0

0

0.5

x

1

0

0

0.5

x

1

Fig. 5.5 The solution profiles for problem (5.12) with q = x and f = 1.

Last, the nonstationary counterpart of problem (5.12) with a Djrbashian-Caputo fractional derivative is less studied: the existence and uniqueness of the solutions was studied in [NR20] via the concept of viscosity solutions.

5.2.2 Two-sided mixed fractional derivatives In this part, we describe some results for the two-sided problem  Lrα u = f , in D, u(0) = u(1) = 0.

(5.32)

5.2 Variational Formulation

159

The associated differential operator Lrα is defined by Lrα u(x) = D(r 0 Ix2−α + (1 − r)x I12−α )Du(x), where r ∈ [0, 1], and D denotes taking the first-order derivative. The operator Lrα is closely related to but not equivalent to a linear combination of the RiemannLiouville or Djrbashian-Caputo fractional derivative. Putting D inside the fractional integral, introduced by Patie and Simon [PS12, p. 570], changes the kernel of the operator, and thus also the regularity of the solution. However, when considering a zero Dirichlet boundary condition, which we discuss below, the equivalence to the Riemann-Liouville fractional derivative does hold. Problems of this type were first studied in [ER06], where the variational formulation was derived and finite element approximation was developed. The behavior of the solution of this problem is drastically different from either Riemann-Liouville or Djrbashian-Caputo case, in the sense, the solution generally exhibits weak singularity at both end points, a fact only established very recently [EHR18], which establishes the kernel of the operator. Two-sided bvps were studied in [HHK16] in a probabilistic framework. Following the preceding procedure, we can derive the weak formulation. With α α U = H02 (D), the weak formulation of the Dirichlet bvp reads: given f ∈ H − 2 (D), find u ∈ U such that B(u, v) = f , v, ∀v ∈ U, where the bilinear form B(·, ·) : U × U → R is defined by α

α

α

α

B(u, v) := r(R0Dx2 u, RxD12 v) + (1 − r)(RxD12 u, R0Dx2 v). By Lemma 5.3, the bilinear form B is coercive and continuous on U, and thus the weak formulation has a unique solution u ∈ U. Below we analyze the kernel of the operator Lrα , following [EHR18], but replacing some computer verification with direct evaluation. The analysis uses the three-parameter hypergeometric function. Definition 5.1 The Gauss three-parameter hypergeometric function 2 F1 (a, b; c; x) is defined by an integral and series as follows: ∫ 1 Γ(c) F (a, b; c; x) = s b−1 (1 − s)c−b−1 (1 − sx)−a ds 2 1 Γ(b)Γ(c − b) 0 ∞  (a)n (b)n x n , = (c)n n! n=0 with convergence only if (c) > (b) > 0, where (a)n denotes the rising Pochhammer symbol, i.e., (a)n = Γ(a+n) Γ(a) . It is clear from the series definition that the interchange property holds. Lemma 5.5 For (c) > (b) > 0 and (c) > (a) > 0, there holds 2 F1 (a, b; c;

x) = 2 F1 (b, a; c; x).

160

5 Boundary Value Problem for Fractional

ODEs

Further, the following useful identities hold: the first Bolz formula (see, e.g., [AS65, p. 559, (15.3.3)]) 2 F1 (a, b; c;

x) = (1 − x)c−a−b 2 F1 (c − a, c − b; c; x),

(5.33)

for a + b − c  0, ±1, ±2, . . ., and reflection-type formula (see, e.g., [AS65, p. 559. (15.3.6)], [WW96, p. 291]) Γ(c)Γ(c − a − b) 2 F1 (a, b; a + b − c + 1; x) Γ(c − a)Γ(c − b) Γ(c)Γ(a + b − c) + x c−a−b 2 F1 (c − a, c − b; 1 + c − a − b; x). Γ(a)Γ(b)

2 F1 (a,b; c; 1

− x) =

(5.34)

Next, we give the kernel of the operator with a general weight r ∈ (0, 1). Lemma 5.6 The function k(x) = x p (1 − x)q , where p, q ∈ [α − 2, 0) satisfy p+q =α−2

and r sin qπ = (1 − r) sin pπ.

(5.35)

is in the kernel of the operator D(r 0 Ix2−α + (1 − r)x I12−α ). Proof By changing variables s = zx, we have ∫ x 1 2−α k(x) = (x − s)1−α s p (1 − s)q ds 0 Ix Γ(2 − α) 0 ∫ 1 1 2−α+p x = (1 − z)1−α z p (1 − zx)q dz Γ(2 − α) 0 = x 2−α+p Γ(p + 1)Γ(3 − α + p)−1 2 F1 (−q, p + 1; 3 − α + p; x), where the last line follows from the fact 3 − α + p > p + 1, since α ∈ (1, 2). By Lemma 5.5, under the condition 3 − α + p > −q > 0, we have Γ(p + 1) x 2−α+p 2 F1 (p + 1, −q; 3 − α + p; x) Γ(3 − α + p) ∫ 1 Γ(p + 1) 2−α+p x = (1 − z)2−α+p+q z −q−1 (1 − zx)−p−1 dz Γ(−q)Γ(3 − α + p + q) 0 ∫ x Γ(p + 1) (x − s)2−α+p+q s−q−1 (1 − s)−p−1 ds = Γ(−q)Γ(3 − α + p + q) 0

2−α k(x) 0 Ix

=

= Γ(p + 1)Γ(−q)−1 0 Ix

3−α+p+q −q−1

x

(1 − x)−p−1 .

Similarly, one has 2−α k(x) x I1

= Γ(q + 1)Γ(−p)−1 x I1

3−α+p+q −q−1

x

(1 − x)−p−1 .

(5.36)

Comparing the last two identities with D(r 0 Ix2−α + (1 − r)x I12−α )k(x) = 0 implies 2 − α + p + q = 0 and rΓ(p + 1)Γ(−q)−1 = (1 − r)Γ(q + 1)Γ(−p)−1 ⇔ rΓ(−p)Γ(1 −

5.2 Variational Formulation

161

π (−p)) = (1 −r)Γ(−q)Γ(1 − (−q)). By the reflection formula (2.3) for Γ(z), r sin(−pπ) = π  (1 − r) sin(−qπ) . This shows the desired assertion.

Remark 5.7 Given 0 ≤ r < 1 and 1 < α < 2, the exponents p and q satisfying (5.35) can be equivalently expressed as q = α − 2 − p and α − 2 ≤ p ≤ 0 such that h(p) := r − sin pπ(sin(α − p)π + sin pπ)−1 = 0. The existence and uniqueness of p follows from the fact that h(α − 2) = r − 1 ≤ 0, h(0) = r ≥ 0 and h (p) > 0. Remark 5.8 When r = 12 , p = q = α2 − 1, and when r = 1, q = 0 and p = α − 2. Then k(x) is given by k(x) = x α−2 and thus K(x) = x α−1 , which indeed belongs to the kernel of R0Dxα−1 . The next result characterizes the kernel of the operator Lrα . 5.2 The kernel ker(Lrα ) of Lrα is given by span{1, K(x)}, with K(x) = ∫Corollary x k(s)ds = (p + 1)−1 x p+1 2 F1 (−q, p + 1; p + 2; x), where k(s) is given in Lemma 5.6. 0 Proof By Lemma 5.6, span{1, K(x)} ⊂ ker(Lrα ). It suffices to prove dim(ker(Lrα )) = 2. With z(x) = 1 + x and f (x) = Γ(2 − α)−1 (−r x 1−α + (1 − r)(1 − x)1−α ), then there holds Lrα x = f (x) in D. Since K(1)  0, we can choose c1, c2 such that zˆ(x) = z(x) + c1 + c2 K(x) satisfies zˆ(0) = zˆ(1) = 0 and Lrα zˆ(x) = f (x). Now suppose there is a third linearly independent function s(x) ∈ ker(Lrα ), for which we may assume s(0) = s(1) = 0. Then z˜(x) := zˆ(x) + s(x) satisfies Lrα z˜(x) = f (x). However, the existence of z˜(x)  zˆ(x) contradicts the uniqueness of the solution to  Lrα u(x) = f (x) with u(0) = u(1) = 0 (from Lax-Milgram theorem). Remark 5.9 The regularity of K(x) ∈ H min(p,α−p)+ 2 − (D) (with small > 0), which can be less regular than the Riemann-Liouville case, i.e., a worsening regularity as the ratio r decreases from 1 to 12 . However, one may obtain better regularity in weighted Sobolev spaces; see the work [Erv21], which discuss also the influence of a convective term. 1

Lemma 5.7 For 1 ≤ α < 32 , D0 Ix2−α D maps from H α (D) onto L 2 (D). Proof We have D : H α (D) → H α−1 (D). For α ∈ [1, 32 ), H α−1 (D) = H0α−1 (D). Since D0 Ix2−α = R0Dxα−1 , by Theorem 2.10, D0 Ix2−α : H0α−1 (D) → L 2 (D). To show  “onto”, note that for f ∈ L 2 (D), D0 Ix2−α Du = f , with u = 0 Ixα f . The next result gives a concise description of the range of Lrα , with domain H α (D). Note that for α ∈ [1, 32 ), span({x 1−α }) ⊂ L 2 (D), and span({(1 − x)1−α }) ⊂ L 2 (D). Proposition 5.2 For 1 < α < 2, Lrα maps from H α (D) into L 2 (D) ⊕ span({x 1−α }) ⊕ span({(1 − x)1−α }). Proof The case 1 < α < 32 is covered by Lemma 5.7. For f ∈ H α (D), α ≥ 32 , let p(x) denote the Hermite cubit interpolant of f , and f˜(x) = f (x) − p(x) ∈ H0α (D). By Theorem 2.10, Lrα f˜ ∈ L 2 (D). Further, direct computation shows Lrα p(x) ∈ L 2 (D) ⊕ span({x 1−α }) ⊕ span({(1 − x)1−α }). This shows the desired assertion. 

162

5 Boundary Value Problem for Fractional

ODEs

Example 5.5 Consider problem (5.32) with Lrα , q ≡ 0 and f ≡ 1. The solution u is shown in Fig. 5.6. The shape is similar to the Riemann-Liouville case in that it has weak singularity at one end point, depending on the weight r. This partly confirms the analysis of the kernel of Lrα , but the precise regularity is to be ascertained. 0.8 0.6 0.4 0.2 0

=1.25 =1.5 =1.75

0.6

u

u

0.8

=1.25 =1.5 =1.75

0.4 0.2

0

0.5

x

0

1

0

0.5

1

x

Fig. 5.6 The solutions for problem (5.32) with q = 0 and f = 1.

Last, we provide an eigen-type expansion. Proposition 5.3 For 1 < α < 2, 0 < β < α, and r = Lrα x β (1 − x)α−β x n =

n 

an, j x j ,

sin βπ sin βπ+sin(α−β)π

∈ [0, 1],

n = 0, 1, 2, . . . ,

j=0

with the coefficients an, j given by an, j =(−1)(n+1) (1 − r)

(−1) j Γ(1 + α − β)Γ(1 + α + j) sin απ . sin(α − β)π Γ(1 + α − β − n + j)Γ(1 + n − j)Γ( j + 1)

Proof Let u(x) = x β+n (1 − x)α−β . By the definition of 2 F1 (a, b; c; x), ∫ x 1 2−α I u(x) = (x − s)1−α s β+n (1 − s)α−β ds 0 x Γ(2 − α) 0 ∫ 1 1 = (x − xt)1−α (xt)β+n (1 − xt)α−β xdt [with s = t x] Γ(2 − α) 0 ∫ 1 1 2+n+β−α x = t β+n (1 − t)1−α (1 − xt)α−β dt Γ(2 − α) 0 Γ(n + β + 1) x n+2+β−α 2 F1 (β − α, β + n + 1; n + 3 + β − α; x). = Γ(n + 3 + β − α) Next, we evaluate the integral x I12−α u(x). By changing variables s = 1 − (1 − x)t,

5.2 Variational Formulation

163

∫ 1 1 = (s − x)1−α s β+n (1 − s)α−β ds Γ(2 − α) x ∫ 1 1 = [(1 − x)(1 − t)]1−α (1 − (1 − x)t)β+n ((1 − x)t)α−β (1 − x)dt Γ(2 − α) 0 ∫ 1 1 (1 − x)2−β = t α−β (1 − t)1−α (1 − (1 − x)t)β+n dt Γ(2 − α) 0 Γ(α − β + 1) (1 − x)2−β 2 F1 (−β − n, α − β + 1; 3 − β; 1 − x). = Γ(3 − β)

2−α x I1 u(x)

Below we apply (5.33) and (5.34) to rewrite the formula. By (5.34), we have − n, α − β + 1; 3 − β; 1 − x) Γ(3 − β)Γ(2 + n − α + β) = 2 F1 (−β − n, α − β + 1; −n − 1 + α − β; x) Γ(3 + n)Γ(2 − α) Γ(3 − β)Γ(−n − 2 + α − β) + x n+2−α+β 2 F1 (n + 3, 2 − α; n + 3 − α + β; x). Γ(−β − n)Γ(α − β + 1) 2 F1 (−β

Now, by the first Bolz formula (5.33), we have 2 F1 (3

+ n, 2 − α, 3 + n − α + β; x)

= (1 − x)−2+β 2 F1 (−α + β, n + 1 + β, n + 3 − α + β; x), and 2 F1 (−n

=(1 − x)

− β, α − β + 1, −n − 1 + α − β; x)

−2+β

2 F1 (α

− 1, −n − 2, −n − 1 + α − β; x).

Combining the preceding identities, we obtain Γ(α − β + 1)Γ(n + 2 − α + β) 2 F1 (α − 1, −n − 2; −n − 1 + α − β; x) Γ(2 − α)Γ(n + 3) Γ(−n − 2 + α − β) n+2+β−α x + 2 F1 (−α + β, n + 1 + β; n + 3 − α + β; x). Γ(−β − n)

2−α x I1 u(x)

=

Consequently, (r 0 Ix2−α + (1 − r)x I12−α )u(x)

Γ(n + β + 1) Γ(−n − 2 + α − β)  + (1 − r) = r Γ(n + 3 + β − α) Γ(−β − n) × x n+2+β−α 2 F1 (β − α, β + n + 1; n + 3 + β − α; x) Γ(α − β + 1)Γ(n + 2 − α + β) + (1 − r) 2 F1 (α − 1, −n − 2; −n − 1 + α − β; x). Γ(2 − α)Γ(n + 3) Now we simplify the term in the square bracket, denoted by I:

164

5 Boundary Value Problem for Fractional

I=

ODEs

rΓ(n + β + 1)Γ(−β − n) + (1 − r)Γ(−n − 2 + α − β)Γ(n + 3 + β − α) . Γ(n + 3 + β − α)Γ(−β − n)

By the reflection formula (2.3) for the Gamma function, (−1)n+1 π π = , sin(−β − n)π sin βπ (−1)n+2 π π = . Γ(−2 − n + α − β)Γ(n + 3 + β − α) = sin(−2 − n + α − β)π sin(α − β)π Γ(−β − n)Γ(n + β + 1) =

It follows from these two identities and the choice of β that

(−1)n+1 π 1 (−1)n+2 π  r + (1 − r) Γ(n + 3 + β − α)Γ(−β − n) sin βπ sin(α − β)π

r  1−r (−1)n+1 π − = 0. = Γ(n + 3 + β − α)Γ(−β − n) sin βπ sin(α − β)π

I=

Meanwhile, Γ(1 + α − β)Γ(n + 2 − α + β) 2 F1 (α − 1, −n − 2; −n − 1 + α − β; x) Γ(2 − α)Γ(n + 3) n+2 Γ(1 + α − β)Γ(n + 2 − α + β)  (α − 1)k (−n − 2)k k = x Γ(2 − α)Γ(n + 3) (−n − 1 + α − β)k k=0 Γ(1 + α − β)Γ(n + 2 − α + β)Γ(−n − 1 + α − β)  Γ(α − 1 + k)(−n − 2)k k x . Γ(2 − α)Γ(α − 1)Γ(n + 3) Γ(−n − 1 + α − β + k) k=0 n+2

=

Now using the reflection formula for the Gamma function, (−1)n+1 π π = , sin(−n − 1 + α − β)π sin(α − β)π π −π Γ(2 − α)Γ(α − 1) = = . sin(α − 1)π sin απ

Γ(n + 2 − α + β)Γ(−n − 1 + α − β) =

Collecting the identities leads to Γ(1 + α − β)Γ(n + 2 − α + β) 2 F1 (α − 1, −n − 2; −n − 1 + α − β; x) Γ(2 − α)Γ(n + 3) n+2 Γ(α − 1 + k)(−n − 2)k Γ(1 + α − β)(−1)n sin απ  xk . = sin(α − β)π Γ(−n − 1 + α − β + k)Γ(n + 3) k=0 Differentiating both sides gives the desired assertion.



5.3 Fractional Sturm-Liouville Problem

165

5.3 Fractional Sturm-Liouville Problem The classical Sturm-Liouville problem plays a fundamental role in many areas. A lot is known about Sturm-Liouville theory and indeed the picture was almost complete by the middle of the nineteenth century. The eigenvalues are real, the eigenfunctions are simple and those corresponding to distinct eigenvalues are mutually orthogonal in L 2 (D). The zeros of eigenfunctions arising from successive eigenvalues strictly interlace, and there are various monotonicity theorems relating the spectrum to the coefficients and boundary conditions. None of these results in the above generality are known in the case of Riemann-Liouville /Djrbashian-Caputo fractional derivatives, and in many instances the conclusions are false. In this section, we discuss the fractional case:  − 0Dxα u + qu = λu, in D, (5.37) u(0) = u(1) = 0, where the potential q ∈ L ∞ (D). The two fractional derivatives not only share a lot of similarities but also some big differences, and thus we discuss them separately. Since the operator 0Dxα is not self-adjoint, generally, the eigenvalues and the eigenfunctions are genuinely complex. In particular, if λ ∈ C is an eigenvalue of − 0Dxα + q, then its complex conjugate λ is also an eigenvalue. We collect several known analytical results, for the case of q ≡ 0, using the Green’s function in Section 5.1. The eigenvalue problem (5.37) with a nonzero q appears analytically unexplored, although extensive numerical investigations were presented in [JR12]. Djrbashian [Džr70] probably first considered Dirichlet-type problems for fdes. The first such problem is to find the solution u(x) (in L 1 (D) or L 2 (D)) of the bvp ⎧ ⎪ ⎪ ⎨ ⎪

σ2 0 D x u(x) − [λ + q(x)]u(x) σ0 1 cos φ0 (0 Dx u)(0) + sin φ0 (0 Dσ x u)(0) ⎪ ⎪ σ1 0 ⎪ cos φ1 (0 Dσ x u)(1) + sin φ1 (0 D x u)(1) ⎩

= 0,

in D,

= 0, = 0,

2 with a Lipschitz potential q(x), and φ0, φ1 ∈ [0, π). For the set {γi }i=0 ⊂ (0, 1], σ2 the Djrbashian fractional derivative 0 Dx is defined in Exercise 2.19. Let σk = k σi j=0 γk − 1, k = 0, 1, 2, and assume σ2 > 1. Then the operators 0 D x are defined by σ0 0 Dx u

1−γ0

= 0 Ix

u,

σ1 0 Dx u

1−γ1 R γ0 0Dx u,

= 0 Ix

σ2 0 Dx u

1−γ2 R γ1 R γ0 0Dx 0Dx u.

= 0 Ix

σ1 0 Djrbashian defined the function ω(λ) = cos φ1 (0 Dσ x u)(1; λ) + sin φ1 (0 D x u)(1; λ), where u(x; λ) solves the Cauchy-type problem

⎧ Dσ2 u(x) − [λ + q(x)]u(x) = 0, in D, ⎪ ⎪ ⎨0 x ⎪ 0 (0 D σ x u)(0) = sin φ0, ⎪ ⎪ 1 ⎪ (0 D σ x u)(0) = − cos φ0 . ⎩

166

5 Boundary Value Problem for Fractional

ODEs

Djrbashian showed that ω(λ) is an entire function in λ of order σ2−1 , and proved that if λ = λ0 is a zero of the function ω(λ), then u(x; λ0 ) is an eigenfunction of the Dirichlet problem corresponding to this eigenvalue. The work [Gal80] proved the completeness of the system of the eigen and associated functions in the case q ≡ 0. Note that for the fractional Sturm-Liouville problem, several researchers [KA13, KOM14, ZK13, TT16] have focused on retrieving as much as possible from the results of the classical case, in particular seeking conditions on the types of fractional operators (and boundary conditions) that lead to real eigenvalues and mutually orthogonal eigenfunctions. The one-sided case as (5.37) is far less understood.

5.3.1 Riemann-Liouville case The following theorem summarizes several results on problem (5.37) with q ≡ 0 in the Riemann-Liouville case. (iii) and (iv) are taken from [AKT13, Theorems 2.1 and 2.3] and (ii) from [JLPR15, appendix]. The work [JLPR15] also gives a variational approach for efficiently approximating the eigenvalues and eigenfunctions. Theorem 5.16 In the Riemann-Liouville case, the following statements hold for problem (5.37) with q ≡ 0. (i) The eigenvalues {λn }n≥1 are zeros of the function Eα,α (−λ), and the corresponding eigenfunctions are given by x α−1 Eα,α (−λn x α ). (ii) The lowest Dirichlet eigenvalue is real and positive, and the associated eigenfunction can be taken to be strictly positive in D. (iii) All eigenvalues lie in the sector {z ∈ C : | arg z| < (iv) There is no eigenvalue inside the circle with a

2−α 2 π}. radius Γ(2α) Γ(α)

Proof (i). Consider the following ode for μ ∈ C:  R α −0Dx v = μv, in D, v(0) = 0,

R α−1 v(0) 0Dx

= 1.

centered at the origin.

(5.38)

In view of the Dirichlet boundary condition u(0) = u(1) = 0 in problem (5.37), we choose v(0) = 0, and R0Dxα−1 v(0) = 1 (but other normalizing conditions are possible). Then according to the theory of fractional odes in Proposition 4.4, for any μ ∈ C, problem (5.38) has a unique solution vμ (x), given by vμ (x) = x α−1 Eα,α (−μx α ). vμ (1) = 0 if and only if μ is a zero of the Mittag-Leffler function Eα,α (−λ), and the corresponding solution x α−1 Eα,α (−μx α ) is precisely an eigenfunction. (ii). Let the operator T : C(D) → C(D) be defined by ∫ 1 T f (x) = G(x, s) f (s)ds = (0 Ixα f )(1)x α−1 − 0 Ixα f (x), (5.39) 0

where G(x, s) is the Green’s function given in Theorem 5.1. Clearly, T is linear and, further it is compact. Let P = {v ∈ C(D) : v ≥ 0 in D}. We claim that T is positive

5.3 Fractional Sturm-Liouville Problem

167

on P. Indeed, let f ∈ C(D), and f ≥ 0. Then ∫ ∫ x 1 x α−1 1 (1 − s)α−1 f (s)ds + ((x − xs)α−1 − (x − s)α−1 ) f (s)ds. T f (x) = Γ(α) x Γ(α) 0 For any x ∈ D, the first term is nonnegative. Similarly, since (x − xs)α−1 > (x − s)α−1 for s ∈ (0, x), the second term is also nonnegative. Hence, T f ∈ P, i.e., the operator T is positive. Now by Krein-Rutman theorem [Dei85, Theorem 19.2], the spectral radius of T is an eigenvalue of T, and there exists an eigenfunction u ∈ P \ {0}. (iii). It suffices to analyze the expression −(R0Dxα u, u). Note that −(R0Dxα u, u) = (0 Ix2−α u , u ). By the Macaev-Palant theorem [MP62] (see also [GK70, p. 402, footnote (46)]), the values of the quadratic form (0 Ix2−α u , u ) lies in the sector {z ∈ C : | arg z| < 2−α 2 π}. (iv). By Theorem 5.1, problem (5.37) is equivalent to the following integral equation u(x) = λTu, with T defined in (5.39), and by part (i), a number λ ∈ C is an eigenvalue of problem (5.37) if and only if it is a zero of Eα,α (−λ). Now we decompose the operator Tu into Tu = −T0 u + T1 u with T0 u = 0 Ixα u and T1 u = x α−1 (0 Ixα u)(1). For any α ∈ (1, 2), the operators T0 and T1 are trace-class. Hence, the trace of tr(T) the operator T is given by tr(T) = tr(−T0 + T1 ) = −tr(T0 ) + tr(T1 ). Moreover, we have ∫ 1 1 Γ(α) . tr(T0 ) = 0 and tr(T1 ) = s α−1 (1 − s)α−1 ds = Γ(α) 0 Γ(2α)  Γ(α) Γ(α) −1 −1 Hence, tr(T) = Γ(2α) , i.e., λ1−1 + ∞ i=2 λi = Γ(2α) . By (ii), λ1 is a positive number. ∞ −1 By the sector property of the roots of Eα,α (−x) (cf. (iii)), i=2 λi is also a positive Γ(α) . That is, the function Eα,α (−λ) has no zero inside number. Therefore, λ1−1 < Γ(2α) the circle with radius

Γ(2α) Γ(α)

centered at the origin.



Remark 5.10 The first result connects eigenvalues of problem (5.37) with the function Eα,α (−λ). Since a number λ ∈ C is an eigenvalue if and only if it is a zero of Eα,α (−λ), (ii) implies that Eα,α (−λ) has at least one positive root. Theorem 5.16(i) indicates that the eigenfunctions {un } are given by un (x) = α−1 1 Since limx→0 Eα,α, (−λx α ) = Γ(α) , it follows that un (x) ∼ xΓ(α) ,

x α−1 Eα,α, (−λn x α ).

α−1

as x → 0. Thus, for 1 < α < 2, un (x) ∼ xΓ(α) and is not Lipschitz at x = 0. Hence, it is impossible to normalize by setting its derivative at x = 0 to be unity. Theorem 5.16(ii) indicates that there is always at least one real eigenvalue and the asociated eigenfunction is strictly positive in D. Fig. 5.7 shows the lowest eigenvalues (which are positive and real, by Theorem 5.16(ii)) and the lower bound Γ(2α) Γ(α) given by Theorem 5.16(iii). The eigenvalues and eigenfunctions are computed using a numerical method developed in [JLPR15]. Interestingly, the lowest eigenvalue is not monotone with respect to the fractional-order α: It first decreases with α, and then increases with α. Further, the lower bound Γ(2α) Γ(α) is fairly loose. The corresponding eigenfunctions are also presented in Fig. 5.7, with the normalization R0Dxα−1 u1 (0) = 1.

168

5 Boundary Value Problem for Fractional

ODEs

The eigenfunctions become more symmetric about x = 12 as α increases. This is to be expected since as α → 2, the eigenfunction of −u , sin nπx, must be recovered. 10

0.6

exact lower bound

8

u1(x)

6 4

=1.25 =1.50 =1.75

0.4

0.2

2 0

1

1.2

1.4

1.6

1.8

2

0

0

0.5

1

x

α Fig. 5.7 The lowest Dirichlet eigenvalues of the operator −R 0 Dx versus its lower bound the corresponding eigenfunction.

Γ(2α) Γ(α) ,

and

For α sufficiently close to 1, there is only a single real eigenvalue and corresponding eigenfunction (actually this holds for α less than approximately 1.34, after which there are three real eigenvalues with linearly independent eigenfunctions). With increasing α from α = 1, each subsequent real eigenvalue added has an eigenfunction with one more zero. When subsequently added eigenvalues become complex, these occur in complex conjugate pairs, and then the added eigenfunctions have two more zeros in D than the previous one (but there is no proof of this observation). Fig. 5.8 shows a few higher eigenfunctions for the case α = 43 , where the index excludes the complex conjugate. Note both the singular behavior at the α−1 origin (asymptotically xΓ(α) ) and the sinusoidal with rapidly decreasing amplitude as x → 1. However, these computational observations are purely empirical, and no proof is available. It is unclear whether the eigenvalue(s) at the bifurcation point is geometrically/algebraically simple. Naturally, the bifurcation is not stable under the perturbation of a potential term and then the eigenfunctions can be noticeably different.

5.3.2 Djrbashian-Caputo case The Djrbashian-Caputo fractional Sturm-Liouville problem is more complex than the Riemann-Liouville case, and even fewer results are known. Nakhushev [Nak77] investigated the spectrum of the Dirichlet-type problem  β u (x) + λ(R0Dx u)(x) = 0, in D, u(0) = u(1) = 0,

5.3 Fractional Sturm-Liouville Problem

n=1 n=2 n=3 n=4 n=5

(un)

0.4 0.2

0.05 0

(un)

0.6

169

0 -0.2

-0.05

n=1 n=2 n=3 n=4 n=5

-0.1

0

0.5

x

1

-0.15

0

0.5

x

1

4 α Fig. 5.8 The Dirichlet eigenfunctions {un }5n=1 of the operator −R 0 Dx with α = 3 .

with β ∈ (0, 1), and showed that λ is an eigenvalue if and only if it is a zero of the Mittag-Leffler function E2−β,2 (−λ) (and thus identical with problem (5.37) with a Djrbashian-Caputo fractional derivative: This is no coincidence, since Nakhushev’s β formulation can be obtained from problem (5.37) by applying the operator C 0 Dx , with β = 2−α). Further, Nakhshev showed that all the zeros whose moduli are sufficiently large are simple zeros, and they have the asymptotic estimates |λk | = O(k 2−β ) as k → ∞. These investigations were continued by several researchers, e.g., Malamud [Mal94, MO01] and Aleroev [AKT13]. Theorem 5.17 In the Djrbashian-Caputo case, the following statements hold for problem (5.37) with q ≡ 0. (i) The eigenvalues {λn }n ≥1 are zeros of the function Eα,2 (−λ), and the corresponding eigenfunctions are given by xEα,2 (−λn x α ). (ii) If α ∈ ( 53 , 2), then the smallest eigenvalue is real and positive, and the associated eigenfunction is strictly positive in D. Proof The proof of assertion (i) is identical with that in Theorem 5.16. (ii). The existence of a real positive eigenvalue for α > 53 was proved by [Pop06] by a careful analysis of the asymptotics of Eα,2 (−λ), and we refer to [Pop06] for details. The corresponding eigenfunctions is given by u(x) = xEα,2 (−λx α ). It suffices to show that u1 (x) = xEα,2 (−λ1 x α ) does not vanish in D. Let x0 ∈ D such that x0 Eα,2 (−λ1 x0α ) = 0. Then λ1 x0α is a zero of Eα,2 (−λ), and moreover, λ1 x0α < λ1 .  This contradicts the assumption that λ1 is the first zero of Eα,2 (−λ). There are only finitely many real eigenvalues for any α < 2. In fact, the existence of real eigenvalues is only guaranteed for α sufficiently close to 2. It has been shown [Pop06] that there exists real eigenvalues provided α > 53 . Careful numerical experiments indicate that the first real zeros appear for α ∈ (1.5991152, 1.5991153) and they occur in pairs [JR12]. By Proposition 3.2, we have the following asymptotic:

1 π α + O(|n| −1 ln |n|) znα = 2nπi − (α − 1) ln 2π|n| + sign(n) i + ln Γ(2−α) 2

170

5 Boundary Value Problem for Fractional

ODEs

This leads to the following asymptotics for the magnitude and phase: |λn | ∼ (2n +

1−α 2 2 2 ) π

arg(λn ) ∼ π − α arctan



+ ((1−α) ln 2πn + ln

2 α Γ(2−α) )

α2

∼ (2πn)α,

(2 − α)π 2πn + (1 − α) π2 ∼ . α (α − 1) ln 2πn − ln Γ(2−α) 2

(5.40)

This asymptotic behavior is illustrated in Fig. 5.9, where the index n excludes the conjugate eigenvalues. It is observed that the magnitude prediction in (5.40) is fairly accurate except for the first few eigenvalues, but the phase is accurate only for very large eigenvalue numbers and this is particularly evident as α approaches 2, when the real eigenvalues kick in. Thus, these formulas are accurate indeed only in the “asymptotic” regime.

10

10

1.5

|ang( n)|

| n|

103

2

=1.25 =1.5

1

0

5

n

10

15

1

0.5

0

=1.25 =1.5

0

5

n

10

15

Fig. 5.9 The magnitude |λ n | and angle arg(λ n ) of the eigenvalues λ n s for problem (5.37) with − 0CDxα . The dashed and solid lines denote the estimate and true value, respectively. α α The eigenfunctions un for the operator − C 0 Dx are given by xEα,2 (−λn x ). These functions are sinusoidal in nature but significantly attenuated near x = 1 with the degree of attenuation strongly depending on α. Fig. 5.10 shows the first five eigenfunctions (excluding the complex conjugate ones) for α = 32 , where we have the normalization. With complex-valued eigenfunctions, one can always multiply by any complex number so that multiplying by the imaginary unit i interchanges the real to imaginary ones. The choice un (0) = 1 sets a consistent choice for selecting the real part of the eigenfunction. A close inspection of the plots indicates that the number of interior zeros of both real and imaginary parts increases by two with consecutive complex eigenfunctions. Also, the number of interior zeros of the real and imaginary parts of the respective eigenfunction un always differs by one. It should be stressed that this pattern is purely a numerical observation.

5.3 Fractional Sturm-Liouville Problem

n=1 n=2 n=3 n=4 n=5

(un)

0.15 0.1

0.1

0.05

0.06 0.04 0.02

0 -0.05

n=1 n=2 n=3 n=4 n=5

0.08

(un)

0.2

171

0 0

0.5

x

1

0

0.5

x

1

Fig. 5.10 The real and imaginary parts of the eigenfunctions {un }5n=1 for problem (5.37) with − 0CDxα and α = 32 .

Exercises

Exercise 5.1 In Lemma 5.1, examine the dependence of the value of r on the fractional order α.

Exercise 5.2 Prove the comparison principle for the two-point bvp in the Riemann-Liouville case. Let u1 and u2 be the solutions to
$$-{}^R_0D_x^\alpha u_1 = f_1 \quad\text{and}\quad -{}^R_0D_x^\alpha u_2 = f_2, \quad \text{in } D,$$
with $u_1(0)=u_1(1)=0$ and $u_2(0)=u_2(1)=0$. Then $f_1 \le f_2$ implies $u_1 \le u_2$.

Exercise 5.3 This exercise studies the "natural" Dirichlet boundary condition in the Riemann-Liouville case, for $c_0, c_1 \in \mathbb{R}$:
$$\begin{cases} -{}^R_0D_x^\alpha u = f(x,u), & \text{in } D,\\ {}^R_0D_x^{\alpha-2} u(0) = c_0, \quad {}^R_0D_x^{\alpha-2} u(1) = c_1. \end{cases}$$
(i) Derive the solution representation via Green's function. (ii) Discuss the positivity of Green's function. (iii) Discuss the solution regularity, if f(x, u) is smooth.

Exercise 5.4 This exercise studies the "natural" Neumann boundary condition in the Riemann-Liouville case, with $c_0, c_1 \in \mathbb{R}$:
$$\begin{cases} -{}^R_0D_x^\alpha u = f(x,u), & \text{in } D,\\ {}^R_0D_x^{\alpha-1} u(0) = c_0, \quad {}^R_0D_x^{\alpha-1} u(1) = c_1. \end{cases}$$
(i) Derive the solution representation via Green's function. Is there any compatibility condition, as in the classical case? (ii) Discuss the positivity of Green's function. (iii) Discuss the solution regularity, if f(x, u) is smooth.

Exercise 5.5 Let 0 < α < 1, a, b, c ∈ R with a + b ≠ 0, and f : D × R → R be continuous.
(i) Prove that a function u ∈ C(D) with ${}_0I_x^{1-\alpha}(u - u(0)) \in AC(D)$ solves
$$ {}^C_0D_x^\alpha u(x) = f(x, u(x)) \ \text{ in } D, \qquad au(0) + bu(1) = c,$$
if and only if u ∈ C(D) satisfies
$$u(x) = \frac{c}{a+b} + \big({}_0I_x^\alpha f(\cdot,u(\cdot))\big)(x) - \frac{b}{a+b}\big({}_0I_x^\alpha f(\cdot,u(\cdot))\big)(1).$$
(ii) Now suppose that f is globally Lipschitz in u with a constant L. Prove that if L is sufficiently small, then the problem has a unique solution.
(iii) Derive a sharp bound on the required Lipschitz constant of f.

Exercise 5.6 Prove Theorem 5.9.

Exercise 5.7 Let γ > 0, 1 < α < 2, and f ∈ C(D). Consider the following bvp
$$-{}^C_0D_x^\alpha u = f \ \text{ in } D, \qquad u(0) - \gamma u'(0) = u(1) = 0.$$
(i) Derive Green's function G(x, s) for the problem. (ii) Discuss the positivity of G(x, s) in relation to the parameter γ. (iii) Derive an explicit expression for the critical value of γ as a function of α (so as to ensure positivity).

Exercise 5.8 Show that the operator T defined in (5.39) is compact on C(D).

Exercise 5.9 Discuss the asymptotic behaviour of the "Dirichlet"/"Neumann" eigenvalues with a Riemann-Liouville fractional derivative.

Exercise 5.10 Investigate numerically and analytically the properties of the following eigenvalue problem:
$$-{}^C_0D_x^\alpha u = \lambda u \ \text{ in } D, \qquad u'(0) = u(1) = 0.$$

Part III

Time-Fractional Diffusion

Chapter 6

Subdiffusion: Hilbert Space Theory

In this chapter, we describe the solution theory in Hilbert spaces for the time-fractional diffusion or subdiffusion model derived in Chapter 1, which involves a (regularized) Djrbashian-Caputo fractional derivative ∂tα u of order α ∈ (0, 1) in time. We will discuss the following aspects: unique existence of a (weak) solution, solution representations via Laplace transform, Hilbert space regularity, time-dependent diffusion coefficients, nonlinear problems, (weak) maximum principle, inverse problems, and numerical methods for discretizing ∂tα u. Most regularity results are described using the space H s (Ω) defined in Section A.2.4 in the appendix. Throughout, the problem is defined on an open bounded domain Ω ⊂ Rd (d = 1, 2, 3) with a boundary ∂Ω, and T > 0 is the final time. The notation Q = Ω × (0, T] denotes the parabolic cylinder, ∂L Q = ∂Ω × (0, T] and ∂p Q = (∂Ω × (0, T]) ∪ (Ω × {0}) the lateral and parabolic boundaries, respectively, and I = (0, T]. For a function v : Ω × I → R, we write v(t) = v(·, t) for any t ∈ I.

6.1 Existence and Uniqueness in an Abstract Hilbert Space

First, we analyze the subdiffusion problem in an abstract Hilbert space setting and give the uniqueness, existence and basic energy estimates, following the discussion in [Zac09] (which covers general completely positive kernels). See also the work [Aka19] on differential inclusions, which also discusses the concept of strong solutions and gives sufficient conditions under which the initial condition is attained in the classical sense. Let V and H be real Hilbert spaces such that V is densely and continuously embedded into H, with
$$\|v\|_H \le c_{V\to H}\|v\|_V, \quad \forall v \in V, \tag{6.1}$$
where $c_{V\to H}$ denotes the constant of the embedding $V \hookrightarrow H$. By identifying H with its dual H', we have $V \hookrightarrow H \cong H' \hookrightarrow V'$ and
$$(h, v)_H = \langle h, v\rangle_{V'\times V}, \quad \forall h \in H,\ v \in V, \tag{6.2}$$


where (·, ·)H and ⟨·, ·⟩ denote the inner product in H and the duality pairing between V' (the dual space of V) and V, respectively. Consider the abstract evolution problem
$$\partial_t\big((k_\alpha * (u - u_0))(t), v\big)_H + a(t, u(t), v) = \langle f(t), v\rangle_{V'\times V}, \quad \forall v \in V, \tag{6.3}$$

where u0 ∈ H and f ∈ L2(I; V') (see Section A.2.5 in the appendix for the definition of the space) are given data, and a : I × V × V → R is a time-dependent bounded V-coercive bilinear form; see Assumption 6.1 below for the precise assumptions. Note that the bilinear form a is only assumed to be measurable in time t, which allows, e.g., treating time-fractional pdes in divergence form with merely bounded and measurable coefficients. The term ∂tα u denotes the (regularized) Djrbashian-Caputo fractional derivative in time t,
$$\partial_t^\alpha u = {}^R\partial_t^\alpha(u - u_0), \quad\text{with } k_\alpha(t) = \Gamma(1-\alpha)^{-1}t^{-\alpha};$$
see Definition 2.4 in Chapter 2. We look for a solution of problem (6.3) in the space
$$W(u_0, V, H) = \{u \in L^2(I; V) : k_\alpha * (u - u_0) \in {}_0H^1(I; V')\},$$
where the left subscript 0 in the notation ${}_0H^1(I; V')$ means a vanishing trace at t = 0. The vector u0 can be regarded as the initial data for u, at least in a weak sense. Note that problem (6.3) is equivalent to the following operator equation:
$$\partial_t(k_\alpha * (u - u_0))(t) + A(t)u(t) = f(t), \quad \text{a.e. } t \in I,$$
in V', where the operator A(t) : V → V' is defined by
$$\langle A(t)u, v\rangle_{V'\times V} = a(t, u, v), \quad \forall u, v \in V. \tag{6.4}$$

We need one preliminary result. The sequence {kn} below is often called the Yosida approximation of the singular kernel kα, in view of its construction, and it allows overcoming the singularity of kα by an approximation argument. Part (iii) can be viewed as a version of Alikhanov's inequality in Lemma 2.11 for the Djrbashian-Caputo fractional derivative ∂tα.

Lemma 6.1 For the kernel kα, the following statements hold.
(i) kα ∗ k1−α ≡ 1 on I.
(ii) There exists a sequence of nonnegative and nonincreasing functions {kn}∞n=1 ⊂ W1,1(I) such that kn → kα in L1(I).
(iii) Any kn from (ii) satisfies u∂t(kn ∗ u) ≥ ½ ∂t(kn ∗ u²).

Proof (i) can be verified directly using the identity (2.10):
$$(k_\alpha * k_{1-\alpha})(t) = \frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^t (t-s)^{\alpha-1}s^{-\alpha}\,ds = \frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^1 (1-s)^{\alpha-1}s^{-\alpha}\,ds = \frac{B(\alpha,1-\alpha)}{\Gamma(\alpha)\Gamma(1-\alpha)} = 1.$$


For an arbitrary completely positive kernel, to which the kernel kα belongs, the required approximating sequence can be constructed via the so-called Yosida approximation. Specifically, let kn = nsn, with sn being the unique solution of the scalar Volterra equation
$$s_n(t) + n(s_n * k_{1-\alpha})(t) = 1, \quad t > 0,\ n \in \mathbb{N}.$$
Then it can be verified that kn = nsn satisfies the desired properties using the general theory for completely positive kernels (cf. [Prü93, Proposition 4.5] and [CN81, Proposition 2.1]). This can be verified directly for the kernel kα:
$$k_n = ns_n = nE_{1-\alpha,1}(-nt^{1-\alpha}) \in W^{1,1}(I), \qquad k_n \to k_\alpha \ \text{ in } L^1(I), \ \text{ as } n \to \infty.$$

The latter convergence follows directly from the exponential asymptotics in Theorem 3.2. Now by the complete monotonicity of the function Eα,1(−x) in Theorem 3.5, kn is nonnegative and decreasing. (iii) is a direct consequence of (ii), i.e., of kn being decreasing and nonnegative. □

We begin with an embedding result for the space W(u0, V, H). In the theory of abstract parabolic equations, the continuous embedding
$$H^1(I; V') \cap L^2(I; V) \hookrightarrow C(I; H) \tag{6.5}$$
is well known [LM72, Chapter 1, Proposition 2.1 and Theorem 3.1]. The following theorem provides an analogue for the space W(u0, V, H). When u0 = 0, the property kα ∗ u ∈ C(I; H) follows directly from the embedding (6.5): u ∈ L2(I; V) implies kα ∗ u ∈ L2(I; V) by Young's inequality, and by the definition of W(u0, V, H), we have kα ∗ u ∈ H1(I; V') ∩ L2(I; V) ↪ C(I; H). However, for u0 ≠ 0, this simple reduction is not feasible.

Theorem 6.1 Let V and H be real Hilbert spaces satisfying (6.1), u0 ∈ H and u ∈ W(u0, V, H). Then kα ∗ (u − u0) and kα ∗ u belong to C(I; H) (possibly after redefinition on a set of measure zero), and
$$\|k_\alpha * u\|_{C(I;H)} \le c(\|k_\alpha\|_{L^1(I)}, T, c_{V\to H})\big(\|\partial_t[k_\alpha*(u-u_0)]\|_{L^2(I;V')} + \|u\|_{L^2(I;V)} + \|u_0\|_H\big).$$
Further, $\|(k_\alpha*u)(t)\|_H^2 \in W^{1,1}(I)$, and for a.e. t ∈ I,
$$\partial_t\|(k_\alpha*u)(t)\|_H^2 = 2\big\langle [k_\alpha*(u-u_0)]'(t), [k_\alpha*u](t)\big\rangle_{V'\times V} + 2k_\alpha(t)\big(u_0, (k_\alpha*u)(t)\big)_H.$$

Proof Since
$$(k_\alpha * u_0) = (1 * k_\alpha)\,u_0 \in W^{1,1}(I; H) \hookrightarrow C(I; H),$$
kα ∗ (u − u0) ∈ C(I; H) if and only if kα ∗ u ∈ C(I; H). Let kn ∈ W1,1(I), n ∈ N, be given by Lemma 6.1(ii). Then kn ∗ u ∈ H1(I; V). For n ∈ N, let vn = kn ∗ u. Then

$$\partial_t\|v_n(t) - v_m(t)\|_H^2 = 2\big(v_n'(t) - v_m'(t), v_n(t) - v_m(t)\big)_H.$$

Thus, for all s, t ∈ I, with wn,m = vn − vm, there holds
$$\|w_{n,m}(t)\|_H^2 = \|w_{n,m}(s)\|_H^2 + 2\int_s^t [k_n(\xi)-k_m(\xi)](u_0, w_{n,m}(\xi))_H\,d\xi + 2\int_s^t \big\langle [k_n*(u-u_0)]'(\xi) - [k_m*(u-u_0)]'(\xi), w_{n,m}(\xi)\big\rangle_{V'\times V}\,d\xi.$$
This and Young's inequality imply
$$\|w_{n,m}(t)\|_H^2 \le \|w_{n,m}(s)\|_H^2 + \|[k_n*(u-u_0)]' - [k_m*(u-u_0)]'\|_{L^2(0,T;V')}^2 + \|w_{n,m}\|_{L^2(0,T;V)}^2 + 2\|u_0\|_H^2\|k_n-k_m\|_{L^1(0,T)}^2 + \tfrac12\|w_{n,m}\|_{C([0,T];H)}^2. \tag{6.6}$$

Since kn → kα in L1(I) as n → ∞ and kα ∗ (u − u0) ∈ 0H1(I; V'), we have
$$v_n \to k_\alpha * u =: v \ \text{ in } L^2(I;H) \text{ and } L^2(I;V), \qquad \partial_t(k_n*(u-u_0)) \to \partial_t(k_\alpha*(u-u_0)) \ \text{ in } L^2(I;V'). \tag{6.7}$$

Now we fix a point s ∈ (0, T) for which vn(s) → v(s) in H as n → ∞. Taking then in (6.6) the maximum over all t ∈ I and absorbing the last term imply that {vn} is a Cauchy sequence in C(I; H) and converges to some ṽ ∈ C(I; H). By the convergence vn → v in L2(I; H) in (6.7), the sequence {vn} actually converges to v a.e. in I. This proves the first assertion. Similarly, for all s, t ∈ I, and n ∈ N,
$$\|v_n(t)\|_H^2 = \|v_n(s)\|_H^2 + 2\int_s^t k_n(\xi)(u_0, v_n(\xi))_H\,d\xi + 2\int_s^t \big\langle [k_n*(u-u_0)]'(\xi), v_n(\xi)\big\rangle_{V'\times V}\,d\xi.$$
Taking the limit as n → ∞ gives (recall v = kα ∗ u)
$$\|v(t)\|_H^2 = \|v(s)\|_H^2 + 2\int_s^t k_\alpha(\xi)(u_0, v(\xi))_H\,d\xi + 2\int_s^t \big\langle [k_\alpha*(u-u_0)]'(\xi), v(\xi)\big\rangle_{V'\times V}\,d\xi. \tag{6.8}$$

Hence, the mapping {t → ‖v(t)‖²H} is absolutely continuous on I, and differentiating the identity (6.8) shows the third assertion. Next, integrating (6.8) with respect to s yields for all t ∈ I
$$\|v(t)\|_H^2 \le T^{-1}\|v\|_{L^2(I;H)}^2 + \|\partial_t(k_\alpha*(u-u_0))\|_{L^2(I;V')}^2 + \|v\|_{L^2(I;V)}^2 + 2\|u_0\|_H^2\|k_\alpha\|_{L^1(I)}^2 + \tfrac12\|v\|_{C(I;H)}^2.$$


Then taking the maximum over all t ∈ I and Young's inequality for convolutions imply the desired estimate, thereby completing the proof. □

Next, we turn to the existence and uniqueness of a solution u to problem (6.3). We will make the following assumption on the bilinear form a(t, ·, ·). The condition $a(t,u,u) \ge c_1\|u\|_V^2 - c_2\|u\|_H^2$ is an abstract form of Gårding's inequality.

Assumption 6.1 For a.e. t ∈ I, for the bilinear form a(t, ·, ·) : V × V → R, there exist constants c0, c1 > 0 and c2 ≥ 0 independent of t such that for all u, v ∈ V
$$|a(t,u,v)| \le c_0\|u\|_V\|v\|_V \quad\text{and}\quad a(t,u,u) \ge c_1\|u\|_V^2 - c_2\|u\|_H^2.$$

Moreover, the function {t → a(t, u, v)} is measurable on I for all u, v ∈ V.

Example 6.1 Let Ω ⊂ Rd, d = 1, 2, 3, be an open bounded domain. Consider
$$\begin{cases} \partial_t^\alpha u - \nabla\cdot(a\nabla u) + b\cdot\nabla u + cu = f, & \text{in } Q,\\ u = 0, & \text{on } \partial_L Q,\\ u(0) = u_0, & \text{in } \Omega. \end{cases}$$
Assume u0 ∈ L2(Ω), f ∈ L2(I; H−1(Ω)), a ∈ L∞(Q; Rd×d), and that there exists λ > 0 such that for a.e. (x, t) ∈ Q, a(x, t)ξ · ξ ≥ λ|ξ|² for any ξ ∈ Rd, where · and | · | denote the Euclidean inner product and norm on Rd, respectively. Further, b ∈ L∞(Q; Rd) and c ∈ L∞(Q). To apply the abstract theory, let V = H01(Ω), H = L2(Ω), and
$$a(t,u,v) = \int_\Omega a(x,t)\nabla u\cdot\nabla v + (b(x,t)\cdot\nabla u)v + c(x,t)uv\,dx \quad\text{and}\quad \langle f(t), v\rangle_{V'\times V} = \int_\Omega f(x,t)v(x)\,dx.$$

Then the weak formulation of the problem reads
$$\partial_t\big((k_\alpha*(u-u_0))(t), v\big)_H + a(t,u(t),v) = \langle f(t), v\rangle_{V'\times V}, \quad \forall v \in V, \text{ a.e. } t \in I,$$
and we seek a solution in the space W(u0, H01(Ω), L2(Ω)) = {u ∈ L2(I; H01(Ω)) : kα ∗ (u − u0) ∈ 0H1(I; H−1(Ω))}. It is folklore that Assumption 6.1 holds under these conditions on the coefficients.

To construct a solution, we use the standard Galerkin method and derive suitable a priori estimates for the weak solution, in a manner analogous to the standard parabolic case. Let {wm}∞m=1 be a basis of V, and let {u0,m}∞m=1 ⊂ H be a sequence with u0,m ∈ span{w1, . . . , wm} such that u0,m → u0 in H as m → ∞. Let
$$u_m(t) = \sum_{j=1}^m c_{jm}(t)w_j, \qquad u_{0,m} = \sum_{j=1}^m b_{jm}w_j.$$


Then formally, for every m ∈ N, the system of Galerkin equations reads
$$\sum_{j=1}^m \partial_t\big(k_\alpha*(c_{jm}-b_{jm})\big)(t)(w_j,w_i)_H + \sum_{j=1}^m c_{jm}(t)a(t,w_j,w_i) = \langle f(t), w_i\rangle_{V'\times V}, \tag{6.9}$$

for a.e. t ∈ I, i = 1, . . . , m. Then we have the following existence and uniqueness result. Theorem 6.2 Under Assumption 6.1, for u0 ∈ H and f ∈ L 2 (I; V ), problem (6.3) has exactly one solution u ∈ W(u0, V, H) and k α ∗ (u − u0 ) H 1 (I;V ) + u L 2 (I;V ) ≤ c(u0  H +  f  L 2 (I;V ) ). Moreover, for every m ∈ N, the Galerkin system (6.9) has a unique solution um ∈ W(u0,m, V, H), and the sequence {um } converges weakly to u in L 2 (I; V) as m → ∞. Proof First, we prove the uniqueness. Suppose that u1, u2 ∈ W(u0, V, H) are two solutions of problem (6.3). Then u = u1 − u2 belongs to W(0, V, H) and satisfies

$$\langle (k_\alpha*u)'(t), v\rangle_{V'\times V} + a(t,u(t),v) = 0, \quad \forall v \in V.$$
Taking v = u(t) gives
$$\langle (k_\alpha*u)'(t), u(t)\rangle_{V'\times V} + a(t,u(t),u(t)) = 0.$$
Let kn ∈ W1,1(I) be the regularized kernel given in Lemma 6.1(ii). Then the last identity is equivalent to

$$\langle (k_n*u)'(t), u(t)\rangle_{V'\times V} + a(t,u(t),u(t)) = \zeta_n(t), \quad\text{with } \zeta_n(t) = \big\langle (k_n*u)'(t) - (k_\alpha*u)'(t), u(t)\big\rangle_{V'\times V}.$$
It is easy to prove that ζn → 0 in L1(I). Since kn ∗ u ∈ H1(I; H), by (6.2),
$$(\partial_t(k_n*u)(t), u(t))_H + a(t,u(t),u(t)) = \zeta_n(t).$$
Since kn is nonincreasing and nonnegative, Lemma 6.1(iii) implies
$$\tfrac12\partial_t(k_n*\|u(\cdot)\|_H^2)(t) \le (\partial_t(k_n*u)(t), u(t))_H.$$
This and Gårding's inequality in Assumption 6.1 yield
$$\partial_t(k_n*\|u(\cdot)\|_H^2)(t) \le 2c_2\|u(t)\|_H^2 + 2\zeta_n(t).$$

It can be verified that all terms, as functions of t, belong to L1(I). Then convolving the inequality with k1−α and passing to the limit (possibly on a subsequence) give
$$\|u(t)\|_H^2 \le 2c_2(k_{1-\alpha}*\|u\|_H^2)(t), \tag{6.10}$$

where we have used the fact that since ζn → 0 in L 1 (I), Young’s inequality for convolution entails k1−α ∗ ζn → 0 in L 1 (I), and that


$$k_{1-\alpha}*\partial_t(k_n*\|u\|_H^2) = \partial_t(k_n*k_{1-\alpha}*\|u\|_H^2) \to \partial_t(k_\alpha*k_{1-\alpha}*\|u\|_H^2) = \|u\|_H^2, \quad\text{in } L^1(I),$$

since k n → k α in L 1 (I). Since k1−α is nonnegative, now by Gronwall’s inequal2 = 0, i.e., u = 0. To prove the existence, first, ity, (6.10) implies u(t) H we show that for every m ∈ N, the Galerkin system (6.9) has a unique solution cm := (c1m, . . . , cmm )T ∈ Rm on I in the class W(ξm, Rm, Rm ), where bm := (b1m, . . . , bmm )T ∈ Rm . Since w1, . . . , wm are linearly independent, the matrix ((w j , wi ) H ) ∈ Rm×m is invertible. Hence, (6.9) can be solved for ∂t (k α ∗ (c jm − b jm )), ∂t [k α ∗ (cm − bm )](t) = B(t)cm (t) + f(t),

a.e. t ∈ I,

where B ∈ L ∞ (I; Rm×m ), and f ∈ L 2 (I; Rm ), by the assumptions on a and f . Next, we transform it into a system of Volterra equations cm (t) = bm + k1−α ∗ [B(·)cm (·)](t) + (k1−α ∗ f)(t), which has a unique solution cm ∈ L 2 (I; Rm ). Then cm ∈ W(bm, Rm, Rm ), and hence it is also a solution of (6.9). This shows that for every m ∈ N, the Galerkin system (6.9) has exactly one solution um ∈ W(u0,m, V, H). Next, we derive a priori estimates on the Galerkin solutions {um }. The Galerkin system is equivalent to (∂t (k α ∗(um −u0,m ))(t), wi ) H + a(t, um (t), wi ) = f (t), wi )V ×V , i = 1, . . . , m. (6.11) By multiplying by cim and then summing over i, we obtain (∂t (k α ∗ (um − u0,m ))(t), um (t)) H + a(t, um (t), um (t)) = f (t), um (t))V ×V . Then it can be equivalently written as (∂t (k n ∗ um )(t), um (t)) H + a(t, um (t), um (t)) =k n (t)(u0,m, um (t)) + f (t), um (t) V ×V + ζmn (t), with ζmn (t) = [k n ∗ (um − u0,m )] (t) − [k α ∗ (um − u0,m )] (t), um V ×V . Using Lemma 6.1(iii) and Assumption 6.1, we obtain 1 2 ∂t (k n

2 2 ∗ um  H )(t) + 12 k n (t)um (t) H + c1 um (t)V2

2 ≤c2 um (t) H + k n (t)(u0,m, um (t)) + f (t), um (t) V ×V + ζmn (t),

which, together with Young’s inequality, yields 2 ∂t (k n ∗ um (t) H )(t) + c1 um (t)V2 2 2 ≤2c2 um  H + k n (t)u0,m  H + c1−1  f (t)V2 + 2ζmn (t).

Similarly, as n → ∞,

(6.12)


k1−α ∗ ζmn → 0 in L 1 (I), 2 2 k1−α ∗ ∂t (k n ∗ um (·) H ) → um (·) H

in L 1 (I).

Thus, convolving (6.12) with k1−α and passing to limit (on a subsequence) give 2 2 2 um (t) H ≤ 2c2 (k1−α ∗ um (·) H )(t) + u0,m  H + c1−1 (k1−α ∗  f (·)V2 )(t),

for all m ∈ N. By the positivity of k1−α , it follows that um  L 2 (I;H) ≤ c(c1, c2, k1−α  L 1 (I), T)(u0,m  H +  f  L 2 (I;V ) ). 2 )(0) = 0 and let n → ∞ to obtain We integrate (6.12) over I, since (k n ∗ um (·) H 2 + c1−1  f (t) L2 2 (I;V ) . c1 um  L2 2 (I;V ) ≤ 2c2 um (t) L2 2 (I;H) + k α  L 1 (I) u0,m  H

The last two estimates and the assumption u0,m → u0 in H yields um  L 2 (I;V ) ≤ c(c1, c2, k α  L 1 (I), T)(u0  H +  f  L 2 (I;V ) ).

(6.13)

Thus, there exists a subsequence of {um }, still denoted by {um }, and some u ∈ L 2 (I; V) such that (6.14) um  u in L 2 (I; V). We claim that u ∈ W(u0, V, H) and it is a solution of problem (6.3). Let ϕ ∈ C 1 (I) with ϕ(T) = 0. By multiplying (6.11) by ϕ and integration by parts, we obtain ∫ T ∫ T − ϕ (t)([k α ∗ (um − u0,m )](t), wi ) H dt + ϕ(t)a(t, um (t), wi )dt 0 0 ∫ T = ϕ(t) f (t), wi V ×V dt, i = 1, . . . , m, 0

since (k α ∗ (um − u0,m ))(0) = 0. Then by (6.14) and u0,m → u0 in H, Assumption 6.1, and Young’s and Hölder’s inequalities, this leads to ∫ T ∫ T − ϕ (t)([k α ∗ (u − u0 )](t),wi ) H dt + ϕ(t)a(t, u(t), wi )dt 0 0 ∫ T = ϕ(t) f (t), wi V ×V dt, ∀i ∈ N. 0

By (6.2), (k α ∗ (u − u0 ), wi ) H = k α ∗ (u − u0 ), wi V ×V . Thus, all the terms in this identity represent bounded linear functionals on V. Since {wi } is a basis of V, ∫ T ∫ T − ϕ (t)([k α ∗ (u − u0 )](t), v)V ×V dt + ϕ(t)a(t, u(t), v)dt 0 0 ∫ T = ϕ(t) f (t), v V ×V dt, ∀v ∈ V . 0


Since the identity holds for all ϕ ∈ C0∞ (I), k α ∗ (u − u0 ) has a weak derivative on I given by (6.15) ∂t [k α ∗ (u − u0 )](t) + A(t)u(t) = f (t), where A(t) is defined in (6.4). It follows from u ∈ L 2 (I; V) and  A(t)u(t)V ≤ c0 u(t)V that A(t)u(t) ∈ L 2 (I; V ). Since f ∈ L 2 (I; V ), (k α ∗ (u − u0 )) ∈ L 2 (I; V ). To show u ∈ W(u0, V, H), it suffices to show (k α ∗(u−u0 ))(0) = 0. Let z := k α ∗(u−u0 ). The preceding discussion indicates z ∈ H 1 (I; V ) → C(I; V ), and the last two identities yield ∫ T ∫ T − ϕ (t) z(t), v V ×V dt = ϕ(t) z (t), v V ×V dt, ∀v ∈ V, 0

0

for all ϕ ∈ C 1 (I) with ϕ(T) = 0. By choosing ϕ such that ϕ(0) = 1, and approximating z in H 1 (I; V ) by a sequence of functions C 1 (I; V ), by integration by parts,

z(0), v V ×V = 0 for all v ∈ V, i.e., z(0) = 0. In sum, the function u ∈ W(u0, V, H) solves (6.15), or equivalently (6.3). Moreover, (6.3) has exactly one solution u in W(u0, V, H). Thus, all weakly convergent subsequences of {um } (in L 2 (I; V)) have the same limit u. Hence, the whole sequence {um } converges weakly to u in L 2 (I; V). Last, we prove continuous dependence on the data. From um  u in L 2 (I; V) and (6.13), by the weak lower semi-continuity of norms, we deduce u L 2 (I;V ) ≤ lim inf um  L 2 (I;V ) ≤ c(u0  H +  f  L 2 (I;V ) ). m→∞

This estimate, the inequality  Au L 2 (I;V ) ≤ c0 u L 2 (I;V ) , (6.15) and the assumption f ∈ L 2 (I; V ) yield the desired stability estimate.  Next, we interpret the role of u0 . It plays the role of the initial data for u, at least in a weak sense. For example, if u and ∂t (k α ∗ (u − u0 )) = f˜ ∈ C(I; V ), then (k α ∗ (u − u0 ))(0) = 0 implies u(0) = u0 . Indeed, u − u0 = ∂t (k1−α ∗ k α ∗ (u − u0 )) = k1−α ∗ f˜,

in C(I; V ).

This implies u(0) = u0 in V . Under additional conditions, e.g., k1−α ∈ L 2 (I), u(0) = u0 actually holds in a classical sense [Aka19, Proposition 2.5]. The kernel k1−α (t) = Γ(α)−1 t α−1 belongs to L 2 (I) for any T > 0 if and only if α > 12 . Theorem 6.2 gives basic energy estimates. Next, we give a stronger result by exploiting the better integrability of k1−α rather than merely k1−α ∈ L 1 (I). It ∫T 2 dt < ∞ for any contains an analogue of the parabolic counterpart: 0 t −1 u(t) H u ∈ 0 H 1 (I; V ) ∩ L 2 (I; V); see [LM72, Chapter 3, Propositions 5.3 and 5.4], for the space W(u0, V, H). Theorem 6.3 Let V and H be real Hilbert spaces with (6.1), and u0 ∈ H, and u ∈ W(u0, V, H) solve problem (6.3). Then the following statements hold. (i) If k1−α ∈ L p (I), p > 1, then u ∈ L 2p (I; H) and with c = c(k1−α  L p (I), T)

184

6 Subdiffusion: Hilbert Space Theory

  u L 2p (I;H) ≤ c ∂t (k α ∗ (u − u0 )) L 2 (I;V ) + u L 2 (I;V ) + u0  H . (ii) The following estimate holds with c = c(k α  L 1 (I) ) ∫

T 2 k α (t)u(t) H dt

0

12

  ≤ c ∂t (k α ∗ (u − u0 )) L 2 (I;V ) + u L 2 (I;V ) + u0  H .

Proof The proof proceeds similarly as Theorem 6.2. Let {k n } be the sequence of Yosida approximations in Lemma 6.1(ii), and for t ∈ I, let g(t) = (k α ∗ (u − u0 )) (t), u(t) V ×V and ζn (t) = (k n ∗ (u − u0 )) (t) − (k α ∗ (u − u0 )) (t), u(t) V ×V . Then (∂t (k n ∗ u), u(t)) H = k n (t)(u0, u(t)) H + g(t) + ζn (t), with each term being in L 1 (I). By Lemma 6.1(iii) and Young’s inequality, 1 2 ∂t (k n

2 2 2 ∗ u H )(t) − 14 k n (t)u(t) H ≤ k n (t)u0  H + g(t) + ζn (t).

(6.16)

2 ≤ To prove part (i), convolving with k1−α , and passing to limit lead to u(t) H 2 2(u0  H + (k1−α ∗ g)(t)). Young’s inequality then gives 1

2 2 u L 2p (I;H) = u H  L p (I) ≤ 2(k1−α  L p (I) g L 1 (I) + T p u0  H ),

which together with the inequality 2g L 1 (I) ≤ ∂t (k α ∗(u −u0 )) L2 2 (I;V ) + u L2 2 (I;V ) implies the desired estimate in (i). To show (ii), by integrating (6.16) over I and 2 , we obtain dropping the term k n ∗ u(·) H ∫ 0

T

  2 2 k n (t)u(t) H dt ≤ 4 (1 ∗ k n )(T)u0  H + g L 1 (I) + ζn  L 1 (I) .

Since (1 ∗ k n )(T) → (1 ∗ k α )(T) and ζn  L 1 (I) → 0 as n → ∞, for sufficiently large n, there holds ∫ T 2 2 k n (t)u(t) H dt ≤ 8((1 ∗ k α )(T)u0  H + g L 1 (I) ). 0

Since k n → k α in L 1 (I), the desired assertion then follows from Fatou’s lemma.  We have the following corollary. The space L p,∞ (I) denotes the weak L p (I) space; see Section A.2.1 in the appendix for the definition. Corollary 6.1 Let Assumption 6.1 be fulfilled, u0 ∈ H and f ∈ L 2 (I; V ), Then problem (6.3) has exactly one solution u in the space Wα (u0, V, H) := {u ∈ L 2 (I; V) : 2 u∫ − u0 ∈ 0W α,2 (I; V )}. Furthermore, k α ∗ u ∈ C(I; H), u ∈ L 1−α ,∞ (I; H) and T −α 2 dt < ∞, and t u(t) H 0


$$\|u-u_0\|_{W^{\alpha,2}(I;V')} + \|u\|_{L^2(I;V)} + \|k_\alpha*u\|_{C(I;H)} + \|u\|_{L^{\frac{2}{1-\alpha}},\infty}(I;H)} + \Big(\int_0^T t^{-\alpha}\|u(t)\|_H^2\,dt\Big)^{1/2} \le c(\alpha,T)\big(\|u_0\|_H + \|f\|_{L^2(I;V')}\big).$$

Note that formally letting α → 1− in the estimate (with μ = 0) in the corollary recovers the well-known estimate for the solutions of abstract parabolic problems. The energy argument (by using suitable test functions in the weak formulation) can handle integrodifferential equations involving a completely positive kernel under low regularity condition on the coefficients, and extends to certain semilinear/quasilinear problems. Many further results have been obtained using the approach. These include L ∞ bounds [Zac08, VZ10], weak Harnack’s inequality [Zac13b], interior Hölder estimates using De Giorgi-Nash-type argument [Zac13a], and asymptotic decay [VZ15]. Sobolev regularity results were also derived recently [MMAK19, MMAK20] for the time-fractional advection-diffusion equation using an energy argument.
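To make the Volterra reformulation used in the proof of Theorem 6.2 concrete, the following sketch (our own illustration, not an algorithm from the text) integrates the scalar analogue ∂tα u = −λu, u(0) = u0, by rewriting it as u = u0 + k1−α ∗ (−λu) and applying an implicit product rectangle rule; the discretization and the reference evaluation of Eα are our own ad hoc choices.

```python
# Sketch (our own): scalar analogue of the Volterra reformulation in Theorem 6.2's proof.
#   d_t^alpha u = -lam*u, u(0) = u0   <=>   u = u0 + k_{1-alpha} * (-lam*u),
# with k_{1-alpha}(t) = t^(alpha-1)/Gamma(alpha), discretized by an implicit product
# rectangle rule.  The reference value u0*E_alpha(-lam*t^alpha) is computed via the
# standard completely monotone (spectral) representation of E_alpha on (-inf, 0].
import numpy as np
from math import gamma, pi, sin, cos
from scipy.integrate import quad

def ml_neg(x, a):
    """E_a(-x) for x >= 0 via its spectral representation (one convenient choice)."""
    if x == 0.0:
        return 1.0
    t = x ** (1.0 / a)
    dens = lambda r: (sin(a*pi)/pi) * r**(a-1) / (r**(2*a) + 2*r**a*cos(a*pi) + 1)
    return quad(lambda r: np.exp(-r*t) * dens(r), 0.0, np.inf)[0]

alpha, lam, u0, T, N = 0.6, 2.0, 1.0, 1.0, 200
h = T / N
t = np.linspace(0.0, T, N + 1)
u = np.zeros(N + 1); u[0] = u0
for i in range(1, N + 1):
    # w_j = int_{t_{j-1}}^{t_j} (t_i - s)^(alpha-1)/Gamma(alpha) ds, j = 1,...,i
    w = ((t[i] - t[:i])**alpha - (t[i] - t[1:i+1])**alpha) / gamma(alpha + 1)
    rhs = u0 - lam * np.dot(w[:-1], u[1:i])   # history part; current node treated implicitly
    u[i] = rhs / (1.0 + lam * w[-1])
exact = np.array([u0 * ml_neg(lam * s**alpha, alpha) for s in t])
print("max error:", np.abs(u - exact).max())  # first-order accuracy in h is expected
```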

6.2 Linear Problems with Time-Independent Coefficients

Now we study linear subdiffusion with time-independent coefficients. This is a relatively simple case that allows more direct treatment via separation of variables and Laplace transform. Consider the following subdiffusion problem for u:
$$\begin{cases} \partial_t^\alpha u = Lu + f, & \text{in } Q,\\ u = 0, & \text{on } \partial_L Q,\\ u(\cdot,0) = u_0, & \text{in } \Omega. \end{cases} \tag{6.17}$$

In the model, α ∈ (0, 1) is the fractional order, and ∂tα u denotes the left-sided Djrbashian-Caputo fractional derivative of the function u of order α ∈ (0, 1) with respect to time t (based at zero), i.e.,
$$\partial_t^\alpha u(x,t) = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-s)^{-\alpha}\partial_s u(x,s)\,ds,$$
cf. Definition 2.3, and L is a time-independent strongly elliptic second-order differential operator, defined by
$$Lu = \sum_{i,j=1}^d \partial_{x_i}\big(a_{ij}(x)\partial_{x_j}u\big) - q(x)u(x), \quad x \in \Omega, \tag{6.18}$$

where q(x) ≥ 0 is smooth, aij = aji, i, j = 1, . . . , d, and the symmetric matrix-valued function a = [aij(x)] : Ω → Rd×d is smooth and satisfies the uniform ellipticity condition
$$\lambda|\xi|^2 \le \xi\cdot a(x)\xi \le \lambda^{-1}|\xi|^2, \quad \forall\xi \in \mathbb{R}^d,\ x \in \Omega,$$
for some λ ∈ (0, 1), where · and | · | denote the Euclidean inner product and norm, respectively. The precise regularity conditions on f and u0 will be specified later. We shall derive explicit solution representations using separation of variables and Laplace transform, and then establish the existence, uniqueness and regularity theory in a Hilbert space setting. Much of the material of this section can be traced back to the paper [SY11]: it gives a first rigorous treatment of the solution theory in the Hilbert space H s (Ω), regularity estimates and applications in inverse problems (including backward subdiffusion and inverse source problems), and it covers both the subdiffusion and the diffusion-wave case. The analysis there uses the standard separation of variables technique, the decay property of Eα,β (z) in Theorem 3.2 and the complete monotonicity of Eα,1 (−t) in Theorem 3.5. Another early work is [McL10], which treats a slightly different mathematical model, ∂t u + R∂t1−α Au = f . Our presentation is more operator theoretic (see, e.g., the monograph [Prü93]). The current presentation using Laplace transform follows largely [JLZ18, Sections 2 and 3]. Most of our discussions are only for a zero Dirichlet boundary condition; the extension to a zero Neumann boundary condition is analogous. The case of a nonzero Dirichlet boundary condition in L 2 (∂L Q) was analyzed in a weak setting by the transposition method in [Yam18] and then applied to Dirichlet boundary control; see [KY21] for further results (including the Neumann case), and the monograph [KRY20] for a detailed treatment of fdes with the Djrbashian-Caputo fractional derivative in Sobolev spaces.

6.2.1 Solution representation

First, we derive a solution representation to problem (6.17) using the separation of variables technique. Let Ω ⊂ Rd be an open bounded smooth domain with a boundary ∂Ω. Let A : H 2 (Ω) ∩ H01 (Ω) → L 2 (Ω) be the realization of the time-independent second-order symmetric elliptic operator −L defined in (6.18) in the space L 2 (Ω) with its domain D(A) = {v ∈ H01 (Ω) : Av ∈ L 2 (Ω)}, i.e., with a zero Dirichlet boundary condition. It is unbounded and closed, and, by elliptic regularity theory and the Sobolev embedding theorem, its inverse A−1 : L 2 (Ω) → L 2 (Ω) is compact. Thus, by the standard spectral theory for compact operators, its spectrum is discrete, positive and accumulates at infinity. We repeat each eigenvalue of A according to its (finite) multiplicity: 0 < λ1 < λ2 ≤ · · · ≤ λj ≤ · · · → ∞, as j → ∞, and we denote by ϕj ∈ H 2 (Ω) ∩ H01 (Ω) an eigenfunction corresponding to λj, i.e.,
$$-L\varphi_j = \lambda_j\varphi_j \ \text{ in } \Omega, \qquad \varphi_j = 0 \ \text{ on } \partial\Omega.$$
The eigenfunctions {ϕj}∞j=1 can be taken to form an orthonormal basis of L 2 (Ω).


Then by multiplying both sides of problem (6.17) by ϕj, integrating over the domain Ω, and applying integration by parts twice, we obtain
$$\partial_t^\alpha(u(\cdot,t),\varphi_j) = (Lu(\cdot,t),\varphi_j) + (f(\cdot,t),\varphi_j) = (u(\cdot,t),L\varphi_j) + (f(\cdot,t),\varphi_j) = -\lambda_j(u(\cdot,t),\varphi_j) + (f(\cdot,t),\varphi_j),$$
with (·, ·) being the L 2 (Ω) inner product. Let uj(t) = (u(·, t), ϕj), fj(t) = (f(·, t), ϕj) and u0j = (u0, ϕj). Then we arrive at a system of fractional odes
$$\partial_t^\alpha u_j(t) = -\lambda_j u_j(t) + f_j(t), \quad t > 0, \qquad\text{with } u_j(0) = u_{0j},$$

for j = 1, 2, . . .. It remains to find the scalar functions uj(t), j = 1, 2, . . .. To this end, consider the following fractional ode for λ > 0:
$$\partial_t^\alpha u_\lambda(t) = -\lambda u_\lambda(t) + f(t), \quad t > 0, \qquad\text{with } u_\lambda(0) = c_0. \tag{6.19}$$
By means of Laplace transform, the unique solution uλ(t) is given by (cf. Proposition 4.5 in Chapter 4)
$$u_\lambda(t) = c_0 E_{\alpha,1}(-\lambda t^\alpha) + \int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}\big(-\lambda(t-s)^\alpha\big)f(s)\,ds, \tag{6.20}$$

where Eα,β (z) is the Mittag-Leffler function defined in (3.1) in Chapter 3. Hence, the solution u(t) to problem (6.17) can be formally represented by
$$u(x,t) = \sum_{j=1}^\infty (u_0,\varphi_j)\varphi_j(x)E_{\alpha,1}(-\lambda_j t^\alpha) + \sum_{j=1}^\infty \int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}\big(-\lambda_j(t-s)^\alpha\big)(f(\cdot,s),\varphi_j)\,ds\,\varphi_j(x).$$
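As a quick sanity check of the scalar formula (6.20) behind this expansion, one can verify it numerically for the toy choice f ≡ 1 (ours): term-wise integration of the Mittag-Leffler series gives $\int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda(t-s)^\alpha)\,ds = (1-E_{\alpha,1}(-\lambda t^\alpha))/\lambda$, and the substitution $y=(t-s)^\alpha$ removes the weak singularity from the quadrature. The following sketch compares the two evaluations.

```python
# Sketch: numerical check of (6.20) with f = 1 (our own toy choice).
# After substituting y = (t-s)^alpha the convolution loses its weak singularity:
#   int_0^t (t-s)^(a-1) E_{a,a}(-lam (t-s)^a) ds = (1/a) int_0^{t^a} E_{a,a}(-lam y) dy,
# and term-wise integration of the series gives the value (1 - E_{a,1}(-lam t^a))/lam.
from math import gamma
from scipy.integrate import quad

def ml(z, a, b, nterms=120):
    """Mittag-Leffler E_{a,b}(z) by its power series (fine for moderate |z|)."""
    return sum(z**k / gamma(a * k + b) for k in range(nterms))

alpha, lam, c0, t = 0.6, 5.0, 1.0, 0.8
convolution = quad(lambda y: ml(-lam * y, alpha, alpha) / alpha, 0.0, t**alpha)[0]
u_quad = c0 * ml(-lam * t**alpha, alpha, 1.0) + convolution
u_closed = c0 * ml(-lam * t**alpha, alpha, 1.0) + (1.0 - ml(-lam * t**alpha, alpha, 1.0)) / lam
print(u_quad, u_closed)   # the two values should agree to quadrature accuracy
```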

The solution u(t) can be succinctly represented by
$$u(t) = F(t)u_0 + \int_0^t E(t-s)f(s)\,ds,$$
where the solution operators F and E are defined by
$$F(t)v = \sum_{j=1}^\infty E_{\alpha,1}(-\lambda_j t^\alpha)(v,\varphi_j)\varphi_j, \tag{6.21}$$
$$E(t)v = \sum_{j=1}^\infty t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_j t^\alpha)(v,\varphi_j)\varphi_j, \tag{6.22}$$


respectively. The operators F and E are the solution operators for problem (6.17) with f ≡ 0 and with u0 ≡ 0, respectively. Note that as α → 1−, the two operators F(t) and E(t) coincide and recover those for the standard parabolic case.

Next, we re-derive the representation by means of the (vector-valued) Laplace transform (cf. Section A.3.1). We begin with the necessary functional analytic framework. We define an operator A in L 2 (Ω) by
$$(Au)(x) = (-Lu)(x), \quad x \in \Omega,$$
with its domain D(A) = H 2 (Ω) ∩ H01 (Ω). Then it is known that the operator A satisfies the following resolvent estimate (the notation ‖ · ‖ denotes the operator norm from L 2 (Ω) to L 2 (Ω)):
$$\|(z+A)^{-1}\| \le c_\theta|z|^{-1}, \quad \forall z \in \Sigma_\theta,\ \forall\theta \in (0,\pi), \tag{6.23}$$
with Σθ := {0 ≠ z ∈ C : |arg(z)| ≤ θ}. Denote by û the Laplace transform of u. We extend f from the interval I to (0, ∞) by zero extension. By Lemma 2.9, the Laplace transform of the Djrbashian-Caputo derivative ∂tα u is given by $\widehat{\partial_t^\alpha u}(z) = z^\alpha\hat u(z) - z^{\alpha-1}u(0)$. Thus, applying the Laplace transform to (6.17) leads to
$$z^\alpha\hat u(z) + A\hat u(z) = \hat f(z) + z^{\alpha-1}u_0.$$
Thus, the Laplace transform û of the solution u is given by
$$\hat u(z) = (z^\alpha + A)^{-1}\big(\hat f(z) + z^{\alpha-1}u_0\big).$$
By inverse Laplace transform and the convolution rule, we have
$$u(t) = \frac{1}{2\pi i}\int_{\mathcal C} e^{zt}z^{\alpha-1}(z^\alpha+A)^{-1}u_0\,dz + \frac{1}{2\pi i}\int_{\mathcal C} e^{zt}(z^\alpha+A)^{-1}\hat f(z)\,dz = \frac{1}{2\pi i}\int_{\mathcal C} e^{zt}z^{\alpha-1}(z^\alpha+A)^{-1}u_0\,dz + \int_0^t\Big[\frac{1}{2\pi i}\int_{\mathcal C} e^{zs}(z^\alpha+A)^{-1}\,dz\Big]f(t-s)\,ds,$$
where C ⊂ C is a Hankel contour, oriented with an increasing imaginary part. Therefore, we obtain the following representation:
$$u(t) = F(t)u_0 + \int_0^t E(t-s)f(s)\,ds, \tag{6.24}$$

where the solution operators F(t) and E(t) are, respectively, defined by
$$F(t) := \frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} e^{zt}z^{\alpha-1}(z^\alpha+A)^{-1}\,dz, \tag{6.25}$$
$$E(t) := \frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} e^{zt}(z^\alpha+A)^{-1}\,dz, \tag{6.26}$$
with the integral over a contour Γθ,δ ⊂ C (oriented with an increasing imaginary part), deformed from the contour C:


$$\Gamma_{\theta,\delta} = \{z \in \mathbb{C} : |z| = \delta, |\arg z| \le \theta\} \cup \{z \in \mathbb{C} : z = \rho e^{\pm i\theta}, \rho \ge \delta\}. \tag{6.27}$$
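In the scalar case A = λ > 0, the representation (6.25) is simply the inverse Laplace transform of $z^{\alpha-1}/(z^\alpha+\lambda)$, i.e., F(t) = Eα,1(−λt^α). The sketch below is our own illustration (mpmath's `invertlaplace` integrates along a Talbot-type deformed contour rather than literally along Γθ,δ) and checks this identity numerically against the Mittag-Leffler series at high precision.

```python
# Sketch (ours): in the scalar case A = lam, (6.25) reads
#   F(t) = (1/(2*pi*i)) int_Gamma e^{zt} z^(alpha-1)/(z^alpha + lam) dz = E_{alpha,1}(-lam t^alpha).
# Left-hand side: numerical inverse Laplace transform on a deformed (Talbot) contour.
# Right-hand side: Mittag-Leffler power series at high precision.
from mpmath import mp, mpf, gamma, power, invertlaplace

mp.dps = 30
alpha, lam = mpf('0.7'), mpf('4')

def ml(z, a, b, nterms=200):
    return sum(power(z, k) / gamma(a * k + b) for k in range(nterms))

fhat = lambda z: z**(alpha - 1) / (z**alpha + lam)   # Laplace transform of F(t)
for t in [mpf('0.1'), mpf('1'), mpf('5')]:
    lhs = invertlaplace(fhat, t, method='talbot')
    rhs = ml(-lam * t**alpha, alpha, 1)
    print(float(t), float(lhs), float(rhs))   # the two columns should agree
```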

Throughout, we fix θ ∈ (π/2, π) so that z^α ∈ Σαθ for all z ∈ Σθ. One may deform the contour Γθ,δ to obtain an explicit representation in terms of the eigenexpansion {(λj, ϕj)}j≥1, and then recover (6.21) and (6.22). The details are left to an exercise. Below we employ both representations for the analysis.

Now we give several useful results. The first connects the operators E and F, where I denotes the identity operator.

Lemma 6.2 The following identity holds: $AE(t) = -\frac{d}{dt}F(t)$.

Proof It follows from the identities $z^\alpha(z^\alpha+A)^{-1} = I - A(z^\alpha+A)^{-1}$ and $\int_{\Gamma_{\theta,\delta}} e^{zt}\,dz = 0$ that for any t > 0,
$$-\frac{d}{dt}F(t) = -\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} e^{zt}z^\alpha(z^\alpha+A)^{-1}\,dz = -\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} e^{zt}\big(I - A(z^\alpha+A)^{-1}\big)\,dz = A\,\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} e^{zt}(z^\alpha+A)^{-1}\,dz = AE(t).$$

This shows the assertion. Alternatively, this can be seen from (6.21) and (6.22):
$$\frac{d}{dt}F(t)v = \sum_{j=1}^\infty \frac{d}{dt}E_{\alpha,1}(-\lambda_j t^\alpha)(v,\varphi_j)\varphi_j = \sum_{j=1}^\infty -\lambda_j t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_j t^\alpha)(v,\varphi_j)\varphi_j = -AE(t)v.$$
This shows also the identity. □

The next result gives the continuity of the operator F(t) at t = 0.

Lemma 6.3 For the operator F defined in (6.25), $\lim_{t\to0^+}\|I - F(t)\| = 0$.

Proof Note that
$$\frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} z^{\alpha-1}(z^\alpha+A)^{-1}\,dz = \frac{1}{2\pi i}\int_{\Gamma_{\theta,\delta}} z^{-1}\big(I - A(z^\alpha+A)^{-1}\big)\,dz = I.$$
This and the definition of F(t) imply that for any v ∈ L 2 (Ω) with ‖v‖L2(Ω) ≤ 1,
$$\lim_{t\to0^+}\|F(t)v - v\|_{L^2(\Omega)} = \lim_{t\to0^+}\frac{1}{2\pi}\Big\|\int_{\Gamma_{\theta,\delta}} (e^{zt}-1)z^{\alpha-1}(z^\alpha+A)^{-1}v\,dz\Big\|_{L^2(\Omega)}.$$
Then Lebesgue's dominated convergence theorem yields the desired assertion. Alternatively, by representation (6.21),


$$\|F(t)v - v\|_{L^2(\Omega)}^2 = \sum_{j=1}^\infty |(v,\varphi_j)|^2\big(E_{\alpha,1}(-\lambda_j t^\alpha) - 1\big)^2,$$

and $\lim_{t\to0^+}(E_{\alpha,1}(-\lambda_j t^\alpha) - 1) = 0$, j ∈ N. Meanwhile, by the complete monotonicity of the function Eα,1(−t) on R+ in Theorem 3.5, Eα,1(−t) ∈ [0, 1] for t ≥ 0. Consequently, for any 0 ≤ t ≤ T,
$$\sum_{j=1}^\infty |(v,\varphi_j)|^2|E_{\alpha,1}(-\lambda_j t^\alpha) - 1|^2 \le \sum_{j=1}^\infty |(v,\varphi_j)|^2 = \|v\|_{L^2(\Omega)}^2 \le 1.$$

Now Lebesgue’s dominated convergence theorem yields the identity.



The next theorem summarizes the smoothing properties of F(t) and E(t). The notation $F^{(k)}(t) = \frac{d^k}{dt^k}F(t)$ denotes the kth derivative of F(t) in t, etc. Part (ii) indicates that, unlike F, E can absorb A², which agrees with the asymptotics of Eα,α(−x) in Theorem 3.2.

Theorem 6.4 For any k ∈ N0, the operators F and E defined in (6.25)–(6.26) satisfy the following estimates for any t ∈ I:
(i) $t^{-\alpha}\|A^{-1}(I - F(t))\| + t^{1-\alpha}\|A^{-1}F'(t)\| \le c$;
(ii) $t^{k+1-\alpha}\|E^{(k)}(t)\| + t^{k+1}\|AE^{(k)}(t)\| + t^{k+1+\alpha}\|A^2E^{(k)}(t)\| \le c$;
(iii) $t^{k}\|F^{(k)}(t)\| + t^{k+\alpha}\|AF^{(k)}(t)\| \le c$;
(iv) $\|F(t)\| \le E_{\alpha,1}(-\lambda_1 t^\alpha)$ and $\|E(t)\| \le t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_1 t^\alpha)$.

Proof The proof employs the resolvent estimate (6.23). Obviously the following identity holds: $A(z^\alpha+A)^{-1} = I - z^\alpha(z^\alpha+A)^{-1}$, and by the estimate (6.23),
$$\|A(z^\alpha+A)^{-1}\| \le c, \quad \forall z \in \Gamma_{\theta,\delta}. \tag{6.28}$$

In part (i), by Lemma 6.2 and choosing δ = t −1 in the contour Γθ,δ and letting ẑ = tz (with |dz| denoting the arc length element of Γθ,δ):
$$\|A^{-1}F'(t)\| = \|E(t)\| \le \frac{1}{2\pi}\int_{\Gamma_{\theta,\delta}} e^{\Re(z)t}\|(z^\alpha+A)^{-1}\|\,|dz| \le ct^{\alpha-1}\int_{\Gamma_{\theta,1}} e^{\Re(\hat z)}|\hat z|^{-\alpha}\,|d\hat z| \le ct^{\alpha-1}.$$
Now for any k ∈ N0 and m = 0, 1, by choosing δ = t −1 in Γθ,δ and splitting the contour into the circular arc and the two rays, we have
$$\|A^m F^{(k)}(t)\| \le c\Big\|\int_{\Gamma_{\theta,\delta}} e^{zt}z^{k+\alpha-1}A^m(z^\alpha+A)^{-1}\,dz\Big\| \le c\int_{\Gamma_{\theta,\delta}} e^{\Re(z)t}|z|^{k-1+m\alpha}\,|dz| \le c\int_\delta^\infty e^{st\cos\theta}s^{k-1+m\alpha}\,ds + c\int_{-\theta}^{\theta} e^{\cos\varphi}\delta^{k+m\alpha}\,d\varphi \le ct^{-m\alpha-k}.$$

ecos ϕ δ k+mα dϕ ≤ ct −mα−k .

6.2 Linear Problems with Time-Independent Coefficients

This implies

191

t k F (k) (t) + t k+α  AF (k) (t) ≤ c,

∀t > 0,

showing (iii). Similarly, we can show (ii) by   ∫  1  m (k) zt k m α −1   A E (t) =  e z A (z + A) dz  2πi Γθ, δ ∫ ≤c e(z)t |z| k+(m−1)α |dz| ≤ ct (1−m)α−k−1 . Γθ, δ

Next, the proof of Lemma 6.2 gives AE(t) = −

1 2πi

∫ Γθ, δ

ezt z α (z α + A)−1 dz.

It follows from this identity and (6.28) that ∫ 1  A2 E (k) (t) ≤ e(z)t |z| k+α |dz| ≤ ct −k−1−α . 2π Γθ, δ Last, in view of the identity F (t) = −AE(t) from Lemma 6.2, A−1 F (t) = −E(t). Thus, (ii) implies the estimate t α−1  A−1 F (t) ≤ c in (i). Now the bound on  A−1 (I − F(t)) follows from Lemmas 6.2 and 6.3, ∫ t ∫ t d (I − F(s)) ds = AE(s) ds, I − F(t) = 0 ds 0 from which and (ii) it follows directly that ∫ t ∫ E(s) ds ≤ c  A−1 (I − F(t)) ≤ 0

t

s α−1 ds = ct α .

0

In (iv), the first estimate follows from Theorem 3.5 F(t)v L2 2 (Ω) = ≤Eα,1 (−λ1 t α )2



Eα,1 (−λ j t α )2 (ϕ j , v)2

j=1

∞ j=1

(ϕ j , v)2 = Eα,1 (−λ1 t α )2 v L2 2 (Ω) .

The second also holds since Eα,α (−t) is completely monotone, cf. Corollary 3.2.  The following result is immediate from Theorem 6.4. The space H s (Ω) is defined in Section A.2.4 in the appendix. Corollary 6.2 The operators F(t) and E(t) defined in (6.25)–(6.26) satisfy F(t)v H β (Ω) ≤ ct

γ−β 2 α

v H γ (Ω)

and

E(t)v H β (Ω) ≤ ct (1−

β−γ 2 )α−1

v H γ (Ω),

192

6 Subdiffusion: Hilbert Space Theory

where β, γ ∈ R and γ ≤ β ≤ γ + 2, and the constant c depends only on α and β − γ. Proof The estimates are direct from Theorem 6.4. We provide an alternative proof using the properties of Eα,β (z). Indeed, by the definition of F(t), 2 F(t)v H  β (Ω) =

=t (γ−β)α

∞ j=1

∞ j=1

β

λ j |Eα,1 (−λ j t α )| 2 (v, ϕ j )2

β−γ (β−γ)α

λj

t

γ

|Eα,1 (−λ j t α )| 2 λ j (v, ϕ j )2 .

By the decay property of the function Eα,1 (−t) in Corollary 3.1, |Eα,1 (−λ j t α )| ≤ β−γ c(1 + λ j t α )−1, with c = c(α). For 0 ≤ β − γ ≤ 2, there holds sup j λ j t (β−γ)α (1 + λ j t α )−2 ≤ c, with c > 0 depending only on α and β − γ. Thus, 2 (γ−β)α sup F(t)v H  β (Ω) ≤ ct j

β−γ (β−γ)α ∞ t

λj

(1 + λ j t α )2

j=1

γ

2 λ j (v, ϕ j )2 ≤ ct (γ−β)α v H  γ (Ω) .



The other estimate can be derived similarly.

Remark 6.1 The restriction β ≤ γ + 2 in Corollary 6.2 indicates that F(t) has at best an order two smoothing in space, which contrasts sharply with the classical parabolic case, for which there holds F(t)v H β (Ω) ≤ ct

γ−β 2

v H γ (Ω),

∀β ≥ γ.

This restriction is due to the sublinear decay of Eα,1 (−λ j t α ) on R+ for 0 < α < 1, instead of the exponential decay of e−λ j t in the standard diffusion case. Limited smoothing is characteristic of many nonlocal models. Before turning to the regularity issue, we note that a fractional analogue of the classical Duhamel principle holds for subdiffusion, which allows representing the solution of an inhomogeneous problem in terms of solutions of the associated homogeneous problems [US07]. We use the notation ∫ t 1 d R α ∂t v(·, t; s) = (t − r)−α v(·, r; s)dr, dt Γ(1 − α) s ∫ t 1 (t − r)−α ∂r v(·, r; s)dr. ∂tα v(·, t; s) = Γ(1 − α) s Theorem 6.5 Let f : Q → R be a smooth function. If a smooth function u satisfies

∂tα u + Au = f , in Q, (6.29) u(·, 0) = 0, in Ω, then u can be represented by

6.2 Linear Problems with Time-Independent Coefficients

∫ u(·, t) = 0

t

193

R 1−α ∂t v(·, t; s)ds,

t ∈ I,

(6.30)

where v(·, ·; s) satisfies (with a parameter s ∈ (0, T))

∂tα v + Av = 0, in Ω × (s, T], v(·, s) = f (·, s), in Ω. Proof It suffices to prove that the function u indeed satisfies (6.29). For smooth f , u and v, we may take any derivatives when needed, and further, we omit the dependence on x by writing u(·, t) as u(t) etc. In the analysis, we employ the following identity R 1−α ∂t

f (t) =

f (0)t α−1 Γ(α)

+ ∂t1−α f (t),

(6.31)

cf. Theorem 2.12. Since v is sufficiently smooth, we pass to t → 0+ in the definition of u in (6.30) and obtain u(0) = 0 by the absolute continuity of Lebesgue integral. Next, the definition of u in (6.30) and the initial condition for v give ∫ t R 1−α ∂t R∂t1−α v(t; s)ds ∂t u(t) = ∂t v(t; t) + 0 ∫ t R 1−α ∂t R∂t1−α v(t; s)ds. = ∂t f (t) + 0

Meanwhile, substituting it into the definition ∂tα u = ∂tα u(t) =

∫t 1 −α Γ(1−α) 0 (t − s) ∂s u(s)ds

gives

∫ t 1 (t − s)−α R∂s1−α f (s)ds Γ(1 − α) 0 ∫ s ∫ t R 1−α (t − s)−α ∂s ∂s v(s; r)drds . + 0

0

Next, we simplify the two terms in the bracket, denoted by I1 and I2 . By the identities ∫t (6.31) and r (t − s)−α (s − r)α−1 ds = Γ(α)Γ(1 − α), for r < t, we deduce ∫ s ∫ t 1 (t − s)−α f (0)s α−1 + (s − r)α−1 ∂r f (r)dr ds Γ(α) 0 0 ∫ t ∫ t ∫ t 1 f (0) (t − s)−α s α−1 ds + ∂r f (r) (t − s)−α (s − r)α−1 dsdr = Γ(α) 0 0 r ∫ t ∂r f (r)dr = Γ(1 − α) f (t). = Γ(1 − α) f (0) +

I1 =

0

For the term I2 , the identity (6.31) and changing integration order give

194

6 Subdiffusion: Hilbert Space Theory

∫ s ∫ t 1 −α I2 = (t − s) (s − r)α−1 ∂r v(r; r)drds Γ(α) 0 0 ∫ s∫ s ∫ t 1 −α (t − s) (s − ξ)α−1 ∂ξ2 v(ξ; r)dξdrds + Γ(α) 0 0 r ∫ t ∫ t 1 ∂r v(r; r) (t − s)−α (s − r)α−1 ds dr = Γ(α) 0 r ∫ t∫ ξ ∫ t 1 2 + ∂ξ v(ξ; r) (t − s)−α (s − ξ)α−1 ds drdξ Γ(α) 0 0 ξ ∫ t∫ ξ ∫ t ∂r v(r; r)dr + ∂ξ2 v(ξ; r)drdξ . = Γ(1 − α) 0

0

0

Meanwhile, since the operator A is independent of t and v is smooth, (∂tα v)(s, s) = 0, cf. Lemma 2.13, and thus there holds ∫ t ∫ t ∫ t R 1−α R 1−α α ∂t Av(t; s)ds = − ∂t ∂t v(t; s)ds = − ∂t1−α ∂tα v(t; s)ds. Au(t) = 0

0

0

Substituting the definition of ∂t1−α v(t; s) and applying the identity (6.31) lead to ∫ r ∫ t∫ t 1 α−1 −Au(t) = (t − r) ∂r (r − ξ)−α ∂ξ v(ξ; s)dξdrds Γ(α)Γ(1 − α) 0 s s ∫ t ∫ t 1 ∂s v(s, s) (t − r)α−1 (r − s)−α dr ds + I3 , = Γ(α)Γ(1 − α) 0 s where the term I3 is given by ∫ t∫ t ∫ r α−1 (t − r) (r − ξ)−α ∂ξ2 v(ξ; s)dξdrds I3 = 0 s s ∫ t ∫ t ∫ t (t − r)α−1 (r − ξ)−α dr ∂ξ2 v(ξ; s)dξds = 0

s

ξ

= Γ(α)Γ(1 − α)

∫ t∫ 0

s

t

∂ξ2 v(ξ; s)dξds.

The preceding identities together imply ∫ t I1 + I 2 I3 − = f (t). ∂s v(s; s)ds − (∂tα u + Au)(t) = Γ(1 − α) Γ(α)Γ(1 − α) 0 Thus, the function u defined in (6.30) satisfies (6.29).



6.2 Linear Problems with Time-Independent Coefficients

195

6.2.2 Existence, uniqueness and regularity Below we show the existence, uniqueness and regularity of a solution to problem (6.17), using the solution representation (6.24). First, we introduce the concept of weak solutions for problem (6.17). Definition 6.1 We call u a weak solution to problem (6.17) if the equation in (6.17) holds in L 2 (Ω) and u(t) ∈ H01 (Ω) for almost all t ∈ I and u ∈ C(I; H −γ (Ω)), with lim u(·, t) − u0  H −γ (Ω) = 0,

t→0+

for some γ ≥ 0, which may depend on α. The following existence and uniqueness result holds for problem (6.17) with f ≡ 0. The inhomogeneous case can be analyzed similarly. Alternatively, one may employ the standard Galerkin approximation and energy estimates as in Section 6.1. Theorem 6.6 If u0 ∈ L 2 (Ω) and f ≡ 0, then there exists a unique weak solution u ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)) in the sense of Definition 6.1. Proof First, we show that the representation u(t) = F(t)u0 gives a weak solution to problem (6.17). By Corollary 6.2 (with β = γ = 0), we have u(t) L 2 (Ω) = F(t)u0  L 2 (Ω) ≤ cu0  L 2 (Ω),

(6.32)

and by Corollary 6.2 (with β = 2 and γ = 0), for any t > 0, u(t) H 2 (Ω) = F(t)u0  H 2 (Ω) ≤ ct −α u0  L 2 (Ω) .

(6.33)

Further, by Theorem 6.4, u ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)). Now using the governing equation ∂tα u = Au in (6.17), ∂tα u ∈ C(I; L 2 (Ω)), and thus the equation is satisfied in L 2 (Ω) a.e. Further, by Lemma 6.3, u(t) = F(t)u0 satisfies lim u(t) − u0  L 2 (Ω) ≤ lim+ F(t) − I u0  L 2 (Ω) = 0.

t→0+

t→0

(6.34)

Thus, u(t) = F(t)u0 is indeed a solution to problem (6.17) in the sense of Definition 6.1. Next, we show the uniqueness of the weak solution. It suffices to show that problem (6.17) with u0 ≡ 0 and f ≡ 0 has only a trivial solution. By taking inner product with ϕ j and setting u j (t) = (u(t), ϕ j ) give ∂tα u j (t) = −λ j u j (t),

∀t ∈ I.

Since u(t) ∈ L 2 (Ω) for t ∈ I, it follows from (6.34) that u j (0) = 0. Due to the existence and uniqueness of solutions to fractional odes in Theorem 4.9 in Chapter 2 4, we deduce u j (t) = 0, j = 1, 2, . . . . Since {ϕ j }∞ j=1 is an orthonormal basis in L (Ω), we have u ≡ 0 in Q. This shows the uniqueness, and completes the proof.  Now we derive the H s (Ω) regularity for f ≡ 0.

196

6 Subdiffusion: Hilbert Space Theory

Theorem 6.7 If u0 ∈ H γ (Ω), with 0 ≤ γ ≤ 2, and f ≡ 0. Then the solution u to problem (6.17) belongs to C(I; H γ (Ω)) ∩ C(I; H β (Ω)) for γ ≤ β ≤ γ + 2, and ∂tα u ∈ C(I; H γ (Ω)) and satisfies for any k ∈ N,   (β−γ)α  (k)  (6.35) u (t) β ≤ ct − 2 −k u0  H γ (Ω), γ ≤ β ≤ γ + 2, H (Ω)

∂tα u(t) H β (Ω)

≤ ct −

β−γ+2 α 2

u0  H γ (Ω),

γ − 2 ≤ β ≤ γ.

(6.36)

Proof By (6.24), the solution u is given by u(t) = F(t)u0 . Then the bound (6.35) follows from Theorem 6.4(iii), and the bound (6.36) from the governing equation ∂tα u = Au. The continuity of u(t) in H γ (Ω) up to t = 0 is proven in Lemma 6.3.  Remark 6.2 Theorem 6.7 indicates that for any t > 0, γ ≥ 0, 0 ≤ β ≤ γ + 2 and k = 1, 2, . . . , u(k) (t) H β (Ω) ≤ ct −k−

β−γ 2 α

u0  H γ (Ω),

for t > 0.

(6.37)

In contrast, for classical diffusion, we have u(k) (t) H β (Ω) ≤ ct −k−

β−γ 2

u0  H γ (Ω)

for every β > γ. The slow decay of Eα,1 (−λ j t α ) accounts for the restriction β ≤ γ +2 in Corollary 6.2. Further, for any u0 ∈ L 2 (Ω) and f ≡ 0, the unique weak solution u ∈ C([0, ∞); L 2 (Ω)) ∩ C((0, ∞); H 2 (Ω)) to problem (6.17) satisfies u(t) L 2 (Ω) ≤ c(1 + λ1 t α )−1 u0  L 2 (Ω),

∀t ≥ 0.

This is direct from Theorem 6.4(iv) and Corollary 3.1. We shall need the L p (I; X)-norm and L p,∞ (I; X)-norm (cf. [BL76, section 1.3]) ∫ u L p (I;X) :=

I

p

u(t)X dt

 p1

, 1

u L p,∞ (I;X) := sup λ|{t ∈ I : u(t)X > λ}| p . λ>0

The next result is direct from Theorem 6.7. It can be viewed as the maximal L p regularity for homogeneous subdiffusion. The notation D(A) denotes the domain of the operator A, equipped with the graph norm. Theorem 6.8 The following maximal L p -regularity estimates hold ∂tα u L p (I;L 2 (Ω)) +  Au L p (I;L 2 (Ω)) ≤ cp u0 (L 2 (Ω),D(A))

1 ,p 1− pα

∂tα u L p,∞ (I;L 2 (Ω)) +  Au L p,∞ (I;L 2 (Ω)) ≤ cp u0  L 2 (Ω), ∂tα u L p (I;L 2 (Ω)) +  Au L p (I;L 2 (Ω)) ≤ cp u0  L 2 (Ω), where the constant cp depends on p.

,

p ∈ ( α1 , ∞], p = α1 , p ∈ [1, α1 ),

6.2 Linear Problems with Time-Independent Coefficients

197

Proof The mapping properties in Theorem 6.4 imply  Au L 2 (Ω) ≤ c Au0  L 2 (Ω) . Similarly, one can show u L 2 (Ω) ≤ cu0  L 2 (Ω)

and

 Au L 2 (Ω) ≤ ct −α u0  L 2 (Ω) .

This last estimate immediately implies the third assertion. Now for p ∈ ( α1 , ∞], the last two estimates imply u L ∞ (I;D(A)) ≤ cu0 D(A)

and

u

1

L α ,∞ (I;D(A))

≤ cu0  L 2 (Ω),

which imply the first assertion with p = ∞ and the second assertion, respectively. The real interpolation of the last two estimates yields u

1

(L α ,∞ (I;D(A)), L ∞ (I;D(A)))1−

1 αp , p

≤ cu0 (L 2 (Ω),D(A))

Since (L α ,∞ (I; , D(A)), L ∞ (I; D(A)))1−

1− α1p , p

1

1 αp ,p

, ∀ p ∈ (α−1, ∞).

= L p (I; D(A)) [BL76, Theorem 5.2.1],

this implies the first assertion in the case p ∈ ( α1 , ∞).



The next result gives the analyticity of the solution u(t). It will play a role in investigating maximum principle and inverse problems. Proposition 6.1 If u0 ∈ L 2 (Ω), then the solution u : I → L 2 (Ω) to problem (6.17) with f ≡ 0 is analytic in the sector Σ =: {z ∈ C : z  0, | arg(z)| ≤ π2 }. Proof Since Eα,1 (−z) is entire, cf. Proposition 3.1 in Chapter 3, Eα,1 (−λn t α ) is analytic in Σ ⊂ C. Hence, the finite sum u N (t) =

N (u0, ϕ j )Eα,1 (−λ j t α )ϕ j j=1

is analytic in Σ. Further, by Corollary 3.1, for any z ∈ Σ, u N (z) − u(z) L2 2 (Ω) =



(u0, ϕ j )2 |Eα,1 (−λ j z α )| 2 ≤ c

j=N +1



|(u0, ϕ j )| 2,

j=N +1

i.e., lim N →∞ u N (z) − u(z) L ∞ (Σ;L 2 (Ω)) = 0. Hence, u is actually analytic in Σ.



Example 6.2 Consider problem (6.17) on the unit interval Ω = (0, 1), i.e., 2 ⎧ ⎪ ∂ α u = ∂xx u, in Q, ⎪ ⎨ t ⎪ u(·, t) = 0, on ∂L Q, ⎪ ⎪ ⎪ u(·, 0) = u0, in Ω, ⎩

with (i) u0 (x) = sin πx and (ii) u0 (x) = δ 1 (x), the Dirac delta function at x = 12 . 2 In case (i), u0 belongs to H γ (Ω) for any γ > 0, and u(x, t) = Eα,1 (−π 2 t α ) sin πx.


Despite the smoothness of u0, the temporal regularity of u is limited for any α ∈ (0, 1): $E_{\alpha,1}(-\pi^2 t^\alpha) \sim 1 - \frac{\pi^2}{\Gamma(\alpha+1)}t^\alpha$ as t → 0+, which is continuous at t = 0, but u'(t) is unbounded as t → 0+. It contrasts with the case α = 1, for which $u(x,t) = e^{-\pi^2 t}\sin(\pi x)$ is C∞[0, T]. Thus, the temporal regularity in Theorem 6.7 is sharp. In case (ii), by Sobolev embedding in Theorem A.3, u0 belongs to H γ (Ω) for any γ < −(1/2 + ε), with ε > 0. In Fig. 6.1, we show the solution profiles for α = 0.5 and α = 1. For any t > 0, u is very smooth in x for α = 1, but it remains nonsmooth for α = 0.5. Actually, the kink at x = 0.5 remains no matter how long the problem evolves, showing the limited spatial smoothing property of the operator F(t).
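A profile like those in Fig. 6.1 can be reproduced from the truncated eigenfunction expansion $u(x,t)=\sum_j (u_0,\varphi_j)\varphi_j(x)E_{\alpha,1}(-\lambda_j t^\alpha)$ with $\lambda_j=(j\pi)^2$, $\varphi_j(x)=\sqrt2\sin(j\pi x)$ and $(\delta_{1/2},\varphi_j)=\varphi_j(\tfrac12)$. The sketch below is for illustration only (the truncation level, the evaluation time and the spectral-integral evaluation of Eα,1 are our own choices); it indicates that the profile near x = 1/2 stays kinked for α = 0.5 while it is smoothed out for α = 1.

```python
# Sketch (ours): truncated eigenexpansion for Example 6.2(ii) with u0 = delta_{1/2} on (0,1):
#   u(x,t) = sum_j phi_j(1/2) phi_j(x) E_{alpha,1}(-(j*pi)^2 t^alpha),  phi_j = sqrt(2) sin(j*pi*x).
# Truncation J and the spectral-integral evaluation of E_{alpha,1} are ad hoc choices.
import numpy as np
from math import pi, sin, cos, exp
from scipy.integrate import quad

def ml_neg(x, a):
    if a == 1.0:
        return exp(-x)
    if x == 0.0:
        return 1.0
    t = x ** (1.0 / a)
    dens = lambda r: (sin(a*pi)/pi) * r**(a-1) / (r**(2*a) + 2*r**a*cos(a*pi) + 1)
    return quad(lambda u: exp(-u) * dens(u/t) / t, 0.0, np.inf)[0]

x = np.linspace(0.0, 1.0, 201)
J, t = 200, 0.01
for alpha in (0.5, 1.0):
    u = np.zeros_like(x)
    for j in range(1, J + 1):
        lam = (j * pi) ** 2
        u += 2.0 * sin(j * pi / 2) * np.sin(j * pi * x) * ml_neg(lam * t**alpha, alpha)
    print(alpha, float(u[100]), float(u[90]))  # value at the kink x = 0.5 and nearby
```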


Fig. 6.1 The solution profiles for Example 6.2(ii) at two time instances for α = 0.5 and 1.

Now we state two further results on the uniqueness of the solutions. Theorem 6.9 Let u0 ∈ H γ (Ω) with γ > d. Let u ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)) satisfy (6.17) with f ≡ 0. Let ω ⊂ Ω be an arbitrary subdomain. Then u(x, t) = 0 for (x, t) ∈ ω × I implies u ≡ 0 in Q. Proof By Sobolev embedding in Theorem A.3, since ϕ j  L 2 (Ω) = 1, ϕ j  L ∞ (Ω) ≤ cϕ j   γ2

H (Ω)

γ

γ

≤ c A 4 ϕ j  L 2 (Ω) ≤ c|λ j | 4 . 2

Then in view of the well-known asymptotic λ j = O( j d ) [Wey12] and the condition u0 ∈ H γ (Ω) with γ > d, the Cauchy-Schwarz inequality, we deduce ∞

|(u0, ϕ j )|ϕ j  L ∞ (Ω) ≤ c

j=1



γ

|(u0, ϕ j )||λ j | 4 = c

j=1

≤c

∞ j=1

(u0, ϕ j )2 |λ j |γ

∞ j=1

∞ 12 j=1

− γ2

λj

γ

− γ4

|(u0, ϕ j )||λ j | 2 λ j 12

< ∞.

(6.38)

6.2 Linear Problems with Time-Independent Coefficients

199

 α By Proposition 6.1, the function ∞ j=1 (u0, ϕ j )Eα,1 (−λ j t )ϕ j (x) can be extended analytically in t to a sector {z ∈ C : z  0, | arg z| ≤ θ 0 } for some θ 0 > 0. Therefore, ∞

u(x, t) =

(u0, ϕ j )Eα,1 (−λ j t α )ϕ j (x) = 0,

∀(x, t) ∈ ω × (0, T)

(u0, ϕ j )Eα,1 (−λ j t α )ϕ j (x) = 0,

∀(x, t) ∈ ω × (0, ∞).

j=1

implies u(x, t) =

∞ j=1

∞ be the spectrum of A and m the multiplicity of the kth Let σ(A) = {μk }k=1 k 2 k eigenvalue μk , and denote by {ϕk j } m j=1 an L (Ω) orthonormal basis of Ker(μk − A) (i.e., the spectrum σ(A) is a set instead of a sequence with multiplicities). Then we can rewrite the series into mk ∞ (u0, ϕk j )ϕk j Eα,1 (−μk t α ) = 0,

(x, t) ∈ ω × (0, ∞).

j=1

k=1

By the complete monotonicity of Eα,1 (−x) in Theorem 3.5 and (6.38), we deduce mk ∞

|(u0, ϕk j )||ϕk j (x)||Eα,1 (−μk t α )| ≤

k=1 j=1

mk ∞

|(u0, ϕk j )|ϕk j  L ∞ (Ω) < ∞.

k=1 j=1

Thus, Lebesgue’s dominated convergence theorem yields ∫



e−zt

mk ∞

0

=

k=1 j=1

mk ∞

(u0, ϕk j )

By Lemma 3.2,



e−zt Eα,1 (−μk t α ) dtϕk j (x),



0

k=1 j=1

∫∞ 0

e−zt Eα,1 (−λt α )dt =

mk ∞

(u0, ϕk j )

k=1 j=1

i.e.,

(u0, ϕk j )ϕk j (x)Eα,1 (−μk t α ) dt

(z) > 0, and thus

z α−1 ϕk j (x) = 0, + μk



mk ∞ (u0, ϕk j ) k=1 j=1

z α−1 z α +λ ,

x ∈ ω, (z) > 0.

1 ϕk j (x) = 0, η + μk

x ∈ ω, (z) > 0,

x ∈ ω, (η) > 0.

(6.39)

By (6.38), we can analytically continue both sides of (6.39) in η so that the identity ∞ . Next, we take a suitable small disk which includes (6.39) holds for η ∈ C\ {−μk }k=1 −μ and does not include {−μk }k . Integrating (6.39) along this disk gives

200

6 Subdiffusion: Hilbert Space Theory

u (x) =

m (u0, ϕ j )ϕ j (x) = 0,

x ∈ ω.

j=1

Since (μ − A)u = 0 in Ω, and u = 0 in ω. The unique continuation principle for elliptic pdes [Isa06, Section 3.3] implies u = 0 in Ω for each  ∈ N. Since the set  of functions {ϕ j } m j=1 is linearly independent in Ω, (u0, ϕ j ) = 0 for j = 1, . . . , m ,  ∈ N. Therefore, u ≡ 0 in Q.  Remark 6.3 Theorem 6.9 corresponds to [SW78, Corollary 2.3]. For α = 1, the uniqueness holds without the boundary condition (i.e., unique continuation). However, for α ∈ (0, 1), it is unclear whether the uniqueness holds in that case. The next result gives a converse statement to the asymptotic decay (6.37). It asserts that the solution to the homogeneous problem cannot decay faster than t −m with any m ∈ N, if the solution does not vanish identically. This is distinct for subdiffusion because the classical diffusion (i.e., with α = 1) admits nonzero solutions decaying exponentially fast. This is one description of the slow diffusion modeled by subdiffusion when compared with the classical case. Theorem 6.10 Let u0 ∈ H β (Ω) with β > d and ω ⊂ Ω be an arbitrary subdomain. Let u ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)) satisfy problem (6.17) with f ≡ 0. If for any m ∈ N, there exists a constant c(m) > 0 such that u(·, t) L ∞ (ω) ≤ c(m)t −m

as t → ∞,

(6.40)

then u ≡ 0 in Ω × (0, ∞). Proof By (6.38) and since 0 ≤ Eα,1 (−η) ≤ 1 on R+ , cf. Theorem 3.5, we deduce u(x, t) =

mk ∞ (u0, ϕk j )Eα,1 (−μk t α )ϕk j (x) k=1 j=1

converges uniformly for x ∈ Ω and δ ≤ t ≤ T with any δ, T > 0. Hence, by Theorem 3.2, for any p ∈ N, we have u(x, t) = −

p mk ∞ k=1 j=1 =1

mk ∞ O + k=1 j=1

(−1) (u0, ϕk j )ϕk j (x) Γ(1 − α)μ k t α 1 p+1

μk t α(p+1)



(u0, ϕk j )ϕk j (x)

as t → ∞.

(6.41)

Since α ∈ (0, 1), Γ(1 − α)  0 by 1 − α > 0. By setting m = 1 in (6.40) and p = 1 in the identity (6.41), multiplying by t α and letting t → ∞, we deduce mk ∞ k=1 j=1

1 (u0, ϕk j )ϕk j (x) = 0 Γ(1 − α)μk

x ∈ ω.

6.2 Linear Problems with Time-Independent Coefficients

201

By 0 < α < 1, there exists {i }i ∈N such that limi→∞ i = ∞ and αi  N. In fact, n1 , where let α  Q. Then α  N for any  ∈ N. Meanwhile for α ∈ Q, i.e., α = m 1 m1, n1 ∈ N have no common divisors except for 1. There exist infinitely many  ∈ N 1  0. Therefore, with no common divisors with m1 , and α ∈ Q \ N. Then Γ(1−α i) by setting p = 2, 3, . . . and repeating the preceding argument, we obtain mk ∞ 1 k=1

μ ki

(u0, ϕk j )ϕk j (x) = 0,

x ∈ ω, i ∈ N.

j=1

Hence, mk m1 ∞ μ1 i (u0, ϕ1 j )ϕ1 j (x) + (u0, ϕk j )ϕk j (x) = 0, μk j=1 j=1 k=2

x ∈ ω, i ∈ N.

Now using (6.38) and 0 < μ1 < μ2 < . . ., we have mk mk  ∞ ∞    μ1 i    μ1  i (u0, ϕk j )ϕk j  ∞ ≤    |(u0, ϕk j )|ϕk j  L ∞ (Ω) L (Ω) μk μk k=2 j=1 k=2 j=1 mk ∞  μ  i  μ  i  1  1 ≤  |(u0, ϕk j )|ϕk j  L ∞ (Ω) ≤ c  . μ2 k=2 j=1 μ2

Letting i → ∞ and noting | μμ12 | < 1 yield m1 (u0, ϕ1 j )ϕ1 j (x) = 0,

∀x ∈ ω.

j=1

Similarly, we obtain mk

(u0, ϕk j )ϕk j (x) = 0,

x ∈ ω, k ∈ N.

j=1

Since u0 =

mk ∞ ( (u0, ϕk j )ϕk j ),

in L 2 (Ω),

k=1 j=1

we deduce u ≡ 0 in Ω × (0, ∞).



Next, we turn to problem (6.17) with u0 ≡ 0. We begin with the L p estimate in time. Given a Banach space X and a closed linear operator A with domain D(A) ⊂ X, the time fractional evolution equation (with α ∈ (0, 1))

∂tα u(t) + Au(t) = f (t), t ∈ I, (6.42) u(0) = 0,

202

6 Subdiffusion: Hilbert Space Theory

is said to have the property of maximal L p regularity, if for each f ∈ L p (I; X), α, p problem (6.42) possesses a unique solution u in the space W0 (I; X) ∩ L p (I; D(A)) s, p (see Appendix A.2.5 for the space W0 (I; X), in the sense of complex interpolation, where the subscript 0 indicates a zero trace at t = 0). That is, both terms on the left-hand side belong to L p (I; X). The following maximal L p -regularity holds. It recovers the classical maximal regularity estimates for standard parabolic problems as α → 1− . It can be found in the PhD thesis [Baj01, Chapters 4 and 5]; see also [Zac05] for relevant results for Volterra evolution equations. The proof below is taken from [JLZ20a, Theorem 2.2] and is a straightforward application of the now classical operator-valued Fourier multiplier theorem due to Weis [Wei01, Theorem 3.4]. Theorem 6.11 If u0 ≡ 0 and f ∈ L p (I; L 2 (Ω)) with 1 < p < ∞, then problem (6.17) has a unique solution u ∈ L p (I; H 2 (Ω)) such that ∂tα u ∈ L p (I; L 2 (Ω)) and u L p (I; H 2 (Ω)) + ∂tα u L p (I;L 2 (Ω)) ≤ c f  L p (I;L 2 (Ω)), where the constant c does not depend on f and T. Proof For f ∈ L p (I; L 2 (Ω)), extending f to be zero on Ω × [(−∞, 0) ∪ (T, ∞)] yields f ∈ L p (R; L 2 (Ω)) and  f  L p (R;L 2 (Ω)) =  f  L p (I;L 2 (Ω)) .

(6.43)

Further, we have ∂tα f (t) = −∞R∂tα f (t),

∀t ∈ I

and

 R α −∞ ∂t f

= (iξ)α  f (ξ)

cf. Theorem 2.8, where  denotes taking Fourier transform in t (i.e.,  f ≡ F [ f ] the f (ξ) is a solution of (6.17) and Fourier transform of f ). Then,  u(ξ) = ((iξ)α + A)−1  (iξ)α  u(ξ) = (iξ)α ((iξ)α + A)−1  f (ξ). The self-adjoint operator A : D(A) → L 2 (Ω) is invertible from L 2 (Ω) to D(A), and generates a bounded analytic semigroup. Thus, the operator (iξ)α ((iξ)α + A)−1

(6.44)

is bounded from L 2 (Ω) to D(A) in a small neighborhood N of ξ = 0. Further, in N , the operator ξ

d [(iξ)α ((iξ)α + A)−1 ] =α(iξ)α ((iξ)α + A)−1 − α(iξ)2α ((iξ)α + A)−2 dξ

(6.45)

is also bounded. By the resolvent estimate (6.23), for ξ away from zero, the following inequality (iξ)α ((iξ)α + A)−1  ≤ c implies the boundedness of (6.44) and (6.45). Since boundedness of operators is equivalent to R-boundedness of operators in L 2 (Ω) (see [KW04, p. 75] for the

6.2 Linear Problems with Time-Independent Coefficients

203

concept of R-boundedness), the boundedness of (6.44) and (6.45) implies that (6.44) is an operator-valued Fourier multiplier [Wei01, Theorem 3.4], and thus u(ξ)] L p (R;L 2 (Ω)) ∂tα u L p (R;L 2 (Ω)) ≤ F −1 [(iξ)α  −1 α = F [(iξ) ((iξ)α + A)−1  f (ξ)] L p (R;L 2 (Ω)) ≤ cF −1 [  f (ξ)] L p (R;L 2 (Ω)) ≤ c f  L p (R;L 2 (Ω)) . This and (6.43) imply the bound on ∂tα u L p (I;L 2 (Ω)) . The bound on  Au L p (I;L 2 (Ω)) follows similarly by replacing (iξ)α ((iξ)α + A)−1 with A((iξ)α + A)−1 in the proof. This completes the proof of the theorem.  Next, we derive pointwise in time regularity. Theorem 6.12 Let u ∫be the solution to problem (6.17) with u0 = 0. If f ∈ t C k−1 (I; H γ (Ω)) and 0 (t − s)α−1  f (k) (s) H γ (Ω) ds < ∞, for any t ∈ I, then for any γ ≤ β < γ + 2 and k ≥ 0, there holds k−1  (k)  β−γ u (t)  β ≤ c t (1− 2 )α−j−1  f (k−j−1) (0) H γ (Ω) H (Ω) j=0



+

t

(t − s)(1−

β−γ 2 )α−1

0

 f (k) (s) H γ (Ω) ds.

∫t Similarly, if f ∈ C k (I; H γ (Ω)) and 0  f (k+1) (s) H γ (Ω) ds < ∞, for any t ∈ I, then for any β = γ + 2 and k ≥ 0, there holds ∫ k  (k)  −j (k−j) u (t)  γ+2 ≤ c t  f (0) + γ  H (Ω) H (Ω)

0

j=0

t

 f (k+1) (s) H γ (Ω) ds.

Proof In view of the representation (6.24), the solution u(t) is given by ∫ t ∫ t E(t − s) f (s)ds = E(s) f (t − s)ds. u(t) = 0

0

Differentiating the representation k times yields (with the convention summation with lower index greater than upper index being zero) u(k) (t) =

k−1 j=0

∫ E (j) (t) f (k−j−1) (0) +

t

E(s) f (k) (t − s)ds,

0

and thus for γ ≤ β < γ + 2, by Theorem 6.4, we obtain

204

6 Subdiffusion: Hilbert Space Theory

∫ k−1  (k)  (j) (k−j−1) u (t)  β ≤ E (t) f (0) H β (Ω) + H (Ω)

0

j=0



k−1

A

β−γ 2

∫ E (j) (t) f (k−j−1) (0) H γ (Ω) +

≤c

t

(1− β−γ 2 )α−j−1

t

∫ f

(k−j−1)

A

β−γ 2

(0) H γ (Ω) + c

j=0

t

s(1−

E(s) f (k) (t − s) H β (Ω) ds

E(s) f (k) (t − s) H γ (Ω) ds

0

j=0 k−1

t

β−γ 2 )α−1

0

 f (k) (t − s) H γ (Ω) ds.

This shows the first estimate. In the event $\beta=\gamma+2$, by the identity $AE(t)=\frac{\mathrm d}{\mathrm dt}(I-F(t))$ from Lemma 6.2 and integration by parts, we obtain
$$\begin{aligned}
Au^{(k)}(t)&=\sum_{j=0}^{k-1}AE^{(j)}(t)f^{(k-j-1)}(0)+\int_0^tAE(s)f^{(k)}(t-s)\,\mathrm ds\\
&=\sum_{j=0}^{k-1}AE^{(j)}(t)f^{(k-j-1)}(0)+\int_0^t\frac{\mathrm d}{\mathrm ds}(I-F(s))f^{(k)}(t-s)\,\mathrm ds\\
&=\sum_{j=0}^{k-1}AE^{(j)}(t)f^{(k-j-1)}(0)+(I-F(t))f^{(k)}(0)+\int_0^t(I-F(s))f^{(k+1)}(t-s)\,\mathrm ds,
\end{aligned}$$
since $I-F(0)=0$, cf. Lemma 6.3. Then by Theorem 6.4 and repeating the preceding argument, we obtain the second assertion. □

Remark 6.4 In Theorem 6.12, setting $k=0$ and $\gamma=0$ gives the estimate
$$\|u(t)\|_{H^2(\Omega)}\le c\Big(\|f(0)\|_{L^2(\Omega)}+\int_0^t\|f'(s)\|_{L^2(\Omega)}\,\mathrm ds\Big).$$
Thus, with $f\in L^\infty(I;L^2(\Omega))$ only, the solution $u$ generally does not belong to $L^\infty(0,T;H^2(\Omega))$. Indeed, if $u_0=0$ and $f\in L^\infty(I;H^\gamma(\Omega))$, $-1\le\gamma\le1$, then the solution $u$ belongs to $L^\infty(I;H^{\gamma+2-\epsilon}(\Omega))$ for any $0<\epsilon<1$, and
$$\|u(t)\|_{H^{\gamma+2-\epsilon}(\Omega)}\le c\epsilon^{-1}t^{\frac{\epsilon}{2}\alpha}\|f\|_{L^\infty(0,t;H^\gamma(\Omega))}.$$
Actually, by (6.24) and Theorem 6.4(ii),
$$\|u(t)\|_{H^{\gamma+2-\epsilon}(\Omega)}=\Big\|\int_0^tE(t-s)f(s)\,\mathrm ds\Big\|_{H^{\gamma+2-\epsilon}(\Omega)}\le\int_0^t\|E(t-s)f(s)\|_{H^{\gamma+2-\epsilon}(\Omega)}\,\mathrm ds\le c\int_0^t(t-s)^{\frac{\epsilon}{2}\alpha-1}\|f(s)\|_{H^\gamma(\Omega)}\,\mathrm ds\le c\epsilon^{-1}t^{\frac{\epsilon}{2}\alpha}\|f\|_{L^\infty(0,t;H^\gamma(\Omega))},$$
which shows the desired estimate. The $\epsilon^{-1}$ factor in the estimate reflects the limited smoothing property of the subdiffusion operator. The condition $f\in L^\infty(I;H^\gamma(\Omega))$ can be further weakened to $f\in L^r(I;H^\gamma(\Omega))$ with $r>\alpha^{-1}$. This follows from Theorem 6.4 and Hölder's inequality with the conjugate exponent $r'$:
$$\|u(t)\|_{H^\gamma(\Omega)}\le\int_0^t\|E(t-s)f(s)\|_{H^\gamma(\Omega)}\,\mathrm ds\le c\int_0^t(t-s)^{\alpha-1}\|f(s)\|_{H^\gamma(\Omega)}\,\mathrm ds\le c\,t^{\frac{1+r'(\alpha-1)}{r'}}\|f\|_{L^r(0,t;H^\gamma(\Omega))},$$
where $1+r'(\alpha-1)>0$ by the condition $r>\alpha^{-1}$. It follows from this that the initial condition $u(0)=0$ holds in a weak sense: $\lim_{t\to0^+}\|u(t)\|_{H^\gamma(\Omega)}=0$. Hence, for any $\alpha\in(\frac12,1)$ the representation formula (6.24) remains a legitimate solution for $f\in L^2(I;H^\gamma(\Omega))$. For a detailed treatise on fdes in Sobolev spaces, we refer to the work [GLY15] and the recent monograph [KRY20].

Last, we give a Hölder-in-time regularity estimate for problem (6.17). First, we give a lemma on the Hölder regularity of the convolution with $E(t)$. Hölder estimates for the subdiffusion model will be discussed in more detail in Chapter 7.

Lemma 6.4 For $f\in C^\theta(I;L^2(\Omega))$, the function $v(t)=\int_0^tE(t-s)(f(s)-f(t))\,\mathrm ds$ belongs to $C^\theta(I;L^2(\Omega))$ and
$$\|Av\|_{C^\theta(I;L^2(\Omega))}\le c\|f\|_{C^\theta(I;L^2(\Omega))}.$$


Proof Take $0\le t<t+\tau=:\tilde t\le T$. Then
$$\begin{aligned}
v(\tilde t)-v(t)&=\int_0^{\tilde t}E(\tilde t-s)(f(s)-f(\tilde t))\,\mathrm ds-\int_0^tE(t-s)(f(s)-f(t))\,\mathrm ds\\
&=\int_0^t(E(\tilde t-s)-E(t-s))(f(s)-f(t))\,\mathrm ds+\int_0^tE(\tilde t-s)(f(t)-f(\tilde t))\,\mathrm ds+\int_t^{\tilde t}E(\tilde t-s)(f(s)-f(\tilde t))\,\mathrm ds\\
&:=\mathrm I+\mathrm{II}+\mathrm{III}.
\end{aligned}$$
Now we bound the three terms separately. First, for the term $\mathrm I$, by Theorem 6.4(ii),
$$\begin{aligned}
\|A\mathrm I(t)\|_{L^2(\Omega)}&=\Big\|\int_0^t\int_{t-s}^{\tilde t-s}AE'(\zeta)(f(s)-f(t))\,\mathrm d\zeta\,\mathrm ds\Big\|_{L^2(\Omega)}\le\int_0^t\int_{t-s}^{\tilde t-s}\|AE'(\zeta)\|\,\mathrm d\zeta\,\|f(s)-f(t)\|_{L^2(\Omega)}\,\mathrm ds\\
&\le c\int_0^t\int_{t-s}^{\tilde t-s}\zeta^{-2}\,\mathrm d\zeta\,(t-s)^\theta\,\mathrm ds\,\|f\|_{C^\theta([0,t];L^2(\Omega))}\le c\tau\int_0^t(t-s)^{-1+\theta}(\tilde t-s)^{-1}\,\mathrm ds\,\|f\|_{C^\theta([0,t];L^2(\Omega))}.
\end{aligned}$$
Meanwhile, for $0<\theta<1$, using the identity [GR15, 3.194, p. 318]
$$\int_0^\infty\frac{\eta^{\theta-1}}{\eta+\tau}\,\mathrm d\eta=\frac{\pi\tau^{\theta-1}}{\sin\theta\pi}, \qquad (6.46)$$
we deduce
$$\int_0^t(\tilde t-s)^{-1}(t-s)^{-1+\theta}\,\mathrm ds=\int_0^t\frac{\eta^{\theta-1}}{\eta+\tau}\,\mathrm d\eta\le\int_0^\infty\frac{\eta^{\theta-1}}{\eta+\tau}\,\mathrm d\eta\le c\tau^{\theta-1},$$
and hence
$$\|A\mathrm I(t)\|_{L^2(\Omega)}\le c\tau^\theta\|f\|_{C^\theta(I;L^2(\Omega))}.$$
For the second term $\mathrm{II}$,
$$\|A\mathrm{II}(t)\|_{L^2(\Omega)}\le\Big\|\int_0^tAE(\tilde t-s)(f(t)-f(\tilde t))\,\mathrm ds\Big\|_{L^2(\Omega)}\le\Big\|\int_0^tAE(\tilde t-s)\,\mathrm ds\Big\|\,\|f(t)-f(\tilde t)\|_{L^2(\Omega)}.$$
Now by Lemma 6.2, $AE(t)=\frac{\mathrm d}{\mathrm dt}(I-F(t))$, and thus by Theorem 6.4,
$$\Big\|\int_0^tAE(\tilde t-s)\,\mathrm ds\Big\|=\|F(\tilde t-t)-F(\tilde t)\|\le c.$$
This and the Hölder continuity of $f$ imply $\|A\mathrm{II}(t)\|_{L^2(\Omega)}\le c\tau^\theta\|f\|_{C^\theta(I;L^2(\Omega))}$. Last, by the Cauchy-Schwarz inequality and Young's inequality, we deduce
$$\|A\mathrm{III}(t)\|_{L^2(\Omega)}\le\int_t^{\tilde t}\|AE(\tilde t-s)\|\,\|f(s)-f(\tilde t)\|_{L^2(\Omega)}\,\mathrm ds\le c\int_t^{\tilde t}(\tilde t-s)^{-1+\theta}\,\mathrm ds\,\|f\|_{C^\theta(I;L^2(\Omega))}\le c\tau^\theta\|f\|_{C^\theta(I;L^2(\Omega))}.$$
Combining the bounds on $\mathrm I$, $\mathrm{II}$ and $\mathrm{III}$ completes the proof of the lemma. □
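The Beta-type identity (6.46) is the only special integral used above; it can be verified numerically in one line. The values of $\theta$ and $\tau$ below are arbitrary sample choices.

```python
import numpy as np
from scipy.integrate import quad

# Numerical sanity check of (6.46):
#   int_0^inf eta^(theta-1) / (eta + tau) d eta = pi * tau^(theta-1) / sin(theta*pi)
theta, tau = 0.35, 0.2
val, _ = quad(lambda eta: eta ** (theta - 1.0) / (eta + tau), 0.0, np.inf)
exact = np.pi * tau ** (theta - 1.0) / np.sin(theta * np.pi)
print(val, exact)   # the two values agree to quadrature accuracy
```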


Theorem 6.13 Let $0<\alpha<1$, $u_0\in H^2(\Omega)$ and $f\in C^\theta(I;L^2(\Omega))$ with $\theta\in(0,1)$. Then for the solution $u$ given by (6.24), there holds for every $\delta>0$
$$\|u\|_{C^\theta([\delta,T];H^2(\Omega))}+\|\partial_t^\alpha u\|_{C^\theta([\delta,T];L^2(\Omega))}\le c\big(\delta^{-1}\|f\|_{C^\theta([\delta,T];L^2(\Omega))}+\|u_0\|_{H^2(\Omega)}\big),$$
$$\|u\|_{C(I;H^2(\Omega))}+\|\partial_t^\alpha u\|_{C(I;L^2(\Omega))}\le c\big(\|f\|_{C^\theta(I;L^2(\Omega))}+\|u_0\|_{H^2(\Omega)}\big).$$
Further, if $u_0=0$ and $f(0)=0$, then
$$\|Au\|_{C^\theta(I;L^2(\Omega))}+\|\partial_t^\alpha u\|_{C^\theta(I;L^2(\Omega))}\le c\|f\|_{C^\theta(I;L^2(\Omega))}.$$
Proof By the solution representation (6.24), we have
$$\partial_t^\alpha u(t)=-Au(t)+f(t)=-AF(t)u_0+f(t)-\int_0^tAE(t-s)f(s)\,\mathrm ds=-AF(t)u_0+f(t)-\int_0^tAE(t-s)f(t)\,\mathrm ds-\int_0^tAE(t-s)(f(s)-f(t))\,\mathrm ds.$$
We denote $\mathrm I(t)=-AF(t)u_0$, $\mathrm{II}(t)=f(t)-\int_0^tAE(t-s)f(t)\,\mathrm ds$ and $\mathrm{III}(t)=-\int_0^tAE(t-s)(f(s)-f(t))\,\mathrm ds$. It follows from Lemma 6.4 that $\|\mathrm{III}\|_{C^\theta(I;L^2(\Omega))}\le c\|f\|_{C^\theta(I;L^2(\Omega))}$, so it suffices to bound the first two terms. Let $\tilde t=t+\tau$ with $\tau>0$. For the term $\mathrm I(t)$, Theorem 6.4(i) implies
$$\|\mathrm I(\tilde t)-\mathrm I(t)\|_{L^2(\Omega)}=\|A(F(\tilde t)-F(t))u_0\|_{L^2(\Omega)}\le\|F(\tilde t)-F(t)\|\,\|Au_0\|_{L^2(\Omega)}\le ct^{-1}\tau\|Au_0\|_{L^2(\Omega)}\le c\delta^{-1}\tau^\theta\|Au_0\|_{L^2(\Omega)}.$$
To bound the term $\mathrm{II}$, using the identity $\frac{\mathrm d}{\mathrm dt}(I-F(t))=AE(t)$, cf. Lemma 6.2,
$$\int_0^tAE(t-s)\,\mathrm ds=\int_0^tAE(s)\,\mathrm ds=\int_0^t\frac{\mathrm d}{\mathrm ds}(I-F(s))\,\mathrm ds=I-F(t),$$
since $I-F(0)=0$, cf. Lemma 6.3. Consequently, $\mathrm{II}(t)=F(t)f(t)$ and
$$\|\mathrm{II}(\tilde t)-\mathrm{II}(t)\|_{L^2(\Omega)}\le\|F(\tilde t)-F(t)\|\,\|f(t)\|_{L^2(\Omega)}+\|F(\tilde t)\|\,\|f(\tilde t)-f(t)\|_{L^2(\Omega)}\le c(t^{-1}\tau+\tau^\theta)\|f\|_{C^\theta(I;L^2(\Omega))}\le c(\delta^{-1}+1)\tau^\theta\|f\|_{C^\theta(I;L^2(\Omega))}.$$
This proves the first estimate. The second assertion follows from Lemma 6.4 and the preceding discussion. Finally, we turn to the last assertion. It suffices to show $(F(\tilde t)-F(t))f(t)\in C^\theta(I;L^2(\Omega))$. Since $f(0)=0$, the assumption $f\in C^\theta(I;L^2(\Omega))$ implies $\|f(t)\|_{L^2(\Omega)}\le ct^\theta\|f\|_{C^\theta(I;L^2(\Omega))}$, and consequently,
$$\|(F(\tilde t)-F(t))f(t)\|_{L^2(\Omega)}\le ct^\theta\int_t^{\tilde t}s^{-1}\,\mathrm ds\,\|f\|_{C^\theta(I;L^2(\Omega))}.$$
Now a straightforward computation shows
$$t^\theta\int_t^{\tilde t}s^{-1}\,\mathrm ds\le\int_t^{\tilde t}t^\theta s^{-1}\,\mathrm ds\le\int_t^{\tilde t}s^{\theta-1}\,\mathrm ds=\frac{(t+\tau)^\theta-t^\theta}{\theta}\le c\theta^{-1}\tau^\theta.$$
This completes the proof of the theorem. □

Remark 6.5 The conditions $u_0=0$ and $f(0)=0$ form one sufficient compatibility condition for Hölder continuity up to $t=0$. Without a suitable compatibility condition, Hölder regularity with an arbitrary exponent $\theta\in(0,1)$ is generally false; see Example 6.2 for an illustration.


6.3 Linear Problems with Time-Dependent Coefficients

Now we consider subdiffusion with a time-dependent diffusion coefficient, following the works [JLZ19b, JLZ20b]; see also [KY18] for an alternative treatment via approximating the coefficients by smooth functions. Due to the time dependence of the elliptic operator, the separation of variables and Laplace transform techniques from Section 6.2 are no longer directly applicable, and the analysis requires different techniques. In this section, we describe a perturbation argument to handle this. Consider the following fractional-order parabolic problem:
$$\begin{cases}\partial_t^\alpha u-\nabla\cdot(a\nabla u)=f,&\text{in }Q,\\ u=0,&\text{on }\partial_LQ,\\ u(\cdot,0)=u_0,&\text{in }\Omega,\end{cases} \qquad (6.47)$$
where $f\in L^\infty(I;L^2(\Omega))$ and $u_0\in L^2(\Omega)$ are given source and initial data, respectively, and $a(x,t):Q\to\mathbb R^{d\times d}$ is a symmetric matrix-valued diffusion coefficient such that, for some constant $\lambda\in(0,1)$, an integer $K\ge2$ and all $i,j=1,\dots,d$,
$$\lambda|\xi|^2\le a(x,t)\xi\cdot\xi\le\lambda^{-1}|\xi|^2,\quad\forall\xi\in\mathbb R^d,\ \forall(x,t)\in Q, \qquad (6.48)$$
$$\Big|\tfrac{\partial^k}{\partial t^k}a_{ij}(x,t)\Big|+\Big|\nabla_x\tfrac{\partial^k}{\partial t^k}a_{ij}(x,t)\Big|\le c,\quad\forall(x,t)\in Q,\ k=0,\dots,K+1. \qquad (6.49)$$
To derive regularity estimates, it is convenient to define a time-dependent elliptic operator $A(t):H^2(\Omega)\to L^2(\Omega)$ by $A(t)\varphi=-\nabla\cdot(a(x,t)\nabla\varphi)$, for all $\varphi\in H^2(\Omega)$. Under condition (6.49) with $k=1$, direct computation gives
$$\|(A(t)-A(s))v\|_{L^2(\Omega)}\le c|t-s|\|v\|_{H^2(\Omega)}. \qquad (6.50)$$
Indeed, direct computation leads to
$$(A(t)-A(s))v(x)=-\big(\nabla\cdot(a(x,t)-a(x,s))\big)\cdot\nabla v(x)-(a(x,t)-a(x,s)):\nabla^2v(x),$$
where $:$ denotes the Frobenius inner product of matrices. This and the differentiability assumption in (6.49) imply the estimate (6.50).
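The Lipschitz-in-time bound (6.50) is easy to observe numerically in one space dimension. In the sketch below, `A_apply(t, w)` is a standard three-point finite-difference analogue of $-(a(x,t)w')'$ with homogeneous Dirichlet conditions; the coefficient $a(x,t)=2+\tfrac12\sin x\cos t$, the grid size and the test function are hypothetical choices made only for illustration.

```python
import numpy as np

# Discrete check of (6.50): || (A(t+dt) - A(t)) v || should scale like dt.
n = 400
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]
v = np.sin(x) * x * (np.pi - x)                  # smooth test function, zero at both ends

def a(t):
    return 2.0 + 0.5 * np.sin(x) * np.cos(t)     # smooth, uniformly elliptic coefficient

def A_apply(t, w):
    # -(a w')' by centered differences on interior nodes
    aw = a(t)
    flux = 0.5 * (aw[1:] + aw[:-1]) * np.diff(w) / h
    out = np.zeros_like(w)
    out[1:-1] = -(flux[1:] - flux[:-1]) / h
    return out

s = 0.3
for dt in [0.1, 0.05, 0.025, 0.0125]:
    diff = A_apply(s + dt, v) - A_apply(s, v)
    print(dt, np.sqrt(h) * np.linalg.norm(diff))  # halves as dt halves, i.e. O(|t-s|)
```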

First, we state a version of Gronwall's inequality for fractional odes.

Proposition 6.2 Let $X$ be a Banach space. For $\alpha\in(0,1)$ and $p\in(\frac1\alpha,\infty)$, if a function $u\in C(I;X)$ satisfies $\partial_t^\alpha u\in L^p(I;X)$, $u(0)=0$ and
$$\|\partial_t^\alpha u\|_{L^p(0,s;X)}\le\kappa\|u\|_{L^p(0,s;X)}+\sigma,\quad\forall s\in I, \qquad (6.51)$$
for some positive constants $\kappa$ and $\sigma$, then
$$\|u\|_{C(I;X)}+\|\partial_t^\alpha u\|_{L^p(I;X)}\le c\sigma, \qquad (6.52)$$
where the constant $c$ is independent of $\sigma$, $u$ and $X$, but depends on $\alpha$, $p$, $\kappa$ and $T$.
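The proof below rests on the reconstruction property $u={}_0I_t^\alpha\partial_t^\alpha u$ for $u(0)=0$ (cf. Proposition 4.6). A quick numerical sanity check of this identity, for the sample function $u(t)=t^2$ (whose Caputo derivative is $\tfrac{2}{\Gamma(3-\alpha)}t^{2-\alpha}$) and arbitrarily chosen $\alpha$ and $t$, is as follows.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# Check u(t) = (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) * d^alpha u(s) ds for u(t) = t^2.
alpha, t = 0.6, 1.3
du = lambda s: 2.0 / gamma(3.0 - alpha) * s ** (2.0 - alpha)   # Caputo derivative of s^2
val, _ = quad(lambda s: (t - s) ** (alpha - 1.0) * du(s), 0.0, t)
print(val / gamma(alpha), t ** 2)   # both values agree (here about 1.69)
```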


Proof Since u(0) = 0, the Riemann–Liouville and Djrbashian-Caputo fractional ∫t 1 derivatives coincide. Thus, by the fundamental theorem of calculus, u(t) = Γ(α) (t− 0 ξ)α−1 ∂ξα u(ξ) dξ. Since p > α1 , Hölder’s inequality implies ∫ u(t)X ≤ c

t

(t − ξ)

(α−1)p p−1

 p−1 p dξ

0

∂ξα u L p (0,t;X) ≤ c∂ξα u L p (0,t;X) .

Upon taking the supremum with respect to t ∈ (0, s) for any s ∈ I, we obtain u L ∞ (0,s;X) ≤ c∂ξα u L p (0,s;X) ≤ cκu L p (0,s;X) + cσ ≤  κu L ∞ (0,s;X) + c κu L 1 (0,s;X) + cσ, where  > 0 can be arbitrary. By choosing  =

1 2κ ,

we have

u L ∞ (0,s;X) ≤ cκ u L 1 (0,s;X) + cσ, That is,

∫ u(s)X ≤ cκ

s

u(ξ)X dξ + cσ,

∀s ∈ I. for s ∈ I.

0

Now the standard Gronwall’s inequality yields sups ∈I u(s)X ≤ ecκ T cσ. Substituting it into (6.51) yields (6.52), completing the proof.  Now we can give the existence, uniqueness and regularity of solutions to problem (6.47) with u0 = 0. Theorem 6.14 Under conditions (6.48)–(6.49), if u0 = 0 and f ∈ L p (I; L 2 (Ω)), with α1 < p < ∞, then problem (6.47) has a unique solution u ∈ C(I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)) such that ∂tα u ∈ L p (I; L 2 (Ω)). Proof For any θ ∈ [0, 1], consider the following subdiffusion problem ∂tα u(t) + A(θt)u(t) = f (t),

t ∈ I,

with u(0) = 0,

(6.53)

p α 2 and define a set  D = {θ ∈ [0, 1] : (6.53) has a solution u ∈ L (I; H (Ω)) and ∂t u ∈ p 2 L (I; L (Ω)) . Theorem 6.11 implies 0 ∈ D and so D  ∅. For any θ ∈ D, by rewriting (6.53) as

∂tα u(t) + A(θt0 )u(t) = f (t) + (A(θt0 ) − A(θt))u(t),

t ∈ I,

(6.54)

with u(0) = 0, and by applying Theorem 6.11 in the time interval (0, t0 ), we obtain ∂tα u L p (0,t0 ;L 2 (Ω)) + u L p (0,t0 ;H 2 (Ω)) ≤c f  L p (0,t0 ;L 2 (Ω)) + c(A(θt0 ) − A(θt))u(t) L p (0,t0 ;L 2 (Ω)) ≤c f  L p (0,t0 ;L 2 (Ω)) + c(t0 − t)u(t) L p (0,t0 ;H 2 (Ω)),

(6.55)


where the last line follows from (6.50). Let g(t) = u L p (0,t;H 2 (Ω)) , which satisfies p

g (t) = u(t) H 2 (Ω) . Then (6.55) and integration by parts imply g(t0 ) ≤ c f = c f

p  L p (0,t ;L 2 (Ω)) 0 p  L p (0,t ;L 2 (Ω)) 0 p

≤ c f  L p (0,t

0



t0

+c 0



+ cp ∫ +c ;L 2 (Ω))

(t0 − t) p g (t)dt

t0

0 t0

(t0 − t) p−1 g(t)dt

g(t)dt,

0

which in turn implies (via the standard Gronwall’s inequality in Theorem 4.1) g(t0 ) ≤ p c f  L p (0,t ;L 2 (Ω)), i.e., u L p (0,t0 ;H 2 (Ω)) ≤ c f  L p (0,t0 ;L 2 (Ω)) . Substituting the last 0 inequality into (6.55) yields ∂tα u L p (0,t0 ;L 2 (Ω)) + u L p (0,t0 ;H 2 (Ω)) ≤ c f  L p (0,t0 ;L 2 (Ω)) .

(6.56)

Since the estimate (6.56) is independent of θ ∈ D, D is a closed subset of [0, 1]. Now we show that D is also open with respect to the subset topology of [0, 1]. In fact, if θ 0 ∈ D, then problem (6.53) can be rewritten as ∂tα u(t) + A(θ 0 t)u(t) + (A(θt) − A(θ 0 t))u(t) = f (t),

t ∈ I,

with u(0) = 0,

which is equivalent to   1 + (∂tα + A(θ 0 t))−1 (A(θt) − A(θ 0 t)) u(t) = (∂tα + A(θ 0 t))−1 f (t). It follows from (6.56) that the operator (∂tα + A(θ 0 t))−1 (A(θt) − A(θ 0 t)) is small in the sense that (∂tα + A(θ 0 t))−1 (A(θt) − A(θ 0 t)) L p (I;H 2 (Ω))→L p (I;H 2 (Ω)) ≤ c|θ − θ 0 |. Thus, for θ sufficiently close to θ 0 , the operator 1 + (∂tα + A(θ 0 t))−1 (A(θt) − A(θ 0 t)) is invertible on L p (I; H 2 (Ω)), which implies θ ∈ D. Thus, D is open with respect to the subset topology of [0, 1]. Since D is both closed and open respect to the subset topology of [0, 1], D = [0, 1]. Further, note that for α1 < p < ∞, the inequality (6.56) and the condition u(0) = 0 directly imply u ∈ C(I; L 2 (Ω)), by Proposition 6.2, which completes the proof of the theorem.  Now we provide some regularity results in H s (Ω) on solutions to problem (6.47). The overall analysis strategy is to employ a perturbation argument and then to properly resolve the singularity. Specifically, for any fixed t∗ ∈ I, we rewrite problem (6.47) into (with the shorthand A∗ = A(t∗ ))

∂tα u(t) + A∗ u(t) = (A∗ − A(t))u(t) + f (t), ∀t ∈ I, (6.57) u(0) = u0 .


By the representation (6.24), the solution u(t) of problem (6.57) is given by ∫ t E∗ (t − s)( f (s) + (A∗ − A(s))u(s))ds, (6.58) u(t) = F∗ (t)u0 + 0

with the operators F∗ (t) and E∗ (t) defined by ∫ ∫ 1 1 F∗ (t) = ezt z α−1 (z α + A∗ )−1 dz and E∗ (t) = ezt (z α + A∗ )−1 dz, 2πi Γθ, δ 2πi Γθ, δ respectively, with integrals over the contour Γθ,δ defined in (6.27). The objective is k to estimate the kth temporal derivative u(k) (t) := dtd k u(t) in H β (Ω) for β ∈ [0, 2] using (6.58). However, direct differentiation of u(t) in (6.58) with respect to t leads to strong singularity that precludes the use of Gronwall’s inequality in Theorem 4.2 directly, in order to handle the perturbation term. To overcome the difficulty, we instead estimate (t k+1 u(t))(k)  H β (Ω) using the expansion of t k+1 = [(t − s) + s]k+1 in the following expression: ∫ t E∗ (t − s) f (s)ds t k+1 u(t) = t k+1 F∗ (t)u0 + t k+1 +
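On a single spectral mode of $A_*$ (an eigenvalue $\lambda>0$), the kernel of $E_*(t)$ reduces to a scalar contour integral, which can be evaluated numerically and compared against the known Laplace-transform pair $t^{\alpha-1}E_{\alpha,\alpha}(-\lambda t^\alpha)$. The sketch below uses assumed contour parameters $\theta=\tfrac34\pi$ and $\delta=1$ (not the specific choice made in (6.27)), and a naive truncated series for the Mittag-Leffler function that is adequate only for moderate arguments.

```python
import numpy as np
from math import gamma

# Scalar analogue of E_*(t): for an eigenvalue lam of A_*,
#   e_lam(t) = (1/(2*pi*i)) * int_Gamma exp(z*t) / (z^alpha + lam) dz
# should equal t^(alpha-1) * E_{alpha,alpha}(-lam * t^alpha).
alpha, lam, t = 0.6, 5.0, 0.8
theta, delta = 0.75 * np.pi, 1.0                 # assumed contour angle and radius

def kernel_contour(t, n=2000, R=60.0):
    r = np.linspace(delta, R, n)
    phi = np.linspace(-theta, theta, n)
    f = lambda z: np.exp(z * t) / (z ** alpha + lam)
    up = np.trapz(f(r * np.exp(1j * theta)) * np.exp(1j * theta), r)      # outgoing upper ray
    dn = np.trapz(f(r * np.exp(-1j * theta)) * np.exp(-1j * theta), r)    # lower ray (traversed inward)
    arc = np.trapz(f(delta * np.exp(1j * phi)) * 1j * delta * np.exp(1j * phi), phi)
    return (up - dn + arc) / (2j * np.pi)

def ml_series(a, b, x, terms=80):
    return sum(x ** k / gamma(a * k + b) for k in range(terms))

print(kernel_contour(t).real, t ** (alpha - 1) * ml_series(alpha, alpha, -lam * t ** alpha))
```

The two printed numbers agree to several digits, which is the scalar content of the smoothing estimates for $E_*(t)$ used repeatedly below.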

k+1 m=0



0

C(k + 1, m) 0

t

(t − s)m E∗ (t − s)(A∗ − A(s))s k+1−m u(s)ds, (6.59)

where C(k + 1, m) denotes binomial coefficients. One crucial part in the proof is to bound kth-order time derivatives of the summands in (6.59). In the analysis, the following perturbation estimate will be used extensively. It implies that  A(s)−1 A(t) ≤ c for any s, t ∈ I. Lemma 6.5 Under conditions (6.48)–(6.49), for any β ∈ [ 12 , 1], there holds  Aβ (I − A(t)−1 A(s))v L 2 (Ω) ≤ c|t − s| Aβ v L 2 (Ω),

∀Aβ v ∈ L 2 (Ω).

Proof For any given v ∈ H01 (Ω), let ϕ = A(s)v and w = A(t)−1 ϕ. Then (A(t)w, χ) = (ϕ, χ) = (A(s)v, χ),

∀ χ ∈ H01 (Ω),

(a(·, t)∇w, ∇ χ) = (a(·, s)∇v, ∇ χ),

∀ χ ∈ H01 (Ω).

which implies

Consequently, (a(·, t)∇(w − v), ∇ χ) = ((a(·, s) − a(·, t))∇v, ∇ χ),

∀ χ ∈ H01 (Ω).

Let φ = w − v ∈ H01 (Ω) be the weak solution of the elliptic problem


(a(·, t)∇φ, ∇ξ) = ((a(·, s) − a(·, t))∇v, ∇ξ),

∀ξ ∈ H01 (Ω).

(6.60)

By Lax-Milgram theorem, φ satisfies the following a priori estimate: φ H 1 (Ω) ≤ c(a(·, s) − a(·, t))∇v L 2 (Ω) ≤ c|t − s|v H 1 (Ω) . Since the operator A(t) is self-adjoint, the preceding estimate together with a duality argument yields (I − A(s)A(t)−1 )v L 2 (Ω) ≤ c|t − s|v L 2 (Ω),

∀v ∈ H 2 (Ω).

Consequently, (A(t) − A(s))v L 2 (Ω) ≤ (I − A(s)A(t)−1 )A(t)v L 2 (Ω) ≤ c|t − s| A(t)v L 2 (Ω) . Further, the interpolation between β = 12 , 1 yields  Aβ (t)(I − A(t)−1 A(s))v L 2 (Ω) ≤ c|t − s| Aβ (t)v L 2 (Ω) . 

This completes the proof of the lemma. Now we can give the regularity for the homogeneous problem.

Theorem 6.15 If a(x, t) satisfies (6.48)–(6.49), u0 ∈ H γ (Ω) with γ ∈ [0, 2] and f ≡ 0, then for all t ∈ I and k = 0, . . . , K, the solution u(t) to problem (6.47) satisfies  β dk  β−γ γ  2  k (t u(t)) A  2 ≤ ct − 2 α  A 2 u0  L 2 (Ω), L (Ω) dt k

∀ β ∈ [γ, 2].

Proof When k = 0, setting f = 0 and t = t∗ in (6.58) yields ∫ t∗ β β β A∗2 E∗ (t∗ − s)(A∗ − A(s))u(s)ds, A∗2 u(t∗ ) = A∗2 F∗ (t∗ )u0 + 0

where β ∈ [γ, 2]. By Theorem 6.4 and Lemma 6.5, β

β−γ

γ

 A∗2 u(t∗ ) L 2 (Ω) ≤  A∗ 2 F∗ (t∗ )A∗2 u0  L 2 (Ω) ∫ t∗ β +  A∗ E∗ (t∗ − s) A∗2 (I − A−1 ∗ A(s))u(s) L 2 (Ω) ds 0 ∫ t∗ γ β −(β−γ)α  A∗2 u0  L 2 (Ω) + c (t∗ − s) A∗ E∗ (t∗ − s) A∗2 u(s) L 2 (Ω) ds ≤ ct∗ 0 ∫ t∗ β − β−γ α  A∗2 u(s) L 2 (Ω) ds. ≤ ct∗ 2 u0  H γ (Ω) + c 0

This and Gronwall’s inequality in Theorem 4.2 yield β

 A∗2 u(t∗ ) L 2 (Ω) ≤ c(1 −

β−γ β−γ −1 − 2 α u0  H γ (Ω) . 2 α) t∗


In particular, we have β+γ

− β−γ 4 α

 A∗ 4 u(t∗ ) L 2 (Ω) ≤ ct∗

u0  H γ (Ω),

with c being bounded as α → 1− . This estimate, Theorem 6.4(ii) and Lemma 6.5 then imply β

β−γ

γ

 A∗2 u(t∗ ) L 2 (Ω) ≤  A∗ 2 F∗ (t∗ )A∗2 u0  L 2 (Ω) ∫ t∗ β−γ β+γ +  A∗ 4 A∗ E∗ (t∗ − s) A∗ 4 (I − A−1 ∗ A(s))u(s) L 2 (Ω) ds 0 ∫ t∗ γ β−γ β+γ − β−γ α (t∗ − s) A∗ 4 A∗ E∗ (t∗ − s) A∗ 4 u(s) L 2 (Ω) ds ≤ct∗ 2  A∗2 u0  L 2 (Ω) + c 0 ∫ t∗ β−γ β−γ β−γ − 2 α − β−γ α + (t∗ − s)− 4 α s− 4 α ds u0  H γ (Ω) ≤ ct∗ 2 u0  H γ (Ω) . ≤c t∗ 0

Equivalently, we have β

1− β−γ 2 α

 A∗2 t∗ u(t∗ ) L 2 (Ω) ≤ ct∗

u0  H γ (Ω),

where c is bounded as α → 1− . This proves the assertion for k = 0. Next, we prove the case 1 ≤ k ≤ K using mathematical induction. Suppose that the assertion holds up to k − 1 < K, and we prove it for k ≤ K. Indeed, by Lemma 6.6 below,   β dk ∫ t   2 (t − s)m E∗ (t − s)(A∗ − A(s))s k+1−m u(s)ds|t=t∗  2 A∗ k L (Ω) dt 0 ∫ t∗ β−γ β − α+1 ≤ct∗ 2 u0  H γ (Ω) + c  A∗2 (s k+1 u(s))(k)  L 2 (Ω) ds, 0

where m = 0, 1, . . . , k + 1. Meanwhile, the estimates in Theorem 6.4 imply  β  k+1    A∗2 t F∗ (t)u0 (k)  β

By applying A∗2

dk dt k

L 2 (Ω)

≤ ct −

β−γ 2 α+1

u0  H γ (Ω) .

to (6.59) and using the last two estimates, we obtain

  β k+1  A∗2 (t u(t))(k) |t=t  ∗

L 2 (Ω)

− β−γ 2 α+1

≤ct∗



+c 0

t∗

u0  H γ (Ω) β

 A∗2 (s k+1 u(s))(k)  L 2 (Ω) ds.

Last, applying Gronwall’s inequality in Theorem 4.2 completes the induction step. In the proof of Theorem 6.15, we have used the following result. Lemma 6.6 Under the conditions of Theorem 6.15, for m = 0, . . . , k + 1, there holds


 β dk ∫ t   2  (t − s)m E∗ (t − s)(A∗ − A(s))s k+1−m u(s)ds|t=t∗  2 A∗ k L (Ω) dt 0 ∫ t∗  β  − β−γ α+1   ≤ct∗ 2 u0  H γ (Ω) + c A∗2 (s k+1 u(s))(k)  2 ds. L (Ω)

0

Proof Denote the integral on the left-hand side by Im (t), and let vm = t m u(t) and Wm (t) = t m E∗ (t). Direct computation using product rule and changing variables gives that for any 0 ≤ m ≤ k, there holds ∫ t dk−m (m) Wm (t − s)(A∗ − A(s))vk−m+1 (s)ds I(k) m (t) = dt k−m 0 ∫ t dk−m (m) = k−m Wm (s)(A∗ − A(t − s))vk−m+1 (t − s)ds dt 0 ∫ t  dk−m  (m) Wm (s) k−m (A∗ − A(t − s))vk−m+1 (t − s) ds = dt 0 ∫ t k−m (m) ( ) C(k − m, ) Wm (s)(A∗ − A(t − s))(k−m− ) vk−m+1 (t − s)ds . = 0 =0  I m,  (t)

Next, we bound the integrand (m) ( ) Im, (s) := Wm (A∗ − A(t∗ − s))(k−m− ) vk−m+1 (t∗ − s)

of the integral Im, (t∗ ). We shall distinguish between β ∈ [γ, 2) and β = 2. First, we analyze the case β ∈ [γ, 2). When  < k, by Theorem 6.4(ii) and 6.5 and the induction hypothesis, we bound the integrand Im, (s) by β

β

(m) ( )  A∗2 Im, (s) L 2 (Ω) ≤  A∗2 Wm (s)(A∗ − A(t∗ − s))(k−m− ) vk−m+1 (t∗ − s) L 2 (Ω) β ⎧ cs(1− 2 )α−1 s A v (k−m) (t − s) 2 ,  = k − m, ⎪ ∗ k−m+1 ∗ ⎨ ⎪ L (Ω)  ≤ β  ( ) ⎪ ⎪ cs(1− 2 )α−1  A∗ vk−m+1 (t∗ − s) 2 ,  < k − m, ⎩ L (Ω) γ β γ ⎧ ⎪ ⎨ cs(1− 2 )α (t∗ − s)1−(1− 2 )α  A∗2 u0  L 2 (Ω), ⎪  = k − m, ≤ γ β γ ⎪ ⎪ cs(1− 2 )α−1 (t∗ − s)k−m− +1−(1− 2 )α  A∗2 u0  L 2 (Ω),  < k − m. ⎩

Similarly for the case  = k (and thus m = 0), there holds β

β

(k)  A∗2 I0,k (s) L 2 (Ω) ≤  A∗ E∗ (s) A∗2 (I − A−1 ∗ A(t∗ − s))vk+1  L 2 (Ω) β

(k) ≤ c A∗2 vk+1 (t∗ − s) L 2 (Ω) .

Thus, for 0 ≤ m ≤ k and  = k − m, upon integrating from 0 to t∗ , we obtain

6.3 Linear Problems with Time-Dependent Coefficients β 2

 A∗ I(k) m (t∗ ) L 2 (Ω)

γ 2+ γ−β α ct∗ 2  A∗2 u0  L 2 (Ω)




∫ +c

t∗

0

β

(k)  A∗2 vk+1 (s) L 2 (Ω) ds,

and similarly for 0 ≤ m ≤ k and  < k − m, β

1+ γ−β α

γ

β −1 2  A∗2 I(k)  A∗2 u0  L 2 (Ω) m (t∗ ) L 2 (Ω) ≤c((1 − 2 )α) t∗ ∫ t∗ β (k) +c  A∗2 vk+1 (s) L 2 (Ω) ds. 0

Meanwhile, for m = k + 1, we have ∫ t∗ β β γ γ (k) 2 +1− 2 2 A∗2 I(k) (t ) = A W (t − s)A (I − A−1 ∗ ∗ ∗ ∗ ∗ A(s))u(s)ds, k+1 k+1 0

and consequently, by Theorem 6.4(ii) and Lemma 6.5 and the induction hypothesis, ∫ t∗ β β γ +1− γ2 (k) (t ) ≤  A∗2 Wk+1 (t∗ − s) A∗2 (I − A−1  A∗2 I(k) 2 ∗ L (Ω) ∗ A(s))u(s) L 2 (Ω) ds k+1 0 ∫ t∗ γ β−γ (t∗ − s)1− 2 α  A∗2 u(s) L 2 (Ω) ds ≤c 0



γ 2+ γ−β α ct∗ 2  A∗2 u0  L 2 (Ω) .

In the case 0 ≤ m ≤ k and  < k − m, the last estimates require β ∈ [0, 2). When 0 ≤ m ≤ k,  < k − m and β = 2, we apply Lemma 6.2 and rewrite A∗ Im, (t∗ ) as ∫ t∗ ( ) A∗ Im, (t∗ ) = (s m (I − F∗ (s)) )(m) (A∗ − A(t∗ − s))(k−m− ) vk−m+1 (t∗ − s)ds. 0

Then integration by parts and product rule yield ∫ t∗ ( ) D(s)(A∗ − A(t∗ − s))(k−m− +1) vk−m+1 (t∗ − s)ds A∗ Im, (t∗ ) = − 0 ∫ t∗ ( +1) D(s)(A∗ − A(t∗ − s))(k−m− ) vk−m+1 (t∗ − s)ds − 0

( ) (t∗ ), − D(0)(A∗ − A(t∗ − s))(k−m− ) |s=0 vk−m+1

with D(s) =

I − F∗ (s), m

(m−1)

(s (I − F∗ (s)) )

,

(6.61)

m = 0, m > 0.

By Theorem 6.4(i) and (iii), D(s) ≤ c, and thus the preceding argument with Theorem 6.4 and Lemma 6.5 and the induction hypothesis allows bounding the Im, (s) of (6.61) by integrand A∗ 


6 Subdiffusion: Hilbert Space Theory γ

 A∗Im, (s) L 2 (Ω) ≤ c(t∗ − s)k− −(1− 2 )α u0  H γ (Ω)

(k) (t∗ − s) L 2 (Ω),  = k − 1, c A∗ vk+1 + k−1− −(1− γ2 )α c(t∗ − s) u0  H γ (Ω),  < k − 1, where for  = k − 1, we have m = 0 and hence D(0) = 0. Combining the last estimates and then integrating from 0 to t∗ in s, we obtain the desired assertion.  Next, we analyze the case f  0. Now we consider (t k u(t))(k) , instead (k) for the proof of Theorem 6.15. We begin with a bound on of (t k+1 ∫ u(t)) dk k t (t 0 E(t − s; t∗ ) f (s)ds), which follows from straightforward but lengthy comdt k putation. Lemma 6.7 Let k ≥ 1. Then for any β ∈ [0, 2), there holds  β dk ∫ t   2  E∗ (t − s) f (s)ds |t=t∗  2 A∗ k t k L (Ω) dt 0 ∫ k−1 t∗ β (1− β )α+m ≤c t∗ 2  f (m) (0) L 2 (Ω) + ct∗k (t∗ − s)(1− 2 )α−1  f (k) (s) L 2 (Ω) ds, 0

m=0

and further,  dk ∫ t    E∗ (t − s) f (s)ds |t=t∗  2 A∗ k t k L (Ω) dt 0 ∫ t∗ k ≤c t∗m  f (m) (0) L 2 (Ω) + ct∗k  f (k+1) (s) L 2 (Ω) ds. 0

m=0

Proof Let I(t) = dm dt m

∫ 0

t

dk dt k

(t k

∫t 0

E∗ (t − s) f (s)ds). It follows from the elementary identity

E∗ (s) f (t − s)ds =

m−1

∫ E∗( ) (t) f (m−1− ) (0) +

0

=0

t

E∗ (s) f (m) (t − s)ds

and direct computation that I(t) =

k m=0

C(k, m)2 t m

m−1

∫ E∗( ) (t) f (m−1− ) (0) +

=0

Consequently, by Theorem 6.4, for β ∈ [0, 2),

0

t

E∗ (s) f (m) (t − s)ds .

6.3 Linear Problems with Time-Dependent Coefficients β 2

k



 A∗ I(t ) L 2 (Ω) ≤ c +c

k m=0

≤c

k m=0

t∗m

+c

t∗m

≤c

t∗m

m=0

t∗m

β

 A∗2 E∗ (s) f (m) (t∗ − s) L 2 (Ω) ds





t∗

0

(1− β2 )α+m

t∗

0

β

 A∗2 E∗( ) (t∗ ) f (m−1− ) (0) L 2 (Ω)

=0

(1− β2 )α−1−

t∗

k

m−1

t∗

=0

k

m=0

+c

t∗

0

m−1

m=0 k−1

m=0



t∗m


 f (m−1− ) (0) L 2 (Ω)

β

s(1− 2 )α−1  f (m) (t∗ − s) L 2 (Ω) ds

 f (m) (0) L 2 (Ω) β

s(1− 2 )α−1  f (m) (t∗ − s) L 2 (Ω) ds.

Next, we simplify the second summation. The following identity: for m < k, f (m) (s) =

k−m−1

f (m+j) (0)

j=0

1 sj + j! (k − m)!

and Theorem 2.1 imply ∫ ∫ t∗ (1− β2 )α−1 (m) (t∗ − s)  f (s) L 2 (Ω) ds ≤ 0

×

 f (m+j) (0) L 2 (Ω)

k−m−1 j=0

(1− β2 )α+j

t∗

sj j!

+

1 (k − m)!

t∗

(s − ξ)k−m−1 f (k) (ξ)dξ,

(6.62)

β

(t∗ − s)(1− 2 )α−1



 f (m+j) (0) L 2 (Ω) + ct∗k−m

s

0

0

k−m−1 j=0

≤c



s

0



(s − ξ)k−m−1  f (k) (ξ) L 2 (Ω) dξ ds

t∗

0

β

(t∗ − s)(1− 2 )α−1  f (k) (s) L 2 (Ω) ds.

Combining these estimates gives the desired assertion for β ∈ [0, 2). For β = 2, by Lemma 6.2 and integration by parts (and the identity I − F∗ (0) = 0), ∫ t∗ ∫ t∗ (m) E∗ (s) f (t∗ − s)ds = (I − F∗ (s)) f (m) (t∗ − s)ds A∗ 0 0 ∫ t∗ (m) (I − F∗ (s)) f (m+1) (t∗ − s)ds, = (I − F∗ (t∗ )) f (0) + 0

and thus


 A∗ I(t∗ ) L 2 (Ω) ≤c

k m=0

t∗m

m−1

 A∗ E∗( ) (t∗ ) f (m−1− ) (0) L 2 (Ω)

=0



+  f (m) (0) + 0

t∗

 f (m+1) (s) L 2 (Ω) ds . 

Then repeating the preceding argument completes the proof. Now we can present the regularity result for the inhomogeneous problem.

Theorem 6.16 If a(x, t) satisfies (6.48)–(6.49), u0 ≡ 0, then for all t ∈ I and k = 0, . . . , K, the solution u(t) to problem (6.47) satisfies for any β ∈ [0, 2) (t k u(t))(k)  H β (Ω) ≤c

k−1

β

t (1− 2 )α+j  f (j) (0) L 2 (Ω)

j=0

+ ct k

∫ 0

t

β

(t − s)(1− 2 )α−1  f (k) (s) L 2 (Ω) ds,

and similarly for β = 2, (t k u(t))(k)  H β (Ω) ≤ c

k

t j  f (j) (0) L 2 (Ω) + ct k

j=0

∫ 0

t

 f (k+1) (s) L 2 (Ω) ds.

Proof Similar to Theorem 6.15, the proof is based on mathematical induction. Let vk (t) = t k u(t) and Wk (t) = t k E∗ (t). For k = 0, by the representation (6.58), we have ∫ t∗ β ∫ t∗ β β A∗2 E∗ (t∗ − s) f (s)ds + A∗2 E∗ (t∗ − s)(A∗ − A(s))u(s)ds. A∗2 u(t∗ ) = 0

0

Then for β ∈ [0, 2), by Theorem 6.4(ii) and Lemma 6.5 there holds ∫ t∗ β β 2  A∗2 E∗ (t∗ − s) f (s) L 2 (Ω) ds  A∗ u(t∗ ) L 2 (Ω) ≤ 0 ∫ t∗ β  A∗ E∗ (t∗ − s) A∗2 (I − A−1 + ∗ A(s))u(s) L 2 (Ω) ds 0 ∫ t∗ ∫ t∗ β β (t∗ − s)(1− 2 )α−1  f (s) L 2 (Ω) ds + c  A∗2 u(s) L 2 (Ω) ds. ≤c 0

0

The case β = 2 follows similarly from Lemma 6.2 and integration by parts: ∫ t∗ ∫ t∗  f (s) L 2 (Ω) ds + c  A∗ u(s) L 2 (Ω) ds.  A∗ u(t∗ ) L 2 (Ω) ≤ c f (0) L 2 (Ω) + c 0

0

Then Gronwall’s inequality in Theorem 4.1 gives the assertion for the case k = 0. Now suppose it holds up to k − 1 < K, and we prove it for k ≤ K. Now note that


vk(k) (t)


∫ dk k t = k t E∗ (t − s) f (s)ds dt 0 ∫ t k dk C(k, m) k Wm (t − s)(A∗ − A(s))vk−m (s)ds. + dt 0 m=0

This, Lemmas 6.7 and 6.8 and triangle inequality give for β ∈ [0, 2) ∫ t∗ β β 2 (k)  A∗ vk (t)|t=t∗  L 2 (Ω) ≤ c  A∗2 vk(k) (s) L 2 (Ω) ds 0

+c

k−1

(1− β2 )α+m

t∗

m=0

 f (m) (0) L 2 (Ω) + ct∗k

∫ 0

t∗

β

(t∗ − s)(1− 2 )α−1  f (k) (s) L 2 (Ω) ds,

and similarly, for β = 2, ∫  A∗ vk(k) (t)|t=t∗  L 2 (Ω) ≤ c

t∗

0

+ ct∗k

 A∗ vk(k) (s) L 2 (Ω) ds + c ∫

t∗

0

k m=0

t∗m  f (m) (0) L 2 (Ω)

 f (k+1) (s) L 2 (Ω) ds.

This and Gronwall’s inequality from Theorem 4.1 complete the induction step.



The following result is needed in the proof of Theorem 6.16. Lemma 6.8 Under the conditions in Theorem 6.16, for any β ∈ [0, 2] and m = 0, . . . , k, there holds for any β ∈ [0, 2),  β dk ∫ t   2  (t − s)k−m E∗ (t − s)(A∗ − A(s))s m u(s)ds|t=t∗  2 A∗ k L (Ω) dt 0 ∫ t∗ k−1 β β (1− )α+m ≤c  A∗2 (s k u(s))(k)  L 2 (Ω) ds + c t∗ 2  f (m) (0) L 2 (Ω) 0

+ ct∗k

m=0



t∗

0

(t∗ − s)

(1− β2 )α−1

 f (k) (s) L 2 (Ω) ds,

and for β = 2,  dk ∫ t    (t − s)k−m E∗ (t − s)(A∗ − A(s))s m u(s)ds|t=t∗  2 A∗ k L (Ω) dt 0 ∫ t∗ k ≤c  A∗ (s k u(s))(k)  L 2 (Ω) ds + c t∗m  f (m) (0) L 2 (Ω) 0

+ ct∗k

∫ 0

m=0

t∗

 f (k+1) (s) L 2 (Ω) ds.


Proof Let vk = t k u(t) and Wk (t) = t k E∗ (t). By the induction hypothesis and (6.62), for  < m, we have ( )  A∗ vm (s) L 2 (Ω)

≤ cs

m−



j

s f

(j)

(0) L 2 (Ω) + s



∫ 0

j=0

≤ cs m−

k−1

s j  f (j) (0) L 2 (Ω) + s k−1

j=0

∫ 0

s

s

 f ( +1) (ξ) L 2 (Ω) dξ

 f (k) (ξ) L 2 (Ω) dξ .



(6.63)

We denote the term in the bracket by T(s; f , k). Now similar to the proof of Lemma 6.6, let Im (t) be the integral on the left-hand side. Then in view of the identity I(k) m (t) =

k−m

∫ C(k − m, )

t

(m) ( ) Wm (A∗ − A(t∗ − s))(k−m− ) vk−m (s)ds,  0

=0

I m,  (t)

it suffices to bound the integrand Im, (s) of the integral Im, (t∗ ),  = 0, 1, . . . , k − m. Below we discuss the cases β ∈ [0, 2) and β = 2 separately, due to the difference in singularity, as in the proof of Lemma 6.6. Case (i): β ∈ [0, 2). For the case  < k, Theorem 6.4(ii) and Lemma 6.5 lead to β

β

(m) ( ) (A∗ − A(t∗ − s))(k−m− ) vk−m (t∗ − s) L 2 (Ω)  A∗2 Im, (s) L 2 (Ω) ≤  A∗2 Wm β ⎧ ⎪ cs(1− 2 )α−1 s A v (k−m) (t − s) 2 ,  = k − m, ⎨ ⎪ ∗ k−m ∗ L (Ω) ≤ β ( ) ⎪ (1− )α−1 ⎪ cs 2  A∗ vk−m (t∗ − s) L 2 (Ω),  < k − m, ⎩ β ⎧ ⎪ ⎨ ⎪ cs(1− 2 )α T(t∗ − s; f , k),  = k − m, ≤ β ⎪ ⎪ cs(1− 2 )α−1 (t∗ − s)k−m− T(t∗ − s; f , k),  < k − m, ⎩

where the last step is due to (6.63). Note that for  < k, the derivation requires β ∈ [0, 2). Similarly, for the case  = k (and thus m = 0), β

β

 A∗2 I0,k (s) L 2 (Ω) ≤ c A∗2 vk(k) (t∗ − s) L 2 (Ω) .

(6.64)

Case (ii): β = 2. Note that for  < k, the derivation in case (i) requires β ∈ [0, 2). When  < k and β = 2, using the identity (6.61) and Theorem 6.4 and repeating the argument of Lemma 6.6, we obtain

c(t∗ − s)k−m− −1 T(t∗ − s; f , k),  < k − 1,   A∗ Im, (s) L 2 (Ω) ≤ cT(t∗ − s; f , k) + c A∗ vk(k) (t∗ − s) L 2 (Ω),  = k − 1, Combining the preceding estimates, integrating from 0 to t∗ in s and then applying Gronwall’s inequality from Theorem 4.2 complete the proof. 


Remark 6.6 The regularity results in Theorems 6.15 and 6.16 are identical with those for subdiffusion with a time-independent elliptic operator. All the constants in these theorems depend on $k$, but they remain uniformly bounded as $\alpha\to1^-$.

6.4 Nonlinear Subdiffusion

Now we discuss nonlinear subdiffusion, for which a general theory is still unavailable. Thus we present only elementary results for three model problems, namely semilinear problems with a Lipschitz nonlinearity, the Allen-Cahn equation and the compressible Navier-Stokes system, to illustrate the main ideas. One early contribution to nonlinear problems is [LRYZ13], which treats Neumann boundary conditions; see also [KY17] for the semilinear diffusion-wave equation. Other nonlinear subdiffusion-type problems that have been studied include the Navier-Stokes equations [dCNP15], Hamilton-Jacobi equations [GN17, CDMI19], the p-Laplace equation [LRdS18], Keller-Segel equations [LL18b], and fully nonlinear Cauchy problems [TY17]. The recent monograph [GW20] treats semilinear problems extensively.

6.4.1 Lipschitz nonlinearity

In this part, we analyze the following semilinear subdiffusion problem

∂tα u + Au = f (u), t > 0, u(0) = u0,

(6.65)

where $A$ is the negative Laplacian $-\Delta$ (equipped with a zero Dirichlet boundary condition). The solvability of this problem depends very much on the behavior of $f(u)$: the solution may fail to exist, or may blow up in finite time, just as for fractional odes, cf. Proposition 4.7 in Chapter 4. If the nonlinearity $f$ is globally Lipschitz, the situation is similar to the standard case, as shown in [JLZ18, Theorem 3.1]. The analysis employs a fixed point argument in a space equipped with the Bielecki norm [Bie56]. Using the solution representation (6.24), the solution $u$ satisfies
$$u(t)=F(t)u_0+\int_0^tE(t-s)f(u(s))\,\mathrm ds. \qquad (6.66)$$
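The contraction idea behind the fixed point argument also drives practical time-stepping schemes. The sketch below is not the book's argument: it treats the scalar analogue of (6.65) for a single spectral mode (so $A$ is replaced by a number $\lambda>0$) with the globally Lipschitz choice $f(u)=\sin u$, uses a standard L1-type approximation of the Caputo derivative purely for illustration, and solves each implicit step by a Picard (contraction) iteration; all parameter values are ad hoc.

```python
import numpy as np
from math import gamma

# Scalar semilinear subdiffusion d^alpha u + lam*u = f(u), L1 discretization in time,
# implicit step solved by a fixed-point (Picard) iteration.
alpha, lam, T, N = 0.7, 4.0, 1.0, 200
u0 = 1.0
f = lambda u: np.sin(u)                          # globally Lipschitz nonlinearity

tau = T / N
kappa = tau ** (-alpha) / gamma(2.0 - alpha)
b = np.arange(1, N + 1) ** (1.0 - alpha) - np.arange(N) ** (1.0 - alpha)   # L1 weights

u = np.empty(N + 1); u[0] = u0
for n in range(1, N + 1):
    # history part: kappa * sum_{j>=1} b_j (u^{n-j} - u^{n-j-1})
    hist = kappa * np.dot(b[1:n], u[n-1:0:-1] - u[n-2::-1]) if n > 1 else 0.0
    rhs = kappa * b[0] * u[n-1] - hist
    w = u[n-1]                                   # Picard iteration for the implicit step
    for _ in range(50):
        w = (rhs + f(w)) / (kappa * b[0] + lam)
    u[n] = w

print(u[-1])   # discrete solution at t = T
```

The inner iteration converges because its Lipschitz constant is $1/(\kappa b_0+\lambda)<1$, the discrete counterpart of choosing a large weight in the Bielecki norm.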

Theorem 6.17 Let u0 ∈ H 2 (Ω), and let f : R → R be Lipschitz continuous. Then problem (6.65) has a unique solution u such that u ∈ C α (I; L 2 (Ω)) ∩ C(I; H 2 (Ω)),

∂tα u ∈ C(I; L 2 (Ω)),

∂t u(t) ∈ L (Ω) and ∂t u(t) L 2 (Ω) ≤ ct 2

α−1

∀t ∈ I.

(6.67) (6.68)


Proof The lengthy proof is divided into four steps. Step 1: Existence and uniqueness. We denote by C(I; L 2 (Ω))λ the space C(I; L 2 (Ω)) equipped with the norm vλ := maxt ∈I e−λt v(t) L 2 (Ω), which is equivalent to the standard norm of C(I; L 2 (Ω)) for any fixed λ > 0. Then we define a map M : C(I; L 2 (Ω))λ → C(I; L 2 (Ω))λ by ∫ t E(t − s) f (v(s))ds. Mv(t) = F(t)u0 + 0

For any λ > 0, u ∈ C(I; L 2 (Ω)) is a solution of (6.65) if and only if u is a fixed point of the map M : C(I; L 2 (Ω))λ → C(I; L 2 (Ω))λ . It remains to prove that for some λ > 0, the map M : C(I; L 2 (Ω))λ → C(I; L 2 (Ω))λ has a unique fixed point. In fact, the definition of M and Theorem 6.4(ii) directly yield for any v1, v2 ∈ C(I; L 2 (Ω))λ , e−λt (Mv1 (t) − Mv2 (t)) L 2 (Ω)   ∫ t   = e−λt E(t − s)( f (v1 (s)) − f (v2 (s)))ds 0 L 2 (Ω) ∫ t ≤ ce−λt (t − s)α−1 v1 (s) − v2 (s) L 2 (Ω) ds 0 ∫ t (t − s)α−1 e−λ(t−s) max e−λs (v1 (s) − v2 (s)) L 2 (Ω) ds ≤c s ∈[0,T ] 0  ∫ 1 −α α−1 α −λt(1−ζ ) = cλ (1 − ζ) (λt) e dζ v1 − v2 λ, 0

where the second identity follows by changing variables s = tζ. Now direct compu∫1 α α tation gives λ−α 0 (1 − ζ)α−1 (λt)α e−λt(1−ζ ) dζ ≤ cλ− 2 T 2 . Thus, we obtain α

α

e−λt (Mv1 (t) − Mv2 (t)) L 2 (Ω) ≤ cλ− 2 T 2 v1 − v2 λ .

(6.69)

Thus, by choosing a sufficiently large λ, the map M is contractive on C(I; L 2 (Ω))λ . Theorem A.10 implies that M has a unique fixed point, or a unique solution of (6.65). Step 2: C α (I; L 2 (Ω)) regularity. Consider the difference quotient for τ > 0 ∫ t+τ u(t + τ) − u(t) F(t + τ) − F(t) 1 = u0 + α E(s) f (u(t + τ − s))ds τα τα τ t ∫ t 3 f (u(t + τ − s)) − f (u(t − s)) + E(s) ds =: Ii (t, τ). τα 0 i=1 (6.70) By Theorem 6.4(iii) is that τ −α F(t + τ) − F(t) ≤ c, which implies I1 (t, τ) L 2 (Ω) ≤ c. By appealing to Theorem 6.4(ii), we have


I2 (t, τ) L 2 (Ω)


 ∫ t+τ   1   = α E(s) f (u(t + τ − s))ds τ t L 2 (Ω) ∫ t+τ α − tα (t + τ) 1 c ≤c α s α−1 ds = ≤ c. τ t α τα

By the Lipschitz continuity of f , we have   ∫ t  f (u(s + τ)) − f (u(s))  e−λt I3 (t, τ) L 2 (Ω) = e−λt E(t − s) ds  2 τα 0 L (Ω)   ∫ t   u(s + τ) − u(s) −λ(t−s) α−1 −λs   ≤ c1 e (t − s) e   2 τα

ds.

L (Ω)

0

By substituting the estimates of Ii (t, τ), i = 1, 2, 3, into (6.70) and denoting Wτ (t) = e−λt τ −α u(t + τ) − u(t) L 2 (Ω), we obtain ∫ t T α2 Wτ (t) ≤ c + c1 e−λ(t−s) (t − s)α−1Wτ (s)ds ≤ c + c1 max Wτ (s), λ s ∈I 0 where the last inequality can be derived as (6.69). By choosing a sufficiently large λ and taking maximum of the left-hand side with respect to t ∈ I, it implies maxt ∈I Wτ (t) ≤ c, which further yields τ −α u(t + τ) − u(t)X ≤ ceλt ≤ c, where c is independent of τ. Thus, we have proved uC α (I ;X) ≤ c. Step 3: C(I; D(A)) regularity.∫By applying the operator A to both sides of (6.66) and t using the identity I − F(t) = 0 AE(t − s)ds from Lemma 6.2, we obtain ∫ Au(t) = AF(t)u0 +

t

AE(t − s) f (u(s))ds

0

= (AF(t)u0 + (I − F(t)) f (u(t))) ∫ t AE(t − s)( f (u(s)) − f (u(t)))ds := I4 (t) + I5 (t). +

(6.71)

0

By Theorem 6.4(ii) and the C α (I; L 2 (Ω)) regularity from Step 2, we have ∫ t     AE(t − s)( f (u(s)) − f (u(t)))ds I5 (t) L 2 (Ω) =  ∫ ≤ 0

0 t

cu(s) − u(t) L 2 (Ω) ds ≤ c t−s



L 2 (Ω)

t

|t − s| α−1 ds ≤ ct α,

∀ t ∈ I.

0

Theorem 6.4(i) implies that I5 (t) is continuous for t ∈ I, and the last inequality implies that I5 (t) is also continuous at t = 0. Hence, I5 ∈ C(I; L 2 (Ω)). Moreover, Theorem 6.4(i) gives I4 ∈ C(I; L 2 (Ω)) and I4 (t) L 2 (Ω) ≤ c Au0 + f (u(t)) L 2 (Ω) ≤ c.


Substituting the estimates of I4 (t) and I5 (t) into (6.71) yields  AuC(I ;L 2 (Ω)) ≤ c, which further implies uC(I ;D(A)) ≤ c. The regularity result u ∈ C(I; D(A)) yields ∂tα u = Au + f (u) ∈ C(I; L 2 (Ω)). Step 4: Estimate of u (t) L 2 (Ω) . By differentiating (6.66) with respect to t, we obtain ∫ t u (t) = F (t)u0 + E(t) f (u0 ) + E(s) f (u(t − s))u (t − s)ds 0 ∫ t E(t − s) f (u(s))u (s)ds. = E(t)(−Au0 + f (u0 )) + 0

By multiplying this equation by t 1−α , we get t 1−α u (t) =t 1−α E(t)(−Au0 + f (u0 )) ∫ t t 1−α s α−1 E(t − s) f (u(s))s1−α u (s)ds, + 0

which directly implies that e−λt t 1−α u (t) L 2 (Ω) ≤ e−λt t 1−α E(t) − Au0 + f (u0 ) L 2 (Ω) ∫ t + e−λ(t−s) t 1−α s α−1 (t − s)α−1  f (u(s)) L ∞ (Ω) e−λs s1−α u (s) L 2 (Ω) ds 0

α

α

≤ ce−λt  − Au0 + f (u0 ) L 2 (Ω) + cλ− 2 T 2 max e−λs s1−α u (s) L 2 (Ω) . s ∈I

where the last line follows similarly as (6.69). By choosing a sufficiently large λ and taking maximum of the left-hand side with respect to t ∈ I, it implies  maxt ∈I e−λt t 1−α u (t) L 2 (Ω) ≤ c, which further yields (6.68). Remark 6.7 For smooth u0 and f , in the absence of extra compatibility conditions, the regularity results (6.67)–(6.68) are sharp with respect to the Hölder continuity in time. The regularity (6.67) is identical with Theorem 6.13. If f is smooth but not Lipschitz, and problem (6.65) has a unique bounded solution, then f (u) and f (u) are still bounded. In this case, the estimates (6.67)–(6.68) are still valid. One can prove regularity results under weaker assumption on initial data u0 [AK19, Theorems 3.1 and 3.2]. The proof is similar to Theorem 6.17, and thus it is omitted. We refer interested readers also to [WZ20, Section 2] for further regularity estimates in L ∞ (Ω). Theorem 6.18 Let u0 ∈ H γ (Ω), γ ∈ [0, 2], and f : R → R be Lipschitz continuous. Then problem (6.65) has a unique solution u. Further, the following statements hold. (i) If γ ∈ (0, 2], then γ

u ∈ C 2 α (I; L 2 (Ω)) ∩ C(I; H 2 (Ω)), ∂t u ∈ L 2 (Ω),

∂tα u ∈ C(I; L 2 (Ω)), γ

∂t u(t) L 2 (Ω) ≤ ct 2 α−1,

t ∈ I.


(ii) If γ = 0, then for p < α1 , u ∈ C(I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)) ∩ C(I; H 2− (Ω)), γ ∂tα u ∈ L p (I; L 2 (Ω)), ∂t u(t) ∈ L 2 (Ω), ∂t u(t) L 2 (Ω) ≤ ct 2 α−1 for any t ∈ I and there hold u L p (I; H 2 (Ω)) + ∂tα u L p (I;L 2 (Ω)) ≤ c(u0  L 2 (Ω) +  f (0) L 2 (Ω) ), u(t) H 2− (Ω) ≤ ct −

2− 2

α



u0  L 2 (Ω) + c −1 t 2 α  f (0) L 2 (Ω),

∀t ∈ I.

6.4.2 Allen-Cahn equation

The global Lipschitz continuity of the nonlinearity $f$ is restrictive and does not cover many important examples in practice. Thus, it is of interest to relax the condition to local Lipschitz or Hölder continuity. Following [TYZ19, DYZ20] (see also [LS21] for gradient flows), we illustrate this with the following time-fractional Allen-Cahn equation

∂tα u + Au = u − u3, in Q, (6.72) u(0) = u0, in Ω. When α = 1, the model recovers the classical Allen-Cahn equation [AC72], which is often employed to describe the process of phase separation in multi-component alloy systems. The nonlinear term f (u) = u − u3 can be rewritten as f (u) = −F (u), with F(u) = 14 (1 − u2 )2 being the standard double-well potential. Below, the operator A is taken to be −Δ with its domain D(A) = H01 (Ω) ∩ H 2 (Ω). Problem (6.72) is the Euler-Lagrange equation of the following energy ∫ |∇u| 2 + F(u) dx. E(u) = 2 Ω Now we show the uniqueness and existence of a solution [DYZ20, Theorem 2.1]. Theorem 6.19 For every u0 ∈ H01 (Ω), there exists a unique weak solution to problem (6.72) such that u ∈ W α, p (I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)), for all p ∈ [2, α2 ). Further, if u0 ∈ H 2 (Ω), it satisfies u ∈ L ∞ (Q). Proof The proof applies the standard Galerkin procedure. We divide the lengthy proof into four steps. Step 1. Galerkin approximation. Let {(λ j , ϕ j )}∞ j=1 be the eigenpairs of the operator A. For every n ∈ N, let Xn = span{ϕ j } nj=1 . Let un ∈ Xn such that (∂tα un, v) + (∇un, ∇v) = ( f (un ), v),

∀v ∈ Xn

and un (0) = Pn u0,

(6.73)

where Pn : L 2 (Ω) → Xn is the L 2 (Ω) orthogonal projection, defined by (φ, v) = (Pn φ, v) for all φ ∈ L 2 (Ω) and v ∈ Xn . The existence and uniqueness of a local solution u ∈ C(I; Rn ) ∩ C(I; Rn ) to (6.73) can be proved, since f is smooth and


hence locally Lipschitz continuous; see Corollary 4.3 in Chapter 4. Letting v = un in (6.73) and using the inequality 2 1 α n 2 ∂t u (t) L 2 (Ω)

≤ (∂tα un (t), un (t)),

cf. Lemma 2.11, we obtain ∂tα un (t) L2 2 (Ω) + 2∇un (t) L2 2 (Ω) + 2un (t) L4 4 (Ω) ≤ 2un (t) L2 2 (Ω) . This, Gronwall’s inequality in Theorem 4.2 and L 2 (Ω)-stability of Pn yield un (t) L 2 (Ω) ≤ cT un (0) L 2 (Ω) ≤ cT u0  L 2 (Ω), with cT independent of n. Thus, un ∈ C(I; L 2 (Ω)), and it is a global solution. ∞ converges to a weak Step 2. Weak solution. Now we show that the sequence {un }n=1 solution of problem (6.72) as n → ∞ using an energy argument. To this end, taking v = −Δun ∈ Xn in (6.73) and noting the identity − un − (un )3, Δun = ∇un  L2 2 (Ω) − 3un |∇un | L2 2 (Ω),

we obtain ∂tα ∇un (t) L2 2 (Ω) + 2Δun (t) 2 + 6un |∇un |(t) L2 2 (Ω) ≤ 2∇un (t) L2 2 (Ω) . This and Theorem 4.2 imply ∇un  L ∞ (I;L 2 (Ω)) ≤ cT ∇u0  L 2 (Ω) , where cT is independent of n. Then by Sobolev embedding in Theorem A.3, un ∈ L ∞ (I; L 6 (Ω)) for Ω ⊂ Rd with d ≤ 3. Consequently,  f (un ) L ∞ (I;L 2 (Ω)) ≤ c. Since u0 ∈ H01 (Ω), by the maximal L p regularity in Theorems 6.8 and 6.11, ∂tα un  L p (I;L 2 (Ω)) +  Aun  L p (I;L 2 (Ω)) ≤ cp,κ ,

∀p ∈ [2, α2 ).

where the constant cp,κ is independent of n. This implies un ∈ W α, p (I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)), which compactly embeds into C(I; L 2 (Ω)) for p ∈ ( α1 , α2 ), by Sobolev embedding. Thus, there exists a function u and a subsequence, still denoted by {un } such that as n→∞ un −→ u

weak– ∗ in

∂tα un −→ ζ

weakly in

un −→ u

weakly in

n

u −→ u

in

L ∞ (I; H01 (Ω)), L p (I; L 2 (Ω)), L p (I; H 2 (Ω)),

C(I; L 2 (Ω)).


Next, we claim ζ = ∂tα u. Indeed, for any v ∈ H01 (Ω) and φ ∈ C0∞ (I), by integration by parts, we have (with t ∂Tα φ(t) being the right-sided Djrbashian-Caputo fractional derivative) ∫ ∫ (ζ(t), v)φ(t)dt = lim (∂tα un (t), v)φ(t)dt n→∞ I I ∫ = lim (un, v)t ∂Tα φ(t)dt + φ(T)((0 It1−α un )(T), v) − (t IT1−α φ)(0)(un (0), v) n→∞ I ∫ ∫ ∫ = lim (un, v)t ∂Tα φ(t)dt = (u, v)t ∂Tα φ(t)dt = (∂tα u(t), v)φ(t)dt. n→∞

I

I

I

Thus, upon passing to the limit in (6.73), the function u satisfies (∂tα u, v) + (∇u, ∇v) = ( f (u), v),

∀v ∈ H01 (Ω).

(6.74)

Moreover, by the convergence of un in C(I; L 2 (Ω)), un (0) converges to u0 in L 2 (Ω) and so u(0) = u0 . Thus, problem (6.72) admits a weak solution in W α, p (I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)) with p ∈ [2, α2 ). Step 3. Uniqueness. Next, we prove the uniqueness. Let u1 and u2 be two weak solutions of problem (6.72). Then the difference w = u1 − u2 satisfies (∂tα w(t), v) + (∇w(t), ∇v) = ( f (u1 ) − f (u2 ), v),

∀v ∈ H01 (Ω),

with w(0) = 0. Taking v = w(t) and noting the inequalities ( f (u1 ) − f (u2 ))(u1 − u2 ) = (u1 − u2 )2 (1 − u12 − u1 u2 − u22 ) ≤ (u1 − u2 )2, and Lemma 2.11, we obtain ∂tα w(t) L2 2 (Ω) ≤ 2w(t) L2 2 (Ω), with w(0) = 0. The Gronwall’s inequality in Theorem 4.2 implies w ≡ 0, i.e., u1 = u2 . Step 4. L ∞ bound. Now we derive the L ∞ (Q) bound. The preceding argument implies f (u) ∈ L ∞ (I; L 2 (Ω)). Since u0 ∈ H 2 (Ω), we apply the maximal L p regularity in Theorems 6.8 and 6.11, and obtain ∂tα u L p (I;L 2 (Ω)) +  Au L p (I;L 2 (Ω)) ≤ c,

∀p ∈ (1, ∞),

which implies u ∈ W α, p (I; L 2 (Ω)) ∩ L p (I; H 2 (Ω)). Then, by means of real interpolation with a sufficiently large exponent p, we deduce u ∈ L ∞ (Q). This completes the proof the theorem.  The next result gives the solution regularity for smooth initial data u0 ∈ H 2 (Ω) [DYZ20, Theorem 2.2]. Theorem 6.20 If u0 ∈ H 2 (Ω), then the solution u to problem (6.72) satisfies that for any β ∈ [0, 1) and t ∈ I


u ∈ C α (I; L 2 (Ω)) ∩ C(I; H 2 (Ω)), Au ∈ C(I; H 2β (Ω)),

(6.75)

 A1+β u(t) L 2 (Ω) ≤ ct −βα ;

∂t u(t) ∈ C((I; H (Ω)) 2β

∂tα u ∈ C(I; L 2 (Ω));

and

β

(6.76)

 A ∂t u(t) L 2 (Ω) ≤ ct

α(1−β)−1

,

(6.77)

where the constant c depends on  Au0  L 2 (Ω) and T. Proof The regularity estimate (6.75) has already been shown in Theorem 6.17. It suffices to show (6.76)–(6.77). We employ the representation (6.66): ∫ t 1+β β Aβ E(t − s)[A(u − u3 )(s)]ds. A u(t) = A F(t)(Au0 ) + 0

By Theorem 6.4(ii)–(iii), we obtain A

1+β

u(t) L 2 (Ω) ≤ ct

−βα

∫  Au0  L 2 (Ω) +

0

t

(t − s)(1−β)α−1  A(u − u3 )(s) L 2 (Ω) ds.

Next, we treat the term in the integral. In view of the identity Δu3 = 6u|∇u| 2 +3u2 Δu, the estimate u ∈ C(I; H 2 (Ω)) from (6.75) and u ∈ L ∞ (Q) from Theorem 6.19, we deduce  A f (u) L 2 (Ω) ≤ cΔu L 2 (Ω) + cu|∇u| 2  L 2 (Ω) + cu2 Δu L 2 (Ω) ≤ cT , with cT independent of t. Hence, we obtain  A1+β u(t) L 2 (Ω) ≤ ct −βα . Last, we prove (6.77). The case β = 0 is given in Theorem 6.17. For β ∈ [0, 1), ∫ t d Aβ E(t − s)[(u − u3 )(s)]ds Aβ u (t) = Aβ−1 F (t)(Au0 ) + dt 0 = Aβ−1 F (t)(Au0 ) + Aβ E(t)[(u − u3 )(0)] ∫ t 3 + Aβ E(t − s)[(u − 3u2 u )(s)]ds = Ii . 0

i=1

By Theorem 6.4, the terms I1 and I2 are respectively bounded by I1  L 2 (Ω) ≤ ct (1−β)α−1 Δu0  L 2 (Ω), I2  L 2 (Ω) ≤ ct (1−β)α−1 u0 − u03  L 2 (Ω) ≤ ct (1−β)α−1  Au0  L 2 (Ω) . Similarly, the third term I3 follows by ∫ t (t − s)α−1  Aβ u (s) L 2 (Ω) ds. I3  L 2 (Ω) ≤ c 0

The last three estimates and the Gronwall’s inequality in Theorem 4.2 give the desired result. Last, the continuity follows directly from that of the solution operators E(t) and F(t), cf. Theorem 6.4. 


Last, we give two interesting qualitative results on problem (6.72), i.e., maximum principle and energy dissipation. The following maximum principle holds: if |u0 | ≤ 1, then u(t) L ∞ (Ω) is also bounded by 1. We refer to Section 6.5 for a more detailed account on maximum principle for the linear subdiffusion model. Proposition 6.3 Let u0 ∈ H 2 (Ω) with |u0 (x)| ≤ 1. Then the solution u of problem (6.72) satisfies |u(x, t)| ≤ 1 ∀(x, t) ∈ Q. Proof Suppose that the minimum of u is smaller than −1 and achieved at (x0, t0 ) ∈ Q. Then by the regularity estimates (6.75) and (6.77) in Theorem 6.20, and Proposition 2.5(i), we deduce ∂tα u(x0, t0 ) ≤ 0. By Theorem 6.20 and Sobolev embedding theorem, Δu is continuous in Ω, and hence Δu(x0, t0 ) ≥ 0. Consequently, we obtain 0 ≥ ∂tα u(x0, t0 ) − Δu(x0, t0 ) = u(x0, t0 ) − u(x0, t0 )3 > 0, which leads to a contradiction. The upper bound can be proved similarly.



The last result gives a (weak) energy dissipation law. 2

Proposition 6.4 If the solution u to problem (6.72) belongs to W 1, 2−α (I; L 2 (Ω)) ∩ L 2 (I; H 2 (Ω)), then there holds E(u(T)) ≤ E(u0 ). Proof Taking v = ∂t u in (6.74) and integrating over (0, T) lead to, ∫ T ∫ T (∂tα u(t), ∂t u(t)) + (∇u(t), ∇∂t u(t))dt = ( f (u(t)), ∂t u(t)) dt 0

0

Under the given regularity assumption on $u$, by Proposition 2.2,
$$\frac12\int_0^T\frac{\mathrm d}{\mathrm dt}\int_\Omega|\nabla u(t)|^2\,\mathrm dx\,\mathrm dt+\int_0^T\frac{\mathrm d}{\mathrm dt}\int_\Omega F(u(t))\,\mathrm dx\,\mathrm dt\le0.$$
This immediately implies the desired bound.
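Both qualitative properties of this subsection, the maximum bound of Proposition 6.3 and the energy inequality of Proposition 6.4, can be observed in a rough 1D computation. The sketch below uses an L1 step in time and a semi-implicit treatment (diffusion implicit, $f(u)=u-u^3$ explicit) with arbitrarily chosen parameters; it is only an illustration, not a scheme analyzed in the book, and the discrete scheme is not claimed to preserve either property exactly.

```python
import numpy as np
from math import gamma

# 1D time-fractional Allen-Cahn, L1 in time + centered finite differences in space.
alpha, T, N, M = 0.6, 0.5, 100, 128
x = np.linspace(0.0, 1.0, M + 1)
h, tau = x[1] - x[0], T / N
u = 0.9 * np.sin(np.pi * x)                      # initial datum with |u0| <= 1
kappa = tau ** (-alpha) / gamma(2.0 - alpha)
b = np.arange(1, N + 1) ** (1 - alpha) - np.arange(N) ** (1 - alpha)

lap = (np.diag(-2.0 * np.ones(M - 1)) + np.diag(np.ones(M - 2), 1)
       + np.diag(np.ones(M - 2), -1)) / h ** 2
A_imp = kappa * b[0] * np.eye(M - 1) - lap       # matrix of the implicit step

def energy(u):
    return np.sum(0.5 * (np.diff(u) / h) ** 2 + 0.25 * (1 - u ** 2) ** 2) * h

hist = [u.copy()]
for n in range(1, N + 1):
    past = kappa * sum(b[j] * (hist[n - j][1:-1] - hist[n - j - 1][1:-1]) for j in range(1, n))
    rhs = kappa * b[0] * u[1:-1] - past + (u - u ** 3)[1:-1]
    u = np.concatenate(([0.0], np.linalg.solve(A_imp, rhs), [0.0]))
    hist.append(u.copy())
    if n % 25 == 0:
        print(n, energy(u), np.abs(u).max())     # in this run the energy decreases and max|u| stays below 1
```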



6.4.3 Compressible Navier-Stokes problem

This part is concerned with time-fractional compressible Navier-Stokes equations. The standard counterpart describes the dynamics of Newtonian fluids: in the incompressible case with constant density, the existence and uniqueness of a weak solution in 2D have been proved, whereas in the 3D case global weak solutions may not be unique, and the existence and uniqueness of global smooth solutions are still open. Let $\Omega\subset\mathbb R^d$, $d=2,3$, be an open bounded domain with a smooth boundary. The time-fractional compressible Navier-Stokes equations read


$$\begin{cases}\partial_t^\alpha u+u\cdot\nabla u+(\nabla u)\cdot u+(\nabla\cdot u)u=\Delta u,&\text{in }Q,\\ u=0,&\text{on }\partial_LQ,\\ u(\cdot,0)=u_0,&\text{in }\Omega.\end{cases}$$
Note that $\nabla u$ is a tensor, given by $(\nabla u)_{ij}=\partial_{x_i}u_j$. This system can also be formulated in the conservative form
$$\begin{cases}\partial_t^\alpha u+\nabla\cdot(u\otimes u)+\tfrac12\nabla(|u|^2)=\Delta u,&\text{in }Q,\\ u=0,&\text{on }\partial_LQ,\\ u(\cdot,0)=u_0,&\text{in }\Omega.\end{cases}$$

(6.78)

The notation u ⊗ u denotes the tensor product, i.e., u ⊗ u = [ui u j ], i, j = 1, . . . , d.

6.4.3.1 Compactness criteria We employ some compactness criteria to show the existence of weak solutions of problem (6.78). For linear evolution equations, proving the existence of weak solutions is relatively easy. Indeed, one only needs weak compactness, which is guaranteed by boundedness in reflexive spaces; see Section 6.1. However, for nonlinear evolution equations, strong compactness criteria, e.g., Aubin-Lions lemma, are often needed. Below we present two strong compactness criteria. The proof relies crucially on the following estimates on the time shift operator τh u(t) := u(t + h) [LL18b, Proposition 3.4]. Proposition 6.5 Fix T > 0. Let E be a Banach space and α ∈ (0, 1). Suppose that 1 (I; E) has a weak Djrbashian-Caputo fractional derivative ∂ α u ∈ L p (I; E) u ∈ Lloc t p if pα < 1. associated with initial value u0 ∈ E. Let r0 = ∞ if pα ≥ 1 and r0 = 1−pα Then, there exists c > 0 independent of h and u such that

1 1 chα+ r − p ∂tα u L p (I;E), r ∈ [p, r0 ), τh u − u L r (0,T −h;E) ≤ chα ∂tα u L p (I;E), r ∈ [1, p]. Proof Let f := ∂tα u ∈ L p (I; E). Then by Proposition 4.6, u(t) = u(0) + 0 Itα f (t). Let K1 (s, t; h) := (t + h − s)α−1 and K2 (s, t; h) := (t − s)α−1 − (t + h − s)α−1 . Then ∫ ∫ t 1 t+h τh u(t) − u(t) = K1 (s, t; h) f (s)ds + K2 (s, t; h) f (s)ds , Γ(α) t 0 so that ∫ 0

T −h

τh u −

uEr dt

r ∫ t+h ∫ 2r T −h ≤ K1  f E (s)ds dt (Γ(α))r 0 t r ∫ T −h ∫ t K2  f E (s)ds dt . + 0

0


First, we consider the case r ≥ p and r1 > p1 − α. We denote I1 = (t, t + h) and I2 = (0, t). Let r1 + 1 = q1 + p1 . By Hölder inequality, we have for i = 1, 2, ∫ Ki  f E (s)ds ≤

Ii

∫ Ii

q Ki 

f

p E ds

r1 ∫ Ii

q Ki ds

−q ∫ rqr

Ii

p

 f E ds

−p rpr

.

Next, we bound the three terms on the right-hand side. Direct computation shows ∫ −p ∫ rpr hq(α−1)+1 1− pr p q .  f E ds ≤  f  L p (I;E) and K1 ds = q(α − 1) + 1 Ii I1 q

It follows from the inequality (a + b)q ≥ aq + bq for q ≥ 1, a, b ≥ 0 that K2 ≤ (t − s)q(α−1) − (t + h − s)q(α−1) . Since q(α − 1) + 1 > 0, we find ∫ t t q(α−1)+1 − (t + h)q(α−1)+1 + hq(α−1)+1 q ≤ chq(α−1)+1 . K2 ds ≤ q(α − 1) + 1 0 Combining the preceding estimates gives ∫ T −h ∫ (q(α−1)+1) r −q r q τh u − uE dt ≤ ch 0

∫ +

0 T −h

0

T

f

p E (s) p

 f E (s)





s

0∧s−h T −h

s

q K1 dt ds

q K2 dt ds .

Direct computation shows ∫

s

0∧s−h

and ∫ T −h s

q

K2 dt ≤

∫ s

T −h

q

K1 dt ≤

(t − s)q(α−1) dt −

∫ s

hq(α−1)+1 q(α − 1) + 1 T −h

(t − s + h)q(α−1) dt ≤

hq(α−1)+1 . q(α − 1) + 1

Consequently, τh u − u L r (0,T −h;E) ≤ chα+ r − p ∂tα u L p (I;E) . 1

1

This immediately implies the desired estimate. Next, we consider the case r < p. Note that for r = p, the first part implies τh u − u L p (0,T −h;E) ≤ chα ∂tα u L p (I;E) . Then, by Hölder inequality,


τh u − u L r (0,T −h;E) ≤ 1

rp

L p−r (0,T −h;E)

τh u − u L p (0,T −h;E)

≤ T r − p τh u − u L p (0,T −h;E) . 1

1



This completes the proof of the proposition.
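The shift estimate can be checked quickly on a scalar example. Below, $u(t)={}_0I_t^\alpha 1=t^\alpha/\Gamma(\alpha+1)$, whose Caputo derivative is the constant $f\equiv1$, and $p=r=2$ (assumed values, chosen so that the predicted rate is $h^{\alpha}$); the grid resolution is arbitrary.

```python
import numpy as np
from math import gamma

# Check of Proposition 6.5 for u(t) = t^alpha / Gamma(alpha+1), p = r = 2:
# the bound predicts ||u(.+h) - u||_{L^2(0,T-h)} <= c * h^alpha.
alpha, T = 0.4, 1.0
u = lambda t: t ** alpha / gamma(alpha + 1.0)
for h in [0.1, 0.05, 0.025, 0.0125]:
    t = np.linspace(0.0, T - h, 4000)
    err = np.sqrt(np.trapz((u(t + h) - u(t)) ** 2, t))
    print(h, err, err / h ** alpha)   # the last column stays bounded, consistent with the O(h^alpha) bound
```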

Now we can state two compactness criteria of Aubin-Lions type [LL18b, Theorems 4.1 and 4.2], with “assignment” in the sense of Theorem 2.15. Theorem 6.21 Let T > 0, α ∈ (0, 1) and p ∈ [1, ∞). Let E, E0 and E1 be three 1 (I; E ) satisfy  E → E0 . Let the set U ⊂ Lloc Banach spaces such that E1 → 1 p

(i) There exists c1 > 0 such that for any u ∈ U, supt ∈I 0 Itα uE1 (t) ≤ c1 . p (ii) There exist r ∈ ( 1+pα , ∞) ∩ [1, ∞) and c2 > 0 such that for any u ∈ U, there is an assignment of initial value u0 for u and ∂tα u satisfies ∂tα u L r (I;E0 ) ≤ c2 . Then, U is relatively compact in L p (I; E). Proof Note that for any α ∈ (0, 1), there holds p

p

u L p (I;E1 ) ≤ T 1−α 0 Itα uE1 (T).

(6.79)

Thus, (i) implies that U is bounded in L p (I; E). Next, let r0 be the number defined p r > p, since r > 1+pα . Otherwise, in Proposition 6.5. If r < α1 , then r0 = 1−rα r0 = ∞ > p. Together with the condition r ≥ 1, Proposition 6.5 and (ii) imply τh u − u L p (I;E) → 0 uniformly. Hence, the conditions in Theorem A.6 are fulfilled,  and the relative compactness of U in L p (I; E) follows. Theorem 6.22 Let T > 0, α ∈ (0, 1) and p ∈ [1, ∞). Let E, E0 and E1 be three 1 (I; E ) satisfy Banach spaces such that E1 →  E → E0 . Let the set U ⊂ Lloc 1 (i) There exist r1 ∈ [1, ∞) and c1 > 0 such that for any u ∈ U, supt ∈I 0 Itα uEr11 ≤ c1 . (ii) There exists p1 ∈ (p, ∞] such that U is bounded in L p1 (I; E). (iii) There exist r2 ∈ [1, ∞) and c2 > 0 such that for any u ∈ U, there is an assignment of initial value u0 for u so that ∂tα u satisfies ∂tα u L r2 (I;E0 ) ≤ c2 . Then, U is relatively compact in L p (I; E). Proof (i) and (6.79) imply that U is bounded in L p (I; E). It follows from (iii) and Proposition 6.5 that τh u − u L 1 (0,T −h;E) → 0 uniformly. Hence, by Theorem A.6, U is relatively compact in L 1 (I; E). Since U is bounded in L p1 (I; E) with p1 > p,  cf. (ii), the relative compactness of U in L p (I; E) follows from Theorem A.7. Next, we state an abstract weak convergence result for the Djrbashian-Caputo fractional derivative [LL18b, Proposition 3.5]. The notation E denotes the dual space of the space E. Proposition 6.6 Let E be a reflexive Banach space, α ∈ (0, 1) and T > 0. Let un → u in L p (I; E), p ≥ 1, and the Djrbashian-Caputo fractional derivatives ∂tα un with initial values u0,n are bounded in L r (I; E), r ∈ [1, ∞). (i) There is a subsequence of u0,n converging weakly in E to some u0 ∈ E.


(ii) If r > 1, there exists a subsequence of ∂tα unk converging weakly to f and u0,nk converging weakly to u0 , and f is the Djrbashian-Caputo fractional derivative of u with initial value u0 so that u(t) = u0 + 0 Itα f (t). Further, if r ≥ α1 , then, u(0+ ) = u0 in E in the sense of Lebesgue point. Proof (i). Let fn = ∂tα un . By Theorem 2.2, un (t) − u0,n = 0 Itα fn is bounded in r − ) if r < α1 or r1 ∈ [1, ∞) if r > α1 . Then, un (t) − u0,n L r1 (I; E) where r1 ∈ [1, 1−rα p is bounded in L 1 (I; E), with p1 = min(r1, p). Since un converges in L p and thus in L p1 , then u0,n is bounded in L p1 (I; E). Hence, u0,n is bounded in E. Since E is reflexive, there is a subsequence u0,nk converging weakly to u0 in E. (ii). Take a subsequence such that u0,nk converges weakly to u0 and ∂tα unk to f weakly in L r (I; E) (since r > 1). For any ϕ ∈ Cc∞ [0, T) and φ ∈ E , we have ∫T ∫T ∂ α ϕ(unk (t)−u0,nk )dt = 0 ϕ fnk dt. With ·, · being the duality pairing between 0 rt T L r −1 (I; E ) and L r (I; E), the preceding identity and the relations φϕ, φt ∂Tα ϕ ∈ r L r −1 (I; E ) imply φt ∂Tα ϕ, unk (t)−u0,nk = φϕ, fnk (with t ∂Tα being the right-sided Djrbashian-Caputo fractional derivative). Taking the limit k → ∞ and using the weak convergence yield φt ∂Tα ϕ, u − u0 = φϕ, f . Since φ is arbitrary and f ∈ L r (I; E), ∫T ∫T there holds 0 t ∂Tα ϕ(u(t) − u0 ) dt = 0 ϕ f dt. Hence, f is the Djrbashian-Caputo fractional derivative of u with initial value u0 in the weak sense, and there holds  u = u0 + 0 Itα f (t). The last claim follows from Theorem 2.15. 6.4.3.2 Existence of weak solutions Now we employ the compactness criteria to establish the existence of weak solutions. Motivated by Lemma 2.6, we define a weak solution as follows. Definition 6.2 Let u0 ∈ L 2 (Ω) and q1 min(2, d4 ). A function u ∈ L ∞ (I; L 2 (Ω)) ∩ L 2 (I; H01 (Ω))

with ∂tα u ∈ L q1 (I; H −1 (Ω))

is called a weak solution to problem (6.78), if ∫ T∫ ∫ T∫   α − ∇v : u ⊗ u − 12 ∇ · v|u| 2 dxdt (u(x, t) − u0 )t ∂T vdxdt + 0 0 Ω Ω ∫ T∫ = u · Δv dxdt, ∀v ∈ Cc∞ ([0, T) × Ω; Rd ). 0

Ω

If u is defined on R+ and its restriction on any interval [0, T), for any T > 0, is a weak solution, u is called a global weak solution. The following existence result holds for problem (6.78) [LL18b, Theorem 5.2]. Theorem 6.23 For any u0 ∈ L 2 (Ω), there exists a global weak solution to problem (6.78) in the sense of Definition 6.2. Further, if max( 12 , d4 ) ≤ α < 1, there is a global weak solution continuous at t = 0 in the H −1 (Ω) norm.


To prove the theorem, we employ a Galerkin procedure as in Section 6.1, and compactness criteria in Section 6.4.3.1. To this end, let {ϕ j }∞ j=1 be an orthogonal basis 1 2 2 of both H0 (Ω) and L (Ω) and orthonormal in L (Ω), and let Pn be the orthogonal projection onto span{ϕ j } nj=1 in L 2 (Ω). Then it defines a bounded linear operator in  2 both L 2 (Ω) and H01 (Ω). Given u0 = ∞ j=1 u0, j ϕ j (x) ∈ L (Ω), we seek an approximate solution un (x, t) of the form un (t) =

n

cn, j (t)ϕ j

j=1

with cn, j (0) = u0, j , j = 1, . . . , n, and cn := (cn,1, . . . , cn,n ) ∈ Rd×n is continuous in time t. The approximation un solves the Galerkin problem:

(ϕ j , ∂tα un ) + (ϕ j , ∇ · (un ⊗ un )) + 12 (ϕ j , ∇|un | 2 ) = (ϕ j , Δun ), j = 1, . . . , n un (0) = Pn u0 . (6.80) The following unique existence result and a priori bound hold. Lemma 6.9 (i) For any n ≥ 1, there exists a unique solution un to problem (6.80) that is continuous on R+ , satisfying un  L ∞ (R+ ;L 2 (Ω)) ≤ u0  L 2 (Ω)

and

sup 0≤t 0, (6.81) cn (0) = (u0,1, . . . , u0,n ), where Fn is a quadratic vector-valued function of cn , and hence smooth. (i) By Proposition 4.7 in Chapter 4, the solution cn (t) exists on [0, t∗n ), with the blowup time t∗n either t∗n = ∞ or t∗n < ∞ and lim supt→t∗n − |cn | = ∞. Since Fn is quadratic, by Theorem 4.12, cn ∈ C 1 (0, t∗n ) ∩ C[0, t∗n ) and hence, un ∈ C 1 ((0, t∗n ); H01 (Ω)) ∩  C([0, t∗n ); H01 (Ω)). For un = nj=1 cn, j (t)ϕ j , by Lemma 2.12, using (6.80) and the identity ∫ (un, ∇ · (un ⊗ un )) + 12 (un, ∇|un | 2 ) =

1 2

Ω

∇ · (|u| 2 u)dx,

we have

un, ∂tα un + (un, ∇ · (un ⊗ un )) + 12 (un, ∇|un | 2 ) = −∇un  L2 2 (Ω) .


235

2 1 α 2 ∂t un  L 2 (Ω) (t)

≤ −∇un  L2 2 (Ω),

and it implies un (t) L2 2 (Ω) + 20 Itα ∇un  L2 2 (Ω) (t) ≤ u0  L2 2 (Ω) . Thus, t∗n = ∞, and the first assertion also follows. 4 (ii) Take a test function v ∈ L p1 (I; H01 (Ω)), with p1 = max(2, 4−d ) (its conjugate d exponent q1 = min(2, 4 )) and v L p1 (I;H 1 (Ω)) ≤ 1. For vn := Pn v, the stability of Pn 0

in H01 (Ω) implies vn  L p1 (I;H 1 (Ω)) ≤ c(Ω, T). Since vn ∈ span{ϕ j } nj=1 , there holds 0

(v, ∂tα un ) = (vn, ∂tα un ) = −(vn, ∇ · (un ⊗ un )) − 12 (vn, ∇|un | 2 ) + (vn, Δun ). This in particular implies |(v, ∂tα un )| = |(vn, −∇ · (un ⊗ un ) − 12 ∇(|un | 2 ) + Δun )| ∫ T ∫ T ≤c ∇vn |un | 2  L 1 (Ω) dt + ∇vn  L 2 (Ω) ∇un  L 2 (Ω) dt. 0

0

Using Gagliardo-Nirenberg inequality in Theorem A.4, 1− d

d

4 u L 4 (Ω) ≤ cu L 2 (Ω) ∇u L42 (Ω),

the first term can be estimated as ∫ T ∫ 2 ∇vn |un |  L 1 (Ω) dt ≤ 0

T

0

∫ ≤ 0

∇vn  L 2 (Ω) un  L2 4 (Ω) dt T

∇vn 

4 4−d L 2 (Ω)

 4−d ∫ 4 dt 0

T

∇un  L2 2 (Ω) dt

 d4

.

This and the bound vn  L p1 (I;H 1 (Ω)) ≤ c(Ω, T) imply 0

‖∂_t^α u_n‖_{L^{q_1}(I;H^{−1}(Ω))} ≤ c, with q_1 = min(2, 4/d).

In sum, we have obtained

sup_{t∈I} ({}_0I_t^α ‖∇u_n‖²_{L²(Ω)})(t) ≤ c, u_n is bounded in L^∞(I; L²(Ω)), ‖∂_t^α u_n‖_{L^{q_1}(I;H^{−1}(Ω))} ≤ c.

p 2 By Theorem 6.22, there is a subsequence {un j }∞ j=1 that converges in L (I; L (Ω)) for any p ∈ [1, ∞). By Proposition 6.6, u has a weak Djrbashian-Caputo fractional derivative with initial value u0 such that ∂tα u ∈ L q1 (I; H −1 (Ω)). By a standard q1 diagonal argument, u is defined on R+ and ∂tα u ∈ Lloc (R+ ; H −1 (Ω)) such that 2 2 un j → u in Lloc (R+ ; L (Ω)). Upon passing to a further subsequence, we may



assume that un j also converges a.e. to u in R+ × Ω. Clearly, for any t1 < t2 , ∫ t2 ∫t un  L2 2 (Ω) dt ≤ u0  L2 2 (Ω) (t2 − t1 ), and thus by Fatou’s lemma, t 2 u L2 2 (Ω) dt ≤ t 1

1

u0  L2 2 (Ω) (t2 − t1 ), which implies u ∈ L ∞ (R+ ; L 2 (Ω)). Fix any T > 0, since un j is

bounded in L 2 (I; H01 (Ω)). Then, it has a further subsequence that converges weakly in L 2 (I; H01 (Ω)). By a standard diagonal argument, there is a subsequence that con2 (R ; H 1 (Ω)). The limit must be u by pairing with a smooth test verges weakly in Lloc + 0 2 (R ; H 1 (Ω)).  function. Hence, u ∈ Lloc + 0 Now, we can prove Theorem 6.23: Proof By Lemma 6.9, there is a convergent subsequence in L p (I; L 2 (Ω)) for any p ∈ [1, ∞), with the limit denoted by u. For any v ∈ Cc∞ ([0, T)×Ω; Rd ), let vn = Pn v. Since v is smooth in t and vanishes at T, so is vn , and α t ∂T vn

→ t ∂Tα v

in L p1 (I; H01 (Ω)).

Fix n0 ≥ 1, and for n j ≥ n0 , by integration by parts and noting the fact t ∂Tα vn0 ∈ nj span{ϕ } =1 , we have ∫

T

0

∫ =

T

0

∫ =

0

T



Ω

(un j −

u0 )t ∂Tα vn0 dxdt



Ω



Ω



= 0

T



Ω

vn0 ∂tα un j dxdt

−vn0 ∇ · (un j ⊗ un j ) − 12 vn0 ∇|un j | 2 + vn0 Δun j dxdt   ∇ vn0 : un j ⊗ un j + 12 ∇ · vn0 |un j | 2 − ∇vn0 : ∇un j dxdt.

By Lemma 6.9, taking j → ∞, we have ∫ T∫ ∫ T∫   ∇vn0 : u ⊗ u + 12 ∇ · vn0 |u| 2 − ∇vn0 : ∇u dxdt. (u − u0 )t ∂Tα vn0 dxdt = 0

Ω

0

Ω

Then, taking n0 → ∞, by the convergence vn → ϕ in L p (I; H01 (Ω)) for any p ∈ (1, ∞), the weak formulation holds. Further, if q1 ≥ α1 or α ≥ max( 12 , d4 ), by Lemma 6.9 (ii) and Theorem 2.15, u is continuous at t = 0. 

6.5 Maximum Principles For standard parabolic equations, there are many important qualitative properties, e.g., maximum principle, Hopf’s lemma and unique continuation property. These properties are less understood for subdiffusion. The unique continuation principle states for any solution u supported on t ≥ 0, if u = 0 in a subdomain ω ⊂ Ω over (0, T), then u = 0 in the parabolic cylinder Q, under certain circumstances [LN16, JLLY17], but the general case seems still open. There is also Hopf’s lemma for 1D subdiffusion



[Ros16], generalizing that for standard parabolic problems [Fri58]. The proof mostly follows that for the classical parabolic problem as presented in [Can84, Section 15.4], but with Eα (−t) in place of e−t . The maximum principle has been extensively studied [Luc09, LRY16, LY17]. We describe several versions of weak maximum principles below; see the review [LY19b] for further results, including more general subdiffusion -type models, e.g., multi-term and distributed order. We discuss the maximum principle for the following model: ⎧ ∂ α u = Lu + f , in Q, ⎪ ⎪ ⎨ t ⎪ u = g, on ∂L Q, ⎪ ⎪ ⎪ u(0) = u0, in Ω, ⎩

(6.82)

where u_0 and g are given functions on Ω and ∂_L Q, respectively, and f is a given function on Q. The elliptic operator L ≡ L_{a,q} is defined by
Lu(x) = Σ_{i,j=1}^d ∂_{x_j}(a_{ij}(x) ∂_{x_i} u(x)) − q(x) u(x),

where the matrix a = [a_{ij}] is symmetric and strongly elliptic, and q ∈ C(Ω̄) with q ≥ 0 in Ω. Unless otherwise stated, we also assume a_{ij} ∈ C¹(Ω̄) and g(x, t) = 0. The following weak maximum principle, due to Luchko [Luc09, Theorem 2], holds for classical solutions. The condition ∂_t^α u, ∂²_{x_i x_j} u ∈ C(Q) is restrictive, in view of the limited smoothing properties of the solution operators.
Theorem 6.24 Let α ∈ (0, 1) and q ≥ 0. Suppose that u is a continuous function on Q̄, with ∂_t^α u, ∂²_{x_i x_j} u ∈ C(Q). Then the following statements hold.
(i) If (∂_t^α + L)u(x, t) ≤ 0 for all (x, t) ∈ Q, then u(x, t) ≤ max(0, m), with m := max( sup_{x∈Ω} u(x, 0), sup_{(x,t)∈∂_L Q} u(x, t) ).
(ii) If (∂_t^α + L)u(x, t) ≥ 0 for all (x, t) ∈ Q, then min(m, 0) ≤ u(x, t), with m := min( inf_{x∈Ω} u(x, 0), inf_{(x,t)∈∂_L Q} u(x, t) ).

Proof It suffices to prove (i), and (ii) follows by considering −u. We present two slightly different proofs. Proof 1. This proof is taken from [Luc09, Theorem 2]. Assume that the statement does not hold, i.e., there exists (x0, t0 ) ∈ Q such that u(x0, t0 ) > m > 0. Let  = u(x0, t0 ) − m > 0, and let w(x, t) := u(x, t) + 2 TT−t , for (x, t) ∈ Q. Then w satisfies w(x, t) ≤ u(x, t) + 2 , for (x, t) ∈ Q, and for any (x, t) ∈ ∂p Q w(x0, t0 ) ≥ u(x0, t0 ) =  + m ≥  + u(x, t) ≥  + w(x, t) −

2

= w(x, t) + 2 .



Thus, w cannot attain its maximum on ∂p Q. Let w attain its maximum over Q at (x1, t1 ) ∈ Q, then w(x1, t1 ) ≥ w(x0, t0 ) ≥  + m >  . Now Proposition 2.5(i) and the necessary conditions for a maximum at (x1, t1 ) give ∂tα w(x1, t1 ) ≥ 0,

∇w(x1, t1 ) = 0,

Δw(x1, t1 ) ≤ 0.

By the definition of w, ∂tα u(x, t) = ∂tα w(x, t) + (2T Γ(2 − α))−1 t 1−α . Consequently, 0 ≥∂tα u(x1, t1 ) − Lu(x1, t1 ) =∂tα w(x1, t1 ) + (2T Γ(2 − α))−1 t11−α − ∇ · (a∇w)(x1, t1 ) + q(w − (2T)−1 (T − t1 )) ≥(2T Γ(2 − α))−1 t11−α + q(1 −

T −t1 2T )

> 0,

leading to a contradiction. Proof 2. Since u is continuous on Q, there exists a point (x0, t0 ) ∈ Q such that u(x, t) ≤ u(x0, t0 ), for all (x, t) ∈ Q. If u(x0, t0 ) ≤ 0, the desired inequality follows directly. Hence, it suffices to consider u(x0, t0 ) > 0, and we distinguish two cases. (a) If (x0, t0 ) ∈ ∂p Q, then m = u(x0, t0 ), and thus the assertion follows. (b) If  (x0, t0 ) ∈ Q, then i,d j=1 ∂x j (ai j ∂xi u(x0, t0 )) ≤ 0, qu(x0, t0 ) ≥ 0, and the condition (∂tα + L)u(x0, t0 ) ≤ 0 implies ∂tα u(x0, t0 ) ≤ 0, while Proposition 2.5(i) implies ∂tα u(x0, t0 ) ≥ 0. Hence, we obtain ∂tα u(x0, t0 ) = 0 and u(x0, t) ≤ u(x0, t0 )

for all 0 ≤ t ≤ t0 .

Now Proposition 2.5(ii) yields u(x0, t0 ) = u(x0, 0), and hence the assertion follows. The maximum principle in Theorem 6.24 allows us to derive a priori estimates and continuity results on classical solutions of problem (6.82) [Luc09, Theorem 4]. Corollary 6.3 If u is a classical solution to problem (6.82), then there holds   uC(Q) ≤ max u0 C(Ω), gC(∂L Q) + Γ(α + 1)−1T α  f C(Q) . Proof Let

w(x, t) = u(x, t) − c_f t^α, (x, t) ∈ Q̄, with c_f = Γ(α + 1)^{−1} ‖f‖_{C(Q̄)}. Then w is a classical solution of problem (6.82) with the source f̃(x, t) = f(x, t) − ‖f‖_{C(Q̄)} − q c_f t^α and the boundary condition g̃(x, t) = g(x, t) − c_f t^α in place of f and g, respectively. Clearly, f̃ satisfies f̃(x, t) ≤ 0 for (x, t) ∈ Q̄. Then the maximum principle in Theorem 6.24 applied to w leads to
w(x, t) ≤ max( sup_Ω u_0, sup_{∂_L Q} g̃ ), (x, t) ∈ Q̄.

Consequently, for u we have u(x, t) = w(x, t) + c_f t^α ≤ max(‖u_0‖_{C(Ω̄)}, ‖g‖_{C(∂_L Q)}) + c_f T^α. Similarly, the minimum principle from Theorem 6.24(ii), applied to the auxiliary function w̃(x, t) = u(x, t) + c_f t^α, (x, t) ∈ Q̄, leads to



u(x, t) ≥ − max(u0 C(Ω), g C(∂L Q) ) − c f T α . Combining the last two estimates completes the proof of the corollary.



Next, we extend the weak maximum principle in Theorem 6.24 to weak solutions [LY17, Lemma 3.1]. Note that we discuss only the case of a zero boundary condition, i.e., g ≡ 0. Lemma 6.10 Let u0 ∈ L 2 (Ω) and f ∈ L 2 (I; L 2 (Ω)), with u0 ≥ 0 a.e. in Ω and f ≥ 0 a.e. in Q, and ai j and q be smooth. Then the solution u to problem (6.82) satisfies u ≥ 0 a.e. in Q. Proof First, we show the assertion for u0 ∈ C0∞ (Ω) and f ∈ C0∞ (Q). Then the solution u satisfies u ∈ C 1 (I; C 2 (Ω)) ∩ C(Q) and ∂t u ∈ L 1 (I; L 2 (Ω)), i.e., u(t) is a strong solution. Indeed, by the solution theory in Section 6.2, u is given by ∫ t E(t − s) f (s)ds, u(t) = F(t)u0 + 0

where the solution operators F(t) and E(t) associated with L a,q , cf. (6.21) and (6.22). Thus, for any γ > 0, ∫ t    E(t − s)Aγ f (s)ds 2  Aγ u(t) L 2 (Ω) ≤ F(t)Aγ u0  L 2 (Ω) +  L (Ω) 0 ∫ t ≤ c Aγ u0  L 2 (Ω) + c (t − s)α−1  Aγ f (s) L 2 (Ω) ds, 0

and similarly, since f ∈ C0∞ (Q), γ

 A u (t) L 2 (Ω)

∫ t    ≤ F (t)A u0  L 2 (Ω) +  E(t − s)Aγ f (s)ds 2 L (Ω) 0 ∫ t ≤ ct α−1  Aγ u0  L 2 (Ω) + c (t − s)α−1  Aγ f (s) L 2 (Ω) ds.

γ

0

By Sobolev embedding, we deduce the desired claim by choosing a sufficiently large γ > 0. Now by the maximum principle in Theorem 6.24, the assertion holds if u0 ∈ C0∞ (Ω) and f ∈ C0∞ (Q). Now suppose u0 ∈ L 2 (Ω) and f ∈ L 2 (I; L 2 (Ω)). Then ∞ ⊂ C ∞ (Ω) and { f n } ∞ ⊂ C ∞ (Q) such that u n → u there exist sequences {u0n }n=1 0 0 n=1 0 0 2 n in L (Ω) and f → f in L 2 (I; L 2 (Ω)). Then the solution un corresponding to u0n and f n satisfies un ≥ 0 a.e. in Q, n ∈ N. Moreover, since un → u in L 2 (I; H 2 (Ω)), we obtain u ≥ 0 a.e. in Q.  Next, we extend Lemma 6.10 to weak solutions, but removing the restriction on the sign of the coefficient q ∈ C(Ω) [LY17, Theorem 2.1]. Theorem 6.25 Let u0 ∈ L 2 (Ω) and f ∈ L 2 (I; L 2 (Ω)), with u0 ≥ 0 a.e. in Ω and f ≥ 0 a.e. in Q, and the coefficients ai j and q are smooth. Then the solution u to problem (6.82) satisfies u ≥ 0 a.e. in Q.



Proof Let cq = qC(Ω) , and q(x) ˜ = cq + q(x). Then q(x) ˜ ≥ 0 on Ω. Then problem (6.82) can be rewritten as ⎧ ∂ α u − L a, q˜ u = f + cq u, in Q, ⎪ ⎪ ⎨ t ⎪ u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(0) = u0, in Ω. ⎩ Then the weak solution u satisfies ∫ t u(t) = F(t)u0 + E(t − s)( f (s) + cq u(s))ds,

∀t > 0,

(6.83)

0

where the solution operators F and E are associated with the elliptic operator L a, q˜ . ∞ by u0 = 0 and Now we define a sequence {un }n=0 ∫ t n+1 u = F(t)u0 + E(t − s)( f (s) + cq un (s))ds, n = 0, 1, 2, . . . 0 ∞ converges in L 2 (I; L 2 (Ω)). In fact, with v n+1 = u n+1 − u n , The sequence {un }n=0 n = 0, 1, . . ., we obtain v 1 = u1 and ∫ t E(t − s)v n (s)ds, n = 1, 2, . . . . v n+1 (t) = cq 0

Let c_0 = ‖u^1‖_{C(I;L²(Ω))}. By Theorem 6.4(iv) and Theorem 3.5,
‖v^{n+1}(t)‖_{L²(Ω)} ≤ (c_q/Γ(α)) ∫_0^t (t − s)^{α−1} ‖v^n(s)‖_{L²(Ω)} ds.

By means of mathematical induction, we obtain v n (t) L 2 (Ω) ≤ cqn−1 c0 Γ((n − 1)α + 1)−1 t (n−1)α,

0 < t < T, n = 2, . . . .

Hence, ‖v^n‖_{C(I;L²(Ω))} ≤ (c_q T^α)^{n−1} c_0 Γ((n − 1)α + 1)^{−1}, n = 2, 3, . . . .
Since u^n = Σ_{k=1}^n v^k, the series Σ_{k=1}^∞ v^k converges: by Stirling's formula (2.7),
lim_{k→∞} [(c_q T^α)^k c_0 Γ(kα + 1)^{−1}] / [(c_q T^α)^{k−1} c_0 Γ((k − 1)α + 1)^{−1}] = lim_{k→∞} c_q T^α Γ((k − 1)α + 1)/Γ(kα + 1) = 0 < 1.

6.5 Maximum Principles

241

to conclude u2 ≥ 0 in Q. Repeating the arguments yields un ≥ 0 in Q for all n ∈ N.  Since un → u in C(I; L 2 (Ω)), u ≥ 0 holds a.e. in Q. Theorem 6.25 directly yields the following comparison property. Corollary 6.4 Let u0,1, u0,2 ∈ L 2 (Ω) and f1, f2 ∈ L 2 (I; L 2 (Ω)) satisfy u0,1 ≥ u0,2 in Ω and f1 ≥ f2 in Q, respectively, and the coefficients ai j and q be smooth. Let ui be the solution to problem (6.82) with data u0,i and source fi , i = 1, 2. Then there holds u1 (x, t) ≥ u2 (x, t) a.e. in Q. Now we fix f ≥ 0 and u0 ≥ 0, and denote by u(q) the weak solution to problem (6.82) with the potential q. Then the following comparison principle holds. Corollary 6.5 Let f ∈ L 2 (I; L 2 (Ω)) and u0 ∈ L 2 (Ω), with f ≥ 0 in Q and u0 ≥ 0 a.e. in Ω, and the coefficients ai j be smooth. Let q1, q2 ∈ C(Ω) satisfy q1 ≥ q2 in Ω, and u(qi ) be the solution to problem (6.82) corresponding to qi . Then u(q1 ) ≤ u(q2 ) in Q. Proof Let w = u(q1 ) − u(q2 ). Then w satisfies ⎧ ∂ α w − L a,q1 w = −(q1 − q2 )u(q2 ), ⎪ ⎪ ⎨ t ⎪ w = 0, on ∂L Q, ⎪ ⎪ ⎪ w(0) = 0, in Ω. ⎩

in Q,

Since f ≥ 0 in Q and u0 ≥ 0 in Ω, Theorem 6.25 implies u(q2 ) ≥ 0 in Q. Hence,  (q1 − q2 )u(q2 ) ≥ 0 in Q. Now invoking Theorem 6.25 leads to w ≤ 0 in Q. One can strengthen the weak maximum principle to a nearly strong one for the homogeneous problem, but the version as is known for the standard diffusion equation remains elusive. First, we state an auxiliary lemma [LRY16, Lemma 3.1]. Lemma 6.11 Let G(x, y, t) be the Green function, defined by G(x, y, t) =



Eα,1 (−λn t α )ϕn (x)ϕn (y).

n=1

Then for any fixed x ∈ Ω and t > 0, there hold G(x, ·, t) ∈ L 2 (Ω)

and G(x, ·, t) ≥ 0 a.e. in Ω.

Proof Using Green’s function G(x, y, t), the solution u is given by ∫ u(x, t) = G(x, y, t)u0 (y)dy. Ω

(6.84)

We claim that for any fixed x ∈ Ω and t > 0, G(x, ·, t) ≥ 0 a.e. in Ω. Actually, assume that there exist x1 ∈ Ω and t1 > 0 such that the Lebesgue measure of the set ω := {G(x1, ·, t1 ) < 0} ⊂ Ω is positive. Then

242

6 Subdiffusion: Hilbert Space Theory

∫ v χω (x1, t1 ) =

Ω

G(x1, y, t1 ) χω (y)dy < 0,

where χω is the characteristic function of ω satisfying χω ∈ L 2 (Ω) and χω ≥ 0.  Meanwhile, Theorem 6.24 implies v χω (x1, t1 ) ≥ 0, leading to a contradiction. Then the following version of strong maximum principle holds [LRY16, Theorem 1.1]. Note that for the standard diffusion equation, we have E x = ∅. It remains elusive to prove this for subdiffusion. Theorem 6.26 Let u be the solution to problem (6.82) with u0 ∈ L 2 (Ω), u0 ≥ 0 and u0  0, f ≡ 0. Then for any x ∈ Ω, the set E x := {t > 0 : u(x, t) ≤ 0} is at most finite. Proof By Theorem 6.7 and the continuous embedding H 2 (Ω) → C(Ω) for d ≤ 3, cf. Theorem A.3, we have u ∈ C(R+ ; C(Ω)), for u0 ∈ L 2 (Ω). By the weak maximum principle in Theorem 6.25, the solution u(x, t) ≥ 0 a.e. in Q. Thus, E x := {t > 0 : u(x, t) ≤ 0} = {t > 0 : u(x, t) = 0} coincides with the zero point set of u(x, t) as a function of t > 0. Assume that there exists some x0 ∈ Ω such that the set E x0 is not finite. Then E x0 contains at least one accumulation point t∗ ∈ [0, ∞]. We treat the three cases: (i) t∗ = ∞, (ii) t∗ ∈ R+ and (iii) t∗ = 0 separately. (i). If t∗ = ∞, then by definition, there exists {t j }∞ j=1 ⊂ E x0 such that t j → ∞ as j → ∞ and u(x0, t j ) = 0. Since u(x0, t) =



Eα,1 (−λn t α )(u0, ϕn )ϕn (x0 ),

n=1

the asymptotic of Eα,1 (z) in Theorem 3.2 implies u(x0, t) =

∞ ∞ 1 −1 −2α λ (u , ϕ )ϕ (x ) + O(t ) λn−2 (u0, ϕn )ϕn (x0 ), 0 n n 0 n Γ(1 − α)t α n=1 n=1

as t → ∞. Setting t = ti with large i in the identity, multiplying both sides by tiα and  −1 passing i to ∞, we obtain v(x0 ) = 0 and v := ∞ n=1 λn (u0, ϕn )ϕn . Then the function v satisfies Lv = u0 in Ω with v = 0 on ∂Ω. The classical weak maximum principle for elliptic pdes indicates v ≥ 0 in Ω. Moreover, since v attains its minimum at x0 ∈ Ω, the strong maximum principle implies v ≡ 0 and thus u0 = Lv = 0, contradicting the assumption u0  0. Hence, ∞ cannot be an accumulation point of E x0 . (ii). Now suppose that the set of zeros E x0 admits an accumulation point t∗ ∈ R+ . By the analyticity of u : R+ → H 2 (Ω) ⊂ C(Ω) from Proposition 6.1, u(x0, t) is analytic with respect to t > 0. Hence, u(x0, t) vanishes identically if its zeros accumulate at some finite and nonzero point t∗ . This case reduces to case (i) and eventually leads to a contradiction.

6.5 Maximum Principles

243

(iii). Since u(x0, t) is not analytic at t = 0, we have to treat the case t∗ = 0 differently. ∞ ⊂ E such that t → 0 as i → ∞ and further, Then there exists {ti }i=1 x0 i ∫ G(x0, y, ti )u0 (y)dy = 0, i = 1, 2, . . . . u(x0, ti ) = Ω

By Lemma 6.11, G(x0, ·, t) ≥ 0 and u0 ≥ 0, we deduce G(x0, y, ti )u0 (y) = 0,

∀i = 1, 2, . . . , a.e. y ∈ Ω.

Since u0  0, G(x0, ·, ti ) vanish in the set ω := {u0 > 0} whose Lebesgue measure is positive. By (6.84), this indicates ∞

Eα,1 (−λn tiα )ϕn (x0 )ϕn (x) = 0 a.e. in ω, i = 1, 2 . . . .

n=1

Now we choose ψ ∈ C0∞ (ω) arbitrarily as the initial data of problem (6.82) (and f ≡ 0) and study ∫ uψ (x0, t) =

G(x0, y, t)ψ(y)dy =

Ω



Eα,1 (−λn t α )(ψ, ϕn )ϕn (x0 ).

(6.85)

n=1

Let ψn := (ψ, ϕn )ϕn (x0 ). The series (6.85) converges in C[0, ∞). Indeed, by Sobolev embedding H 2 (Ω) → C(Ω) for d ≤ 3 in Theorem A.3, |ϕn (x0 )| ≤ cϕn  H 2 (Ω) ≤ c Aϕn  L 2 (Ω) = cλn . Since ψ ∈ C0∞ (ω) ⊂ D(A3 ), we have |(ψ, ϕn )| = λn−3 (A3 ψ, ϕn ) ≤ λn−3  A3 ψ L 2 (Ω) ϕn  L 2 (Ω) ≤ cλn−3 ψC 6 (ω) . Since λn = O(n− d ) as n → ∞ (Weyl’s law), and 0 ≤ Eα,1 (−x) ≤ 1 on R+ cf. Theorem 3.6, we deduce 2

|Eα,1 (−λn t α )ψn | ≤ cψC 6 (ω) λn−2 ≤ cn− d . 4

Thus, for d ≤ 3, ∞

|Eα,1 (−λn t α )ψn | < ∞,

∀t > 0, ψ ∈ C0∞ (ω).

n=1

Thus, uψ (x0, t) ∈ C[0, ∞). Similarly, for  = 0, 1, 2, . . ., |(ψ, ϕn )| = λn−( +3) |(A +3 ψ, ϕn )| ≤ cλn−( +3) ψC 2(+3) (ω), implying

244

6 Subdiffusion: Hilbert Space Theory ∞

|λn ψn | ≤ cψC 2(+3) (ω)

n=1



λn−2 < ∞,

n=1

C0∞ (ω). Moreover, since Eα,β (−t) is uniformly bounded

for all  = 0, 1, 2, . . . and ψ ∈ for all t ≥ 0 (for any α ∈ (0, 1) and β ∈ R; see Theorem 3.2), we have ∞

|λn Eα,β (−λn t α )ψn | < ∞,

∀ = 0, 1, . . . ,

n=1

for any β > 0, t ≥ 0 and ψ ∈ C0∞ (ω). Utilizing the identity Eα,1+ α (z) = Γ(1 + α)−1 + zEα,1+( +1)α (z),

 = 0, 1, 2, . . . .

(6.86)

with  = 0, we split uψ (x0, t) as uψ (x0, t) =



ψn − t

n=1

α



λn Eα,α+1 (−λn t α )ψn,

n=1

where the boundedness  of the summations have been shown. Taking t = ti and letting i → ∞, we obtain ∞ n=1 ψn = 0, i.e., uψ (x0, t) =

∞ (−λn t α )Eα,α+1 (−λn t α )ψn . n=1

For t > 0, we divide both sides by −t α and take  = 1 in (6.86) to deduce −t −α uψ (x0, t) =

∞ ∞ 1 λ n ψn − t α Eα,2α+1 (−λn t α )ψn . Γ(1 + α) n=1 n=1

Again taking t = ti and then passing i → ∞ yields uψ (x0, t) =

∞

n=1

λn ψn = 0, and thus

∞ (−λn t α )2 Eα,2α+1 (−λn t α )ψn . n=1

Repeating the process and by mathematical induction, we obtain uψ (x0, t) =



(−λn t α ) Eα, α+1 (−λn t α )ψn,

∀ = 0, 1, . . . .

n=1

Now it suffices to prove lim

→∞

∞ n=0

In fact, with η = λn t α ,

(−λn t α ) Eα, α+1 (−λn t α )ψn = 0,

∀t > 0.

(6.87)



(−λn t α ) Eα, α+1 (−λn t α ) = (−η)

∞ k=0

(−η)k (−η)k = , Γ(kα + α + 1) k= Γ(kα + 1) ∞

coincides with the summation after the th term in the defining series of Eα,1 (−η). Since the series is uniformly convergent with respect to η ≥ 0, we obtain lim (−λn t α ) Eα, α+1 (−λn t α ) = 0,

→∞

which together with the boundedness of independent of , we deduce uψ (x0, t) =



∞

n=1

∀n = 1, 2, . . . , ∀t ≥ 0, |ψn |, yields (6.87). Since uψ (x0, t) is

Eα,1 (−λn t α )ψn = 0,

t ≥ 0, ∀ψ ∈ C0∞ (ω),

n=1

∫∞

α−1

Since by Lemma 3.2, 0 e−zt Eα,1 (−λt α )dt = zzα +λ (which is analytically extended to (z) > 0), and the above series for uψ converges in C[0, ∞), we derive z α−1

∞ n=1

i.e.,

∞ n=1

ψn = 0, z α + λn

ψn = 0, ζ + λn

∀(z) > 0, ψ ∈ C0∞ (ω),

(ζ) > 0, ∀ψ ∈ C0∞ (ω).

(6.88)

By a similar argument for the convergence of (6.85), we deduce that the series in ∞ , and the analytic continu(6.88) is convergent on any compact set in C \ {−λn }n=1 ∞ ation in ζ yields that (6.88) holds for ζ ∈ C \ {−λn }n=1 . Especially, since the first eigenvalue λ1 is simple, we can choose a small circle around −λ1 which does not contain −λn (n ≥ 2). Integrating (6.88) on this circle yields ψ1 = (ϕ1, ψ)ϕ1 (x0 ) = 0,

ψ ∈ C0∞ (ω).

Since ψ ∈ C0∞ (ω) is arbitrary, ϕ1 (x0 )ϕ1 = 0 a.e. in ω, contradicting the strict positivity of the first eigenfunction ϕ1 . Therefore, t∗ = 0 cannot be an accumulation point of E x0 . In summary, for any x ∈ Ω, E x cannot possess any accumulation point, i.e., the  set E x is at most finite. This completes the proof. Corollary 6.6 Let u0 ∈ L 2 (Ω) satisfy u0 > 0 a.e. in Ω, f ≡ 0, and u be the solution to problem (6.82). Then u > 0 in Q. Proof Recall that the solution u allows a pointwise definition if u0 ∈ L 2 (Ω), and f ≥ 0 in Ω × (0, ∞). Assume that there exists ∫x0 ∈ Ω and t0 > 0 such that u(x0, t0 ) = 0. Employing the representation (6.84), Ω G(x0, y, t0 )u0 (y)dy = 0. Since G(x0, ·, t0 ) ≥ 0 by Lemma 6.11 and u0 > 0, there holds G(x0, ·, t0 ) = 0, that is



Eα,1 (−λn t0α )ϕn (x0 )ϕn = 0 in Ω.

n=1 ∞ is an orthonormal basis in L 2 (Ω), we obtain Since {ϕn }n=1

Eα,1 (−λn t0α )ϕn (x0 ) = 0,

n = 1, 2, . . . ,

especially Eα,1 (−λ1 t0α )ϕ1 (x0 ) = 0. However, this leads to a contradiction, since Eα,1 (−λ1 t0α ) > 0, cf. Corollary 3.2, and ϕ1 (x0 ) > 0. Therefore, such a pair (x0, t0 ) cannot exist.  The last corollary combines Theorem 6.26 with Theorem 6.25 [LY17, Corollary 2.2]. Corollary 6.7 For Ω ⊂ Rd , d = 1, 2, 3, let u0 ≥ 0 a.e. in Ω, u0  0, and f ≡ 0. Then the weak solution u to problem (6.82) satisfies u ∈ C(I; C(Ω)) and for each x ∈ Ω the set {t : t > 0 ∩ u(x, t) ≤ 0} is at most finite. Proof If q ≥ 0, the statement is already proved in Theorem 6.26. It suffices to show the statement without the condition q ≥ 0. By Theorem 6.7, under the given conditions, the weak solution u to problem (6.82) belongs to C(I; H 2 (Ω)). For d ≤ 3, Sobolev embedding implies u ∈ C(I; C(Ω)). Now let v be the weak solution to problem (6.82) with the coefficient q + qC(Ω) and f ≡ 0. Since u0 ≥ 0 and q + qC(Ω) ≥ 0 hold, by Theorem 6.26, there holds that for an arbitrary but fixed x ∈ Ω, there exists an at most finite set E x such that v(x, t) ≤ 0, t ∈ E x . Since q ≤ q + qC(Ω) in Ω, Corollary 6.5 leads to the inequality u(x, t) ≥ v(x, t), which implies the desired assertion.  For the standard parabolic problem, i.e., α = 1, there holds E x = ∅. This holds also for subdiffusion with q ≡ 0 [LY19b, Theorem 9]. Theorem 6.27 Let f ≡ 0, u0 ∈ D(Aγ ), with γ > d4 , and u0 ≥ 0 and u0  0. Then the solution u is strictly positive, i.e., u(x, t) > 0 for any (x, t) ∈ Q. The regularity condition on u0 implies u ∈ C(I; C(Ω)). The proof employs a weak Harnack inequality [Zac13b, Theorem 1.1]. For arbitrary fixed δ ∈ (0, 1), t0 ≥ 0, r > 0, τ > 0 and x0 ∈ Ω, let B(x0, r) := {x ∈ Rd : |x − x0 | < r } and 2

Q− (x0, t0, r) := B(x0, δr) × (t0, t0 + δτr α ), 2

2

Q+ (x0, t0, r) := B(x0, δr) × (t0 + (2 − δ)τr α , t0 + 2τr α ). By |Q− (x0, t0, r)| denotes the Lebesgue measure of the set Q− (x0, t0, r) in Rd × R. The proof of the lemma is lengthy: it relies on a Yosida regularization of the singular kernel (cf. Lemma 6.1), and uses Moser iteration technique and a lemma of E. Bombieri and E. Giusti. Thus, we refer interested readers to the original paper for the complete proof.



Lemma 6.12 Let 0 < δ < 1, τ > 0 be fixed. Then for any t0 ≥ 0, 0 < p
0 with t0 + 2τr α ≤ T and B(x0, 2r) ⊂ Ω. Then there holds ∫ p1 |Q− (x0, t0, r)| −1 u(x, t) p dxdt ≤c inf u(x, t). (x,t)∈Q+ (x0,t0,r)

Q− (x0,t0,r)

Now we can give the proof of Theorem 6.27. Proof The proof proceeds by contradiction. Assume that there exists (x0, t0 ) ∈ Q such that u(x0, t0 ) = 0. Choose r > 0 sufficiently small so that B(x0, 2r) ⊂ Ω. Now 2 set t˜0 = t0 − s > 0, for a sufficiently small s > 0, and τ = sr − α (2 − δ2 )−1 . Then 2 2 τr α = s(2 − δ2 )−1 is sufficiently small, so that t˜0 + 2τr α ≤ T. Since (2 − δ)τr α = (2 − δ)s(2 − δ2 )−1 < s 2

we have

and

2τr α = 2s(2 − δ2 )−1 > s,

2

2

2

t0 ∈ (t˜0 + (2 − δ)τr α , t˜0 + 2τr α ). Hence, (x0, t0 ) ∈ Q+ (x0, t˜0, r). By Theorem 6.25, for u0 ≥ 0, the inequality u(x, t) ≥ 0 holds on Q. Therefore, inf

(x,t)∈Q+ (x0, t˜0,r)

u(x, t) = 0.

Lemma 6.12 yields that there exists t1 > 0 such that u(x, t) = 0,

(x, t) ∈ B(x0, δr) × (t˜0, t˜0 + t1 ).

By the uniqueness of u from Theorem 6.9, we obtain u(x, t) = 0 for x ∈ Ω and 0 ≤ t ≤ T. This contradicts the condition u0  0, and completes the proof. 

6.6 Inverse Problems In this section, we discuss several inverse problems for subdiffusion: given partial information about the solution to the subdiffusion model (6.17), we seek to recover one or several unknown problem data, e.g., boundary condition, initial condition, fractional order, or unknown coefficient(s) in the model. Such problems arise in many practical applications, and represent a very important class of applied problems. Mathematically they exhibit dramatically different features than direct problems analyzed so far in the sense that inverse problems are often ill-posed in the sense of Hadamard: the solution may not exist and may be nonunique, and if it does exist, it does not depend continuously on the given problem data. Consequently, they require different solution techniques. The goal of this section is to give a flavor of inverse problems, by illustrating the following model problems: backward subdif-

248

6 Subdiffusion: Hilbert Space Theory

fusion, inverse source problem, order determination and inverse potential problem. The literature on inverse problems for subdiffusion is vast; see the introductory survey [JR15], and reviews [LLY19b] (inverse source problems), [LY19a] (fractional orders) and [LLY19a] (inverse coefficient problem) for further pointers. The standard reference for inverse problems for pdes is [Isa06]. The discussions below use extensively various properties of Mittag-Leffler function Eα,β (z), cf. (3.1) in Chapter 3, especially the complete monotonicity in Theorem 3.5. Thus, the discussion is largely restricted to the case of time-independent coefficients, and the extension to more general case is largely open.

6.6.1 Backward subdiffusion Backward subdiffusion is one classical example of inverse problems for subdiffusion. Consider the following subdiffusion problem (with α ∈ (0, 1)) ⎧ ∂ α u = Δu, in Q, ⎪ ⎪ ⎨ t ⎪ u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(0) = u0, in Ω. ⎩

(6.89)

This problem has a unique solution u that depends continuously on u0 , e.g., u(t) L 2 (Ω) ≤ u0  L 2 (Ω), in view of the smoothing property of the operator F(t) in Theorem 6.4(iv). Backward subdiffusion reads: Given the terminal data g = u(T), with T > 0 fixed, can one ∞ of the Dirichlet −Δ, the recover the initial data u0 ? With the eigenpairs {(λn, ϕn )}n=1 solution u to problem (6.89) is given by u(x, t) =



(u0, ϕn )Eα,1 (−λn t α )ϕn (x).

(6.90)

n=1

Thus, the exact terminal data g = u(T) is given by g=



(u0, ϕn )Eα,1 (−λnT α )ϕn,

n=1

and u0 and g are related by (g, ϕn ) = Eα,1 (−λnT α )(u0, ϕn ), Formally, u0 is given by

n = 1, 2, . . . .

(6.91)

6.6 Inverse Problems

249

u0 =

∞ n=1

(g, ϕn ) ϕn, Eα,1 (−λnT α )

if the series does converge in a suitable norm (note that Eα,1 (−λnT α ) does not vanish, cf. Corollary 3.3). The formula (6.91) is very telling. For α = 1, it reduces to (u0, ϕn ) = eλn T (g, ϕn ). It shows clearly the severely ill-posed nature of backward diffusion: the perturbation in (g, ϕn ) is amplified by eλn T in the recovered expansion coefficient (u0, ϕn ). Even for a small index n, (u0, ϕn ) can be huge, if T is not exceedingly small. This implies a very bad stability, and one can only expect a logarithmic-type result. For α ∈ (0, 1), by Theorem 3.2, Eα,1 (−t) decays linearly on R+ , and thus Eα,1 (−λnT α )−1 grows only linearly in λn , i.e., Eα,1 (−λnT α )−1 ∼ T α λn, cf. Theorem 3.6, which is very mild compared to eλn T for α = 1, indicating a much better behavior. Thus, |(u0, ϕn )| ≤ c|(g, λn ϕn )| or |(u0, ϕn )| ≤ c|(g, Δϕn )|, and by integration by part twice, if g ∈ H 2 (Ω), |(u0, ϕn )| ≤ c|(Δg, ϕn )|. Roughly, backward subdiffusion amounts to two spatial derivative loss. More precisely, we have the following stability estimate [SY11, Theorem 4.1]. Theorem 6.28 Let T > 0 be fixed, and α ∈ (0, 1). For any g ∈ H 2 (Ω), there exists a unique u0 ∈ L 2 (Ω) and a weak solution u ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)) to problem (6.89) such that u(T) = g, and c1 u0  L 2 (Ω) ≤ u(T) H 2 (Ω) ≤ c2 u0  L 2 (Ω) . Proof The second inequality is already proved in Theorem 6.7. For g ∈ H 2 (Ω), c1 g H 2 (Ω) ≤

∞ n=1

By Corollary 3.3, Eα,1 (−λn cn =

tα)

2 λn2 (g, ϕn )2 ≤ c2 g H 2 (Ω) .

does not vanish on R+ . Thus, we may let

(g, ϕn ) Eα,1 (−λnT α )

and u0 =



cn ϕn .

n=1

Clearly, the solution u(x, t) to problem (6.89) with this choice satisfies g = u(·, T). Further, by Theorem 3.6, ∞ n=1

cn2 ≤

∞ n=1

(g, ϕn )2 (1 + Γ(1 − α)λnT α )2 ≤ cT,α



(1 + λn2 )(g, ϕn )2 .

n=1

Thus, there holds, u0  L 2 (Ω) ≤ cu(·, T) H 2 (Ω), showing the first inequality.



250

6 Subdiffusion: Hilbert Space Theory

Thus, backward subdiffusion is well-posed for g ∈ H 2 (Ω) and u0 ∈ L 2 (Ω). In practice, however, g is measured and corrupted by rough noise, and it should be studied for g ∈ L 2 (Ω), and thus is ill-posed, i.e., the solution may not exist, and even if it does exist, it is unstable with respect to perturbations in the data g. Indeed, the forward map from the initial data u0 to the terminal data u(T) is compact in L 2 (Ω): for u0 ∈ L 2 (Ω), Theorem 6.7 implies u(T) ∈ H 2 (Ω), and the space H 2 (Ω) is compactly embedded into L 2 (Ω), cf. Theorem A.3. Ill-posedness of this type is characteristic of many inverse problems, and specialized solution techniques are required to obtain reasonable approximations. It is instructive to compare the problem for subdiffusion and normal diffusion. Intuitively, for the backward problem, the history mechanism of subdiffusion retains the complete dynamics of the physical process all the way to time t = 0, including u0 . The parabolic case has no such memory effect and hence, the coupling between the current and previous states is weak. The big difference between the cases 0 < α < 1 and α = 1 might lead to a belief that “Inverse problems for fdes are always less ill-conditioned than their classical counterparts.” or “Models based on the fractional derivative paradigm can escape the curse of being strongly ill-posed,” However, this statement turns out to be quite false. 100

100

T=0.001 T=0.01 T=0.1 T=1

10-5

k

k

10-2

T=0.001 T=0.01 T=0.1 T=1

10-10

10-4

10-6

0

20

40

60 k

80

100

10-15

0

50

100

k

Fig. 6.2 The singular value spectrum of the forward map F from the initial data to the final time data for backward (sub)diffusion, for (a) α = 12 and (b) α = 1, at four different time instances.

A relatively complete picture for an inverse problem can be gained by the singular value spectrum of the forward map. Fig. 6.2 shows the singular value spectra for α = 1 and α = 12 . When α = 1, the singular values σk decay exponentially to zero, which agrees with the factor eλn T , but there is considerable difference with the change of time scale. For T = 1 there is likely at most one usable singular value, and still only a handful at T = 0.1. In the fractional case, due to the very different decay rate, there are a considerable number of singular values available even for larger times. However, there is still the initially fast decay, especially as α tends to zero. It is instructive to compare the singular value spectrum at a fixed time T. At T = 0.01, the singular values with smaller indices (those less than about 25) are larger for α = 1 than α = 0.5, and thus at this value for T, normal diffusion should allow superior

6.6 Inverse Problems

251

reconstructions (under suitable conditions on the noise in g, of course). The situation reverses when singular values of larger indices are considered and quickly shows the infeasibility of recovering high-frequency information for backward diffusion, whereas in the fractional case many more Fourier modes might be attainable (again depending on the noise in the data).

6.6.2 Inverse source problems Now we focus on the basic subdiffusion model (with α ∈ (0, 1)) ⎧ ∂ α u − Δu = f , in Q, ⎪ ⎪ ⎨ t ⎪ u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(0) = u0, in Ω, ⎩

(6.92)

with suitable boundary condition. The inverse problem of interest is to recover the source term f from additional data. We shall not attempt a general f , since we are looking at only final time data or lateral boundary data (or internal data). In order to have a unique determination, we look for only a space- or time-dependent component of f . By the linearity of problem (6.92), we may assume u0 = 0. There is a “folklore” theorem for the standard diffusion equation: inverse problems where the data is not aligned in the same direction as the unknown are almost surely severely ill-posed, but usually only mildly so when these directions do align. There is a huge body of literature on inverse source problems, with different combinations of the data and unknowns; Inverse source problems for the classical diffusion equation have been extensively studied; see e.g., [Can68, Can84, IY98, CD98]. First, we consider the case of recovering the component p(t) in f (x, t) = q(x)p(t) from the function u at a fixed point x0 ∈ Ω (or the flux at the boundary). The exact data g(t) is readily available: g(t) =

∞ ∫ n=1

0

t

(t − s)α−1 Eα,α (−λn (t − s)α )p(s) ds (q, ϕn )ϕn (x0 ).

In particular, if q = ϕn for some n ∈ N then the problem becomes ∫ t R(t − s)p(s) ds, g(t) = 0

with R(t) = t α−1 Eα,α (−λn t α ). This is a Volterra integral equation of the first kind for p(t), and the behavior of the solution hinges on the behavior of t α−1 Eα,α (−λn t α ) near t = 0. Since Eα,α (−λn t α ) → Γ(α)−1 , as t → 0+, it is weakly singular with the leading singularity t α−1 . In particular, applying the operator ∂tα to both sides and then using the differentiation formula

252

6 Subdiffusion: Hilbert Space Theory

∂tα



t

s

α−1

0



α

Eα,α (−λs )g(t − s)ds = g(t) − λ

lead to g(t) ˜ = (∂tα g)(t) = p(t) − λn

t

0



t

s α−1 Eα,α (−λs α )g(t − s)ds, (6.93)

R(t − s)p(s) ds,

0

which is a Volterra integral equation of the second kind, where the kernel R is integrable. Thus, following the method of successive approximations (see, e.g., the proof of Proposition 4.4), we can prove that it has a unique solution p(t) that depends continuously on R and g, ˜ and further, a bound can be derived using Gronwall’s inequality in Theorem 4.2 in Section 4.1. Hence, the inverse problem is mildly ill-posed requiring only a fractional derivative of order α loss on the data g and p L ∞ (I) ≤ c∂tα g L ∞ (I) . The assertion holds in the general case. Consider the following initial boundary value problem ⎧ ∂ α u = Δu + q(x)p(t), ⎪ ⎪ ⎨ t ⎪ u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(·, 0) = 0, in Ω. ⎩

in Q, (6.94)

Then the inverse source problem reads: given x0 ∈ Ω, determine p(t), t ∈ I from u(x0, t), t ∈ I. The following result gives a two-sided stability estimate [SY11, Theorem 4.4]. The regularity requirement in Theorem 6.29 is slightly lower than that stated in [SY11, Theorem 4.4], which was obtained in [LLY19b, Theorem 1]. Theorem 6.29 Let q ∈ H γ (Ω), with γ > problem (6.94) with p ∈ C(I) satisfies

d 2,

q(x0 )  0. Then the solution u to

c0 ∂tα u(x0, ·) L ∞ (I) ≤ pC(I) ≤ c1 ∂tα u(x0, ·) L ∞ (I) . Proof The first estimate is direct from Section 6.2. Since p ∈ C(I) and q ∈ H γ (Ω), we obtain ∞ ∫ t p(s)(q, ϕn )(t − s)α−1 Eα,α (−λn (t − s)α )dsϕn in L 2 (I; H 2 (Ω)). u= n=1

0

By the differentiation formula (6.93), we deduce ∂tα u = pq +



∫ (−λn )

n=1

0

t

p(s)(q, ϕn )(t − s)α−1 Eα,α (−λn (t − s)α )dsϕn,

in L 2 (Q). Consequently, 1  α ∂ u(x0, t) + p(t) = q(x0 ) t

∫ 0

t

 K(x0, s)p(t − s)ds ,

(6.95)

6.6 Inverse Problems

253

 α with K(x, t) = t α−1 ∞ n=1 λn Eα,α (−λn t )(q, ϕn )ϕn (x). Let  := the asymptotics of Eα,α (z) in Theorem 3.2, we have K(·, t) 2

H

≤ c2 t

d + 2 (Ω)

2( 2 α−1)

= t 2(α−1)



+ d4

Eα,α (−λn t α )| 2 |λn

d 4



> 0. Then by

(q, ϕn )| 2

n=1

 ∞ (λn t α )1− 2 2

n=1

1− 2

|λn

γ 2

1 + λn



γ



|λn2 (q, ϕn )| 2 ≤ (cq H γ (Ω) t 2 α−1 )2 .

d

Using the Sobolev embedding H 2 + (Ω) → C(Ω), we obtain |K(x0, t)| ≤ K(·, t)C(Ω) ≤ cK(·, t) 

H

α

d + 2 (Ω)

≤ cq H γ (Ω) t 2 −1 .

Consequently, |p(t)| ≤ c∂tα u(x0, t) L ∞ (I) + cq H γ (Ω)



t



s 2 α−1 |p(t − s)|d.

0

Applying the Gronwall inequality in Theorem 4.1 yields the second identity.



Fig. 6.3 Singular value spectrum for the inverse source problem of recovering p(t) from ∂x u(0).

Figure 6.3 shows the singular value spectrum for the forward map. There is a slight increase in ill-posedness as α increases, which agrees with the stability estimate: the recovery amounts to taking the αth-order fractional derivative. Thus, fractional diffusion can mitigate the degree of ill-posedness of the concerned inverse problem. For α close to zero, it effectively behaves as if it were well-posed. Next, we look at the case of recovering a space-dependent (and time-independent) source f (x) from the time trace of the solution u at a fixed point x0 ∈ Ω. The solution u to the direct problem is given by

254

6 Subdiffusion: Hilbert Space Theory

u(x, t) =



  λn−1 1 − Eα,1 (−λn t α ) ( f , ϕn )ϕn (x),

n=1

and hence, g(t) =



  λn−1 ϕn (x0 ) 1 − Eα,1 (−λn t α ) ( f , ϕn ).

n=1

This formula is very informative. First, the choice of the point x0 has to be strategic in the sense that it should satisfy the condition ϕn (x0 )  0 for all n ∈ N. If ϕn (x0 ) = 0, then the nth mode ( f , ϕn ) is not recoverable. The condition ϕn (x0 )  0 is almost impossible to arrange in practice: for Ω = (0, 1), the Dirichlet eigenfunctions are ϕn (x) = sin(nπx) and all points of the form qπ for q rational must be excluded for x0 in order to satisfy the condition. Second, upon simple algebraic operations, the ∞ . These inverse problem amounts to representing g(t) in terms of {Eα,1 (−λn t α )}n=1 functions all decay to zero polynomially in t, and thus almost linearly dependent, indicating the ill-posed nature of the inverse problem. This shows a stark contrast in the degree of ill-posedness between recovering a time and a space-dependent source component from the same time-dependent trace data. It is also very much in line with the “folklore” theorem. To gain further insights, consider Ω = (0, 1) and ⎧ ∂tα u = ∂xx u + f , in Q, ⎪ ⎪ ⎪ ⎪ ⎨ ∂x u(0, ·) = 0, in I, ⎪ ⎪ −∂x u(1, ·) = 0, in I, ⎪ ⎪ ⎪ ⎪ u(·, 0) = u0, in Ω. ⎩

(6.96)

For u0 ∈ L 2 (Ω) and f ∈ L 2 (Ω), there exists a unique solution u ∈ C(I; L 2 (Ω)) ∩ ∞ be the eigenpairs of the Neumann Laplacian C(I; H 2 (Ω)). Indeed, let {(λn, ϕn )}n=0 on the domain Ω. Since λ0 = 0 (with ϕ0 (x) ≡ 1), the solution u is given by u(x, t) =

∞ (u0, ϕn )Eα,1 (−λn t α )ϕn (x) n=0

+

∞ tα ( f , ϕ0 ) + λn−1 ( f , ϕn )(1 − Eα,1 (−λn t α ))ϕn (x). Γ(α + 1) n=1

The inverse problem is to determine f (x) from the lateral trace data g(t) = u(0, t) for t ∈ I. The next result gives an affirmative answer to the question [ZX11, Theorem 1]. The proof indicates that the data to the unknown relation in the inverse source problem amounts to analytic continuation, which is known to be severely ill-posed. Theorem 6.30 The data g(t) uniquely determines the source term f (x). Proof The desired uniqueness follows directly from Lemma 6.13 below.



6.6 Inverse Problems

255

Lemma 6.13 Let w, w  ∈ C(I; L 2 (Ω)) ∩ C(I; H 2 (Ω)) be solutions of (6.96) for with u0 , respectively. Then w(0, t) = w (0, t) for f ≡ 0 and the initial conditions u0 and  u0 (x) in L 2 (Ω). t ∈ I implies u0 (x) =  ∞ of the Neumann Laplacian form a complete Proof Since the eigenfunctions {ϕn }n=0 2 orthogonal basis in L (Ω),the initial conditions can be represented by u0 (x) = ∞ u0 (x) = ∞ n=0 an ϕn (x) and  n=0 a˜ n ϕn (x). It suffices to show an = a˜ n , n = 0, 1, . . ., given the data w(0, t) = w (0, t). Since w(0, t) = w (0, t) for t ∈ I, we have ∞

an Eα,1 (−λn t α ) =

n=0



a˜n Eα,1 (−λn t α ),

t ∈ I.

n=0

Moreover, by the Riemann-Lebesgue lemma and analyticity of the Mittag-Leffler function, both sides are analytic in t > 0. By the unique continuation property of real analytic functions, we have ∞

an Eα,1 (−λn t α ) =

n=0



a˜n Eα,1 (−λn t α ),

Since an → 0, by Theorem 3.6, |e

−t(z)

∀t ≥ 0.

n=0



 α

an Eα,1 (−λn t )| ≤ e

−t(z)

|a0 | +

n=0

≤ ce−t(z) t −α

∞ Γ(1 + α)|an | λn t α n=1



λn−1 ≤ ct −α e−t(z),

n=1

and the function e−t(z) t −α is integrable for t ∈ R+ for fixed z with (z) > 0. By Lebesgue’s dominated convergence theorem and Lemma 3.2, we deduce ∫ 0

which implies



e−zt



an Eα,1 (−λn t α )dt =

n=0 ∞ n=0



an

n=0

bn = 0, η + λn

(η) > 0,

z α−1 , z α + λn

(6.97)

where η = z α and bn = an − a˜n . Since limn→∞ (an − a˜n ) → 0, we can analytically continue in η, so that the identity (6.97) holds for η ∈ C \ {−λn }n ≥0 . Last, we deduce b0 = 0 from the identity by taking a suitable disk which includes 0 and does not ∞ . By the Cauchy integral formula, integrating the identity along the include {−λn }n=1 a0 . Upon repeating the argument, we obtain an = a˜n , disk gives 2πi b0 = 0, i.e., a0 =  n = 1, 2, . . ., which completes the proof of the lemma. 

256

6 Subdiffusion: Hilbert Space Theory

The proof of the theorem indicates that the inverse source problem amounts analytic continuation, which is known to be severely ill-posed. In particular, it does not show any beneficial effect of subdiffusion over normal diffusion.

6.6.3 Determining fractional order This is perhaps the most obvious inverse problem for FDEs! In theory, the fractionalorder α in the subdiffusion model can be determined by the tail decay rate of the waiting time-distribution in microscopic descriptions, as the derivation in Section 1.2 indicates. However, in practice, it often cannot be determined directly, and has to be inferred from experimental data, which leads to a nonlinear inverse problem. In many cases the asymptotic behavior of the solution u can be used to determine α. To show the feasibility of the recovery, assume that in problem (6.89), u0 is taken to be ϕn , and the additional data is the flux at x0 ∈ ∂Ω, i.e., ∂u ∂ν (x0, t) = h(t), for ∂ϕ n 0 < t ≤ T . Then h(t) uniquely determines α if x0 satisfies ∂ν (x0 )  0. Indeed, h(t) is given by h(t) = Eα,1 (−λn t α )(u0, ϕn )

∂ϕn ∂ϕn (x0 ) = Eα,1 (−λn t α ) (x0 ), ∂ν ∂ν

It follows from the series expansion of Eα,1 (−λt α ) that Eα,1 (−λt α ) = 1 − λΓ(1 + α)−1 t α + O(λ2 t 2α ), Thus, one can recover α by looking at the small time behavior of h(t). Now we make this reasoning more precise. In problem (6.89), suppose that we can measure g(t) = u(x0, t) close to t = 0. The following result shows that the asymptotic of g(t) at t = 0 uniquely determines the fractional-order α [HNWY13, Theorem 1]; see [LHY20, Theorem 2] for the stability issue and [Kia20, Theorem 2.1] and [JK21, Theorem 1.2] for determining the fractional order without knowing the initial condition or medium. The formula involves taking one derivative with respect to t, which is ill-posed, even though only mildly so. It is worth noting that besides the smoothness condition, the result actually needs only the value u0 (x0 ). Theorem 6.31 In problem (6.89), let u0 ∈ C0∞ (Ω) with Δu0 (x0 )  0. Then α = lim+ t→0

t ∂u ∂t (x0, t) . u(x0, t) − u0 (x0 )

Proof Since u0 ∈ C0∞ (Ω), we deduce by integration by parts repeatedly that for any  ∈ N, there holds |(u0, ϕn )| = λn−1 |(Δu0, ϕn )| = λn− |(Δ u0, ϕn )| ≤ c()λn− . Meanwhile, by Sobolev embedding in Theorem A.3, we have for any κ >

(6.98) d 2,

6.6 Inverse Problems

257 κ

κ

ϕn  L ∞ (Ω) ≤ cϕn  H κ (Ω) ≤ c(−Δ) 2 ϕn  L 2 (Ω) ≤ cλn2 . For t ∈ I, u(x0, t) =

(6.99)

∞ (u0, ϕn )ϕn (x0 )Eα,1 (−λn t α ), n=1

which converges in C(I). Therefore, for t ∈ I ∂u (x0, t) = −λn (u0, ϕn )ϕn (x0 )t α−1 Eα,α (−λn t α ). ∂t n=1 ∞

Meanwhile, by the definition of Eα,α (z), Eα,α (−λn t α ) = Γ(α)−1 + t α rn (t), where E

(−λ t α )−Γ(α)−1

. The function rn (t) is continuous at rn (t) is defined by rn (t) = α, α nt α t = 0, the limit limt→0+ rn (t) exists, and by the complete monotonicity of Eα,2α (−t), cf. Corollary 3.2, |rn (t)| = |λn ||Eα,2α (−λn t α )| ≤ Γ(2α)−1 |λn |,

t ≥ 0, n ∈ N.

(6.100)

Consequently, ∂u 1 (x0, t) = −λn (u0, ϕn )ϕn (x0 ) ∂t Γ(α) n=1 ∞

lim+ t 1−α

t→0

+ lim+ t α t→0



−λn (u0, ϕn )ϕn (x0 )rn (t).

n=1

The second summation can be bounded using (6.98)–(6.100) and Sobolev embedding theorem by ∞ ∞   |λn | 2   |(u0, ϕn )ϕn (x0 )| −λn (u0, ϕn )ϕn (x0 )rn (t) ≤  Γ(2α) n=1 n=1



∞ κ |λn | 2 c() c|λn | 2 . Γ(2α) |λ | n n=1

For any sufficiently large  ∈ N, we have ∞     sup  −λn (u0, ϕn )ϕn (x0 )rn (t) < ∞. t ∈I

n=1

Hence, lim t 1−α

t→0+

Meanwhile, since

∂u (x0, t) = Γ(α)−1 Δu0 (x0 ). ∂t

(6.101)

258

6 Subdiffusion: Hilbert Space Theory

Eα,1 (−λn t α ) = 1 − Γ(α + 1)−1 λn t α + t 2α λn2 Eα,2α+1 (−λn t α ), we have u(x0, t) =



(u0, ϕn )ϕn (x0 ) + t α

n=1

+ t 2α



∞ −λn (u0, ϕn )ϕn (x0 ) Γ(α + 1) n=1

λn2 Eα,2α+1 (−λn t α )(u0, ϕn )ϕn (x0 )

n=1

= u0 (x0 ) + Γ(α + 1)−1 Δu0 (x0 )t α + t 2α r (t), r (t)| < ∞. Consequently, where supI | lim t −α (u(x0, t) − u0 (x0 )) = Γ(α + 1)−1 Δu0 (x0 ).

t→0+

(6.102)

Combining the identities (6.101)–(6.102), the assumption Δu0 (x0 )  0, and the recursion Γ(α + 1) = αΓ(α) complete the proof of the theorem. 

6.6.4 Inverse potential problem Parameter identifications for pdes encompass a broad range of inverse problems. The earliest contribution for subdiffusion is [CNYY09], which proves the uniqueness of simultaneously recovering the diffusion coefficient and fractional order α from the lateral data at one single spatial point, in the one-dimensional case. The proof relies on relevant results on the inverse Sturm-Liouville problem. This piece of pioneering work has inspired much further research on inverse problems for subdiffusion. See also [KOSY18] for uniqueness of the diffusion coefficient in multi-dimension. We illustrate with an inverse potential problem from the terminal data proved in [JZ21, Theorem 1.1]; see [CY97, Theorem 1] for standard parabolic problems and the works [ZZ17, KR19] for other results on the inverse problem. Specifically, let u ≡ u(q) be the solution to α ⎧ ⎪ ⎪ ∂t u = Δu + qu, in Q, ⎨ ⎪ u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(·, 0) = u0, in Ω. ⎩ The inverse problem reads: given the terminal data g, recover q ∈ L 2 (Ω) such that u(q)(·, T) = g

in Ω.

(6.103)

The direct problem for q ∈ L 2 (Ω) is not covered in Section 6.2, since so far we have assumed that the coefficients a and q are smooth. Following the work [JZ21], we illustrate how the assumption on the potential q can be relaxed. Throughout this part, A = −Δ with its domain D(A) = H 2 (Ω) and and the graph norm denoted by  · D(A) . Then given q ∈ L 2 (Ω), consider the following abstract Cauchy problem

6.6 Inverse Problems

259

∂tα u(t) + Au(t) = qu(t), u(0) = u0 .

in I,

(6.104)

We prove that for suitably smooth u0 and any q ∈ L 2 (Ω), problem (6.104) has a unique classical solution u = u(q) ∈ C α (I; L 2 (Ω)) ∩ C(I; D(A)). The analysis is based on a “perturbation” argument. Specifically, we view qu(t) as the inhomogeneous term and obtain that the solution u(t) satisfies ∫ t u(t) = U(t) + E(t − s)qu(s)ds, 0

where U(t) = F(t)u0 , and the solution operators F(t) and E(t) are associated with the operator A. Now we can specify the function analytic setting. Let 34 < γ < 1 and 0 < β < (1 − γ)α be fixed and set X = C β (I; D(Aγ )) ∩ C(I; D(A)),

(6.105)

with the norm given by vX = vC β (I;D(Aγ )) + vC(I ;D(A)) . Then for every q ∈ L 2 (Ω), we define an associated operator L(q) by ∫ t E(t − s)q f (s)ds, ∀ f ∈ C β (I; D(Aγ )). [L(q) f ](t) = 0

The next result gives the mapping property of the operator L(q). Lemma 6.14 For any q ∈ L 2 (Ω), L(q) maps C β (I; D(Aγ )), with

3 4

< γ < 1, into X.

Proof Let f ∈ C β (I; D(Aγ )), q ∈ L 2 (Ω) and let g(t) = q f (t), 0 ≤ t ≤ T . We split the function L(q) f into two terms L(q) f = v1 + v2 , with ∫ t ∫ t v1 (t) = E(t − s)(g(s) − g(t))ds and v2 (t) = E(t − s)g(t)ds. 0

0

Since f ∈ C β (I; D(Aγ )), by Sobolev embedding theorem, g ∈ C β (I; L 2 (Ω)), and thus by Lemma 6.4, v1 ∈ C β (I; D(A)) ⊂ X. Next, for t ∈ I, τ > 0 such that t + τ ≤ T, ∫ t+τ ∫ t E(s)g(t + τ)ds + E(s)[g(t + τ) − g(t)]ds. v2 (t + τ) − v2 (t) = t

0

Thus, by Theorem 6.4, we deduce γ



 A (v2 (t + τ) − v2 (t)) L 2 (Ω) ≤ cα,γ gC(I ;L 2 (Ω))

t+τ

s(1−γ)α−1 ds ∫ t s(1−γ)α−1 ds. + cα,γ τ β gC β (I ;L 2 (Ω)) t

0

Since

∫ t+τ t

s(1−γ)α−1 ds ≤ ((1 − γ)α)−1 τ (1−γ)α , we obtain

260

6 Subdiffusion: Hilbert Space Theory

τ −β  Aγ [v2 (t + τ) − v2 (t)] L 2 (Ω) ≤((1 − γ)α)−1 cα,γ (τ (1−γ)α−β + T (1−γ)α )gC β (I;L 2 (Ω)) . Since β < (1 − γ)α, v2 ∈ C β (I; L 2 (Ω)).∫ It remains to show Av2 ∈ C(I; L 2 (Ω)). This t follows from the identity −Av2 (t) = − 0 AE(t − s)g(t)ds = (F(t) − I)g(t), in view of Lemma 6.2. Since F(t) − I is continuous on L 2 (Ω), cf. Lemma 6.3, the desired assertion follows. This completes the proof of the lemma.  Lemma 6.15 If q ∈ L 2 (Ω), then I − L(q) is boundedly invertible in X. Proof The proof proceeds by the argument of equivalent norm family, following [Bie56]: we equip X with an equivalent family of norms  · λ , λ ≥ 0, defined by  f λ = sup e−λt [ f (t) L 2 (Ω) +  Aγ f (t) L 2 (Ω) ] t ∈I

+

sup

0≤s 0 with t + τ ≤ T, we have ∫ t v(t + τ) − v(t) = E(s)q[ f (t + τ − s) − f (t − s)]ds 0 ∫ t+τ E(s)q f (t + τ − s)ds, + t

which directly implies

6.6 Inverse Problems

261

e−λ(t+τ+1)  Aγ (v(t + τ) − v(t)) L 2 (Ω) ∫ ∫ t ≤ cτ β q L 2 (Ω)  f λ e−λs s(1−γ)α−1 ds + τ −β

t

0

t+τ

e−λ(s+1) s(1−γ)α−1 ds .

This and the choice β < (1 − γ)α give   e−λ(t+τ+1) τ −β  Aγ (v(t + τ) − v(t)) L 2 (Ω) ≤ cq L 2 (Ω)  f λ λ−(1−γ)α + e−λ . In the same way, we deduce τ −β e−λ(t+τ+1) v(t + τ) − v(t) L 2 (Ω) ≤ cT q L 2 (Ω)  f λ (λ−α + e−λ ). Combining the preceding two estimates gives sup

0≤s 0 such that 1 ≤ Let u0 ∈ D(A1+γ ), with μ0 λ1 ϕ¯1 ≤ −Δu0 ≤ μ1 λ1 ϕ¯1

μ1 μ0

(1− )α cα .


0 depending μ1 α only on (1− )μ0 and α such that if λ1T < θ, then there is V, a neighborhood of 0 in 2 L (ω) and a constant c such that q1 − q2  L 2 (ω) ≤ cu(q1 )(T) − u(q2 )(T)D(A),

∀q1, q2 ∈ V .

Remark 6.8 The regularity condition u0 ∈ D(A1+γ ) is to ensure the well-posedness of the direct problem with q ∈ L 2 (Ω). The condition (6.107) is to ensure pointwise lower and upper bounds on the solution u(0)(T), and the set of u0 satisfying (6.107) is a convex subset of D(A1+γ ). The condition λ1T α < θ dictates that either T or λ1 should be sufficiently small, the latter of which holds if the domain Ω is large, since λ1 tends to zero as the volume of Ω tends to infinity. Below we assume that u0 satisfies the condition of Theorem 6.32. In view of the Sobolev embedding D(A1+γ ) → C 2,δ (Ω) for some δ > 0, the function U(t) ∈ 2+δ C 2+δ, 2 α (Q) (see Theorem 7.9 in Chapter 7), and satisfies ⎧ ∂ α U = ΔU, in Q, ⎪ ⎪ ⎨ t ⎪ U(0) = u0, in Ω, ⎪ ⎪ ⎪ U = 0, on ∂L Q. ⎩ The next result collects several properties of the function U(t). Lemma 6.16 The following properties hold on the function U(t). (i) μ0 ϕ¯1 (x) ≤ u0 (x) ≤ μ1 ϕ¯1 (x), x ∈ Ω. (ii) μ0 Eα,1 (−λ1 t α )ϕ¯1 (x) ≤ U(x, t) ≤ μ1 Eα,1 (−λ1 t α )ϕ¯1 (x), (x, t) ∈ Ω × [0, T]. (iii) 0 ≤ −∂tα U(x, t) ≤ Δu0  L ∞ (Ω) ≤ μ1 λ1 , (x, t) ∈ Ω × [0, T]. Proof Part (i). Since −Δϕ¯1 = λ1 ϕ¯1 , we obtain Δ(μ1 ϕ¯1 − u0 ) ≤ 0. But μ1 ϕ1 − u0 is zero on the boundary ∂Ω. Then μ1 ϕ¯1 − u0 is nonnegative, by the elliptic maximum principle. The first inequality of (i) can be obtained similarly. Part (ii) follows from the maximum principle in Section 6.5. We only prove (iii). Let w(x, t) = ∂tα U(x, t). Then w satisfies ⎧ ∂ α w = Δw, in Ω × (0, T], ⎪ ⎪ ⎨ t ⎪ w(0) = Δu0, in Ω, ⎪ ⎪ ⎪ w = 0, on ∂Ω × [0, T]. ⎩



By assumption, Δu0 ≤ 0 in Ω, and thus by the maximum principle, 0 ≤ −w(x, t) ≤  Δu0  L ∞ (Ω) . This implies assertion (iii). Let ω be defined as in Theorem 6.32. Lemma 6.16(ii) implies that U(T1) |ω extended by zero outside ω belongs to L ∞ (Ω). We define an operator PT : L 2 (ω) → L 2 (ω) by ∫ T q → −AE(T − s)U(T)−1 |ω [U(s) − U(T)]q ds + F(T)q. 0

This operator arises from the linearization of the forward map. The next result gives an upper bound on the constant cα defined in (6.106). In particular, it indicates that cα < α < 1, which is crucial for proving Theorem 6.32. Proposition 6.8 For any α ∈ (0, 1), cα := supt ≥0 tEα,α (−t) ≤

α2 π sin(απ)+απ .

Proof Let f (t) = tEα,α (−t). By the asymptotics and complete monotonicity of Eα,α (−t), f (t) is nonnegative on R+ , and tends to zero as t → ∞. Thus, there exists a maximum. Let w(t) = t α Eα,α (−t α ). By the Cristoffel-Darboux-type formula in Exercise 3.12 and a limiting argument, ∫ t d α λt Eα,α+1 (λt α )|λ=−1, s α−1 Eα,α (−s α )Eα,1 (−(t − s)α )ds = dλ 0 which upon simplification gives directly the formula ∫ t (t − s)α−1 Eα,α (−(t − s)α )αEα,1 (−s α )ds. w(t) =

(6.108)

0

Since 0 ≤ Eα,1 (−t) ≤ 1 and Eα,α (−t) ≥ 0, cf. Theorem 3.5, we deduce ∫ t w(t) ≤ α (t − s)α−1 Eα,α (−(t − s)α )ds = α(1 − Eα,1 (−t α )) < α. 0

Meanwhile, by Theorem 3.6, there holds ∫ t (t − s)α−1 Eα,α (−(t − s)α )αEα,1 (−s α )ds w(t) = 0 ∫ t (t − s)α−1 Eα,α (−(t − s)α )s−α ds ≤ αΓ(α + 1) 0

= αΓ(α + 1)

∫ t ∞ 0 k=0

Using the identity

∫t 0

(−1)k (t − s)kα+α−1 s−α ds. Γ(kα + α)

(t − s)a−1 s b−1 ds = B(a, b)t a+b−1 for a, b > 0, we deduce



w(t) ≤ αΓ(α + 1)

∞ k=0

(−1)k Γ(kα + α)Γ(1 − α) kα t Γ(kα + α) Γ(kα + 1)

≤ αΓ(α + 1)Γ(1 − α)

∞ k=0

(−1)k kα t = αΓ(1 − α)Γ(1 + α)Eα,1 (−t α ). Γ(kα + 1)

Now by the recursion identity (2.2) and reflection identity (2.3) for Γ(z), Γ(1 − απ . Combining these estimates leads to α)Γ(1 + α) = αΓ(1 − α)Γ(α) = sin(απ) sup tEα,α (−t) ≤ α max min



t ≥0

t ≥0

απ Eα,1 (−t α ), 1 − Eα,1 (−t α ) . sin(απ)

(6.109)

απ Eα,1 (−t α ) is monotonically By the complete monotonicity of Eα,1 (−t), sin(απ) α decreasing, whereas 1 − Eα,1 (−t ) is monotonically increasing. Thus, one simple upper bound is obtained by equating these two terms, which directly gives sin(απ) . Upon substituting it back to (6.109) and noting the complete Eα,1 (−t∗α ) = sin(απ)+απ  monotonicity of Eα,1 (−t), we complete the proof.
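The bound is easy to verify numerically. The Python sketch below (my own check, not from the book) scans t ∈ [0, 2], which contains the maximiser, and compares sup_t tE_{α,α}(−t) with απ/(sin(απ) + απ); the series evaluator is adequate on this short range:

```python
import numpy as np
from scipy.special import rgamma   # reciprocal Gamma function

def ml2(alpha, beta, x, K=300):
    """Taylor series of E_{alpha,beta}(-x); adequate for 0 <= x <= 2."""
    k = np.arange(K)
    return float(np.sum((-x) ** k * rgamma(alpha * k + beta)))

for alpha in (0.25, 0.5, 0.75):
    ts = np.linspace(0.0, 2.0, 801)
    c_alpha = max(t * ml2(alpha, alpha, t) for t in ts)
    bound = alpha * np.pi / (np.sin(alpha * np.pi) + alpha * np.pi)
    print(alpha, round(c_alpha, 4), round(float(bound), 4))
```

The computed maxima stay well below the stated bound, consistent with Remark 6.10 below.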

Remark 6.9 Note that the identities lim+

α→0

απ 1 = απ + sin απ 2

and

lim−

α→1

απ = 1, απ + sin απ

απ απ+sin απ

and the function f (α) = is strictly increasing in α over the interval (0, 1). Thus, the factor is strictly less than 1 for any α ∈ (0, 1). Note also that for the limiting case α = 1, the constant c1 = supt ≥0 te−t = e−1 , which is much sharper than the preceding bound. Since the function Eα,α (−t) is actually continuous in α, one may refine the bound on cα slightly for α close to unit. Remark 6.10 Proposition 6.8 provides an upper bound on cα . In Fig. 6.4(a), we plot the function α−1 tEα,α (−t) versus t. Clearly, for any fixed α, tEα,α (−t) first increases with t and then decreases, and there is only one global maximum. The maximum is always achieved at some t ∗ ∈ [0.8, 1], a fact that remains to be proved, and the απ is maximum value decreases with α. The ratio cαα versus the upper bound sin απ+απ cα shown in Fig. 6.4(b). Note that α is strictly increasing with respect to α, and the upper bound in Proposition 6.8 is about three times larger than the optimal one cαα , since the derivation employs upper bounds of Eα,1 (−t) that are valid on R+ , instead of sharper ones on a finite interval, e.g., [0, 1]. The next result gives the invertibility of the operator I − PT on L 2 (ω). Lemma 6.17 Under the assumptions of Theorem 6.32, there exists a θ > 0 depending only on α and  such that if λ1T α < θ, then the operator I − PT has a bounded inverse in B(L 2 (ω)). Proof First, we bound t AE(t). Using the eigenpairs {(λ j , ϕ j )}∞ j=1 of the operator ∞ α−1 2 α A, we deduce for v ∈ L (Ω), E(t)v = j=1 t Eα,α (−λ j t )(ϕ j , v)ϕ j . Thus,


‖tAE(t)v‖²_{L²(Ω)} = Σ_{j=1}^∞ (λ_j t^α E_{α,α}(−λ_j t^α))² (v, ϕ_j)².

Fig. 6.4 (a) The function α^{−1} t E_{α,α}(−t) versus t for α = 0.1, 0.3, 0.5, 0.7, and (b) its maximum c_α/α versus the upper bound απ/(απ + sin απ) in Proposition 6.8.

By Proposition 6.8,  AE(t) ≤ cα t −1 . Meanwhile, using the governing equation for U(t), we have U(t) − U(0) = 0 Itα ΔU(t). This and the fact ΔU(x, t) ≤ 0 imply U(t) − U(T) =(0 Itα ΔU)(t) − (0 Itα ΔU)(T) ∫ t 1 [(T − s)α−1 − (t − s)α−1 ](−ΔU(s))ds = Γ(α) 0 ∫ T 1 + (T − s)α−1 (−ΔU(s))ds. Γ(α) t Since (T − s)α−1 − (t − s)α−1 ≤ 0 and −ΔU(x, t) ≥ 0 in Ω × [0, T], by Lemma 6.16(iii) ∫ T 1 (T − t)α U(t) − U(T) ≤ μ1 λ1 . (T − s)α−1 (−ΔU(s))ds ≤ Γ(α) t Γ(α + 1) Similarly, U(T) − U(t) ≤

1 Γ(α)



t

[(t − s)α−1 − (T − s)α−1 ](−ΔU(s))ds

0

≤ μ1 λ1 Γ(α + 1)−1 (t α + (T − t)α − T α ) ≤ (T − t)α Γ(α + 1)−1 μ1 λ1 . Consequently, there holds U(s) − U(T) L ∞ (Ω) ≤ Γ(α + 1)−1 μ1 λ1 (T − s)α . Lemma 6.16(ii) implies



‖U(T)^{−1}|_ω‖_{L^∞(Ω)} ≤ (μ_0 (1 − ε) E_{α,1}(−λ_1 T^α))^{−1}.

The preceding two estimates and Theorem 6.4(iv) imply

‖P_T‖_{B(L²(ω))} ≤ ∫_0^T ‖A E(T − s)‖ ‖U(T)^{−1}|_ω‖_{L^∞(Ω)} ‖U(s) − U(T)‖_{L^∞(Ω)} ds + ‖F(T)‖
 ≤ (λ_1 μ_1 / Γ(α + 1)) (μ_0 (1 − ε) E_{α,1}(−λ_1 T^α))^{−1} ∫_0^T c_α (T − s)^{−1} (T − s)^α ds + E_{α,1}(−λ_1 T^α)
 = c_α μ_1 λ_1 T^α / (μ_0 (1 − ε) α Γ(α + 1) E_{α,1}(−λ_1 T^α)) + E_{α,1}(−λ_1 T^α).

Let m(x) be defined by

m(x) = c_α μ_1 / (μ_0 (1 − ε) α Γ(α + 1)) · x / E_{α,1}(−x) + E_{α,1}(−x).

Straightforward computation shows

m′(x) = c_α μ_1 / (μ_0 (1 − ε) α Γ(α + 1)) · (E_{α,1}(−x) − x E′_{α,1}(−x)) / E_{α,1}(−x)² + E′_{α,1}(−x).

Thus, m(0) = 1 and by Proposition 6.8,

m′(0) = c_α μ_1 / (μ_0 (1 − ε) α Γ(α + 1)) − 1/Γ(α + 1) = Γ(α + 1)^{−1} ( c_α μ_1 / (μ_0 (1 − ε) α) − 1 ) < 0,

under the given conditions on ε, μ_0 and μ_1 in Theorem 6.32. Thus, there exists a θ > 0 such that whenever x < θ, m(x) < 1, and accordingly, for λ_1 T^α sufficiently close to zero, P_T is a contraction on L²(ω). Then by Neumann series expansion, I − P_T is invertible and (I − P_T)^{−1} is bounded.

Now we define a trace operator tr : X → D(A), v ↦ v(T). Then tr ∈ B(X, D(A)) and ‖tr‖_{B(X,D(A))} ≤ 1. Finally, we can present the proof of Theorem 6.32.

Proof We define the mapping

K : L²(ω) → L²(ω),  q ↦ [−A u(q)(T)]|_ω = [−A tr (I − L(q))^{−1} U]|_ω.

Clearly, K is continuously Fréchet differentiable, cf. Lemma 6.15, and its derivative K′ at q ∈ L²(ω) in the direction p is given by

K′(q)[p] = [−A tr (I − L(q))^{−1} L(p)(I − L(q))^{−1} U]|_ω.

Let Q_T = K′(0) = [−A tr L(·) U]|_ω. Then

Q_T(p) = [ ∫_0^T −A E(T − s) p [U(s) − U(T)] ds + (F(T) − I) p U(T) ]|_ω.


We define a multiplication operator M : L²(ω) → L²(ω), p ↦ U(T) p. Then M is invertible, and its inverse is exactly the multiplication operator by 1/U(T)|_ω. Consequently, Q_T M^{−1} = P_T − I. By Lemma 6.17, (P_T − I)^{−1} belongs to B(L²(ω)). Therefore, Q_T has a bounded inverse and Q_T^{−1} = M^{−1}(P_T − I)^{−1}. By the implicit function theorem, K is locally a C¹-diffeomorphism from a neighborhood of 0 onto a neighborhood of K(0). In particular, K^{−1} is Lipschitz continuous in a neighborhood of K(0). Then Theorem 6.32 follows by noting the following inequality

‖A u(q_1)(T)|_ω − A u(q_2)(T)|_ω‖_{L²(ω)} ≤ ‖u(q_1)(T) − u(q_2)(T)‖_{D(A)},  ∀q_1, q_2 ∈ L²(ω).

6.7 Numerical Methods

Now we describe some numerical methods for subdiffusion models. One outstanding challenge lies in the accurate and efficient discretization of the Djrbashian-Caputo fractional derivative ∂_t^α u. Roughly speaking, there are two predominant classes of numerical methods for time stepping, i.e., convolution quadrature (cq) and finite difference-type methods, e.g., the L1 scheme and the L1-2 scheme. The former relies on approximating the (Riemann-Liouville) fractional derivative in the Laplace domain, whereas the latter approximates ∂_t^α u directly by piecewise polynomials. These two approaches have their pros and cons: cq is often easier to analyze, since by construction it inherits the excellent numerical stability of the underlying schemes for odes, but it is restricted to uniform grids. Finite difference-type methods are very flexible in construction and implementation and generalize to nonuniform grids, but are often challenging to analyze. Generally, these schemes are only of low order, unless restrictive compatibility conditions are fulfilled. One promising idea is to employ suitable corrections to restore the desired high-order convergence. In this section, we review these two popular classes of time-stepping schemes on uniform grids, following [JLZ19a], where many further references and technical details can be found. Specifically, let {t_n = nτ}_{n=0}^N be a uniform partition of the time interval [0, T], with a time step size τ = T/N.

6.7.1 Convolution quadrature

Convolution quadrature (cq) was systematically developed by Christian Lubich in a series of pioneering works [Lub86, Lub88, LST96, CLP06], first for fractional integrals, and then for parabolic equations with memory and fractional diffusion-wave equations. It has been widely applied in discretizing the Riemann-Liouville fractional derivative. It requires only that the Laplace transform of the kernel be known. Specifically, cq approximates the Riemann-Liouville fractional derivative R∂_t^α φ(t) = (1/Γ(1−α)) (d/dt) ∫_0^t (t − s)^{−α} φ(s) ds (with φ(0) = 0) by a discrete convolution (with the


shorthand notation φ^n = φ(t_n))

∂̄_τ^α φ^n := τ^{−α} Σ_{j=0}^n b_j φ^{n−j}.   (6.110)

The weights {b_j}_{j=0}^∞ are the coefficients in the power series expansion

δ_τ(ζ)^α = τ^{−α} Σ_{j=0}^∞ b_j ζ^j,   (6.111)

where δ_τ(ζ) is the characteristic polynomial of a linear multistep method for odes. There are several possible choices of the characteristic polynomial, e.g., backward differentiation formula (bdf), trapezoidal rule, Newton-Gregory method and Runge-Kutta methods. The most popular one is the backward differentiation formula of order k (bdfk), k = 1, ..., 6, for which δ_τ(ζ) is given by

δ_τ(ζ) := τ^{−1} Σ_{j=1}^k j^{−1} (1 − ζ)^j.

The case k = 1, i.e., backward Euler cq, is also known as the Grünwald-Letnikov approximation, cf. Section 2.3.3. Then the weights b_j are given explicitly by b_0 = 1 and

b_j = −j^{−1}(α − j + 1) b_{j−1},   j ≥ 1.
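As a concrete illustration, the following Python sketch (mine, not taken from [JLZ19a] or this book) generates the cq weights in (6.111): for k = 1 this reduces to the Grünwald-Letnikov recursion above, and for general bdfk the fractional power of the generating polynomial g(ζ) = Σ_{j=1}^k (1 − ζ)^j / j is expanded with the classical J. C. P. Miller recurrence; the function name and truncation length are my own choices.

from math import comb
import numpy as np

def cq_weights(alpha, k, N):
    # weights b_0,...,b_N with delta_tau(zeta)^alpha = tau^{-alpha} sum_j b_j zeta^j,
    # where tau*delta_tau(zeta) = g(zeta) = sum_{j=1}^k (1-zeta)^j / j
    g = np.zeros(N + 1)
    for j in range(1, k + 1):
        for i in range(j + 1):
            g[i] += (-1) ** i * comb(j, i) / j
    b = np.zeros(N + 1)
    b[0] = g[0] ** alpha
    for n in range(1, N + 1):            # J.C.P. Miller recurrence for g(zeta)^alpha
        j = np.arange(1, n + 1)
        b[n] = np.sum(((alpha + 1) * j - n) * g[j] * b[n - j]) / (n * g[0])
    return b

# sanity check: k = 1 reproduces b_j = -(alpha - j + 1)/j * b_{j-1}
alpha, N = 0.5, 10
b_bdf1 = cq_weights(alpha, 1, N)
b_ref = np.ones(N + 1)
for j in range(1, N + 1):
    b_ref[j] = -(alpha - j + 1) / j * b_ref[j - 1]
print(np.max(np.abs(b_bdf1 - b_ref)))    # ~ machine precision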

The cq discretization of the subdiffusion model first reformulates it in terms of R∂_t^α, using the defining relation for the (regularized) Djrbashian-Caputo fractional derivative ∂_t^α φ(t) = R∂_t^α(φ − φ(0)) in (2.37), into the form

R∂_t^α (u − u_0) − Δu = f.

Then the time stepping scheme based on cq is to find approximations U^n to the exact solution u(t_n) by

∂̄_τ^α (U − u_0)^n − ΔU^n = f(t_n),   n = 1, ..., N,   (6.112)

with U^0 = u_0. When combined with spatially semidiscrete schemes, e.g., Galerkin finite element methods, we arrive at fully discrete schemes. We discuss only the temporal error for time-stepping schemes, and omit the spatial errors. The backward Euler cq has first-order accuracy [JLZ16c, Theorems 3.5 and 3.6].

Theorem 6.33 Let u_0 ∈ H^γ(Ω), γ ∈ [0, 2], and f ∈ C(Ī; L²(Ω)) with ∫_0^t (t − s)^{α−1} ‖f′(s)‖_{L²(Ω)} ds < ∞ for any t ∈ I. Let U^n be the solutions of the scheme (6.112). Then there holds


‖u(t_n) − U^n‖_{L²(Ω)} ≤ cτ ( t_n^{γα/2 − 1} ‖u_0‖_{H^γ(Ω)} + t_n^{α−1} ‖f(0)‖_{L²(Ω)} + ∫_0^{t_n} (t_n − s)^{α−1} ‖f′(s)‖_{L²(Ω)} ds ).

If the exact solution u is smooth and has a sufficient number of vanishing derivatives at t = 0, then the approximation U n converges at a rate of O(τ k ) uniformly in time t for bdfk convolution quadrature [Lub88, Theorem 3.1]. However, in practice, it generally only exhibits a first-order accuracy when solving subdiffusion problems even for smooth u0 and f [CLP06, JLZ16c], since the requisite compatibility condition is not satisfied. This loss of accuracy is one distinct feature for most time-stepping schemes since they are usually derived under the assumption that u is smooth in time, which, according to the regularity theory in Section 6.2, holds only if the problem data satisfy certain compatibility conditions. In summary, they tend to lack robustness with respect to the regularity of problem data. One promising idea is initial correction. To restore the second-order accuracy for bdf2 cq, one may correct the first step of the scheme and obtain

∂̄_τ^α (U − u_0)^1 − ΔU^1 = ½(Δu_0 + f(0)) + f(t_1),
∂̄_τ^α (U − u_0)^n − ΔU^n = f(t_n),   2 ≤ n ≤ N.   (6.113)

When compared with the vanilla cq scheme (6.112), the additional term ½(Δu_0 + f(0)) at the first step is constructed so as to improve the overall accuracy of the scheme to O(τ²) for initial data v ∈ D(Δ) and a possibly incompatible right-hand side f. The difference between the corrected scheme (6.113) and the standard scheme (6.112) lies only in the first step, and hence it is very easy to implement. The scheme (6.113) satisfies the following error estimate [JLZ16c, Theorems 3.8 and 3.9].

Theorem 6.34 Let u_0 ∈ H^γ(Ω), γ ∈ [0, 2], f ∈ C¹(Ī; L²(Ω)) and ∫_0^t (t − s)^{α−1} ‖f″(s)‖_{L²(Ω)} ds < ∞ for any t ∈ I. Then for the solutions U^n to the scheme (6.113), there holds for any t_n > 0

‖U^n − u(t_n)‖_{L²(Ω)} ≤ cτ² ( t_n^{γα/2 − 2} ‖u_0‖_{H^γ(Ω)} + t_n^{α−2} ‖f(0)‖_{L²(Ω)} + t_n^{α−1} ‖f′(0)‖_{L²(Ω)} + ∫_0^{t_n} (t_n − s)^{α−1} ‖f″(s)‖_{L²(Ω)} ds ).
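For orientation, here is a minimal Python sketch (my own construction, not code from the book) of the vanilla backward Euler cq scheme (6.112) for a one-dimensional model problem on Ω = (0, 1) with homogeneous Dirichlet data, using a standard second-order finite difference Laplacian in space; the problem data and discretization parameters are purely illustrative. The corrected scheme (6.113) would only modify the right-hand side at the first step by adding ½(A U^0 + f(x, 0)).

import numpy as np

def be_cq_subdiffusion(alpha, u0, f, T=1.0, N=100, M=99):
    # backward Euler cq for d_t^alpha (u - u0) - u_xx = f on (0,1), u = 0 on the boundary
    tau, h = T / N, 1.0 / (M + 1)
    x = np.linspace(h, 1 - h, M)                        # interior nodes
    A = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1)) / h ** 2        # discrete Laplacian
    b = np.zeros(N + 1); b[0] = 1.0                     # Gruenwald-Letnikov weights
    for j in range(1, N + 1):
        b[j] = -(alpha - j + 1) / j * b[j - 1]
    U = [u0(x)]
    I = np.eye(M)
    for n in range(1, N + 1):
        tn = n * tau
        # history term tau^{-alpha} sum_{j=1}^{n} b_j (U^{n-j} - U^0)
        hist = sum(b[j] * (U[n - j] - U[0]) for j in range(1, n + 1)) / tau ** alpha
        rhs = f(x, tn) + b[0] / tau ** alpha * U[0] - hist
        U.append(np.linalg.solve(b[0] / tau ** alpha * I - A, rhs))
    return x, np.array(U)

# sample run: smooth initial data, zero source
x, U = be_cq_subdiffusion(0.5, lambda x: np.sin(np.pi * x), lambda x, t: 0.0 * x)
print(U[-1].max())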

6.7.2 Piecewise polynomial interpolation

Now we describe time stepping schemes based on piecewise polynomial approximation, which are essentially of finite difference nature. The idea has been very popular [SW06, LX07, Ali15], and the most prominent one is the L1 scheme due to Lin and Xu [LX07]. It employs piecewise linear interpolation, and hence the name L1


scheme. First, we derive the approach from Taylor expansion [LX07, Section 3]. Recall the following Taylor expansion formula with an integral remainder:

f(t) = f(s) + f′(s)(t − s) + ∫_s^t f″(ξ)(t − ξ) dξ,  ∀t, s ∈ I.

Applying the identity to the function u(t) at t = t_j and t = t_{j+1}, respectively, gives

u(t_j) = u(s) + u′(s)(t_j − s) + ∫_s^{t_j} u″(ξ)(t_j − ξ) dξ,
u(t_{j+1}) = u(s) + u′(s)(t_{j+1} − s) + ∫_s^{t_{j+1}} u″(ξ)(t_{j+1} − ξ) dξ.

Subtracting the first identity from the second and dividing both sides by τ yields

u′(s) = (u(t_{j+1}) − u(t_j))/τ − τ^{−1} ∫_s^{t_{j+1}} u″(ξ)(t_{j+1} − ξ) dξ + τ^{−1} ∫_s^{t_j} u″(ξ)(t_j − ξ) dξ.

Thus, for all 1 ≤ n ≤ N, we have

∂_t^α u(t_n) = (1/Γ(1−α)) Σ_{j=0}^{n−1} ∫_{t_j}^{t_{j+1}} u′(s)(t_n − s)^{−α} ds
 = (1/Γ(1−α)) Σ_{j=0}^{n−1} ((u(t_{j+1}) − u(t_j))/τ) ∫_{t_j}^{t_{j+1}} (t_n − s)^{−α} ds + r_τ^n
 = Σ_{j=0}^{n−1} b_j (u(t_{n−j}) − u(t_{n−1−j}))/τ^α + r_τ^n
 = τ^{−α} [ b_0 u(t_n) − b_{n−1} u(t_0) + Σ_{j=1}^{n−1} (b_j − b_{j−1}) u(t_{n−j}) ] + r_τ^n,

where the first term on the right-hand side is denoted by L1_n(u), the weights b_j are given by b_j = Γ(2−α)^{−1}((j+1)^{1−α} − j^{1−α}), j = 0, 1, ..., N−1, and r_τ^n is the local truncation error defined by

r_τ^n = (1/(τ Γ(1−α))) Σ_{j=0}^{n−1} ∫_{t_j}^{t_{j+1}} [ −∫_s^{t_{j+1}} u″(ξ) (t_{j+1} − ξ)(t_n − s)^{−α} dξ + ∫_s^{t_j} u″(ξ) (t_j − ξ)(t_n − s)^{−α} dξ ] ds.

In essence, the scheme approximates the function u by a continuous piecewise linear interpolation, in a manner similar to the backward Euler method. It can be viewed as a fractional analogue of the latter. It was shown in [LX07, (3.3)] that the local truncation error of the approximation is of order O(τ 2−α ); see also [JLLZ15]. Lemma 6.18 There exists some constant c independent of τ such that


|r_τ^n| ≤ c max_{0≤t≤t_n} |u″(t)| τ^{2−α}.

Proof Changing the integration order gives rτn =

n−1 ∫ t j+1

1 τΓ(1 − α) j=0

tj

tj

∫ + (t j − ξ)

∫ # u (ξ) − (t j+1 − ξ)

t j+1

ξ

ξ

ds (tn − s)α n−1 ∫ t j+1

1 ds $ dξ = α (tn − s) τΓ(2 − α) j=0

tj

u (ξ)Rnj (ξ)dξ,

where the auxiliary function Rnj (ξ) is defined by Rnj (ξ) = (tn − ξ)1−α τ − (t j+1 − ξ)(tn − t j )1−α + (t j − ξ)(tn − t j+1 )1−α . We claim Rnj (ξ) is nonnegative for all ξ ∈ [t j , t j+1 ]. Indeed, we have Rnj (t j ) = d n −1−α τ ≤ 0, for 0 < α < 1. That Rnj (t j+1 ) = 0, and dξ 2 R j (ξ) = (1 − α)(−α)(tn − ξ) n is, R j (ξ) is a concave function. The concavity implies the desired nonnegativity Rnj (ξ) ≥ 0 for all r ∈ [t j , t j+1 ]. Thus, with cu = max0≤ξ ∈tn |u (ξ)|, there holds 2

rτn ≤

n−1 ∫ t j+1

cu τΓ(2 − α) j=0

tj

Rnj (ξ)dξ.

It suffices to bound the integrals on the right-hand side. Since ∫ t j+1 τ 3−α  2(n − j)2−α − 2(n − 1 − j)2−α Rnj (ξ)dξ = 2(2 − α) tj  −(2 − α)[(n − j)1−α + (n − 1 − j)1−α ] , by a simple change of variables n−1 ∫ j=0

with

tj

t j+1

Rnj (ξ)dξ =

n−1 τ 3−α s j, 2 j=0

   2  ( j + 1)2−α − j 2−α − ( j + 1)1−α + j 1−α 2 − α  2j ((1 + j −1 )2−α − 1) − (1 + j −1 )1−α − 1 . = j 1−α 2−α  Next, we claim that the sum n−1 j=0 s j is uniformly bounded independent of n, which completes the proof. Indeed, by binomial expansion, the following expansions hold sj =


(2 − α)(1 − α) −2 (2 − α)(1 − α)(−α) −3 j + j 2! 3! (2 − α)(1 − α)(−α)(−α − 1) −4 j + . . ., + 4! (1 − α)(−α) −2 (1 − α)(−α)(−α − 1) −3 j + j +... = 1 + (1 − α) j −1 + 2! 3!

(1 + j −1 )2−α = 1 + (2 − α) j −1 +

(1 + j −1 )1−α

Consequently, we deduce    1  1 2 2   − − |s j | = j 1−α  (1 − α)α j −2 + (1 − α)α(−α − 1) j −3 + . . .  2! 3! 3! 4! 1 2 ≤ (1 − α)α j −1−α 1 + j −1 + j −2 + . . . ≤ (1 − α)α j −1−α ≤ j −1−α . 3! 3! ∞ Therefore, the series j=0 s j converges for all α > 0. Meanwhile, s j = 0 for α = 0, 1. Hence, there exists c > 0, independent of α and k such that & k % 2 2−α 2−α 1−α 1−α [( j + 1) − j ] − [( j + 1) + j ] ≤ c. 2−α j=0 Combining these estimates yields the desired claim. See Fig. 6.5 for an illustration.

[Fig. 6.5 The coefficient s_j (left) and the partial sum Σ_{j=0}^k s_j (right), for α = 0.25, 0.5, 0.75.]

It is noteworthy that the local truncation error in Lemma 6.18 requires that the solution u be twice continuously differentiable in time, which generally does not hold for solutions to subdiffusion (or fractional odes), according to the regularity theory in Section 6.2. Since its first appearance, the L1 scheme has been widely used in practice, and currently it is one of the most popular and successful numerical methods for solving subdiffusion models. With the L1 scheme in time, we arrive at the following time stepping scheme: Given U 0 = v, find U n ∈ H 1 (Ω) for n = 1, 2, . . . , N


L1_n(U) − ΔU^n = f(t_n).   (6.114)

We have the following error estimate for the scheme (6.114) [JLZ16a, Theorems 3.10 and 3.13] (for the homogeneous problem). It was derived by means of discrete Laplace transform, and the proof is technical, since the discrete Laplace transform of the weights b_j involves the fairly unwieldy polylogarithmic function. Formally, the error estimate is nearly identical to that for the backward Euler cq.

Theorem 6.35 Let u_0 ∈ H^γ(Ω), γ ∈ [0, 2], and f ∈ C(Ī; L²(Ω)) with ∫_0^t (t − s)^{α−1} ‖f′(s)‖_{L²(Ω)} ds < ∞ for any t ∈ I. Let U^n be the solutions of the scheme (6.114). Then there holds

‖u(t_n) − U^n‖_{L²(Ω)} ≤ cτ ( t_n^{γα/2 − 1} ‖u_0‖_{H^γ(Ω)} + t_n^{α−1} ‖f(0)‖_{L²(Ω)} + ∫_0^{t_n} (t_n − s)^{α−1} ‖f′(s)‖_{L²(Ω)} ds ).

Thus, in contrast to the O(τ 2−α ) rate expected from the local truncation error in Lemma 6.18 for smooth solutions, the L1 scheme is generally only first-order accurate, just as the backward Euler cq, even for smooth initial data or source term. Similar to the bdf2 cq scheme, Yan et al [YKF18] developed the following correction scheme, and proved that it can achieve an accuracy O(τ 2−α ) for general problem data. Interestingly, this correction is identical with that in (6.113)

L1_1(U) − ΔU^1 = ½(Δu_0 + f(0)) + f(t_1),
L1_n(U) − ΔU^n = f(t_n),   2 ≤ n ≤ N.

There have been several works extending the L1 scheme to high-order schemes by using high-order polynomials and superconvergent points. For example, the so-called L1-2 scheme applies a piecewise linear approximation on the first subinterval, and a quadratic approximation on the remaining subintervals to improve the numerical accuracy; see, e.g., Exercise 6.18 for related discussions.
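To illustrate the L1 approximation itself, the following Python sketch (my own, not from the book) applies L1_n(u) on a uniform grid to the smooth function u(t) = t², for which ∂_t^α t² = 2 t^{2−α}/Γ(3−α), and reports the maximal error; halving τ should reduce the error by roughly a factor 2^{2−α}, in line with Lemma 6.18.

import numpy as np
from math import gamma

def l1_caputo(u_vals, alpha, tau):
    # L1 approximation of the Caputo derivative on the grid t_n = n*tau;
    # u_vals[n] = u(t_n); returns approximations at t_1, ..., t_N
    N = len(u_vals) - 1
    j = np.arange(N)
    b = ((j + 1) ** (1 - alpha) - j ** (1 - alpha)) / gamma(2 - alpha)
    out = np.zeros(N)
    for n in range(1, N + 1):
        diffs = u_vals[1:n + 1] - u_vals[0:n]       # u(t_{j+1}) - u(t_j), j = 0,...,n-1
        out[n - 1] = np.sum(b[:n][::-1] * diffs) / tau ** alpha
    return out

alpha, T = 0.5, 1.0
for N in (40, 80, 160):
    tau = T / N
    t = np.linspace(0.0, T, N + 1)
    approx = l1_caputo(t ** 2, alpha, tau)
    exact = 2.0 * t[1:] ** (2 - alpha) / gamma(3 - alpha)
    print(N, np.max(np.abs(approx - exact)))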

Exercises

Exercise 6.1 This exercise is to provide additional details for Lemma 6.1.
(i) Let h_n ∈ L¹_loc(R_+) be the resolvent kernel associated with n k_{1−α}, i.e.,
h_n + n(h_n ∗ k_{1−α})(t) = n k_{1−α}(t),  t > 0, n ∈ N.
Prove that the function h_n is uniquely defined.
(ii) Prove that k^n = h_n ∗ k_α → k_α in L¹(I).
(iii) Prove that for any f ∈ L^p(I), k^n ∗ f → k^α ∗ f.


Exercise 6.2 Derive (6.21)–(6.22) from their Laplace transforms (6.25)–(6.26). Exercise 6.3 Prove the other estimate in Corollary 6.2 using properties of Eα,α (−t). Exercise 6.4 Prove the existence and uniqueness for problem (6.17) with u0 ≡ 0 and a nonzero source f , in a manner similar to Theorem 6.6. Exercise 6.5 Consider problem (6.17) with u0 = 0, and f ∈ L ∞ (I; H γ (Ω)), −1 ≤ γ ≤ 1. Show that there exists a unique weak solution u ∈ L 2 (I; H 2+γ (Ω)) with ∂tα u ∈ L 2 (I; H γ (Ω)), and u L 2 (I; H 2+γ (Ω)) + ∂tα u L 2 (I; H γ (Ω)) ≤ c f  L 2 (I; H γ (Ω)) .

(6.115)

This result can be proved in several steps. ∫t ∫t (i) Use Laplace transform to prove ∂tα 0 E(t − s) f (s)ds = f − 0 AE(t − s) f (s)ds. ∫t (ii) Using the property of Eα,1 (−t) to prove 0 |s α−1 Eα,α (−λ j s α )|ds ≤ λ−1 j . (iii) Use (ii) and Young’s inequality to prove ∫ T  ∫ t 2  α  ( f (s), ϕ j )(t − s)α−1 Eα,α (−λ j (t − s)α ) ds 2 ≤ c |( f (t), ϕ j )| 2 dt. ∂t L (I)

0

0

(iv) Prove the estimate (6.115) using (iii). (v) Show that for any γ ≤ β < γ + 2, limt→0+ u(t) H β (Ω) = 0. Exercise 6.6 Prove the uniqueness and existence for the following Neumann subdiffusion problem ⎧ ⎪ ∂tα u = Lu + f , in Q, ⎪ ⎨ ⎪ ∂n u = 0, on ∂L Q, ⎪ ⎪ ⎪ u(·, 0) = u0, in Ω, ⎩ where ∂n u denotes the unit outward normal derivative of u on the boundary ∂L Q. Exercise 6.7 Prove Theorem 6.18. Exercise 6.8 Consider the following diffusion wave problem (with α ∈ (1, 2)) ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪

∂tα u = Δu + f , in Q, u = 0, on ∂L Q,

⎪ u(·, 0) = u0, ⎪ ⎪ ⎪ ⎪ u (·, 0) = u , 1 ⎩

in Ω, in Ω.

(i) Derive the solution representation using the separation of variables technique. (ii) Discuss the Sobolev regularity of the solution. (iii) Is there any maximum principle for the problem? Exercise 6.9 Let 0 < α0 < α1 < 1. Consider the following two-term subdiffusion:


⎧ ∂ α1 u + ∂tα0 u = Δu + f in Q, ⎪ ⎪ ⎨ t ⎪ u = 0 on ∂L Q, ⎪ ⎪ ⎪ u(·, 0) = u0 in Ω. ⎩ (i) Derive the solution representation using the separation of variables technique. (ii) Discuss the existence and uniqueness of the solution. (iii) Discuss the asymptotic of the solution for the homogeneous problem. Exercise 6.10 Consider the following lateral Cauchy problem with α ∈ (0, 1), ⎧ ⎪ ⎪ ⎨ ⎪

2 ∂tα u = ∂xx u, 0 < x < r1, 0 < t < r2, u(0, t) = f (t), 0 < t < r2,

⎪ ⎪ ⎪ −u x (0, t) = g(t), ⎩

0 < t < r2,

where r1, r2 are positive constants, and f and g are known functions. A simple idea  j is to look for a solution of the form of a power series u(x, t) = ∞ j=0 a j (t)x , where the coefficients a j are to be determined. (i) Determine the recursion relation for the coefficients a j , and derive closed-form coefficients a j (t). (ii) Discuss the conditions on the data f and g that are sufficient to ensure the convergence of the formal series in (i). (iii) Discuss the impact of the fractional-order α on the behavior of the solution (assuming existence). Exercise 6.11 [Ros16] Consider the following one-dimensional Stefan problem for subdiffusion with α ∈ (0, 1), and a, b ∈ R: ⎧ ⎪ ⎪ ⎨ ⎪

2 ∂tα u = ∂xx ,

u(0, t) = a, ⎪ ⎪ ⎪ u(t α2 , t) = b, ⎩

α

0 < x < t 2 , 0 < t ≤ T, 0 < t ≤ T, 0 < t ≤ T. α

Show that the function u(x, t) = a + (1 − W− α2 ,1 (−1))−1 (b − a)(1 − W− α2 ,1 (−xt − 2 )) is a solution. Exercise 6.12 Under the setting of Theorem 6.31, derive an alternative inversion formula when Δu(x0 ) = 0. Exercise 6.13 [HNWY13] This problem is concerned with determining the fractionalorder α, as Theorem 6.31. Let u0 ∈ C0∞ (Ω), u0 ≥ 0 or ≤ 0,  0 on Ω. Then there holds α = − limt→∞ tu(x0, t)−1 ∂u ∂t (x0, t). Exercise 6.14 Consider the following diffusion wave problem with 1 < α < 2: ⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪

∂tα u = Δu,

in Q,

u = 0, on ∂L Q, ⎪ u(·, 0) = u0, in Ω, ⎪ ⎪ ⎪ ⎪ u (·, 0) = u , in Ω. 1 ⎩


Discuss the possibility of simultaneously recovering u_0 and u_1 from g_1 = u(x, T_1) and g_2 = u(x, T_2) for some T_2 > T_1 > 0.

Exercise 6.15 What about recovering u_0 in (6.89) from the time trace g(t) = u_x(0, t)?

Exercise 6.16 Prove the following bound for the backward Euler cq weights: |b_j^{(α)}| ≤ c j^{−α−1}.

Exercise 6.17 One can rewrite the L1 approximation L1n (U) into the form L1n (U) =

n j=0

j b(n) n−j U .

Then the following statements hold.  (i) For any n ≥ 1, nj=0 b(n) j = 0. n (n) (ii) For any n ≥ 1, j=0 bn−j tn = Γ(2 − α)−1 τ α . (iii) For any n ≥ 1, b(n) j < 0, j = 1, . . . , n. Exercise 6.18 One natural idea to extend the L1 scheme for the Djrbashian-Caputo fractional derivative is to employ high-order polynomial interpolation instead of linear interpolation. This exercise is to explore the idea of using piecewise quadratic T , which leads to interpolants on a uniform grid ti = iτ, i = 0, 1, . . . , N, with τ = N the so-called L1-2 scheme [GSZ14]. (i) Find the piecewise linear interpolant Π1,1 u of u on [0, τ], and piecewise quadratic interpolant Π2,i u on [ti−2, ti ], i ≥ 2. (ii) Develop an approximation scheme to ∂tα u using Π1,1 u on [0, τ] and Π2,i u on [ti−1, ti ], i ≥ 2. (iii) Derive the local truncation error of the scheme under the assumption u ∈ C 3 (I).

Chapter 7

Subdiffusion: Hölder Space Theory

In this chapter, we discuss Hölder regularity of the solutions to subdiffusion models with variable coefficients, using fundamental solutions corresponding to constant coefficients. The issue of Hölder regularity has not been extensively discussed. Our description follows largely the works [KV13, Kra16]. The one- and multidimensional cases will be discussed separately due to the significant differences in fundamental solutions. The overall analysis strategy follows closely that for standard parabolic equations [LSU68, Chapter IV].

7.1 Fundamental Solutions

First, we derive fundamental solutions and discuss the associated fractional θ-functions. Fundamental solutions for subdiffusion have been extensively studied [SW89, Koc90, Mai96, MLP01, EK04, Psk09]. In the work [Koc90], Kochubei derived the expression for the fundamental solutions Gα(x, t) and Ḡα(x, t) in terms of Fox's H-functions, see also [SW89]. The work [EK04] gives several estimates. More recently, Pskhu [Psk09, Section 3] derived alternative representations of Gα(x, t) and Ḡα(x, t) using the Wright function W_{ρ,μ}(z), cf. (3.24) in Chapter 3, and also gave several estimates. See the works [KKL17, DK19, DK20, HKP20, DK21] for further estimates in L^p spaces. The survey [Don20] provides an overview of the estimates on fundamental solutions. We present only the representations via the Wright function.

7.1.1 Fundamental solutions

Consider subdiffusion in R^d (d = 1, 2, 3)

∂_t^α u − Δu = f,  in R^d × R_+,
u(0) = u_0,   in R^d.   (7.1)


Throughout, the initial data u_0 and the source f are assumed to be smooth and have compact supports, i.e., there exists an R > 0 such that

u_0(x) = 0, f(x, t) = 0,  ∀|x| ≥ R > 0.   (7.2)

The next result gives a solution representation to problem (7.1) using the Wright function W_{ρ,μ}(z), cf. (3.24) in Chapter 3; see Sections A.3.1 and A.3.2 for Laplace and Fourier transforms, respectively.

Proposition 7.1 Under Assumption (7.2), the solution u of problem (7.1) is given by

u(x, t) = ∫_{R^d} Gα(x − y, t) u_0(y) dy + ∫_0^t ∫_{R^d} Ḡα(x − y, t − s) f(y, s) dy ds,   (7.3)

with Gα(x, t) and Ḡα(x, t) given, respectively, by

Gα(x, t) = 2^{−1} t^{−α/2} W_{−α/2, 1−α/2}(−|x| t^{−α/2}),   d = 1,
Gα(x, t) = (4π)^{−d/2} ∫_0^∞ λ^{−d/2} e^{−|x|²/(4λ)} t^{−α} W_{−α, 1−α}(−λ t^{−α}) dλ,   d = 2, 3,   (7.4)

Ḡα(x, t) = 2^{−1} t^{α/2 − 1} W_{−α/2, α/2}(−|x| t^{−α/2}),   d = 1,
Ḡα(x, t) = (4π)^{−d/2} ∫_0^∞ λ^{−d/2} e^{−|x|²/(4λ)} t^{−1} W_{−α, 0}(−λ t^{−α}) dλ,   d = 2, 3.   (7.5)

Proof Applying Fourier transform in x (denoted by ˆ) and Laplace transform in t (denoted by ˜) to (7.1), and using Lemma 2.9, give

z^α û̃(ξ, z) + |ξ|² û̃(ξ, z) = z^{α−1} û_0(ξ) + f̂̃(ξ, z),

i.e., û̃(ξ, z) = (z^α + |ξ|²)^{−1} z^{α−1} û_0(ξ) + (z^α + |ξ|²)^{−1} f̂̃(ξ, z).

Next we discuss the cases d = 1 and d = 2, 3 separately. For d = 1, by Lemma 3.2, the solution û(ξ, t) is given by

û(ξ, t) = E_{α,1}(−ξ² t^α) û_0(ξ) + ∫_0^t (t − s)^{α−1} E_{α,α}(−ξ²(t − s)^α) f̂(ξ, s) ds.

The desired assertion follows by applying the Fourier transform relation of W_{ρ,μ}(−|x|) in Proposition 3.4. For d = 2, 3, we rewrite û̃ into

û̃(ξ, z) = z^{α−1} ∫_0^∞ e^{−(z^α + |ξ|²)λ} dλ û_0(ξ) + ∫_0^∞ e^{−(z^α + |ξ|²)λ} f̂̃(ξ, z) dλ.

In view of the identities F^{−1}[e^{−|ξ|²λ}](x) = (4πλ)^{−d/2} e^{−|x|²/(4λ)} and L^{−1}[z^{−μ} e^{−z^α λ}](t) = t^{μ−1} W_{−α,μ}(−λ t^{−α}), cf. Proposition 3.3, the convolution formulas for Laplace and Fourier transforms yield the desired representation.
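As a quick numerical sanity check of the one-dimensional kernel, the following Python sketch (my own, not part of the book) evaluates the Wright function by its truncated power series W_{ρ,μ}(z) = Σ_{k≥0} z^k/(k! Γ(ρk + μ)), forms Gα(x, t) for d = 1 via (7.4), and verifies ∫_R Gα(x, t) dx ≈ 1 (cf. (7.6) below) and the variance 2t^α/Γ(1+α) from Remark 7.3 below by trapezoidal quadrature; the truncation order and grid are ad hoc and only adequate for moderate arguments.

import numpy as np
from math import gamma
from scipy.special import rgamma, factorial

def wright(rho, mu, z, K=120):
    # truncated series W_{rho,mu}(z) = sum_k z^k / (k! Gamma(rho*k + mu)), rho > -1;
    # rgamma returns 0 at the poles of Gamma, which is exactly what the series needs
    k = np.arange(K)
    return np.sum(z ** k * rgamma(rho * k + mu) / factorial(k))

def G_alpha_1d(x, t, alpha):
    # one-dimensional fundamental solution, cf. (7.4)
    return 0.5 * t ** (-alpha / 2) * wright(-alpha / 2, 1 - alpha / 2, -abs(x) * t ** (-alpha / 2))

alpha, t = 0.5, 1.0
x = np.linspace(-12.0, 12.0, 2401)
G = np.array([G_alpha_1d(xx, t, alpha) for xx in x])
print(np.trapz(G, x))                                                # ~ 1
print(np.trapz(x ** 2 * G, x), 2 * t ** alpha / gamma(1 + alpha))    # variance check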


Remark 7.1 Note that for subdiffusion, there are two fundamental solutions, corresponding to u_0 and f, respectively. They are identical in the limiting case α = 1, i.e., normal diffusion, for which G_1(x, t) = (4πt)^{−d/2} e^{−|x|²/(4t)}. When d = 2, 3, Gα(x, t) and Ḡα(x, t) can be represented by

Gα(x, t) = ∫_0^∞ G_1(x, λ) t^{−α} W_{−α,1−α}(−λ t^{−α}) dλ,
Ḡα(x, t) = ∫_0^∞ G_1(x, λ) t^{−1} W_{−α,0}(−λ t^{−α}) dλ.

That is, Gα(x, t) and Ḡα(x, t) are both mixtures of G_1(x, t), but with different mixing densities. Further, Gα(x, t) and Ḡα(x, t) are related by Gα(x, t) = 0I_t^{1−α} Ḡα(x, t), in view of the identity (3.28) in Chapter 3. See also Lemma 7.1(iii).

Remark 7.2 When d = 1, we can rewrite Gα(x, t) using Mainardi's M function M_μ(z) (cf. Exercise 3.19) as Gα(x, t) = 2^{−1} t^{−α/2} M_{α/2}(|x| t^{−α/2}). For α ∈ (0, 1) and every t > 0, x ↦ Gα(x, t) is not differentiable at x = 0, cf. Fig. 3.7, which contrasts sharply with G_1(x, t), which is C^∞ in space for every t > 0.

The next result collects elementary properties of Gα(x, t) and Ḡα(x, t). For a vector x ∈ R^d, d > 1, the notation x′ denotes the subvector (x_1, ..., x_{d−1}) ∈ R^{d−1}, and R∂_t^α denotes the Riemann-Liouville fractional derivative, cf. Definition 2.2.

Lemma 7.1 The following statements hold:
(i) The functions Gα(x, t) and Ḡα(x, t) are positive, and so is −∂_{x_d} Ḡα(x, t) for x_d > 0.
(ii) The following identities hold:

∫_{R^d} Gα(x, t) dx = 1,  ∀t > 0,   (7.6)
∫_{R^d} Ḡα(x, t) dx = Γ(α)^{−1} t^{α−1},  ∀t > 0,   (7.7)
∫_0^∞ ∂_x Gα(x, t) dt = −1/2 (d = 1),  ∫_{R^{d−1}} ∫_0^∞ ∂_{x_d} Gα(x, t) dt dx′ = −1/2 (d = 2, 3),  ∀x_d > 0.   (7.8)

(iii) ΔḠα(x, t) = ∂_t Gα(x, t) for x ≠ 0, i.e., Ḡα(x, t) = R∂_t^{1−α} Gα(x, t).

Proof (i) is direct from the integral representations of Gα(x, t) and Ḡα(x, t), and Lemma 3.5 and Theorem 3.8. (iii) follows from direct computation. For (ii), we analyze separately d = 1 and d = 2, 3. When d = 1, by the identity (3.26) and changing variables r = x t^{−α/2}, we deduce

∫_{−∞}^∞ Gα(x, t) dx = 2^{−1} ∫_{−∞}^∞ W_{−α/2,1−α/2}(−|r|) dr = ∫_{−∞}^0 (d/dr) W_{−α/2,1}(r) dr = W_{−α/2,1}(0) = 1.


This shows the identity (7.6). The equality (7.7) follows analogously: ∫ ∞ ∫ ∞ 1 G α (x, t)dx = 1−α W− α2 , α2 (−|r |)dr 2t −∞ −∞ ∫ 0 d t α−1 W− α2 ,α (r)dr = t α−1W− α2 ,α (0) = . =t α−1 Γ(α) −∞ dr Likewise, the identity (7.8) follows from the formula (3.27) as ∫ ∞ ∫ ∞ α 1 ∂x G α (x, t)dt = − W− α2 ,0 (−|x|t − 2 )dt 2t 0 0 ∫ α 1 ∞ d 1 1 W− α2 ,1 (−|x|t − 2 )dt = − W− α2 ,1 (0) = − . =− 2 0 dt 2 2 Next, when d = 2, 3, by the definition of Gα (x, t), we have ∫ ∫ ∞ ∫ | x |2 d d Gα (x, t)dx = (4π)− 2 λ− 2 e− 4λ dx t −α W−α,1−α (−λt −α )dλ d Rd ∫0 ∞ R ∫ ∞ = t −α W−α,1−α (−λt −α )dλ = W−α,1−α (−η)dη = 1, 0

0

where the last line follows from the formula (3.26). This shows (7.6). The identity (7.7) can be shown analogously. Last, by the definition of G α , ∫ ∞ xd − d − | x |2 −1 1 λ 2 e 4λ t W−α,0 (−λt −α )dλ. −2∂x d G α (x, t) = d λ (4π) 2 0 Thus, the recursion formula W−α,0 (−z) = αzW−α,1−α (−z) from (3.33), and changing 1 1 variables y  = x  λ− 2 , η = λt −α and μ = xd λ− 2 imply ∫ ∫ ∞ −2 ∂x d G α (x, t)dtdx  d−1 R 0 ∫ ∞ ∫ ∫ ∞ μ2 |y  | 2 − d2 =2(4π) e− 4 dμ W−α,1−α (−η)dη e− 4 dy  . 0

0

R d−1

This shows the identity (7.8) and completes the proof of the lemma.



Remark 7.3 Lemma 7.1 indicates that the fundamental solution Gα(x, t) is actually a pdf for any fixed t > 0, i.e., Gα(x, t) ≥ 0 and ∫_{R^d} Gα(x, t) dx = 1. Further, we can compute all the moments for d = 1: by the integral identity (3.32), the moments (of even order) of Gα(x, t) are given by

μ_{2n}(t) := ∫_{−∞}^∞ x^{2n} Gα(x, t) dx = (Γ(2n + 1)/Γ(αn + 1)) t^{αn},  n = 0, 1, ...,  t ≥ 0.

In particular, the variance σ²(t) := μ_2(t) = 2Γ(α + 1)^{−1} t^α is consistent with the anomalously slow diffusion expected for subdiffusion processes. In the limit α → 1−, it recovers the linear growth for normal diffusion.

The representation (7.3) motivates the following two integral operators:

Iu_0(x, t) = ∫_{R^d} Gα(x − y, t) u_0(y) dy,   (7.9)
Vf(x, t) = ∫_0^t ∫_{R^d} Ḡα(x − y, t − s) f(y, s) dy ds.   (7.10)

Their properties will be central to the Hölder regularity analysis. We first give two preliminary results. The first result gives the continuity of the potential Iu_0 at t = 0.

Lemma 7.2 For any uniformly bounded u_0 ∈ C(R^d),

lim_{t→0+} Iu_0(x, t) = u_0(x)

uniformly for all x in a compact subset of R^d. The limit also holds uniformly for x in compact subsets on which a piecewise continuous u_0 is continuous, and at every point x of continuity of any integrable u_0.

Proof We prove only the case d = 1. By (7.7), for any fixed x,

∫ Gα (x − y, t)u0 (y)dy − u0 (x) =

R

Gα (x − y, t)(u0 (y) − u0 (x))dy :=

3

Ii (t),

i=1

with the three terms I1 , I2 and I3 denoting the integral over (−∞, x − δ), (x − δ, x + δ) and (x + δ, ∞), respectively, where δ > 0 is to be chosen. Since u0 ∈ C(R), it is uniformly continuous on any compact subset S of R. For such a subset S and for each > 0, there exists a δ > 0 such that |u0 (x) − u0 (y)| < 3 holds for every x, y ∈ S with |x − y| < δ . Further, we may assume (x − δ , x + δ ) ⊂ S. Thus, for δ = δ , by Lemma 7.1(i) and (7.6), we have ∫ x+δ |I2 (t)| ≤ Gα (x − y, t)|u0 (x) − u0 (y)|dy ≤ . 3 x−δ α

For the term I3 , by changing variables ξ = |x − y|t − 2 , we have ∫ |I3 (t)| ≤ 2 u0 C(R)



x+δ

∫ Gα (x − y, t)dy ≤ 2 u0 C(R)

α

−δ t − 2

−∞

W− α2 ,1− α2 (ξ)dξ.

∫∞ Now the convergence of the integral −∞ W− α2 ,1− α2 (−|ξ |)dξ implies that there exists t > 0, such that for all t < t , |I3 | < 3 . The term I1 can be bounded similarly. Combining these estimates gives the first claim. The rest follows by a density argument. 


The next result gives the continuity of the volume potential V f . For an index d | | i . ∈ N0d , Dx u denotes the mix derivative Dx u(x) = 1∂  d u(x), with | | = i=1 ∂x1 ...∂x d

Lemma 7.3 For any bounded f ∈ C(Rd ×R+ ), which is uniformly Hölder continuous with an exponent δ ∈ (0, 1) with respect to x, the function u = V f satisfies the following properties: (i) u, ∂tα u, and Dx u, with | | = 1, 2, are continuous.

(ii) For any x ∈ Rd and t > 0, there holds ∂tα u(x, t) = Δu(x, t) + f (x, t). Proof We give the proof only for the case d = 1. For any 0 < τ < 2t , let ∫ t−τ ∫ G α (x − y, t − s) f (y, s)dyds. uτ (x, t) = R

0

Since the singularity of the kernel G α (x − y, t − s) occurs at x = y, t = s and t − s ≥ τ > 0, uτ is continuously differentiable and by Leibniz’s rule, the functions ∫ t−τ ∫ ∂x uτ (x, t) = ∂x Gα (x − y, t − s) f (y, s)dyds 0 R ∫ t−τ ∫ 2 2 uτ (x, t) = ∂xx G α (x − y, t − s) f (y, s)dyds ∂xx R

0

are continuous. By Lemma 7.1(i) and (7.7), we can bound |u(x, t) − uτ (x, t)| by ∫ t ∫ G α (x − y, t − s)| f (y, s)|dyds |u(x, t) − uτ (x, t)| ≤ t−τ R ∫ t (t − s)α−1 ds = Γ(α + 1)−1 f C(R×I) τ α . ≤Γ(α)−1 f C(R×I) t−τ

0+ ,

Thus, as τ → u is the uniform limit of continuous functions uτ , and hence u is also continuous. The proof shows also |u(x, t)| ≤ ct α f C(R×I), and thus u is continuous at t = 0 and is equal to zero. Let ∫ t∫ ∂x G α (x − y, t − s) f (y, s)dyds. g(x, t) = 0

R

By Lemma 7.1(i), ∂x G α (x, t) does not change sign for x > 0, and by the differentiaα tion formula (3.26), ∂x G α (x, t) = −(2t)−1W− α2 ,0 (−|x|t − 2 ). Then with the change of α variable r = xt − 2 and the differentiation formula (3.26), we deduce ∫ ∞ ∫ α 1 ∞ |∂y Gα (y, t)|dy = W− α2 ,0 (−xt − 2 )dx t 0 −∞ ∫ ∞ ∫ ∞ α α α d t 2 −1 W− α2 , α2 (−r)dr = α . W− α2 ,0 (−r)dr = −t 2 −1 =t 2 −1 dr Γ( 2 ) 0 0 This identity implies


∫  |g(x, t) − ∂x uτ (x, t)| ≤ 



  ∂x G α (x − y, t − s) f (y, s)dyds t−τ R ∫ t ∫ ≤ f C(R×I) |∂x G α (x − y, t − s)|dyds t

R

t−τ

=

f C(R×I) ∫ Γ( α2 )

t

α

(t − s)

α 2 −1

ds =

t−τ

f C(R×I) τ 2 Γ( α2 + 1)

.

Thus, g is the uniform limit of the continuous functions ∂x uτ , and it is continuous. Further, g(x, ∫ xt) is continuous at t = 0 and is equal to zero. It follows from the identity uτ (x, t) = x ∂x uτ (y, t)dy + uτ (x0, t) and the uniform convergence of uτ to u and 0 ∫x ∂x uτ to g on compact subsets of R, t > 0 that u(x, t) = x g(y, t)dy +u(x0, t), whence 0 the identity ∫ t∫ ∂x G α (x − y, t − s) f (y, s)dyds ∂x u(x, t) = g(x, t) = R

0

exists and continuous. Next, by the identity (7.7), we have ∫ ∫ α 2 ∂t G α (x − y, t − s)dy = ∂xx G α (x − y, t − s)dy R R ∫ 2 G α (x − y, t − s)dy = 0. =∂xx R

2 u (x, t) as The preceding two identities allow rewriting ∂xx τ ∫ t−τ ∫ 2 ∂xx uτ (x, t) = ∂xx Gα (x − y, t − s)( f (y, s) − f (x, s))dyds. R

0

Let h(x, t) =

∫ t∫ 0

R

2 ∂xx Gα (x − y, t − s)( f (y, s) − f (x, s))dyds.

Then by the Hölder continuity of f , ∫ |h(x, t) −

2 uτ (x, t)| ∂xx

≤ f C(I;C α (R))

t

t−τ

∫ R

2 |∂xx Gα (x, t − s)||x| δ dxds. α

By Lemmas 7.1(iii) and 7.6(ii) below, for any t > 0, with r = |x|t − 2 , there holds ∫ ∫ α 2 δ 2 |∂xx G α (x, t)||x| δ dx ≤ c t −1− 2 (1 + r 1+ α )−1 |x| δ dx ≤ ct 2 α−1 . R

R

δ

2 u (x, t)| ≤ c f

α Thus, we obtain |h(x, t) − ∂xx τ C([0,T ];C δ (R)) τ 2 . Thus, h(x, t) is the 2 uniform limit∫of the continuous functions ∂xx uτ (x, t), and it is also continuous. Since x 2 uτ (y, t)dy + ∂x uτ (x0, t), there holds ∂x uτ (x, t) = x ∂xx 0


∫ ∂x u(x, t) =

x

x0

h(y, t)dy + ∂x u(x0, t),

2 u(x, t) = h(x, t) exists and is continuous. Last, consider the Djrbashianwhence ∂xx Caputo fractional derivative ∂tα u(x, t). Since |u(x, t)| → 0 as t → 0+ , the RiemannLiouville and Djrbashian-Caputo fractional derivatives are identical. Thus, ∫ η∫ ∫ t 1 d (t − η)−α f (y, s)Gα (x − y, η − s)dydsdη ∂tα u(x, t) = dt Γ(1 − α) 0 0 R ∫ ∫ t∫ t 1 d (t − η)−α f (y, s)G α (x − y, η − s)dydηds = dt Γ(1 − α) 0 s R ∫ t ∫ d 1−α f (y, s)G α (x − y, t − s)dyds. = s It dt 0 R

Now by (7.7), ∫ R

| f (y, s)s It1−α G α (x − y, t − s)|dy ≤ c f C(R×[0,t]) .

Hence, ∂tα u(x, t) is continuous. Using this identity gives ∫ t∫ d ∂tα u(x, t) = ( f (y, s) − f (x, s))s It1−α Gα (x − y, t − s)dyds dt 0 R ∫ ∫ t d f (x, s) s It1−α Gα (x − y, t − s)dyds. + dt 0 R ∫ In view of (7.7), R s It1−α G α (x − y, t − s)dy = 1 and thus we have ∂tα u(x, t) =

∫ t∫ 0

R

( f (y, s) − f (x, s)) Rs∂tα G α (x − y, t − s)dyds + f (x, t).

2 u(x, t) above show the desired identity in (ii). This and the expression for ∂xx



7.1.2 Fractional θ-functions

For normal diffusion, the fundamental solution G_1(x, t) can be used to solve problems in domains with special geometry, using the so-called θ-function [Can84, Chapter 6]; the same construction works for Gα(x, t) and Ḡα(x, t). We illustrate the idea on the unit interval Ω = (0, 1), using fractional analogues of the θ-function. Consider

∂_t^α u = ∂²_{xx} u + f,  in Ω × I,
u(0) = u_0,   in Ω,
u(0, ·) = g_1,  on I,
u(1, ·) = g_2,  on I.   (7.11)

To see the extension, we assume g_1 ≡ g_2 ≡ 0 and f ≡ 0. First, we make an odd extension of u_0 from Ω to (−1, 0), and then extend periodically to R with period 2, denoted by ũ_0. Then the solution, still denoted by u, of the extended problem is given by

u(x, t) = ∫_{−∞}^∞ Gα(x − y, t) ũ_0(y) dy
 = ∫_0^1 [ Σ_{m=−∞}^∞ Gα(x + 2m − y, t) − Σ_{m=−∞}^∞ Gα(x + 2m + y, t) ] u_0(y) dy.

Thus, we define two (fractional) θ-functions θα(x, t) and θ̄α(x, t) by

θα(x, t) = Σ_{m=−∞}^∞ Gα(x + 2m, t)   and   θ̄α(x, t) = Σ_{m=−∞}^∞ Ḡα(x + 2m, t).

Both functions coincide with the classical θ function as α → 1− . They were studied in [LZ14], from which the discussions below are adapted. The following lemmas collect useful properties of θ α and θ¯α . Due to the presence of the kink at x = 0 for Gα (x, t) and G α (x, t), the functions θ α and θ¯α are not differentiable at x = 0, ±2, ±4, . . . and not globally smooth for x ∈ R, which differs from the classical θ function. Lemma 7.4 The functions θ α and θ¯α are even in x and C ∞ for x ∈ (0, 2) and t > 0. Proof We give the proof only for θ α , and θ¯α can be analyzed similarly. Let rm = α |x + 2m|t − 2 , m ∈ Z. Then θ α (x, t) can be rewritten as θ α (x, t) =



1 − α2 W− α2 ,1− α2 (−rm ). 2t

m=−∞

Recall the asymptotics of Wρ,ν (z) in Theorem 3.7 (see also Exercise 3.19(ii)) c

W−μ,1−μ (−r) ∼ Ar a e−br , 1−2μ

r → ∞, μ

with A = (2π(1 − μ)μ 1−μ )− 2 , a = (2μ − 1)(2 − 2μ)−1 , b = (1 − μ)μ 1−μ and c = (1 − μ)−1 . Then with μ = α2 , there are constants A > 0, a < 0, b > 0 and c ∈ (1, 2) such that ∞

α c a −brm |θ α (x, t)| < c1 t − 2 Arm e . 1

m=−∞

Now for a fixed t > 0, there exists an M ∈ N such that rm > 1 for all m > M, and c thus e−brm < e−brm . It suffices to consider the terms with m > M.Let um (x, t) =


a e−brm . Then for any t > 0, t ≥ t > 0, there is an integer M such that At − 2 rm 0 0 α |rm+1 | − |rm | = 2t − 2 for all m > M. Upon restricting m to this range, we obtain −α um+1 = e−2bt 2 < 1, m→∞ um  since b > 0 and t ≥ t0 > 0. Thus, the series ∞ m=0 um (x, t) converges uniformly for  x ∈ R and t ≥ t0 > 0. The series 0m=−∞ um (x, t) can be analyzed similarly. Thus, the series for θ α (x, t) is uniformly convergent for x ∈ R and t ≥ t0 > 0. Similarly, we can deduce that all partial derivatives of θ α are uniformly convergent for x ∈ (0, 2)  and t > 0, and thus θ α belongs to C ∞ (R+ ) in t and in C ∞ (0, 2) in x.

lim

The following asymptotic expansion at x = 0 is immediate: α

θ α (0, t) = cα t − 2 + 2



α

Gα (2m, t) := cα t − 2 + H(t),

(7.12)

m=1

where (4π)− 2 < cα = 1

1 2Γ(1− α2 )

< 12 , and H(t) ∈ C ∞ [0, ∞) with H (m) (0) = 0, m ∈ N0 .

Lemma 7.5 The following relations hold: lim L [∂x θ α (x, t)] (z) = ∓ 12 z α−1,   lim L ∂x θ¯α (x, t) (z) = ∓ 1 ,

(7.14)

lim θ α (x, t) = 0,

∀x ∈ (0, 2),

(7.15)

lim ∂x θ α (x, t) = 0,

∀t > 0.

(7.16)

(7.13)

x→0±

2

x→0± t→0+ x→1

Proof By (3.26) and the uniform convergence in Lemma 7.4, we have α

lim± ∂x θ α (x, t) = lim± ∂x Gα (x, t) = lim± ∓ 12 t −α W− α2 ,1−α (−|x|t − 2 ).

x→0

x→0

x→0

This and Proposition 3.3 give the identity (7.13). For any ϕ(t) ∈ C0∞ (R+ ), R∂t1−α ϕ(t) exists and is continuous and further 0 It1−α ϕ(0) = 0, and thus by Lemma 2.7, L[R∂t1−α ϕ](z) = z 1−α L[ϕ](z). By Lemma 2.6, we obtain ∫ t  ∫ t  ¯ lim± L ∂x θ α (x, t − s)ϕ(s) ds (z) = lim± L ∂x θ α (x, t − s)R∂s1−α ϕ(s) ds (z) x→0 x→0 0 0     R 1−α 1 α−1 1−α = lim± L ∂x θ α (x, t) (z)L ∂t ϕ (z) = ∓ 2 z z L[ϕ](z) = L[∓ 12 ϕ](z), x→0

from which (7.14) follows. The identity (7.15) is direct from Theorem 3.7: for any x ∈ (0, 2), α

lim+ |θ α (x, t)| ≤ lim+ ct − 2 e−σt

t→0

t→0

Last, to show the identity (7.16), for t > 0:

− α 2−α

2

|x | 2−α

= 0.


287

α ∞ ∞ 1 (−|1 + 2m|t − 2 )k 2t α m=1 k=0 k!Γ(−k α2 + (1 − α))

α α ∞ ∞ ∞ 1 (−|1 − 2m|t − 2 )k (−t − 2 )k 1 − = 0. 2t α m=1 k=0 k!Γ(−k α2 + (1 − α)) 2t α k=0 k!Γ(−k α2 + (1 − α))

Last, the continuity of ∂x θ α (x, t) in x at x = 1 from Lemma 7.4 gives (7.16).



Now we can state a solution representation for problem (7.11). The uniqueness of the solution has to be shown independently.

Theorem 7.1 For piecewise continuous functions f, u_0, g_1 and g_2, the following representation gives a solution u to problem (7.11):

u(x, t) = Σ_{i=1}^4 u_i,

where the functions u_i are defined by

u_1(x, t) = ∫_0^1 (θα(x − y, t) − θα(x + y, t)) u_0(y) dy,
u_2(x, t) = −2 ∫_0^t ∂_x θ̄α(x, t − s) g_1(s) ds,
u_3(x, t) = 2 ∫_0^t ∂_x θ̄α(x − 1, t − s) g_2(s) ds,
u_4(x, t) = ∫_0^t ∫_0^1 [θ̄α(x − y, t − s) − θ̄α(x + y, t − s)] f(y, s) dy ds.

∫ =

0

1

2 2 ∂xx [θ α (x + y, t) − θ α (x − y, t)]u0 (y)dy = ∂xx u1 (x, t).

Next, we consider the term u2 . By Lemma 2.6, ∫ t ∫ t R 1−α φ(t − s)ϕ(s) ds = φ(t − s)R∂s1−α ϕ(s) ds s∂t 0

0

for any φ, ϕ ∈ AC(I) with φ(0) = ϕ(0) = 0. Then for g1 ∈ AC(I) with g1 (0) = 0, Lemma 7.1(iii) gives


∂tα u2 (x, t)

= =

−2∂tα −2∂tα



t

0



0

t

∂x θ¯α (x, t − s)g1 (s) ds ∂x θ α (x, t − s)R ∂s1−α g1 (s) ds,

since by (7.13), limt→0+ ∂x θ α (x, t) = 0. Thus, ∫ t −2 α (t − s)−α ∂x θ α (x, 0)R∂s1−α g1 (s)ds ∂t u2 (x, t) = Γ(1 − α) 0 ∫ s ∫ t −2 −α (t − s) ∂x ∂s θ α (x, s − ξ)R∂ξ1−α g1 (ξ) dξds. + Γ(1 − α) 0 0 Since limt→0+ ∂x θ α (x, t) = 0, by Theorem 2.1, we deduce ∫ t α ∂x sR∂tα θ α (x, t − s)R∂t1−α g1 (s) ds ∂t u2 (x, t) = −2 0 ∫ t 3 ∂xxx θ α (x, t − s)R∂t1−α g1 (s) ds = −2 0 ∫ t 2 2 ∂xx (∂x θ¯α )(x, t − s)g1 (s) ds = ∂xx u2 (x, t), = −2 0

where changing the order of derivatives R∂t1−α and ∂x is justified by the uniform convergence of the series for θ α and its fractional derivatives, see Lemma 7.4. The assertion holds also for piecewise continuous g1 by a density argument. The other terms u3 and u4 can be verified similarly. Thus, the representation satisfies the governing equation in (7.11). The initial condition follows from the construction of θ α and Lemma 7.2 by ∫ 1 lim+ u1 (x, t) = lim+ [θ α (x − y, t) − θ α (x + y, t)]u0 (y)dy t→0 t→0 0 ∫ ∞ = lim+ Gα (x − y, t)u˜0 (y)dy = u0 (x), t→0

−∞

for every point x ∈ Ω of continuity of u0 , where u˜0 is the periodic extension of u0 to R described above. The boundary conditions follow from Lemma 7.5, and we analyze only the term u2 . At x = 0, (7.14) in Lemma 7.5 implies ∫ t ∂x θ¯α (x, t − s)g1 (s)ds = g1 (t). lim+ u2 (x, t) = −2 lim+ x→0

x→0

0

At x = 1, (7.16) and Lebesgue’s dominated convergence theorem imply ∫ t ∂x θ¯α (x, t − s)g1 (s)ds = 0. lim− u2 (x, t) = −2 lim− x→1

x→1

0

The initial and boundary conditions for the other terms can be treated similarly. 


A similar argument gives a Neumann analogue of Theorem 7.1. Theorem 7.2 Let Ω = (0, 1). Then for piecewise smooth functions u0 , f , g1 and g2 , the solution u to α 2 ⎧ ⎪ ⎪ ∂t u − ∂xx u = f , in Ω × I, ⎪ ⎪ ⎨ −∂x u(0, ·) = g1, on I, ⎪ ⎪ ∂x u(1, ·) = g2, on I, ⎪ ⎪ ⎪ ⎪ u(0) = u0, in Ω ⎩ is represented by ∫ u(x, t) =

1

0



 θ α (x − y, t) + θ α (x + y, t) u0 (y) dy



−2

1

∫ θ¯α (x, t − s)g1 (s) ds − 2

0 t∫ 1

∫ + 0

0



1

0

θ¯α (x − 1, t − s)g2 (s) ds

 θ¯α (x − y, t − s) + θ¯α (x + y, t − s) f (y, s) dy ds.

7.2 Hölder Regularity in One Dimension

Now we turn to Hölder regularity of solutions of time-fractional diffusion, which have different regularity pickups in the space and time variables. Thus, anisotropic Hölder spaces will be used extensively. Let Ω ⊂ R^d be an open bounded domain and T > 0 be the final time. We denote by Q = Ω × (0, T] the usual parabolic cylinder. For any θ ∈ (0, 1) and α ∈ (0, 1), the notation C^{θ, θα/2}(Q) denotes the set of functions v defined in Q and having a finite norm

‖v‖_{C^{θ,θα/2}(Q)} = ‖v‖_{C(Q)} + [v]_{C^{θ,θα/2}(Q)},

with

‖v‖_{C(Q)} = sup_{(x,t)∈Q} |v(x, t)|,   [v]_{C^{θ,θα/2}(Q)} = [v]^{(θ)}_{x,Q} + [v]^{(θα/2)}_{t,Q},
[v]^{(θ)}_{x,Q} = sup_{(x,t),(y,t)∈Q, x≠y} |v(x, t) − v(y, t)| / |x − y|^θ,
[v]^{(θα/2)}_{t,Q} = sup_{(x,t),(x,t′)∈Q, t≠t′} |v(x, t) − v(x, t′)| / |t − t′|^{θα/2}.

The notations [·]^{(θ)}_{x,Q} and [·]^{(θα/2)}_{t,Q} denote the Hölder seminorms in space and time, respectively. We use extensively the space C^{2+θ, (2+θ)α/2}(Q) in this chapter.


Definition 7.1 A function v : Q → R is said to be in the space C 2+θ, 2 α (Q) if and only if the function v and its spatial derivatives D v(x, t), | | = 0, 1, 2, (left-sided) Djrbashian-Caputo fractional derivatives in time t based at t = 0 ∂tα v are continuous and the following norms are finite (with m = 0, 1):

=

∂tmα Dx v C(Q) + [v] 2+θ, 2+θ α ,

v 2+θ, 2+θ α 2+θ

C

[v]

C 2+θ,

(Q)

2

2+θ α 2 (Q)

C

| |+2m≤2

=

[∂tmα Dx v]

C θ,

| |+2m=2

θα 2 (Q)

+

2

(Q)

[Dx v]

| |=1

k+θ, k+θ α

( 1+θ 2 α) t,Q

.

k+θ

2 Further, the notation C0 (Q) denotes the subspace of C k+θ, 2 α (Q), k = 0, 1, 2, consisting of functions v(x, t) such that ∂tmα v|t=0 = 0, where 2m ≤ k.

It is worth noting that with the help of local coordinates and partition of unity, these spaces can also be defined on manifolds. In the case α = 1, the spaces k+θ k+θ C k+θ, 2 α (Q) coincide with the classical Hölder spaces C k+θ, 2 (Q) for standard parabolic problems [LSU68, Chapter I]. 2+θ In this section, we derive C 2+θ, 2 α (Q) regularity of the solutions u to onedimensional subdiffusion problems, and leave the more complex multi-dimensional case to Section 7.3. The overall analysis strategy parallels closely that of the standard parabolic case [LSU68]. This is achieved using mapping properties of the operators I and V in Hölder spaces. The analysis below begins with the real line R, and then half real line R+ = (0, ∞) and last a bounded interval with variable coefficients. In the first two cases, the analysis relies on sharp mapping properties of suitable integral operators in Hölder spaces, which in turn boils down certain estimates on the Wright function Wρ,μ (z) (cf. (3.24) in Chapter 3) appearing in the fundamental solutions, cf. Lemma 7.6. In the last case, the analysis employs freezing coefficient and reduction to the whole space and half space.

7.2.1 Subdiffusion in R First, we consider the case Ω = R. The analysis uses extensively the following pointwise bounds on Gα (x, t) and G α (x, t) and their mixed derivatives. α

Lemma 7.6 The following statements hold with r = |x|t − 2 , and σ(α) > 0. α

α

(i) t 2 Gα (x, t) + t 1− 2 G α (x, t) + t|∂x G α (x, t)| ≤ ce−σ(α)r

1 1− α 2

.

(ii) For any x  0 and t > 0, α

3 |∂t Gα (x, t)| ≤ ct −1− 2 (1 + r 1+ α )−1, |∂xxx G α (x, t)| ≤ ct −1−α (1 + r 2+ α )−1, 2

α

2

3 |∂xxt G α (x, t)| ≤ ct −2− 2 (1 + r 1+ α )−1 . 2

Proof In view of (7.4) and (3.26), we have


∂x Gα (x, t) = −(2t)−1W− α2 ,0 (−|x|t − 2 )sgn(x). Thus, the assertions in (i) follow directly from Theorem 3.8(i) and (iv) for the first two and the third, respectively. Next, the formulas (3.26) and (3.27) imply α

α

∂t Gα (x, t) = t − 2 −1W− α2 ,− α2 (−|x|t − 2 ),

α

3 ∂xxx G α (x, t) = −2−1 t −1−α W− α2 ,−α (−|x|t − 2 )sgn(x), α

α

3 ∂xxt G α (x, t) = 2−1 t −2− 2 W− α2 ,− α2 −1 (−|x|t − 2 ).

Then the first two estimates in (ii) follow from Theorem 3.8(iii). Now the identity (3.33) and Theorem 3.8(iii) lead to |W− α2 ,− α2 −1 (−r)| ≤ cr(1 + r 2+ α )−1 + c(1 + r 1+ α )−1 ≤ c(1 + r 1+ α )−1 . 2

2

2

This shows directly the third estimate, and completes the proof of the lemma.



We begin with several integral bounds on Gα (x, t) and G α (x, t). Lemma 7.7 For any δ > 0 and θ ∈ (0, 1), the following estimates hold: ∫t∫∞ c θ2α t ; (i) 0 −∞ |∂s Gα (x, s)||x| θ dxds ≤ θα   ∫ t ∫ 2 G (x, s)dx  (ii) 0  |x |>δ ∂xx ds ≤ c; α ∫t∫ 2 G (x, s)||x| θ dxds ≤ c δ θ ; (iii) 0 |x |δ |∂xxx α 1−θ α

Proof First, by Lemma 7.6(ii) and changing variables r = xs− 2 , we have ∫ t∫ ∞ ∫ t∫ ∞ α s−1− 2 |∂s Gα (x, s)||x| θ dxds ≤ c |x| θ dxds 2 α 1+ − 0 −∞ 0 −∞ 1 + (|x|s 2 ) α ∫ t ∫ ∞ θα rθ =c s−1+ 2 ds dr, 2 0 0 1 + r 1+ α which, upon integration, gives (i). Second, by (3.26) and Theorem 3.8(i)  ∫ α   2 ∂xx G α (x, t)dx  ≤ c| lim ∂x G α (x, t) − ∂x G α (δ, t)| = ct −1W− α2 ,0 (−δt − 2 ).  |x |>δ

x→∞

The differentiation relation (3.27) with n = 1 gives ∫ t ∫ t ∫  α d   2 W− α2 ,1 (−δs− 2 )ds. ∂xx G α (x, s)dx ds ≤ c  0 0 ds |x |>δ This and Theorem 3.8(i) show (ii). Third, (iii) follows from Lemmas 7.1(iii) and 7.6(ii) as ∫ t∫ ∫ δ ∫ ∞ dr 2 θ θ−1 |∂xx G α (x, s)||x| dxds ≤ c x dx ≤ θc δ θ . 2 |x |δ 0 δ 0 1 + (xs − 2 )2+ α ∫ ∞ ∫ ∞ r c θ−1 θ−2 ≤c δ . x dx dr ≤ 2 2+ 1 − θ δ 0 1+r α 

This completes the proof of the lemma. The next result bounds the volume potential V f in Hölder spaces. θα

Lemma 7.8 Let θ ∈ (0, 1), f ∈ C θ, 2 (Q) and f (x, t) ≡ 0 for |x| ≥ R, for some R > 0. Then the function u(x, t) = V f (x, t) satisfies 2

u C(Q) + ∂xx u C(Q) ≤ c( f C(Q) + [ f ](θ) ), x,Q

2 u](θ) [∂xx x,Q

+

( θα ) 2 [∂xx u] 2 t,Q



c[ f ](θ) . x,Q

Proof The bound on u C(Q) is direct from Lemma 7.1(i) and the identity (7.7). Next, by applying (7.7) again and Lemma 7.1(iii) (see also the proof of Lemma 7.3), 2 u as we can rewrite ∂xx ∫ t∫ 2 2 ∂xx u(x, t) = ∂xx G α (x − y, t − s)[ f (y, s) − f (x, s)]dyds. (7.17) 0

R

Now Lemmas 7.1(iii) and 7.7(i) imply ∫ t∫ c θ α (θ) 2 t 2 [f] . u C(Q) ≤ [ f ](θ) |∂xx G α (x, s)||x| θ dxds ≤

∂xx x,Q 0 x,Q θα R 2 u](θ) , we take any x , x ∈ R with x > x , and let To bound the seminorm [∂xx 1 2 2 1 x,Q

2 u(x , t) − ∂ 2 u(x , t) can be h := x2 − x1 . Using the identity (7.17), the difference ∂xx 2 1 xx split into (with K = {y ∈ R : |x1 − y| < 2h}) ∫ t∫ 2 2 2 u(x2, t) − ∂xx u(x1, t) =: ∂xx Gα (x1 − y, t − s)[ f (x1, s) − f (y, s)]dyds ∂xx K 0 ∫ t∫ 2 ∂xx Gα (x2 − y, t − s)[ f (y, s) − f (x2, s)]dyds + 0 K ∫ t∫ 2 ∂xx G α (x2 − y, t − s)[ f (x1, s) − f (x2, s)]dyds +

+

0

R\K

0

R\K

∫ t∫

2 2 [∂xx G α (x1 − y, t − s) − ∂xx G α (x2 − y, t − s)][ f (x1, s) − f (y, s)]dyds.

The four terms are denoted by Ii , i = 1, . . . , 4. For the terms I1 and I2 , the Hölder continuity of f and Lemma 7.7(iii) with δ = 2h and δ = 3h, respectively, imply


|I1 | + |I2 | ≤ chθ [ f ](θ) . x,Q

By Lemma 7.7(ii) with δ = h, we estimate I3 by ∫ t ∫    2 ∂xx G α (x, s)dx ds ≤ chθ [ f ](θ) . |I3 | ≤ chθ [ f ](θ)  x,Q

x,Q

|x |>h

0

Last, the mean value theorem and Lemma 7.7(iv) with δ = h give ∫ t∫ (θ) 3 |x| θ |∂xxx Gα (ζ, s)|dζds ≤ chθ [ f ](θ) . |I4 | ≤ ch[ f ] x,Q

0

x,Q

|x |>h

( θα )

2 u](θ) ≤ c[ f ](θ) . To bound [∂ 2 u] 2 , for any t , t ∈ I, These bounds show [∂xx 1 2 xx t,Q x,Q x,Q t2 > t1 , let τ = t2 − t1 , and we discuss the cases t1 > 2τ and t1 ≤ 2τ separately. If 2 u(x, t ) − ∂ 2 u(x, t ) can be split into t1 > 2τ, by (7.17), the difference ∂xx 2 1 xx ∫ ∫ t2 2 2 2 ∂xx u(x, t2 ) − ∂xx u(x, t1 ) = ∂xx G α (x − y, t2 − s)[ f (y, s) − f (x, s)]dsdy

∫ ∫ +

R

∫ +

0

R

t1

t1 −2τ t1 −2τ ∫

t1 −2τ

2 ∂xx G α (x − y, t1 − s)( f (x, s) − f (y, s))dsdy

R

2 2 [∂xx G α (x − y, t2 − s) − ∂xx G α (x − y, t1 − s)][ f (y, s) − f (x, s)]dyds.

Next, we bound the three terms, denoted by IIi , i = 1, 2, 3. For II1 , changing variables ξ = t2 − s and then applying Lemmas 7.1(iii) and 7.7(i) with t = τ lead to |II1 | ≤ cτ

θα 2

[ f ](θ) . x,Q

II2 can be bounded similarly. For II3 , the mean value theorem and Lemma 7.6(ii) α imply (with r = |x − y|s− 2 ) ∫ t ∫ ∞ αθ θα rθ |II3 | ≤ cτ[ f ](θ) s 2 −2 drds ≤ c[ f ](θ) τ 2 , 2 x,Q τ x,Q 1+ 0 1+r α since t > t1 > 2τ. In sum, for t1 > 2τ, we obtain 2 u] [∂xx

( αθ 2 ) t,Q

≤ c[ f ](θ) τ x,Q

θα 2

.

Next, we show the estimate for the case t1 ≤ 2τ. Indeed, ∫ ∞ ∫ t1 2 2 2 ∂xx u(x, t2 )−∂xx u(x, t1 ) = ∂xx Gα (x − y, t1 − s)[ f (x, s) − f (y, s)]dyds −∞ 0 ∫ ∞ ∫ t2 2 [ f (y, s) − f (x, s)]∂xx Gα (x − y, t2 − s)dsdy. + −∞

0


The bounds on the two terms, denoted by III1 and III2 , follow directly from Lemmas 7.1(iii) and 7.7(i) by θα

|III1 | + |III2 | ≤ c[ f ](θ) t2 2 ≤ c[ f ](θ) τ x,Q

θα 2

x,Q

.

This gives the assertion for t1 ≤ 2τ, and completes the proof of the lemma.



The next lemma treats the potential Iu0 . Lemma 7.9 Let u0 ∈ C 2+θ (R) and u0 (x) ≡ 0 for |x| ≥ R, for some R > 0. Then the function u = Iu0 satisfies 2

u C(Q) + ∂xx u C(Q) ≤ c u0 C 2 (R), 2 2 [∂xx u](θ) + [∂xx u] x,Q

( αθ 2 ) t,Q

2 ≤ c[∂xx u0 ](θ) . x,R

Proof The proof is similar ∫ to Lemma 7.8. We only prove the second estimate. From 2 u(x, t) = ∞ G (y, t)∂ 2 u (x − y)dy, we deduce the identity ∂xx xx 0 −∞ α ∫ ∞ 2 2 2 2 ∂xx u(x1, t) − ∂xx u(x2, t) = Gα (y, t)(∂xx u0 (x1 − y) − ∂xx u0 (x2 − y))dy. −∞

Thus, Lemma 7.1(i) and (7.6) imply that for any x2 > x1 , 2 2 2 |∂xx u(x1, t) − ∂xx u(x2, t)| ≤ [∂xx u0 ](θ) |x − x2 | θ . x,R 1

Next, we show Hölder regularity in time. Since Gα is even in x, by (7.6), ∫ ∞ ∫ ∞ u(x, t) = Gα (y, t)u0 (x − y)dy = Gα (y, t)u0 (x + y)dy −∞ −∞ ∫ ∞ 1 Gα (y, t)(u0 (x − y) + u0 (x + y))dy = 2 −∞ ∫ 1 ∞ Gα (y, t)(u0 (x − y) − 2u0 (x) + u0 (x + y))dy. = u0 (x) + 2 −∞ Now for any t2 > t1 , let τ = t2 − t1 . Suppose first τ > t1 . Then by the mean value theorem, there exists some ξ ∈ (t1, t2 ) such that 2 2 |∂xx u(x, t1 ) − ∂xx u(x, t2 )| ∫ ∞   1  2 =  (Gα (y, t1 ) − Gα (y, t2 ))∂xx (u0 (x − y) − 2u0 (x) + u0 (x + y))dy  2 −∞ ∫ ∞ 2 ≤ cτ[∂xx u0 ](θ) |∂t Gα (y, ξ)||y| θ dy, x,R −∞

which together with Lemma 7.6(ii) gives


∫ 2 |∂xx u(x, t1 )

≤cτξ

θ 2

α−1



2 ∂xx u(x, t2 )|

∫ 2 [∂xx u0 ](θ) x,R

2 cτ[∂xx u0 ](θ) x,R





rθ 1+r

0

α



−∞

ξ −1− 2

α

|y| θ dy

1 + (|y|ξ − 2 )1+ α 2

θ

1+ α2

2 dr ≤ cτ 2 α [∂xx u0 ](θ) , x,R

where the last step follows from ξ ∈ (t1, t2 ). The case t1 ≤ τ follows similarly.



Now we can state the unique solvability of problem (7.1) in R. θα

Theorem 7.3 Let θ ∈ (0, 1), and f ∈ C θ, 2 (Q), u0 ∈ C 2+θ (R), and the condition 2+θ (7.2) be fulfilled. Then there exists a unique solution u ∈ C 2+θ, 2 α (Q) of problem (7.1), given in (7.3), and it satisfies  

u 2+θ, 2+θ α ≤ c f θ, θ2 α + u0 C 2+θ (R) . C

C

(Q)

2

(Q)

Proof The regularity estimate is direct from Lemmas 7.8 and 7.9 and interpolation 2+θ inequalities. It suffices to show the uniqueness of the solution u ∈ C 2+θ, 2 α (Q) or, equivalently, the homogeneous problem has only the zero solution:  2 u = 0, in Q, ∂tα u − ∂xx u(0) = 0,

in R,

following the strategy of [Sol76], Clearly, there exists a solution u ∈ C 2+θ, 2 α (Q). Let χR ∈ C ∞ (R) be a smooth cut-off function satisfying χR (x) = 1 for |x| ≤ R and χR (x) = 0 for |x| ≥ 2R and 2+θ

 dk     k χR (x) ≤ cR−k , dx

x ∈ R, k = 1, 2, . . . .

Let u R (x, t) = u(x, t) χR (x) ∈ C 2+θ, 2 α (Q). It has a finite support and satisfies  2 u R = fR, in R × (0, ∞), ∂tα u R − ∂xx u R (x, 0) = 0, in R, 2+θ

2 χ . It can be verified directly that with fR = 2∂x u∂x χR + u∂xx R

fR

C θ,

θα 2 (Q)

≤ cR−θ u

C 2+θ,

2+θ α 2 (Q)

.

By the properties of the functions u R and χR , using Laplace transform, we obtain ∫ t∫ G α (x − y, t − s) fR (y, s)dyds. u R (x, t) = 0

R

Since fR satisfies the conditions of Lemma 7.8, u R C(Q) ≤ cR−θ u

C 2+θ,

2+θ α 2 (Q)

Letting R to infinity gives u R ≡ 0, which completes the proof of the theorem.

.




7.2.2 Subdiffusion in R+ Consider the following initial boundary value problems on the positive real semi-axis Ω = R+ and Q = Ω × I: 2 ⎧ ⎪ ∂tα u − ∂xx u = f , in Q, ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ u(x, 0) = u (x), in Ω, 0

⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩ and

u(0, t) = g(t), u(x, t) → 0

on I,

(7.18)

as x → ∞, ∀t ∈ I,

2 ⎧ ⎪ ∂tα u − ∂xx u = f , in Q, ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ u(x, 0) = u (x), in Ω, 0

⎪ ∂x u(0, t) = g(t), on I, ⎪ ⎪ ⎪ ⎪ ⎪ u(x, t) → 0 as x → ∞, ∀t ∈ I. ⎩ We shall assume the following compatibility condition:  u0 (0) = g(0), for (7.18), ∂x u0 (0) = g(0), for (7.19),

(7.19)

(7.20)

and there exists some R > 0 such that u0 (x), f (x, t) ≡ 0,

if x > R, ∀t ∈ I.

(7.21)

First, we discuss the compatibility condition (7.20) for problem (7.18). Without loss of generality, it suffices to study g ≡ 0. By taking the odd extension of u0 to R, i.e., u0 (−x) = −u0 (x) for x > 0 and similarly for f (x, t), then the solution u to problem (7.18) is given by ∫ t∫ ∞ ∫ ∞ Gα (x − y, t)u0 (y) dy + G α (x − y, t − s) f (y, s)dyds u(x, t) = 0 −∞ ∫−∞ ∞ = (Gα (x − y, t) − Gα (x + y, t))u0 (y) dy 0 ∫ t∫ ∞ (Gα (x − y, t) − Gα (x + y, s)) f (y, s) dyds. + 0

0

Clearly, for any integrable and bounded u0 and f , u is continuous for any t > 0. It is natural to ask what happens as t → 0+ . It turns out that this depends very much on the compatibility condition (7.20). If u0 is continuous and compatible, i.e., u0 (0) = 0, then u is continuous up to t = 0. Now suppose that u0 (0)  0, when f ≡ 0. Then u is discontinuous at (x, t) = (0, 0). To determine the nature of the discontinuity, we rewrite it as


297



x

−∞

Gα (y, t)u0 (x − y) dy −



x

Gα (y, t)u0 (y − x) dy.

Let ψ(x, t) be the solution corresponding to ∫ xthe case u0 (x) =∫ χ∞R+ (x), the characteristic function of the set R+ , i.e., ψ(x, t) = −∞ Gα (y, t) dy − x Gα (y, t) dy. By the identity (7.6), we can simplify the expression to ∫ ∞ ∫ ∞ α α ψ(x, t) = 1 − 2 Gα (y, t) dy = 1 − t − 2 W− α2 ,1− α2 (−|y|t − 2 ) dy, x

x

and obtain ψ in the form of a similarity solution (for x > 0) ∫ ∞ α ψ(x, t) = Ψ(xt − 2 ) with Ψ(x) = 1 − W− α2 ,1− α2 (−y) dy. x

Note that the identities lim ψ(x, t) = Ψ(0) = 0,

x→0+

lim ψ(x, t) = Ψ(∞) = 1,

t→0+

∀t > 0, ∀x > 0.

Thus, $\psi(x,t)$ is not continuous at $(0,0)$, and it characterizes precisely the singular behavior caused by incompatible problem data. The solution $u$ can be written as $u(x,t) = u_0(0)\psi(x,t) + w(x,t)$, where $w$ is the solution to problem (7.18) with initial data $u_0(x)-u_0(0)$, and is thus continuous at $(0,0)$. The preceding discussion applies also to problem (7.19). These observations motivate the compatibility condition (7.20).

Now we derive Hölder regularity for problems (7.18) and (7.19), with $u_0\equiv 0$ and $f\equiv 0$, since the general case can be analyzed using the linearity of the problems. Problems (7.18) and (7.19) can be analyzed similarly, and thus we discuss only the latter, i.e., $u\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ for $g(t)\in C^{\frac{1+\theta}{2}\alpha}(I)$. Specifically, applying the Laplace transform (cf. Section A.3.1 for the definition) to problem (7.19) and Lemma 2.9, we obtain the following ODE for $\hat u(x,z)$:
$$\begin{cases} z^\alpha\hat u(x,z) - \partial_{xx}^2\hat u(x,z) = 0, & x\in\Omega,\\ \hat u(x,z)\to 0, & \text{as } x\to\infty,\\ \partial_x\hat u(0,z) = \hat g(z). \end{cases}$$
The unique solution $\hat u(x,z)$ is given by $\hat u(x,z) = -z^{-\frac{\alpha}{2}}e^{-xz^{\frac{\alpha}{2}}}\hat g(z)$, which together with the inverse Laplace transform and Proposition 3.3 yields
$$u(x,t) = \int_0^t \widetilde G_\alpha(x,t-s)\,g(s)\,ds,\tag{7.22}$$
with the corresponding fundamental solution $\widetilde G_\alpha(x,t)$ given by
$$\widetilde G_\alpha(x,t) = -t^{\frac{\alpha}{2}-1}W_{-\frac{\alpha}{2},\frac{\alpha}{2}}(-xt^{-\frac{\alpha}{2}}) = -2\bar G_\alpha(x,t).$$
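For concreteness, here is a small numerical sketch of the representation (7.22) as reconstructed above: the kernel $\widetilde G_\alpha$ is evaluated through the power series of the Wright function (again reliable only for moderate arguments), and the time convolution is approximated by a composite midpoint rule. The boundary datum $g(t)=t$, the evaluation point and the order $\alpha$ are arbitrary illustrative choices, not data from the text.

```python
import numpy as np
from scipy.special import rgamma

def wright(z, rho, mu, terms=80):
    """Wright function W_{rho,mu}(z) by its power series (moderate |z| only)."""
    s, term = 0.0, 1.0                 # term holds z^k / k!
    for k in range(terms):
        s += term * rgamma(rho * k + mu)
        term *= z / (k + 1.0)
    return s

def G_tilde(x, t, alpha):
    """Boundary kernel of (7.22): -t^(a/2-1) W_{-a/2,a/2}(-x t^(-a/2))."""
    return -t ** (alpha / 2 - 1) * wright(-x * t ** (-alpha / 2), -alpha / 2, alpha / 2)

def u_boundary(x, t, g, alpha, n=400):
    """Composite midpoint rule for u(x,t) = int_0^t G_tilde(x, t-s) g(s) ds."""
    s = (np.arange(n) + 0.5) * t / n
    return sum(G_tilde(x, t - si, alpha) * g(si) for si in s) * (t / n)

# usage (illustrative data): Neumann datum g(t) = t, evaluated at one point
print(u_boundary(x=0.5, t=1.0, g=lambda s: s, alpha=0.6))
```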


Similarly, the solution $u(x,t)$ to problem (7.18) is given by
$$u(x,t) = \int_0^t \check G_\alpha(x,t-s)\,g(s)\,ds, \qquad\text{with } \check G_\alpha(x,t) = \mathcal L^{-1}\big[e^{-xz^{\frac{\alpha}{2}}}\big](t) = -2\partial_x \bar G_\alpha(x,t).$$
First, we give several integral estimates on $\widetilde G_\alpha$.

Lemma 7.10 Let $\theta\in(0,1)$, $x\in\Omega$ and $t\in I$. Then the following estimates hold:
(i) $\int_0^\delta |\partial_x\widetilde G_\alpha(x,s)|\,s^{\frac{1+\theta}{2}\alpha}\,ds \le c\,\delta^{\frac{1+\theta}{2}\alpha}$;
(ii) $\int_\delta^\infty |\partial^3_{xxs}\widetilde G_\alpha(x,s)|\,s^{\frac{1+\theta}{2}\alpha}\,ds \le c\,\delta^{-1+\frac{\theta}{2}\alpha}$;
(iii) $\int_0^\delta |\partial^2_{xx}\widetilde G_\alpha(x,s)|\,s^{\frac{1+\theta}{2}\alpha}\,ds \le c\,\delta^{\frac{\theta}{2}\alpha}$;
(iv) $\int_\delta^\infty |\partial^3_{xxx}\widetilde G_\alpha(x,s)|\,s^{\frac{1+\theta}{2}\alpha}\,ds \le c\,\delta^{\frac{\theta-1}{2}\alpha}$;
(v) $\Big|\int_0^t (t-s)^{-\alpha}\,\partial_x^m\widetilde G_\alpha(x,s)\,ds\Big| \le c\,t^{-\frac{m+1}{2}\alpha}\,e^{-\kappa(xt^{-\frac{\alpha}{2}})^{\frac{1}{1-\frac{\alpha}{2}}}}$, for $m=0,1$.

α

Proof The proof uses Lemma 7.6. Let r = xs− 2 . By Lemma 7.6(i), ∫ δ ∫ δ 1+θ 1+θ α (x, s)|s 1+θ 2 α ds ≤ c |∂x G s−1 s 2 α ds ≤ cδ 2 α . 0

0

Similarly, by Lemma 7.6(ii), ∫ ∞ ∫ 1+θ 3  |∂xxs Gα (x, s)|s 2 α ds ≤ c δ



δ

α

s−2− 2 s

1+θ 2

α

θ

ds ≤ cδ−1+ 2 α .

2 G 2 G (x, t) = α (x, t) = −2∂xx Similarly, (iii) and (iv) follow from the identity ∂xx α −2∂t Gα (x, t), cf. Lemma 7.1(iii), and Lemma 7.6(ii) as ∫ δ ∫ δ 1+θ α 1+θ θ 2  |∂xx s−1− 2 s 2 α ds ≤ cδ 2 α, Gα (x, s)|s 2 α ds ≤ c 0 ∫0 ∞ ∫ ∞ 1+θ 1+θ θ −1 3  |∂xxx s−1−α s 2 α ds ≤ cδ 2 α . Gα (x, s)|s 2 α ds ≤ c δ

δ

The last estimate is direct from Lemma 7.6(i) and the identity (3.28). □

The next result gives several estimates on the surface potential $\int_0^t\widetilde G_\alpha(x,t-s)g(s)\,ds$.

Lemma 7.11 Let $\theta\in(0,1)$ and $g\in C^{\frac{1+\theta}{2}\alpha}(I)$ with $g(0)=0$. Then the function $u(x,t)=\int_0^t\widetilde G_\alpha(x,t-s)g(s)\,ds$ satisfies
$$\|\partial^2_{xx}u\|_{C(Q)} \le c\big(\|g\|_{C(I)} + [g]^{(\frac{1+\theta}{2}\alpha)}_{t,I}\big), \qquad [\partial^2_{xx}u]^{(\theta)}_{x,Q} + [\partial^2_{xx}u]^{(\frac{\theta}{2}\alpha)}_{t,Q} \le c\,\|g\|_{C^{\frac{1+\theta}{2}\alpha}(I)}.$$


Proof Since g(0) = 0, we may extend g(t) by zero for t < 0. Then by (7.8), ∫ t ∫ t 2 α (x, t − s)(g(s) − g(t))ds + g(t) α (x, t − s)ds u(x, t) = ∂xx G ∂xx G ∂xx −∞ −∞ ∫ t α (x, t − s)(g(s) − g(t))ds. = ∂xx G −∞

2 u

Then the bound on ∂xx C(Q) follows by (with t0 > 0 fixed)

∫ 2 |∂xx u(x, t)| ≤

t0

0

α (x, s)|s |∂xx G

1+θ 2

α

ds[g]

( 1+θ 2 α) t,I

∫ + t0



2  |∂xx Gα (x, s)|ds g C(I ) .

The first term can be bounded using Lemma 7.10(iii), and the second by Lemmas 2 7.1(iii) and 7.6(ii). Next, for any x1, x2 ∈ Ω, with h = |x2 − x1 | α , we have ∫ h 2 2 2  u(x2, t) − ∂xx u(x1, t) = ∂xx Gα (x2, s)(g(t − s) − g(t))ds ∂xx ∫ +

0

h

2  ∂xx Gα (x1, s)(g(t) − g(t − s))ds

0

∫ +



h

2  2  [∂xx Gα (x2, s) − ∂xx Gα (x1, s)](g(t − s) − g(t))ds :=

3

Ii .

i=1

The term I1 can be bounded by Lemma 7.10(iii) as ∫ h 1+θ ( 1+θ α) ( 1+θ α) 2  |∂xx Gα (x2, s)|s 2 α ds ≤ c[g] 2 |x2 − x1 | θ . |I1 | ≤ c[g] 2 t,I

t,I

0

The term I2 can be bounded similarly. By the mean value theorem and Lemma 7.10(iv), we have |I3 | ≤ c[g] 2 u] we bound [∂xx

( θ2 α) t,Q

( 1+θ 2 α) t,I

x,Q

∫ + +

2 ∂xx u(x, t1 )

−∞ 2t1 −t2

−∞

t,I

. Last

=

t2

2t1 −t2

2  ∂xx Gα (x, t2 − s)(g(s) − g(t2 ))ds

2  ∂xx Gα (x, t1 − s)(g(t1 ) − g(s))ds

2t1 −t2 ∫ 2t1 −t2

∫ +

t1



( 1+θ 2 α)

. For t1, t2 ∈ I and t2 > t1 , let τ = t2 − t1 . If t1 > 2τ, then ∫

2 u(x, t2 ) ∂xx

2 u](θ) ≤ c[g] |x2 − x1 | θ . This gives [∂xx

2  ∂xx Gα (x, t1 − s)ds(g(t1 ) − g(t2 )) 2  2  (∂xx Gα (x, t2 − s) − ∂xx Gα (x, t1 − s))(g(s) − g(t2 ))ds :=

By Lemma 7.10(iii) with δ = τ,

4

i=1

IIi .

$$|II_1| + |II_2| \le c\,\tau^{\frac{\theta}{2}\alpha}\,[g]^{(\frac{1+\theta}{2}\alpha)}_{t,I}.$$

To bound II3 , we apply Lemma 7.6(ii) and deduce ∫ ∞ 1+θ ( 1+θ α) 2  |II3 | ≤ cτ 2 α [g] 2 |∂xx Gα (x, s)|ds t,I τ ∫ ∞ 1+θ α θ ( 1+θ α) ( 1+θ α) s−1− 2 ds ≤ cτ 2 α [g] 2 . ≤ cτ 2 α [g] 2 t,I

t,I

τ

Finally, the mean value theorem and Lemma 7.10(ii) with δ = τ lead to |II4 | ≤ θ

( θ α)

( 1+θ α)

2 u] 2 . The case t ≤ 2τ can be cτ 2 α [g] 2 . This shows the bound on [∂xx 1 t,I t,Q analyzed similarly. Combining the preceding estimates completes the proof. 

Now we can state the following existence, uniqueness and Hölder regularity estimate for problem (7.19).

Theorem 7.4 Let $\theta\in(0,1)$, and $g\in C^{\frac{1+\theta}{2}\alpha}(I)$ with $g(0)=0$. Then there exists a unique solution $u(x,t)\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ of problem (7.19) with $u_0\equiv 0$, $f\equiv 0$, given by (7.22), and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\,\|g\|_{C^{\frac{1+\theta}{2}\alpha}(I)}.$$

Proof First, we prove the a priori estimate. By Lemma 7.11 and (7.19),

∂tα u

C θ,

θα 2 (Q)

2 ≤ c ∂xx u

C θ,

θα 2 (Q)

≤ c g

C

1+θ α 2 (I )

.

Since u0 = 0, u C(Q) ≤ c ∂tα u C(Q) . Then by interpolation, we obtain the desired estimate from Lemma 7.11. Next, we verify that the representation u in (7.22) solves problem (7.19). Recall the following identity: ∫ t ∫ t R α ∂t φ(t − s)ϕ(s)ds = ϕ(t − s)R∂sα φ(s)ds + ϕ(t) lim+ 0 Is1−α φ(s) 0

0

s→0

and ∂tα φ(t) = R∂tα φ(t), if φ(0) = 0. Then Lemma 7.10(v) leads to ∫ t α α (x, s)ds + g(t) lim 0 It1−α G α (x, t) ∂t u(x, t) = g(t − s)R∂sα G t→0+ 0 ∫ t α (x, s)ds. g(t − s)R∂sα G = 0

2 u(x, t). By Lemma 7.6(i), lim This directly implies ∂tα u(x, t) = ∂xx t→0+ u(x, t) = 0 and u(x, t) → 0 if x → ∞. Last we verify the boundary condition at x = 0. Note that


|∂x u(x, t) − g(t)| ∫ ∞ ∫ x  α (x, s)(g(t − s) − g(t))ds| ≤| ∂x Gα (x, s)(g(t − s) − g(t))ds| + | ∂x G x 0 ∫ ∞ ∫ x ( 1+θ 2 α)  α (x, s)|s 1+θ 2 α ds. ≤2 g C(I ) |∂x Gα (x, s)|ds + c[g] |∂x G t,I

x

0

Then Lemma 7.6(i) and Lemma 7.10(i) with δ = x give α

|∂x u(x, t) − g(t)| ≤ c g

C

1+θ α 2 (I)

(x 1− 2 + x

1+θ 2

α

).

Thus, the function $u$ in (7.22) satisfies $\partial_x u(x,t)|_{x=0} = g(t)$, and it solves problem (7.19). The uniqueness can be proved as in Theorem 7.3. □

Now we state the result in the general case, i.e., $f, u_0\not\equiv 0$.

Theorem 7.5 Let $\theta\in(0,1)$, let conditions (7.20) and (7.21) hold, and let $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $u_0\in C^{2+\theta}(\overline\Omega)$ and $g\in C^{\frac{1+\theta}{2}\alpha}(I)$. Then for any $T>0$, there exists a unique solution $u$ to problem (7.19) and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\big(\|g\|_{C^{\frac{1+\theta}{2}\alpha}(I)} + \|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|u_0\|_{C^{2+\theta}(\Omega)}\big).$$

Proof Let f¯ and u¯0 be extensions of f and u0 , respectively, to x < 0 such that f¯ and u¯0 are finitely supported and



C θ,

θα 2 (R×I )

≤ c f

C θ,

θα 2 (Q)

and

u¯0 C 2+θ (R) ≤ c u0 C 2+θ (Ω) .

Now consider the following Cauchy problem: 2 ⎧ ∂ α u¯ − ∂xx u¯ = f¯, in R × I, ⎪ ⎪ ⎨ t ⎪ u(0) ¯ = u¯0, in R, ⎪ ⎪ ⎪ u(x, ¯ t) → 0, if |x| → ∞, ∀t ∈ I. ⎩

By Theorem 7.3, there is a unique solution u¯ ∈ C 2+θ, 2 α (R × I) and  

u

¯ 2+θ, 2+θ α ≤ c f¯ θ, θ2 α + u¯0 C 2+θ (R) 2 C (R×I) C (R×I)   ≤ c f θ, θ2 α + u0 C 2+θ (Ω) . 2+θ

C

(Q)

¯ t), The solution u(x, t) to problem (7.19) can be split into u(x, t) = u(x, ¯ t) + u(x, where u¯ solves problem (7.19) with f , u0 ≡ 0, and satisfies (7.20). Hence, we can bound u¯ by Theorem 7.4. This completes the proof.  An analogous result holds for problem (7.18). θ

Theorem 7.6 Let $\theta\in(0,1)$, let conditions (7.20) and (7.21) hold, and let $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $u_0\in C^{2+\theta}(\overline\Omega)$, $g\in C^{\frac{2+\theta}{2}\alpha}(I)$. Then for any $T>0$, there exists a unique solution $u$ to problem (7.18) and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\big(\|g\|_{C^{\frac{2+\theta}{2}\alpha}(I)} + \|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|u_0\|_{C^{2+\theta}(\Omega)}\big).$$

7.2.3 Subdiffusion on bounded intervals

Now we study subdiffusion on bounded intervals, with variable coefficients and low-order terms. We study the following subdiffusion model on the interval $\Omega=(0,1)$:
$$\begin{cases}\partial_t^\alpha u - L(t)u = f, & \text{in } Q,\\ -\partial_x u(0,t) = g_1(t), & \text{on } I,\\ \partial_x u(1,t) = g_2(t), & \text{on } I,\\ u(x,0) = u_0(x), & \text{in } \Omega,\end{cases}\tag{7.23}$$
where the (possibly time-dependent) elliptic operator $L(t)$ is defined by
$$L(t)u(x,t) = a_0(x,t)\,\partial_{xx}^2 u(x,t) - a_1(x,t)\,\partial_x u(x,t) - a_2(x,t)\,u(x,t).$$
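Although numerical schemes are not the subject of this chapter, it may help to see problem (7.23) concretely. The sketch below is a minimal L1 finite-difference time-stepping solver for the special case of constant coefficients $a_0=1$, $a_1=a_2=0$ and homogeneous Neumann data $g_1=g_2=0$; the L1 weights are the standard ones, and the grid sizes and the test initial datum are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np
from math import gamma

def solve_subdiffusion_L1(u0, f, alpha=0.5, T=1.0, M=50, N=200):
    """Implicit L1 scheme for d_t^alpha u - u_xx = f on (0,1), homogeneous Neumann BCs."""
    x = np.linspace(0.0, 1.0, M + 1)
    h = x[1] - x[0]
    tau = T / N
    c0 = tau ** (-alpha) / gamma(2.0 - alpha)
    # L1 weights b_j = (j+1)^(1-alpha) - j^(1-alpha), j = 0, ..., N-1
    b = np.arange(1, N + 1) ** (1.0 - alpha) - np.arange(0, N) ** (1.0 - alpha)
    # second-difference matrix with homogeneous Neumann BCs (ghost-point reflection)
    A = np.zeros((M + 1, M + 1))
    for i in range(1, M):
        A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
    A[0, 0], A[0, 1] = -2.0, 2.0
    A[M, M], A[M, M - 1] = -2.0, 2.0
    A /= h ** 2
    U = [np.asarray(u0, dtype=float)]
    lhs = c0 * b[0] * np.eye(M + 1) - A
    for n in range(1, N + 1):
        hist = c0 * b[n - 1] * U[0]
        for k in range(1, n):
            hist -= c0 * (b[k] - b[k - 1]) * U[n - k]
        rhs = f(x, n * tau) + hist
        U.append(np.linalg.solve(lhs, rhs))
    return x, U[-1]

# usage: u0(x) = cos(pi x) is Neumann-compatible; zero source, just watch it decay
x, uT = solve_subdiffusion_L1(u0=np.cos(np.pi * np.linspace(0, 1, 51)),
                              f=lambda x, t: np.zeros_like(x), alpha=0.6)
```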

The next result gives the Hölder regularity of the solution $u$.

Theorem 7.7 Let $\theta\in(0,1)$, and let $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $u_0\in C^{2+\theta}(\overline\Omega)$, $g_i(t)\in C^{\frac{1+\theta}{2}\alpha}(I)$, $i=1,2$, $a_0(x,t)\ge\delta_1>0$ in $\overline Q$, $a_i(x,t)\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $i=0,1,2$, and let the compatibility conditions $g_1(0)=-\partial_x u_0(0)$ and $g_2(0)=\partial_x u_0(1)$ hold. Then there exists a unique solution $u(x,t)\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ of problem (7.23) and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\Big(\|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|u_0\|_{C^{2+\theta}(\Omega)} + \sum_{i=1}^2\|g_i\|_{C^{\frac{1+\theta}{2}\alpha}(I)}\Big),\tag{7.24}$$
where the constant $c$ depends only on $\Omega$ and $\|a_i\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)}$, $i=0,1,2$.

We prove the theorem by mathematical induction. We first prove the result for the case $t\in[0,\tau]$ for small $\tau<\tau_0$, with $\tau_0$ to be determined, and then extend the solution $u$ from $[0,\tau]$ to $[0,T]$ by repeating the procedure. Below let $Q_\tau = \Omega\times(0,\tau]$.

Lemma 7.12 Let the conditions of Theorem 7.7 hold. Then there exists a unique solution $u(x,t)\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q_\tau)$ of problem (7.23) for any $\tau\in(0,\tau_0]$ and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q_\tau)} \le c\Big(\|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q_\tau)} + \|u_0\|_{C^{2+\theta}(\Omega)} + \sum_{i=1}^2\|g_i\|_{C^{\frac{1+\theta}{2}\alpha}[0,\tau]}\Big),\tag{7.25}$$
where the constant $c$ depends only on $\Omega$ and $\|a_i\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q_\tau)}$, $i=0,1,2$.

Proof If the following conditions hold

$$f(x,t)\in C_0^{\theta,\frac{\theta}{2}\alpha}(Q_\tau), \qquad g_i\in C_0^{\frac{1+\theta}{2}\alpha}[0,\tau],\ i=1,2, \qquad u_0\equiv 0,\tag{7.26}$$

then the desired assertion follows directly from Theorems 7.3 and 7.5, using the arguments of [LSU68, Chapter IV, Sections 4–7]. This is achieved by freezing the coefficients and then applying the contraction mapping theorem (after lengthy computations). The detailed argument can also be found in Lemma 7.21 below. To remove the conditions in (7.26), we follow the argument of [LSU68, Chapter IV, Section 4] using Hestenes-Whitney extension, and extend the functions u0 , f and ai from Ω to R with compact supports, denoted by u¯0 , f¯ and a¯i , respectively, such that

u¯0 C 2+θ (R) ≤ c u0 C 2+θ (Ω),

a¯i

C θ,



C θ,

θα 2 (R×[0,τ])

θα 2 (R×[0,τ])

≤ c ai

C θ,

≤ c f

C θ,

,

,

θα 2 (Q τ )

i = 0, 1, 2.

θα 2 (Q τ )

2 u¯ − a¯ ∂ u¯ − a¯ u¯ for (x, t) ∈ R × [0, τ]. Then the function Let f˜ := f¯ + a¯0 ∂xx 0 1 x 0 2 0 θ θ, α f˜ ∈ C 2 (R × [0, τ]) has a finite support and satisfies

, ≤ c u0 C 2+θ (Ω) + f θ, θ2 α

f˜ θ, θ2 α C

(R×[0,τ])

where c depends on ai

C θ,



θα 2 (Q τ )

C

(Q τ )

, i = 0, 1, 2. Let w be the solution to

2 2 ∂tα w − ∂xx w = f˜ − ∂xx u¯0,

w(0) = u¯0,

in R × [0, τ],

in R.

By Theorem 7.3, there exists a unique solution w ∈ C 2+θ, 2 α (R × [0, τ]) and

.

w 2+θ, 2+θ α ≤ c u0 C 2+θ (Ω) + f θ, θ2 α 2+θ

C

2

C

(R×[0,τ])

(Q τ )

Now we look for the solution u of problem (7.23) in the form u = w + v, where the function v solves 2 ⎧ ∂tα v − a0 ∂xx v + a1 ∂x v + a2 v = f ∗, ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ −∂ v(0, t) = g ∗, x

⎪ ⎪ ⎪ ⎪ ⎪ ⎩

1

in Qτ , on [0, τ],

∂x v(1, t) = g2∗, on [0, τ], v(·, 0) = 0, in Ω,

2 w − a ∂ w − a w − f˜ + ∂ 2 u − ∂ 2 w, g ∗ = g + ∂ w(0, t) and with f ∗ = f + a0 ∂xx 1 x 2 1 x xx 0 xx 1 ∗ g2 = g2 − ∂x w(1, t). By the very construction of these functions, we have


f ∗

C θ,

θα 2 (Q τ )

≤c f

+

2

gi∗

θ C θ, 2 α (Q τ )

1+θ α 2 [0,τ]

C

i=1

+ u0 C 2+θ (Ω) +

2

gi

C

i=1

where the constant c depends on ai

C θ,

θα 2 (Q τ )

1+θ α 2 [0,τ]

,

, i = 0, 1, 2. Further, the conditions

gi∗ (0) = f ∗ (x, 0) = 0 hold. Thus, the problem data for v satisfy the compatibility conditions in (7.26), and the preceding discussions indicate that there exists a unique 2+θ solution v ∈ C 2+θ, 2 α (Qτ ) with the desired estimate. Now the lemma follows from the properties of v and w.  Now we can present the proof of Theorem 7.7. Proof By Lemma 7.12, it suffices to prove that problem (7.23) has a unique solution 2+θ u for t ∈ [τ, T]. First, we construct a function u¯ ∈ C 2+θ, 2 α (Q) by 2 ⎧ ∂tα u¯ − a0 ∂xx u¯ + a1 ∂x u¯ + a2 u¯ = f , in Q, ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ −∂x u(0, ¯ t) = g1, on I, ⎪ ∂ u(1, t) = g2, on I, x¯ ⎪ ⎪ ⎪ ⎪ u¯ = u, in Qτ , ⎩

where u ∈ C 2+θ, 2 α (Qτ ) is the unique solution to problem (7.23) for t ∈ [0, τ], τ < τ0 , given in Lemma 7.12. Set T = 2τ, and let u˜ and a˜i be the extensions of u and ai from Ω to R with finite supports and 2+θ

u

˜

C 2+θ,

2+θ α 2 (R×[0,τ])

a˜i

C θ,

θα 2 (R×[0,τ])

Let g(x, t) be defined by



g(x, t) =

≤ c u

C 2+θ,

≤ c ai

C θ,

2 u, ∂tα u˜ − ∂xx ˜ g(x, τ),

,

2+θ α 2 (Q τ )

,

θα 2 (Q τ )

i = 0, 1, 2.

if (x, t) ∈ R × [0, τ], if (x, t) ∈ R × [τ, 2τ].

θ

2 u˜ ∈ Then g ∈ C θ, 2 α (Q2τ ). Indeed, by the regularity u˜ ∈ C 2+θ, 2 α (Qτ ), ∂tα u, ˜ ∂xx θ θ θ C θ, 2 α (Qτ ). Thus, g ∈ C θ, 2 α (Qτ ), and the construction of g ensures g ∈ C θ, 2 α (Q2τ ):

[g](θ)

x,Q 2τ

≤ [g](θ)

x,Q τ

2+θ

and [g] 

( θ2 α)

t,Q 2τ

≤ [g]

( θ2 α) t,Q τ

. Let w solve

2 ∂tα w − ∂xx w = g,

w(0) = u(0), ˜

in R × (0, 2τ), in R.

By the properties of the functions u˜ and g(x, t), Theorem 7.3 implies that there 2+θ ≤ exists a unique solution w ∈ C 2+θ, 2 α (R × [0, 2τ]), with w 2+θ, 2+θ α C

2

(R×[0,2τ])


c u 2+θ, 2+θ α , and w = u˜ in R × [0, τ]. Now we look for the solution u¯ of 2 C (R×[0,τ]) problem (7.23) of the form u¯ = w + v, where v satisfies 2 ⎧ ∂tα v − a0 ∂xx v + a1 ∂x v + a2 v = f ∗, in Q2τ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ −∂x v(0, t) = g1∗ (t), on (0, 2τ],

∂x v(1, t) = g2∗ (t),

⎪ ⎪ ⎪ ⎪ ⎪ ⎩

v = 0,

on (0, 2τ],

in Qτ ,

where the last line is due to the construction of w, and f ∗ and gi∗ are defined by 2 w − a ∂ w − a w − ∂ 2 u − ∂ 2 w, g ∗ = g + ∂ w(0, t) and f ∗ = f − ∂tα w + a0 ∂xx 1 x 2 1 x xx 0 xx 1 ∗ g2 = g2 − ∂x w(1, t). By the construction of w, f ∗ (x, t) = gi∗ (t) = 0, for any t ∈ [0, τ]. Next, let t¯ = t − τ, and shifted quantities: v¯ (x, t¯) = v(x, t¯ + τ), for all t¯ ∈ [−τ, τ], and a¯i (x, t¯), f¯∗ (x, t¯) and g¯i∗ (t¯), etc. Then a¯i , f¯∗ , and g¯i∗ satisfy the conditions of Theorem 7.7. Since f¯∗ (x, t¯) = g¯i∗ (t¯) = 0 for t¯ ∈ [−τ, 0], we deduce v¯ (x, t¯) = 0, if t¯ ∈ [−τ, 0]. Further, we have ∂tα v = ∂t¯α v¯ , for any t¯ ∈ (0, τ]. Consequently, 2 ⎧ ∂t¯α v¯ − a¯0 ∂xx v¯ + a¯1 ∂x v¯ + a¯2 v¯ = f¯∗, in Qτ , ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ −∂ v¯ (0, t¯) = g¯ ∗ (t¯), t¯ ∈ (0, τ), x

1

∂x v¯ (1, t¯) = g¯2∗ (t¯), t¯ ∈ (0, τ), v¯ (0) = 0, in Ω.

⎪ ⎪ ⎪ ⎪ ⎪ ⎩

Lemma 7.12 gives the unique solvability for v¯ (x, t¯) ∈ C 2+θ, 2 α (Qτ ), with a bound similar to (7.25). By changing back t = t¯ + τ, we obtain the unique solvability for v ∈ 2+θ C 2+θ, 2 α (Q2τ ). Thus, the properties of w and v allow constructing a unique solution 2+θ u ∈ C 2+θ, 2 α (Q2τ ) of problem (7.23) for t ∈ [τ, 2τ]. Repeating this procedure, we 2+θ prove the existence of the solution u ∈ C 2+θ, 2 α (Q), and obtain the estimate (7.24). The uniqueness follows from the estimate (7.24).  2+θ

Last we state a Hölder regularity result in the Dirichlet case:
$$\begin{cases}\partial_t^\alpha u - L(t)u = f, & \text{in } Q,\\ u(0,t) = g_1(t), & \text{on } I,\\ u(1,t) = g_2(t), & \text{on } I,\\ u(x,0) = u_0(x), & \text{in } \Omega.\end{cases}\tag{7.27}$$
The proof is analogous to Theorem 7.7, and thus omitted.

Theorem 7.8 Let $\theta,\alpha\in(0,1)$ and let $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $u_0\in C^{2+\theta}(\overline\Omega)$, $g_i(t)\in C^{\frac{2+\theta}{2}\alpha}(I)$, $i=1,2$, $a_0(x,t)\ge\delta_1>0$ in $\overline Q$, $a_i(x,t)\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $i=0,1,2$, and let the compatibility conditions $g_1(0)=u_0(0)$, $g_2(0)=u_0(1)$, $\partial_t^\alpha g_1(0)=L(0)u_0(0)$ and $\partial_t^\alpha g_2(0)=L(0)u_0(1)$ hold. Then there exists a unique solution $u(x,t)\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ of problem (7.27) and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\Big(\|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|u_0\|_{C^{2+\theta}(\Omega)} + \sum_{i=1}^2\|g_i\|_{C^{\frac{2+\theta}{2}\alpha}(I)}\Big),$$
where the constant $c$ depends on $\Omega$ and $\|a_i\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)}$, $i=0,1,2$.

7.3 Hölder Regularity in Multi-Dimension

Now we extend the analysis in Section 7.2 and derive Hölder regularity results for multi-dimensional subdiffusion. The derivation is more involved, since the fundamental solutions are more complex, especially for the auxiliary problem with the oblique boundary condition, cf. problem (7.36). Nonetheless, the overall analysis strategy is similar to Section 7.2: we first analyze the whole space $\mathbb{R}^d$, then the half space $\mathbb{R}^d_+$, and last a smooth open bounded domain with variable coefficients.

7.3.1 Subdiffusion in $\mathbb{R}^d$

We shall need various technical estimates on the fundamental solutions $G_\alpha(x,t)$ and $\bar G_\alpha(x,t)$, which are needed in bounding various potentials, an essential step for deriving Hölder regularity of the solution. We begin with several elementary inequalities.

Lemma 7.13 The following inequalities hold.
(i) For $B>0$ and $\zeta>0$,
$$\int_0^\infty \eta^{1-\frac{m}{2}}\,e^{-B(\zeta\eta^{-1}+\eta^{\frac{1}{1-\alpha}})}\,d\eta \le c\begin{cases}1, & m=1,2,3,\\ |\ln\zeta|+1, & m=4,\\ \zeta^{2-\frac{m}{2}}, & m\ge 5.\end{cases}$$
(ii) For $\zeta\ge 0$ and $\alpha\in(0,1)$, there holds
$$\zeta\eta^{-1} + \eta^{\frac{1}{1-\alpha}} \ge 2\,\zeta^{\frac{1}{2-\alpha}}, \qquad \forall\eta\in(0,\infty).\tag{7.28}$$
(iii) For $\beta\in(0,\frac12)$ and $c>0$,
$$(|\ln\zeta|+1)\,e^{-c|\zeta|^{\frac{1}{1-\beta}}} \le c'\,|\zeta|^{-\frac14}\,e^{-\frac{c}{2}|\zeta|^{\frac{1}{1-\beta}}}, \qquad \forall\zeta>0.\tag{7.29}$$

Proof Denote the integral in (i) by $I(\zeta)$. If $m=1,2,3$, the estimate is direct:
$$I(\zeta) \le c\int_0^\infty \eta^{1-\frac{m}{2}}\,e^{-B\eta^{\frac{1}{1-\alpha}}}\,d\eta \le c.$$
If $m=4$, we split the integral into two terms and obtain



ζ

e ∫



ζ

−B ηζ



−1

η dη =

0

e−Bs s−1 ds ≤ c,

1 1

e




−Bη 1−α

−1

η dη =

1

ln ηe−Bη 1−α |ζ∞ ∫



B + 1−α



≤ c | ln ζ | +

ln ηη

α 1−α





ζ

1

α

ln ηη 1−α e−Bη 1−α dη

1 e−Bη 1−α dη ≤ c(| ln ζ | + 1).

0

This shows the assertion for m = 4. Last, if m ≥ 5, letting ξ = ∫



I(ζ) =

1

(ξζ)

−1 1−α ) 1− m 2 −B(ξ +(ζ ξ)

e

ζdξ ≤ cζ

2− m 2

0





η ζ

m

gives m

−1

ξ 1− 2 e−Bξ dξ ≤ cζ 2− 2 .

0

This shows (i). (ii) follows since 1

ζ η



1 1−α

attains its global minimum at η∗ = ζ 2−α , 1−α

and the minimum value is 2ζ 2−α . (iii) holds, since by direct computation, c

ζ 4 (| ln ζ | + 1)e− 2 ζ 1

1 1−β

≤ c.

Indeed, for ζ ∈ (0, 1], there holds c

ζ 4 (| ln ζ | + 1)e− 2 ζ 1

1 1−β

≤ ζ 4 | ln ζ | + ζ 4 ≤ (4 + e)e−1 . 1

1

Likewise, by the trivial inequality 1 + ζ ≤ e ζ for ζ > 0, we have ln ζ ≤ ζ. This and the inequality ζ a e−bζ ≤ ( ba )a e−a , for a, b > 0 imply c

ζ 4 (| ln ζ | + 1)e− 2 ζ 1

1 1−β

c

1 4 −4 5 4 −4 ≤ ζ 4 (ζ + 1)e− 2 ζ ≤ ( 2c ) e + ( 2c ) e . 1

1

1

5

5



This shows (iii) and completes the proof of the lemma.

Next, we derive a pointwise bound on the mixed derivative ${}^R\partial_t^\nu D_x^\ell \bar G_\alpha(x,t)$ ($\ell\in\mathbb{N}_0^d$ a multi-index, $|\ell|:=\sum_{i=1}^d\ell_i\ge 0$), for any $\nu\in\mathbb{R}$, similar to Lemma 7.6. For $\nu<0$, the Riemann–Liouville fractional derivative ${}^R\partial_t^\nu$ is identified with the Riemann–Liouville integral operator ${}_0I_t^{-\nu}$ of order $-\nu>0$. The analysis relies on the following bound for the heat kernel $G_1(x,t)$:
$$|D_x^\ell G_1(x,t)| \le c_0\,t^{-\frac{d+|\ell|}{2}}\,e^{-c_1\frac{|x|^2}{t}}.\tag{7.30}$$

Lemma 7.14 Let $\nu\in\mathbb{R}$, $|\ell|\ge 0$. Then the function $\bar G_\alpha(x,t)$ defined in (7.4) satisfies
$$|{}^R\partial_t^\nu D_x^\ell \bar G_\alpha(x,t)| \le c\,t^{(2-d-|\ell|)\frac{\alpha}{2}-\nu-1}\,\gamma_{p(\nu,|\ell|)}(r)\,e^{-c' r^{\frac{1}{1-\frac{\alpha}{2}}}},\tag{7.31}$$
with $r=|x|t^{-\frac{\alpha}{2}}$, and
$$p(\nu,|\ell|) = \begin{cases} d+|\ell|, & \nu\in\mathbb{N}_0,\\ d+|\ell|+2, & \nu\notin\mathbb{N}_0,\end{cases} \qquad \gamma_m(\zeta) = \begin{cases}1, & m\le 3,\\ |\ln\zeta|+1, & m=4,\\ \zeta^{4-m}, & m\ge 5.\end{cases}$$

Proof It follows from (7.4) and the differentiation formula (3.28) for Wρ,μ (z) that ∫ ∞ R ν  ∂t Dx G α (x, t) = Dx G1 (x, λ)t −ν−1W−α,−ν (−λt −α )dλ. 0

If ν ∈ N0 , inequality (7.30) and Theorem 3.8(iv) give ∫ ∞ 1 d+| | | x |2 −α | R∂tν Dx G α (x, t)| ≤ c0 λ− 2 e−c1 λ λt −α−ν−1 e−σ(λt ) 1−α dλ, 0

for some σ = σ(α) > 0. By changing variables η = λt −α , we obtain ∫ ∞ 1 | x |2 1 d+| | α 1−α η1− 2 e−c2 ( t α η +η ) dη, | R∂tν Dx G α (x, t)| ≤ c0 t (2−d− | |) 2 −ν−1 0

α

with c2 = min(c1, σ). Now by the inequality (7.28), with r = |x|t − 2 , α

| R∂tν Dx G α (x, t)| ≤ ct (2−d− | |) 2 −ν−1 e−

1 c2 1− α 2 2 r





η1−

d+| | 2

e−

c2 2

1 2 1 1−α η +η

( | txα|

)

dη.

0

The desired assertion follows from Lemma 7.13. Similarly, if ν  N0 , inequality (7.30) and Theorem 3.8(iv) give ∫ ∞ 1 d+| | | x |2 −α | R∂tν Dx G α (x, t)| ≤ c0 λ− 2 e−c1 λ t −ν−1 e−σ(λt ) 1−α dλ. 0

By changing variables η = λt −α , we obtain α

| R∂tν Dx G α (x, t)| ≤ c0 t (2−d− | |) 2 −ν−1





η−

d+| | 2

1 | x |2 1 1−α η +η

e−c2 ( t α

)

dη,

0 α

with c2 = min(c1, σ). Now by the inequality (7.28), with r = |x|t − 2 , α

| R∂tν Dx G α (x, t)| ≤ ct (2−d− | |) 2 −ν−1 e−

1 c2 1− α 2 2 r





η−

d+| | 2

e−

c2 2

1 2 1 1−α η +η

( | txα|

)

dη.

0

The assertion for $\nu\notin\mathbb{N}_0$ follows from Lemma 7.13. □

Now we give several estimates on the potentials associated with $G_\alpha$ and $\bar G_\alpha$.

Lemma 7.15 For the functions $v=Vf$ and $w=Iu_0$, there hold
$$\|v\|_{C^{2,0}(Q)} \le c\,\|f\|_{C^{\theta,0}(Q)}, \qquad [D_x^\ell v]^{(\theta)}_{x,Q} + [D_x^\ell v]^{(\frac{\theta}{2}\alpha)}_{t,Q} \le c\,[f]^{(\theta)}_{x,Q}, \quad |\ell|=2,$$
$$\|w\|_{C^{2,0}(Q)} \le c\,\|u_0\|_{C^{2}(\mathbb{R}^d)}, \qquad [D_x^\ell w]^{(\theta)}_{x,Q} + [D_x^\ell w]^{(\frac{\theta}{2}\alpha)}_{t,Q} \le c\,\|u_0\|_{C^{2+\theta}(\mathbb{R}^d)}, \quad |\ell|=2.$$

Proof The proof of the lemma is nearly identical with Lemmas 7.8 and 7.9, using Lemmas 7.1 and 7.14. We sketch only the estimates for v. First, we derive an auxiliary α estimate. Lemma 7.14 with ν = 0 and then changing variables r = |x|t − 2 lead to ∫



0

|Dx Gα (x, t)|dt

≤ c|x|

2−d− | |



r 0

≤ c|x| 2−d− | |,



d+ | |−3

γd+ | | (r)e

1 α

−c r 1− 2

dr

if d + | | ≥ 3.

(7.32)

The bound on v C 2,0 (Q) is direct to derive. By Lemma 7.1(ii), for any | | ≥ 1, Dx v(x, t)

=

∫ t∫ Rd

0

Dx G α (x − y, t − s)( f (y, s) − f (x, s))dyds.

To bound [Dx v](θ) , with | | = 2, for any distinct x, x¯ ∈ Rd , let h := |x − x| ¯ and x,Q

K = {y ∈ Rd : | x¯ − y| ≤ 2h}. Then we split Dx u(x, t) − Dx u( x, ¯ t) into Dx v(x, t) + + +



Dx v( x, ¯ t)

∫ t∫

Ii =:

∫ t∫ 0

i=1

K

Dx G α ( x¯ − y, t − s)[ f ( x, ¯ s) − f (y, s)]dyds

Dx G α (x − y, t − s)[ f (y, s) − f (x, s)]dyds

0

K

0

R d \K

0

R d \K

∫ t∫

=

4

∫ t∫

Dx G α (x − y, t − s)[ f ( x, ¯ s) − f (x, s)]dyds [Dx G α ( x¯ − y, t − s) − Dx G α (x − y, t − s)][ f ( x, ¯ s) − f (y, s)]dyds.

For I1 , by (7.32), we have I1 ≤



[ f ](θ) x,Q

K

| x¯ − y| θ−d dy ≤ c[ f ](θ) hθ . x,Q

The term I2 can be bounded similarly. Next, similar to the 1D case, cf. Lemma 7.7(ii), for | | = 2, we have ∫ t ∫    (7.33) Dx G α (x − y, t − s)dy ds ≤ c,  0

R d \K


with which, we can bound I3 by ∫ t ∫  |I3 | ≤ chθ [ f ](θ)  x,Q

0

|x |>h

  Dx G α (x, s)dx ds ≤ chθ [ f ](θ) . x,Q

Last, for I4 , the mean value theorem and (7.32) give for some ζ ∈ [ x¯ − y, x − y], ∫ t∫ (θ) |x| θ |∇Dx G α (ζ, s)|dζds |I4 | ≤ ch[ f ] x,Q 0 |x |>h ∫ (θ) ≤ ch[ f ] |x| θ−d−1 dx ≤ chθ [ f ](θ) . x,Q

x,Q

|x |>h

These bounds on Ii show that for | | = 2, [Dx v](θ) ≤ c[ f ](θ) . Next, we bound x,Q

( θα ) [Dx v] 2 , t,Q

x,Q

with | | = 2. For any t1, t2 ∈ I, t2 > t1 , let τ = t2 − t1 , and we discuss the two cases, i.e., t1 > 2τ and t1 ≤ 2τ, separately. If t1 > 2τ, we split the difference Dx v(x, t2 ) − Dx v(x, t1 ) into ∫ ∫ t2   Dx v(x, t2 ) − Dx v(x, t1 ) = Dx Gα (x − y, t2 − s)[ f (y, s) − f (x, s)]dsdy ∫ + +



Rd

t1

R d t1 −2τ ∫ t1 −2τ ∫

Dx Gα (x − y, t1 − s)( f (x, s) − f (y, s))dsdy

Rd

0

t1 −2τ

[Dx G α (x − y, t2 − s) − Dx G α (x − y, t1 − s)][ f (y, s) − f (x, s)]dyds.

Next we bound the three terms, denoted by IIi , i = 1, 2, 3. For the term II1 , using α Lemma 7.14 with ν = 0 and r = |x|s− 2 , ∫ τ∫ |x| θ |Dx G(x, s)|dxds |II1 | ≤ [ f ](θ) x,Q



c[ f ](θ) x,Q

Rd

0

∫ ∫

τ

x,Q

θ −d α2 −1

Rd

0 τ

= c[ f ](θ)



θ

|x| s

s 2 α−1 ds

∫ Rd

0

γ2+d (r)e

1 α

−c r 1− 2

|x| θ γ2+d (|x|)e−c

dxds

1 α  |x | 1− 2

θ

dx ≤ cτ 2 α [ f ](θ) . x,Q

The term II2 can be bounded similarly. For the term II3 , the mean value theorem and Lemma 7.14 imply for some t ∈ (t1, t2 ), |II3 | ≤

cτ[ f ](θ) x,Q

= cτ[ f ](θ)

x,Q

∫ t∫ Rd

τ



τ

t

θ

θ − d2 α−2

|x| s

s 2 α−2 ds

∫ Rd

γ2+d (r)e

1 α

−c r 1− 2

|x| θ γ2+d (|x|)e−c

dxds

1 α  |x | 1− 2

θ

dx ≤ cτ 2 α [ f ](θ) . x,Q


In sum, for $t_1>2\tau$, we obtain $[D_x^\ell v]^{(\frac{\theta}{2}\alpha)}_{t,Q} \le c\,[f]^{(\theta)}_{x,Q}\,\tau^{\frac{\theta}{2}\alpha}$. The case $t_1\le 2\tau$ can be analyzed similarly, and the proof is omitted. □

Using Lemma 7.15 and repeating the argument of Theorem 7.3, we can show the existence of a solution $u\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ to problem (7.1).

Proposition 7.2 Let $u_0\in C^{2+\theta}(\Omega)$ and $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$ satisfy (7.2). Then problem (7.1) has a unique solution $u\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$, and
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\big(\|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|u_0\|_{C^{2+\theta}(\Omega)}\big).$$
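Before turning to the half-space problem, it may be useful to see the fundamental solution concretely. The sketch below evaluates $G_\alpha(x,t)$ in $\mathbb{R}^d$ through the classical heat-kernel subordination $G_\alpha(x,t)=\int_0^\infty G_1(x,s)\,t^{-\alpha}M_\alpha(st^{-\alpha})\,ds$ with $M_\alpha(r)=W_{-\alpha,1-\alpha}(-r)$, a standard identity; the kernel $\bar G_\alpha$ of (7.4) has an analogous representation with a different Wright density. The truncated power series and the quadrature parameters are crude illustrative choices, adequate only for moderate arguments, and are not taken from the text.

```python
import numpy as np
from scipy.special import rgamma

def m_wright(r, alpha, terms=100):
    """M-Wright density M_alpha(r) = W_{-alpha,1-alpha}(-r), via its power series."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term * rgamma(-alpha * k + 1.0 - alpha)
        term *= -r / (k + 1.0)
    return s

def heat_kernel(x, s):
    d = len(x)
    return (4.0 * np.pi * s) ** (-d / 2.0) * np.exp(-np.dot(x, x) / (4.0 * s))

def G_alpha(x, t, alpha, n=2000, s_max_factor=5.0):
    """Subordination quadrature: int_0^inf G_1(x,s) t^(-alpha) M_alpha(s t^(-alpha)) ds;
       the M-Wright density is negligible beyond a few multiples of t^alpha."""
    s_max = s_max_factor * t ** alpha
    s = (np.arange(n) + 0.5) * s_max / n        # midpoint rule, avoids s = 0
    vals = [heat_kernel(x, si) * t ** (-alpha) * m_wright(si * t ** (-alpha), alpha)
            for si in s]
    return float(np.sum(vals) * s_max / n)

# usage (illustrative): one space-time point in dimension d = 2
print(G_alpha(np.array([0.3, 0.1]), t=0.5, alpha=0.7))
```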

7.3.2 Subdiffusion in $\mathbb{R}^d_+$

Let the half space $\Omega \equiv \mathbb{R}^d_+ = \{x\in\mathbb{R}^d : x_d>0\}$, with the lateral boundary $\partial_L Q = \{(x',0,t) : x'=(x_1,\dots,x_{d-1})\in\mathbb{R}^{d-1},\ t\in(0,T)\}$. Consider the following two initial boundary value problems:
$$\begin{cases}\partial_t^\alpha v - \Delta v = 0, & \text{in } Q,\\ v(x,0) = 0, & \text{in } \Omega,\\ v\to 0 & \text{as } |x|\to\infty,\\ v(x',0,t) = g(x',t), & \text{on } \partial_L Q,\end{cases}\tag{7.34}$$
and
$$\begin{cases}\partial_t^\alpha w - \Delta w = 0, & \text{in } Q,\\ w(x,0) = 0, & \text{in } \Omega,\\ w\to 0 & \text{as } |x|\to\infty,\\ \sum_{i=1}^d h_i\,\partial_{x_i}w(x',0,t) = g(x',t), & \text{on } \partial_L Q,\end{cases}\tag{7.35}$$
where $h=(h_1,\dots,h_d)$ is a constant vector. We shall assume
$$g\in\begin{cases}C_0^{2+\theta,\frac{2+\theta}{2}\alpha}(Q'), & \text{for (7.34)},\\ C_0^{1+\theta,\frac{1+\theta}{2}\alpha}(Q'), & \text{for (7.35)},\end{cases} \qquad\text{and}\qquad g(x',t)=0 \text{ for } |x'|>R>0.\tag{7.36}$$
The first part of the condition implies that $g(x',t)=0$ for $t\le 0$, and $g$ may be extended by $0$ for $t<0$, again denoted by $g$ below. Further, the constant vector $h$ in the oblique derivative $\sum_{i=1}^d h_i\partial_{x_i}w(x',0,t)$ satisfies, for some $m_0\ge 0$ (with the shorthand $h'=(h_1,\dots,h_{d-1})$ for the first $d-1$ components of $h\in\mathbb{R}^d$), the condition
$$h_d \le -\delta_0 < 0, \qquad |h'| \le m_0.$$

312

7 Subdiffusion: Hölder Space Theory

Clearly, in the case |h  | = 0, problem (7.35) recovers the standard Neumann problem. Now we derive the solution representations for problems (7.34) and (7.35). The α (x, t) takes a complex form, cf. (7.38), which complicates fundamental solution G the analysis of the associated potentials. Proposition 7.3 Let Assumptions (7.36) and (7.37) hold. Then the solutions of problems (7.34) and (7.35) can be, respectively, represented by ∫ t ∫ v(x, t) = −2 ∂x d G α (x  − y , xd, t − s)g(y , s)dy ds, −∞ R d−1 ∫ t ∫ α (x  − y , xd, t − s)g(y , s)dy ds, w(x, t) = G −∞

R d−1

α (x, t) is given by where the fundamental solution G ∫ ∞ α (x, t) = −2 ∂x d Gα (x − hλ, t)dλ. G

(7.38)

0

Proof Let F  be the Fourier transform in the tangent space variable x , i.e., ∫  F [v] = v(x, t)e−ix ·ξ dx , with ξ = (ξ1, . . . , ξd−1 ) ∈ Rd−1 . R d−1

Taking tangent Fourier transform F  in x  and Laplace transform L in t, problem (7.34) reduces to the following ode (with  v (ξ, xd, z) = L[F [v]](ξ, xd, z)) ⎧ ⎪ z α  v (ξ, xd, z) + |ξ | 2  v −  v x d x d (ξ, xd, z) = 0, xd > 0 ⎪ ⎪ ⎨ ⎪  v (ξ, 0, z) =  g (ξ, z), ⎪ ⎪ ⎪ ⎪  v (ξ, xd, 0) → 0, as xd → ∞. ⎩ √ α 2  g (ξ, z). It suffices to prove The solution  v is given by  v (ξ, xd, z) = e− z + |ξ | x d √ α 2 L[F [−2∂x d G α ]](ξ, xd, z) = e− z +ξ | x d . (7.39) Then the desired representation follows by the convolution formula. Indeed, in view  | x  |2    d−1 2 of the identities F  e− 4λ (ξ) = (4πλ) 2 e−λ|ξ | and L t −1W−α,0 (−λt α ) (z) = α e−λz , cf. Proposition 3.3, we obtain ∫ ∞ x2 3 α 2 1 d L[F [−2∂x d G α (x, t)]](ξ, xd, z) = √ xd λ− 2 e− 4λ −(z + |ξ | )λ dλ. 2 π 0 Changing variables ζ = λ− 2 xd leads to 1

1 L[F [−2∂x d G α ]](ξ, xd, z) = √ π





2

e 0

− ζ4 −

(z α +| ξ | 2 )x 2 d ζ2

dζ = e−



z α + |ξ | 2 x d

,


using also the following identity (cf. [GR15, formula (3.325), p. 339])  ∫ ∞ 1 π −2√ab −ax 2 − b2 x dx = e e , ∀a, b > 0 2 a 0 with a = 14 and b = (z α + |ξ | 2 )xd2 . This shows the representation for v. Similarly, by applying Laplace and (tangent) Fourier transforms in t and x , respectively, problem (7.35) reduces to the following ode: ⎧ ⎪ zα w (ξ, xd, z) + |ξ | 2 w (ξ, xd, z) − w x d x d (ξ, xd, z) = 0, xd > 0 ⎪ ⎪ ⎨ ⎪ (ξ, 0, z) + ih  · ξ w (hd ∂x d w )|x d =0 =  g (ξ, z), ⎪ ⎪ ⎪ ⎪ w (ξ, xd, 0) → 0, as xd → ∞, ⎩ Under Assumption (7.37), the solution w  is given by √ ∫ ∞ √ α 2   g (ξ, z) e− z + |ξ | x d α 2 w (ξ, xd, z) = e− z + |ξ | (x d −h d λ)−ih ·ξλ dλ  g (ξ, z). =  α 2  0 −hd z + |ξ | + ih · ξ Now the identity (7.39) implies α ]](x, t) = −2 L −1 [F −1 [G

∫ 0



∂x d G α (x − hλ, t)dλ.

It remains to prove that the representations satisfy the corresponding boundary conditions. For problem (7.34), the identity (7.8) implies the desired boundary condition. Meanwhile, for problem (7.35), d



α (x, t) = −2 hi ∂xi G

i=1

∫ =2 0

0 ∞

d ∞

∂x2d xi G α (x − hλ, t)hi dλ

i=1

d ∂x G α (x − hλ, t)dλ = −2∂x d Gα (x, t). dλ d

This fact implies the desired oblique boundary condition: d

i=1

hi ∂xi

∫ t∫ 0

R d−1

α (x  − y , xd, t − s)g(y , s)dy ds|x d =0 = g(x , t). G 

This completes the proof of the proposition. Next, we give a simple lower bound on |x − hλ| 2 under Assumption (7.37). Lemma 7.16 Under Assumption (7.37), the following lower bound holds: d

i=1

(xi − hi λ)2 ≥ c02

d−1

i=1

xi2 + λ2 + xd2 ,

with c02 =

δ02 2

min(1,

1 ). δ02 +m02

(7.40)


Proof By Assumption (7.37), hd < −δ0 < 0 for any a ∈ (0, 1), d d−1



hi xi2 − 2axi λ + hi2 λ2 + xd2 + h2d λ2 (xi − hi λ)2 ≥ a i=1 i=1

≥ (1 − a2 )

d−1

xi2 − (a−2 − 1)m02 λ2 + δ02 λ2 + xd2 .

i=1

Setting a2 = 12 (1 +

m02 2 δ0 +m02



) shows the desired estimate.

α . Now we collect several useful weighted integral bounds on G α (x, t) satisfies the following estimates: Lemma 7.17 The function G ∫ ∞ α (x, t)|dt ≤ c|x  | 2−d− | |, 1 ≤ | | ≤ 2, |Dx G 0 ∫ ∞ α α (x, t)|dt ≤ c|x  | −d+ 32 , i = 1, . . . , d, t 4 |∂xi G ∫ ∞ 0 δ α (x, t)|dt ≤ c|x  | δ−d− | |, | | = 0, 1, δ > 0, t 2 α | R∂tα Dx G ∫0 α α−1+ζ  | R∂t Gα (x, t)|dx  ≤ ct − 2 −ζ , ζ ∈ R, R d−1 ∫ α (x, t)|dx  ≤ ct −1−k , k = 0, 1, i = 1, . . . , d, |∂tk ∂xi G R d−1 ∫ α (x, t)|dx  ≤ ct (1− | |) α2 −ν−1, 0 ≤ | | ≤ 2. | R∂tν Dx G R d−1

Proof In the proof, let β =

α 2.

(7.41) (7.42) (7.43) (7.44) (7.45) (7.46)

By Lemma 7.14, with r = |x|t −β , there holds 1  1−β

| R∂tν Dx G α (x, t)| ≤ ct (2−d− | |)β−ν−1 γ p(ν, | |) (r)e−c r

,

(7.47)

with the number p(ν, ) and the function γm given in Lemma 7.14. We bound the integrals one by one. The integrals in the estimates are denoted by Ii , i = 1, . . . , 6. (i) The inequality (7.47) with ν = 0 and (7.38) allows bounding the integral ∫ ∞∫ ∞ 1  −β 1−β I1 ≤ c t (1−d− | |)β−1 γd+ | |+1 (|x − hλ|t −β )e−c ( |x−hλ|t ) dλdt. 0

0

Due to the difference of γm , we distinguish the following two cases (a) d + | | = 3 and (b) d + | | > 3. In case (a), it follows from (7.29) and Lemma 7.16 and then changing variables s = |x  |t −β and η = λt −β that




∞∫ ∞

I1 ≤ c 0

∫ ≤c

0

∞∫

0





0


t −2β−1 (|x − hλ|t −β )− 4 e− 2 ( |x−hλ|t 1

t −2β−1 (|x  |t −β )− 4 e−c 1

1  ( |x  |t −β ) 1−β

1 −β ) 1−β

e−c

dλdt

1  (λt −β ) 1−β

dλdt



1 1 ∞ 1  1−β ds  1−β (|x  |s−1 )−1 s− 4 e−c s e−c η dη s 0 0 ∫ ∞ 1 1  s 1−β  −1 −c  −1 s4e ds ≤ c1 |x | . ≤ c|x | ≤c



0

Similarly, in case (b), we have ∫ ∞∫ ∞ 1  −β 1−β I1 ≤ c t (1−d− | |)β−1 (|x − hλ|t −β )4−d e−c ( |x−hλ|t ) dλdt 0 0 ∫ ∞∫ ∞ 1 1   −β 1−β  −β 1−β ≤c t (2−d− | |)β−1 (|x  |t −β )4−d e−c ( |x |t ) e−c (λt ) dλdt 0 0 ∫ ∞ 1  1−β ds ≤c ≤ c|x  | 2−d− | | . (|x  |s−1 )2−d− | | s4−d e−c s s 0 Combining these two estimates shows the estimate (7.41). (ii). The estimate (7.47) with ν = 0 and | | = 2 yields ∫ ∞∫ ∞ 1 β  −β 1−β I2 ≤ c t 2 −dβ−1 γd+2 (|x − hλ|t −β )e−c ( |x−hλ|t ) dλdt. 0

0

Like before, due to the different form of the function γm , we analyze the cases d = 2 and d ≥ 3 separately. If d = 2, by inequality (7.29) and Lemma 7.16, we have ∫ ∞∫ ∞ 1 β 1 c −β 1−β I2 ≤ c t 2 −2β−1 (|x − hλ|t −β )− 4 e− 2 ( |x−hλ|t ) dλdt 0 0 ∫ ∞∫ ∞ 1 1 3 1   −β 1−β  −β 1−β ≤c t − 2 β−1 (|x  |t −β )− 4 e−c ( |x |t ) e−c (λt ) dλdt 0 0 ∫ ∞ 1 1 1  1−β ds ≤c (|x  |s−1 )− 2 s− 4 e−c s s 0 ∫ ∞ 1 1 3 1  1−β s− 4 e−c s ds ≤ c|x  | − 2 . = c|x  | − 2 0

Similarly, for d ≥ 3, we have ∫ ∞∫ ∞ 1 β  −β 1−β I2 ≤ c t 2 −dβ−1 (|x − hλ|t −β )2−d e−c ( |x−hλ|t ) dλdt 0 0 ∫ ∞∫ ∞ 1 1 β   −β 1−β  −β 1−β ≤c t 2 −dβ−1 (|x  |t −β )2−d e−c ( |x |t ) e−c (λt ) dλdt 0 0 ∫ ∞ 1 3 3  1−β ds ≤ c|x  | 2 −d . ≤c (|x  |s−β ) 2 −d s2−d e−c s s 0


Combining these two cases shows (7.42). (iii) The identity (7.47) with ν = α and (7.38) allows bounding the integral ∫ ∞∫ ∞ 1  −β 1−β I3 ≤ c t (δ−d− | |−2)β−1 γd+ | |+3 (|x − hλ|t −β )e−c ( |x−hλ|t ) dλdt. 0

0

Since d ≥ 2,

γd+ | |+3 (ζ) = ζ 4−(d+ | |+3) = ζ 1−d− | | .

It follows from (7.29) and Lemma 7.16 and changing variables s = |x  |t −β and η = λt −β that ∫ ∞∫ ∞ 1  −β 1−β I3 ≤ c t (δ−d− | |−2)β−1 (|x − hλ|t −β )1−d− | | e−c ( |x−hλ|t ) dλdt 0 0 ∫ ∞∫ ∞ 1 1   −β 1−β  −β 1−β ≤c t (δ−d− | |−2)β−1 (|x  |t −β )1−d− | | e−c ( |x |t ) e−c (λt ) dλdt 0 0 ∫ ∞ 1  1−β ds ≤c ≤ c|x  | δ−d− | |, (|x  |s−1 )δ−d− | | s1−d− | | e−c s s 0 which shows the estimate (7.43). α (x, t), the integral representation of G α (x, t), and the differ(iv) The definition of G entiation formula (3.28) for Wright function imply ∫ ∞ R α−1+ζ R α−1+ζ  ∂t ∂t ∂x d Gα (x − hμ, t)dμ Gα (x, t) = −2 0 ∫ ∞∫ ∞ α−1+ζ  −1 t W−α,0 (−λt −α ))dμdλ = −2 ∂x d G1 (x − hμ, λ)R∂t 0 0 ∫ ∞∫ ∞ = −2 ∂x d G1 (x − hμ, λ)t −α−ζ W−α,−α+1−ζ (−λt −α )dμdλ. 0

0

Using the bound (7.30) and Lemma 3.5(iv), and then changing variables y  = 1 (x  − h  μ)λ− 2 , we may bound ∫ ∞∫ ∞ ∫ | x−h μ| 2 d+1 I4 ≤ c λ− 2 e−c1 λ dx  t −α−ζ |W−α,1−α−ζ (−λt −α )|dμdλ d−1 ∫0 ∞ ∫0 ∞ R (x d −h d μ)2 λ λ−1 e−c1 t −α−ζ |W−α,1−α−ζ (−λt −α )|dμdλ. ≤c 0

0

Changing variables ξ = μλ− 2 and η = λt −α , the conditions hd ≤ −δ0 < 0 from Assumption (7.37) and Theorem 3.8(iv) give ∫ ∞ ∫ ∞ 2 2 − 12 −α−ζ −α I4 ≤ c λ t |W−α,1−α−ζ (−λt )|dλ e−c1 δ0 ξ dξ 0 0 ∫ ∞ −β−ζ − 12 −β−ζ ≤ ct η |W−α,1−α−ζ (−η)|dη ≤ ct , ∀ζ ∈ R. 1

0


This completes the proof of the estimate (7.44). α (x, t) yields (v). The definition of G ∫ ∞∫ ∞ α (x, t) = −2 ∂tk ∂xi G ∂x2d xi G1 (x − hμ, λ)t −1−k W−α,−k (−λt −α )dμdλ. 0

0

Like before, the bound (7.30) and Lemma 3.5(iv) lead to ∫ ∞∫ ∞ (x d −h d μ)2 3 λ I5 ≤ c λ− 2 e−c1 t −1−k |W−α,−k (−λt −α )|dμdλ. 0

0

Changing variables ξ = μλ− 2 and η = λt −α and Assumption (7.37) give ∫ ∞ ∫ ∞ 2 2 −1 −1−k −α I5 ≤ c λ t |W−α,−k (−λt )|dλ e−c1 δ0 ξ dξ 0 0 ∫ ∞ −1−k −1 −1−k ≤ ct η |W−α,−k (−η)|dη ≤ ct , k = 0, 1, i = 1, . . . , d, 1

0

∫∞ where the bound on the integral 0 η−1 |W−α,−k (−η)|dη follows from Theorem 3.8(iv) for large η and the definition of W−α,−k (−η) in a neighborhood of η = 0. This shows the estimate (7.45). α (x, t) and the integral representation of G α (x, t) in (7.4) give (vi) The definition of G ∫ ∞∫ ∞ R ν  ∂t Dx Gα (x, t) = −2 Dx ∂x d G1 (x − hμ, λ)t −ν−1W−α,−ν (−λt −α )dμdλ. 0

0

The bound (7.30) yields ∫ ∞∫ ∞ (x d −h d μ)2 | | λ λ− 2 −1 e−c1 t −ν−1 |W−α,−ν (−λt −α )|dμdλ. I6 ≤ c 0

0

Changing variables ξ = μλ− 2 and η = λt −α , Assumption (7.37) and Theorem 3.8(iv) give ∫ ∞ ∫ ∞ 2 2 − | 2|+1 −ν−1 −α I6 ≤ c λ t W−α,−ν (−λt )dλ e−c1 δ0 ξ dξ 0 0 ∫ ∞ 1 1−| |  η 1−α (1− | |)β−ν−1 −c ≤ ct η 2 e dη ≤ ct (1− | |)β−ν−1, if 0 ≤ | | ≤ 2. 1

0

This proves the assertion (7.46) and completes the proof of the lemma. The next result gives a differentiation formula for the surface potential ∫ t∫ α (x  − y , xd, t − s)g(y , s)dy ds. G So g(x, t) = 0

R d−1



The notation Q  denotes Q  = Rd−1 × I, and accordingly Q = Rd−1 × I.



318

7 Subdiffusion: Hölder Space Theory 0, 1+θ 2 α

Lemma 7.18 For g ∈ C0 ∫ ∞∫ ∂tα So g(x, t) =

R d−1

0



(Q ), the following identity holds:

R α ∂s Gα (x  −

y , xd, s)(g(y , t − s) − g(y , t))dy ds. (7.48)

Proof Let v(x, t) = So g(x, t). The estimate (7.44) implies ∫ t α α s 2 −1 ds ≤ c g C(Q ) t 2 , |v(x, t)| ≤ c g C(Q ) 0

and thus v(x, t)|t=0 = 0. Further, by assumption, g(x , t) = 0 for t ≤ 0, and thus ∂tα v = R∂tα v. For any > 0, let ∫ t− ∫ 1−α  U (x, t) = Gα (x  − y , xd, t − s)g(y , s)dy ds. s It 0

R d−1

It remains to show that the limit lim →0+ ∂t U equals to the right-hand side of (7.48). Indeed, direct computation gives ∫ t− ∫ R α     ∂t U (x, t) = s ∂t G α (x − y , xd, t − s)g(y , s)dy ds R d−1 −∞ ∫ 1−α  + Gα (x  − y , xd, )g(y , t − )dy  0 It R d−1 ∫ ∞∫ R α = ∂s Gα (x  − y , xd, s)(g(y , t − s) − g(y , t − ))dy ds, R d−1



and

∫ ∞∫ R α δ (x, t) :=∂t U (x, t) − ∂s Gα (x  − y , xd, s)(g(y , t − s) − g(y , t))dy ds d−1 R 0 ∫ ∞∫ R α = ∂s Gα (x  − y , xd, s)(g(y , t) − g(y , t − ))dy ds R d−1  ∫ ∫ R α + ∂s Gα (x  − y , xd, s)(g(y , t − s) − g(y , t))dy ds. R d−1

0

Now it follows from the estimate (7.44) that ∫

1+θ ∫ ∞ ( 1+θ α −1− α2 2 α) 2 s ds + |δ (x, t)| ≤ c[g]  t,Q



0



θ ( 1+θ α) θ s 2 α−1 ds ≤ c[g] 2  2 α . t,Q

Letting → 0+ completes the proof of the lemma.



The next lemma gives one technical estimate. ¯ and h = |x − x  | α , Lemma 7.19 For K = {y  ∈ Rd−1 : |x  − y  | ≤ 2|x − x|}, ∫ h ∫   α (x  − y , xd, s)dy ds ≤ c, i = 1, . . . , d. ∂xi G  2

0

K


Proof We denote the integral by I, and let β =

α 2.

First, we claim the identity

d−1 ∫ ∞

α (x, t) = − 2 G α (x, t) + 2 hi ∂xi Gα (x − hλ, t)dλ G hd hd i=1 0

(7.49)

:= G1 (x, t) + G2 (x, t). Indeed, since d−1

d G α (x − hλ, t) = − hi ∂xi G α (x − hλ, t) − hd ∂x d G α (x − hλ, t), dλ i=1

upon integrating with respect to 0 to ∞, we obtain −G α (x, t) = −

d−1 ∫





hi ∂xi Gα (x − hλ, t)dλ − hd

0

i=1



0

∂x d G α (x − hλ, t)dλ.

This and the identity (7.38) show (7.49). Next we consider the two cases (a) i  d and (b) i = d separately. In case (a), by the divergence theorem, we have ∫ h∫ ∫ h∫ α (x  − y , xd, s)νi dSy ds = I= G2 (x  − y , xd, s)νi dSy ds, G Σ

0

{y 

Σ

0

|x  −

y|

where Σ = : = 2|x − x|} ¯ and ν is the unit outward normal to Σ in Rd−1 , and dSy denotes the (infinitesimal) surface area for Σ. By Lemma 7.14, we obtain ∫

h

|I| ≤

∫ Σ

0



≤c

h

0

|G2 (x  − y , xd, s)|dSy ds



∞∫ Σ

0



s(1−d)β−1 γd+2 (ξ)e−c ξ

1 1−β

dSy dλds,

(7.50)

where ξ = |x  − y  − h  λ|s−β . If d = 2, we use (7.40) and (7.29) and obtain (with ζ = |x − x|s ¯ −β ) ∫ h∫ ∞ 1 1 σ0 1−β σ0 1 −β 1−β |I| ≤ c ζ − 4 e− 4 ζ e− 4 (λs ) s−β−1 dλds 0

∫ ≤c

0 h

ζ

− 41 −

e

σ0 4

0

1

ζ 1−β −1



s ds ≤ c



ζ − 4 e− 5

σ0 4

1

z 1−β

dζ ≤ c.

1 d−1

π 2 Similarly, if d = 3, it follows from (7.50) and the identity |Σ| = 2 Γ(d−1) (2|x − x|) ¯ d−2 for the surface area |Σ| that ∫ ∞ 1 σ0 1−β (ζ |x − x| ¯ −1 )d−2 ζ 2−d e− 4 ζ ζ −1 dζ ≤ c. |I| ≤ c|x − x| ¯ d−2 1


α and the decomposition (7.49), we have In case (b), by the definition of G α (x, t) = − ∂x d G

d−1 2 1 α (x, t). ∂x d Gα (x, t) − hi ∂xi G hd hd i=1

(7.51)

The summation on the right-hand side is already estimated. It remains to prove ∫ h∫ J := |∂x d G α (x  − y , xd, s)|dy ds ≤ c. 0

K

By Lemma 7.1(i), ∂x d G α (x , xd, s) ≤ 0 and the identity (7.8), we can bound J by ∫ ∞∫ J≤c |∂x d Gα (x , xd, s)|dx  ds ≤ c. 0

R d−1



This completes the proof of lemma.

The next result collects several estimates on the surface potential So g. Let Q = R+d × (0, T] and Q  = Rd−1 × (0, T]. Lemma 7.20 For the function u(x, t) = So g, the following estimates hold: [∂xi u](θ) + [∂xi u] x,Q

( θ2 α) t,Q

+ [u]

( 1+θ 2 α) t,Q

[∂tα u](θ) + [∂tα u] x,Q

( θ2 α) t,Q

≤ c[g]

C θ,

≤ c[g]

,

θα  2 (Q )

C 1+θ,

1+θ α  2 (Q )

.

Proof The proof follows closely [LSU68, Chapter IV]. To simplify the notation in the proof, we let β = α2 , and  g (y , t, s) = g(y , t − s) − g(y , t)

and g (y , x , t) = g(y , t) − g(x , t).

Then we can split ∂xi u(x, t) into ∫ ∞∫ α (x  − y , xd, s) ∂xi u(x, t) = ∂xi G g (y , t, s)dy ds 0 R d−1 ∫ ∞∫ α (x  − y , xd, s) + ∂xi G g (y , x , t)dy ds 0 R d−1 ∫ ∞∫  α (x  − y , xd, s)dy ds. + g(x , t) ∂xi G 0

R d−1

Fix x, x¯ ∈ R+d with x  x. ¯ Then ∂xi u(x, t) − ∂xi u( x, ¯ t) can be decomposed into seven 1  d−1 terms (with h = |x − x| ¯ β and K = {y ∈ R : |x  − y  | ≤ 2|x − x|}) ¯


∫ ∂xi u(x, t) − ∂xi u( x, ¯ t) = ∫ − 0





h

h

∫ R d−1

0


α (x  − y , xd, s) ∂xi G g (y , t, s)dy ds

α ( x¯  − y , x¯d, s) ∂x¯ i G g (y , t, s)dy ds

R d−1

∞∫

α (x  − y , xd, s) − ∂x¯ i G α ( x¯  − y , x¯d, s)) (∂xi G g (y , t, s)dy ds ∫ ∫ ∞    α (x  − y , xd, s)ds + g (y , x , t)dy ∂xi G ∫K ∫0 ∞    α ( x¯  − y , x¯d, s)ds − g (y , x¯ , t)dy ∂x¯ i G 0 K ∫ ∫ ∞ α (x  − y , xd, s) − ∂x¯ i G α ( x¯  − y , x¯d, s)) + (∂xi G g (y , x¯ , t)dsdy  R d−1 \K 0 ∫ ∞∫ α (x  − y , xd, s)dx ds. − g ( x¯ , x , t) ∂xi G +

R d−1

h

K

0

The seven terms are denoted by Ii , i = 1, . . . , 7. The last term I7 follows by ∫ ∫ ∞ α (x  − y , xd, s)(g( x¯ , t) − g(x , t))dsdy  I7 = ∂xi G R d−1 \K 0 ∫ ∫ ∞ α (x  − y , xd, s)dsdy  + g(x , t) ∂xi G R d−1 0 ∫ ∫ ∞  α ( x¯  − y , x¯d, s)dsdy  . − g( x¯ , t) ∂x¯ i G R d−1

0

Now by Lemma 7.1(i), (7.8), and the identity (7.51), we have  ∫ ∫ ∞ 0, if i  d, α (y , xd, s)dsdy  = ∂xi G d−1 c(h 0 R d ), if i = d. Thus ∫



R d−1



0

α ( x¯  − y , x¯d, s)dsdy  = ∂x¯ i G

∫ R d−1

∫ 0



α ( x¯  − y , xd, s)dsdy  . ∂x¯ i G

Substituting the identity gives the expression for I7 . Next we estimate the terms separately. By the estimate (7.45) with k = 0, we bound the terms I1 and I2 by ∫ h (θβ) (θβ) |I1 | + |I2 | ≤ c[g]  s θβ−1 ds ≤ c[g]  |x − x| ¯ θ. t,Q

0

t,Q

For the term I3 , with η = x¯ + λ(x − x), ¯ we have I3 =

d ∫

k=1

h

∞∫ 1∫ 0

R d−1

α (η  − y , ηd, s)(xk − x¯k ) ∂η2i ηk G g (y , t, s)dy dλds.


Then the estimate (7.46) with | | = 2 implies ∫ ∞ (θβ) (θβ) |I3 | ≤ c[g]  |x − x| ¯ s θβ−1−β ds ≤ c[g]  |x − x| ¯ θ. t,Q

t,Q

h

By the estimate (7.41) with | | = 1, we derive

∫ |I4 | + |I5 | ≤ c[g](θ)  |x  − y  | θ−(d−1) dy  x,Q K ∫ + | x¯  − y  | θ−(d−1) dy  ≤ c[g](θ)  |x − x| ¯ θ. x,Q

| x¯  −y  | ≤3 | x−x ¯ |

Next again with η = x¯ + λ(x − x), ¯ we can rewrite I6 as I6 =

d ∫

k=1



R d−1 \K

0

1∫ ∞ 0

α (η  − y , ηd, s)(xk − x¯k ) ∂η2i ηk G g (y , x¯ , t)dsdλdy  .

Obviously, |x − x| ¯ ≤ |η  − y  |, | x¯  − y  | ≤ 2|η  − y  | for any η = x¯ + λ(x − x), ¯ y  ∈ K, λ ∈ (0, 1). Hence, it follows from (7.41) with | |=2 that ∫ 1∫ |I6 | ≤ c[g](θ)  |x − x| ¯ |η  − y  | θ−d dy dλ ≤ c[g](θ)  |x − x| ¯ θ. x,Q

x,Q

|η  −y  | ≥ |x− x¯ |

0

Next we decompose I7 into the sum

∫ ∞∫ α (x  − y , xd, s)dy ds I7 = g ( x¯ , x , t) ∂xi G ∫

h

+



0

K

h

K

α (x  − y , xd, s)dy ds := ∂xi G g ( x¯ , x , t)(I7 + I7).

By the estimate (7.42), we obtain ∫ ∞∫ β  − 12 α (x  − y , xd, s)|dy ds |I7 | ≤ |x − x| ¯ s 2 |∂xi G h K ∫ 3 − 12 ≤ c|x − x| ¯ |x  − y  | 2 −d dy  ≤ c. K

Now Lemma 7.19 gives the bound |I7 | ≤ c. The preceding estimates together imply (θβ) [∂xi u](θ) ≤ c[g](θ)  . Next we bound [∂tα u] . Fix any t > t¯. By Lemma 7.18, it x,Q

x,Q

suffices to we bound

t,Q


∂tα u(x, t) − ∂tα u(x, t¯) ∫ t ∫ R α ∂t Gα (x  − y , xd, t − s)(g(y , s) − g(y , t))dy ds = ∫ − +

2t¯−t R d−1 t¯ ∫

2t¯−t R d−1 ∫ 2t¯−t ∫

R α ∂t¯ Gα (x 

−∞

R d−1

−∞

R d−1

− y , xd, t¯ − s)(g(y , s) − g(y , t¯))dy ds

α (x  − y , xd, t − s) − R∂¯α G α (x  − y , xd, t¯ − s)) (R∂tα G t

× (g(y , s) − g(y , t¯))dy ds ∫ 2t¯−t ∫ R α + ∂t Gα (x  − y , xd, t − s)(g(y , t¯) − g(y , t))dy ds. We bound the four terms, denoted by IIi , i = 1, . . . , 4, separately. By the estimate (7.44) in Lemma 7.17, we obtain ∫ t¯

∫ t (1+θ)β (t − s)(1+θ)β−β−1 ds + (t¯ − s)(1+θ)β−β−1 ds |II1 | + |II2 | ≤ c[g]  t,Q

≤ c[g]

2t¯−t

t,Q

2t¯−t

|t − t¯| θβ,

((1+θ)β) 

and by the mean value theorem, |II3 |



≤ c[g]

((1+θ)β)

≤ c[g]

((1+θ)β)

≤ c[g]

((1+θ)β)

t,Q

t,Q

t,Q





2t¯−t

−∞ ∫ 2t¯−t

∫ t∫ R d−1





t

−∞ t¯ ∫ t ∫ 2t¯−t



−∞



α (x  − y , xd, ξ − s)(t¯ − s)(1+θ)β |dy dξds | R∂sα+1 G

(ξ − s)β−2 (t¯ − s)(1+θ)β dξds (t¯ − s)θβ−2 dsdξ ≤ c[g]

((1+θ)β) t,Q



|t − t¯| θβ .

Next it follows from the estimate (7.44) and integrating with respect to s that  ∫  α (x  − y , xd, 2(t − t¯))dy  |II4 | =  (g(y , t) − g(y , t¯))R∂tα−1 G R d−1 ((1+θ)β)

≤ c[g]

t,Q



|t − t¯| θβ .

The last three estimates imply [∂tα u] [∂tα u](θ) . x,Q

Indeed, we may rewrite

∂tα u(x, t)

∫ =

t

−∞

∫ R d−1

( θ2 α)

≤ c[g]

t,Q ∂tα u(x, t)

R α ∂s Gα (x 

(1+θ ) 2 α

t,Q

. Next we bound the seminorm

as

− y , xd, s)(g(y , t − s) − g(y , t))dy ds.


Thus, for any x  x, ¯ we split ∂tα u(x, t) − ∂tα u( x, ¯ t) into (with h = |x − x| ¯ α) 2

∂tα u(x, t) − ∂tα u( x¯ , t) ∫ h∫ R α = ∂s Gα (x  − y , xd, s)(g(y , t − s) − g(y , t))dy ds R d−1 h∫

0



+ 0

∫ +

h

R d−1

∞∫

R d−1

R α ∂s Gα ( x¯ 

− y , x¯d, s)(g(y , t) − g(y , t − s))dy ds

α (x  − y , xd, s) − R∂sα G α ( x¯  − y , x¯d, s)] [R∂sα G

× (g(y , t − s) − g(y , t))dy ds :=

3

IIIi .

i=1

The term III1 can be bounded by (7.44) from Lemma 7.17 as ∫ h∫ ( 1+θ 2 α) α (x  − y , xd, s)|s 1+θ 2 α dy  ds |III1 | ≤ c[g] | R ∂sα G t,Q

≤ c[g]

( 1+θ 2

R d−1

0



α)

t,Q

h

α

s− 2 −1 s

1+θ 2

α

ds = c[g]

0

( 1+θ 2 α) t,Q

|x − x| ¯ θ.

The term III2 can be bounded analogously. By the mean value theorem and (7.46) from Lemma 7.17, we have ∫ h∫ ( 1+θ 2 α) α (x  − y , xd, s)|s 1+θ 2 α dy  ds ¯ |D R∂sα G |III3 | ≤ c|x − x|[g] t,Q 0 R d−1 ∫ ∞ 1+θ ( 1+θ ( 1+θ α) 2 α) ≤ c|x − x|[g] ¯ s−α−1 s 2 α ds = c[g] 2 |x − x| ¯ θ. t,Q

t,Q

h

( 1+θ α)

¯ θ . This shows the The last two bounds together imply [∂tα u](θ) ≤ c[g] 2 |x − x| t,Q x,Q second assertion of the lemma. The remaining assertions can be proved similarly, and thus the proof is omitted.  With these preliminary estimates, we can state the following existence and uniqueness results for problems (7.34) and (7.35). The proofs are identical with that for Theorem 7.4, and hence they are omitted. 2+θ, 2+θ 2 α

Proposition 7.4 Let g ∈ C0



(Q ). Then under condition (7.36), problem (7.34)

2+θ, 2+θ 2 α

has a unique solution v ∈ C0

v

C 2+θ,

2+θ α 2 (Q)

1+θ, 1+θ 2 α

Proposition 7.5 Let g ∈ C0

(Q), and ≤ c g

C 2+θ,

2+θ α  2 (Q )

.



(Q ). Then under conditions (7.36) and (7.37), 2+θ, 2+θ 2 α

problem (7.35) has a unique solution w ∈ C0

(Q), and


C 2+θ,

2+θ α 2 (Q)

325

≤ c(δ0, M0 ) g

C 1+θ,

1+θ α  2 (Q )

.

7.3.3 Subdiffusion on bounded domains

Now we study subdiffusion on a bounded domain. Let $\Omega\subset\mathbb{R}^d$ be an open bounded domain with a boundary $\partial\Omega$. Let $Q=\Omega\times(0,T]$ and $\Sigma=\partial\Omega\times(0,T]$. We consider the following two initial boundary value problems:
$$\begin{cases}\partial_t^\alpha u + L(t)u = f, & \text{in } Q,\\ u = g, & \text{on } \Sigma,\\ u(0) = u_0, & \text{in } \Omega,\end{cases}\tag{7.52}$$
and
$$\begin{cases}\partial_t^\alpha u + L(t)u = f, & \text{in } Q,\\ \displaystyle\sum_{i=1}^d b_i\,\partial_{x_i}u + b_0 u = g, & \text{on } \Sigma,\\ u(0) = u_0, & \text{in } \Omega.\end{cases}\tag{7.53}$$
The (possibly time-dependent) elliptic operator $L(t)$ is given by
$$L(t)u = -\sum_{i,j=1}^d a_{ij}(x,t)\,\partial^2_{x_ix_j}u + \sum_{i=1}^d a_i(x,t)\,\partial_{x_i}u + a_0(x,t)\,u.$$
Throughout, the coefficients $a_{ij}$ and $b_i$ satisfy
$$\nu|\xi|^2 \le \sum_{i,j=1}^d a_{ij}\,\xi_i\xi_j \le \mu|\xi|^2, \qquad \forall(x,t)\in Q,\tag{7.54}$$
$$\sum_{i=1}^d b_i(x,t)\,n_i(x) \le -\delta < 0, \qquad \forall(x,t)\in\Sigma,\tag{7.55}$$

i, j=1 d

i=1

where $n(x)$ denotes the unit outward normal to the boundary $\partial\Omega$ at the point $x\in\partial\Omega$, and $0<\nu\le\mu<\infty$. Then we have the following two regularity results, for the Dirichlet and Neumann problems, respectively.

Theorem 7.9 Let $\partial\Omega\in C^{2+\theta}$, $a_{ij},a_i,a_0\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$ for $i,j=1,\dots,d$, and let the following compatibility conditions hold:
$$g(\cdot,0) = u_0, \quad\text{on } \partial\Omega, \qquad \partial_t^\alpha g(x,0) = L(0)u_0, \quad\text{on } \partial\Omega.$$
Then for $u_0\in C^{2+\theta}(\overline\Omega)$, $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $g\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(\Sigma)$, problem (7.52) has a unique solution $u\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ satisfying
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\big(\|u_0\|_{C^{2+\theta}(\overline\Omega)} + \|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|g\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(\Sigma)}\big).$$

θ

Theorem 7.10 Let $\partial\Omega\in C^{2+\theta}$, $a_{ij},a_i,a_0\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, and $b_0,b_i\in C^{1+\theta,\frac{1+\theta}{2}\alpha}(\Sigma)$, for $i,j=1,\dots,d$, and let the following compatibility condition hold:
$$\sum_{i=1}^d b_i(x,0)\,\partial_{x_i}u_0(x) + b_0(x,0)\,u_0(x) = g(x,0), \quad\text{on } \partial\Omega.\tag{7.56}$$
Then for $u_0\in C^{2+\theta}(\overline\Omega)$, $f\in C^{\theta,\frac{\theta}{2}\alpha}(Q)$, $g\in C^{1+\theta,\frac{1+\theta}{2}\alpha}(\Sigma)$, problem (7.53) has a unique solution $u\in C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)$ satisfying
$$\|u\|_{C^{2+\theta,\frac{2+\theta}{2}\alpha}(Q)} \le c\big(\|u_0\|_{C^{2+\theta}(\overline\Omega)} + \|f\|_{C^{\theta,\frac{\theta}{2}\alpha}(Q)} + \|g\|_{C^{1+\theta,\frac{1+\theta}{2}\alpha}(\Sigma)}\big).$$

.

The compatibility conditions in Theorems 7.9 and 7.10 express the fact that the fractional derivatives $(\partial_t^\alpha)^k u|_{t=0}$, which can be determined at $t=0$ from the equation and the initial condition, must satisfy the corresponding boundary condition. The analysis of problems (7.52) and (7.53) is similar, and thus we discuss only problem (7.53). The overall analysis strategy is identical with that in Section 7.2. First, we consider the case of zero initial data on a small time interval $[0,\tau]$, and show the existence of a solution by means of a regularizer [LSU68, Chapter IV]. We use the following shorthand notation: $Q_\tau=\Omega\times(0,\tau]$ and $\Sigma_\tau=\partial\Omega\times(0,\tau]$.

1+θ, 1+θ α

2 Lemma 7.21 Suppose u0 ≡ 0, f ∈ C0 2 (Q), g ∈ C0 (Σ) and the compatibility conditions (7.56) are satisfied. Then for sufficiently small τ ∈ (0, T), problem

2+θ, 2+θ α

2 (7.53) has a unique solution u ∈ C0 

u 2+θ, 2+θ α ≤ c f

C

C

(Q τ )

2

(Qτ )) satisfying the estimate  . + g 1+θ, 1+θ α θ, θ α 2 (Q τ )

C

2

(Σ τ )

Proof We prove the lemma by constructing a regularizer, following the standard procedure [LSU68, Chapter IV, Sections 4–7]. We cover the domain Ω with the balls (k) Bλ(k) and B2λ of radii λ and 2λ, respectively, with a common center ξ (k) ∈ Ω for sufficiently small λ > 0. The index k belongs to one of the following two sets:  I1, if Bλ(k) ∩ ∂Ω  ∅, k∈ I2, if Bλ(k) ∩ ∂Ω = ∅. We take

2

τ = κλ α , ζ (k)

with κ < 1.

(7.57)

η(k)

Let and be the sets of smooth functions subordinated to the indicated overlapping coverings of the domain Ω such that

ζ (k) (x)η(k) (x) = 1, x ∈ Ω, k

|Dx ζ (k) | + |Dx η(k) | ≤ cλ− | |,

| | ≥ 0.


(k) By the regularity assumption on the boundary ∂Ω, ∂Ω ∩ B2λ can be expressed  (k) by yd = F(y ) in a local coordinate y with origin at ξ , where the axis yd is oriented in the direction of the outward normal vector n(ξ (k) ) to ∂Ω. We straighten (k) by z  = y  and zd = yd − F(y ). Let z = Zk (x) be the boundary segment ∂Ω ∩ B2λ the transformation of the coordinate x into z. Below we denote

L0 (x, t, ∂x, ∂tα ) = ∂tα −

d

ai j (x, t)∂xi ∂x j

and

B0 (x, t, ∂x ) =

i, j=1

d

bi (x, t)∂xi ,

i=1

i.e., the leading terms of the operators L and B, respectively. Let L0(k) and B0(k) be the operators L0 and B0 in the local coordinate y at the origin (ξ (k), 0): L0(k) (ξ (k), 0, ∂y, ∂tα ) = ∂tα −

d

ai(k) j ∂yi ∂y j

and

B0(k) (ξ (k), 0, ∂y ) =

i, j=1

d

b(k) i ∂yi ,

i=1

(k) (k) (k) with the constants ai(k) j = ai j (ξ , 0) and bi = bi (ξ , 0). Let φ = ( f , g) and define a regularizer R by

Rφ = η(k) (x)uk (x, t), k ∈I1 ∪I2

where the functions uk , k ∈ I1 ∪ I2 , are defined as follows. For k ∈ I2 , uk (x, t) solves  (k) L0 (ξ (k), 0, ∂y, ∂tα )uk (x, t) = fk (x, t), (x, t) ∈ Rd × I, uk (x, 0) = 0,

x ∈ Rd,

with fk (x, t) = ζk (x) f (x, t). For k ∈ I1 , let uk (x, t) = uk (z, t)|z=Z −1 (x), k

where

uk (x, t)

solves

⎧ ⎪ L (k) (ξ (k), 0, ∂z , ∂tα )uk (z, t) = fk(z, t), (x, t) ∈ R+d × I, ⎪ ⎪ ⎨ 0(k) ⎪ B0 (ξ (k), 0, ∂z )uk (z , 0, t) = gk (z , t), (z , t) ∈ Rd−1 × I, ⎪ ⎪ ⎪ ⎪ uk (z, 0) = 0, z ∈ R+d, ⎩ with fk(z, t) = ζk (x) f (x, t)|x=Zk (z) and gk (z, t) = ζk (x)g(x, t)|x=Zk (z) . According to [LSU68, Chapter IV, Section 6], we can reduce these problems to the case ai(k) j = δi j , by properly changing the coordinates. Moreover, we can repeat the routine but lengthy calculations of [LSU68, Chapter IV, Sections 6 and 7] to show that the parameter δ0 in the condition hd < −δ0 < 0 depends only on δ and μ from (7.54) and (7.55). Next we rewrite problem (7.53) (with zero initial data) in an operator form: Au = φ,


where the linear operator A 2+θ, 2+θ 2 α

A : C0

θ, θ2 α

(Q) → H (Q) ≡ C0

1+θ, 1+θ 2 α

(Q) × C0

(Σ)

is defined by the expressions on the left-hand side of (7.53), and H (Q) is the space of function pairs φ = ( f , g), with the norm

φ H(Q) = f

C θ,

θα 2 (Q)

+ g

C 1+θ,

1+θ α 2 (Σ)

.

In view of Propositions 7.2 and 7.4–7.5, we obtain



C 2+θ,

2+θ α 2 (Q)

≤ c φ H(Q),

where the constant c does not depend on λ and τ. Further, for any φ ∈ H (Qτ ), u ∈ 2+θ C 2+θ, 2 α (Qτ ), the following estimates hold: ⎧ ⎪ ⎨ RAu − u C 2+θ, 2+θ ⎪ α 2 (Q

τ)

ARφ − φ H(Q

τ)

⎪ ⎪ ⎩

≤ 12 u

C 2+θ,

,

2+θ α 2 (Q τ )

≤ 12 φ H(Q ),

(7.58)

τ

if τ is sufficiently small, and (7.57) holds. The requisite computations are direct but tedious (cf. [LSU68, Chapter IV, Section 7] for the standard parabolic case). The last two inequalities directly yield the assertion of the lemma, using a basic result in functional analysis.  Now we can state the proof of Theorem 7.10. Proof To remove the restriction on the initial data u0 in Lemma 7.21 and then to extend the solution from [0, τ] to I, we reduce problem (7.53) to the new unknowns with zero initial data (a) at t = 0 and (b) at t = τ. In case (a), by the HestenesWhitney extension theorem, there exist bounded extensions uˆ0 and fˆ of u0 and f (·, 0) − L(0)u0 − Δu0 ∈ C θ (Ω) from Ω to Rd , with finite support, denoted by uˆ0 ∈ C 2+θ (Rd ) and fˆ ∈ C θ (Rd ). Then let u(0) solve the Cauchy problem  α (0) ∂t u − Δu(0) = fˆ, in Rd × I, u(0) (0) = uˆ0, Then u(0) (·, 0) = u0,

∂tα u(0) (·, 0) = f (·, 0) − L(0)u0

and by Proposition 7.2, uˆ(0) ∈ C

u(0)

C 2+θ,

in Rd .

2+θ, 2+θ 2

2+θ α 2 (R d ×[0,T ])

α (Rd

in Ω.

× I) with

≤ c( uˆ0 C 2+θ (R d ) + fˆ(·, 0) C θ (R d ) ).


Now we look for the solution u of problem (7.53) as u = v + u(0) , where v is the solution of a problem of the form (7.53) but with a zero initial condition. Case (b) can be treated exactly as in the proof of Theorem 7.7, and thus the details are omitted. 

Exercises

Exercise 7.1 Give a direct proof of Lemma 7.1(i) using Bernstein's theorem and the complete monotonicity of Mittag-Leffler functions in Theorem 3.5.

Exercise 7.2 Find the moments $\mu_{2n}(t) = \int_{\mathbb{R}^d}|x|^{2n}G_\alpha(x,t)\,dx$.

Exercise 7.3 Give a representation of the solution to the Neumann problem for subdiffusion on the half interval $\Omega=(0,\infty)$:
$$\begin{cases}\partial_t^\alpha u = u_{xx}, & \text{in } \Omega\times(0,\infty),\\ u_x(0,t) = 0, & \text{in } (0,\infty),\\ u(\cdot,0) = u_0, & \text{in } \Omega,\end{cases}$$
using the fundamental solutions, and discuss the solution behavior for incompatible data. (Hint: use an even extension of $u$.)

Exercise 7.4 Prove the asymptotic expansion (7.12).

Exercise 7.5 Let $G(t)=\theta_\alpha(1,t)\equiv\theta_\alpha(-1,t)$. Show that $G(t)$ is $C^\infty$ on $[0,\infty)$, with all finite-order derivatives vanishing at $t=0$, i.e., $G^{(m)}(0)=0$, $m=0,1,\dots$.

Exercise 7.6 Consider the following initial boundary value problem with a mixed boundary condition on the unit interval $\Omega=(0,1)$ (with $\alpha\in(0,1)$):
$$\begin{cases}\partial_t^\alpha u - \partial_{xx}^2 u = f, & \text{in } Q,\\ u(0,\cdot) = g_1, & \text{on } I,\\ -\partial_x u(1,\cdot) = g_2, & \text{on } I,\\ u(0) = u_0, & \text{in } \Omega.\end{cases}$$
Find a representation of the solution $u(x,t)$ using fractional $\theta$ functions.

Exercise 7.7 Consider the following initial boundary value problem on the unit interval $\Omega=(0,1)$ with $\alpha\in(0,1)$:
$$\begin{cases}\partial_t^\alpha u = \partial_{xx}^2 u, & \text{in } Q,\\ u(0) = u_0, & \text{in } \Omega,\\ u(0,t) = u(1,t) = 0, & \text{on } I.\end{cases}$$
(i) Prove that the solution $u$ is given by
$$u(x,t) = \sum_{n=1}^\infty a_n\,E_{\alpha,1}(-n^2\pi^2 t^\alpha)\sin n\pi x, \qquad\text{with } a_n = 2\int_0^1 u_0(x)\sin n\pi x\,dx.$$
(A small numerical illustration of this expansion is sketched after this exercise list.)
(ii) Show that the solution $u$ is also given by $u(x,t) = \int_0^1\big(\theta_\alpha(x-y,t)-\theta_\alpha(x+y,t)\big)u_0(y)\,dy$.
(iii) Using the results in (i) and (ii), show that for all $x$, $y$ and $t>0$, there holds
$$2\sum_{n=1}^\infty E_{\alpha,1}(-n^2\pi^2 t^\alpha)\sin n\pi x\,\sin n\pi y = \theta_\alpha(x-y,t) - \theta_\alpha(x+y,t).$$
This relation generalizes the well-known result for the classical $\theta$ function.

Exercise 7.8 Prove Theorem 7.2.

Exercise 7.9 Derive the representation for problem (7.18), if the elliptic operator has a constant coefficient, i.e., $-a_0\partial_{xx}^2 u$, for $a_0>0$, instead of $\partial_{xx}^2 u$.

Exercise 7.10 Discuss the compatibility condition (7.20) for the Neumann problem (7.19), and determine the nature of the leading singularity term when the initial condition and the Neumann boundary condition are incompatible.

Exercise 7.11 Prove the estimate (7.33).

Exercise 7.12 Does the inequality (7.41) hold for the case $|\ell|=0$?

Exercise 7.13 Prove the remaining estimates in Lemma 7.20.
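As referenced in Exercise 7.7(i), here is a quick numerical look at the eigenfunction expansion. The sketch below uses the illustrative initial datum $u_0(x)=\sin\pi x$ (so that $a_1=1$ and all other coefficients vanish) and evaluates $E_{\alpha,1}$ by its plain power series, which is adequate only for moderate arguments; the algorithms of Section 3.3 should be used for anything serious. The chosen point, time and order are arbitrary assumptions for illustration.

```python
from math import gamma, pi, sin

def ml(z, alpha, terms=100):
    """Mittag-Leffler E_{alpha,1}(z) by its power series (moderate |z| only)."""
    s, term = 0.0, 1.0                 # term holds z^k
    for k in range(terms):
        s += term / gamma(alpha * k + 1.0)
        term *= z
    return s

def u_series(x, t, a_coeffs, alpha):
    """Eigenfunction expansion of Exercise 7.7(i): sum_n a_n E_{alpha,1}(-n^2 pi^2 t^alpha) sin(n pi x)."""
    return sum(a_n * ml(-(n * pi) ** 2 * t ** alpha, alpha) * sin(n * pi * x)
               for n, a_n in enumerate(a_coeffs, start=1))

# usage: u0(x) = sin(pi x), so a_1 = 1 and all other coefficients vanish
print(u_series(0.5, 0.2, a_coeffs=[1.0], alpha=0.5))   # equals E_{1/2,1}(-pi^2 * 0.2^{1/2})
```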

Appendix A

Mathematical Preliminaries

In this appendix, we recall various function spaces (e.g., AC spaces, Hölder spaces and Sobolev spaces), two integral transforms and several fixed point theorems.

A.1 AC Spaces and Hölder Spaces

A.1.1 AC spaces

The space of absolutely continuous (AC) functions is extensively used in the study of fractional calculus. Throughout, $D=(a,b)$, with $a<b$, is an open bounded interval.

Definition A.1 A function $v:D\to\mathbb{R}$ is called absolutely continuous on $D$ if for any $\varepsilon>0$ there exists a $\delta>0$ such that for any finite set of pairwise disjoint intervals $[a_k,b_k]\subset D$, $k=1,\dots,n$, with $\sum_{k=1}^n(b_k-a_k)<\delta$, there holds the inequality $\sum_{k=1}^n|v(b_k)-v(a_k)|<\varepsilon$.

The space of these functions is denoted by $AC(D)$. It is known that the space $AC(D)$ coincides with the space of primitives of Lebesgue integrable functions, i.e.,
$$v\in AC(D) \iff v(x) = c + \int_a^x\varphi(s)\,ds,\ \forall x\in D, \quad\text{with } \int_a^b|\varphi(s)|\,ds < \infty,$$

for some constant $c$. Hence, AC functions have a summable derivative $v'(x)$ almost everywhere (a.e.). Clearly, $C^1(D)\subset AC(D)$, but the converse is not true. For example, for $\alpha\in(0,1)$, $v(x)=(x-a)^\alpha\in AC(D)$, but $v(x)\notin C^1(D)$. However, if $v$ is continuous and $v'\in L^1(D)$ exists a.e., $v$ is not necessarily in $AC(D)$. One counterexample is Lebesgue's singular function $v$ (also known as the Cantor–Vitali function or devil's

331

332

A Mathematical Preliminaries

staircase) which is continuous on [0, 1] and has a zero derivative a.e., but is not ∫1  AC(D), in fact v(0) = 0, v(1) = 1 and thus v(1) − v(0)  0 v (s)ds. The following facts are well-known on a bounded interval: C 1 (D) ⊂ C 0,1 (D) ⊂ differential a.e., and AC(D) ⊂ uniformly continuous ⊂ C(D). It is also know that, on a bounded interval, the sum and pointwise product of functions in AC(D) belongs to AC(D), and if v ∈ AC(D) and f ∈ C 0,1 (D), then f ◦ v ∈ AC(D). However, the composition of two AC(D) functions need not be AC(D). Definition A.2 AC n (D), n ∈ N, denotes the space of functions v(x) with continuous derivatives up to order n − 1 in D and the (n − 1)th derivative v (n−1) (x) ∈ AC(D). The space AC n (D) can be characterized as follows. functions Theorem A.1 The space AC n (D), n ∈ N consists of ∫ xthose and only those 1 n−1 ϕ(s)ds+ n−1 c (x− (x−s) v(x) which are represented in the form v(x) = (n−1)! j=0 j a a) j , where ϕ(t) ∈ L 1 (D) and c j , j = 0, . . . , n − 1 are arbitrary constants.

A.1.2 Hölder spaces

Let $\Omega \subset \mathbb{R}^d$ be an open bounded domain, $\partial\Omega$ be its boundary, and $\overline{\Omega}$ be its closure. Then for any $\gamma \in (0,1]$, we define the Hölder space $C^{0,\gamma}(\overline{\Omega})$ by
$$
C^{0,\gamma}(\overline{\Omega}) := \big\{ v \in C(\overline{\Omega}) : |v(x) - v(y)| \le L|x-y|^\gamma,\ \forall x, y \in \overline{\Omega} \big\},
$$
where $L$ is a nonnegative constant, equipped with the norm
$$
\|v\|_{C^{0,\gamma}(\overline{\Omega})} = \|v\|_{C(\overline{\Omega})} + [v]_{C^{\gamma}(\overline{\Omega})},
$$
with $[\cdot]_{C^{\gamma}(\overline{\Omega})}$ being the $C^{\gamma}(\overline{\Omega})$ seminorm, defined by
$$
[v]_{C^{\gamma}(\overline{\Omega})} = \sup_{x,y\in\overline{\Omega},\,x\neq y} \frac{|v(x)-v(y)|}{|x-y|^\gamma}.
$$
The case $\gamma = 1$ is called Lipschitz, and the corresponding seminorm is the Lipschitz constant. For any multi-index $\beta = (\beta_1, \ldots, \beta_d) \in \mathbb{N}_0^d$, $|\beta| = \sum_{i=1}^d \beta_i$ and $D^\beta = \frac{\partial^{|\beta|}}{\partial x_1^{\beta_1}\cdots\partial x_d^{\beta_d}}$. Then we denote by $C^{k,\gamma}(\overline{\Omega})$, $k \ge 1$, the space
$$
C^{k,\gamma}(\overline{\Omega}) := \big\{ v \in C^k(\overline{\Omega}) : |D^\beta v(x) - D^\beta v(y)| \le L|x-y|^\gamma,\ \forall x, y \in \overline{\Omega},\ \forall \beta \in \mathbb{N}_0^d \text{ with } |\beta| = k \big\},
$$
with the norm $\|\cdot\|_{C^{k,\gamma}(\overline{\Omega})}$ defined similarly by
$$
\|v\|_{C^{k,\gamma}(\overline{\Omega})} = \|v\|_{C^k(\overline{\Omega})} + \sum_{\beta\in\mathbb{N}_0^d,\,|\beta|=k} [D^\beta v]_{C^{0,\gamma}(\overline{\Omega})}.
$$
We often identify $C^{\gamma}(\overline{\Omega})$, with $\gamma > 0$, with the space $C^{k,\gamma'}(\overline{\Omega})$, where $k$ is an integer, $\gamma' \in (0,1]$ and $\gamma = k + \gamma'$. The Hölder spaces are Banach spaces. Further, for any $0 < \alpha < \beta \le 1$ and $k \in \mathbb{N}_0$, $C^{k,\beta}(\overline{\Omega}) \hookrightarrow\hookrightarrow C^{k,\alpha}(\overline{\Omega})$ (i.e., compact embedding). The Hölder spaces $C^{k,\gamma}(\Omega)$ may be defined analogously, if the function and its derivatives are not continuous up to the boundary $\partial\Omega$. Note that if $v \in C^{0,\gamma}(\Omega)$, $0 < \gamma < 1$, then $v$ is uniformly continuous in $\Omega$, and therefore the Cauchy criterion implies that $v$ can be uniquely extended to $\partial\Omega$ to give a continuous function in $\overline{\Omega}$. This allows us to talk about values on the boundary $\partial\Omega$ of $v \in C^{0,\gamma}(\Omega)$, and also write $C^{0,\gamma}(\Omega) = C^{0,\gamma}(\overline{\Omega})$.
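The Hölder seminorm can be estimated numerically by maximizing the difference quotient over a grid. The following Python sketch is not from the book; the helper name, the grid and the test function $v(x)=\sqrt{x}$ are illustrative choices. It indicates that $\sqrt{x}$ on $[0,1]$ has a finite $C^{0,1/2}$ seminorm but does not belong to $C^{0,3/4}$.

```python
# A minimal numerical sketch (not from the book): estimate the Holder seminorm
# [v]_{C^{0,gamma}} of v(x) = sqrt(x) on [0,1] by maximizing the difference quotient
# over a uniform grid. All names here are illustrative.
import numpy as np

def holder_seminorm(v, x, gamma):
    """Discrete estimate of sup_{x != y} |v(x) - v(y)| / |x - y|^gamma."""
    X, Y = np.meshgrid(x, x, indexing="ij")
    V, W = np.meshgrid(v, v, indexing="ij")
    mask = X != Y
    return np.max(np.abs(V[mask] - W[mask]) / np.abs(X[mask] - Y[mask]) ** gamma)

x = np.linspace(0.0, 1.0, 1001)
v = np.sqrt(x)
print(holder_seminorm(v, x, 0.50))   # stays ~1 under grid refinement: v is C^{0,1/2}
print(holder_seminorm(v, x, 0.75))   # blows up under grid refinement: v is not C^{0,3/4}
```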

A.2 Sobolev Spaces

Now we review several useful results on Sobolev spaces. The standard references on the topic include [AF03, Gri85]. Let $\Omega \subset \mathbb{R}^d$ be an open bounded domain with a boundary $\partial\Omega$, and $\Omega^c = \mathbb{R}^d \setminus \Omega$. The domain $\Omega$ is said to be $C^{k,\gamma}$ (respectively, Lipschitz) if $\partial\Omega$ is compact and if, locally near each boundary point, $\Omega$ can be represented as the set above the graph of a $C^{k,\gamma}$ (respectively, Lipschitz) function.

A.2.1 Lebesgue spaces

The Lebesgue space $L^p(\Omega)$, $1 \le p \le \infty$, consists of functions $v: \Omega \to \mathbb{R}$ with a finite norm $\|v\|_{L^p(\Omega)}$, where
$$
\|v\|_{L^p(\Omega)} =
\begin{cases}
\big(\int_\Omega |v|^p\,\mathrm{d}x\big)^{\frac1p}, & \text{if } 1 \le p < \infty,\\
\operatorname*{ess\,sup}_\Omega |v|, & \text{if } p = \infty.
\end{cases}
$$
A function $v: \Omega \to \mathbb{R}$ is said to be in $L^p_{loc}(\Omega)$, if for any compact subset $\omega \subset\subset \Omega$, there holds $v \in L^p(\omega)$. In $L^p(\Omega)$ spaces, Hölder's inequality is an indispensable tool: if $p, q \in [1,\infty]$ with $p^{-1} + q^{-1} = 1$ (i.e., $p, q$ are conjugate exponents, and $q$ is often written as $p'$), then for all $v \in L^p(\Omega)$ and $w \in L^q(\Omega)$, there holds $vw \in L^1(\Omega)$ and
$$
\|vw\|_{L^1(\Omega)} \le \|v\|_{L^p(\Omega)}\|w\|_{L^q(\Omega)}.
$$
The equality holds if and only if $v$ and $w$ are linearly dependent. One useful generalization is Young's inequality for the convolution $v*w$ defined on $\mathbb{R}^d$, i.e.,
$$
(v*w)(x) = \int_{\mathbb{R}^d} v(x-y)w(y)\,\mathrm{d}y.
$$
One can obtain a version of the inequality on a bounded domain by zero extension.

Theorem A.2 Let $v \in L^p(\mathbb{R}^d)$ and $w \in L^q(\mathbb{R}^d)$, with $1 \le p, q \le \infty$. For any $1 \le r \le \infty$ with $p^{-1} + q^{-1} = r^{-1} + 1$,
$$
\|v*w\|_{L^r(\mathbb{R}^d)} \le \|v\|_{L^p(\mathbb{R}^d)}\|w\|_{L^q(\mathbb{R}^d)}. \qquad (A.1)
$$
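As a quick sanity check of (A.1), the following Python sketch is not from the book; the grid, the Gaussian and exponential test functions, and the helper `lp_norm` are illustrative choices. It verifies the inequality numerically for $p = 2$, $q = 1$, $r = 2$ by approximating the convolution and the norms with Riemann sums.

```python
# A minimal numerical check (not from the book) of Young's inequality (A.1) on R:
# with p = 2, q = 1, r = 2 we have 1/p + 1/q = 1/r + 1.
import numpy as np

h = 1e-3
x = np.arange(-10.0, 10.0, h)
v = np.exp(-x**2)                  # v in L^2(R)
w = np.exp(-np.abs(x))             # w in L^1(R)

conv = np.convolve(v, w) * h       # Riemann-sum approximation of (v*w)

def lp_norm(f, p, h):
    return (np.sum(np.abs(f)**p) * h) ** (1.0 / p)

lhs = lp_norm(conv, 2, h)
rhs = lp_norm(v, 2, h) * lp_norm(w, 1, h)
print(lhs, "<=", rhs)              # the inequality holds up to quadrature error
```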

Occasionally we use the weak $L^p(\Omega)$ space, $1 \le p < \infty$, denoted by $L^{p,\infty}(\Omega)$. For $v: \Omega \to \mathbb{R}$, the distribution function $\lambda_v$ is defined for $t > 0$ by $\lambda_v(t) = |\{x \in \Omega : |v(x)| > t\}|$, where $|\cdot|$ denotes the Lebesgue measure of a set in $\mathbb{R}^d$. $v$ is said to be in the space $L^{p,\infty}(\Omega)$, if there is $c > 0$ such that, for all $t > 0$, $\lambda_v(t) \le c^p t^{-p}$. The best constant $c$ is the $L^{p,\infty}(\Omega)$-norm of $v$, and is denoted by $\|v\|_{L^{p,\infty}(\Omega)} = \sup_{t>0} t\lambda_v(t)^{\frac1p}$. The $L^{p,\infty}(\Omega)$-norm is not a true norm, since the triangle inequality does not hold. For $v \in L^p(\Omega)$, $\|v\|_{L^{p,\infty}(\Omega)} \le \|v\|_{L^p(\Omega)}$, and $L^p(\Omega) \subset L^{p,\infty}(\Omega)$. With the convention that two functions are equal if they are equal a.e., the spaces $L^{p,\infty}(\Omega)$ are complete, and further, for $p > 1$, $L^{p,\infty}(\Omega)$ are Banach spaces [Gra04].

Example A.1 Let $\omega_\alpha(t) = \frac{t^{\alpha-1}}{\Gamma(\alpha)}$, $\alpha \in (0,1)$, and $T > 0$ be fixed. Then $\omega_\alpha \in L^p(0,T)$ for any $p \in [1, (1-\alpha)^{-1})$, but it does not belong to $L^{\frac{1}{1-\alpha}}(0,T)$. However, $\omega_\alpha \in L^{\frac{1}{1-\alpha},\infty}(0,T)$. Indeed, the distribution function $\lambda_{\omega_\alpha}$ is given by $\lambda_{\omega_\alpha}(t) = |\{s \in (0,T): |\omega_\alpha(s)| > t\}| = (\Gamma(\alpha)t)^{-\frac{1}{1-\alpha}}$. Hence,
$$
\|\omega_\alpha\|_{L^{\frac{1}{1-\alpha},\infty}(0,T)} = \sup_{t>0} t\lambda_{\omega_\alpha}(t)^{1-\alpha} = \Gamma(\alpha)^{-1} < \infty.
$$
This shows the desired assertion.
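The computation in Example A.1 can be reproduced numerically. The sketch below is not from the book; the grid resolution and the sampled thresholds are illustrative choices. It approximates the distribution function $\lambda_{\omega_\alpha}$ on a grid and confirms that $t\lambda_{\omega_\alpha}(t)^{1-\alpha}$ settles at $\Gamma(\alpha)^{-1}$.

```python
# A minimal numerical illustration (not from the book) of Example A.1 with alpha = 1/2:
# for omega_alpha(s) = s^(alpha-1)/Gamma(alpha) on (0, T), the quantity
# t * lambda(t)^(1-alpha) equals 1/Gamma(alpha) for large t, so the weak norm is finite.
import numpy as np
from math import gamma

alpha, T = 0.5, 1.0
s = np.linspace(1e-9, T, 2_000_000)
omega = s**(alpha - 1.0) / gamma(alpha)

for t in [1.0, 10.0, 100.0]:
    lam = np.mean(omega > t) * T              # measure of {s in (0,T): omega(s) > t}
    print(t, t * lam**(1.0 - alpha), 1.0 / gamma(alpha))
```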

A.2.2 Sobolev spaces

Let $C_0^\infty(\Omega)$ denote the space of infinitely differentiable functions with compact support in $\Omega$. If $v, w \in L^1_{loc}(\Omega)$ and $\beta = (\beta_1, \ldots, \beta_d) \in \mathbb{N}_0^d$ is a multi-index, then $w$ is called the $\beta$th weak partial derivative of $v$, denoted by $w = D^\beta v$, if there holds
$$
\int_\Omega v D^\beta\varphi\,\mathrm{d}x = (-1)^{|\beta|}\int_\Omega w\varphi\,\mathrm{d}x, \quad \forall \varphi \in C_0^\infty(\Omega),
$$
with $|\beta| = \sum_{i=1}^d \beta_i$. Note that a weak partial derivative, if it exists, is unique up to a set of Lebesgue measure zero. We define the Sobolev space $W^{k,p}(\Omega)$, for any $1 \le p \le \infty$ and $k \in \mathbb{N}$. It consists of all $v \in L^1_{loc}(\Omega)$ such that for each $\beta \in \mathbb{N}_0^d$ with $|\beta| \le k$, $D^\beta v$ exists in the weak sense and belongs to $L^p(\Omega)$, equipped with the norm
$$
\|v\|_{W^{k,p}(\Omega)} =
\begin{cases}
\big(\sum_{|\beta|\le k}\int_\Omega |D^\beta v|^p\,\mathrm{d}x\big)^{\frac1p}, & \text{if } 1 \le p < \infty,\\
\sum_{|\beta|\le k}\operatorname*{ess\,sup}_\Omega |D^\beta v|, & \text{if } p = \infty.
\end{cases}
$$
The space $W^{k,p}(\Omega)$, for any $k \in \mathbb{N}$ and $1 \le p \le \infty$, is a Banach space. The subspace $W_0^{k,p}(\Omega)$ denotes the closure of $C_0^\infty(\Omega)$ with respect to the norm $\|\cdot\|_{W^{k,p}(\Omega)}$, i.e.,
$$
W_0^{k,p}(\Omega) = \overline{C_0^\infty(\Omega)}^{\,W^{k,p}(\Omega)}.
$$
The case $p = 2$, i.e., $W^{k,2}(\Omega)$ and $W_0^{k,2}(\Omega)$, gives Hilbert spaces, often denoted by $H^k(\Omega)$ and $H_0^k(\Omega)$, respectively.

Below we recall a few useful results. The first is Poincaré's inequality. When $p = 2$, the optimal constant $c(\Omega,d)$ in the inequality coincides with the inverse square root of the lowest eigenvalue of the negative Dirichlet Laplacian $-\Delta$ on the domain $\Omega$.

Proposition A.1 If the domain $\Omega$ is bounded, then for any $1 \le p < \infty$
$$
\|v\|_{L^p(\Omega)} \le c(\Omega, d)\|\nabla v\|_{L^p(\Omega)}, \quad \forall v \in W_0^{1,p}(\Omega). \qquad (A.2)
$$
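For $\Omega = (0,1)$ and $p = 2$, the optimal constant in (A.2) is $1/\pi$, since the first Dirichlet eigenvalue of $-\mathrm{d}^2/\mathrm{d}x^2$ is $\pi^2$. The following sketch is not from the book; the grid size and the finite difference discretization are illustrative choices used to approximate this eigenvalue.

```python
# A minimal sketch (not from the book): approximate the first eigenvalue of the
# negative Dirichlet Laplacian on (0,1) with the standard 3-point finite difference
# stencil, and compare the resulting Poincare constant with 1/pi.
import numpy as np

n = 1000                                          # number of interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2        # -u'' with u(0) = u(1) = 0

lam1 = np.linalg.eigvalsh(A)[0]
print(lam1, np.pi**2)                             # ~9.8696
print(1.0 / np.sqrt(lam1), 1.0 / np.pi)           # optimal constant in (A.2), ~0.3183
```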

Sobolev embedding inequalities represent one of the most powerful tools in analysis. The $C^1$ regularity assumption on the domain can be relaxed.

Theorem A.3 Let $\Omega$ be a bounded $C^1$ domain and $1 \le p < \infty$. Then the following statements hold.
(i) If $k < \frac{d}{p}$, then the embedding $W^{k,p}(\Omega) \hookrightarrow L^q(\Omega)$ is continuous for every $q$ with $\frac1q \ge \frac1p - \frac{k}{d}$.
(ii) If $k > \frac{d}{p}$, then the embedding $W^{k,p}(\Omega) \hookrightarrow C^{k-[\frac{d}{p}]-1,\gamma}(\overline{\Omega})$ is continuous, with $\gamma = [\frac{d}{p}] - \frac{d}{p} + 1$ if $\frac{d}{p} \notin \mathbb{N}$, and for any $\gamma \in (0,1)$ otherwise.
(iii) If $k > l$ and $\frac1p - \frac{k}{d} < \frac1q - \frac{l}{d}$, then the embedding $W^{k,p}(\Omega) \hookrightarrow\hookrightarrow W^{l,q}(\Omega)$ is compact.

The next result is a corollary of Theorem A.4, with α = 1 and s = 1.


Corollary A.1 Let $1 \le p < d$. Then there holds
$$
\|v\|_{L^{\frac{dp}{d-p}}(\Omega)} \le c(d,p)\|Dv\|_{L^p(\Omega)}, \quad \forall v \in W_0^{1,p}(\Omega).
$$
If the boundary $\partial\Omega$ is piecewise smooth, functions $v$ in $W^{1,p}(\Omega)$ are actually defined up to $\partial\Omega$ via their traces, denoted by $\gamma v$. The space $W_0^{1,p}(\Omega)$ can be defined equivalently as the set of functions in $W^{1,p}(\Omega)$ whose trace is zero.

Theorem A.5 If $\partial\Omega$ is piecewise smooth, then for $q \in [1, \frac{(d-1)p}{d-p}]$ if $1 < p < d$, and $q \in [1,\infty)$ if $p = d$, there holds
$$
\|\gamma v\|_{L^q(\partial\Omega)} \le c(d,p,\Omega)\|v\|_{W^{1,p}(\Omega)}.
$$

A.2.3 Fractional Sobolev spaces

Fractional Sobolev spaces play a central role in the analysis of fdes; see the review [DNPV12] for an overview of fractional Sobolev spaces. For any $s \in (0,1)$, the Sobolev space $W^{s,p}(\Omega)$ is defined by
$$
W^{s,p}(\Omega) = \big\{ v \in L^p(\Omega) : |v|_{W^{s,p}(\Omega)} < \infty \big\}, \qquad (A.3)
$$
where the Sobolev-Slobodeckij seminorm $|\cdot|_{W^{s,p}(\Omega)}$ is defined by
$$
|v|_{W^{s,p}(\Omega)} = \Big(\iint_{\Omega\times\Omega} \frac{|v(x)-v(y)|^p}{|x-y|^{d+ps}}\,\mathrm{d}x\,\mathrm{d}y\Big)^{\frac1p}.
$$
The space $W^{s,p}(\Omega)$ is a Banach space, when equipped with the norm
$$
\|v\|_{W^{s,p}(\Omega)} = \big(\|v\|_{L^p(\Omega)}^p + |v|_{W^{s,p}(\Omega)}^p\big)^{\frac1p}.
$$
Equivalently, the space $W^{s,p}(\Omega)$ may be regarded as the restriction of functions in $W^{s,p}(\mathbb{R}^d)$ to $\Omega$. The subspace $W_0^{s,p}(\Omega) \subset W^{s,p}(\Omega)$ is defined as the closure of $C_0^\infty(\Omega)$ with respect to the $W^{s,p}(\Omega)$ norm, i.e.,
$$
W_0^{s,p}(\Omega) = \overline{C_0^\infty(\Omega)}^{\,W^{s,p}(\Omega)}. \qquad (A.4)
$$
We denote $W^{s,2}(\Omega)$ by $H^s(\Omega)$, and similarly $W_0^{s,2}(\Omega)$ by $H_0^s(\Omega)$. These are Hilbert spaces and frequently used. If the boundary $\partial\Omega$ is smooth, they can be equivalently defined through real interpolation of spaces by the K-method [LM72, Chapter 1]. Specifically, set $H^0(\Omega) = L^2(\Omega)$; then Sobolev spaces with real index $0 \le s \le 1$ can be defined as interpolation spaces of index $s$ for the pair $[L^2(\Omega), H^1(\Omega)]$, i.e.,
$$
H^s(\Omega) = \big[L^2(\Omega), H^1(\Omega)\big]_s. \qquad (A.5)
$$
Similarly, for $s \in [0,1]\setminus\{\frac12\}$, the spaces $H_0^s(\Omega)$ are defined as interpolation spaces of index $s$ for the pair $[L^2(\Omega), H_0^1(\Omega)]$, i.e.,
$$
H_0^s(\Omega) = \big[L^2(\Omega), H_0^1(\Omega)\big]_s, \quad s \in [0,1]\setminus\{\tfrac12\}. \qquad (A.6)
$$
The space $[L^2(\Omega), H_0^1(\Omega)]_{\frac12}$ is the so-called Lions-Magenes space, $H_{00}^{\frac12}(\Omega) = [L^2(\Omega), H_0^1(\Omega)]_{\frac12}$, which can be characterized as [LM72, Theorem 11.7]
$$
H_{00}^{\frac12}(\Omega) = \Big\{ w \in H^{\frac12}(\Omega) : \int_\Omega \frac{w^2(x)}{\operatorname{dist}(x,\partial\Omega)}\,\mathrm{d}x < \infty \Big\}. \qquad (A.7)
$$
If $\partial\Omega$ is Lipschitz, then (i) characterization (A.7) is equivalent to the definition via interpolation, and definitions (A.5) and (A.6) are also equivalent to (A.3) and (A.4), respectively; (ii) the space $C_0^\infty(\Omega)$ is dense in $H^s(\Omega)$ if and only if $s \le \frac12$, i.e., $H_0^s(\Omega) = H^s(\Omega)$. If $s > \frac12$, $H_0^s(\Omega)$ is strictly contained in $H^s(\Omega)$ [LM72, Theorem 11.1], and the following inclusions hold:
$$
H_{00}^{\frac12}(\Omega) \subsetneq H_0^{\frac12}(\Omega) = H^{\frac12}(\Omega),
$$
since $1 \in H_0^{\frac12}(\Omega)$ but $1 \notin H_{00}^{\frac12}(\Omega)$. Fractional Sobolev spaces of order greater than one are defined similarly. Given $k \in \mathbb{N}_0$ and $0 < s < 1$, the space $H^{k+s}(\Omega)$ is defined by
$$
H^{k+s}(\Omega) = \big\{ u \in H^k(\Omega) : D^\beta u \in H^s(\Omega)\ \forall \beta \in \mathbb{N}_0^d \text{ s.t. } |\beta| = k \big\},
$$
equipped with the norm
$$
\|v\|_{H^{k+s}(\Omega)} = \|v\|_{H^k(\Omega)} + \sum_{|\beta|=k} |D^\beta v|_{H^s(\Omega)}.
$$
When $d = 1$, the domain $\Omega$ is an open bounded interval $D = (a,b)$. Then the space $H_0^s(D)$ consists of functions in $H^s(D)$ whose extension by zero to $\mathbb{R}$ is in $H^s(\mathbb{R})$. We define $H_{0,L}^s(D)$ (respectively, $H_{0,R}^s(D)$) to be the set of functions whose extension by zero is in $H^s(-\infty,b)$ (respectively, $H^s(a,\infty)$).
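The Sobolev-Slobodeckij seminorm is a double integral that can be approximated directly on a grid. The Python sketch below is not from the book; the test function $v(x)=x$, the grid and the quadrature rule are illustrative choices for approximating $|v|_{W^{s,2}(0,1)}$ in one dimension.

```python
# A minimal numerical sketch (not from the book): approximate the Sobolev-Slobodeckij
# seminorm |v|_{W^{s,p}(0,1)} (d = 1) by a Riemann sum over the off-diagonal grid points.
import numpy as np

def slobodeckij_seminorm(v, x, s, p):
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    V, W = np.meshgrid(v, v, indexing="ij")
    mask = X != Y
    integrand = np.abs(V[mask] - W[mask])**p / np.abs(X[mask] - Y[mask])**(1.0 + p * s)
    return (np.sum(integrand) * h * h) ** (1.0 / p)

x = np.linspace(0.0, 1.0, 801)
v = x.copy()                                  # v(x) = x
print(slobodeckij_seminorm(v, x, 0.25, 2.0))  # ~0.73 (exact value sqrt(8/15))
print(slobodeckij_seminorm(v, x, 0.75, 2.0))  # ~1.6  (exact value sqrt(8/3))
```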

A.2.4 $\dot H^s(\Omega)$ spaces

We use the spaces $\dot H^s(\Omega)$, $s \ge 0$, frequently in Chapter 6. These are spaces with a special type of boundary conditions, associated with suitable elliptic operators. Let $A: H^2(\Omega)\cap H_0^1(\Omega) \to L^2(\Omega)$ be a symmetric uniformly coercive second-order elliptic operator with smooth coefficients. The spectrum of $A$ consists entirely of eigenvalues $\{\lambda_j\}_{j=1}^\infty$ (ordered nondecreasingly, with the multiplicities counted). By $\varphi_j \in H^2(\Omega)\cap H_0^1(\Omega)$, we denote the $L^2(\Omega)$ orthonormal eigenfunctions corresponding to $\lambda_j$. For any $s \ge 0$, we define the space $\dot H^s(\Omega)$ by
$$
\dot H^s(\Omega) = \Big\{ v \in L^2(\Omega) : \sum_{j=1}^\infty \lambda_j^s |(v,\varphi_j)|^2 < \infty \Big\},
$$
and $\dot H^s(\Omega)$ is a Hilbert space with the norm
$$
\|v\|_{\dot H^s(\Omega)}^2 = \sum_{j=1}^\infty \lambda_j^s |(v,\varphi_j)|^2.
$$
By definition, we have the following equivalent form:
$$
\|v\|_{\dot H^s(\Omega)}^2 = \sum_{j=1}^\infty |(v,\lambda_j^{\frac{s}{2}}\varphi_j)|^2 = \sum_{j=1}^\infty (v, A^{\frac{s}{2}}\varphi_j)^2 = \sum_{j=1}^\infty (A^{\frac{s}{2}}v, \varphi_j)^2 = \|A^{\frac{s}{2}}v\|_{L^2(\Omega)}^2.
$$
We have $\dot H^s(\Omega) \subset H^s(\Omega)$ for $s > 0$. In particular,
$$
\dot H^0(\Omega) = L^2(\Omega), \quad \dot H^1(\Omega) = H_0^1(\Omega) \quad\text{and}\quad \dot H^2(\Omega) = H_0^1(\Omega)\cap H^2(\Omega).
$$
Since $\dot H^s(\Omega) \subset L^2(\Omega)$, by identifying the dual $(L^2(\Omega))'$ with itself, we have $\dot H^s(\Omega) \subset L^2(\Omega) \subset (\dot H^s(\Omega))'$. We set $\dot H^{-s}(\Omega) = (\dot H^s(\Omega))'$, which consists of all bounded linear functionals on $\dot H^s(\Omega)$. It can be characterized by the space of distributions that can be written as
$$
v = \sum_{j=1}^\infty v_j\varphi_j(x), \quad\text{with}\quad \sum_{j=1}^\infty \lambda_j^{-s}v_j^2 < \infty.
$$
Next we specify explicitly the space $\dot H^s(\Omega)$. If the domain $\Omega$ is of class $C^\infty$, then $\dot H^s(\Omega) = H^s(\Omega)$ for $0 < s < \frac12$, and for $j = 1, 2, \ldots$, such that $2j - \frac32 < s < 2j + \frac12$,
$$
\dot H^s(\Omega) = \big\{ v \in H^s(\Omega) : v = Av = \ldots = A^{j-1}v = 0 \text{ on } \partial\Omega \big\}.
$$
For the exceptional index $s = 2j - \frac32$, the condition $A^{j-1}v = 0$ on $\partial\Omega$ should be replaced by $A^{j-1}v \in H_{00}^{\frac12}(\Omega)$. These results can be proved using elliptic regularity theory and interpolation [Tho06, p. 34]. If $\Omega$ is not $C^\infty$, then one must restrict $s$ accordingly. For example, if $\Omega$ is Lipschitz then the above relations are valid for $s \le 1$, and if $\Omega$ is convex or $C^{1,1}$, then we can allow $s \le 2$.
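For $\Omega = (0,1)$ and $A = -\mathrm{d}^2/\mathrm{d}x^2$ with zero Dirichlet conditions, the eigenpairs are $\lambda_j = (j\pi)^2$ and $\varphi_j(x) = \sqrt{2}\sin(j\pi x)$, so the $\dot H^s$ norm can be evaluated directly from the Fourier-sine coefficients. The sketch below is not from the book; the quadrature rule, truncation level and test function are illustrative choices that reproduce $\|v\|_{\dot H^1} = \|v'\|_{L^2}$ and $\|v\|_{\dot H^2} = \|v''\|_{L^2}$ for $v(x) = x(1-x)$.

```python
# A minimal sketch (not from the book) of the dot-H^s norm on Omega = (0,1) with
# A = -d^2/dx^2, lambda_j = (j*pi)^2, phi_j = sqrt(2) sin(j*pi*x).
import numpy as np

def dot_Hs_norm(v, x, s, n_modes=200):
    h = x[1] - x[0]
    norm2 = 0.0
    for j in range(1, n_modes + 1):
        lam_j = (j * np.pi) ** 2
        phi_j = np.sqrt(2.0) * np.sin(j * np.pi * x)
        vj = np.sum(v * phi_j) * h            # (v, phi_j), composite rectangle rule
        norm2 += lam_j ** s * vj ** 2
    return np.sqrt(norm2)

x = np.linspace(0.0, 1.0, 4001)
v = x * (1.0 - x)
print(dot_Hs_norm(v, x, 0.0))   # ~0.1826 = ||v||_{L^2} = sqrt(1/30)
print(dot_Hs_norm(v, x, 1.0))   # ~0.5774 = ||v'||_{L^2} = sqrt(1/3)
print(dot_Hs_norm(v, x, 2.0))   # ~2.0    = ||v''||_{L^2}
```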


A.2.5 Bochner spaces

In order to study the well-posedness and regularity for parabolic-type problems, e.g., the subdiffusion model, Bochner-Sobolev spaces are useful. For any $T > 0$, let $I = (0,T)$. For a Banach space $E$, we define
$$
L^r(I;E) = \{ v : v(t) \in E \text{ for a.e. } t \in I \text{ and } \|v\|_{L^r(I;E)} < \infty \},
$$
for any $r \ge 1$, and the norm $\|\cdot\|_{L^r(I;E)}$ is defined by
$$
\|v\|_{L^r(I;E)} =
\begin{cases}
\big(\int_I \|v(t)\|_E^r\,\mathrm{d}t\big)^{\frac1r}, & r \in [1,\infty),\\
\operatorname*{ess\,sup}_{t\in I}\|v(t)\|_E, & r = \infty.
\end{cases}
$$
For any $s \ge 0$ and $1 \le p < \infty$, we denote by $W^{s,p}(I;E)$ the space of functions $v: I \to E$, with the norm defined by complex interpolation. Equivalently, the space is equipped with the quotient norm
$$
\|v\|_{W^{s,p}(I;E)} := \inf_{\tilde v} \|\tilde v\|_{W^{s,p}(\mathbb{R};E)} := \inf_{\tilde v} \big\|\mathcal{F}^{-1}\big[(1+|\xi|^2)^{\frac{s}{2}}\mathcal{F}(\tilde v)(\xi)\big]\big\|_{L^p(\mathbb{R};E)},
$$
where the infimum is taken over all possible $\tilde v$ that extend $v$ from $I$ to $\mathbb{R}$, and $\mathcal{F}$ denotes the Fourier transform. In analogy with the definition in Section A.2.3, one can also define the Sobolev-Slobodeckij seminorm $|\cdot|_{W^{s,p}(I;E)}$ by
$$
|v|_{W^{s,p}(I;E)} := \Big(\int_I\int_I \frac{\|v(t)-v(\xi)\|_E^p}{|t-\xi|^{1+ps}}\,\mathrm{d}t\,\mathrm{d}\xi\Big)^{\frac1p},
$$
and the full norm $\|\cdot\|_{W^{s,p}(I;E)}$ by
$$
\|v\|_{W^{s,p}(I;E)} = \big(\|v\|_{L^p(I;E)}^p + |v|_{W^{s,p}(I;E)}^p\big)^{\frac1p}.
$$
Note that these two norms are not equivalent, and that the former is slightly weaker. Last, we recall two results for compact sets in $L^p(I;E)$. The following two are classical [Sim86]. The first can be found in [Sim86, Theorem 5], and the second in [Sim86, Lemma 3]. The notation $\tau_h$ denotes the shift operator. It is possible to obtain compact embedding into subspaces of $E$ via suitable interpolation [Ama00].

Theorem A.6 Let $E$, $E_0$ and $E_1$ be three Banach spaces such that $E_1 \hookrightarrow\hookrightarrow E \hookrightarrow E_0$. If $1 \le p < \infty$ and
(i) $W$ is bounded in $L^p(I;E_1)$;
(ii) $\|\tau_h f - f\|_{L^p(0,T-h;E_0)} \to 0$ as $h \to 0$, uniformly for $f \in W$;
then $W$ is relatively compact in $L^p(I;E)$.

Theorem A.7 Let $1 < p_1 \le \infty$. If $W$ is a bounded set in $L^{p_1}(I;E)$ and relatively compact in $L^1_{loc}(I;E)$, then it is relatively compact in $L^p(I;E)$ for all $1 \le p < p_1$.


A.3 Integral Transforms

Now we recall Laplace and Fourier transforms.

A.3.1 Laplace transform

The Laplace transform of a function $f: \mathbb{R}_+ \to \mathbb{R}$, denoted by $\widehat f$ or $\mathcal{L}[f]$, is defined by
$$
\widehat f(z) = \mathcal{L}[f](z) = \int_0^\infty e^{-zt}f(t)\,\mathrm{d}t.
$$
Suppose $f \in L^1_{loc}(\mathbb{R}_+)$ and there exists some $\lambda \in \mathbb{R}_+$ such that $|f(t)| \le ce^{\lambda t}$ for large $t$; then $\widehat f(z)$ exists and is analytic for $\Re(z) > \lambda$. The Laplace transform of an entire function of exponential type can be obtained by transforming term by term the Taylor expansion of the original function around the origin [Doe74]. In this case the resulting Laplace transform is analytic and vanishing at infinity.

Example A.2 We compute the Laplace transform of the Riemann-Liouville kernel $\omega_\alpha(t) = \Gamma(\alpha)^{-1}t^{\alpha-1}$ for $t > 0$. Then for $\alpha > 0$, the substitution $s = tz$ gives
$$
\mathcal{L}[\omega_\alpha](z) = \frac{1}{\Gamma(\alpha)}\int_0^\infty e^{-zt}t^{\alpha-1}\,\mathrm{d}t = \frac{z^{-\alpha}}{\Gamma(\alpha)}\int_0^\infty e^{-s}s^{\alpha-1}\,\mathrm{d}s = z^{-\alpha},
$$
where the last identity follows from the definition of $\Gamma(z)$.

The Laplace transform of the convolution on $\mathbb{R}_+$, i.e.,
$$
(f*g)(t) = \int_0^t f(t-s)g(s)\,\mathrm{d}s = \int_0^t f(s)g(t-s)\,\mathrm{d}s,
$$
of two functions $f$ and $g$ that vanish for $t < 0$ satisfies the convolution rule:
$$
\widehat{f*g}(z) = \widehat f(z)\widehat g(z), \qquad (A.8)
$$
provided that both $\widehat f(z)$ and $\widehat g(z)$ exist. This rule is useful for evaluating the Laplace transform of fractional integrals and derivatives.

Example A.3 The Riemann-Liouville fractional integral ${}_0I_t^\alpha f(t)$ is a convolution with $\omega_\alpha$, i.e., ${}_0I_t^\alpha f(t) = (\omega_\alpha * f)(t)$. Hence, by (A.8), the Laplace transform of ${}_0I_t^\alpha f$ is given by $\mathcal{L}[{}_0I_t^\alpha f](z) = z^{-\alpha}\widehat f(z)$.

The Laplace transform of the $n$th-order derivative $f^{(n)}$ is given by
$$
\mathcal{L}[f^{(n)}](z) = z^n\widehat f(z) - \sum_{k=0}^{n-1} z^{n-k-1}f^{(k)}(0^+),
$$
which follows from integration by parts, if all the involved integrals make sense. We also have an inversion formula for the Laplace transform
$$
f(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} e^{zt}\widehat f(z)\,\mathrm{d}z,
$$
where the integral is along the vertical line $\Re(z) = a$ in $\mathbb{C}$ (also known as the Bromwich contour), such that $a > \lambda$, i.e., it is greater than the real part of all singularities of $\widehat f$, and $\widehat f$ is bounded on the line. The direct evaluation of the inversion formula is often inconvenient. The contour can often be deformed to facilitate the analytic evaluation and numerical computation. For example, it may be deformed to a Hankel contour, i.e., a path in the complex plane $\mathbb{C}$ which extends from $(-\infty, a)$, circling around the origin counterclockwise and back to $(-\infty, a)$.

The Laplace transform is an important tool for mathematical analysis. Below we give two examples, i.e., complete monotonicity and asymptotics. Recall that a function $f: \mathbb{R}_+ \to \mathbb{R}_+$ is said to be completely monotone if $(-1)^k f^{(k)}(t) \ge 0$, for any $t \in \mathbb{R}_+$, $k = 0, 1, 2, \ldots$. Bernstein's theorem states that a function $f$ is completely monotone if and only if it is the Laplace transform of a nonnegative measure; see [Wid41, Chapter IV] for a proof.

Theorem A.8 A function $f: \mathbb{R}_+ \to \mathbb{R}_+$ is completely monotone if and only if it has the representation $f(t) = \int_0^\infty e^{-ts}\,\mathrm{d}\mu(s)$, where $\mu$ is a nonnegative measure on $\mathbb{R}_+$ such that the integral converges for all $t > 0$.

The asymptotic behavior of a function $f(t)$ as $t \to \infty$ can be determined, under suitable conditions, by looking at the behavior of its Laplace transform $\widehat f(z)$ as $z \to 0$, and vice versa. This is described by the Karamata-Feller Tauberian theorem; see [Fel71] for general statements and proofs.

Theorem A.9 Let $L: \mathbb{R}_+ \to \mathbb{R}_+$ be a function that is slowly varying at $\infty$, i.e., for every fixed $x > 0$, $\lim_{t\to\infty} L(xt)L(t)^{-1} = 1$. Let $\beta > 0$ and $f: \mathbb{R}_+ \to \mathbb{R}$ be a monotone function whose Laplace transform $\widehat f(z)$ exists for all $z \in \mathbb{C}_+$. Then
$$
\widehat f(z) \sim z^{-\beta}L(z^{-1}) \ \text{ as } z \to 0 \quad\text{if and only if}\quad f(t) \sim \frac{t^{\beta-1}}{\Gamma(\beta)}L(t) \ \text{ as } t \to \infty,
$$
where the approaches are on $\mathbb{R}_+$ and $f(t) \sim g(t)$ as $t \to t_*$ denotes $\lim_{t\to t_*}\frac{f(t)}{g(t)} = 1$.
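Examples A.2 and A.3 can be checked symbolically. The sketch below is not from the book; the concrete order $\alpha = 1/2$ and the test function $f = 1$ are illustrative choices. It verifies that $\mathcal{L}[\omega_\alpha](z) = z^{-\alpha}$ and that the Riemann-Liouville integral of $1$ transforms to $z^{-\alpha}\cdot z^{-1}$, as predicted by the convolution rule (A.8).

```python
# A minimal symbolic check (not from the book) of Example A.2 and the rule (A.8),
# for the concrete order alpha = 1/2.
import sympy as sp

t, z = sp.symbols("t z", positive=True)
alpha = sp.Rational(1, 2)
omega = t**(alpha - 1) / sp.gamma(alpha)              # Riemann-Liouville kernel

print(sp.laplace_transform(omega, t, z, noconds=True))        # 1/sqrt(z) = z**(-alpha)

# (omega_alpha * 1)(t) = t**alpha / Gamma(alpha + 1) is the RL integral of f = 1,
# whose transform should be z**(-alpha) * L[1](z) = z**(-alpha) / z.
rl_integral_of_one = t**alpha / sp.gamma(alpha + 1)
print(sp.laplace_transform(rl_integral_of_one, t, z, noconds=True))  # z**(-3/2)
```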

A.3.2 Fourier transform

The Fourier transform of a function $f \in L^1(\mathbb{R})$, denoted by $\tilde f$ or $\mathcal{F}[f]$, is defined by
$$
\tilde f(\xi) = \mathcal{F}[f](\xi) = \int_{-\infty}^\infty e^{-i\xi x}f(x)\,\mathrm{d}x.
$$
Note that this choice does not involve the factor $2\pi$ that often appears in the literature. It is taken in order to be consistent with several popular textbooks on fractional calculus [SKM93, KST06]. If $f \in L^1(\mathbb{R})$, then $\tilde f$ is continuous on $\mathbb{R}$, and $\tilde f(\xi) \to 0$ as $|\xi| \to \infty$. The Plancherel theorem states that the Fourier transform extends uniquely to a bounded linear operator $\mathcal{F}: L^2(\mathbb{R}) \to L^2(\mathbb{R})$ satisfying
$$
\frac{1}{2\pi}\int_{-\infty}^\infty \tilde f(\xi)\overline{\tilde g(\xi)}\,\mathrm{d}\xi = \int_{-\infty}^\infty f(x)\overline{g(x)}\,\mathrm{d}x, \qquad (A.9)
$$
where the overline denotes complex conjugate. The inversion formula for the Fourier transform is given by
$$
f(x) = \frac{1}{2\pi}\lim_{R\to\infty}\int_{-R}^R e^{i\xi x}\tilde f(\xi)\,\mathrm{d}\xi, \quad x \in \mathbb{R},
$$
and the equality holds at every continuity point of $f$.

Example A.4 The Fourier transform of $f(x) = e^{-\lambda x^2}$, $\lambda > 0$, is given by
$$
\tilde f(\xi) = \int_{-\infty}^\infty e^{-i\xi x}e^{-\lambda x^2}\,\mathrm{d}x = \int_{-\infty}^\infty \cos(\xi x)e^{-\lambda x^2}\,\mathrm{d}x - i\int_{-\infty}^\infty \sin(\xi x)e^{-\lambda x^2}\,\mathrm{d}x.
$$
The second integral vanishes since the integrand is an odd function. Using the identity for any $\lambda > 0$ [GR15, p. 391, 3.896]
$$
\int_0^\infty e^{-\lambda x^2}\cos(bx)\,\mathrm{d}x = \frac12\sqrt{\frac{\pi}{\lambda}}\,e^{-\frac{b^2}{4\lambda}},
$$
we deduce
$$
\tilde f(\xi) = \int_{-\infty}^\infty \cos(\xi x)e^{-\lambda x^2}\,\mathrm{d}x = \sqrt{\frac{\pi}{\lambda}}\,e^{-\frac{\xi^2}{4\lambda}}.
$$
Similarly, the inverse Fourier transform of $\tilde g(\xi) = e^{-\lambda\xi^2}$ is given by $\mathcal{F}^{-1}[\tilde g](x) = \frac{1}{\sqrt{4\lambda\pi}}e^{-\frac{x^2}{4\lambda}}$. Note that the Fourier and inverse Fourier transforms differ by a factor of $2\pi$.
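The closed form in Example A.4 is easy to confirm by numerical quadrature with the transform convention used here (no $2\pi$ factor in the forward transform). The sketch below is not from the book; the value of $\lambda$, the truncation of the integration range and the sampled frequencies are illustrative choices.

```python
# A minimal numerical check (not from the book) of Example A.4:
# f~(xi) = int exp(-i xi x) exp(-lam x^2) dx = sqrt(pi/lam) exp(-xi^2/(4 lam)).
import numpy as np

lam = 0.7
x = np.linspace(-30.0, 30.0, 200_001)
h = x[1] - x[0]
f = np.exp(-lam * x**2)

for xi in [0.0, 1.0, 2.5]:
    numeric = np.sum(np.exp(-1j * xi * x) * f) * h
    exact = np.sqrt(np.pi / lam) * np.exp(-xi**2 / (4.0 * lam))
    print(xi, numeric.real, exact)
```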

A.4 Fixed Point Theorems

Now we describe several fixed point theorems; see [Dei85, Zei86] for complete proofs and extensive applications. These theorems are central to showing various existence and uniqueness results for operator equations in Banach spaces, into which fdes can be reformulated. The most powerful tool is the Banach fixed point theorem. There are several versions, with slightly different conditions. The most basic one is concerned with contraction mappings. Let $(X,\rho)$ be a metric space. A mapping $T: X \to X$ is said to be a contraction if there exists a constant $\gamma \in (0,1)$ such that
$$
\rho(Tx_1, Tx_2) \le \gamma\rho(x_1, x_2), \quad \forall x_1, x_2 \in X.
$$

Theorem A.10 Let $(X,\rho)$ be a complete metric space and $T$ be a contraction. Then $T$ has a unique fixed point $x$, given by $x = \lim_{n\to\infty} x_n$ with $x_n = T^n x_0$, for any $x_0 \in X$, and further $\rho(x_n, x) \le (1-\gamma)^{-1}\gamma^n\rho(x_0, Tx_0)$.

The next result, due to Schauder, gives the existence of a fixed point, without uniqueness [Zei86, Theorem 2.A, p. 56].

Theorem A.11 Let $K$ be a nonempty, closed, bounded, convex subset of a Banach space $X$, and suppose $T: K \to K$ is a compact operator. Then $T$ has a fixed point.

The next result is an alternative version of the Schauder fixed point theorem [Zei86, Corollary 2.13, p. 56].

Theorem A.12 Let $K$ be a nonempty, compact and convex subset of a Banach space, and let $T: K \to K$ be a continuous mapping. Then $T$ has a fixed point.

The next result of Krasnoselskii handles completely continuous operators [Kra64, pp. 137, 147-148].

Theorem A.13 Let $X$ be a Banach space, $C \subset X$ a cone, and $B_1$, $B_2$ two bounded open balls of $X$ centered at the origin, with $\overline{B}_1 \subset B_2$. Suppose that $T: C\cap(\overline{B}_2\setminus B_1) \to C$ is a completely continuous operator such that either
(i) $\|Tx\| \le \|x\|$, $x \in C\cap\partial B_1$ and $\|Tx\| \ge \|x\|$, $x \in C\cap\partial B_2$, or
(ii) $\|Tx\| \ge \|x\|$, $x \in C\cap\partial B_1$ and $\|Tx\| \le \|x\|$, $x \in C\cap\partial B_2$
holds. Then $T$ has a fixed point in $C\cap(\overline{B}_2\setminus B_1)$.

The next result is useful for proving the existence in a cone [Dei85, Chapter 6].

Theorem A.14 Let $X$ be a Banach space, $C \subset X$ a cone. Let $B_R$ be a ball of radius $R > 0$ centered at the origin. Suppose $T: C\cap\overline{B}_R \to C$ is a completely continuous operator such that $Tx \neq \lambda x$ for all $x \in \partial(B_R\cap C)$ and $\lambda \ge 1$. Then $T$ has a fixed point in $C\cap\overline{B}_R$.

The following nonlinear alternative of Leray and Schauder is powerful [Zei86, pp. 556-557].

Theorem A.15 Let $K \subset X$ be a convex set of a linear normed space $X$, and $Z$ be an open subset in $K$ such that $0 \in Z$. Then each continuous compact mapping $T: \overline{Z} \to K$ has at least one of the following properties:
(i) $T$ has a fixed point in $\overline{Z}$.
(ii) There exist $u \in \partial Z$ and $\mu \in (0,1)$ such that $u = \mu Tu$.
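The a priori bound in Theorem A.10 is easy to observe numerically. The following sketch is not from the book; the contraction $T(x)=\cos x$ on $[0,1]$ and the starting point are illustrative choices. It iterates the map and compares the actual error with the bound $(1-\gamma)^{-1}\gamma^n\rho(x_0,Tx_0)$.

```python
# A minimal sketch (not from the book) of Theorem A.10: fixed-point iteration for
# T(x) = cos(x), which maps [0, 1] into itself and is a contraction there with
# gamma = sin(1) < 1; its unique fixed point is the Dottie number ~0.739085.
import math

def iterate(T, x0, n):
    x = x0
    for _ in range(n):
        x = T(x)
    return x

T, x0 = math.cos, 0.5
gamma = math.sin(1.0)                       # Lipschitz constant of cos on [0, 1]
x_star = iterate(T, x0, 200)                # essentially the exact fixed point

n = 20
error = abs(iterate(T, x0, n) - x_star)
a_priori = gamma**n / (1.0 - gamma) * abs(x0 - T(x0))
print(error, "<=", a_priori)                # the a priori bound of Theorem A.10
```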

References

[AB65] Mohammed Ali Al-Bassam, Some existence theorems on differential equations of generalized order, J. Reine Angew. Math. 218 (1965), 70–78. MR 0179405 [Abe26] Niels Henrik Abel, Auflösung einer mechanischen Aufgabe, J. Reine Angew. Math. 1 (1826), 153–157. MR 1577605 [Abe81] Niels Henrik Abel, Solution de quelques problémes à l'aide d'integrales définies, Gesammelte mathematische Werke 1, Leipzig: Teubner, 1881, (First publ. in Mag. Naturvidenbbeme, Aurgang, 1, no 2, Christiania, 1823), pp. 11–27. [AC72] Sam M Allen and John W Cahn, Ground state structures in ordered binary alloys with second neighbor interactions, Acta Metall. 20 (1972), no. 3, 423–433. [ACV16] Mark Allen, Luis Caffarelli, and Alexis Vasseur, A parabolic problem with a fractional time derivative, Arch. Ration. Mech. Anal. 221 (2016), no. 2, 603–630. MR 3488533 [AF03] Robert A. Adams and John J. F. Fournier, Sobolev Spaces, second ed., Elsevier/Academic Press, Amsterdam, 2003. MR 2424078 (2009e:46025) [Aga53] Ratan Prakash Agarwal, A propos d'une note de M. Pierre Humbert, CR Acad. Sci. Paris 236 (1953), no. 21, 2031–2032. [AK19] Mariam Al-Maskari and Samir Karaa, Numerical approximation of semilinear subdiffusion equations with nonsmooth initial data, SIAM J. Numer. Anal. 57 (2019), no. 3, 1524–1544. MR 3975154 [Aka19] Goro Akagi, Fractional flows driven by subdifferentials in Hilbert spaces, Israel J. Math. 234 (2019), no. 2, 809–862. MR 4040846 [AKT13] Temirkhan S. Aleroev, Mokhtar Kirane, and ˘Ii-Fa Tang, Boundary value problems for differential equations of fractional order, Ukr. Mat. Visn. 10 (2013), no. 2, 158–175, 293. MR 3137091 [Al-12] Mohammed Al-Refai, On the fractional derivatives at extreme points, Electron. J. Qual. Theory Differ. Equ. (2012), No. 55, 5. MR 2959045


[AL14] Mohammed Al-Refai and Yuri Luchko, Maximum principle for the fractional diffusion equations with the Riemann-Liouville fractional derivative and its applications, Fract. Calc. Appl. Anal. 17 (2014), no. 2, 483–498. MR 3181067 [Ali10] Anatoly A. Alikhanov, A priori estimates for solutions of boundary value problems for equations of fractional order, Differ. Uravn. 46 (2010), no. 5, 658–664. MR 2797545 (2012e:35258) [Ali15] Anatoly A. Alikhanov, A new difference scheme for the time fractional diffusion equation, J. Comput. Phys. 280 (2015), 424–438. MR 3273144 [Ama00] Herbert Amann, Compact embeddings of vector-valued Sobolev and Besov spaces, Glas. Mat. Ser. III 35(55) (2000), no. 1, 161–177, Dedicated to the memory of Branko Najman. MR 1783238 [AS65] Milton Abramowitz and Irene A Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, 1965. [Bai10] Zhanbing Bai, On positive solutions of a nonlocal fractional boundary value problem, Nonlinear Anal. 72 (2010), no. 2, 916–924. MR 2579357 [Baj01] Emilia Grigorova Bajlekova, Fractional Evolution Equations in Banach Spaces, Ph.D. thesis, Eindhoven University of Technology, Eindhoven, 2001. MR 1868564 [Bar54] John H. Barrett, Differential equations of non-integer order, Canadian J. Math. 6 (1954), 529–541. MR 0064936 [BBP15] Leigh C. Becker, Theodore A. Burton, and Ioannis K. Purnaras, Complementary equations: a fractional differential equation and a Volterra integral equation, Electron. J. Qual. Theory Differ. Equ. (2015), No. 12, 24. MR 3325915 [BCDS06] Brian Berkowitz, Andrea Cortis, Marco Dentz, and Harvey Scher, Modeling non-Fickian transport in geological formations as a continuous time random walk, Rev. Geophys. 44 (2006), no. 2, RG2003, 49 pp. [Bel43] Richard Bellman, The stability of solutions of linear differential equations, Duke Math. J. 10 (1943), 643–647. MR 9408 [Bie56] Adam Bielecki, Une remarque sur la méthode de Banach-CacciopoliTikhonov dans la théorie des équations différentielles ordinaires, Bull. Acad. Polon. Sci. Cl. III. 4 (1956), 261–264. MR 0082073 [BKS18] Boris Baeumer, Mihály Kovács, and Harish Sankaranarayanan, Fractional partial differential equations with boundary conditions, J. Differential Equations 264 (2018), no. 2, 1377–1410. MR 3720847 [BL76] Jöran Bergh and Jörgen Löfström, Interpolation Spaces. An Introduction, Springer-Verlag, Berlin-New York, 1976. MR 0482275 [BL05] Zhanbing Bai and Haishen Lü, Positive solutions for boundary value problem of nonlinear fractional differential equation, J. Math. Anal. Appl. 311 (2005), no. 2, 495–505. MR 2168413 (2006d:34052)


[BLNT17] Maïtine Bergounioux, Antonio Leaci, Giacomo Nardi, and Franco Tomarelli, Fractional Sobolev spaces and functions of bounded variation of one variable, Fract. Calc. Appl. Anal. 20 (2017), no. 4, 936– 962. MR 3684877 [BPV99] Hermann Brunner, Arvet Pedas, and Gennadi Vainikko, The piecewise polynomial collocation method for nonlinear weakly singular Volterra equations, Math. Comp. 68 (1999), no. 227, 1079–1095. MR 1642797 [BS05] Mário N. Berberan-Santos, Relation between the inverse Laplace transforms of I(t β ) and I(t): application to the Mittag-Leffler and asymptotic inverse power law relaxation functions, J. Math. Chem. 38 (2005), no. 2, 265–270. MR 2166914 [BWM00] David A Benson, Stephen W. Wheatcraft, and Mark M Meerschaert, Application of a fractional advection-dispersion equation, Water Resources Res. 36 (2000), no. 6, 1403–1412. [Can68] John Rozier Cannon, Determination of an unknown heat source from overspecified boundary data, SIAM J. Numer. Anal. 5 (1968), no. 2, 275–286. MR 0231552 (37 #7105) [Can84] John Rozier Cannon, The One-Dimensional Heat Equation, AddisonWesley, Reading, MA, 1984. MR 747979 (86b:35073) [Cap67] Michele Caputo, Linear models of dissipation whose Q is almost frequency independent – II, Geophys. J. Int. 13 (1967), no. 5, 529– 539. [CD98] John Rozier Cannon and Paul DuChateau, Structural identification of an unknown source term in a heat equation, Inverse Problems 14 (1998), no. 3, 535–551. MR 1629991 (99g:35142) [CDMI19] Fabio Camilli, Raul De Maio, and Elisa Iacomini, A Hopf-Lax formula for Hamilton-Jacobi equations with Caputo time-fractional derivative, J. Math. Anal. Appl. 477 (2019), no. 2, 1019–1032. MR 3955007 [CLP06] Eduardo Cuesta, Christian Lubich, and Cesar Palencia, Convolution quadrature time discretization of fractional diffusion-wave equations, Math. Comp. 75 (2006), no. 254, 673–696. MR 2196986 (2006j:65404) [CM71a] Michele Caputo and Francesco Mainardi, Linear models of dissipation in anelastic solids, La Rivista del Nuovo Cimento (1971-1977) 1 (1971), no. 2, 161–198. [CM71b] Michele Caputo and Francesco Mainardi, A new dissipation model based on memory mechanism, Pure Appl. Geophys. 91 (1971), no. 1, 134–147. [CMS76] John M. Chambers, Colin L. Mallows, and B. W. Stuck, A method for simulating stable random variables, J. Amer. Stat. Assoc. 71 (1976), no. 354, 340–344. MR 0415982 (54 #4059) [CMSS18] Yujun Cui, Wenjie Ma, Qiao Sun, and Xinwei Su, New uniqueness results for boundary value problem of fractional differential equation, Nonlinear Anal. Model. Control 23 (2018), no. 1, 31–39. MR 3747591


[CN81] Philippe Clément and John A. Nohel, Asymptotic behavior of solutions of nonlinear Volterra equations with completely positive kernels, SIAM J. Math. Anal. 12 (1981), no. 4, 514–535. MR 617711 [CNYY09] Jin Cheng, Junichi Nakagawa, Masahiro Yamamoto, and Tomohiro Yamazaki, Uniqueness in an inverse problem for a one-dimensional fractional diffusion equation, Inverse Problems 25 (2009), no. 11, 115002, 16. MR 2545997 (2010j:35596) [CY97] Mourad Choulli and Masahiro Yamamoto, An inverse parabolic problem with non-zero initial condition, Inverse Problems 13 (1997), no. 1, 19–27. MR 1435868 (97j:35160) [dCNP15] Paulo Mendes de Carvalho-Neto and Gabriela Planas, Mild solutions to the time fractional Navier-Stokes equations in R N , J. Differential Equations 259 (2015), no. 7, 2948–2980. MR 3360662 [Dei85] Klaus Deimling, Nonlinear Functional Analysis, Springer-Verlag, Berlin, 1985. MR 787404 (86j:47001) [DF02] Kai Diethelm and Neville J. Ford, Analysis of fractional differential equations, J. Math. Anal. Appl. 265 (2002), no. 2, 229–248. MR 1876137 [DF12] Kai Diethelm and Neville J. Ford, Volterra integral equations and fractional calculus: do neighboring solutions intersect?, J. Integral Equations Appl. 24 (2012), no. 1, 25–37. MR 2911089 [Die10] Kai Diethelm, The Analysis of Fractional Differential Equations, Springer-Verlag, Berlin, 2010. MR 2680847 (2011j:34005) [Die12] Kai Diethelm, The mean value theorems and a Nagumo-type uniqueness theorem for Caputo’s fractional calculus, Fract. Calc. Appl. Anal. 15 (2012), no. 2, 304–313. MR 2897781 [Die16] Kai Diethelm, Monotonicity of functions and sign changes of their Caputo derivatives, Fract. Calc. Appl. Anal. 19 (2016), no. 2, 561– 566. MR 3513010 [Die17] Kai Diethelm, Erratum: The mean value theorems and a Nagumo-type uniqueness theorem for Caputo’s fractional calculus [ MR2897781], Fract. Calc. Appl. Anal. 20 (2017), no. 6, 1567–1570. MR 3764308 [Djr93] Mkhitar M. Djrbashian, Harmonic Analysis and Boundary Value Problems in the Complex Domain, Birkhäuser, Basel, 1993. MR 1249271 (95f:30039) [DK19] Hongjie Dong and Doyoon Kim, L p -estimates for time fractional parabolic equations with coefficients measurable in time, Adv. Math. 345 (2019), 289–345. MR 3899965 [DK20] Hongjie Dong and Doyoon Kim, L p -estimates for time fractional parabolic equations in divergence form with measurable coefficients, J. Funct. Anal. 278 (2020), no. 3, 108338. MR 4030286 [DK21] Hongjie Dong and Doyoon Kim, An approach for weighted mixednorm estimates for parabolic equations with local and non-local time derivatives, Adv. Math. 377 (2021), 107494, 44. MR 4186022


[DM75] Mkhitar M. Džrbašjan and V. M. Martirosjan, Theorems of PaleyWiener and Müntz-Szász type, Dokl. Akad. Nauk SSSR 225 (1975), no. 5, 1001–1004. MR 0396923 [DM77] Mkhitar M. Džrbašjan and V. M. Martirosjan, Theorems of PaleyWiener and Müntz-Szász type, Izv. Akad. Nauk SSSR Ser. Mat. 41 (1977), no. 4, 868–894, 960. MR 0473707 [DM83] Mkhitar M. Dzhrbashyan and V. M. Martirosyan, Integral representations and best approximations by generalized polynomials with respect to systems of Mittag-Leffler type, Izv. Akad. Nauk SSSR Ser. Mat. 47 (1983), no. 6, 1182–1207. MR 727751 [DN61] Mkhitar M. Džrbašjan and Anry B. Nersesjan, Expansions in certain biorthogonal systems and boundary-value problems for differential equations of fractional order, Trudy Moskov. Mat. Obšč. 10 (1961), 89–179. MR 0146589 [DN68] Mkhitar M. Djrbashian and Anry B Nersesyan, Fractional derivatives and the Cauchy problem for differential equations of fractional order, Izv. Akad. Nauk Armajan. SSR, Ser. Mat. 3 (1968), no. 1, 3–29. [DNPV12] Eleonora Di Nezza, Giampiero Palatucci, and Enrico Valdinoci, Hitchhiker’s guide to the fractional Sobolev spaces, Bull. Sci. Math. 136 (2012), no. 5, 521–573. MR 2944369 [Doe74] Gustav Doetsch, Introduction to the Theory and Application of the Laplace Transformation, Springer-Verlag, New York-Heidelberg, 1974, Translated from the second German edition by Walter Nader. MR 0344810 [Don20] Hongjie Dong, Recent progress in the L p theory for elliptic and parabolic equations with discontinuous coefficients, Anal. Theory Appl. 36 (2020), no. 2, 161–199. MR 4156495 [DR96] Domenico Delbosco and Luigi Rodino, Existence and uniqueness for a nonlinear fractional differential equation, J. Math. Anal. Appl. 204 (1996), no. 2, 609–625. MR 1421467 [DW60] Joaquín Basilio Diaz and Wolfgang L. Walter, On uniqueness theorems for ordinary differential equations and for partial differential equations of hyperbolic type, Trans. Amer. Math. Soc. 96 (1960), 90–100. MR 120451 [DYZ20] Qiang Du, Jiang Yang, and Zhi Zhou, Time-fractional Allen-Cahn equations: analysis and numerical methods, J. Sci. Comput. 85 (2020), no. 2, Paper No. 42, 30. MR 4170335 [Dzh66] Mkhitar M Dzharbashyan, Integral Transformations and Representation of Functions in a Complex Domain [in Russian], Nauka, Moscow, 1966. MR 0209472 [Džr54a] Mkhitar M. Džrbašyan, On Abel summability of generalized integral transforms, Akad. Nauk Armyan. SSR. Izv. Fiz.-Mat. Estest. Tehn. Nauki 7 (1954), no. 6, 1–26 (1955). MR 0069927


[Džr54b] Mkhitar M. Džrbašyan, On the asymptotic behavior of a function of Mittag-Leffler type, Akad. Nauk Armyan. SSR. Dokl. 19 (1954), 65–72. MR 0069331 [Džr54c] Mkhitar M. Džrbašyan, On the integral representation of functions continuous on several rays (generalization of the Fourier integral), Izv. Akad. Nauk SSSR. Ser. Mat. 18 (1954), 427–448. MR 0065684 [Džr70] Mkhitar M. M. Džrbašjan, A boundary value problem for a SturmLiouville type differential operator of fractional order, Izv. Akad. Nauk Armjan. SSR Ser. Mat. 5 (1970), no. 2, 71–96. MR 0414982 (54 #3074) [Džr74] Mkhitar M. Džrbašjan, The completeness of a system of Mittage-Leffler type, Dokl. Akad. Nauk SSSR 219 (1974), 1302–1305. MR 0387940 [EG04] Alexandre Ern and Jean-Luc Guermond, Theory and Practice of Finite Elements, Springer-Verlag, New York, 2004. MR 2050138 [EHR18] Vincent John Ervin, Norbert Heuer, and John Paul Roop, Regularity of the solution to 1-D fractional order diffusion equations, Math. Comp. 87 (2018), no. 313, 2273–2294. MR 3802435 [Ein05] Albert Einstein, Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen, Ann. Phys. 322 (1905), no. 8, 549–560. [EK04] Samuil D. Eidelman and Anatoly N. Kochubei, Cauchy problem for fractional diffusion equations, J. Differential Equations 199 (2004), no. 2, 211–255. MR 2047909 (2005i:26014) [EMOT55] Arthur Erdélyi, Wilhelm Magnus, Fritz Oberhettinger, and Francesco G. Tricomi, Higher Transcendental Functions. Vol. III, McGraw-Hill Book Company, Inc., New York-Toronto-London, 1955, Based, in part, on notes left by Harry Bateman. MR 0066496 [ER06] Vincent J. Ervin and John Paul Roop, Variational formulation for the stationary fractional advection dispersion equation, Numer. Methods Partial Differential Equations 22 (2006), no. 3, 558–576. MR 2212226 (2006m:65265) [Erv21] Vincent J. Ervin, Regularity of the solution to fractional diffusion, advection, reaction equations in weighted Sobolev spaces, J. Differential Equations 278 (2021), 294–325. MR 4200757 [Eul29] Leonhard Euler, Letter to Goldbach, 13 october 1729, 1729, Euler Archive [E00715], eulerarchive.maa.org. [Fel71] William Feller, An Introduction to Probability Theory and its Applications. Vol. II, Second edition, John Wiley & Sons, Inc., New YorkLondon-Sydney, 1971. MR 0270403 [FLLX18a] Yuanyuan Feng, Lei Li, Jian-Guo Liu, and Xiaoqian Xu, Continuous and discrete one dimensional autonomous fractional ODEs, Discrete Contin. Dyn. Syst. Ser. B 23 (2018), no. 8, 3109–3135. MR 3848192 [FLLX18b] Yuanyuan Feng, Lei Li, Jian-Guo Liu, and Xiaoqian Xu, A note on one-dimensional time fractional ODEs, Appl. Math. Lett. 83 (2018), 87–94. MR 3795675


[Fri58] Avner Friedman, Remarks on the maximum principle for parabolic equations and its applications, Pacific J. Math. 8 (1958), 201–211. MR 0102655 (21 #1444) [Gal80] R. S. Galojan, Completeness of a system of eigen- and associated functions, Izv. Akad. Nauk Armyan. SSR Ser. Mat. 15 (1980), no. 4, 310–322, 334. MR 605051 [Gar15] Roberto Garrappa, Numerical evaluation of two and three parameter Mittag-Leffler functions, SIAM J. Numer. Anal. 53 (2015), no. 3, 1350–1369. MR 3350038 [Gar19] Nicola Garofalo, Fractional thoughts, New Developments in the Analysis of Nonlocal Operators, Contemp. Math., vol. 723, Amer. Math. Soc., Providence, RI, 2019, pp. 1–135. MR 3916700 [Gau60] Walter Gautschi, Some elementary inequalities relating to the gamma and incomplete gamma function, J. Math. and Phys. 38 (1959/60), 77–81. MR 103289 [GK70] Israel C. Gohberg and Mark Grigor’evich Kre˘ın, Theory and Applications of Volterra Operators in Hilbert Space, AMS, Providence, R.I., 1970. MR 0264447 [GKMR14] Rudolf Gorenflo, Anatoly A. Kilbas, Francesco Mainardi, and Sergei V. Rogosin, Mittag-Leffler Functions, Related Topics and Applications, Springer, Heidelberg, 2014. [GLL02] Rudolf Gorenflo, Joulia Loutchko, and Yuri Luchko, Computation of the Mittag-Leffler function Eα,β (z) and its derivative, Fract. Calc. Appl. Anal. 5 (2002), no. 4, 491–518. MR 1967847 (2004d:33020a) [GLM99] Rudolf Gorenflo, Yuri Luchko, and Francesco Mainardi, Analytical properties and applications of the Wright function, Fract. Calc. Appl. Anal. 2 (1999), no. 4, 383–414. MR 1752379 (2001c:33011) [GLM00] Rudolf Gorenflo, Yuri Luchko, and Francesco Mainardi, Wright functions as scale-invariant solutions of the diffusion-wave equation, J. Comput. Appl. Math. 118 (2000), no. 1-2, 175–191. MR 1765948 (2001e:45007) [GLY15] Rudolf Gorenflo, Yuri Luchko, and Masahiro Yamamoto, Timefractional diffusion equation in the fractional Sobolev spaces, Fract. Calc. Appl. Anal. 18 (2015), no. 3, 799–820. MR 3351501 [GM98] Rudolf Gorenflo and Francesco Mainardi, Fractional calculus and stable probability distributions, Arch. Mech. (Arch. Mech. Stos.) 50 (1998), no. 3, 377–388, Fourth Meeting on Current Ideas in Mechanics and Related Fields (Kraków, 1997). MR 1648257 [GN17] Yoshikazu Giga and Tokinaga Namba, Well-posedness of HamiltonJacobi equations with Caputo’s time fractional derivative, Comm. Partial Differential Equations 42 (2017), no. 7, 1088–1120. MR 3691391 [GR15] Izrail Solomonovich Gradshteyn and Iosif Moiseevich Ryzhik, Table of Integrals, Series, and Products, eighth ed., Elsevier/Academic Press, Amsterdam, 2015, Translated from the Russian, Translation


edited and with a preface by Daniel Zwillinger and Victor Moll, Revised from the seventh edition [MR2360010]. MR 3307944
[Gra82] Ivan G. Graham, Singularity expansions for the solutions of second kind Fredholm integral equations with weakly singular convolution kernels, J. Integral Equations 4 (1982), no. 1, 1–30. MR 640534
[Gra04] Loukas Grafakos, Classical and Modern Fourier Analysis, Pearson Education, Inc., Upper Saddle River, NJ, 2004. MR 2449250
[Gri85] Pierre Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, Boston, MA, 1985.
[Gro19] Thomas Hakon Gronwall, Note on the derivatives with respect to a parameter of the solutions of a system of differential equations, Ann. of Math. (2) 20 (1919), no. 4, 292–296. MR 1502565
[Grü67] Anton Karl Grünwald, Über "begrenzte" Derivationen und deren Anwendung, Z. angew. Math. und Phys. 12 (1867), 441–480.
[GSZ14] Guang-Hua Gao, Zhi-Zhong Sun, and Hong-Wei Zhang, A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications, J. Comput. Phys. 259 (2014), 33–50. MR 3148558
[GW20] Ciprian G. Gal and Mahamadi Warma, Fractional-in-Time Semilinear Parabolic Equations and Applications, Springer, Switzerland, 2020. MR 4167508
[HA53] Pierre Humbert and Ratan Prakash Agarwal, Sur la fonction de Mittag-Leffler et quelques-unes de ses généralisations, Bull. Sci. Math. (2) 77 (1953), 180–185. MR 0060643
[Hač75] I. O. Hačatrjan, The completeness of families of functions of Mittag-Leffler type under weighted uniform approximation in the complex domain, Mat. Zametki 18 (1975), no. 5, 675–685. MR 466565
[Han64] Hermann Hankel, Die Euler'schen Integrale bei unbeschränkter Variabilität des Argumentes, Zeitschr. Math. Phys. 9 (1864), 1–21.
[Hen81] Daniel Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, vol. 840, Springer-Verlag, Berlin-New York, 1981. MR 610244
[HH98] Yuko Hatano and Naomichi Hatano, Dispersive transport of ions in column experiments: an explanation of long-tailed profiles, Water Resour. Res. 34 (1998), no. 5, 1027–1033.
[HHK16] Ma. Elena Hernández-Hernández and Vassili N. Kolokoltsov, On the solution of two-sided fractional ordinary differential equations of Caputo type, Fract. Calc. Appl. Anal. 19 (2016), no. 6, 1393–1413. MR 3589357
[HKP20] Beom-Seok Han, Kyeong-Hun Kim, and Daehan Park, Weighted Lq(Lp)-estimate with Muckenhoupt weights for the diffusion-wave equations with time-fractional derivatives, J. Differential Equations 269 (2020), no. 4, 3515–3550. MR 4097256


[HL28] Godfrey Harold Hardy and John Edensor Littlewood, Some properties of fractional integrals. I, Math. Z. 27 (1928), no. 1, 565–606. MR 1544927 [HL32] Godfrey Harold Hardy and John Edensor Littlewood, Some properties of fractional integrals. II, Math. Z. 34 (1932), no. 1, 403–439. MR 1545260 [HMS11] Hans Joachim Haubold, Arakaparampil M. Mathai, and Ram Kishore Saxena, Mittag-Leffler functions and their applications, J. Appl. Math. (2011), Art. ID 298628, 51. MR 2800586 [HNWY13] Yuko Hatano, Junichi Nakagawa, Shengzhang Wang, and Masahiro Yamamoto, Determination of order in fractional diffusion equation, J. Math. Industry 5 (2013), no. A, 51–57. [HTR+99] Nácere Hayek, Juan J. Trujillo, Margarita Rivero, Blanca Bonilla, and Juan Carlos Moreno Piquero, An extension of Picard-Lindelöff theorem to fractional differential equations, Appl. Anal. 70 (1999), no. 3-4, 347–361. MR 1688864 [Hum53] Pierre Humbert, Quelques résultats relatifs à la fonction de MittagLeffler, C. R. Acad. Sci. Paris 236 (1953), 1467–1468. MR 0054107 (14,872i) [Isa06] Victor Isakov, Inverse Problems for Partial Differential Equations, second ed., Springer, New York, 2006. MR 2193218 (2006h:35279) [IY98] Oleg Yu. Imanuvilov and Masahiro Yamamoto, Lipschitz stability in inverse parabolic problems by the Carleman estimate, Inverse Problems 14 (1998), no. 5, 1229–1245. MR 1654631 (99h:35227) [JK21] Bangti Jin and Yavar Kian, Recovery of the order of derivation for fractional diffusion equations in an unknown medium, Preprint, arXiv:2101.09165, 2021. [JLLY17] Daijun Jiang, Zhiyuan Li, Yikan Liu, and Masahiro Yamamoto, Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations, Inverse Problems 33 (2017), no. 5, 055013, 22. MR 3634443 [JLLZ15] Bangti Jin, Raytcho Lazarov, Yikan Liu, and Zhi Zhou, The Galerkin finite element method for a multi-term time-fractional diffusion equation, J. Comput. Phys. 281 (2015), 825–843. MR 3281997 [JLPR15] Bangti Jin, Raytcho Lazarov, Joseph Pasciak, and William Rundell, Variational formulation of problems involving fractional order differential operators, Math. Comp. 84 (2015), no. 296, 2665–2700. MR 3378843 [JLPZ14] Bangti Jin, Raytcho Lazarov, Joseph Pasciak, and Zhi Zhou, Error analysis of a finite element method for the space-fractional parabolic equation, SIAM J. Numer. Anal. 52 (2014), no. 5, 2272–2294. MR 3259788 [JLZ16a] Bangti Jin, Raytcho Lazarov, and Zhi Zhou, An analysis of the L1 scheme for the subdiffusion equation with nonsmooth data, IMA J. Numer. Anal. 36 (2016), no. 1, 197–221. MR 3463438


[JLZ16b] Bangti Jin, Raytcho Lazarov, and Zhi Zhou, A Petrov-Galerkin finite element method for fractional convection diffusion equation, SIAM J. Numer. Anal. 54 (2016), no. 1, 481–503. MR 3463699 [JLZ16c] Bangti Jin, Raytcho Lazarov, and Zhi Zhou, Two fully discrete schemes for fractional diffusion and diffusion-wave equations with nonsmooth data, SIAM J. Sci. Comput. 38 (2016), no. 1, A146–A170. MR 3449907 [JLZ18] Bangti Jin, Buyang Li, and Zhi Zhou, Numerical analysis of nonlinear subdiffusion equations, SIAM J. Numer. Anal. 56 (2018), no. 1, 1–23. MR 3742688 [JLZ19a] Bangti Jin, Raytcho Lazarov, and Zhi Zhou, Numerical methods for time-fractional evolution equations with nonsmooth data: a concise overview, Comput. Methods Appl. Mech. Engrg. 346 (2019), 332– 358. MR 3894161 [JLZ19b] Bangti Jin, Buyang Li, and Zhi Zhou, Subdiffusion with a timedependent coefficient: analysis and numerical solution, Math. Comp. 88 (2019), no. 319, 2157–2186. MR 3957890 [JLZ20a] Bangti Jin, Buyang Li, and Zhi Zhou, Pointwise-in-time error estimates for an optimal control problem with subdiffusion constraint, IMA J. Numer. Anal. 40 (2020), no. 1, 377–404. MR 4050544 [JLZ20b] Bangti Jin, Buyang Li, and Zhi Zhou, Subdiffusion with timedependent coefficients: improved regularity and second-order time stepping, Numer. Math. 145 (2020), no. 4, 883–913. MR 4125980 [JR12] Bangti Jin and William Rundell, An inverse Sturm-Liouville problem with a fractional derivative, J. Comput. Phys. 231 (2012), no. 14, 4954–4966. MR 2927980 [JR15] Bangti Jin and William Rundell, A tutorial on inverse problems for anomalous diffusion processes, Inverse Problems 31 (2015), no. 3, 035003, 40. MR 3311557 [JZ21] Bangti Jin and Zhi Zhou, An inverse potential problem for subdiffusion: stability and reconstruction, Inverse Problems 37 (2021), no. 1, 015006, 26. MR 4191622 [KA13] Mał gorzata Klimek and Om Prakash Agrawal, Fractional SturmLiouville problem, Comput. Math. Appl. 66 (2013), no. 5, 795–812. MR 3089387 [KBT00] Anatoly A. Kilbas, Blanca Bonilla, and Kh. Trukhillo, Nonlinear differential equations of fractional order in the space of integrable functions, Dokl. Akad. Nauk 374 (2000), no. 4, 445–449. MR 1798482 [Kia20] Yavar Kian, Simultaneous determination of coefficients and internal source of a diffusion equation from a single measurement, Preprint, arXiv:2007.08947, 2020. [KKL17] Ildoo Kim, Kyeong-Hun Kim, and Sungbin Lim, An Lq (L p )-theory for the time fractional evolution equations with variable coefficients, Adv. Math. 306 (2017), 123–176. MR 3581300


[KM04] Anatoly A. Kilbas and Sergei A. Marzan, Cauchy problem for differential equation with Caputo derivative, Fract. Calc. Appl. Anal. 7 (2004), no. 3, 297–321. MR 2252568 [KM05] Anatoly A. Kilbas and Sergei A. Marzan, Nonlinear differential equations with the Caputo fractional derivative in the space of continuously differentiable functions, Differ. Uravn. 41 (2005), no. 1, 82–86, 142. MR 2213269 [Koc90] Aantoly N. Kochube˘ı, Diffusion of fractional order, Differentsial nye Uravneniya 26 (1990), no. 4, 660–670, 733–734. MR 1061448 (91j:35133) [KOM14] Mał gorzata Klimek, Tatiana Odzijewicz, and Agnieszka B. Malinowska, Variational methods for the fractional Sturm-Liouville problem, J. Math. Anal. Appl. 416 (2014), no. 1, 402–426. MR 3182768 [KOSY18] Yavar Kian, Lauri Oksanen, Eric Soccorsi, and Masahiro Yamamoto, Global uniqueness in an inverse problem for time fractional diffusion equations, J. Differential Equations 264 (2018), no. 2, 1146–1170. MR 3720840 [Kou08] Samuel C. Kou, Stochastic modeling in nanoscale biophysics: subdiffusion within proteins, Ann. Appl. Stat. 2 (2008), no. 2, 501–535. MR 2524344 [KR19] Barbara Kaltenbacher and William Rundell, On an inverse potential problem for a fractional reaction-diffusion equation, Inverse Problems 35 (2019), no. 6, 065004, 31. MR 3975371 [Kra64] Mark Aleksandrovich Krasnoselski˘ı, Positive Solutions of Operator Equations, P. Noordhoff Ltd. Groningen, 1964. MR 0181881 (31 #6107) [Kra16] Mykola V. Krasnoschok, Solvability in Hölder space of an initial boundary value problem for the time-fractional diffusion equation, Zh. Mat. Fiz. Anal. Geom. 12 (2016), no. 1, 48–77. MR 3477949 [KRY20] Adam Kubica, Katarzyna Ryszewska, and Masahiro Yamamoto, TimeFractional Differential Equations, SpringerBriefs in Mathematics, Springer, Singapore, 2020, A theoretical introduction. MR 4200127 [KST06] Anatoly A. Kilbas, Hari M. Srivastava, and Juan J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier Science B.V., Amsterdam, 2006. MR 2218073 (2007a:34002) [KV13] Mykola Krasnoschok and Nataliya Vasylyeva, On a solvability of a nonlinear fractional reaction-diffusion system in the Hölder spaces, Nonlinear Stud. 20 (2013), no. 4, 591–621. MR 3154625 [KW04] Peer C. Kunstmann and Lutz Weis, Maximal L p -regularity for parabolic equations, Fourier multiplier theorems and H ∞ -functional calculus, Functional Analytic Methods for Evolution Equations, Lecture Notes in Math., vol. 1855, Springer, Berlin, 2004, pp. 65–311. MR 2108959


[KY17] Yavar Kian and Masahiro Yamamoto, On existence and uniqueness of solutions for semilinear fractional wave equations, Fract. Calc. Appl. Anal. 20 (2017), no. 1, 117–138. MR 3613323 [KY18] Adam Kubica and Masahiro Yamamoto, Initial-boundary value problems for fractional diffusion equations with time-dependent coefficients, Fract. Calc. Appl. Anal. 21 (2018), no. 2, 276–311. MR 3814402 [KY21] Yavar Kian and Masahiro Yamamoto, Well-posedness for weak and strong solutions of non-homogeneous initial boundary value problems for fractional diffusion equations, Fract. Calc. Appl. Anal. 24 (2021), no. 1, 168–201. MR 4225517 [Let68] Aleksey Vasilievich Letnikov, Theory of differentiation with an arbitrary index (in Russian), Mat. Sb. 3 (1868), 1–66. [LHY20] Zhiyuan Li, Xinchi Huang, and Masahiro Yamamoto, A stability result for the determination of order in time-fractional diffusion equations, J. Inverse Ill-Posed Probl. 28 (2020), no. 3, 379–388. MR 4104326 [Lio32] Joseph Liouville, Memoire sue quelques questions de géometrie et de mécanique, et sur un nouveau gentre pour resoudre ces questions, J. Ecole Polytech. 13 (1832), 1–69. [Liu08] Jun S. Liu, Monte Carlo Strategies in Scientific Computing, Springer Series in Statistics, Springer, New York, 2008. MR 2401592 (2010b:65013) [LL09] V. Lakshmikantham and S. Leela, Nagumo-type uniqueness result for fractional differential equations, Nonlinear Anal. 71 (2009), no. 7-8, 2886–2889. MR 2532815 [LL13] Kunquan Lan and Wei Lin, Positive solutions of systems of Caputo fractional differential equations, Commun. Appl. Anal. 17 (2013), no. 1, 61–85. MR 3075769 [LL18a] Lei Li and Jian-Guo Liu, A generalized definition of Caputo derivatives and its application to fractional ODEs, SIAM J. Math. Anal. 50 (2018), no. 3, 2867–2900. MR 3809535 [LL18b] Lei Li and Jian-Guo Liu, Some compactness criteria for weak solutions of time fractional PDEs, SIAM J. Math. Anal. 50 (2018), no. 4, 3963– 3995. MR 3828856 [LLY19a] Zhiyuan Li, Yikan Liu, and Masahiro Yamamoto, Inverse problems of determining parameters of the fractional partial differential equations, Handbook of Fractional Calculus with Applications. Vol. 2, De Gruyter, Berlin, 2019, pp. 431–442. MR 3965404 [LLY19b] Yikan Liu, Zhiyuan Li, and Masahiro Yamamoto, Inverse problems of determining sources of the fractional partial differential equations, Handbook of fractional calculus with applications. Vol. 2, De Gruyter, Berlin, 2019, pp. 411–429. MR 3965403 [LM72] Jacques-Louis Lions and Enrico Magenes, Non-homogeneous Boundary Value Problems and Applications. Vol. I, Springer-Verlag, New York-Heidelberg, 1972, Translated from the French by P. Kenneth,


Die Grundlehren der mathematischen Wissenschaften, Band 181. MR 0350177
[LN16] Ching-Lung Lin and Gen Nakamura, Unique continuation property for anomalous slow diffusion equation, Comm. Partial Differential Equations 41 (2016), no. 5, 749–758. MR 3508319
[LRdS18] Wei Liu, Michael Röckner, and José Luís da Silva, Quasi-linear (stochastic) partial differential equations with time-fractional derivatives, SIAM J. Math. Anal. 50 (2018), no. 3, 2588–2607. MR 3800228
[LRY16] Yikan Liu, William Rundell, and Masahiro Yamamoto, Strong maximum principle for fractional diffusion equations and an application to an inverse source problem, Fract. Calc. Appl. Anal. 19 (2016), no. 4, 888–906. MR 3543685
[LRYZ13] Yuri Luchko, William Rundell, Masahiro Yamamoto, and Lihua Zuo, Uniqueness and reconstruction of an unknown semilinear term in a time-fractional reaction-diffusion equation, Inverse Problems 29 (2013), no. 6, 065019, 16. MR 3066395
[LS21] Wenbo Li and Abner J Salgado, Time fractional gradient flows: Theory and numerics, Preprint, arXiv:2101.00541, 2021.
[LST96] Christian Lubich, Ian H. Sloan, and Vidar Thomée, Nonsmooth data error estimates for approximations of an evolution equation with a positive-type memory term, Math. Comp. 65 (1996), no. 213, 1–17. MR 1322891 (96d:65207)
[LSU68] O. A. Ladyženskaja, V. A. Solonnikov, and N. N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type, AMS, Providence, R.I., 1968. MR 0241822
[Lub83] Christian Lubich, Runge-Kutta theory for Volterra and Abel integral equations of the second kind, Math. Comp. 41 (1983), no. 163, 87–102. MR 701626
[Lub86] Christian Lubich, Discretized fractional calculus, SIAM J. Math. Anal. 17 (1986), no. 3, 704–719. MR 838249 (87f:26006)
[Lub88] Christian Lubich, Convolution quadrature and discretized operational calculus. I, Numer. Math. 52 (1988), no. 2, 129–145. MR 923707 (89g:65018)
[Luc00] Yuri Luchko, Asymptotics of zeros of the Wright function, Z. Anal. Anwendungen 19 (2000), no. 2, 583–595. MR 1769012 (2001f:33028)
[Luc08] Yuri Luchko, Algorithms for evaluation of the Wright function for the real arguments' values, Fract. Calc. Appl. Anal. 11 (2008), no. 1, 57–75. MR 2379273 (2009a:33024)
[Luc09] Yuri Luchko, Maximum principle for the generalized time-fractional diffusion equation, J. Math. Anal. Appl. 351 (2009), no. 1, 218–223. MR 2472935 (2009m:35274)
[LX07] Yumin Lin and Chuanju Xu, Finite difference/spectral approximations for the time-fractional diffusion equation, J. Comput. Phys. 225 (2007), no. 2, 1533–1552. MR 2349193


[LY17] Yuri Luchko and Masahiro Yamamoto, On the maximum principle for a time-fractional diffusion equation, Fract. Calc. Appl. Anal. 20 (2017), no. 5, 1131–1145. MR 3721892 [LY19a] Zhiyuan Li and Masahiro Yamamoto, Inverse problems of determining coefficients of the fractional partial differential equations, Handbook of Fractional Calculus with Applications. Vol. 2, De Gruyter, Berlin, 2019, pp. 443–464. MR 3965405 [LY19b] Yuri Luchko and Masahiro Yamamoto, Maximum principle for the time-fractional PDEs, Handbook of fractional calculus with applications. Vol. 2, De Gruyter, Berlin, 2019, pp. 299–325. MR 3965399 [LY20] Ping Lin and Jiongmin Yong, Controlled singular Volterra integral equations and Pontryagin maximum principle, SIAM J. Control Optim. 58 (2020), no. 1, 136–164. MR 4049386 [LZ14] Yuri Luchko and Lihua Zuo, θ-function method for a time-fractional reaction-diffusion equation, J. Alg. Math. Soc. 1 (2014), 1–15. [Mai96] Francesco Mainardi, The fundamental solutions for the fractional diffusion-wave equation, Appl. Math. Lett. 9 (1996), no. 6, 23–28. MR 1419811 (97h:35132) [Mai14] Francesco Mainardi, On some properties of the Mittag-Leffler function Eα (−t α ), completely monotone for t>0 with 0