Fractional Deterministic and Stochastic Calculus (De Gruyter Series in Probability and Stochastics, 4) [1 ed.] 3110779811, 9783110779813

Fractional calculus has emerged as a powerful and effective mathematical tool in the study of several phenomena in science.


Language: English; Pages: 462 [463]; Year: 2023





Giacomo Ascione, Yuliya Mishura, Enrica Pirozzi
Fractional Deterministic and Stochastic Calculus

De Gruyter Series in Probability and Stochastics



Edited by Itai Benjamini (Israel), Jean Bertoin (Switzerland), Michel Ledoux (France), and René L. Schilling (Germany)

Volume 4

Giacomo Ascione, Yuliya Mishura, Enrica Pirozzi

Fractional Deterministic and Stochastic Calculus

Mathematics Subject Classification 2020
Primary: 26A33, 34A08, 60G22; Secondary: 35R11, 34K37

Authors
Dr. Giacomo Ascione
Scuola Superiore Meridionale
Via Mezzocannone, 4
80138 Napoli, Italy
[email protected]

Prof. Dr. Yuliya Mishura
Taras Shevchenko National University of Kyiv
Department of Probability Theory, Statistics and Actuarial Mathematics
Volodymyrska str., 60
Kyiv 01601, Ukraine
[email protected]

Prof. Enrica Pirozzi
University of Naples Federico II
Department of Mathematics and Applications "Renato Caccioppoli"
Monte S. Angelo, Via Cintia
80126 Napoli, Italy
[email protected]

ISBN 978-3-11-077981-3
e-ISBN (PDF) 978-3-11-078001-7
e-ISBN (EPUB) 978-3-11-078022-2
ISSN 2512-9007
Library of Congress Control Number: 2023942980

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2024 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com

Introduction

In the early development of standard differential calculus, both Newton and Leibniz fixed some notations to describe derivatives. It was Leibniz's notation $\frac{d^n}{dx^n}$ that aroused curiosity: what happens if $n$ is not an integer number, but a positive real one? L'Hôpital asked exactly this question in a letter to Leibniz, dated 3 September 1695 [105]. In the post scriptum of the answer, dated 30 September 1695, Leibniz proposed a way to define the 1/2 derivative of a function and then stated: "This is an apparent paradox from which, one day, useful consequences will be drawn, as there are no paradoxes without utility" [135]. He was right. Fractional calculus has experienced explosive growth in recent years. Notions from fractional calculus have been used in several fields of science, from economics to biology, from viscoelasticity to quantum mechanics, from security to artificial intelligence, and the list could go on. Fractional calculus has proven to be a simple and powerful formalism for solving functional equations. Indeed, as observed by Davis [53], "The great elegance that can be secured by proper use of fractional operators and the power which they have in the solution of complicated functional equations should more than justify a more general recognition and use." From the point of view of differential equations, fractional derivatives allow capturing specific memory effects, which lead to power-law time relaxation phenomena. For this reason, fractional differential equations are implemented in several fields of physics and biology, as they capture a plethora of subdiffusive phenomena. Furthermore, the nonlocal nature of fractional derivatives has been used, for instance, to provide a natural extension of Schrödinger equations, whose results are compatible with several experiments in optics.
Whereas fractional differential equations are used to model deterministic or macroscopic behaviors, fractional calculus can also be used in combination with probability theory to provide several useful stochastic models. A first attempt to apply a fractional integration procedure to the Brownian motion is due to Lévy, but the most widely used and well-known example of a stochastic process obtained by employing fractional calculus is the fractional Brownian motion, developed by Mandelbrot and van Ness. This extremely powerful tool can describe noise with interdependence between samples, which is something commonly observed and thus deserving of attentive study, as Mandelbrot and van Ness noticed in [152]: "By way of contrast, the study of random functions has been overwhelmingly devoted to sequences of independent random variables, to Markov processes, and to other random functions having the property that sufficiently distant samples of these functions are independent, or nearly so. Empirical studies of random chance phenomena often suggest, on the contrary, a strong interdependence between distant samples." Fractional Brownian motions, fractional white noises, and their generalizations have since been extensively used to describe several phenomena, ranging from economics to hydrology and informatics. https://doi.org/10.1515/9783110780017-201

The fractional Brownian motion is not the only process to be directly related to fractional calculus. Another prominent role in this theory is played by Lévy processes, in particular, the stable ones. Stable distributions (including the Gaussian one) are quite peculiar: they represent the only possible choice of limit distributions for the Central Limit Theorem to hold. At first glance, they seem unrelated to fractional calculus. But if we consider Lévy processes whose increments follow a stable distribution, then fractional operators start to play a prominent role. Lévy processes are Markov processes whose trajectories are not necessarily continuous: this behavior is reflected in the nonlocality of their generator. In particular, symmetric stable Lévy motions are related to the fractional Laplacian (and are used to study the solutions of fractional Schrödinger equations), whereas stable subordinators (i. e., strictly increasing Lévy processes with stable increments) are generated by Marchaud fractional derivatives. Since they are strictly increasing, the sample paths of stable subordinators can also be inverted. Processes obtained in this way are called inverse stable subordinators. The trajectories of inverse stable subordinators are nondecreasing and exhibit some constancy intervals, corresponding to the jumps of the stable subordinators. They can be used as a suitable time-change for a wide class of processes to model trapping phenomena (due to the intervals of constancy), which occur, for instance, in hydrology. The newly obtained processes are used to provide stochastic representations of time-fractional partial differential equations. In this book, we describe some deterministic and stochastic aspects of fractional calculus, mostly in one dimension.
On the one hand, we aim to recall and guide the reader through the basics of fractional calculus, and, on the other hand, we want to introduce some more recent concepts, such as reflected and delayed fractional processes. We start with the first notions of fractional calculus in Chapter 1. More precisely, we introduce all the main fractional operators and study their general properties. Properties of fractional integrals, such as Hölder continuity and restrictions to bounded intervals and half-axes, are widely discussed in Sections 1.1–1.6. Various definitions of fractional derivatives, their mutual relations, and their behavior with respect to Lebesgue and Sobolev spaces are studied in Sections 1.7–1.15. Finally, in Section 1.16, we recall some properties of the Hilbert transform and its relations with the fractional operators. Fractional derivatives and integrals are then used in Chapter 2 to study fractional differential equations. Starting from the classical Abel integral equation in Section 2.1, we then explore fractional differential equations driven by Riemann–Liouville or Dzhrbashyan–Caputo derivatives, whose existence and uniqueness results are given in Sections 2.3 and 2.4. The Mittag-Leffler function, its variants, and its properties are preliminarily considered in Section 2.2 and then employed in Section 2.5 to prove a version of the well-known Grönwall inequality, obtaining, as a consequence, continuous dependence on the initial data in Section 2.6. Linear fractional differential equations are solved by means of Laplace transforms in Sections 2.7 and 2.8. Chapter 3 is devoted to the fractional Brownian motion (fBm) and related processes. The main properties of the fBm are stated in Section 3.1, whereas Section 3.2 is devoted


to Wiener integrals with respect to it, where the inclusion of Lebesgue and Sobolev spaces in the domain of the Wiener integral is studied in detail. In Section 3.3, particular attention is devoted to integration of functions of bounded variation against the fBm, further extending the domain of possible integrands. In Section 3.4, we recall some representation and transformation formulae and provide some equivalent formulations for the involved kernels. Furthermore, Wiener integrals on bounded intervals are considered, and their domains are investigated in detail. In Section 3.5, we give some properties of the fractional Ornstein–Uhlenbeck process (fOU), such as the continuity of its variance and covariance functions in the index H and its asymptotic behavior in time. Moreover, we provide the asymptotic behavior in time of the derivative of the variance function. In Section 3.6, we work with a generalization of the fOU obtained by adding a stochastic forcing term to the equation. In particular, we focus on its correlation function and the impact of the additional stochastic term on the long/short-range dependence of the fOU. In Sections 3.7 and 3.8, we focus on reflected processes, the reflected fBm and the reflected fOU, respectively, with special attention, in Section 3.8, to their approximation in terms of solutions of stochastic differential equations. New results, in comparison with those existing so far, concern the approximation of the reflected fOU and the reflected fBm with zero initial data. In Chapter 4, we focus on stable distributions and related processes. Namely, in Section 4.1, we recall the main properties of stable distributions, and then we use them in Sections 4.2 and 4.3 to study stable subordinators and symmetric stable Lévy motions. Inverse stable subordinators are introduced in Section 4.4, and their relationship with fractional abstract Cauchy problems is explored in Section 4.5.
In Section 4.6, we study the properties of the weighted inverse subordination operator, which are then exploited in Sections 4.7, 4.8, and 4.9 to study the link of delayed (possibly fractional) Brownian motions and Ornstein–Uhlenbeck processes with time-fractional (possibly nonautonomous) heat equations. In Section 4.10, we discuss the basic properties of delayed continuous-time Markov chains, with particular attention to the identification of the distribution of the sojourn times. Finally, in Section 4.11, we consider fractional integral equations with stochastic drivers, which are used to introduce noise in fractional differential equations. Chapters 1 to 4 are also equipped with exercises. Numerical methods are studied in detail in Chapter 5. Precisely, in Section 5.1, we consider the numerical evaluation of the Mittag-Leffler function, whereas Sections 5.2, 5.3, and 5.4 are devoted to the approximation of fractional integrals and to numerical solutions of fractional differential equations. Simulation algorithms are considered in Sections 5.5, 5.6, 5.7, and 5.8. In Section 5.5, we give some simulation algorithms for the fractional Brownian motion, which are then used in Section 5.6 to produce numerical approximations of solutions of stochastic differential equations. Section 5.7 is devoted to the simulation of stable random variables and related processes, whereas Section 5.8 deals with delayed ones. Finally, implementations in R [217] of all the described algorithms are provided in Section 5.9.

The problems discussed in the book are based both on the works of the authors themselves and on numerous papers and books by other authors on this topic. We will not list here all the works on which the book relies, but they are discussed in detail and cited in the main part of the text. We are really grateful to all our colleagues and friends who supported us in the preparation of this book and who addressed the mathematical problems we discuss here, greatly stimulating our curiosity: Nikolai Leonenko, Luisa Beghin, René Schilling, Enzo Orsingher, József Lőrinczi, Bruno Toaldo, Enrico Scalas, Lorenzo Torricelli, Anton Yurchenko-Tytarenko, Federico Polito, Pierre Patie, Mladen Savov, Giulia Di Nunno, Costantino Ricciuti, Mirko D'Ovidio, Gianni Pagnini, Igor Podlubny. The list of people who supported us is, in reality, much longer, and those not mentioned here are no less important. Yuliya Mishura is also grateful to the Swedish Foundation for Strategic Research (grant Nr. UKR22-0017) and to Mälardalen University (Sweden), which gave her the opportunity to be in a calm environment during difficult times for Ukraine and, in particular, to be able to prepare this book.

17 July 2023

Giacomo Ascione Yuliya Mishura Enrica Pirozzi

Contents

Introduction – V

List of notations and abbreviations – XIII
Operators – XIII
Spaces of functions – XIII
Special functions – XIV
Stochastic processes and random variables – XV
Other symbols – XVI
Abbreviations – XVII

1 Fractional integrals and derivatives – 1
1.1 General properties of fractional integrals – 1
1.2 Hölder property of fractional integrals – 10
1.3 Fractional integrals of power-integrable functions – 15
1.4 Restrictions of fractional integrals – 18
1.5 Continuity of fractional integrals in the index of integration – 20
1.6 Fractional integrals of complex order – 23
1.7 Riemann–Liouville fractional derivatives – 24
1.8 Dzhrbashyan–Caputo fractional derivatives – 28
1.9 Marchaud fractional derivatives – 32
1.10 Fractional derivatives of higher order – 42
1.11 The one-dimensional fractional Laplacian – 64
1.12 Grunwald–Letnikov fractional derivatives – 85
1.13 Fractional derivatives of complex order – 88
1.14 Fractional integrals and derivatives with respect to a function – 88
1.15 Further properties of fractional derivatives – 92
1.16 The Hilbert transform – 101
1.17 Exercises – 108

2 Integral and differential equations involving fractional operators – 120
2.1 Abel integral equation – 120
2.2 The Mittag-Leffler and Prabhakar functions – 121
2.3 Fractional differential equations with Riemann–Liouville fractional derivatives – 130
2.4 Fractional differential equations with Dzhrbashyan–Caputo fractional derivatives – 143
2.5 The generalized Grönwall inequality – 151
2.6 Continuous dependence on the initial data – 154
2.7 Linear fractional differential equations with Riemann–Liouville fractional derivatives – 157
2.8 Linear fractional differential equations with Dzhrbashyan–Caputo fractional derivatives – 173
2.9 Exercises – 181

3 Fractional Brownian motion and its environment – 191
3.1 Fractional Brownian motion: definition and some properties – 191
3.2 Wiener integration with respect to the fractional Brownian motion – 194
3.3 Wiener integrals of functions of bounded variation – 202
3.4 Representations of the fractional Brownian motion – 206
3.5 Fractional Ornstein–Uhlenbeck process – 214
3.6 Fractional Ornstein–Uhlenbeck process with stochastic forcing – 223
3.7 Reflected fractional Brownian motion – 229
3.8 Reflected fractional Ornstein–Uhlenbeck process – 235
3.9 Exercises – 252

4 Stochastic processes and fractional differential equations – 259
4.1 Stable distributions and stable processes – 259
4.2 Stable subordinators and Marchaud fractional derivatives – 264
4.3 Symmetric α-stable Lévy motion and fractional Laplacian – 270
4.4 Inverse stable subordinators – 273
4.5 Time-fractional abstract Cauchy problems – 279
4.6 Weighted inverse subordination operators – 284
4.7 The delayed Brownian motion and the time-fractional heat equation – 292
4.8 The delayed fractional Brownian motion – 301
4.9 The delayed fractional Ornstein–Uhlenbeck process – 313
4.10 Delayed continuous-time Markov chains – 316
4.11 Fractional integral equations with a stochastic driver – 326
4.12 Exercises – 333

5 Numerical methods and simulation – 338
5.1 Calculation of the Mittag-Leffler function – 338
5.2 Approximation of fractional integrals. Product-integration methods – 345
5.3 Product-integration methods for fractional differential equations – 350
5.4 Fractional linear multistep methods – 358
5.5 Simulation of the fractional Brownian motion – 363
5.6 Euler–Maruyama schemes for stochastic differential equations driven by an additive fractional Brownian noise – 368
5.7 Chambers–Mallow–Stuck algorithm – 376
5.8 Simulation of the delayed processes – 378
5.9 Listings – 389

A Basics in complex analysis and integral transforms – 399
A.1 Basics in complex analysis – 399
A.2 Fourier transform – 403
A.3 Laplace transform – 408

B Special functions – 411
B.1 Euler Gamma function – 411
B.2 Generalized binomial coefficients – 413
B.3 Euler Beta function – 414
B.4 Hypergeometric series – 415

C Stochastic processes – 417
C.1 The Kolmogorov–Chentsov theorem – 417
C.2 Gaussian processes – 417
C.3 Brownian motion and integrals with respect to it – 419
C.4 Infinite divisibility and Lévy processes – 420
C.5 Feller semigroups and generators – 424

Bibliography – 429
Index – 439

List of notations and abbreviations

Operators

∗ – Convolution product, 5
∇^α_{⋅,⋅} – Fractional finite difference of order α, 86
∇^n_{⋅,⋅} – Finite difference of order n, 45
Cov – Covariance, 217, 417
D^α_⋅ – Riemann–Liouville fractional derivative of order α > 0, 24
D^α_{⋅,g} – Riemann–Liouville fractional derivative of order α > 0 with respect to the function g, 90
D^{α,ε}_⋅ – Truncated fractional derivative of order α > 0, 35
^C D^α_⋅ – Dzhrbashyan–Caputo fractional derivative of order α > 0, 28
^C D^α_{⋅,g} – Dzhrbashyan–Caputo fractional derivative of order α > 0 with respect to the function g, 91
^{GL} D^α_{a+} – Grunwald–Letnikov fractional derivative of order α > 0, 87
^H D^α_⋅ – Hadamard fractional derivative of order α > 0, 92
^M D^α_⋅ – Marchaud fractional derivative of order α > 0, 33
E – Expectation, 191, 417
ℱ – Fourier transform, 57, 403
Φ – Reflection map, 231
G_m – m-th interjump of a càdlàg function, 319
I^α_⋅ – Fractional integral of order α > 0, 1
I^α_{⋅,g} – Fractional integral of order α > 0 with respect to the function g, 89
^H I^α_⋅ – Hadamard fractional integral of order α > 0, 92
I^α_S – Inverse subordination operator, 284
I^α_{S,w} – Weighted inverse subordination operator, 286
J_m – m-th jump of a càdlàg function, 319
K_H^T – Molchan–Golosov operator, 208
K_{H,∗}^T – Molchan–Golosov inversion operator, 208
ℒ – Laplace transform, 124, 408
ℒ_w – Weighted Laplace transform, 286
M_±^H – Mandelbrot–van Ness operator, 196
Ψ – Regulator map, 231
𝒬 – Generator of a continuous-time Markov chain, 316
Q_g – Composition operator with the function g, 89
H – Hilbert transform, 101
Var – Variance, 216, 417

Spaces of functions

AC – Space of absolutely continuous functions, 16
BV – Space of functions of bounded variation, 202
BV_loc – Space of functions of locally bounded variation, 203, 204
𝒞 – Space of continuous functions, 2
𝒞_b – Space of bounded continuous functions, 2
𝒞_c – Space of continuous functions with compact support, 2
𝒞_0 – Space of continuous functions with 0-limits at the endpoints, 2
𝒞^m – Space of m times continuously differentiable functions, 3



𝒞_b^m – Space of bounded m times continuously differentiable functions with bounded derivatives, 3
𝒞_c^m – Space of m times continuously differentiable functions with compact support, 3
𝒞_0^m – Space of m times continuously differentiable functions with vanishing derivatives, 3
𝒞^∞ – Space of infinitely continuously differentiable functions, 3
𝒞_b^∞ – Space of infinitely continuously differentiable functions with bounded derivatives, 3
𝒞_c^∞ – Space of infinitely continuously differentiable functions with compact support, 3
𝒞_0^∞ – Space of infinitely continuously differentiable functions with vanishing derivatives, 3
𝒞(⋅; w_{a,γ}) – Space of continuous functions on (a, b] that are O((t − a)^{−γ}), 131
𝒞^λ – Space of Hölder continuous functions of order λ, 10
𝒟 – Space of càdlàg functions, 319
Dom – Domain of an operator, 1
ℱ_H – Spectral domain of the integral with respect to the fractional Brownian motion, 201
ℱ_0 L^1(ℝ) – Wiener ring, 407
ℱ L^1(ℝ) – Unitary Wiener ring, 407
ℱ 𝒮_0(ℝ) – Lizorkin space, 59, 408
ℍ – Generic class of functions, 1
H^n – Hilbert Sobolev space of order n, 17
H^α – Hilbert space of fractional integrals of order α, 17
I^α – Space of fractional integrals of order α > 0, 1
L^p – Lebesgue space of functions with integrable p-power, 1
L^2_H – Time domain of the integral with respect to the fractional Brownian motion, 197, 212
|L^2_H| – Modified time domain of the integral with respect to the fractional Brownian motion, with H > 1/2, 200
Lip – Space of Lipschitz continuous functions, 10
𝒮(ℝ) – Schwartz space of rapidly decreasing functions on ℝ, 51, 406
𝒮_0(ℝ) – Space of Schwartz functions with null derivatives at 0, 60, 408
W^{n,p} – Sobolev space of order n and power p, 16
W^{α,p} – Fractional Sobolev space of order α and power p, 74
𝒲^α – Set of inverse subordination weights, 285

Special functions

(⋅)_n – Shifted factorial, 411
(⋅⋅) – Binomial coefficients, 413
⌊⋅⌋ – Integer part, 42
1_⋅ – Indicator function of a set, 4
B(⋅, ⋅) – Euler Beta function, 7, 414
B(⋅; ⋅, ⋅) – Normalized incomplete Beta function, 278
C_α^{FL} – Normalization constant of the fractional Laplacian, 66
C_H^{MN} – Mandelbrot–van Ness constant, 196
C_H(⋅, ⋅) – Covariance function of the fractional Ornstein–Uhlenbeck process with stochastic forcing, 225
χ(⋅, ⋅) – Normalization constant of Marchaud derivatives, 47
E_α – Mittag-Leffler function of one parameter, 121
E_{α,β} – Mittag-Leffler function of two parameters, 122
E^γ_{α,β} – Prabhakar function, 122
f_α(t, ⋅) – Density of the inverse α-stable subordinator, 274
₂F₁(a, b; c; x) – Hypergeometric function, 105, 415
_pF_q(a_1, …, a_p; b_1, …, b_q; ⋅) – Hypergeometric series, 415

φ_X – Characteristic function of a random variable X, 260
g_α(⋅) – Density of the stable subordinator at time t = 1, 264
g_α(t, ⋅) – Density of a stable subordinator, 264
g_{H,K}(⋅; ⋅) – Mandelbrot–van Ness transformation kernel, 206
Γ – Euler Gamma function, 1, 411
ι – Identity function, 382
ι_α – Convolution kernel of the fractional integral, 6
K_H(⋅, ⋅) – Mandelbrot–van Ness kernel, 196
K_{ℓ,α}(⋅) – Kernel of the truncated fractional derivative, 35
n! – Factorial of a nonnegative integer n, 411
p(t, ⋅) – Density of the Brownian motion (or heat kernel), 113
p_α(t, ⋅) – Density of the symmetric α-stable Lévy motion (or fractional heat kernel), 270
p_{H,f}(t, ⋅) – Integral of the density of a fractional Brownian motion against a function f, 308
p_H(t, ⋅) – Density of the fractional Brownian motion, 305
p_{H,X}(t, ⋅) – Density of a perturbation of the fractional Brownian motion, 305
p_{H,x}^{fOU}(t, ⋅) – Density of the fractional Ornstein–Uhlenbeck process with Hurst index H and initial data x, 221
q_α(t, ⋅) – Density of the delayed Brownian motion, 294
q_α^H(t, ⋅) – Density of the delayed fractional Brownian motion, 302
q_α^{H,f}(t, ⋅) – Integral of the density of a delayed fractional Brownian motion against a function f, 308
q_α^{H,X}(t, ⋅) – Density of a perturbation of the delayed fractional Brownian motion, 305
q_{α,fOU}^{H,ξ}(t, ⋅) – Density of the delayed fractional Ornstein–Uhlenbeck process, 314
R_H(⋅, ⋅) – Covariance function of the fractional Ornstein–Uhlenbeck process, 218
ρ_H(⋅) – Covariance function of the stationary fractional Ornstein–Uhlenbeck process, 217
U_{p,α}(⋅) – Moment of order p of the inverse α-stable subordinator, 277
V_H(⋅) – Variance function of the fractional Ornstein–Uhlenbeck process, 220
V_{H,∞} – Variance of the stationary fractional Ornstein–Uhlenbeck process, 217
z_H(⋅; ⋅) – Molchan–Golosov kernel, 209
z_H^∗(⋅; ⋅) – Molchan–Golosov inversion kernel, 211
z_{H,K}(⋅; ⋅) – Molchan–Golosov transformation kernel, 208

Stochastic processes and random variables

B – Brownian motion, 192, 419
B^H – Fractional Brownian motion, 191
X_B^H – Perturbation of the fractional Brownian motion, 305
B^{H,b} – Fractional Brownian motion with drift b, 232
B_R^{H,b} – Reflected fractional Brownian motion with drift b, 232
B_α – Delayed Brownian motion, 293
B^{α,H} – Delayed fractional Brownian motion, 301
X_B^{α,H} – Perturbation of a delayed fractional Brownian motion, 305
χ – Jump chain of a continuous-time Markov chain, 316
γ – Sojourn times of a continuous-time Markov chain, 316
γ_α – Sojourn times of a delayed continuous-time Markov chain, 317
I^H – Increments of the fractional Brownian motion, 194
J – Jump times of a continuous-time Markov chain, 316
J_α – Jump times of a delayed continuous-time Markov chain, 317
L_α – Inverse stable subordinator, 274
L^{H,ξ} – Regulator of the reflected fractional Ornstein–Uhlenbeck process with initial data ξ, 239
L_R^{H,ξ,ε} – Approximate regulator of the reflected fractional Ornstein–Uhlenbeck process with initial data ξ, 244
M^{H,b} – Running maximum of the reflected fractional Brownian motion with drift, 235
𝒩 – Poisson process, 323
N_α – Fractional Poisson process, 323
S(α, β, γ, δ) – Stable distribution, 260
S_α – Stable subordinator, 264
U^H – Stationary fractional Ornstein–Uhlenbeck process, 215
U^{H,ξ} – Fractional Ornstein–Uhlenbeck process with initial data ξ, 214
U_R^{H,ξ} – Reflected fractional Ornstein–Uhlenbeck process with initial data ξ, 239
U^{H,ξ,I} – Fractional Ornstein–Uhlenbeck process with initial data ξ and forcing term I, 224
U_α^{H,ξ} – Delayed fractional Ornstein–Uhlenbeck process, 314
X^α – Symmetric stable Lévy motion, 270
X_α – Delayed continuous-time Markov chain, 317
Y^{H,ξ,ε} – Approximation of the reflected fractional Ornstein–Uhlenbeck process with initial data ξ, 240
𝒴_n – Scaled normal compound Poisson process, 384
Y_{α,n} – Scaled normal compound fractional Poisson process, 386

Other symbols

|E| – Lebesgue measure of a measurable set, 1
⇒ – Weak convergence in Skorokhod space, 383
→^d – Convergence in finite-dimensional distributions, 192
=^d – Equality in distribution, 193
abs – Abscissa of convergence, 124, 409
𝔹 – Generic Banach space, 151, 265
(b_X, q_X, ν_X) – Characteristic triplet of an infinitely divisible distribution, 261
ℂ – Field of complex numbers, 23
cl – Topological closure of a set, 2
Disc – Set of discontinuity points of a function, 203
d_S(⋅, ⋅) – Skorokhod metric on 𝒟[0, 1], 382
eg – Exponential growth index, 409
f_{[a,b]} – Restriction of a function f over the interval [a, b], 18
f_± – Positive and negative part of a function f, 308
f(t±) – Left/right limits of f in t, 202
γ_EM – Euler–Mascheroni constant, 262
hol – Abscissa of holomorphic extension, 409
Λ – Set of strictly increasing functions, 382
ℕ – Set of positive integers, 1
ℕ_0 – Set of nonnegative integers, 3, 100
Ω – Sample space, 191, 417
P – Probability measure, 191, 417
PV – Principal value integral, 101
ℝ – Field of real numbers, 1
ℝ_+ – Positive half axis (0, +∞), 23, 33
ℝ_0^+ – Nonnegative half axis [0, +∞), 18, 131
span – Subspace of finite linear combinations of the considered elements, 198
supp – Support of a function, 2
Σ – σ-algebra of events, 191, 417
𝒲 – Wronskian matrix, 178
𝒲_α – Fractional Wronskian matrix, 168
W – Wronskian determinant, 178
W_α – Fractional Wronskian determinant, 168
x_± – Positive and negative part of the identity, 6

Abbreviations

a. a. – Almost all, 1
AB – Adams–Bashforth, 353
ABM – Adams–Bashforth–Moulton, 355
a. e. – Almost everywhere, 1
AM – Adams–Moulton, 353
a. s. – Almost surely, 191
Bm – Brownian motion, 192
CTMC – Continuous-time Markov chain, 316
fBm – Fractional Brownian motion, 191
FFT – Fast Fourier Transform, 348
FLMM – Fractional linear multistep method, 359
fOU – Fractional Ornstein–Uhlenbeck process, 215
LMM – Linear multistep method, 358
PI – Product-integration, 345
w. r. t. – With respect to, 1

1 Fractional integrals and derivatives

In this chapter, we provide definitions and properties of fractional integrals and derivatives. We focus on some specific aspects, including the Hölder property in the limits of integration and the continuity of fractional integrals in the fractional order of integration. Beyond the Riemann–Liouville fractional integrals and derivatives, we consider the Dzhrbashyan–Caputo, Marchaud, and Grunwald–Letnikov fractional derivatives. For more specific details, we refer the reader to the book [232].

1.1 General properties of fractional integrals

Fix $-\infty < a < b < +\infty$, $\alpha > 0$, and consider a function $f : (a, b) \to \mathbb{R}$.

Definition 1.1. We denote the Riemann–Liouville left- and right-sided fractional integrals on $(a, b)$ of order $\alpha$ by
$$
(I_{a+}^{\alpha} f)(x) := \frac{1}{\Gamma(\alpha)} \int_a^x \frac{f(t)}{(x-t)^{1-\alpha}}\, dt
\quad \text{and} \quad
(I_{b-}^{\alpha} f)(x) := \frac{1}{\Gamma(\alpha)} \int_x^b \frac{f(t)}{(t-x)^{1-\alpha}}\, dt, \tag{1.1}
$$
respectively, where $\Gamma$ is the Euler Gamma function; see Appendix B.1.

It is clear that the operators $I_{a+(b-)}^{\alpha}$ are linear. Fractional integrals on bounded intervals were first discussed in [219] (which was published posthumously) and then further studied in [139, 240]. The form (1.1) was independently established in [123, 132, 183]. For further details on the history of fractional integrals, see [223, 224].

We denote the domain of the operator $I_{a+(b-)}^{\alpha}$ by $\mathrm{Dom}(I_{a+(b-)}^{\alpha})$, that is, $f \in \mathrm{Dom}(I_{a+(b-)}^{\alpha})$ if the respective integrals converge for almost all (a. a.) $x \in (a, b)$ (likewise, we can say almost everywhere, a. e.) with respect to (w. r. t.) the Lebesgue measure on $\mathbb{R}$. For any class of real-valued functions $\mathbb{H}$ defined on $(a, b)$, we say that $f \in I_{a+(b-)}^{\alpha}(\mathbb{H})$ if there exists a function $h \in \mathbb{H}$ such that $f(x) = (I_{a+(b-)}^{\alpha} h)(x)$ for a. a. $x \in (a, b)$ with respect to the Lebesgue measure. The corresponding classes of functions defined on $\mathbb{R}$ are denoted by $\mathbb{I}_{\pm}^{\alpha}(\mathbb{H})$.

In what follows, for any $d \in \mathbb{N} := \{1, 2, \ldots\}$ and for any measurable subset $E \subset \mathbb{R}^d$, we denote by $|E|$ the Lebesgue measure of $E$. Furthermore, for any $p \in [1, \infty)$, we denote by $L^p(E)$ the Lebesgue space of measurable functions $f : E \to \mathbb{R}$ such that
$$
\|f\|_{L^p(E)}^p := \int_E |f(x)|^p\, dx < \infty,
$$
and by $L^\infty(E)$ the Lebesgue space of measurable functions $f : E \to \mathbb{R}$ such that
$$
\|f\|_{L^\infty(E)} := \inf\bigl\{M \ge 0 : \bigl|\{x \in \mathbb{R}^d : |f(x)| \ge M\}\bigr| = 0\bigr\} < \infty.
$$
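Returning to Definition 1.1, the left-sided integral can be checked numerically against the classical closed form $I_{0+}^{\alpha}[t^{\beta}](x) = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)}x^{\beta+\alpha}$. The following is a minimal sketch of ours, in Python rather than the book's R listings of Section 5.9, and the function name `rl_integral` is our own: the substitution $s = (x-t)^{\alpha}$ removes the endpoint singularity, so a plain midpoint rule suffices.

```python
import math

def rl_integral(f, x, alpha, n=20000):
    """Left-sided Riemann-Liouville integral (I^alpha_{0+} f)(x), alpha > 0.

    With s = (x - t)^alpha the integral becomes
    (1 / (alpha * Gamma(alpha))) * int_0^{x^alpha} f(x - s^(1/alpha)) ds,
    whose integrand is smooth, so a midpoint rule converges quickly.
    """
    upper = x ** alpha
    h = upper / n
    acc = sum(f(x - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return acc * h / (alpha * math.gamma(alpha))

# Closed form for f(t) = t: I^alpha_{0+}[t](x) = x^(1+alpha) / Gamma(2+alpha)
alpha, x = 0.5, 1.0
print(rl_integral(lambda t: t, x, alpha))        # ≈ 0.752253
print(x ** (1 + alpha) / math.gamma(2 + alpha))  # 0.752253...
```

The same routine with `f = lambda t: 1.0` reproduces $x^{\alpha}/\Gamma(\alpha+1)$, the fractional integral of a constant.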

We recall that for $p \in [1, \infty]$, the space $L^p(E)$ equipped with the norm $\|\cdot\|_{L^p(E)}$ is a Banach space. We will make use of the following continuity lemma (see [254, Theorem 8.19]).

Lemma 1.2. For every function $f \in L^p(a, b)$, $-\infty \le a < b \le \infty$,
$$
\lim_{h \to 0} \int_{a+|h|}^{b-|h|} \bigl|f(x+h) - f(x)\bigr|^p\, dx = 0.
$$
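Lemma 1.2 holds even for discontinuous $f$, which a short numerical sketch makes concrete. The example below is ours (Python, not from the book; the name `translation_gap` is our own): for $f = \mathbf{1}_{[0,1/2]}$ on $(0,1)$ and $p = 1$, the integral equals the length of the region where $f(\cdot + h)$ and $f$ disagree, which is roughly $|h|$.

```python
def f(x):
    # Indicator of [0, 1/2] on (0, 1): discontinuous but in every L^p(0, 1).
    return 1.0 if 0.0 <= x <= 0.5 else 0.0

def translation_gap(h, p=1.0, n=100000):
    """Midpoint-rule approximation of the integral in Lemma 1.2 on (0, 1)."""
    lo, hi = abs(h), 1.0 - abs(h)
    step = (hi - lo) / n
    return step * sum(
        abs(f(lo + (k + 0.5) * step + h) - f(lo + (k + 0.5) * step)) ** p
        for k in range(n)
    )

for h in (0.1, 0.01, 0.001):
    print(h, round(translation_gap(h), 4))  # the gap shrinks roughly like |h|
```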

In addition, we will use the following inequality, called the generalized Minkowski inequality (see [101, Chapter VI, Inequality 202]).

Theorem 1.3. Let $d_1, d_2 \in \mathbb{N}$, let $E_j \subset \mathbb{R}^{d_j}$, $j = 1, 2$, be measurable sets, and let $f : E_1 \times E_2 \to \mathbb{R}$ be a measurable function. Assume that $F : x \in E_1 \mapsto \int_{E_2} f(x, y)\, dy$ is well-defined and measurable. Then, for $p \in [1, \infty)$,
$$
\|F\|_{L^p(E_1)} \le \int_{E_2} \Bigl( \int_{E_1} |f(x, y)|^p\, dx \Bigr)^{\frac{1}{p}} dy
\quad \text{and} \quad
\|F\|_{L^\infty(E_1)} \le \int_{E_2} \bigl\|f(\cdot, y)\bigr\|_{L^\infty(E_1)}\, dy.
$$
If $p > 1$, then equality holds if and only if there exist two measurable functions $f_j : E_j \to \mathbb{R}$, $j = 1, 2$, such that $f(x, y) = f_1(x) f_2(y)$ for a. a. $(x, y) \in E_1 \times E_2$.
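The first inequality of Theorem 1.3 is easy to verify on a grid, since after discretization it reduces to the triangle inequality for vector $p$-norms. The sketch below is ours (Python, not from the book); the kernel $e^{-xy}$ is an arbitrary choice that is not of product form $f_1(x)f_2(y)$, so for $p = 2$ the inequality should be strict.

```python
import math

# E1 = E2 = [0, 1] discretized by midpoints; f(x, y) = exp(-x*y).
n, p = 200, 2.0
hx = hy = 1.0 / n
xs = [(i + 0.5) * hx for i in range(n)]
ys = [(j + 0.5) * hy for j in range(n)]

# Left-hand side: the L^p norm of F(x) = integral over y of f(x, y)
F = [hy * sum(math.exp(-x * y) for y in ys) for x in xs]
lhs = (hx * sum(abs(v) ** p for v in F)) ** (1.0 / p)

# Right-hand side: integral over y of the L^p norm of f(., y)
rhs = hy * sum((hx * sum(math.exp(-x * y) ** p for x in xs)) ** (1.0 / p) for y in ys)

print(lhs < rhs)  # True: generalized Minkowski, strict for a non-product kernel
```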

1.1 General properties of fractional integrals

We say that a measurable function $f : E \to \mathbb{R}$ for $E \subset \mathbb{R}^d$ belongs to $L^p_{\mathrm{loc}}(E)$ if for any compact subset $K \subseteq E$, the restriction $f_K : K \to \mathbb{R}$ of $f$ to $K$ belongs to $L^p(K)$. Further details on Lebesgue spaces can be found in any real analysis or measure theory book, such as [226, 227, 234, 254].

We will also use spaces of continuous and continuously differentiable functions. For $-\infty < a < b < +\infty$, we denote by $\mathcal{C}[a,b]$ the Banach space of continuous functions $f : [a,b] \to \mathbb{R}$ equipped with the norm $\|f\|_{\mathcal{C}[a,b]} := \sup_{x \in [a,b]} |f(x)|$ and by $\mathcal{C}_0[a,b]$ the subspace of functions $f \in \mathcal{C}[a,b]$ such that $f(a) = f(b) = 0$. Analogously, for $-\infty \le a < b \le +\infty$, we denote by $\mathcal{C}(a,b)$ the space of continuous functions $f : (a,b) \to \mathbb{R}$ and by $\mathcal{C}_b(a,b)$ the subspace of functions $f \in \mathcal{C}(a,b)$ such that
\[ \|f\|_{\mathcal{C}_b(a,b)} := \sup_{x \in (a,b)} |f(x)| < \infty. \]
The latter space equipped with the norm $\|\cdot\|_{\mathcal{C}_b(a,b)}$ is a Banach space. We denote by $\mathcal{C}_0(a,b)$ the subspace of functions $f \in \mathcal{C}_b(a,b)$ such that $\lim_{x \downarrow a} f(x) = \lim_{x \uparrow b} f(x) = 0$. This is still a Banach space when equipped with the norm $\|\cdot\|_{\mathcal{C}_b(a,b)}$. Furthermore, observe that if $f \in \mathcal{C}_b(a,b)$, then $f \in L^\infty(a,b)$ and $\|f\|_{\mathcal{C}_b(a,b)} = \|f\|_{L^\infty(a,b)}$. For this reason, we will use both norms depending on the context. For any function $f \in \mathcal{C}(a,b)$, we define the support of $f$ as $\mathrm{supp}(f) = \mathrm{cl}(\{x \in (a,b) : f(x) \ne 0\})$, where $\mathrm{cl}$ denotes the topological closure. We denote by $\mathcal{C}_c(a,b)$ the set of functions $f \in \mathcal{C}(a,b)$ such that $\mathrm{supp}(f) \subset (a,b)$

is a compact set. If a function $f : (a,b) \to \mathbb{R}$ is $m$ times continuously differentiable, then we denote by $f^{(m)}$ its $m$th derivative, whereas $f^{(0)} \equiv f$. For $m \in \mathbb{N}$, we denote by $\mathcal{C}^m(a,b)$ the set of functions $f : (a,b) \to \mathbb{R}$ that are $m$ times continuously differentiable, and by $\mathcal{C}^m_b(a,b)$ (respectively, $\mathcal{C}^m_0(a,b)$ and $\mathcal{C}^m_c(a,b)$) the space of functions $f \in \mathcal{C}^m(a,b)$ such that $f^{(j)} \in \mathcal{C}_b(a,b)$ (respectively, $\mathcal{C}_0(a,b)$ and $\mathcal{C}_c(a,b)$) for all $j = 0, 1, \ldots, m$. The sets $\mathcal{C}^m_b(a,b)$ and $\mathcal{C}^m_0(a,b)$ are Banach spaces when equipped with the norm
\[ \|f\|_{\mathcal{C}^m_b(a,b)} = \sum_{j=0}^m \big\|f^{(j)}\big\|_{\mathcal{C}_b(a,b)}, \qquad f \in \mathcal{C}^m_b(a,b). \]

Finally, we denote by $\mathcal{C}^\infty(a,b)$ the set of functions $f : (a,b) \to \mathbb{R}$ that are infinitely differentiable and by $\mathcal{C}^\infty_b(a,b)$ (respectively, $\mathcal{C}^\infty_0(a,b)$ and $\mathcal{C}^\infty_c(a,b)$) the subspace of functions $f \in \mathcal{C}^\infty(a,b)$ such that $f^{(j)} \in \mathcal{C}_b(a,b)$ (respectively, $\mathcal{C}_0(a,b)$ and $\mathcal{C}_c(a,b)$) for all $j \in \mathbb{N}_0 := \{0, 1, 2, \ldots\}$. We can also consider the spaces $\mathcal{C}^m[a,b]$ for $m = 1, 2, \ldots$ and $m = \infty$, and the related subspaces, by using left and right derivatives at $a$ and $b$, respectively. All these definitions extend easily to the half-open intervals $[a,b)$ and $(a,b]$. For further details, see [244].

The first natural question is: which functions belong to $\mathrm{Dom}(I^\alpha_{a+(b-)})$? For $-\infty < a < b < +\infty$, the answer is given by the next lemma.

Lemma 1.4. Let $-\infty < a < b < +\infty$. Then for any $\alpha > 0$, the operator
\[ I^\alpha_{a+(b-)} : L^1(a,b) \to L^1(a,b) \]
is a bounded linear operator. In particular, for $f \in L^1(a,b)$, we have the following properties:
(i) $f \in \mathrm{Dom}(I^\alpha_{a+(b-)})$,
(ii) $I^\alpha_{a+(b-)} f \in L^1(a,b)$,
(iii) $\displaystyle \big\|I^\alpha_{a+(b-)} f\big\|_{L^1(a,b)} \le \frac{(b-a)^\alpha}{\Gamma(\alpha+1)} \|f\|_{L^1(a,b)}$.

Proof. Let $f \in L^1(a,b)$ and consider the operator $I^\alpha_{a+}$ (the case of $I^\alpha_{b-}$ is analogous). Note that $(I^\alpha_{a+} f)(x)$ is well defined for a. a. $x \in (a,b)$ if the functions
\[ t \in (a,x) \mapsto (x-t)^{\alpha-1} f(t) \in \mathbb{R} \]
belong to $L^1(a,x)$ for a. a. $x \in (a,b)$. The latter is equivalent to the fact that $(I^\alpha_{a+} |f|)(x)$ is well defined for a. a. $x \in (a,b)$, that is, $|f| \in \mathrm{Dom}(I^\alpha_{a+})$. Moreover, it is clear by definition that $|(I^\alpha_{a+} f)(x)| \le (I^\alpha_{a+} |f|)(x)$, and then $\|I^\alpha_{a+} f\|_{L^1(a,b)} \le \|I^\alpha_{a+} |f|\|_{L^1(a,b)}$. Thus, if properties (i), (ii), and (iii) hold for $|f|$, then they also hold for $f$. For this reason, we can suppose, without loss of generality, that $f \ge 0$ a. e. It is clear that we only need to prove item (iii), as it implies both items (i) and (ii). To do this, note that by the Fubini–Tonelli theorem
\[ \int_a^b (I^\alpha_{a+} f)(x)\,dx = \frac{1}{\Gamma(\alpha)} \int_a^b \int_a^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt\,dx = \frac{1}{\Gamma(\alpha)} \int_a^b \int_t^b \frac{f(t)}{(x-t)^{1-\alpha}}\,dx\,dt \le \frac{(b-a)^\alpha \|f\|_{L^1(a,b)}}{\Gamma(\alpha+1)}, \]
and we get the proof.

Further, we can in fact prove that $I^\alpha_{a+} : L^p(a,b) \to L^p(a,b)$ is a continuous operator for all $p \ge 1$. Before doing this, let us introduce the following notation: for any subset $E \subset \mathbb{R}$, we define by $\mathbb{1}_E$ the indicator function of $E$, that is,

\[ \mathbb{1}_E(t) = \begin{cases} 1, & t \in E, \\ 0, & t \in \mathbb{R} \setminus E. \end{cases} \tag{1.2} \]
It is clear that the function $\mathbb{1}_E$ is measurable if and only if $E$ is a measurable set. Moreover, $\mathbb{1}_E \in L^1(\mathbb{R})$ if and only if $|E| < \infty$, and $\|\mathbb{1}_E\|_{L^p(\mathbb{R})} = |E|^{1/p}$ for any finite $p \ge 1$, as $(\mathbb{1}_E(t))^p = \mathbb{1}_E(t)$. In any case, $\mathbb{1}_E \in L^\infty(\mathbb{R})$, even if $|E| = \infty$.

Lemma 1.5. Let $f \in L^p(a,b)$, $p \ge 1$, and $\alpha > 0$. Then $I^\alpha_{a+(b-)} f \in L^p(a,b)$, and

\[ \big\|I^\alpha_{a+(b-)} f\big\|_{L^p(a,b)} \le \frac{(b-a)^\alpha}{\Gamma(\alpha+1)} \|f\|_{L^p(a,b)}. \]

Proof. Again, let us focus on the operator $I^\alpha_{a+}$. First of all, by applying the change of variables $x - t = s$ we get
\[ (I^\alpha_{a+} f)(x) = \frac{1}{\Gamma(\alpha)} \int_0^{x-a} s^{\alpha-1} f(x-s)\,ds = \frac{1}{\Gamma(\alpha)} \int_0^{b-a} \mathbb{1}_{(0,x-a)}(s)\, s^{\alpha-1} f(x-s)\,ds, \]
where $\mathbb{1}_{(0,x-a)}$ is the indicator function of the interval $(0, x-a)$. We assume $1 \le p < \infty$; the case $p = \infty$ can be treated in the same way. By the generalized Minkowski inequality (Theorem 1.3) we get, recalling also that $\mathbb{1}_{(0,x-a)}(s) = \mathbb{1}_{(a,b-s)}(x-s)$ for $x \in (a,b)$ and $s \in (0, b-a)$,
\begin{align*}
\left( \int_a^b \big|(I^\alpha_{a+} f)(x)\big|^p dx \right)^{1/p}
&= \left( \int_a^b \left| \frac{1}{\Gamma(\alpha)} \int_0^{b-a} \mathbb{1}_{(0,x-a)}(s)\, s^{\alpha-1} f(x-s)\,ds \right|^p dx \right)^{1/p} \\
&\le \frac{1}{\Gamma(\alpha)} \int_0^{b-a} \left( \int_a^b \mathbb{1}_{(a,b-s)}(x-s)\, s^{p(\alpha-1)} \big|f(x-s)\big|^p\,dx \right)^{1/p} ds \\
&= \frac{1}{\Gamma(\alpha)} \int_0^{b-a} s^{\alpha-1} \left( \int_a^{b-s} \big|f(t)\big|^p\,dt \right)^{1/p} ds \le \frac{(b-a)^\alpha}{\Gamma(\alpha+1)}\, \|f\|_{L^p(a,b)}.
\end{align*}
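As a numerical illustration of Lemmas 1.4 and 1.5 (our own sketch, not part of the book): for $f \equiv 1$ on $(0,1)$ one has $(I^\alpha_{0+} 1)(x) = x^\alpha/\Gamma(\alpha+1)$, so $\|I^\alpha_{0+} 1\|_{L^1(0,1)} = 1/\Gamma(\alpha+2)$, which indeed stays below the bound $(b-a)^\alpha \|f\|_{L^1}/\Gamma(\alpha+1)$.

```python
import math

def rl_integral(f, x, alpha, n=1000):
    # (I_{0+}^alpha f)(x) via the substitution s = (x - t)^alpha, which
    # removes the singularity at t = x:
    # (I f)(x) = (1 / (alpha * Gamma(alpha))) * int_0^{x^alpha} f(x - s^(1/alpha)) ds.
    if x <= 0.0:
        return 0.0
    h = x ** alpha / n
    total = sum(f(x - (h * (k + 0.5)) ** (1.0 / alpha)) for k in range(n)) * h
    return total / (alpha * math.gamma(alpha))

# L^1 norm of I^alpha 1 over (0,1) by the midpoint rule, against the
# closed form 1/Gamma(alpha+2) and the bound 1/Gamma(alpha+1).
alpha, m = 0.6, 400
norm_If = sum(rl_integral(lambda t: 1.0, (j + 0.5) / m, alpha) for j in range(m)) / m
assert abs(norm_If - 1.0 / math.gamma(alpha + 2.0)) < 1e-3
assert norm_If <= 1.0 / math.gamma(alpha + 1.0)
```

The quadrature routine, grid sizes, and choice of $\alpha$ are ours and purely illustrative.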


It is interesting that we can rewrite $I^\alpha_{a+}$ in terms of $I^\alpha_{0+}$: if for a function $f \in L^1(a,b)$ we set $f_a(t) = f(t+a)$, so that $f_a \in L^1(0, b-a)$, then by the substitution $t = s + a$ we get
\[ (I^\alpha_{a+} f)(x) = \frac{1}{\Gamma(\alpha)} \int_a^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt = \frac{1}{\Gamma(\alpha)} \int_0^{x-a} \frac{f(s+a)}{(x-a-s)^{1-\alpha}}\,ds = (I^\alpha_{0+} f_a)(x-a). \tag{1.3} \]

A similar argument holds for $I^\alpha_{b-}$. Indeed, setting $f_b(t) = f(t+b)$,
\[ (I^\alpha_{b-} f)(x) = \frac{1}{\Gamma(\alpha)} \int_x^b \frac{f(t)}{(t-x)^{1-\alpha}}\,dt = \frac{1}{\Gamma(\alpha)} \int_{x-b}^0 \frac{f(s+b)}{(s-(x-b))^{1-\alpha}}\,ds = (I^\alpha_{0-} f_b)(x-b). \tag{1.4} \]

Finally, if we have a function $f \in L^1(-b, 0)$, then we can define $\tilde f \in L^1(0,b)$ as $\tilde f(t) = f(-t)$, so that
\[ (I^\alpha_{0-} f)(x) = \frac{1}{\Gamma(\alpha)} \int_x^0 \frac{f(t)}{(t-x)^{1-\alpha}}\,dt = \frac{1}{\Gamma(\alpha)} \int_0^{-x} \frac{f(-s)}{(-x-s)^{1-\alpha}}\,ds = (I^\alpha_{0+} \tilde f)(-x). \tag{1.5} \]
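Relation (1.3) can be verified numerically. In the sketch below (ours; the quadrature routine and parameter choices are arbitrary), the two sides agree to machine precision, because the substitution behind (1.3) maps both computations to the same Riemann sums.

```python
import math

def rl(f, lower, x, alpha, n=1500):
    # (I_{lower+}^alpha f)(x), midpoint rule after the substitution s = (x - t)^alpha.
    if x <= lower:
        return 0.0
    h = (x - lower) ** alpha / n
    total = sum(f(x - (h * (k + 0.5)) ** (1.0 / alpha)) for k in range(n)) * h
    return total / (alpha * math.gamma(alpha))

alpha, a, x = 0.5, 2.0, 2.7
lhs = rl(math.sin, a, x, alpha)                         # (I_{a+}^alpha f)(x)
rhs = rl(lambda t: math.sin(t + a), 0.0, x - a, alpha)  # (I_{0+}^alpha f_a)(x - a)
assert abs(lhs - rhs) < 1e-9
```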

The previous relations tell us that we can restrict our considerations to the case of $I^\alpha_{0+}$. The latter can be written in terms of the convolution product, whose definition we now recall. For two measurable functions $f, g : \mathbb{R} \to \mathbb{R}$, we define their convolution product $f * g$ by
\[ (f * g)(x) = \int_{\mathbb{R}} f(x-y)\,g(y)\,dy \]
for all $x \in \mathbb{R}$ such that the integral is well-defined. If $f : (a,b) \to \mathbb{R}$, then we extend it to $f : \mathbb{R} \to \mathbb{R}$ by setting $f(x) = 0$ for $x \notin (a,b)$. We recall here the Young convolution inequality (see [9, Proposition 1.3.2] and [36]).

Theorem 1.6. Let $1 \le p, q, r \le \infty$ be such that $\frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r}$ (where $\frac{1}{\infty} = 0$). If $f \in L^p(\mathbb{R})$ and $g \in L^q(\mathbb{R})$, then $f * g \in L^r(\mathbb{R})$, and there exists a constant $B_{p,q} \in (0,1)$, independent of $f, g$, such that
\[ \|f * g\|_{L^r(\mathbb{R})} \le B_{p,q}\, \|f\|_{L^p(\mathbb{R})} \|g\|_{L^q(\mathbb{R})}. \]

Further properties of the convolution product are stated in the following theorem.

Theorem 1.7. (i) Let $1 < p, q < \infty$ be such that $\frac{1}{p} + \frac{1}{q} = 1$, and let $f \in L^p(\mathbb{R})$ and $g \in L^q(\mathbb{R})$. Then $f * g \in \mathcal{C}(\mathbb{R})$.
(ii) Let $f \in L^1(\mathbb{R})$ and $g \in L^\infty(\mathbb{R})$. Then $f * g \in \mathcal{C}_b(\mathbb{R})$ and is uniformly continuous.
(iii) Let $f \in L^1(\mathbb{R})$ and $g \in \mathcal{C}_0(\mathbb{R})$. Then $f * g \in \mathcal{C}_0(\mathbb{R})$.

(iv) Let $f \in L^p(\mathbb{R})$ for some $1 \le p \le \infty$, and let $g \in \mathcal{C}^m_c(\mathbb{R})$ for some $m \in \mathbb{N}$. Then $f * g \in \mathcal{C}^m(\mathbb{R})$, and
\[ \frac{d^j}{dx^j}(f * g)(x) = \big(f * g^{(j)}\big)(x) \]
for all $j = 1, \ldots, m$.

Now we represent $I^\alpha_{0+}$ by means of convolution products. For any $f \in L^1(0,b)$, we can extend it to $f \in L^1(\mathbb{R})$ by setting $f(x) = 0$ for all $x \notin (0,b)$. Defining
\[ \iota_\alpha(x) = \frac{x_+^{\alpha-1}}{\Gamma(\alpha)}, \tag{1.6} \]
where $x_+ := \max\{0, x\}$, we have
\[ (I^\alpha_{0+} f)(x) = (\iota_\alpha * f)(x), \tag{1.7} \]

so that Lemmas 1.4 and 1.5 are direct consequences of the Young convolution inequality (Theorem 1.6). Such an inequality also guarantees an improvement of integrability.

Theorem 1.8. (i) Let $0 < \alpha < 1$, $1 \le q < \frac{1}{1-\alpha}$, and $1 \le p \le \frac{q}{q-1}$. Then there exists a constant $0 < B_{p,q} < 1$ such that for all $f \in L^p(a,b)$,
\[ \big\|I^\alpha_{a+} f\big\|_{L^r(a,b)} \le B_{p,q} \|f\|_{L^p(a,b)}, \tag{1.8} \]
where $r = \frac{pq}{q+p-pq}$.
(ii) Let $0 < \alpha < 1$ and $1 \le p < \frac{1}{\alpha}$. Then for every $r < \frac{p}{1-\alpha p} =: p^*_\alpha$, there exists a constant $0 < B_{p,r} < 1$ such that for all $f \in L^p(a,b)$,
\[ \big\|I^\alpha_{a+} f\big\|_{L^r(a,b)} \le B_{p,r} \|f\|_{L^p(a,b)}. \tag{1.9} \]
(iii) Let $0 < \alpha < 1$. If $f \in L^p(a,b)$ for some $p > \frac{1}{\alpha}$, then $I^\alpha_{a+} f \in \mathcal{C}[a,b]$.
(iv) Let $\alpha > 1$ and $n \le \alpha$ for some positive integer $n \ge 1$. Then for all $f \in L^1(a,b)$, $I^\alpha_{a+} f \in \mathcal{C}^{n-1}[a,b]$.
The same inequalities hold for $I^\alpha_{b-}$.

Proof. From the previous discussion, we only prove the inequalities for $a = 0$. To establish item (i), note that for $\alpha \in (0,1)$, the function $\iota_\alpha$ from (1.6) belongs to $L^q(a,b)$ if and only if $q < \frac{1}{1-\alpha}$. To guarantee that $\frac{1}{p} + \frac{1}{q} \ge 1$, we need the condition $p \le \frac{q}{q-1}$. Inequality (1.8) now follows from the Young convolution inequality (Theorem 1.6) thanks to (1.7).

Concerning item (ii), let $1 \le p < \frac{1}{\alpha}$ and $1 \le r < \frac{p}{1-\alpha p}$. If $r \le p$, then the result follows from Lemma 1.5 and the fact that $L^p(a,b)$ is continuously embedded in $L^r(a,b)$,


i. e., $\|f\|_{L^r(a,b)} \le C_{r,p} \|f\|_{L^p(a,b)}$ with a constant $C_{r,p} > 0$ independent of $f$. If $p < r < \frac{p}{1-\alpha p}$, then set $q = \frac{rp}{rp+p-r}$. Since $r > p$, we obtain that $q > 1$. To show that $q < \frac{1}{1-\alpha}$, note that $\frac{1}{q} = 1 + \frac{1}{r} - \frac{1}{p} > 1 - \alpha$, precisely because $r < \frac{p}{1-\alpha p}$; hence $\iota_\alpha \in L^q(0,b)$, and (1.9) again follows from the Young convolution inequality. Concerning item (iii), since $p > \frac{1}{\alpha}$, the conjugate exponent $q = \frac{p}{p-1} < \frac{1}{1-\alpha}$; thus $\iota_\alpha \in L^q(0,b)$, and we have the continuity by the properties of the convolution product (see Theorem 1.7). Item (iv) follows from $\iota_\alpha \in \mathcal{C}^{n-1}_b[0,b]$ if $\alpha \ge n$ (see Theorem 1.7).

Definition 1.9. The left- and right-sided Riemann–Liouville fractional integrals on ℝ of a measurable function $f : \mathbb{R} \to \mathbb{R}$ are defined as
\[ (I^\alpha_+ f)(x) := \frac{1}{\Gamma(\alpha)} \int_{-\infty}^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt \quad \text{and} \quad (I^\alpha_- f)(x) := \frac{1}{\Gamma(\alpha)} \int_x^{\infty} \frac{f(t)}{(t-x)^{1-\alpha}}\,dt, \]
respectively. The function $f \in \mathrm{Dom}(I^\alpha_\pm)$ if the corresponding integrals converge for a. a. $x \in \mathbb{R}$.

According to [232, Theorem 5.3], we have the inclusion $L^p(\mathbb{R}) \subset \mathrm{Dom}(I^\alpha_\pm)$, $1 \le p < \frac{1}{\alpha}$. Moreover, let us formulate the following Hardy–Littlewood theorem.

Theorem 1.10. Let $1 \le p, q < \infty$ and $0 < \alpha < 1$. Then the operators $I^\alpha_\pm$ are bounded from $L^p(\mathbb{R})$ to $L^q(\mathbb{R})$ if and only if $1 < p < \frac{1}{\alpha}$ and $q = p(1-\alpha p)^{-1} =: p^*_\alpha$, and in this case there exists a constant $C_{p,\alpha}$ such that
\[ \big\|I^\alpha_\pm f\big\|_{L^q(\mathbb{R})} \le C_{p,\alpha} \|f\|_{L^p(\mathbb{R})}. \tag{1.10} \]
An inequality of the same form holds for $I^\alpha_{a+(b-)}$.

Although we know some conditions under which a function $f$ admits a fractional integral, it is clear that obtaining an explicit formula for the fractional integral of a function is not always possible. However, there are some cases in which we can actually evaluate the fractional integral by classical integration rules, possibly with the help of some special functions. One of them is the following lemma, whose proof we leave to the reader (see Exercise 1.1).

Lemma 1.11. Fix $a \in \mathbb{R}$ and $\beta > 0$, and let $f(x) = (x-a)^{\beta-1}$. Then
\[ (I^\alpha_{a+} f)(x) = \frac{B(\alpha,\beta)}{\Gamma(\alpha)}\,(x-a)^{\alpha+\beta-1} = \frac{\Gamma(\beta)}{\Gamma(\alpha+\beta)}\,(x-a)^{\alpha+\beta-1}, \qquad x \ge a, \]
where $B(\alpha,\beta)$ is the Euler Beta function; see Appendix B.3.

With this in mind, we can prove a quite important property, namely the semigroup property (w. r. t. the order of integration) of the fractional integral.
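The power-function identity of Lemma 1.11 lends itself to a quick numerical spot check. The routine below is our own illustrative sketch (not the book's method): it evaluates $I^\alpha_{0+}$ by the midpoint rule after the substitution $s = (x-t)^\alpha$, which removes the endpoint singularity.

```python
import math

def rl0(f, x, alpha, n=2000):
    # (I_{0+}^alpha f)(x), midpoint rule after the substitution s = (x - t)^alpha.
    h = x ** alpha / n
    total = sum(f(x - (h * (k + 0.5)) ** (1.0 / alpha)) for k in range(n)) * h
    return total / (alpha * math.gamma(alpha))

# Lemma 1.11 with a = 0:
#   I_{0+}^alpha t^{beta-1} = Gamma(beta)/Gamma(alpha+beta) * x^{alpha+beta-1}.
alpha, beta, x = 0.5, 2.0, 1.3
approx = rl0(lambda t: t ** (beta - 1.0), x, alpha)
exact = math.gamma(beta) / math.gamma(alpha + beta) * x ** (alpha + beta - 1.0)
assert abs(approx - exact) < 1e-4
```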

Theorem 1.12 (Semigroup property). (i) For all $f \in L^1(a,b)$,
\[ I^\alpha_{a+} I^\beta_{a+} f = I^{\alpha+\beta}_{a+} f, \qquad I^\alpha_{b-} I^\beta_{b-} f = I^{\alpha+\beta}_{b-} f \]
a. e. If $\alpha + \beta > 1$, then these equalities hold for every $x \in [a,b]$.
(ii) Let $f \in L^p(\mathbb{R})$, and let $\alpha, \beta > 0$ be such that $\alpha + \beta < \frac{1}{p}$. Then
\[ I^\alpha_\pm I^\beta_\pm f = I^{\alpha+\beta}_\pm f. \]

Proof. Note that $I^\alpha_{a+} I^\beta_{a+} f$ converges absolutely for all $f \in L^1(a,b)$. Thus by the Fubini theorem we have
\begin{align*}
\big(I^\alpha_{a+} I^\beta_{a+} f\big)(x) &= \frac{1}{\Gamma(\alpha)\Gamma(\beta)} \int_a^x (x-t)^{\alpha-1} \int_a^t (t-u)^{\beta-1} f(u)\,du\,dt \\
&= \frac{1}{\Gamma(\beta)} \int_a^x f(u) \left( \frac{1}{\Gamma(\alpha)} \int_u^x (x-t)^{\alpha-1} (t-u)^{\beta-1}\,dt \right) du \\
&= \frac{\Gamma(\beta)}{\Gamma(\alpha+\beta)\Gamma(\beta)} \int_a^x f(u)\,(x-u)^{\alpha+\beta-1}\,du = \big(I^{\alpha+\beta}_{a+} f\big)(x),
\end{align*}
where the inner integral in the second line is the fractional integral $I^\alpha_{u+}$ of the function $(t-u)^{\beta-1}$, and we used Lemma 1.11. If $\alpha + \beta > 1$, then the right-hand side is in $\mathcal{C}[a,b]$. Moreover, by item (ii) of Theorem 1.8 we know that $I^\beta_{a+} f \in L^r(a,b)$ for all $r < \frac{1}{1-\beta}$. However, $1 - \beta < \alpha$, and thus we can consider $\frac{1}{\alpha} < r < \frac{1}{1-\beta}$. The continuity of $I^\alpha_{a+} I^\beta_{a+} f$ then follows from item (iii) of Theorem 1.8. The same direct proof holds for item (ii), once we observe that $p^*_\beta < \frac{1}{\alpha}$ if and only if $p < \frac{1}{\alpha+\beta}$.

Remark 1.13. We also proved that $I^\alpha_{a+} I^\beta_{a+} f$ has a continuous version if $\alpha + \beta = 1$.
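The semigroup property can likewise be spot-checked numerically. The following sketch is our illustration (the quadrature routine and parameter choices are arbitrary): it nests two midpoint-rule evaluations of the Riemann–Liouville integral of $f \equiv 1$ and compares against a single integral of order $\alpha + \beta$.

```python
import math

def rl0(f, x, alpha, n=300):
    # (I_{0+}^alpha f)(x), midpoint rule after the substitution s = (x - t)^alpha.
    if x <= 0.0:
        return 0.0
    h = x ** alpha / n
    total = sum(f(x - (h * (k + 0.5)) ** (1.0 / alpha)) for k in range(n)) * h
    return total / (alpha * math.gamma(alpha))

alpha, beta, x = 0.4, 0.8, 1.0
one = lambda t: 1.0
nested = rl0(lambda t: rl0(one, t, beta), x, alpha)  # (I^alpha I^beta 1)(x)
direct = rl0(one, x, alpha + beta)                   # (I^{alpha+beta} 1)(x)
assert abs(nested - direct) < 1e-3
```

Both quantities approximate $x^{\alpha+\beta}/\Gamma(\alpha+\beta+1)$, in line with Lemma 1.11 applied twice.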

Another result we have to recall is the following integration-by-parts formula.

Theorem 1.14 (Integration by parts). (i) Let $\alpha \in (0,1)$ and $p, q > 1$ be such that $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$. Then, for all $f \in L^p(a,b)$ and $g \in L^q(a,b)$, we have the following integration-by-parts formula for fractional integrals:
\[ \int_a^b g(x)\,\big(I^\alpha_{a+} f\big)(x)\,dx = \int_a^b f(x)\,\big(I^\alpha_{b-} g\big)(x)\,dx. \tag{1.11} \]
(ii) Let $\alpha \in (0,1)$ and $p, q \ge 1$ be such that $\frac{1}{p} + \frac{1}{q} < 1 + \alpha$. Then, for all $f \in L^p(a,b)$ and $g \in L^q(a,b)$, the integration-by-parts formula (1.11) still holds.
(iii) Let $\alpha \ge 1$. Then, for all $f, g \in L^1(a,b)$, the integration-by-parts formula (1.11) still holds.


(iv) Let $f \in L^p(\mathbb{R})$, $g \in L^q(\mathbb{R})$, $p > 1$, $q > 1$, and $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$ for $\alpha \in (0,1)$. Then
\[ \int_{\mathbb{R}} g(x)\,\big(I^\alpha_+ f\big)(x)\,dx = \int_{\mathbb{R}} f(x)\,\big(I^\alpha_- g\big)(x)\,dx. \tag{1.12} \]



Proof. Let us prove item (i). First of all, we have to verify that $g\,I^\alpha_{a+} f,\ f\,I^\alpha_{b-} g \in L^1(a,b)$. Note that the conditions $\frac{1}{p} + \frac{1}{q} = 1 + \alpha$ and $p, q > 1$ imply that $p, q < \frac{1}{\alpha}$. Indeed, if $q \ge \frac{1}{\alpha}$, then we have $\frac{1}{p} = 1 + \alpha - \frac{1}{q} \ge 1$, a contradiction. Next, note that $\frac{1}{p^*_\alpha} + \frac{1}{q} = \frac{1-\alpha p}{p} + \frac{1}{q} = 1$, and thus by the Hölder inequality we have that $g\,I^\alpha_{a+} f \in L^1(a,b)$. The same holds for $f\,I^\alpha_{b-} g$. To prove the equality, note that
\[ \int_a^b g(x)\,\big(I^\alpha_{a+} f\big)(x)\,dx = \frac{1}{\Gamma(\alpha)} \int_a^b g(x) \int_a^x (x-t)^{\alpha-1} f(t)\,dt\,dx = \frac{1}{\Gamma(\alpha)} \int_a^b f(t) \int_t^b (x-t)^{\alpha-1} g(x)\,dx\,dt = \int_a^b f(t)\,\big(I^\alpha_{b-} g\big)(t)\,dt, \]
where we used the Fubini theorem. Exactly the same proof holds for item (iv). The proofs of items (ii) and (iii) differ only in the way we determine that $f\,I^\alpha_{b-} g,\ g\,I^\alpha_{a+} f \in L^1(a,b)$.

Concerning item (ii), let us distinguish two cases. If $p, q > 1$, then there exist $1 < p' < p$ and $1 < q' < q$ such that $\frac{1}{p'} + \frac{1}{q'} = 1 + \alpha$, and we reduce the proof to item (i). If $p = 1$ (or, equivalently, $q = 1$), then we must have $q > \frac{1}{\alpha}$. In particular, $g \in L^{\frac{1}{\alpha}}(a,b)$, $I^\alpha_{a+} f \in L^{\frac{1}{1-\alpha}}(a,b)$, and then $g\,I^\alpha_{a+} f \in L^1(a,b)$. At the same time, since $q > \frac{1}{\alpha}$, we have $I^\alpha_{b-} g \in L^\infty(a,b)$, and then $f\,I^\alpha_{b-} g \in L^1(a,b)$.

Finally, item (iii) follows once we note that $I^\alpha_{a+} f, I^\alpha_{b-} g \in L^\infty(a,b)$.

1 , α

Finally, let us note that for α ∈ (0, 1), the fractional integral is injective. Lemma 1.15. Let 0 < α < 1, and let f ∈ L1 (a, b) (possibly, a = −∞ and b = +∞) be such α that (Ia+(b−) f )(x) = 0 for a. a. x ∈ (a, b). Then f (x) = 0 for a. a. x ∈ (a, b). Proof. Let us first consider the case −∞ < a < b < +∞. It is clear that if we prove the α α α statement for Ia+ , then it is also true for Ib− . Thus let us suppose (Ia+ f )(x) = 0 for a. a. x ∈ (a, b). For any r ∈ [a, b], we have r

0 = ∫(r − z) a

z −α

∫(z − t) a

α−1

r

r

f (t)dt dz = ∫ f (t) ∫(r − z) (z − t) −α

a

t

α−1

r

dz dt = B(α, 1 − α) ∫ f (t)dt, a

where we used Fubini theorem and Lemma 1.11. Thus, since B(α, 1 − α) > 0, we have r that ∫a f (t)dt = 0 for a. a. r ∈ (a, b), which implies f (x) = 0 for a. a. x ∈ (a, b) (see for r

instance, [226, Chapter 6, Lemma 13]). Analogously, for I+α , we prove that ∫−∞ f (t)dt = 0 b

for a. a. r ∈ ℝ: this is sufficient to guarantee that ∫a f (t)dt = 0 for all a < b, and then

10 � 1 Fractional integrals and derivatives we conclude the proof as in the previous case, since a, b are arbitrary. The same holds for I−α . Remark 1.16. Clearly, the previous lemma is also true for α = 1. Moreover, the semigroup property guarantees that such a result is true even for α > 1.

1.2 Hölder property of fractional integrals

Let $\mathcal{C}^\lambda[a,b]$ be the space of Hölder-continuous functions of order $\lambda$, and let $\mathrm{Lip}[a,b]$ be the space of Lipschitz-continuous functions on the interval $[a,b]$. Namely, a function $f \in \mathcal{C}^\lambda[a,b]$ for some $\lambda \in (0,1)$ if
\[ [f]_{\mathcal{C}^\lambda[a,b]} = \sup_{\substack{x,y \in [a,b] \\ x \ne y}} \frac{|f(x)-f(y)|}{|x-y|^\lambda} < \infty. \]
This quantity is called the Hölder constant of $f$ and satisfies
\[ |f(x) - f(y)| \le [f]_{\mathcal{C}^\lambda[a,b]}\,|x-y|^\lambda, \qquad x, y \in [a,b]. \]
If $\lambda = 1$, then we denote
\[ [f]_{\mathrm{Lip}[a,b]} = \sup_{\substack{x,y \in [a,b] \\ x \ne y}} \frac{|f(x)-f(y)|}{|x-y|} < \infty, \]
and we call this quantity the Lipschitz constant of $f$. The set of Lipschitz functions is denoted by $\mathrm{Lip}[a,b]$ to avoid ambiguity with $\mathcal{C}^1[a,b]$. In fact, $\mathcal{C}^1[a,b] \subset \mathrm{Lip}[a,b]$ (with $\|f'\|_{L^\infty(a,b)} = [f]_{\mathrm{Lip}[a,b]}$), and the inclusion is proper. Recall that the space $\mathcal{C}^\lambda[a,b]$ for $\lambda \in (0,1)$ (respectively, $\mathrm{Lip}[a,b]$) is a Banach space when equipped with the norm $\|f\|_{\mathcal{C}^\lambda[a,b]} = \|f\|_{\mathcal{C}[a,b]} + [f]_{\mathcal{C}^\lambda[a,b]}$ (respectively, $\|f\|_{\mathrm{Lip}[a,b]} = \|f\|_{\mathcal{C}[a,b]} + [f]_{\mathrm{Lip}[a,b]}$). We say that $f \in \mathcal{C}^\lambda_{\mathrm{loc}}(a,b)$ for some $\lambda \in (0,1)$ (respectively, $f \in \mathrm{Lip}_{\mathrm{loc}}(a,b)$), where $-\infty \le a < b \le +\infty$, if for any interval $[c,d] \subset (a,b)$, the restriction $f_{[c,d]} : [c,d] \to \mathbb{R}$ of $f$ to $[c,d]$ belongs to $\mathcal{C}^\lambda[c,d]$ (respectively, $\mathrm{Lip}[c,d]$). For further details on Lipschitz and Hölder functions, we refer to [72, 73].

First, let us prove that the fractional integral transforms an integrable function into a Hölder function. More precisely, we have the following result.

Lemma 1.17. Let $1 \le p \le \infty$ and $\alpha > \frac{1}{p}$.
(i) If $\alpha < \frac{1}{p} + 1$, then $\mathbb{I}^\alpha_\pm(L^p(a,b)) \subset \mathcal{C}^\lambda[a,b]$ for all $-\infty < a < b < \infty$, where $\lambda = \alpha - \frac{1}{p}$.
(ii) If $\alpha > \frac{1}{p} + 1$, then $\mathbb{I}^\alpha_\pm(L^p(a,b)) \subset \mathrm{Lip}[a,b]$.
(iii) If $\alpha = \frac{1}{p} + 1$, then $\mathbb{I}^\alpha_\pm(L^p(a,b)) \subset \mathcal{C}^\lambda[a,b]$ for all $\lambda \in (0,1)$.


Proof. It is clear that we only have to prove the statement for 𝕀α+ (Lp (0, b)). First, let us note that the case α = 1 is well known; see, for instance, [72, Section 5.6.2], and thus we can suppose α ≠ 1. Moreover, let us recall that for all 0 ≤ t1 < t2 ≤ b and s < t1 , (t2 − s)

α−1

− (t1 − s)

α−1

t2

= (α − 1) ∫(u − s)α−2 du.

(1.13)

t1

α Consider any function of the form f (x) = (I0+ g)(x) for some g ∈ Lp (0, b). Then

󵄨󵄨 t2 󵄨󵄨 t1 󵄨󵄨 1 󵄨󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 α−1 α−1 󵄨󵄨f (t2 ) − f (t1 )󵄨󵄨 = 󵄨󵄨∫(t2 − s) g(s)ds − ∫(t1 − s) g(s)ds󵄨󵄨󵄨 󵄨󵄨 Γ(α) 󵄨󵄨󵄨 󵄨󵄨 󵄨0 0 󵄨󵄨 t2 󵄨󵄨 󵄨󵄨 t1 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨󵄨 1 󵄨󵄨 󵄨󵄨 󵄨 α−1 α−1 α−1 󵄨 󵄨 󵄨 ≤ (󵄨󵄨∫(t2 − s) g(s)ds󵄨󵄨 + 󵄨󵄨∫ ((t2 − s) − (t1 − s) ) g(s)ds󵄨󵄨󵄨) 󵄨󵄨 Γ(α) 󵄨󵄨 󵄨󵄨 󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨t1 󵄨󵄨 󵄨 0 1 = (I (t , t ) + I2 (t1 , t2 )). (1.14) Γ(α) 1 1 2 Now let q ≥ 1 be the conjugate exponent of p, i. e., q1 + p1 = 1. Then the condition αp > 1 implies that (α − 1)q > −1. Hence we can use Hölder inequality to get 1 q

t2

I1 (t1 , t2 ) ≤ (∫(t2 − s) t1

(α−1)q

) ‖g‖Lp (0,b) =

(t2 − t1 )

α− p1 1

((α − 1)q + 1) q

(1.15)

‖g‖Lp (0,b) .

Now we need to handle I2 (t1 , t2 ). First, note that I2 (0, t2 ) = 0, and thus we can suppose t1 > 0. Let us first consider the case α ≠ 1 + p1 . Using (1.13), Hölder inequality, and the generalized Minkowski integral inequality (see Theorem 1.3), we have t1

t2

0

t1

t1

t2

0

t1

q

1 q

󵄨 󵄨 I2 (t1 , t2 ) ≤ |α − 1|∫ (∫(u − s)α−2 du) 󵄨󵄨󵄨g(s)󵄨󵄨󵄨ds ≤ |α − 1|‖g‖Lp (0,b) (∫ (∫(u − s)α−2 du) ) ds t2

t1

1 q

≤ |α − 1|‖g‖Lp (0,b) ∫ (∫(u − s)(α−2)q ds) du ≤ t1

=

|α − 1|‖g‖Lp (0,b) |(α − 2)q + 1|

1 q

0

|α − 1|‖g‖Lp (0,b) 1

|(α − 2)q + 1| q

t2

󵄨󵄨 α−2+ q1 α−2+ q1 󵄨󵄨󵄨 −u ∫ 󵄨󵄨󵄨(u − t1 ) 󵄨󵄨 du 󵄨 󵄨

t1

t2

󵄨󵄨 α−1− p1 α−1− p1 󵄨󵄨󵄨 −u ∫ 󵄨󵄨󵄨(u − t1 ) 󵄨󵄨 du. 󵄨 󵄨

(1.16)

t1

Now let us distinguish two cases. If α < 1 + p1 , then the function t 󳨃→ t

α−1− p1

is decreasing.

12 � 1 Fractional integrals and derivatives Since u − t1 < u, we get from (1.16) that I2 (t1 , t2 ) ≤



|α − 1|‖g‖Lp (0,b) |(α − 2)q + 1|

1 q

t2

∫ ((u − t1 )

α−1− p1

−u

t1

|α − 1|‖g‖Lp (0,b) 1

|(α − 2)q + 1| q

t2

∫(u − t1 )

α−1− p1

t1

du =

α−1− p1

) du

|α − 1|‖g‖Lp (0,b) (t2 − t1 )

I2 (t1 , t2 ) ≤



|α − 1|‖g‖Lp (0,b) |(α − 2)q + 1|

1 q

+ 1. Then the function t 󳨃→ t

α−1− p1

t1

|α − 1|‖g‖Lp (0,b) |(α − 2)q + 1|

t2

∫ (u

1 p

1 q

t2

∫u

α−1− p1

t1

− (u − t1 )

du ≤

.

1

|(α − 2)q + 1| q (α − p1 )

Plugging (1.15) and (1.17) into (1.14), we prove item (i). Let us now consider the case α > and from (1.16) we have

α− p1

α−1− p1

α−1− p1

(1.17)

is increasing,

) du

|α − 1|‖g‖Lp (0,b) b

α−1− p1

|(α − 2)q + 1|

1 q

(t2 − t1 )

.

(1.18)

Again, plugging (1.15) and (1.18) into (1.14), we conclude the proof of item (ii) Let us stress that all the previous arguments hold even if p = ∞. The case α = p1 + 1 is more complicated. First of all, since we assume that α ≠ 1, we exclude the case p = ∞. Let us stress that in this case, (1.15) still holds, and thus we only have to argue differently with I2 (t1 , t2 ). The condition α = p1 + 1 implies that (α − 2)q = −1, and thus the same argument as before leads to t2

1 q

t1

t2

1

I2 (t1 , t2 ) ≤ |α − 1|‖g‖Lp (0,b) ∫ (∫(u − s) ds) du = |α − 1|‖g‖Lp (0,b) ∫ log q ( −1

t1

t1

0

u ) du. u − t1

Using the change of variables z = u − t1 , we get I2 (t1 , t2 ) ≤ |α − 1|‖g‖Lp (0,b)

t2 −t1

t2 −t1

1 t b ∫ log (1 + 1 ) dz ≤ |α − 1|‖g‖Lp (0,b) ∫ log q (1 + ) dz. z z 1 q

0

0

We leave to the reader (see Exercise 1.6) to prove that for each λ ∈ (0, 1), there exists a constant Cλ such that I2 (t1 , t2 ) ≤ Cλ |α − 1|‖g‖Lp (0,b) (t2 − t1 )λ .

(1.19)

Combining (1.15) and (1.19), we conclude the proof of item (iii). Now let us prove that the fractional integral increases the Hölder index of the function. More precisely, we have the following result.


α Theorem 1.18. Let φ ∈ 𝒞 λ (a, b). Then for each α > 0, Ia+ (φ(⋅) − φ(a)) ∈ 𝒞 λ+α [a, b] if α λ + α < 1 and Ia+ (φ(⋅) − φ(a)) ∈ Lip[a, b] if λ + α ≥ 1.

Proof. It is clear that, without loss of generality, we can suppose φ(a) = 0 and a = 0. Let α f (x) = (I0+ φ)(x) and consider 0 ≤ t1 < t2 ≤ b. If t1 = 0, then we have t2

t2

0

0

|φ(u)| |φ(u) − φ(0)| 1 1 󵄨󵄨 󵄨 du = du ∫ ∫ 󵄨󵄨f (t2 ) − f (t1 )󵄨󵄨󵄨 ≤ Γ(α) (t2 − u)1−α Γ(α) (t2 − u)1−α t2



C CB(α, λ + 1) α+λ t2 , ∫(t2 − u)α−1 uλ du = Γ(α) Γ(α) 0

where C > 0 is the Hölder constant of φ, and we used Lemma 1.11. Now let us consider t1 > 0. First note that for all s > t1 > 0, analogously to (1.13), (s − t1 )

α−1

−s

α−1

t1

= −(α − 1) ∫(s − z)α−2 dz.

(1.20)

0

Thanks to such identity, we get t2

α−1

∫ [(s − t1 ) t1 [

φ(s) − s

α−1

t1

φ(s) + (α − 1) ∫(s − z)α−2 φ(s)dz] ds = 0. 0 ]

(1.21)

Let us also note that by the Fubini theorem (which holds since φ is bounded) t2 t1

(α − 1) ∫ ∫(s − z)

α−2

t1 0

t1

t2

0

t1

φ(z)dzds = (α − 1) ∫ φ(z) ∫(s − z)α−2 dsdz

t1

t2

t1

0

t1

0

= (α − 1) ∫ φ(z) ∫(s − z)α−2 dsdz = ∫ ((t2 − z)α−1 − (t1 − z)α−1 ) φ(z)dz.

(1.22)

Finally, let us note that t2

∫(t2 − s)α−1 ds =

t1

t2

(t2 − t1 )α = ∫(s − t1 )α−1 ds α t1

and t2

t2

∫(t2 − s)α−1 φ(t1 )ds − ∫(s − t1 )α−1 φ(t1 )ds = 0.

t1

t1

(1.23)

14 � 1 Fractional integrals and derivatives Now we can consider the function f and mention that t1

f (t2 ) − f (t1 ) =

t2

φ(s) ds 1 1 ∫ ((t2 − s)α−1 − (t1 − s)α−1 ) φ(s)ds + ∫ Γ(α) Γ(α) (t2 − s)1−α t1

0

t2 t1

=

t2

1 [ (α − 1) ∫ ∫(s − z)α−2 ds φ(z)dz + ∫(t2 − s)α−1 φ(s)ds] , Γ(α) t1 0 t1 [ ]

where we used (1.22). Subtracting the left-hand sides of (1.21) and (1.23) from the righthand side of the previous equality, we get t2 t1

t2

φ(s) − φ(t1 ) φ(z) − φ(s) 1 [ f (t2 ) − f (t1 ) = (α − 1) ∫ ∫ dz ds + ∫ ds Γ(α) (t2 − s)1−α (s − z)2−α t1 0 t1 [ t2

t2

φ(t1 ) − φ(s) φ(s) − φ(0) ] +∫ ds + ∫ ds , 1−α (s − t1 ) s1−α t1 t1 ] where we also used the fact that φ(0) = 0. By the triangle inequality and the Hölder continuity of φ we have t2 t1

t2

|φ(s) − φ(t1 )| |φ(z) − φ(s)| 1 [ 󵄨󵄨 󵄨 |α − 1| ∫ ∫ dz + ∫ ds 󵄨󵄨f (t2 ) − f (t1 )󵄨󵄨󵄨 ≤ 2−α Γ(α) (t2 − s)1−α (s − z) t t [ 1 0 1 t2

t2

|φ(t1 ) − φ(s)| |φ(s) − φ(0)| ] +∫ ds + ∫ ds (s − t1 )1−α sα−1 t1 t1 ] t2 t1



C [ |α − 1| ∫ ∫(s − z)α+λ−2 dzds Γ(α) t1 0 [ t2

t2

t2

+ ∫(t2 − s)α−1 (s − t1 )λ ds + ∫(s − t1 )α+λ−1 ds + ∫ sα+λ−1 ds] t1 t1 t1 ] t2

=

C [ 󵄨󵄨 󵄨 ∫ 󵄨󵄨󵄨(s − t1 )α+λ−1 − sα+λ−1 󵄨󵄨󵄨󵄨 ds Γ(α) [t1 +B(α, λ + 1)(t2 − t1 )α+λ +

(t2 − t1 )α+λ t2α+λ − t1α+λ ] + , α+λ α+λ ]

(1.24)

where we also used Lemma 1.11. Now let us distinguish two cases. On the one hand, if α + λ < 1, then we have that |(s − t1 )α+λ−1 − sα+λ−1 | ≤ (s − t1 )α+λ−1 . Moreover, by the subadditivity of the function t α+λ we get t2α+λ − t1α+λ ≤ (t2 − t1 )α+λ . Applying these two

1.3 Power-integrable functions



15

observations to inequality (1.24), we get C 1 󵄨󵄨 󵄨 [ (t − t )α+λ + B(α, λ + 1)(t2 − t1 )α+λ 󵄨󵄨f (t2 ) − f (t1 )󵄨󵄨󵄨 ≤ Γ(α) α + λ 2 1 +

(t − t )α+λ 1 (t2 − t1 )α+λ + 2 1 ] = C(t2 − t1 )α+λ , α+λ α+λ

where C is a suitable constant. On the other hand, if α + λ ≥ 1, then we have |(s − t1 )α+λ−1 − sα+λ−1 | ≤ sα+λ−1 and, by the Lipschitz continuity, t2α+λ − t1α+λ ≤ (α + λ)bα+λ−1 (t2 − t1 ). Thus in this case, we get C 󵄨󵄨 󵄨 [bα+λ−1 (t2 − t1 ) + B(α, λ + 1)(t2 − t1 )α+λ 󵄨󵄨f (t2 ) − f (t1 )󵄨󵄨󵄨 ≤ Γ(α) 1 + (t − t )α+λ + bα+λ (t2 − t1 )] ≤ C(t2 − t1 ), α+λ 2 1 which concludes the proof.

1.3 Fractional integrals of power-integrable functions Now let us focus on the spaces 𝕀αa+(b−) (Lp (a, b)) for some p ≥ 1. It is clear that if α > 1 and f ∈ 𝕀αa+ (Lp (a, b)) for p ≥ 1, then we must have f (a) = 0. Moreover, let us stress that, by Lemma 1.15 and Remark 1.16, for every f ∈ 𝕀αa+ (Lp (a, b)), there exists a unique α φ ∈ Lp (a, b) such that f = Ia+ φ. For this reason, we can define a norm on 𝕀αa+ (Lp (a, b)) as α p α follows: for f ∈ 𝕀a+ (L (a, b)) such that f = Ia+ φ, we set ‖f ‖𝕀αa+ (Lp (a,b)) = ‖φ‖Lp (a,b) . With such a norm, the space 𝕀αa+ (Lp (a, b)) acquires a special structure. Proposition 1.19. For all −∞ < a < b < +∞, α > 0, and p ≥ 1, the normed space (𝕀αa+ (Lp (a, b)), ‖ ⋅ ‖𝕀αa+ (Lp (a,b)) ) is a Banach space. Moreover, if p = 2, then it is a Hilbert space with scalar product ⟨f , g⟩𝕀αa+ (L2 (a,b)) = ⟨φf , φg ⟩L2 (a,b) , α α where f = Ia+ φf and g = Ia+ φg . Finally, if (a, b) = ℝ, then the statement still holds under the additional condition 1 < p < α1 . α Proof. Let {fn }n∈ℕ be a Cauchy sequence in 𝕀αa+ (Lp (a, b)). Then we can write fn = Ia+ φn p for φn ∈ L (a, b). Moreover, by the definition of ‖ ⋅ ‖𝕀αa+ (Lp (a,b)) , {φn }n∈ℕ is a Cauchy sequence in Lp (a, b), which is a Banach space. Thus there exists φ ∈ Lp (a, b) such that α ‖φn − φ‖Lp (a,b) → 0. Defining f = Ia+ φ, we have

‖fn − f ‖𝕀αa+ (Lp (a,b)) = ‖φn − φ‖Lp (a,b) → 0,

16 � 1 Fractional integrals and derivatives concluding the proof. The additional condition in the case (a, b) = ℝ is needed to guarantee the well-posedness of 𝕀α+ (Lp (ℝ)). Before proceeding, let us recall some simple notions from real analysis. Fix −∞ < a < b < +∞. A function f : [a, b] → ℝ is said to be absolutely continuous on [a, b] if there exists a function g ∈ L1 (a, b) such that x

f (x) − f (a) = ∫ g(t)dt, a

x ∈ [a, b].

In such a case, f admits the derivative f ′ (x) = g(x) for a. a. x ∈ [a, b]. We denote by AC[a, b] the space of absolutely continuous functions on [a, b]. We say that for −∞ ≤ a < b ≤ +∞, a function f : (a, b) → ℝ is absolutely continuous on (a, b) if it is absolutely continuous on any closed subinterval [c, d] ⊂ (a, b). We denote by AC(a, b) the space of functions that are absolutely continuous on (a, b). For further properties of absolutely continuous functions, we refer to [226, 254]. Let −∞ ≤ a < b ≤ +∞. We say that a function f : (a, b) → ℝ admits a weak derivative g : (a, b) → ℝ if for all φ ∈ 𝒞c∞ (a, b), b

b

∫ f (x)φ (x)dx = − ∫ g(x)φ(x)dx. ′

a

a

For p ∈ [1, +∞], we denote by W 1,p (a, b) the space of functions f ∈ Lp (a, b) that admit a weak derivative g ∈ Lp (a, b). This set is called the Sobolev space. In such a case, we can show that f admits a version in AC[a, b] and that the weak derivative coincides a. e. with the classical derivative f ′ . The set W 1,p (a, b) is a Banach space once we equip it with the norm 󵄩 󵄩 ‖f ‖W 1,p (a,b) = ‖f ‖Lp (a,b) + 󵄩󵄩󵄩f ′ 󵄩󵄩󵄩Lp (a,b) . Moreover, we can show that each function f ∈ W 1,∞ (a, b) admits a version that belongs to Lip[a, b] (and [f ]Lip[a,b] = ‖f ′ ‖L∞ (a,b) ). We can relate the Sobolev space W 1,p (a, b) with Hölder spaces 𝒞 λ (a, b) (see [72, Section 5.6.2]) by means of the Morrey embedding theorem. Theorem 1.20. Let −∞ ≤ a < b ≤ +∞. There exists a constant Ca,b,p such that, for all f ∈ W 1,p (a, b) and λ = 1 − p1 , [f ]𝒞 λ [a,b] ≤ Ca,b,p ‖f ‖W 1,p (a,b) . For m ∈ ℕ, we denote by W m,p (a, b) the set of functions f : (a, b) → ℝ that admit the derivatives up to order m − 1 with f (j) ∈ Lp (a, b) for j = 0, . . . , m − 1 and f (m−1) ∈ W 1,p (a, b). As before, we can show that there exists a version of f that admits

1.3 Power-integrable functions

� 17

the mth-order derivative a. e., coinciding with the weak derivative of f (m−1) ∈ AC[a, b]. In general, considering f ∈ W m,p (a, b), we will always assume to be working with the 1,p version of f such that f (m−1) ∈ AC[a, b]. Finally, we denote by Wloc (a, b) the space of functions f : (a, b) → ℝ such that their restrictions f[c,d] : [c, d] → ℝ on any subinterval m,p [c, d] ⊂ (a, b) belong to W 1,p (c, d) and by Wloc (a, b) the space of functions f : (a, b) → ℝ p that admit derivatives a. e. up to order m − 1 with f (j) ∈ Lloc (a, b) for j = 0, . . . , m − 1 and 1,p

m,p

f (m−1) ∈ Wloc (a, b). Again, we can prove that there exists a version of f ∈ Wloc (a, b) such that f (m−1) ∈ AC(a, b). We will always assume to be working with such a version. Finally, for m ∈ ℕ, denote H m (a, b) := W m,2 (a, b), since W m,2 (a, b) is a Hilbert space equipped with the scalar product m b

⟨f , g⟩H m (a,b) = ∑ ∫ f (j) (x)g (j) (x)dx. j=0 a

We refer to [72, Chapter 5] for further notions on Sobolev spaces. The spaces 𝕀αa+ (Lp (a, b)) play, in some sense, the role of Sobolev spaces in the fractional setting. For this reason, we also set H α (a, b) := 𝕀αa+ (L2 (a, b)). In the following, we focus on the case α ∈ (0, 1). Precisely, we now prove that suitable Sobolev spaces are contained in 𝕀αa+ (Lp (a, b)) for p ≥ 1. Proposition 1.21. Let −∞ < a < b < +∞, α ∈ (0, 1). (i) If f ∈ AC[a, b], where AC[a, b] is the space of absolutely continuous functions on [a, b], α then f ∈ 𝕀αa+ (L1 (a, b)), where f = Ia+ φ with φ(z) =

f (a) 1−α ′ + (Ia+ f ) (z). Γ(1 − α)(z − a)α

(1.25)

(ii) If p < α1 , then W 1,p (a, b) ⊂ 𝕀αa+ (Lp (a, b)). (iii) If f ∈ W 1,p (a, b) with f (a) = 0, then f ∈ 𝕀αa+ (Lp (a, b)) for all α ∈ (0, 1). Proof. (i) Let f ∈ AC[a, b], and suppose there exists φ ∈ L1 (a, b) such that x

f (x) =

1 ∫(x − t)α−1 φ(t)dt, Γ(α) a

x ∈ (a, b).

(1.26)

We want to determine the form of the function φ. To do this, multiply (1.26) by (z − x)−α and integrate: z

z

x

f (x) φ(t) 1 dx = dt) dx ∫ ∫(z − x)−α (∫ (z − x)α Γ(α) (x − t)1−α a

z

a

z

a

z

1 = ∫ φ(t) ( ∫(z − x)−α (x − t)α−1 dx) dt = Γ(1 − α) ∫ φ(t)dt, (1.27) Γ(α) a

t

a

18 � 1 Fractional integrals and derivatives where we used Lemma 1.11. Therefore for a. a. z ∈ (a, b), z

f (x) 1 d φ(z) = (∫ dx) Γ(1 − α) dz (z − x)α a

=

z

1 d (z − a)1−α (z − x)1−α ′ ( f (a) + ∫ f (x)dx) Γ(1 − α) dz 1−α 1−α z

=

a

f (a) f ′ (x) f (a) 1 1−α ′ ( +∫ dx) = + (Ia+ f ) (z), (1.28) α Γ(1 − α) (z − a) (z − x)α Γ(1 − α)(z − a)α a

and we get (1.25). Now take into account that α ∈ (0, 1) and that φ ∈ L1 (a, b) according to Lemma 1.5. 1−α ′ (ii) Note that if f ∈ W 1,p (a, b), then f ′ ∈ Lp (a, b), and Ia+ f ∈ Lp (a, b) by Lemma 1.5, while (⋅ − a)α belongs to Lp (a, b) only if p < α1 . Thus we conclude that φ ∈ Lp (a, b) if p < α1 . (iii) If f ∈ W 1,p (a, b) and f (a) = 0, then 1−α ′ φ = Ia+ f ∈ Lp (a, b)

for all α ∈ (0, 1). Recalling that 𝕀αa+ : Lp (a, b) → Lp (a, b) by Lemma 1.5, we can show, thanks to the previous result, that 𝕀αa+ (Lp (a, b)) is in fact dense in Lp (a, b) for α small enough. Corollary 1.22. For 0 < α
1.4 Restrictions of fractional integrals

Lemma 1.23. Let $\alpha \in (0,1)$, $a > 0$, and $f(t) = t^{-\alpha}(t+a)^{-1}$, $t > 0$. Then for all $x > 0$,
$$\int_0^x (x-t)^{\alpha-1}f(t)\,dt = \frac{\pi}{\sin(\pi\alpha)}\,\frac{(a+x)^{\alpha-1}}{a^{\alpha}}.$$

Theorem 1.24. Let $\varphi \in L^p(\mathbb{R})$ for some $p \in (1,\frac{1}{\alpha})$. Then
$$\left(I^{\alpha}_{+}\varphi\right)\Big|_{\mathbb{R}^+}(x) = \left(I^{\alpha}_{0+}\phi\right)(x) \quad \text{a.e. on } \mathbb{R}, \tag{1.29}$$
where $\phi \in L^p(\mathbb{R})$ is defined as
$$\phi(x) = \begin{cases} \varphi(x) + \dfrac{\sin\alpha\pi}{\pi}\displaystyle\int_0^{\infty}\left(\dfrac{t}{x}\right)^{\alpha}\dfrac{\varphi(-t)}{x+t}\,dt, & x > 0,\\ 0, & x < 0. \end{cases}$$

Proof. First, let us prove that $\phi \in L^p(\mathbb{R})$. Indeed,
$$\int_{\mathbb{R}}|\phi(x)|^p\,dx \le C\left(\int_{\mathbb{R}}|\varphi(x)|^p\,dx + \int_0^{\infty}x^{-\alpha p}\left(\int_0^{\infty}\frac{t^{\alpha}|\varphi(-t)|}{t+x}\,dt\right)^p dx\right),$$
where $C$ is a constant depending only on $p$ and $\alpha$. We only need to prove that the second integral is finite. This is done by applying the generalized Minkowski inequality. On this way, taking into account that $\alpha < 1/p$ and using the change of variables $t = xz$, we get the following upper bound:
$$\int_0^{\infty}x^{-\alpha p}\left(\int_0^{\infty}\frac{t^{\alpha}|\varphi(-t)|}{t+x}\,dt\right)^p dx = \int_0^{\infty}\left(\int_0^{\infty}\frac{z^{\alpha}|\varphi(-xz)|}{1+z}\,dz\right)^p dx \le \left(\int_0^{\infty}\frac{z^{\alpha}}{1+z}\left(\int_0^{\infty}|\varphi(-xz)|^p\,dx\right)^{\frac1p}dz\right)^p \le \left(\int_0^{\infty}\frac{z^{\alpha-\frac1p}}{1+z}\,dz\right)^p\|\varphi\|^p_{L^p(\mathbb{R})} < \infty,$$
where in the last inequality we used again the change of variables $t = xz$.

Second, to prove (1.29), note that $(I^{\alpha}_{+}\phi)(x) = 0$ for $x \le 0$. Consider now $x > 0$ and notice that
$$\left(I^{\alpha}_{+}\phi\right)(x) = \frac{1}{\Gamma(\alpha)}\int_0^x (x-t)^{\alpha-1}\varphi(t)\,dt + \frac{\sin\pi\alpha}{\pi\Gamma(\alpha)}\int_0^x (x-t)^{\alpha-1}\left(\int_0^{\infty}\frac{z^{\alpha}\varphi(-z)}{t^{\alpha}(z+t)}\,dz\right)dt$$
$$= \left(I^{\alpha}_{0+}\varphi\right)(x) + \frac{\sin\pi\alpha}{\pi\Gamma(\alpha)}\int_{-\infty}^0 (-z)^{\alpha}\varphi(z)\int_0^x \frac{(x-t)^{\alpha-1}t^{-\alpha}}{t-z}\,dt\,dz$$
$$= \left(I^{\alpha}_{0+}\varphi\right)(x) + \frac{1}{\Gamma(\alpha)}\int_{-\infty}^0 (-z)^{\alpha}\varphi(z)\,\frac{(x-z)^{\alpha-1}}{(-z)^{\alpha}}\,dz = \left(I^{\alpha}_{0+}\varphi\right)(x) + \frac{1}{\Gamma(\alpha)}\int_{-\infty}^0 (x-z)^{\alpha-1}\varphi(z)\,dz = \left(I^{\alpha}_{+}\varphi\right)(x),$$
where we used Lemma 1.23 taking into account that $-z > 0$.
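The identity of Lemma 1.23 is easy to check numerically. The following sketch (ours, not from the book; all names are hypothetical) takes $\alpha = 1/2$, where the substitution $t = x\sin^2\theta$ turns the doubly singular integrand into the smooth function $2/(a + x\sin^2\theta)$, and compares a midpoint-rule quadrature with the closed form.

```python
import math

def lemma_1_23_lhs(a, x, n=20000):
    """Left-hand side of Lemma 1.23 for alpha = 1/2.

    With alpha = 1/2 the substitution t = x*sin(theta)**2 turns
    int_0^x (x-t)^(alpha-1) t^(-alpha) (t+a)^(-1) dt into the smooth
    integral int_0^{pi/2} 2/(a + x*sin(theta)**2) dtheta, which we
    evaluate by the midpoint rule."""
    h = (math.pi / 2) / n
    s = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        s += 2.0 / (a + x * math.sin(theta) ** 2)
    return s * h

def lemma_1_23_rhs(a, x, alpha=0.5):
    # pi / sin(pi*alpha) * (a + x)^(alpha-1) / a^alpha
    return math.pi / math.sin(math.pi * alpha) * (a + x) ** (alpha - 1) / a ** alpha

a, x = 1.0, 2.0
print(lemma_1_23_lhs(a, x), lemma_1_23_rhs(a, x))
```

For $a = 1$, $x = 2$, both sides reduce to $\pi/\sqrt{a(a+x)} = \pi/\sqrt{3}$.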

1.5 Continuity of fractional integrals in the index of integration

In this section, we consider the behavior of fractional integrals as functions of $\alpha \ge 0$. First, we define $I^{0}_{a+}\varphi := \varphi$ for $\varphi \in L^1(a,b)$, so that for any fixed function $\varphi \in L^p(a,b)$ we can study the function $\alpha \in [0,\infty) \mapsto I^{\alpha}_{a+}\varphi \in L^p(a,b)$.

Theorem 1.25. Let $\varphi \in L^p(a,b)$ for some $p \in [1,\infty)$. Then the function
$$\alpha \in [0,\infty) \mapsto I^{\alpha}_{a+}\varphi \in L^p(a,b)$$
is continuous.

Proof. Let $\alpha, \alpha_0 > 0$. For $x \in (a,b)$, consider the difference
$$\left(I^{\alpha_0}_{a+}\varphi\right)(x) - \left(I^{\alpha}_{a+}\varphi\right)(x) = \left(\frac{1}{\Gamma(\alpha_0)}-\frac{1}{\Gamma(\alpha)}\right)\int_a^x \varphi(t)(x-t)^{\alpha_0-1}\,dt + \frac{1}{\Gamma(\alpha)}\int_a^x \left((x-t)^{\alpha_0-1}-(x-t)^{\alpha-1}\right)\varphi(t)\,dt =: (I\varphi)(x) + (J\varphi)(x).$$

Let us estimate $\|I\varphi\|_{L^p(a,b)}$ and $\|J\varphi\|_{L^p(a,b)}$ separately. According to Lemma 1.5,
$$\|I\varphi\|_{L^p(a,b)} \le \left|1-\frac{\Gamma(\alpha_0)}{\Gamma(\alpha)}\right|\frac{(b-a)^{\alpha_0}}{\Gamma(1+\alpha_0)}\|\varphi\|_{L^p(a,b)}. \tag{1.30}$$
To estimate $\|J\varphi\|_{L^p(a,b)}$, let us assume, without loss of generality, that the function $\varphi$ equals zero outside the interval $[a,b]$. Then
$$|(J\varphi)(x)| = \frac{1}{\Gamma(\alpha)}\left|\int_0^{x-a}\left(z^{\alpha_0-1}-z^{\alpha-1}\right)\varphi(x-z)\,dz\right| \le \frac{1}{\Gamma(\alpha)}\int_0^{b-a}z^{\alpha_0-1}\left|1-z^{\alpha-\alpha_0}\right||\varphi(x-z)|\,\mathbf{1}_{(0,x-a)}(z)\,dz. \tag{1.31}$$
Applying the generalized Minkowski inequality (1.3), we get that
$$\|J\varphi\|_{L^p(a,b)} \le \frac{1}{\Gamma(\alpha)}\int_0^{b-a}\frac{|1-z^{\alpha-\alpha_0}|}{z^{1-\alpha_0}}\left(\int_a^b |\varphi(x-z)|^p\,\mathbf{1}_{(a,b-z)}(x-z)\,dx\right)^{\frac1p}dz \le \frac{\|\varphi\|_{L^p(a,b)}}{\Gamma(\alpha)}\int_0^{b-a}\frac{|1-z^{\alpha-\alpha_0}|}{z^{1-\alpha_0}}\,dz =: \frac{\|\varphi\|_{L^p(a,b)}}{\Gamma(\alpha)}\,J(\alpha,\alpha_0). \tag{1.32}$$

Taking the sum of the respective parts of (1.30) and (1.32), we get that
$$\left\|\left(I^{\alpha}_{a+}-I^{\alpha_0}_{a+}\right)\varphi\right\|_{L^p(a,b)} \le \|\varphi\|_{L^p(a,b)}\left(\left|1-\frac{\Gamma(\alpha_0)}{\Gamma(\alpha)}\right|\frac{(b-a)^{\alpha_0}}{\Gamma(1+\alpha_0)} + \frac{J(\alpha,\alpha_0)}{\Gamma(\alpha)}\right). \tag{1.33}$$
Consider $J(\alpha,\alpha_0)$, and suppose, without loss of generality, that $|\alpha-\alpha_0| < \delta$ for a constant $\delta > 0$ small enough. First, let $\alpha > \alpha_0$. If $z \ge 1$, then $|1-z^{\alpha-\alpha_0}| = z^{\alpha-\alpha_0}-1 \le z^{\delta}$, whereas if $z < 1$, then $|1-z^{\alpha-\alpha_0}| = 1-z^{\alpha-\alpha_0} \le 1$. Hence, in general, if $\alpha > \alpha_0$, then $|1-z^{\alpha-\alpha_0}|z^{\alpha_0-1} \le z^{\alpha_0-1}\max\{z^{\delta},1\}$, which belongs to $L^1(0,b-a)$ (since $\alpha_0-1+\delta > -1$). Thus by the Lebesgue dominated convergence theorem we get $\lim_{\alpha\downarrow\alpha_0}J(\alpha,\alpha_0) = 0$. Second, consider the case $\alpha < \alpha_0$. If $z \ge 1$, then $|1-z^{\alpha-\alpha_0}| \le 1$, whereas if $z < 1$, then $|1-z^{\alpha-\alpha_0}| \le z^{-\delta}$. Hence, in general, if $\alpha < \alpha_0$, then $|1-z^{\alpha-\alpha_0}|z^{\alpha_0-1} \le z^{\alpha_0-1}\max\{z^{-\delta},1\}$. This function belongs to $L^1(0,b-a)$ if we choose $\delta > 0$ so that $\alpha_0-1-\delta > -1$, that is, $\delta < \alpha_0$. Once this is done, by the Lebesgue dominated convergence theorem we have $\lim_{\alpha\uparrow\alpha_0}J(\alpha,\alpha_0) = 0$. Taking the limit as $\alpha \to \alpha_0$ in (1.33), we get the statement.

Now let $\alpha_0 = 0$. Our goal is to prove that
$$\lim_{\alpha\to 0}\left\|I^{\alpha}_{a+}\varphi-\varphi\right\|_{L^p(a,b)} = 0.$$

Indeed, proceeding as in (1.31), we transform the difference as follows:
$$\left(I^{\alpha}_{a+}\varphi\right)(x) - \varphi(x) = \frac{1}{\Gamma(\alpha)}\int_0^{x-a}t^{\alpha-1}\varphi(x-t)\,dt - \varphi(x) = \frac{1}{\Gamma(\alpha)}\int_0^{x-a}\frac{\varphi(x-t)-\varphi(x)}{t^{1-\alpha}}\,dt + \varphi(x)\left(\frac{(x-a)^{\alpha}}{\Gamma(1+\alpha)}-1\right) =: \left(K^{\alpha}\varphi\right)(x) + \left(Y^{\alpha}\varphi\right)(x),$$
so that $\|I^{\alpha}_{a+}\varphi-\varphi\|_{L^p(a,b)} \le \|K^{\alpha}\varphi\|_{L^p(a,b)} + \|Y^{\alpha}\varphi\|_{L^p(a,b)}$. Clearly,
$$\left\|Y^{\alpha}\varphi\right\|^p_{L^p(a,b)} \le \int_a^b |\varphi(x)|^p\left|\frac{(x-a)^{\alpha}}{\Gamma(1+\alpha)}-1\right|^p dx.$$
Again, suppose that $\alpha \in (0,\delta)$. Denoting by $x_0$ the minimum point of the gamma function, which belongs to $(1,2)$ (see Appendix B.1), we assume that $\delta < x_0$. Then we have
$$\left|\frac{(x-a)^{\alpha}}{\Gamma(1+\alpha)}-1\right|^p \le \left(\max\left\{1,\frac{(b-a)^{\delta}}{\Gamma(1+\delta)}\right\}\right)^p,$$
and thus the Lebesgue dominated convergence theorem ensures that we may go to the limit as $\alpha \to 0$ in the latter integral. So $\lim_{\alpha\to 0^+}\|Y^{\alpha}\varphi\|_{L^p(a,b)} = 0$. To estimate $K^{\alpha}\varphi$, assume for the moment that $\varphi$ belongs to $\mathcal{C}^1[a,b]$. Then
$$\left|\left(K^{\alpha}\varphi\right)(x)\right| \le \frac{1}{\Gamma(\alpha)}\int_0^{b-a}t^{\alpha}\max_{t\in[a,b]}\left|\varphi'(t)\right|dt \xrightarrow[\alpha\to 0]{} 0$$

uniformly in $x$, and thus, as a consequence, $\|K^{\alpha}\varphi\|_{L^p(a,b)} \to 0$ as $\alpha \to 0$. Let us prove that $\lim_{\alpha\to 0}\|K^{\alpha}\varphi\|_{L^p(a,b)} = 0$ for every $\varphi \in L^p(a,b)$. Since $\mathcal{C}^1[a,b]$ is dense in $L^p(a,b)$, we can approximate any function $\varphi \in L^p(a,b)$ by a sequence of continuously differentiable functions $\varphi_n$ in the $L^p(a,b)$-norm. Then by the linearity of the operator $K^{\alpha}$,
$$\left\|K^{\alpha}\varphi\right\|_{L^p(a,b)} \le \left\|K^{\alpha}(\varphi-\varphi_n)\right\|_{L^p(a,b)} + \left\|K^{\alpha}\varphi_n\right\|_{L^p(a,b)}. \tag{1.34}$$
A direct application of the generalized Minkowski inequality to the first term in (1.34) leads to the bounds
$$\left\|K^{\alpha}(\varphi-\varphi_n)\right\|_{L^p(a,b)} \le \frac{2(b-a)^{\alpha}}{\Gamma(1+\alpha)}\|\varphi-\varphi_n\|_{L^p(a,b)} \le C_{\delta}\|\varphi-\varphi_n\|_{L^p(a,b)},$$
where $C_{\delta} = 2\max\left\{1,\frac{(b-a)^{\delta}}{\Gamma(1+\delta)}\right\}$. Since for any $\varepsilon > 0$ we have $\|\varphi-\varphi_n\|_{L^p(a,b)} < \frac{\varepsilon}{C_{\delta}}$ for $n \in \mathbb{N}$ large enough, for such $n$ we get
$$\left\|K^{\alpha}(\varphi-\varphi_n)\right\|_{L^p(a,b)} < \varepsilon \quad\text{and, additionally,}\quad \limsup_{\alpha\to 0}\left\|K^{\alpha}\varphi\right\|_{L^p(a,b)} \le \varepsilon + \lim_{\alpha\to 0}\left\|K^{\alpha}\varphi_n\right\|_{L^p(a,b)} = \varepsilon.$$
Since $\varepsilon > 0$ is arbitrary, we can state that
$$\lim_{\alpha\to 0}\left\|K^{\alpha}\varphi\right\|_{L^p(a,b)} = 0$$

and conclude the proof.

We can in fact prove a stronger form of continuity. For any linear continuous operator $F : L^p(a,b) \to L^p(a,b)$, we can define its norm in a standard way:
$$\|F\| = \sup_{\substack{\varphi\in L^p(a,b)\\ \varphi\not\equiv 0}}\frac{\|F\varphi\|_{L^p(a,b)}}{\|\varphi\|_{L^p(a,b)}}.$$
It is well known that this norm endows the space $\mathcal{L}(L^p(a,b),L^p(a,b))$ of linear continuous operators with a Banach space structure. Now, in our framework, we have the following continuity result.

Theorem 1.26. The function $\alpha \in (0,\infty) \mapsto I^{\alpha}_{a+} \in \mathcal{L}(L^p(a,b),L^p(a,b))$ is continuous.

Proof. We need to prove that $\|I^{\alpha}_{a+}-I^{\alpha_0}_{a+}\| \to 0$ as $\alpha \to \alpha_0$ for all $\alpha_0 > 0$. To do this, note that (1.33) implies that for any $\varphi \not\equiv 0$ in $L^p(a,b)$,
$$\frac{\|(I^{\alpha}_{a+}-I^{\alpha_0}_{a+})\varphi\|_{L^p(a,b)}}{\|\varphi\|_{L^p(a,b)}} \le \left|1-\frac{\Gamma(\alpha_0)}{\Gamma(\alpha)}\right|\frac{(b-a)^{\alpha_0}}{\Gamma(1+\alpha_0)} + \frac{J(\alpha,\alpha_0)}{\Gamma(\alpha)},$$
where $J(\alpha,\alpha_0)$ is defined in the proof of Theorem 1.25. Taking the supremum over $\varphi$, we get
$$\left\|I^{\alpha}_{a+}-I^{\alpha_0}_{a+}\right\| \le \left|1-\frac{\Gamma(\alpha_0)}{\Gamma(\alpha)}\right|\frac{(b-a)^{\alpha_0}}{\Gamma(1+\alpha_0)} + \frac{J(\alpha,\alpha_0)}{\Gamma(\alpha)},$$
and thus, taking the limit as $\alpha \to \alpha_0$, we conclude the proof.

The next natural questions are: generally speaking, what is the criterion for a function to belong to the set $\mathbb{I}^{\alpha}_{\pm}(L^p(\mathbb{R}))$? Do the spaces $\mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$ and $\mathbb{I}^{\alpha}_{-}(L^p(\mathbb{R}))$ (respectively, $\mathbb{I}^{\alpha}_{a+}(L^p(a,b))$ and $\mathbb{I}^{\alpha}_{b-}(L^p(a,b))$) coincide? We postpone the answer to the first question to Section 1.9, when we will know the notion and properties of the fractional derivative; see Theorem 1.48. Concerning the second question, the answer is given in Corollary 1.127.
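The endpoint $\alpha_0 = 0$ of Theorem 1.25 can be illustrated numerically. The sketch below (ours, not from the book; the function name is hypothetical) takes $\varphi \equiv 1$ on $(0,1)$, uses the closed form $(I^{\alpha}_{0+}1)(x) = x^{\alpha}/\Gamma(1+\alpha)$ (Lemma 1.11 with $\beta = 1$), and evaluates $\|I^{\alpha}_{0+}1 - 1\|^2_{L^2(0,1)}$ exactly, checking that it decreases to $0$ as $\alpha \to 0$.

```python
import math

def l2_dist_sq(alpha):
    """||I^alpha_{0+} 1 - 1||^2 in L^2(0,1), from the closed form
    (I^alpha_{0+} 1)(x) = x^alpha / Gamma(1+alpha): expanding the square
    and integrating each power of x on (0,1) gives the expression below."""
    g = math.gamma(1 + alpha)
    return 1 / ((2 * alpha + 1) * g * g) - 2 / ((alpha + 1) * g) + 1

dists = [l2_dist_sq(a) for a in (0.2, 0.1, 0.05, 0.01)]
print(dists)  # strictly decreasing towards 0
```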

1.6 Fractional integrals of complex order

Up to now, we considered a real number $\alpha \ge 0$. However, we can properly extend the definition of the fractional integral to complex orders. To do this, let us focus on the operator $I^{\alpha}_{a+}$, as $I^{\alpha}_{b-}$ can be discussed analogously. Moreover, let us consider $a = 0$ for simplicity. Let $\alpha \in \mathbb{C}$ be such that $\Re(\alpha) > 0$, and write $\alpha = \alpha_0 + i\theta$ for $\alpha_0 > 0$ and $\theta \in \mathbb{R}$. For such $\alpha$ and positive real $x$, let us consider $x^{\alpha-1}$ as the principal value of the complex power. Thus $x^{\alpha-1} = x^{\alpha_0-1}e^{i\theta\log(x)}$, and taking the modulus, we get $|x^{\alpha-1}| = x^{\alpha_0-1}$. Hence, for this choice of $\alpha$, the complex-valued function $\iota_{\alpha}$ defined in (1.6) belongs to $L^1_{\mathrm{loc}}(\mathbb{R}^+)$, where $\mathbb{R}^+ := (0,\infty)$.

For any function $f : [0,b] \to \mathbb{R}$ and $\alpha \in \mathbb{C}$ with $\Re(\alpha) > 0$, we define the fractional integral of $f$ as $I^{\alpha}_{0+}f$ via formula (1.1). Moreover, the previous observation tells us that items (i) and (ii) of Lemma 1.4 still hold, implying that $I^{\alpha}_{0+} : L^1(0,b) \to L^1(0,b)$ is well defined. Moreover, taking the modulus and using the fact that $|t^{\alpha-1}| = t^{\Re(\alpha)-1}$, we conclude that item (iii) of Lemma 1.4 holds with $\Re(\alpha)$ in place of $\alpha$, and therefore $I^{\alpha}_{0+}$ is a continuous operator from $L^1(0,b)$ into itself. Also, Lemma 1.5 and Theorems 1.8 and 1.12 still hold upon substituting $\Re(\alpha)$ and $\Re(\beta)$ for $\alpha$ and $\beta$ in the conditions. However, in the case $\Re(\alpha) = 0$ we obtain $|t^{\alpha-1}| = t^{-1}$, and then the integral in (1.1) usually fails to converge. For this reason, the purely imaginary fractional integral is not defined by means of (1.1).

Theorem 1.25 implies that if $\varphi \in \mathcal{C}[a,b]$ and $x \in [a,b]$, then $\alpha \in [0,\infty) \mapsto I^{\alpha}_{0+}\varphi(x) \in \mathbb{R}$ is a continuous function. In [232, p. 38], it is stated that in such a case the function $\alpha \in \{z \in \mathbb{C} : \Re(z) > 0\} \mapsto I^{\alpha}_{0+}\varphi(x)$ can be obtained by analytic continuation, and thus, in particular, it is holomorphic. In fact, we can extend such a function to an entire function on the complex plane: we will cover this in Section 1.13, Theorem 1.103.

24 � 1 Fractional integrals and derivatives

1.7 Riemann–Liouville fractional derivatives Fix any −∞ < a < b < +∞ and 0 < α < 1 and recall that for p ≥ 1, we denote by 𝕀αa+ (Lp (a, b)) the class of functions f that can be presented as Riemann–Liouville inteα grals; more exactly, f = Ia+(b−) φ for some φ ∈ Lp (a, b), p ≥ 1. Lemma 1.15 ensures the uniqueness of such a function φ. Thus we can denote, without ambiguity, φ as Dαa+(b−) f , where, for p ≥ 1, Dαa+(b−) : 𝕀α (Lp (a, b)) → Lp (a, b). To identify the operator Dαa+(b−) , let us go back to the proof of Proposition 1.21. Indeed, up to the first line of (1.28), we did not use yet the condition f ∈ AC[a, b], and thus we see that (Dαa+ f ) (x) =

x

1 d ∫ f (t)(x − t)−α dt Γ(1 − α) dx

(1.35)

a

and (Dαb− f ) (x)

b

1 d =− ∫ f (t)(t − x)−α dt, Γ(1 − α) dx

(1.36)

x

provided that the involved quantities are well-defined for a. a. x ∈ (a, b). The same definition can be adopted if we substitute (a, b) with the whole real line ℝ, obtaining (Dα+ f ) (x)

x

1 d = ∫ f (t)(x − t)−α dt Γ(1 − α) dx

(1.37)

−∞

and (Dα− f ) (x)



1 d =− ∫ f (t)(t − x)−α dt, Γ(1 − α) dx

(1.38)

x

provided that the involved quantities are well-defined for a. a. x ∈ ℝ. Definition 1.27. The operators Dαa+(b−) (Dα± if a = −∞ or b = ∞) are called the left- and right-sided Riemann-Liouville fractional derivatives of order α ∈ (0, 1). Let us note that we can rewrite the Riemann–Liouville fractional derivatives in terms of derivatives of fractional integrals as (Dαa+ f ) (x) =

d 1−α (I f ) (x), dx a+

(Dαb− f ) (x) = −

d 1−α (I f ) (x). dx b−

In the case −∞ < a < b < +∞, by the usual chain rule of the derivative, defining fa (t) = f (t + a) and fb (t) = f (t + b), from (1.3) and (1.4) we get (Dαa+ f ) (x) = (Dα0+ fa ) (x − a),

(Dαb− f ) (x) = (Dα0− fb ) (x − b).

1.7 Riemann–Liouville fractional derivatives

� 25

In the same way, defining ̃f (t) = f (−t), from (1.5) we get (Dα0− f ) (x) = (Dα0+ ̃f ) (−x). Thus if we prove some properties of the operator Dα0+ , then we can deduce analogous properties on Dαa+ and Dαb− for any finite a, b. As for the fractional integral, we would like to determine some functions that belong to the domain Dom(Dαa+(b−) ) of the operator Dαa+(b−) . In the case −∞ < a < b < +∞, we can restate Proposition 1.21 in terms of Riemann–Liouville fractional derivatives. Proposition 1.28. Let α ∈ (0, 1) and −∞ < a < b < +∞. If f ∈ AC[a, b], then Dαa+ f is well defined, belongs to L1 (a, b), and (Dαa+ f ) (x) =

f (a) 1−α ′ + (Ia+ f ) (x). Γ(1 − α)(x − a)α

Moreover, if f ∈ W 1,p (a, b) for some 1 ≤ p ≤ α1 , then Dαa+ f ∈ Lp (a, b). As a direct consequence, we get that for any f ∈ 𝒞 1 [a, b], the fractional derivative is well-defined and belongs to 𝒞 [a, b], as expected. Moreover, Proposition 1.28 guarantees that AC[a, b] ⊂ Dom(Dαa+(b−) ). There are, however, functions that belong to Dom(Dαa+(b−) )\AC[a, b]; see Exercise 1.9. In the case (a, b) = ℝ, the proof of Proposition 1.21 cannot be directly applied, but it needs some modification. Dαa+(b−) f

Proposition 1.29. Let α ∈ (0, 1) and suppose f ∈ W 1,p (ℝ) with 1 < p < assume that lim |x|1−α |f (x)| = 0,

x→−∞

1

and

∫ −∞

|f (x)| + |x||f ′ (x)| dx < ∞. |x|α

1 . 1−α

Further,

(1.39)

Then (Dα+ f ) (z) =

z

1 ∫ (z − x)−α f ′ (x)dx. Γ(1 − α) −∞

Proof. As in the proof of Proposition 1.21, let us assume, at least formally, that f = I+α φ. Arguing as in Proposition 1.21, we get z

φ(z) =

1 d ( ∫ (z − x)−α f (x)dx) . Γ(1 − α) dz −∞

(1.40)

26 � 1 Fractional integrals and derivatives Let us study in detail the integral on the right-hand side of (1.40). First of all, note that (1.39) guarantees that z

z

∫ (z − x) f (x)dx = lim ∫(z − x)−α f (x)dx. −α

a→−∞

−∞

a

Integrating by parts, we get z

z

(z − a)1−α (z − x)1−α ′ f (a) + ∫ f (x)dx. ∫(z − x) f (x)dx = 1−α 1−α −α

a

a

Thanks to both conditions (1.39) and (1.39), we can take the limit as a → −∞ to get z

z

1 ∫ (z − x) f (x)dx = ∫ (z − x)1−α f ′ (x)dx. 1−α −α

−∞

(1.41)

−∞

Finally, we need to differentiate on the right-hand side of (1.41) for a. a. z ∈ ℝ. Since 1 1 < p < 1−α , Hardy-Littlewood Theorem 1.10 implies that z

󵄨 󵄨 ∫ (z − x)−α 󵄨󵄨󵄨f ′ (x)󵄨󵄨󵄨dx < ∞,

for a. e. z ∈ ℝ.

(1.42)

−∞

Let E1 be the set of z ∈ ℝ such that (1.42) holds. Next, since f ′ ∈ Lp (ℝ), it also belongs to L1loc (ℝ). Hence, for a. e. z ∈ ℝ, we have (see, for instance, [254, Theorem 7.16] z+h

1 lim ∫ f ′ (x)dx = f ′ (z). h→0 h

(1.43)

z

Let E2 be the set of z ∈ ℝ such that (1.43) holds. Define E = E1 ∩ E2 and observe that |ℝ \ E| = 0. We now differentiate the right-hand side of (1.41) in any z ∈ E. We argue for h > 0, since the case h < 0 is similar. We consider the quantity z+h

z

−∞

−∞

1 ( ∫ (z + h − x)1−α f ′ (x)dx − ∫ (z − x)1−α f ′ (x)dx) h z

= ∫ −∞

z+h

(z + h − x)1−α − (z − x)1−α ′ 1 f (x)dx + ∫ (z + h − x)1−α f ′ (x)dx = I1 (h) + I2 (h). h h z

Concerning I1 (h), observe that by Lagrange theorem we know that for any x ∈ (−∞, z) there exists ξ(z, h, x) ∈ (z, z + h) such that (z + h − x)1−α − (z − x)1−α 󵄨󵄨 ′ 󵄨󵄨 −α 󵄨 ′ 󵄨 󵄨 −α 󵄨 ′ 󵄨󵄨f (x)󵄨󵄨 = (1 − α)(ξ(z, h, x) − x) 󵄨󵄨󵄨f (x)󵄨󵄨󵄨 ≤ (1 − α)(z − x) 󵄨󵄨󵄨f (x)󵄨󵄨󵄨, h

1.7 Riemann–Liouville fractional derivatives

� 27

where the right-hand side is integrable in (−∞, z) since z ∈ E and (1.42) holds. Taking the limit as h → 0 and using the dominated convergence theorem, we have z

lim I1 (h) = (1 − α) ∫ (z − x)−α f ′ (x)dx. h↓0

−∞

To handle I2 (h), observe that I2 (h) ≤ h1−α (

z+h

1 ∫ f ′ (x)dx) h z

and then, taking the limit and using (1.43) since z ∈ E, we have limh↓0 I2 (h) = 0. This ends the proof. α Let us stress that, by definition, the operator Dαa+ is the left-inverse of Ia+ , i. e., for a. a. x ∈ (a, b), α (Dαa+ (Ia+ f )) (x) = f (x).

(1.44)

On the other hand, we can prove the following identity. Proposition 1.30. Let f ∈ 𝕀α (Lp (a, b)) for some p ≥ 1. Then α (Ia+ (Dαa+ f )) (x) = f (x).

(1.45)

α Proof. Since f ∈ 𝕀α (Lp (a, b)), we can write that f = Ia+ φ for φ ∈ Lp (a, b). Therefore for a. a. x ∈ (a, b), α α α α (Ia+ (Dαa+ f )) (x) = (Ia+ (Dαa+ (Ia+ φ))) (x) = (Ia+ φ) (x) = f (x),

and the proof follows. Identity (1.45) is not true in general, it is explained in Exercise 1.11. Furhermore, it is interesting to note that the fractional derivative of a constant is not 0. Proposition 1.31. Let a > 0 and α ∈ (0, 1). Then, for any x > a, (Dαa+ 1) (x) =

1 (x − a)−α . Γ(1 − α)

Proof. Use Lemma 1.11 with β = 1 to obtain 1−α (Ia+ 1) (x) =

1 (x − a)1−α . Γ(2 − α)

Differentiating both sides of the previous identity, we get the proof.

28 � 1 Fractional integrals and derivatives In some applications, it may be necessary to ensure that some fractional derivative of a constant is equal to 0. This will be explained in the next section. Let us end this section by noting that we can use the relationship between fractional integrals and derivatives to derive a (partial) semigroup result for the Riemann– α+β Liouville derivative. Indeed, if α, β ≥ 0, α + β < 1, and f ∈ 𝕀a+ (L1 (ℝ)), then β

β

α+β α+β

β

β

α+β

α Dαa+ Da+ f = Dαa+ Da+ Ia+ Da+ f = Dαa+ Da+ Ia+ Ia+ Da+ f α+β

α+β

α = Dαa+ Ia+ Da+ f = Da+ f .

(1.46)

1.8 Dzhrbashyan–Caputo fractional derivatives A suitable modification of the Riemann–Liouville derivative has been obtained, independently, by Dzhrbashyan [70] and Caputo [43] more or less at the same time. It is interesting to observe that the same operator has been also considered in [87]. Definition 1.32. For α ∈ (0, 1), the left- and right-sided Dzhrbashyan–Caputo fractional derivatives of order α of a function f ∈ AC[a, b] are defined as (C Dαa+ f ) (x) =

x

1 1−α ′ f ) (x), ∫(x − t)−α f ′ (t)dt = (Ia+ Γ(1 − α)

(1.47)

a

and (C Dαb− f ) (x)

b

1 1−α ′ = f ) (x). ∫(t − x)−α f ′ (t)dt = (Ib− Γ(1 − α)

(1.48)

x

Clearly, for f ∈ AC[a, b], both the quantities (C Dαa+(b−) f )(x) are well-defined for a. a. x ∈ [a, b]. Moreover, Proposition 1.28 guarantees the following identity for all f ∈ AC[a, b]: (C Dαa+ f ) (x) = (Dαa+ f ) (x) −

f (a) , Γ(1 − α)(x − a)α

x ∈ [a, b].

(1.49)

With the previous identity in mind, we can easily prove the following property. Proposition 1.33. Let f ∈ AC[a, b]. Then C α Da+ f

= Dαa+ (f − f (a)).

In particular, C Dαa+ f = Dαa+ f if and only if f (a) = 0. Moreover, C Dαa+ 1 ≡ 0.

(1.50)

1.8 Dzhrbashyan–Caputo fractional derivatives



29

Proof. Note that (Dαa+ f (a)) (x) =

f (a) Γ(1 − α)(x − a)α

by Proposition 1.31 and the linearity of the Riemann–Liouville fractional derivative. In particular, (1.50) tells us that if f (a) = 0, then C Dαa+ f = Dαa+ f . Vice versa, if C Dαa+ f = Dαa+ f , then by (1.49) we get f (a) = 0, Γ(1 − α)(x − a)α

x ∈ [a, b],

which implies f (a) = 0. Finally, note that C Dαa+ 1 = Dαa+ (1 − 1) ≡ 0. Remark 1.34. Similarly, if f ∈ AC[a, b], then C α Db− f

= Dαb− (f − f (b)).

(1.51)

The previous proposition tells us that we can extend the definition of Dzhrbashyan– Caputo fractional derivatives to functions that are not absolutely continuous by taking (1.50) as a definition. Let us also emphasize that the Dzhrbashyan–Caputo fractional derivatives are continuous operators from 𝒞 1 [a, b] into 𝒞 [a, b]. Proposition 1.35. For all f ∈ 𝒞 1 [a, b], we have the upper bound (b − a)1−α ‖f ‖𝒞 1 [a,b] 󵄩󵄩C α 󵄩󵄩 󵄩󵄩 Da+ f 󵄩󵄩 ≤ . 󵄩 󵄩𝒞[a,b] Γ(2 − α) Proof. Just note that x

(b − a)1−α maxx∈[a,b] |f ′ (x)| 1 󵄨󵄨 C α 󵄨 −α 󵄨󵄨 ′ 󵄨󵄨 󵄨󵄨( Da+ f ) (x)󵄨󵄨󵄨 ≤ (x − t) f (t) dt ≤ ∫ 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 Γ(1 − α) Γ(2 − α) a

and then take the maximum over x ∈ [a, b] on the left-hand side. We observe that the Dzhrbashyan–Caputo fractional derivative on AC[a, b] represents, in a certain sense, the right-inverse of the fractional integral (whereas the Riemann–Liouville derivative is the left-inverse). Proposition 1.36. For all f ∈ AC[a, b] and a. a. x ∈ (a, b), α C α (Ia+ ( Da+ f )) (x) = f (x) − f (a).

Proof. The identity follows by observing that for a. a. x ∈ (a, b), α C α α 1−α ′ 1 ′ (Ia+ ( Da+ f )) (x) = (Ia+ (Ia+ f )) (x) = (Ia+ f ) (x) = f (x) − f (a).

We can also define the Dzhrbashyan–Caputo fractional derivatives on the real line.

30 � 1 Fractional integrals and derivatives Definition 1.37. The Dzhrbashyan–Caputo fractional derivatives on ℝ are defined as (C Dα+ f ) (x) =

x

1 ∫ (x − t)−α f ′ (t)dt = (I+1−α f ′ ) (x) Γ(1 − α)

(1.52)

−∞

and (C Dα− f ) (x)



1 = ∫ (t − x)−α f ′ (t)dt = (I−1−α f ′ ) (x). Γ(1 − α)

(1.53)

x

Clearly, in this case the condition f ∈ AC(ℝ) is not sufficient to guarantee that C Dα+ f is well-defined. However, a direct application of the Hardy–Littlewood theorem (Theorem 1.10) gives us the following statement. 1 Proposition 1.38. Let α ∈ (0, 1). If f ∈ W 1,p (ℝ) for some 1 < p < 1−α , then C Dα+ f is wellp q defined and belongs to L (ℝ), where q = 1−p+αp . In particular, there exists a constant Cp,α such that

󵄩󵄩C α 󵄩󵄩 󵄩󵄩 D+ f 󵄩󵄩 q ≤ Cp,α ‖f ‖W 1,p (ℝ) . 󵄩 󵄩L (ℝ) The previous theorem tells us that, for p < C α D+

1 1−α

Theorem 1.39. Let α ∈ (0, 1) and 1 < p < following assumptions: lim |x|

x→−∞

p , 1−p+αp

the operator

: W 1,p (ℝ) → Lq (ℝ)

is continuous. We can also prove that for each p < on the function f ∈ W 1,p (ℝ), C Dα+ f ∈ Lp (ℝ).

1−α

and q =

1 , 1−α

and let a function f ∈ W 1,p (ℝ) satisfy the

1

f (x) = 0 and

1 , under some suitable assumption 1−α

∫ −∞

|f (x)| + |x||f ′ (x)| dx < ∞. |x|α

(1.54)

Then C Dα+ f is well-defined, belongs to Lp (ℝ), and there exists a constant Cp,α > 0 (independent of f ) such that 󵄩󵄩C α 󵄩󵄩 󵄩󵄩 D+ f 󵄩󵄩 p ≤ Cα ‖f ‖W 1,p (ℝ) . 󵄩 󵄩L (ℝ) Proof. Recall that by the definition of the Dzhrbashyan–Caputo fractional derivative (1.53) C α D+ f

= I+1−α f ′ ,

1.8 Dzhrbashyan–Caputo fractional derivatives



31

where the right-hand side is well-defined a. e. thanks to the Hardy–Littlewood Theorem 1.10. In particular, we get 1

1 p 󵄨󵄨 x 󵄨󵄨p p 󵄨󵄨 󵄩󵄩C α 󵄩󵄩 󵄨󵄨 1−α ′ 󵄨󵄨p 󵄨󵄨 󵄩󵄩 D+ f 󵄩󵄩 p = (∫ 󵄨󵄨(I− f ) (x)󵄨󵄨 dx) = (∫ 󵄨󵄨󵄨 ∫ (x − u)−α f ′ (u)du󵄨󵄨󵄨 dx) 󵄩 󵄩L (ℝ) 󵄨 󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 ℝ ℝ 󵄨−∞ 1

≤2

1− p1

=: 2

1− p1

1

p 󵄨󵄨 x 󵄨󵄨p 󵄨󵄨 x−1 󵄨󵄨p p 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 f ′ (u) f ′ (u) 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 ((∫ 󵄨󵄨 ∫ du󵄨 dx) + (∫ 󵄨󵄨 ∫ du󵄨 dx) ) 󵄨󵄨 (x − u)α 󵄨󵄨󵄨 󵄨󵄨 (x − u)α 󵄨󵄨󵄨󵄨 󵄨󵄨 ℝ 󵄨󵄨x−1 ℝ 󵄨󵄨−∞ 󵄨

(I1 + I2 ).

To handle I1 , we apply the generalized Minkowski inequality: 1

p 󵄨󵄨 1 󵄨󵄨p 󵄨󵄨 󵄨󵄨 ‖f ′ ‖Lp (ℝ) 󵄨󵄨 −α ′ 󵄨󵄨 I1 = (∫ 󵄨󵄨∫ u f (x − u)du󵄨󵄨 dx) ≤ . 󵄨󵄨 󵄨󵄨 1−α 󵄨 󵄨 ℝ 󵄨0 󵄨

To work with I2 , we integrate by parts in the inner integral, thanks to Assumption (1.54), and use the generalized Minkowski inequality to obtain 1

p 󵄨󵄨 󵄨󵄨p x−1 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 −α−1 I2 = (∫ 󵄨󵄨f (x + 1) − α ∫ (u − x) f (u)du󵄨󵄨 dx) 󵄨󵄨 󵄨󵄨 󵄨󵄨 −∞ ℝ 󵄨󵄨

≤2

1− p1

≤2

1− p1

1

󵄨󵄨 +∞ 󵄨󵄨p p 󵄨󵄨 󵄨󵄨 󵄨 󵄨 −α−1 (‖f ‖Lp (ℝ) + α (∫ 󵄨󵄨󵄨 ∫ u f (x − u)du󵄨󵄨󵄨 dx) ) 󵄨󵄨 󵄨󵄨 󵄨󵄨 ℝ 󵄨󵄨 1 1

+∞

p ‖f ‖Lp (ℝ) 󵄨 󵄨p (‖f ‖Lp (ℝ) + α ( ∫ u−α−1 (∫󵄨󵄨󵄨f (x − u)󵄨󵄨󵄨 dx) du)) ≤ . 1 −2 2p ℝ 1

Again, we can investigate the cases in which Dzhrbashyan–Caputo and Riemann– Liouville fractional derivatives coincide. The following condition is a direct consequence of Proposition 1.29. Proposition 1.40. Let α ∈ (0, 1), and let f ∈ W 1,p (ℝ) for some p < assume that 1−α 󵄨󵄨

lim |x|

x→−∞

󵄨 󵄨󵄨f (x)󵄨󵄨󵄨 = 0

1

and

∫ −∞

1 . 1−α

|f (x)| + |x||f ′ (x)| dx < ∞. |x|α

Furthermore,

32 � 1 Fractional integrals and derivatives Then both Dα+ f and C Dα+ f are well-defined, and (C Dα+ f ) (x) = (Dα+ f ) (x) for a. a. x ∈ ℝ. Proof. We can use Proposition 1.29 and (1.52) to conclude that for f satisfying our conditions and for a. a. x ∈ ℝ, (Dα+ f ) (x) =

x

1 ∫ (x − t)−α f ′ (t)dt = (C Dα+ f ) (x). Γ(1 − α) −∞

Remark 1.41. Clearly, Propositions 1.29 and 1.40 and Theorem 1.39 hold also for Dα− and C α D− , upon mirroring the conditions, i. e., asking for lim x

x→+∞

1−α

f (x) = 0

and



∫ −1

|f (x)| + |x||f ′ (x)| dx < ∞. |x|α

Furthermore, the statement of Theorem 1.39 also holds for the Riemann–Liouville derivative Dα+ , since we are under the assumptions of Proposition 1.40. Dzhrbashyan–Caputo fractional derivatives are particularly useful to define fractional-order initial-value problems. This will be better investigated in Section 2.4.

1.9 Marchaud fractional derivatives Now, let us focus on the Riemann–Liouville fractional derivative on the whole real line. Precisely, let us show that we can give an alternative definition of the operator. Proposition 1.42. Let α ∈ (0, 1), and let f ∈ 𝒞 1 (ℝ) ∩ W 1,p (ℝ) for some p < further that 󵄨 󵄨 lim |x|1−α 󵄨󵄨󵄨f (x)󵄨󵄨󵄨 = 0,

resp.,

|f (x)| + |x||f ′ (x)| dx < ∞, |x|α

resp.,

x→−∞

1 . 1−α

Suppose

󵄨 󵄨 lim |x|1−α 󵄨󵄨󵄨f (x)󵄨󵄨󵄨 = 0,

x→+∞

and 1

∫ −∞



∫ −1

|f (x)| + |x||f ′ (x)| dx < ∞. |x|α

Then for all x ∈ ℝ, (Dα+ f ) (x) =

x

f (x) − f (t) α dt, ∫ Γ(1 − α) (x − t)α+1 −∞

resp.,

(Dα− f ) (x) =



f (x) − f (t) α dt. ∫ Γ(1 − α) (t − x)α+1 x

(1.55)

1.9 Marchaud fractional derivatives

� 33

Proof. Let us prove the statement for Dα+ , as for Dα− the proof is analogous. By Proposition 1.40 we know that (Dα+ f )(x) = (C Dα+ f )(x), and thus we have (Dα+ f ) (x)



∞∞

0

0 t

1 α 1 = ∫ t −α f ′ (x − t)dt = ∫ ∫ f ′ (x − t) α+1 dy dt. Γ(1 − α) Γ(1 − α) y

(1.56)

Now our goal is to use Fubini theorem. Indeed, the latter integral is absolutely convergent for all x ∈ ℝ because ∞∞



0 t

0

󵄨 󵄨 dy 󵄨 󵄨 ∫ ∫ 󵄨󵄨󵄨f ′ (x − t)󵄨󵄨󵄨 α+1 dt = ∫ t −α 󵄨󵄨󵄨f ′ (x − t)󵄨󵄨󵄨 dt. y To prove that the latter integral is finite for all x ∈ ℝ, let us split it as follows: 1



∫t 0

−α



󵄨󵄨 ′ 󵄨 󵄨 󵄨 −α 󵄨 ′ −α 󵄨 ′ 󵄨󵄨f (x − t)󵄨󵄨󵄨 dt = ∫ t 󵄨󵄨󵄨f (x − t)󵄨󵄨󵄨 dt + ∫ t 󵄨󵄨󵄨f (x − t)󵄨󵄨󵄨 dt = I1 (x) + I2 (x). 1

0

Concerning I1 (x), since f ∈ 𝒞 1 (ℝ), we easily get I1 (x) ≤

maxs∈[x−1,x] |f ′ (s)| < +∞. 1−α

p To handle I2 (x), let q = p−1 be the conjugate exponent of p. Then since 1 − α < 1/p, we have αq > 1, and by Hölder inequality we get 1 q



󵄩 󵄩 I2 (x) ≤ ( ∫ t −qα dt) 󵄩󵄩󵄩f ′ 󵄩󵄩󵄩Lp (1,∞) = 1

1 (qα − 1)

1 q

󵄩󵄩 ′ 󵄩󵄩 󵄩󵄩f 󵄩󵄩Lp (1,∞) < +∞.

Going back to (1.56), we can use Fubini theorem to get, for all x ∈ ℝ, (Dα+ f ) (x)

∞∞



0 t ∞

0

y

f ′ (x − t) dy α α = dydt = ∫∫ ∫ (∫ f ′ (x − t)dt) α+1 α+1 Γ(1 − α) Γ(1 − α) y y =

x

0

f (x) − f (x − y) f (x) − f (t) α α dy = dt. ∫ ∫ Γ(1 − α) Γ(1 − α) yα+1 (x − t)α+1 0

−∞

We can see that the derivative of f is not present in the final representations (1.55). This means that the differentiability of f is not necessary for such a representation. Definition 1.43. We define the left- and right-sided Marchaud fractional derivatives as, respectively, (M Dα+ ) (x) :=

α ∫ (f (x) − f (x − y))y−α−1 dy Γ(1 − α) ℝ+

34 � 1 Fractional integrals and derivatives and (M Dα− f ) (x) :=

α ∫ (f (x) − f (x + y))y−α−1 dy. Γ(1 − α) ℝ+

These operators were introduced by Marchaud [153], but a prior definition was already given in [253]. Let us stress that such operator is well-defined on suitable Hölder functions. λ Proposition 1.44. Let f ∈ 𝒞loc (ℝ) for some λ > α, and let there exist ε > 0 such that −α+ε lim|x|→∞ |x| |f (x)| = 0. Then M Dα+(−) f is well-defined.

Proof. Without loss of generality, we can suppose that α − ε > 0. Let us consider only M α D+ since the proof for M Dα− is analogous. We can rewrite M Dα+ f as (M Dα+ f ) (x) :=

x

f (x) − f (y) α dy, ∫ Γ(1 − α) (x − y)α+1 −∞

and we only need to show that the integral converges for all x ∈ ℝ. To do this, fix x ∈ ℝ, consider a > 0, and bound the integral as follows: x−a

x

−∞

x−a

|f (x) − f (y)| |f (x) − f (y)| α 󵄨󵄨 M α 󵄨 󵄨󵄨( D+ f ) (x)󵄨󵄨󵄨 ≤ 󵄨 󵄨 Γ(1 − α) ( ∫ (x − y)α+1 dy + ∫ (x − y)α+1 dy) .

(1.57)

To work with the first integral, let us note that x−a

∫ −∞



|f (x) − f (y)| |f (x) − f (x + h)| dy = ∫ dh (x − y)α+1 hα+1 a

| hx + 1|α−ε |f (x)| |f (x)| |x + h|α−ε + C dh ≤ + C dh ∫ ∫ αaα αaα hα+1 h1+ε ∞







a

|f (x)| |x| +( + 1) αaα a

a

α−ε



C∫ a

1

h1+ε

dh =

α−ε |f (x)| |x| C +( + 1) < ∞, α αa a εaε

where in the first inequality, we used the fact that there exists a constant C such that |x|−α+ε |f (x)| ≤ C for all |x| ≥ a. Concerning the second integral in (1.57), by Hölder-continuity we have x

x

x−a

x−a

|f (x) − f (y)| Caλ−α −α−1+λ dy ≤ C (x − y) dy = < ∞, ∫ ∫ λ−α (x − y)α+1

whence the proof follows.

1.9 Marchaud fractional derivatives

� 35

λ Let us stress that functions f ∈ 𝒞loc (ℝ) ∩ L∞ (ℝ) satisfy the conditions of Proposition 1.44. In fact, this means that there exist functions f such that M Dα+ f is well-defined whereas Dα+ f is not. We can see that the Marchaud fractional derivative can be obtained also by taking the limit of suitable truncated fractional derivatives.

Definition 1.45. The truncated fractional derivatives are defined as (Dα+,ε f ) (x) =

x−ε



−∞

ε





x+ε

ε

f (x) − f (t) f (x) − f (x − t) α α dt = dt ∫ ∫ Γ(1 − α) Γ(1 − α) (x − t)α+1 t α+1

(1.58)

and (Dα−,ε f ) (x)

f (x) − f (t) f (x) − f (x + t) α α = dt = dt. ∫ ∫ α+1 Γ(1 − α) Γ(1 − α) (t − x) t α+1

(1.59)

Summarizing, we have (Dα±,ε f ) (x) =



f (x) − f (x ∓ t) α dt. ∫ Γ(1 − α) t α+1 ε

It is clear that if the integral defining M Dα+ f absolutely converges, then (M Dα± f ) (x) = lim (Dα±,ε f ) (x). ε→0

(1.60)

However, the previous limit exists even if the integral converges conditionally but not absolutely. Thus we can use Equation (1.60) to define the Marchaud fractional derivative on less regular functions. The next result states that it is possible to rewrite the truncated fractional derivative of a function f ∈ 𝕀α± (Lp (ℝ)) in terms of a suitable integral kernel. Lemma 1.46 ([232]). Let 1 ≤ p < 1/α, and let f ∈ 𝕀α+ (Lp (ℝ)) with f = I+α φ. Then the truncated derivative Dα+,ε f has the following representation: ∞

(Dα+,ε f ) (x) = ∫ K1,α (t)φ(x − εt)dt,

(1.61)

0

where the kernel K1,α (t)(t) =

α α sin απ t+ − (t − 1)+ ∈ L1 (ℝ) π t

(1.62)

36 � 1 Fractional integrals and derivatives has the properties ∞

∫ K1,α (t)dt = 1

and

K1,α (t) ≥ 0.

(1.63)

0

Proof. For t > 0, we have tα f (x) − f (x − t) = ( ∫ φ(x − ty)yα−1 dy − ∫ φ(x − ty)(y − 1)α−1 dy) , Γ(α) ∞



0

1

so α



f (x) − f (x − t) = t ∫ k1,α (y)φ(x − ty)dy,

(1.64)

0

where k1,α (y) =

α−1 yα−1 + − (y − 1)+ . Γ(α)

It is easy to check that the kernel k1,α ∈ L1 (ℝ+ ) and that ∫0 k1,α (y)dy = 0. In view of (1.64), ∞

(Dα+,ε f ) (x)





ε

0

α 1 z = ∫ 2 ∫ k1,α ( ) φ(x − z)dz dt Γ(1 − α) t t ∞

z/ε



t

0

0

0

0

φ(x − z) φ(x − εt) α α = dt ∫ k1,α (s)ds. ∫ ∫ k1,α (s)ds dz = ∫ Γ(1 − α) z Γ(1 − α) t Now we can put t

K1,α (t) =

α ∫0 k1,α (s)ds Γ(1 − α)t

,

and (1.61) becomes evident by using the Euler reflection formula (see Proposition B.2). Furthermore, it is clear that K1,α (t) ≥ 0. Now observe that K1,α ∈ L1 (ℝ+ ). Indeed, we have ∞

∫ K1,α (t)dt =

sin(πα) sin(πα) t α − (t − 1)α + dt ∫ 2 π t α +∞ 1 +∞

0

=

sin(πα) sin(πα) 1 α α−1 + t [1 − (1 − ) ] dt. ∫ π t α2 1

1.9 Marchaud fractional derivatives

α

Now observe that there exists a constant C > 0 such that 1 − (1 − 1t ) ≤ hence ∞

∫ K1,α (t)dt ≤

C t



37

for t ≥ 1, and

+∞

sin(πα) C sin(πα) sin(πα) C sin(πα) + + < ∞. ∫ t α−2 dt = π π(1 − α) α2 α2 1

0

To prove that ∫0 K1,α (t)dt = 1, fix φ ∈ 𝒞c∞ (ℝ) and let f = I+α φ. Then we know that φ(x) = limε→0 (Dα+,ε f )(x) for all x ∈ ℝ. Let x ∈ ℝ be such that φ(x) ≠ 0. Then ∞

󵄨 󵄨 K1,α (t)󵄨󵄨󵄨φ(x − εt)󵄨󵄨󵄨 ≤ ‖φ‖L∞ (ℝ) K1,α (t), where the right-hand side belongs to L1 (ℝ+ ). Hence we can use the dominated convergence theorem to state that ∞



0

0

φ(x) = lim (Dα+,ε f ) (x) = φ(x) = lim ∫ K1,α (t)φ(x − εt)dt = φ(x) ∫ K1,α (t)dt, ε→0

ε→0

which proves (1.63). The presentation of the truncated derivative given in (1.61) tells us, as a consequence, that if f = I±α φ, then φ is the Marchaud derivative of f , as the following lemma states. Lemma 1.47. Let f ∈ 𝕀α± (Lp (ℝ)) for 1 ≤ p < 1/α with f = I±α φ. Then φ(x) = M Dα± f .

(1.65)

Proof. Indeed, by (1.61) and (1.63) we have ∞

(Dα+,ε f ) (x) − φ(x) = ∫ K1,α (t)(φ(x − εt) − φ(x))dt. 0

Applying the generalized Minkowski inequality, we obtain ∞

ε→0 󵄩󵄩 α 󵄩 󵄩 󵄩 󵄩󵄩D+,ε f − φ󵄩󵄩󵄩Lp (ℝ) ≤ ∫ K1,α (t)󵄩󵄩󵄩φ(x − εt) − φ(x)󵄩󵄩󵄩Lp (ℝ) dt 󳨀→ 0 0

in view of the Lebesgue dominated convergence theorem and Lemma 1.2. Taking into account (1.60), (1.65) is proved. Note that Lemmas 1.46 and 1.47 yield the inequality 󵄩M α 󵄩 󵄩󵄩 α 󵄩󵄩 󵄩󵄩D+,ε f 󵄩󵄩Lp (ℝ) ≤ 󵄩󵄩󵄩󵄩 D+ f 󵄩󵄩󵄩󵄩Lp (ℝ) ,

f ∈ 𝕀α+ (Lp (ℝ)) ,

1 ≤ p < 1/α.

(1.66)

Indeed, in view of (1.63) and (1.65), from (1.61) we have
\[
\bigl\|D^{\alpha}_{+,\varepsilon}f\bigr\|_{L^p(\mathbb{R})} \le \|K_{1,\alpha}\|_{L^1(\mathbb{R}_+)}\|\varphi\|_{L^p(\mathbb{R})} = \|\varphi\|_{L^p(\mathbb{R})} = \bigl\|{}^{M}D^{\alpha}_{+}f\bigr\|_{L^p(\mathbb{R})}.
\]
Inequality (1.66) implies the equality
\[
\lim_{\varepsilon \to 0}\bigl\|D^{\alpha}_{+,\varepsilon}f\bigr\|_{L^p(\mathbb{R})} = \sup_{\varepsilon > 0}\bigl\|D^{\alpha}_{+,\varepsilon}f\bigr\|_{L^p(\mathbb{R})} \tag{1.67}
\]
for $f \in \mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$. In fact, the inequality obtained from (1.67) after replacing $=$ by $\le$ is obvious. The converse inequality follows from (1.66) in view of (1.60). In fact, the truncated derivatives converge in $L^p(\mathbb{R})$ if and only if $f \in \mathbb{I}^{\alpha}_{\pm}(L^p(\mathbb{R}))$.

Theorem 1.48. (i) Let the function $f$ belong to the class $\mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$ and/or to $\mathbb{I}^{\alpha}_{-}(L^p(\mathbb{R}))$ for some $1 < p < 1/\alpha$. Then $f \in L^r(\mathbb{R})$, $r = \frac{p}{1-\alpha p}$, and $D^{\alpha}_{+,\varepsilon}f$ or $D^{\alpha}_{-,\varepsilon}f$, respectively, converge in $L^p(\mathbb{R})$ as $\varepsilon \downarrow 0$, which implies that
\[
\sup_{\varepsilon>0}\bigl\|D^{\alpha}_{+,\varepsilon}f\bigr\|_{L^p(\mathbb{R})} < \infty \quad\text{and/or}\quad \sup_{\varepsilon>0}\bigl\|D^{\alpha}_{-,\varepsilon}f\bigr\|_{L^p(\mathbb{R})} < \infty. \tag{1.68}
\]

(ii) If $f \in L^r(\mathbb{R})$ for some $1 \le r < \infty$ and (1.58) and/or (1.59) converge in $L^p(\mathbb{R})$ as $\varepsilon \to 0$, then $f$ belongs to the class $\mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$ and/or $\mathbb{I}^{\alpha}_{-}(L^p(\mathbb{R}))$ for all $1 \le p < 1/\alpha$.

Proof. (i) Let, for example, the function $f$ belong to the class $\mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$ for some $p > 1$. Then $f \in L^r(\mathbb{R})$, $r = \frac{p}{1-\alpha p}$, according to the Hardy–Littlewood theorem (Theorem 1.10). Moreover, the $L^p$ convergence of (1.58) follows from Lemma 1.47.

(ii) Let $p \ge 1$ be fixed, $f \in L^r(\mathbb{R})$ for some $r \ge 1$, and assume that, e. g., (1.58) converges in $L^p(\mathbb{R})$. We have to show that there exists a function $\varphi \in L^p(\mathbb{R})$ such that
\[
f = I^{\alpha}_{+}\varphi, \tag{1.69}
\]
so that $f \in \mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$. Instead of (1.69), we will prove that
\[
f(x) - f(x-h) = \bigl(I^{\alpha}_{+}\varphi\bigr)(x) - \bigl(I^{\alpha}_{+}\varphi\bigr)(x-h) \tag{1.70}
\]
for all $h > 0$. Let us denote
\[
(A_h\varphi)(x) = \int_{\mathbb{R}} a_h(x-t)\varphi(t)\,dt, \quad\text{where }\ a_h(t) = \frac{1}{\Gamma(\alpha)}\bigl(t_+^{\alpha-1} - (t-h)_+^{\alpha-1}\bigr),
\]
so that the desired result (1.70) is $f(x) - f(x-h) = (A_h\varphi)(x)$. Note that $A_h$ is the convolution operator with integrable kernel $a_h \in L^1(\mathbb{R})$, and therefore the composition $A_h D^{\alpha}_{+,\varepsilon}$ is, for a fixed $\varepsilon > 0$, a bounded operator in $L^r(\mathbb{R})$ for all $r \ge 1$. For functions $f$ from $\mathcal{C}_c^{\infty}$, we have
\[
A_h D^{\alpha}_{+,\varepsilon}f = D^{\alpha}_{+,\varepsilon}A_h f = \bigl(D^{\alpha}_{+,\varepsilon}I^{\alpha}_{+}f\bigr)(x) - \bigl(D^{\alpha}_{+,\varepsilon}I^{\alpha}_{+}f\bigr)(x-h).
\]
Hence by representation (1.61)
\[
A_h D^{\alpha}_{+,\varepsilon}f = \int_0^{\infty} K_{1,\alpha}(t)\bigl(f(x-\varepsilon t) - f(x-h-\varepsilon t)\bigr)\,dt. \tag{1.71}
\]
Since $\mathcal{C}_c^{\infty}$ is dense in $L^r(\mathbb{R})$, (1.71) holds for all $f \in L^r(\mathbb{R})$ in view of the boundedness of the operators on the left- and right-hand sides. The required result (1.70) will be obtained from (1.71) by letting $\varepsilon \to 0$. In view of (1.63), the right-hand side in (1.71) converges to $f(x) - f(x-h)$ in $L^r(\mathbb{R})$. Consequently, the limit of the left-hand side of (1.71) exists, and so
\[
\lim_{\varepsilon \to 0} A_h D^{\alpha}_{+,\varepsilon}f = f(x) - f(x-h). \tag{1.72}
\]

Furthermore, since the operator $A_h$ is bounded in $L^p(\mathbb{R})$ and (1.58) converges in $L^p(\mathbb{R})$, the limit
\[
\lim_{\varepsilon \to 0\,(L^p(\mathbb{R}))} A_h D^{\alpha}_{+,\varepsilon}f = A_h\Bigl(\lim_{\varepsilon \to 0\,(L^p(\mathbb{R}))} D^{\alpha}_{+,\varepsilon}f\Bigr) = A_h\varphi \tag{1.73}
\]
exists, where $\varphi = {}^{M}D^{\alpha}_{+}f \in L^p(\mathbb{R})$. Since $A_h D^{\alpha}_{+,\varepsilon}f$ converges in both the $L^r(\mathbb{R})$- and $L^p(\mathbb{R})$-norms, the limit functions must coincide almost everywhere, and from (1.72) we obtain that $A_h\varphi = f(x) - f(x-h)$, which coincides with (1.70). Equation (1.70) is thus proved. It remains to note that (1.70) implies $f(x) = (I^{\alpha}_{+}\varphi)(x) + c$ for a suitable constant $c$. However, since $f \in L^r(\mathbb{R})$ and $I^{\alpha}_{+}\varphi \in L^q(\mathbb{R})$ with $q = \frac{p}{1-\alpha p}$, we get that $c = 0$.

Remark 1.49. We can prove, by means of weak compactness in $L^p(\mathbb{R})$, $1 < p < 1/\alpha$, and strong and weak continuity of bounded linear operators, that condition (1.68) is sufficient to guarantee that (1.58) or (1.59) converges in $L^p(\mathbb{R})$. Indeed, if $\sup_{\varepsilon>0}\|D^{\alpha}_{+,\varepsilon}f\|_{L^p(\mathbb{R})} < +\infty$, then there exist a sequence $\varepsilon_k \to 0$ and a function $\varphi \in L^p(\mathbb{R})$ such that $\operatorname{w-lim}_{k\to+\infty} D^{\alpha}_{+,\varepsilon_k}f = \varphi$, where by $\operatorname{w-lim}$ we denote the limit in the weak topology of $L^p(\mathbb{R})$. Since $A_h$ is a bounded linear operator, it is also weakly continuous, and then we can substitute the limit with the weak limit in (1.73) to get
\[
\operatorname*{w-lim}_{k\to+\infty} A_h D^{\alpha}_{+,\varepsilon_k}f = A_h\varphi.
\]
Thus we get $f(x) - f(x-h) = A_h\varphi$, which in turn implies $f = I^{\alpha}_{+}\varphi$. Thus $f \in \mathbb{I}^{\alpha}_{+}(L^p(\mathbb{R}))$ for some $1 < p < 1/\alpha$, and then by item (i) of Theorem 1.48 we get the desired convergence. This is done in [232, Theorem 6.2].

Let us also stress that for $p = 1$ a similar argument can be carried out if we consider the following additional condition:
\[
\lim_{M\to+\infty}\sup_{\varepsilon>0}\int_{\{|D^{\alpha}_{\pm,\varepsilon}f|>M\}}\bigl|D^{\alpha}_{\pm,\varepsilon}f(x)\bigr|\,dx = 0.
\]
Indeed, in such a case, we are able to show that $f \in \mathbb{I}^{\alpha}_{\pm}(L^1(\mathbb{R}))$ with the same proof as before, once we note that $\{D^{\alpha}_{\pm,\varepsilon}f\}_{\varepsilon>0}$ is weakly compact in $L^1(\mathbb{R})$ by means of the Dunford–Pettis theorem (see [38, Theorem 4.30]). This, however, does not guarantee the strong convergence in $L^1(\mathbb{R})$ of (1.58) or (1.59).

Therefore, the previous theorem guarantees that for $f \in \mathbb{I}^{\alpha}_{\pm}(L^p(\mathbb{R}))$ with $p > 1$, the Riemann–Liouville fractional derivatives coincide with the Marchaud fractional derivatives, whereas if the Marchaud fractional derivative of $f$ exists as a limit in $L^p(\mathbb{R})$, then $f \in \mathbb{I}^{\alpha}_{\pm}(L^p(\mathbb{R}))$, and it coincides with the Riemann–Liouville fractional derivative.

Let us now study the behavior of ${}^{M}D^{\alpha}_{+}$ on $W^{1,p}(\mathbb{R})$, as we did in Theorem 1.39 for the Dzhrbashyan–Caputo derivative.

Theorem 1.50. For all $\alpha \in (0,1)$ and $p \in [1,\infty]$, there exists a constant $C_{\alpha,p} > 0$ such that if $f \in W^{1,p}(\mathbb{R})$, then ${}^{M}D^{\alpha}_{+}f \in L^p(\mathbb{R})$, and
\[
\bigl\|{}^{M}D^{\alpha}_{+}f\bigr\|_{L^p(\mathbb{R})} \le C_{\alpha,p}\|f\|_{W^{1,p}(\mathbb{R})}. \tag{1.74}
\]

Proof. Let us first argue with $p < \infty$. Note that
\[
\bigl\|{}^{M}D^{\alpha}_{+}f\bigr\|_{L^p(\mathbb{R})} = \Biggl(\int_{\mathbb{R}}\biggl|\int_0^{+\infty}\frac{f(x)-f(x-y)}{y^{1+\alpha}}\,dy\biggr|^p dx\Biggr)^{\frac1p}
\]
\[
\le 2^{1-\frac1p}\Biggl(\biggl(\int_{\mathbb{R}}\biggl|\int_0^{1}\frac{f(x)-f(x-y)}{y^{1+\alpha}}\,dy\biggr|^p dx\biggr)^{\frac1p} + \biggl(\int_{\mathbb{R}}\biggl|\int_1^{+\infty}\frac{f(x)-f(x-y)}{y^{1+\alpha}}\,dy\biggr|^p dx\biggr)^{\frac1p}\Biggr) =: 2^{1-\frac1p}(I_1+I_2). \tag{1.75}
\]

To work with $I_1$, observe that since $f \in W^{1,p}(\mathbb{R})$, we have $f \in AC[x-y,x]$, and thus
\[
f(x)-f(x-y) = \int_0^y f'(x-z)\,dz.
\]
Therefore we can rewrite
\[
\int_0^1 \frac{f(x)-f(x-y)}{y^{1+\alpha}}\,dy = \int_0^1\int_0^y \frac{f'(x-z)}{y^{1+\alpha}}\,dz\,dy.
\]
To use the Fubini theorem, note that
\[
\int_0^1\int_0^y \frac{|f'(x-z)|}{y^{1+\alpha}}\,dz\,dy = \int_0^1 \bigl|f'(x-z)\bigr|\Biggl(\int_z^1 y^{-1-\alpha}\,dy\Biggr)dz
\le \frac1\alpha\int_0^1\frac{|f'(x-z)|}{z^{\alpha}}\,dz + \frac1\alpha\int_0^1\bigl|f'(x-z)\bigr|\,dz =: \frac{I_3(x)+I_4(x)}{\alpha}.
\]
Concerning $I_3$, by the generalized Minkowski inequality we have
\[
\|I_3\|_{L^p(\mathbb{R})} \le \bigl\|f'\bigr\|_{L^p(\mathbb{R})}\int_0^1 z^{-\alpha}\,dz = \frac{\|f'\|_{L^p(\mathbb{R})}}{1-\alpha} < \infty,
\]
which implies that $I_3(x) < \infty$ for a. a. $x \in \mathbb{R}$, whereas for $I_4$, by the Hölder inequality we have
\[
I_4(x) \le \bigl\|f'\bigr\|_{L^p(\mathbb{R})} < \infty.
\]

Hence we can use the Fubini theorem to obtain
\[
I_1 = \Biggl(\int_{\mathbb{R}}\biggl|\frac1\alpha\int_0^1 f'(x-z)\bigl(z^{-\alpha}-1\bigr)\,dz\biggr|^p dx\Biggr)^{\frac1p}
\le \frac1\alpha\int_0^1\biggl(\int_{\mathbb{R}}\bigl|f'(x-z)\bigr|^p dx\biggr)^{\frac1p} z^{-\alpha}\,dz = \frac{\|f'\|_{L^p(\mathbb{R})}}{(1-\alpha)\alpha} < \infty, \tag{1.76}
\]
where we also used the generalized Minkowski inequality. For $I_2$, again by the generalized Minkowski inequality we have
\[
I_2 \le \int_1^{+\infty} y^{-1-\alpha}\biggl(\int_{\mathbb{R}}\bigl|f(x)-f(x-y)\bigr|^p dx\biggr)^{\frac1p} dy \le \frac{2\|f\|_{L^p(\mathbb{R})}}{\alpha} < \infty. \tag{1.77}
\]
So, we proved that $|({}^{M}D^{\alpha}_{+}f)(x)| < \infty$ for a. a. $x \in \mathbb{R}$ and ${}^{M}D^{\alpha}_{+}f \in L^p(\mathbb{R})$. Plugging (1.76) and (1.77) into (1.75), we get (1.74).

To handle the case $p = \infty$, note that for a. a. $x \in \mathbb{R}$,
\[
\bigl|\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x)\bigr| \le \int_0^1\frac{|f(x)-f(x-y)|}{y^{1+\alpha}}\,dy + \int_1^{+\infty}\frac{|f(x)-f(x-y)|}{y^{1+\alpha}}\,dy
\]
\[
\le \int_0^1\int_0^y\frac{|f'(x-z)|}{y^{1+\alpha}}\,dz\,dy + \frac{2\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha}
\le \bigl\|f'\bigr\|_{L^{\infty}(\mathbb{R})}\int_0^1\int_0^y y^{-1-\alpha}\,dz\,dy + \frac{2\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha}
\le \frac{\|f'\|_{L^{\infty}(\mathbb{R})}}{1-\alpha} + \frac{2\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha} \le C_{\alpha,\infty}\|f\|_{W^{1,\infty}(\mathbb{R})}
\]
for some $C_{\alpha,\infty} > 0$.
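The estimates above can also be illustrated numerically. As a quick sanity check of the Marchaud construction, recall that with the standard normalization ${}^{M}D^{\alpha}_{+}f(x) = \frac{\alpha}{\Gamma(1-\alpha)}\int_0^{\infty}(f(x)-f(x-t))t^{-1-\alpha}\,dt$ (the constant is an assumption of this sketch) one expects ${}^{M}D^{\alpha}_{+}e^{x} = e^{x}$, because $\int_0^{\infty}(1-e^{-t})t^{-1-\alpha}\,dt = \Gamma(1-\alpha)/\alpha$. A minimal Python verification of this integral identity:

```python
import math

def marchaud_integral(alpha, T=60.0, h=1e-4):
    """Midpoint quadrature of I(alpha) = ∫_0^∞ (1 - e^{-t}) t^{-1-α} dt.
    The tail over (T, ∞) is added analytically as T^{-α}/α,
    since 1 - e^{-t} ≈ 1 there."""
    n = int(T / h)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += (1.0 - math.exp(-t)) * t ** (-1.0 - alpha) * h
    return total + T ** (-alpha) / alpha

alpha = 0.4
approx = marchaud_integral(alpha)
exact = math.gamma(1.0 - alpha) / alpha   # known closed form Γ(1-α)/α
```

The quadrature agrees with the closed form to a few digits, which is all this back-of-the-envelope check is meant to show.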

Similar arguments in the case $(a,b) \subset \mathbb{R}$ lead to the following definition.

Definition 1.51. We define the Marchaud fractional derivatives on bounded domains as
\[
\bigl({}^{M}D^{\alpha}_{a+}f\bigr)(x) = \frac{\mathbf{1}_{(a,b)}(x)}{\Gamma(1-\alpha)}\Biggl(f(x)(x-a)^{-\alpha} + \alpha\int_a^x \bigl(f(x)-f(t)\bigr)(x-t)^{-\alpha-1}\,dt\Biggr)
\]
and
\[
\bigl({}^{M}D^{\alpha}_{b-}f\bigr)(x) = \frac{\mathbf{1}_{(a,b)}(x)}{\Gamma(1-\alpha)}\Biggl(f(x)(b-x)^{-\alpha} + \alpha\int_x^b \bigl(f(x)-f(t)\bigr)(t-x)^{-\alpha-1}\,dt\Biggr).
\]
These operators coincide with the Riemann–Liouville fractional derivatives, provided that the integrals converge in $L^p(a,b)$ for some $p > 1$. As for the Marchaud derivatives, we can define some truncated derivatives and then obtain a condition for $f \in \mathbb{I}^{\alpha}_{a+}(L^p(a,b))$ according to their convergence. More precisely, according to [232, Theorem 13.4], we have that $f = I^{\alpha}_{a+}\varphi$ for some $\varphi \in L^p(a,b)$, where $1 < p < \infty$, if and only if $f(x)(x-a)^{-\alpha} \in L^p(a,b)$ and
\[
\sup_{\varepsilon>0}\int_{a+\varepsilon}^b \bigl|\psi_{\varepsilon}(x)\bigr|^p dx < \infty, \quad\text{where }\ \psi_{\varepsilon}(x) = \int_a^{x-\varepsilon}\frac{f(x)-f(t)}{(x-t)^{1+\alpha}}\,dt, \quad x \in [a+\varepsilon, b].
\]
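For a concrete sanity check of Definition 1.51, take $a = 0$ and $f(x) = x$: the Riemann–Liouville power rule gives $D^{\alpha}_{0+}x = x^{1-\alpha}/\Gamma(2-\alpha)$, and the Marchaud form should reproduce it. A minimal numerical sketch (grid size and tolerance are ad hoc choices):

```python
import math

def marchaud_0plus(f, x, alpha, n=200000):
    """Bounded-domain Marchaud derivative of Definition 1.51 with a = 0,
    evaluated by the midpoint rule on (0, x); the integrand's singularity
    at t = x is integrable since f(x) - f(t) vanishes there for f(t) = t."""
    h = x / n
    integral = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        integral += (f(x) - f(t)) * (x - t) ** (-alpha - 1.0) * h
    return (f(x) * x ** (-alpha) + alpha * integral) / math.gamma(1.0 - alpha)

alpha, x = 0.3, 1.5
approx = marchaud_0plus(lambda t: t, x, alpha)
exact = x ** (1.0 - alpha) / math.gamma(2.0 - alpha)  # power rule
```

The agreement of the two values illustrates the coincidence with the Riemann–Liouville derivative stated above.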

1.10 Fractional derivatives of higher order

Up to now, we only considered $\alpha \in (0,1)$. However, we can extend the definition of Riemann–Liouville and Dzhrbashyan–Caputo fractional derivatives to the general case $\alpha > 0$. Let us denote by $\lfloor\alpha\rfloor := \max\{n \in \mathbb{Z} : n \le \alpha\}$ the integer part of $\alpha$.

Definition 1.52. We define the left- and right-sided Riemann–Liouville fractional derivatives of order $\alpha > 0$ of a function $f : [a,b] \to \mathbb{R}$ as
\[
\bigl(D^{\alpha}_{a+}f\bigr)(x) = \frac{1}{\Gamma(\lfloor\alpha\rfloor+1-\alpha)}\frac{d^{\lfloor\alpha\rfloor+1}}{dx^{\lfloor\alpha\rfloor+1}}\int_a^x f(t)(x-t)^{\lfloor\alpha\rfloor-\alpha}\,dt \tag{1.78}
\]
and
\[
\bigl(D^{\alpha}_{b-}f\bigr)(x) = \frac{(-1)^{\lfloor\alpha\rfloor+1}}{\Gamma(\lfloor\alpha\rfloor+1-\alpha)}\frac{d^{\lfloor\alpha\rfloor+1}}{dx^{\lfloor\alpha\rfloor+1}}\int_x^b f(t)(t-x)^{\lfloor\alpha\rfloor-\alpha}\,dt,
\]
provided that the involved quantities are well-defined. For $\alpha \in \mathbb{N}$, these definitions coincide with those of the classical integer-order derivatives.

Concerning the domain of these operators, we can prove a result about the existence of such a derivative for suitably regular functions, which is similar to Proposition 1.28.

Proposition 1.53. Let $\alpha > 0$, $n = \lfloor\alpha\rfloor + 1$, and $-\infty < a < b < +\infty$. If $f \in \mathcal{C}^{n-1}[a,b]$ with $f^{(n-1)} \in AC[a,b]$, then for a. a. $x \in [a,b]$,
\[
\bigl(D^{\alpha}_{a+}f\bigr)(x) = \sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{\Gamma(1+k-\alpha)}(x-a)^{k-\alpha} + \frac{1}{\Gamma(n-\alpha)}\int_a^x (x-t)^{n-1-\alpha}f^{(n)}(t)\,dt. \tag{1.79}
\]

Proof. From the Taylor formula with integral remainder it follows that for $f \in \mathcal{C}^{n-1}[a,b]$ with $f^{(n-1)} \in AC[a,b]$ and for $x \in [a,b]$, we have
\[
f(x) = \sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{k!}(x-a)^k + \frac{1}{(n-1)!}\int_a^x (x-t)^{n-1}f^{(n)}(t)\,dt.
\]
Substituting this relation into (1.78), we get
\[
\bigl(D^{\alpha}_{a+}f\bigr)(x) = \frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\Biggl(\sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{k!}\int_a^x (t-a)^k(x-t)^{n-1-\alpha}\,dt + \frac{1}{(n-1)!}\int_a^x\int_a^t (x-t)^{n-1-\alpha}(t-s)^{n-1}f^{(n)}(s)\,ds\,dt\Biggr)
\]
\[
= \frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\Biggl(\sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{k!}\frac{\Gamma(k+1)\Gamma(n-\alpha)}{\Gamma(k+1+n-\alpha)}(x-a)^{k+n-\alpha} + \frac{1}{(n-1)!}\int_a^x f^{(n)}(s)\int_s^x (x-t)^{n-1-\alpha}(t-s)^{n-1}\,dt\,ds\Biggr)
\]
\[
= \frac{d^n}{dx^n}\Biggl(\sum_{k=0}^{n-1}\frac{f^{(k)}(a)(x-a)^{k+n-\alpha}}{\Gamma(k+1+n-\alpha)} + \frac{1}{\Gamma(2n-\alpha)}\int_a^x f^{(n)}(s)(x-s)^{2n-\alpha-1}\,ds\Biggr),
\]

where we could apply Lemma 1.11 and the Fubini theorem due to the fact that $(x-\cdot)^{2n-\alpha-1}|f^{(n)}(\cdot)|$ belongs to $L^1(a,x)$. Taking the $n$th derivative, we get (1.79).

In the same way, we can adapt the proof of Proposition 1.29 to the general case $\alpha > 0$.

Proposition 1.54. Let $\alpha > 0$ and set $n = \lfloor\alpha\rfloor + 1$. Suppose $f \in W^{n,p}(\mathbb{R})$ with $p < \frac{1}{n-\alpha}$. Further, assume that for any $0 \le k \le n-1$,
\[
\lim_{x\to-\infty}|x|^{k+n-\alpha}f^{(k)}(x) = 0, \quad\text{and}\quad \int_{-\infty}^{1}\frac{|f(x)|+|x|^n|f^{(n)}(x)|}{|x|^{1+\alpha-n}}\,dx < \infty. \tag{1.80}
\]

Then for a. a. $x \in \mathbb{R}$,
\[
\bigl(D^{\alpha}_{+}f\bigr)(x) = \frac{1}{\Gamma(n-\alpha)}\int_{-\infty}^x (x-s)^{n-\alpha-1}f^{(n)}(s)\,ds. \tag{1.81}
\]

Proof. Let us mention that condition (1.80) supplies the relation
\[
\int_{-\infty}^x (x-t)^{n-\alpha-1}f(t)\,dt = \lim_{a\to-\infty}\int_a^x (x-t)^{n-\alpha-1}f(t)\,dt.
\]
As before, we have
\[
\int_a^x (x-t)^{n-\alpha-1}f(t)\,dt = \sum_{k=0}^{n-1}\frac{f^{(k)}(a)\Gamma(n-\alpha)}{\Gamma(k+1+n-\alpha)}(x-a)^{k+n-\alpha} + \frac{\Gamma(n-\alpha)}{\Gamma(2n-\alpha)}\int_a^x f^{(n)}(s)(x-s)^{2n-\alpha-1}\,ds,
\]
and then, taking the limit as $a \to -\infty$, we get from (1.80) that
\[
\int_{-\infty}^x (x-t)^{n-\alpha-1}f(t)\,dt = \frac{\Gamma(n-\alpha)}{\Gamma(2n-\alpha)}\int_{-\infty}^x f^{(n)}(s)(x-s)^{2n-\alpha-1}\,ds. \tag{1.82}
\]

Differentiating (1.82) $n$ times, dividing by $\Gamma(n-\alpha)$, and observing that (1.81) is finite a. e., thanks to the Hardy–Littlewood theorem (Theorem 1.10), we conclude the proof.

Definition 1.55. We define the left- and right-sided Dzhrbashyan–Caputo fractional derivatives of order $\alpha > 0$ of a function $f \in \mathcal{C}^{\lfloor\alpha\rfloor}[a,b]$ such that $f^{(\lfloor\alpha\rfloor)} \in AC[a,b]$ as
\[
\bigl({}^{C}D^{\alpha}_{a+}f\bigr)(x) = \frac{1}{\Gamma(\lfloor\alpha\rfloor+1-\alpha)}\int_a^x (x-t)^{\lfloor\alpha\rfloor-\alpha}f^{(\lfloor\alpha\rfloor+1)}(t)\,dt \tag{1.83}
\]
and
\[
\bigl({}^{C}D^{\alpha}_{b-}f\bigr)(x) = \frac{(-1)^{\lfloor\alpha\rfloor+1}}{\Gamma(\lfloor\alpha\rfloor+1-\alpha)}\int_x^b (t-x)^{\lfloor\alpha\rfloor-\alpha}f^{(\lfloor\alpha\rfloor+1)}(t)\,dt. \tag{1.84}
\]
The values in (1.83) and (1.84) are well defined for a. a. $x \in [a,b]$ since they are the convolution products of two functions in $L^1(a,b)$.

As a direct consequence of Proposition 1.53, we get the following result.

Proposition 1.56. Let $\alpha > 0$, $n = \lfloor\alpha\rfloor + 1$, and $-\infty < a < b < +\infty$. If $f \in \mathcal{C}^{n-1}[a,b]$ with $f^{(n-1)} \in AC[a,b]$ and $f^{(k)}(a) = 0$ for any $0 \le k \le n-1$, then, for a. a. $x \in [a,b]$, we have $(D^{\alpha}_{a+}f)(x) = ({}^{C}D^{\alpha}_{a+}f)(x)$. In the general case (i. e., where $f^{(k)}(a) \ne 0$), for a. a. $x \in [a,b]$, we have the equality
\[
\bigl({}^{C}D^{\alpha}_{a+}f\bigr)(x) = \Biggl(D^{\alpha}_{a+}\Biggl(f - \sum_{k=0}^{n-1}\frac{f^{(k)}(a)}{k!}(\cdot-a)^k\Biggr)\Biggr)(x), \tag{1.85}
\]

the proof of which is left to the reader; see Exercise 1.12.

Definition 1.57. We define the left- and right-sided Dzhrbashyan–Caputo fractional derivatives on the whole real line of a function $f \in W^{n,p}(\mathbb{R})$ with $p < \frac{1}{n-\alpha}$, where $n = \lfloor\alpha\rfloor + 1$, as
\[
\bigl({}^{C}D^{\alpha}_{+}f\bigr)(x) = \frac{1}{\Gamma(n-\alpha)}\int_{-\infty}^x (x-s)^{n-\alpha-1}f^{(n)}(s)\,ds \tag{1.86}
\]
and
\[
\bigl({}^{C}D^{\alpha}_{-}f\bigr)(x) = \frac{(-1)^n}{\Gamma(n-\alpha)}\int_x^{\infty} (s-x)^{n-\alpha-1}f^{(n)}(s)\,ds.
\]
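As a quick numerical illustration of the Dzhrbashyan–Caputo derivative, consider formula (1.83) on $[0,b]$ (rather than on $\mathbb{R}$, to keep the integral finite) with $f(x) = x^2$ and $\alpha \in (1,2)$, so that $n = 2$ and $f'' \equiv 2$: the quadrature can be compared with the power rule $2x^{2-\alpha}/\Gamma(3-\alpha)$, with which, by Proposition 1.56, the Riemann–Liouville derivative also agrees since $f(0) = f'(0) = 0$. A minimal sketch (grid size and tolerance are ad hoc):

```python
import math

def caputo_0plus_x2(x, alpha, n_steps=100000):
    """Caputo derivative (1.83) of f(t) = t^2 with a = 0 and α ∈ (1, 2):
    (1/Γ(2-α)) ∫_0^x (x-t)^{1-α} f''(t) dt with f'' ≡ 2, by the midpoint rule."""
    h = x / n_steps
    integral = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * h
        integral += (x - t) ** (1.0 - alpha) * 2.0 * h
    return integral / math.gamma(2.0 - alpha)

alpha, x = 1.5, 2.0
approx = caputo_0plus_x2(x, alpha)
exact = 2.0 * x ** (2.0 - alpha) / math.gamma(3.0 - alpha)  # power rule
```

The weakly singular factor $(x-t)^{1-\alpha}$ is integrable, so even a plain midpoint rule converges to the closed-form value.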

Clearly, by the Hardy–Littlewood theorem (Theorem 1.10), $({}^{C}D^{\alpha}_{+}f)(x)$ is well defined for a. a. $x \in \mathbb{R}$ if $f \in W^{n,p}(\mathbb{R})$, because $f^{(n)} \in L^p(\mathbb{R})$ with $n-\alpha < 1/p$. Again, as a consequence of Proposition 1.54, we get the following result.

Proposition 1.58. Let $\alpha > 0$ and $n = \lfloor\alpha\rfloor + 1$. Assume that $f \in W^{n,p}(\mathbb{R})$ with $p < \frac{1}{n-\alpha}$. Furthermore, assume that for any $k \le n-1$,
\[
\lim_{x\to-\infty}|x|^{k+n-\alpha}f^{(k)}(x) = 0, \quad\text{and}\quad \int_{-\infty}^{1}\frac{|f(x)|+|x|^n|f^{(n)}(x)|}{|x|^{1+\alpha-n}}\,dx < \infty.
\]
Then for a. a. $x \in \mathbb{R}$, we have $(D^{\alpha}_{+}f)(x) = ({}^{C}D^{\alpha}_{+}f)(x)$.

The extension of the Marchaud fractional derivative to the general case $\alpha > 0$ requires some preliminary observations. Define the $n$th-order forward and backward differences in $x$ of size $\tau > 0$ of a function $f : (a,b) \to \mathbb{R}$ as
\[
\nabla^n_{\pm,\tau}f(x) = \sum_{k=0}^n (-1)^k\binom{n}{k}f(x \mp k\tau),
\]
provided that $x \mp n\tau \in (a,b)$. With such quantities, we can define a higher-order version of the Marchaud derivatives. However, let us first deduce an alternative representation of the Riemann–Liouville derivative of higher order on the whole real line. To do this, we make use of some well-known properties of the finite differences, which we recall

46 � 1 Fractional integrals and derivatives here. The first one is an integral representation, which has been proved, for instance, in [232, Lemma 5.4] for forward differences but can be easily extended to backward ones. Proposition 1.59. Let f ∈ 𝒞 n−1 [a, b] with f (n−1) ∈ AC[a, b], x ∈ (a, b), and ℓ ∈ ℕ with ℓ ≥ n. Let also τ > 0 be such that x ∓ ℓτ ∈ (a, b). Then x−kτ

ℓ ∇+,τ f (x)

ℓ−1 1 ℓ = ∑ (−1)k ( ) ∫ (x − kτ − t)n−1 f (n) (t)dt (n − 1)! k=0 k x−ℓτ

and x+ℓτ

ℓ ∇−,τ f (x) =

(−1)n ℓ−1 ℓ ∑ (−1)k ( ) ∫ (t − kτ − x)n−1 f (n) (t)dt. (n − 1)! k=0 k x+kτ

We will also use the following alternative integral representation, which can be easily proved by induction (see also [231, Equation (3.50)]). Proposition 1.60. Let ℓ ∈ ℕ, τ > 0, and Qn (τ) = [0, τ]n . Then for f ∈ 𝒞 ℓ [a, b], x ∈ (a, b), and τ > 0 such that x ∓ ℓτ ∈ (a, b), we have ℓ

ℓ ∇+,τ f (x) = ∫ f (ℓ) (x − ∑ sj ) ds1 ⋅ ⋅ ⋅ dsℓ Qn (τ)

j=1

and ℓ

ℓ ∇−,τ f (x) = (−1)ℓ ∫ f (ℓ) (x + ∑ sj ) ds1 ⋅ ⋅ ⋅ dsℓ . Qn (τ)

j=1

As a corollary of Propositions 1.59 and 1.60, we easily get the following result. Corollary 1.61. Let f ∈ 𝒞 ℓ (a, b) and x ∈ (a, b). Then limt↓0

ℓ ∇±,τ f (x) τℓ

= f (ℓ) (x).

Finally, let us state the following lemma, which is an immediate consequence of [231, Lemma 3.2]. Lemma 1.62. For ℓ ∈ ℕ and polynomials P of degree at most ℓ − 1, ℓ ℓ ∑(−1)j ( )P(j) = 0. j j=0

(1.87)

We prove the following result for quite regular functions, but it clearly holds in the case of less strict assumptions.

1.10 Fractional derivatives of higher order

� 47

Proposition 1.63. Let α > 0 with α ∈ ̸ ℕ and ℓ ∈ ℕ with ℓ > α, and set n = ⌊α⌋ + 1. Let 1 f ∈ 𝒞 n (ℝ) ∩ W n,p (ℝ) for some 1 < p < n−α , for any 0 ≤ k ≤ n − 1, lim |x|k+n−α f (k) (x) = 0,

(1.88)

x→−∞

and there exists β ∈ (n − α, n) such that lim |x|β f (n) (x) = 0.

(1.89)

x→−∞

Assume further that 1

∫ −∞

(|f (x)| + |x|n |f (n) (x)|) dx < +∞. |x|1+α−n

(1.90)

Define the functions ℓ ℓ Aℓ (α) = ∑ (−1)k ( )k α k k=0

and

χ(ℓ, α) = Γ(−α)Aℓ (α).

(1.91)

Then for a. a. x ∈ ℝ, (Dα+ f ) (x)



1 ℓ = f (x)ds. ∫ s−1−α ∇+,s χ(ℓ, α)

(1.92)

0

Proof. First, notice that $f$ satisfies the conditions of Proposition 1.58, and therefore $(D^{\alpha}_{+}f)(x) = ({}^{C}D^{\alpha}_{+}f)(x)$ for a. a. (in fact, for all) $x \in \mathbb{R}$. Let us fix $x \in \mathbb{R}$. By Proposition 1.59 we get
\[
\int_0^{\infty}\frac{\nabla^{\ell}_{+,\tau}f(x)}{\tau^{1+\alpha}}\,d\tau = \frac{1}{(n-1)!}\int_0^{\infty}\sum_{k=0}^{\ell-1}(-1)^k\binom{\ell}{k}\int_{k\tau}^{\ell\tau}\frac{(s-k\tau)^{n-1}f^{(n)}(x-s)}{\tau^{1+\alpha}}\,ds\,d\tau
\]
\[
= \frac{1}{(n-1)!}\sum_{k=0}^{\ell-1}(-1)^k\binom{\ell}{k}\int_0^{\infty}\int_{k\tau}^{\ell\tau}\frac{(s-k\tau)^{n-1}f^{(n)}(x-s)}{\tau^{1+\alpha}}\,ds\,d\tau
\]
\[
= \frac{1}{(n-1)!}\sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}\int_0^{\infty}\int_{k\tau}^{\ell\tau}\frac{(s-k\tau)^{n-1}f^{(n)}(x-s)}{\tau^{1+\alpha}}\,ds\,d\tau + \frac{1}{(n-1)!}\int_0^{\infty}\int_0^{\ell\tau}\frac{s^{n-1}f^{(n)}(x-s)}{\tau^{1+\alpha}}\,ds\,d\tau =: G_1 + G_2. \tag{1.93}
\]

Consider $G_1$, applying the Fubini theorem. Define $M_k = \max\{\frac{x}{k}+1, 1\}$ and split the integral as follows:
\[
\int_0^{\infty}\int_{k\tau}^{\ell\tau}\bigl|\tau^{-1-\alpha}(s-k\tau)^{n-1}f^{(n)}(x-s)\bigr|\,ds\,d\tau
= \int_0^{M_k}\int_{k\tau}^{\ell\tau}\bigl|\tau^{-1-\alpha}(s-k\tau)^{n-1}f^{(n)}(x-s)\bigr|\,ds\,d\tau + \int_{M_k}^{\infty}\int_{k\tau}^{\ell\tau}\bigl|\tau^{-1-\alpha}(s-k\tau)^{n-1}f^{(n)}(x-s)\bigr|\,ds\,d\tau =: I_1 + I_2.
\]
Concerning $I_1$, recall that $f \in \mathcal{C}^n(\mathbb{R})$, and therefore $f^{(n)}(x-s)$ is bounded for $\tau \in [0,M_k]$ and $s \in [k\tau,\ell\tau] \subseteq [0,\ell M_k]$. This means that
\[
I_1 \le C\int_0^{M_k}\tau^{-1-\alpha}\int_{k\tau}^{\ell\tau}(s-k\tau)^{n-1}\,ds\,d\tau = \frac{C(\ell-k)^n}{n}\int_0^{M_k}\tau^{n-1-\alpha}\,d\tau = \frac{C(\ell-k)^n}{n(n-\alpha)}M_k^{n-\alpha},
\]

where $n-\alpha = \lfloor\alpha\rfloor + 1 - \alpha > 0$. Concerning $I_2$, note that $\tau \ge \frac{x}{k}+1$ implies that $s \ge k\tau \ge k(\frac{x}{k}+1) = x+k$ and thus $s-x \ge k$. Since $f \in \mathcal{C}^n(\mathbb{R})$ satisfies (1.89), we know that $\lim_{s\to+\infty}|x-s|^{\beta}|f^{(n)}(x-s)| = 0$, whereas $s \in [x+k,+\infty) \mapsto |x-s|^{\beta}|f^{(n)}(x-s)|$ is a continuous function. Hence there exists a constant $C > 0$ such that for all $s \ge x+k$,
\[
\bigl|f^{(n)}(x-s)\bigr| \le C|x-s|^{-\beta} = C(s-x)^{-\beta}. \tag{1.94}
\]
Moreover, $k\tau > x$ for $\tau \ge \frac{x}{k}+1 > \frac{x}{k}$. Recall that for fixed $s \ge x+k$, the function $z \in [x,k\tau] \mapsto (s-z)^{-\beta}$ is increasing. This means that (1.94) implies the upper bound
\[
\bigl|f^{(n)}(x-s)\bigr| \le C(s-x)^{-\beta} \le C(s-k\tau)^{-\beta}
\]
for all $\tau \ge M_k \ge \frac{x}{k}+1$ and $s \in [k\tau,\ell\tau]$. Therefore
\[
I_2 \le C\int_{M_k}^{\infty}\int_{k\tau}^{\ell\tau}\tau^{-1-\alpha}(s-k\tau)^{n-1-\beta}\,ds\,d\tau = \frac{C(\ell-k)^{n-\beta}}{n-\beta}\int_{M_k}^{\infty}\tau^{n-\beta-1-\alpha}\,d\tau = \frac{C(\ell-k)^{n-\beta}M_k^{n-\beta-\alpha}}{(n-\beta)(\beta+\alpha-n)} < \infty.
\]

(s − kτ)

n−1 (n)

f

τ 1+α

(x − s)



s k

dsdτ = ∫ (∫ 0

s ℓ

(s − kτ)n−1 dτ) f (n) (x − s)ds. τ 1+α

(1.95)

1.10 Fractional derivatives of higher order

� 49

To handle the inner integral, we use a change of variables and the binomial formula to get s k

∫ s ℓ

s

s

k s ℓ

k s ℓ

n−1 (s − kτ)n−1 n − 1 n−1−j dτ = k α ∫ z−1−α (s − z)n−1 dz = k α ∑ (−1)j ( )s ∫ zj−1−α dz 1+α j τ j=0 n−1

= k α ∑ (−1)j ( j=0

n − 1 sn−1−α n−1 n − 1 sn−1−α j α−j ) − ∑ (−1)j ( ) kℓ . j j−α j j−α j=0

Plugging this equality into (1.95) and using the definition of Dzhrbashyan–Caputo fractional derivatives according to formula (1.86), we get that s k



∫ (∫ 0

s ℓ

n−1 (s − kτ)n−1 n−1 1 (n) α dτ) f (x − s)ds = k ) ∑ (−1)j ( ∫ sn−1−α f (n) (x − s)ds j j − α τ 1+α j=0 ∞ 0

n−1

− ∑ (−1)j ( j=0

n−1

n − 1 1 j α−j ) k ℓ ∫ sn−1−α f (n) (x − s)ds j j−α

= (k α ∑ (−1)j ( j=0

∞ 0

n−1

n−1 1 n − 1 1 j α−j ) − ∑ (−1)j ( ) k ℓ ) Γ(n − α) (C Dα+ f ) (x). (1.96) j j − α j=0 j j−α

To apply the Fubini theorem to $G_2$, note that
\[
\int_0^{\infty}\int_0^{\ell\tau}\frac{s^{n-1}|f^{(n)}(x-s)|}{\tau^{1+\alpha}}\,ds\,d\tau = \int_0^{\infty}\Biggl(\int_{s/\ell}^{\infty}\tau^{-1-\alpha}\,d\tau\Biggr)s^{n-1}\bigl|f^{(n)}(x-s)\bigr|\,ds = \frac{\ell^{\alpha}}{\alpha}\int_0^{\infty}\frac{|f^{(n)}(x-s)|}{s^{\alpha+1-n}}\,ds
\]
\[
= \frac{\ell^{\alpha}}{\alpha}\Biggl(\int_0^{M_0}\frac{|f^{(n)}(x-s)|}{s^{\alpha+1-n}}\,ds + \int_{M_0}^{\infty}\frac{|f^{(n)}(x-s)|}{s^{\alpha+1-n}}\,ds\Biggr) =: I_3 + I_4,
\]
where $M_0 = \max\{x+1, 1\}$. Concerning $I_3$, just note that $f \in \mathcal{C}^n(\mathbb{R})$, and thus $f^{(n)}(x-s)$ is bounded on the interval $s \in [0,M_0]$. Hence
\[
I_3 \le C\int_0^{M_0}s^{n-1-\alpha}\,ds = \frac{C}{n-\alpha}M_0^{n-\alpha}.
\]
To handle $I_4$, as before, we have that $s \ge M_0 \ge x+1$ and $s-x \ge 1$. Then recall that $f \in \mathcal{C}^n(\mathbb{R})$ satisfies (1.89) and $|f^{(n)}(x-s)| \le C(s-x)^{-\beta}$. Obviously, $\lim_{s\to+\infty}\frac{s^{-\beta}}{(s-x)^{-\beta}} = 1$, and for any $s \ge M_0$,
\[
\bigl|f^{(n)}(x-s)\bigr| \le C(s-x)^{-\beta} \le Cs^{-\beta}
\]
for a suitable constant $C > 0$. This implies that
\[
I_4 \le C\int_{M_0}^{\infty}s^{n-1-\alpha-\beta}\,ds = \frac{CM_0^{n-\alpha-\beta}}{\beta+\alpha-n}.
\]

Hence we can use the Fubini theorem to exchange the order of integration in $G_2$ to get
\[
\int_0^{\infty}\int_0^{\ell\tau}\frac{s^{n-1}f^{(n)}(x-s)}{\tau^{1+\alpha}}\,ds\,d\tau = \int_0^{\infty}\Biggl(\int_{s/\ell}^{\infty}\tau^{-1-\alpha}\,d\tau\Biggr)s^{n-1}f^{(n)}(x-s)\,ds = \frac{\ell^{\alpha}}{\alpha}\int_0^{\infty}\frac{f^{(n)}(x-s)}{s^{1+\alpha-n}}\,ds = \frac{\Gamma(n-\alpha)\ell^{\alpha}}{\alpha}\bigl({}^{C}D^{\alpha}_{+}f\bigr)(x). \tag{1.97}
\]

Plugging both (1.96) and (1.97) into (1.93), we finally get
\[
\int_0^{\infty}\frac{\nabla^{\ell}_{+,\tau}f(x)}{\tau^{1+\alpha}}\,d\tau = \sum_{k=1}^{\ell-1}\frac{(-1)^k}{(n-1)!}\binom{\ell}{k}\Biggl(k^{\alpha}\sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j} - \sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}k^j\ell^{\alpha-j}\Biggr)\Gamma(n-\alpha)\bigl({}^{C}D^{\alpha}_{+}f\bigr)(x) + \frac{\Gamma(n-\alpha)\ell^{\alpha}}{\alpha(n-1)!}\bigl({}^{C}D^{\alpha}_{+}f\bigr)(x)
\]
\[
= \Biggl(S_1 - S_2 + \frac{\Gamma(n-\alpha)\ell^{\alpha}}{\alpha(n-1)!}\Biggr)\bigl({}^{C}D^{\alpha}_{+}f\bigr)(x), \tag{1.98}
\]
where
\[
S_1 = \sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}\frac{\Gamma(n-\alpha)}{(n-1)!}k^{\alpha}\sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}, \qquad
S_2 = \sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}\frac{\Gamma(n-\alpha)}{(n-1)!}\sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}k^j\ell^{\alpha-j}.
\]

Now we need to manipulate the values of $S_1$ and $S_2$. Starting with $S_1$, we obtain from the series representation of the Beta function (see (B.18)) that
\[
S_1 = \sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}\frac{\Gamma(n-\alpha)}{(n-1)!}k^{\alpha}\frac{\Gamma(-\alpha)\Gamma(n)}{\Gamma(n-\alpha)} = \Gamma(-\alpha)\sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}k^{\alpha} = \Gamma(-\alpha)\bigl(A_{\ell}(\alpha) - (-1)^{\ell}\ell^{\alpha}\bigr). \tag{1.99}
\]
Concerning $S_2$, we transform it as follows:
\[
S_2 = \frac{\Gamma(n-\alpha)}{(n-1)!}\sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}\ell^{\alpha-j}\Biggl(\sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}k^j\Biggr)
\]
\[
= \frac{\Gamma(n-\alpha)}{-\alpha(n-1)!}\ell^{\alpha}\Biggl(\sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}\Biggr) + \frac{\Gamma(n-\alpha)}{(n-1)!}\sum_{j=1}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}\ell^{\alpha-j}\Biggl(\sum_{k=1}^{\ell-1}(-1)^k\binom{\ell}{k}k^j\Biggr) =: S_3 + S_4. \tag{1.100}
\]
Concerning $S_3$, note that
\[
\sum_{k=0}^{\ell}(-1)^k\binom{\ell}{k} = (1-1)^{\ell} = 0,
\]
and therefore
\[
S_3 = \frac{\Gamma(n-\alpha)}{\alpha(n-1)!}\ell^{\alpha}\bigl(1+(-1)^{\ell}\bigr).
\]
Concerning $S_4$, it follows from the inequalities $j \le n-1 \le \ell-1$ and Lemma 1.62 that
\[
\sum_{k=0}^{\ell}(-1)^k\binom{\ell}{k}k^j = 0.
\]
This implies the equalities
\[
S_4 = \frac{\Gamma(n-\alpha)}{(n-1)!}\sum_{j=1}^{n-1}(-1)^j\binom{n-1}{j}\frac{\ell^{\alpha-j}}{j-\alpha}\Biggl(\sum_{k=0}^{\ell}(-1)^k\binom{\ell}{k}k^j - (-1)^{\ell}\ell^j\Biggr) = -\frac{\Gamma(n-\alpha)}{(n-1)!}(-1)^{\ell}\ell^{\alpha}\sum_{j=1}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}.
\]
Plugging the latter two relations into (1.100), we get that
\[
S_2 = \frac{\Gamma(n-\alpha)}{\alpha(n-1)!}\ell^{\alpha}\bigl(1+(-1)^{\ell}\bigr) - \frac{\Gamma(n-\alpha)}{(n-1)!}(-1)^{\ell}\ell^{\alpha}\sum_{j=1}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}
= \frac{\Gamma(n-\alpha)}{\alpha(n-1)!}\ell^{\alpha} - (-1)^{\ell}\frac{\Gamma(n-\alpha)}{(n-1)!}\ell^{\alpha}\sum_{j=0}^{n-1}\frac{(-1)^j}{j-\alpha}\binom{n-1}{j}
= \frac{\Gamma(n-\alpha)}{\alpha(n-1)!}\ell^{\alpha} - (-1)^{\ell}\ell^{\alpha}\Gamma(-\alpha). \tag{1.101}
\]

Finally, inserting (1.99) and (1.101) into (1.98), we conclude the proof.

Remark 1.64. In the case $f \in \mathcal{S}(\mathbb{R})$, where $\mathcal{S}(\mathbb{R})$ is the Schwartz space of rapidly decreasing functions (see Appendix A.2), a different proof, based on the Fourier transform, can be given. We leave it to Exercise 1.13.

However, this is not the way in which Marchaud introduced the fractional derivative of higher order. Marchaud's idea, presented in his PhD thesis (see also the survey [76]), was the following. Using the integral form of the Gamma function $\Gamma(\alpha) = \int_0^{\infty}t^{\alpha-1}e^{-t}\,dt$ for $\alpha > 0$, we obtain the following relation:
\[
\bigl(I^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{\alpha-1}e^{-t}\,dt = \bigl(I^{\alpha}_{+}f\bigr)(x)\Gamma(\alpha) = \int_0^{\infty}t^{\alpha-1}f(x-t)\,dt.
\]
Clearly, if we rewrite the same relation for $\alpha < 0$, the integral $\int_0^{\infty}t^{\alpha-1}e^{-t}\,dt$ on the left-hand side does not converge. To handle this, let us go back to the case $\alpha > 0$ and note that
\[
\int_0^{\infty}t^{\alpha-1}e^{-\lambda t}\,dt = \lambda^{-\alpha}\Gamma(\alpha)
\]
and, at the same time,
\[
\frac{1}{\Gamma(\alpha)}\int_0^{\infty}t^{\alpha-1}f(x-\lambda t)\,dt = \lambda^{-\alpha}\bigl(I^{\alpha}_{+}f\bigr)(x).
\]
Therefore
\[
\bigl(I^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{\alpha-1}e^{-\lambda t}\,dt = \lambda^{-\alpha}\bigl(I^{\alpha}_{+}f\bigr)(x)\Gamma(\alpha) = \int_0^{\infty}t^{\alpha-1}f(x-\lambda t)\,dt. \tag{1.102}
\]
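The scaling identity used in (1.102) is easy to confirm numerically; a minimal sketch (truncation point, grid, and parameters are arbitrary choices):

```python
import math

def laplace_power(alpha, lam, T=40.0, h=1e-4):
    """Midpoint quadrature of ∫_0^∞ t^{α-1} e^{-λt} dt,
    expected to equal λ^{-α} Γ(α)."""
    n = int(T / h)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (alpha - 1.0) * math.exp(-lam * t) * h
    return total

alpha, lam = 0.7, 2.0
approx = laplace_power(alpha, lam)
exact = lam ** (-alpha) * math.gamma(alpha)
```

The integrable singularity $t^{\alpha-1}$ at the origin slows the midpoint rule down only mildly, so the two values agree to a few digits.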

Now, for $\ell \in \mathbb{N}$ with $\ell > \alpha$, let us consider
\[
\psi_{\ell}(t) = \bigl(e^{-t}-1\bigr)^{\ell} - \bigl(e^{-2t}-1\bigr)^{\ell} = (-1)^{\ell}\Biggl(\sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}e^{-jt} - \sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}e^{-2jt}\Biggr),
\]
which is a linear combination of exponentials. Taking this representation and (1.102) into account, we get that
\[
\bigl(I^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{\alpha-1}\psi_{\ell}(t)\,dt = (-1)^{\ell}\Biggl(\sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}\bigl(I^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{\alpha-1}e^{-jt}\,dt - \sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}\bigl(I^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{\alpha-1}e^{-2jt}\,dt\Biggr)
\]
\[
= (-1)^{\ell}\Biggl(\sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}\int_0^{\infty}t^{\alpha-1}f(x-jt)\,dt - \sum_{j=1}^{\ell}(-1)^j\binom{\ell}{j}\int_0^{\infty}t^{\alpha-1}f(x-2jt)\,dt\Biggr)
\]
\[
= (-1)^{\ell}\int_0^{\infty}t^{\alpha-1}\bigl(\nabla^{\ell}_{+,t}f(x) - \nabla^{\ell}_{+,2t}f(x)\bigr)\,dt,
\]
since the $j=0$ terms cancel between the two differences. However, in this case, $\int_0^{\infty}t^{\alpha-1}\psi_{\ell}(t)\,dt$ converges even if $\alpha < 0$, provided that $\ell > |\alpha|$. Marchaud concluded that a definition of a fractional derivative could be obtained implicitly from this relation in the case $\alpha < 0$, provided that such a definition is independent of the choice of $\ell > |\alpha|$. Indeed, let $\alpha > 0$ and $\ell > \alpha$. Marchaud defined ${}^{M}D^{\alpha}_{+}f$ as the solution of the following equation:
\[
\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x)\int_0^{\infty}t^{-\alpha-1}\psi_{\ell}(t)\,dt = (-1)^{\ell}\int_0^{\infty}t^{-\alpha-1}\bigl(\nabla^{\ell}_{+,t}f(x) - \nabla^{\ell}_{+,2t}f(x)\bigr)\,dt.
\]
Observing that
\[
\int_0^{\infty}t^{-\alpha-1}\psi_{\ell}(t)\,dt = \bigl(1-2^{\alpha}\bigr)(-1)^{\ell}\int_0^{\infty}t^{-\alpha-1}\bigl(1-e^{-t}\bigr)^{\ell}\,dt
\]
and
\[
\int_0^{\infty}t^{-\alpha-1}\bigl(\nabla^{\ell}_{+,t}f(x) - \nabla^{\ell}_{+,2t}f(x)\bigr)\,dt = \bigl(1-2^{\alpha}\bigr)\int_0^{\infty}t^{-\alpha-1}\nabla^{\ell}_{+,t}f(x)\,dt,
\]
we get
\[
\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x) = \frac{1}{\int_0^{\infty}t^{-\alpha-1}(1-e^{-t})^{\ell}\,dt}\int_0^{\infty}t^{-\alpha-1}\nabla^{\ell}_{+,t}f(x)\,dt.
\]

Definition 1.65. We define the Marchaud fractional derivative of order α > 0 as (

M

Dα± f ) (x)



1 ℓ = f (x)dt, ∫ t −α−1 ∇±,t χ(ℓ, α) 0

α > 0,

α ∈ ̸ ℕ,

ℓ > α.

(1.103)
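The normalization in (1.103) can be checked numerically: the identity $\int_0^{\infty}t^{-\alpha-1}(1-e^{-t})^{\ell}\,dt = \Gamma(-\alpha)A_{\ell}(\alpha) = \chi(\ell,\alpha)$ should hold for every admissible $\ell$. A minimal sketch (truncation point and grid are ad hoc; the tail over $(T,\infty)$ is added analytically as $T^{-\alpha}/\alpha$, since $(1-e^{-t})^{\ell} \approx 1$ there):

```python
import math

def chi_quadrature(ell, alpha, T=50.0, h=1e-3):
    """Midpoint quadrature of ∫_0^∞ t^{-α-1} (1 - e^{-t})^ℓ dt plus analytic tail."""
    n = int(T / h)
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (-alpha - 1.0) * (1.0 - math.exp(-t)) ** ell * h
    return total + T ** (-alpha) / alpha

def chi_formula(ell, alpha):
    """χ(ℓ, α) = Γ(-α) Σ_k (-1)^k C(ℓ,k) k^α, formula (1.91);
    math.gamma accepts negative non-integer arguments."""
    A = sum((-1) ** k * math.comb(ell, k) * k ** alpha for k in range(1, ell + 1))
    return math.gamma(-alpha) * A

alpha = 0.6
vals = {ell: (chi_quadrature(ell, alpha), chi_formula(ell, alpha)) for ell in (2, 3)}
```

Both admissible choices $\ell = 2, 3$ reproduce the closed form, illustrating that the constant is consistent across orders of the difference.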

If $f$ satisfies the conditions of Proposition 1.63, then such a definition is independent of the choice of $\ell > \alpha$. Let us also note that for $n \in \mathbb{N}$ and $f \in \mathcal{S}(\mathbb{R})$,
\[
\lim_{\alpha\uparrow n}\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x) = f^{(n)}(x). \tag{1.104}
\]
This means that we can use the Marchaud derivative to get some quite interesting nonlocal representations of local operators. Indeed, for $\ell > n$ with $\ell, n \in \mathbb{N}$, the function $\chi(\ell,\cdot)$ is continuous at $n$ with $\chi(\ell,n) = \int_0^{\infty}t^{-n-1}(1-e^{-t})^{\ell}\,dt$, and thus (1.104) tells us that for all $\ell > n$,
\[
f^{(n)}(x) = \frac{1}{\chi(\ell,n)}\int_0^{\infty}s^{-1-n}\nabla^{\ell}_{+,s}f(x)\,ds,
\]
which is a nonlocal representation of the $n$th-order derivative, which itself is a local operator.

Similarly to what happens in the case $\alpha \in (0,1)$, the Marchaud derivative ${}^{M}D^{\alpha}_{\pm}$ is well-defined and continuous from $W^{\lfloor\alpha\rfloor+1,p}(\mathbb{R})$ to $L^p(\mathbb{R})$, as stated in the following result.

Theorem 1.66. Let $\alpha > 0$, $n = \lfloor\alpha\rfloor+1$, and $p \in [1,\infty]$. Then ${}^{M}D^{\alpha}_{\pm} : W^{n,p}(\mathbb{R}) \to L^p(\mathbb{R})$ is continuous, i. e., there exists a constant $C_{\alpha,p} > 0$ such that for all $f \in W^{n,p}(\mathbb{R})$, we have
\[
\bigl\|{}^{M}D^{\alpha}_{\pm}f\bigr\|_{L^p(\mathbb{R})} \le C_{\alpha,p}\|f\|_{W^{n,p}(\mathbb{R})}. \tag{1.105}
\]

Proof. We prove this for ${}^{M}D^{\alpha}_{+}$, as the proof for ${}^{M}D^{\alpha}_{-}$ is the same. We already proved the statement for $\alpha \in (0,1)$ in Theorem 1.50, whereas for $\alpha \in \mathbb{N}$, it is trivial. Hence let us consider $\alpha > 1$ with $\alpha \notin \mathbb{N}$, and distinguish between two cases. If $p \in [1,+\infty)$, then observe that
\[
\int_{\mathbb{R}}\bigl|\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x)\bigr|^p dx = \frac{1}{(\chi(n,\alpha))^p}\int_{\mathbb{R}}\biggl|\int_0^{+\infty}\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr|^p dx
\]
\[
\le \frac{2^{p-1}}{(\chi(n,\alpha))^p}\Biggl(\int_{\mathbb{R}}\biggl|\int_0^{1}\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr|^p dx + \int_{\mathbb{R}}\biggl|\int_1^{+\infty}\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr|^p dx\Biggr) =: \frac{2^{p-1}}{(\chi(n,\alpha))^p}(I_1+I_2). \tag{1.106}
\]
To consider $I_1$, denote $Q_n(t) = [0,t]^n$. Proposition 1.60 implies that
\[
I_1 = \int_{\mathbb{R}}\Biggl|\int_0^1\int_{Q_n(t)}\frac{f^{(n)}(x-\sum_{j=1}^n s_j)}{t^{1+\alpha}}\,ds_1\cdots ds_n\,dt\Biggr|^p dx \le \bigl\|f^{(n)}\bigr\|^p_{L^p(\mathbb{R})}\Biggl(\int_0^1\int_{Q_n(t)}t^{-1-\alpha}\,ds_1\cdots ds_n\,dt\Biggr)^p = \frac{\|f^{(n)}\|^p_{L^p(\mathbb{R})}}{(n-\alpha)^p}, \tag{1.107}
\]
where we also used the generalized Minkowski inequality. To control $I_2$ instead, we directly use the generalized Minkowski inequality and the fact that $\|\nabla^n_{+,t}f\|^p_{L^p(\mathbb{R})} \le (n+1)^{p-1}\|f\|^p_{L^p(\mathbb{R})}$ to get
\[
I_2 \le (n+1)^{p-1}\|f\|^p_{L^p(\mathbb{R})}\Biggl(\int_1^{+\infty}t^{-1-\alpha}\,dt\Biggr)^p = \frac{(n+1)^{p-1}\|f\|^p_{L^p(\mathbb{R})}}{\alpha^p}. \tag{1.108}
\]
Hence, using (1.107) and (1.108) in (1.106), we get (1.105) for a suitable constant $C_{\alpha,p} > 0$. For $p = \infty$, we have instead
\[
\bigl|\bigl({}^{M}D^{\alpha}_{+}f\bigr)(x)\bigr| = \frac{1}{\chi(n,\alpha)}\biggl|\int_0^{+\infty}\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr|
\le \frac{1}{\chi(n,\alpha)}\Biggl(\biggl|\int_0^1\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr| + \biggl|\int_1^{+\infty}\frac{\nabla^n_{+,t}f(x)}{t^{1+\alpha}}\,dt\biggr|\Biggr)
\]
\[
\le \frac{1}{\chi(n,\alpha)}\Biggl(\int_0^1\int_{Q_n(t)}\frac{|f^{(n)}(x-\sum_{j=1}^n s_j)|}{t^{1+\alpha}}\,ds_1\cdots ds_n\,dt + \frac{(n+1)\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha}\Biggr)
\le \frac{1}{\chi(n,\alpha)}\Biggl(\bigl\|f^{(n)}\bigr\|_{L^{\infty}(\mathbb{R})}\int_0^1\int_{Q_n(t)}t^{-1-\alpha}\,ds_1\cdots ds_n\,dt + \frac{(n+1)\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha}\Biggr)
\]
\[
= \frac{1}{\chi(n,\alpha)}\Biggl(\frac{\|f^{(n)}\|_{L^{\infty}(\mathbb{R})}}{n-\alpha} + \frac{(n+1)\|f\|_{L^{\infty}(\mathbb{R})}}{\alpha}\Biggr) \le C_{\alpha,\infty}\|f\|_{W^{n,\infty}(\mathbb{R})}
\]
for some $C_{\alpha,\infty} > 0$.

This ends the proof.

When $\alpha > 1$, we can also provide a different representation of the Marchaud derivative, as presented in the following theorem.

Theorem 1.67. Let $\alpha > 0$ with $\alpha \notin \mathbb{N}$, $n = \lfloor\alpha\rfloor+1$, and $p \in [1,\infty)$. For any $f \in W^{n,p}(\mathbb{R})$, define
\[
P_{\pm,n-1}(x; f, y) = \sum_{k=0}^{n-1}\frac{(\mp y)^k}{k!}f^{(k)}(x),
\]
where $f^{(k)}$ is the $k$th derivative of $f$. Then for a. e. $x \in \mathbb{R}$,
\[
\bigl({}^{M}D^{\alpha}_{\pm}f\bigr)(x) = -\frac{\prod_{k=0}^{n-1}(k-\alpha)}{\Gamma(n-\alpha)}\int_0^{+\infty}\frac{P_{\pm,n-1}(x; f, y) - f(x \mp y)}{y^{1+\alpha}}\,dy. \tag{1.109}
\]

56 � 1 Fractional integrals and derivatives Proof. For α ∈ (0, 1), this is clear by the definition of the Marchaud derivative. Hence let us assume that α > 1. Let us for simplicity define ̃ α f ) (x) := − (M D +

P±,n−1 (x; f , y) − f (x − y) ∏n−1 k=0 (k − α) dy. ∫ Γ(n − α) y1+α +∞

(1.110)

0

First, we show that there exists a constant C̃α,p > 0 such that for all f ∈ W n,p (ℝ), we have M ̃α D+ f ∈ Lp (ℝ) and 󵄩󵄩M ̃ α 󵄩󵄩 󵄩󵄩 D+ f 󵄩󵄩 p ≤ C̃α,p ‖f ‖W n,p (ℝ) . 󵄩 󵄩L (ℝ)

(1.111)

To do this, split the integral as follows: 󵄨 ̃α 󵄨󵄨p 󵄨 ∫ 󵄨󵄨󵄨󵄨(M D + f ) (x)󵄨󵄨 dx



󵄨󵄨 1 󵄨󵄨p p 󵄨󵄨 P 󵄨󵄨 ∏n−1 +,n−1 (x; f , y) − f (x − y) 󵄨󵄨 󵄨󵄨 k=0 (k − α) =2 ( ) (∫ 󵄨󵄨∫ dy 󵄨󵄨 dx 1+α 󵄨󵄨 󵄨󵄨 Γ(n − α) y 󵄨󵄨 ℝ 󵄨󵄨 0 󵄨󵄨 +∞ 󵄨󵄨p 󵄨󵄨 P+,n−1 (x; f , y) − f (x − y) 󵄨󵄨󵄨 󵄨 + ∫ 󵄨󵄨󵄨 ∫ dy󵄨󵄨󵄨 dx) 󵄨󵄨 󵄨󵄨 y1+α 󵄨 󵄨󵄨 ℝ 󵄨 1 p−1

=2

p−1

p

∏n−1 (k − α) ( k=0 ) (I1 + I2 ). Γ(n − α)

To bound I1 , note that for f ∈ W n,p (ℝ) it holds that f ∈ 𝒞 n−1 (ℝ), and f (n−1) is absolutely continuous. Hence we can use the Taylor formula with the integral remainder term to get n

y

f (x − y) − P+,n−1 (x; f , y) = (−1) ∫ 0

and then

f (n) (t + x − y) n−1 t dt (n − 1)!

󵄨󵄨 1 y 󵄨󵄨p (n) 󵄨󵄨 󵄨󵄨 f (t + x − y) 1 󵄨󵄨 󵄨󵄨 I1 = dy ∫ ∫ ∫ 󵄨 󵄨󵄨 dx 󵄨󵄨 ((n − 1)!)p 󵄨󵄨󵄨󵄨 y1+α 󵄨󵄨 ℝ 󵄨0 0 p 󵄨 󵄨 y 1 p p 󵄨󵄨 ‖f (n) ‖Lp (ℝ) 󵄨󵄨󵄨󵄨 ‖f (n) ‖Lp (ℝ) −1−α 󵄨󵄨󵄨 󵄨 ≤ dy󵄨󵄨 = , 󵄨∫ ∫ y 󵄨󵄨 ((n − 1)!)p 󵄨󵄨󵄨󵄨 ((α − 1)(n − 1)!)p 󵄨󵄨 󵄨0 0

(1.112)

where we used the generalized Minkowski inequality. Concerning I2 , observe that n−1 k y 󵄩󵄩 󵄩 󵄩󵄩P+,n−1 (⋅; f , y)󵄩󵄩󵄩Lp (ℝ) ≤ ∑ k! k=0

n−1 k y 󵄩󵄩 (k) 󵄩󵄩 󵄩󵄩f 󵄩󵄩 p ≤ ‖f ‖W n,p (ℝ) ∑ . 󵄩 󵄩L (ℝ) k! k=0

1.10 Fractional derivatives of higher order

� 57

Hence by the generalized Minkowski inequality we get I2 ≤

󵄨󵄨 n−1 +∞ 󵄨󵄨p 󵄨󵄨 yk−1−α 󵄨󵄨󵄨󵄨 p 󵄨󵄨 ‖f ‖W n,p (ℝ) 󵄨󵄨 ∑ ∫ dy󵄨󵄨 󵄨󵄨 k! 󵄨󵄨󵄨k=0 󵄨󵄨 󵄨 1

󵄨󵄨 n−1 󵄨󵄨p 1 󵄨 󵄨󵄨 p 󵄨󵄨 . = ‖f ‖W n,p (ℝ) 󵄨󵄨󵄨󵄨 ∑ 󵄨󵄨k=0 (k − α)k! 󵄨󵄨󵄨

(1.113)

We get (1.111) by using (1.112) and (1.113). Now let us consider f ∈ 𝒮 (ℝ) ⊂ W n,p (ℝ). First of all, since f ∈ W n,1 (ℝ), we have +∞ 󵄨 izx 󵄨󵄨 e (P+,n−1 (x; f , y) − f (x − y)) 󵄨󵄨󵄨 󵄨󵄨 dy dx ∫ ∫ 󵄨󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 y1+α ℝ 0 +∞

≤∫ ∫ ℝ 0

|P+,n−1 (x; f , y) − f (x − y)| y1+α

dy ≤ C̃α,1 ‖f ‖W α,1 (ℝ) ,

where the second inequality follows by the same arguments as before. Hence by Fubini theorem we can exchange the order of the Fourier transform and the integral defining M ̃α D+ f , obtaining ℱ

̃ α f ] (z) [M D +

n−1 (∏n−1 (k − α))ℱ [f ](z) (−iyz)k = − k=0 − e−izy ) dy, ∫ y−1−α ( ∑ Γ(n − α) k! k=0 ∞

(1.114)

0

where ℱ is the Fourier transform operator (see Appendix A.2), and we used the relation n−1

(−iyz)k ) ℱ [f ](z). k! k=0

ℱ [P+,n−1 (⋅; f , y)](z) = ( ∑

Now we want to prove that ∞

∫y 0

−1−α

n−1

(−iyz)k (iz)n−1 1 − e−izy (∑ − e−izy ) dy = n−2 ∫ α+2−n dy. k! y ∏k=0 (k − α) k=0 ∞

(1.115)

0

To do this, we need to integrate by parts n − 1 times the integral on the left-hand side. Indeed, by first integrating by parts we get n−1

n−2 (−iyz)k (−iyz)k (iz) − e−izy ) dy = − e−izy ) dy. ∫ y−α ( ∑ k! −α k! k=0 k=0



∫ y−1−α ( ∑ 0

∞ 0

If α ∈ (1, 2), then we obtain the desired result. Otherwise, assume that after integrating by parts ℓ times, where 1 ≤ ℓ ≤ n − 2, we have ∞

∫y 0

−1−α

n−1

n−1−ℓ (−iyz)k (−iyz)k (iz)ℓ (∑ − e−izy ) dy = ℓ−1 − e−izy ) dy. ∫ yℓ−1−α ( ∑ k! k! ∏k=0 (k − α) k=0 k=0 ∞ 0

58 � 1 Fractional integrals and derivatives Integrating again by parts, we have ∞

∫₀^∞ y^{−1−α} (Σ_{k=0}^{n−1} (−iyz)^k/k! − e^{−izy}) dy = ((iz)^{ℓ+1}/∏_{k=0}^{ℓ}(k−α)) ∫₀^∞ y^{ℓ−α} (Σ_{k=0}^{n−ℓ−2} (−iyz)^k/k! − e^{−izy}) dy.

Hence formula (1.115) holds by induction for ℓ = n−1. Next, we need to evaluate the integral on the right-hand side of (1.115). To do this, let us notice that by the dominated convergence theorem

∫₀^∞ (1−e^{−izy})/y^{α+2−n} dy = lim_{λ→0} ∫₀^∞ (1−e^{−(λ+iz)y})/y^{α+2−n} dy.

Integrating by parts, we get

∫₀^∞ (1−e^{−(λ+iz)y})/y^{α+2−n} dy = −((λ+iz)/(n−α−1)) ∫₀^∞ y^{n−α−1} e^{−(λ+iz)y} dy = −((λ+iz)^{α−n+1} Γ(n−α))/(n−α−1),

and then, taking the limit as λ → 0,

∫₀^∞ (1−e^{−izy})/y^{α+2−n} dy = −((iz)^{α−n+1} Γ(n−α))/(n−α−1).

Substituting this identity into (1.115), we get

∫₀^∞ y^{−1−α} (Σ_{k=0}^{n−1} (−iyz)^k/k! − e^{−izy}) dy = −((iz)^α Γ(n−α))/∏_{k=0}^{n−1}(k−α),

that in turn implies, together with (1.114),

ℱ[M D̃^α_+ f](z) = (iz)^α ℱ[f](z) = ℱ[M D^α_+ f](z)

for all z ∈ ℝ, where the latter equality is shown in item (ii) of Exercise 1.13. Hence M D̃^α_+ f = M D^α_+ f for f ∈ 𝒮(ℝ). Now let f ∈ W^{n,p}(ℝ). Then there exists a sequence {f_n}_{n∈ℕ} ⊂ 𝒮(ℝ) such that f_n → f in W^{n,p}(ℝ). Hence

‖M D̃^α_+ f_n − M D̃^α_+ f‖_{L^p(ℝ)} ≤ C̃_{α,p} ‖f_n − f‖_{W^{n,p}(ℝ)} → 0.

In particular, we can consider a subsequence, still denoted {f_n}_{n∈ℕ}, such that (M D̃^α_+ f_n)(x) → (M D̃^α_+ f)(x) for a. a. x ∈ ℝ, and denote

Ẽ = {x ∈ ℝ : (M D̃^α_+ f_n)(x) → (M D̃^α_+ f)(x)}.


Similarly, we know from Theorem 1.66 that

‖M D^α_+ f_n − M D^α_+ f‖_{L^p(ℝ)} ≤ C_{α,p} ‖f_n − f‖_{W^{n,p}(ℝ)} → 0.

Hence we can extract a further subsequence, still denoted {f_n}_{n∈ℕ}, such that (M D^α_+ f_n)(x) → (M D^α_+ f)(x) for a. a. x ∈ ℝ, and denote

E = {x ∈ ℝ : (M D^α_+ f_n)(x) → (M D^α_+ f)(x)}.

For x ∈ E ∩ Ẽ, the equalities

(M D^α_+ f)(x) = lim_{n→+∞} (M D^α_+ f_n)(x) = lim_{n→+∞} (M D̃^α_+ f_n)(x) = (M D̃^α_+ f)(x)

hold, and the proof is concluded.

Let us also introduce the following truncated derivatives of higher order.

Definition 1.68. The truncated fractional derivative of order α > 0 of a function f equals

(D^α_{±,ε} f)(x) = (1/χ(ℓ,α)) ∫_ε^∞ ∇^ℓ_{±,t} f(x)/t^{1+α} dt,  α > 0, ℓ > α,  (1.116)

provided that the involved quantities are well-defined.

Clearly, if the Marchaud derivative exists in the sense that the integral in (1.103) absolutely converges, then lim_{ε→0} (D^α_{±,ε} f)(x) = (M D^α_± f)(x). Moreover, we can prove an analogue of Lemma 1.46. The following result is, indeed, a particular case of [231, Lemmas 7.44 and 7.45], where instead of the condition f = I^α_+ φ for some φ ∈ I^α_+(L^p(ℝ)) with 1 ≤ p < 1/α (which cannot be satisfied for α > 1), we require some regularity of φ in terms of its Fourier transform.

Proposition 1.69. Let α > 0 and ℓ ∈ ℕ with ℓ > α. Let f ∈ I^α_+(L^p(ℝ)), and f = I^α_+ φ for some φ ∈ ℱ𝒮₀(ℝ), where ℱ𝒮₀(ℝ) is the Lizorkin space (see Appendix A.2). Then

(D^α_{+,ε} f)(x) = ∫₀^∞ K_{ℓ,α}(t) φ(x−εt) dt,  (1.117)

where

K_{ℓ,α}(t) = (1/(tΓ(1+α)χ(ℓ,α))) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} (t−j)^α_+,  t > 0.  (1.118)

In particular, K_{ℓ,α} ∈ L¹(ℝ⁺) with ∫₀^∞ K_{ℓ,α}(t) dt = 1.

Proof. Let us first establish that (D^α_{±,ε} f)(x) is absolutely convergent. To do this, let us verify that f ∈ ℱ𝒮₀(ℝ). Indeed, taking the Fourier transform, we get the equality

ℱ[f](z) = (iz)^{−α} ℱ[φ](z).

However, by definition, ℱ[φ] ∈ 𝒮₀(ℝ), where 𝒮₀(ℝ) is defined in Appendix A.2. By a simple application of the l'Hôpital rule we get that also (iz)^{−α} ℱ[φ] ∈ 𝒮₀(ℝ), and thus f ∈ ℱ𝒮₀(ℝ) ⊂ 𝒮(ℝ). In particular, (M D^α_+ f)(⋅) is absolutely convergent, which implies that (D^α_{+,ε} f)(⋅) is absolutely convergent. Now let us consider j = 0, …, ℓ and rewrite

f(x−jτ) = (1/Γ(α)) ∫_{−∞}^{x−jτ} (x−jτ−t)^{α−1} φ(t) dt = (τ^α/Γ(α)) ∫_j^∞ (s−j)^{α−1} φ(x−τs) ds.

Now, defining

k_{ℓ,α}(s) = (1/Γ(α)) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} (s−j)^{α−1}_+,

we have

∇^ℓ_{+,τ} f(x) = τ^α ∫₀^∞ k_{ℓ,α}(s) φ(x−τs) ds,  x ∈ ℝ, τ > 0.

Using the definition of D^α_{+,ε}, we get the equalities

(D^α_{+,ε} f)(x) = (1/χ(ℓ,α)) ∫_ε^∞ ∇^ℓ_{+,τ} f(x)/τ^{1+α} dτ = (1/χ(ℓ,α)) ∫_ε^∞ τ^{−1} ∫₀^∞ k_{ℓ,α}(s) φ(x−τs) ds dτ = (1/χ(ℓ,α)) ∫_ε^∞ τ^{−2} ∫₀^∞ k_{ℓ,α}(z/τ) φ(x−z) dz dτ.  (1.119)

Now we would like to use the Fubini theorem to exchange the order of the integrals in (1.119). Hence we need to prove that

∫_ε^∞ τ^{−2} ∫₀^∞ |k_{ℓ,α}(z/τ)| |φ(x−z)| dz dτ < +∞.  (1.120)

Before doing this, let us first note that for j = 0, …, ℓ and s ∈ (j, j+1),

|k_{ℓ,α}(s)| ≤ (1/Γ(α)) Σ_{r=0}^ℓ \binom{ℓ}{r} (s−r)^{α−1}_+ ≤ C(s−j)^{α−1}.  (1.121)

We also need an improved upper bound for |k_{ℓ,α}(s)| for sufficiently large s > 0. To obtain such a bound, observe that k_{ℓ,α}(s) = ∇^ℓ_{+,1} ι_α(s), where ι_α is defined in (1.6). Denoting Q_ℓ(1) = [0,1]^ℓ, by Proposition 1.60 we have that

k_{ℓ,α}(s) = ∇^ℓ_{+,1} ι_α(s) = ∫_{Q_ℓ(1)} ι_α^{(ℓ)}(s − Σ_{i=1}^ℓ s_i) ds₁ ⋯ ds_ℓ,

where ι_α^{(ℓ)} is the ℓth derivative of ι_α. Applying ℓ times the integral mean value theorem, we get

k_{ℓ,α}(s) = ι_α^{(ℓ)}(s−ξ) = (s−ξ)^{α−1−ℓ}/Γ(α−ℓ)

for some ξ ∈ [0, ℓ]. For s > ℓ+1, it is clear that there exists a constant C > 0 such that

|k_{ℓ,α}(s)| ≤ C s^{α−1−ℓ}.  (1.122)

Combining (1.121) and (1.122), we get

∫₀^∞ |k_{ℓ,α}(s)| ds ≤ C (Σ_{j=0}^{2ℓ} ∫_j^{j+1} (s−j)^{α−1} ds + ∫_{2ℓ+1}^∞ s^{α−1−ℓ} ds) < +∞,

which implies that k_{ℓ,α} ∈ L¹(ℝ). Now we are ready to prove (1.120). Indeed, we have

∫_ε^∞ τ^{−2} ∫₀^∞ |k_{ℓ,α}(z/τ) φ(x−z)| dz dτ = ∫_ε^∞ τ^{−1} ∫₀^∞ |k_{ℓ,α}(s) φ(x−τs)| ds dτ
= ∫_ε^∞ τ^{−1} ∫₀^{M_τ} |k_{ℓ,α}(s) φ(x−τs)| ds dτ + ∫_ε^∞ τ^{−1} ∫_{M_τ}^∞ |k_{ℓ,α}(s) φ(x−τs)| ds dτ := I₁ + I₂,

where

M_τ = max{(x+1)/τ, 1/τ} = (1/τ) max{x+1, 1} =: M/τ.

To handle I₁, let us recall that φ ∈ 𝒮(ℝ) ⊂ L^∞(ℝ) and |k_{ℓ,α}(s)| ≤ Cs^{α−1} for s > 0 to get

I₁ ≤ C‖φ‖_{L^∞(ℝ)} ∫_ε^∞ τ^{−1} ∫₀^{M_τ} s^{α−1} ds dτ = (CM^α/α) ‖φ‖_{L^∞(ℝ)} ∫_ε^∞ τ^{−1−α} dτ = (CM^α/(α²ε^α)) ‖φ‖_{L^∞(ℝ)} < ∞.

Concerning I₂, let us note that if s ≥ M_τ ≥ (x+1)/τ, then τs − x ≥ 1. Since φ ∈ 𝒮(ℝ), we can fix β > α and claim that

|φ(x−τs)| ≤ C(τs−x)^{−β},  s ≥ M_τ.

Moreover, since

lim_{s→+∞} (τs)^{−β}/(τs−x)^{−β} = 1,

we have that

|φ(x−τs)| ≤ C(τs)^{−β},  s ≥ M_τ.

M α−β M α−β < ∞. ∫ τ −1−α dτ = C β−α α(β − α)εα





I2 ≤ C ∫ τ −1−β ∫ sα−β−1 ds dτ = C ε

ε



Once we have proved that (1.120) holds, we can change the order of the integrals in (1.119) to get

(D^α_{+,ε} f)(x) = (1/χ(ℓ,α)) ∫_ε^∞ τ^{−2} ∫₀^∞ k_{ℓ,α}(z/τ) φ(x−z) dz dτ
= (1/χ(ℓ,α)) ∫₀^∞ (∫_ε^∞ τ^{−2} k_{ℓ,α}(z/τ) dτ) φ(x−z) dz
= (1/χ(ℓ,α)) ∫₀^∞ (∫₀^{z/ε} k_{ℓ,α}(s) ds) (φ(x−z)/z) dz
= (1/χ(ℓ,α)) ∫₀^∞ (∫₀^z k_{ℓ,α}(s) ds) (φ(x−εz)/z) dz.  (1.123)

Now let us set

K_{ℓ,α}(z) := (1/(zχ(ℓ,α))) ∫₀^z k_{ℓ,α}(s) ds

and evaluate explicitly the integral on the right-hand side. Indeed, for t > ℓ,

∫₀^t k_{ℓ,α}(s) ds = (1/Γ(α)) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} ∫_j^t (s−j)^{α−1} ds = (1/Γ(α)) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} ∫₀^{t−j} s^{α−1} ds = (1/(αΓ(α))) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} (t−j)^α,  (1.124)


whereas if k < t ≤ k+1 for some k = 0, …, ℓ−1, then

∫₀^t k_{ℓ,α}(s) ds = (1/Γ(α)) Σ_{j=0}^k (−1)^j \binom{ℓ}{j} ∫_j^t (s−j)^{α−1} ds = (1/(αΓ(α))) Σ_{j=0}^k (−1)^j \binom{ℓ}{j} (t−j)^α.

Combining the latter two identities with (1.124), we get (1.118), and then we get (1.117) from (1.123). To show that K_{ℓ,α} ∈ L¹(ℝ⁺), we need to argue first with k_{ℓ,α}. Indeed, since k_{ℓ,α} ∈ L¹(ℝ), we can easily determine the Fourier transform of k_{ℓ,α} as

ℱ[k_{ℓ,α}](z) = (1/(iz)^α) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} e^{−ijz} = (1−e^{−iz})^ℓ/(iz)^α.

Taking into account that ℓ > α, we establish the equality

∫₀^∞ k_{ℓ,α}(t) dt = ℱ[k_{ℓ,α}](0) = 0.
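As a quick numerical sanity check (an illustration added here, not part of the original text), the equality ∫₀^∞ k_{ℓ,α}(t) dt = 0 can be observed directly from the closed form (1.124): for T > ℓ, the partial integral ∫₀^T k_{ℓ,α}(s) ds = (1/(αΓ(α))) Σ_{j=0}^ℓ (−1)^j \binom{ℓ}{j} (T−j)^α is an ℓth-order finite difference of T ↦ T^α and hence decays like T^{α−ℓ}, since ℓ > α:

```python
from math import comb, gamma

def partial_integral(T, ell, alpha):
    """Closed form (1.124) of the partial integral of k_{ell,alpha} over [0, T], T > ell."""
    return sum((-1) ** j * comb(ell, j) * (T - j) ** alpha
               for j in range(ell + 1)) / (alpha * gamma(alpha))

ell, alpha = 2, 0.7      # any ell > alpha works
vals = [abs(partial_integral(T, ell, alpha)) for T in (1e2, 1e3, 1e4)]
# the partial integrals shrink roughly like T^(alpha - ell), consistent with
# the total integral of k_{ell,alpha} over (0, infinity) being zero
assert vals[0] > vals[1] > vals[2]
assert vals[2] < 1e-3
```

The decreasing sequence of partial integrals is the quantitative counterpart of ℱ[k_{ℓ,α}](0) = 0.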

Combining the latter equality with (1.124), we obtain that

K_{ℓ,α}(t) = −(1/(tχ(ℓ,α))) ∫_t^∞ k_{ℓ,α}(s) ds.

On the one hand, for t > 2ℓ+1,

|K_{ℓ,α}(t)| ≤ (1/(tχ(ℓ,α))) ∫_t^∞ |k_{ℓ,α}(s)| ds ≤ (C/(tχ(ℓ,α))) ∫_t^∞ s^{α−1−ℓ} ds = Ct^{α−1−ℓ}/χ(ℓ,α).

On the other hand, it is clear that K_{ℓ,α}(t) = t^{α−1}/(Γ(1+α)χ(ℓ,α)) for t < 1. This is sufficient to guarantee that K_{ℓ,α} ∈ L¹(ℝ⁺). To prove that ∫₀^∞ K_{ℓ,α}(t) dt = 1, let us emphasize that φ = D^α_+ f = M D^α_+ f = lim_{ε↓0} D^α_{+,ε} f. At the same time, we have

∫₀^∞ |K_{ℓ,α}(t)| |φ(x−εt)| dt ≤ ‖φ‖_{L^∞(ℝ)} ‖K_{ℓ,α}‖_{L¹(ℝ⁺)},

and thus, by the dominated convergence theorem,

φ(x) = lim_{ε↓0} (D^α_{+,ε} f)(x) = lim_{ε↓0} ∫₀^∞ K_{ℓ,α}(t) φ(x−εt) dt = φ(x) ∫₀^∞ K_{ℓ,α}(t) dt.

Taking x such that φ(x) ≠ 0, we get the proof.

Remark 1.70. The previous result holds even if f ∈ 𝒞_c^∞(ℝ) or f ∈ I^α_+ 𝒞_c^∞(ℝ), as shown in Exercises 1.14 and 1.15.
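Before moving on, here is a small numerical illustration (added, not from the original text) of the truncated derivative of Definition 1.68 with ℓ = 1 and α ∈ (0,1). In this case the Marchaud derivative reads (M D^α_+ f)(x) = (α/Γ(1−α)) ∫₀^∞ (f(x)−f(x−t))/t^{1+α} dt, i.e., χ(1,α) = Γ(1−α)/α, the normalization that also appears in the proof of Proposition 1.86; the test function f(x) = e^{λx}, λ > 0, is an eigenfunction with (M D^α_+ f)(x) = λ^α e^{λx}. The truncated derivatives D^α_{+,ε} f converge to this value as ε ↓ 0, although slowly (at rate ε^{1−α}):

```python
from math import exp, gamma
from scipy.integrate import quad

def truncated_marchaud(f, x, alpha, eps):
    # first-order (ell = 1) truncated derivative (1.116), chi(1, alpha) = Gamma(1-alpha)/alpha
    head, _ = quad(lambda t: (f(x) - f(x - t)) / t ** (1 + alpha), eps, 1.0)
    tail, _ = quad(lambda t: (f(x) - f(x - t)) / t ** (1 + alpha), 1.0, float("inf"))
    return alpha / gamma(1 - alpha) * (head + tail)

lam, alpha, x = 0.5, 0.6, 1.0
f = lambda u: exp(lam * u)
exact = lam ** alpha * exp(lam * x)     # Marchaud derivative of exp(lam*x) is lam^alpha exp(lam*x)
errors = [abs(truncated_marchaud(f, x, alpha, eps) - exact) for eps in (1e-1, 1e-3, 1e-6)]
assert errors[0] > errors[1] > errors[2]    # convergence as eps decreases
assert errors[2] < 1e-2
```

The slow rate ε^{1−α} is intrinsic: the truncation discards a piece of the singular integral of size comparable to ε^{1−α}.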

1.11 The one-dimensional fractional Laplacian

In the previous section, we described some fractional operators of higher order by considering, in some sense, high fractional powers of first-order derivatives. However, we could ask what happens when we consider fractional powers of differential operators of higher order. This is, indeed, the case of the fractional Laplacian. Let us consider here only the one-dimensional case. We can introduce the fractional Laplacian via Fourier transforms. Indeed, it is well known that for every function f ∈ 𝒮(ℝ),

ℱ[−f″](z) = z² ℱ[f](z),  z ∈ ℝ,

where ℱ is the Fourier transform. A fractional generalization of this relation leads to the following definition.

Definition 1.71. We define the one-dimensional fractional Laplacian of order α ∈ (0,1) as the linear operator (−d²/dx²)^α acting on f ∈ 𝒮(ℝ) as follows:

ℱ[(−d²/dx²)^α f](z) = |z|^{2α} ℱ[f](z),  z ∈ ℝ.  (1.125)

This operator (−d²/dx²)^α : 𝒮(ℝ) → 𝒮(ℝ) is well-defined. Indeed, since f ∈ 𝒮(ℝ), ℱ[f] ∈ 𝒮(ℝ) as well. It is not difficult to check that also z ∈ ℝ ↦ |z|^{2α} ℱ[f](z) belongs to 𝒮(ℝ), and then we can take its inverse Fourier transform.

For f ∈ 𝒮(ℝ), we can provide a pointwise integral representation of (−d²/dx²)^α f, which better underlines the nonlocal nature of the operator. To prove this, we first need the following lemma.

Lemma 1.72. Let f ∈ 𝒮(ℝ) and α ∈ (0,1). Then we have the following equality:

lim_{ε→0} ∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh = −(1/2) ∫_ℝ (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh.

Proof. Let us rewrite the left-hand side as

∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh = (1/2)(∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh + ∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh)


and apply the change of variables h → −h in the second integral to get

∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh = (1/2)(∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh + ∫_{ℝ∖(−ε,ε)} (f(x)−f(x−h))/|h|^{1+2α} dh)
= −(1/2) ∫_{ℝ∖(−ε,ε)} (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh.  (1.126)

Now we want to take the limit as ε → 0. To show that the integral on the right-hand side is absolutely convergent, let us suppose that ε < 1 and then split the integral as follows:

∫_{ℝ∖(−ε,ε)} |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh = ∫_{ℝ∖(−1,1)} |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh + ∫_{(−1,1)∖(−ε,ε)} |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh = I₁ + I₂.

Concerning I₁, let us recall that f ∈ 𝒮(ℝ), and therefore it is bounded; consequently,

I₁ ≤ 8‖f‖_{L^∞(ℝ)} ∫₁^∞ h^{−1−2α} dh = 4‖f‖_{L^∞(ℝ)}/α < ∞.

To handle I₂, recall again that f ∈ 𝒮(ℝ). Therefore we can use the Lagrange theorem twice to get

f(x+h)−2f(x)+f(x−h) = (f(x+h)−f(x)) − (f(x)−f(x−h)) = hf′(x+θ₁h) − hf′(x−θ₂h) = h²(θ₁+θ₂) f^{(2)}(x+θ₃h),

where θ₁, θ₂, θ₃ ∈ (0,1) depend on x and h. Again, since f ∈ 𝒮(ℝ), its second derivative f^{(2)} is bounded, and we get

|f(x+h)−2f(x)+f(x−h)| = |(f(x+h)−f(x)) − (f(x)−f(x−h))| ≤ 2|h|² ‖f^{(2)}‖_{L^∞(ℝ)}.

Plugging this inequality into I₂, we get the following upper bound:

I₂ ≤ 4‖f^{(2)}‖_{L^∞(ℝ)} ∫_ε^1 h^{1−2α} dh ≤ 4‖f^{(2)}‖_{L^∞(ℝ)} ∫₀^1 h^{1−2α} dh = 2‖f^{(2)}‖_{L^∞(ℝ)}/(1−α) < ∞.

Hence, taking the limit as ε → 0 in (1.126) and using the dominated convergence theorem, we conclude the proof.

Taking into account Lemma 1.72, we can provide an integral form of the fractional Laplacian.

Theorem 1.73. Let f ∈ 𝒮(ℝ) and α ∈ (0,1). Then

((−d²/dx²)^α f)(x) = C_α^{FL} lim_{ε→0} ∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh = −(C_α^{FL}/2) ∫_ℝ (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh,  (1.127)

where

C_α^{FL} = (∫_ℝ (1−cos t)/|t|^{1+2α} dt)^{−1}.  (1.128)

Proof. Let us evaluate the Fourier transform of −(C_α^{FL}/2) ∫_ℝ (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh. To do this, note that the same bounds as in the proof of Lemma 1.72 allow us to use the Fubini theorem to exchange the order of the Fourier transform and the integration. Hence we have

ℱ[−(C_α^{FL}/2) ∫_ℝ (f(⋅+h)−2f(⋅)+f(⋅−h))/|h|^{1+2α} dh](z) = −(C_α^{FL}/2) ∫_ℝ (ℱ[f(⋅+h)](z) − 2ℱ[f](z) + ℱ[f(⋅−h)](z))/|h|^{1+2α} dh = −ℱ[f](z) (C_α^{FL}/2) ∫_ℝ (e^{izh}−2+e^{−izh})/|h|^{1+2α} dh,  (1.129)



where we used the properties of the Fourier transform (see Appendix A.2). Now let us rewrite the latter integral in a better form. Since the integrand is even in the variable z, we can suppose, without loss of generality, that z > 0. Then, by using the definition of the complex exponential, we get

−(C_α^{FL}/2) ∫_ℝ (e^{izh}−2+e^{−izh})/|h|^{1+2α} dh = C_α^{FL} ∫_ℝ (1−cos(zh))/|h|^{1+2α} dh.

By using the change of variables zh = t, we get

−(C_α^{FL}/2) ∫_ℝ (e^{izh}−2+e^{−izh})/|h|^{1+2α} dh = |z|^{2α} C_α^{FL} ∫_ℝ (1−cos t)/|t|^{1+2α} dt = |z|^{2α},

where we used the definition of C_α^{FL} given in (1.128). Plugging this relation into (1.129), we get

ℱ[−(C_α^{FL}/2) ∫_ℝ (f(⋅+h)−2f(⋅)+f(⋅−h))/|h|^{1+2α} dh](z) = |z|^{2α} ℱ[f](z) = ℱ[(−d²/dx²)^α f](z).

We conclude the proof by the injectivity of the Fourier transform.

Remark 1.74. We can prove that

C_α^{FL} = 4^α Γ(α+1/2)/(|Γ(−α)| √π)  (1.130)

as in Exercise 1.16. Furthermore, transformations of the Gamma function give the equalities

C^{FL}_{α/2} = α(1−α)/(2Γ(2−α) cos(πα/2)) if α ≠ 1, and C^{FL}_{1/2} = 1/π if α = 1.

The proof of this identity is left to the reader in Exercise 1.17.

Theorem 1.73 provides a pointwise definition of the fractional Laplacian.

Definition 1.75. For a suitable measurable function f : ℝ → ℝ, we define its fractional Laplacian of order α ∈ (0,1) as

((−d²/dx²)^α f)(x) = −(C_α^{FL}/2) ∫_ℝ (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh,  x ∈ ℝ,  (1.131)



provided that the integral is absolutely convergent. Arguing as in Lemma 1.72, if (1.131) converges, then we also have

((−d²/dx²)^α f)(x) = C_α^{FL} lim_{ε→0} ∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h))/|h|^{1+2α} dh,  x ∈ ℝ.  (1.132)
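The consistency of the Fourier definition (1.125), the pointwise representation (1.131)–(1.132), and the constant (1.128)/(1.130) can be verified numerically on the Gaussian f(x) = e^{−x²}, whose Fourier transform is √π e^{−z²/4} (a numerical illustration added here, not part of the original text):

```python
import numpy as np
from math import cos, exp, gamma, pi, sqrt
from scipy.integrate import quad

alpha = 0.4                   # order of (-d^2/dx^2)^alpha, alpha in (0, 1)
x = 0.5
f = lambda u: exp(-u * u)

# constant (1.130)
C = 4.0 ** alpha * gamma(alpha + 0.5) / (abs(gamma(-alpha)) * sqrt(pi))

# check (1.128): 1/C equals the integral of (1 - cos t)/|t|^(1+2 alpha) over R
head, _ = quad(lambda t: (1 - cos(t)) / t ** (1 + 2 * alpha), 0, 1)
tail_cos, _ = quad(lambda t: t ** (-1 - 2 * alpha), 1, np.inf, weight="cos", wvar=1.0)
integral = 2.0 * (head + 1.0 / (2 * alpha) - tail_cos)
assert abs(integral - 1.0 / C) < 1e-6

# pointwise form (1.131): -(C/2) times the second-difference integral
secdiff = lambda h: (f(x + h) - 2 * f(x) + f(x - h)) / h ** (1 + 2 * alpha)
pointwise = -C * (quad(secdiff, 0, 1)[0] + quad(secdiff, 1, np.inf)[0])  # symmetry cancels the 1/2

# Fourier form (1.125): inverse transform of |z|^(2 alpha) * sqrt(pi) * exp(-z^2/4)
spec, _ = quad(lambda z: z ** (2 * alpha) * sqrt(pi) * exp(-z * z / 4) * cos(z * x), 0, np.inf)
spectral = spec / pi
assert abs(pointwise - spectral) < 1e-6
```

The oscillatory tail of (1.128) is handled with scipy's weighted quadrature (`weight="cos"`); everything else is plain adaptive quadrature.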

Furthermore, it is clear that if

∫_ℝ |f(x)−f(x+h)|/|h|^{1+2α} dh < ∞  (1.133)

for x ∈ ℝ, then also the integral on the right-hand side of (1.131) converges, and we have

((−d²/dx²)^α f)(x) = C_α^{FL} ∫_ℝ (f(x)−f(x+h))/|h|^{1+2α} dh,  x ∈ ℝ,  (1.134)

by a simple application of the dominated convergence theorem. Sufficient conditions for the convergence of the integral involved in (1.131) are stated in the following proposition.

Proposition 1.76. Let α ∈ (0,1) and f ∈ 𝒞_b(ℝ).
(i) If α < 1/2 and f ∈ 𝒞^λ_{loc}(ℝ) for some λ > 2α, then (1.131) is absolutely convergent, (1.134) holds, and (−d²/dx²)^α f ∈ 𝒞(ℝ).
(ii) If α ≥ 1/2 and f ∈ 𝒞¹(ℝ) with f′ ∈ 𝒞^λ_{loc}(ℝ) for some λ > 2α−1, then (1.131) is absolutely convergent, and (−d²/dx²)^α f ∈ 𝒞(ℝ). Furthermore, for all δ > 0,

((−d²/dx²)^α f)(x) = C_α^{FL} ∫_ℝ (f(x)−f(x+h)+f′(x)h 𝟙_{[−δ,δ]}(h))/|h|^{1+2α} dh.  (1.135)

Proof. Let us first prove item (i). Assume that α < 1/2 and f ∈ 𝒞^λ_{loc}(ℝ) for some λ > 2α. Then we have

∫_ℝ |f(x)−f(x+h)|/|h|^{1+2α} dh = ∫_{−1}^1 |f(x)−f(x+h)|/|h|^{1+2α} dh + ∫_{ℝ∖[−1,1]} |f(x)−f(x+h)|/|h|^{1+2α} dh.

To handle the first integral, note that for x, x+h ∈ [x−1, x+1],

∫_{−1}^1 |f(x)−f(x+h)|/|h|^{1+2α} dh ≤ 2C_{λ,1}(x) ∫₀^1 h^{λ−1−2α} dh = 2C_{λ,1}(x)/(λ−2α) < ∞,

where C_{λ,1}(x) > 0 is the Hölder constant of f on [x−1, x+1]. At the same time, for the second integral, we have

∫_{ℝ∖[−1,1]} |f(x)−f(x+h)|/|h|^{1+2α} dh ≤ 4‖f‖_{L^∞(ℝ)} ∫₁^∞ h^{−1−2α} dh = 2‖f‖_{L^∞(ℝ)}/α < ∞.

This proves that (1.133) holds, and then we get (1.134). Let us also note that

|((−d²/dx²)^α f)(x)| ≤ C_α^{FL} (2C_{λ,1}(x)/(λ−2α) + 2‖f‖_{L^∞(ℝ)}/α).  (1.136)

To prove the continuity, let x ∈ ℝ and, without loss of generality, δ ∈ [−1,1]. Notice that

|f(x+δ)−f(x+δ+h)|/|h|^{1+2α} = (|f(x+δ)−f(x+δ+h)|/|h|^{1+2α}) 𝟙_{[−1,1]}(h) + (|f(x+δ)−f(x+δ+h)|/|h|^{1+2α}) 𝟙_{ℝ∖[−1,1]}(h).


If h ∈ [−1,1], then x+δ, x+δ+h ∈ [x−2, x+2], and therefore

|f(x+δ)−f(x+δ+h)|/|h|^{1+2α} ≤ C_{λ,2}(x)|h|^{λ−1−2α} 𝟙_{[−1,1]}(h) + (2‖f‖_{L^∞(ℝ)}/|h|^{1+2α}) 𝟙_{ℝ∖[−1,1]}(h),

where C_{λ,2}(x) is the Hölder constant of f on [x−2, x+2], and the right-hand side belongs to L¹(ℝ). Hence we can use the dominated convergence theorem to get

lim_{δ→0} ((−d²/dx²)^α f)(x+δ) = C_α^{FL} lim_{δ→0} ∫_ℝ (f(x+δ)−f(x+δ+h))/|h|^{1+2α} dh = C_α^{FL} ∫_ℝ (f(x)−f(x+h))/|h|^{1+2α} dh = ((−d²/dx²)^α f)(x).

This proves item (i). To prove item (ii), let us first note that

∫_ℝ |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh = ∫_{−1}^1 |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh + ∫_{ℝ∖[−1,1]} |f(x+h)−2f(x)+f(x−h)|/|h|^{1+2α} dh =: I₁ + I₂.

Let us bound I₂:

I₂ ≤ 8‖f‖_{L^∞(ℝ)} ∫₁^∞ h^{−1−2α} dh = 4‖f‖_{L^∞(ℝ)}/α < ∞.

Furthermore, the Lagrange theorem supplies the existence of θ_j(x,h) ∈ [−|h|, |h|], j = 1, 2, such that

|f(x+h)−2f(x)+f(x−h)| = |f′(x+θ₁(x,h)) − f′(x+θ₂(x,h))| |h|.

If h ∈ [−1,1], then we have x+θ_j(x,h) ∈ [x−1, x+1], j = 1, 2, and then

|f(x+h)−2f(x)+f(x−h)| ≤ C¹_{λ,1}(x) |θ₁(x,h)−θ₂(x,h)|^λ |h| ≤ 2^λ C¹_{λ,1}(x) |h|^{1+λ},  (1.137)

where C¹_{λ,1}(x) is the Hölder constant of f′ on [x−1, x+1]. Hence, for I₁, we have

I₁ ≤ 2^{1+λ} C¹_{λ,1}(x) ∫₀^1 h^{λ−2α} dh = 2^{1+λ} C¹_{λ,1}(x)/(λ−2α+1) < ∞.

This proves that the integral on the right-hand side of (1.131) is absolutely convergent. Furthermore,

|((−d²/dx²)^α f)(x)| ≤ C_α^{FL} (2^{1+λ} C¹_{λ,1}(x)/(λ−2α+1) + 4‖f‖_{L^∞(ℝ)}/α) < ∞.  (1.138)

70 � 1 Fractional integrals and derivatives Next, to prove the continuity, let x ∈ ℝ and δ ∈ [−1, 1]. Then |f (x + δ + h) − 2f (x + δ) + f (x + δ − h)| |h|1+2α |f (x + δ + h) − 2f (x + δ) − f (x + δ − h)| = 1[−1,1] (h) |h|1+2α |f (x + δ + h) − 2f (x + δ) + f (x + δ + h)| + 1ℝ\[−1,1] (|h|). |h|1+2α

(1.139)

Let h ∈ [−1,1]. Then x+δ+h ∈ [x+δ−1, x+δ+1] ⊆ [x−2, x+2], and hence we can use (1.137) again, where in place of C¹_{λ,1}(x) we use the Hölder constant C¹_{λ,2}(x) of f′ on the interval [x−2, x+2], which is thus independent of δ. Therefore

|f(x+δ+h)−2f(x+δ)+f(x+δ−h)|/|h|^{1+2α} ≤ 2^λ C¹_{λ,2}(x) |h|^{λ−2α} 𝟙_{[−1,1]}(h) + (4‖f‖_{L^∞(ℝ)}/|h|^{1+2α}) 𝟙_{ℝ∖[−1,1]}(h),

where the right-hand side belongs to L¹(ℝ). Hence we can use the dominated convergence theorem to get

lim_{δ→0} ((−d²/dx²)^α f)(x+δ) = −(C_α^{FL}/2) lim_{δ→0} ∫_ℝ (f(x+δ+h)−2f(x+δ)+f(x+δ−h))/|h|^{1+2α} dh = −(C_α^{FL}/2) ∫_ℝ (f(x+h)−2f(x)+f(x−h))/|h|^{1+2α} dh = ((−d²/dx²)^α f)(x).

Finally, let us show (1.135). To do this, fix any δ > 0, consider any ε < δ, and notice that

∫_{ℝ∖(−ε,ε)} (f′(x)h/|h|^{1+2α}) 𝟙_{[−δ,δ]}(h) dh = ∫_{[−δ,δ]∖(−ε,ε)} f′(x)h/|h|^{1+2α} dh = 0,

so that, by (1.131),

((−d²/dx²)^α f)(x) = C_α^{FL} lim_{ε→0} ∫_{ℝ∖(−ε,ε)} (f(x)−f(x+h)+f′(x)h 𝟙_{[−δ,δ]}(h))/|h|^{1+2α} dh.

Hence we only need to show that the integral on the right-hand side is absolutely convergent once we take the limit as ε → 0, so that we can use the dominated convergence


theorem to get (1.135). To do this, we write

∫_ℝ |f(x)−f(x+h)+f′(x)h 𝟙_{[−δ,δ]}(h)|/|h|^{1+2α} dh = ∫_{−δ}^δ |f(x)−f(x+h)+f′(x)h|/|h|^{1+2α} dh + ∫_{ℝ∖[−δ,δ]} |f(x)−f(x+h)|/|h|^{1+2α} dh =: I₃ + I₄.

Clearly,

I₄ ≤ 2‖f‖_{L^∞(ℝ)}/(αδ^{2α}).

Concerning I₃, by the Lagrange theorem there exists θ₃(x,h) ∈ [−h, h] such that f(x+h)−f(x) = f′(x+θ₃(x,h))h, and then, since x, x+θ₃(x,h) ∈ [x−δ, x+δ],

|f(x)−f(x+h)+f′(x)h| = |f′(x)−f′(x+θ₃(x,h))| |h| ≤ C¹_{λ,δ}(x) |θ₃(x,h)|^λ |h| ≤ C¹_{λ,δ}(x) |h|^{1+λ},

where C¹_{λ,δ}(x) > 0 is the Hölder constant of f′ on [x−δ, x+δ]. Hence

I₃ ≤ 2C¹_{λ,δ}(x) ∫₀^δ h^{λ−2α} dh = 2C¹_{λ,δ}(x) δ^{λ+1−2α}/(λ+1−2α) < ∞.

Remark 1.77. If f ∈ 𝒞¹(ℝ), then f ∈ Lip_{loc}(ℝ), and for α < 1/2, (−d²/dx²)^α f ∈ 𝒞(ℝ). Furthermore, (1.135) holds even in this case.

Remark 1.78. If α > 1/2, then, under the assumptions of Proposition 1.76, for all δ > 0,

∫_{ℝ∖[−δ,δ]} |f′(x)h|/|h|^{1+2α} dh = 2|f′(x)|/((2α−1)δ^{2α−1}) < ∞;

hence h ↦ f′(x)h/|h|^{1+2α} is integrable on ℝ∖[−δ,δ], and, by oddness, we have

∫_{ℝ∖[−δ,δ]} f′(x)h/|h|^{1+2α} dh = 0.

This implies that if α > 1/2 and f ∈ 𝒞_b(ℝ) satisfies the assumptions of item (ii) of Proposition 1.76, then for all x ∈ ℝ,

((−d²/dx²)^α f)(x) = C_α^{FL} ∫_ℝ (f(x)−f(x+h)+hf′(x))/|h|^{1+2α} dh,  (1.140)

where the integral is absolutely convergent.
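The compensated representation (1.140) can also be tested numerically against the Fourier definition (1.125), again on the Gaussian f(x) = e^{−x²} (an added illustration, not from the original text):

```python
import numpy as np
from math import cos, exp, gamma, pi, sqrt
from scipy.integrate import quad

alpha = 0.7                           # alpha > 1/2
x = 0.3
f = lambda u: exp(-u * u)
fp = lambda u: -2 * u * exp(-u * u)   # f'

C = 4.0 ** alpha * gamma(alpha + 0.5) / (abs(gamma(-alpha)) * sqrt(pi))   # (1.130)

# (1.140): C times the compensated integral over the whole line
g = lambda h: (f(x) - f(x + h) + h * fp(x)) / abs(h) ** (1 + 2 * alpha)
parts = [quad(g, a, b)[0] for (a, b) in ((-np.inf, -1), (-1, 0), (0, 1), (1, np.inf))]
compensated = C * sum(parts)

# Fourier definition, using F[exp(-u^2)](z) = sqrt(pi) exp(-z^2/4)
spec, _ = quad(lambda z: z ** (2 * alpha) * sqrt(pi) * exp(-z * z / 4) * cos(z * x), 0, np.inf)
spectral = spec / pi
assert abs(compensated - spectral) < 1e-5
```

Near h = 0 the compensated integrand behaves like |h|^{1−2α}, so for α ∈ (1/2, 1) the singularity is integrable without a principal value.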

If f is globally Hölder continuous, we can prove that (−d²/dx²)^α f is also Hölder continuous.

Proposition 1.79. Let α ∈ (0,1) and f ∈ 𝒞_b(ℝ).
(i) If α < 1/2 and f ∈ 𝒞^λ(ℝ) for some λ > 2α, then (−d²/dx²)^α f ∈ 𝒞_b^{λ−2α}(ℝ). Furthermore, if f ∈ 𝒞₀(ℝ), then also (−d²/dx²)^α f ∈ 𝒞₀(ℝ).
(ii) If α ≥ 1/2 and f ∈ 𝒞¹(ℝ) with f′ ∈ 𝒞^λ(ℝ) for some λ > 2α−1, then (−d²/dx²)^α f ∈ 𝒞^{λ+1−2α}(ℝ). Furthermore, if f ∈ 𝒞₀(ℝ), then also (−d²/dx²)^α f ∈ 𝒞₀(ℝ).
2

d Proof. Recall that (− dx 2 ) f is well-defined and belongs to 𝒞 (ℝ) by Proposition 1.76. Let 2

α

d us first prove item (i). To show that (− dx 2 ) f ∈ 𝒞b (ℝ), we just use (1.136), where, denoting by Cλ the Hölder constant of f , Cλ,1 (x) ≤ Cλ . Now, let x, y ∈ ℝ, δ = |x − y| and notice that α α 󵄨󵄨 󵄨󵄨 d2 d2 󵄨󵄨 󵄨󵄨 󵄨󵄨((− 󵄨󵄨 ) f ) (x) − ((− ) f ) (y) 2 2 󵄨󵄨 󵄨󵄨 dx dx 󵄨 󵄨 δ

≤ CαFL ∫ −δ

+

CαFL

|f (y) − f (y + h) − f (x) + f (x + h)| dh |h|1+2α ∫ ℝ\[−δ,δ]

|f (y) − f (y + h) − f (x) + f (x + h)| dh =: CαFL (I1 + I2 ). |h|1+2α

Concerning I1 , observe that δ

I1 ≤ 4Cλ ∫ hλ−1−2α dh = 0

4Cλ |x − y|λ−2α , λ − 2α

whereas for I2 , we have λ

+∞

I2 ≤ 4Cλ |x − y| ∫ h−1−2α dh = δ 2

2Cλ |x − y|λ−2α . α

α

d λ−2α Therefore (− dx (ℝ). Finally, let f ∈ 𝒞0 (ℝ). Then 2) f ∈ 𝒞

2‖f ‖L∞ (ℝ) |f (x) − f (x + h)| ≤ Cλ |h|λ−1−2α 1[−1,1] (h) + 1ℝ\[−1,1] (h), |h|1+2α |h|1+2α

1.11 The one-dimensional fractional Laplacian

� 73

where the right-hand side belongs to L1 (ℝ), and we get from the dominated convergence theorem that lim ((−

x→±∞

α

f (x) − f (x + h) d2 ) f ) (x) = lim CαFL ∫ dh = 0. x→±∞ dx 2 |h|1+2α ℝ

α

2

d For item (ii), the fact that (− dx ∈ 𝒞b (ℝ) follows by (1.138) once we recall that 2) f 1 Cλ,1 (x) ≤ Cλ1 , where Cλ1 is the Hölder constant of f ′ . Now, let x, y ∈ ℝ, δ = |x − y|. Then α α 󵄨󵄨 󵄨󵄨 d2 d2 󵄨󵄨 󵄨󵄨 󵄨󵄨((− 󵄨󵄨 ) f ) (x) − ((− ) f ) (y) 󵄨󵄨 󵄨󵄨 dx 2 dx 2 󵄨 󵄨 δ

C FL |f (x + h) − 2f (x) + f (x − h) − f (y + h) + 2f (y) − f (y − h)| ≤ α ∫ dh 2 |h|1+2α + =:

−δ FL Cα

2

∫ ℝ\[−δ,δ]

CαFL (I3

|f (x + h) − 2f (x) + f (x − h) − f (y + h) + 2f (y) − f (y − h)| dh |h|1+2α

+ I4 ).

1 To handle I3 , we use (1.137) and the fact that Cλ,1 (x) ≤ Cλ1 to get

I3 ≤

δ 1 4Cλ ∫ hλ−2α dh 0

=

4Cλ1 |x − y|λ+1−2α . λ + 1 − 2α

For I4 , we need to argue differently. Indeed, assuming, without loss of generality, that y ≥ x, we have 󵄨󵄨 󵄨 󵄨󵄨f (x + h) − 2f (x) + f (x − h) − f (y + h) + 2f (y) − f (y − h)󵄨󵄨󵄨 y

󵄨 󵄨 ≤ ∫ 󵄨󵄨󵄨f ′ (t + h) − 2f ′ (t) + f ′ (t − h)󵄨󵄨󵄨 dt ≤ 2Cλ |h|λ |x − y| x

and then ∞

I4 ≤ 4Cλ |x − y| ∫ hλ−1−2α dh = δ 2

4Cλ |x − y|λ+1−2α . 2α − λ

α

d λ+1−2α This proves that (− dx (ℝ). Finally, for f ∈ 𝒞0 (ℝ), it follows from by (1.137) 2) f ∈ 𝒞 that

4‖f ‖L∞ (ℝ) |f (x + h) − 2f (x) + f (x − h)| ≤ 2Cλ1 |h|λ−2α 1[−1,1] (h) + 1ℝ\[−1,1] (h), 1+2α |h| |h|1+2α

74 � 1 Fractional integrals and derivatives where the right-hand side belongs to L1 (ℝ). The dominated convergence theorem implies that lim ((−

x→±∞

α

f (x + h) − 2f (x) + f (x − h) d2 ) f ) (x) = lim CαFL ∫ dh = 0. 2 x→±∞ dx |h|1+2α ℝ

Remark 1.80. In particular, the previous proposition tells us that if f ∈ 𝒞02 (ℝ), then 2

α

d 1+⌊2α⌋−2α (− dx (ℝ) if α ≠ 2 ) f is well-defined and belongs to 𝒞0 (ℝ) ∩ 𝒞

𝒞0 (ℝ) ∩ Lip(ℝ) if α =

1 . 2

1 2

and to

So far we have formulated sufficient conditions for the existence of the fractional Laplacian of a function in terms of its regularity. Now consider some conditions in terms of integrability of the functions involved. Definition 1.81. For p ≥ 1 and α ∈ (0, 1), we define the Gagliardo seminorm of a function f : ℝ → ℝ as [f ]pα,p = ∫ ∫ ℝℝ

|f (x) − f (y)|p dxdy |x − y|1+pα

(1.141)

and, consequently, the Gagliardo–Sobolev space (or fractional Sobolev space) W α,p (ℝ) = {f ∈ Lp (ℝ) : [f ]pα,p < ∞} , which is a Banach space if we equip it with the norm ‖f ‖W α,p (ℝ) = ‖f ‖Lp (ℝ) + [f ]α,p . In the case p = 2 the space W α,2 (ℝ) is in fact a Hilbert space when equipped with the scalar product ⟨f , g⟩W α,2 (ℝ) = ∫ f (x)g(x)dx + ∫ ∫ ℝ

ℝℝ

(f (x) − f (y))(g(x) − g(y)) dxdy. |x − y|1+2α

We also define the space W α,p (ℝ) for α ≥ 1 by setting W α,p (ℝ) = {u ∈ W ⌊α⌋,p (ℝ) : u(⌊α⌋) ∈ W α−⌊α⌋,p (ℝ)}

(1.142)

if α ∈ ̸ ℕ, whereas W α,p (ℝ) is the usual Sobolev space if α ∈ ℕ. For α ∈ ̸ ℕ, W α,p (ℝ) is a Banach space once it is equipped with the norm ‖u‖W α,p (ℝ) := ‖u‖W ⌊α⌋,p (ℝ) + [u(⌊α⌋) ]p,α−⌊α⌋ .

1.11 The one-dimensional fractional Laplacian

� 75

The following result, which is a combination of [57, Corollary 2.3 and Theorem 2.4] (see also [37, Lemma A.1] for a constructive proof), provides a relation between fractional Sobolev spaces. Proposition 1.82. For any 0 ≤ α1 ≤ α2 and p ≥ 1, there exists a constant Cα1 ,α2 ,p such that for every u ∈ W α2 ,p (ℝ) (where W 0,p (ℝ) = Lp (ℝ)), we have ‖u‖W α1 ,p (ℝ) ≤ Cα1 ,α2 ,p ‖u‖W α2 ,p (ℝ) , i. e., W α2 ,p (ℝ) is continuously embedded in W α1 ,p (ℝ). Moreover, both the spaces 𝒞c∞ (ℝ) and 𝒮 (ℝ) are dense in W α,p (ℝ) for all α ≥ 0 and p ≥ 1. Sufficient conditions for the existence of the fractional Laplacian based on the Gagliardo–Sobolev spaces are given in the following proposition. 2

α 2

d Proposition 1.83. Fix α ∈ (0, 2) with α ≠ 1. For every f ∈ W α,1 (ℝ), the function (− dx 2) f

is a. e. well-defined and belongs to L1 (ℝ). Moreover: (i) If α ∈ (0, 1), then for a. a. x ∈ ℝ, α

f (x) − f (x + h) d2 2 FL ((− 2 ) f ) (x) = Cα/2 dh, ∫ |h|1+α dx ℝ

where the integral is absolutely convergent. (ii) If α ∈ (1, 2), then for a. a. x ∈ ℝ, α

f (x) − f (x + h) + hf ′ (x) d2 2 FL ((− 2 ) f ) (x) = Cα/2 dh, ∫ |h|1+α dx

(1.143)



where the integral is absolutely convergent. In particular, there exists a constant Cα such that for all f ∈ W α,1 (ℝ), α 󵄩󵄩 2 2 󵄩󵄩 󵄩󵄩(− d ) 󵄩󵄩 2 󵄩󵄩 dx 󵄩

󵄩󵄩 󵄩󵄩 f 󵄩󵄩󵄩󵄩 ≤ Cα ‖f ‖W α,1 (ℝ) . 󵄩󵄩 1 󵄩L (ℝ)

(1.144)

Proof. Let us first prove the statement and item (i) for α ∈ (0, 1). Obviously, if we prove that for a. a. x ∈ ℝ, ∫ ℝ

|f (x) − f (x + h)| dh < ∞, |h|1+α

76 � 1 Fractional integrals and derivatives then we can explicitly take the limit in the integral representation given by Theorem 1.73. Now note that ∫∫ ℝℝ

|f (x) − f (x + h)| |f (x) − f (y)| dhdx = ∫ ∫ dydx = [f ]α,1 < ∞, |h|1+α |x − y|1+α ℝℝ

(x+h)| where [f ]α,1 is the Gagliardo seminorm defined in (1.141), and thus ∫ℝ |f (x)−f dh is |h|1+α a. e. finite. This also proves (1.144) if α ∈ (0, 1). Now let α ∈ (1, 2). Observe that if f ∈ W α,1 (ℝ), then in particular f ∈ W 1,1 (ℝ), and it is absolutely continuous. Furthermore, f ′ ∈ W α−1,1 (ℝ). Hence we have +∞

|f (x + h) − 2f (x) + f (x − h)| |f (x + h) − 2f (x) + f (x − h)| dh dx = ∫ ∫ dh dx ∫∫ 1+α 2|h| |h|1+α

ℝℝ

ℝ 0

+∞ h

+∞ +∞

|f ′ (x + τ) − f ′ (x − τ)| |f ′ (x + τ) − f ′ (x − τ)| ≤∫ ∫ ∫ dτ dh dx = dh dτ dx ∫ ∫ ∫ |h|1+α |h|1+α ℝ 0 0 +∞

=∫ ∫ ℝ 0

ℝ 0

τ

[f ′ ]α−1 |f ′ (x + τ) − f ′ (x − τ)| |f ′ (x + τ) − f ′ (x − τ)| dτ dx = ∫ ∫ dτ dx = < ∞. α α α|τ| 2α|τ| 2α ℝℝ

2

α 2

𝜕 1 This proves that (− 𝜕x 2 ) f ∈ L (ℝ) and that (1.144) holds for α ∈ (1, 2). Now we prove that (1.143) holds for a. a. x ∈ ℝ. Arguing as in Remark 1.78, it is sufficient to prove that for a. a. x ∈ ℝ and all δ > 0 α

f (x) − f (x + h) + f (x)h1[−δ,δ] (h) d2 2 FL ((− 2 ) f ) (x) = Cα/2 dh, ∫ |h|1+α dx ′

(1.145)



where the integral is absolutely convergent. To do this, we argue exactly in the same way as we did in item (ii) of Proposition 1.76 to get for a. a. x ∈ ℝ α

d2 2 FL ((− 2 ) f ) (x) = Cα/2 lim ε→0 dx



f (x) − f (x + h) + f ′ (x)h1[−δ,δ] (h) |h|1+α

ℝ\(−ε,ε)

dh.

We only need to prove that the integral on the right-hand side is absolutely convergent. To do this, we split the integral as follows

∫ ℝ

|f (x) − f (x + h) + f ′ (x)h1[−δ,δ] (h)| |h|1+α

δ

dh = ∫ −δ

+

|f (x) − f (x + h) + f ′ (x)h| dh |h|1+α ∫ ℝ\[−δ,δ]

|f (x) − f (x + h)| dh =: I1 + I2 . |h|1+α

1.11 The one-dimensional fractional Laplacian

� 77

To handle I1 , we use the fact that f is absolutely continuous and then we have δ h

I1 ≤ ∫ ∫ 0 0 δ h

= ∫∫ 0 0

0 0

|f ′ (x + τ) − f ′ (x)| |f ′ (x + τ) − f ′ (x)| dτ dh + ∫ ∫ dτ dh 1+α h (−h)1+α −δ h

δ h

|f ′ (x + τ) − f ′ (x)| |f ′ (x − τ) − f ′ (x)| dτ dh + ∫ ∫ dτ dh 1+α h h1+α 0 0

δ

δ

0

τ

δ

δ

0

τ

1 1 󵄨 󵄨 󵄨 󵄨 = ∫ 󵄨󵄨󵄨f ′ (x + τ) − f ′ (x)󵄨󵄨󵄨 ∫ 1+α dh dτ + ∫ 󵄨󵄨󵄨f ′ (x − τ) − f ′ (x)󵄨󵄨󵄨 ∫ 1+α dh dτ h h δ

≤∫ 0

δ

|f ′ (x + τ) − f ′ (x)| |f ′ (x − τ) − f ′ (x)| dτ + dτ < ∞ ∫ τα τα 0

for a. a. x ∈ ℝ since f ′ ∈ W α−1,1 (ℝ). Concerning I2 , we get I2 ≤

f (x) + ‖f ‖L1 (ℝ) 1 󵄨 󵄨 < ∞. ∫ 󵄨󵄨f (x) − f (x + h)󵄨󵄨󵄨 dh ≤ δ1+α 󵄨 δ1+α ℝ

The previous proposition holds even for the Marchaud derivative. Indeed, we can prove the following result. Proposition 1.84. Fix α ∈ (0, 2) with α ≠ 1. For every f ∈ W α,1 (ℝ), the function M Dα0+ f is a. e. well-defined and belongs to L1 (ℝ). In particular, there exists a constant Cα such that for every f ∈ W α,1 (ℝ), 󵄩󵄩M α 󵄩󵄩 󵄩󵄩 D0+ f 󵄩󵄩 1 ≤ Cα ‖f ‖W α,1 (ℝ) . 󵄩 󵄩L (ℝ)

(1.146)

Moreover, if α ∈ (1, 2), then for a. a. x ∈ ℝ, M

(

Dα± f ) (x)

+∞

f (x) − f (x ∓ h) ∓ hf ′ (x) α(1 − α) = dh, ∫ Γ(2 − α) h1+α

(1.147)

0

where the integral is absolutely convergent. Proof. We focus on M Dα+ . The proof of (1.146) is identical to that of (1.144) and thus is ̃ α be defined as in (1.110). Arguing as omitted. To prove (1.147), let α ∈ (1, 2), and let M D + in item (ii) of Proposition 1.83, we get that there exists a constant C̃α > 0 such that for ̃ ̃αf ‖ 1 f ∈ W α,1 (ℝ), we have ‖M D + L (ℝ) ≤ Cα ‖f ‖W α,1 . Once this is established, we use the α,1 fact that 𝒮 (ℝ) ⊂ W (ℝ) is dense to get (1.147) by adopting the same proof as in Theorem 1.67. Remark 1.85. The previous proposition holds for all α > 0 with α ∈ ̸ ℕ by substituting (1.147) with (1.109).

78 � 1 Fractional integrals and derivatives There is a strict link between the Marchaud derivative and the fractional Laplacian, which is explicated in the following proposition. Proposition 1.86. Let α ∈ (0, 1) ∪ (1, 2) and f ∈ W α,1 (ℝ). Then α

M

Dα+ f (x)

+

M

Dα− f (x)

α(1 − α) d2 2 = (− 2 ) f (x) FL dx Γ(2 − α)Cα/2

for a. a. x ∈ ℝ. 2

α 2

d M α Proof. From Propositions 1.83 and 1.84 we know that (− dx D− f ∈ L1 (ℝ) for 2) f,

f ∈ W α,1 (ℝ). Now let α ∈ (0, 1) and consider x ∈ ℝ such that all the involved integrals are absolutely convergent. Then M

Dα+ f (x) + M Dα− f (x) =

f (x) − f (x − h) f (x) − f (x + h) α α dh + dh ∫ ∫ α+1 Γ(1 − α) Γ(1 − α) h hα+1 ℝ+

x

=

ℝ+



f (x) − f (y) f (x) − f (y) α α dy + dh ∫ ∫ Γ(1 − α) Γ(1 − α) (x − y)α+1 (y − x)α+1 x

−∞

α

f (x) − f (y) α α(1 − α) d2 2 = dy = (− ) f (x). ∫ FL Γ(1 − α) |x − y|α+1 dx 2 Γ(2 − α)Cα/2 ℝ

If α ∈ (1, 2), then we use (1.147) and (1.143) to get M

Dα+ f (x) + M Dα− f (x) =

f (x) − f (x − h) − hf ′ (x) f (x) − f (x + h) + hf ′ (x) α(1 − α) (∫ dh + dh) ∫ Γ(2 − α) hα+1 hα+1 ℝ+

ℝ+

0

=

+∞

f (x) − f (x + h) + hf ′ (x) f (x) − f (x + h) + hf ′ (x) α(1 − α) (∫ dh + ∫ dh) α+1 Γ(2 − α) |h| hα+1 0

−∞

α

α(1 − α) f (x) − f (x + h) + hf (x) α(1 − α) d2 2 = dh = (− ) f (x). ∫ FL Γ(2 − α) |h|α+1 dx 2 Γ(2 − α)Cα/2 ′



It is clear from the proof of Proposition 1.83 that if α = 1, then f ∈ W^{1,1}(ℝ) is not sufficient to guarantee that (−d²/dx²)^{1/2} f is a. e. well-defined. A sufficient condition is established in the following proposition.


Proposition 1.87. Let f ∈ W^{1,1}(ℝ), and assume that for every −∞ < a < b < +∞ there exists a function g_{[a,b]} : [0, 1] → ℝ⁺₀ such that

∫_0^1 g_{[a,b]}(τ)/τ dτ < ∞  and  ∫_a^b |f′(x + τ) − f′(x)| dx ≤ g_{[a,b]}(τ) for a. a. τ ∈ [0, 1].   (1.148)

Then (−d²/dx²)^{1/2} f is a. e. well-defined and belongs to L¹_loc(ℝ). Furthermore, for a. a. x ∈ ℝ and all δ > 0,

((−d²/dx²)^{1/2} f)(x) = C^{FL}_{1/2} ∫_ℝ (f(x) − f(x + h) + h f′(x) 1_{[−δ,δ]}(h))/|h|² dh,   (1.149)

where the integral on the right-hand side is absolutely convergent.

Proof. First observe that since f ∈ W^{1,1}(ℝ), it is absolutely continuous. Arguing exactly as in the proof of Proposition 1.83, for any n ∈ ℕ we have

∫_{−n}^n ∫_ℝ |f(x + h) − 2f(x) + f(x − h)|/(2|h|²) dh dx ≤ ∫_{−n}^n ∫_0^{+∞} |f′(x + τ) − f′(x − τ)|/τ dτ dx
 = ∫_{−n}^n ∫_0^{1/2} |f′(x + τ) − f′(x − τ)|/τ dτ dx + ∫_{−n}^n ∫_{1/2}^{+∞} |f′(x + τ) − f′(x − τ)|/τ dτ dx = I₁ + I₂.

First we consider I₁. We use the change of variables y = x − τ to get

I₁ = ∫_0^{1/2} ∫_{−n}^n |f′(x + τ) − f′(x − τ)|/τ dx dτ = ∫_0^{1/2} ∫_{−n−τ}^{n−τ} |f′(y + 2τ) − f′(y)|/τ dy dτ
 ≤ ∫_0^{1/2} (1/τ) ∫_{−n−1}^n |f′(y + 2τ) − f′(y)| dy dτ ≤ ∫_0^{1/2} g_{[−n−1,n]}(2τ)/τ dτ = ∫_0^1 g_{[−n−1,n]}(τ)/τ dτ < ∞.

For I₂ we have instead

I₂ ≤ 2 ∫_{−n}^n ∫_{1/2}^{+∞} |f′(x + τ) − f′(x − τ)| dτ dx ≤ 8n ‖f′‖_{L¹(ℝ)} < ∞.

Now we prove that (1.149) holds for a. a. x ∈ ℝ and for all δ > 0. To do this, we argue exactly in the same way as we did in item (ii) of Proposition 1.76 to get, for a. a. x ∈ ℝ,

((−d²/dx²)^{1/2} f)(x) = C^{FL}_{1/2} lim_{ε→0} ∫_{ℝ\(−ε,ε)} (f(x) − f(x + h) + f′(x) h 1_{[−δ,δ]}(h))/|h|² dh.

We only need to prove that the integral on the right-hand side is absolutely convergent. Without loss of generality, we can assume that δ < 1. We split the integral as follows:

∫_ℝ |f(x) − f(x + h) + f′(x) h 1_{[−δ,δ]}(h)|/|h|² dh
 = ∫_{−δ}^δ |f(x) − f(x + h) + f′(x) h|/|h|² dh + ∫_{ℝ\[−δ,δ]} |f(x) − f(x + h)|/|h|² dh =: I₃(x) + I₄(x).

I₄ can be handled in the same exact way as I₂ in Proposition 1.83. To work with I₃, we use the fact that f is absolutely continuous and argue as in Proposition 1.83 to get

I₃(x) ≤ ∫_0^δ |f′(x + τ) − f′(x)|/τ dτ + ∫_0^δ |f′(x − τ) − f′(x)|/τ dτ.

To show that this is finite for a. a. x ∈ ℝ, fix n ∈ ℕ and integrate over [−n, n], so that we have

∫_{−n}^n I₃(x) dx ≤ 2 ∫_0^δ (1/τ) ∫_{−n−1}^n |f′(x + τ) − f′(x)| dx dτ ≤ 2 ∫_0^1 g_{[−n−1,n]}(τ)/τ dτ < ∞.

Since n ∈ ℕ is arbitrary, we conclude that I₃(x) < ∞ for a. a. x ∈ ℝ.

A similar condition can be used to extend Proposition 1.76. A function f ∈ 𝒞(ℝ) is said to be locally Dini continuous if for any −∞ < a < b < +∞ there exists a function g_{[a,b]} : [0, 1] → ℝ⁺₀ such that

∫_0^1 g_{[a,b]}(τ)/τ dτ < ∞  and  |f(x + τ) − f(x)| ≤ g_{[a,b]}(τ) for all τ ∈ [0, 1], x ∈ [a, b].   (1.150)

We say that f is Dini continuous if (1.150) holds for a = −∞ and b = +∞. Clearly, if f ∈ 𝒞^λ_loc(ℝ) for some λ ∈ (0, 1], then f is locally Dini continuous. Hence Dini continuity is an extension of the notion of Hölder continuity. The following proposition holds true.

Proposition 1.88. Let f ∈ 𝒞_b(ℝ) ∩ 𝒞¹(ℝ) with locally Dini continuous derivative. Then ((−d²/dx²)^{1/2} f)(x) is well-defined for all x ∈ ℝ and belongs to 𝒞(ℝ). Furthermore, for all δ > 0 and x ∈ ℝ,

((−d²/dx²)^{1/2} f)(x) = C^{FL}_{1/2} ∫_ℝ (f(x) − f(x + h) + f′(x) h 1_{[−δ,δ]}(h))/|h|² dh.   (1.151)


Proof. Fix x ∈ ℝ and observe that

∫_ℝ |f(x + h) − 2f(x) + f(x − h)|/(2|h|²) dh
 = ∫_{−1/2}^{1/2} |f(x + h) − 2f(x) + f(x − h)|/(2|h|²) dh + ∫_{ℝ\[−1/2,1/2]} |f(x + h) − 2f(x) + f(x − h)|/(2|h|²) dh = I₁(x) + I₂(x).

To handle I₁(x), we use the fact that f ∈ 𝒞¹(ℝ) to write

I₁(x) ≤ ∫_0^{1/2} ∫_0^h |f′(x + τ) − f′(x − τ)|/h² dτ dh = ∫_0^{1/2} ∫_τ^{1/2} |f′(x + τ) − f′(x − τ)|/h² dh dτ ≤ ∫_0^{1/2} |f′(x + τ) − f′(x − τ)|/τ dτ.

Since τ ∈ [0, 1/2], we know that x + τ, x − τ ∈ [x − 1/2, x + 1/2], and then we can use the local Dini continuity of f′ to get

I₁(x) ≤ ∫_0^{1/2} g_{[x−1/2,x+1/2]}(2τ)/τ dτ = ∫_0^1 g_{[x−1/2,x+1/2]}(τ)/τ dτ < ∞.

Concerning I₂(x), we observe that f ∈ 𝒞_b(ℝ), and we get I₂(x) ≤ 8 ‖f‖_{L∞(ℝ)} < ∞. This proves that ((−d²/dx²)^{1/2} f)(x) is well-defined for all x ∈ ℝ.

Next, we show that (−d²/dx²)^{1/2} f ∈ 𝒞(ℝ). Let δ ∈ [−1, 1] and observe that

((−d²/dx²)^{1/2} f)(x + δ) = C^{FL}_{1/2} ∫_0^{+∞} (2f(x + δ) − f(x + δ + h) − f(x + δ − h))/h² dh
 = C^{FL}_{1/2} ∫_0^{1/2} (2f(x + δ) − f(x + δ + h) − f(x + δ − h))/h² dh + C^{FL}_{1/2} ∫_{1/2}^{+∞} (2f(x + δ) − f(x + δ + h) − f(x + δ − h))/h² dh
 = I₃(x + δ) + I₄(x + δ).

To work with I₃(x + δ), first use the fact that f ∈ 𝒞¹(ℝ) to write

|f(x + δ + h) − 2f(x + δ) + f(x + δ − h)|/h² ≤ ∫_0^h |f′(x + δ + τ) − f′(x + δ − τ)|/h² dτ.

Next observe that, since δ ∈ [−1, 1], h ∈ [0, 1/2], and τ ∈ [0, h], we have x + δ + τ, x + δ − τ ∈ [x − 3/2, x + 3/2], and then we can use the local Dini continuity of f′ to get

|f(x + δ + h) − 2f(x + δ) + f(x + δ − h)|/h² ≤ ∫_0^h g_{[x−3/2,x+3/2]}(2τ)/h² dτ,

where the right-hand side is independent of δ. Now we show that the right-hand side is integrable on [0, 1/2]. Indeed,

∫_0^{1/2} ∫_0^h g_{[x−3/2,x+3/2]}(2τ)/h² dτ dh = ∫_0^{1/2} ∫_τ^{1/2} g_{[x−3/2,x+3/2]}(2τ)/h² dh dτ ≤ ∫_0^{1/2} g_{[x−3/2,x+3/2]}(2τ)/τ dτ = ∫_0^1 g_{[x−3/2,x+3/2]}(τ)/τ dτ < ∞.

Hence we can use the dominated convergence theorem, and we get lim_{δ→0} I₃(x + δ) = I₃(x). Concerning I₄, we have

|f(x + δ + h) − 2f(x + δ) + f(x + δ − h)|/h² ≤ 4 ‖f‖_{L∞(ℝ)}/h²,

where the right-hand side is independent of δ and integrable on [1/2, +∞). Hence, by the dominated convergence theorem, lim_{δ→0} I₄(x + δ) = I₄(x). This proves that (−d²/dx²)^{1/2} f ∈ 𝒞(ℝ).

Finally, we prove (1.151). To do this, arguing exactly in the same way as we did in item (ii) of Proposition 1.76, we get, for all x ∈ ℝ and δ > 0,

((−d²/dx²)^{1/2} f)(x) = C^{FL}_{1/2} lim_{ε→0} ∫_{ℝ\(−ε,ε)} (f(x) − f(x + h) + f′(x) h 1_{[−δ,δ]}(h))/|h|² dh.

We only need to prove that the integral on the right-hand side is absolutely convergent. We assume, without loss of generality, that δ < 1. We split the integral as in Proposition 1.87, getting

∫_ℝ |f(x) − f(x + h) + f′(x) h 1_{[−δ,δ]}(h)|/|h|² dh
 = ∫_{−δ}^δ |f(x) − f(x + h) + f′(x) h|/|h|² dh + ∫_{ℝ\[−δ,δ]} |f(x) − f(x + h)|/|h|² dh = I₅(x) + I₆(x).

To handle I₅(x), we use the fact that f ∈ 𝒞¹(ℝ) to get

I₅(x) ≤ ∫_0^δ ∫_0^h |f′(x + τ) − f′(x)|/h² dτ dh + ∫_{−δ}^0 ∫_h^0 |f′(x + τ) − f′(x)|/(−h)² dτ dh
 = ∫_0^δ ∫_0^h |f′(x + τ) − f′(x)|/h² dτ dh + ∫_0^δ ∫_0^h |f′(x − τ) − f′(x)|/h² dτ dh
 = ∫_0^δ ∫_τ^δ |f′(x + τ) − f′(x)|/h² dh dτ + ∫_0^δ ∫_τ^δ |f′(x − τ) − f′(x)|/h² dh dτ
 ≤ ∫_0^δ |f′(x + τ) − f′(x)|/τ dτ + ∫_0^δ |f′(x − τ) − f′(x)|/τ dτ.

Since δ < 1 and τ ≤ δ, we know that x ± τ, x ∈ [x − 1, x + 1], and we can use the local Dini continuity of f′ to write

I₅(x) ≤ 2 ∫_0^1 g_{[x−1,x+1]}(τ)/τ dτ < ∞.

Finally, concerning I₆(x), we have

I₆(x) ≤ 4 ‖f‖_{L∞(ℝ)} ∫_δ^{+∞} h^{−2} dh ≤ 4 ‖f‖_{L∞(ℝ)}/δ < ∞.
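A concrete sanity check of the pointwise representations above is possible for the Poisson kernel f(x) = 1/(1 + x²), for which the classical semigroup computation gives ((−d²/dx²)^{1/2} f)(x) = (1 − x²)/(1 + x²)². The sketch below is ours and assumes the standard normalization C^{FL}_{1/2} = 1/π (the book's constant C^{FL}_{α/2} is defined earlier in the chapter; 1/π is the classical one-dimensional value for order 1/2). It evaluates the symmetric second-difference form used in the proof.

```python
import math

def f(x):
    return 1.0 / (1.0 + x * x)

def half_laplacian(x, H=200.0, n=20000):
    # ((-d^2/dx^2)^{1/2} f)(x) = (1/pi) * int_0^inf (2 f(x) - f(x+h) - f(x-h)) / h^2 dh
    def integrand(h):
        if h == 0.0:
            # limit is -f''(x); for f = 1/(1+x^2): f''(x) = (6x^2 - 2)/(1+x^2)^3
            return -(6.0 * x * x - 2.0) / (1.0 + x * x) ** 3
        return (2.0 * f(x) - f(x + h) - f(x - h)) / (h * h)
    w = H / n
    # composite Simpson's rule on [0, H]
    s = sum((1 if i in (0, n) else 4 if i % 2 else 2) * integrand(i * w)
            for i in range(n + 1)) * w / 3.0
    s += 2.0 * f(x) / H  # analytic tail: int_H^inf 2 f(x) / h^2 dh
    return s / math.pi
```

At x = 0 the quadrature returns a value close to 1 and at x = 2 a value close to −0.12, matching (1 − x²)/(1 + x²)²; this is an illustration under the stated normalization assumption, not a statement from the book.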

The space W^{α,2}(ℝ) can be used, instead, to provide a suitable extension of the Fourier-transform definition (1.125) of the fractional Laplacian to less regular functions. To do this, let us first recall the following result, which is given in [3, Theorem 7.62].

Proposition 1.89. Let α > 0. Then f ∈ W^{α,2}(ℝ) if and only if

f ∈ L²(ℝ)  and  (1 + |·|^α) ℱ[f] ∈ L²(ℝ).

Moreover, there exist two constants C^j_α > 0, j = 1, 2, such that

C¹_α ‖(1 + |·|^α) ℱ[f]‖_{L²(ℝ)} ≤ ‖f‖_{W^{α,2}(ℝ)} ≤ C²_α ‖(1 + |·|^α) ℱ[f]‖_{L²(ℝ)}.

Once this is established, the following result is clear.

Theorem 1.90. Let α ∈ (0, 2). Then for every f ∈ W^{α,2}(ℝ), (−d²/dx²)^{α/2} f is well-defined by means of (1.125) and belongs to L²(ℝ). In particular, (−d²/dx²)^{α/2} : W^{α,2}(ℝ) → L²(ℝ) is continuous, i. e., there exists a constant C_α such that for all f ∈ W^{α,2}(ℝ),

‖(−d²/dx²)^{α/2} f‖_{L²(ℝ)} ≤ C_α ‖f‖_{W^{α,2}(ℝ)}.

Also, f ∈ W^{1,2}(ℝ) is sufficient to guarantee that (−d²/dx²)^{1/2} f is well-defined, differently from the case f ∈ W^{1,1}(ℝ), for which we had to require an additional condition. This is due to the characterization of the fractional Sobolev spaces W^{α,2}(ℝ) in terms of the Fourier transforms of their elements, as established in Proposition 1.89.

The next result explains in more detail the connection between the Gagliardo seminorm of a function, its Fourier transform, and its fractional Laplacian. In fact, for α ∈ (0, 1), we can evaluate the L² norm of the fractional Laplacian in terms of the Gagliardo seminorm, passing through the Fourier transform.

Theorem 1.91. Let α ∈ (0, 1). Then for every function f ∈ W^{α,2}(ℝ),

[f]²_{α,2} = (2/C^{FL}_{α/2}) ∫_ℝ |z|^{2α} |ℱ[f](z)|² dz = (2/C^{FL}_{α/2}) ‖(−d²/dx²)^{α/2} f‖²_{L²(ℝ)}.   (1.152)

Once it is shown for f ∈ 𝒮(ℝ), relation (1.152) clearly holds for every f ∈ W^{α,2}(ℝ) by continuity. The proof of this statement for f ∈ 𝒮(ℝ) can be obtained via Exercise 1.18. We refer to [57] for further details.

We can also extend the Marchaud derivative to W^{α,2}(ℝ) in the same way. Indeed, notice that for every f ∈ 𝒮(ℝ) (see item (ii) of Exercise 1.13),

ℱ[M D^α_± f](z) = (±iz)^α ℱ[f](z),  z ∈ ℝ.

Hence we can define the distributional fractional derivative by means of the Fourier transform.


Definition 1.92. We define the distributional (or weak) fractional derivative as the operator 𝒲 Dα± : W α,2 (ℝ) → L2 (ℝ) acting on f ∈ W α,2 (ℝ) by ℱ[

𝒲

Dα± f ] (z) = (±iz)α ℱ [f ](z),

z ∈ ℝ.

(1.153)

The following result is a direct consequence of Proposition 1.89 and Theorems 1.50 and 1.66. Theorem 1.93. Let α > 0. Then the operator 𝒲 Dα± : W α,2 (ℝ) → L2 (ℝ) is continuous, i. e., there exists a constant Cα± such that for all f ∈ W α,2 (ℝ), 󵄩󵄩𝒲 α 󵄩󵄩 󵄩󵄩 D± f 󵄩󵄩 2 ≤ Cα± ‖f ‖W α,2 (ℝ) . 󵄩 󵄩L (ℝ) Furthermore, if f ∈ W ⌊α⌋+1,2 (ℝ), then 𝒲 Dα± f = M Dα± f . By the denseness of 𝒮 (ℝ) in W α,2 (ℝ) we can easily extend Lemma 1.86 to the weak fractional derivative 𝒲 Dα± . Proposition 1.94. Let α ∈ (0, 2) and f ∈ W α,2 (ℝ). Then for a. a. x ∈ ℝ, α

𝒲

Dα+ f (x)

+

𝒲

Dα− f (x)

α(1 − α) d2 2 = ((− ) f ) (x). FL dx 2 Γ(2 − α)Cα/2

Finally, thanks to Plancherel formula (see Theorem A.12), we have the following integration-by-parts result. Proposition 1.95. Let α > 0 and f , g ∈ W α,2 (ℝ). Then ∫ (𝒲 Dα± f ) (x)g(x)dx = ∫ (𝒲 Dα± g) (x)f (x)dx. ℝ



Furthermore, if α ∈ (0, 2), then α

∫ ((− ℝ

α

d2 d2 ) f ) (x)g(x)dx = ∫ ((− 2 ) g) (x)f (x)dx. 2 dx dx ℝ

1.12 Grunwald–Letnikov fractional derivatives It is well known in the integer-order setting that for f ∈ 𝒞 n (ℝ), its subsequent derivatives ∇n f (x)

can be obtained as f (n) (x) = limh→0 ±,hhn . Thus it is natural to ask whether such a α relation holds in the fractional-order setting, i. e., if there exists a generalization ∇±,h f ∇α f (x)

of the finite differences for α > 0 such that limh→0 ±,hhα represents, in some sense, a fractional derivative. To give an answer, we first need to define the finite differences of noninteger order.

86 � 1 Fractional integrals and derivatives Definition 1.96. The finite difference of order α > 0 of f is defined as +∞ α α ∇±,h f (x) = ∑ (−1)k ( )f (x ∓ kh), k k=0

(1.154)

where for k ≥ 0, (αk ) is the generalized binomial coefficient (see (B.11)). 󵄨 󵄨 C Due to the upper bound 󵄨󵄨󵄨(αk )󵄨󵄨󵄨 ≤ k α+1 (see (B.14)), if f ∈ L∞ (ℝ), the series in (1.154) is absolutely convergent. Moreover, we have the following auxiliary relation. Proposition 1.97. For all α, β > 0, h > 0, and f ∈ L∞ , β

α+β

α ∇±,h (∇±,h f ) = ∇±,h f .

Proof. Note that +∞ α β β α ∇±,h ∇±,h f (x) = ∑ (−1)k ( )∇±,h f (x ∓ kh) k k=0 +∞ +∞ +∞ α +∞ β α β = ∑ (−1)k ( ) ∑ (−1)j ( )f (x ∓ kh ∓ jh) = ∑ ∑ (−1)k+j ( )( )f (x ∓ (k + j)h). k j k j j=0 k=0 k=0 j=0

Now let us set j + k = ℓ in the inner summation, so that +∞ +∞ α β β α ∇±,h ∇±,h f (x) = ∑ ∑ (−1)ℓ ( )( )f (x ∓ ℓh) k ℓ−k k=0 ℓ=k +∞ ℓ +∞ α β α+β = ∑ (−1)ℓ f (x ∓ ℓh) ∑ ( )( ) = ∑ (−1)ℓ ( )f (x ∓ ℓh), k ℓ−k ℓ ℓ=0 ℓ=0 k=0

where in the last step, we used Chu–Vandermonde identity (see (B.12)). We can also prove the following simple identity. α Proposition 1.98. ∇+,h 1 = 0.

Proof. Recall that +∞ α α ∇+,h 1 = ∑ (−1)k ( ). k k=0

However, let us note that for each x ∈ [−1, 1] (see, for instance, (B.16)), α ∑ (−1)k ( )x k = (1 − x)α . k k=0 +∞

1.12 Grunwald–Letnikov fractional derivatives



87

Thus, substituting x = 1, we get +∞ α α ∇+,h 1 = ∑ (−1)k ( ) = (1 − 1)α = 0. k k=0

Definition 1.99. We say that a function f ∈ L∞ (ℝ) has a Grunwald–Letnikov fractional derivative in x of order α > 0 if the following limit exists: (GL Dα± f ) (x) = lim h↓0

α ∇±,h f (x)



.

This operator was introduced, independently, in [96, 138]. We can prove that GL Dα± f is in fact a fractional derivative. Indeed, we have the following result, which is [231, Theorem 7.47]. Theorem 1.100. Let f ∈ Lr (ℝ) for some 1 ≤ r < ∞ or f ∈ 𝒞0 (ℝ). Then for all 1 ≤ p ≤ ∞ and α > 0, the following properties are equivalent (i) limε↓0 Dα+,ε f = M Dα+ f in Lp (ℝ); (ii) limh↓0

α ∇+,h f hα

= GL Dα+ f in Lp (ℝ).

Moreover, if one of these two limits exists, then M Dα+ f = GL Dα+ f a. e. Remark 1.101. As a consequence, if α ∈ (0, 1) ∪ (1, 2) and f ∈ W α,1 (ℝ), then, by Proposition 1.86, lim h↓0

α

FL Γ(2 − α)Cα/2

α(1 −

α (∇+,h f α)hα

+

α ∇−,h f)

d2 2 = (− 2 ) f , dx

where the limit holds in L1 (ℝ). Let us emphasize that the latter formulation of fractional derivative is extremely useful in the discussion of numerical approaches. Indeed, let α > 0 and assume that f admits support that is bounded from below, i. e., supp(f ) ⊂ [x0 , +∞) for some x0 ∈ ℝ. Then, for α fixed h > 0 and any x ∈ ℝ, the finite difference ∇+,h f (x) in (1.154) is actually a finite sum. ∇α f

Hence, if we are also under the assumptions of Theorem 1.100, the quantity h+,hα , that can be evaluated for any x ∈ ℝ with a finite number of steps, is a good approximation ∇α f

of M Dα+ f for h > 0 small enough. The same holds for h−,hα and M Dα+ f if the support of f is bounded from above, i. e., if supp(f ) ⊂ (−∞, x0 ] for some x0 ∈ ℝ. Finally, if the support of f is bounded from below and from above, and α ∈ (0, 1) ∪ (1, 2), then one can use Remark 1.101 to get the approximation, for a small value of h > 0, FL Γ(2 − α)Cα/2

α(1 −

α (∇+,h f (x) α)hα

α

+

α ∇−,h f (x))

d2 2 ≈ (− 2 ) f (x). dx

A similar argument holds for fractional integrals. This will be discussed in Section 5.4.

88 � 1 Fractional integrals and derivatives

1.13 Fractional derivatives of complex order In this section, we consider functions f : ℂ → ℂ that are holomorphic in an open set that contains the real line, holomorphic at infinity, and vanishing at infinity. The restriction of such functions f on the real line belongs to 𝒮 (ℝ). In Section 1.6, we introduced the fractional integral of complex order of such a function f . Now let us extend the definition to fractional derivatives. Definition 1.102. We define the Riemann–Liouville left- and right-sided fractional derivatives of order α ∈ ℂ with ℜ(α) ≥ 0 as (Dα+ f ) (x)

x

1 dn = ∫ (x − t)n−α−1 f (t)dt Γ(n − α) dx n −∞

and (Dα− f ) (x)

1 dn = ∫ (t − x)n−α−1 f (t)dt, Γ(n − α) dx n ∞

x

where n = ⌊ℜ(α)⌋ + 1. Here let us focus on the right-sided case, as the arguments for the left-sided are similar. First of all, let us emphasize that, unlike the case with fractional integral, here ℜ(α) = 0 is admissible. Moreover, for α ∈ ℂ, let us denote I+α f = {

I+α f ,

D−α + f,

ℜ(α) > 0, ℜ(α) ≤ 0,

so that we have a unified notation for both fractional derivatives and integrals. Such a unified notation is un fact justified by an interesting result proved in [79] (see also [80]), which is the main result of this section. Theorem 1.103. Consider a function f : ℂ → ℂ that is holomorphic in an open set that contains the real line, holomorphic at infinity, and vanishing at infinity. Then for all x ∈ ℝ, the function α ∈ ℂ 󳨃→ (Dα+ f )(x) ∈ ℂ is entire. We omit the proof, as it requires some simple but not trivial arguments of complex analysis. This observation fully justifies the adoption of the Riemann–Liouville fractional derivative as the natural extension of the derivative to noninteger orders.

1.14 Fractional integrals and derivatives with respect to a function Let −∞ ≤ a < b ≤ +∞, and let g : (a, b) → [0, +∞) be a strictly increasing and continuously differentiable function such that g ′ (x) ≠ 0 for all x ∈ (a, b).

1.14 Fractional integrals and derivatives with respect to a function

� 89

Definition 1.104. Given a function f : (a, b) → ℝ and α > 0, we define the Riemann– Liouville left- and right-sided fractional integrals of order α with respect to the function g as x

α (Ia+,g f ) (x) =

1 α−1 ∫ g ′ (t)(g(x) − g(t)) f (t)dt, Γ(α)

α (Ib−,g f ) (x) =

1 α−1 ∫ g ′ (t)(g(t) − g(x)) f (t)dt, Γ(α)

a

x ∈ (a, b),

and b

x

x ∈ (a, b),

provided that the integrals converge. These operators were introduced in [193] together with the respective Riemann– Liouville-type fractional derivatives. Clearly, if g(x) = x, then we have the standard Riemann–Liouville integrals. If this is not the case, such integrals are often named the integrals with respect to other functions. In general, we can denote by Qg the composition operator, i. e., for a function f : (g(a), g(b)) → ℝ, (Qg f )(x) = f (g(x)),

x ∈ (a, b),

(1.155)

to get, observing that g −1 is well-defined and differentiable, α α Ia+,g = Qg Ig(a)+ Qg −1 ,

α α Ib−,g = Qg Ig(b)− Qg −1 ,

where we used the change of variables s = g(t). It is clear that Qg −1 = Qg−1 , and thus we can rewrite the previous identities as conjugation relations α α Ia+,g = Qg Ig(a)+ Qg−1 ,

α α Ib−,g = Qg Ig(b)− Qg−1 .

(1.156)

Thus, in particular, we can state the following generalizations of Lemmas 1.4 and 1.5. Proposition 1.105. Let α > 0, −∞ < a < b < +∞, and g ∈ 𝒞 1 [a, b] with g ′ (x) > 0 for all α x ∈ [a, b]. Then for all p ≥ 1, Ia+(b−),g : Lp (a, b) → Lp (a, b) is a bounded linear operator. For the proof, we refer to Exercise 1.19. In the previous proposition, we had the additional condition g ∈ 𝒞 1 [a, b]. Actually, we can drop such condition, but we can lose some integrability in the meanwhile.

90 � 1 Fractional integrals and derivatives Proposition 1.106. Let α > 0, −∞ < a < b < +∞, and g ∈ 𝒞 1 (a, b) ∩ W 1,q1 (a, b) for some q1 ≥ 1. Additionally, suppose that g ′ (x) > 0 for all x ∈ (a, b) and |g ′ |1−q2 ∈ L1 (a, b) for some q2 ≥ 1. Also, let q1 + p1 = 1, j = 1, 2. j

j

Then, for all p3 ≥ 1 and p = p1 p2 p3 , we have that

α Ia+(b−),g : Lp (a, b) → Lp3 (a, b)

is a bounded linear operator. For the proof, we refer to Exercise 1.20. Finally, let us note how the Hardy–Littlewood theorem is transformed in the current setting. Proposition 1.107. Let α ∈ (0, 1) and g ∈ 𝒞 1 (ℝ) ∩ W 1,q1 (ℝ) for some q1 ≥ 1. Additionally, suppose that g ′ (x) > 0 for all x ∈ ℝ, limx→±∞ g(x) = ±∞, and |g ′ |1−q2 ∈ L1 (ℝ) for some q2 ≥ 1. Also, let q1 + p1 = 1, j = 1, 2. j

j

Then for all 1 ≤ p3
0 for all x ∈ (a, b). Consider f ∈ AC[a, b] and note that Qg−1 f ∈ AC[g(a), g(b)]. Indeed, by the chain rule, D1 [f (g −1 )](x) =

f ′ (g −1 (x)) , g ′ (g −1 (x))

and

g(b)

g(b)

b

g(a)

g(a)

a

|f ′ (g −1 (x))| 󵄨 󵄨 󵄨 󵄨 dx = ∫ 󵄨󵄨󵄨f ′ (y)󵄨󵄨󵄨 dy < ∞, ∫ 󵄨󵄨󵄨󵄨D1 [f (g −1 )] (x)󵄨󵄨󵄨󵄨 dx = ∫ ′ −1 g (g (x))

and thus Qg−1 f ∈ AC[g(a), g(b)], and Dαg(a)+ Qg−1 f is well-defined. In particular, (Dαg(a)+ Qg−1 f ) (x) =

x

1 d ∫ (x − s)f (g −1 (s)) ds. Γ(1 − α) dx g(a)

1.14 Fractional integrals and derivatives with respect to a function



91

Hence we get the equalities (Dαa+,g f ) (x)

y

1 [d ] = ∫ (y − s)α−1 f (g −1 (s)) ds] [ Γ(1 − α) dy [ g(a) ]y=g(x) g(x)

1 1 d α−1 = ∫ (g(x) − s) f (g −1 (s)) ds. Γ(1 − α) g ′ (x) dx g(a)

Using the change of variables t = g −1 (s) in the latter integral, we obtain that (Dαa+,g f ) (x) =

x

1 d 1 d 1−α −α (I f ) (x), ∫(g(x) − g(t)) g ′ (t)f (t)dt = ′ Γ(1 − α)g ′ (x) dx g (x) dx a+,g a

(1.157)

which is a more explicit formula. Definition 1.108. The operator Dαa+,g defined in (1.157) is called the Riemann–Liouville derivative with respect to the function g. Let us anticipate that such derivatives will play a major role in the generalization of the chain rule (which does not hold with the classical statement) to the fractional case (as shown in [193]). We can also define the corresponding Dzhrbashyan–Caputo-type derivative. Definition 1.109. For all f ∈ AC[a, b], α ∈ (0, 1), and g ∈ 𝒞 1 [a, b] with g ′ > 0 on [a, b], the left- and right-sided Dzhrbashyan–Caputo derivatives of f with respect to g are defined as x

(C Dαa+,g f ) (x)

f′ 1 −α 1−α = ( ′ )) (x) ∫(g(x) − g(t)) f ′ (t)dt = (Ia+,g Γ(1 − α) g

(C Dαb−,g f ) (x)

f′ 1 −α 1−α = ( ′ )) (x). ∫(g(t) − g(x)) f ′ (t)dt = (Ib−,g Γ(1 − α) g

a

and b

x

These operators were introduced in [4]. The simple change of variable s = g(t) gives us the following conjugation relation: C α Da+(b−),g

= Qg C Dαg(a)+(g(b)−) Qg−1 ,

and hence all related properties of such derivatives can be deduced directly from those of the Dzhrbashyan–Caputo derivatives. The main motivation of the introduction of

92 � 1 Fractional integrals and derivatives such derivatives (unlike the Riemann–Liouville one) is the fact that, as we will see in the following sections, we can define Cauchy problems with such operators, considering only the initial data, and thus they can be used to generalize some differential relations to the fractional context. Fractional Cauchy problems for g(x) = x will be discussed in Chapter 2. Among fractional Cauchy problems, we will be able to solve explicitly only the linear ones. However, some classical nonlinear equations can be extendend to the fractional context by means of linear fractional differential equations, where the derivative is taken with respect to a suitable function g. This is done, for example, in [20, 21, 78] for the Gompertz equation. Among all the choices of the function g, there is a quite interesting one. Taking g(t) = log(t) (with a = 0), we obtain the Hadamard fractional operators. Definition 1.110. The left-sided Hadamard fractional integral is defined as (H I+α f ) (x) =

x

1 1 x ∫ logα−1 ( ) f (t)dt. Γ(α) t t 0

For α ∈ (0, 1), the left-sided Hadamard fractional derivative is defined as (H Dα+ f ) (x) =

x

1 d 1 x ∫ log−α ( ) f (t)dt. Γ(1 − α) dx t t 0

These operators were first introduced in [97]. Note that the fractional integral (and derivative) on the real line exhibits, as an eigenfunction, the exponential function (see Exercises 1.7 and 1.10). Concerning the Hadamard integral and derivatives, we have the following result. Proposition 1.111. Let α, β > 0 and fβ (x) = x β . Then, for all x > 0, (H I+α fβ ) (x) = β−α x β ,

(H Dα+ fβ ) (x) = βα x β .

This is a direct consequence of the conjugation relations (see Exercise 1.22).

1.15 Further properties of fractional derivatives In this section, we investigate some further properties of the fractional derivatives, in particular, the generalization of the Leibniz rule, the chain rule, and the Taylor expansion to the fractional setting.

1.15 Further properties of fractional derivatives



93

1.15.1 The generalized Leibniz rule Let x ∈ ℝ, and let U ⊂ ℝ be an open interval containing x. Let f , g ∈ C 1 (U). The following equality is very well known: D1 [fg](x) = f ′ (x)g(x) + f (x)g ′ (x). Such a formula is called the Leibniz rule. We could ask whether such a property still holds for fractional derivatives. However, thanks to some basilar results in differential geometry (see [15, Proposition 8.15]), we know that the answer to the previous question is negative. An elementary proof of the absence of Leibniz rule for fractional derivatives was also given in [246]. Let us recall this result here. Proposition 1.112. Let −∞ ≤ a < b ≤ +∞, and let D be a linear operator acting on 𝒞 ∞ (a, b) such that for all x ∈ (a, b) and f , g ∈ 𝒞 ∞ (a, b), (D(fg))(x) = (Df )(x)g(x) + f (x)(Dg)(x).

(1.158)

Then there exists a function h ∈ 𝒞 ∞ (a, b) such that for all f ∈ 𝒞 ∞ (a, b) and x ∈ (a, b), (Df )(x) = h(x)f ′ (x). Proof. Let f ∈ 𝒞 ∞ (a, b) and x, x0 ∈ (a, b). For any t ∈ [0, 1], define a function F(t) = f (x0 + (x − x0 )t). Clearly, F ∈ 𝒞 ∞ (0, 1), and applying the fundamental theorem of calculus and integration by parts, we can provide the following calculations: 1

1

F(1) = F(0) + ∫ F ′ (t)dt = F(0) + F ′ (0) + ∫(1 − t)F ′′ (t)dt. 0

(1.159)

0

Substituting the value of F into (1.159), we get that 2

1

f (x) = f (x0 ) + (x − x0 )f (x0 ) + (x − x0 ) ∫(1 − t)f ′′ (x0 + (x − x0 )t)dt. ′

(1.160)

0

Let 1

f2 (x) = ∫(1 − t)f ′′ (x0 + (x − x0 )t)dt. 0

Note that f2 ∈ 𝒞 ∞ (a, b). Applying the operator D to both sides of (1.160), we obtain (Df )(x) = f (x0 )(D1)(x) + f ′ (x0 )(D(⋅ − x0 ))(x) + (D ((⋅ − x0 )2 f2 )) (x).

(1.161)

94 � 1 Fractional integrals and derivatives Since D satisfies (1.158), it follows that for all x ∈ (a, b), (D1)(x) = (D(1 ⋅ 1))(x) = (D1)(x) + (D1)(x) = 2(D1)(x), and therefore (D1)(x) = 0 for all x ∈ (a, b). Setting h(⋅) = D(⋅ − x0 ), we get from (1.158) and (1.161) that (Df )(x) = h(x)f ′ (x0 ) + (x − x0 )2 (Df2 )(x) + 2h(x)(x − x0 )f2 (x). Furthermore, since x ∈ (a, b) is arbitrary, we can put x = x0 to get (Df )(x0 ) = h(x0 )f ′ (x0 ). We conclude the proof by noting that also x0 ∈ (a, b) is arbitrary. Note that a generalized Leibniz rule holds for integer higher-order derivatives and has the form n n Dn [fg](x) = ∑ ( )f (k) (x)g (n−k) (x) k k=0

for all x ∈ (a, b) and f , g ∈ 𝒞 n (a, b). However, according to Proposition 1.112, we do not expect Dαa+ to satisfy the Leibniz rule for any α ≠ 1. To generalize the Leibniz rule to the fractional-order case, we need to restrict ourselves to analytic functions, i. e., functions that can be expanded into Taylor series. Recall the following technical results. Lemma 1.113. Let α > 0, a ∈ ℝ, h > 0, and U = (a − h, a + h). Suppose that f : U → ℝ is analytic. Then for all x ∈ (a, a + h/2), −α (x − a)k+α (k) ) f (x), k Γ(k + α + 1)

(1.162)

+∞ α (x − a)k−α (k) (Dαa+ f ) (x) = ∑ ( ) f (x). k Γ(k − α + 1) k=0

(1.163)

+∞

α (Ia+ f ) (x) = ∑ ( k=0

and

For the proof, we refer to Exercises 1.26 and 1.28. Remark 1.114. It may seem unexpected that despite f is analytic in the interval (a − h, a + h), the result holds only for x ∈ (a, a + h/2). This is due to the fact that both (1.162) and (1.163) are actually consequences of the fact that f can be represented via a Taylor series centered at x, it converges at 2x − a, and (x − a)±α is well-defined, which is possible if and only if a < x < a + h2 . A similar formula for the left-sided fractional integral and derivative holds in the interval x ∈ (a − h/2, a). Indeed, combining (1.3), (1.4), and (1.5), we get that

1.15 Further properties of fractional derivatives



95

α α ̃ (Ia− f )(x) = (Ia+ f )(2a − x), where ̃f (t) = f (2a − t) for all t ∈ (a − h, a + h) and a < 2a − x < a + h/2. Furthermore, note that ̃f (k) (2a − x) = (−1)k f (k) (x). Hence for all x ∈ (a − h/2, a), (1.162) takes the form +∞ −α (a − x)k+α (k) α (Ia− f ) (x) = ∑ (−1)k ( ) f (x). k Γ(k + α + 1) k=0

(1.164)

Likewise, for all x ∈ (a − h/2, a), +∞ α (x − a)k−α (k) (Dαa− f ) (x) = ∑ (−1)k ( ) f (x). k Γ(k − α + 1) k=0

(1.165)

It is also expectable that if we work with the Taylor series centered in a, we should obtain a power series representation that holds for all x ∈ (a, a + h). This is investigated in Exercises 1.25 and 1.27. Furthermore, if we assume that f : U → ℝ is analytic and (1.162) holds for some x > a + ε > a + h2 , where the series is absolutely convergent, then f can be extended analytically at least to Uε = (a − h, x + ε). Indeed, we can rewrite (1.162) as +∞

α (Ia+ f ) (x) = (x − a)α ∑ ( k=0

−α (2x − a − x)k (k) ) f (x). k Γ(k + α + 1)

(1.166)

Consider the power series centered at x. Let it be of the form +∞

p(t) = ∑ ( k=0

−α (t − x)k ) f (k) (x). k Γ(k + α + 1)

We know that it converges for t = 2x − a, and then the radius of convergence ρ ≥ 2x − a − x > ε. By the Cauchy–Hadamard criterion, the latter claim, in particular, implies that 󵄨 f (k) (x) 󵄨󵄨󵄨󵄨 1 1 k 󵄨󵄨 −α lim sup √󵄨󵄨󵄨( ) 󵄨󵄨 = < . 󵄨 󵄨 k Γ(k + α + 1) 󵄨󵄨 ρ ε k→+∞ This in turn implies that k

lim sup √ k→+∞

|f (k) (x)| f (k) (x) k Γ(k + α + 1) 1 1 −α k = lim sup √( ) = < , √ k! k Γ(k + α + 1) ρ ε (−α )k! k→+∞ k

where we used the fact that (by (B.15)) 󵄨󵄨 󵄨 󵄨 Γ(k + α + 1) 󵄨󵄨󵄨 k k 󵄨 󵄨󵄨 󵄨 √ √ 󵄨󵄨 (−α)k! 󵄨󵄨󵄨 = Γ(α)(k + α) → 1 󵄨 󵄨 k

96 � 1 Fractional integrals and derivatives as k → ∞. Hence the power series ∑+∞ k=0

f (k) (x) (t − x)k has the radius k! f (k) (x) k = ∑+∞ k=0 k! (t − x) for t ∈

ρ > ε, which implies that we can set f (t) where x + ε > a. The same holds for (1.163).

of convergence (a + h/2, x + ε),

Now we are ready to prove the following result, which was first shown in [102]. Theorem 1.115. Let α > 0, a ∈ ℝ, h > 0, and U = (a − h, a + h). Let f , g : U → ℝ be analytic functions. Then for all x ∈ (a, a + h/2), +∞ α α k−α (Dαa+ (fg)) (x) = ∑ ( )f (k) (x) (Dα−k g) (x) + g) (x). ∑ ( )f (k) (x) (Ia+ a+ k k k=0 k=⌊α⌋+1 ⌊α⌋

(1.167)

Proof. Note that the function fg : U → ℝ is analytic. Therefore it follows from (1.163) that for all x ∈ (a, a + h/2), +∞ α (x − a)k−α k (Dαa+ (fg)) (x) = ∑ ( ) D [fg](x). k Γ(k + 1 − α) k=0

According to the classical Leibniz rule, we get that +∞ α (x − a)k−α k k (j) (Dαa+ (fg)) (x) = ∑ ( ) ∑ ( )f (x)g (k−j) (x) k Γ(k + 1 − α) j j=0 k=0 +∞ k α k (x − a)k−α (j) = ∑ ∑ ( )( ) f (x)g (k−j) (x). k j Γ(k + 1 − α) k=0 j=0

Since x ∈ (a, a + h/2) (and then fg is analytic in (x − h/2, x + h/2)), we know that the series is absolutely convergent, and thus we can exchange the order of the summation getting that +∞ +∞ α k (x − a)k−α (j) (Dαa+ (fg)) (x) = ∑ ∑ ( )( ) f (x)g (k−j) (x) k j Γ(k + 1 − α) j=0 k=j +∞ +∞ α k + j (x − a)k+j−α (j) = ∑ ∑( )( ) f (x)g (k) (x). k+j j Γ(k + j + 1 − α) j=0 k=0

Note that (

Γ(k + j + 1) α k+j Γ(α + 1) )( )= k+j j Γ(α − k − j + 1)Γ(k + j + 1) Γ(j + 1)Γ(k + 1) Γ(α + 1) = Γ(α − k − j + 1)Γ(j + 1)Γ(k + 1) Γ(α − j + 1) Γ(α + 1) = Γ(α − j + 1)Γ(j + 1) Γ(k + 1)Γ(α − k − j + 1) α α−j = ( )( ), j k

1.15 Further properties of fractional derivatives



97

whence +∞ +∞ α α − j (x − a)k+j−α (j) (Dαa+ (fg)) (x) = ∑ ∑ ( )( ) f (x)g (k) (x) j k Γ(k + j + 1 − α) j=0 k=0 +∞ α α − j (x − a)k+j−α (j) = ∑ ∑ ( )( ) f (x)g (k) (x) j k Γ(k + j + 1 − α) j=0 k=0 ⌊α⌋

α α − j (x − a)k+j−α (j) ) f (x)g (k) (x). ∑ ( )( j k Γ(k + j + 1 − α) j=⌊α⌋+1 k=0 +∞

+∞

+ ∑

(1.168)

Let us consider the first sum. It can be transformed as follows: α α − j (x − a)k+j−α (j) ) f (x)g (k) (x) ∑ ∑ ( )( j k Γ(k + j + 1 − α) j=0 k=0

⌊α⌋ +∞

+∞ α α − j (x − a)k−(α−j) (k) α = ∑ ( )f (j) (x) ( ∑ ( ) g (x)) = ∑ ( )f (j) (x) (Dα−j g) (x), j k Γ(k − (α − j) + 1) j j=0 j=0 k=0 ⌊α⌋

⌊α⌋

where we again used (1.163). Concerning the second sum in (1.168), we can apply (1.162) and transform it as follows: α α − j (x − a)k+j−α (j) ) f (x)g (k) (x) ∑ ( )( j k Γ(k + j + 1 − α) j=⌊α⌋+1 k=0 +∞

+∞

∑ =

+∞ α −(j − α) (x − a)k+j−α (k) ) g (x)) ∑ ( )f (j) (x) ( ∑ ( j k Γ(k + j − α + 1) j=⌊α⌋+1 k=0

=

α j−α ∑ ( )f (j) (x) (Ia+ g) (x). j j=⌊α⌋+1

+∞

+∞

This concludes the proof. It seems that in Theorem 1.115 the roles of f and g are not symmetric. However, a symmetrized version of such a formula that holds also for derivatives with respect to other functions was proved in [194] (here we recall the statement given in [193] but restricted to analytic functions of a real variable). Theorem 1.116. Let −∞ ≤ a < b ≤ +∞, and let h : (a, b) 󳨃→ ℝ be a function with h(a) = 0 and h′ (x) > 0 for all x ∈ (a, b). Let f , g : (a, b) → ℝ and assume that the functions x ∈ (0, h(b)) 󳨃→ f (h−1 (x)), g(h−1 (x)) ∈ ℝ are analytic. Then for all α ∈ ℂ\{0, −1, −2, . . . }, γ ∈ ℂ\ℕ, and x ∈ (a, h−1 ( h(b) )), 2 +∞

(Dαa+,h fg) (x) = ∑ ( n=−∞

α−γ−n

where Da+,h

γ+n−α

= Ia+,h

α α−γ−n γ+n )D f (x)Da+,h g(x), γ + n a+,h γ+n

if ℜ(α − γ − n) < 0 and Da+,h = Ia+,h if ℜ(γ + n) < 0. −γ−n

98 � 1 Fractional integrals and derivatives We omit the proof as it relies on some tools from complex analysis. Furthermore, a more general statement for functions f , g, h of complex variable is provided in [193]. If we fix a ∈ ℝ and consider analytic functions f , g : (a, a + δ) → ℝ, then for all x ∈ (a, a + δ/2), +∞ α α−γ−n γ+n (Dαa+ fg) (x) = ∑ ( )Da+ f (x)Da+ g(x). n=−∞ γ + n

Such a formula is a direct consequence of Theorem 1.116 if we put h(x) = x − a. 1.15.2 The generalized chain rule Concerning the fractional derivatives of composite functions, we would like to obtain a chain rule that is similar to the one for the classical derivative, i. e., D1 [f (g(⋅))] (x) = f ′ (g(x))g ′ (x).

(1.169)

However, such a simple form cannot be achieved for fractional derivatives. Indeed, we have the following result. Proposition 1.117. Let −∞ ≤ a < b ≤ +∞, and let D be a linear operator acting on 𝒞 ∞ (a, b) and such that for all x ∈ (a, b) and f ∈ 𝒞 ∞ (a, b), it satisfies the relation (D (f (g(⋅)))) (x) = f ′ (g(x))Dg(x).

(1.170)

Then there exists a function h ∈ 𝒞 ∞ (a, b) such that for all f ∈ 𝒞 ∞ (a, b) and x ∈ (a, b), (Df )(x) = h(x)f ′ (x). Proof. Choose g(x) = x, put h = Dg, and note that (1.170) takes the form (Df )(x) = (D (f (g(⋅)))) (x) = f ′ (g(x))Dg(x) = h(x)f ′ (x). Thus, generally speaking, a fractional derivative cannot satisfy (1.170). We refer to [247] for the refutation of other simple chain rules. There is actually a chain rule for fractional derivatives, but, clearly, its form is not so simple as we would like to have. First, let us note that the fractional derivatives of composite functions naturally involve fractional derivatives with respect to other functions. Indeed, let g : [a, b] → ℝ be differentiable with g ′ (x) > 0 for all x ∈ (a, b). Also, let f : [g(a), g(b)] → ℝ be such that f (g(⋅)) has a fractional derivative. Then (Dαa+ f (g(⋅))) = Dαa+ Qg f = Dαa+ Qg−1−1 f

= Qg−1−1 Qg −1 Dαg −1 (g(a))+ Qg−1−1 f = Qg Dαg(a)+,g −1 f .

(1.171)

1.15 Further properties of fractional derivatives



99

The following generalized chain rule was proved in [193]. Theorem 1.118. Let −∞ < a < b < +∞ m and let h, g : [a, b] → ℝ be functions such that h(a) = g(a) = 0 and h′ (x), g ′ (x) > 0 for all x ∈ (a, b). Let b′ = min{h(b), g(b)}, and let f : (a, b) → ℝ. For any fixed y ∈ (a, b), define the function u(x; y) =

g ′ (x) h(x) − h(y) α+1 ( ) , h′ (x) g(x) − g(y)

x ∈ (a, b).

Assume that the functions x ∈ (0, b′ ) 󳨃→ f (g (−1) (x)), f (h(−1) (x)), u(h−1 (x); y) ∈ ℝ are analytic for all y ∈ (a, b). Then for all x ∈ (a, h−1 (b′ /2)), α ∈ ℂ\{0, −1, −2, . . . }, and γ ∈ ℂ\ℤ, we have the following equality: +∞

Dαa+,g f (x) = ∑ ( n=−∞

α−γ−n

where Da+,h

γ+n−α

= Ia+,h

′ α γ+n α−γ−n g (⋅)(h(⋅) − h(x)) ) (Da+,h f ) (x) (Da+,h ( ′ )) (x), γ+n h (⋅)(g(⋅) − g(x)) γ+n

(1.172)

if ℜ(α − γ − n) < 0 and Da+,h = Ia+,h if ℜ(γ + n) < 0. −γ−n

Despite the nontrivial form, (1.172) is actually a chain rule for the fractional derivative. Indeed, let f : (a, a + δ) → ℝ be analytic, and let h(x) = x for all x ∈ ℝ. Suppose further that g : (a, a+δ) → ℝ has strictly positive derivative g ′ (x) > 0 for all x ∈ (a, a+δ). On the one hand, Equation (1.171) tells us that (Dαa+ f (g(⋅))) (x) = (Qg Dαg(a)+,g −1 f ) (x) = (Dαg(a)+,g −1 f ) (g(x)). On the other hand, we know from (1.172) that for x ∈ (a, a + δ2 ), (Dαg(a)+,g −1 f ) (x) +∞

= ∑ ( n=−∞

α 1 ⋅−x γ+n α−γ−n ) (Da+ f ) (x) (Da+ ( ′ −1 ( ))) (x). γ+n g (g (⋅)) g −1 (⋅) − g −1 (x)

Combining these formulas, we conclude that (Dαa+ f (g(⋅))) (x) +∞

= ∑ ( n=−∞

⋅ − g(x) α 1 γ+n α−γ−n ) (Da+ f ) (g(x)) (Da+ ( ′ −1 ( ))) (g(x)). γ+n g (g (⋅)) g −1 (⋅) − x

Clearly, such a chain rule is rather confusing and requires the evaluation of fractional derivatives of more complicated functions. A chain rule that only involves integer-order derivatives of f and g has been proved in [209, Section 2.7.3].

100 � 1 Fractional integrals and derivatives Theorem 1.119. Let a ∈ ℝ and h > 0. Let g : (a − h, a + h) → ℝ and f : (g(a − h), g(a + h)) → ℝ. Suppose further that both f and g are analytic functions on their respective domains, and let α > 0. Then for all x ∈ (a, a + h/2), (Dαa+ f (g(⋅))) (x) =

(x − a)−α f (g(x)) Γ(1 − α)

+∞ α (x − a)k−α +∞ (m) +∑( ) ∑ f (g(x)) k k!Γ(k − α + 1) m=1 k=1

k





a(k) ∈Ak,m r=1

ar

1 g (r) (x) ( ) , ar ! r!

where we denote a(k) = (a1 , . . . , ak ) ∈ ℕ0 k and k

k

r=1

r=1

Ak,m = {(a1 , . . . , ak ) ∈ ℕk0 : ∑ rar = k, ∑ ar = m} .

1.15.3 The Taylor–Riemann formula Let us recall that for any analytic function f : (a − δ, a + δ) → ℝ, the following Taylor formula holds: f (n) (a) (x − a)n n! n=0 +∞

f (x) = ∑

for all x ∈ (a − δ, a + δ). In [219] a generalization of the Taylor formula, now called the Taylor-Riemann formula and including fractional derivatives, was formally considered (see also a comment in [46]). The convergence of such a series was proved in [195] in a much more general form. Here we only consider a particular case of such a formula. Theorem 1.120 (Taylor–Riemann formula). Let a ∈ ℝ and δ > 0, and let f : (a−δ, a+δ) → ℝ be an analytic function. Then for all x ∈ (a, a + δ) and γ ∈ ℂ, we have the following relations: +∞

f (x) = ∑

n+γ

(Da+ f ) ( x+a ) (x − a)n+γ 2

n=−∞

2n+γ Γ(n + γ + 1)

+∞

(D(2a−x)+ f ) (a)(x − a)n+γ

,

(1.173)

,

(1.174)

and f (x) = ∑

n+γ

Γ(n + γ + 1)

n=−∞

where

1 ∞

γ+n

= 0, and Da+ = Ia+

−γ−n

γ+n

and D(2a−x)+ = I(2a−x)+ if ℜ(γ + n) < 0. −γ−n

We omit the proof as it requires some basics in complex analysis. However, if we set γ = 0 and 1/Γ(n + 1) = 0 for nonpositive integers n, then (1.173) and (1.174) become the

1.16 The Hilbert transform



101

classical Taylor formula. Let us also underline that both the series are not well-defined for x = a unless γ = 0.

1.16 The Hilbert transform In this section we introduce an operator that is closely related to fractional integrals and derivatives. Definition 1.121. Let p ≥ 1. For functions φ ∈ Lp (ℝ), we define (Hφ)(x) =

1 lim ∫ π ε↓0

|t−x|>ε

φ(t) φ(t) 1 dt = (PV ∫ dt) , t−x π t−x

(1.175)



where PV stands for Cauchy principal value and is a shorthand for the limit in the definition of H. For further details on principal value integrals, we refer to [113, Section 8.3]. The operator H is called the Hilbert transform. First, let us emphasize that the Hilbert transform H : Lp (ℝ) → Lp (ℝ) is continuous. The following statement is proved in [205, Theorem 4.1]. Theorem 1.122. Let f ∈ Lp (ℝ) for some p ∈ (1, ∞). Then Hf ∈ Lp (ℝ), and ‖Hf ‖Lp (ℝ) ≤ max {tan (

π π ) , cot ( )} ‖f ‖Lp (ℝ) . 2p 2p

In particular, the constant max{tan(π/2p), cot(π/2p)} is optimal, i. e., ‖H‖ = max{tan(π/2p), cot(π/2p)}. To provide some properties of the Hilbert transform H, we need the following preliminaries. For any −∞ ≤ a < b ≤ +∞, a function ψ : [a, b] × [a, b] → ℝ is said to be Hölder continuous if there exist two constants C > 0 and γ ∈ (0, 1] such that for all t1 , t2 , τ1 , τ2 ∈ [a, b], 󵄨󵄨 󵄨 γ γ 󵄨󵄨ψ(t1 , τ1 ) − ψ(t2 , τ2 )󵄨󵄨󵄨 ≤ C (|t1 − t2 | + |τ1 − τ2 | ) . Given a Hölder continuous function ψ : [a, b] × [a, b] → ℝ and t0 ∈ (a, b), we have the following identity, called the Poincaré–Bertrand formula: b

b

b

b

a

a

a

a

ψ(t, τ) ψ(t, τ) 1 PV ∫ (PV ∫ dτ) dt = −π 2 ψ(t0 , t0 ) + PV ∫ (PV ∫ dt) dτ. t − t0 τ−t (τ − t)(t − t0 )

(1.176)

102 � 1 Fractional integrals and derivatives For the proof, we refer to [182, Section 23]. In fact, (1.176) also holds for a = −∞ and b = +∞ if ψ(t, τ) = ψ1 (t)ψ2 (τ) for t, τ ∈ ℝ with ψ1 ∈ Lp (ℝ) and ψ2 ∈ Lq (ℝ), where 1 + q1 ≤ 1. For this version, we refer to [120, Section 2.13]. p Now let us show that H is invertible, and let us provide its inverse. Proposition 1.123. Let p ∈ (1, ∞). The operator H : Lp (ℝ) → Lp (ℝ) is invertible, and H−1 = −H. Proof. We can equivalently prove that for every φ ∈ Lp (ℝ), we have H2 φ = −φ. Moreover, if we prove the statement for φ ∈ 𝒞c∞ (ℝ), then it can be extended to Lp (ℝ) by the denseness of 𝒞c∞ (ℝ) in Lp (ℝ) and continuity of H. Hence let us fix φ ∈ 𝒞c∞ (ℝ), which is clearly Lipschitz on the whole real line ℝ. Then, by the Poincaré–Bertrand formula (1.176), for all x ∈ ℝ, φ(τ) 1 1 1 1 PV ∫ (PV ∫ dτ) dt = −φ(x) + 2 PV ∫ φ(τ) (PV ∫ dt) dτ, t−x τ−t (τ − t)(t − x) π2 π ℝ







(1.177) recalling that all the involved integrals have to be considered in principal value sense. It is not difficult to check that PV ∫ ℝ

1 dt = 0 (τ − t)(t − x)

∀τ ≠ x,

where, again, the integral has to be considered in principal value sense. Hence (1.177) can be rewritten as H2 φ = −φ, which proves the desired claim for all φ ∈ 𝒞c∞ (ℝ). The relation H2 φ = −φ extends to Lp (ℝ) due to the fact that H : Lp (ℝ) → Lp (ℝ) is continuous (as shown in Theorem 1.122) and 𝒞c∞ (ℝ) is dense in Lp (ℝ). Next, we consider the Fourier transform of the Hilbert transform. The following result is discussed in [120, Section 5.2]. Proposition 1.124. For every φ ∈ L2 (ℝ), we have ℱ [Hφ](z) = −isign(z)ℱ [φ](z),

z ∈ ℝ.

Now let us investigate the link between the Hilbert transform H and the fractional integrals I±α . Theorem 1.125. Let α ∈ (0, 1) and φ ∈ Lp (ℝ) for 1 < p < 1/α. Then we have the following relations: I−α φ = cos(απ)I+α φ + sin(απ)HI+α φ,

I+α φ I±α Hφ

= =

cos(απ)I−α φ HI±α φ.



sin(απ)HI−α φ,

(1.178) (1.179) (1.180)

1.16 The Hilbert transform

� 103

Proof. Let us focus on the case φ ∈ 𝒞c∞ (ℝ), because the statement can be easily extended to Lp (ℝ), since 𝒞c∞ (ℝ) is dense in it. First, we want to prove that (HI+α φ) (x) =



1 1 PV ∫ φ(τ) ( ∫ dt) dτ. πΓ(α) (t − τ)1−α (t − x)

(1.181)

τ



To do this, let us fix x ∈ ℝ, and let M > 0 be big enough so that supp(φ) ⊂ [−M, M] and x ∈ [−M, M]. Let us write (HI+α φ) (x) = =

t

φ(τ) 1 1 1 PV ∫ ( dτ) dt ∫ π t − x Γ(α) (t − τ)1−α ℝ

−∞





φ(τ)1(−∞,t) (τ) 1 1 1 PV ∫ ( dτ) dt ∫ π t − x Γ(α) (t − τ)1−α M+1

M+1

φ(τ)1(−∞,t) (τ) 1 1 1 = PV ∫ ( dτ) dt ∫ π t − x Γ(α) (t − τ)1−α −M−1

+

−M−1



M+1

M+1

−M−1

φ(τ)1(−∞,t) (τ) 1 1 1 ( dτ) dt = I1 + I2 . ∫ ∫ π t − x Γ(α) (t − τ)1−α

(1.182)

To handle I1 , put ψ(t, τ) = −

φ(τ)(t − τ)α 1(−∞,t) (τ) πΓ(α)

and rewrite I1 as follows: M+1

M+1

−M−1

−M−1

ψ(t, τ) 1 1 1 I1 = PV ∫ ( PV ∫ dτ) dt. π t − x Γ(α) t−τ Let us show that ψ is Hölder of order α in [−M − 1, M + 1]2 . To do this, let us first fix τ ∈ [−M − 1, M + 1], and let t1 , t2 ∈ [−M − 1, M + 1]. Note that 󵄨󵄨 󵄨 |φ(τ)| 󵄨󵄨 󵄨 α α 󵄨󵄨ψ(t1 , τ) − ψ(t2 , τ)󵄨󵄨󵄨 = 󵄨(t − τ) 1(τ,+∞) (t1 ) − (t2 − τ) 1(τ,+∞) (t2 )󵄨󵄨󵄨 . πΓ(α) 󵄨 1 If t1 , t2 > τ, then ‖φ‖L∞ (ℝ) 󵄨󵄨 󵄨 |φ(τ)| 󵄨󵄨 α α󵄨 |t − t |α . 󵄨󵄨ψ(t1 , τ) − ψ(t2 , τ)󵄨󵄨󵄨 = 󵄨(t − τ) − (t2 − τ) 󵄨󵄨󵄨 ≤ πΓ(α) 󵄨 1 πΓ(α) 1 2

104 � 1 Fractional integrals and derivatives If t1 > τ and t2 ≤ τ, then ‖φ‖L∞ (ℝ) 󵄨󵄨 󵄨 |φ(τ)| |t1 − τ|α ≤ |t − t |α . 󵄨󵄨ψ(t1 , τ) − ψ(t2 , τ)󵄨󵄨󵄨 = πΓ(α) πΓ(α) 1 2 Finally, if t1 , t2 ≤ τ, then the following functions vanish: ψ(t1 , τ) = ψ(t2 , τ) = 0. Hence, in general, 󵄨󵄨 󵄨 ‖φ‖L∞ (ℝ) |t − t |α =: C1 |t1 − t2 |α . 󵄨󵄨ψ(t1 , τ) − ψ(t2 , τ)󵄨󵄨󵄨 ≤ πΓ(α) 1 2 Now fix t ∈ [−M − 1, M + 1] and let τ1 , τ2 ∈ [−M − 1, M + 1]. Then 󵄨󵄨 󵄨 |φ(τ1 ) − φ(τ2 )| 󵄨󵄨 󵄨 α 󵄨󵄨ψ(t, τ1 ) − ψ(t, τ2 )󵄨󵄨󵄨 ≤ 󵄨󵄨(t − τ1 ) 1(−∞,t) (τ1 )󵄨󵄨󵄨 πΓ(α) |φ(τ2 )| 󵄨󵄨 󵄨 α α + 󵄨(t − τ1 ) 1(−∞,t) (τ1 ) − (t − τ2 ) 1(−∞,t) (τ2 )󵄨󵄨󵄨 πΓ(α) 󵄨 ‖φ′ ‖L∞ (ℝ) |τ1 − τ2 | ‖φ‖L∞ (ℝ) ≤ (2M + 2)α + |τ − τ2 |α πΓ(α) πΓ(α) 1 (2M + 2)‖φ′ ‖L∞ (ℝ) + ‖φ‖L∞ (ℝ) ≤ |τ1 − τ2 |α =: C2 |τ1 − τ2 |α . πΓ(α) Note also that C2 > C1 . Finally, for all t1 , t2 , τ1 , τ2 ∈ [−M − 1, M + 1], 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨ψ(t1 , τ1 ) − ψ(t2 , τ2 )󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨ψ(t1 , τ1 ) − ψ(t1 , τ2 )󵄨󵄨󵄨 + 󵄨󵄨󵄨ψ(t1 , τ2 ) − ψ(t2 , τ2 )󵄨󵄨󵄨 ≤ C1 |t1 − t2 |α + C2 |τ1 − τ2 |α ≤ C2 (|t1 − t2 |α + |τ1 − τ2 |α ) . Thus we can use the Poincaré–Bertrand formula (1.176) (and the fact that ψ(x, x) = 0) to write M+1

M+1

I1 = PV ∫ (PV ∫ −M−1

−M−1

ψ(t, τ) dt) dτ (τ − t)(t − x)

M+1

M+1

−M−1

−M−1

φ(τ)1(τ,+∞) (t) 1 = PV ∫ (PV ∫ dt) dτ. πΓ(α) (t − τ)1−α (t − x)

(1.183)

To handle I2 , first rewrite the integral as follows: I2 =



M

M+1

−M

φ(τ) 1 1 1 ( dτ) dt, ∫ ∫ π t − x Γ(α) (t − τ)1−α

where we used the fact that supp(φ) ⊂ [−M, M] and clearly t > τ for any t ≥ M + 1 and τ ∈ [−M, M]. Then note that for x, τ ∈ [−M, M] and t ≥ M + 1, we have t − x ≥ t − M > 0

1.16 The Hilbert transform

� 105

and t − τ ≥ t − M > 0. Hence we can produce the relations M



|φ(τ)| 1 1 1 ( dτ) dt ∫ ∫ π t − x Γ(α) (t − τ)1−α M+1

−M ∞

2M‖φ‖L∞ (ℝ) 2M‖φ‖L∞ (ℝ) 1 dt = < +∞, ∫ πΓ(α) (1 − α)πΓ(α) (t − M)2−α



M+1

and then we can apply the Fubini theorem to I2 and get that I2 =

M



M+1



−M

M+1

−M−1

M+1

φ(τ)1(τ,+∞) (t) φ(τ)1(τ,+∞) (t) 1 1 dt) dτ = dt) dτ, ∫(∫ ∫ (∫ 1−α πΓ(α) πΓ(α) (t − τ) (t − x) (t − τ)1−α (t − x) (1.184)

where we used again the fact that supp(φ) ⊂ [−M, M]. Plugging (1.183) and (1.184) into (1.182) and using the fact that 1(τ,+∞) (t) = 0 for any t < τ and supp(φ) ⊂ [−M − 1, M + 1], s we get (1.181). Let us evaluate the inner integral. If we put t = τ + 1−s , then ∞

∫ τ

1

1

(t − τ)1−α (t − x)

dt = ∫ sα−1 (1 − s)−α (τ − x + s(1 + x − τ)) ds. −1

0

Now let us distinguish two cases. If τ > x, then we can write ∞

∫ τ

1

1

(t − τ)1−α (t − x)

dt = (τ − x) ∫ sα−1 (1 − s)−α (1 − −1

0

τ − x − 1 −1 s) ds τ−x

= (τ − x) Γ(1 − α)Γ(α)2 F1 (1, α; 1; −1

τ−x−1 ) τ−x

(see Theorem B.9), where 2 F1 is a hypergeometric function (see Appendix B.4). The reflection formula (see Proposition B.2) supplies that Γ(1 − α)Γ(α) =

π . sin(απ)

Furthermored, it is clear that for |z| < 1, 2 F1 (1, α; 1; z)

Thus recalling that

τ−x−1 τ−x

< 1, we obtain



∫ τ

(α)n n +∞ α n z = ∑ ( )z = (1 − z)−α . n! n n=0 n=0 +∞

=∑

1

(t −

τ)1−α (t

− x)

dt = (τ − x)α−1

π . sin(απ)

(1.185)

106 � 1 Fractional integrals and derivatives To handle the case x > τ, we need to evaluate the integral via contour integration, and therefore we omit the intermediate calculations that lead to the following formula: ∞

∫ τ

1 dt = −π cot(απ)(x − τ)α−1 . (t − τ)1−α (t − x)

(1.186)

Plugging (1.185) and (1.186) into (1.181), we get (HI+α φ) (x)



1 = ∫ φ(τ)(τ − x)α−1 dτ sin(πα)Γ(α) x

x

cos(πα) 1 − (I α φ(x) − cos(πα)I+α φ(x)) , ∫ φ(τ)(τ − x)α−1 dτ = sin(πα)Γ(α) sin(πα) − −∞

and thus we have proved (1.178). To establish (1.179), let us introduce the operator Qφ(x) = φ(−x). We already know that QI−α φ = I+α Qφ, and it is clear that Q−1 = Q. Concerning H, we have the equalities (HQφ)(x) =

φ(−t) φ(t) 1 1 PV ∫ dt = PV ∫ dt π t−x π −t − x ℝ



φ(t) 1 = − PV ∫ dt = (−Hφ)(−x) = −(QHφ)(x). π t+x ℝ

Now let φ ∈ Lp (ℝ) and φQ = Qφ. From (1.178) we get the equality I−α (Qφ) = cos(απ)I+α (Qφ) + sin(απ)HI+α (Qφ). Applying Q to both sides of the equation, we obtain QI−α (Qφ) = cos(απ)QI+α (Qφ) + sin(απ)QHI+α (Qφ), which is equivalent, together with Qψ = φ, to the formula I+α φ = cos(απ)I−α φ − sin(απ)HI−α φ, and so we get (1.179). To prove (1.180), we can reduce it to the case φ ∈ 𝒮 (ℝ) and then pass to Lp (ℝ) taking into account the denseness of 𝒮 (ℝ) in Lp (ℝ). So, let φ ∈ 𝒮 (ℝ). Note that α

α

ℱ [HI+ φ] (z) = −isign(z)ℱ [I+ φ] (z) = −isign(z)(iz) ℱ [φ](z)

= (iz)−α ℱ [Hφ](z) = ℱ [I+α Hφ] (z),

where we used item (iii) of Exercise 1.13 and Proposition 1.124.

−α

1.16 The Hilbert transform



107

Remark 1.126. The requirement 1 < p < α1 is necessary in (1.180) to guarantee that both sides of the identity are well defined. In the case p = 1, (1.178) and (1.179) are still true, but the proof of such an identity requires the continuity of I±α and H in suitable weighted Lp spaces (see [232, Theorems 5.6 and 11.3]). Now let us underline the following two corollaries. Corollary 1.127. Let α ∈ (0, 1) and 1 < p < 1/α. Then Iα+ (Lp (ℝ)) = Iα− (Lp (ℝ)). Proof. Let f ∈ 𝕀α+ (Lp (ℝ)) and f = I+α φ for φ ∈ Iα+ (Lp (ℝ)). It follows from (1.180) and (1.179) that f = I+α φ = I−α (cos(απ)φ − sin(απ)Hφ). Observing that Hφ ∈ Lp (ℝ) and therefore φ∗ = cos(απ)φ − sin(απ)Hφ ∈ Lp (ℝ), we conclude that f = I−α φ∗ , and thus f ∈ Iα− (Lp (ℝ)). The inverse inclusion is proved similarly with the help of (1.179) and (1.180). For such a reason, we can set α

p

α

p

α

p

I (L (ℝ)) := I+ (L (ℝ)) = I− (L (ℝ)) .

Corollary 1.128. Let α ∈ (0, 1) and 1 < p < 1/α. If f ∈ Iα (Lp (ℝ)), then Hf ∈ Iα (Lp (ℝ)). Proof. Let φ ∈ Lp (ℝ) be such that f = I+α φ. Then Hf = HI+α φ = I+α Hφ, where Hφ ∈ Lp (ℝ). Finally, let us consider the link between the Hilbert transform H and the Riemann– Liouville fractional derivative. Corollary 1.129. Let α ∈ (0, 1) and f ∈ Iα (Lp (ℝ)) for some 1 < p < 1/α. Then Dα− f = cos(απ)Dα+ f − sin(απ)HDα+ f ,

(1.187)

Dα+ f = cos(απ)Dα− f + sin(απ)HDα− f .

(1.188)

and

Proof. Let f = I+α φ. It follows from (1.179) and (1.180) that f = cos(απ)I−α φ − sin(απ)I−α Hφ.

(1.189)

Acting by the operator Dα− on both sides of (1.189) and recalling that Dα+ f = φ, we get (1.187). Arguing similarly and using formula (1.178), we obtain (1.188).

108 � 1 Fractional integrals and derivatives

1.17 Exercises Exercise 1.1. Applying the definition of Euler Beta function from (B.17), prove Lemma 1.11. Hint: Since the Beta function is defined as an integral on [0, 1], use a change of variables that maps the interval [a, x] into [0, 1]. Exercise 1.2. Let α, β > 0, γ ∈ ℝ, a < b, and f (t) = (t−a)β−1 (b−t)γ−1 for t ∈ (a, b). Applying the integral representation of the hypergeometric function given in Theorem B.9, prove that for x ∈ (a, b), α (Ia+ f ) (x) = (b − a)γ−1

Γ(β) x−a (x − a)α+β−1 2 F1 (1 − γ, β; α + β; ), Γ(α + β) b−a

where 2 F1 (a, b; c; x) is a hypergeometric series; see Appendix B.4. Hint: Again, use a change of variables that maps [a, x] into [0, 1]. Exercise 1.3. Let α, β > 0, γ > 1 − α, a < b, and f (t) = (t − a)β−1 (b − t)γ−1 for t ∈ (a, b). Prove that α (Ia+ f ) (b) =

Γ(α + γ − 1)Γ(β) (b − a)α+γ+β−2 . Γ(α + γ + β − 1)Γ(α)

Hint: Use Lemma 1.11. Exercise 1.4. Recall that for all β ∈ ℝ and t ∈ (−1, 1), (β)n n 1 t = , n! (1 − t)β n=0 ∞



where (β)n is the shifted factorial (see (B.1)). Set β > 0, γ > β, a ∈ ℝ, and f (x) = (x − a)β−1 (b − x)−γ . Use Exercise 1.2 to prove that for all x ∈ (a, b), γ−β

(Ia+ f ) (x) =

Γ(β)(x − a)γ−1 . (b − a)γ−β Γ(γ)(b − x)β

Hint: Use the definition of hypergeometric function (B.19). Exercise 1.5. Let a > 0, α ∈ (0, 1), and f (x) = x −α (x + a)−1 . Use Exercise 1.4 to prove that for all x > 0, α (I0+ f ) (x) = Γ(1 − α)

(a + x)α−1 . aα

Use the previous relation and Euler’s reflection formula from Proposition B.2 to prove Lemma 1.23.

1.17 Exercises

� 109

Hint: For any fixed x > 0, provide the representation α (I0+ f ) (x) =

Γ(1 − α) 1−α (I0+ g) (x) Γ(α)

for a suitable function g. Exercise 1.6. Let b > 0 and q > 1. Consider the function t

1 b F(t) = ∫ log q (1 + ) ds. s

0

(i) Prove that this function is well-defined for all t > 0. Hint: Use the fact that 0 ≤ log(1 + z) ≤ z for z > 0. (ii) Prove that for all γ ∈ (0, 1), lim t↓0

F(t) = 0. tγ

Hint: Use the fact that for all β > 0, there exists a constant C(β) such that log(1 + z) < C(β)zβ for all z ≥ 1. (iii) Use item (ii) to conclude the proof of Lemma 1.17. Exercise 1.7. Fix λ, α > 0 and consider function f (x) = eλx . Prove that (I+α f ) (x) = Hint: Use the representation (I+α f )(x) = the Gamma function.

eλx . λα

∞ 1 ∫ t α−1 eλ(x−t) dt Γ(α) 0

and the definition of

Exercise 1.8. Applying Lemma 1.23, prove the following statement. Let φ ∈ Lp (ℝ), 1 < p < α1 . Then (I+α φ)[a,b] (x) = (I+α ϕ) (x) for a. a. x ∈ ℝ, where 0, { { { { ∞ { { α φ(a − t)dt { sin απ t { { { φ(x) + ( ) , ∫ { π x−a x+t−a ϕ(x) = { 0 { { { ∞ ∞ { α φ(a − t)dt α φ(b − t)dt { { sin απ t t { { ( ( ) − ( ) ), ∫ ∫ { { π x−b x+t−a x−a x+t−b 0 0 {

x < a, a < x ≤ b, x > b.

110 � 1 Fractional integrals and derivatives Exercise 1.9. Let α, β ∈ (0, 1) with β ≠ α, a ∈ ℝ, and f (t) = (t − a)β−1 for t > 0. (i) Note that f ∈ ̸ AC[a, a + 1]; (ii) Use Lemma 1.11 to prove that Dαa+ f is well-defined and (Dαa+ f ) (t) =

Γ(β) (t − a)β−α−1 , Γ(β − α)

t > 0.

(1.190)

1−α Hint: As the first step, evaluate Ia+ f.

Remark 1.130. Formula (1.190) was conjectured by Lacroix [129] prior to the formal definition of fractional derivative. Exercise 1.10. Let α > 0. (i) Fix λ > 0 and put f (t) = eλt . Using Exercise 1.7, prove that (C Dα+ f )(x) = λα eλx . ℓ (ii) Establish that for all ℓ ∈ ℕ, we have the equality ∇+,τ f (x) = eλx (1 − e−λτ )ℓ . (iii) Use the item (ii) with λ = 1 and x = 0 to conclude that ∞

χ(α, ℓ) = ∫ t −α−1 (1 − e−t )ℓ dt

for all ℓ > α,

0

where χ(α, ℓ) has been introduced in (1.91). Hint: Set λ = 1 and use both (i) and (ii) together with Proposition 1.63 to evaluate (C Dα+ f )(0) in two different ways. Exercise 1.11. Let α ∈ (0, 1), a ∈ ℝ, and f (t) = (t − a)α−1 for t > 0. (i) Prove that Dαa+ f ≡ 0. 1−α Hint: Evaluate Ia+ f applying Lemma 1.11. α (ii) Establish that Ia+ (Dαa+ f ) ≡ 0 despite f ≢ 0. Exercise 1.12. Let α > 0, n = ⌊α⌋ + 1, and −∞ < a < b < +∞. Consider f ∈ 𝒞 n−1 [a, b] with f (n−1) ∈ AC[a, b]. (i) Let gk (x) = (x − a)k for x ∈ [a, b]. Prove that for all k = 0, . . . , n − 1, C α Da+ gk

(ii) Set f0 (x) = f (x) − ∑n−1 k=0

f (k) (a) (x k!

− a)k for x ∈ [a, b]. Use (i) to establish that C α Da+ f

(iii) Use (ii) to prove (1.85).

≡ 0.

≡ C Dαa+ f0 .

1.17 Exercises

� 111

Exercise 1.13. Let ℓ ∈ ℕ and f ∈ 𝒮 (ℝ). Denote by ℱ [f ] the Fourier transform of f . (i) Prove that ℱ [∇+,τ f ] (z) = (1 − e ℓ

−iτz ℓ

) ℱ [f ](z).

Hint: Apply Proposition A.6. (ii) Fix α > 0 and let ℓ > α. Use the previous relation and the equality ∞

(iz)α χ(ℓ, α) = ∫ t −α−1 (1 − e−itz ) dt ℓ

0

to conclude that for all f ∈ W 1,2 (ℝ) M

α

α

ℱ [ D+ f ] (z) = (iz) ℱ [f ](z).

Hint: Can you take the Fourier transform inside the integral that defines M Dα+ f ? (iii) Recall that, for z ≠ 0, +∞

∫ e−itz t α−1 dt = (iz)−α Γ(α), 0

where the integral is conditionally convergent. Deduce that for every function f ∈ 𝒮 (ℝ), α

ℱ [I+ f ] (z) = (iz) ℱ [f ](z) ℱ

for all α ∈ (0, 1), and z ∈ ℝ,

−α

[C Dα+ f ] (z)

α

= (iz) ℱ [f ](z)

for all α > 0, and z ∈ ℝ.

(1.191) (1.192) 2

Remark. Fix α ∈ (0, 1/2). Then (1.191) also holds for a. a. z ∈ ℝ if f ∈ L2 (ℝ)∩L 1+2α (ℝ). Similarly, if we fix α > 0 with ⌊α⌋ + 1 − α > 1/2, then (1.192) holds for a. a. z ∈ ℝ if 2

f ∈ W ⌊α⌋+1,2 (ℝ) ∩ W ⌊α⌋+1, 1+2(⌊α⌋+1−α) (ℝ). Can you prove this? (iv) Using (i), (ii), and (iii), give alternative proof of Proposition 1.63 for f ∈ 𝒮 (ℝ). Exercise 1.14. Let α > 0 and ℓ ∈ ℕ, ℓ > α. (i) Let 1

{e− 1−x2 , f (x) = { 0, { Prove that f ∈ 𝒞c∞ (ℝ) but f ∈ ̸ ℱ𝒮0 (ℝ).

x ∈ (−1, 1), x ∈ ̸ (−1, 1).

112 � 1 Fractional integrals and derivatives Hint: To prove that f ∈ 𝒞c∞ (ℝ), you only need to establish the equality lim f (n) (x) = lim f (n) (x) = 0

x→1

x→−1

for all n ∈ ℕ. To prove that f ∈ ̸ ℱ𝒮0 (ℝ), consider the inverse Fourier transform ℱ −1 [f ] of f . Can you evaluate ℱ −1 [f ](0)? (ii) Now let f ∈ 𝒞c∞ (ℝ) and assume that the support of f is contained in the interval [a, b]. Prove that (Dα+ f )(x) = 0 for all x ≤ a. Hint: Note that f satisfies the conditions of Proposition 1.58 and use the definition of C Dα+ f . (iii) Let f ∈ 𝒞c∞ (ℝ), φ = Dα+ f , and x ∈ ℝ, and let kℓ,α be defined in the proof of Proposition 1.69. Use (ii) to prove that ∞

∫τ ε



−2

󵄨󵄨 z 󵄨󵄨 󵄨 󵄨 ∫ 󵄨󵄨󵄨kℓ,α ( )󵄨󵄨󵄨 󵄨󵄨󵄨φ(x − z)󵄨󵄨󵄨dzdτ < +∞. 󵄨 τ 󵄨 0

Hint: Assume that the support of f is contained in [a, b]. Item (ii) guarantees that the convergence is trivial if x ≤ a, whereas if x > a, then the inner integral can be taken on (0, x − a) in place of the whole semiaxis (0, +∞). (iv) Using (iii), prove Proposition 1.69 under the condition f ∈ 𝒞c∞ (ℝ) in place of f ∈ Iα+ (ℱ𝒮0 (ℝ)). Hint: Note that 𝒞c∞ (ℝ) ⊂ 𝒮 (ℝ) to use Proposition 1.63 and then follow the proof of Proposition 1.69 up to (1.119). Use (iii) to guarantee that the order of the integrals can be exchanged. Exercise 1.15. Let α > 0 and ℓ ∈ ℕ with ℓ > α. (i) Let 1

{e− 1−x2 , φ(x) = { 0, {

x ∈ (−1, 1), x ∈ ̸ (−1, 1).

Prove that f = I+α φ is well-defined but f ∈ ̸ 𝒞c∞ (ℝ). Hint: Note that f (x) > 0 for all x > −1. (ii) Now let f ∈ Iα+ (𝒞c∞ (ℝ)) with f = I+α φ, x ∈ ℝ, and kℓ,α as in the proof of Proposition 1.69. Prove that ∞



ε

0

󵄨󵄨 z 󵄨󵄨 󵄨 󵄨 ∫ τ −2 ∫ 󵄨󵄨󵄨kℓ,α ( )󵄨󵄨󵄨 󵄨󵄨󵄨φ(x − z)󵄨󵄨󵄨dzdτ < +∞. 󵄨 τ 󵄨 Hint: Argue as in (iii) of Exercise 1.14. (iii) Let f ∈ Iα+ (𝒞c∞ (ℝ)) with f = I+α φ and assume that the support of φ is contained in the interval [a, b]. Prove that f (x) = 0 for all x < a. Hint: Use the definition of I+α .

1.17 Exercises

� 113

(iv) Using (ii) and (iii), prove Proposition 1.69 under the condition f ∈ Iα+ (𝒞c∞ (ℝ)) in place of f ∈ Iα+ (ℱ𝒮0 (ℝ)). Hint: Use (iii) to show that f satisfies the condition of Proposition 1.63 and then follow the proof of Proposition 1.69 up to (1.119). Then use (ii) to guarantee that the order of the integrals can be exchanged. Exercise 1.16. Let α ∈ (0, 1) and f ∈ 𝒮 (ℝ). (i) Note that for all t > 0, ∫ e−

(x−y)2 4t

󵄨󵄨 󵄨 󵄨󵄨f (y) − f (x)󵄨󵄨󵄨dy < +∞



and use this relation to prove that ∫ e−

(x−y)2 4t

(f (y) − f (x))dy =



h2 1 ∫ e− 4t (f (x + h) − 2f (x) + f (x − h))dh. 2



(ii) Prove that for all h ∈ ℝ with h ≠ 0, ∞

∫ 0

4α Γ ( 21 + α) −1−2α 1 − h4t2 −1−α e t dt = |h| . √4πt √π

Hint: Use the change of variables s = (iii) For t > 0, consider

h2 . 4t

p(t, x) =

1 − x2s2 e , √2πs

(1.193)

p̃t (h) = p(2t, h), and uf (t, x) = (p̃t ∗ f )(x) for t > 0 and x ∈ ℝ. Apply (i) and (ii) to prove that ∞

󵄨 󵄨 ∫ 󵄨󵄨󵄨uf (t, x) − f (x)󵄨󵄨󵄨 t −1−α dt < +∞ 0

for all x ∈ ℝ. Prove also that ∞

󵄨 󵄨 ∫ ∫ 󵄨󵄨󵄨uf (t, x) − f (x)󵄨󵄨󵄨 t −1−α dtdx < +∞.

ℝ 0

Hint: Use (i) to rewrite uf (t, x) − f (x) and the fact that f ∈ 𝒮 (ℝ) to obtain |f (x + h) − 2f (x) + f (x − h)| ≤ (supy∈(x−1,x+1) |f ′′ (y)|)|h|2 for |h| ≤ 1. Furthermore, note that f ∈ 𝒮 (ℝ), and therefore the function (supy∈(x−1,x+1) |f ′′ (y)|) belongs to L1 (ℝ). Use also the fact that ∫ℝ p̃t (h)dh = 1.

114 � 1 Fractional integrals and derivatives (iv) Define fα as ∞

1 fα (x) := ∫ (uf (t, x) − f (x)) t −1−α dt, Γ(−α)

x ∈ ℝ.

0

Use (iii) to prove that for all z ∈ ℝ, 2α

ℱ [fα ](z) = |z| ℱ [f ](z). 2

α

d Use the latter equality to prove that fα := (− dx 2) f. Hint: item (iii) guarantees that the Fourier transform of fα exists and that the integral and the Fourier transform can be exchanged. (v) Use (ii) and (iv) to prove (1.130). FL Exercise 1.17. Let α ∈ (0, 2), and let Cα/2 be defined as in (1.130). Prove that

FL Cα/2

α(1 − α) { , α ≠ 1, { { { 2Γ(2 − α) cos ( πα ) 2 ={ {1 { { α = 1. {π

Hint: use the Euler reflection formula (B.2) and the Legendre duplication formula (B.3). Exercise 1.18. Let α ∈ (0, 1) and f ∈ 𝒮 (ℝ). (i) Fix h ∈ ℝ, h ≠ 0, and for x ∈ ℝ, define fα (x; h) =

f (x − h) − f (x) 1

|h| 2 +α

.

Prove that fα (⋅; h) ∈ L1 (ℝ) ∩ L2 (ℝ). Hint: Use the fact that f ∈ 𝒮 (ℝ) is Lipschitz and bounded. (ii) Show that for h ∈ ℝ with h ≠ 0, we have the following equality: ℱ [fα (⋅; h)](z) =

e−izh − 1 1

|h| 2 +α

ℱ [f ](z).

(iii) Use (ii) to show that [u]2α,2 =

2 󵄨 󵄨2 ∫ |z|2α 󵄨󵄨󵄨ℱ [f ](z)󵄨󵄨󵄨 dz CαFL ℝ

and from this fact deduce the proof of Theorem 1.91. Hint: Use the Plancherel theorem (Theorem A.12).

1.17 Exercises

� 115

Exercise 1.19. Let −∞ < a < b < +∞, α > 0, and p ≥ 1. Let g ∈ 𝒞 1 [a, b] with g ′ (x) > 0 for all x ∈ [a, b]. (i) Using the change of variable s = g −1 (t), prove that Qg−1 : Lp (a, b) → Lp (g(a), g(b)) is a bounded operator and that 1 󵄩󵄩 −1 󵄩󵄩 󵄩󵄩 ′ 󵄩󵄩 p 󵄩󵄩Qg f 󵄩󵄩 p ≤ g ‖f ‖Lp (a,b) , 󵄩 󵄩 󵄩 󵄩 L∞ (a,b) 󵄩 󵄩L (g(a),g(b))

where Qg−1 = Qg −1 , and Qg is defined in (1.155). (ii) Using the change of variable s = g(t), prove that Qg : Lp (g(a), g(b)) → Lp (a, b) is a bounded operator and that 1

󵄩 󵄩 ‖Qg f ‖Lp (a,b) ≤ 󵄩󵄩󵄩1/g ′ 󵄩󵄩󵄩Lp∞ (a,b) ‖f ‖Lp (g(a),g(b)) . (iii) Use the previous property to establish that for every f ∈ Lp (a, b), 1 1 (g(b) − g(a))α 󵄩󵄩 α 󵄩 󵄩󵄩 ′ 󵄩󵄩 p 󵄩󵄩 ′ 󵄩󵄩 p 󵄩󵄩Ia+(b−),g 󵄩󵄩󵄩 p ≤ 1/g g ‖f ‖Lp (a,b) . 󵄩 󵄩 󵄩 󵄩 ∞ ∞ 󵄩L (a,b) 󵄩 󵄩L (a,b) 󵄩 󵄩L (a,b) 󵄩 Γ(α + 1)

Exercise 1.20. Let −∞ < a < b < +∞, α > 0, q1 , q2 , q3 ≥ 1, p1 , p2 , p3 ≥ 1, 1

1,q1

1 qj

+

1 pj

= 1,

j = 1, 2, 3. Assume also that g ∈ 𝒞 (a, b) ∩ W (a, b), g (x) > 0 for all x ∈ (a, b), and |g ′ |1−q2 ∈ L1 (a, b). (i) Using the change of variable s = g −1 (t), prove that Qg−1 : Lp1 p3 (a, b) → Lp3 (a, b) is a bounded operator and that ′

1

󵄩󵄩 −1 󵄩󵄩 󵄩󵄩 ′ 󵄩󵄩 p3 󵄩󵄩Qg f 󵄩󵄩 p 󵄩 󵄩L 3 (a,b) ≤ 󵄩󵄩g 󵄩󵄩Lq1 (a,b) ‖f ‖Lp1 p2 (a,b) , where Qg−1 and Qg are the same as in Exercise 1.19. (ii) Using the change of variable s = g(t), establish that Qg : Lp2 p3 (a, b) → Lp3 (a, b) is a bounded operator and that 1

󵄩󵄩󵄨 󵄨1−q2 󵄩󵄩 q p ‖Qg f ‖Lp3 (a,b) ≤ 󵄩󵄩󵄩󵄨󵄨󵄨g ′ 󵄨󵄨󵄨 󵄩󵄩󵄩 21 3 ‖f ‖Lp2 p3 (a,b) . 󵄩 󵄩L (a,b) (iii) Use the previous property to establish that for every f ∈ Lp (a, b), where p = p1 p2 p3 , 󵄩󵄩󵄨 ′ 󵄨1−q2 󵄩󵄩 q21p3 󵄩 ′ 󵄩 p21p3 (g(b) − g(a))α 󵄩󵄩 α 󵄩 󵄩󵄩󵄨󵄨g 󵄨󵄨 󵄩󵄩 󵄩󵄩g 󵄩󵄩 q 󵄩󵄩Ia+(b−),g f 󵄩󵄩󵄩 p ≤ ‖f ‖Lp (a,b) . 1 󵄩 󵄨 󵄨 󵄩 󵄩 󵄩 1 L (a,b) 󵄩 󵄩L 3 (a,b) 󵄩 󵄩L (a,b) Γ(α + 1) Exercise 1.21. Let α ∈ (0, 1) and 1 ≤ q1 , q2 , q3 ≤ ∞, and let g ∈ 𝒞 1 (ℝ) ∩ W 1,q1 (ℝ) with g ′ (x) > 0 for all x ∈ ℝ, limx→±∞ g(x) = ±∞, and |g ′ |1−q2 ∈ L1 (ℝ). Let also p1 , p2 , p3 ≥ 1 be such that q1 + p1 = 1, j = 1, 2. j

j

116 � 1 Fractional integrals and derivatives (i) Use the same arguments as in the previous exercises to prove that Qg−1 : Lp1 p3 (ℝ) → Lp3 (ℝ) and Qg : Lp2 p3 (ℝ) → Lp3 (ℝ) are bounded linear operators, where Qg−1 and Qg are the same as in Exercise 1.19.

(ii) Set p = p1 p2 p3 and suppose further that 1 ≤ p2 p3 < theorem to prove that

α I±,g

p

: L (ℝ) → L

p∗α,p ,p 2 3

1 . α

Use the Hardy–Littlewood

(ℝ), where p∗α,p2 ,p3 =

p3 . 1−αp2 p3

Exercise 1.22. Using the conjugation relation (1.156) and Exercises 1.7 and 1.10, prove Proposition 1.111. Exercise 1.23. Let α ∈ (0, 1), 0 < a < ∞, β ≥ α. Also, let fa,β (t) = logβ−1 ( at ). Using the conjugation relation, prove that in the case β > α the following relation holds for all x > a: (H Dαa+ fa,β ) (x) =

Γ(β) x logβ−α+1 ( ) , Γ(β − α) a

where H Dαa+ = C Dαa+,g and g(t) = log(t). In particular, conclude that for all x > a, (H Dαa+ 1) (x) =

1 x log−α ( ) . Γ(1 − α) a

Moreover, establish that in the case β = α the following relation holds for all x > a: (H Dαa+ fa,α ) (x) = 0. Exercise 1.24. Let fn ∈ 𝒞 [a, b], n ∈ ℕ. (i) Assume that α ∈ (0, 1) and fn → f , n → ∞, uniformly in [a, b] for some f ∈ 𝒞 [a, b]. α α Prove that Ia+ fn → Ia+ f uniformly in [a, b]. α Hint: Use the fact that Ia+ : L∞ (a, b) → L∞ (a, b) is a bounded operator. (ii) Let α ∈ (0, 1) and suppose that Dαa+ fn ∈ 𝒞 [a, b] for all n ∈ ℕ. Suppose further that for each n ∈ ℕ, there exists pn ≥ 1 such that fn ∈ Iα+ (Lpn (a, b)) and Dαa+ fn → g, n → ∞, uniformly in [a, b] for some function g ∈ 𝒞 [a, b]. Prove that there exists a function f ∈ 𝒞 (a, b) such that fn → f , n → ∞, uniformly in [a, b] and g = Dαa+ f . Hint: Use (1.45) and (i). Remark 1.131. Note that g ∈ 𝒞 [a, b] under the uniform converge of Dαa+ fn ∈ 𝒞 [a, b] to it. (iii) Let α > 0 and suppose that Dαa+ fn ∈ 𝒞 [a, b] for all n ∈ ℕ and Dαa+ fn → g uniformly in [a, b] for some function g ∈ 𝒞 [a, b]. Furthermore, assume that for each n ∈ ℕ, there exists pn ≥ 1 such that fn ∈ 𝕀α−⌊α⌋ (Lpn (a, b)) and that if α > 1, then for each + 1 ≤ k ≤ ⌊α⌋, there exist xk ∈ (a, b) and ℓk ∈ ℝ such that Dα−k a+ fn (xk ) → ℓk . Prove that

1.17 Exercises

� 117

there exists a function f ∈ 𝒞[a, b] such that fn → f uniformly in [a, b] and g = D^α_{a+} f.
Hint: Argue by induction on m = ⌊α⌋ + 1. Define f_{n,m−α} = I^{m−α}_{a+} fn. What can you deduce on the uniform convergence of such a sequence? Note that we can exchange the limit and the derivative under our hypotheses.
Remark 1.132. In the case α > 1 the additional conditions are necessary to apply some well-known results on the uniform convergence of the derivatives (see [245, Theorem 3.7.1]).
(iv) Let α > 0 and m = ⌊α⌋ + 1. Suppose that f_n^{(m)} ∈ 𝒞[a, b] for all n ∈ ℕ, f_n^{(m)} → g uniformly in [a, b], and for all 0 ≤ k ≤ m − 1, there exist x_k ∈ [a, b] and ℓ_k ∈ ℝ such that f_n^{(k)}(x_k) → ℓ_k. Prove that there exists a function f ∈ 𝒞^m[a, b] such that fn → f uniformly in [a, b] and ^C D^α_{a+} fn → ^C D^α_{a+} f.
Hint: Write ^C D^α_{a+} fn = I^{m−α}_{a+} f_n^{(m)}, since fn ∈ 𝒞^m(a, b), and note that we can exchange the limit and the derivative under our hypotheses. Finally, use (i).

Exercise 1.25. Let a ∈ ℝ, h > 0, and U = (a − h, a + h). Suppose that f : U → ℝ is analytic. Prove that for all x ∈ (a, a + h) and α > 0,

    (I^α_{a+} f)(x) = Σ_{k=0}^{+∞} (x − a)^{k+α} / Γ(α + k + 1) · f^{(k)}(a).

Hint: Rewrite f as a power series centered at a. Use (i) from Exercise 1.24 and Lemma 1.11.

Exercise 1.26. Let a ∈ ℝ, h > 0, and U = (a − h, a + h). Suppose that f : U → ℝ is analytic. Prove that for all x ∈ (a, a + h/2) and α > 0,

    (I^α_{a+} f)(x) = Σ_{k=0}^{+∞} \binom{−α}{k} (x − a)^{k+α} / Γ(k + α + 1) · f^{(k)}(x).

Hint: Rewrite f as a power series centered at x. Use item (i) from Exercise 1.24 and Exercise 1.3, recalling some properties of the binomial coefficients.

Exercise 1.27. Let a ∈ ℝ, α, h > 0, and U = (a − h, a + h). Suppose that f : U → ℝ is analytic. Prove that for all x ∈ (a, a + h),

    (D^α_{a+} f)(x) = Σ_{k=0}^{+∞} (x − a)^{k−α} / Γ(k − α + 1) · f^{(k)}(a),

where we set 1/∞ = 0. Recall that the Euler Gamma function can be defined on real (noninteger) negative numbers as in (B.4).
Hint: Use Exercise 1.25 and item (ii) of Exercise 1.24.

Exercise 1.28. Let a ∈ ℝ, h > 0, and U = (a − h, a + h). Suppose that f : U → ℝ is analytic. Prove that for all x ∈ (a, a + h/2) and α > 0,

    (D^α_{a+} f)(x) = Σ_{k=0}^{+∞} \binom{α}{k} (x − a)^{k−α} / Γ(k + 1 − α) · f^{(k)}(x),

where we set 1/∞ = 0. Recall that the Euler Gamma function can be defined on real (noninteger) negative numbers as in (B.4).
Hint: Use Exercise 1.25, item (ii) of Exercise 1.24, and the Chu–Vandermonde identity (B.12).

Exercise 1.29. Let λ ∈ ℝ with λ ≠ 0 and f(x) = e^{λx} for x ≥ 0. Prove that for all x > 0 and α > 0,

    (I^α_{0+} f)(x) = Σ_{k=0}^{+∞} λ^k x^{k+α} / Γ(k + α + 1),
    (D^α_{0+} f)(x) = Σ_{k=0}^{+∞} λ^k x^{k−α} / Γ(k − α + 1),

where we set 1/∞ = 0. Recall that the Euler Gamma function can be defined on real (noninteger) negative numbers as in (B.4). Conclude that e^{λx} is an eigenfunction of neither the Riemann–Liouville nor the Dzhrbashyan–Caputo fractional derivative if α ∉ ℕ.

Exercise 1.30. Let λ ∈ ℝ with λ ≠ 0 and f(x) = sin(λx) for x ≥ 0. Prove that for all x > 0 and α > 0,

    (I^α_{0+} f)(x) = Σ_{k=0}^{+∞} (−1)^k λ^{2k+1} x^{2k+1+α} / Γ(2k + α + 2),
    (D^α_{0+} f)(x) = Σ_{k=0}^{+∞} (−1)^k λ^{2k+1} x^{2k+1−α} / Γ(2k − α + 2),

where we set 1/∞ = 0. Recall that the Euler Gamma function can be defined on real (noninteger) negative numbers as in (B.4).

Exercise 1.31. Let λ ∈ ℝ with λ ≠ 0 and f(x) = cos(λx) for x ≥ 0. Prove that for all x > 0 and α > 0,

    (I^α_{0+} f)(x) = Σ_{k=0}^{+∞} (−1)^k λ^{2k} x^{2k+α} / Γ(2k + α + 1),
    (D^α_{0+} f)(x) = Σ_{k=0}^{+∞} (−1)^k λ^{2k} x^{2k−α} / Γ(2k − α + 1),

where we set 1/∞ = 0. Recall that the Euler Gamma function can be defined on real (noninteger) negative numbers as in (B.4).
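The conclusion of Exercise 1.29 can be illustrated numerically (a sketch, not part of the book's exercises; the function name is ours). For α = 1/2 and λ = 1, the series for the Riemann–Liouville derivative sums to the known closed form 1/√(πx) + e^x erf(√x), which differs from λ^α e^{λx} = e^x, so e^x is indeed not an eigenfunction.

```python
import math

def rl_deriv_exp(alpha, lam, x, terms=60):
    """Truncated series from Exercise 1.29 for (D^alpha_{0+} e^{lam t})(x):
    sum_k lam^k x^{k-alpha} / Gamma(k - alpha + 1)."""
    return sum(lam**k * x**(k - alpha) / math.gamma(k - alpha + 1)
               for k in range(terms))

x, alpha, lam = 1.0, 0.5, 1.0
series = rl_deriv_exp(alpha, lam, x)
# Known closed form of the Riemann-Liouville half-derivative of e^x:
closed = 1 / math.sqrt(math.pi * x) + math.exp(x) * math.erf(math.sqrt(x))
print(series, closed, lam**alpha * math.exp(lam * x))
```

The first two printed values agree, while the third (λ^α e^{λx}) visibly differs from them.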


Exercise 1.32. Prove the following relations. Here and later, i² = −1.
(i) Establish that for all x ∈ ℝ,

    |1 ± ix| = √(1 + x²)   and   Arg(1 ± ix) = ± arctan(x).

(ii) Establish that for all x ∈ ℝ,

    |x ± i| = √(1 + x²)   and   Arg(x ± i) = ± (π/2 − arctan(x)).

Hint: Use the relation

    arctan(x) + arctan(1/x) = (π/2) sign(x),   x ≠ 0.   (1.194)

(iii) Let μ > 1. Establish that the functions φ^±_μ(x) = (1 ± ix)^{−μ}, ϕ^±_μ(x) = (x ± i)^{−μ}, and ψ^±_μ(x) = x_±^{μ−1} e^{∓x} belong to L^p(ℝ) for all p ≥ 1.
Hint: Use items (i) and (ii).
(iv) Prove that

    ℱ[ψ^±_μ](z) = (Γ(μ)/(2π)) φ^±_μ(z).   (1.195)

Hint: Use the definition of the Fourier transform and a suitable change of variables.
(v) Prove that

    ℱ[φ^±_μ](z) = ψ^∓_μ(z)/Γ(μ).

Hint: Apply the Fourier transform to both sides of (1.195).
(vi) Prove that for 0 < α < (1 ∧ (μ − 1)) and x ∈ ℝ,

    (I^α_+ φ^±_μ)(x) = e^{±απi/2} (Γ(μ − α)/Γ(μ)) φ^±_{μ−α}(x).

Remark 1.133. Note that by continuity in L^p(ℝ) for 1 < p < 1/α, the previous relation holds even if μ = α + 1 for a. a. x ∈ ℝ.
(vii) Show that for 0 < α < (1 ∧ (μ − 1)) (resp., for α = μ − 1 < 1) and for x ∈ ℝ (resp., for a. a. x ∈ ℝ),

    (I^α_+ ϕ^±_μ)(x) = (Γ(μ − α)/Γ(μ)) e^{∓απi} ϕ^±_{μ−α}(x).

Hint: Using items (i) and (ii), note that ϕ^±_μ(x) = e^{∓iμπ/2} φ^∓_μ(x). Then apply (vi).

2 Integral and differential equations involving fractional operators

In Chapter 1, we introduced a plethora of fractional operators. Now we want to discuss some equations involving them. Whereas the first questions concerning fractional calculus were purely theoretical, since the question l'Hôpital posed to Leibniz in [105] mostly concerned his notation, the rediscovery of fractional calculus was related to the study of some specific physical phenomena. Indeed, the first main application of a fractional integral is due to Abel [1], who found an elegant solution to the problem of the tautochrone. Since then, several other phenomena have been explained by means of fractional calculus in various fields of science, such as geophysics [43], viscoelasticity [44, 74, 77, 87, 235], thermoelasticity [213], fluid dynamics [67, 81], and population dynamics [22, 78]. Here we will look at some basic results on fractional differential equations, with special attention to existence and uniqueness theorems.

https://doi.org/10.1515/9783110780017-002

2.1 Abel integral equation

Suppose we have a function f : [a, b] → ℝ for some −∞ < a < b < +∞. For some α ∈ (0, 1), we want to find another function φα : [a, b] → ℝ such that

    (1/Γ(α)) ∫_a^x φα(t)/(x − t)^{1−α} dt = f(x),   x ∈ [a, b].   (2.1)

Such an equation is called the Abel integral equation. Equations of this kind were found by Abel [1] in 1823 while working on the problem of the tautochrone. The Abel integral equation is clearly equivalent to

    (I^α_{a+} φα)(x) = f(x),   x ∈ [a, b].

We can also weaken the equality by asking that the equation be solved for a. a. x ∈ [a, b]. With this in mind, the following result is clear.

Theorem 2.1. Let α ∈ (0, 1) and f ∈ I^α_{a+}(L^p(a, b)). Then (2.1) has the unique solution φα = D^α_{a+} f ∈ L^p(a, b).

The case α = 1 is trivial if f ∈ 𝒞¹(a, b) and f(a) = 0. We can also study the case α > 1. To handle it, note that if we suppose f ∈ 𝒞^n(a, b), where n = ⌊α⌋, then we can differentiate both sides of (2.1) n times. In such a case, we get

    (∏_{j=1}^{n} (α − j) / Γ(α)) ∫_a^x φ(t)/(x − t)^{n+1−α} dt = f^{(n)}(x),

that is,

    (1/Γ(α − n)) ∫_a^x φ(t)/(x − t)^{n+1−α} dt = f^{(n)}(x),

where α − n ∈ (0, 1). This leads, in particular, to the following solution.

Theorem 2.2. Let α ≥ 1, n = ⌊α⌋, and f ∈ 𝒞^{(n)}[a, b] with f^{(m)}(a) = 0 for m = 0, . . . , n. Suppose further that f^{(n)} ∈ I^{α−n}_{a+}(L^p(a, b)) for some p ≥ 1. Then (2.1) has the unique solution φ = D^{α−n}_{a+} f^{(n)} ∈ L^p(a, b).

Such an equation can also be defined on the whole real line. In that case, we call the Abel integral equation the equation

    (1/Γ(α)) ∫_{−∞}^x φα(t)/(x − t)^{1−α} dt = f(x),   x ∈ ℝ,   (2.2)

where α ∈ (0, 1). Again, solutions of such an equation can be found by means of a fractional derivative.

Theorem 2.3. Let f ∈ I^α_+(L^p(ℝ)) for some 1 < p < 1/α. Then (2.2) has the unique solution φ = D^α_+ f ∈ L^p(ℝ).

Several generalizations of the Abel integral equation have been considered in the literature. We refer the interested reader to [232] for further details.
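A small numerical sanity check of Theorem 2.1 (a sketch with parameters chosen by us, not from the book): take α = 1/2 and f(x) = x on [0, 1]. Then D^{1/2}_{0+} f gives the candidate solution φ(t) = t^{1/2}/Γ(3/2), and plugging it into the Abel integral (2.1) should reproduce f. The weakly singular integral is approximated by a midpoint rule, which avoids evaluating at the singular endpoint t = x.

```python
import math

alpha = 0.5
# Candidate solution phi = D^alpha_{0+} f for f(x) = x, i.e. t^{1-alpha}/Gamma(2-alpha):
phi = lambda t: t**(1 - alpha) / math.gamma(2 - alpha)

def abel_lhs(x, n=200_000):
    """Midpoint-rule approximation of (1/Gamma(alpha)) * int_0^x phi(t) (x-t)^{alpha-1} dt."""
    h = x / n
    s = sum(phi(h * (j + 0.5)) * (x - h * (j + 0.5))**(alpha - 1)
            for j in range(n))
    return h * s / math.gamma(alpha)

x = 0.8
print(abel_lhs(x), x)  # both close to 0.8
```

The midpoint rule converges slowly near the singularity, so the agreement is only to a few decimal places; any quadrature adapted to the weight (x − t)^{α−1} would do better.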

2.2 The Mittag-Leffler and Prabhakar functions

In the next sections, we focus on the Cauchy problem involving fractional differential operators. Before going into the details of the equations, we first need to introduce some special functions that play a prominent role in our considerations. In the theory of ordinary differential equations, a prominent role is played by the exponential function e^{λt}, as it is an eigenfunction of the standard derivative. However, this is not true for the Riemann–Liouville and Dzhrbashyan–Caputo fractional derivatives (see Exercise 1.29), and thus some further generalizations of the exponential function are needed to study such differential operators.

Definition 2.4. Fix α ∈ ℂ with ℜ(α) > 0. We define the Mittag-Leffler function

    Eα(z) = Σ_{k=0}^{+∞} z^k / Γ(αk + 1),   z ∈ ℂ.   (2.3)

The Mittag-Leffler function has been introduced in [176, 177, 178, 179, 180]. See also [71] for α > 0 and [93] for the general case. A generalization of the Mittag-Leffler function can be obtained by introducing a second parameter.

Definition 2.5. The two-parameter Mittag-Leffler function is defined for ℜ(α) > 0 and β ∈ ℂ as

    E_{α,β}(z) = Σ_{k=0}^{+∞} z^k / Γ(αk + β),   z ∈ ℂ.   (2.4)

This generalization has been introduced independently in [69, 107]. It is clear that E_{α,1}(z) = Eα(z). We will focus on the case ℜ(β) > 0. A further generalization is obtained by introducing a third parameter.

Definition 2.6. The three-parameter Mittag-Leffler function or Prabhakar function is defined for ℜ(α) > 0, ℜ(β) > 0, and γ > 0 as

    E^γ_{α,β}(z) = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} z^k / Γ(αk + β),   z ∈ ℂ.   (2.5)

This function has been first considered in [215]. Again, it is clear that E^1_{α,β} = E_{α,β}. Thus let us investigate directly the properties of E^γ_{α,β}, since they will also hold for E_{α,β} and Eα. First of all, let us note that the series (2.5) converges for any z ∈ ℂ. Indeed, applying the D'Alembert criterion to determine the radius of convergence, we obtain that

    lim_{k→+∞} |\binom{γ+k}{k+1} / Γ(αk + α + β)| / |\binom{γ+k−1}{k} / Γ(αk + β)|
    = lim_{k→+∞} ((γ + k)/(k + 1)) |Γ(αk + β)/Γ(αk + α + β)|
    = lim_{k→+∞} ((γ + k)/(k + 1)) (1/(k^{ℜ(α)} |α^α|)) |(αk)^α Γ(αk + β)/Γ(αk + α + β)| = 0,

where we also used the asymptotic behavior of the ratio of Gamma functions (see (B.8)) and the fact that |x^z| = |x|^{ℜ(z)} if x > 0 is real. This guarantees that the series (2.5) is absolutely convergent in ℂ and uniformly convergent on any compact subset of ℂ.

Now let us consider the derivatives of E^γ_{α,β}(z). To do this, let us first recall that by a classical result (see, for instance, [245, Proposition 4.2.6]) a power series can always be differentiated term by term in its domain of convergence, and the derivative admits the same radius of convergence. Hence we get

    (d/dz) E^γ_{α,β}(z) = Σ_{k=1}^{+∞} \binom{γ+k−1}{k} k z^{k−1} / Γ(αk + β) = Σ_{k=0}^{+∞} \binom{γ+k}{k+1} (k + 1) z^k / Γ(αk + α + β).


This gives a series representation of the derivative of the Prabhakar function. If ℜ(β) > 1, then we also get that for z ≠ 0,

    (d/dz) E^γ_{α,β}(z) = Σ_{k=1}^{+∞} \binom{γ+k−1}{k} k z^{k−1} / Γ(αk + β)
    = (1/(αz)) Σ_{k=1}^{+∞} \binom{γ+k−1}{k} αk z^k / Γ(αk + β)
    = (1/(αz)) (Σ_{k=1}^{+∞} \binom{γ+k−1}{k} (αk + β − 1) z^k / Γ(αk + β) − (β − 1) Σ_{k=1}^{+∞} \binom{γ+k−1}{k} z^k / Γ(αk + β))
    = (1/(αz)) (Σ_{k=0}^{+∞} \binom{γ+k−1}{k} (αk + β − 1) z^k / Γ(αk + β) − (β − 1) Σ_{k=0}^{+∞} \binom{γ+k−1}{k} z^k / Γ(αk + β))
    = (1/(αz)) (Σ_{k=0}^{+∞} \binom{γ+k−1}{k} z^k / Γ(αk + β − 1) − (β − 1) Σ_{k=0}^{+∞} \binom{γ+k−1}{k} z^k / Γ(αk + β))
    = (E^γ_{α,β−1}(z) − (β − 1) E^γ_{α,β}(z)) / (αz).

At the same time, for ℜ(β) > 1, we can produce a differential recursive relation as follows:

    (d/dz)(z^{β−1} E^γ_{α,β}(λz^α)) = (d/dz) Σ_{k=0}^{+∞} \binom{γ+k−1}{k} λ^k z^{αk+β−1} / Γ(αk + β)
    = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} (αk + β − 1) λ^k z^{αk+β−2} / Γ(αk + β)
    = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} λ^k z^{αk+β−2} / Γ(αk + β − 1) = z^{β−2} E^γ_{α,β−1}(λz^α)   (2.6)

for all λ ∈ ℂ. This relation also implies that

    ∫_0^t s^{β−2} E^γ_{α,β−1}(λs^α) ds = ∫_0^t (d/ds)(s^{β−1} E^γ_{α,β}(λs^α)) ds = t^{β−1} E^γ_{α,β}(λt^α)

for ℜ(β) > 1. Furthermore, for λ, α, β, γ > 0, we have E^γ_{α,β}(λt^α) > 0 for all t > 0. Combining this observation with (2.6), we obtain the following result.

Proposition 2.7. Let α, γ, λ > 0 and β > 1. Then the function t ∈ [0, +∞) ↦ t^{β−1} E^γ_{α,β}(λt^α) is strictly increasing.

Proof. We just note that for all t > 0,

    (d/dt)(t^{β−1} E^γ_{α,β}(λt^α)) = t^{β−2} E^γ_{α,β−1}(λt^α) > 0.

A respective (in some sense, opposite) result for λ < 0 was proved in [55]. However, the proof is more complicated, and hence we omit it.
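The series (2.5) can be evaluated numerically by straightforward truncation (a sketch; the function name is ours, and we use the rising-factorial form of the coefficient, under which E^1_{α,β} = E_{α,β}). The classical identities E_{1,1}(z) = e^z and E_{2,1}(z²) = cosh(z) give quick consistency checks.

```python
import math

def prabhakar(alpha, beta, gamma, z, terms=80):
    """Truncated series (2.5): sum_k C(gamma+k-1, k) z^k / Gamma(alpha*k + beta),
    where C(gamma+k-1, k) = Gamma(gamma+k) / (Gamma(gamma) * k!) is the rising
    factorial (gamma)_k divided by k!."""
    total = 0.0
    for k in range(terms):
        coeff = math.gamma(gamma + k) / (math.gamma(gamma) * math.factorial(k))
        total += coeff * z**k / math.gamma(alpha * k + beta)
    return total

# E^1_{1,1}(z) = e^z and E^1_{2,1}(z^2) = cosh(z):
print(prabhakar(1, 1, 1, 2.0), math.exp(2.0))
print(prabhakar(2, 1, 1, 9.0), math.cosh(3.0))
```

For large |z| the truncation point must grow, since the largest term of the series sits at k ≈ |z|^{1/α}; the fixed `terms=80` is adequate only for moderate arguments.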

Proposition 2.8. Let α, β, γ, λ ∈ ℝ, λ < 0, α, β ∈ (0, 1], and γ ∈ (0, β/α]. Then the function t ∈ [0, +∞) ↦ t^{β−1} E^γ_{α,β}(λt^α) is positive and strictly decreasing, and

    lim_{t→+∞} t^{β−1} E^γ_{α,β}(λt^α) = 0.

The next statement follows from Proposition 2.8 and can be easily proved.

Proposition 2.9. Let α, β, γ, λ ∈ ℝ, λ < 0, α ∈ (0, 1], β > 1, β ∉ ℕ, and γ ∈ (0, (β − ⌊β⌋)/α]. Then the function f_{α,β,γ}(t) := t^{β−1} E^γ_{α,β}(λt^α) is nonnegative and strictly increasing on ℝ+. The same holds for λ < 0, α ∈ (0, 1], β ∈ ℕ \ {1}, and γ ∈ (0, 1/α].

Proof. Consider first the case β ∉ ℕ, and let us prove the statement by induction on ⌊β⌋. First of all, consider any β > 1 such that ⌊β⌋ = 1. Then we have, by (2.6) and Proposition 2.8,

    f′_{α,β,γ}(t) = (d/dt)(t^{β−1} E^γ_{α,β}(λt^α)) = t^{β−2} E^γ_{α,β−1}(λt^α) = t^{β−⌊β⌋−1} E^γ_{α,β−⌊β⌋}(λt^α) > 0

for all t > 0, where the last inequality is stated in Proposition 2.8, since β − ⌊β⌋ ∈ (0, 1] and γ ∈ (0, (β − ⌊β⌋)/α]. This inequality tells us that f_{α,β,γ} is strictly increasing. Combining this with the fact that f_{α,β,γ}(0) = 0, we also have that f_{α,β,γ}(t) > 0 for all t > 0.

Now fix n ∈ ℕ and assume that the statement is already proved for all non-integer β̃ > 1 such that ⌊β̃⌋ = n, i. e., for any β̃ ∉ ℕ such that ⌊β̃⌋ = n, any α ∈ (0, 1], and γ ∈ (0, (β̃ − ⌊β̃⌋)/α], the function f_{α,β̃,γ} is strictly increasing, and f_{α,β̃,γ}(t) > 0 for all t > 0. Let α ∈ (0, 1] and β > 1 be such that ⌊β⌋ = n + 1 and γ ∈ (0, (β − ⌊β⌋)/α]. Set β̃ = β − 1 and note that ⌊β̃⌋ = n and β − ⌊β⌋ = β̃ − ⌊β̃⌋, so that γ ∈ (0, (β̃ − ⌊β̃⌋)/α]. Then it follows from (2.6) that for all t > 0,

    f′_{α,β,γ}(t) = (d/dt)(t^{β−1} E^γ_{α,β}(λt^α)) = t^{β−2} E^γ_{α,β−1}(λt^α) = t^{β̃−1} E^γ_{α,β̃}(λt^α) = f_{α,β̃,γ}(t).   (2.7)

However, by the previously stated induction condition, α, β̃, and γ are such that f_{α,β̃,γ}(t) > 0 for all t > 0, and then (2.7) guarantees that f_{α,β,γ} is strictly increasing. Finally, since f_{α,β,γ}(0) = 0, we also have that f_{α,β,γ}(t) > 0. We end the proof by induction. The argument is exactly the same when β ∈ ℕ \ {1}.

Evaluating the Laplace transform of t^{β−1} E^γ_{α,β}(λt^α), we obtain an interesting result. Let ℒ be the Laplace transform operator, i. e.,

    ℒ[f](z) = ∫_0^∞ e^{−zt} f(t) dt

for all z ∈ ℂ such that the integral is well-defined. Also, let

    abs(f) = inf{z ∈ ℝ : ℒ[f](z) is well-defined}.


Proposition 2.10. Fix λ ∈ ℝ and set f_{α,β,γ}(t) = t^{β−1} E^γ_{α,β}(λt^α) for some α, β, γ > 0. Then, for every z ∈ ℂ such that ℜ(z) > |λ|^{1/α},

    ℒ[f_{α,β,γ}](z) = z^{αγ−β} / (z^α − λ)^γ,   (2.8)

and abs(f_{α,β,γ}) ≤ |λ|^{1/α}. Furthermore, we have the following relations:
(i) If λ > 0, then abs(f_{α,β,γ}) = λ^{1/α};
(ii) If λ < 0, α ∈ (0, 1], β > 0 with β ∉ ℕ, γ ∈ (0, (β − ⌊β⌋)/α], and either α ≠ 1 or γ ≠ β, then abs(f_{α,β,γ}) = 0;
(iii) If λ < 0, α ∈ (0, 1], β ∈ ℕ, γ ∈ (0, 1/α], and either α ≠ 1 or γ ≠ 1, then abs(f_{α,β,γ}) = 0;
(iv) If λ < 0, α = 1, β ∈ (0, 1], and γ = β, then abs(f_{α,β,γ}) = λ.

Proof. For a ∈ ℝ, set Ha := {z ∈ ℂ : ℜ(z) > a}. First, consider z > 0. In this case, since

    ∫_0^∞ e^{−zt} Σ_{k=0}^{+∞} \binom{γ+k−1}{k} |λ|^k t^{αk+β−1} / Γ(αk + β) dt
    = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} (|λ|^k / Γ(αk + β)) ∫_0^∞ e^{−zt} t^{αk+β−1} dt
    = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} |λ|^k / z^{αk+β} = z^{−β} / (1 − |λ|/z^α)^γ < +∞

for z > |λ|^{1/α}, we can use the Fubini theorem and integrate term by term to conclude that

    ℒ[f_{α,β,γ}](z) = Σ_{k=0}^{+∞} \binom{γ+k−1}{k} λ^k / z^{αk+β} = z^{αγ−β} / (z^α − λ)^γ =: g_{α,β,γ}(z)   (2.9)

for z > |λ|^{1/α}. This implies, by definition, that abs(f_{α,β,γ}) ≤ |λ|^{1/α}. Furthermore, by the properties of the Laplace transform (see, for instance, Theorem A.20) we know that ℒ[f_{α,β,γ}](z) converges for all z ∈ H_{|λ|^{1/α}} and is holomorphic on H_{|λ|^{1/α}}. At the same time, g_{α,β,γ} is also holomorphic on H_{|λ|^{1/α}}, and, by (2.9), ℒ[f_{α,β,γ}](z) = g_{α,β,γ}(z) for all z ∈ ℝ such that z > |λ|^{1/α}. This implies (see, for instance, Corollary A.3) that ℒ[f_{α,β,γ}](z) = g_{α,β,γ}(z) for all z ∈ H_{|λ|^{1/α}}, and (2.8) is established. Now let us prove statements (i)–(iv).
(i) Assume that λ > 0. Then f_{α,β,γ}(t) ≥ 0 for all t ≥ 0, and the function g_{α,β,γ} is holomorphic on H_{λ^{1/α}} and has a singularity at z = λ^{1/α}. This suffices to guarantee that abs(f_{α,β,γ}) = λ^{1/α} (see Theorem A.21).
(ii) Assume that λ < 0, α ∈ (0, 1], β > 0, β ∉ ℕ, and γ ∈ (0, (β − ⌊β⌋)/α]. Then by either Proposition 2.8 if β ∈ (0, 1) or Proposition 2.9 if β > 1 we know that f_{α,β,γ}(t) ≥ 0. If either α ≠ 1 or γ ≠ β, then the function g_{α,β,γ} extends holomorphically on H0 and has a singularity at z = 0. This suffices to guarantee that abs(f_{α,β,γ}) = 0.

(iii) Assume that λ < 0, α ∈ (0, 1], β ∈ ℕ, and γ ∈ (0, 1/α]. Then by either Proposition 2.8 if β = 1 or Proposition 2.9 if β > 1 we know that f_{α,β,γ}(t) ≥ 0. If either α ≠ 1 or γ ≠ 1, then the function g_{α,β,γ} extends holomorphically on H0 and has a singularity at z = 0. This suffices to guarantee that abs(f_{α,β,γ}) = 0.
(iv) Assume that λ < 0, α = 1, β ∈ (0, 1], and γ = β. Then by Proposition 2.8 we know that f_{1,β,β}(t) ≥ 0. Furthermore, the function g_{1,β,β} extends holomorphically on Hλ. This suffices to guarantee that abs(f_{1,β,β}) = λ.

Let us also recall the following asymptotic formulae for the one-parameter Mittag-Leffler function (see [93, Proposition 3.6]).

Theorem 2.11. Let n ∈ ℕ.
(i) Let α ∈ (0, 2) and θ ∈ (απ/2, min{π, απ}). Then for z ∈ ℂ such that |Arg(z)| ≤ θ,

    Eα(z) = (1/α) e^{z^{1/α}} − Σ_{k=1}^{n} z^{−k}/Γ(1 − αk) + O(|z|^{−1−n}),   |z| → ∞.

(ii) Let α ∈ (0, 2) and θ ∈ (απ/2, min{π, απ}). Then for z ∈ ℂ such that |Arg(z)| ≥ θ,

    Eα(z) = −Σ_{k=1}^{n} z^{−k}/Γ(1 − αk) + O(|z|^{−1−n}),   |z| → ∞.

(iii) Let α ≥ 2, and let A(z) = {k ∈ ℤ : |Arg(z) + 2πk| ≤ πα/2}. Then for z ∈ ℂ,

    Eα(z) = (1/α) Σ_{k∈A(z)} e^{z^{1/α} e^{2πki/α}} − Σ_{k=1}^{n} z^{−k}/Γ(1 − αk) + O(|z|^{−1−n}),   |z| → ∞.

Remark 2.12. Items (i) and (ii) of Theorem 2.11 seem to be contradictory. However, if we fix θ ∈ (απ/2, min{π, απ}) and consider any z ∈ ℂ with Arg(z) = θ, then we can write z = |z|e^{iθ}. Hence, for any n ∈ ℕ,

    |e^{z^{1/α}}| = |e^{|z|^{1/α} e^{iθ/α}}| = e^{|z|^{1/α} cos(θ/α)} = O(|z|^{−1−n}),

since cos(θ/α) < 0. Hence, for Arg(z) = θ, the exponential term in item (i) can be absorbed into the term O(|z|^{−1−n}), yielding the statement of item (ii).

Such a result can be refined to handle two-parameter Mittag-Leffler functions (see [93, Theorems 4.3 and 4.4]).


Theorem 2.13. Let β ∈ ℂ and n ∈ ℕ.
(i) For α ∈ (0, 2) and z ∈ ℂ such that |Arg(z)| < min{π, απ}, we have

    E_{α,β}(z) = (1/α) z^{(1−β)/α} e^{z^{1/α}} − Σ_{k=1}^{n} z^{−k}/Γ(β − kα) + O(|z|^{−n−1}),   |z| → ∞.

(ii) For α ∈ (0, 1) and z ∈ ℂ such that απ < |Arg(z)| < π, we have

    E_{α,β}(z) = −Σ_{k=1}^{n} z^{−k}/Γ(β − kα) + O(|z|^{−n−1}),   |z| → ∞.

(iii) Let α ≥ 2 and A(z) = {k ∈ ℤ : |Arg(z) + 2πk| ≤ 3πα/4}. Then, as |z| → ∞,

    E_{α,β}(z) = (1/α) Σ_{k∈A(z)} (z^{1/α} e^{2πki/α})^{1−β} e^{z^{1/α} e^{2πki/α}} − Σ_{k=1}^{n} z^{−k}/Γ(β − kα) + O(|z|^{−n−1}).
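The leading-order behavior in item (i) of Theorems 2.11 and 2.13 is easy to observe numerically on the positive real axis (a sketch; α and z are arbitrary choices of ours): for α = 3/2 and moderately large real z, the truncated series for Eα(z) is already very close to (1/α) e^{z^{1/α}}.

```python
import math

def mittag_leffler(alpha, z, terms=100):
    # Truncated series (2.3)
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

alpha, z = 1.5, 20.0
series = mittag_leffler(alpha, z)
leading = math.exp(z**(1 / alpha)) / alpha  # (1/alpha) * e^{z^{1/alpha}}
print(series, leading)
```

The first correction term, −z^{−1}/Γ(1 − α), is of order 10^{−2} here, so the relative gap between the two printed values is tiny.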

Having the asymptotic results presented in Theorems 2.11 and 2.13, we are able to provide some exponential upper bounds for real z ≥ 0.

Corollary 2.14. For any α, β > 0, there exists a constant C_{α,β} > 0 such that

    t^{(β−1)/α} E_{α,β}(t) ≤ C_{α,β} e^{t^{1/α}},   t ≥ 1.   (2.10)

Proof. Let us first consider the case α ∈ (0, 2). Item (i) of Theorem 2.13 tells us that there exist a continuous function ℛ_{α,β} : ℝ+ → ℝ and a constant C^{(1)}_{α,β} > 0 such that for all t ≥ 1, t|ℛ_{α,β}(t)| ≤ C^{(1)}_{α,β} and

    E_{α,β}(t) = (1/α) t^{(1−β)/α} e^{t^{1/α}} + ℛ_{α,β}(t).

In particular, for all t ≥ 1,

    t^{(β−1)/α} E_{α,β}(t) = e^{t^{1/α}} (1/α + e^{−t^{1/α}} t^{(β−1)/α} ℛ_{α,β}(t))
    ≤ e^{t^{1/α}} (1/α + C^{(1)}_{α,β} e^{−t^{1/α}} t^{(β−1−α)/α}) ≤ C_{α,β} e^{t^{1/α}},

where

    C_{α,β} := 1/α + C^{(1)}_{α,β} sup_{t≥1} (e^{−t^{1/α}} t^{(β−1−α)/α}).

Now let us consider the case α ≥ 2. Item (iii) of Theorem 2.13 tells us that there exist a continuous function ℛ_{α,β} : ℝ+0 → ℂ and a constant C^{(2)}_{α,β} > 0 such that t|ℛ_{α,β}(t)| ≤ C^{(2)}_{α,β} and for all t ≥ 1,

    E_{α,β}(t) = (1/α) t^{(1−β)/α} Σ_{k∈A(t)} e^{t^{1/α} e^{2πki/α}} + ℛ_{α,β}(t).

First, let us identify the set A(t). Since Arg(t) = 0, we know that k ∈ A(t) if and only if 2π|k| ≤ 3πα/4, which happens if and only if |k| ≤ ⌊3α/8⌋, k ∈ ℤ. Therefore

    E_{α,β}(t) = (1/α) t^{(1−β)/α} Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} e^{2πki/α}} + ℛ_{α,β}(t).   (2.11)

To handle the summation, we use the identity

    e^{2πki/α} = cos(2πk/α) + i sin(2πk/α),

which implies that

    Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} e^{2πki/α}} = Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α) + i t^{1/α} sin(2πk/α)}
    = Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} e^{i t^{1/α} sin(2πk/α)}.   (2.12)

Taking into account the identity

    e^{i t^{1/α} sin(2πk/α)} = cos(t^{1/α} sin(2πk/α)) + i sin(t^{1/α} sin(2πk/α)),

we rewrite (2.12) as

    Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} e^{2πki/α}} = Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α))
    + i Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} sin(t^{1/α} sin(2πk/α)) = S1 + iS2.   (2.13)

To handle S1, note that

    S1 = Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + Σ_{k=−⌊3α/8⌋}^{−1} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + e^{t^{1/α}}
    = 2 Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + e^{t^{1/α}},   (2.14)

where we used the fact that cosine is an even function. To evaluate S2, note that

    S2 = Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} sin(t^{1/α} sin(2πk/α)) + Σ_{k=−⌊3α/8⌋}^{−1} e^{t^{1/α} cos(2πk/α)} sin(t^{1/α} sin(2πk/α))
    = Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} sin(t^{1/α} sin(2πk/α)) − Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} sin(t^{1/α} sin(2πk/α)) = 0,   (2.15)

where we used the fact that sine is an odd function. Combining (2.14) and (2.15) with (2.13), we get the equality

    Σ_{k=−⌊3α/8⌋}^{⌊3α/8⌋} e^{t^{1/α} e^{2πki/α}} = 2 Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + e^{t^{1/α}}.   (2.16)

Hence we can rewrite (2.11) as

    E_{α,β}(t) = (2/α) t^{(1−β)/α} Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + (1/α) t^{(1−β)/α} e^{t^{1/α}} + ℛ_{α,β}(t).

Once we note that for all t > 0,

    ℛ_{α,β}(t) = E_{α,β}(t) − (2/α) t^{(1−β)/α} Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) − (1/α) t^{(1−β)/α} e^{t^{1/α}},

we see that ℛ_{α,β}(t) ∈ ℝ for all t > 0. Furthermore,

    t^{(β−1)/α} E_{α,β}(t) = (2/α) Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} cos(t^{1/α} sin(2πk/α)) + (1/α) e^{t^{1/α}} + t^{(β−1)/α} ℛ_{α,β}(t)
    ≤ (2/α) Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α} cos(2πk/α)} + (1/α) e^{t^{1/α}} + t^{(β−1)/α} ℛ_{α,β}(t)
    ≤ (2/α) Σ_{k=1}^{⌊3α/8⌋} e^{t^{1/α}} + (1/α) e^{t^{1/α}} + t^{(β−1)/α} ℛ_{α,β}(t)
    = (1/α)(2⌊3α/8⌋ + 1) e^{t^{1/α}} + t^{(β−1)/α} ℛ_{α,β}(t)
    ≤ (1/α)(2⌊3α/8⌋ + 1) e^{t^{1/α}} + C^{(2)}_{α,β} t^{(β−1−α)/α}
    = ((1/α)(2⌊3α/8⌋ + 1) + e^{−t^{1/α}} t^{(β−1−α)/α} C^{(2)}_{α,β}) e^{t^{1/α}} ≤ C_{α,β} e^{t^{1/α}},

where

    C_{α,β} := (1/α)(2⌊3α/8⌋ + 1) + C^{(2)}_{α,β} sup_{t≥1} e^{−t^{1/α}} t^{(β−1−α)/α}.

Remark 2.15. Setting β = 1, we get that for all t ≥ 0,

    Eα(t) ≤ Cα e^{t^{1/α}},

where Cα := max{C_{α,1}, max_{t∈[0,1]} Eα(t) e^{−t^{1/α}}}.

We can also prove the following comparison result for two-parameter Mittag-Leffler functions.

Proposition 2.16. Let α > 0 and 0 < β1 ≤ β2. Then for all t > 0,

    Γ(β2) E_{α,β2}(t) ≤ Γ(β1) E_{α,β1}(t).

Proof. By the definition of E_{α,βi} we have

    Γ(β2) E_{α,β2}(t) = Σ_{k=0}^{+∞} (Γ(β2)/Γ(αk + β2)) t^k ≤ Σ_{k=0}^{+∞} (Γ(β1)/Γ(αk + β1)) t^k = Γ(β1) E_{α,β1}(t),

where we used the fact that for each k ≥ 0, the function Γ(⋅)/Γ(αk + ⋅) is decreasing (see Proposition B.5).

Remark 2.17. Such a comparison result has been used in the case β ∈ ℕ to provide the asymptotic behavior of β ∈ ℕ ↦ Γ(β)E_{α,β}(t) for fixed α, t > 0 in [197].
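Both estimates above can be probed numerically (a sketch; the parameter values and the constant in the second check are arbitrary choices of ours): the map β ↦ Γ(β) E_{α,β}(t) should be nonincreasing (Proposition 2.16), and Eα(t) should stay below a constant multiple of e^{t^{1/α}} on the real axis (Remark 2.15).

```python
import math

def ml2(alpha, beta, t, terms=120):
    # Truncated two-parameter Mittag-Leffler series (2.4)
    return sum(t**k / math.gamma(alpha * k + beta) for k in range(terms))

alpha, t = 0.7, 2.0
vals = [math.gamma(b) * ml2(alpha, b, t) for b in (0.5, 1.0, 1.5, 2.0)]
# Proposition 2.16: the list is nonincreasing in beta
print(vals)
# Remark 2.15-type bound, with 2 >= 1/alpha playing the role of C_alpha here:
print(ml2(alpha, 1.0, t), 2 * math.exp(t**(1 / alpha)))
```

Note that Eα(t) itself exceeds e^{t^{1/α}} for these parameters (the leading constant in Theorem 2.13(i) is 1/α > 1), which is why the constant Cα in Remark 2.15 cannot be dropped.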

2.3 Fractional differential equations with Riemann–Liouville fractional derivatives

Consider fractional differential equations of the form

    f(t, y(t), (⋅D^{α1}_{a+} y)(t), . . . , (⋅D^{αN}_{a+} y)(t)) = 0,   t ∈ (a, b],   (2.17)

where N ∈ ℕ, 0 < α1 < ⋅⋅⋅ < αN, f : (a, b] × ℝ^{N+1} → ℝ is a measurable function, ⋅D^{αj}_{a+}, j = 1, . . . , N, can be either the Riemann–Liouville or the Dzhrbashyan–Caputo fractional derivative, and y ∈ L^1(a, b) is an unknown function. The value αN is called the order of the equation. If N > 1, then (2.17) is usually called a multiterm equation. However, let us focus on the single-term case, i. e., N = 1. Furthermore, if N = 1, then we say that (2.17) is in normal form if it can be rewritten as

    (⋅D^α_{a+} y)(t) = F(t, y(t)),   t ∈ (a, b],   (2.18)

where α := α1 > 0, and F : (a, b] × ℝ → ℝ is a suitable measurable function. We focus on equations of the form (2.18). Without loss of generality, we can set a = 0. Recall also


that we use the shorthand notation ℝ+ = (0, +∞) and ℝ+0 = [0, +∞). It is clear that for such an equation to have a unique solution, it must be considered together with a set of suitable initial data.

Let us first consider the case in which D^α is the Riemann–Liouville derivative. We equip (2.18) with a set of initial data. To do this, denote n = ⌊α⌋ + 1. As in the classical case, we expect the initial data to involve derivatives of order smaller than that of the equation. For instance, for a Cauchy problem for a fourth-order ordinary differential equation, we need initial data on the third, second, first, and zeroth derivatives (the latter is the function itself). Hence a first natural choice of initial data involves D^{α−k}_{0+} y(0) for k = 1, . . . , n − 1 if n > 1. However, even under strong assumptions on the function F, the Cauchy problem

    D^α_{0+} y(t) = F(t, y(t)),   t > 0,
    D^{α−k}_{0+} y(0) = d_k,   k = 1, . . . , n − 1,   (2.19)

for d1, . . . , d_{n−1} ∈ ℝ can admit an infinite number of solutions (see item (i) of Exercise 2.9). Hence a further condition must be imposed. Since α < n, we cannot consider D^{α−n}_{0+} y(0). Instead, we could consider, for instance, the Cauchy problem of the form

    D^α_{0+} y(t) = F(t, y(t)),   t > 0,
    D^{α−k}_{0+} y(0) = d_k,   k = 1, . . . , n − 1,
    y(0) = d_n,   (2.20)

in which the additional condition concerns the initial value of the unknown function, and if n = 1, then we omit the second equation in (2.20). However, even under strong assumptions on F, (2.20) may not even admit a solution (see item (ii) of Exercise 2.9). Hence we need to consider a different additional initial condition. To do this, we could understand the negative-order fractional derivative D^{α−n}_{0+} y as the fractional integral I^{n−α}_{0+} y. Thus we would get a Cauchy problem of the form

    D^α_{0+} y(t) = F(t, y(t)),   t > 0,
    D^{α−k}_{0+} y(0) = d_k,   k = 1, . . . , n − 1,
    I^{n−α}_{0+} y(0) = d_n,   (2.21)

where F : ℝ+0 × ℝ → ℝ is a measurable function, d1, . . . , d_n ∈ ℝ, and if n = 1, then we do not need the second equation in (2.21). We will prove that, under suitable assumptions on F, the Cauchy problem (2.21) has a unique solution.

First, we must establish what we mean by a solution. Let γ ∈ [0, 1), −∞ < a < b < +∞, and w_{a,γ}(t) = (t − a)^γ. Define the weighted 𝒞-space

    𝒞([a, b]; w_{a,γ}) := {y : [a, b] → ℝ : w_{a,γ} y ∈ 𝒞[a, b]},   (2.22)

which is a Banach space once it is equipped with the norm ‖y‖_{𝒞([a,b];w_{a,γ})} = ‖w_{a,γ} y‖_{𝒞[a,b]}. Clearly, 𝒞[a, b] ⊂ 𝒞([a, b]; w_{a,γ}) because w_{a,γ} ∈ 𝒞[a, b]. Moreover, we can characterize 𝒞([a, b]; w_{a,γ}) as follows:

    𝒞([a, b]; w_{a,γ}) = {y ∈ 𝒞(a, b] : lim_{t↓a} y(t)(t − a)^γ = ℓ ∈ ℝ}.

In fact, a function y ∈ 𝒞([a, b]; w_{a,γ}) is a function that is continuous in (a, b] and may have a singularity at the point a of order at most γ. We say that y ∈ 𝒞(ℝ+0; w_{0,γ}) if y ∈ 𝒞([0, T]; w_{0,γ}) for all T > 0.

Definition 2.18. Set α > 0 and n = ⌊α⌋ + 1. We say that y is a solution of (2.21) on the interval [0, T] if it satisfies the following assumptions:
(a) the functions y and D^α_{0+} y belong to 𝒞([0, T]; w_{0,n−α});
(b) the equalities in (2.21) hold for all t ∈ [0, T].
In such a case, we also refer to y as a local solution. On the other hand, we say that y is a global solution of (2.21) if
(c) the functions y and D^α_{0+} y belong to 𝒞(ℝ+0; w_{0,n−α}) and
(d) the equalities in (2.21) hold for any t ≥ 0.

Let us first investigate how the fractional integrals act on the spaces 𝒞([a, b]; w_{a,γ}). The following result was first shown in [117, Lemma 2].

Lemma 2.19. Let γ ∈ [0, 1), −∞ < a < b < +∞, and α > 0.
(i) If γ ≤ α, then the operator I^α_{a+} : 𝒞([a, b]; w_{a,γ}) → 𝒞[a, b] is bounded, and for every f ∈ 𝒞([a, b]; w_{a,γ}),

    ‖I^α_{a+} f‖_{𝒞[a,b]} ≤ (B(α, 1 − γ)(b − a)^{α−γ} / Γ(α)) ‖f‖_{𝒞([a,b];w_{a,γ})}.

(ii) If γ > α, then the operator I^α_{a+} : 𝒞([a, b]; w_{a,γ}) → 𝒞([a, b]; w_{a,γ−α}) is bounded, and for every f ∈ 𝒞([a, b]; w_{a,γ}),

    ‖I^α_{a+} f‖_{𝒞([a,b];w_{a,γ−α})} ≤ (B(α, 1 − γ) / Γ(α)) ‖f‖_{𝒞([a,b];w_{a,γ})}.

(iii) If γ ∈ [0, 1), then the operator I^α_{a+} : 𝒞([a, b]; w_{a,γ}) → 𝒞([a, b]; w_{a,γ}) is bounded, and for every f ∈ 𝒞([a, b]; w_{a,γ}),

    ‖I^α_{a+} f‖_{𝒞([a,b];w_{a,γ})} ≤ ((b − a)^α B(α, 1 − γ) / Γ(α)) ‖f‖_{𝒞([a,b];w_{a,γ})}.




Proof. To prove property (i), first, note that if f ∈ 𝒞 ([a, b]; wa,γ ), then for all t ∈ [a, b], 󵄨󵄨 󵄨 −γ 󵄨󵄨f (t)󵄨󵄨󵄨 ≤ C(t − a)

(2.23)

for some constant C > 0. Therefore f ∈ L1 (a, b). If α ≥ 1, then we can use Lemma 1.17 to α guarantee that Ia+ f ∈ 𝒞 [a, b]. If α ∈ (0, 1), then consider any t0 ∈ (a, b) and h > 0 such that t0 + h ∈ (a, b]. Then 󵄨󵄨 t0 +h 󵄨󵄨 t0 󵄨 󵄨󵄨 1 󵄨󵄨󵄨 󵄨 󵄨󵄨 α 󵄨󵄨 α α−1 α−1 󵄨󵄨 ∫ (t0 + h − τ) f (τ)dt − ∫(t0 − τ) f (τ)dt 󵄨󵄨󵄨 󵄨󵄨(Ia+ f ) (t0 + h) − (Ia+ f ) (t0 )󵄨󵄨 = 󵄨󵄨 Γ(α) 󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 a a 󵄨 t0 +h

t0

t0

a

1 󵄨 󵄨󵄨 󵄨 󵄨 󵄨 ≤ ( ∫ (t0 + h − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dt + ∫ 󵄨󵄨󵄨󵄨(t0 + h − τ)α−1 − (t0 − τ)α−1 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dt) Γ(α) =:

1 (I + I ). Γ(α) 1 2

(2.24)

To handle I1 , we use (2.23) and conclude that t0 +h

I1 ≤ C ∫ (t0 + h − τ)α−1 (τ − a)−γ dτ ≤ t0

Chα (t − a)−γ . α 0

(2.25)

Furthermore, due to (1.13) and (2.23), we know that t0 t0 +h

t0 +h t0

I2 ≤ C(1 − α) ∫ ∫ (u − τ)α−2 (τ − a)−γ du dτ = C(1 − α) ∫ ∫(u − τ)α−2 (τ − a)1−γ−1 dτ du a

t0

t0

a

t0 +h

t −a Γ(1 − γ) = C(1 − α) (t − a)1−γ ∫ (u − a)α−2 2 F1 (2 − α, 1 − γ; 2 − γ; 0 ) du, Γ(2 − γ) 0 u−a t0

where the last integral is calculated in Exercise 1.2. Observing that 1 − α > 0, we get from Theorem B.10 the following upper bound: 2 F1 (2

− α, 1 − γ; 2 − γ;

t0 − a u − t0 1−α )≤C( ) . u−a u−a

Therefore I2 ≤ C(1 − α)

t0 +h

Γ(1 − γ) Γ(1 − γ) (t − a)1−γ ∫ (u − a)−1 (u − t0 )1−α du ≤ C (t − a)−γ h2−α . Γ(2 − γ) 0 Γ(2 − γ) 0 t0

(2.26)

134 � 2 Integral and differential equations involving fractional operators Plugging (2.25) and (2.26) into (2.24) and taking the limit as h ↓ 0, we conclude that 󵄨 α 󵄨 α lim 󵄨󵄨󵄨(Ia+ f ) (t0 + h) − (Ia+ f ) (t0 )󵄨󵄨󵄨 = 0. h↓0

α Similar arguments work for h < 0, and we obtain that Ia+ f is continuous at any t0 ∈ (a, b]. α To prove that Ia+ f is continuous at point a, let us first consider the case α > γ. Again, fix h > 0 such that a + h ∈ (a, b] and note that

󵄨󵄨 a+h 󵄨󵄨 a+h 󵄨󵄨 1 󵄨󵄨󵄨󵄨 1 󵄨󵄨 α 󵄨󵄨 󵄨󵄨 󵄨 󵄨 α−1 ∫ (a + h − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ 󵄨󵄨(Ia+ f ) (a + h)󵄨󵄨 = 󵄨 ∫ (a + h − τ) f (τ)dτ 󵄨󵄨 ≤ 󵄨󵄨 Γ(α) Γ(α) 󵄨󵄨󵄨󵄨 󵄨󵄨 a 󵄨a a+h



B(α, 1 − γ) α−γ C h → 0. ∫ (a + h − τ)α−1 (τ − a)−γ dτ = C Γ(α) Γ(α)

(2.27)

a

If α = γ, put ℓ = limt↓a (t − a)α f (t) and note that Lemma 1.11 supplies the relations 󵄨󵄨 α 󵄨 󵄨󵄨(Ia+ f ) (a + h) − Γ(1 − α)ℓ󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 a+h 󵄨󵄨 1 a+h 󵄨󵄨 1 󵄨󵄨 󵄨 α−1 α−1 −α = 󵄨󵄨 ∫ (a + h − τ) f (τ)dτ − ∫ (a + h − τ) (τ − a) ℓdτ 󵄨󵄨󵄨 󵄨󵄨 Γ(α) 󵄨󵄨 Γ(α) 󵄨󵄨 󵄨󵄨 a a a+h

1 󵄨 󵄨 ≤ ∫ (a + h − τ)α−1 (τ − a)−α 󵄨󵄨󵄨(τ − a)α f (τ) − ℓ󵄨󵄨󵄨 dτ Γ(α) a

󵄨 󵄨 ≤ max 󵄨󵄨󵄨(τ − a)α f (τ) − ℓ󵄨󵄨󵄨 Γ(1 − α). τ∈[0,h] Taking the limit, we obtain 󵄨 α 󵄨 lim 󵄨󵄨󵄨(Ia+ f ) (a + h) − Γ(1 − α)ℓ󵄨󵄨󵄨 = 0, h↓0

α and it is possible to state that Ia+ f ∈ 𝒞 [a, b]. Finally, note that

󵄨󵄨 α 󵄨 󵄨󵄨(Ia+ f ) (t)󵄨󵄨󵄨 ≤

t

t

1 1 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ = ∫(t − τ)α−1 (τ − a)−γ (τ − a)γ 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ Γ(α) Γ(α) a

≤ ‖f ‖𝒞([a,b];wa,γ )

t

a

1 ∫(t − τ)α−1 (τ − a)−γ dτ Γ(α) a

B(α, 1 − γ) B(α, 1 − γ)(b − a)α−γ = ‖f ‖𝒞([a,b];wa,γ ) (t − a)α−γ ≤ ‖f ‖𝒞([a,b];wa,γ ) . Γ(α) Γ(α) Taking the supremum, we conclude the proof of property (i). Now let us prove (ii). In this case, 1 > γ > α > 0, and therefore α ∈ (0, 1). Let us α first show that wa,γ−α Ia+ f belongs to 𝒞 [a, b]. By the same argument as before, we prove

2.3 Fractional differential equations with Riemann–Liouville fractional derivatives | 135

that I_{a+}^α f is continuous at all points t₀ ∈ (a, b]. We only need to show that w_{a,γ−α} I_{a+}^α f is continuous at the point a. To do this, we put ℓ = lim_{t↓a}(t − a)^γ f(t) and take into account Lemma 1.11, which supplies the relations

$$\begin{aligned}
\biggl|h^{\gamma-\alpha}\bigl(I_{a+}^{\alpha}f\bigr)(a+h)-\frac{B(\alpha,1-\gamma)}{\Gamma(\alpha)}\,\ell\biggr|
&=\biggl|\frac{h^{\gamma-\alpha}}{\Gamma(\alpha)}\int_a^{a+h}(a+h-\tau)^{\alpha-1}f(\tau)\,d\tau-\frac{h^{\gamma-\alpha}}{\Gamma(\alpha)}\int_a^{a+h}(a+h-\tau)^{\alpha-1}(\tau-a)^{-\gamma}\ell\,d\tau\biggr|\\
&\le\frac{h^{\gamma-\alpha}}{\Gamma(\alpha)}\int_a^{a+h}(a+h-\tau)^{\alpha-1}(\tau-a)^{-\gamma}\bigl|(\tau-a)^{\gamma}f(\tau)-\ell\bigr|\,d\tau\\
&\le\max_{\tau\in[a,a+h]}\bigl|(\tau-a)^{\gamma}f(\tau)-\ell\bigr|\,\frac{B(\alpha,1-\gamma)}{\Gamma(\alpha)}.
\end{aligned}$$

Taking the limit as h ↓ 0, we obtain that

$$\lim_{h\downarrow 0}\biggl|h^{\gamma-\alpha}\bigl(I_{a+}^{\alpha}f\bigr)(a+h)-\frac{B(\alpha,1-\gamma)}{\Gamma(\alpha)}\,\ell\biggr|=0,$$

and therefore w_{a,γ−α} I_{a+}^α f ∈ 𝒞[a, b]. Note that

$$\begin{aligned}
\bigl|(I_{a+}^{\alpha}f)(t)\bigr|&\le\frac{1}{\Gamma(\alpha)}\int_a^{t}(t-\tau)^{\alpha-1}\bigl|f(\tau)\bigr|\,d\tau=\frac{1}{\Gamma(\alpha)}\int_a^{t}(t-\tau)^{\alpha-1}(\tau-a)^{-\gamma}(\tau-a)^{\gamma}\bigl|f(\tau)\bigr|\,d\tau\\
&\le\frac{\|f\|_{\mathcal{C}([a,b];w_{a,\gamma})}}{\Gamma(\alpha)}\int_a^{t}(t-\tau)^{\alpha-1}(\tau-a)^{-\gamma}\,d\tau=\frac{\|f\|_{\mathcal{C}([a,b];w_{a,\gamma})}\,B(\alpha,1-\gamma)}{\Gamma(\alpha)(t-a)^{\gamma-\alpha}}.
\end{aligned}$$

Therefore

$$(t-a)^{\gamma-\alpha}\bigl|(I_{a+}^{\alpha}f)(t)\bigr|\le\|f\|_{\mathcal{C}([a,b];w_{a,\gamma})}\frac{B(\alpha,1-\gamma)}{\Gamma(\alpha)},$$

which proves (ii), provided that we take the supremum over t ∈ [a, b].

Finally, we prove (iii) by distinguishing two cases. Obviously, (t − a)^γ ≤ (b − a)^γ. Therefore, if γ ≤ α, then for every f ∈ 𝒞([a, b]; w_{a,γ}), it follows from (i) that

$$\bigl\|I_{a+}^{\alpha}f\bigr\|_{\mathcal{C}([a,b];w_{a,\gamma})}\le(b-a)^{\gamma}\bigl\|I_{a+}^{\alpha}f\bigr\|_{\mathcal{C}[a,b]}\le\frac{(b-a)^{\alpha}B(\alpha,1-\gamma)}{\Gamma(\alpha)}\|f\|_{\mathcal{C}([a,b];w_{a,\gamma})}.$$

Otherwise, if γ > α, then we use the fact that (t − a)^γ ≤ (t − a)^{γ−α}(b − a)^α together with (ii) to conclude that

$$\bigl\|I_{a+}^{\alpha}f\bigr\|_{\mathcal{C}([a,b];w_{a,\gamma})}\le(b-a)^{\alpha}\bigl\|I_{a+}^{\alpha}f\bigr\|_{\mathcal{C}([a,b];w_{a,\gamma-\alpha})}\le\frac{(b-a)^{\alpha}B(\alpha,1-\gamma)}{\Gamma(\alpha)}\|f\|_{\mathcal{C}([a,b];w_{a,\gamma})}.$$
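As a quick numerical sanity check of Lemma 2.19 (not from the book's text): for the model function f(τ) = τ^{−γ} with a = 0, the estimate used in the proof is in fact an identity, (I^α_{0+} f)(t) = B(α, 1−γ)/Γ(α) · t^{α−γ}. A minimal Python sketch, where the grid size and the tolerance are arbitrary choices:

```python
import math

alpha, gam, t = 0.6, 0.3, 0.8

def rl_integral(alpha, f, t, n=100_000):
    """Midpoint-rule approximation of (I^alpha_{0+} f)(t)."""
    h = t / n
    acc = sum((t - (k + 0.5) * h) ** (alpha - 1) * f((k + 0.5) * h) for k in range(n))
    return acc * h / math.gamma(alpha)

# B(alpha, 1-gamma) expressed through Gamma functions
beta = math.gamma(alpha) * math.gamma(1 - gam) / math.gamma(alpha + 1 - gam)
exact = beta / math.gamma(alpha) * t ** (alpha - gam)
approx = rl_integral(alpha, lambda tau: tau ** (-gam), t)
print(approx, exact)   # the two values agree up to the quadrature error
```

The midpoint rule avoids evaluating either singular factor at its endpoint, so the crude quadrature is enough to confirm the constant.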

Let us now return to the Cauchy problem (2.21). To deal with it, we need to transform it into a (possibly nonlinear) Volterra integral equation. In turn, to do this, we use the following result.

Proposition 2.20. Let T > 0, α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1, and let y ∈ 𝒞([0, T]; w_{0,n−α}) be such that the map t ∈ (0, T] ↦ F(t, y(t)) ∈ ℝ belongs to 𝒞([0, T]; w_{0,n−α}). Then y is a local solution of (2.21) if and only if it satisfies

$$y(t)=\sum_{j=1}^{n}\frac{d_j}{\Gamma(\alpha-j+1)}t^{\alpha-j}+\frac{1}{\Gamma(\alpha)}\int_0^t\frac{F(\tau,y(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau\tag{2.28}$$

for all t ∈ (0, T].

Proof. Assume that y ∈ 𝒞([0, T]; w_{0,n−α}) is a solution of (2.21). Note that

$$\bigl(D_{0+}^{\alpha}y\bigr)(t)=\Bigl(\frac{d^{n}}{dt^{n}}\bigl(I_{0+}^{n-\alpha}y\bigr)\Bigr)(t).$$

We would like to apply the operator I_{0+}^α to both sides of this relation. Setting y_{n−α}(t) := (I_{0+}^{n−α} y)(t) and

$$p_n(t)=\sum_{k=0}^{n-1}\frac{y_{n-\alpha}^{(k)}(0)}{k!}t^{k},$$

we get from the definition of the fractional derivative (1.78) that (D_{0+}^α y)(t) = y_{n−α}^{(n)}(t). Hence

$$\bigl(I_{0+}^{\alpha}(D_{0+}^{\alpha}y)\bigr)(t)=\bigl(I_{0+}^{\alpha}\bigl(y_{n-\alpha}^{(n)}\bigr)\bigr)(t)=\bigl(D_{0+}^{n-\alpha}\bigl(I_{0+}^{n-\alpha}\bigl(I_{0+}^{\alpha}y_{n-\alpha}^{(n)}\bigr)\bigr)\bigr)(t),\tag{2.29}$$

where we also used the fact that D_{0+}^{n−α} I_{0+}^{n−α} is the identity operator. Next, by the semigroup property of the fractional integral (see Theorem 1.12) we can rewrite (2.29) as

$$\bigl(I_{0+}^{\alpha}(D_{0+}^{\alpha}y)\bigr)(t)=\bigl(D_{0+}^{n-\alpha}\bigl(I_{0+}^{n}\bigl(y_{n-\alpha}^{(n)}\bigr)\bigr)\bigr)(t).\tag{2.30}$$

Integrating by parts n times, we get

$$\bigl(I_{0+}^{n}\bigl(y_{n-\alpha}^{(n)}\bigr)\bigr)(t)=y_{n-\alpha}(t)-p_n(t).$$

Applying the previous identity to (2.30), we obtain the equalities

$$\bigl(I_{0+}^{\alpha}(D_{0+}^{\alpha}y)\bigr)(t)=\bigl(D_{0+}^{n-\alpha}(y_{n-\alpha}-p_n)\bigr)(t)=\bigl(D_{0+}^{n-\alpha}y_{n-\alpha}\bigr)(t)-\bigl(D_{0+}^{n-\alpha}p_n\bigr)(t).\tag{2.31}$$

Since D_{0+}^{n−α} is the left-inverse of I_{0+}^{n−α}, we get the equality D_{0+}^{n−α} y_{n−α} = y. Furthermore, by linearity,

$$\bigl(D_{0+}^{n-\alpha}p_n\bigr)(t)=\sum_{k=0}^{n-1}\frac{y_{n-\alpha}^{(k)}(0)}{\Gamma(1-n+\alpha+k)}t^{k-n+\alpha}=\sum_{k=1}^{n}\frac{y_{n-\alpha}^{(n-k)}(0)}{\Gamma(1+\alpha-k)}t^{\alpha-k}.$$

Now let us evaluate y_{n−α}^{(n−k)}(0) for k = 1, …, n. Clearly, for k = n, we have y_{n−α}(0) = d_n, whereas for k = 1, …, n − 1,

$$y_{n-\alpha}^{(n-k)}(0)=\Bigl(\frac{d^{n-k}}{dt^{n-k}}\bigl(I_{0+}^{n-\alpha}y\bigr)\Bigr)(0)=\Bigl(\frac{d^{n-k}}{dt^{n-k}}\bigl(I_{0+}^{n-k-(\alpha-k)}y\bigr)\Bigr)(0)=\bigl(D_{0+}^{\alpha-k}y\bigr)(0)=d_k.$$

Therefore, taking (2.31) into account, we conclude that

$$\bigl(I_{0+}^{\alpha}(D_{0+}^{\alpha}y)\bigr)(t)=y(t)-\sum_{k=1}^{n}\frac{d_k}{\Gamma(1+\alpha-k)}t^{\alpha-k},\quad t\in[0,T].\tag{2.32}$$

However, by (2.21) we know that

$$\bigl(I_{0+}^{\alpha}(D_{0+}^{\alpha}y)\bigr)(t)=\bigl(I_{0+}^{\alpha}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T].\tag{2.33}$$

Combining (2.32) and (2.33), we get (2.28).

Vice versa, let us assume that a function y ∈ 𝒞([0, T]; w_{0,n−α}) satisfies (2.28). It can be rewritten as

$$y(t)=\sum_{j=1}^{n}\frac{d_j}{\Gamma(\alpha-j+1)}t^{\alpha-j}+\bigl(I_{0+}^{\alpha}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T].\tag{2.34}$$

Applying the operator I_{0+}^{n−α} to both sides of this equality and using Lemma 1.11, we arrive at the relation

$$\bigl(I_{0+}^{n-\alpha}y\bigr)(t)=\sum_{j=1}^{n}\frac{d_j}{\Gamma(n-j+1)}t^{n-j}+\bigl(I_{0+}^{n}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T],$$

whence (I_{0+}^{n−α} y)(0) = d_n. Now let us apply the operator D_{0+}^{α−k} for k = 1, …, n − 1 to (2.34) and recall that ⌊α − k⌋ = n − k − 1 and k < α. Then

$$\bigl(D_{0+}^{\alpha-k}y\bigr)(t)=\sum_{j=1}^{k}\frac{d_j}{\Gamma(k-j+1)}t^{k-j}+\bigl(I_{0+}^{k}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T].$$

Hence, taking the limit as t → 0, we get the value (D_{0+}^{α−k} y)(0) = d_k. Applying the operator D_{0+}^α to both sides of (2.34), we obtain that

$$\bigl(D_{0+}^{\alpha}y\bigr)(t)=F(t,y(t)),\quad t\in(0,T].\tag{2.35}$$

Finally, we know that D_{0+}^α y ∈ 𝒞([0, T]; w_{0,n−α}), which concludes the proof.

Remark 2.21. If we drop the assumption D_{0+}^α y ∈ 𝒞([0, T]; w_{0,n−α}), then Proposition 2.20 still holds under the following weaker condition:
– For every y ∈ 𝒞([0, T]; w_{0,n−α}), the function t ∈ (0, T] ↦ F(t, y(t)) belongs to 𝒞([0, T]; w_{0,γ}) for some γ ∈ [0, 1).

Indeed, if γ ≤ α, then it follows from Lemma 2.19 that

$$I_{a+}^{\alpha}F(\cdot,y(\cdot))\in\mathcal{C}[0,T]\subset\mathcal{C}([0,T];w_{0,1-\alpha}).$$

Otherwise, if γ > α, then n = 1, and since γ − α < 1 − α,

$$I_{a+}^{\alpha}F(\cdot,y(\cdot))\in\mathcal{C}([0,T];w_{0,\gamma-\alpha})\subset\mathcal{C}([0,T];w_{0,1-\alpha}).$$

Note that the inclusion between the spaces 𝒞([0, T]; w_{0,γ}) is explicated in Exercise 2.19. In the case under consideration, however, D_{0+}^α y ∈ 𝒞([0, T]; w_{0,γ}) for such γ ∈ [0, 1).

Having an equivalence between the Cauchy problem (2.21) and the (possibly nonlinear) Volterra integral equation (2.28), we can prove a global existence and uniqueness theorem. To do this, we will also make use of the following contraction principle (see [116, Theorem 3.1]).

Theorem 2.22. Let (M, d) be a complete metric space, and let 𝒯 : M → M be a contraction, i. e., there exists L ∈ (0, 1) such that d(𝒯(x₁), 𝒯(x₂)) ≤ L d(x₁, x₂) for all x₁, x₂ ∈ M. Then there exists a unique fixed point x* ∈ M for 𝒯, i. e., such that 𝒯(x*) = x*. Moreover, for any x₀ ∈ M, defining xₙ = 𝒯(xₙ₋₁) for all n ≥ 1, we have lim_{n→+∞} xₙ = x* and

$$d(x_n,x^{*})\le\frac{L^{n}}{1-L}\,d(x_0,x_1),\quad n\ge 1.$$
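Theorem 2.22 is constructive: iterating 𝒯 from any starting point converges to the fixed point at a geometric rate. A minimal Python illustration with the arbitrarily chosen contraction T(x) = cos(x)/2 on ℝ (|T′| ≤ 1/2, so L = 1/2), checking the a priori bound above:

```python
import math

# T(x) = cos(x)/2 is a contraction on R with constant L = 1/2.
T_map = lambda x: math.cos(x) / 2
L = 0.5

x0 = 0.0
x1 = T_map(x0)
traj = [x0]
x = x0
for _ in range(60):
    x = T_map(x)
    traj.append(x)
x_star = x   # fixed point to machine precision after 60 iterations

# a priori error bound from Theorem 2.22: d(x_n, x*) <= L^n/(1-L) * d(x0, x1)
for n in (1, 5, 10):
    assert abs(traj[n] - x_star) <= L ** n / (1 - L) * abs(x1 - x0) + 1e-12
print(x_star)   # the unique solution of x = cos(x)/2
```

The same iteration, run in the weighted norm spaces below, is exactly how the solutions of (2.28) are produced.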

With this result, we can prove the following existence and uniqueness theorem.

Theorem 2.23. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ⁺ × ℝ → ℝ satisfies the following assumptions:
(a) For any T > 0 and any y ∈ 𝒞([0, T]; w_{0,n−α}), the function t ∈ (0, T] ↦ F(t, y(t)) ∈ ℝ belongs to 𝒞([0, T]; w_{0,n−α}).
(b) There exists a constant L > 0 such that for all t > 0 and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L|y_1-y_2|.$$
Then (2.21) has a unique global solution.

Proof. It is known from Proposition 2.20 that, in an equivalent way, we can look for a solution of (2.28) on any interval [0, T]. To do this, let us fix T > 0 and define the following operator: for functions y ∈ 𝒞([0, T]; w_{0,n−α}),

$$(\mathcal{T}y)(t):=\sum_{j=1}^{n}\frac{d_j}{\Gamma(\alpha-j+1)}t^{\alpha-j}+\frac{1}{\Gamma(\alpha)}\int_0^t\frac{F(\tau,y(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau,\quad t\in(0,T].\tag{2.36}$$

To start with, we establish that 𝒯y belongs to 𝒞([0, T]; w_{0,n−α}). Indeed, by (a), if y ∈ 𝒞([0, T]; w_{0,n−α}), then F(⋅, y(⋅)) ∈ 𝒞([0, T]; w_{0,n−α}), and 𝒯y ∈ 𝒞([0, T]; w_{0,n−α}) by

item (iii) of Lemma 2.19. Hence the operator from (2.36) acts as follows:

$$\mathcal{T}:\mathcal{C}([0,T];w_{0,n-\alpha})\to\mathcal{C}([0,T];w_{0,n-\alpha}).$$

Next, let us equip 𝒞([0, T]; w_{0,n−α}) with a norm equivalent to ‖⋅‖_{𝒞([0,T];w_{0,n−α})}. Fix γ > 0, whose value will be specified later, and for y ∈ 𝒞([0, T]; w_{0,n−α}), define the norm

$$\|y\|_{\gamma}:=\sup_{t\in[0,T]}e^{-\gamma t}t^{n-\alpha}\bigl|y(t)\bigr|.$$

Clearly, e^{−γT}‖y‖_{𝒞([0,T];w_{0,n−α})} ≤ ‖y‖_γ ≤ ‖y‖_{𝒞([0,T];w_{0,n−α})} for all y ∈ 𝒞([0, T]; w_{0,n−α}), and thus (𝒞([0, T]; w_{0,n−α}), ‖⋅‖_γ) is a Banach space. We now prove that for each T > 0, there exists γ ≥ 0 such that 𝒯 is a contraction on (𝒞([0, T]; w_{0,n−α}), ‖⋅‖_γ). Note that for all y₁, y₂ ∈ 𝒞([0, T]; w_{0,n−α}), we have the following bounds:

$$\begin{aligned}
\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|
&\le\frac{1}{\Gamma(\alpha)}\int_0^t\frac{|F(\tau,y_1(\tau))-F(\tau,y_2(\tau))|}{(t-\tau)^{1-\alpha}}\,d\tau\le\frac{L}{\Gamma(\alpha)}\int_0^t\frac{|y_1(\tau)-y_2(\tau)|}{(t-\tau)^{1-\alpha}}\,d\tau\\
&=\frac{L}{\Gamma(\alpha)}\int_0^t\frac{e^{\gamma\tau}\tau^{\alpha-n}\,e^{-\gamma\tau}\tau^{n-\alpha}|y_1(\tau)-y_2(\tau)|}{(t-\tau)^{1-\alpha}}\,d\tau\le\frac{L\|y_1-y_2\|_{\gamma}}{\Gamma(\alpha)}\int_0^t e^{\gamma\tau}\tau^{\alpha-n}(t-\tau)^{\alpha-1}\,d\tau.
\end{aligned}$$

Now consider any p ∈ (1, 1/(n−α)), and let q ≥ 1 be its conjugate exponent. Note that if α ≥ 1, then p(α − 1) ≥ 0, whereas if α < 1, then since p < 1/(1−α), we get p(α − 1) > −1. Using the Hölder inequality and Lemma 1.11, we get the following bounds:

$$\begin{aligned}
\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|
&\le\frac{L\|y_1-y_2\|_{\gamma}}{\Gamma(\alpha)}\Bigl(\int_0^t e^{q\gamma\tau}\,d\tau\Bigr)^{1/q}\Bigl(\int_0^t\tau^{p(\alpha-n)}(t-\tau)^{p(\alpha-1)}\,d\tau\Bigr)^{1/p}\\
&\le\frac{L\|y_1-y_2\|_{\gamma}}{\Gamma(\alpha)}\Bigl(\frac{e^{q\gamma t}-1}{q\gamma}\Bigr)^{1/q}\Bigl(\frac{\Gamma(p(\alpha-n)+1)}{\Gamma(p(2\alpha-n-1)+2)}\,t^{p(2\alpha-n-1)+1}\Bigr)^{1/p}\\
&\le\frac{e^{\gamma t}L\|y_1-y_2\|_{\gamma}}{(q\gamma)^{1/q}\Gamma(\alpha)}\Bigl(\frac{\Gamma(p(\alpha-n)+1)}{\Gamma(p(2\alpha-n-1)+2)}\Bigr)^{1/p}t^{2\alpha-n-1+\frac{1}{p}}.
\end{aligned}$$

Let us multiply both sides of the previous inequality by e^{−γt} t^{n−α} and get that

$$\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|e^{-\gamma t}t^{n-\alpha}\le\frac{L\|y_1-y_2\|_{\gamma}}{\Gamma(\alpha)(q\gamma)^{1/q}}\Bigl(\frac{\Gamma(p(\alpha-n)+1)}{\Gamma(p(2\alpha-n-1)+2)}\Bigr)^{1/p}t^{\alpha-1+\frac{1}{p}}.$$

Now let us distinguish two cases. If α ≥ 1, then clearly α − 1 + 1/p > 0. If α ∈ (0, 1), then n = 1, and we consider p ∈ (1, 1/(1−α)), and therefore, as was mentioned above, p(α − 1) > −1,

and thus α − 1 + 1/p > 0. In any case, we get that

$$\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|e^{-\gamma t}t^{n-\alpha}\le\frac{L\|y_1-y_2\|_{\gamma}}{\Gamma(\alpha)(q\gamma)^{1/q}}\Bigl(\frac{\Gamma(p(\alpha-n)+1)}{\Gamma(p(2\alpha-n-1)+2)}\Bigr)^{1/p}T^{\alpha-1+\frac{1}{p}},$$

and taking the supremum in t ∈ [0, T], we obtain the following upper bound: ‖𝒯y₁ − 𝒯y₂‖_γ ≤ L_𝒯(γ)‖y₁ − y₂‖_γ, where

$$L_{\mathcal{T}}(\gamma)=\frac{L}{(q\gamma)^{1/q}\Gamma(\alpha)}\Bigl(\frac{\Gamma(p(\alpha-n)+1)}{\Gamma(p(2\alpha-n-1)+2)}\Bigr)^{1/p}T^{\alpha-1+\frac{1}{p}}.$$

Obviously, lim_{γ→+∞} L_𝒯(γ) = 0, so we can consider γ(T) such that L_𝒯(γ(T)) < 1. With this choice of γ > 0, 𝒯 is a contraction on (𝒞([0, T]; w_{0,n−α}), ‖⋅‖_γ). By the contraction principle (Theorem 2.22) there exists a unique fixed point y*_T ∈ 𝒞([0, T]; w_{0,n−α}) for 𝒯. Rewriting explicitly the relation y*_T = 𝒯 y*_T, we obtain that y*_T solves (2.28) for t ∈ [0, T].

Finally, let T_m := m for m ∈ ℕ, y_m := y*_{T_m}, and

$$y(t):=y_m(t),\quad 0\le t\le m.\tag{2.37}$$

To prove that such a function is well defined, just note that any solution of (2.28) on an interval of the form [0, T] is a fixed point of 𝒯 on 𝒞([0, T]; w_{0,n−α}). Hence, if we consider m₂ > m₁, then y_{m₁} and y_{m₂} are fixed points of 𝒯 on 𝒞([0, m₁]; w_{0,n−α}) and 𝒞([0, m₂]; w_{0,n−α}), respectively. In particular, (𝒯 y_{m₂})(t) = y_{m₂}(t) for all t ∈ [0, m₂] and thus for all t ∈ [0, m₁]. Hence the restriction of y_{m₂} to [0, m₁] is a fixed point of 𝒯 on 𝒞([0, m₁]; w_{0,n−α}). By the uniqueness of the fixed point we must have y_{m₁}(t) = y_{m₂}(t) for all t ∈ [0, m₁]. This guarantees that (2.37) is well posed. Moreover, for each m ∈ ℕ, y_m(t) satisfies (2.21) for all t ∈ [0, m], and hence, by (2.37), y(t) satisfies (2.21) for all t ∈ [0, m] and all m ∈ ℕ. Thus we established that y(t) satisfies (2.21) for all t ≥ 0. Finally, it is clear that, by (2.37), y ∈ 𝒞([0, T]; w_{0,n−α}) for all T > 0. Furthermore, the facts that D_{0+}^α y(t) = D_{0+}^α y_m(t) for all t ∈ [0, m] and D_{0+}^α y_m ∈ 𝒞([0, m]; w_{0,n−α}) imply that D_{0+}^α y ∈ 𝒞([0, m]; w_{0,n−α}) for all m ∈ ℕ. This suffices to guarantee that D_{0+}^α y ∈ 𝒞(ℝ⁺₀; w_{0,n−α}). Thus we constructed the unique solution y of (2.21).

Remark 2.24. If assumption (b) is fulfilled, then a sufficient condition implying (a) is given by the following pair of conditions:
(a′) For all y ∈ ℝ, the function t ∈ ℝ⁺ ↦ F(t, y) ∈ ℝ belongs to 𝒞(ℝ⁺).
(a′′) For every constant C ∈ ℝ, there exists ℓ(C) ∈ ℝ such that
$$\lim_{t\downarrow 0}t^{n-\alpha}F\bigl(t,Ct^{\alpha-n}\bigr)=\ell(C).$$

Indeed, we can prove that (a′), (a′′), and (b) imply (a). To do this, first note that if we consider t₁ > 0 and h ∈ ℝ such that t₁ + h ∈ (0, T], then

$$\begin{aligned}
\bigl|F(t_1+h,y(t_1+h))-F(t_1,y(t_1))\bigr|
&\le\bigl|F(t_1+h,y(t_1+h))-F(t_1+h,y(t_1))\bigr|+\bigl|F(t_1+h,y(t_1))-F(t_1,y(t_1))\bigr|\\
&\le L\bigl|y(t_1+h)-y(t_1)\bigr|+\bigl|F(t_1+h,y(t_1))-F(t_1,y(t_1))\bigr|.
\end{aligned}$$

Furthermore, |y(t₁ + h) − y(t₁)| → 0 as h → 0 because y ∈ 𝒞([0, T]; w_{0,n−α}). Note that y(t₁) ∈ ℝ is fixed, and, together with (a′), this implies that

$$\bigl|F(t_1+h,y(t_1))-F(t_1,y(t_1))\bigr|\to 0\quad\text{as }h\to 0.$$

Therefore F(⋅, y(⋅)) ∈ 𝒞(0, T]. Moreover, set ℓ_y = lim_{t↓0} t^{n−α} y(t) and ℓ_F := ℓ(ℓ_y), which exists by condition (a′′). Then

$$\begin{aligned}
\bigl|t^{n-\alpha}F(t,y(t))-\ell_F\bigr|
&\le t^{n-\alpha}\bigl|F(t,y(t))-F\bigl(t,t^{\alpha-n}\ell_y\bigr)\bigr|+\bigl|t^{n-\alpha}F\bigl(t,t^{\alpha-n}\ell_y\bigr)-\ell_F\bigr|\\
&\le L\bigl|t^{n-\alpha}y(t)-\ell_y\bigr|+\bigl|t^{n-\alpha}F\bigl(t,t^{\alpha-n}\ell_y\bigr)-\ell_F\bigr|\to 0
\end{aligned}$$

as t ↓ 0, and thus (a) is proved. In fact, it is clear that conditions (a) and (b) are equivalent to (a′), (a′′), and (b).

Let us emphasize that there are functions satisfying (a′) and (b) but not (a). For instance, consider F(t, y) = sin(1/t) y for α ∈ (0, 1). Then the function t ∈ ℝ⁺ ↦ F(t, y) ∈ ℝ is continuous for every constant y ∈ ℝ. Furthermore, for all t > 0,

$$\bigl|F(t,y_1)-F(t,y_2)\bigr|\le\Bigl|\sin\Bigl(\frac{1}{t}\Bigr)\Bigr|\,|y_1-y_2|\le|y_1-y_2|.$$

However, if we consider the function y(t) = t^{α−1}, which belongs to 𝒞([0, T]; w_{0,1−α}), then t^{1−α} F(t, y(t)) = sin(1/t), and this function has no limit as t ↓ 0. Hence F(⋅, y(⋅)) ∉ 𝒞([0, T]; w_{0,1−α}). We emphasize that F does not satisfy (a′′), as evidenced by the case C = 1.

Remark 2.25. Note that assumption (b) can be weakened. Indeed, both Theorem 2.23 and Remark 2.24 still hold if we substitute assumption (b) with the following one:
(b_T) For every T > 0, there exists a constant L_T > 0 such that for all t ∈ (0, T] and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L_T|y_1-y_2|.$$
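The proof of Theorem 2.23 obtains the solution of (2.28) as the limit of the Picard iterates y_{m+1} = 𝒯 y_m (via Theorem 2.22). The following Python sketch discretizes one concrete case, α = 1/2, n = 1, d₁ = 1, F(t, y) = −y (globally Lipschitz with L = 1), with a midpoint quadrature; the grid, the number of iterations, and the quadrature are illustrative choices, not part of the book's argument:

```python
import math

alpha, d1, lam = 0.5, 1.0, -1.0
N, T = 400, 1.0
h = T / N
tau = [(k + 0.5) * h for k in range(N)]     # midpoints; y is evaluated there

def picard_step(y):
    """One application of the operator T from (2.36), midpoint-discretized."""
    out = []
    for i, t in enumerate(tau):
        s = sum((t - tau[k]) ** (alpha - 1) * lam * y[k] for k in range(i))
        out.append(d1 * t ** (alpha - 1) / math.gamma(alpha) + h * s / math.gamma(alpha))
    return out

y = [d1 * t ** (alpha - 1) / math.gamma(alpha) for t in tau]   # y_0: first term of (2.28)
diffs = []
for _ in range(8):
    y_new = picard_step(y)
    diffs.append(max(abs(a - b) for a, b in zip(y_new, y)))
    y = y_new
print(diffs)   # the increments eventually decay, as the contraction argument predicts
```

Note that the iterates inherit the singularity t^{α−1} of the first term, which is why the iteration lives on the midpoint grid away from 0.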
Note that we allow the solution y to have a singularity at 0. In fact, Exercise 2.16 clarifies that we are forcing a singularity of order α − n at 0 if dn ≠ 0. We could also search for solutions of (2.21) in the weaker sense that y satisfies the equation for a. a. t > 0 (instead of all t > 0). In such a case, we do not need to require that y is a continuous function.

Definition 2.26. We say that y is a global a. e. solution of the Cauchy problem (2.21) if
(a) y, D_{0+}^α y ∈ L¹_loc(ℝ⁺₀);
(b) y satisfies (2.21) for a. a. t > 0.
Similarly, we say that y is an a. e. solution in (0, T) (or just a local a. e. solution) of the Cauchy problem (2.21) if
(c) y, D_{0+}^α y ∈ L¹(0, T) and
(d) y satisfies (2.21) for a. a. t ∈ (0, T).

Arguments for proving the existence and uniqueness of a. e. solutions are quite similar to those we used before. Indeed, we first prove the equivalence of (2.21) with a nonlinear Volterra integral equation.

Proposition 2.27. Let T > 0, and let y ∈ L¹(0, T) be such that t ∈ (0, T) ↦ F(t, y(t)) ∈ ℝ belongs to L¹(0, T). Then y is an a. e. solution of (2.21) in (0, T) if and only if y satisfies (2.28) for a. a. t ∈ (0, T).

The proof is similar to that of Proposition 2.20. Now we are ready to prove the following global existence and uniqueness theorem.

Theorem 2.28. Let α > 0, α ∉ ℕ. Assume that a function F : ℝ⁺ × ℝ → ℝ satisfies assumption (b) of Theorem 2.23 and the following assumption:
(c) There exists y₀ ∈ ℝ such that F(⋅, y₀) ∈ L¹_loc(ℝ⁺).
Then (2.21) has a unique global a. e. solution.

Proof. We know from Proposition 2.27 that, equivalently, we can look for an a. e. solution of (2.28). To do this, let us fix T > 0, whose value will be specified later, and define the operator 𝒯 : L¹(0, T) → L¹(0, T) as in (2.36) for a. a. t ∈ (0, T). To prove that 𝒯 is well defined, note that for every y ∈ L¹(0, T), it follows from assumption (b) that

$$\bigl|F(t,y(t))\bigr|\le L\bigl|y(t)-y_0\bigr|+\bigl|F(t,y_0)\bigr|.\tag{2.38}$$

Together with assumption (c), (2.38) guarantees that F(⋅, y(⋅)) ∈ L¹(0, T). Thus 𝒯y ∈ L¹(0, T) by Lemma 1.4. Moreover, for all y₁, y₂ ∈ L¹(0, T),

$$\bigl|\mathcal{T}y_1(t)-\mathcal{T}y_2(t)\bigr|\le\frac{L}{\Gamma(\alpha)}\int_0^t(t-\tau)^{\alpha-1}\bigl|y_1(\tau)-y_2(\tau)\bigr|\,d\tau=L\bigl(I_{0+}^{\alpha}\bigl(|y_1-y_2|\bigr)\bigr)(t).$$

Then item (iii) of Lemma 1.4 supplies that

$$\|\mathcal{T}y_1-\mathcal{T}y_2\|_{L^1(0,T)}\le\frac{LT^{\alpha}}{\Gamma(\alpha+1)}\|y_1-y_2\|_{L^1(0,T)}.$$

Thus let us consider T := (Γ(α+1)/(2L))^{1/α}, so that

$$\|\mathcal{T}y_1-\mathcal{T}y_2\|_{L^1(0,T)}<\frac{1}{2}\|y_1-y_2\|_{L^1(0,T)}.$$

This implies that 𝒯 is a contraction on L¹(0, T) and thus has a unique fixed point y* such that y* = 𝒯y* in L¹(0, T), i. e., y*(t) = 𝒯y*(t) for a. a. t ∈ (0, T). This proves that y* is the unique a. e. solution of (2.21) in (0, T). Let us suppose that we have extended y* to (0, kT) for some integer k ≥ 1 in such a way that it is the unique a. e. solution of (2.21) in (0, kT). Define the operator 𝒯^{(k+1)} : L¹(kT, (k+1)T) → L¹(kT, (k+1)T) acting on y ∈ L¹(kT, (k+1)T) as follows: for a. a. t ∈ (kT, (k+1)T),

$$\mathcal{T}^{(k+1)}y(t):=\sum_{j=1}^{n}\frac{d_j}{\Gamma(\alpha-j+1)}t^{\alpha-j}+\frac{1}{\Gamma(\alpha)}\int_0^{kT}\frac{F(\tau,y^{*}(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau+\frac{1}{\Gamma(\alpha)}\int_{kT}^{t}\frac{F(\tau,y(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau.$$

With the same arguments as before, 𝒯^{(k+1)} is a contraction on L¹(kT, (k+1)T) and thus has a unique fixed point y_{k+1} ∈ L¹(kT, (k+1)T). Setting y*(t) = y_{k+1}(t) for a. a. t ∈ (kT, (k+1)T), we have by the definition of 𝒯^{(k+1)} that y*(t) = 𝒯y*(t) for a. a. t ∈ (0, (k+1)T), and thus y* is the unique a. e. solution of (2.21) in (0, (k+1)T). We conclude the proof by induction.

Remark 2.29. With a similar argument, it is possible to substitute assumption (b) with (b_T) as in Remark 2.25. Let us underline the fact that the conditions of Theorems 2.23 and 2.28 are sufficient but not necessary for existence and uniqueness, as explicated in Exercise 2.17. We omit here the discussion about local solutions, referring to [119] for further properties.
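The step length used in the proof of Theorem 2.28 is chosen precisely so that the L¹ Lipschitz constant of 𝒯 equals 1/2. A two-line numerical check (the sample values of α and L are arbitrary):

```python
import math

# T = (Gamma(alpha+1) / (2L))^(1/alpha) makes L * T^alpha / Gamma(alpha+1) = 1/2.
alpha, L = 0.7, 3.0
T = (math.gamma(alpha + 1) / (2 * L)) ** (1 / alpha)
contraction_const = L * T ** alpha / math.gamma(alpha + 1)
print(T, contraction_const)   # the constant equals 1/2 by construction
```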

2.4 Fractional differential equations with Dzhrbashyan–Caputo fractional derivatives

Similarly to the case of Riemann–Liouville derivatives, we can consider fractional differential equations involving Dzhrbashyan–Caputo fractional derivatives. Indeed, let us consider the following Cauchy problem:

$$\begin{cases}{}^{C}D_{0+}^{\alpha}y(t)=F(t,y(t)), & t>0,\\ y^{(k)}(0)=d_k, & k=0,\dots,n-1,\end{cases}\tag{2.39}$$

where α > 0 and n = ⌊α⌋ + 1. This time the initial conditions are specified directly on y and its derivatives, and thus y cannot admit any singularity at 0. However, we can still suppose that ^C D_{0+}^α y is singular at 0.

Definition 2.30. We say that y is a solution of (2.39) for t ∈ [0, T] if it has the following properties:
(i) y ∈ 𝒞^{(n−1)}[0, T].
(ii) There exists γ ∈ [0, α − n + 1) such that ^C D_{0+}^α y ∈ 𝒞([0, T]; w_{0,γ}).
(iii) The function y satisfies Equation (2.39) for all t ∈ [0, T].
We call this function a local solution. We say that y is a global solution of (2.39) if it has the following properties:
(iv) y ∈ 𝒞^{(n−1)}(ℝ⁺₀).
(v) There exists γ ∈ [0, α − n + 1) such that ^C D_{0+}^α y ∈ 𝒞(ℝ⁺₀; w_{0,γ}).
(vi) The function y satisfies (2.39) for all t ≥ 0.

To prove an existence and uniqueness result for (2.39), we need another equivalence statement between (2.39) and a suitable (possibly nonlinear) Volterra integral equation.

Proposition 2.31. Let T > 0, α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1, and let y ∈ 𝒞[0, T] and γ ∈ [0, α) be such that t ∈ (0, T] ↦ F(t, y(t)) ∈ ℝ belongs to 𝒞([0, T]; w_{0,γ}). Then y is a solution of (2.39) if and only if it satisfies the equation

$$y(t)=\sum_{j=0}^{n-1}\frac{d_j}{j!}t^{j}+\frac{1}{\Gamma(\alpha)}\int_0^t\frac{F(\tau,y(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau\tag{2.40}$$

for all t ∈ [0, T]. In particular, ^C D_{0+}^α y ∈ 𝒞([0, T]; w_{0,γ}).

Proof. Let us first assume that y ∈ 𝒞[0, T] is a solution of (2.39). Then y ∈ 𝒞^{(n−1)}[0, T] and ^C D_{0+}^α y ∈ 𝒞([0, T]; w_{0,γ}). Applying (1.85), we find that

$${}^{C}D_{0+}^{\alpha}y=D_{0+}^{\alpha}(y-p_n),\quad\text{where }p_n(t)=\sum_{k=0}^{n-1}\frac{y^{(k)}(0)}{k!}t^{k},\ t\in[0,T].$$

Since y ∈ 𝒞[0, T] and p_n ∈ 𝒞[0, T], we have

$$I_{0+}^{n-\alpha}(y-p_n)(0)=0.$$

Moreover, emphasize that y − p_n ∈ 𝒞^k[0, T] for all 1 ≤ k ≤ n − 1 with y^{(k)}(0) − p_n^{(k)}(0) = 0. Hence, together with the equality ⌊α − k⌋ + 1 = n − k, Proposition 1.56 implies that

$$\bigl(D_{0+}^{\alpha-k}(y-p_n)\bigr)(0)=\bigl({}^{C}D_{0+}^{\alpha-k}(y-p_n)\bigr)(0)=\bigl(I_{0+}^{n-\alpha}\bigl(y^{(n-k)}-p_n^{(n-k)}\bigr)\bigr)(0)=0$$

for all k = 1, …, n − 1. Then (2.32) allows us to conclude that

$$\bigl(I_{0+}^{\alpha}\bigl({}^{C}D_{0+}^{\alpha}y\bigr)\bigr)(t)=\bigl(I_{0+}^{\alpha}\bigl(D_{0+}^{\alpha}(y-p_n)\bigr)\bigr)(t)=y(t)-p_n(t).$$

However, from (2.39) we know that

$$\bigl(I_{0+}^{\alpha}\bigl({}^{C}D_{0+}^{\alpha}y\bigr)\bigr)(t)=\bigl(I_{0+}^{\alpha}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T].$$

Combining the last two equalities, we get (2.40).

Now let us assume that y satisfies (2.40). Taking the limit as t → 0, we obtain that y(0) = d₀. Indeed, F(⋅, y(⋅)) ∈ 𝒞([0, T]; w_{0,γ}) for some γ < α − n + 1 ≤ α, and therefore (I_{0+}^α(F(⋅, y(⋅))))(0) = 0 (see, for instance, Exercise 2.16). Note that for all k = 1, …, n − 1, the right-hand side of (2.40) has the kth derivative, and

$$y^{(k)}(t)=\sum_{j=k}^{n-1}\frac{d_j}{(j-k)!}t^{j-k}+\bigl(I_{0+}^{\alpha-k}\bigl(F(\cdot,y(\cdot))\bigr)\bigr)(t).$$

Again, since γ < α − n + 1 ≤ α − k, we get the equalities y^{(k)}(0) = d_k. This means that we can rewrite (2.40) as

$$y(t)-p_n(t)=\bigl(I_{0+}^{\alpha}F(\cdot,y(\cdot))\bigr)(t),\quad t\in(0,T].$$

Applying ^C D_{0+}^α to both sides of the previous identity and using (1.85), we complete the proof.

Now we are ready to prove the existence and uniqueness theorem.

Theorem 2.32. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ⁺ × ℝ → ℝ satisfies the following properties for some γ ∈ [0, α − n + 1):
(a_γ) For any T > 0 and any y ∈ ℝ, the function t ∈ (0, T] ↦ F(t, y) ∈ ℝ belongs to 𝒞([0, T]; w_{0,γ}).
(b_γ) There exists a constant L > 0 such that for all t > 0 and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le Lt^{-\gamma}|y_1-y_2|.$$
Then (2.39) has a unique global solution y with Dzhrbashyan–Caputo derivative ^C D_{0+}^α y ∈ 𝒞(ℝ⁺₀; w_{0,γ}).

Proof. Let us first show that for all T > 0 and y ∈ 𝒞[0, T], the function t ∈ ℝ⁺ ↦ F(t, y(t)) ∈ ℝ belongs to 𝒞([0, T]; w_{0,γ}). Indeed, let y ∈ 𝒞[0, T]. Consider t₀ ∈ (0, T] and h ∈ ℝ such that t₀ + h ∈ (0, T]. Then

$$\begin{aligned}
\bigl|t_0^{\gamma}F(t_0,y(t_0))-(t_0+h)^{\gamma}F(t_0+h,y(t_0+h))\bigr|
&\le\bigl|t_0^{\gamma}F(t_0,y(t_0))-(t_0+h)^{\gamma}F(t_0+h,y(t_0))\bigr|\\
&\quad+(t_0+h)^{\gamma}\bigl|F(t_0+h,y(t_0))-F(t_0+h,y(t_0+h))\bigr|\\
&\le L\bigl|y(t_0+h)-y(t_0)\bigr|+\bigl|t_0^{\gamma}F(t_0,y(t_0))-(t_0+h)^{\gamma}F(t_0+h,y(t_0))\bigr|.
\end{aligned}$$

Both y and t ∈ ℝ⁺₀ ↦ t^γ F(t, y(t₀)) are continuous, and taking the limit as h → 0, we obtain that t^γ F(t, y(t)) is continuous in (0, T]. Furthermore, note that t^γ F(t, y(t)) = t^γ F(t, y(0)) + t^γ (F(t, y(t)) − F(t, y(0))) and

$$\lim_{t\downarrow 0}t^{\gamma}\bigl|F(t,y(t))-F(t,y(0))\bigr|\le\lim_{t\downarrow 0}L\bigl|y(t)-y(0)\bigr|=0,$$

whence lim_{t↓0} t^γ F(t, y(t)) = lim_{t↓0} t^γ F(t, y(0)). This means that t ∈ ℝ⁺₀ ↦ F(t, y(t)) belongs to 𝒞([0, T]; w_{0,γ}) for all T > 0 and y ∈ 𝒞[0, T].

We know from Proposition 2.31 that, equivalently, we can look for a solution of (2.40). To do this, let us fix T > 0 and for any y ∈ 𝒞[0, T], define

$$(\mathcal{T}y)(t)=\sum_{j=0}^{n-1}\frac{d_j}{j!}t^{j}+\frac{1}{\Gamma(\alpha)}\int_0^t\frac{F(\tau,y(\tau))}{(t-\tau)^{1-\alpha}}\,d\tau,\quad t\in[0,T].$$

We first establish that 𝒯y ∈ 𝒞[0, T]. Indeed, we have shown before that since y ∈ 𝒞[0, T], F(⋅, y(⋅)) ∈ 𝒞([0, T]; w_{0,γ}). Furthermore, since n ≥ 1, α − n + 1 ≤ α, and thus γ < α. Hence by item (i) of Lemma 2.19 we get that 𝒯y ∈ 𝒞[0, T]. Let us prove that 𝒯 is a contraction on 𝒞[0, T] with a suitably modified norm. Fix β > 0, whose value will be specified later, and define the norm

$$\|y\|_{\beta}=\sup_{t\in[0,T]}e^{-\beta t}\bigl|y(t)\bigr|,\quad y\in\mathcal{C}[0,T].$$

For every y ∈ 𝒞[0, T], we have the bounds e^{−βT}‖y‖_{𝒞[0,T]} ≤ ‖y‖_β ≤ ‖y‖_{𝒞[0,T]}. Therefore (𝒞[0, T], ‖⋅‖_β) is a Banach space. Moreover, for all y₁, y₂ ∈ 𝒞[0, T],

$$\begin{aligned}
\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|
&\le\frac{1}{\Gamma(\alpha)}\int_0^t\frac{|F(\tau,y_1(\tau))-F(\tau,y_2(\tau))|}{(t-\tau)^{1-\alpha}}\,d\tau\le\frac{L}{\Gamma(\alpha)}\int_0^t\frac{\tau^{-\gamma}|y_1(\tau)-y_2(\tau)|}{(t-\tau)^{1-\alpha}}\,d\tau\\
&=\frac{L}{\Gamma(\alpha)}\int_0^t\frac{\tau^{-\gamma}e^{\beta\tau}e^{-\beta\tau}|y_1(\tau)-y_2(\tau)|}{(t-\tau)^{1-\alpha}}\,d\tau\le\frac{L}{\Gamma(\alpha)}\|y_1-y_2\|_{\beta}\int_0^t\tau^{-\gamma}e^{\beta\tau}(t-\tau)^{\alpha-1}\,d\tau.
\end{aligned}$$

Now let us distinguish two cases. If α > 1, then consider some 1 ≤ p < 1/γ, and if α ≤ 1, then consider 1 ≤ p < 1/(γ+1−α). Let q ≥ 1 be the conjugate exponent. Using the Hölder inequality, we obtain that

$$\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|\le\frac{L}{\Gamma(\alpha)}\|y_1-y_2\|_{\beta}\Bigl(\int_0^t e^{q\beta\tau}\,d\tau\Bigr)^{1/q}\Bigl(\int_0^t\tau^{-p\gamma}(t-\tau)^{p(\alpha-1)}\,d\tau\Bigr)^{1/p}.$$

If α ≥ 1, then p(α − 1) ≥ 0, and, consequently, p(α − 1) + 1 > 0. If α < 1, then p < 1/(γ+1−α) ≤ 1/(1−α) and p(α − 1) > −1, implying that p(α − 1) + 1 > 0. Furthermore, if α ≥ 1, then we choose p < 1/γ, and if α < 1, then p < 1/(γ+1−α) < 1/γ. Thus 1 − pγ > 0 in both cases. Moreover, if α > 1, then n > 1 and γ < α − n + 1 ≤ α − 1, so that p(α − 1 − γ) + 1 > p(α − 1 − γ) > 0. If α ≤ 1, then p < 1/(γ+1−α) and p(α − 1 − γ) > −1. Integrating the exponential function, we get from Lemma 1.11 that

$$\begin{aligned}
\bigl|(\mathcal{T}y_1)(t)-(\mathcal{T}y_2)(t)\bigr|
&\le\|y_1-y_2\|_{\beta}\,\frac{L\bigl(B(p(\alpha-1)+1,1-p\gamma)\bigr)^{1/p}}{\Gamma(\alpha)}\Bigl(\frac{e^{q\beta t}-1}{q\beta}\Bigr)^{1/q}t^{\frac{p(\alpha-1-\gamma)+1}{p}}\\
&\le\|y_1-y_2\|_{\beta}\,\frac{L\bigl(B(p(\alpha-1)+1,1-p\gamma)\bigr)^{1/p}}{\Gamma(\alpha)(q\beta)^{1/q}}\,e^{\beta t}\,T^{\frac{p(\alpha-1-\gamma)+1}{p}}.
\end{aligned}$$

Multiplying both sides by e^{−βt} and taking the supremum, we get ‖𝒯y₁ − 𝒯y₂‖_β ≤ L_𝒯(β)‖y₁ − y₂‖_β, where

$$L_{\mathcal{T}}(\beta)=\frac{L\bigl(B(p(\alpha-1)+1,1-p\gamma)\bigr)^{1/p}}{\Gamma(\alpha)(q\beta)^{1/q}}\,T^{\frac{p(\alpha-1-\gamma)+1}{p}}.$$

Observing that lim_{β→∞} L_𝒯(β) = 0, we can always choose β > 0 such that L_𝒯(β) < 1. Thus 𝒯 is a contraction, and there exists a unique fixed point y*_T of 𝒯, that is, a solution of (2.39) for t ∈ [0, T]. The same argument as in Theorem 2.23 completes the proof.

Remark 2.33. Let us emphasize that condition (b_γ) can clearly be substituted with the weaker condition:
(b_{γ,T}) For any T > 0, there exists a constant L_T > 0 such that for all t ∈ (0, T] and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L_T t^{-\gamma}|y_1-y_2|.$$
Furthermore, note that (b_{γ,T}) is implied by
(b′_γ) There exists a constant L > 0 such that for all t ∈ ℝ⁺ and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L\max\{t^{-\gamma},1\}|y_1-y_2|.$$
Note that under conditions (a_γ) and (b_{γ,T}), the function (t, y) ∈ ℝ⁺₀ × ℝ ↦ t^γ F(t, y) ∈ ℝ is continuous. Indeed, if we consider (t₀, y₀) ∈ ℝ⁺ × ℝ and (h₁, h₂) ∈ ℝ² such that (t₀ + h₁, y₀ + h₂) ∈ ℝ⁺ × ℝ with |h₁| + |h₂| < 1, then we can consider T > t₀ + 1 and

note that t₀, t₀ + h₁ ∈ (0, T]. Hence

$$\begin{aligned}
\bigl|t_0^{\gamma}F(t_0,y_0)-(t_0+h_1)^{\gamma}F(t_0+h_1,y_0+h_2)\bigr|
&\le\bigl|t_0^{\gamma}F(t_0,y_0)-(t_0+h_1)^{\gamma}F(t_0+h_1,y_0)\bigr|\\
&\quad+(t_0+h_1)^{\gamma}\bigl|F(t_0+h_1,y_0)-F(t_0+h_1,y_0+h_2)\bigr|\\
&\le\bigl|t_0^{\gamma}F(t_0,y_0)-(t_0+h_1)^{\gamma}F(t_0+h_1,y_0)\bigr|+L_T|h_2|.
\end{aligned}$$

Taking the limit as (h₁, h₂) → (0, 0), we get that t^γ F(t, y) is continuous at the point (t₀, y₀). Concerning points of the form (0, y₀), let ℓ₀(y) = lim_{t↓0} t^γ F(t, y) and consider

$$\bigl|\ell_0(y_0)-h_1^{\gamma}F(h_1,y_0+h_2)\bigr|\le\bigl|\ell_0(y_0)-h_1^{\gamma}F(h_1,y_0)\bigr|+h_1^{\gamma}\bigl|F(h_1,y_0)-F(h_1,y_0+h_2)\bigr|\le\bigl|\ell_0(y_0)-h_1^{\gamma}F(h_1,y_0)\bigr|+L_T|h_2|.$$

Taking the limit as (h₁, h₂) → (0, 0), we obtain that t^γ F(t, y) is continuous at the point (0, y₀).

Let us consider the case γ = 0, which corresponds, for instance, to functions F not depending explicitly on t.

Corollary 2.34. Assume that F : ℝ⁺₀ × ℝ → ℝ has the following properties:
(a₀) For every y ∈ ℝ, the function t ∈ ℝ⁺₀ ↦ F(t, y) ∈ ℝ is continuous.
(b₀) There exists a constant L > 0 such that for all t ≥ 0 and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L|y_1-y_2|.$$
Then (2.39) has a unique global solution with Dzhrbashyan–Caputo derivative ^C D_{0+}^α y ∈ 𝒞(ℝ⁺₀).
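Corollary 2.34 covers, for instance, F(t, y) = −y. The following Python sketch solves the corresponding problem ᶜD^α_{0+} y = −y, y(0) = 1, through its Volterra form (2.40) with an implicit product-rectangle scheme and compares the result with the classical closed-form solution, the Mittag-Leffler function E_α(−t^α). The scheme and all numerical parameters are illustrative choices, not taken from the book:

```python
import math

alpha = 0.5
N, T = 400, 1.0
h = T / N
g1 = math.gamma(alpha + 1)

y = [1.0]                                  # y(0) = d_0 = 1
for i in range(1, N + 1):
    ti = i * h
    # history cells (t_k, t_{k+1}], k = 0..i-2, with the kernel integrated exactly
    s = sum(y[k + 1] * ((ti - k * h) ** alpha - (ti - (k + 1) * h) ** alpha)
            for k in range(i - 1))
    # implicit step: the k = i-1 cell contributes h^alpha * y_i
    y.append((1.0 - s / g1) / (1.0 + h ** alpha / g1))

# Reference value: E_alpha(-T^alpha) via its power series
ml = sum((-T ** alpha) ** k / math.gamma(alpha * k + 1) for k in range(80))
print(y[-1], ml)   # close agreement at t = T
```

The implicit treatment of the newest cell keeps the scheme stable despite the weakly singular kernel (t − τ)^{α−1}.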

Remark 2.35. Clearly, (b₀) implies (b_γ) for all γ ∈ [0, 1). Note that, similarly to Theorem 2.32, we can substitute condition (b₀) with the weaker one:
(b_{0,T}) For every T > 0, there exists a constant L_T > 0 such that for all t ∈ [0, T] and y₁, y₂ ∈ ℝ,
$$|F(t,y_1)-F(t,y_2)|\le L_T|y_1-y_2|.$$
The above condition implies (b_{γ,T}) for all γ ∈ [0, 1).

Finally, let us note that the solutions of (2.39) cannot have singularities. Therefore it is possible to provide a simple criterion for the existence and uniqueness of local solutions.

Theorem 2.36. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ⁺₀ × ℝ → ℝ has the following properties for some γ ∈ [0, α − n + 1):
(a_γ) For all T > 0 and y ∈ ℝ, the function t ∈ (0, T] ↦ F(t, y) ∈ ℝ belongs to 𝒞([0, T]; w_{0,γ}).

(b_{γ,loc}) For every compact set K ⊂ ℝ and every T > 0, there exists a constant L_{T,K} > 0 such that for all t ∈ (0, T] and y₁, y₂ ∈ K,
$$|F(t,y_1)-F(t,y_2)|\le L_{T,K}\,t^{-\gamma}|y_1-y_2|.$$
Then there exists a constant τ > 0 (possibly τ = +∞) such that Equation (2.39) has a unique solution in [0, τ) with Dzhrbashyan–Caputo derivative ^C D_{0+}^α y ∈ 𝒞([0, τ); w_{0,γ}).

Proof. For m ∈ ℕ, consider a function ψ_m ∈ 𝒞_c^∞(ℝ) such that ψ_m(x) = 1 for x ∈ [−m, m], ψ_m(x) = 0 for |x| > m + 1, and ψ_m(x) ∈ [0, 1] for x ∈ ℝ. Such a ψ_m is a Lipschitz function; we denote by L_{ψ,m} its Lipschitz constant. For m ∈ ℕ and T > 0, define the function

$$F_{m,T}(t,y)=\begin{cases}\psi_m(y)F(t,y), & t\in[0,T],\\ \psi_m(y)F(T,y), & t>T,\end{cases}$$

and note that it satisfies condition (a₀) from Corollary 2.34. Moreover, for all y₁, y₂ ∈ ℝ and t ∈ [0, T],

$$\begin{aligned}
\bigl|F_{m,T}(t,y_1)-F_{m,T}(t,y_2)\bigr|
&\le\bigl|\psi_m(y_1)-\psi_m(y_2)\bigr|\bigl|F(t,y_1)\bigr|+\bigl|\psi_m(y_2)\bigr|\bigl|F(t,y_1)-F(t,y_2)\bigr|\\
&=\bigl|\psi_m(y_1)-\psi_m(y_2)\bigr|t^{-\gamma}\bigl|t^{\gamma}F(t,y_1)\bigr|+\bigl|\psi_m(y_2)\bigr|\bigl|F(t,y_1)-F(t,y_2)\bigr|.
\end{aligned}$$

If |y₂| ≤ m + 1, then it follows from the continuity of the function t^γ F(t, y) in both variables (see Remark 2.33) and condition (b_{γ,loc}) that

$$\bigl|F_{m,T}(t,y_1)-F_{m,T}(t,y_2)\bigr|\le(L_{\psi,m}M_{F,T,m}+L_{F,T,m})\,t^{-\gamma}|y_1-y_2|,$$

where M_{F,T,m} := max_{t∈[0,T], |y|≤m+1} |t^γ F(t, y)|, L_{F,T,m} := L_{T,K_m}, and K_m = [−m − 1, m + 1]. Denote L_{T,m} := L_{ψ,m}M_{F,T,m} + L_{F,T,m}. If |y₂| > m + 1, then ψ_m(y₂) = 0, and therefore

$$\bigl|F_{m,T}(t,y_1)-F_{m,T}(t,y_2)\bigr|\le L_{\psi,m}M_{F,T,m}\,t^{-\gamma}|y_1-y_2|\le L_{T,m}\,t^{-\gamma}|y_1-y_2|.$$

For t > T, we have the following bounds:

$$\bigl|F_{m,T}(t,y_1)-F_{m,T}(t,y_2)\bigr|\le\bigl|\psi_m(y_1)-\psi_m(y_2)\bigr|\bigl|F(T,y_1)\bigr|+\bigl|\psi_m(y_2)\bigr|\bigl|F(T,y_1)-F(T,y_2)\bigr|\le L_{T,m}\,T^{-\gamma}|y_1-y_2|.$$

This means that F_{m,T} satisfies condition (b_{γ,T}) of Remark 2.33. Thus we can use Theorem 2.32 to guarantee that the Cauchy problem

$$\begin{cases}{}^{C}D_{0+}^{\alpha}y=F_{m,T}(t,y), & t\ge 0,\\ y^{(k)}(0)=d_k, & k=0,\dots,n-1,\end{cases}\tag{2.41}$$

150 � 2 Integral and differential equations involving fractional operators has a unique global solution ym,T . Now consider m0 ∈ ℕ such that y(0) ∈ [−m0 , m0 ]. For m ≥ m0 , let τm,T := min{inf{t > 0 : ym,T (t) ∈ ̸ [−m, m]}, T}. Consider T ′ > T. Then Fm,T (t, ⋅) = Fm,T ′ (t, ⋅) for all t ∈ [0, T], and thus, by the uniqueness of the solution of (2.41), ym,T (t) = ym,T ′ (t) for all t ≤ T. This means, in particular, that either τm,T = T < τm,T ′

or

τm,T = τm,T ′ .

Hence the function T ∈ ℝ+ 󳨃→ τm,T ∈ ℝ+ is increasing, and we can define τm := lim τm,T = τm . T→+∞

If τm = +∞, then for each t ≥ 0, we can consider T > 0 such that t ∈ [0, T] and set y(t) := ym,T (t). The function y : ℝ+0 → ℝ is well defined by the previous observations. In particular, (C Dα0+ y) (t) = (C Dα0+ ym,T ) (t) = Fm,T (t, y(t)) = F(t, y(t)). Since t ≥ 0 is arbitrary, we established that y solves (2.39). The uniqueness follows from the fact that (2.41) has a unique solution. If τm < +∞, then let us further note that if m < m′ , then ym,T (t) = ym′ ,T (t) for all T > 0 and t ≤ τm,T since Fm,T (t, ym,T (t)) = Fm′ ,T (t, ym,T (t)) for all t ≤ τm,T . This implies that m ∈ ℕ 󳨃→ τm,T ∈ ℝ+ is increasing. Taking the limit as T → +∞, we get that m ∈ ℕ 󳨃→ τm ∈ ℝ+ is increasing. Thus we can consider the limit τ := lim τm . m→+∞

Now for t < τ, define y(t) = y_{m,T}(t) if t ≤ τ_{m,T}, which is well defined by the previous observations. Let t < τ. Then there exist m ≥ m0 and T > 0 such that t ≤ τ_{m,T}. Thus we have y^{(k)}(0) = y^{(k)}_{m,T}(0) = d_k for all k = 0, …, n − 1. Moreover, for all t ≤ τ_{m,T},

    ^C D^α_{0+} y(t) = ^C D^α_{0+} y_{m,T}(t) = F_{m,T}(t, y_{m,T}(t)) = F_{m,T}(t, y(t)) = F(t, y(t)),

since if t ≤ τ_{m,T}, then |y(t)| ≤ m and t ≤ T. Thus y is a solution of (2.39) for t ∈ [0, τ). The uniqueness follows from the fact that (2.41) admits unique solutions for all m ∈ ℕ and T > 0.
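The cutoff functions ψ_m used in this localization argument can be written down concretely. A minimal Python sketch, where the particular smooth transition is our illustrative choice (any C^∞ cutoff equal to 1 on [−m, m] and 0 outside [−m − 1, m + 1] works), together with the truncated drift F_{m,T}(t, y) = ψ_m(y) F(min{t, T}, y) used above:

```python
import math

def psi(m, y):
    """Smooth cutoff: equals 1 on [-m, m] and 0 outside [-m-1, m+1].

    Built from the standard C-infinity function h(u) = exp(-1/u) for u > 0
    (and 0 otherwise) via the smooth step s(u) = h(u) / (h(u) + h(1 - u)).
    This specific mollifier is an illustrative choice, not the book's."""
    def h(u):
        return math.exp(-1.0 / u) if u > 0 else 0.0
    def s(u):  # smooth step: 0 for u <= 0, 1 for u >= 1
        return h(u) / (h(u) + h(1.0 - u))
    r = abs(y)
    if r <= m:
        return 1.0
    if r >= m + 1:
        return 0.0
    return s(m + 1 - r)

def F_trunc(F, m, T, t, y):
    """Truncated drift F_{m,T}(t, y) = psi_m(y) * F(min(t, T), y)."""
    return psi(m, y) * F(min(t, T), y)
```

Since ψ_m is smooth with compact support, it is globally Lipschitz, which is exactly what provides the constant L_{ψ,m} in the proof.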


Remark 2.37. All the results presented in Sections 2.3 and 2.4 also hold for systems of fractional differential equations of the same order. Indeed, extensions of fractional integrals and derivatives to ℝd -valued functions, where d ≥ 1, can be easily obtained by applying the aforementioned operators component-wise. In such a case, all the results we gave in Sections 2.3 and 2.4 can be proven in the same way, after we substitute the absolute value with the Euclidean norm. Furthermore, the results can be extended to the general vector-valued case. Indeed, one can define fractional integrals and derivatives of functions f : t ∈ [0, T] 󳨃→ f (t) ∈ 𝔹, where 𝔹 is a Banach space, by using Bochner integrals and norm-derivatives (see Appendix C.5). Details on the extension of such operators to this setting are given in Section 4.5. Once this is done, one can understand (2.21) and (2.39) as (possibly nonlinear) abstract Cauchy problems and the notion of solution can be easily adapted from Definitions 2.18, 2.26 and 2.30. Furthermore, existence and uniqueness of the solutions for such problems are shown exactly as we did in Sections 2.3 and 2.4, by substituting the absolute value with the norm in 𝔹.
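The componentwise extension described in Remark 2.37 can be mirrored numerically. A sketch of the Riemann–Liouville fractional integral applied componentwise to a vector-valued function; the product quadrature (exact kernel weights, midpoint evaluation of f) is our discretization choice:

```python
import math

def rl_integral_vec(f, alpha, t, n=2000):
    """Riemann-Liouville fractional integral I^alpha_{0+} f(t), applied
    componentwise to a vector-valued f : [0, t] -> R^d.

    The kernel (t - s)^(alpha - 1) is integrated exactly on each cell,
    while f is frozen at the cell midpoint (a simple product rule)."""
    h = t / n
    d = len(f(0.0))
    out = [0.0] * d
    for j in range(n):
        a, b = j * h, (j + 1) * h
        # exact integral of (t - s)^(alpha - 1) over [a, b]
        w = ((t - a) ** alpha - (t - b) ** alpha) / alpha
        fm = f(a + 0.5 * h)
        for i in range(d):
            out[i] += w * fm[i]
    g = math.gamma(alpha)
    return [v / g for v in out]

# check against the closed forms I^{1/2}[1](t) = t^{1/2}/Gamma(3/2) and
# I^{1/2}[s](t) = t^{3/2}/Gamma(5/2), evaluated at t = 2
approx = rl_integral_vec(lambda s: [1.0, s], 0.5, 2.0)
exact = [2.0 ** 0.5 / math.gamma(1.5), 2.0 ** 1.5 / math.gamma(2.5)]
```

The same routine, with the Euclidean norm in place of the absolute value, reflects how the ℝ^d-valued theory reduces to the scalar one.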

2.5 The generalized Grönwall inequality

A prominent role in the classical theory of ordinary differential equations is played by the Grönwall inequality. Such an inequality has been extended to the fractional case in [261]. Here we establish a slightly more general form of the Grönwall inequality.

Theorem 2.38. Let α > 0, γ ∈ [0, 1), and β ∈ [0, 1 − γ), let f, y ∈ 𝒞([0, T]; w_{0,β}) be nonnegative functions, and let g : [0, T] → ℝ be a nonnegative nondecreasing continuous function. If

    y(t) ≤ f(t) + (g(t) t^γ / Γ(α)) ∫_0^t (t − s)^{α−1} s^{−γ} y(s) ds,   t ∈ (0, T],      (2.42)

then

    y(t) ≤ f(t) + t^γ ∑_{n=1}^{+∞} ((g(t))^n / Γ(nα)) ∫_0^t (t − s)^{nα−1} s^{−γ} f(s) ds,   t ∈ (0, T].      (2.43)

Furthermore, if f is nondecreasing, then

    y(t) ≤ Γ(1 − γ) f(t) E_{α,1−γ}(g(t) t^α),   t ∈ (0, T],      (2.44)

where E_{α,1−γ} is the two-parameter Mittag-Leffler function defined in (2.4).

Proof. Define the operator 𝒯 : 𝒞([0, T]; w_{0,β}) → 𝒞([0, T]; w_{0,β}) as

    𝒯ϕ(t) = (g(t) t^γ / Γ(α)) ∫_0^t (t − s)^{α−1} s^{−γ} ϕ(s) ds,   t ∈ [0, T].

To see that the operator is well defined for all ϕ ∈ 𝒞([0, T]; w_{0,β}), consider ϕ_γ(t) = t^{−γ} ϕ(t). Clearly, ϕ_γ ∈ 𝒞([0, T]; w_{0,γ+β}), and so I^α_{0+} ϕ_γ ∈ 𝒞([0, T]; w_{0,γ+β}) by Lemma 2.19. We can also rewrite

    (𝒯ϕ)(t) = g(t) t^γ (I^α_{0+} ϕ_γ)(t)

for all t ∈ [0, T]. Hence the fact that g ∈ C[0, T] and I^α_{0+} ϕ_γ ∈ 𝒞([0, T]; w_{0,γ+β}) implies that 𝒯ϕ ∈ 𝒞([0, T]; w_{0,β}). Since g is nonnegative, 𝒯ϕ_1(t) ≤ 𝒯ϕ_2(t) for all t ∈ (0, T] if 0 ≤ ϕ_1(t) ≤ ϕ_2(t) for all t ∈ (0, T]. We can rewrite inequality (2.42) as

    y(t) ≤ f(t) + 𝒯y(t).      (2.45)

Recall that y(t) ≥ 0 for all t ∈ [0, T]. Therefore we can apply 𝒯 to both sides of the previous inequality and get that

    𝒯y(t) ≤ 𝒯f(t) + 𝒯²y(t).

Applying the latter inequality to (2.45), we get y(t) ≤ f(t) + 𝒯f(t) + 𝒯²y(t) for all t ∈ (0, T]. Repeating this procedure n times for each n ∈ ℕ, we get by induction that

    y(t) ≤ ∑_{k=0}^{n−1} 𝒯^k f(t) + 𝒯^n y(t)      (2.46)

for all t ∈ (0, T], where by 𝒯⁰ we denote the identity operator. Consider a nonnegative function ϕ ∈ 𝒞[0, T]. We want to prove that

    𝒯^n ϕ(t) ≤ ((g(t))^n t^γ / Γ(nα)) ∫_0^t (t − s)^{nα−1} s^{−γ} ϕ(s) ds,   t ∈ [0, T].      (2.47)

To do this, we argue by induction. For n = 1, the relation is in fact an equality. Now let us suppose this is true for some fixed n ≥ 1 and consider 𝒯^{n+1}. We have that for all t ∈ (0, T],

    𝒯^{n+1} ϕ(t) = 𝒯(𝒯^n ϕ)(t)
        ≤ (g(t) t^γ / Γ(α)) ∫_0^t (t − s)^{α−1} s^{−γ} ((g(s))^n s^γ / Γ(nα)) ∫_0^s (s − τ)^{nα−1} τ^{−γ} ϕ(τ) dτ ds
        ≤ ((g(t))^{n+1} t^γ / (Γ(α)Γ(nα))) ∫_0^t (t − s)^{α−1} ∫_0^s (s − τ)^{nα−1} τ^{−γ} ϕ(τ) dτ ds.


Changing the order of integration and using Lemma 1.11, we get that

    𝒯^{n+1} ϕ(t) ≤ ((g(t))^{n+1} t^γ / Γ(nα)) ∫_0^t τ^{−γ} ϕ(τ) (1/Γ(α)) ∫_τ^t (t − s)^{α−1} (s − τ)^{nα−1} ds dτ
        = ((g(t))^{n+1} t^γ / Γ((n + 1)α)) ∫_0^t τ^{−γ} ϕ(τ) (t − τ)^{(n+1)α−1} dτ,

and (2.47) is proved. Furthermore, we can rewrite (2.47) as

    𝒯^n ϕ(t) ≤ ((g(t))^n t^γ / Γ(nα)) ∫_0^t ϕ_γ(s) (t − s)^{nα−1} ds.      (2.48)

Choose n0 > 1/α and consider any n ≥ n0. Obviously, nα − 1 ≥ n0 α − 1 > 0. We have that ϕ ∈ 𝒞([0, T]; w_{0,β}) and β + γ < 1; hence ϕ_γ belongs to L¹(0, T). Applying the Hölder inequality to (2.48), we get

    𝒯^n ϕ(t) ≤ ((g(t))^n t^γ / Γ(nα)) ∫_0^t ϕ_γ(τ) (t − τ)^{nα−1} dτ
        ≤ ((g(t))^n t^γ / Γ(nα)) ‖ϕ_γ‖_{L¹(0,t)} t^{nα−1} ≤ ((g(T))^n / Γ(nα)) ‖ϕ_γ‖_{L¹(0,T)} T^{nα−1+γ}.

Therefore

    ∑_{n=n0}^{+∞} 𝒯^n ϕ(t) ≤ ∑_{n=n0}^{+∞} ((g(T))^n T^{αn−1+γ} / Γ(nα)) ‖ϕ_γ‖_{L¹(0,T)}
        = ‖ϕ_γ‖_{L¹(0,T)} ∑_{n=n0−1}^{+∞} ((g(T))^{n+1} T^{αn+α−1+γ} / Γ(nα + α))
        = ‖ϕ_γ‖_{L¹(0,T)} g(T) T^{α−1+γ} ∑_{n=n0−1}^{+∞} ((g(T) T^α)^n / Γ(nα + α))
        ≤ ‖ϕ_γ‖_{L¹(0,T)} g(T) T^{α−1+γ} ∑_{n=0}^{+∞} ((g(T) T^α)^n / Γ(nα + α))
        = ‖ϕ_γ‖_{L¹(0,T)} g(T) T^{α−1+γ} E_{α,α}(g(T) T^α).

In particular, this implies that lim_{n→+∞} 𝒯^n ϕ(t) = 0. Taking the limit as n → +∞ in (2.46) and using (2.47), we obtain (2.43). To get (2.44), note that for s ≤ t, we have f(s) ≤ f(t), and then it follows from Lemma 1.11 that

    y(t) ≤ f(t) + ∑_{n=1}^{+∞} ((g(t))^n t^γ / Γ(nα)) f(t) ∫_0^t s^{−γ} (t − s)^{nα−1} ds
        = f(t) ∑_{n=0}^{+∞} (g(t))^n (Γ(1 − γ) t^{nα} / Γ(nα + 1 − γ)) = Γ(1 − γ) f(t) E_{α,1−γ}(g(t) t^α).

Remark 2.39. Setting γ = 0, we obtain the standard version of the fractional Grönwall inequality contained in [261, Theorem 1 and Corollary 2]. Moreover, in such a case the condition y ∈ 𝒞([0, T]; w_{0,β}) is not needed. Indeed, if y ∈ L¹(0, T) and (2.42) holds for a. a. t ∈ [0, T], then we can apply the same arguments and conclude that (2.43) and (2.44) hold for a. a. t ∈ [0, T]. In general, we can prove the same statement (using the same arguments) for every y ∈ L¹(0, T) such that y_γ(t) = t^{−γ} y(t) belongs to L¹(0, T). However, since we are working with solutions of fractional differential equations, we will always consider y ∈ 𝒞([0, T]; w_{0,β}) for some β ∈ (0, 1 − γ). Finally, if we set γ = 0 and α = 1, then (2.44) coincides with the classical Grönwall inequality, as in [196, Theorem 1.3.1].
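The Mittag-Leffler factor in (2.44) is easy to evaluate numerically. A minimal sketch of the two-parameter Mittag-Leffler function via its defining series (the truncation length and overflow guard are our choices, adequate for moderate arguments), which recovers the classical Grönwall factor e^{g t} when α = 1 and γ = 0, as noted in Remark 2.39:

```python
import math

def mittag_leffler(alpha, beta, x, terms=200):
    """Two-parameter Mittag-Leffler E_{alpha,beta}(x) = sum_k x^k / Gamma(alpha*k + beta),
    by direct series truncation (adequate for moderate |x|)."""
    s = 0.0
    for k in range(terms):
        a = alpha * k + beta
        if a > 170.0:  # avoid math.gamma overflow; the remaining tail is negligible here
            break
        s += x ** k / math.gamma(a)
    return s

def gronwall_bound(f_t, g_t, t, alpha, gamma):
    """Right-hand side of (2.44): Gamma(1-gamma) * f(t) * E_{alpha,1-gamma}(g(t) t^alpha)."""
    return math.gamma(1.0 - gamma) * f_t * mittag_leffler(alpha, 1.0 - gamma, g_t * t ** alpha)

# alpha = 1, gamma = 0: the bound collapses to the classical f(t) * exp(g(t) * t)
b = gronwall_bound(1.0, 2.0, 1.5, alpha=1.0, gamma=0.0)
```

Here b equals E_{1,1}(3) = e³, matching the classical case [196, Theorem 1.3.1].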

2.6 Continuous dependence on the initial data

As the first consequence of the generalized Grönwall inequality, we can establish the continuous dependence of the solutions of a fractional Cauchy problem on the initial data. Let us first consider the Riemann–Liouville case.

Theorem 2.40. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ+0 × ℝ → ℝ satisfies conditions (a) and (b) of Theorem 2.23. Let d_1^{(i)}, …, d_n^{(i)} ∈ ℝ, i = 1, 2, and denote by y_i ∈ 𝒞(ℝ+0; w_{0,n−α}) the solutions of the equations

    { D^α_{0+} y(t) = F(t, y(t)),            t > 0,
    { D^{α−k}_{0+} y(0) = d_k^{(i)},         k = 1, …, n − 1,
    { I^{n−α}_{0+} y(0) = d_n^{(i)},                             (2.49)

where we do not need the second equation in (2.49) if n = 1. Then for each T > 0, there exists a constant C > 0 such that

    ‖y1 − y2‖_{𝒞([0,T]; w_{0,n−α})} ≤ C ∑_{j=1}^n |d_j^{(1)} − d_j^{(2)}|.

Proof. Proposition 2.20 implies that for all t ∈ [0, T],

    |y1(t) − y2(t)| ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 1)) t^{α−j} + (1/Γ(α)) ∫_0^t (t − τ)^{α−1} |F(τ, y1(τ)) − F(τ, y2(τ))| dτ
        ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 1)) t^{α−j} + (L/Γ(α)) ∫_0^t (t − τ)^{α−1} τ^{α−n} τ^{n−α} |y1(τ) − y2(τ)| dτ.

Now let ȳ(τ) = τ^{n−α} |y1(τ) − y2(τ)|. Multiplying both sides of the previous inequality by t^{n−α}, we get that

    ȳ(t) ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 1)) t^{n−j} + (L t^{n−α} / Γ(α)) ∫_0^t (t − τ)^{α−1} τ^{α−n} ȳ(τ) dτ.

Note that ȳ ∈ 𝒞(ℝ+0). Therefore from Theorem 2.38 (with γ = n − α) we get that

    ȳ(t) ≤ Γ(1 − n + α) ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 1)) t^{n−j} E_{α,1−γ}(L t^α)
        ≤ max{T^n, 1} (Γ(1 − n + α) / Γ(2 − n + α)) E_{α,1−γ}(L T^α) ∑_{j=1}^n |d_j^{(1)} − d_j^{(2)}|.

Setting C := (max{T^n, 1} E_{α,1−γ}(L T^α)) / (1 − n + α) and taking the supremum in t ∈ [0, T], we conclude the proof.

A similar argument holds for a. e. solutions, as we will show in the following theorem.

Theorem 2.41. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ+0 × ℝ → ℝ satisfies assumption (b) of Theorem 2.23 and assumption (c) of Theorem 2.28. Let d_1^{(i)}, …, d_n^{(i)} ∈ ℝ, i = 1, 2, and denote by y_i ∈ L¹_loc(ℝ+0) the a. e. solutions of

    { D^α_{0+} y(t) = F(t, y(t)),            t > 0,
    { D^{α−k}_{0+} y(0) = d_k^{(i)},         k = 1, …, n − 1,
    { I^{n−α}_{0+} y(0) = d_n^{(i)},                             (2.50)

where we do not need the second equation in (2.50) if n = 1. Then for each T > 0, there exists a constant C > 0 such that

    ‖y1 − y2‖_{L¹(0,T)} ≤ C ∑_{j=1}^n |d_j^{(1)} − d_j^{(2)}|.

Proof. As in the previous theorem, note that for all s ∈ [0, t],

    |y1(s) − y2(s)| ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 1)) s^{α−j} + (L/Γ(α)) ∫_0^s (s − τ)^{α−1} |y1(τ) − y2(τ)| dτ.

Integrating both sides of this inequality and setting Y(t) = ∫_0^t |y1(s) − y2(s)| ds, we get that

    Y(t) ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} + (L/Γ(α)) ∫_0^t ∫_0^s (s − τ)^{α−1} |y1(τ) − y2(τ)| dτ ds
        = ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} + (L/Γ(α)) ∫_0^t ∫_0^s τ^{α−1} |y1(s − τ) − y2(s − τ)| dτ ds
        = ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} + (L/Γ(α)) ∫_0^t τ^{α−1} ∫_τ^t |y1(s − τ) − y2(s − τ)| ds dτ
        = ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} + (L/Γ(α)) ∫_0^t τ^{α−1} Y(t − τ) dτ
        = ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} + (L/Γ(α)) ∫_0^t (t − τ)^{α−1} Y(τ) dτ.

Thus by Theorem 2.38 we get

    Y(t) ≤ ∑_{j=1}^n (|d_j^{(1)} − d_j^{(2)}| / Γ(α − j + 2)) t^{α−j+1} E_α(L t^α) ≤ (max{T^{α+1}, 1} / Γ(α − n + 2)) E_α(L T^α) ∑_{j=1}^n |d_j^{(1)} − d_j^{(2)}|.

Setting C = (max{T^{α+1}, 1} / Γ(α − n + 2)) E_α(L T^α), we complete the proof.

Finally, we can get the same result in the Dzhrbashyan–Caputo case, as the following theorem states.

Theorem 2.42. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Assume that F : ℝ+0 × ℝ → ℝ satisfies conditions (a_γ) of Theorem 2.32 and (b_0) of Corollary 2.34. Let d_0^{(i)}, …, d_{n−1}^{(i)} ∈ ℝ, i = 1, 2, and denote by y_i ∈ 𝒞(ℝ+0) the solutions of the equations

    { ^C D^α_{0+} y(t) = F(t, y(t)),   t > 0,
    { y^{(k)}(0) = d_k^{(i)},          k = 0, …, n − 1, i = 1, 2.

Then for each T > 0, there exists a constant C > 0 such that

    ‖y1 − y2‖_{𝒞[0,T]} ≤ C ∑_{j=0}^{n−1} |d_j^{(1)} − d_j^{(2)}|.

Proof. We know from Proposition 2.31 that

    |y1(t) − y2(t)| ≤ ∑_{j=0}^{n−1} (|d_j^{(1)} − d_j^{(2)}| / j!) t^j + (L/Γ(α)) ∫_0^t (t − τ)^{α−1} |y1(τ) − y2(τ)| dτ.

Theorem 2.38 allows us to write

    |y1(t) − y2(t)| ≤ ∑_{j=0}^{n−1} (|d_j^{(1)} − d_j^{(2)}| / j!) t^j E_α(L t^α) ≤ max{T^n, 1} E_α(L T^α) ∑_{j=0}^{n−1} |d_j^{(1)} − d_j^{(2)}|.

Setting C = max{T^n, 1} E_α(L T^α) and taking the supremum in t ∈ [0, T], we complete the proof.

Remark 2.43. Theorem 2.42 holds even if F satisfies condition (b_γ) instead of (b_0). The proof is given in Exercise 2.28.
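For the linear Caputo equation ^C D^α_{0+} y = λ y with y(0) = d and α ∈ (0, 1), the solution is the standard closed form y(t) = d E_α(λ t^α), so the bound of Theorem 2.42 can be checked directly: the sup-distance between two solutions is exactly |d^{(1)} − d^{(2)}| E_α(λ T^α), which sits below C |d^{(1)} − d^{(2)}| with C = max{T^n, 1} E_α(L T^α). A hedged numerical illustration (the grid size is our choice):

```python
import math

def E_alpha(alpha, x, terms=150):
    """One-parameter Mittag-Leffler E_alpha(x) via truncated series."""
    s = 0.0
    for k in range(terms):
        a = alpha * k + 1.0
        if a > 170.0:  # avoid math.gamma overflow; the remaining tail is negligible here
            break
        s += x ** k / math.gamma(a)
    return s

alpha, lam, T = 0.7, 0.5, 2.0
d1, d2 = 1.0, 1.001
# exact solutions of  C-D^alpha y = lam * y, y(0) = d:  y(t) = d * E_alpha(lam * t^alpha)
ts = [T * i / 100 for i in range(101)]
sup_diff = max(abs((d1 - d2) * E_alpha(alpha, lam * t ** alpha)) for t in ts)
# constant from Theorem 2.42 with L = lam and n = 1
C = max(T, 1.0) * E_alpha(alpha, lam * T ** alpha)
```

Since E_α(λ t^α) is increasing in t for λ > 0, the supremum is attained at t = T, and the Lipschitz estimate holds with room to spare.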

2.7 Linear fractional differential equations with Riemann–Liouville fractional derivatives

Now let us investigate simple cases in which we can explicitly exhibit the solutions. More precisely, in this and the following section, we will focus on the linear case, i. e., on equations of the form

    b_0 y(t) + ∑_{j=1}^k b_j D^{α_j}_{0+} y(t) = f(t),   t ≥ 0.

To deal with such an equation, we will use the Laplace transform. Let us first examine how the Laplace transform operator ℒ acts on fractional derivatives and integrals.

Lemma 2.44. Let y : ℝ+0 → ℝ be Laplace transformable with abscissa of convergence abs(y) = z0. Also, let α > 0. Then:
(i) I^α_{0+} y is Laplace transformable with abscissa of convergence abs(I^α_{0+} y) ≤ max{0, z0}. Moreover, for all z ∈ ℂ such that ℜ(z) > max{0, z0},

    ℒ[I^α_{0+} y](z) = z^{−α} ℒ[y](z).

(ii) Assume that D^α_{0+} y ∈ L¹(ℝ+0) and is Laplace transformable with abs(D^α_{0+} y) = z1. Then for all z ∈ ℂ such that ℜ(z) > max{0, z0, z1},

    ℒ[D^α_{0+} y](z) = z^α ℒ[y](z) − z^{⌊α⌋} (I^{⌊α⌋+1−α}_{0+} y)(0) − ∑_{k=1}^{⌊α⌋} z^{k−1} (D^{α−k}_{0+} y)(0).

Proof. The proof of item (i) is presented in Exercise 2.3. Let us prove item (ii). Let n = ⌊α⌋ + 1 and note that according to (i), y and consequently Y = I^{n−α}_{0+} y are Laplace transformable with abs(Y) ≤ max{0, z0}. Moreover, the conditions of the theorem supply that D^α_{0+} y = Y^{(n)} is Laplace transformable with abscissa of convergence abs(Y^{(n)}) = abs(D^α_{0+} y) = z1. Thus by the properties of the Laplace transform (see Proposition A.23) we know that for all z ∈ ℂ such that ℜ(z) > max{0, z0, z1},

    ℒ[Y^{(n)}](z) = z^n ℒ[Y](z) − ∑_{k=0}^{n−1} z^{n−1−k} Y^{(k)}(0).      (2.51)

Let us first evaluate Y^{(k)}(0) for k = 0, …, n − 1. If k = 0, then Y^{(k)}(0) = (I^{n−α}_{0+} y)(0). If k ≥ 1, then

    n − α = k + (n − k) − α = k − (α − (n − k));

moreover,

    ⌊α − (n − k)⌋ = ⌊α⌋ − (n − k) = n − 1 − n + k = k − 1.

Hence, if we set α_k = α − (n − k), then n − α = k − α_k, and we can write

    Y = I^{n−α}_{0+} y = I^{k−α_k}_{0+} y.

Finally, we get

    Y^{(k)}(0) = (D^{α_k}_{0+} y)(0) = (D^{α−(n−k)}_{0+} y)(0).      (2.52)

On the other hand, we know from item (i) that for all z ∈ ℂ with ℜ(z) > max{0, z0, z1}, we have

    ℒ[Y](z) = z^{α−n} ℒ[y](z).      (2.53)

Plugging (2.52) and (2.53) into (2.51), we complete the proof.

Remark 2.45. Assume that y ∈ I^α(L^p(0, +∞)) for some α ∈ (0, 1) and p ≥ 1, y is Laplace transformable with abs(y) = z0, and D^α_{0+} y is Laplace transformable with abs(D^α_{0+} y) = z1. Then z0 ≤ max{0, z1}. Indeed, we can write y = I^α_{0+} D^α_{0+} y and apply item (i) of Lemma 2.44,


which guarantees the following upper bound:

    z0 = abs(y) = abs(I^α_{0+} D^α_{0+} y) ≤ max{0, z1}.
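Item (i) of Lemma 2.44 is easy to sanity-check numerically. A sketch with f(t) = t^{β−1}, for which I^α_{0+} f(t) = (Γ(β)/Γ(β + α)) t^{β+α−1}: we approximate ℒ[I^α_{0+} f](z) by a truncated trapezoid rule (grid and truncation point are our choices) and compare with z^{−α} ℒ[f](z) = Γ(β) z^{−α−β}:

```python
import math

def laplace_num(func, z, T=40.0, n=20000):
    """Truncated trapezoid approximation of the Laplace transform
    int_0^T exp(-z*t) func(t) dt (crude, but adequate for z well above 0)."""
    h = T / n
    s = 0.5 * (func(0.0) + func(T) * math.exp(-z * T))
    for i in range(1, n):
        t = i * h
        s += func(t) * math.exp(-z * t)
    return s * h

alpha, beta, z = 0.5, 2.0, 1.5
# f(t) = t^(beta-1);  I^alpha_{0+} f(t) = Gamma(beta)/Gamma(beta+alpha) * t^(beta+alpha-1)
If = lambda t: math.gamma(beta) / math.gamma(beta + alpha) * t ** (beta + alpha - 1)
lhs = laplace_num(If, z)                                  # L[I^alpha f](z), numerically
rhs = z ** (-alpha) * math.gamma(beta) * z ** (-beta)     # z^{-alpha} * L[f](z)
```

Both sides equal Γ(β) z^{−α−β}, in agreement with the lemma.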

Now let α > 0 and λ ∈ ℝ. Put n = ⌊α⌋ + 1, and fix d1, …, dn ∈ ℝ and f ∈ 𝒞(ℝ+0; w_{0,n−α}). Consider the simple Cauchy problem

    { D^α_{0+} y(t) − λ y(t) = f(t),   t > 0,
    { D^{α−k}_{0+} y(0) = d_k,         k = 1, …, n − 1,
    { I^{n−α}_{0+} y(0) = d_n,                             (2.54)

where we do not need the second equation in (2.54) if n = 1. We know from Theorem 2.23 that (2.54) has a unique global solution y ∈ 𝒞(ℝ+0; w_{0,n−α}). We would like to take this solution and apply the Laplace transform to both sides of the first equation in (2.54), having in mind Lemma 2.44. To do this, however, we first need to prove that y and D^α_{0+} y are Laplace transformable. In fact, it suffices to prove that y and D^α_{0+} y are exponentially bounded. Let us recall that a function f : (0, +∞) → ℝ is exponentially bounded if there exist three constants a, M, C > 0 such that |f(t)| ≤ Ce^{at} for a. a. t > M. It is clear that if f is exponentially bounded, then it is also Laplace transformable. More details on the link between exponential bounds and Laplace transforms are given in Appendix A.3. Now let us prove that if f is exponentially bounded, then so are y and D^α_{0+} y. Once we do this, we will be allowed to use the Laplace transform to solve (2.54). The following theorem generalizes to the Riemann–Liouville case the main result of [115], where the Dzhrbashyan–Caputo case is considered.

Theorem 2.46. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Also, let f ∈ 𝒞(ℝ+0; w_{0,n−α}) be exponentially bounded. Then for all λ ∈ ℝ, the Cauchy problem

    { D^α_{0+} y(t) = λ y(t) + f(t),   t > 0,
    { D^{α−k}_{0+} y(0) = d_k,         k = 1, …, n − 1,
    { I^{n−α}_{0+} y(0) = d_n,                             (2.55)

where we do not need the second equation in (2.55) if n = 1, has a unique solution y ∈ 𝒞 (ℝ+0 ; w0,n−α ) such that both y and Dα0+ y are exponentially bounded. Proof. First, note that the function F(t, y) = λy + f (t) satisfies conditions (a) and (b) from Theorem 2.23. Therefore there exists a unique solution y ∈ 𝒞 (ℝ+0 ; w0,n−α ). Let us assume that for some σ, M1 > 0, |f (t)| ≤ M1 eσt for all t ≥ t0 . Moreover, since f , y ∈ 𝒞 ([0, t0 ]; w0,n−α ), we know that there exists a constant M2 > 0 such that |f (t)| ≤ M2 t α−n and |y(t)| ≤ M2 t α−n

160 � 2 Integral and differential equations involving fractional operators for all t ∈ [0, t0 ]. Also, we know from Proposition 2.20 that t

t

0

0

n dj t α−j |λ| 1 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ 󵄨󵄨y(t)󵄨󵄨󵄨 ≤ ∑ Γ(α − j + 1) Γ(α) Γ(α) j=1

dj t

n

≤∑ j=1

t0

α−j

Γ(α − j + 1)

+

t

|λ| |λ| 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ Γ(α) Γ(α) t0

0

t0

+

t

1 1 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ Γ(α) Γ(α) t0

0

n

≤∑ j=1

dj t

t

α−j

Γ(α − j + 1)

+

|λ| 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ Γ(α) t0

t0

t

0

t0

M (|λ| + 1) M + 2 ∫(t − τ)α−1 τ α−n dτ + 1 ∫(t − τ)α−1 eστ dτ Γ(α) Γ(α) t

= I0 (t) +

|λ| 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + I1 (t) + I2 (t) Γ(α) t0

t−t0

|λ| 󵄨 󵄨 = I0 (t) + ∫ (t − t0 − τ)α−1 󵄨󵄨󵄨y(τ + t0 )󵄨󵄨󵄨dτ + I1 (t) + I2 (t), Γ(α) 0

where n

I0 (t) = ∑ j=1

dj t α−j

Γ(α − j + 1)

t0

M (|λ| + 1) I1 (t) = 2 ∫(t − τ)α−1 τ α−n dτ, Γ(α)

,

0

t

I2 (t) =

M1 ∫(t − τ)α−1 eστ dτ. Γ(α) t0

Setting y(τ) = |y(τ + t0 )| and observing that |y(t)| = y(t − t0 ), we get that for t ≥ t0 , t−t0

y(t − t0 ) ≤ I0 (t) +

|λ| 󵄨 󵄨 ∫ (t − t0 − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + I1 (t) + I2 (t), Γ(α)

(2.56)

0

and this inequality can be rewritten as t

|λ| 󵄨 󵄨 y(t) ≤ I0 (t + t0 ) + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + I1 (t + t0 ) + I2 (t + t0 ) Γ(α) 0

(2.57)


for t ≥ 0. Note that there exists a constant M3 > 0 such that I0 (t + t0 ) ≤ M3 e2σt

(2.58)

for all t ≥ 0. According to Exercise 1.2, I1 (t + t0 ) = M2 (1 + |λ|)(t + t0 )α−1

t0 Γ(α − n + 1) α−n+1 t ). 2 F1 (1 − α, α − n + 1; α − n + 2; Γ(α − n + 2) 0 t0 + t (2.59)

Denote a = 1 − α, b = α − n + 1, and c = α − n + 2. Then c − a − b = α > 0. Applying Theorem B.10, we get that for all |x| ≤ 1, 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨2 F1 (1 − α, α − n + 1; α − n + 2; x)󵄨󵄨󵄨 = 󵄨󵄨󵄨2 F1 (a, b; c; x)󵄨󵄨󵄨 Γ(c)Γ(c − a − b) Γ(α − n + 2)Γ(α) ≤ = . Γ(c − a)Γ(c − b) Γ(2α − n + 1) In particular, for t ≥ 0, we have that 0 ≤

t0 t0 +t

≤ 1. Therefore

t0 ) t0 + t 󵄨󵄨 󵄨󵄨 Γ(α − n + 2)Γ(α) t 󵄨 󵄨 ≤ 󵄨󵄨󵄨2 F1 (1 − α, α − n + 1; α − n + 2; 0 )󵄨󵄨󵄨 ≤ . 󵄨󵄨 t0 + t 󵄨󵄨 Γ(2α − n + 1)

2 F1 (1

− α, α − n + 1; α − n + 2;

(2.60)

Combining (2.60) with (2.59), we conclude that for all t ≥ 0, I1 (t + t0 ) ≤

Γ(α − n + 1)Γ(α) M2 (1 + |λ|)(t + t0 )α−1 t0α−n+1 . Γ(2α − n + 1)

Finally, since t0 > 0 is fixed, there exists a constant M4 > 0 such that for all t ≥ 0, I1 (t + t0 ) ≤ M4 e2σt .

(2.61)

To handle I2 , let us note that eστ ≤ eσ(t+t0 ) for all τ ∈ [t0 , t0 + t], and then it follows that t+t0

M1 σ(t+t0 ) I2 (t + t0 ) ≤ e ∫ (t + t0 − τ)α−1 dτ Γ(α) t0

eσt0 M1 supt≥0 (t α e−σt ) 2σt M1 t eσ(t+t0 ) ≤ e . Γ(α + 1) Γ(α + 1) α

= Setting M5 :=

eσt0 M1 supt≥0 (t α e−σt ) , Γ(α+1)

(2.62)

we get I2 (t + t0 ) ≤ M5 e2σt

(2.63)

162 � 2 Integral and differential equations involving fractional operators for all t ≥ 0. Plugging (2.58), (2.61), and (2.63) into (2.57), we get that for some constant M6 > 0, y(t) ≤ M6 e

2σt

t

|λ| 󵄨 󵄨 + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ, Γ(α) 0

and this relation, combined with Theorem 2.38, implies that y(t) ≤ M6 e2σt Eα (|λ|t α ) ,

t ≥ 0.

(2.64)

Furthermore, using Corollary 2.14 with β = 1, we claim the existence of a constant M7 > 0 such that for all t ≥ 1, 1 α

Eα (|λ|t α ) ≤ M7 e|λ| t . Plugging this bound into (2.64) and setting M8 = M6 M7 , we obtain that y(t) ≤ M8 e(2σ+|λ|

1 α

)t

,

t ≥ 1,

and thus y is exponentially bounded. Finally, note that for t ≥ t0 + 1, 1

󵄨󵄨 α 󵄨 󵄨 󵄨 󵄨 󵄨 2(σ+|λ| α )t , 󵄨󵄨D0+ y(t)󵄨󵄨󵄨 ≤ |λ|󵄨󵄨󵄨y(t)󵄨󵄨󵄨 + 󵄨󵄨󵄨f (t)󵄨󵄨󵄨 ≤ M12 e concluding the proof. In fact, we can relax the conditions on the function f . More precisely, if we restrict ourselves to global a. e. solutions, then we can prove the same result for exponentially bounded f ∈ L1loc (ℝ+0 ). Theorem 2.47. Let α > 0, α ∈ ̸ ℕ, n = ⌊α⌋ + 1, and d1 , . . . , dn ∈ ℝ, and let f ∈ L1loc (ℝ+0 ) be an exponentially bounded function. Then the linear fractional Cauchy problem α

D y(t) − λy(t) = f (t), { { { 0+ Dα−k y(0) = dk , { { 0+ { n−α {I0+ y(0) = dn ,

t > 0,

k = 1, . . . , n − 1,

(2.65)

where we do not need the second equation in (2.65) if n = 1, has a unique a. e. solution y ∈ L1loc (ℝ+0 ), and both y and Dα0+ y are exponentially bounded. Proof. First of all, we emphasize that F(t, y) = λy + f (t) satisfies the assumptions of Theorem 2.28, and hence there exists a unique a. e. solution y ∈ L1loc (ℝ+0 ).


Let us make sure that y and Dα0+ y are exponentially bounded. Consider σ, M1 , t0 > 0 such that |f (t)| ≤ M1 eσt for all t ≥ t0 . Then t

t

n dj |λ| 1 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 t α−j + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ 󵄨󵄨y(t)󵄨󵄨󵄨 ≤ ∑ Γ(α − j + 1) Γ(α) Γ(α) j=1

dj t α−j

n

≤∑ j=1

+

0

t0

Γ(α − j + 1)

+

0

t

|λ| |λ| 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ Γ(α) Γ(α) t0

0

t0

t

0

t0

1 1 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ + ∫(t − τ)α−1 󵄨󵄨󵄨f (τ)󵄨󵄨󵄨dτ. Γ(α) Γ(α)

Now let us distinguish two cases. If α > 1, then there exists a constant M2 > 0 such that (t − τ)α−1 ≤ M2 eσ(t−τ) ≤ M2 eσt for all t ≥ τ ≥ 0. Hence n M2 (|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )eσt dj 󵄨󵄨 󵄨 t α−j + 󵄨󵄨y(t)󵄨󵄨󵄨 ≤ ∑ Γ(α − j + 1) Γ(α) j=1

t

+

t

M |λ| 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + 1 ∫(t − τ)α−1 eστ dτ Γ(α) Γ(α) t0

t0

t−t0

= I0 (t) +

|λ| 󵄨 󵄨 ∫ (t − t0 − τ)α−1 󵄨󵄨󵄨y(τ + t0 )󵄨󵄨󵄨dτ + I1 (t), Γ(α) 0

where n

I0 (t) = ∑ j=1

dj

Γ(α − j + 1)

M2 (|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )eσt

t α−j +

Γ(α)

,

(2.66)

and t

I1 (t) =

M1 ∫(t − τ)α−1 eστ dτ. Γ(α)

(2.67)

t0

Setting y(τ) = |y(τ + t0 )|, we get the bound t

|λ| y(t) ≤ I0 (t + t0 ) + ∫(t − τ)α−1 y(τ)dτ + I1 (t + t0 ). Γ(α)

(2.68)

0

It is clear that for t ≥ 0, n

I0 (t + t0 ) = eσt (∑ j=1

dj

Γ(α − j + 1)

(t + t0 )α−j e−σt +

M2 (|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )eσt0 Γ(α)

) ≤ M3 eσt ,

164 � 2 Integral and differential equations involving fractional operators where n

M3 = ∑

dj supt≥0 ((t + t0 )α−j e−σt ) Γ(α − j + 1)

j=1

+

M2 (|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )eσt0 Γ(α)

.

Moreover, with the same argument as in (2.62), we get I1 (t + t0 ) ≤ M4 e2σt

for all t ≥ 0, where

M4 :=

M1 eσt0 (t α e−σt ) , Γ(α + 1)

(2.69)

which, together with (2.68), implies that y(t) ≤ (M3 + M4 )e

2σt

t

|λ| + ∫(t − τ)α−1 y(τ)dτ Γ(α) 0

for a. a. t ≥ 0. In turn, Theorem 2.38 and Remark 2.39 imply that y(t) ≤ (M3 + M4 )e2σt Eα (|λ|t α ) for a. a. t ≥ 0. Applying Corollary 2.14 with β = 1, we get that for some constant M5 > 0, y(t) ≤ M5 e2(σ+|λ|

1 α

)t

for a. a. t ≥ 1, which means that y is exponentially bounded. If α ∈ (0, 1), then we must reason differently. In this case, dj

n

󵄨󵄨 󵄨 󵄨󵄨y(t)󵄨󵄨󵄨 ≤ ∑ j=1

Γ(α − j + 1)

t

α−j

+

(|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )(t − t0 )α−1

t

Γ(α)

t

M |λ| 󵄨 󵄨 + ∫(t − τ)α−1 󵄨󵄨󵄨y(τ)󵄨󵄨󵄨dτ + 1 ∫(t − τ)α−1 eστ dτ Γ(α) Γ(α) t0

t0

t−t0

= I0 (t) +

|λ| 󵄨 󵄨 ∫ (t − t0 − τ)α−1 󵄨󵄨󵄨y(τ + t0 )󵄨󵄨󵄨dτ + I1 (t), Γ(α) 0

where n

I0 (t) = ∑ j=1

dj t α−j

Γ(α − j + 1)

+

(|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) )(t − t0 )α−1 Γ(α)

,

(2.70)

t

I1 (t) =

M1 ∫(t − τ)α−1 eστ dτ. Γ(α) t0

(2.71)

2.7 Linear fractional differential equations with Riemann–Liouville fractional derivatives

Therefore



165

t

|λ| ∫(t − τ)α−1 y(τ)dτ + I1 (t + t0 ). Γ(α)

y(t) ≤ I0 (t + t0 ) +

0

This time, we first use (2.43) and get that t

|λ|k y(t) ≤ I0 (t + t0 ) + I1 (t + t0 ) + ∑ ∫(t − s)kα−1 (I0 (s + t0 ) + I1 (s + t0 ))ds Γ(kα) k=1 +∞

0

t

|λ|k ≤ I0 (t + t0 ) + I1 (t + t0 ) + ∑ ∫(t − s)kα−1 I0 (s + t0 )ds Γ(kα) k=1 +∞

+∞

+∑

k=1

0

t

|λ|k ∫(t − s)kα−1 I1 (s + t0 )ds Γ(kα)

(2.72)

0

for a. a. t > 0. Now recall that by (2.69) for all s ≥ 0, I1 (s + t0 ) ≤ M4 e2σs ,

(2.73)

which means that t

t

0

0

+∞ |λ|k |λ|k ∑ ∫(t − s)kα−1 I1 (s + t0 )ds ≤ M4 ∑ ∫(t − s)kα−1 e2σs ds ≤ M4 e2σt Eα (|λ|t α ) . Γ(kα) Γ(kα) k=1 k=1

+∞

(2.74)

Regarding I0 , note that for all s ≥ 0, I0 (s + t0 ) ≤ M6 eσs + M7 sα−1 , where n

M6 = ∑

dj sups≥0 ((s + t0 )α−j e−σt ) Γ(α − j + 1)

j=1

M7 =

,

|λ|‖y‖L1 (0,t0 ) + ‖f ‖L1 (0,t0 ) Γ(α)

.

Therefore I0 (t + t0 ) ≤ M8 eσt ≤ M8 e2σt

for t ≥ 1, where

M8 := M6 + M7 sup (sα−1 e−σs ) . s≥1

Moreover, by Lemma 1.11 t

∫(t − s)

kα−1

I0 (s + t0 )ds ≤ M6 e

2σt t

0

= M6 e2σt



kα kα

t

+ M7 ∫(t − s)kα−1 sα−1 ds 0

M Γ(kα)Γ(α) (k+1)α−1 t + 7 t . kα Γ((k + 1)α)

(2.75)

166 � 2 Integral and differential equations involving fractional operators Summing up, we obtain the following upper bound: +∞



k=1

t

+∞ +∞ |λ|k (|λ|t α )k (|λ|t α )k + M7 Γ(α)t α−1 ∑ ∫(t − s)kα−1 I0 (s + t0 )ds ≤ M6 e2σt ∑ Γ(kα) Γ(kα) Γ(kα + α) k=1 k=1 0

+∞ (|λ|t α )k (|λ|t α )k + M7 Γ(α)t α−1 ∑ Γ(kα) Γ(kα + α) k=0 k=0 +∞

≤ M6 e2σt ∑

= M6 e2σt Eα (|λ|t α ) + M7 Γ(α)t α−1 Eα,α (|λ|t α ) ,

(2.76)

where we recall that Eα,α is the two-parameter Mittag-Leffler function (2.4). Plugging (2.73), (2.74), (2.75) and (2.76) into (2.72) and setting M8 := M4 +M6 , we get the upper bound y(t) ≤ M4 e2σt + M8 e2σt Eα (|λ|t α ) + M7 Γ(α)t α−1 Eα,α (|λ|t α ) for a. a. t ≥ 1. Using Corollary 2.14 with β = 1 and with β = α, we get two constants M9 , M10 > 0 such that for all t ≥ 1, Eα (|λ|t α ) ≤ M9 e|λ|

1 α

t

t α−1 Eα,α (|λ|t α ) ≤ M10 |λ|

and

1−α α

1 α

e|λ| t ,

respectively. Therefore y(t) ≤ M4 e2σt + M8 M9 e2σt e|λ|

1 α

t

+ M7 Γ(α)M10 e|λ|

1 α

t

≤ M11 e2(|λ|

1 α

+σ)t

1−α

for a. a. t ≥ 1, where M11 = M4 + M8 M9 + M7 M10 |λ| α Γ(α). Thus we can conclude that y is exponentially bounded. To prove that Dα0+ y is exponentially bounded, just notice that for some constant M12 > 0, 1

󵄨󵄨 α 󵄨 󵄨 󵄨 󵄨 󵄨 2(|λ| α +σ)t 󵄨󵄨D0+ y(t)󵄨󵄨󵄨 ≤ |λ|󵄨󵄨󵄨y(t)󵄨󵄨󵄨 + 󵄨󵄨󵄨f (t)󵄨󵄨󵄨 ≤ M12 e for all t ≥ t0 + 1. Now, let f ≡ 0. So, we consider the Cauchy problem Dα y(t) − λy(t) = 0, { { { 0+ Dα−k { 0+ y(0) = dk , { { n−α {I0+ y(0) = dn ,

t > 0,

k = 1, . . . , n − 1,

(2.77)

where we do not need the second equation in (2.77) if n = 1. By Theorem 2.46, since f ≡ 0 belongs to 𝒞 (ℝ+0 ; w0,γ ) for all γ ∈ [0, 1), we know that this Cauchy problem admits a unique global solution y that is exponentially bounded and such that Dα0+ y is exponentially bounded. Hence we can use the Laplace transform to determine such a solution. Once we apply the Laplace transform to both sides of the first equation in (2.77)


(this can be done thanks to Theorem 2.46), we get from Lemma 2.44 that for all real z > max{0, abs(D^α_{0+} y)}, ℒ[y] satisfies the equality

    z^α ℒ[y](z) − ∑_{k=1}^n z^{k−1} d_k − λ ℒ[y](z) = 0.

It can be rewritten as

    ℒ[y](z) = ∑_{k=1}^n (z^{k−1} / (z^α − λ)) d_k.      (2.78)

Denote ℒ[y_k](z) = z^{k−1}/(z^α − λ) and rewrite (2.78) as follows:

    ℒ[y](z) = ∑_{k=1}^n d_k ℒ[y_k](z).      (2.79)

This means that if we can find some functions y_1, …, y_n such that ℒ[y_k] are their Laplace transforms, then every solution y of (2.77) can be written as a linear combination of such functions. Moreover, each y_k can be obtained by considering d_j = 0 for each j ≠ k and d_k = 1. Let us now identify y_k, k = 1, …, n. This can simply be done by applying (2.8) and the injectivity of the Laplace transform operator, yielding

    y_k(t) = t^{α−k} E_{α,α+1−k}(λ t^α),   t ≥ 0.      (2.80)
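The basis functions (2.80) are straightforward to evaluate once E_{α,β} is available. A minimal sketch (series truncation and the overflow guard are our choices), with the sanity check that for λ = 0 each y_k(t) reduces to t^{α−k}/Γ(α + 1 − k):

```python
import math

def ml(alpha, beta, x, terms=150):
    """Two-parameter Mittag-Leffler E_{alpha,beta}(x), truncated series."""
    s = 0.0
    for k in range(terms):
        a = alpha * k + beta
        if a > 170.0:  # keep math.gamma in range; the remaining tail is negligible here
            break
        s += x ** k / math.gamma(a)
    return s

def y_k(alpha, k, lam, t):
    """Basis solution (2.80): y_k(t) = t^(alpha-k) * E_{alpha, alpha+1-k}(lam * t^alpha)."""
    return t ** (alpha - k) * ml(alpha, alpha + 1.0 - k, lam * t ** alpha)

# alpha in (1, 2): n = 2 basis functions y_1, y_2; at lam = 0 they reduce to
# t^(alpha-1)/Gamma(alpha) and t^(alpha-2)/Gamma(alpha-1)
alpha, t = 1.5, 0.8
v1 = y_k(alpha, 1, 0.0, t)
v2 = y_k(alpha, 2, 0.0, t)
```

A general solution of (2.77) is then the linear combination ∑_k d_k y_k(t), as (2.79) shows.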

Hence we have proved the following statement.

Proposition 2.48. Let α > 0, α ∉ ℕ, n = ⌊α⌋ + 1, and d1, …, dn ∈ ℝ. Then the linear fractional Cauchy problem

    { D^α_{0+} y(t) − λ y(t) = 0,   t > 0,
    { D^{α−k}_{0+} y(0) = d_k,      k = 1, …, n − 1,
    { I^{n−α}_{0+} y(0) = d_n,                             (2.81)

where we do not need the second equation in (2.81) if n = 1, has a unique global solution

    y(t) = ∑_{k=1}^n d_k t^{α−k} E_{α,α+1−k}(λ t^α),   t ≥ 0,      (2.82)

such that y ∈ 𝒞(ℝ+0; w_{0,n−α}).

Proposition 2.48 tells us that the solutions of the equation

    D^α_{0+} y(t) − λ y(t) = 0,   t > 0,      (2.83)

constitute a vector space 𝒮homo of dimension dim 𝒮homo ≤ n generated by {y_1, …, y_n} defined as in (2.80). We want to show that {y_1, …, y_n} is in fact a basis for 𝒮homo. To do

this, we only need to prove that they are linearly independent. In the context of ordinary differential equations, the Wronskian is introduced to handle linear dependence of functions. Let us construct a similar tool here. Namely, consider α > 0, n = ⌊α⌋ + 1, and a family of functions f_1, …, f_n : ℝ+ → ℝ such that I^{n−α}_{0+} f_j : ℝ+0 → ℝ and D^{α−k}_{0+} f_j : ℝ+0 → ℝ are well defined for all k = 1, …, n − 1 and j = 1, …, n. Then we can define, for t ≥ 0, the fractional Wronskian matrix as

                            ( D^{α−1}_{0+} f_1(t)     D^{α−1}_{0+} f_2(t)     ⋯   D^{α−1}_{0+} f_n(t)   )
                            ( D^{α−2}_{0+} f_1(t)     D^{α−2}_{0+} f_2(t)     ⋯   D^{α−2}_{0+} f_n(t)   )
    𝒲_α[f_1, …, f_n](t) =   (        ⋮                       ⋮                ⋱          ⋮              )
                            ( D^{α−n+1}_{0+} f_1(t)   D^{α−n+1}_{0+} f_2(t)   ⋯   D^{α−n+1}_{0+} f_n(t) )
                            ( I^{n−α}_{0+} f_1(t)     I^{n−α}_{0+} f_2(t)     ⋯   I^{n−α}_{0+} f_n(t)   )

and then the fractional Wronskian determinant (see, for instance, [119, Equation (4.2.35)]) as W_α[f_1, …, f_n](t) := det(𝒲_α[f_1, …, f_n](t)). The following proposition explains the relation between the fractional Wronskian W_α and the linear dependence of the solutions of (2.83).

Proposition 2.49. Let α > 0, α ∉ ℕ, and n = ⌊α⌋ + 1. Also, let y_1, …, y_n : ℝ+ → ℝ be global solutions of (2.83). The following properties are equivalent:
(i) y_1, …, y_n are linearly dependent;
(ii) W_α[y_1, …, y_n](t) = 0 for all t ≥ 0;
(iii) W_α[y_1, …, y_n](0) = 0.

Proof. Let us first prove that (i) implies (ii). Indeed, if y_1, …, y_n are linearly dependent, then there exist n constants C_1, …, C_n such that

    ∑_{j=1}^n C_j y_j(t) = 0,   t ≥ 0,      (2.84)

and there exists j0 such that C_{j0} ≠ 0. Applying the operators D^{α−k}_{0+}, k = 1, …, n − 1, and I^{n−α}_{0+} to (2.84), we obtain that

    ∑_{j=1}^n C_j ( D^{α−1}_{0+} y_j(t), …, D^{α−n+1}_{0+} y_j(t), I^{n−α}_{0+} y_j(t) )^T = 0,   t ≥ 0,      (2.85)

i. e., for all t ≥ 0, the columns of 𝒲_α[y_1, …, y_n](t) are linearly dependent, which implies that W_α[y_1, …, y_n](t) = 0 for all t ≥ 0.


The fact that (ii) implies (iii) is trivial, and hence let us prove that (iii) implies (i). To do this, denote C = (C_1, …, C_n)^T and consider the system of linear equations

    𝒲_α[y_1, …, y_n](0) C = 0.      (2.86)

Note that since W_α[y_1, …, y_n](0) = 0, there is at least one solution C ≠ 0. Fix such a solution C and put

    Y(t) = ∑_{j=1}^n C_j y_j(t),   t ≥ 0.      (2.87)

Applying I^{n−α}_{0+} to (2.87), we get from (2.86) that

    (I^{n−α}_{0+} Y)(0) = ∑_{j=1}^n C_j (I^{n−α}_{0+} y_j)(0) = 0.      (2.88)

Similarly, from (2.86) we get that for all k = 1, …, n − 1,

    (D^{α−k}_{0+} Y)(0) = ∑_{j=1}^n C_j (D^{α−k}_{0+} y_j)(0) = 0.      (2.89)

Finally, recalling that y_1, …, y_n are global solutions of (2.83), we have

    (D^α_{0+} Y)(t) = ∑_{j=1}^n C_j (D^α_{0+} y_j)(t) = λ ∑_{j=1}^n C_j y_j(t) = λ Y(t).

Hence Y is a global solution of

    { D^α_{0+} Y(t) = λ Y(t),   t > 0,
    { D^{α−k}_{0+} Y(0) = 0,    k = 1, …, n − 1,
    { I^{n−α}_{0+} Y(0) = 0,                             (2.90)

where we do not need the second equation in (2.90) if n = 1. However, we know from Theorem 2.46 that the previous Cauchy problem has a unique solution, and thus Y(t) = 0 for all t ≥ 0, i. e., ∑_{j=1}^n C_j y_j(t) = 0 for all t ≥ 0. Since C ≠ 0, this means that y_1, …, y_n are linearly dependent.

Remark 2.50. Similarly, if y_1, …, y_n : ℝ+0 → ℝ are global a. e. solutions of (2.83), then the following properties are equivalent:
(i) y_1, …, y_n are linearly dependent;
(ii′) W_α[y_1, …, y_n](t) = 0 for a. a. t ≥ 0;
(iii) W_α[y_1, …, y_n](0) = 0.

The fact that (i) implies (ii) in Proposition 2.49 holds without requiring y_1, …, y_n to be solutions of (2.83). The converse is not true in general, as we show in the following statement.

Proposition 2.51. Let α ∈ (1, 2). There exist two functions y_1, y_2 : ℝ+ → ℝ such that
(i) y_1 and y_2 are linearly independent;
(ii) I^{2−α}_{0+} y_j : ℝ+0 → ℝ and D^{α−1}_{0+} y_j : ℝ+0 → ℝ are well defined for j = 1, 2;
(iii) W_α[y_1, y_2](t) = 0 for all t ≥ 0.

Proof. Consider f_1(t) = (t − 1)|t − 1| and f_2(t) = (t − 1)² for t ≥ 0. Clearly, f_1, f_2 ∈ AC(ℝ+0), and therefore Proposition 1.28 allows us to define y_j(t) = (D^{2−α}_{0+} f_j)(t) for t > 0. From (1.49) and the fact that f_j(0) = (−1)^j, we get that for j = 1, 2,

    y_j(t) = (^C D^{2−α}_{0+} f_j)(t) + ((−1)^j / Γ(α + 1)) t^{α−2} = (I^{α−1}_{0+} f_j′)(t) + ((−1)^j / Γ(α + 1)) t^{α−2}.

First, we show that y1 and y2 satisfy (i). Therefore we need to prove that they are linearly independent, i. e., the equation C1 y1 (t) + C2 y2 (t) = 0,

t > 0,

(2.91)

has unique solution C1 = C2 = 0. To do this, let us first calculate yj . If j = 1, then t

y1 (t) =

2 1 t α−2 , ∫(t − τ)α−2 |τ − 1|dτ − Γ(α + 1) Γ(α + 1) 0

whereas for j = 2, t

2 1 y2 (t) = t α−2 . ∫(t − τ)α−2 (τ − 1)dτ + Γ(α + 1) Γ(α + 1) 0

Noting that y1 (t) = −y2 (t) for t < 1, we get that all solutions (2.91) are such that C1 = C2 =: C ∈ ℝ. Thus we only need to find C ∈ ℝ such that C(y1 (t) + y2 (t)) = 0,

t > 0.

It is clear that if we can prove that y1 (2) + y2 (2) ≠ 0, then it must be C = 0, and we have proved that C1 = C2 = 0. To do this, note that y1 (2) =

1

2

0

1

−2 2 2α−2 , ∫(2 − τ)α−2 (τ − 1)dτ + ∫(2 − τ)α−2 (τ − 1)dτ − Γ(α + 1) Γ(α + 1) Γ(α + 1)

2.7 Linear fractional differential equations with Riemann–Liouville fractional derivatives

171



whereas 1

2

0

1

2 2 2α−2 y2 (2) = , ∫(2 − τ)α−2 (τ − 1)dτ + ∫(2 − τ)α−2 (τ − 1)dτ + Γ(α + 1) Γ(α + 1) Γ(α + 1) and therefore 2

y1 (2) + y2 (2) =

4 ∫(t − τ)α−2 (τ − 1)dτ > 0. Γ(α + 1) 1

Thus we have shown that y1 and y2 are linearly independent. To prove (ii), we simply note that 2−α I0+ y j = fj

′ Dα−1 0+ yj = fj .

Finally, we prove (iii). Indeed, 𝒲α [y1 , y2 ](t) = (

2|t − 1| (t − 1)|t − 1|

2(t − 1) ) (t − 1)2

= 2|t − 1|(t − 1)2 − 2|t − 1|(t − 1)2 = 0 for all t ≥ 0. As a consequence of Proposition 2.49, we get the following statement. Corollary 2.52. The functions y1 , . . . , yn defined in (2.80) are linearly independent and constitute a basis for 𝒮homo . In particular, dim 𝒮homo = n. Proof. It is not difficult to check that 𝒲α [y1 , . . . , yn ](0) is the n × n identity matrix and thus Wα [y1 , . . . , yn ](0) = 1. Hence by Proposition 2.49 we get that y1 , . . . , yn are linearly independent. We also study the inhomogeneous case, i. e., the following Cauchy problem: α

D y(t) − λy(t) = f (t), { { { 0+ Dα−k { 0+ y(0) = dk , { { n−α I { 0+ y(0) = dn ,

t > 0,

k = 1, . . . , n − 1,

(2.92)

where f ∈ L1loc (ℝ+0 ) is exponentially bounded. We do not need the second equation in (2.92) if n = 1. Theorem 2.28 guarantees that there exists a unique global a. e. solution y ∈ L1loc (ℝ+0 ). Furthermore, Theorem 2.47 tells us that both y and Dα0+ y are exponentially bounded. Therefore we can apply the Laplace transform to both sides of (2.92). Due to Lemma 2.44, we know that for z > max{0, abs(Dα0+ y)}, the Laplace transform ℒ[y]

172 � 2 Integral and differential equations involving fractional operators satisfies the equation n

zα ℒ[y](z) − ∑ dk zk−1 − λℒ[y](z) = ℒ[f ](z). k=1

1

Assuming that ℜ(z) > λ α for λ > 0, the latter equation can be rewritten as n

ℒ[y](z) = ∑ dk ℒ[yk ](z) + ℒ[y1 ](z)ℒ[f ](z),

(2.93)

k=1

k−1

where ℒ[yk ](z) = zzα −λ are the Laplace transforms of yk (t) = t α−k Eα,α+1−k (λt α ). Moreover, by the properties of the Laplace transform (see Proposition A.23) it is clear that h(z) = ℒ[y1 ](z)ℒ[f ](z) is the Laplace transform of the convolution t

h(t) = ∫(t − s)α−1 Eα,α (λ(t − s)α ) f (s)ds.

(2.94)

0

This argument easily implies the following result.

Proposition 2.53. Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, $n = \lfloor \alpha \rfloor + 1$, and $d_1, \dots, d_n \in \mathbb{R}$, and let $f \in L^1_{\mathrm{loc}}(\mathbb{R}^+_0)$ be an exponentially bounded function. The linear fractional Cauchy problem
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), & t > 0, \\ D_{0+}^{\alpha-k} y(0) = d_k, & k = 1, \dots, n-1, \\ I_{0+}^{n-\alpha} y(0) = d_n, \end{cases} \tag{2.95}$$
where we do not need the second equation in (2.95) if $n = 1$, has a unique global a. e. solution
$$y(t) = \sum_{k=1}^{n} d_k t^{\alpha-k} E_{\alpha, \alpha+1-k}(\lambda t^{\alpha}) + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\left( \lambda (t-s)^{\alpha} \right) f(s) \, ds, \quad t \ge 0. \tag{2.96}$$
Furthermore, if $f \in \mathcal{C}(\mathbb{R}^+_0; w_{0,n-\alpha})$, then (2.96) is the unique global solution belonging to $\mathcal{C}(\mathbb{R}^+_0; w_{0,n-\alpha})$.

Remark 2.54. We emphasize that according to Corollary 2.52 and Proposition 2.53, the set $\mathcal{S}$ of solutions to the equation
$$D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), \quad t > 0,$$
is an $n$-dimensional affine space of the form $\mathcal{S} = h + \mathcal{S}_{\mathrm{homo}}$, where $h$ is defined in (2.94).
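The Mittag-Leffler functions appearing in the solution formula (2.96) are easy to evaluate for moderate arguments by truncating their defining series. The following Python sketch is ours, not from the book (the helper name `ml` is an assumption), and it checks the truncated series against two classical special cases, $E_{1,1}(z) = e^z$ and $E_{2,1}(z) = \cosh\sqrt{z}$.

```python
import math

def ml(z, alpha, beta=1.0, terms=80):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z),
    computed by truncating the defining power series
    E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).
    Adequate for moderate |z| only."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Classical special cases used as sanity checks:
# E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)) for z >= 0.
assert abs(ml(1.3, 1.0, 1.0) - math.exp(1.3)) < 1e-10
assert abs(ml(2.0, 2.0, 1.0) - math.cosh(math.sqrt(2.0))) < 1e-10
```

For large arguments the truncated series is numerically unstable, and dedicated algorithms (e.g., based on contour integration of the inverse Laplace transform, as discussed in Chapter 5) are preferable.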


The Laplace transform can in fact be used to define solutions to multiterm fractional differential equations, for example,
$$D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t) - \mu y(t) = f(t), \quad t \ge 0.$$
Whereas the case $\mu \neq 0$ is related to other special functions (we refer to [119]), the case $\mu = 0$ is again easily handled using the Mittag-Leffler functions. This case is discussed in Exercises 2.12 and 2.14.

2.8 Linear fractional differential equations with Dzhrbashyan–Caputo fractional derivatives

As in the study of equations with Riemann–Liouville derivatives, we want to use the Laplace transform to treat linear fractional differential equations involving the Dzhrbashyan–Caputo fractional derivative. To do this, we first show how the Laplace transform operator acts on such a fractional derivative.

Lemma 2.55. Let $\alpha > 0$, $n = \lfloor \alpha \rfloor + 1$, and $y \in \mathcal{C}^{n-1}(\mathbb{R}^+_0)$ with $y^{(n-1)} \in AC(\mathbb{R}^+_0)$. Assume that $y$ is Laplace transformable with $\mathrm{abs}(y) = z_0$. Suppose further that ${}^{C} D_{0+}^{\alpha} y$ is Laplace transformable with abscissa of convergence $\mathrm{abs}({}^{C} D_{0+}^{\alpha} y) =: z_1$. Then for all $z \in \mathbb{C}$ such that $\Re(z) > \max\{0, z_0, z_1\}$,
$$\mathcal{L}\left[{}^{C} D_{0+}^{\alpha} y\right](z) = z^{\alpha} \mathcal{L}[y](z) - \sum_{k=0}^{n-1} z^{\alpha-1-k} y^{(k)}(0). \tag{2.97}$$

Proof. This formula is well known for $\alpha \in \mathbb{N}$, since in such a case the Dzhrbashyan–Caputo derivative coincides with the usual one. Hence let us assume that $\alpha \notin \mathbb{N}$. Moreover, without loss of generality, we can assume that $z \in \mathbb{R}$. For $m_1 = 0, \dots, n-1$, $m_2 = 0, \dots, m_1$, and $t \ge 0$, set
$$p_{m_2, m_1}(t) := \sum_{j=m_2}^{m_1} \frac{y^{(j)}(0)}{j!} t^j, \qquad p_{m_1}(t) := p_{0, m_1}(t), \qquad \text{and} \qquad y_{m_1}(t) = y(t) - p_{m_1}(t).$$
Each $y_{m_1}$ is continuous and Laplace transformable. Furthermore, for all $k = 1, \dots, n-1$,
$$p_{n-1} = p_{n-1-k} + p_{n-k, n-1} \quad \text{and} \quad y_{n-1} = y_{n-1-k} - p_{n-k, n-1}.$$
Also, we know from (1.86) and (1.85) that
$$I_{0+}^{n-\alpha} y^{(n-k)} = {}^{C} D_{0+}^{\alpha-k} y = D_{0+}^{\alpha-k} y_{n-1-k} \quad \text{for all } k = 0, \dots, n-1.$$

Hence item (ii) from Lemma 2.44 supplies that for $z > \max\{0, z_0, z_1\}$,
$$\begin{aligned}
\mathcal{L}\left[{}^{C} D_{0+}^{\alpha} y\right](z) &= \mathcal{L}\left[D_{0+}^{\alpha} y_{n-1}\right](z) \\
&= z^{\alpha} \mathcal{L}[y_{n-1}](z) - z^{n-1} \left(I_{0+}^{n-\alpha} y_{n-1}\right)(0) - \sum_{k=1}^{n-1} z^{k-1} \left(D_{0+}^{\alpha-k} y_{n-1}\right)(0) \\
&= z^{\alpha} \mathcal{L}[y](z) - z^{\alpha} \mathcal{L}[p_{n-1}](z) - z^{n-1} \left(I_{0+}^{n-\alpha} y_{n-1}\right)(0) \\
&\quad - \sum_{k=1}^{n-1} z^{k-1} \left(D_{0+}^{\alpha-k} y_{n-1-k}\right)(0) + \sum_{k=1}^{n-1} z^{k-1} \left(D_{0+}^{\alpha-k} p_{n-k, n-1}\right)(0) \\
&= z^{\alpha} \mathcal{L}[y](z) - z^{\alpha} \mathcal{L}[p_{n-1}](z) - z^{n-1} \left(I_{0+}^{n-\alpha} y_{n-1}\right)(0) \\
&\quad - \sum_{k=1}^{n-1} z^{k-1} \left(I_{0+}^{n-\alpha} y^{(n-k)}\right)(0) + \sum_{k=1}^{n-1} z^{k-1} \left(D_{0+}^{\alpha-k} p_{n-k, n-1}\right)(0).
\end{aligned} \tag{2.98}$$
Note that $y_{n-1}$ and $y^{(n-k)}$ are continuous for $k = 1, \dots, n-1$. Therefore we can use Exercise 2.16 and conclude that
$$\left(I_{0+}^{n-\alpha} y_{n-1}\right)(0) = 0 \quad \text{and} \quad \left(I_{0+}^{n-\alpha} y^{(n-k)}\right)(0) = 0. \tag{2.99}$$
Furthermore, it follows from Lemma 1.11 that
$$\left(D_{0+}^{\alpha-k} p_{n-k, n-1}\right)(t) = \sum_{j=n-k}^{n-1} \frac{y^{(j)}(0) \, t^{j+k-\alpha}}{\Gamma(j+k-\alpha+1)}, \quad \text{and therefore} \quad \left(D_{0+}^{\alpha-k} p_{n-k, n-1}\right)(0) = 0, \tag{2.100}$$
since $j + k - \alpha > 0$ for all $j \ge n-k$. Taking the Laplace transform of $p_{n-1}$ (see, for example, (A.13)), we get that for all $z > 0$,
$$\mathcal{L}[p_{n-1}](z) = \sum_{k=0}^{n-1} z^{-1-k} y^{(k)}(0). \tag{2.101}$$

Combining (2.99), (2.100), and (2.101) with (2.98), we get (2.97).

Now let $\alpha > 0$ and $\lambda \in \mathbb{R}$. Let $n = \lfloor \alpha \rfloor + 1$ and fix $d_0, \dots, d_{n-1} \in \mathbb{R}$, $\gamma \in [0, \alpha - n + 1)$, and $f \in \mathcal{C}(\mathbb{R}^+_0; w_{0,\gamma})$. Consider the simple Cauchy problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1. \end{cases} \tag{2.102}$$
We know from Theorem 2.32 that (2.102) has a unique global solution $y$. Again, we would like to use the Laplace transform to get an explicit formula for $y$. To do this, we use Lemma 2.55. However, we need to prove that both $y$ and ${}^{C} D_{0+}^{\alpha} y$ admit Laplace transforms. This is explained in the following theorem, which generalizes the main result of [115], where the same statement is proved for $\alpha \in (0, 1)$, to the case of arbitrary $\alpha > 0$.
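Formula (2.97) can be sanity-checked numerically in a simple case. For $\alpha \in (0,1)$ and $y(t) = t$ one has ${}^{C}D^{\alpha}_{0+} y(t) = t^{1-\alpha}/\Gamma(2-\alpha)$, and (2.97) predicts $\mathcal{L}[{}^{C}D^{\alpha}_{0+} y](z) = z^{\alpha} \cdot z^{-2} - z^{\alpha-1} \cdot 0 = z^{\alpha-2}$. The sketch below is our own (not from the book): it approximates the Laplace integral by a trapezoidal rule and compares it with the closed form.

```python
import math

alpha, z = 0.5, 2.0

def caputo_of_t(t):
    # For y(t) = t and alpha in (0,1): (C D^alpha y)(t) = t^(1-alpha)/Gamma(2-alpha).
    return t**(1.0 - alpha) / math.gamma(2.0 - alpha)

# Trapezoidal approximation of the Laplace transform on [0, 40]
# (the integrand decays like e^{-z t}, so the truncation error is tiny).
N, T = 200_000, 40.0
h = T / N
vals = [math.exp(-z * i * h) * caputo_of_t(i * h) for i in range(N + 1)]
laplace_num = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# Right-hand side of (2.97): z^alpha * L[y](z) - z^(alpha-1) * y(0) = z^(alpha-2).
laplace_formula = z**(alpha - 2.0)
assert abs(laplace_num - laplace_formula) < 1e-4
```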


Theorem 2.56. Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, $n = \lfloor \alpha \rfloor + 1$, and $\gamma \in [0, \alpha - n + 1)$, and let $f \in \mathcal{C}(\mathbb{R}^+_0; w_{0,\gamma})$ be exponentially bounded. Then for all $\lambda \in \mathbb{R}$, the Cauchy problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) = \lambda y(t) + f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.103}$$

has a unique solution $y \in \mathcal{C}(\mathbb{R}^+_0)$ such that both $y$ and ${}^{C} D_{0+}^{\alpha} y$ are exponentially bounded.

Proof. First, note that the function $F(t, y) = \lambda y + f(t)$ satisfies the conditions of Theorem 2.32. Therefore there exists a unique solution $y \in \mathcal{C}(\mathbb{R}^+_0)$ of (2.103). Since $f$ is exponentially bounded, there exist $M_1, t_0, \sigma > 0$ such that $|f(t)| \le M_1 e^{\sigma t}$ for all $t \ge t_0$. Moreover, $f \in \mathcal{C}([0, t_0]; w_{0,\gamma})$, and therefore there exists $M_2 > 0$ such that $|f(t)| \le M_2 t^{-\gamma}$ for all $t \in (0, t_0]$. Furthermore, since $y \in \mathcal{C}[0, t_0]$, there exists $M_3 > 0$ such that $|y(\tau)| \le M_3$ for all $\tau \in [0, t_0]$. Proposition 2.31 supplies that for $t \ge t_0$,
$$\begin{aligned}
|y(t)| &\le \sum_{j=0}^{n-1} \frac{|d_j|}{j!} t^j + \frac{|\lambda|}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} |y(\tau)| \, d\tau + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} |f(\tau)| \, d\tau \\
&= \sum_{j=0}^{n-1} \frac{|d_j|}{j!} t^j + \frac{|\lambda|}{\Gamma(\alpha)} \int_0^{t_0} (t-\tau)^{\alpha-1} |y(\tau)| \, d\tau + \frac{|\lambda|}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} |y(\tau)| \, d\tau \\
&\quad + \frac{1}{\Gamma(\alpha)} \int_0^{t_0} (t-\tau)^{\alpha-1} |f(\tau)| \, d\tau + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} |f(\tau)| \, d\tau \\
&\le \sum_{j=0}^{n-1} \frac{|d_j|}{j!} t^j + \frac{M_3 |\lambda| (t-t_0)^{\alpha}}{\Gamma(\alpha+1)} + \frac{|\lambda|}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} |y(\tau)| \, d\tau \\
&\quad + \frac{M_2}{\Gamma(\alpha)} \int_0^{t_0} (t-\tau)^{\alpha-1} \tau^{-\gamma} \, d\tau + \frac{M_1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} e^{\sigma \tau} \, d\tau \\
&= I_0(t) + \frac{|\lambda|}{\Gamma(\alpha)} \int_0^{t-t_0} (t - t_0 - \tau)^{\alpha-1} |y(\tau + t_0)| \, d\tau + I_1(t) + I_2(t),
\end{aligned}$$
where
$$I_0(t) := \sum_{j=0}^{n-1} \frac{|d_j|}{j!} t^j + \frac{M_3 |\lambda| (t-t_0)^{\alpha}}{\Gamma(\alpha+1)}, \qquad I_1(t) := \frac{M_2}{\Gamma(\alpha)} \int_0^{t_0} (t-\tau)^{\alpha-1} \tau^{-\gamma} \, d\tau, \qquad I_2(t) := \frac{M_1}{\Gamma(\alpha)} \int_{t_0}^{t} (t-\tau)^{\alpha-1} e^{\sigma \tau} \, d\tau.$$
Setting $\bar{y}(t) = |y(t + t_0)|$, we get the bound
$$\bar{y}(t) \le I_0(t + t_0) + \frac{|\lambda|}{\Gamma(\alpha)} \int_0^{t} (t-\tau)^{\alpha-1} \bar{y}(\tau) \, d\tau + I_1(t + t_0) + I_2(t + t_0).$$
Setting
$$M_4 := \sum_{j=0}^{n-1} \frac{|d_j| \sup_{t \ge 0} \left( (t + t_0)^j e^{-\sigma t} \right)}{j!} + \frac{M_3 |\lambda| \sup_{t \ge 0} \left( t^{\alpha} e^{-\sigma t} \right)}{\Gamma(\alpha + 1)},$$
we get that
$$I_0(t + t_0) \le M_4 e^{\sigma t} \le M_4 e^{2\sigma t} \quad \text{for all } t \ge 0. \tag{2.104}$$
Moreover, reasoning in exactly the same way as in (2.62), we make sure that there exists $M_5$ such that $I_2(t + t_0) \le M_5 e^{2\sigma t}$ for all $t \ge 0$. Concerning $I_1$, we can use Exercise 1.2 and conclude that
$$I_1(t + t_0) = M_2 (t + t_0)^{\alpha-1} \frac{\Gamma(1-\gamma)}{\Gamma(2-\gamma)} t_0^{1-\gamma} \, {}_2F_1\left( 1-\alpha, 1-\gamma; 2-\gamma; \frac{t_0}{t + t_0} \right). \tag{2.105}$$
Denote $a = 1-\alpha$, $b = 1-\gamma$, and $c = 2-\gamma$. Then $c - a - b = \alpha > 0$. Applying Theorem B.10, we get that
$$\left| {}_2F_1(1-\alpha, 1-\gamma; 2-\gamma; x) \right| \le \frac{\Gamma(2-\gamma)\Gamma(\alpha)}{\Gamma(1+\alpha-\gamma)}$$
for all $|x| \le 1$. In particular, since $0 \le \frac{t_0}{t_0 + t} \le 1$ for $t \ge 0$, we have
$${}_2F_1\left( 1-\alpha, 1-\gamma; 2-\gamma; \frac{t_0}{t_0 + t} \right) \le \left| {}_2F_1\left( 1-\alpha, 1-\gamma; 2-\gamma; \frac{t_0}{t_0 + t} \right) \right| \le \frac{\Gamma(2-\gamma)\Gamma(\alpha)}{\Gamma(1+\alpha-\gamma)}.$$
So we obtain from (2.105) that
$$I_1(t + t_0) \le M_2 (t + t_0)^{\alpha-1} \frac{\Gamma(1-\gamma)\Gamma(\alpha)}{\Gamma(1-\gamma+\alpha)} t_0^{1-\gamma} \le M_6 e^{2\sigma t},$$
where
$$M_6 := M_2 \frac{\Gamma(1-\gamma)\Gamma(\alpha)}{\Gamma(1-\gamma+\alpha)} t_0^{1-\gamma} \sup_{t \ge 0} \left( (t + t_0)^{\alpha-1} e^{-2\sigma t} \right).$$
As a consequence, we get that
$$\bar{y}(t) \le (M_4 + M_5 + M_6) e^{2\sigma t} + \frac{|\lambda|}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \bar{y}(\tau) \, d\tau$$
for all $t \ge 0$. Theorem 2.38 supplies that
$$\bar{y}(t) \le (M_4 + M_5 + M_6) e^{2\sigma t} E_{\alpha}\left( |\lambda| t^{\alpha} \right), \quad t \ge 0.$$
Using Corollary 2.14 with $\beta = 1$, we obtain a constant $M_7 > 0$ such that
$$\bar{y}(t) \le M_7 e^{2(\sigma + |\lambda|^{1/\alpha}) t}, \quad t \ge 1.$$
This implies that $y$ is exponentially bounded. Finally, note that for $t \ge t_0 + 1$,
$$\left| \left({}^{C} D_{0+}^{\alpha} y\right)(t) \right| \le |\lambda| |y(t)| + |f(t)| \le M_8 e^{2(\sigma + |\lambda|^{1/\alpha}) t},$$

which ends the proof.

By analogy with the Riemann–Liouville case, we first consider the Cauchy problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = 0, & t \ge 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.106}$$
where $\alpha > 0$ and $\lambda \in \mathbb{R}$. We know from Theorem 2.32 that (2.106) has a unique global solution $y \in \mathcal{C}(\mathbb{R}^+_0)$. Furthermore, we know from Theorem 2.56 that both $y$ and ${}^{C} D_{0+}^{\alpha} y$ are Laplace transformable. Therefore we can apply the Laplace transform to both sides of the first equation in (2.106) and use Lemma 2.55 to get
$$z^{\alpha} \mathcal{L}[y](z) - \sum_{k=0}^{n-1} d_k z^{\alpha-1-k} - \lambda \mathcal{L}[y](z) = 0,$$
which can be rewritten as
$$\mathcal{L}[y](z) = \sum_{k=0}^{n-1} d_k \frac{z^{\alpha-k-1}}{z^{\alpha} - \lambda}.$$
Assuming that $z > \lambda^{1/\alpha}$ if $\lambda > 0$ and setting $\mathcal{L}[y_k](z) = \frac{z^{\alpha-k-1}}{z^{\alpha} - \lambda}$, we get the equality
$$\mathcal{L}[y](z) = \sum_{k=0}^{n-1} d_k \mathcal{L}[y_k](z). \tag{2.107}$$
Hence, if we can find functions $y_0, \dots, y_{n-1}$ whose respective Laplace transforms are $\mathcal{L}[y_0], \dots, \mathcal{L}[y_{n-1}]$, then $y$ can be written as their linear combination. By (2.8) we obtain that
$$y_k(t) = t^k E_{\alpha, k+1}\left( \lambda t^{\alpha} \right), \quad k = 0, \dots, n-1. \tag{2.108}$$
Thus the following statement is proved.

Proposition 2.57. Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, $n = \lfloor \alpha \rfloor + 1$, and $d_0, \dots, d_{n-1} \in \mathbb{R}$. The linear fractional Cauchy problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = 0, & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.109}$$
has a unique continuous solution
$$y(t) = \sum_{k=0}^{n-1} d_k t^k E_{\alpha, k+1}\left( \lambda t^{\alpha} \right), \quad t \ge 0.$$
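The homogeneous solution formula can be illustrated numerically for $\alpha \in (1, 2]$ (so $n = 2$), where it reads $y(t) = d_0 E_{\alpha}(\lambda t^{\alpha}) + d_1 t E_{\alpha,2}(\lambda t^{\alpha})$. The Python sketch below is ours (helper names are assumptions): it checks the boundary case $\alpha = 2$, $\lambda = -1$, where the formula reduces to the classical $d_0 \cos t + d_1 \sin t$, since $E_2(-t^2) = \cos t$ and $t E_{2,2}(-t^2) = \sin t$.

```python
import math

def ml(z, alpha, beta=1.0, terms=60):
    # Truncated series of the two-parameter Mittag-Leffler function.
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def caputo_homogeneous(t, alpha, lam, d):
    # y(t) = sum_{k=0}^{n-1} d_k t^k E_{alpha,k+1}(lam t^alpha), here with n = 2.
    return d[0] * ml(lam * t**alpha, alpha, 1.0) + d[1] * t * ml(lam * t**alpha, alpha, 2.0)

# Boundary case alpha = 2, lam = -1: the formula reduces to d0*cos(t) + d1*sin(t),
# because E_2(-t^2) = cos t and t*E_{2,2}(-t^2) = sin t.
for t in [0.0, 0.7, 1.9]:
    y = caputo_homogeneous(t, 2.0, -1.0, (1.5, -0.5))
    assert abs(y - (1.5 * math.cos(t) - 0.5 * math.sin(t))) < 1e-9
```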

The solutions of the equation
$${}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = 0, \quad t > 0, \tag{2.110}$$
constitute a vector space ${}^{C}\mathcal{S}_{\mathrm{homo}}$ with $\dim({}^{C}\mathcal{S}_{\mathrm{homo}}) \le n$, generated by $\{y_0, \dots, y_{n-1}\}$ defined in (2.108). We would like to prove that these functions are linearly independent. To do this, consider $\alpha > 0$, $n = \lfloor \alpha \rfloor + 1$, and a family of functions $f_1, \dots, f_n \in \mathcal{C}^{n-1}(\mathbb{R}^+_0)$. Introduce for $t \ge 0$ their (classical) Wronskian matrix
$$\mathcal{W}[f_1, \dots, f_n](t) = \begin{pmatrix} f_1 & f_2 & \cdots & f_n \\ f_1' & f_2' & \cdots & f_n' \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)} & f_2^{(n-1)} & \cdots & f_n^{(n-1)} \end{pmatrix}$$
and their (classical) Wronskian determinant $W[f_1, \dots, f_n](t) = \det(\mathcal{W}[f_1, \dots, f_n](t))$. The following proposition provides a connection between the Wronskian and the linear dependence of solutions of (2.110).

Proposition 2.58. Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, and $n = \lfloor \alpha \rfloor + 1$, and let $y_0, \dots, y_{n-1} \colon \mathbb{R}^+_0 \to \mathbb{R}$ be global solutions of (2.110). The following properties are equivalent:
(i) $y_0, \dots, y_{n-1}$ are linearly dependent;
(ii) $W[y_0, \dots, y_{n-1}](t) = 0$ for all $t \ge 0$;
(iii) $W[y_0, \dots, y_{n-1}](0) = 0$.

Proof. The fact that (i) implies (ii) can be proved as in Proposition 2.49, whereas the fact that (ii) implies (iii) is trivial. Let us prove that (iii) implies (i). To do this, let $C = (C_0, C_1, \dots, C_{n-1})^{\top}$ and consider the linear system
$$\mathcal{W}[y_0, \dots, y_{n-1}](0) \, C = 0. \tag{2.111}$$

Since $W[y_0, \dots, y_{n-1}](0) = 0$, there exists a solution $C \neq 0$. Fix such a solution $C$ and put
$$Y(t) = \sum_{j=0}^{n-1} C_j y_j(t), \quad t \ge 0. \tag{2.112}$$
Taking the derivatives in (2.112) and using the fact that $C$ is a solution of (2.111), we arrive at the equality
$$Y^{(k)}(0) = 0, \quad k = 0, \dots, n-1.$$
Furthermore, applying ${}^{C} D_{0+}^{\alpha}$ to (2.112) and using the fact that the $y_j$ solve (2.110), we get that
$${}^{C} D_{0+}^{\alpha} Y(t) = \sum_{j=0}^{n-1} C_j \, {}^{C} D_{0+}^{\alpha} y_j(t) = \lambda \sum_{j=0}^{n-1} C_j y_j(t) = \lambda Y(t)$$
for $t > 0$. Hence $Y$ is a global solution of the problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} Y(t) = \lambda Y(t), & t > 0, \\ Y^{(k)}(0) = 0, & k = 0, \dots, n-1, \end{cases}$$
which by Theorem 2.32 has a unique solution. This means that $Y(t) = 0$ for all $t \ge 0$, i.e., $\sum_{j=0}^{n-1} C_j y_j(t) = 0$ for all $t \ge 0$. Since $C \neq 0$, we conclude that $y_0, \dots, y_{n-1}$ are linearly dependent.

Remark 2.59. Clearly, the fact that (i) implies (ii) in Proposition 2.58 holds without requiring $y_0, \dots, y_{n-1}$ to be solutions of (2.110). It is also well known that, in general, (ii) does not imply (i). For example, if $y_0(t) = (t-1)^2$ and $y_1(t) = (t-1)|t-1|$, then $W[y_0, y_1](t) = 0$ for all $t \ge 0$, but $y_0$ and $y_1$ are linearly independent. Necessary and sufficient conditions for the equivalence of (i) and (ii) have been discussed in [32, 201].

As a consequence, we get the following result.

Corollary 2.60. The functions $y_0, \dots, y_{n-1}$ defined in (2.108) are linearly independent and constitute a basis for ${}^{C}\mathcal{S}_{\mathrm{homo}}$. In particular, $\dim({}^{C}\mathcal{S}_{\mathrm{homo}}) = n$.

Proof. It is not difficult to check that $\mathcal{W}[y_0, \dots, y_{n-1}](0)$ is the $n \times n$ identity matrix and thus $W[y_0, \dots, y_{n-1}](0) = 1$. We then know from Proposition 2.58 that $y_0, \dots, y_{n-1}$ are linearly independent.

Next, consider the inhomogeneous case, i.e., the following Cauchy problem:
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.113}$$
where $f \in \mathcal{C}(\mathbb{R}^+_0; w_{0,\gamma})$ for some $\gamma \in [0, \alpha - n + 1)$. Assume that $f$ is exponentially bounded. Then we know from Theorem 2.32 that (2.113) has a unique global solution $y \in \mathcal{C}(\mathbb{R}^+_0)$. Furthermore, we know from Theorem 2.56 that both $y$ and ${}^{C} D_{0+}^{\alpha} y$ are exponentially bounded and thus Laplace transformable. Hence we can use Lemma 2.55 to take the Laplace transform of both sides of the first equation in (2.113):
$$z^{\alpha} \mathcal{L}[y](z) - \sum_{k=0}^{n-1} d_k z^{\alpha-k-1} - \lambda \mathcal{L}[y](z) = \mathcal{L}[f](z).$$
This equation can be rewritten as
$$\mathcal{L}[y](z) = \sum_{k=0}^{n-1} d_k \mathcal{L}[y_k](z) + \frac{\mathcal{L}[f](z)}{z^{\alpha} - \lambda},$$
provided that $z > \lambda^{1/\alpha}$ if $\lambda > 0$. The same arguments as in the Riemann–Liouville case tell us that
$$y(t) = \sum_{k=0}^{n-1} d_k y_k(t) + h(t),$$
where
$$h(t) = \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\left( \lambda (t-s)^{\alpha} \right) f(s) \, ds, \quad t \ge 0. \tag{2.114}$$
So, the following result is established.

Proposition 2.61. Let $\alpha > 0$, $\alpha \notin \mathbb{N}$, $n = \lfloor \alpha \rfloor + 1$, $d_0, \dots, d_{n-1} \in \mathbb{R}$, $\gamma \in [0, \alpha - n + 1)$, and $f \in \mathcal{C}(\mathbb{R}^+_0; w_{0,\gamma})$. Assume also that $f$ is exponentially bounded. The linear fractional Cauchy problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.115}$$
has a unique global solution
$$y(t) = \sum_{k=0}^{n-1} d_k t^k E_{\alpha, k+1}\left( \lambda t^{\alpha} \right) + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}\left( \lambda (t-s)^{\alpha} \right) f(s) \, ds, \quad t \ge 0.$$

Remark 2.62. Corollary 2.60 and Proposition 2.61 supply that the set ${}^{C}\mathcal{S}$ of the solutions to the equation
$${}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = f(t), \quad t > 0,$$
is an $n$-dimensional affine space of the form ${}^{C}\mathcal{S} = h + {}^{C}\mathcal{S}_{\mathrm{homo}}$, where $h$ is defined in (2.114).
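The variation-of-constants structure of the solution in Proposition 2.61 is easy to exercise numerically. The sketch below is our own code, not from the book; the convolution is computed by a plain trapezoidal rule, so it is only meant for $\alpha \ge 1$, where the kernel $(t-s)^{\alpha-1}$ is not singular at $s = t$. It checks the boundary case $\alpha = 1$, $\lambda = -1$, $f \equiv 1$, where the classical solution is $y(t) = y_0 e^{-t} + 1 - e^{-t}$.

```python
import math

def ml(z, alpha, beta=1.0, terms=60):
    # Truncated series for the two-parameter Mittag-Leffler function.
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

def caputo_linear(t, alpha, lam, y0, f, steps=4000):
    # y(t) = y0*E_alpha(lam t^alpha)
    #        + int_0^t (t-s)^(alpha-1) E_{alpha,alpha}(lam (t-s)^alpha) f(s) ds,
    # with the convolution computed by a trapezoidal rule (fine for alpha >= 1).
    h = t / steps
    def kernel(s):
        u = t - s
        return u**(alpha - 1.0) * ml(lam * u**alpha, alpha, alpha) * f(s)
    vals = [kernel(i * h) for i in range(steps + 1)]
    conv = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return y0 * ml(lam * t**alpha, alpha, 1.0) + conv

# Boundary case alpha = 1, lam = -1, f = 1: classical solution y0*e^{-t} + 1 - e^{-t}.
t, y0 = 1.5, 2.0
y = caputo_linear(t, 1.0, -1.0, y0, lambda s: 1.0)
assert abs(y - (y0 * math.exp(-t) + 1.0 - math.exp(-t))) < 1e-4
```

For $\alpha < 1$ the kernel singularity calls for product-integration or predictor–corrector schemes of the kind treated in Chapter 5.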


2.9 Exercises

Exercise 2.1. Solve the Abel equation
$$\frac{1}{\Gamma(\alpha)} \int_0^x \frac{\varphi(t)}{(x-t)^{1-\alpha}} \, dt = f(x), \quad x \in [0, 1],$$
for the following values of $f$ and $\alpha$:
(i) $f(x) = c \in \mathbb{R}$, $\alpha \in (0, 1)$;
(ii) $f(x) = x^{\beta}$, $\beta > 0$, $\alpha \in (0, 1)$;
(iii) $f(x) = x^{\beta}$, $\beta > 1$, $\alpha \in (0, 3)$;
(iv) $f(x) = \sin x$, $\alpha \in (0, 1)$;
(v) $f(x) = \sin x - x$, $\alpha \in (0, 4)$.

Exercise 2.2. Let $\alpha > 1$, $n = \lfloor \alpha \rfloor$, and $f \in \mathcal{C}^{(n)}[a, b]$ with $f^{(n)} \in I^{\alpha-n}(L^p(a, b))$ for some $p \ge 1$. Find conditions on the constants $c_0, \dots, c_{n-1}$ such that the equation
$$\sum_{k=0}^{n-1} c_k x^k + \frac{1}{\Gamma(\alpha)} \int_0^x \frac{\varphi(t)}{(x-t)^{n-\alpha}} \, dt = f(x)$$
has a solution, and calculate this solution.

Exercise 2.3. Prove item (i) of Lemma 2.44. Hint: Write the fractional integral as a convolution product and use (A.13).

Exercise 2.4. Let $\alpha > 0$, $\gamma \in [0, 1)$, and $-\infty < a < b < +\infty$.
(i) Consider a sequence $f_n \in \mathcal{C}([a, b]; w_{a,\gamma})$. Prove that if there exists a function $f \in \mathcal{C}([a, b]; w_{a,\gamma})$ such that $\|f_n - f\|_{\mathcal{C}([a,b]; w_{a,\gamma})} \to 0$, then also $\|I_{a+}^{\alpha} f_n - I_{a+}^{\alpha} f\|_{\mathcal{C}([a,b]; w_{a,\gamma})} \to 0$.
(ii) Assume further that $\gamma \le \alpha$. Prove that in such a case, $I_{a+}^{\alpha} f_n \to I_{a+}^{\alpha} f$ uniformly in $[a, b]$.

Exercise 2.5. Let $\alpha \in (0, 1)$ and $\eta > 0$, and let $f(t) = \sum_{k=0}^{+\infty} c_k t^k$ be convergent for all $t \in \mathbb{R}$. Define the function $f_{\alpha,\eta}(t) = t^{\alpha-1} f(t^{\eta})$, $t > 0$. Use Exercise 2.4 to prove that for all $t > 0$,
$$\left( D_{0+}^{\alpha} f_{\alpha,\eta} \right)(t) = \sum_{k=0}^{+\infty} c_{k+1} \frac{\Gamma((k+1)\eta + \alpha)}{\Gamma((k+1)\eta)} t^{(k+1)\eta - 1}.$$
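For Exercise 2.1(ii), a natural candidate is $\varphi(t) = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)} t^{\beta-\alpha}$, since $I^{\alpha}_{0+} t^{\beta-\alpha} = \frac{\Gamma(\beta-\alpha+1)}{\Gamma(\beta+1)} x^{\beta}$. A small Python check (our own code, not part of the book; the substitution $u = 1 - v^2$ removes the endpoint singularity of the kernel for $\alpha = 1/2$) confirms this numerically:

```python
import math

alpha, beta, x = 0.5, 1.5, 0.7

def phi(t):
    # Candidate solution of the Abel equation with f(x) = x^beta:
    # phi(t) = Gamma(beta+1)/Gamma(beta-alpha+1) * t^(beta-alpha).
    return math.gamma(beta + 1.0) / math.gamma(beta - alpha + 1.0) * t**(beta - alpha)

# I^alpha phi(x) = x^alpha/Gamma(alpha) * int_0^1 (1-u)^(alpha-1) phi(x*u) du.
# With u = 1 - v^2 the integrand becomes 2 v^(2*alpha-1) phi(x*(1-v^2)),
# which is smooth in v for alpha = 1/2, so a trapezoidal rule applies.
N = 20_000
h = 1.0 / N
def integrand(v):
    return 2.0 * v**(2.0 * alpha - 1.0) * phi(x * (1.0 - v * v))
vals = [integrand(i * h) for i in range(N + 1)]
frac_integral = x**alpha / math.gamma(alpha) * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

assert abs(frac_integral - x**beta) < 1e-6
```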

Exercise 2.6. Let $\alpha \in (0, 1)$ and $\eta > 0$, and let $f(t) = \sum_{k=0}^{+\infty} a_k t^k$ be convergent for all $t \in \mathbb{R}$. Define the function $f_{\eta}(t) = f(t^{\eta})$, $t > 0$. Use Exercise 2.4 to prove that for all $t > 0$,
$$\left( {}^{C} D_{0+}^{\alpha} f_{\eta} \right)(t) = \sum_{k=0}^{+\infty} a_{k+1} \frac{\Gamma((k+1)\eta + 1)}{\Gamma((k+1)\eta + 1 - \alpha)} t^{(k+1)\eta - \alpha}.$$

Exercise 2.7. Let $\alpha \in (0, 1)$, $t > 0$, and $y(t) = t^{\alpha-1} E_{\alpha,\alpha}(\lambda t^{\alpha})$. Evaluate $(D_{0+}^{\alpha} y)(t)$ without using the Laplace transform. Hint: Use the definition of $E_{\alpha,\beta}$ and Exercise 2.4.

Exercise 2.8. Let $\alpha \in (0, 1)$, $t > 0$, and $y(t) = E_{\alpha}(\lambda t^{\alpha})$. Evaluate $({}^{C} D_{0+}^{\alpha} y)(t)$ without using the Laplace transform. Hint: Use the definition of $E_{\alpha}$ and Exercise 2.4.

Exercise 2.9. Let $\alpha > 0$ and $n = \lfloor \alpha \rfloor + 1$.
(i) Prove that the Cauchy problem
$$\begin{cases} D_{0+}^{\alpha} y(t) = y(t), & t > 0, \\ D_{0+}^{\alpha-k} y(0) = 0, & k = 1, \dots, n-1, \end{cases} \tag{2.116}$$
where we omit the second equation in (2.116) if $n = 1$, has infinitely many solutions.
(ii) Prove that the Cauchy problem
$$\begin{cases} D_{0+}^{\alpha} y(t) = y(t), & t > 0, \\ D_{0+}^{\alpha-k} y(0) = 0, & k = 1, \dots, n-1, \\ y(0) = d_0, \end{cases} \tag{2.117}$$
where we omit the second equation in (2.117) if $n = 1$, has a solution if and only if $d_0 = 0$.

Exercise 2.10. Let $\alpha \in (0, 1)$, let $(a_k)_{k \in \mathbb{N}}$ be a sequence of real numbers, let $f(t) = \sum_{k=0}^{+\infty} a_k t^k$ be absolutely convergent for all $t > 0$, and let $f_{\alpha}(t) = t^{\alpha-1} f(t^{\alpha})$, $t > 0$. Furthermore, fix $\lambda, y_0 \in \mathbb{R}$ and define the sequence $(b_k)_{k \in \mathbb{N}}$ by setting $b_0 = \frac{y_0}{\Gamma(\alpha)}$ and
$$b_{k+1} = \frac{\Gamma(\alpha(k+1)) \left( \lambda b_k + a_k \right)}{\Gamma(\alpha(k+2))}, \quad k = 0, 1, 2, \dots.$$
(i) Prove that
$$b_k = \frac{\lambda^k \Gamma(\alpha)}{\Gamma(\alpha(k+1))} b_0 + \sum_{j=0}^{k-1} \frac{\Gamma(\alpha(j+1))}{\Gamma(\alpha(k+1))} \lambda^{k-1-j} a_j, \quad k \ge 1.$$
(ii) Use item (i) to prove that $\sum_{k=0}^{+\infty} b_k t^k$ absolutely converges for all $t > 0$. Hint: Use Proposition 2.16 and (B.8).
(iii) Let
$$g(t) = \sum_{k=0}^{+\infty} b_k t^k, \quad t > 0,$$
and $y(t) = t^{\alpha-1} g(t^{\alpha})$ for $t > 0$. Prove that $y$ is the unique global solution of the problem
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda y(t) = f_{\alpha}(t), & t > 0, \\ I_{0+}^{1-\alpha} y(0) = y_0. \end{cases}$$
(iv) Use the previous arguments to solve the problem
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda y(t) = t^{\alpha-1} e^{t^{\alpha}}, & t > 0, \\ I_{0+}^{1-\alpha} y(0) = y_0, \end{cases} \tag{2.118}$$
for $y_0, \lambda \in \mathbb{R}$.
(v) Prove that the solution $y$ of (2.118) is such that $\mathbb{R} \ni y(0) := \lim_{t \downarrow 0} y(t)$ if and only if $y_0 = 0$ and $\alpha \ge \frac{1}{2}$. In particular, for $y_0 = 0$: if $\alpha > \frac{1}{2}$, then $\lim_{t \downarrow 0} y(t) = 0$; otherwise, $\lim_{t \downarrow 0} y(t) = \sqrt{\pi}$.

Exercise 2.11. Let $\alpha, \beta, \gamma > 0$, $n = \lfloor \alpha \rfloor + 1$, and $d_1, \dots, d_n, \lambda \in \mathbb{R}$. Find the solution of the problem
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda y(t) = t^{\beta-1} E_{\alpha,\beta}^{\gamma}\left( \lambda t^{\alpha} \right), & t \ge 0, \\ D_{0+}^{\alpha-k} y(0) = d_k, & k = 1, \dots, n-1, \\ I_{0+}^{n-\alpha} y(0) = d_n, \end{cases} \tag{2.119}$$
where we omit the second equation in (2.119) if $n = 1$.

Exercise 2.12. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor = \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$. Prove that the solution of
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t) = 0, & t > 0, \\ I_{0+}^{n-\alpha} y(0) - \lambda I_{0+}^{n-\beta} y(0) = d_n, \\ D_{0+}^{\alpha-k} y(0) - \lambda D_{0+}^{\beta-k} y(0) = d_k, & k = 1, \dots, n-1, \end{cases} \tag{2.120}$$
where we omit the second equation in (2.120) if $n = 1$, is Laplace transformable together with $D_{0+}^{\alpha} y$ and $D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=1}^{n} d_k t^{\alpha-k} E_{\alpha-\beta, \alpha-k+1}\left( \lambda t^{\alpha-\beta} \right). \tag{2.121}$$
Hint: Note that $n = \lfloor \beta \rfloor + 1$ and take the Laplace transform of $D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t)$ in the left-hand side of the first equation in problem (2.120).

Remark 2.63. In fact, (2.121) is the unique solution of (2.120) under a suitable notion of a solution similar to that given for (2.21). The fact that this unique solution of (2.120) is exponentially bounded (so that the previous procedure is justified) can be proved similarly to Theorem 2.46. The same concerns Exercises 2.13, 2.14, and 2.15.

Exercise 2.13. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor = \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$, and let $f \colon \mathbb{R}^+_0 \to \mathbb{R}$ be exponentially bounded. Prove that the solution of
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t) = f(t), & t > 0, \\ I_{0+}^{n-\alpha} y(0) - \lambda I_{0+}^{n-\beta} y(0) = d_n, \\ D_{0+}^{\alpha-k} y(0) - \lambda D_{0+}^{\beta-k} y(0) = d_k, & k = 1, \dots, n-1, \end{cases} \tag{2.122}$$
where we omit the second equation in (2.122) if $n = 1$, is Laplace transformable together with $D_{0+}^{\alpha} y$ and $D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=1}^{n} d_k t^{\alpha-k} E_{\alpha-\beta, \alpha-k+1}\left( \lambda t^{\alpha-\beta} \right) + \int_0^t (t-\tau)^{\alpha-1} E_{\alpha-\beta, \alpha}\left( \lambda (t-\tau)^{\alpha-\beta} \right) f(\tau) \, d\tau. \tag{2.123}$$

Exercise 2.14. Let $\alpha, \beta > 0$ with $\alpha > \beta$. Let $n = \lfloor \alpha \rfloor + 1$, $m = \lfloor \beta \rfloor + 1$, and $m < n$. Prove that the solution of
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t) = 0, & t > 0, \\ I_{0+}^{n-\alpha} y(0) = d_n, \\ D_{0+}^{\alpha-k} y(0) = d_k, & k = m+1, \dots, n-1, \\ D_{0+}^{\alpha-m} y(0) - \lambda I_{0+}^{m-\beta} y(0) = d_m, \\ D_{0+}^{\alpha-k} y(0) - \lambda D_{0+}^{\beta-k} y(0) = d_k, & k = 1, \dots, m-1, \end{cases} \tag{2.124}$$
where we omit the last equation in (2.124) if $m = 1$, is Laplace transformable together with $D_{0+}^{\alpha} y$ and $D_{0+}^{\beta} y$, and that $y$ has the form (2.121).

Exercise 2.15. Let $\alpha, \beta > 0$ with $\alpha > \beta$. Let $n = \lfloor \alpha \rfloor + 1$, $m = \lfloor \beta \rfloor + 1$, and $m < n$. Consider an exponentially bounded function $f \colon \mathbb{R}^+_0 \to \mathbb{R}$. Prove that the solution of
$$\begin{cases} D_{0+}^{\alpha} y(t) - \lambda D_{0+}^{\beta} y(t) = f(t), & t > 0, \\ I_{0+}^{n-\alpha} y(0) = d_n, \\ D_{0+}^{\alpha-k} y(0) = d_k, & k = m+1, \dots, n-1, \\ D_{0+}^{\alpha-m} y(0) - \lambda I_{0+}^{m-\beta} y(0) = d_m, \\ D_{0+}^{\alpha-k} y(0) - \lambda D_{0+}^{\beta-k} y(0) = d_k, & k = 1, \dots, m-1, \end{cases}$$
where we omit the last equation if $m = 1$, is Laplace transformable together with $D_{0+}^{\alpha} y$ and $D_{0+}^{\beta} y$, and that $y$ has the form (2.123).

Exercise 2.16. Let $\alpha \in (0, 1)$ and $y \in \mathcal{C}([0, T]; w_{0,\alpha})$. Show that $\lim_{t \downarrow 0} t^{\alpha} y(t) = \ell$ if and only if $(I_{0+}^{\alpha} y)(0) = \Gamma(1-\alpha) \ell$.

Exercise 2.17. Let $\alpha, \gamma \in (0, 1)$ and $\beta > (1-\alpha)\gamma$.
(i) Verify that the function
$$y(t) = \left( \frac{\Gamma\left( \frac{\beta+\gamma\alpha-1}{1-\gamma} + 1 \right)}{\Gamma\left( \frac{\beta+\alpha-1}{1-\gamma} + 1 \right)} \right)^{\frac{1}{1-\gamma}} t^{\frac{\beta+\alpha-1}{1-\gamma}}, \quad t > 0, \tag{2.125}$$
is a global solution of the equation
$$D_{0+}^{\alpha} y(t) = t^{\beta-1} \left( y(t) \right)^{\gamma}, \quad t > 0. \tag{2.126}$$
(ii) Prove that $y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$. Furthermore, prove that if $\beta \ge \alpha(1-\gamma) + \gamma(1-\alpha)$, then also $D_{0+}^{\alpha} y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$.
(iii) Does the equation satisfy the conditions of Theorems 2.23 or 2.28?
(iv) Prove that $(I_{0+}^{1-\alpha} y)(0) = 0$.
(v) Prove that both the function $y$ from (2.125) and $y_0(t) \equiv 0$ are solutions of the Cauchy problem
$$\begin{cases} D_{0+}^{\alpha} y(t) = t^{\beta-1} \left( y(t) \right)^{\gamma}, & t > 0, \\ I_{0+}^{1-\alpha} y(0) = 0. \end{cases}$$

Exercise 2.18. Let $\alpha \in (0, 1)$, $\beta > 0$, $\gamma > 1$, and $\beta < (1-\alpha)\gamma$.
(i) Verify that
$$y(t) = \left( \frac{\Gamma\left( 1 - \frac{\beta+\alpha-1}{\gamma-1} \right)}{\Gamma\left( 1 - \frac{\beta+\gamma\alpha-1}{\gamma-1} \right)} \right)^{\frac{1}{\gamma-1}} t^{-\frac{\beta+\alpha-1}{\gamma-1}}, \quad t > 0, \tag{2.127}$$
is an a. e. solution of (2.126).
(ii) Prove that $y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$. Also, prove that $D_{0+}^{\alpha} y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$ if $\beta \le \gamma(1-\alpha) - \alpha(\gamma-1)$.
(iii) Does the equation satisfy the conditions of Theorems 2.23 or 2.28?
(iv) Prove that $(I_{0+}^{1-\alpha} y)(0) = 0$.
(v) Prove that both (2.127) and $y_0(t) \equiv 0$ are solutions of the Cauchy problem
$$\begin{cases} D_{0+}^{\alpha} y(t) = t^{\beta-1} \left( y(t) \right)^{\gamma}, & t > 0, \\ I_{0+}^{1-\alpha} y(0) = 0. \end{cases}$$

Exercise 2.19. Let $0 \le \gamma_1 < \gamma_2 < 1$ and $-\infty < a < b < +\infty$. Prove that for every $y \in \mathcal{C}([a, b]; w_{a,\gamma_1})$,
$$\|y\|_{\mathcal{C}([a,b]; w_{a,\gamma_2})} \le (b-a)^{\gamma_2 - \gamma_1} \|y\|_{\mathcal{C}([a,b]; w_{a,\gamma_1})},$$
where we recall that $\mathcal{C}([a, b]; w_{a,\gamma_1})$ is defined in (2.22).

Exercise 2.20. Let $\alpha \in (0, 1)$ and $\beta > 1 - \alpha$. Consider the equation
$$\begin{cases} D_{0+}^{\alpha} y(t) = t^{\beta-1} y(t), & t > 0, \\ I_{0+}^{1-\alpha} y(0) = y_0. \end{cases} \tag{2.128}$$
(i) Prove that if $\beta \ge 1$, then there exists a unique solution $y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$. Hint: Use Remarks 2.24 and 2.25.
(ii) For $T > 0$, $\beta \in (1-\alpha, 1)$, and $y \in \mathcal{C}([0, T]; w_{0,1-\alpha})$, put $y_{\beta}(t) = t^{\beta-1} y(t)$ for $t \in [0, T]$. Show that $\|y_{\beta}\|_{\mathcal{C}([0,T]; w_{0,2-\beta-\alpha})} = \|y\|_{\mathcal{C}([0,T]; w_{0,1-\alpha})}$.
(iii) For $\beta \in (1-\alpha, 1)$, we say that $y$ is a solution of (2.128) for $t \in [0, T]$ if $y \in \mathcal{C}([0, T]; w_{0,1-\alpha})$, $D_{0+}^{\alpha} y \in \mathcal{C}([0, T]; w_{0,2-\alpha-\beta})$, and the equalities in (2.128) are satisfied. Prove that there exists $T^*(\alpha, \beta) > 0$ such that if $T < T^*(\alpha, \beta)$, then (2.128) has a unique solution for $t \in [0, T]$. Hint: Use Remark 2.21 to write (2.128) as a fixed point problem (of the form $y = \mathcal{T} y$) in $\mathcal{C}([0, T]; w_{0,1-\alpha})$. Then use Exercise 2.19 and Lemma 2.19 to find $T > 0$ such that $\mathcal{T}$ is a contraction. Distinguish between $\beta < 2(1-\alpha)$ and $\beta \ge 2(1-\alpha)$.
(iv) Let $\beta \in (1-\alpha, 1)$ and assume that $y_1, y_2 \in \mathcal{C}(\mathbb{R}^+_0; w_{0,1-\alpha})$ are two solutions of (2.128). Prove that $y_1 \equiv y_2$. Hint: Let $u = y_1 - y_2$. Note that by item (iii) we have $u(t) = 0$ for $t \in [0, T^*(\alpha, \beta))$, and deduce an integral inequality on $u(t)$ for $T > T^*(\alpha, \beta)$ and $t \in [T^*(\alpha, \beta), T]$. Use Theorem 2.38.
(v) Assume that (2.128) has a solution of the form $y(t) = y_0 t^{\alpha-1} f(t^{\alpha+\beta-1})$ for each $\beta > 1-\alpha$, where $f(t) = \sum_{k=0}^{+\infty} c_k t^k$ is absolutely convergent for all $t > 0$. Use Exercise 2.5 to provide a recursive formula for $c_k$. Show that the sequence $c_k$ is defined in such a way that $f$ is convergent for all $t > 0$ (and thus $y$ is well-defined). Finally, prove applying item (iv) that $y$ is the unique solution of (2.128). (Such a function has been introduced in [118], and the same equation is discussed via Laplace transform methods in [119].)

Exercise 2.21. Let $\alpha \in (0, 1)$, let $(a_k)_{k \in \mathbb{N}}$ be a sequence of real numbers, let $f(t) = \sum_{k=0}^{+\infty} a_k t^k$ be absolutely convergent for all $t > 0$, and let $f_{\alpha}(t) = f(t^{\alpha})$, $t > 0$. Fix also $\lambda, y_0 \in \mathbb{R}$ and define the sequence $(b_k)_{k \in \mathbb{N}}$ by setting $b_0 = y_0$ and
$$b_{k+1} = \frac{\Gamma(k\alpha + 1) \left( \lambda b_k + a_k \right)}{\Gamma((k+1)\alpha + 1)}, \quad k = 0, 1, 2, \dots.$$
(i) Prove that
$$b_k = \frac{\lambda^k}{\Gamma(k\alpha + 1)} b_0 + \sum_{j=0}^{k-1} \frac{\Gamma(j\alpha + 1)}{\Gamma(k\alpha + 1)} \lambda^{k-1-j} a_j, \quad k \ge 1.$$
(ii) Use item (i) to prove that $\sum_{k=0}^{+\infty} b_k t^k$ absolutely converges for all $t > 0$. Hint: Use Proposition 2.16 and (B.8).

(iii) Let
$$g(t) = \sum_{k=0}^{+\infty} b_k t^k, \quad t > 0,$$
and $y(t) = g(t^{\alpha})$ for $t > 0$. Prove that $y$ is the unique global solution of the problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = f_{\alpha}(t), & t > 0, \\ y(0) = y_0. \end{cases}$$
(iv) Use the previous arguments to solve the problem
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda y(t) = e^{t^{\alpha}}, & t > 0, \\ y(0) = y_0. \end{cases}$$
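The closed form claimed in item (i) of Exercise 2.21 can at least be checked numerically against the recursion; note that with $a_k \equiv 0$ it reduces to $b_k = \lambda^k y_0 / \Gamma(k\alpha + 1)$, i.e., $y(t) = y_0 E_{\alpha}(\lambda t^{\alpha})$, consistent with Proposition 2.57. A short Python check (our own code, with an arbitrary test sequence $a_k$):

```python
import math

alpha, lam, y0 = 0.6, -0.8, 1.3
a = [1.0 / math.factorial(j) for j in range(12)]  # an arbitrary test sequence a_k

# The recursion of Exercise 2.21:
# b_0 = y0, b_{k+1} = Gamma(k*alpha+1)*(lam*b_k + a_k)/Gamma((k+1)*alpha+1).
b = [y0]
for k in range(11):
    b.append(math.gamma(k * alpha + 1.0) * (lam * b[k] + a[k]) / math.gamma((k + 1) * alpha + 1.0))

# Closed form from item (i):
# b_k = lam^k b_0/Gamma(k*alpha+1) + sum_{j<k} Gamma(j*alpha+1) lam^{k-1-j} a_j / Gamma(k*alpha+1).
for k in range(1, 12):
    closed = lam**k * y0 / math.gamma(k * alpha + 1.0)
    closed += sum(math.gamma(j * alpha + 1.0) * lam**(k - 1 - j) * a[j]
                  for j in range(k)) / math.gamma(k * alpha + 1.0)
    assert abs(b[k] - closed) < 1e-12
```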

Exercise 2.22. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor = \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$. Prove that the solution of
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda \, {}^{C} D_{0+}^{\beta} y(t) = 0, & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases} \tag{2.129}$$
is Laplace transformable together with ${}^{C} D_{0+}^{\alpha} y$ and ${}^{C} D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=0}^{n-1} \frac{d_k}{k!} t^k. \tag{2.130}$$
Hint: Prove that $n = \lfloor \beta \rfloor + 1$ and take the Laplace transform of ${}^{C} D_{0+}^{\alpha} y(t) - \lambda \, {}^{C} D_{0+}^{\beta} y(t) = 0$.

Remark 2.64. In fact, (2.130) is the unique solution of (2.129) under a suitable notion of a solution similar to that given for (2.39). The fact that this unique solution of (2.129) is exponentially bounded (so that the previous procedure is justified) can be proved similarly to Theorem 2.56. The same concerns Exercises 2.23, 2.24, and 2.25.

Exercise 2.23. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor = \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$, and let $f \colon \mathbb{R}^+_0 \to \mathbb{R}$ be exponentially bounded. Prove that the solution of
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda \, {}^{C} D_{0+}^{\beta} y(t) = f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases}$$
is Laplace transformable together with ${}^{C} D_{0+}^{\alpha} y$ and ${}^{C} D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=0}^{n-1} \frac{d_k}{k!} t^k + \int_0^t (t-\tau)^{\alpha-1} E_{\alpha-\beta, \alpha}\left( \lambda (t-\tau)^{\alpha-\beta} \right) f(\tau) \, d\tau. \tag{2.131}$$

Exercise 2.24. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor > \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$ and $m = \lfloor \beta \rfloor + 1$. Prove that the solution of
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda \, {}^{C} D_{0+}^{\beta} y(t) = 0, & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases}$$
is Laplace transformable together with ${}^{C} D_{0+}^{\alpha} y$ and ${}^{C} D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=0}^{m-1} \frac{d_k}{k!} t^k + \sum_{k=m}^{n-1} d_k t^k E_{\alpha-\beta, k+1}\left( \lambda t^{\alpha-\beta} \right). \tag{2.132}$$

Exercise 2.25. Let $\alpha, \beta > 0$ with $\alpha > \beta$ and $\lfloor \alpha \rfloor > \lfloor \beta \rfloor$. Let $n = \lfloor \alpha \rfloor + 1$ and $m = \lfloor \beta \rfloor + 1$. Furthermore, assume that $f \colon \mathbb{R}^+_0 \to \mathbb{R}$ is exponentially bounded. Prove that the solution of
$$\begin{cases} {}^{C} D_{0+}^{\alpha} y(t) - \lambda \, {}^{C} D_{0+}^{\beta} y(t) = f(t), & t > 0, \\ y^{(k)}(0) = d_k, & k = 0, \dots, n-1, \end{cases}$$
is Laplace transformable together with ${}^{C} D_{0+}^{\alpha} y$ and ${}^{C} D_{0+}^{\beta} y$, and that $y$ has the form
$$y(t) = \sum_{k=0}^{m-1} \frac{d_k}{k!} t^k + \sum_{k=m}^{n-1} d_k t^k E_{\alpha-\beta, k+1}\left( \lambda t^{\alpha-\beta} \right) + \int_0^t (t-\tau)^{\alpha-1} E_{\alpha-\beta, \alpha}\left( \lambda (t-\tau)^{\alpha-\beta} \right) f(\tau) \, d\tau.$$

Exercise 2.26. Let α, γ ∈ (0, 1) and β > 1 − α. (i) Establish that y(t) = (

Γ ( β+γα−1 + 1) 1−γ Γ ( β+α−1 1−γ

+ 1)

1 1−γ

)

t

β+α−1 1−γ

(2.133)

satisfies C α D0+ y(t)

γ

= t β−1 (y(t)) ,

t > 0.

(2.134)

(ii) Prove that y ∈ 𝒞 (ℝ+0 ) and that there exists η ∈ [0, α) such that C Dα0+ y ∈ 𝒞 (ℝ+0 ; w0,η ), and hence it is a global solution of (2.134).

2.9 Exercises

� 189

(iii) Does the equation satisfy the conditions of Theorem 2.32? (iv) Prove that both (2.125) and y0 (t) ≡ 0 are solutions of the Cauchy problem {

C α D0+ y(t)

y(0) = 0.

γ

= t β−1 (y(t)) ,

t > 0,

(2.135)

Exercise 2.27. Let α ∈ (0, 1), γ > 1, and 0 < β ≤ 1 − α. (i) Establish that y(t) = (

Γ (1 − Γ (1 −

1

β+α−1 ) γ−1 − β+α−1 γ−1 ) t γ−1 β+γα−1 ) γ−1

(2.136)

is a global solution of the equation C α D0+ y(t)

γ

= t β−1 (y(t)) ,

t > 0.

(2.137)

(ii) Prove that $y \in \mathcal{C}(\mathbb{R}^+_0)$ and there exists η ∈ [0, α) such that ${}^{C}D^{\alpha}_{0+}y \in \mathcal{C}(\mathbb{R}^+_0; w_{0,\eta})$.
(iii) Does the equation satisfy the conditions of Theorem 2.32?
(iv) Prove that both (2.125) and $y_0(t) \equiv 0$ are solutions of the Cauchy problem (2.135).

Exercise 2.28. Let α > 0 and n = ⌊α⌋ + 1. Assume that F : ℝ⁺₀ × ℝ → ℝ satisfies conditions $(a_\gamma)$ and $(b_\gamma)$ of Theorem 2.32. Let $d^{(i)}_0, \dots, d^{(i)}_{n-1} \in \mathbb{R}$, i = 1, 2, and denote by $y_i \in \mathcal{C}(\mathbb{R}^+_0)$ the solutions of
$$ \begin{cases} {}^{C}D^{\alpha}_{0+}y(t) = F(t, y(t)), & t > 0,\\ y^{(k)}(0) = d^{(i)}_k, & k = 0,\dots,n-1. \end{cases} \qquad (2.138) $$

(i) Consider the operator $\mathcal{T} : \mathcal{C}[0,T] \to \mathcal{C}[0,T]$ defined as
$$ (\mathcal{T}y)(t) = \frac{1}{\Gamma(\alpha)}\int_0^t (t-\tau)^{\alpha-1}F(\tau, y(\tau))\,d\tau. $$
Show that for all $\varphi_1, \varphi_2 \in \mathcal{C}[0,T]$,
$$ \left\|\mathcal{T}\varphi_1 - \mathcal{T}\varphi_2\right\|_{\mathcal{C}[0,T]} \le \frac{L\,B(\alpha, 1-\gamma)}{\Gamma(\alpha)}\,T^{\alpha-\gamma}\,\|\varphi_1 - \varphi_2\|_{\mathcal{C}[0,T]}. $$
Hint: Use Lemma 2.19.
(ii) Use the previous inequality to show that there exists $T_0(\alpha, \gamma, L)$ such that $\mathcal{T}$ is a contraction on $\mathcal{C}[0,T]$ for each $T < T_0(\alpha, \gamma, L)$.

(iii) Fix $T < T_0(\alpha,\gamma,L)$. Use the previous item to prove that there exists a constant C > 0 depending only on α, γ, L, T such that
$$ \|y_1 - y_2\|_{\mathcal{C}[0,T]} \le C\sum_{k=0}^{n-1}\left|d^{(1)}_k - d^{(2)}_k\right|. \qquad (2.139) $$
Hint: Rewrite (2.138) in an integral form and take the difference and the absolute value. Then apply (i).
(iv) Consider $T \ge T_0(\alpha,\gamma,L)$. Prove that there exists a constant C > 0 depending only on α, γ, L, T such that (2.139) still holds.
Hint: Rewrite (2.138) in an integral form and take the difference and the absolute value. Try to find an integral inequality of the form (2.42) for $t \ge \frac{T_0(\alpha,\gamma,L)}{2}$. Use (iii) to guarantee that the inequality holds for all t ∈ [0, T]. Then apply Theorem 2.38.
(v) Use (iv) to prove that if $y_1, y_2$ are two solutions of
$$ \begin{cases} {}^{C}D^{\alpha}_{0+}y(t) = F(t, y(t)), & t > 0,\\ y^{(k)}(0) = d_k, & k = 0,\dots,n-1, \end{cases} \qquad (2.140) $$

then $y_1 \equiv y_2$.

Exercise 2.29. Let α ∈ (0, 1), γ ≥ 1, and β > 1 − α. Consider the problem
$$ \begin{cases} {}^{C}D^{\alpha}_{0+}y(t) = t^{\beta-1}\left(y(t)\right)^{\gamma}, & t > 0,\\ y(0) = y_0. \end{cases} \qquad (2.141) $$

(i) Let γ > 1. Prove that in this case there exists τ > 0 such that (2.141) has a unique solution for t ∈ [0, τ) with ${}^{C}D^{\alpha}_{0+}y \in \mathcal{C}([0,\tau); w_{0,1-\beta})$ if β ≤ 1 and ${}^{C}D^{\alpha}_{0+}y \in \mathcal{C}(\mathbb{R}^+_0)$ if β > 1.
(ii) Let γ = 1. Prove that τ = +∞ in this case.
(iii) Prove that if $y_1, y_2$ are solutions of (2.141), then $y_1 \equiv y_2$.
(iv) Prove that if $y_0 = 0$, then the unique solution is given by $y \equiv 0$.
(v) Let γ = 1, and assume that the solution is of the form $y(t) = g(t^{\alpha+\beta-1})$ for $g(t) = \sum_{k=0}^{+\infty} c_k t^k$, i. e., $y(t) = \sum_{k=0}^{+\infty} c_k t^{k(\alpha+\beta-1)}$. Prove that in such a case $c_0 = y_0$ and
$$ c_{k+1} = \frac{\Gamma\left((k+1)(\alpha+\beta-1) - \alpha + 1\right)}{\Gamma\left((k+1)(\alpha+\beta-1) + 1\right)}\,c_k. $$
Prove that in such a case it is the unique solution of (2.141).
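The recursion in item (v) is straightforward to evaluate numerically. The following Python sketch is ours (not part of the book; function names are illustrative): it computes the coefficients $c_k$ and then checks, via termwise Caputo differentiation of the truncated series, that the partial sums satisfy ${}^{C}D^{\alpha}_{0+}y(t) = t^{\beta-1}y(t)$ up to a tiny truncation error.

```python
from math import gamma

def series_coeffs(alpha, beta, y0, n_terms):
    """Coefficients of y(t) = sum_k c_k t^{k*mu}, mu = alpha + beta - 1,
    via c_0 = y0 and the recursion from Exercise 2.29(v)."""
    mu = alpha + beta - 1.0
    c = [y0]
    for k in range(n_terms - 1):
        c.append(c[-1] * gamma((k + 1) * mu - alpha + 1.0)
                       / gamma((k + 1) * mu + 1.0))
    return c

def y_series(t, c, mu):
    """Truncated series y(t) = sum_k c_k t^{k*mu}."""
    return sum(ck * t**(k * mu) for k, ck in enumerate(c))

def caputo_of_series(t, c, mu, alpha):
    """Termwise Caputo derivative: C-D^alpha t^{k*mu} =
    Gamma(k*mu+1)/Gamma(k*mu+1-alpha) * t^{k*mu-alpha} for k >= 1
    (the constant term k = 0 has zero Caputo derivative)."""
    return sum(ck * gamma(k * mu + 1.0) / gamma(k * mu + 1.0 - alpha)
               * t**(k * mu - alpha)
               for k, ck in enumerate(c) if k >= 1)
```

Because the coefficient ratios decay like $((k+1)\mu)^{-\alpha}$, the series converges rapidly, and a few dozen terms suffice for the check.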

3 Fractional Brownian motion and its environment

The fractional operators introduced in Chapter 1 can also be used to describe some extremely useful stochastic processes. In [121], Kolmogorov defined the process that was later called fractional Brownian motion (fBm). This process was then reconsidered by Mandelbrot and van Ness [152]. As a first application, the possibility of explaining Hurst's phenomenon, which was observed during the study of the behavior of the waters of the Nile River [108], was considered. Since then, the fBm and related processes have been employed in several contexts, such as, for instance, finance (see [28, 50, 225] and references therein), queueing theory (see [255, Section 8.7] and references therein), optics [203], and informatics (for Distributed Denial-of-Service attack detection, see, for instance, [141]). Here we consider some aspects concerning the fBm and Wiener integration with respect to it. Furthermore, we discuss some related processes that are quite useful in the application context, such as the fractional Ornstein–Uhlenbeck (fOU) process, the fOU process with stochastic drift, and the reflected fBm and fOU.

3.1 Fractional Brownian motion: definition and some properties

Consider a probability space (Ω, Σ, P) and denote by E the expectation operator.

Definition 3.1. Let H ∈ (0, 1). A (two-sided) fractional Brownian motion (fBm) of Hurst parameter H is a Gaussian process $B^H := \{B^H_t, t \in \mathbb{R}\}$ such that
(i) $\mathbb{E}[B^H_t] = 0$ for all t ∈ ℝ;
(ii) for all t, s ∈ ℝ,
$$ \mathbb{E}\left[B^H_t B^H_s\right] = \frac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\right). \qquad (3.1) $$
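Since the law of the fBm is fully determined by the covariance (3.1), exact samples on a finite time grid can be generated by factorizing the covariance matrix. The following Python sketch is ours (it is not taken from the book, and the function names are illustrative); Cholesky factorization is a standard exact, if $O(n^3)$, simulation method.

```python
import numpy as np

def fbm_covariance(times, H):
    """Covariance matrix E[B^H_t B^H_s] from (3.1) on a grid of time points."""
    t = np.asarray(times, dtype=float)
    T, S = np.meshgrid(t, t, indexing="ij")
    return 0.5 * (np.abs(T)**(2*H) + np.abs(S)**(2*H) - np.abs(T - S)**(2*H))

def sample_fbm(times, H, rng):
    """One exact sample of (B^H_{t_1}, ..., B^H_{t_n}) via Cholesky factorization."""
    C = fbm_covariance(times, H)
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(times)))  # tiny jitter for stability
    return L @ rng.standard_normal(len(times))
```

For long regular grids, faster exact methods (e. g., circulant embedding of the stationary increments) are preferable, but the Cholesky approach is the most transparent.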

Obviously, fBm is almost surely (a. s.) zero at zero. This process was introduced in [121] and then studied in detail in [152]. For H = 1, we define $B^H_t = B^1_t = t\xi$, where ξ is a standard Gaussian random variable. For H = 0, we set (as in [34]) $B^H_t = B^0_t = \frac{\eta_t - \eta_0}{\sqrt{2}}$, where the process η = {η_t, t ∈ ℝ} is a standard Gaussian white noise, i. e., a Gaussian process with E[η_t] = 0 and
$$ \mathbb{E}[\eta_t \eta_s] = \begin{cases} 1, & t = s,\\ 0, & t \neq s. \end{cases} $$

The restriction of the process $B^H$ to t ≥ 0 is called the one-sided fBm. Furthermore, it is clear that if H = 1/2, then for t, s > 0,
$$ \mathbb{E}\left[B^{1/2}_t B^{1/2}_s\right] = \frac{1}{2}\left(t + s - |t-s|\right) = \min\{t, s\}, $$

https://doi.org/10.1515/9783110780017-003

and hence $B^{1/2}$ is a standard Brownian motion (Bm). The following proposition clarifies the link between $B^H$ for H ∈ (0, 1) and the processes $B^1$ and $B^0$.

Proposition 3.2. Let $B^H$ be an fBm. Then $B^H \xrightarrow{d} B^I$ as H → I, I = 0, 1, where $\xrightarrow{d}$ means the weak convergence of finite-dimensional distributions.

Proof. Since $B^H$, H ∈ [0, 1], are centered Gaussian processes, we only need to show that the covariances converge. First, note that $\mathbb{E}[B^1_t B^1_s] = ts\,\mathbb{E}[\xi^2] = ts$. At the same time, by (3.1),
$$ \lim_{H\to 1}\mathbb{E}\left[B^H_t B^H_s\right] = \frac{1}{2}\left(t^2 + s^2 - (t-s)^2\right) = ts = \mathbb{E}\left[B^1_t B^1_s\right]. $$

So we proved that $B^H \xrightarrow{d} B^1$ as H → 1. Furthermore,
$$ \mathbb{E}\left[B^0_t B^0_s\right] = \frac{\mathbb{E}[\eta_t\eta_s] - \mathbb{E}[\eta_0\eta_s] - \mathbb{E}[\eta_0\eta_t] + 1}{2} = \begin{cases} 1, & t = s \neq 0,\\ \frac{1}{2}, & t \neq s,\ t \neq 0,\ s \neq 0,\\ 0 & \text{otherwise.} \end{cases} $$
On the other hand, it is clear that
$$ \lim_{H\to 0}\mathbb{E}\left[B^H_t B^H_s\right] = \frac{1}{2}\left(\mathbf{1}_{\mathbb{R}\setminus\{0\}}(t) + \mathbf{1}_{\mathbb{R}\setminus\{0\}}(s) - \mathbf{1}_{\mathbb{R}\setminus\{0\}}(t-s)\right) = \begin{cases} 1, & t = s \neq 0,\\ \frac{1}{2}, & t \neq s,\ t \neq 0,\ s \neq 0,\\ 0 & \text{otherwise.} \end{cases} $$
So we proved that $B^H \xrightarrow{d} B^0$ as H → 0.

From now on, let us focus on H > 0. Concerning the regularity of the sample paths, we state that for γ ∈ (0, H), $B^H$ admits a γ-Hölder continuous version.

Proposition 3.3. Let H ∈ (0, 1]. Then for each γ ∈ (0, H), $B^H$ admits a γ-Hölder continuous version on every interval [0, T]. This means that for all T > 0 and γ ∈ (0, H), there exists a random variable C(ω, T, γ) such that with probability 1
$$ \left|B^H_t - B^H_s\right| \le C(\omega, T, \gamma)\,|t - s|^{\gamma} $$
for all t, s ∈ [0, T].

The statement is trivial for H = 1, since the process $B^1$ is Lipschitz. For more details in the case H ∈ (0, 1), see Exercise 3.1. The value of C(ω, T, γ) was defined in [188], where it was proved that this random variable can be chosen to have moments of all orders. We can also prove that the process $B^H$ is H-self-similar.

Proposition 3.4. The process $B^H$ is H-self-similar, i. e., for all n ∈ ℕ, a > 0, and $t_1, \dots, t_n$, we have
$$ \left(B^H_{at_1}, \dots, B^H_{at_n}\right) \stackrel{d}{=} a^H\left(B^H_{t_1}, \dots, B^H_{t_n}\right), $$
where $\stackrel{d}{=}$ means the equality in distribution.

For more details, see Exercise 3.2. Let us also recall some properties concerning the p-variation of the process $B^H$ (see [169, Section 1.18]).

Proposition 3.5. Let H ∈ (0, 1) and T > 0. Then, as n → +∞,
$$ \sum_{k=0}^{n-1}\left|B^H_{\frac{(k+1)T}{n}} - B^H_{\frac{kT}{n}}\right|^p \to \begin{cases} 0, & p > \frac{1}{H},\\[2pt] \mathbb{E}\left[\left|B^H_T\right|^{1/H}\right], & p = \frac{1}{H},\\[2pt] +\infty, & 1 \le p < \frac{1}{H}. \end{cases} $$
The following result can be proven in various ways (see, e. g., [220]). Proposition 3.7. For each H ∈ (0, 1), H ≠ 21 , BH is not a semimartingale. In fact, BH is not also a Markov process if H ≠ 21 , as its covariance cannot be factorized (see Theorem C.3). The fBm exhibits the following time-reverting property. ̃ H = {B ̃ H , t ∈ ℝ}, defined as Proposition 3.8. Let H ∈ (0, 1]. The process B t

is an fBm.

2H H {|t| B 1 , ̃H = t B t { 0, {

t ≠ 0, t = 0,

(3.2)

Hints for the proof are given in Exercise 3.3. We can use the time-reverting property to characterize the behavior of the sample paths of BH as |t| → +∞. Corollary 3.9. Let H ∈ (0, 1]. Then for all β > H, limt→+∞ |t|−β BtH = 0 a. s. ̃ H be taken from (3.2). Note that lim|t|→+∞ |t|−β BH = 0 a. s. if and only if Proof. Let B t ̃H = 0 lim |t|β−2H B t

|t|→0

a. s.

Since β > H, we have that 2H − β < H. Let γ ∈ (2H − β, H). By Proposition 3.3 there exists an a. s. finite random variable Cγ > 0 such that 󵄨 ̃ H 󵄨󵄨 β−2H 󵄨󵄨 ̃ H ̃ H 󵄨󵄨󵄨󵄨 ≤ Cγ |t|β−2H+γ . 󵄨 󵄨󵄨Bt − B |t|β−2H 󵄨󵄨󵄨󵄨B t 󵄨󵄨 = |t| 0󵄨 󵄨 ̃ H | = 0 a. s. The inequality β − 2H + γ > 0 supplies that lim|t|→0 |t|β−2H |B t

194 � 3 Fractional Brownian motion and its environment In the proof of Corollary 3.9, we were lucky in the sense that we were able to reduce the asymptotic behavior of the fBm to the behavior of the trajectories of the transformed fBm over a finite interval. However, if we still need to know its exact growth at infinity and the Hölder property also on unbounded sets, this asymptotic behavior is described by the following statement from [122]. Proposition 3.10. (i) For all p ≥ 1, the asymptotic growth of fBm is characterized as follows: 󵄨 󵄨 󵄨 󵄨p sup 󵄨󵄨󵄨󵄨BsH 󵄨󵄨󵄨󵄨 ≤ ((t H 󵄨󵄨󵄨log(t)󵄨󵄨󵄨 ) ∨ 1) ξ(p), 0 0,

where ξ(p) is a random variable having moments of all orders. (ii) For all p ≥ 1 and 0 < δ < H, the asymptotic growth of the increments of fBm is characterized as follows: 󵄨󵄨 H 󵄨 p 󵄨󵄨Bs − BtH 󵄨󵄨󵄨 ≤ (|t − s|H−δ (t ∨ s)δ (󵄨󵄨󵄨󵄨log(t ∨ s)󵄨󵄨󵄨󵄨 ∨ 1)) ξ(p), 󵄨 󵄨 where ξ(p) is a random variable having moments of all orders. As we already have seen, it is reasonable to consider the incremental process H I H = {It,s , t, s ∈ ℝ} defined as H It,s = BtH − BsH .

(3.3)

H Obviously, It,s is a Gaussian random variable for all t, s ∈ ℝ. Furthermore, as a consequence of item (i) in Exercise 3.1, we have the following property. d

H H H Proposition 3.11. For all t, s ∈ ℝ It,s = It−s,0 = Bt−s .

We can also directly evaluate the covariance structure of the increments of the fBm. Proposition 3.12. For all H ∈ (0, 1) and s1 , t1 , s2 , t2 ∈ ℝ, E [ItH1 ,s1 ItH2 ,s2 ] =

1 (|t − s |2H + |t2 − s1 |2H − |t1 − t2 |2H − |s1 − s2 |2H ) . 2 1 2

H Some further properties of the process It,s are given in Exercises 3.4, 3.5, and 3.6.

3.2 Wiener integration with respect to the fractional Brownian motion Now we would like to define integrals of nonrandom functions with respect to an fBm. Such integrals are called Wiener integrals. To do this, we proceed by several steps. First of all, let n ∈ ℕ and t0 < t1 < ⋅ ⋅ ⋅ < tn in ℝ. Consider a1 , . . . , an ∈ ℝ and the step

3.2 Wiener integration with respect to the fractional Brownian motion

� 195

function n

f (t) = ∑ ak 1[tk−1 ,tk ) (t).

(3.4)

k=1

Then we define the integral of f with respect to BH , H ∈ (0, 1), as n

∫ f (t)dBtH = ∑ ak (BtHk − BtHk−1 ) . k=1



Clearly, ∫ℝ f (t)dBtH is a Gaussian random variable with E [∫ℝ f (t)dBtH ] = 0. Let us determine the variance of ∫ℝ f (t)dBtH . According to Proposition 3.12, 2 H E [(∫ f (t)dBt ) ]

[



n

= ∑

ak aj 2

k,j=1

]

n

= ∑ ak aj E [(BtHk − BtHk−1 ) (BtHj − BtHj−1 )] k,j=1

(|tk − tj−1 |2H + |tj − tk−1 |2H − |tk − tj |2H − |tk−1 − tj−1 |2H ) .

(3.5)

If H = 21 , then 1

2

n

E [(∫ f (t)dBt2 ) ] = ∑ ak2 |tk − tk−1 | = ∫ f 2 (t)dt, ℝ [ ℝ ] k=1

(3.6)

which is a well-known result. Hence, in the case H = 21 , we can connect the variance of ∫ℝ f (t)dBtH with the L2 (ℝ)-norm of f . We would like to do something similar for any H ∈ (0, 1). More precisely, we want to connect the variance of ∫ℝ f (t)dBtH with the norm of f in a suitable inner product space L2H (ℝ), such that if f ∈ L2H (ℝ) is given as in (3.4), then ‖f ‖2L2 (ℝ) H

2

= E [(∫ f (t)dBtH ) ] . [ ℝ ]

This can be done by using fractional integrals and derivatives, depending on whether H > 21 or H < 21 . From now on, let us denote αH = H − 21 . On the one hand, if H > 21 , then α

(I− H 1[tk−1 ,tk ) ) (t) =

1 α α ((t − t)+H − (tk−1 − t)+H ) , Γ(αH + 1) k

where x+ = max{0, x}. On the other hand, if H < 21 , then −α

(D− H 1[tk−1 ,tk ) ) (t) =

1 α α ((t − t)+H − (tk−1 − t)+H ) . Γ(αH + 1) k

196 � 3 Fractional Brownian motion and its environment To use a uniform notation, define the operator M±H acting as follows: α

H ∈ ( 21 , 1) ,

I Hf, { { {± H MN { M± f = Γ(αH + 1)CH {f , { { { −αH {D± f ,

H = 21 ,

H ∈ (0, 21 ) ,

where CHMN is the normalization constant defined as

CHMN

+∞ { { { {( ∫ ((1 + s)αH − sαH )2 ds + 1 ) := { 2H { { { 0 {1,

− 21

H ∈ (0, 1)\ { 21 } ,

,

H = 21 ,

so that in general α

α

(M−H 1[tk−1 ,tk ) ) (t) = CHMN ((tk − t)+H − (tk−1 − t)+H ) . Let us emphasize that CHMN > 0 is well-defined, since for H ∈ (0, 1), H ≠ 21 , we have +∞

2

∫ ((1 + s)αH − sαH ) ds < +∞,

(3.7)

0

as justified in Exercise 3.8. Here the notation MN stands for Mandelbrot–van Ness, as they were the first to provide a representation of the fBm in terms of the Bm in [152]. It can be useful to define the function KH : ℝ × ℝ → ℝ for H ∈ (0, 1) as follows: α

α

KH (t, s) := CHMN [(t − s)+H − (−s)+H ] ,

t, s ∈ ℝ,

so that we can rewrite compactly (M−H 1[tk−1 ,tk ) )(t) = (KH (tk , t) − KH (tk−1 , t)). Let us now state some properties of KH [169, 206]. Lemma 3.13. Fix any H ∈ (0, 1), H ≠ 21 . We have the following properties: (i) For all t ∈ ℝ, ∫ KH2 (t, s)ds = |t|2H . ℝ

(ii) For all t1 , t2 ∈ ℝ such that t2 > t1 , 2

∫(KH (t2 , s) − KH (t1 , s)) ds = (t2 − t1 )2H . ℝ

3.2 Wiener integration with respect to the fractional Brownian motion

� 197

(iii) For all t1 , t2 ∈ ℝ, 2 ∫ KH (t2 , s)KH (t1 , s)ds = |t1 |2H + |t2 |2H − |t1 − t2 |2H . ℝ

As a direct consequence, we have the following proposition. Proposition 3.14. Fix H ∈ (0, 1), H ≠ s2 < t2 , we have

1 . 2

For all s1 , t1 , s2 , t2 ∈ ℝ such that s1 < t1 and

2 ∫ (M−H 1[s1 ,t1 ) ) (t) (M−H 1[s2 ,t2 ) ) (t)dt ℝ

= |t1 − s2 |2H + |t2 − s1 |2H − |t1 − t2 |2H − |s1 − s2 |2H .

(3.8)

This, combined with (3.5), also proves that for f from (3.4) and H ∈ (0, 1), we have 2

2 E [(∫ f (t)dBtH ) ] = ∫ ((M−H f ) (t)) dt. [ ℝ ] ℝ

(3.9)

Now we define the space L2H (ℝ) := {f ∈ 𝒟 (M−H ) : M−H f ∈ L2 (ℝ)}

(3.10)

equipped with the norm 󵄩 󵄩 ‖f ‖L2 (ℝ) := 󵄩󵄩󵄩󵄩M−H f 󵄩󵄩󵄩󵄩L2 (ℝ) . H Such a space is called the time domain of the integral with respect to the fBm. The norm ‖ ⋅ ‖L2 (ℝ) is induced by the scalar product H

(f , g)L2 (ℝ) = ∫ (M−H f ) (t) (M−H g) (t)dt. H



Clearly, if H = 21 , then M−H is the identity, and L2 (ℝ) = L2H (ℝ). If H < 21 , then L2H (ℝ) is a Hilbert space. Indeed, let (fn )n∈ℕ be a Cauchy sequence in L2H (ℝ). Then φn := M−H fn , n ∈ ℕ, is by definition a Cauchy sequence in L2 (ℝ), and there exists φ ∈ L2 (ℝ) such that ‖φn − φ‖L2 (ℝ) → 0 as n → +∞. Define f = M−1−H φ. From (1.45) we know that M−H f = φ. Furthermore, ‖fn − f ‖L2 (ℝ) = ‖φn − φ‖L2 (ℝ) → 0. H

On the other hand, if H > 21 , then recall (see [169, 206]) that L2H (ℝ) is not a Hilbert space since it is not complete. In any case, however, as shown in [169, 206], the following statement holds.

198 � 3 Fractional Brownian motion and its environment Lemma 3.15. For each H ∈ (0, 1), span {M−H 1(a,b) , −∞ < a < b < +∞} is dense in L2 (ℝ), and span{1(a,b) , −∞ < a < b < +∞} is dense in L2H (ℝ). Lemma 3.15 also states that for each H ∈ (0, 1), the step functions are dense in L2H (ℝ). Let f ∈ L2H (ℝ) and consider a sequence (fn )n∈ℕ of step functions such that fn → f in L2H (ℝ). Also, let Xn = ∫ℝ fn (t)dBtH . Then E [(Xn − Xm )2 ] = ‖fn − fm ‖2L2 (ℝ) , H

which means that (Xn )n∈ℕ is a Cauchy sequence in L2 (Ω). Hence there exists X ∈ L2 (Ω) such that Xn → X in L2 (Ω) as n → +∞. Define ∫ f (t)dBtH := X. ℝ

In particular, by convergence we get the following fractional version of the Itô isometry. Theorem 3.16. Let f ∈ L2H (ℝ). Then ∫ℝ f (t)dBtH is well defined, belongs to L2 (Ω), and E [∫ f (t)dBH ] = 0, t

[ℝ

]

2

E [(∫ f (t)dBtH ) ] = ‖f ‖2L2 (ℝ) . H [ ℝ ]

Furthermore, for all f , g ∈ L2H (ℝ), we have E [(∫ f (t)dBtH ) (∫ g(t)dBtH )] = (f , g)L2 (ℝ) . H ℝ [ ℝ ] Thanks to the Hardy–Littlewood theorem (Theorem 1.10), we can deduce the rela1 tion between the spaces L2H (ℝ) and L H (ℝ). Theorem 3.17. We have the following properties: 1 (i) If H ∈ (0, 21 ), then L2H (ℝ) ⊂ L H (ℝ), and ‖f ‖

1

L H (ℝ)

≤ CH ‖f ‖L2 (ℝ) , H

where CH > 0 is a constant depending only on H.

f ∈ L2H (ℝ),

3.2 Wiener integration with respect to the fractional Brownian motion

1

� 199

1

(ii) If H ∈ ( 21 , 1), then L H (ℝ) ⊂ L2H (ℝ), and for all f ∈ L H (ℝ), ‖f ‖L2 (ℝ) ≤ CH ‖f ‖

1

L H (ℝ)

H

,

where CH > 0 is a constant depending only on H. Proof. First, we prove (i). Let f ∈ L2H (ℝ). Then D− H f ∈ L2 (ℝ), and we get from (1.45) −α

that f =

I− H M−H f . Γ(αH +1)CHMN −α

Now we apply the Hardy–Littlewood theorem (Theorem 1.10) to

α = −αH = 1/2 − H and p = 2. In this case, q = CH > 0 that depends only on H such that −αH

‖f ‖

1 LH

(ℝ)

=

‖I−

M−H f ‖

Γ(αH +

1

L H (ℝ) 1)CHMN

p 1−αp

=

1 , H

and there exists a constant

󵄩 󵄩 ≤ CH 󵄩󵄩󵄩󵄩M−H f 󵄩󵄩󵄩󵄩L2 (ℝ) = CH ‖f ‖L2 (ℝ) . H

Now let us prove (ii). In this connection, it is sufficient to notice that 󵄩 α 󵄩 ‖f ‖L2 (ℝ) = Γ(αH + 1)CHMN 󵄩󵄩󵄩I− H f 󵄩󵄩󵄩L2 (ℝ) ≤ CH ‖f ‖ H

1

L H (ℝ)

,

where we applied Theorem 1.10 to α = αH = H − 1/2 and p = p q = 1−αp = 2.

1 . H

In this case,

In the case H < 21 , we are able to provide another upper bound. Theorem 3.18. Let H ∈ (0, 21 ), and let f ∈ W 1,2 (ℝ) satisfy 󵄨 󵄨 lim |x|1+αH 󵄨󵄨󵄨f (x)󵄨󵄨󵄨 = 0

x→+∞



and

󵄨 󵄨 󵄨 󵄨 ∫ x −αH (󵄨󵄨󵄨f (x)󵄨󵄨󵄨 + x 󵄨󵄨󵄨f ′ (x)󵄨󵄨󵄨) dx < ∞. 0

Then there exists a constant CH such that ‖f ‖L2 (ℝ) ≤ CH ‖f ‖W 1,2 (ℝ) . H

Proof. It follows by Theorem 1.39 and Remark 1.41 with α = −αH and p = 2. In the case H > lemma is clear.

1 , 2

Lemma 3.19. Let H >

1 2

we can provide another time domain. Indeed, the following and s1 , t1 , s2 , t2 ∈ ℝ with s1 < t1 ≤ s2 < t2 . Then

2H(2H − 1) ∫ 1[s1 ,t1 ) (t)1[s2 ,t2 ) (s)|t − s|2H−2 dtds ℝ2 2H

= |t2 − s1 |

− |t2 − t1 |2H − |s2 − s1 |2H + |s2 − t1 |2H ,

(3.11)

200 � 3 Fractional Brownian motion and its environment and H(2H − 1) ∫ 1[s1 ,t1 ) (t)1[s1 ,t1 ) (s)|t − s|2H−2 dtds = (t1 − s1 )2H . ℝ2

Hence, for f from (3.4) and for H > 21 , it follows from (3.5) that 2

E [(∫ f (t)dBtH ) ] = H(2H − 1) ∫ f (t)f (s)|t − s|2H−2 dtds. [ ℝ ] ℝ2

(3.12)

Now we can define the space 󵄨󵄨 󵄨󵄨 { } 󵄨󵄨 2 󵄨󵄨 󵄨 󵄨󵄨LH 󵄨󵄨(ℝ) := {f : ℝ → ℝ 󵄨󵄨󵄨 ∫ 󵄨󵄨󵄨󵄨f (t)󵄨󵄨󵄨󵄨󵄨󵄨󵄨󵄨f (s)󵄨󵄨󵄨󵄨|t − s|2H−2 dsdt < +∞} 󵄨 󵄨 󵄨󵄨 󵄨󵄨 2 { } 󵄨ℝ equipped with the inner products 󵄨 󵄨󵄨 󵄨 (f , g)|L2 |(ℝ),abs = H(2H − 1) ∫ 󵄨󵄨󵄨f (t)󵄨󵄨󵄨󵄨󵄨󵄨g(s)󵄨󵄨󵄨|t − s|2H−2 dsdt H

ℝ2

and (f , g)|L2 |(ℝ) = H(2H − 1) ∫ f (t)g(s)|t − s|2H−2 dsdt. H

ℝ2

Denote by ‖ ⋅ ‖|L2 |(ℝ),abs and ‖ ⋅ ‖|L2 |(ℝ) the respective norms. We can easily prove that H

|L2H |(ℝ) ⊂ L2H (ℝ) and

H

(3.13)

‖f ‖|L2 |(ℝ) = ‖f ‖L2 (ℝ) H

H

for all f ∈ |L2H |(ℝ). For the details, see Exercise 3.10. This inclusion is proper, as shown in [169, Lemma 1.6.9]. The condition ‖f ‖|L2 |(ℝ),abs < +∞ is used in Exercise 3.10 to apply H Fubini theorem in the definition of ‖f ‖L2 (ℝ) . In particular, Fubini theorem can be applied H also if f ≥ 0 a. e. Hence we have the following further inclusion: 󵄨 󵄨 {f ∈ L2H (ℝ) : f ≥ 0} ⊂ 󵄨󵄨󵄨󵄨L2H 󵄨󵄨󵄨󵄨 (ℝ). Let us also recall the following result [206, Theorem 4.1]. Theorem 3.20. The space (|L2H |(ℝ), ‖ ⋅ ‖|L2 |(ℝ),abs ) is a Hilbert space. H

1

Again, we can study the relation between the spaces |L2H |(ℝ) and L H (ℝ). Indeed, the next result improves statement (ii) of Theorem 3.17.

3.2 Wiener integration with respect to the fractional Brownian motion

� 201

1

Theorem 3.21. If H > 21 , then L H (ℝ) ⊂ |L2H |(ℝ), and there exists a constant CH > 0 such that ‖f ‖|L2 |(ℝ),abs ≤ CH ‖f ‖ H

1 LH

(ℝ)

1

f ∈ L H (ℝ).

,

1

Proof. For all f ∈ L H (ℝ), we have 󵄨 󵄨󵄨 󵄨 ‖f ‖2|L2 |(ℝ),abs = ∫ ∫󵄨󵄨󵄨f (s)󵄨󵄨󵄨󵄨󵄨󵄨f (u)󵄨󵄨󵄨|s − u|2H−2 duds H

ℝℝ

󵄨 󵄨 = Γ(2H − 1) ∫󵄨󵄨󵄨f (s)󵄨󵄨󵄨 ((I−2H−1 |f |) (s) + (I+2H−1 |f |) (s)) ds ℝ

≤ Γ(2H − 1)‖f ‖ ≤ Γ(2H − 1)‖f ‖ ≤ CH ‖f ‖2

1

L H (ℝ)

󵄩󵄩 2H−1 󵄩󵄩I |f | 1 L H (ℝ) 󵄩 − 1

L H (ℝ)

󵄩 + I+2H−1 |f |󵄩󵄩󵄩󵄩

󵄩 󵄩 (󵄩󵄩󵄩󵄩I−2H−1 |f |󵄩󵄩󵄩󵄩

1

L 1−H (ℝ)

1

L 1−H (ℝ)

󵄩 󵄩 + 󵄩󵄩󵄩󵄩I+2H−1 |f |󵄩󵄩󵄩󵄩

1

L 1−H (ℝ)

)

,

where we used Hölder inequality and Theorem 1.10 with p = 1 q = 1−H .

1 , H

α = 2H − 1, and

Finally, we can define a further space of integrands by using the Fourier transform. 1 Indeed, for all f ∈ 𝒮 (ℝ), ℱ [M−H f ](z) = CHMN (−iz) 2 −H ℱ [f ](z) (see Exercise 3.11), and this relation can be extended to the space {

2

󵄨2

1−2H

ℱH := {f ∈ L (ℝ) : ∫󵄨󵄨󵄨ℱ [f ](z)󵄨󵄨󵄨 |z|

{

󵄨



} dz < +∞} . }

If H < 21 , then 1 − 2H > 0, and thus ℱH coincides with the fractional Sobolev space W −αH ,2 (ℝ) (see Section 1.11). We give here a fundamental lemma, which summarizes several results in [206]. Lemma 3.22. ℱH can be equipped with the inner product 2

(f , g)ℱH := 2π (CHMN Γ(αH + 1)) ∫ ℱ [f ](z)ℱ [g](z)|z|1−2H dz. ℝ

With this inner product, ℱH satisfies the following properties: 1. ℱH is not complete unless H = 21 . 2. The step functions are dense in ℱH . 3. ℱH ⊂ L2H (ℝ), and the inclusion is proper unless H = 21 . 4. If H > 21 , then L1 (ℝ) ∩ L2 (ℝ) ⊂ ℱH .

202 � 3 Fractional Brownian motion and its environment Furthermore, for every function f ∈ L2 (ℝ), there exists a sequence of step functions fn ∈ ℱH such that 󵄨 󵄨2 lim ∫󵄨󵄨󵄨ℱ [f ](z) − ℱ [fn ](z)󵄨󵄨󵄨 |z|1−2H dz = 0.

n→+∞

(3.14)



Since ℱH ⊂ L2H (ℝ), for every f ∈ ℱH , we can define the integral ∫ℝ f (t)dBtH . Indeed, as a consequence of Plancherel’s formula, we know that ‖f ‖ℱH (ℝ) = ‖f ‖L2 (ℝ) for all H f ∈ ℱH . For such a reason, the space ℱH is called the spectral domain of the integral with respect to the fBm. Using the fact that 1[0,t) ∈ ℱH , we can provide another representation of CHMN valid for all H ∈ (0, 1); see Exercise 3.11.

3.3 Wiener integrals of functions of bounded variation We want to define the integral of a function of bounded variation w. r. t. BH as a Riemann–Stieltjes integral. To do this, let us recall the definition of a function of bounded variation. Fix −∞ < a < b < +∞. We say that f : [a, b] → ℝ has bounded variation if there exists a constant C > 0 such that for all a = t0 < t1 < ⋅ ⋅ ⋅ < tn = b, we have n

󵄨 󵄨 ∑ 󵄨󵄨󵄨f (tj ) − f (tj−1 )󵄨󵄨󵄨 ≤ C. j=1

This definition clearly cannot be applied if f (t) is defined for a. a. t ∈ [a, b]. In such a case, however, we say that f ∈ L1loc (a, b) has bounded essential variation if there exists a function g : [a, b] → ℝ of bounded variation such that g(t) = f (t) for a. a. t ∈ [a, b]. The space of functions of bounded essential variation on [a, b] will be denoted as BV[a, b]. For a function f : [a, b] → ℝ and t ∈ [a, b) (respectively, t ∈ (a, b]), we denote f (t+) = lims↓t f (s) (respectively, f (t−) = lims↑t f (s)) and f (b+) = f (b) (respectively, f (a−) = f (a)). We say that f is càdlàg if f (t+) = f (t) and f (t−) ∈ ℝ for all t ∈ [a, b]. It is well known that (see [5, Theorems 3.27 and 3.28]) for every function f ∈ BV[a, b] there exists a càdlàg function g such that f (t) = g(t) for a. a. t ∈ [a, b]. Hence, without loss of generality, by f ∈ BV[a, b] we will always consider the càdlàg version. Furthermore, we recall that any function of bounded variation has at most a countable number of discontinuities. It is known that AC[a, b] ⊂ BV[a, b], whereas a function f ∈ BV[a, b] that is continuous and differentiable a. e. with f ′ (t) = 0 for a. a. t ∈ [a, b] is said to be singular. A function f ∈ BV[a, b] is called a jump function if there exists an at most countable family of points {x0 , . . . , xn , . . . } ⊂ [a, b] (with xn < xn+1 for all n = 0, 1, . . . ) such that for all x ∈ [a, b], we have f (x) = ∑xn ≤x (f (xn ) − f (xn −)). We can describe every bounded-variation function by means of these three families of functions using the so-called Lebesgue decomposition theorem (see [5, Corollary 3.33]).

3.3 Wiener integrals of functions of bounded variation

� 203

Theorem 3.23. Every function f ∈ BV[a, b] can be written as f = fj + fac + fs , where fj , fac , fs ∈ BV[a, b], fj is a jump function, fac ∈ AC[a, b], and fs is singular. We also recall that any monotone function f : [a, b] → ℝ belongs to BV[a, b]. For −∞ ≤ a < b ≤ +∞, we say that f : (a, b) → ℝ belongs to BVloc (a, b) if its restriction f[c,d] : [c, d] → ℝ on any subinterval [c, d] ⊂ (a, b) belongs to BV[c, d]. Theorem 3.23 still holds for f ∈ BVloc (a, b) with fj , fac , fs ∈ BVloc (a, b). The functions of bounded variation represent an interesting class since we can easily define an integral with respect to them. In general, given two functions f , g : [a, b] → ℝ, we say that f admits a Riemann– Stieltjes integral with respect to g if there exists I ∈ ℝ with the following property: for all ε > 0, there exists δε > 0 such that for every partition a = t0 < t1 < ⋅ ⋅ ⋅ < tn = b of the interval [a, b] with diameter maxk=1,...,n |tk − tk−1 | < δε and arbitrary choice of points ξk ∈ [tk−1 , tk ], k = 1, . . . , n, we have 󵄨󵄨 󵄨󵄨 n 󵄨󵄨 󵄨 󵄨󵄨I − ∑ f (ξk ) (g(tk ) − g(tk−1 ))󵄨󵄨󵄨 < ε. 󵄨󵄨 󵄨󵄨 󵄨 k=1 󵄨 In such a case, we denote b

I = ∫ f (t)dg(t), a

and we call it the Riemann–Stieltjes integral of f with respect to g. It is well known that if b

f ∈ 𝒞 [a, b] and g ∈ BV[a, b], then ∫a f (t)dg(t) exists. In particular, if g is a jump function with set of discontinuities Disc(g) = {t0 , t1 , . . . }, then b

∫ f (t)dg(t) = a



tn ∈Disc(g)

f (tn )(g(tn ) − g(tn −)).

If g ∈ AC[a, b], then instead we have b

b

∫ f (t)dg(t) = ∫ f (t)g ′ (t)dt. a

a

In general, if g ∈ BV[a, b] admits a Lebesgue decomposition g = gj + gac + gs , then b

∫ f (t)dg(t) = a

b



tn ∈Disc(g)

b

f (tn )(g(tn ) − g(tn −)) + ∫ f (t)g ′ (t)dt + ∫ f (t)dgs (t). a

a

Another criterion to determine whether a Riemann–Stieltjes integral exists is provided by the integration-by-parts formula.

204 � 3 Fractional Brownian motion and its environment b

b

Theorem 3.24. Let f , g : [a, b] → ℝ. If ∫a fdg exists, then so does ∫a gdf , and b

b

∫ f (t)dg(t) = [f (b)g(b) − f (a)g(a)] − ∫ g(t)df (t). a

a

b

This tells us that ∫a f (t)dg(t) is well-defined, for instance, even if f ∈ BV[a, b] and g ∈ 𝒞 [a, b]. For further details on functions of bounded variations and Riemann–Stieltjes integrals, we refer to [5, 254]. We can use Theorem 3.24 to define the integral of a bounded variation function with respect to an fBm. Indeed, by Proposition 3.5 with p = 1 we know that for every interval [a, b], the fBm BH is not of bounded variation on it. However, if f ∈ BV[a, b], then since BH is a. s. continuous, the Riemann–Stieltjes integral b

∫ BtH df (t) a

is a. s. well-defined. Hence by the integration-by-parts formula (Theorem 3.24) we can define b

∫ f (t)dBtH a

=

f (b)BbH



f (a)BaH

b

− ∫ BtH df (t). a

In the particular case f ∈ AC[a, b] the previous formula becomes b

∫ f (t)dBtH a

=

f (b)BbH



f (a)BaH

b

− ∫ f ′ (t)BtH dt, a

whereas if f ∈ BV[a, b] is a jump function, then we get b

∫ f (t)dBtH = f (b)BbH − f (a)BaH − a



tn ∈Disc(f )

BtHn (f (tn ) − f (tn −)).

For a general f ∈ BV[a, b], let f = fj + fac + fs be its Lebesgue decomposition (see Theorem 3.23). Then b

∫ f (t)dBtH = f (b)BbH − f (a)BaH − a

b

b

a

a



tn ∈Disc(f )

BtHn (f (tn ) − f (tn −))

− ∫ f ′ (t)BtH dt − ∫ BtH dfs (t).

(3.15)

We can also provide a formula for a particular class of integrands in BVloc (ℝ), i. e., the space of functions of locally bounded variation, with fs ≡ 0.

3.3 Wiener integrals of functions of bounded variation

� 205

Theorem 3.25. Let H ∈ (0, 1) and f ∈ BVloc (ℝ) with fs ≡ 0, and assume that there exists β > H such that 󵄨 󵄨 sup |t|β 󵄨󵄨󵄨f (t)󵄨󵄨󵄨 + t∈ℝ



tn ∈Disc(f )

󵄨 󵄨 󵄨 󵄨 |tn |β 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 + ∫ |t|β 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt < +∞.

(3.16)



Then ∫ f (t)dBtH = − ℝ



tn ∈Disc(f )

BtHn (f (tn ) − f (tn −)) − ∫ f ′ (t)BtH dt

a. s.,

(3.17)



where the series and the integral on the right-hand side are a. s. absolutely convergent. Proof. Let −∞ < a < b < +∞. Then by definition f ∈ BV[a, b] with fs ≡ 0, and we can use (3.15) to get b

∫ f (t)dBtH = f (b)BbH − f (a)BaH − a

b



tn ∈Disc(f )∩[a,b]

BtHn (f (tn ) − f (tn −)) − ∫ f ′ (t)BtH dt.

(3.18)

a

Now we need to take the limits as a → −∞ and b → +∞. Let us first note that 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨f (b)BbH 󵄨󵄨󵄨 = |b|β 󵄨󵄨󵄨󵄨f (b)󵄨󵄨󵄨󵄨|b|−β 󵄨󵄨󵄨BbH 󵄨󵄨󵄨 ≤ (sup |t|β 󵄨󵄨󵄨󵄨f (t)󵄨󵄨󵄨󵄨) |b|−β 󵄨󵄨󵄨BbH 󵄨󵄨󵄨 . 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 t∈ℝ Taking the limit as b → +∞ and using Corollary 3.9, we get 󵄨 󵄨 lim 󵄨󵄨󵄨f (b)BbH 󵄨󵄨󵄨󵄨 = 0

a. s.

(3.19)

󵄨 󵄨 lim 󵄨󵄨󵄨f (a)BaH 󵄨󵄨󵄨󵄨 = 0

a. s.

(3.20)

b→+∞ 󵄨

With the same argument, we get a→−∞ 󵄨

Next, note that t ∈ ℝ \ [−1, 1] 󳨃→ |t|−β BtH ∈ ℝ is a. s. continuous and by Corollary 3.9 limt→+∞ |t|−β BtH = 0. Furthermore, supt∈[−1,1] |BtH | =: M1 and supt∈ℝ\[−1,1] (|t|−β |BtH |) =: M2 are random variables that are finite a. s. Assuming that a < −1 < 1 < b and setting I[f ; a, b] = Disc(f )∩[a, b], I1 [f ; a, b] = (Disc(f )∩[a, b])\[−1, 1], and I2 [f ] = Disc(f )∩[−1, 1], we get ∑

tn ∈I[f ;a,b]

≤ M1 ≤ M1

󵄨󵄨 H 󵄨󵄨 󵄨󵄨 󵄨󵄨Bt 󵄨󵄨 󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨󵄨 󵄨 n󵄨 ∑

tn ∈I1 [f ;a,b]



tn ∈I[f ;a,b]

󵄨 󵄨 󵄨 󵄨 |tn |β 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 + M2 ∑ 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 β 󵄨󵄨

tn ∈I2 [f ]

󵄨 󵄨 󵄨 |tn | 󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 + M2 ∑ 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨. tn ∈I2 [f ]

206 � 3 Fractional Brownian motion and its environment Taking the limit as a → −∞ and b → +∞, by (3.16) we get ∑

tn ∈Disc(f )

≤ M1

󵄨󵄨 H 󵄨󵄨 󵄨󵄨 󵄨󵄨Bt 󵄨󵄨 󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨󵄨 󵄨 n󵄨 ∑

tn ∈Disc(f )

󵄨 󵄨 󵄨 󵄨 |tn |β 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 + M2 ∑ 󵄨󵄨󵄨f (tn ) − f (tn −)󵄨󵄨󵄨 < ∞

a. s.

tn ∈I2 [f ]

(3.21)

Furthermore, b

−1

b

a

a

1

1

󵄨 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 ∫ 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 󵄨󵄨󵄨󵄨BtH 󵄨󵄨󵄨󵄨 dt ≤ M1 ( ∫ |t|β 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt + ∫ |t|β 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt) + M2 ∫ 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt −1

b

1

a

−1

󵄨 󵄨 󵄨 󵄨 ≤ M1 ∫ |t|β 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt + M2 ∫ 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt. Taking the limit as a → −∞ and b → +∞, by (3.16) we have 1

󵄨 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 ∫ 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 󵄨󵄨󵄨󵄨BtH 󵄨󵄨󵄨󵄨 dt ≤ M1 ∫ |t|β 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt + M2 ∫ 󵄨󵄨󵄨f ′ (t)󵄨󵄨󵄨 dt < ∞



a. s.

(3.22)

−1



Hence, taking the limit as a → −∞ and b → +∞ in (3.18) and using (3.19), (3.20), (3.21), and (3.22), we get (3.17). Finally, (3.21) and (3.22) give us the absolute convergence of the series and the integral on the right-hand side of (3.17).

3.4 Representations of the fractional Brownian motion Let us consider how it is possible to transform an fBm BH into another fBm BK with a different Hurst index K by means of Wiener integrals. From now on, for t < 0, we denote 1[0,t) = −1(t,0] .

The following result, obtained in [208], provides one possible transformation formula. Theorem 3.26. Let H, K ∈ (0, 1) with H ≠ K. For all t, s ∈ ℝ, define K−H CKMN Γ(αK + 1) {(I− 1[0,t) ) (s), gH,K (⋅; ⋅)(t; s) := MN CH Γ(αH + 1) {(DH−K 1[0,t) ) (s), { −

K > H, K < H.

(3.23)

Then the process BtK := ∫ gH,K (t; s)dBsH ℝ

is an fBm with Hurst index K.

(3.24)

3.4 Representations of the fractional Brownian motion



207

This result is clear noticing that M−H gH,K (t; ⋅) = M−K 1[0,t) (see Exercise 3.13). In the specific case K = 21 , we get from Theorem 3.26 the Mandelbrot–van Ness representation of the fBm, given in [152]. Theorem 3.27. Let H ∈ (0, 1), and let B be a standard Brownian motion. Then the process BtH := ∫ (M−H 1[0,t) ) (s)dBs

(3.25)



is an fBm with Hurst index H. Vice versa, if BH is an fBm, then Bt :=

1

1

(2H(1 − H)(1 − 2H) tan(πH)) 2

∫ (M−1−H 1[0,t) ) (s)dBsH

(3.26)



is a Bm such that (3.25) holds. In fact, we can use (3.25) to rewrite Wiener integrals with respect to the fBm as Wiener integrals with respect to the standard Bm. Theorem 3.28. Let H ∈ (0, 1), and let BH be an fBm. Let also B be defined as in (3.26). Then for all f ∈ L2H (ℝ), ∫ f (t)dBtH = ∫ (M−H f ) (t)dBt . ℝ

(3.27)



Proof. First, let us note that for all t1 < t2 , ∫ 1[t1 ,t2 ) (t)dBtH = ∫ 1[0,t2 ) (t)dBtH − ∫ 1[0,t1 ) (t)dBtH = BtH2 − BtH1 ℝ



=



∫ (M−H 1[0,t2 ) ) (t)dBt

− ∫ (M−H 1[0,t1 ) ) (t)dBt = ∫ (M−H 1[t1 ,t2 ) ) (t)dBt ,







where, in the third equality, we used (3.25). By linearity we get (3.27) for all f of the form (3.4). Now consider arbitrary f ∈ L2H (ℝ), and let (fn )n≥0 be a sequence of functions from (3.4) such that fn → f in L2H (ℝ). On the one hand, we know that ∫ (M−H fn ) (t)dBt = ∫ fn (t)dBtH → ∫ f (t)dBtH ℝ



in L2 (Ω)

(3.28)



as n → +∞. On the other hand, since fn → f in L2H (ℝ), we know from the definition of the norm in L2H (ℝ) that M−H fn → M−H f in L2 (ℝ), and then ∫ (M−H fn ) (t)dBt → ∫ (M−H f ) (t)dBt ℝ



Combining (3.28) and (3.29), we get (3.27).

in L2 (Ω).

(3.29)

In the one-sided case, we can also provide a different representation formula obtained by means of finite-interval integrals. To do this, we introduce two operators that will play the role of M_-^H and its inverse and that take into consideration the fact that we are integrating over a finite interval. So, for s ∈ [0, T], define the following weighted fractional operators acting on f : [0, T] → ℝ as

    (K_T^H f)(s) = Γ(α_H + 1) C_H^{MN} s^{−α_H} ⋅
        (I_{T−}^{α_H}(⋅^{α_H} f(⋅)))(s),   H > 1/2,
        f(s),                              H = 1/2,
        (D_{T−}^{−α_H}(⋅^{α_H} f(⋅)))(s),  H < 1/2,

and

    (K_T^{H,∗} f)(s) = [s^{−α_H} / (Γ(α_H + 1) C_H^{MN})] ⋅
        (D_{T−}^{α_H}(⋅^{α_H} f(⋅)))(s),   H > 1/2,
        f(s),                              H = 1/2,
        (I_{T−}^{−α_H}(⋅^{α_H} f(⋅)))(s),  H < 1/2.
With the help of these operators, in [111] the following transformation formula is provided.

Theorem 3.29. Let H, K ∈ (0, 1), and let B^H be an fBm with Hurst index H. Then the process B^K := {B_t^K, t ≥ 0} defined as

    B_t^K = ∫_0^t z_{H,K}(t; s) dB_s^H,

where z_{H,K}(t; s) = (K_t^{H,∗}(K_t^K 1_{[0,t)}))(s) for t > 0 and s ∈ (0, t), is an fBm with Hurst index K.

Furthermore, in [111] the author proved that the kernel z_{H,K} can be expressed in terms of hypergeometric functions. Namely, for t > 0 and s ∈ (0, t),

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] (t − s)^{K−H} ₂F₁(1 − K − H, K − H; 1 + K − H; 1 − t/s).    (3.30)
If H < max{1 − K, K}, then we can directly use the integral representation of the hypergeometric function (as in Theorem B.9) to rewrite kernel (3.30) in a simple integral form. First, let K > 1/2. Then H < K, and we can use the integral representation of the hypergeometric function according to Theorem B.9 to state that

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H)C_H^{MN})] (t − s)^{K−H} ∫_0^1 v^{K−H−1} (1 + ((t − s)/s) v)^{K+H−1} dv
        = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H)C_H^{MN})] s^{1−K−H} ∫_s^t (u − s)^{K−H−1} u^{K+H−1} du,    (3.31)

where we used the change of variable u = s + (t − s)v. If K < 1/2, then H < 1 − K, and we can use the same argument to obtain

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(1 − K − H)Γ(2K)C_H^{MN})] (t − s)^{K−H} ∫_0^1 v^{−K−H} (1 − v)^{2K−1} (1 + ((t − s)/s) v)^{H−K} dv
        = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(1 − K − H)Γ(2K)C_H^{MN})] s^{K−H} ∫_s^t (u − s)^{−K−H} (t − u)^{2K−1} u^{H−K} du.    (3.32)
Clearly, if H < min{K, 1 − K}, then both (3.31) and (3.32) hold. Indeed, in such a case, (3.31) and (3.32) coincide, since they both are equal to (3.30). Furthermore, if H = 1/2 and K ≠ 1/2, then H < max{1 − K, K}. Therefore we can specify (3.31) and (3.32) for H = 1/2 and K ≠ 1/2 as follows:

    z_K(t; s) = C_K^{MN} ⋅
        α_K s^{−α_K} ∫_s^t (u − s)^{α_K−1} u^{α_K} du,                                              K > 1/2,
        1,                                                                                          K = 1/2,    (3.33)
        [Γ(α_K + 1) s^{α_K} / (Γ(−α_K)Γ(2K))] ∫_s^t (t − u)^{2K−1} / (u^{α_K}(u − s)^{α_K+1}) du,   K < 1/2,

where z_K(t; s) := z_{1/2,K}(t; s). The case K > 1/2 in (3.33) is described in [187], but the authors provide a different form of z_K for K < 1/2. The previous argument does not cover the case H ≥ max{1 − K, K}. We can, however, provide a general integral representation of the kernel z_{H,K}. Such a relation cannot follow directly from the integral representation of the hypergeometric function (as in Theorem B.9) since 1 − K − H and K − H can be negative numbers. Therefore we first need to use one of the relations between hypergeometric functions to avoid negative arguments.
Proposition 3.30. For all H, K ∈ (0, 1), t > 0, and s ∈ (0, t),

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] (t − s)^{−1} s^{1−K−H} ∫_s^t (u − s)^{K−H} u^{K+H−2} (2Ku + (1 − K − H)t) du    (3.34)
        = [Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(2 − K − H)Γ(2K)C_H^{MN})] (t − s)^{−1} s^{K−H} ∫_s^t (u − s)^{1−K−H} u^{H−K−1} (t − u)^{2K−1} (u + (K − H)t) du.    (3.35)
Proof. First, let us rewrite (3.30) as

    z_{H,K}(t; s) = [(K − H + 1)Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H + 2)C_H^{MN})] (t − s)^{K−H} ₂F₁(1 − K − H, K − H; 1 + K − H; 1 − t/s).

Taking into account the relation

    c ₂F₁(a, b; c; z) = (c − a) ₂F₁(a, b + 1; c + 1; z) + a(1 − z) ₂F₁(a + 1, b + 1; c + 1; z),    (3.36)

we get

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} (t − s)^{K−H} / (Γ(α_H + 1)Γ(K − H + 2)C_H^{MN})] [2K ₂F₁(1 − K − H, 1 + K − H; 2 + K − H; 1 − t/s) + (1 − K − H)(t/s) ₂F₁(2 − K − H, 1 + K − H; 2 + K − H; 1 − t/s)].

Since 2 + K − H > 1 + K − H > 0 for all H, K ∈ (0, 1), we can use the integral representation of the hypergeometric function (as in Theorem B.9):

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} (t − s)^{K−H} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] [2K ∫_0^1 v^{K−H} (1 + ((t − s)/s)v)^{K+H−1} dv + (1 − K − H)(t/s) ∫_0^1 v^{K−H} (1 + ((t − s)/s)v)^{K+H−2} dv].

Setting w = (t − s)v, we have

    z_{H,K}(t; s) = [Γ(α_K + 1)C_K^{MN} (t − s)^{−1} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] [2K ∫_0^{t−s} w^{K−H} (1 + w/s)^{K+H−1} dw + (1 − K − H)(t/s) ∫_0^{t−s} w^{K−H} (1 + w/s)^{K+H−2} dw]
        = [Γ(α_K + 1)C_K^{MN} (t − s)^{−1} s^{1−K−H} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] [2K ∫_0^{t−s} w^{K−H} (s + w)^{K+H−2} (s + w) dw + (1 − K − H)t ∫_0^{t−s} w^{K−H} (s + w)^{K+H−2} dw]
        = [Γ(α_K + 1)C_K^{MN} (t − s)^{−1} s^{1−K−H} / (Γ(α_H + 1)Γ(K − H + 1)C_H^{MN})] ∫_0^{t−s} w^{K−H} (s + w)^{K+H−2} (2K(s + w) + (1 − K − H)t) dw.

Finally, setting u = w + s, we get (3.34). To prove (3.35), we first use the equality

    z_{H,K}(t; s) = [(K − H + 1)Γ(α_K + 1)C_K^{MN} / (Γ(α_H + 1)Γ(K − H + 2)C_H^{MN})] (t − s)^{K−H} ₂F₁(K − H, 1 − K − H; 1 + K − H; 1 − t/s)
and then proceed exactly as before.

We are now able to discuss the case K = 1/2. For H ∈ (0, 1), we can use either (3.34) or (3.35) (they give the same formula) to obtain

    z_H^∗(t; s) = [2 cos(πH) / ((1 − 2H)πC_H^{MN})] (t − s)^{−1} s^{−α_H} ∫_s^t (u − s)^{−α_H} u^{α_H−1} (u − α_H t) du,    (3.37)

where z_H^∗(t; s) = z_{H,1/2}(t; s). Furthermore, if H < 1/2, then we can use (3.31) to achieve the alternative formulation

    z_H^∗(t; s) = [cos(πH) / (πC_H^{MN})] s^{−α_H} ∫_s^t (u − s)^{−α_H−1} u^{α_H} du.    (3.38)
As a direct corollary of Theorem 3.29, we formulate the following result.

Corollary 3.31. Let H ∈ (0, 1), and let B be a Bm. Then the process B^H := {B_t^H, t ≥ 0} defined as

    B_t^H = ∫_0^t z_H(t; s) dB_s,    (3.39)

where z_H is defined in (3.33), is an fBm with Hurst index H. Vice versa, if B^H is an fBm with Hurst index H, then the process B := {B_t, t ≥ 0} defined as

    B_t = ∫_0^t z_H^∗(t; s) dB_s^H    (3.40)

is a standard Bm such that (3.39) holds.

The representation formula provided in the corollary is called the Molchan–Golosov representation, since it was provided for the first time in [181]. Now we would like to define Wiener integrals of the form ∫_0^T f(s) dB_s^H. The definition is clear if f 1_{[0,T]} ∈ L_H^2(ℝ). However, thanks to (3.39), we can provide a direct construction of such integrals by introducing another space of integrands, as in [45, 207]. Indeed, similarly to L_H^2(ℝ), we can introduce the space

    L_H^2(0, T) := {f ∈ Dom(K_T^H) : K_T^H f ∈ L^2(0, T)}

equipped with the norm ‖f‖_{L_H^2(0,T)} := ‖K_T^H f‖_{L^2(0,T)} and the inner product (f, g)_{L_H^2(0,T)} := (K_T^H f, K_T^H g)_{L^2(0,T)}. Then L_H^2(0, T) is a Hilbert space if and only if H ≤ 1/2, and the functions f of the form (3.4) that are supported on [0, T] are dense in L_H^2(0, T). The following statement is a direct consequence of [169, Theorem 1.8.3] (see also [133]).

Theorem 3.32. Let H > 1/2. If f ∈ L_H^2(ℝ) is a. e. equal to a function whose support is a subset of [0, T], then f ∈ L_H^2[0, T], and ‖f‖_{L_H^2[0,T]} = ‖f‖_{L_H^2(ℝ)}.

Hence, as a consequence of Theorem 3.17, (ii), and Theorem 3.32, we know that there exists a constant C_H such that

    ‖f‖_{L_H^2[0,T]} ≤ C_H ‖f‖_{L^{1/H}[0,T]}.

This clearly implies that L^p[0, T] ⊂ L_H^2[0, T] for all p ≥ 1/H if H > 1/2. Further, for H < 1/2, we have the following result.

Theorem 3.33. Fix T > 0 and H ∈ (0, 1/2). Let f : (0, T] → ℝ, and suppose g_H(s) = s^{α_H} f(s) ∈ AC[0, T]. Then there exists a constant C_{H,T} such that

    ‖f‖_{L_H^2(0,T)} ≤ C_{H,T} (‖g_H′‖_{L^1[0,T]} + g_H(T)).    (3.41)

Proof. Since g_H ∈ AC[0, T], we can use (1.51) and Proposition 1.31 to write the following equalities for all s ∈ [0, T]:

    (D_{T−}^{−α_H} g_H)(s) = (D_{T−}^{−α_H}(g_H − g_H(T)))(s) + (D_{T−}^{−α_H} g_H(T))(s) = (^C D_{T−}^{−α_H} g_H)(s) + g_H(T)(T − s)^{α_H} / Γ(1 + α_H).    (3.42)

Hence from the definition of K_T^H for H < 1/2 …

3.5 Fractional Ornstein–Uhlenbeck process

Proposition 3.36. Let λ, σ > 0 and H ∈ (0, 1), let ξ be a random variable, and let B^H be an fBm of Hurst index H. The Langevin equation (3.44) admits a pathwise unique solution

    U_t^{H,ξ} = e^{−λt} (ξ + σ ∫_0^t e^{λu} dB_u^H),   t ≥ 0.    (3.45)

Furthermore, the sample paths of U^{H,ξ} are locally γ-Hölder continuous for all γ < H.

Proof. Let us fix ω ∈ Ω and recall that, up to the choice of a version, t ∈ ℝ ↦ B_t^H(ω) is γ-Hölder continuous for all γ < H. For such fixed ω, consider the linear Cauchy problem

    Z′(t, ω) = ξ(ω) − λZ(t, ω) + σB_t^H(ω),   t > 0,
    Z(0, ω) = 0.    (3.46)

This Cauchy problem admits a unique solution

    Z(t, ω) = e^{−λt} ∫_0^t e^{λu} (ξ(ω) + σB_u^H(ω)) du = (ξ(ω)/λ)(1 − e^{−λt}) + σe^{−λt} ∫_0^t e^{λu} B_u^H(ω) du.

Furthermore, by definition, Z′(⋅, ω) solves (3.44) for a fixed ω ∈ Ω. Let us prove that the solution of (3.44) for any fixed ω ∈ Ω is unique. Indeed, if y(⋅, ω) is a solution, then the function Y(t, ω) = ∫_0^t y(s, ω) ds solves (3.46), and thus Y(⋅, ω) = Z(⋅, ω) and y(⋅, ω) = Z′(⋅, ω). This proves that (3.44) admits a pathwise unique solution. In particular,

    U_t^{H,ξ} := Z′(t, ⋅) = ξe^{−λt} − σλe^{−λt} ∫_0^t e^{λu} B_u^H du + σB_t^H,   t ≥ 0,    (3.47)

is the unique pathwise solution of (3.44). Moreover, representation (3.45) follows by direct application of (3.15), once we recall that the function u ∈ [0, t] ↦ e^{λu} ∈ ℝ belongs to AC[0, t]. Finally, the local γ-Hölder continuity for all 0 < γ < H follows directly from (3.47).

Definition 3.37. The process U^{H,ξ} is called the fractional Ornstein–Uhlenbeck process (fOU) with initial state ξ.

Clearly, U^{H,ξ} is adapted to the filtration {ℱ_t}_{t≥0}, where ℱ_t is the σ-algebra generated by ξ and {B_s^H}_{0≤s≤t}. If ξ ∈ L^1(Ω), then a direct application of Theorem 3.16 tells us that

    E[U_t^{H,ξ}] = e^{−λt} E[ξ].    (3.48)

In particular, lim_{t→+∞} E[U_t^{H,ξ}] = 0. This property is known as the mean reversion property.

Remark 3.38. If ξ is nonrandom or {ξ, B^H} is a Gaussian system, then according to Remark 3.35, the Ornstein–Uhlenbeck process is Gaussian.

An interesting choice of ξ is

    ξ = σ ∫_{−∞}^0 e^{λt} dB_t^H,

which is well defined by Theorem 3.25. In such a case, we denote by U^H the respective fOU process, which is given by

    U_t^H = σ ∫_{−∞}^t e^{−λ(t−s)} dB_s^H.    (3.49)

For such a process, we can prove the following result.
Proposition 3.39. For all H ∈ (0, 1) and σ, λ > 0, the fOU process U^H defined in (3.49) is a centered stationary Gaussian process.

Proof. First, we rewrite U_t^H for t > 0 using Theorem 3.25. Formally, we get that

    U_t^H = σe^{−λt} ∫_{−∞}^{+∞} e^{λs} 1_{(−∞,t]}(s) dB_s^H = σB_t^H − λσ ∫_{−∞}^t e^{−λ(t−s)} B_s^H ds.    (3.50)

To show that E[U_t^H] = 0 for all t > 0, first note that

    E[∫_{−∞}^t e^{−λ(t−s)} |B_s^H| ds] = √(2/π) ∫_{−∞}^t e^{−λ(t−s)} |s|^H ds < +∞.

Hence we can use Fubini's theorem to obtain that E[U_t^H] = 0. Furthermore,

    E[(∫_{−∞}^t e^{−λ(t−s)} B_s^H ds)^2] = E[(∫_0^{+∞} e^{−λv} B_{t−v}^H dv)^2] ≤ λ^{−1} E[∫_0^{+∞} e^{−λv} (B_{t−v}^H)^2 dv] = λ^{−1} ∫_0^{+∞} |t − v|^{2H} e^{−λv} dv < +∞,

where we used the Cauchy–Schwarz inequality. This also proves that Var[U_t^H] < +∞ for all t > 0. So, according to Remarks 3.35 and 3.38, U^H is a Gaussian process. To prove that U^H is stationary, rewrite (3.50) as

    U_t^H = σB_t^H − λσ ∫_0^{+∞} e^{−λv} B_{t−v}^H dv.

Then for all s ∈ ℝ, we have

    U_{t+s}^H = σB_{t+s}^H − λσ ∫_0^{+∞} e^{−λv} B_{t+s−v}^H dv = σ(B_{t+s}^H − B_s^H) − λσ ∫_0^{+∞} e^{−λv} (B_{t+s−v}^H − B_s^H) dv
        =^d σB_t^H − λσ ∫_0^{+∞} e^{−λv} B_{t−v}^H dv = U_t^H,

where we used the stationarity of the increments of the fBm (see Exercise 3.4). Obviously, such equality holds for all finite-dimensional distributions.
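The pathwise formula (3.47) also suggests a direct numerical scheme: given any sampled path of B^H, the process U^{H,ξ} can be computed with ordinary quadrature, since (3.47) involves only a Riemann integral of the fBm path and no stochastic integral. A minimal sketch (function and parameter names are ours):

```python
import numpy as np

def fou_from_path(t, bh, x0, lam, sigma):
    """Approximate U^{H,x} on the grid t from a sampled path bh of B^H,
    using (3.47): U_t = x0 e^{-lam t} - sigma lam e^{-lam t} * Int_0^t e^{lam u} B_u du
                        + sigma B_t.
    The time integral is evaluated with the trapezoidal rule."""
    w = np.exp(lam * t) * bh
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(t)))
    )
    return np.exp(-lam * t) * (x0 - sigma * lam * integral) + sigma * bh

# Deterministic sanity check with the zero path: U_t = x0 e^{-lam t},
# in line with the mean-reversion property (3.48).
t = np.linspace(0.0, 5.0, 1001)
u = fou_from_path(t, np.zeros_like(t), x0=2.0, lam=1.5, sigma=0.3)
```

For the linear path B_u = u (the H = 1 case with ζ = 1), the same routine reproduces the closed form (σ/λ)(1 − e^{−λt}) up to quadrature error, which gives a convenient regression test.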

Using (3.50), we can also provide a formula for the variance of U_t^H:

    V_{H,∞} := Var[U_t^H] = Var[U_0^H] = λ^2 σ^2 E[∫_{−∞}^0 ∫_{−∞}^0 e^{λ(s+u)} B_s^H B_u^H ds du] = (λ^2 σ^2 / 2) ∫_0^∞ ∫_0^∞ e^{−λ(s+u)} (|s|^{2H} + |u|^{2H} − |s − u|^{2H}) ds du.

This value can be explicitly calculated:

    V_{H,∞} = Hσ^2 Γ(2H) / λ^{2H}.    (3.51)
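Formula (3.51) is easy to check numerically; in particular, for H = 1/2 it reduces to σ²/(2λ), the stationary variance of the classical Ornstein–Uhlenbeck process. A quick sketch:

```python
from math import gamma

def v_inf(H, sigma, lam):
    """Stationary variance of the fOU process, formula (3.51)."""
    return H * sigma ** 2 * gamma(2 * H) / lam ** (2 * H)

# For H = 1/2 this reduces to the classical OU value sigma^2 / (2 lam):
print(v_inf(0.5, 1.0, 2.0))  # 0.25
```

Note that V_{H,∞} depends on H both through the factor HΓ(2H) and through the power λ^{2H}, so the long-memory effect of H interacts with the mean-reversion speed λ.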

We refer to Exercise 3.16 for details. Next, let us provide a formula for the covariance of U^H.

Proposition 3.40. Let H ∈ (0, 1) and σ, λ > 0, and let U^H be the fOU process defined in (3.49). Then the autocovariance function is given by

    ρ_H(s) := Cov(U_t^H, U_{t+s}^H) = V_{H,∞} e^{−λs} + σ^2 H(2H − 1) e^{−λs} ∫_0^s ∫_{−∞}^0 e^{λ(u+v)} |u − v|^{2H−2} du dv.    (3.52)

Furthermore,

    ρ_H(s) ∼ [σ^2 H(2H − 1) / λ^2] s^{2H−2},   s → +∞.    (3.53)

Proof. Since U_t^H is stationary, we can write ρ_H(s) = Cov(U_0^H, U_s^H). Furthermore, note that

    U_s^H = e^{−λs} U_0^H + σe^{−λs} ∫_0^s e^{λu} dB_u^H.    (3.54)

This means that

    ρ_H(s) = e^{−λs} Cov(U_0^H, U_0^H) + σ^2 e^{−λs} E[(∫_{−∞}^0 e^{λu} dB_u^H)(∫_0^s e^{λu} dB_u^H)]
        = e^{−λs} V_{H,∞} + σ^2 H(2H − 1) e^{−λs} ∫_0^s ∫_{−∞}^0 e^{λ(u+v)} |u − v|^{2H−2} du dv,

where we used the fact that (−∞, 0] and [0, s) are nonoverlapping and Exercise 3.14. Furthermore, according to Exercise 3.18,

    e^{−λs} ∫_0^s ∫_{−∞}^0 e^{λ(u+v)} |u − v|^{2H−2} du dv ∼ s^{2H−2} / λ^2,   s → +∞,

whence (3.53) follows. A more precise asymptotics of ρ_H is given in [51, Theorem 2.3].

Furthermore, according to Exercise 3.17, the function f_t(s) = e^{λs} 1_{(−∞,t]}(s) belongs to ℱ_H for all t > 0. Therefore, for all s > 0,

    ρ_H(s) = σ^2 e^{−λs} (f_0, f_s)_{ℱ_H} = [Hσ^2 sin(πH)Γ(2H) / π] ∫_ℝ [e^{−izs} / (λ^2 + z^2)] |z|^{1−2H} dz.    (3.55)

Consider the case in which ξ is a constant, i. e., ξ = x ∈ ℝ. In this case, U^{H,x} is a nonstationary Gaussian process. Obviously, E[U_t^{H,x}] = xe^{−λt}. Concerning the covariance function, we can relate U^{H,x} to U^H via (3.54).

Proposition 3.41. Let H ∈ (0, 1), σ, λ > 0, and x ∈ ℝ, and let U^{H,x} be the fOU process defined in (3.45) and starting from x. Then its covariance function equals

    R_H(t, s) := Cov[U_t^{H,x}, U_s^{H,x}] = ρ_H(|t − s|) − e^{−λt} ρ_H(s) − e^{−λs} ρ_H(t) + e^{−λ(t+s)} V_{H,∞},    (3.56)

where V_{H,∞} is defined in (3.51). Furthermore, for fixed t > 0,

    R_H(t, t + s) ∼ [σ^2 H(2H − 1) / λ^2] (1 − e^{−λt}) s^{2H−2},   s → +∞.    (3.57)

Proof. By (3.54) we know that for all t > 0, U_t^{H,0} = U_t^H − e^{−λt} U_0^H. Therefore

    R_H(t, s) = E[U_t^{H,0} U_s^{H,0}] = E[(U_t^H − e^{−λt} U_0^H)(U_s^H − e^{−λs} U_0^H)]
        = E[U_t^H U_s^H] − e^{−λt} E[U_0^H U_s^H] − e^{−λs} E[U_0^H U_t^H] + e^{−λ(t+s)} E[(U_0^H)^2]
        = ρ_H(|t − s|) − e^{−λt} ρ_H(s) − e^{−λs} ρ_H(t) + e^{−λ(t+s)} V_{H,∞}.

Now let s, t > 0. Consider

    R_H(t, t + s) = ρ_H(s) − e^{−λt} ρ_H(t + s) − e^{−λ(t+s)} ρ_H(t) + e^{−λ(2t+s)} V_{H,∞}.

According to (3.53), ρ_H(s) ∼ [σ^2 H(2H − 1)/λ^2] s^{2H−2} as s → +∞, and the same is true for ρ_H(t + s), whence (3.57) follows.

Remark 3.42. For fixed t > 0, we also have

    R_H(t, s) ∼ [σ^2 H(2H − 1) / λ^2] (1 − e^{−λt}) s^{2H−2},   s → +∞.    (3.58)

This is clear once we note that (s − t)^{2H−2} ∼ s^{2H−2} as s → +∞. A more precise asymptotics has been provided in [51, Corollary 2.5]. Furthermore, in [170, Proposition 5.1] the following form of the covariance function has been provided for t ≥ s ≥ 0:

    R_H(t, s) = (Hσ^2 / 2) (e^{λ(t−s)} ∫_{t−s}^t e^{−λu} u^{2H−1} du − e^{λ(s−t)} ∫_0^{t−s} e^{λu} u^{2H−1} du − e^{−λ(t+s)} ∫_s^t e^{λu} u^{2H−1} du + e^{λ(s−t)} ∫_0^s e^{−λu} u^{2H−1} du + 2e^{−λ(t+s)} ∫_0^t e^{λu} u^{2H−1} du).    (3.59)

If we use the representation of ρ_H given in (3.55), we can also rewrite (3.56) for t, s ≥ 0 as follows:

    R_H(t, s) = [Hσ^2 sin(πH)Γ(2H) / π] ∫_ℝ [(e^{−izt} − e^{−λt})(e^{izs} − e^{−λs}) / (λ^2 + z^2)] |z|^{1−2H} dz.    (3.60)

The extended form (3.59) can be used to prove some further properties of the covariance function R_H. Before proceeding, let us consider the extremal cases H = 0, 1. For H = 1, define

    U_t^{1,x} := xe^{−λt} + (σζ/λ)(1 − e^{−λt}),    (3.61)

where ζ ∼ 𝒩(0, 1) is such that B_t^1 = ζt. Then the covariance function of U^{1,x} equals

    R_1(t, s) = (σ^2/λ^2)(1 − e^{−λt})(1 − e^{−λs}),   t, s ≥ 0.

The case H = 0 is quite different. Recalling that B_t^0 := (η_t − η_0)/√2, where η_t are independent 𝒩(0, 1) random variables, we set

    U_t^{0,x} := xe^{−λt} − σλe^{−λt} ∫_0^t e^{λu} B_u^0 du + σB_t^0,   t ≥ 0.    (3.62)

Its covariance function has the form

    R_0(t, s) =
        0,                         min{s, t} = 0,
        (σ^2/2) e^{−λ(t+s)},       s ≠ t, min{s, t} > 0,
        (σ^2/2)(1 + e^{−2λt}),     s = t > 0.

Clearly, R_0(⋅, ⋅) is discontinuous at points (t, s) with either s = 0 or t = 0 and at the diagonal s = t. We consider the function (H, t, s) ∈ [0, 1] × (ℝ_0^+)^2 ↦ R_H(t, s) ∈ ℝ, for which we have the following result.

Theorem 3.43. The function (H, t, s) ∈ [0, 1] × (ℝ_0^+)^2 ↦ R_H(t, s) ∈ ℝ is continuous on ([0, 1] × (ℝ_0^+)^2) \ Disc(R_⋅(⋅, ⋅)), where Disc(R_⋅(⋅, ⋅)) = {(0, t, s) : min{s, t} = 0 or s = t}. Furthermore, for all s, t ≥ 0, the function H ∈ [0, 1] ↦ R_H(s, t) ∈ ℝ is continuous.

The proof is given in [13, Theorem 1]. It is a straightforward (but cumbersome) application of the dominated convergence theorem, and hence we omit it. Furthermore, it clearly implies that if we consider a sequence (H_n)_{n∈ℕ} with H_n ∈ [0, 1] such that H_n → H^⋆ ∈ [0, 1], then U^{H_n} →^d U^{H^⋆} in the sense of finite-dimensional distributions.

Now let us study the variance function V_H(t) = Var[U_t^{H,x}], t ≥ 0. We can also consider the variance function in the extremal cases H = 0, 1. Precisely, denoting V_1(t) = Var[U_t^{1,x}], where U_t^{1,x} is defined in (3.61), we get

    V_1(t) = (σ^2/λ^2)(1 − e^{−λt})^2.

Clearly, V_1(t) → σ^2/λ^2 as t → +∞. This means that U_t^{1,x} →^d Z, where Z ∼ 𝒩(0, σ^2/λ^2), as t → ∞. In the case H = 0, we set V_0(t) = Var[U_t^{0,x}], where U_t^{0,x} is defined in (3.62), to get

    V_0(t) =
        0,                         t = 0,
        (σ^2/2)(1 + e^{−2λt}),     t > 0.

It is clear that, as t → ∞, V_0(t) → σ^2/2, and thus U_t^{0,x} →^d Z, where Z ∼ 𝒩(0, σ^2/2). In general, as a direct consequence of (3.56), we have the following result.

Proposition 3.44. Let H ∈ (0, 1), σ, λ > 0, and x ∈ ℝ, and let U^{H,x} be the fOU process defined in (3.45). Then the variance function is given by

    V_H(t) = (1 − e^{−2λt}) V_{H,∞} − 2e^{−λt} ρ_H(t).    (3.63)

In particular,

    lim_{t→+∞} V_H(t) = V_{H,∞}    (3.64)

uniformly with respect to H ∈ [0, 1]. This means that U_t^{H,x} →^d Z ∼ 𝒩(0, V_{H,∞}) as t → +∞.

Proof. Equality (3.63) follows from (3.56) once we set s = t and recall that ρ_H(0) = V_{H,∞}. To show that the limit (3.64) holds uniformly, let us first rewrite

    V_{H,∞} = [σ^2 / (2λ^{2H})] Γ(2H + 1).

The function H ∈ [0, 1] ↦ V_{H,∞} ∈ ℝ is positive and continuous. Furthermore, we get from the Cauchy–Schwarz inequality that ρ_H(t) ≤ V_{H,∞} for all t ≥ 0. Hence

    |V_{H,∞} − V_H(t)| ≤ (e^{−2λt} + 2e^{−λt}) V_{H,∞} ≤ (e^{−2λt} + 2e^{−λt}) max_{H∈[0,1]} V_{H,∞}.

Taking the supremum for H ∈ [0, 1] and sending t → ∞, we conclude the proof.

Again, we can study the continuity of the function (H, t) ∈ [0, 1] × ℝ_0^+ ↦ V_H(t) ∈ ℝ. The following result is a direct consequence of Theorem 3.43 and the uniform limit in (3.64).

Corollary 3.45. The function (H, t) ∈ [0, 1] × ℝ_0^+ ↦ V_H(t) ∈ ℝ is continuous in [0, 1] × ℝ_0^+ \ {(0, 0)} and is uniformly continuous in [0, 1] × [t_0, +∞) for all t_0 > 0 and in [H_0, 1] × [0, +∞) for each H_0 ∈ (0, 1]. Furthermore, for any fixed t ≥ 0, the map H ∈ [0, 1] ↦ V_H(t) ∈ ℝ is continuous.

Remark 3.46. The uniform convergence as H → 1/2 has been established in [17, Proposition 3.4].

Denote the one-dimensional distribution of U_t^{H,x} by

    p_{fOU}^{H,x}(t, y) = [1 / √(2πV_H(t))] exp(−(y − xe^{−λt})^2 / (2V_H(t))).

We already proved that

    lim_{H→H^⋆} p_{fOU}^{H,x}(t, y) = p_{fOU}^{H^⋆,x}(t, y),   (t, y) ∈ ℝ_0^+ × ℝ.    (3.65)

This convergence can be slightly improved. If we denote by p(t, y) the density of B_t as in (1.193), then p_{fOU}^{H,x}(t, y) = p(V_H(t), y − xe^{−λt}). We can also verify by direct calculations that p is Lipschitz on [0, T] × K for all compact sets K ⊂ ℝ \ {0} and all T > 0. Combining this Lipschitz property with the uniform continuity in Corollary 3.45, we get the following result.

Proposition 3.47. Let (H_n)_{n∈ℕ} be a sequence in [0, 1] such that H_n → H^⋆ ∈ [0, 1].
(i) If H^⋆ > 0, then for every compact set K ⊂ ℝ \ {0}, we have

    lim_{n→+∞} p_{fOU}^{H_n,x}(t, y) = p_{fOU}^{H^⋆,x}(t, y)   uniformly for (t, y) ∈ ℝ_0^+ × K.

(ii) If H^⋆ = 0, then for all compact sets K ⊂ ℝ \ {0} and all t_0 > 0, we have

    lim_{n→+∞} p_{fOU}^{H_n,x}(t, y) = p_{fOU}^{0,x}(t, y)   uniformly for (t, y) ∈ [t_0, +∞) × K.

Remark 3.48. The previous proposition has been proved in [17, Theorem 3.5] in the particular case H^⋆ = 1/2 and H_n > 1/2. The same strategy applies for all H^⋆ ∈ (0, 1]. Furthermore, if H^⋆ = 0, then we can still use the same strategy as soon as t is separated from 0: this additional condition is due to the fact that (H, t) ↦ V_H(t) is not continuous in (0, 0).

We can deduce from (3.59) the following formula for V_H as H ∈ (0, 1):

    V_H(t) = Hσ^2 (∫_0^t e^{−λu} u^{2H−1} du + e^{−2λt} ∫_0^t e^{λu} u^{2H−1} du).

Thus we have the following asymptotic behavior (see Exercise 3.19):

    V_H(t) ∼ σ^2 t^{2H}   as t ↓ 0.

VH′ (t) = 2Hσ 2 e−λt (t 2H−1 − λe−λt ∫ eλu u2H−1 du) .

(3.66)

0

Therefore VH′ (t) ∼ 2Hσ 2 t 2H−1

as t ↓ 0.

(3.67)

Concerning the behavior at infinity, we can deduce with the help of Exercise 3.15 that VH′ (t) ∼

2H(2H − 1) 2 2H−2 −λt σ t e λ

as t → +∞.

(3.68)

3.6 Fractional Ornstein–Uhlenbeck process with stochastic forcing

� 223

In the case H > 21 , we can also provide an alternative representation for VH : t t

H(2H − 1) VH (t) = ∫ ∫ e−λ(u+v) |u − v|2H−2 dudv. λ2H 0 0

Thanks to this formula, we see that for H > 21 , the function VH is strictly increasing. Also, from (3.60) we get 󵄨󵄨 izt −λt 󵄨󵄨2 Hσ 2 sin(πH)Γ(2H) 󵄨󵄨󵄨e − e 󵄨󵄨󵄨 1−2H VH (t) = |z| dz. ∫ π λ2 + z2 ℝ

Finally, let us formulate the following proposition. Proposition 3.49. Let H ∈ (0, 1), x ∈ ℝ, and λ, σ > 0, let U H,x be the fOU process defined by (3.45), and let U H be the stationary fOU process defined by (3.49), and suppose these processes are based on the same fBm BH . Then for all t ≥ 0, UtH,x



UtH

0

= (x − σ ∫ eλs dBsH ) e−λt .

(3.69)

−∞

Proof. The process δUtH,x = UtH,x − UtH , t ≥ 0, is the solution of the equation δUtH,x

0

= (x − σ ∫ e

λs

−∞

dBsH )

t

− λ ∫ δUsH,x ds, 0

which for fixed ω ∈ Ω is in fact a linear ordinary differential equation. Solving it for fixed ω ∈ Ω, we get the desired result. In particular, (3.69) leads to the equality 2

E [(UtH,x − UtH ) ] = e−2λt (x 2 + VH,∞ ) ,

t ≥ 0.

(3.70)

This equality can be used to estimate the rate of weak (possibly, nonuniform) ergodicity of UtH,x . We will not go into details on this, but we refer to [125, 250] for further details on weak ergodic theory.

3.6 Fractional Ornstein–Uhlenbeck process with stochastic forcing

Now we would like to introduce an additional stochastic drift to the Langevin equation (3.44): the latter can be used to describe some injected external force in the model, as done, for instance, in [13], and therefore sometimes it is called a forcing term or stochastic forcing. As before, let H ∈ (0, 1) and λ, σ > 0, let B^H be an fBm with Hurst index H, and let ξ be a random variable. As an additional drift, we consider a new stochastic process I = {I_t, t ≥ 0} such that P(I ∈ L^1(0, T)) = 1 for all T > 0, i. e., locally integrable in ℝ_0^+ with probability 1.

Definition 3.50. We define the fOU with stochastic forcing I as the pathwise unique solution of the equation

    U_t^{H,ξ,I} = ξ − λ ∫_0^t U_s^{H,ξ,I} ds + ∫_0^t I_s ds + σB_t^H,   t ≥ 0.    (3.71)

With the same arguments as in Proposition 3.36, we can establish the following result.

Proposition 3.51. Let H ∈ (0, 1) and λ, σ > 0, let B^H be an fBm with Hurst index H, let ξ be a random variable, and let I = {I_t, t ≥ 0} be a stochastic process such that P(I ∈ L^1(0, T)) = 1 for all T > 0. Then (3.71) admits a unique pathwise solution

    U_t^{H,ξ,I} = e^{−λt} (ξ + ∫_0^t e^{λs} I_s ds + σ ∫_0^t e^{λs} dB_s^H),   t ≥ 0.    (3.72)

As for the fOU process, it is clear that U^{H,ξ,I} is adapted to the filtration {ℱ_t}_{t≥0}, where each σ-algebra ℱ_t is generated by ξ, {B_s^H, s ≤ t}, and {I_s, s ≤ t}. If ∫_{−∞}^0 e^{λs} I_s ds is finite, then we can construct the process

    U_t^{H,I} = e^{−λt} (∫_{−∞}^t e^{λs} I_s ds + σ ∫_{−∞}^t e^{λs} dB_s^H),   t ≥ 0.    (3.73)

Unlike the standard fOU, this process is not stationary. However, if I is degenerate (i. e., it is a deterministic function) and periodic, then we can extract stationary ergodic sequences from (3.73) by carefully selecting the sampling times. This has been done, for instance, in [56].

Denote by L_{loc}^1(Ω × ℝ_0^+) the space of stochastic processes I ∈ L^1(Ω × [0, T]) for all T > 0. Now we can calculate the expectation of the process U^{H,ξ,I} applying the expectation operator to (3.72).

Proposition 3.52. Let H ∈ (0, 1), λ, σ > 0, and ξ ∈ L^1(Ω), let I be a stochastic process such that I ∈ L_{loc}^1(Ω × ℝ_0^+), and let U^{H,ξ,I} be defined in (3.72). Then

    E[U_t^{H,ξ,I}] = e^{−λt} (E[ξ] + ∫_0^t e^{λs} E[I_s] ds).
Let us underline that, due to the presence of the forcing term I, U^{H,ξ,I} is not necessarily a Gaussian process, even if ξ is deterministic. The following statement concerns the limit distribution of U^{H,ξ,I}.

Proposition 3.53. Let H ∈ (0, 1), λ, σ > 0, x ∈ ℝ, and I ∈ 𝒞(ℝ_0^+) a. s. Suppose that

    lim_{t→+∞} I_t = I_∞ a. s.,

where I_∞ < ∞ is a constant, and let U^{H,x,I} be defined in (3.72). Then U_t^{H,x,I} →^d I_∞/λ + Z as t → +∞, where Z ∼ 𝒩(0, V_{H,∞}).

Proof. Note that

    U_t^{H,x,I} = e^{−λt} ∫_0^t e^{λs} I_s ds + U_t^{H,x}.    (3.74)

Now let us recall that U_t^{H,x} →^d Z for Z ∼ 𝒩(0, V_{H,∞}) by Proposition 3.44. Moreover, under our assumptions, e^{−λt} ∫_0^t e^{λs} I_s ds → I_∞/λ a. s. as t → +∞ (by l'Hôpital's rule), which is a constant, whence the proof follows by [29, Theorem 3.9].

Remark 3.54. The same result holds if I_∞ < ∞ a. s. (even if it is not a constant), provided that I and the fBm B^H are independent, by using [29, Example 3.2].

Now let us study the covariance function of the fOU with drift. Assume that I_t ∈ L^2(Ω) for t ≥ 0 and denote the covariance function by c_I(s, t) = Cov(I_s, I_t). For simplicity, we will always assume that I and B^H are uncorrelated.

Proposition 3.55. Let H ∈ (0, 1), λ, σ > 0, and x ∈ ℝ, let B^H be an fBm with Hurst index H, let U^{H,x,I} be defined in (3.72), and let c_I ∈ L^1([0, T]^2) for all T > 0. Then

    C_H(t, s) := Cov[U_t^{H,x,I}, U_s^{H,x,I}] = e^{−λ(t+s)} ∫_0^t ∫_0^s e^{λ(u+v)} c_I(u, v) du dv + R_H(t, s).

Proof. Rewrite U_t^{H,x,I} as in (3.74). Since I and B^H are uncorrelated, so are I and U^{H,x}. Without loss of generality, assume that E[I_t] ≡ 0. Then

    C_H(t, s) = e^{−λ(t+s)} E[∫_0^t ∫_0^s e^{λ(u+v)} I_u I_v du dv] + Cov[U_t^{H,x}, U_s^{H,x}] = e^{−λ(t+s)} ∫_0^t ∫_0^s e^{λ(u+v)} c_I(u, v) du dv + R_H(t, s),

where the last equality follows by Fubini's theorem.
226 � 3 Fractional Brownian motion and its environment Now we want to provide some conditions under which lims→+∞ CH (t, s) = 0 for all t > 0. Clearly, we already know that lims→+∞ RH (t, s) = 0, and hence we only need to study the term depending on I. Proposition 3.56. Let the conditions of Proposition 3.55 be fulfilled. Assume that for some t > 0, one of the three following assumptions holds: (𝒜1 ) There exists a constant M > 0 such that for all s > 0, 󵄨󵄨 s t 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 λ(u+v) cI (u, v)dudv󵄨󵄨󵄨 ≤ M. 󵄨󵄨∫ ∫ e 󵄨󵄨 󵄨󵄨 󵄨󵄨 0 0 󵄨󵄨 (𝒜2 ) The following relations hold: s t

t

lim ∫ ∫ eλ(u+v) cI (u, v)dudv = ±∞,

s→+∞

lim ∫ eλu cI (u, s)du = 0.

and

s→+∞

0 0

0

(𝒜3 ) The following relations hold: ∞ t

󵄨 󵄨 ∫ ∫ eλ(u+v) 󵄨󵄨󵄨cI (u, v)󵄨󵄨󵄨dudv = +∞,

t

and

0 0

󵄨 󵄨 lim ∫ eλu 󵄨󵄨󵄨cI (u, s)󵄨󵄨󵄨du = 0.

s→+∞

0

Then lim C (t, s) s→+∞ H

= 0.

(3.75)

Proof. We only need to show that s t

lim e

s→+∞

−λs

∫ ∫ eλ(u+v) cI (u, v)dudv = 0.

(3.76)

0 0

This is clear if Assumption (𝒜1 ) holds. If, instead, Assumption (𝒜2 ) holds, we can use l’Hôpital rule to get s

lim

s→+∞

t

∫0 ∫0 eλ(u+v) cI (u, v)dudv eλs

t

1 = lim ∫ eλu cI (u, s)du = 0. λ s→+∞ 0

Assumption (𝒜_3) is processed similarly.

Clearly, the covariance c_I of I can change the rate of decay of the covariance C_H of the whole process U^{H,x,I}, because it competes with the asymptotic behavior of R_H. If for fixed t > 0, the term

    e^{−λ(t+s)} ∫_0^t ∫_0^s e^{λ(u+v)} c_I(u, v) du dv

decreases faster than R_H(t, s) as s → +∞, then C_H and R_H admit the same asymptotics, as is emphasized by the following proposition.

Proposition 3.57. Let the conditions of Proposition 3.55 be fulfilled, and assume that either Assumption (𝒜_1) of Proposition 3.56 or the following assumption holds for some t > 0:
(𝒜_4) The following relations hold:

    lim_{s→+∞} ∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv = ±∞,   and   lim_{s→+∞} s^{2−2H} ∫_0^t e^{λu} c_I(u, s) du = 0.

Then

    C_H(t, s) ∼ [σ^2 H(2H − 1)/λ^2] (1 − e^{−λt}) s^{2H−2} → 0,   s → +∞.    (3.77)

Proof. If Assumption (𝒜_1) holds, then

    e^{−λ(t+s)} ∫_0^t ∫_0^s e^{λ(u+v)} c_I(u, v) du dv = O(e^{−λs}),   s → +∞,

with the constant depending on t > 0. Combining this with (3.58), we obtain (3.77). If Assumption (𝒜_4) holds, then using l'Hôpital's rule, we get that

    lim_{s→+∞} [∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv / (s^{2H−2} e^{λ(t+s)})] = lim_{s→+∞} [s^{2−2H} ∫_0^t e^{λu} c_I(u, s) du / (((2H − 2)s^{−1} + λ) e^{λt})] = 0.

Combining the latter relation with (3.58), we again obtain (3.77).

Remark 3.58. Fix t > 0 and assume that the following relations hold:

    lim_{s→+∞} ∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv = ±∞,   lim_{s→+∞} s^{2−2H} c_I(u, s) = 0 for every u ∈ [0, t],

and there exists a function k ∈ L^1[0, t] such that

    s^{2−2H} |c_I(u, s)| ≤ k(u)   for a. a. u ∈ [0, t].

Then we are in the conditions of Assumption (𝒜_4), and therefore (3.77) holds.
If for fixed t > 0, the term e^{−λ(t+s)} ∫_0^t ∫_0^s e^{λ(u+v)} c_I(u, v) du dv has a slower decay than R_H(t, s) as s → +∞, then C_H is also slower than R_H, since its asymptotics is established by the term involving c_I. The following proposition gives a sufficient (but clearly not necessary) condition for such behavior.

Proposition 3.59. Let the conditions of Proposition 3.55 be fulfilled, and assume that for some t, α > 0, the following relations hold:

    lim_{s→+∞} ∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv = ±∞,   and   lim_{s→+∞} s^α ∫_0^t e^{λu} c_I(u, s) du = ℓ ∈ ℝ.    (3.78)

(i) If α > 2 − 2H, then (3.77) holds.
(ii) If α ∈ (0, 2 − 2H) and ℓ ≠ 0, then

    C_H(t, s) ∼ (ℓ/λ) e^{−λt} s^{−α},   s → +∞.    (3.79)

Proof. Item (i) is covered by Proposition 3.57, since in this case (3.78) implies (𝒜_4). Let us prove (ii). Applying l'Hôpital's rule, we get that

    lim_{s→+∞} [∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv / (s^{−α} e^{λ(t+s)})] = lim_{s→+∞} [s^α ∫_0^t e^{λu} c_I(u, s) du / ((−αs^{−1} + λ) e^{λt})] = ℓ / (λe^{λt}),

whence

    e^{−λ(t+s)} ∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv ∼ (ℓ/λ) e^{−λt} s^{−α},   s → +∞.

Combining the last relation with (3.58), we obtain (3.79).

Remark 3.60. Fix t > 0 and assume that the following relations hold:

    lim_{s→+∞} ∫_0^s ∫_0^t e^{λ(u+v)} c_I(u, v) du dv = ±∞,   lim_{s→+∞} s^α c_I(u, s) = ℓ̄ for all u ∈ [0, t],    (3.80)

and there exists a function k ∈ L^1[0, t] such that

    s^α |c_I(u, s)| ≤ k(u)   for a. a. u ∈ [0, t].

Then, by a simple application of the dominated convergence theorem, (3.78) is verified with ℓ = ℓ̄(e^{λt} − 1)/λ.
3.7 Reflected fractional Brownian motion

Now we consider a process constructed starting from the fBm via a reflection procedure. Reflected stochastic processes were first introduced in [236, 237] with a procedure that is developed explicitly for solutions of stochastic differential equations driven by the Bm and thus cannot be directly applied to the fBm. In the specific case of the Bm, a further characterization is provided in [155], whereas in [243] the same strategy is shown to hold for any càdlàg function. To handle the fBm, we use the latter approach. Hence let us first give some preliminary definitions.

Definition 3.61. For a function f ∈ 𝒞(ℝ_0^+) with f(0) ≥ 0, we say that the couple (g, l) ∈ (𝒞(ℝ_0^+))^2 is a solution of the Skorokhod reflection problem for the path f if
(i) g(t) = f(t) + l(t) for all t ≥ 0;
(ii) g(t) ≥ 0 for all t ≥ 0;
(iii) l is a nondecreasing function such that l(0) = 0 and

    ∫_0^{+∞} g(t) dl(t) = 0.    (3.81)

Now we present the result from [243].

Theorem 3.62. For every f ∈ 𝒞(ℝ_0^+) with f(0) ≥ 0, there exists a unique solution (g, l) ∈ (𝒞(ℝ_0^+))^2 of the Skorokhod reflection problem for the path f. In particular, for all t ≥ 0,

    l(t) = max_{0≤s≤t} max{0, −f(s)},   g(t) = f(t) + l(t).    (3.82)

Proof. To prove the existence, we only have to verify that (3.82) is a solution of the Skorokhod problem for f . Indeed, l is by definition nondecreasing with l(0) = 0, whereas g(t) ≥ 0 for all t ≥ 0. Let us check that l is continuous. Indeed, the function x ∈ ℝ 󳨃→ max{0, x} is a continuous function, and thus so it is f− : s ∈ ℝ+0 󳨃→ max{0, −f (s)}. Now fix any t > 0. Since l is nondecreasing, there exist lims→t− l(s) := l(t−) ∈ ℝ and lims→t+ l(s) := l(t+) ∈ ℝ. Then for all s ∈ [0, t), we have f− (s) ≤ l(s) ≤ l(t−) ≤ l(t). Taking the supremum over [0, t), we get by the continuity of f− that l(t) = sup f− (s) = sup f− (s) ≤ l(t−) ≤ l(t), s∈[0,t]

s∈[0,t)

and hence l(t−) = l(t). Next, assume that l(t+) > l(t). Then for all s > t, there exists ξ(s) ∈ [t, s] such that l(s) = f− (ξ(s)). Taking the limit as s → t + , we get by the continuity

230 � 3 Fractional Brownian motion and its environment of f− that l(t+) = f− (t) > l(t) ≥ f− (t), a contradiction. Hence l(t+) = l(t), which means that l and hence g are continuous. To prove (3.81), let us fix T > 0 and consider any partition Π = {0 = t0 < t1 < ⋅ ⋅ ⋅ < tN = T} of [0, T]. On each interval [tn , tn+1 ], for n = 0, . . . , N − 1, let ξn ∈ [tn , tn+1 ] be such that g(ξn ) = mint∈[tn ,tn+1 ] g(t). Consider N−1

SΠ,ξ [g; l] := ∑ g(ξn )(l(tn+1 ) − l(tn )). n=0

Assume that l(tn+1 ) − l(tn ) > 0. Then max max{0, −f (s)} > max max{0, −f (s)} ≥ 0,

0≤s≤tn+1

0≤s≤tn

which also implies that max max{0, −f (s)} = max max{0, −f (s)}. tn ≤s≤tn+1

0≤s≤tn+1

In particular, there exists a point τ ∈ [tn , tn+1 ] such that max max{0, −f (s)} = max{0, −f (τ)} = max max{0, −f (s)} = l(τ).

tn ≤s≤tn+1

0≤s≤τ

Furthermore, since max{0, −f (τ)} > 0, it follows that 0 < l(τ) = max max{0, −f (s)} = −f (τ). tn ≤s≤τ

Thus at point τ, g(τ) = f (τ) + l(τ) = f (τ) − f (τ) = 0. Consequently, g(ξn ) = mint∈[tn ,tn+1 ] g(t) = 0. With this observation, SΠ,ξ [g; l] = 0, and then taking the limit as maxn (tn − tn−1 ) → 0, we obtain that T

∫ g(t)dl(t) = 0. 0

The fact that T > 0 is arbitrary proves (3.81). Now let us prove the uniqueness. Assume that (g, l) and (g̃ , ̃l) are two solutions. Then for all t ≥ 0, g(t) − g̃ (t) = f (t) + l(t) − f (t) − ̃l(t) = l(t) − ̃l(t). Put G(t) = g(t) − g̃ (t). This function is continuous. Assume that G(t) > 0 for some t > 0. Define τ(t) = sup{s ∈ [0, t) : G(s) = 0} (where the set {s ∈ [0, t) : G(s) = 0} is not

3.7 Reflected fractional Brownian motion

� 231

empty since G(0) = 0) and note that, due to the continuity of G, τ(t) < t. This implies that G(s) > 0 for all s ∈ (τ(t), t). In particular, g(s) > g̃ (s) ≥ 0 for all s ∈ (τ(t), t), and thus l(s) = l(τ(t)) for s ∈ (τ(t), t). Furthermore, G(s) = l(s) − ̃l(s), where ̃l is nondecreasing. Therefore G(s) is nonincreasing on (τ(t), t). This implies that for all s ∈ (τ(t), t), we have the relations 0 < G(s) ≤ G(τ(t)) ≤ 0, which is a contradiction. Hence G(t) ≤ 0 for all t ≥ 0. Arguing symmetrically on g̃ − g, we obtain that G(t) = 0 for all t ≥ 0, and the proof follows. Due to the previous result, we can introduce the operators Φ, Ψ : 𝒞 (ℝ+0 ) → 𝒞 (ℝ+0 )

(3.83)

such that for every f ∈ 𝒞 (ℝ+0 ), the couple (Φ(f ), Ψ(f )) ∈ (𝒞 (ℝ+0 ))2 is the unique solution of the Skorokhod reflection problem. Moreover, thanks to (3.82), we know that {Φ(f )(t) : t ∈ [0, T]} and {Ψ(f )(t) : t ∈ [0, T]} are uniquely determined by {f (t) : t ∈ [0, T]}. This means that we can consider the restrictions Φ, Ψ : 𝒞 [0, T] → 𝒞 [0, T]. For such restrictions, we have the following result. Theorem 3.63. Both Φ, Ψ : 𝒞 [0, T] → 𝒞 [0, T] are Lipschitz in the following sense: for all f1 , f2 ∈ 𝒞 [0, T], ‖Ψ(f1 ) − Ψ(f2 )‖𝒞[0,T] ≤ ‖f1 − f2 ‖𝒞[0,T] ,

‖Φ(f1 ) − Φ(f2 )‖𝒞[0,T] ≤ 2‖f1 − f2 ‖𝒞[0,T] .

(3.84) (3.85)

Proof. We only need to prove (3.84), since (3.85) follows from the relation Φ(f ) = f +Ψ(f ) and the triangular inequality. To establish (3.84), note that for s ≥ 0, 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨max{0, −f1 (s)} − max{0, −f2 (s)}󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨f1 (s) − f2 (s)󵄨󵄨󵄨.

(3.86)

Further note that for all t ∈ [0, T], 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 󵄨󵄨Ψ(f1 )(t) − Ψ(f2 )(t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨󵄨 max max{0, −f1 (s)} − max max{0, −f2 (s)}󵄨󵄨󵄨󵄨 s∈[0,t] s∈[0,t] 󵄨 󵄨 󵄨 󵄨 ≤ max 󵄨󵄨󵄨max{0, −f1 (s)} − max{0, −f2 (s)}󵄨󵄨󵄨 s∈[0,t] 󵄨 󵄨 ≤ max 󵄨󵄨󵄨f1 (s) − f2 (s)󵄨󵄨󵄨 ≤ ‖f1 − f2 ‖𝒞[0,T] . s∈[0,t] Finally, taking the maximum over [0, T], we complete the proof. The maps Φ and Ψ are respectively called the reflection and regulator maps, whereas for f ∈ 𝒞 [0, T], Φ(f ) is called the reflected path (or regulated path), and Ψ(f ) is the regulator. Now we are ready to introduce the reflected fBm.
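Before doing so, we note that the explicit formulas (3.82) make the reflection and regulator maps straightforward to evaluate on a sampled path, since the regulator is just a running maximum. The following minimal Python sketch (illustrative only, not taken from the book) checks properties (i)–(iii) of Definition 3.61 on a toy path.

```python
from itertools import accumulate

def skorokhod_reflection(f):
    """Discrete version of (3.82): given samples f[0..n] of a path with
    f[0] >= 0, return (g, l), where l(t) = max_{0<=s<=t} max(0, -f(s))
    and g(t) = f(t) + l(t)."""
    l = list(accumulate((max(0.0, -x) for x in f), max))
    g = [fi + li for fi, li in zip(f, l)]
    return g, l

# A toy path that dips below zero and comes back up.
f = [0.0, 0.5, -0.3, -1.0, 0.2, -0.4]
g, l = skorokhod_reflection(f)
assert all(x >= 0 for x in g)                 # (ii): nonnegativity of g
assert all(y >= x for x, y in zip(l, l[1:]))  # (iii): l is nondecreasing
assert l[0] == 0.0
print(l)  # [0.0, 0.0, 0.3, 1.0, 1.0, 1.0]
```

Note that l increases only at indices where g vanishes, which is the discrete counterpart of the complementarity condition (3.81).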

Definition 3.64. For H ∈ (0, 1) and b ∈ ℝ, consider the process B^{H,b} = {B_t^{H,b}, t ≥ 0} defined as

$$B_t^{H,b}=B_t^H+bt,\tag{3.87}$$

where B^H is an fBm with Hurst index H. Such a process is called an fBm with drift b. The reflected fBm with drift is the process ^R B^{H,b} = {^R B_t^{H,b}, t ≥ 0} defined as ^R B^{H,b} = Φ(B^{H,b}).

Such a process has found several applications in queuing theory (see, for example, [255, Section 8.7] and references therein). First of all, we determine a property of the paths of ^R B^{H,b}.

Proposition 3.65. For all H ∈ (0, 1) and b ∈ ℝ, the process ^R B^{H,b} is a.s. locally β-Hölder continuous for all β < H.

Proof. Since B_0^{H,b} = 0 a.s., we can claim that

$$\Psi\bigl(B^{H,b}\bigr)(t)=\max_{0\le s\le t}\bigl(-B_s^H-bs\bigr).$$

Fix T > 0. Then for all t₁, t₂ ∈ [0, T] such that t₁ < t₂,

$$\Psi\bigl(B^{H,b}\bigr)(t_2)-\Psi\bigl(B^{H,b}\bigr)(t_1)=\max_{0\le s\le t_2}\bigl(-B_s^H-bs\bigr)-\max_{0\le s\le t_1}\bigl(-B_s^H-bs\bigr)\le\max_{t_1\le s\le t_2}\bigl(-B_s^H-bs\bigr)-\max_{0\le s\le t_1}\bigl(-B_s^H-bs\bigr)\le\max_{t_1\le s\le t_2}\bigl(-B_s^H-bs\bigr)+B_{t_1}^H+bt_1\le\max_{t_1\le s\le u\le t_2}\bigl|B_s^H-B_u^H+b(s-u)\bigr|.$$

Consider β ∈ (0, H). We know that B^H is a.s. locally β-Hölder continuous. Therefore there exists an a.s. finite random variable C_β such that

$$\bigl|B_s^H-B_u^H+b(s-u)\bigr|\le C_\beta|s-u|^{\beta}+|b||s-u|\le C^{(1)}_{\beta,T}|s-u|^{\beta}\le C^{(1)}_{\beta,T}|t_2-t_1|^{\beta},$$

where C^{(1)}_{β,T} = C_β + |b|T^{1−β} is a.s. finite. This means that

$$\Psi\bigl(B^{H,b}\bigr)(t_2)-\Psi\bigl(B^{H,b}\bigr)(t_1)\le C^{(1)}_{\beta,T}|t_2-t_1|^{\beta}.$$

As a consequence, we get

$$\bigl|{}^R B_{t_2}^{H,b}-{}^R B_{t_1}^{H,b}\bigr|\le\bigl|B_{t_2}^{H,b}-B_{t_1}^{H,b}\bigr|+\Psi\bigl(B^{H,b}\bigr)(t_2)-\Psi\bigl(B^{H,b}\bigr)(t_1)\le 2C^{(1)}_{\beta,T}|t_2-t_1|^{\beta}.$$

Next, note that we can rewrite ^R B^{H,b} as

$${}^R B_t^{H,b}=B_t^{H,b}-\min_{0\le s\le t}B_s^{H,b}=B_t^{H,b}+\max_{0\le s\le t}\bigl(-B_s^{H,b}\bigr)=\max_{0\le s\le t}\bigl(B_t^{H,b}-B_s^{H,b}\bigr).$$

Since B^{H,b} is a process with stationary increments, we get the following equality in law:

$${}^R B_t^{H,b}\overset{d}{=}\max_{0\le s\le t}\bigl(B_s^{H,b}\bigr)\tag{3.88}$$

for t > 0. Now let us mention the following crucial property.

Proposition 3.66. Let H ∈ (0, 1) and b < 0. Then max_{s≥0}(B_s^{H,b}) is a.s. finite.

Proof. Corollary 3.9 supplies that a.s.

$$\lim_{t\to+\infty}\frac{B_t^{H,b}}{t}=b<0.$$

This means that lim_{t→+∞} B_t^{H,b} = −∞ a.s., and there exists a random variable τ such that B_t^{H,b} < −1 for all t > τ. Hence

$$\max_{s\ge 0}\bigl(B_s^{H,b}\bigr)=\max\Bigl\{\max_{s\in[0,\tau]}\bigl(B_s^{H,b}\bigr),-1\Bigr\}<+\infty.$$

As a consequence, taking the limit on both sides of (3.88), we get the following result.

Corollary 3.67. For all b < 0,

$${}^R B_t^{H,b}\overset{d}{\to}\max_{s\ge 0}\bigl(B_s^{H,b}\bigr)\quad\text{as }t\to+\infty.$$

Remark 3.68. Let X = {X_t, t ∈ ℝ} be a process with stationary increments such that X₀ = 0, and let ^R X_t = (Φ(X))_t for t ≥ 0. If sup_{s≥0}(X_s) < ∞ a.s., then

$${}^R X_t\overset{d}{\to}\sup_{s\ge 0}(X_s)\quad\text{as }t\to+\infty.$$

Clearly, if b > 0, then lim_{t→+∞} ^R B_t^{H,b} = +∞ a.s. by (3.88). To study the case b = 0, we need to use the following limit, which is an application of [189, Theorem 1]:

$$\limsup_{t\to+\infty}\frac{B_t^H}{t^H\sqrt{2\log(\log(t))}}=1\quad\text{a.s.}$$

Therefore sup_{t≥0}(B_t^{H,0}) = +∞, and lim_{t→+∞} ^R B_t^{H,0} = +∞ a.s.

Now let us focus on the case b < 0. In this case, a lower bound on the tail distribution of ^R B_∞^{H,b} := max_{s≥0}(B_s^{H,b}) was proved in [186, Theorem 4.1] for H > 1/2, but it is clear that the proof holds for all H ∈ (0, 1).

Theorem 3.69. For all x > 0 and b < 0, we have

$$P\bigl({}^R B_\infty^{H,b}>x\bigr)\ge\max_{t\ge 0}P\bigl(B_t^{H,b}>x\bigr)=P\bigl(B_{t^*(x)}^{H,b}>x\bigr),$$

where

$$t^*(x)=-\frac{Hx}{(1-H)b}.$$

Proof. Let us note that for all t, x > 0,

$$P\bigl(B_t^{H,b}>x\bigr)\le P\bigl({}^R B_\infty^{H,b}>x\bigr).\tag{3.89}$$

Furthermore,

$$P\bigl(B_t^{H,b}>x\bigr)=P\bigl(B_t^H>x-bt\bigr)=\int_{x-bt}^{+\infty}\frac{1}{\sqrt{2\pi}\,t^H}e^{-\frac{y^2}{2t^{2H}}}\,dy=:\phi_H(t;x).$$

Note that, on the one hand, lim_{t↓0} φ_H(t; x) = 0. On the other hand, we can rewrite

$$\phi_H(t;x)=P\Bigl(\frac{B_t^H-x}{t}>-b\Bigr)\to 0\quad\text{as }t\to+\infty.$$

This implies that for a fixed x > 0, φ_H(⋅; x) admits at least one maximum point t*(x) ∈ (0, +∞). To determine t*(x), let us rewrite φ_H(t; x) applying the self-similarity property from Proposition 3.4:

$$\phi_H(t;x)=P\bigl(B_1^H>t^{-H}(x-bt)\bigr).$$

Let $p(y;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(y-\mu)^2}{2\sigma^2}}$. Then taking the derivative with respect to t, we have

$$\frac{d\phi_H}{dt}(t;x)=t^{-H-1}\,p\bigl(t^{-H}(x-bt);0,1\bigr)\bigl(Hx+(1-H)bt\bigr).$$

Therefore the only critical point of φ_H(⋅; x), which must be the desired maximum point, equals

$$t^*(x)=-\frac{Hx}{(1-H)b}.$$

Taking t = t*(x) in (3.89), we get the desired result.

The precise asymptotics of P(^R B_∞^{H,b} > x) as x → +∞ has been obtained in [68] for H ≥ 1/2. We omit the proof, which is based on large deviation techniques.
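The closed form of t*(x) in Theorem 3.69 is easy to check numerically: by the self-similarity argument in the proof, maximizing φ_H(t; x) amounts to minimizing the Gaussian threshold t^{-H}(x − bt). A short sketch (parameter values are illustrative):

```python
# Check the maximizer t*(x) of Theorem 3.69: by self-similarity,
# maximizing phi_H(t; x) = P(B_1^H > t^{-H}(x - bt)) is the same as
# minimizing the threshold psi(t) = t^{-H}(x - bt).
H, b, x = 0.7, -1.0, 2.0

def psi(t):
    return (x - b * t) / t ** H

grid = [i / 1000 for i in range(1, 20001)]   # grid on (0, 20]
t_num = min(grid, key=psi)                   # numerical minimizer of psi
t_star = H * x / ((1 - H) * (-b))            # closed form -Hx / ((1 - H) b)
assert abs(t_num - t_star) < 1e-2
```

The grid minimizer agrees with t*(x) = Hx/((1 − H)(−b)) up to the grid resolution, confirming the unimodality established in the proof.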

Theorem 3.70. For all b < 0 and H ∈ [1/2, 1), we have

$$\lim_{x\to+\infty}x^{2H-2}\log\bigl(P\bigl({}^R B_\infty^{H,b}>x\bigr)\bigr)=-\frac{(-b)^{2H}}{2H^{2H}(1-H)^{2(1-H)}}.$$

We can also describe the asymptotic behavior of the running maximum ^R M_t^{H,b} = max_{0≤s≤t} ^R B_s^{H,b} for H > 1/2. The next result summarizes [266, Theorem 1 and Proposition 2].

Theorem 3.71. Let H ∈ (1/2, 1).
(i) If b < 0, then for all 1 ≤ p < ∞,

$$\lim_{t\to+\infty}E\Biggl[\,\biggl|\frac{{}^R M_t^{H,b}}{(\log(t))^{\frac{1}{2(1-H)}}}-\Bigl(\frac{2H^{2H}(1-H)^{2(1-H)}}{(-b)^{2H}}\Bigr)^{\frac{1}{2(1-H)}}\biggr|^p\Biggr]=0.$$

(ii) If b = 0, then

$$t^{-H}\bigl({}^R M_t^{H,0}\bigr)\overset{d}{\to}\max_{v,r\in[0,1]}\bigl|B_r^H-B_v^H\bigr|\quad\text{as }t\to+\infty.$$

(iii) If b > 0, then

$$t^{-H}\bigl({}^R M_t^{H,b}-bt\bigr)\overset{d}{\to}\xi\quad\text{as }t\to+\infty,$$

where ξ is a standard Gaussian random variable.

3.8 Reflected fractional Ornstein–Uhlenbeck process

The reflection procedure described before can clearly be applied to any process with continuous sample paths. However, if we consider, for example, the solution X = {X_t, t ≥ 0} of a (classical) stochastic differential equation, then the reflected process Φ(X) in general does not coincide with the process introduced in [236, 237]. This case can be discussed by means of a suitable generalization of the Skorokhod reflection problem.

Definition 3.72. Let F : ℝ₀⁺ × ℝ₀⁺ → ℝ, and let f ∈ 𝒞(ℝ₀⁺) with f(0) ≥ 0. We say that (g, l) ∈ (𝒞(ℝ₀⁺))² is a solution of the generalized Skorokhod reflection problem with drift F if
(i) for all t > 0,

$$g(t)=f(t)+\int_0^t F(s,g(s))\,ds+l(t);\tag{3.90}$$

(ii) g(t) ≥ 0 for all t ≥ 0;
(iii) l is nondecreasing with l(0) = 0 and

$$\int_0^{+\infty}g(t)\,dl(t)=0.\tag{3.91}$$

Such problems were studied in [218] with the help of a strategy presented, for example, in [268]. Here we follow the same lines.

Theorem 3.73. Let F : ℝ₀⁺ × ℝ₀⁺ → ℝ satisfy the following assumptions:
(a) F(⋅, 0) ∈ L¹_loc(ℝ₀⁺);
(b) there exists L > 0 such that

$$\bigl|F(t,x)-F(t,y)\bigr|\le L|x-y|,\quad x,y\in\mathbb{R}_0^+,\ t\ge 0.$$

Then for every f ∈ 𝒞(ℝ₀⁺) with f(0) ≥ 0, there exists a unique solution (g, l) ∈ (𝒞(ℝ₀⁺))² to the generalized Skorokhod reflection problem with drift F.

Proof. Consider the following integral equation:

$$h(t)=f(t)+\int_0^t F(s,\Phi(h)(s))\,ds,\quad t\ge 0.\tag{3.92}$$

Let us prove that it admits a unique solution h⋆ ∈ 𝒞(ℝ₀⁺). In this connection, recall that for every function h ∈ 𝒞(ℝ₀⁺), according to the definition of the map Φ from (3.83) and the preceding explanations, Φ(h) ∈ 𝒞(ℝ₀⁺). Further, for h ∈ 𝒞(ℝ₀⁺), define

$$(\mathcal{T}h)(t)=f(t)+\int_0^t F(s,\Phi(h)(s))\,ds,\quad t\ge 0.$$

The integral on the right-hand side is well defined for all t ≥ 0 because it follows from assumption (b) that

$$\int_0^t\bigl|F(s,\Phi(h)(s))\bigr|\,ds\le\int_0^t\bigl|F(s,\Phi(h)(s))-F(s,0)\bigr|\,ds+\int_0^t\bigl|F(s,0)\bigr|\,ds\le L\int_0^t\bigl|\Phi(h)(s)\bigr|\,ds+\int_0^t\bigl|F(s,0)\bigr|\,ds<+\infty,$$

t0 +δ

󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 ≤ 󵄨󵄨󵄨f (t0 + δ) − f (t0 )󵄨󵄨󵄨 + ∫ 󵄨󵄨󵄨F(s, Φ(h)(s)) − F(s, 0)󵄨󵄨󵄨 ds + ∫ 󵄨󵄨󵄨F(s, 0)󵄨󵄨󵄨ds t0

t0

t0 +δ

t0 +δ

󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 ≤ 󵄨󵄨󵄨f (t0 + δ) − f (t0 )󵄨󵄨󵄨 + L ∫ 󵄨󵄨󵄨Φ(h)(s)󵄨󵄨󵄨ds + ∫ 󵄨󵄨󵄨F(s, 0)󵄨󵄨󵄨ds. t0

t0

Taking the limit as δ → 0, we have 󵄨 󵄨 lim󵄨󵄨󵄨(𝒯 h)(t0 + δ) − (𝒯 h)(t0 )󵄨󵄨󵄨 = 0. δ↓0

The same argument is valid for δ < 0. Since t0 ≥ 0 is arbitrary, 𝒯 h ∈ 𝒞 (ℝ+0 ). Hence we have defined a map 𝒯 : 𝒞 (ℝ+0 ) 󳨃→ 𝒞 (ℝ+0 ). Now fix T > 0 and note that 𝒯 : 𝒞 [0, T] 󳨃→ 𝒞 [0, T]. For all h1 , h2 ∈ 𝒞 [0, T], we have t

󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨(𝒯 h1 )(t) − (𝒯 h2 )(t)󵄨󵄨󵄨 ≤ ∫ 󵄨󵄨󵄨F(s, Φ(h1 )(s)) − F(s, Φ(h2 )(s))󵄨󵄨󵄨 ds t

0

󵄨 󵄨 ≤ L ∫󵄨󵄨󵄨Φ(h1 )(s) − Φ(h2 )(s)󵄨󵄨󵄨ds ≤ 2LT‖h1 − h2 ‖𝒞[0,T] , 0

1 where we used (3.85). If we consider T < 2L and take the supremum over t ∈ [0, T], then we prove that 𝒯 is a contraction and thus admits a unique fixed point h⋆ ∈ 𝒞 [0, T] satisfying t

h⋆ (t) = (𝒯 h⋆ ) (t) = f (t) + ∫ F (s, Φ (h⋆ ) (s)) ds,

t ∈ [0, T].

0

Now we need to extend the solutions to the whole set ℝ+0 . To do this, let us consider 1 T = 4L . Let h1⋆ be the solution of (3.92) for t ∈ [0, T]. Next, let us assume that we already defined hn⋆ as the solution of (3.92) for t ∈ [0, nT]. Let 𝒜n = {h ∈ 𝒞 [0, (n + 1)T], h(t) = hn (t), t ∈ [0, nT]} ⋆

238 � 3 Fractional Brownian motion and its environment and note that (𝒜n , ‖ ⋅ ‖𝒞[0,(n+1)T] ) is a Banach space. We notice also that 𝒯 h ∈ 𝒜n for all h ∈ 𝒜n . Furthermore, for all h1 , h2 ∈ 𝒜n and t ∈ [0, (n + 1)T], (n+1)T

󵄨󵄨 󵄨 󵄨󵄨(𝒯 h1 )(t) − (𝒯 h2 )(t)󵄨󵄨󵄨 ≤

‖h1 − h2 ‖𝒞[0,(n+1)T] 󵄨 󵄨 . ∫ 󵄨󵄨󵄨F(s, Φ(h1 )(s)) − F(s, Φ(h2 )(s))󵄨󵄨󵄨 ds ≤ 2

nT

Taking the supremum over t ∈ [0, (n + 1)T], we obtain that 𝒯 is a contraction over 𝒜n . ⋆ Hence it admits a unique fixed point hn+1 ∈ 𝒜n that satisfies (3.92) for t ∈ [0, (n + 1)T]. ⋆ + Finally, define the solution h ∈ 𝒞 (ℝ0 ) to (3.92) as h⋆ (t) = hn⋆ (t),

t ∈ [(n − 1)T, nT],

for positive integers n ≥ 1. It is also clear that such a solution is unique. Indeed, if h(2) is another solution, then its restriction on the interval [0, T] must be a fixed point of 𝒯 on 𝒞 [0, T], and thus h(2) (t) = h1⋆ (t) = h⋆ (t) for all t ∈ [0, T]. Assuming that h(2) (t) = h⋆ (t) for all t ∈ [0, nT], we get that the restriction of h(2) over [0, (n + 1)T] belongs to 𝒜n and is a fixed point of 𝒯 over it. Hence h(2) (t) = hn⋆ (t) = h⋆ (t) for t ∈ [0, (n + 1)T]. We obtain by induction that h(2) (t) = h⋆ (t) for all t ≥ 0. Let g = Φ(h) and l = Ψ(h). By the definition of the reflection and regulator maps, we obtain that g(t) = h(t) + l(t),

t ≥ 0.

(3.93)

However, h is the solution of (3.92), and thus (3.93) coincides with (3.90). This proves that (g, l) is a solution of the generalized Skorokhod reflection problem with drift F. Now let us prove that such a solution is unique. To do this, let (g̃ , ̃l) be another solution and define t

̃ = f (t) + ∫ F (s, g̃ (s)) ds, h(t)

t ≥ 0.

(3.94)

0

̃ + ̃l(t). By (ii) and (iii) of the definition of solution of the generalIn particular, g̃ (t) = h(t) ized Skorokhod reflection problem with drift F we know that (g̃ , ̃l) is the solution of the ̃ and hence, in particular, g̃ = Φ(h). ̃ Thus Skorokhod reflection problem for the path h, we can rewrite (3.94) as t

̃ = f (t) + ∫ F (s, Φ(h)(s)) ̃ h(t) ds,

t ≥ 0,

0

̃ = Φ(h) = g and which means that h̃ solves (3.92), and thus h̃ = h. Finally, g̃ = Φ(h) ̃l = Ψ(h) ̃ = Ψ(h) = l.

3.8 Reflected fractional Ornstein–Uhlenbeck process



239

Now we can apply this procedure choosing f to be a sample path of an fBm BH and F in a suitable way. For instance, we can consider a linear F, obtaining the couple of processes (R U H,ξ , LH,ξ ) characterized by the generalized Skorokhod reflection problem as follows: R

t

UtH,ξ = ξ + σBtH + ∫ (a − bR UsH,ξ ) ds + LH,ξ t ,

(3.95)

0

where a, b ∈ ℝ, σ > 0, ξ ≥ 0 a. s., R UtH,ξ ≥ 0 for all t ≥ 0 a. s., the regulator LH,ξ is nondecreasing, LH,ξ 0 = 0, and +∞

= 0. ∫ R UsH,ξ dLH,ξ s 0

Definition 3.74. The process R U H,ξ is called a reflected fOU process. Let us prove the local Hölder property of the paths of R U H,ξ . Proposition 3.75. For all H ∈ (0, 1), process R U H,ξ is a. s. locally β-Hölder continuous for all β < H. Proof. Let us fix T ≥ 0 and set s

GsH = (∫ (bR UuH,ξ − a) du − σBsH ) . 0

Arguing as in Proposition 3.65, we have 󵄨 󵄨 H,ξ LH,ξ max 󵄨󵄨󵄨󵄨GsH − GvH 󵄨󵄨󵄨󵄨 . t2 − Lt1 ≤ t ≤s≤v≤t 1 2 Now let us estimate |GsH − GvH |. We get v

󵄨󵄨 H 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨Gs − GvH 󵄨󵄨󵄨 ≤ |a||v − s| + |b| ∫ 󵄨󵄨󵄨R UuH,ξ 󵄨󵄨󵄨 du + |σ| 󵄨󵄨󵄨BvH − BsH 󵄨󵄨󵄨 . 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 s

Recalling that BH is locally β-Hölder continuous and R U H,ξ is continuous, we have 󵄨󵄨 H 󵄨 󵄩 󵄩 β 󵄨󵄨Gs − GvH 󵄨󵄨󵄨 ≤ |a||v − s| + 󵄩󵄩󵄩R UuH,ξ 󵄩󵄩󵄩 󵄨 󵄨 󵄩 󵄩𝒞[0,T] |b||v − s| + C|σ||v − s| ,

v, u ∈ [0, T],

where 0 < C < ∞ a. s. This proves that the process GH is a. s. β-Hölder continuous on [0, T]. Taking the supremum, we have 󵄨󵄨 H 󵄨󵄨 β 󵄨󵄨Lt − LH 󵄨 t1 󵄨󵄨 ≤ K|t1 − t2 | , 󵄨 2

t1 , t2 ∈ [0, T],

240 � 3 Fractional Brownian motion and its environment where K > 0 is a suitable a. s. finite random variable. This proves that the regulator R H,ξ LH,ξ is a. s. β-Hölder continuous on [0, T]. Finally, R UtH,ξ = GtH + LH,ξ is t , and hence U β-Hölder continuous on [0, T]. In [134, Lemma 4.1] a further result concerning the moment-generating function of the square of the running maximum and the Hölder constant of R U H,ξ has been provided. Theorem 3.76. Let H ∈ (0, 1). For all a, b ∈ ℝ, there exists λ0 > 0 such that for all λ < λ0 , E [eλ‖

R

U H,ξ ‖2𝒞[0,T]

] < ∞.

Furthermore, for each β ∈ (0, 1), there exists λβ > 0 such that for all λ < λβ , E [e

λ‖R U H,ξ ‖2 β

𝒞 [0,T]

] < ∞.

We omit the proof, which is based on Fernique’s theorem for pseudo-seminorms of Gaussian random variables in Banach spaces (see [75, Theorem 1.3.2]). Similarly to the reflected fBm, it is difficult to study the properties of the reflected fOU process. However, we can provide some uniform approximations of the trajectories. To do this, let us consider the stochastic integral equation t

YtH,ξ,ε = ξ + ∫ ( 0

where α >

1 H

ε

α (YsH,ξ,ε )

+ a − bYsH,ξ,ε ) ds + σBtH ,

t ≥ 0,

(3.96)

− 1. We have the following result.

Theorem 3.77. Let ε > 0, a, b ∈ ℝ, H ∈ (0, 1), ξ > 0 a. s., and α > H1 − 1. Then (3.96) admits a unique pathwise global solution, i. e., there exists a unique process Y H,ε , adapted to the filtration {ℱt }t≥0 generated by {BsH , s ≤ t} and ξ, such that (3.96) holds for all t ≥ 0 a. s. Remark 3.78. Theorem 3.77 was established in [174] for the case a = 0, b > 0, and α = 1. For the maximally general drift and noise, it was proved in [59, Theorem 2.2]. Here we consider the particular case, Equation (3.96), since we will provide an approximate simulation algorithm for the reflected fOU process in Section 5.6. Proof. Fix ω ∈ Ω such that B⋅H (ω) is continuous. For a positive integer N ≥ 1, let 1 ψN ∈ 𝒞c∞ (ℝ) be such that ψN (x) = 1 for all x ≥ N1 , ψN (x) = 0 for all x ≤ N+1 , and ψN (x) ∈ [0, 1] for all x ∈ ℝ. Define FN (x) = (

ε + a − bx) ψN (x), xα

3.8 Reflected fractional Ornstein–Uhlenbeck process

� 241

and consider the auxiliary equation t

YtH,ξ,ε,N (ω) = ξ(ω) + ∫ FN (YsH,ξ,ε,N (ω)) ds + σBtH (ω),

t ≥ 0.

(3.97)

0

By definition, FN is Lipschitz. Indeed, for all x ≥ FN′ (x) = ψ′N (x) (

1 , N+1

ε αε + a − bx) − ψN (x) ( α+1 + b) , xα x

whence 󵄨󵄨 ′ 󵄨󵄨 󵄩󵄩 ′ 󵄩󵄩 α α+1 󵄨󵄨FN (x)󵄨󵄨 ≤ 󵄩󵄩ψN 󵄩󵄩L∞ (ℝ) (ε(N + 1) + |a| + |b|(N + 1)) + ‖ψN ‖L∞ (ℝ) (αε(N + 1) + |b|) , whereas for x
0 and γ > 0, and for f ∈ 𝒞 [0, T], define ‖f ‖γ := sup0≤t≤T e−γt |f (t)|. According to the proof of Theorem 2.32, (𝒞 [0, T], ‖ ⋅ ‖γ ) is a Banach space. Furthermore, the function 𝒯 f : [0, T] → ℝ defined as t

(𝒯 f )(t) = ξ(ω) + ∫ FN (f (s))ds + σBtH (ω),

t ∈ [0, T],

0

is continuous. Let us prove that 𝒯 : 𝒞 [0, T] → 𝒞 [0, T] is a contraction with respect to ‖ ⋅ ‖γ for some γ > 0. Indeed, for all f1 , f2 ∈ 𝒞 [0, T], t

t

0

0

L 󵄨󵄨 󵄨 󵄨 󵄨 γs γt 󵄨󵄨(𝒯 f1 )(t) − (𝒯 f2 )(t)󵄨󵄨󵄨 ≤ L ∫󵄨󵄨󵄨f1 (s) − f2 (s)󵄨󵄨󵄨ds ≤ L‖f1 − f2 ‖γ ∫ e ds ≤ ‖f1 − f2 ‖γ e . γ Multiplying both sides by e−γt and taking the supremum, we get ‖𝒯 f1 − 𝒯 f2 ‖γ ≤

L ‖f − f ‖ . γ 1 2 γ

Hence, for γ = 2L, 𝒯 is a contraction and thus admits a unique fixed point Y H,ξ,ε,N (ω) ∈ 𝒞 [0, T]. Since T > 0 is arbitrary, we obtain the unique solution Y H,ξ,ε,N (ω) of (3.97). By the contraction principle in Theorem 2.22, we know that Y H,ξ,ε,N is the a. s. limit of a sequence of stochastic processes, hence a stochastic process. Let us now define TN (ω) = inf {t ≥ 0 : YtH,ξ,ε,N (ω)
N, the processes Y H,ξ,ε,N (ω) and Y H,ξ,ε,M (ω) coincide up to TN (ω). Hence, in particular, (TN (ω))N≥1 is an increasing sequence, and there exists ζ (ω) = limN→+∞ TN (ω). For t ∈ [0, ζ (ω)), we define YtH,ξ,ε (ω) = YtH,ξ,ε,N (ω)

if t ∈ [0, TN (ω)).

By definition, Y H,ε (ω) solves (3.96) up to ζ (ω), and YtH,ε (ω) > 0 for all t ∈ [0, ζ (ω)). Furthermore, since each Y H,ξ,ε,N is a stochastic process, also Y H,ξ,ε is a stochstic process. Now we need to prove that ζ = +∞ a. s. To do this, let us argue by contradiction. Assume that ζ < ∞ with positive probability, and let E = {ω ∈ Ω : ζ (ω) < ∞}. Let us first note that we can rewrite (3.96) as YtH,ξ,ε

t

= ξ + ∫( 0

ε

α (YsH,ξ,ε )

− bYsH,ξ,ε ) ds + σBtH,a , ̄

t ≥ 0,

(3.98)

where ā = σa , and BH,ā is defined in (3.87). Clearly, BH,ā is a. s. β-Hölder continuous for 1 1 all β < H. Thus fix also β ∈ ( α+1 , H), which exists since α+1 < H, and without loss of

generality assume that BH,ā (ω) is locally β-Hölder continuous for all ω ∈ E. In particular, for all ω ∈ E, there exists a constant Λ(ω) such that 󵄨󵄨 H,ā 󵄨 󵄨󵄨Bt (ω) − BtH,ā (ω)󵄨󵄨󵄨 ≤ Λ(ω)|t1 − t2 |β , 󵄨 1 󵄨 2

t1 , t2 ∈ [0, ζ (ω)].

Now fix ω ∈ E. By the definition of ζ (ω) lim inf YtH,ξ,ε (ω) = 0. t↑ζ (ω)

(3.99)

Next, note that there exists a constant δ0 > 0 such that for all x ∈ (0, δ0 ), ε ε − bx ≥ α . xα 2x

(3.100)

Once this has been established, let 0 0,

τδ′ := inf {t ∈ [0, τδ ] : YsH,ξ,ε (ω) ≤ 2δ, s ∈ [t, τδ ]} > 0.

(3.101)

3.8 Reflected fractional Ornstein–Uhlenbeck process



243

It follows from (3.98), (3.100), and (3.101) that δ=



YτH,ξ,ε (ω) δ YτH,ξ,ε (ω) ′ δ

≥ 2δ +

ε

YτH,ξ,ε (ω) ′ δ

=

τδ

ε

τδ

+ ∫( τδ′

α ds 2 (YsH,ξ,ε (ω)) τδ′

+∫

2α+1 δα

ε

− bYsH,ξ,ε (ω)) ds − σ (BτH,′ a (ω) − BτH,δ a (ω)) ̄

α (YsH,ξ,ε (ω))

̄

δ

− σ (BτH,′ a (ω) − BτH,δ a (ω)) ̄

̄

δ

󵄨 󵄨β (τδ − τδ′ ) − σΛ(ω) 󵄨󵄨󵄨τδ − τδ′ 󵄨󵄨󵄨 ,

whence δ+

ε 󵄨 󵄨β (τδ − τδ′ ) − σΛ(ω) 󵄨󵄨󵄨τδ − τδ′ 󵄨󵄨󵄨 ≤ 0. 2α+1 δα

Consider the function Fδ (x) = δ +

εx

2α+1 δα

− σΛ(ω)x β .

Clearly, Fδ (0) = δ > 0. Furthermore, for all x > 0, Fδ′′ (x) = β(1 − β)σΛ(ω)x β−2 > 0, and hence Fδ is strictly convex for x > 0. Thus it admits a unique minimum point. To find it, note that Fδ′ (x) =

ε

2α+1 δα

− βσΛ(ω)x β−1 .

Hence the minimum point is given by x̃ = (

1

α β−1 ε ) δ 1−β , α+1 2 βσΛ(ω)

and the minimum of Fδ is Fδ (x̃) = δ − (1 − β)σΛ(ω) (

β

αβ β−1 ε 1−β . ) δ 2α+1 βσΛ(ω)

αβ 1 Since β > α+1 , we have that 1−β > 1, and there exists δ̃ < δ0 such that Fδ (x̃) > 0 for ̃ This means that F (x) > 0 for all x ≥ 0, and we get a contradiction. Thus all δ < δ. δ P(ζ < ∞) = 0, and Y H,ξ,ε is a global solution of (3.97).

244 � 3 Fractional Brownian motion and its environment Theorem 3.77 also tells that min0≤t≤T YtH,ξ,ε > 0 for all T > 0, and thus t

LH,ξ,ε := ε ∫ t

1

α (YsH,ε ) 0

εt

ds ≤

(min0≤s≤t YsH,ε )

α

< +∞.

(3.102)

Now we are ready to prove that the processes Y H,ξ,ε are locally uniform approximations of R U H,ξ , as shown in [175, Theorem 2.2] for a = 0, H > 21 , α = 1, and b > 0. Similarly to Theorem 3.77, Theorem 3.79 was proved in a more general setting in [59, Theorem 3.2]. Again, we give here the proof for completeness. Theorem 3.79. Let a, b ∈ ℝ, σ > 0, and H > 21 , and let R U H,ξ be a reflected fOU process defined via (3.95), with regulator LH,ξ , such that ξ > 0 a. s. Consider any sequence (εn )n≥0 such that εn ≥ εn+1 > 0 and limn→+∞ εn = 0. Let also Y H,ξ,εn be the unique solution of (3.96), and let LH,ξ,εn be defined as in (3.102). Then for all T > 0, 󵄩 󵄩 󵄩 󵄩 P ( lim (󵄩󵄩󵄩󵄩R U H,ξ − Y H,ξ,εn 󵄩󵄩󵄩󵄩𝒞[0,T] + 󵄩󵄩󵄩󵄩LH,ξ − LH,ξ,εn 󵄩󵄩󵄩󵄩𝒞[0,T] ) = 0) = 1. n→+∞

(3.103)

Proof. As the first step, let us prove that Y H,ξ,εn is an a. s. decreasing sequence of conH,ξ,ε H,ξ,ε tinuous functions. Let ε1 > ε2 . Fix ω ∈ Ω, put Δ(t) = Yt 1 (ω) − Yt 2 (ω), and note that t

Δ(t) = ∫ ( 0

ε1 α H,ξ,ε1 (Ys (ω))



ε2 α H,ξ,ε2 (Ys (ω))

− bΔ(s)) ds.

Evidently, Δ(0) = 0. Furthermore, Δ is differentiable, and its derivative Δ′ (t) =

ε1 α H,ξ,ε1 (Yt (ω))



ε2 α H,ξ,ε2 (Yt (ω))

is a continuous function. Furthermore, Δ′ (0+ ) =

ε1 −ε2 (ξ(ω))α

− bΔ(t)

> 0, and hence there exists δ > 0

such that Δ′ (t) > 0 for all t ∈ (0, δ), which in turn implies that Δ(t) > 0 for all t ∈ (0, δ). Let now t ⋆ = sup{t > 0 : Δ(s) > 0, s ∈ (0, t)}. Assume, by contradiction, that t ⋆ < +∞. H,ξ,ε H,ξ,ε By definition, Δ(t ⋆ ) = 0 and Yt⋆ 1 (ω) = Yt⋆ 2 (ω) =: Y ⋆ > 0. Furthermore, Δ′ (t ⋆ ) =

ε1 − ε2 > 0. (Y ⋆ )α

This implies that there exists δ > 0 such that Δ′ (t) > 0 for all t ∈ (t ⋆ − δ, t ⋆ ), which however means that Δ(t) < 0 for all t ∈ (t ⋆ − δ, t ⋆ ). This is a contradiction with the H,ξ,ε H,ξ,ε definition of t ⋆ . Hence t ⋆ = +∞, and Δ(t) > 0 for all t > 0. So Yt n ≥ Yt n+1 for all t ≥ 0 a. s. H,ξ,ε ̃H Since the sequence is monotone, we can define the limit limn→+∞ Yt n = Y t H,ξ,εn for t ≥ 0. Furthermore, for all n ∈ ℕ, Yt (ω) is continuous, and for all T > 0 and

3.8 Reflected fractional Ornstein–Uhlenbeck process



245

t ∈ [0, T], 󵄨󵄨 H,ξ,εn 󵄨󵄨 󵄨 H,ξ,ε 󵄨 󵄨󵄨bYt (ω)󵄨󵄨󵄨 ≤ |b| sup 󵄨󵄨󵄨󵄨Yt 1 (ω)󵄨󵄨󵄨󵄨 . 󵄨 t∈[0,T] Hence a direct application of the dominated convergence implies that H,ξ,εn

lim Lt

n→+∞

H,ξ,εn

(ω) = lim (Yt n→+∞

t

(ω) − ξ(ω) − at + b ∫ YsH,ξ,εn (ω)ds − σBtH (ω)) 0

t

̃H (ω) − ξ(ω) − at + b ∫ Y ̃H (ω)ds − σBH (ω) =: L ̃ H (ω). =Y t s t t 0

̃H , L ̃ H ) solves Equation (3.95). With this notation, the couple (Y ̃H , L ̃ H ) = (R U H,ξ , LH,ξ ), we need to show that Y ̃H , L ̃ H are continuous, To prove that (Y H H H ̃ ̃ ̃ Yt ≥ 0 for all t ≥ 0, L0 = 0, L is nondecreasing, and finally t

̃H d L ̃ H = 0. ∫Y s s

(3.104)

0

̃ H is also nondecreasing, and L ̃ H = 0. Now let Since all LH,ξ,εn are nondecreasing, L 0 H ̃ is continuous. Fix ω ∈ Ω and t ≥ 0 and note that us prove that L ̃ H (ω) := lim L ̃ H (ω) L t+ s s↓t

and

̃ H (ω) := lim L ̃ H (ω) L t− s s↑t

̃ H (ω) = L ̃ H (ω). Assume that L ̃ H (ω)− L ̃ H (ω) > 0. are well defined. We want to prove that L t+ t− t+ t− H H ̃ >Y ̃ =: y− . First assume that y− = 0 and y+ > 0. Fix any T > t and Then also y+ := Y t+ t− β < H, set ā = σa , and let Λ(ω) be such that for all t1 , t2 ∈ [0, T], 󵄨󵄨 H,ā 󵄨 󵄨󵄨Bt (ω) − BtH,ā (ω)󵄨󵄨󵄨 ≤ Λ(ω)|t1 − t2 |β . 󵄨 1 󵄨 2 β

y+ 2

y

be such that (2 + 3|b| y ) δ0 + σΛ(ω)δ0 < 2+ . Since y− = 0, there exists 2 + H ̃ δ (ω) < δ0 and 3 y+ > Y ̃H δ (ω) > y+ . Since YsH,ξ,εn (ω) → Y ̃H (ω), δ1 ∈ (0, δ0 ) such that Y s 1 1 2 2 Now let δ0
(ω) ≥ Y 1 t+

2

Y

H,ξ,εn

t+

δ1 2

t+

2

3 (ω) < y+ , 2

εn < δ0α .

y+ , and Y H,ξ,εn (ω) is continuous. Hence we can define 2

δ1 δ , t + 1 ) , YsH,ξ,εn (ω) = δ0 } , 2 2 δ1 3 τ2 = sup {s ∈ (τ1 , t + ) , YuH,ξ,εn (ω) ≤ y+ , u ∈ [τ1 , s]} . 2 2 τ1 = sup {s ∈ (t −

(3.105)

246 � 3 Fractional Brownian motion and its environment H,ξ,εn

Then Ys

∈ [δ0 , 32 y+ ] for all s ∈ [τ1 , τ2 ]. Furthermore, if τ2 < t +

continuity of Y

H,ξ,εn

(ω),

H,ξ,ε Yτ2 n (ω)

=

3 y 2 +

>

Since τ2 − τ1 < δ, we get from (3.98) that τ2

n n YτH,ξ,ε (ω) = YτH,ξ,ε (ω) + ∫ ( 2 1

τ2

τ1

εn α H,ξ,εn (Ys (ω))

1

≤ δ0 + εn ∫

α H,ξ,εn (ω)) τ1 (Ys

≤ δ0 + ≤ δ0 +

y+ ; 2

δ1 , then 2 H,ξ,εn

otherwise, we recall that Y

t+

δ1 2

by the

(ω) >

y+ . 2

− bYsH,ξ,εn (ω)) ds + σ (BH,aδ1 (ω) − BτH,a (ω)) ̄

t+

̄

2

τ2

ds + |b| ∫ YsH,ξ,εn (ω)ds + σΛ(ω)(τ2 − τ1 )β τ1

εn (τ2 − τ1 ) H,ξ,ε + |b|(τ2 − τ1 ) ( sup Yt n (ω)) + σΛ(ω)(τ2 − τ1 )β (δ0 )α t∈[τ1 ,τ2 ] εn (τ2 − τ1 ) 3 + |b|(τ2 − τ1 )y+ + σΛ(ω)(τ2 − τ1 )β (δ0 )α 2

δ0α+1 3 + |b|y+ δ0 + σΛ(ω)(τ2 − τ1 )β δ0α 2 y 3|b| β ≤ (2 + y ) δ + σΛ(ω)δ0 < + , 2 + 0 2 ≤ δ0 +

H,ξ,εn

where we used the fact that Yt

(ω) ∈ [δ0 , 32 y+ ] for all t ∈ [τ1 , τ2 ]. Hence

n YτH,ξ,ε (ω) < 2

y+ n < YτH,ξ,ε (ω), 2 2

which is impossible. Therefore y− > 0. However, in this case, there is δ > 0 such that ̃H (ω) > y− for all s ∈ [t − δ, t + δ]. Furthermore, since YsH,ξ,εn (ω) is nonincreasing in n, Y s 2 H,ξ,ε ̃H > y− , and then Ys n (ω) ≥ Y s

2

0≤

H,ξ,ε Lt+δ n (ω)



H,ξ,ε Lt−δ n (ω)

t+δ

= ∫ t−δ

εn

H,ξ,εn

(Ys

α ds

(ω))



2α+1 δεn . yα−

Taking the limit as n → +∞, we get that ̃ H (ω) − L ̃ H (ω) ≤ 0. 0≤L t+δ t−δ

(3.106)

̃ H (ω) = L ̃ H (ω), and thus L ̃ H (ω) = L ̃ H (ω), a contradiction. Hence L t+ t− t+δ t−δ H ̃ ̃H is a. s. continuous at all t > 0 accordHence L is a. s. continuous at all t > 0, and Y ̃H (ω) ≥ Y ̃H (ω) = ξ(ω) > 0, ing to (3.95). We only need to prove the continuity at 0. Since Y 0+ 0 H,ξ,ε ̃H (ω) > ξ(ω) , and thus there exists δ > 0 such that for all t ∈ [0, δ], we have Yt n (ω) ≥ Y t 2 0≤

H,ξ,ε Lδ n (ω)

δ

=∫ 0

εn ds

H,ξ,εn

(Ys

α

(ω))



2α εn δ . (ξ(ω))α

3.8 Reflected fractional Ornstein–Uhlenbeck process

� 247

̃ H (ω) = 0, whence L ̃ H (ω) = L ̃ H (ω). Therefore L ̃ H is Taking the limit in n we get that L 0+ 0 δ + H + ̃ is a. s. continuous on ℝ . Furthermore, we a. s. continuous on ℝ0 and consequently Y 0 ̃ H can increase only on the set of the zeros of Y ̃H , hence (3.104) know from (3.106) that L ̃H , L ̃ H ) = (R U H,ξ , LH,ξ ). holds. This proves the equality (Y Finally, to show (3.103), fix T > 0 and ω ∈ Ω and note that 󵄩󵄩 H,ξ,εn 󵄩 󵄩󵄩Y (ω) − R U H,ξ (ω)󵄩󵄩󵄩󵄩𝒞[0,T] → 0 󵄩 by Dini theorem; see [228, Theorem 7.13]. Moreover, we have from (3.98) and (3.95) that for all t ∈ [0, T], t

󵄨󵄨 H,ξ 󵄨 󵄨 󵄨 󵄨 󵄨 H,ξ,ε n 󵄨󵄨Lt (ω) − LH,ξ,ε (ω)󵄨󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨󵄨R UtH,ξ (ω) − Yt n (ω)󵄨󵄨󵄨󵄨 + |b| ∫ 󵄨󵄨󵄨󵄨R UsH,ξ (ω) − YsH,ξ,εn (ω)󵄨󵄨󵄨󵄨 ds t 󵄨 󵄩 󵄩 ≤ (1 + |b|T) 󵄩󵄩󵄩󵄩R U H,ξ (ω) − Y H,ξ,εn (ω)󵄩󵄩󵄩󵄩𝒞[0,T] .

0

Taking the supremum over [0, T], we obtain the relation 󵄩󵄩 H,ξ 󵄩 󵄩 󵄩 n 󵄩󵄩Lt (ω) − LH,ξ,ε (ω)󵄩󵄩󵄩󵄩𝒞[0,T] ≤ (1 + |b|T) 󵄩󵄩󵄩󵄩R U H,ξ (ω) − Y H,ξ,εn (ω)󵄩󵄩󵄩󵄩𝒞[0,T] , t 󵄩 whence the statement follows. Until now, we have provided a uniform approximation procedure for the reflected fOU process under the condition ξ > 0. It is clear that the same procedure cannot be carried out if ξ = 0, since the right-hand side (3.96) admits a singularity at 0. However, we can still propose an alternative approximation procedure, which in this case is based on the double limit. To do this, we first need to prove the stability of the solutions of the generalized Skorokhod reflection problem with respect to the paths. Theorem 3.80. Let F : ℝ+0 × ℝ+0 → ℝ satisfy the following assumptions: (a) F(⋅, 0) ∈ L1loc (ℝ+0 ); (b) There exists L > 0 such that 󵄨󵄨 󵄨 󵄨󵄨F(t, x) − F(t, y)󵄨󵄨󵄨 ≤ L|x − y|,

x, y ∈ ℝ+0 ,

t ≥ 0.

Then for each T > 0, there exists a constant C > 0 depending on L and T such that for all f1 , f2 ∈ 𝒞 (ℝ+0 ) with f1 (0), f2 (0) ≥ 0, ‖g1 − g2 ‖𝒞[0,T] + ‖l1 − l2 ‖𝒞[0,T] ≤ C‖f1 − f2 ‖𝒞[0,T] , where (gj , lj ) ∈ (𝒞 (ℝ+0 ))2 , j = 1, 2, is the unique solution to the generalized Skorokhod reflection problem with drift F and path fj .

248 � 3 Fractional Brownian motion and its environment Proof. Let hj : ℝ+0 → ℝ, j = 1, 2, be the unique solution of t

hj (t) = fj (t) + ∫ F (s, Φ(hj )(s)) ds,

t ≥ 0.

(3.107)

0

In the proof of Theorem 3.73, we established that gj = Φ(hj ) and lj = Ψ(lj ). Therefore, with the help of (3.84) and (3.85), we can produce the relations ‖g1 − g2 ‖𝒞[0,T] + ‖l1 − l2 ‖𝒞[0,T] ≤ 3‖h1 − h2 ‖𝒞[0,T] . It remains to prove that there exists a constant C > 0 depending only on L and T and such that ‖h1 − h2 ‖𝒞[0,T] ≤ C‖f1 − f2 ‖𝒞[0,T] . It follows from (3.107) and (3.84) that for all 0 ≤ s ≤ t ≤ T, s

|h_1(s) − h_2(s)| ≤ |f_1(s) − f_2(s)| + ∫_0^s |F(u, Φ(h_1)(u)) − F(u, Φ(h_2)(u))| du
  ≤ |f_1(s) − f_2(s)| + L ∫_0^s |Φ(h_1)(u) − Φ(h_2)(u)| du
  ≤ |f_1(s) − f_2(s)| + L ∫_0^s sup_{0≤v≤u} |Φ(h_1)(v) − Φ(h_2)(v)| du
  ≤ ‖f_1 − f_2‖_{𝒞[0,T]} + 2L ∫_0^t sup_{0≤v≤u} |h_1(v) − h_2(v)| du.

Taking the supremum in s ∈ [0, t], we obtain for H(t) = sup_{0≤s≤t} |h_1(s) − h_2(s)| the upper bound

H(t) ≤ ‖f_1 − f_2‖_{𝒞[0,T]} + 2L ∫_0^t H(u) du,  t ∈ [0, T],

and Grönwall's inequality, see [196, Theorem 1.3.1], leads to

H(t) ≤ ‖f_1 − f_2‖_{𝒞[0,T]} e^{2Lt},  t ∈ [0, T].

Putting t = T, we get ‖h_1 − h_2‖_{𝒞[0,T]} ≤ e^{2LT} ‖f_1 − f_2‖_{𝒞[0,T]}, whence the proof follows.
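The contraction estimate above is easy to visualize on a discretized path. The following sketch implements the classical zero-drift Skorokhod reflection map (to which the maps Φ and Ψ of (3.84)–(3.85) reduce when F = 0; the discrete formulation is our own illustration, not taken from the book): the regulator is l(t) = max(0, max_{s≤t}(−f(s))) and the reflected path is g = f + l.

```python
# Zero-drift Skorokhod reflection map on a discretized path (illustrative
# sketch; the generalized problem with drift F reduces to this when F = 0):
# l(t) = max(0, max_{s<=t}(-f(s))), g = f + l.

def skorokhod_map(path):
    """Return (g, l): reflected path g >= 0 and nondecreasing regulator l."""
    g, l = [], []
    running = 0.0  # running value of max(0, max_{s<=t}(-f(s)))
    for x in path:
        running = max(running, -x)
        l.append(running)
        g.append(x + running)
    return g, l

# Example: a path dipping below zero is pushed back up to 0.
f = [0.0, 0.5, -0.3, -0.8, 0.2, -0.1]
g, l = skorokhod_map(f)
```

By construction g stays nonnegative and l increases only when g touches 0, mirroring the continuous-time reflection problem.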

3.8 Reflected fractional Ornstein–Uhlenbeck process




Now we can prove the following result.

Theorem 3.81. Let {Θ_n}_{n≥1} = {(ε_n, y_n)}_{n≥1} be a sequence such that ε_n > ε_{n+1} > 0, y_n > y_{n+1} > 0, and ε_n, y_n → 0 as n → +∞. Let also H ∈ (0, 1), a, b ∈ ℝ, and σ > 0, and let ᴿU^{H,ξ} be the reflected fOU process defined by (3.95) with ξ = 0. For positive integers n ≥ 1, define Y^{H,Θ_n} as the solutions of

Y_t^{H,Θ_n} = y_n + ∫_0^t (ε_n/(Y_s^{H,Θ_n})^α + a − bY_s^{H,Θ_n}) ds + σB_t^H,  t ≥ 0,

and

L_t^{H,Θ_n} = ∫_0^t ε_n/(Y_s^{H,Θ_n})^α ds,  t ≥ 0.

Then for all T > 0,

P(lim_{n→+∞} (‖Y^{H,Θ_n} − ᴿU^{H,ξ}‖_{𝒞[0,T]} + ‖L^{H,Θ_n} − L^{H,ξ}‖_{𝒞[0,T]}) = 0) = 1.  (3.108)

Proof. Let {Θ_n}_{n≥1} be as in the statement, fix ω ∈ Ω, and consider

Δ(t) = Y_t^{H,Θ_n}(ω) − Y_t^{H,Θ_{n+1}}(ω).

Note that

Δ(t) = y_n − y_{n+1} + ∫_0^t (ε_n/(Y_s^{H,Θ_n}(ω))^α − ε_{n+1}/(Y_s^{H,Θ_{n+1}}(ω))^α − bΔ(s)) ds.

Clearly, Δ(0) = y_n − y_{n+1} > 0. Furthermore, Δ is differentiable for t > 0, and

Δ′(t) = ε_n/(Y_t^{H,Θ_n}(ω))^α − ε_{n+1}/(Y_t^{H,Θ_{n+1}}(ω))^α − bΔ(t).

Assume that there exists t⋆ > 0 such that Δ(t⋆) ≤ 0, and let, without loss of generality, t⋆ = sup{t > 0 : Δ(s) > 0, s ∈ (0, t)}. Clearly, Δ(t⋆) = 0, and thus Y_{t⋆}^{H,Θ_n}(ω) = Y_{t⋆}^{H,Θ_{n+1}}(ω) =: Y⋆. Furthermore, t⋆ > 0 because Δ(0) > 0. Hence

Δ′(t⋆) = (ε_n − ε_{n+1})/(Y⋆)^α > 0,

which is a contradiction, since that would mean that there exists δ > 0 such that Δ(t⋆ − δ) < 0. Hence Δ(t) > 0 for all t ≥ 0, and thus Y_t^{H,Θ_n}(ω) > Y_t^{H,Θ_{n+1}}(ω). In turn, this

means that there exists a limit Ỹ_t^H(ω) = lim_{n→+∞} Y_t^{H,Θ_n}(ω) for all t ≥ 0. Now let us prove that the limit does not depend on the choice of the sequence {Θ_n}_{n≥1}. Indeed, consider two vanishing sequences Θ^j = {(ε_n^j, y_n^j)}_{n≥1}, j = 1, 2. Assume that

Ỹ_t^{H,j}(ω) = lim_{n→+∞} Y_t^{H,Θ_n^j}(ω),  j = 1, 2.

Now we construct an auxiliary sequence {Θ_n}_{n≥1} = {(ε_n, y_n)}_{n≥1} such that {ε_n}_{n≥1} and {y_n}_{n≥1} are decreasing and vanishing and, if n is odd, then Θ_n belongs to the sequence {Θ_n^1}_{n≥1}, whereas if n is even, then it belongs to {Θ_n^2}_{n≥1}. Indeed, since 1 is odd, we set (ε_1, y_1) = (ε_1^1, y_1^1) and k_1 = 1. Next, since 2 is even, let

k_2 = min{n ≥ 1 : ε_n^2 < ε_1 and y_n^2 < y_1},

and set (ε_2, y_2) = (ε_{k_2}^2, y_{k_2}^2). Now assume that we have defined (ε_m, y_m) and k_m for m ≤ n for some n ≥ 2. We want to define (ε_{n+1}, y_{n+1}) and k_{n+1}. To do this, we distinguish two cases. If n + 1 is even, then we set

k_{n+1} = min{h ≥ k_n : ε_h^2 < ε_n and y_h^2 < y_n}

and (ε_{n+1}, y_{n+1}) = (ε_{k_{n+1}}^2, y_{k_{n+1}}^2). If n + 1 is odd, then we set

k_{n+1} = min{h ≥ k_n : ε_h^1 < ε_n and y_h^1 < y_n}

and (ε_{n+1}, y_{n+1}) = (ε_{k_{n+1}}^1, y_{k_{n+1}}^1). Finally, set {Θ_n}_{n≥1} = {(ε_n, y_n)}_{n≥1}. Thus there exists Ȳ_t^H = lim_{n→+∞} Y_t^{H,Θ_n} for all t ≥ 0. Furthermore, by construction, {Θ_{2n−1}}_{n≥1} is a subsequence of both {Θ_n}_{n≥1} and {Θ_n^1}_{n≥1}, and thus

Ȳ_t^H(ω) = lim_{n→+∞} Y_t^{H,Θ_{2n−1}}(ω) = Ỹ_t^{H,1}(ω),  t ≥ 0.

Similarly, {Θ_{2n}}_{n≥1} is a subsequence of both {Θ_n}_{n≥1} and {Θ_n^2}_{n≥1}, and thus

Ȳ_t^H(ω) = lim_{n→+∞} Y_t^{H,Θ_{2n}}(ω) = Ỹ_t^{H,2}(ω),  t ≥ 0.

This proves that Ỹ^{H,1}(ω) = Ỹ^{H,2}(ω), i. e., the limit is independent of the choice of the sequence {Θ_n}_{n≥1}. Let Ỹ^H(ω) be such a common limit for a. a. ω ∈ Ω. We want to show that Ỹ^H(ω) = ᴿU^{H,ξ}(ω) for a. a. ω ∈ Ω. To do this, we will construct a specific choice of the sequence {Θ_n}_{n≥1}. First, let {y_n}_{n≥1} and {ε_n}_{n≥1} be two strictly decreasing sequences of positive real numbers such that y_n, ε_n → 0. Let Y^{H,y_n,ε_m} be the solution of (3.96) with ε = ε_m and initial data y_n, and let ᴿU^{H,y_n} be the reflected fOU process defined by means of (3.95) with initial data y_n. Fix also T > 0. For fixed n ≥ 1, we know from Theorem 3.79


that the event

E_n = {ω ∈ Ω : lim_{m→+∞} ‖ᴿU^{H,y_n} − Y^{H,y_n,ε_m}‖_{𝒞[0,T]} = 0}

has P(E_n) = 1. Put E = ⋂_{n≥1} E_n, so that P(E) = 1, and fix ω ∈ E. Then there exists m_1 (depending on ω) such that

‖ᴿU^{H,y_1}(ω) − Y^{H,y_1,ε_{m_1}}(ω)‖_{𝒞[0,T]} ≤ 1.

Assuming that we have defined m_n, we set m_{n+1} > m_n such that

‖ᴿU^{H,y_n}(ω) − Y^{H,y_n,ε_{m_n}}(ω)‖_{𝒞[0,T]} ≤ 1/n.  (3.109)

Furthermore, recall that ᴿU^{H,y_n}(ω) and ᴿU^{H,0}(ω) solve the generalized Skorokhod reflection problem with drift F(s, x) = a − bx for all s, x ≥ 0 and paths f_1(t) = y_n + σB_t^H(ω) and f_2(t) = σB_t^H(ω), respectively. Hence by Theorem 3.80 there exists a constant C > 0, depending only on |b| and T, such that for all n ≥ 1,

‖ᴿU^{H,y_n}(ω) − ᴿU^{H,0}(ω)‖_{𝒞[0,T]} ≤ C‖f_1 − f_2‖_{𝒞[0,T]} = Cy_n.  (3.110)

Put Θ_n = (ε_{m_n}, y_n) for n ≥ 1. Then, due to (3.109), (3.110), and the fact that Y^{H,Θ_n}(ω) = Y^{H,y_n,ε_{m_n}}(ω) by definition, we get that

‖ᴿU^{H,ξ}(ω) − Y^{H,Θ_n}(ω)‖_{𝒞[0,T]}
  ≤ ‖ᴿU^{H,y_n}(ω) − Y^{H,Θ_n}(ω)‖_{𝒞[0,T]} + ‖ᴿU^{H,y_n}(ω) − ᴿU^{H,ξ}(ω)‖_{𝒞[0,T]} ≤ n⁻¹ + Cy_n.

Passing to the limit as n → +∞, we finally obtain

lim_{n→+∞} ‖ᴿU^{H,ξ}(ω) − Y^{H,Θ_n}(ω)‖_{𝒞[0,T]} = 0.

Since T > 0 is arbitrary and the limit Ỹ^H(ω) is independent of the choice of {Θ_n}_{n≥1}, we have Ỹ^H(ω) = ᴿU^{H,ξ}(ω) for all ω ∈ E. The locally uniform convergence (3.108) follows by a direct application of Dini's theorem, since Y^{H,Θ_n} is a monotone sequence of continuous functions converging to a continuous function ᴿU^{H,ξ}. Finally, we have that for all t ≥ 0,

L_t^{H,Θ_n} = Y_t^{H,Θ_n} − y_n − at − σB_t^H + b ∫_0^t Y_s^{H,Θ_n} ds,

L_t^{H,0} = ᴿU_t^{H,0} − at − σB_t^H + b ∫_0^t ᴿU_s^{H,0} ds,

and thus, as n → ∞,

‖L^{H,Θ_n} − L^{H,0}‖_{𝒞[0,T]} ≤ (1 + |b|T) ‖Y^{H,Θ_n} − ᴿU^{H,0}‖_{𝒞[0,T]} + y_n → 0  a. s.

Remark 3.82. Furthermore, the previous theorem also applies in the case b = 0 and σ = 1, in which this result provides a locally uniform approximation of the reflected fBm ᴿB^{H,a} and its regulator.
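As a crude illustration of the penalization scheme of Theorem 3.81, one can run an Euler discretization of the penalized equation in the Markovian special case H = 1/2 (so that B^H is a standard Bm), with b = 0 and σ = 1 as in Remark 3.82. All numerical values below are illustrative assumptions, and the small positivity floor is a purely numerical guard absent from the theory.

```python
import math
import random

# Euler sketch of the penalized equation of Theorem 3.81 for H = 1/2, b = 0,
# sigma = 1 (Remark 3.82).  Parameter values are illustrative assumptions.
random.seed(0)
alpha, eps, dt, n_steps = 1.0, 0.5, 1e-3, 2000
y = 0.05                      # small positive initial datum y_n
Y, L = [y], [0.0]
for _ in range(n_steps):
    penalty = eps / y**alpha  # term eps_n / Y^alpha pushing the path off 0
    y = y + penalty * dt + math.sqrt(dt) * random.gauss(0.0, 1.0)
    y = max(y, 1e-12)         # numerical floor guarding the crude scheme
    Y.append(y)
    L.append(L[-1] + penalty * dt)  # discrete analogue of the regulator L
```

The accumulated penalty L is nondecreasing and grows essentially only while Y is near 0, mimicking the regulator of the reflection problem.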

3.9 Exercises

Exercise 3.1. Let H ∈ (0, 1).
(i) Prove that E[(B_t^H − B_s^H)²] = |t − s|^{2H}.
(ii) Use item (i) to prove Proposition 3.3.
Hint: Use the Kolmogorov–Chentsov theorem for Gaussian processes (see Theorem C.4).

Exercise 3.2. Let H ∈ (0, 1).
(i) Prove that E[B_{at}^H B_{as}^H] = |a|^{2H} E[B_t^H B_s^H].
(ii) Use item (i) to prove Proposition 3.4.
Hint: The process is Gaussian, and therefore the distribution of (B_{t_1}^H, . . . , B_{t_n}^H) is uniquely determined by the covariance matrix.

Exercise 3.3. Let H ∈ (0, 1] and define B̃_t^H = |t|^{2H} B_{1/t}^H for t ≠ 0 and B̃_0^H = 0.
(i) Prove that B̃^H is a centered Gaussian process.
(ii) Use item (i) to prove that B̃^H is an fBm (as in Proposition 3.8).
Hint: Evaluate E[B̃_t^H B̃_s^H].

Exercise 3.4. Fix δ > 0 and consider the process X_t = I_{t+δ,t}^H for t ∈ ℝ, where I_{t,s}^H is defined in (3.3). Prove that X_t is stationary.

Exercise 3.5. For n ≥ 1 and H ∈ (0, 1), let r_n^H = E[B_1^H I_{n+1,n}^H].
(i) Prove that for H ≠ 1/2, r_n^H = H(2H − 1)n^{2H−2} + O(n^{2H−3}).
(ii) Prove that ∑_{n=1}^{+∞} |r_n^H| converges if H < 1/2 and diverges if H > 1/2.
(iii) Prove that r_n^{1/2} = 0 for all n ≥ 1.


Exercise 3.6. Let H ∈ (0, 1). Prove that

I_{at,as}^H =ᵈ a^H I_{t,s}^H,  t, s ∈ ℝ,  a > 0,

where equality holds for finite-dimensional distributions. Furthermore, establish that

I_{−t,−s}^H =ᵈ I_{t,s}^H,  t, s ∈ ℝ.

Exercise 3.7. Let H ∈ (0, 1), T > 0, k ∈ ℕ, and t_j = jT/k for j = 0, 1, 2, . . . , k. Prove that for p ≥ 1 such that 1/p < H, we have

lim_{k→+∞} ∑_{j=0}^{k−1} |B_{t_{j+1}}^H − B_{t_j}^H|^p = 0  a. s.

Hint: Use the Hölder property with exponent γ ∈ (1/p, H).

Exercise 3.8. Fix any H ∈ (0, 1), H ≠ 1/2.
(i) Prove that for α_H = H − 1/2,

lim_{s→+∞} |(1 + s)^{α_H} − s^{α_H}|/s^{α_H−1} = |α_H|.

(ii) Prove (3.7).
Hint: Use (i) to determine the behavior of the integrand as s → +∞.

Exercise 3.9. Let H ∈ (0, 1) \ {1/2} and α_H = H − 1/2.
(i) Let H > 1/2 and t > 0. Prove that

K_H(t, s) = Γ(α_H + 1) C_H^MN (I_−^{α_H} 1_{[0,t)})(s).

(ii) Let H < 1/2 and t > 0. Prove that

K_H(t, s) = Γ(α_H + 1) C_H^MN (D_−^{−α_H} 1_{[0,t)})(s).

(iii) Prove that for t > 0, ∫_ℝ 1_{[0,t)}(s) dB_s^H = B_t^H.
(iv) Let B be a Bm. Use (i) and (ii) to prove that the process B^H defined as

B_t^H := ∫_ℝ (M_−^H 1_{[0,t)})(s) dB_s,  t ∈ ℝ,

is an fBm with Hurst index H (as in Theorem 3.27), where for t < 0, we set 1_{[0,t)}(s) = −1_{(t,0]}(s).

Exercise 3.10. Let H ∈ (1/2, 1), and let B^H be an fBm with Hurst index H.
(i) Let us recall the following relation, given in the proof of [95, Proposition 2.2] for H ∈ (1/2, 1):

∫_{−∞}^{min{s,t}} (s − τ)^{α_H−1} (t − τ)^{α_H−1} dτ = |t − s|^{2H−2} Γ(α_H)Γ(2 − 2H)/Γ(1 − α_H),  t, s ∈ ℝ.  (3.111)

Prove that

‖f‖²_{|L²_H|(ℝ)} = (H(2H − 1)Γ(1 − α_H)Γ(α_H) / ((C_H^MN)² Γ(2 − 2H)(Γ(α_H + 1))²)) ‖f‖²_{L²_H(ℝ)}.  (3.112)

Hint: Start from ‖f‖²_{|L²_H|(ℝ)} and use Fubini's theorem.
(ii) Use (3.12) and (3.9) to show that ‖1_{[0,t)}‖_{L²_H(ℝ)} = ‖1_{[0,t)}‖_{|L²_H|(ℝ)}.
(iii) Use items (i) and (ii) to prove that

Γ(α_H + 1) C_H^MN = (2HΓ(1 − α_H)Γ(α_H + 1)/Γ(2 − 2H))^{1/2}.

Remark: The equality holds also for H < 1/2.
(iv) Use item (i) to prove that |L²_H|(ℝ) ⊂ L²_H(ℝ) and ‖f‖_{|L²_H|(ℝ)} = ‖f‖_{L²_H(ℝ)} for all f ∈ |L²_H|(ℝ).
(v) Prove that nonnegative a. e. f ∈ L²_H(ℝ) belong to |L²_H|(ℝ).

Exercise 3.11. Let H ∈ (0, 1).
(i) Let f ∈ ℱ_H ⊂ L²_H(ℝ). Prove that

ℱ[M_−^H f](z) = C_H^MN Γ(α_H + 1)(−iz)^{1/2−H} ℱ[f](z),  z ∈ ℝ.

Hint: Use items (ii) and (iii) of Exercise 1.13.
(ii) Show that

ℱ[1_{[0,t)}](z) = (1 − e^{−itz})/(2πiz).

(iii) Use items (i) and (ii) to establish that

E[(∫_ℝ 1_{[0,t)}(s) dB_s^H)²] = ((C_H^MN Γ(1 + α_H))²/(2π)) ∫_ℝ |z|^{−1−2H} |1 − e^{−itz}|² dz.

Hint: Apply Plancherel's formula (Theorem A.12) and Theorem 3.16.


(iv) Use items (iii) and (iv) of Exercise 3.9 to prove that

C_H^MN = √(2H sin(πH)Γ(2H))/Γ(α_H + 1).

Hint: Apply integration by parts and (B.9).

Exercise 3.12. Let α > 0.
(i) Prove that for all t_1 < t_2, t ∈ ℝ,

(I_−^α 1_{[t_1,t_2)})(t) = ((t_2 − t)_+^α − (t_1 − t)_+^α)/Γ(α + 1).

(ii) Prove that for all t_1, t_2, t ∈ ℝ with t_1 < t_2,

(D_−^α 1_{[t_1,t_2)})(t) = ((t_2 − t)_+^{−α} − (t_1 − t)_+^{−α})/Γ(⌊α⌋ − α + 1).

Exercise 3.13. Let H, K ∈ (0, 1) with K ≠ H and t ∈ ℝ.
(i) Prove that for K > H,

M_−^H (I_−^{K−H} 1_{[0,t)}) = (C_H^MN Γ(α_H + 1)/(C_K^MN Γ(α_K + 1))) M_−^K 1_{[0,t)}.

(ii) Prove that for K < H,

M_−^H (D_−^{H−K} 1_{[0,t)}) = (C_H^MN Γ(α_H + 1)/(C_K^MN Γ(α_K + 1))) M_−^K 1_{[0,t)}.

(iii) Let B^K be the process defined in (3.24). Use items (i) and (ii) to prove that B^K is an fBm with Hurst index K.

Exercise 3.14. Let H ∈ (0, 1) with H ≠ 1/2, −∞ ≤ a < b ≤ c < d ≤ ∞, f ∈ AC[a, b], and g ∈ AC[c, d].
(i) Assume that a > −∞ and d < ∞. Prove that

E[(∫_a^b f(t) dB_t^H)(∫_c^d g(s) dB_s^H)] = H(2H − 1) ∫_a^b ∫_c^d f(t)g(s)(s − t)^{2H−2} ds dt.  (3.113)

Hint: Without loss of generality, assume that b = 0. To get the result for a general b ∈ ℝ, use the auxiliary process B̃^H = {B̃_t^H, t ≥ 0} defined as B̃_t^H = B_{t+b}^H − B_b^H.
Remark: In fact, (3.113) holds even if f ∈ BV[a, b] and g ∈ BV[c, d].

(ii) Assume that a = −∞, d < ∞, and there exists β > H such that

sup_{t∈(−∞,b)} |t|^β |f(t)| + ∫_{−∞}^b |t|^β |f′(t)| dt < ∞.

Prove that (3.113) still holds.
(iii) Assume that a > −∞, d = ∞, and there exists β > H such that

sup_{t∈(c,∞)} |t|^β |g(t)| + ∫_c^∞ |t|^β |g′(t)| dt < ∞.

Prove that (3.113) still holds.
(iv) Assume that a = −∞, d = ∞, and there exists β > H such that

sup_{t∈(c,∞)} |t|^β |g(t)| + sup_{t∈(−∞,b)} |t|^β |f(t)| + ∫_c^∞ |t|^β |g′(t)| dt + ∫_{−∞}^b |t|^β |f′(t)| dt < ∞.

Prove that (3.113) still holds.

Exercise 3.15. Let λ > 0 and f_t(s) = e^{λs} 1_{(−∞,t]}(s).
(i) Prove that for all α > 0 and s ∈ ℝ,

(I_−^α f_t)(s) = (1_{(−∞,t]}(s) e^{λs}/Γ(α)) ∫_0^{t−s} u^{α−1} e^{λu} du.

(ii) Prove that for all α ∈ (0, 1) and s ∈ ℝ,

(D_−^α f_t)(s) = (λ1_{(−∞,t]}(s) e^{λs}/Γ(1 − α)) ∫_0^{t−s} u^{−α} e^{λu} du − (1_{(−∞,t]}(s) e^{λt}/Γ(1 − α)) (t − s)^{−α}.

(iii) Prove that for all α ∈ (0, 1),

lim_{s→+∞} s^{1−α} e^{−λs} ∫_0^s u^{α−1} e^{λu} du = 1/λ.

(iv) Prove that for all α ∈ (0, 1),

lim_{s→+∞} (λ ∫_0^s u^{−α} e^{λu} du − s^{−α} e^{λs})/(s^{−1−α} e^{λs}) = α/λ.


(v) Now let H ∈ (1/2, 1). Use items (i) and (iii) to prove that f_t ∈ L²_H(ℝ) and

‖f_t‖²_{L²_H(ℝ)} = (α_H C_H^MN)² e^{2λt} ∫_0^{+∞} (e^{−λs} ∫_0^s u^{α_H−1} e^{λu} du)² ds < +∞.  (3.114)

(vi) Let H ∈ (0, 1/2). Use items (ii) and (iv) to prove that f_t ∈ L²_H(ℝ) and

‖f_t‖²_{L²_H(ℝ)} = (C_H^MN)² e^{2λt} ∫_0^{+∞} (λe^{−λs} ∫_0^s u^{α_H} e^{λu} du − s^{α_H})² ds < +∞.  (3.115)

Remark: Note that by integrating by parts in the inner integral of (3.114), formula (3.115) holds even for H > 1/2.

Exercise 3.16. Let H ∈ (0, 1) and λ, σ > 0, and let f(s) = e^{λs} for s ∈ ℝ.
(i) Prove that

∫_0^{+∞} ∫_0^{+∞} e^{−λ(s+u)} |s − u|^{2H} ds du = 2 ∫_0^{+∞} ∫_u^{+∞} e^{−λ(s+u)} (s − u)^{2H} ds du.

(ii) Use item (i) to prove that

∫_0^{+∞} ∫_0^{+∞} e^{−λ(s+u)} |s − u|^{2H} ds du = 2Γ(2H + 1) ∫_0^{+∞} e^{−λu} (I_+^{2H+1} f)(−u) du.

(iii) Use item (ii) and Exercise 1.7 to show that

∫_0^{+∞} ∫_0^{+∞} e^{−λ(s+u)} |s − u|^{2H} ds du = Γ(2H + 1)/λ^{2H+2}.

(iv) Use item (iii) to prove (3.51).

Exercise 3.17. Let t ∈ ℝ, λ > 0, and f_t(s) = e^{λs} 1_{(−∞,t]}(s).
(i) Prove that

ℱ[f_t](z) = e^{(λ−iz)t}/(2π(λ − iz)),  z ∈ ℝ.

(ii) Use item (i) to show that f_t ∈ ℱ_H and

‖f_t‖²_{ℱ_H} = (H sin(πH)Γ(2H) e^{2λt}/π) ∫_ℝ (|s|^{1−2H}/(λ² + s²)) ds < +∞.

Hint: Calculate C_H^MN similarly to item (iv) of Exercise 3.11.
(iii) Use item (ii) and (3.51) to prove that

∫_ℝ (|s|^{1−2H}/(λ² + s²)) ds = π/(λ^{2H} sin(πH)).

Exercise 3.18. Prove that for α ∈ (0, 1) and λ > 0,

lim_{s→+∞} (∫_0^s ∫_{−∞}^0 e^{λ(u+v)} (u − v)^{2(α−1)} du dv)/(s^{2(α−1)} e^{λs}) = 1/λ².

Exercise 3.19. Prove that for α ∈ (0, 1) and λ > 0,

lim_{s→0} (∫_0^s e^{−λu} u^{α−1} du)/s^α = 1/α.

Exercise 3.20. Let λ, σ > 0 and H ∈ (0, 1), let ξ and X be any random variables, and let B^H be an fBm with Hurst index H. Show that the integral equation

X_t^{H,ξ,X} = ξ − λ ∫_0^t (X_s^{H,ξ,X} − X) ds + σB_t^H  (3.116)

admits a pathwise unique solution given by

X_t^{H,ξ,X} = X + e^{−λt} (ξ − X + σ ∫_0^t e^{λu} dB_u^H).  (3.117)

Furthermore, show that for ξ, X ∈ L¹(Ω),

lim_{t→+∞} E[X_t^{H,ξ,X}] = E[X].  (3.118)

Hint: Note that X_t^{H,ξ,X} is a solution of (3.116) if and only if X_t^{H,ξ,X} − X is an fOU process with initial data ξ − X.

Exercise 3.21. Let H ∈ (0, 1) and λ, σ > 0, and let U^H be the fOU process defined in (3.49). Show that there exists a standard Bm B such that

U_t^H = C_H^MN ∫_{−∞}^t (λe^{−λ(t−s)} ∫_0^{t−s} u^{α_H} e^{λu} du − (t − s)^{α_H}) dB_s.
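The explicit formula (3.117) of Exercise 3.20 can be sanity-checked numerically: replacing the driver σB^H by a smooth deterministic path w(t) = sin(t) (so that "dB" becomes cos(u) du) turns (3.116) into an ordinary integral equation that the closed form must satisfy exactly. The parameter values below are arbitrary illustrative choices.

```python
import math

# Deterministic check of Exercise 3.20: with the smooth driver w(t) = sin(t)
# in place of sigma*B^H, the closed form (3.117) must satisfy the integral
# equation (3.116).  lam, xi, xbar are arbitrary illustrative values.
lam, xi, xbar = 2.0, 1.5, 0.3

def X(t):
    # (3.117) with int_0^t e^{lam u} cos(u) du evaluated in closed form
    I = (math.exp(lam * t) * (lam * math.cos(t) + math.sin(t)) - lam) / (1 + lam**2)
    return xbar + math.exp(-lam * t) * (xi - xbar + I)

dt, T = 1e-3, 2.0
n = int(T / dt)
xs = [X(k * dt) for k in range(n + 1)]
integral, max_res = 0.0, 0.0
for k in range(n):
    # trapezoidal accumulation of int_0^t (X_s - xbar) ds
    integral += 0.5 * dt * ((xs[k] - xbar) + (xs[k + 1] - xbar))
    rhs = xi - lam * integral + math.sin((k + 1) * dt)
    max_res = max(max_res, abs(xs[k + 1] - rhs))
```

The residual max_res is of the order of the trapezoidal quadrature error, confirming that (3.117) is the pathwise solution for this driver.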

4 Stochastic processes and fractional differential equations

In Chapter 2, we provided some existence and uniqueness results for fractional differential equations. In this chapter, we study in detail a class of fractional partial differential equations. In particular, we provide some explicit forms of the solutions of such equations. To do this, we use some time-changed stochastic processes obtained by means of stable subordinators and their inverses.

Fractional diffusion equations constitute a widespread tool to describe anomalous diffusions, which can be observed in several different contexts, such as fluid dynamics [238], thermoelasticity [214], biology [24], and even finance [151]. A good introductory survey on anomalous diffusions and fractional Fokker–Planck equations is [167], whereas another really important book on the subject is [162]. On the other hand, time-fractional linear (partial) differential equations arise in the description of some special semi-Markov processes obtained by means of time-changes. For instance, time-fractional Fokker–Planck equations are used to model the subdiffusive behavior of materials near thermal equilibrium [166]; heavy-tailed distributions arising in studying data networks on the Internet [257] lead to the development of the fractional Poisson process [131]; time-changed queues [41] and geometric Brownian motions [248] can be used to describe some financial models; and there are several other applications. Furthermore, since the underlying processes usually arise as scaling limits of continuous-time random walks, they are of interest even if they are not directly connected to a fractional differential equation. This is the case, for instance, of the subordinated/time-changed fBm [126, 260]. Finally, we are interested in perturbing fractional differential equations with some random noise. For this, we can use, for example, fractional stochastic differential equations of the Caputo type, as described in [7, 239].
Another way is to consider directly the integral formulation (2.28) of a Caputo fractional differential equation and then add a stochastic driver. We will follow the latter way.

4.1 Stable distributions and stable processes

In this section, we briefly recall some properties of the class of stable distributions, which will play a prominent role throughout the whole chapter. We mainly refer to [185, 233, 269]. Given a random variable X on (Ω, Σ, P), denote by φ_X(z) = E[e^{izX}], z ∈ ℝ, its characteristic function.

Definition 4.1. We say that the distribution of a random variable X is stable (or X is a stable random variable) if there exist four constants α ∈ (0, 2], β ∈ [−1, 1], γ ≥ 0, and

https://doi.org/10.1515/9783110780017-004

δ ∈ ℝ such that for all z ∈ ℝ,

log(φ_X(z)) = −γ^α|z|^α [1 − iβ(sign(z)) tan(πα/2)] + iδz  if α ≠ 1,
log(φ_X(z)) = −γ|z| [1 + (2iβ/π)(sign(z)) log(|z|)] + iδz  if α = 1.  (4.1)

In this case, we write X ∼ S(α, β, γ, δ).

Recall the following characterizations of stable distributions.

Theorem 4.2. Let X be a random variable. The following properties are equivalent:
(i) X is a stable random variable.
(ii) There exist four constants α ∈ (0, 2], β ∈ [−1, 1], γ ≥ 0, and δ_0 ∈ ℝ such that for all z ∈ ℝ,

log(φ_X(z)) = −γ^α|z|^α [1 + iβ(sign(z))(|γz|^{1−α} − 1) tan(πα/2)] + iδ_0 z  if α ≠ 1,
log(φ_X(z)) = −γ|z| [1 + (2iβ/π)(sign(z)) log(γ|z|)] + iδ_0 z  if α = 1.  (4.2)

(iii) There exist a constant α ∈ (0, 2] and a sequence of real numbers (a_n)_{n≥1} such that for all n ≥ 1,

∑_{k=1}^n X_k =ᵈ n^{1/α} X + a_n,

where X_1, . . . , X_n are independent copies of X.
(iv) There exist constants α ∈ (0, 2] and a_1, a_2 ∈ ℝ such that

X_1 + X_2 =ᵈ 2^{1/α} X + a_1  and  X_1 + X_2 + X_3 =ᵈ 3^{1/α} X + a_2,

where X_1, X_2, X_3 are independent copies of X.
(v) There exist a constant α ∈ (0, 2] and a function B : ℝ² → ℝ such that for all b_1, b_2 ∈ ℝ,

b_1 X_1 + b_2 X_2 =ᵈ (b_1^α + b_2^α)^{1/α} X + B(b_1, b_2),

where X_1, X_2 are independent copies of X.
(vi) There exist a sequence of i. i. d. random variables (Y_n)_{n≥1}, a sequence of positive real numbers (c_n)_{n≥1}, and a sequence of real numbers (d_n)_{n≥1} such that

(1/c_n) ∑_{k=1}^n Y_k + d_n →ᵈ X.


Furthermore, if X ∼ S(α, β, γ, δ), then the constants are related as follows:

δ_0 = δ + γβ tan(πα/2)  if α ≠ 1,  δ_0 = δ + (2βγ/π) log(γ)  if α = 1;
a_n = δ(n − n^{1/α})  if α ≠ 1,  a_n = (2γβ/π) n log(n)  if α = 1.  (4.3)

Finally, if γ = 0, then δ = δ_0, and P(X = δ) = 1.

If X ∼ S(α, β, γ, δ), then we say that α is the stability exponent of X and refer to X as an α-stable random variable, β is its skewness parameter, γ is its scale parameter, and δ is its location parameter. From now on we assume that γ > 0. The case α = 2 is trivial.

Proposition 4.3. X ∼ S(2, β, γ, δ) if and only if X is a Gaussian random variable with mean δ and variance 2γ².

In the rest of the section, we assume that α ∈ (0, 2) unless otherwise specified. The following properties can be directly verified by means of (4.1).

Theorem 4.4. Let α ∈ (0, 2), β_X ∈ [−1, 1], γ_X > 0, δ_X ∈ ℝ, and X ∼ S(α, β_X, γ_X, δ_X). Then
(i) −X ∼ S(α, −β_X, γ_X, −δ_X);
(ii) if α ≠ 1, then aX ∼ S(α, β_X, aγ_X, aδ_X) for all a > 0;
(iii) if α = 1, then aX ∼ S(1, β_X, aγ_X, aδ_X − (2β_X γ_X/π) a log(a)) for all a > 0;
(iv) X + a ∼ S(α, β_X, γ_X, δ_X + a) for all a ∈ ℝ;
(v) if Y ∼ S(α, β_Y, γ_Y, δ_Y) is independent of X, then X + Y ∼ S(α, β, γ, δ), where

β = (β_X γ_X^α + β_Y γ_Y^α)/(γ_X^α + γ_Y^α),  γ = (γ_X^α + γ_Y^α)^{1/α},  and  δ = δ_X + δ_Y.
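The parameter arithmetic of item (v) can be verified mechanically on the characteristic exponents (4.1): for independent stable summands the log-characteristic functions add, and the sum must match the exponent with the combined parameters. A minimal sketch (case α ≠ 1; the numerical values are arbitrary):

```python
import math

# Check of Theorem 4.4(v) on the characteristic exponents (4.1), case
# alpha != 1: log-characteristic functions of independent summands add and
# match the exponent with the combined parameters.
def log_phi(z, alpha, beta, gamma, delta):
    if z == 0:
        return 0j
    s = 1.0 if z > 0 else -1.0
    return (-(gamma**alpha) * abs(z)**alpha
            * (1 - 1j * beta * s * math.tan(math.pi * alpha / 2))
            + 1j * delta * z)

alpha = 1.5
bX, gX, dX = 0.7, 1.2, -0.4
bY, gY, dY = -0.2, 0.8, 1.1
# combined parameters per Theorem 4.4(v)
g = (gX**alpha + gY**alpha) ** (1 / alpha)
b = (bX * gX**alpha + bY * gY**alpha) / (gX**alpha + gY**alpha)
d = dX + dY
err = max(abs(log_phi(z, alpha, bX, gX, dX) + log_phi(z, alpha, bY, gY, dY)
              - log_phi(z, alpha, b, g, d))
          for z in (-2.0, -0.5, 0.3, 1.7))
```

The discrepancy err is zero up to floating-point rounding, since the identity β γ^α = β_X γ_X^α + β_Y γ_Y^α holds exactly.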

Let us also recall that stable random variables are infinitely divisible and hence admit a characteristic triplet (see Appendix C.4). For X ∼ S(α, β, γ, δ), we denote by (b_X, q_X, ν_X) the characteristic triplet of X. Furthermore, it is clear that |φ_X(z)| ≤ e^{−γ^α|z|^α}, and thus |⋅|ⁿ φ_X ∈ L¹(ℝ) for all n ∈ ℕ. This shows that every stable random variable X admits an infinitely differentiable density (see Proposition A.7). Concerning the behavior of the tails of X (and its moments), we recall the following result.

Proposition 4.5. Let X ∼ S(α, β, γ, δ), α ∈ (0, 2). Then E[|X|^p] < ∞ if and only if p ∈ (0, α).

Definition 4.6. A random variable X ∼ S(α, β, γ, δ) is said to be strictly stable if a_n = 0 for all n ∈ ℕ, where a_n is defined in item (iii) of Theorem 4.2.

The fact that X is strictly stable can also be expressed by means of the parameters α, β, γ, δ, but according to (4.3), we have to distinguish two cases. On the one hand, if α ≠ 1, then X is strictly stable if and only if δ = 0; on the other hand, if α = 1, then X is strictly stable if and only if β = 0.

Definition 4.7. A random variable X ∼ S(α, β, γ, δ) is said to be symmetric if X =ᵈ −X or, equivalently, β = δ = 0. Finally, X ∼ S(α, β, γ, δ) is said to be totally skew-symmetric if β = ±1 and δ = 0.

We recall here the main properties of totally skew-symmetric stable random variables.

Theorem 4.8. Let X ∼ S(α, 1, γ, 0). Then the following properties hold.
(i) For all z ∈ ℝ and α ≠ 1,

log(φ_X(z)) = −(γ^α/cos(πα/2)) (−iz)^α.

(ii) For all z > 0,

E[e^{−zX}] = exp(−(γ^α/cos(πα/2)) z^α)  if α ≠ 1,  E[e^{−zX}] = exp((2γ/π) z log(z))  if α = 1.

(iii) If α ∈ (0, 1), then P(X < 0) = 0, and P(X > t) > 0 for all t > 0.
(iv) q_X = 0, ν_X(dx) = ν_X(x)dx, and ν_X(x) = 0 for all x ≤ 0,

b_X = αγ^α/(Γ(2 − α)|cos(απ/2)|)  if α ≠ 1,  b_X = 2(γ_EM − 1)γ/π  if α = 1,

ν_X(x) = α(1 − α)γ^α/(Γ(2 − α) cos(απ/2) x^{1+α})  if α ≠ 1,  ν_X(x) = 2γ/(πx²)  if α = 1,

for x > 0, where γ_EM is the Euler–Mascheroni constant.

Furthermore, if X ∼ S(α, β, γ, δ) and Y_1, Y_2 ∼ S(α, 1, γ, 0) are independent, then for α ≠ 1,

X =ᵈ ((1 + β)/2)^{1/α} Y_1 − ((1 − β)/2)^{1/α} Y_2 + δ,

whereas for α = 1,

X =ᵈ ((1 + β)/2) Y_1 − ((1 − β)/2) Y_2 + γ(((1 + β)/π) log((1 + β)/2) − ((1 − β)/π) log((1 − β)/2)) + δ.

We need to extend the definition of stable distribution to random vectors.

Definition 4.9. A random vector X = (X_1, . . . , X_d) is said to be a stable random vector if there exist α ∈ (0, 2] and a sequence (a_n)_{n∈ℕ} of vectors in ℝᵈ such that for all n ≥ 2,

∑_{k=1}^n X_k =ᵈ n^{1/α} X + a_n,


where X_k, k = 1, . . . , n, are independent copies of X, or, equivalently, there exist α ∈ (0, 2] and a function B : ℝ² → ℝᵈ such that for all a_1, a_2 > 0,

a_1 X_1 + a_2 X_2 =ᵈ (a_1^α + a_2^α)^{1/α} X + B(a_1, a_2),

where X_1, X_2 are two independent copies of X. Again, we say that X is strictly stable if a_n = 0 for all n ≥ 2, whereas it is symmetric stable if X =ᵈ −X.

As for the Gaussian case, the stable distribution is preserved for linear combinations of the components of X, whereas the converse is not always true.

Theorem 4.10. Let X = (X_1, . . . , X_d) be a random vector.
(i) If X is α-stable, then for all a_1, . . . , a_d ∈ ℝ, the random variable X = ∑_{k=1}^d a_k X_k is α-stable.
(ii) X is strictly α-stable if and only if for all a_1, . . . , a_d ∈ ℝ, X = ∑_{k=1}^d a_k X_k is strictly α-stable.
(iii) X is symmetric α-stable if and only if for all a_1, . . . , a_d ∈ ℝ, X = ∑_{k=1}^d a_k X_k is symmetric α-stable.
(iv) X is α-stable with α ≥ 1 if and only if for all a_1, . . . , a_d ∈ ℝ, X = ∑_{k=1}^d a_k X_k is α-stable with α ≥ 1.
(v) For each α < 1, there exists a random vector Y that is not α-stable but such that for all a_1, . . . , a_d ∈ ℝ, Y = ∑_{k=1}^d a_k Y_k is α-stable.

264 � 4 Stochastic processes and fractional differential equations If X is a 2-stable Lévy motion, then we can write Xt = B2γ2 t for all t > 0, where B is a standard Brownian motion. In particular, it is clear that every 2-stable Lévy motion admits a. s. continuous trajectories. In general, for α ∈ (0, 2), we cannot conclude the same. However, items (i), (ii), and (iv) of the definition of an α-stable Lévy motion tell us that every α-stable Lévy motion X is a Lévy process (see Appendix C.4) and, in particular, X is a. s. càdlàg. The main properties of α-stable Lévy motions are given in the following theorem. Theorem 4.14. Let X = {Xt , t ≥ 0} be an α-stable Lévy motion. (i) If α ≠ 1, then X is a strictly stable process. (ii) If either α ≠ 1, or α = 1 and β = 0, then X is α−1 -self-similar, i. e., for all n ∈ ℕ, a > 0, d

1

and t1 , . . . , tn ≥ 0, we have (Xat1 , . . . , Xatn ) = a α (Xt1 , . . . , Xtn ). (iii) Let α ∈ (0, 1). Then every α-stable process Y that is α−1 -self-similar and has stationary increments is an α-stable Lévy motion. (iv) For each α ∈ [1, 2), there exists an α-stable process Y that is α−1 -self-similar and has stationary increments but is not an α-stable Lévy motion. In Sections 4.2 and 4.3, we will focus in detail on some specific stable Lévy motions, the totally skew-symmetric case β = 1 and the symmetric case β = 0. In both cases, we will show the connection between such processes and some fractional partial differential equations.

4.2 Stable subordinators and Marchaud fractional derivatives Let us consider the case β = 1 and α ∈ (0, 1). Item (iii) of Theorem 4.8 claims that every α-stable Lévy motion X with α ∈ (0, 1) and β = 1 is a. s. positive. Furthermore, by the def1 inition of α-stable Lévy motions, for all 0 ≤ s < t, we have Xt − Xs ∼ S (α, 1, γ(t − s) α , 0). Therefore, again by item (iii) of Theorem 4.8, Xt − Xs > 0 a. s., and thus X is a. s. strictly increasing (see Proposition C.11). Definition 4.15. An α-stable subordinator is an α-stable Lévy motion with α ∈ (0, 1) and 1

β = 1. In this section, we consider γ = (cos ( πα )) α and denote by S α the α-stable subor2 dinator with this γ. Since S α is a Lévy process, it is clear that its characteristic function and Laplace transform are given by α

log (E [eizSt ]) = −t(−iz)α ,

z ∈ ℝ,

α

log (E [e−zSt ]) = −tzα ,

z > 0,

t ≥ 0.

(4.4)
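The Laplace transform in (4.4) can be checked by Monte Carlo. A standard sampler for a one-sided α-stable random variable with E[e^{−zS}] = e^{−z^α} is Kanter's representation (quoted here from the general simulation literature, not from this book): S = (sin(αU)/(sin U)^{1/α}) (sin((1 − α)U)/E)^{(1−α)/α}, with U uniform on (0, π) and E a unit exponential.

```python
import math
import random

# Monte Carlo check of the Laplace transform (4.4) at t = 1 via Kanter's
# sampler for a one-sided alpha-stable variable with
# E[exp(-z S)] = exp(-z^alpha) (a standard simulation formula, assumed here).
def stable_subordinator_sample(alpha, rng):
    u = rng.uniform(0.0, math.pi)
    e = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

rng = random.Random(42)
alpha, z, n = 0.7, 1.0, 200_000
mean = sum(math.exp(-z * stable_subordinator_sample(alpha, rng))
           for _ in range(n)) / n
# mean should be close to exp(-z**alpha) = exp(-1)
```

With 200 000 samples the Monte Carlo error is of order 10⁻³, so the empirical mean agrees with e^{−z^α} to two or three digits.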

Denote by g_α(t, ⋅) the probability density function of S_t^α, t > 0, and put g_α(⋅) := g_α(1, ⋅). Then the α⁻¹-self-similarity of S^α supplies that

g_α(t, x) = t^{−1/α} g_α(t^{−1/α} x),  x > 0,  t > 0.  (4.5)


The function g_α admits the following asymptotics according to [98, Equations (2.2) and (2.3)] (see also [150] for the derivation and [269, Section 2.5] for some further results):

g_α(x) ∼ (1/√(2πα(1 − α))) (α/x)^{(2−α)/(2−2α)} exp(−|1 − α| (α/x)^{α/(1−α)})  as x ↓ 0,  (4.6)

g_α(x) ∼ (α/Γ(1 − α)) x^{−α−1}  as x → ∞.  (4.7)

A formula for the Laplace inversion of the function e^{−z^α} is proposed in [212]. Applying it to (4.4) (as in [202]), we get a series representation for g_α(t, ⋅):

g_α(t, x) = (1/π) ∑_{j=1}^{+∞} ((−1)^{j+1}/(j! x^{1+αj})) t^j Γ(1 + αj) sin(παj)  for x > 0,  and  g_α(t, x) = 0  for x ≤ 0.  (4.8)
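For α = 1/2, the series (4.8) can be compared with the closed-form density of S_t^{1/2} (the Lévy distribution, g_{1/2}(t, x) = t (2√π x^{3/2})⁻¹ e^{−t²/(4x)} under the normalization (4.4)), which gives a quick numerical check of the representation:

```python
import math

# Check of the series (4.8) against the explicit alpha = 1/2 density
# g_{1/2}(t, x) = t / (2 sqrt(pi) x^{3/2}) * exp(-t^2 / (4 x)), x > 0.
def g_series(alpha, t, x, terms=80):
    if x <= 0:
        return 0.0
    s = 0.0
    for j in range(1, terms + 1):
        s += ((-1) ** (j + 1) * t**j / (math.factorial(j) * x ** (1 + alpha * j))
              * math.gamma(1 + alpha * j) * math.sin(math.pi * alpha * j))
    return s / math.pi

t, x = 1.0, 2.0
series_val = g_series(0.5, t, x)
exact_val = t / (2 * math.sqrt(math.pi) * x**1.5) * math.exp(-t * t / (4 * x))
```

Since the terms decay superexponentially for α < 1, a few dozen terms already match the closed form to machine precision at moderate x.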

Such a representation can be extended to any stable random variable, and other integral representations can be achieved. The interested reader can find more information in [269, Chapter 2].

The main aim of this section is to provide the solution of the (one-sided) fractional diffusion equation

∂u/∂t (t, x) = −(ᴹD_+^α u)(t, x),  t > 0, x ∈ ℝ,
u(0, x) = f(x),  x ∈ ℝ,  (4.9)

for some f ∈ 𝒞₀²(ℝ), where the Marchaud derivative is applied to the x variable, i. e.,

(ᴹD_+^α u)(t, x) = (α/Γ(1 − α)) ∫_{ℝ⁺} (u(t, x) − u(t, x − y)) y^{−α−1} dy.
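On smooth exponentials the Marchaud derivative acts as a fractional power: (ᴹD_+^α e^{c⋅})(x) = c^α e^{cx} for c > 0, which follows from ∫_0^∞ (1 − e^{−cy}) y^{−1−α} dy = Γ(1 − α)c^α/α. A direct quadrature of the defining integral reproduces this identity (the cutoff, substitution, and step size below are ad hoc numerical choices):

```python
import math

# Quadrature check that (M D_+^alpha) e^{c x} = c^alpha e^{c x} at x = 0 for
# alpha = 1/2, c = 1.  Cutoff, substitution and step size are ad hoc choices.
alpha, c, cutoff, n = 0.5, 1.0, 50.0, 200_000

# integral of (1 - e^{-c y}) y^{-1-alpha} over (0, cutoff); the substitution
# y = s^2 removes the integrable singularity at y = 0
smax = math.sqrt(cutoff)
h = smax / n
integral = 0.0
for k in range(n):
    s = (k + 0.5) * h
    y = s * s
    integral += (1.0 - math.exp(-c * y)) * y ** (-1.0 - alpha) * 2.0 * s * h
integral += cutoff ** (-alpha) / alpha   # tail, where 1 - e^{-c y} is ~ 1
value = alpha / math.gamma(1.0 - alpha) * integral   # should be ~ c**alpha
```

The computed value agrees with c^α = 1 up to the quadrature and tail-truncation error.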

To solve (4.9), we use some standard results in semigroup theory. Namely, we will interpret (4.9) as an abstract Cauchy problem and will use the process S^α to determine its solution. To do this, we first give some fundamentals of semigroup theory and abstract Cauchy problems.

Recall that given a Banach space (𝔹, ‖⋅‖_𝔹) and a function u : ℝ⁺₀ → 𝔹 (resp., u : ℝ⁺ → 𝔹), we say that u ∈ 𝒞(ℝ⁺₀; 𝔹) (resp., u ∈ 𝒞(ℝ⁺; 𝔹)) if

lim_{δ→0} ‖u(t + δ) − u(t)‖_𝔹 = 0,  t ≥ 0 (resp., t > 0),

and we say that u ∈ 𝒞¹(ℝ⁺; 𝔹) if there exists a function du/dt ∈ 𝒞(ℝ⁺; 𝔹) such that

lim_{δ→0} ‖(u(t + δ) − u(t))/δ − du(t)/dt‖_𝔹 = 0,  t > 0.  (4.10)

Such a function du/dt is called the derivative of u. Note that we do not require du/dt to exist at 0. Indeed, for the first equation in (4.9) to make sense, we only need such a derivative to exist for t > 0 (and not for t = 0).

A family of linear operators {T_t}_{t≥0} with T_t : 𝔹 → 𝔹 is said to be a strongly continuous contraction semigroup or, shortly, a Feller semigroup if
(i) T_0 = I, the identity operator on 𝔹;
(ii) T_{s+t} = T_s T_t for all t, s ≥ 0;
(iii) ‖T_t x‖_𝔹 ≤ ‖x‖_𝔹 for all x ∈ 𝔹;
(iv) lim_{t→0} ‖T_t x − x‖_𝔹 = 0 for all x ∈ 𝔹.

For a Feller semigroup {T_t}_{t≥0} over 𝔹, we consider the linear space

Dom(A) := {x ∈ 𝔹 : ∃ϕ ∈ 𝔹, lim_{t→0} ‖(T_t x − x)/t − ϕ‖_𝔹 = 0}

and define the operator A : Dom(A) ⊆ 𝔹 → 𝔹 as

Ax = lim_{t→0} (T_t x − x)/t.

This operator is called the generator of {T_t}_{t≥0} with domain Dom(A). For the operator A, we can consider the following abstract Cauchy problem:

du(t)/dt = Au(t),  t > 0;  u(0) = x,  (4.11)

where x ∈ Dom(A). We say that u : ℝ⁺₀ → 𝔹 is a strong solution of (4.11) if
(i) u ∈ 𝒞(ℝ⁺₀; 𝔹) ∩ 𝒞¹(ℝ⁺; 𝔹);
(ii) u(t) ∈ Dom(A) for all t > 0, and Au ∈ 𝒞(ℝ⁺; 𝔹);
(iii) u satisfies (4.11).

Precisely, the function u : ℝ⁺₀ → 𝔹 defined as u(t) = T_t x is the unique strong solution of (4.11) (see, for example, Theorem C.16). For more detail on the theory of semigroups, see Appendix C.5.

In our case, we consider (4.9) as an abstract Cauchy problem of the form (4.11) on 𝔹 = 𝒞₀(ℝ). Note that u ∈ 𝒞(ℝ⁺₀; 𝒞₀(ℝ)) implies that u(t) ∈ 𝒞₀(ℝ) for all t ≥ 0. For all t ≥ 0 and x ∈ ℝ, instead of (u(t))(x), we write simply u(t, x). For this reason, we can rewrite the definition of a strong solution in the specific case of (4.9).

Definition 4.16. We say that u is a strong solution of (4.9) if
(a) for all x ∈ ℝ, u(⋅, x) ∈ 𝒞¹(ℝ⁺) with ∂u/∂t ∈ 𝒞(ℝ⁺ × ℝ); for all t ∈ ℝ⁺, u(t, ⋅) ∈ 𝒞₀²(ℝ); furthermore, u ∈ 𝒞(ℝ⁺₀; 𝒞₀(ℝ)) ∩ 𝒞¹(ℝ⁺; 𝒞₀(ℝ));
(b) for all t > 0, the derivative (ᴹD_+^α u)(t, ⋅) is well defined, belongs to 𝒞₀(ℝ), and (ᴹD_+^α u)(⋅, ⋅) ∈ 𝒞(ℝ⁺; 𝒞₀(ℝ));
(c) (4.9) holds pointwise for all t > 0 and x ∈ ℝ.


Here we required some additional regularity: precisely, instead of assuming that u(t) is in the domain of the Marchaud derivative, we assume that u(t, ⋅) ∈ 𝒞₀²(ℝ) for all t > 0. This is due to the fact that we will consider −ᴹD_+^α as the generator of a suitable Feller semigroup. To do this, we can consider the family {T_t}_{t≥0} of operators on 𝒞₀(ℝ) defined as

(T_t f)(x) = E[f(S_t^α + x)],  x ∈ ℝ.

It is well known that this is a Feller semigroup (see Appendix C.5) and hence admits a generator A. We refer to such an operator directly as the generator of S α . Now we will show that the generator A of S α coincides with −M Dα+ on 𝒞02 (ℝ) ⊂ Dom(A). Theorem 4.17. Let A be the generator of S α . Then for every f ∈ 𝒞02 (ℝ), we have Af = −M Dα+ f . Proof. Recall that α ∈ (0, 1). We know from item (iv) of Theorem 4.8 and the characterization of the generator of a Lévy process on 𝒞02 (ℝ) (see item (iii) of Theorem C.17) that for all f ∈ 𝒞02 (ℝ) and x ∈ ℝ, f (x − y) − f (x) + yf ′ (x)1[0,1] (y) α α(1 − α) f ′ (x) + dy ∫ Γ(2 − α) Γ(2 − α) y1+α +∞

(Af )(x) = −

0

+∞

1

0

0

f (x − y) − f (x) α(1 − α)f ′ (x) α α(1 − α) =− f ′ (x) + dy + ∫ ∫ y−α dy Γ(2 − α) Γ(2 − α) Γ(2 − α) y1+α +∞

=

f (x − y) − f (x) α dy = − (M Dα+ f ) (x), ∫ Γ(1 − α) y1+α 0

according to the definition of Marchaud derivative (1.55). Applying the standard arguments of the theory of semigroups (see Theorem C.16) and Theorem 4.17, we get the following result, which is also proved in [162]. 1

Theorem 4.18. Let S α be an α-stable subordinator with γ = (cos ( απ )) α , and let f ∈ 𝒞02 (ℝ). 2 Then the function u(t, x) = E [f (Stα + x)] = ∫ f (y + x)gα (t, y)dy,

t ≥ 0,

x ∈ ℝ,



is the unique strong solution of (4.9). We only need to show that u(t, ⋅) ∈ 𝒞02 (ℝ) for all t > 0, but it follows from the dominated convergence theorem. In fact, we can prove that the function gα : ℝ+ × ℝ → ℝ satisfies the first equation in (4.9). To do this, we need the following simple lemma, which clearly follows from the inversion formula of the Fourier transform (see (A.7)).

268 � 4 Stochastic processes and fractional differential equations Lemma 4.19. Let f : ℝ+ × ℝ → ℝ be such that f (t, ⋅) ∈ L1 (ℝ) for all t > 0. Set ̂f (t, z) = ℱ [f (t, ⋅)](z) for z ∈ ℝ and assume that ̂f (t, ⋅) ∈ L1 (ℝ). For t > 0, h > −t, ̂ ̂ and z ∈ ℝ, define Δ ̂f (t, z) = f (t+h,z)−f (t,z) and assume that ̂f (t, ⋅) admits a derivative in h

1

L (ℝ), i. e., there exists a function

h 𝜕̂f 𝜕t

: ℝ+ × ℝ → ℝ such that for all t > 0,

󵄩󵄩 󵄩 𝜕̂f (t, ⋅) 󵄩󵄩󵄩 󵄩 󵄩󵄩 lim 󵄩󵄩󵄩󵄩Δh ̂f (t, ⋅) − = 0. h→0 󵄩 𝜕t 󵄩󵄩󵄩L1 (ℝ) 󵄩 Then for all t > 0 and x ∈ ℝ, the partial derivative 𝜕f (t, ⋅) 𝜕t

belongs to 𝒞0 (ℝ) with

𝜕f (t, x) is well-defined, and for all t 𝜕t

> 0,

󵄩󵄩 𝜕f (t, ⋅) 󵄩󵄩󵄩 󵄩󵄩 lim 󵄩󵄩󵄩󵄩Δh f (t, ⋅) − = 0. h→0 󵄩 𝜕t 󵄩󵄩𝒞0 (ℝ) (t,⋅) Furthermore, if 𝜕f𝜕t ∈ L2 (ℝ), then t > 0 and z ∈ ℝ. ̂

𝜕f (t, ⋅) 𝜕t

∈ L2 (ℝ), and ℱ [ 𝜕f (t, ⋅)] (z) = 𝜕t

𝜕̂f (t, z) 𝜕t

for all

Now we can prove the following result. Theorem 4.20. The function gα : ℝ+ × ℝ+ → ℝ satisfies the equation 𝜕gα (t, x) = − (M Dα+ gα ) (t, x) 𝜕t

(4.12)

for a. a. (t, x) ∈ ℝ+ × ℝ. Furthermore, gα (t, x) weakly converges as t ↓ 0 to the Dirac delta measure centered at 0, i. e., for all f ∈ 𝒞b (ℝ) and x ∈ ℝ, lim ∫ gα (t, y)f (x + y)dy = f (x). t↓0

(4.13)



Proof. Let us first prove (4.13). Indeed, every f ∈ 𝒞b (ℝ) is bounded, and we can use the dominated convergence theorem to get lim ∫ gα (t, y)f (x + y)dy = lim E [f (Stα + x)] = E [f (S0α + x)] = f (x), t↓0

t↓0



where we also used the fact that S α is a. s. càdlàg and S0α = 0. α To prove (4.12), recall that ĝα (t, z) := ℱ [gα (t, ⋅)](z) = e−t(iz) according to (4.4). Therefore it follows from Proposition 1.89 that for all t > 0, gα (t, ⋅) ∈ W 1,2 (ℝ),

and

(M Dα+ gα ) (t, ⋅) ∈ L2 (ℝ).

4.2 Stable subordinators and Marchaud fractional derivatives

Moreover, note that

𝜕ĝα (t,z) 𝜕t



269

α

= (iz)α e−t(iz) . Let t > 0, h > −t, and z ∈ ℝ. Then α

α

ĝ (t + h, z) − ĝα (t, z) e−(t+h)(iz) − e−t(iz) Δh ĝα (t, z) = α = . h h Furthermore, α 󵄨󵄨 −(t+h)(iz)α 󵄨󵄨 󵄨󵄨 −h(iz)α 󵄨 − e−t(iz) − 1 + h(iz)α 󵄨󵄨󵄨 󵄨󵄨 e α −t(iz)α 󵄨󵄨 α −t cos( πα )|z|α 󵄨󵄨 e 2 󵄨 󵄨 󵄨󵄨 󵄨󵄨 dz. + (iz) e ∫ 󵄨󵄨 󵄨󵄨 dz = ∫ |z| e 󵄨󵄨 󵄨󵄨 h h(iz)α 󵄨󵄨 󵄨󵄨 󵄨 󵄨 ℝ ℝ (4.14) Now recall that for all z ∈ ℂ,

+∞ |ez − 1 − z| 1 +∞ |z|k |z|k ≤ ≤ |z| ∑ ≤ |z|e|z| , ∑ |z| |z| k=2 k! (k + 2)! k=0

hence the right-hand side of (4.14) can be bounded from above as follows: α 󵄨󵄨 −(t+h)(iz)α 󵄨 πα α 󵄨󵄨 α − e−t(iz) 󵄨e + (iz)α e−t(iz) 󵄨󵄨󵄨󵄨 dz ≤ |h| ∫ |z|α+1 e−(t cos( 2 )−|h|)|z| dz. ∫ 󵄨󵄨󵄨󵄨 h 󵄨󵄨 󵄨󵄨 ℝ ℝ

Without loss of generality, we can assume that |h|
0, x ∈ ℝ,

(4.15)

It is clear that (4.15) is indeed a formal version of Theorem 4.20, since the initial data δ{0} (dx) is not even a function. The exact sense of (4.15) is given using generalized function theory. For more information, see [251].

270 � 4 Stochastic processes and fractional differential equations

4.3 Symmetric α-stable Lévy motion and fractional Laplacian Now let us consider the class of symmetric α-stable Lévy motions for α ∈ (0, 2], i. e., α-stable Lévy motions X α := {Xtα , t ≥ 0} with β = 0. Their characteristic function has the form E [eizXα (t) ] = e−tγ

α

|z|α

according to (4.1). For simplicity, we assume throughout this section that γ = 1. Denote by X α the corresponding symmetric α-stable Lévy motion and by pα (t, ⋅) the probability density function of Xtα , t > 0. Consider the following fractional partial differential equation: α

𝜕u d2 2 { { { (t, x) = − ((− 2 ) u) (t, x), 𝜕t dx { { { {u(0, x) = f (x),

t > 0, x ∈ ℝ,

(4.16)

x ∈ ℝ,

with f ∈ 𝒞02 (ℝ). It is called the fractional heat equation. Similarly to (4.9), we can define the notion of a strong solution of (4.16). Definition 4.21. We say that u is a strong solution of (4.16) if (a) For all x ∈ ℝ, u(⋅, x) ∈ 𝒞 1 (ℝ+ ) with 𝜕u ∈ 𝒞 (ℝ+ × ℝ). For all t ∈ ℝ+ , u(t, ⋅) ∈ 𝒞02 (ℝ). 𝜕t + 1 + Furthermore, u ∈ 𝒞 (ℝ0 ; 𝒞0 (ℝ)) ∩ 𝒞 (ℝ ; 𝒞0 (ℝ)). 2

α 2

d (b) For all t > 0, ((− dx 2 ) u) (t, ⋅) is well-defined and belongs to 𝒞0 (ℝ). Furthermore, 2

α 2

d + ((− dx 2 ) u) (⋅, ⋅) ∈ 𝒞 (ℝ ; 𝒞0 (ℝ)).

(c) (4.16) holds pointwise.

Again, we want to use some standard reasoning from semigroup theory, and in this connection, we need to define a generator of X α over 𝒞02 (ℝ). Theorem 4.22. Let X α be a symmetric α-stable Lévy motion with α < 2 and γ = 1, and let 2

α 2

d A be the generator of X α . Then for all f ∈ 𝒞02 (ℝ), we have Af = − (− dx 2) f.

Proof. Let us first determine the characteristic triplet (bX α , qX α , νX α ) of X1α . This is done FL in Exercise 4.4. For all α ∈ (0, 2), bX α = qX α = 0, whereas νX α (dx) = Cα/2 |x|−1−α dx, where FL 2 Cα/2 is defined in (1.130) (see Exercise 1.17). Hence for every f ∈ 𝒞0 (ℝ), we have that the generator A can be evaluated as Af (x) =

FL Cα/2

∫ ℝ\{0}

f (x − y) − f (x) + yf ′ (x)1[−1,1] (y) |y|1+α

α

d2 2 dy = − ((− 2 ) f ) (x), dx

(4.17)

where the last equality follows by Proposition 1.76; see also Remarks 1.77 and 1.80.

4.3 Symmetric α-stable Lévy motion and fractional Laplacian

� 271

Arguing similarly to Theorem 4.18 and using standard arguments of semigroup theory (see Theorem C.16) and Theorem 4.22, we get the following result, which is also stated in [162]. Theorem 4.23. Let X α be a symmetric α-stable Lévy motion with γ = 1, and let f ∈ 𝒞02 (ℝ). Then the function u(t, x) = E [f (Xtα + x)] = ∫ f (y + x)pα (t, y)dy,

t > 0,

x ∈ ℝ,



with u(0, x) = f (x) for x ∈ ℝ is the unique strong solution of (4.16). Furthermore, arguing in the exact same way as in Theorem 4.20, we can prove that pα solves a fractional partial differential equation. Theorem 4.24. The function pα : ℝ+ × ℝ satisfies the equation α

𝜕pα d2 2 (t, x) = − ((− 2 ) pα ) (t, x) 𝜕t dx for a. a. (t, x) ∈ ℝ+ × ℝ. Furthermore, for all f ∈ 𝒞b (ℝ) and x ∈ ℝ, lim ∫ pα (t, y)f (x + y)dy = f (x). t↓0



Again, in the literature, the previous theorem is usually stated as follows: pα solves the equation α

𝜕p (t, x) d2 2 { { { α = − ((− 2 ) pα ) (t, x), t > 0, x ∈ ℝ, 𝜕t dx { { { {pα (0, x)dx = δ{0} (dx), where δ{0} is the Dirac delta measure centered at {0}. The function pα is also called the fractional heat kernel. Its asymptotic behavior has been widely studied in the literature. In particular, in [31, 49] the following two-sided bounds have been obtained. Theorem 4.25. There exist two constants C1 < C2 depending only on α ∈ (0, 2) such that for all t ∈ (0, 1] and x ∈ ℝ, 1

C1 min {t − α ,

t |x|

1

1 α

} ≤ pα (t, x) ≤ C2 min {t − α ,

t

1

|x| α

}.

(4.18)

The two-sided bounds for the short-time asymptotics given in (4.18) underline one of the main differences between the cases α < 2 and α = 2: if α < 2, then for fixed

272 � 4 Stochastic processes and fractional differential equations x2

1 x ≠ 0, pα (t, x) ≈ t as t → 0, whereas if α = 2, then p(t, x) := p2 (t, x) = √4πt e− 4t decays exponentially fast as t → 0. It is also possible to consider a killed (upon exiting a certain open set) symmetric α-stable Lévy motion, whose generator is related to the fractional Laplacian with Dirichlet exterior condition, but we will not do that, as it needs a further discussion about Dirichlet forms. We refer to [33] for further details. However, let us also cite [48], where bounds similar to (4.18) were obtained under the Dirichlet exterior conditions, [127], which contains a characterization of the eigenfunctions of the fractional Laplacian with Dirichlet conditions on the half-axis, and [128], where the eigenvalues of the fractional Laplacian in the intervals are studied. Before proceeding, we present one more important property of the symmetric α-stable Lévy process and its density pα . Such a process can be constructed starting from the Brownian motion by means of a subordination procedure, as stated in the following proposition.

Proposition 4.26. Let α ∈ (0, 2), let B be a standard Brownian motion, and let S α/2 be an α -stable subordinator independent of B. Then the process X α defined as 2 Xtα := B2Sα/2 , t

t ≥ 0,

is a symmetric α-stable Lévy motion with γ = 1. Proof. Clearly, X α is a Lévy process since it is the subordination of a Brownian motion (see Theorem C.12). We only need to find the distribution of the increments. Since d

α

α Xtα − Xsα = Xt−s , we only need to find the distribution of Xtα for t ≥ 0. Since S 2 is independent of B, we can use the conditional expectation to state that α

E [eizXt ] = E [E [e

izB

α/2 2St

󵄨󵄨 α/2 −z2 S α/2 −t|z|α , 󵄨󵄨 St ]] = E [e t ] = e 1

where in the last equality we used (4.4). This shows that Xtα −Xsα ∼ S(α, 0, (t−s) α , 0; 1). This procedure is called Bochner subordination. We will not delve into Bochner subordination theory but just mention [35, 114] for further details. Now let us characterize the density pα in terms of the mixture of Gaussian densities according to gα . Proposition 4.27. For all t > 0 and x ∈ ℝ, we have +∞

α

pα (t, x) = ∫ p(2s, x)g α (t, s)ds = E [p (2St2 , x)] , 2

0

where p(s, x) = α

1 − x2s2 e , √2πs

and S 2 is an α2 -stable subordinator.

s > 0,

x ∈ ℝ,

(4.19)

4.4 Inverse stable subordinators

� 273

α

Proof. Let B be a standard Brownian motion, and let S 2 be an α2 -stable subordinator independent of B. According to Proposition 4.26, the process X α defined as Xtα = B2Sα/2 is t a symmetric α-stable Lévy motion with γ = 1. Therefore 󵄨 P (Xtα ≤ x) = P(B2Sα/2 ≤ x) = E [P (B2Sα/2 ≤ x 󵄨󵄨󵄨 Stα/2 )] t t x

x

= E [ ∫ p (2Stα/2 , y) dy] = ∫ E [p (2Stα/2 , y)] dy. [−∞ ] −∞

(4.20)

To show that the integrand on the right-hand side is continuous, fix any y, δ ∈ ℝ and note that p (2S α/2 , y + δ) ≤

1 √4πStα/2

.

Taking the expectation of the value in the right-hand side, we get that 1

E[

1

] = E[

α/2 [ √4πSt ]

2 α

]=

α/2 [ √t 4πS1 ]

+∞

1 2 α

√t 4π

1

∫ x − 2 g α (x)dx. 2

0

By (4.6) we know that there exists a constant C > 0 such that α 1 3−α 󵄨󵄨 α 󵄨󵄨 α 2−α x − 2 g α (x) ≤ Cx − 2−α exp (− 󵄨󵄨󵄨1 − 󵄨󵄨󵄨 ( ) ) , 2 󵄨 2 󵄨 2x

x ∈ (0, 1],

whence +∞

1

1

+∞

1

1

∫ x − 2 g α (x)dx = ∫ x − 2 g α (x)dx + ∫ x − 2 g α (x)dx 0

2

0

2

1

1

2

α 3−α 󵄨󵄨 α 󵄨󵄨 α 2−α ≤ C ∫ x − 2−α exp (− 󵄨󵄨󵄨1 − 󵄨󵄨󵄨 ( ) ) dx + 1 < ∞. 󵄨 2 󵄨 2x

0

Hence by the dominated convergence theorem lim E [p (2Stα/2 , y + δ)] = E [p (2Stα/2 , y)] .

δ→0

Taking the derivative in x in (4.20), we get the desired result.

4.4 Inverse stable subordinators In the previous sections, we discussed some examples of space-fractional partial differential equations arising as Kolmogorov equations of some suitable Lévy processes. Now

274 � 4 Stochastic processes and fractional differential equations we want to focus on time-fractional partial differential equations. To do this, we need to introduce another process related to the stable subordinator. 1/α Let α ∈ (0, 1), and let S α be an α-stable subordinator with γ = (cos ( πα )) . 2 Definition 4.28. For fixed ω ∈ Ω and t ≥ 0, we define Lαt (ω) = inf{u > 0 : Suα (ω) > t}. The process Lα = {Lαt , t ≥ 0} is called the inverse α-stable subordinator. It is clear that the trajectories of Lα are a. s. continuous and nondecreasing, since S α is a. s. strictly increasing. Note that for all t > 0 Lαt (ω) is the time at which the trajectory S⋅α (ω) exceeds t. In practice, the state of Lαt is a time for the α-stable subordinator S α . At the same time, the time variable t in Lαt represents a lower bound on the position of S α . This is explained by the relation P (Lαt ≤ x) = P (Sxα ≥ t) ,

t, x > 0.

In what follows, we establish that for all t > 0, Lαt has a density fα (t, ⋅). In particular, we relate fα to the density gα of S α . However, it is clear that for all x, t > 0, fα (t, x) will be related to gα (x, t), since, as we noticed, time and space are inverted as we pass from Lα to S α . Let us recall some fundamental properties of the process Lα . Proposition 4.29. Let α ∈ (0, 1), and let Lα be an inverse α-stable subordinator. (i) Lα is α-self similar, i. e., for all n ∈ ℕ, 0 ≤ t1 ≤ t2 ≤ ⋅ ⋅ ⋅ ≤ tn , and a > 0, d

(Lαat1 , . . . , Lαatn ) = aα (Lαt1 , . . . , Lαtn ) . d

(ii) Lαt = t α (S1α )−α for all t > 0. (iii) For all t > 0, Lαt has the density fα (t, x) :=

1 t −1− α1 x gα (tx − α ) , α

x > 0,

(4.21)

where gα is the density of S1α . (iv) For all t, x > 0, t

fα (t, x) = ∫ 0

(t − s)−α 1−α g (x, s)ds = (I0+ gα (x, ⋅)) (t), Γ(1 − α) α

where gα (x, ⋅) is the density of Sxα for x > 0 as in (4.5). (v) The increments of Lα are not stationary. (vi) For all 0 < t1 < t2 < t3 , E [(Lαt3 − Lαt2 ) (Lαt2 − Lαt1 )] ≠ E [Lαt3 − Lαt2 ] E [Lαt2 − Lαt1 ] , and the increments of Lα are dependent.

(4.22)

4.4 Inverse stable subordinators

� 275

Items (i), (ii), and (iii) are established, respectively, in [160, Proposition 3.1 and (a) and (c) of Corollary 3.1], (iv) is a direct consequence of [161, Theorem 3.1], (v) is given in [160, Corollary 3.3], and, finally, (vi) comes from the proof of [160, Theorem 3.1]. The density fα plays a major role in abstract time-fractional Cauchy problems, as will be emphasized in the next section, so let us look at some of its properties. First, let us study the regularity of fα with respect to its variables. The following proposition generalizes the corresponding statements from [163, Section 4] and at the same time determines the asymptotic behavior of fα with respect to both x and t. Proposition 4.30. Fix α ∈ (0, 1). (i) For fixed x > 0 and t ↓ 0, fα (t, x) ∼ √

2−α

α

α 1 α α 2−2α − 2−2α α 1−α ( ) t exp (−|1 − α|x 1−α ( ) ) . 2π(1 − α) x t

(4.23)

(ii) For fixed x > 0 and t → ∞, fα (t, x) ∼

t −α . Γ(1 − α)

(4.24)

(iii) For fixed t > 0, lim fα (t, x) = x↓0

t −α . Γ(1 − α)

(4.25)

(iv) For fixed t > 0 and x → ∞, fα (t, x) ∼ √

2−α

α

α 1 α α 2−2α − 2−2α α 1−α ( ) t exp (−|1 − α|x 1−α ( ) ) . 2π(1 − α) x t

(4.26)

1

Proof. To prove (i), fix x > 0. Then tx − α → 0 as t ↓ 0, and we can use (4.6) and (iii) of Proposition 4.29 to get (4.23). Similarly, to prove (ii), we use (4.7) and (iii) of Proposition 4.29 once we notice that 1 tx − α → ∞ as t → ∞. 1 To prove (iii), fix t > 0 and notice that tx − α → ∞ as x ↓ 0. So we use again (4.7) and 1 (iii) of Proposition 4.29 once we notice that tx − α ↓ 0 as x → ∞. As a consequence, we get from (4.23) that for all x > 0, fα (0+, x) := lim fα (t, x) = 0. t↓0

(4.27)

Furthermore, (4.25) implies that fα (t, ⋅) admits a jump discontinuity at 0. Also, (4.26) tells α us that for all t > 0 and s > 0, E [esLt ] is finite. Finally, due to (4.25) and (4.26), we get the following corollary.

276 � 4 Stochastic processes and fractional differential equations Corollary 4.31. Fix α ∈ (0, 1) and let Lα be an inverse α-stable subordinator. Then E[|Lαt |p ] is finite for all t > 0 if and only if p > −1. We limit ourselves to studying fα in the first quadrant, i. e., for t > 0 and x > 0. In this connection, instead of the Fourier transform in the variable x, we will consider the Laplace one. The following result was obtained in [30, Proposition 1(a)] for s ≤ 0 and α can be easily extended to s > 0 since E [esLt ] < ∞. Proposition 4.32. Let α ∈ (0, 1). Then for all t > 0 and s ∈ ℝ, α

E [esLt ] = Eα (st α ) , where Eα is the Mittag-Leffler function defined in (2.3). We can also calculate the Laplace transform of the density fα (⋅, x) for fixed x > 0 with respect to the variable t. Proposition 4.33. Fix α ∈ (0, 1). Then for all x > 0 and z > 0, ℒ[fα (⋅, x)](z) = z

α−1 −xzα

e

.

Proof. First, note that fα (⋅, x) ∈ L1loc (ℝ+0 ) according to (4.23). Therefore its Laplace transform is well-defined. Let S α be an α-stable subordinator, and let Lα be its inverse. Clearly, for all x, t > 0, P (Lαt ≤ x) = P (Sxα ≥ t) = 1 − P (Sxα ≤ t) . Recall that ℒ[1](z) = z1 for z > 0. Furthermore, it follows from (4.4) and the properties of the Laplace transform (see Proposition A.23) that α

ℒ [P (Sx ≤ ⋅)] (z) =

α

e−xz , z

z > 0.

x

Hence, setting Fα (t, x) = ∫0 fα (t, y)dy, we get that α

+∞

∫ e 0

−zt

1 e−xz Fα (t, x)dt = ℒ[Fα (⋅, x)](z) = − . z z

Now fix x > 0 and consider 0 < a < b such that x ∈ (a, b). Then, by (4.21), 󵄨󵄨 󵄨 t −1− 1 󵄨󵄨fα (t, x)󵄨󵄨󵄨 ≤ a α ‖gα ‖L∞ (ℝ+ ) . α Lagrange’s theorem supplies that for any h ∈ ℝ such that x + h ∈ [a, b], and t > 0, 1 |Fα (t, x + h) − Fα (t, x)| 󵄨 󵄨 t ≤ sup 󵄨󵄨󵄨fα (t, x)󵄨󵄨󵄨 ≤ a−1− α ‖gα ‖L∞ (ℝ+ ) , |h| α x∈[a,b]

(4.28)

277

4.4 Inverse stable subordinators �

where the right-hand side is integrable when we multiply it by e−zt for z > 0. Hence by the dominated convergence theorem +∞

lim ∫ e−zt

h→0

0

+∞

Fα (t, x + h) − F(t, x) dt = ∫ e−zt fα (t, x)dt. h 0

Thus taking the derivative with respect to the variable x in (4.28), we have +∞

ℒ[fα (⋅, x)](z) = ∫ e

−zt

α

fα (t, x)dt = zα−1 e−xz ,

z > 0,

x > 0.

0

Note that, as a consequence of the series representation (4.8) of gα and item (iii) in Proposition 4.29, for x, t > 0, we have +∞

fα (t, x) = ∑ (−1)j+1 j=1

Γ(1 + jα) sin(πjα) −αj j−1 t x . j! απ

(4.29)

Now we want to give the exact formulae for the moments of Lαt for t > 0, which exist due to Corollary 4.31. Let us set, for p > −1, Up,α (t) := E[|Lαt |p ]. It is proved in [160, Corollary 3.1(b)] that Up,α (t) = C(α, p)t αp for all t > 0 and p > 0, where C(α, p) is a positive constant. This constant has been explicitly evaluated, for example, in [168, Example 4.1], where the same statement was proved for p > −1. This is summarized in the next proposition. Proposition 4.34. Let α ∈ (0, 1), and let Lα be an inverse α-stable subordinator. For p > −1 and t > 0, denote Up,α (t) = E[|Lαt |p ]. Then Up,α (t) =

Γ(p + 1) pα t . Γ(pα + 1)

(4.30)

Proof. Taking the Laplace transform and using Proposition 4.33, we get +∞ +∞

ℒ[Up,α ](z) = ∫ ∫ e 0

0

−zt p

+∞

α

s fα (t, s)ds dt = zα−1 ∫ sp e−sz ds = z−pα−1 Γ(p + 1). 0

Together with the injectivity of the Laplace transform, this equality implies (4.30). Remark 4.35. Proposition 4.34 can be verified for p ∈ ℕ using Proposition 4.32, because +∞ α Uk,α (t) k t αk s = E [esLt ] = Eα (st α ) = ∑ sk . k! Γ(αk + 1) k=0 k=0 +∞



278 � 4 Stochastic processes and fractional differential equations Now let us focus on the increments of Lα . An explicit formula for E[|Lαt − Lαs |p ] was obtained in [130, Theorem 1] for p ∈ ℕ and then extended in [168, Lemma 2.1] to any real p > 0. We formulate this in the following theorem. Theorem 4.36. Let α ∈ (0, 1) and p > 0. Then for all t ≥ s > 0, s t

Γ(p + 1) αp [ 1 ] 󵄨 󵄨p E [󵄨󵄨󵄨Lαt − Lαs 󵄨󵄨󵄨 ] = t [1 − ∫(1 − y)α(p−1) yα−1 dy] , (4.31) Γ(αp + 1) B(α, α(p − 1) + 1) 0 [ ] where B is Euler’s Beta function. Remark 4.37. We can rewrite (4.31) applying the change of variables z = 1 − y to get 1

Γ(p + 1) αp [ 1 ] 󵄨 󵄨p E [󵄨󵄨󵄨Lαt − Lαs 󵄨󵄨󵄨 ] = t [1 − ∫ yα(p−1) (1 − y)α−1 dy] Γ(αp + 1) B(α, α(p − 1) + 1) 1− st [ ] 1

Γ(p + 1) αp [ 1 = t [1 − ∫ yα(p−1) (1 − y)α−1 dy Γ(αp + 1) B(α, α(p − 1) + 1) 0 [ 1− st

1 ] + ∫ yα(p−1) (1 − y)α−1 dy] B(α, α(p − 1) + 1) 0 ] 1− st

Γ(p + 1) t αp = ∫ yα(p−1) (1 − y)α−1 dy. Γ(αp + 1) B(α, α(p − 1) + 1)

(4.32)

0

For a, b > 0 and x ∈ [0, 1], we define the normalized incomplete Beta function x

B(x; a, b) =

1 ∫ za−1 (1 − z)b−1 dz, B(a, b) 0

where B(a, b) is Euler Beta function. Hence we can rewrite both (4.31) and (4.32) in terms of normalized incomplete Beta functions: Γ(p + 1)t αp s 󵄨 󵄨p E [󵄨󵄨󵄨Lαt − Lαs 󵄨󵄨󵄨 ] = [1 − B ( ; α, α(p − 1) + 1)] Γ(αp + 1) t Γ(p + 1)t αp s = B (1 − ; α(p − 1) + 1, α) . Γ(αp + 1) t

(4.33)

The next proposition summarizes some consequences of Theorem 4.36, which will be used in the following.

4.5 Time-fractional abstract Cauchy problems �

279

Proposition 4.38. Let α ∈ (0, 1), and let Lα be an inverse α-stable subordinator. (i) For all t > s ≥ 0 and p > 0, Γ(p + 1) 󵄨 󵄨p (t − s)α(p−1)+1 ≤ E [󵄨󵄨󵄨Lαt − Lαs 󵄨󵄨󵄨 ] Γ(α(p − 1) + 2)Γ(α)t 1−α Γ(p + 1) Γ(p + 1) ≤ (t − s)α(p−1)+1 ≤ |t − s|αp . Γ(αp + 1) Γ(αp + 1)t 1−α

(4.34)

(ii) For all γ ∈ (0, α), the trajectories of Lαt are a. s. locally γ-Hölder continuous. Proof. Item (i) follows by (4.33) and the properties of the normalized incomplete Beta function. For the details, we refer to Exercise 4.7. 1 To prove (ii), let γ < α and consider arbitrary p > α−γ . By construction we have

γ < α − p1 . Since (i) states that E[|Lαt − Lαs |p ] ≤ C|t − s|αp for all t, s ≥ 0, and thus Lα is a. s. locally γ-Hölder continuous according to the Kolmogorov–Chentsov theorem (Theorem C.1). Formula (4.34) was proved in [158, Lemma 2.1] in the case p ∈ ℕ. Its right-hand side was also established in the proof of [27, Corollary 5.3] using other methods. As a further direct consequence of Theorem 4.36, we can evaluate E[Lαt Lαs ]. Corollary 4.39. For all 0 < s ≤ t, E [Lαt Lαs ] =

1 s [s2α + t 2α B ( ; α, α + 1)] . Γ(2α + 1) t

Proof. Obviously, 2

2

2

2Lαt Lαs = (Lαt ) + (Lαs ) − (Lαt − Lαs ) , and the statement follows from Proposition 4.34 and (4.33).

4.5 Time-fractional abstract Cauchy problems Le us study the relation between inverse α-stable subordinators and fractional Cauchy problems. To do this, we need to extend the notion of abstract Cauchy problem from Section 4.2 to the time-fractional case. Let α ∈ (0, 1), let 𝔹 be a Banach space, and let u : ℝ+0 → 𝔹 be a Bochner-integrable vector-valued function (see Appendix C.5 for the definition and related properties of Bochner integrals). Then we can define the left-sided fractional integral of u as α (I0+ u) (t)

t

1 = ∫(t − s)α−1 u(s)ds ∈ 𝔹, Γ(α) 0

t ∈ ℝ+ .

280 � 4 Stochastic processes and fractional differential equations The left-sided Riemann–Liouville fractional derivative of u : ℝ+0 → 𝔹 is defined as (Dα0+ u) (t) =

t

1 d ∫(t − s)−α u(s)ds ∈ 𝔹, Γ(1 − α) dt

t ∈ ℝ+ ,

0

provided that the derivative exists (in the sense of (4.10)). Finally, we define the left-sided Caputo derivative (C Dα0+ u) (t) = (Dα0+ (u − u(0))) (t),

(4.35)

provided that the involved quantities exist. If u is differentiable in a. e. t > 0, in the sense of (4.10), and absolutely continuous (see the definition in Appendix C.5), then (C Dα0+ u)(t) is well-defined for all t ∈ ℝ+ , and (C Dα0+ u) (t)

t

1 = ∫(t − s)−α u′ (s)ds, Γ(1 − α)

t ∈ ℝ+ .

0

Note that such definitions of fractional integral and derivatives for functions with values in a Banach space coincide with the classical ones if 𝔹 = ℝ. Now let A : Dom(A) ⊂ 𝔹 → 𝔹 be an operator and consider the fractional abstract Cauchy problem {

(C Dα0+ u) (t) = Au(t),

t > 0,

(4.36)

u(0) = x,

for some x ∈ 𝔹. Definition 4.40. We say that u is a strong solution of (4.36) if (i) u ∈ 𝒞 (ℝ+0 ; 𝔹) and u(t) ∈ Dom(A) for all t > 0; (ii) C Dα0+ u ∈ 𝒞 (ℝ+ ; 𝔹); (iii) (4.36) holds. Once this notion of a strong solution to a fractional abstract Cauchy problem has been established, let us state the main result concerning the existence and uniqueness of such solutions, which was obtained in [25, Theorem 3.1]. Theorem 4.41. Let α ∈ (0, 1), let 𝔹 be a Banach space, and let A : Dom(A) ⊂ 𝔹 → 𝔹 be the generator of a strongly continuous contraction semigroup {Tt }t≥0 on 𝔹. Then the unique strong solution of (4.36) is given for x ∈ Dom(A) by the formula +∞

u(t) = E[TLαt x] = ∫ fα (t, s)Ts xds,

t > 0,

u(0) = x,

0

where the integral on the right-hand side is a Bochner integral.

(4.37)

4.5 Time-fractional abstract Cauchy problems �

281

Remark 4.42. 1) Although the definition of a strong solution of (4.36) does not require x ∈ Dom(A), such an assumption is explicitly used in the proof of Theorem 4.41 provided in [25, Theorem 3.1]. Hence the assumption x ∈ Dom(A) is a sufficient condition to have a strong solution to (4.36). Such condition, however, is not necessary. For instance, in [136] the authors prove the existence of strong solutions to some particular equations of the form (4.36) by means of a series decomposition. In such a case, we can choose some initial data that do not belong to the domain of A. 2) Note that the general theorem was proved in [25], but the subordination-like formula (4.37) was already applied in [229]. Throughout this and the following sections, we will apply the Caputo derivative C Dα0+ to functions u : ℝ+0 × ℝ → ℝ with respect to the first variable t ∈ ℝ+0 for fixed x ∈ ℝ. So, for all α ∈ (0, 1), t > 0, and x ∈ ℝ, (C Dα0+ u) (t, x) =

t

1 𝜕 ∫(t − τ)−α (u(τ, x) − u(0, x))dτ, Γ(1 − α) 𝜕t

(4.38)

0

provided that the involved quantity exists. If u ∈ 𝒞 (ℝ+0 ; 𝒞0 (ℝ)), then, exactly as we did in Section 4.2, we can write u(t)(x) = u(t, x) and identify u as a function of two variables (instead of a 𝒞0 (ℝ)-valued function). In such a case, (4.38) coincides with the pointwise evaluation of (4.35). Obviously, inside we have the fractional integral that is applied to the first variable t ∈ ℝ+0 for fixed x ∈ ℝ. For α > 0, t > 0, and x ∈ ℝ, such a integral equals α (I0+ u) (t, x) =

t

1 ∫(t − τ)α−1 u(τ, x)dτ. Γ(α)

(4.39)

0

Comparing (4.39) with (4.38), it is clear that for all α ∈ (0, 1), t > 0, and x ∈ ℝ, (C Dα0+ u) (t, x) =

1−α 𝜕(I0+ (u − u(0, ⋅))) (t, x). 𝜕t

(4.40)

Now let us take a look at a specific case. Namely, we consider the time-fractional partial differential equation 𝜕u C α {( D0+ u) (t, x) = − (t, x), 𝜕x { u(0, x) = f (x) {

t > 0, x ∈ ℝ,

for some f ∈ 𝒞01 (ℝ). Definition 4.43. We say that u is a strong solution of (4.41) if (i) u ∈ 𝒞 (ℝ+0 ; 𝒞0 (ℝ)), and u(t, ⋅) ∈ 𝒞01 (ℝ) for all t ∈ ℝ+ ;

(4.41)

282 � 4 Stochastic processes and fractional differential equations (ii) C Dα0+ u ∈ 𝒞 (ℝ+ ; 𝒞0 (ℝ)); (iii) (4.41) holds. Let us underline that such a definition is analogous to that of a strong solution for d (4.36), once we set 𝔹 = 𝒞0 (ℝ) and A = − dx . We can provide, thanks to Theorem 4.41, a stochastic representation of the strong solutions of (4.41). Theorem 4.44. Let α ∈ (0, 1). Then for every f ∈ 𝒞01 (ℝ), (4.41) has a unique strong solution given by the formula u(t, x) = E [f (x −

Lαt )]

+∞

= ∫ f (x − s)fα (t, s)ds,

t > 0,

(4.42)

0

with u(0, x) = f (x). Proof. Consider the shift semigroup {Tt }t≥0 on 𝔹 = 𝒞0 (ℝ): (Tt f )(x) = f (x − t),

x ∈ ℝ,

t ≥ 0.

It is a Feller semigroup on 𝒞0 (ℝ). Furthermore, if f ∈ 𝒞01 (ℝ), then lim t→0

Tt f (x) − f (x) f (x − t) − f (x) df = lim = − (x). t→0 t t dx

Since in this case the pointwise convergence is sufficient to guarantee also the uniform convergence (see [35, Theorem 1.33]), we get that the shift semigroup {Tt }t≥0 has a gend erator A = − dx on Dom(A) = 𝒞01 (ℝ). The statement of the theorem now follows from Theorem 4.41. Similarly to the case of the stable subordinator, we ask if fα satisfies the equation (C Dα0+ fα ) (t, x) = −

𝜕fα (t, x) 𝜕x

(4.43)

for t, x > 0. It is clear that fα does not exist at t = 0, and hence we cannot use directly (4.38) to define (C Dα0+ fα ) (t, x) for t, x > 0. However, by (4.27) we know that fα (0+, x) = 0. Hence we adapt (4.38) to this case by substituting the evaluation at t = 0 with the right limits, i. e., for t, x > 0: (C Dα0+ fα ) (t, x) =

t

1 𝜕 ∫(t − τ)−α (fα (τ, x) − fα (0+, x))dτ Γ(1 − α) 𝜕t 0

t

1−α 𝜕(I0+ fα )(t, x) 1 𝜕 = , ∫(t − τ)−α fα (τ, x)dτ = Γ(1 − α) 𝜕t 𝜕t 0

(4.44)

4.5 Time-fractional abstract Cauchy problems



283

where we also used (4.39) and (4.40). After we figured out the meaning of C Dα0+ fα , to prove that fα satisfies (4.43), we need some regularity results, which are given in the next proposition. Proposition 4.45. Fix α ∈ (0, 1). Then for t, x > 0, we have t

𝜕fα (t − s)−α 𝜕gα 1−α 𝜕gα (x, ⋅) (t, x) = ∫ (x, s)ds = (I0+ ( )) (t). 𝜕x Γ(1 − α) 𝜕x 𝜕x

(4.45)

0

Furthermore, for all 0 < a < b, there exists a constant C > 0 such that 󵄨󵄨 𝜕f 󵄨󵄨 sup 󵄨󵄨󵄨󵄨 α (t, x)󵄨󵄨󵄨󵄨 ≤ C min {1, x −2 } , 󵄨 t∈[a,b] 󵄨 𝜕x

x > 0,

(4.46)

󵄨󵄨 𝜕f 󵄨󵄨 Ct 1−α sup 󵄨󵄨󵄨󵄨 α (t, x)󵄨󵄨󵄨󵄨 ≤ , 󵄨 Γ(2 − α) x∈[a,b] 󵄨 𝜕x

t > 0,

(4.47)

󵄨󵄨 𝜕g 󵄨󵄨 sup 󵄨󵄨󵄨󵄨 α (x, s)󵄨󵄨󵄨󵄨 ≤ C, 󵄨 x∈[a,b] 󵄨 𝜕x

s > 0.

(4.48)

and

Remark 4.46. Note that relations (4.45) and (4.46) are particular cases of [19, Proposition 3.6], whereas (4.48) follows from [19, Equation (A.51)]. Finally, (4.47) is a direct consequence of the series representation (4.29). A more precise proof of this inequality is given in [19, Example 5.1]. Having established this, we are ready to prove the following statement. Theorem 4.47. Fix α ∈ (0, 1). The function fα satisfies (4.43) for all x, t > 0. Proof. Fix x > 0. We get from Lemma 2.44, (i), and Proposition 4.33 that for all z > 0, 1−α

ℒ [(I0+ fα ) (⋅, x)] (z) = z

α−1

2α−2 −xzα

ℒ[fα (⋅, x)](z) = z

e

.

(4.49)

Further, (4.48) implies that ℒ [ 𝜕xα (x, ⋅)] (z) is well-defined for z > 0. We know from (4.45) 𝜕g

that

𝜕f ℒ [ 𝜕xα (⋅, x)] (z)

ℒ[

is well-defined for z > 0 and

𝜕fα 𝜕g 1−α 𝜕gα (⋅, x)] (z) = ℒ [I0+ (x, ⋅)] (z) = z−1+α ℒ [ α (x, ⋅)] (z). 𝜕x 𝜕x 𝜕x

(4.50)

To evaluate the right-hand side of (4.50), fix x > 0 and consider 0 < a < b such that x ∈ (a, b). Then it follows from the Lagrange theorem and (4.48) that for small enough h ∈ ℝ such that x + h ∈ [a, b] and for a suitable constant C > 0, 󵄨󵄨 gα (x + h, s) − gα (x, s) 󵄨󵄨 󵄨 󵄨 󵄨󵄨 󵄨󵄨 ≤ sup 󵄨󵄨󵄨 𝜕gα (x, s)󵄨󵄨󵄨 ≤ C. 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨󵄨 h 󵄨 󵄨 x∈[a,b] 󵄨 𝜕x 󵄨

284 � 4 Stochastic processes and fractional differential equations Hence we can use the dominated convergence theorem and (4.4) to get +∞

∫ e−zs 0

+∞

α 𝜕gα 𝜕 𝜕 −xzα (x, s)ds = ( ∫ e−zs gα (x, s)ds) = e = −zα e−xz . 𝜕x 𝜕x 𝜕x

0

Plugging this equality into (4.50), we get that for z > 0, ℒ[

α 𝜕fα (⋅, x)] (z) = −z−1+2α e−xz . 𝜕x

(4.51)

t 𝜕f

Now consider F(t, x) = ∫0 𝜕xα (s, x)ds for x, t > 0. It is well-defined according to (4.47). Then the properties of the Laplace transform, (4.51), and (4.49) ensure that for z > 0, ℒ[F(⋅, x)](z) =

𝜕f 1 −2+2α −xzα 1−α ℒ [ α (⋅, x)] (z) = −z e = −ℒ [(I0+ fα ) (⋅, x)] (z). z 𝜕x

Taking the inverse Laplace transform and recalling that both sides of the latter equality are continuous in the variable t, we get that for all t > 0, t

∫ 0

𝜕fα 1−α (s, x)ds = − (I0+ fα ) (t, x). 𝜕x

Taking the derivative in t > 0 and recalling that f (0, x) = 0 and according to (4.45), we get the desired result.

𝜕fα (⋅, x) 𝜕x

is continuous

4.6 Weighted inverse subordination operators

In fact, Theorem 4.41 tells us that if we know the strong solution u of an abstract Cauchy problem of the form (4.11), then we can obtain strong solutions of the time-fractional abstract Cauchy problem (4.41) by composing u with an inverse α-stable subordinator and then taking the expectation. This procedure can be formalized with a suitable operator. Consider a Banach space 𝔹, and let v : ℝ^+ → 𝔹 be a Bochner-measurable function such that E[‖v(L^α_t)‖_𝔹] < ∞ for all t ∈ ℝ^+. Then, due to the Bochner theorem [9, Theorem 1.1.4], we can define the following operator.

Definition 4.48. We define the inverse subordination operator of order α ∈ (0, 1), IS^α, acting on functions v : ℝ^+ → 𝔹, as

(IS^α v)(t) = ∫_0^{+∞} v(s) f_α(t, s) ds,    t > 0,

whereas for t = 0 we set (IS^α v)(0) = v(0).
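Since f_α(t, ⋅) is the density of the inverse α-stable subordinator L^α_t, the operator acts as (IS^α v)(t) = E[v(L^α_t)], which suggests a direct Monte Carlo evaluation. The sketch below is our own illustration (not part of the book's material): it samples a one-sided α-stable variable S with E[e^{−λS}] = e^{−λ^α} by Kanter's method and uses the distributional identity L^α_t = (t/S)^α, then checks the estimate against the moment formula E[L^α_t] = t^α/Γ(α + 1).

```python
import math
import random

def stable_oneside(alpha, rng):
    """One-sided alpha-stable S with E[exp(-lam*S)] = exp(-lam**alpha) (Kanter's method)."""
    u = rng.uniform(0.0, math.pi)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def inv_subordinator(alpha, t, rng):
    """One sample of L^alpha_t, via the identity in law L^alpha_t = (t / S)**alpha."""
    return (t / stable_oneside(alpha, rng)) ** alpha

def IS(alpha, v, t, n=200_000, seed=1):
    """Monte Carlo estimate of (IS^alpha v)(t) = E[v(L^alpha_t)]."""
    rng = random.Random(seed)
    return sum(v(inv_subordinator(alpha, t, rng)) for _ in range(n)) / n

alpha, t = 0.7, 1.0
est = IS(alpha, lambda s: s, t)               # (IS^alpha id)(t) = E[L^alpha_t]
exact = t ** alpha / math.gamma(alpha + 1.0)  # = 1/Gamma(1.7)
```

Replacing `lambda s: s` by any bounded v gives the corresponding subordinated expectation up to Monte Carlo error.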

We will focus on the action of the inverse subordination operator on bounded functions. For this reason, we consider IS^α : L^∞(ℝ^+; 𝔹) → L^∞(ℝ^+; 𝔹), which is linear and continuous, since

‖IS^α v‖_{L^∞(ℝ^+;𝔹)} ≤ ‖v‖_{L^∞(ℝ^+;𝔹)}.

Furthermore, consider a Feller semigroup {T_t}_{t≥0} on 𝔹 with generator A. Let x ∈ Dom(A), and let u(t) = T_t x for t ≥ 0. According to Theorem 4.41, the strong solution u_α(t) of (4.36) with initial data x ∈ Dom(A) is given by the formula u_α(t) = (IS^α u)(t). In practice, the inverse subordination operator IS^α plays a major role in the stochastic representation of solutions of fractional abstract Cauchy problems.

Another very important role of the inverse subordination operator concerns a special class of time-changed processes, constructed as the composition of a stochastic process Y = {Y_t, t ≥ 0} and an independent inverse α-stable subordinator L^α. Indeed, if Y^α := {Y_{L^α_t}, t ≥ 0}, F_Y(t, x) = P(Y_t ≤ x), and F_{Y^α}(t, x) = P(Y^α_t ≤ x) for t ≥ 0 and x ∈ ℝ, then

P(Y^α_t ≤ x) = E[E[1_{(−∞,x]}(Y_{L^α_t}) | L^α_t]] = E[F_Y(L^α_t, x)] = (IS^α F_Y(⋅, x))(t).

In the particular case of a centered Gaussian process Y with variance V_Y ∈ 𝒞^1(ℝ^+), we know that the density p_Y(t, ⋅) of Y_t for t > 0 satisfies the equation

∂p_Y(t, x)/∂t = (V′_Y(t)/2) ∂²p_Y(t, x)/∂x²,    t > 0, x ∈ ℝ.    (4.52)

Although Y may be non-Markov, we still call (4.52) the Fokker–Planck equation associated with Y, as is done, for instance, in [99]. Similarly to how fractional abstract Cauchy problems are studied in Theorem 4.41, we ask whether a suitable fractional version of the Fokker–Planck equation can be solved by means of an inverse subordination operator, or, more precisely, whether a solution to a generalized fractional Fokker–Planck equation can be represented in terms of the density of Y^α_t if the original process Y is Gaussian. This is clear in the case of a standard Brownian motion, since then V′_Y(t) ≡ 1, and so we are still within the scope of Theorem 4.41. Further details on the delayed Brownian motion will be given in Section 4.7. However, if V′_Y is not constant, then Theorem 4.41 is not sufficient to handle the problem. This is the case, for example, with fBm and fOU processes. To consider these cases, we need a generalization of the inverse subordination operator IS^α obtained by means of a weight function that takes into account the behavior of V′_Y. In the remainder of this section, we introduce such an operator and discuss its main properties; in Sections 4.8 and 4.9 we discuss in detail the delayed fBm and delayed fOU processes.

Consider a function w ∈ L^1_loc(ℝ^+) such that w(t) ≥ 0 for t ≥ 0. Let ℒ[w](z) be well-defined for all z > 0, and let E[w(L^α_t)] < ∞ for all t > 0. As a shorthand, denote by 𝒲_α the set of weights w satisfying these assumptions.

Definition 4.49. Fix w ∈ 𝒲_α. Then for every function v ∈ L^∞(ℝ^+; 𝔹), we define the weighted Laplace transform as

ℒ_w[v](z) = ∫_0^{+∞} e^{−zt} w(t) v(t) dt,    z > 0,

and the weighted inverse subordination operator of order α as

(IS^α_w v)(t) = ∫_0^{+∞} w(s) v(s) f_α(t, s) ds,    t > 0.

If w ≡ 1, then ℒ_w = ℒ and IS^α_w = IS^α. In general,

ℒ_w[v] = ℒ[wv],    IS^α_w v = IS^α(wv).

Let us now state some relations between the (possibly weighted) Laplace transforms and inverse subordination operators.

Proposition 4.50. Let v ∈ L^∞(ℝ^+; 𝔹). Then for all z > 0 and w ∈ 𝒲_α,

ℒ[IS^α_w v](z) = z^{α−1} ℒ_w[v](z^α).

Proof. Fix z > 0 and note that

∫_0^{+∞} ∫_0^{+∞} e^{−zt} w(s) ‖v(s)‖_𝔹 f_α(t, s) dt ds = z^{α−1} ∫_0^{+∞} e^{−z^α s} w(s) ‖v(s)‖_𝔹 ds ≤ ‖v‖_{L^∞(ℝ^+;𝔹)} z^{α−1} ℒ[w](z^α) < ∞.

Hence we can use the Fubini theorem and Proposition 4.33 to get

ℒ[IS^α_w v](z) = ∫_0^{+∞} ∫_0^{+∞} e^{−zt} w(s) v(s) f_α(t, s) dt ds = z^{α−1} ∫_0^{+∞} e^{−z^α s} w(s) v(s) ds = z^{α−1} ℒ_w[v](z^α).
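Proposition 4.50 can be verified numerically in the one case where f_α has an elementary closed form: for α = 1/2 one has f_{1/2}(t, s) = e^{−s²/(4t)}/√(πt). The sketch below is our own check (not from the book): with the illustrative choice w(t) = v(t) = e^{−t}, the right-hand side evaluates in closed form to z^{−1/2} ℒ_w[v](√z) = z^{−1/2}/(2 + √z), and the left-hand side is computed by two nested quadratures; the substitutions s = √t·u and t = r² keep both integrands smooth.

```python
import math

def simpson(g, a, b, n):
    """Composite Simpson rule on [a, b] with an even number n of subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4.0 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0

def IS_w_v(t):
    """(IS^{1/2}_w v)(t) with w = v = exp(-.).
    After s = sqrt(t)*u: (1/sqrt(pi)) * int_0^inf exp(-2*sqrt(t)*u - u*u/4) du."""
    rt = math.sqrt(t)
    return simpson(lambda u: math.exp(-2.0 * rt * u - u * u / 4.0),
                   0.0, 30.0, 400) / math.sqrt(math.pi)

z = 1.0
# Left-hand side: Laplace transform of IS^{1/2}_w v, with the substitution t = r*r.
lhs = simpson(lambda r: 2.0 * r * math.exp(-z * r * r) * IS_w_v(r * r), 0.0, 7.0, 700)
rhs = z ** -0.5 / (2.0 + math.sqrt(z))  # z^{alpha-1} * L_w[v](z^alpha), alpha = 1/2
```

At z = 1 the right-hand side is exactly 1/3, and the quadrature of the left-hand side agrees to several digits.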

As a consequence of Proposition 4.50, we can prove that the operator IS^α_w is, in some sense, injective.

Corollary 4.51. Let α ∈ (0, 1) and w ∈ 𝒲_α \ {0}. If v_1, v_2 ∈ L^∞(ℝ^+; 𝔹) with IS^α_w v_1 = IS^α_w v_2, then v_1(t) = v_2(t) for a. a. t > 0 such that w(t) > 0.

Proof. Let v_1, v_2 ∈ L^∞(ℝ^+; 𝔹) with IS^α_w v_1 = IS^α_w v_2. Taking the Laplace transform of both sides, we get that z^{α−1} ℒ_w[v_1](z^α) = z^{α−1} ℒ_w[v_2](z^α) for all z > 0. The injectivity of the Laplace transform implies that w(t)v_1(t) = w(t)v_2(t) for a. a. t > 0. If w(t) > 0, then v_1(t) = v_2(t), whence the proof follows.

Concerning the continuity in t, we have to ask for some uniform control of the weight w. For simplicity, we consider here only the case 𝔹 = ℝ.

Proposition 4.52. Let w ∈ 𝒲_α ∩ 𝒞(ℝ^+) be either monotone or bounded. Then IS^α_w v ∈ 𝒞(ℝ^+) for all v ∈ L^∞(ℝ^+) ∩ 𝒞(ℝ^+).

Proof. Let t > 0 and 0 < a < b be such that t ∈ (a, b), and let h ∈ ℝ be such that t + h ∈ [a, b]. The definition of IS^α_w implies that

(IS^α_w v)(t + h) = E[w(L^α_{t+h}) v(L^α_{t+h})].

First, assume that w is nonincreasing. Then, since L^α is a. s. nondecreasing,

w(L^α_{t+h}) |v(L^α_{t+h})| ≤ ‖v‖_{L^∞(ℝ^+)} w(L^α_a),

where E[w(L^α_a)] < ∞. If, instead, w is nondecreasing, then

w(L^α_{t+h}) |v(L^α_{t+h})| ≤ ‖v‖_{L^∞(ℝ^+)} w(L^α_b),

where E[w(L^α_b)] < ∞. Finally, if w is bounded, then

w(L^α_{t+h}) |v(L^α_{t+h})| ≤ ‖v‖_{L^∞(ℝ^+)} ‖w‖_{L^∞(ℝ^+)}.

Hence, by the dominated convergence theorem and the fact that L^α is a. s. continuous, we get

lim_{h→0} (IS^α_w v)(t + h) = E[w(L^α_t) v(L^α_t)] = (IS^α_w v)(t).

Since t > 0 is arbitrary, this completes the proof.

Remark 4.53. In fact, if v ∈ 𝒞(ℝ^+_0) and w is bounded with w(0+) := lim_{t↓0} w(t) ∈ ℝ, then, setting IS^α_w v(0) := w(0+)v(0), we have IS^α_w v ∈ 𝒞(ℝ^+_0). Furthermore, Proposition 4.52 holds if there exist a nonincreasing function w_1 ∈ 𝒲_α and a nondecreasing function w_2 ∈ 𝒲_α such that w ≤ w_1 + w_2.

Now let v : ℝ^+ × ℝ → ℝ be a measurable function such that v(⋅, x) ∈ L^∞(ℝ^+) for each fixed x ∈ ℝ. Then we can apply both IS^α_w and ℒ_w with respect to the first variable while keeping the second one fixed; for example,

(IS^α_w v)(t, x) = E[w(L^α_t) v(L^α_t, x)].

We can study the behavior of the resulting functions with respect to the variable x. In particular, we have the following properties.

Proposition 4.54. Let v : ℝ^+ × ℝ → ℝ be a measurable function such that v(⋅, x) ∈ L^∞(ℝ^+) for each fixed x ∈ ℝ.
(i) Let x_0 ∈ ℝ, and let v(t, ⋅) be continuous at x_0 for all t > 0. Assume that there exist an interval (c, d) ∋ x_0 and a function f_{[c,d]} : ℝ^+_0 → ℝ^+_0 such that

E[f_{[c,d]}(L^α_t) w(L^α_t)] < ∞,    t > 0,    (4.53)

and

|v(t, x)| ≤ f_{[c,d]}(t),    t > 0, x ∈ [c, d].    (4.54)

Then for all t > 0,

lim_{h→0} (IS^α_w v)(t, x_0 + h) = (IS^α_w v)(t, x_0).

(ii) Let v(t, ⋅) ∈ 𝒞(a, b) for some interval (a, b) and all t > 0. If for any a < c < d < b there exists a function f_{[c,d]} : ℝ^+_0 → ℝ^+_0 satisfying (4.53) and (4.54), then the function (IS^α_w v)(t, ⋅) is continuous on (a, b) for all t > 0.
(iii) Assume that lim_{x→±∞} v(t, x) = 0 for all t > 0 and that there exist a constant M ≥ 0 and a function f_∞ : ℝ^+_0 → ℝ^+_0 such that

E[f_∞(L^α_t) w(L^α_t)] < ∞,    t > 0,    (4.55)

and

|v(t, x)| ≤ f_∞(t),    t > 0, |x| ≥ M.    (4.56)

Then for all t > 0,

lim_{x→±∞} (IS^α_w v)(t, x) = 0.    (4.57)

(iv) Assume that v(t, ⋅) ∈ 𝒞_0(ℝ) for all t > 0. If there exists a function f_∞ : ℝ^+_0 → ℝ^+_0 satisfying (4.55) and (4.56) with M = 0, then for all t > 0 the function (IS^α_w v)(t, ⋅) is continuous and satisfies (4.57), i. e., it belongs to 𝒞_0(ℝ) for each fixed t > 0.
(v) Let a < b, and assume that v(t, ⋅) ∈ 𝒞^1(a, b) for all t > 0. Also, suppose that for all c, d ∈ ℝ with [c, d] ⊂ (a, b) there exists a function f_{[c,d]} : ℝ^+ → ℝ^+_0 satisfying (4.53) and

|v(t, x)| + |∂v(t, x)/∂x| ≤ f_{[c,d]}(t),    t > 0, x ∈ [c, d].    (4.58)

Then for all t > 0, the function (IS^α_w v)(t, ⋅) is continuously differentiable on (a, b), and for all t > 0 and x ∈ (a, b),

∂(IS^α_w v)(t, x)/∂x = (IS^α_w (∂v/∂x))(t, x).

Proof. Let us prove (i). Let h ∈ ℝ be such that x_0 + h ∈ [c, d]. By the definition of IS^α_w we have that for all t > 0,

(IS^α_w v)(t, x_0 + h) = E[w(L^α_t) v(L^α_t, x_0 + h)].

The integrand can be bounded for all t > 0 as follows:

|w(L^α_t) v(L^α_t, x_0 + h)| ≤ f_{[c,d]}(L^α_t) w(L^α_t),

where we know that E[f_{[c,d]}(L^α_t) w(L^α_t)] < ∞. Hence, by the dominated convergence theorem, taking the limit as h → 0, we obtain that

lim_{h→0} (IS^α_w v)(t, x_0 + h) = lim_{h→0} E[w(L^α_t) v(L^α_t, x_0 + h)] = E[w(L^α_t) v(L^α_t, x_0)] = (IS^α_w v)(t, x_0).

To prove (ii), note that (i) holds for all x ∈ (a, b), since we can always find a < c < d < b such that x ∈ (c, d). Concerning (iii), we recall that for all t > 0 and x ∈ ℝ,

(IS^α_w v)(t, x) = E[w(L^α_t) v(L^α_t, x)].

Without loss of generality we can assume that |x| > M, so that

|w(L^α_t) v(L^α_t, x)| ≤ f_∞(L^α_t) w(L^α_t),

where E[f_∞(L^α_t) w(L^α_t)] < ∞. Hence the dominated convergence theorem, together with the relation lim_{x→±∞} v(t, x) = 0 for all t > 0, implies (4.57). Item (iv) is obtained by combining items (ii) and (iii).

Item (v): let x ∈ (a, b), c, d ∈ ℝ with x ∈ (c, d), and let h ∈ ℝ be such that x + h ∈ [c, d]. The Lagrange theorem supplies the existence of ξ(x, h) ∈ [c, d] such that

|(v(t, x + h) − v(t, x))/h| = |∂v(t, ξ(x, h))/∂x| ≤ f_{[c,d]}(t),

where we applied (4.58). Furthermore, recall that

((IS^α_w v)(t, x + h) − (IS^α_w v)(t, x))/h = (E[w(L^α_t) v(L^α_t, x + h)] − E[w(L^α_t) v(L^α_t, x)])/h = E[w(L^α_t) (v(L^α_t, x + h) − v(L^α_t, x))/h].

The integrand can be bounded as follows:

w(L^α_t) |(v(L^α_t, x + h) − v(L^α_t, x))/h| ≤ w(L^α_t) f_{[c,d]}(L^α_t),

where, by assumption, E[w(L^α_t) f_{[c,d]}(L^α_t)] < ∞. Hence, by the dominated convergence theorem we have that for all t > 0,

lim_{h→0} ((IS^α_w v)(t, x + h) − (IS^α_w v)(t, x))/h = lim_{h→0} E[w(L^α_t) (v(L^α_t, x + h) − v(L^α_t, x))/h] = E[w(L^α_t) ∂v(L^α_t, x)/∂x] = (IS^α_w (∂v/∂x))(t, x).

Finally, the continuity of (IS^α_w (∂v/∂x))(t, ⋅) for fixed t > 0 follows from (ii).

A similar result holds also for the weighted Laplace transform

ℒ_w[v](z, x) = ∫_0^{+∞} e^{−zt} w(t) v(t, x) dt,    z > 0.

Proposition 4.55. Let v : ℝ^+ × ℝ → ℝ be a measurable function such that v(⋅, x) ∈ L^∞(ℝ^+) for each fixed x ∈ ℝ.
(i) Let x_0 ∈ ℝ. Also, let v(t, ⋅) be continuous at x_0 for each t > 0, and assume that there exist an interval (c, d) ∋ x_0 and a function f_{[c,d]} : ℝ^+_0 → ℝ^+_0 such that

∫_0^{+∞} e^{−zt} f_{[c,d]}(t) w(t) dt < ∞,    z > 0,    (4.59)

and

|v(t, x)| ≤ f_{[c,d]}(t),    t > 0, x ∈ [c, d].    (4.60)

Then for all z > 0,

lim_{h→0} ℒ_w[v](z, x_0 + h) = ℒ_w[v](z, x_0).

(ii) If v(t, ⋅) ∈ 𝒞(a, b) for all t > 0 and for all a < c < d < b there exists a function f_{[c,d]} : ℝ^+_0 → ℝ^+_0 satisfying (4.59) and (4.60), then the function ℒ_w[v](z, ⋅) is continuous on (a, b) for all z > 0.
(iii) Assume that lim_{x→±∞} v(t, x) = 0 for all t > 0 and that there exist a constant M > 0 and a function f_∞ : ℝ^+_0 → ℝ^+_0 such that

∫_0^{+∞} e^{−zt} f_∞(t) w(t) dt < ∞,    z > 0,    (4.61)

and

|v(t, x)| ≤ f_∞(t),    t > 0, |x| > M.    (4.62)

Then for all z > 0,

lim_{x→±∞} ℒ_w[v](z, x) = 0.    (4.63)

(iv) Assume that v(t, ⋅) ∈ 𝒞_0(ℝ) for all t > 0. If there exists a function f_∞ : ℝ^+_0 → ℝ^+_0 satisfying (4.61) and (4.62) with M = 0, then for all z > 0 the function ℒ_w[v](z, ⋅) belongs to 𝒞_0(ℝ).
(v) Let a < b, and assume that v(t, ⋅) ∈ 𝒞^1(a, b) for all t > 0. Let for any c, d ∈ ℝ with [c, d] ⊂ (a, b) there exist a function f_{[c,d]} : ℝ^+ → ℝ^+_0 satisfying (4.59) and such that

|v(t, x)| + |∂v(t, x)/∂x| ≤ f_{[c,d]}(t),    t > 0, x ∈ [c, d].    (4.64)

Then for all z > 0, the function ℒ_w[v](z, ⋅) is continuously differentiable on (a, b), and for all z > 0 and x ∈ (a, b),

∂ℒ_w[v](z, x)/∂x = ℒ_w[∂v/∂x](z, x).

We omit the proof, since it is identical to that of Proposition 4.54. We will also use the following statement.

Proposition 4.56. Let E ⊂ ℝ, and let v : ℝ^+ × E → ℝ be such that v(⋅, x) ∈ L^∞(ℝ^+) and lim_{t↓0} v(t, x) = v(0+, x) ∈ ℝ for all x ∈ E. Let also w ∈ 𝒲_α be either monotone or bounded, and assume that w(0+) := lim_{t↓0} w(t) ∈ ℝ. Then

lim_{t↓0} (IS^α_w v)(t, x) = v(0+, x) w(0+),    x ∈ E.    (4.65)

Proof. Let t ∈ (0, 1) and x ∈ E. The definition of IS^α_w implies that

(IS^α_w v)(t, x) = E[w(L^α_t) v(L^α_t, x)].    (4.66)

If w is nonincreasing, then w(L^α_t) ≤ w(0+), and

|w(L^α_t) v(L^α_t, x)| ≤ ‖v(⋅, x)‖_{L^∞(ℝ^+)} w(0+).    (4.67)

If w is nondecreasing, then w(L^α_t) ≤ w(L^α_1), and

|w(L^α_t) v(L^α_t, x)| ≤ ‖v(⋅, x)‖_{L^∞(ℝ^+)} w(L^α_1),    (4.68)

where E[w(L^α_1)] < ∞. Finally, if w is bounded, then

|w(L^α_t) v(L^α_t, x)| ≤ ‖v(⋅, x)‖_{L^∞(ℝ^+)} ‖w‖_{L^∞(ℝ^+)}.    (4.69)

Using the dominated convergence theorem and the fact that L^α is a. s. continuous with lim_{t↓0} L^α_t = 0 a. s., we get (4.65) by taking the limit in (4.66).
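Proposition 4.56 is easy to probe numerically when L^α_t has an elementary law: for α = 1/2, L^{1/2}_t coincides in distribution with |N(0, 2t)|. The sketch below is our own illustration (not from the book), with the bounded monotone weight w(t) = e^{−t} and v(t) = cos t, so that the limit in (4.65) is w(0+)v(0+) = 1.

```python
import math
import random

def IS_w_v_half(t, n=100_000, seed=4):
    """Monte Carlo for (IS^{1/2}_w v)(t) = E[w(L^{1/2}_t) v(L^{1/2}_t)] with
    w(s) = exp(-s) and v(s) = cos(s), using L^{1/2}_t = |N(0, 2t)| in law."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * t)
    acc = 0.0
    for _ in range(n):
        L = abs(rng.gauss(0.0, sigma))
        acc += math.exp(-L) * math.cos(L)
    return acc / n

small_t = IS_w_v_half(1e-4)  # close to w(0+) * v(0+) = 1, as (4.65) predicts
```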

Remark 4.57. The previous proposition will be used mainly when we cannot apply Theorem 4.41. In particular, we need the case with E = ℝ \ {0} and v(0+, x) = 0 for all x ∈ E.

4.7 The delayed Brownian motion and the time-fractional heat equation

Now we use Theorem 4.41 to characterize the solutions of some time-fractional equations. In this section, we focus on the time-fractional heat equation of the form

(^C D^α_{0+} u(⋅, x))(t) = (1/2) ∂²u(t, x)/∂x²,    t > 0, x ∈ ℝ,
u(0, x) = f(x),    x ∈ ℝ,    (4.70)

for some f ∈ 𝒞_0^2(ℝ).

Definition 4.58. We say that u is a strong solution of (4.70) if
(i) u ∈ 𝒞(ℝ^+_0; 𝒞_0(ℝ)), and u(t, ⋅) ∈ 𝒞_0^2(ℝ) for all t ∈ ℝ^+;
(ii) ^C D^α_{0+} u ∈ 𝒞(ℝ^+; 𝒞_0(ℝ));
(iii) (4.70) holds.

To find a strong solution of (4.70), we consider a standard Bm B = {B_t, t ≥ 0} and an inverse α-stable subordinator L^α = {L^α_t, t ≥ 0} independent of it. The Feller semigroup {T_t}_{t≥0} on 𝒞_0(ℝ) defined as

(T_t f)(x) = E[f(x + B_t)],    f ∈ 𝒞_0(ℝ), x ∈ ℝ,    (4.71)

has the generator A = (1/2) ∂²/∂x² on 𝒞_0^2(ℝ). Hence a direct application of Theorem 4.41 gives us the following result.

Theorem 4.59. Let α ∈ (0, 1). Then for all f ∈ 𝒞_0^2(ℝ), the time-fractional heat equation (4.70) has a unique strong solution, which equals

u(t, x) = E[T_{L^α_t} f(x)] = E[f(x + B_{L^α_t})],    t ≥ 0, x ∈ ℝ.    (4.72)

Proof. By Theorem 4.41 we already know that u(t, x) = E[T_{L^α_t} f(x)] is the unique strong solution of (4.70). Hence we only need to verify the second equality in (4.72). To do this, we use conditional expectation. Fix x ∈ ℝ and note that (4.71) holds for each fixed t > 0. Put F(t) = E[f(x + B_t)] = T_t f(x). Since L^α and B are independent, we have

E[f(x + B_{L^α_t}) | L^α_t] = F(L^α_t) = (T_{L^α_t} f)(x).

Taking the expectation, we get

E[f(x + B_{L^α_t})] = E[E[f(x + B_{L^α_t}) | L^α_t]] = E[(T_{L^α_t} f)(x)] = u(t, x).

Remark 4.60. The regularity condition (i) in the definition of a strong solution is crucial to guarantee the uniqueness. Indeed, similarly to the heat equation (see, for example, the book [199]), we can consider a classical solution of (4.70) as a function u : ℝ^+_0 × ℝ → ℝ satisfying (4.70) (i. e., satisfying (iii) but not necessarily (i) and (ii) of the definition of a strong solution of (4.70)). Although the strong solution is unique, for any initial data there are infinitely many classical solutions, as shown in [10, Theorem 6]. Among them, there exists a unique u such that lim_{x→±∞} u(t, x) = 0 locally uniformly with respect to t ∈ ℝ^+_0, as noticed in [10, Theorem 5] and [19, Corollary 2.8]. This is a consequence of the weak maximum principle established in [147, Theorem 2] for bounded domains and in [19, Theorem 2.4] for unbounded domains. Clearly, condition (i) of the definition of a strong solution guarantees that the unique classical solution u such that lim_{x→±∞} u(t, x) = 0 is in fact the strong solution (4.72). The weak maximum principle also guarantees that the strong solutions are continuous with respect to the initial data. Indeed, in [19, Corollary 2.9] (see also [147, Theorem 5] for the bounded domain case) it was proved that if u_1, u_2 are two strong solutions of (4.70) with initial data f_1, f_2, then

sup_{t≥0} ‖u_1(t, ⋅) − u_2(t, ⋅)‖_{L^∞(ℝ)} ≤ ‖f_1 − f_2‖_{L^∞(ℝ)}.

This can also be established by means of the nonexpansivity property of the semigroup {T_t}_{t≥0}.

In (4.72), we considered the process B^α := {B^α_t = B_{L^α_t}, t ≥ 0}.

Definition 4.61. The process B^α is called a delayed Brownian motion.

We adopt the nomenclature proposed in [148], where this process was called a delayed Brownian motion. We use this name instead of "time-changed Brownian motion", proposed in [157], to distinguish it from the subordinated Bm X^α defined in Section 4.3. The term "delayed" emphasizes the influence of the inverse α-stable subordinator L^α on the Bm on the long time scale. Indeed, on each discontinuity of the subordinator S^α, L^α is constant, and thus the process B^α is frozen in a certain position for a random amount of time. This is reflected, in particular, in the mean square value E[|B^α_t|²], which behaves as t^α (in place of t), that is, B^α is subdiffusive. However, note that the delay can only be observed on large time scales. Indeed, E[|B^α_t|²] behaves like t^α also as t → 0, and hence, for small times, it is larger than the variance of a Bm. The latter property has been mentioned also in [42] by means of exit times of such processes through irregular domains. Now let us present some properties of the process B^α.
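For α = 1/2 the representation (4.72) can be made completely explicit on a simple test profile. Take f(x) = cos x (not in 𝒞_0^2(ℝ), so this is only a formal illustration of the formula): then T_s f(x) = e^{−s/2} cos x, and, since E[e^{−λ L^α_t}] = E_α(−λ t^α) with E_α the Mittag-Leffler function, (4.72) gives u(t, x) = cos(x) E_α(−t^α/2). For α = 1/2 one has E_{1/2}(−y) = e^{y²} erfc(y) and L^{1/2}_t = |N(0, 2t)| in law, so the two sides can be compared by Monte Carlo. The following sketch is our own check, not part of the book's material.

```python
import math
import random

def u_mc(t, x, n=100_000, seed=2):
    """Monte Carlo for u(t, x) = E[f(x + B_{L_t})], f = cos, alpha = 1/2.
    Conditioning on L_t gives E[cos(x + B_L) | L] = exp(-L/2) * cos(x), so only
    samples of L^{1/2}_t = |N(0, 2t)| (in law) are needed."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * t)
    acc = 0.0
    for _ in range(n):
        L = abs(rng.gauss(0.0, sigma))
        acc += math.exp(-L / 2.0)
    return math.cos(x) * acc / n

t, x = 1.0, 0.3
mc = u_mc(t, x)
y = 0.5 * math.sqrt(t)                                # lambda * t^alpha, lambda = 1/2
exact = math.cos(x) * math.exp(y * y) * math.erfc(y)  # cos(x) * E_{1/2}(-sqrt(t)/2)
```

The Mittag-Leffler decay E_α(−t^α/2), slower than the exponential e^{−t/2} of the classical heat equation, is exactly the subdiffusive signature discussed above.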

This can be also established by means of the nonexpansivity property of the semigroup {Tt }t≥0 . In (4.72), we considered the process Bα := {Bαt = BLαt , t ≥ 0}. Definition 4.61. The process Bα is called a delayed Brownian motion. We adopt the nomenclature proposed in [148], where this process was called a delayed Brownian motion. We use this name instead of a time-changed Brownian motion proposed in [157] to distinguish it from the subordinated Bm X α defined in Section 4.3. The term delayed emphasizes the influence of the inverse α-stable subordinator Lα on the Bm in the long-time scale. Indeed, on each discontinuity of the subordinator S α , Lα is constant, and thus the process Bα is frozen in a certain position for a random amount of time. This is reflected, in particular, in the mean square value E [|Bαt |2 ], which behaves as t α (in place of t), that is, Bα is subdiffusive. However, note that the delay can only be observed on large time scales. Indeed, E [|Bαt |2 ] behaves like t α also as t → 0, and hence, for small times, it is faster than the variance of a Bm. The latter property has been mentioned also in [42] by means of exit times of such processes through irregular domains. Now let us present some properties of the process Bα .

294 � 4 Stochastic processes and fractional differential equations Proposition 4.62. Let α ∈ (0, 1), and let Bα be a delayed Bm. (i) Bα is α2 -self similar, i. e., for all n ∈ ℕ, 0 ≤ t1 ≤ ⋅ ⋅ ⋅ ≤ tn , and a > 0, α

(Bαat1 , . . . , Bαatn ) = a 2 (Bαt1 , . . . , Bαtn ) . (ii)

For all t ≥ 0, E [Bαt ] = 0.

(iii) For all p, t > 0, p

2− 2 Γ(p + 1) αp2 󵄨 󵄨p E [󵄨󵄨󵄨Bαt 󵄨󵄨󵄨 ] = t . Γ ( αp + 1) 2

(4.73)

(iv) For all t, s ≥ 0, E [Bαt Bαs ] =

(t ∧ s)α . Γ(α + 1)

(v) The increments of Bα are uncorrelated. (vi) The increments of Bα are neither stationary nor independent. (vii) For all t > 0, the distribution of Bαt has a density qα (t, ⋅) of the form +∞

qα (t, x) = ∫ p(s, x)fα (t, s)ds,

x ∈ ℝ,

(4.74)

0

where p and fα are defined in (4.19) and (4.21). (viii) The trajectories of Bα are a. s. γ-Hölder continuous for all γ < α2 . Proof. Throughout the proof, we denote by B the standard Bm and by Lα the inverse α-stable subordinator independent of it such that Bαt = BLαt for all t ≥ 0. (i) Let a > 0, n ∈ ℕ, and 0 ≤ t1 ≤ ⋅ ⋅ ⋅ ≤ tn . Consider a function f ∈ 𝒞b (ℝn ) and observe that 󵄨 E [f (Bαat1 , . . . , Bαatn )] = E [f (BLαat , . . . , BLαat )] = E [E [f (BLαat , . . . , BLαat ) 󵄨󵄨󵄨 Bt , t ≥ 0]] , 1

n

1

n

where in the inner expectation, we consider conditioning with respect to the whole trajectory of the Bm. Since Lα is independent of B, we can use the α-self similarity of Lα , stated in Proposition 4.29(i), and get 󵄨 󵄨 E [f (BLαat , . . . , BLαat ) 󵄨󵄨󵄨 Bt , t ≥ 0] = E [f (Baα Lαt , . . . , Baα Lαt ) 󵄨󵄨󵄨 Bt , t ≥ 0] . n n 1 1

4.7 The delayed Brownian motion and the time-fractional heat equation

� 295

Taking the expectation, we obtain 󵄨 E[f (Bαat1 , . . . , Bαatn )] = E[f (Baα Lαt , . . . , Baα Lαt )] = E[E[f (Baα Lαt , . . . , Baα Lαt ) 󵄨󵄨󵄨 Lαt , t ≥ 0]] , n

1

n

1

where this time, we consider conditioning with respect to the whole trajectory of the inverse subordinator Lα . Hence we can use the 21 -self-similarity of B to get α α 󵄨 󵄨 E [f (Baα Lαt , . . . , Baα Lαt ) 󵄨󵄨󵄨 Lαt , t ≥ 0] = E [f (a 2 BLαt , . . . , a 2 BLαt ) 󵄨󵄨󵄨 Lαt , t ≥ 0] . n n 1 1

Taking the expectations, we conclude that α

α

E [f (Bαat1 , . . . , Bαatn )] = E [f (a 2 Bαt1 , . . . , a 2 Bαtn )] . (ii)

Since f ∈ 𝒞b (ℝ) is arbitrary, we get the statement. Note that independence of Lαt and B implies that 󵄨 E [BLαt 󵄨󵄨󵄨 Lαt ] = 0. Therefore 󵄨 E [Bαt ] = E [E [BLαt 󵄨󵄨󵄨 Lαt ]] = 0.

(iii) Note that Bt is Gaussian for all t ≥ 0 and Lαt is independent of B, whence p

p+1 p 22 Γ( ) 󵄨󵄨 α 2 α 2 E [|B | 󵄨󵄨 Lt ] = (Lt ) . √π Lαt

p

Taking the expectation and using Proposition 4.34, we obtain that p

2 2 Γ ( p2 + 1) Γ ( p+1 ) αp 󵄨󵄨 α 󵄨󵄨p 2 p 󵄨󵄨 α E [󵄨󵄨Bt 󵄨󵄨 ] = E [E [|BLαt | 󵄨󵄨 Lt ]] = t2. αp √π Γ ( 2 + 1) Formula (4.73) follows by an application of Legendre duplication formula; see Proposition B.3. (iv) Fix t, s ≥ 0 and note that due to independence of Lαt , Lαs , and B, 󵄨 E [BLαt BLαs 󵄨󵄨󵄨 Lαt , Lαs ] = min {Lαt , Lαs } = Lαmin{t,s} , where we also used the fact that the trajectories of Lα are a. s. increasing. Taking the expectation and using Proposition 4.34, we get that (min{t, s})α 󵄨 E [Bαt Bαs ] = E [E [BLαt BLαs 󵄨󵄨󵄨 Lαt , Lαs ]] = . Γ(α + 1)

296 � 4 Stochastic processes and fractional differential equations (v)

To prove that the increments of Bα are uncorrelated, consider 0 ≤ s1 ≤ t1 ≤ s2 ≤ t2 and use (iv) to get E [(Bαt1 − Bαs1 ) (Bαt2 − Bαs2 )] =

t1α − t1α + s1α − s1α = 0. Γ(α + 1)

(vi) Let us show that the increments of Bα are not stationary. Let 0 ≤ s ≤ t. On the one hand, recalling that Lα and B are independent and that Lα is a. s. increasing, we get that 󵄨 E [(BLαt − BLαs )2 󵄨󵄨󵄨 Lαt , Lαs ] = Lαt − Lαs . Taking the expectations, we get 2

E [(Bαt − Bαs ) ] = E [Lαt − Lαs ] =

t α − sα . Γ(α + 1)

(4.75)

On the other hand, it follows from (4.73) that 2

E [(Bαt−s ) ] =

(t − s)α t α − sα 2 ≠ = E [(Bαt − Bαs ) ] . Γ(α + 1) Γ(α + 1)

To prove that the increments are not independent, consider 0 < t1 < t2 < t3 . Again, Lα and B are independent, and Lα is a. s. increasing, and therefore 󵄨 E [(BLαt − BLαt )2 (BLαt − BLαt )2 󵄨󵄨󵄨 Lt1 , Lt2 , Lt3 ] 1 3 2 2 󵄨 󵄨 = E [(BLαt − BLαt )2 󵄨󵄨󵄨 Lt3 , Lt2 , Lt1 ] E [(BLαt − BLαt )2 󵄨󵄨󵄨 Lt1 , Lt2 , Lt3 ] = (Lαt3 − Lαt2 ) (Lαt2 − Lαt1 ) . 1 3 2 2 Taking the expectations, we obtain that 2

2

E [(Bαt3 − Bαt2 ) (Bαt2 − Bαt1 ) ] = E [(Lαt3 − Lαt2 ) (Lαt2 − Lαt1 )] . Similarly, 2

2

E [(Bαt1 − Bαs1 ) ] E [(Bαt2 − Bαs2 ) ] = E [(Lαt3 − Lαt2 )] E [(Lαt2 − Lαt1 )] . We get the result from (4.22). (vii) Fix t > 0 and x ∈ ℝ, and note that independence of Lα and B supplies the equality x

󵄨 E [1(−∞,x] (BLαt ) 󵄨󵄨󵄨 Lαt ] = ∫ p (Lαt , y) dy. −∞

Therefore P (Bαt

+∞

x

0

−∞

x +∞

≤ x) = ∫ ( ∫ p(s, y)dy) fα (t, s)ds = ∫ ∫ p(s, y)fα (t, s)ds dy. −∞ 0

4.7 The delayed Brownian motion and the time-fractional heat equation

� 297

Taking the derivative, we get the result. (viii) Let us first evaluate E [|Bαt − Bαs |p ] for p ≥ 1. To do this, note that p

p 2 2 Γ ( p+1 )󵄨 α 󵄨 󵄨p 󵄨 2 󵄨󵄨L − Lα 󵄨󵄨󵄨 2 . E [󵄨󵄨󵄨󵄨BLαα − BLααs 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt , Lαs ] = t s 󵄨 󵄨 t √π

Taking the expectations, we get p

2 2 Γ ( p+1 ) 󵄨 α 󵄨 󵄨p 󵄨p 2 E [󵄨󵄨󵄨Bαt − Bαs 󵄨󵄨󵄨 ] = E [󵄨󵄨󵄨Lt − Lαs 󵄨󵄨󵄨 2 ] , √π

(4.76)

and (4.34) supplies that p

2 2 Γ ( p+1 ) Γ(p + 1) αp 󵄨 󵄨p 2 E [󵄨󵄨󵄨Bαt − Bαs 󵄨󵄨󵄨 ] ≤ |t − s| 2 . √πΓ(αp + 1)

(4.77)

Furthermore, lim

p→+∞

αp − 2 α = . 2p 2

Consider large values of p and let γ < α2 − p1 . Hence by (4.77) and the Kolmogorov– Chentsov theorem (Theorem C.1) we know that Bα is locally γ-Hölder continuous. This ends the proof. Remark 4.63. It follows from (4.76) and (4.33) that for all t ≥ s ≥ 0 and p ≥ 1, the moments of the increments of Bα equal p

Γ ( p2 + 1) 2 2 Γ ( p+1 ) αp p s 󵄨󵄨 α 2 α 󵄨󵄨p E [󵄨󵄨Bt − Bs 󵄨󵄨 ] = t 2 B (1 − ; α ( − 1) + 1, α) αp t 2 √π Γ ( 2 + 1) p

2− 2 Γ(p + 1) αp2 p s = t B (1 − ; α ( − 1) + 1, α) , αp t 2 Γ ( 2 + 1) where the second equality follows from Legendre duplication formula (Proposition B.3). Due to Proposition 4.62, we can give another representation of the strong solution of (4.70). Corollary 4.64. Let α ∈ (0, 1). Then for every f ∈ 𝒞02 (ℝ), (4.70) has a unique strong solution given by u(t, x) = ∫ f (x + y)qα (t, y)dy, ℝ

where qα is defined in (4.74).

t ≥ 0,

x ∈ ℝ,

(4.78)

298 � 4 Stochastic processes and fractional differential equations Similarly to what we did in Section 4.3, we could ask if the density qα solves the first equation in (4.70). Again, as in Section 4.5, we first need to clarify what is C Dα0+ qα , since qα (0, x) does not exist. However, once we rewrite qα = ISα p with p defined in (4.19), Proposition 4.56 with w ≡ 1 guarantees that for x ∈ ℝ \ {0}, qα (0+, x) := lim qα (t, x) = lim p(t, x) = 0. t↓0

t↓0

Arguing in the same way as for fα , we can substitute the evaluation at 0 in (4.38) with the right limits at 0, obtaining that for all α ∈ (0, 1), t > 0 and x ∈ ℝ \ {0}, (C Dα0+ qα ) (t, x) =

t

1 𝜕 ∫(t − τ)−α (qα (τ, x) − qα (0+, x))dτ Γ(1 − α) 𝜕t 0

t

1−α 𝜕(I0+ qα )(t, x) 1 𝜕 = , ∫(t − τ)−α qα (τ, x)dτ = Γ(1 − α) 𝜕t 𝜕t

(4.79)

0

where we also used (4.39) and (4.40). This argument cannot be applied in the case x = 0 due to the evident singularity of qα in (0, 0). Hence we cannot ask if the first equation of (4.70) is satisfied by qα since (C Dα0+ qα ) (t, 0) is not well-defined. However, we can ask if the relation (C Dα0+ qα ) (t, x) =

1 𝜕2 qα (t, x) , 2 𝜕x 2

t > 0,

x ∈ ℝ \ {0},

(4.80)

is satisfied. To do this, we first need to show that qα is sufficiently regular, as noted in [19, Proposition 3.8]. Proposition 4.65. Let α ∈ (0, 1). (i) For t > 0 and x ≠ 0,

+∞

𝜕qα (t,x) 𝜕x

and

𝜕2 qα (t,x) 𝜕x 2

𝜕qα (t, x) 𝜕p(s, x) = ∫ f (t, s)ds 𝜕x 𝜕x α 0

are well-defined, and 𝜕2 qα (t, x) 𝜕2 p(s, x) = fα (t, s)ds. ∫ 𝜕x 2 𝜕x 2 +∞ 0

(ii) For fixed t > 0, 𝜕qα (t, ⋅) 𝜕2 qα (t, ⋅) , ∈ 𝒞 (ℝ \ {0}). 𝜕x 𝜕x 2 (iii) For fixed x ≠ 0, 𝜕qα (⋅, x) 𝜕2 qα (⋅, x) , ∈ 𝒞 (ℝ+0 ) . 𝜕x 𝜕x 2

(4.81)

4.7 The delayed Brownian motion and the time-fractional heat equation

� 299

(iv) For all 0 < a < b and x ∈ ℝ with |x| ∈ [a, b], 2 2 󵄨󵄨 𝜕qα (t, x) 󵄨󵄨 󵄨󵄨󵄨 𝜕2 qα (t, x) 󵄨󵄨󵄨 a2 a2 󵄨󵄨 󵄨󵄨 + 󵄨󵄨 󵄨󵄨 ≤ √ 27 b e− 32 + max { b − t1 e− 4t1 , t2 − a e− 4t2 } , (4.82) 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 5 { 5 } 2π a3 󵄨 𝜕x 󵄨 󵄨󵄨 𝜕x 2 󵄨󵄨󵄨 2 t 2 √2π { t √2π } 1

2

where t1 =

5b2 + a2 − √(5b2 + a2 )2 − 12a2 b2 6

,

t2 = a2

3 + √6 . 3

(4.83)

Proof. Before proceeding, note that qα (t, x) = (ISα p) (t, x),

x ∈ ℝ \ {0},

t > 0,

and p(⋅, x) ∈ L∞ (ℝ+0 ) for each fixed x ∈ ℝ \ {0}. Furthermore, for all x ∈ ℝ \ {0} and s > 0, x2 𝜕p(s, x) x =− 3 e− 2s , 𝜕x s 2 √2π

𝜕2 p(s, x) x 2 − s − x2s2 = e . 5 𝜕x 2 s 2 √2π 2

𝜕p 𝜕 p In particular, for each fixed s ≥ 0, we have p(s,⋅) ∈ 𝒞 2 (ℝ \ {0}) and p, 𝜕x , 𝜕x 2 ∈ L∞ (ℝ+0 ×[a, b]) for all 0 < a < b. Applying item (v) of Proposition 4.54 with w ≡ 1, we get items (i) and (ii) of the current statement. Moreover, for each fixed x ∈ ℝ \ {0}, we 2 p(⋅,x) have 𝜕p(⋅,x) , 𝜕 𝜕x ∈ L∞ (ℝ+0 ) ∩ 𝒞 (ℝ+0 ), and hence by Proposition 4.52 and Remark 4.53 2 𝜕x we get (iii). To prove (iv), without loss of generality, assume that x > 0. Then for 0 < a ≤ x ≤ b,

󵄨󵄨 𝜕p(s, x) 󵄨󵄨 󵄨󵄨 󵄨󵄨 ≤ 󵄨󵄨 󵄨 󵄨 𝜕x 󵄨󵄨

b

3 2

4s √π

a2

e− 4s ≤ √

27 b − 32 e . 2π a3

Concerning the second derivative, 󵄨󵄨 𝜕2 p(s, x) 󵄨󵄨 |x 2 − s| a2 󵄨󵄨 󵄨󵄨 e− 2s . 󵄨󵄨 󵄨≤ 󵄨󵄨 𝜕x 2 󵄨󵄨󵄨 s 52 √2π If s ≤ x 2 , then 󵄨󵄨 𝜕2 p(s, x) 󵄨󵄨 b2 − s a2 b2 − t1 − 2ta2 󵄨󵄨 󵄨󵄨 − 2s ≤ e ≤ e 1, 󵄨󵄨 󵄨 5 󵄨󵄨 𝜕x 2 󵄨󵄨󵄨 s 52 √2π t12 √2π whereas if s ≥ x 2 , then 󵄨󵄨 𝜕2 p(s, x) 󵄨󵄨 s − a2 a2 t2 − a2 − 2ta2 󵄨󵄨 󵄨󵄨 − 2s ≤ e ≤ e 2, 󵄨󵄨 󵄨 5 5 󵄨 󵄨󵄨 𝜕x 2 󵄨󵄨 s 2 √2π 2√ t2 2π

(4.84)

300 � 4 Stochastic processes and fractional differential equations where t1 and t2 are defined in (4.83). Therefore for all x ∈ [a, b], 󵄨󵄨 𝜕2 p(s, x) 󵄨󵄨 { b2 − t1 − a2 t2 − a2 − a2 } 󵄨󵄨 󵄨󵄨 ≤ max e 4t1 , 5 e 4t2 } . 󵄨󵄨 󵄨 { 5 󵄨󵄨 𝜕x 2 󵄨󵄨󵄨 2√ 2√ t2 2π { t1 2π }

(4.85)

Substituting (4.84) and (4.85) into (4.81), we get (4.82). Now we are ready to prove that qα is a solution of (4.80). Theorem 4.66. Let α ∈ (0, 1). Then qα satisfies (4.80). Proof. Note that for all x ∈ ℝ and z > 0, Proposition 4.62(viii) and Proposition 4.33 imply the equalities +∞

ℒ[qα (⋅, x)] = ∫ e

+∞ +∞

−zs

qα (s, x)ds = ∫ ∫ e−zs p(u, x)fα (s, u)du ds

0

0 +∞

0

α

= zα−1 ∫ e−z u p(u, x)du = zα−1 ℒ[p(⋅, x)] (zα ) . 0

Hence Lemma 2.44(i) implies that for all z > 0 and x ∈ ℝ, 1−α

ℒ [(I0+ qα ) (⋅, x)] (z) = z

2(α−1)

α

ℒ[p(⋅, x)] (z ) .

(4.86)

Furthermore, due to (4.81) and Proposition 4.50, for all x ∈ ℝ \ {0}, ℒ[

2 𝜕2 qα (⋅, x) 𝜕2 p(⋅, x) α 𝜕 p(⋅, x) α−1 ] (z) = ℒ [IS ( )] (z) = z ℒ [ ] (zα ) 𝜕x 2 𝜕x 2 𝜕x 2 𝜕p(⋅, x) = 2zα−1 ℒ [ ] (zα ) = 2z2α−1 ℒ[p(⋅, x)] (zα ) , 𝜕t

where we also used the relation

𝜕2 p 𝜕x 2

(4.87)

= 2 𝜕p . Comparing (4.86) with (4.87), we see that 𝜕t

1−α

ℒ [(I0+ qα ) (⋅, x)] (z) =

𝜕2 qα (⋅, x) 1 ℒ[ ] (z). 2z 𝜕x 2

Taking the inverse Laplace transform, we get the equality 1−α (I0+ qα ) (t, x) =

t

1 𝜕2 qα (s, x) ds. ∫ 2 𝜕x 2 0

Differentiating in the t variable both sides of this equality and noticing that qα (0+ , x) = 0 for all x ≠ 0, we get the statement.

4.8 The delayed fractional Brownian motion

� 301

Let us emphasize that qα is not a strong solution of (4.70), as it is not even defined in t = 0. Remark 4.67. The approach based on Theorem 4.41 and used to solve (4.70) can also be adopted to provide solutions to other time-fractional generalizations of backward and forward Kolmogorov equations, as done, for example, in [13, 131, 136, 137, 150, 190, 191] and many other papers. In particular, the following space-time fractional heat equation was introduced in [264] and solved in [229] by the inverse subordination method stated in Theorem 4.23: β

{ d2 2 { { {(C Dα0+ u) (t, x) = − ((− 2 ) u) (t, x), dx { { { { {u(0, x) = f (x),

t > 0, x ∈ ℝ,

(4.88)

x ∈ ℝ,

where α ∈ (0, 1), β ∈ (0, 2], and f ∈ 𝒞02 (ℝ). The fundamental solution of (4.88) was studied in [150].

4.8 The delayed fractional Brownian motion The definition of the delayed Bm Bα , given in the previous section, can be extended to the case of the fBm. Indeed, let α, H ∈ (0, 1), let BH be an fBm with Hurst index H, and let Lα be an inverse α-stable subordinator independent of it. Definition 4.68. We define the delayed fractional Brownian motion as Bα,H := {Bα,H t , t ≥ 0} H by setting Bα,H := B α for t ≥ 0. L t t

First, let us study some basic properties of the process Bα,H , extending the results of (4.62) to H ≠ 21 . Proposition 4.69. Let α, H ∈ (0, 1), and let Bα,H be a delayed fBm. (i) Bα,H is Hα-self similar, i. e., for all n ∈ ℕ, 0 ≤ t1 ≤ ⋅ ⋅ ⋅ ≤ tn , and a > 0, α,H αH α,H (Bα,H (Bα,H at1 , . . . , Batn ) = a t1 , . . . , B tn ) .

(ii) E [Bα,H t ] = 0 for all t ≥ 0. (iii) For all p > 0 and t ≥ 0, p

2 2 Γ(Hp + 1)Γ ( p+1 ) αHp 󵄨󵄨 α,H 󵄨󵄨p 2 󵄨 󵄨 E [󵄨󵄨Bt 󵄨󵄨 ] = t . Γ(αHp + 1)√π

(4.89)

302 � 4 Stochastic processes and fractional differential equations (iv) For all t > 0, Bα,H is absolutely continuous with density qαH (t, ⋅) given by t +∞

qαH (t, x) = ∫ p (s2H , x) fα (t, s)ds,

x ∈ ℝ,

0

where p is defined in (4.19). (v) The trajectories of Bα,H are a. s. γ-Hölder continuous for all γ < αH. Proof. Items (i) and (ii) can be proved similarly to Proposition 4.62. (iii) Fix t ≥ 0. Since BtH is Gaussian and Lα is independent of BH , p

p+1 2 󵄨 󵄨p 󵄨 Hp 2 Γ ( 2 ) E [󵄨󵄨󵄨󵄨BLHα 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt ] = (Lαt ) . t √π

Taking the expectation and using Proposition 4.34, we get p

2 2 Γ(Hp + 1)Γ ( p+1 ) αHp 󵄨󵄨 α,H 󵄨󵄨p 2 󵄨 󵄨 E [󵄨󵄨Bt 󵄨󵄨 ] = t . Γ(αHp + 1)√π (iv) This statement can be proved in the same way as (viii) of Proposition 4.62, because p (t 2H , ⋅) is the density of BtH . p (v) Let p ≥ 1. Let us calculate E [|Bα,H − Bα,H s | ] for t ≥ s. First, we use Proposition 3.11 t and the fact that Lα is independent of BH to get that 󵄨 󵄨p 󵄨 󵄨 󵄨p 󵄨 E [󵄨󵄨󵄨󵄨BLHα − BLHαs 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt , Lαs ] = E [󵄨󵄨󵄨󵄨BLHα −Lαs 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt , Lαs ] . t t Now recall that BH is a Gaussian process with E [|BtH |] = t 2H . Hence we have p

p+1 2 󵄨 󵄨p 󵄨 󵄨 󵄨p 󵄨 󵄨 󵄨pH 2 Γ ( 2 ) E [󵄨󵄨󵄨󵄨BLHα − BLHαs 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt , Lαs ] = E [󵄨󵄨󵄨󵄨BLHα −Lαs 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨 Lαt , Lαs ] = 󵄨󵄨󵄨Lαt − Lαs 󵄨󵄨󵄨 . t t √π

Taking the expectations and applying (4.34), we get p

p

2 2 Γ ( p+1 ) 2 2 Γ ( p+1 ) Γ(pH + 1) 󵄨 󵄨󵄨 α 2 2 α,H 󵄨󵄨p α 󵄨󵄨pH 󵄨 E [󵄨󵄨󵄨󵄨Bα,H − B ] = E [ L − L ] ≤ |t − s|αpH . 󵄨 󵄨 󵄨 t s 󵄨 s󵄨 󵄨 t √π Γ(αpH + 1)√π (4.90) Furthermore, lim

p→+∞

αpH − 1 = αH. p

Hence for all γ < αH, we can find p such that γ < αH − p1 . By (4.90) and the Kolmogorov–Chentsov theorem (Theorem C.1) we know that Bα,H is a. s. locally γ-Hölder continuous.


Remark 4.70. The equality in (4.90), together with (4.33), implies the following formula for the moments of the increments of $B^{\alpha,H}$ for all $t \ge s \ge 0$ and $p \ge 1$:
$$\mathbb{E}\bigl[|B^{\alpha,H}_t - B^{\alpha,H}_s|^p\bigr] = \frac{\Gamma(pH+1)\,2^{p/2}\,\Gamma\bigl(\frac{p+1}{2}\bigr)}{\Gamma(\alpha pH+1)\sqrt{\pi}}\,t^{\alpha pH}\,B\Bigl(1-\frac{s}{t};\,\alpha(pH-1)+1,\,\alpha\Bigr).$$

In contrast to the case $H = \frac12$, the correlation structure of $B^{\alpha,H}$ is more complicated.

Proposition 4.71 ([168]). Let $\alpha, H \in (0,1)$. Then for all $0 < s \le t$,
$$\mathbb{E}\bigl[B^{\alpha,H}_t B^{\alpha,H}_s\bigr] = \frac{\Gamma(2H+1)}{2\Gamma(2\alpha H+1)}\Bigl[s^{2H\alpha} + t^{2H\alpha}\,B\Bigl(\frac{s}{t};\,\alpha,\,\alpha(2H-1)+1\Bigr)\Bigr].$$

Proof. For all $0 < s \le t$,
$$\mathbb{E}\bigl[B^H_{L^\alpha_t} B^H_{L^\alpha_s} \,\big|\, L^\alpha_t, L^\alpha_s\bigr] = \frac12\Bigl(\bigl|L^\alpha_t\bigr|^{2H} + \bigl|L^\alpha_s\bigr|^{2H} - \bigl|L^\alpha_t - L^\alpha_s\bigr|^{2H}\Bigr).$$
Taking the expectations and using Theorem 4.36 and Proposition 4.34, we obtain the desired result.

Let us return to the increments of the process $B^{\alpha,H}$.

Proposition 4.72. The increments of $B^{\alpha,H}$ are neither stationary nor independent.

Proof. Fix $0 < s < t$. On the one hand, according to (4.90),
$$\mathbb{E}\bigl[(B^{\alpha,H}_t - B^{\alpha,H}_s)^2\bigr] = \frac{\Gamma(2H+1)}{\Gamma(2\alpha H+1)}\,t^{2\alpha H}\,B\Bigl(1-\frac{s}{t};\,\alpha(2H-1)+1,\,\alpha\Bigr).$$
On the other hand, we get from (4.89) that
$$\mathbb{E}\bigl[(B^{\alpha,H}_{t-s})^2\bigr] = \frac{\Gamma(2H+1)}{\Gamma(2\alpha H+1)}\,(t-s)^{2\alpha H} = \frac{\Gamma(2H+1)\,t^{2\alpha H}}{\Gamma(2\alpha H+1)}\Bigl(1-\frac{s}{t}\Bigr)^{2\alpha H}.$$
Thus $\mathbb{E}[(B^{\alpha,H}_{t-s})^2] = \mathbb{E}[(B^{\alpha,H}_t - B^{\alpha,H}_s)^2]$ if and only if
$$B\Bigl(1-\frac{s}{t};\,\alpha(2H-1)+1,\,\alpha\Bigr) = \Bigl(1-\frac{s}{t}\Bigr)^{2\alpha H}. \quad (4.91)$$
However, for all $0 < s < t$,
$$B\Bigl(1-\frac{s}{t};\,\alpha(2H-1)+1,\,\alpha\Bigr) \le \Bigl(1-\frac{s}{t}\Bigr)^{2H\alpha+1-\alpha} < \Bigl(1-\frac{s}{t}\Bigr)^{2H\alpha},$$
and thus the increments are not stationary.

Now we prove that the increments of $B^{\alpha,H}$ are not independent. To do this, fix $0 < t_1 < t_2 < t_3$ and observe that
$$\mathbb{E}\bigl[B^{\alpha,H}_{t_3} - B^{\alpha,H}_{t_2}\bigr]\,\mathbb{E}\bigl[B^{\alpha,H}_{t_2} - B^{\alpha,H}_{t_1}\bigr] = 0.$$

However, Proposition 3.12 implies that
$$\mathbb{E}\Bigl[\bigl(B^H_{L^\alpha_{t_3}} - B^H_{L^\alpha_{t_2}}\bigr)\bigl(B^H_{L^\alpha_{t_2}} - B^H_{L^\alpha_{t_1}}\bigr)\,\Big|\,L^\alpha_{t_3}, L^\alpha_{t_2}, L^\alpha_{t_1}\Bigr] = \frac12\Bigl(\bigl|L^\alpha_{t_3} - L^\alpha_{t_1}\bigr|^{2H} - \bigl|L^\alpha_{t_3} - L^\alpha_{t_2}\bigr|^{2H} - \bigl|L^\alpha_{t_2} - L^\alpha_{t_1}\bigr|^{2H}\Bigr).$$
Taking the expectations with the help of (4.32), we get the equalities
$$\mathbb{E}\bigl[(B^{\alpha,H}_{t_3} - B^{\alpha,H}_{t_2})(B^{\alpha,H}_{t_2} - B^{\alpha,H}_{t_1})\bigr] = \frac{\Gamma(2H+1)}{2\Gamma(\alpha)\Gamma(\alpha(2H-1)+1)}\Bigl[t_3^{2\alpha H}\!\int_0^{1-\frac{t_1}{t_3}}\! z^{\alpha(2H-1)}(1-z)^{\alpha-1}dz - t_3^{2\alpha H}\!\int_0^{1-\frac{t_2}{t_3}}\! z^{\alpha(2H-1)}(1-z)^{\alpha-1}dz - t_2^{2\alpha H}\!\int_0^{1-\frac{t_1}{t_2}}\! z^{\alpha(2H-1)}(1-z)^{\alpha-1}dz\Bigr]$$
$$= \frac{\Gamma(2H+1)}{2\Gamma(\alpha)\Gamma(\alpha(2H-1)+1)}\Bigl[t_3^{2\alpha H}\!\int_{1-\frac{t_2}{t_3}}^{1-\frac{t_1}{t_3}}\! z^{\alpha(2H-1)}(1-z)^{\alpha-1}dz - t_2^{2\alpha H}\!\int_0^{1-\frac{t_1}{t_2}}\! z^{\alpha(2H-1)}(1-z)^{\alpha-1}dz\Bigr].$$
Now for fixed $0 < t_1 < t_2$, define the function
$$F(s) = s^{2\alpha H}\int_{1-\frac{t_2}{s}}^{1-\frac{t_1}{s}} z^{\alpha(2H-1)}(1-z)^{\alpha-1}\,dz, \quad s \ge t_2.$$
Evidently, $\mathbb{E}[(B^{\alpha,H}_{t_3} - B^{\alpha,H}_{t_2})(B^{\alpha,H}_{t_2} - B^{\alpha,H}_{t_1})] = 0$ for all $t_3 > t_2$ if and only if
$$F(s) = t_2^{2\alpha H}\int_0^{1-\frac{t_1}{t_2}} z^{\alpha(2H-1)}(1-z)^{\alpha-1}\,dz, \quad s \ge t_2.$$
This is true for $s = t_2$, and hence it is sufficient to prove that $F$ is not constant. Taking the derivative in $s$ by the Leibniz rule, we get
$$F'(s) = 2\alpha H s^{2\alpha H-1}\int_{1-\frac{t_2}{s}}^{1-\frac{t_1}{s}} z^{\alpha(2H-1)}(1-z)^{\alpha-1}\,dz + \frac{t_1^\alpha(s-t_1)^{\alpha(2H-1)} - t_2^\alpha(s-t_2)^{\alpha(2H-1)}}{s}, \quad s > t_2.$$
Note that $F'$ is continuous on $(t_2,\infty)$, so it suffices to find a value of $s$ with $F'(s) \ne 0$. Consider the behavior of $F'(s)$ as $s \downarrow t_2$. If $H > \frac12$, then $\alpha(2H-1) > 0$, so $t_2^\alpha(s-t_2)^{\alpha(2H-1)} \to 0$ and
$$\lim_{s\downarrow t_2} F'(s) = 2\alpha H t_2^{2\alpha H-1}\int_0^{1-\frac{t_1}{t_2}} z^{\alpha(2H-1)}(1-z)^{\alpha-1}\,dz + \frac{t_1^\alpha(t_2-t_1)^{\alpha(2H-1)}}{t_2} > 0.$$
If $H < \frac12$, then $\alpha(2H-1) < 0$, so $t_2^\alpha(s-t_2)^{\alpha(2H-1)} \to +\infty$ while the remaining terms stay bounded, whence $F'(s) \to -\infty$. In both cases $F'(s) \ne 0$ for $s$ close enough to $t_2$, and therefore $F$ is not constant. (For $H = \frac12$ the increments are uncorrelated, and their dependence was established in Proposition 4.62.)
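The moment formula (4.89) and the covariance of Proposition 4.71 are easy to evaluate numerically; the sketch below (function names are ours, not from the book) computes the regularized incomplete beta function $B(x;a,b)$ with a substitution that removes the endpoint singularity, which is adequate for $H \ge \frac12$.

```python
import math

def reg_inc_beta(x, a, b, m=20000):
    """Regularized incomplete beta B(x; a, b), via the substitution v = z**a
    (removes the z = 0 singularity; midpoint rule, adequate when b >= 1)."""
    hi = x ** a
    h = hi / m
    s = sum((1.0 - ((k + 0.5) * h) ** (1.0 / a)) ** (b - 1.0) for k in range(m))
    integral = s * h / a
    return integral / (math.gamma(a) * math.gamma(b) / math.gamma(a + b))

def abs_moment(p, t, alpha, H):
    """E|B^{alpha,H}_t|^p from (4.89)."""
    return (math.gamma(H * p + 1) * 2 ** (p / 2) * math.gamma((p + 1) / 2)
            / (math.gamma(alpha * H * p + 1) * math.sqrt(math.pi))) * t ** (alpha * H * p)

def cov_delayed_fbm(s, t, alpha, H):
    """E[B^{alpha,H}_t B^{alpha,H}_s] from Proposition 4.71 (0 < s <= t, H >= 1/2)."""
    c = math.gamma(2 * H + 1) / (2 * math.gamma(2 * alpha * H + 1))
    return c * (s ** (2 * H * alpha)
                + t ** (2 * H * alpha)
                * reg_inc_beta(s / t, alpha, alpha * (2 * H - 1) + 1))
```

A useful consistency check: at $s = t$ the covariance must reduce to the second moment from (4.89), since $B(1;a,b) = 1$.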

Since $B^H$ is not a Markov process, we cannot apply Theorem 4.41 to it. However, $B^H$ is a Gaussian process, and the density $p^H(t,\cdot)$ of $B^H_t$ has the form $p^H(t,x) = p(t^{2H}, x)$ for $t > 0$ and $x \in \mathbb{R}$. If for a fixed $x \in \mathbb{R}$ we consider the process ${}^x B^H := \{{}^x B^H_t = x + B^H_t,\ t \ge 0\}$, then for all $t > 0$, ${}^x B^H_t$ has the density $p^H(t, \cdot - x)$. Consider a random variable $X$ with density $f_X$ that is independent of $B^H$. Then the process ${}^X B^H := \{{}^X B^H_t = X + B^H_t,\ t \ge 0\}$ has the density
$$p^{H,X}(t,x) = \int_{\mathbb{R}} f_X(y)\,p^H(t,x-y)\,dy, \quad x \in \mathbb{R},\ t > 0, \quad (4.92)$$
and $p^{H,X}(0,x) = f_X(x)$ for $x \in \mathbb{R}$. Assume for simplicity that $f_X \in \mathcal{C}^\infty_c(\mathbb{R})$. Then we can prove that $p^{H,X}(\cdot,\cdot)$ satisfies the Fokker–Planck equation
$$\begin{cases} \dfrac{\partial p^{H,X}(t,x)}{\partial t} = H t^{2H-1}\,\dfrac{\partial^2 p^{H,X}(t,x)}{\partial x^2}, & t > 0,\ x \in \mathbb{R},\\[4pt] p^{H,X}(0,x) = f_X(x), & x \in \mathbb{R}. \end{cases} \quad (4.93)$$
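Equation (4.93) can be checked numerically on the kernel $p^H(t,x) = p(t^{2H},x)$ itself, which satisfies the same equation for $t > 0$ (the mollified density $p^{H,X}$ then inherits it by linearity). A small finite-difference sketch, with step sizes chosen ad hoc:

```python
import math

def p_H(t, x, H):
    # Gaussian density with variance t**(2*H): the density of B^H_t
    v = t ** (2 * H)
    return math.exp(-x * x / (2 * v)) / math.sqrt(2 * math.pi * v)

def fp_residual(t, x, H, ht=1e-5, hx=1e-3):
    """Central-difference residual of (4.93): d/dt p - H t^(2H-1) d2/dx2 p."""
    dpdt = (p_H(t + ht, x, H) - p_H(t - ht, x, H)) / (2 * ht)
    d2pdx2 = (p_H(t, x + hx, H) - 2 * p_H(t, x, H) + p_H(t, x - hx, H)) / hx ** 2
    return dpdt - H * t ** (2 * H - 1) * d2pdx2
```

The residual should vanish up to discretization error for any $H \in (0,1)$, including $H = \frac12$, where (4.93) reduces to the classical heat equation.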

Can we say something about the process ${}^X B^{\alpha,H} := \{{}^X B^{\alpha,H}_t = X + B^{\alpha,H}_t,\ t \ge 0\}$? First of all, a simple conditioning argument (as in Proposition 4.62(viii)) gives the following statement.

Proposition 4.73. Let $\alpha, H \in (0,1)$, and let $X$ be independent of both $B^H$ and $L^\alpha$. Assume further that $X$ has a density $f_X$. Then for all $t > 0$, ${}^X B^{\alpha,H}_t$ has a density
$$q_\alpha^{H,X}(t,x) = \int_{\mathbb{R}} f_X(y)\,q_\alpha^H(t,x-y)\,dy = \int_0^{+\infty} p^{H,X}(s,x)\,f_\alpha(t,s)\,ds, \quad t > 0,\ x \in \mathbb{R}, \quad (4.94)$$
and $q_\alpha^{H,X}(0,x) = f_X(x)$ for $x \in \mathbb{R}$.

Remark 4.74. Note that
$$\bigl|q_\alpha^{H,X}(t,x)\bigr| \le \|f_X\|_{L^\infty(\mathbb{R})}, \quad t \ge 0,\ x \in \mathbb{R}.$$
Therefore we can write
$$q_\alpha^{H,X}(t,x) = (IS^\alpha p^{H,X})(t,x), \quad t > 0,\ x \in \mathbb{R}. \quad (4.95)$$

We use this representation to prove the following statement.

Theorem 4.75. Let $X$ be a random variable independent of $B^H$ and having a density $f_X \in \mathcal{C}^\infty_c(\mathbb{R})$. Let $q_\alpha^{H,X}(t,\cdot)$ be the density of $B^{\alpha,H}_t + X$ for $t \ge 0$. Then it satisfies the following equation of Fokker–Planck type:
$$\begin{cases} ({}^C D^\alpha_{0+} q_\alpha^{H,X})(t,x) = \dfrac{\partial^2 (IS^\alpha_{w_H} p^{H,X})(t,x)}{\partial x^2}, & t > 0,\ x \in \mathbb{R},\\[4pt] q_\alpha^{H,X}(0,x) = f_X(x), & x \in \mathbb{R}, \end{cases} \quad (4.96)$$
where $w_H(t) = Ht^{2H-1}$ for $t > 0$.

Proof. Recall that $p^{H,X}(\cdot,\cdot)$ satisfies (4.93). Applying the Laplace transform to both sides of (4.93), we get
$$z\,\mathcal{L}\bigl[p^{H,X}(\cdot,x)\bigr](z) - f_X(x) = \mathcal{L}_{w_H}\Bigl[\frac{\partial^2 p^{H,X}(\cdot,x)}{\partial x^2}\Bigr](z), \quad z > 0, \quad (4.97)$$
where $w_H(t) = Ht^{2H-1}$, $t > 0$, belongs to $\mathcal{W}_\alpha$ according to Corollary 4.31 ($\mathcal{W}_\alpha$ is defined in Section 4.6). Rewrite (4.97), replacing $z$ with $z^\alpha$ and multiplying both sides by $z^{\alpha-1}$:
$$z^{2\alpha-1}\,\mathcal{L}\bigl[p^{H,X}(\cdot,x)\bigr](z^\alpha) - z^{\alpha-1}f_X(x) = z^{\alpha-1}\,\mathcal{L}_{w_H}\Bigl[\frac{\partial^2 p^{H,X}(\cdot,x)}{\partial x^2}\Bigr](z^\alpha), \quad z > 0.$$
Proposition 4.50 and the equality $\mathcal{L}[f_X(x)](z) = z^{-1}f_X(x)$ lead to the following relation:
$$z^{\alpha}\,\mathcal{L}\bigl[(IS^\alpha p^{H,X})(\cdot,x) - f_X(x)\bigr](z) = \mathcal{L}\Bigl[\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\Bigr)(\cdot,x)\Bigr](z), \quad z > 0. \quad (4.98)$$
Multiplying again both sides of (4.98) by $z^{-1}$ and applying (4.95), we get
$$z^{\alpha-1}\,\mathcal{L}\bigl[q_\alpha^{H,X}(\cdot,x) - f_X(x)\bigr](z) = z^{-1}\,\mathcal{L}\Bigl[\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\Bigr)(\cdot,x)\Bigr](z), \quad z > 0.$$
The inverse Laplace transform and Lemma 2.44(i) lead to the equality
$$\bigl(I^{1-\alpha}_{0+}(q_\alpha^{H,X}(\cdot,x) - f_X(x))\bigr)(t) = \int_0^t \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\Bigr)(s,x)\,ds, \quad t > 0. \quad (4.99)$$
Note that
$$\frac{\partial p^H(t,x)}{\partial x} = -\frac{x}{t^{3H}\sqrt{2\pi}}\,e^{-\frac{x^2}{2t^{2H}}}, \qquad \frac{\partial^2 p^H(t,x)}{\partial x^2} = \frac{x^2 - t^{2H}}{t^{5H}\sqrt{2\pi}}\,e^{-\frac{x^2}{2t^{2H}}}.$$
In particular, $\frac{\partial p^H}{\partial x}, \frac{\partial^2 p^H}{\partial x^2} \in \mathcal{C}(\mathbb{R}^+\times\mathbb{R})$. Since $f_X$ is compactly supported, the integral in (4.92) can be rewritten as
$$p^{H,X}(t,x) = \int_a^b f_X(y)\,p^H(t,x-y)\,dy,$$
where $\operatorname{supp}(f_X) \subseteq [a,b]$. Then the dominated convergence theorem implies that
$$\frac{\partial^2 p^{H,X}(t,x)}{\partial x^2} = \int_a^b f_X(y)\,\frac{\partial^2 p^H(t,x-y)}{\partial x^2}\,dy \quad (4.100)$$
and that $\frac{\partial^2 p^{H,X}(\cdot,x)}{\partial x^2} \in \mathcal{C}(\mathbb{R}^+)$ for all $x \in \mathbb{R}$. The same relation can be produced for $\frac{\partial p^{H,X}(t,x)}{\partial x}$. Hence we know from Proposition 4.52 that $\bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\bigr)(\cdot,x) \in \mathcal{C}(\mathbb{R}^+)$. Therefore we can differentiate both sides of (4.99), take into account that $q_\alpha^{H,X}(0,x) = f_X(x)$, and get that
$$({}^C D^\alpha_{0+} q_\alpha^{H,X})(t,x) = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\Bigr)(t,x), \quad t > 0. \quad (4.101)$$
Let $[c,d]$ be some interval, and let $x \in [c,d]$. Then
$$p^H(t,x) + \Bigl|\frac{\partial p^H(t,x)}{\partial x}\Bigr| + \Bigl|\frac{\partial^2 p^H(t,x)}{\partial x^2}\Bigr| \le \frac{t^{4H} + t^{2H}\max\{|c|,|d|\} + \max\{c^2,d^2\}}{t^{5H}\sqrt{2\pi}}\,e^{-\frac{\min\{c^2,d^2\}}{2t^{2H}}} =: f_{[c,d]}(t). \quad (4.102)$$
Hence for all $t > 0$ and $x \in [c,d]$,
$$p^{H,X}(t,x) + \Bigl|\frac{\partial p^{H,X}(t,x)}{\partial x}\Bigr| + \Bigl|\frac{\partial^2 p^{H,X}(t,x)}{\partial x^2}\Bigr| \le \|f_X\|_{L^1(\mathbb{R})}\,f_{[c-b,\,d-a]}(t),$$
where we applied (4.102) on the interval $[c-b, d-a] \ni x - y$ for $y \in \operatorname{supp}(f_X) \subseteq [a,b]$. Due to Proposition 4.54(v), we can change the order of the differentiation and the operator $IS^\alpha_{w_H}$ in (4.101) to get the desired result.

Remark 4.76. In fact, Theorem 4.75 holds under a less restrictive condition: $f_X \in L^\infty(\mathbb{R})$ and $f_X$ is a.e. equal to a function $g$ with compact support.

Theorem 4.75 can be extended by linearity to every $f \in \mathcal{C}^\infty_c(\mathbb{R})$.

Corollary 4.77. Let $f \in \mathcal{C}^\infty_c(\mathbb{R})$. For $t > 0$ and $x \in \mathbb{R}$, define
$$q_\alpha^{H,f}(t,x) = \int_{\mathbb{R}} f(y)\,q_\alpha^H(t,x-y)\,dy, \quad (4.103)$$
$$p^{H,f}(t,x) = \int_{\mathbb{R}} f(y)\,p^H(t,x-y)\,dy, \quad (4.104)$$
and put $q_\alpha^{H,f}(0,x) = p^{H,f}(0,x) = f(x)$. Then $q_\alpha^{H,f}$ solves the problem
$$\begin{cases} ({}^C D^\alpha_{0+} q_\alpha^{H,f})(t,x) = \dfrac{\partial^2 (IS^\alpha_{w_H} p^{H,f})(t,x)}{\partial x^2}, & t > 0,\ x \in \mathbb{R},\\[4pt] q_\alpha^{H,f}(0,x) = f(x), & x \in \mathbb{R}, \end{cases} \quad (4.105)$$
where $w_H(t) = Ht^{2H-1}$ for $t > 0$.

Proof. If $f \equiv 0$, then the statement is trivial. Let us first consider $f \ge 0$ and define $f_X(y) = \frac{f(y)}{\|f\|_{L^1(\mathbb{R})}}$. Then $f_X$ is a probability density function, and we can consider a random variable $X$ with density $f_X$ independent of $B^H$. Recalling the definition of $q_\alpha^{H,X}$ from (4.94), we get that for all $t > 0$ and $x \in \mathbb{R}$,
$$q_\alpha^{H,f}(t,x) = \int_{\mathbb{R}} f(y)\,q_\alpha^H(t,x-y)\,dy = \|f\|_{L^1(\mathbb{R})}\int_{\mathbb{R}} f_X(y)\,q_\alpha^H(t,x-y)\,dy = \|f\|_{L^1(\mathbb{R})}\,q_\alpha^{H,X}(t,x).$$
Similarly,
$$p^{H,f}(t,x) = \int_{\mathbb{R}} f(y)\,p^H(t,x-y)\,dy = \|f\|_{L^1(\mathbb{R})}\int_{\mathbb{R}} f_X(y)\,p^H(t,x-y)\,dy = \|f\|_{L^1(\mathbb{R})}\,p^{H,X}(t,x).$$
The Dzhrbashyan–Caputo fractional derivative in time, the second derivative in space, and the operator $IS^\alpha_{w_H}$ are linear, and we get by Theorem 4.75 that
$$({}^C D^\alpha_{0+} q_\alpha^{H,f})(t,x) = \|f\|_{L^1(\mathbb{R})}\,({}^C D^\alpha_{0+} q_\alpha^{H,X})(t,x) = \|f\|_{L^1(\mathbb{R})}\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,X}}{\partial x^2}\Bigr)(t,x) = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t,x)$$
for all $t > 0$ and $x \in \mathbb{R}$. Due to Remark 4.76, we get the same statement if $f \in L^\infty(\mathbb{R})$ has compact support and $f \ge 0$ a.e.

Now consider an arbitrary $f \in \mathcal{C}^\infty_c(\mathbb{R})$ and define $f_\pm(x) = \max\{0, \pm f(x)\}$ for $x \in \mathbb{R}$. Clearly, $f_\pm \in L^\infty(\mathbb{R})$ have compact supports, and $f_\pm \ge 0$. Furthermore, $f = f_+ - f_-$, whence
$$q_\alpha^{H,f}(t,x) = \int_{\mathbb{R}} f(y)\,q_\alpha^H(t,x-y)\,dy = q_\alpha^{H,f_+}(t,x) - q_\alpha^{H,f_-}(t,x),$$
and similarly $p^{H,f}(t,x) = p^{H,f_+}(t,x) - p^{H,f_-}(t,x)$. Again, by the linearity of the involved operators we get the equalities
$$({}^C D^\alpha_{0+} q_\alpha^{H,f})(t,x) = ({}^C D^\alpha_{0+} q_\alpha^{H,f_+})(t,x) - ({}^C D^\alpha_{0+} q_\alpha^{H,f_-})(t,x) = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t,x)$$

for all $t > 0$ and $x \in \mathbb{R}$.

Adding a random variable $X$ to $B^H$ regularizes in a certain sense the density of fBm, which would otherwise converge to the Dirac delta measure as $t \downarrow 0$. However, we can prove that (4.96) is true (in some sense) for $q_\alpha^H$ as well. To do this, note that for $x \ne 0$, $q_\alpha^H(0+,x) := \lim_{t\downarrow0} q_\alpha^H(t,x) = \lim_{t\downarrow0} \mathbb{E}[p^H(L^\alpha_t, x)] = 0$ by the dominated convergence theorem. Therefore, as in Section 4.7, we can define
$$({}^C D^\alpha_{0+} q_\alpha^H)(t,x) = \frac{1}{\Gamma(1-\alpha)}\frac{\partial}{\partial t}\int_0^t (t-\tau)^{-\alpha}\bigl(q_\alpha^H(\tau,x) - q_\alpha^H(0+,x)\bigr)\,d\tau = \frac{\partial (I^{1-\alpha}_{0+} q_\alpha^H)(t,x)}{\partial t} \quad (4.106)$$
for any $t > 0$ and $x \in \mathbb{R}\setminus\{0\}$. With this definition, we can prove the following result.

Proposition 4.78. Let $\alpha, H \in (0,1)$. Then for all $t > 0$ and $x \in \mathbb{R}\setminus\{0\}$,
$$({}^C D^\alpha_{0+} q_\alpha^H)(t,x) = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^H}{\partial x^2}\Bigr)(t,x),$$

where $w_H(t) = Ht^{2H-1}$ for $t > 0$.

Proof. The case $H = \frac12$ has already been considered in Theorem 4.66, so we focus on $H \ne \frac12$. Let us proceed taking into account (4.106). Fubini theorem implies that
$$(I^{1-\alpha}_{0+} q_\alpha^H)(t,x) = \int_0^{+\infty} p^H(s,x)\int_0^t \frac{f_\alpha(\tau,s)}{\Gamma(1-\alpha)(t-\tau)^{\alpha}}\,d\tau\,ds = \int_0^{+\infty} p^H(s,x)\,(I^{1-\alpha}_{0+} f_\alpha)(t,s)\,ds.$$
Let $0 < a < b$ and $t \in [a,b]$. It follows from Theorem 4.47 and (4.46) that there exists a constant $C_{[a,b]} > 0$ depending on $a, b$ such that for all $t \in [a,b]$ and $s > 0$,
$$\bigl|({}^C D^\alpha_{0+} f_\alpha)(t,s)\bigr| = \Bigl|\frac{\partial}{\partial t}(I^{1-\alpha}_{0+} f_\alpha)(t,s)\Bigr| = \Bigl|\frac{\partial f_\alpha(t,s)}{\partial s}\Bigr| \le C_{[a,b]}\min\{1, s^{-2}\}. \quad (4.107)$$
Recall that ${}^C D^\alpha_{0+} f_\alpha$ is continuous in $\mathbb{R}^+\times\mathbb{R}^+$ according to Theorem 4.47. Hence, in particular, $(I^{1-\alpha}_{0+} f_\alpha)(\cdot,s)$ belongs to $\mathcal{C}^1(\mathbb{R}^+)$ for all $s > 0$. Let $t \in (a,b)$, and let $h \in \mathbb{R}$ be such that $t + h \in [a,b]$. Applying the Lagrange theorem and the fact that the derivative of $(I^{1-\alpha}_{0+} f_\alpha)(\cdot,s)$ is $({}^C D^\alpha_{0+} f_\alpha)(\cdot,s)$ (as in (4.44)), we claim the existence of $\xi(s,t,h) \in [a,b]$ such that
$$\Bigl|\frac{(I^{1-\alpha}_{0+} f_\alpha)(t+h,s) - (I^{1-\alpha}_{0+} f_\alpha)(t,s)}{h}\Bigr| = \bigl|({}^C D^\alpha_{0+} f_\alpha)(\xi(s,t,h),s)\bigr| \le C_{[a,b]}\min\{1, s^{-2}\}.$$
Let also $x \in \mathbb{R}\setminus\{0\}$, and consider $c < x < d$ such that $0 \notin [c,d]$. Then
$$\frac{(I^{1-\alpha}_{0+} q_\alpha^H)(t+h,x) - (I^{1-\alpha}_{0+} q_\alpha^H)(t,x)}{h} = \int_0^{+\infty} p^H(s,x)\,\frac{(I^{1-\alpha}_{0+} f_\alpha)(t+h,s) - (I^{1-\alpha}_{0+} f_\alpha)(t,s)}{h}\,ds.$$
Notice that
$$\Bigl|p^H(s,x)\,\frac{(I^{1-\alpha}_{0+} f_\alpha)(t+h,s) - (I^{1-\alpha}_{0+} f_\alpha)(t,s)}{h}\Bigr| \le C_{[a,b]}\,f_{[c,d]}(s)\min\{1, s^{-2}\},$$
where $f_{[c,d]}$ is defined in (4.102). It is not difficult to check that
$$\int_0^{+\infty} f_{[c,d]}(s)\min\{1, s^{-2}\}\,ds < \infty.$$
Thus by the dominated convergence theorem we have
$$\lim_{h\to0}\frac{(I^{1-\alpha}_{0+} q_\alpha^H)(t+h,x) - (I^{1-\alpha}_{0+} q_\alpha^H)(t,x)}{h} = \int_0^{+\infty} p^H(s,x)\,({}^C D^\alpha_{0+} f_\alpha)(t,s)\,ds,$$
that is,
$$({}^C D^\alpha_{0+} q_\alpha^H)(t,x) = \int_0^{+\infty} p^H(s,x)\,({}^C D^\alpha_{0+} f_\alpha)(t,s)\,ds. \quad (4.108)$$
Now we prove that ${}^C D^\alpha_{0+} q_\alpha^H$ is continuous in $\mathbb{R}^+\times(\mathbb{R}\setminus\{0\})$. Fix $(t,x) \in \mathbb{R}^+\times(\mathbb{R}\setminus\{0\})$ and consider a sequence $(t_n, x_n) \to (t,x)$. We can assume, without loss of generality, that for all $n \in \mathbb{N}$ we have $t_n \in (a,b)$ for some $0 < a < b$ and $x_n \in (c,d)$ for some $c < d$ such that $0 \notin [c,d]$. Then we get from (4.108) that
$$({}^C D^\alpha_{0+} q_\alpha^H)(t_n,x_n) = \int_0^{+\infty} p^H(s,x_n)\,({}^C D^\alpha_{0+} f_\alpha)(t_n,s)\,ds.$$
It follows from (4.107) and (4.102) that
$$\bigl|p^H(s,x_n)\,({}^C D^\alpha_{0+} f_\alpha)(t_n,s)\bigr| \le C_{[a,b]}\,f_{[c,d]}(s)\min\{1, s^{-2}\},$$
where the right-hand side belongs to $L^1(\mathbb{R}^+_0)$. The dominated convergence theorem allows us to pass to the limit:
$$\lim_{n\to+\infty}({}^C D^\alpha_{0+} q_\alpha^H)(t_n,x_n) = \int_0^{+\infty} p^H(s,x)\,({}^C D^\alpha_{0+} f_\alpha)(t,s)\,ds = ({}^C D^\alpha_{0+} q_\alpha^H)(t,x).$$
So ${}^C D^\alpha_{0+} q_\alpha^H$ is continuous at all points $(t,x) \in \mathbb{R}^+\times(\mathbb{R}\setminus\{0\})$. Furthermore, if $0 < a \le t \le b$ and $x \in [c,d]$ for some $c < d$ such that $0 \notin [c,d]$, then
$$\bigl|({}^C D^\alpha_{0+} q_\alpha^H)(t,x)\bigr| \le \int_0^{+\infty} C_{[a,b]}\,f_{[c,d]}(s)\min\{1, s^{-2}\}\,ds, \quad (4.109)$$
where the function $f_{[c,d]}$ and the constant $C_{[a,b]}$ are defined in (4.102) and (4.107), respectively.

Fix $n \in \mathbb{N}$ and consider a function $f \in \mathcal{C}^\infty_c(\mathbb{R})$ with $\operatorname{supp}(f) \subseteq [\frac1n, \frac2n]$. Also, fix $t > 0$ and $x > \frac2n$. Then it follows from the definition of $q_\alpha^{H,f}$ and the inequality $x - y > 0$ for $y \in [\frac1n, \frac2n]$ that
$$(I^{1-\alpha}_{0+} q_\alpha^{H,f})(t,x) = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-\tau)^{-\alpha}\int_{1/n}^{2/n} f(y)\,q_\alpha^H(\tau,x-y)\,dy\,d\tau = \int_{1/n}^{2/n} f(y)\,(I^{1-\alpha}_{0+} q_\alpha^H)(t,x-y)\,dy. \quad (4.110)$$
Note that we could apply Fubini theorem because
$$(t-\tau)^{-\alpha}\,|f(y)|\,q_\alpha^H(\tau,x-y) \le \frac{\|f\|_{L^\infty(\mathbb{R})}}{(t-\tau)^{\alpha}}\,\mathbf{1}_{[\frac1n,\frac2n]}(y)\Bigl(\sup_{s\ge0} f_{[c-\frac2n,\,d-\frac1n]}(s)\Bigr), \quad (4.111)$$
with $f_{[c-\frac2n,\,d-\frac1n]}$ defined in (4.102), and the right-hand side of (4.111) belongs to $L^1([0,t]\times[\frac1n,\frac2n])$. Now let $0 < c < d$ and $0 < a < b$ be such that $\frac2n < c < x < d$ and $a < t < b$. Then $x - y \in [c-\frac2n, d-\frac1n]$ for all $y \in [\frac1n,\frac2n]$, where $c - \frac2n > 0$, and we get from (4.109) that
$$\bigl|f(y)\,({}^C D^\alpha_{0+} q_\alpha^H)(t,x-y)\bigr| \le \|f\|_{L^\infty(\mathbb{R})}\Bigl(\int_0^{+\infty} C_{[a,b]}\,f_{[c-\frac2n,\,d-\frac1n]}(s)\min\{1, s^{-2}\}\,ds\Bigr).$$
The right-hand side is a constant and is integrable on $[\frac1n,\frac2n]$. The dominated convergence theorem allows us to take the derivative with respect to $t$ inside the integral on the right-hand side of (4.110), in the same way as for (4.108), and as a result we get
$$({}^C D^\alpha_{0+} q_\alpha^{H,f})(t,x) = \int_{1/n}^{2/n} f(y)\,({}^C D^\alpha_{0+} q_\alpha^H)(t,x-y)\,dy. \quad (4.112)$$
Consider again $t > 0$ and $x > \frac2n$, and let $c < d$ be such that $\frac2n < c < x < d$. Then $x - y \in [c-\frac2n, d-\frac1n]$ for all $y \in [\frac1n,\frac2n]$, where $c - \frac2n > 0$, and we get that
$$\Bigl|f(y)\,\frac{\partial p^H(t,x-y)}{\partial x}\Bigr| + \Bigl|f(y)\,\frac{\partial^2 p^H(t,x-y)}{\partial x^2}\Bigr| \le \|f\|_{L^\infty(\mathbb{R})}\,f_{[c-\frac2n,\,d-\frac1n]}(t),$$
where $f_{[c-\frac2n,\,d-\frac1n]}$ is defined in (4.102) and belongs to $L^1(\mathbb{R}^+_0)$. The dominated convergence theorem implies that
$$\frac{\partial^2 p^{H,f}(t,x)}{\partial x^2} = \int_{1/n}^{2/n} f(y)\,\frac{\partial^2 p^H}{\partial x^2}(t,x-y)\,dy,$$
where $p^{H,f}$ is defined in (4.104). Furthermore, for $t \in [a,b]$ and $x \in [c,d]$, where $0 < a < b$ and $\frac2n < c < x < d$,
$$\Bigl|\frac{\partial^2 p^{H,f}}{\partial x^2}(t,x)\Bigr| \le \frac{\|f\|_{L^\infty(\mathbb{R})}}{n}\Bigl(\sup_{s\ge0} f_{[c-\frac2n,\,d-\frac1n]}(s)\Bigr).$$
Moreover, recall that $\mathbb{E}[w_H(L^\alpha_t)] < \infty$, and therefore, applying again Fubini theorem, we obtain the relations
$$\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t,x) = \int_{1/n}^{2/n} f(y)\int_0^{+\infty} w_H(s)\,\frac{\partial^2 p^H}{\partial x^2}(s,x-y)\,f_\alpha(t,s)\,ds\,dy = \int_{1/n}^{2/n} f(y)\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^H}{\partial x^2}\Bigr)(t,x-y)\,dy. \quad (4.113)$$
Notice also that $\bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\bigr)(t,x)$ is continuous in $(t,x) \in \mathbb{R}^+\times(\mathbb{R}\setminus\{0\})$. Indeed, consider a sequence $(t_m, x_m) \to (t,x) \in \mathbb{R}^+\times(\mathbb{R}\setminus\{0\})$ as $m \to \infty$ and observe that
$$\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t_m,x_m) = \mathbb{E}\Bigl[w_H(L^\alpha_{t_m})\,\frac{\partial^2 p^{H,f}}{\partial x^2}(L^\alpha_{t_m}, x_m)\Bigr].$$
We can assume, without loss of generality, that $x_m \in [c,d]$ for some $\frac2n < c < d$ and $0 < a < t_m < b$. Then
$$w_H(L^\alpha_{t_m})\,\Bigl|\frac{\partial^2 p^{H,f}}{\partial x^2}(L^\alpha_{t_m}, x_m)\Bigr| \le \frac{\|f\|_{L^\infty(\mathbb{R})}}{n}\Bigl(\sup_{s\ge0} f_{[c-\frac2n,\,d-\frac1n]}(s)\Bigr)\,w_H(L^\alpha_{c_0(H)}),$$
where $c_0(H) = a$ if $H < \frac12$ and $c_0(H) = b$ if $H > \frac12$. Hence we get by the dominated convergence theorem and the continuity of all the involved quantities that
$$\lim_{m\to+\infty}\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t_m,x_m) = \mathbb{E}\Bigl[w_H(L^\alpha_t)\,\frac{\partial^2 p^{H,f}}{\partial x^2}(L^\alpha_t, x)\Bigr] = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^{H,f}}{\partial x^2}\Bigr)(t,x).$$
We know from Corollary 4.77 that $q_\alpha^{H,f}$ satisfies (4.105). Due to (4.112) and (4.113), this equation can be rewritten as
$$\int_{1/n}^{2/n} f(y)\,({}^C D^\alpha_{0+} q_\alpha^H)(t,x-y)\,dy = \int_{1/n}^{2/n} f(y)\Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^H}{\partial x^2}\Bigr)(t,x-y)\,dy. \quad (4.114)$$
Since $f \in \mathcal{C}^\infty_c([\frac1n,\frac2n])$ is arbitrary, we get that for fixed $t > 0$ and $x > \frac2n$,
$$({}^C D^\alpha_{0+} q_\alpha^H)(t,x-y) = \Bigl(IS^\alpha_{w_H}\frac{\partial^2 p^H}{\partial x^2}\Bigr)(t,x-y) \quad (4.115)$$
for a.a. $y \in [\frac1n,\frac2n]$. However, since the two involved functions are continuous, the equality holds for all $y \in [\frac1n,\frac2n]$. Finally, note that for $x > 0$ there exists $n \in \mathbb{N}$ such that $\frac2n < x$. Once we consider $y \in [\frac1n,\frac2n]$, we have $x + y > \frac2n$. Therefore (4.115) with $x + y$ in place of $x$ guarantees the desired statement. The reasoning for $x < 0$ is the same.

4.9 The delayed fractional Ornstein–Uhlenbeck process

The construction given in Section 4.8 can be generalized to a wider family of processes. As an example, consider the delayed fOU process introduced in [16]. Let $\lambda, \sigma > 0$ and $H, \alpha \in (0,1)$, let $B^H = \{B^H_t, t \ge 0\}$ be a (one-sided) fBm, let $\xi$ be a random variable independent of it, and let $L^\alpha$ be an inverse $\alpha$-stable subordinator independent of both $\xi$ and $B^H$. Let $U^{H,\xi} = \{U^{H,\xi}_t, t \ge 0\}$ be the fOU process defined in (3.45).

Definition 4.79. We define the delayed fractional Ornstein–Uhlenbeck process as $U^{\alpha,H,\xi} := \{U^{\alpha,H,\xi}_t, t \ge 0\}$, where $U^{\alpha,H,\xi}_t = U^{H,\xi}_{L^\alpha_t}$ for $t \ge 0$.

Note that in [16] it is assumed that $\xi = 0$ a.s. Similarly to the delayed fBm, let us first provide some basic properties of the process $U^{\alpha,H,\xi}$.
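The time change in Definition 4.79 can be sanity-checked by Monte Carlo. The sketch below uses two facts that are standard but not stated at this point of the text, so we flag them as assumptions: the identity in distribution $L^\alpha_t \overset{d}{=} (t/S_1)^\alpha$, where $S_1$ is a standard positive $\alpha$-stable variable, and Kanter's generator for $S_1$. It estimates $\mathbb{E}[e^{-\lambda L^\alpha_t}]$, which should equal $E_\alpha(-\lambda t^\alpha)$, the quantity governing the mean of the delayed fOU process.

```python
import math, random

def kanter_stable(alpha, rng):
    # standard positive alpha-stable sample with E[exp(-u*S)] = exp(-u**alpha)
    u = rng.uniform(0.0, math.pi)
    e = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

def mittag_leffler(alpha, z, terms=80):
    # truncated series for E_alpha(z); adequate for moderate |z|
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def mean_exp_inverse_subordinator(alpha, lam, t, n, seed=0):
    """Monte Carlo estimate of E[exp(-lam * L^alpha_t)],
    sampling L^alpha_t as (t / S_1)**alpha (an assumed identity)."""
    rng = random.Random(seed)
    return sum(math.exp(-lam * (t / kanter_stable(alpha, rng)) ** alpha)
               for _ in range(n)) / n
```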

p 2 2 Γ ( p+1 ) 󵄨 󵄨󵄨p 2 󵄨 E [󵄨󵄨󵄨󵄨Uα,H,0 ] = E [(VH (Lαt )) 2 ] , 󵄨 t 󵄨 √π

where VH was introduced in (3.63). H,x (iii) If ξ = x ∈ ℝ a. s., then for all t > 0, Uα,H,x has a density qα,fOU (t, ⋅) of the form t +∞

H,x qα,fOU (t, y) = ∫ p (VH (s), y − xe−λs ) fα (t, s)ds,

y ∈ ℝ,

(4.116)

0

where p is defined in (4.19). H,ξ (iv) If ξ has a density fξ , then for all t ≥ 0, Uα,H,ξ has a density qα,fOU with t +∞

H,ξ qα,fOU (t, y) = ∫ ∫ p (VH (s), y − xe−λs ) fα (t, s)fξ (x)ds dx,

y ∈ ℝ,

t ∈ ℝ+ , (4.117)

ℝ 0

H,ξ and qα,fOU (0, y) = fξ (y) for y ∈ ℝ.

Proof. Let U H,ξ be the fOU process such that Uα,H,ξ = ULH,ξ α . t t

(i) Fix t ≥ 0 and recall that Lα , ξ, and U H,x are independent. Therefore it follows from (3.48) that 󵄨󵄨 α −λLα E [ULH,ξ 󵄨󵄨 Lt , ξ] = ξe t . α t Taking the expectations and applying Proposition 4.32, we get the desired formula. (ii) Obviously, U H,0 is a Gaussian process. Since Lα and U H,0 are independent, we can calculate the following conditional moments: p

p+1 p 22 Γ( ) 󵄨 󵄨󵄨p 󵄨󵄨 α 2 α 2 E [󵄨󵄨󵄨󵄨ULH,0 , α 󵄨󵄨 󵄨󵄨 Lt ] = (VH (Lt )) 󵄨 t √π

(4.118)

4.9 The delayed fractional Ohrnstein–Uhlenbeck process



315

where VH is from (3.63). The function VH : ℝ+0 → ℝ is continuous and bounded, since p limt→+∞ VH (t) = VH,∞ < ∞ by (3.64), hence E [(VH (Lαt )) 2 ] < ∞ for all t, p > 0. So we can take the expectations in (4.118) and get (iii). (iii) Note that UtH,x , t > 0, has a density −λt pH,x ), fOU (t, y) = p (VH (t), y − xe

where p is from (4.19). We get the statement with the same argument as in (vii) of Proposition 4.62. (iv) Let y ∈ ℝ. Since ξ, BH , and Lα are independent, P (Uα,H,ξ s

y

H,x ≤ y | ξ = x) = ∫ qα,fOU (s, u)du. −∞

Integrating with respect to the density of ξ, we get that y

y

H,x H,x P (Uα,H,ξ ≤ y) = ∫ ∫ qα,fOU (s, u)fξ (x)du dx = ∫ ∫ qα,fOU (s, u)fξ (x)du dx, s ℝ −∞

−∞ ℝ

where we could apply Fubini theorem since the integrand is nonnegative. Differentiation with respect to y concludes the proof. Now let us give some further properties of the density of the delayed fOU process, with particular attention to Fokker–Planck-type equations. First, we state the following result. Theorem 4.81. Let H > 21 , and let BH be an fBm. Consider a random variable ξ that is independent of BH , admits a density fξ ∈ 𝒞c∞ (ℝ) and expectation E[ξ] = 0. Let Uα,H,ξ be H,ξ the delayed fOU process with initial data ξ. Then its density qα,fOU satisfies

𝜕2 (ISαwH pH,ξ ) (t, x) { fOU { {(C Dα0+ qH,ξ ) (t, x) = , α,fOU 𝜕x 2 { { H,ξ { {qα,fOU (0, x) = fξ (x),

t > 0, x ∈ ℝ, x ∈ ℝ,

(4.119)

where pH,ξ (t, x) = ∫ p (VH (t), y − xe−λt ) fξ (x)dx, fOU

t ∈ ℝ+ ,

x ∈ ℝ,



and wH (t) =

VH′ (t) 2

for t ∈ ℝ+ .

We omit the proof, which is similar to that of Theorem 4.75. Observe that to avoid terms involving the first derivative in the x variable, we have to prescribe E[ξ] = 0.

316 � 4 Stochastic processes and fractional differential equations H,x Hence we cannot use the same arguments of Proposition 4.78 to work with qα,fOU for x ∈ ℝ. Nevertheless, we can still say something about the case x = 0. More precisely, H,0 arguing as in the previous section, it is not difficult to see that qα,fOU (0+, y) = 0 for y ≠ 0, and then we can define H,0 (C Dα0+ qα,fOU ) (t, y)

t

1 𝜕 H,0 H,0 = (τ, y) − qα,fOU (0+, y)) dτ ∫(t − τ)−α (qα,fOU Γ(1 − α) 𝜕t 0

t

=

1−α H,0 𝜕 (I0+ qα,fOU ) (t, y) 1 𝜕 H,0 (τ, y)dτ = , ∫(t − τ)−α qα,fOU Γ(1 − α) 𝜕t 𝜕t

y ≠ 0.

(4.120)

0

We have the following theorem. Theorem 4.82. Let H > 21 , and let Uα,H,0 be the delayed fOU process with initial data 0. H,0 Then its density qα,fOU satisfies H,0 (C Dα0+ qα,fOU ) (t, y) =

𝜕2 (ISαwH pH,0 ) (t, y) fOU 𝜕y2

,

t ∈ ℝ+ ,

y ∈ ℝ \ {0},

where pH,0 (t, y) = p(VH (t), y) for t ∈ ℝ+ and y ∈ ℝ, and wH (t) = fOU

VH′ (t) 2

(4.121)

for t > 0.

H,0 This has been proven in [16], whereas further properties of qα,fOU have been explored in [17, 18].

4.10 Delayed continuous-time Markov chains Let X = {Xt , t ≥ 0} be a càdlàg time-homogeneous continuous-time Markov chain (CTMC) with at most countable discrete state space E. Without loss of generality, assume that E = {0, 1, 2, . . . , N} if it is finite and E = ℕ0 if it is infinite. In this case the generator 𝒬 can be represented as a (possibly infinite) matrix. Throughout this section, we assume that 𝒬 = (qi,j )i,j∈E satisfies qi,j < ∞ for i ≠ j and qi,i = − ∑j=i̸ qi,j . For such a process X, we can determine the jump times J = {Jn , n ∈ ℕ0 } as follows: {

J0 = 0,

Jn = inf {t > Jn−1 : Xt ≠ XJn−1 } ,

n ≥ 1.

The discrete-time process χ = {χn : n ∈ ℕ0 } with χn = XJn is a Markov chain called the jump chain of X. Any CTMC X is uniquely determined by the couple ( J, χ). Furthermore, the sojourn times γ = {γn , n ∈ ℕ} of X are defined as γn = Jn − Jn−1 .

4.10 Delayed continuous-time Markov chains

� 317

It is well known that the sojourn times are mutually independent and, for all i ∈ E, P(γn > t | χn−1 = i) = eqi,i t ,

(4.122)

i. e., each sojourn time is an exponential random variable with parameter qi,i , conditionally on the current position i. Now let S α be an α-stable subordinator independent of X, and let Lα be its inverse. Definition 4.83. We define a delayed continuous-time Markov chain as the process Xα := {Xαt , t ≥ 0} where Xαt = XLαt for t ≥ 0. Since Lα is a. s. continuous, Xα is still a. s. càdlàg. We can define the jump times α J = {Jnα , n ∈ ℕ} of Xα as {

J0α = 0,

α Jnα = inf {t ≥ Jn−1 : Xαt ≠ XαJ α } , n−1

n ≥ 1.

The respective sojourn times γα = {γnα , n ∈ ℕ} equal α γnα = Jnα − Jn−1 .

We now study some properties of J α and γα , proceeding as in [15, Proposition 7] and [11]. First, let us provide a link between J α and J through S α as in [11, Lemma 4.2]. Lemma 4.84. For any n ≥ 0, Jnα = SJαn = SJαn − a. s. Proof. First, note that Jn is a. s. a continuity point for S α . Indeed, Jn is independent of S α , which is a Lévy process and in particular is stochastically continuous. So 󵄨 P (SJαn − ≠ SJαn ) = E [P (SJαn − ≠ SJαn 󵄨󵄨󵄨 Jn )] = 0, since P(SJαn − ≠ SJαn | Jn ) = 0. Hence we only need to prove that Jnα = SJαn − . We prove this statement by induction on n. Fix ω ∈ Ω such that Xα (ω) is càdlàg. In what follows, we do not mention ω for the reader’s convenience. Clearly, J0α = 0 = S0α = SJα0 . Assume that it is already proved that Jnα = SJαn a. s. Then α Jn+1 = inf {t > SJαn : Xαt ≠ XαJnα } .

(4.123)

Recall also that the process is càdlàg, and thus the infimum is actually a minimum. Inα deed, there exists a sequence εk ↓ 0 such that XαJ α +εk ≠ XαJ αn . If Jn+1 is not a minimum, n+1

then it must hold XαJ α = XαJ α . Since Xα is a. s. càdlàg, limk→∞ XαJ α +εk = XαJ α , and thus n n+1 n+1 n+1 there exists k0 ≥ 0 such that for all k ≥ k0 , 󵄨󵄨 α 󵄨 1 󵄨󵄨XJ α +ε − XαJ α 󵄨󵄨󵄨 ≤ . 󵄨 n+1 k n+1 󵄨 2

318 � 4 Stochastic processes and fractional differential equations However, since E = {0, 1, . . . , N} or E = ℕ0 , 󵄨 󵄨 1 {x ∈ E : 󵄨󵄨󵄨󵄨x − XαJ α 󵄨󵄨󵄨󵄨 ≤ } = {XαJ α } = {XαJnα } , n+1 n+1 2 and hence XαJ α

n+1 +εk

= XαJ α for all k ≥ k0 , and we get a contradiction. Note that Lαt is n

α α constant for all t ∈ [Sy− , Syα ) and y ≥ 0, and the same holds for Xα . Let us show that Jn+1 α must be of the form Sy− for some y ≥ Jn , arguing by contradiction. Assume this is not the α α case, i. e., Jn+1 ∈ (Sy− , Syα ) for some discontinuity point y > 0. α α α α Note that Jn+1 ≥ Jn = SJαn = SJαn − , and therefore it should be y > Jn , and then Jnα < Sy− < Jn+1 . α α α α α α However, X is constant on [Sy− , Sy ), so XSα = XJ α ≠ XJ α , which is a contradiction with y−

n+1

n

α α α the definition of Jn+1 . Therefore there exists y ∈ ℝ such that Jn+1 = Sy− , and we can write α α α Jn+1 = min {Sy− > SJαn : XαSy− α ≠ XJ α } . n

Now recall that by the definition of Lα we have LαSα = y, whence y−

α α Jn+1 = min {Sy− > SJαn − : Xy ≠ XJn } ,

(4.124)

α where we also used the fact that SJαn − = SJαn a. s. This implies that Jn+1 = SJαn+1 − . Indeed, if α α + Jn+1 = Sy− for some other y ∈ ℝ0 , then, clearly, y > Jn by (4.124). Furthermore, recalling that y ∈ ℝ+0 󳨃→ Syα ∈ ℝ is still strictly increasing, if y < Jn+1 , then by the definition of Jn+1 it α α follows that Xy = XJn , which is absurd, whereas if y > Jn+1 , then SJαn − < SJαn+1 − < Sy− = Jn+1 α α and, by (4.124), XJn+1 = XJn , which is again a contradiction. This proves that Jn+1 = SJn+1 − .

As an immediate consequence, we get the following result. Corollary 4.85. The discrete-time process {XαJ α , n ≥ 0} is a Markov chain and coincides n with the jump chain χ of X. Proof. Recall that by definition Xαt = XLαt , whereas Jnα = SJαn − by Lemma 4.84. Hence we have XαJnα = XLαα = XJn = χn , S

Jn −

where we used the fact that LαSα = Jn . Jn −

Remark 4.86. This result was in fact expected due to the nature of the delay. Indeed, once we apply the time-change Lα to the process X, we are only altering the time it spends in each state, but not the state that it visits. In particular, if we observe the process Xα only at its jump times J α , then we neglect the time spent on each state, and we only observe the sequence of visited states, which is the same for X.

4.10 Delayed continuous-time Markov chains

� 319

Furthermore, we can give the distribution of the sojourn times γα . To do this, we first need the definition of a Mittag-Leffler distributed random variable, as given in [149]. Definition 4.87. A nonnegative random variable Y is said to have a Mittag-Leffler distribution of order α ∈ (0, 1) with parameter λ > 0 if P(Y > t) = Eα (−λt α )

for all t > 0.

Now we can prove the following result (see [11, Proposition 4.5]). d

Proposition 4.88. For all n ≥ 1, we have the equality in distribution γnα = Sγαn . In particular, for all n ≥ 1, P (γnα > t | XαJ α = i) = Eα (qi,i t α ) , n−1

i. e., γnα has a Mittag-Leffler distribution of order α with parameter qi,i conditionally to the current state i of the process Xα . Proof. According to Lemma 4.84, we can write γnα = SJαn − SJαn−1 for all n ≥ 1. Now, let t ∈ ℝ and i ∈ E. Note that by the tower property of conditional expectations P (γnα ≤ t | χn−1 = i) = P (SJαn − SJαn−1 ≤ t | χn−1 = i)

󵄨 = E [P (SJαn − SJαn−1 ≤ t | Xs , s ≥ 0) 󵄨󵄨󵄨 1{χn−1 =i} ] ,

(4.125)

where we make conditioning according to the whole trajectory of X. Now denote by 𝒟(ℝ+0 ; E) the set of càdlàg functions x : ℝ+0 → E ⊂ ℝ. Define the family of functionals Jm : 𝒟(ℝ+0 ; E) → ℝ, m = 0, 1, 2, . . . , as follows: {

J0 (x) = 0,

Jm (x) = inf{t > Jm−1 : x(t) ≠ x(Jm−1 )},

m ≥ 1,

for all x ∈ 𝒟(ℝ+0 ; E). By definition Jn = Jn (X) and Jn−1 = Jn−1 (X) (of course, they are nontrivial if the function is stepwise). Then P (SJαn − SJαn−1 ≤ t | Xs , s ≥ 0) = P (SJαn (X) − SJαn−1 (X) ≤ t | Xs , s ≥ 0) . Now Doob–Dynkin lemma implies that P (SJαn (X) − SJαn−1 (X) ≤ t | Xs , s ≥ 0) = P(X), where P : 𝒟(ℝ+0 ; E) → ℝ is a suitable measurable function. To determine P, introduce the functional Gn : 𝒟(ℝ+0 ; E) → ℝ defined as Gn (x) = Jn (x) − Jn−1 (x)

320 � 4 Stochastic processes and fractional differential equations for all x ∈ 𝒟(ℝ+0 ; E). Then the independence of S α and X, together with the fact that the increments of S α are stationary, implies that for all x ∈ 𝒟(ℝ+0 ; E), α P(x) = P (SJαn (x) − SJαn−1 (x) ≤ t) = P (SG ≤ t) = Gα (Gn (x), t) , n (x)

where t

Gα (z, t) = ∫ gα (z, y)dy,

z, t > 0,

(4.126)

0

with gα defined in (4.5). Thus (4.125) and the equality Gn (X) = γn lead to P (γnα ≤ t | χn−1 = i) = E [Gα (γn , t) | 1{χn−1 =i} ] .

(4.127)

Next, note that by the tower property

P(S^α_{γ_n} ≤ t | χ_{n−1} = i) = E[P(S^α_{γ_n} ≤ t | X_t, t ≥ 0) | 1_{χ_{n−1}=i}],  (4.128)

where we again condition on the whole trajectory of X. The independence of S^α and X and the equality γ_n = G_n(X) imply that P(S^α_{γ_n} ≤ t | X_t, t ≥ 0) = G_α(γ_n, t). Then (4.128) is transformed into

P(S^α_{γ_n} ≤ t | χ_{n−1} = i) = E[G_α(γ_n, t) | 1_{χ_{n−1}=i}].  (4.129)

Comparing (4.127) and (4.129), we get

P(γ^α_n ≤ t | χ_{n−1} = i) = P(S^α_{γ_n} ≤ t | χ_{n−1} = i).  (4.130)

Now notice that

P(γ^α_n ≤ t) = Σ_{i∈E} P(γ^α_n ≤ t | χ_{n−1} = i) P(χ_{n−1} = i)
= Σ_{i∈E} P(S^α_{γ_n} ≤ t | χ_{n−1} = i) P(χ_{n−1} = i) = P(S^α_{γ_n} ≤ t),

whence γ^α_n = S^α_{γ_n} in distribution. Note that according to (4.130), for all t ∈ ℝ⁺₀ and i ∈ E,

P(γ^α_n > t | χ_{n−1} = i) = P(S^α_{γ_n} > t | χ_{n−1} = i)
= P(γ_n ≥ L^α_t | χ_{n−1} = i) = E[P(γ_n ≥ L^α_t | 1_{χ_{n−1}=i}, L^α_t) | 1_{χ_{n−1}=i}].  (4.131)

4.10 Delayed continuous-time Markov chains

Since L^α is independent of both χ_{n−1} and γ_n, we get from (4.122) that

P(γ_n ≥ L^α_t | 1_{χ_{n−1}=i}, L^α_t) = e^{q_{i,i} L^α_t}.

Using the fact that L^α_t and χ_{n−1} are independent and applying Proposition 4.32, it is possible to transform (4.131) into

P(S^α_{γ_n} > t | χ_{n−1} = i) = E[e^{q_{i,i} L^α_t} | χ_{n−1} = i] = E[e^{q_{i,i} L^α_t}] = E_α(q_{i,i} t^α).  (4.132)
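According to (4.132), the sojourn times of the delayed chain are Mittag-Leffler distributed: the survival function E_α(q_{i,i} t^α) reduces to the exponential law for α = 1 and has a heavy tail of order t^{−α} otherwise. A quick numerical sanity check (a sketch, not taken from the book; the truncated series below is reliable only for moderate |q_{i,i}| t^α):

```python
import math

def mittag_leffler(alpha, z, terms=120):
    # Truncated power series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1);
    # reliable only for moderate |z| (for large |z| the partial terms grow
    # before decaying and floating-point cancellation ruins the sum).
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

q_ii = -2.0   # a diagonal entry of the generator, q_{i,i} < 0

# alpha = 1: exponential sojourn times, E_1(q t) = e^{q t}
surv1 = mittag_leffler(1.0, q_ii * 1.5)

# alpha = 0.7: Mittag-Leffler sojourn times with a heavy tail; asymptotically
# E_alpha(q t^alpha) ~ 1 / (|q| t^alpha Gamma(1 - alpha)) as t grows
alpha, t = 0.7, 5.0
surv_a = mittag_leffler(alpha, q_ii * t**alpha)
tail = 1.0 / (abs(q_ii) * t**alpha * math.gamma(1.0 - alpha))
```

The last comparison makes the qualitative difference visible: for α < 1, the survival probability at large times exceeds the exponential one by orders of magnitude.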

Let us prove mutual independence of the sojourn times; see also [11, Proposition 4.3].

Proposition 4.89. The sojourn times γ^α are mutually independent.

Proof. Let N ∈ ℕ, let 0 ≤ n_1 < n_2 < ⋯ < n_N be nonnegative integers, and let t_1, …, t_N ∈ ℝ⁺. Then we get from Lemma 4.84 that

P(γ^α_{n_j} ≤ t_j, j = 1, …, N) = P(S^α_{J_{n_j}} − S^α_{J_{n_j −1}} ≤ t_j, j = 1, …, N)
= E[P(S^α_{J_{n_j}} − S^α_{J_{n_j −1}} ≤ t_j, j = 1, …, N | J_{n_j}, J_{n_j −1}, j = 1, …, N)].

Now recalling that S^α is independent of J and has independent and stationary increments, we get that

P(S^α_{J_{n_j}} − S^α_{J_{n_j −1}} ≤ t_j, j = 1, …, N | J_{n_j}, J_{n_j −1}, j = 1, …, N)
= Π_{k=1}^{N} P(S^α_{J_{n_k}} − S^α_{J_{n_k −1}} ≤ t_k | J_{n_j}, J_{n_j −1}, j = 1, …, N)
= Π_{k=1}^{N} P(S^α_{γ_{n_k}} ≤ t_k | J_{n_j}, J_{n_j −1}, j = 1, …, N) = Π_{k=1}^{N} P(S^α_{γ_{n_k}} ≤ t_k | J_{n_k}, J_{n_k −1}),  (4.133)

where in the last equality we used the following facts: S^α is independent of J; γ_{n_k} is independent of J_{n_j} and J_{n_j −1} for j = 1, …, N with k ≠ j; J_{n_j} ≠ J_{n_k −1}; and J_{n_j −1} ≠ J_{n_k}. Furthermore, note that

P(S^α_{γ_{n_k}} ≤ t_k | J_{n_k}, J_{n_k −1}) = E[P(S^α_{γ_{n_k}} ≤ t_k | J_{n_k}, J_{n_k −1}, γ_{n_k}) | J_{n_k}, J_{n_k −1}]
= E[P(S^α_{γ_{n_k}} ≤ t_k | γ_{n_k}) | J_{n_k}, J_{n_k −1}]
= P(S^α_{γ_{n_k}} ≤ t_k | γ_{n_k}) = G_α(γ_{n_k}, t_k),

where G_α is defined in (4.126). Hence (4.133) can be rewritten as

P(S^α_{J_{n_j}} − S^α_{J_{n_j −1}} ≤ t_j, j = 1, …, N | J_{n_j}, J_{n_j −1}, j = 1, …, N) = Π_{k=1}^{N} G_α(γ_{n_k}, t_k).

Taking the expectation and using the fact that {γ_{n_k}, k = 1, …, N} are mutually independent, we get

P(γ^α_{n_j} ≤ t_j, j = 1, …, N) = Π_{k=1}^{N} E[G_α(γ_{n_k}, t_k)].  (4.134)

Now Proposition 4.88 implies that

E[G_α(γ_{n_k}, t_k)] = E[P(S^α_{γ_{n_k}} ≤ t_k | γ_{n_k})] = P(S^α_{γ_{n_k}} ≤ t_k) = P(γ^α_{n_k} ≤ t_k),

and in turn from (4.134) we have

P(γ^α_{n_j} ≤ t_j, j = 1, …, N) = Π_{k=1}^{N} P(γ^α_{n_k} ≤ t_k),

and the proof follows.

Let us also underline that the trajectory of X^α is uniquely determined by the couple (J^α, χ). Indeed, by the definition of J^α and χ we have

X^α_t = χ_n,  t ∈ [J^α_n, J^α_{n+1}),  n ∈ ℕ₀.

We will extensively use this construction in simulations. We can also prove an inverse subordination formula for the state probabilities.

Proposition 4.90. For i, j ∈ E,

P(X^α_t = j | X^α_0 = i) = ∫_0^{+∞} P(X_s = j | X_0 = i) f_α(t, s) ds.

Proof. Observe that

P(X^α_t = j | X^α_0 = i) = E[P(X^α_t = j | 1_{X^α_0 = i}, L^α_t) | 1_{X^α_0 = i}].

Since L^α_t is independent of X and X^α_0 = X_0, we have

P(X^α_t = j | 1_{X^α_0 = i}, L^α_t) = P(L^α_t, j; i),  (4.135)

where P(t, j; i) = P(X_t = j | X_0 = i). Taking the expectations of both sides of (4.135), we get the desired result.

Let us now give some examples. First, consider the case of finite state space E = {0, 1, …, N}. Then we can use X^α to provide the solution to a particular system of fractional differential equations as a consequence of Theorem 4.41.

� 323

Theorem 4.91. Let α ∈ (0, 1), and let X be a CTMC with finite state space E = {0, . . . , N} and generator 𝒬 ∈ ℝN+1 × ℝN+1 . Denote the elements x ∈ ℝN+1 as column vectors with components (xy )y=0,...,N . Then for all x ∈ ℝN+1 , the unique solution of the equation {

C α Dt u(t)

= 𝒬u(t),

t > 0,

(4.136)

u(0) = x,

is given by the column vector (uy (t))y=0,...,N , where uy (t) = E [xXαt | Xα0 = y] ,

t ≥ 0,

y = 0, . . . , N,

In particular, if, for fixed j ∈ E, xy = δy,j for all y ∈ E, then the unique solution of (4.136) is the column vector (uy (t))y=0,...,N with uy (t) = P (Xαt = j | Xα0 = y) . Proof. Note that the Feller semigroup {Tt }t≥0 on ℝN+1 generating 𝒬 is given by the formula (Tt x)y = E[xXt | X0 = y],

t ≥ 0,

x ∈ ℝN+1 ,

y ∈ E,

and apply Theorem 4.41. Now let 𝒩 be a homogeneous Poisson process with parameter λ. Definition 4.92. The process Nα := {Nαt , t ≥ 0} with Nαt = 𝒩Lαt is called a fractional Poisson process. It was introduced in [131] and further studied in [149, 156, 211]. This process can be used to provide the stochastic representation of an infinite system of fractional difference-differential equations. Theorem 4.93. Let α ∈ (0, 1), and let Nα be a fractional Poisson process. Fix any j ∈ ℕ0 . Then the sequence of functions un (t) = P (Nαt = n | Nα0 = j) is the unique solution of: C α Dt u0 (t) = −λu0 (t), { { { { C α { { Dt un (t) = λ(un−1 (t) − un (t)), { { un (0) = 0, { { { { {uj (0) = 1.

n ≥ 1, n ≠ j,

(4.137)

324 � 4 Stochastic processes and fractional differential equations Proof. If solution exists, then it is unique. Indeed, assume that there are two solutions (un (t))n≥0 and (un∗ (t))n≥0 . Then both u0 and u0∗ satisfy the equations {

C α Dt u0 (t)

= −λu0 (t),

u0 (0) = δ0,j ,

where δn,j = 0 if n ≠ j and δj,j = 1. Corollary 2.34 implies that u0 = u0∗ . Assume that we ∗ already proved that un = un∗ . Then both un+1 and un+1 satisfy {

C α Dt un+1 (t)

= λ(un+1 (t) − un (t)),

un+1 (0) = δn+1,j ,

∗ where un = un∗ . Corollary 2.34 supplies that un+1 = un+1 . So the uniqueness follows by induction. Now assume that j ≥ 1 and note that the problem C α D u0 (t) = −λu0 (t), { { {C tα Dt un (t) = λ(un−1 (t) − un (t)), { { { {un (0) = 0,

n = 1, . . . , j − 1, n = 0, . . . , j − 1,

where the second equation is omitted if j = 1, has the unique solution un (t) ≡ 0 for all n = 0, . . . , j − 1, see Remark 2.37. Similarly, since Nα is nondecreasing (as the Poisson process 𝒩 is nondecreasing), we obtain that P(Nαt = n | Nα0 = j) = 0 for n = 0, . . . , j − 1. This also implies that if (un )n≥0 is the unique solution and if we put un = un+j for n ≥ 0, then we obtain the equalities C α Dt u0 (t) = −λu0 (t), { { { { C α { { Dt un (t) = λ(un−1 (t) − un (t)), { { u0 (0) = 1, { { { { {un (0) = 0,

n ≥ 1, n ≥ 1.

Additionally, P (Nαt = n + j | Nα0 = j) = P (Nαt − Nα0 = n | Nα0 = j) = E [P (Nαt − Nα0 = n | 1{Nα0 =j} , Lαt )] . Since Lαt is independent of 𝒩 and 𝒩 is a Lévy process, we have that P (Nαt − Nα0 = n | 1{Nα0 =j} , Lαt ) = P (Nαt = n | 1{Nα0 =0} , Lαt ) . Taking the expectations, we obtain the equality P (Nαt = n + j | Nα0 = j) = P (Nαt = n | Nα0 = 0) .

4.10 Delayed continuous-time Markov chains

� 325

Hence if we prove that P(Nαt = n | Nα0 = 0) = un (t) for all n ∈ ℕ0 and t ≥ 0, then the desired statement will follow. This means that without loss of generality we can restrict ourselves to the case j = 0. Consider the probability generating function +∞

Pα (t, s) = ∑ sk P (Nαt = k | Nα0 = 0) , k=0

(4.138)

and note that a simple conditioning argument implies the equality Pα (t, s) = E [P (Lαt , s)] , where +∞

P(t, s) = ∑ sk P(𝒩t = k | 𝒩0 = 0) = eλt(s−1) k=0

is the probability generating function of 𝒩t . We know from Proposition 4.32 that Pα (t, s) = Eα (λ(s − 1)t α ) ,

(4.139)

whereas the definition of the Mittag-Leffler function provides the series representation λk (s − 1)k t αk +∞ k k λk (−1)k−h sh t αk = ∑ ∑( ) Γ(αk + 1) h Γ(αk + 1) k=0 k=0 h=0 +∞

Pα (t, s) = ∑

+∞ +∞ k λk (−1)k−h t αk = ∑ sh ∑ ( ) . h Γ(αk + 1) h=0 k=h

(4.140)

Comparing (4.138) with (4.140), we get that +∞ k λk (−1)k−h t αk uh (t) := P (Nαt = h | Nα0 = 0) = ∑ ( ) , h Γ(αk + 1) k=h

(4.141)

where the series is convergent for all t ∈ ℝ. Now let us distinguish two cases. For h = 0, taking the Caputo derivative inside the series (4.141) (see Exercise 2.6), we get that λk+1 (−1)k+1 t αk = −λu0 (t). Γ(αk + 1) k=0 +∞

(C Dα0+ u0 ) (t) = ∑

(4.142)

For h ≥ 1, define {0, ak,h = { k k−1 k λ (−1) {(h) Γ(αk+1) ,

k = 0, . . . , h − 1, k ≥ h,

(4.143)

326 � 4 Stochastic processes and fractional differential equations αk so that we can rewrite uh (t) = ∑+∞ k=0 ak,h t . Taking the Caputo derivative inside the series, we get that +∞

(C Dα0+ uh ) (t) = ∑ ak+1,h t αk k=0

=

+∞ Γ((k + 1)α + 1) k + 1 λk+1 (−1)k+1−h t αk = ∑ ( ) Γ(kα + 1) h Γ(αk + 1) k=h−1

+∞ λh−1 t α(h−1) k + 1 λk (−1)k+1−h t αk +λ ∑ ( ) . Γ(α(h − 1) + 1) h Γ(αk + 1) k=h

Recalling Pascal rule (

k+1 k k )=( )+( ), h h h−1

we arrive at the equalities (C Dα0+ uh ) (t) =

+∞ λh−1 t α(h−1) k λk (−1)k+1−h t αk +λ ∑ ( ) Γ(α(h − 1) + 1) h Γ(αk + 1) k=h +∞

+λ ∑ ( k=h

k λk (−1)k+1−h t αk ) = −λuh (t) + λuh−1 (t), h−1 Γ(αk + 1)

whence the proof follows. Remark 4.94. We can rewrite (4.141) as h

h P (Nαt = h | Nα0 = 0) = (λt α ) Eα,αh+1 (−λt α ) , γ

where Eα,β is the Prabhakar function defined in (2.5). Now considering a sequence of i. i. d. random variables (Yn )n≥1 independent of a fractional Poisson process Nα , we can define the compound fractional Poisson process Nαt

Yαt := ∑ Yn , n=1

where ∑0n=1 = 0. We will use such processes in simulations. Remark 4.95. The interested reader can find other fractional processes obtained by means of time-changes of CTMCs in [11, 13, 14, 190, 191] and many others.

4.11 Fractional integral equations with a stochastic driver In this section, we want to study a stochastic equivalent of fractional differential equations. Clearly, since in general we do not expect stochastic processes to be differentiable,

4.11 Fractional integral equations with a stochastic driver

� 327

we cannot state such equations in a differential form. Hence we consider an integral form, which is equivalent in the deterministic case. Precisely, we focus on the Caputo case. Let G = {Gt , t ≥ 0} be a stochastic process with a. s. continuous trajectories. Fix α > 0, α ∈ ̸ ℕ, and let n = ⌊α⌋ + 1. Consider also a function F : ℝ+ × ℝ → ℝ satisfying the following properties for a constant γ ∈ [0, α − n + 1): (aγ ) For all T > 0 and y ∈ ℝ, the function t ∈ (0, T] 󳨃→ F(t, y) ∈ ℝ belongs to 𝒞 ([0, T]; w0,γ ). (bγ ) There exists a constant L > 0 such that for all t > 0 and y1 , y2 ∈ ℝ, 󵄨󵄨 󵄨 −γ 󵄨󵄨F(t, y1 ) − F(t, y2 )󵄨󵄨󵄨 ≤ Lt |y1 − y2 |. We want to solve the fractional integral equation t

Yt =

F(τ, Yτ ) 1 dτ + Gt , ∫ Γ(α) (t − τ)1−α

t ≥ 0.

(4.144)

0

Remark 4.96. For α ∈ (0, 1) and Gt = X0 for t ≥ 0, this is the integral form of a fractional differential equation of Caputo type, as in (2.40). In general, to mimic the behavior of ̃ t , where fractional differential equations of Caputo type, we could consider Gt = X0 + G ̃ 0 = 0, and G ̃ represents the noise that affects X0 is a possibly stochastic initial value, G the equation. Definition 4.97. A global strong solution of (4.144) is a stochastic process Y = {Yt , t ≥ 0} adapted to the filtration {ℱt }t≥0 , where ℱt is generated by {Gs , s ∈ [0, t]}, with a. s. continuous trajectories that satisfies (4.144) for all t ≥ 0 a. s. Emphasize that the assumptions on the function F are the same as in Theorem 2.32. Indeed, the proof of the strong existence and uniqueness result is similar to that of Theorem 2.32, to which we will refer. Theorem 4.98. Let G = {Gt , t ≥ 0} be a stochastic process with a. s. continuous trajectories, and let α > 0, α ∈ ̸ ℕ, and n = ⌊α⌋ + 1. Let F : ℝ+ × ℝ → ℝ satisfy assumptions (aγ ) and (bγ ) for a constant γ ∈ [0, α − n + 1). Then there exists a global strong solution Y ̃ is another solution of (4.144), then of (4.144). Such a solution is pathwise unique, i. e., if Y ̃t , t ≥ 0) = 1. P(Yt = Y Proof. Without loss of generality, we can assume that t ∈ ℝ+0 󳨃→ Gt (ω) ∈ ℝ is continuous for all ω ∈ Ω. Fix T > 0 and ω ∈ Ω. For y ∈ 𝒞 [0, T], let t

F(τ, y(τ)) 1 (𝒯ω y)(t) = dτ + Gt (ω), ∫ Γ(α) (t − τ)1−α

t ≥ 0.

0

According to Theorem 2.32, conditions (aγ ) and (bγ ) imply that for every function α y ∈ 𝒞 [0, T], the function F(⋅, y(⋅)) ∈ 𝒞 ([0, T]; w0,γ ). Lemma 2.19(i) states that I0+ F(⋅, y(⋅))

328 � 4 Stochastic processes and fractional differential equations belongs to 𝒞 [0, T], and then 𝒯ω y ∈ 𝒞 [0, T], since it is the sum of continuous functions. So 𝒯ω : 𝒞 [0, T] → 𝒞 [0, T]. Now for β > 0, consider the norm ‖y‖β = supt∈[0,T] e−βt |y(t)| for y ∈ 𝒞 [0, T] and recall that (𝒞 [0, T], ‖ ⋅ ‖β ) is a Banach space. With the same argument as in Theorem 2.32, we get that for all y1 , y2 ∈ 𝒞 [0, T], ‖𝒯ω y1 − 𝒯ω y2 ‖β ≤ L𝒯 (β)‖y1 − y2 ‖β , where 1

L𝒯ω (β) =

L(B(p(α − 1) + 1, 1 − pγ)) p Γ(α)(qβ)

1 q

T

p(α−1−γ)+1 p

,

1 1 < p < γ1 if α > 1, 1 < p < γ+1−α if α < 1, and q > 1 satisfies p1 + q1 = 1. Hence we can consider β > 0 large enough to have L𝒯ω (β) < 1. Then 𝒯ω is a contraction on 𝒞 [0, T]. In particular, there exists a unique fixed point Y T (ω) = {YtT (ω), t ≥ 0} ∈ 𝒞 [0, T] such that (𝒯ω Y T (ω))(t) = YtT (ω) for all t ∈ [0, T]. Since T > 0 is arbitrary, we can argue as in Theorem 2.32 to prove that Y (ω), defined by Yt (ω) = YtT (ω) for t ≤ T, is a fixed point of 𝒯ω on 𝒞 (ℝ+0 ), i. e., (𝒯ω Y (ω))(t) = Yt (ω) for all t ≥ 0. The fact that the fixed point is unique proves the pathwise uniqueness of the solution. We only need to prove that Y = {Yt , t ≥ 0} is a stochastic process, i. e., it is measurable with respect to ω ∈ Ω. To do this, consider any constant y0 ∈ ℝ and define Yt(1) (ω) := (𝒯ω y0 )(ω) for t ≥ 0 and ω ∈ Ω. Clearly, y0 is a (degenerate) random variable. Furthermore, t

Yt(1) (ω) = ∫ 0

F(τ, y0 ) dτ + Gt (ω), (t − τ)1−α

i. e., Y (1) is the sum of random variables and thus is a random variable itself. Assuming that Y (k) is a stochastic process, we define Y (k+1) = {Yt(k+1) , t ≥ 0} as follows: Yt(k+1) (ω) = (𝒯ω Y (k) (ω))(t). Note that F is a continuous in both variables, and therefore α F(t, Yt(k) ) is a stochastic process. Furthermore, (I0+ F(⋅, Y⋅(k) (ω)))(t) is a random variable. (k+1) Finally, for each t ≥ 0, Yt is a sum of random variables, and hence it is a random (k) variable. So, by induction, Y is a stochastic process for all k ∈ ℕ. Furthermore, in the same way we prove that each Y (k) is ℱt -adapted. The contraction principle (see Theorem 2.22) guarantees that Y (ω) = limk→+∞ Y (k) (ω). Thus Y is a ℱt -adapted stochastic process since it is limit of ℱt -adapted stochastic processes. Let us prove the Lipschitz continuity of the solutions with respect to the driver under assumption (b0 ) (that is (bγ ) for γ = 0, i. e., F is globally Lipschitz in the second variable uniformly with respect to the first one).

4.11 Fractional integral equations with a stochastic driver



329


󵄨󵄨 1 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨Yt − Yt2 󵄨󵄨󵄨 ≤ ∫(t − τ)α−1 󵄨󵄨󵄨F (τ, Yτ1 ) − F (τ, Yτ2 )󵄨󵄨󵄨 dτ + 󵄨󵄨󵄨Gt1 − Gt2 󵄨󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 0



t

L 󵄨 󵄨 󵄨 󵄨 ∫(t − τ)α−1 󵄨󵄨󵄨󵄨Yτ1 − Yτ2 󵄨󵄨󵄨󵄨 dτ + sup 󵄨󵄨󵄨󵄨Gs1 − Gs2 󵄨󵄨󵄨󵄨 . Γ(α) s∈[0,t] 0

If we let 󵄨 󵄨 f (t) = sup 󵄨󵄨󵄨󵄨Gs1 − Gs2 󵄨󵄨󵄨󵄨 , s∈[0,t] then f : [0, T] → ℝ is a continuous nondecreasing function. Theorem 2.38 implies that 󵄨󵄨 1 󵄨 󵄨 󵄨 󵄨󵄨Yt − Yt2 󵄨󵄨󵄨 ≤ sup 󵄨󵄨󵄨Gs1 − Gs2 󵄨󵄨󵄨 Eα (Lt α ) , 󵄨 󵄨 s∈[0,t] 󵄨 󵄨

t ∈ [0, T].

Taking the supremum in t ∈ [0, T], we conclude the proof. Remark 4.100. Similarly to the deterministic case, we can prove that if F satisfies (bγ ) in place of (b0 ), then for all T > 0, there exists a constant C(T) such that a. s. 󵄩󵄩 1 󵄩 󵄩󵄩 1 2󵄩 󵄩󵄩 󵄩󵄩Y − Y 2 󵄩󵄩󵄩 󵄩 󵄩 󵄩𝒞[0,T] ≤ C(T) 󵄩󵄩G − G 󵄩󵄩𝒞[0,T] . The proof is exactly the same as in Exercise 2.28. Concerning the Hölder regularity of the solution, we have the following statement. Proposition 4.101. Let G = {Gt , t ≥ 0} be a stochastic process whose trajectories are a. s. locally β-Höler continuous for some β ∈ (0, 1]. Fix α > 0 with α ∈ ̸ ℕ, and let n = ⌊α⌋ + 1. Let also F : ℝ+ × ℝ → ℝ satisfy assumptions (aγ ) and (bγ ) for a constant γ ∈ [0, α − n + 1). Let also Y be the solution of (4.144). (i) If γ = 0, then Y is a. s. locally δ0 -Hölder continuous, where δ0 = min{α, β}. (ii) If γ > 0 and β < α − γ, then Y is a. s. locally β-Hölder continuous. (iii) If γ > 0, α − γ < 1, and β ≥ α − γ, then Y is a. s. locally δ-Hölder continuous for all δ < α − γ.

Proof. Let us fix T > 0. We only need to prove (i)-(iii) for t ∈ [0, T]. So

Y_t = (I^α_{0+}(F(⋅, Y_⋅)))(t) + G_t,  t ∈ [0, T],  (4.145)

where G is a.s. β-Hölder continuous on [0, T].

Let us prove (i). Arguing as in Theorem 2.32, we get that F(⋅, Y_⋅) ∈ 𝒞[0, T] ⊂ L^∞(0, T). Let us distinguish two cases. If α < 1, then Lemma 1.17 with p = ∞ implies that I^α_{0+}(F(⋅, Y_⋅)) is α-Hölder continuous, and then from (4.145) we get that Y is δ_0-Hölder continuous. If α > 1, then δ_0 = β. Furthermore, Lemma 1.17 with p = ∞ implies that I^α_{0+}(F(⋅, Y_⋅)) is Lipschitz. According to (4.145), Y is β-Hölder continuous.

Now let us prove (ii). Again, with the same argument as in Theorem 2.32, we get that F(⋅, Y_⋅) ∈ 𝒞([0, T]; w_{0,γ}). Let p < 1/γ be such that α − γ > α − 1/p > β. If β < 1, then we can assume that α − 1/p < 1. Therefore F(⋅, Y_⋅) ∈ L^p[0, T], and thus Lemma 1.17 implies that I^α_{0+}(F(⋅, Y_⋅)) is (α − 1/p)-Hölder continuous. In this case, since α − 1/p > β, (4.145) gives the desired statement. If β = 1, then Lemma 1.17 implies that I^α_{0+}(F(⋅, Y_⋅)) is Lipschitz continuous, and therefore Y also has this property.

To prove (iii), fix any δ < α − γ. Let p < 1/γ be such that δ < α − 1/p < α − γ ≤ β ≤ 1. As before, F(⋅, Y_⋅) ∈ L^p[0, T], and then we know from Lemma 1.17 that I^α_{0+}(F(⋅, Y_⋅)) is (α − 1/p)-Hölder continuous. This time, however, δ < α − 1/p < β, and hence by (4.145) we have that Y is δ-Hölder continuous.
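Theorems 4.98 and 4.99 also suggest a concrete numerical scheme: iterate the map 𝒯 on a grid, discretizing the Riemann-Liouville integral by a product-rectangle rule. The sketch below (an illustration under the stated assumptions, not the book's algorithm) takes F(t, y) = sin(y), so L = 1, with two deterministic continuous drivers and checks the stability bound ‖Y¹ − Y²‖ ≤ ‖G¹ − G²‖ E_α(L T^α):

```python
import math

def solve_picard(F, G, T=1.0, alpha=0.6, n=250, iters=30):
    # Fixed-point iteration for Y(t) = (1/Gamma(a)) ∫_0^t (t-s)^(a-1) F(s, Y(s)) ds + G(t)
    # on the grid t_j = j*h; F is held at the left endpoint of each subinterval
    # and the singular kernel is integrated exactly on [t_i, t_{i+1}].
    h = T / n
    t = [j * h for j in range(n + 1)]
    Y = [G(tj) for tj in t]
    ga = math.gamma(alpha)
    for _ in range(iters):
        Y = [
            sum(((t[j] - t[i]) ** alpha - (t[j] - t[i + 1]) ** alpha) / alpha
                * F(t[i], Y[i]) for i in range(j)) / ga + G(t[j])
            for j in range(n + 1)
        ]
    return t, Y

def mittag_leffler(alpha, z, terms=120):
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

F = lambda s, y: math.sin(y)            # globally Lipschitz, L = 1
G1 = lambda s: 1.0                      # two continuous drivers
G2 = lambda s: 1.0 + 0.1 * math.sin(5 * s)

t, Y1 = solve_picard(F, G1)
_, Y2 = solve_picard(F, G2)
diff_Y = max(abs(a - b) for a, b in zip(Y1, Y2))
diff_G = max(abs(G1(s) - G2(s)) for s in t)
bound = diff_G * mittag_leffler(0.6, 1.0)   # E_alpha(L*T^alpha) with L = T = 1
```

The iteration converges because the discretized map inherits the contraction property of 𝒯; the small multiplicative slack in the check below absorbs the discretization error.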

Now let us focus on a particular case that corresponds to α ∈ (0, 1) and F(t, y) = a + by for some constants a, b ∈ ℝ. It is clear that F satisfies the assumptions of Theorem 4.98. Before proceeding, let us recall that if G = {G_t, t ≥ 0} is a.s. continuous and f ∈ BV[0, t], then the Riemann-Stieltjes integral ∫_0^t f(s) dG_s is well-defined via the integration-by-parts formula

∫_0^t f(s) dG_s = f(t)G_t − f(0)G_0 − ∫_0^t G_s df(s);

see Theorem 3.24. Now we can state the following result.

Theorem 4.102. Let a, b ∈ ℝ and α ∈ (0, 1), and let G = {G_t, t ≥ 0} be a stochastic process of the form G_t = X_0 + G̃_t, where X_0 is a random variable, and G̃ is a stochastic process with a.s. continuous trajectories and G̃_0 = 0 a.s. Then the unique solution of the equation

Y_t = (I^α_{0+}(a + bY_⋅))(t) + G_t  (4.146)

is given by the formula

Y_t = (a/b)(E_α(bt^α) − 1) + X_0 E_α(bt^α) + ∫_0^t E_α(b(t − s)^α) dG̃_s,  t ≥ 0,  (4.147)

where the integral is the Riemann-Stieltjes one.




Proof. Again, if we prove (4.147) for t ∈ [0, T] with arbitrary T > 0, then the statement follows. Let Y be the unique solution of (4.146), which exists by Theorem 4.98. Observe that (4.146) can be rewritten as

Y_t − X_0 = (a + bX_0)t^α/Γ(α + 1) + b(I^α_{0+}(Y_⋅ − X_0))(t) + G̃_t,  t ≥ 0.

Fix ω ∈ Ω and T > 0, and define Z(t) = (I^α_{0+}(Y_⋅(ω) − X_0(ω)))(t) for t ∈ [0, T]. It is clear, by definition, that Z ∈ I^α(L^∞(0, T)), and it follows from (1.44) that

(D^α_{0+} Z)(t) = Y_t(ω) − X_0(ω),  t ∈ [0, T].

Furthermore, Z is continuous on [0, T], and hence (I^{1−α}_{0+} Z)(0) = 0 (see Exercise 2.16). This means that Z solves the linear Riemann-Liouville fractional Cauchy problem

(D^α_{0+} Z)(t) = (a + bX_0(ω))t^α/Γ(α + 1) + bZ(t) + G̃_t(ω),  t > 0,
(I^{1−α}_{0+} Z)(0) = 0.  (4.148)

Note that f(t) = (a + bX_0(ω))t^α/Γ(α + 1) + G̃_t(ω), t ∈ [0, T], can be extended to an exponentially bounded function on ℝ⁺₀ by setting f(t) = f(T) for t ≥ T. The unique solution of (4.148), due to Proposition 2.53, is given by

Z(t) = ∫_0^t (t − s)^{α−1} E_{α,α}(b(t − s)^α) ((a + bX_0(ω))s^α/Γ(α + 1) + G̃_s(ω)) ds
= ((a + bX_0(ω))/Γ(α + 1)) ∫_0^t (t − s)^{α−1} s^α E_{α,α}(b(t − s)^α) ds + ∫_0^t (t − s)^{α−1} E_{α,α}(b(t − s)^α) G̃_s(ω) ds.  (4.149)

For the first integral in (4.149), we recall the definition of E_{α,α} and write

∫_0^t (t − s)^{α−1} s^α E_{α,α}(b(t − s)^α) ds = Σ_{k=0}^{+∞} (b^k/Γ(αk + α)) ∫_0^t (t − s)^{αk+α−1} s^α ds
= Σ_{k=0}^{+∞} b^k Γ(α + 1) t^{α(k+2)}/Γ(α(k + 2) + 1) = (Γ(α + 1)/b²) Σ_{k=2}^{+∞} (bt^α)^k/Γ(αk + 1)
= (Γ(α + 1)/b²) (Σ_{k=0}^{+∞} (bt^α)^k/Γ(αk + 1) − 1 − bt^α/Γ(α + 1)) = (Γ(α + 1)/b²)(E_α(bt^α) − 1) − t^α/b.  (4.150)

Concerning the second integral in (4.149), set f_{α,b}(t) = E_α(bt^α). Then

d f_{α,b}(t)/dt = b t^{α−1} E_{α,α}(bt^α).

Furthermore, G̃_s(ω) is continuous in s. So we can rewrite the second integral in (4.149) as the Riemann-Stieltjes integral

∫_0^t (t − s)^{α−1} E_{α,α}(b(t − s)^α) G̃_s(ω) ds = −(1/b) ∫_0^t G̃_s(ω) d f_{α,b}(t − s).

Note that s ∈ [0, t] ↦ f_{α,b}(t − s) is nonincreasing if b > 0 and nondecreasing if b < 0 (since E_α is nondecreasing), and hence it is monotone in both cases. In particular, f_{α,b}(t − ⋅) ∈ BV[0, t], and by Theorem 3.24 the integral ∫_0^t f_{α,b}(t − s) dG̃_s is well-defined as a Riemann-Stieltjes integral, and we can integrate by parts. Therefore

∫_0^t (t − s)^{α−1} E_{α,α}(b(t − s)^α) G̃_s(ω) ds = −(1/b) ∫_0^t G̃_s(ω) d f_{α,b}(t − s)
= (f_{α,b}(t) G̃_0(ω) − f_{α,b}(0) G̃_t(ω))/b + (1/b) ∫_0^t f_{α,b}(t − s) dG̃_s(ω)
= −G̃_t(ω)/b + (1/b) ∫_0^t E_α(b(t − s)^α) dG̃_s(ω),  (4.151)

where we used the facts that G̃_0(ω) = 0, f_{α,b}(t − s) = E_α(b(t − s)^α), and f_{α,b}(0) = E_α(0) = 1. Substituting (4.150) and (4.151) into (4.149), we get

Z(t) = ((a + bX_0(ω))/b²)(E_α(bt^α) − 1) − (a + bX_0(ω))t^α/(Γ(α + 1)b) − G̃_t(ω)/b + (1/b) ∫_0^t E_α(b(t − s)^α) dG̃_s(ω).  (4.152)

Now recall that D^α_{0+} Z = Y_⋅(ω) − X_0(ω), whence we can apply (4.148) and (4.152) to get that

Y_t(ω) − X_0(ω) = (a + bX_0(ω))t^α/Γ(α + 1) + bZ(t) + G̃_t(ω)
= (a + bX_0(ω))t^α/Γ(α + 1) + ((a + bX_0(ω))/b)(E_α(bt^α) − 1) − (a + bX_0(ω))t^α/Γ(α + 1)
− G̃_t(ω) + ∫_0^t E_α(b(t − s)^α) dG̃_s(ω) + G̃_t(ω)
= ((a + bX_0(ω))/b)(E_α(bt^α) − 1) + ∫_0^t E_α(b(t − s)^α) dG̃_s(ω),

which implies (4.147).
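The computation above rests on Mittag-Leffler identities such as (I^α_{0+} E_α(b ⋅^α))(t) = (E_α(bt^α) − 1)/b, which is exactly (4.146)-(4.147) with a = 0 and G̃ ≡ 0. This can be checked numerically; the quadrature below (a sketch, not the book's method) removes the weak kernel singularity by the substitution u = (t − s)^α:

```python
import math

def mittag_leffler(alpha, z, terms=80):
    # Truncated series for E_alpha(z); fine for moderate |z|
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

def rl_integral(alpha, f, t, n=4000):
    # (I^alpha_{0+} f)(t) = (1/Gamma(a)) ∫_0^t (t-s)^(a-1) f(s) ds.
    # With u = (t-s)^alpha this becomes
    # (1/(alpha*Gamma(alpha))) ∫_0^{t^alpha} f(t - u^(1/alpha)) du, a bounded integrand.
    top = t**alpha
    h = top / n
    g = lambda u: f(t - u ** (1.0 / alpha))
    total = 0.5 * (g(0.0) + g(top))
    for k in range(1, n):
        total += g(k * h)
    return total * h / (alpha * math.gamma(alpha))

alpha, b, t = 0.6, -1.0, 1.0
lhs = rl_integral(alpha, lambda s: mittag_leffler(alpha, b * s**alpha), t)
rhs = (mittag_leffler(alpha, b * t**alpha) - 1.0) / b
```

The two sides agree up to quadrature error, confirming the eigenfunction role that E_α(b t^α) plays for the fractional integral.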


Let us end this section with a few explicit examples. We can put a = 0, b < 0, H ∈ (0, 1), and G̃ = B^H, an fBm. In this case the solution of (4.146) is given by

Y_t = X_0 E_α(−|b|t^α) + ∫_0^t E_α(−|b|(t − s)^α) dB^H_s,

which is a further fractional version of the fOU process (3.45) and coincides with it if α = 1. In particular, if X_0 is deterministic, then Y is a Gaussian process. If X_0 ∈ L¹(Ω), then the expectation of Y is given by

E[Y_t] = E[X_0] E_α(−|b|t^α),  t ≥ 0.

These processes have been studied in [20, 21] to introduce noise in some nonlinear fractional differential equations, namely in Gompertz-type models introduced in [78]. Another choice of driver G̃ is the following one:

^L B^{H_α}_t = (1/Γ(H_α + 1/2)) ∫_0^t (t − s)^{H_α − 1/2} dB_s,  t ≥ 0,  (4.153)

where B is a standard Brownian motion, and H_α > 0. The integral is well-defined, because (t − ⋅)^{H_α − 1/2} ∈ L²(0, t) for all t > 0. This process was introduced by Lévy [140] with the help of the kernel of the Riemann-Liouville fractional integral, integrated with respect to a Brownian motion. For this reason, it is called the Liouville fractional Brownian motion. If we consider H_α = α − 1/2 for some α > 1/2, then (4.144) is the integral formulation of a Caputo-type fractional stochastic differential equation, as introduced and studied in [7, 230, 239, 252]. Furthermore, if we set H_α = α + 1/2, then we get a Volterra Gaussian process, which has been extensively studied in [26, 171, 172, 173, 241]. For further properties of ^L B^{H_α}, we refer to Exercise 4.8.

4.12 Exercises

Exercise 4.1. Prove Theorem 4.4.
Hint: Use (4.1) and the properties of the characteristic functions.

Exercise 4.2. Prove the second equality in (4.3).
Hint: Distinguish the cases α ≠ 1 and α = 1. Apply Theorem 4.4, (ii), (iii), and (iv), to determine the parameters of n^{1/α} X + a_n. Then apply Theorem 4.4(v) to determine the parameters of Σ_{k=1}^{n} X_k. Proceed by induction.

Exercise 4.3. Let α ∈ (0, 2).
(i) Let z > 0. Prove that

∫_0^{+∞} (e^{izy} − 1 − izy 1_{[0,1]}(y)) y^{−1−α} dy ∈ ℝ.

(ii) Prove that if α ∈ (0, 1), then for z > 0,

∫_0^{+∞} (e^{izy} − 1) y^{−1−α} dy = lim_{λ→0} ∫_0^{+∞} (e^{−(λ−iz)y} − 1) y^{−1−α} dy.  (4.154)

(iii) Prove that if α ∈ (0, 1), then for z > 0, +∞

+∞

0

0

∫ (eizy − 1 − izy1[0,1] (y)) y−1−α dy = ∫ (eizy − 1) y−1−α dy −

iz , 1−α

whereas if α ∈ (1, 2), then +∞

+∞

0

0

∫ (eizy − 1 − izy1[0,1] (y)) y−1−α dy = ∫ (eizy − 1 − izy) y−1−α dy +

iz . 1−α

(iv) Use items (ii) and (iii) to prove that if α ∈ (0, 1), then for z > 0, +∞

∫ (eizy − 1 − izy1[0,1] (y)) y−1−α dy = − 0

(v)

Γ(1 − α) α −i πα2 iz z e − . α 1−α

Use items (ii) and (iii) to prove that if α ∈ (1, 2), then for z > 0, +∞

∫ (eizy − 1 − izy1[0,1] (y)) y−1−α dy = 0

zα Γ(2 − α) −i πα2 iz e + . α(α − 1) 1−α

Hint: after using item (iii), integrate by parts. (vi) Use items (iv) and (v) to prove that for X ∼ S(α, 1, γ, 0), α ∈ (0, 1) ∪ (1, 2), we have that qX = 0, bX =

αγα , Γ(2 − α) cos ( απ ) 2

and νX (dx) =

α(1 − α)γα 1ℝ+ (x)dx, Γ(2 − α) cos ( απ ) x 1+α 2

where (bX , qX , νX ) is the characteristic triplet of X (see Theorem C.7). (vii) Use the equality +∞

∫ −∞

1 − cos(x) dx = 1 πx 2

4.12 Exercises

� 335

to prove that for z > 0, +∞

π lim ℜ [ ∫ (eizy − 1 − izy1[0,1] (y)) y−2 dy] = − z. ε→0 2 [ε ] (viii) Use the relation 1

∫ 0

+∞

cos(y) − 1 cos(y) dy + ∫ dy = −γEM , y y 1

where γEM is the Euler–Mascheroni constant, to prove that +∞

lim ℑ [ ∫ (eizy − 1 − izy1[0,1] (y)) y−2 dy] = (1 − γEM )z − z log(z). ε→0 [ε ] (ix) Use (vii) and (viii) to prove that if X ∼ S(1, 1, γ, 0), then qX = 0, bX =

2(γEM − 1)γ , π

and νX (dx) =

2γ 1ℝ+ (x)dx, πx 2

where (bX , qX , νX ) is the characteristic triplet of X (see Theorem C.7). Exercise 4.4. Let α ∈ (0, 2), β ∈ [−1, 1], γ > 0, and δ1 ∈ ℝ. (i) Show that if α ≠ 1, then the characteristic triplet of X ∼ S(α, β, γ, δ) is given by (bX , 0, νX ), where νX (dx) =

(1 + βsign(x)) α(1 − α)γα dx, απ |x|1+α 2Γ(2 − α) cos ( 2 )

and bX − δ =

{β, αγα 1 1 πα { Γ(2 − α) cos ( 2 ) 2 (( 1−β ) α − ( 1+β ) α ) + β, 2 2 {

α < 1, α > 1.

Hint: Apply Theorem 4.8. (ii) Show that if α = 1, then the characteristic triplet of X ∼ S(α, β, γ, δ) is given by (bX , 0, νX ) with bX =

4(γEM − 1)γβ + δ, π

where γEM is the Euler–Mascheroni constant, and νX (dx) =

γ (1 + βsign(x)) dx. π |x|2

336 � 4 Stochastic processes and fractional differential equations Exercise 4.5. Let α ∈ (1, 2), and let S α := {Stα , t ≥ 0} be an α-stable Lévy motion with β = 1 and γ = − cos ( πα ) > 0. Prove that for every f ∈ 𝒞02 (ℝ), the function 2 u(t, x) := E [f (Stα + x)] ,

t ≥ 0,

x ∈ ℝ,

is the unique strong solution (with the same definition of strong solutions of (4.9)) of the problem 𝜕u M α { (t, x) = − ( D+ u) (t, x), 𝜕t { {u(0, x) = f (x),

t > 0, x ∈ ℝ, x ∈ ℝ,

where the Marchaud derivative is applied to x variable, i. e. (M Dα+ u) (t, x) =

α(α − 1) ∫ (u(t, x) − 2u(t, x − y) + u(t, x − 2y))y−α−1 dy. (2α − 2) Γ(2 − α) ℝ+

Exercise 4.6. Let B = {Bs , s ≥ 0} be a standard Brownian motion, and for t > 0, define St := inf{s ≥ 0 : Bs ≥ t}. (i) Prove that S is a 21 -stable subordinator with γ = 1. Hint: Let ℳs = exp (√2λBs − λs) and observe that the process ℳ† := {Ms† , s ≥ 0} with ℳ†s = ℳs∧St is a uniformly bounded martingale with respect to the natural filtration of the Brownian motion B. Use the optional stopping theorem. (ii) Use (i) to prove that g 1 (t, s) = 2

t

√πs3

t2

e− 4s ,

t, s > 0.

Hint: Use the reflection principle P(St ≤ s) = 2P(Bs ≥ t). Exercise 4.7. Let a, b > 0. (i) Prove that lim aB(a, b)t −a B(t; a, b) = 1. t→0

(ii) Prove that lim aB(a, b)B(t; a, b) − t a = t→0

(1 − b)a . a+1

(iii) Prove that for all t ∈ (0, 1] and b ≠ 1, sign(b − 1)(aB(a, b)B(t; a, b) − t a ) < 0.

4.12 Exercises

� 337

Hint: Prove that if b < 1, then the function f (t) = aB(a, b)B(t; a, b) − t a is strictly increasing, whereas if b > 1, then it is strictly decreasing. (iv) Prove that for all t ∈ [0, 1] and b < 1, B(t; a, b) ≤ t a . Hint: Note that B(a, b) ≥

1 a

for b < 1. Then notice that the function f (t) = B(a, b) (B(t; a, b) − t a )

has a unique critical point t0 = 1 −

1

∈ (0, 1), which is a minimum, whereas

1

(B(a,b)a) 1−b

f (0) = f (1) = 0. (v) Prove (4.34). Hint: To get the lower bound, apply (iii). To get the upper bound, apply (iv). Exercise 4.8. Let Hα ∈ (0, 1), let B be a standard Brownian motion, and let L BHα be a Liouville fractional Brownian motion defined in (4.153). H (i) Show that E [L Bt α ] = 0 for all t ≥ 0. (ii) Prove that for 0 ≤ s ≤ t, H

E [L Bt α ⋅ L BsHα ] =

s

1

1

1

∫(s − τ)Hα − 2 (t − τ)Hα − 2 dτ,

1 ) 2 0

Γ2 (Hα +

whence for all t ≥ 0, H

t 2Hα

2

E [(L Bt α ) ] =

2Hα Γ2 (Hα + 21 )

.

(iii) Apply (ii) to prove that L BHα is Hα -self-similar. (iv) Prove that there exists a constant C(Hα ) such that for all 0 ≤ s ≤ t, 2

H

E [(L Bt α − L BsHα ) ] ≤ C(Hα )(t − s)2Hα . Hint: Establish that L Hα Bt

− L BsHα =

Hα −

Γ (Hα +

1 2

s

t

3

∫ (∫(v − u)Hα − 2 dv) dBu +

1 ) 2 0

s

1

Γ (Hα +

t

1

∫(t − u)Hα − 2 dBu .

1 ) 2 s

(v) Apply (iv) to prove that L BHα is a. s. γ-Hölder continuous for all γ < Hα .

5 Numerical methods and simulation Once we have a plethora of equations, processes and models to work with, we are now interested in their numerical objectification and simulation. For this reason, we recall some of the main numerical algorithms to handle the operators and processes we discussed. Concerning the numerical solution of fractional differential equations, we will talk about product-integration rules (introduced in [262, 263]) and fractional linear multistep methods (considered first in [145, 146]). It is clear that these are not the only possible methods. Just to cite some alternatives, in the literature, we can find also spectral collocation methods [265] and further techniques based on Physics-Informed Neural Networks [198]. However, let us also discuss a delicate topic: the regularity of the solutions of fractional differential equations, which represents the main obstacle to overcome for achieving the convergence of the considered methods. From the stochastic point of view, we recall some algorithms for the simulation of correlated Gaussian processes (to obtain the fBm) and the Chambers–Mallow–Stuck algorithm for the simulations of stable distributions. By exploiting and combining these two procedures we will be able to provide some useful simulation algorithms for more complex processes, such as reflected fOUs and delayed processes.

5.1 Calculation of the Mittag-Leffler function According to Chapter 2, Sections 2.7 and 2.8, the two-parameter Mittag-Leffler function Eα,β defined in (2.4) plays a prominent role in solving linear fractional differential equations. However, in spite of the simple power-series representation, calculating the values of the Mittag-Leffler function is not an easy task. Indeed, despite the series converges for all z ∈ ℂ, for z large enough and ρ small enough (for instance, ρ = 10−13 ), we need a really large value of Nρ (z) to have 󵄨󵄨Nρ (z) 󵄨󵄨 󵄨󵄨 󵄨󵄨 zk 󵄨󵄨 ∑ 󵄨󵄨 ≤ ρ. − E (z) 󵄨󵄨 󵄨󵄨 α,β 󵄨󵄨 k=0 Γ(αk + β) 󵄨󵄨 󵄨 󵄨 This means that the series representation is too much expensive to give a numerical approximation of Eα (z). For fixed q > 0 and |z| ≤ q, in [192] the authors propose to truncate |z|N the series at an index N such that Γ(αN+β) < η, where η is a small value. This truncation criterion, however, does not give explicit control over the approximation error. More|z|N |z|N over, the sequence Γ(αN+β) is not necessarily nonincreasing, so the condition Γ(αN+β) 0. We use the truncation criterion proposed in [94, Theorem 3.1], as stated in the next theorem.

https://doi.org/10.1515/9783110780017-005

5.1 Calculation of the Mittag-Leffler function

� 339

Theorem 5.1. Let q ∈ (0, 1), α > 0, β > 0, and ρ > 0. Then, for
\[ N = \max\left\{ \left\lceil \frac{2-\alpha-\beta}{\alpha} \right\rceil, \left\lceil \frac{\log(\rho(1-q))}{\log(q)} \right\rceil \right\}, \]
we have the representation
\[ E_{\alpha,\beta}(z) = \sum_{k=0}^{N} \frac{z^k}{\Gamma(\alpha k+\beta)} + R_N(z), \quad z \in [-q, q], \]
where the remainder term R_N can be bounded as follows: |R_N(z)| ≤ ρ for |z| ≤ q.

Proof. Let us write
\[ E_{\alpha,\beta}(z) = \sum_{k=0}^{N} \frac{z^k}{\Gamma(\alpha k+\beta)} + R_N(z), \]
where
\[ R_N(z) = \sum_{k=N+1}^{+\infty} \frac{z^k}{\Gamma(\alpha k+\beta)}. \]
Recall that the Gamma function is increasing for x ≥ 2. Therefore, for N ≥ (2−α−β)/α and k ≥ N + 1,
\[ \Gamma(\alpha k+\beta) \ge \Gamma(\alpha(N+1)+\beta) \ge \Gamma(2) = 1. \]
Thus, for |z| ≤ q < 1,
\[ |R_N(z)| \le \sum_{k=N+1}^{+\infty} |z|^k \le \frac{q^{N+1}}{1-q} \le \rho, \]
where the latter inequality holds for N ≥ log(ρ(1−q))/log(q) − 1.

According to Theorem 5.1, for ρ > 0, q ∈ (0, 1), and z ∈ [−q, q], define
\[ \widetilde{E}^{(1)}_{\alpha,\beta}(z; q, \rho) := \sum_{k=0}^{N} \frac{z^k}{\Gamma(\alpha k+\beta)}, \]
where
\[ N = N(q, \rho) = \max\left\{ \left\lceil \frac{2-\alpha-\beta}{\alpha} \right\rceil, \left\lceil \frac{\log(\rho(1-q))}{\log(q)} \right\rceil \right\}. \]
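For illustration, Ẽ^(1)_{α,β} with the truncation index of Theorem 5.1 can be sketched in a few lines of Python (the function name ml_series and the defaults q = 0.75, ρ = 10⁻¹² are illustrative choices; terms whose Gamma value overflows are treated as terms that already vanish in floating point):

```python
import math

def ml_series(z, alpha, beta, q=0.75, rho=1e-12):
    """Truncated power series for the two-parameter Mittag-Leffler
    function E_{alpha,beta}(z), valid for |z| <= q < 1, with the
    truncation index N of Theorem 5.1."""
    if abs(z) > q:
        raise ValueError("series approximation is used only for |z| <= q")
    n1 = math.ceil((2.0 - alpha - beta) / alpha)
    n2 = math.ceil(math.log(rho * (1.0 - q)) / math.log(q))
    N = max(n1, n2)
    s = 0.0
    for k in range(N + 1):
        try:
            s += z**k / math.gamma(alpha * k + beta)
        except OverflowError:  # Gamma overflow: the term is 0 in floating point
            break
    return s
```

For α = β = 1 the sum reduces to the exponential series, which gives an easy sanity check of the truncation criterion.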

Then ρ is the accuracy of the polynomial approximation, and q < 1 determines the range of the approximation of accuracy ρ. Now consider large values of |z|. In [94] the authors give the following integral representation of Eα,β. Namely, for α ∈ (0, 1], β ∈ ℝ, δ ∈ (πα/2, πα], and ε > 0, introduce the kernels
\[ K_1(r;\alpha,\beta,\delta,z) = \frac{r^{\frac{1-\beta}{\alpha}}}{\pi\alpha}\, e^{r^{1/\alpha}\cos(\delta/\alpha)}\, \frac{r\sin\!\big(r^{1/\alpha}\sin(\tfrac{\delta}{\alpha}) + \delta\tfrac{1-\beta}{\alpha}\big) - z\sin\!\big(r^{1/\alpha}\sin(\tfrac{\delta}{\alpha}) + \delta\big(1+\tfrac{1-\beta}{\alpha}\big)\big)}{r^2 - 2rz\cos(\delta) + z^2}, \tag{5.1} \]
\[ K_2(y;\alpha,\beta,\varepsilon,z) = \frac{\varepsilon^{1+\frac{1-\beta}{\alpha}}}{2\pi\alpha}\, e^{\varepsilon^{1/\alpha}\cos(y/\alpha)}\, \frac{\cos\!\big(\varepsilon^{1/\alpha}\sin(\tfrac{y}{\alpha}) + y\big(1+\tfrac{1-\beta}{\alpha}\big)\big) + i\sin\!\big(\varepsilon^{1/\alpha}\sin(\tfrac{y}{\alpha}) + y\big(1+\tfrac{1-\beta}{\alpha}\big)\big)}{\varepsilon e^{iy} - z}. \tag{5.2} \]

In [94, Equations (14) and (15)] the authors proved that for Arg(z) ≠ δ,
\[ E_{\alpha,\beta}(z) = \int_{\varepsilon}^{+\infty} K_1(r;\alpha,\beta,\delta,z)\,dr + \int_{-\delta}^{\delta} K_2(y;\alpha,\beta,\varepsilon,z)\,dy + \frac{\mathbf{1}_{(-\delta,\delta)}(\mathrm{Arg}(z))}{\alpha}\, z^{\frac{1-\beta}{\alpha}}\, e^{z^{1/\alpha}} \tag{5.3} \]

for all |z| > ε. They considered δ = πα, but we wish to consider any possible value of δ, because it helps to improve the algorithm for |Arg(z)| = πα. Indeed, it was noticed in [192] that the choice δ = πα provides a good approximation algorithm for all z such that |Arg(z)| ≠ πα and seems to be unstable if |Arg(z)| = πα, whereas choosing δ < πα solves this problem for such values of z. The next result can be proved exactly as [94, Theorems 2.1 and 2.3].

Theorem 5.2. Let α ∈ (0, 1], β ≥ 0, δ ∈ (πα/2, πα], and z ∈ ℂ with |Arg(z)| ≠ δ. Then, for β ≤ 1 + α,
\[ E_{\alpha,\beta}(z) = \int_{0}^{+\infty} K_1(r;\alpha,\beta,\delta,z)\,dr - \frac{\mathbf{1}_{\{1+\alpha\}}(\beta)}{z} + \frac{\mathbf{1}_{(-\delta,\delta)}(\mathrm{Arg}(z))}{\alpha}\, z^{\frac{1-\beta}{\alpha}}\, e^{z^{1/\alpha}}. \]

Theorem 5.2 tells us that if β < 1 + α, then we can omit the integral with respect to the kernel K₂. This simplifies the algorithm under such a condition. If β = 1 + α, then we also have to subtract 1/z, which seems to create some further numerical problems. Hence, for β = 1 + α, we still prefer the numerical evaluation of the integral of the kernel K₂. For |Arg(z)| ≠ δ and |z| > ε, denote
\[ I(z;\alpha,\beta,\varepsilon,\delta) = \int_{\varepsilon}^{+\infty} K_1(r;\alpha,\beta,\delta,z)\,dr, \qquad J(z;\alpha,\beta,\varepsilon,\delta) = \int_{-\delta}^{\delta} K_2(y;\alpha,\beta,\varepsilon,z)\,dy. \]


In what follows, we denote by J̃(z; α, β, ε, δ, ρ) the approximation of J(z; α, β, ε, δ) of accuracy ρ > 0, i.e., for which
\[ \big| J(z;\alpha,\beta,\varepsilon,\delta) - \widetilde{J}(z;\alpha,\beta,\varepsilon,\delta,\rho) \big| \le \rho. \]
This quality of approximation can be obtained by any suitable quadrature method. To provide an approximation of I of prescribed accuracy, we first need to truncate the integral. To do this, we prove the following result, which is similar to [94, Theorem 3.2].

Theorem 5.3. Let α ∈ (0, 1], β ≥ 0, ε > 0, δ ∈ (πα/2, πα], and 0 < ρ < (6/π)|cos(δ/α)|^{β−1}. Then for z ∈ ℂ with |Arg(z)| ≠ δ and |z| > ε, we have that
\[ I(z;\alpha,\beta,\varepsilon,\delta) = \int_{\varepsilon}^{r_0(z,\delta,\rho)} K_1(r;\alpha,\beta,\delta,z)\,dr + R_I(z), \tag{5.4} \]
where |R_I(z)| ≤ ρ, and
\[ r_0(z,\delta,\rho) = \max\left\{ \frac{\big(-\log\big(\tfrac{\pi}{6}\,|\cos(\tfrac{\delta}{\alpha})|^{1-\beta}\,\rho\big)\big)^{\alpha}}{|\cos(\tfrac{\delta}{\alpha})|^{\alpha}},\; 2|z|,\; \frac{1}{|\cos(\tfrac{\delta}{\alpha})|^{\alpha}} \right\}. \tag{5.5} \]

If, furthermore, β ≤ 1 + α, then the same result holds for ε = 0.

Proof. It is clear that
\[ R_I(z) = \int_{r_0(z,\delta,\rho)}^{+\infty} K_1(r;\alpha,\beta,\delta,z)\,dr. \]
Note that r² − 2rz cos(δ) + z² = (re^{iδ} − z)(re^{−iδ} − z). Then for r > r₀(z, δ, ρ) (which implies |z| ≤ r/2), we have that
\[ \frac{1}{|r^2 - 2rz\cos(\delta) + z^2|} \le \frac{1}{r^2\big(1-\frac{|z|}{r}\big)^2} \le \frac{4}{r^2}. \tag{5.6} \]
Furthermore, cos(δ/α) < 0; hence we can write
\[ e^{\cos(\delta/\alpha)\, r^{1/\alpha}} = e^{-|\cos(\delta/\alpha)|\, r^{1/\alpha}}. \]
Therefore (5.6) implies that for r ≥ r₀(z, δ, ρ) (and, consequently, r + |z| ≤ 3r/2),
\[ |K_1(r;\alpha,\beta,\delta,z)| \le \frac{4r^{\frac{1-\beta}{\alpha}}}{\pi\alpha r^2}\, e^{-|\cos(\delta/\alpha)|\, r^{1/\alpha}}\,(r+|z|) \le \frac{6r^{\frac{1-\beta}{\alpha}-1}}{\pi\alpha}\, e^{-|\cos(\delta/\alpha)|\, r^{1/\alpha}}. \]

Integrating and taking into account (5.5), we get that
\[ |R_I(z)| \le \frac{6}{\pi\alpha} \int_{r_0(z,\delta,\rho)}^{+\infty} r^{\frac{1-\beta}{\alpha}-1}\, e^{-|\cos(\delta/\alpha)|\, r^{1/\alpha}}\,dr = \frac{6}{\pi}\,\big|\cos(\tfrac{\delta}{\alpha})\big|^{\beta-1} \int_{|\cos(\delta/\alpha)|(r_0(z,\delta,\rho))^{1/\alpha}}^{+\infty} t^{-\beta} e^{-t}\,dt \]
\[ \le \frac{6}{\pi}\,\big|\cos(\tfrac{\delta}{\alpha})\big|^{\beta-1}\, e^{-|\cos(\delta/\alpha)|(r_0(z,\delta,\rho))^{1/\alpha}} \le \rho, \]
where the second inequality uses that, by (5.5), the lower integration limit is at least 1, so that t^{−β} ≤ 1 on the domain of integration.

Again, the integral on the right-hand side of (5.4) can be evaluated with prescribed accuracy. Hence, for ρ > 0, denote by Ĩ(z; α, β, ε, δ, ρ) the approximation of I(z; α, β, ε, δ) with prescribed accuracy ρ, i.e.,
\[ \big| \widetilde{I}(z;\alpha,\beta,\varepsilon,\delta,\rho) - I(z;\alpha,\beta,\varepsilon,\delta) \big| \le \rho. \]
Now put q ∈ (0, 1) and define the following functions:
\[ \delta(z) = \begin{cases} \pi\alpha, & |\mathrm{Arg}(z)| \ne \pi\alpha, \\ \tfrac{3}{4}\pi\alpha, & |\mathrm{Arg}(z)| = \pi\alpha, \end{cases} \qquad \varepsilon(\alpha,\beta,q) = \begin{cases} 0, & \beta \le \alpha+1, \\ \tfrac{q}{2}, & \beta > \alpha+1. \end{cases} \]
Let ρ > 0 and ρ_I, ρ_J > 0 be such that ρ_I + ρ_J = ρ. For |z| ≥ q, consider the approximation
\[ \widetilde{E}^{(2)}_{\alpha,\beta}(z;q,\rho) = \widetilde{I}(z;\alpha,\beta,\varepsilon(\alpha,\beta,q),\delta(z),\rho_I) + \mathbf{1}_{[\alpha+1,+\infty)}(\beta)\, \widetilde{J}\Big(z;\alpha,\beta,\frac{q}{2},\delta(z),\rho_J\Big) + \frac{\mathbf{1}_{[-\delta(z),\delta(z)]}(\mathrm{Arg}(z))}{\alpha}\, z^{\frac{1-\beta}{\alpha}}\, e^{z^{1/\alpha}}. \]
We apply (5.3). If β < α + 1, then we put ε = 0 and neglect the numerical integration of K₂ due to Theorem 5.2, whereas the choice of δ depends on the argument of z. We consider the kernel K₂ for β = α + 1 to avoid computing 1/z. Furthermore, under the condition ρ_I + ρ_J = ρ, for |z| ≥ q, Ẽ^(2)_{α,β}(z; q, ρ) is an approximation of E_{α,β}(z) with accuracy ρ, i.e.,
\[ \big| \widetilde{E}^{(2)}_{\alpha,\beta}(z;q,\rho) - E_{\alpha,\beta}(z) \big| \le \rho. \]
It remains to figure out how to glue the approximation procedures. We use the method suggested in [192, Section 3]. Select two real numbers 0 < q₁ < q₂ < 1. For |z| < q₁, take Ẽ^(1)_{α,β}(z; q₂, ρ), whereas for |z| > q₂, take Ẽ^(2)_{α,β}(z; q₁, ρ). For |z| ∈ [q₁, q₂], use
\[ \widetilde{E}^{(1)}_{\alpha,\beta}(z;q_2,\rho)\, \frac{q_2-|z|}{q_2-q_1} + \widetilde{E}^{(2)}_{\alpha,\beta}(z;q_1,\rho)\, \frac{|z|-q_1}{q_2-q_1}. \]
This procedure does not change the accuracy of the method.
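In code, the gluing of [192, Section 3] is a plain linear blend of the two procedures; a minimal Python sketch (the names glue, E1, E2 and the defaults q₁ = 0.25, q₂ = 0.75 are illustrative, with E1 and E2 standing for already computed values of Ẽ^(1) and Ẽ^(2)):

```python
def glue(z_abs, E1, E2, q1=0.25, q2=0.75):
    """Glue a series approximation E1 (for small |z|) and an integral
    approximation E2 (for large |z|) by linear blending on [q1, q2]."""
    if z_abs < q1:
        return E1
    if z_abs > q2:
        return E2
    w = (z_abs - q1) / (q2 - q1)  # blending weight in [0, 1]
    return (1.0 - w) * E1 + w * E2
```

Since the blend is a convex combination of two values that are each within ρ of Eα,β(z), the result is also within ρ, which is why the procedure does not degrade the accuracy.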


Finally, to determine the approximation for α > 1, we use the recurrence formula
\[ E_{\alpha,\beta}(z) = \frac{1}{\lceil\alpha\rceil} \sum_{j=0}^{\lceil\alpha\rceil-1} E_{\frac{\alpha}{\lceil\alpha\rceil},\beta}\Big( z^{\frac{1}{\lceil\alpha\rceil}}\, e^{\frac{i2\pi j}{\lceil\alpha\rceil}} \Big) \tag{5.7} \]

as in [93, Equation (4.7.25)], whereas all the summands can be approximated as before. In fact, (5.7) answers the following question: why, despite working with real fractional differential equations, do we want to approximate the Mittag-Leffler function at complex arguments, in particular with |Arg(z)| = πα for some α ∈ (0, 1]? Clearly, (5.7) tells us that if we want to calculate Eα,β(z) for α > 1 and real z, we still need to calculate it for α ∈ (0, 1] and complex z. Consider, for example, α = 2 and z > 0. Then one of the summands of (5.7) is E_{1,β}(−z^{1/2}) with Arg(−z^{1/2}) = π. If α = 4, then the same occurs, for example, for the summand with j = 2; if α = 6, then it occurs for j = 3, and so on. Hence Arg(z) = π occurs every time α is an even integer. In general, denote by Ẽα,β(z; q₁, q₂, ρ) the approximation of Eα,β with prescribed accuracy ρ > 0. We give in Listing 5.9.1 a function written in the R language [217], and then in Figures 5.1, 5.2, 5.3, and 5.4 we compare the obtained approximated Mittag-Leffler functions with the exact ones for some choices of α and β. All figures have been realized with the R package ggplot2 [256]. In the script, for a given ρ, we set ρ_I = 2ρ/3 and ρ_J = ρ/3. Indeed, we can prescribe an accuracy of ρ/3 in the numerical integration

Figure 5.1: Comparison of Ẽ₁,₁(z; 0.25, 0.75, 10⁻¹²) with E₁,₁(z) = e^z.


Figure 5.2: Comparison of Ẽ2,1 (z; 0.25, 0.75, 10−12 ) with E2,1 (z) = cosh(√z).

Figure 5.3: Comparison of Ẽ₀.₅,₁(z; 0.25, 0.75, 10⁻¹²) with E₀.₅,₁(z) = (2e^{z²}/√π) ∫_{−z}^{+∞} e^{−t²} dt.

Figure 5.4: Comparison of Ẽ₂,₂(z; 0.25, 0.75, 10⁻¹²) with E₂,₂(z) = sinh(√z)/√z.

procedures, and we can choose r₀(z, δ, ρ/3) in (5.4) in such a way that |R_I(z)| ≤ ρ/3. In general, the best total accuracy we can prescribe is 3ϵ, where ϵ denotes the machine epsilon (i.e., the distance from 1 to the next larger representable floating-point number; see, for example, [216, Section 2.5.3]). Indeed, the evaluation of Ẽα,β(z; q₁, q₂, ρ) requires at most three approximation procedures, each of which cannot achieve accuracy better than ϵ due to the limitations of floating-point arithmetic; hence it is necessary that ρ ≥ 3ϵ. This script is similar to the MATLAB function [210], but it includes the improvements considered in [192]. Another numerical approach to compute the Mittag-Leffler function, based on numerical Laplace inversion, is given in [83] and then written as a MATLAB function in [86]. It has been converted into R in the MittagLeffleR package [88]. A comparison of [210], [86], and the integral method considered above is given in [192].
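The reduction (5.7) of the case α > 1 to indices in (0, 1] can also be sketched compactly (a Python sketch; ml_base stands for any routine evaluating E_{a,β} for a ≤ 1 at complex arguments and is an assumed input):

```python
import cmath
import math

def ml_recurrence(z, alpha, beta, ml_base):
    """Reduce E_{alpha,beta} with alpha > 1 to Mittag-Leffler functions
    of index alpha/ceil(alpha) in (0, 1] via the recurrence (5.7).
    ml_base(w, a, b) must evaluate E_{a,b}(w) for a <= 1, complex w."""
    m = math.ceil(alpha)
    root = z ** (1.0 / m)  # principal m-th root of z
    return sum(ml_base(root * cmath.exp(2j * cmath.pi * j / m),
                       alpha / m, beta) for j in range(m)) / m
```

For α = 2 and β = 1 the base index is 1, so taking ml_base as the complex exponential recovers E₂,₁(z) = cosh(√z), exactly as discussed above for even integer α.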

5.2 Approximation of fractional integrals. Product-integration methods

Let us approximate fractional integrals with the help of a generalization of the quadrature method. In [263] the author proposed a family of methods, called product-integration (PI) methods, for the approximation of integrals of the form ∫_a^b ϕ(s)f(s)ds by means of polynomial approximations of f, provided that the moments of ϕ are known. We use this approach, with a first-degree polynomial approximation, to estimate (I^α_{0+}f)(t) on a suitable grid t = t₀, …, t_N. Let T > 0, and let f : [0, T] → ℝ be a continuous function. Also, let N ∈ ℕ, 0 = t₀ < t₁ < ⋯ < t_N = T, Π = {t_k}_{k=0,…,N}, h_k = t_k − t_{k−1} for k = 1, …, N, and

\[ \mathrm{diam}(\Pi) = \max_{k=1,\dots,N} h_k. \]
Fix α > 0. Our goal is to approximate (I^α_{0+}f)(t_k), k = 0, …, N. Obviously, (I^α_{0+}f)(0) = 0. Approximate the function f on each interval [t_{k−1}, t_k], k = 1, …, N, as
\[ f(s) \approx f_{k-1} + \frac{s-t_{k-1}}{h_k}(f_k - f_{k-1}), \]
where f_k = f(t_k), k = 0, …, N, i.e., by the first-degree polynomial that interpolates (t_{k−1}, f(t_{k−1})) and (t_k, f(t_k)). Then
\[ (I^{\alpha}_{0+}f)(t_k) = \frac{1}{\Gamma(\alpha)}\sum_{j=1}^{k}\int_{t_{j-1}}^{t_j}(t_k-s)^{\alpha-1}f(s)\,ds \approx \sum_{j=1}^{k}\frac{f_{j-1}}{\Gamma(\alpha)}\int_{t_{j-1}}^{t_j}(t_k-s)^{\alpha-1}\,ds + \sum_{j=1}^{k}\frac{f_j-f_{j-1}}{h_j\Gamma(\alpha)}\int_{t_{j-1}}^{t_j}(t_k-s)^{\alpha-1}(s-t_{j-1})\,ds \]
\[ = \frac{1}{\Gamma(\alpha+1)}\sum_{j=1}^{k}\big[f_{j-1}(t_k-t_{j-1})^{\alpha}-f_j(t_k-t_j)^{\alpha}\big] + \frac{1}{\Gamma(\alpha+2)}\sum_{j=1}^{k}\frac{f_j-f_{j-1}}{h_j}\big((t_k-t_{j-1})^{\alpha+1}-(t_k-t_j)^{\alpha+1}\big) = \sum_{j=0}^{k}a_{j,k}f_j =: (\widetilde{I}^{\alpha}_{\Pi,0+}f)_k, \]
where
\[ a_{j,k} = \begin{cases} \dfrac{t_k^{\alpha}}{\Gamma(\alpha+1)}-\dfrac{t_k^{\alpha+1}}{h_1\Gamma(\alpha+2)}+\dfrac{(t_k-t_1)^{\alpha+1}}{h_1\Gamma(\alpha+2)}, & j=0,\\[2mm] \dfrac{(t_k-t_{j-1})^{\alpha+1}-(t_k-t_j)^{\alpha+1}}{h_j\Gamma(\alpha+2)}-\dfrac{(t_k-t_j)^{\alpha+1}-(t_k-t_{j+1})^{\alpha+1}}{h_{j+1}\Gamma(\alpha+2)}, & j=1,\dots,k-1,\\[2mm] \dfrac{h_k^{\alpha}}{\Gamma(\alpha+2)}, & j=k. \end{cases} \]

Now we define the remainder term
\[ \mathcal{R}(\Pi) = \max_{k=1,\dots,N} \big| (I^{\alpha}_{0+}f)(t_k) - (\widetilde{I}^{\alpha}_{\Pi,0+}f)_k \big|, \]
and we establish that lim_{diam(Π)↓0} ℛ(Π) = 0, possibly with a rate of convergence.


Proposition 5.4. Let λ ∈ (0, 1].
(i) If f ∈ 𝒞^λ[0, T] with Hölder constant C > 0, then
\[ \mathcal{R}(\Pi) \le \frac{C(\mathrm{diam}(\Pi))^{\lambda} T^{\alpha}}{\Gamma(\alpha+1)}. \tag{5.8} \]
(ii) If f ∈ 𝒞¹[0, T] and f′ ∈ 𝒞^λ[0, T] with Hölder constant C > 0, then
\[ \mathcal{R}(\Pi) \le \frac{C(\mathrm{diam}(\Pi))^{\lambda+1} T^{\alpha}}{\Gamma(\alpha+1)}. \tag{5.9} \]

Proof. Let p₁(s) be the piecewise linear function of the form
\[ p_1(s) = f_{j-1} + \frac{s-t_{j-1}}{h_j}(f_j-f_{j-1}) = f_{j-1}\frac{t_j-s}{h_j} + f_j\frac{s-t_{j-1}}{h_j}, \quad s \in [t_{j-1},t_j], \quad j = 1,\dots,N. \]
First, we prove (5.8). Note that
\[ \big| (I^{\alpha}_{0+}f)(t_k) - (\widetilde{I}^{\alpha}_{\Pi,0+}f)_k \big| = \frac{1}{\Gamma(\alpha)} \left| \int_0^{t_k} (t_k-s)^{\alpha-1}\big(f(s)-p_1(s)\big)\,ds \right| \]
\[ \le \frac{1}{\Gamma(\alpha)} \sum_{j=1}^{k} \int_{t_{j-1}}^{t_j} (t_k-s)^{\alpha-1} \left| \frac{t_j-s}{h_j}\big(f(s)-f_{j-1}\big) + \frac{s-t_{j-1}}{h_j}\big(f(s)-f_j\big) \right| ds \]
\[ \le \frac{C}{\Gamma(\alpha)} \sum_{j=1}^{k} h_j^{\lambda} \int_{t_{j-1}}^{t_j} (t_k-s)^{\alpha-1}\,ds \le \frac{C(\mathrm{diam}(\Pi))^{\lambda}\, t_k^{\alpha}}{\Gamma(\alpha+1)}. \tag{5.10} \]
Second, we prove (5.9). If f ∈ 𝒞¹[0, T], then it follows from the Lagrange theorem that for all s ∈ [t_{j−1}, t_j],
\[ f(s) - f_{j-1} = f'(\xi_{j-1}(s))(s-t_{j-1}) \quad \text{and} \quad f_j - f(s) = f'(\xi_j(s))(t_j-s), \]
where ξ_{j−1}(s), ξ_j(s) ∈ [t_{j−1}, t_j]. Therefore, similarly to (5.10),
\[ \big| (I^{\alpha}_{0+}f)(t_k) - (\widetilde{I}^{\alpha}_{\Pi,0+}f)_k \big| \le \frac{1}{\Gamma(\alpha)} \sum_{j=1}^{k} \int_{t_{j-1}}^{t_j} (t_k-s)^{\alpha-1} \frac{(t_j-s)(s-t_{j-1})}{h_j} \big| f'(\xi_j(s)) - f'(\xi_{j-1}(s)) \big|\,ds \]
\[ \le \frac{C}{\Gamma(\alpha)} \sum_{j=1}^{k} h_j^{\lambda+1} \int_{t_{j-1}}^{t_j} (t_k-s)^{\alpha-1}\,ds \le \frac{C(\mathrm{diam}(\Pi))^{\lambda+1}\, t_k^{\alpha}}{\Gamma(\alpha+1)}, \]
which proves (5.9).

Remark 5.5. If f ∈ 𝒞²[0, T], then of course (5.9) holds. This is similar to the classical trapezoidal rule; see [54, Equation (2.1.11)]. For f ∈ 𝒞[0, T], we can still state that lim_{diam(Π)↓0} ℛ(Π) = 0.

The case of the uniform grid h_j = h for j = 1, …, N is a bit special. Indeed, in this case, a_{j,k} = (h^α/Γ(α+2)) ã_{k−j}, where
\[ \tilde{a}_j = \begin{cases} (\alpha+1-k)k^{\alpha} + (k-1)^{\alpha+1}, & j = k,\\ (j-1)^{\alpha+1} - 2j^{\alpha+1} + (j+1)^{\alpha+1}, & j = 1,\dots,k-1,\\ 1, & j = 0, \end{cases} \tag{5.11} \]
and thus
\[ (\widetilde{I}^{\alpha}_{\Pi,0+}f)_k = \frac{h^{\alpha}}{\Gamma(\alpha+2)} \sum_{j=0}^{k} \tilde{a}_{k-j} f_j, \]

i.e., it is a convolution sum. This implies that we can speed up the evaluation procedure by using the fast Fourier transform (FFT), as in [100]. In Listing 5.9.2, we give an example of implementation; for clarity, we do not use the FFT there. The function takes as input the integrand function, the final time T, the step length h, and the fractional order α. In Figures 5.5, 5.6, and 5.7, we compare the trapezoidal PI approximation with some exact fractional integrals.
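For illustration, the uniform-grid rule with the weights (5.11) can be sketched as a plain double loop (a Python sketch with the illustrative name pi_trapezoidal; an FFT-based evaluation of the convolution sum would be faster):

```python
import math

def pi_trapezoidal(f, T, h, alpha):
    """Trapezoidal product-integration approximation of the fractional
    integral (I^alpha_{0+} f)(t_k) on the uniform grid t_k = k*h."""
    N = round(T / h)
    fv = [f(k * h) for k in range(N + 1)]
    c = h**alpha / math.gamma(alpha + 2)
    out = [0.0]  # (I^alpha f)(0) = 0
    for k in range(1, N + 1):
        s = fv[k]  # weight a_tilde_0 = 1 multiplies f_k
        for j in range(1, k):  # middle weights a_tilde_j, j = 1..k-1
            a = (j - 1)**(alpha + 1) - 2 * j**(alpha + 1) + (j + 1)**(alpha + 1)
            s += a * fv[k - j]
        # endpoint weight a_tilde_k multiplies f_0
        s += ((alpha + 1 - k) * k**alpha + (k - 1)**(alpha + 1)) * fv[0]
        out.append(c * s)
    return out
```

A quick check: for f ≡ 1 the weights sum to (α+1)k^α, so the rule reproduces t^α/Γ(α+1) exactly, while for f(t) = t² it matches 2t^{α+2}/Γ(α+3) up to the O(h²) error of Proposition 5.4(ii).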

Figure 5.5: Comparison of Ĩ^α_{Π,0+}f for f(t) = t² with (I^α_{0+}f)(t) = (2/Γ(α+3)) t^{α+2}, where the grid is uniform, α = 0.25, T = 10, h = 0.25.

Figure 5.6: Comparison of Ĩ^α_{Π,0+}f for f(t) = t² with (I^α_{0+}f)(t) = (2/Γ(α+3)) t^{α+2}, where the grid is uniform, α = 0.75, T = 10, h = 0.25.

Figure 5.7: Comparison of Ĩ^α_{Π,0+}f for f(t) = t^{4.5} with (I^α_{0+}f)(t) = (Γ(5.5)/Γ(α+5.5)) t^{α+4.5}, where the grid is uniform, α = 0.75, T = 5, h = 0.25.


5.3 Product-integration methods for fractional differential equations

The original purpose of PI methods is to solve nonlinear integral equations of both Fredholm and Volterra types (see [262]). We would like to use PI methods to solve fractional differential equations involving Caputo derivatives. For simplicity, we consider the case α ∈ (0, 1). Let us consider a function F : ℝ⁺₀ × ℝ → ℝ satisfying the assumptions of Corollary 2.34 and the Cauchy problem
\[ \begin{cases} ({}^{C}D^{\alpha}_{0+}y)(t) = F(t, y(t)), & t > 0, \\ y(0) = y_0. \end{cases} \]
We know from Proposition 2.31 that such a Cauchy problem is equivalent to the nonlinear Volterra equation
\[ y(t) = y_0 + \big(I^{\alpha}_{0+}F(\cdot, y(\cdot))\big)(t), \quad t \ge 0. \tag{5.12} \]
Fix the time horizon T > 0 and a uniform grid with h = t_j − t_{j−1} for j = 1, …, N. We know that t ∈ ℝ⁺₀ ↦ F(t, y(t)) is a continuous function (see the first part of the proof of Theorem 2.32), and therefore we can approximate its fractional integral by applying a PI formula. Denoting by (ỹ_k)_{k=0,…,N} the approximated solution, we get the following relations:
\[ \begin{cases} \tilde{y}_0 = y_0, \\ \tilde{y}_k = y_0 + \dfrac{h^{\alpha}}{\Gamma(\alpha+2)} \displaystyle\sum_{j=0}^{k} \tilde{a}_{k-j} F(t_j, \tilde{y}_j), & k = 1, \dots, N, \end{cases} \tag{5.13} \]
where we recall that ã_j is defined in (5.11). Since at the kth step the values ỹ_j for j = 0, …, k − 1 have already been determined, at each step we have to solve a (possibly nonlinear) equation with respect to the unknown ỹ_k. Although we do not write it explicitly, it is clear that the approximation ỹ depends on the diameter h of the grid. We now consider the remainder term
\[ \mathcal{R}(h) = \max_{k=1,\dots,N} |y(t_k) - \tilde{y}_k| \]
and try to find an exponent γ > 0 such that ℛ(h) ≤ Ch^γ. However, we cannot guarantee that y is smooth at 0. This has been noticed, for example, in [60]; see also the discussion in [61, Section 6.4]. However, it is possible to state that if F ∈ 𝒞^m(ℝ × [0, T]) for some m ∈ ℕ, then y ∈ 𝒞^m(0, T], according to [40, Theorem 2.1] and [61, Theorem 6.28]. If we consider only the assumptions of Corollary 2.34 and some minimal regularity in the time variable, then we can still find a suitable γ > 0 controlling the order of convergence of the method. More precisely, we want to adapt the arguments from [66], which hold in

󵄨

󵄨

and try to find an exponent γ > 0 such that ℛ(h) ≤ Chγ . However, we cannot guarantee that y is smooth at 0. This has been noticed, for example, in [60]; see also the discussion in [61, Section 6.4]. However, it is possible to state that if F ∈ 𝒞 m (ℝ × [0, T]) for some m ∈ ℕ, then y ∈ 𝒞 m (0, T], according to [40, Theorem 2.1] and [61, Theorem 6.28]. If we consider only the assumptions of Corollary 2.34 and some minimal regularity in the time variable, then we can find a suitable γ > 0 to control the order of convergence of the method. More precisely, we want to adapt the arguments from [66], which hold in

5.3 Product-integration methods for fractional differential equations

� 351

the linear case. To do this, we use the following discrete fractional Grönwall inequality; see [66, Theorem 2.1 and Corollary]. Theorem 5.6. Let xk , k = 0, . . . , N, be a sequence of nonnegative numbers satisfying xk ≤ a +

k−1 xj b α + Mh , ∑ (kh)1−α (k − j)1−α j=0

i = 1, . . . , N,

where α ∈ (0, 1), a, b ≥ 0, and M, h > 0. Then xk ≤ aEα (M(Γ(α))(kh)α ) +

bΓ(α) Eα,α (M(Γ(α))(kh)α ) , (kh)1−α

k = 0, . . . , N.

Furthermore, for each T > 0, there exists C = C(T) such that xk ≤ C (a +

b ), (kh)1−α

i = 0, . . . , N,

if Nh ≤ T. Now we can prove the following result. Theorem 5.7. Let F : ℝ+0 × ℝ → ℝ satisfy the assumptions of Corollary 2.34, and let there exist λ ∈ (0, 1] and CF such that 󵄨󵄨 󵄨 λ 󵄨󵄨F(t, x) − F(τ, x)󵄨󵄨󵄨 ≤ CF |t − τ| ,

t, τ ∈ [0, T],

x ∈ ℝ.

Then there exists a constant C > 0 such that δ

ℛ(h) ≤ Ch ,

(5.14)

where δ = min{λ, α}. Proof. Note that t ∈ [0, T] 󳨃→ F(t, y(t)) ∈ ℝ is δ-Hölder continuous, where δ = min{λ, α}. α Indeed, we know that t ∈ [0, T] 󳨃→ F(t, y(t)) ∈ ℝ is continuous. Then I0+ F(⋅, y(⋅)) is α-Hölder continuous according to Lemma 1.17. It follows from (5.12) that y ∈ 𝒞 α [0, T]. Consider arbitrary t, τ ∈ [0, T]. Then 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨F(t, y(t)) − F(τ, y(τ))󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨F(t, y(t)) − F(t, y(τ))󵄨󵄨󵄨 + 󵄨󵄨󵄨F(t, y(τ)) − F(τ, y(τ))󵄨󵄨󵄨 󵄨 󵄨 ≤ L󵄨󵄨󵄨y(t) − y(τ)󵄨󵄨󵄨 + CF |t − τ|λ ≤ LCy |t − τ|α + CF |t − τ|λ , where L is such that |F(t, y1 ) − F(t, y2 )| ≤ L|y1 − y2 | for all y1 , y2 ∈ ℝ and t ∈ [0, T], and Cy is the Hölder constant of y. This proves the Hölder continuity of F(⋅, y(⋅)). Let CF,y be the Hölder constant of F(⋅, y(⋅)). Fix k = 1, . . . , N and note that y(tk ) = y0 +

k hα α α F(⋅, y(⋅))) (tk ) − (̃IΠ,0+ F(⋅, y(⋅)))k ) . ∑ ãj−k F(tj , y(tj )) + ((I0+ Γ(α + 2) j=0

352 � 5 Numerical methods and simulation Now denote α

α

ℛQ (Π) = max 󵄨󵄨󵄨(I0+ F(⋅, y(⋅))) (tk ) − (̃IΠ,0+ F(⋅, y(⋅)))k 󵄨󵄨󵄨 , k=1,...,N

󵄨

󵄨

the quadrature remainder term. Then, according to (5.8), 󵄨󵄨 󵄨 󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨 ≤ ≤

k hα 󵄨 󵄨 ∑ ãk−j 󵄨󵄨󵄨F (tj , y(tj )) − F(tj , ̃yj )󵄨󵄨󵄨 + ℛQ (Π) Γ(α + 2) j=0 δ α hα L k 󵄨 󵄨 CF,y h T . ∑ ãk−j 󵄨󵄨󵄨y(tj ) − ̃yj 󵄨󵄨󵄨 + Γ(α + 2) j=0 Γ(α + 1)

If k = 1, then ã1 = α = α1α−1 . Assume that k ≥ 2. For j = 1, ã1 = 2α+1 − 2 = 2 (2α − 1) 1α−1 , and for j = 2, . . . , k − 1, 1

α

1 s

ãj = (α + 1) ∫ ((j + s) − (j − s) ) ds = (α + 1)α ∫ ∫ (j + τ)α−1 dτ 0

α

0 −s

1

≤ 2(α + 1)α ∫(j − s)α−1 sds ≤ (α + 1)α(j − 1)α−1 0

1 α−1 = (α + 1)αjα−1 (1 − ) ≤ 21−α (α + 1)αjα−1 . j Finally, for j = k, 1

1 s

ãk = (α + 1) ∫ (k α − (k − s)α ) ds = (α + 1)α ∫ ∫(k − τ)α−1 dτ ds 0

1

≤ (α + 1)α ∫(k − s)α−1 sds ≤ 0

0 0

(α + 1)α (k − 1)α−1 2

(α + 1)α α−1 1 α−1 (α + 1)α α−1 = k (1 − ) ≤ k . 2 k 2α

(5.15)

5.3 Product-integration methods for fractional differential equations

� 353

So there exists C > 0, not depending on k, such that ãk ≤ Cjα−1 for j = 1, . . . , k, and (5.15) implies that 󵄨 󵄨 δ α Chα L k−1 󵄨󵄨󵄨y(tj ) − ̃yj 󵄨󵄨󵄨 CF,y h T 󵄨󵄨 󵄨 + . ∑ 󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨 ≤ Γ(α + 2) j=0 (k − j)1−α Γ(α + 1)

(5.16)

Furthermore, kh = tk ≤ T, whence, by Theorem 5.6, CF,y hδ T α 󵄨󵄨 󵄨 󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨 ≤ CT Γ(α + 1)

(5.17)

for some constant CT > 0 depending only on T > 0. In this and the previous section, we discussed a PI method based on the trapezoidal quadrature. Furthermore, system (5.13) is a fractional Adams–Moulton (AM) scheme. Stability analysis for this scheme is carried out in [84]. Similar arguments also hold for the rectangular (explicit Euler) method, where for tj = jh, j = 0, 1, 2, . . . , N, and a continuous function f : [0, T] → ℝ, α (I0+ f ) (tk )

tj+1

1 k−1 ≈ ∑ ∫ (tk − s)α−1 f (tj )ds Γ(α) j=0 tj

α k−1

=

h α f )k , ∑ ((k − j)α − (k − j − 1)α ) f (tj ) = (R̃IΠ,0+ Γ(α) j=0

k = 1, . . . , N,

is considered. The same proof as that of Proposition 5.4 gives the following result. Proposition 5.8. Let f ∈ 𝒞 λ [0, T] for λ ∈ (0, 1] with Hölder constant C > 0, and let Π be a uniform partition with N + 1 nodes and of diameter h. Then Chλ T α 󵄨 α 󵄨 α max 󵄨󵄨󵄨󵄨(I0+ f ) (tk ) − (R̃IΠ,0+ f )k 󵄨󵄨󵄨󵄨 ≤ . k=1,...,N Γ(α + 1) Applying this PI method, we get a fractional Adams–Bashforth (AB) formula (the fractional explicit Euler method): ̃y0 = y0 , { { { hα k−1 { { ∑ b f (t , ̃y ), {̃yk = y0 + Γ(α + 1) j=0 k−j j j {

k = 1, . . . , N,

where bj = jα − (j − 1)α . Again, we can determine the convergence order of such a method.

(5.18)

354 � 5 Numerical methods and simulation Theorem 5.9. Let F : ℝ+0 × ℝ → ℝ satisfy the assumptions of Corollary 2.34. Assume further that there exist λ ∈ (0, 1] and CF such that 󵄨󵄨 󵄨 λ 󵄨󵄨F(t, x) − F(τ, x)󵄨󵄨󵄨 ≤ CF |t − τ| ,

t, τ ∈ [0, T],

x ∈ ℝ.

Then there exists a constant C > 0 such that AB

δ

ℛ(h) ≤ Ch ,

where δ = min{λ, α}, Π is a uniform grid having N + 1 nodes and diameter h, (̃yk )k=0,...,N are defined by (5.18), and AB

ℛ(h) := max 󵄨󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨. k=1,...,N

󵄨

󵄨

Proof. Arguing in the same way as in the proof of Theorem 5.7, we get that F(⋅, y(⋅)) is δ-Hölder continuous with coefficient CF,y > 0, where δ = min{λ, α}. Fix k = 1, . . . , N. Since y solves (5.12), we obtain that y(tk ) = y0 +

hα k−1 α α f (⋅, y(⋅))) (tk ) − (R̃IΠ,0+ f (⋅, y(⋅)))k ) . ∑ b F(t , y(tj )) + ((I0+ Γ(α) j=0 k−j j

Hence subtracting ̃yk and taking the absolute value, we arrive at the inequalities hα k−1 󵄨 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 α α f (⋅, y(⋅))) (tk ) − (R̃IΠ,0+ f (⋅, y(⋅)))k 󵄨󵄨󵄨󵄨 ∑ b 󵄨󵄨F (t , y(tj )) − F(tj , ̃yj )󵄨󵄨󵄨 + 󵄨󵄨󵄨󵄨(I0+ 󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨 ≤ Γ(α) j=0 k−j 󵄨 j ≤

δ α hα k−1 󵄨 󵄨 CF,y h T . ∑ bk−j 󵄨󵄨󵄨y(tj ) − ̃yj 󵄨󵄨󵄨 + Γ(α) j=0 Γ(1 + α)

Furthermore, b1 = 1α = 1α−1 , whereas 1

bj = α ∫(j − s)α−1 ds ≤ α(j − 1)α−1 ≤ α21−α jα−1 0

for j = 2, 3, . . . . So there exists a constant C > 0 such that bj ≤ Cjα−1 for j = 1, 2, . . . and δ α α k−1 󵄨󵄨 󵄨󵄨 Ch 󵄨󵄨 CF,y h T α−1 󵄨󵄨 ̃ ̃ y(t ) − y ≤ (k − j) y(t ) − y + . ∑ 󵄨󵄨 k 󵄨󵄨 j k 󵄨󵄨 j 󵄨󵄨 Γ(α) j=0 Γ(1 + α)

From Theorem 5.6 we get that there exists a constant CT such that δ α 󵄨󵄨 󵄨 CT CF,y h T . 󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨 ≤ Γ(1 + α)

5.3 Product-integration methods for fractional differential equations

� 355

Taking the maximum over k = 1, . . . , N, we conclude the proof. The advantage of this AB-type algorithm is that it is explicit and we do not need to solve any (perhaps nonlinear) equation. Finally, notice that we can combine the fractional AB and fractional AM methods to obtain a predictor–corrector scheme, i. e., a fractional Adams–Bashforth–Moulton (ABM) scheme, of the form ̃y0 = y0 , { { { { { { P hα k−1 { { {̃yk = y0 + ∑ b f (t , ̃y ), Γ(α + 1) j=0 k−j j j { { { { k−1 { { hα { { ( ∑ ãk−j f (tj , ̃yj ) + ã0 f (tk , ̃yPk )) , {̃yk = y0 + Γ(α + 2) j=0 {

k = 1, . . . , N,

(5.19)

k = 1, . . . , N.

This scheme was introduced in [62]. The convergence of such a scheme was studied in [63], whereas its stability was established in [82]. Let us present the result from [63, Lemma 3.1] about the order of convergence of the ABM scheme. Theorem 5.10. Let F : ℝ+0 × ℝ → ℝ satisfy the assumptions of Corollary 2.34, and let there exist λ ∈ (0, 1] and CF such that 󵄨󵄨 󵄨 λ 󵄨󵄨F(t, x) − F(τ, x)󵄨󵄨󵄨 ≤ CF |t − τ| ,

t, τ ∈ [0, T],

x ∈ ℝ.

Then there exists a constant C > 0 such that ABM

δ

ℛ(h) ≤ Ch ,

where δ = min{λ, α}, Π is a uniform grid having N + 1 nodes and diameter h, (̃yk )k=0,...,N are defined by (5.19), and ABM

ℛ(h) := max 󵄨󵄨󵄨y(tk ) − ̃yk 󵄨󵄨󵄨. k=1,...,N

󵄨

󵄨

How to extend the approximated values (ỹk )k=0,...,N to a continuous function ̃y : [0, T] → ℝ? To do this, consider the linear interpolation between the values (tk , ̃yk ) for k = 0, . . . , N, i. e., for all k = 0, . . . , N − 1 and t ∈ [tk , tk+1 ], ̃y(t) =

tk+1 − t t − tk ̃yk + ̃y . h h k+1

In this case the approximation error ‖y − ̃y‖𝒞[0,T] decays as hδ . This is a direct consequence of the following result.

356 � 5 Numerical methods and simulation Proposition 5.11. Let F : ℝ+0 × ℝ → ℝ satisfy the assumptions of Corollary 2.34, and let y be the solution of (5.12). Consider [a, b] ⊂ [0, T] and two values ̃ya , ̃yb ∈ ℝ. (b−t)̃ya −(t−a)̃yb Put r(t) = for t ∈ [a, b], M(a) = |y(a) − ̃ya |, M(b) = |y(b) − ̃yb |, and b−a M = maxt∈[0,T] |F(t, y(t))|. Then 4M(b − a)α 󵄨 󵄨 max 󵄨󵄨󵄨y(t) − r(t)󵄨󵄨󵄨 ≤ M(a) + M(b) + . t∈[a,b] Γ(α + 1)

Proof. Note that for t ∈ [a, b], 󵄨󵄨 󵄨󵄨 t − a 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 b − t (y(t) − ̃ya )󵄨󵄨󵄨󵄨 + 󵄨󵄨󵄨 (y(t) − ̃yb )󵄨󵄨󵄨 󵄨󵄨y(t) − r(t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨󵄨 󵄨 󵄨 󵄨b − a 󵄨 b−a 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 ≤ 󵄨󵄨y(t) − ̃ya 󵄨󵄨 + 󵄨󵄨y(t) − ̃yb 󵄨󵄨 = I1 + I2 . To bound I1 , recall that t

1 y(t) = y0 + ∫(t − τ)α−1 F(τ, y(τ))dτ Γ(α) 0

= y(a) −

a

t

0 a

0

1 1 ∫(a − τ)α−1 F(τ, y(τ))dτ + ∫(t − τ)α−1 F(τ, y(τ))dτ Γ(α) Γ(α) t

1 1 = y(a) − ∫ ((a − τ)α−1 − (t − τ)α−1 ) F(τ, y(τ))dτ + ∫(t − τ)α−1 F(τ, y(τ))dτ, Γ(α) Γ(α) a

0

whence I1 ≤ M(a) +

a

t

0

a

1 1 󵄨 󵄨󵄨 󵄨 󵄨 󵄨 ∫ 󵄨󵄨󵄨(a − τ)α−1 − (t − τ)α−1 󵄨󵄨󵄨󵄨 󵄨󵄨󵄨F(τ, y(τ))󵄨󵄨󵄨 dτ + ∫(t − τ)α−1 󵄨󵄨󵄨F(τ, y(τ))󵄨󵄨󵄨 dτ Γ(α) 󵄨 Γ(α)

M(aα + (t − a)α − t α ) M(t − a)α ≤ M(a) + + Γ(α + 1) Γ(α + 1) 2M(b − a)α ≤ M(a) + . Γ(α + 1) The same argument works for I2 .

Among PI methods, we cannot expect to find better rates of convergence, and so the piecewise linear approximation is still a good candidate, as shown in [66, Section 4]. However, Proposition 5.11 only guarantees the convergence of at most order α. We give here some implementation of these three methods. First, let us show in Listing 5.9.3 the R implementation of the fractional AM based on the trapezoidal PI rule. Since the algorithm requires the solution of a possibly nonlinear equation, we use the R package nleqslv (see [104]). We test all the algorithms on the simple linear equation {

(C Dα0+ y) (t) = −2y(t), t > 0, y(0) = 1,

(5.20)

5.3 Product-integration methods for fractional differential equations

4t

� 357

2

and α = 1/2. Its solution is given by y(t) = 2e ∫ e−s ds = E 1 (−2√t). The results of √π 2√t 2 the AM method are shown in Figure 5.8. In Listing 5.9.4, we consider the AB model with rectangular PI method, i. e., the fractional explicit Euler method. The method does not rely on nonlinear equations. However, it is clearly less accurate than the AM method, as noticed in Figure 5.9. Finally, in Listing 5.9.5, we give a possible implementation of the described ABM method. The predictor–corrector ABM method is explicit. As we can see in Figure 5.10, it produces better results than the AB method. It is clear that all these methods require a lot of memory, because each step k needs the whole trajectory of the function up to that step. This is unavoidable due to the nonlocal nature of the fractional equations. There are, however, several methods to reduce (at least) the weight of the +∞

Figure 5.8: Comparison of the approximate solution (̃yk ) of (5.20) (α = 1/2) obtained by the fractional AM method with the exact one. Here T = 2 and h = 0.08.

Figure 5.9: Comparison of the approximate solution (̃yk ) of (5.20) (α = 1/2) obtained by the fractional AB method with the exact one. Here T = 2 and h = 0.08.

358 � 5 Numerical methods and simulation

Figure 5.10: Comparison of the approximate solution (̃yk ) of (5.20) (α = 1/2) obtained by the fractional ABM method with the exact one. Here T = 2 and h = 0.08.

memory of numerical solvers of fractional differential equations. For some details, we refer to the surveys [64, 85].

5.4 Fractional linear multistep methods The fractional AB and AM methods can be considered as particular cases of the wider class of the so-called fractional linear multistep methods. Before considering such methods, let us recall some characteristics of classical linear multistep methods. We refer to [110]. Consider a first-order differential equation {

y′ (t) = F(t, y(t)), y(0) = y0 .

t ∈ (0, T],

Let N, l ∈ ℕ such that l ≤ N, and a uniform grid Π : 0 = t0 < t1 < ⋅ ⋅ ⋅ < tN = T with diameter h. We introduce a numerical approximation (̃yj )j=0,...,N of y on the grid Π. Provided that we already know ̃yj for j = 0, . . . , l−1, we approximate y(tk ) for k = l, . . . , N with ̃yk by setting l

l

j=0

j=0

∑ aj ̃yk−j = h ∑ bj F(tk−j , ̃yk−j ),

where (aj )j=0,...,l and (bj )j=0,...,l are suitable coefficients, and a0 = 1. This approximation procedure is called a linear multistep method (LMM). If b0 = 0, then we say that the method is explicit; otherwise, it is implicit (as it requires the solution of a possibly nonlinear equation to determine ̃yk at any step k ≥ l). We can associate with each method

5.4 Fractional linear multistep methods

� 359

two polynomials l

ρ(z) = ∑ aj zl−j j=0

and

l

σ(z) = ∑ bj zl−j , j=0

which are called characteristic polynomials. Each LMM is uniquely determined by the couple of characteristic polynomials (ρ, σ). An LMM is consistent of order p ≥ 1 if ρ(z) − σ(z) log(z) = C(z − 1)p+1 + O (|z − 1|p+2 )

as z → 1,

and we say that a method is stable if all the roots of ρ(z) lie on the closed unit disk D1 (0) := {z ∈ ℂ : |z| ≤ 1} and all the ones that lie on the unit circle 𝜕D1 (0) := {z ∈ ℂ : |z| = 1} are simple. The well-known Dahlquist equivalence theorem, see [110, Theorem 2.2], tells that if the initial approximation error |̃yj − y(tj )|, j = 0, . . . , l − 1, converges to 0 as h → 0 and the method is p-consistent and stable, then it is convergent, i. e., supj=1,...,N |̃yj − y(tj )| → 0 as h → 0. Assume that the method (ρ, σ) is stable, the polynomials ρ and σ do not share any root, ρ(1) = 0, and ρ′ (1) = σ(1) (the latter conditions imply the consistency of at least order 1 of the method). It was shown in [143, Lemma 2.1] that any LMM can be rewritten as a convolution quadrature ̃y0 = y0 , { { { l k { { {̃yk = y0 + h ∑ wj,k F(tj , ̃yj ) + h ∑ δk−j F(tj , ̃yj ), j=0 j=0 {

k = 1, . . . , N,

(5.21)

where (wj,k )j=0,...,l depend on k and are called the starting quadrature. Furthermore, (δn )n≥0 are the coefficients of the Taylor series expansion in a neighborhood of 0 of the function δ(z) = σ(1/z) , which is called the generating function of the method. We can ρ(1/z) express the definition of stability and consistency in terms of the function δ and the coefficients δn , as it is noted in [146]. More precisely, the method is stable if (δn )n≥0 is bounded, whereas the method is p-consistent for p ≥ 1 if hδ (e−h ) = 1 + O (hp )

as h → 0.

Consider the fractional differential equation {

C α D0+ y(t)

y(0) = y0

= F(t, y(t)),

t ∈ (0, T],

(5.22)

for some α ∈ (0, 1) and F : ℝ+ × ℝ → ℝ satisfying the assumptions of Corollary 2.34. In [145] and [146] the author introduced fractional linear multistep methods (FLMMs) to provide a numerical approximation of such an equation. First, in [146] the author considered a convolution quadrature formula. More precisely, given a suitable function f ,

360 � 5 Numerical methods and simulation α we approximate I0+ f on [0, T]. For a fixed uniform grid Π : 0 = t0 < t1 < ⋅ ⋅ ⋅ < tN = T with diameter h, we put l

k

j=0

j=0

α (α) (α) (̃IΠ,0+ f )k = hα ∑ wj,k f (tj ) + hα ∑ δk−j f (tj ),

k = 1, . . . , N,

α where l ∈ ℕ. For k ∈ ℕ, the set (wj,k )j=0,...,l is called the starting quadrature, and

(δj(α) )j∈ℕ0 are the coefficients of the Taylor expansion around zero of a suitable analytic

function δ(α) , called the generating function of the method. Having a starting quadrature, any function δ(α) that is analytic in a neighborhood of zero can be used to identify an FLMM. We say that the FLMM generated by δ(α) is stable if δn(α) = O (nα−1 ) ,

(5.23)

and it is consistent of order p ≥ 1 if hα δ(α) (e−h ) = 1 + O (hp ) . Furthermore, we say that the method is convergent of order p ≥ 1 (according to [146, Definition 2.3]) if for all β ∈ ℂ \ {0, −1, . . . }, N

(α) β−1 j − ∑ δN−j

j=0

Γ(β) N α+β−1 = O (N α−1 ) + O (N α+β−p−1 ) . Γ(α + β)

(5.24)

In [146, Theorem 2.4] the author provides some error bounds for functions of the form f (x) = x β−1 g(x), where g is analytic, and β ∈ ℂ \ {0, −1, . . . }. In [145, Theorem 1] the same statement is proved for functions of the form f (t) = f0 + ∑j,k∈ℕ t j+kα fj,k , but only for particular functions δ(α) . Here we combine two statements. Before proceeding, let us define α

𝒢 := {γ = j + kα, j, k ∈ ℕ0 }.

(5.25)

Now we can state the following theorem, whose proof is omitted as it is very similar to that of [146, Theorem 2.4].

Theorem 5.12. Assume that the FLMM generated by δ^{(α)} is convergent of order p. Define 𝒢^α_p := {γ ∈ 𝒢^α, 0 ≤ γ ≤ p − 1}, where 𝒢^α is defined in (5.25), and ^c𝒢^α_p = 𝒢^α \ 𝒢^α_p. Set l = Card(𝒢^α_p) − 1 and consider a uniform grid Π on [0, T] of N + 1 points. For any n ∈ ℕ, let (w_{j,n})_{j=0,…,l} be the solution of the Vandermonde-type system of l + 1 equations

∑_{j=0}^{l} w_{j,n} j^γ + ∑_{j=0}^{n} δ^{(α)}_{n−j} j^γ − (Γ(γ + 1)/Γ(γ + α + 1)) n^{γ+α} = 0,   γ ∈ 𝒢^α_p.   (5.26)

Consider a function f : [0, T] → ℝ such that f(t) = ∑_{γ∈𝒢^α} f_γ t^γ, where f_γ ∈ ℝ for all γ ∈ 𝒢^α, and the series is absolutely and uniformly convergent on [0, T]. Then there exists a constant C(f, T) such that for all t ∈ (0, T] with t = nh for some n = 1, …, N,

|(Ĩ^α_{Π,0+} f)_n − (I^α_{0+} f)(t)| ≤ C(f, T) t^β h^p,   (5.27)

where β = α − p + min ^c𝒢^α_p.

Remark 5.13. Any function of the form f = ∑_{γ∈𝒢^α} f_γ t^γ is infinitely differentiable in (0, T]. In particular, (5.27) gives a uniform bound for the approximation error on any closed interval separated from 0. So, if t = nh ∈ [a, T] with a > 0, then there exists a constant C(a, T, f), depending on a, T, and f, such that

|(Ĩ^α_{Π,0+} f)_n − (I^α_{0+} f)(nh)| ≤ C(a, T, f) h^p.

How can we use such methods to solve fractional differential equations? As in the case of the product integration formulas, we use the quadrature method to approximate the integral term (I^α_{0+} F(⋅, y(⋅)))(t). However, now we need better regularity properties. The following result is a particular case of [144, Theorem 2].

Theorem 5.14. If F : [0, T] × ℝ → ℝ is analytic, then the solution y of (5.22) can be written as

y(t) = ∑_{γ∈𝒢^α} y_γ t^γ,   (5.28)

where 𝒢^α is defined in (5.25), and the series is absolutely and uniformly convergent on [0, T].

As a consequence, since F is analytic, the function F(⋅, y(⋅)) admits the representation

F(t, y(t)) = ∑_{γ∈𝒢^α} F_γ t^γ,

where the coefficients F_γ can be determined with the help of the Cauchy product formula via the Taylor coefficients of the expansion of F around (0, 0) and the series expansion (5.28). Now, for any suitable function δ^{(α)} such that its related FLMM is stable and convergent of order p ≥ 1 (i.e., such that (5.23) and (5.24) are satisfied), we can consider the approximation (ỹ_k)_{k=0,…,N} of y on a uniform grid Π on [0, T] with N + 1 points and diameter h, defined by the relation

{ ỹ_0 = y_0,
{ ỹ_k = y_0 + h^α ∑_{j=0}^{s} w^{(α)}_{j,k} F(t_j, ỹ_j) + h^α ∑_{j=0}^{k} δ^{(α)}_{k−j} F(t_j, ỹ_j),   k = 1, …, N.   (5.29)

Note that (5.29) is similar to (5.21), except that whereas in (5.21) we were constrained to choose δ as a rational function, here the only conditions on the structure of δ^{(α)} are dictated by (5.23) and (5.24). Consider the following result generalizing [145, Theorem 1] to any stable and convergent FLMM.

Theorem 5.15. Assume that the FLMM generated by δ^{(α)} is stable and convergent of order p ≥ 1. Let F : [0, T] × ℝ → ℝ be analytic and consider the solution y of (5.22). Define 𝒢^α_p = {γ ∈ 𝒢^α, γ ≤ p − 1}, where 𝒢^α is defined in (5.25), and ^c𝒢^α_p = 𝒢^α \ 𝒢^α_p. Put s = Card(𝒢^α_p) − 1, and for k ∈ ℕ, let (w_{j,k})_{j=0,…,s} be the solution of the Vandermonde-type system of s + 1 equations (5.26). Let also β = α − p + min ^c𝒢^α_p, and let (ỹ_k)_{k=0,…,N} be defined as in (5.29). Then there exists h_0 > 0 such that the following properties hold for all h ≤ h_0:
(i) If β ≥ 0, then there exists a constant C > 0 such that

max_{k=1,…,N} |y(t_k) − ỹ_k| ≤ C h^p.

(ii) If β < 0, then there exists a constant C > 0 such that for all k = 1, …, N,

|y(t_k) − ỹ_k| ≤ C t_k^β h^p.

The proof of this statement can be obtained in the same way as that of [145, Theorem 1]. Alternatively, the same statement can be proved by means of the fractional discrete Grönwall inequality given in Theorem 5.6, arguing similarly to Theorem 5.7.

Remark 5.16. It is clear that β ≥ 0 whenever α ∈ [1/2, 1). Indeed, in such a case, min ^c𝒢^α_p ≥ p − 1 + α and β ≥ α − p + p − 1 + α = 2α − 1 ≥ 0.

Notice that in Theorem 5.15 we had to assume that the method is both convergent and stable. It is well known that in the classical case, LMMs converge if and only if they are stable and consistent, as stated in the Dahlquist equivalence theorem; see also [110]. In [146, Theorem 2.5] the author proved the same statement for a special family of FLMMs.

Theorem 5.17. Assume that δ^{(α)}(z) = (r_1(z))^α r_2(z), where r_1 and r_2 are rational functions. Then the FLMM generated by δ^{(α)} is convergent of order p ≥ 1 if and only if it is consistent of order p ≥ 1 and stable.

Furthermore, a suitable class of generating functions for FLMMs, obtained as fractional powers of generating functions for classical LMMs, has been provided in [146, Theorem 2.6].

Theorem 5.18. Let ρ and σ be the characteristic polynomials of an LMM that is stable and consistent of order p ≥ 1. Assume that all the roots of σ lie in the open unit disk D_1(0) := {z ∈ ℂ : |z| < 1}. Let δ(z) = σ(1/z)/ρ(1/z), and consider δ^{(α)}(z) = (δ(z))^α. Then δ^{(α)}(z) generates an FLMM that is convergent of order p.


Remark 5.19. Note that the function δ in Theorem 5.18 is any generating function of a convergent classical LMM that satisfies δ(z) ≠ 0 for all z ∈ ℂ with |z| < 1.

A special FLMM can be obtained if we start from the classical implicit Euler method. In this case, ρ(z) = z − 1 and σ(z) = z, whence the classical generating function is δ(z) = (1 − z)^{−1}, and the method is convergent of order 1. According to Theorem 5.18, δ^{(α)}(z) = (δ(z))^α is a generating function of an FLMM that is convergent of order 1. To find the coefficients of the method, rewrite

δ^{(α)}(z) = (1 − z)^{−α} = ∑_{j=0}^{+∞} (−1)^j binom(−α, j) z^j,

where binom(−α, j) is a generalized binomial coefficient; see Section B.2. Furthermore, since the method is convergent of order 1, we get 𝒢^α_p = {0}, which implies that l = 0. Hence no starting quadrature (w_{k,n}) is needed, and we can write the formulas for the method as follows:

(Ĩ^α_{Π,0+} f)_k = h^α ∑_{j=0}^{k} (−1)^{k−j} binom(−α, k − j) f(jh) = h^α ∑_{j=0}^{k} (−1)^j binom(−α, j) f(kh − jh) =: h^α ∇^{−α}_{+,h} f(kh).

Theorem 5.12 implies that

|h^α ∇^{−α}_{+,h} f(x) − (I^α_+ f)(x)| ≤ C x^{α−1} h,

which is the analogue of Theorem 1.100 for fractional integrals. Indeed, the relation lim_{h↓0} h^α ∇^{−α}_{+,h} f(x) = (I^α_+ f)(x) was proved by Letnikov [138] (see also the surveys [221, 222]). For this reason, the method is called the Grünwald–Letnikov scheme. Let us also notice that the weights δ^{(α)}_n can be obtained from a simple recursive formula:

δ^{(α)}_0 = 1,   δ^{(α)}_n = (1 − (1 − α)/n) δ^{(α)}_{n−1},   n ≥ 1.

In Listing 5.9.6, we implement in R the Grünwald–Letnikov scheme, and we use it to solve (5.20). Since nonlinear equations are possibly involved, the function requires the package nleqslv. A comparison of the approximate and exact solutions is given in Figure 5.11.
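The recursion for the weights makes the scheme easy to implement directly. The book's Listing 5.9.6 is written in R; the following is a minimal Python sketch for the linear test problem F(t, y) = −λy (the function name gl_solve and this particular right-hand side are our own illustrative choices, not the book's), for which the implicit step is linear in ỹ_k and can be solved in closed form instead of calling a nonlinear solver.

```python
# Grünwald–Letnikov scheme for C D^α y = -lam * y, y(0) = y0, on [0, T].
# The weights are generated by delta_0 = 1, delta_n = (1 - (1-alpha)/n) delta_{n-1}.
def gl_solve(alpha, lam, y0, T, N):
    h = T / N
    ha = h ** alpha
    delta = [1.0]                     # coefficients of (1 - z)^(-alpha)
    for n in range(1, N + 1):
        delta.append((1.0 - (1.0 - alpha) / n) * delta[-1])
    y = [y0]
    for k in range(1, N + 1):
        # y_k = y0 + h^alpha * sum_{j=0}^{k} delta_{k-j} * (-lam * y_j);
        # the j = k term is implicit but linear in y_k, so solve for it directly.
        s = sum(delta[k - j] * y[j] for j in range(k))
        y.append((y0 - lam * ha * s) / (1.0 + lam * ha * delta[0]))
    return y
```

For α = 1 all weights equal 1 and the scheme essentially reduces to the implicit Euler method, so the computed solution should approximate e^{−λT}; for α ∈ (0, 1) it approximates the Mittag-Leffler decay y_0 E_α(−λt^α).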

Figure 5.11: Comparison of the approximate solution (ỹ_k) of (5.20) (α = 1/2) obtained by the Grünwald–Letnikov scheme with the exact one. Here T = 2 and h = 0.08.

5.5 Simulation of the fractional Brownian motion

Starting from this section, we will work on stochastic simulation algorithms. First, let us show how to simulate the building block of Chapter 3, i.e., the fBm B^H = {B^H_t, t ≥ 0}. If we want to simulate B^H_t for t ∈ [0, T] on a uniform grid Π : 0 = t_0 < ⋯ < t_N = T having N + 1 nodes and diameter h, then we consider the decomposition

B^H_{t_k} = ∑_{j=1}^{k} (B^H_{t_j} − B^H_{t_{j−1}}) = ∑_{j=1}^{k} I^H_{t_j,t_{j−1}},

where the increments (B^H_{t_j} − B^H_{t_{j−1}})_{j=1,…,N} = (I^H_{t_j,t_{j−1}})_{j=1,…,N} are correlated. Since the case N = 1 is trivial, we assume N ≥ 2. Here we discuss two well-known methods for handling such a correlation.

Let us consider the correlation matrix of the increments. Since the fBm is self-similar of exponent H, we can consider T = N, h = 1, and t_j = j. By Proposition 3.12 we know that for all j, k = 1, …, N,

E[I^H_{j,j−1} I^H_{k,k−1}] = (1/2)(|j − k + 1|^{2H} − 2|j − k|^{2H} + |j − k − 1|^{2H}) = Σ̃_{j−k}.

Hence the correlation matrix of the increments is given by Σ = (Σ_{j,k})_{j,k=1,…,N}, where Σ_{j,k} = Σ̃_{j−k}.

The first method is standard in the simulation of correlated normal random variables. The correlation matrix Σ is symmetric and positive definite, and hence it admits a Cholesky decomposition (see [92, Section 4.2.3]), i.e., there exists a lower triangular matrix L such that Σ = LL^t, where t stands for transposition. The coefficients of L can be determined recursively as follows:

L_{1,1} = √Σ_{1,1},
L_{j,k} = (1/L_{k,k}) (Σ_{j,k} − ∑_{ℓ=1}^{k−1} L_{j,ℓ} L_{k,ℓ}),   k = 1, …, j − 1,  j = 1, …, N,
L_{j,j} = √(Σ_{j,j} − ∑_{ℓ=1}^{j−1} L²_{j,ℓ}),   j = 1, …, N.


Consider i.i.d. standard normal random variables ξ = (ξ_j)^t_{j=1,…,N} and put X = Lξ. Then X is a multivariate normal random variable. Furthermore,

E[XX^t] = E[Lξξ^t L^t] = LL^t = Σ,

where we used the fact that E[ξξ^t] is the identity matrix. Hence X has the covariance matrix Σ and thus can be used as a simulation of the increments (I^H_{j,j−1})_{j=1,…,N}. This method is quite slow (it has a complexity of O(N³)) but can be updated step by step to handle various stopping conditions. Indeed, if we have already simulated (I^H_{j,j−1})_{j=1,…,N}, starting from the standard multivariate normal random vector ξ, and we want to simulate the (N + 1)th step, then we only need to evaluate the (N + 1)th row of L as follows:

L_{N+1,k} = (1/L_{k,k}) (Σ_{N+1,k} − ∑_{ℓ=1}^{k−1} L_{N+1,ℓ} L_{k,ℓ}),   k = 1, …, N,
L_{N+1,N+1} = √(Σ_{N+1,N+1} − ∑_{ℓ=1}^{N} L²_{N+1,ℓ}),

then simulate a standard normal random variable ξ_{N+1} independent of (ξ_j)_{j=1,…,N} and, finally, put

I^H_{N+1,N} = ∑_{j=1}^{N+1} L_{N+1,j} ξ_j.
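The recursion above can be turned directly into a simulator. The book's Listing 5.9.7 is in R; below is a Python/NumPy sketch (the function names increment_covariance and fbm_cholesky are our own) that builds Σ from Σ̃, computes its Cholesky factor, and returns the cumulative sums of the correlated increments, rescaled from the unit grid to [0, T] via self-similarity.

```python
import numpy as np

def increment_covariance(N, H):
    """Covariance of the unit-step fBm increments:
    Sigma[j, k] = 0.5*(|j-k+1|^{2H} - 2|j-k|^{2H} + |j-k-1|^{2H})."""
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :]).astype(float)
    return 0.5 * ((d + 1) ** (2 * H) - 2 * d ** (2 * H) + np.abs(d - 1) ** (2 * H))

def fbm_cholesky(N, H, T=1.0, rng=None):
    """Simulate (B^H_{t_0}, ..., B^H_{t_N}) on a uniform grid of [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    Sigma = increment_covariance(N, H)
    L = np.linalg.cholesky(Sigma)      # Sigma = L @ L.T
    incr = L @ rng.standard_normal(N)  # correlated increments on a unit grid
    # rescale unit-grid values to step h = T/N via self-similarity
    return (T / N) ** H * np.concatenate(([0.0], np.cumsum(incr)))
```

For H = 1/2 the matrix Σ is the identity and the increments are independent, which gives a quick sanity check on increment_covariance.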

The second method is called the circulant embedding method and has been independently developed in [65, 258]. For this method, we use the special structure of the covariance matrix Σ, which, due to the equality Σ_{j,k} = Σ̃_{k−j}, is a symmetric Toeplitz matrix; see [92, Section 4.7]. It is convenient, just for this part, to let the indices run as j, k = 0, …, N − 1. Recall that an n × n circulant matrix C is a Toeplitz matrix with C_{j,k} = Ĉ_ℓ whenever j − k ≡ ℓ (mod n). Hence every circulant matrix is uniquely identified by its first column C_{j,0} = Ĉ_j, j = 0, …, n − 1. It is well known that every n × n Toeplitz matrix can be embedded in a (2n − 1) × (2n − 1) circulant matrix; see [92, Section 4.7.7]. Furthermore, if n ≥ 2, every n × n symmetric Toeplitz matrix can be uniquely embedded in a 2(n − 1) × 2(n − 1) circulant matrix, as illustrated in [65]. In particular, to do this in our case, we denote by Ĉ the first column of C, and we set Ĉ_j = Σ̃_j for j = 0, …, N − 1 and Ĉ_j = Σ̃_{2N−2−j} for j = N, …, 2N − 3. Now let F be the (2N − 2) × (2N − 2) Fourier matrix, i.e., the matrix whose entries are of the form F_{j,k} = e^{2πijk/(2N−2)} for j, k = 0, …, 2N − 3. Then it is easy to check that the inverse matrix F^{−1} is given by F̄/(2N − 2), where F̄ is the complex conjugate of F. Given a column vector v of length 2N − 2, Fv is called the discrete (or finite) Fourier transform of v. We can relate the eigenvalues of a circulant matrix to the finite Fourier transform of its first column. The following simple statement is proved, for instance, in [23, Proposition 3.1].

Proposition 5.20. Let λ = (λ_0, …, λ_{2N−3}) be the eigenvalues of C. Then λ = FĈ. In particular,

C = F̄ΛF/(2N − 2),

where Λ = diag(λ).

Denote Λ = diag(λ), where λ are the eigenvalues of the circulant matrix C obtained from the covariance matrix Σ of the increments of the fBm. Then we have the following result, proved in [204].

Proposition 5.21. For all H ∈ (0, 1) and j = 0, …, 2N − 3, λ_j ≥ 0.

Due to this proposition, we can define Λ^{1/2} as the matrix whose entries are the square roots of the entries of Λ. We set D = FΛ^{1/2}/√(2N − 2), which satisfies DD̄^t = C. So, if we consider complex standard normal random variables ξ = (ξ_0, …, ξ_{2N−3})^t, i.e., such that all the real and imaginary parts are independent standard normal random variables, then ξ̂ = Dξ is a complex normal random variable whose real and imaginary parts ℜ(ξ̂) and ℑ(ξ̂) are centered multivariate normal random variables with covariance matrix C and are mutually independent. Therefore the vector X = (ℜ(ξ̂_0), …, ℜ(ξ̂_{N−1})) is a multivariate normal random variable with covariance matrix Σ, and it can be used for the simulation of (I^H_{j,j−1})_{j=1,…,N}.

Instead of evaluating the matrix D, we can calculate ξ̂ by means of a finite Fourier transform. Indeed, if we set η = Λ^{1/2}ξ/√(2N − 2), then we get

ξ̂ = Dξ = FΛ^{1/2}ξ/√(2N − 2) = Fη.

To summarize: it is necessary to produce the vector Ĉ starting from Σ and then to calculate λ = FĈ. After that, it is necessary to simulate a (2N − 2)-dimensional standard complex normal random variable ξ (i.e., 4N − 4 independent standard normal random variables) and then to calculate the vector η with components η_j = λ_j^{1/2} ξ_j/√(2N − 2). Finally, it is necessary to calculate ξ̂ = Fη and to extract the real (or the imaginary) parts of the first N components. If the two discrete Fourier transforms are computed via the fast Fourier transform algorithm, then the complexity is reduced to O(N log(N)), which is much faster than the previous algorithm. The only disadvantage is that we need the whole covariance matrix of the increments to embed it in a circulant matrix, and thus the simulation cannot be dynamically updated. As a consequence, with this method we cannot simulate under any stopping condition, but only on a fixed time interval [0, T].
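The steps above condense into a few lines of NumPy (the book's Listing 5.9.8 is the R counterpart; the function name fbm_circulant is ours). Note that numpy.fft uses the opposite sign convention from the matrix F above; since Ĉ is real and symmetric under index reflection, this only conjugates intermediate quantities and does not affect the distribution of the extracted real parts.

```python
import numpy as np

def fbm_circulant(N, H, T=1.0, rng=None):
    """Simulate fBm on a uniform (N+1)-point grid of [0, T] by circulant embedding."""
    rng = np.random.default_rng() if rng is None else rng
    M = 2 * N - 2
    k = np.arange(N, dtype=float)
    tilde = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    # first column of the circulant embedding of the Toeplitz covariance
    c = np.concatenate([tilde, tilde[1:-1][::-1]])
    lam = np.fft.fft(c).real           # eigenvalues; nonnegative by Proposition 5.21
    lam = np.clip(lam, 0.0, None)      # clip tiny negative round-off
    xi = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    eta = np.sqrt(lam) * xi / np.sqrt(M)
    incr = np.fft.fft(eta).real[:N]    # N correlated unit-step increments
    return (T / N) ** H * np.concatenate(([0.0], np.cumsum(incr)))
```

A useful deterministic check is that the circulant matrix built from the first column c really has the Toeplitz covariance Σ as its top-left N × N block, and that its eigenvalues (the FFT of c) are nonnegative.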


Figure 5.12: Simulation of the trajectory of an fBm by means of the Cholesky decomposition method. Here T = 1, H = 0.75, and h = 0.002.

Figure 5.13: Simulation of the trajectory of an fBm by means of the Cholesky decomposition method. Here T = 1, H = 0.25, and h = 0.002.

Listings 5.9.7 and 5.9.8 provide the implementation of an R function to simulate the fBm on a fixed time interval by means of the Cholesky decomposition and circulant embedding methods, respectively. The simulated trajectories are presented in Figures 5.12, 5.13, 5.14, and 5.15. In the package somebm [106] the authors provide several simulation methods for Brownian motion and related processes. Among them, the function fbm simulates fBm using the circulant embedding method.


Figure 5.14: Simulation of the trajectory of an fBm by means of the circulant embedding method. Here T = 1, H = 0.75, and h = 0.002.

Figure 5.15: Simulation of the trajectory of an fBm by means of the circulant embedding method. Here T = 1, H = 0.25, and h = 0.002.

5.6 Euler–Maruyama schemes for stochastic differential equations driven by an additive fractional Brownian noise

Now let us investigate how to simulate solutions X = {X_t, t ≥ 0} of SDEs of the form

X_t = X_0 + ∫_0^t F(s, X_s) ds + B^H_t,   t ∈ [0, T],   (5.30)

where F : ℝ^+_0 × ℝ → ℝ is continuous and satisfies

|F(t, x) − F(t, y)| ≤ L|x − y|   (5.31)

for some L > 0. The existence and pathwise uniqueness of the strong solution of (5.30) with a.s. continuous trajectories under this assumption can be shown by means of standard techniques. The simplest approximation idea consists of approximating the integral term by means of a quadrature method, as, for instance, the Euler scheme. Let us first consider the explicit Euler scheme. Let Π be a uniform grid on [0, T] with N + 1 nodes and diameter h. Define

X̃_0 = X_0,
X̃_k = X_0 + h ∑_{j=0}^{k−1} F(jh, X̃_j) + B^H_{kh},   k = 1, …, N.   (5.32)

The second equation can be written in an alternative form: for k = 1, …, N,

X̃_k = X_0 + h ∑_{j=0}^{k−1} F(jh, X̃_j) + B^H_{kh}
    = X_0 + h ∑_{j=0}^{k−2} F(jh, X̃_j) + hF((k − 1)h, X̃_{k−1}) + B^H_{(k−1)h} + B^H_{kh} − B^H_{(k−1)h}
    = X̃_{k−1} + hF((k − 1)h, X̃_{k−1}) + h^H I^H_{k,k−1}.

This is the explicit Euler–Maruyama method. Now we want to study whether this simulation method converges to the original solution X as h ↓ 0. More precisely, we say that a simulation method (X̃_k)_{k=0,…,N} is strongly convergent of order β > 0 if there exist two constants C, h_0 > 0, depending on the characteristics of the SDE, such that for h ≤ h_0,

sup_{k=0,…,N} E[|X_{kh} − X̃_k|] ≤ Ch^β.

To prove that the method is strongly convergent, we need some preliminary estimates.

Lemma 5.22. Let X_0 ∈ L¹(Ω), T > 0, and M_0 := ‖F(⋅, 0)‖_{L¹[0,T]}. Then

sup_{t∈[0,T]} E[|X_t|] ≤ (E[|X_0|] + M_0 + √(2/π) T^H) e^{LT}.

Proof. Fix ω ∈ Ω such that t ∈ [0, T] ↦ X_t(ω) is continuous and note that, for all t ∈ [0, T],

|X_t(ω)| ≤ |X_0(ω)| + ∫_0^t |F(τ, X_τ(ω))| dτ + |B^H_t(ω)|
         ≤ |X_0(ω)| + ∫_0^t |F(τ, X_τ(ω)) − F(τ, 0)| dτ + ∫_0^t |F(τ, 0)| dτ + |B^H_t(ω)|
         ≤ |X_0(ω)| + L ∫_0^t |X_τ(ω)| dτ + M_0 + |B^H_t(ω)|.

Since t ∈ [0, T] ↦ X_t(ω) ∈ ℝ is continuous, we can use the Grönwall inequality (see [196, Theorem 1.3.2]) to get

|X_t(ω)| ≤ (|X_0(ω)| + M_0 + |B^H_t(ω)|) + L ∫_0^t (|X_0(ω)| + M_0 + |B^H_s(ω)|) e^{L(t−s)} ds.

Taking the expectation and recalling that

E[|B^H_t|] = √(2/π) t^H,

we get the upper bound

E[|X_t|] ≤ (E[|X_0|] + M_0 + √(2/π) t^H) + L ∫_0^t (E[|X_0|] + M_0 + √(2/π) s^H) e^{L(t−s)} ds
        ≤ (E[|X_0|] + M_0 + √(2/π) t^H) e^{Lt}.

Taking the supremum, we conclude the proof.

Lemma 5.23. Let X_0 ∈ L¹(Ω) and T > 0. Then there exists L_X, depending on F, E[|X_0|], T, and H, such that for all t, s ∈ [0, T],

E[|X_t − X_s|] ≤ L_X |t − s|^H.

Proof. We can assume, without loss of generality, that t ≥ s ≥ 0. Note that

X_t − X_s = ∫_s^t F(τ, X_τ) dτ + B^H_t − B^H_s,

and therefore, taking the absolute value and the expectation, we get

E[|X_t − X_s|] ≤ ∫_s^t E[|F(τ, X_τ)|] dτ + E[|B^H_t − B^H_s|]
              ≤ ∫_s^t E[|F(τ, X_τ) − F(τ, 0)|] dτ + M_0 + E[|B^H_{t−s}|]
              ≤ L ∫_s^t E[|X_τ|] dτ + M_0 + √(2/π) |t − s|^H
              ≤ LC_{T,H} |t − s| + M_0 + √(2/π) |t − s|^H,

where we used Lemma 5.22 and put C_{T,H} = (E[|X_0|] + M_0 + √(2/π) T^H) e^{LT}. Hence

E[|X_t − X_s|] ≤ (LC_{T,H} T^{1−H} + M_0 + √(2/π)) |t − s|^H.

We recall the following discrete Grönwall inequality, which is a particular case of [242].

Lemma 5.24. Let (a_n)_{n=0,…,N} be a sequence of nonnegative real numbers such that for all n = 1, …, N,

a_n ≤ C_0 + ∑_{j=0}^{n−1} b_j a_j,

where C_0 ≥ 0 and b_j ≥ 0 for j = 0, …, N. Then

a_n ≤ C_0 exp(∑_{j=0}^{n−1} b_j),

where ∑_{j=0}^{−1} ≡ 0.

Now we are ready to prove the strong convergence of the method.

Theorem 5.25. Assume that F : ℝ^+_0 × ℝ → ℝ satisfies (5.31) and for all t, s ∈ [0, T] and x ∈ ℝ,

|F(t, x) − F(s, x)| ≤ L_F |t − s|^λ

for some λ ∈ (0, 1] and L_F > 0. Then the explicit Euler–Maruyama method is strongly convergent of order δ = min{λ, H}.

Proof. First, notice that

X_{kh} = X_0 + ∫_0^{kh} F(s, X_s) ds + B^H_{kh}
       = X_0 + h ∑_{j=0}^{k−1} F(jh, X_{jh}) + ∫_0^{kh} F(s, X_s) ds − h ∑_{j=0}^{k−1} F(jh, X_{jh}) + B^H_{kh}.

Then for the approximation (X̃_k)_{k=0,…,N} obtained by means of (5.32), we have

|X_{kh} − X̃_k| ≤ h ∑_{j=0}^{k−1} |F(jh, X_{jh}) − F(jh, X̃_j)| + |∫_0^{kh} F(s, X_s) ds − h ∑_{j=0}^{k−1} F(jh, X_{jh})|.

Taking the expectations, we get

E[|X_{kh} − X̃_k|] ≤ Lh ∑_{j=0}^{k−1} E[|X_{jh} − X̃_j|] + E[I_1],

where

I_1 = |∫_0^{kh} F(s, X_s) ds − h ∑_{j=0}^{k−1} F(jh, X_{jh})|
    ≤ ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |F(s, X_s) − F(jh, X_{jh})| ds
    ≤ ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |F(s, X_s) − F(s, X_{jh})| ds + ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |F(s, X_{jh}) − F(jh, X_{jh})| ds
    ≤ L ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |X_s − X_{jh}| ds + L_F ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |s − jh|^λ ds
    ≤ L ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} |X_s − X_{jh}| ds + L_F T h^λ.

Taking the expectations, we obtain that

E[I_1] ≤ L ∑_{j=0}^{k−1} ∫_{jh}^{(j+1)h} E[|X_s − X_{jh}|] ds + L_F T h^λ ≤ LL_X h^H T + L_F T h^λ ≤ L_{X,F} h^δ,

where L_{X,F} > 0 is a suitable constant. Then

E[|X_{kh} − X̃_k|] ≤ Lh ∑_{j=0}^{k−1} E[|X_{jh} − X̃_j|] + L_{X,F} h^δ.


Lemma 5.24 implies that

E[|X_{kh} − X̃_k|] ≤ L_{X,F} h^δ exp(Lh(k − 1)) ≤ L_{X,F} h^δ e^{LT}.

A similar argument holds also for the implicit Euler–Maruyama scheme given by

X̃_0 = X_0,
X̃_k = X̃_{k−1} + hF(kh, X̃_k) + h^H I^H_{k,k−1},   k = 1, …, N,

where, for each step k = 1, …, N, we have to solve a possibly nonlinear equation with unknown X̃_k. Similarly to Theorem 5.25, we get the following result.

Theorem 5.26. Assume that F : ℝ^+_0 × ℝ → ℝ satisfies (5.31) and for all t, s ∈ [0, T],

|F(t, x) − F(s, x)| ≤ L_F |t − s|^λ

for some λ ∈ (0, 1] and L_F > 0. Then the implicit Euler–Maruyama method is strongly convergent of order δ = min{λ, H}.

Remark 5.27. If we compare this with [23, Theorem 3.1], in which λ = H = 1/2, we observe that we are working under weaker assumptions. This is due to the fact that we only consider the additive noise case. Furthermore, although we used uniform grids, it was proved in [184] that nonuniform grids can be used to provide optimal approximations.

Before proceeding, let us recall that in simulations we are not satisfied with just the nodes, but we are also interested in connecting those nodes. Hence let us explore how linear interpolation interacts with the approximation error.

Theorem 5.28. Assume that F : ℝ^+_0 × ℝ → ℝ satisfies (5.31) and for all t, s ∈ [0, T],

|F(t, x) − F(s, x)| ≤ L_F |t − s|^λ

for some λ ∈ (0, 1] and L_F > 0. Let (X̃_k)_{k=0,…,N} be an approximation of X (by means of either the explicit or implicit Euler–Maruyama method) and define, for t ∈ [t_k, t_{k+1}], k = 0, …, N − 1,

X̃^L_t := ((t − t_k)/h) X̃_{k+1} + ((t_{k+1} − t)/h) X̃_k.

Then there exist two constants C, h_0 > 0 such that for all h ≤ h_0,

sup_{t∈[0,T]} E[|X_t − X̃^L_t|] ≤ Ch^δ,

where δ = min{λ, H}.

Proof. Let h_0 > 0 be defined as in Theorem 5.25 or 5.26, depending on whether we are using the explicit or implicit Euler–Maruyama scheme, and assume that h_0 ≤ 1 without loss of generality. Let Π be a uniform grid with diameter h ≤ h_0 and N + 1 points, and let t ∈ (t_k, t_{k+1}). Since X̃^L_t is a convex combination of X̃_k and X̃_{k+1}, we observe that

|X_t − X̃^L_t| ≤ |X_t − X̃_k| + |X_t − X̃_{k+1}|
             ≤ |X_t − X_{t_k}| + |X_{t_k} − X̃_k| + |X_t − X_{t_{k+1}}| + |X_{t_{k+1}} − X̃_{k+1}|.

Taking the expectations, we have

E[|X_t − X̃^L_t|] ≤ 2L_X h^H + 2C_0 h^δ,

where C_0 > 0 is the strong convergence constant provided in Theorem 5.25 or 5.26, and L_X is defined in Lemma 5.23. Since h ≤ h_0 ≤ 1, we have

E[|X_t − X̃^L_t|] ≤ Ch^δ

for some constant C > C_0 > 0. The previous inequality holds for all t ∈ ⋃_{k=0}^{N−1}(t_k, t_{k+1}), while for t = t_0, …, t_N it follows from Theorem 5.25 or 5.26. Hence, taking the supremum, we have the desired result.

We want to use a numerical scheme also to provide a numerical approximation in the case F(t, x) = x^{−α} + C + x for some α > 1/H − 1. This, however, relies on some stochastic calculus with respect to the fBm (in particular, the Itô formula), which is out of the scope of this book. Hence let us only state the following result, given in [267, Theorem 4.1] and [124, Theorem 3].

Theorem 5.29. Let H > 1/2. Consider the equation

X_t = X_0 + (1 − β) ∫_0^t f(X_s) ds + (1 − β)σB^H_t,

where β ∈ [0, 1], and f : ℝ^+ → ℝ satisfies the following assumptions:
(i) f is continuously differentiable in ℝ^+;
(ii) there exists a constant K_1 ≥ 0 such that f′(x) ≤ K_1 for all x ∈ (0, ∞);
(iii) there exist two constants K_2, α ≥ 0 such that f(x) ≥ K_2 x^{−1−α} for all x > 0;
(iv) there exists h_0 > 0 such that for all 0 ≤ h ≤ h_0, the function F(x) = x − (1 − β)hf(x) satisfies lim_{x→0} F(x) = −∞ and lim_{x→+∞} F(x) = +∞.

Then the implicit Euler–Maruyama approximation (X̃_k)_{k=0,…,N} of X on a uniform grid Π of diameter h with N + 1 points satisfies X̃_k > 0 for all k = 0, …, N, and for each γ ∈ (1/2, H), there exists an a.s. finite random variable C_γ such that a.s.

sup_{k=0,…,N} |X_{t_k} − X̃_k| ≤ C_γ h^γ.


A more general result for H ∈ (0, 1) can be obtained as a particular case of [58, Theorem 4.8]. In Listing 5.9.9, we propose an implementation of a function that simulates the trajectories of the fOU process (solution of (3.44)) in a fixed time interval. The results are shown in Figures 5.16 and 5.17.

Moreover, Theorem 5.29 tells us that (3.96) can be approximated numerically using the implicit Euler–Maruyama method if H > 1/2. For simplicity, put α = 1 and a = 0, and let (Ỹ_k)_{k=0,…,N} be the approximate solution. Having Ỹ_0, at each step k ≥ 1, we need to find the positive solution of the equation

Ỹ_k = Ỹ_{k−1} + hε/Ỹ_k − bhỸ_k + σh^H I^H_{k,k−1},

Figure 5.16: Simulation of the trajectory of an fOU process by means of the explicit Euler method. Here T = 1, h = 0.002, and the parameters of (3.44) are given by H = 0.75, λ = 2, σ = 1, and ξ = 1.

Figure 5.17: Simulation of the trajectory of an fOU process by means of the explicit Euler method. Here T = 1, h = 0.002, and the parameters of (3.44) are given by H = 0.25, λ = 2, σ = 1, and ξ = 1.


Figure 5.18: Simulation of the trajectory of an approximate reflected fOU process (solution of (3.96)) by means of the implicit Euler method. Here T = 1, h = 0.002, and the parameters of (3.96) are given by H = 0.75, a = 0, b = 2, σ = 1, ε = 10−10 , and Y0H = 1.

that is,

Ỹ_k = (Ỹ_{k−1} + σh^H I^H_{k,k−1} + √((Ỹ_{k−1} + σh^H I^H_{k,k−1})² + 4hε(1 + bh))) / (2(1 + bh)).   (5.33)

If we choose a small value of ε, then (5.33) provides a good approximation of the reflected fOU process according to Theorem 3.79. Moreover, let us observe that if we choose a small value of Ỹ_0, then we obtain a good approximation of a reflected fOU process with zero initial data. In such a case, setting b = 0 and σ = 1, we get a good approximation of a reflected fBm process. This is implemented in Listing 5.9.10 and represented in Figure 5.18; similarly to the fOU process, we use λ in place of b. Recall that there is an R package that provides all the tools for simulation of and inference on SDEs, possibly driven by the fBm. This package is called YUIMA [39] and is widely described in [109].
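As an illustration of the two schemes of this section, here is a small Python sketch (the book's Listings 5.9.9 and 5.9.10 are in R; the function names and the use of pre-generated fBm increments are our own choices). The explicit step follows the recursion X̃_k = X̃_{k−1} + hF((k−1)h, X̃_{k−1}) + h^H I^H_{k,k−1}, and implicit_step_533 is the closed-form solution (5.33) of the quadratic implicit equation.

```python
import math

def euler_maruyama_explicit(F, x0, T, N, H, incr):
    """Explicit Euler–Maruyama for an SDE with additive fBm noise.
    incr must contain the N unit-grid fBm increments I^H_{k,k-1}."""
    h = T / N
    x = [x0]
    for k in range(1, N + 1):
        x.append(x[-1] + h * F((k - 1) * h, x[-1]) + h ** H * incr[k - 1])
    return x

def implicit_step_533(y_prev, h, H, eps, b, sigma, incr_k):
    """One step of the implicit scheme (5.33): the positive root of
    (1 + b*h) y^2 - (y_prev + sigma*h^H*I) y - h*eps = 0."""
    a = y_prev + sigma * h ** H * incr_k
    return (a + math.sqrt(a * a + 4 * h * eps * (1 + b * h))) / (2 * (1 + b * h))
```

A convenient sanity check: with the noise switched off (all increments zero), the explicit scheme applied to the fOU drift F(t, x) = −λx reduces to the deterministic Euler method and should approximate e^{−λT}.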

5.7 Chambers–Mallow–Stuck algorithm

Now let us consider stable distributions and stable processes. In [47] the authors provided a sort of generalized Box–Muller algorithm to deal with stable distributions. Namely, they use an alternative form of the characteristic function of a random variable X ∼ S(α, β, γ, δ). We use the Chambers–Mallow–Stuck algorithm to simulate X only in the case β = 1, γ = 1, and δ = 0. Once this is done, we can use random variables with S(α, 1, 1, 0) distribution as a building block to simulate any stable random variable by exploiting Theorems 4.4 and 4.8. Let us state the following theorem, which is a particular case of [269, Theorem C.3].


Theorem 5.30. Let X ∼ S(α, 1, 1, 0). Define K(α) = α − 1 + sign(1 − α). Then the characteristic function of X is given by

log(E[e^{izX}]) = −|z|^α exp(−i (π/2) K(α) sign(z)),   α ≠ 1,
log(E[e^{izX}]) = −|z| [1 + i (2/π) sign(z) log(|z|)],   α = 1.

With this result in mind, in [47] the authors prove the following statement.

Proposition 5.31. Let U be a uniform random variable on (−π/2, π/2), and let W be an exponential random variable with parameter 1, independent of U. Let α ∈ (0, 2]. Define Φ_0(α) = −πK(α)/(2α), where K(α) = α − 1 + sign(1 − α). Put

X = (sin(α(U − Φ_0(α)))/(cos(U))^{1/α}) (cos(U − α(U − Φ_0(α)))/W)^{(1−α)/α},   α ≠ 1,
X = (2/π) ((π/2 + U) tan(U) − log(πW cos(U)/(π + 2U))),   α = 1.

Then X ∼ S(α, 1, 1, 0).

Remark 5.32. Note that for α = 2, the previous formula coincides with the Box–Muller one for normal distributions.

Once this is done, we can provide an algorithm for the simulation of any random variable X ∼ S(α, β, γ, δ). First, consider the case X ∼ S(α, β, 1, 0). To simulate X, simulate two independent random variables Y_1, Y_2 ∼ S(α, 1, 1, 0). Theorem 4.8 implies that

X =^d ((1 + β)/2)^{1/α} Y_1 − ((1 − β)/2)^{1/α} Y_2 + 1_{{1}}(α) (((1 + β)/π) log((1 + β)/2) − ((1 − β)/π) log((1 − β)/2)).

Next, we simulate X ∼ S(α, β, γ, 0). To do this, first simulate X_1 ∼ S(α, β, 1, 0). Then Theorem 4.4, (ii), (iii), and (iv), implies that

X =^d γX_1 + 1_{{1}}(α) (2βγ/π) log(γ).

Finally, to simulate any X ∼ S(α, β, γ, δ), first simulate X_1 ∼ S(α, β, γ, 0). Then according to Theorem 4.4(iv),

X =^d X_1 + δ.

Since we now understand how to simulate any stable random variable, let us use these methods to simulate any α-stable Lévy motion. Let X = {X_t, t ≥ 0} be an α-stable Lévy motion with parameters α ∈ (0, 2], β ∈ [−1, 1], and γ > 0. Assume that we want to simulate a trajectory in the interval [0, T]. To do this, consider a uniform grid Π of [0, T]

with N + 1 points and diameter h. Then for all k = 1, …, N,

X_{t_k} = ∑_{j=1}^{k} (X_{t_j} − X_{t_{j−1}}),

where the random variables (X_{t_j} − X_{t_{j−1}})_{j=1,…,N} are mutually independent. Furthermore, it follows from the definition of the α-stable Lévy motion that

X_{t_j} − X_{t_{j−1}} ∼ S(α, β, γh^{1/α}, 0).

Hence we can simulate a family of i.i.d. random variables (I_j)_{j=1,…,N} distributed as S(α, β, γh^{1/α}, 0) using the Chambers–Mallow–Stuck algorithm and note that for all k = 1, …, N,

X_{t_k} =^d ∑_{j=1}^{k} I_j.

Remark 5.33. For α = 2, this algorithm is the usual simulation procedure for a standard Brownian motion. If α ≠ 1, then without loss of generality we can assume that T = N and h = 1 and multiply the final result by h^{1/α} due to the self-similarity of the process. The same holds if α = 1 and β = 0.

In Listing 5.9.11, we propose an implementation of the Chambers–Mallow–Stuck algorithm. Simulation algorithms for the stable distributions are also given in the stabledist package [259], which, in addition, provides the numerical density function dstable. It is important to notice, however, that our parameterization coincides with the parameterization pm=1 in the stabledist package. We can also use Listing 5.9.11 to simulate some α-stable Lévy motions, as, for instance, symmetric ones, in Listing 5.9.12, or α-stable subordinators, in Listing 5.9.13. Some simulated trajectories are shown in Figures 5.19, 5.20, and 5.21.

5.8 Simulation of the delayed processes

� 379

Figure 5.19: Simulation of the trajectory of a symmetric α-stable Lévy motion. Here T = 1, h = 0.002, and α = 0.9.

Figure 5.20: Simulation of the trajectory of a symmetric α-stable Lévy motion. Here T = 1, h = 0.002, and α = 1.5.

terested in the simulation of the graph 𝒢 = {(t, Xαt ) : t ∈ [0, T]}. However, we could simulate the α-stable subordinator S α until it surpasses the value T (i. e., right after the time LαT ). In practice, we only simulate LαT in place of the full trajectory of Lα on [0, T]. Now assume that we have simulated S α for 0 = t0 < t1 < ⋅ ⋅ ⋅ < tN , where tN ≥ LαT , and simulate X at the same nodes. Then we can consider, as an approximation of 𝒢 , the set 𝒢̃ = {(Stαk , Xtk ), k = 0, . . . , N}. This is due to the fact that actually

$$\mathcal{G} = \{(S^\alpha_t, X_t) : t \in [0, L^\alpha_T]\} \cup \Big(\bigcup_{t \in \mathrm{Disc}(S^\alpha;[0,L^\alpha_T])} \{(y, X_t) : y \in [S^\alpha_{t-}, S^\alpha_t]\}\Big),$$

where $\mathrm{Disc}(S^\alpha;[0,L^\alpha_T])$ is the countable set of discontinuities of $S^\alpha$ over the set $[0, L^\alpha_T]$. In practice, the graph of $X^\alpha_t$ on [0, T] is exactly the curve $(S^\alpha_t, X_t)$ for $t \in [0, L^\alpha_T]$ once we connect the discontinuities by horizontal lines. This was established during the proof of [159, Theorem 2.2].
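The graph-simulation strategy just described can be sketched in a few lines of Python. This is an illustration, not the book's Listing 5.9.14: we take X to be a standard Brownian motion, and for the one-sided stable increments we use Kanter's representation (an implementation choice of ours, with Laplace transform $e^{-s^\alpha}$), together with the self-similarity scaling $h^{1/\alpha}$.

```python
import numpy as np

def one_sided_stable(alpha, size, rng):
    """Kanter's representation of the standard one-sided alpha-stable law
    (0 < alpha < 1), with Laplace transform exp(-s**alpha)."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    a = (np.sin(alpha * U) / np.sin(U)) ** (alpha / (1 - alpha)) \
        * np.sin((1 - alpha) * U) / np.sin(U)
    return (a / W) ** ((1 - alpha) / alpha)

def delayed_graph(alpha, T, h, rng):
    """Approximate graph of X^alpha_t = X_{L^alpha_t} on [0, T] for X a
    standard Brownian motion: simulate S^alpha on nodes t_k = k*h until it
    surpasses T, and return the nodes (S^alpha_{t_k}, X_{t_k})."""
    S = [0.0]
    while S[-1] <= T:
        S.append(S[-1] + h ** (1 / alpha) * one_sided_stable(alpha, 1, rng)[0])
    S = np.array(S)
    X = np.concatenate(([0.0],
                        np.cumsum(np.sqrt(h) * rng.standard_normal(len(S) - 1))))
    return S, X  # connect the jumps of S by horizontal segments when plotting

rng = np.random.default_rng(1)
S, X = delayed_graph(0.75, 1.0, 0.001, rng)
```

Plotting X against S (with horizontal segments across the jumps of S) yields the approximate trajectory of the delayed process, in the spirit of the decomposition of 𝒢 above.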


Figure 5.21: Simulation of the trajectory of an α-stable subordinator. Here T = 1, h = 0.002, and α = 0.75.

Although this procedure overcomes all possible difficulties of simulating continuous delayed processes, the problem of simulating delayed CTMCs remains unsolved. Hence let us present a method based on the representation of the delayed CTMCs we obtained in Section 4.10. This is commonly known as discrete-event system simulation, see [23, Chapter II, Section 6], and has been discussed in a more general setting in [91]. In particular, it coincides with the well-known Doob–Gillespie algorithm introduced in [89] for CTMCs; see also [90]. We will not refer to the semi-Markov property, but it is satisfied by the delayed CTMCs and the delayed Brownian motion. We refer the interested reader, for example, to [164, 103, 52] and references therein. Before giving the algorithm, we have to answer a further question: how can we simulate a Mittag-Leffler random variable? We present the result from [41, Algorithm 1], which was also proved in [12, Corollary 7.3].

Proposition 5.34. Consider α ∈ (0, 1) and λ > 0. Let $S \sim S(\alpha, 1, \gamma, 0)$, where $\gamma = (\cos(\frac{\pi\alpha}{2}))^{1/\alpha}$, and let Y be an exponential random variable with parameter λ, independent of S. Define $X = Y^{1/\alpha} S$. Then X is a Mittag-Leffler random variable of fractional order α with parameter λ.

Proof. Let $S^\alpha$ be an α-stable subordinator independent of Y. Then we know that $S^\alpha_1 \sim S(\alpha, 1, \gamma, 0)$, and it is still independent of Y. Thus we can assume, without loss of generality, that $S = S^\alpha_1$. Hence for all t > 0,

$$P(X > t) = P(Y^{1/\alpha} S^\alpha_1 > t) = E[P(Y^{1/\alpha} S^\alpha_1 > t \mid Y)]. \tag{5.34}$$

Furthermore, $S^\alpha$ and Y are independent, and $S^\alpha$ is $\alpha^{-1}$-self-similar. Therefore

$$P(Y^{1/\alpha} S^\alpha_1 > t \mid Y) = P(S^\alpha_Y > t \mid Y).$$


Plugging the previous relation into (5.34), we get

$$P(X > t) = P(S^\alpha_Y > t) = P(Y \ge L^\alpha_t),$$

where $L^\alpha$ is the inverse of the subordinator $S^\alpha$. Also, $L^\alpha_t$ is independent of Y, and therefore

$$P(X > t) = E[P(Y \ge L^\alpha_t \mid L^\alpha_t)], \tag{5.35}$$

and

$$P(Y \ge L^\alpha_t \mid L^\alpha_t) = e^{-\lambda L^\alpha_t}.$$

We can apply this equality to (5.35) and obtain that

$$P(X > t) = E[e^{-\lambda L^\alpha_t}] = E_\alpha(-\lambda t^\alpha),$$

where we also used Proposition 4.32.

Now we know how to simulate Mittag-Leffler distributions, and we can simulate delayed CTMCs. Let X = {Xt , t ≥ 0} be a time-homogeneous CTMC with state space E = {0, . . . , N} or E = ℕ0 and generator 𝒬. Let $L^\alpha$ be an inverse α-stable subordinator independent of X, and let $X^\alpha = \{X^\alpha_t, t \ge 0\}$ be the delayed process $X^\alpha_t := X_{L^\alpha_t}$. Let $J^\alpha = (J^\alpha_n)_{n \ge 0}$ be the sequence of jump times of $X^\alpha$, and let χ be the jump chain of X (and then of $X^\alpha$ by Corollary 4.85). Then the discussion in Section 4.10 shows that we only have to simulate $(J^\alpha, \chi)$ to get the full process $X^\alpha$. This is done in the following way:
(i) Initialize $J^\alpha_0 = 0$ and $\chi_0$ according to the initial distribution of $X^\alpha$ (or X, since $X^\alpha_0 = X_0$).
(ii) Now assume that we have already simulated $J^\alpha_{n-1}$ and $\chi_{n-1}$:
(ii.1) Simulate $\chi_n$ as a discrete random variable with distribution
$$P(\chi_n = j \mid \chi_{n-1}) = \frac{q_{j,\chi_{n-1}}}{-q_{\chi_{n-1},\chi_{n-1}}}, \quad j \in E.$$
(ii.2) Simulate the sojourn time $\gamma^\alpha_n$ according to
$$P(\gamma^\alpha_n > t \mid \chi_{n-1}) = E_\alpha(q_{\chi_{n-1},\chi_{n-1}} t^\alpha).$$
(ii.3) Put $J^\alpha_n = J^\alpha_{n-1} + \gamma^\alpha_n$.

Here we used the fact that the jump chain χ is still a Markov chain and coincides with the jump chain of X (see Corollary 4.85). Furthermore, (ii.2) is a direct consequence of Proposition 4.88, and Mittag-Leffler random variables can be simulated using Proposition 5.34.
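Proposition 5.34 and the scheme (i)–(ii) can be combined into a short fractional Doob–Gillespie sketch. The following Python fragment is illustrative only: for the stable factor S we use Kanter's representation (our choice of normalization, so that $E[e^{-sS}] = e^{-s^\alpha}$, under which $P(X > t) = E_\alpha(-\lambda t^\alpha)$ indeed holds); the function names and parameters are ours.

```python
import numpy as np

def one_sided_stable(alpha, size, rng):
    # Kanter's representation of the standard one-sided alpha-stable law,
    # Laplace transform exp(-s**alpha), for 0 < alpha < 1
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    a = (np.sin(alpha * U) / np.sin(U)) ** (alpha / (1 - alpha)) \
        * np.sin((1 - alpha) * U) / np.sin(U)
    return (a / W) ** ((1 - alpha) / alpha)

def mittag_leffler(alpha, lam, size, rng):
    # Proposition 5.34: X = Y**(1/alpha) * S with Y ~ Exp(lam) satisfies
    # P(X > t) = E_alpha(-lam * t**alpha)
    Y = rng.exponential(1.0 / lam, size)
    return Y ** (1 / alpha) * one_sided_stable(alpha, size, rng)

def delayed_ctmc(Q, x0, alpha, T, rng):
    # fractional Doob-Gillespie: jump chain of the base CTMC, with
    # Mittag-Leffler sojourn times of rate -Q[i, i] in state i
    times, states = [0.0], [x0]
    while times[-1] < T:
        i = states[-1]
        p = np.maximum(Q[i], 0.0) / (-Q[i, i])  # normalized off-diagonal row
        p[i] = 0.0
        states.append(rng.choice(len(p), p=p / p.sum()))
        times.append(times[-1] + mittag_leffler(alpha, -Q[i, i], 1, rng)[0])
    return np.array(times), np.array(states)

rng = np.random.default_rng(7)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])  # two-state chain, illustrative rates
t, s = delayed_ctmc(Q, 0, alpha=0.8, T=5.0, rng=rng)
```

Note that, because of the heavy tail of the Mittag-Leffler distribution, the number of jumps on [0, T] is typically much smaller than for the corresponding Markovian chain.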

The same algorithm can be used to simulate compound fractional Poisson processes. Indeed, let $Y = (Y_n)_{n \ge 1}$ be a sequence of random variables that we know how to simulate. Let also $N^\alpha = \{N^\alpha_t, t \ge 0\}$ be a fractional Poisson process of fractional order α with parameter λ > 0 and independent of Y. Consider the compound fractional Poisson process $Y^\alpha_t = \sum_{k=1}^{N^\alpha_t} Y_k$. To simulate it, again, it is only necessary to simulate $(J^\alpha, \chi)$, where $J^\alpha$ are the jump times of $N^\alpha$, and this time, χ is defined in terms of the recursive relation
$$\chi_0 = 0, \qquad \chi_n = \chi_{n-1} + Y_n, \quad n \in \mathbb{N}.$$
Indeed, $Y^\alpha_t = \chi_n$ for all $t \in [J^\alpha_n, J^\alpha_{n+1})$. Hence we can proceed as follows:
(i) Initialize $J^\alpha_0 = 0$ and $\chi_0 = 0$.
(ii) Now assume that we have already simulated $J^\alpha_{n-1}$ and $\chi_{n-1}$:
(ii.1) Simulate $Y_n$ and then set $\chi_n = \chi_{n-1} + Y_n$.
(ii.2) Simulate the sojourn time $\gamma^\alpha_n$ according to
$$P(\gamma^\alpha_n > t) = E_\alpha(-\lambda t^\alpha).$$
(ii.3) Set $J^\alpha_n = J^\alpha_{n-1} + \gamma^\alpha_n$.
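A hedged sketch of this compound scheme follows (again not the book's listing; Kanter's representation for the stable factor and all names are our assumptions). Taking constant jumps $Y_n \equiv 1$ recovers the fractional Poisson process $N^\alpha$ itself.

```python
import numpy as np

def one_sided_stable(alpha, size, rng):
    # Kanter's representation: standard one-sided stable, Laplace exp(-s**alpha)
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    a = (np.sin(alpha * U) / np.sin(U)) ** (alpha / (1 - alpha)) \
        * np.sin((1 - alpha) * U) / np.sin(U)
    return (a / W) ** ((1 - alpha) / alpha)

def compound_fractional_poisson(alpha, lam, T, jump_sampler, rng):
    # jump times J with i.i.d. Mittag-Leffler sojourns of rate lam,
    # cumulative jump chain chi_n = chi_{n-1} + Y_n
    J, chi = [0.0], [0.0]
    while J[-1] < T:
        sojourn = rng.exponential(1.0 / lam) ** (1 / alpha) \
            * one_sided_stable(alpha, 1, rng)[0]
        J.append(J[-1] + sojourn)
        chi.append(chi[-1] + jump_sampler(rng))
    return np.array(J), np.array(chi)

rng = np.random.default_rng(3)
# Y_n = 1 yields the fractional Poisson process N^alpha (cf. Figure 5.24)
J, N = compound_fractional_poisson(0.75, 1.0, 100.0, lambda r: 1.0, rng)
```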

We can use the previous algorithms to provide a further simulation procedure for some delayed Markov processes. Indeed, the following procedure works for any process that arises as the delay of a limit of a sequence of CTMCs, but we will describe it explicitly in the case of the delayed Brownian motion. Before proceeding, we need to establish in which sense a sequence of CTMCs converges to a continuous process. It is clear that when we want to simulate the paths of a continuous process by means of an approximation procedure, we are not satisfied with convergence of the finite-dimensional distributions; we would like to have, in some sense, convergence in distribution of the whole paths of the process. A first attempt could be made using the space 𝒞[0, T] and the topology induced by uniform convergence. However, the paths of CTMCs do not belong to such a space. As a next step, we could consider the space 𝒟[0, T] of càdlàg functions f : [0, T] → ℝ equipped with the uniform topology. This, however, will not work, since such a topology does not take into account the possibility of moving the discontinuity points. We will instead equip 𝒟[0, T] with the Skorokhod topology. Without loss of generality, let us consider T = 1 and denote by Λ the set of strictly increasing functions f : [0, 1] → [0, 1]. Among them, there is the identity map ι : [0, 1] → [0, 1] defined by ι(t) = t. We define a metric on 𝒟[0, 1] as follows:
$$d_S(x, y) = \inf_{f \in \Lambda} \max\{\|f - \iota\|_{L^\infty[0,1]}, \|x - y \circ f\|_{L^\infty[0,1]}\}, \quad x, y \in \mathcal{D}[0,1],$$
where ∘ denotes the composition of functions, i.e., (y ∘ f)(t) = y(f(t)) for t ∈ [0, 1]. The metric space (𝒟[0, 1], d_S) is separable but not complete. Now let $(X^n)_{n \in \mathbb{N}}$ be a sequence


of processes such that $P(X^n \in \mathcal{D}[0,1]) = 1$, and let X be a.s. càdlàg. We say that $X^n$ weakly converges to X in 𝒟[0, 1], denoted by $X^n \xrightarrow{d} X$, if for every continuous and bounded functional F : 𝒟[0, 1] → ℝ, we have
$$\lim_{n \to +\infty} E[F(X^n)] = E[F(X)]. \tag{5.36}$$

Now let us denote by Disc(X) the set of discontinuity points of X. It was proved in [29] that if $X^n \xrightarrow{d} X$ in 𝒟[0, 1], then for all N ∈ ℕ and $t_1, \ldots, t_N \in [0, 1]$ such that
$$P(t_k \notin \mathrm{Disc}(X)) = 1, \quad k = 1, \ldots, N,$$
we have $(X^n_{t_1}, \ldots, X^n_{t_N}) \xrightarrow{d} (X_{t_1}, \ldots, X_{t_N})$. So, the previous definition implies the convergence of finite-dimensional distributions at continuity points of X. The converse is not generally true. Let us, however, state a sufficient condition that guarantees the weak convergence in 𝒟[0, 1], as given in [29, Theorem 13.5].

Theorem 5.35. Let $(X^n)_{n \in \mathbb{N}}$ be a sequence of a.s. càdlàg processes, and let X be an a.s. càdlàg process. Assume that for all N ∈ ℕ and $t_1, \ldots, t_N \in [0, 1]$ with $P(t_k \notin \mathrm{Disc}(X)) = 1$ for k = 1, . . . , N,
$$(X^n_{t_1}, \ldots, X^n_{t_N}) \xrightarrow{d} (X_{t_1}, \ldots, X_{t_N}).$$
Assume further that $X_1 - X_{1-\delta} \xrightarrow{d} 0$ as $\delta \downarrow 0$ and that there exist β > 0, α > 1/2, and a nondecreasing continuous function f : [0, 1] → ℝ such that for all 0 ≤ s ≤ r ≤ t ≤ 1,
$$E\big[|X^n_t - X^n_r|^{2\beta} |X^n_r - X^n_s|^{2\beta}\big] \le (f(t) - f(s))^{2\alpha}.$$

Then $X^n \xrightarrow{d} X$ in 𝒟[0, 1].

In general, if $(X^n)_{n \in \mathbb{N}}$ is a sequence of random variables with values in a metric space $(M, d_M)$, we say that $X^n \xrightarrow{d} X$ in $(M, d_M)$ if for every continuous and bounded functional F : M → ℝ we have $\lim_{n \to +\infty} E[F(X^n)] = E[F(X)]$. Thanks to this formalism, we can state the following theorem, which is called the continuous mapping theorem (see [255, Theorem 3.4.3]).

Theorem 5.36. Let $(X_n)_{n \in \mathbb{N}}$ be a sequence of random variables with values in a metric space $(M_1, d_1)$ such that $X_n \xrightarrow{d} X$. Let $(M_2, d_2)$ be another metric space, and consider a function g : M₁ → M₂. Denote by Disc(g) the set of discontinuity points of g, i.e.,
$$\mathrm{Disc}(g) = \{x \in M_1 : \exists (x_n)_{n \in \mathbb{N}} \subset M_1,\ d_1(x_n, x) \to 0,\ d_2(g(x_n), g(x)) \not\to 0\}.$$
If $P(X \in \mathrm{Disc}(g)) = 0$, then $g(X_n) \xrightarrow{d} g(X)$ in $(M_2, d_2)$.

We will use the previous theorem only in some particular cases. Namely, we will consider the case $M_1 = M_2 = \mathbb{R}^N$ for some N ∈ ℕ (where $d_1$ and $d_2$ are the Euclidean distance), and $M_1 = \mathcal{D}[0,1] \times \mathcal{D}[0,1]$ and $M_2 = \mathcal{D}[0,1]$, where $d_2 = d_S$, and $d_1$ is defined on $M_1$ as follows:
$$d_1((x_1, x_2), (y_1, y_2)) = d_S(x_1, y_1) + d_S(x_2, y_2), \quad (x_1, x_2), (y_1, y_2) \in \mathcal{D}[0,1] \times \mathcal{D}[0,1]. \tag{5.37}$$


It is clear that if $X^n \xrightarrow{d} X$ and $Y^n \xrightarrow{d} Y$ in 𝒟[0, 1], and $X^n$ and $Y^n$ are independent for all n ∈ ℕ, then $(X^n, Y^n) \xrightarrow{d} (X, Y)$ in $\mathcal{D}[0,1] \times \mathcal{D}[0,1]$ with respect to the metric $d_1$. We can also consider a special operator (see [255, Theorem 13.2.2]).

Theorem 5.37. Let g : 𝒟[0, 1] × 𝒟[0, 1] → 𝒟[0, 1] be defined as g(x, y) = x ∘ y, where ∘ is the composition operator, i.e., (x ∘ y)(t) = x(y(t)).
(i) If x ∈ 𝒞[0, 1] and y is nondecreasing, then (x, y) ∉ Disc(g).
(ii) If y ∈ 𝒞[0, 1] is strictly increasing, then (x, y) ∉ Disc(g).

Why are we interested in this form of convergence? Assume that the $X^n$ are obtained as a result of simulation of the process X and that we want to approximate the integral of the process X. If we only have convergence of the finite-dimensional distributions, then we cannot guarantee that $\int_0^1 X^n_t\,dt$ converges in distribution to $\int_0^1 X_t\,dt$. However, if $X^n \xrightarrow{d} X$ in 𝒟[0, 1], then for every continuous and bounded function $f \in \mathcal{C}_b(\mathbb{R})$ we can use (5.36) with the continuous functional $F(x) = f(\int_0^1 x(t)\,dt)$, $x \in \mathcal{D}[0,1]$, to guarantee that $\int_0^1 X^n_t\,dt \xrightarrow{d} \int_0^1 X_t\,dt$.

Now consider a Poisson process $\mathcal{N}^n$ with parameter n, and let $(Y_j)_{j \in \mathbb{N}}$ be a sequence of i.i.d. standard normal random variables independent of $\mathcal{N}^n$.

𝒩n

1 t = ∑Y, √n j=1 j

where ∑0j=1 ≡ 0. This process is called a scaled normal compound Poisson process. It is not difficult to check that for all t ≥ 0, n ∈ ℕ, and u ∈ ℝ, n

u2

E [eiu𝒴t ] = exp (nt (e− 2n − 1)) .

(5.38)

Another basic property of the scaled normal compound Poisson process is the fact that the increments are stationary and independent. This is in fact a common property of compound Poisson processes with i. i. d. jumps (Yj )j∈ℕ . Furthermore, we have the following convergence result.

5.8 Simulation of the delayed processes

� 385

Theorem 5.39. Let (𝒴 n )n∈ℕ be a sequence of scaled normal compound Poisson processes, d

and let B be a standard Brownian motion. Then 𝒴 n ⇒ B in 𝒟[0, 1]. Proof. First, note that t

n

2

lim E [eiuYt ] = e− 2 u = E [eiuBt ] .

n→+∞

d

Therefore Lévy continuity theorem guarantees that 𝒴tn → Bt for all t ∈ [0, 1]. Now we d

prove that (𝒴tn1 , . . . , 𝒴tnN ) → (Bt1 , . . . , BtN ) for all N ∈ ℕ and 0 ≤ t1 ≤ ⋅ ⋅ ⋅ ≤ tN ≤ 1. Indeed, n setting Δ𝒴t,s = 𝒴tn − 𝒴sn for 0 ≤ s ≤ t ≤ 1, we can rewrite N

(𝒴tn1 , . . . , 𝒴tnN ) = (𝒴tn1 , Δ𝒴tn2 ,t1 + 𝒴tn1 , . . . , ∑ Δ𝒴tnj ,tj−1 + 𝒴tn1 ) j=2

= gN (𝒴tn1 , Δ𝒴tn2 ,t1 , . . . , Δ𝒴tnN ,tN−1 ) , where N

gN : (x1 , . . . , xN ) ∈ ℝN 󳨃→ (x1 , x1 + x2 , . . . , ∑ xj ) ∈ ℝN j=1

d

d

d

is continuous. Now note that for all j = 2, . . . , N, $\Delta\mathcal{Y}^n_{t_j,t_{j-1}} \stackrel{d}{=} \mathcal{Y}^n_{t_j - t_{j-1}} \xrightarrow{d} B_{t_j - t_{j-1}} \stackrel{d}{=} B_{t_j} - B_{t_{j-1}}$. Furthermore, the variables $\Delta\mathcal{Y}^n_{t_j,t_{j-1}}$, j = 2, . . . , N, are mutually independent, and they are all independent of $\mathcal{Y}^n_{t_1}$. Hence

d

$$(\mathcal{Y}^n_{t_1}, \Delta\mathcal{Y}^n_{t_2,t_1}, \ldots, \Delta\mathcal{Y}^n_{t_N,t_{N-1}}) \xrightarrow{d} (B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_N} - B_{t_{N-1}}),$$
and then the continuous mapping theorem (Theorem 5.36) implies that

$$(\mathcal{Y}^n_{t_1}, \ldots, \mathcal{Y}^n_{t_N}) = g_N(\mathcal{Y}^n_{t_1}, \ldots, \Delta\mathcal{Y}^n_{t_N,t_{N-1}}) \xrightarrow{d} g_N(B_{t_1}, \ldots, B_{t_N} - B_{t_{N-1}}) = (B_{t_1}, \ldots, B_{t_N}).$$
Next, note that B is a.s. continuous, so $B_1 - B_{1-\delta} \to 0$ as δ ↓ 0 a.s. Finally, observe that for all 0 ≤ s ≤ r ≤ t, by the independence and stationarity of the increments,
$$E[|\mathcal{Y}^n_t - \mathcal{Y}^n_r|^2 |\mathcal{Y}^n_r - \mathcal{Y}^n_s|^2] = E[|\mathcal{Y}^n_{t-r}|^2]\, E[|\mathcal{Y}^n_{r-s}|^2]. \tag{5.39}$$
To calculate $E[|\mathcal{Y}^n_t|^2]$, note that

$$E[|\mathcal{Y}^n_t|^2] = \frac{1}{n} \sum_{j=1}^{+\infty} E\Big[\Big|\sum_{k=1}^{j} Y_k\Big|^2\Big] P(\mathcal{N}^n_t = j) = \frac{1}{n} \sum_{j=1}^{+\infty} \sum_{k=1}^{j} E[|Y_k|^2]\, P(\mathcal{N}^n_t = j) = \frac{1}{n} \sum_{j=1}^{+\infty} j\, P(\mathcal{N}^n_t = j) = \frac{1}{n} E[\mathcal{N}^n_t] = t.$$

Hence (5.39) becomes
$$E[|\mathcal{Y}^n_t - \mathcal{Y}^n_r|^2 |\mathcal{Y}^n_r - \mathcal{Y}^n_s|^2] = (t - r)(r - s) \le (t - s)^2.$$
Thus by Theorem 5.35 we get the statement.

Remark 5.40. This is in fact a well-known result and follows from a Poissonification of the famous Donsker theorem. Precisely, let $(Y_j)_{j \in \mathbb{N}}$ be a sequence of i.i.d. standard normal random variables. The Donsker theorem states that if we consider the random walk $S_n = \sum_{j=1}^{n} Y_j$ and the stochastic process $B^{(n)} := \{B^{(n)}_t, t \in [0, 1]\}$, where $B^{(n)}_t = \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}}$, then $B^{(n)} \xrightarrow{d} B$ in 𝒟[0, 1]. Now for n ∈ ℕ, let $\mathcal{N}^n$ be a Poisson process with parameter n independent of the

sequence $(Y_j)$. Observe that $\mathcal{Y}^n = B^{(n)}_{\mathcal{N}^n/n}$. We recall that $\frac{\mathcal{N}^n}{n}$



⇒ ι in 𝒟[0, 1] (where ι(t) = t

for t ∈ [0, 1]) and that B is a.s. continuous. Let g : 𝒟[0, 1] × 𝒟[0, 1] → 𝒟[0, 1] be defined as g(x, y) = x ∘ y. Since they are independent,
$$\Big(B^{(n)}, \frac{\mathcal{N}^n}{n}\Big) \xrightarrow{d} (B, \iota),$$
and $(B, \iota) \notin \mathrm{Disc}(g)$ a.s.

by Theorem 5.37. Hence by the continuous mapping theorem (Theorem 5.36) we have $\mathcal{Y}^n \xrightarrow{d} B$. For this reason, we can refer to Theorem 5.39 as an alternative version of the Donsker theorem. Now consider an inverse α-stable subordinator $L^\alpha$ independent of $\mathcal{Y}^n$.

We want to use these processes, which can be simulated using the algorithm we presented before, to approximate the delayed Bm Bαt := BLαt . We have the following result. Theorem 5.42. Let (𝒴 n )n∈ℕ be a sequence of scaled normal compound Poisson processes, let B be a standard Brownian motion, and let Lα be an inverse α-stable subordinator independent of B and (𝒴 n )n∈ℕ . For n ∈ ℕ and t ≥ 0, consider Yα,n := 𝒴Lnα and Bαt := BLαt . t Then Y

α,n d

t

α

⇒B . d

Proof. Theorem 5.39 implies that 𝒴 n ⇒ B in 𝒟[0, 1]. Since Lα is independent of (𝒴 n )n∈ℕ d

and B, (𝒴 n , Lα ) ⇒ (B, Lα ) in (𝒟[0, 1] × 𝒟[0, 1], d1 ), where d1 is the metric introduced in (5.37). Let g : 𝒟[0, 1] × 𝒟[0, 1] → 𝒟[0, 1] be such that g(x, y) = x ∘ y. Observe that B ∈ 𝒞 [0, 1] a. s. and Lα is a. s. nondecreasing. Then Theorem 5.37(i) implies that P ((B, Lα ) ∈ Disc(g)) = 0. Thus we can use the continuous mapping theorem (Theorem 5.36) to guarantee that d

Yα,n = g (𝒴 n , Lα ) ⇒ g (B, Lα ) = Bα .

5.8 Simulation of the delayed processes

� 387

Remark 5.43. Let us underline that the choice of the interval [0, 1] is arbitrary, and we can be consider any finite time interval [0, T]. The previous theorem gives us an approximate simulation algorithm for the delayed Brownian motion Bα . Indeed, we can approximate Bα on [0, 1] using Yα,n on [0, 1] for n sufficiently large, which in turn can be simulated using the algorithms described previously in this section. In Listing 5.9.14, we implement the graph simulation method to simulate the paths of the delayed fBm. The results are given in Figures 5.22 and 5.23. Listing 5.9.15 is devoted to the simulation of Mittag-Leffler random variables, which can be then used to simulate,

Figure 5.22: Simulation of the trajectory of a delayed fBm. Here T = 1, h = 10−5 , α = 0.75, and H = 0.75.

Figure 5.23: Simulation of the trajectory of a delayed fBm. Here T = 1, h = 10−5 , α = 0.75, and H = 0.75.

388 � 5 Numerical methods and simulation

Figure 5.24: Simulation of the trajectory of a fractional Poisson process. Here T = 100, λ = 1, and α = 0.75.

Figure 5.25: Simulation of the trajectory of a delayed Brownian motion (using a scaled normal compound Poisson process with scale n = 1000). Here T = 1 and α = 0.75.

for example, the fractional Poisson process, as done in Listing 5.9.16 and presented in Figure 5.24. Finally, in Listing 5.9.17, we propose the implementation of the discrete event system simulation algorithm to obtain the trajectories of a scaled normal compound Poisson process, which approximates the delayed Brownian motion for a large value of the scale parameter n, as in Figure 5.25.

5.9 Listings

� 389

5.9 Listings 5.9.1 MittagLeffler 1

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34

35 36 37

38 39

MittagLeffler