Advanced Financial Modelling (Radon Series on Computational and Applied Mathematics) [1 ed.] 3110213133, 9783110213133

This book is a collection of state-of-the-art surveys on various topics in mathematical finance, with an emphasis on recent modeling and computational approaches.


English Pages 465



Radon Series on Computational and Applied Mathematics 8

Managing Editor
Heinz W. Engl (Linz/Vienna)

Editors
Hansjörg Albrecher (Lausanne)
Ronald H. W. Hoppe (Augsburg/Houston)
Karl Kunisch (Graz)
Ulrich Langer (Linz)
Harald Niederreiter (Singapore)
Christian Schmeiser (Linz/Vienna)

Radon Series on Computational and Applied Mathematics

1 Lectures on Advanced Computational Methods in Mechanics
  Johannes Kraus and Ulrich Langer (eds.), 2007
2 Gröbner Bases in Symbolic Analysis
  Markus Rosenkranz and Dongming Wang (eds.), 2007
3 Gröbner Bases in Control Theory and Signal Processing
  Hyungju Park and Georg Regensburger (eds.), 2007
4 A Posteriori Estimates for Partial Differential Equations
  Sergey Repin, 2008
5 Robust Algebraic Multilevel Methods and Algorithms
  Johannes Kraus and Svetozar Margenov, 2009
6 Iterative Regularization Methods for Nonlinear Ill-Posed Problems
  Barbara Kaltenbacher, Andreas Neubauer and Otmar Scherzer, 2008
7 Robust Static Super-Replication of Barrier Options
  Jan H. Maruhn, 2009
8 Advanced Financial Modelling
  Hansjörg Albrecher, Wolfgang J. Runggaldier and Walter Schachermayer (eds.), 2009

Advanced Financial Modelling

Edited by

Hansjörg Albrecher Wolfgang J. Runggaldier Walter Schachermayer

Walter de Gruyter · Berlin · New York

Editors

Hansjörg Albrecher
Université de Lausanne
Quartier UNIL-Dorigny
Bâtiment Extranef
1015 Lausanne, Switzerland
E-mail: [email protected]

Wolfgang J. Runggaldier
Dipartimento di Matematica Pura ed Applicata
Università degli Studi di Padova
Via Trieste 63
35121 Padova, Italy
E-mail: [email protected]

Walter Schachermayer
Faculty of Mathematics
University of Vienna
Nordbergstraße 15
1090 Vienna, Austria
E-mail: [email protected]

Keywords: Mathematical finance, actuarial mathematics, stochastic differential equations, optimization, mathematical modelling, computational methods.

Mathematics Subject Classification 2000: 91-02, 60G35, 60H35, 60J60, 62P05, 65C05, 91B16, 91B28, 91B70, 93E20.

Printed on acid-free paper which falls within the guidelines of the ANSI to ensure permanence and durability.

ISBN 978-3-11-021313-3

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© Copyright 2009 by Walter de Gruyter GmbH & Co. KG, 10785 Berlin, Germany.
All rights reserved, including those of translation into foreign languages. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage or retrieval system, without permission in writing from the publisher.
Printed in Germany.
Cover design: Martin Zech, Bremen. Typeset using the authors' LaTeX files: Jan Nitzschmann, Leipzig. Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen.

Preface

This book is a collection of state-of-the-art surveys on various topics in mathematical finance, with an emphasis on recent modeling and computational approaches. The volume is related to a Special Semester on Stochastics with Emphasis on Finance that took place from September to December 2008 at the Johann Radon Institute for Computational and Applied Mathematics (RICAM) of the Austrian Academy of Sciences in Linz, Austria. The Special Semester was built around a number of selected topics and each of these topics was the theme of an international workshop with about 20 invited speakers. Besides a Tutorial, a Kick-Off Workshop focusing also on “Academics meeting Practitioners” and a Concluding Workshop, the thematic workshops concerned the following topics: Advanced Modelling in Finance and Insurance; Optimization and Optimal Control; Inverse and Partial Information Problems: Methodology and Applications; Computational Methods with Applications in Finance, Insurance and the Life Sciences; Stochastic Methods in Partial Differential Equations and Applications of Deterministic and Stochastic PDEs. In addition to the workshops, the idea arose to collect surveys on important aspects and recent developments related to the topics of the Special Semester, and this forms the contents of the present volume. The topics covered include the following (listed alphabetically and grouped according to their relation with the topics of the Special Semester in the above order):

• Affine diffusion processes in finance
• Default and prepayment modeling using Lévy processes
• Volatility inference in models beyond semimartingales
• Optimal asset allocation
• Optimal consumption and investment in illiquid markets and with downside risk measures
• Multiperiod acceptability functionals
• Worst-case portfolio optimization
• Good deal bounds
• Optimal investment and hedging under partial and inside information
• Regularization of inverse problems and calibration of option price models
• Advanced simulation techniques
• Applications of Malliavin Calculus
• Probabilistic schemes for fully nonlinear PDEs

The contributions themselves are arranged in alphabetic order according to the first named author.


More details on the Special Semester and the full workshop program can be found at the RICAM Special Semester webpage at http://www.ricam.oeaw.ac.at/specsem/sef. We would like to take this opportunity to thank all those who have contributed scientifically to this Special Semester, in particular the authors of this volume and the speakers at the workshops as well as the (more than 250) participants in the workshops. Further thanks go to the Austrian Academy of Sciences and in particular the Johann Radon Institute of Computational and Applied Mathematics in Linz and its director Heinz W. Engl for making this Special Semester possible. We also thank Robert Plato from the publishing house de Gruyter for the professional editorial support during the preparation of this volume.

Lausanne, Padua and Vienna, June 2009

Hansjoerg Albrecher Wolfgang Runggaldier Walter Schachermayer

Contents

Preface ...................................................................... v

O. E. Barndorff-Nielsen, J. Schmiegel
Brownian semistationary processes and volatility/intermittency .............. 1

D. Becherer
From bounds on optimal growth towards a theory of good-deal hedging ........ 27

C. Blanchet-Scalliet, R. Gibson Brandon, B. de Saporta, D. Talay, E. Tanré
Viscosity solutions to optimal portfolio allocation problems in models with
random time changes and transaction costs .................................. 53

B. Bouchard, R. Elie, N. Touzi
Discrete-time approximation of BSDEs and probabilistic schemes for fully
nonlinear PDEs ............................................................. 91

D. Filipović, E. Mayerhofer
Affine diffusion processes: theory and applications ....................... 125

M. B. Giles, B. J. Waterhouse
Multilevel quasi-Monte Carlo path simulation .............................. 165

H. Jönsson, W. Schoutens, G. Van Damme
Modelling default and prepayment using Lévy processes: an application to
asset backed securities ................................................... 183

B. Jourdain
Adaptive variance reduction techniques in finance ......................... 205

S. Kindermann, H. K. Pikkarainen
Regularisation of inverse problems and its application to the calibration
of option price models .................................................... 223

C. Klüppelberg, S. Pergamenshchikov
Optimal consumption and investment with bounded downside risk measures
for logarithmic utility functions ......................................... 245

A. Kohatsu-Higa, K. Yasuda
A review of some recent results on Malliavin Calculus and its applications  275

R. Korn, M. Schäl
The numeraire portfolio in discrete time: existence, related concepts and
applications .............................................................. 303

R. Korn, F. Seifried
A worst-case approach to continuous-time portfolio optimisation ........... 327

R. Kovacevic, G. Ch. Pflug
Time consistency and information monotonicity of multiperiod acceptability
functionals ............................................................... 347

M. Monoyios
Optimal investment and hedging under partial and inside information ....... 371

H. Pham
Investment/consumption choice in illiquid markets with random trading times 411

T. Zariphopoulou
Optimal asset allocation in a stochastic factor model – an overview and
open problems ............................................................. 427

Radon Series Comp. Appl. Math 8, 1–25 © de Gruyter 2009

Brownian semistationary processes and volatility/intermittency

Ole E. Barndorff-Nielsen and Jürgen Schmiegel

Abstract. A new class of stochastic processes, termed Brownian semistationary processes (BSS), is introduced and discussed. This class has similarities to that of Brownian semimartingales (BSM), but is mainly directed towards the study of stationary processes, and BSS processes are not in general of the semimartingale type. We focus on semimartingale/nonsemimartingale issues and on inference problems concerning the underlying volatility/intermittency process, in the nonsemimartingale case and based on normalised realised quadratic variation. The concept of BSS processes has arisen out of an ongoing study of turbulent velocity fields and is the purely temporal version of the general tempo-spatial framework of ambit processes. The latter, which may have applications also to the finance of energy markets, is briefly considered at the end of the paper, again with reference to the question of inference on the volatility/intermittency.

Key words. Ambit processes, intermittency, nonsemimartingales, stationary processes, realised quadratic variation, turbulence, volatility.

AMS classification. 60G10

1 Introduction

This paper discusses stochastic processes Y = {Y_t}_{t∈R} of the form

$$ Y_t = \mu + \int_{-\infty}^{t} g(t-s)\,\sigma_s\,dB_s + \int_{-\infty}^{t} q(t-s)\,a_s\,ds \qquad (1.1) $$

where µ is a constant, B is Brownian motion, g and q are nonnegative deterministic functions on R, with g(t) = q(t) = 0 for t ≤ 0, and σ and a are càdlàg processes. When σ and a are stationary then so is Y. Accordingly we shall refer to processes of this type as Brownian semistationary (BSS) processes. It is sometimes convenient to indicate the formula for Y as

$$ Y = \mu + g * \sigma \bullet B + q * a \bullet \mathrm{Leb}, \qquad (1.2) $$

where Leb denotes Lebesgue measure. We consider the BSS processes to be the natural analogue, for stationarity related processes, of the class BSM of Brownian semimartingales

$$ Y_t = \int_0^t \sigma_s\,dB_s + \int_0^t a_s\,ds. \qquad (1.3) $$
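A minimal numerical sketch of (1.1) may help fix ideas (this illustration is not part of the original text): the stochastic integral is approximated by a Riemann sum over Brownian increments, with q = a = 0, the kernel truncated at a finite lag l so that only a finite window of increments enters, and an illustrative gamma kernel and log-Ornstein–Uhlenbeck volatility; all parameter choices here are assumptions made for the demonstration.

```python
import numpy as np

def simulate_bss(T=1.0, h=1e-3, l=2.0, mu=0.0, alpha=0.35, lam=1.0, seed=0):
    """Riemann-sum discretisation of (1.1) with q = a = 0 and a kernel
    truncated at lag l, so only Brownian increments on [t - l, t) enter.

    Illustrative assumptions (not prescribed by the text): the gamma
    kernel g(t) = t^alpha exp(-lam t) and a log-OU volatility sigma."""
    rng = np.random.default_rng(seed)
    n, m = int(round(T / h)), int(round(l / h))
    dB = rng.normal(0.0, np.sqrt(h), n + m)         # increments on [-l, T)
    # stationary log-volatility: OU with unit mean reversion, variance 1/2
    ou = np.empty(n + m)
    ou[0] = rng.normal(0.0, np.sqrt(0.5))
    phi = np.exp(-h)
    for i in range(1, n + m):
        ou[i] = phi * ou[i - 1] + rng.normal(0.0, np.sqrt(0.5 * (1 - phi * phi)))
    sigma = np.exp(ou)
    # kernel evaluated at increment-midpoint lags l - h/2, ..., h/2
    lag = l - (np.arange(m) + 0.5) * h
    g = lag ** alpha * np.exp(-lam * lag)
    w = sigma * dB
    # Y_{ih} = mu + sum over the window of g(lag) * sigma * dB
    Y = mu + np.array([g @ w[i:i + m] for i in range(1, n + 1)])
    return Y

path = simulate_bss()
print(len(path), float(np.std(path)))
```

The truncation at lag l keeps the sum finite; for kernels with slowly decaying tails a larger window (or an exact covariance-based simulation method) would be needed.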

In the present paper the processes σ and a will, unless otherwise stated, be taken to be stationary, and we then refer to σ as the volatility or intermittency process. The term intermittency comes from turbulence, and in that scientific field intermittency plays a key role, similar to that of (stochastic) volatility in finance. In turbulence the basic notion of intermittency refers to the fact that the energy in a turbulent field is unevenly distributed in space and time. The present paper is part of a project whose aim is to construct a stochastic process model of the field of velocity vectors representing the fluid motion, conceiving of the intermittency as a positive random field with values σ_t(x) at positions (x, t) in space-time. However, most extensive data sets on turbulent velocities only provide the time series of the main component (i.e. the component in the main direction of the fluid flow) of the velocity vector at a single location in space. In the present paper the focus is on this latter case, but in Sections 8 and 9 some discussion will be given on the further intriguing issues that arise when addressing tempo-spatial settings. For a detailed discussion of BSS and the more general concept of tempo-spatial ambit processes, in the context of turbulence modelling, we refer to Barndorff-Nielsen and Schmiegel (2004), Barndorff-Nielsen and Schmiegel (2007), Barndorff-Nielsen and Schmiegel (2008a), Barndorff-Nielsen and Schmiegel (2008b) and Barndorff-Nielsen and Schmiegel (2008c). There it is shown that such processes are able to reproduce main stylized facts of turbulent data.

In general, as we shall discuss in Section 3, models of the BSS form are not semimartingales. One consequence of this is that various useful techniques developed for semimartingales, such as the calculation of quadratic variation by Ito algebra and those of multipower variation, need extension or modification. The recently established theory of multipower variation (Barndorff-Nielsen et al. (2006a), Barndorff-Nielsen et al. (2006b) and Jacod (2008a), cf.
also Barndorff-Nielsen and Shephard (2003), Barndorff-Nielsen and Shephard (2004), Barndorff-Nielsen and Shephard (2006a), Barndorff-Nielsen and Shephard (2006b), Barndorff-Nielsen et al. (2006c) and Jacod (2008b)) was developed as a basis for inference on σ under BSM models and, more generally, Ito semimartingales, with particular focus on inference about the integrated squared volatility σ^{2+} given by

$$ \sigma_t^{2+} = \int_0^t \sigma_s^2\,ds. \qquad (1.4) $$

In the present paper the focus is similarly on inference for σ_t^{2+}. Specifically we shall discuss to what extent (a suitably normalised version of) realised quadratic variation of Y can be used to estimate σ_t^{2+}. It is important to realise that, as regards inference on σ^{2+}, there may be substantial differences between cases where g is positive on all of (0, ∞) and those where g(t) = 0 for t > l for some l ∈ (0, ∞). This will be discussed in detail later.

In semimartingale theory the quadratic variation [Y] of Y is defined in terms of the Ito integral Y • Y, as [Y] = Y² − 2Y • Y. In that setting [Y] equals the limit in probability as δ → 0 of the realised quadratic variation [Y_δ] of Y defined by

$$ [Y_\delta]_t = \sum_{j=1}^{\lfloor t/\delta \rfloor} \big( Y_{j\delta} - Y_{(j-1)\delta} \big)^2 \qquad (1.5) $$

where ⌊t/δ⌋ is the largest integer smaller than or equal to t/δ.
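The quantity (1.5) is immediate to compute from discrete observations; the following sketch (an editorial illustration, not part of the text) also checks the classical Brownian case, where the realised quadratic variation over [0, t] converges in probability to t.

```python
import numpy as np

def realised_qv(Y):
    """Realised quadratic variation (1.5): the sum of squared increments
    of a discretely observed path Y (observations a fixed delta apart)."""
    Y = np.asarray(Y, dtype=float)
    return float(np.sum(np.diff(Y) ** 2))

# For Brownian motion the RQV over [0, t] converges in probability to t:
rng = np.random.default_rng(1)
delta, t = 1e-4, 1.0
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(delta), int(t / delta)))))
print(realised_qv(B))   # close to t = 1
```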


However, the question of whether [Y_δ] has a limit in probability – and what that limit is – is of interest more broadly than for semimartingales, and in particular for BSS processes. For any process Y = {Y_t}_{t≥0} we shall use [Y_·] to denote the limit, when it exists, i.e.

$$ [Y_\cdot]_t = \text{p-}\lim_{\delta\to 0}\, [Y_\delta]_t. $$

Thus, in case Y ∈ BSM we have [Y_·] = [Y].¹ We abbreviate realised quadratic variation to RQV and write QV for [Y_·].

The paper is organised as follows. Brownian semistationary processes are introduced in Section 2 and related non-semimartingale issues are considered in Section 3. Section 4 introduces a concept of q-orthogonality of stochastic processes and considers the computation of QV in some semimartingale difference cases. In Section 5 we turn to the increments of Brownian semistationary processes. Section 6 defines a normalised version $\overline{[Y_\delta]}$ of RQV, and Section 7 derives sufficient conditions for the convergence in probability of $\overline{[Y_\delta]}$ to σ^{2+}. Extensions to the tempo-spatial setting are discussed in Sections 8 and 9. Some indications of ongoing further work and open problems are given in the concluding Section 10.

2 BSS processes

We have defined the concept of Brownian semistationary processes (BSS) as processes Y = {Y_t}_{t∈R} of the form

$$ Y_t = \mu + \int_{-\infty}^{t} g(t-s)\,\sigma_s\,dB_s + \int_{-\infty}^{t} q(t-s)\,a_s\,ds \qquad (2.1) $$

where, in the context of the present paper, the processes σ and a are taken to be stationary. The integrals in (2.1) are to be understood as limits in probability for u → −∞ of the integrals

$$ \int_u^t g(t-s)\,\sigma_s\,dB_s + \int_u^t q(t-s)\,a_s\,ds $$

which are assumed to exist, the first defined for each fixed t as an Ito integral. This of course poses restrictions on which functions g and q are feasible, including square integrability of g.

The focus of the present paper is on inference about the integrated squared volatility σ^{2+} given by (1.4). In particular, we shall discuss to what extent realised quadratic variation of Y can be used to estimate σ_t^{2+}. Note that the relevant question here is whether a suitably normalised version of the realised quadratic variation, and not necessarily the realised quadratic variation itself, converges in probability and law.

¹ Of course, for semimartingales Y we have the more general result that [Y]_t = p-lim_{|τ|→0} [Y_τ]_t, where τ denotes a subdivision of [0, t], |τ| is the maximal span in the subdivision, and Y_τ is the τ-discretisation of Y over the interval [0, t].


As a modelling framework for continuous time stationary processes the specification (2.1) is quite general. In fact, the continuous time Wold–Karhunen decomposition says¹ that any second order stationary stochastic process, possibly complex valued, of mean 0 and continuous in quadratic mean can be represented as

$$ Z_t = \int_{-\infty}^{t} \phi(t-s)\,d\Xi_s + V_t $$

where

• the deterministic function φ is an, in general complex, deterministic square integrable function
• the process Ξ has orthogonal increments with E{|dΞ_t|²} = ϖ dt for some constant ϖ > 0
• the process V is nonregular (i.e. its future values can be predicted by linear operations on past values without error).

Under the further condition that ∩t∈R sp {Zs : s ≤ t} = {0}, the function φ is real and uniquely determined up to a real constant of proportionality; and the same is therefore true of Ξ (up to an additive constant).

3 BSS and semi-/nonsemimartingale issues

If Y ∈ BSS then Y may or may not be of the semimartingale type. This section discusses criteria for either of these cases.

3.1 Semimartingale cases

We begin by recalling a classical necessary and sufficient condition, due to Knight (1992), for the process Y to be a semimartingale, valid in the special simple situation where σ = 1 and a = 0, i.e. where the process is of the form

$$ Y_t = \int_{-\infty}^{t} g(t-s)\,dB_s. \qquad (3.1) $$

Knight's theorem says that (Y_t)_{t≥0} is a semimartingale in the (F_t^B)_{t≥0} filtration if and only if

$$ g(t) = c + \int_0^t b(s)\,ds \qquad (3.2) $$

for some c ∈ R and a square integrable function b.

¹ See Doob (1953) and Karhunen (1950).


Example. An example of some particular interest is where

$$ g(t) = t^{\alpha} e^{-\lambda t} \quad \text{for } t \in (0, \infty) $$

and some λ > 0. In order for the integral (3.1) to exist, α is required to be greater than −1/2, and for g to be of the form (3.2) we must have α = 0 or α > 1/2. In other words, the nonsemimartingale cases are α ∈ (−1/2, 0) ∪ (0, 1/2].
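The square-integrability constraint α > −1/2 can be probed numerically (an editorial illustration; grid choices are assumptions): near the origin g(t)² behaves like t^{2α}, so the integral of g² over (ε, 1] stabilises as ε shrinks when α > −1/2 and keeps growing otherwise.

```python
import numpy as np

def g2_integral(alpha, lam=1.0, eps=1e-9, n=2000):
    """Trapezoidal value of int_eps^1 g(t)^2 dt for g(t) = t^alpha e^{-lam t},
    on a log-spaced grid; the eps -> 0 limit is finite iff alpha > -1/2."""
    t = np.logspace(np.log10(eps), 0.0, n)
    y = t ** (2 * alpha) * np.exp(-2 * lam * t)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# alpha = 0.25 (nonsemimartingale yet square integrable): stable as eps shrinks
print(g2_integral(0.25, eps=1e-6), g2_integral(0.25, eps=1e-12))
# alpha = -0.6 (< -1/2, integral diverges): keeps growing
print(g2_integral(-0.6, eps=1e-6), g2_integral(-0.6, eps=1e-12))
```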

Generally, one may ask under what conditions moving average processes of the form

$$ X_t = \int_{-\infty}^{\infty} \big\{ g(t-s) - h(-s) \big\}\,dB_s, $$

with g and h deterministic, are semimartingales. More specifically, when is (X_t)_{t≥0} an (F_t^X)_{t≥0}-semimartingale, where F_t^X is the σ-algebra generated by {X_s, s ≤ t}. Constructive necessary and sufficient conditions for this are given in a recent paper by Basse, see Basse (2007a). More generally still is the question of when a process X is a Gaussian semimartingale. Also for this case a necessary and sufficient criterion has been obtained by Basse, in Basse (2008), cf. also Basse (2007b) which discusses the spectral representation of Gaussian semimartingales. At a further level of generalisation, Basse and Pedersen, in Basse and Pedersen (2008), consider processes X of the general form

$$ X_t = \int_{-\infty}^{t} \big\{ \phi(t-s) - \psi(-s) \big\}\,dL_s $$

where L is a (two-sided) nondeterministic Lévy process with characteristic triplet (γ, σ², ν), φ and ψ are deterministic functions and the integral exists, in the sense of Rajput and Rosinski (1989). These authors establish various necessary conditions on (γ, σ², ν) and φ, ψ in order for (X_t)_{t≥0} to be an (F_t^L)_{t≥0}-semimartingale.

Now, turning to the general BSS case, we first argue formally, as if the differential of Y exists. From (2.1),

$$ dY_t = g(0+)\,\sigma_t\,dB_t + \big\{ \dot g * \sigma \bullet B_t + q(0+)\,a_t + \dot q * a \bullet \mathrm{Leb}_t \big\}\,dt $$

suggesting that Y_t can be reexpressed as

$$ Y_t = Y_0 + g(0+) \int_0^t \sigma_s\,dB_s + \int_0^t A_s\,ds $$

with

$$ A = \dot g * \sigma \bullet B + q(0+)\,a + \dot q * a \bullet \mathrm{Leb}. $$

This will indeed be the case provided the following conditions are satisfied (recall that we have assumed that σ and a are stationary):

(i) g(0+) and q(0+) exist and are finite.

(ii) g is absolutely continuous with square integrable derivative $\dot g$.

(iii) The process $\dot g(-\cdot)\,\sigma_\cdot$ is square integrable.

(iv) The process $\dot q(-\cdot)\,a_\cdot$ is integrable.

In view of the results by Knight and Basse, mentioned above, these conditions must be close to necessary as well. We shall here not further discuss affirmative conditions for Y to be of the semimartingale type. Instead we turn to cases where Y can be written as a linear combination of semimartingales which are orthogonal, in a sense that will be specified, and have different filtrations.

4 RQV and linear combinations of semimartingales

While the focus will be on cases where a given BSS process Y can be rewritten as Y + − Y − , where both Y + and Y − are semimartingales, we begin by considering the broader issue of existence and calculation of [Y· ] when Y is a linear combination of q -orthogonal processes, q -orthogonality being defined below.

4.1 General considerations

Suppose that a process Y = {Y_t} is representable in law as a linear combination Y = Y′ + Y″ of some processes Y′ and Y″ of interest, of semimartingale type or not. Then, defining [Y′_δ, Y″_δ] and [Y′_·, Y″_·] by

$$ [Y'_\delta, Y''_\delta]_t = \sum_{j=1}^{\lfloor t/\delta \rfloor} \big( Y'_{j\delta} - Y'_{(j-1)\delta} \big)\big( Y''_{j\delta} - Y''_{(j-1)\delta} \big) $$

and

$$ [Y'_\cdot, Y''_\cdot]_t = \text{p-}\lim_{\delta\to 0}\, [Y'_\delta, Y''_\delta]_t $$

we have

$$ [Y_\delta] = [Y'_\delta] + [Y''_\delta] + 2\,[Y'_\delta, Y''_\delta] $$

and hence, provided the limit exists (in probability),

$$ [Y_\cdot] = [Y'_\cdot] + [Y''_\cdot] + 2\,[Y'_\cdot, Y''_\cdot]. $$

We will write this symbolically as

$$ d[Y_\cdot] = d[Y'_\cdot] + d[Y''_\cdot] + 2\,d[Y'_\cdot, Y''_\cdot]. $$

In case [Y′_·, Y″_·] = 0 we say that Y′ and Y″ are q-orthogonal and express this by writing dY′ dY″ = 0.


Then

$$ [Y_\cdot] = [Y'_\cdot] + [Y''_\cdot]. $$

In particular, if Y′ and Y″ are both semimartingales, in general with different own filtrations, and q-orthogonal, then

$$ [Y_\cdot] = [Y] = [Y'] + [Y''] $$

and d[Y_·] may be calculated as

$$ d[Y_\cdot]_t = (dY'_t)^2 + (dY''_t)^2. $$

In this case we may define dY as dY′ + dY″ and then, as in the usual semimartingale world, we have

$$ [Y_\cdot]_t = \int_0^t (dY_s)^2. $$

An elementary instance of this is Y_t = Y′_t + Y″_t with Y′_t = B_t and Y″_t = −B_{t−1}, where B = {B_t}_{t∈R} is Brownian motion on the real line. These considerations are extendable to settings where Y is a linear combination $Y = \int Y^{(c)}\,M(dc)$ of mutually q-orthogonal processes Y^{(c)} and where M is a deterministic, possibly signed, measure. We shall not here discuss specific general conditions for this; however an example is given in the next subsection.
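The elementary instance above can be checked by simulation (an editorial sketch, not part of the text): since Y′ = B and Y″ = −B_{·−1} are q-orthogonal, the realised quadratic variation of Y_t = B_t − B_{t−1} over [0, t] should approach [Y′]_t + [Y″]_t = 2t.

```python
import numpy as np

rng = np.random.default_rng(7)
t, delta = 1.0, 1e-4
n = int(t / delta)
# Brownian increments covering [-1, t]
dB = rng.normal(0.0, np.sqrt(delta), 2 * n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B on the grid of [-1, 1]
Y = B[n:] - B[:n + 1]                        # Y at times 0, delta, ..., t
rqv = float(np.sum(np.diff(Y) ** 2))
print(rqv)   # close to 2t = 2
```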

4.2 Some BSS cases

Let G be the class of functions of the form (3.2). If g ∈ G then for any u > 0 the function h(·) = g(· + u) also belongs to G. This has the important consequence that if g is of the form g^{#} 1_A with A = (0, l) for some l > 0 and g^{#} ∈ G, then Y itself is not a semimartingale but it is the difference between two semimartingales, specifically

$$ Y_t = Y_t^+ - Y_t^- $$

where

$$ Y_t^+ = \mu + \int_{-\infty}^{t} g^{\#}(t-s)\,\sigma_s\,dB_s + q * a \bullet \mathrm{Leb} $$

and

$$ Y_t^- = \int_{-\infty}^{t} g^{\#}(t-s+l)\,\sigma_{s-l}\,dB_{s-l}. $$

Both Y⁺ and Y⁻ are semimartingales but with different filtrations, namely (F_t^B)_{t∈R} and (F_{t−l}^B)_{t∈R}. Moreover, Y⁺ and Y⁻ are q-orthogonal, and hence

$$ d[Y_\cdot]_t = (dY_t^+)^2 + (dY_t^-)^2. $$

More generally, suppose that g has the form

$$ g(t) = \int_0^\infty g_0(t-c)\,dM(c) $$

for a g_0 ∈ G and where M is a function of bounded variation on R₊. In this case we have

$$
\begin{aligned}
Y_t &= \int_{-\infty}^{t} g(t-s)\,\sigma_s\,dB_s \\
&= \int_{-\infty}^{t} \int_0^\infty g_0(t-s-c)\,dM(c)\,\sigma_s\,dB_s \\
&= \int_0^\infty \int_{-\infty}^{t} g_0(t-s-c)\,\sigma_s\,dB_s\,dM(c) \\
&= \int_0^\infty \int_{-\infty}^{t-c} g_0(t-c-s)\,\sigma_s\,dB_s\,dM(c) \\
&= \int_0^\infty Y_t^{(c)}\,dM(c)
\end{aligned}
$$

where

$$ Y_t^{(c)} = \int_{-\infty}^{t} g_0(t-s)\,\sigma_{s-c}\,dB_{s-c}, $$

showing that Y is a linear combination of q-orthogonal semimartingales with different filtrations (namely, conditional on σ the filtration of Y^{(c)} is (F_{t−c}^B)_{t∈R}).

5 Increment processes

Again, suppose that g = g^{#} 1_{[0,l]} for some l > 0 and g^{#} ∈ G. For any given t we define the increment process (X_{u|t})_{u≥0} by

$$
\begin{aligned}
X_{t+u|t} &= Y_{t+u} - Y_t \\
&= \int_t^{t+u} g(t+u-s)\,\sigma_s\,dB_s + \int_{-\infty}^{t} \big\{ g(t+u-s) - g(t-s) \big\}\,\sigma_s\,dB_s \\
&\quad + \int_t^{t+u} q(t+u-s)\,a_s\,ds + \int_{-\infty}^{t} \big\{ q(t+u-s) - q(t-s) \big\}\,a_s\,ds.
\end{aligned}
$$

It will be convenient to rewrite X_{t|t−u} as

$$ X_{t|t-u} = \int_{-\infty}^{0} \phi_u(-v)\,\sigma_{v+t}\,dB_{v+t} + \int_{-\infty}^{0} \chi_u(-v)\,a_{v+t}\,dv \qquad (5.1) $$

where φ_u and χ_u are defined by

$$ \phi_u(v) = \begin{cases} g(v) & \text{for } 0 \le v < u \\ g(v) - g(v-u) & \text{for } u \le v < \infty \end{cases} $$


and

$$ \chi_u(v) = \begin{cases} q(v) & \text{for } 0 \le v < u \\ q(v) - q(v-u) & \text{for } u \le v < \infty. \end{cases} $$

From now on we assume that (σ, a) is independent of B and that a is adapted to the filtration F^σ. This, together with (5.1), implies in particular that the conditional variance of Y_t − Y_{t−u} given the process σ takes the form

$$ E\big\{ (Y_t - Y_{t-u})^2 \mid \sigma \big\} = \int_0^\infty \psi_u(v)\,\sigma_{t-v}^2\,dv + \left( \int_0^\infty \chi_u(v)\,a_{t-v}\,dv \right)^2 $$

where

$$ \psi_u(v) = \begin{cases} g^2(v) & \text{for } 0 \le v < u \\ \{ g(v-u) - g(v) \}^2 & \text{for } u \le v < \infty. \end{cases} $$

Remark 5.1. Note that φ_u(v) = ψ_u(v) = χ_u(v) = 0 for v ≥ l + u while for l ≤ v < l + u we have ψ_u(v) = g²(v−u) and χ_u(v) = q(v−u).

Let

$$ c(u) = \int_0^\infty \psi_u(v)\,dv. $$

Remark 5.2. Trivially,

$$ c(\delta) \ge \int_0^\delta \psi_\delta(v)\,dv = \int_0^\delta g^2(v)\,dv, $$

implying that if g(0+) > 0 then c(δ) cannot tend to 0 faster than δ.

Remark 5.3. We have

$$ c(u) = 2\,\|g\|^2\,\bar r(u) $$

where $\|g\|^2 = \int_0^\infty g^2(v)\,dv$ and r̄ = 1 − r, with r being the autocorrelation function of Y. Furthermore,

$$ E\big\{ (Y_t - Y_{t-u})^2 \big\} = E\big\{\sigma_0^2\big\}\,c(u) + E\big\{a_0^2\big\} \int_0^\infty \int_0^\infty \chi_u(v)\,\chi_u(w)\,\varrho(|v-w|)\,dv\,dw $$

where ϱ is the autocorrelation function of a.
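Remark 5.3 is easy to verify numerically for a concrete kernel (an editorial illustration; the exponential kernel and grid parameters are assumptions): for g(v) = e^{−v} one has ‖g‖² = 1/2 and r(u) = e^{−u}, so the identity predicts c(u) = 2‖g‖² r̄(u) = 1 − e^{−u}.

```python
import numpy as np

def c_of_u(u, g, h=1e-4, vmax=40.0):
    """Midpoint-rule value of c(u) = int_0^inf psi_u(v) dv, where
    psi_u(v) = g(v)^2 on [0, u) and {g(v-u) - g(v)}^2 for v >= u."""
    v = np.arange(0.0, vmax, h) + h / 2
    psi = np.where(v < u, g(v) ** 2, (g(np.maximum(v - u, 0.0)) - g(v)) ** 2)
    return float(np.sum(psi) * h)

g = lambda v: np.exp(-v)
for u in (0.1, 0.5, 2.0):
    print(u, c_of_u(u, g), 1.0 - np.exp(-u))   # the two values should agree
```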

6 Normalised RQV

We now define the normalised RQV as

$$ \overline{[Y_\delta]} = \frac{\delta}{c(\delta)}\,[Y_\delta]. \qquad (5.2) $$


The question we wish to address here is whether and under what conditions $\overline{[Y_\delta]}$ converges in probability to σ^{2+}. Concerning the related question of a central limit theorem for $\overline{[Y_\delta]}$, see Section 10. In the present paper we shall largely restrict the discussion to quite regular forms of the weight function g, assuming in particular that g is positive on a finite interval (0, l) only. Specifically, we now assume that the function g is positive, continuously differentiable, convex and decreasing on an interval (0, l) where 0 < l < ∞, and that g(t) = 0 outside that interval. Also, we require that σ and a are stationary and càdlàg and, as before, that a is adapted to the natural filtration of σ. Without loss of generality we take t/δ to be an integer n so that t = nδ. Below C denotes a constant that is independent of n but whose value may change with the context.
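The behaviour of the normalised RQV can be illustrated by Monte Carlo before the formal analysis (an editorial sketch; the kernel, grids and sample sizes are assumptions): with σ ≡ 1 and a = 0, the discretised $\overline{[Y_\delta]}$ below is normalised by the exact discrete analogue of c(δ), so its mean equals t exactly, and the simulation shows it concentrating near σ^{2+}_t = t.

```python
import numpy as np

# sigma = 1, a = 0; illustrative kernel g(v) = (1 - v/l)^2 on (0, l):
# positive, convex, decreasing, with g(l) = 0.
rng = np.random.default_rng(42)
t, l, h, p = 1.0, 1.0, 1e-3, 20            # p fine steps per observation gap
delta = p * h
n, m = int(t / delta), int(l / h)
gw = (1.0 - (np.arange(m) + 0.5) * h / l) ** 2   # kernel on midpoint lags
# coefficients of one increment Y_{i delta} - Y_{(i-1) delta} over the dB's
A = np.zeros(m + p)
A[:m] += gw
A[p:p + m] -= gw
c_disc = float(np.sum(A ** 2) * h)               # exact discrete c(delta)
est = []
for _ in range(300):
    dB = rng.normal(0.0, np.sqrt(h), m + n * p)
    # Y at observation times 0, delta, ..., t (nearest increment first)
    Y = np.array([gw @ dB[i * p:i * p + m][::-1] for i in range(n + 1)])
    est.append(delta / c_disc * float(np.sum(np.diff(Y) ** 2)))
print(np.mean(est))   # close to t = 1
```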

7 Consistency

To discuss the question of when $\overline{[Y_\delta]} \xrightarrow{\,p\,} \sigma^{2+}$ we first note that, by (5.1),

$$
[Y_\delta]_t = \sum_{k=1}^{n} \left( \int_{-\infty}^{0} \phi_\delta(-v)\,\sigma_{v+k\delta}\,dB_{v+k\delta} \right)^2 + 2 \sum_{k=1}^{n} \left( \int_{-\infty}^{0} \phi_\delta(-v)\,\sigma_{v+k\delta}\,dB_{v+k\delta} \right) \left( \int_{-\infty}^{0} \chi_\delta(-v)\,a_{v+k\delta}\,dv \right) + \sum_{k=1}^{n} \left( \int_{-\infty}^{0} \chi_\delta(-v)\,a_{v+k\delta}\,dv \right)^2.
$$

It follows that

$$ E\big\{ \overline{[Y_\delta]}_t \mid \sigma \big\} = \int_0^\infty \left( \delta \sum_{k=1}^{n} \sigma_{k\delta-v}^2 \right) \pi_\delta(dv) + c(\delta)^{-1} D_\delta(a) \qquad (7.1) $$

where

$$ \pi_\delta(dv) = \frac{\psi_\delta(v)}{c(\delta)}\,dv $$

and

$$ D_\delta(a) = \int_0^\infty \int_0^\infty \chi_\delta(v)\,\chi_\delta(w) \left( \delta \sum_{k=1}^{n} a_{k\delta-v}\,a_{k\delta-w} \right) dv\,dw. $$

Thus π_δ is an absolutely continuous probability measure on (0, l + δ). Furthermore,

$$ |D_\delta(a)| \le C \left( \int_0^\infty |\chi_\delta(v)|\,dv \right)^2 $$

where the constant C depends on a, l and t. This leads us to introduce


Condition A. $c(\delta)^{-1} \left( \int_0^\infty |\chi_\delta(v)|\,dv \right)^2 \to 0.$

Remark 7.1. Note that in this connection if q is a positive decreasing function then

$$ \int_0^\infty |\chi_\delta(v)|\,dv = 2 \int_0^\delta q(v)\,dv. \qquad (7.2) $$
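Identity (7.2) is easy to confirm numerically (an editorial illustration; q(v) = e^{−v} is an assumed example of a positive decreasing function, for which the right-hand side is 2(1 − e^{−δ})):

```python
import numpy as np

def chi_l1(delta, q, h=1e-4, vmax=30.0):
    """Midpoint-rule L1 norm of chi_delta, where chi_delta(v) = q(v)
    on [0, delta) and q(v) - q(v - delta) for v >= delta."""
    v = np.arange(0.0, vmax, h) + h / 2
    chi = np.where(v < delta, q(v), q(v) - q(np.maximum(v - delta, 0.0)))
    return float(np.sum(np.abs(chi)) * h)

q = lambda v: np.exp(-v)
for delta in (0.05, 0.2, 1.0):
    print(delta, chi_l1(delta, q), 2 * (1 - np.exp(-delta)))
```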

Suppose that π_δ converges weakly, as δ → 0, to a probability measure π on [0, l], i.e.

$$ \pi_\delta \xrightarrow{\,w\,} \pi. \qquad (7.3) $$

Then, if Condition A holds we obtain from (7.1) that

$$ E\big\{ \overline{[Y_\delta]}_t \mid \sigma \big\} \to \int_0^\infty \big( \sigma_{t-v}^{2+} - \sigma_{-v}^{2+} \big)\,\pi(dv). \qquad (7.4) $$

In particular, if π = δ₀, the delta measure at 0, then

$$ E\big\{ \overline{[Y_\delta]}_t \mid \sigma \big\} \to \sigma_t^{2+} \qquad (7.5) $$

where

$$ \sigma_t^{2+} = \int_0^t \sigma_s^2\,ds. $$

The following two subsections derive sufficient conditions for $\pi_\delta \xrightarrow{\,w\,} \delta_0$ and for $\mathrm{Var}\{\overline{[Y_\delta]}_t \mid \sigma\} \to 0$, respectively. These two relations together with Condition A imply that

$$ \overline{[Y_\delta]} \xrightarrow{\,p\,} \sigma^{2+}. \qquad (7.6) $$

We will refer to the case where (7.6) is satisfied by saying that the model for Y is volatility memoryless.

following  two subsections derive sufficient conditions for πδ → δ0 and for Var [Yδ ]t | σ → 0, respectively. These two relations together with Condition A imply that p [Yδ ] → σ 2+ . (7.6) We will refer to the case where (7.6) is satisfied by saying that the model for Y is volatility memoryless.

7.1 Pidelta to pi Suppose that l < ∞ and let  u ψδ (v) dv Ψδ (u) =

and

¯ δ (u) = Ψ



l+δ

l+δ−u

0

so that c (δ)−1 Ψδ is the distribution function, say Πδ , of πδ . Next, for k = 1, 2, . . ., let  kδ ψδ (u) du ck (δ) = (k−1)δ

=



δ

ψδ ((k − 1) δ + u) du

0

=

δ



0

1

ψδ ((k − 1 + u) δ) du

ψδ (v) dv,


i.e.

$$ c_k(\delta) = \delta \int_0^1 \big\{ g\big((k-2+u)\delta\big) - g\big((k-1+u)\delta\big) \big\}^2\,du. \qquad (7.7) $$

We must now distinguish between the cases t < l and t ≥ l. Suppose first that t ≥ l. Let k* = max{k : kδ ≤ l}. Then, by (7.7), for 1 < k ≤ k*,

$$ c_k(\delta) = \delta^3 \int_0^1 g'\big( (k-2+u+\theta_k(u))\,\delta \big)^2\,du $$

where the θ_k(u) satisfy 0 ≤ θ_k(u) ≤ 1. Since g is convex and decreasing this implies, provided k_* ≤ k ≤ k* where k_* > 2, that

$$ c_k(\delta) \le \delta^3\,g'\big((k-2)\delta\big)^2 \le \delta^3\,g'\big((k_*-2)\delta\big)^2. $$

Therefore, for any ε ∈ (2δ, l) with 1 < ⌊ε/δ⌋ < k* we have

$$ \Psi_\delta(k^*\delta) - \Psi_\delta(\varepsilon) \le \sum_{k=\lfloor \varepsilon/\delta \rfloor + 1}^{k^*} c_k(\delta) \le \delta^3 \big( k^* - \lfloor \varepsilon/\delta \rfloor \big)\,g'(\varepsilon - 2\delta)^2 \le (l - \varepsilon + \delta)\,g'(\varepsilon - 2\delta)^2\,\delta^2, $$

so that

$$ \Pi_\delta(k^*\delta) - \Pi_\delta(\varepsilon) \le (l - \varepsilon + \delta)\,g'(\varepsilon - 2\delta)^2\,\delta^2\,c(\delta)^{-1}. $$

Consequently, as δ → 0, Π_δ(k*δ) − Π_δ(ε) → 0.

It follows that if π_δ converges to a probability measure π then π is necessarily a linear combination of the delta measures at 0 and l. Furthermore,

$$ \Psi_\delta(l+\delta) - \Psi_\delta(k^*\delta) = c_{k^*+1}(\delta) + c_{k^*+2}(\delta) $$

where

$$ c_{k^*+1}(\delta) = \int_{k^*\delta}^{(k^*+1)\delta} \psi_\delta(v)\,dv = \int_{k^*\delta}^{l} \{ g(v-\delta) - g(v) \}^2\,dv + \int_{l}^{(k^*+1)\delta} g^2(v-\delta)\,dv $$

and

$$ c_{k^*+2}(\delta) = \int_{(k^*+1)\delta}^{l+\delta} g^2(v-\delta)\,dv. $$

BSS processes

So, combining, for $\pi_\delta \xrightarrow{w} \delta_0$ to hold we must require that
$$c(\delta)^{-1}\int_{k^*\delta}^{l} \{g(v-\delta) - g(v)\}^2\,dv \to 0$$
and
$$c(\delta)^{-1}\int_{l}^{l+\delta} g^2(v-\delta)\,dv \to 0.$$
But the first relation follows from the smoothness of $g$, so to guarantee $\pi_\delta \xrightarrow{w} \delta_0$ when $t \ge l$, we therefore only need to add

Condition B. $\displaystyle c(\delta)^{-1}\int_{l}^{l+\delta} g^2(v-\delta)\,dv \to 0$ as $\delta \downarrow 0$.

Remark 7.2. Condition B is equivalent to having
$$\frac{\int_{l-\delta}^{l} g^2(v)\,dv}{\int_0^{\delta} g^2(v)\,dv} \to 0,$$
as follows from the above discussion. In particular, it suffices to have $g(v) \to 0$ as $v \uparrow l$.

Remark 7.3. In case $c(\delta)^{-1}\int_l^{l+\delta} g^2(v-\delta)\,dv \to \lambda \in (0,1)$ we obtain $\pi_\delta \xrightarrow{w} (1-\lambda)\delta_0 + \lambda\delta_l$.

When $t < l$, for any $\varepsilon \in (2\delta, t)$ with $1 < \lfloor\varepsilon/\delta\rfloor < n$,
$$\Psi_\delta(t+\delta) - \Psi_\delta(\varepsilon) \le \sum_{k=\lfloor\varepsilon/\delta\rfloor+1}^{n} c_k(\delta) \le (t - \varepsilon + \delta)\, g'(\varepsilon - 2\delta)^2\,\delta^2,$$
which tends to $0$ at the order of $\delta^2$. To obtain $\pi_\delta \xrightarrow{w} \delta_0$ we therefore only need to add the assumption that
$$\Psi_\delta(l+\delta) - \Psi_\delta(t+\delta) = o\bigl(c(\delta)\bigr). \tag{7.8}$$
Now,
$$\Psi_\delta(l+\delta) - \Psi_\delta(t+\delta) = \sum_{k=n+1}^{\infty} c_k(\delta).$$
Thus, letting
$$\bar c_k(\delta) = \frac{c_k(\delta)}{c(\delta)}, \tag{7.9}$$
we have that (7.8) is implied by


Condition C. $\displaystyle \sum_{k=n+1}^{\infty} \bar c_k(\delta) \to 0$ as $\delta \downarrow 0$.

7.2 $\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \to 0$

We now establish conditions under which the conditional variance of the normalised realised quadratic variation tends to $0$ as $\delta \to 0$, i.e.
$$\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \to 0. \tag{7.10}$$
Suppose first that $a = 0$. Let $\Delta_j^n Y = Y_{j\delta} - Y_{(j-1)\delta}$. Then
$$\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} = \frac{\delta^2}{c(\delta)^2}\Bigl(\sum_{j=1}^{n}\operatorname{Var}\bigl\{(\Delta_j^n Y)^2 \mid \sigma\bigr\} + 2\sum_{j=1}^{n}\sum_{k=j+1}^{n}\operatorname{Cov}\bigl\{(\Delta_j^n Y)^2, (\Delta_k^n Y)^2 \mid \sigma\bigr\}\Bigr),$$
where, for $j < k$,
$$\operatorname{Cov}\{\Delta_j^n Y, \Delta_k^n Y \mid \sigma\} = E\bigl\{(Y_{j\delta} - Y_{(j-1)\delta})(Y_{k\delta} - Y_{(k-1)\delta}) \mid \sigma\bigr\} = \int_0^\infty \phi_\delta\bigl((k-j)\delta + u\bigr)\,\phi_\delta(u)\,\sigma^2_{j\delta-u}\,du.$$
Let $K(\sigma) = \sup_{-l \le s \le t}\sigma_s^2$. As $\sigma$ is assumed càdlàg, $K(\sigma) < \infty$ a.s. Hence, by the Cauchy–Schwarz inequality,
$$\bigl|\operatorname{Cov}\{Y_{j\delta} - Y_{(j-1)\delta},\, Y_{k\delta} - Y_{(k-1)\delta} \mid \sigma\}\bigr| \le K(\sigma)\Bigl(\int_0^\infty \psi_\delta(u)\,du\Bigr)^{1/2}\Bigl(\int_{(k-j)\delta}^\infty \psi_\delta(u)\,du\Bigr)^{1/2}.$$
Now, recall that for any pair $X$ and $Y$ of normal, mean zero random variables we have
$$\operatorname{Cov}\{X^2, Y^2\} = 2\operatorname{Cov}\{X, Y\}^2. \tag{7.11}$$
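Identity (7.11) is the bivariate mean-zero case of Isserlis' theorem, $E\{X^2Y^2\} = E\{X^2\}E\{Y^2\} + 2E\{XY\}^2$. A quick Monte Carlo sanity check, with an illustrative covariance matrix of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])  # illustrative covariance; Cov{X, Y} = 0.6
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

lhs = np.cov(X**2, Y**2)[0, 1]       # Cov{X^2, Y^2}
rhs = 2.0 * np.cov(X, Y)[0, 1] ** 2  # 2 Cov{X, Y}^2, exactly 0.72 in the limit

print(lhs, rhs)
```

Both estimates agree with the exact value $2 \cdot 0.6^2 = 0.72$ up to Monte Carlo error.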

Therefore
$$\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \le \frac{2K(\sigma)^2\delta^2}{c(\delta)^2}\Bigl(l\delta^{-1}c(\delta)^2 + 2c(\delta)\sum_{j=1}^{n-1}\sum_{i=j+1}^{n}\int_{(i-j)\delta}^{\infty}\psi_\delta(u)\,du\Bigr) = 2K(\sigma)^2\delta\Bigl(l + \frac{2\delta}{c(\delta)}\sum_{j=1}^{n-1}\sum_{i=j+1}^{n}\int_{(i-j)\delta}^{\infty}\psi_\delta(u)\,du\Bigr).$$


Here
$$\sum_{j=1}^{n-1}\sum_{i=j+1}^{n}\int_{(i-j)\delta}^{\infty}\psi_\delta(u)\,du = \sum_{\nu=1}^{n-1}\sum_{i=1}^{\nu}\sum_{k=i}^{\infty}c_{k+1}(\delta) = \sum_{\nu=1}^{n-1}\Bigl(\sum_{k=1}^{\nu}k\,c_{k+1}(\delta) + \nu\sum_{k=\nu}^{\infty}c_{k+2}(\delta)\Bigr)$$
$$= \sum_{k=1}^{n-1}\Bigl((n-k)\,k\,c_{k+1}(\delta) + \tfrac12\,(k+1)\,k\,c_{k+2}(\delta)\Bigr) + \frac{(n-1)n}{2}\sum_{k=n}^{\infty}c_{k+2}(\delta).$$

With the notation (7.9) we thus have
$$\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \le 2K(\sigma)^2\delta\Bigl(l + 2\delta\sum_{k=1}^{n-1}\bigl((n-k)\,k\,\bar c_{k+1}(\delta) + \tfrac12\,(k+1)\,k\,\bar c_{k+2}(\delta)\bigr) + \delta\,(n-1)n\sum_{k=n}^{\infty}\bar c_{k+2}(\delta)\Bigr).$$

Here
$$\delta^2\sum_{k=1}^{n-1}\Bigl((n-k)\,k\,\bar c_{k+1}(\delta) + \tfrac12\,(k+1)\,k\,\bar c_{k+2}(\delta)\Bigr) \le C\,\delta\sum_{k=1}^{n}k\,\bar c_k(\delta)$$
and
$$\delta^2\,\frac{(n-1)n}{2}\sum_{k=n}^{\infty}\bar c_{k+2}(\delta) \le C\sum_{k=n+1}^{\infty}\bar c_k(\delta).$$


Consequently, when $a = 0$, for (7.10) to be valid it suffices to have
$$\delta\sum_{k=1}^{n}k\,\bar c_k(\delta) \to 0 \quad\text{and}\quad \sum_{k=n+1}^{\infty}\bar c_k(\delta) \to 0. \tag{7.12}$$
Condition C will ensure the second limit result, and we now introduce

Condition D. $\displaystyle \delta\sum_{k=1}^{n}k\,\bar c_k(\delta) \to 0$ as $\delta \downarrow 0$.

Provided $a = 0$, for (7.10) to be valid it thus suffices that Conditions C and D hold.

Next we show that the convergence $\operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \to 0$ also holds if $a$ is not $0$, provided Condition A is fulfilled too. In case $a \ne 0$, $\operatorname{Var}\{[Y_\delta]_t \mid \sigma\}$ is a sum of two terms, one as above for $a = 0$ while the other is
$$4\,\frac{\delta^2}{c(\delta)^2}\sum_{k=1}^{n}\int_0^\infty \psi_\delta(v)\,\sigma^2_{k\delta-v}\,dv\,\Bigl(\int_0^\infty \chi_\delta(v)\,a_{k\delta-v}\,dv\Bigr)^2, \tag{7.13}$$
which is bounded above by $4HK$ where
$$H = \limsup_{k,\delta}\,\frac{\delta}{c(\delta)}\int_0^\infty \psi_\delta(v)\,\sigma^2_{k\delta-v}\,dv \quad\text{and}\quad K = \frac{\delta}{c(\delta)}\sum_{k=1}^{n}\Bigl(\int_0^\infty \chi_\delta(v)\,a_{k\delta-v}\,dv\Bigr)^2.$$
Here
$$\int_0^\infty \psi_\delta(v)\,\sigma^2_{k\delta-v}\,dv \le C\,c(\delta),$$
where the constant $C$ depends on $t$ and $\sigma$. Hence $H \to 0$. Furthermore,
$$\sum_{k=1}^{n}\Bigl(\int_0^\infty \chi_\delta(v)\,a_{k\delta-v}\,dv\Bigr)^2 \le C\sum_{k=1}^{n}\Bigl(\int_0^\infty |\chi_\delta(v)|\,dv\Bigr)^2 \le C\,t\,\delta^{-1}\Bigl(\int_0^\infty |\chi_\delta(v)|\,dv\Bigr)^2,$$
where $C$, again, depends on $t$ and $a$. Hence Condition A implies $K \to 0$.

7.3 Summing up

Suppose first that $t < l < \infty$, which is the most interesting case from the viewpoint of turbulence modelling. If
$$c(\delta)^{-1}\Bigl(\int_0^\infty |\chi_\delta(v)|\,dv\Bigr)^2 \to 0, \tag{7.14}$$
$$\sum_{k=n+1}^{\infty}\bar c_k(\delta) \to 0 \tag{7.15}$$
and
$$\delta\sum_{k=1}^{n-1}k\,\bar c_k(\delta) \to 0, \tag{7.16}$$
then
$$\pi_\delta \xrightarrow{w} \delta_0, \quad \operatorname{Var}\{[Y_\delta]_t \mid \sigma\} \to 0 \quad\text{and}\quad [Y_\delta] \xrightarrow{p} \sigma^{2+}. \tag{7.17}$$
If $l \le t$ then the additional assumption that
$$\frac{\int_{l-\delta}^{l} g^2(v)\,dv}{\int_0^{\delta} g^2(v)\,dv} \to 0 \tag{7.18}$$
is required. The latter is, in particular, fulfilled if $g(v) \to 0$ for $v \uparrow l$. In case (7.18) is violated but (7.14), (7.15) and (7.16) hold and $\pi_\delta \xrightarrow{w} \pi$ for some $\pi$, necessarily of the form $\pi = \lambda\delta_0 + (1-\lambda)\delta_l$ for some $\lambda \in (0,1)$, then
$$[Y_\delta]_t \xrightarrow{p} \lambda\,\sigma_t^{2+} + (1-\lambda)\bigl(\sigma_{t-l}^{2+} - \sigma_{-l}^{2+}\bigr). \tag{7.19}$$

7.4 Examples

Recall Conditions A–D:
$$c(\delta)^{-1}\Bigl(\int_0^\infty |\chi_\delta(v)|\,dv\Bigr)^2 \to 0, \tag{7.20}$$
$$c(\delta)^{-1}\int_l^{l+\delta} g^2(v-\delta)\,dv \to 0, \tag{7.21}$$
$$\sum_{k=n+1}^{\infty}\bar c_k(\delta) \to 0 \quad\text{and}\quad \delta\sum_{k=1}^{n}k\,\bar c_k(\delta) \to 0. \tag{7.22}$$
In this section we suppose that $q = g$. Then Condition A has the form
$$c(\delta)^{-1}c_1(\delta)^2 \to 0. \tag{7.23}$$

Example. Suppose that $t = l$ and $g(v) = e^{-\lambda v}\,1_{(0,l)}(v)$ (a non-semimartingale case). Then
$$e^{2\lambda v}\,\psi_\delta(v) = \begin{cases} 1 & \text{for } 0 \le v < \delta, \\ (e^{\lambda\delta}-1)^2 & \text{for } \delta \le v \le l, \\ e^{2\lambda\delta} & \text{for } l < v \le l+\delta. \end{cases}$$

Affine diffusion processes: theory and applications

Fix $p > 1$ small enough with $(0, p) \in D_R(T)$. Formula (4.12) combined with (4.11) then yields
$$\pi(t) = \frac{e^{-r(T-t)}}{2\pi}\int_{\mathbb R} e^{\phi(T-t,0,p+iy)+\psi_1(T-t,0,p+iy)X_1(t)+(p+iy)X_2(t)}\,\frac{K^{1-p-iy}}{(p+iy)(p+iy-1)}\,dy. \tag{5.9}$$

Alternatively, we may fix any $0 < p < 1$ and then, combining (4.13) with (4.11),
$$\pi(t) = S(t) + \frac{e^{-r(T-t)}}{2\pi}\int_{\mathbb R} e^{\phi(T-t,0,p+iy)+\psi_1(T-t,0,p+iy)X_1(t)+(p+iy)X_2(t)}\,\frac{K^{1-p-iy}}{(p+iy)(p+iy-1)}\,dy. \tag{5.10}$$

Since we have explicit expressions (5.8) for $\phi(T-t, 0, p+iy)$ and $\psi_1(T-t, 0, p+iy)$, we only need to compute the integral with respect to $y$ in (5.9) or (5.10) numerically. We have carried out numerical experiments for European option prices using MATLAB. The fastest results were achieved for values $p \approx 0.5$ using (5.10), whereas, at a constant error level, the runtime explodes as $p \to 0, 1$, which is due to the singularities of the integrand. Also, an evaluation of the residuals
$$\frac{\pi(t=0, p=1/2) - \pi(t=0, p=1/2+\varepsilon)}{\pi(t=0, p=1/2)}$$
for $\varepsilon \in [0, 1/2) \cup (1/2, 1]$ suggests that (5.10) is numerically more stable than (5.9).

Next, we present implied volatilities obtained by (5.10), setting $p = 1/2$. As initial data for $X$ and model parameters, we chose
$$X_1(0) = 0.02, \quad X_2(0) = 0.00, \quad \sigma = 0.1, \quad \kappa = -2.0, \quad k = 0.02, \quad r = 0.01, \quad \rho = 0.5.$$

Table 5.2 shows implied volatilities from call option prices at $t = 0$ for various strikes $K$ and maturities $T$, computed with (5.10) for $p = 0.5$. These values are in good accordance with MC simulations (mesh size $T/500$, number of sample paths $= 10000$). The corresponding implied volatility surface is shown in Figure 5.1.
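The representation (5.9) is a Mellin-type inversion of the call payoff, $(s-K)^+ = \frac{1}{2\pi i}\int s^z K^{1-z}/(z(z-1))\,dz$ over $\operatorname{Re} z = p > 1$, so the price only requires the moment $E[S(T)^{p+iy}]$ — supplied by (5.8) in the Heston case. As a self-contained check of the representation itself (not of (5.8), which is not reproduced here), one can insert the Black–Scholes moment $E[S_T^z] = S_0^z \exp(zrT + \tfrac12\sigma^2 z(z-1)T)$ and compare against the closed-form price; all numbers below are our own illustrative choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

S0, K, r, sigma, T = 1.0, 1.05, 0.01, 0.2, 1.0

def price_mellin(p):
    """e^{-rT}/(2 pi) * Int_R E[S_T^{p+iy}] K^{1-p-iy} / ((p+iy)(p+iy-1)) dy, for p > 1."""
    def integrand(y):
        z = p + 1j * y
        moment = S0**z * np.exp((r * z + 0.5 * sigma**2 * z * (z - 1.0)) * T)
        return (moment * K**(1.0 - z) / (z * (z - 1.0))).real
    val, _ = quad(integrand, -200.0, 200.0, limit=400)
    return np.exp(-r * T) * val / (2.0 * np.pi)

def bs_call():
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(price_mellin(1.5), bs_call())
```

The two prices agree to high accuracy; the Gaussian decay of the moment in $y$ makes the truncated integral converge rapidly.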

Affine diffusion processes

T \ K    0.8000  0.9000  1.0000  1.1000  1.2000
0.5000   0.1611  0.1682  0.1785  0.1892  0.1992
1.0000   0.1513  0.1579  0.1664  0.1751  0.1835
1.5000   0.1464  0.1524  0.1594  0.1665  0.1734
2.0000   0.1438  0.1492  0.1551  0.1611  0.1668
2.5000   0.1424  0.1473  0.1524  0.1574  0.1623
3.0000   0.1417  0.1460  0.1505  0.1549  0.1591

Table 5.2. Implied volatilities for the Heston model


Figure 5.1. Implied volatility surface for the Heston model (implied volatility plotted against strike $K$ and maturity $T$ in years)

Remark 5.5. We note that the Heston model is often written in the equivalent form
$$dv = \bar\kappa(\eta - v)\,dt + \sigma\sqrt{v}\,dW_1, \qquad dS = rS\,dt + S\sqrt{v}\,dW.$$
To see the relation of the parameters of this form to the one used in this section, we simply set $v = 2X_1$, and then get
$$dX_1 = (\bar\kappa\eta - \bar\kappa v)\,dt + \sigma\sqrt{2X_1}\,dW_1, \quad X_1(0) = X_1^0, \qquad \frac{dS}{S} = r\,dt + \sqrt{2X_1}\,dW, \quad S(0) = e^{X_2(0)},$$
from which we read off
$$k = \bar\kappa\eta, \qquad \kappa = -\bar\kappa, \qquad X_1^0 = v_0/2,$$
and all other parameters coincide.

D. Filipović and E. Mayerhofer

6 Affine transformations and canonical representation

As above, we let $X$ be affine on the canonical state space $\mathbb R_+^m \times \mathbb R^n$ with admissible parameters $a, \alpha_i, b, \beta_i$. Hence, in view of (2.1), for any $x \in \mathbb R_+^m \times \mathbb R^n$ the process $X = X^x$ satisfies
$$dX = (b + BX)\,dt + \rho(X)\,dW, \quad X(0) = x, \tag{6.1}$$
and $\rho(x)\rho(x)^\top = a + \sum_{i\in I} x_i\alpha_i$. It can easily be checked that for every invertible $d \times d$-matrix $\Lambda$, the linear transform $Y = \Lambda X$ satisfies
$$dY = \bigl(\Lambda b + \Lambda B\Lambda^{-1}Y\bigr)\,dt + \Lambda\rho\bigl(\Lambda^{-1}Y\bigr)\,dW, \quad Y(0) = \Lambda x. \tag{6.2}$$
Hence, $Y$ has again an affine drift and diffusion matrix,
$$\Lambda b + \Lambda B\Lambda^{-1}y \quad\text{and}\quad \Lambda\alpha(\Lambda^{-1}y)\Lambda^\top, \tag{6.3}$$
respectively. On the other hand, the affine short rate model (4.1) can be expressed in terms of $Y(t)$ as
$$r(t) = c + \gamma^\top\Lambda^{-1}Y(t). \tag{6.4}$$
This shows that $Y$ and (6.4) specify an affine short rate model producing the same short rates, and thus bond prices, as $X$ and (4.1). That is, an invertible linear transformation of the state process changes the particular form of the stochastic differential equation (6.1), but it leaves observable quantities, such as short rates and bond prices, invariant. This motivates the question whether there exists a classification method ensuring that affine short rate models with the same observable implications have a unique canonical representation. This topic has been addressed in [10, 9, 24, 8]. We now elaborate on this issue and show that the diffusion matrix $\alpha(x)$ can always be brought into block-diagonal form by a regular linear transform $\Lambda$ with $\Lambda(\mathbb R_+^m\times\mathbb R^n) = \mathbb R_+^m\times\mathbb R^n$. We denote by $\operatorname{diag}(z_1,\dots,z_m)$ the diagonal matrix with diagonal elements $z_1,\dots,z_m$, and we write $I_m$ for the $m\times m$ identity matrix.

Lemma 6.1. There exists some invertible $d\times d$-matrix $\Lambda$ with $\Lambda(\mathbb R_+^m\times\mathbb R^n) = \mathbb R_+^m\times\mathbb R^n$ such that $\Lambda\alpha(\Lambda^{-1}y)\Lambda^\top$ is block-diagonal of the form
$$\Lambda\alpha(\Lambda^{-1}y)\Lambda^\top = \begin{pmatrix} \operatorname{diag}(y_1,\dots,y_q,0,\dots,0) & 0 \\ 0 & p + \sum_{i\in I} y_i\pi_i \end{pmatrix}$$
for some integer $0 \le q \le m$ and symmetric positive semi-definite $n\times n$ matrices $p, \pi_1, \dots, \pi_m$. Moreover, $\Lambda b$ and $\Lambda B\Lambda^{-1}$ meet the respective admissibility conditions (3.1) in lieu of $b$ and $B$.


Proof. From (2.3) we know that $\Lambda\alpha(x)\Lambda^\top$ is block-diagonal for all $x = \Lambda^{-1}y$ if and only if $\Lambda a\Lambda^\top$ and $\Lambda\alpha_i\Lambda^\top$ are block-diagonal for all $i \in I$. By permutation and scaling of the first $m$ coordinate axes (this is a linear bijection from $\mathbb R_+^m\times\mathbb R^n$ onto itself, which preserves the admissibility of the transformed $b$ and $B$), we may assume that there exists some integer $0 \le q \le m$ such that $\alpha_{1,11} = \dots = \alpha_{q,qq} = 1$ and $\alpha_{i,ii} = 0$ for $q < i \le m$. Hence $a$ and $\alpha_i$ for $q < i \le m$ are already block-diagonal of the special form
$$a = \begin{pmatrix} 0 & 0 \\ 0 & a_{JJ} \end{pmatrix}, \qquad \alpha_i = \begin{pmatrix} 0 & 0 \\ 0 & \alpha_{i,JJ} \end{pmatrix}.$$
For $1 \le i \le q$, we may have non-zero off-diagonal elements in the $i$th row $\alpha_{i,iJ}$. We thus define the $n\times m$-matrix $D = (\delta_1,\dots,\delta_m)$ with $i$th column $\delta_i = -\alpha_{i,iJ}$ and set
$$\Lambda = \begin{pmatrix} I_m & 0 \\ D & I_n \end{pmatrix}.$$
One checks by inspection that $\Lambda$ is invertible and maps $\mathbb R_+^m\times\mathbb R^n$ onto $\mathbb R_+^m\times\mathbb R^n$. Moreover,
$$D\alpha_{i,II} = -\alpha_{i,JI}, \quad i \in I.$$
From here we easily verify that
$$\Lambda\alpha_i = \begin{pmatrix} \alpha_{i,II} & \alpha_{i,IJ} \\ 0 & D\alpha_{i,IJ} + \alpha_{i,JJ} \end{pmatrix}$$
and thus
$$\Lambda\alpha_i\Lambda^\top = \begin{pmatrix} \alpha_{i,II} & 0 \\ 0 & D\alpha_{i,IJ} + \alpha_{i,JJ} \end{pmatrix}.$$
Since $\Lambda a\Lambda^\top = a$, the first assertion is proved. The admissibility conditions for $\Lambda b$ and $\Lambda B\Lambda^{-1}$ can easily be checked as well. $\square$

In view of (6.3), (6.4) and Lemma 6.1 we thus obtain the following result.

Theorem 6.2 (Canonical Representation). Any affine short rate model (4.1), after some modification of $\gamma$ if necessary, admits an $\mathbb R_+^m\times\mathbb R^n$-valued affine state process $X$ with block-diagonal diffusion matrix of the form
$$\alpha(x) = \begin{pmatrix} \operatorname{diag}(x_1,\dots,x_q,0,\dots,0) & 0 \\ 0 & a + \sum_{i\in I} x_i\alpha_{i,JJ} \end{pmatrix} \tag{6.5}$$
for some integer $0 \le q \le m$.
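The transformation constructed in the proof of Lemma 6.1 is easy to verify numerically. A minimal sketch with $m = 1$, $n = 2$ and an illustrative $\alpha_1$ of our own choosing ($\alpha_{1,11} = 1$, off-diagonal row $v = \alpha_{1,1J}$): the transform $\Lambda = \begin{pmatrix} I_m & 0 \\ D & I_n \end{pmatrix}$ with column $\delta_1 = -v$ should produce the block-diagonal form of (6.5).

```python
import numpy as np

# m = 1, n = 2; alpha_1 with alpha_{1,11} = 1 and off-diagonal row v = alpha_{1,1J}.
v = np.array([0.3, -0.2])
alpha_JJ = np.array([[0.5, 0.1],
                     [0.1, 0.4]])
alpha1 = np.block([[np.array([[1.0]]), v[None, :]],
                   [v[:, None],        alpha_JJ]])

# Lambda = [[I_m, 0], [D, I_n]] with the single column of D equal to -alpha_{1,1J}.
D = -v[:, None]
Lam = np.block([[np.eye(1), np.zeros((1, 2))],
                [D,         np.eye(2)]])

out = Lam @ alpha1 @ Lam.T
print(np.round(out, 10))
# Off-diagonal blocks vanish; the JJ block equals alpha_JJ - v v^T.
```

This matches the computation $\Lambda\alpha_1\Lambda^\top = \operatorname{diag}(1)\oplus(D\alpha_{1,IJ}+\alpha_{1,JJ})$ in the proof.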

7 Existence and uniqueness of affine processes

All we said about the affine process $X$ so far was under the premise that there exists a unique solution $X = X^x$ of the stochastic differential equation (2.1) on some appropriate state space $\mathcal X \subset \mathbb R^d$. However, if the diffusion matrix $\rho(x)\rho(x)^\top$ is affine then $\rho(x)$ cannot be Lipschitz continuous in $x$ in general. This raises the question whether (2.1) admits a solution at all. In this section, we show how $X$ can always be realized as the unique solution of the stochastic differential equation (2.1), which is (6.1), in the canonical affine framework $\mathcal X = \mathbb R_+^m\times\mathbb R^n$ and for particular choices of $\rho(x)$.

We recall from Theorem 2.2 that the affine property of $X$ imposes explicit conditions on $\rho(x)\rho(x)^\top$, but not on $\rho(x)$ as such. Indeed, for any orthogonal $d\times d$-matrix $D$, the function $\rho(x)D$ yields the same diffusion matrix, $\rho(x)DD^\top\rho(x)^\top = \rho(x)\rho(x)^\top$, as $\rho(x)$. On the other hand, from Theorem 3.2 we know that any admissible parameters $a, \alpha_i, b, \beta_i$ in (2.3) uniquely determine the functions $(\phi(\cdot,u), \psi(\cdot,u)): \mathbb R_+ \to \mathbb C_- \times \mathbb C_-^m\times i\mathbb R^n$ as solution of the Riccati equations (3.2), for all $u \in \mathbb C_-^m\times i\mathbb R^n$. These in turn uniquely determine the law of the process $X$. Indeed, for any $0 \le t_1 < t_2$ and $u_1, u_2 \in \mathbb C_-^m\times i\mathbb R^n$, we infer by iteration of (2.2)
$$E\bigl[e^{u_1^\top X(t_1)+u_2^\top X(t_2)}\bigr] = E\bigl[e^{u_1^\top X(t_1)}\,E\bigl[e^{u_2^\top X(t_2)} \mid \mathcal F_{t_1}\bigr]\bigr] = E\bigl[e^{u_1^\top X(t_1)}\,e^{\phi(t_2-t_1,u_2)+\psi(t_2-t_1,u_2)^\top X(t_1)}\bigr]$$
$$= e^{\phi(t_2-t_1,u_2)+\phi(t_1,u_1+\psi(t_2-t_1,u_2))+\psi(t_1,u_1+\psi(t_2-t_1,u_2))^\top x}.$$

Hence the joint distribution of $(X(t_1), X(t_2))$ is uniquely determined by the functions $\phi$ and $\psi$. By further iteration of this argument, we conclude that every finite dimensional distribution, and thus the law, of $X$ is uniquely determined by the parameters $a, \alpha_i, b, \beta_i$.

We conclude that the law of an affine process $X$, while uniquely determined by its characteristics (2.3), can be realized by infinitely many variants of the stochastic differential equation (6.1), by replacing $\rho(x)$ by $\rho(x)D$ for any orthogonal $d\times d$-matrix $D$. We now propose a canonical choice of $\rho(x)$ as follows:

• In view of (6.2) and Lemma 6.1, every affine process $X$ on $\mathbb R_+^m\times\mathbb R^n$ can be written as $X = \Lambda^{-1}Y$ for some invertible $d\times d$-matrix $\Lambda$ and some affine process $Y$ on $\mathbb R_+^m\times\mathbb R^n$ with block-diagonal diffusion matrix. It is thus enough to consider such $\rho(x)$ where $\rho(x)\rho(x)^\top$ is of the form (6.5). Obviously, $\rho(x) \equiv \rho(x_I)$ is then a function of $x_I$ only.

• Set $\rho_{IJ}(x) \equiv 0$, $\rho_{JI}(x) \equiv 0$, and
$$\rho_{II}(x_I) = \operatorname{diag}(\sqrt{x_1},\dots,\sqrt{x_q},0,\dots,0).$$

• Choose for $\rho_{JJ}(x_I)$ any measurable $n\times n$-matrix-valued function satisfying
$$\rho_{JJ}(x_I)\rho_{JJ}(x_I)^\top = a + \sum_{i\in I} x_i\alpha_{i,JJ}. \tag{7.1}$$
In practice, one would determine $\rho_{JJ}(x_I)$ via Cholesky factorisation, see e.g. [31, Theorem 2.2.5]. If $a + \sum_{i\in I} x_i\alpha_{i,JJ}$ is strictly positive definite, then $\rho_{JJ}(x_I)$ turns out to be the unique lower triangular matrix with strictly positive diagonal elements satisfying (7.1). If $a + \sum_{i\in I} x_i\alpha_{i,JJ}$ is merely positive semi-definite, then the algorithm becomes more involved. In any case, $\rho_{JJ}(x_I)$ will depend measurably on $x_I$.

• The stochastic differential equation (6.1) now reads
$$dX_I = (b_I + B_{II}X_I)\,dt + \rho_{II}(X_I)\,dW_I,$$
$$dX_J = (b_J + B_{JI}X_I + B_{JJ}X_J)\,dt + \rho_{JJ}(X_I)\,dW_J, \tag{7.2}$$
$$X(0) = x.$$
Lemma 7.2 below asserts the existence and uniqueness of an $\mathbb R_+^m\times\mathbb R^n$-valued solution $X = X^x$, for any $x \in \mathbb R_+^m\times\mathbb R^n$.
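In the strictly positive definite case the canonical $\rho_{JJ}(x_I)$ from (7.1) is just a Cholesky factor, which standard libraries compute directly. A small sketch with illustrative matrices of our own choosing:

```python
import numpy as np

# Illustrative data: n = 2, one positive component x_1 of x_I.
a_JJ = np.array([[0.4, 0.1],
                 [0.1, 0.3]])
alpha_1JJ = np.array([[0.2, 0.05],
                      [0.05, 0.1]])
x1 = 1.5

M = a_JJ + x1 * alpha_1JJ          # a + sum_i x_i alpha_{i,JJ}
rho_JJ = np.linalg.cholesky(M)     # unique lower triangular factor, positive diagonal

print(rho_JJ)
print(np.allclose(rho_JJ @ rho_JJ.T, M))
```

`numpy.linalg.cholesky` returns exactly the lower triangular matrix with strictly positive diagonal described in the text; the factorisation fails (raises) when the input is not positive definite, which mirrors the caveat about the merely semi-definite case.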

We thus have shown:

Theorem 7.1. Let $a, \alpha_i, b, \beta_i$ be admissible parameters. Then there exists a measurable function $\rho: \mathbb R_+^m\times\mathbb R^n \to \mathbb R^{d\times d}$ with $\rho(x)\rho(x)^\top = a + \sum_{i\in I} x_i\alpha_i$, and such that, for any $x \in \mathbb R_+^m\times\mathbb R^n$, there exists a unique $\mathbb R_+^m\times\mathbb R^n$-valued solution $X = X^x$ of (6.1). Moreover, the law of $X$ is uniquely determined by $a, \alpha_i, b, \beta_i$, and does not depend on the particular choice of $\rho$.

The proof of the following lemma uses the concept of a weak solution. The interested reader will find detailed background in e.g. [25, Section 5.3].

Lemma 7.2. For any $x \in \mathbb R_+^m\times\mathbb R^n$, there exists a unique $\mathbb R_+^m\times\mathbb R^n$-valued solution $X = X^x$ of (7.2).

Proof. First, we extend $\rho$ continuously to $\mathbb R^d$ by setting $\rho(x) = \rho(x_1^+,\dots,x_m^+)$, where we denote $x_i^+ = \max(0, x_i)$. Now observe that $X_I$ solves the autonomous equation
$$dX_I = (b_I + B_{II}X_I)\,dt + \rho_{II}(X_I)\,dW_I, \quad X_I(0) = x_I. \tag{7.3}$$
Obviously, there exists a finite constant $K$ such that the linear growth condition
$$\|b_I + B_{II}x_I\|^2 + \|\rho(x_I)\|^2 \le K\bigl(1 + \|x_I\|^2\bigr)$$
is satisfied for all $x_I \in \mathbb R^m$. By [22, Theorems 2.3 and 2.4] there exists a weak solution¹ of (7.3). On the other hand, (7.3) is exactly of the form assumed in [35, Theorem 1], which implies that pathwise uniqueness² holds for (7.3). The Yamada–Watanabe

¹A weak solution consists of a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t), P)$ carrying a continuous adapted process $X_I$ and a Brownian motion $W_I$ such that (7.3) is satisfied. The crux of a weak solution is that $X_I$ is not necessarily adapted to the filtration generated by the Brownian motion $W_I$. See [35, Definition 1] or [25, Definition 5.3.1].

²Pathwise uniqueness holds if, for any two weak solutions $(X_I, W_I)$ and $(X_I', W_I)$ of (7.3) defined on the same probability space $(\Omega, \mathcal F, P)$ with common Brownian motion $W_I$ and with common initial value $X_I(0) = X_I'(0)$, the two processes are indistinguishable: $P[X_I(t) = X_I'(t) \text{ for all } t \ge 0] = 1$. See [35, Definition 2] or [25, Section 5.3].


Theorem, see [35, Corollary 3] or [25, Corollary 5.3.23], thus implies that there exists a unique solution $X_I = X_I^{x_I}$ of (7.3), for all $x_I \in \mathbb R^m$. Given $X_I^{x_I}$, it is then easily seen that
$$X_J(t) = e^{B_{JJ}t}\Bigl(x_J + \int_0^t e^{-B_{JJ}s}\bigl(b_J + B_{JI}X_I(s)\bigr)\,ds + \int_0^t e^{-B_{JJ}s}\rho_{JJ}(X_I(s))\,dW_J(s)\Bigr)$$
is the unique solution to the second equation in (7.2). Admissibility of the parameters $b$ and $\beta_i$ and the stochastic invariance Lemma B.1 eventually imply that $X_I = X_I^{x_I}$ is $\mathbb R_+^m$-valued for all $x_I \in \mathbb R_+^m$. Whence the lemma is proved. $\square$
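The positive-part extension $\rho(x) = \rho(x_1^+,\dots,x_m^+)$ also yields a simple explicit simulation scheme for (7.3). A one-dimensional sketch ($m = 1$, so $dX = (b + \beta X)\,dt + \sigma\sqrt{X^+}\,dW$; all numerical parameters are our own illustrative choice): since the drift is linear, the first moment solves the linear ODE $m' = b + \beta m$, which provides an exact benchmark for the Monte Carlo mean.

```python
import numpy as np

rng = np.random.default_rng(2)
b, beta, sigma, x0 = 0.04, -0.5, 0.3, 0.04
T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps

# Euler scheme using the positive-part extension of the diffusion coefficient.
X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X = X + (b + beta * X) * dt + sigma * np.sqrt(np.maximum(X, 0.0)) * dW

mc_mean = X.mean()
# Exact first moment: m(T) = e^{beta T} x0 + b (e^{beta T} - 1)/beta.
exact_mean = np.exp(beta * T) * x0 + b * (np.exp(beta * T) - 1.0) / beta
print(mc_mean, exact_mean)
```

The truncation only enters the diffusion coefficient, so the Euler mean tracks the exact linear ODE up to discretization and Monte Carlo error.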

A On the regularity of characteristic functions

This auxiliary section provides some analytic regularity results for characteristic functions, which are of independent interest. These results enter the main text only via the proof of Theorem 3.3. This section may thus be skipped at first reading.

Let $\nu$ be a bounded measure on $\mathbb R^d$, and denote by
$$G(z) = \int_{\mathbb R^d} e^{z^\top x}\,\nu(dx)$$
its characteristic function¹ for $z \in i\mathbb R^d$. Note that $G(z)$ is actually well defined for $z \in S(V)$ where
$$V = \Bigl\{ y \in \mathbb R^d \ \Big|\ \int_{\mathbb R^d} e^{y^\top x}\,\nu(dx) < \infty \Bigr\}.$$
We first investigate the interplay between the (marginal) moments of $\nu$ and the corresponding (partial) regularity of $G$.

Lemma A.1. Denote $g(y) = G(iy)$ for $y \in \mathbb R^d$, and let $k \in \mathbb N$ and $1 \le i \le d$. If $\partial_{y_i}^{2k}g(0)$ exists then
$$\int_{\mathbb R^d} |x_i|^{2k}\,\nu(dx) < \infty.$$
On the other hand, if $\int_{\mathbb R^d}\|x\|^k\,\nu(dx) < \infty$ then $g \in C^k$ and
$$\partial_{y_{i_1}}\cdots\partial_{y_{i_l}} g(y) = i^l\int_{\mathbb R^d} x_{i_1}\cdots x_{i_l}\,e^{iy^\top x}\,\nu(dx)$$
for all $y \in \mathbb R^d$, $1 \le i_1,\dots,i_l \le d$ and $1 \le l \le k$.

¹This is a slight abuse of terminology, since the characteristic function $g(y) = G(iy)$ of $\nu$ is usually defined on real arguments $y \in \mathbb R^d$. However, it facilitates the subsequent notation.


Proof. As usual, let $e_i$ denote the $i$th standard basis vector in $\mathbb R^d$. Observe that $s \mapsto g(se_i)$ is the characteristic function of the image measure of $\nu$ on $\mathbb R$ under the mapping $x \mapsto x_i$. Since $\partial_s^{2k}g(se_i)|_{s=0} = \partial_{y_i}^{2k}g(0)$, the assertion follows from the one-dimensional case, see [30, Theorem 2.3.1]. The second part of the lemma follows by differentiating under the integral sign, which is allowed by dominated convergence. $\square$

Lemma A.2. The set $V$ is convex. Moreover, if $U \subset V$ is an open set in $\mathbb R^d$, then $G$ is analytic on the open strip $S(U)$ in $\mathbb C^d$.

Proof. Since $G: \mathbb R^d \to [0,\infty]$ is a convex function, its domain $V = \{y \in \mathbb R^d \mid G(y) < \infty\}$ is convex, and so is every level set $V_l = \{y \in \mathbb R^d \mid G(y) \le l\}$ for $l \ge 0$. Now let $U \subset V$ be an open set in $\mathbb R^d$. Since any convex function on $\mathbb R^d$ is continuous on the open interior of its domain, see [32, Theorem 10.1], we infer that $G$ is continuous on $U$. We may thus assume that $U_l = \{y \in \mathbb R^d \mid G(y) < l\} \cap U \subset V_l$ is open in $\mathbb R^d$ and non-empty for $l > 0$ large enough. Let $z \in S(U_l)$ and $(z_n)$ be a sequence in $S(U_l)$ with $z_n \to z$. For $n$ large enough, there exists some $p > 1$ such that $pz_n \in S(U_l)$. This implies $p\operatorname{Re}z_n \in V_l$ and hence
$$\int_{\mathbb R^d}\bigl|e^{z_n^\top x}\bigr|^p\,\nu(dx) \le l.$$
Hence the class of functions $\{e^{z_n^\top x} \mid n \in \mathbb N\}$ is uniformly integrable with respect to $\nu$, see [34, 13.3]. Since $e^{z_n^\top x} \to e^{z^\top x}$ for all $x$, we conclude by Lebesgue's convergence theorem that
$$|G(z_n) - G(z)| \le \int_{\mathbb R^d}\bigl|e^{z_n^\top x} - e^{z^\top x}\bigr|\,\nu(dx) \to 0.$$
Hence $G$ is continuous on $S(U_l)$. It thus follows from the Cauchy formula, see [11, Section IX.9], that $G$ is analytic on $S(U_l)$ if and only if, for every $z \in S(U_l)$ and $1 \le i \le d$, the function $\zeta \mapsto G(z+\zeta e_i)$ is analytic on $\{\zeta \in \mathbb C \mid z+\zeta e_i \in S(U_l)\}$. Here, as usual, we denote by $e_i$ the $i$th standard basis vector in $\mathbb R^d$.

We thus let $z \in S(U_l)$ and $1 \le i \le d$. Then there exists some $\varepsilon_- < 0 < \varepsilon_+$ such that $z + \zeta e_i \in S(U_l)$ for all $\zeta \in S([\varepsilon_-, \varepsilon_+])$. In particular, $|e^{(z+\varepsilon_- e_i)^\top x}|\,\nu(dx)$ and $|e^{(z+\varepsilon_+ e_i)^\top x}|\,\nu(dx)$ are bounded measures on $\mathbb R^d$. By dominated convergence, it follows that the two summands
$$G(z+\zeta e_i) = \int_{\{x_i < 0\}} e^{(\zeta-\varepsilon_-)x_i}\,e^{(z+\varepsilon_- e_i)^\top x}\,\nu(dx) + \int_{\{x_i \ge 0\}} e^{(\zeta-\varepsilon_+)x_i}\,e^{(z+\varepsilon_+ e_i)^\top x}\,\nu(dx)$$
are analytic in $\zeta$ on $\{\zeta \in \mathbb C \mid \varepsilon_- < \operatorname{Re}\zeta < \varepsilon_+\}$. Since $z \in S(U_l)$ was arbitrary and $S(U) = \bigcup_{l>0} S(U_l)$, the lemma follows. $\square$

In general, $V$ does not have an open interior in $\mathbb R^d$. The next lemma provides sufficient conditions for the existence of an open set $U \subset V$ in $\mathbb R^d$.


Lemma A.3. Let $U'$ be an open neighbourhood of $0$ in $\mathbb C^d$ and $h$ an analytic function on $U'$. Suppose that $U = U' \cap \mathbb R^d$ is star-shaped around $0$ and $G(z) = h(z)$ for all $z \in U' \cap i\mathbb R^d$. Then $U \subset V$ and $G = h$ on $U' \cap S(U)$.

Proof. We first suppose that $U' = P_\rho$ for the open polydisc
$$P_\rho = \bigl\{ z \in \mathbb C^d \mid |z_i| < \rho_i,\ 1 \le i \le d \bigr\},$$
for some $\rho = (\rho_1,\dots,\rho_d) \in \mathbb R_{++}^d$. Note the symmetry $iP_\rho = P_\rho$. As in Lemma A.1, we denote $g(y) = G(iy)$ for $y \in \mathbb R^d$. By assumption, $g(y) = h(iy)$ for all $y \in P_\rho \cap \mathbb R^d$. Hence $g$ is analytic on $P_\rho \cap \mathbb R^d$, and the Cauchy formula, [11, Section IX.9], yields
$$g(y) = \sum_{i_1,\dots,i_d \in \mathbb N_0} c_{i_1,\dots,i_d}\,y_1^{i_1}\cdots y_d^{i_d} \quad\text{for } y \in P_\rho \cap \mathbb R^d,$$
where $\sum_{i_1,\dots,i_d\in\mathbb N_0} c_{i_1,\dots,i_d}\,z_1^{i_1}\cdots z_d^{i_d} = h(iz)$ for all $z \in P_\rho$. This power series is absolutely convergent on $P_\rho$, that is,
$$\sum_{i_1,\dots,i_d\in\mathbb N_0} \bigl|c_{i_1,\dots,i_d}\bigr|\,|z_1|^{i_1}\cdots|z_d|^{i_d} < \infty \quad\text{for all } z \in P_\rho.$$
From the first part of Lemma A.1, we infer that $\nu$ possesses all moments, that is, $\int_{\mathbb R^d}\|x\|^k\,\nu(dx) < \infty$ for all $k \in \mathbb N$. From the second part of Lemma A.1 thus
$$c_{i_1,\dots,i_d} = \frac{i^{i_1+\dots+i_d}}{i_1!\cdots i_d!}\int_{\mathbb R^d} x_1^{i_1}\cdots x_d^{i_d}\,\nu(dx).$$
From the inequality $|x_i|^{2k-1} \le (x_i^{2k-2}+x_i^{2k})/2$, for $k \in \mathbb N$, and the above properties, we infer that for all $z \in P_\rho$,
$$\int_{\mathbb R^d} e^{\sum_{i=1}^d |z_i||x_i|}\,\nu(dx) = \sum_{i_1,\dots,i_d\in\mathbb N_0}\frac{|z_1|^{i_1}\cdots|z_d|^{i_d}}{i_1!\cdots i_d!}\int_{\mathbb R^d} |x_1|^{i_1}\cdots|x_d|^{i_d}\,\nu(dx) < \infty.$$
Hence $P_\rho \cap \mathbb R^d \subset V$, and Lemma A.2 implies that $G$ is analytic on $S(P_\rho \cap \mathbb R^d)$. Since the power series for $G$ and $h$ coincide on $P_\rho \cap i\mathbb R^d$, we conclude that $G = h$ on $P_\rho$, and the lemma is proved for $U' = P_\rho$.

Now let $U'$ be an open neighbourhood of $0$ in $\mathbb C^d$. Then there exists some open polydisc $P_\rho \subset U'$ with $\rho \in \mathbb R_{++}^d$. By the preceding case, we have $P_\rho \cap \mathbb R^d \subset V$ and $G = h$ on $P_\rho$. In view of Lemma A.2 it thus remains to show that $U = U' \cap \mathbb R^d \subset V$. To this end, let $a \in U$. Since $U$ is star-shaped around $0$ in $\mathbb R^d$, there exists some $s_1 > 1$ such that $sa \in U$ for all $s \in [0, s_1]$ and $h(sa)$ is analytic in $s \in (0, s_1)$. On the other hand, there exists some $0 < s_0 < s_1$ such that $sa \in P_\rho \cap \mathbb R^d$ for all $s \in [0, s_0]$, and $G(sa) = h(sa)$ for $s \in (0, s_0)$. This implies
$$\int_{\{a^\top x \ge 0\}} e^{sa^\top x}\,\nu(dx) = h(sa) - \int_{\{a^\top x < 0\}} e^{sa^\top x}\,\nu(dx).$$

...there exists $C > 0$ such that for all $t \ge 0$,
$$E\bigl[u^\top X(t\wedge\tau_3)\,Z(t\wedge\tau_3)\bigr] = E\Bigl[\int_0^{t\wedge\tau_3} Z(s)\bigl(u^\top b(X(s)) - C\,u^\top a(X(s))\,u\bigr)\,ds\Bigr].$$


Now assume that $u^\top a(x)\,u > 0$. By continuity of $a$ and $X(t)$, there exist some $\varepsilon > 0$ and a stopping time $\tau_4 > 0$ such that $u^\top a(X(t))\,u \ge \varepsilon$ for all $t \le \tau_4$. For $C > K/\varepsilon$, this implies
$$E\bigl[u^\top X(\tau_4\wedge\tau_3\wedge\tau_1)\,Z(\tau_4\wedge\tau_3\wedge\tau_1)\bigr] < 0.$$
This contradicts $X(t) \in H$ for all $t \ge 0$. Hence (B.1) holds, and part 1 is proved.

As for part 2, suppose (B.1) and (B.2) hold for all $x \in \mathbb R^d \setminus H^0$, and let $X$ be a solution of (2.1) with $X(0) \in H$. For $\delta, \varepsilon > 0$ define the stopping time
$$\tau_{\delta,\varepsilon} = \inf\bigl\{ t \mid u^\top X(t) \le -\varepsilon \text{ and } u^\top X(s) < 0 \text{ for all } s \in [t-\delta, t] \bigr\}.$$
Then on $\{\tau_{\delta,\varepsilon} < \infty\}$ we have $u^\top\rho(X(s)) = 0$ for $\tau_{\delta,\varepsilon} - \delta \le s \le \tau_{\delta,\varepsilon}$ and thus
$$0 > u^\top X(\tau_{\delta,\varepsilon}) - u^\top X(\tau_{\delta,\varepsilon} - \delta) = \int_{\tau_{\delta,\varepsilon}-\delta}^{\tau_{\delta,\varepsilon}} u^\top b(X(s))\,ds \ge 0,$$
a contradiction. Hence $\tau_{\delta,\varepsilon} = \infty$. Since $\delta, \varepsilon > 0$ were arbitrary, we conclude that $u^\top X(t) \ge 0$ for all $t \ge 0$, as desired. Whence the lemma is proved. $\square$

It is straightforward to extend Lemma B.1 to a polyhedral convex set $\cap_{i=1}^k H_i$ with half-spaces $H_i = \{x \in \mathbb R^d \mid u_i^\top x \ge 0\}$, for some elements $u_1,\dots,u_k \in \mathbb R^d\setminus\{0\}$ and some $k \in \mathbb N$. This holds in particular for the canonical state space $\mathbb R_+^m\times\mathbb R^n$. Moreover, Lemma B.1 includes time-inhomogeneous¹ ordinary differential equations as a special case. The proofs of the following two corollaries are left to the reader.

Corollary B.2. Let $H_i = \{x \in \mathbb R^d \mid x_i \ge 0\}$ denote the $i$th canonical half space in $\mathbb R^d$, for $i = 1,\dots,m$. Let $b: \mathbb R_+\times\mathbb R^d \to \mathbb R^d$ be a continuous map satisfying, for all $t \ge 0$,
$$b(t,x) = b\bigl(t, x_1^+,\dots,x_m^+, x_{m+1},\dots,x_d\bigr) \quad\text{for all } x \in \mathbb R^d, \text{ and}$$
$$b_i(t,x) \ge 0 \quad\text{for all } x \in \partial H_i,\ i = 1,\dots,m.$$
Then any solution $f$ of
$$\partial_t f(t) = b(t, f(t))$$
with $f(0) \in \mathbb R_+^m\times\mathbb R^n$ satisfies $f(t) \in \mathbb R_+^m\times\mathbb R^n$ for all $t \ge 0$.

Corollary B.3. Let $B(t)$ and $C(t)$ be continuous $\mathbb R^{m\times m}$- and $\mathbb R_+^m$-valued parameters, respectively, such that $B_{ij}(t) \ge 0$ whenever $i \ne j$. Then the solution $f$ of the linear differential equation in $\mathbb R^m$
$$\partial_t f(t) = B(t)\,f(t) + C(t)$$
with $f(0) \in \mathbb R_+^m$ satisfies $f(t) \in \mathbb R_+^m$ for all $t \ge 0$.

Here and subsequently, we let $\preceq$ denote the partial order on $\mathbb R^m$ induced by the cone $\mathbb R_+^m$; that is, $x \preceq y$ if $y - x \in \mathbb R_+^m$. Then Corollary B.3 may be rephrased, for $C(t) \equiv 0$, by saying that the operator $e^{\int_0^t B(s)\,ds}$ is order preserving, i.e. $e^{\int_0^t B(s)\,ds}\,\mathbb R_+^m \subseteq \mathbb R_+^m$.

¹Time-inhomogeneous differential equations can be made homogeneous by enlarging the state space.

differential equations can be made homogeneous by enlarging the state space.
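Corollary B.3 can be illustrated numerically: with nonnegative off-diagonal entries in $B$ and $C(t) \succeq 0$, a solution started on the boundary of $\mathbb R_+^m$ never leaves the cone. A sketch with a constant $B$ and a time-dependent $C$ of our own choosing:

```python
import numpy as np
from scipy.integrate import solve_ivp

# B has nonnegative off-diagonal entries; the diagonal may be negative. C(t) >= 0.
B = np.array([[-3.0, 0.5],
              [1.0, -4.0]])

def C(t):
    return np.array([0.1, 0.2 + 0.1 * np.sin(t)])

# Start on the boundary of the cone: f_1(0) = 0.
sol = solve_ivp(lambda t, f: B @ f + C(t), (0.0, 5.0),
                y0=[0.0, 0.5], max_step=0.01)

print(sol.y.min())  # nonnegative up to solver tolerance
```

The quasi-monotonicity of $B$ (the Metzler sign pattern) is exactly what pushes the flow inward at the boundary, matching the condition $b_i(t,x) \ge 0$ on $\partial H_i$ in Corollary B.2.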


Next, we consider time-inhomogeneous Riccati equations in $\mathbb R^m$ of the special form
$$\partial_t f_i(t) = A_i f_i(t)^2 + B_i^\top f(t) + C_i(t), \quad i = 1,\dots,m, \tag{B.3}$$
for some parameters $A, B, C(t)$ satisfying the following admissibility conditions:
$$A = (A_1,\dots,A_m) \in \mathbb R^m, \quad B_{i,j} \ge 0 \text{ for } 1 \le i \ne j \le m, \quad C(t) = (C_1(t),\dots,C_m(t)) \text{ continuous } \mathbb R^m\text{-valued}. \tag{B.4}$$
The following lemma provides a comparison result for (B.3). It shows, in particular, that the solution of (B.3) is uniformly bounded from below on compacts with respect to $\preceq$ if $A \succeq 0$.

Lemma B.4. Let $A^{(k)}, B, C^{(k)}$, $k = 1, 2$, be parameters satisfying the admissibility conditions (B.4), and
$$A^{(1)} \preceq A^{(2)}, \qquad C^{(1)}(t) \preceq C^{(2)}(t). \tag{B.5}$$
Let $\tau > 0$ and $f^{(k)}: [0, \tau) \to \mathbb R^m$ be solutions of (B.3) with $A$ and $C$ replaced by $A^{(k)}$ and $C^{(k)}$, respectively, $k = 1, 2$. If $f^{(1)}(0) \preceq f^{(2)}(0)$ then $f^{(1)}(t) \preceq f^{(2)}(t)$ for all $t \in [0, \tau)$. If, moreover, $A^{(1)} = 0$ then
$$e^{Bt}\Bigl(f^{(1)}(0) + \int_0^t e^{-Bs}\,C^{(1)}(s)\,ds\Bigr) \preceq f^{(2)}(t)$$
for all $t \in [0, \tau)$.

Proof. The function $f = f^{(2)} - f^{(1)}$ solves
$$\partial_t f_i(t) = A_i^{(2)} f_i^{(2)}(t)^2 - A_i^{(1)} f_i^{(1)}(t)^2 + B_i^\top f(t) + C_i^{(2)}(t) - C_i^{(1)}(t)$$
$$= \bigl(A_i^{(2)} - A_i^{(1)}\bigr) f_i^{(2)}(t)^2 + A_i^{(1)}\bigl(f_i^{(2)}(t) + f_i^{(1)}(t)\bigr) f_i(t) + B_i^\top f(t) + C_i^{(2)}(t) - C_i^{(1)}(t) = \tilde B_i(t)^\top f(t) + \tilde C_i(t),$$
where we write
$$\tilde B_i(t) = B_i + A_i^{(1)}\bigl(f_i^{(2)}(t) + f_i^{(1)}(t)\bigr) e_i, \qquad \tilde C_i(t) = \bigl(A_i^{(2)} - A_i^{(1)}\bigr) f_i^{(2)}(t)^2 + C_i^{(2)}(t) - C_i^{(1)}(t).$$
Note that $\tilde B = (\tilde B_{i,j})$ and $\tilde C$ satisfy the assumptions of Corollary B.3 in lieu of $B$ and $C$, and $f(0) \in \mathbb R_+^m$. Hence Corollary B.3 implies $f(t) \in \mathbb R_+^m$ for all $t \in [0, \tau)$, as desired. The last statement of the lemma follows by the variation of constants formula for $f^{(1)}(t)$. $\square$


After these preliminary comparison results for the Riccati equation (B.3), we can now state and prove an important result for the system of Riccati equations (3.2). The following is an essential ingredient of the proof of Theorem 3.3. It is inspired by the line of arguments in Glasserman and Kim [16].

Lemma B.5. Let $D_R$ denote the maximal domain for the system (3.2) of Riccati equations. Let $(\tau, u) \in D_R$. Then

1. $D_R(\tau)$ is star-shaped around zero.

2. $\theta^* = \sup\{\theta \ge 0 \mid \theta u \in D_R(\tau)\}$ satisfies either $\theta^* = \infty$ or $\lim_{\theta\uparrow\theta^*}\|\psi_I(\tau, \theta u)\| = \infty$. In the latter case, there exists some $x^* \in \mathbb R_+^m\times\mathbb R^n$ such that $\lim_{\theta\uparrow\theta^*}\phi(\tau,\theta u) + \psi(\tau,\theta u)^\top x^* = \infty$.

Proof. We first assume that the matrices $\alpha_i$ are block-diagonal, such that $\alpha_{i,iJ} = 0$, for all $i = 1,\dots,m$. Fix $\theta \in (0,1]$. We claim that $\theta u \in D_R(\tau)$. It follows by inspection that $f^{(\theta)}(t) = \psi_I(t,\theta u)/\theta$ solves (B.3) with
$$A_i^{(\theta)} = \tfrac12\,\theta\,\alpha_{i,ii}, \qquad B = B_{II}^\top, \qquad C_i^{(\theta)}(t) = \beta_{i,J}^\top\psi_J(t,u) + \tfrac12\,\psi_J(t,u)^\top\theta\,\alpha_{i,JJ}\,\psi_J(t,u),$$
and $f^{(\theta)}(0) = u_I$. Lemma B.4 thus implies that $f^{(\theta)}(t)$ is nicely behaved, as
$$e^{B_{II}^\top t}\Bigl(u_I + \int_0^t e^{-B_{II}^\top s}\,C^{(0)}(s)\,ds\Bigr) \preceq f^{(\theta)}(t) \preceq \psi_I(t,u), \tag{B.6}$$
for all $t \in [0, t_+(\theta u)) \cap [0, \tau]$. By the maximality of $D_R$ we conclude that $\tau < t_+(\theta u)$, which implies $\theta u \in D_R(\tau)$, as desired. Hence $D_R(\tau)$ is star-shaped around zero, which is part 1.

Next suppose that $\theta^* < \infty$. Since $D_R(\tau)$ is open, this implies $\theta^* u \notin D_R(\tau)$ and thus $t_+(\theta^* u) \le \tau$. From part 1 we know that $(t, \theta u) \in D_R$ for all $t < t_+(\theta^* u)$ and $0 \le \theta \le \theta^*$. On the other hand, there exists a sequence $t_n \uparrow t_+(\theta^* u)$ such that $\|\psi_I(t_n, \theta^* u)\| > n$ for all $n \in \mathbb N$. By continuity of $\psi$ on $D_R$, we conclude that there exists some sequence $\theta_n \uparrow \theta^*$ with $\|\psi_I(t_n, \theta_n u) - \psi_I(t_n, \theta^* u)\| \le 1/n$ and hence
$$\lim_n \|\psi_I(t_n, \theta_n u)\| = \infty. \tag{B.7}$$
Applying Lemma B.4 as above, where the initial time $t = 0$ is shifted to $t_n$, yields
$$g_n := e^{B_{II}^\top(\tau-t_n)}\Bigl(f^{(\theta_n)}(t_n) + \int_{t_n}^{\tau} e^{B_{II}^\top(t_n-s)}\,C^{(0)}(s)\,ds\Bigr) \preceq f^{(\theta_n)}(\tau).$$
Corollary B.3 implies that $e^{B_{II}^\top(\tau-t_n)}$ is order preserving; that is, $e^{B_{II}^\top(\tau-t_n)}\,\mathbb R_+^m \subseteq \mathbb R_+^m$. Hence, in view of (B.6) for $f^{(\theta_n)}(t_n)$,
$$g_n \succeq e^{B_{II}^\top(\tau-t_n)}\Bigl(e^{B_{II}^\top t_n}\Bigl(u_I + \int_0^{t_n} e^{-B_{II}^\top s}\,C^{(0)}(s)\,ds\Bigr) + \int_{t_n}^{\tau} e^{B_{II}^\top(t_n-s)}\,C^{(0)}(s)\,ds\Bigr) = e^{B_{II}^\top\tau}\Bigl(u_I + \int_0^{\tau} e^{-B_{II}^\top s}\,C^{(0)}(s)\,ds\Bigr).$$


On the other hand, elementary operator norm inequalities yield
$$\|g_n\| \ge e^{-\|B_{II}\|\tau}\,\|f^{(\theta_n)}(t_n)\| - e^{\|B_{II}\|\tau}\,\tau\sup_{s\in[0,\tau]}\|C^{(0)}(s)\|.$$
Together with (B.7), this implies $\|g_n\| \to \infty$. From Lemma B.6 below we conclude that $\lim_n f^{(\theta_n)}(\tau)^\top y^* = \infty$ for some $y^* \in \mathbb R_+^m$. Moreover, in view of Lemma B.4, we know that $f^{(\theta)}(\tau)^\top y^*$ is increasing in $\theta$. Therefore $\lim_{\theta\uparrow\theta^*} f^{(\theta)}(\tau)^\top y^* = \infty$. Applying (B.6) and Lemma B.6 below again, this also implies that $\lim_{\theta\uparrow\theta^*}\|f^{(\theta)}(\tau)\| = \infty$. It remains to set $x^* = (y^*, 0)$ and observe that $b_I \in \mathbb R_+^m$ and thus
$$\phi(\tau,\theta u) = \int_0^\tau\Bigl(\tfrac12\,\psi_J(t,\theta u)^\top a_{JJ}\,\psi_J(t,\theta u) + b_I^\top\psi_I(t,\theta u) + b_J^\top\psi_J(t,\theta u)\Bigr)\,dt$$
is uniformly bounded from below for all $\theta \in [0, \theta^*)$. Thus the lemma is proved under the premise that the matrices $\alpha_i$ are block-diagonal for all $i = 1,\dots,m$.

The general case of admissible parameters $a, \alpha_i, b, \beta_i$ is reduced to the preceding block-diagonal case by a linear transformation along the lines of Lemma 6.1. Indeed, define the invertible $d\times d$-matrix
$$\Lambda = \begin{pmatrix} I_m & 0 \\ D & I_n \end{pmatrix},$$
where the $n\times m$-matrix $D = (\delta_1,\dots,\delta_m)$ has $i$th column vector
$$\delta_i = \begin{cases} -\dfrac{\alpha_{i,iJ}}{\alpha_{i,ii}}, & \text{if } \alpha_{i,ii} > 0, \\ 0, & \text{else.} \end{cases}$$
It is then not hard to see that $\Lambda(\mathbb R_+^m\times\mathbb R^n) = \mathbb R_+^m\times\mathbb R^n$, and
$$\tilde\phi(t,u) = \phi(t,\Lambda^\top u), \qquad \tilde\psi(t,u) = \bigl(\Lambda^\top\bigr)^{-1}\psi(t,\Lambda^\top u)$$
satisfy the system of Riccati equations (3.2) with $a, \alpha_i, b$ and $B = (\beta_1,\dots,\beta_d)$ replaced by the admissible parameters
$$\tilde a = \Lambda a\Lambda^\top, \qquad \tilde\alpha_i = \Lambda\alpha_i\Lambda^\top, \qquad \tilde b = \Lambda b, \qquad \tilde B = \Lambda B\Lambda^{-1}.$$
Moreover, the $\tilde\alpha_i$ are block-diagonal, for all $i = 1,\dots,m$. By the first part of the proof, the corresponding maximal domain $\tilde D_R(\tau)$, and hence also $D_R(\tau) = \Lambda^\top\tilde D_R(\tau)$, is star-shaped around zero. Moreover, if $\theta^* < \infty$, then
$$\lim_{\theta\uparrow\theta^*}\|\psi_I(\tau,\theta u)\| = \lim_{\theta\uparrow\theta^*}\bigl\|\tilde\psi_I\bigl(\tau, \theta\,(\Lambda^\top)^{-1}u\bigr)\bigr\| = \infty,$$
and there exists some $x^* \in \mathbb R_+^m\times\mathbb R^n$ such that
$$\lim_{\theta\uparrow\theta^*}\phi(\tau,\theta u) + \psi(\tau,\theta u)^\top x^* = \lim_{\theta\uparrow\theta^*}\tilde\phi\bigl(\tau,\theta(\Lambda^\top)^{-1}u\bigr) + \tilde\psi\bigl(\tau,\theta(\Lambda^\top)^{-1}u\bigr)^\top\Lambda x^* = \infty.$$
Hence the lemma is proved. $\square$


Lemma B.6. Let $c \in \mathbb R^m$, and $(c_n)$ and $(d_n)$ be sequences in $\mathbb R^m$ such that $c \preceq c_n \preceq d_n$ for all $n \in \mathbb N$. Then the following are equivalent:

1. $\|c_n\| \to \infty$;

2. $c_n^\top y^* \to \infty$ for some $y^* \in \mathbb R_+^m\setminus\{0\}$.

In either case, $\|d_n\| \to \infty$ and $d_n^\top y^* \to \infty$.

Proof. $1 \Rightarrow 2$: since $\|c_n\|^2 = \sum_{i=1}^m (c_n^\top e_i)^2$ and $c_n^\top e_i \ge c^\top e_i$, we conclude that $c_n^\top e_i \to \infty$ for some $i = 1,\dots,m$. $2 \Rightarrow 1$: this follows from $c_n^\top y^* \le \|c_n\|\,\|y^*\|$. The last statement now follows since $d_n^\top y^* \ge c_n^\top y^*$. $\square$

Finally, we sketch an alternative proof of Theorem 3.3 part 1 which avoids probabilistic arguments.

Remark B.7. We may without loss of generality assume block-diagonal form of α_i, i = 1, . . . , d (cf. the final part of the proof of Lemma B.5). Assume, by contradiction, that for some v ∈ R^d we have t_+(u + iv) < t_+(u). Then, as in the first proof, we may deduce the existence of t_n ↑ t_+(u + iv) such that

lim_n (Re ψ_i(t_n, u + iv))^+ = ∞   (B.8)

holds for some i ∈ {1, . . . , m}. Set g(t, u + iv) := Re ψ(t, u + iv) and h(t, u + iv) := Im ψ(t, u + iv). Then for i = 1, . . . , m the following differential inequality holds:

ġ_i(t, u + iv) = ½ α_{i,ii} (g_i² − h_i²) + g_J^⊤ α_{i,JJ} g_J − h_J^⊤ α_{i,JJ} h_J + β_i^⊤ g
             ≤ ½ α_{i,ii} g_i² + g_J^⊤ α_{i,JJ} g_J + β_i^⊤ g,   (B.9)

with g(0, u + iv) = ψ(0, u) = u. Hence, noting that g_J(t, u + iv) = ψ_J(t, u), we obtain by Lemma B.4, for all t ∈ (0, t_+(u + iv)),

Re ψ(t, u + iv) = g(t, u + iv) ⪯ ψ(t, u).

On the other hand, ψ_I(t, u) ⪯ M for some positive constant M ∈ R^m_+, for all t ∈ [0, t_+(u + iv)]; hence Re ψ_i(t, u + iv) ≤ M_i, which contradicts (B.8).




Author information

Damir Filipović, Vienna Institute of Finance, University of Vienna and Vienna University of Economics and Business Administration, Heiligenstädter Straße 46-48, A-1190 Wien, Austria. Email: [email protected]

Eberhard Mayerhofer, Vienna Institute of Finance, University of Vienna and Vienna University of Economics and Business Administration, Heiligenstädter Straße 46-48, A-1190 Wien, Austria. Email: [email protected]

Radon Series Comp. Appl. Math 8, 165–181
© de Gruyter 2009

Multilevel quasi-Monte Carlo path simulation

Michael B. Giles and Benjamin J. Waterhouse

Abstract. This paper reviews the multilevel Monte Carlo path simulation method for estimating option prices in computational finance, and extends it by combining it with quasi-Monte Carlo integration using a randomised rank-1 lattice rule. Using the Milstein discretisation of the stochastic differential equation, it is demonstrated that the combination has much lower computational cost than either one on its own for evaluating European, Asian, lookback, barrier and digital options.

Key words. Multilevel, Monte Carlo, quasi-Monte Carlo, computational finance.

AMS classification. 11K45, 60H10, 60H35, 65C05, 65C30, 68U20

1 Introduction

Giles [4, 5] has recently introduced a multilevel Monte Carlo path simulation method for the pricing of financial options. This improves the computational efficiency of Monte Carlo path simulation by combining results using different numbers of timesteps. This can be viewed as a generalisation of the two-level method of Kebaier [9] and is also similar in approach to Heinrich's multilevel method for parametric integration [7].

The first paper [5] (which was the second to appear in print due to a publication backlog) introduced the multilevel Monte Carlo method and proved that it can lower the computational complexity of path-dependent Monte Carlo evaluations. It also presented numerical results using the simplest Euler–Maruyama discretisation. The second paper [4] demonstrated that the computational cost can be further reduced by using the Milstein discretisation. This has the same weak order of convergence but an improved first order strong convergence, and it is the strong order of convergence which is central to the efficiency of the multilevel method. In this paper we review the key ideas and introduce a new ingredient, the use of quasi-Monte Carlo (QMC) integration based on a randomised rank-1 lattice rule, which further reduces the computational cost.

To set the scene, we consider a scalar SDE with general drift and volatility terms,

dS(t) = a(S, t) dt + b(S, t) dW(t),   0 < t < T,   (1.1)

with given initial data S_0. In the case of European and digital options, we are interested in the expected value of a function of the terminal state, f(S(T)), but in the case of Asian, lookback and barrier options the valuation depends on the entire path S(t), 0 < t < T.

Using a simple Monte Carlo method with a numerical discretisation with first order weak convergence, to achieve an r.m.s. error of ε would require O(ε^{−2}) independent paths, each with O(ε^{−1}) timesteps, giving a computational complexity which is O(ε^{−3}). With the Euler–Maruyama discretisation the multilevel method reduces the cost to O(ε^{−2}(log ε)²) for a European option with a payoff with a uniform Lipschitz bound [5], while the use of the Milstein discretisation further reduces the cost to O(ε^{−2}) for a larger class of options, including Asian, lookback, barrier and digital options [4].

The paper begins by reviewing the multilevel approach, first with the Euler path discretisation and then with the superior Milstein discretisation. QMC methods based on rank-1 lattice rules are then introduced, with particular attention to the Brownian Bridge construction and the use of randomisation to obtain confidence intervals. The combined multilevel QMC algorithm is presented, and the following section provides numerical results for a range of options.

(First author supported by Microsoft Corporation, the UK Engineering and Physical Sciences Research Council and the Oxford-Man Institute of Quantitative Finance; second author supported by the Australian Research Council through a Linkage project between the University of New South Wales and Macquarie Bank.)

2 Multilevel Monte Carlo method

Consider Monte Carlo path simulations with different timesteps h_l = 2^{−l} T, l = 0, 1, . . . , L. Thus on the coarsest level, l = 0, the simulations use just 1 timestep, while on the finest level, l = L, the simulations use 2^L timesteps. For a given Brownian path W(t), let P denote the payoff, and let P_l denote its approximation using a numerical discretisation with timestep h_l. Because of the linearity of the expectation operator, it is clearly true that

E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}].   (2.1)

This expresses the expectation on the finest level as being equal to the expectation on the coarsest level plus a sum of corrections which give the difference in expectation between simulations using different numbers of timesteps. The idea behind the multilevel method is to independently estimate each of the expectations on the right-hand side in a way which minimises the overall variance for a given computational cost. Let Y_0 be an estimator for E[P_0] using N_0 samples, and let Y_l for l > 0 be an estimator for E[P_l − P_{l−1}] using N_l paths. The simplest estimator is a mean of N_l independent samples, which for l > 0 is

Y_l = N_l^{−1} Σ_{i=1}^{N_l} ( P_l^{(i)} − P_{l−1}^{(i)} ).   (2.2)

The key point here is that the quantity P_l^{(i)} − P_{l−1}^{(i)} comes from two discrete approximations with different timesteps but the same Brownian path. The variance of this simple estimator is V[Y_l] = N_l^{−1} V_l, where V_l is the variance of a single sample. Combining this with independent estimators for each of the other levels, the variance of the combined estimator Σ_{l=0}^{L} Y_l is Σ_{l=0}^{L} N_l^{−1} V_l, while its computational cost is proportional to Σ_{l=0}^{L} N_l h_l^{−1}. Treating the N_l as continuous variables, the variance is minimised for a fixed computational cost by choosing N_l to be proportional to √(V_l h_l).

In the particular case of an Euler discretisation, provided a(S, t) and b(S, t) satisfy certain conditions [2, 10, 21], there is O(h^{1/2}) strong convergence. From this it follows that V[P_l − P] = O(h_l) for a European option with a Lipschitz continuous payoff. Hence for the simple estimator (2.2), the single sample variance V_l is O(h_l), and the optimal choice for N_l is asymptotically proportional to h_l. Setting N_l = O(ε^{−2} L h_l), the variance of the combined estimator Y is O(ε²). If L is chosen such that L = log ε^{−1} / log 2 + O(1) as ε → 0, then h_L = 2^{−L} = O(ε), and so the bias error E[P_L − P] is O(ε) due to standard results on weak convergence. Consequently, we obtain a mean square error which is O(ε²), with a computational complexity which is O(ε^{−2} L²) = O(ε^{−2}(log ε)²). This analysis is generalised in the following theorem [5]:

Theorem 2.1. Let P denote a functional of the solution of the stochastic differential equation (1.1) for a given Brownian path W(t), and let P_l denote the corresponding approximation using a numerical discretisation with timestep h_l = M^{−l} T. If there exist independent estimators Y_l based on N_l Monte Carlo samples, and positive constants α ≥ ½, β, c_1, c_2, c_3, such that

i) |E[P_l − P]| ≤ c_1 h_l^α;

ii) E[Y_l] = E[P_0] if l = 0, and E[Y_l] = E[P_l − P_{l−1}] if l > 0;

iii) V[Y_l] ≤ c_2 N_l^{−1} h_l^β;

iv) C_l, the computational complexity of Y_l, is bounded by C_l ≤ c_3 N_l h_l^{−1};

then there exists a positive constant c_4 such that for any ε < e^{−1} there are values L and N_l for which the multilevel estimator

Y = Σ_{l=0}^{L} Y_l

has a mean-square-error with bound

MSE ≡ E[ (Y − E[P])² ] < ε²

with a computational complexity C with bound

C ≤ c_4 ε^{−2}              if β > 1,
C ≤ c_4 ε^{−2} (log ε)²     if β = 1,
C ≤ c_4 ε^{−2−(1−β)/α}      if 0 < β < 1.
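As an illustration of the telescoping identity (2.1) and the coupled estimator (2.2), the following sketch prices a European call under geometric Brownian motion with an Euler–Maruyama discretisation. It is our own minimal illustration, not code from the paper: all names and parameter values are assumptions, and the key feature is simply that each level difference uses the same Brownian increments on the fine and coarse paths.

```python
import numpy as np

def mlmc_level(l, N, S0=1.0, K=1.0, T=1.0, r=0.05, sigma=0.2, rng=None):
    """Samples of P_l - P_{l-1} (or of P_0 when l == 0) for a European call
    under GBM, using Euler-Maruyama with 2^l fine timesteps and the SAME
    Brownian increments on the fine and coarse paths."""
    if rng is None:
        rng = np.random.default_rng(0)
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(N, nf))

    Sf = np.full(N, S0)
    for n in range(nf):                       # fine path, timestep hf
        Sf = Sf + r * Sf * hf + sigma * Sf * dW[:, n]
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    if l == 0:
        return Pf

    dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments, timestep 2*hf
    Sc = np.full(N, S0)
    for n in range(nf // 2):
        Sc = Sc + r * Sc * 2 * hf + sigma * Sc * dWc[:, n]
    Pc = np.exp(-r * T) * np.maximum(Sc - K, 0.0)
    return Pf - Pc

def mlmc_estimate(L, N, seed=42):
    """Telescoping sum (2.1): the level estimators Y_l are simple means (2.2)."""
    rng = np.random.default_rng(seed)
    return sum(np.mean(mlmc_level(l, N, rng=rng)) for l in range(L + 1))

price = mlmc_estimate(L=6, N=20000)   # Black-Scholes value is about 0.1045
```

Because the fine and coarse payoffs are driven by the same path, the variance of P_l − P_{l−1} shrinks with the timestep, which is what makes the small sample sizes on the finer levels affordable.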

3 Milstein discretisation

The theorem proves that the best order of complexity is achieved using discretisations with β > 1. To achieve this for a scalar SDE, we use the Milstein discretisation of equation (1.1), which is

S_{n+1} = S_n + a_n h + b_n ΔW_n + ½ b_n (∂b_n/∂S) ( (ΔW_n)² − h ).   (3.1)

In the above equation, the subscript n is used to denote the timestep index, and a_n, b_n and ∂b_n/∂S are evaluated at S_n, t_n. All of the numerical results to be presented are for the case of geometric Brownian motion, for which the SDE is

dS(t) = r S dt + σ S dW(t),   0 < t < T.

By switching to the new variable X = log S, it is possible to construct numerical approximations which are exact, but here we directly simulate the geometric Brownian motion using the Milstein method, as an indication of the behaviour with more complicated models, for example those with a local volatility function σ(S, t).

The Milstein discretisation defines the numerical approximation at the discrete times t_n. Within the time interval [t_n, t_{n+1}] we use a constant coefficient Brownian interpolation conditional on the two end values,

S̃(t) = S_n + λ (S_{n+1} − S_n) + b_n ( W(t) − W_n − λ (W_{n+1} − W_n) ),   (3.2)

where λ = (t − t_n)/(t_{n+1} − t_n).

For the fine path, standard results on i) the expected average value, ii) the distribution of the minimum, and iii) the probability of crossing a certain value, will be used to obtain the value P_l for Asian, lookback and barrier options, respectively. Exactly the same approach could also be used on the coarse path with half as many timesteps to obtain P_{l−1}. However, this would not give an estimator Y_l with variance convergence rate β > 1. To achieve the better convergence rate, we first use the value of the underlying Brownian motion W(t) at the midpoint (which has already been sampled and used for the fine path calculation) to define an interpolated midpoint

S_{n+½} = ½ (S_{n+1} + S_n) + b_n ( W_{n+½} − ½ (W_{n+1} + W_n) ).   (3.3)

We can then use the Brownian interpolation (with volatility b_n) on each of the half-intervals [t_n, t_{n+½}] and [t_{n+½}, t_{n+1}], which each correspond to one of the timesteps on the fine path. A key point in this construction is that we have not altered the expected value for P_{l−1}, averaged over all underlying Brownian paths W(t), compared to its evaluation on level l − 1 on which it corresponds to the finer path; see [4] for further discussion of this important point.
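The Milstein scheme (3.1) and the coarse-path midpoint interpolation (3.3) can be sketched as follows for geometric Brownian motion, where b(S) = σS so that b ∂b/∂S = σ²S. This is a hypothetical implementation of our own (names and parameters are assumptions, not from the paper), with the fine and coarse paths driven by the same Brownian increments.

```python
import numpy as np

def milstein_coupled(l, N, S0=1.0, T=1.0, r=0.05, sigma=0.2, seed=1):
    """Fine GBM path with 2^l Milstein timesteps (3.1); the coupled coarse
    path uses 2^{l-1} timesteps driven by pairwise sums of the same
    Brownian increments; Smid holds the interpolated midpoints (3.3)."""
    rng = np.random.default_rng(seed)
    nf, hf = 2 ** l, T / 2 ** l
    dW = rng.normal(0.0, np.sqrt(hf), size=(N, nf))

    def milstein(start, h, dWs):
        S = np.empty((N, dWs.shape[1] + 1))
        S[:, 0] = start
        for n in range(dWs.shape[1]):
            x = S[:, n]
            # a = r*S, b = sigma*S, b * db/dS = sigma^2 * S
            S[:, n + 1] = (x + r * x * h + sigma * x * dWs[:, n]
                           + 0.5 * sigma ** 2 * x * (dWs[:, n] ** 2 - h))
        return S

    Sf = milstein(S0, hf, dW)
    dW1, dW2 = dW[:, 0::2], dW[:, 1::2]      # first/second fine half-steps
    Sc = milstein(S0, 2 * hf, dW1 + dW2)
    # (3.3): W_{n+1/2} - (W_n + W_{n+1})/2 equals (dW1 - dW2)/2 on each
    # coarse timestep, expressed via the two fine increments
    bn = sigma * Sc[:, :-1]
    Smid = 0.5 * (Sc[:, :-1] + Sc[:, 1:]) + 0.5 * bn * (dW1 - dW2)
    return Sf, Sc, Smid
```

The midpoints `Smid` are what allow the Brownian interpolation to be applied on each half-interval of the coarse path without changing its expected payoff.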

4 Quasi-Monte Carlo method

QMC methods approximate an integral on a high-dimensional hypercube with an N-point equal-weight quadrature rule of the form

∫_{[0,1]^d} f(x) dx ≈ (1/N) Σ_{i=0}^{N−1} f(x_i).

This is the same form which is used in the Monte Carlo method. However, rather than choosing the d-dimensional points x_i uniformly from the unit cube, as is the case with the Monte Carlo method, QMC methods choose the points in some deterministic manner. Sobol′ sequences [20] and digital nets [15] are two popular choices of QMC points, which have been previously used for financial applications [6, 12]. In this paper we use a rank-1 lattice rule [19] in which the points have the particularly simple construction

x_i = { (i/N) z },

where z is a d-dimensional vector with integer components and the notation { · } denotes taking the fractional part of each component of the argument and disregarding the integer part, so that x_i lies within the half-open unit cube.

For Monte Carlo integration it is well known that the error is O(N^{−1/2}). In one dimension, the lattice rule is equivalent to a rectangle rule and can achieve O(N^{−1}) convergence of the error, for a sufficiently smooth integrand. For larger dimensions, it may be shown that for integrands with sufficient smoothness and dimensions which become progressively less important, there exist lattice rules for which the error decays at O(N^{−1+ε}) for all ε > 0, see [11]. Unfortunately, many integrands in mathematical finance applications do not have the required smoothness, and so we may not apply the theory to claim the O(N^{−1+ε}) convergence. However, experimentation suggests that this rate can in fact be achieved for many finance problems [3].

Two key aspects of the implementation of QMC methods are randomisation and the factorisation of the covariance matrix. If we neglect for the moment the discretisation errors which arise from finite timesteps, the standard Monte Carlo method has the attractive feature that it provides both an unbiased estimate of the desired value and a confidence interval for that estimate. The QMC method lacks this feature, but it can be regained by re-defining the ith point to be

x̂_i = { (i/N) z + Δ }.

For a given offset vector Δ ∈ [0, 1)^d, this defines a set of N points, for which one can compute the average

Ȳ = (1/N) Σ_{i=0}^{N−1} f(x̂_i).


If we now treat Δ as a random variable, then the expected value of Ȳ is equal to the desired integral, and therefore Ȳ is an unbiased estimator. By choosing a number of different random offsets Δ_1, . . . , Δ_q (q = 32 is used in this paper) and computing a separate Ȳ_j for each, one can construct a confidence interval in the usual way.

For a scalar SDE with n_T timesteps, the dimensionality of the problem is d = n_T, and the factorisation of the covariance matrix concerns the question of how best to map the different dimensions of the hypercube to the n_T Wiener increments in the Milstein discretisation. The expected value of a financial product whose value is determined by an asset whose dynamics are described by (1.1), discretised at times t_n = n h, is given by the integral

∫_{R^d} p(x) exp(−½ x^⊤ Σ^{−1} x) / ( (2π)^{d/2} √(det Σ) ) dx.

Here p(x) is the payoff function and the d-dimensional matrix Σ_{i,j} = min(t_i, t_j) is the covariance matrix for the elements of x, which are the underlying Wiener path values W_n. Taking a matrix A such that A A^⊤ = Σ, and making the substitutions x = A y and y = Φ^{−1}(z), where Φ^{−1} is the inverse of the cumulative Normal distribution function taken componentwise, this can be reformulated as an integral over the unit cube

∫_{R^d} p(A y) exp(−½ y^⊤ y) / (2π)^{d/2} dy = ∫_{[0,1]^d} p(A Φ^{−1}(z)) dz.

For Monte Carlo integration the choice of the matrix A makes no difference, but for QMC integration it is very important [18, 6, 12]. While any choice of A such that A A^⊤ = Σ is suitable, there are three established ways in which the matrix A may be chosen. Firstly, A may be chosen to be the Cholesky factor of Σ. This is the simplest method and corresponds to taking the nth component of x̂_i to define ΔW_n through

ΔW_n = √h Φ^{−1}(x̂_{i,n}).

This would correctly map a uniform [0, 1] distribution for x_n into a Normal distribution for ΔW_n with zero mean and variance h.
This method is often referred to as the standard construction and is usually used for Monte Carlo integration due to the simplicity of its construction. A second way in which A may be chosen is to use a Brownian Bridge construction [18, 6]. Under this method, the first component of x is used to define W(T), the second component defines W(T/2) (conditional on the first), the third and fourth components define W(T/4) and W(3T/4) (conditional on the first two), and so on. Note that in the standard and Brownian Bridge constructions, the matrix A is not explicitly used, but rather implicitly used in the recursive construction. The final way is known as the "Principal Components Analysis" (PCA) method. In this method A is chosen to be the matrix with nth column equal to √λ_n v_n, where λ_n is the nth largest eigenvalue of Σ and v_n is the corresponding eigenvector [6]. Several authors [18, 12, 6] have found the Brownian Bridge and PCA constructions to be much better for some problems, although it is known that there are problems from mathematical finance for which the standard construction performs much better than the Brownian Bridge, see [17]. In our numerical experiments we use the Brownian Bridge construction, since for our applications it consistently outperforms the standard construction.

The final implementation issue is the choice of the generating vector z. We use a vector obtained using the construction algorithm of Dick et al. [8]. This particular type of lattice rule is said to be embedded, since it can be used as a sequence with differing values of N. The construction algorithm is particularly efficient due to the fast FFT implementation technique of Nuyens and Cools [16].
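A minimal sketch of a randomly-shifted rank-1 lattice rule follows: q independent shifts give an unbiased estimate together with a standard-error-based confidence interval, as described above. The generating vector used here is an arbitrary toy choice for illustration only, not a vector produced by the Dick et al. construction algorithm.

```python
import numpy as np

def shifted_lattice(N, z, shift):
    """Rank-1 lattice points x_i = frac(i * z / N + shift), i = 0..N-1."""
    i = np.arange(N)[:, None]
    return np.mod(i * np.asarray(z, dtype=float)[None, :] / N + shift, 1.0)

def qmc_estimate(f, N, z, q=32, seed=0):
    """Average of f over q randomly-shifted copies of the lattice rule:
    an unbiased estimate plus a standard error from the q averages."""
    rng = np.random.default_rng(seed)
    averages = np.array([np.mean(f(shifted_lattice(N, z, rng.random(len(z)))))
                         for _ in range(q)])
    return averages.mean(), averages.std(ddof=1) / np.sqrt(q)

# toy generating vector -- illustrative only, NOT the embedded lattice
# of Dick et al. used in the paper
z = (1, 433, 1249)
est, se = qmc_estimate(lambda x: np.prod(x, axis=1), N=4096, z=z)
# the test integrand x1*x2*x3 integrates to 1/8 over [0,1]^3
```

The random shift is what restores unbiasedness: each shifted point set is marginally uniform on the cube, while retaining the lattice structure within each set.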

5 Multilevel QMC algorithm

At level l in the multilevel formulation, N_l is defined to be the number of QMC points, and Y_l is the computed average of P_l (for l = 0) or P_l − P_{l−1} (for l > 0) over the 32 sets of N_l QMC lattice points, each set having a different random offset. An unbiased estimate of its variance V_l is computed in the usual way from the differing values for the 32 averages. On the assumption that there is first order weak convergence, the remaining bias at the finest level, E[P − P_L], is approximately equal to Y_L. Being more cautious (to allow for the possibility that Y_l changes sign as l increases before settling into its first order asymptotic convergence) we estimate the magnitude of the bias using

max( ½ |Y_{L−1}|, |Y_L| ).

The mean square error is the sum of the combined variance Σ_{l=0}^{L} V_l (where V_l is now the variance of Y_l) plus the square of the bias E[P − P_L]. We choose to make each of these smaller than ε²/2, so that overall we achieve a user-specified RMS accuracy of ε. The variance is reduced by increasing the number of lattice points on each level, while the bias is reduced by increasing the level of path refinement (i.e. increasing L). Given this outline strategy, the multilevel QMC algorithm proceeds as follows:

1. Start with L = 0.

2. Get an initial estimate for V_L using 32 random offsets and N_L = 1.

3. While Σ_{l=0}^{L} V_l > ε²/2, double N_l on the level with the largest value of V_l / (2^l N_l).

4. If L < 2 or the bias estimate is greater than ε/√2, set L := L + 1 and go to step 2.

Step 3 is based on the fact that doubling N_l will eliminate most of the variance V_l at a cost proportional to the product of the number of timesteps 2^l and the number of lattice points N_l. The choice of level l aims to maximise the reduction in variance per unit cost.
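The four-step loop above can be sketched as follows. `level_estimator` is a user-supplied function (an interface of our own devising, not from the paper) returning the average of P_l − P_{l−1} over N lattice points for one random shift; the synthetic estimator at the end is purely for demonstration.

```python
import numpy as np

def mlqmc(level_estimator, eps, q=32, max_L=10):
    """Sketch of the four-step loop.  level_estimator(l, N, j) returns the
    average of P_l - P_{l-1} (or of P_0 when l = 0) over N lattice points
    for the j-th random shift; Y_l and V_l are then the mean and the
    variance-of-the-mean over the q shifted averages."""
    def refresh(l):
        samples = np.array([level_estimator(l, N[l], j) for j in range(q)])
        return samples.mean(), samples.var(ddof=1) / q

    L, N = 0, [1]
    y, v = refresh(0)
    Y, V = [y], [v]
    while True:
        # step 3: double N_l where the variance reduction per unit cost,
        # V_l / (2^l N_l), is largest, until the total variance is < eps^2/2
        while sum(V) > eps ** 2 / 2:
            l = int(np.argmax([V[k] / (2 ** k * N[k]) for k in range(L + 1)]))
            N[l] *= 2
            Y[l], V[l] = refresh(l)
        # step 4: bias estimate max(|Y_{L-1}|/2, |Y_L|) against eps/sqrt(2)
        bias = max(abs(Y[-2]) / 2, abs(Y[-1])) if L >= 1 else np.inf
        if (L >= 2 and bias <= eps / np.sqrt(2)) or L == max_L:
            return sum(Y), L, N
        L += 1
        N.append(1)
        y, v = refresh(L)
        Y.append(y)
        V.append(v)

def toy_level(l, N, j):
    """Synthetic level estimator: E[Y_0] = 1 and E[Y_l] = 4^-l for l > 0,
    plus noise that shrinks with both the level and N."""
    rng = np.random.default_rng(1000 * l + j)
    mean = 1.0 if l == 0 else 4.0 ** (-l)
    return mean + 0.1 * 2.0 ** (-l) * rng.standard_normal() / np.sqrt(N)

est, L, N = mlqmc(toy_level, eps=0.01)   # exact limit of the sum is 4/3
```

The returned `N` list shows the expected profile: large sample sizes on the cheap coarse levels and small ones on the expensive fine levels.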


[Figure 6.1, four panels: top left, log2 variance of P_l (solid) and P_l − P_{l−1} (dashed) against level l, for 1, 16, 256 and 4096 lattice points; top right, log2 |mean| against l; bottom left, N_l against l for ε = 0.00005, 0.0001, 0.0002, 0.0005, 0.001; bottom right, ε²·Cost against ε for standard QMC and MLQMC.]

Figure 6.1. European call option

6 Numerical results

6.1 European call option

The European call option we consider has the discounted payoff

P = exp(−rT) (S(T) − K)^+,

where the notation (x)^+ denotes max(0, x). Figure 6.1 shows the numerical results for parameters S(0) = 1, K = 1, T = 1, r = 0.05, σ = 0.2.

The solid lines in the top left plot show the behaviour of the variance of P_l, while the dashed lines show the variance of P_l − P_{l−1}. The four sets of calculations use different numbers of lattice points. The calculations with just one lattice point correspond to


standard Monte Carlo. The calculations with 16, 256 and 4096 lattice points show the variance of the average over the set of lattice points multiplied by the number of lattice points; for standard Monte Carlo this quantity would be independent of the number of points, and therefore this is a fair basis of comparison which accounts for the cost of 4096 points being 4096 times greater than a single point.

The solid line results show that the QMC method on its own is very effective in reducing the variance compared to the standard Monte Carlo method. The dashed line results show that, in conjunction with the multilevel approach, the QMC is effective at reducing the variance on the coarsest levels, but the benefits diminish on the finer levels. This is probably because the multilevel approach itself extracts much of the low-dimensional content in the integrand, so that on the finer levels the correction is predominantly high-dimensional and so the QMC approach is less effective. However, most of the computational cost of the multilevel method is on the coarsest levels, and so we will see that the combination does reduce the overall cost significantly.

The top right plot shows that E[P_l − P_{l−1}] is approximately O(h_l), corresponding to the expected first order weak convergence. Each line in the bottom left plot shows the values for N_l, l = 0, . . . , L, with the values decreasing with level l as expected. It can also be seen that the value for L, the maximum level of timestep refinement, increases as the value for ε decreases, requiring a lower bias error. The bottom right plot shows the variation with ε of ε²C, where the computational complexity C is defined as

C = 32 Σ_l 2^l N_l,

which is the total number of fine grid timesteps on all levels. One line shows the results for the multilevel QMC method and the other shows the corresponding cost of a standard QMC simulation of the same accuracy, i.e. the same bias error corresponding to the same value for L, and the same variance. It can be seen that ε²C is roughly constant for the standard QMC method, and this is at a level which is comparable to that achieved previously using the multilevel method on its own. However, combining the multilevel method with QMC gives additional savings of a factor 20–100, with the computational cost being approximately proportional to ε^{−1}. This is the best one could hope for using QMC, since in the best cases its error is inversely proportional to the number of points, and hence, at best, inversely proportional to the computational cost.

6.2 Asian option

The Asian option we consider has the discounted payoff

P = exp(−rT) max(0, S̄ − K),

where

S̄ = T^{−1} ∫_0^T S(t) dt.


[Figure 6.2, four panels: log2 variance and log2 |mean| of P_l and P_l − P_{l−1} against level l, for 1, 16, 256 and 4096 lattice points; N_l against l for ε = 0.00005, 0.0001, 0.0002, 0.0005, 0.001; ε²·Cost against ε for standard QMC and MLQMC.]

Figure 6.2. Asian option

On the fine path, integrating (3.2) and using standard Brownian Bridge results (see section 3.1 in [6]) gives

S̄ = T^{−1} Σ_n ( ½ h (S_n + S_{n+1}) + b_n ΔI_n ),

where

ΔI_n = ∫_{t_n}^{t_{n+1}} (W(t) − W(t_n)) dt − ½ h ΔW

is a N(0, h³/12) Normal random variable, independent of ΔW. The coarse path approximation is similar, except that the values for ΔI_n are derived from the fine path


values, noting that

∫_{t_n}^{t_n+2h} (W(t) − W(t_n)) dt − h (W(t_n+2h) − W(t_n))
  = ∫_{t_n}^{t_n+h} (W(t) − W(t_n)) dt − ½ h (W(t_n+h) − W(t_n))
  + ∫_{t_n+h}^{t_n+2h} (W(t) − W(t_n+h)) dt − ½ h (W(t_n+2h) − W(t_n+h))
  + ½ h (W(t_n+h) − W(t_n)) − ½ h (W(t_n+2h) − W(t_n+h)),

and hence

ΔI^c = ΔI^{f1} + ΔI^{f2} + ½ h (ΔW^{f1} − ΔW^{f2}),

where ΔI^c is the value for the coarse timestep, ΔI^{f1} and ΔW^{f1} are the values for the first fine timestep, and ΔI^{f2} and ΔW^{f2} are the values for the second fine timestep.

Figure 6.2 shows the numerical results for parameters S(0) = 1, K = 1, T = 1, r = 0.05, σ = 0.2. The top left plot shows the behaviour of the variance of both P_l and P_l − P_{l−1}. The standard QMC method is effective at reducing the variance on all levels, but with the multilevel estimator its effectiveness diminishes at the finer levels. The bottom two plots again have results from five multilevel calculations for different values of ε. It can be seen that ε²C is very roughly constant for the standard QMC method (again at a level comparable to that achieved previously by the multilevel method on its own [4]), while ε²C decreases significantly with decreasing ε for the combined multilevel QMC method.
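The coupling between the fine and coarse values of ΔI_n can be sketched as follows: the fine-timestep terms are sampled as independent N(0, h³/12) variables and combined exactly as in the identity above. This is a sketch under our own naming conventions, not code from the paper.

```python
import numpy as np

def brownian_increments_with_areas(nf, N, h, seed=0):
    """Fine-timestep Brownian increments dW ~ N(0, h) and integrated-bridge
    terms dI ~ N(0, h^3/12), independent of dW, together with the coarse
    values obtained from
        dI_c = dI_f1 + dI_f2 + h * (dW_f1 - dW_f2) / 2 ."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(h), size=(N, nf))
    dI = rng.normal(0.0, np.sqrt(h ** 3 / 12.0), size=(N, nf))
    dWc = dW[:, 0::2] + dW[:, 1::2]
    dIc = dI[:, 0::2] + dI[:, 1::2] + 0.5 * h * (dW[:, 0::2] - dW[:, 1::2])
    return dW, dI, dWc, dIc
```

A useful consistency check is that the coarse values dIc are again N(0, (2h)³/12) and uncorrelated with dWc, so the coarse level sees the same distribution as a direct simulation with timestep 2h would produce.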

6.3 Lookback option

The lookback option we consider has the discounted payoff

P = exp(−rT) ( S(T) − min_{0<t<T} S(t) ).


[Figure 6.4, four panels: log2 variance and log2 |mean| of P_l and P_l − P_{l−1} against level l, for 1, 16, 256 and 4096 lattice points; N_l against l for ε = 0.00005, 0.0001, 0.0002, 0.0005, 0.001; ε²·Cost against ε for standard QMC and MLQMC.]

Figure 6.4. Barrier option

To achieve a good multilevel variance convergence rate, we follow the same procedure used previously [4], smoothing the payoff using the technique of conditional expectation (see section 7.2.3 in [6]), in which we terminate the path calculations one timestep before reaching the terminal time T. If S^f_{n_T−1} denotes the fine path value at this time, then, approximating the motion thereafter as a simple Brownian motion with constant drift a_{n_T−1} and volatility b_{n_T−1}, the probability that S^f_{n_T} > K after one further timestep is

p^f = Φ( (S^f_{n_T−1} + a_{n_T−1} h − K) / (b_{n_T−1} √h) ).   (6.5)

179

Multilevel QMC

Figure 6.5. Digital option: log_2 variance and log_2 |mean| of P_l and P_l − P_{l−1} versus level l (top); number of points N_l per level, and ε² Cost versus accuracy ε, for standard QMC and multilevel QMC (bottom).

where Φ is the cumulative Normal distribution. For the fine-path payoff P^f_l we therefore use P^f_l = exp(−rT) p^f. For the coarse-path payoff, we note that given the Brownian increment ΔW for the first half of the last timestep, which is already known because it corresponds to the last of the computed timesteps in the fine path calculation, then the probability that S^c_{n_T/2} > K is

p^c = Φ( (S^c_{n_T/2−1} + a_{n_T/2−1} h + b_{n_T/2−1} ΔW − K) / (b_{n_T/2−1} √(h/2)) ),   (6.6)

where a_{n_T/2−1} and b_{n_T/2−1} are the drift and volatility based on S^c_{n_T/2−1}.
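The conditional-expectation smoothing of (6.5) and (6.6) can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the terminal-step states S_f and S_c, the half-step increment dW, and the Black-Scholes coefficients a = rS, b = σS are assumptions for the sake of the example.

```python
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard Normal c.d.f.

# Parameters from the numerical example (S(0)=1, K=1, T=1, r=0.05, sigma=0.2);
# the fine timestep h and the path states below are illustrative assumptions.
r, sigma, K, T = 0.05, 0.2, 1.0, 1.0
h = T / 64

def p_fine(S, h):
    # Eq. (6.5): probability that the fine path ends above K, given its value S
    # one timestep before T, with assumed drift a = r*S and volatility b = sigma*S.
    a, b = r * S, sigma * S
    return Phi((S + a * h - K) / (b * sqrt(h)))

def p_coarse(S, dW, h):
    # Eq. (6.6): same idea for the coarse path, conditioning on the known
    # Brownian increment dW over the first half of its last timestep.
    a, b = r * S, sigma * S
    return Phi((S + a * h + b * dW - K) / (b * sqrt(h / 2)))

S_f, S_c, dW = 1.02, 1.01, 0.01          # illustrative terminal-step states
P_f = exp(-r * T) * p_fine(S_f, h)       # smoothed fine-path digital payoff
P_c = exp(-r * T) * p_coarse(S_c, dW, h)
print(0.0 < P_f < 1.0, 0.0 < P_c < 1.0)  # both are discounted probabilities
```

Because the payoff is now a smooth function of the final computed state, the multilevel correction variance is greatly reduced compared with sampling the discontinuous indicator directly.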


Figure 6.5 has the results for parameters S(0) = 1, K = 1, T = 1, r = 0.05, σ = 0.2. One strikingly different feature is that the variance of the level 0 estimator, V_0, is zero. This is because at level l = 0 there would usually be only one timestep, and so here it is not simulated at all; one simply uses equation (6.5) to evaluate the payoff. This essentially eliminates the cost of the level 0 calculation, which is where the QMC method is usually most effective. Consequently, the cost of the combined multilevel QMC method remains approximately proportional to ε^{−2}, and is only slightly lower than the results obtained previously for the multilevel method without QMC [4]. However, we still get a factor 5–10 computational savings compared to the standard QMC on its own.

7 Conclusions and future work

In this paper we have demonstrated the benefits of combining rank-1 lattice rule quasi-Monte Carlo integration with multilevel Monte Carlo path simulation. Together, the computational cost is lower than using either one on its own.

There are two major directions for future research. The first is the extension of the algorithms to multi-dimensional SDEs, for which the Milstein discretisation usually requires the simulation of Lévy areas [6, 10]. Current investigations indicate that this can be avoided for European options with a Lipschitz payoff through the use of antithetic variables. However, the extension to more difficult payoffs, such as the Asian, lookback, barrier and digital options considered in this paper, looks more challenging, and the direct simulation of the Lévy areas may be necessary.

The second direction for future research is the numerical analysis of multilevel methods. Müller-Gronbach and Ritter [14], Giles, Higham and Mao [13] and Avikainen [1] have obtained bounds on the convergence of the multilevel method using the Euler discretisation for different classes of output functional, but additional research is required for the Milstein discretisation.

Bibliography

[1] R. Avikainen, Convergence rates for approximations of functionals of SDEs, Finance and Stochastics (to appear) (2009).
[2] V. Bally and D. Talay, The law of the Euler scheme for stochastic differential equations, I: convergence rate of the distribution function, Probability Theory and Related Fields 104 (1995), pp. 43–60.
[3] G.W. Wasilkowski, F.Y. Kuo and B.J. Waterhouse, Randomly shifted lattice rules for unbounded integrands, J. Complexity 22 (2006), pp. 630–651.
[4] M.B. Giles, Improved multilevel Monte Carlo convergence using the Milstein scheme, in Monte Carlo and Quasi-Monte Carlo Methods 2006, pp. 343–358, Springer-Verlag, 2007.
[5] M.B. Giles, Multilevel Monte Carlo path simulation, Operations Research 56 (2008), pp. 981–986.
[6] P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer-Verlag, New York, 2003.
[7] S. Heinrich, Multilevel Monte Carlo methods, in Lecture Notes in Computer Science, vol. 2179, pp. 58–67, Springer-Verlag, 2001.
[8] F. Pillichshammer, J. Dick and B.J. Waterhouse, The construction of good extensible rank-1 lattices, Math. Comp. 77 (2007), pp. 2345–2373.
[9] A. Kebaier, Statistical Romberg extrapolation: a new variance reduction method and applications to options pricing, Annals of Applied Probability 14 (2005), pp. 2681–2705.
[10] P.E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag, Berlin, 1992.
[11] F.Y. Kuo and I.H. Sloan, Lifting the curse of dimensionality, Notices of the AMS 52 (2005), pp. 1320–1328.
[12] P. L'Ecuyer, Quasi-Monte Carlo methods in finance, in Proceedings of the 2004 Winter Simulation Conference, pp. 1645–1655, IEEE Press, 2004.
[13] D. Higham, M.B. Giles and X. Mao, Analysing multilevel Monte Carlo for options with non-globally Lipschitz payoff, Finance and Stochastics (to appear) (2009).
[14] T. Müller-Gronbach and K. Ritter, Minimal errors for strong and weak approximation of stochastic differential equations, in Monte Carlo and Quasi-Monte Carlo Methods 2006, pp. 53–82, Springer-Verlag, 2007.
[15] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, 1992.
[16] D. Nuyens and R. Cools, Fast algorithms for component-by-component construction of rank-1 lattice rules in shift-invariant reproducing kernel Hilbert spaces, Math. Comp. 75 (2006), pp. 903–920 (electronic).
[17] A. Papageorgiou, The Brownian bridge does not offer a consistent advantage in Quasi-Monte Carlo integration, J. Complex. 18 (2002), pp. 171–186.
[18] W.J. Morokoff, R.E. Caflisch and A.B. Owen, Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension, J. Comput. Finance 1 (1997), pp. 27–46.
[19] I.H. Sloan and S. Joe, Lattice Methods for Multiple Integration, Oxford University Press, 1994.
[20] I.M. Sobol', Distribution of points in a cube and approximate evaluation of integrals, Zh. Vychisl. Mat. i Mat. Fiz. 7 (1967), pp. 784–802.
[21] D. Talay and L. Tubaro, Expansion of the global error for numerical schemes solving stochastic differential equations, Stochastic Analysis and Applications 8 (1990), pp. 483–509.

Author information

Michael B. Giles, Mathematical Institute and Oxford-Man Institute of Quantitative Finance, Oxford University, Oxford, United Kingdom.
Email: [email protected]

Benjamin J. Waterhouse, School of Mathematics and Statistics, University of New South Wales, Sydney, Australia.
Email: [email protected]

Radon Series Comp. Appl. Math 8, 183–203

© de Gruyter 2009

Modelling default and prepayment using Lévy processes: an application to asset backed securities

Henrik Jönsson, Wim Schoutens, and Geert Van Damme

Abstract. The securitisation of financial assets is a form of structured finance, developed by the U.S. banking world in the early 1980s (in Mortgage-Backed-Securities format) in order to reduce regulatory capital requirements by removing and transferring risk from the balance sheet to other parties. Today, virtually any form of debt obligations and receivables has been securitised, resulting in approximately $2.5 trillion of ABS outstanding in the U.S. alone*, a market which is rapidly spreading to Europe, Latin America and Southeast Asia. Though no two ABS contracts are the same and therefore each deal requires its very own model, there are three important features which appear in virtually any securitisation deal: default risk, Loss-Given-Default and prepayment risk. In this paper we will only be concerned with default and prepayment, and discuss a number of traditional (continuous) and Lévy-based (pure jump) methods for modelling the latter risks. After briefly explaining the methods and their underlying intuition, the models are applied to a simple ABS deal in order to determine the rating of the notes. It turns out that the pure jump models produce lower (i.e. more conservative) ratings than the traditional methods (e.g. Vasicek), which are clearly incapable of capturing the shock-driven nature of losses and prepayments.

Key words. Lévy processes, default probability, prepayment probability, rating, asset-backed securities.

AMS classification. 60G35, 62P05, 91B28, 91B70.

1 Introduction

Securitisation is the process whereby an institution packs and sells a number of financial assets to a special entity, created specifically for this purpose and therefore termed the Special Purpose Entity (SPE) or Special Purpose Vehicle (SPV), which funds this purchase by issuing notes secured by the revenues from the underlying pool of assets. In general, we can say that securitisation is the transformation of illiquid assets (for instance, mortgages, auto loans, credit card receivables and home equity loans) into liquid assets (marketable securities that can be sold in securities markets). This form of structured finance was initially developed by the U.S. banking world

* Source: SIFMA, Q2 2008. First author: H. Jönsson is funded by the European Investment Bank's EIBURS programme "Quantitative Analysis and Analytical Methods to Price Securitization Deals". Part of this research has been done during the time while H. Jönsson was an EU-Marie Curie Intra-European Fellow with funding from the European Community's Sixth Framework Programme (MEIF-CT-2006-041115).

184

H. Jönsson, W. Schoutens, and G. Van Damme

in the early 1980s (in Mortgage-Backed-Securities format) in order to reduce regulatory capital requirements by removing and transferring risk from the balance sheet to other parties. Over the years, however, the technique has spread to many other industries (also outside the U.S.) and the goal shifted from reducing capital requirements to funding and hedging. Today, virtually any form of debt obligations and receivables has been securitised, with companies showing a seemingly infinite creativity in allocating the revenues from the pool to the noteholders (respecting their seniority). This results in an approximately $2.5 trillion ABS market in the U.S. alone, which is rapidly spreading to Europe, Latin America and Southeast Asia.

Unlike the nowadays very popular Credit Default Swap, ABS contracts are not yet standardised. This lack of uniformity implies that each deal requires a new model. However, there are certain features that emerge in virtually any ABS deal, the most important ones of which are default risk, amortisation of principal value (and thus prepayment risk) and Loss-Given-Default (LGD). Since defaults, losses and accelerated principal repayments can substantially alter the projected cashflows and therefore the planned investment horizon, it is of key importance to adequately describe and model these phenomena when pricing securitisation deals.

In the current ABS practice, the probability of default is generally modelled by means of a sigmoid function, such as the Logistic function, or by Vasicek's one-factor model, whereas the prepayment rate and the LGD rate are assumed to be constant (or at least deterministic) over time and independent of default. However, it is intuitively clear that each of these events comes unexpectedly and is generally driven by the overall economy, hence infecting many borrowers at the same time and causing jumps in the default and prepayment term structures. Therefore it is essential to model the latter by stochastic processes that include jumps. Furthermore, it is unrealistic to assume that prepayment rates and loss rates are time-independent and uncorrelated, either with each other or with default rates. For instance, a huge economic downturn will most likely result in a large number of defaults and a significant increase of the interest rates, causing huge losses and a decrease in prepayments. Reality indeed shows a negative correlation between default and prepayment.

In this paper, we propose a number of alternative techniques that can be applied to stochastically model default, prepayment and Loss-Given-Default, introducing dependence between the latter as well. The models we propose are based on Lévy processes, a well-known family of jump-diffusion processes that have already proven their modelling abilities in other settings like equity and fixed income (cf. Schoutens [12]).

The text is organised as follows. In the following section we present four models for the default term structure. In Section 3 we discuss three models for the prepayment term structure. Numerical results are presented in Section 4, where the default and prepayment models are built into a cashflow model in order to determine the cumulative expected loss rate, the Weighted Average Life (WAL) and the corresponding rating of two subordinated notes of a simple ABS deal. Section 5 concludes the paper.

2 Default models

In this section we will briefly discuss four models for the default term structure, respectively based on

1. the generalised Logistic function;
2. a strictly increasing Lévy process;
3. Vasicek's Normal one-factor model;
4. the generic one-factor Lévy model [1], with an underlying shifted Gamma process.

We will focus on the time interval between the issue (t = 0) of the ABS notes and the weighted average time to maturity (t = T) of the underlying assets. In the sequel we will use the term default curve to refer to the default term structure. By default distribution, we mean the distribution of the cumulative default rate at time T. Hence, the endpoint of the default curve is a random draw from the default distribution.

2.1 Generalised logistic default model

Traditional methods typically use a sigmoid (S-shaped) function to model the term structure of defaults. One famous example of such sigmoid functions is the (generalised) Logistic function (Richards [10]), defined as

F(t) = a / (1 + b e^{−c(t−t0)}),   (2.1)

where F(t) satisfies the following ODE

dF(t)/dt = c (1 − F(t)/a) F(t),   (2.2)

with a, b, c, t0 > 0 being constants and t ∈ [0, T]. In the context of default curve modelling, P_d(t) := F(t) is the cumulative default rate at time t. Note that when b = 1, t0 corresponds to the inflection point in the loss buildup, i.e. P_d grows at an increasing rate before time t0 and at a decreasing rate afterwards. Furthermore, lim_{t→+∞} F(t) = a, thus a controls the right endpoint of the default curve. For sufficiently large T we can therefore approximate the cumulative default rate at maturity by a, i.e. P_d(T) ≈ a. Hence, a is a random draw from a predetermined default distribution (e.g. the Log-Normal distribution) and each different value of a will give rise to a new default curve. This makes the Logistic function suitable for scenario analysis. Finally, the parameter c controls the spread of the Logistic curve around t0. In fact, c determines the growth rate of the Logistic curve, i.e. the proportional increase in one unit of time, as can be seen from equation (2.2). Values of c between 0.10 and 0.20 produce realistic default curves.

The left panel of Figure 2.1 shows five default curves, generated by the Logistic function with parameters b = 1, c = 0.1, t0 = 55, T = 120 and decreasing values of


a, drawn from a Log-Normal distribution with mean 0.20 and standard deviation 0.10. Notice the apparent inflection in the default curve at t = 55. The probability density function (p.d.f.) of the cumulative default rate at time T is shown on the right.
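As a minimal sketch (not the authors' code), Logistic default curves in the Figure 2.1 setup can be generated as follows; the parameters of the underlying Normal are backed out from the stated Log-Normal mean 0.20 and standard deviation 0.10.

```python
import numpy as np

# Generalised Logistic default curve (2.1) with the Figure 2.1 parameters.
b, c, t0, T = 1.0, 0.1, 55.0, 120

def logistic_curve(a, t):
    # P_d(t) = F(t) = a / (1 + b*exp(-c*(t - t0))), eq. (2.1)
    return a / (1.0 + b * np.exp(-c * (t - t0)))

# Log-Normal parameters of the endpoint a, matching mean m = 0.20, std s = 0.10.
m, s = 0.20, 0.10
sig2 = np.log(1.0 + (s / m) ** 2)   # variance of the underlying Normal
mu = np.log(m) - 0.5 * sig2         # mean of the underlying Normal

rng = np.random.default_rng(0)
t = np.arange(0, T + 1)
for a in rng.lognormal(mu, np.sqrt(sig2), size=5):
    curve = logistic_curve(a, t)
    # Drawbacks noted in the text: F(0) > 0 and F(T) < a.
    assert curve[0] > 0 and curve[-1] < a
```

Each new draw of a simply rescales the same deterministic S-shape, which is what makes this model convenient for scenario analysis but unable to produce jumps.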


Figure 2.1. Logistic default curve (left) and Log-Normal default distribution (right).

It has to be mentioned that the Logistic function (2.1) has several drawbacks when it comes to modelling a default curve. First of all, assuming real values for the parameters, the Logistic function does not start at 0, i.e. P_d(0) > 0. Moreover, a is only an approximation of the cumulative default rate at maturity; in general we have that P_d(T) < a. Hence P_d has to be rescaled in order to guarantee that a is indeed the cumulative default rate in the interval [0, T]. Secondly, the Logistic function is a deterministic function of time (the only source of randomness is in the choice of the endpoint), whereas defaults generally come as a surprise. And finally, the Logistic function is continuous and hence unable to deal with the shock-driven behaviour of defaults.

In the next sections we will describe three default models that (partly) solve the above mentioned problems. The first two problems will be solved by using a stochastic (instead of deterministic) process that starts at 0, whereas the shocks will be captured by introducing jumps in the model.

2.2 Lévy portfolio default model

In order to tackle the shortcomings of the Logistic model, we propose to model the default term structure by the process¹

P_d = { P_d(t) = 1 − exp(−λ_t^d), t ≥ 0 },   (2.3)

with λ^d = {λ_t^d : t ≥ 0} a strictly increasing Lévy process. The latter introduces both jump dynamics and stochasticity, i.e. P_d(t) is a random variable, for all t > 0. Therefore, in order to simulate a default curve, we must first draw a realization of the process λ^d. Moreover, P_d(0) = 0, since by the properties of a Lévy process λ_0^d = 0. In this paper we assume that λ^d is a Gamma process G = {G_t : t ≥ 0} with shape parameter a and scale parameter b, hence λ_t^d ∼ Gamma(at, b), for t > 0. Hence, the cumulative default rate at maturity follows the law 1 − exp(−λ_T^d), where λ_T^d ∼ Gamma(aT, b). Using this result, the parameters a and b can be found as the solution to the following system of equations

E[1 − exp(−λ_T^d)] = µ_d;
Var[1 − exp(−λ_T^d)] = σ_d²,   (2.4)

for predetermined values of the mean µ_d and standard deviation σ_d of the default distribution. Explicit expressions for the left hand sides of (2.4) can be found by noting that the expected value and the variance can be written in terms of the characteristic function of the Gamma distribution.

The left panel of Figure 2.2 shows five default curves, generated by the process (2.3) with parameters a ≈ 0.024914, b ≈ 12.904475 and T = 120, such that the mean and standard deviation of the default distribution are 0.20 and 0.10. Note that all curves start at zero, include jumps and are fully stochastic functions of time, in the sense that in order to construct a new default curve one has to rebuild the whole intensity process over [0, T], instead of just changing its endpoint. The corresponding default p.d.f. is again shown on the right. Recall, in this case, that P_d(T) follows the law 1 − exp(−λ_T^d), with λ_T^d ∼ Gamma(aT, b).

¹ This can be linked to the world of intensity-based default modelling. See Lando [6] and Schönbucher [11] for a more detailed exposition. Cariboni and Schoutens [3] incorporate jump dynamics into intensity models.
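A minimal simulation of (2.3), assuming a monthly grid and treating the quoted b as a rate parameter (so numpy's scale argument is 1/b); under that reading the quoted a and b reproduce the stated mean and standard deviation of the default distribution.

```python
import numpy as np

# Sketch of the Levy portfolio default model (2.3): P_d(t) = 1 - exp(-lambda_t),
# with lambda a Gamma process.  a and b are the values quoted in the text; note
# that b enters here as a rate parameter (numpy's gamma takes scale = 1/b).
a, b, T = 0.024914, 12.904475, 120     # monthly grid, dt = 1

def default_curve(rng):
    # A Gamma process has independent Gamma(a*dt, 1/b) increments.
    lam = np.cumsum(rng.gamma(shape=a, scale=1.0 / b, size=T))
    return 1.0 - np.exp(-lam)          # P_d(t) for t = 1, ..., T

rng = np.random.default_rng(1)
end_points = np.array([default_curve(rng)[-1] for _ in range(20000)])
print(end_points.mean(), end_points.std())   # approximately 0.20 and 0.10
```

Each curve is strictly increasing and jump-driven, and a new scenario requires rebuilding the whole increment path, exactly as described in the text.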

Probability density of 1−e

0.5

0.4

0.4

, with λ(T) ∼ Gamma(aT = 2.99, b = 12.90)

−λ(T)

0.5

X ∼ 1−e

d

P (t)

Cox default curve (µ = 0.20 , σ = 0.10)

0.3

0.3

0.2

0.2

0.1

0.1

0

0

20

40

60 t

80

100

120

0

0

0.5

1

1.5

2

2.5

3

3.5

4

4.5

f

X

Figure 2.2. Lévy portfolio default curves (left) and corresponding default distribution (right).

188

H. J¨onsson, W. Schoutens, and G. Van Damme

2.3 Normal one-factor default model

The Normal one-factor (structural) model (Vasicek [14], Li [7]) models the cash position V^(i) of a borrower, where V^(i) is described by a geometric Brownian motion,

V_T^(i) = V_0^(i) exp( a(µ_T^(i), σ_T^(i)) + b(µ_T^(i), σ_T^(i)) W_T^(i) )
        =_d V_0^(i) exp( a(µ_T^(i), σ_T^(i)) + b(µ_T^(i), σ_T^(i)) Z_i ),   (2.5)

for i = 1, 2, ..., N, with N the number of loans in the asset pool. Here =_d denotes equality in distribution and Z_i ∼ N(0, 1). Furthermore, Z_i satisfies

Z_i = √ρ X + √(1 − ρ) X_i,   (2.6)

with X, X_1, X_2, ..., X_N i.i.d. N(0, 1). It is easy to verify that ρ = Corr(Z_i, Z_j), for all i ≠ j. The latter parameter is calibrated to match a predetermined value for the standard deviation σ of the default distribution.

A borrower is said to default at time t if his financial situation has deteriorated so dramatically that V_T^(i) hits a predetermined lower bound B_t^d, which (as can be seen from (2.5)) is equivalent to saying that Z_i hits some barrier H_t^d. The latter barrier is chosen such that the expected probability of default before time t matches the default probabilities observed in the market, where it is assumed that the latter follow a homogeneous Poisson process with intensity λ, i.e. H_t^d satisfies

Pr[Z_i ≤ H_t^d] = Φ(H_t^d) = Pr[N_t > 0] = 1 − e^{−λt},   (2.7)

where λ is set such that Pr[Z_i ≤ H_T^d] = µ_d, with µ_d the predetermined value for the mean of the default distribution. From (2.7) it then follows that

λ = −(1/T) log[1 − µ_d]   (2.8)

and hence

H_t^d = Φ^{−1}( 1 − (1 − µ_d)^{t/T} ),   (2.9)

with Φ the standard Normal cumulative distribution function. Given a sample of (correlated) standard Normal random variables Z = (Z_1, Z_2, ..., Z_N), the default curve is then given by

P_d(t; Z) = #{Z_i ≤ H_t^d : i = 1, 2, ..., N} / N,   t ≥ 0.   (2.10)

In order to simulate default curves, one must thus first generate a sample of standard Normal random variables Z_i satisfying (2.6), and then, at each (discrete) time t, count the number of Z_i's that are less than or equal to the value of the default barrier H_t^d at that time.

The left panel of Figure 2.3 shows five default curves, generated by the Normal one-factor model (2.6) with ρ ≈ 0.121353, such that the mean and standard deviation of the


default distribution are 0.20 and 0.10. All curves start at zero and are fully stochastic, but unlike the Lévy portfolio model the Normal one-factor default model does not include any jump dynamics. Therefore, as will be seen later, this model is unable to deal with the shock-driven nature of defaults and as such generates ratings that are too optimistic (high). The corresponding default p.d.f. is again shown in the right panel.
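The simulation recipe above can be sketched as follows; µ_d = 0.20, ρ ≈ 0.121353 and T = 120 are taken from the text, while the pool size N = 1000 and the monthly time grid are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

# Sketch of the Normal one-factor default model, eqs. (2.6)-(2.10).
mu_d, rho, T, N = 0.20, 0.121353, 120, 1000

inv_cdf = NormalDist().inv_cdf
t = np.arange(1, T + 1)
# Default barrier (2.9): H_t = Phi^{-1}(1 - (1 - mu_d)^(t/T)), nondecreasing in t.
H = np.array([inv_cdf(1.0 - (1.0 - mu_d) ** (ti / T)) for ti in t])

def default_curve(rng):
    X = rng.standard_normal()                       # systematic factor
    Xi = rng.standard_normal(N)                     # idiosyncratic factors
    Z = np.sqrt(rho) * X + np.sqrt(1 - rho) * Xi    # eq. (2.6)
    return (Z[:, None] <= H[None, :]).mean(axis=0)  # eq. (2.10)

rng = np.random.default_rng(2)
curves = np.array([default_curve(rng) for _ in range(500)])
print(curves[:, -1].mean())   # theoretical value: Phi(H_T) = mu_d = 0.20
```

Because the latent variables Z_i are fixed for the whole horizon and the barrier moves continuously, each simulated curve is nondecreasing but contains no common jumps, which is exactly the limitation discussed above.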


Figure 2.3. Normal one-factor default curves (left) and corresponding default distribution (right).

2.4 Generic one-factor Lévy default model

The generic one-factor Lévy model [1] is comparable to, and in fact is a generalisation of, the Normal one-factor model. Instead of describing a borrower's cash position by a geometric Brownian motion, V^(i) is now modeled with a geometric Lévy model, i.e.

V_T^(i) = V_0^(i) exp( A_T^(i) ),   (2.11)

for i = 1, 2, ..., N. The process A^(i) = {A_t^(i) : t ≥ 0} is a Lévy process and satisfies

A_T^(i) = Y_ρ + Y_{1−ρ}^(i),   (2.12)

with Y, Y^(1), Y^(2), ..., Y^(N) i.i.d. Lévy processes, based on the same mother infinitely divisible distribution L, such that E[Y_1] = 0 and Var[Y_1] = 1, which implies that Var[Y_t] = t. From this it is clear that E[A_T^(i)] = 0 and Var[A_T^(i)] = 1, such that Corr(A_T^(i), A_T^(j)) = ρ, for all i ≠ j. As with the Normal one-factor model, the cross-correlation ρ will be calibrated to match a predetermined standard deviation for the default distribution.

As for the Normal one-factor model, we again say that a borrower defaults at time t if A_T^(i) hits a predetermined barrier H_t^d at that time, where H_t^d satisfies

Pr[A_T^(i) ≤ H_t^d] = 1 − e^{−λt},   (2.13)

with λ given by (2.8). In this paper we assume that Y, Y^(1), Y^(2), ..., Y^(N) are i.i.d. shifted Gamma processes, i.e. Y = {Y_t = µ̃t − G_t : t ≥ 0}, where G is a Gamma process with shape parameter a and scale parameter b. From (2.12) and the fact that a Gamma distribution is infinitely divisible it then follows that

A_T^(i) =_d µ̃ − X̃ =_d µ̃ − [X + X_i],   (2.14)

with X ∼ Gamma(aρ, b) and X_i ∼ Gamma(a(1 − ρ), b) mutually independent and X̃ ∼ Gamma(a, b). If we take µ̃ = a/b and b = √a, we ensure that E[A_T^(i)] = 0, Var[A_T^(i)] = 1 and Corr(A_T^(i), A_T^(j)) = ρ, for all i ≠ j. Furthermore, from (2.13), (2.14) and the expression for λ it follows that

H_t^d = µ̃ − Γ_{a,b}^{−1}( (1 − µ_d)^{t/T} ),   (2.15)

where Γ_{a,b} denotes the cumulative distribution function of a Gamma(a, b) distribution.

In order to simulate default curves, we first have to generate a sample of random variables A_T = (A_T^(1), A_T^(2), ..., A_T^(N)) satisfying (2.12), with Y, Y^(1), Y^(2), ..., Y^(N) i.i.d. shifted Gamma processes, and then, at each (discrete) time t, count the number of A_T^(i)'s that are less than or equal to the value of the default barrier H_t^d at that time. Hence, the default curve is given by

P_d(t; A_T) = #{A_T^(i) ≤ H_t^d : i = 1, 2, ..., N} / N,   t ≥ 0.   (2.16)

The left panel of Figure 2.4 shows five default curves, generated by the Gamma one-factor model (2.12) with (µ̃, a, b) = (1, 1, 1) and ρ ≈ 0.095408, such that the mean and standard deviation of the default distribution are 0.20 and 0.10. Again, all curves start at zero and are fully stochastic. Furthermore, when comparing the curves of the one-factor shifted Gamma-Lévy model (hereafter termed the shifted Gamma-Lévy model or Gamma one-factor model) to the ones generated by the Lévy portfolio default model, one might be tempted to conclude that the former model does not include jumps. However, it does, but the jumps are embedded in the underlying dynamics of the asset return A_T. The corresponding default p.d.f. is shown in the right panel. Compared to the previous three default models, the default p.d.f. generated by the shifted Gamma-Lévy model seems to be squeezed around µ_d and has a significantly larger kurtosis.

It should also be mentioned that the latter default distribution has a rather heavy right tail, with a substantial probability mass at the 100% default rate. This can be explained by looking at the right-hand side of equation (2.14). Since both terms between brackets are strictly positive and hence cannot compensate each other (unlike the Normal one-factor model), A_T^(i) is bounded from above by µ̃. Hence, starting with a large systematic risk factor X, things can only get worse, i.e. the term between brackets can only increase and therefore A_T^(i) can only decrease, when adding the idiosyncratic risk factor X_i. This implies that when we have a substantially large common factor (close to Γ_{a,b}^{−1}(1 − µ_d), cf. (2.15)), it is very likely that all borrowers will default, i.e. that A_T^(i) ≤ H_T^d for all i = 1, 2, ..., N.
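A sketch of the shifted Gamma-Lévy simulation under the quoted parameters; with (µ̃, a, b) = (1, 1, 1) the Gamma(1, 1) law is Exp(1), so the quantile needed in the barrier (2.15) is −log(1 − p) in closed form. The pool size N = 1000 is an illustrative assumption.

```python
import numpy as np

# Sketch of the one-factor shifted Gamma-Levy default model (2.12)-(2.16)
# with (mu~, a, b) = (1, 1, 1) and rho ~ 0.095408 as in the text.
mu_t, rho, mu_d, T, N = 1.0, 0.095408, 0.20, 120, 1000

t = np.arange(1, T + 1)
# Barrier (2.15); for a = b = 1 the Gamma(1,1) quantile is -log(1 - p).
H = mu_t + np.log(1.0 - (1.0 - mu_d) ** (t / T))

def default_curve(rng):
    X = rng.gamma(shape=rho)                  # systematic factor, Gamma(a*rho, b)
    Xi = rng.gamma(shape=1.0 - rho, size=N)   # idiosyncratic, Gamma(a*(1-rho), b)
    A = mu_t - (X + Xi)                       # asset returns, eq. (2.14)
    return (A[:, None] <= H[None, :]).mean(axis=0)   # eq. (2.16)

rng = np.random.default_rng(3)
curves = np.array([default_curve(rng) for _ in range(500)])
print(curves[:, -1].mean())   # theoretical value mu_d = 0.20
```

A single large draw of the common factor X drags every A_T^(i) below the terminal barrier at once, which is the mechanism behind the heavy right tail of the default distribution noted in the text.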


Figure 2.4. Gamma 1-factor default curves (left) and corresponding default distribution (right).

3 Prepayment models

In this section we will briefly discuss three models for the prepayment term structure, respectively based on

1. constant prepayment;
2. a strictly increasing Lévy process;
3. Vasicek's Normal one-factor model.

As before, we will use the terms prepayment curve and prepayment distribution to refer to the prepayment term structure and the distribution of the cumulative prepayment rate at maturity T.

3.1 Constant prepayment model

The idea of constant prepayment stems from the former Public Securities Association¹ (PSA). The basic assumption is that the (monthly) amount of prepayment begins at 0 and rises at a constant rate of increase α until reaching its characteristic steady state rate at time t00, after which the prepayment rate remains constant until maturity T. Note that t00 is generally not the same as the inflection point t0 of the default curve. The corresponding marginal (e.g. monthly) and cumulative prepayment curves are given by

cpr(t) = { αt,    0 ≤ t ≤ t00;
           αt00,  t00 ≤ t ≤ T,   (3.1)

and

CPR(t) = { αt²/2,                0 ≤ t ≤ t00;
           −αt00²/2 + αt00 t,    t00 ≤ t ≤ T.   (3.2)

From (3.1) it is obvious that the marginal prepayment rate increases at a speed of α per period before time t00 and remains constant afterwards. Consequently, the cumulative prepayment curve (3.2) increases quadratically on the interval [0, t00] and linearly on [t00, T]. Given t00 and CPR(T), i.e. the cumulative prepayment rate at maturity, the constant rate of increase α equals

α = CPR(T) / (T t00 − t00²/2).   (3.3)

Hence, once t00 and CPR(T) are fixed, the marginal and cumulative prepayment curves are completely deterministic. Moreover, the CPR model does not include jumps. Due to these features, the CPR model is an unrealistic representation of real-life prepayments, which are shock-driven and typically show some random effects. In the next sections we will describe two models that (partially) solve these problems.

Figure 3.1 shows the marginal and cumulative prepayment curve, in case the steady state t00 is reached after 48 months and the cumulative prepayment rate at maturity equals CPR(T) = 0.20. The corresponding constant rate of increase is α ≈ 0.434 bps.

¹ In 1997 the PSA changed its name to The Bond Market Association (TBMA), which merged with the Securities Industry Association on November 1, 2006, to form the Securities Industry and Financial Markets Association (SIFMA).

3.2 Lévy portfolio prepayment model

The Lévy portfolio prepayment model is completely analogous to the Lévy portfolio default model described in Section 2.2, with λ_t^d replaced by λ_t^p. Although there is empirical evidence that defaults and prepayments are negatively correlated, in the simulation study in Section 4 we assumed the above mentioned processes to be mutually independent. Evidently, the Lévy portfolio prepayment curves also start at zero, are fully stochastic and include jumps, solving the above mentioned problems of the CPR model.

3.3 Normal one-factor prepayment model

The Normal one-factor prepayment model starts from the same underlying philosophy as its default equivalent of Section 2.3. We again model the cash position V^(i) of a borrower. Just as a borrower is said to default if his financial situation has deteriorated so



Figure 3.1. Marginal (left) and cumulative (right) constant prepayment curve.

dramatically that V^(i) hits some predetermined lower bound B_t^d, we state that a borrower will decide to prepay if his financial health has improved sufficiently, so that V^(i) (or equivalently Z_i) hits a prespecified upper bound B_t^p (H_t^p). The barrier H_t^p is chosen such that the expected probability of prepayment before time t equals the (observed) cumulative prepayment rate CPR(t), given by (3.2), i.e.

Pr[Z_i ≥ H_t^p] = 1 − Φ(H_t^p) = CPR(t),   (3.4)

which implies

H_t^p = Φ^{−1}( 1 − CPR(t) ),   (3.5)

with Φ the standard Normal cumulative distribution function.

In order to simulate prepayment curves, we must thus draw a sample of standard Normal random variables Z = (Z_1, Z_2, ..., Z_N) satisfying (2.6), and then, at each (discrete) time t, count the number of Z_i's that are greater than or equal to the value of the prepayment barrier H_t^p at that time. The prepayment curve is then given by

P_p(t; Z) = #{Z_i ≥ H_t^p : i = 1, 2, ..., N} / N,   t ≥ 0.   (3.6)

The left panel of Figure 3.2 shows five prepayment curves, generated by the Normal one-factor model (2.6) with ρ ≈ 0.121353, such that the mean and standard deviation of the prepayment distribution are 0.20 and 0.10 (as for the default model). The fact that the cross-correlation coefficient ρ is the same as the one of the default model is a direct consequence of the symmetry of the Normal distribution. The curves start at zero and are fully stochastic, but the model lacks jump dynamics. As will be seen later on, ignoring prepayment shocks results in an overestimation of the weighted average life of an ABS, which in turn produces higher ratings. The corresponding prepayment p.d.f. is shown in the right panel.
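A sketch of the barrier construction (3.5) and the prepayment curve (3.6), reusing the CPR profile (3.2) as the barrier input; ρ ≈ 0.121353, t00 = 48 and CPR(T) = 0.20 are from the text, and the pool size N = 1000 is an illustrative assumption.

```python
import numpy as np
from statistics import NormalDist

# Sketch of the Normal one-factor prepayment model, eqs. (3.4)-(3.6).
rho, N, T, t00, CPR_T = 0.121353, 1000, 120.0, 48.0, 0.20
alpha = CPR_T / (T * t00 - t00**2 / 2)

def CPR(t):
    # Cumulative prepayment rate, eq. (3.2), used as the barrier target.
    return alpha * t * t / 2 if t <= t00 else alpha * t00 * (t - t00 / 2)

inv_cdf = NormalDist().inv_cdf
t = np.arange(1, int(T) + 1)
H_p = np.array([inv_cdf(1.0 - CPR(ti)) for ti in t])    # eq. (3.5)

def prepayment_curve(rng):
    Z = np.sqrt(rho) * rng.standard_normal() \
        + np.sqrt(1 - rho) * rng.standard_normal(N)     # eq. (2.6)
    return (Z[:, None] >= H_p[None, :]).mean(axis=0)    # eq. (3.6)

rng = np.random.default_rng(4)
curves = np.array([prepayment_curve(rng) for _ in range(500)])
print(curves[:, -1].mean())   # theoretical value CPR(T) = 0.20
```

Since CPR(t) is increasing, the upper barrier H_t^p decreases over time, so each simulated prepayment curve is nondecreasing, as it should be.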


H. J¨onsson, W. Schoutens, and G. Van Damme

Figure 3.2. Normal one-factor prepayment curves P_p(t) (left; µ = 0.20, σ = 0.10) and the corresponding probability density of the cumulative prepayment rate at time T (right; ρ ≈ 0.12135).

4 Numerical results

4.1 Introduction

One can now build these default and prepayment models into any scenario generator for rating and analysing asset-backed securities. Any combination of the default and prepayment models described above is meaningful, except for the combination of the shifted Gamma(-Lévy) default model with the Normal one-factor prepayment model. In that case the borrower's cash position would be modelled by two different processes: one to obtain his default probability and another one for his prepayment probability, which is neither consistent nor realistic. Hence, altogether we can construct 11 different scenario generators. Table 4.1 summarises the possible combinations of default and prepayment models.

                                    Prepayment models
Default models             CPR    Lévy portfolio    Normal one-factor
Logistic                   ok     ok                ok
Lévy portfolio             ok     ok                ok
Normal one-factor          ok     ok                ok
Gamma one-factor           ok     ok                nok

Table 4.1. Possible combinations of default and prepayment models.

We will now apply each of the above mentioned 11 default–prepayment combinations to derive the expected loss, the WAL and the corresponding rating of two (subordinated) notes backed by a homogeneous pool of commercial loans. Table 4.2 lists the specifications of the ABS deal under consideration (cf. Raynes & Rutledge [9]).

ASSETS
Initial balance of the asset pool             V0        $30,000,000
Number of loans in the asset pool             N0        2,000
Weighted Average Maturity of the assets       WAM       10 years
Weighted Average Coupon of the assets         WAC       12% p.a.
Payment frequency                                       monthly
Reserve target                                          5%
Eligible reinvestment rate                              3.92% p.a.
Loss-Given-Default                            LGD       50%
Lag                                                     5 months

LIABILITIES
Initial balance of the senior note            A0        $24,000,000
Premium of the senior note                    rA        7% p.a.
Initial balance of the subordinated note      B0        $6,000,000
Premium of the subordinated note              rB        9% p.a.
Servicing fee                                 rsf       1% p.a.
Servicing fee shortfall rate                  rsf−sh    20% p.a.
Payment method                                          Pro-rata / Sequential

Table 4.2. Specifications of the ABS deal.

Note that the cash collected (from the pool) and distributed (to the note holders) by the SPV in a particular period contains both principal and interest. Each period, principal (scheduled, prepaid and recoveries from default) and interest collections are combined into a pool, which is then used to pay the interest and principal (in this order) due to the investors. Whatever cash is left after fulfilling the interest obligations is used to pay the principal due (scheduled principal + prepaid principal + defaulted face value) on the notes, according to the priority rules. From this it is evident that default and prepayment will have a significant effect on the amortisation of the notes and, consequently, on the interest received by the note holders.

Furthermore, as can be seen from Table 4.2, the ABS deal under consideration benefits from credit enhancement in the form of a reserve account, required to be equal to 5% of the balance of the asset pool at the end of each payment period. The funds available in this account are reinvested at the 10-year US Treasury rate (of May 22, 2008) and will be used to fulfil the payment obligations in case the collections in a specific period are insufficient to cover the expenses. In order to achieve the targeted reserve amount of 5% of the asset pool's balance at the end of each payment period, any excess cash is first used to replenish the reserve account before being transferred to the owners of the SPV. Hence it is possible that the owners of the SPV are not compensated in certain periods, or in the worst case not at all. On the other hand, there may also be periods in which the SPV owners receive a substantial amount of cash. This happens especially in periods with a high number of defaults and/or prepayments, when the outstanding balance of the asset pool suddenly decreases very fast, requiring the reserve account to be reduced in order to match the targeted 5% of the asset pool at the end of the payment period.

Furthermore, unless explicitly stated otherwise, the parameter values mentioned in Table 4.3 will be used.

Mean of the default distribution                      µd     20%
Standard deviation of the default distribution        σd     10%
Mean of the prepayment distribution                   µp     20%
Standard deviation of the prepayment distribution     σp     10%
Parameters of the Logistic curve                      b      1
                                                      c      0.1
                                                      t0     55 months
Steady state of the prepayment curve                  t00    45 months

Table 4.3. Parameter values for the default and prepayment models.

Finally, before moving on to the actual sensitivity analysis, we introduce two important concepts, the DIRR and the WAL of an ABS. By DIRR we mean the difference between the promised and the realised internal rate of return. The WAL is defined as

WAL = (1/P) [ Σ_{t=1}^{T} t · P_t + T ( P − Σ_{t=1}^{T} P_t ) ],    (4.1)

where P_t is the total principal paid at time t and P is the initial balance of the note. The term between the square brackets accounts for principal shortfall, in the sense that if the note is not fully amortised after its legal maturity, we assume that the non-amortised amount is redeemed at the legal maturity date¹. Clearly, both the DIRR and the WAL are non-negative. Furthermore, by inspecting the rating tables mentioned in [4] and [5], it is obvious that there is some interplay between the DIRR and the WAL: of two notes with the same DIRR, the one with the higher WAL will have the higher rating. For instance, consider two notes A1 and A2 with a DIRR of 0.03%, but with respective WALs of 4 and 5 years. Then note A1 will get an Aa3 rating, whereas the A2 note gets an Aa2 rating. Conversely, of two notes with the same WAL, the one with the higher DIRR will get the lower rating.

¹This method is proposed in Mazataud and Yomtov [8]. Moreover, in Moody's ABSROM™ application (v 1.0) the WAL of a note is calculated as (Σ_{t=0}^{T−1} F_t)/F_0, with F_t the note's outstanding balance at time t; hence F_0 = P. It is left as an exercise to the reader to verify that this formula is equivalent to formula (4.1).
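As a quick numerical check, formula (4.1) and the outstanding-balance variant from the footnote can be compared directly. The payment vector below is made up for illustration; it is not taken from the deal in Table 4.2.

```python
def wal(principal_payments, P, T):
    # Formula (4.1): WAL = (1/P) * [ sum_t t*P_t + T*(P - sum_t P_t) ]
    weighted = sum(t * pt for t, pt in enumerate(principal_payments, start=1))
    paid = sum(principal_payments)
    return (weighted + T * (P - paid)) / P

def wal_from_balances(balances):
    # Footnote variant: sum_{t=0}^{T-1} F_t / F_0, with F_t the outstanding balance
    return sum(balances[:-1]) / balances[0]

# Example: P = 100, T = 5 periods, with a principal shortfall of 10 at maturity
payments = [20, 20, 20, 20, 10]
balances = [100, 80, 60, 40, 20, 10]   # F_t = P - cumulative principal paid
print(wal(payments, P=100, T=5))        # -> 3.0
print(wal_from_balances(balances))      # -> 3.0
```

Both formulas give the same value, illustrating the equivalence claimed in the footnote; with monthly periods the result would be divided by 12 to express the WAL in years.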


4.2 Sensitivity analysis

Tables 5.1–5.3 contain ratings (based on Moody's Idealised Cumulative Expected Loss Rates²), DIRRs and WALs of the two ABS notes, obtained with each of the 11 default–prepayment combinations, for several choices of µd and µp. The figures in these tables are averages based on a Monte Carlo simulation with 1,000,000 scenarios. More specifically, Table 5.1 investigates what happens to the ratings if µd is changed while holding µp and σp constant³, whereas Table 5.2 provides insight into the impact of a change in µp while keeping µd and σd fixed. Unless stated otherwise, the (principal) collections from the asset pool are distributed across the note holders according to a pro-rata payment method, i.e. proportionally to the notes' outstanding balances. Table 5.3, however, presents the ratings under both the pro-rata and the sequential payment method (in the latter, the subordinated B note starts amortising only after the outstanding balance of the senior A note is fully redeemed), in both cases assuming that a reserve account exists. The effect of having no reserve account in the pro-rata case is also shown in Table 5.3.

²See Cifuentes and O'Connor [4] and Cifuentes and Wilcox [5] for further details.
³In order to keep µp and σp fixed, the cross-correlation ρ must also remain fixed, since there is a unique parameter ρ for each pair (µp, σp) (or, equivalently, (µd, σd)). This explains why σd also changes when µd changes.

4.2.1 Influence of µd

From Table 5.1 we may conclude that increasing the average cumulative default rate leaves the credit rating of the notes unchanged or lowers it, for all combinations of default and prepayment models.

Regarding model dependence, we first analyse the rating columns for the A note. For µd = 10% we can see that all but the two pairs involving the Gamma one-factor default model give Aaa ratings, indicating that the rating is not very model-dependent for a relatively low cumulative default rate. Increasing µd to 20%, the rating under the Normal one-factor default model stays at Aaa regardless of the prepayment model. For the Logistic default model the rating changes to Aa1 for all combinations of prepayment models, and for the Gamma one-factor model the rating is Aa3. It is only for the Lévy portfolio default model that we can see a small difference between the CPR model and the two other prepayment models. Finally, for µd = 40%, the Lévy portfolio prepayment model in combination with either the Logistic or the Normal one-factor default model gives lower ratings than the other two prepayment models. For the other default models no dependence on the prepayment model can be traced.

Analysing the influence of the prepayment model, it is worth noticing that the Lévy portfolio model always gives the lowest WAL and the highest DIRR for any default model, compared to the other two prepayment models. This can be explained by looking at the typical path of a Lévy portfolio process (cf. Figure 2.2). Such a path does not increase continuously, but moves up in jumps, between which the curve remains rather flat. Translated to the prepayment phenomenon, this means that there will be times when a large number of borrowers decide to prepay, followed by periods with virtually no prepayments, until the next time a substantial share of the remaining debtors prepays. Obviously, this results in a very irregular cash inflow, which causes difficulties when trying to honour the payment obligations. Indeed, as previously explained, in payment periods with a jump in the prepayment rate, the outstanding balance of the asset pool, and consequently the reserve account, is significantly decreased, which in turn increases the probability of future interest and principal shortfalls, leading to higher DIRRs. Moreover, since a shock-driven prepayment model increases the probability that a substantial number of borrowers prepay very early in the life of the loan, it is not surprising that the Lévy portfolio prepayment model produces lower WALs than the other two models. Finally, as explained before, higher DIRRs and lower WALs lead to lower ratings.

The Gamma one-factor model always gives the lowest rating; a look at the DIRR and WAL columns explains why: the DIRRs for the Gamma one-factor model are always much higher than for any of the other default models, while the WALs are almost the same, leading to a lower rating. The Normal one-factor default model in general gives the highest rating, which can be explained by the fact that it produces the lowest DIRRs.

For the B note the general tendency is that the rating is lowered when the mean cumulative default rate is increased. It is worth mentioning that the Normal one-factor model gives the highest rating among the default models, that the Gamma one-factor model gives the lowest rating for µd = 10%, and that the Lévy portfolio model gives the lowest for µd = 40%. Thus, the jump-driven default models produce the lowest ratings. The Lévy portfolio prepayment model combined with the Lévy portfolio, Normal one-factor or Gamma one-factor default model generally gives the lowest rating compared to the other prepayment models, for reasons explained in the previous paragraph.

4.2.2 Influence of µp

The influence of changing the mean cumulative prepayment rate is given in Table 5.2. A comparison with Table 5.1 shows that the ratings are less sensitive to changes in the mean prepayment rate than to changes in the expected default rate, as the rating transitions caused by the former are significantly smaller. Furthermore, all of the above observations concerning specific prepayment or default models remain valid here. In particular, it still holds that the Normal one-factor default model gives the same or a higher rating for both notes than the other default models, and that the jump-driven models give the lowest ratings, for each of the prepayment models.

4.2.3 Influence of the reserve account

Table 5.3 provides insight into the effect of incorporating a reserve account (credit enhancement) into the cash flow waterfall of an ABS deal. The results in this table show no surprises: since assuming there is no reserve account implies that there are


less funds available for reimbursing the note investors (on the contrary, any excess cash is fully transferred to the SPV owners), it is evident that removing the reserve account leads to higher DIRRs and WALs and to lower ratings. This is indeed what we see in Table 5.3. Notice that the effect is greater for the B note, which is of course due to its subordinated status.

4.2.4 Influence of the payment method

Table 5.3 also shows the impact of choosing either the pro-rata or the sequential payment method for allocating the (principal) collections to the different notes. What is clear from the definition of the two payment methods is that sequential payment will shorten the WAL of the A note and increase the WAL of the B note. Consulting Moody's Idealised Cumulative Expected Loss Rate table, one can see that an increase in WAL, keeping the DIRR fixed, results in a higher rating. The expected decrease and increase in WAL for the A note and B note, respectively, are evident. In fact, the WAL of the B note increases on average by a factor of 1.72 (or 3.8 years) when going from pro-rata to sequential payment, whereas the WAL of the A note decreases on average by a factor of 0.82 (or 0.95 years). Thus the change in WAL is much more dramatic for the B note than for the A note. Based only on the change in the WALs, without taking the change in DIRR into account, we would expect the rating of the B note to improve and that of the A note to stay the same or be lowered. However, taking the change in DIRR into account, we see that the actual ratings of both the A note and the B note stay the same or improve when going from pro-rata to sequential payment. The improvement of the A note rating is due to the fact that the DIRR is smaller in the sequential case than in the pro-rata case, compensating for the decrease in WAL. For the B note the changes in the DIRRs are not large enough to counteract the rating improvements due to the increases in WAL.

5 Conclusion

Traditional models for the rating and analysis of ABSs are typically based on Normal distribution assumptions and Brownian-motion-driven dynamics. The Normal distribution belongs to the class of so-called light-tailed distributions, which means that extreme events (shocks, jumps, crashes, etc.) are not incorporated in Normal-distribution-based models. However, looking at empirical data, and certainly in the light of the current financial crisis, it is precisely these extreme events that can have a dramatic impact on the product. In order to make a better assessment, new models incorporating these features are needed. This paper has introduced a whole battery of new models based on more flexible distributions, incorporating extreme events and jumps in the sample paths. We observe that the jump-driven models in general produce lower ratings than the traditional models.

Note A
Model pair                              Rating                    DIRR (bp)                            WAL (year)
                                        10%     20%     40%       10%         20%       40%            10%     20%     40%
Logistic – CPR                          Aaa     Aa1     Aa3       0.026746    0.3466    5.3712         5.4867  5.2742  4.8642
Logistic – Lévy portfolio               Aaa     Aa1     A1        0.039664    0.48683   7.4258         5.343   5.1311  4.729
Logistic – Normal one-factor            Aaa     Aa1     Aa3       0.027104    0.3278    5.1148         5.4869  5.2745  4.8656
Lévy portfolio – CPR                    Aaa     Aaa     A1        0.0017992   0.16105   9.0857         5.4799  5.2529  4.7895
Lévy portfolio – Lévy portfolio         Aaa     Aa1     A1        0.0067859   0.34616   12.044         5.3355  5.1101  4.6543
Lévy portfolio – Normal one-factor      Aaa     Aa1     A1        0.0032759   0.20977   9.0265         5.4795  5.2532  4.7912
Normal one-factor – CPR                 Aaa     Aaa     Aa2       0.00036114  0.034631  2.9626         5.4775  5.2427  4.7309
Normal one-factor – Lévy portfolio      Aaa     Aaa     Aa3       0.00060627  0.055516  3.6883         5.3335  5.0986  4.5895
Normal one-factor – Normal one-factor   Aaa     Aaa     Aa2       0.00014211  0.017135  2.0175         5.4774  5.2427  4.7303
Gamma one-factor – CPR                  Aa1     Aa3     A2        1.4443      4.6682    18.431         5.4828  5.2599  4.7939
Gamma one-factor – Lévy portfolio       Aa2     Aa3     A2        2.5931      4.9614    20.385         5.3427  5.1167  4.6503

Note B
Model pair                              Rating                    DIRR (bp)                            WAL (year)
                                        10%     20%     40%       10%       20%      40%               10%     20%     40%
Logistic – CPR                          Aa1     A1      Baa3      0.93026   10.581   139.46            5.4901  5.3124  5.3358
Logistic – Lévy portfolio               Aa1     A1      Baa3      1.1996    13.624   164.07            5.3471  5.1771  5.2456
Logistic – Normal one-factor            Aa1     A1      Baa3      0.93764   10.906   140.55            5.4903  5.3135  5.3391
Lévy portfolio – CPR                    Aa1     A2      Baa3      1.4051    17.801   175.75            5.4949  5.3525  5.4753
Lévy portfolio – Lévy portfolio         Aa2     A2      Ba1       1.9445    21.891   195.61            5.3526  5.2204  5.373
Lévy portfolio – Normal one-factor      Aa1     A2      Baa3      1.6019    18.35    175.49            5.4951  5.353   5.4738
Normal one-factor – CPR                 Aaa     Aa1     Baa1      0.033692  1.5642   57.936            5.4777  5.2502  4.9709
Normal one-factor – Lévy portfolio      Aaa     Aa2     Baa2      0.041807  1.9829   65.669            5.3337  5.1071  4.8421
Normal one-factor – Normal one-factor   Aaa     Aa1     Baa1      0.023184  1.156    48.936            5.4776  5.2491  4.9498
Gamma one-factor – CPR                  Aa3     A2      Baa2      6.288     20.736   85.662            5.4955  5.3022  4.9739
Gamma one-factor – Lévy portfolio       A1      A3      Baa3      15.293    28.406   120.76            5.3631  5.1588  4.8351

Table 5.1. Ratings, DIRR and WAL of the ABS notes, for different combinations of default and prepayment models, mean cumulative default rate µd = 0.10, 0.20, 0.40 and mean cumulative prepayment rate µp = 0.20.

Note A
Model pair                              Rating                    DIRR (bp)                          WAL (year)
                                        10%     20%     40%       10%       20%       40%            10%     20%     40%
Logistic – CPR                          Aa1     Aa1     Aa1       0.31365   0.3466    0.27714        5.4309  5.2742  4.9611
Logistic – Lévy portfolio               Aa1     Aa1     Aa1       0.34552   0.48683   0.90665        5.365   5.1311  4.618
Logistic – Normal one-factor            Aa1     Aa1     Aa1       0.30488   0.3278    0.25706        5.431   5.2745  4.9633
Lévy portfolio – CPR                    Aaa     Aaa     Aa1       0.10416   0.16105   0.42327        5.4093  5.2529  4.9404
Lévy portfolio – Lévy portfolio         Aaa     Aa1     Aa1       0.14828   0.34616   1.4266         5.3439  5.1101  4.5982
Lévy portfolio – Normal one-factor      Aaa     Aa1     Aa1       0.11976   0.20977   0.51304        5.4096  5.2532  4.9424
Normal one-factor – CPR                 Aaa     Aaa     Aaa       0.023787  0.034631  0.046599       5.3995  5.2427  4.9292
Normal one-factor – Lévy portfolio      Aaa     Aaa     Aaa       0.03208   0.055516  0.1094         5.3335  5.0986  4.5842
Normal one-factor – Normal one-factor   Aaa     Aaa     Aaa       0.016874  0.017135  0.018373       5.3995  5.2427  4.9291
Gamma one-factor – CPR                  Aa3     Aa3     Aa2       5.811     4.6682    2.8855         5.4149  5.2599  4.9497
Gamma one-factor – Lévy portfolio       Aa3     Aa3     Aa2       6.7492    4.9614    3.2188         5.3487  5.1167  4.6043

Note B
Model pair                              Rating                    DIRR (bp)                          WAL (year)
                                        10%     20%     40%       10%      20%      40%              10%     20%     40%
Logistic – CPR                          A1      A1      A2        8.9089   10.581   14.756           5.4642  5.3124  5.0111
Logistic – Lévy portfolio               A1      A1      A3        9.9097   13.624   26.681           5.4011  5.1771  4.695
Logistic – Normal one-factor            A1      A1      A2        9.0211   10.906   14.436           5.4646  5.3135  5.0123
Lévy portfolio – CPR                    A1      A2      A3        14.216   17.801   27.506           5.4994  5.3525  5.0628
Lévy portfolio – Lévy portfolio         A1      A2      Baa1      15.687   21.891   42.04            5.4384  5.2204  4.7511
Lévy portfolio – Normal one-factor      A1      A2      A3        14.318   18.35    28.531           5.4992  5.353   5.0644
Normal one-factor – CPR                 Aa1     Aa1     Aa2       1.3334   1.5642   2.0323           5.4064  5.2502  4.9375
Normal one-factor – Lévy portfolio      Aa1     Aa2     Aa3       1.4404   1.9829   3.4481           5.3406  5.1071  4.5943
Normal one-factor – Normal one-factor   Aa1     Aa1     Aa1       1.1397   1.156    1.2153           5.4059  5.2491  4.9356
Gamma one-factor – CPR                  Baa1    A2      A1        54.297   20.736   11.785           5.4614  5.3022  4.9848
Gamma one-factor – Lévy portfolio       A3      A3      A2        42.16    28.406   17.871           5.3945  5.1588  4.6418

Table 5.2. Ratings, DIRR and WAL of the ABS notes, for different combinations of default and prepayment models, mean cumulative default rate µd = 0.20 and mean cumulative prepayment rate µp = 0.10, 0.20, 0.40.

Note A
Model pair                              Rating                             DIRR (bp)                            WAL (year)
                                        Res(PR) NoRes(PR) Res(Sq)    Res(PR)   NoRes(PR)  Res(Sq)      Res(PR)  NoRes(PR)  Res(Sq)
Logistic – CPR                          Aa1     Aa1       Aaa        0.3466    0.71815    0.02813      5.2742   5.2752     4.3424
Logistic – Lévy portfolio               Aa1     Aa1       Aaa        0.48683   1.0068     0.036137     5.1311   5.1327     4.1747
Logistic – Normal one-factor            Aa1     Aa1       Aaa        0.3278    0.7184     0.032607     5.2745   5.2755     4.3475
Lévy portfolio – CPR                    Aaa     Aa1       Aaa        0.16105   0.71116    0.064217     5.2529   5.28       4.3189
Lévy portfolio – Lévy portfolio         Aa1     Aa1       Aaa        0.34616   1.1772     0.094711     5.1101   5.1379     4.1503
Lévy portfolio – Normal one-factor      Aa1     Aa1       Aaa        0.20977   0.80489    0.056669     5.2532   5.2802     4.323
Normal one-factor – CPR                 Aaa     Aaa       Aaa        0.034631  0.16448    0.020445     5.2427   5.2437     4.2894
Normal one-factor – Lévy portfolio      Aaa     Aa1       Aaa        0.055516  0.26287    0.018883     5.0986   5.0997     4.1185
Normal one-factor – Normal one-factor   Aaa     Aaa       Aaa        0.017135  0.051144   0.013992     5.2427   5.2437     4.2858
Gamma one-factor – CPR                  Aa3     Aa3       Aa3        4.6682    5.9435     4.1521       5.2599   5.264      4.3125
Gamma one-factor – Lévy portfolio       Aa3     Aa3       Aa3        4.9614    6.5872     4.2954       5.1167   5.1207     4.1417

Note B
Model pair                              Rating                             DIRR (bp)                          WAL (year)
                                        Res(PR) NoRes(PR) Res(Sq)    Res(PR)  NoRes(PR)  Res(Sq)      Res(PR)  NoRes(PR)  Res(Sq)
Logistic – CPR                          A1      A3        Aa3        10.581   38.957     10.792       5.3124   5.4739     9.0526
Logistic – Lévy portfolio               A1      Baa1      Aa3        13.624   46.316     13.567       5.1771   5.3522     9.0201
Logistic – Normal one-factor            A1      A3        Aa3        10.906   39.955     10.952       5.3135   5.4763     9.0348
Lévy portfolio – CPR                    A2      Baa1      Aa3        17.801   67.004     17.089       5.3525   5.6466     9.1082
Lévy portfolio – Lévy portfolio         A2      Baa2      A1         21.891   75.017     20.412       5.2204   5.5242     9.0832
Lévy portfolio – Normal one-factor      A2      Baa1      Aa3        18.35    67.608     17.566       5.353    5.6467     9.0939
Normal one-factor – CPR                 Aa1     A1        Aa1        1.5642   8.5988     1.5244       5.2502   5.3062     9.0657
Normal one-factor – Lévy portfolio      Aa2     A1        Aa1        1.9829   10.739     1.9526       5.1071   5.171      9.0305
Normal one-factor – Normal one-factor   Aa1     Aa3       Aa1        1.156    5.4548     1.1428       5.2491   5.2995     9.078
Gamma one-factor – CPR                  A2      A3        A1         20.736   30.589     20.322       5.3022   5.341      9.0956
Gamma one-factor – Lévy portfolio       A3      A3        A1         28.406   37.646     28.065       5.1588   5.1986     9.063

Table 5.3. Ratings, DIRR and WAL of the ABS notes, for different combinations of default and prepayment models, with and without reserve account, for sequential (Sq) and pro-rata (PR) payment. Mean cumulative default rate µd = 0.20 and mean cumulative prepayment rate µp = 0.20.


Bibliography

[1] Albrecher, H., Ladoucette, S. and Schoutens, W. (2006), A Generic One-Factor Lévy Model for Pricing Synthetic CDOs, in: Advances in Mathematical Finance, M. Fu, R. Jarrow, J. Yen and R. J. Elliott (eds.), pp. 259–278, Birkhäuser, Boston.
[2] Bielecki, T. (2008), Rating SME Transactions, Moody's Investors Service.
[3] Cariboni, J. and Schoutens, W. (2008), Jumps in Intensity Models: Investigating the performance of Ornstein–Uhlenbeck processes, Metrika, Vol. 69, No. 2–3, pp. 173–198.
[4] Cifuentes, A. and O'Connor, G. (1996), The Binomial Expansion Method Applied to CBO/CLO Analysis, Moody's Special Report.
[5] Cifuentes, A. and Wilcox, C. (1998), The Double Binomial Method and its Application to a Special Case of CBO Structures, Moody's Special Report.
[6] Lando, D. (1998), On Cox Processes and Credit Risky Securities, Review of Derivatives Research, Vol. 2, No. 2–3, pp. 99–120.
[7] Li, A. (1995), A One-Factor Lognormal Markovian Interest Rate Model: Theory and Implementation, Advances in Futures and Options Research, Vol. 8, pp. 229–239.
[8] Mazataud, P. and Yomtov, C. (2000), The Lognormal Method Applied to ABS Analysis, Moody's Special Report.
[9] Raynes, S. and Rutledge, A. (2003), The Analysis of Structured Securities: Precise Risk Measurement and Capital Allocation, Oxford University Press.
[10] Richards, F. J. (1959), A flexible growth function for empirical use, Journal of Experimental Botany, Vol. 10, No. 2, pp. 290–300.
[11] Schönbucher, P. J. (2003), Credit Derivatives Pricing Models, Wiley Finance.
[12] Schoutens, W. (2003), Lévy Processes in Finance: Pricing Financial Derivatives, John Wiley & Sons, Chichester.
[13] Uzun, H. and Webb, E. (2007), Securitization and risk: empirical evidence on US banks, The Journal of Risk Finance, Vol. 8, No. 1, pp. 11–23.
[14] Vasicek, O. (1987), Probability of Loss on Loan Portfolio, Technical Report, KMV Corporation.

Author information

Henrik Jönsson, EURANDOM, Eindhoven University of Technology, Eindhoven, The Netherlands.
Email: [email protected]

Wim Schoutens, Department of Mathematics, K.U. Leuven, Celestijnenlaan 200 B, B-3001 Leuven, Belgium.
Email: [email protected]

Geert Van Damme, Department of Mathematics, K.U. Leuven, Celestijnenlaan 200 B, B-3001 Leuven, Belgium.
Email: [email protected]

Radon Series Comp. Appl. Math 8, 205–222

© de Gruyter 2009

Adaptive variance reduction techniques in finance

Benjamin Jourdain

Abstract. This paper gives an overview of adaptive variance reduction techniques recently developed for financial applications. More precisely, we explain how information available in the random drawings made to compute the expectation of interest may be used at the same time to optimise control variates, importance sampling or stratified sampling. Key words. Variance reduction techniques, control variates, importance sampling, stratification, sample average optimisation. AMS classification. 65C05, 90C15, 91-08

1 Introduction

In mathematical finance, the price of a European option is expressed as the expectation under the risk-neutral probability measure of the discounted payoff of the option. Sensitivities of the price with respect to various parameters, the so-called greeks, and in particular the delta, which is of paramount importance for hedging purposes, may also be expressed as expectations. The simplest and most natural numerical approach to compute these expectations, the Monte Carlo method, is widely used in banks. According to the central limit theorem, the precision of the empirical mean approximation of the expectation of a random variable is proportional to the standard deviation of this variable. Variance reduction techniques aim at improving this precision by computing the empirical mean of independent copies of a random variable with the same expectation as the original one but with a lower variance. These techniques may be classified into two categories:

• the ones which guarantee that the variance of the new variable will be lower than the variance of the original one: antithetic variables and conditioning. In general, the variance reduction ratio obtained with these techniques is not very large;

• the ones which may lead to a more significant variance reduction ratio but may also increase the variance, depending on whether they are properly implemented: control variates and importance sampling.

Stratified sampling is at the boundary between these two classes: when the allocation of the random drawings into the strata is made proportionally to their probabilities, variance reduction is guaranteed. Nevertheless, to improve efficiency, one should try other allocation rules, but then the variance may increase.

Adaptive methods have been developed to ensure a proper implementation of the second category of variance reduction techniques: information available in the random drawings made to compute the expectation of interest is used to optimise the variance reduction technique at the same time. In general, they save computation time in comparison to their more natural and earlier investigated alternative: optimise the variance reduction technique on a first pilot set of random drawings and then compute the empirical mean of the resulting random variable on a second set of independent drawings. Such two-stage procedures lead to unbiased estimators, whereas, in general, adaptive estimators are only asymptotically unbiased.

Sections 2, 3 and 4 are respectively devoted to adaptive control variates, adaptive importance sampling and adaptive stratified sampling. Since we are interested in financial applications, in what follows we pay particular attention to the computation of E(f(G)), where G is a standard d-dimensional normal random vector and f : R^d → R. Indeed, the price and hedging ratios of European options written on underlying assets evolving according to a multi-dimensional Black–Scholes model may be expressed in this way. When the underlyings evolve according to a more general stochastic differential equation, Euler discretisation of this equation leads to approximations of the price and hedging ratios by expectations of the previous form, for a possibly high-dimensional normal vector G and a complicated function f. Notice that in the present volume, Giles and Waterhouse [11] present an interesting multilevel path simulation technique which makes it possible to reduce the time-discretisation bias by computing the expectation corresponding to a refined time grid. In order to reduce the computation time necessary to obtain a balanced statistical error, they suggest combining results using different numbers of time steps. In the end, their method consists in computing E(f(G)) for an even higher-dimensional and more complicated function f than the one derived from standard Euler discretisation.

*This research benefited from the support of the French National Research Agency (ANR) under the program ANR-05-BLAN-0299 and of the "Chair Risques Financiers", Fondation du Risque.

2 Adaptive control variates

Let us first illustrate the basic ideas of adaptive variance reduction on the simple example of linearly parametrised control variates (see for instance [21], [24] or Section 4.1 in [12]) before dealing with general parametrisation.

2.1 Linearly parametrised control variates

Suppose that we want to compute the expectation E(Y) of a real random variable Y and that Z = (Z^1, ..., Z^d)^* is a related R^d-valued centred random vector, with Y and Z both square-integrable. We also assume, up to removing some coordinates of Z, that the covariance matrix Cov(Z) of Z is non-singular, and we denote by Cov(Y, Z) = E(Y Z) the covariance between Y and Z. In finance, typically Y = e^{−rT} f(X_T^1, ..., X_T^d), where f is the payoff of a European option with maturity T written on d underlying assets X^1, ..., X^d with respective initial prices x_1, ..., x_d; since the discounted price of each asset is a martingale under the risk-neutral measure, one may choose Z = (X_T^1 − e^{rT} x_1, ..., X_T^d − e^{rT} x_d)^*.

For θ ∈ Rd , since E(Y −θ.Z) = E(Y ), one may approximate the expectation of interest n def E(Y ) by the empirical mean Mn (θ) = n1 j=1 (Yj − θ.Zj ) where ((Yj , Zj ))j≥1 are  independent copies of (Y, Z). The classical estimator n1 nj=1 Yj corresponds to the choice θ = 0. The variance of Mn (θ), equal to v(θ) n where def

v(θ) = Var(Y − θ.Z) = Var(Y ) − 2θ.Cov(Y, Z) + θ.Cov(Z)θ,

is minimal for θ⋆ = Cov(Z)−1 Cov(Y, Z). Of course, when E(Y ) is unknown, so is θ⋆ . But one may estimate the covariances Cov(Z) and Cov(Y, Z), respectively, by ⎛ ⎞⎛ ⎞ n n n    1 1 def 1 Zj Zj∗ − ⎝ Zj ⎠ ⎝ Z ∗⎠ Cn = n j=1 n j=1 n j=1 j ⎛ ⎞⎛ ⎞ n n n    1 1 def 1 Yj Zj − ⎝ Yj ⎠ ⎝ Zj ⎠ . and Dn = n j=1 n j=1 n j=1

Let N be the smallest index n such that no strict affine subspace of Rd contains {Z1, . . . , Zn}. Since Cov(Z) is non-singular, N is a.s. finite. Moreover, Cn is non-singular if and only if n ≥ N. For n ≥ N, one may approximate θ⋆ by the estimator θn := Cn^{−1} Dn, which converges a.s. to θ⋆ as n → ∞. The derived adaptive control variate estimator Mn(θn) = (1/n) ∑_{j=1}^n (Yj − θn.Zj) of E(Y) is biased in general (but not when (Y, Z) is a Gaussian vector or, more generally, when E(Y|Z) = E(Y) + θ⋆.Z). Nevertheless, Mn(θn) converges a.s. to E(Y). Moreover, writing

√n (Mn(θn) − E(Y)) = (1, −θn) . ( (1/√n) ∑_{j=1}^n (Yj − E(Y), Zj) ),

one deduces from the central limit theorem governing the convergence in law of the second factor in the product and Slutsky's theorem that Mn(θn) is asymptotically normal with optimal asymptotic variance v(θ⋆). To sum up,

Proposition 2.1. The vector (θn, Mn(θn)) converges a.s. to (θ⋆, E(Y)) and

√n (Mn(θn) − E(Y)) →L N1(0, v(θ⋆)).

Variance reduction is guaranteed in the limit since v(θ⋆) ≤ v(0) = Var(Y), the inequality being an equality only when Y and Z are uncorrelated. When v(θ⋆) = 0, i.e. when Y = E(Y) + θ⋆.Z, then for all n ≥ N, θn = θ⋆ and Mn(θn) = E(Y) (see [19]). This situation is not likely to occur in financial applications, but an example in the context of Markov chains is given in [14], which also discusses the asymptotic properties of other adaptive estimators of E(Y).
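As a concrete illustration, here is a minimal numerical sketch of the adaptive estimator Mn(θn) of Proposition 2.1 on a hypothetical Black–Scholes call option, with Z = S_T − e^{rT} S_0 as the centred control variate; all model parameters below are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters

G = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * G)
Y = np.exp(-r * T) * np.maximum(ST - K, 0.0)        # discounted call payoff
Z = ST - np.exp(r * T) * S0                          # centred control variate

# d = 1 here, so C_n and D_n are scalars and theta_n = C_n^{-1} D_n
theta_n = np.cov(Y, Z)[0, 1] / np.var(Z)

M_n = np.mean(Y - theta_n * Z)                       # adaptive estimator M_n(theta_n)
var_crude = np.var(Y)
var_cv = np.var(Y - theta_n * Z)                     # close to the optimal v(theta_star)
```

With these parameters the Black–Scholes price is ≈ 10.45; the control variate typically removes most of the variance coming from the linear part of the payoff.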


B. Jourdain

One could also approximate E(Y) by the unbiased estimator Mn(θ̃m) with

θ̃m = ( (1/m) ∑_{k=1}^m Z̃k Z̃k* − ( (1/m) ∑_{k=1}^m Z̃k ) ( (1/m) ∑_{k=1}^m Z̃k )* )^{−1} ( (1/m) ∑_{k=1}^m Ỹk Z̃k − ( (1/m) ∑_{k=1}^m Ỹk ) ( (1/m) ∑_{k=1}^m Z̃k ) ),

where ((Ỹk, Z̃k))_{k≥1} are i.i.d. copies of (Y, Z) independent of ((Yj, Zj))_{j≥1}. This is an example of the two-stage procedure mentioned in the introduction. But it is a pity not to use the drawings ((Ỹk, Z̃k))_{1≤k≤m} made to compute θ̃m also in the computation of the expectation of interest.

Let us finally mention that θn, introduced above as a sample average approximation of the optimal parameter θ⋆, also has another interpretation. The vector θn minimises

the sample approximation

vn(θ) := (1/n) ∑_{j=1}^n (Yj − θ.Zj)² − ( (1/n) ∑_{j=1}^n (Yj − θ.Zj) )²

of v(θ). For more complex variance reduction techniques involving a parameter, no explicit expression of the optimal parameter θ⋆ is in general available, so defining θn as an estimator of θ⋆ is no longer possible. But the alternative definition of θn as the parameter minimising the sample average approximation of the variance remains possible. We will see applications to generally parametrised control variates in the next paragraph and to importance sampling for normal random vectors in Section 3.
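The equivalence between the regression-type estimator θn = Cn^{−1}Dn and the minimiser of the sample variance vn can be checked numerically; the linear toy model below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50_000, 3
Z = rng.standard_normal((n, d))                              # centred control variates
Y = Z @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)  # toy relation

# sample covariances C_n and D_n, and the estimator theta_n = C_n^{-1} D_n
C_n = (Z.T @ Z) / n - np.outer(Z.mean(axis=0), Z.mean(axis=0))
D_n = (Y @ Z) / n - Y.mean() * Z.mean(axis=0)
theta_n = np.linalg.solve(C_n, D_n)

def v_n(theta):
    """Sample variance of Y - theta.Z, i.e. the empirical v_n(theta)."""
    return np.var(Y - Z @ theta)
```

Since vn is a quadratic form in θ with positive-definite Hessian, θn is its exact minimiser: any perturbation of θn strictly increases vn.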

2.2 General parametrisation

General parametrisation of control variates for the computation of the expectation E(Y) of a square-integrable random variable Y is addressed by Kim and Henderson [19, 20]. Let Θ ⊂ U ⊂ Rp with Θ compact and U bounded and open, let Z be a d-dimensional random vector related to Y, and let h : U × Rd → R be such that

∀θ ∈ U, E(h²(θ, Z)) < +∞ and E(h(θ, Z)) = 0,

and let ((Yj, Zj))_{j≥1} be a sequence of independent copies of (Y, Z). For any θ ∈ U, Mn(θ) := (1/n) ∑_{j=1}^n (Yj − h(θ, Zj)) is an unbiased and a.s. convergent estimator of the expectation of interest E(Y). Moreover, Var(Mn(θ)) = v(θ)/n where

v(θ) := Var(Y − h(θ, Z)). Let m ≥ 2. When for all z ∈ Rd, U ∋ θ ↦ h(θ, z) is C¹, the unbiased estimator

(1/(m−1)) ∑_{j=1}^m (Yj − h(θ, Zj) − Mm(θ))²

of v(θ) is differentiable on U with respect to θ, with gradient equal to

−(2/(m−1)) ∑_{j=1}^m (Yj − h(θ, Zj) − Mm(θ)) ∇θ( h(θ, Zj) − (1/m) ∑_{k=1}^m h(θ, Zk) ).

Let (γl)_{l≥0} be a sequence of positive steps such that ∑_l γl = ∞ and ∑_l γl² < ∞. Starting from θ0 ∈ Θ, Kim and Henderson [19, 20] suggest optimising v(θ) with respect to θ using the following gradient-based stochastic approximation procedure:

A_{l+1} = (1/m) ∑_{j=lm+1}^{(l+1)m} (Yj − h(θl, Zj)),

θ_{l+1} = ΠΘ( θl + (2γl/(m−1)) ∑_{j=lm+1}^{(l+1)m} (Yj − h(θl, Zj) − A_{l+1}) ∇θ( h(θ, Zj) − (1/m) ∑_{k=lm+1}^{(l+1)m} h(θ, Zk) ) |_{θ=θl} ),


where ΠΘ denotes a projection of points outside Θ back into Θ. Using the law of large numbers and the central limit theorem for martingales, they study the asymptotic behaviour of the associated estimator μk := (1/k) ∑_{l=1}^k Al of E(Y).

Theorem 2.2. Assume that for all z ∈ Rd, U ∋ θ ↦ h(θ, z) is C¹ and that

E( sup_{θ∈U} |∇θ h(θ, Z)| ( 1 + sup_{θ∈U} |Y − h(θ, Z)| ) ) < +∞.

Then μk converges a.s. to E(Y) as k → ∞. If moreover θk converges a.s. to a random variable θ∞, then √(km) (μk − E(Y)) →L √(v(θ∞)) × G, where G ∼ N1(0, 1) is independent from θ∞, and

(1/(k(m−1))) ∑_{l=1}^k ∑_{j=(l−1)m+1}^{lm} (Yj − h(θ_{l−1}, Zj) − Al)²

converges a.s. to v(θ∞). Last, if Θ is a box, i.e. Θ = ∏_{i=1}^p [ai, bi], and there exists θ0 ∈ Θ such that

E( Y⁴ + sup_{θ∈U} |∇θ h(θ, Z)|⁴ + h⁴(θ0, Z) ) < +∞,

then the distance of θk to the set S of first-order critical points of v on Θ converges a.s. to 0 and, when S is discrete, θk converges a.s. to an S-valued random variable θ∞.

Kim and Henderson also study in [19, 20] the estimator Mn(θ̃m) obtained by a two-stage procedure, where θ̃m is a first-order critical point of the sample average estimator

(1/(m−1)) ∑_{k=1}^m ( Ỹk − h(θ, Z̃k) − (1/m) ∑_{j=1}^m (Ỹj − h(θ, Z̃j)) )²

of the variance, computed on a sequence ((Ỹk, Z̃k))_{k≥1} of independent copies of (Y, Z) independent from ((Yj, Zj))_{j≥1}. In [20], the behaviour of both estimators is illustrated on the example of the pricing of barrier options.
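A minimal sketch of the batched projected-gradient procedure above, for a hypothetical quadratic control variate h(θ, z) = θ1 z + θ2 (z² − 1), which is centred when Z ∼ N(0, 1); the data-generating model, the step sizes and the box Θ = [−5, 5]² are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def h(theta, z):
    """Hypothetical parametrised control variate, centred for Z ~ N(0,1)."""
    return theta[0] * z + theta[1] * (z**2 - 1.0)

def grad_h(z):
    """Gradient of h in theta (independent of theta here), one row per sample."""
    return np.column_stack([z, z**2 - 1.0])

m, L = 100, 500
theta = np.zeros(2)                          # theta_0 in Theta = [-5, 5]^2
A = np.empty(L)
for l in range(L):
    Z = rng.standard_normal(m)
    Y = Z + 2.0 * (Z**2 - 1.0) + 0.1 * rng.standard_normal(m)   # E(Y) = 0
    R = Y - h(theta, Z)
    A[l] = R.mean()                          # the batch mean A_{l+1}
    Gh = grad_h(Z)
    # sample gradient of the estimated variance, as in the display above
    grad_v = -2.0 / (m - 1) * (Gh - Gh.mean(axis=0)).T @ (R - A[l])
    # projected gradient step with gamma_l = 0.5/(l+1)
    theta = np.clip(theta - (0.5 / (l + 1)) * grad_v, -5.0, 5.0)

mu = A.mean()                                # the estimator mu_k of E(Y) = 0
```

Here θ should approach the ideal parameter (1, 2), at which the residual variance reduces to that of the small noise term.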

3 Importance sampling for normal random vectors

Adaptive importance sampling techniques have been developed to approximate multidimensional integrals over the unit hypercube (see [25] and the references therein) or in the context of Markov chains (see for instance [3], [8]). But research on this topic in view of financial applications has centred on normal random vectors, due to the importance of this specific case for models given by stochastic differential equations. That is why the present section is devoted to the computation of E(f(G)), where G is distributed according to the standard d-dimensional normal law Nd(0, Id) and f : Rd → R. We assume that

P(f(G) ≠ 0) > 0 and ∀θ ∈ Rd, E(f²(G) e^{−θ.G}) < +∞.    (3.1)

The second hypothesis is implied, for instance, by the existence of a finite moment of order 2 + ε with ε > 0 for |f(G)|. Let (Gj)_{j≥1} be i.i.d. copies of G. For θ ∈ Rd, since

E( f(G + θ) e^{−θ.G − |θ|²/2} ) = E(f(G)),    (3.2)

Mn(θ) := (1/n) ∑_{j=1}^n f(Gj + θ) e^{−θ.Gj − |θ|²/2}

is an a.s. convergent and asymptotically normal estimator of E(f(G)) with variance Var(Mn(θ)) = (v(θ) − E²(f(G)))/n, where

v(θ) := E( f²(G + θ) e^{−2θ.G − |θ|²} )
      = E( f²(G + θ) e^{−θ.(G+θ) + |θ|²/2} e^{−θ.G − |θ|²/2} )
      = E( f²(G) e^{−θ.G + |θ|²/2} ).    (3.3)

Notice that the translated normal variable G + θ has the density pθ(x) = (2π)^{−d/2} e^{−|x−θ|²/2} and that the importance sampling ratio (p0/pθ)(G + θ) = e^{−θ.G − |θ|²/2} appears as a factor in the left-hand side of (3.2). The interest of the class of importance sampling estimators Mn(θ) parametrised by the translation vector θ ∈ Rd is that a very simple analytic mapping (the addition of θ) transforms an i.i.d. sample of the standard normal law Nd(0, Id) into an i.i.d. sample of Nd(θ, Id). This feature is particularly convenient for computing and studying adaptive estimators in which the parameter evolves during the simulation. Under (3.1) the function v is

1. C∞, with derivatives obtained by differentiation under the expectation:

∇θ v(θ) = E( (θ − G) f²(G) e^{−θ.G + |θ|²/2} ),    (3.4)
∇²θ v(θ) = E( (Id + (θ − G)(θ − G)*) f²(G) e^{−θ.G + |θ|²/2} ),

2. strongly convex.

Therefore there exists a unique θ⋆ ∈ Rd such that v(θ⋆) = inf_{θ∈Rd} v(θ).
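The identities (3.2)–(3.3) are easy to check numerically. The sketch below uses the hypothetical rare-event payoff f(x) = 1_{x>2} in dimension d = 1, for which the shift θ = 2 centres the sampling on the event; all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000
G = rng.standard_normal(n)

def shifted_estimate(theta):
    """Samples of f(G + theta) e^{-theta.G - |theta|^2/2} for f = 1_{. > 2}."""
    vals = (G + theta > 2.0) * np.exp(-theta * G - 0.5 * theta**2)
    return vals.mean(), vals.var()

m0, v0 = shifted_estimate(0.0)   # crude Monte Carlo: theta = 0
m2, v2 = shifted_estimate(2.0)   # importance sampling with translation theta = 2
```

Both means estimate P(G > 2) ≈ 0.0228 by (3.2), while the sample variance with θ = 2 is markedly smaller, reflecting v(2) < v(0) in (3.3).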

This suggests approximating E(f(G)) by Mn(θ⋆), but θ⋆ is unknown. Unlike in the analogous example of linear control variates developed in Section 2, no explicit expression is available for θ⋆. Methods aimed at approximating θ⋆ have been developed in the literature. These methods are based

• either on deterministic optimisation: in [13], the authors suggest choosing θ maximising Rd ∋ x ↦ log |f(x)| − |x|²/2 and justify this approximation by a large deviations asymptotics,

• or on stochastic optimisation procedures analogous to the ones presented in Section 2.2: gradient-based stochastic approximation ([27], [26]), adaptive Robbins–Monro procedures [2, 1, 16, 23], robust optimisation of the sample average approximation of v by Newton's algorithm [15].

Let us now describe these stochastic optimisation procedures more precisely.


3.1 Gradient-based stochastic approximation and adaptive Robbins–Monro algorithms

In [27] and [26], the authors suggest minimising v(θ) over a compact convex subset Θ of Rd by the following iterative procedure, using an integer m ∈ N*, a sequence (G̃k)_{k≥1} of independent copies of G (possibly equal to (Gj)_{j≥1}) and a sequence of positive steps (γl)_{l≥0} s.t. ∑_l γl = ∞ and ∑_l γl² < ∞:

• start with θ0 ∈ Θ,

• at step l ≥ 0, compute gl = (1/m) ∑_{k=lm+1}^{(l+1)m} (θl − G̃k) f²(G̃k) e^{−θl.G̃k + |θl|²/2}, which approximates ∇θ v(θl), then define θ_{l+1} as the projection of θl − γl gl on Θ.

Proposition 3.1. Under (3.1), the sequence (θl)_{l≥1} converges a.s. to the unique θΘ ∈ Θ such that v(θΘ) = inf_{θ∈Θ} v(θ).

The papers [27, 26] do not deal with the asymptotic properties of the estimators Mn(θl) as n, l → ∞. These questions are addressed by Arouna [2, 1], who also gets rid of the compact set Θ. More precisely, he obtains a sequence (θn)_{n≥1} adapted to the filtration (σ(G1, . . . , Gn))_{n≥1} by stabilising the Robbins–Monro algorithm corresponding to the choice m = 1 and (G̃k)_{k≥1} = (Gj)_{j≥1} with Chen's projection technique [6, 5]. Let θ0 ∈ Rd, σ0 = 0, and let (sn)_{n≥0} be an increasing sequence of positive numbers tending to infinity with n and s.t. s0 ≥ |θ0|. The sequence (θn, σn) is defined inductively by: for all n ∈ N,

θ_{n+1/2} = θn − γn (θn − G_{n+1}) f²(G_{n+1}) e^{−θn.G_{n+1} + |θn|²/2},
if |θ_{n+1/2}| ≤ s_{σn}, then θ_{n+1} = θ_{n+1/2} and σ_{n+1} = σn,
if |θ_{n+1/2}| > s_{σn}, then θ_{n+1} = θ0 and σ_{n+1} = σn + 1.

Here σn is the number of projections made during the n first iterations.

Theorem 3.2. Under (3.1), the total number of projections lim_{n→∞} σn is finite and θn converges a.s. to θ⋆ as n → ∞. If moreover E(|f(G)|^{4+ε}) < +∞ for some ε > 0, then as n → ∞,

(Mn, Sn) := (1/n) ∑_{j=1}^n ( f(Gj + θ_{j−1}) e^{−θ_{j−1}.Gj − |θ_{j−1}|²/2} , f²(Gj + θ_{j−1}) e^{−2θ_{j−1}.Gj − |θ_{j−1}|²} ) →a.s. ( E(f(G)) , v(θ⋆) )

and

√n (Mn − E(f(G))) →L N1( 0, v(θ⋆) − E²(f(G)) ).

As a consequence, √( n/(Sn − Mn²) ) (Mn − E(f(G))) →L N1(0, 1), which enables one to construct confidence intervals for the expectation of interest E(f(G)).

The first statement follows from the verifiable sufficient conditions given by Lelong [22] for the convergence of randomly truncated stochastic algorithms. Originally, Arouna [2] checked the a.s. convergence of θn to θ⋆ only under some explicit restrictive growth assumption on the sequence (sn)_n. In [1], remarking that

E( f(Gn + θ_{n−1}) e^{−θ_{n−1}.Gn − |θ_{n−1}|²/2} | σ(G1, . . . , G_{n−1}) ) = E(f(G)),


he derived the second statement using the law of large numbers and the central limit theorem for martingales.

The previous algorithm takes advantage of the characterisation of θ⋆ as the unique root of the equation E((θ − G) f²(G) e^{−θ.G + |θ|²/2}) = 0. Remarking that for all θ ∈ Rd,

E( (θ − G) f²(G) e^{−θ.G + |θ|²/2} ) = e^{|θ|²} E( (2θ − G) f²(G − θ) ),

Lemaire and Pagès [23] characterise θ⋆ as the unique root of E((2θ − G) f²(G − θ)) = 0. When

∃c, α > 0, ∃β ∈ [0, 2), ∀x ∈ Rd, |f(x)| ≤ c e^{α|x|^β},

then the Robbins–Monro procedure

∀n ∈ N, θ_{n+1} = θn − γn e^{−2^β α|θn|^β} (2θn − G_{n+1}) f²(G_{n+1} − θn)

is stable without projections, and Theorem 3.2 still holds with this new definition of the sequence (θn)_{n≥1}. In particular, when f is bounded, α may be chosen equal to 0 and the factor e^{−2^β α|θn|^β} is then equal to 1.

In [16], Kawai combines importance sampling with control variates, remarking that for θ, λ ∈ Rd, the expectation and variance of the random variable

[f(G + θ) − λ.(G + θ)] e^{−θ.G − |θ|²/2}

are respectively equal to E(f(G)) and v(θ, λ) − E²(f(G)), where

v(θ, λ) := E( (f(G) − λ.G)² e^{−θ.G + |θ|²/2} ).

The function v is strictly convex in θ for fixed λ and strictly convex in λ for fixed θ. Let g(θ) (resp. h(λ)) denote the unique vector in Rd s.t. v(θ, g(θ)) = inf_{λ∈Rd} v(θ, λ) (resp. v(h(λ), λ) = inf_{θ∈Rd} v(θ, λ)). According to Kawai [16], the functions θ ↦ v(θ, g(θ)) and λ ↦ v(h(λ), λ) are still strictly convex (but the proof of this statement does not seem correct) and there exists a unique θ⋆ ∈ Rd (resp. λ⋆ ∈ Rd) s.t. v(θ⋆, g(θ⋆)) = inf_{θ∈Rd} v(θ, g(θ)) (resp. v(h(λ⋆), λ⋆) = inf_{λ∈Rd} v(h(λ), λ)). He proposes for (θn, λn) a two-scale Robbins–Monro procedure with Chen's projection technique and increment

( −γn (θn − G_{n+1}) (f(G_{n+1}) − λn.G_{n+1})² e^{−θn.G_{n+1} + |θn|²/2} ,
  2γ̃n (f(G_{n+1}) − λn.G_{n+1}) G_{n+1} e^{−θn.G_{n+1} + |θn|²/2} ),

where (γ̃n)_n is another sequence of positive steps s.t. ∑_n γ̃n = +∞ and ∑_n γ̃n² < +∞. The sequence (θn, λn) converges a.s. to (θ⋆, g(θ⋆)) or (h(λ⋆), λ⋆) depending on whether lim_{n→∞} γn/γ̃n is equal to 0 or +∞. Moreover, the analogue of Theorem 3.2 holds in this setting, the estimator of E(f(G)) being defined as

Mn = (1/n) ∑_{j=1}^n [f(Gj + θ_{j−1}) − λ_{j−1}.(Gj + θ_{j−1})] e^{−θ_{j−1}.Gj − |θ_{j−1}|²/2}.


In [17], Kawai adapts the previous algorithm to the case when the Gaussian random vector G is replaced by an infinitely divisible random vector (stochastic approximation by Robbins–Monro procedures of the parameter θ only is treated in [18]). In finance, problems involving such vectors arise for instance when the Brownian motion driving continuous-time models is replaced by a Lévy process. Kawai pays particular attention to the case of independent gamma distributed components. This particular distribution has the following nice property: after the exponential change of measure (also called Esscher transform) considered in the present section, the law of a gamma random variable is the same as the law of this random variable multiplied by a constant under the original probability measure. In comparison with the Gaussian case, addition is replaced by multiplication.

Let us finally mention that an adaptive simulated annealing procedure has recently been developed by del Baño Rollin and Lázaro-Camí [7] to optimise antithetic variates. More precisely, using appropriate coordinates on the orthogonal group, the authors propose a Robbins–Monro procedure with an additional noise to compute a sequence (On)_{n≥1} of orthogonal matrices converging to O⋆ minimising E(f(G) f(OG)) over all orthogonal matrices O. The additional noise, obtained from a sequence (G̃j)_{j≥1} of random vectors i.i.d. according to Nd(0, Id) and independent of (Gj)_{j≥1}, vanishes when n tends to infinity and prevents the algorithm from remaining trapped in a critical point at which E(f(G) f(OG)) is not minimal. The derived estimator

Mn = (1/(4n)) ∑_{j=1}^n ( f(Gj) + f(Oj Gj) + f(G̃j) + f(Oj G̃j) )

of E(f(G)) is then a.s. convergent and asymptotically normal with asymptotic variance (1/4) ( Var(f(G)) + Cov(f(G), f(O⋆G)) ).

3.2 Robust sample average optimisation

In order to save computation time, we introduce in [15] a parameter reduction. Indeed, numerical simulations show that, for a model driven by a Brownian motion, it is not useful to use different parameters for the increments of a single Brownian component. Let A ∈ R^{d×d′} be a matrix with rank d′ ≤ d. We define τ⋆ as the unique minimiser of the strongly convex and continuous function R^{d′} ∋ τ ↦ v(Aτ). The sample average approximation of v(Aτ) is given by vn(Aτ), where the C∞ function

vn(θ) := (1/n) ∑_{j=1}^n f²(Gj) e^{−θ.Gj + |θ|²/2}

is strongly convex as soon as f(Gj) ≠ 0 for some j ∈ {1, . . . , n}, which holds a.s. for n large enough by (3.1). The unique minimiser τn of τ ↦ vn(Aτ) is characterised by the equality ∇τ vn(Aτ) = 0, which also writes ∇τ un(τ) = 0, where

un(τ) = |Aτ|²/2 + log( ∑_{j=1}^n f²(Gj) e^{−Aτ.Gj} ),

∇τ un(τ) = A*Aτ − ( ∑_{j=1}^n A*Gj f²(Gj) e^{−Aτ.Gj} ) / ( ∑_{j=1}^n f²(Gj) e^{−Aτ.Gj} ),

∇²τ un(τ) = A*A + ( ∑_{j=1}^n A*Gj Gj* A f²(Gj) e^{−Aτ.Gj} ) / ( ∑_{j=1}^n f²(Gj) e^{−Aτ.Gj} )
            − ( ∑_{j=1}^n A*Gj f²(Gj) e^{−Aτ.Gj} ) ( ∑_{j=1}^n A*Gj f²(Gj) e^{−Aτ.Gj} )* / ( ∑_{j=1}^n f²(Gj) e^{−Aτ.Gj} )².

The lowest eigenvalue of the Hessian matrix ∇²τ un is always at least as large as the one of A*A. Therefore τn can easily and precisely be computed by a few iterations of Newton's algorithm using the above explicit expressions of ∇τ un and ∇²τ un. Notice that the computation of the gradient and the Hessian of un is not too time-consuming, since the points Gj at which the payoff function f is evaluated remain constant during the optimisation procedure. Convergence of τn to τ⋆ is a consequence of classical results concerning M-estimators.

Proposition 3.3.

1. Under (3.1), τn and vn (Aτn ) converge a.s. to τ⋆ and v(Aτ⋆ ).

2. If moreover ∀θ ∈ Rd, E( f⁴(G) e^{−θ.G} ) < +∞, then √n (τn − τ⋆) →L N_{d′}(0, B^{−1} C B^{−1}),

where B = A* ∇²θ v(Aτ⋆) A and C = Cov( A*(Aτ⋆ − G) f²(G) e^{−Aτ⋆.G + |Aτ⋆|²/2} ).
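A minimal sketch of the sample-average Newton optimisation described above, assuming d′ = 1 with A a fixed unit column vector; the call-type payoff and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 100_000, 4
G = rng.standard_normal((n, d))
A = np.ones((d, 1)) / np.sqrt(d)            # rank-1 parameter reduction, |A| = 1
AG = (G @ A).ravel()                        # A* G_j, a scalar per sample here

payoff = lambda z: np.maximum(np.exp(z) - 1.5, 0.0)   # hypothetical call on e^{A*x}
f2 = payoff(AG) ** 2                        # f^2(G_j)

def newton_step(tau):
    w = f2 * np.exp(-tau * AG)              # f^2(G_j) e^{-A tau . G_j}
    grad = tau - (AG * w).sum() / w.sum()   # here A*A = 1
    hess = 1.0 + (AG**2 * w).sum() / w.sum() - ((AG * w).sum() / w.sum()) ** 2
    return tau - grad / hess

tau = 0.0
for _ in range(20):                         # a few iterations suffice (hess >= 1)
    tau = newton_step(tau)

# adaptive importance sampling estimator M_n(A tau_n)
M_n = np.mean(payoff(AG + tau) * np.exp(-tau * AG - 0.5 * tau**2))
```

With these illustrative parameters the exact value E max(e^Z − 1.5, 0), Z ∼ N1(0, 1), is ≈ 0.680, and the optimised τn shifts the sampling towards the in-the-money region.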

In [15], we obtain convergence of Mn(Aτn) to the expectation E(f(G)) assuming that f is continuous and satisfies some growth assumption (see Theorem 3.5 below). When d′ = 1, continuity may be replaced by a monotonicity assumption introduced in the next definition.

Definition 3.4. We say that a function h : Rd → R

• is A-nondecreasing (resp. A-nonincreasing) if ∀x ∈ Rd, τ ∈ R ↦ h(x + Aτ) is nondecreasing (resp. nonincreasing),

• is A-monotonic if it is either A-nondecreasing or A-nonincreasing,

• belongs to VA if h may be decomposed as the sum of two A-monotonic functions h1 and h2 such that

∃λ > 0, ∃β ∈ [0, 2), ∀x ∈ Rd, |hi(x)| ≤ λ e^{|x|^β} for i = 1, 2.


When d = 1, V1 simply consists of the functions with finite variation satisfying the previous growth assumption. The asymptotic properties of Mn(Aτn) stated in the next theorem are proved in [15].

Theorem 3.5. Assume (3.1) and that f admits a decomposition f = f1 + 1_{d′=1} f2 with

1. f1 a continuous function s.t. ∀M > 0, E( sup_{|θ|≤M} |f1(G + θ)| ) < +∞,

2. f2 ∈ VA defined above.

Then, for any deterministic integer-valued sequence (νn)_n going to ∞ with n, Mn(Aτ_{νn}) converges a.s. to E(f(G)).

Assume (3.1), ∀θ ∈ Rd, E( f⁴(G) e^{−θ.G} ) < +∞, and that f admits a decomposition f = f1 + f2 + 1_{d′=1} f3 with

1. f1 a C¹ function s.t. ∀M > 0, E( sup_{|θ|≤M} |f1(G + θ)| + sup_{|θ|≤M} |∇f1(G + θ)| ) < +∞,

2. ∃α ∈ ( (√(d′² + 8d′) − d′)/4 , 1 ], ∃β ∈ [0, 2), ∃λ > 0, ∀x, y ∈ Rd, |f2(x) − f2(y)| ≤ λ e^{|x|^β ∨ |y|^β} |x − y|^α,

3. f3 ∈ VA.

Then √n (Mn(Aτn) − E(f(G))) →L N1( 0, v(Aτ⋆) − E²(f(G)) ).

In contrast to the estimator Mn constructed using Robbins–Monro procedures in the previous section, there is no martingale structure for Mn(Aτ_{νn}). This explains why we need some regularity assumptions on the function f. Except for d′ = 1, asymptotic normality with optimal asymptotic variance v(Aτ⋆) − E²(f(G)) requires more regularity on f than a.s. convergence. Note that (√(d′² + 8d′) − d′)/4 is increasing with d′, equals 1/2 for d′ = 1 and converges to 1 as d′ → ∞. Therefore the choice α = 1 is always possible for f2, so all financial payoffs except the discontinuous ones (barrier or digital options) satisfy the assumption made on f2 to ensure the asymptotic normality of the adaptive estimator Mn(Aτn). If Var(f(G)) > 0, then the previous results imply that

√( n / (vn(Aτn) − Mn²(Aτn)) ) ( Mn(Aτn) − E(f(G)) ) →L N1(0, 1),

and one may easily derive confidence intervals for E(f(G)). The numerical experiments performed in [15] suggest that strong convergence and asymptotic normality of Mn(Aτn) still hold under less restrictive assumptions on f than those stated in the previous theorem.

4 Stratified sampling

We are interested in the computation of c = E(f(X)), where X is an Rd-valued random vector and f : Rd → R a measurable function such that E(f²(X)) < ∞. We suppose that (Ai)_{1≤i≤I} is a partition of Rd into I strata such that pi = P(X ∈ Ai) is known explicitly for i ∈ {1, . . . , I}. Up to removing some strata, we assume from now on that pi is positive for all i ∈ {1, . . . , I}. The stratified Monte Carlo estimator of c (see [12, p. 209–235] and the references therein for a detailed presentation) is based on the equality E(f(X)) = ∑_{i=1}^I pi E(f(X^i)), where X^i denotes a random variable distributed according to the conditional law of X given X ∈ Ai. Indeed, when the variables X^i can be simulated, it is possible to estimate each expectation in the right-hand side using ni i.i.d. drawings of X^i. Let n = ∑_{i=1}^I ni be the total number of drawings (in all the strata) and let qi = ni/n denote the proportion of drawings made in stratum i. Then ĉ is defined by

ĉ := ∑_{i=1}^I (pi/ni) ∑_{j=1}^{ni} f(X^i_j) = (1/n) ∑_{i=1}^I (pi/qi) ∑_{j=1}^{ni} f(X^i_j),

where for each i the X^i_j, 1 ≤ j ≤ ni, are distributed like X^i, and all the X^i_j, 1 ≤ i ≤ I, 1 ≤ j ≤ ni, are drawn independently. This stratified sampling estimator can be implemented for instance when X is distributed according to the standard normal law on Rd and Ai = {x ∈ Rd : y_{i−1} ≤ θ.x < yi}, where −∞ = y0 < y1 < · · · < y_{I−1} < yI = +∞ and θ ∈ Rd is such that |θ| = 1. Indeed, one then has pi = N(yi) − N(y_{i−1}), with N(.) denoting the cumulative distribution function of the one-dimensional normal law, and when U is uniformly distributed on [0, 1] and independent from X, then

X + ( N^{−1}[ N(y_{i−1}) + U (N(yi) − N(y_{i−1})) ] − θ.X ) θ

follows the conditional law of X given y_{i−1} ≤ θ.X < yi. We have E(ĉ) = c and

follows the conditional law of X given yi−1 ≤ θ.X < yi . c) = c and We have E( Var( c) =

I  p2 σ 2 i

i=1

i

ni

1  p2i σi2 1   pi σi 2 1   pi σi 2 = qi , qi ≥ n i=1 qi n i=1 qi n i=1 qi I

=

I

I

(4.1)

where σi2 = Var(f (X i )) = Var(f (X)|X ∈ Ai ) for all 1 ≤ i ≤ I . In the sequel, we assume σi0 > 0 for at least one index i0 . (Xj )j≥1 be i.i.d. drawings of X . The variance of the crude Monte Carlo estimaLet tor n1 nj=1 f (Xj ) of E(f (X)) is 1 n



I  i=1



pi σi2

 + E f (X ) − 2



i

 I i=1

pi E f (X i )

2

I



1 pi σi2 . n i=1

For given strata, the stratified estimator achieves variance reduction if the allocations ni, or equivalently the proportions qi, are properly chosen. For instance, for the so-called proportional allocation q ≡ p, the variance of the stratified estimator is equal to the previous lower bound of the variance of the crude Monte Carlo estimator. For the optimal allocation

qi⋆ := piσi / ∑_{j=1}^I pjσj, 1 ≤ i ≤ I,

the lower bound in (4.1) is attained. Then

Var(ĉ) = (1/n) ( ∑_{i=1}^I piσi )² =: σ⋆²/n.

In general, when the conditional expectations E(f(X)|X ∈ Ai) = E(f(X^i)) are unknown, then so are the conditional variances σi², so the optimal allocation of the drawings is not feasible at once. One can of course estimate the conditional variances and the optimal proportions by a first Monte Carlo algorithm and then run a second Monte Carlo procedure, with drawings independent from the first one, to compute the stratified estimator corresponding to these estimated proportions. But why not use the drawings made in the first Monte Carlo procedure also for the final computation of the conditional expectations? Instead of running two successive Monte Carlo procedures, one can obtain a first estimation of the σi's using the first drawings of the X^i's made to compute the stratified estimator, then estimate the optimal allocations before making further drawings allocated in the strata according to these estimated proportions, then obtain another estimation of the σi's, compute the allocations again, and so on. This is the principle of the adaptive allocation procedure proposed in [10] and described in the next section. We will then present the adaptive algorithm proposed in [9] to optimise the strata themselves.
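A minimal sketch of stratified sampling along a direction θ for X ∼ Nd(0, Id), using the conditional simulation formula of this section with proportional allocation; the strata boundaries and the payoff are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

nd_ = NormalDist()                       # one-dimensional N(0,1): cdf N and N^{-1}
rng = np.random.default_rng(7)

d, I, n = 2, 10, 50_000
theta = np.array([1.0, 0.0])             # |theta| = 1: stratify along x_1
y = np.concatenate(([-8.0], np.linspace(-2.5, 2.5, I - 1), [8.0]))  # ~ +-infinity
p = np.array([nd_.cdf(y[i + 1]) - nd_.cdf(y[i]) for i in range(I)])
p /= p.sum()

f = lambda x: np.maximum(x[:, 0] + 0.5 * x[:, 1], 0.0)   # hypothetical payoff

c_hat = 0.0
for i in range(I):
    ni = max(1, round(n * p[i]))                         # proportional allocation
    X = rng.standard_normal((ni, d))
    u = nd_.cdf(y[i]) + rng.random(ni) * (nd_.cdf(y[i + 1]) - nd_.cdf(y[i]))
    z = np.array([nd_.inv_cdf(min(max(v, 1e-12), 1.0 - 1e-12)) for v in u])
    Xi = X + np.outer(z - X @ theta, theta)              # now theta.Xi = z, stratum i
    c_hat += p[i] * f(Xi).mean()
```

Here E max(X1 + 0.5 X2, 0) = √1.25/√(2π) ≈ 0.446, and ĉ is unbiased whatever the allocation.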

4.1 Adaptive optimal allocation

Let N^k (resp. N^k_i) denote the total number of random drawings X^i_j made in all the strata (resp. in stratum i) at the end of step k of the following algorithm:

1. At step 1, allocate the N¹ first drawings in the strata proportionally to the pi and estimate E(f(X^i)) and σi, 1 ≤ i ≤ I.

2. At the beginning of step k ≥ 2, compute the vector (n1, . . . , nI) ∈ R^I_+ obtained by allocating the N^k − N^{k−1} new drawings

• either proportionally to the estimations pi σ̂^{k−1}_i / ∑_{l=1}^I pl σ̂^{k−1}_l of the qi⋆ available at the end of step k − 1,

• or in order to minimise the estimated variance ∑_{i=1}^I (pi σ̂^{k−1}_i)² / N^k_i of the stratified estimator after step k under the constraints ∑_{i=1}^I N^k_i = N^k and ∀i, N^k_i ≥ N^{k−1}_i. The explicit solution of this constrained optimisation problem is given in [10].

Then convert (n1, . . . , nI) into an integer-valued vector by the following rounding procedure preserving the sum: n^k_i = ⌊∑_{l=1}^i nl⌋ − ⌊∑_{l=1}^{i−1} nl⌋, and allocate n^k_i new drawings in stratum i. Refine the estimations ĉ^k_i and σ̂^k_i of E(f(X^i)) and σi using these new drawings.

In fact, one has to modify this algorithm in order to enforce at least one drawing in each stratum at each step. Indeed, if σ̂¹_{i0} = 0 whereas σ_{i0} > 0, then no drawings are made after step k = 1 in the stratum i0 and

(1/N^k_{i0}) ∑_{j=1}^{N^k_{i0}} f(X^{i0}_j) = (1/N¹_{i0}) ∑_{j=1}^{N¹_{i0}} f(X^{i0}_j)

does not converge to E(f(X^{i0})) when k → +∞, which prevents the stratified estimator ∑_{i=1}^I (pi/N^k_i) ∑_{j=1}^{N^k_i} f(X^i_j) from converging to E(f(X)). Choosing the sequence (N^k)_{k≥1} so that N^k ≥ N^{k−1} + I for all k ≥ 2, enforcing one drawing in each stratum at each step k, and allocating the remaining N^k − N^{k−1} − I drawings according to the previous procedure permits to overcome this difficulty. Then ∀1 ≤ i ≤ I, ∀k ≥ 1, N^k_i ≥ k, and the following result is proved in [10] by first checking that the proportions N^k_i/N^k converge a.s. to the optimal ones qi⋆ as k → ∞ and then applying the central limit theorem for martingales:

Theorem 4.1.

P( ∑_{i=1}^I (pi/N^k_i) ∑_{j=1}^{N^k_i} f(X^i_j) → E(f(X)) as k → ∞ ) = 1.

If, moreover, σ_{i0} > 0 for some i0 ∈ {1, . . . , I} and lim_{k→+∞} k/N^k = 0, then

√(N^k) ( ∑_{i=1}^I (pi/N^k_i) ∑_{j=1}^{N^k_i} f(X^i_j) − E(f(X)) ) →L N1(0, σ⋆²) as k → ∞,

with σ⋆² = ( ∑_{i=1}^I pi σi )² the asymptotic variance for the optimal allocation.

As a consequence, ( √(N^k) / ∑_{i=1}^I pi σ̂^k_i ) ( ∑_{i=1}^I (pi/N^k_i) ∑_{j=1}^{N^k_i} f(X^i_j) − E(f(X)) ) →L N1(0, 1) as k → ∞, and

one may easily construct confidence intervals for E(f(X)). Numerical experiments performed in [10] on the pricing of arithmetic average Asian options in the Black–Scholes model show that adaptive allocation permits to divide the variance obtained with proportional allocation by a factor of up to 50.

Another stratified sampling algorithm in which the optimal proportions and the conditional expectations are estimated using the same drawings has been proposed in [4] for quantile estimation. More precisely, for a total number of drawings equal to N, the authors suggest allocating the first N^γ of them, with 0 < γ < 1, proportionally to the probabilities of the strata, and then using the estimation of the optimal proportions obtained from these first drawings to allocate the N − N^γ remaining ones. Their stratified estimator is also asymptotically normal with asymptotic variance equal to the optimal one. In practice, N is finite and it seems better to take advantage of all the drawings, and not only the first N^γ ones, to modify the allocation between the strata adaptively.
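A compact sketch of the adaptive optimal allocation idea of this section on a hypothetical one-dimensional example (X ∼ N1(0, 1), I equiprobable strata, f(x) = max(x, 0)); the batch sizes, step count and the crude rounding rule are illustrative simplifications of the algorithm above.

```python
import numpy as np
from statistics import NormalDist

nd_ = NormalDist()
rng = np.random.default_rng(8)

I, K, batch = 8, 40, 1000
edges = [-8.0] + [nd_.inv_cdf(i / I) for i in range(1, I)] + [8.0]
p = np.full(I, 1.0 / I)                      # equiprobable strata

def draw(i, m):
    """m drawings of X conditioned on stratum i, by inversion."""
    u = nd_.cdf(edges[i]) + rng.random(m) * (nd_.cdf(edges[i + 1]) - nd_.cdf(edges[i]))
    return np.array([nd_.inv_cdf(min(max(v, 1e-12), 1.0 - 1e-12)) for v in u])

s1, s2 = np.zeros(I), np.zeros(I)            # running sums of f and f^2 per stratum
N = np.zeros(I, dtype=int)

for k in range(K):
    if k == 0:                               # step 1: proportional allocation
        alloc = np.maximum((batch * p).astype(int), 1)
    else:                                    # later steps: estimated optimal allocation
        sig = np.sqrt(np.maximum(s2 / N - (s1 / N) ** 2, 0.0))
        w = p * sig
        w = w / w.sum() if w.sum() > 0 else p
        alloc = 1 + np.floor((batch - I) * w).astype(int)   # at least one per stratum
    for i in range(I):
        x = np.maximum(draw(i, int(alloc[i])), 0.0)         # f(X^i_j) = max(x, 0)
        s1[i] += x.sum(); s2[i] += (x ** 2).sum(); N[i] += int(alloc[i])

c_hat = float((p * (s1 / N)).sum())          # stratified estimator after K steps
```

The drawings concentrate in the strata where f actually varies (here the rightmost ones), while E max(X, 0) = 1/√(2π) ≈ 0.399 is recovered.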

4.2 Adaptive optimisation of the strata for normal random vectors

Let us now consider the problem of optimally designing the strata when they are parametrised in the following way: for 1 ≤ i ≤ I, Ai = {x ∈ Rd : θ.x ∈ [y_{i−1}, yi)}, where −∞ = y0 < y1 < · · · < y_{I−1} < yI = +∞ and θ ∈ Rd is s.t. |θ| = 1.


In [9], we address a more general parametrisation where the strata are defined by hyperrectangles, but the present section is devoted to the particular case of a single stratification direction. Our aim is to approximate the parameters (θ, y1, . . . , y_{I−1}) defining the strata which minimise the standard deviation σ⋆ = ∑_{i=1}^I pi σi obtained either by optimal allocation or with the adaptive allocation algorithm described above. This standard deviation σ⋆ is equal to

∑_{i=1}^I √( (νθ(1, yi) − νθ(1, y_{i−1})) (νθ(f², yi) − νθ(f², y_{i−1})) − (νθ(f, yi) − νθ(f, y_{i−1}))² ),

where νθ(h, y) := E(h(X) 1_{θ.X≤y}) for y ∈ R and h : Rd → R such that h(X) is integrable. According to the following lemma proved in [9], it is possible to express the gradient of νθ(h, y) in terms of conditional expectations.

Lemma 4.2. When θ.X admits a density pθ w.r.t. the Lebesgue measure on the real line, and under further technical regularity assumptions not specified here,

∂y νθ(h, y) = pθ(y) E(h(X) | θ.X = y),
∇θ νθ(h, y) = −pθ(y) E(X h(X) | θ.X = y).

We suppose from now on that X ∼ Nd(0, Id) is a standard normal random vector. Then

pθ(y) = e^{−y²/2}/√(2π) and ∀i ∈ {1, . . . , I}, E(h(X) | θ.X = y) = E[ h(X^i + (y − θ.X^i) θ) ].

At each step k of the above optimal allocation algorithm, this enables us

1. to estimate the gradient of σ⋆ w.r.t. (y1, . . . , y_{I−1}) and θ using the orthogonal projections on the boundaries of the random drawings X^i_j made at this step in the strata,

2. to perform a gradient descent step to update the stratification direction and boundaries.

In practice, the differences N^k − N^{k−1} should be large enough for the optimisation not to increase significantly the computation time needed to calculate the crude Monte Carlo estimator. As a consequence, the Monte Carlo estimator of the gradient is precise, and the optimisation of the strata parameters is more a noisy gradient descent than a stochastic approximation algorithm. According to our numerical experiments, optimising the direction θ works: the gradient procedure converges to some limit, and this ensures effective variance reduction. On examples involving discontinuous payoffs such as barrier options, the optimal direction computed with our algorithm is significantly different from, and more efficient than, the one derived analytically in [13] using some large deviations asymptotics. Numerical optimisation of the strata boundaries was far less convincing. In [9], we explain


this numerical observation by the following asymptotic analysis performed in the limit I → ∞. We parametrise the boundaries by a positive probability density g on R with c.d.f. G(y) = ∫_{−∞}^y g(z) dz and set yi = G^{−1}(i/I) for i ∈ {0, . . . , I}.

Theorem 4.3. Let d ≥ 2. If

∫_R (h²/g)(y) dy < +∞ for h ∈ { pθ , pθ × E(f(X)|θ.X = ·) , pθ × E(f²(X)|θ.X = ·) },

then lim_{I→∞} σ⋆(I) = E( √(Var(f(X)|θ.X)) ).

When d = 1 and f is a locally bounded function on the real line with a locally integrable distributional derivative f′ such that ess sup_y ((pθ + |f′|)/g)(y) < +∞, then

lim_{I→∞} I σ⋆(I) = (1/√12) ∫_R ((|f′| pθ)/g)(y) dy.

The fact that, in the practical case d ≥ 2, the limit does not depend on g means that, under optimal or adaptive allocation, the choice of the boundaries of the strata is not important when the number of strata is large. So only the stratification direction θ should be optimised. Note that the optimised direction θ computed by our algorithm can be used to design Latin hypercube or quasi-Monte Carlo (see [12]) estimators of E(f(X)). When X is a standard normal random vector, for any orthogonal matrix O ∈ Rd×d, E(f(X)) = E(f(OX)), but the convergence properties of the Latin hypercube or QMC estimators associated with the variable f(OX) crucially depend on O. Unfortunately, it is very difficult to estimate these rates of convergence, and adaptive optimisation of the matrix O seems out of reach. As Latin hypercube and QMC methods somehow consist in stratifying each canonical direction, choosing the first column of O equal to θ should be effective.

Bibliography
[1] Bouhari Arouna, Adaptative Monte Carlo method, a variance reduction technique, Monte Carlo Methods Appl. 10 (2004), pp. 1–24. MR MR2054568 (2004m:62159)
[2] Bouhari Arouna, Robbins–Monro algorithms and variance reduction in finance, J. Comput. Finance 7 (Winter 2003/04), pp. 35–61.
[3] Keith Baggerly, Dennis Cox, and Rick Picard, Exponential convergence of adaptive importance sampling for Markov chains, J. Appl. Probab. 37 (2000), pp. 342–358. MR MR1780995 (2001e:65008)
[4] Claire Cannamela, Josselin Garnier, and Bertrand Iooss, Controlled stratification for quantile estimation, Ann. Appl. Stat. 2 (2008), pp. 1554–1580.
[5] Han Fu Chen, Guo Lei, and Ai Jun Gao, Convergence and robustness of the Robbins–Monro algorithm truncated at randomly varying bounds, Stochastic Process. Appl. 27 (1988), pp. 217–231. MR MR931029 (89b:62180)
[6] Han Fu Chen and Yun Min Zhu, Stochastic approximation procedures with randomly varying truncations, Sci. Sinica Ser. A 29 (1986), pp. 914–926. MR MR869196 (88b:62158)


[7] Sebastián del Baño Rollin and Joan-Andreu Lázaro-Camí, Antithetic variates in higher dimension, Preprint ArXiv:0902.4211 (2009).
[8] Paul Dupuis and Hui Wang, Dynamic importance sampling for uniformly recurrent Markov chains, Ann. Appl. Probab. 15 (2005), pp. 1–38. MR MR2115034 (2006b:60042)
[9] Pierre Étoré, Gersende Fort, Benjamin Jourdain, and Éric Moulines, On adaptive stratification, Preprint ArXiv:0809.1135 (2008).
[10] Pierre Étoré and Benjamin Jourdain, Adaptive optimal allocation in stratified sampling methods, Methodol. Comput. Appl. Probab. (to appear).
[11] Michael B. Giles and Ben J. Waterhouse, Multilevel quasi-Monte Carlo path simulation, Radon Series Comp. Appl. Math. 8 (2009).
[12] Paul Glasserman, Monte Carlo Methods in Financial Engineering, Applications of Mathematics (New York), vol. 53, Springer-Verlag, New York, 2004, Stochastic Modelling and Applied Probability. MR MR1999614 (2004g:65005)
[13] Paul Glasserman, Philip Heidelberger, and Perwez Shahabuddin, Asymptotically optimal importance sampling and stratification for pricing path-dependent options, Math. Finance 9 (1999), pp. 117–152. MR MR1849001 (2002m:91035)
[14] Shane G. Henderson and Burt Simon, Adaptive simulation using perfect control variates, J. Appl. Probab. 41 (2004), pp. 859–876. MR MR2074828 (2005h:65009)
[15] Benjamin Jourdain and Jérôme Lelong, Robust adaptive importance sampling for normal random vectors, Ann. Appl. Probab. (to appear).
[16] Reiichiro Kawai, Adaptive Monte Carlo variance reduction with two-time-scale stochastic approximation, Monte Carlo Methods Appl. 13 (2007), pp. 197–217. MR MR2349428 (2008h:62195)
[17] Reiichiro Kawai, Adaptive Monte Carlo variance reduction for Lévy processes with two-time-scale stochastic approximation, Methodol. Comput. Appl. Probab. 10 (2008), pp. 199–223. MR MR2399681
[18] Reiichiro Kawai, Optimal importance sampling parameter search for Lévy processes via stochastic approximation, SIAM J. Numer. Anal. 47 (2008), pp. 293–307.
[19] Sujin Kim and Shane G. Henderson, Adaptive control variates, Proceedings of the 2004 Winter Simulation Conference (2004), pp. 621–629.
[20] Sujin Kim and Shane G. Henderson, Adaptive control variates for finite-horizon simulation, Math. Oper. Res. 32 (2007), pp. 508–527. MR MR2348231 (2008i:65005)
[21] Stephen Lavenberg, Thomas Moeller, and Peter Welch, Statistical results on control variables with application to queuing network simulation, Oper. Res. 30 (1982), pp. 182–202.
[22] Jérôme Lelong, Almost sure convergence of randomly truncated stochastic algorithms under verifiable conditions, Stat. Probab. Letters 78 (2008), pp. 2632–2636.
[23] Vincent Lemaire and Gilles Pagès, Unconstrained recursive importance sampling, Preprint ArXiv:0807.0762 (2008).
[24] Barry L. Nelson, Control variate remedies, Oper. Res. 38 (1990), pp. 974–992. MR MR1095954
[25] Teemu Pennanen and Matti Koivu, An adaptive importance sampling technique, Monte Carlo and Quasi-Monte Carlo Methods 2004, Springer, Berlin, 2006, pp. 443–455. MR MR2208724 (2006k:65065)
[26] Yi Su and Michael Fu, Optimal importance sampling in securities pricing, J. Comput. Finance 5 (2002), pp. 26–50.


[27] Felicia Vázquez-Abad and Daniel Dufresne, Accelerated simulation for pricing Asian options, Proceedings of the 1998 Winter Simulation Conference (1998), pp. 1493–1500.

Author information
Benjamin Jourdain, Université Paris-Est, CERMICS, Project team MathFi ENPC-INRIA-UMLV, 6 et 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2, France. Email: [email protected]

Radon Series Comp. Appl. Math 8, 223–244
© de Gruyter 2009

Regularisation of inverse problems and its application to the calibration of option price models
Stefan Kindermann and Hanna K. Pikkarainen

Abstract. We give an overview of the most important features of inverse and ill-posed problems and their solutions by regularisation. We point out links to the problem of model calibration in financial mathematics by a survey on the calibration of option price models using Tikhonov regularisation.
Key words. Inverse problems, Tikhonov regularisation, model calibration, option pricing.
AMS classification. 65J20, 91B28

1 Introduction

Inverse problems are a well-established field in mathematics, combining theory and application in a fascinating manner. The ill-posedness of many interesting inverse problems is the most salient feature that makes these problems difficult to solve. The classical way to cope with the ill-posedness is to apply regularisation. The theory of regularisation is built on a sound mathematical basis and is one of the cornerstones of research on inverse and ill-posed problems. With the emergence of advanced models in financial mathematics, the need for robust model calibration arose in financial applications. Apart from rather simple models, these calibration problems turned out in many instances to be ill-posed inverse problems. Thus, there was and is a strong demand for well-founded regularisation methods in financial mathematics. The purpose of this paper is to give an overview of the regularisation theory and related methods in inverse problems. This overview is motivated by the problem of model calibration for option pricing, which is one of the most prominent examples where regularisation “pays off”. We would like to highlight aspects of (mainly) Tikhonov regularisation with this application (like the calibration of a local volatility in the Dupire model) in mind. Hopefully, this introduction can shed some light on the necessity and the importance of regularisation and can serve as a very basic user's guide for nonspecialists and researchers in mathematical finance. Finally, we give a survey of recent results where regularisation has been successfully applied to option pricing and related calibration problems.

2 Inverse problems

A well-known definition [23] states that inverse problems are concerned with determining causes for a desired or an observed effect. This general statement does not answer the question why inverse problems require a special mathematical theory. It is rather the case that most inverse problems are ill-posed in the sense of Hadamard [31, 23] and therefore need extra mathematical treatment. Hadamard calls a problem well-posed if for all data a solution exists, the solution is unique, and the solution depends stably on the data. If at least one of these conditions is violated, a problem is called ill-posed. Ill-posedness is the distinguishing feature of most inverse problems, so that the notions inverse problem and ill-posed problem are in many cases used synonymously. Ill-posedness also leads to the distinction between direct and inverse problems: these are problems that are inverse to each other, but the direct problem is usually the well-posed one while the inverse problem might be ill-posed. Ill-posedness manifests itself mainly in the instability of an inverse problem: a solution to an inverse problem does not depend stably (in an appropriate topology) on the input. A well-known example that demonstrates this dichotomy is the relation between the volatility and the option price in standard option price models. Given, for instance, a standard Dupire model, a direct problem is stated as follows: if the (possibly nonconstant) volatility function is known, find the associated option price for a European call option. Solving the direct problem here amounts to solving a parabolic partial differential equation with known coefficients (the volatility). From standard PDE theory it follows that a solution to this problem exists and is unique (under not too strong assumptions), and the option price depends stably on the input parameter, the volatility.
All this is not the case for the corresponding inverse problem: in the same model, let us assume we have full or partial information on the option price. An inverse problem is stated as follows: given the option price, find a/the volatility that generates this option price via the Black–Scholes PDE. Now this is an ill-posed problem, because even if a solution to this inverse problem exists and is unique, the volatility will not depend on the option price in a continuous way (using, e.g., Sobolev space norms, see, e.g., [20]). The instability is the main obstacle to calculating the volatility from the option price in a reliable way. Hence, it has to be taken into account if we want to calibrate an option price model in a robust way. Note that for calibration the use of inverse problem theory is not always needed: if we want to find any parameter that fits the data, and do not want to draw conclusions about this parameter, this can be done by standard algorithms. Instances where such an approach is relevant are black-box models, or control theory, where one is satisfied with a good fit. Mathematically speaking, here only the convergence of the model to the data in the data space is required. However, if we expect that there is a “real” meaningful parameter behind the model and if we want to extract reliable information from the data about this parameter, we ask for a method which not only provides a good fit but also approximates this exact parameter in a satisfying manner. We therefore want a method which shows convergence in the parameter/solution space. Solving unstable inverse problems in a naive way


does not necessarily give convergence in the parameter space. As a rule of thumb: for ill-posed problems a good fit to the data does not mean a good fit of the unknown parameter. We should be aware that the option prices are not exact. There exists a bid–ask spread which can be seen as some kind of noise in the data. If we cannot calculate a volatility from the data in a stable way, the result is useless because it does not give any information on the ‘true’ volatility. The calculated solution can depend on the type of computer used, on the accuracy used, and on many other parameters. For ill-posed problems with instability, computational experience shows that with naive algorithms, useless solutions far from the true one are not exceptional but rather common. This, however, is not the end of the story. Fortunately, there is a way to calculate approximate solutions to inverse problems in a stable way by so-called regularisation methods. This theory is the centre point for solving inverse problems algorithmically. The theory of inverse problems can efficiently be treated by formulating them as abstract operator equations. It is common to introduce the so-called input-to-output mapping (the forward mapping or the parameter-to-solution mapping), which is just the mapping of an unknown in an inverse problem to the data generated by this unknown. Let us denote the unknown by x and the data by y. Then the forward mapping can be written as the operator

F : D(F) ⊂ X → Y, x ↦ F(x)

acting between topological spaces X and Y (usually Hilbert or Banach spaces). In the option price example above, the forward operator would take a volatility σ and map it to the solution of the associated Black–Scholes equation. The theory of PDEs can be used to show that this mapping is well-defined and continuous if appropriate spaces are chosen for X and Y. Although this is a compact notation, the computation of the forward operator can be quite difficult and expensive; for instance, it involves solving one or several PDEs. Given the operator F we can express the inverse problem as an operator equation: Given the data y, find the/a solution x such that

F(x) = y.    (2.1)

The inverse problem is ill-posed if the operator F does not possess a continuous inverse on Y. Let us look closer at the problem of ill-posedness: As we have already pointed out, out of the three conditions for well-posedness, the violation of stable dependence on the data is the most severe one. Violation of the other conditions can partially be remedied by an appropriate generalisation of the definition of a solution. For instance, it is standard to use least squares solutions for the case when the data are not in the range of F: We call x a least squares solution if it minimises the error ‖F(x) − y‖_Y:

x = argmin{‖F(z) − y‖_Y | z ∈ D(F)},    (2.2)

where Y is a normed space and ‖·‖_Y is a norm on Y. Here it is clear that a least squares solution can exist even if y is not in the range of F. However, such a generalised solution does not necessarily solve the problem of existence completely because it might


not exist as well. For linear problems in Hilbert spaces the set of data y for which a least squares solution exists is well understood: it is the domain of definition of the pseudoinverse (see [23]), and hence a dense subset of the data space Y. Thus, a least squares solution generalises the notion of solution to data out of a dense subset of Y. In the nonlinear case, the question of existence of a least squares solution is not easy to answer. To circumvent this question, it is quite common to use the assumption of attainability, i.e., that y is in the range of F. Of course, this trivially implies the existence of a least squares solution. If we assume that our model is correct (i.e., F is an exact description of the parameter-to-data mapping) and there is no noise in the data, attainability is a reasonable assumption. For an analysis of nonlinear ill-posed problems without the attainability assumption we refer to [4]. Next we discuss uniqueness: A least squares solution is usually not unique. In fact, if F has a null space, adding an element of the null space to a least squares solution yields another least squares solution. If we want to reduce that ambiguity, we can define the minimum norm least squares solution x†:

x† = argmin{‖x‖_X | x is a least squares solution},    (2.3)

where X is a normed space and ‖·‖_X is a norm on X. Thus we simply select a least squares solution which has the minimal norm among all least squares solutions. It is often helpful to change this definition and include an a-priori guess x∗ to get the x∗-minimum norm least squares solution:

x† = argmin{‖x − x∗‖_X | x is a least squares solution}.    (2.4)

Without further assumptions, a minimum norm least squares solution need neither exist nor be unique. It exists if a least squares solution exists, and if F is linear or F is injective it is unique. Showing that a forward operator is injective is usually a difficult task for parameter identification problems, requiring sophisticated methods (for an overview see for instance [42]). From a practical point of view it is useful to separate the question of uniqueness (the injectivity of F) from the reconstruction procedure and simply postulate the existence of a minimum norm least squares solution. With these definitions we can now specify more precisely what we understand by a solution to an inverse problem: Given some data y, we want to find an (x∗-)minimum norm least squares solution of equation (2.1). The use of minimum norm least squares solutions is especially useful in the case where F is a linear operator. Then the mapping from y to the minimum norm least squares solution x† (for those y for which it is defined) is called the pseudoinverse F†. For a linear problem we can completely characterise when the problem is ill-posed, namely exactly when the pseudoinverse is not continuous. Using well-known tools from functional analysis (e.g., the open mapping theorem) it can further be concluded that for a linear problem between Hilbert spaces the pseudoinverse is not continuous (in particular, the problem is ill-posed) when the range of F is not closed in Y, or equivalently if the domain of definition of F† is not the whole data space Y (but only a dense subset). This is a quite useful characterisation of the instability. Note that in the linear case the domain of definition of F† coincides with the set of y for which a least squares solution exists. This links the existence


question in Hadamard's definition to the stability question: If for all y a least squares solution exists, the problem is well-posed, and vice versa. Another important conclusion concerns the discrete case. If we have a discretised problem (i.e., X and Y are finite-dimensional spaces), for linear problems the range of F is always closed and a least squares solution exists for all y. This means that in finite dimensions there is no ill-posedness. This is illustrated by the quote “ill-posedness resides in infinite dimensions” [28]. Strictly speaking, this only holds true for the linear case, but as a rule of thumb (finite-dimensional problems are well-posed) it is quite useful in the nonlinear case, too. This explains to some extent why naive algorithms can work for some inverse problems: If we are only interested in finding a finite number of parameters of a solution, we are in the discrete case and the inverse problem is stable. This is in particular the case when we take a parametric approach: We suppose that the volatility is taken out of a specific set of functions parameterised by a finite number of coefficients (e.g., Gaussians where only the mean is unknown). The same argument applies if we assume a constant volatility (the Black–Scholes model): Then we only have to find one number and the problem is not ill-posed. In these cases no regularisation is needed because the problem is stable anyway. However, if the dimension of the solution space becomes larger and larger, the problem might still be stable in a strict sense, but, if the underlying infinite-dimensional problem is ill-posed, it will become ill-conditioned. The modulus of continuity of the pseudoinverse will be very large, and hence again a naive computation will fail for the same reasons as in the infinite-dimensional case. So even for finite-dimensional problems regularisation is necessary if the dimension of the solution space becomes large. (There is no general answer to the question when the dimension is too large; this depends on the problem, i.e., on the mapping properties of F.) For nonlinear problems, there is no standard notion of a pseudoinverse, and thus it is more involved to find conditions under which a problem is ill-posed. There is an important class of problems which are ill-posed and for which ill-posedness can easily be shown: If the operator F is compact, it cannot have a continuous inverse. Thus one way to prove ill-posedness is to show that F is a compact mapping in appropriate spaces. This was done for the option price example in [20]. The last important definition concerns the noise: In the deterministic theory, the noise is treated as a rather arbitrary (but deterministic) function that is added to the exact data. Here we can distinguish between the exact data y and the noisy data yδ, where y is assumed to be in the range of F while yδ is a version with additive noise:

yδ = y + n,  y = F(x†).

Of course, in practical applications only the noisy data yδ are available; neither y nor n is given. What might be known is an upper bound on the amount of noise, the so-called noise level δ ∈ R+:

δ := ‖yδ − y‖_Y,  y = F(x†).

Besides this approach, there is a stochastic version of the theory of inverse problems, coming from statistics: Most of it is concerned with the case when the noise is not a fixed function but a random variable. Moreover, quite often it is assumed that the distribution of the noise n is fixed (such as a Gaussian noise): Here some analogue to

228

S. Kindermann and H. K. Pikkarainen

the deterministic noise level is the variance of the noise, σ² = E‖yδ − y‖²_Y.

If the assumption of a Gaussian noise is dropped and the distribution of the noise is not a-priori fixed, another general theory for stochastic inverse problems uses general metrics for stochastic variables, such as the Ky Fan or the Prokhorov metric (see [24, 35]). Again a kind of noise level can be defined as the distance of yδ to y in these metrics.

3 Regularisation

After having outlined the main difficulties of inverse problems, the immediate question is how to solve an inverse problem in a stable way. As we have already indicated, this can be done by regularisation. The basic idea of regularisation is as simple as it is ingenious: Instead of solving the original ill-posed problem we solve a neighbouring well-posed one. Let us at first describe the method for linear problems. In abstract operator notation, regularisation can be formulated as follows: Instead of computing a solution from noisy data by the pseudoinverse F†yδ, which is unstable and might not even exist for some data, we use an operator Rα that is stable and approximates F†, and compute a regularised solution

xδα = Rα yδ.    (3.1)

Since we changed the operator F†, this will not give the right solution x†; however, under appropriate conditions, the regularised solution will be close to x†. The important properties that we require of Rα are that it is stable and approximates F†. If F† is discontinuous, these properties work against each other: In view of the Banach–Steinhaus theorem it is impossible to approximate a discontinuous operator pointwise by continuous ones. Thus one has to find a compromise between approximation and stability. This is achieved by defining a family of regularisation operators Rα, depending on a regularisation parameter α > 0 which controls the compromise between approximation and stability. A regularisation operator should at least have the following properties:
• Stability: For any α > 0, Rα is a stable operator.
• Approximation: lim_{α→0} Rα y = F† y for all y ∈ D(F†).
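In finite dimensions, a concrete instance of such a family is linear Tikhonov regularisation (treated in Section 3.1): for a matrix A, Rα y = (AᵀA + αI)⁻¹Aᵀy is stable for every α > 0 and approaches the pseudoinverse as α → 0. The following sketch is our own illustration, not part of the survey; the 6×6 Hilbert matrix, the oscillatory noise, and the hand-picked α are all our choices. It shows why a naive solve fails while Rα does not.

```python
import math

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * bb for a, bb in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov(A, y, alpha):
    """Minimiser of ||A x - y||^2 + alpha ||x||^2: (A^T A + alpha I) x = A^T y."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Aty = [sum(A[k][i] * y[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Aty)

# ill-conditioned forward operator: 6x6 Hilbert matrix (condition number ~ 1.5e7)
n = 6
A = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
x_true = [1.0] * n
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(n)]
y_noisy = [yi + 1e-5 * (-1) ** i for i, yi in enumerate(y)]   # tiny oscillatory noise

x_naive = solve(A, y_noisy)          # pseudoinverse on noisy data: noise blows up
x_reg = tikhonov(A, y_noisy, 1e-6)   # R_alpha: stays close to x_true

err = lambda x: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x_true)))
```

Here α = 1e-6 is picked by hand; the parameter choice rules discussed next are precisely about automating this choice.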

For nonlinear problems, the pseudoinverse is not necessarily defined, but the basic properties of a regularisation operator stay the same. It should be a stable (possibly nonlinear) operator that approximates a minimum norm solution for those y for which a minimum norm solution exists. Before we discuss convergence issues, we have to emphasise the role of the regularisation parameter α. It is not possible to find a good parameter α independent of the

Regularisation of inverse problems

229

data. Instead, the regularisation parameter has to be chosen depending on the data. So in general α has to be a function of the available noisy data yδ and/or the noise level δ. We can distinguish three types of so-called parameter choice rules:
• a-priori parameter choice rules, where α = α(δ), i.e., α depends only on the noise level, possibly including information about the a-priori smoothness of x†;
• a-posteriori parameter choice rules, where α = α(yδ, δ);
• noise level free ([23]: error free) parameter choice rules, where α = α(yδ), i.e., α is independent of the noise level and depends only on the data.
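The best-known a-posteriori rule is Morozov's discrepancy principle (it reappears in Section 3.1): choose α such that the residual ‖F(xδα) − yδ‖_Y matches τδ for some τ > 1. A minimal sketch on a diagonal toy operator follows; everything in it, from the singular values to τ = 2, is our own illustrative choice, not taken from the survey.

```python
import math
import random

# diagonal toy operator (F x)_i = s_i x_i with decaying singular values s_i
n = 100
s = [1.0 / (i + 1) for i in range(n)]
x_true = [1.0 / (i + 1) for i in range(n)]
y = [si * xi for si, xi in zip(s, x_true)]

delta = 1e-3
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
nn = math.sqrt(sum(v * v for v in noise))
y_d = [yi + delta * v / nn for yi, v in zip(y, noise)]   # ||y_d - y|| = delta exactly

def tikhonov(alpha):
    # componentwise Tikhonov solution for a diagonal operator
    return [si * yi / (si * si + alpha) for si, yi in zip(s, y_d)]

def residual(alpha):
    x = tikhonov(alpha)
    return math.sqrt(sum((si * xi - yi) ** 2 for si, xi, yi in zip(s, x, y_d)))

# discrepancy principle: residual(alpha) = tau * delta; the residual is
# monotonically increasing in alpha, so geometric bisection works
tau, lo, hi = 2.0, 1e-12, 1e2
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if residual(mid) > tau * delta:
        hi = mid
    else:
        lo = mid
alpha_star = math.sqrt(lo * hi)
```

Note that the rule uses both yδ (through the residual) and δ, i.e., it is a-posteriori in the above terminology.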

So strictly speaking, a regularisation method is always defined as a family of regularisation operators together with a parameter choice rule; for a precise definition see [23]. The last of the parameter choice rules might look like a very appealing choice, as it does not require knowledge of the noise level (which might not be available). The crux, however, is that such a noise level free parameter choice for ill-posed problems will never give rise to a convergent regularisation method in the worst case, i.e., if the problem is ill-posed, there is always a noise such that for any regularisation method together with a noise level free parameter choice rule the regularised solution will not converge to the true one even though the noisy data converge to the exact one:

Rα(yδ) yδ ↛ x†  although yδ → y.

This result is often referred to as the Bakushinskii veto [2]. It is the reason why in the deterministic theory one is bound to use a-priori or a-posteriori parameter choice rules. Note that a similar result holds in the stochastic case for the Ky Fan and the Prokhorov metrics [35]. On the contrary, for the Gaussian noise case described above it is indeed possible to use noise level free parameter choice rules. Furthermore, in [50] it was shown that, excluding a so-called smooth noise, one can again obtain convergence even with a noise level free method. Next we should indicate what we mean by convergence: The regularised solution should converge to the true one if the noisy data converge to the exact data (or the noise level tends to 0). More precisely, a regularisation method with a parameter choice rule α(yδ, δ) converges if for all x†

lim_{δ→0} ‖xδα − x†‖_X = 0  for all yδ with ‖yδ − y‖_Y ≤ δ.

This is often called worst-case convergence because we ask for convergence for all noisy data below the noise level. For the stochastic case, an average case error can be used: For instance, if n is a random noise, we define average case convergence as

lim_{δ→0} E‖xδα − x†‖²_X = 0  for all yδ with E‖yδ − y‖²_Y = δ²,

where in this definition n is assumed to be a Gaussian noise with finite variance. (This definition can be modified if n is a generalised random process such as white noise.) Note that the convergence analyses for the worst case and the average case have quite different aspects (as we have seen for the Bakushinskii veto), although the general theme of ill-posedness is central in both of them.


A convergent regularisation method has all the properties we want: We can compute a regularised solution in a stable way, and the regularised solution is close to the true one if the noise level is sufficiently small. It is of interest to further quantify what “close” means here: Can we find estimates (“convergence rates”) for the error ‖xδα − x†‖_X in terms of the noise level, of the following type:

‖xδα − x†‖_X ≤ f(δ)  for all x†,    (3.2)

for some function f (e.g., of Hölder type f(δ) = δ^τ)? Again there is an important negative result: For ill-posed problems and any regularisation method there cannot be a continuous function f with f(0) = 0 such that this uniform estimate holds. In other words: for ill-posed problems convergence can be arbitrarily slow [59]. Such a result is rather disappointing, as we can never be sure whether a computed regularised solution has anything to do with the true one. Note that (3.2) is impossible because it is an estimate uniform in x†. By convergence, for any single x† we can find such a function f(δ); we simply cannot give a uniform bound for all solutions. The remedy in this situation is to impose additional conditions on the exact solution x†. It is a rule of thumb for many algorithms that a smoother solution gives rise to faster convergence. This is also the case here. If we a-priori assume that the solution is smoother than just x† ∈ X, it is possible to show convergence rates for many regularisation methods, i.e.,

‖xδα − x†‖_X ≤ f(δ)  for all x† ∈ X^µ.    (3.3)

Here X^µ denotes a set of a certain smoothness. This smoothness has to be related to the operator F. For linear problems the question how such a smoothness class looks is answered completely: For instance, the set

X^µ = {x | x = (F∗F)^µ w, w ∈ X}

for some µ > 0 gives rise to convergence rates f(δ) = δ^{2µ/(2µ+1)} for many regularisation methods [23]. The condition that x† is in the range of the operator (F∗F)^µ is called a (Hölder) source condition. If F is a smoothing operator (e.g., it takes L²-functions to functions in the Sobolev space H^s), such a condition means that the exact solution is in an appropriate Sobolev space. Hence a source condition can be seen as an abstract smoothness condition. It is possible to extend this convergence rate analysis to the nonlinear case (see below). The source condition cannot be tested if the solution is not known; it has to be postulated. Nevertheless the convergence rate analysis is quite useful, not only because it gives uniform bounds but also because it allows to classify problems according to their ill-posedness. If an operator F is strongly smoothing, the condition defining X^µ will be very restrictive and we have to expect slow convergence in general. On the other hand, if F is only mildly smoothing, we can expect faster convergence for more solutions than in the first case. Related to this is the degree of ill-posedness of an inverse problem [23].


3.1 Tikhonov regularisation

Let us now turn to the most prominent example of a regularisation method for nonlinear problems: Tikhonov regularisation. We have already pointed out the main principle of regularisation: Approximate an ill-posed problem by a well-posed one and solve this problem instead of the original one. The idea of Tikhonov regularisation [60] starts with the least squares formulation (2.2). We have already indicated that solving equation (2.1) in a least squares sense does not lead to a well-posed problem or a stable algorithm. In Tikhonov regularisation one therefore stabilises the least squares problem by adding a suitable penalty term: Instead of solving (2.1) or (2.2) we consider minimising the Tikhonov functional

J(x) := ‖F(x) − yδ‖²_Y + α‖x‖²_X,    (3.4)

where X and Y are Hilbert spaces, α > 0 is a fixed regularisation parameter, and yδ denotes the possibly noisy data. As an approximate solution to our ill-posed problem we look for a minimiser of the Tikhonov functional:

xδα := argmin_{x ∈ D(F)} J(x).    (3.5)

We argued that adding the penalty term in (3.4) helps to stabilise the problem. The question arises whether this is true, i.e., whether the problem of finding a minimiser (3.5) is a well-posed problem at all. As the reader might guess, this is true but requires some mild conditions to be satisfied:
1. For noise free data y there exists an exact solution of F(x) = y.
2. There exists a minimum norm least squares solution of (2.1).
3. F : D(F) ⊂ X → Y is continuous.
4. F is weakly sequentially closed, i.e., for any sequence xn ⊂ D(F) the weak convergence of xn to x (in X) and the weak convergence of F(xn) to y (in Y) imply that x ∈ D(F) and F(x) = y.
The first two conditions have to be postulated. If our model is correct, i.e., for exact data there exists a true solution, and if this solution is unique, these conditions are satisfied. The second condition is used to cover the case of nonunique solutions. The last two conditions are assumptions on the forward operator, and they have to be shown for a specific problem. F satisfies these conditions if D(F) is closed and convex (and hence weakly closed) and F is the composition of a linear compact and a continuous mapping. Under these assumptions it is not difficult to show [23, 25]:

Theorem 3.1. For any yδ ∈ Y, (3.5) admits a solution xδα.

So the Tikhonov regularisation is well-defined, and the first criterion of Hadamard well-posedness is met for (3.5). Remember that the third criterion of well-posedness


was concerned with stability: If yδ converges to some element yδ′, then the regularised solutions xδα should converge as well to the solution of (3.5) with yδ replaced by yδ′. Under the weak assumptions on F, only a subsequence-type continuity can be established [23, 25]:

Theorem 3.2. Let the general assumptions hold. Let α > 0 and let yk and xk be sequences such that yk → yδ and xk is a minimiser of (3.4) with yδ replaced by yk. Then xk has a convergent subsequence, and the limit of every convergent subsequence is a minimiser of (3.4).

This theorem establishes the fact that the regularised solution depends continuously (in an appropriate sense) on the data yδ. The reason why subsequences are needed in the proof is the nonuniqueness of minimisers of (3.4). If we additionally impose that the minimiser of (3.4) is unique, we can conclude in the previous theorem that xk itself converges to it (and not only a subsequence). Showing this uniqueness is, however, rather tedious work, and usually one is satisfied with subsequence convergence. The two preceding theorems basically show that computing the regularised solution is a well-posed problem: The most important conclusion is that xδα depends stably on the data. This of course is only true if α > 0. Thus we have replaced the original ill-posed least squares problem by a well-posed one. The central point of interest is now convergence: Does the regularised solution converge to the true one in the sense of the previous section (i.e., when the noise level tends to 0)? This is the essence of the third main theorem on Tikhonov regularisation [23, 25]:

Theorem 3.3. Let the general assumptions hold. Let yδ ∈ Y and ‖y − yδ‖_Y ≤ δ. Let α(δ) be such that α(δ) → 0 and δ²/α(δ) → 0 as δ → 0. Then every sequence x^{δk}_{α(δk)} of minimisers of (3.5) has a convergent subsequence as δk → 0. Moreover, the limit is a minimum norm least squares solution. If, in addition, this solution is unique, then

lim_{δk→0} x^{δk}_{α(δk)} = x†.

This is the main convergence theorem: if the noise level tends to 0, the regularised solution converges to the 'true' one. As we have already pointed out, the regularisation parameter has to be related to the noise level. The conditions in the theorem mean that the regularisation parameter must not go to 0 too fast as the noise level vanishes. Theorem 3.3 is the workhorse for applying Tikhonov regularisation to any specific problem. The main requirement for an application of Tikhonov regularisation is that the general assumptions hold; this has to be verified for each specific problem. The convergence theorem does not specify how fast the regularised solution converges to the true solution. As noted above, without additional conditions convergence can be arbitrarily slow. So in order to quantify the speed of convergence, some additional assumptions have to be made. As in the linear case, source conditions are the appropriate assumptions. Apart from these, one additionally has to impose differentiability of the forward operator in the nonlinear case. The following theorem establishes convergence rates for Tikhonov regularisation of nonlinear problems [23, 25]:

Regularisation of inverse problems

233

Theorem 3.4. Let the general assumptions hold. Let $D(F)$ be convex, $y^\delta \in Y$ and $\|y - y^\delta\|_Y \le \delta$. Let the following conditions hold:

• $F$ is Fréchet-differentiable.

• There exists a constant $\gamma \ge 0$ such that $\|F'(x) - F'(x^\dagger)\| \le \gamma \|x - x^\dagger\|_X$ for all $x \in D(F)$ in a sufficiently large ball around $x^\dagger$.

• There exists an $\omega \in Y$ such that $x^\dagger = F'(x^\dagger)^* \omega$.

• $\gamma \|\omega\|_Y < 1$.

Then for the choice $\alpha \sim \delta$ we obtain
$$\|x_\alpha^\delta - x^\dagger\|_X = O(\sqrt{\delta}) \quad \text{and} \quad \|F(x_\alpha^\delta) - F(x^\dagger)\|_Y = O(\delta).$$
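The convergence described by Theorems 3.3 and 3.4 can be observed numerically even in the simplest linear setting. The following Python sketch (the operator, the noise model, and all parameter values are our own illustrative choices, not taken from this chapter) regularises a discretised integration operator, a compact and hence ill-posed map, with the a-priori choice α = δ:

```python
import numpy as np

def tikhonov_demo():
    """Tikhonov regularisation of a discretised integration operator
    with the a-priori parameter choice alpha = delta."""
    rng = np.random.default_rng(0)
    n = 100
    t = (np.arange(n) + 0.5) / n
    F = np.tril(np.ones((n, n))) / n          # F x ~ integral of x: compact, ill-posed
    x_true = np.sin(np.pi * t)                # smooth 'exact' solution
    y = F @ x_true

    errors = []
    for delta in (1e-1, 1e-2, 1e-3, 1e-4):
        noise = rng.standard_normal(n)
        y_delta = y + delta * noise / np.linalg.norm(noise)  # ||y - y_delta|| = delta
        alpha = delta                                        # a-priori rule alpha ~ delta
        # minimiser of ||F x - y_delta||^2 + alpha ||x||^2 via the normal equations
        x_alpha = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y_delta)
        errors.append(np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true))
    return errors

errs = tikhonov_demo()
print(errs)   # relative errors shrink as the noise level tends to zero
```

The decrease of the relative error as δ → 0 is what Theorem 3.3 guarantees; for a smooth exact solution such as this one, the decay is roughly of the order predicted by Theorem 3.4.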

The third condition is the source condition, which holds if $x^\dagger$ is in the range of $F'(x^\dagger)^*$. It can be interpreted as an abstract smoothness condition. There are many generalisations of this convergence rate theorem: for instance, one can postulate weaker source conditions and obtain weaker rates (see [23, Theorem 10.7]). Moreover, the problem can be viewed in Hilbert scales, which allows one to derive further convergence rates [23]. There is no need for the a-priori parameter choice $\alpha \sim \delta$; a-posteriori choices can be made instead, for instance, the popular Morozov discrepancy principle [23] or the balancing principle [54, 57, 53]. Let us mention that all the convergence and convergence rate results can be generalised to the stochastic setting using metrics in probability spaces [35].

When applying Tikhonov regularisation there are some choices to be made. Most important is the choice of the regularisation term $\|x\|_X^2$. Note that the space X and its norm were rather arbitrary; the only requirement is that F satisfies the assumptions of the previous theorems as an operator from X to Y. Quite often a Sobolev space norm is taken for X. Note that if F satisfies the conditions of the theorems, this remains true if X is equipped with a stronger norm, and for a stronger norm it is usually easier to verify the conditions on F. On the other hand, we always have to make sure that the exact solution lies in X, so one has to bear in mind that $\|x^\dagger\|_X < \infty$ must hold. The choice of the regularisation term can to some extent be derived if one adopts the Bayesian point of view: the regularisation term is directly related to the postulated prior distribution of the exact solution, so it reflects which space we believe the exact solution to be in.

The main work in applying Tikhonov regularisation and the convergence (rates) theorems lies in verifying the conditions on F. This restricts to some extent the choice of the space X: the forward mapping F has to be continuous and weakly closed on X. Moreover, if we want to establish convergence rates, we additionally have to show that F is differentiable and that F' satisfies the Lipschitz-type continuity of the second condition.

The results so far hold in the Hilbert space case: both X and Y are assumed to be Hilbert spaces. Recently, this theory has been extended to the more involved Banach space case. The main reason why Banach spaces are needed is to cover more general regularisation terms. For some problems, for instance those involving measures [12] or sparse solutions [30], it is convenient not to use Hilbert space norms
but general convex functionals as regularisation terms. We briefly outline some theoretical results in this field. For instance, in [38] the following functional was considered:
$$J(x) = \|F(x) - y^\delta\|_Y^p + \alpha R(x), \qquad (3.6)$$
where $1 \le p < \infty$, F is a mapping between Banach spaces X and Y, and R is a proper convex functional on X. The general theory is based on the following assumptions:

• X and Y are Banach spaces and there are topologies $\tau_X$ and $\tau_Y$, respectively, which are weaker than the norm topologies.

• $\|\cdot\|_Y$ is sequentially lower semicontinuous with respect to the topology $\tau_Y$.

• F is continuous with respect to the topologies $\tau_X$ and $\tau_Y$.

• $R : X \to \mathbb{R} \cup \{\infty\}$ is proper convex and $\tau_X$-lower semicontinuous.

• $D(F)$ is closed with respect to the $\tau_X$-topology and $D(F) \cap D(R) \ne \emptyset$.

• For any $\alpha > 0$ and $M > 0$ the level sets $\{x : J(x) \le M\}$ are sequentially compact with respect to $\tau_X$.

These assumptions are satisfied in the Hilbertian case if F satisfies the standard assumptions and one chooses $\tau_X$ and $\tau_Y$ as the weak topologies. Typically, also in the Banach space case, $\tau_X$ and $\tau_Y$ are taken to be weak or weak* topologies. Under these assumptions, a minimiser exists, the minimiser is stable (in the $\tau_X$-topology) with respect to the data noise, and as $\alpha \to 0$ appropriately the minimiser converges to the R-minimal solution (again in the $\tau_X$-topology). So all the results of Theorems 3.1, 3.2 and 3.3 carry over to the Banach space case. Convergence rates (in the Bregman distance) can be shown as well. The mentioned results are proven in [38]; further extensions of this theory can be found in [30, 52, 33, 55]. As before, the theorems yield convergence and convergence rates for a specific problem once the assumptions are verified for the specific operator F, the specific choice of Banach spaces, and the regularisation functionals.
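When R is the sparsity-promoting ℓ¹ norm and F is linear, the minimiser of a functional of the form (3.6) (with p = 2) can be computed by iterative soft-thresholding (ISTA). A minimal sketch follows; the operator, sparsity pattern and parameter values are our own illustrative choices, not taken from the cited works:

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(F, y_delta, alpha, n_iter=2000):
    """Minimise ||F x - y||^2 + alpha ||x||_1 by iterative soft-thresholding."""
    L = 2.0 * np.linalg.norm(F, 2) ** 2      # Lipschitz constant of the data-term gradient
    x = np.zeros(F.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * F.T @ (F @ x - y_delta)
        x = soft_threshold(x - grad / L, alpha / L)
    return x

rng = np.random.default_rng(1)
F = rng.standard_normal((60, 100))           # underdetermined: the l1 penalty does the work
x_true = np.zeros(100)
x_true[[5, 37, 81]] = [1.5, -2.0, 1.0]       # sparse 'exact' solution
y_delta = F @ x_true + 0.01 * rng.standard_normal(60)
x_rec = ista(F, y_delta, alpha=0.5)
print(np.flatnonzero(np.abs(x_rec) > 0.1))   # support of the sparse reconstruction
```

With a moderate α the reconstruction is sparse and its support typically matches the three true components; the exact values depend on the random operator and the noise realisation.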

4 Calibration of option price models via regularisation

In this section we give a survey on the use of regularisation techniques for calibration problems in mathematical finance. We concentrate on the calibration of option price models. The identification of an unknown local volatility in the Dupire model and of model parameters in jump diffusion models has been studied in the literature. In the following sections, we summarise the main ideas of the Dupire, the Lévy and the local Lévy option price models and review how regularisation has been utilised for solving the corresponding calibration problems. Using the Dupire model as an example, we demonstrate how the general theory of Tikhonov regularisation can be employed in calibration problems, emphasising the steps needed for the theoretical results. Even though more sophisticated asset price models than the geometric Brownian motion are in use in practice, the Dupire model serves as a benchmark for the theoretical and numerical analysis of calibration problems in financial mathematics.


4.1 Dupire model

An option is a contract that gives the owner the right to buy or to sell a specified amount of a particular underlying asset at a fixed price (the strike price) within a fixed period of time (before or at the maturity date). In the Dupire model the dynamics of the price of the underlying asset are described by a geometric Brownian motion, i.e., the price $S_t$ is a stochastic process defined by the stochastic differential equation
$$dS_t = S_t \big( \mu\, dt + \sigma(t, S_t)\, dW_t \big).$$
The price $C(t, S)$ of a European call option with strike price K and maturity T then satisfies the Black–Scholes partial differential equation
$$\frac{\partial C}{\partial t} + \frac{1}{2}\sigma^2(t, S) S^2 \frac{\partial^2 C}{\partial S^2} + rS \frac{\partial C}{\partial S} - rC = 0, \qquad 0 \le t < T,\ S > 0, \qquad (4.1)$$
with the terminal condition $C(T, S) = (S - K)^+$ for $S > 0$, where r is the constant interest rate on a riskless investment. If the volatility σ is a constant, equation (4.1) admits an analytic solution (the famous Black–Scholes formula [6]). As a function of K and T, the price of a European call option satisfies the Dupire equation [19]
$$\frac{\partial C}{\partial T} = \frac{1}{2}\sigma^2(T, K) K^2 \frac{\partial^2 C}{\partial K^2} - rK \frac{\partial C}{\partial K}, \qquad T > t,\ K > 0, \qquad (4.2)$$
$$C(t, S; t, K) = (S - K)^+, \qquad K > 0,$$
$$C(t, S; T, 0) = S, \qquad T > t,$$
where S is the spot price of the underlying asset at the time t. The option pricing problem is to determine the price of a European call option when the local volatility σ is a known function, by solving equations (4.1) and/or (4.2). The calibration problem of the Dupire option price model is to identify the local volatility function σ(t, S) such that the theoretical prices C(t, S; T, K) given by (4.1) and (4.2) coincide with the observed prices C*(T, K) of the European call options for all (given) strike prices K and maturities T. As a parameter identification problem, the calibration problem is an inverse problem; the option pricing problem is the corresponding direct problem. The possible ill-posedness of the calibration problem is an essential issue. Since the local volatility is a function and hence usually an infinite-dimensional object, the data required for the unique solvability of the calibration problem have to be continuous in the strike price K and/or the maturity T (see, e.g., [8, 9, 10] for uniqueness results for time-independent volatilities, i.e., σ(t, S) = σ(S)). In practice, data are always
discrete both in strike prices and maturities. For each underlying asset the prices of European call options are given only for a few strike prices and maturities. In addition, the prices of European call options are not known accurately; the bid-ask spread can be seen as noise in the data. To ensure existence and uniqueness, the solution of the calibration problem needs to be defined in the least squares sense. The main source of ill-posedness for an inverse problem is that the forward mapping F from the parameter to the data does not have a continuous inverse. The ill-posedness of the calibration problem has been studied in the literature. The existing ill-posedness results can be split into the cases where the unknown local volatility is assumed either to be space-independent [34, 39], i.e., σ(t, S) = σ(t), time-independent [32], i.e., σ(t, S) = σ(S), or dependent on both variables [16, 20, 21]. In these theoretical results, prices of the European call options are supposed to be known either for a fixed strike price K but all maturities T, i.e., C*(T, K) = C*(T), for a fixed maturity T but all strike prices K, i.e., C*(T, K) = C*(K), or for all strike prices K and maturities T, respectively. In the space-independent case, the Black–Scholes equation (4.1) has a solution in closed form; hence the exact definition of the forward mapping F can be given. In the other two cases, the parameter-to-solution mapping is defined via the Dupire equation (4.2). In the references above, the parameter and the data spaces were selected to be suitable function spaces, either Hilbert or Banach spaces. Mostly, the ill-posedness was proven by showing that the forward mapping is a compact operator; as was mentioned in Section 2, a compact operator cannot have a continuous inverse. Due to the ill-posedness, a stable way of solving the calibration problem is to utilise some regularisation method. Here, we concentrate on Tikhonov regularisation.
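As a concrete illustration of the direct problem, the Dupire equation (4.2) can be solved for a given local volatility with a simple explicit finite-difference scheme marching in maturity. This is only a sketch with hypothetical parameter values (a flat 20% volatility, r = 5%), not one of the discretisations used in the cited works:

```python
import numpy as np

def dupire_call_prices(sigma, S0, r, T, K_max, n_K=120, n_T=800):
    """Explicit finite-difference march of the Dupire equation
    dC/dT = 0.5*sigma(T,K)^2*K^2*C_KK - r*K*C_K  (spot date t = 0)."""
    K = np.linspace(0.0, K_max, n_K + 1)
    dK = K[1] - K[0]
    dT = T / n_T                      # small enough for explicit-scheme stability here
    C = np.maximum(S0 - K, 0.0)       # initial condition C(0, K) = (S0 - K)^+
    for step in range(n_T):
        sig = sigma(step * dT, K[1:-1])
        C_KK = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK**2
        C_K = (C[2:] - C[:-2]) / (2.0 * dK)
        C[1:-1] += dT * (0.5 * sig**2 * K[1:-1]**2 * C_KK - r * K[1:-1] * C_K)
        C[0] = S0                     # boundary condition C(T, 0) = S
        C[-1] = 0.0                   # far out-of-the-money strikes are worthless
    return K, C

# hypothetical flat local volatility of 20%, spot 100, rate 5%
K, C = dupire_call_prices(lambda T, K: 0.2, S0=100.0, r=0.05, T=1.0, K_max=300.0)
atm = float(np.interp(100.0, K, C))
print(atm)   # near the Black-Scholes value for these parameters
```

One march over maturity yields today's call prices for all strikes and maturities simultaneously, which is exactly why the Dupire formulation is convenient as the forward map inside a calibration loop.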
Since the calibration problem is a nonlinear inverse problem, the theory of Tikhonov regularisation summarised in Section 3.1 can be applied to it. In the Hilbert space setting, if the forward mapping F fulfils the general assumptions given in Section 3.1 for appropriate parameter and data spaces, the minimisation of the Tikhonov functional (3.4) is a well-posed problem by Theorems 3.1 and 3.2. Furthermore, the minimiser of the Tikhonov functional converges to the minimum norm least squares solution of the calibration problem as the noise level tends to zero, according to Theorem 3.3. A convergence rate result for the Tikhonov regularised solution is obtained by showing that the forward mapping F satisfies the assumptions of Theorem 3.4. Hence the main theoretical task in applying Tikhonov regularisation to the calibration problem is to study the properties of the parameter-to-solution mapping F. The parameter and the data spaces need to be chosen in such a way that the forward mapping fulfils the assumptions of Section 3.1. In a more general setting, e.g. in Banach spaces, the penalty functional R also has to be taken into account (see (3.6)). Theoretical results concerning Tikhonov regularisation and the calibration problem, including convergence and convergence rate results, have been published in the literature. Space-independent [34, 39], time-independent [20, 32, 44, 45], and general local volatilities [16, 20, 21] with corresponding continuous data have been considered. In [16] the convergence of the Tikhonov regularised solution, with rates, was also examined for discrete data and general local volatilities. In the references above, the penalty term in Tikhonov regularisation was mainly given by a norm in a Hilbert or a Banach space (mostly Sobolev spaces were used), but in [39] the maximum entropy regularisation (see, e.g., [26, 27]) was considered. In the space-independent case, the least squares
minimisation problem was regularised by the functional
$$E(a, \bar{a}) = \int_0^T \left( a(t) \ln \frac{a(t)}{\bar{a}(t)} + \bar{a}(t) - a(t) \right) dt,$$
called the cross entropy of a relative to the prior $\bar{a}$, where $a(t) = \sigma^2(t)$ for all $0 \le t \le T$ and $\bar{a} \in L^1(0, T)$ is such that $\bar{a}(t) \ge c > 0$ for almost all $0 \le t \le T$.

In numerical implementations of Tikhonov regularisation for the calibration problem, the unknown local volatility has to be represented by finitely many degrees of freedom. One possibility is to assume that the local volatility is described by a finite number of parameters: in [43] the local volatility was supposed to be a space-time spline with a finite number of nodal points. A less restrictive way is to discretise the local volatility on a suitable grid and to take as unknowns the values of the local volatility at the grid points [7, 17, 20, 21, 34, 39, 51]. In practice the data are discrete; therefore both the least squares and the penalty terms in the Tikhonov functional have to be replaced by discrete versions. The minimisation of the Tikhonov functional requires the evaluation of the forward mapping. For space-independent local volatilities, the parameter-to-solution mapping is known in closed form and hence the theoretical option prices can be computed in a straightforward manner by discretisation [34, 39]. In the other two cases, the Black–Scholes or the Dupire equations need to be solved numerically, e.g., by finite difference and/or finite element methods [20, 21, 51] or by a trinomial tree discretisation [17]. For minimising the Tikhonov functional, gradient based minimisation methods have been employed, e.g., steepest descent or quasi-Newton techniques (see the references above). Numerical tests have mainly been done with simulated data, but real data were used in [7, 17, 51]. For obtaining a regularised solution close to the true solution, the regularisation parameter has to be selected properly.
In the references above, known parameter choice rules, either a-posteriori or noise-level-free rules, were utilised to choose the regularisation parameter. We want to point out that, in the Dupire framework, calibration problems where the given data are prices of more complicated options than European call options have also been treated using regularisation theory. In [17, 40] the calibration from American call options, i.e., options giving the right to buy an asset at a strike price at any time up to a maturity date, was studied as a Tikhonov regularised problem, mainly from the computational point of view. Note that there are a few earlier review papers on the calibration problem of the Dupire option price model: in [9] the main emphasis was on the theoretical study of the corresponding inverse problem, whereas in [61] the focus was on the use of the regularisation theory.
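One such a-posteriori rule, the Morozov discrepancy principle mentioned above, picks the regularisation parameter for which the residual matches the noise level. A minimal sketch on a linear toy problem (operator and parameter values are our own choices, not from the cited works):

```python
import numpy as np

def discrepancy_alpha(F, y_delta, delta, tau=1.1):
    """Morozov's discrepancy principle: shrink alpha geometrically until the
    Tikhonov minimiser's residual drops below tau * delta."""
    n = F.shape[1]
    alpha = 1.0
    for _ in range(60):
        x_alpha = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y_delta)
        if np.linalg.norm(F @ x_alpha - y_delta) <= tau * delta:
            break
        alpha *= 0.5
    return alpha, x_alpha

rng = np.random.default_rng(2)
n = 80
F = np.tril(np.ones((n, n))) / n                 # discretised integration operator
x_true = np.cos(np.linspace(0.0, np.pi, n))
delta = 1e-3
noise = rng.standard_normal(n)
y_delta = F @ x_true + delta * noise / np.linalg.norm(noise)
alpha, x_alpha = discrepancy_alpha(F, y_delta, delta)
print(alpha)   # the selected regularisation parameter
```

The rule deliberately stops fitting once the residual is of the order of the noise: forcing the residual below δ would mean fitting the noise itself.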

4.2 Lévy model

One of the disadvantages of the Dupire option price model is that the model does not allow jumps in the price of the underlying asset. A generalisation of the Dupire model is the jump diffusion model. The dynamics of the underlying asset are modelled, under a risk-neutral measure Q, as an exponential of a Lévy process:
$$S_t = e^{rt} e^{X_t},$$


where r > 0 is the interest rate. The process $X_t$ is a Lévy process with characteristic triplet $(\sigma, \gamma, \nu)$, where $\sigma > 0$ is called the volatility, $\gamma \in \mathbb{R}$ the drift, and the Lévy measure ν is a positive measure on $\mathbb{R}$ verifying
$$\nu(\{0\}) = 0 \quad \text{and} \quad \int_{-\infty}^{\infty} \min(1, x^2)\, \nu(dx) < \infty.$$
The Lévy measure ν gives the expected number of jumps of the process $X_t$ per time unit. Since Q is a risk-neutral probability measure, $e^{X_t}$ is a martingale and hence the drift is uniquely determined by the volatility and the Lévy measure:
$$\gamma = -\frac{\sigma^2}{2} - \int_{-\infty}^{\infty} \big( e^y - 1 - y\,\mathbf{1}_{|y| \le 1} \big)\, \nu(dy).$$
The price of a European call option with strike price K and maturity T fulfils the partial integro-differential equation [12]
$$\frac{\partial C}{\partial t}(t,S) + rS\,\frac{\partial C}{\partial S}(t,S) + \frac{\sigma^2}{2} S^2\,\frac{\partial^2 C}{\partial S^2}(t,S) - rC(t,S) + \int_{-\infty}^{\infty} \Big( C(t, Se^y) - C(t,S) - S(e^y - 1)\frac{\partial C}{\partial S}(t,S) \Big)\, \nu(dy) = 0 \qquad (4.3)$$
for $0 \le t < T$ and $S > 0$, with the terminal condition
$$C(T, S) = (S - K)^+, \qquad S > 0.$$

Equation (4.3) consists of the Black–Scholes PDE (cf. (4.1)) and an integral term related to the Lévy measure. Note that the volatility σ is a constant, as in the Black–Scholes model. The calibration problem for the Lévy model is to identify the parameters (σ, ν) such that the theoretical option prices given, e.g., by (4.3) coincide with the observed option prices. Different sources of the ill-posedness of the calibration problem were pointed out and illustrated by examples in [14]. Due to the ill-posedness, regularisation is required for solving the calibration problem. In [12, 13, 14, 15] the weighted least squares problem for option prices was regularised by the relative entropy of the risk-neutral measure Q with respect to a prior measure Q0. For risk-neutral exponential Lévy models, the relative entropy is given by
$$H(\nu) = \frac{T}{2\sigma^2} \left( \int_{-\infty}^{\infty} (e^x - 1)\, (\nu - \nu_0)(dx) \right)^{\!2} + T \int_{-\infty}^{\infty} \left( \frac{d\nu}{d\nu_0} \ln \frac{d\nu}{d\nu_0} + 1 - \frac{d\nu}{d\nu_0} \right) \nu_0(dx),$$
where σ is the common volatility of both risk-neutral measures and $\nu_0$ is the Lévy measure related to the prior Q0. Since the prior Q0 fixes the volatility, only the Lévy measure ν needs to be calibrated to the given option prices. Possible choices for an appropriate prior measure were discussed in [12, 13]. The convergence of the
relative entropy regularised solution as the noise in the data tends to zero was examined in [13] in the case where the Lévy measure is a finite sum of point measures and in [14] for general Lévy measures. Convergence rate results do not exist in the literature. The numerical implementation of the relative entropy regularisation method was presented in [12, 13, 15]. To discretise the problem, the unknown Lévy measure was modelled by a finite sum of point measures. The forward mapping can be defined via the characteristic function of the Lévy process and the Fourier transform, not only by equation (4.3); hence the option prices needed in the Tikhonov functional can be calculated by the fast Fourier transform (FFT). The choice of appropriate weights in the least squares functional was discussed in those papers. A suitable regularisation parameter was determined by the Morozov discrepancy principle, and the corresponding Tikhonov functional was then minimised by a gradient descent method. Both simulated and real data were used in the numerical tests. Calibration of the Lévy model was also studied in [3], where the calibration problem was regularised in the spectral domain by cutting off high frequencies. The observation noise was assumed to be stochastic instead of the deterministic bid-ask spread. Exact minimax rates of convergence were obtained, and it was shown that the proposed spectral estimators are rate optimal. The method was numerically tested with simulated data.
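As a concrete instance of such a forward map, Merton's jump-diffusion model (an exponential Lévy model with Gaussian jump sizes) even admits a closed-form price as a Poisson-weighted mixture of Black–Scholes prices; a calibration scheme would wrap a regularised least squares fit around a map like this. The parameter values below are hypothetical:

```python
from math import exp, log, sqrt, erf, factorial

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def merton_call(S, K, T, r, sigma, lam, mu_j, sig_j, n_terms=40):
    """European call in Merton's jump-diffusion model: a Poisson-weighted
    mixture of Black-Scholes prices (lognormal jump sizes)."""
    m = exp(mu_j + 0.5 * sig_j**2) - 1.0        # mean relative jump size
    lam_p = lam * (1.0 + m)
    price = 0.0
    for n in range(n_terms):
        sigma_n = sqrt(sigma**2 + n * sig_j**2 / T)
        r_n = r - lam * m + n * (mu_j + 0.5 * sig_j**2) / T
        weight = exp(-lam_p * T) * (lam_p * T)**n / factorial(n)
        price += weight * bs_call(S, K, T, r_n, sigma_n)
    return price

p0 = merton_call(100.0, 100.0, 1.0, 0.05, 0.2, lam=0.0, mu_j=-0.1, sig_j=0.15)
p1 = merton_call(100.0, 100.0, 1.0, 0.05, 0.2, lam=0.5, mu_j=-0.1, sig_j=0.15)
print(p0, p1)   # lam = 0 recovers the plain Black-Scholes price
```

With jump intensity zero the series collapses to the single Black–Scholes term; switching the jumps on raises the at-the-money price, reflecting the additional risk-neutral variance.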

4.3 Local Lévy model

Even though in the Lévy model the price of the underlying asset can have jumps, neither the volatility nor the Lévy measure can vary over time or with the price of the asset. The Lévy model can be generalised by the local Lévy model. Let the asset price $S_t$ have the risk-free dynamics
$$S_t = S_0 + \int_0^t r S_{s-}\, ds + \int_0^t \sigma(s, S_{s-}) S_{s-}\, dW_s + \int_0^t \int_{\mathbb{R}} S_{s-} (e^x - 1)\, \big( m_{(S_{s-}, s)}(dx, ds) - \mu_{(S_{s-}, s)}(dx, ds) \big),$$
where r is the riskless interest rate, σ is the local volatility function, $W_t$ is a standard Wiener process, m is an integer-valued random measure associated with the jumps of $S_t$, independent of $W_t$, and µ is the compensator of m. The process $S_t$ can equivalently be represented as an exponential of an inhomogeneous Markov process $X_t$ of Lévy type, i.e., $S_t = S_0 e^{rt} e^{X_t}$, where $e^{X_t}$ is a martingale. We assume that the compensator µ has the form
$$\mu_{(S, t)}(dx, dt) = a(t, S)\, \nu(dx)\, dt,$$
where a(t, S) is the local speed function and ν is a Radon measure satisfying
$$\nu(\{0\}) = 0 \quad \text{and} \quad \int_{-\infty}^{\infty} \frac{x^2}{1 + x^2}\, \nu(dx) < \infty.$$


This assumption means that the distribution of jumps remains unchanged over time while the arrival rate varies with the time and the asset price. Note that ν is a Lévy measure. The price C(T, K) of a European call option as a function of the strike price K and the maturity T fulfils the partial integro-differential equation [11]
$$\frac{\partial C}{\partial T}(T, K) = \frac{1}{2}\sigma^2(T, K) K^2 \frac{\partial^2 C}{\partial K^2}(T, K) - rK \frac{\partial C}{\partial K}(T, K) + \int_0^\infty Y \frac{\partial^2 C}{\partial K^2}(T, Y)\, a(T, Y)\, \psi\!\left( \log \frac{K}{Y} \right) dY \qquad (4.4)$$
for all K > 0 and T > 0, where
$$\psi(z) = \begin{cases} \displaystyle\int_{-\infty}^{z} (e^z - e^x)\, \nu(dx) & \text{for } z < 0, \\[2mm] \displaystyle\int_{z}^{\infty} (e^x - e^z)\, \nu(dx) & \text{for } z > 0, \end{cases}$$
with the initial value
$$C(0, K) = (S_0 - K)^+ \quad \text{for all } K > 0$$
and the boundary condition
$$C(T, 0) = S_0 \quad \text{for all } T > 0.$$

Note that the PDE part of equation (4.4) is equal to the Dupire equation (4.2) and the integral term depends only on the parameters a and ν . The calibration problem for the local L´evy model is to identify the parameters (σ, a, ν) such that the theoretical option prices given by (4.4) coincide with the observed option prices. In [49] it was assumed that the only unknown parameter is the local speed function a. The parameter-to-solution mapping was defined by a PIDE in logarithmic variables related to equation (4.4). The ill-posedness of the calibration problem was shown and the source of the ill-posedness was discussed. It was pointed out that the calibration problem for the local L´evy model is more ill-posed than that for the Dupire model. The local speed function was calibrated by Tikhonov regularisation using a Sobolev penalty term. The convergence and convergence rate results for the Tikhonov regularised solution were obtained. In the numerical implementation the forward mapping was discretised by finite differences and the integral term by a midpoint rule. In the selection of the regularisation parameter the discrepancy principle was used. The Tikhonov functional was minimised by a Gauss–Newton method. Numerical tests were done for both simulated and real data. In [48] the complete calibration problem was studied. Like in [49], the forward mapping was defined by a PIDE in logarithmic variables. In Tikhonov regularisation, Sobolev space penalty functionals were used for the local volatility and the local speed function whereas for the L´evy measure several choices of a regularising functional were proposed. The convergence of the Tikhonov regularised solution was shown and a possible source condition for the speed of convergence was mentioned.
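To make the integral term of (4.4) concrete, the kernel ψ can be tabulated numerically once a candidate Lévy measure is fixed. The sketch below uses a hypothetical double-exponential (Kou-type) density for ν, chosen by us purely for illustration, and a simple trapezoidal rule:

```python
import numpy as np

def levy_density(x, eta_minus=4.0, eta_plus=3.0, p=0.5):
    """Hypothetical double-exponential Levy density for nu.
    eta_plus > 1 so that exp(x) is integrable against nu on (0, inf)."""
    return np.where(x < 0.0,
                    (1.0 - p) * eta_minus * np.exp(eta_minus * x),
                    p * eta_plus * np.exp(-eta_plus * x))

def psi(z, cutoff=40.0, n=100_001):
    """Kernel of the integral term in the local Levy PIDE, by the trapezoidal rule."""
    if z < 0.0:
        x = np.linspace(-cutoff, z, n)
        f = (np.exp(z) - np.exp(x)) * levy_density(x)
    else:
        x = np.linspace(z, cutoff, n)
        f = (np.exp(x) - np.exp(z)) * levy_density(x)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

print(psi(-0.5), psi(0.5))   # both values are nonnegative by construction
```

In a finite-difference solver for (4.4), ψ would be evaluated once on the log-strike grid and reused at every maturity step, since it does not depend on T or on the local speed function a.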


4.4 Further remarks

Another generalisation of the Dupire model is given by stochastic volatility models, where the local volatility in the Dupire framework is modelled by a stochastic process rather than as a function of the time and the asset price as in Section 4.1. In [1, 58] the stochastic volatility was calibrated by minimising the relative entropy functional constrained by the observed prices of European call options. In these papers the calibration problem was viewed as a stochastic control problem, which is closely related to the regularisation point of view. By a result of Hull and White [41], the price of a European call option with a stochastic volatility can be given by the expectation of Black–Scholes option prices over the distribution of the quadratic variation of the stochastic volatility. In [29] the quadratic variation was calibrated using Tikhonov regularisation.

In addition to the regularisation theory, the Bayesian approach to inverse problems can be used to solve ill-posed problems in a stable way; for a comprehensive introduction to the topic see [47]. The theory of Bayesian inversion is not fully developed, especially in infinite-dimensional spaces, but some convergence results similar to the ones in the regularisation theory have recently been published in [36, 37, 56]. In the financial mathematics context, the Bayesian inversion theory was applied to calibrate the quadratic variation of the stochastic volatility in [46].

Robust calibration using Tikhonov regularisation is not restricted to option price models. In fact, Tikhonov regularisation has been employed for other calibration problems in mathematical finance as well; for applications to the calibration of interest rate models, see, e.g., [5, 18, 22]. The main message of this article is that regularisation theory is a general technique which is applicable to a wide range of parameter identification and calibration problems.

Bibliography

[1] M. Avellaneda, C. Friedman, R. Holmes, and D. Samperi, Calibrating volatility surfaces via relative-entropy minimization, Appl. Math. Finance 4 (1997), pp. 37–64.

[2] A. Bakushinskii, Remarks on choosing a regularization parameter using the quasioptimality and ratio criterion, Comput. Math. Math. Phys. 24 (1984), pp. 181–182.

[3] D. Belomestny and M. Reiss, Spectral calibration of exponential Lévy models, Finance Stoch. 10 (2006), pp. 449–474.

[4] A. Binder, H. W. Engl, C. W. Groetsch, A. Neubauer, and O. Scherzer, Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization, Appl. Anal. 55 (1994), pp. 215–234.

[5] A. Binder, H. W. Engl, and A. Schatz, Advanced Numerical Techniques for Financial Engineering, Derivatives Week XII (2003), pp. 6–7.

[6] F. Black and M. Scholes, The pricing of options and corporate liabilities, J. Polit. Econ. 81 (1973), pp. 637–654.

[7] J. N. Bodurtha and M. Jermakyan, Non-parametric estimation of an implied volatility surface, J. Comput. Finance 2 (1999), pp. 29–60.


[8] I. Bouchouev and V. Isakov, The inverse problem of option pricing, Inverse Problems 13 (1997), pp. L11–L17.

[9] I. Bouchouev and V. Isakov, Uniqueness, stability and numerical methods for inverse problems that arise in financial markets, Inverse Problems 15 (1999), pp. R95–R116.

[10] I. Bouchouev, V. Isakov, and N. Valdivia, Recovery of volatility coefficient by linearization, Quant. Finance 2 (2002), pp. 257–263.

[11] P. Carr, H. Geman, D. B. Madan, and M. Yor, From local volatility to local Lévy models, Quant. Finance 4 (2004), pp. 581–588.

[12] R. Cont and P. Tankov, Financial Modelling With Jump Processes, Chapman & Hall/CRC, Boca Raton, U. S. A., 2004.

[13] R. Cont and P. Tankov, Non-parametric calibration of jump-diffusion option pricing models, J. Comput. Finance 7 (2004), pp. 1–49.

[14] R. Cont and P. Tankov, Retrieving Lévy processes from option prices: regularization of an ill-posed inverse problem, SIAM J. Control Optim. 45 (2006), pp. 1–25.

[15] R. Cont, P. Tankov, and E. Voltchkova, Option pricing models with jumps: integro-differential equations and inverse problems, European Congress on Computational Methods in Applied Sciences and Engineering (P. Neittaanmäki, T. Rossi, S. Korotov, E. Onate, J. Périaux, and D. Knörzer, eds.), ECCOMAS, 2004.

[16] S. Crépey, Calibration of the local volatility in a generalized Black–Scholes model using Tikhonov regularization, SIAM J. Math. Anal. 34 (2003), pp. 1183–1206.

[17] S. Crépey, Calibration of the local volatility in a trinomial tree using Tikhonov regularization, Inverse Problems 19 (2003), pp. 91–127.

[18] A. d'Aspremont, Interest Rate Model Calibration Using Semidefinite Programming, Appl. Math. Finance 3 (2003), pp. 183–213.

[19] B. Dupire, Pricing with a smile, RISK 7 (1994), pp. 18–20.

[20] H. Egger and H. W. Engl, Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates, Inverse Problems 21 (2005), pp. 1027–1045.

[21] H. Egger, T. Hein, and B. Hofmann, On decoupling of volatility smile and term structure in inverse option pricing, Inverse Problems 22 (2006), pp. 1247–1259.

[22] H. W. Engl, Calibration problems – an inverse problems view, WILMOTT magazine (July 2007), pp. 16–20.

[23] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, the Netherlands, 1996.

[24] H. W. Engl, A. Hofinger, and S. Kindermann, Convergence rates in the Prokhorov metric for assessing uncertainty in ill-posed problems, Inverse Problems 21 (2005), pp. 399–412.

[25] H. W. Engl, K. Kunisch, and A. Neubauer, Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems, Inverse Problems 5 (1989), pp. 523–540.

[26] H. W. Engl and G. Landl, Convergence rates for maximum entropy regularization, SIAM J. Numer. Anal. 30 (1993), pp. 1509–1536.

[27] H. W. Engl and G. Landl, Maximum entropy regularization of nonlinear ill-posed problems, Proceedings of the First World Congress of Nonlinear Analysis (Berlin, Germany) (V. Lakshmikantham, ed.), vol. I, de Gruyter, 1996, pp. 513–525.

[28] T. Felici and H. W. Engl, On shape optimization of optical waveguides using inverse problem techniques, Inverse Problems 17 (2001), pp. 1141–1162.


[29] P. Friz and J. Gatheral, Valuation of volatility derivatives as an inverse problem, Quant. Finance 5 (2005), pp. 531–542.

[30] M. Grasmair, M. Haltmeier, and O. Scherzer, Sparse regularization with ℓq penalty term, Inverse Problems 24 (2008), p. 055020, (13 pp).

[31] J. Hadamard, Lectures on Cauchy's problem in linear partial differential equations, Yale University Press, New Haven, U. S. A., 1923.

[32] T. Hein, Some analysis of Tikhonov regularization for the inverse problems of option pricing in the price-dependent case, Z. Anal. Anwendungen 24 (2005), pp. 593–609.

[33] T. Hein, Tikhonov regularization in Banach spaces – improved convergence rates results, Inverse Problems 25 (2009), p. 035002, (18 pp).

[34] T. Hein and B. Hofmann, On the nature of ill-posedness of an inverse problem arising in option pricing, Inverse Problems 19 (2003), pp. 1319–1338.

[35] A. Hofinger, Ill-posed Problems: Extending the Deterministic Theory to a Stochastic Setup, Trauner Verlag, Linz, Austria, 2006, (Doctoral Thesis).

[36] A. Hofinger and H. K. Pikkarainen, Convergence rates for the Bayesian approach to linear inverse problems, Inverse Problems 23 (2007), pp. 2469–2484.

[37] A. Hofinger and H. K. Pikkarainen, Convergence rates for linear inverse problems in the presence of an additive normal noise, Stoch. Anal. Appl. 27 (2009), pp. 240–257.

[38] B. Hofmann, B. Kaltenbacher, C. Poeschl, and O. Scherzer, A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators, Inverse Problems 23 (2007), pp. 987–1010.

[39] B. Hofmann and R. Krämer, On maximum entropy regularization for a specific inverse problem of option pricing, J. Inverse Ill-Posed Probl. 13 (2005), pp. 41–63.

[40] J. Huang and J.-S. Pang, A mathematical programming with equilibrium constraints approach to the implied volatility surface of American options, J. Comput. Finance 4 (2000), pp. 21–56.

[41] J. Hull and A. White, The pricing of options with stochastic volatilities, J. Finance 42 (1987), pp. 281–300.

[42] V. Isakov, Inverse Problems for Partial Differential Equations, Springer-Verlag, New York, U. S. A., 2006.

[43] N. Jackson, E. Süli, and S. Howison, Computation of deterministic volatility surfaces, J. Comput. Finance 2 (1999), pp. 5–32.

[44] L. Jiang, Q. Chen, L. Wang, and J. E. Zhang, A new well-posed algorithm to recover implied local volatility, Quant. Finance 3 (2003), pp. 451–457.

[45] L. Jiang and Y. Tao, Identifying the volatility of underlying assets from option prices, Inverse Problems 17 (2001), pp. 137–155.

[46] R. Kaila, The Integrated Volatility Implied by Option Prices, A Bayesian Approach, TKK Mathematics, Espoo, Finland, 2008, (Doctoral Thesis).

[47] J. P. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer-Verlag, Berlin, Germany, 2005.

[48] S. Kindermann and P. Mayer, On the calibration of local jump-diffusion market models, (2008), submitted.

[49] S. Kindermann, P. Mayer, H. Albrecher, and H. W. Engl, Identification of the local speed function in a Lévy model for option pricing, J. Integral Equations Appl. 20 (2008), pp. 161–200.

244

S. Kindermann and H. K. Pikkarainen

[50] S. Kindermann and A. Neubauer, On the convergence of the quasi-optimality criterion for (iterated) Tikhonov regularization, Inverse Probl. Imaging 2 (2008), pp. 291–299. [51] R. Lagnado and S. Osher, A technique for calibrating derivative security pricing models: numerical solution of inverse problems, J. Comput. Finance 1 (1997), pp. 13–25. [52] D. A. Lorenz, Convergence rates and source conditions for Tikhonov regularization with sparsity constraints, J. Inverse Ill-Posed Probl. 16 (2008), pp. 463–478. [53] S. Lu, S. V. Pereverzev, and R. Ramlau, An analysis of Tikhonov regularization for nonlinear ill-posed problems under a general smoothness assumption, Inverse Problems 23 (2007), pp. 217–230. [54] P. Math´e and S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert spaces, Inverse Problems 19 (2003), pp. 789–803. [55] A. Neubauer, On enhanced convergence rates for Tikhonov regularization of nonlinear problems in Banach spaces, Inverse Problems (2009), p. 065009, (10 pp). [56] A. Neubauer and H. K. Pikkarainen, Convergence results for the Bayesian inversion theory, J. Inverse Ill-Posed Probl. 16 (2008), pp. 601–613. [57] S. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM J. Numer. Anal. 43 (2005), pp. 2060–2076. [58] D. Samperi, Calibrationg a diffusion pricing model with uncertain volatility: regularization and stability, Math. Finance 12 (2002), pp. 71–87. [59] E. Schock, Approximate solution of ill-posed equations: arbitrarily slow convergence vs. superconvergence, Constructive Methods for the Practical Treatment of Integral Equations (G. H¨ammerlin and K. H. Hoffmann, eds.), Birkh¨auser, Basel, Switzerland, 1985, pp. 234– 243. [60] A. N. Tikhonov and V. B. Glasko, An approximate solution of Fredholm integral equations of ˇ Vyˇcisl. Mat. i Mat. Fiz. 4 (1964), pp. 564–571. the first kind, Z. [61] J. P. 
Zubelli, Inverse problems in finance: A short survey of calibration techniques, Proceedings of the 2nd Brazilian Conference on Statistical Modelling in Insurance and Finance (Maresias, Brazil) (N. Kolev and P. Morettin, eds.), Institute of Mathematics and Statistics, University of S˜ao Paulo, 2005, pp. 64–75.

Author information Stefan Kindermann, Industrial Mathematics Institute, Johannes Kepler University Linz, Altenbergerstrasse 69, A-4040 Linz, Austria. Email: [email protected] Hanna K. Pikkarainen, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstrasse 69, A-4040 Linz, Austria. Email: [email protected]

Radon Series Comp. Appl. Math. 8, 245–273

© de Gruyter 2009

Optimal consumption and investment with bounded downside risk measures for logarithmic utility functions

Claudia Klüppelberg and Serguei Pergamenshchikov

Abstract. We investigate optimal consumption problems for a Black–Scholes market under uniform restrictions on Value-at-Risk and Expected Shortfall for logarithmic utility functions. We find the solutions in terms of a dynamic strategy in explicit form, which can be compared and interpreted. This paper continues our previous work, where we solved similar problems for power utility functions. Key words. Black–Scholes model, capital-at-risk, expected shortfall, logarithmic utility, optimal consumption, portfolio optimisation, utility maximisation, value-at-risk. AMS classification. primary: 91B70, 93E20, 49K30; secondary: 49L20, 49K45

1 Introduction

One of the principal questions in mathematical finance is the optimal investment/consumption problem for continuous-time market models. By applying results from stochastic control theory, explicit solutions have been obtained for some special cases (see e.g. Karatzas and Shreve [9], Korn [11] and references therein). With the rapid development of the derivatives markets, together with margin trading on certain financial products, the exposure to losses of investments into risky assets can be considerable. Without a careful analysis of the potential danger, an investment can have catastrophic consequences such as, for example, the recent crisis at Société Générale. To avoid such situations the Basel Committee on Banking Supervision suggested some measures for the assessment of market risks. It is widely accepted that Value-at-Risk (VaR) is a useful summary risk measure (see Jorion [7] or Dowd [4]). We recall that the VaR is the maximum expected loss over a given horizon period at a given confidence level. Alternatively, the Expected Shortfall (ES) or Tail Conditional Expectation (TCE) measures the expected loss given that the confidence level is violated. In order to satisfy the Basel Committee requirements, portfolios have to control the level of VaR or (the more restrictive) ES throughout the investment horizon. This leads to stochastic control problems under restrictions on such risk measures. Our goal in this paper is the optimal choice of a dynamic portfolio subject to a risk limit specified in terms of VaR or ES uniformly over the investment interval [0, T].

In Klüppelberg and Pergamenshchikov [10] we considered the optimal investment/consumption problem with uniform risk limits throughout the investment horizon for power utility functions. In that paper also some interpretation of VaR and ES, besides an account of the relevant literature, can be found. Our results in [10] have interesting interpretations. We have, for instance, shown that for power utility functions with exponents less than one, the optimal constrained strategies are riskless for sufficiently small risk bounds: they recommend consumption only. On the contrary, for the (utility bound of a) linear utility function the optimal constrained strategies recommend to invest everything into risky assets and consume nothing.

In this paper we investigate the optimal investment/consumption problem for logarithmic utility functions, again under constraints on uniform versions of VaR and ES over the whole investment interval [0, T]. Using optimisation methods in Hilbert functional spaces, we find all optimal solutions in explicit form. It turns out that the optimal constrained strategies are the unconstrained ones multiplied by some coefficient which is less than one and depends on the specific constraints. Consequently, we can make the main recommendation: to control the market risk throughout the investment interval [0, T], restrict the optimal unconstrained portfolio allocation by specific multipliers (given in explicit form in (3.6) for the VaR constraint and in (3.26) for the ES constraint).

Our paper is organised as follows. In Section 2 we formulate the problem. We define the Black–Scholes model for the price processes and present the wealth process in terms of an SDE. We define the cost function for the logarithmic utility function and present the admissible control processes. We also present the unconstrained consumption and investment problem of utility maximisation for logarithmic utility. In Section 3 we consider the constrained problems. Section 3.1 is devoted to a risk bound in terms of Value-at-Risk, whereas Section 3.2 discusses the consequences of a risk bound in terms of Expected Shortfall. Auxiliary results and proofs are postponed to Section 4. We start there with material needed for the proofs of both regimes, the Value-at-Risk and the ES risk bounds. In Section 4.1 all proofs of Section 3.1 can be found, and in Section 4.2 all proofs of Section 3.2. Some technical lemmas are postponed to the Appendix, again divided into two parts, for the Value-at-Risk regime and the ES regime.

Second author: This work was supported by the European Science Foundation through the AMaMeF programme.

2 Formulating the problem

2.1 The model and first results

We work in the same framework of self-financing portfolios as in Klüppelberg and Pergamenshchikov [10], where the financial market is of Black–Scholes type, consisting of one riskless bond and several risky stocks on the interval [0, T]. Their respective prices S_0 = (S_0(t))_{0≤t≤T} and S_i = (S_i(t))_{0≤t≤T} for i = 1, …, d evolve according to the equations

  dS_0(t) = r_t S_0(t) dt ,  S_0(0) = 1 ,
  dS_i(t) = S_i(t) μ_i(t) dt + S_i(t) ∑_{j=1}^d σ_ij(t) dW_j(t) ,  S_i(0) > 0 .   (2.1)

Here W_t = (W_1(t), …, W_d(t))′ is a standard d-dimensional Wiener process in R^d; r_t ∈ R is the riskless interest rate; μ_t = (μ_1(t), …, μ_d(t))′ is the vector of stock-appreciation rates and σ_t = (σ_ij(t))_{1≤i,j≤d} is the matrix of stock-volatilities. We assume that the coefficients (r_t)_{0≤t≤T}, (μ_t)_{0≤t≤T} and (σ_t)_{0≤t≤T} are deterministic cadlag functions. We also assume that the matrix σ_t is non-degenerate for all 0 ≤ t ≤ T. We denote by F_t = σ{W_s, s ≤ t}, t ≥ 0, the filtration generated by the Brownian motion (augmented by the null sets). Furthermore, | · | denotes the Euclidean norm for vectors and the corresponding matrix norm for matrices, and a prime denotes the transpose. For (y_t)_{0≤t≤T} square integrable over the fixed interval [0, T] we define ‖y‖_T = (∫_0^T |y_t|² dt)^{1/2}.

The portfolio process (π_t = (π_1(t), …, π_d(t))′)_{0≤t≤T} represents the fractions of the wealth process invested into the stocks. The consumption rate is denoted by (v_t)_{0≤t≤T}. Then (see [10] for details) the wealth process (X_t)_{0≤t≤T} is the solution to the SDE

  dX_t = X_t (r_t + y_t′ θ_t − v_t) dt + X_t y_t′ dW_t ,  X_0 = x > 0 ,   (2.2)

where

  θ_t = σ_t^{-1}(μ_t − r_t 1)  with  1 = (1, …, 1)′ ∈ R^d ,

and we assume that ∫_0^T |θ_t|² dt < ∞.

The control variables are y_t = σ_t′ π_t ∈ R^d and v_t ≥ 0. More precisely, we define the (F_t)_{0≤t≤T}-progressively measurable control process as ν = (y_t, v_t)_{t≥0}, which satisfies

  ∫_0^T |y_t|² dt < ∞  and  ∫_0^T v_t dt < ∞  a.s.   (2.3)

In this paper we consider logarithmic utility functions. Consequently, we assume throughout that

  ∫_0^T (ln v_t)⁻ dt < ∞  a.s.,   (2.4)

where a⁻ := − min(a, 0). To emphasise that the wealth process (2.2) corresponds to some control process ν we write X^ν. Now we describe the set of control processes.

Definition 2.1. A stochastic control process ν = (ν_t)_{0≤t≤T} = ((y_t, v_t))_{0≤t≤T} is called admissible if it is (F_t)_{0≤t≤T}-progressively measurable with values in R^d × R_+, satisfying the integrability conditions (2.3)–(2.4), and such that the SDE (2.2) has a unique strong a.s. positive continuous solution (X_t^ν)_{0≤t≤T} for which

  E [ ∫_0^T (ln(v_t X_t^ν))⁻ dt + (ln X_T^ν)⁻ ] < ∞ .

We denote by V the class of all admissible control processes.
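For intuition, the wealth dynamics (2.2) can be simulated directly. The sketch below is an illustrative assumption-laden special case, not part of the paper: it takes d = 1 and constant r, θ and constant control (y, v), in which case the SDE is geometric and has the exact solution X_t = x exp((r + yθ − v − y²/2)t + yW_t).

```python
import math
import random

def wealth_path(x0, r, theta, y, v, T=1.0, n=1000, seed=0):
    """Sample one path of the wealth SDE (2.2) for a constant scalar control
    (y_t, v_t) = (y, v) and constant r, theta (illustrative assumptions).
    The SDE is then geometric, with the exact solution
    X_t = x0 * exp((r + y*theta - v - y**2/2) * t + y * W_t)."""
    rng = random.Random(seed)
    dt = T / n
    drift = r + y * theta - v - 0.5 * y * y
    path, w = [x0], 0.0
    for k in range(1, n + 1):
        w += rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        path.append(x0 * math.exp(drift * k * dt + y * w))
    return path
```

Because the exponential form is sampled exactly, the paths are automatically positive, as admissibility requires; with y = 0 the path reduces to the deterministic bond-plus-consumption growth x e^{(r−v)t}.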

For ν ∈ V we define the cost function

  J(x, ν) := E_x [ ∫_0^T ln(v_t X_t^ν) dt + ln X_T^ν ] .   (2.5)

Here E_x is the expectation operator conditional on X_0^ν = x. We recall a well-known result, henceforth called the unconstrained problem:

  max_{ν∈V} J(x, ν) .   (2.6)

To formulate the solution we set

  ω(t) = T − t + 1  and  r̄_t = r_t + |θ_t|²/2 ,  0 ≤ t ≤ T .

Theorem 2.2 (Karatzas and Shreve [9], Example 6.6, p. 104). The optimal value of J(x, ν) is given by

  max_{ν∈V} J(x, ν) = J(x, ν*) = (T + 1) ln ( x/(T + 1) ) + ∫_0^T ω(t) r̄_t dt .

The optimal control process ν* = (y_t*, v_t*)_{0≤t≤T} ∈ V is of the form

  y_t* = θ_t  and  v_t* = 1/ω(t) ,   (2.7)

where the optimal wealth process (X_t*)_{0≤t≤T} is given as the solution to

  dX_t* = X_t* ( r_t + |θ_t|² − v_t* ) dt + X_t* θ_t′ dW_t ,  X_0* = x ,   (2.8)

which is

  X_t* = x · (T + 1 − t)/(T + 1) · exp ( ∫_0^t r̄_u du + ∫_0^t θ_u′ dW_u ) .

Note that the optimal solution (2.7) of problem (2.6) is deterministic; in the following we denote by U the set of deterministic functions ν = (y_t, v_t)_{0≤t≤T} satisfying conditions (2.3) and (2.4). By the above result we have

  max_{ν∈V} J(x, ν) = max_{ν∈U} J(x, ν) .

Intuitively, it is clear that to construct financial portfolios in the market model (2.1) the investor can invoke only the information given by the coefficients (r_t)_{0≤t≤T}, (μ_t)_{0≤t≤T} and (σ_t)_{0≤t≤T}, which are deterministic functions. Then for ν ∈ U, by Itô's formula, equation (2.2) has the solution

  X_t^ν = x E_t(y) e^{R_t − V_t + (y,θ)_t} ,

with R_t = ∫_0^t r_u du, V_t = ∫_0^t v_u du, (y, θ)_t = ∫_0^t y_u′ θ_u du and the stochastic exponential

  E_t(y) = exp ( ∫_0^t y_u′ dW_u − (1/2) ∫_0^t |y_u|² du ) .

Therefore, for ν ∈ U the process (X_t^ν)_{0≤t≤T} is positive, continuous and satisfies

  sup_{0≤t≤T} E | ln X_t^ν | < ∞ .

This implies that U ⊂ V. Moreover, for ν ∈ U we can calculate the cost function (2.5) explicitly as

  J(x, ν) = (T + 1) ln x + ∫_0^T ω(t) ( r_t + y_t′ θ_t − (1/2)|y_t|² ) dt + ∫_0^T (ln v_t − V_t) dt − V_T .   (2.9)
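Since (2.9) is deterministic for ν ∈ U, Theorem 2.2 can be sanity-checked numerically: evaluating (2.9) at the candidate optimum y_t = θ_t, v_t = 1/ω(t) should dominate perturbed controls. The sketch below assumes scalar constant r and θ (illustrative choices, not from the paper) and uses a simple Riemann sum.

```python
import math

def cost_J(x, r, theta, y, v, T=1.0, n=4000):
    """Evaluate the deterministic cost (2.9) by a Riemann sum for scalar
    controls given as callables y(t), v(t), with constant r and theta
    (illustrative assumptions)."""
    h = T / n
    total = (T + 1.0) * math.log(x)
    V = 0.0  # running consumption integral V_t
    for k in range(n):
        t = (k + 0.5) * h
        om = T - t + 1.0          # omega(t)
        yt, vt = y(t), v(t)
        total += om * (r + yt * theta - 0.5 * yt * yt) * h
        total += (math.log(vt) - V) * h
        V += vt * h
    return total - V              # subtract V_T

T, r, theta = 1.0, 0.03, 0.5
opt = cost_J(1.0, r, theta, lambda t: theta, lambda t: 1.0 / (T - t + 1.0), T)
```

Comparing `opt` against scaled-down investment (0.8θ) or constant consumption rates shows the unconstrained optimum (2.7) on top, up to discretisation error.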

3 Optimisation with constraints: main results

3.1 Value-at-Risk constraints

As in Klüppelberg and Pergamenshchikov [10] we use as risk measures the modifications of Value-at-Risk and Expected Shortfall introduced in Emmer, Klüppelberg and Korn [5], which reflect the capital reserve. For simplicity, in order to avoid non-relevant cases, we consider only 0 < α < 1/2.

Definition 3.1 (Value-at-Risk (VaR)). For a control process ν and 0 < α ≤ 1/2 define the Value-at-Risk (VaR) by

  VaR_t(ν, α) := x e^{R_t} − Q_t ,  t ≥ 0 ,

where for t ≥ 0 the quantity Q_t = inf{z ≥ 0 : P(X_t^ν ≤ z) ≥ α} is the α-quantile of X_t^ν.

Note that for every ν ∈ U we find

  Q_t = x exp ( R_t − V_t + (y, θ)_t − (1/2)‖y‖_t² − |q_α| ‖y‖_t ) ,   (3.1)

where q_α is the α-quantile of the standard normal distribution.

We define the level risk function for some coefficient 0 < ζ < 1 as

  ζ_t = ζ x e^{R_t} ,  t ∈ [0, T ] .   (3.2)
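Formula (3.1) is just the lognormal quantile: for ν ∈ U, ln X_t is Gaussian with mean ln x + R_t − V_t + (y,θ)_t − ‖y‖_t²/2 and standard deviation ‖y‖_t. A minimal numerical check, with illustrative scalar inputs standing in for the time-t quantities:

```python
from math import exp, log
from statistics import NormalDist

def var_quantile(x, alpha, R, V, y_theta, y_norm):
    """alpha-quantile Q_t of X_t^nu from (3.1). Arguments are the scalar
    values of R_t, V_t, (y,theta)_t and ||y||_t at a fixed time t
    (illustrative parametrisation)."""
    q = NormalDist().inv_cdf(alpha)   # q_alpha < 0 for alpha < 1/2
    return x * exp(R - V + y_theta - 0.5 * y_norm ** 2 - abs(q) * y_norm)

Q = var_quantile(1.0, 0.05, R=0.03, V=0.02, y_theta=0.08, y_norm=0.4)
m = log(1.0) + 0.03 - 0.02 + 0.08 - 0.5 * 0.4 ** 2   # mean of ln X_t
prob = NormalDist(m, 0.4).cdf(log(Q))                 # P(X_t <= Q_t)
```

Evaluating the Gaussian cdf at ln Q_t recovers the confidence level α exactly, confirming that (3.1) is the α-quantile.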

The coefficient ζ ∈ (0, 1) introduces some risk aversion behaviour into the model. In that sense it acts similarly as a utility function does. However, ζ has a clear interpretation, and every investor can choose and understand the influence of the risk bound ζ as a proportion of the riskless bond investment.

We consider the maximisation problem for the cost function (2.9) over strategies ν ∈ U for which the Value-at-Risk is bounded by the level function (3.2) over the interval [0, T]; i.e.

  max_{ν∈U} J(x, ν)  subject to  sup_{0≤t≤T} VaR_t(ν, α)/ζ_t ≤ 1 .   (3.3)

To formulate the solution of this problem we define

  G(u, λ) := ∫_0^T [ (ω(t) + λ) / ( λ|q_α| + u(ω(t) + λ) ) ]² |θ_t|² dt ,  u ≥ 0, λ ≥ 0 .   (3.4)

Moreover, for fixed λ > 0 we denote by

  ρ(λ) = inf{ u ≥ 0 : G(u, λ) ≤ 1 } ,   (3.5)

if it exists, and set ρ(λ) = +∞ otherwise. For a proof of the following lemma see A.1.

Lemma 3.2. Assume that |q_α| > ‖θ‖_T > 0 and

  0 ≤ λ ≤ λ_max = ( k_1 + √( k_2 (q_α² − ‖θ‖_T²) + k_1² ) ) / ( q_α² − ‖θ‖_T² ) ,

where k_1 = ‖√ω θ‖_T² and k_2 = ‖ω θ‖_T². Then the equation G(·, λ) = 1 has a unique positive solution ρ(λ). Moreover, 0 < ρ(λ) < ∞ for all 0 ≤ λ < λ_max, and ρ(λ_max) = 0.
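Since u ↦ G(u, λ) is strictly decreasing, ρ(λ) in (3.5) is easy to compute by bisection. The sketch below uses illustrative constants |θ_t| ≡ 3, |q_α| = 2, T = 1 (assumptions, chosen so that G(0, λ) > 1 and a positive root exists):

```python
def G(u, lam, theta=3.0, q=2.0, T=1.0, n=2000):
    """Riemann-sum evaluation of (3.4) for constant |theta_t| = theta and
    |q_alpha| = q (illustrative choices); omega(t) = T - t + 1."""
    h = T / n
    s = 0.0
    for k in range(n):
        om = T - (k + 0.5) * h + 1.0
        s += ((om + lam) / (lam * q + u * (om + lam))) ** 2 * theta * theta * h
    return s

def rho(lam):
    """Solve G(., lam) = 1 by bisection; G is strictly decreasing in u."""
    lo, hi = 0.0, 1.0
    while G(hi, lam) > 1.0:   # grow the bracket until G(hi, lam) <= 1
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid, lam) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For λ = 0 the integrand collapses to θ²/u², so G(u, 0) = θ²T/u² and ρ(0) = θ√T, which gives a closed-form check of the solver.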

Now for λ ≥ 0 fixed and 0 ≤ t ≤ T we define the weight function

  τ_λ(t) = ρ(λ)(ω(t) + λ) / ( λ|q_α| + ρ(λ)(ω(t) + λ) ) .   (3.6)

Here we set τ_λ(·) ≡ 1 for ρ(λ) = +∞. It is clear that for every fixed λ ≥ 0,

  0 ≤ τ_λ(T) ≤ τ_λ(t) ≤ 1 ,  0 ≤ t ≤ T .   (3.7)

To take the VaR constraint into account we define

  Φ(λ) = |q_α| ‖τ_λ θ‖_T + (1/2)‖τ_λ θ‖_T² − ‖√τ_λ θ‖_T² .   (3.8)

Denote by Φ^{-1} the inverse of Φ, provided it exists. A proof of the following lemma is given in A.1.

Lemma 3.3. Assume that ‖θ‖_T > 0 and

  0 < ζ < 1 − e^{ −|q_α| ‖θ‖_T + ‖θ‖_T²/2 } .   (3.9)

Then for all 0 ≤ a ≤ − ln(1 − ζ) the inverse Φ^{-1}(a) exists. Moreover, 0 ≤ Φ^{-1}(a) < λ_max for 0 < a ≤ − ln(1 − ζ) and Φ^{-1}(0) = λ_max.

Now set

  φ(κ) := Φ^{-1}( ln((1 − κ)/(1 − ζ)) ) ,  0 ≤ κ ≤ ζ ,   (3.10)

and define the investment strategy

  y_t^κ := θ_t τ_{φ(κ)}(t) ,  0 ≤ t ≤ T .   (3.11)

To introduce the optimal consumption rate we define

  v_t^κ = κ/(T − tκ) ,   (3.12)

and recall that for κ = κ_0 = T/(T + 1) the function v_t^κ coincides with the optimal unconstrained consumption rate 1/ω(t) as defined in (2.7).

It remains to fix the parameter κ. To this end we introduce the cost function

  Γ(κ) = ln(1 − κ) + T ln κ + ∫_0^T ω(t) |θ_t|² ( τ_{φ(κ)}(t) − (1/2)τ_{φ(κ)}²(t) ) dt .   (3.13)

To choose the parameter κ we maximise Γ:

  γ = γ(ζ) = argmax_{0≤κ≤ζ} Γ(κ) .   (3.14)

With this notation we can formulate the main result of this section.

Theorem 3.4. Assume that ‖θ‖_T > 0. Then for all ζ > 0 satisfying (3.9) and for all 0 < α < 1/2 for which

  |q_α| ≥ 2(T + 1)‖θ‖_T ,   (3.15)

the optimal value of J(x, ν) for problem (3.3) is given by

  J(x, ν*) = A(x) + Γ(γ(ζ)) ,   (3.16)

where

  A(x) = (T + 1) ln x + ∫_0^T ω(t) r_t dt − T ln T   (3.17)

and the optimal control ν* = (y_t*, v_t*)_{0≤t≤T} is of the form

  y_t* = y_t^γ  and  v_t* = v_t^γ .   (3.18)

The optimal wealth process is the solution of the SDE

  dX_t* = X_t* ( r_t − v_t* + (y_t*)′ θ_t ) dt + X_t* (y_t*)′ dW_t ,  X_0* = x ,

given by

  X_t* = x E_t(y*) · (T − γ(ζ)t)/T · e^{R_t + (y*, θ)_t} ,  0 ≤ t ≤ T ,

where the factor (T − γ(ζ)t)/T = e^{−V_t} accounts for the consumption.

The following corollary is a consequence of (2.9).

Corollary 3.5. If ‖θ‖_T = 0, then for all 0 < ζ < 1 and for all 0 < α < 1/2

  y_t* = 0  and  v_t* = v_t^γ

with γ = argmax_{0≤κ≤ζ} ( ln(1 − κ) + T ln κ ) = min(κ_0, ζ). Moreover, the optimal wealth process is the deterministic function

  X_t* = x · (T − min(κ_0, ζ) t)/T · e^{R_t} ,  0 ≤ t ≤ T .
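The closed form γ = min(κ_0, ζ) in Corollary 3.5 can be verified directly: κ ↦ ln(1 − κ) + T ln κ has derivative −1/(1 − κ) + T/κ, which vanishes at κ_0 = T/(T + 1), so on [0, ζ] the maximiser is κ_0 if it fits and the boundary ζ otherwise. A brute-force grid search (illustrative sketch):

```python
import math

def gamma_riskfree(T, zeta, n=200000):
    """Grid argmax of kappa -> ln(1 - kappa) + T ln kappa over (0, zeta],
    i.e. (3.14) in the ||theta||_T = 0 case; Corollary 3.5 predicts the
    maximiser min(kappa_0, zeta) with kappa_0 = T/(T + 1). Assumes zeta < 1."""
    best_k, best_val = None, -math.inf
    for i in range(1, n + 1):
        k = zeta * i / n
        val = math.log(1.0 - k) + T * math.log(k)
        if val > best_val:
            best_k, best_val = k, val
    return best_k
```

With T = 1 (so κ_0 = 1/2) the grid search returns the interior optimum 1/2 when ζ = 0.9, and the boundary value ζ itself when ζ = 0.3 < κ_0.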

In the next corollary we give a sufficient condition under which the investment process equals zero (the optimal strategy is riskless). This is the first marginal case.

Corollary 3.6. Assume that ‖θ‖_T > 0 and that (3.9) holds. If 0 < ζ < κ_0 and

  |q_α| ≥ (1 + T)‖θ‖_T ( 2 + ζ(T + 1)/((1 − ζ)T − ζ) ) ,   (3.19)

then γ = ζ and the optimal solution ν* = (y_t*, v_t*)_{0≤t≤T} is of the form

  y_t* = 0  and  v_t* = v_t^ζ .

Moreover, the optimal wealth process is the deterministic function

  X_t* = x · (T − ζt)/T · e^{R_t} ,  0 ≤ t ≤ T .

Below we give a sufficient condition under which the solution of the optimisation problem (3.3) coincides with the unconstrained solution (2.7). This is the second marginal case.

Theorem 3.7. Assume that

  ζ > 1 − (1/(T + 1)) e^{ −|q_α| ‖θ‖_T + ‖θ‖_T²/2 } .   (3.20)

Then for all 0 < α < 1/2 for which |q_α| ≥ ‖θ‖_T, the solution of the optimisation problem (3.3) is given by (2.7)–(2.8).

3.2 Expected shortfall constraints

Our next risk measure is an analogous modification of the Expected Shortfall (ES).

Definition 3.8 (Expected Shortfall (ES)). For a control process ν and 0 < α ≤ 1/2 define

  m_t(ν, α) = E_x [ X_t^ν | X_t^ν ≤ Q_t ] ,  t ≥ 0 ,

where Q_t is the α-quantile of X_t^ν given by (3.1). The Expected Shortfall (ES) is then defined as

  ES_t(ν, α) = x e^{R_t} − m_t(ν, α) ,  t ≥ 0 .

Again for ν ∈ U we find

  m_t(ν, α) = x F_α(‖y‖_t) e^{R_t − V_t + (y,θ)_t} ,

where

  F_α(z) = ( ∫_{|q_α|+z}^∞ e^{−t²/2} dt ) / ( ∫_{|q_α|}^∞ e^{−t²/2} dt ) .
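The ratio F_α(z) of Gaussian tail integrals can be computed without numerical integration via the complementary error function, since ∫_a^∞ e^{−t²/2} dt = √(π/2) erfc(a/√2). A minimal sketch (the value |q_α| is passed in as a parameter):

```python
from math import erfc, sqrt

def F_alpha(z, q_abs):
    """F_alpha(z) from Definition 3.8: the ratio of the Gaussian tail
    integrals over (|q_alpha| + z, inf) and (|q_alpha|, inf), using
    int_a^inf e^{-t^2/2} dt = sqrt(pi/2) * erfc(a / sqrt(2))."""
    return erfc((q_abs + z) / sqrt(2.0)) / erfc(q_abs / sqrt(2.0))
```

As expected for a normalised tail ratio, F_α(0) = 1 and F_α is strictly decreasing in z with values in (0, 1] for z ≥ 0.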

We consider the maximisation problem for the cost function (2.5) over strategies ν ∈ U for which the Expected Shortfall is bounded by the level function (3.2) over the interval [0, T], i.e.

  max_{ν∈U} J(x, ν)  subject to  sup_{0≤t≤T} ES_t(ν, α)/ζ_t ≤ 1 .   (3.21)

We proceed similarly as for the VaR-constraint problem (3.3). Define

  G_1(u, λ) := ∫_0^T [ (ω(t) + λ) / ( λ ψ_α(u) + u(ω(t) + λ) ) ]² |θ_t|² dt ,  u ≥ 0, λ ≥ 0 ,   (3.22)

where

  ψ_α(u) = 1/ϕ(u + |q_α|) − u  with  ϕ(y) = e^{y²/2} ∫_y^∞ e^{−t²/2} dt .   (3.23)

It is well known and easy to prove that

  1/y − 1/y³ ≤ ϕ(y) ≤ 1/y ,  y > 0 .   (3.24)

This means that ψ_α(u) ≥ |q_α| for all u ≥ 0, which implies for every fixed λ ≥ 0 that G_1(u, λ) ≤ G(u, λ) for all u ≥ 0. Moreover, similarly to (3.5) we define

  ρ_1(λ) = inf{ u ≥ 0 : G_1(u, λ) ≤ 1 } .   (3.25)

Since G1 has similar behaviour as G, the following lemma is a modification of Lemma 3.2. Its proof is analogous to the proof of Lemma 3.2.
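The function ϕ in (3.23) is the Mills ratio of the standard normal distribution, so both the bounds (3.24) and the consequence ψ_α(u) ≥ |q_α| can be checked numerically via erfc (the specific grid values below are illustrative):

```python
from math import erfc, exp, pi, sqrt

def phi_mills(y):
    """phi(y) = e^{y^2/2} int_y^inf e^{-t^2/2} dt, i.e. the Mills ratio."""
    return exp(y * y / 2.0) * sqrt(pi / 2.0) * erfc(y / sqrt(2.0))

def psi_alpha(u, q_abs):
    """psi_alpha(u) = 1/phi(u + |q_alpha|) - u from (3.23)."""
    return 1.0 / phi_mills(u + q_abs) - u
```

The upper bound ϕ(y) ≤ 1/y gives 1/ϕ(u + |q_α|) ≥ u + |q_α|, which is exactly why ψ_α(u) ≥ |q_α| for every u ≥ 0.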

Lemma 3.9. Assume that |q_α| > ‖θ‖_T > 0 and

  0 ≤ λ ≤ λ_max^1 = ( k_1 + √( k_2 (ψ_α²(0) − ‖θ‖_T²) + k_1² ) ) / ( ψ_α²(0) − ‖θ‖_T² ) ,

where k_1 and k_2 are given in Lemma 3.2. Then the equation G_1(·, λ) = 1 has a unique positive solution ρ_1(λ). Moreover, 0 < ρ_1(λ) < ∞ for 0 ≤ λ < λ_max^1 and ρ_1(λ_max^1) = 0.

Now for λ ≥ 0 fixed and 0 ≤ t ≤ T we define the weight function

  ς_λ(t) = ρ_1(λ)(ω(t) + λ) / ( λ ψ_α(ρ_1(λ)) + ρ_1(λ)(ω(t) + λ) ) ,   (3.26)

and we set ς_λ(·) ≡ 1 for ρ_1(λ) = +∞. Note that for every fixed λ ≥ 0,

  0 ≤ ς_λ(T) ≤ ς_λ(t) ≤ 1 ,  0 ≤ t ≤ T .   (3.27)

To take the ES constraint into account we define

  Φ_1(λ) = − ‖√ς_λ θ‖_T² − ln F_α(‖ς_λ θ‖_T) .   (3.28)

Denote by Φ_1^{-1} the inverse of Φ_1, provided it exists. The proof of the next lemma is given in Section A.2.

Lemma 3.10. Assume that ‖θ‖_T > 0 and

  0 < ζ < 1 − F_α(‖θ‖_T) e^{‖θ‖_T²} .   (3.29)

Then for all 0 ≤ a ≤ − ln(1 − ζ) the inverse Φ_1^{-1}(a) exists, 0 ≤ Φ_1^{-1}(a) < λ_max^1 for 0 < a ≤ − ln(1 − ζ), and Φ_1^{-1}(0) = λ_max^1.

Now, similarly to (3.10) we set

  φ_1(κ) = Φ_1^{-1}( ln((1 − κ)/(1 − ζ)) ) ,  0 ≤ κ ≤ ζ ,   (3.30)

and define the investment strategy

  y_t^{1,κ} = θ_t ς_{φ_1(κ)}(t) ,  0 ≤ t ≤ T .   (3.31)

We introduce the cost function

  Γ_1(κ) = ln(1 − κ) + T ln κ + ∫_0^T ω(t) |θ_t|² ( ς_{φ_1(κ)}(t) − (1/2)ς_{φ_1(κ)}²(t) ) dt .   (3.32)

To fix the parameter κ we maximise Γ_1:

  γ_1 = γ_1(ζ) = argmax_{0≤κ≤ζ} Γ_1(κ) .   (3.33)

With this notation we can formulate the main result of this section.

Theorem 3.11. Assume that ‖θ‖_T > 0. Then for all ζ > 0 satisfying (3.29) and for all 0 < α < 1/2 satisfying

  |q_α| ≥ max(1, 2(T + 1)‖θ‖_T) ,   (3.34)

the optimal value of J(x, ν) for the optimisation problem (3.21) is given by

  J(x, ν*) = A(x) + Γ_1(γ_1(ζ)) ,

where the function A is defined in (3.17) and the optimal control ν* = (y_t*, v_t*)_{0≤t≤T} is of the form (recall the definition of v_t^κ in (3.12))

  y_t* = y_t^{1,γ_1}  and  v_t* = v_t^{γ_1} .   (3.35)

The optimal wealth process is the solution to the SDE

  dX_t* = X_t* ( r_t − v_t* + (y_t*)′ θ_t ) dt + X_t* (y_t*)′ dW_t ,  X_0* = x ,

given by

  X_t* = x E_t(y*) · (T − γ_1(ζ)t)/T · e^{R_t + (y*, θ)_t} ,  0 ≤ t ≤ T .

2 1 Fα (θT ) eθT . T +1

(3.36)

Then for all 0 < α < 1/2 for which |qα | > max(1, θT ) the solution of problem (3.21) is given by (2.7)–(2.8).

3.3 Conclusion If we compare the optimal solutions (3.18) and (3.35) with the unconstrained optimal strategy (2.7), then the risk bounds force investors to restrict their investment into the risk assets by multiplying the unconstrained optimal strategy by the coefficients given in (3.11) and (3.14) for VaR constraints and (3.31) and (3.33) for ES constraints. The impact of the risk measure constraints enter into the portfolio process through the risk level ζ and the confidence level α.

256

4

C. Kl¨uppelberg and S. Pergamenshchikov

Auxiliary results and proofs

In this section we consider maximisation problems with constraints for the two terms of (2.9):  T  T 



1 (4.1) ω(t) yt′ θt − |yt |2 dt . ln vt − Vt dt and H(y) := I(V ) := 2 0 0 We start with a result concerning the optimisation of I(·), which will be needed to prove results from both Sections 3.1 and 3.2. Let W[0, T ] be the set of differentiable functions f : [0, T ] → R having positive cadlag derivative f˙ satisfying condition (2.4). For b > 0 we define W0,b [0, T ] = {f ∈ W[0, T ] : f (0) = 0

and

f (T ) = b} .

(4.2)

Lemma 4.1. Consider the optimisation problem

  max_{f ∈ W_{0,b}[0,T]} I(f) .

The optimal value of I is given by

  I*(b) = max_{f ∈ W_{0,b}[0,T]} I(f) = I(f*) = − T ln T − T ln ( e^b/(e^b − 1) ) ,   (4.3)

with optimal solution

  f*(t) = ln ( T e^b / ( T e^b − t(e^b − 1) ) ) ,  0 ≤ t ≤ T .   (4.4)
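Before turning to the proof, the closed forms (4.3)–(4.4) can be sanity-checked numerically (with illustrative values T = 2, b = 1): f* satisfies the boundary conditions, I(f*) matches I*(b), and a competing function in W_{0,b}[0, T] scores lower.

```python
import math

T, b = 2.0, 1.0
eb = math.exp(b)

def f_star(t):
    """Candidate optimiser (4.4)."""
    return math.log(T * eb / (T * eb - t * (eb - 1.0)))

def fdot_star(t):
    """Derivative of f_star."""
    return (eb - 1.0) / (T * eb - t * (eb - 1.0))

def I(fdot, f, n=20000):
    """Midpoint-rule evaluation of I(f) = int_0^T (ln fdot(t) - f(t)) dt."""
    h = T / n
    return sum((math.log(fdot((k + 0.5) * h)) - f((k + 0.5) * h)) * h
               for k in range(n))

I_star = -T * math.log(T) - T * math.log(eb / (eb - 1.0))  # value (4.3)
```

The linear function f(t) = bt/T, with constant derivative b/T, is a natural competitor and indeed gives a strictly smaller value of I.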

Proof. Firstly, we consider this optimisation problem in the space C²[0, T] of twice continuously differentiable functions on [0, T]:

  max_{f ∈ W_{0,b}[0,T] ∩ C²[0,T]} I(f) .

By variational calculus methods we find that this problem has the solution (4.4); i.e.

  max_{f ∈ W_{0,b}[0,T] ∩ C²[0,T]} I(f) = I(f*) ,

where the optimal solution f* is given in (4.4).

Take now f ∈ W_{0,b}[0, T] and suppose first that its derivative satisfies

  f˙_min = inf_{0≤t≤T} f˙(t) > 0 .

Let Υ be a positive twice differentiable function on [−1, 1] such that ∫_{−1}^1 Υ(z) dz = 1, and set Υ(z) := 0 for |z| ≥ 1. We can take, for example,

  Υ(z) = exp( −1/(1 − z²) ) / ∫_{−1}^1 exp( −1/(1 − υ²) ) dυ  for |z| ≤ 1 ,  and  Υ(z) = 0  for |z| > 1 .

1 ⎨R ” “1 if |z| ≤ 1 , exp − 1−z 2 1 1 exp − 1−υ2 dυ Υ(z) = −1 ⎩ if |z| > 1 . 0

By setting f˙(t) = f˙(0) for all t ≤ 0 and f˙(t) = f˙(T) for all t ≥ T, we define an approximating sequence of functions by

  υ_n(t) = n ∫_R Υ(n(u − t)) f˙(u) du = ∫_{−1}^1 Υ(z) f˙( t + z/n ) dz .

It is clear that (υ_n)_{n≥1} ⊂ C²[0, T]. Moreover, we recall that f˙ is cadlag, which implies that it is bounded on [0, T]; i.e.

  sup_{0≤t≤T} f˙(t) := f˙_max < ∞ ,

and its discontinuity set has Lebesgue measure zero. Therefore, the sequence (υ_n)_{n≥1} is bounded; more precisely,

  0 < f˙_min ≤ υ_n(t) ≤ f˙_max < ∞ ,  0 ≤ t ≤ T ,   (4.5)

and υ_n → f˙ as n → ∞ for Lebesgue almost all t ∈ [0, T]. Therefore, by the Lebesgue dominated convergence theorem we obtain

  lim_{n→∞} ∫_0^T | υ_n(t) − f˙(t) | dt = 0 .

Moreover, the inequalities (4.5) imply

  | ln υ_n | ≤ ln max(f˙_max, 1) + | ln min(f˙_min, 1) | .

Therefore, f_n(t) = ∫_0^t υ_n(u) du belongs to W_{0,b_n}[0, T] ∩ C²[0, T] for b_n := ∫_0^T υ_n(u) du. It is clear that

  lim_{n→∞} I(f_n) = I(f)  and  lim_{n→∞} b_n = b .

This implies that

  I(f) ≤ I*(b) ,

and

fδ (t) =



t

f˙δ (u) du ,

0

It is clear that fδ ∈ W0,bδ [0, T ] for bδ = Moreover, in view of the convergence lim

δ→0



0

T



T 0

0≤t≤T.

f˙δ (t) dt. Therefore, I(fδ ) ≤ I ∗ (bδ ).

f˙δ (t) − f˙(t) dt = 0


we get lim sup_{δ→0} I(f_δ) ≤ I*(b). Moreover, note that

  | I(f_δ) − I(f) | ≤ ∫_{A_δ} ( ln δ − ln f˙(t) ) dt + T ∫_{A_δ} ( δ − f˙(t) ) dt ≤ ∫_{A_δ} ( ln f˙(t) )⁻ dt + δ T Λ(A_δ) ,

where A_δ = { t ∈ [0, T] : 0 ≤ f˙(t) ≤ δ } and Λ(A_δ) is the Lebesgue measure of A_δ. Moreover, by the definition of W[0, T] in (4.2) the Lebesgue measure of the set { t ∈ [0, T] : f˙(t) = 0 } equals zero and ∫_0^T ( ln f˙_t )⁻ dt < ∞. This implies that lim_{δ→0} Λ(A_δ) = 0 and hence

  lim_{δ→0} I(f_δ) = I(f) ,

i.e. I(f) ≤ I*(b). □

i.e. I(f ) ≤ I ∗ (b).



In order to deal with H as defined in (4.1) we need a preliminary result. As usual, we denote by L²[0, T] the Hilbert space of functions y satisfying the square integrability condition in (2.3). Define for y ∈ L²[0, T] with ‖y‖_T > 0

  ȳ_t = y_t/‖y‖_T  and  l_y(h) = ‖y + h‖_T − ‖y‖_T − (ȳ, h)_T .   (4.6)

We shall need the following lemma.

Lemma 4.2. Assume that y ∈ L²[0, T] and ‖y‖_T > 0. Then for every h ∈ L²[0, T] the function l_y(h) ≥ 0.

Proof. Obviously, if h ≡ a y for some a ∈ R, then l_y(h) = (|1 + a| − 1 − a)‖y‖_T ≥ 0. Let now h ≢ a y for all a ∈ R. Then

  l_y(h) = ( 2(y, h)_T + ‖h‖_T² )/( ‖y + h‖_T + ‖y‖_T ) − (ȳ, h)_T = ( ‖h‖_T² − (ȳ, h)_T ( (ȳ, h)_T + l_y(h) ) )/( ‖y + h‖_T + ‖y‖_T ) .

It is easy to show directly that for all h

  ‖y + h‖_T + ‖y‖_T + (ȳ, h)_T ≥ 0 ,

with equality if and only if h ≡ a y for some a ≤ −1. Therefore, if h ≢ a y, we obtain

  l_y(h) = ( ‖h‖_T² − (ȳ, h)_T² )/( ‖y + h‖_T + ‖y‖_T + (ȳ, h)_T ) ≥ 0 . □
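Lemma 4.2 is in essence the Cauchy–Schwarz inequality: ‖y + h‖ ≥ (y, y + h)/‖y‖ = ‖y‖ + (ȳ, h). A quick finite-dimensional check, with the Euclidean norm standing in for the L²[0, T] norm (an illustrative discretisation, not part of the proof):

```python
import math
import random

def l_y(y, h):
    """Discrete analogue of (4.6): l_y(h) = ||y + h|| - ||y|| - (y/||y||, h),
    with the Euclidean norm standing in for the L^2[0, T] norm."""
    ny = math.sqrt(sum(a * a for a in y))
    nyh = math.sqrt(sum((a + c) ** 2 for a, c in zip(y, h)))
    return nyh - ny - sum(a * c for a, c in zip(y, h)) / ny

rng = random.Random(1)
worst = min(
    l_y([rng.gauss(0.0, 1.0) for _ in range(50)],
        [rng.gauss(0.0, 1.0) for _ in range(50)])
    for _ in range(200)
)
```

For proportional perturbations h = a y the value reduces to (|1 + a| − 1 − a)‖y‖, e.g. 2‖y‖ for a = −2, matching the first case of the proof.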


4.1 Results and proofs of Section 3.1

We introduce the constraint functional K : L²[0, T] → R as

  K(y) := (1/2)‖y‖_T² + |q_α| ‖y‖_T − (y, θ)_T .   (4.7)

For 0 < a ≤ − ln(1 − ζ) we consider the following optimisation problem:

  max_{y ∈ L²[0,T]} H(y)  subject to  K(y) = a .   (4.8)

Proposition 4.3. Assume that the conditions of Lemma 3.3 hold. Then the optimisation problem (4.8) has the unique solution y* = y^a with y_t^a = θ_t τ_{λ_a}(t) and λ_a = Φ^{-1}(a).

Proof. According to Lagrange's method we consider the following unconstrained problem:

  max_{y ∈ L²[0,T]} Ψ(y, λ) ,   (4.9)

where Ψ(y, λ) = H(y) − λK(y) and λ ∈ R is the Lagrange multiplier. Now it suffices to find some λ ∈ R for which the problem (4.9) has a solution which satisfies the constraint in (4.8). To this end we represent Ψ as

  Ψ(y, λ) = ∫_0^T (ω(t) + λ) ( y_t′ θ_t − (1/2)|y_t|² ) dt − λ |q_α| ‖y‖_T .

It is easy to see that for λ < 0 the maximum in (4.9) equals +∞; i.e. the problem (4.8) has no solution. Therefore, we assume that λ ≥ 0. First we calculate the Fréchet derivative, i.e. the linear operator D_y(·, λ) : L²[0, T] → R defined for h ∈ L²[0, T] as

  D_y(h, λ) = lim_{δ→0} ( Ψ(y + δh, λ) − Ψ(y, λ) )/δ .

For ‖y‖_T > 0 we obtain

  D_y(h, λ) = ∫_0^T ( d_y(t, λ) )′ h_t dt  with  d_y(t, λ) = (ω(t) + λ)(θ_t − y_t) − λ|q_α| ȳ_t .

If ‖y‖_T = 0, then

  D_y(h, λ) = ∫_0^T (ω(t) + λ) θ_t′ h_t dt − λ|q_α| ‖h‖_T .

Define now

  Δ_y(h, λ) = Ψ(y + h, λ) − Ψ(y, λ) − D_y(h, λ) .   (4.10)


We have to show that Δ_y(h, λ) ≤ 0 for all y, h ∈ L²[0, T]. Indeed, if ‖y‖_T = 0 then

  Δ_y(h, λ) = − (1/2) ∫_0^T (ω(t) + λ) |h_t|² dt ≤ 0 .

If ‖y‖_T > 0, then

  Δ_y(h, λ) = − (1/2) ∫_0^T (ω(t) + λ) |h_t|² dt − λ |q_α| l_y(h) ≤ 0 ,

by Lemma 4.2, for all λ ≥ 0 and for all y, h ∈ L²[0, T]. To find the solution of the optimisation problem (4.9) we have to find y ∈ L²[0, T] such that

  D_y(h, λ) = 0  for all h ∈ L²[0, T] .   (4.11)

First notice that for ‖θ‖_T > 0 the solution of (4.11) cannot be zero, since for y = 0 we obtain D_y(h, λ) < 0 for h = −θ. Consequently, we have to find an optimal solution of (4.11) with ‖y‖_T > 0. This means we have to find a non-zero y ∈ L²[0, T] such that d_y(t, λ) = 0. One can show directly that for 0 ≤ λ < λ_max the unique solution of this equation is given by

  y_t^λ := θ_t τ_λ(t) ,   (4.12)

where τ_λ(t) is defined in (3.6). Note that in view of the second part of Lemma 3.2 and definition (3.6) the function y^λ ≢ 0 for every 0 ≤ λ < λ_max. It remains to choose the Lagrange multiplier λ so that it satisfies the constraint in (4.8). To this end note that K(y^λ) = Φ(λ). Under the conditions of Lemma 3.3 the inverse of Φ exists and satisfies 0 ≤ Φ^{-1}(a) < λ_max for 0 < a ≤ − ln(1 − ζ). Thus y^{λ_a} ≢ 0 with λ_a = Φ^{-1}(a) is the solution of the problem (4.8). □

by Lemma 4.2 for all λ ≥ 0 and for all y, h ∈ L2 [0, T ]. To find the solution of the optimisation problem (4.9) we have to find y ∈ L2 [0, T ] such that Dy (h, λ) = 0 for all h ∈ L2 [0, T ] . (4.11) First notice that for θT > 0, the solution of (4.11) can not be zero, since for y = 0 we obtain Dy (h, λ) < 0 for h = −θ. Consequently, we have to find an optimal solution to (4.11) for y satisfying yT > 0. This means we have to find a non-zero y ∈ L2 [0, T ] such that dy (t, λ) = 0 . One can show directly that for 0 ≤ λ < λmax the unique solution of this equation is given by ytλ := θt τλ (t) , (4.12) where τλ (t) is defined in (3.6). Note that in view of the second part of Lemma 3.2 and definition (3.6) the function y λ ≡ 0 for every 0 ≤ λ < λmax . It remains to choose the Lagrage multiplier λ so that it satisfies the constraint in (4.8). To this end note that K(y λ ) = Φ(λ) . Under the conditions of Lemma 3.3 the inverse of Φ exists and for 0 < a ≤ − ln(1 − ζ) the inverse function 0 ≤ Φ−1 (a) < λmax . Thus y λa ≡ 0 with λa = Φ−1 (a) is the solution of the problem (4.8).  We are now ready to proof the main results in Section 3.1. The auxiliary lemmas are proved in A.1. Proof of Theorem 3.4. In view of the representation (2.9) and definitions (4.1), we can rewrite the cost function as  T ω(t)rt dt + ln(1 − κ) + I(V ) + H(y) , (4.13) J(x, ν) = (T + 1) ln x + 0

−VT

where κ = 1 − e . We start to maximise J(x, ν) by maximising I over all functions V . To this end we fix the last value of the consumption process, by setting VT = − ln(1 − κ) for some parameter 0 ≤ κ < 1 which will be chosen later. By Lemma 4.1 we find that I(V ) ≤ I(V κ ) = −T ln T + T ln κ ,


where

  V_t^κ = ∫_0^t v_s^κ ds = ln ( T/(T − κt) ) ,  0 ≤ t ≤ T .   (4.14)

Define now

  L_t(ν) = (y, θ)_t − (1/2)‖y‖_t² − V_t − |q_α| ‖y‖_t ,  0 ≤ t ≤ T ,

and note that condition (3.3) is equivalent to

  inf_{0≤t≤T} L_t(ν) ≥ ln(1 − ζ) .   (4.15)

(4.15)

Firstly, we consider the bound in (4.15) only at time t = T : LT (ν) ≥ ln (1 − ζ) .

Recall definition (4.7) of K and choose the function V as V κ as in (4.14). Then we can rewrite the bound for LT (ν) as a bound for K and obtain K(y) ≤ ln(1 − κ)/(1 − ζ) ,

0≤κ≤ζ.

To find the optimal investment strategy we need to solve the optimisation problem (4.8) for 0 ≤ a ≤ ln(1 − κ)/(1 − ζ). By Proposition 4.3 for 0 < a ≤ − ln(1 − ζ) max

y∈L2 [0,T ] , K(y)=a

H(y) = H( y a ) := C(a) ,

(4.16)

where the solution ya is defined in Proposition 4.3. Note that the definitions of the functions H and ya imply  T 

1 ω(t) τλa (t) − τλ2a (t) |θt |2 dt with λa = Φ−1 (a) . C(a) = 2 0 To consider the optimisation problem (4.8) for a = 0 we observe that 1 K(y) ≥ yT (|qα | − θT ) + y2T ≥ 0 , 2

provided that |qα | > θT (which follows from (3.15)). Thus, there exists only one function for which K(y) = 0, namely y ≡ 0. Furthermore, by Lemma 3.2 ρ(λmax ) = 0 and, therefore, definition (3.6) implies τλmax (·) ≡ 0

and

y λmax ≡ 0 .

(4.17) −1

In view of Lemma 3.3 we get Φ−1 (0) = λmax , therefore, y Φ (0) = y λmax ≡ 0; i.e. y λa with λa = Φ−1 (a) is the solution of the optimisation problem (4.8) for all 0 ≤ a ≤ − ln(1 − ζ). Now we calculate the derivative of C(·) as  T

˙ ω(t) 1 − τλa (t) |θt |2 τ1 (t, λa ) dt , C(a) = λ˙ a 0

262

C. Kl¨uppelberg and S. Pergamenshchikov

where

∂τλ (t) . (4.18) ∂λ ˙ ˙ Since λ˙ a = 1/Φ(λ a ), by Lemma A.1, the derivative C(a) is positive. Therefore,   ln(1 − κ) , max C(a) = C 1−ζ 0≤a≤ln(1−κ)/(1−ζ) τ1 (t, λ) =

and we choose a = ln(1 − κ)/(1 − ζ) in (4.16). Now recall the definitions (3.11) and (3.12), the representation (4.13) and set ν κ = ( ytκ , vtκ )0≤t≤T . Thus for ν ∈ U with VT = − ln(1 − κ) we have J(x, ν) ≤ J(x, ν κ ) = A(x) + Γ(κ) .

It is clear that (3.14) gives the optimal value for the parameter κ. To finish the proof we have to verify condition (4.15) for the strategy ν ∗ defined in (3.18). Indeed, we have  t 1 Lt (ν ∗ ) = (y ∗ , θ)t − y ∗ 2t − |qα | y ∗ t − vs∗ ds 2 0  t  t g(u) du − vs∗ ds , =: − 0

where

  τ∗ g(t) = τt∗ |θt |2 |qα | χ(t) − 1 + t 2

0

and

τt∗ χ(t) =  . t 2 0 (τs∗ )2 |θs |2 ds

We recall φ(κ) from (3.10) and γ from (3.14), then τt∗ = τφ(γ) (t). Definition (3.6) implies χ(t) ≥

τφ(γ) (T ) 1 1 + φ(γ) ≥ . ≥ 2τφ(γ) (0)θT 2θT (1 + T + φ(γ)) 2θT (1 + T )

Therefore, condition (3.15) guarantees that g(t) ≥ 0 for t ≥ 0, which implies Lt (ν ∗ ) ≥ LT (ν ∗ ) = ln(1 − ζ) .

This concludes the proof of Theorem 3.4.



Proof of Corollary 3.6. Consider now the optimisation problem (3.14). To solve it we have to find the derivative of the integral in (3.13)    T 1 2 2 τφ(κ) (t) − τφ(κ) (t) dt . ω(t) |θt | B(κ) := 2 0 Indeed, we have with φ(κ) as in (3.10),  T

∂ ˙ τφ(κ) (t) dt . ω(t)|θt |2 1 − τφ(κ) (t) B(κ) = ∂κ 0

Optimal consumption and investment

263

∂ ˙ τφ(κ) (t) = τ1 (t, φ(κ)) φ(κ) , ∂κ

(4.19)

Obviously,

where τ1 (t, λ) is defined in (4.18). By the definition of φ in (3.10) Φ(φ(κ)) = ln(1 − κ)/(1 − ζ), we have 1 ˙ φ(κ) =− . ˙Φ(φ(κ))(1 − κ) Therefore, ˙ B(κ) =−

1  B(φ(κ)) 1−κ

 B(λ) =

with

T 0

ω(t)| (1 − τλ (t)) τ1 (t, λ) |θt |2 dt . ˙ Φ(λ)

We calculate now the derivative of Φ as  T ˙ τ (t, λ) τ1 (t, λ) |θt |2 dt , Φ(λ) =

(4.20)

0

where

τ (t, λ) =

|qα |τλ (t) − 1 + τλ (t) . τλ θT

By inequality (A.1), τ (t, λ) > 0 and, moreover, in view of Lemma A.1, we have τ1 (t, λ) ≤ 0. Therefore, taking representation (4.20) into account, we obtain  B(λ) =

T 0

ω(t) (1 − τλ (t)) |τ1 (t, λ)| |θt |2 dt . T 2 dt τ (t, λ) |τ (t, λ)| |θ | 1 t 0

Moreover, using the lower bound (A.1) we estimate  B(λ)
κ 1−κ ζ 1−ζ

This implies γ = ζ , i.e. φ(γ) = Φ−1 (0). Therefore, by Lemma 3.3, φ(γ) = λmax .  Therefore, we conclude from (4.17) that yt∗ = τλmax (t)θt = 0 for all 0 ≤ t ≤ T .

264

C. Kl¨uppelberg and S. Pergamenshchikov

Proof of Theorem 3.7. It suffices to verify condition (4.15) for the strategy ν ∗ = (yt∗ , vt∗ )0≤t≤T with yt∗ = θt and vt∗ = 1/ω(t) for t ∈ [0, T ]. It is easy to show that condition (3.20) implies that LT (ν ∗ ) ≥ ln(1 − ζ). Moreover, for 0 ≤ t ≤ T we can represent Lt (ν ∗ ) as  t  t vs∗ ds , gs∗ ds − Lt (ν ∗ ) = − 0

0

where gt∗

=



|qα | −1 θt



|θt |2 ≥ 2



|qα | −1 θT



|θt |2 ≥0 2

since we have assumed that |qα | ≥ θT . Therefore, Lt (ν ∗ ) is decreasing in t; i.e. Lt (ν ∗ ) ≥ LT (ν ∗ ) for all 0 ≤ t ≤ T . This implies the assertion of Theorem 3.7. 

4.2 Results and proofs of Section 3.2 Next we introduce the constraint K1 (y) := −(y, θ)T − ln Fα (yT ) .

(4.22)

For 0 < a ≤ − ln(1 − ζ) we consider the following optimisation problems max

y∈L2 [0,T ]

H(y)

subject to

K1 (y) = a .

(4.23)

The following result is the analog of Proposition 4.3. Proposition 4.4. Assume that the conditions of Lemma 3.10 hold. Then the optimisa(a). tion problem (4.23) has the unique solution yt∗ = yt1,a = θt ςλ1,a (t) with λ1,a = Φ−1 1

Proof. As in the proof of Proposition 4.3 we use Lagrange’s method. We consider the unconstrained problem max Ψ1 (y, λ) , (4.24) y∈L2 [0,T ]

where Ψ1 (y, λ) = H(y) − λK1 (y) and λ ≥ 0 is the Lagrange multiplier. Taking into account the definition of Fα in (4.22), and setting fα = ln Fα , we obtain the representation   T ω(t) ′ 2 |yt | (ω(t) + λ ) θt yt − dt + λ fα (yT ) . Ψ1 (y, λ) = 2 0 Its Fr´echet derivative is given by D1,y (h, λ) = lim

δ→0

Ψ1 (y + δh, λ) − Ψ1 (y, λ) . δ

It is easy to show directly that for yT > 0  T (d1,y (t, λ))′ ht dt , D1,y (h, λ) = 0

Optimal consumption and investment

265

where d1,y (t, λ) = (ω(t) + λ)θt − ω(t) yt + λf˙α (yT ) y t ,

and f˙α (·) denotes the derivative of fα (·). If yT = 0, then D1,y (h, λ) =



T

0

(ω(t) + λ) θt′ ht dt + λ f˙α (0)hT .

We set now ∆1,y (h, λ) = Ψ1 (y + h, λ) − Ψ1 (y, λ) − D1,y (h, λ) ,

(4.25)

and show that Δ1,y (h, λ) ≤ 0 for all y, h ∈ L2 [0, T ]. Indeed, if yT = 0, then Δ1,y (h, λ) = −

1 2



T 0



ω(t) |ht |2 dt + λ fα (hT ) − f˙α (0)hT .

Recalling the definition of ϕ in (3.23) and setting x1 = |qα | + x, the derivatives of fα are given by f˙α (x) = −

1 ϕ(x1 )

1 − x1 ϕ(x1 ) ≤ 0. f¨α (x) = − ϕ2 (x1 )

and

(4.26)

The last inequality follows directly from the right inequality in (3.24). Therefore, taking into account that fα (0) = 0 we get fα (x) ≤ f˙α (0)x for all x ≥ 0. Thus for λ ≥ 0 we have Δ1,y (h, λ) ≤ 0 in the case when yT = 0. Let now yT > 0 and y = y/yT . Then 1 Δ1,y (h, λ) = − 2



T

0

ω(t) |ht |2 dt + λ δ1,y (h) ,

where δ1,y (h) = fα (y + hT ) − fα (yT ) − f˙α (yT ) (y, h)T .

Moreover, by Taylor’s formula and denoting by f¨α the second derivative of fα , we get δ1,y (h) = f˙α (yT ) ly (h) +

1 ¨ 2 f (ϑ) (y + hT − yT ) , 2 α

where ly (·) is defined in (4.6) and min(yT , y + hT ) ≤ ϑ ≤ max(yT , y + hT ) .

Now the last inequality in (4.26) and Lemma 4.2 imply that Δ1,y (h, λ) ≤ 0 for all λ ≥ 0 and y, h ∈ L2 [0, T ]. The solution of the optimisation problem (4.24) is given by y ∈ L2 [0, T ] such that D1,y (h, λ) = 0

for all

h ∈ L2 [0, T ] .

(4.27)

266

C. Kl¨uppelberg and S. Pergamenshchikov

Notice that for θT > 0 the solution (4.27) can not be zero, since for y = 0 we obtain D1,y (h, λ) < 0 for h = −θ. Therefore, we have to solve equation (4.27) for y with yT > 0, equivalently, we have to find a non-zero function in L2 [0, T ] satisfying d1,y (t, λ) = 0 .

One can show directly that for 0 ≤ λ ≤ λ1max the solution of this equation is given by yt1,λ = ςλ (t)θt ,

(4.28)

where ςλ (t) is defined in (3.26). Now we have to choose the parameter λ to satisfy the constraint in (4.23). Note that K1 (y 1,λ ) = Φ1 (λ) .

Under the conditions of Lemma 3.10 the inverse of Φ1 exists. Therefore, the function y 1,λa ≡ 0 with λa = Φ−1 (a) is the solution of the optimisation problem (4.23).  1 Proof of Theorem 3.11. Define L1,t (ν) = (y, θ)t − Vt + fα (yt) ,

0≤t≤T,

(4.29)

with fα = ln Fα . First note that the risk bound in the optimisation problem (3.21) is equivalent to inf

0≤t≤T

L1,t (ν) ≥ ln (1 − ζ) ,

(4.30)

As in the proof of Theorem 3.4 we start with the constraint at time t = T : L1,T (ν) ≥ ln (1 − ζ) .

Taking the definition of K1 in (4.22) into account and choosing V = V κ as in (4.14) we rewrite this inequality as K1 (y) ≤ ln(1 − κ)/(1 − ζ) ,

0≤κ≤ζ.

To find the optimal strategy we use the optimisation problem (4.23), extending the range of a to 0 ≤ a ≤ ln(1 − κ)/(1 − ζ). In Proposition 4.4 we established that for each 0 < a ≤ − ln(1 − ζ) max

y∈L2 [0,T ] , K1 (y)=a

H(y) = H( y 1,a ) =: C1 (a) ,

(4.31)

where the optimal solution y1,a is defined in Proposition 4.4. We observe that    T 1 2 2 ω(t)|θt | ςλ1,a (t) − ςλ (t) dt with λ1,a = Φ−1 C1 (a) = (a) . 1 2 1,a 0 To study the optimisation problem (4.23) for a = 0 note that K1 (y) ≥ kmin (yT )

with

kmin (x) = −xθT − fα (x) ,

x ≥ 0.

Optimal consumption and investment

Moreover, k˙ min (x) =

1 − θT , ϕ(|qα | + x)

267

x ≥ 0,

and by the right inequality in (3.24) we obtain for |qα | > θT (which follows from condition (3.15)) k˙ min (x) ≥ |qα | + x − θT > 0 ,

x ≥ 0, .

Therefore, kmin (x) > kmin (0) = 0 for all x > 0 and kmin (x) = 0 if and only if x = 0. This means that only y ≡ 0 satisfies K1 (y) = 0. Moreover, in view of Lemma 3.9 and Lemma 3.10, as in the proof of Theorem 3.4, we obtain y1,0 ≡ 0. Therefore, the function y1,a is the solution of (4.23) for all 0 ≤ a ≤ − ln(1 − ζ). To choose the parameter 0 ≤ a ≤ ln(1 − κ)/(1 − ζ) we calculate the derivative of C1 (a) as  T

 ω(t)|θt |2 1 − ςλ1,a (t) ς1 (t, λ1,a ) dt , C˙1 (a) = λ˙ 1,a 0

where

ς1 (t, λ) =

∂ ς (t) . ∂λ λ

(4.32)

(a). Therefore, by Lemma A.2, the We recall that λ˙ 1,a = 1/Φ˙ 1 (λ1,a ) with λ1,a = Φ−1 1 ˙ derivative C1 (a) > 0. This implies   ln(1 − κ) . max C1 (a) = C1 1−ζ 0≤a≤ln(1−κ)/(1−ζ)

So in (4.31) we take a = ln(1 − κ)/(1 − ζ). yt1,κ , vtκ )0≤t≤T . Recalling the notation yt1,κ = θt ςφ1 (κ) (t) from (3.31) we set ν 1,κ = ( Then, for ν ∈ U with VT = − ln(1 − κ), J(x, ν) ≤ J(x, ν κ ) = A(x) + Γ1 (κ) .

It is clear that (3.33) gives the optimal value for the parameter κ. To finish the proof we have to verify condition (4.30) for the strategy ν ∗ as defined

−1 in (3.35). To this end, with φ1 (κ) = Φ1 ln(1 − κ)/(1 − ζ) , we set ςt∗ = ςφ1 (γ1 ) (t)

and

ςt∗ . 2ς ∗ θt

χ1 (t) =

With this notation we can represent the function L1,t (ν ∗ ) in the following integral form Lt (ν ∗ ) = −



0

t

g1 (u) du −



t

0

vs∗ ds ,

where g1 (t) = ςt∗ |θt |2



fα (t)χ1 (t) −1 2



with

fα (t) = −f˙α (ς ∗ θt ) .

268

C. Kl¨uppelberg and S. Pergamenshchikov

Note that definition (3.26) and the inequalities (3.27) imply χ1 (t) ≥

ςT∗ 1 1 + φ1 (γ1 ) ≥ . ≥ ∗ 2ς0 θt 2θt (1 + T + φ1 (γ1 )) 2θT (1 + T )

Moreover, from the right inequality in (3.24) we obtain fα (t) =

1 ≥ |qα | + ς ∗ θt ≥ |qα | . ϕ (|qα | + ς ∗ θt )

Therefore, condition (3.15) implies that g1 (t) ≥ 0, i.e. L1,t (ν ∗ ) ≥ L1,T (ν ∗ ) = ln(1 − ζ) .

This concludes the proof of Theorem 3.11.



Proof of Corollary 3.13. Consider now the optimisation problem (3.33). To solve this we have to calculate the derivative of the integral in (3.32)  T

 1 ω(t) |θt |2 ςφ1 (κ) (t) − ςφ2 (κ) (t) dt . B1 (κ) := 2 1 0 We obtain B˙ 1 (κ) = φ˙ 1 (κ)



T

0

ω(t)|θt |2 (1 − ς(t)) ς1 (t, φ1 (κ)) dt ,

where ς1 (t, λ) is defined in (4.32). We recall the definition of φ1 in (3.30). Therefore, B˙ 1 (κ) = −

with  (λ) = B 1

T 0

1  B (Φ−1 (κ)) 1−κ 1 1

ω(t) (1 − ςλ (t)) ς1 (t, λ) |θt |2 dt . ˙ (λ) Φ 1

By Lemma A.2, ς1 (t, λ) ≤ 0, therefore, taking representation (A.5) into account, we obtain T ω(t) (1 − ςλ (t)) |ς1 (t, λ)| |θt |2 dt  (λ) = 0  B . 1 T 2 dt η(t, λ) |ς (t, λ)| |θ | 1 t 0 1 (λ) as in in (4.21), i.e. Moreover, with the lower bound (A.6) we can estimate B  (λ) ≤ B B 1 max .

The remainding proof is the same as the proof of Corollary 3.13.



Proof of Theorem 3.14. We have to verify condition (4.30) for the strategy ν ∗ = (yt∗ , vt∗ )0≤t≤T with yt∗ = θt and vt∗ = 1/ω(t) for t ∈ [0, T ]. First note that condition (3.36) implies L1,T (ν ∗ ) ≥ ln(1 − ζ) .

269

Optimal consumption and investment

Moreover, for 0 ≤ t ≤ T we can represent the function L1,t (ν ∗ ) as  t  t vs∗ ds , ls∗ ds − L1,t (ν ∗ ) = θ2t + fα (θt ) − Vt∗ = − 0

0

where lt∗ =



1 −1 ϕ(|qα | + θt )



|θt |2 .

Therefore, by the right inequality in (3.24) we obtain

lt∗ ≥ (|qα | + θt − 1) |θt |2 ≥ (|qα | − 1) |θt |2

and by condition (3.34) we get lt∗ > 0 for 0 ≤ t ≤ T , therefore, L1,t (ν ∗ ) is decreasing in t, i.e. for 0 < t ≤ T L1,t (ν ∗ ) ≥ L1,T (ν ∗ ) ≥ ln(1 − ζ) .

This concludes the proof of Theorem 3.14.

5



Appendix

A.1 Results for Section 3.1 Proof of Lemma 3.2. Since G(u, λ) is for fixed λ decreasing to 0 in u, equation G(u, λ) = 1 has a positive solution if and only if G(0, λ) ≥ 1. But this is equivalent to k2 + 2λk1 − λ2 (|qα |2 − θ2T ) ≥ 0, which gives the upper bound for λ. Moreover, taking into account that G(0, λmax ) = 1 we obtain from definition (3.5) that ρ(λmax ) = 0.  Next we prove some properties of Φ and τλ . Lemma A.1. The function τλ (t) is continuously differentiable in λ for 0 ≤ λ ≤ λmax , and the partial derivative (4.18) is negative for all 0 ≤ t ≤ T . Moreover, under the ˙ < 0 for 0 ≤ λ ≤ λmax . condition (3.15) the derivative Φ(λ) Proof. First note that τ1 (t, λ) = −|qα |

(ρ(λ)ω(t) − λρ(λ)(ω(t) ˙ + λ)) (λ|qα | + ρ(λ)(ω(t) + λ))

2

.

By the definition of ρ(λ) in (3.5) we get G(ρ(λ), λ) = 1 for 0 ≤ λ ≤ λmax . Therefore, ρ(λ) ˙ =−

with G1 (u, λ) =

∂G(u, λ) ∂u

G2 (ρ(λ), λ) G1 (ρ(λ), λ)

and

G2 (u, λ) =

∂G(u, λ) . ∂λ

270

C. Kl¨uppelberg and S. Pergamenshchikov

The definition of G in (3.4) implies that G1 (u, λ) = −2



and G2 (u, λ) = −2|qα |

T

0



(ω(t) + λ)3 |θ |2 dt (λ|qα | + u(ω(t) + λ))3 t T

0

ω(t)(ω(t) + λ) |θ |2 dt . (λ|qα | + u(ω(t) + λ))3 t

Therefore, for all 0 ≤ λ ≤ λmax and 0 ≤ t ≤ T ρ(λ) ˙ 0 for 0 ≤ t ≤ T and 0 ≤ λ ≤ λmax , i.e. Φ(λ)

Proof of Lemma 3.3. Taking into account that τ0 (·) ≡ 1 we get 1 Φ(0) = |qα |θT − θ2T . 2

Moreover, condition (3.9) implies Φ(0) > − ln(1 − ζ). The second part of Lemma 3.2, the definitions (3.6) and (3.8) imply immediately that Φ(λmax ) = 0. Therefore, in view of Lemma A.1 the inverse Φ−1 (a) exists for 0 ≤ a ≤ − ln(1 − ζ). Moreover, 0 ≤ Φ−1 (a) < λmax , for 0 < a ≤ − ln(1 − ζ) and Φ−1 (0) = λmax . 

A.2 Results for Section 3.2 We present some properties of Φ1 (λ) and ςλ . Lemma A.2. The function ςλ (t) is continuously differentiable in λ for all 0 ≤ λ ≤ λ1max and the partial derivative (4.32) is negative for all 0 ≤ t ≤ T . Moreover, under condition (3.15) the derivative Φ˙ 1 (λ) < 0 for 0 ≤ λ ≤ λ1max . Proof. First note that ς1 (t, λ) = −

(ω(t) + λ)ψα (λ)ρ1 ((ω(t) + λ)ρ1 + ψα (λ))

2



 ω(t) − λρ˙1 Ωα (ρ1 ) ω(t) + λ

(A.2)

271

Optimal consumption and investment

where ρ1 = ρ1 (λ) is defined in (3.25) and ψα (ρ1 ) − ρ1 ψ˙ α (ρ1 ) . ρ1 ψα (ρ1 )

Ωα (ρ1 ) =

Note that we can represent the numerator as ϕ(y) (1 + y(y − |qα |)) − (y − |qα |) ϕ2 (y)

ψα (ρ1 ) − ρ1 ψ˙ α (ρ1 ) =

with y = |qα | + ρ1 . Therefore, the left inequality in (3.24) implies   1 1 − 3 − (y − |qα |) ϕ(y) (1 + y(y − |qα |)) − (y − |qα |) ≥ (1 + y(y − |qα |)) y y qα2 − 1 y|qα | − 1 ≥ , y3 y3

=

and by condition (3.34) we obtain Ωα (ρ1 ) ≥ 0

for ρ1 ≥ 0 .

Let us now calculate ρ˙1 . To this end note that definition (3.25) implies G1 (ρ1 (λ), λ) = 1

for all

0 ≤ λ ≤ λ1max .

Therefore, ρ˙1 (λ) = −

G1,2 (ρ1 (λ), λ) G1,1 (ρ1 (λ), λ)

with

∂G1 (u, λ) and ∂u The definition of G1 in (3.22) implies that G1,1 (u, λ) =

G1,1 (u, λ) = −2



0

T

G1,2 (u, λ) =

∂G1 (u, λ) . ∂λ

(ω(t) + λ)2 (λ(ψ˙ α (u) + 1) + ω(t)) |θt |2 dt (λ ψα (u) + u(ω(t) + λ))3

and G1,2 (u, λ) = −2ψα (u)



0

T

ω(t) (ω(t) + λ) |θ |2 dt . (λ ψα (u) + u(ω(t) + λ))3 t

Taking into account that 1 − (|qα | + u)ϕ(|qα | + u) , ψ˙ α (u) + 1 = ϕ2 (|qα | + u)

we obtain from the right inequality in (3.24) ψ˙ α (x) + 1 ≥ 0

for all

x ≥ 0.

(A.3)

(A.4)

272

C. Kl¨uppelberg and S. Pergamenshchikov

Therefore, for all 0 ≤ λ ≤ λ1max and 0 ≤ t ≤ T ρ˙1 (λ) < 0

and

ς1 (t, λ) < 0 .

Let us calculate now the derivative of Φ1 . We obtain Φ˙ 1 (λ) =



T

0

where η(t, λ) = −

η(t, λ) ς1 (t, λ)θ2T dt ,

(A.5)

ςλ (t) f˙α (aλ ) ςλ (t) 1 −1= −1 aλ ϕ (|qα | + aλ ) aλ

with aλ = ςλ θT . In view of the inequalities (3.27) we obtain ςλ (t) ςλ (t) ςλ (T ) 1 = ≥ ≥ . aλ ςλ θT ςλ (0)θT (T + 1)θT

Therefore, by the right inequality in (3.24) and the condition (3.15) η(t, λ) ≥

|qα | |qα | + aλ −1 ≥ −1>0 (T + 1)θT (T + 1)θT

for 0 ≤ t ≤ T and 0 ≤ λ ≤ λ1max .

(A.6) 

Proof of Lemma 3.10. Similarly to the proof of Lemma 3.3 we observe that condition (3.29) implies Φ1 (0) = −θT − fα (θT ) > − ln(1 − ζ) . Moreover, Φ1 (λ1max ) = 0 since ρ1 (λ1max ) = 0. This means that φ1 (0) = λ1max . In view of Lemma A.2 Φ1 (·) is strictly decreasing on [0, λ1max ]. Therefore, Φ−1 1 exists for all 0 ≤ a ≤ − ln(1 − ζ) such that 0 ≤ φ1 (a) < λ1max

and φ1 (0) = λ1max .

for 0 < a ≤ − ln(1 − ζ) 

Bibliography [1] Artzner, P., Delbaen, F., Eber, J.-M. and Heath, D.: Coherent measures of risk. Math. Finance. 9, 203–228 (1999) [2] Basak, S. and Shapiro, A. (1999) Value at Risk based risk management: optimal policies and asset prices. Review of Financial Studies. 14(2), 371–405 (1999) [3] Cuoco, D., He, H. and Isaenko, S. (2005) Optimal dynamic trading strategies with risk limits. Working paper [4] Dowd, K. (1998) Beyond Value at Risk: the New Science of Risk Management. Wiley, London.

Optimal consumption and investment

273

[5] Emmer, S., Kl¨uppelberg, C. and Korn, T. (2001) Optimal portfolios with bounded Capital-atRisk. Math. Finance 11, 365–384. [6] Gabih, A., Grecksch, W. and Wunderlich, R. (2005) Dynamic portfolio optimisation with bounded shortfall risks. Stoch. Anal. Appl. 23, 579–594 [7] Jorion, P. (2001) Value at Risk. McGraw-Hill, New York. [8] Karatzas, I. and Shreve, S.E. (1988) Brownian Motion and Stochastic Calculus. Springer, Berlin. [9] Karatzas, I. and Shreve, S.E. (1998) Methods of Mathematical Finance. Springer, Berlin. [10] Kl¨uppelberg, C. and Pergamenshchikov, S. (2008) Optimal consumption and investment with bounded downside risk for power utility functions. Invited book contribution. Available at www-m4.ma.tum.de/Papers/ [11] Korn, R. (1997) Optimal Portfolios. World Scientific, Singapore. [12] Yiu, K.F.C. (2004) Optimal portfolios under a value-at-risk constrain. J. Econom. Dynam. Control 28 (7), 1317–1334.

Author information Claudia Kl¨uppelberg, Center for Mathematical Sciences, Technische Universit¨at M¨unchen, 85747 Garching, Germany. Email: [email protected] Serguei Pergamenshchikov, Laboratoire de Math´ematiques Rapha¨el Salem, UMR 6085 CNRS, Universit´e de Rouen, Avenue de l’Universit´e, BP.12, 76801 Saint Etienne du Rouvray, France. Email: [email protected]

Radon Series Comp. Appl. Math 8, 275–301

c de Gruyter 2009 

A review of some recent results on Malliavin Calculus and its applications Kohatsu-Higa Arturo and Yasuda Kazuhiro

Abstract. We review some of the recent developments of Malliavin Calculus and its applications with some focus in Finance. In particular, we discuss the finite difference methods which lead in a generalised form to kernel density estimation methods. We compare this method in relation with the Malliavin Calculus method and in particular with the Malliavin–Thalmaier formula. We finish by giving a short review of other developments in the area. Key words. Multidimensional density function, Malliavin calculus, the Malliavin–Thalmaier formula, greeks. AMS classification. 60H07, 60H35, 60J60, 62G07, 65C05

1

Brief introduction to Malliavin Calculus

Let (Ω, F , P ; Ft ) be a filtered probability space. Here {Ft } satisfies the usual conditions. That is, it is right-continuous and F0 contains all the P -negligible events in F . Suppose that H is a real separable Hilbert space whose norm and inner product are denoted by  · H and ·, ·H respectively (in this article, we usually have H = L2 ([0, T ], Rd )). Let W (h) denote a Wiener process on H . We denote by Cp∞ (Rn ) the set of all infinitely differentiable functions f : Rn → R such that f and all of its partial derivatives have at most polynomial growth. Let S denote the class of smooth random variables of the form F = f (W (h1 ), . . . , W (hn )),

(1.1)

where f ∈ Cp∞ (Rn ), h1 , . . . , hn ∈ H , and n ≥ 1. If F has the form (1.1) we define its derivative DF as the H -valued random variable given by DF =

n  ∂f (W (h1 ), . . . , W (hn ))hi . ∂x i i=1

We will denote the domain of D in Lp (Ω) by D1,p . This space is the closure of the class of smooth random variables S with respect to the norm    p1   F 1,p = E |F |p + E DF pH .

We can define the iteration of the operator D in such a way that for a smooth random variable F , the derivative Dk F is a random variable with values on H ⊗k . Then for

276

A. Kohatsu-Higa and K. Yasuda

every p ≥ 1 and k ∈ N we introduce a seminorm on S defined by k      E Dj F pH ⊗j . F pk,p = E |F |p + j=1

For any real p ≥ 1 and any natural number k ≥ 0, we will denote by Dk,p the completion of the family of smooth random variables S with respect to the norm  · k,p . Note that Dj,p ⊂ Dk,q if j ≥ k and p ≥ q . Consider the intersection   D∞ = Dk,p . p≥1 k≥1

Then D∞ is a complete, countably normed, metric space. We will denote by D∗ the adjoint of the operator D as an unbounded operator from L2 (Ω) into L2 (Ω; H). That is, the domain of D∗ , denoted by Dom(D∗ ), is the set of H -valued square integrable random variables u such that |E[DF, uH ]| ≤ cF 2 ,

for all F ∈ D1,2 , where c is some constant depending on u (here  · 2 denotes the L2 (Ω)-norm). Suppose that F = (F1 , . . . , Fd ) is a random vector whose components belong to the space D1,1 . We associate with F the following random symmetric nonnegative definite matrix:  γF = DFi , DFj H 1≤i,j≤d . This matrix is called the Malliavin covariance matrix of the random vector F .

Definition 1.1. We will say that the random vector F = (F1 , . . . , Fd ) ∈ (D∞ )d is nondegenerate if the matrix γF is invertible a.s. and  (det γF )−1 ∈ Lp (Ω). (1.2) p≥1

In what follows, we always assume G ∈ D∞ , F = (F1 , . . . , Fd ) ∈ (D∞ )d is ddimensional nondegenerate random variable. Therefore the integration by parts formulas will always hold (see Nualart [39], Proposition 2.1.4, p. 100 or Sanz [47], Proposition 5.4, p. 67 and formula (1.3) below). For other references, see [49].

1.1 Three methods to compute densities of random variables on Wiener space 1.1.1 The classical integration by parts formula Let F = (F1 , . . . , Fd ) be a nondegenerate random vector and G a smooth random variable. We denote by pF,G = E [G/F = x] pF,1 (x), where pF,1 (x) ≡ pF (x) denotes

A review of Malliavin Calculus and its applications

277

the density of F . Then there exists a random variable H(1,2,...,d) (F ; 1) ∈ Lp (Ω) for any p > 2 such that

d 1[0,∞) (Fi − x ˆi )H(1,2,...,d) (F ; G) , (1.3) pF,G (ˆ x) = E i=1

where 1[0,∞) (x) denotes the indicator function. In fact, for i = 2, . . . , d, H(1) (F ; 1) :=

d 

 δ G(γF−1 )1j DFj , j=1

H(1,...,i) (F ; 1) :=

d 

 δ H(1,...,i−1) (F ; G)(γF−1 )ij DFj .

(1.4)

j=1

Here δ denotes the adjoint operator of the Malliavin derivative operator D and γF the Malliavin covariance matrix of F . In particular, we remark that δ is an extension of the Itˆo integral that also integrates non-adapted processes and is usually called the Skorohod integral. The definition of H(1,...,i) (F ; 1) in iterative form in (1.4) shows that in order to compute this expression one requires the calculation of i-iterated stochastic integrals. 1.1.2 The finite difference or kernel density method The finite difference (FD) method consists in computing the approximate derivative of any distribution function in order to obtain the density function. This introduces the choice of a parameter in order to compute the approximate derivative. This is a particular case of the kernel density estimation method. In fact, this method requires the choice of a kernel function K and a sufficiently small h > 0 (usually called the bandwidth or the tuning parameter) which gives as an approximation   j N ˆ 1  1 F −x Gj K (1.5) N j=1 hd h where (F j , Gj ), j = 1, . . . , N denotes N independent copies of (F, G) obtained by simulation. First, we remark  that the classical finite difference method is obtained with the choice K(x) = 2−d di=1 1[−1,1]d (x). The theory of kernel density estimation deals with the statistical problem of given some data (F j , Gj ), j = 1, . . . , N what is the “optimal” way of choosing the kernel K and the tunning parameter h. The theory of kernel density estimation is quite vast and we are not able to give a fair account of the theory but it seems that the multidimensional case d > 1 is less well understood than the one dimensional case. In the multidimensional case, one may use multiplicative type of kernels. The order of the bias is of order h2 if the kernel is symmetric and regular in some sense (say  −1 Gaussian type kernels). The variance diverges in the order of N hd . For more information on this method, see e.g. [48] or [55].

278

A. Kohatsu-Higa and K. Yasuda

1.1.3 Malliavin–Thalmaier representation of multidimensional density functions We represent the delta function by δ0 (x) = ∆Qd (x) (x ∈ Rd , d ≥ 2),

in the following sense. If f is a smooth function then the solution of the Poisson equation ∆u = f is given by the convolution Qd ∗ f (see e.g. [20]). Definition 1.2. Given the Rd -valued random vector F and the R-valued random variable G, a multi-index α and a power p ≥ 1 we say that there is an integration by parts formula (IBP formula) in Malliavin sense if there exists a random variable Hα (F ; G) ∈ Lp (Ω) such that   α   ∂ |α| f (F )G = E f (F )H (F ; G) for all f ∈ C0 (Rd ). (1.6) IPα,p (F, G) : E α ∂xα

Related to the Malliavin–Thalmaier formula [38], Bally and Caramellino [8], have obtained the following result.

Proposition 1.3 (Bally, Caramellino [8]). Suppose that for some p > 1  p p   p−1  p−1    ∂   sup E  < ∞ for all R > 0, a ∈ Rd . Qd (F − a) + Qd (F − a) ∂xi |a|≤R (1.7) (i) If IPi,p (F ; G) (i = 1, . . . , d) holds then the law of F is absolutely continuous with respect to the Lebesgue measure on Rd and the density pF is represented as

d  ∂ Qd (F − x)H(i) (F ; G) . (1.8) pF (x) = E ∂xi i=1 (ii) If IPα,p (F ; G) holds for every multi-index α with |α| ≤ m + 1 then pF ∈ C m (Rd ) and for every multi-index ρ with |ρ| ≤ m one has

d  ∂ ∂ρ pF (x) = E Qd (F − x)H(i,ρ) (F ; G) . ∂xρ ∂xi i=1 The heuristic idea of the above proof is to use the integration by parts formula in Malliavin sense as follows  2  d    ∂ E Q (F − x)G pF (x) = E ∆Qd (F − x)G = d ∂x2i i=1

d  ∂ Qd (F − x)H(i) (F ; G) . =E ∂xi i=1 Next we impose conditions to assure that the assumptions of Proposition 1.3 are satisfied.

A review of Malliavin Calculus and its applications

279

Corollary 1.4. If G ∈ D∞ , F = (F1 , . . . , Fd ) ∈ (D∞ )d is a nondegenerate random vector, then the probability density function of the random vector F is d  ∂ pF (x) = E Qd (F − x)H(i) (F ; G) . ∂xi i=1

1.1.4 Theoretical comparison of the methods The method of kernel density estimation is the oldest method of the three methods introduced above and the one that has been used by practitioners for a long time. The method is easy to implement and various standard recommendations are available on the choice of kernels K and the tuning parameter h. The classical integration by parts formula (1.3) attracted the attention of practitioners as it allows in principle the calculation of density functions using Monte Carlo simulations without any bias (see e.g. [22] and [23]). In comparison with kernel density estimation methods this method does not require any tunning as there is no bias. In exchange, the estimator obtained by integration by parts involve in general d iterated stochastic integrals and its calculation is not available for all models. Furthermore, the estimator obtained by integration by parts has a constant variance which tends to be big and one needs to use variance reduction methods. In the one dimensional case a thorough comparison between the classical integration by parts formula and the kernel density estimation method can be found in [29]. When the dimension is bigger than one, one can try to compute the d-iterated Skorohod integrals but this becomes cumbersome as d increases. Furthermore as stochastic integrals have to be approximated by their Riemann sum counterparts the error increases. Nevertheless one can still write the system of linear equations satisfied by the higher order derivatives and try to use this structure in order to improve the system simulation (see e.g. [17]). Another alternative that is in between the classical integration by parts and the kernel density estimation method is the Malliavin–Thalmaier formula (1.8). The significance of the Malliavin–Thalmaier formula is clear. Instead of the diterated stochastic integrals which appear in (1.3) we have instead only one stochastic integral. 
The problem with the above formula is that the expectation is well defined in ∂ Qd (F − x) ∈ Lp (Ω) for any p < 2 and H(i) (F ; G) ∈ the sense of duality. That is, ∂x i q −1 −1 L (Ω) for p + q = 1. Therefore the variance of the Malliavin–Thalmaier estimator is infinite. In fact, we have for some constant Ad that ∂ xi Qd (x) := Ad d . ∂xi |x|

Therefore, we have to resort again to kernel density estimation methods. We will see later that the order of degeneration of these estimators is milder in comparison with estimators of the type (1.5).

280

2

A. Kohatsu-Higa and K. Yasuda

Error estimation for the Malliavin–Thalmaier formula

In order to avoid the explosion of the variance of the Malliavin–Thalmaier estimator, we have proposed the use of a kernel density type alternative to this estimator, using instead of Q, we define xi ∂ h Q (x) := Ad d , ∂xi d |x|h where | · |h is defined as

  d  x2i + h (h > 0, x ∈ Rd ). |x|h :=  i=1

Then we define the approximation to the density function of F as;

d  ∂ phF,G (x) := E Qhd (F − x)H(i) (F ; G) . ∂x i i=1

(2.1)

Note that clearly, Qd = Q0d . We now give the Central Limit Theorem associated with the proposed approximation. Theorem 2.1. Let Z be a random variable with standard normal distribution and let (F (j) , G(j) ) ∈ (D∞ )d ×D∞ , j ∈ N be a sequence of independent identically distributed random vectors.   2   C and N = h2Cln 1 for some positive constant C (i) When d = 2, set n = h ln 1 h h fixed throughout. Then as h → 0 ⎞ ⎛ 2 N   1 ∂ Qh (F (j) − x ˆ)H(i) (F ; G)(j) − pF,G (ˆ x)⎠ =⇒ C3xˆ Z−C1xˆ C, n⎝ N j=1 i=1 ∂xi 2 (2.2) where H(i) (F ; G)(j) , i = 1, . . . , d, j = 1, . . . , N , denotes the weight obtained in the j -th independent simulation (the same that generates F (j) and G(j) ). " !   C C2 (ii) When d ≥ 3, set n = h ln for some positive constant and N = 1 d +1 1 2 h

h2

(ln

h)

C fixed throughout. Then as h → 0 ⎞ ⎛ d N   1 ∂ h (j) (j) Q (F − x ˆ)H(i) (F ; G) − pF,G (ˆ x)⎠ =⇒ n⎝ N j=1 i=1 ∂xi d

C4xˆ Z−C1xˆ C.

(2.3)

281

A review of Malliavin Calculus and its applications

This result clearly also gives the asymptotic bias and variance of the estimators. In fact the bias is of the order pF,G (ˆ x) − phF,G (ˆ x) = C1xˆ h ln

1 + C2xˆ h + o(h), h

(2.4)

Note that this bias is almost of the same order as in the kernel density estimation method. The asymptotic L2 (Ω)-error is of the order for d = 2, ⎡% &2 ⎤ 2  ∂ h 1 x ∈ Rd ), E⎣ Q2 (F − x ˆ)H(i) (F ; G) − pF,G (ˆ x) ⎦ = C3xˆ ln + O(1) (ˆ ∂x h i i=1

and for d ≥ 3, ⎡% &2 ⎤ d  ∂ h Qd (F − x ˆ)H(i) (F ; G) − pF,G (ˆ x) ⎦ E⎣ ∂x i i=1 = C4xˆ

h

1 d 2 −1

+o



1 h

d 2 −1



(ˆ x ∈ Rd ).

Cixˆ

All the above constants have explicit expressions that depend on the density itself. Note that the order of explosion of the L2 (Ω)-error is reduced in comparison with the classical kernel density estimation methods.

3

Financial application

When computing the greek of any option, the instability of the calculation comes from the irregularities of the payoff function. In Fourni´e et al. it was shown how to deal with the problem. One essentially divides the payoff function in two parts F = F1h + F2h

The first function F1h is a smooth function which depends on a smoothing parameter h and the second localises the irregularity of the payoff. In the second we apply the previous integration by parts formula and in the first one uses a direct simulation method. The question on the choice of the parameter h remains although in Fourni´e et al. the authors seem to suggest that this is not an important issue. Nevertheless note that this is also a tunning problem. In financial applications, one could use the classical integration by parts as follows. ) ∂ ∂ E[f (F µ )] = f (x) pF (µ, x)dx ∂µ ∂µ Rd ⎡ ⎤   d  ∂ F µ,j ⎦ . = E ⎣f (F µ ) H(j) F µ , ∂µ j=1

282

A. Kohatsu-Higa and K. Yasuda

The classical application to Greek calculations is for the case when f involves a step function. Another possibility hinted by the Malliavin–Thalmaier formula is f (x) =

)

f (y)

d  ∂ 2 Qd i=1

∂x2i

(y − x)dy

Therefore one can use any of the following alternative expressions (under certain regularity conditions, for details see [32])    d µ,j  ∂ µ µ ∂F µ E[f (F )] = E f (F )H(j) F , ∂µ ∂µ j=1 =

d 

i,j=1

E

)

f (y)

  ∂F µ,j ∂ 2 Qd . (y − F µ )dyH(i) F µ , ∂xi ∂xj ∂µ

(3.1)

In some cases, the above representation for the Greeks gives a variance reduction effect. In fact, consider the Delta of a digital put option on two assets,
\[
\frac{\partial}{\partial S_0^1}\,e^{-rT}E^Q\big[\mathbf{1}\{0\le S_T^1\le K_1\}\,\mathbf{1}\{0\le S_T^2\le K_2\}\big].
\]
The method of Fournié et al. [22] without localisation gives the expression
\[
e^{-rT}E^Q\bigg[\mathbf{1}\{0\le S_T^1\le K_1\}\,\mathbf{1}\{0\le S_T^2\le K_2\}\,H_{(1)}\Big(S_T^1,S_T^2;\frac{\partial S_T^1}{\partial S_0^1}\Big)\bigg]. \tag{3.2}
\]
On the other hand, the new method gives
\[
e^{-rT}\bigg\{E^Q\bigg[g_1(S_T^1,S_T^2)\,H_{(1)}\Big(S_T^1,S_T^2;\frac{\partial S_T^1}{\partial S_0^1}\Big)\bigg]
+ E^Q\bigg[g_2(S_T^1,S_T^2)\,H_{(2)}\Big(S_T^1,S_T^2;\frac{\partial S_T^1}{\partial S_0^1}\Big)\bigg]\bigg\}, \tag{3.3}
\]
where we can explicitly calculate the integral parts of (3.1) to obtain
\[
g_1(x,y) := \frac{1}{2\pi}\bigg\{\arctan\frac{y}{x} - \arctan\frac{y-K_2}{x} - \arctan\frac{y}{x-K_1} + \arctan\frac{y-K_2}{x-K_1}\bigg\},
\]
\[
g_2(x,y) := \frac{1}{4\pi}\ln\frac{(x^2+y^2)\big((x-K_1)^2+(y-K_2)^2\big)}{\big((x-K_1)^2+y^2\big)\big(x^2+(y-K_2)^2\big)}.
\]

If we assume that the assets follow the Black–Scholes model, then (3.3) gives a variance reduction effect compared with (3.2). In Kohatsu-Higa and Yasuda [31] one can find simulation results in which the variance of (3.3) is about a third of the variance of (3.2). This issue needs to be studied further.
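The effect of localisation can be reproduced in a one-dimensional analogue (a sketch, not the two-asset computation above; the smoothing width and parameters are our own choices): split the digital payoff into a smooth part differentiated pathwise and a localised remainder handled by integration by parts, and compare estimator variances.

```python
import math, random

def delta_estimators(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                     width=10.0, n=100_000, seed=7):
    """Compare the plain Malliavin-weight Delta estimator for a digital call
    with a localised version: f1 (smooth) is differentiated pathwise, and the
    remainder f2 = 1{x > K} - f1 is handled by integration by parts."""
    rng = random.Random(seed)
    norm_cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    norm_pdf = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    disc = math.exp(-r * t)
    plain, local = [], []
    for _ in range(n):
        w = rng.gauss(0.0, math.sqrt(t))
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * w)
        weight = w / (s0 * sigma * t)                  # Malliavin weight for Delta
        ind = 1.0 if st > k else 0.0
        f1 = norm_cdf((st - k) / width)                # smooth approximation of 1{x > K}
        df1 = norm_pdf((st - k) / width) / width       # its derivative
        plain.append(disc * ind * weight)
        # pathwise term uses dS_T/dS_0 = S_T/S_0; the localised remainder keeps the weight
        local.append(disc * (df1 * st / s0 + (ind - f1) * weight))
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs: sum((x - mean(xs)) ** 2 for x in xs) / (len(xs) - 1)
    return mean(plain), var(plain), mean(local), var(local)
```

Both estimators are unbiased for the same Delta, but the localised one concentrates the integration by parts around the discontinuity and so has a smaller variance.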

A review of Malliavin Calculus and its applications

4 Estimation of the optimal value of h

4.1 About optimal h

In this section, we give an ad hoc method to compute a quasi-optimal value of h, using ideas similar to those of kernel density estimation together with the central limit theorem obtained in Theorem 2.1. We consider the L²(Ω) error of approximation
\[
E\bigg[\bigg\{\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{d}\frac{\partial Q_d^h}{\partial x_i}\big(F^{(j)}-\hat{x}\big)H_{(i)}(F;1)^{(j)} - p_F(\hat{x})\bigg\}^2\bigg]. \tag{4.1}
\]
From Theorem 2.1 and the comments following it, we have for d = 2
\[
(4.1) = \frac{1}{N}\Big[C_3^{\hat{x}}\ln\frac{1}{h}+O(1)\Big]
+ \Big(1-\frac{1}{N}\Big)\Big[C_1^{\hat{x}}h\ln\frac{1}{h}+C_2^{\hat{x}}h+o(h)\Big]^2
+ \frac{2}{N}\Big[C_1^{\hat{x}}h\ln\frac{1}{h}+C_2^{\hat{x}}h+o(h)\Big]p_F(\hat{x}) - \frac{1}{N}p_F(\hat{x})^2,
\]

and if d ≥ 3,
\[
(4.1) = \frac{1}{N}\Big[C_4^{\hat{x}}\frac{1}{h^{d/2-1}}+o\Big(\frac{1}{h^{d/2-1}}\Big)\Big]
+ \Big(1-\frac{1}{N}\Big)\Big[C_1^{\hat{x}}h\ln\frac{1}{h}+C_2^{\hat{x}}h+o(h)\Big]^2
+ \frac{2}{N}\Big[C_1^{\hat{x}}h\ln\frac{1}{h}+C_2^{\hat{x}}h+o(h)\Big]p_F(\hat{x}) - \frac{1}{N}p_F(\hat{x})^2.
\]
Then we select the leading terms from the above equations to find a trade-off relation between the small bias and the exploding L²-error:
\[
g(h) := \begin{cases}
\dfrac{1}{N}\,C_3^{\hat{x}}\ln\dfrac{1}{h} + \big(C_1^{\hat{x}}\big)^2 h^2, & d = 2,\\[2mm]
\dfrac{1}{N}\,C_4^{\hat{x}}\dfrac{1}{h^{d/2-1}} + \big(C_1^{\hat{x}}\big)^2 h^2, & d \ge 3.
\end{cases}
\]

Note that the intervention of the sample size becomes crucial in the above equation: the right choice of N will make the variance of the estimator converge to 0. By minimising g(h) we finally obtain the following asymptotically optimal value of h:
\[
h = \begin{cases}
\bigg(\dfrac{C_3^{\hat{x}}}{2N(C_1^{\hat{x}})^2}\bigg)^{1/2}, & d = 2,\\[2mm]
\bigg(\dfrac{(d-2)\,C_4^{\hat{x}}}{4N(C_1^{\hat{x}})^2}\bigg)^{2/(2+d)}, & d \ge 3.
\end{cases} \tag{4.2}
\]
The problem with this theoretical formula is that it requires knowledge of the constants C_i^{\hat{x}}.
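As a quick numerical sanity check (a sketch; the constant values below are arbitrary placeholders, not values from the text), one can verify that the closed form (4.2) indeed minimises the leading-order trade-off g(h):

```python
import math

def g(h, N, d, C1, C3=None, C4=None):
    """Leading-order bias/variance trade-off g(h) from the text."""
    if d == 2:
        return C3 * math.log(1.0 / h) / N + (C1 * h) ** 2
    return C4 / (N * h ** (d / 2.0 - 1.0)) + (C1 * h) ** 2

def h_opt(N, d, C1, C3=None, C4=None):
    """Closed-form minimiser (4.2)."""
    if d == 2:
        return math.sqrt(C3 / (2.0 * N * C1 ** 2))
    return ((d - 2.0) * C4 / (4.0 * N * C1 ** 2)) ** (2.0 / (2.0 + d))

def grid_min(N, d, C1, C3=None, C4=None):
    """Brute-force minimiser of g over a log-spaced grid from 1e-6 to 1."""
    hs = [10.0 ** (-6.0 + 6.0 * k / 4000.0) for k in range(4001)]
    return min(hs, key=lambda h: g(h, N, d, C1, C3, C4))
```

Setting g'(h) = 0 reproduces (4.2): for d = 2, h² = C_3/(2N C_1²); for d ≥ 3, h^{d/2+1} = (d−2)C_4/(4N C_1²).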


4.2 Calculation of the constants C_1^{\hat{x}}, C_3^{\hat{x}} and C_4^{\hat{x}}

Here we give a heuristic idea of how to obtain the constants C_i^{\hat{x}} for i = 1, 3, 4 using a pilot simulation. From our CLT result, we have
\[
n\bigg(\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{d}\frac{\partial Q_d^h}{\partial x_i}\big(F^{(j)}-\hat{x}\big)H_{(i)}(F;1)^{(j)} - p_F(\hat{x})\bigg)
\Longrightarrow \sqrt{C_a^{\hat{x}}}\,Z - C_1^{\hat{x}}C, \tag{4.3}
\]
where C_a^{\hat{x}} = C_3^{\hat{x}} if d = 2 and C_a^{\hat{x}} = C_4^{\hat{x}} if d ≥ 3. Let Y_{\hat{x}}^{h,N} be the left-hand side of (4.3). We therefore consider the approximation
\[
Y_{\hat{x}}^{h,N} \approx \sqrt{C_a^{\hat{x}}}\,Z - C_1^{\hat{x}}C.
\]

Then, using that Z follows a standard normal distribution, we have the approximations
\[
E\big[Y_{\hat{x}}^{h,N}\big] \approx E\big[\sqrt{C_a^{\hat{x}}}\,Z - C_1^{\hat{x}}C\big] = -C_1^{\hat{x}}C, \tag{4.4}
\]
\[
E\big[(Y_{\hat{x}}^{h,N})^2\big] \approx E\big[\big(\sqrt{C_a^{\hat{x}}}\,Z - C_1^{\hat{x}}C\big)^2\big] = C_a^{\hat{x}} + \big(C_1^{\hat{x}}C\big)^2. \tag{4.5}
\]

The computation of the constants is done by first fixing the values of h and N in test simulations; this gives the values of the constants C and n according to the relation in the CLT (Theorem 2.1). We use these test Monte Carlo simulations in order to approximate the mean and the variance in (4.4) and (4.5). In practice, one obtains a stable result for C_a^{\hat{x}}, but the result for C_1^{\hat{x}} is unstable if one uses all the choices of h and N in the pilot simulations. This is due to the fact that when the value of C becomes too small, the above procedure is not a good way to obtain C_1^{\hat{x}}, since the error terms become bigger than the quantity to be estimated. To obtain a stable estimation method, besides deleting (or avoiding) the simulations with small values of C, we additionally consider the following approximation procedure for C_1^{\hat{x}}:
\[
\frac{1}{M}\sum_{k=1}^{M}Y_{\hat{x},(k)}^{h,N} \approx \sqrt{C_a^{\hat{x}}}\,\frac{1}{\sqrt{M}}\tilde{Z} - C_1^{\hat{x}}C^{h,N},
\]
where \tilde{Z} is a random variable with the standard normal distribution. Now, if we run this test simulation L times using different values of h, then we have
\[
\frac{1}{L}\sum_{l=1}^{L}\frac{1}{M}\sum_{k=1}^{M}Y_{\hat{x},(k)}^{h(l),N}
\approx -\frac{1}{L}\sum_{l=1}^{L}C_1^{\hat{x}}C^{h(l),N}
= -C_1^{\hat{x}}\,\frac{1}{L}\sum_{l=1}^{L}C^{h(l),N}.
\]
Therefore we obtain C_1^{\hat{x}} as
\[
C_1^{\hat{x}} \approx -\frac{\displaystyle\sum_{l=1}^{L}\frac{1}{M}\sum_{k=1}^{M}Y_{\hat{x},(k)}^{h(l),N}}{\displaystyle\sum_{l=1}^{L}C^{h(l),N}}.
\]
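A toy version of this pilot procedure can be sketched as follows (synthetic data: we draw the run averages directly from the limiting approximation instead of simulating the density estimator, and the constants and the form C(h) = h ln(1/h) are our own assumptions):

```python
import math, random

def estimate_c1(hs, C1_true, Ca_true, M=10_000, seed=3):
    """Recover C1 from L pilot runs at bandwidths hs, where each run average
    behaves like (1/M) sum_k Y_(k) ~ sqrt(Ca/M) * Z - C1 * C(h)."""
    rng = random.Random(seed)
    num, den = 0.0, 0.0
    for h in hs:
        c_h = h * math.log(1.0 / h)              # assumed form of the CLT constant C
        run_avg = (math.sqrt(Ca_true / M) * rng.gauss(0.0, 1.0)
                   - C1_true * c_h)              # synthetic (1/M) sum_k Y_(k)
        num += run_avg
        den += c_h
    return -num / den                            # the averaged-runs estimator of C1
```

Averaging over several bandwidths damps the Gaussian noise while the deterministic part accumulates, which is exactly why the ratio above stabilises.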


Remark 4.1. Once we obtain the constant C_1^{\hat{x}}, we can modify the approximation as follows:
\[
\dot{p}_F^h(\hat{x}) = p_F^h(\hat{x}) + C_1^{\hat{x}}h\ln\frac{1}{h}.
\]
Then, from the bias error (2.4), we can improve the bias of the error:
\[
p_F(\hat{x}) - \dot{p}_F^h(\hat{x}) = p_F(\hat{x}) - p_F^h(\hat{x}) - C_1^{\hat{x}}h\ln\frac{1}{h} = C_2^{\hat{x}}h + o(h).
\]

5 Numerical results

In this section, we give a short report on some simulation results for the following models: the multidimensional Black–Scholes model and two factor models in finance, the Heston model [24] and the double volatility Heston model [21], [25].

5.1 The multidimensional Black–Scholes model

We consider the following d-dimensional Black–Scholes model: for i = 1, ..., d,
\[
\frac{dS_t^i}{S_t^i} = \mu^i\,dt + \sum_{j=1}^{d}\sigma_j^i\,dW_t^j, \qquad S_0^i = s^i,
\]
where \mu^i and \sigma_j^i, i, j = 1, ..., d, are constants, the s^i, i = 1, ..., d, are positive constants, and W = {W_t = (W_t^1, ..., W_t^d)}_{t\ge 0} is a d-dimensional standard Brownian motion whose components are independent of each other. As is well known, the joint density of the random vector S_T = (S_T^1, ..., S_T^d) is the lognormal density, which can be written explicitly. We write \Sigma = (\sigma_j^i) and let \Sigma^{-1} = (\sigma_{ij}^{-1})_{i,j=1,...,d} denote its inverse matrix. We can also represent the density p_{S_T}(x) through the Malliavin–Thalmaier formula.

Lemma 5.1. Let F = S_T be a nondegenerate random vector. Then the density p_{S_T} can be expressed as
\[
p_{S_T}(x) = A_d\,E\bigg[\sum_{i=1}^{d}\sum_{j=1}^{d}(-1)^{i+j}\frac{\det(\Sigma_i^j)}{\det(\Sigma)}\,\frac{S_T^i - x_i}{|S_T - x|^d}\bigg(\frac{W_T^j}{\sigma_j^i T S_T^i} + \frac{1}{S_T^i}\bigg)\bigg] \tag{5.1}
\]
for x ∈ R^d, where \Sigma_i^j, i, j = 1, ..., d, is the (d−1)×(d−1) matrix obtained from \Sigma by deleting row j and column i. For more details on the above lemma, see Kohatsu-Higa and Yasuda [32]. Hence we have the following approximation of (5.1): for x ∈ R^d,
\[
p_{S_T}^h(x) := A_d\,E\bigg[\sum_{i=1}^{d}\sum_{j=1}^{d}(-1)^{i+j}\frac{\det(\Sigma_i^j)}{\det(\Sigma)}\,\frac{S_T^i - x_i}{|S_T - x|_h^d}\bigg(\frac{W_T^j}{\sigma_j^i T S_T^i} + \frac{1}{S_T^i}\bigg)\bigg]. \tag{5.2}
\]


Now we provide a short summary of the results in the case d = 2. The simulation obtained through the classical representation is unstable and does not work well (unless variance reduction methods are applied; see e.g. [30], [28] and [12]) because of the appearance of a double Skorohod integral in the Malliavin weight. Compared with the classical method, the Malliavin–Thalmaier formula (5.1) works better, since it does not involve a double Skorohod integral. But the density approximation exhibited unexpected peaks, which are due to the unstable behaviour of \partial Q_d/\partial x_i. This instability also appears when the density estimate is magnified locally. To improve this instability, we use the approximation formula (5.2). In fact, this approximation, although slightly biased in comparison with the Malliavin–Thalmaier formula (5.1), behaves smoothly. For more details and graphs, see Kohatsu-Higa and Yasuda [33].
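The explicit lognormal benchmark used in this subsection can be reproduced directly (a sketch with made-up parameters; the Malliavin weights are omitted, and a plain Gaussian KDE stands in for the Monte Carlo density estimators being compared):

```python
import math, random

def simulate_logs(mu, sigma, t, n, seed=11):
    """Simulate log(S_T^i / s^i) for the 2-d model dS^i/S^i = mu^i dt + sum_j sigma^i_j dW^j."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        w = [rng.gauss(0.0, math.sqrt(t)) for _ in range(2)]
        out.append(tuple(
            (mu[i] - 0.5 * sum(s * s for s in sigma[i])) * t
            + sum(sigma[i][j] * w[j] for j in range(2))
            for i in range(2)))
    return out

def exact_density(y, mu, sigma, t):
    """Exact joint density of the log-returns: bivariate normal with mean
    m_i = (mu^i - |sigma^i|^2 / 2) t and covariance t * sigma sigma^T."""
    m = [(mu[i] - 0.5 * sum(s * s for s in sigma[i])) * t for i in range(2)]
    c = [[t * sum(sigma[i][k] * sigma[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    d = [y[0] - m[0], y[1] - m[1]]
    q = (c[1][1] * d[0] ** 2 - 2 * c[0][1] * d[0] * d[1] + c[0][0] * d[1] ** 2) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def kde(sample, y, h):
    """Product Gaussian kernel density estimate at the point y."""
    s = sum(math.exp(-((f[0] - y[0]) ** 2 + (f[1] - y[1]) ** 2) / (2.0 * h * h))
            for f in sample)
    return s / (len(sample) * 2.0 * math.pi * h * h)
```

The KDE estimate at a fixed point can then be checked against the closed-form lognormal value, which is how the bias and variance comparisons in this section are calibrated.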

5.2 Heston model

In this section, we provide some simulation results for the Heston model [24]:
\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = -\gamma(v_t-\theta)\,dt + \kappa\sqrt{v_t}\,dW_t^{(2)},
\]
where \mu, \gamma, \theta, \kappa are constants with \gamma\theta \ge \kappa^2/2 (see Lamberton and Lapeyre [35]) and W_t^{(1)}, W_t^{(2)} are standard Brownian motions with E[W_t^{(1)}W_t^{(2)}] = \rho t. We introduce a new standard Brownian motion Z, independent of W^{(2)}, such that W_t^{(1)} = \rho W_t^{(2)} + \sqrt{1-\rho^2}\,Z_t. We also change variables: set X_t := \ln(S_t/S_0) - \mu t and u_t := a v_t. Then from Itô's formula we have the dynamics
\[
dX_t = -\frac{u_t}{2a}\,dt + \sqrt{\frac{u_t}{a}}\Big(\rho\,dW_t^{(2)} + \sqrt{1-\rho^2}\,dZ_t\Big), \qquad
du_t = -\gamma(u_t - a\theta)\,dt + \sqrt{a}\,\kappa\sqrt{u_t}\,dW_t^{(2)}. \tag{5.3}
\]
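A minimal Euler-type simulation of (5.3) can be sketched as follows (the full-truncation treatment of the square roots and the numerical choices are our own, not from the text):

```python
import math, random

def simulate_xu(x0, u0, a, gamma, theta, kappa, rho, t, steps=100, seed=5):
    """Euler scheme for (5.3); negative u is truncated inside the square roots."""
    rng = random.Random(seed)
    dt = t / steps
    x, u = x0, u0
    for _ in range(steps):
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        dz = rng.gauss(0.0, math.sqrt(dt))
        up = max(u, 0.0)                                   # full truncation
        x += (-up / (2.0 * a) * dt
              + math.sqrt(up / a) * (rho * dw2 + math.sqrt(1.0 - rho ** 2) * dz))
        u += -gamma * (up - a * theta) * dt + math.sqrt(a) * kappa * math.sqrt(up) * dw2
    return x, u
```

A simple check uses the exact mean of the transformed CIR factor, E[u_t] = aθ + (u_0 − aθ)e^{−γt}; with the text's parameters a = 3, θ = 0.1 and u_0 = 0.3 this mean equals 0.3 for all t.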

As the exact value of the joint density of (X_t, u_t) is unknown, we estimate it using the Malliavin–Thalmaier formula, its approximated version, and a finite difference algorithm applied to the Kolmogorov equation. Set F := (F_1, F_2) := (X_t, u_t) for fixed t > 0. First we give the Malliavin–Thalmaier formula for this model. For x = (x_1, x_2) ∈ R², we have
\[
p_F(x) = \frac{1}{2\pi}\sum_{i=1}^{2}E\bigg[\frac{F_i - x_i}{|F-x|^2}\,H_{(i)}(F;1)\bigg], \tag{5.4}
\]


where
\[
H_{(1)}(F;1) := \frac{\sqrt{a}}{\sqrt{1-\rho^2}\,t}\int_0^t \frac{1}{\sqrt{u_s}}\,dZ_s,
\qquad
H_{(2)}(F;1) := \frac{1}{t}\{A - B\},
\]
with
\[
A := \frac{1}{2e(t)}\int_0^t \frac{e(s)}{u_s}\,ds
+ \frac{1}{\sqrt{a}\,\kappa\,e(t)}\int_0^t \frac{e(s)}{\sqrt{u_s}}\,dW_s^{(2)}
+ \frac{\sqrt{a}\,\kappa}{8e(t)}\int_0^t s\,\frac{e(s)}{u_s}\,ds
- \frac{\sqrt{a}\,\kappa^2}{4e(t)}\int_0^t s\,\frac{e(s)}{u_s^{3/2}}\,dW_s^{(2)},
\]
\[
B := \frac{1}{\kappa\sqrt{a(1-\rho^2)}\,e(t)}\int_0^t \frac{e(s)}{\sqrt{u_s}}\,dZ_s
- \frac{\rho}{2\sqrt{a(1-\rho^2)}\,e(t)}\int_0^t e(r)\int_0^r \frac{1}{\sqrt{u_s}}\,dZ_s\,dr
+ \frac{\rho}{2\sqrt{1-\rho^2}\,e(t)}\int_0^t \frac{e(r)}{\sqrt{u_r}}\int_0^r \frac{1}{\sqrt{u_s}}\,dZ_s\,dW_r^{(2)}
+ \frac{1}{2e(t)}\int_0^t \frac{e(r)}{\sqrt{u_r}}\int_0^r \frac{1}{\sqrt{u_s}}\,dZ_s\,dZ_r,
\]
\[
e(t) := \exp\bigg(-\gamma t - \frac{a\kappa^2}{8}\int_0^t \frac{dr}{u_r} + \frac{\sqrt{a}\,\kappa}{2}\int_0^t \frac{dW_r^{(2)}}{\sqrt{u_r}}\bigg).
\]

Our approximation is given as follows:
\[
p_F^h(x) = \frac{1}{2\pi}\sum_{i=1}^{2}E\bigg[\frac{F_i - x_i}{|F-x|_h^2}\,H_{(i)}(F;1)\bigg]. \tag{5.5}
\]

All the stochastic integrals appearing in the above formulas are approximated using the corresponding Riemann sums. This obviously introduces a further approximation error into the above formulas. We will compare the above approximate values with the following deterministic method.

Finite difference method applied to the associated Kolmogorov equation

Next we give the corresponding forward Kolmogorov equation of the model (5.3):
\[
\frac{\partial p_t}{\partial t} = \gamma p_t + \gamma(u - a\theta)\frac{\partial p_t}{\partial u} + \frac{u}{2a}\frac{\partial p_t}{\partial x} + \rho\kappa u\,\frac{\partial^2 p_t}{\partial x\,\partial u}
+ \rho\kappa\frac{\partial p_t}{\partial x} + \frac{u}{2a}\frac{\partial^2 p_t}{\partial x^2} + \frac{a\kappa^2 u}{2}\frac{\partial^2 p_t}{\partial u^2} + a\kappa^2\frac{\partial p_t}{\partial u}. \tag{5.6}
\]
The initial condition is the Dirac delta function: p_0(x, u) = \delta_0(x)\,\delta_0(u - u_0).


When we compute the approximate solution to equation (5.6), we use the following explicit scheme:
\[
\frac{P_{i,j}^{k+1} - P_{i,j}^{k}}{\Delta t}
= \gamma P_{i,j}^{k}
+ \Big(\frac{u_j}{2a} + \rho\kappa\Big)\frac{P_{i+1,j}^{k} - P_{i-1,j}^{k}}{2\Delta x}
+ \big(\gamma(u_j - a\theta) + a\kappa^2\big)\frac{P_{i,j+1}^{k} - P_{i,j-1}^{k}}{2\Delta u}
+ \frac{u_j}{2a}\frac{P_{i+1,j}^{k} - 2P_{i,j}^{k} + P_{i-1,j}^{k}}{(\Delta x)^2}
+ \rho\kappa u_j\,\frac{P_{i,j+1}^{k} + P_{i-1,j}^{k} - P_{i-1,j+1}^{k} - P_{i,j}^{k}}{\Delta x\,\Delta u}
+ \frac{a\kappa^2 u_j}{2}\frac{P_{i,j+1}^{k} - 2P_{i,j}^{k} + P_{i,j-1}^{k}}{(\Delta u)^2}, \tag{5.7}
\]
where P_{i,j}^k := p_{t_k}(x_i, u_j \mid u_0) and \Delta t, \Delta x, \Delta u > 0. In order to achieve a stable simulation (positivity of the density) in the negative correlation case, we use the forward difference method w.r.t. x and the backward difference method w.r.t. u for the term \partial^2 p_t/\partial x\,\partial u (when the correlation is positive, one has to use another approximation to achieve stability). The stability property also requires some particular relations between the parameters; that is, we assume that (i) \Delta x = \Delta u is small enough; (ii) (\Delta x)^2 \ge u_j(1 + a\kappa^2 + \rho\kappa)\,\Delta t under a restriction c_1 \le u_j \le c_2; (iii) 1/(2a) \ge -\rho\kappa; (iv) a\kappa^2 \ge -2\rho\kappa. (If another approximation is used, or in the case \rho \ge 0, these relations vary.)

Kernel density estimation method

We also compare the density value with the kernel density estimation method. Here we use the Gaussian kernel, and all bandwidth sizes are the same. That is, for F := (F_1, ..., F_d) and x = (x_1, ..., x_d) ∈ R^d,
\[
p_F(x) \approx \frac{1}{N}\sum_{j=1}^{N}\frac{1}{h^d}\prod_{i=1}^{d}\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{(F_i^{(j)} - x_i)^2}{2h^2}\Big), \tag{5.8}
\]
where F_i^{(j)}, i = 1, ..., d, j = 1, ..., N, is a sequence of independent copies of F_i. To use (5.8), we have to decide how to choose the bandwidth. To introduce this optimal choice and the calculation of the constants as in Section 4, we consider the general case of KDE. Let K : R → R_+ be a function with \int_{\mathbb{R}} x^a K(x)\,dx = 0 for a = 1, 3, and for x = (x_1, ..., x_d) ∈ R^d set
\[
p_{KDE}^h(x) := E\bigg[\frac{1}{h^d}\prod_{i=1}^{d}K\Big(\frac{F_i - x_i}{h}\Big)\bigg].
\]
Then we have the following central limit theorem for kernel density estimation.


Proposition 5.2. Set h = (C^2/N)^{1/(d+4)} and n = C/h^2, where C is a positive constant. Let Z be a random variable with the standard normal distribution. Then we have
\[
n\bigg(\frac{1}{N}\sum_{j=1}^{N}\frac{1}{h^d}\prod_{i=1}^{d}K\Big(\frac{F_i^{(j)} - x_i}{h}\Big) - p_F(x)\bigg)
\Longrightarrow \sqrt{\dot{C}_2^{x}}\,Z - \dot{C}_1^{x}C,
\]
where F_i^{(j)}, i = 1, ..., d, j = 1, 2, ..., are independent copies of F_i.

In fact, from Scott [48], we have that the bias error is
\[
p_F(x) - p_{KDE}^h(x) = -h^2\,\frac{\int_{\mathbb{R}}z^2K(z)\,dz}{2}\sum_{i=1}^{d}\frac{\partial^2 p_F(x)}{\partial x_i^2} + O\big(h^4\big) =: \dot{C}_1^{x}h^2 + O\big(h^4\big).
\]

We also obtain the L²-error:
\[
E\bigg[\bigg(\frac{1}{h^d}\prod_{i=1}^{d}K\Big(\frac{F_i - x_i}{h}\Big) - p_{KDE}^h(x)\bigg)^2\bigg]
= \frac{1}{h^d}\,p_F(x)\prod_{i=1}^{d}\int_{\mathbb{R}}K(z_i)^2\,dz_i + O\Big(\frac{1}{h^{d-1}}\Big)
=: \dot{C}_2^{x}\,\frac{1}{h^d} + O\Big(\frac{1}{h^{d-1}}\Big).
\]

Finally, we obtain an optimal bandwidth size from a calculation as in Section 4:
\[
h = \bigg(\frac{d\,\dot{C}_2^{x}}{4N(\dot{C}_1^{x})^2}\bigg)^{1/(d+4)}. \tag{5.9}
\]
We can calculate the constants \dot{C}_1^{x}, \dot{C}_2^{x} through a pilot simulation as explained in Section 4.2, following Proposition 5.2.

Using a KDE method on the Laplacian of the Poisson kernel

We can also estimate the density function through the Laplacian of the Poisson kernel. That is, for \hat{x} ∈ R^d,
\[
p_F(\hat{x}) = E[\delta_0(F - \hat{x})] = E\bigg[\sum_{i=1}^{d}\frac{\partial^2 Q_d}{\partial x_i^2}(F - \hat{x})\bigg]. \tag{5.10}
\]
If we simulate (5.10) directly, it is clear that the simulation will return either zero or an error. Therefore we introduce the following approximation of (5.10): for h > 0,
\[
p_{Poi}^h(\hat{x}) := E\bigg[\sum_{i=1}^{d}\frac{\partial^2 Q_d^h}{\partial x_i^2}(F - \hat{x})\bigg]. \tag{5.11}
\]


We give a central limit theorem for (5.11).

Proposition 5.3. Set N = \big\lfloor C^2/\big(h^{(d+4)/2}(\ln\frac{1}{h})^2\big)\big\rfloor and n = C/(h\ln\frac{1}{h}), where C is a positive constant. Let F^{(j)}, j ∈ N, be independent copies of F and let Z be a random variable with the standard normal distribution. Then, as h → 0, we have
\[
n\bigg(\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{d}\frac{\partial^2 Q_d^h}{\partial x_i^2}\big(F^{(j)} - \hat{x}\big) - p_F(\hat{x})\bigg)
\Longrightarrow \sqrt{\hat{C}_3^{\hat{x}}}\,Z - \hat{C}_1^{\hat{x}}C.
\]
The proof uses the following error estimates. First, the bias error is
\[
p_F(\hat{x}) - p_{Poi}^h(\hat{x}) = \hat{C}_1^{\hat{x}}h\ln\frac{1}{h} + \hat{C}_2^{\hat{x}}h + o(h),
\]

where \hat{C}_1^{\hat{x}} and \hat{C}_2^{\hat{x}} are constants defined as C_1^{\hat{x}} and C_2^{\hat{x}} in the Malliavin–Thalmaier formula, respectively. Next, the L²-error is
\[
E\bigg[\bigg(\sum_{i=1}^{d}\frac{\partial^2 Q_d^h}{\partial x_i^2}(F - \hat{x}) - p_F(\hat{x})\bigg)^2\bigg]
= \frac{1}{h^{d/2}}\,\hat{C}_3^{\hat{x}} + o\Big(\frac{1}{h^{d/2}}\Big),
\]
where \hat{C}_3^{\hat{x}} is a positive constant. As before, we obtain the optimal bandwidth
\[
h = \bigg(\frac{d\,\hat{C}_3^{\hat{x}}}{4N(\hat{C}_1^{\hat{x}})^2}\bigg)^{2/(d+4)}. \tag{5.12}
\]

We can calculate the constants \hat{C}_1^{\hat{x}}, \hat{C}_3^{\hat{x}} through a pilot simulation as in Section 4.2, following Proposition 5.3.

Numerical results

Now we give a survey of the simulation results for the model (5.3). We use the following parameters:

Parameter                          | Notation | Value
initial stock price                | S0       | 100
(initial volatility)^2             | v0       | 0.1
scale parameter                    | a        | 3
expected return                    | μ        | 0.1
speed of mean reversion            | γ        | 2
long term mean                     | θ        | 0.1
volatility of volatility process   | κ        | 0.2
maturity                           | t        | 1


We estimate the density value at (x, u) = (0, 0.3) (the initial point). We simulate two cases, with correlation ρ = −0.1 and ρ = −0.8, through five methods: the Malliavin–Thalmaier formula, the approximated Malliavin–Thalmaier formula, the finite difference method applied to the Kolmogorov equation (only for ρ = −0.1), the Gaussian kernel density estimation and the Laplacian of the Poisson kernel method. Their density estimates and variances appear in Figures 5.1 and 5.2. In Figure 5.1, we have computed two different approximations of the Kolmogorov equation, namely (∆x, ∆u) = (0.02, 0.02) for "PDE 1" and (∆x, ∆u) = (0.01, 0.01) for "PDE 2". These results depend heavily on the approximation of the initial condition (the Dirac function). In order to achieve a stable simulation (positivity of the density), we need to restrict the region of u; the calculation then loses a small amount of mass on the boundary of the region. Hence our results depend on these conditions. As the scheme (5.7) does not satisfy the stability conditions in the case ρ = −0.8, we have not included it in Figure 5.3. In Figures 5.1 and 5.3, we give simulation results using the Malliavin–Thalmaier formula and its approximation with the optimal value of h (using (4.2)). The number of time steps until maturity is 50, that is, ∆t = t/50 = 0.02, and the number of Monte Carlo simulations varies from 10⁴ to 10⁶. From these graphs, we can say that the approximate Malliavin–Thalmaier formula (2.1) performs well in comparison with the other approximate density values (Figures 5.1 and 5.3). In Figures 5.2 and 5.4, we can see that the variance of the approximated Malliavin–Thalmaier formula is stable and about half the variance of the Malliavin–Thalmaier formula without h. Note that even if we use the optimal size of the bandwidth h, the variance of KDE is comparatively larger than that of the other methods. Compared with KDE, the Poisson kernel method works better.

In order to reduce the L²-error, the optimal size of the parameter h becomes slightly large; we then find that the numerical results have somewhat large bias errors in Figures 5.1 and 5.3. In Tables 5.1 and 5.2, we give the constant values from the central limit theorems, obtained using pilot simulations. Here we first simulate each method using values of h from 0.1 down to 10⁻¹⁰ and N = 10⁵. The cases in which the value of C is too small are removed from further consideration. This gives a narrow range of h in which the pilot simulations are carried out; for these we use N = 10⁵ and M = 100. The results of the calculations appear in Tables 5.1 and 5.2. In Table 5.3, we give the optimal size of the parameter h for the case N = 10⁶, using the constants from Tables 5.1 and 5.2. We also give the simulation times for each method; in this respect there is no big difference among the methods.


Figure 5.1. Number of MC simulations and density estimates of the Heston model (ρ = −0.1). Series: MT formula with optimal h; MT formula without h; PDE1; PDE2; MT formula without h (MC = 10^8); MT formula with h (MC = 10^8); Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.

Figure 5.2. Number of MC simulations and variance of the density estimates for the Heston model (ρ = −0.1). Series: MT formula with optimal h; MT formula without h; Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.


Figure 5.3. Number of MC simulations and density estimates for the Heston model (ρ = −0.8). Series: MT formula with optimal h; MT formula without h; MT formula without h (MC = 10^8); MT formula with h (MC = 10^8); Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.

Figure 5.4. Number of MC simulations and variance of the density estimates for the Heston model (ρ = −0.8). Series: MT formula with optimal h; MT formula without h; Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.


Method     | Bias error                                      | ρ = −0.1                     | ρ = −0.8
MT formula | C_1^{x̂} h ln(1/h) + C_2^{x̂} h + o(h)            | C_1^{x̂} = 97.2983           | C_1^{x̂} = 273.0708762
KDE        | Ċ_1^{x} h² + O(h⁴)                              | Ċ_1^{x} = 30258018           | Ċ_1^{x} = 9209822.1
Poisson    | Ĉ_1^{x̂} h ln(1/h) + Ĉ_2^{x̂} h + o(h)            | Ĉ_1^{x̂} = 195.2020997       | Ĉ_1^{x̂} = 274.6290929

Table 5.1. Bias error and constants computed using pilot simulations for the Heston model.

Method     | L²-error                                                  | ρ = −0.1              | ρ = −0.8
MT formula | C_3^{x̂} ln(1/h) + O(1) (d = 2); C_4^{x̂}/h^{d/2−1} (d ≥ 3) | C_3^{x̂} = 42.741     | C_3^{x̂} = 159.642
KDE        | Ċ_2^{x}/h^d + O(1/h^{d−1})                                | Ċ_2^{x} = 372966      | Ċ_2^{x} = 92540.7
Poisson    | Ĉ_3^{x̂}/h^{d/2} + o(1/h^{d/2})                            | Ĉ_3^{x̂} = 0.555882   | Ĉ_3^{x̂} = 0.598472

Table 5.2. L²-error and constants computed using pilot simulations for the Heston model.

Method     | Optimal size of h                                                                  | ρ = −0.1          | ρ = −0.8
MT formula | (C_3^{x̂}/(2N(C_1^{x̂})²))^{1/2} (d = 2); ((d−2)C_4^{x̂}/(4N(C_1^{x̂})²))^{2/(2+d)} (d ≥ 3) | 4.75119 × 10⁻⁵ | 3.27177 × 10⁻⁵
KDE        | (d Ċ_2^{x}/(4N(Ċ_1^{x})²))^{1/(d+4)}                                               | 0.0024256         | 0.0028585
Poisson    | (d Ĉ_3^{x̂}/(4N(Ĉ_1^{x̂})²))^{2/(d+4)}                                              | 0.000193937       | 0.000158309

Table 5.3. Optimal bandwidth h for the Heston model (d = 2 and N = 10⁶).

Method     | N = 10⁴ | N = 10⁵ | N = 10⁶
MT formula | 0.406   | 3.978   | 39.312
KDE        | 0.296   | 2.933   | 27.456
Poisson    | 0.312   | 3.37    | 28.143

Table 5.4. Computation time for the Heston model (in seconds).


5.3 Double volatility Heston model

In this section, we consider a 3-dimensional case, the double volatility Heston model [25], which is a special case of [21], given by
\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)} + \sqrt{u_t}\,S_t\,dW_t^{(2)},\qquad
dv_t = \gamma(\theta - v_t)\,dt + \kappa\sqrt{v_t}\,dB_t^{(1)},\qquad
du_t = \alpha(\beta - u_t)\,dt + \tau\sqrt{u_t}\,dB_t^{(2)},
\]
where \mu, \gamma, \theta, \kappa, \alpha, \beta, \tau are constants with \gamma\theta \ge \kappa^2/2 and \alpha\beta \ge \tau^2/2, and W_t^{(1)}, W_t^{(2)}, B_t^{(1)}, B_t^{(2)} are standard Brownian motions with E[W_t^{(1)}B_t^{(1)}] = \rho_1 t and E[W_t^{(2)}B_t^{(2)}] = \rho_2 t (−1 ≤ \rho_1, \rho_2 ≤ 1), all other pairs being independent of each other. Then we introduce Brownian motions Z_t^{(1)} and Z_t^{(2)} with
\[
W_t^{(1)} = \rho_1 B_t^{(1)} + \sqrt{1-\rho_1^2}\,Z_t^{(1)},\qquad
W_t^{(2)} = \rho_2 B_t^{(2)} + \sqrt{1-\rho_2^2}\,Z_t^{(2)},
\]
where B^{(1)} ⊥ Z^{(1)}, B^{(2)} ⊥ Z^{(2)} and Z^{(1)} ⊥ Z^{(2)}, and ⊥ stands for independence of processes. Set X_t := \ln(S_t/S_0) - \mu t, V_t := a_1 v_t and U_t := a_2 u_t. Then we have
\[
dX_t = -\frac{1}{2}\Big(\frac{V_t}{a_1} + \frac{U_t}{a_2}\Big)dt
+ \frac{\rho_1}{\sqrt{a_1}}\sqrt{V_t}\,dB_t^{(1)} + \frac{\sqrt{1-\rho_1^2}}{\sqrt{a_1}}\sqrt{V_t}\,dZ_t^{(1)}
+ \frac{\rho_2}{\sqrt{a_2}}\sqrt{U_t}\,dB_t^{(2)} + \frac{\sqrt{1-\rho_2^2}}{\sqrt{a_2}}\sqrt{U_t}\,dZ_t^{(2)},
\]
\[
dV_t = \gamma(a_1\theta - V_t)\,dt + \sqrt{a_1}\,\kappa\sqrt{V_t}\,dB_t^{(1)},\qquad
dU_t = \alpha(a_2\beta - U_t)\,dt + \sqrt{a_2}\,\tau\sqrt{U_t}\,dB_t^{(2)}. \tag{5.13}
\]

Through the usual calculations for the weights H_{(i)}(X_t, V_t, U_t; 1), i = 1, 2, 3, we obtain the Malliavin–Thalmaier formula. Although the weights are too long to write here (see Kohatsu-Higa and Yasuda [34]), the computational complexity is the same as in the previous example. We then compare the density values and variances obtained through the same methods as for the Heston model.


Numerical results

We use the following parameters:

Parameter                             | Notation    | Value
Correlation                           | (ρ1, ρ2)    | (0.2, −0.15)
Scale parameters                      | (a1, a2)    | (1, 1)
Speed of mean-reversion               | (γ, α)      | (2, 1.5)
Long term mean                        | (θ, β)      | (0.2, 0.15)
Volatility of volatility process      | (κ, τ)      | (0.2, 0.15)
Initial value of volatility process   | (V0, U0)    | (0.2, 0.15)
Initial value of log-price            | X0          | 0
Maturity                              | t           | 1
Time step size                        | ∆t          | 1/200 = 0.005

The density estimates are carried out at the point (x, v, u) = (0, 0.2, 0.15) (the initial point). From Figure 5.5, we arrive at conclusions similar to those for the Heston model. The KDE method has a large bias and variance even if we use the optimal bandwidth size. The bias error of the Poisson kernel method is larger than the corresponding biases of the Malliavin–Thalmaier formula with and without h. The variance of the approximated Malliavin–Thalmaier formula with the optimal h is much smaller than the variances of the other methods. One can easily see in Figure 5.6 that the Malliavin–Thalmaier formula (without h) has some singular values, but the approximated version is stable and has smaller variance. The expressions of the Malliavin weights are similar to those of the Heston model, but the computation time is longer than in the Heston case, since a problem appears in the simulation of the two volatility processes (the CIR model), for which we need a precise approximation. The time step size ∆t = 0.005 is smaller than in the Heston case; this issue has to be taken into account in the final result.

6 Conclusions and further comments

In this article we have concentrated only on the integration by parts formula in the setting of Wiener space, and we have compared kernel density methods with integration by parts methods. In [18], an interesting mixed approach is introduced; although some of its results do not seem encouraging, it may lead to ideas for new simulation methods. There is also a tendency to obtain the infinite dimensional integration by parts formula as a limit of finite dimensional integrations by parts. This is the point of view of [15], which also shows that various other integration by parts formulae can be obtained besides the classical ones. This approach can also be used theoretically, as shown in [5] and [53]. It has also led to interesting results for jump driven stochastic differential equations.


Figure 5.5. Number of MC simulations and estimation of the density for the double volatility Heston model. Series: MT formula with optimal h; MT formula without h; MT formula without h (MC = 10^8); MT formula with optimal h (MC = 10^8); Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.

Figure 5.6. Number of MC simulations and variance of the density estimates for the double volatility Heston model. Series: MT formula with optimal h; MT formula without h; Gaussian KDE with optimal h; 2nd derivative of Poisson kernel with optimal h.


There is an increasing literature dealing with the integration by parts formula in the setting of Lévy driven stochastic differential equations. In the early 1990s this became a hot topic of research, leading to articles and books (see the references [11], [13], [40], [45], [46], [14], [51] and [52]). There are various approaches that lead to different integration by parts formulae, depending on which variable one uses as the basis of the integration by parts: some use the jump distribution, others the jump times, and others are based on further variables. There is no unified approach as in the Wiener case. In most cases, as on Wiener space, the interest lies in proving the existence and smoothness of densities for solutions of stochastic differential equations with jumps. There is another approach centred on chaos decompositions; see for example [37], [36]. This approach leads to a definition of a derivative, but its consequences for densities of random variables have been largely ignored, and in this setting it becomes hard to verify that solutions of stochastic differential equations with jumps are differentiable. In the past few years various authors have studied the application of this methodology in finance and insurance, leading to similar studies of Greeks in finance; see e.g. [6], [7], [16], [43], [19] and [27]. Another issue that has raised recent interest is the application of the asymptotic expansion theory developed by S. Watanabe on Wiener space (see [56], [41], [42]), recently extended to the Poisson space case. These formulas have found an application in statistics in the form of Berry–Esseen type expansions (see [54] or [50]). In finance this has led to approximate formulas for option pricing. In particular, there has been a recent development of expansion formulas using Greeks (see [9] and [10]); these seem to have an application in the calibration problem, although we still appear far from solving this difficult problem from a practical point of view. Partial approaches that do not seem to lead to a clear expansion but give approximate formulas can be found in [1], [2] and [3]. We also remark that there are various other competitive approaches using partial differential equations or a combination of probabilistic and analytic arguments; for these, see e.g. [4], [26] and [44].

Bibliography

[1] E. Alós and C.-O. Ewald, Malliavin differentiability of the Heston volatility and applications to option pricing, Adv. in Appl. Probab. 40 (1) (2008), pp. 144–162.
[2] E. Alós, J. Leon and J. Vives, On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility, Finance Stoch. 11 (4) (2007), pp. 571–589.
[3] E. Alós, A generalization of the Hull and White formula with applications to option pricing approximation, Finance Stoch. 10 (3) (2006), pp. 353–365.
[4] F. Antonelli and S. Scarlatti, Pricing options under stochastic volatility: a power series approach, preprint.
[5] V. Bally, An elementary introduction to Malliavin calculus, INRIA RR-4718 (2003), http://www.inria.fr/rrrt/rr-4718.html.
[6] V. Bally, M.-P. Bavouzet and M. Messaoud, Integration by parts formula for locally smooth laws and applications to sensitivity computations, Ann. Appl. Probab. 17 (1) (2007), pp. 33–66.
[7] M.-P. Bavouzet and M. Messaoud, Computation of Greeks using Malliavin's calculus in jump type market models, Electronic Journal of Probability 11 (10) (2006), pp. 276–300.
[8] ———, and L. Caramellino, Lower bounds for the density of Ito processes under weak regularity assumptions, working paper.
[9] E. Benhamou, E. Gobet and M. Miri, Smart expansion and fast calibration for jump diffusion, preprint, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1079627.
[10] E. Benhamou, E. Gobet and M. Miri, Closed forms for European options in a local volatility model, preprint, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1275872.
[11] K. Bichteler, J. Gravereaux and J. Jacod, Malliavin calculus for processes with jumps, Stochastics Monographs 2, Gordon and Breach Science Publishers, New York, 1987.
[12] B. Bouchard, I. Ekeland and N. Touzi, On the Malliavin approach to Monte Carlo approximation of conditional expectations, Finance Stoch. 8 (1) (2004), pp. 45–71.
[13] E. Carlen and E. Pardoux, Differential calculus and integration by parts on Poisson space, in: Stochastics, Algebra and Analysis in Classical and Quantum Dynamics, Kluwer, 1990, pp. 63–73.
[14] T. Cass, Smooth densities for solutions to stochastic differential equations with jumps, doi:10.1016/j.spa.2008.07.005.
[15] N. Chen and P. Glasserman, Malliavin Greeks without Malliavin calculus, Stochastic Processes and their Applications 117 (2007), pp. 1689–1723.
[16] M. Davis and M. Johansson, Malliavin Monte Carlo Greeks for jump diffusions, Stochastic Process. Appl. 1 (2006), pp. 101–129.
[17] J. Detemple, R. Garcia and M. Rindisbacher, Representation formulas for Malliavin derivatives of diffusion processes, Finance Stoch. 9 (2005), pp. 349–367.
[18] R. Elie, J.-D. Fermanian and N. Touzi, Kernel estimation of Greek weights by parameter randomization, Annals of Applied Probability 17 (2007), pp. 1399–1423.
[19] Y. El-Khatib and N. Privault, Computations of Greeks in a market with jumps via the Malliavin calculus, Finance Stoch. 8 (2) (2004), pp. 161–179.
[20] L. C. Evans, Partial differential equations, Graduate Studies in Mathematics, Vol. 19, American Mathematical Society, 1998.
[21] J. Fonseca and M. Grasselli, Wishart multi-dimensional stochastic volatility, preprint (http://www.riskturk.com/ec2/submitted/IMEWISHART.pdf).
[22] E. Fournié, J. M. Lasry, J. Lebuchoux, P. L. Lions and N. Touzi, Applications of Malliavin calculus to Monte Carlo methods in finance, Finance Stoch. 3 (4) (1999), pp. 391–412.
[23] E. Fournié, J. M. Lasry, J. Lebuchoux and P. L. Lions, Applications of Malliavin calculus to Monte Carlo methods in finance II, Finance Stoch. 5 (2) (1999), pp. 201–236.
[24] S. L. Heston, A closed-form solution for options with stochastic volatility with applications to bond and currency options, The Review of Financial Studies 6 (2) (1993), pp. 327–343.
[25] D. Kainth and N. Saravanamuttu, Modelling the FX skew, presentation slides (http://www.quarchome.org/FXSkew2.ppt).
[26] J. Kampen, A. Kolodko and J. G. M. Schoenmakers, Monte Carlo Greeks for financial products via approximative transition densities, SIAM J. Sci. Comput. 31 (1) (2008), pp. 1–22.
[27] R. Kawai and A. Takeuchi, Greeks formulae for an asset price dynamics model with gamma processes, submitted.
[28] A. Kebaier and A. Kohatsu-Higa, An optimal control variance reduction method for density estimation, Stochastic Processes and their Applications 118 (12) (2008), pp. 2143–2180.
[29] A. Kohatsu-Higa and M. Montero, Malliavin calculus in finance, in: Handbook of Computational Finance, Birkhäuser, 2004.
[30] A. Kohatsu-Higa and R. Pettersson, Variance reduction methods for simulations of densities on Wiener space, SIAM J. Numerical Analysis 40 (2002), pp. 431–450.
[31] A. Kohatsu-Higa and K. Yasuda, Estimating multidimensional density functions for random variables in Wiener space, C. R. Math. Acad. Sci. Paris 346 (5-6) (2008), pp. 335–338.
[32] A. Kohatsu-Higa and K. Yasuda, Estimating multidimensional density functions using the Malliavin–Thalmaier formula, to appear in SIAM Journal on Numerical Analysis.
[33] A. Kohatsu-Higa and K. Yasuda, Simulation of multidimensional density functions through the Malliavin–Thalmaier formula and its application to finance, submitted.
[34] A. Kohatsu-Higa and K. Yasuda, Heston-type density estimation through the Monte Carlo method and its application to Greeks calculation, in preparation.
[35] D. Lamberton and B. Lapeyre, Introduction to stochastic calculus applied to finance, Chapman & Hall, 1996.
[36] J. Leon, J. L. Sole, F. Utzet and J. Vives, On Lévy processes, Malliavin calculus and market models with jumps, Finance Stoch. 6 (2006), pp. 197–225.
[37] A. Løkka, Martingale representation of functionals of Lévy processes, Stochastic Anal. Appl. 22 (4) (2004), pp. 867–892.
[38] P. Malliavin and A. Thalmaier, Stochastic calculus of variations in mathematical finance, Springer Finance, Springer-Verlag, Berlin, 2006.
[39] D. Nualart, The Malliavin calculus and related topics, second edition, Probability and its Applications (New York), Springer-Verlag, Berlin, 2006.
[40]

, and K. Yasuda, Estimating multidimensional density functions for random variables in Wiener space, C. R. Math. Acad. Sci. Paris 346 5-6 (2008), pp. 335–338.

[32]

, , Estimating multidimensional density functions using the MalliavinThalmaier formula, to appear in SIAM Journal of Numerical Analysis.

[33]

, , Simulation of multidimensional density functions through the MalliavinThalmaier formula and its application to finance, submitted.

[34]

, , Heston-type density estimation through the Monte-Carlo method and its application to Greeks calculation, in preparation.

[35] D. Lamberton, and B. Lapeyre, Introduction to stochastic calculus applied to finance, Chapman & Hall, 1996. [36] J. Leon, J.L. Sole, F. Utzet and J. Vives. On Levy processes, Malliavin Calculus and market models with jumps. Finance and Stoch. 6, 197-225 (2006). [37] A. Løkka, Martingale representation of functionals of L´evy processes, Stochastic Anal. Appl. 22 no. 4 (2004), pp. 867–892. [38] P. Malliavin, and A. Thalmaier, Stochastic calculus of variations in mathematical finance, Springer Finance, Springer-Verlag, Berlin, 2006. [39] D. Nualart, The Malliavin calculus and related topics (Second edition), Probability and its Applications (New York), Springer-Verlag, Berlin, 2006. [40]

, and J. Vives, A duality formula on the Poisson space and some applications, Seminar on Stochastc Analysis, Random Fields and Applications, Progr. Probab. 36 (1995), pp. 205– 213.

[41] Y. Osajima, The Asymptotic Expansion Formula of Implied ity for Dynamics SABR Model and FX Hybrid Model, hhtp://papers.ssrn.com/sol3/papers.cfm?abstract id=965265. [42]

Volatilpreprint,

, General Asymptotics of Wiener Functions and Applilcation to Mathematical Finance, preprint, hhtp://papers.ssrn.com/sol3/papers.cfm?abstract id=1019587.

[43] E. Petrou, Malliavin Calculus in L´evy spaces and Applications to Finance, Electronic Journal of Probability, 13 (2008), pp. 852–879. [44] A. Pascucci, F. Corielli, Parametrix approximation of diffusion transition densities, AMS Acta, Universit´a di Bologna (2008), preprint. [45] J. Picard, Formules de dualit´e sur l’espace de Poisson, Ann. Inst. H. Poincar´e Probab. Statist. 32 4 (1996), pp. 509–548. [46] J. Picard, On the existence of smooth densities for jump processes, Probab. Theory Related Fields 105 4 (1996), pp. 481–511.

A review of Malliavin Calculus and its applications

301

[47] M. Sanz-Sol´e, Malliavin Calculus with applications to stochastic partial differential equations, EPFL Press, 2005. [48] D. W. Scott, Multivariate Density Estimation: Theory, Practice, and Visualization, Wiley, New York, 1992. [49] I. Shigekawa, Stochastic Analysis, Translations of Mathematical Monographs, AMS, 2004. [50] A. Takahashi, and M. Yoshida, Monte Carlo simulation with asymptotic method, preprint (2002), J. Japan Statist. Soc. 35 (2005), pp. 171–203. [51] A. Takeuchi, The Malliavin calculus for SDE with jumps and the partially hypoelliptic problem, Osaka J. Math. 39 (2002), pp. 523–559. [52]

, The Bismut-Elworthy-Li type formulae for stochastic differential equations with jumps, submitted.

[53] J. Teichmann, Stochastic evolution equations in infinite dimension with applications to term structure problems, Lecture note (2005), http://www.fam.tuwien.ac.at/ jteichma/leipzigparislinz080605.pdf. [54] M. Uchida, and N. Yoshida, Asymptotic expansion for small diffusions applied to option pricing, Statist. Infer. Stochast. Process, 7 (2004), pp. 189–223. [55] M. P. Wand, and M. C. Jones, Kernel Smoothing, Chapman & Hall, 1995. [56] S. Watanabe, Analysis of Wiener functionals (Malliavin calculus) and its applications to heat kernels, Ann. Probab. 15 1 (1987), pp. 1–39.

Author information Kohatsu-Higa Arturo, Osaka University. Graduate School of Engineering Sciences. Division of Mathematical Science for Social Systems. 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan. Email: [email protected] Yasuda Kazuhiro, Hosei University. Faculty of Science and Engineering. Department of Industrial and Systems Engineering. 3-7-2, Kajino-cho, Koganei-shi, Tokyo, 184-8584, Japan. Email: k [email protected]

Radon Series Comp. Appl. Math 8, 303–326

© de Gruyter 2009

The numeraire portfolio in discrete time: existence, related concepts and applications Ralf Korn and Manfred Schäl

Abstract. We survey the literature on the numeraire portfolio, explain its relation to various other concepts in financial mathematics and present two applications in insurance mathematics and portfolio optimization. Key words. Numeraire portfolio, value preserving portfolio, growth optimal portfolio, benchmark approach, minimal martingale measure, NUIP condition. AMS classification. 90A09, 91B28, 91B62, 93E20, 62P05

1 Introduction and summary

An important subject of financial mathematics is adequate pricing of financial derivatives, in particular options. In the modern theory (see e.g. Duffie 1992), the historical concept based on expectations of discounted quantities (the present value principle) is replaced by the concept of deflators, numeraires (inverse deflators) or the application of the present value principle after a change of measure. In this paper we focus on the concept of the numeraire portfolio, present its definition, its relation to various valuation concepts and its role in important applications. When the value process of a numeraire portfolio is used as a discount process, the relative value processes of all other portfolios with respect to it will be martingales or at least supermartingales (see Vasicek 1977, Long 1990, Artzner 1997, Bajeux-Besnainou and Portait 1997, Korn & Schäl 1999, Schäl 2000a, Becherer 2001, Platen 2001, 2006, Christensen and Larsen 2007, Karatzas & Kardaras 2007). We will study a financial market with small investors which is free of arbitrage opportunities but incomplete (although we will see that much is valid under a weaker assumption than the no arbitrage assumption). Then in discrete time, one has several choices for an equivalent martingale measure (EMM) needed to value derivatives. In continuous time an EMM exists under more restrictive conditions. It is known (see Harrison & Kreps 1979) that each EMM corresponds to a consistent price system. Thus in incomplete markets, no preference-independent pricing of financial derivatives is possible. In the present paper, the unique martingale measure Q∗ is studied which is defined by the concept of the numeraire portfolio (see Korn & Korn 2001, Section 3.7). The choice of Q∗ can be justified by a change of numeraire in place of a change of measure. Uniqueness is obtained by the fact that the EMM after the change of numeraire should be the original real-world probability measure.
It is known that in many cases one can get a numeraire portfolio from the growth


optimal portfolio (GOP) which maximises the expected utility when using the log-utility. Utility optimisation is now a classical subject. Recent papers with the log-utility are Goll & Kallsen 2000, Kallsen 2000, Goll and Kallsen 2003. When looking for a numeraire portfolio (in the strict martingale sense), we are interested in optimal portfolios which can be chosen from the interior of the set of admissible portfolios. Also for more general utilities, optimal 'interior' portfolios can be used to define equivalent martingale measures (see Karatzas & Kou 1996, Schäl 2000a,b). In order to get full equivalence of a numeraire portfolio and a GOP, one has to generalise the concept by defining a weak numeraire portfolio, introduced by Becherer 2001 under the name 'numeraire portfolio'. Such a portfolio defines a supermartingale measure in the above sense. The paper is laid out as follows. We consider a discrete-time market. It turns out that all the ideas can be explained in a simple one-period model starting in 0 and finishing at the time-horizon T = 1. In fact, for a log-utility investor, the optimal strategy is myopic even for market models where optimal power-utility strategies are not guaranteed to be myopic (see Hakansson & Ziemba 1995). Given the solution to a one-period model, the form of the optimal strategy for a multi-period model is obvious. Therefore we will restrict ourselves to such a (0, 1)-period. Then strategies ξ and portfolios π can be described by d-dimensional vectors. In fact, when considering general semi-martingale models, it is sufficient (in most passages) to replace the inner products ξ⊤∆S or π⊤R by stochastic integrals ξ · S or π · R, where S describes the prices and R the cumulative returns. Except for the restriction to a (0, 1)-period, we try to choose the framework as general as possible, where the recent paper by Karatzas & Kardaras 2007 will serve as a model. In particular, we accept the framework with general convex constraints.
We then consider various valuation and optimisation concepts that are directly related to the numeraire portfolio. Among them are the GOP, the benchmark portfolio, the value preserving portfolio and of course the valuation with the help of EMMs. This is followed by existence considerations for (weak) numeraire portfolios. Finally, we give two important applications of the numeraire portfolios in insurance mathematics and in portfolio optimisation.

2 The one-period market setting

On the market an investor can observe the prices of 1 + d assets at the dates t = 0, 1 which are described by St0 and St = (St1 , . . . , Std )⊤ t = 0, 1. [For any vector x we write x⊤ for the transposed vector and x⊤ y for the inner product of x, y ∈ Rd thought of as column vectors.] Hence our time horizon will be T = 1. Then S00 and S0 are deterministic, S10 is a random variable, S1 is a random vector on a probability space (Ω, F, P ) and St0 is positive. One of these assets will play a special role for which we will choose S 0 . But any other component S k can be chosen in place of S 0 . An important situation will be the case where the asset with price S 0 describes the bank account (or money market) and the other d assets are stocks. This is a very useful interpretation and we will use


it. The interpretation of S0 as money market leads to further convenient interpretations. But remember that, mathematically, all price components will satisfy the same assumptions. Given an initial capital V0 > 0, one can invest in the assets described by S by choosing some ξ ∈ Rd which describes the strategy in the present simple case with T = 1. The number ξ^k represents the number of shares of stock k bought and held by the investor at time 0. The total amount invested in stocks is ξ⊤S0 = Σ_{k=1}^d ξ^k S_0^k. To satisfy the self-financing condition, the remaining wealth of the initial value V0, namely ξ^0 := V0 − ξ⊤S0, is invested in the bank account. Then V0 = V0ξ = Σ_{k=0}^d ξ^k S_0^k. Upon defining ∆X := X1 − X0 for X defined for t = 0, 1, the value V1ξ of ξ at time 1 is described by

∆V^ξ = ξ^0 ∆S^0 + ξ⊤∆S = Σ_{k=0}^d ξ^k (S_1^k − S_0^k) .   (2.1)

Upon defining discounted quantities S˘t = (S˘t^1, . . . , S˘t^d)⊤ and V˘t by

S˘_t^k := S_t^k / S_t^0 ,   V˘_t^ξ := V_t^ξ / S_t^0 ,   (2.2)

we easily obtain

∆V˘^ξ = ξ⊤∆S˘ .   (2.3)

This simple relation is the mathematical reason for using "discounted" quantities. Since we might as well work in discounted terms, from now on we assume that St0 ≡ 1, as is common in Mathematical Finance (see Harrison & Kreps 1979). Then ∆S^0 ≡ 0 and one can dispense with ξ^0. Starting with capital V0 = x > 0 and investing according to strategy ξ, the investor's value at time 1 is V1ξ(x) := x + ξ⊤∆S. For any V0 = x > 0 and any strategy ξ, V1ξ(x) = x + ξ⊤∆S is called admissible if V1ξ(x) ≥ 0. The return R^k of stock k is defined by

S_1^k = S_0^k · (1 + R^k) .   (2.4)

Then we can write V1ξ(x) = x · (1 + Σ_{k=1}^d (ξ^k S_0^k / V0) R^k). Defining π ∈ Rd as the vector with components π^k = ξ^k S_0^k / V0, π^k signifies the proportion of V0 invested in stock k, and we have

V1ξ(x) = x · (1 + π⊤R) =: x · V1π .   (2.5)

The equivalent of "V1ξ(x) > 0 (≥ 0)", for x > 0, is "V1π = 1 + π⊤R > 0 (≥ 0)". This simple representation is the reason for our restriction to the case x = 1 in the sequel, where we write V1π in place of V1ξ(1). By use of π, admissibility is independent of the initial wealth x and thus easier to handle. We will now introduce constraints, where Karatzas & Kardaras 2007 and Kardaras 2006 will serve as a model. For the sake of motivation, we start with the following example.

Example A. The case where the investor is prevented from selling stock short or borrowing from the bank can be described by ξ^k ≥ 0, 1 ≤ k ≤ d, and ξ^0 := V0 − ξ⊤S0 ≥ 0.


This condition is equivalent to π^k ≥ 0, 1 ≤ k ≤ d, and π^0 := 1 − Σ_{k=1}^d π^k ≥ 0. By setting C := {π ∈ Rd : π^k ≥ 0 and Σ_{k=1}^d π^k ≤ 1}, the prohibition of short sales and borrowing is translated into the requirement π ∈ C.
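A minimal numerical sketch of (2.4)-(2.5) and of the constraint in Example A may help; the prices, holdings and initial capital below are hypothetical illustrations of our own, not values from the text.

```python
import numpy as np

S0 = np.array([50.0, 20.0])   # stock prices at t = 0 (hypothetical)
S1 = np.array([55.0, 18.0])   # stock prices at t = 1, one scenario
V0 = 1000.0                   # initial capital
xi = np.array([8.0, 10.0])    # shares held in each stock

xi0 = V0 - xi @ S0            # rest goes to the bank account (self-financing)
V1 = xi0 + xi @ S1            # value at t = 1 (S^0 = 1, so Delta S^0 = 0)

pi = xi * S0 / V0             # proportions pi^k = xi^k S_0^k / V0
R = S1 / S0 - 1.0             # returns, eq. (2.4)
assert np.isclose(V1, V0 * (1.0 + pi @ R))   # eq. (2.5)

# Example A constraint: no short sales, no borrowing.
in_C = bool(np.all(pi >= 0.0) and pi.sum() <= 1.0)
```

As the assertion shows, working with proportions π reproduces the value obtained from the share strategy ξ, which is why admissibility can be phrased independently of the initial wealth.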

Definition 2.1. Consider an arbitrary convex closed set C ⊂ Rd with 0 ∈ C. The admissible value V1π is called C-constrained if π ∈ C. Here the set

Cˇ := ∩_{a>0} aC   (2.6)

is called the set of cone points (or recession cone) of C. Note in particular that the "safe" portfolio π = 0 is always admissible.

Example A (continuation). Here we have aC = {aπ ∈ Rd : π^k ≥ 0 and Σ_{k=1}^d π^k ≤ 1} = {ϑ ∈ Rd : ϑ^k ≥ 0 and Σ_{k=1}^d ϑ^k ≤ a}. This leads to the relation Cˇ = {0} ⊂ Rd.

The following example describes the positivity constraints for admissibility. Example B (Natural Constraints). C := Θ := {ϑ ∈ Rd ; 1 + ϑ⊤ R ≥ 0 a.s.} = {ϑ ∈ Rd ; 1 + ϑ⊤ z ≥ 0 ∀z ∈ Z},

where Z is the support of R, i.e. the smallest closed subset B of Rd such that P[R ∈ B] = 1. The representation of Θ by means of Z is easily proved (see Korn and Schäl 1999, Lemma 4.3a). We use "≥" in place of ">" in the definition of Θ to keep the set Θ closed. Then aC = {aπ ∈ Rd ; 1 + π⊤R ≥ 0 a.s.} = {ϑ ∈ Rd ; a + ϑ⊤R ≥ 0 a.s.} and Cˇ = ∩_{a>0} aC = {ϑ ∈ Rd ; ϑ⊤R ≥ 0 a.s.}. The requirement of admissibility of V1π is exactly what corresponds to π being Θ-constrained. Consider the special case d = 1 and the no-arbitrage condition: −α, β ∈ Z for some α, β > 0. Then again Cˇ = {0} ⊂ R1. We shall always assume that C is enriched with the natural constraints, i.e. C ⊂ Θ. Otherwise, we can replace C by C ∩ Θ.

Example C. The case where the investor is prevented from selling stock short but not from borrowing from the bank can be described by ξ^k ≥ 0, 1 ≤ k ≤ d. This condition is equivalent to π^k ≥ 0, 1 ≤ k ≤ d. By setting C := {π ∈ Rd : π^k ≥ 0}, the prohibition of short sales is translated into the requirement π ∈ C. Here C is a cone and thus we get Cˇ = C = aC for a > 0. In the sequel we will write

Π := {π ∈ C ; 1 + π⊤R > 0 a.s.} .   (2.7)

The elements of Π will be called portfolios; we use this term to distinguish them from strategies, denoted by ξ.


Lemma 2.2. For ρ ∈ C and ϑ ∈ Cˇ we have ρ + ϑ ∈ C.

Proof (see Karatzas & Kardaras 2007). We know that aϑ ∈ C for all a > 0. Then (1 − 1/a)ρ + (1/a)(aϑ) = (1 − 1/a)ρ + ϑ ∈ C by the convexity of C. But C is also closed, and so, letting a → ∞, we obtain ρ + ϑ ∈ C. □

3 Weak numeraire portfolio

In general, by "numeraire" one understands any strictly positive random variable Y which acts as an "inverse deflator" D = Y −1, e.g. a stochastic discount factor, for the values V1π. Then we see our investment according to portfolio π relative to Y, giving us a wealth of V1π /Y. Note that Y need not even be generated by a portfolio.

Definition 3.1. A portfolio ρ ∈ Π will be called a weak numeraire portfolio if for the relative value V1π /V1ρ one has

E[V1π /V1ρ] ≤ 1 (= V0π /V0ρ) for every portfolio π.

The qualifier "weak" is used because we have "≤" in place of "=" in the definition above. Since 0 ∈ Π, one has E[1/V1ρ] ≤ 1 (= 1/V0ρ). Thus Vtπ /Vtρ and 1/Vtρ are positive supermartingales. The definition in this form first appears in Becherer 2001.

Proposition 3.2. If ρ1 and ρ2 are weak numeraire portfolios, then V1ρ1 = V1ρ2 a.s.

Proof. We have both E[V1ρ1 /V1ρ2] ≤ 1 and E[V1ρ2 /V1ρ1] ≤ 1, which implies that V1ρ1 = V1ρ2 a.s. □

Therefore the value generated by weak numeraire portfolios is unique. Moreover ρ1⊤R = ρ2⊤R a.s. In this sense, the weak numeraire portfolio is unique, too. Of course, if ρ satisfies the requirements of the definition above, V1ρ can act as a numeraire in the sense of this discussion. For a weak numeraire portfolio ρ, V1ρ is in a sense the best tradable benchmark: whatever anyone else is doing, it looks like a supermartingale (decreasing in the mean) through the lens of relative value to V1ρ. An obvious example for a numeraire would be Y1 = S10 before assuming St0 ≡ 1. Obviously the relative values do not depend on the discount factor, since V˘1π /V˘1ρ = V1π /V1ρ = (Y −1 V1π)/(Y −1 V1ρ). Now we again see that there was no loss of generality in considering discounted values. It will turn out that ρ satisfies certain optimality properties. Thus, when using Vtρ as inverse deflator in place of the classical St0, we take into account that an investment in the bank account may be far from optimal.
The following relation will be used for the sake of motivation (see Kardaras 2006): V1π /V1ρ = (1 + ρ⊤R)−1 (1 + π⊤R) = 1 + (π − ρ)⊤Rρ

where Rρ = (1 + ρ⊤ R)−1 R is the return in an auxiliary market. Therefore the relative value can be seen as the usual value generated by investing in the auxiliary market. If


E[π⊤R] ∈ [−1, ∞] is called the rate of return or drift rate, then

r(π|ρ) := E[(π − ρ)⊤Rρ] = E[(1 + ρ⊤R)−1 · (π − ρ)⊤R] = (π − ρ)⊤ E[(1 + ρ⊤R)−1 R]   (3.1)

is the rate of return of the relative value process Vtπ /Vtρ. Since E[V1π /V1ρ] = 1 + r(π|ρ), we now obtain the following lemma.

Lemma 3.3. ρ is a weak numeraire portfolio if and only if

r(π|ρ) ≤ 0 for every π ∈ Π .   (3.2)
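The auxiliary-market identity V1π /V1ρ = 1 + (π − ρ)⊤Rρ underlying (3.1) is easy to verify numerically; the sampled returns and portfolios below are our own illustration, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(-0.5, 0.5, size=(1000, 2))   # simulated returns of d = 2 stocks
pi = np.array([0.3, 0.2])
rho = np.array([0.5, 0.1])

R_rho = R / (1.0 + R @ rho)[:, None]         # auxiliary-market return R^rho
lhs = (1.0 + R @ pi) / (1.0 + R @ rho)       # relative value V1^pi / V1^rho
rhs = 1.0 + R_rho @ (pi - rho)               # 1 + (pi - rho)' R^rho
assert np.allclose(lhs, rhs)
```

With these bounded returns, 1 + ρ⊤R stays strictly positive in every sample, so the division defining R^rho is well posed.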

It is obvious that if (3.2) is to hold for C, then it must also hold for the closed convex hull of C, so it was natural to assume that C is closed and convex if we want to find the portfolio ρ. The market may show some degeneracies. This has to do with linear dependence that some stocks might exhibit and which is not excluded. As a consequence, there may be seemingly different portfolios producing exactly the same value. Thus they should then be treated as equivalent. To formulate this notion, consider two different portfolios π1 and π2 producing exactly the same value, i.e. π1⊤R = π2⊤R a.s. Now (π2 − π1)⊤R = 0 a.s. is equivalent to (π2 − π1)⊤z = 0 for all z ∈ Z, where Z is again the support of R. Let L be the smallest linear space in Rd containing Z and L⊥ = {ϑ ∈ Rd ; ϑ ⊥ L} its orthogonal complement. Lemma 3.4.

(a) π1⊤ R = π2⊤ R a.s. is equivalent to π2 − π1 ∈ L⊥ .

(b) ϑ ∈ Rd \ L⊥ if and only if P[ϑ⊤R ≠ 0] > 0. Two portfolios π1 and π2 satisfy π2 − π1 ∈ L⊥ if and only if V1π1 = V1π2 a.s. It is convenient to assume that L⊥ ⊂ C. So the investor should have at least the freedom to do nothing; that is, if an investment leads to absolutely no profit or loss, one should be free to make it. In the non-degenerate case L = Rd this just becomes 0 ∈ C. The natural constraints Θ can easily be seen to satisfy this requirement as well as the requirements of closedness and convexity.

Definition 3.5. Let us define the set I of arbitrage opportunities to be the set of portfolios ϑ such that P[ϑ⊤R > 0] > 0 and P[ϑ⊤R ≥ 0] = 1, i.e., the set of portfolios ϑ ∈ Rd \ L⊥ such that ϑ⊤R ≥ 0 a.s.

4 The NUIP condition I ∩ Cˇ = ∅

The condition I ∩ Cˇ = ∅ will play an important role and will be called the No Unbounded Increasing Profit (NUIP) condition, as in Karatzas and Kardaras 2007. The qualifier


"increasing" stems from the fact that ϑ⊤R ≥ 0 a.s. for ϑ ∈ I. The qualifier "unbounded" reflects the following fact: Suppose that ϑ ∈ I ∩ Cˇ and V1ϑ = 1 + ϑ⊤R where ϑ⊤R ≥ 0 a.s. and P[ϑ⊤R > 0] > 0. Since ϑ ∈ Cˇ, we know that aϑ ∈ C for all a > 0. Moreover aϑ⊤R ≥ 0 a.s. and {aϑ⊤R, a > 0} is unbounded on the set {ϑ⊤R > 0}, which has positive measure. Now suppose that ϑ ∈ I ∩ Cˇ and ρ is a weak numeraire portfolio; then

E[V1aϑ /V1ρ] = E[1/V1ρ] + a E[ϑ⊤R/V1ρ], where E[ϑ⊤R/V1ρ] > 0 .

Thus E[V1aϑ /V1ρ] is unbounded in a; in particular E[V1aϑ /V1ρ] > 1 for large a, which is a contradiction. Therefore we obtain the following result:

Proposition 4.1. The NUIP condition I ∩ Cˇ = ∅ is necessary for the existence of a weak numeraire portfolio.

Note in particular that the NUIP condition is far weaker than the no arbitrage condition.

Example A (continuation). In the case C := {π ∈ Rd : π^k ≥ 0 and Σ_{k=1}^d π^k ≤ 1}, we know that Cˇ = {0} ⊂ Rd. Since I ⊂ Rd \ L⊥, in particular 0 ∉ I, the NUIP condition I ∩ Cˇ = ∅ is always satisfied.

Example B (Natural Constraints, continuation). In the case C := Θ := {ϑ ∈ Rd ; 1 + ϑ⊤R ≥ 0 a.s.} we have Cˇ = {ϑ ∈ Rd ; ϑ⊤R ≥ 0 a.s.} ⊃ I. Here the NUIP condition I ∩ Cˇ = ∅ amounts to the no arbitrage condition I = ∅.

Example C (continuation). In the case C := {π ∈ Rd : π^k ≥ 0} we have Cˇ = C. Here the NUIP condition I ∩ Cˇ = ∅ amounts to the no arbitrage condition I ∩ C = ∅.

We now present an example where E[log V1π] = ∞ for nearly all π, but where V1π /V1ϑ is bounded in π for nearly all ϑ and where a unique numeraire portfolio exists.

Example D (see Kardaras 2006). Consider the case where d = 1 and

P[R ∈ dx] ∝ (1_{(−1,1]}(x) + x^{−1}(log(1 + x))^{−2} · 1_{(1,∞)}(x)) dx .

Since the support Z of R is [−1, ∞), we have Θ = [0, 1] =: C. Now the expected log-utility is E[log V1π] = E[log(1 + πR)] = ∞ for π ∈ (0, 1], since ∫_1^∞ log(1 + πx) x^{−1}(log(1 + x))^{−2} dx = ∞, which easily follows by use of the substitution y = log(1 + x). Obviously E[log V1π] = 0 for π = 0. However, if we consider relative values V1π /V1ϑ = (1 + πR)/(1 + ϑR), then V1π /V1ϑ is bounded since

min{π/ϑ, (1 − π)/(1 − ϑ)} ≤ V1π /V1ϑ ≤ max{π/ϑ, (1 − π)/(1 − ϑ)} .


Moreover, if we fix ϑ ∈ (0, 1) and define

g(π) = E[log(V1π /V1ϑ)] = E[log((1 + πR)/(1 + ϑR))] ,

then we obtain for π ∈ (0, 1)

g′(π) = E[R/(1 + πR)] ,

where g′(0+) = ∞ and g′(1−) = −∞. Therefore there exists a unique ρ ∈ (0, 1) such that g′(ρ) = 0. As a consequence we obtain the relation

E[V1π /V1ρ] = E[(1 + πR)/(1 + ρR)] = 1 + (π − ρ) E[R/(1 + ρR)] = 1

for any π ∈ Θ. Then ρ will be called a numeraire portfolio (in the strict sense). The portfolio ρ is computed by Kardaras as ρ ≈ 0.916. Although the expected log-utility is infinite, the numeraire portfolio does not put all the weight on the stock. Finally we know that ρ is the unique portfolio such that

E[log(V1ρ /V1ρ)] = sup_{π∈Π} E[log(V1π /V1ρ)] = 0 .
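The root ρ of g′ in Example D can be found numerically. The sketch below uses discretisation choices of our own (a closed form for the integral over (−1, 1], the substitution y = log(1 + x) for the tail, truncation at y = 200, and plain bisection); the normalising constant of the density cancels in the first-order condition E[R/(1 + ρR)] = 0, so the unnormalised density can be used directly.

```python
import numpy as np

def foc(rho):
    """E[R / (1 + rho R)] up to the (irrelevant) normalising constant."""
    # Over (-1, 1] the density is constant and the integral of
    # x / (1 + rho x) has a closed form:
    core = 2.0 / rho - np.log((1.0 + rho) / (1.0 - rho)) / rho**2
    # Over (1, inf) substitute y = log(1 + x), x = e^y - 1, giving the
    # integrand e^y / ((1 + rho (e^y - 1)) y^2) on (log 2, inf);
    # we truncate at y = 200, where the remaining tail is negligible.
    y = np.linspace(np.log(2.0), 200.0, 200_000)
    f = np.exp(y) / ((1.0 + rho * np.expm1(y)) * y**2)
    tail = 0.5 * (y[1] - y[0]) * np.sum(f[1:] + f[:-1])   # trapezoid rule
    return core + tail

# foc is decreasing with foc(0+) = +inf and foc(1-) = -inf: bisect.
lo, hi = 0.5, 0.99
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if foc(mid) > 0.0 else (lo, mid)
rho = 0.5 * (lo + hi)   # close to the value 0.916 reported in the text
```

The truncation point and grid size are generous here; a coarser grid already locates ρ to a few decimal places.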

5 The weak numeraire portfolio and the growth optimal portfolio

Definition 5.1. (a) A portfolio ρ ∈ Π is log-optimal if E[log V1π] ≤ E[log V1ρ] for every π ∈ Π.

(b) A portfolio ρ ∈ Π will be called growth optimal portfolio (GOP) [or relatively log-optimal] if E[log(V1π /V1ρ)] ≤ 0 for every π ∈ Π.

The present concept of GOP is used e.g. by Christensen & Larsen 2007; the name (relatively) log-optimal is used e.g. by Karatzas and Kardaras 2007. Of course, if the portfolio ρ is log-optimal with E[log V1ρ] < ∞, then ρ is also a GOP, and we will prefer the notation GOP in that case. The two notions coincide if sup_{π∈Π} E[log V1π] < ∞. In Example D above, this condition fails and almost every portfolio is log-optimal. But we have existence of a unique numeraire portfolio which is the unique GOP.

Theorem 5.2. A portfolio is a weak numeraire portfolio if and only if it is a GOP.

Note that this result shows in particular that the existence of a weak numeraire portfolio implies the existence of a GOP and vice versa.

Proof of Theorem 5.2. (See Becherer 1999, Christensen and Larsen 2007, Bühlmann and Platen 2003.)


(i) Suppose ρ is a weak numeraire portfolio. Then we have by Jensen's inequality

E[log(V1π /V1ρ)] ≤ log(E[V1π /V1ρ]) ≤ log 1 = 0 .

(ii) Suppose that ρ is a GOP and π is an arbitrary portfolio. Then V1ε := (1 − ε)V1ρ + εV1π is the value of some portfolio where V1ε − V1ρ = ε(V1π − V1ρ). From 1 − t−1 ≤ log t for t > 0 we obtain

0 ≥ ε−1 · E[log(V1ε /V1ρ)] ≥ ε−1 · E[(V1ε − V1ρ)/V1ε] = E[(V1π − V1ρ)/V1ε].

From

−2 ≤ 2(x − y)/(x + y) ≤ (x − y)/((1 − ε)y + εx) ↑ x/y − 1 for 1/2 ≥ ε ↓ 0 (where x, y > 0),

we finally get E[V1π /V1ρ] ≤ 1 from the monotone convergence theorem. □

Proposition 5.3. The NUIP condition I ∩ Cˇ = ∅ is necessary for the existence of a GOP.

Proof. Suppose that ρ is a GOP and suppose that ϑ ∈ I ∩ Cˇ. Since ϑ ∈ I, we know that ϑ⊤R ≥ 0 a.s. and P[ϑ⊤R > 0] > 0. Now we conclude from Lemma 2.2 that ρ + ϑ ∈ C, where E[ϑ⊤R/V1ρ+ϑ] > 0 and thus E[log(V1ρ /V1ρ+ϑ)] ≤ log E[V1ρ /V1ρ+ϑ] = log{1 − E[ϑ⊤R/V1ρ+ϑ]} < 0. Now we have a contradiction to the optimality of ρ. □

Alternatively, one can derive Proposition 5.3 directly from Theorem 5.2 and Proposition 4.1 without a separate proof.

6 Existence of weak numeraire portfolios

In this section we will show that the NUIP condition is also sufficient for the existence of a weak numeraire portfolio. This in particular shows that valuation via discounting by the wealth process of a weak numeraire portfolio can even be performed in situations where the no arbitrage condition is not satisfied. This was already emphasised by Platen 2006. To obtain the existence result we need some technical notation and results.

Definition 6.1. For f : Rd → (0, 1] we write f ∈ F if E[f(R) · log(1 + R)] < ∞.

Example E (see Kardaras 2006). We have fk ∈ F and fk ↑ 1 for fk(x) := 1{x≤1} + 1{x>1} · x−1/k.

Under the no-arbitrage condition I ∩ Θ = ∅, one knows that Θ ∩ L is compact (see Korn and Schäl 1999), where Θ is defined in Example B. Under the weaker NUIP condition we need the following more technical lemma.


Lemma 6.2. Assume I ∩ Cˇ = ∅. Let F∗ be some subset of F which is bounded from below in the following sense: there is some f∗ ∈ F such that f ≥ f∗ for all f ∈ F∗. Let R ⊂ C be a set of portfolios which are "not too bad" in the following sense: for every ρ ∈ R with ρ ∈ L \ {0}, there exists some f ∈ F∗ such that the function [0, 1] ∋ u → gf(uρ) is increasing, where

gf(π) := E[log(1 + π⊤R) · f(R)] [≤ (log ‖π‖)+ + E[log(1 + R)f(R)] < ∞] .

Then R is bounded.

The lemma is hidden in the proof of Theorem 3.15 in Karatzas and Kardaras 2007.

Proof by contradiction. Suppose there exists some sequence (ρm, fm) ⊂ R × F∗ such that ρm ∈ L ∩ C and [0, 1] ∋ u → gfm(uρm) is increasing, where ‖ρm‖ → ∞. Define ξm := ‖ρm‖−1 ρm. We can assume that ξm → ξ for some ξ ∈ L. We want to show that ξ ∈ Cˇ. Choose any a > 0 and ma such that 0 < u = a/‖ρm‖ < 1 for m ≥ ma. Then aξm = uρm = uρm + (1 − u) · 0 ∈ C since C is convex. Moreover, since C is closed, we also have aξ ∈ C. This proves ξ ∈ Cˇ ∩ L with ‖ξ‖ = 1. Now for u ∈ (0, 1] we have

0 ≤ ε−1 [gfm(uρm) − gfm((1 − ε)uρm)] = E[ε−1 (log(1 + u ρm⊤R) − log(1 + (1 − ε)u ρm⊤R)) · fm(R)] .

From the concavity of log we conclude that the integrand is decreasing for ε ↓ 0. Since the expectation is finite for ε = 1, we apply the monotone convergence theorem and obtain

0 ≤ E[(d/du) log(1 + u ρm⊤R) · fm(R)] = E[(1 + u ρm⊤R)−1 ρm⊤R fm(R)] .

Again choose any a > 0 and ma such that 0 < u = a/‖ρm‖ < 1 for m ≥ ma. Then

0 ≤ E[(1 + a ξm⊤R)−1 ξm⊤R fm(R)], where (1 + a ξm⊤R)−1 ξm⊤R fm(R) ≤ a−1 .

From Fatou's lemma we now obtain

a−1 ≥ E[(1 + a ξ⊤R)−1 ξ⊤R lim_m fm(R)] ≥ lim_m E[(1 + a ξm⊤R)−1 ξm⊤R fm(R)] ≥ 0 .

Since lim_m fm(R) ≥ f∗(R) > 0, we conclude from the first inequality that 1 + a ξ⊤R > 0 a.s. Now a > 0 was arbitrary, so we conclude that ξ⊤R ≥ 0 a.s. where ‖ξ‖ = 1 and ξ ∈ L. Therefore P[ξ⊤R ≠ 0] > 0, since otherwise ξ ∈ L⊥. Thus we finally have ξ ∈ I and hence ξ ∈ I ∩ Cˇ, which is a contradiction to our assumption. □

Theorem 6.3. Under the NUIP assumption I ∩ Cˇ = ∅, there exists a weak numeraire portfolio ρ.


If E[log(1 + R)] < ∞, then ρ is obtained as the unique solution of the following concave optimisation problem and is thus the only GOP in C ∩ L:

ρ = arg max_{π∈C∩L} g(π), where g(π) := E[log(1 + π⊤R)].

Remark 6.4. In the general case, where the condition E[log(1 + R)] < ∞ does not hold, one can solve a sequence of optimisation problems and show that the corresponding solutions converge to the solution of the original problem; see below and Theorem 3.15 in Karatzas and Kardaras 2007.

Proof. We start with a sequence (fk) ⊂ F where fk ↑ 1. The sequence can be chosen as in Example E above. Now define gk(π) = gfk(π) := E[log(1 + π⊤R) · fk(R)]. Then gk is strictly concave on C ∩ L and −∞ ≤ gk(π) < +∞. Further set

0 ≤ gk∗ := sup_{π∈C} gk(π) = lim_{n→∞} gk(ρkn)

for some sequence (ρkn) ⊂ C. Since gk(π + ζ) = gk(π) for ζ ⊥ L, we can choose ρkn ∈ L ∩ C. Moreover, we may choose ρkn such that gk(ρkn) = max_{0≤u≤1} gk(uρkn) ≤ sup_{π∈C} gk(π). Then by concavity, u → gk(uρkn) is increasing. From the preceding lemma we know that R = (ρkn) is bounded, in particular gk∗ ∈ [0, ∞). Now fix some k and assume w.l.o.g. that ρkn → ρk∗ for some ρk∗ ∈ C, where gk∗ = gk(ρk∗) since C is closed. Choose π ∈ C; then [0, 1] ∋ u → gk(ρk∗ + u(π − ρk∗)) is real valued and concave. Since ρk∗ is a maximum point, we conclude from the concavity that

0 ≤ G(u) := (1/u)[gk(ρk∗) − gk(ρk∗ + u(π − ρk∗))] ≤ gk(ρk∗) − gk(π)

is increasing in 0 < u ≤ 1. From the monotone convergence theorem, we obtain for u ↓ 0, again by concavity of log,

G(u) ↓ E[ −(d/du) log(1 + [ρk∗ + u(π − ρk∗)]⊤R)|_{u=0} · fk(R) ] .

Thus we get

E[(π − ρk∗)⊤R / (1 + ρk∗⊤R) · fk(R)] ≤ 0 .

Since we know that (ρk∗) is also bounded, we may assume that ρk∗ → ρ for some ρ ∈ C ∩ L. Now (π − ρk∗)⊤R / (1 + ρk∗⊤R) = (1 + π⊤R)/(1 + ρk∗⊤R) − 1 ≥ −1. Then we obtain from Fatou's lemma r(π|ρ) ≤ 0, since

r(π|ρ) = E[(π − ρ)⊤R / (1 + ρ⊤R)] = E[lim_k (π − ρk∗)⊤R / (1 + ρk∗⊤R) · fk(R)] ≤ lim_k E[· · ·] ≤ 0 .

From Lemma 3.3 we finally conclude that ρ is a weak numeraire portfolio. In the case where E[log(1 + R)] < ∞, we can choose fk ≡ 1 for all k and thus ρk∗ = ρ. Then

g∗ := sup_{π∈C} g(π) = g(ρ) = max_{π∈C∩L} g(π), where g(π) := E[log(1 + π⊤R)].

Since g is strictly concave on C ∩ L, the maximum point is unique. □
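In a finite scenario market the optimisation problem of Theorem 6.3 can be solved directly. The sketch below uses scenario returns, probabilities and a damped Newton iteration of our own choosing (not from the text) to maximise g(π) = E[log(1 + π⊤R)] and then checks the numeraire property of the maximiser; since the optimum here lies in the interior, the supermartingale inequality even holds with equality.

```python
import numpy as np

R = np.array([[0.30, -0.10],     # scenario returns of d = 2 stocks (illustrative)
              [-0.20, 0.25],
              [-0.05, -0.05]])
prob = np.array([0.4, 0.4, 0.2])

def g(pi):                       # g(pi) = E[log(1 + pi' R)], -inf if inadmissible
    v = 1.0 + R @ pi
    return prob @ np.log(v) if np.all(v > 0.0) else -np.inf

def grad(pi):                    # gradient of g: E[R / (1 + pi' R)]
    return R.T @ (prob / (1.0 + R @ pi))

def hess(pi):                    # Hessian of g: -E[R R' / (1 + pi' R)^2]
    w = prob / (1.0 + R @ pi) ** 2
    return -(R.T * w) @ R

pi = np.zeros(2)                 # damped Newton ascent on the strictly concave g
for _ in range(50):
    d = np.linalg.solve(hess(pi), -grad(pi))
    step = 1.0
    while g(pi + step * d) < g(pi) and step > 1e-12:
        step *= 0.5                  # backtrack to stay admissible and improve g
    pi = pi + step * d
rho = pi

# First-order condition E[R / (1 + rho' R)] = 0, cf. Lemma 3.3.
assert np.allclose(grad(rho), 0.0, atol=1e-8)
# Numeraire property: equality holds here since rho is an interior maximum.
for test in (np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    assert np.isclose(prob @ ((1.0 + R @ test) / (1.0 + R @ rho)), 1.0)
```

At an interior maximiser the supermartingale bound E[V1π /V1ρ] ≤ 1 becomes a martingale identity for every portfolio, which is the strict numeraire-portfolio situation discussed in Example D.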

7 Deflators and value preserving portfolios

The concept of a deflator is important for the valuation of uncertain payment streams and is more general than that of a numeraire portfolio.

Definition 7.1. The class D of supermartingale deflators is defined as

D := {D ≥ 0 ; D is a random variable with E[DV1π] ≤ 1 (= V0π) for all portfolios π}.

Since 0 ∈ Π, we know that E[D] ≤ 1 for D ∈ D.

Corollary 7.2. (a) A portfolio ρ ∈ Π is a weak numeraire portfolio if and only if (V1ρ)−1 is a supermartingale deflator.

(b) E[log V1ρ] = inf_{D∈D} E[log(D−1)].

The second property in (a) is introduced by Korn 1997 and called "ρ is interest-oriented". The property (b) of ρ can be seen as an optimality property dual to log-optimality.

Proof. (a) is clear by definition. (b) See Becherer 2001. E[log(D−1)] makes sense since E[log−(D−1)] ≤ E[D] ≤ 1. Assume w.l.o.g. that the right-hand side in (b) is finite and E[log(D−1)] ∈ R. Then E[log V1ρ − log(D−1)] = E[log(DV1ρ)] ≤ log E[DV1ρ] = 0. □

Definition 7.3 (Hellwig 1996). For π ∈ Π and D ∈ D, VDπ := D · (1 + π⊤R) is called the present economic value of π (at time 0) associated with D ∈ D.

Since D is a supermartingale deflator, we always have E[VDπ] ≤ 1, where 1 is here the initial value. Therefore the following definition is interesting:

Definition 7.4 (Hellwig 1996). A portfolio π ∈ Π is called value preserving if VDπ ≡ 1 a.s. for some D ∈ D.

Theorem 7.5. The following properties are equivalent:
(1) π is value preserving w.r.t. the supermartingale deflator D;
(2) π is a weak numeraire portfolio and D = (1 + π⊤R)−1.

Thus, by Theorem 7.5, existence of a value preserving portfolio is also related to the existence of a GOP (see Korn and Schäl 1998).

Proof. "(1) ⇒ (2)" From D · (1 + π⊤R) = 1 we get D = (1 + π⊤R)−1 where D ∈ D. Now Corollary 7.2 (a) applies. "(2) ⇒ (1)" Again from Corollary 7.2 we know that D = (1 + π⊤R)−1 is a deflator and D · (1 + π⊤R) = 1. □

Numeraire portfolio

8 Fair portfolios and applications in actuarial valuation

Benchmarked portfolios and fair valuation form a concept suggested for actuarial valuation by Bühlmann and Platen 2003. Following them, we call $V_1^\pi / V_1^\rho$ the benchmarked value of the portfolio $\pi$ if $\rho$ is a weak numeraire portfolio; $V_1^\rho$ is then uniquely determined according to Proposition 3.2. We know that
\[ E[V_1^\pi / V_1^\rho] \le 1 \quad \text{for every portfolio } \pi. \]
In financial valuations in competitive markets, a price is typically chosen such that seller and buyer have no systematic advantage or disadvantage. Let the random variable $H$ be a contingent claim, i.e. a possibly negative random payoff. Candidates for prices of $H$ are $E[DH]$ for some deflator $D \in \mathcal{D}$. For $H = V_1^\pi$ we thus have $E[DH] \le 1$. In the case $E[DH] < 1$, this gives an advantage to the seller of the portfolio $\pi$: its expected future benchmarked payoff is less than its present value. The only situation in which buyer and seller are treated equally is when the benchmarked price process $V_t^\pi / V_t^\rho$ is a martingale, which in our situation means $E[V_1^\pi / V_1^\rho] = 1$.

Definition 8.1 (see Bühlmann and Platen 2003). A value process $V_t$, $t = 0, 1$, is called fair if its benchmarked value $V_t / V_t^\rho$ is a martingale, i.e. if $E[V_1 / V_1^\rho] = V_0$ (since $V_0^\rho = 1$).

Let us consider a contingent claim $H$ which has to be paid at the maturity date 1, and let $\rho$ be the weak numeraire portfolio. We choose the pricing formula
\[ \mathrm{pr}(H) := E[H / V_1^\rho], \tag{8.1} \]

which by definition is fair. In contrast to classical actuarial valuation principles, no loading factor enters the valuation formula. For premium calculations in the insurance business, the use of a change of measure is explained in Delbaen and Haezendonck 1989. An important case arises when $H$ is independent of the value $V_1^\rho$. Then we obtain
\[ \mathrm{pr}(H) = E[H] \cdot E[1 / V_1^\rho]. \tag{8.2} \]

Here $P(0, 1) = E[1/V_1^\rho]$ is the fair price of the contingent claim $H' \equiv 1$ to be paid at the maturity date $T = 1$, and thus the price of the zero coupon bond with maturity 1. Thus (8.2) is the classical actuarial pricing formula in the case of stochastic interest rates, and $\mathrm{pr}(H) := E[H / V_1^\rho]$ is an extension to the more general case where dependence may occur. For equity-linked or unit-linked insurance contracts we look again at a claim $H$ paid at $T = 1$, where $H$ has the form $H = U \cdot V_1^\pi$. Intuitively, $H$ now stands for a unit-linked benefit and premium; $H$ can be of either sign. The benefit at maturity is linked to some strictly positive reference portfolio $V_1^\pi$ with given portfolio $\pi$. The insurance contract specifies the reference portfolio $\pi$ and the random variable $U$, which depends on the occurrence of insured events during the period $(0, 1]$, for instance death, disablement or accident. These products offer the insurance company as well as the insurance customer advantages compared to traditional products. The insurance industry may benefit from


offering more competitive products, and the customer may benefit from higher yields in financial markets. Compared to classical insurance products, one distinguishing feature of unit-linked products is the random amount of the benefit. But the traditional basis for pricing life insurance policies, the principle of equivalence, based on the idea that premiums and expenses should balance in the long run, does not deal with random benefits. Therefore, we have to combine financial valuation theory with elements of actuarial theory to price such products. The standard actuarial value $\mathrm{pr}_o(H)$ of the contingent claim $H = U \cdot V_1^\pi$ is determined by the properly defined liability, or prospective reserve, as
\[ \mathrm{pr}_o(H) = V_0^\pi \cdot E[H / V_1^\pi] = E[H \cdot V_0^\pi / V_1^\pi]. \]

The standard actuarial methodology assumes that the insurer invests all payments in the reference portfolio $\pi$. Then one obtains for $\mathrm{pr}_o(H)$, when expressed in units of the domestic currency, the expression
\[ \mathrm{pr}_o(H) = \mathrm{pr}_o(U \cdot V_1^\pi) = V_0^\pi \cdot E[U]. \]

We observe the difference between $\mathrm{pr}_o(H)$ and $\mathrm{pr}(H)$: standard actuarial pricing and fair pricing will, in general, lead to different results. As one can see, this is to be expected when the endowments depend on the numeraire portfolio. Indeed, if we assume that $\rho$ is a numeraire portfolio (in the strict sense), then $E[V_1^\pi / V_1^\rho] = V_0^\pi / V_0^\rho = V_0^\pi$ and we obtain
\[ \mathrm{pr}(U \cdot V_1^\pi) - \mathrm{pr}_o(U \cdot V_1^\pi) = \mathrm{Cov}(U,\, V_1^\pi / V_1^\rho). \tag{8.3} \]

A similar formula is derived by Dijkstra 1998. Hence, the two prices coincide if and only if $U$ and $V_1^\pi / V_1^\rho$ are uncorrelated; moreover, the sign of the price difference is the sign of the covariance. This condition differs from the one given by Bühlmann and Platen 2003. In many cases, the endowment $H$ of the insurance contract will include a guaranteed (non-stochastic) amount $g(K)$, where $K$ is the premium paid by the insured. Then the benefit at maturity is composed of the guaranteed amount plus a call option with exercise price $g(K)$ and the reference portfolio as underlying. The fair premium is then the solution to an equation in $K$ and $g(K)$ (see Nielsen and Sandmann 1995).
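The covariance identity (8.3) can be verified numerically in a complete binomial market (a sketch with hypothetical numbers). In the two-state case the first-order condition $E[R/(1+\rho R)] = 0$ gives the numeraire portfolio in closed form, $\rho = (p_u\beta - p_d\alpha)/(\alpha\beta)$:

```python
# Complete binomial market (hypothetical): R = beta w.p. pu, R = -alpha w.p. pd.
alpha, beta, pu = 0.2, 0.25, 0.6
pd = 1.0 - pu
rho = (pu * beta - pd * alpha) / (alpha * beta)   # solves E[R/(1 + rho*R)] = 0

pi = 0.5                                          # reference portfolio (hypothetical)
scen = [(beta, pu, 0.8), (-alpha, pd, 1.5)]       # (R, prob, insurance factor U)

bench = lambda r: (1 + pi * r) / (1 + rho * r)    # benchmarked payoff V_1^pi / V_1^rho
pr  = sum(p * u * bench(r) for r, p, u in scen)   # fair price (8.1) of H = U * V_1^pi
pro = 1.0 * sum(p * u for r, p, u in scen)        # standard actuarial price V_0^pi E[U]

EU = sum(p * u for r, p, u in scen)
EV = sum(p * bench(r) for r, p, u in scen)        # equals 1: rho is a numeraire ptf
cov = sum(p * (u - EU) * (bench(r) - EV) for r, p, u in scen)

assert abs(EV - 1.0) < 1e-12
assert abs((pr - pro) - cov) < 1e-12              # identity (8.3)
```

Since $E[V_1^\pi / V_1^\rho] = 1$, the difference of the two prices is exactly the covariance of $U$ with the benchmarked reference payoff.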

9 Existence of numeraire portfolios

There seems to be general agreement that $S_t^k$ should be fair, since $S_0^k$ is a fair price for $H = S_1^k$ for every $k \in \{0, 1, \ldots, d\}$. This leads to the requirement
\[ E[S_1^k / V_1^\rho] = S_0^k, \quad 0 \le k \le d. \tag{9.1} \]

Definition 9.1. A portfolio $\rho \in \Pi$ will be called a numeraire portfolio (in the strict sense) if condition (9.1) holds.

Proposition 9.2. (a) If $\rho$ is a numeraire portfolio, then for any strategy $\xi$ and $V_1^\xi(x) := x + \xi^\top \Delta S$ we have
\[ E[V_1^\xi(x) / V_1^\rho] = x = V_0^\xi. \]

(b) A numeraire portfolio is a weak numeraire portfolio.

Proof. Set $V_t^\xi(x) = \sum_{k=0}^d \xi^k S_t^k$. Then we obtain
\[ E[V_1^\xi / V_1^\rho] = \sum_{k=0}^d \xi^k\, E[S_1^k / V_1^\rho] = \sum_{k=0}^d \xi^k S_0^k = V_0^\xi. \]
In the present simple situation, where the horizon is $T = 1$, we do not have any integrability problems and we even obtain the martingale property. In the more general case one obtains the supermartingale property from the fact that every non-negative local martingale is a supermartingale. $\Box$

As in Lemma 3.3, we know that $\rho$ is a numeraire portfolio if and only if $r(\pi|\rho) = 0$ whenever $\pi$ is a unit vector or the zero vector in $\mathbb{R}^d$. Here $r(\pi|\rho)$ is the directional derivative of $g(\pi) := E[\log(1 + \pi^\top R)]$ at the point $\rho$ in the direction of $\pi - \rho$ (if $g$ is finite). In general, we cannot expect to compute the numeraire portfolio just by naively trying to solve the first-order condition $\nabla g(\rho) = r(0|\rho) = 0$, because sometimes this equation simply fails to have a solution.

In this section we make the following assumptions.

Assumption 9.3. $C = \Theta$ describes the natural constraints, $I = \emptyset$ (which here is the NUIP condition), and the logarithm is integrable in the sense that $E[\log(1 + R)] < \infty$.

We now introduce a further condition, given in the following theorem proved in Schäl (1999, Theorem 4.15):

Theorem 9.4. Let $\rho$ be the unique GOP in $\Theta \cap L$ according to Theorem 6.3. Then the condition
\[ E[\vartheta^\top R / (1 + \vartheta^\top R)] < 0 \quad \text{for all } \vartheta \in \partial\Theta \cap L \tag{9.2} \]
implies the first-order condition $E[R^k / (1 + \rho^\top R)] = 0$ for $k = 1, \ldots, d$.

Corollary 9.5. Let $\rho$ be as in the preceding theorem. Then, under condition (9.2), $\rho$ is a numeraire portfolio (in the strict sense) and
\[ E[1 / V_1^\rho] = 1. \tag{9.3} \]

Proof. We obtain from Theorem 9.4
\[ 1 = 1 - \sum_{k=1}^d \rho^k\, E[R^k / (1 + \rho^\top R)] = 1 - E[\rho^\top R / (1 + \rho^\top R)] = E[1 / (1 + \rho^\top R)], \]
which implies (9.3). Now we get for $0 \le k \le d$, with $R^0 \equiv 0$:
\[ E[S_1^k / V_1^\rho] = E[S_0^k (1 + R^k) / (1 + \rho^\top R)] = S_0^k\, E[1 / (1 + \rho^\top R)] = S_0^k. \]
Thus $\rho$ is a numeraire portfolio. $\Box$




Example F (the one-dimensional case). Consider the case where $d = 1$ and $R$ is bounded; then the support $Z$ is a compact subset of $\mathbb{R}$. Set $-\alpha = \min Z$, $\beta = \max Z$, so that $\mathrm{conv}(Z) = [-\alpha, \beta]$. For the no-arbitrage condition we need $\alpha > 0$, $\beta > 0$. Then condition (9.2) is satisfied if and only if
\[ E\Big[ \frac{R}{1 + \frac{1}{\alpha} R} \Big] < 0 < E\Big[ \frac{R}{1 - \frac{1}{\beta} R} \Big]. \tag{9.4} \]
For a proof, we have
\[ \min_{z \in Z}\,(1 + \vartheta z) = \min_{-\alpha \le z \le \beta}\,(1 + \vartheta z) = \begin{cases} 1 - \vartheta\alpha & \text{for } \vartheta > 0, \\ 1 + \vartheta\beta & \text{for } \vartheta < 0. \end{cases} \]
Hence $\Theta = [-\frac{1}{\beta}, \frac{1}{\alpha}]$ and $\partial\Theta = \{-\frac{1}{\beta}, \frac{1}{\alpha}\}$, and $E\big[\frac{\vartheta R}{1 + \vartheta R}\big] = \vartheta \cdot E\big[\frac{R}{1 + \vartheta R}\big] < 0$ for both $\vartheta \in \partial\Theta$ if and only if (9.4) holds.

In fact, condition (9.4) is weak; it can be looked upon as a kind of no-arbitrage condition. The martingale case $E[R] = 0$ is not interesting, as we can then choose $\vartheta = 0$. Let us suppose that $E[R] > 0$. Then $E[R/(1 - R/\beta)] \ge E[R] > 0$, and the condition $E[R/(1 + R/\alpha)] < 0$ requires that there not be too little probability mass on negative values of $R$. Condition (9.4) can easily be shown to be necessary for the first-order condition as well. We will now give a sufficient condition for (9.2) which is, however, far from necessary.

Theorem 9.6. If $\Omega$ or $Z$ is finite, then condition (9.2) is always satisfied, and thus the statements of Corollary 9.5 hold true.

Proof (see also Long 1990). If $\Omega$ is finite, then $Z$ is finite. Choose $\vartheta \in \partial\Theta \cap L$; then
\[ 0 = \min_{z \in Z}\,(1 + \vartheta^\top z) = 1 + \vartheta^\top z_o \quad \text{for some } z_o \in Z. \]

Further, $\{R = z_o\} \subset \{1 + \vartheta^\top R = 0\} = \{\vartheta^\top R = -1\}$. Now
\begin{align*}
E[\vartheta^\top R / (1 + \vartheta^\top R)]
 &\le E[1_{\{R = z_o\}}\, \vartheta^\top R / (1 + \vartheta^\top R)] + E[1_{\{\vartheta^\top R > 0\}}\, \vartheta^\top R / (1 + \vartheta^\top R)] \\
 &\le -E[1_{\{R = z_o\}} / (1 + \vartheta^\top R)] + 1 = -\infty,
\end{align*}
since $P[R = z_o] > 0$. $\Box$
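The mechanism behind the proof of Theorem 9.6 is visible numerically (a sketch with a hypothetical finite support): the state $R = -\alpha$ carries positive mass, so $E[\vartheta R/(1+\vartheta R)]$ diverges to $-\infty$ as $\vartheta$ approaches the boundary point $1/\alpha$ of $\Theta$, and condition (9.2) holds automatically:

```python
# Hypothetical finite support: Z = {-0.25, 0.1, 0.3}, so alpha = 0.25.
states = [(-0.25, 0.2), (0.1, 0.5), (0.3, 0.3)]
boundary = 1.0 / 0.25                       # upper endpoint 1/alpha of Theta

def h(theta):
    """E[theta*R / (1 + theta*R)]."""
    return sum(p * theta * r / (1.0 + theta * r) for r, p in states)

# Approaching the boundary, the state R = -alpha makes 1 + theta*R -> 0+,
# so h diverges to -infinity and condition (9.2) is satisfied:
vals = [h(boundary - eps) for eps in (1e-1, 1e-3, 1e-6, 1e-9)]
assert all(later < earlier for earlier, later in zip(vals, vals[1:]))
assert vals[-1] < -1e6
```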

Alternatively, one can use for the theorem a result by Hakansson 1971 stating that the GOP can be chosen as an interior point. The theorem is generalised in (Korn and Schäl 1999, Theorem 4.22). It is known that the existence of a growth-optimal portfolio does not imply the existence of a numeraire portfolio (see Becherer 2001). We will give an example.


Example G. We may restrict attention to the case $d = 1$ (see Example F). Let the distribution of $R$ on $Z := [-1, 1]$ be given by
\[ E[g(R)] := \lambda \int_{-1}^{0} \tfrac{3}{2}(1 - z^2)\, g(z)\, dz + (1 - \lambda) \int_{0}^{1} \tfrac{3}{2}(1 - z^2)\, g(z)\, dz, \]

where we choose $\lambda > 0$ sufficiently small, e.g. $\lambda = 1/12$. Then
\[ E[R] = \lambda \int_{-1}^{0} \tfrac{3}{2}(1 - z^2) z\, dz + (1 - \lambda) \int_{0}^{1} \tfrac{3}{2}(1 - z^2) z\, dz = (1 - 2\lambda) \int_{0}^{1} \tfrac{3}{2}(1 - z^2) z\, dz = \tfrac{3}{8}(1 - 2\lambda) > 0. \]

[Obviously, by the choice $\lambda = \lambda^* = \tfrac{1}{2}$ one obtains an equivalent martingale measure (see below).] Now set
\[ f(\vartheta) := E\Big[ \frac{R}{1 + \vartheta R} \Big]; \]

then $f$ is strictly decreasing on $\Theta := [-1, 1]$, with $f(-1) \ge f(\vartheta) \ge f(1)$ for $\vartheta \in \Theta$. Using $(1 - z^2)/(1 \pm z) = 1 \mp z$, we compute
\begin{align*}
f(1) &= E\Big[\frac{R}{1 + R}\Big] = \lambda \int_{-1}^{0} \tfrac{3}{2}(1 - z) z\, dz + (1 - \lambda) \int_{0}^{1} \tfrac{3}{2}(1 - z) z\, dz = \tfrac{1}{4} - \tfrac{3}{2}\lambda > 0, \\
f(-1) &= E\Big[\frac{R}{1 - R}\Big] = \lambda \int_{-1}^{0} \tfrac{3}{2}(1 + z) z\, dz + (1 - \lambda) \int_{0}^{1} \tfrac{3}{2}(1 + z) z\, dz = \tfrac{5}{4} - \tfrac{3}{2}\lambda > 0.
\end{align*}

Hence there is no $\vartheta \in \Theta$ with $f(\vartheta) = 0$, and thus no numeraire portfolio (in the strict sense) exists. On the other hand, we have $\infty > f(-1) \ge f(\vartheta) = \frac{d}{d\vartheta} E[\ln(1 + \vartheta R)] \ge f(1) > 0$ for $-1 < \vartheta < 1$. Thus $\max_{\vartheta \in \Theta} E[\ln(1 + \vartheta R)] = E[\ln(1 + R)]$, and $\vartheta^* = 1$ defines the GOP.
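The computations of Example G can be double-checked by elementary numerical integration (a sketch using a plain midpoint rule; no quadrature library is assumed):

```python
lam = 1.0 / 12.0

def E(g, n=50000):
    """E[g(R)] for the mixture density of Example G (midpoint rule)."""
    total, h = 0.0, 1.0 / n
    for i in range(n):
        zl = -1.0 + (i + 0.5) * h          # node in [-1, 0]
        zr = (i + 0.5) * h                 # node in [0, 1]
        total += lam * 1.5 * (1 - zl * zl) * g(zl) * h
        total += (1 - lam) * 1.5 * (1 - zr * zr) * g(zr) * h
    return total

assert abs(E(lambda z: 1.0) - 1.0) < 1e-9                   # a probability density
assert abs(E(lambda z: z) - 0.375 * (1 - 2 * lam)) < 1e-6   # E[R] = (3/8)(1 - 2 lam)

f1  = E(lambda z: z / (1 + z))     # f(1)  = 1/4 - (3/2) lam = 1/8
fm1 = E(lambda z: z / (1 - z))     # f(-1) = 5/4 - (3/2) lam = 9/8
assert abs(f1 - 0.125) < 1e-5 and abs(fm1 - 1.125) < 1e-5
assert f1 > 0 and fm1 > 0          # f has no zero on Theta = [-1, 1]:
                                   # no numeraire portfolio; the GOP sits on the boundary
```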


10 Equivalent martingale measures and the numeraire portfolio

A well-known candidate for a fair price of a financial derivative described by the contingent claim $H$ is given by an EMM $Q$ (defined below) with positive density $dQ/dP$ according to
\[ E_Q[H / S_1^0] = E\Big[ \frac{dQ}{dP}\, H / S_1^0 \Big], \]
where $\frac{dQ}{dP} / S_1^0$ is a deflator (see Duffie 1992, p. 23).

Definition 10.1. A probability measure $Q$ is an equivalent martingale measure (EMM) if $Q$ has an (a.s.) positive density $dQ/dP$ such that
\[ E_Q[S_1^k / S_1^0] = E\Big[ \frac{dQ}{dP}\, S_1^k / S_1^0 \Big] = S_0^k / S_0^0, \quad 0 \le k \le d. \tag{10.1} \]
Here we state the general property, though we decided to consider only the case $S_t^0 \equiv 1$.

Proposition 10.2. (a) A portfolio $\rho \in \Pi$ is a numeraire portfolio if and only if $1/V_1^\rho = dQ^*/dP$ for some EMM $Q^*$.
(b) In the case of existence, the EMM $Q^*$ implied by a numeraire portfolio in the sense of (a) is unique.

Proof. (a) We make use of (9.1). For the 'only if' direction we get $E[dQ^*/dP] = 1$ from $dQ^*/dP = 1/V_1^\rho > 0$ and $E[S_1^k / V_1^\rho] = S_0^k$ for $k = 0$. Part (b) follows from the uniqueness of $V_1^\rho$ according to Proposition 3.2. $\Box$

From the "Fundamental Theorem of Asset Pricing" (see Back and Pliska 1990, Dalang et al. 1990, Schachermayer 1992, Rogers 1994, Jacod and Shiryaev 1998) we know that an EMM exists if and only if the no-arbitrage condition $I = \emptyset$ holds. If in addition the market is complete, then the EMM $Q$ is known to be unique and we may consider $L^{-1} := (dQ/dP)^{-1}$ as a contingent claim. Using the definition of completeness, we obtain $L^{-1} = V_1^\xi(x)$ for some strategy $\xi$ and some initial capital $x$. Then we obtain $x = E[L\, V_1^\xi(x)] = E[L\, L^{-1}] = 1$. Therefore we conclude that $V_1^\xi(x) = 1 + \rho^\top R$ where $\rho^k = \xi^k S_0^k$. From the preceding proposition we obtain the following result:

Corollary 10.3. Let $C = \Theta$ describe the natural constraints. If the market is complete and free of arbitrage opportunities, then a numeraire portfolio (in the strict sense) exists.

For the remainder of this section, we consider the case where $d = 1$ and (as in Example F)
\[ \mathrm{conv}(Z) = [-\alpha, \beta] \text{ for some } \alpha, \beta > 0 \text{ with } -\alpha, \beta \in Z. \tag{10.2} \]


The minimal martingale measure was introduced by Föllmer and Schweizer 1991 in the context of option hedging and pricing in incomplete financial markets. By the discrete-time Girsanov transformation one obtains the minimal martingale measure $Q_o$ according to $\frac{dQ_o}{dP} = b + a R$ (see Schweizer 1995). From the two conditions $E[\frac{dQ_o}{dP}] = 1$ and $E[\frac{dQ_o}{dP} R] = 0$ one can compute
\[ b = 1 + (\mu/\sigma)^2, \quad a = -\mu/\sigma^2, \quad \text{where } \mu := E[R] \text{ and } \sigma^2 := \mathrm{Var}[R]. \tag{10.3} \]
One difficulty with the Girsanov transformation in discrete time is that it may lead to a density with negative values. The resulting martingale measure is then called a signed martingale measure. However, in the case where $Z \subset \{d - 1, 0, u - 1\}$ for some $0 < d < 1 < u$, it is easy to see that $\frac{dQ_o}{dP} > 0$. On the other hand, we know from Theorem 9.6 and Corollary 9.5 that $\frac{dQ^*}{dP} = (1 + \rho R)^{-1} > 0$ always defines a (positive) martingale measure if $Z$ is finite. Thus the minimal martingale measure cannot coincide with the martingale measure $Q^*$ induced by the numeraire portfolio whenever $Q_o$ is not a positive measure but a signed one. It can be shown that the two measures coincide only in a binomial model, that is, only for a complete market (according to Harrison and Pliska 1981 and Jacod and Shiryaev 1998). A binomial model is characterised by the fact that
\[ R \in \{-\alpha, \beta\} \quad \text{a.s.} \tag{10.4} \]

Theorem 10.4. Let $Q^*$ be the measure defined by Proposition 10.2 and let $Q_o$ be the minimal martingale measure. Then $Q^* = Q_o$ if and only if (10.4) holds.

The proof is given in Korn and Schäl (1999, Theorem 5.18). The theorem is surprising because one always has $Q^* = Q_o$ in the important case of financial markets modelled by diffusion processes (see Becherer 2001, Korn 1998).
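The coefficients (10.3) and the possibility of a signed minimal martingale measure can be illustrated with a hypothetical three-point return distribution (an incomplete market); the distribution is deliberately chosen so peaked that the Girsanov tilt $b + aR$ becomes negative in the largest-return state:

```python
# Hypothetical three-point return distribution; the middle state carries
# most of the mass, which makes the Girsanov tilt extreme.
states = [(-0.01, 0.05), (0.10, 0.90), (0.20, 0.05)]

mu  = sum(p * r for r, p in states)
var = sum(p * (r - mu) ** 2 for r, p in states)
b = 1 + mu ** 2 / var                  # coefficients of (10.3)
a = -mu / var

dens = {r: b + a * r for r, p in states}                    # dQo/dP, state by state
assert abs(sum(p * dens[r] for r, p in states) - 1) < 1e-9  # E[dQo/dP] = 1
assert abs(sum(p * dens[r] * r for r, p in states)) < 1e-9  # E[(dQo/dP) R] = 0
assert min(dens.values()) < 0          # signed: Qo fails to be a probability here
# By contrast, dQ*/dP = 1/(1 + rho*R) > 0 always, so Q* != Qo in this model,
# consistent with Theorem 10.4 (equality holds only in the binomial case).
```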

11 Portfolio optimisation and the numeraire portfolio

So far we have mainly highlighted the role of the numeraire portfolio in the valuation of uncertain payment streams. However, we already saw that the numeraire portfolio is closely related to the growth-optimal portfolio. In this section we generalise this observation and show that, for a wide class of portfolio optimisation problems, the numeraire portfolio is the main ingredient of the solution.

Definition 11.1. (a) A strictly concave function $U$ on $(0, \infty)$ which is increasing, twice continuously differentiable and satisfies
\[ U'(0+) = \infty, \quad U'(\infty) = 0 \tag{11.1} \]
is called a utility function.


(b) We call the optimisation problem
\[ u(x) := \sup_{\pi \in \Pi} E[U(V_1^\pi(x))], \quad \text{where } V_1^\pi(x) = x \cdot V_1^\pi, \tag{11.2} \]

the portfolio problem of an investor with initial value $x$.

Popular utility functions are $U(x) = \log x$ and $U(x) = \frac{1}{\gamma} x^\gamma$ for $\gamma < 1$. The portfolio problem can be solved explicitly in a complete market setting:

Theorem 11.2. Let $\rho$ be the weak numeraire portfolio; define
\[ I(y) := (U')^{-1}(y), \quad X(y) := E[I(y / V_1^\rho) / V_1^\rho], \quad Y(x) := X^{-1}(x), \tag{11.3} \]
\[ B := I(Y(x) / V_1^\rho), \tag{11.4} \]
and assume
\[ X(y) < \infty. \tag{11.5} \]

(a) Then $E[U(V_1^\pi(x))] \le E[U(B)]$ for all admissible portfolios $\pi$, and $E[B / V_1^\rho] = x$.
(b) If the market is complete and $\rho$ is chosen as the numeraire portfolio, then $B$ is the optimal final value for the portfolio problem of an investor with initial wealth $x$.

Proof. Under assumption (11.5) it is easily shown (by dominated and/or monotone convergence) that $X(y)$ is strictly decreasing with $X(0) = \infty$, $X(\infty) = 0$. Thus the inverse $Y(x)$ exists and one can define $B$ as in (11.4). Further, by construction, $B$ satisfies
\[ E[B / V_1^\rho] = x, \tag{11.6} \]
while for all other admissible portfolios $\pi$ we have
\[ E[V_1^\pi(x) / V_1^\rho] \le x. \tag{11.7} \]

The following property of a concave function,
\[ U(x) \le U(y) + U'(y)(x - y), \quad y, x > 0, \tag{11.8} \]
implies that
\[ U(x) \le U(I(y)) + y(x - I(y)), \quad y, x > 0. \tag{11.9} \]
From (11.9) and (11.4)–(11.7) we then obtain
\[ E[U(V_1^\pi(x))] \le E[U(B)] + Y(x)\big( E[V_1^\pi(x)/V_1^\rho] - E[B/V_1^\rho] \big) \le E[U(B)]. \tag{11.10} \]

If the market is complete, then there exist a portfolio $\pi^B$ and an initial value $x_B$ generating the final payment $B$, i.e. $V_1^{\pi^B}(x_B) = B$. Now $x_B = E[B / V_1^\rho] = x$ by (11.6), since $\rho$ is a numeraire portfolio (see the proof of Proposition 9.2). Thus $\pi^B$ is a solution to the portfolio problem. $\Box$

Example H. (a) Note that for $U(x) = \log x$ we recover from Theorem 11.2 that
\[ B = x V_1^\rho, \tag{11.11} \]
which restates the relation between the growth-optimal and the numeraire portfolio.


(b) For $U(x) = \frac{1}{\gamma} x^\gamma$, Theorem 11.2 yields the optimal final wealth
\[ B = x\, (V_1^\rho)^{\gamma'} / E[(V_1^\rho)^{\gamma \cdot \gamma'}], \quad \text{where } \gamma' = \frac{1}{1 - \gamma}. \tag{11.12} \]
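Theorem 11.2 and formula (11.12) can be sanity-checked numerically in a complete binomial market (a sketch; all parameters are hypothetical): the candidate $B$ meets the budget constraint $E[B/V_1^\rho] = x$ and its expected utility dominates that of every buy-and-hold wealth $x(1 + \pi R)$:

```python
# Complete binomial market, hypothetical parameters.
alpha, beta, pu = 0.2, 0.25, 0.6
rho = (pu * beta - (1 - pu) * alpha) / (alpha * beta)   # numeraire portfolio
V = [(1 + rho * beta, pu), (1 - rho * alpha, 1 - pu)]   # V_1^rho in both states

gamma = -1.0                       # power utility U(w) = w^gamma / gamma
gp = 1.0 / (1.0 - gamma)           # gamma'
x = 1.0                            # initial wealth

norm = sum(p * v ** (gamma * gp) for v, p in V)
B = [x * v ** gp / norm for v, p in V]                  # formula (11.12), state by state

U = lambda w: w ** gamma / gamma
EUB = sum(p * U(bb) for bb, (v, p) in zip(B, V))        # E[U(B)]

# budget constraint (11.6): E[B / V_1^rho] = x
assert abs(sum(p * bb / v for bb, (v, p) in zip(B, V)) - x) < 1e-12

for i in range(101):               # compare with direct investment strategies
    pi = -2.0 + i * 0.04
    w = [x * (1 + pi * beta), x * (1 - pi * alpha)]
    if min(w) > 0:                 # admissible buy-and-hold strategy
        assert sum(p * U(wi) for wi, (v, p) in zip(w, V)) <= EUB + 1e-12
```

In this two-state market $B$ is itself attainable by a constant portfolio, so the dominance in the last loop is exactly statement (a) of Theorem 11.2.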

Remark 11.3. Portfolio optimisation problems in incomplete markets can be solved by duality methods in a similar way, as in Kramkov and Schachermayer 1999. There, the problem is transformed to auxiliary markets which are complete; the portfolio problem in these markets is again solved with the help of the numeraire portfolio.

The optimisation problem (11.2) makes sense only if its value function $u$ is finite. Due to the concavity of $U$, if $u(x) < +\infty$ for some $x > 0$, then $u(x) < +\infty$ for all $x > 0$, and $u$ is continuous, concave and increasing. When $u(x) = \infty$ for some (equivalently, all) $x > 0$, there are two cases: either the supremum in (11.2) is not attained, so there is no solution; or, if there exists a portfolio with infinite expected utility, the concavity of $U$ implies that there are infinitely many of them.

We now show that one cannot do utility optimisation if the NUIP condition fails. We can use the same arguments as at the beginning of Section 4: $a \vartheta^\top R$ is unbounded as $a \to \infty$ on the set $\{\vartheta^\top R > 0\}$, where $P[\vartheta^\top R > 0] > 0$. Thus
\[ u(x) \ge \lim_{a \to \infty} E[U(x V_1^{a\vartheta})] = U(x) \cdot P[\vartheta^\top R = 0] + U(\infty) \cdot P[\vartheta^\top R > 0], \]
and we have proved the following result (see Karatzas and Kardaras 2007, Prop. 4.19):

Proposition 11.4. Assume that the NUIP condition fails. If $U(\infty) = \infty$, then $u(x) = \infty$ for all $x > 0$. If $U(\infty) < \infty$, then there is no solution.

12 Additional remarks

1 Vasicek 1977 was perhaps the first to use the concept of a numeraire portfolio for an equilibrium characterisation of the term structure. In the language of Long 1990 and of the present paper, Vasicek constructed a numeraire portfolio investing in two assets: the short rate and a long rate.

2 By use of the numeraire portfolio we can replace the change of measure $P \to Q$, where $Q$ is an EMM, by the change of numeraire $\{S_t^0\} \to \{V_t^\rho\}$ while sticking to the original probability measure $P$. Here $P$ models the 'true world' probability, which can be investigated by statistical methods. Long 1990, for example, studied the application of measuring abnormal stock returns by discounting NYSE stock returns by empirical proxies of the numeraire portfolio.

3 Further properties and applications in the diffusion case, where the numeraire portfolio is mean-variance efficient and therefore related to CAPM theory, can also be found in Bajeux-Besnainou and Portait (1997) and Johnson (1996).


4 De Santis, Gerard and Ortu 2000 are interested in the case where no self-financing trading strategy has strictly positive value, and introduce the concept of a generalised numeraire portfolio based on non-self-financing strategies.

5 A further advantage of the present discrete-time market is that under the natural constraints there exists only one concept of no-arbitrage (Example B). In particular, it cannot happen, as in continuous-time models, that the weak numeraire portfolio exists but no equivalent martingale measure does. As mentioned above, a numeraire portfolio can be used for pricing derivative securities. Platen 2002 argues that this can be done even in models where an equivalent martingale measure is absent, and has developed a benchmark framework to do so (see Platen 2006).

6 Theorem 9.6 is generalised by Korn, Oertel and Schäl 2003 to a market modelled by a jump-diffusion process where the state space of the jumps is finite.

7 How to apply the results for the one-period model to the multi-period model is explained in Schäl 2000.

8 The concept of a numeraire portfolio (in the strict sense) is extended to financial markets with proportional transaction costs by Sass and Schäl 2009.

Bibliography

[1] P. Artzner (1997) On the numeraire portfolio. In: Mathematics of Derivative Securities, ed.: M.A.H. Dempster and S.R. Pliska, Cambridge Univ. Press, pp. 216–226.
[2] F. Back and S.R. Pliska (1990) On the fundamental theorem of asset pricing with an infinite state space. J. of Mathematical Economics 20, pp. 1–18.
[3] I. Bajeux-Besnainou, R. Portait (1997) The numeraire portfolio: a new perspective on financial theory. The European Journal of Finance 3, pp. 291–309.
[4] D. Becherer (2001) The numeraire portfolio for unbounded semimartingales. Finance and Stochastics 5, pp. 327–341.
[5] H. Bühlmann and E. Platen (2003) A discrete time benchmark approach for insurance and finance. ASTIN Bulletin 33, pp. 153–172.
[6] M.M. Christensen and K. Larsen (2007) No arbitrage and the growth optimal portfolio. Stoch. Anal. Appl. 25, pp. 255–280.
[7] M.M. Christensen and E. Platen (2005) A general benchmark model for stochastic jumps. Stochastic Analysis and Applications 23, pp. 1017–1044.
[8] R.C. Dalang, A. Morton and W. Willinger (1990) Equivalent martingale measures and no-arbitrage in stochastic securities market models. Stochastics and Stochastic Reports 29, pp. 185–201.
[9] F. Delbaen and J. Haezendonck (1989) A martingale approach to premium calculation principles in an arbitrage free market. Insur. Math. Econ. 8, pp. 269–277.


[10] G. De Santis, B. Gerard, F. Ortu (2000) Generalized Numeraire Portfolios. Working paper, University of California, Anderson Graduate School of Management.
[11] T. Dijkstra (1998) On numeraires and growth-optimum portfolios. Working paper, University of Groningen.
[12] D. Duffie (1992) Dynamic Asset Pricing Theory. Princeton University Press.
[13] H. Föllmer and M. Schweizer (1991) Hedging of contingent claims under incomplete information. In: M.H.A. Davis and R.J. Elliott (eds.) Applied Stochastic Analysis, Stochastic Monographs 5, Gordon and Breach, London, pp. 389–414.
[14] T. Goll, J. Kallsen (2000) Optimal portfolios for logarithmic utility. Stochastic Processes Appl. 89, pp. 31–48.
[15] T. Goll and J. Kallsen (2003) A complete explicit solution to the log-optimal portfolio problem. Advances in Applied Probability 13, pp. 774–779.
[16] N.H. Hakansson (1971) Optimal entrepreneurial decisions in a completely stochastic environment. Management Science 17, pp. 427–449.
[17] N.H. Hakansson and W.T. Ziemba (1995) Capital Growth Theory. In: R. Jarrow et al. (eds.) Handbook in Operations Research & Management Science, Volume 9, Finance. Amsterdam: North Holland.
[18] J.M. Harrison and D.M. Kreps (1979) Martingales and arbitrage in multiperiod securities markets. J. Economic Theory 20, pp. 381–408.
[19] J.M. Harrison and S.R. Pliska (1981) Martingales and stochastic integrals in the theory of continuous trading. Stoch. Processes Appl. 11, pp. 215–260.
[20] K. Hellwig (1996) Portfolio selection under the condition of value preservation. Review of Quantitative Finance and Accounting 7, pp. 299–305.
[21] J. Jacod and A.N. Shiryaev (1998) Local martingales and the fundamental asset pricing theorems in the discrete-time case. Finance Stochast. 3, pp. 259–273.
[22] B.E. Johnson (1996) The pricing properties of the optimal growth portfolios: extensions and applications. Working paper, Stanford University.
[23] J. Kallsen (2000) Optimal portfolios for exponential Lévy processes. Math. Methods Op. Res. 51, pp. 357–374.
[24] I. Karatzas and C. Kardaras (2007) The numéraire portfolio in semimartingale financial models. Finance Stochast. 11, pp. 447–493.
[25] C. Kardaras (2006) The numéraire portfolio and arbitrage in semimartingale models of financial markets. PhD dissertation, Columbia University.
[26] I. Karatzas and S.G. Kou (1996) On the pricing of contingent claims under constraints. Ann. Appl. Probab. 6, pp. 321–369.
[27] R. Korn (1997) Value preserving portfolio strategies in continuous-time models. Math. Methods Op. Res. 45, pp. 1–43.
[28] R. Korn (1997) Optimal portfolios. World Scientific, Singapore.
[29] R. Korn (1998) Value preserving portfolio strategies and the minimal martingale measure. Math. Methods Op. Res. 47, pp. 169–179.
[30] R. Korn (2000) Value preserving portfolio strategies and a general framework for local approaches to optimal portfolios. Mathematical Finance 10, pp. 227–241.
[31] R. Korn and E. Korn (2001) Option pricing and portfolio optimization. American Mathematical Society, Providence.


[32] R. Korn, F. Oertel and M. Schäl (2003) The numeraire portfolio in financial markets modeled by a multi-dimensional jump diffusion process. Decis. Econom. Finance 26, pp. 153–166.
[33] R. Korn and M. Schäl (1999) On value preserving and growth optimal portfolios. Math. Methods Op. Res. 50, pp. 189–218.
[34] D. Kramkov and W. Schachermayer (1999) The asymptotic elasticity of utility functions and optimal investment in incomplete markets. The Annals of Applied Probability 9, pp. 904–950.
[35] J. Long (1990) The numeraire portfolio. J. Finance 44, pp. 205–209.
[36] J.A. Nielsen and K. Sandmann (1995) Equity-linked life insurance: A model with stochastic interest rates. Insurance: Mathematics and Economics 16, pp. 225–253.
[37] E. Platen (2001) A minimal financial market model. In: Trends in Mathematics. Birkhäuser Verlag, pp. 293–301.
[38] E. Platen (2002) Arbitrage in continuous complete markets. Adv. Appl. Probab. 34, pp. 540–558.
[39] E. Platen (2006) A benchmark approach to finance. Mathematical Finance 16, pp. 131–151.
[40] S.R. Pliska (1997) Introduction to Mathematical Finance. Blackwell Publishers, Malden, USA, Oxford, UK.
[41] L.C.G. Rogers (1994) Equivalent martingale measures and no-arbitrage. Stochastics and Stochastic Reports 51, pp. 41–49.
[42] J. Sass and M. Schäl (2009) The numeraire portfolio under proportional transaction costs. Working paper.
[43] W. Schachermayer (1992) A Hilbert space proof of the fundamental theorem of asset pricing in finite discrete time. Insurance: Mathematics and Economics 11, pp. 249–257.
[44] M. Schäl (1999) Martingale measures and hedging for discrete-time financial markets. Math. Oper. Res. 24, pp. 509–528.
[45] M. Schäl (2000a) Portfolio optimization and martingale measures. Mathematical Finance 10, pp. 289–304.
[46] M. Schäl (2000b) Price systems constructed by optimal dynamic portfolios. Math. Methods Op. Res. 51, pp. 375–397.
[47] M. Schweizer (1995) Variance-optimal hedging in discrete time. Math. Oper. Res. 20, pp. 1–32.
[48] O. Vasicek (1977) An equilibrium characterization of the term structure. J. Financial Economics 5, pp. 177–188.
[49] T. Wiesemann (1996) Managing a value-preserving portfolio over time. European Journal of Operations Research 91, pp. 274–283.

Author information

Ralf Korn, Fraunhofer-Institut für Techno- und Wirtschaftsmathematik, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany. Email: [email protected]

Manfred Schäl, Inst. Angew. Math., Endenicher Allee 60, 53115 Bonn, Germany. Email: [email protected]

Radon Series Comp. Appl. Math 8, 327–345

© de Gruyter 2009

A worst-case approach to continuous-time portfolio optimisation

Ralf Korn and Frank Thomas Seifried

Abstract. We survey the main ideas, results and methods behind the worst-case approach to portfolio optimisation in continuous time. This covers the indifference approach, the HJB-system approach and the very recent martingale approach. We illustrate the difference to conventional portfolio optimisation with explicitly solved examples.

Key words. Optimal portfolios, worst-case approach, utility indifference, HJB equation.

AMS classification. 93E20

1 Introduction

Stock price models that abandon the continuity of sample paths to include the possibility of asset price jumps have (re-)gained enormous interest in recent years with the introduction of Lévy processes to financial mathematics (see the monograph [1] and its impressive list of references). The main motivation is the inability of the standard models based on geometric Brownian motion to explain large stock price moves, which are often observed in the markets. In particular, sudden price falls of the whole market, so-called crashes, are not incorporated into the standard continuous-path framework. While many of the recently introduced Lévy process models exhibit a very good fit to observed market prices, they have the drawback that their analytical handling is not easy. Even more seriously, estimating the necessary input parameters from market data is not at all trivial, and sometimes not even very stable from a statistical point of view.

Motivated by this, and also by the desire to be able to model market crashes, [3] introduced their so-called crash model. Its distinctive feature is that stock prices are assumed to follow geometric Brownian motions in normal times; at a crash time they suddenly fall by an unknown factor, which is assumed to be bounded by an explicitly known constant. Besides the height of the crash, the time and the number of crashes up to a given time horizon are also unknown, but are not explicitly modelled in a stochastic way. [3] obtain so-called worst-case option prices by identifying the crash scenario that generates the worst case with respect to the option price.

In the context of portfolio optimisation, looking at the worst case is also an interesting alternative to the focus on expected utility or on the mean-variance criterion. Of course, such a consideration of the worst case needs two essential components: an exact definition of the worst case and a concept of how this worst case enters the portfolio decisions. Examples of worst-case approaches that have appeared in the continuous-time portfolio optimisation literature are [14], [12], [13] and [2]. These approaches typically focus on the parameters of asset prices, i.e. on the market coefficients. The worst case is then modelled as the parameter setting leading to an optimal portfolio process with the lowest utility. In [14] the market is explicitly regarded as an opponent to the investor that chooses the market coefficients; however, the price processes still remain diffusion processes. [13] formalises the idea by considering a whole set of probability measures that are candidates to govern the evolution of stock prices. In this setting, he determines the portfolio strategy that yields the highest lower bound for the expected utility from terminal wealth over all these possible probability measures.

In contrast to the approaches mentioned above, [8] have taken up the framework of [3]. They focus on the uncertainty of the number, time and height of possible market crashes. By an indifference argument they show how to derive a characterisation of the worst-case optimal portfolio process. This approach is extended to a more general market setting by [6], and to problems including insurance risk processes in [5]. [7] relate the indifference approach of [8] to a system of inequalities that they call the HJB-system and thereby obtain optimality of the worst-case portfolio process in a wider class of strategies.

In this survey paper, we will focus on the worst-case approach in the sense of [8]. The indifference approach is considered in Chapter 2, the HJB-system approach is the subject of Chapter 3, and a new approach (which we call the martingale approach) is presented in Chapter 4. Some hints on open problems close the paper.

First author: Ralf Korn was supported by the Center for Mathematical and Computational Modeling (CM)² at the University of Kaiserslautern. Second author: Frank Seifried was supported by the Center for Mathematical and Computational Modeling (CM)² at the University of Kaiserslautern.

2 The indifference approach to worst-case portfolio optimisation in the log-utility case

2.1 Motivation and model

In this section we consider the simplest case of the continuous-time worst-case market model introduced by [3] and taken up by [8]. We look at a market consisting of a riskless bond and one risky security whose price dynamics are given by

dP0(t) = P0(t) r dt,   P0(0) = 1   (2.1)
dP1(t) = P1(t) [b dt + σ dW(t)],   P1(0) = p1   (2.2)

with constant market coefficients b > r and σ ≠ 0 in normal times. At a so-called crash time τ, which is modeled as a stopping time, the stock price can suddenly fall by a relative amount k with 0 ≤ k ≤ k* < 1. Here, k* is assumed to be the biggest possible crash height. Thus in a crash scenario (τ, k) we shall have

P1(τ) = (1 − k)P1(τ−).

In this section we restrict ourselves to the case that at most one such crash can happen before the investment horizon T. We assume the crash to be unknown a priori, but observable, so an investor will specify his actions completely by a pre-crash portfolio strategy π and a post-crash portfolio strategy π̄, both of which we assume to be progressive processes. For ease of exposition, we assume throughout that pre-crash strategies are bounded and continuous. As an abbreviation we introduce π := (π, π̄). Then for a possible crash scenario (τ, k) the dynamics of the investor's wealth process X = X^π = {X^π(t) : t ∈ [0, T]} are governed by the stochastic differential equation

dX^π(t)/X^π(t) = (r + π(t)(b − r)) dt + π(t)σ dW(t) on [0, τ),   X^π(0) = x,
X^π(τ) = (1 − π(τ)k) X^π(τ−),
dX^π(t)/X^π(t) = (r + π̄(t)(b − r)) dt + π̄(t)σ dW(t) on (τ, T],

where x > 0 denotes the initial wealth. Thus, in accordance with the intended interpretation, the pre-crash strategy π is valid up to and including the crash time, whereas π̄ is only applied starting immediately afterwards. All portfolio strategies π that guarantee a corresponding non-negative wealth process starting from an initial wealth of x > 0 form the class A(x) of admissible strategies with initial wealth x. If we consider only the time interval [t, T], we use the obviously modified notation A(t, x) for the class of admissible strategies starting at time t with wealth x > 0. Before we state the worst-case portfolio problem, we define X̃^π = {X̃^π(t) : t ∈ [0, T]} as the wealth process in the standard crash-free market model given by equations (2.1), (2.2) that corresponds to a given portfolio process π.

Definition 2.1 (Worst-Case Portfolio Problem). Let U be an R-valued strictly concave, increasing and differentiable function. U will be called a utility function.

1. The problem

sup_{π ∈ A(x)} inf_{0 ≤ τ ≤ T, 0 ≤ k ≤ k*} E[U(X^π(T))]   (P)

with final wealth X^π(T) in the case of a crash of size k at time τ given by

X^π(T) = (1 − π(τ)k) X̃^π(T)

is called the worst-case portfolio problem with value function

ν1(t, x) := sup_{π ∈ A(t,x)} inf_{t ≤ τ ≤ T, 0 ≤ k ≤ k*} E[U(X^π(T))].

2. We denote by ν0(t, x) the value function of the optimisation problem in the standard (crash-free) Black–Scholes setting; it is given by

ν0(t, x) = sup_{π ∈ A(t,x)} E[U(X̃^π(T))].


To allow for explicit computations, we consider the special case of the logarithmic utility function U(x) = ln(x), x > 0, in this chapter. We then have the following representation of the value function in the Black–Scholes setting (see e.g. [4]):

ν0(t, x) = ln(x) + r(T − t) + (1/2)((b − r)/σ)² (T − t)   (2.3)

as well as the corresponding optimal portfolio process

π* = (b − r)/σ².   (2.4)
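The closed-form expressions (2.3) and (2.4) are easy to verify numerically: for a constant proportion π in the crash-free model, the expected log-utility of terminal wealth is ln(x) + (r + π(b − r) − (1/2)σ²π²)(T − t), a concave quadratic in π. A minimal sketch (the parameter values below are purely illustrative):

```python
# Numerical check of (2.3)-(2.4): for a constant proportion pi, expected
# log-wealth in the crash-free model is ln(x) + (r + pi(b-r) - 0.5 sigma^2 pi^2)(T-t).
import math

r, b, sigma, T, x = 0.05, 0.20, 0.40, 10.0, 1.0   # illustrative data

def expected_log_utility(pi, t=0.0):
    """E[ln X(T)] for the constant strategy pi in the crash-free model."""
    return math.log(x) + (r + pi * (b - r) - 0.5 * sigma**2 * pi**2) * (T - t)

pi_star = (b - r) / sigma**2                                 # (2.4)
v0 = math.log(x) + r * T + 0.5 * ((b - r) / sigma)**2 * T    # (2.3)

# pi_star should beat every strategy on a fine grid and attain the value (2.3)
grid = [i / 1000 for i in range(-2000, 5001)]
best = max(grid, key=expected_log_utility)
assert abs(best - pi_star) < 1e-3
assert abs(expected_log_utility(pi_star) - v0) < 1e-12
```

The check confirms that the quadratic is maximised exactly at the Merton fraction and that the maximum coincides with (2.3).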

We motivate the basic ideas of our worst-case concept by looking at two extreme strategies. Note first that the (worst-case) optimal post-crash strategy is π*. This is simply due to the fact that this is the optimal portfolio process in the then relevant market. If we also chose the portfolio process π* before the crash (provided that it satisfies π* < 1/k*), the worst case would be a crash of maximal height k* (recall that due to the assumption b > r the log-optimal portfolio process is positive!). One can easily verify that the exact time of this crash would have no impact on the resulting final expected utility. It can therefore be obtained from the worst crash happening immediately and equals

ν0(t, (1 − π*k*)x) = ln(x) + r(T − t) + (1/2)((b − r)/σ)² (T − t) + ln(1 − π*k*).   (2.5)

If, instead, we consider a very prudent investor who chooses π(t) = 0 before the crash, the worst case for him is the no-crash scenario. To see this, note that a crash would not harm the investor; moreover, after a crash he could even switch to the strategy π* (such a switch would result in a higher expected terminal utility!). In the no-crash scenario, however, he can never benefit from the knowledge that no further crash can happen. His corresponding final utility would simply be

E[ln(x exp(r(T − t)))] = ln(x) + r(T − t).   (2.6)

Comparing the representations (2.5) and (2.6) one can draw the following conclusions:

• It depends on the remaining investment time T − t which of the two extreme strategies yields the higher worst-case bound.

• While the first strategy takes too much risk (especially when the remaining investment time is small), the second one is too risk averse (especially when the remaining investment time is big). An optimal strategy should in a way balance this out.

• A portfolio process that consists of two constant parts π and π̄ cannot be optimal with respect to the worst-case criterion.


2.2 Indifference strategies: characterisation and optimality

We take up the conclusions from the end of the preceding section and look for a portfolio process that attains a balance between good performance of the wealth process when no crash happens and a (just) acceptable loss in the crash scenario. For this we try to find a pre-crash portfolio process making us indifferent between the two scenarios:

• The worst crash happens immediately.
• No crash occurs at all.

Such a portfolio process π̂ = (π̂, π*) has to satisfy the following identity between the expected utilities corresponding to the two different scenarios:

ν0(t, (1 − π̂(t)k*)x) = E_{t,x}[ln(X̃^π̂(T))].

Applying Itô's formula to the right-hand side of this equality and using the explicit form of ν0(t, x) on the left-hand side results in

ln(x) + r(T − t) + (1/2)((b − r)/σ)²(T − t) + ln(1 − π̂(t)k*)
  = ln(x) + r(T − t) + E[∫_t^T (π̂(s)(b − r) − (1/2)π̂(s)²σ²) ds] + E[∫_t^T π̂(s)σ dW(s)].   (2.7)

If we assume existence of a deterministic indifference portfolio process π̂, the stochastic integral has mean zero and the expectation in front of the ds-integral can be dropped. Eliminating identical terms on both sides of equation (2.7) yields

(1/2)((b − r)/σ)²(T − t) + ln(1 − π̂(t)k*) = ∫_t^T (π̂(s)(b − r) − (1/2)π̂(s)²σ²) ds.

Assuming that π̂ is differentiable, differentiation of this identity with respect to t leads to the ordinary differential equation

π̂′(t) = −(σ²/(2k*)) [1 − π̂(t)k*] [π̂(t) − π*]²   (2.8)

while the obvious final condition

π̂(T) = 0   (2.9)

follows directly from (2.7). It is now straightforward to verify that there is a unique solution to equations (2.8) and (2.9). Even more, one can directly prove that the strategy determined by (2.8) and (2.9) solves the worst-case problem. The following result is taken from [8], but we will give a somewhat shorter proof.

Theorem 2.2 (Worst-Case Optimal Portfolio for Logarithmic Utility). The portfolio process π̂ = (π̂, π*) determined by (2.8), (2.9) and (2.4) solves the worst-case investment problem (P) with logarithmic utility.


Proof. Let π̂ be the unique pre-crash portfolio process determined by (2.8), (2.9).

1. We first show that the worst-case scenario for π̂ is attained by a jump of maximum size k* at any time t ∈ [0, T]. This obviously is the case if the corresponding expectation function

ν̂(t, X^π̂(t)) := ν0(t, (1 − π̂(t)k*)X^π̂(t))

is a martingale. However, by the explicit form of ν0(t, x) given in equation (2.3) and the fact that π̂ satisfies (2.8), (2.9), we obtain

ν0(t, (1 − π̂(t)k*)X^π̂(t))
  = ln(x(1 − π̂(0)k*)) + (r + (1/2)((b − r)/σ)²) T
    + ∫_0^t (π̂(s)(b − r) − (1/2)σ²π̂(s)² − (1/2)σ²(π*)² − π̂′(s)k*/(1 − π̂(s)k*)) ds
    + ∫_0^t σπ̂(s) dW(s)
  = ln(x(1 − π̂(0)k*)) + (r + (1/2)((b − r)/σ)²) T + ∫_0^t σπ̂(s) dW(s),

since the ds-integrand vanishes by the differential equation (2.8) together with (2.4). As the integrand of the stochastic integral is deterministic and bounded, the martingale property is established.

2. Let now π = (π, π*) be an admissible portfolio process with a better worst-case performance than π̂; without loss of generality suppose that the portfolio process π* is used in the Black–Scholes setting after the crash. Due to continuity, the value π(0) is a constant. Thus, to obtain a higher worst-case bound than π̂, it must satisfy

π(0) < π̂(0).

Further, as we have

E[ln(X̃^π(T))] = ln(x) + rT + E[∫_0^T (π(s)(b − r) − (1/2)σ²π(s)²) ds]
  ≤ ln(x) + rT + ∫_0^T (E[π(s)](b − r) − (1/2)σ²(E[π(s)])²) ds   (2.10)

by Jensen's inequality, due to continuity there has to be a smallest deterministic time t̄ ∈ [0, T] with

E[π(t̄)] ≥ E[π̂(t̄)] = π̂(t̄)

if in the no-crash scenario the portfolio process π delivers a higher worst-case bound than π̂. Note that due to the indifference construction π̂ attains its worst-case bound also in the no-crash scenario.


We now look at the worst-crash scenario at time t̄. In this situation we obtain

E[ln(X^π(T))]
  = E[ln(X^π(t̄))] + ∫_t̄^T (r + π*(b − r) − (1/2)σ²(π*)²) ds
  = E[ln(X̃^π(t̄))] + E[ln(1 − π(t̄)k*)] + ∫_t̄^T (r + π*(b − r) − (1/2)σ²(π*)²) ds
  ≤ E[ln(X̃^π(t̄))] + ln(1 − E[π(t̄)]k*) + ∫_t̄^T (r + π*(b − r) − (1/2)σ²(π*)²) ds
  ≤ E[ln(X̃^π̂(t̄))] + ln(1 − π̂(t̄)k*) + ∫_t̄^T (r + π*(b − r) − (1/2)σ²(π*)²) ds
  = E[ln(X^π̂(T))].

Note that in the first inequality we have used Jensen's inequality. The second inequality is a consequence of (2.10), the fact that for π̂ (2.10) is satisfied with equality, and of course the defining property of t̄. Hence, we arrive at a contradiction to the assumption that π attains a higher worst-case bound than π̂. □

Remark 2.3 (Analysis of the Worst-Case Optimal Portfolio Process). From the explicit form of the differential equation (2.8) and (2.9) for the worst-case optimal pre-crash strategy π̂, we can see that

0 ≤ π̂(t) ≤ min{π*, 1/k*} =: π_min for t ∈ [0, T].

More precisely, under the change of variable t → T − t the differential equation (2.8), (2.9) takes the form

h′(t) = (σ²/(2k*)) [1 − h(t)k*][h(t) − π*]²,   h(0) = 0

with π̂(t) = h(T − t). It is then clear that starting in 0, in particular below π_min, h cannot cross either 0, π* or 1/k*. Therefore, even in the case π* > 1/k*, the worst-case optimal portfolio process avoids a negative wealth at any time.

As constant portfolio processes often play a very prominent role in portfolio optimisation, one might ask for the best constant portfolio process under the worst-case setting. As it is clear that the best constant portfolio process after the crash is π̄(t) = π*, we refer to a constant (worst-case) portfolio process as a pair of the form

π(t) = (π(t), π̄(t)) = (c, π*) for all t ∈ [0, T]

with c a constant. As shown in [8], the optimal constant c depends on the time horizon T. We therefore introduce the optimal constant portfolio process as a function of time

c(t) (where the time variable actually denotes the time horizon) as

c(t) = [ (1/2)((b − r)/σ² + 1/k*) − √( (1/4)((b − r)/σ² − 1/k*)² + 1/(σ²t) ) ]⁺.

Obviously, this constant c(t) converges towards π_min as t → ∞. Note in particular that

c(t) = 0  ⟺  b − r ≤ k*/t,

i.e. c(t) vanishes if the "crash height per time unit" exceeds the excess return of the stock.

Figure 2.1. Worst-case optimal strategy (black lines) and worst-case constant best strategy as a function of the time horizon (grey line).

Example 2.4. To demonstrate the performance of the worst-case strategy together with the worst-case optimal constant portfolio process, we look at an example where we have chosen the following data:

r = 0.05, b = 0.20, σ = 0.4, k* = 0.2, T = 10.

As long as no crash has happened, the worst-case optimal portfolio process π̂ is given by the black curved line in Figure 2.1. After the jump the investor has to switch to the black line parallel to the x-axis at the level π̄ = π* = 0.9375. For reasons of comparison, the grey line shows the optimal constant portfolio c(T − t) which would be chosen if the portfolio problem started at time t. One clearly sees that the constant portfolio function c differs from the worst-case optimal portfolio π̂: it is below the worst-case optimal portfolio close to the investment horizon, and above it if the investment horizon is far away.
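Since (2.8)–(2.9) has no closed-form solution, the strategy of Example 2.4 has to be computed numerically, e.g. by integrating the h-equation of Remark 2.3 with a standard Runge–Kutta scheme. A sketch (only the market data are taken from the example; the solver and step count are free choices):

```python
# Solve (2.8)-(2.9) for Example 2.4 via the substitution h(s) = pi_hat(T - s):
#   h'(s) = sigma^2/(2 k*) * (1 - h(s) k*) * (h(s) - pi*)^2,  h(0) = 0.
r, b, sigma, k_star, T = 0.05, 0.20, 0.40, 0.20, 10.0
pi_star = (b - r) / sigma**2          # post-crash (Merton) strategy, = 0.9375

def rhs(h):
    return sigma**2 / (2 * k_star) * (1 - h * k_star) * (h - pi_star)**2

# classical 4th-order Runge-Kutta on [0, T]
n = 10_000
dt, h = T / n, 0.0
path = [h]
for _ in range(n):
    k1 = rhs(h); k2 = rhs(h + 0.5*dt*k1); k3 = rhs(h + 0.5*dt*k2); k4 = rhs(h + dt*k3)
    h += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    path.append(h)

# h(T) = pi_hat(0); it stays in [0, min(pi*, 1/k*)] and grows with the horizon
assert all(0.0 <= v <= min(pi_star, 1/k_star) + 1e-12 for v in path)
assert all(path[i] <= path[i+1] for i in range(n))
print(round(path[-1], 4))             # pre-crash fraction pi_hat(0)
```

With these data one obtains π̂(0) ≈ 0.72, below both the post-crash Merton fraction π* = 0.9375 and the optimal constant c(10) ≈ 0.79, in line with Figure 2.1.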

2.3 Indifference strategies: generalisations

The central result of the previous section can be generalised in various ways by simply using indifference arguments. Here we list some of them.

335

Worst-case approach

A finite number of possible crashes

In [6] we allow for more than one crash until the time horizon T. In such a situation of at most n crashes, a portfolio process is specified by an (n+1)-vector π = (π0, ..., πn), where πj is the portfolio process that will be used by the investor as long as at most j crashes can still occur. Then an optimal portfolio process π̂ exists and is given as the solution of the following sequence of ordinary differential equations for j = 1, ..., n:

π̂0(t) = (b − r)/σ²,
π̂j′(t) = −(σ²/(2k*)) [1 − π̂j(t)k*][π̂j(t) − π̂j−1(t)]²,   π̂j(T) = 0.

Note that each such differential equation has a unique solution that satisfies

0 ≤ π̂j(t) ≤ min{π̂j−1(t), 1/k*} for t ∈ [0, T], j = 1, ..., n.

Indeed, the arguments used to ensure the corresponding properties in the one-crash setting are valid here, too.

More general utility functions

If instead of the logarithmic utility function we choose a general utility function U, then [6] and [5] contain verification results that are only valid under very restrictive assumptions. These assumptions are hard to verify, and they are by far not necessary conditions. However, by restricting to deterministic strategies it can be shown that differential equations similar to (2.8), (2.9) characterise the worst-case optimal deterministic portfolio process π̂. In the case of the negative exponential utility function

U(x) = 1 − exp(−λx), x ∈ R, for some λ > 0,

there is even a completely explicit result. By assuming r = 0, b > r and allowing for a possibly negative wealth, it is shown in [5] that we have:

Theorem 2.5 (Worst-Case Optimal Portfolio for Exponential Utility). The optimal deterministic amount of money A(t) invested in the stock before the crash is given by

A(t) = A* − 2k*/(λσ²(T − t + 2k*/b)) for t ∈ [0, T]

while after the crash it is optimal to hold the amount of money

A* = b/(λσ²)

in the stock.
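Theorem 2.5 can be sanity-checked directly: the pre-crash amount A(t) decreases to 0 as t → T (the investor leaves the stock completely right at the horizon), and it always stays strictly below the post-crash amount A*. A small sketch with purely illustrative parameters (λ and the market data below are not taken from the text):

```python
# Sanity check of Theorem 2.5 (exponential utility, r = 0).
# All parameter values are illustrative.
lam, b, sigma, k_star, T = 1.0, 0.10, 0.30, 0.20, 5.0

A_star = b / (lam * sigma**2)         # post-crash amount of money in the stock

def A(t):
    """Pre-crash amount invested in the stock at time t (Theorem 2.5)."""
    return A_star - 2 * k_star / (lam * sigma**2 * (T - t + 2 * k_star / b))

assert abs(A(T)) < 1e-12              # fully out of the stock at the horizon
assert 0.0 < A(0.0) < A_star          # strictly between 0 and the Merton amount
# A is decreasing in t: the closer the horizon, the less crash risk is taken
assert all(A((i+1)*T/100) < A(i*T/100) for i in range(100))
```

The monotonicity mirrors the log-utility picture: remaining crash risk is reduced as the horizon approaches.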


Changing market conditions

As we have modeled the impact of a crash so far, its only consequence is a drop of the stock price. However, in real-world financial markets, the occurrence of a crash might have a more persistent effect. In [5] and [6] this is modeled by a change of the market coefficients after the crash. In such a situation, one can still insist on being indifferent between the worst possible crash and the no-crash scenario. Such a point of view is taken in [9]. However, under certain relations between the market situations before and after the crash, it is shown in [5] that one has to sacrifice indifference to obtain worst-case optimality.

In addition to the model considered so far, we assume that in the crash scenario (τ, k) the price dynamics after the crash are given by

dP0¹(t) = P0¹(t) r1 dt,   P0¹(τ) = P0(τ)
dP1¹(t) = P1¹(t) [b1 dt + σ1 dW(t)],   P1¹(τ) = (1 − k)P1(τ)

with constant market coefficients r1, b1, and σ1 ≠ 0. To illustrate the possible new effect, we look again at the situation of Theorem 2.5, i.e. at the negative exponential utility case with r = r1 = 0 and the notation

A* = b/(λσ²),   A1* = b1/(λσ1²).

Note that after a crash we are in the new market. Thus, if we compare the crash-free scenario with a crash scenario we always have to use the value function of the crash-free scenario in the new market. Further, if it is more attractive to invest in the stock in the new market than in the original one, the possible loss caused by a crash might be overcompensated by the better market conditions in the new market. It can therefore be optimal not to insist on indifference. This is the content of the following theorem from [5]:

Theorem 2.6. Under the assumptions of Theorem 2.5, r = r1 = 0 and with the post-crash stock price dynamics given by

dP1¹(t) = P1¹(t) [b1 dt + σ1 dW(t)]

we have the following assertions:

a) For A1* ≤ A* the results of Theorem 2.5 remain valid if we replace A* by A1*.

b) For A1* > A* the optimal deterministic amount of money invested in the stock before the crash is given by

A(t) = min{A*, A1* − 2k*/(λσ1²(T − t) + 2k*/A1*)} for t ∈ [0, T].   (2.11)

The optimal amount of money invested into the stock after a crash equals A1*.
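The dichotomy in Theorem 2.6 b) can be explored numerically: far away from the horizon the cap A* in (2.11) binds, while close to T the indifference-type term takes over and drives the pre-crash investment to 0. Again all parameter values below are illustrative:

```python
# Exploring (2.11): post-crash market more attractive (A1_star > A_star).
lam, k_star, T = 1.0, 0.20, 20.0
b, sigma = 0.10, 0.30         # pre-crash market
b1, sigma1 = 0.20, 0.30       # post-crash market with higher excess return

A_star = b / (lam * sigma**2)      # pre-crash Merton amount
A1_star = b1 / (lam * sigma1**2)   # post-crash Merton amount, > A_star here

def A(t):
    """Optimal deterministic pre-crash amount (2.11)."""
    uncapped = A1_star - 2*k_star / (lam * sigma1**2 * (T - t) + 2*k_star / A1_star)
    return min(A_star, uncapped)

assert abs(A(T)) < 1e-12       # at the horizon the uncapped term vanishes
assert A(0.0) == A_star        # far from the horizon the cap A_star binds
```

So with much time left, the investor simply invests optimally in the pre-crash market instead of insisting on indifference, exactly as part b) states.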


As part b) of the theorem shows, it can thus be better to invest optimally in the market before the crash than to insist on indifference. Following the (deterministic) indifference strategy before the crash would lead to a loss in terms of expected utility compared to A* if no crash occurs. In the crash scenario, if there is still much time until the investment horizon T, equation (2.11) shows that the strategy A is below the indifference strategy and would thus also lead to a smaller loss.

3 HJB-systems for worst-case portfolio optimisation

The classical method to solve continuous-time portfolio problems is to apply the basic tool of continuous-time stochastic control theory, the Hamilton–Jacobi–Bellman equation (for short: HJB-equation). This approach was introduced by Merton (see e.g. [10], [11]). Since then numerous papers have been written on this subject (see e.g. the monograph [4]). The purpose of this section is to introduce the approach of [7], who derive a system of inequalities that can be regarded as an analogue of the HJB-equation in the worst-case setting. The main achievement of the introduction of this HJB-inequality system is that one can prove that the optimal deterministic strategies derived in [5] and [6] are indeed optimal among all admissible portfolio processes.

The conceptually new aspect of [7] is the interpretation of the worst-case setting as a game between the market and the investor. While the market is "allowed" to choose a crash sequence, the investor chooses the portfolio process. The stock price dynamics are modeled by

dP1(t) = P1(t−) [b dt + σ dW(t) − k* dN(t)],   P1(0) = p1.

Here, N = {N(t) : t ∈ [0, T]} is a process that counts the number of jumps, i.e.

N(t) = #{0 < s ≤ t : P1(s) ≠ P1(s−)} for t ∈ [0, T],

and k* is the (maximal) crash height. For simplicity, we always assume that a crash of maximum size happens (for more on this, see [7]). While in the indifference approach we simply ignored the modelling of jumps, we now assume that the market chooses a jump strategy N with a maximum number of jumps n and N(t) − N(t−) ∈ {0, 1}. This strategy can also be characterised as a sequence of jump times (τ1, ..., τn). We denote by B(n) the class of crash scenarios with at most n jumps. As before, we assume the portfolio process π to be adapted (now with respect to the filtration generated by the stock price and the counting process N, which models the investor's ability to know how many crashes can still occur!), and we suppose that portfolio processes take values in a subset A of R. Further, we use the notation π = (π0, ..., πn), where πj(t) denotes the part of the portfolio process that the investor chooses if still at most j crashes can occur. To apply standard arguments from stochastic control theory and to avoid a negative wealth due to a crash, we also assume

E[∫_0^T |πj(s)|^m ds] < ∞ for m = 1, 2, ...,   πj(t)k* < 1, j = 1, ..., n.


Then for a given "control" (π, N) the wealth process follows the dynamics

X^{(π,N)}(0) = x,
dX^{(π,N)}(t) = X^{(π,N)}(t) [(r + πj(t)(b − r)) dt + πj(t)σ dW(t)] on (τj−1, τj],
X^{(π,N)}(τj) = (1 − πj(τj)k*) X^{(π,N)}(τj−),   j = 1, ..., n.

We assume that the investor chooses a portfolio process to maximise worst-case expected utility of terminal wealth in the sense of the optimisation problem

sup_{π ∈ A(x)} inf_{N ∈ B(n)} E[U(X^{(π,N)}(T))].

For ν ∈ C^{1,2} we define the differential operator L^π ν by

L^π ν(t, x) := ν_t(t, x) + ν_x(t, x)(r + π(b − r))x + (1/2)ν_xx(t, x)π²σ²x²

and for n ∈ N we define the value function V^n(t, x) by

V^n(t, x) := sup_{π ∈ A(t,x)} inf_{N ∈ B(t,n)} E_{t,x,n}[U(X^{(π,N)}(T))].

Here, as above, A(t, x) and B(t, n) denote, respectively, admissible strategies and possible crash sequences on [t, T], given that the investor's wealth is x and n crashes are possible. With this notation we can now formulate the main result of this section.

Theorem 3.1 (Verification Theorem). The worst-case optimisation problem can be solved via the following recursive system of HJB-equations.

Step 0. Assume that ν0(t, x) is a polynomially bounded classical solution of

0 = sup_{π ∈ A} [L^π ν0(t, x)],   ν0(T, x) = U(x)

and that

p(t, x) := arg sup_{π ∈ A} [L^π ν0(t, x)]

is an admissible control function. Then we have

V^0(t, x) = ν0(t, x)

and the optimal control function exists and is given by

π0*(t) = p(t, X^{(π*,N)}(t)).

Step n. For n ∈ N and every function ν^n ∈ C^{1,2}, define A′n(t, x) and A″n(t, x) by

A′n(t, x) := {π ∈ A : π < 1/k*, 0 ≤ L^π ν^n(t, x)},
A″n(t, x) := {π ∈ A : π < 1/k*, 0 ≤ ν^{n−1}(t, (1 − πk*)x) − ν^n(t, x)}.


Assume that there exists a polynomially bounded C^{1,2}-solution ν^n of

0 ≥ sup_{π ∈ A″n(t,x)} [L^π ν^n(t, x)],
0 ≥ sup_{π ∈ A′n(t,x)} [ν^{n−1}(t, (1 − πk*)x) − ν^n(t, x)],
0 = sup_{π ∈ A″n(t,x)} [L^π ν^n(t, x)] · sup_{π ∈ A′n(t,x)} [ν^{n−1}(t, (1 − πk*)x) − ν^n(t, x)],
ν^n(T, x) = U(x),

and that

p(t, x) := arg sup_{π ∈ A″n(t,x)} [L^π ν^n(t, x)],
θ(t, x) := inf{s ≥ t : ν^{n−1}(s, (1 − πk*)x) − ν^n(s, x) ≤ 0}

is a pair of admissible control functions. Then

V^n(t, x) = ν^n(t, x)

and the optimal control functions exist and are given by

πn*(t) = p(t, X^{(π*,N*)}(t)),   τn* = θ(t, X^{(π*,N*)}(t)).

Remark 3.2 (Form of the HJB-System). The form of the HJB-system characterising the value functions ν^n(t, x) needs explanation, as it differs in certain aspects from the HJB-equation or HJB-inequalities of related problems. For this, note first that if we looked at the portfolio problem where the jump process is a Poisson process with constant intensity λ and jump size k*, then the corresponding HJB-equation would read

0 = sup_{π ∈ A} [L̃^π ν0(t, x)]
  = sup_{π ∈ A} [ν0_t + (1/2)σ²π²x²ν0_xx + (r + π(b − r))xν0_x + λ(ν0(t, (1 − πk*)x) − ν0(t, x))]
  = sup_{π ∈ A} [L^π ν0(t, x) + λ(ν0(t, (1 − πk*)x) − ν0(t, x))].   (3.1)

As for a utility function of class C² and b > r the optimal portfolio process ought to be non-negative, we would expect from (3.1) that

0 ≤ sup_{π ∈ A} [L^π ν0(t, x)],   (3.2)

which also motivates this requirement for L^π ν^n in the verification theorem. This inequality also characterises the set A′n(t, x). It further suggests that the investor should


only search among those π that satisfy this inequality when he considers the optimal performance (with respect to the no-crash scenario). On the other hand, he should not give the market a chance to hit him harder by a crash than necessary. Therefore, he ought to restrict π to those strategies that satisfy

ν^n(t, x) ≤ ν^{n−1}(t, (1 − πk*)x),   (3.3)

which is the requirement that characterises the set A″n(t, x). The assumption that both inequalities (3.3) and (3.2) are strict would intuitively contradict the idea of ν^n being a value function, as it would not be in line with the form of the HJB-equation (3.1). This motivates the presence of the complementarity condition that (at least) one of the two inequalities always has to be satisfied with equality.

Remark 3.3 (Possible Generalisations). In [7] explicit examples are solved when the utility function is the negative exponential utility function or of the form

U(x) = (1/γ)x^γ, x > 0, for some γ < 1, γ ≠ 0.

Giving details of the rather technical way of solving the HJB-inequality systems in the verification theorem is beyond the scope (and length!) of this paper. However, we would like to point out that the HJB-system approach allows for an easy inclusion of a consumption rate process c, which is simply subtracted from the wealth equation as a term −c(t)dt. It should therefore be possible to prove a suitably modified verification theorem and also to solve the corresponding HJB-inequality system for the special choices of the utility functions considered in [7].

4 A martingale approach to worst-case portfolio optimisation

In contrast to the dynamic programming approach, the martingale approach to the worst-case portfolio problem is based on martingale optimality arguments and the idea that the market acts as an opponent to the investor. In the following we briefly outline its main components: the Change-of-Measure Device, the Indifference-Optimality Principle, and the notion of an Indifference Frontier.

4.1 The Change-of-Measure device

We consider the worst-case portfolio problem (P) and assume that

U(x) = (1/γ)x^γ, x > 0, with γ < 1, γ ≠ 0.

Moreover, suppose that if a crash occurs, it has maximum size k*. We let Θ denote the class of [0, T] ∪ {∞}-valued stopping times and interpret the event {τ = ∞} as there being no crash at all. We recall that admissible strategies are assumed to be bounded and continuous before the crash. Then we may equivalently reformulate the worst-case portfolio problem (P) as the problem to optimally choose a pre-crash strategy so as to obtain

sup_{π ∈ A(x)} inf_{τ ∈ Θ} E[ν0(τ, (1 − π(τ)k*)X^π(τ))]   (Ppre)

where as above ν0 denotes the value function of the post-crash optimisation problem, which is known explicitly:

ν0(t, x) = (1/γ) x^γ exp((γr + (1/2)((b − r)/σ)² γ/(1 − γ))(T − t)).   (4.1)

This is intuitively completely obvious because no further crash can occur, and can be shown formally with the following trick:

This is intuitively completely obvious because no further crash can occur, and can be shown formally with the following trick: Theorem 4.1 (Change-of-Measure Device). Consider the classical optimal portfolio problem with random initial time τ and time-τ initial wealth ξ sup

˜ π (T ))] Eτ,ξ [U (X

(Ppost )

π∈A(τ,ξ)

where τ is a stopping time and A(τ, ξ) denotes the corresponding class of admissible strategies on [τ, T ]. Then for any π ∈ A(τ, ξ) we can write   T ˜ π (T )) = U (ξ) exp γ Φ(π(s)) ds Mπ (T ) (4.2) U (X τ

with a martingale Mπ = {Mπ (t) : t ∈ [0, T ]} satisfying Mπ (τ ) = 1 and 1 Φ(y) := r + (b − r)y − (1 − γ)σ 2 y 2 . 2

Thus the optimal solution to problem (Ppost ) is given by π ∗ =

b−r (1−γ)σ2

.

Proof. The first part is a consequence of Itˆo’s formula and Novikov’s condition, making use of the boundedness assumption on π . To establish the second note that clearly π ∗ maximises Φ. Hence, if π ∈ A(τ, ξ) is an arbitrary strategy, we have from (4.2) and the martingale property of Mπ      ˜ π (T ))] = Eτ,ξ U (ξ) exp γ T Φ(π(s))ds Mπ (T ) Eτ,ξ [U (X τ      T ≤ Eτ,ξ U (ξ) exp γ τ Φ(π ∗ )ds Mπ (T )      T = Eτ,ξ U (ξ) exp γ τ Φ(π ∗ )ds Mπ∗ (T ) ∗

˜ π (T ))] = Eτ,ξ [U (X

so π ∗ is optimal.




The Change-of-Measure Device allows one to transform the stochastic optimisation problem into a pathwise maximisation, quite similar to the logarithmic-utility case. Note that changing market coefficients are subsumed by the above framework, and that Theorem 4.1 also adapts immediately to situations with deterministic trading constraints.

4.2 Abstract indifference strategies

The form of (Ppre) suggests a reformulation of the worst-case portfolio problem as a zero-sum stochastic game; this is the motivation for the martingale approach. Let us consider an abstract controller-and-stopper game played between two players A (the controller) and B (the stopper). Player A controls a stochastic process

W = W^λ = {W^λ(t) : t ∈ [0, T]}

by choosing λ from a given class of admissible controls Λ, and player B decides on the duration of the game by choosing a stopping time τ ∈ Θ. The controller and stopper aim to maximise or minimise, respectively, the expectation E[W^λ(τ)]. Assuming that player A has to choose his strategy first, he faces the problem to obtain

sup_{λ ∈ Λ} inf_{τ ∈ Θ} E[W^λ(τ)].   (Pabstract)

Now if player A can choose his strategy λ̂ ∈ Λ in such a way that W^λ̂ is a martingale, then player B's actions become irrelevant to him, because by optional stopping

E[W^λ̂(σ)] = E[W^λ̂(τ)] for all stopping times σ, τ.

Thus it makes sense to call such a strategy λ̂ an indifference strategy. The crucial benefit of indifference strategies is formulated in

Proposition 4.2 (Indifference-Optimality Principle). If λ̂ is an indifference strategy, and for all λ ∈ Λ there exists a single τ ∈ Θ such that E[W^λ̂(τ)] ≥ E[W^λ(τ)], then λ̂ is optimal for player A in (Pabstract).

4.3 Optimality and the indifference frontier

In the framework of the previous section, observe that if we call player A the investor and player B the market, then setting Λ := A(x) and

W^π(t) := ν0(t, (1 − π(t)k*)X^π(t)) for t ∈ [0, T],   W^π(∞) := ν0(T, X^π(T)),

we obtain the worst-case portfolio problem (Ppre). Note also that the seemingly obvious terminal condition (2.9) is in fact a consequence of the martingale property of W^π̂ between T and ∞. To construct an indifference strategy π̂, one goes through the same calculation as in the first part of the proof of Theorem 2.2 to obtain the ordinary differential equation

π̂′(t) = −(σ²/(2k*)) (1 − γ) [1 − π̂(t)k*][π̂(t) − π*]²,   π̂(T) = 0   (4.3)

for π̂, making use of the explicit form (4.1) of ν0. Here and in the following, we assume for simplicity that market coefficients do not change after a crash; in particular one sees as in Remark 2.3 that 0 ≤ π̂(t) ≤ min{π*, 1/k*} for all t ∈ [0, T].

Lemma 4.3 (Indifference Frontier). Let π ∈ A(x) be an admissible strategy, let π̂ be determined by equation (4.3), set σ := inf{t : π(t) > π̂(t)} and define

π̃(t) := π(t) if t < σ   and   π̃(t) := π̂(t) if t ≥ σ.

Then π̃ ∈ A(x) and the worst-case bound attained by π̃ is at least as big as that achieved by π.

Proof. Let τ be an arbitrary stopping time. By continuity we have π̃(t) = π(t) if 0 ≤ t ≤ σ, and since π̂ is an indifference strategy the process W^π̃ is a martingale on [σ, T] ∪ {∞}. Thus we obtain

E[W^π̃(τ)] = E[W^π̃(τ ∧ σ)] = E[W^π(τ ∧ σ)] ≥ inf_{τ′ ∈ Θ} E[W^π(τ′)].

Since τ is arbitrary, the conclusion follows. □

Remark 4.4. Lemma 4.3 implies that it suffices to search for optimal strategies which are dominated by the indifference strategy. Hence π̂ represents a frontier which rules out too optimistic investment, i.e. a too great exposure to the risk of a crash.

Now it is not hard to see that the strategy π̂ is worst-case optimal. Indeed, by the Change-of-Measure Device (and the fact that Φ is a quadratic function) the indifference strategy yields an optimal performance in the no-crash scenario within the class of all strategies that remain below the Indifference Frontier. Hence, optimality follows from the Indifference-Optimality Principle.
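The ODE (4.3) differs from its log-utility counterpart (2.8) only through the factor (1 − γ) and the modified Merton fraction π* = (b − r)/((1 − γ)σ²); in particular, for γ → 0 it collapses to (2.8). A short numerical sketch (market data illustrative, integrator a free choice):

```python
# Integrate (4.3) backwards via h(s) = pi_hat(T - s), power utility:
#   h'(s) = sigma^2/(2 k*) (1-gamma) (1 - h k*) (h - pi_star)^2,  h(0) = 0.
r, b, sigma, k_star, T = 0.05, 0.20, 0.40, 0.20, 10.0

def solve(gamma, n=20_000):
    pi_star = (b - r) / ((1 - gamma) * sigma**2)
    def rhs(h):
        return sigma**2/(2*k_star) * (1-gamma) * (1 - h*k_star) * (h - pi_star)**2
    h, dt = 0.0, T / n
    for _ in range(n):                       # classical RK4
        k1 = rhs(h); k2 = rhs(h + 0.5*dt*k1)
        k3 = rhs(h + 0.5*dt*k2); k4 = rhs(h + dt*k3)
        h += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        assert 0.0 <= h <= min(pi_star, 1/k_star) + 1e-12   # frontier bound
    return h                                 # pi_hat at time 0

# gamma -> 0 recovers the log-utility strategy of Section 2,
# and less risk-averse investors hold a uniformly larger pre-crash fraction
assert abs(solve(1e-8) - solve(0.0)) < 1e-6
assert solve(0.5) > solve(0.0)
```

Along the whole path the solution respects the bound 0 ≤ π̂ ≤ min{π*, 1/k*} of the indifference frontier, as asserted step by step.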

Theorem 4.5 (Solution of the Worst-Case Portfolio Problem). The optimal strategy in the pre-crash market for the worst-case portfolio problem (P) is given by the indifference strategy π̂ determined from (4.3). After the crash, the Merton strategy π* = (b − r)/((1 − γ)σ²) is optimal.

The indifference strategy has been verified to be optimal in [7] by means of the dynamic programming methods presented in the previous chapter. The martingale approach provides a simpler and more direct way to analyse the problem, as it focuses directly on the crucial notion of indifference.
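As a quick numerical illustration, the terminal-value problem (4.3) can be integrated backwards from π̂(T) = 0 with an explicit Euler scheme. The market parameters below (b − r, σ, γ, k*, T) are hypothetical illustration values, not taken from the text:

```python
# Backward Euler integration of the indifference ODE (4.3):
#   pi'(t) = -(1 - gamma) * sigma^2/(2 k*) * (1 - pi(t) k*) * (pi(t) - pi*)^2
# with terminal condition pi(T) = 0. All parameters are illustrative.
b_minus_r, sigma, gamma, k_star, T = 0.05, 0.3, 0.5, 0.2, 10.0
pi_star = b_minus_r / ((1.0 - gamma) * sigma**2)   # post-crash Merton fraction

def rhs(pi):
    # right-hand side of (4.3); negative for 0 <= pi < pi*
    return -(1.0 - gamma) * sigma**2 / (2.0 * k_star) \
        * (1.0 - pi * k_star) * (pi - pi_star)**2

n = 10_000
dt = T / n
pi_hat = [0.0] * (n + 1)        # pi_hat[i] approximates the strategy at t = i*dt
pi_hat[n] = 0.0                 # terminal condition
for i in range(n, 0, -1):       # step backwards in time
    pi_hat[i - 1] = pi_hat[i] - dt * rhs(pi_hat[i])

print(round(pi_hat[0], 4), round(pi_star, 4))
```

In line with Remark 2.3, the computed strategy stays between 0 and min{π*, 1/k*} and decreases towards 0 as the horizon approaches: with crash risk still present, risky investment is reduced below the Merton fraction.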


4.4 Extensions

The approach outlined above applies to more general settings than that considered here. For instance, we can consider general Lévy-driven asset price models, we can remove the continuity assumption imposed on admissible trading strategies, we can allow for changing market coefficients, and we can consider multiple crashes. Although this complicates the formal analysis, the concepts developed above remain valid and provide the key to solving the worst-case optimal portfolio problem. A detailed exposition of the martingale approach to worst-case portfolio problems in a more general framework is the subject of a forthcoming paper.

5 Conclusion and further aspects

The worst-case approach to continuous-time portfolio optimisation represents, on the one hand, a generalisation of the classical Merton setting, and on the other hand, an alternative to technically involved frameworks such as the Lévy process setting. Its main strength lies in the fact that for standard utility functions we can derive fully explicit optimal portfolio strategies. Their specific form is appealing, in particular the reduction of risky investments as the time horizon approaches while there is still crash risk. Of course, the strategies depend heavily on the assumed upper bound k* for the jump height and on the maximum number of jumps n.

There are various ways to generalise the framework and the problems dealt with. We only mention a few of them:

• More general price processes: In fact, we mainly need the Black–Scholes type modelling framework in order to be able to explicitly solve the portfolio problem in the crash-free setting. Considering different models for which this is also possible is a direct generalisation. This is the subject of a forthcoming paper.

• Including consumption: The possibility of the investor to consume parts of his wealth during the investment period is a natural generalisation. The HJB approach seems to be well-suited to deal with this aspect (as already mentioned above).

• Dependence between jump heights: So far, heights of previous crashes have no impact on the next crash. This memoryless behaviour of the market might be unrealistic. Again, this generalisation will be dealt with in forthcoming research.

We believe that there is a lot of potential in the worst-case approach from both the scientific and the application-oriented perspective.

Bibliography

[1] R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall (2004).

[2] D. Hernández-Hernández, A. Schied, Robust Utility Maximization in a Stochastic Factor Model, Statistics & Decisions 24 (2006), pp. 109–125.


[3] P. Hua, P. Wilmott, Crash Courses, Risk 10 (1997), pp. 64–67. [4] R. Korn, Optimal Portfolios, World Scientific (1997). [5] R. Korn, Worst-Case Scenario Investment for Insurers, Insurance: Mathematics and Economics 36 (2005), pp. 1–11. [6] R. Korn, O. Menkens, Worst-Case Scenario Portfolio Optimization: a New Stochastic Control Approach, Mathematical Methods of Operations Research 62 (2005), pp. 123–140. [7] R. Korn, M. Steffensen, On Worst-Case Portfolio Optimization, SIAM Journal on Control and Optimization 46 (2007), pp. 2013–2030. [8] R. Korn, P. Wilmott, Optimal Portfolios under the Threat of a Crash, International Journal of Theoretical and Applied Finance 5 (2002), pp. 171–187. [9] O. Menkens, Crash Hedging Strategies and Worst-Case Scenario Portfolio Optimization, International Journal of Theoretical and Applied Finance 9 (2006), pp. 597–618. [10] R. C. Merton, Lifetime Portfolio Selection under Uncertainty: The Continuous-Time Case, Review of Economics and Statistics 51 (1969), pp. 247–257. [11] R. C. Merton, Optimum Consumption and Portfolio Rules in a Continuous-Time Model, Journal of Economic Theory 3 (1971), pp. 373–413. [12] F. Riedel, Optimal Stopping with Multiple Priors, to appear in Econometrica (2009). [13] A. Schied, Optimal Investments for Robust Utility Functionals in Complete Market Models, Mathematics of Operations Research 30 (2005), pp. 750–764. [14] D. Talay, Z. Zheng, Worst Case Model Risk Management, Finance and Stochastics 6 (2002), pp. 517–537.

Author information Ralf Korn, Department of Mathematics, University of Kaiserslautern, 67653 Kaiserslautern and Fraunhofer ITWM, Kaiserslautern, Germany. Email: [email protected] Frank Thomas Seifried, Department of Mathematics, University of Kaiserslautern, 67653 Kaiserslautern, Germany. Email: [email protected]

Radon Series Comp. Appl. Math 8, 347–369

© de Gruyter 2009

Time consistency and information monotonicity of multiperiod acceptability functionals Raimund Kovacevic and Georg Ch. Pflug

Abstract. Time consistency is an often required property of functionals which measure the risk or the acceptability of financial processes. Based on elementary consistency properties for pairs of conditional acceptability mappings, we demonstrate how these properties translate into the multiperiod setting. Moreover, we show that even in elementary cases time consistency may conflict with information monotonicity. Key words. Time consistency, information monotonicity, multi-period risk measures, multi-period acceptability measures. AMS classification. 91B28, 62P05

1 Introduction

Analysing risk inherent in stochastic financial payments or whole cash-flow processes and evaluating their favourability has become an important subject for mathematically oriented theoreticians, as well as for practitioners in different industries. Theoretical approaches range from the Markowitz model ([15]) and expected utility ([24, 1]) to the recent discussions regarding coherent risk measures ([4, 2, 8]) and risk or acceptability functionals ([16]). On the other hand, practice in banking and insurance has been dominated by the introduction of value-at-risk based models.

Efforts have been made to extend theoretical approaches originally formulated in a static framework to multi-period settings. To this end, conditional and dynamic versions of the basic measures under consideration have been defined and analysed ([5, 7, 18, 22]). Recently many publications (e.g. [3, 6, 9, 12, 14, 17, 20]) explicitly deal with, or at least mention, the subject of time consistency in multi-period valuation.

In the present paper we define and analyse the main concepts related to time consistency in a simple framework. We want to separate the core of the notion of time consistency from the more technical issues influenced by the respective viewpoints of the papers mentioned above. Finally those concepts are analysed in a genuine multiperiod framework, including intermediate payments. Although we have in view the applicability of our concepts in multistage stochastic optimisation, compared with [23] we go back a step and analyse the properties of functionals and mappings suitable as objective functions or constraints rather than analysing time consistent decisions.

(First author: Supported by FWF.)

In the last section we will confront time consistency with another crucial property of multiperiod acceptability measures: information monotonicity ([16]). One might


argue that this concept is even more fundamental for a multiperiod valuation than time consistency.

1.1 Basic framework

A probability functional is an extended real valued function on a set of random variables or random processes which are defined on some probability space (Ω, F, P). If a functional is interpreted in the sense that higher values are preferable to lower values, we call it a monotonic or acceptability-type ([16]) functional. We define the domain of the functional A as dom A = {X : A(X) > −∞}.

Definition 1.1.1. A probability functional A is called centred at zero if A(0) = 0 holds.

We concentrate on acceptability and not on risk in the following, but one should be aware that it is very easy to adapt all our statements to statements about risk, first of all by changing the sign and moving from concavity to convexity. Besides, it should be clear that suitable acceptability measures implicitly account for the risk of a random variable or process in the sense that risk reduces acceptability. Acceptability functionals are monotonic functionals with some additional properties:

Definition 1.1.2. A probability functional A(·) : Lp(Ω, F, P) → R̄ = R ∪ {−∞} is called an acceptability functional, if the following properties hold for all X, Y ∈ Lp(Ω, F, P):

(A1) Translation Equivariance. A(X + c) = A(X) + c holds for all constants c.

(A2) Concavity. A(λ·X + (1 − λ)·Y) ≥ λ·A(X) + (1 − λ)·A(Y) holds for all λ ∈ [0, 1].

(A3) Monotonicity. X ≤ Y ⇒ A(X) ≤ A(Y).

For acceptability functionals it is not a heavy restriction to be centred at zero: if A(0) = c ≠ 0 we can easily switch to the acceptability mapping A′(X) = A(X) − c. Additionally, most relevant acceptability mappings fulfil the stronger condition A(c) = c for any c ∈ R.

Although we do not deal with coherent risk functionals explicitly, it should be noted that ρ is a coherent risk functional if and only if A = −ρ is a positively homogeneous acceptability functional. Thus, acceptability functionals can be considered as a kind of generalisation of coherent measures. One reason why the use of non-homogeneous functionals is necessary in some situations is liquidity risk: for large financial positions it might be very risky to double their amount, because prices will be influenced if liquidity is needed and the position has to be reduced to its original size. We allow p ∈ [1, ∞], but mention that p = ∞ is a very optimistic assumption in reality.
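As a minimal sanity check, the axioms (A1)–(A3) can be verified numerically for the simplest acceptability functional, the expectation A(X) = E(X), on a finite probability space (the scenario values below are arbitrary illustration data):

```python
import random

random.seed(1)
probs = [0.2] * 5                                       # uniform finite space
A = lambda x: sum(p * xi for p, xi in zip(probs, x))    # A(X) = E[X]

X = [random.uniform(-1.0, 1.0) for _ in probs]
Y = [random.uniform(-1.0, 1.0) for _ in probs]
lam, c = 0.3, 0.7

# (A1) translation equivariance: A(X + c) = A(X) + c
assert abs(A([xi + c for xi in X]) - (A(X) + c)) < 1e-12
# (A2) concavity (holds with equality, since the expectation is linear)
mix = [lam * xi + (1 - lam) * yi for xi, yi in zip(X, Y)]
assert A(mix) >= lam * A(X) + (1 - lam) * A(Y) - 1e-12
# (A3) monotonicity: X <= X + 0.5 pointwise implies A(X) <= A(X + 0.5)
assert A(X) <= A([xi + 0.5 for xi in X])
```

The expectation is of course the degenerate case; the certainty equivalents and the average value at risk discussed below satisfy the same checks with strict concavity.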


From the Fenchel–Moreau–Rockafellar Theorem ([19], Theorem 5) it follows that concave upper semicontinuous (u.s.c.) functionals can be represented in the following way:

A(X) = inf_{Z ∈ Lq(Ω,F,P)} { E(X·Z) − A⁺(Z) },   (1.1)

where 1/p + 1/q = 1 and A⁺(Z) = inf_{X ∈ Lp(Ω,F,P)} {E(X·Z) − A(X)} is the concave conjugate or Fenchel–Moreau conjugate of the functional A. Basically this means that the functional equals its biconjugate. This relation can be very useful for analysing the properties of a functional.

The second main class of mappings analysed in this paper are certainty equivalents, derived from expected utility. Such mappings are defined in the following way:

Definition 1.1.3. Let u be a (strictly monotonic, concave) function, interpreted as a utility function. Then the related expected utility of a random variable X is given by the mapping U(X) = E(u(X)). The related certainty equivalent is given by

Au(X) = u⁻¹(E(u(X))).

The special case u(x) = −exp(−γx) leads to the entropic mapping

Eγ(X) = −(1/γ) ln(E(exp(−γX))).
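As a small numerical illustration (with an arbitrary discrete payoff), the entropic mapping can be evaluated directly from its definition; it is translation equivariant and penalises risk relative to the plain expectation:

```python
import math

# entropic mapping E_gamma(X) = -(1/gamma) ln E[exp(-gamma X)]
# for a discrete payoff with outcomes x_i and probabilities p_i
def entropic(outcomes, probs, gamma):
    mgf = sum(p * math.exp(-gamma * x) for x, p in zip(outcomes, probs))
    return -math.log(mgf) / gamma

outcomes, probs = [-1.0, 1.0, 2.0], [1/3, 1/3, 1/3]
gamma = 1.0
a = entropic(outcomes, probs, gamma)
mean = sum(p * x for x, p in zip(outcomes, probs))

assert a < mean                        # risk adjustment: below the expectation
c = 0.5                                # (A1) translation equivariance
shifted = entropic([x + c for x in outcomes], probs, gamma)
assert abs(shifted - (a + c)) < 1e-12
```

Increasing γ (the risk aversion parameter) pushes the entropic value further below the expectation, towards the worst outcome.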

Not all certainty equivalent functionals fulfil the concavity assumption (A2). It is well known that (A2) holds if and only if the function h(t) = −u′(t)/u′′(t) is concave ([11], 106 (i), p. 88 and [10], Lemma 8, p. 322). Note that −u′(t)/u′′(t) is proportional to the reciprocal of the relative risk aversion coefficient of the utility function u.

Using the functional I(·) = (u′)⁻¹(·) we can characterise the concave conjugate of certainty equivalents Au(·) for differentiable utility functions u:

A⁺u(Z) = E(Z · I(c·Z)) − u⁻¹(E(u(I(c·Z))))  if a constant c with c = u′(u⁻¹(E(u(I(c·Z))))) can be found, and A⁺u(Z) = −∞ if no such c can be found.

If the acceptability of a random variable has to be valuated relative to some nontrivial information, represented by a σ-algebra F1, conditional monotonic mappings come into play.

Definition 1.1.4. For any space Lp(Ω, F1, P) the related extended space is given by L̄p(Ω, F1, P) = Lp(Ω, F1, P) ∪ {−∞}. Here {−∞} represents the functions with (a.s.) constant value −∞.

Acceptability functionals can be generalised to conditional acceptability mappings. These are mappings

A(·|F1) : L̄p(Ω, F, P) → L̄p′(Ω, F1, P)


with p ≥ p′ and the properties (A2), (A3) understood in the almost sure sense. The domain of such a mapping is given by dom A = {Y : A(Y|F1) ≠ −∞}. This means that A(Y|F1) is either in Lp′(Ω, F1, P) or it is −∞ a.s. In this context, property (A1) has to be generalised in the following way:

Condition 1.1.5 (CA1). A(Y + X|F1) = A(Y|F1) + X holds for all F1-measurable random variables X.

The idea of conditional mappings is that a random payment is valuated relative to some nontrivial information, which may arise at some future points of time.

The spaces L̄p(Ω, F, P) together with the partial order ≤ a.s. are order complete Banach lattices. With Z ∈ L̄s(Ω, F, P) ⊆ L̄q(Ω, F, P), s ≥ pp′/(p − p′), and the infimum related to the partial order ≤ a.s., it is possible ([13]) to define the concave conjugate of conditional acceptability mappings in a meaningful way: the conjugate is given by a mapping A⁺ : L̄s(Ω, F, P) → L̄p′(Ω, F1, P) with 1/p + 1/s = 1/p′ and

A⁺(Z|F1) = inf_{X ∈ L̄p(Ω,F,P)} { E(X·Z|F1) − A(X|F1) }.

If the superdifferential

∂A(X0|F1) = { Z ∈ L̄s(Ω, F, P) : A(X|F1) ≤ A(X0|F1) + E((X − X0)·Z|F1), ∀X ∈ dom A }

is nonempty at X0, a supergradient representation

A(X0|F1) = A⁺⁺(X0|F1)

holds for proper conditional acceptability mappings ([13], Theorem 4.2.7). In particular it is possible to define conditional acceptability mappings, using acceptability functionals with known supergradient representation. This can be done by replacing any instance of an expectation in the supergradient representation with the suitable conditional expectation. Similarly it is possible to define conditional certainty equivalents by replacing the expectation in Definition 1.1.3 with the suitable conditional expectation.

Definition 1.1.6. A conditional mapping A(·|F1) is called centred at zero if A(0|F1) = 0 holds.

A basic example is the average value at risk (also known as tail value at risk, conditional value at risk or expected shortfall). For a random variable X with distribution function G it is defined as

AV@Rα(X) = (1/α) ∫₀^α G⁻¹(x) dx.


This functional fulfils the conditions (A1), (A2), (A3) and is positively homogeneous in addition. It is well known that the conjugate representation of the average value at risk is given by

AV@Rα(X) = inf { E(X·Z) : E(Z) = 1, 0 ≤ Z ≤ 1/α }.

This can be used to define the conditional average value at risk:

AV@Rα(X|F1) = inf { E(X·Z|F1) : E(Z|F1) = 1, 0 ≤ Z ≤ 1/α a.s. }.
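For a finite discrete distribution the quantile integral can be evaluated directly by filling probability mass α with the worst outcomes — which is exactly the minimiser in the conjugate representation above. A minimal sketch (the helper `avar` and the test values are illustrative, not from the text):

```python
# AV@R_alpha(X) = (1/alpha) * int_0^alpha G^{-1}(p) dp for a finite
# distribution: fill probability mass alpha with the worst outcomes.
# This greedy choice also realises the infimum in the conjugate
# representation inf { E(X Z) : E(Z) = 1, 0 <= Z <= 1/alpha }.
def avar(outcomes, probs, alpha):
    acc, total = 0.0, 0.0
    for x, p in sorted(zip(outcomes, probs)):   # ascending quantile function
        take = min(p, alpha - total)            # mass used from this atom
        acc += take * x
        total += take
        if total >= alpha:
            break
    return acc / alpha

xs, ps = [-1.0, 1.0, 2.0], [1/3, 1/3, 1/3]
assert abs(avar(xs, ps, 2/3)) < 1e-12           # ((1/3)(-1) + (1/3)(1)) / (2/3) = 0
assert abs(avar(xs, ps, 1.0) - 2/3) < 1e-12     # alpha = 1 gives the expectation
assert avar(xs, ps, 1/3) == -1.0                # small alpha: worst outcome only
```

The boundary cases show the familiar interpolation: AV@R₁ is the expectation, while AV@R_α tends to the essential infimum as α → 0.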

2 Time consistency for pairs and related concepts

Let (Ω, F , P) be a probability space and F0 ⊆ F1 ⊆ F a simple filtration related to three points of time t0 ≤ t1 ≤ T . We consider random variables X defined on this space, representing a financial payoff or the value of a property at time T , possibly discounted with a deterministic or stochastic deflator. We try to value the random variable at the beginning of the whole period. The filtration represents the information available at the points of time under consideration.

2.1 Time consistency

In the following A0(·|F0) : L̄p(Ω, F, P) → L̄p0(Ω, F0, P) and A1(·|F1) : L̄p(Ω, F, P) → L̄p1(Ω, F1, P) denote conditional monotonic probability mappings. Both mappings might be conditional versions of the same functional A(·), but this is not mandatory. It is possible to include unconditional functionals in the following considerations: if F0 is the trivial σ-algebra {Ω, ∅} then A0(·|F0) ∈ R and we call this functional “unconditional”.

Definition 2.1.1. A pair A0(·|F0), A1(·|F1) is called time consistent, if for all X, Y ∈ L̄p(Ω, F, P) the implication

A1(X|F1) ≥ A1(Y|F1) a.s.  ⟹  A0(X|F0) ≥ A0(Y|F0) a.s.

holds.

This definition was used in Artzner et al. [3], where the focus was on a special class of conditional mappings, related to coherent functionals. One can also define weaker consistency properties. Let χB and 1B denote the following indicator functionals of the set B:

χB(ω) = 0 if ω ∈ B, and χB(ω) = +∞ otherwise;
1B(ω) = 1 if ω ∈ B, and 1B(ω) = 0 otherwise.


Definition 2.1.2. A pair A0(·|F0), A1(·|F1) is called acceptance consistent, if for all X ∈ L̄p(Ω, F, P) the inequality

ess inf (χB + A1(X|F1)) ≤ χB + A0(X|F0) a.s.

holds for any set B ∈ F0. It is called rejection consistent, if

ess sup (χB + A1(X|F1)) ≥ χB + A0(X|F0) a.s.

for any B ∈ F0.

The terms “acceptance consistent” and “rejection consistent” were first used by Weber [25], who defined A0(·), A1(·|F1) to be acceptance consistent if the implication A1(X|F1) ≥ 0 a.s. ⟹ A0(X) ≥ 0 holds, and called them rejection consistent if A1(X|F1) ≤ 0 a.s. ⟹ A0(X) ≤ 0 is valid. Weber based his analysis on positively homogeneous functionals and stated his definition in terms of acceptance and rejection sets.

The consistency properties can be characterised equivalently in the following way:

Proposition 2.1.3. Acceptance consistency is equivalent to

A1(X|F1) ≥ W a.s.  ⟹  A0(X|F0) ≥ W a.s.   (2.1)

for any F0-measurable random variable W, and rejection consistency is equivalent to

A1(X|F1) ≤ W a.s.  ⟹  A0(X|F0) ≤ W a.s.   (2.2)

for any F0-measurable random variable W. For W = 0 this property was also called “weak time consistency” in [3].

Proof. We show the first part of the assertion. Assume first acceptance consistency and let W be F0-measurable with A1(X|F1) ≥ W a.s. Additionally let Wn be a sequence of simple functions

Wn = Σi αn,i · 1Bn,i

such that Wn ↑ W. Since A1(X|F1) ≥ Wn it follows that for all i

ess inf (χBn,i + A1(X|F1)) ≥ αn,i ,

hence by acceptance consistency we have

χBn,i + A0(X|F0) ≥ αn,i ,

i.e. A0(X|F0) ≥ Wn. By letting n tend to infinity it follows that A0(X|F0) ≥ W.


For the other direction let, for all B ∈ F0,

WB = ess inf (χB + A1(X|F1)) · 1B + (−∞) · 1Bᶜ .

Then A1(X|F1) ≥ WB and hence

A0(X|F0) ≥ WB = ess inf (χB + A1(X|F1)) · 1B + (−∞) · 1Bᶜ ,

which implies that

χB + A0(X|F0) ≥ ess inf (χB + A1(X|F1)) . □

The following proposition shows that time consistency implies weak time consistency and, under translation equivariance, acceptance and rejection consistency.

Proposition 2.1.4. If A0(·|F0) and A1(·|F1) are centred at zero, then time consistency implies both weak time consistency, i.e.

A1(X|F1) ≥ 0 a.s.  ⟹  A0(X|F0) ≥ 0 a.s.   (2.3)

and

A1(X|F1) ≤ 0 a.s.  ⟹  A0(X|F0) ≤ 0 .   (2.4)

If A0(·|F0), A1(·|F1) are translation equivariant in addition, time consistency implies both acceptance and rejection consistency.

Proof. Let A1(X|F1) ≥ A1(0|F1) = 0 a.s. From time consistency we have A0(X|F0) ≥ A0(0|F0) = 0. The second implication (2.4) can be proved in a similar manner.

Consider now an F0-measurable random variable W with A1(X|F1) ≥ W a.s. If translation equivariance holds, A1(X − W|F1) ≥ 0 a.s. follows. Using (2.3) above we get A0(X − W|F0) ≥ 0 and again by translation equivariance A0(X|F0) ≥ W. This condition is equivalent to acceptance consistency by Proposition 2.1.3. Rejection consistency can be proved analogously. □

Acceptance and rejection consistency are also implied by another strong condition, connected with martingale properties.

Definition 2.1.5. A pair of mappings A0(·|F0), A1(·|F1) is called

(i) a submartingale pair, if for all X ∈ dom A

A0(X|F0) ≤ E(A1(X|F1)|F0) a.s. ;   (2.5)

(ii) a supermartingale pair, if for all X ∈ dom A

A0(X|F0) ≥ E(A1(X|F1)|F0) a.s.   (2.6)

The pair is called a martingale pair if it is both a submartingale pair and a supermartingale pair.


A special situation arises when A1(·|F1) and A0(·|F0) are conditional versions of the same functional A(·). In this case properties (i) and (ii) are called compound convexity and compound concavity ([16], Definition 2.11 and Proposition 2.12). It is well known that the AV@R is compound convex ([16], Proposition 2.35). An important feature of the martingale pair properties is that each of them implies the corresponding weak consistency property.

Proposition 2.1.6.
(i) If a pair A0(·|F0), A1(·|F1) is a submartingale pair, it is rejection consistent.
(ii) If a pair A0(·|F0), A1(·|F1) is a supermartingale pair, it is acceptance consistent.
(iii) If a pair A0(·|F0), A1(·|F1) is a martingale pair, it is both rejection and acceptance consistent.

Proof. We show the implication for a submartingale pair: assume A1(X|F1) ≤ W and that W is F0-measurable. Because the conditional expectation is monotonic,

E(A1(X|F1)|F0) ≤ W

follows. Using the submartingale-pair property this means that

A0(X|F0) ≤ E(A1(X|F1)|F0) ≤ W.

Hence we have shown the implication A1(X|F1) ≤ W ⟹ A0(X|F0) ≤ W, which is equivalent to rejection consistency by Proposition 2.1.3.

Part (ii) can be shown by the same argument, and (iii) follows because a martingale pair is both a sub- and a supermartingale pair. □

2.2 Recursivity

Another concept, closely related to time consistency, is recursivity.

Definition 2.2.1. The pair A0(·|F0), A1(·|F1) is called recursive, if for all X ∈ L̄p(Ω, F, P) the equation

A0(X|F0) = A0(A1(X|F1)|F0) a.s.   (2.7)

holds. The pair A0(·|F0), A1(·|F1) is called self-recursive if (2.7) holds and A1(·|F1) and A0(·|F0) are conditional versions of the same functional A(·).

It is a basic fact that recursivity is equivalent to time consistency under very general assumptions. The following property is critical in this context.

Definition 2.2.2. A mapping A1(·|F1) has the weak projection property if

A1(X|F1) = X   (2.8)


holds whenever X is F1-measurable. Let W denote the class of mappings with the weak projection property. Simple examples for mappings in this class are:

Example 2.1.
a) Translation equivariant conditional monotonic mappings, centred at zero. This is true because of A1(X|F1) = A1(0 + X|F1) = A1(0|F1) + X = X.
b) Conditional certainty equivalent mappings u⁻¹(E(u(·)|F1)). Such mappings are not translation equivariant in general. By the projection property of the conditional expectation,

u⁻¹(E(u(X)|F1)) = u⁻¹(u(X)) = X

holds, if X is F1-measurable.

Acceptability mappings are of type a) if they are centred at zero. It should be remembered that the value at risk – though not concave and therefore not a full acceptability mapping – has property a) and hence the weak projection property. It should be clear that the class W contains more than just cases a) and b) above:

Example 2.2.
• Any convex combination Σ_{i=1}^n λi · Ai(X|F1) of mappings Ai ∈ W is a member of the class W.
• If α is a parameter of an acceptability mapping Aα(·|F1) ∈ W, then the integral ∫_{−∞}^{+∞} Aα(·|F1) dF(α) is in W for any distribution function F.
• Any mapping u⁻¹(A(u(·)|F1)) is in W, if A ∈ W.
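Example 2.1 b) can be checked on a finite probability space. The sketch below (with an assumed partition-based representation of F1 and the illustrative utility u = ln) verifies the weak projection property numerically:

```python
import math

# conditional certainty equivalent u^{-1}(E(u(X)|F1)) on a finite space,
# where F1 is generated by a partition of the sample points
def cond_certainty_equiv(x, probs, partition, u, u_inv):
    out = [0.0] * len(x)
    for block in partition:                      # each block is an atom of F1
        pb = sum(probs[i] for i in block)
        eu = sum(probs[i] * u(x[i]) for i in block) / pb
        for i in block:
            out[i] = u_inv(eu)                   # constant on each atom
    return out

u, u_inv = math.log, math.exp                    # u(x) = ln x on positive payoffs
probs = [0.25, 0.25, 0.25, 0.25]
partition = [[0, 1], [2, 3]]

# an F1-measurable X is constant on each atom; the mapping must return X itself
x = [3.0, 3.0, 7.0, 7.0]
y = cond_certainty_equiv(x, probs, partition, u, u_inv)
assert all(abs(a - b) < 1e-9 for a, b in zip(x, y))
```

For a payoff that is not F1-measurable, the mapping returns the per-atom certainty equivalent (for u = ln, the conditional geometric mean) instead.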

The weak projection property will be of basic interest for our analysis: it is an important fact that for mappings with the weak projection property, recursivity and time consistency are equivalent.

Theorem 2.2.3. A pair A0(·|F0), A1(·|F1), where A1(·|F1) possesses the weak projection property (2.8) and A0(·|F0) is monotonic, is time consistent if and only if the pair is recursive.

Proof. Assume first that A0(·|F0), A1(·|F1) is a recursive pair. Using the monotonicity of A0(·|F0), from A1(X|F1) ≥ A1(Y|F1) a.s. we can infer

A0(A1(X|F1)|F0) ≥ A0(A1(Y|F1)|F0).

Because of recursivity this implies A0 (X|F0 ) ≥ A0 (Y |F0 ). For the other direction assume now that the pair is time consistent. Because of the weak projection property we have A1 (X|F1 ) = A1 (A1 (X|F1 )|F1 ). Setting Y = A1 (X|F1 ) this means that both A1 (X|F1 ) ≤ A1 (Y |F1 ) and A1 (X|F1 ) ≥ A1 (Y |F1 ) hold.


Now if time consistency holds, we can conclude A0(X|F0) ≤ A0(Y|F0) and A0(X|F0) ≥ A0(Y|F0), which results in

A0(X|F0) = A0(Y|F0) = A0(A1(X|F1)|F0),

the recursivity equation. □

The connection between recursivity and time consistency was stated and proved in [3] (Theorem 5.1) in a more restricted context. The original definition of time consistency is increasingly replaced by the requirement of recursivity in newer papers ([14, 12]).

We consider now some connections between time consistency and the martingale pair properties. We have seen that the combination of rejection and acceptance consistency is implied by both the martingale-pair property and (strong) time consistency, although in the latter case we need additional properties – e.g. translation equivariance. More generally, we will see that the martingale-pair property is stronger than time consistency.

Corollary 2.2.4. For a time consistent pair A0(·|F0), A1(·|F1) with translation equivariant A1(·|F1) and monotonic A0(·|F0), the assumption A0(X|F0) ≤ E(X|F0) (strictness) implies the submartingale-pair property.

Proof. By Theorem 2.2.3 recursivity, and hence the inequality

A0(X|F0) ≤ A0(A1(X|F1)|F0),

holds. Using the assumption of strictness we have A0(X|F0) ≤ E(A1(X|F1)|F0), which is the submartingale-pair property. □

Proposition 2.2.5. Any martingale pair is time consistent.

Proof. Assume A1(X|F1) ≥ A1(Y|F1) a.s. Because the conditional expectation is monotonic, we have E(A1(X|F1)|F0) ≥ E(A1(Y|F1)|F0). The martingale-pair property then gives A0(X|F0) = E(A1(X|F1)|F0) ≥ E(A1(Y|F1)|F0) = A0(Y|F0). □

Example 2.3. Expected conditional functionals are defined as E(A1(·|F1)). It is clear that – by definition – such a functional together with the mapping A1(·|F1) defines a martingale pair, which therefore is time consistent.

Example 2.4. Any certainty equivalent functional u⁻¹(E(u(·))) is self-recursive and hence time consistent. A special case is the entropic functional Eγ together with its conditional version Eγ(·|F1).
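The self-recursivity claimed in Example 2.4 can be illustrated numerically for the entropic functional on a small two-stage tree (the tree probabilities, payoffs and γ below are arbitrary illustration values):

```python
import math

# two-stage tree: F1 has two atoms; leaf probabilities within each atom
tree = {  # atom -> (atom probability, [(leaf probability within atom, payoff)])
    "up":   (0.5, [(0.4, 2.0), (0.6, -1.0)]),
    "down": (0.5, [(0.5, 1.0), (0.5, -3.0)]),
}
gamma = 1.0

def ent(pairs):
    # entropic value -(1/gamma) ln sum p_i exp(-gamma x_i)
    return -math.log(sum(p * math.exp(-gamma * x) for p, x in pairs)) / gamma

# unconditional entropic value of the terminal payoff
leaves = [(pa * p, x) for pa, ls in tree.values() for p, x in ls]
direct = ent(leaves)

# recursive evaluation: first conditionally on F1, then at time 0
conditional = [(pa, ent(ls)) for pa, ls in tree.values()]
nested = ent(conditional)

assert abs(direct - nested) < 1e-12   # self-recursivity (tower property)
```

The identity holds exactly because exp(−γ · Eγ(X|F1)) reproduces the conditional moment generating function, so nesting reduces to the tower property of conditional expectations.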

3 Dynamic and multi-period functionals

So far we have considered only two periods and a single payoff at the end of the second period. Now we will generalise this to arbitrary filtrations. In addition we will also consider the valuation of processes, which includes intermediate cash-flows.


3.1 Dynamic acceptability mappings and multi-period acceptability functionals

For multi-period valuation we consider the following general setup: time t = 0, . . . , T is discrete, and we restrict ourselves to a finite time horizon T < ∞. F = (F0, . . . , FT) is a filtration with FT = F.

As a first step into the multi-period framework we define dynamic acceptability mappings. Such mappings valuate one final payment at each intermediate time period.

Definition 3.1.1. A sequence (At(·|Ft))t∈{0,...,T−1} of conditional monotonic mappings with At(·|Ft) : L̄p(Ω, F, P) → L̄pt(Ω, Ft, P) for each t is called a dynamic monotonic mapping. If the constituent mappings are acceptability mappings, we will call the sequence a dynamic acceptability mapping.

If XT is an FT-measurable random variable, e.g. representing a final cash-flow, we can consider the sequence of random variables At(XT|Ft). These are adapted to the filtration F and can be interpreted as the development of the valuation of the final cash-flow over time relative to the information available. If the underlying filtration can be represented as a tree and the random variable X ∈ L̄p(Ω, F, P) as values sitting on the leaves of the tree, such a dynamic mapping will assign acceptability values to each node of the tree. If F0 = {Ω, ∅} is the trivial σ-algebra, the mapping A0(·|F0) represents an (unconditional) monotonic functional.

Now we can generalise the notion of time consistency to dynamic mappings:

Definition 3.1.2. A dynamic monotonic mapping (At(·|Ft))t∈{0,...,T−1} is called time consistent if each pair At1(·|Ft1), At2(·|Ft2) with 0 ≤ t1 ≤ t2 ≤ T is a time consistent pair.

In addition we define:

Definition 3.1.3. A dynamic monotonic mapping (At(·|Ft))t∈{0,...,T−1} is called recursive if each pair At1(·|Ft1), At2(·|Ft2) with 0 ≤ t1 ≤ t2 ≤ T is a recursive pair. If in addition all conditional mappings are conditional versions of the same unconditional functional, we call the dynamic acceptability mapping self-recursive.

Clearly we have again equivalence between time consistency and recursivity:

Corollary 3.1.4. A dynamic monotonic mapping (At(·|Ft))t∈{0,...,T−1}, where all the constituent mappings are monotonic and possess the weak projection property, is time consistent if and only if it is recursive.

Proof. This is an application of Theorem 2.2.3. □

It is also possible to generalise Propositions 2.1.6 and 2.2.5 for dynamic mappings.


Corollary 3.1.5. Let (At(X|Ft))t∈{0,...,T−1} be a dynamic monotonic mapping, applied to F-measurable random variables X.

(1) If (At(X|Ft))t∈{0,...,T−1} is a martingale for any X, then it is time consistent.

(2) If (At(X|Ft))t∈{0,...,T−1} is time consistent and all the mappings At(·|Ft) are strict, then (At(X|Ft))t∈{0,...,T−1} will be a submartingale.

Proof. These are applications of Proposition 2.2.5 and Corollary 2.2.4. □

3.2 Compositions

If we restrict ourselves to a finite time horizon, there is an easy way to achieve time consistency by nesting monotonic mappings:

Definition 3.2.1. Let (At(·|Ft))t∈{0,...,T−1} be a sequence of monotonic mappings At(·|Ft) : L̄pt+1(Ω, Ft+1, P) → L̄pt(Ω, Ft, P) and assume that each mapping At has the weak projection property (2.8). The dynamic composition of the mappings is then defined recursively as

BT−1(·|FT−1) = AT−1(·|FT−1)

and

Bt(·|Ft) = At(Bt+1(·|Ft+1)|Ft)

for t < T − 1.

Remark 3.2.2. Any dynamic composition is a recursive and hence time consistent dynamic monotonic mapping.

Proof. Because each individual mapping At has the weak projection property, each nested mapping Bt will also possess this property. Now Bt1 = At1 ∘ At1+1 ∘ . . . ∘ At2−1 ∘ Bt2 and Bt2(XT|Ft2) is an Ft2-measurable random variable. Applying the weak projection property of Bt2 we have

Bt1(Bt2(XT|Ft2)|Ft1) = At1 ∘ At1+1 ∘ . . . ∘ At2−1(Bt2(XT|Ft2)) = Bt1(XT|Ft1),

which is recursivity. This means that each pair Bt1, Bt2 is recursive. Hence the whole dynamic mapping is recursive and time consistent. □

In general, compositions are not self-recursive. The literature mostly concentrates on conditional mappings that are versions of the same unconditional functional. In particular, self-recursive mappings are analysed ([12, 3]). Of course such functionals have very appealing mathematical properties, but unfortunately they are very rare. Kupper and Schachermayer ([14]) proved that in a framework with infinite time horizon and under additional assumptions regarding the filtrations involved, the only time consistent dynamic monotonic mappings are compositions of certainty equivalent mappings, and the only translation equivariant and time consistent dynamic monotonic mapping is the entropic one.


Example 3.1. The nested AV@R, which is the dynamic composition of conditional AV@Rs, is compound convex ([16], Proposition 2.35), hence a submartingale pair and rejection consistent.

On the other hand, it is well known that the average value at risk together with its conditional version is not acceptance consistent, hence neither time consistent (self-recursive) nor a martingale pair. This can easily be shown by a counterexample based on the following tree: the root branches with probability 1/2 each into two intermediate nodes; from the first node the payoffs −1, 1 and 2 are reached with probability 1/3 each, and from the second node the payoffs −10, 10 and 20, again with probability 1/3 each.

In this case the conditional AV@R at level α = 2/3 equals zero in each scenario, but the unconditional AV@R of the tree is negative: the integral ∫₀^{2/3} G⁻¹(x) dx equals −8/6, so AV@R_{2/3}(X) = (3/2)·(−8/6) = −2. On the other hand, the nested AV@R together with the conditional AV@R is time consistent by Corollary 3.1.4.

Example 3.2. The value at risk, defined as the α-quantile of a distribution, is not an acceptability mapping, because concavity is missing. Nevertheless it fulfils translation equivariance and is centred at zero. Therefore it possesses the weak projection property, and any composition based on the value at risk will be time consistent.
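The counterexample of Example 3.1 can be verified numerically. A small sketch (the helper `avar` is illustrative) evaluating conditional, unconditional and nested AV@R at level α = 2/3 on the six-leaf tree above:

```python
# AV@R for a finite distribution given as (probability, payoff) pairs
def avar(pairs, alpha):
    acc, used = 0.0, 0.0
    for p, x in sorted(pairs, key=lambda t: t[1]):  # worst payoffs first
        take = min(p, alpha - used)
        if take <= 0.0:
            break
        acc += take * x
        used += take
    return acc / alpha

alpha = 2/3
node1 = [(1/3, -1.0), (1/3, 1.0), (1/3, 2.0)]      # first intermediate node
node2 = [(1/3, -10.0), (1/3, 10.0), (1/3, 20.0)]   # second intermediate node

a1, a2 = avar(node1, alpha), avar(node2, alpha)
assert abs(a1) < 1e-12 and abs(a2) < 1e-12         # conditional AV@R is zero

# unconditional AV@R of all six leaves (probability 1/6 each): the integral
# of G^{-1} over [0, 2/3] is -8/6, so AV@R = (3/2) * (-8/6) = -2
leaves = [(0.5 * p, x) for node in (node1, node2) for p, x in node]
assert abs(avar(leaves, alpha) + 2.0) < 1e-12

# the nested AV@R evaluates the conditional values at the root and stays zero;
# acceptance consistency fails: A1 >= 0 a.s. but the unconditional value is < 0
assert abs(avar([(0.5, a1), (0.5, a2)], alpha)) < 1e-12
```

The computation makes the failure of acceptance consistency concrete: both conditional valuations are acceptable (zero), yet the time-0 valuation of the same payoff is strictly negative.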

3.3 Intermediate payoffs and multi-period functionals

For practical applications we want to consider not only a single cash-flow at the end of our planning horizon; we also want to take care of random cash-flows at intermediate periods in our valuation and decision. Basically, we want to jointly evaluate a random vector X = (X_1, . . . , X_T) together with an information structure which represents the gain in information over time. The development of information is again modelled by a filtration F = (F_0, . . . , F_T). In addition we assume that at the beginning of the planning horizon no nontrivial information is available: F_0 = {Ω, ∅}. In this setting it is possible to define dynamic multi-period monotonic mappings. Such mappings are constructed in order to evaluate a random income stream relative to nontrivial information available. This can be interpreted as a sequence of valuations of an income stream at future time points. We write X^{(t)} = (X_t, . . . , X_T). If a mapping A_t is applied to a process that is zero at many time points we will for short write
$$A_t(X_\tau; \mathbb F) = A_t\big((0, \ldots, 0, X_\tau, 0, \ldots, 0); \mathbb F\big),$$


R. Kovacevic and G. Ch. Pflug

$$A_t\big((X_\tau, X_\upsilon); \mathbb F\big) = A_t\big((0, \ldots, 0, X_\tau, 0, \ldots, 0, X_\upsilon, 0, \ldots, 0); \mathbb F\big)$$

etc.

Definition 3.3.1. A sequence of mappings
$$A_t(\,\cdot\,; \mathbb F)\colon \times_{j=t+1}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P) \to \bar L^{p_t}(\Omega, \mathcal F_t, P)$$
with the following properties is called a dynamic multi-period monotonic mapping:

(1) Any sequence A_t(X_τ; F) with t ≤ τ ≤ T is a dynamic monotonic mapping.

(2) Monotonicity. If X_τ ≥ Y_τ a.s. for all τ ∈ {t + 1, . . . , T}, then
$$A_t\big(X^{(t+1)}; \mathbb F\big) \ge A_t\big(Y^{(t+1)}; \mathbb F\big) \quad \text{a.s.}$$

The sequence is called a dynamic multi-period acceptability mapping if the following conditions hold in addition:

(3) Translation equivariance.
$$A_t\big((X_{t+1}, \ldots, X_\tau + c, \ldots, X_T); \mathbb F\big) = A_t\big((X_{t+1}, \ldots, X_\tau, \ldots, X_T); \mathbb F\big) + c \quad \text{a.s.}$$

(4) Concavity.
$$A_t\big(\lambda X^{(t+1)} + (1-\lambda)Y^{(t+1)}; \mathbb F\big) \ge \lambda\, A_t\big(X^{(t+1)}; \mathbb F\big) + (1-\lambda)\, A_t\big(Y^{(t+1)}; \mathbb F\big) \quad \text{a.s.}$$
for any λ ∈ [0, 1].

If we use the notion of the infimum for $\bar L^{p_t}(\Omega, \mathcal F_t, P)$-spaces, it is again possible to analyse supergradients and conjugates for such mappings ([16]). It is also possible to generalise the notions of time consistency, recursivity and the weak projection property in this new framework:

Definition 3.3.2. A dynamic multi-period monotonic mapping has the weak projection property if
$$A_t(X_t; \mathbb F) = X_t$$
holds for any t and any $X_t \in \bar L^{p_t}(\Omega, \mathcal F_t, P)$.

Definition 3.3.3. A dynamic multi-period monotonic mapping is called time consistent if, for all $X, Y \in \times_{j=t+1}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$ and all t, the implication
$$A_t(X^{(t+1)}; \mathbb F) \ge A_t(Y^{(t+1)}; \mathbb F)\ \text{a.s.} \ \wedge\ X_t \ge Y_t \implies A_{t-1}(X^{(t)}; \mathbb F) \ge A_{t-1}(Y^{(t)}; \mathbb F)$$
holds for any $X^{(t)} \in \times_{j=t}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$.


Definition 3.3.4. A dynamic multi-period monotonic mapping is called recursive if, for all t,
$$A_{t-1}(X^{(t)}; \mathbb F) = A_{t-1}\big((X_t, A_t(X^{(t+1)}; \mathbb F)); \mathbb F\big)$$
holds for any $X^{(t)} \in \times_{j=t}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$.

The idea for this definition goes back to [12]. Based on these definitions we can restate the equivalence of time consistency and recursivity for dynamic multi-period mappings, an assertion similar to Theorem 2.2.3.

Proposition 3.3.5. Let A_t(·; F) be a dynamic multi-period mapping. If the weak projection property holds, the mapping is time consistent if and only if it is recursive.

Proof. The proof is parallel to the proof of Theorem 2.2.3 but uses the generalised definitions of recursivity, time consistency and the weak projection property. If
$$A_t(X^{(t+1)}; \mathbb F) \ge A_t(Y^{(t+1)}; \mathbb F)\ \text{a.s.} \ \wedge\ X_t \ge Y_t$$
holds, applying monotonicity we get
$$A_{t-1}\big((X_t, A_t(X^{(t+1)}; \mathbb F)); \mathbb F\big) \ge A_{t-1}\big((Y_t, A_t(Y^{(t+1)}; \mathbb F)); \mathbb F\big).$$
If recursivity is assumed, this means that
$$A_{t-1}(X^{(t)}; \mathbb F) \ge A_{t-1}(Y^{(t)}; \mathbb F),$$
which confirms time consistency. For the other direction, first remember that because of the projection property the equation $A_t(A_t(X^{(t+1)}; \mathbb F); \mathbb F) = A_t(X^{(t+1)}; \mathbb F)$, and hence also the two inequalities $A_t(A_t(X^{(t+1)}; \mathbb F); \mathbb F) \le A_t(X^{(t+1)}; \mathbb F)$ and $A_t(A_t(X^{(t+1)}; \mathbb F); \mathbb F) \ge A_t(X^{(t+1)}; \mathbb F)$, hold. In addition, for any F_t-measurable random variable X_t we have X_t ≤ X_t and X_t ≥ X_t. Assuming now time consistency, we can conclude the pair of inequalities
$$A_{t-1}\big((X_t, A_t(X^{(t+1)}; \mathbb F)); \mathbb F\big) \le A_{t-1}(X^{(t)}; \mathbb F)$$
and
$$A_{t-1}\big((X_t, A_t(X^{(t+1)}; \mathbb F)); \mathbb F\big) \ge A_{t-1}(X^{(t)}; \mathbb F).$$
Together this means equality, and therefore recursivity holds.



A special case of dynamic multi-period mappings are acceptability compositions ([21, 22]), which are defined in the following way:

Definition 3.3.6. Let A_t(·|F_t), t = 1, . . . , T, be a sequence of conditional monotonic mappings, abbreviated as A_t(·). An additive monotonic composition is a multi-period mapping B(·|F) which for any $X \in \times_{j=t}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$ is given by
$$B_{T-1}(X_T | \mathcal F_{T-1}) = A_{T-1}(X_T | \mathcal F_{T-1})$$
for T − 1 and
$$B_{t-1}(X^{(t)} | \mathcal F_{t-1}) = A_{t-1}\big(X_t + B_t(X^{(t+1)} | \mathcal F_t)\,\big|\, \mathcal F_{t-1}\big)$$


for t < T. If the mappings A_t(·|F_t) are conditional acceptability mappings, we call the composition an additive acceptability composition. For additive acceptability compositions we have
$$B_t\big(X^{(t+1)}; \mathbb F\big) = A_t \circ A_{t+1} \circ \cdots \circ A_{T-1}\Big(\sum_{k=t+1}^{T} X_k\Big).$$
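The recursive construction can be illustrated numerically. For conditional entropic mappings A(X|F) = −ln E(exp(−X)|F), the additive composition on a two-period tree collapses, via the tower property of conditional expectation, to the unconditional entropic functional of X_1 + X_2 (the situation of Fact 4.5 below). A minimal sketch with arbitrary toy numbers of our own:

```python
import math

# Two-period binary tree: first-period node n in {0, 1} with probability p1[n];
# payment x1[n] at time 1; conditional leaf probabilities p2[n] and payments x2[n].
p1 = [0.4, 0.6]
x1 = [1.0, -0.5]
p2 = [[0.3, 0.7], [0.5, 0.5]]
x2 = [[2.0, -1.0], [0.0, 3.0]]

def entropic(values, probs):
    # entropic mapping A(X) = -ln E(exp(-X)) for a discrete distribution
    return -math.log(sum(p * math.exp(-v) for v, p in zip(values, probs)))

# additive composition: B0 = A0( X1 + A1(X2 | F1) )
inner = [x1[n] + entropic(x2[n], p2[n]) for n in range(2)]
nested = entropic(inner, p1)

# unconditional entropic functional of the total payoff X1 + X2
flat_vals = [x1[n] + x2[n][k] for n in range(2) for k in range(2)]
flat_probs = [p1[n] * p2[n][k] for n in range(2) for k in range(2)]
direct = entropic(flat_vals, flat_probs)

print(nested, direct)  # the two values coincide
```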

If we use a nontrivial σ-algebra for the first period we get dynamic multi-period monotonic mappings. Such mappings are constructed in a recursive way, and hence they are also recursive in the sense of Definition 3.3.4.

Corollary 3.3.7. Additive monotonic compositions are recursive, and hence time consistent, multi-period mappings if their constituent mappings possess the weak projection property.

Proof. This is true because of the recursive construction of the compositions B_t. Time consistency follows from Proposition 3.3.5.

For compositions of (conditional) acceptability mappings, supergradients and supergradient representations have been derived ([13], Lemma 5.2.4 and Theorems 5.2.5 and 5.2.6). It should be noted that compositions of acceptability mappings and compositions of certainty equivalents are time consistent in the same sense. For two periods, and for the case of only one final payment, the main distinction was the self-recursivity of certainty equivalents. Because certainty equivalents are not translation equivariant in general, the notion of self-recursivity is not meaningful any more for compositions of certainty equivalents.

What is really needed for optimisation in a multi-period, multistage setting are functionals that evaluate the acceptability of a process unconditionally, as one real number related to the actual period of decision. Such multi-period functionals can be described as mappings A(·; F) from product spaces $\times_{t=1}^{T} \bar L^{p}(\Omega, \mathcal F_t, P)$, which are Banach lattices together with the p-norms
$$\|X\|_p = \sum_{t=1}^{T} E\big(|X_t|^p\big)^{\frac{1}{p}}, \qquad 1 \le p \le \infty,$$
into the extended real line.

Definition 3.3.8. We will call a multi-period functional A(X; F) a multi-period acceptability functional if it is proper and satisfies

(MA0) Information monotonicity: If $\mathbb F^{(1)} = (\{\Omega, \emptyset\}, \mathcal F_1^{(1)}, \ldots, \mathcal F_T^{(1)})$ and $\mathbb F^{(2)} = (\{\Omega, \emptyset\}, \mathcal F_1^{(2)}, \ldots, \mathcal F_T^{(2)})$ are filtrations such that $\mathcal F_t^{(1)} \subseteq \mathcal F_t^{(2)}$ for all t, then
$$A\big(X; \mathbb F^{(1)}\big) \le A\big(X; \mathbb F^{(2)}\big)$$
for any $X \in \operatorname{dom} A \subseteq \times_{t=1}^{T} \bar L^{p}(\Omega, \mathcal F_t, P)$.


(MA1) Translation equivariance: For all periods t,
$$A\big((X_1, \ldots, X_t + c, \ldots, X_T); \mathbb F\big) = A\big((X_1, \ldots, X_t, \ldots, X_T); \mathbb F\big) + c$$
holds, if c is a real number.

(MA2) Concavity: The mapping X ↦ A(X; F) is concave.

(MA3) Monotonicity: X_t ≤ Y_t a.s. for all t implies A(X; F) ≤ A(Y; F).

This definition goes back to [16], where a stronger version of (MA1) was used. For concave multi-period functionals A(X; F) it is possible to define the concave conjugate
$$A^{+}(Z; \mathbb F) = \inf\Big\{ \langle X, Z\rangle - A(X; \mathbb F) : X \in \times_{t=1}^{T} \bar L^{p}(\Omega, \mathcal F_t, P) \Big\},$$
and if A(X; F) is in addition proper and upper semicontinuous, the Fenchel–Moreau–Rockafellar Theorem ([19], Theorem 5) ensures again that the functional equals its biconjugate:
$$A(X; \mathbb F) = A^{++}(X; \mathbb F).$$

For multi-period acceptability functionals the supergradient representation is well known:

Proposition 3.3.9. Let A(·; F) be an upper semicontinuous multi-period functional satisfying (MA1), (MA2) and (MA3). Then the representation
$$A(X; \mathbb F) = \inf_{Z}\Big\{ \sum_{t=1}^{T} E(X_t \cdot Z_t) - A^{+}(Z; \mathbb F) : Z_t \ge 0,\ E(Z_t) = 1 \Big\} \qquad (3.1)$$
holds for every $X \in \times_{t=1}^{T} \bar L^{p}(\Omega, \mathcal F_t, P)$. Conversely, if A(·; F) can be represented by a dual representation (3.1) and the conjugate $A^{+}$ is proper, then A is proper, upper semicontinuous and satisfies (MA1), (MA2) and (MA3).
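In a finite probability space the representation (3.1) can be verified by brute force for the prototypical acceptability functional AV@R_α, whose concave conjugate vanishes on its effective domain, so that the dual variables Z range over densities with 0 ≤ Z ≤ 1/α and E(Z) = 1; the extreme points of this set charge an α-mass subset of scenarios with 1/α. A sketch with our own toy numbers:

```python
from itertools import combinations

alpha = 0.5
x = [-2.0, 2.0, 0.0, 1.0]   # four equally likely outcomes
n = len(x)

# primal value: mean of the worst alpha-tail (here: the two smallest outcomes)
primal = sum(sorted(x)[: int(alpha * n)]) / (alpha * n)

# dual value: inf_Z E(X * Z) over densities 0 <= Z <= 1/alpha with E(Z) = 1.
# The extreme points put Z = 1/alpha on an alpha-mass subset of scenarios,
# so the infimum is attained over the subsets enumerated below.
dual = min(
    sum(x[i] * (1.0 / alpha) * (1.0 / n) for i in subset)
    for subset in combinations(range(n), int(alpha * n))
)

print(primal, dual)  # the primal and dual values coincide
```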

Proof. See Theorem 3.21 from [16].



There are many ways to construct multi-period functionals. One possibility uses dynamic multi-period mappings: just define a dynamic multi-period mapping for the time periods 0, . . . , T, where the filtration F starts with the trivial σ-algebra F_0 = {Ω, ∅}. It is natural to call such a multi-period functional time consistent if the dynamic multi-period mapping used is time consistent itself. In particular, it is possible to construct a time consistent multi-period functional with properties (MA1), (MA2), (MA3) by additive monotonic compositions of conditional acceptability mappings. By contrast, additive monotonic compositions of certainty equivalents – though meaningful functionals – will not fulfil conditions (MA1)–(MA3) in general. Property (MA0) – information monotonicity – is even harder to achieve. In particular, this property is not automatically compatible with a recursive construction of


dynamic and multi-period mappings. We will see in Section 4 that additive compositions of conditional versions of the same unconditional functional can cause severe troubles. Another type of multi-period functionals are SEC-functionals ([16], Definition 3.27):

Definition 3.3.10. A dynamic multi-period acceptability mapping B_t(·; F) is called separable expected conditional (SEC) if it is of the form
$$B_t\big(X^{(t+1)}; \mathbb F\big) = \sum_{j=t+1}^{T} E\big(A_{j-1}(X_j | \mathcal F_{j-1})\,\big|\,\mathcal F_t\big),$$

where the $A_{j-1}(\cdot\,|\,\mathcal F_{j-1})$ are conditional u.s.c. acceptability mappings. Starting the sequence with F_0 = {Ω, ∅}, a multi-period acceptability functional B(·; F) is called SEC if it has the form
$$B(X; \mathbb F) = \sum_{t=1}^{T} E\big(A_{t-1}(X_t | \mathcal F_{t-1})\big).$$
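Evaluating a SEC functional is elementary on a finite tree: each period contributes the expectation of a conditional acceptability value. The sketch below (our own toy numbers) computes B(X; F) = A_0(X_1) + E(A_1(X_2|F_1)) with A = AV@R at level α = 1/2 on a two-period binary tree.

```python
def avar(outcomes, probs, alpha):
    # AV@R_alpha(X): mean of the worst alpha-tail of a discrete distribution
    pairs = sorted(zip(outcomes, probs))
    acc, total = 0.0, 0.0
    for x, p in pairs:
        take = min(p, alpha - acc)
        if take <= 0:
            break
        total += take * x
        acc += take
    return total / alpha

alpha = 0.5
p1 = [0.5, 0.5]                   # first-period branching probabilities
x1 = [1.0, -1.0]                  # payment at time 1 in each node
x2 = [[2.0, 0.0], [0.0, -2.0]]    # payments at time 2, per time-1 node
p2 = [[0.5, 0.5], [0.5, 0.5]]     # conditional branching probabilities

# t = 1 term: A0(X1 | F0) with trivial F0 is the unconditional AV@R of X1
term1 = avar(x1, p1, alpha)                            # = -1
# t = 2 term: E( AV@R_alpha(X2 | F1) )
cond = [avar(x2[n], p2[n], alpha) for n in range(2)]   # = [0, -2]
term2 = sum(p1[n] * cond[n] for n in range(2))         # = -1

sec_value = term1 + term2
print(sec_value)  # -2.0
```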

SEC-functionals are not constructed as compositions; nevertheless, they are time consistent in the sense of Definition 3.3.3.

Proposition 3.3.11. With given acceptability mappings A_t(·|F_t), SEC-mappings are time consistent. If the constituting acceptability mappings possess the weak projection property, they are in addition recursive.

Proof. Consider sequences X^{(t)}, Y^{(t)} such that $B_t(X^{(t+1)}; \mathbb F) \ge B_t(Y^{(t+1)}; \mathbb F)$ a.s. and $X_t \ge Y_t$ a.s. These assumptions imply $A_{t-1}(X_t|\mathcal F_{t-1}) \ge A_{t-1}(Y_t|\mathcal F_{t-1})$ a.s. and $E(B_t(X^{(t+1)}; \mathbb F)\,|\,\mathcal F_{t-1}) \ge E(B_t(Y^{(t+1)}; \mathbb F)\,|\,\mathcal F_{t-1})$ a.s. by monotonicity. Summing those inequalities and applying the projection property for the conditional expectation, we get
$$A_{t-1}(X_t|\mathcal F_{t-1}) + \sum_{j=t+1}^{T} E\big(A_{j-1}(X_j|\mathcal F_{j-1})\,\big|\,\mathcal F_{t-1}\big) \ge A_{t-1}(Y_t|\mathcal F_{t-1}) + \sum_{j=t+1}^{T} E\big(A_{j-1}(Y_j|\mathcal F_{j-1})\,\big|\,\mathcal F_{t-1}\big) \quad \text{a.s.},$$
or
$$B_{t-1}\big(X^{(t)}; \mathbb F\big) \ge B_{t-1}\big(Y^{(t)}; \mathbb F\big).$$
Recursivity then follows by Proposition 3.3.5.

4 Information monotonicity

In this last section we want to discuss the notion of information monotonicity in more detail. Consider a stochastic (cash-flow) process X = (X_1, X_2, . . . , X_T), defined on some probability space (Ω, F, P). If we want to assign an acceptability value A, it would seemingly suffice to consider only the random variables X_t, that means to define A = A(X_1, X_2, . . . , X_T). However, in a multi-period setting not only the information generated by the process X under consideration might be relevant. Typically there is other information (e.g. from the "economic environment") available and useful. This is particularly true in situations where decisions can be taken not only at the beginning but also at some intermediate points of time (multistage problems). A multi-period decision maker generally should prefer a process with a finer filtration over the same process with a coarser filtration. Otherwise it could be optimal to base a decision on outdated information, or to create additional uncertainty in an attempt to get higher "acceptability". In addition, it would not be possible to reduce risk by means of hedging. This is the reason why we introduced the filtration F into our definition of multi-period functionals, and this justifies the introduction of property (MA0) in Definition 3.3.8. Of course this property is not restricted to be used with acceptability functionals and can be applied to any (monotonic) multi-period functional. In addition, we say that a valuation of a random process X is information monotonic if (MA0) holds at least for this concrete random process. We will also use the notion of information monotonicity with respect to subsets of the domain of a functional. An example would be the multi-period valuation of final cash-flows; in this case information monotonicity is restricted to processes of the form (0, . . . , 0, X_T). If a multi-period functional is used as an objective function in a multi-stage decision problem, it should be information monotonic in the sense of (MA0), in order to use an information monotonic valuation on the whole space of possible decisions.
Sometimes we refer to a stronger version of information monotonicity:

Definition 4.0.12. A mapping A_t(·; F) is called strictly information monotonic if it is information monotonic and if, whenever $\mathcal F_t^{(1)} \subset \mathcal F_t^{(2)}$ holds for at least one t, the strict inequality
$$A\big(X; \mathbb F^{(1)}\big) < A\big(X; \mathbb F^{(2)}\big)$$
is valid for at least one $X \in \times_{j=t}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$.

This notion is introduced to distinguish between functionals not responding in any way to additional information and functionals that respond positively at least for some processes. As so far little is definitely known about the interplay of time consistency and information monotonicity, the following facts and examples should make clear the questions arising. First we look at final payoffs:

Fact 4.1. Compositions of conditional certainty equivalents, applied to final payoffs, are self-recursive and time consistent. But this means that any intermediate information cancels out. This leads to information monotonicity, but strict information monotonicity does not hold.


Fact 4.2. The composition of conditional AV@R_α-mappings with common parameter α is time consistent but not information monotonic. This can be seen from the following counterexample: consider a final random payoff X_2 at time t = 2 under two alternative filtrations F^{(1)} and F^{(2)}, represented by the following probability trees. Both possibilities are evaluated under the same measure, given by the conditional probabilities on the arcs of the tree.

[Two scenario trees for X_2, with payoffs in {0, 1, 2, 3} and conditional probabilities on the arcs, are displayed here; the right tree, corresponding to F^{(2)}, refines the left one.]

Of course the right tree (F^{(2)}) represents the more informative filtration. If we calculate the composition of AV@R mappings we get
$$AV@R_{0.05}\big(AV@R_{0.05}(X_2 \,|\, \mathcal F_1^{(1)})\big) = 0.82$$
and
$$AV@R_{0.05}\big(AV@R_{0.05}(X_2 \,|\, \mathcal F_1^{(2)})\big) = 0.1,$$
which means that this pair cannot be information monotonic.
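The failure of information monotonicity for composed AV@Rs does not depend on the particular trees. The following sketch (our own toy numbers, at level α = 1/2) exhibits a payoff for which refining the time-1 filtration strictly decreases the composed value:

```python
def avar(outcomes, probs, alpha):
    # AV@R_alpha(X): mean of the worst alpha-tail of a discrete distribution
    pairs = sorted(zip(outcomes, probs))
    acc, total = 0.0, 0.0
    for x, p in pairs:
        take = min(p, alpha - acc)
        if take <= 0:
            break
        total += take * x
        acc += take
    return total / alpha

alpha = 0.5
x2 = [-2.0, 2.0, 0.0, 0.0]   # four equally likely scenarios for the final payoff

# coarse filtration: F1 trivial, so the composition is the plain AV@R
coarse = avar(x2, [0.25] * 4, alpha)   # = -1

# finer filtration: F1 generated by the partition {{1,2},{3,4}}
cond = [avar(x2[:2], [0.5, 0.5], alpha), avar(x2[2:], [0.5, 0.5], alpha)]  # = [-2, 0]
fine = avar(cond, [0.5, 0.5], alpha)   # = -2

print(coarse, fine)  # the finer filtration yields the strictly smaller value
```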

Fact 4.3. If the expectation is composed with the conditional AV@R_α, the resulting expected conditional mapping $E\big(AV@R_\alpha(\cdot\,|\,\mathcal F_1^{(1)})\big)$ will be time consistent and information monotonic ([16], Proposition 3.9). Because the information does not cancel out, strict information monotonicity is possible. This principle can also hold for expected conditional mappings constructed with other conditional mappings.

Fact 4.4. Compositions of acceptability mappings can be information monotonic for some subset of $\times_{j=t}^{T} \bar L^{p_j}(\Omega, \mathcal F_j, P)$. If we use our tree above, an example would be the composition of the unconditional AV@R_{0.6} with the conditional AV@R_{0.05}. In fact any unconditional AV@R_α with α high enough will suffice. The limiting case with α = 1 is the expectation, which results in an information monotonic valuation by Fact 4.3.

Next we introduce intermediate payments and analyse additive monotonic compositions. In particular, additive monotonic compositions of certainty equivalents seem to be a very reasonable concept, because the random variable
$$X_1 + u^{-1}\big(E\big(u(X_2)\,\big|\,\mathcal F_1^{(1)}\big)\big)$$
sums quantities with consistent dimension.
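Fact 4.3 can also be illustrated numerically. In the toy setting below (our own numbers, independent of the trees above), refining the time-1 information from the trivial σ-algebra, to a two-atom partition, to full information yields a nondecreasing sequence of values E(AV@R_α(X_2|F_1)):

```python
def avar(outcomes, probs, alpha):
    # AV@R_alpha(X): mean of the worst alpha-tail of a discrete distribution
    pairs = sorted(zip(outcomes, probs))
    acc, total = 0.0, 0.0
    for x, p in pairs:
        take = min(p, alpha - acc)
        if take <= 0:
            break
        total += take * x
        acc += take
    return total / alpha

alpha = 0.5
x2 = [-2.0, 2.0, 0.0, 0.0]
probs = [0.25] * 4

def expected_conditional_avar(partition):
    # E( AV@R_alpha(X2 | F1) ) for F1 generated by a partition of the scenarios
    val = 0.0
    for atom in partition:
        p_atom = sum(probs[i] for i in atom)
        cond_probs = [probs[i] / p_atom for i in atom]
        val += p_atom * avar([x2[i] for i in atom], cond_probs, alpha)
    return val

trivial = expected_conditional_avar([[0, 1, 2, 3]])        # = -1
partial = expected_conditional_avar([[0, 1], [2, 3]])      # = -1
full = expected_conditional_avar([[0], [1], [2], [3]])     # =  0 (the expectation)

print(trivial, partial, full)  # nondecreasing in the information
```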


Fact 4.5. The additive composition of entropic mappings is time consistent and information monotonic, but not in the strict sense. This can be seen from
$$-\ln E\Big(\exp(-X_1)\cdot E\big(\exp(-X_2)\,\big|\,\mathcal F_1^{(1)}\big)\Big) = -\ln\big(E(\exp(-X_1 - X_2))\big).$$

Fact 4.6. In general, additive compositions of other certainty equivalents are not information monotonic. If we introduce an intermediate payment X_1 at the point in time t = 1 in our example trees, the calculation of
$$\Big(E\Big(\sqrt{X_1 + \big(E\big(\sqrt{X_2}\,\big|\,\mathcal F_1^{(1)}\big)\big)^2}\Big)\Big)^2$$
leads to a strange phenomenon: if X_1 is a positive payment, information monotonicity holds. But if X_1 is negative, the functional is information antitone.

Fact 4.7. Monotonic compositions of AV@R mappings with different but coordinated α-parameters will show information monotonicity and also time consistency for some concrete random variables.

So far, none of the considered additive compositions jointly possesses all the favourable properties of time consistency and (strict) information monotonicity. But at least we can give one example showing that this is possible.

Fact 4.8. As we saw in Proposition 3.3.11, SEC-functionals – although not constructed as additive compositions – are time consistent multi-period acceptability functionals. In addition, some of them, particularly if they are based on the expected average value at risk $E\big(AV@R_\alpha(X_t \,|\, \mathcal F_{t-1})\big)$, are information monotonic ([16], Proposition 3.9). Because the information does not cancel out, strict information monotonicity may hold.

Bibliography

[1] K. J. Arrow, Aspects of the theory of risk-bearing, Helsinki, 1965.
[2] Ph. Artzner, F. Delbaen, J. M. Eber, and D. Heath, Coherent Measures of Risk, Mathematical Finance 9 (1999), pp. 203–228.
[3] Ph. Artzner, F. Delbaen, D. Heath, and H. Ku, Coherent multiperiod risk adjusted values and Bellman's principle, Annals of Operations Research 152 (2007), pp. 5–22.
[4] Ph. Artzner, F. Delbaen, and D. Heath, Thinking Coherently, Risk 10 (1997), pp. 68–71.
[5] J. Bion-Nadal, Conditional risk measure and robust representation of convex conditional risk measures, CMAP preprint 557, 2004.
[6] J. Bion-Nadal, Dynamic risk measures: Time consistency and risk measures from BMO martingales, Finance and Stochastics 12 (2008), pp. 219–244.


[7] K. Detlefsen and G. Scandolo, Conditional and Dynamic Convex Risk Measures, Finance and Stochastics 9 (2005), pp. 539–561.
[8] H. Föllmer and A. Schied, Stochastic Finance – An Introduction in Discrete Time, Studies in Mathematics, de Gruyter, 2002.
[9] H. Geman and S. Ohana, Time-consistency in managing a commodity portfolio: A dynamic risk measure approach, Journal of Banking & Finance 32 (2008), pp. 1991–2005.
[10] Ch. Gollier, The Economics of Risk and Time, MIT Press, 2001.
[11] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge University Press, 1934.
[12] A. Jobert and L. C. G. Rogers, Valuations and dynamic convex risk measures, Mathematical Finance 18 (2008), pp. 1–22.
[13] R. Kovacevic, Conditional Acceptability Mappings: Convex Analysis in Banach Lattices, Dissertation, University of Vienna, 2008.
[14] M. Kupper and W. Schachermayer, Representation Results for Law Invariant Time Consistent Functions, preprint, 2008.
[15] H. M. Markowitz, Portfolio selection, The Journal of Finance 7 (1952), pp. 77–91.
[16] G. Ch. Pflug and W. Römisch, Modeling, Measuring and Managing Risk, World Scientific, August 2007.
[17] G. Ch. Pflug and W. Römisch, The role of information in multi-period risk measurement, Stochastic Programming E-Print Series (2009).
[18] F. Riedel, Dynamic coherent risk measures, Stochastic Processes and their Applications 112 (2004), pp. 185–200.
[19] R. T. Rockafellar, Conjugate Duality and Optimization, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 16, SIAM, Philadelphia, 1974.
[20] B. Roorda and J. M. Schumacher, Time consistency conditions for acceptability measures, with an application to Tail Value at Risk, Insurance: Mathematics and Economics 40 (2007), pp. 209–230.
[21] A. Ruszczyński and A. Shapiro, Optimization of Risk Measures, in: Probabilistic and Randomized Methods for Design under Uncertainty (G. Calafiore and F. Dabbene, eds.), Springer, 2005, pp. 17–158.
[22] A. Ruszczyński and A. Shapiro, Conditional Risk Mappings, Mathematics of Operations Research 31 (2006), pp. 544–561.

[23] A. Shapiro, On a time consistency concept in risk averse multi-stage stochastic programming, Preprint (E-prints for the optimization community), 2008.
[24] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, 3rd ed., Princeton University Press, 1953.
[25] S. Weber, Distribution-invariant dynamic risk measures, information and dynamic consistency, Mathematical Finance 16 (2006), pp. 419–441.


Author information

Raimund Kovacevic, Department of Statistics and Decision Support Systems, University of Vienna, Universitätsstraße 5/9, A-1010 Vienna, Austria.
Email: [email protected]

Georg Ch. Pflug, Department of Statistics and Decision Support Systems, University of Vienna, and IIASA, Laxenburg, Austria.
Email: [email protected]

Radon Series Comp. Appl. Math 8, 371–410

© de Gruyter 2009

Optimal investment and hedging under partial and inside information

Michael Monoyios

Abstract. This article concerns optimal investment and hedging for agents who must use trading strategies which are adapted to the filtration generated by asset prices, possibly augmented with some inside information related to the future evolution of an asset price. The price evolution and observations are taken to be continuous, so the partial (and, when applicable, inside) information scenario is characterised by asset price processes with an unknown drift parameter, which is to be filtered from price observations. With linear observation and signal process dynamics, this is done with a Kalman–Bucy filter. Using the dual approach to portfolio optimisation, we solve the Merton optimal investment problem when the agent does not know the drift parameter of the underlying stock. This is taken to be a random variable with a Gaussian prior distribution, which is updated via the Kalman filter. This results in a model with a stochastic drift process adapted to the observation filtration, and which can be treated as a full information problem, yielding an explicit solution. We also consider the same problem when the agent has noisy knowledge at time zero of the terminal value of the Brownian motion driving the stock. Using techniques of enlargement of filtration to accommodate the insider's additional knowledge, followed by filtering the asset price drift, we are again able to obtain an explicit solution. Finally we treat an incomplete market hedging problem. A claim on a non-traded asset is hedged using a correlated traded asset. We summarise the full information case, then treat the partial information scenario in which the hedger is uncertain of the true values of the asset price drifts. After filtering, the resulting problem with random drifts is solved in the case that each asset's prior distribution has the same variance, resulting in analytic approximations for the optimal hedging strategy.

Key words. Duality, filtering, incomplete information, optimal portfolios.
AMS classification. 49N30, 93E11, 93C41, 91B28

1 Introduction

This article examines some problems of optimal investment, and of optimal hedging of a contingent claim in an incomplete market, when the agent's information set is restricted to stock price observations, possibly augmented by some additional information related to the terminal value of a stock price. (Work presented during the Special Semester on Stochastics with Emphasis on Finance, September 3 – December 5, 2008, organised by RICAM (Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences), Linz, Austria.)

In classical models of financial mathematics, one usually specifies a probability space (Ω, F, P) equipped with a filtration F = (F_t)_{0≤t≤T}, and then writes down some stochastic process S = (S_t)_{0≤t≤T} for an asset price, such that S is adapted to the filtration F. A typical example would be the Black–Scholes (henceforth, BS) model of a stock price, following the geometric Brownian motion
$$dS_t = \sigma S_t(\lambda\,dt + dB_t), \qquad (1.1)$$

where B is a (P, F)-Brownian motion and the volatility σ > 0 and the Sharpe ratio λ are assumed to be known constants. Of course, it is a strong assumption that an agent is able to observe the Brownian motion process B, as well as the stock price process S. We refer to this as a full information scenario. In this case, an agent uses F-adapted trading strategies in S, a process with known drift and diffusion coefficients.

We shall frequently relax the full information assumption in this article. We shall assume that the agent can only observe the stock price process, and not the Brownian motion B. The agent's trading strategies must therefore be adapted to the observation filtration $\widehat{\mathbb F} := (\widehat{\mathcal F}_t)_{0\le t\le T}$ generated by S. This is a partial information scenario. In recent years there has been growing research activity in this area, as surveyed by Pham [28], for instance, who examines some scenarios different from the ones in this article.

With partial information, the parameter λ would be regarded as an unknown constant whose value needs to be determined from price data. In principle, one would also have to apply this philosophy to the volatility σ, but we shall make the approximation that price observations are continuous, so that σ can be computed from the quadratic variation [S]_t of the stock price, since we have
$$d[S]_t = \sigma^2 S_t^2\,dt, \qquad 0 \le t \le T.$$

One way to model the uncertainty in our knowledge of the (supposedly constant) parameter λ is to take a so-called Bayesian approach. This means we consider λ to be an F_0-measurable random variable with a given initial distribution (the prior distribution). The prior distribution initialises the probability law of λ conditional on $\widehat{\mathcal F}_0$, and this is updated in the light of new price information, that is, as the observation filtration $\widehat{\mathbb F}$ evolves. (In the case that λ is some unknown process (λ_t)_{0≤t≤T} (as opposed to an unknown constant), we would consider it to be some F-adapted process such that its starting value λ_0 has a given prior distribution conditional on $\widehat{\mathcal F}_0$.) This is an example of a filtering problem. In the case of the BS model (1.1), where we model λ as an F_0-measurable random variable, we are interested in computing the conditional expectation
$$\hat\lambda_t := E\big(\lambda \,\big|\, \widehat{\mathcal F}_t\big), \qquad 0 \le t \le T.$$
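The identification of σ from the quadratic variation above can be illustrated by simulation: over a fine observation grid, the realised quadratic variation of log S recovers σ²T (since [log S]_t = σ²t). A sketch with hypothetical parameter values:

```python
import math
import random

random.seed(1)
sigma, lam, T, n = 0.2, 0.5, 1.0, 100_000   # hypothetical model parameters
dt = T / n

# simulate log S under dS_t = sigma * S_t * (lam dt + dB_t)
qv = 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))
    dlog = (sigma * lam - 0.5 * sigma ** 2) * dt + sigma * db
    qv += dlog * dlog          # realised quadratic variation of log S

sigma_hat = math.sqrt(qv / T)  # volatility estimate from price observations alone
print(sigma_hat)               # close to the true value 0.2
```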

We shall see that the effect of filtering is that the model (1.1) may be replaced by a model specified on the filtered probability space $(\Omega, \widehat{\mathcal F}_T, \widehat{\mathbb F}, P)$ and written as
$$dS_t = \sigma S_t(\hat\lambda_t\,dt + d\widehat B_t),$$
where $\widehat B$ is a $(P, \widehat{\mathbb F})$-Brownian motion. This model may now be treated as a full information model, since both $\widehat B$ and $\hat\lambda$ are $\widehat{\mathbb F}$-adapted processes. The price we have paid for restoring a full information scenario is that the constant parameter λ has been replaced by a random process $\hat\lambda$. The procedure by which a partial information model is replaced with a tractable full information model under the observation filtration is typically only achievable in special circumstances, such as Gaussian prior distributions and certain linearity properties in the relation between the observable and unobservable processes.

The rest of the article is as follows. In Section 2, we briefly introduce the innovations process of filtering theory and state the filtering algorithm that we shall use, the celebrated Kalman–Bucy filter [11]. In Section 3 we use the dual approach to portfolio optimisation (see Karatzas [13], for example) to solve the Merton problem [19, 20] of optimal investment, when the drift parameter of the stock must be filtered from price observations. In Section 4 we solve the Merton problem when the agent is again uncertain of the stock's drift, but is assumed to have some additional information in the form of knowledge of the value of a random variable I, representing noisy information on the underlying Brownian motion at time T. Further examples of optimal investment problems with inside information and parameter uncertainty are given in Danilova, Monoyios and Ng [2]. Finally, in Section 5 we consider the hedging of a claim in an incomplete market setting under partial information. Specifically, we consider a basis risk model involving the optimal hedging of a claim on a non-tradeable asset Y using a traded stock S, correlated with Y, when the hedger is restricted to trading strategies in S that are adapted to the observation filtration generated by the asset prices. A number of papers, such as Henderson [8], Monoyios [21, 22] and Musiela and Zariphopoulou [26], have used exponential indifference valuation methods to hedge the claim in an optimal manner in a full information scenario.
We outline these results before moving on to the partial information case, where we assume the hedger does not know with certainty the drifts of S and Y . Analytic approximations for prices and hedging strategies are given. Further work on this topic can be found in Monoyios [23, 25].

2 Innovations and linear filtering

Filtering problems concern estimating something (in a manner to be made precise shortly) about an unobserved stochastic process Ξ, given observations of a related process Λ. The problem was solved for linear systems in continuous time by Kalman and Bucy [11]. Subsequent work sought generalisations to systems with nonlinear dynamics; see Zakai [33], for instance. Kailath [10] developed the so-called innovations approach to linear filtering, which formulated the problem in the context of martingale theory. This approach to nonlinear filtering was given a definitive treatment by Fujisaki, Kallianpur and Kunita [7]. Textbook treatments can be found in Kallianpur [12], Liptser and Shiryaev [16, 17], Rogers and Williams [32], Chapter VI.8, and Fleming and Rishel [6].

The setting is a probability space (Ω, F, P) equipped with a filtration F = (F_t)_{0≤t≤T}. All processes are assumed to be F-adapted. Note that F is not the observation filtration; let us call F the background filtration. We consider two processes, both taken to be


one-dimensional (for simplicity):

• a signal process Ξ = (Ξ_t)_{0≤t≤T}, which is not directly observable;

• an observation process Λ = (Λ_t)_{0≤t≤T}, which is observable and somehow correlated with Ξ, so that by observing Λ we can say something about the distribution of Ξ.

Let $\widehat{\mathbb F} := (\widehat{\mathcal F}_t)_{0\le t\le T}$ denote the observation filtration generated by Λ. That is,
$$\widehat{\mathcal F}_t := \sigma(\Lambda_s;\ 0 \le s \le t), \qquad 0 \le t \le T.$$
The filtering problem is to compute the conditional expectation
$$\widehat\Xi_t := E\big(\Xi_t \,\big|\, \widehat{\mathcal F}_t\big), \qquad 0 \le t \le T. \qquad (2.1)$$

To proceed further, we specify some particular model for the observation and signal processes. We shall focus on the linear case where both Λ and Ξ are solutions to linear stochastic differential equations (SDEs).

2.1 Linear observations and linear signal

Let B = (B_t)_{0≤t≤T} be an F-Brownian motion. We assume the observation process Λ is of the form
$$\Lambda_t = \int_0^t G(s)\Xi_s\,ds + B_t, \qquad 0 \le t \le T, \qquad \text{(linear observation)} \qquad (2.2)$$
with G(·) a deterministic function such that $E\int_0^T G^2(t)\Xi_t^2\,dt < \infty$. We take the signal process to be of the form
$$\Xi_t = \Xi_0 + \int_0^t A(s)\Xi_s\,ds + \int_0^t C(s)\,dW_s, \qquad 0 \le t \le T, \qquad \text{(linear signal)}$$

for deterministic functions A(·), C(·), with W a (P, F)-Brownian motion independent of the F_0-measurable random variable Ξ_0, and correlated with B in the observation model (2.2) according to
$$[W, B]_t = \rho t, \qquad 0 \le t \le T, \qquad \rho \in [-1, 1].$$

Suppose further that the signal process has a Gaussian initial distribution, Ξ_0 ∼ N(µ, v), independent of B and of W, where N(µ, v) denotes the normal probability law with mean µ and variance v. The two-dimensional process (Ξ, Λ) is then Gaussian, so the conditional distribution of Ξ_t given the sigma-field $\widehat{\mathcal{F}}_t$ will also be normal (and so, in particular, is completely characterised by its mean and variance), with mean given by (2.1) and variance
\[ V_t := \operatorname{var}\!\left( \Xi_t \mid \widehat{\mathcal{F}}_t \right) = E\!\left[ (\Xi_t - \widehat{\Xi}_t)^2 \mid \widehat{\mathcal{F}}_t \right] = E\!\left[ \Xi_t^2 \mid \widehat{\mathcal{F}}_t \right] - \widehat{\Xi}_t^2, \quad 0 \le t \le T. \]

Investment and hedging under partial information


Notice that the initial values are
\[ \widehat{\Xi}_0 = E\!\left[ \Xi_0 \mid \widehat{\mathcal{F}}_0 \right] = E\Xi_0 = \mu, \]
and
\[ V_0 = E\!\left[ (\Xi_0 - \widehat{\Xi}_0)^2 \mid \widehat{\mathcal{F}}_0 \right] = E\!\left[ (\Xi_0 - \mu)^2 \right] = \operatorname{var}(\Xi_0) = v. \]

The problem then boils down to finding an algorithm for computing the sufficient statistics $\widehat{\Xi}_t$, V_t from their initial values $\widehat{\Xi}_0 = \mu$, V_0 = v. For linear systems it turns out that the conditional variance V_t is a deterministic function of t. Thus, there is in fact only one sufficient statistic, the conditional mean $\widehat{\Xi}_t$, which turns out to satisfy a linear SDE. This is the celebrated Kalman–Bucy filter, given in Theorem 2.1 shortly.

2.2 Innovations process

Define the $\widehat{\mathbb{F}}$-adapted innovations process N = (N_t)_{0≤t≤T} by
\[ N_t := \Lambda_t - \int_0^t G(s)\,\widehat{\Xi}_s\,ds, \quad 0 \le t \le T. \]
We recall two crucial properties of the innovations process, which form the bedrock of filtering theory.

• The innovations process N is an $\widehat{\mathbb{F}}$-Brownian motion.

• Every $\widehat{\mathbb{F}}$-local martingale M admits a representation of the form
\[ M_t = M_0 + \int_0^t \Phi_s\,dN_s, \quad 0 \le t \le T, \]
where Φ is $\widehat{\mathbb{F}}$-adapted and $\int_0^T \Phi_t^2\,dt < \infty$ a.s.

For a proof of the above results, and of the following celebrated result for the filtering of linear systems, see Rogers and Williams [32] or Liptser and Shiryaev [16], for instance.

Theorem 2.1 (One-dimensional Kalman–Bucy filter). On a filtered probability space (Ω, F, F, P), with F = (F_t)_{0≤t≤T}, let Ξ = (Ξ_t)_{0≤t≤T} be an F-adapted signal process satisfying
\[ d\Xi_t = A(t)\,\Xi_t\,dt + C(t)\,dW_t, \]
and let Λ = (Λ_t)_{0≤t≤T} be an F-adapted observation process satisfying
\[ d\Lambda_t = G(t)\,\Xi_t\,dt + dB_t, \quad \Lambda_0 = 0, \]
where W, B are F-Brownian motions with correlation ρ, and the coefficients A(·), C(·), G(·) are deterministic functions satisfying
\[ \int_0^T \left( |A(t)| + C^2(t) + G^2(t) \right) dt < \infty. \]


Define the observation filtration $\widehat{\mathbb{F}} := (\widehat{\mathcal{F}}_t)_{0\le t\le T}$ by
\[ \widehat{\mathcal{F}}_t := \sigma(\Lambda_s;\ 0 \le s \le t). \]
Suppose Ξ_0 is an F_0-measurable random variable, and that the distribution of Ξ_0 is Gaussian with mean µ and variance v, independent of W and B. Then the conditional expectation $\widehat{\Xi}_t := E[\Xi_t \mid \widehat{\mathcal{F}}_t]$, for 0 ≤ t ≤ T, satisfies
\[ d\widehat{\Xi}_t = A(t)\,\widehat{\Xi}_t\,dt + \left[ G(t)V_t + \rho C(t) \right] dN_t, \quad \widehat{\Xi}_0 = \mu, \]
where N = (N_t)_{0≤t≤T} is the innovations process, an $\widehat{\mathbb{F}}$-Brownian motion satisfying
\[ dN_t = d\Lambda_t - G(t)\,\widehat{\Xi}_t\,dt, \]
and $V_t = \operatorname{var}(\Xi_t \mid \widehat{\mathcal{F}}_t)$, for 0 ≤ t ≤ T, is the conditional variance, which is independent of $\widehat{\mathcal{F}}_t$ and satisfies the deterministic Riccati equation
\[ \frac{dV_t}{dt} = (1-\rho^2)\,C^2(t) + 2\left[ A(t) - \rho C(t)G(t) \right] V_t - G^2(t)\,V_t^2, \quad V_0 = v. \]

A multi-dimensional version of the Kalman–Bucy filter can be derived using similar techniques to the one-dimensional case. See Theorem V.9.2 in Fleming and Rishel [6], for instance.
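The filter of Theorem 2.1 is straightforward to implement by discretising on a time grid. The following sketch (not code from the chapter; constant coefficients and all parameter values are illustrative) propagates the Riccati equation and the filter mean by an Euler scheme. In the special case A = C = 0, G = 1, ρ = 0 the Riccati equation reduces to dV_t/dt = −V_t², whose exact solution V_t = v/(1 + vt) serves as a check.

```python
import numpy as np

def kalman_bucy(dLam, A, C, G, rho, mu, v, dt):
    """Euler discretisation of the one-dimensional Kalman-Bucy filter.

    dLam: increments of the observation process Lambda.
    Returns the filter mean Xi_hat and the conditional variance V on the grid.
    """
    n = len(dLam)
    xi_hat = np.empty(n + 1); V = np.empty(n + 1)
    xi_hat[0], V[0] = mu, v
    for k in range(n):
        # innovations increment dN = dLambda - G * Xi_hat * dt
        dN = dLam[k] - G * xi_hat[k] * dt
        xi_hat[k + 1] = xi_hat[k] + A * xi_hat[k] * dt + (G * V[k] + rho * C) * dN
        # deterministic Riccati equation for the conditional variance
        dV = (1 - rho**2) * C**2 + 2 * (A - rho * C * G) * V[k] - G**2 * V[k]**2
        V[k + 1] = V[k] + dV * dt
    return xi_hat, V

# demo: constant unobserved signal (A = C = 0, G = 1, rho = 0), prior N(0.5, 1)
rng = np.random.default_rng(0)
T, n = 1.0, 2000
dt = T / n
xi0 = rng.normal(0.5, 1.0)                          # the constant hidden signal
dLam = xi0 * dt + rng.normal(0.0, np.sqrt(dt), n)   # observation increments
xi_hat, V = kalman_bucy(dLam, A=0.0, C=0.0, G=1.0, rho=0.0, mu=0.5, v=1.0, dt=dt)
```

In this special case the numerically integrated variance should match V_T = v/(1 + vT) = 0.5 up to discretisation error.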

3 Optimal investment problems with random drift

3.1 Portfolio optimisation via convex duality

We wish to apply the filtering results in the previous section to portfolio optimisation and optimal hedging problems when the agent does not know the drift parameters of the underlying assets. The filtering approach leads to portfolio problems in which the assets follow SDEs with random drift parameters. The dual approach to portfolio optimisation is now a classical technique, well suited to such problems. In this section we recall the main results of portfolio optimisation via convex duality. See Karatzas [13] for more details and further references.

Consider an agent with a continuous, differentiable, increasing, concave utility function U : R_+ → R. Define the convex conjugate $\widetilde{U} : \mathbb{R}_+ \to \mathbb{R}$ of U by
\[ \widetilde{U}(\eta) := \sup_{x \in \mathbb{R}_+} \left[ U(x) - x\eta \right], \quad \eta > 0. \tag{3.1} \]

Then $\widetilde{U}$ is a decreasing, continuously differentiable, convex function given by
\[ \widetilde{U}(\eta) = U(I(\eta)) - \eta I(\eta), \tag{3.2} \]
where I is the inverse of U′. Differentiating (3.2) gives
\[ \widetilde{U}'(\eta) = -I(\eta). \tag{3.3} \]


We note that the defining duality relation (3.1) is equivalent to the bidual relation
\[ U(x) = \inf_{\eta \in \mathbb{R}_+} \left[ \widetilde{U}(\eta) + x\eta \right], \quad x > 0. \]
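For the power utility U(x) = x^γ/γ used later in the chapter, the conjugate is available in closed form as Ũ(η) = −η^q/q with q = −γ/(1−γ), and the bidual relation can be checked numerically. A small sketch (not from the chapter; parameter values illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

gamma = 0.5
q = -gamma / (1.0 - gamma)

U = lambda x: x**gamma / gamma
U_tilde = lambda eta: -eta**q / q        # closed-form conjugate of power utility

# Bidual relation: U(x) = inf_{eta > 0} [ U_tilde(eta) + x * eta ]
x = 2.0
res = minimize_scalar(lambda eta: U_tilde(eta) + x * eta,
                      bounds=(1e-6, 50.0), method="bounded")
assert abs(res.fun - U(x)) < 1e-6
```

For γ = 1/2 the infimum is attained at η = 1/√2 and equals U(2) = 2√2, which the minimiser recovers.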

We are interested in solving an optimal portfolio problem for an agent in a complete market with a single stock whose price process is a continuous semimartingale. To be precise, on a probability space (Ω, F, P) equipped with a filtration F = (F_t)_{0≤t≤T}, suppose a stock price S = (S_t)_{0≤t≤T} follows
\[ dS_t = \sigma_t S_t (\lambda_t\,dt + dB_t), \]
where σ = (σ_t)_{0≤t≤T} and λ = (λ_t)_{0≤t≤T} are F-adapted processes, and B = (B_t)_{0≤t≤T} is an F-Brownian motion. For simplicity, we take the interest rate to be zero. The wealth process X = (X_t)_{0≤t≤T} associated with a self-financing portfolio involving S is given by
\[ dX_t = \sigma_t \theta_t X_t (\lambda_t\,dt + dB_t), \quad X_0 = x, \]
where the process θ = (θ_t)_{0≤t≤T} represents the proportion of wealth placed in the stock, and constitutes the agent's trading strategy. Define the set A of admissible trading strategies as those satisfying $\int_0^T \sigma_t^2\theta_t^2\,dt < \infty$ a.s. and whose wealth process satisfies X_t ≥ 0 a.s. for all t ∈ [0, T]. The unique martingale measure Q ∼ P on F_T is defined by
\[ \frac{dQ}{dP} = Z_T, \]
where Z = (Z_t)_{0≤t≤T} is the exponential local martingale defined by
\[ Z_t := \mathcal{E}(-\lambda \cdot B)_t, \quad 0 \le t \le T. \]
We assume that λ satisfies the Novikov condition
\[ E\exp\left( \frac{1}{2}\int_0^T \lambda_t^2\,dt \right) < \infty, \]
so that Z is indeed a martingale and Q is indeed a probability measure equivalent to P. Under Q, the process B^Q defined by
\[ B_t^Q := B_t + \int_0^t \lambda_s\,ds, \quad 0 \le t \le T, \]
is a Brownian motion. The Q-dynamics of S, X are
\[ dS_t = \sigma_t S_t\,dB_t^Q, \qquad dX_t = \sigma_t \theta_t X_t\,dB_t^Q. \]
In particular, the solution of the SDE for X, given X_0 = x, is
\[ X_t = x\,\mathcal{E}(\sigma\theta \cdot B^Q)_t, \quad 0 \le t \le T. \]

We assume that
\[ E^Q \exp\left( \frac{1}{2}\int_0^T \sigma_t^2\theta_t^2\,dt \right) < \infty, \]
so that X is a Q-martingale, satisfying $E^Q X_T = x$, or
\[ E\left[ Z_T X_T \right] = x, \tag{3.4} \]
which we shall regard as a constraint on the terminal wealth X_T. This is the foundation of the dual approach to portfolio optimisation, namely to enforce the martingale constraint on the wealth process. The basic portfolio problem (the primal problem) is, given X_0 = x, to maximise expected utility of wealth at time T:
\[ u(x) := \sup_{\theta \in \mathcal{A}} EU(X_T), \tag{3.5} \]
subject to (3.4). The dual value function is $\widetilde{u} : \mathbb{R}_+ \to \mathbb{R}$ defined by
\[ \widetilde{u}(\eta) := E\,\widetilde{U}\!\left( \eta\,\frac{dQ}{dP} \right), \quad \eta > 0. \]

The well-known result on portfolio optimisation via duality for this model is as follows.

Theorem 3.1.
1. The primal and dual value functions u(x) and ũ(η) are conjugate:
\[ \widetilde{u}(\eta) = \sup_{x \in \mathbb{R}_+} \left[ u(x) - x\eta \right], \qquad u(x) = \inf_{\eta > 0} \left[ \widetilde{u}(\eta) + x\eta \right], \]
so that u′(x) = η (equivalently, ũ′(η) = −x);
2. The optimal terminal wealth in (3.5) is X*_T satisfying
\[ U'(X_T^*) = \eta\,\frac{dQ}{dP}, \quad \text{equivalently,} \quad X_T^* = I\!\left( \eta\,\frac{dQ}{dP} \right). \]

A proof of this result can be found in Karatzas [13]. The idea behind the proof is to consider the maximisation of the objective functional EU(X_T) subject to the constraint E[Z_T X_T] = x, via the Lagrangian
\[ L(X_T, \eta) := EU(X_T) + \eta\left( x - E\left[ Z_T X_T \right] \right). \]
The first order condition for an optimum then yields that the optimal terminal wealth is characterised by
\[ U'(X_T^*) = \eta Z_T \iff X_T^* = I(\eta Z_T). \tag{3.6} \]
The value of the multiplier η is needed to fully determine X*_T. We substitute (3.6) into the constraint E[Z_T X*_T] = x, so that η is given by
\[ E\left[ Z_T\,I(\eta Z_T) \right] = x, \]
or, using the definition of ũ(η) and (3.3),
\[ \widetilde{u}'(\eta) = -x. \]
This is precisely the relation we expect to hold when u and ũ are conjugate.
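To make the multiplier equation concrete: with constant λ and power utility U(x) = x^γ/γ (so that I(η) = η^{1/(γ−1)}), Z_T is lognormal and E[Z_T I(ηZ_T)] = η^{1/(γ−1)} E[Z_T^q] with q = γ/(γ−1), so η can be found by root-finding and checked against the closed form. A sketch under these assumptions (parameter values illustrative):

```python
import numpy as np
from scipy.optimize import brentq

gamma, lam, T, x = 0.5, 0.4, 1.0, 1.0
q = gamma / (gamma - 1.0)                    # Z_T * I(eta * Z_T) is proportional to Z_T^q

# Z_T = exp(-lam*B_T - lam^2*T/2) is lognormal, so E[Z_T^q] has a closed form:
EZq = np.exp(0.5 * q * (q - 1.0) * lam**2 * T)

# Solve the constraint E[Z_T I(eta Z_T)] = eta^{1/(gamma-1)} * E[Z_T^q] = x for eta.
constraint = lambda eta: eta**(1.0 / (gamma - 1.0)) * EZq - x
eta_star = brentq(constraint, 1e-8, 1e8)

# closed-form multiplier for comparison
eta_closed = (x / EZq)**(gamma - 1.0)
assert abs(eta_star - eta_closed) < 1e-8
```

The constraint is strictly decreasing in η, so the bracket for `brentq` is easy to choose.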


3.1.1 Duality for incomplete markets

Similar duality theorems have been developed for incomplete market situations, and also when the agent has a random terminal endowment, possibly in the form of a contingent claim. For the incomplete market case, see the seminal paper by Karatzas et al. [14] for markets with continuous price processes, and Kramkov and Schachermayer [15] for the case with general semimartingale price processes. For problems involving a terminal random endowment in the form of an F_T-measurable random variable, contributions have been made by (among others) Hugonnier and Kramkov [9], Owen [27], and by Delbaen et al. [5] for an agent with an exponential utility function. We shall use the results of [5] in Section 5, when we examine the exponential hedging of a contingent claim in a basis risk model. For an incomplete market, in which the set M of martingale measures is no longer a singleton, the significant change is that the dual value function is then defined by
\[ \widetilde{u}(\eta) := \inf_{Q \in \mathcal{M}} E\,\widetilde{U}\!\left( \eta\,\frac{dQ}{dP} \right). \tag{3.7} \]

The form of the duality theorem for an incomplete market is similar to Theorem 3.1, but with the unique martingale measure Q of the complete market replaced by the optimal dual minimiser Q* that achieves the infimum in (3.7). See [13], for instance, for details in an Itô process setting.

3.2 Optimal investment with Gaussian drift process

We wish to apply filtering theory and the martingale approach to portfolio optimisation to the classical optimal portfolio problem of Merton [19, 20], in the case that the agent does not know the drift parameter of the stock. As we shall see, this will involve a portfolio problem in which the market price of risk of the stock is a Gaussian process. Hence we first describe the solution to such a problem.

Suppose a stock price S = (S_t)_{0≤t≤T} follows the process
\[ dS_t = \sigma S_t (\lambda_t\,dt + dB_t), \]
on a filtered probability space (Ω, F, F = (F_t)_{0≤t≤T}, P), with B an F-Brownian motion and λ an F-adapted process following
\[ \lambda_t = \lambda_0 + \int_0^t w_s\,dB_s, \qquad w_t = \frac{w_0}{1 + w_0 t}, \quad 0 \le t \le T, \tag{3.8} \]
for constants λ_0, w_0. The self-financing wealth process X from trading S is given by
\[ dX_t = \sigma\theta_t X_t (\lambda_t\,dt + dB_t), \quad X_0 = x, \tag{3.9} \]
where the trading strategy θ = (θ_t)_{0≤t≤T} is the proportion of wealth invested in stock. We define the set A of admissible strategies as those satisfying $\int_0^T \theta_t^2\,dt < \infty$ almost surely, such that X_t ≥ 0 almost surely for all t ∈ [0, T].


The value function is
\[ u(x) := \sup_{\theta \in \mathcal{A}} E\left[ U(X_T) \mid \mathcal{F}_0 \right], \tag{3.10} \]
where U(x) is the power utility function given by
\[ U(x) = \frac{x^\gamma}{\gamma}, \quad 0 < \gamma < 1. \tag{3.11} \]

Theorem 3.2. Assume that
\[ -1 < w_0 T < \frac{1-\gamma}{\gamma}. \]
Then the value function (3.10) is given by
\[ u(x) = \frac{x^\gamma}{\gamma}\,C^{1-\gamma}, \tag{3.12} \]
where
\[ C = \left( \frac{(1 + w_0 T)^q}{1 + q w_0 T} \right)^{1/2} \exp\left( -\frac{1}{2}\,\frac{q(1-q)\,\lambda_0^2 T}{1 + q w_0 T} \right), \qquad q = -\frac{\gamma}{1-\gamma}, \tag{3.13} \]
and the optimal trading strategy is θ* = (θ*_t)_{0≤t≤T}, given by
\[ \theta_t^* = \frac{1}{\sigma(1-\gamma)} \left( \frac{\lambda_t}{1 + q w_t (T-t)} \right), \quad 0 \le t \le T. \tag{3.14} \]

Proof. With the power utility function (3.11), the convex conjugate (3.1) is $\widetilde{U}(\eta) = -\eta^q/q$, with q = −γ/(1−γ), so the dual value function is
\[ \widetilde{u}(\eta) = E\left[ \widetilde{U}(\eta Z_T) \mid \mathcal{F}_0 \right] = -\frac{\eta^q}{q}\,C, \quad \eta > 0, \tag{3.19} \]
where
\[ C := E\left[ Z_T^q \mid \mathcal{F}_0 \right]. \tag{3.20} \]
From Theorem 3.1, the primal and dual value functions are conjugate, which yields that the primal value function is indeed given by (3.12), with C defined by (3.20). It therefore remains to show that C is indeed equal to the expression in (3.13) and that the optimal strategy is given by (3.14). Once again using Theorem 3.1, the optimal terminal wealth X*_T, attained by adopting the strategy that achieves the supremum in (3.10), is given by
\[ X_T^* = -\widetilde{U}'(u'(x) Z_T). \]

Hence, using the form (3.12) for u, we obtain
\[ X_T^* = \frac{x}{C}\,(Z_T)^{-(1-q)}. \]
The optimal wealth process X* is a (Q, F)-martingale, so
\[ X_t^* = E^Q\left[ X_T^* \mid \mathcal{F}_t \right] = \frac{1}{Z_t} E\left[ Z_T X_T^* \mid \mathcal{F}_t \right] = \frac{x}{C Z_t} E\left[ Z_T^q \mid \mathcal{F}_t \right], \quad 0 \le t \le T. \tag{3.21} \]
So, to compute explicit formulae for C = E[Z_T^q | F_0] and the optimal wealth process (from which the optimal trading strategy will be derived), we need to evaluate the conditional expectation E[Z_T^q | F_t], 0 ≤ t ≤ T. From (3.8), for t ≤ T, and conditional on F_t, λ_T is normally distributed according to
\[ \operatorname{Law}(\lambda_T \mid \mathcal{F}_t) = N(\lambda_t,\ w_t - w_T), \quad 0 \le t \le T. \]

For a normally distributed random variable Y ∼ N(m, s²), we have
\[ E\exp(cY^2) = \frac{1}{\sqrt{1 - 2cs^2}} \exp\left( \frac{c m^2}{1 - 2cs^2} \right), \]
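This Gaussian identity (valid for 2cs² < 1) is easy to sanity-check by Monte Carlo; a quick sketch with illustrative parameters:

```python
import numpy as np

m, s, c = 0.3, 0.8, 0.2            # requires 2 * c * s**2 < 1
rng = np.random.default_rng(42)
Y = rng.normal(m, s, size=2_000_000)

mc = np.exp(c * Y**2).mean()
closed = np.exp(c * m**2 / (1 - 2 * c * s**2)) / np.sqrt(1 - 2 * c * s**2)
assert abs(mc - closed) / closed < 5e-3
```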

382

M. Monoyios

so that, given the explicit expression (3.18) for Z_t, both C and the right-hand side of (3.21) can be computed in closed form. We find that C is indeed given by (3.13). Notice that 1 + qw_0T > 0 and 1 + w_0T > 0 due to the conditions on w_0T, thus the solution is well defined. For the optimal wealth process, we obtain the formula
\[ X_t^* = x \left( \frac{\Psi_t}{\Psi_0} \right)^{1/2} \exp\left( \frac{1}{2}(1-q)(\Phi_t - \Phi_0) \right), \quad 0 \le t \le T, \tag{3.22} \]
where
\[ \Psi_t := \frac{w_t}{1 + q w_t (T-t)}, \qquad \Phi_t := \frac{\lambda_t^2}{w_t \left( 1 + q w_t (T-t) \right)}, \quad 0 \le t \le T. \]

To compute the optimal trading strategy θ*, we apply the Itô formula to (3.22), using the SDE for λ and noting that the derivative of w_t is given by
\[ \frac{dw_t}{dt} = -w_t^2. \]
We compare the coefficient of dB_t in dX*_t with that in (3.9) for the case of the optimal wealth process. This gives (3.14). □

3.2.1 Classical Merton problem

In the limit w_0 → 0, the drift of the stock becomes the constant λ_0, and Theorem 3.2 gives the solution to the classical full information Merton optimal investment problem for a stock with constant market price of risk λ_0 and volatility σ. In this case it is easy to check that the value function (3.12) becomes
\[ u(x) = \frac{x^\gamma}{\gamma} \exp\left( \frac{1}{2}\,\frac{\gamma}{1-\gamma}\,\lambda_0^2 T \right), \]
and the optimal trading strategy (3.14) becomes
\[ \theta_t^* = \frac{\lambda_0}{\sigma(1-\gamma)}, \quad 0 \le t \le T. \]

That is, the Merton investor keeps a constant proportion of wealth invested in the stock, as is well known.
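As a quick check of the constant-proportion solution (a sketch, with illustrative parameter values): for a constant proportion θ the terminal wealth is lognormal, E[U(X_T)] = (x^γ/γ) exp(γ(σθλ_0 − ½σ²θ²(1−γ))T), and this is maximised exactly at θ* = λ_0/(σ(1−γ)):

```python
import numpy as np

x, gamma, sigma, lam0, T = 1.0, 0.5, 0.2, 0.4, 1.0

def expected_utility(theta):
    # E[U(X_T)] for a constant proportion theta; X_T is lognormal
    return (x**gamma / gamma) * np.exp(
        gamma * (sigma * theta * lam0 - 0.5 * sigma**2 * theta**2 * (1 - gamma)) * T)

thetas = np.linspace(0.0, 10.0, 100_001)
theta_star = thetas[np.argmax(expected_utility(thetas))]
assert abs(theta_star - lam0 / (sigma * (1 - gamma))) < 1e-3

# at the optimum, the value matches the Merton value function above
merton_value = (x**gamma / gamma) * np.exp(0.5 * gamma * lam0**2 * T / (1 - gamma))
assert abs(expected_utility(theta_star) - merton_value) < 1e-6
```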

3.3 Merton problem with uncertain drift

We can now solve the Merton problem when the agent has uncertainty over the true value of the drift parameter. Optimal investment models under partial information have been considered by many authors. We refer the reader to Rogers [31], Björk, Davis and Landén [1], and Platen and Runggaldier [30], for example.


A stock price process S = (S_t)_{0≤t≤T} follows
\[ dS_t = \sigma S_t (\lambda\,dt + dB_t), \tag{3.23} \]
on a complete probability space (Ω, F, P) equipped with a filtration F := (F_t)_{0≤t≤T}, with B = (B_t)_{0≤t≤T} an F-Brownian motion. Define the process ξ = (ξ_t)_{0≤t≤T} by
\[ \xi_t := \frac{1}{\sigma} \int_0^t \frac{dS_u}{S_u} = \lambda t + B_t. \tag{3.24} \]
The process ξ will be considered as the observation process in a filtering framework, corresponding to noisy observations of λ, with B representing the noise. In a partial information model with continuous stock price observations, an agent must use $\widehat{\mathbb{F}}$-adapted trading strategies, where $\widehat{\mathbb{F}} := (\widehat{\mathcal{F}}_t)_{0\le t\le T}$ is the observation filtration, defined by
\[ \widehat{\mathcal{F}}_t := \sigma(\xi_s;\ 0 \le s \le t) = \sigma(S_s;\ 0 \le s \le t). \]
Then σ is known from the quadratic variation of S, but λ is an unknown constant, and hence modelled as an F_0-measurable random variable. We assume the distribution of λ is Gaussian, λ ∼ N(λ_0, v_0), independent of B. We are faced with a Kalman–Bucy type filtering problem whose unobservable signal process is the market price of risk λ. The signal process SDE is
\[ d\lambda = 0, \tag{3.25} \]

and the observation process SDE is (3.24). We apply Theorem 2.1 to the signal process λ in (3.25) and observation process ξ in (3.24). Then the optimal filter
\[ \widehat{\lambda}_t := E\left[ \lambda \mid \widehat{\mathcal{F}}_t \right], \quad 0 \le t \le T, \]
satisfies
\[ d\widehat{\lambda}_t = v_t\,d\widehat{B}_t, \quad \widehat{\lambda}_0 = \lambda_0, \tag{3.26} \]
where
\[ v_t := E\left[ (\lambda - \widehat{\lambda}_t)^2 \mid \widehat{\mathcal{F}}_t \right], \quad 0 \le t \le T, \]
is the conditional variance of λ. This satisfies the Riccati equation
\[ \frac{dv_t}{dt} = -v_t^2, \tag{3.27} \]
with initial value v_0, so that
\[ v_t = \frac{v_0}{1 + v_0 t}, \quad 0 \le t \le T. \tag{3.28} \]
The process $\widehat{B}$ is an $\widehat{\mathbb{F}}$-Brownian motion, the innovations process, satisfying
\[ d\widehat{B}_t = d\xi_t - \widehat{\lambda}_t\,dt. \tag{3.29} \]


Using this in (3.26), the optimal filter can also be written in terms of the observable ξ as
\[ \widehat{\lambda}_t = \frac{\lambda_0 + v_0 \xi_t}{1 + v_0 t}, \quad 0 \le t \le T. \tag{3.30} \]
The effect of the filtering is that the agent is now investing in a stock with dynamics given by dS_t = σS_t dξ_t which, using (3.29), becomes
\[ dS_t = \sigma S_t (\widehat{\lambda}_t\,dt + d\widehat{B}_t). \tag{3.31} \]
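Formula (3.30) is exactly the conjugate Gaussian (Bayesian) posterior mean of λ given the observation ξ_t = λt + B_t, which gives a simple consistency check (a sketch, with illustrative numbers):

```python
import numpy as np

lam0, v0, t = 0.1, 0.5, 2.0       # prior N(lam0, v0); observe xi_t = lam*t + B_t
rng = np.random.default_rng(1)
lam = rng.normal(lam0, np.sqrt(v0))
xi_t = lam * t + rng.normal(0.0, np.sqrt(t))

# conjugate Gaussian update: prior precision 1/v0 plus t units of observation precision
post_mean = (lam0 / v0 + xi_t) / (1.0 / v0 + t)

# filter formula (3.30)
lam_hat = (lam0 + v0 * xi_t) / (1.0 + v0 * t)
assert abs(post_mean - lam_hat) < 1e-9
```

The two expressions are algebraically identical (multiply numerator and denominator by v_0).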

Our agent has a power utility function (3.11) and may invest a portion of his wealth in shares and the remaining wealth in a cash account with zero interest rate (for simplicity). The ($\widehat{\mathbb{F}}$-adapted) wealth process X⁰ then follows
\[ dX_t^0 = \sigma\theta_t^0 X_t^0 (\widehat{\lambda}_t\,dt + d\widehat{B}_t), \quad X_0^0 = x, \tag{3.32} \]
where θ⁰_t is the proportion of wealth invested in shares at time t ∈ [0, T], an $\widehat{\mathbb{F}}$-adapted process satisfying $\int_0^T (\theta_t^0)^2\,dt < \infty$ almost surely, and such that X⁰_t ≥ 0 almost surely for all t ∈ [0, T]. Denote by A⁰ the set of such admissible strategies. The objective is to maximise expected utility of terminal wealth over the $\widehat{\mathbb{F}}$-adapted admissible strategies. The value function is
\[ u_0(x) := \sup_{\theta^0 \in \mathcal{A}^0} E\left[ U(X_T^0) \mid \widehat{\mathcal{F}}_0 \right]. \]

This may now be treated as a full information problem, with state dynamics given by (3.32). We see from equations (3.26), (3.28) and (3.31) that the solution to the partial information optimal portfolio problem is given by Theorem 3.2, when we replace the process λ of Theorem 3.2 by $\widehat{\lambda}$, and replace (w_t)_{0≤t≤T} by (v_t)_{0≤t≤T}. We have therefore proved the following result.

Theorem 3.3 (Merton problem with uncertain drift). In a complete market with stock price process S given by (3.23), suppose an agent is restricted to using stock-price-adapted strategies to maximise expected utility of terminal wealth, with power utility function given by (3.11). Suppose further that the agent's prior distribution for λ is Gaussian, according to Law(λ | $\widehat{\mathcal{F}}_0$) = N(λ_0, v_0), and assume that
\[ -1 < v_0 T < \frac{1-\gamma}{\gamma}. \]
The insider's value function is
\[ u_I(x) := \sup_{\theta^I \in \mathcal{A}^I} E\left[ U(X_T^I) \mid \mathcal{F}_0^{\sigma(I)} \right], \quad x > 0, \tag{4.8} \]

where U is the power utility function (3.11). We emphasise that the objective function in (4.8) is conditioned on $\mathcal{F}_0^{\sigma(I)}$. Define the modulated terminal time T_a by
\[ T_a := T + \left( \frac{1-a}{a} \right)^2, \tag{4.9} \]
which will appear in our results. Then the solution to this problem is as follows.

Theorem 4.1. Assume that
\[ \frac{T}{T_a} - 1 < v_0 T < \frac{T}{T_a} + \frac{1-\gamma}{\gamma}. \]

Define the function v^I : [0, T] → R by
\[ v_t^I := \frac{v_0^I}{1 + v_0^I t}, \qquad v_0^I := v_0 - \frac{1}{T_a}, \quad 0 \le t \le T. \tag{4.10} \]


Then the process $\widehat{\lambda}^I$ in (4.6) is given by
\[ \widehat{\lambda}_t^I = \lambda_0 + \frac{I}{a T_a} + \int_0^t v_s^I\,d\widehat{B}_s^I, \quad 0 \le t \le T, \tag{4.11} \]
where I is defined in (4.4) and T_a in (4.9). The value function of the insider with knowledge of I at time zero is given by
\[ u_I(x) = \frac{x^\gamma}{\gamma}\,C_I^{1-\gamma}, \]
where C_I is the $\mathcal{F}_0^I$-measurable random variable given by
\[ C_I = \left( \frac{(1 + v_0^I T)^q}{1 + q v_0^I T} \right)^{1/2} \exp\left( -\frac{1}{2}\,\frac{q(1-q)\,(\widehat{\lambda}_0^I)^2 T}{1 + q v_0^I T} \right), \qquad q = -\frac{\gamma}{1-\gamma}. \tag{4.12} \]
The insider's optimal trading strategy is θ^{I,*} = (θ_t^{I,*})_{0≤t≤T}, given by
\[ \theta_t^{I,*} = \frac{1}{\sigma(1-\gamma)} \left( \frac{\widehat{\lambda}_t^I}{1 + q v_t^I (T-t)} \right), \quad 0 \le t \le T. \]

Of course, the value function (4.12) depends explicitly on I, through its dependence on $\widehat{\lambda}_0^I$. We note the similarity in the structure of the solution to this problem with that of the Merton problem with uncertain drift and no inside information. The function v^I plays a similar role to the function v in the conventional partial information problem. It turns out that v^I is related to (but not identical to) the variance of λ^I conditional on $\widehat{\mathbb{F}}^I$, as we shall see.

4.1 Computing the information drift

The first result we need in order to prove Theorem 4.1 is a lemma that gives an explicit formula for the information drift in (4.2). Recall that we begin with a background filtration F = (F_t)_{0≤t≤T} that includes the Brownian filtration and the sigma-field generated by λ. We enlarge F with the information carried by the random variable I. Define, for a bounded Borel function f : R → R, the process (π_t(f))_{0≤t≤T} as the continuous version of the martingale (E[f(I) | F_t])_{0≤t≤T}:
\[ \pi_t(f) := E\left[ f(I) \mid \mathcal{F}_t \right], \quad 0 \le t \le T. \]
There then exists a predictable family of measures (µ_t(dx))_{0≤t≤T} such that
\[ \pi_t(f) = \int_{\mathbb{R}} f(x)\,\mu_t(dx). \]
For fixed t ∈ [0, T], the measure µ_t(dx) is the conditional distribution of I given F_t. Suppose I is such that there exists a density function g(t, x, y) for each t ∈ [0, T], and such that
\[ \pi_t(f) = \int_{\mathbb{R}} f(x)\,\mu_t(dx) = \int_{\mathbb{R}} f(x)\,g(t, x, B_t)\,dx. \tag{4.13} \]

The enlargement decomposition formula is given by the following lemma.


Lemma 4.2. Suppose that I is a continuous random variable with conditional (on F_t) distribution having density g(t, x, B_t). Assume also that this density satisfies the following conditions:
\[ \int_{\mathbb{R}} |g_y(t, x, y)|\,dx < \infty, \qquad \int_{\mathbb{R}} \left| \frac{g_y(t, x, y)}{g(t, x, y)} \right| dx < \infty, \]
for a.e. t ∈ [0, T] and a.e. y ∈ R. Then the F-Brownian motion B decomposes with respect to the enlarged filtration F^{σ(I)} according to
\[ B_t = B_t^I + \int_0^t \nu_s\,ds, \quad 0 \le t \le T, \]
where B^I is an F^{σ(I)}-Brownian motion. The information drift ν is given by
\[ \nu_t = \frac{g_y(t, I, B_t)}{g(t, I, B_t)}, \quad 0 \le t \le T. \]
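For the Gaussian conditional density that arises in the proof of Theorem 4.1 below, the drift formula ν_t = g_y/g of Lemma 4.2 can be checked by finite differences; a numerical sketch (all values illustrative):

```python
import numpy as np

# conditional density of I given F_t: N(a*y, a^2*(Ta - t)) as a function of x,
# where y stands for B_t (the Gaussian example from the proof of Theorem 4.1)
a, Ta, t = 0.6, 1.4, 0.5

def g(x, y):
    var = a**2 * (Ta - t)
    return np.exp(-0.5 * (x - a * y)**2 / var) / np.sqrt(2 * np.pi * var)

x, y, h = 0.3, -0.2, 1e-6
g_y = (g(x, y + h) - g(x, y - h)) / (2 * h)   # finite-difference dg/dy

# Lemma 4.2 drift g_y/g; for this Gaussian case it equals (x - a*y)/(a*(Ta - t))
assert abs(g_y / g(x, y) - (x - a * y) / (a * (Ta - t))) < 1e-4
```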

Proof. Let f be a test function. Introduce the F-predictable process (π̇_t(f))_{0≤t≤T} such that
\[ \pi_t(f) = Ef(I) + \int_0^t \dot{\pi}_s(f)\,dB_s, \]
which exists by the representation property of Brownian martingales as stochastic integrals with respect to B. There exists a predictable family of measures (µ̇_t(dx))_{0≤t≤T} such that
\[ \dot{\pi}_t(f) = \int_{\mathbb{R}} f(x)\,\dot{\mu}_t(dx), \]
and such that for each t ∈ [0, T] the measure µ̇_t(dx) is absolutely continuous with respect to µ_t(dx). Define α(t, x) by
\[ \dot{\mu}_t(dx) = \alpha(t, x)\,\mu_t(dx). \]
Now suppose we have a continuous F-martingale M given by
\[ M_t = \int_0^t m_s\,dB_s, \quad 0 \le t \le T. \]
By Theorem 1.6 in Mansuy and Yor [18], there exists an F^{σ(I)}-local martingale M^I such that
\[ M_t = M_t^I + \int_0^t \alpha(s, I)\,d[M, B]_s, \]
provided that, almost surely,
\[ \int_0^t |\alpha(s, I)|\,d[M, B]_s < \infty. \]
In particular, if $\int_0^t |\alpha(s, I)|\,ds < \infty$ almost surely, then B decomposes as
\[ B_t = B_t^I + \int_0^t \alpha(s, I)\,ds, \quad 0 \le t \le T, \]
with B^I an F^{σ(I)}-Brownian motion.


From the definition of α(t, x) we have
\[ \dot{\pi}_t(f) = \int_{\mathbb{R}} f(x)\,\alpha(t, x)\,\mu_t(dx) = \int_{\mathbb{R}} f(x)\,\alpha(t, x)\,g(t, x, B_t)\,dx. \]
Hence,
\[ d\pi_t(f) = \dot{\pi}_t(f)\,dB_t = \left( \int_{\mathbb{R}} f(x)\,\alpha(t, x)\,g(t, x, B_t)\,dx \right) dB_t, \]
so that
\[ d[\pi(f), M]_t = \left( \int_{\mathbb{R}} f(x)\,\alpha(t, x)\,g(t, x, B_t)\,dx \right) d[B, M]_t. \tag{4.14} \]
But from the defining representation (4.13), the right-hand side of which is a smooth function of B_t, the Itô formula gives
\[ d[\pi(f), M]_t = \left( \int_{\mathbb{R}} f(x)\,g_y(t, x, B_t)\,dx \right) d[B, M]_t, \tag{4.15} \]
and comparing (4.14) with (4.15) yields the result. □

Proof of Theorem 4.1. For I given by (4.4), the conditional distribution of I given F_t, for t ≤ T, is
\[ N\!\left( aB_t,\ a^2(T-t) + (1-a)^2 \right) = N\!\left( aB_t,\ a^2(T_a - t) \right), \]
where T_a is defined in (4.9). Hence the conditional density is
\[ g(t, x, B_t) = \frac{1}{a\sqrt{2\pi(T_a - t)}} \exp\left( -\frac{1}{2}\,\frac{(x - aB_t)^2}{a^2(T_a - t)} \right). \]
So by Lemma 4.2, the information drift is
\[ \nu_t = \frac{I - aB_t}{a(T_a - t)}, \quad 0 \le t \le T. \tag{4.16} \]
Using the information drift in (4.16) we write the stock price SDE (3.23) in terms of F^{σ(I)}-adapted processes, to obtain (4.3), where the F^{σ(I)}-adapted market price of risk λ^I is given by
\[ \lambda_t^I := \lambda + \nu_t = \lambda + \frac{I - aB_t}{a(T_a - t)} =: h(t, B_t), \quad 0 \le t \le T, \]
and where h : [0, T] × R → R is defined by
\[ h(t, x) := \lambda + \frac{I - ax}{a(T_a - t)}. \]
Applying the Itô formula and using dB_t = ν_t dt + dB_t^I, we obtain
\[ d\lambda_t^I = -\frac{1}{T_a - t}\,dB_t^I, \quad \lambda_0^I = \lambda + \frac{I}{a T_a}. \tag{4.17} \]
With ξ being the returns process in (3.24), we have
\[ d\xi_t = \lambda_t^I\,dt + dB_t^I. \tag{4.18} \]
We now regard λ as an unknown constant, and hence a random variable, whose distribution conditional on $\mathcal{F}_0^{\sigma(I)}$ is given by (4.5). Then we regard (λ_t^I)_{0≤t≤T} as an unobservable signal process following (4.17), and ξ as an observation process following (4.18), in a filtering framework to estimate λ_t^I conditional on $\widehat{\mathcal{F}}_t^{\sigma(I)}$. Using (4.5), we can write down the initial distribution of λ_0^I given $\widehat{\mathcal{F}}_0^{\sigma(I)}$:
\[ \operatorname{Law}\!\left( \lambda_0^I \mid \widehat{\mathcal{F}}_0^{\sigma(I)} \right) = \operatorname{Law}\!\left( \lambda + \frac{I}{a T_a} \,\Big|\, \widehat{\mathcal{F}}_0^{\sigma(I)} \right) = N\!\left( \lambda_0 + \frac{I}{a T_a},\ v_0 \right). \]
This defines the prior distribution of the signal process λ^I. Of course, since I is $\mathcal{F}_0^{\sigma(I)}$-measurable, it does not contribute to the initial variance. The Kalman–Bucy filter, Theorem 2.1, is directly applicable, and yields that the optimal filter
\[ \widehat{\lambda}_t^I := E\left[ \lambda_t^I \mid \widehat{\mathcal{F}}_t^{\sigma(I)} \right], \quad 0 \le t \le T, \]
satisfies the SDE
\[ d\widehat{\lambda}_t^I = \left( V_t^I - \frac{1}{T_a - t} \right) d\widehat{B}_t^I, \quad \widehat{\lambda}_0^I = \lambda_0 + \frac{I}{a T_a}, \tag{4.19} \]
where $\widehat{B}^I$ is the innovations process, an $\widehat{\mathbb{F}}^{\sigma(I)}$-Brownian motion defined by
\[ \widehat{B}_t^I := \xi_t - \int_0^t \widehat{\lambda}_s^I\,ds, \quad 0 \le t \le T, \tag{4.20} \]
and V_t^I is the conditional variance of λ_t^I:
\[ V_t^I := E\left[ \left( \lambda_t^I - \widehat{\lambda}_t^I \right)^2 \,\Big|\, \widehat{\mathcal{F}}_t^{\sigma(I)} \right], \quad 0 \le t \le T, \]
which satisfies
\[ \frac{dV_t^I}{dt} = \frac{2}{T_a - t}\,V_t^I - \left( V_t^I \right)^2, \quad V_0^I = v_0. \]
If we define
\[ v_t^I := V_t^I - \frac{1}{T_a - t}, \quad 0 \le t \le T, \]
then (4.19) becomes
\[ d\widehat{\lambda}_t^I = v_t^I\,d\widehat{B}_t^I, \quad \widehat{\lambda}_0^I = \lambda_0 + \frac{I}{a T_a}. \tag{4.21} \]
Note that (4.21) is of the same form as (3.8) with w_t replaced by v_t^I and with B_t replaced by $\widehat{B}_t^I$. Indeed, v_t^I plays the role of an 'effective variance', satisfying the Riccati equation (3.27), with a modified initial condition:
\[ \frac{dv_t^I}{dt} = -\left( v_t^I \right)^2, \quad v_0^I = v_0 - \frac{1}{T_a}. \]

The solution to this equation is then given by (4.10), and the solution to (4.21) is then (4.11). Using (4.20) in the SDE (4.21), the optimal filter may also be written explicitly in terms of the observable ξ, as
\[ \widehat{\lambda}_t^I = \frac{\widehat{\lambda}_0^I + v_0^I \xi_t}{1 + v_0^I t}, \quad 0 \le t \le T. \]
This is of the same form as (3.30), with λ_0 replaced by $\widehat{\lambda}_0^I$ and v_0 replaced by v_0^I. The effect of the filtering is that the agent is now investing in a stock with dynamics given by dS_t = σS_t dξ_t which, using (4.20), becomes (4.7). The $\widehat{\mathbb{F}}^{\sigma(I)}$-adapted wealth process X^I then follows
\[ dX_t^I = \sigma\theta_t^I X_t^I (\widehat{\lambda}_t^I\,dt + d\widehat{B}_t^I), \quad X_0^I = x, \]
where θ^I is the $\widehat{\mathbb{F}}^{\sigma(I)}$-adapted trading strategy. The theorem then follows immediately from making the replacements
\[ \lambda \to \widehat{\lambda}^I, \qquad w \to v^I, \]
in Theorem 3.2. □



It can be shown that the additional information increases the insider’s utility over the regular agent: see [2] for this and other effects of the inside information.
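The effect of the inside information on the filter is easy to quantify: the insider starts from the strictly smaller effective variance v_0^I = v_0 − 1/T_a < v_0, so v_t^I < v_t for all t. A small numeric sketch (parameter values illustrative):

```python
import numpy as np

T, v0, a = 1.0, 2.0, 0.8
Ta = T + ((1 - a) / a) ** 2          # modulated terminal time (4.9)
v0I = v0 - 1.0 / Ta                  # insider's effective initial variance (4.10)

t = np.linspace(0.0, T, 101)
v = v0 / (1 + v0 * t)                # regular agent's conditional variance (3.28)
vI = v0I / (1 + v0I * t)             # insider's effective variance (4.10)

assert v0I < v0
assert np.all(vI < v)                # the insider is strictly less uncertain
```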

5 Optimal hedging of basis risk with partial information

In this section we analyse the hedging of a contingent claim in a basis risk model, a tractable example of an incomplete market, first under a full information assumption, and then under a partial information scenario. Basis risk models involve a claim on a non-traded asset, which is hedged using a correlated traded asset. They were first studied systematically by Davis [4] (whose preprint on the subject originated in 2000) who used a dual approach to derive approximations for indifference prices. Subsequently, Henderson [8], and Musiela and Zariphopoulou [26] derived an expectation representation (given in Theorem 5.3) for the value function of the utility maximisation problem involving a random endowment of the claim. This was used by Monoyios [21] to derive accurate analytic approximations for indifference prices and hedging strategies. In simulation experiments, Monoyios showed that exponential indifference hedging could outperform the BS approximation of taking the traded asset as a good proxy for the non-traded asset. Unfortunately, the utility-based hedge requires knowledge of the drift parameters of the assets. These are hard to estimate accurately, as shown by Rogers [31] and Monoyios [22], who showed that drift parameter mis-estimation could ruin the effectiveness of the optimal hedge. Finally, in [23, 25] Monoyios developed a filtering algorithm to deal with the drift parameter uncertainty, and showed that with this added ingredient, utility-based hedging was indeed effective, even in the face of parameter uncertainty. We shall describe some of these results in this section.


5.1 Basis risk model: full information case

In a full information model, the setting is a filtered probability space (Ω, F, F := (F_t)_{0≤t≤T}, P), where the filtration F is the P-augmentation of that generated by a two-dimensional Brownian motion (B, B^⊥). A traded stock price S := (S_t)_{0≤t≤T} follows a log-Brownian process given by
\[ dS_t = \sigma S_t (\lambda\,dt + dB_t) =: \sigma S_t\,d\xi_t, \tag{5.1} \]
where σ > 0 and λ are known constants. For simplicity, the interest rate is taken to be zero. The process ξ in (5.1), defined by dξ_t := λdt + dB_t, will subsequently play a role as one component of an observation process in a partial information model, when λ will be treated as a random variable rather than as a known constant. A non-traded asset price Y := (Y_t)_{0≤t≤T} follows the correlated log-Brownian motion
\[ dY_t = \beta Y_t (\theta\,dt + dW_t) =: \beta Y_t\,d\zeta_t, \tag{5.2} \]
with β > 0 and θ known constants. The Brownian motion W is correlated with B according to
\[ [B, W]_t = \rho t, \qquad W = \rho B + \sqrt{1 - \rho^2}\,B^\perp, \quad \rho \in [-1, 1], \]
and the process ζ, given by dζ_t := θdt + dW_t, will act as the second component of an observation process in a partial information model, when θ will be considered a random variable. We shall henceforth refer to the Sharpe ratios λ (respectively, θ) as the drift of S (respectively, Y), for brevity. A European contingent claim pays the non-negative random variable h(Y_T) at time T, where h : R_+ → R_+. In what follows we shall consider utility maximisation problems with the additional random terminal endowment nh(Y_T), for n ∈ R. We assume the random endowment nh(Y_T) is continuous and bounded below, with finite expectation under any martingale measure. An agent may trade the stock in a self-financing fashion, leading to the portfolio wealth process X = (X_t)_{0≤t≤T} satisfying
\[ dX_t = \sigma\pi_t (\lambda\,dt + dB_t), \]
where π := (π_t)_{0≤t≤T} is the wealth in the stock, representing the agent's trading strategy, satisfying $\int_0^T \pi_t^2\,dt < \infty$ almost surely.

5.1.1 Perfect correlation case

This market is incomplete for |ρ| ≠ 1. If the correlation is perfect, however, the market becomes complete and perfect hedging is possible, as we now show. The minimal martingale measure Q^M has density process with respect to P given by
\[ \left. \frac{dQ^M}{dP} \right|_{\mathcal{F}_t} = \mathcal{E}(-\lambda \cdot B)_t, \quad 0 \le t \le T. \]


Under Q^M, (S, Y) follow
\[ dS_t = \sigma S_t\,dB_t^{Q^M}, \qquad dY_t = \beta(\theta - \rho\lambda)\,Y_t\,dt + \beta Y_t\,dW_t^{Q^M}, \tag{5.3} \]
where $B^{Q^M}$, $W^{Q^M}$ are correlated Brownian motions under Q^M. The stock price S is a local Q^M-martingale, but this is not the case for the non-traded asset, unless we have the perfect correlation case, ρ = 1. In this case Y is effectively a traded asset (as Y_t is then a deterministic function of S_t), so the Q^M-drift of Y vanishes. Therefore, given σ, β, when ρ = 1 the Sharpe ratios λ, θ are equal: θ = λ.

In this case the market becomes complete, and perfect hedging is possible. It is easy to show that with ρ = 1, so that W = B, we have
\[ Y_t = Y_0 \left( \frac{S_t}{S_0} \right)^{\beta/\sigma} e^{ct}, \qquad c = \frac{1}{2}\sigma\beta\left( 1 - \frac{\beta}{\sigma} \right). \]
Let the claim price process be v(t, Y_t), 0 ≤ t ≤ T, where v : [0, T] × R_+ → R_+ is smooth enough to apply the Itô formula, so that
\[ dv(t, Y_t) = \left[ v_t(t, Y_t) + \mathcal{A}^Y v(t, Y_t) \right] dt + \beta Y_t v_y(t, Y_t)\,dW_t, \]
where $\mathcal{A}^Y$ is the generator of the process Y in (5.2). The replication conditions are
\[ X_t = v(t, Y_t), \quad 0 \le t \le T, \qquad dX_t = dv(t, Y_t). \]

Standard arguments then show that to perfectly hedge the claim one must hold Δ_t shares of S at t ∈ [0, T], given by
\[ \Delta_t = \frac{\beta}{\sigma} \frac{Y_t}{S_t} \frac{\partial v}{\partial y}(t, Y_t), \quad 0 \le t \le T, \tag{5.4} \]
and the claim pricing function v(t, y) satisfies
\[ v_t(t, y) + \beta(\theta - \lambda)\,y\,v_y(t, y) + \frac{1}{2}\beta^2 y^2 v_{yy}(t, y) = 0, \quad v(T, y) = h(y). \]
But with ρ = 1, θ = λ, so we get the BS partial differential equation (PDE), and hence
\[ v(t, Y_t) = \mathrm{BS}(t, Y_t), \quad 0 \le t \le T, \]
where BS(t, y) denotes the BS option pricing formula at time t, with underlying asset price y. Therefore, a position in n claims is hedged by $\Delta_t^{(\mathrm{BS})}$ units of S at t ∈ [0, T], where
\[ \Delta_t^{(\mathrm{BS})} = -n\,\frac{\beta}{\sigma} \frac{Y_t}{S_t} \frac{\partial}{\partial y} \mathrm{BS}(t, Y_t; \beta), \quad 0 \le t \le T, \tag{5.5} \]

and where BS(t, y; β) denotes the BS formula at time t for underlying asset price y and volatility β . From our perspective, the salient feature of (5.5) is that the perfect hedge does not require knowledge of the values of the drifts λ, θ.
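For a call payoff h(y) = (y − K)^+ (an illustrative example; all parameter values hypothetical), the proxy hedge (5.5) uses the BS delta evaluated at the non-traded asset price with volatility β. A sketch with a finite-difference check of ∂BS/∂y:

```python
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(t, y, K, beta, T):
    """Black-Scholes call price with zero interest rate and volatility beta."""
    tau = T - t
    d1 = (log(y / K) + 0.5 * beta**2 * tau) / (beta * sqrt(tau))
    return y * N(d1) - K * N(d1 - beta * sqrt(tau))

def proxy_hedge(t, Y, S, K, sigma, beta, T, n=1):
    """Stock position (5.5) hedging a position in n call claims on Y, using S."""
    tau = T - t
    d1 = (log(Y / K) + 0.5 * beta**2 * tau) / (beta * sqrt(tau))
    dBS_dy = N(d1)                       # BS delta with respect to y
    return -n * (beta / sigma) * (Y / S) * dBS_dy

# finite-difference check that N(d1) is indeed dBS/dy
t, Y, S, K, sigma, beta, T = 0.0, 100.0, 50.0, 100.0, 0.3, 0.25, 1.0
h = 1e-4
fd = (bs_call(t, Y + h, K, beta, T) - bs_call(t, Y - h, K, beta, T)) / (2 * h)
d1 = (log(Y / K) + 0.5 * beta**2 * T) / (beta * sqrt(T))
assert abs(fd - N(d1)) < 1e-6
```

Hedging a long position (n = 1) gives a short stock position, as expected from the minus sign in (5.5).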


5.1.2 Incomplete case

Now suppose the correlation is not perfect, so that the market is incomplete. We embed the problem in a utility maximisation framework in a manner that is by now classical. Let the agent have risk preferences expressed via the exponential utility function
\[ U(x) = -\exp(-\alpha x), \quad x \in \mathbb{R},\ \alpha > 0. \]
The agent maximises expected utility of terminal wealth at time T, with a random endowment of n units of claim payoff:
\[ J(t, x, y; \pi) = E\left[ U(X_T + nh(Y_T)) \mid X_t = x, Y_t = y \right]. \]
The value function is u^{(n)}(t, x, y) ≡ u(t, x, y), defined by
\[ u(t, x, y) := \sup_{\pi \in \mathcal{A}} J(t, x, y; \pi), \qquad u(T, x, y) = U(x + nh(y)), \tag{5.6} \]
where A is the set of admissible strategies. This is composed of S-integrable processes whose gains process is a Q-martingale for any martingale measure with finite relative entropy with respect to P. Denote the optimal trading strategy that achieves the supremum in (5.6) by π* ≡ π^{*,n}, and denote the optimal wealth process by X* ≡ X^{*,n}. The following definitions of utility-based price and hedging strategy are now standard.

Definition 5.1 (Indifference price). The indifference price per claim at t ∈ [0, T], given X_t = x, Y_t = y, is p(t, x, y) ≡ p^{(n)}(t, x, y), defined by
\[ u^{(n)}(t, x - np^{(n)}(t, x, y), y) = u^{(0)}(t, x, y). \]
We allow for possible dependence on t, x, y of p^{(n)} in the above definition, but with exponential preferences it turns out that there is no dependence on x.

Definition 5.2 (Optimal hedging strategy). The optimal hedging strategy for n units of the claim is π^H := (π_t^H)_{0≤t≤T} given by
\[ \pi_t^H := \pi_t^{*,n} - \pi_t^{*,0}, \quad 0 \le t \le T. \]

We have the following representation for the value function and indifference price. Theorem 5.3. The value function u(n) and indifference price p(n) , given Xt = x, Yt = y for t ∈ [0, T ], are given by u(n) (t, x, y) F (t, y) p(n) (t, y)

1

2

1/(1−ρ2 )

, = −e−αx− 2 λ (T −t) [F (t, Yt )]

  M  = EQ exp −α(1 − ρ2 )nh(YT )  Yt = y , = −

1 log F (t, y). α(1 − ρ2 )n

(5.7) (5.8)

396

M. Monoyios

Proof. The Hamilton--Jacobi--Bellman (HJB) equation for the value function $u^{(n)}$ is
\[
u^{(n)}_t + \sigma \sup_{\pi}\left[\lambda \pi u^{(n)}_x + \frac{1}{2}\sigma \pi^2 u^{(n)}_{xx} + \rho\beta\pi y u^{(n)}_{xy}\right] + \mathcal{A}^Y u^{(n)} = 0.
\]
Performing the maximisation gives the optimal feedback control as $\Pi^{*,n}(t,x,y)$, where the function $\Pi^{*,n}: [0,T] \times \mathbb{R} \times \mathbb{R}_+ \to \mathbb{R}$ is given by
\[
\Pi^{*,n}(t,x,y) := -\frac{\lambda u^{(n)}_x + \rho\beta y u^{(n)}_{xy}}{\sigma u^{(n)}_{xx}}. \tag{5.9}
\]
The optimal trading strategy $\pi^{*,n}$ is then given by $\pi^{*,n}_t = \Pi^{*,n}(t, X^*_t, Y_t)$. Substituting the optimal Markov control back into the Bellman equation gives the HJB PDE
\[
u^{(n)}_t + \mathcal{A}^Y u^{(n)} - \frac{\left(\lambda u^{(n)}_x + \rho\beta y u^{(n)}_{xy}\right)^2}{2u^{(n)}_{xx}} = 0.
\]
The function $F(t,y)$ in (5.7) satisfies the linear PDE
\[
F_t + \beta(\theta - \rho\lambda)y F_y + \frac{1}{2}\beta^2 y^2 F_{yy} = 0, \qquad F(T,y) = \exp\left(-\alpha(1-\rho^2)nh(y)\right),
\]
by virtue of the Feynman--Kac theorem. It is then straightforward to verify that $u^{(n)}$ as given in the theorem solves the above HJB equation, and the definition of the indifference price gives the formula (5.8). □

This leads to the following representation for the optimal hedging strategy.

Theorem 5.4. The optimal hedging strategy for a position in $n$ claims is to hold $\Delta^H_t$ shares at $t \in [0,T]$, given by
\[
\Delta^H_t = -n\rho \frac{\beta}{\sigma}\frac{Y_t}{S_t}\frac{\partial p^{(n)}}{\partial y}(t, Y_t), \quad 0 \le t \le T. \tag{5.10}
\]
Proof. From Theorem 5.3 the value function may be written in terms of the indifference price as
\[
u^{(n)}(t,x,y) = -\exp\left(-\alpha\left(x + np^{(n)}(t,y)\right) - \frac{1}{2}\lambda^2(T-t)\right). \tag{5.11}
\]
The optimal trading strategy is $\pi^{*,n}_t = \Pi^{*,n}(t, X^*_t, Y_t)$, where the function $\Pi^{*,n}(t,x,y)$ is given in (5.9), in terms of derivatives of the value function. Using (5.11) we obtain
\[
\pi^{*,n}_t = \frac{\lambda - \rho\alpha n \beta Y_t p^{(n)}_y(t, Y_t)}{\alpha\sigma}, \quad 0 \le t \le T.
\]
The optimal trading strategy for the problem with no claims, $\pi^{*,0}$, is obtained by setting $n = 0$ in this result, and applying Definition 5.2 then proves the theorem. □
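Theorem 5.3 lends itself to a Monte Carlo computation: under $Q^M$ the nontraded asset is a geometric Brownian motion with drift $\beta(\theta - \rho\lambda)$ (this is the Feynman--Kac representation of the PDE for $F$), so $F(t,y)$ and hence $p^{(n)}$ via (5.8) can be estimated by simulation. The following sketch is ours, with illustrative parameter values, for a put payoff:

```python
import math, random

def indifference_price_mc(t, y, T, beta, theta, lam, rho, alpha, n, h,
                          n_paths=100_000, seed=1):
    """Monte Carlo for F(t,y) = E^{Q^M}[exp(-alpha(1-rho^2) n h(Y_T)) | Y_t = y]
    and the indifference price p^{(n)} = -log F / (alpha (1-rho^2) n),
    per (5.7)-(5.8); Y is GBM with drift beta*(theta - rho*lam) under Q^M."""
    rng = random.Random(seed)
    tau = T - t
    drift = (beta * (theta - rho * lam) - 0.5 * beta**2) * tau
    vol = beta * math.sqrt(tau)
    acc = 0.0
    for _ in range(n_paths):
        YT = y * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        acc += math.exp(-alpha * (1 - rho**2) * n * h(YT))
    F = acc / n_paths
    return -math.log(F) / (alpha * (1 - rho**2) * n)

# Example: put payoff h(y) = max(K - y, 0), agent long n = 1 claim
K = 100.0
p = indifference_price_mc(0.0, 100.0, 1.0, beta=0.3, theta=0.1, lam=0.1,
                          rho=0.85, alpha=0.1, n=1.0,
                          h=lambda y: max(K - y, 0.0))
```

Since the agent here receives the claim, the price is a buyer's (bid) indifference price and lies below the $Q^M$-expectation of the payoff, approaching it as $\alpha n \to 0$.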


Notice that, given the PDE satisfied by $F$, the indifference pricing function $p(t,y) \equiv p^{(n)}(t,y)$ satisfies
\[
p_t + \beta(\theta - \rho\lambda)y p_y + \frac{1}{2}\beta^2 y^2 p_{yy} - \frac{1}{2}\alpha n(1-\rho^2)\beta^2 y^2 (p_y)^2 = 0.
\]
So for $\rho = 1$, in which case $\theta = \lambda$, the claim price satisfies the BS PDE and we recover the perfect delta hedge (5.4). In [21, 22] the hedging strategy in (5.10) is shown to be superior to the BS-style hedge (5.5), in terms of the terminal hedging error distribution produced by selling the claim at the appropriate price (the indifference price or the BS price) and investing the proceeds in the corresponding hedging portfolio. But from (5.3) we see that the exponential hedge requires knowledge of $\lambda, \theta$, which are impossible to estimate accurately (see Rogers [31] or Monoyios [22]). This can ruin the effectiveness of indifference hedging, as shown in [22]. It is therefore dubious to draw any meaningful conclusions on the effectiveness of utility-based hedging in this model without relaxing the assumption that the agent knows the true values of the drifts.

5.2 Partial information case

Now we assume the hedger does not know the values of the return parameters $\lambda, \theta$, so these are considered to be random variables. Equivalently, the agent cannot observe the Brownian motions $B, W$ driving the asset prices, and is therefore required to use strategies adapted to the observation filtration $\widehat{\mathbb{F}}$ generated by asset returns.

5.2.1 Choice of prior

We take the two-dimensional random variable
\[
\Xi := \begin{pmatrix} \lambda \\ \theta \end{pmatrix}
\]
to have a Gaussian distribution, which will be updated as the agent attempts to filter the values of the drifts from asset observations during the hedging interval $[0,T]$. The choice of Gaussian prior is motivated by the idea that the agent has some past observations of $S, Y$ before time zero, uses these to obtain classical point estimates of the drifts, and the joint distribution of the estimators is used as the prior in a Bayesian framework. Ultimately, in order to obtain explicit solutions, we shall assume that the agent uses observations before time zero of equal length for both assets.

In setting the prior this way, we make the approximation that the asset price observations are continuous, so that $\sigma, \beta, \rho$ are known from the quadratic variation and covariation of $S, Y$. This is because our goal here is to focus on the severest problem of drift parameter uncertainty.

So consider, for the moment, an observer with data for $S$ over a time interval of length $t_S$, and for $Y$ over a window of length $t_Y$, who considers $\lambda$ and $\theta$ as constants, and records the returns $\mathrm{d}S_t/S_t$ and $\mathrm{d}Y_t/Y_t$ in order to estimate the values of the drifts.


An unbiased estimator of $\lambda$ is $\bar{\lambda}(t_S)$, given by
\[
\bar{\lambda}(t_S) = \frac{1}{t_S}\int_{t_0}^{t_0+t_S}\frac{\mathrm{d}S_u}{\sigma S_u} = \lambda + \frac{B_{t_0+t_S} - B_{t_0}}{t_S} \sim N\!\left(\lambda, \frac{1}{t_S}\right).
\]
The estimator of $\lambda$ is normally distributed, with a similar computation for the estimator of $\theta$. The estimator $(\bar{\lambda}, \bar{\theta})$ of the (supposed constant) vector $(\lambda, \theta)$ is bivariate normal. Defining $v_0 := 1/t_S$ and $w_0 := 1/t_Y$, it is easily checked that
\[
\begin{pmatrix} \bar{\lambda} \\ \bar{\theta} \end{pmatrix} \sim N(M, C_0),
\]
where the mean vector $M$ and covariance matrix $C_0$ are given by
\[
M = \begin{pmatrix} \lambda \\ \theta \end{pmatrix}, \qquad
C_0 = \begin{pmatrix} v_0 & \rho \min(v_0, w_0) \\ \rho \min(v_0, w_0) & w_0 \end{pmatrix}. \tag{5.12}
\]

With this in mind, we shall suppose that $(\lambda, \theta)$, now considered as a random variable, is bivariate normal according to
\[
\lambda \sim N(\lambda_0, v_0), \qquad \theta \sim N(\theta_0, w_0), \qquad \mathrm{cov}(\lambda, \theta) = c_0 := \rho \min(v_0, w_0),
\]
for some chosen values $\lambda_0, \theta_0$, typically obtained from past data prior to time zero. This distribution will be updated via subsequent observations of
\[
\xi_t := \frac{1}{\sigma}\int_0^t \frac{\mathrm{d}S_u}{S_u} = \lambda t + B_t, \qquad
\zeta_t := \frac{1}{\beta}\int_0^t \frac{\mathrm{d}Y_u}{Y_u} = \theta t + W_t,
\]
over the hedging interval $[0,T]$.

5.2.2 Two-dimensional Kalman--Bucy filter

We are firmly within the realm of a two-dimensional Kalman filtering problem, which we treat as follows. Define the observation filtration by
\[
\widehat{\mathbb{F}} := (\widehat{\mathcal{F}}_t)_{0 \le t \le T}, \qquad \widehat{\mathcal{F}}_t = \sigma(\xi_s, \zeta_s;\ 0 \le s \le t).
\]
The observation process, $\Lambda$, and the unobservable signal process, $\Xi$, are defined by
\[
\Lambda := \begin{pmatrix} \xi_t \\ \zeta_t \end{pmatrix}_{0 \le t \le T}, \qquad
\Xi := \begin{pmatrix} \lambda \\ \theta \end{pmatrix},
\]
satisfying the stochastic differential equations
\[
\mathrm{d}\Lambda_t = \Xi\,\mathrm{d}t + D\,\mathrm{d}\begin{pmatrix} B_t \\ B^{\perp}_t \end{pmatrix}, \qquad
D := \begin{pmatrix} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \end{pmatrix}, \qquad
\mathrm{d}\Xi = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
\]


The optimal filter is $\widehat{\Xi}_t := E[\Xi \mid \widehat{\mathcal{F}}_t]$, $0 \le t \le T$, a two-dimensional process defining the best estimates of $\lambda$ and $\theta$ given observations up to time $t \in [0,T]$:
\[
\widehat{\Xi}_t \equiv \begin{pmatrix} \widehat{\lambda}_t \\ \widehat{\theta}_t \end{pmatrix} := \begin{pmatrix} E[\lambda \mid \widehat{\mathcal{F}}_t] \\ E[\theta \mid \widehat{\mathcal{F}}_t] \end{pmatrix}, \qquad
\widehat{\Xi}_0 = \begin{pmatrix} \lambda_0 \\ \theta_0 \end{pmatrix}. \tag{5.13}
\]
The solution to this filtering problem converts the partial information model to a full information model with random drifts, given in the following proposition. To avoid a proliferation of symbols, we abuse notation and write $\widehat{\lambda}_t \equiv \widehat{\lambda}(t, S_t)$ and $\widehat{\theta}_t \equiv \widehat{\theta}(t, Y_t)$ for processes $\widehat{\lambda}, \widehat{\theta}$ that will turn out to be functions of time and current asset price.

Proposition 5.5. The partial information model is equivalent to a full information model in which the asset price dynamics in the observation filtration $\widehat{\mathbb{F}}$ are
\[
\mathrm{d}S_t = \sigma S_t (\widehat{\lambda}_t\,\mathrm{d}t + \mathrm{d}\widehat{B}_t), \tag{5.14}
\]
\[
\mathrm{d}Y_t = \beta Y_t (\widehat{\theta}_t\,\mathrm{d}t + \mathrm{d}\widehat{W}_t), \tag{5.15}
\]
where $\widehat{B}, \widehat{W}$ are $\widehat{\mathbb{F}}$-Brownian motions with correlation $\rho$, and the random drifts $\widehat{\lambda}, \widehat{\theta}$ are $\widehat{\mathbb{F}}$-adapted processes.

If $\lambda$ and $\theta$ have common initial variance $v_0$, then $\widehat{\lambda}, \widehat{\theta}$ are given by
\[
\begin{pmatrix} \widehat{\lambda}_t \\ \widehat{\theta}_t \end{pmatrix} = \begin{pmatrix} \lambda_0 \\ \theta_0 \end{pmatrix} + \int_0^t v_s \begin{pmatrix} \mathrm{d}\widehat{B}_s \\ \mathrm{d}\widehat{W}_s \end{pmatrix}, \quad 0 \le t \le T, \tag{5.16}
\]
where $(v_t)_{0 \le t \le T}$ is the deterministic function
\[
v_t := \frac{v_0}{1 + v_0 t}, \quad 0 \le t \le T.
\]
Equivalently, $\widehat{\lambda}, \widehat{\theta}$ are given as functions of time and current asset price by
\[
\widehat{\lambda}_t = \widehat{\lambda}(t, S_t) = \frac{\lambda_0 + v_0 \xi_t}{1 + v_0 t}, \qquad
\widehat{\theta}_t = \widehat{\theta}(t, Y_t) = \frac{\theta_0 + v_0 \zeta_t}{1 + v_0 t}, \tag{5.17}
\]
with
\[
\xi_t = \frac{1}{\sigma}\log\left(\frac{S_t}{S_0}\right) + \frac{1}{2}\sigma t, \qquad
\zeta_t = \frac{1}{\beta}\log\left(\frac{Y_t}{Y_0}\right) + \frac{1}{2}\beta t. \tag{5.18}
\]

Proof. Using a two-dimensional Kalman--Bucy filter (see, for example, Theorem V.9.2 in Fleming and Rishel [6]), $\widehat{\Xi}$ satisfies the stochastic differential equation
\[
\mathrm{d}\widehat{\Xi}_t = C_t (DD^{\mathrm{T}})^{-1}(\mathrm{d}\Lambda_t - \widehat{\Xi}_t\,\mathrm{d}t) =: C_t (DD^{\mathrm{T}})^{-1}\,\mathrm{d}N_t, \tag{5.19}
\]
where $(N_t)_{0 \le t \le T}$ is the innovations process, defined by
\[
N_t := \Lambda_t - \int_0^t \widehat{\Xi}_s\,\mathrm{d}s = \begin{pmatrix} \xi_t - \int_0^t \widehat{\lambda}_s\,\mathrm{d}s \\[2pt] \zeta_t - \int_0^t \widehat{\theta}_s\,\mathrm{d}s \end{pmatrix} =: \begin{pmatrix} \widehat{B}_t \\ \widehat{W}_t \end{pmatrix}, \tag{5.20}
\]


and $\widehat{B}, \widehat{W}$ are $\widehat{\mathbb{F}}$-Brownian motions with correlation $\rho$. The deterministic matrix function $C_t$ is the conditional variance-covariance matrix defined by
\[
C_t := E\left[(\Xi - \widehat{\Xi}_t)(\Xi - \widehat{\Xi}_t)^{\mathrm{T}} \mid \widehat{\mathcal{F}}_t\right] = E\left[(\Xi - \widehat{\Xi}_t)(\Xi - \widehat{\Xi}_t)^{\mathrm{T}}\right]
\]
($\mathrm{T}$ denoting transpose), where the last equality follows because the error $\Xi - \widehat{\Xi}_t$ is independent of $\widehat{\mathcal{F}}_t$ (Theorem V.9.2 in [6] again).

Using (5.20), and writing $\mathrm{d}S_t$ in terms of $\mathrm{d}\xi_t$, as in (5.1), gives the dynamics (5.14) of $S$ in the observation filtration; (5.15) is established similarly.

The matrix $C = (C_t)_{0 \le t \le T}$ satisfies the Riccati equation
\[
\frac{\mathrm{d}C_t}{\mathrm{d}t} = -C_t (DD^{\mathrm{T}})^{-1} C_t,
\]
with $C_0$ given in (5.12). Then $R_t := C_t^{-1}$ satisfies the Lyapunov equation
\[
\frac{\mathrm{d}R_t}{\mathrm{d}t} = (DD^{\mathrm{T}})^{-1}.
\]
Define the elements of the conditional covariance matrix by
\[
C_t =: \begin{pmatrix} v_t & c_t \\ c_t & w_t \end{pmatrix}.
\]
Then the filtering equation (5.19) is a pair of coupled stochastic differential equations:
\[
\begin{pmatrix} \mathrm{d}\widehat{\lambda}_t \\ \mathrm{d}\widehat{\theta}_t \end{pmatrix}
= \frac{1}{1-\rho^2}\begin{pmatrix} v_t - \rho c_t & c_t - \rho v_t \\ c_t - \rho w_t & w_t - \rho c_t \end{pmatrix}
\begin{pmatrix} \mathrm{d}\xi_t - \widehat{\lambda}_t\,\mathrm{d}t \\ \mathrm{d}\zeta_t - \widehat{\theta}_t\,\mathrm{d}t \end{pmatrix}
= \frac{1}{1-\rho^2}\begin{pmatrix} v_t - \rho c_t & c_t - \rho v_t \\ c_t - \rho w_t & w_t - \rho c_t \end{pmatrix}
\begin{pmatrix} \mathrm{d}\widehat{B}_t \\ \mathrm{d}\widehat{W}_t \end{pmatrix}.
\]
Solving the Lyapunov equation yields three equations for $v_t, w_t, c_t$:
\[
\frac{v_t}{v_t w_t - c_t^2} - \frac{v_0}{v_0 w_0 - c_0^2} = \frac{t}{1-\rho^2}, \qquad
\frac{w_t}{v_t w_t - c_t^2} - \frac{w_0}{v_0 w_0 - c_0^2} = \frac{t}{1-\rho^2}, \qquad
\frac{c_t}{v_t w_t - c_t^2} - \frac{c_0}{v_0 w_0 - c_0^2} = \frac{\rho t}{1-\rho^2}, \tag{5.21}
\]
where we have written $c_0 \equiv \rho \min(v_0, w_0)$ for brevity.

Now make the simplification $w_0 = v_0$. From the discussion in Section 5.2.1, we see that this corresponds to using past observations over the same length of time, $t_S = t_Y$, for both $S$ and $Y$ in fixing the prior. Then $c_0 = \rho v_0$, and the solution to the system of equations (5.21) gives the entries of the matrix $C_t$ as
\[
v_t = \frac{v_0}{1 + v_0 t}, \qquad w_t = v_t, \qquad c_t = \rho v_t.
\]


With this simplification, the equation for the optimal filter simplifies to
\[
\begin{pmatrix} \mathrm{d}\widehat{\lambda}_t \\ \mathrm{d}\widehat{\theta}_t \end{pmatrix}
= v_t \begin{pmatrix} \mathrm{d}\xi_t - \widehat{\lambda}_t\,\mathrm{d}t \\ \mathrm{d}\zeta_t - \widehat{\theta}_t\,\mathrm{d}t \end{pmatrix}
= v_t \begin{pmatrix} \mathrm{d}\widehat{B}_t \\ \mathrm{d}\widehat{W}_t \end{pmatrix},
\]
which, along with the initial condition in (5.13), yields (5.16) and (5.17). Finally, the expressions in (5.18) for $\xi_t, \zeta_t$ follow directly from the solutions of (5.1) and (5.2) for $S$ and $Y$. □

Armed with Proposition 5.5 we may now treat the model as a full information model with random drift parameters $(\widehat{\lambda}_t, \widehat{\theta}_t)$, and this is done in the next section.
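To see the filter in action, one can simulate $S$ under a true (constant) drift $\lambda$ and track the Bayesian point estimate $\widehat{\lambda}_t$ of (5.17)--(5.18). A minimal sketch (ours; parameter values illustrative):

```python
import math, random

def simulate_drift_filter(lam_true, lam0, v0, sigma, T=5.0, n_steps=5000, seed=7):
    """Simulate dS/S = sigma*(lam_true dt + dB) and compute the filter estimate
    lam_hat_t = (lam0 + v0*xi_t)/(1 + v0*t), where
    xi_t = (1/sigma) log(S_t/S_0) + sigma*t/2, as in (5.17)-(5.18)."""
    rng = random.Random(seed)
    dt = T / n_steps
    logS = 0.0  # log(S_t / S_0)
    est = [lam0]
    for k in range(1, n_steps + 1):
        dB = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        logS += sigma * lam_true * dt - 0.5 * sigma**2 * dt + sigma * dB
        t = k * dt
        xi = logS / sigma + 0.5 * sigma * t
        est.append((lam0 + v0 * xi) / (1.0 + v0 * t))
    return est

path = simulate_drift_filter(lam_true=0.4, lam0=0.0, v0=1.0, sigma=0.2)
```

Since the posterior variance $v_t = v_0/(1+v_0 t)$ decays only like $1/t$, the estimate converges slowly, consistent with the difficulty of drift estimation emphasised in the text (Rogers [31]).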

5.2.3 Optimal hedging with random drifts

On the stochastic basis $(\Omega, \widehat{\mathcal{F}}_T, \widehat{\mathbb{F}}, P)$, the wealth process associated with a trading strategy $\pi := (\pi_t)_{0 \le t \le T}$, an $\widehat{\mathbb{F}}$-adapted process satisfying $\int_0^T \pi_t^2\,\mathrm{d}t < \infty$ a.s., is $X = (X_t)_{0 \le t \le T}$, satisfying
\[
\mathrm{d}X_t = \sigma\pi_t(\widehat{\lambda}_t\,\mathrm{d}t + \mathrm{d}\widehat{B}_t). \tag{5.22}
\]
The class $\mathcal{M}$ of local martingale measures for this model consists of measures $Q$ with density processes defined by
\[
Z_t := \left.\frac{\mathrm{d}Q}{\mathrm{d}P}\right|_{\widehat{\mathcal{F}}_t} = \mathcal{E}\bigl(-\widehat{\lambda}\cdot\widehat{B} - \psi\cdot\widehat{B}^{\perp}\bigr)_t, \quad 0 \le t \le T, \tag{5.23}
\]
for integrands $\psi$ satisfying $\int_0^t \psi_s^2\,\mathrm{d}s < \infty$ a.s., for all $t \in [0,T]$ (it is not hard to show that $\int_0^t \widehat{\lambda}_s^2\,\mathrm{d}s < \infty$, $0 \le t \le T$). For $\psi = 0$ we obtain the minimal martingale measure $Q^M$.

Under $Q \in \mathcal{M}$, $(\widehat{B}^Q, \widehat{B}^{\perp,Q})$ is a two-dimensional Brownian motion, where
\[
\mathrm{d}\widehat{B}^Q_t := \mathrm{d}\widehat{B}_t + \widehat{\lambda}_t\,\mathrm{d}t, \qquad
\mathrm{d}\widehat{B}^{\perp,Q}_t := \mathrm{d}\widehat{B}^{\perp}_t + \psi_t\,\mathrm{d}t,
\]
and the asset prices and random drifts satisfy
\[
\begin{aligned}
\mathrm{d}S_t &= \sigma S_t\,\mathrm{d}\widehat{B}^Q_t, \\
\mathrm{d}Y_t &= \beta Y_t\left[\left(\widehat{\theta}_t - \rho\widehat{\lambda}_t - \sqrt{1-\rho^2}\,\psi_t\right)\mathrm{d}t + \mathrm{d}\widehat{W}^Q_t\right], \\
\mathrm{d}\widehat{\lambda}_t &= v_t\left[-\widehat{\lambda}_t\,\mathrm{d}t + \mathrm{d}\widehat{B}^Q_t\right], \\
\mathrm{d}\widehat{\theta}_t &= v_t\left[-\left(\rho\widehat{\lambda}_t + \sqrt{1-\rho^2}\,\psi_t\right)\mathrm{d}t + \mathrm{d}\widehat{W}^Q_t\right],
\end{aligned} \tag{5.24}
\]
where $\widehat{W}^Q = \rho\widehat{B}^Q + \sqrt{1-\rho^2}\,\widehat{B}^{\perp,Q}$.

The relative entropy between $Q \in \mathcal{M}$ and $P$ is defined by
\[
H(Q,P) := E\left[\frac{\mathrm{d}Q}{\mathrm{d}P}\log\frac{\mathrm{d}Q}{\mathrm{d}P}\right]
= E^Q\left[-\int_0^T \widehat{\lambda}_t\,\mathrm{d}\widehat{B}^Q_t - \int_0^T \psi_t\,\mathrm{d}\widehat{B}^{\perp,Q}_t + \frac{1}{2}\int_0^T \left(\widehat{\lambda}_t^2 + \psi_t^2\right)\mathrm{d}t\right].
\]


Using the $Q$-dynamics of $\widehat{\lambda}_t$ it is straightforward to establish that $E^Q\int_0^t \widehat{\lambda}_s^2\,\mathrm{d}s < \infty$ for all $t \in [0,T]$. If, in addition, we have the integrability condition
\[
E^Q\int_0^t \psi_s^2\,\mathrm{d}s < \infty, \quad 0 \le t \le T, \tag{5.25}
\]
then
\[
H(Q,P) = E^Q\left[\frac{1}{2}\int_0^T\left(\widehat{\lambda}_t^2 + \psi_t^2\right)\mathrm{d}t\right] < \infty. \tag{5.26}
\]
In this case we write $Q \in \mathcal{M}_f$, where $\mathcal{M}_f$ denotes the set of martingale measures $Q$ with finite relative entropy with respect to $P$, and we define $H(Q,P) := \infty$ otherwise.

From (5.26) we note that the minimal entropy measure $Q^E$ is characterised by
\[
H(Q^E, P) = E^{Q^E}\left[\frac{1}{2}\int_0^T \widehat{\lambda}_t^2\,\mathrm{d}t\right],
\]
corresponding to $\psi \equiv 0$ in (5.26). This means that the minimal martingale measure and the minimal entropy measure in this model coincide: $Q^E = Q^M$.

For an initial time $t \in [0,T]$, we define the conditional entropy between $Q \in \mathcal{M}$ and $P$ by
\[
H_t(Q,P) := E\left[\frac{Z_T}{Z_t}\log\frac{Z_T}{Z_t}\,\Big|\,\widehat{\mathcal{F}}_t\right], \quad 0 \le t \le T, \tag{5.27}
\]
satisfying $H_0(Q,P) \equiv H(Q,P)$. Provided the integrability condition (5.25) is satisfied, then
\[
H_t(Q,P) = E^Q\left[\frac{1}{2}\int_t^T\left(\widehat{\lambda}_u^2 + \psi_u^2\right)\mathrm{d}u\,\Big|\,\widehat{\mathcal{F}}_t\right],
\]
and we define $H_t(Q,P) := \infty$ otherwise. In particular, therefore, recalling that $\widehat{\lambda}_t \equiv \widehat{\lambda}(t, S_t)$ is a smooth and Lipschitz function of time and current stock price, and that the $Q$-dynamics of $\widehat{\lambda}_t$ do not depend on $\psi_t$ for any $Q \in \mathcal{M}$, the minimal conditional entropy $(H_t(Q^E, P))_{0 \le t \le T}$ will be a deterministic function of time and stock price, given by $H_t(Q^E, P) \equiv H^E(t, S_t)$ for a $C^{1,2}([0,T]\times\mathbb{R}_+)$ function $H^E$ defined by
\[
H^E(t,s) := E^{Q^E}\left[\frac{1}{2}\int_t^T \widehat{\lambda}^2(u, S_u)\,\mathrm{d}u\,\Big|\,S_t = s\right]. \tag{5.28}
\]

5.2.4 The primal problem

We use an exponential utility function, $U(x) = -\exp(-\alpha x)$, $x \in \mathbb{R}$, $\alpha > 0$. The primal value function $u^{(n)}$ is defined as the maximum expected utility of wealth at $T$ from trading $S$ and receiving $n$ units of the claim on $Y$, when starting at time $t \in [0,T]$:
\[
u^{(n)}(t,x,s,y) := \sup_{\pi \in \mathcal{A}} E\left[U(X_T + nh(Y_T)) \mid X_t = x, S_t = s, Y_t = y\right], \tag{5.29}
\]


where $\mathcal{A}$ denotes the set of admissible trading strategies. The dynamics of the state variables $X, S, Y$ are given by (5.22) and (5.14), (5.15). For starting time zero we write $u^{(n)}(x) \equiv u^{(n)}(0, x, \cdot, \cdot)$.

The set of admissible strategies is defined as follows. Let $\Delta := \pi/S$ denote the adapted $S$-integrable process for the number of shares held. The space of permitted strategies is
\[
\mathcal{A} = \{\Delta : (\Delta \cdot S) \text{ is a } (Q, \widehat{\mathbb{F}})\text{-martingale for all } Q \in \mathcal{M}_f\},
\]
where $(\Delta \cdot S)_t = \int_0^t \Delta_u\,\mathrm{d}S_u$ is the gain from trading over $[0,t]$, $t \in [0,T]$. Denote the optimal trading strategy by $\pi^* \equiv \pi^{*,n}$, and the optimal wealth process by $X^* \equiv X^{*,n}$.

The utility-based price $p^{(n)}$ and optimal hedge for a position in $n$ claims are defined along the lines of Definitions 5.1 and 5.2. The indifference price per claim at $t \in [0,T]$, given $X_t = x$, $S_t = s$, $Y_t = y$, is $p^{(n)}$, given by
\[
u^{(n)}(t, x - np^{(n)}(t,x,s,y), s, y) = u^{(0)}(t,x,s).
\]
The optimal hedging strategy is to hold $(\Delta^H_t)_{0 \le t \le T}$ shares of stock at time $t$, where $\Delta^H_t S_t =: \pi^H_t$, and $\pi^H := (\pi^H_t)_{0 \le t \le T}$ is defined by
\[
\pi^H_t := \pi^{*,n}_t - \pi^{*,0}_t, \quad 0 \le t \le T. \tag{5.30}
\]
It is well known that with exponential utility the indifference price is independent of the initial cash wealth $x$, so we shall write $p^{(n)}(t,x,s,y) \equiv p^{(n)}(t,s,y)$ from now on.

For small positions in the claim (or, equivalently, for small risk aversion), we shall later approximate the indifference price by the marginal utility-based price introduced by Davis [3]. This is the indifference price for infinitesimal diversions of funds into the purchase or sale of claims, and is equivalent (as is well known; see for example Monoyios [24]) to the limit of the indifference price as $n \to 0$.

Definition 5.6 (Marginal price). The marginal utility-based price of the claim at $t \in [0,T]$ is $p(t,s,y)$, defined by
\[
p(t,s,y) := \lim_{n \to 0} p^{(n)}(t,s,y).
\]
It is well known that with exponential utility the marginal price is also equivalent to the limit of the indifference price as risk aversion goes to zero. Under appropriate conditions (satisfied in this model) it is given by the expectation of the payoff under the optimal measure of the dual problem without the claim. For exponential utility this measure is the minimal entropy measure $Q^E$ and, as we have already seen, in our model $Q^E = Q^M$, giving the representation $p(t,s,y) = E^{Q^M}[h(Y_T) \mid S_t = s, Y_t = y]$, as we shall see in the next section.

5.2.5 Dual problem and optimal hedge

We attack the primal utility maximisation problem (5.29) using classical duality results. For a problem with the random terminal endowment of a European claim, and with

It is well known that with exponential utility the marginal price is also equivalent to the limit of the indifference price as risk aversion goes to zero. Under appropriate conditions (satisfied in this model) it is given by the expectation of the payoff under the optimal measure of the dual problem without the claim. For exponential utility already seen, in our this measure is the minimal entropy measure QE and, as we have  M model QE = QM , giving the representation p(t, s, y) = E Q h(YT ) | St = s, Yt = y , as we shall see in the next section. 5.2.5 Dual problem and optimal hedge We attack the primal utility maximisation problem (5.29) using classical duality results. For a problem with the random terminal endowment of a European claim, and with


exponential utility, as in this paper, Delbaen et al. [5] establish the required duality relations between the primal and dual problems in a semimartingale setting. We shall use these results below to establish a simple algebraic relation (Lemma 5.7) between the primal value function and the indifference price, which we shall then exploit to derive the representation for the optimal hedging strategy.

The dual problem with starting time zero has value function defined by
\[
\widetilde{u}^{(n)}(\eta) := \inf_{Q \in \mathcal{M}} E\left[\widetilde{U}(\eta Z_T) + \eta Z_T nh(Y_T)\right],
\]
where $Z$ is the density process in (5.23) and $\widetilde{U}$ is the convex conjugate of the utility function. For exponential utility, $\widetilde{U}$ is given by
\[
\widetilde{U}(\eta) = \frac{\eta}{\alpha}\left(\log\frac{\eta}{\alpha} - 1\right).
\]
Hence the dual value function has the well-known entropic representation
\[
\widetilde{u}^{(n)}(\eta) = \widetilde{U}(\eta) + \frac{\eta}{\alpha}\inf_{Q \in \mathcal{M}}\left[H(Q,P) + \alpha n E^Q h(Y_T)\right].
\]
Denoting the dual minimiser that attains the above infimum by $Q^{*,n}$, we observe that $Q^{*,n} \in \mathcal{M}_f$. For a starting time $t \in [0,T]$ the dual value function is defined by
\[
\widetilde{u}^{(n)}(t,\eta,s,y) := \inf_{Q \in \mathcal{M}} E\left[\widetilde{U}\left(\eta\frac{Z_T}{Z_t}\right) + \eta\frac{Z_T}{Z_t}nh(Y_T)\,\Big|\,S_t = s, Y_t = y\right], \tag{5.31}
\]
and we write $\widetilde{u}^{(n)}(\eta) \equiv \widetilde{u}^{(n)}(0, \eta, \cdot, \cdot)$.

Lemma 5.7. The primal value function and indifference price are related by
\[
u^{(n)}(t,x,s,y) = u^{(0)}(t,x,s)\exp\left(-\alpha n p^{(n)}(t,s,y)\right), \tag{5.32}
\]
where the value function without the claim is given by
\[
u^{(0)}(t,x,s) = -\exp\left(-\alpha x - H^E(t,s)\right), \tag{5.33}
\]
and $H^E(t,s)$ is the conditional minimal entropy function defined in (5.28).

Proof. For brevity, we give the proof for $t = 0$. The proof for a general starting time follows similar lines, and we make some comments on how to adapt the following argument for that case at the end of the proof.

The fundamental duality linking the primal and dual problems in Delbaen et al. [5] implies that the value functions $u^{(n)}(x)$ and $\widetilde{u}^{(n)}(\eta)$ are conjugate:
\[
\widetilde{u}^{(n)}(\eta) = \sup_{x \in \mathbb{R}}\left[u^{(n)}(x) - x\eta\right], \qquad
u^{(n)}(x) = \inf_{\eta > 0}\left[\widetilde{u}^{(n)}(\eta) + x\eta\right].
\]
The value of $\eta$ attaining the above infimum is $\eta^*$, given by $\widetilde{u}^{(n)}_\eta(\eta^*) = -x$, so that
\[
u^{(n)}(x) = \widetilde{u}^{(n)}(\eta^*) + x\eta^*,
\]


which translates to
\[
u^{(n)}(x) = -\exp\left(-\alpha x - \inf_{Q \in \mathcal{M}}\left[H(Q,P) + \alpha n E^Q h(Y_T)\right]\right). \tag{5.34}
\]
So, in particular,
\[
u^{(0)}(x) = -\exp\left(-\alpha x - H(Q^E, P)\right), \tag{5.35}
\]
where $Q^E$ is the minimal entropy measure: $Q^E = Q^{*,0}$. Combining the dual representations (5.34) and (5.35) for the primal problems with and without the claim, with the definition of the indifference price, gives the dual representation for the utility-based price in the form
\[
p^{(n)} = \frac{1}{\alpha n}\left(\inf_{Q \in \mathcal{M}}\left[H(Q,P) + \alpha n E^Q h(Y_T)\right] - H(Q^E, P)\right), \tag{5.36}
\]
which is the representation found in Delbaen et al. [5], modified slightly as we have a random endowment of $n$ claims ([5] considered the case $n = -1$). In particular, for $n \to 0$ or $\alpha \to 0$, we obtain the marginal price of Davis [3]:
\[
p := \lim_{n \to 0} p^{(n)} = E^{Q^E} h(Y_T) = E^{Q^M} h(Y_T), \tag{5.37}
\]
the last equality following from the equality of $Q^M$ and $Q^E$, as implied by (5.26). From (5.34)--(5.36), the relation between the primal value functions and indifference price then follows immediately, as
\[
u^{(n)}(x) = -\exp\left(-\alpha x - H(Q^E, P) - \alpha n p^{(n)}\right) = u^{(0)}(x)\exp\left(-\alpha n p^{(n)}\right).
\]
Similarly, a corresponding relation for a starting time $t \in [0,T]$ may also be derived. This is achieved using the definition (5.31) of the dual value function for an initial time $t \in [0,T]$, the conjugacy of $u^{(n)}(t,x,s,y)$ and $\widetilde{u}^{(n)}(t,\eta,s,y)$, and the definitions (5.27) and (5.28) of the conditional entropy and conditional minimal entropy. □

Using Lemma 5.7 we obtain the following representation for the optimal hedging strategy associated with the indifference price. In what follows we assume that the indifference price is a suitably smooth function of $(t,s,y)$, so that (given Lemma 5.7) we may assume the primal value function is smooth enough to be a classical solution of the associated Hamilton--Jacobi--Bellman (HJB) equation. This smoothness property is confirmed in [23].

Theorem 5.8. The optimal hedge for a position in $n$ claims is to hold $\Delta^H_t$ units of $S$ at $t \in [0,T]$, where
\[
\Delta^H_t = -n\left(p^{(n)}_s(t, S_t, Y_t) + \rho\frac{\beta}{\sigma}\frac{Y_t}{S_t}p^{(n)}_y(t, S_t, Y_t)\right).
\]


Remark 5.9. We note the extra term in the hedging formula compared with the corresponding full information result (5.10). The drift parameter uncertainty results in additional risk, manifested as dependence of the indifference price on the stock price, and hence the derivative with respect to the stock price appears in the theorem.

Proof. The HJB equation associated with the primal value function is
\[
u^{(n)}_t + \max_{\pi} \mathcal{A}^{X,S,Y} u^{(n)} = 0,
\]
where $\mathcal{A}^{X,S,Y}$ is the generator of $(X,S,Y)$ under $P$. Performing the maximisation over $\pi$ yields the optimal Markov control as $\pi^{*,n}_t = \pi^{*,n}(t, X^{*,n}_t, S_t, Y_t)$, where
\[
\pi^{*,n}(t,x,s,y) = -\frac{\widehat{\lambda}u^{(n)}_x + \sigma s u^{(n)}_{xs} + \rho\beta y u^{(n)}_{xy}}{\sigma u^{(n)}_{xx}},
\]
and where the arguments of the functions on the right-hand side are omitted for brevity. For the case $n = 0$ there is no dependence on $y$ in the value function $u^{(0)}$, and we have $\pi^{*,0}_t = \pi^{*,0}(t, X^{*,0}_t, S_t)$, where
\[
\pi^{*,0}(t,x,s) = -\frac{\widehat{\lambda}u^{(0)}_x + \sigma s u^{(0)}_{xs}}{\sigma u^{(0)}_{xx}}.
\]
Applying the definition (5.30) of the optimal hedging strategy along with the representations (5.32) and (5.33) from Lemma 5.7 for the value functions gives the result. □

5.2.6 Stochastic control representation of the indifference price

Using the expression (5.26) for the relative entropy between measures $Q \in \mathcal{M}_f$ and $P$ in the dual representation (5.36) of $p^{(n)}$, we obtain the indifference price of the claim at time zero as the value function of a control problem:
\[
p^{(n)} = \inf_{\psi} E^Q\left[\frac{1}{2\alpha n}\int_0^T \psi_t^2\,\mathrm{d}t + h(Y_T)\right],
\]
to be minimised over control processes $(\psi_t)_{0 \le t \le T}$ such that $Q \in \mathcal{M}_f$. Of course, we need only consider measures with finite relative entropy, since a martingale measure with $H(Q,P) = \infty$ will not attain the infimum in (5.36). The dynamics for $S, Y$ are given in the system of equations (5.24). Equivalently, since $\widehat{\lambda}, \widehat{\theta}$ may be expressed as functions of time and current asset price by (5.17), we may write the state dynamics of the control problem for the indifference price as
\[
\mathrm{d}S_t = \sigma S_t\,\mathrm{d}\widehat{B}^Q_t, \qquad
\mathrm{d}Y_t = \beta Y_t\left[\left(\widehat{\theta}(t, Y_t) - \rho\widehat{\lambda}(t, S_t) - \sqrt{1-\rho^2}\,\psi_t\right)\mathrm{d}t + \mathrm{d}\widehat{W}^Q_t\right].
\]
Adopting a dynamic programming approach, we consider a starting time $t \in [0,T]$. Then we have
\[
p^{(n)}(t,s,y) = \inf_{\psi} E^Q\left[\frac{1}{2\alpha n}\int_t^T \psi_u^2\,\mathrm{d}u + h(Y_T)\,\Big|\,S_t = s, Y_t = y\right].
\]


The HJB dynamic programming PDE associated with $p^{(n)}(t,s,y)$ is
\[
p^{(n)}_t + \mathcal{A}^{Q^M}_{S,Y} p^{(n)} + \inf_{\psi}\left[\frac{1}{2\alpha n}\psi^2 - \sqrt{1-\rho^2}\,\beta y \psi p^{(n)}_y\right] = 0, \qquad p^{(n)}(T,s,y) = h(y),
\]
where $\mathcal{A}^{Q^M}_{S,Y}$ is the generator of $(S,Y)$ under the minimal measure:
\[
\mathcal{A}^{Q^M}_{S,Y} f(t,s,y) = \beta\left(\widehat{\theta}(t,y) - \rho\widehat{\lambda}(t,s)\right)y f_y + \frac{1}{2}\sigma^2 s^2 f_{ss} + \frac{1}{2}\beta^2 y^2 f_{yy} + \rho\sigma\beta s y f_{sy}.
\]
Performing the minimisation in the HJB equation, the optimal Markov control is $\psi^{*,n}_t \equiv \psi^{*,n}(t, S_t, Y_t)$, where
\[
\psi^{*,n}(t,s,y) = \alpha n \sqrt{1-\rho^2}\,\beta y p^{(n)}_y(t,s,y),
\]
and note that $\psi^{*,0} = 0$. Substituting back into the HJB equation, we find that $p^{(n)}$ solves the semi-linear PDE
\[
p^{(n)}_t + \mathcal{A}^{Q^M}_{S,Y} p^{(n)} - \frac{1}{2}\alpha n(1-\rho^2)\beta^2 y^2 \left(p^{(n)}_y\right)^2 = 0, \qquad p^{(n)}(T,s,y) = h(y).
\]
We note that for $n = 0$ this becomes a linear PDE for the marginal price $p$, so that by the Feynman--Kac theorem we have
\[
p(t,s,y) = E^{Q^M}_{t,s,y} h(Y_T), \tag{5.38}
\]
consistent with the general result (5.37). We shall see that in this case the marginal price is given by a BS-type formula.

5.2.7 Analytic approximation for the indifference price

To obtain analytic results we approximate the indifference price by the marginal price in (5.38). The marginal price (and hence the associated trading strategy) can be computed in analytic form since, under $Q^M$, $\log Y_T$ is Gaussian. We have the following result.

Proposition 5.10. Under $Q^M$, conditional on $S_t = s$, $Y_t = y$, $\log Y_T \sim N(m, \Sigma^2)$, where $m \equiv m(t,s,y)$ and $\Sigma^2 \equiv \Sigma^2(t)$ are given by
\[
m(t,s,y) = \log y + \beta\left(\widehat{\theta}(t,y) - \rho\widehat{\lambda}(t,s) - \frac{1}{2}\beta\right)(T-t),
\]
\[
\Sigma^2(t) = \left[1 + (1-\rho^2)v_t(T-t)\right]\beta^2(T-t).
\]

Proof. This is established by computing the SDEs for $Y$ and for $\widehat{\theta}_t - \rho\widehat{\lambda}_t$ under $Q^M$. Indeed, applying the Itô formula to $\log Y_t$ under $Q^M$, we obtain, for $t < T$,
\[
\log Y_T = \log Y_t + \beta\int_t^T\left(\widehat{\theta}_u - \rho\widehat{\lambda}_u\right)\mathrm{d}u - \frac{1}{2}\beta^2(T-t) + \beta\int_t^T \mathrm{d}\widehat{W}^{Q^M}_u, \tag{5.39}
\]


where $\widehat{W}^{Q^M}$ is a Brownian motion under $Q^M$. The dynamics of $\widehat{\theta}_t - \rho\widehat{\lambda}_t$ under $Q^M$ are
\[
\mathrm{d}\left(\widehat{\theta}_t - \rho\widehat{\lambda}_t\right) = \sqrt{1-\rho^2}\,v_t\,\mathrm{d}\widehat{B}^{\perp,Q^M}_t,
\]
where $\widehat{B}^{\perp,Q^M}$ is a $Q^M$-Brownian motion perpendicular to that driving the stock, related to $\widehat{W}^{Q^M}$ by $\widehat{W}^{Q^M} = \rho\widehat{B}^{Q^M} + \sqrt{1-\rho^2}\,\widehat{B}^{\perp,Q^M}$, and where $\widehat{B}^{Q^M}$ is the Brownian motion driving $S$. Hence, for $u > t$, after changing the order of integration in a double integral, we obtain
\[
\int_t^T\left(\widehat{\theta}_u - \rho\widehat{\lambda}_u\right)\mathrm{d}u = \left(\widehat{\theta}_t - \rho\widehat{\lambda}_t\right)(T-t) + \sqrt{1-\rho^2}\int_t^T v_u(T-u)\,\mathrm{d}\widehat{B}^{\perp,Q^M}_u.
\]
This can be inserted into (5.39) to yield the desired result. □

We are thus able to obtain BS-style formulae for the price and hedge. For a put option of strike $K$ we easily obtain the following explicit formulae for the marginal price and the associated optimal hedging strategy, where $\Phi$ denotes the standard cumulative normal distribution function.

Corollary 5.11. With $m$ and $\Sigma$ as in Proposition 5.10, define $b \equiv b(t,s,y)$ by
\[
m = \log y + b - \frac{1}{2}\Sigma^2.
\]
Then the marginal price at time $t \in [0,T]$ of a put option with payoff $(K - Y_T)^+$ is $p(t, S_t, Y_t)$, where
\[
p(t,s,y) = K\Phi(-d_1 + \Sigma) - y e^b \Phi(-d_1), \qquad
d_1 = \frac{1}{\Sigma}\left[\log\left(\frac{y}{K}\right) + b + \frac{1}{2}\Sigma^2\right].
\]
The optimal hedging strategy given by Theorem 5.8, with $p$ as an approximation to the indifference price, is $\widehat{\Delta}_t \equiv \widehat{\Delta}(t, S_t, Y_t)$, where
\[
\widehat{\Delta}(t,s,y) = n\rho\frac{\beta}{\sigma}\frac{y}{s}e^b \Phi(-d_1).
\]
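The formulae of Proposition 5.10 and Corollary 5.11 translate directly into code. The sketch below is ours; the argument names are our own, with the current filtered drift estimates and the posterior variance $v_t$ supplied by the caller, and $b$ recovered from $m = \log y + b - \frac{1}{2}\Sigma^2$:

```python
from math import log, sqrt, exp
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cdf

def marginal_put_price_and_hedge(t, s, y, K, T, n, sigma, beta, rho,
                                 lam_hat, theta_hat, v_t):
    """Put marginal price and hedge of Corollary 5.11, with m and Sigma
    computed from Proposition 5.10; lam_hat, theta_hat are the current
    drift estimates, v_t the current posterior variance."""
    tau = T - t
    m = log(y) + beta * (theta_hat - rho * lam_hat - 0.5 * beta) * tau
    Sigma = sqrt((1.0 + (1.0 - rho**2) * v_t * tau) * beta**2 * tau)
    b = m - log(y) + 0.5 * Sigma**2      # from m = log y + b - Sigma^2/2
    d1 = (log(y / K) + b + 0.5 * Sigma**2) / Sigma
    price = K * Phi(-d1 + Sigma) - y * exp(b) * Phi(-d1)
    hedge = n * rho * (beta / sigma) * (y / s) * exp(b) * Phi(-d1)
    return price, hedge
```

Note that, relative to the standard Black--Scholes put formula, both the drift adjustment $e^b$ and the inflated variance $\Sigma^2$ reflect the filtered drift estimates and the residual parameter uncertainty.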

In [23] these results are used to conduct a simulation study of the effectiveness of the optimal hedge under partial information (that is, with Bayesian learning about the drift parameters of the assets), compared with the BS-style hedge and the optimal hedge without learning. The results show that optimal hedging combined with a filtering algorithm to deal with drift parameter uncertainty can indeed give improved hedging performance over methods which take S as a perfect proxy for Y , and over methods which do not incorporate learning via filtering.


Bibliography

[1] T. Björk, M. H. A. Davis and C. Landén, Optimal investment under partial information, preprint (2008).
[2] A. Danilova, M. Monoyios and A. Ng, Optimal investment with inside information and parameter uncertainty, preprint (2009).
[3] M. H. A. Davis, Option pricing in incomplete markets, Mathematics of Derivative Securities (Cambridge) (M. A. H. Dempster and S. R. Pliska, eds.), Cambridge University Press, 1997, pp. 216–226.
[4] M. H. A. Davis, Optimal hedging with basis risk, From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift (Berlin) (Y. Kabanov, R. Lipster and J. Stoyanov, eds.), Springer, 2006, pp. 169–188.
[5] F. Delbaen, P. Grandits, T. Rheinländer, D. Samperi, M. Schweizer and C. Stricker, Exponential hedging and entropic penalties, Mathematical Finance 12 (2002), pp. 99–123.
[6] W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.
[7] M. Fujisaki, G. Kallianpur and H. Kunita, Stochastic differential equations for the nonlinear filtering problem, Osaka J. Math. 9 (1972), pp. 19–40.
[8] V. Henderson, Valuation of claims on nontraded assets using utility maximization, Mathematical Finance 12 (2002), pp. 351–373.
[9] J. Hugonnier and D. Kramkov, Optimal investments with random endowments in incomplete markets, Annals of Applied Probability 14 (2004), pp. 845–864.
[10] T. Kailath, An innovations approach to least-squares estimation, Part I: Linear filtering in additive noise, IEEE Trans. Automatic Control 13 (1968), pp. 646–655.
[11] R. E. Kalman and R. S. Bucy, New results in linear filtering and prediction theory, Trans. ASME Ser. D. J. Basic Engineering 83 (1961), pp. 95–108.
[12] G. Kallianpur, Stochastic Filtering Theory, Springer, 1980.
[13] I. Karatzas, Lectures on the Mathematics of Finance, CRM Monographs 8, American Mathematical Society, 1996.
[14] I. Karatzas, J. P. Lehoczky, S. E. Shreve and G.-L. Xu, Martingale and duality methods for utility maximization in an incomplete market, SIAM Journal on Control and Optimization 29 (1991), pp. 702–730.
[15] D. Kramkov and W. Schachermayer, The asymptotic elasticity of utility functions and optimal investment in incomplete markets, Annals of Applied Probability 9 (1999), pp. 904–950.
[16] R. S. Lipster and A. N. Shiryaev, Statistics of Random Processes I: General Theory, Springer, 2001.
[17] R. S. Lipster and A. N. Shiryaev, Statistics of Random Processes II: Applications, Springer, 2001.
[18] R. Mansuy and M. Yor, Random Times and Enlargements of Filtrations in a Brownian Setting, Lecture Notes in Mathematics 1873, Springer, 2006.
[19] R. C. Merton, Lifetime portfolio selection under uncertainty: the continuous-time case, Rev. Econom. Statist. 51 (1969), pp. 247–257.


[20] R. C. Merton, Optimum consumption and portfolio rules in a continuous-time model, J. Econom. Theory 3 (1971), pp. 373–413. Erratum: ibid. 6 (1973), pp. 213–214.
[21] M. Monoyios, Performance of utility-based strategies for hedging basis risk, Quantitative Finance 4 (2004), pp. 245–255.
[22] M. Monoyios, Optimal hedging and parameter uncertainty, IMA Journal of Management Mathematics 18 (2007), pp. 331–351.
[23] M. Monoyios, Marginal utility-based hedging of claims on non-traded assets with partial information, preprint (2008).
[24] M. Monoyios, Utility indifference pricing with market incompleteness, Nonlinear Models in Mathematical Finance: New Research Trends in Option Pricing (New York) (M. Ehrhardt, ed.), Nova Science Publishers, 2008.
[25] M. Monoyios, Asymptotic expansions for optimal hedging of basis risk with partial information, preprint (2009).
[26] M. Musiela and T. Zariphopoulou, An example of indifference prices under exponential preferences, Finance & Stochastics 8 (2004), pp. 229–239.
[27] M. P. Owen, Utility based optimal hedging in incomplete markets, Annals of Applied Probability 12 (2002), pp. 691–709.
[28] H. Pham, Portfolio optimization under partial observation: theoretical and numerical aspects, Handbook of Nonlinear Filtering (Oxford) (D. Crisan and B. Rozovsky, eds.), Oxford University Press, to appear.
[29] I. Pikovsky and I. Karatzas, Anticipative portfolio optimization, Adv. in App. Prob. 28 (1996), pp. 1095–1122.
[30] E. Platen and W. J. Runggaldier, A benchmark approach to portfolio optimization under partial information, Asia Pacific Financial Markets 14 (2007), pp. 25–43.
[31] L. C. G. Rogers, The relaxed investor and parameter uncertainty, Finance & Stochastics 5 (2001), pp. 131–154.
[32] L. C. G. Rogers and D. Williams, Diffusions, Markov Processes, and Martingales, Vol. 2: Itô Calculus, 2nd ed., Cambridge University Press, Cambridge, UK, 2000.
[33] M. Zakai, On the optimal filtering of diffusion processes, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 11 (1969), pp. 230–243.

Author information Michael Monoyios, Mathematical Institute, University of Oxford, UK. Email: [email protected]

Radon Series Comp. Appl. Math 8, 411–426

© de Gruyter 2009

Investment/consumption choice in illiquid markets with random trading times

Huyên Pham

Abstract. We consider a portfolio/consumption selection problem in a liquidity risk model introduced in [11], and further investigated in [12] and [4]. This survey paper summarises the main results of these works. In this modelling of an illiquid market, stock prices are quoted and observed only at exogenous random times corresponding to the arrivals of buy/sell orders. The investor trades the stock at these random times, while consuming continuously from his cash holdings, and the goal is to maximise the expected utility from consumption. This mixed discrete/continuous stochastic control problem is solved by a dynamic programming approach, which leads to a coupled system of integro-partial differential equations (IPDEs). Analytic characterisations of the value functions and of the optimal strategies are derived, and we provide a convergent numerical algorithm for the resolution of this coupled system of IPDEs. Several numerical experiments illustrate the impact of the restricted liquidity trading opportunities, and we measure in particular the utility loss with respect to the classical Merton consumption problem.

Key words. Liquidity, random trading times, portfolio/consumption problem, cost of liquidity, integro-partial differential equations, viscosity solutions.

AMS classification. 93E20, 49K22, 91B28

1 Introduction

Liquidity risk is one of the most significant risk factors in financial markets, yet a lot remains to be done at the theoretical level for designing appropriate measures of liquidity and understanding the mechanisms which underlie it. In general, market liquidity is the ability to quickly liquidate large volumes at low cost when assets have to be converted into cash. Liquidity is therefore a three-dimensional measure composed of: (i) volume: the size of the traded position; (ii) price: the costs caused by trading the position; (iii) time: the point in time when one has to trade or execute the position.

There have been numerous approaches to modelling liquidity over the years, mostly focusing on the volume and price dimensions of market liquidity. In this direction, the recent papers [1], [2], [3] and [14] studied the price impact, that is, the correlation between an incoming order (to buy or sell) and the subsequent price change. The temporal dimension of market liquidity is related to the restriction on asset price observation, trade or execution times, and is a crucial determinant of liquidity. It has been largely studied in the econometrics of high-frequency data, especially for volatility estimation. The next important issue in this modelling is to understand the implications for pricing and risk management. In this perspective, Schwartz and Tebaldi [15] and Longstaff [8] assumed in their models that illiquid assets can only be traded at the starting date and at a fixed terminal horizon. In a less extreme modelling, Rogers and Zane [13] and Matsumoto [9] consider random trading dates with continuous-time observation, by assuming that trades succeed only at the jump times of a Poisson process, and study the impact on the portfolio choice problem.

In this paper, we consider a description of liquidity risk introduced in Pham and Tankov [11], which is consistent with the situation often faced by practitioners, where their ability to trade assets is limited or restricted to the times when a quote comes into the market. The price of the risky asset can be observed, and trade orders can be passed, only at the random times of an exogenous Poisson process. These times model the arrival of buy/sell orders in an illiquid market, or the dates on which the results of a hedge fund are published. This setup was inspired by recent papers of Frey and Runggaldier [6] and Cvitanic, Liptser and Rozovskii [5], who assume in addition that there is an unobservable stochastic volatility, and are interested in the estimation of this volatility. In our liquidity risk context, we suppose that the investor is also allowed to consume (or distribute dividends) continuously from the bank account, and the objective is to maximise the expected discounted utility of consumption. The resulting optimisation problem is a nonstandard mixed discrete/continuous-time stochastic control problem, which leads, via the dynamic programming principle, to a coupled system of nonlinear integro-partial differential equations (IPDE). The aim of this paper is to summarise the main results recently obtained in [11], [12] and [4] about the IPDE characterisation and regularity of the value functions, the existence and representation of the optimal strategies, and the numerical resolution of this investment/consumption problem in a liquidity risk context, with computational illustrations compared to the classical Merton problem.
The rest of the paper is structured as follows. In Section 2, we describe the liquidity risk model, and in Section 3 we formulate the optimal investment/consumption problem. Section 4 contains the main results of the paper, stating the IPDE viscosity characterisation and regularity of the value functions, which are then used for deriving the optimal portfolio and consumption policies. Finally, we describe in Section 5 a convergent numerical algorithm and give some numerical tests measuring the impact of our liquidity trading constraints.

2 The liquidity risk model

We consider a market model in which the bids and offers on a risky asset are not available at all times. The arrivals of buy/sell orders occur at the jump times (τ_k)_{k≥0}, τ_0 = 0 < τ_1 < … < τ_k < …, of a Poisson process with constant intensity λ, independent of the asset price process S. For simplicity, we assume that the continuous-time price process S follows a Black–Scholes dynamics:

  dS_t = S_t (b dt + σ dW_t),   (2.1)

where W is a standard Brownian motion on a probability space (Ω, G, P), b, σ > 0 are positive constants, and we denote by F = (F_t)_{t≥0} the natural filtration of W, which is also the filtration generated by the asset price S.


In this illiquid market, the investor can observe and trade S only at the random times (τ_k)_{k≥0}. We denote by

  Z_{t,s} = (S_s − S_t) / S_t,   0 ≤ t ≤ s,

and by

  Z_k = Z_{τ_{k−1}, τ_k},   k ≥ 1,

the observed return process, valued in (−1, ∞). We set by convention Z_0 to some fixed constant. The investor may also consume continuously from the bank account (the interest rate is assumed w.l.o.g. to be zero) between two trading dates. We introduce the continuous observation filtration G^c = (G_t)_{t≥0} with

  G_t = σ{(τ_k, Z_k) : τ_k ≤ t},

and the discrete observation filtration G^d = (G_{τ_k})_{k≥0}. A portfolio/consumption policy is a mixed discrete/continuous process (α, c), where α = (α_k)_{k≥0} is real-valued G^d-adapted, and c = (c_t)_{t≥0} is a nonnegative G^c-adapted process: α_k represents the amount of stock invested for the period (τ_k, τ_{k+1}] after observing the stock price at time τ_k, and c_t is the consumption rate at time t based on the available information. Starting from an initial capital x ≥ 0, and given a control policy (α, c), we denote by X^x_{τ_k} the wealth of the investor at time τ_k, given by:

  X^x_{τ_k} = x − ∫_0^{τ_k} c_t dt + Σ_{i=0}^{k−1} α_i Z_{i+1},   k ≥ 0.   (2.2)

Given x ≥ 0, we say that a control policy (α, c) is admissible, and we denote it by (α, c) ∈ A(x), if:

  X^x_{τ_k} ≥ 0  a.s.  ∀ k ≥ 0.   (2.3)

Remark 2.1. For all k ≥ 0, conditionally on the interarrival time τ_{k+1} − τ_k = t ≥ 0, we see from (2.1) that Z_{k+1} is independent of G_{τ_k}, and has distribution p(t, dz) of support (−1, ∞), with

  p(t, dz) = P[ exp( (b − σ²/2) t + σ W_t ) − 1 ∈ dz ].

Notice that

  ∫ z p(t, dz) = E[ Z_{k+1} | G_{τ_k}, τ_{k+1} − τ_k = t ] = e^{bt} − 1 ≥ 0,   k ≥ 0, t ≥ 0.   (2.4)
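The law p(t, dz) in Remark 2.1 is a shifted lognormal distribution, so the first-moment identity (2.4) is easy to check by simulation. The following sketch (with illustrative parameter values, not taken from the paper) draws returns conditionally on an interarrival time t:

```python
import numpy as np

# Monte Carlo check of (2.4): conditionally on an interarrival time t, the
# return Z has law p(t, dz), i.e. Z = exp((b - sigma^2/2) t + sigma W_t) - 1
# with W_t ~ N(0, t). Parameter values here are illustrative only.
b, sigma, t = 0.4, 1.0, 0.5
rng = np.random.default_rng(0)

n = 1_000_000
w = rng.normal(0.0, np.sqrt(t), size=n)
z = np.exp((b - 0.5 * sigma**2) * t + sigma * w) - 1.0

print(z.min() > -1.0)                                 # support is (-1, infinity)
print(abs(z.mean() - (np.exp(b * t) - 1.0)) < 0.01)   # conditional mean e^{bt} - 1
```

Since E[e^{σW_t}] = e^{σ²t/2}, the −σ²/2 correction in the exponent cancels, and the conditional mean return reduces to e^{bt} − 1, nonnegative whenever b ≥ 0.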

Remark 2.2. Constrained policies. Since X^x_{τ_{k+1}} = X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c_u du + α_k Z_{k+1}, and the support of Z_{k+1} is (−1, ∞), we see that the admissibility condition (2.3) for (α, c) ∈ A(x) is written as:

  X^x_{τ_k} − ∫_{τ_k}^s c_u du + α_k z ≥ 0,   ∀ k ≥ 0, ∀ s ≥ τ_k, ∀ z ∈ (−1, ∞),

almost surely. This means that we have a no-short sale constraint (both on the risky asset and the bank account):

  0 ≤ α_k ≤ X^x_{τ_k},   ∀ k ≥ 0,   (2.5)

together with the consumption constraint:

  ∫_{τ_k}^∞ c_u du ≤ X^x_{τ_k} − α_k,   ∀ k ≥ 0.   (2.6)
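To see the constraints (2.5)–(2.6) at work, one can simulate the discrete wealth recursion X^x_{τ_{k+1}} = X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c_u du + α_k Z_{k+1} under a simple admissible policy. The proportional investment rule and exponential consumption plan below are illustrative choices of ours, not the optimal policy of Section 4:

```python
import numpy as np

# Illustrative simulation of the wealth recursion (2.2) under the admissible
# policy alpha_k = pi * X_{tau_k} (so (2.5) holds) and the consumption plan
# c_s = kappa * exp(-kappa (s - tau_k)) * (X_{tau_k} - alpha_k), whose total
# spending never exceeds X_{tau_k} - alpha_k, so (2.6) holds as well.
# All parameter values are hypothetical, chosen only for this sketch.
b, sigma, lam, pi, kappa = 0.4, 1.0, 5.0, 0.5, 0.2
rng = np.random.default_rng(1)

x = 1.0
wealth = [x]
for _ in range(200):
    dt = rng.exponential(1.0 / lam)               # interarrival time tau_{k+1} - tau_k
    z = np.exp((b - 0.5 * sigma**2) * dt
               + sigma * np.sqrt(dt) * rng.normal()) - 1.0   # Z_{k+1} ~ p(dt, dz)
    a = pi * x                                    # amount invested at tau_k
    spent = (x - a) * (1.0 - np.exp(-kappa * dt))  # consumption over the period
    x = x - spent + a * z                          # wealth at tau_{k+1}
    wealth.append(x)

print(min(wealth) >= 0.0)   # admissibility (2.3) holds pathwise
```

Because the post-trade wealth equals (X_{τ_k} − α_k) e^{−κΔ} + α_k (1 + Z_{k+1}) and Z_{k+1} > −1, the two constraints force nonnegativity path by path, which is exactly the content of Remark 2.2.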

Remark 2.3. Embedding in a continuous-time wealth process. Let us introduce the continuous-time filtration H = (H_t)_{t≥0} = F ∨ G^c. In other words, H_t corresponds to the path observation of the asset price and of the random times up to time t. Notice that W is still a Brownian motion under H, and the dynamics of S under (P, H) is still governed by (2.1). Given x ≥ 0, and (α, c) ∈ A(x) with corresponding discrete-time wealth process (X^x_{τ_k})_{k≥0}, let us define the continuous-time process (X^x_t)_{t≥0} by

  X^x_t = X^x_{τ_k} − ∫_{τ_k}^t c_u du + α_k Z_{τ_k, t},   τ_k < t ≤ τ_{k+1}, k ≥ 0,
       = x − ∫_0^t c_u du + ∫_0^t H_u dS_u,   t ≥ 0,   (2.7)

where H is the simple integrand process

  H_t = Σ_{k=0}^∞ (α_k / S_{τ_k}) 1_{(τ_k, τ_{k+1}]}(t),   t ≥ 0,

and the no-short sale constraints (2.5)–(2.6) translate into

  0 ≤ H_t S_{τ_k} ≤ X^x_{τ_k},   τ_k < t ≤ τ_{k+1}, k ≥ 0.   (2.8)

3 The optimal investment/consumption problem

We consider a utility function U on R_+, strictly increasing and concave, with U(0) = 0, satisfying the growth condition U(x) ≤ K_1 x^γ for some positive constant K_1 and γ ∈ (0, 1). Given x ≥ 0, the objective of the investor is to maximise the expected discounted utility of consumption over the admissible policies:

  v(x) = sup_{(α,c) ∈ A(x)} E[ ∫_0^∞ e^{−ρt} U(c_t) dt ],   x ≥ 0,   (3.1)

where ρ > 0 is a positive discount factor. Actually, it is proved in [12] that for ρ large enough, namely

  ρ > bγ

(which we shall assume in the sequel), the nonnegative value function v is finite and satisfies the growth condition

  v(x) ≤ K x^γ,   x ≥ 0,   (3.2)

for some positive constant K.

Remark 3.1. Given x ≥ 0, denote by A_H(x) (resp. A_F(x)) the set of pairs of H-adapted (resp. F-adapted) processes (H, c) with c nonnegative, and corresponding wealth process given in (2.7), such that the no-short sale constraint (2.8) holds. Consider the associated continuous-time optimal investment/consumption problems

  v_H(x) = sup_{(H,c) ∈ A_H(x)} E[ ∫_0^∞ e^{−ρt} U(c_t) dt ],   x ≥ 0,

and

  v_M(x) = sup_{(H,c) ∈ A_F(x)} E[ ∫_0^∞ e^{−ρt} U(c_t) dt ],   x ≥ 0.   (3.3)

Problem (3.3) is the classical Merton portfolio/consumption choice problem under no-short sale constraints, based on the continuous-time observation of the stock price. It is not hard to check that the independent information provided by the random times τ_k does not increase the maximal expected utility of consumption; in other words, the value functions v_H and v_M coincide, and in view of Remark 2.3, we have

  v ≤ v_M (= v_H).   (3.4)

We use a dynamic programming approach for solving the control problem (3.1). The starting point is the following version of the dynamic programming principle (DPP) adapted to our context, and proved rigorously in [11]:

  v(x) = sup_{(α,c) ∈ A(x)} E[ ∫_0^{τ_1} e^{−ρt} U(c_t) dt + e^{−ρτ_1} v(X^x_{τ_1}) ].   (3.5)

From the expression (2.2) of the wealth, and the measurability conditions on the controls, the above dynamic programming relation is written as

  v(x) = sup_{(a,c) ∈ A_d(x)} E[ ∫_0^{τ_1} e^{−ρt} U(c_t) dt + e^{−ρτ_1} v( x − ∫_0^{τ_1} c_t dt + a Z_1 ) ],   (3.6)


where A_d(x) is the set of pairs (a, c) with a a deterministic constant, and c a deterministic nonnegative process s.t. (see Remark 2.2) a ∈ [0, x] and

  ∫_0^∞ c_u du ≤ x − a.   (3.7)

Given a ∈ [0, x], we denote by C_a(0, x) the set of deterministic nonnegative processes satisfying (3.7). The r.h.s. of (3.6) is then written explicitly as:

  v(x) = sup_{a ∈ [0,x], c ∈ C_a(0,x)} ∫_0^∞ e^{−(ρ+λ)t} [ U(c_t) + λ ∫ v( x − ∫_0^t c_s ds + a z ) p(t, dz) ] dt.   (3.8)

Denote by D = R_+ × X with

  X = {(x, a) ∈ R_+ × R_+ : a ≤ x},

and let us introduce the dynamic auxiliary control problem: for (t, x, a) ∈ D,

  v̂(t, x, a) = sup_{c ∈ C_a(t,x)} ∫_t^∞ e^{−(ρ+λ)(s−t)} [ U(c_s) + λ ∫ v( Y^{t,x}_s + a z ) p(s, dz) ] ds,   (3.9)

where C_a(t, x) is the set of deterministic nonnegative processes c = (c_s)_{s≥t} s.t.

  ∫_t^∞ c_u du ≤ x − a,   (3.10)

and Y^{t,x} is the deterministic process controlled by c ∈ C_a(t, x):

  Y^{t,x}_s = x − ∫_t^s c_u du,   s ≥ t.   (3.11)

From (3.8)–(3.9), the original value function is then related to this auxiliary optimisation problem by:

  v(x) = sup_{a ∈ [0,x]} v̂(0, x, a).   (3.12)

The Hamilton–Jacobi (in short HJ) equation associated to the deterministic control problem (3.9) is the following integro-partial differential equation (in short IPDE):

  (ρ + λ) v̂ − ∂v̂/∂t − Ũ( ∂v̂/∂x ) − λ ∫ v(x + a z) p(t, dz) = 0,   (t, x, a) ∈ D,   (3.13)

where Ũ is the Legendre transform of U, i.e. Ũ(y) = sup_{x≥0} [U(x) − xy]. To sum up, the dynamic programming principle for our original stochastic optimisation problem (3.1) leads to the first-order coupled IPDE (3.12)–(3.13): Problem (3.9) is a family, indexed by a ∈ R_+, of standard deterministic control problems on an infinite horizon, associated to the HJ equation (3.13), and the coupling comes from the fact that the reward function appearing in the definition of problem (3.9), or in its IPDE (3.13), depends on the value function of problem (3.12), and vice versa. The next section provides a rigorous analytic characterisation of the value functions through their dynamic programming (in short DP) equations (3.12)–(3.13), by showing the regularity properties of the value functions, and then, as a byproduct, the existence (and uniqueness) of the optimal feedback control.
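For the power utility U(x) = x^γ/γ used in the numerical part (Section 5), the Legendre transform Ũ appearing in (3.13) is explicit; the short computation below is standard and is spelled out here only for the reader's convenience:

```latex
% U(x) = x^\gamma/\gamma, \ 0 < \gamma < 1. The supremum in
% \tilde U(y) = \sup_{x \ge 0}\,[\,U(x) - x y\,] is attained at the
% first-order condition x^{\gamma-1} = y, i.e. x^* = y^{-1/(1-\gamma)}, so
\tilde U(y) = \frac{(x^*)^{\gamma}}{\gamma} - x^* y
            = \Bigl(\frac{1}{\gamma} - 1\Bigr)\, y^{-\gamma/(1-\gamma)}
            = \frac{1-\gamma}{\gamma}\, y^{-\gamma/(1-\gamma)},
\qquad y > 0.
```

Accordingly, the inverse marginal utility I = (U′)^{−1} used in Section 4 is I(y) = y^{−1/(1−γ)}, and Ũ inherits the homogeneity Ũ(a^{γ−1} y) = a^γ Ũ(y) exploited in the dimension reduction of Section 5.2.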

4 Analytic characterisation of the value functions and optimal strategies

We first recall from [12] some basic properties of the value functions (v, v̂) defined in the previous section. The value function v is strictly increasing, concave on R_+, and lies in C_+(R_+), the set of nonnegative continuous functions on R_+. The value function v̂ lies in C_+(D), the set of nonnegative continuous functions on D, and satisfies the boundary condition

  lim_{x↓a} v̂(t, x, a) = λ ∫_t^∞ e^{−(ρ+λ)(s−t)} ∫ v(a + a z) p(s, dz) ds,   ∀ t ≥ 0.   (4.1)

It satisfies the growth estimate

  v̂(t, x, a) ≤ K (e^{bt} x)^γ,   (t, x, a) ∈ D,   (4.2)

for some positive constant K. Moreover, v̂ is strictly increasing in x ≥ a, given (t, a) ∈ R_+ × R_+, and is concave in (x, a) ∈ X, given t ∈ R_+.

We now provide a characterisation of the value functions as solutions to the DP equation (3.12)–(3.13), by means of a notion of viscosity solution adapted to our context.

Definition 4.1. A pair of functions (w, ŵ) ∈ C_+(R_+) × C_+(D) is a viscosity solution to (3.12)–(3.13) if the two following properties hold simultaneously:

(i) Viscosity supersolution property: w(x) ≥ sup_{a ∈ [0,x]} ŵ(0, x, a) for all x ≥ 0, and

  (ρ + λ) ŵ(t̄, x̄, a) − ∂ϕ/∂t(t̄, x̄) − Ũ( ∂ϕ/∂x(t̄, x̄) ) − λ ∫ w(x̄ + a z) p(t̄, dz) ≥ 0,

for all a ∈ R_+, for any test function ϕ ∈ C^1(R_+ × (a, ∞)), and any (t̄, x̄) ∈ R_+ × (a, ∞) which is a local minimum of ŵ(·, ·, a) − ϕ.

(ii) Viscosity subsolution property: w(x) ≤ sup_{a ∈ [0,x]} ŵ(0, x, a) for all x ≥ 0, and

  (ρ + λ) ŵ(t̄, x̄, a) − ∂ϕ/∂t(t̄, x̄) − Ũ( ∂ϕ/∂x(t̄, x̄) ) − λ ∫ w(x̄ + a z) p(t̄, dz) ≤ 0,

for all a ∈ R_+, for any test function ϕ ∈ C^1(R_+ × (a, ∞)), and any (t̄, x̄) ∈ R_+ × (a, ∞) which is a local maximum of ŵ(·, ·, a) − ϕ.

We now reformulate the viscosity characterisation result proved in [12].


Theorem 4.2. The pair of value functions (v, v̂) defined in (3.1), (3.9) is the unique viscosity solution to (3.12)–(3.13), in the sense of Definition 4.1, satisfying the growth conditions (3.2), (4.2) and the boundary condition (4.1).

The above characterisation makes the computation of the value functions possible (see the next section) but does not yield the optimal policies in explicit form. We need to go beyond the viscosity property and focus on the regularity of the value functions. By using arguments of (semi)concavity and the strict convexity of the Hamiltonian of the IPDE, in connection with viscosity solutions, it is proved in [4] that the value functions are continuously differentiable.

Theorem 4.3. (1) The value function v lies in C^1(0, ∞), and any maximum point in (3.12) is interior for every x > 0. Moreover, v′(0^+) = ∞.
(2) For all a ∈ R_+, we have v̂(·, ·, a) ∈ C^1([0, ∞) × (a, ∞)), and

  lim_{x↓a} ∂v̂/∂x (t, x, a) = ∞,   t ≥ 0.

In particular, v̂ satisfies the IPDE (3.13) in the classical sense.

From the regularity of the value functions, we derive the existence of an optimal control through a verification theorem, and the optimal consumption strategy is produced in feedback form in terms of the classical derivatives of the value functions. We denote by I = (U′)^{−1} : (0, ∞) → (0, ∞) the inverse function of the derivative U′, and we consider for each a ∈ R_+ the nonnegative measurable function

  ĉ(t, x, a) = arg max_{c≥0} [ U(c) − c ∂v̂/∂x(t, x, a) ] = I( ∂v̂/∂x(t, x, a) ),   (t, x, a) ∈ D.

Theorem 4.4. (1) Let (x, a) ∈ X, i.e. x ≥ a ≥ 0. There exists a unique solution, denoted by Ŷ(x, a), to the equation

  Y_t = x − ∫_0^t ĉ(s, Y_s, a) ds,   t ≥ 0,   (4.3)

and the pair (Ŷ(x, a), a) lies in X, i.e. Ŷ_t(x, a) ≥ a for all t ≥ 0. The feedback control {ĉ(t, Ŷ_t(x, a), a), t ≥ 0} is optimal for v̂(0, x, a).

(2) For any x ≥ 0, there exists an optimal control policy (α*, c*) ∈ A(x) for v(x), given by

  α*_k = arg max_{a ∈ [0, X^x_{τ_k}]} v̂(0, X^x_{τ_k}, a),   k ≥ 0,   (4.4)

  c*_t = ĉ(t − τ_k, Y^{(k)}_t, α*_k),   τ_k ≤ t < τ_{k+1}, k ≥ 0,   (4.5)


where X^x_{τ_k} is the wealth of the investor at time τ_k given in (2.2) with the feedback control (α*, c*), and Y^{(k)}_t = Ŷ_{t−τ_k}(X^x_{τ_k}, α*_k), t ≥ τ_k, the solution to

  Y^{(k)}_t = X^x_{τ_k} − ∫_{τ_k}^t c*_s ds,   t ≥ τ_k,

represents the wealth between two trading dates τ_k and τ_{k+1}.

Proof. (1) Let c ∈ C_a(0, x) and Y^x_t = x − ∫_0^t c_u du the corresponding wealth process. By applying standard differential calculus to e^{−(ρ+λ)t} v̂(t, Y^x_t, a) between t = 0 and t = T, we have

  e^{−(ρ+λ)T} v̂(T, Y^x_T, a) − v̂(0, x, a)
    = ∫_0^T e^{−(ρ+λ)t} [ −(ρ + λ) v̂ + ∂v̂/∂t − c_t ∂v̂/∂x ](t, Y^x_t, a) dt
    = − ∫_0^T e^{−(ρ+λ)t} [ U(c_t) + λ ∫ v(Y^x_t + a z) p(t, dz) ] dt
      + ∫_0^T e^{−(ρ+λ)t} [ U(c_t) − c_t ∂v̂/∂x(t, Y^x_t, a) − Ũ( ∂v̂/∂x(t, Y^x_t, a) ) ] dt,   (4.6)

where we used in the last equality the property that v̂ satisfies the IPDE (3.13). From the growth estimate (4.2) for v̂, the increasing monotonicity of v̂(T, ·, a), and since ρ > bγ, we see that lim_{T→∞} e^{−(ρ+λ)T} v̂(T, Y^x_T, a) = 0, and thus by sending T to infinity in (4.6):

  v̂(0, x, a) = ∫_0^∞ e^{−(ρ+λ)t} [ U(c_t) + λ ∫ v(Y^x_t + a z) p(t, dz) ] dt
             − ∫_0^∞ e^{−(ρ+λ)t} [ U(c_t) − c_t ∂v̂/∂x(t, Y^x_t, a) − Ũ( ∂v̂/∂x(t, Y^x_t, a) ) ] dt.   (4.7)

The existence and uniqueness of a solution Ŷ(x, a) to (4.3), satisfying Ŷ_t(x, a) ≥ a for all t ≥ 0, is proved in [4]. The wealth process Ŷ(x, a) is associated to the admissible control ĉ_t = ĉ(t, Ŷ_t(x, a), a), t ≥ 0, and by definition of the function ĉ, we have

  Ũ( ∂v̂/∂x(t, Ŷ_t(x, a), a) ) = U(ĉ_t) − ĉ_t ∂v̂/∂x(t, Ŷ_t(x, a), a),   t ≥ 0,

so that from (4.7)

  v̂(0, x, a) = ∫_0^∞ e^{−(ρ+λ)t} [ U(ĉ_t) + λ ∫ v( Ŷ_t(x, a) + a z ) p(t, dz) ] dt,   (4.8)

which shows the optimality of the control ĉ.

(2) We first show that for any (α, c) ∈ A(x), and k ≥ 0,

  E[ ∫_{τ_k}^{τ_{k+1}} e^{−ρ(t−τ_k)} U(c_t) dt + e^{−ρ(τ_{k+1}−τ_k)} v(X^x_{τ_{k+1}}) | G_{τ_k} ]
    = ∫_{τ_k}^∞ e^{−(ρ+λ)(t−τ_k)} [ U(c_t) + λ ∫ v( X^x_{τ_k} − ∫_{τ_k}^t c_u du + α_k z ) p(t − τ_k, dz) ] dt.   (4.9)

Indeed, since X^x_{τ_{k+1}} = X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c_u du + α_k Z_{k+1}, we have by the law of iterated conditional expectations:

  E[ ∫_{τ_k}^{τ_{k+1}} e^{−ρ(t−τ_k)} U(c_t) dt + e^{−ρ(τ_{k+1}−τ_k)} v(X^x_{τ_{k+1}}) | G_{τ_k} ]
    = E[ ∫_{τ_k}^{τ_{k+1}} e^{−ρ(t−τ_k)} U(c_t) dt
        + e^{−ρ(τ_{k+1}−τ_k)} E[ v( X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c_u du + α_k Z_{k+1} ) | G_{τ_k}, τ_{k+1} − τ_k ] | G_{τ_k} ]
    = E[ ∫_{τ_k}^{τ_{k+1}} e^{−ρ(t−τ_k)} U(c_t) dt
        + e^{−ρ(τ_{k+1}−τ_k)} ∫ v( X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c_u du + α_k z ) p(τ_{k+1} − τ_k, dz) | G_{τ_k} ]
    = ∫_0^∞ λ e^{−λs} [ ∫_{τ_k}^{τ_k+s} e^{−ρ(t−τ_k)} U(c_t) dt
        + e^{−ρs} ∫ v( X^x_{τ_k} − ∫_{τ_k}^{τ_k+s} c_u du + α_k z ) p(s, dz) ] ds,

where we used Remark 2.1 in the second equality, and in the last one the fact that τ_{k+1} − τ_k follows an exponential law with parameter λ. We then obtain (4.9) by Fubini's theorem and the change of variable s → s + τ_k.

We next prove that for any (α, c) ∈ A(x), and n ≥ 0,

  E[ e^{−ρτ_n} (X^x_{τ_n})^γ ] ≤ x^γ δ^n,   with   δ = λ / (ρ − bγ + λ) < 1.   (4.10)

Indeed, from Jensen's inequality,

  E[ (X^x_{τ_{n−1}} + α_{n−1} Z_n)^γ | G_{τ_{n−1}}, τ_n − τ_{n−1} ]
    ≤ ( X^x_{τ_{n−1}} + α_{n−1} ∫ z p(τ_n − τ_{n−1}, dz) )^γ
    ≤ ( X^x_{τ_{n−1}} + X^x_{τ_{n−1}} (e^{b(τ_n − τ_{n−1})} − 1) )^γ
    = (X^x_{τ_{n−1}})^γ e^{bγ(τ_n − τ_{n−1})},

where we used (2.4) and (2.5). Thus, by writing that X^x_{τ_n} ≤ X^x_{τ_{n−1}} + α_{n−1} Z_n, and by the law of iterated conditional expectations, we get:

  E[ e^{−ρτ_n} (X^x_{τ_n})^γ ] ≤ E[ e^{−(ρ−bγ)(τ_n − τ_{n−1})} e^{−ρτ_{n−1}} (X^x_{τ_{n−1}})^γ ]
    = E[ e^{−ρτ_{n−1}} (X^x_{τ_{n−1}})^γ ] ∫_0^∞ λ e^{−(ρ−bγ+λ)t} dt
    = δ E[ e^{−ρτ_{n−1}} (X^x_{τ_{n−1}})^γ ].

We obtain the required inequality (4.10) by induction on n.


Consider the control policy (α*, c*) in (4.4)–(4.5). By definition of Ŷ in (4.3), the associated wealth process satisfies, for all k ≥ 0,

  X^x_{τ_{k+1}} = X^x_{τ_k} − ∫_{τ_k}^{τ_{k+1}} c*_t dt + α*_k Z_{k+1}
              = Ŷ_{τ_{k+1}−τ_k}(X^x_{τ_k}, α*_k) + α*_k Z_{k+1} ≥ α*_k + α*_k Z_{k+1} ≥ 0,  a.s.,

and thus (α*, c*) ∈ A(x). From (3.12), the definition of α*, and (4.8), we have

  v(X^x_{τ_k}) = v̂(0, X^x_{τ_k}, α*_k)
             = ∫_{τ_k}^∞ e^{−(ρ+λ)(t−τ_k)} [ U(c*_t) + λ ∫ v( Y^{(k)}_t + α*_k z ) p(t − τ_k, dz) ] dt
             = E[ ∫_{τ_k}^{τ_{k+1}} e^{−ρ(t−τ_k)} U(c*_t) dt + e^{−ρ(τ_{k+1}−τ_k)} v(X^x_{τ_{k+1}}) | G_{τ_k} ],

where we used (4.9) in the last equality. By iterating these relations for all k, and using the law of iterated conditional expectations, we obtain

  v(x) = E[ ∫_0^{τ_n} e^{−ρt} U(c*_t) dt + e^{−ρτ_n} v(X^x_{τ_n}) ].

From the growth estimate (3.2), relation (4.10), and sending n to infinity, we conclude that

  v(x) = E[ ∫_0^∞ e^{−ρt} U(c*_t) dt ].  □

Furthermore, by using the maximum principle, additional properties of the consumption policy between two trading dates are derived in [4], as the solution of an Euler–Lagrange ordinary differential equation.

Proposition 4.5. Suppose that U ∈ C^2(0, ∞). Given an investment a ∈ R_+ at time t in the stock, and starting from an initial capital Y_t(t, x, a) = x ≥ a, the optimal wealth process Ŷ(t, x, a) between two trading dates is twice differentiable and satisfies the second-order ordinary differential equation

  d²Ŷ_s(t, x, a)/ds² = [ λ ∫ v′( Ŷ_s(t, x, a) + a z ) p(s, dz) − (ρ + λ) U′(c_s) ] / U″(c_s),   s ≥ t,

  c_s = − dŶ_s(t, x, a)/ds,

and we have lim_{s→∞} Ŷ_s(t, x, a) = a.

5 Numerical solution and illustrations

In this section, we focus on the resolution of the DP equation (3.12)–(3.13), and we give some numerical tests for illustrating the impact of liquidity risk induced by the random trading times.

5.1 A numerical decoupling algorithm

The main difficulty in the numerical resolution of the IPDE (3.13) for v̂ comes from the coupling in the integral term, which involves v̂ via v. To overcome this problem, we suggest the following iterative procedure. We start from an initial function v_0 defined on R_+ as the value function of the consumption problem without trading:

  v_0(x) = sup_{c ∈ C(x)} ∫_0^∞ e^{−ρt} U(c_t) dt,

where C(x) is the set of nonnegative (deterministic) processes c = (c_t)_t s.t. x − ∫_0^t c_s ds ≥ 0 for all t ≥ 0. v_0 is the unique solution with linear growth condition to the first-order differential equation

  ρ v_0 − Ũ( ∂v_0/∂x ) = 0,   x > 0,

together with the boundary condition v_0(0^+) = 0. We then construct by induction a sequence of functions (v̂_n)_{n≥1} defined on D and (v_n)_{n≥0} defined on R_+ by:

  v̂_{n+1}(t, x, a) = sup_{c ∈ C_a(t,x)} ∫_t^∞ e^{−(ρ+λ)(s−t)} [ U(c_s) + λ ∫ v_n( Y^{t,x}_s + a z ) p(s, dz) ] ds,

  v_{n+1}(x) = sup_{a ∈ [0,x]} v̂_{n+1}(0, x, a),   n ≥ 0.   (5.1)

By the dynamic programming principle, the function v̂_{n+1} satisfies the first-order PDE

  −(ρ + λ) v̂_{n+1} + ∂v̂_{n+1}/∂t + Ũ( ∂v̂_{n+1}/∂x ) + λ ∫ v_n(x + a z) p(t, dz) = 0,   (t, x, a) ∈ D,

and we have an approximate trading policy by taking:

  α^{(n)}_k = arg max_{a ∈ [0, X^x_{τ_k}]} v̂_n(0, X^x_{τ_k}, a),   k ≥ 0.

The convergence of this iterative decoupling algorithm was studied in [11], where it is proved that the sequence of functions (v_n, v̂_n)_n converges uniformly on any compact subsets of D and R_+ to (v, v̂). More precisely, for any compact subsets F of D and G of R_+, there exist some positive constants C_F and C_G s.t.

  0 ≤ sup_F (v̂ − v̂_n) ≤ C_F δ^n,   and   0 ≤ sup_G (v − v_n) ≤ C_G δ^n,

where 0 < δ < 1 is defined in (4.10).

5.2 Numerical illustrations

We now provide simulations illustrating the impact of the liquidity constraints on the attainable utility level and on the investment strategy. We shall compare our numerical experiments with the original Merton problem with no-short sale constraints, defined in (3.3). We consider the case of power utility functions U(x) = x^γ/γ, and we recall that the value function and the optimal trading strategy (in amount) are explicitly given by

  v_M(x) = K_M x^γ,   ᾱ^M_t = min( b / ((1−γ)σ²), 1 ) X^x_t,

with

  K_M = (1/γ) ( (1−γ)/(ρ − η) )^{1−γ},   η = γ max_{π ∈ [0,1]} [ πb − (1/2) π² (1−γ) σ² ].

We know from (3.4) that v ≤ v_M. On the other hand, the value function v is always bounded from below by the value function v_0 of the consumption problem without trading, given in our present setting by v_0(x) = K_0 x^γ, with K_0 = (1/γ) ( (1−γ)/ρ )^{1−γ}.

Given (t, x, a) ∈ D, notice that for any β > 0, we have c ∈ C_a(t, x) if and only if βc ∈ C_{βa}(t, βx). We then easily deduce from (3.9) and (3.12) a scaling relation for the value function v and the auxiliary value function v̂:

  v̂(t, βx, βa) = β^γ v̂(t, x, a),   v(βx) = β^γ v(x),   ∀ β > 0.

The scaling relation for v shows that it is of power type, v(x) = v(1) x^γ, hence of the same form as in the Merton model (see Figure 5.1). The scaling relation for v̂ implies that for all β > 0,

  â ∈ arg max_{a ∈ [0,x]} v̂(0, x, a)   if and only if   βâ ∈ arg max_{a ∈ [0,βx]} v̂(0, βx, a).

From the feedback form (4.4), this shows that α*_k is linear in X^x_{τ_k}; in other words, the optimal investment strategy consists in investing a fixed proportion of the wealth into the risky asset. Moreover, we can reduce the dimension of the problem by setting

  v(x) = ϑ_1 x^γ,   v̂(t, x, a) = a^γ v̄(t, ξ),   ξ = x/a,

where ϑ_1 and v̄ are solutions to the reduced system

  (ρ + λ) v̄ − ∂v̄/∂t − Ũ( ∂v̄/∂ξ ) − λ ϑ_1 ∫ (ξ + z)^γ p(t, dz) = 0,
  ϑ_1 = sup_{ξ ≥ 1} ξ^{−γ} v̄(0, ξ).

In the sequel, for the numerical experiments, we consider a power utility function with γ = 0.5. We choose parameters for which b/((1−γ)σ²) < 1, and such that K_M is substantially different from K_0. These two requirements on the model parameters


[Figure 5.1 appears here: two panels plotted against wealth x ∈ [0, 5]. Left: the value function in an illiquid market for λ = 1, 5, 40, together with v_0 and the Merton value function. Right: the optimal investment policy for the same values of λ, together with the Merton policy.]

Figure 5.1. Behavior of the value function in an illiquid market (left) and of the optimal investment policy (right) for different values of the Poisson parameter λ.

correspond to a high-risk return market, where the economic agent can considerably increase her utility with relatively little investment. In addition, the discount factor ρ must satisfy ρ > bγ. To satisfy all these conditions, we take b = 0.4, σ = 1 and ρ = 0.2, yielding K_0 = 3.16, K_M = 4.08 and b/((1−γ)σ²) = 0.8. The intensity λ is a free parameter that can be changed to adjust the “illiquidity” of the market.

A first series of tests computed in [11] studied the performance of the decoupling algorithm in a strongly illiquid market (λ = 1). In Figure 5.2, the left graph shows the form of the value function, and the right graph that of the optimal investment strategy, obtained at different iterations of the numerical decoupling algorithm. As expected, the limiting value function lies between the solution v_0 corresponding to the model without trading and the value function v_M of the Merton problem.
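The constants quoted above can be reproduced directly from the closed-form expressions of this section (a sketch of ours; the grid maximisation merely implements the definition of η):

```python
import numpy as np

# Reproduce the constants for gamma = 0.5, b = 0.4, sigma = 1, rho = 0.2.
gamma, b, sigma, rho = 0.5, 0.4, 1.0, 0.2

# K0 = (1/gamma) ((1-gamma)/rho)^(1-gamma)
K0 = (1.0 / gamma) * ((1.0 - gamma) / rho) ** (1.0 - gamma)

# eta = gamma * max_{pi in [0,1]} [pi b - 0.5 pi^2 (1-gamma) sigma^2]
pi_grid = np.linspace(0.0, 1.0, 100_001)
eta = gamma * np.max(pi_grid * b - 0.5 * pi_grid**2 * (1.0 - gamma) * sigma**2)
K_M = (1.0 / gamma) * ((1.0 - gamma) / (rho - eta)) ** (1.0 - gamma)

# Merton proportion min(b/((1-gamma) sigma^2), 1)
merton_prop = min(b / ((1.0 - gamma) * sigma**2), 1.0)

print(round(K0, 2), round(K_M, 2), round(merton_prop, 2))   # 3.16 4.08 0.8
```

Here the maximiser π* = b/((1−γ)σ²) = 0.8 is interior, so η = γ b²/(2(1−γ)σ²) = 0.08 and K_M = 2 (0.5/0.12)^{0.5} ≈ 4.08, while K_0 = 2 (0.5/0.2)^{0.5} ≈ 3.16.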

[Figure 5.2 appears here: two panels plotted against wealth x ∈ [0, 5]. Left: the value function after 4, 10 and 50 iterations, together with v_0 and the Merton value function. Right: the optimal investment policy after 1, 5 and 50 iterations, together with the Merton policy.]

Figure 5.2. Left: Convergence of the iterative algorithm for computing the value function in an illiquid market with λ = 1. Right: Convergence of the iterative algorithm for computing the optimal investment policy (the amount to invest in stock as a function of the total wealth at the trading date).


In the second experiment, we vary the Poisson parameter λ governing the trading frequency, to study the convergence of the illiquid market problem to the Merton portfolio problem. Figure 5.1 presents the behaviour of the value function v(x) and of the associated optimal trading strategies. From these graphs we observe, empirically, that (i) for a fixed value of x, both the value function and the optimal investment policy are increasing in λ, and (ii) as λ → ∞, the value function and the optimal investment policy seem to converge to the corresponding functions in the Merton portfolio problem.

Next, we would like to quantify the utility loss due to the liquidity constraints. Following the utility-indifference pricing approach introduced in [7], we define the utility loss in monetary terms (which can also be called the cost of liquidity) as the extra amount of initial wealth π(x) needed to reach the same level of expected utility as an investor without trading restrictions and initial capital x. This cost of liquidity is thus computed as the solution to v(x + π(x)) = v_M(x). In our setting (power utility), the cost of liquidity π(x) is proportional to x, so we study the cost of liquidity per unit of initial wealth, π(1). Table 5.1 reports the values of π(1) for different values of the Poisson parameter λ. As expected, the cost of liquidity decreases to zero as λ → ∞.

  λ       0 (no trading)   1        5        40
  π(1)    0.6671           0.2749   0.1214   0.0539

Table 5.1. Cost of liquidity π(1) as a function of the parameter λ.
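The first entry of Table 5.1 can be cross-checked in closed form, since for λ = 0 no trade ever occurs and v coincides with v_0 (the remaining entries require the numerical scheme of Section 5.1). Solving v_0(1 + π(1)) = v_M(1) gives π(1) = (K_M/K_0)^{1/γ} − 1:

```python
# Cross-check of Table 5.1 for lambda = 0 (no trading), where v = v0.
# Parameters from the text: gamma = 0.5, b = 0.4, sigma = 1, rho = 0.2.
gamma, b, sigma, rho = 0.5, 0.4, 1.0, 0.2

K0 = (1.0 / gamma) * ((1.0 - gamma) / rho) ** (1.0 - gamma)
# The maximiser pi* = b/((1-gamma) sigma^2) = 0.8 is interior, so
# eta = gamma * b^2 / (2 (1-gamma) sigma^2).
eta = gamma * b**2 / (2.0 * (1.0 - gamma) * sigma**2)
K_M = (1.0 / gamma) * ((1.0 - gamma) / (rho - eta)) ** (1.0 - gamma)

pi_1 = (K_M / K0) ** (1.0 / gamma) - 1.0
print(round(pi_1, 3))   # 0.667, consistent with the 0.6671 reported for lambda = 0
```

With γ = 0.5 the exponent (1−γ)/γ equals 1, so π(1) = ρ/(ρ−η) − 1 = 0.2/0.12 − 1 = 2/3, in agreement with the table up to the numerical accuracy of [11].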

Bibliography

[1] Almgren R. and N. Chriss (2001): Optimal execution of portfolio transactions, Journal of Risk, 3, 5–39.
[2] Bank P. and D. Baum (2004): Hedging and portfolio optimisation in illiquid financial markets with a large trader, Mathematical Finance, 14, 1–18.
[3] Cetin U., Jarrow R. and P. Protter (2004): Liquidity risk and arbitrage pricing theory, Finance and Stochastics, 8, 311–341.
[4] Cretarola A., Gozzi F., Pham H. and P. Tankov (2008): Optimal consumption policies in illiquid markets, to appear in Finance and Stochastics.
[5] Cvitanic J., Liptser R. and B. Rozovskii (2006): A filtering approach to tracking volatility from prices observed at random times, Annals of Applied Probability, 16, 1633–1652.
[6] Frey R. and W. Runggaldier (2001): A nonlinear filtering approach to volatility estimation with a view towards high frequency data, International Journal of Theoretical and Applied Finance, 4, 199–210.
[7] Hodges S. and A. Neuberger (1989): Optimal replication of contingent claims under transaction costs, Review of Futures Markets, 8, 222–239.
[8] Longstaff F. (2005): Asset pricing in markets with illiquid assets, Preprint, UCLA.


[9] Matsumoto K. (2006): Optimal portfolio of low liquid assets with a log-utility function, Finance and Stochastics, 10, 121–145.
[10] Merton R. (1971): Optimum consumption and portfolio rules in a continuous-time model, Journal of Economic Theory, 3, 373–413.
[11] Pham H. and P. Tankov (2008): A model of optimal consumption under liquidity risk with random trading times, Mathematical Finance, 18 (4), 613–627.
[12] Pham H. and P. Tankov (2007): A coupled system of integrodifferential equations arising in liquidity risk model, Applied Mathematics and Optimization, 59 (2), 147–173.
[13] Rogers L. C. G. and O. Zane (2002): A simple model of liquidity effects, in Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann (K. Sandmann and P. Schönbucher, eds.), pp. 161–176.
[14] Schied A. and T. Schöneborn (2009): Risk aversion and the dynamics of optimal liquidation strategies in illiquid markets, Finance and Stochastics, 13, 181–204.
[15] Schwartz E. and C. Tebaldi (2004): Illiquid assets and optimal portfolio choice, Preprint, UCLA.

Author information

Huyên Pham, Laboratoire de Probabilités et Modèles Aléatoires, CNRS, UMR 7599, Université Paris 7, CREST and Institut Universitaire de France, France. Email: [email protected]

Radon Series Comp. Appl. Math. 8, 427–453 © de Gruyter 2009

Optimal asset allocation in a stochastic factor model – an overview and open problems

Thaleia Zariphopoulou

Abstract. This paper provides an overview of the optimal investment problem in a market in which the dynamics of the risky security are affected by a correlated stochastic factor. The performance of investment strategies is measured using two criteria. The first criterion is the traditional one, formulated in terms of expected utility from terminal wealth, while the second is based on the recently developed forward investment performance approach.

Key words. Excess hedging demand, forward performance process, Hamilton–Jacobi–Bellman equation, market participation puzzle, myopic portfolio, utility maximisation.

AMS classification. 91B16, 91B28.

1 Introduction

The aim herein is to present an overview of results and open problems arising in optimal investment models in which the dynamics of the underlying stock depend on a correlated stochastic factor. Stochastic factors have been used in a number of academic papers to model the time-varying predictability of stock returns, the volatility of stocks, as well as stochastic interest rates (see, for example, [1], [15], [42] and other references discussed in the next section). The performance of the investment decisions is, typically, measured via an expected utility criterion, which is often formulated over a finite trading horizon. From the technical point of view, a stochastic factor model is the simplest and most direct extension of the celebrated Merton model ([66] and [67]), in which stock dynamics are taken to be lognormal. However, as discussed herein, very little is known about the maximal expected utility as well as the form and properties of the optimal policies once the lognormality assumption is relaxed and correlation between the stock and the factor is introduced. This is despite the Markovian nature of the problem at hand, the advances in the theories of fully nonlinear PDEs and stochastic control, and the computational tools that exist today. Specifically, results on the validity of the Dynamic Programming Principle, regularity of the value function, existence and verification of optimal feedback controls, representation of the value function and numerical approximations are still lacking. The only cases that have been extensively analysed are the ones of special utilities, namely, the exponential, power and logarithmic.

The author would like to thank the organisers of the special semester on “Stochastics with emphasis on Finance” at RICAM for their hospitality. She would also like to thank S. Malamud, H. Pham, N. Touzi and G. Zitkovic for their fruitful comments. Special thanks go to M. Sirbu for his help and suggestions. This work was partially supported by the National Science Foundation (NSF grants: DMS-FRG-0456118 and DMS-RTG-0636586).

In these


T. Zariphopoulou

cases, convenient scaling properties reduce the associated Hamilton–Jacobi–Bellman (HJB) equation to a quasilinear one. The analysis, then, simplifies considerably from both the analytic and the probabilistic points of view. The lack of rigorous results for the value function when the utility function is general limits our understanding of the optimal policies. Informally speaking, the first-order conditions in the HJB equation yield that the optimal feedback portfolio consists of two components. The first is the so-called myopic portfolio and has the same functional form as the one in the classical Merton problem. The second component, usually referred to as the excess hedging demand, is generated by the stochastic factor. Conceptually, very little is understood about this term. In addition, the sum of the two components may become zero, which implies that it is optimal for a risk-averse investor not to invest in a risky asset with a positive risk premium. A satisfactory explanation for this counterintuitive phenomenon – related to the so-called market participation puzzle – is also lacking. Besides these difficulties, there are other issues that limit the development of an optimal investment theory in complex market environments. One of them is the “static” choice of the utility function at the specific investment horizon. Indeed, once the utility function is chosen, no revision of risk preferences is possible at any earlier trading time. In addition, once the horizon is chosen, no investment performance criteria can be formulated for horizons longer than the initial one. These limitations have been partly addressed by allowing infinite horizon, long-term growth criteria, random horizon, recursivity and others. Herein, we discuss a new approach that complements the existing ones. The alternative criterion has the same fundamental requirements as the classical value function process but allows for both revision of preferences and arbitrary trading horizons.
It is given by a stochastic process, called the forward investment performance, defined for all times. A stochastic partial differential equation emerges which is the “forward” analogue of the HJB equation. The key new element is the performance volatility process which, in contrast to the classical formulation, is not a priori given. The special case of zero volatility deserves special attention as it yields useful insights for the optimal portfolios. It turns out that for this class of risk preferences, the non-myopic component always disappears, independently of the dynamics of the stochastic factor. This result might give an answer to the market participation puzzle mentioned earlier. In addition, closed-form solutions can be found for the performance process as well as the associated optimal wealth and portfolio processes for general preferences and arbitrary factor dynamics. Two classes of non-zero volatility processes and their associated optimal portfolios are, also, discussed. While from the technical point of view these cases reduce to the zero-volatility case, they provide useful results on the structure of optimal investments when the investor has alternative views for the upcoming market movements or wishes to measure performance in reference to a different numeraire/benchmark. We finish this section by mentioning that there is a very rich body of research for the analysis of the classical expected utility models based on duality techniques. This powerful approach is applicable to general market models and yields elegant results for the value function and the optimal wealth. The optimal portfolios can then be characterised via martingale representation results for the optimal wealth process (see,

Optimal asset allocation in a stochastic factor model


among others, [48], [57], [58], [80] and [81]). However, little can be said about the structure and properties of the optimal investments. Because of their volume as well as their different nature and focus, these results are not discussed herein. The paper is organised as follows. In section 2, we present the market model. In section 3, we discuss the existing results in the classical (backward) formulation. We present some examples and state some open problems. In section 4, we present the alternative (forward) investment performance criterion and analyse, in some detail, the zero-volatility case. We also present the non-zero volatility cases, concrete examples and some open problems.

2 The model

The market consists of a risky and a riskless asset. The risky asset is a stock whose price S_t, t ≥ 0, is modelled as a diffusion process solving

  dS_t = μ(Y_t) S_t dt + σ(Y_t) S_t dW_t^1,    (2.1)

with S_0 > 0. The stochastic factor Y_t, t ≥ 0, satisfies

  dY_t = b(Y_t) dt + d(Y_t) ( ρ dW_t^1 + √(1 − ρ²) dW_t^2 ),    (2.2)

with Y_0 = y, y ∈ R. The process W_t = (W_t^1, W_t^2), t ≥ 0, is a standard 2-dimensional Brownian motion, defined on a filtered probability space (Ω, F, P). The underlying filtration is F_t = σ(W_s : 0 ≤ s ≤ t). It is assumed that ρ ∈ (−1, 1). The market coefficients f = μ, σ, b and d satisfy the global Lipschitz and linear growth conditions

  |f(y) − f(ȳ)| ≤ K |y − ȳ|  and  f²(y) ≤ K (1 + y²),    (2.3)

for y, ȳ ∈ R. Moreover, it is assumed that the non-degeneracy condition σ(y) ≥ l > 0, y ∈ R, holds. The riskless asset, the savings account, offers constant interest rate r > 0. We introduce the process

  λ(Y_t) = (μ(Y_t) − r) / σ(Y_t).    (2.4)

We will occasionally refer to it as the market price of risk. Starting with an initial endowment x, the investor invests at future times in the riskless and risky assets. The present values of the amounts allocated in the two accounts are denoted, respectively, by π_t^0 and π_t. The present value of her investment is, then, given by X_t^π = π_t^0 + π_t, t > 0. We will refer to X_t^π as the discounted wealth. Using (2.1) we easily deduce that it satisfies

  dX_t^π = σ(Y_t) π_t ( λ(Y_t) dt + dW_t^1 ).    (2.5)

The investment strategies will play the role of control processes and are taken to satisfy the standard assumption of being self-financing. Such a portfolio, πt , is deemed


admissible if, for t > 0, π_t ∈ F_t, E_P ∫_0^t σ²(Y_s) π_s² ds < ∞, and the associated discounted wealth satisfies the state constraint X_t^π ∈ D, t ≥ 0, for some acceptability domain D ⊆ R. We will denote the set of admissible strategies by A. The form of the spatial domain D and the consequences of this choice for the structure of the optimal portfolios are subjects of independent interest and will not be discussed herein. Frequently, portfolio constraints are also present, which complicate the analysis further. For the model at hand, we will not allow for such generality, as the focus is mainly on the choice and impact of risk preferences on investment decisions. To ease the notation, however, we will carry along the D-notation and make it more specific when appropriate. Stochastic factors have been used in portfolio choice to model asset predictability and stochastic volatility. The predictability of stock returns was first discussed in [34], [35] and [38]; see also [13], [14], [17] and [18]. More complex models were analysed in [1] and [12]. The role of stochastic volatility in investment decisions was studied in [3], [22], [38], [39], [42], [76], [84] and others. Models that combine predictability and stochastic volatility, as the one herein, were analysed, among others, in [51], [56], [64], [77] and [93]. In a different modelling direction, stochastic factors have been incorporated in asset allocation models with stochastic interest rates (see, for example, [15], [16], [19], [24], [25], [28], [29], [79] and [89]). From the technical point of view, the analysis is not much different as long as the model remains Markovian. However, various technically interesting questions arise (see, for example, [54], [56] and [87]).
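As a concrete illustration of the dynamics (2.1)–(2.2) and the wealth equation (2.5), the system can be simulated with a simple Euler–Maruyama scheme. The coefficient functions and the constant-proportion control below are hypothetical placeholders chosen only to make the sketch runnable; they are not taken from the text.

```python
import numpy as np

# Hypothetical coefficients (not the paper's): a mean-reverting factor Y
# tilting the stock's drift and volatility; sigma stays bounded below.
mu = lambda y: 0.05 + 0.10 * np.tanh(y)      # drift mu(y)
sigma = lambda y: 0.20 + 0.05 * np.tanh(y)   # volatility sigma(y) >= l > 0
b = lambda y: -0.5 * y                       # factor drift b(y)
d = lambda y: 0.3                            # factor volatility d(y)
r, rho = 0.0, -0.5

def simulate(pi_rule, x0=1.0, y0=0.0, T=1.0, n=1000, seed=0):
    """Euler-Maruyama steps for the factor (2.2) and discounted wealth (2.5)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x, y = x0, y0
    for _ in range(n):
        dw1, dw2 = rng.normal(scale=np.sqrt(dt), size=2)
        lam = (mu(y) - r) / sigma(y)         # market price of risk (2.4)
        pi = pi_rule(x, y)                   # amount invested in the stock
        x += sigma(y) * pi * (lam * dt + dw1)                            # (2.5)
        y += b(y) * dt + d(y) * (rho * dw1 + np.sqrt(1 - rho**2) * dw2)  # (2.2)
    return x, y

# a constant-proportion rule as a placeholder admissible strategy
xT, yT = simulate(lambda x, y: 0.5 * x)
print(xT, yT)
```

Note that the stock price itself need not be simulated: the wealth dynamics (2.5) depend on S only through μ(Y) and σ(Y).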

3 The backward formulation

The traditional criterion for optimal portfolio choice has been based on maximal expected utility¹ (see, for example, [66] and [67]). The key ingredients are the choices of the trading horizon, [0, T], and the investor's utility, u_T, at terminal time T. The utility function reflects the risk attitude of the investor at time T and is an increasing and concave function of his wealth². It is important to observe that once these choices are made, the risk preferences cannot be revised. In addition, no investment decisions can be assessed for times beyond T. The objective is to maximise the expected utility of terminal wealth over the set of admissible strategies. The solution, known as the value function, is defined as

  V(x, y, t; T) = sup_A E_P( u_T(X_T) | X_t = x, Y_t = y ),    (3.1)

for (x, y, t) ∈ D × R × [0, T] and A being the set of admissible strategies. For conditions on the asymptotic behaviour of u_T in infinite and semi-infinite domains see [80] and [81].

¹ See, for example, the review article [96].
² The quadratic utility represents an exception, as it is not globally increasing. This utility, albeit popular for tractability reasons, yields non-intuitive optimal portfolios and is not discussed herein.


As the solution of a stochastic optimisation problem, the value function is expected to satisfy the Dynamic Programming Principle (DPP), namely,

  V(x, y, t; T) = sup_A E_P( V(X_s, Y_s, s; T) | X_t = x, Y_t = y ),    (3.2)

for t ≤ s ≤ T. This is a fundamental result in optimal control and has been proved for a wide class of optimisation problems. For a detailed discussion on the validity (and strongest forms) of the DPP in problems with controlled diffusions, we refer the reader to [37] (see, also, [8], [32], [60] and [62]). Key issues are the measurability and continuity of the value function process as well as the compactness of the set of admissible controls. It is worth mentioning that a proof specific to the problem at hand has not been produced to date. Recently, a weak version of the DPP was proposed in [11], where conditions related to measurable selection and boundedness of controls are relaxed. Besides its technical challenges, the DPP exhibits two important properties of the value function process. Specifically, V(x, Y_s, s; T), s ∈ [t, T], is a supermartingale for an arbitrary investment strategy and becomes a martingale at an optimum (provided certain integrability conditions hold). One may, then, view V(x, Y_s, s; T) as the intermediate (indirect) utility in the relevant market environment. It is worth noticing, however, that the notions of utility and risk aversion for times t ∈ [0, T) are tightly connected to the investment opportunities the investor has in the specific market. Observe that the DPP yields a backward-in-time algorithm for the computation of the maximal utility, starting at expiration with u_T and using the martingality property to compute the solution for earlier times. For this reason, we refer to this formulation of the optimal portfolio choice problem as backward. The Markovian assumptions on the stock price and stochastic factor dynamics allow us to study the value function via the associated HJB equation, stated in (3.3) below. Fundamental results in the theory of controlled diffusions yield that if the value function is smooth enough, then it satisfies the HJB equation.
Moreover, optimal policies may be constructed in feedback form from the first-order conditions in the HJB equation, provided that the candidate feedback process is admissible and the wealth SDE has a strong solution when the candidate control is used. The latter usually requires further regularity of the value function. In the reverse direction, a smooth solution of the HJB equation that satisfies the appropriate terminal and boundary (or growth) conditions may be identified with the value function, provided the solution is unique in the appropriate sense. These results are usually known as the “verification theorem” and we refer the reader to [37], [60] and [92] for a general exposition on the subject. In maximal expected utility problems, it is rarely the case that the arguments in either direction of the verification theorem can be established. Indeed, it is very difficult to show a priori regularity of the value function, the main difficulties coming from the lack of global Lipschitz regularity of the coefficients of the controlled process with respect to the controls and from the non-compactness of the set of admissible policies. It is, also, very difficult to establish existence, uniqueness and regularity of the solutions to the HJB equation. This is caused primarily by the presence of the control policy in the volatility of the controlled wealth process, which makes the classical global Lipschitz assumptions on the nonlinearities of the equation fail.


Additional difficulties come from state constraints and the non-compactness of the admissible set. To our knowledge, regularity results for the value function (3.1) for general utility functions have not been obtained to date, except for the special cases of homothetic preferences (see, for example, [36], [56], [68], [77] and [93]). The most general result in this direction, and in a much more general market model, was recently obtained in [59], where it is shown that the value function is twice differentiable in the spatial argument, but without establishing its continuity. Because of the lack of general rigorous results, we proceed with an informal discussion of the optimal feedback policies. For the model at hand, the associated HJB equation turns out to be

  V_t + max_π [ ½ σ²(y) π² V_xx + π ( (μ(y) − r) V_x + ρ σ(y) d(y) V_xy ) ] + ½ d²(y) V_yy + b(y) V_y = 0,    (3.3)

with V(x, y, T; T) = u_T(x), (x, y, t) ∈ D × R × [0, T]. The verification results would yield that, under appropriate regularity and growth conditions, the feedback policy

  π_s* = π*(X_s*, Y_s, s; T),  t ≤ s ≤ T,

with π*: D × R × [0, T] given by

  π*(x, y, t; T) = − (λ(y)/σ(y)) (V_x(x, y, t; T)/V_xx(x, y, t; T)) − ρ (d(y)/σ(y)) (V_xy(x, y, t; T)/V_xx(x, y, t; T))    (3.4)

and X_s*, t ≤ s ≤ T, solving

  dX_s* = σ(Y_s) π*(X_s*, Y_s, s; T) ( λ(Y_s) ds + dW_s^1 ),    (3.5)

is admissible and optimal. Some answers to the questions related to the characterisation of the solutions to the HJB equation may be given if one relaxes the requirement of classical solutions. An appropriate class of weak solutions turns out to be the so-called viscosity solutions ([26], [62], [63] and [88]). The analysis and characterisation of the value function in the viscosity sense has been carried out for the special cases of power and exponential utility (see, for example, [93]). However, proving that the value function is the unique viscosity solution of (3.3) has not been addressed. A key property of viscosity solutions is their robustness (see [63]). If the HJB equation has a unique viscosity solution (in the appropriate class), robustness is used to establish convergence of numerical schemes for the value function and the optimal feedback laws. Such numerical studies have been carried out successfully for a number of applications. However, for the model at hand, no such studies are available. Numerical results using Monte Carlo techniques have been obtained in [30] for a model more general than the one herein.
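The first-order condition behind (3.4) is easy to sanity-check numerically: maximising the (strictly concave, since V_xx < 0) quadratic Hamiltonian of (3.3) over π on a fine grid must reproduce the closed-form feedback rule. All numerical values below are made up for illustration.

```python
import numpy as np

# Made-up point values for the coefficients and the derivatives of V.
sig, lam, dvol, rho = 0.2, 0.3, 0.25, -0.5
Vx, Vxx, Vxy = 1.0, -2.0, 0.4                # concavity in x: Vxx < 0

# the pi-dependent part of the HJB equation (3.3); note sigma*lam = mu - r
ham = lambda p: 0.5 * sig**2 * p**2 * Vxx + p * (sig * lam * Vx + rho * sig * dvol * Vxy)

grid = np.linspace(-5.0, 5.0, 200001)
p_grid = grid[np.argmax(ham(grid))]          # brute-force maximiser

# closed-form feedback rule (3.4)
p_closed = -lam * Vx / (sig * Vxx) - rho * dvol * Vxy / (sig * Vxx)
print(round(p_grid, 3), round(p_closed, 3))  # both 0.625
```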


Besides the technically challenging issues to which problem (3.1) gives rise, there is a number of very interesting questions on the economic properties of the optimal portfolios. From (3.4) one sees that the optimal feedback portfolio functional consists of two terms, namely,

  π^{*,m}(x, y, t; T) = − (λ(y)/σ(y)) (V_x(x, y, t; T)/V_xx(x, y, t; T))    (3.6)

and

  π^{*,h}(x, y, t; T) = − ρ (d(y)/σ(y)) (V_xy(x, y, t; T)/V_xx(x, y, t; T)).    (3.7)

The first component, π^{*,m}(x, y, t; T), is known as the myopic investment strategy. It corresponds functionally to the investment policy followed by an investor in markets in which the investment opportunity set remains constant through time. For a nonzero market price of risk, the myopic portfolio is always nonzero and has the sign of λ(y). The second term, π^{*,h}(x, y, t; T), is called the excess hedging demand. It represents the additional investment caused by the presence of the stochastic factor. It does not have a constant sign, for the signs of the correlation coefficient ρ and the mixed derivative V_xy are not definite. The excess risky demand vanishes in the uncorrelated case, ρ = 0, and when the volatility of the stochastic factor process is zero, d(y) = 0, y ∈ R. In the latter case, a simple deterministic time-rescaling argument reduces the problem to the classical Merton one. Finally, π^{*,h}(x, y, t; T) vanishes for the case of logarithmic utility (see (3.8)). Despite the nomenclature “hedging demand”, a rigorous study for the precise characterisation and quantification of the risk that is not hedged has not been carried out. Indeed, in contrast to derivative valuation, where the notion of an imperfect hedge is well defined, such a notion has not been established in the area of investments (see [85] for a special case). The total allocation in the risky asset might become zero even if the risk premium is not zero. This phenomenon, related to the so-called market participation puzzle, appears at first counterintuitive, for classical economic ideas suggest that a risk-averse investor should always retain nonzero holdings in an asset that offers a positive risk premium. We refer the reader to, among others, [4], [20] and [43]. Important questions arise on the dependence, sensitivity and robustness of the optimal feedback portfolio in terms of the market parameters, the wealth, the level of the stochastic factor and the risk preferences.
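To make the decomposition concrete: given any tabulated value function, the two components (3.6) and (3.7) can be extracted by finite differences. The surrogate V and the coefficient functions below are invented for illustration (a smooth power-type surface, not a solution of the HJB equation).

```python
import numpy as np

x = np.linspace(0.5, 2.0, 151)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
gamma = 0.5
V = (X**gamma / gamma) * np.exp(0.1 * Y)     # made-up smooth surrogate for V

# central finite differences for V_x, V_xx, V_xy
Vx = np.gradient(V, x, axis=0)
Vxx = np.gradient(Vx, x, axis=0)
Vxy = np.gradient(Vx, y, axis=1)

lam = lambda yv: 0.4 + 0.1 * yv              # hypothetical lambda(y)
sig = 0.2                                    # hypothetical sigma(y), constant
dvol = 0.3                                   # hypothetical d(y), constant
rho = -0.5

pi_myopic = -(lam(Y) / sig) * Vx / Vxx       # myopic component (3.6)
pi_hedge = -rho * (dvol / sig) * Vxy / Vxx   # excess hedging demand (3.7)

i, j = 75, 50                                # interior point: x = 1.25, y = 0
print(pi_myopic[i, j], pi_hedge[i, j])
```

For this surrogate the exact values at (x, y) = (1.25, 0) are 4x = 5 and −0.15x = −0.1875, which the finite-difference estimates recover up to discretisation error; the two components carry opposite signs here, illustrating how the hedging demand can offset the myopic one.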
Such questions are central in financial economics and have been studied primarily in simpler models in which intermediate consumption is also incorporated (see, among others, [2], [52], [61], [75] and [78]). For diffusion models with and without a stochastic factor, qualitative results can be found in [30], [51], [53], [64], [90] and, recently, in [9] (see, also, [65] for a general incomplete-market discrete model). However, a qualitative study for general utility functions and/or arbitrary factor dynamics has not been carried out to date.

Some open problems

Problem 1: What are the weakest conditions on the market coefficients and the utility function so that the Dynamic Programming Principle holds?


Problem 2: What are the weakest conditions on the market coefficients and the utility function so that existence and uniqueness of viscosity solutions to the HJB equation hold?

Problem 3: Study the regularity of the value function and establish the associated verification theorem.

Problem 4: Develop numerical schemes for the value function and the optimal feedback policies for general utility functions.

Problem 5: Study the behaviour of the optimal portfolio in terms of market inputs, the horizon length and risk preferences for general utility functions and arbitrary stochastic factor dynamics. Compute and analyse the distribution of the optimal wealth and portfolio processes as well as their moments.

3.1 The CARA, CRRA and logarithmic cases

We provide examples for the most frequently used utilities, namely, the exponential, power and logarithmic ones. They have convenient homogeneity properties which, in combination with the linearity of the wealth dynamics in the control policies, enable us to reduce the HJB equation to a quasilinear one. Under a “distortion” transformation (see, for example, [93]), the latter can be linearised and solutions in closed form can be produced using the Feynman–Kac formula. The smoothness of the value function and, in turn, the verification of the optimal feedback policies then follow easily. Multi-factor models for these preferences have been analysed by various authors. The theory of BSDEs has been successfully used to characterise and represent the solutions of the reduced HJB equation (see [33]). The regularity of its solutions has been studied using PDE arguments in [77] and [68], for power and exponential utilities, respectively. Finally, explicit solutions for a three-factor model can be found in [64].

Exponential case: We have u_T(x) = −e^{−γx}, x ∈ R and γ > 0. This case has been extensively studied not only in optimal investment models but, also, in indifference pricing, where valuation is done primarily under exponential preferences (see [21] for a concise collection of relevant references). The value function is multiplicatively separable and given, for (x, y, t) ∈ R × R × [0, T], by

  V(x, y, t; T) = −e^{−γx} h(y, t; T)^δ,  δ = 1/(1 − ρ²),

where h: R × [0, T] → R solves

  h_t + ½ d²(y) h_yy + ( b(y) − ρ λ(y) d(y) ) h_y = ½ (1 − ρ²) λ²(y) h,

with h(y, T; T) = 1. The optimal feedback investment strategy is independent of the wealth level and given by

  π*(x, y, t; T) = λ(y)/(γ σ(y)) + (ρ/(1 − ρ²)) (d(y)/(γ σ(y))) (h_y(y, t; T)/h(y, t; T)).


The optimal wealth and portfolio processes follow directly from (3.4) and (3.5). Namely, for t ≤ s ≤ T,

  π_s* = π*(x, Y_s, s; T) = λ(Y_s)/(γ σ(Y_s)) + (ρ/(1 − ρ²)) (d(Y_s)/(γ σ(Y_s))) (h_y(Y_s, s; T)/h(Y_s, s; T))

and

  X_s* = x + ∫_t^s σ(Y_u) λ(Y_u) π_u* du + ∫_t^s σ(Y_u) π_u* dW_u^1.

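Since the equation for h is linear, the Feynman–Kac formula gives the probabilistic representation h(y, t; T) = E[ exp(−½(1 − ρ²) ∫_t^T λ²(Ỹ_s) ds) | Ỹ_t = y ], where Ỹ carries the tilted drift b − ρλd. The Monte Carlo sketch below evaluates it under hypothetical coefficients (not the paper's), and presumes the reconstructed form of the h-equation above.

```python
import numpy as np

rho = -0.5
lam = lambda y: 0.4 + 0.1 * np.tanh(y)       # hypothetical lambda(y)
b = lambda y: -0.5 * y                       # hypothetical b(y)
d = lambda y: 0.3                            # hypothetical d(y)

def h_mc(y0, t, T, n_paths=20000, n_steps=200, seed=1):
    """Monte Carlo for h via Feynman-Kac under the tilted factor dynamics."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    y = np.full(n_paths, y0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += lam(y) ** 2 * dt
        drift = b(y) - rho * lam(y) * d(y)   # tilted drift b - rho*lambda*d
        y = y + drift * dt + d(y) * rng.normal(scale=np.sqrt(dt), size=n_paths)
    return float(np.exp(-0.5 * (1 - rho**2) * integral).mean())

print(h_mc(0.0, 0.0, 1.0))    # a value in (0, 1]
```

Since the killing rate ½(1 − ρ²)λ² is nonnegative, the estimate always lies in (0, 1], consistent with the terminal condition h(y, T; T) = 1.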
A well-known criticism of the exponential utility is that the optimal portfolio does not depend on the investor's wealth. While this property might be desirable in asset equilibrium pricing, it appears to be problematic and counterintuitive for investment problems. We note, however, that this property is directly related to the choice of the savings account as the numeraire. If the benchmark changes, the optimal portfolio ceases to be independent of wealth (see (4.44)).

The next two utilities are defined on the half-line and the stochastic optimisation problem is a state-constraint one. We easily deduce from the form of the optimal portfolios that the non-negativity wealth constraint is always satisfied.

Power case: We have u_T(x) = (1/γ) x^γ, 0 < γ < 1. The value function is multiplicatively separable and given, for (x, y, t) ∈ R⁺ × R × [0, T], by

  V(x, y, t; T) = (1/γ) x^γ f(y, t; T)^δ,  δ = (1 − γ)/(1 − γ + ρ²γ),

where f: R × [0, T] → R⁺ solves the linear parabolic equation

  f_t + ½ d²(y) f_yy + ( b(y) + (ργ/(1 − γ)) λ(y) d(y) ) f_y + (γ λ²(y))/(2 (1 − γ) δ) f = 0,

with f(y, T; T) = 1. The optimal policy feedback function is linear in wealth,

  π*(x, y, t; T) = (1/(1 − γ)) (λ(y)/σ(y)) x + (ρ/((1 − γ) + ρ²γ)) (d(y)/σ(y)) (f_y(y, t; T)/f(y, t; T)) x.

The optimal investment and wealth processes are, in turn, given by π_s* = m_s X_s* and

  X_s* = x exp( ∫_t^s ( σ(Y_u) λ(Y_u) m_u − ½ σ²(Y_u) m_u² ) du + ∫_t^s σ(Y_u) m_u dW_u^1 ),

with

  m_s = (1/(1 − γ)) (λ(Y_s)/σ(Y_s)) + (ρ/((1 − γ) + ρ²γ)) (d(Y_s)/σ(Y_s)) (f_y(Y_s, s; T)/f(Y_s, s; T)).


The range of the risk aversion parameter can be relaxed to include negative values. Its choice plays an important role in the boundary and asymptotic behaviour of the value function as well as in the long-term behaviour of the optimal wealth and portfolio processes (see [51] and [64]). Verification results under weak conditions on the risk premium can be found, among others, in [55] and [56].

Logarithmic utility: We have u_T(x) = ln x, x > 0. The value function is additively separable, namely,

  V(x, y, t; T) = ln x + h(y, t; T),

with h: R × [0, T] → R⁺ solving

  h_t + ½ d²(y) h_yy + b(y) h_y + ½ λ²(y) = 0

and h(y, T; T) = 0. The optimal portfolio takes the simple linear form

  π*(x, y, t; T) = (λ(y)/σ(y)) x.    (3.8)

In turn, the optimal investment and wealth processes are given, for t ≤ s ≤ T, by

  π_s* = (λ(Y_s)/σ(Y_s)) X_s*  and  X_s* = x exp( ∫_t^s ½ λ²(Y_u) du + ∫_t^s λ(Y_u) dW_u^1 ).

The logarithmic utility plays a special role in portfolio choice. Because of the additively separable form of the value function, the optimal portfolio is always myopic. It is known as the “growth optimal portfolio” and has been extensively studied in general market settings (see, for example, [6] and [50]). The associated optimal wealth is the so-called “numeraire portfolio”. It has also been extensively studied, for it is the numeraire with respect to which all wealth processes are supermartingales under the historical measure (see, among others, [40] and [41]).

4 The forward formulation

As discussed in the previous section, the main feature of the expected utility approach is the a priori choice of the utility at the end of the trading horizon. Direct consequences of this choice are, on the one hand, the lack of flexibility to revise the risk preferences at other times and, on the other, the inability to assess the performance of investment strategies beyond the prespecified horizon. Addressing these limitations has been the subject of a number of studies and various approaches have been proposed. With regards to the horizon length, the most popular alternative has been to formulate the investment problem on [0, +∞) and either incorporate intermediate consumption or optimise the investor's long-term behaviour (see, among others, [47], [48] and [86]). Investment models with


random horizon have also been examined ([23]). The revision of risk preferences has been partially addressed by recursive utilities (see, for example, [31], [82] and [83]). Next, we present another alternative approach which addresses both shortcomings of the expected utility approach. The associated criterion is developed in terms of a family of stochastic processes defined on [0, ∞) and indexed by the wealth argument. It will be called the forward performance process. Its key properties are martingality at an optimum and supermartingality away from it. These are in accordance with the analogous properties of the value function process that stem from the Dynamic Programming Principle (cf. (3.2)). However, in contrast to the existing framework, the risk preferences are specified for today¹ and not for a (possibly remote) future time. We recall that F_t, t ≥ 0, is the filtration generated by W_t = (W_t^1, W_t^2), t ≥ 0, and A the set of admissible policies. As in the previous section, we use D to denote the generic admissible space domain.

¹ The choice of the initial condition gives rise to interesting mathematical and modelling questions (see, for example, [73] and references therein).

Definition 4.1. An F_t-adapted process U(x, t) is a forward performance if, for t ≥ 0 and x ∈ D:

i) the mapping x → U(x, t) is concave and increasing,

ii) for each portfolio process π ∈ A, E_P(U(X_t^π, t))⁺ < ∞, and

  E_P( U(X_s^π, s) | F_t ) ≤ U(X_t^π, t),  s ≥ t,    (4.1)

iii) there exists a portfolio process π* ∈ A for which

  E_P( U(X_s^{π*}, s) | F_t ) = U(X_t^{π*}, t),  s ≥ t,    (4.2)

and iv) at t = 0, U(x, 0) = u_0(x), where u_0: D → R is increasing and concave.

The concept of forward performance process was introduced in [69] (see, also, [70]). The model therein is an incomplete binomial one and the initial datum is taken to be exponential. The exponential case was subsequently and extensively analysed in [71] and [95]. Ideas related to the forward approach can also be found in [23], where the authors consider random horizon choices, aiming at alleviating the dependence of the value function on a fixed deterministic horizon. Their model is more general in terms of the assumptions on the price dynamics, but the focus in [23] is primarily on horizon effects. Horizon issues were also considered in [44] for the special case of lognormal stock dynamics. It is worth observing the following differences and similarities between the forward performance process and the traditional value function. Namely, the process U(x, t) is defined for all t ≥ 0, while the value function V(x, y, t; T) is defined only on [0, T]. In the classical set-up discussed in the previous section, V(x, y, T; T) ∈ F_0, due to the deterministic choice of the terminal utility u_T. If the terminal utility is taken to be


state-dependent, V(x, y, T; T) ∈ F_T (see, for example, [49], [81] as well as [10], [27] and [46]), the traditional and new formulations are, essentially, identical in [0, T].

Recently, it was shown in [74] that a sufficient condition for a process U(x, t) to be a forward performance is that it satisfies a stochastic partial differential equation (see (4.5) below). For completeness, we state the result for a general incomplete market model with k risky stocks whose prices are modelled as Ito processes driven by a d-dimensional Brownian motion. We use σ_t, t ≥ 0, to denote their d × k random volatility matrix and μ_t the k-dimensional vector whose coordinates are the mean rates of return of the stocks. It is assumed that the volatility vectors are such that μ_t − r_t 1 ∈ Lin(σ_t^T), where Lin(σ_t^T) denotes the linear space generated by the columns of σ_t^T. This implies that σ_t^T (σ_t^T)⁺ (μ_t − r_t 1) = μ_t − r_t 1 and, therefore, the market price of risk vector

  λ_t = (σ_t^T)⁺ (μ_t − r_t 1)    (4.3)

is a solution to the equation σ_t^T x = μ_t − r_t 1. The matrix (σ_t^T)⁺ is the Moore–Penrose pseudo-inverse of the matrix σ_t^T. It easily follows that, for t ≥ 0,

  σ_t σ_t⁺ λ_t = λ_t.    (4.4)

It is assumed from now on that there exists a deterministic constant c ≥ 0 such that, for t ≥ 0, λ(Y_t) ≤ c.

Proposition 4.2. Let U(x, t) ∈ F_t be such that the mapping x → U(x, t) is increasing and concave. Let, also, U(x, t) be a solution to the stochastic partial differential equation

  dU(x, t) = ½ ( |λ_t U_x(x, t) + σ_t σ_t⁺ a_x(x, t)|² / U_xx(x, t) ) dt + a(x, t) · dW_t,    (4.5)

where a(x, t) ∈ F_t. Then U(x, t) is a forward performance process.

It might seem that all Definition 4.1 produces is a criterion that is dynamically consistent across time. Indeed, internal consistency is a ubiquitous requirement and needs to be ensured in any proposed criterion. It is satisfied, for example, by the traditional value function. However, the new criterion allows for much more flexibility, as manifested by the volatility process a(x, t) introduced above. Characterising the appropriate class of admissible volatility processes is, in our view, an interesting and challenging question. The forward performance SPDE (4.5) poses several challenges. It is fully nonlinear and not (degenerate) elliptic; the latter is a direct consequence of the “forward in time” nature of the involved stochastic optimisation problem. Thus, existing results on existence, uniqueness and regularity of weak (viscosity) solutions are not directly applicable. An additional difficulty comes from the fact that the volatility coefficient may depend on the second-order derivative of U. In such cases, it might not be possible to reduce the SPDE, using the method of stochastic characteristics, to a PDE with random coefficients.
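The pseudo-inverse construction in (4.3)–(4.4) can be verified numerically for a generic volatility matrix; the dimensions and numerical values below are arbitrary illustrations.

```python
import numpy as np

d_dim, k = 3, 2                              # d-dim Brownian motion, k stocks
rng = np.random.default_rng(0)
sigma_t = rng.normal(size=(d_dim, k))        # d x k volatility matrix
mu_t = np.array([0.06, 0.04])                # mean rates of return
r_t = 0.01
excess = mu_t - r_t * np.ones(k)             # mu_t - r_t 1

lam_t = np.linalg.pinv(sigma_t.T) @ excess   # market price of risk (4.3)

# lam_t solves sigma_t^T x = mu_t - r_t 1 (the excess return lies in
# Lin(sigma_t^T) since sigma_t^T has full row rank here almost surely)
assert np.allclose(sigma_t.T @ lam_t, excess)

# and it is fixed by the projection sigma_t sigma_t^+, which is (4.4)
assert np.allclose(sigma_t @ np.linalg.pinv(sigma_t) @ lam_t, lam_t)
print(np.round(lam_t, 4))
```

The second identity holds because the range of a Moore–Penrose pseudo-inverse A⁺ equals the range of A^T, so λ_t automatically lies in the column space of σ_t, onto which σ_t σ_t⁺ projects.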


For the model at hand, the coefficients appearing in (4.5) take the form

  σ_t = (σ(Y_t), 0)^T,  σ_t⁺ = (1/σ(Y_t), 0)  and  λ_t = ((μ(Y_t) − r)/σ(Y_t), 0)^T.

We easily see that (4.4) is trivially satisfied.

Proposition 4.3. i) Let U(x, t) ∈ F_t be such that the mapping x → U(x, t) is increasing and concave. Let, also, U(x, t) be a solution to the stochastic partial differential equation

  dU(x, t) = ½ ( (λ(Y_t) U_x(x, t) + a¹_x(x, t))² / U_xx(x, t) ) dt + a¹(x, t) dW_t^1 + a²(x, t) dW_t^2,

where a(x, t) = (a¹(x, t), a²(x, t))^T, with aⁱ(x, t) ∈ F_t, i = 1, 2. Then U(x, t) is a forward performance process.

ii) Let U(x, t) be a solution to the SPDE (4.5) such that, for each t ≥ 0, the mapping x → U(x, t) is increasing and concave. Consider the process π_t*, t ≥ 0, given by

  π_t* = − (λ(Y_t)/σ(Y_t)) (U_x(X_t*, t)/U_xx(X_t*, t)) − a¹_x(X_t*, t)/(σ(Y_t) U_xx(X_t*, t)),    (4.6)

where X_t*, t ≥ 0, solves

  dX_t* = σ(Y_t) π_t* ( λ(Y_t) dt + dW_t^1 ),    (4.7)

with X_0* = x. If π_t* ∈ A and (4.7) has a strong solution, then π_t* and X_t* are optimal.

Remark: The same stochastic partial differential equation emerges in the classical formulation of the optimal portfolio choice problem. Indeed, assuming for the moment that the appropriate regularity assumptions hold, expanding the process V(x, Y_t, t; T) (cf. (2.2) and (3.1)) yields

  dV(x, Y_t, t) = ( V_t(x, Y_t, t) + ½ d²(Y_t) V_yy(x, Y_t, t) + b(Y_t) V_y(x, Y_t, t) ) dt + ρ d(Y_t) V_y(x, Y_t, t) dW_t^1 + √(1 − ρ²) d(Y_t) V_y(x, Y_t, t) dW_t^2.

Using that V(x, y, t; T) solves the HJB equation and rearranging terms, we deduce that

  dV(x, Y_t, t) = ½ ( (λ(Y_t) V_x(x, Y_t, t) + ρ d(Y_t) V_xy(x, Y_t, t))² / V_xx(x, Y_t, t) ) dt + ρ d(Y_t) V_y(x, Y_t, t) dW_t^1 + √(1 − ρ²) d(Y_t) V_y(x, Y_t, t) dW_t^2.

The above SPDE corresponds to the volatility choice, for 0 ≤ t < T,

  a¹(x, t) = ρ d(Y_t) V_y(x, Y_t, t)  and  a²(x, t) = √(1 − ρ²) d(Y_t) V_y(x, Y_t, t).

Notice that in the backward (classical) optimal investment model there is no freedom in choosing the volatility coefficients, for they are uniquely obtained from the Itô decomposition of the value function process.


T. Zariphopoulou

4.1 The zero volatility case

An important class of forward performance processes consists of those that are decreasing in time. They yield an intuitively rich family of performance criteria which compile in a transparent way the dynamic risk profile of the investor and the information coming from the evolution of the investment opportunity set. This section is dedicated to the representation of these processes and the construction of the associated optimal wealth and portfolios. These issues have been extensively studied in [72] and [73], and we refer the reader to them for the proofs of the results that follow.

The local risk tolerance function r(x,t), t ≥ 0, defined below, plays a crucial role in the representation of the optimal investment and wealth processes. It represents the dynamic counterpart of the static risk tolerance function, $r_T(x) = -\frac{u_T'(x)}{u_T''(x)}$. Observe that, similarly to its static analogue, it is chosen exogenously to the market. However, now it is time-dependent and solves the autonomous fast diffusion equation (4.12).² The reciprocal of the risk tolerance, the local risk aversion, γ = r⁻¹, solves the porous medium equation (4.13).

We recall that u₀(x) is the initial condition of the forward performance process. It is assumed that u₀ ∈ C⁴(D).

Theorem 4.4. Let λ be as in (2.4) and define the time-rescaling process
$$A_t = \int_0^t \lambda^2(Y_s)\,ds, \quad t \geq 0. \tag{4.8}$$
Let, also, u ∈ C^{4,1}(D × (0, +∞)) be a function, concave and increasing in the spatial argument, satisfying
$$u_t = \frac{1}{2}\,\frac{u_x^2}{u_{xx}}, \tag{4.9}$$
and u(x, 0) = u₀(x). Then, the time-decreasing process
$$U_t(x) = u(x, A_t) \tag{4.10}$$
is a forward performance.

Proposition 4.5. Let the local risk tolerance function r : D × [0, +∞) → R⁺₀ be defined by
$$r(x,t) = -\frac{u_x(x,t)}{u_{xx}(x,t)}, \tag{4.11}$$
with u solving (4.9). Then, r satisfies
$$r_t + \frac{1}{2}\,r^2\,r_{xx} = 0, \tag{4.12}$$
with $r(x,0) = -\frac{u_0'(x)}{u_0''(x)}$. Its reciprocal, γ = r⁻¹, solves
$$\gamma_t + \frac{1}{2}\Big(\frac{1}{\gamma}\Big)_{xx} = 0, \tag{4.13}$$
with $\gamma(x,0) = -\frac{u_0''(x)}{u_0'(x)}$.

² See [7] and [45] for a similar equation arising in the traditional Merton problem.
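The fast diffusion equation (4.9) can be checked numerically on the explicit examples appearing in Section 4.2. The sketch below is our addition: it verifies (4.9) by central finite differences for the exponential (Case 1) and power (Case 2, with γ = 2) solutions; the evaluation points are arbitrary.

```python
import math

# Two explicit solutions of the fast diffusion equation u_t = (1/2) u_x^2 / u_xx,
# taken from the exponential (Case 1) and power (Case 2, here gamma = 2) examples.
def u_cara(x, t):
    return 1.0 - math.exp(-x + t / 2)

GAMMA = 2.0

def u_power(x, t):
    c = GAMMA ** ((GAMMA - 1) / GAMMA) / (GAMMA - 1)
    return c * x ** ((GAMMA - 1) / GAMMA) * math.exp(-(GAMMA - 1) * t / 2)

def pde_residual(u, x, t, h=1e-4):
    # central finite differences for u_t, u_x, u_xx
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - 0.5 * u_x ** 2 / u_xx

for cand in (u_cara, u_power):
    assert abs(pde_residual(cand, 1.3, 0.4)) < 1e-6
print("both candidates satisfy the fast diffusion equation (4.9)")
```

Note that (4.9) is invariant under the scaling u → cu, so the check is insensitive to the normalising constant in the power example.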

An analytically explicit construction of the function u was recently developed in [73]. A key role is played by a strictly increasing space-time harmonic function, h : R × [0, +∞) → D, solving the backward heat equation
$$h_t + \frac{1}{2}\,h_{xx} = 0. \tag{4.14}$$
This function is always globally defined, but its range varies, with Range(h) = D, D being the domain of u. It was shown in [73] that there is a one-to-one correspondence between strictly increasing solutions of (4.14) and strictly increasing and concave solutions of (4.9) (see Propositions 9, 13 and 14 therein).

A pivotal role in the analysis is played by a positive Borel measure, ν, through which the function h is represented in an integral form (see (4.19) and (4.25) below). This representation stems from classical results of Widder for the solutions of the (backward) heat equation (see [91]). We note that in the applications at hand, these results are not directly applicable, for the range of h is not always constrained to the positive semi-axis. Indeed, we will see that h is used to represent the optimal wealth (cf. (4.30)), which, in unconstrained problems, may take arbitrary values.

The results that follow correspond to the infinite domain case, D = R. To ease the presentation we introduce the following sets,
$$\mathcal{B}^+(\mathbb{R}) = \Big\{\nu \in \mathcal{B}(\mathbb{R}) : \forall B \in \mathcal{B},\ \nu(B) \geq 0 \ \text{and}\ \int_{\mathbb{R}} e^{yx}\,\nu(dy) < \infty,\ x \in \mathbb{R}\Big\}, \tag{4.15}$$
$$\mathcal{B}_0^+(\mathbb{R}) = \big\{\nu \in \mathcal{B}^+(\mathbb{R}) : \nu(\{0\}) = 0\big\}, \tag{4.16}$$
$$\mathcal{B}_+^+(\mathbb{R}) = \big\{\nu \in \mathcal{B}_0^+(\mathbb{R}) : \nu((-\infty, 0)) = 0\big\} \quad \text{and} \tag{4.17}$$
$$\mathcal{B}_-^+(\mathbb{R}) = \big\{\nu \in \mathcal{B}_0^+(\mathbb{R}) : \nu((0, +\infty)) = 0\big\}. \tag{4.18}$$
We start with representation results for strictly increasing solutions of (4.14) with unbounded range.

Proposition 4.6. i) Let ν ∈ B⁺(R) and C ∈ R. Then, the function h defined, for (x, t) ∈ R × [0, +∞), by
$$h(x,t) = \int_{\mathbb{R}} \frac{e^{yx - \frac{1}{2}y^2 t} - 1}{y}\,\nu(dy) + C, \tag{4.19}$$
is a strictly increasing solution to (4.14).

Moreover, if ν({0}) > 0, or ν ∈ B₊⁺(R) and $\int_{0^+}^{+\infty} \frac{\nu(dy)}{y} = +\infty$, or ν ∈ B₋⁺(R) and $\int_{-\infty}^{0^-} \frac{\nu(dy)}{y} = -\infty$, then Range(h) = (−∞, +∞), for t ≥ 0. On the other hand, if ν ∈ B₊⁺(R) with $\int_{0^+}^{+\infty} \frac{\nu(dy)}{y} < +\infty$ (resp. ν ∈ B₋⁺(R) with $\int_{-\infty}^{0^-} \frac{\nu(dy)}{y} > -\infty$), then Range(h) = $\big(C - \int_{0^+}^{+\infty} \frac{\nu(dy)}{y},\, +\infty\big)$ (resp. Range(h) = $\big(-\infty,\, C - \int_{-\infty}^{0^-} \frac{\nu(dy)}{y}\big)$), for t ≥ 0.

ii) Conversely, let h : R × [0, +∞) → R be a strictly increasing solution to (4.14). Then, there exists ν ∈ B⁺(R) such that h is given by (4.19). Moreover, if Range(h) = (−∞, +∞), t ≥ 0, then it must be either that ν({0}) > 0, or ν ∈ B₊⁺(R) and $\int_{0^+}^{+\infty} \frac{\nu(dy)}{y} = +\infty$, or ν ∈ B₋⁺(R) and $\int_{-\infty}^{0^-} \frac{\nu(dy)}{y} = -\infty$. On the other hand, if Range(h) = (x₀, +∞) (resp. Range(h) = (−∞, x₀)), t ≥ 0 and x₀ ∈ R, then it must be that ν ∈ B₊⁺(R) with $\int_{0^+}^{+\infty} \frac{\nu(dy)}{y} < +\infty$ (resp. ν ∈ B₋⁺(R) with $\int_{-\infty}^{0^-} \frac{\nu(dy)}{y} > -\infty$).
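When ν is a finite combination of Dirac masses, the integral (4.19) reduces to a finite sum, which makes the representation easy to experiment with. The sketch below is our illustration (the two atoms and their weights are arbitrary choices): it builds such an h with C = 0 and checks numerically that it is strictly increasing in x and solves the backward heat equation (4.14).

```python
import math

# nu = sum of point masses w at locations y > 0; representation (4.19) with C = 0
# then reads h(x,t) = sum_y w * (exp(y*x - y^2 t / 2) - 1) / y.
ATOMS = [(0.5, 1.0), (2.0, 0.3)]  # (location y, weight w); illustrative values

def h(x, t):
    return sum(w * (math.exp(y * x - 0.5 * y * y * t) - 1.0) / y for y, w in ATOMS)

def backward_heat_residual(x, t, d=1e-4):
    h_t = (h(x, t + d) - h(x, t - d)) / (2 * d)
    h_xx = (h(x + d, t) - 2 * h(x, t) + h(x - d, t)) / d ** 2
    return h_t + 0.5 * h_xx

# strictly increasing in the spatial argument ...
xs = [i / 10 - 2.0 for i in range(41)]
assert all(h(x1, 0.7) < h(x2, 0.7) for x1, x2 in zip(xs, xs[1:]))
# ... and a solution of the backward heat equation (4.14)
assert abs(backward_heat_residual(0.4, 0.7)) < 1e-6
print("two-atom instance of (4.19) verified")
```

Here ν ∈ B₊⁺(R) with ∫ν(dy)/y = 1.0/0.5 + 0.3/2.0 = 2.15 < ∞, so by Proposition 4.6 the range of this h is (−2.15, +∞) rather than all of R.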

The next proposition yields the one-to-one correspondence between the solutions h and u. Without loss of generality, we will normalise the values
$$h(0,0) = 0, \tag{4.20}$$
choosing C = 0, and³
$$u(0,0) = 0 \quad \text{and} \quad u_x(0,0) = 1. \tag{4.21}$$

³ The first equality is imposed in an ad hoc way. The second one, however, is in accordance with (4.20). For details see the proof of Proposition 9 in [73].

Proposition 4.7. i) Let ν ∈ B⁺(R) and h : R × [0, +∞) → R be as in (4.19) with the measure ν being used. Assume that h is of full range, for each t ≥ 0, and let h^{(−1)} : R × [0, +∞) → R be its spatial inverse. Then, the function u defined for (x, t) ∈ R × [0, +∞) and given by
$$u(x,t) = -\frac{1}{2}\int_0^t e^{-h^{(-1)}(x,s) + \frac{s}{2}}\, h_x\big(h^{(-1)}(x,s),\, s\big)\,ds + \int_0^x e^{-h^{(-1)}(z,0)}\,dz, \tag{4.22}$$
is an increasing and strictly concave solution of (4.9) satisfying (4.21). Moreover, for t ≥ 0, the Inada conditions,
$$\lim_{x \to -\infty} u_x(x,t) = +\infty \quad \text{and} \quad \lim_{x \to +\infty} u_x(x,t) = 0, \tag{4.23}$$
are satisfied.

ii) Conversely, let u be an increasing and strictly concave function satisfying, for (x, t) ∈ R × [0, +∞), (4.9) and (4.21), and the Inada conditions (4.23), for t ≥ 0. Then, there exists ν ∈ B⁺(R) such that u admits representation (4.22) with h given by (4.19), for (x, t) ∈ R × [0, +∞). Moreover, h is of full range, for each t ≥ 0, and satisfies (4.20).

The cases of semi-finite domain, D = R⁺, R⁺₀, R⁻ and R⁻₀, deserve special attention as they are used in the popular choices of power and logarithmic risk preferences. In these cases, the support of the measure is constrained to the half-line. The representation results above need to be modified for semi-infinite domains. Various cases emerge, depending on certain characteristics of the measure ν which affect the boundary behaviour of the solution u. The arguments are both computationally cumbersome and long. For completeness we state one of these cases and we refer the reader to [73] for the others. To this end, we assume that

$$\nu \in \mathcal{B}_+^+(\mathbb{R}) \quad \text{and} \quad \int_{0^+}^{+\infty} \frac{\nu(dy)}{y} < +\infty, \tag{4.24}$$
with B₊⁺(R) given in (4.17). Choosing for convenience $C = \int_{0^+}^{+\infty} \frac{1}{y}\,\nu(dy)$ in (4.19) yields⁴ the solution to (4.14)
$$h(x,t) = \int_{0^+}^{+\infty} \frac{e^{yx - \frac{1}{2}y^2 t}}{y}\,\nu(dy), \tag{4.25}$$
with Range(h) = (0, +∞).

⁴ One may alternatively represent h as $h(x,t) = \int_{0^+}^{+\infty} e^{yx - \frac{1}{2}y^2 t}\,\mu(dy)$ with μ(dy) = ν(dy)/y. Note that μ ∈ B⁺(R). Such a representation was used in [5].

Proposition 4.8. i) Let ν satisfy (4.24) and, in addition, ν((0,1]) = 0. Let, also, h : R × [0, +∞) → (0, +∞) be as in (4.25) and h^{(−1)} : (0, +∞) × [0, +∞) → R be its spatial inverse. Then, the function u defined, for (x, t) ∈ (0, +∞) × [0, +∞), by
$$u(x,t) = -\frac{1}{2}\int_0^t e^{-h^{(-1)}(x,s) + \frac{s}{2}}\, h_x\big(h^{(-1)}(x,s),\, s\big)\,ds + \int_0^x e^{-h^{(-1)}(z,0)}\,dz, \tag{4.26}$$
is an increasing and strictly concave solution of (4.9) with
$$\lim_{x \to 0} u(x,t) = 0, \quad \text{for } t \geq 0. \tag{4.27}$$
Moreover, for t ≥ 0, the Inada conditions
$$\lim_{x \to 0} u_x(x,t) = +\infty \quad \text{and} \quad \lim_{x \to +\infty} u_x(x,t) = 0 \tag{4.28}$$
are satisfied.

ii) Conversely, let u, defined for (x, t) ∈ (0, +∞) × [0, +∞), be an increasing and strictly concave function satisfying (4.9), (4.27) and the Inada conditions (4.28). Then, there exists ν ∈ B⁺(R) satisfying (4.24) and ν((0,1]) = 0, such that u admits representation (4.26) with h given by (4.25), for (x, t) ∈ R × [0, +∞).

Note that the above results yield implicit representation constraints for the initial condition u₀. For example, from (4.22) we must have $u_0'(x) = e^{-h^{(-1)}(x,0)}$, x ∈ R, with the integrand $e^{-h^{(-1)}(x,0)}$ specified from $h(x,0) = \int_{\mathbb{R}} \frac{e^{yx}-1}{y}\,\nu(dy)$. This, in turn, yields that the inverse of u₀′ must be represented as
$$(u_0')^{(-1)}(x) = \int_{\mathbb{R}} \frac{e^{-y \ln x} - 1}{y}\,\nu(dy), \quad x > 0.$$
Characterising the set of admissible initial data and providing an intuitively meaningful interpretation is, in our view, an interesting question.

We continue with the construction of the optimal wealth and portfolio processes for the class of time decreasing performance processes. As the theorem below shows, the optimal processes can be calculated in closed form.


Theorem 4.9. i) Let h be a strictly increasing solution to (4.14), for (x, t) ∈ R × [0, +∞), and assume that the associated measure ν satisfies, for t > 0,
$$\int_{\mathbb{R}} e^{yx + \frac{1}{2}y^2 t}\,\nu(dy) < \infty. \tag{4.29}$$
Let also A_t be as in (4.8) and M_t, t ≥ 0, be given by
$$M_t = \int_0^t \lambda(Y_s)\,dW_s^1.$$
Define the processes X*_t and π*_t by
$$X_t^* = h\big(h^{(-1)}(x,0) + A_t + M_t,\, A_t\big) \tag{4.30}$$
and
$$\pi_t^* = \frac{\lambda(Y_t)}{\sigma(Y_t)}\, h_x\big(h^{(-1)}(x,0) + A_t + M_t,\, A_t\big), \tag{4.31}$$
t ≥ 0, x ∈ R, with h as above and h^{(−1)} standing for its spatial inverse. Then, the portfolio π*_t is admissible and generates X*_t, i.e.,
$$X_t^* = x + \int_0^t \sigma(Y_s)\,\pi_s^*\big(\lambda(Y_s)\,ds + dW_s^1\big). \tag{4.32}$$
ii) Let u be the increasing and strictly concave solution to (4.9) associated with h. Then, the process u(X*_t, A_t), t ≥ 0, satisfies
$$du(X_t^*, A_t) = u_x(X_t^*, A_t)\,\sigma(Y_t)\,\pi_t^*\,dW_t^1, \tag{4.33}$$
with X*_t and π*_t as in (4.30) and (4.31). Therefore, the processes X*_t and π*_t are optimal.

The optimal portfolio π*_t may also be represented in terms of the risk tolerance process, R*_t, defined as
$$R_t^* = r(X_t^*, A_t), \tag{4.34}$$
with X*_t solving (4.32) and r as in (4.11). Indeed, one can show that the local risk tolerance function satisfies, for (x, t) ∈ D × [0, +∞),
$$r(x,t) = h_x\big(h^{(-1)}(x,t),\, t\big). \tag{4.35}$$
Therefore, (4.31) yields
$$\pi_t^* = \frac{\lambda(Y_t)}{\sigma(Y_t)}\, R_t^*. \tag{4.36}$$
One then sees that, under the investment performance criterion (4.10), the investor always follows a myopic strategy. The excess hedging demand component disappears as long as the volatility of the performance process remains zero.
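Theorem 4.9 can be exercised in the simplest setting: a constant-coefficient market (λ and σ constant, so the factor plays no role) and ν = δ₀, for which h(x,t) = x, X*_t = x + A_t + M_t and π*_t = λ/σ, as in the exponential case of Section 4.2. The Monte Carlo sketch below is our addition (all parameter values are assumptions): it checks pathwise that π* generates X* through the wealth dynamics (4.32), and that u(X*_t, A_t) has constant expectation, as the martingale property (4.33) predicts.

```python
import math, random

random.seed(7)
lam, sigma, x0 = 0.4, 0.2, 1.0     # assumed constant market coefficients
T, n_steps, n_paths = 1.0, 100, 20_000
dt = T / n_steps

# nu = delta_0: h(x,t) = x, so (4.30)-(4.31) give X*_t = x + A_t + M_t and
# pi*_t = lam/sigma, with A_t = lam^2 t and M_t = lam W^1_t.
u = lambda x, t: 1.0 - math.exp(-x + t / 2)
A_T = lam * lam * T

vals = []
for _ in range(n_paths):
    X, M = x0, 0.0
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        pi = lam / sigma                        # (4.31): h_x = 1
        X += sigma * pi * (lam * dt + dW)       # wealth dynamics (4.32)
        M += lam * dW
    assert abs(X - (x0 + A_T + M)) < 1e-10      # pi* generates X* of (4.30) pathwise
    vals.append(u(X, A_T))

mc_mean = sum(vals) / n_paths
assert abs(mc_mean - u(x0, 0.0)) < 0.01         # martingale property, cf. (4.33)
print(f"E[u(X*_T, A_T)] ≈ {mc_mean:.4f}, u(x,0) = {u(x0, 0.0):.4f}")
```

The pathwise identity holds exactly here (up to floating-point error) because π* is constant; for a general ν the Euler scheme would only match (4.30) up to discretisation error.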


4.2 The CARA, CRRA and generalised CRRA cases

Case 1: Let ν = δ₀, where δ₀ is a Dirac measure at 0. Then, from (4.19) we obtain h(x,t) = x and, thus, (4.22) yields $u(x,t) = 1 - e^{-x + \frac{t}{2}}$. The optimal performance process is
$$U(x,t) = 1 - e^{-x + \frac{A_t}{2}}.$$
Formulae (4.32) and (4.31) yield, respectively,
$$X_t^* = x + A_t + M_t \quad \text{and} \quad \pi_t^* = \frac{\lambda(Y_t)}{\sigma(Y_t)}.$$
This class of forward performance processes is analysed in detail in [71] (see, also, [95]).

Case 2: Let ν = δ_γ, γ > 1. Then (4.25) yields $h(x,t) = \frac{1}{\gamma}\,e^{\gamma x - \frac{1}{2}\gamma^2 t}$. Since ν((0,1]) = 0, u is given by (4.26) and, therefore, $u(x,t) = \frac{\gamma^{\frac{\gamma-1}{\gamma}}}{\gamma-1}\,x^{\frac{\gamma-1}{\gamma}}\,e^{-\frac{\gamma-1}{2}t}$. The forward performance process is
$$U_t(x) = \frac{\gamma^{\frac{\gamma-1}{\gamma}}}{\gamma-1}\,x^{\frac{\gamma-1}{\gamma}}\,e^{-\frac{\gamma-1}{2}A_t}, \quad t \geq 0.$$
The optimal wealth and portfolio processes are given, respectively, by
$$X_t^* = x\,\exp\Big(\gamma\Big(1 - \frac{\gamma}{2}\Big)A_t + \gamma M_t\Big) \quad \text{and} \quad \pi_t^* = \gamma\,\frac{\lambda(Y_t)}{\sigma(Y_t)}\,X_t^*.$$
For the cases ν = δ_γ with γ = 1, γ ∈ (0,1) and $\gamma = -\frac{1}{2k+1}$, k > 0, see [94].

Case 3: Let $\nu = \frac{b}{2}(\delta_a + \delta_{-a})$, a, b > 0, where δ_{±a} are Dirac measures at ±a and a ≠ 1. We, then, have $h(x,t) = \frac{b}{a}\,e^{-\frac{1}{2}a^2 t}\sinh(ax)$ and, from (4.22),
$$u(x,t) = \frac{b^{\frac{1}{a}}\,e^{\frac{1-a}{2}t}}{2(a^2-1)}\Big((1+a)\big(ax + \sqrt{a^2x^2 + b^2 e^{-a^2 t}}\big)^{\frac{a-1}{a}} - (a-1)\,b^2 e^{-a^2 t}\big(ax + \sqrt{a^2x^2 + b^2 e^{-a^2 t}}\big)^{-\frac{1+a}{a}}\Big) - \frac{b}{a^2-1}.$$
Equalities (4.30) and (4.31) yield the optimal wealth and portfolio processes
$$X_t^* = \frac{b}{a}\,e^{-\frac{1}{2}a^2 A_t}\sinh\big(a\,(h^{(-1)}(x,0) + A_t + M_t)\big)$$
and
$$\pi_t^* = b\,\frac{\lambda(Y_t)}{\sigma(Y_t)}\,e^{-\frac{1}{2}a^2 A_t}\cosh\big(a\,(h^{(-1)}(x,0) + A_t + M_t)\big).$$
The case a = 1 deserves special attention as it corresponds to the generalised logarithmic case (see [94] for details).
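A quick numerical check of the Case 3 closed form (our own sketch; the parameter values a = 2, b = 1.5 are illustrative): it verifies by finite differences that the displayed u solves the fast diffusion equation (4.9), that the normalisation u(0,0) = 0 holds, and that its risk tolerance −u_x/u_xx equals √(a²x² + b²e^{−a²t}), which is h_x(h^{(−1)}(x,t), t) as in (4.35).

```python
import math

a, b = 2.0, 1.5   # parameters of nu = (b/2)(delta_a + delta_{-a}); illustrative values

def u(x, t):
    # Case 3 closed form; Z = a x + sqrt(a^2 x^2 + b^2 exp(-a^2 t))
    Z = a * x + math.sqrt(a * a * x * x + b * b * math.exp(-a * a * t))
    pref = b ** (1 / a) * math.exp((1 - a) * t / 2) / (2 * (a * a - 1))
    bracket = (1 + a) * Z ** ((a - 1) / a) \
        - (a - 1) * b * b * math.exp(-a * a * t) * Z ** (-(1 + a) / a)
    return pref * bracket - b / (a * a - 1)

x0, t0, h = 0.6, 0.8, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2

assert abs(u(0.0, 0.0)) < 1e-12                   # normalisation u(0,0) = 0
assert abs(u_t - 0.5 * u_x ** 2 / u_xx) < 1e-5    # fast diffusion equation (4.9)
# risk tolerance -u_x/u_xx = h_x(h^{(-1)}(x,t), t) = sqrt(a^2 x^2 + b^2 e^{-a^2 t})
assert abs(-u_x / u_xx - math.sqrt(a * a * x0 * x0 + b * b * math.exp(-a * a * t0))) < 1e-5
print("Case 3 closed form checked numerically")
```

The last identity makes the risk tolerance of this family explicit: it interpolates between an affine function of wealth (large |x|) and a constant (x near 0).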


4.3 Two special cases of volatilities

We focus on the case where the volatility coefficient a is a locally affine function of U and xU_x. These examples can be reduced to the zero-volatility case, but in markets with modified risk premia.

The "market-view" case: $(a^1(x,t),\, a^2(x,t)) = U(x,t)\,(\varphi_t^1, \varphi_t^2)$, $\varphi_t^1, \varphi_t^2 \in \mathcal{F}_t$

We assume that the processes φ¹_t, φ²_t are bounded by a (deterministic) constant. The forward performance SPDE, (4.5), becomes
$$dU(x,t) = \frac{1}{2}\big(\lambda(Y_t) + \varphi_t^1\big)^2\,\frac{(U_x(x,t))^2}{U_{xx}(x,t)}\,dt + U(x,t)\big(\varphi_t^1\,dW_t^1 + \varphi_t^2\,dW_t^2\big). \tag{4.37}$$
We introduce the process
$$U(x,t) = u(x, A_t^{\varphi})\,M_t, \tag{4.38}$$
with u as in (4.9), the process A^φ_t, t ≥ 0, defined as
$$A_t^{\varphi} = \int_0^t \big(\lambda(Y_s) + \varphi_s^1\big)^2\,ds, \tag{4.39}$$
and the exponential martingale M_t, t ≥ 0, solving
$$dM_t = M_t\big(\varphi_t^1\,dW_t^1 + \varphi_t^2\,dW_t^2\big) \quad \text{with } M_0 = 1.$$
One may interpret M_t as a device that offers the flexibility to modify our views on asset returns, changing the original market risk premium, λ(Y_t), to $\lambda_t^M = \lambda(Y_t) + \varphi_t^1$. The optimal allocation vector, π*_t, t > 0, has the same functional form as (3.4) but for a different time-rescaling process, namely,
$$\pi_t^* = -\frac{\lambda_t^M}{\sigma(Y_t)}\,\frac{u_x(X_t^*, A_t^{\varphi})}{u_{xx}(X_t^*, A_t^{\varphi})} = \frac{\lambda_t^M}{\sigma(Y_t)}\,r(X_t^*, A_t^{\varphi}),$$
with A^φ as in (4.39) and r as in (4.11). The optimal wealth process solves
$$dX_t^* = r(X_t^*, A_t^{\varphi})\,\lambda_t^M\big(\lambda(Y_t)\,dt + dW_t^1\big).$$
It is worth noticing that if we choose φ¹_t = −λ(Y_t), t ≥ 0, the solutions become static, independently of the choice of the second volatility component. Indeed, the time-rescaling process vanishes, A^φ_t = 0, t > 0. In turn, the forward performance process becomes constant, U(x,t) = u₀(x), t > 0, and the optimal investment and wealth processes degenerate, π*_t = 0 and X*_t = x, t ≥ 0. An optimal policy is to allocate zero wealth to the risky asset.
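The "market-view" construction above can be illustrated by a Monte Carlo sketch (our addition; constant λ, φ¹, φ² and the CARA criterion u(x,t) = 1 − e^{−x+t/2}, for which r ≡ 1, are simplifying assumptions we introduce): it checks that U(x,t) = u(x, A^φ_t)M_t, evaluated along the optimal wealth, keeps the constant expectation u(x,0).

```python
import math, random

random.seed(11)
# Assumed constants: risk premium lam, market views phi1/phi2; CARA means r = 1.
lam, phi1, phi2 = 0.3, 0.1, 0.2
lamM = lam + phi1                     # modified premium lambda^M = lambda + phi^1
x0, T, n_steps, n_paths = 1.0, 0.5, 50, 20_000
dt = T / n_steps
u = lambda x, t: 1.0 - math.exp(-x + t / 2)

est = 0.0
for _ in range(n_paths):
    X, M = x0, 1.0
    for _ in range(n_steps):
        dW1 = random.gauss(0.0, math.sqrt(dt))
        dW2 = random.gauss(0.0, math.sqrt(dt))
        X += lamM * (lam * dt + dW1)  # optimal wealth: dX* = r lam^M (lam dt + dW1), r = 1
        # exponential martingale dM = M (phi1 dW1 + phi2 dW2), exact update
        M *= math.exp(phi1 * dW1 + phi2 * dW2 - 0.5 * (phi1 ** 2 + phi2 ** 2) * dt)
    est += u(X, lamM * lamM * T) * M  # U = u(x, A^phi_t) M_t, with A^phi_T = (lam^M)^2 T
est /= n_paths

assert abs(est - u(x0, 0.0)) < 0.01   # constant expectation along the optimal wealth
print(f"E[U(X*_T, T)] ≈ {est:.4f}, u(x,0) = {u(x0, 0.0):.4f}")
```

Setting phi1 = -lam in this sketch makes lamM = 0, so the wealth stays at x0 and A^φ vanishes, matching the degenerate case described above.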


The "benchmark" case: $(a^1(x,t),\, a^2(x,t)) = (-\delta_t\, x\, U_x(x,t),\, 0)$, $\delta_t \in \mathcal{F}_t$

It is assumed that δ_t, t ≥ 0, is bounded by a deterministic constant. The forward performance SPDE, (4.5), becomes
$$dU(x,t) = \frac{1}{2}\,\frac{\big(U_x(x,t)(\lambda(Y_t) - \delta_t) - x\,U_{xx}(x,t)\,\delta_t\big)^2}{U_{xx}(x,t)}\,dt - x\,U_x(x,t)\,\delta_t\,dW_t^1. \tag{4.40}$$
Let A^δ_t, t ≥ 0, be
$$A_t^{\delta} = \int_0^t \big(\lambda(Y_s) - \delta_s\big)^2\,ds, \tag{4.41}$$
and consider the process N_t, t ≥ 0, solving
$$dN_t = N_t\,\delta_t\big(\lambda(Y_t)\,dt + dW_t^1\big) \quad \text{with } N_0 = 1. \tag{4.42}$$
One can then show that the process
$$U(x,t) = u\Big(\frac{x}{N_t},\, A_t^{\delta}\Big), \tag{4.43}$$
with u as in (4.9), is a forward performance.

One may interpret the auxiliary process N_t, t ≥ 0, as a benchmark with respect to which the performance of investment policies is measured. It is, then, natural to look at the benchmarked optimal portfolio and wealth processes, π̃*_t and X̃*_t, t ≥ 0, defined, respectively, as
$$\tilde{\pi}_t^* = \frac{\pi_t^*}{N_t} \quad \text{and} \quad \tilde{X}_t^* = \frac{X_t^*}{N_t}.$$
Using (4.6) and (4.43) we obtain, setting $\lambda_t^N = \lambda(Y_t) - \delta_t$,
$$\tilde{\pi}_t^* = -\frac{\lambda_t^N}{\sigma(Y_t)}\,\frac{u_x\big(\tilde{X}_t^*, A_t^{\delta}\big)}{u_{xx}\big(\tilde{X}_t^*, A_t^{\delta}\big)} + \frac{\delta_t}{\sigma(Y_t)}\,\tilde{X}_t^* \tag{4.44}$$
$$\qquad = \frac{\delta_t}{\sigma(Y_t)}\,\tilde{X}_t^* + \frac{\lambda_t^N}{\sigma(Y_t)}\,r\big(\tilde{X}_t^*, A_t^{\delta}\big),$$
with A^δ_t, t ≥ 0, as in (4.41), r as in (4.11) and X̃*_t solving
$$d\tilde{X}_t^* = \tilde{R}_t^*\,\lambda_t^N\big(\lambda_t^N\,dt + dW_t^1\big),$$
where $\tilde{R}_t^* = r(\tilde{X}_t^*, A_t^{\delta})$.

The optimal portfolio process is represented as the sum of two funds, say π̃^{*,X}_t and π̃^{*,R}_t, defined as
$$\tilde{\pi}_t^{*,X} = \frac{\delta_t}{\sigma(Y_t)}\,\tilde{X}_t^* \quad \text{and} \quad \tilde{\pi}_t^{*,R} = \frac{\lambda_t^N}{\sigma(Y_t)}\,r\big(\tilde{X}_t^*, A_t^{\delta}\big).$$
The first component is independent of the risk preferences, depends linearly on wealth and vanishes if δ_t = 0. The situation is reversed for the other component, in that it depends only on the investor's risk preferences and vanishes when δ_t = λ(Y_t). The latter condition corresponds to the case when the stock becomes the benchmark itself. Note that, even for exponential preferences, the optimal portfolio may depend on the wealth if performance is measured in terms of a benchmark different than the savings account.


Some open problems

Problem 1: Characterise the class of volatility processes for which the SPDE (4.5) has a solution which satisfies the requirements of a forward performance process.

Problem 2: Prove a verification theorem for the forward stochastic optimisation problem (4.5).

Problem 3: Characterise the family of initial risk preferences u₀(x) for which a forward performance process exists.

Problem 4: Infer the investor's initial risk preferences from his desirable investment targets.

Problem 5: Study the invariance and consistency of the forward performance process and the associated optimal portfolios in terms of different numeraires and benchmarks.

Bibliography

[1] Ait-Sahalia, Y. and M. Brandt: Variable selection for portfolio choice, Journal of Finance, 56, 1297–1351 (2001).
[2] Arrow, K.: Aspects of the theory of risk bearing, Helsinki, Hahnson Foundation (1965).
[3] Bates, D.S.: Post-87 crash fears and S&P futures options, Journal of Econometrics, 94, 181–238 (2000).
[4] Benzoni, L.P., Collin-Dufrense, C. and R.S. Goldstein: Portfolio Choice over the Life-Cycle when the Stock and Labor Markets Are Cointegrated, The Journal of Finance, 62(5), 2123–2167 (2007).
[5] Barrier, F., Rogers, L.C. and M. Tehranchi: A characterization of forward utility functions, preprint (2007).
[6] Becherer, D.: The numeraire portfolio for unbounded semimartingales, Finance and Stochastics, 5, 327–344 (2001).
[7] Black, F.: Investment and consumption through time, Financial Note 6B (1968).
[8] Borkar, V.S.: Optimal control of diffusion processes, Pitman Research Notes, 203 (1983).
[9] Borrell, C.: Monotonicity properties of optimal investment strategies for log-Brownian asset prices, Mathematical Finance, 17(1), 143–153 (2007).
[10] Bouchard, B. and H. Pham: Wealth-path dependent utility maximization in incomplete markets, Finance and Stochastics, 8, 579–603 (2004).
[11] Bouchard, B. and N. Touzi: Weak Dynamic Programming Principle for viscosity solutions, submitted for publication (2009).
[12] Brandt, M.: Estimating portfolio and consumption choice: A conditional Euler equation approach, Journal of Finance, 54, 1609–1645 (1999).


[13] Brennan, M.J., Schwartz, E.S. and R. Lagnado: Strategic asset allocation, Journal of Economic Dynamics and Control, 21, 1377–1402 (1997).
[14] Brennan, M.J.: The role of learning in dynamic portfolio decisions, European Finance Review, 1, 295–306 (1998).
[15] Brennan, M. and Y. Xia: Stochastic interest rates and the bond-stock mix, European Finance Review, 4, 197–210 (2000).
[16] Brennan, M. and Y. Xia: Dynamic asset allocation under inflation, Journal of Finance, 57, 1201–1238 (2002).
[17] Campbell, J.Y. and L. Viceira: Consumption and portfolio decisions when expected returns are time varying, Quarterly Journal of Economics, 114, 433–495 (1999).
[18] Campbell, J.Y. and J. Cochrane: By force of habit: A consumption-based explanation of aggregate stock market behavior, Journal of Political Economy, 107, 205–251 (1999).
[19] Campbell, J.Y. and L.M. Viceira: Who should buy long-term bonds?, The American Economic Review, 91, 99–127 (2001).
[20] Canner, N., Mankiw, N.G. and D.N. Weil: An asset allocation puzzle, The American Economic Review, 87, 181–191 (1997).
[21] Carmona, R. (Ed.): Indifference pricing, Princeton University Press (2009).
[22] Chacko, G. and L.M. Viceira: Dynamic consumption and portfolio choice with stochastic volatility in incomplete markets, Review of Financial Studies, 18, 1369–1402 (2005).
[23] Choulli, T., Stricker, C. and J. Li: Minimal Hellinger martingale measures of order q, Finance and Stochastics, 11(3), 399–427 (2007).
[24] Constantinides, G.: A theory of the nominal term structure of interest rates, Review of Financial Studies, 5, 531–552 (1992).
[25] Cox, J.C., Ingersoll, J.E. and S.A. Ross: A theory of the term structure of interest rates, Econometrica, 53, 385–407 (1985).
[26] Crandall, M., Ishii, H. and P.-L. Lions: User's guide to viscosity solutions of second order partial differential equations, Bulletin of the American Mathematical Society, 27, 1–67 (1992).
[27] Cvitanic, J., Schachermayer, W. and H. Wang: Utility maximization in incomplete markets with random endowment, Finance and Stochastics, 5, 259–272 (2001).
[28] Deelstra, G., Grasselli, M. and P.-F. Koehl: Optimal investment strategies in a CIR framework, Journal of Applied Probability, 37, 936–946 (2000).
[29] Detemple, J. and M. Rindisbacher: Closed-form solutions for optimal portfolio selection with stochastic interest rate and investment constraints, Mathematical Finance, 15(4), 539–568 (2005).
[30] Detemple, J., Garcia, R. and M. Rindisbacher: A Monte Carlo method for optimal portfolios, The Journal of Finance, 58(1), 401–446 (2003).
[31] Duffie, D. and P.-L. Lions: PDE solutions of stochastic differential utility, Journal of Mathematical Economics, 21, 577–606 (1992).
[32] El Karoui, N., Nguyen, D.H. and M. Jeanblanc: Compactification methods in the control of degenerate diffusions: existence of an optimal control, Stochastics, 20, 169–220 (1987).
[33] El Karoui, N., Peng, S. and M.C. Quenez: Backward stochastic differential equations in finance, Mathematical Finance, 7(1), 1–71 (1997).
[34] Fama, W.E. and G.W. Schwert: Asset returns and inflation, Journal of Financial Economics, 5, 115–146 (1977).


[35] Ferson, W.E. and C.R. Harvey: The risk and predictability of international equity returns, Review of Financial Studies, 6, 527–566 (1993).
[36] Fleming, W. and D. Hernandez-Hernandez: An optimal consumption model with stochastic volatility, Finance and Stochastics, 7, 245–262 (2003).
[37] Fleming, W.H. and M.H. Soner: Controlled Markov processes and viscosity solutions, Springer-Verlag, 2nd edition (2005).
[38] French, K.R., Schwert, G.W. and R.F. Stambaugh: Expected stock returns and volatility, Journal of Financial Economics, 19, 3–29 (1987).
[39] Glosten, L.R., Jagannathan, R. and D.E. Runkle: On the relation between the expected value and the volatility of the nominal excess return of stocks, Journal of Finance, 48, 1779–1801 (1993).
[40] Goll, T. and J. Kallsen: Optimal portfolios for logarithmic utility, Stochastic Processes and their Applications, 89, 31–48 (2000).
[41] Goll, T. and J. Kallsen: A complete explicit solution to the log-optimal portfolio problem, The Annals of Applied Probability, 12(2), 774–799 (2003).
[42] Harvey, C.R.: Time-varying conditional covariances in tests of asset pricing models, Journal of Financial Economics, 24, 289–317 (1989).
[43] Heaton, J. and D. Lucas: Market frictions, savings behavior and portfolio choice, Macroeconomic Dynamics, 1, 76–101 (1997).
[44] Henderson, V. and D. Hobson: Horizon-unbiased utility functions, Stochastic Processes and their Applications, 117(11), 1621–1641 (2007).
[45] Huang, C.-F. and T. Zariphopoulou: Turnpike behavior of long-term investments, Finance and Stochastics, 3(1), 15–34 (1999).
[46] Huggonier, J. and D. Kramkov: Optimal investment with random endowments in incomplete markets, Annals of Applied Probability, 14, 845–864 (2004).
[47] Karatzas, I.: Lectures on the Mathematics of Finance, CRM Monograph Series, American Mathematical Society (1997).
[48] Karatzas, I., Lehoczky, J.P., Shreve, S.E. and G.-L. Xu: Martingale and duality methods for utility maximization in an incomplete market, SIAM Journal on Control and Optimization, 25, 1157–1586 (1987).
[49] Karatzas, I. and G. Zitkovic: Optimal consumption from investment and random endowment in incomplete semimartingale markets, Annals of Applied Probability, 31(4), 1821–1858 (2003).
[50] Karatzas, I. and C. Kardaras: The numeraire portfolio in semimartingale financial models, Finance and Stochastics, 11, 447–493 (2007).
[51] Kim, T.S. and E. Omberg: Dynamic nonmyopic portfolio behavior, Review of Financial Studies, 9, 141–161 (1996).
[52] Kimball, M.S.: Precautionary saving in the Small and in the Large, Econometrica, 58, 53–73 (1990).
[53] Korn, R. and H. Kraft: On the stability of continuous-time portfolio problems with stochastic opportunity set, Mathematical Finance, 14, 403–414 (2003).
[54] Korn, R. and H. Kraft: A stochastic control approach to portfolio problems with stochastic interest rates, SIAM Journal on Control and Optimization, 40, 1250–1269 (2001).
[55] Korn, R. and E. Korn: Option pricing and portfolio optimization – Modern methods of Financial Mathematics, American Mathematical Society (2001).


[56] Kraft, H.: Optimal portfolios and Heston's stochastic volatility model, Quantitative Finance, 5, 303–313 (2005).
[57] Kramkov, D. and W. Schachermayer: The asymptotic elasticity of utility functions and optimal investment in incomplete markets, The Annals of Applied Probability, 9(3), 904–950 (1999).
[58] Kramkov, D. and W. Schachermayer: Necessary and sufficient conditions in the problem of optimal investment in incomplete markets, The Annals of Applied Probability, 13(4), 1504–1516 (2003).
[59] Kramkov, D. and M. Sirbu: On the two times differentiability of the value functions in the problem of optimal investment in incomplete market, The Annals of Applied Probability, 16(3), 1352–1384 (2006).
[60] Krylov, N.: Controlled diffusion processes, Springer-Verlag (1987).
[61] Landsberger, M. and I. Meilijnson: Demand for risky assets: A portfolio analysis, Journal of Economic Theory, 50, 204–213 (1990).
[62] Lions, P.-L.: Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations. Part I: The Dynamic Programming Principle and applications, Communications in Partial Differential Equations, 8, 1101–1174 (1983).
[63] Lions, P.-L.: Optimal control of diffusion processes and Hamilton–Jacobi–Bellman equations. Part II: Viscosity solutions and uniqueness, Communications in Partial Differential Equations, 8, 1229–1276 (1983).
[64] Liu, J.: Portfolio selection in stochastic environments, Review of Financial Studies, 20(1), 1–39 (2007).
[65] Malamud, S. and E. Trubowitz: The structure of optimal consumption streams in general incomplete markets, Mathematics and Financial Economics, 1, 129–161 (2007).
[66] Merton, R.: Lifetime portfolio selection under uncertainty: the continuous-time case, The Review of Economics and Statistics, 51, 247–257 (1969).
[67] Merton, R.: Optimum consumption and portfolio rules in a continuous-time model, Journal of Economic Theory, 3, 373–413 (1971).
[68] Mnif, M.: Portfolio Optimization with Stochastic Volatilities and Constraints: An Application in High Dimension, Applied Mathematics and Optimization, 56, 243–264 (2007).
[69] Musiela, M. and T. Zariphopoulou: The backward and forward dynamic utilities and their associated pricing systems: The case study of the binomial model, preprint (2003).
[70] Musiela, M. and T. Zariphopoulou: The single period binomial model, Indifference Pricing, R. Carmona (ed.), Princeton University Press (2009).
[71] Musiela, M. and T. Zariphopoulou: Optimal asset allocation under forward exponential criteria, Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz, IMS Collections, Institute of Mathematical Statistics, 4, 285–300 (2008).
[72] Musiela, M. and T. Zariphopoulou: Portfolio choice under dynamic investment performance criteria, Quantitative Finance, in press.
[73] Musiela, M. and T. Zariphopoulou: Portfolio choice under space-time monotone performance criteria, submitted for publication (2008).
[74] Musiela, M. and T. Zariphopoulou: Stochastic partial differential equations in portfolio choice, preprint (2007).
[75] Neave, E.H.: Multi-period consumption-investment decisions and risk preferences, Journal of Economic Theory, 3, 40–53 (1971).


[76] Pagan, A.R. and G.W. Schwert: Alternative models for conditional stock volatility, Journal of Econometrics, 45, 267–290 (1990).
[77] Pham, H.: Smooth solutions to optimal investment models with stochastic volatilities and portfolio constraints, Applied Mathematics and Optimization, 46, 1–55 (2002).
[78] Ross, S.A.: Some stronger measures of risk aversion in the small and in the large with applications, Econometrica, 49(3), 621–639 (1981).
[79] Sangvinatsos, A. and J. Wachter: Does the failure of the expectations hypothesis matter for long-term investors?, Journal of Finance, 60, 179–230 (2005).
[80] Schachermayer, W.: Optimal investment in incomplete markets when wealth may become negative, Annals of Applied Probability, 11(3), 694–734 (2001).
[81] Schachermayer, W.: A super-martingale property of the optimal portfolio process, Finance and Stochastics, 7(4), 433–456 (2003).
[82] Schroder, M. and C. Skiadas: Optimal lifetime consumption-portfolio strategies under trading constraints and generalized recursive preferences, Stochastic Processes and their Applications, 108, 155–202 (2003).
[83] Schroder, M. and C. Skiadas: Lifetime consumption-portfolio choice under trading constraints, recursive preferences and nontradeable income, Stochastic Processes and their Applications, 115, 1–30 (2005).
[84] Scruggs, J.T.: Resolving the puzzling intertemporal relation between the market risk premium and conditional market variance: a two-factor approach, Journal of Finance, 53, 575–603 (1998).
[85] Stoikov, S. and T. Zariphopoulou: Optimal investments in the presence of unhedgeable risks and under CARA preferences, IMA Volume Series, Institute for Mathematics and its Applications, in press (2009).
[86] Stutzer, M.: Portfolio choice with endogenous utility: A large deviations approach, Journal of Econometrics, 116, 365–386 (2003).
[87] Tehranchi, M. and N. Ringer: Optimal portfolio choice in the bond market, Finance and Stochastics, 10(4), 553–573 (2006).
[88] Touzi, N.: Stochastic control problems, viscosity solutions and application to finance, Lecture Notes, Scuola Normale Superiore, Pisa (2002).
[89] Wachter, J.: Risk aversion and allocation to long term bonds, Journal of Economic Theory, 112, 325–333 (2003).
[90] Wachter, J.: Portfolio and consumption decisions under mean-reverting returns: An exact solution for complete markets, Journal of Financial and Quantitative Analysis, 37, 63–91 (2002).
[91] Widder, D.V.: The heat equation, Academic Press (1975).
[92] Yong, J. and X.Y. Zhou: Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer, New York (1999).
[93] Zariphopoulou, T.: A solution approach to valuation with unhedgeable risks, Finance and Stochastics, 5, 61–82 (2001).
[94] Zariphopoulou, T. and T. Zhou: Investment performance measurement under asymptotically linear local risk tolerance, Handbook of Numerical Analysis, A. Bensoussan (Ed.), in print (2009).
[95] Zitkovic, G.: A dual characterization of self-generation and log-affine forward performances, Annals of Applied Probability, in press.


[96] Zitkovic, G.: Utility theory – historical perspectives, Encyclopedia of Quantitative Finance, in press (2009).

Author information

Thaleia Zariphopoulou, The University of Texas at Austin, Department of Mathematics, 1 University Station – C1200, Austin, TX 78712-0257, U.S.A.
Email: [email protected]