MATHEMATICS RESEARCH DEVELOPMENTS
HANDBOOK OF OPTIMIZATION THEORY: DECISION ANALYSIS AND APPLICATION
No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.
MATHEMATICS RESEARCH DEVELOPMENTS
Additional books in this series can be found on Nova's website under the Series tab.
Additional e-books in this series can be found on Nova's website under the E-book tab.
MATHEMATICS RESEARCH DEVELOPMENTS
HANDBOOK OF OPTIMIZATION THEORY: DECISION ANALYSIS AND APPLICATION
JUAN VARELA AND
SERGIO ACUÑA EDITORS
Nova Science Publishers, Inc. New York
Copyright © 2011 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175. Web Site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.

Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA

Handbook of optimization theory : decision analysis and application / [edited by] Juan Varela and Sergio Acuña.
p. cm.
Includes index.
ISBN 978-1-62100-138-6 (eBook)
1. Gene mapping--Mathematics. 2. Mathematical optimization. I. Varela, Juan, 1954– II. Acuña, Sergio, 1960–
QH445.2.H36 2009
519.6--dc22
2009044352
Published by Nova Science Publishers, Inc. † New York
CONTENTS

Preface
Chapter 1. Discrete Optimization for Some TSP-Like Genome Mapping Problems (D. Mester, Y. Ronin, M. Korostishevsky, Z. Frenkel, O. Bräysy, W. Dullaert, B. Raa and A. Korol)
Chapter 2. Benchmarking Hospital Units' Efficiency Using Data Envelopment Analysis: The Case of Greek Obstetric and Gynaecology Public Units (Maria Katharaki)
Chapter 3. Markov Models in Manpower Planning: A Review (Tim De Feyter and Marie-Anne Guerry)
Chapter 4. Stochastic Differential Games with Structural Uncertainties: A Paradigm for Interactive Stochastic Optimization (David W.K. Yeung)
Chapter 5. An Optimization Approach for Inventory Routing Problem in Congested Road Network (Suh-Wen Chiou)
Chapter 6. Accelerating Iterative Solvers with Reconfigurable Hardware (Issam Damaj)
Chapter 7. Fair Division Problems with Nonadditive Evaluations: Existence of Solutions and Representation of Preference Orderings (Nobusumi Sagara)
Chapter 8. A VOF-Based Shape Optimization Method for Incompressible Hyperelastic Materials (Kazuhisa Abe)
Chapter 9. Multiobjective Optimization: Quasi-even Generation of Pareto Frontier and Its Local Approximation (Sergei V. Utyuzhnikov)
Chapter 10. On the Dynamics of Coalition Structure Beliefs (Giuseppe De Marco and Maria Romaniello)
Chapter 11. Relaxed Stability Conditions for Linear Time-Varying Systems with Applications to the Robust Stabilization Problem (Leopoldo Jetto and Valentina Orsini)
Chapter 12. A Decomposition Method to Solve Non-Symmetric Variational Inequalities (Gabriela F. Reyero and Rafael V. Verdes)
Chapter 13. Numerical Approximation to Solve QVI Systems Related to Some Production Optimization Problems (Laura S. Aragone and Elina M. Mancinelli)
Chapter 14. Vector Optimization on Metric Spaces (Alexander J. Zaslavski)
Chapter 15. Robust Static and Dynamic Output Feedback Suboptimal Control of Uncertain Discrete-Time Systems Using Additive Gain Perturbations (Hiroaki Mukaidani, Yasuhisa Ishii, Yoshiyuki Tanaka and Toshio Tsuji)
Chapter 16. Numerical Computation for Solving Cross-Coupled Large-Scale Singularly Perturbed Stochastic Algebraic Riccati Equation (Hiroaki Mukaidani and Vasile Dragan)
Chapter 17. Subprime Mortgages and Their Securitization with Regard to Capital, Information, Risk and Valuation (M.A. Petersen, S. Thomas, M.C. Senosi, J. Mukuddem-Petersen, T. Bosch, M.P. Mulaudzi, I.M. Schoeman and B. De Waal)
Chapter 18. Mortgage Loan Securitization, Capital and Profitability and Their Connections with the Subprime Banking Crisis (M.A. Petersen, M.P. Mulaudzi, I.M. Schoeman, J. Mukuddem-Petersen and B. De Waal)
Chapter 19. Queueing Networks with Retrials due to Unsatisfactory Service (Jesus R. Artalejo)
Chapter 20. Some Results on Condition Numbers (Zhao Li, Seak-Weng Vong, Yi-Min Wei and Xiao-Qing Jin)
Chapter 21. The Subprime Mortgage Crisis: Optimal Cash Flows from the Financial Leverage Profit Engine (M.A. Petersen, M.P. Mulaudzi, J. Mukuddem-Petersen, B. De Waal and I.M. Schoeman)
Index
PREFACE

Several problems in modern genome mapping analysis belong to the field of discrete optimization on a set of all possible orders. In this book, formulations, mathematical models and algorithms for genetic/genomic mapping problems that can be formulated in TSP-like terms are proposed. Since the 1960s, Operational Research techniques have been extensively developed to support organizations in their Manpower Planning challenge - a fundamental aspect of Human Resource Management in organizations. This book reviews these techniques, the alternative approaches that have been introduced in Manpower Planning (e.g., simulation techniques) and, in general, Markov Chain Theory. Furthermore, the authors of this book propose a new class of strategies for determining the optimal inventory replenishments for each retailer. In addition, the authors demonstrate how to increase the usage of iterative methods in all possible fields by accelerating such solvers using Reconfigurable Hardware. An optimization method for the material layout of incompressible rubber components is presented as well. Other chapters in this book use a generic approach to study minimization problems on a complete metric space, provide a novel design method for an output feedback suboptimal control problem, derive Lévy process-based models of jump diffusion type for banking operations involving securitization, capital and profitability, and investigate the optimality of the loan securitization process that has had a prominent role to play in the subprime mortgage crisis (SMC).

Several problems in modern genome mapping analysis belong to the field of discrete optimization on a set of all possible orders. In Chapter 1 the authors propose formulations, mathematical models and algorithms for genetic/genomic mapping problems that can be formulated in TSP-like terms. These include: ordering of marker loci (or genes) in multilocus genetic mapping (MGM), multilocus consensus genetic mapping (MCGM), and the physical mapping problem (PMP). All these problems are considered computationally challenging because of noisy marker scores, large-size data sets, specific constraints on certain classes of orders, and other complications. The presence of specific constraints on the ordering of some elements in these problems prevents effective application of well-known, powerful discrete optimization algorithms such as Cutting-plane, the Genetic Algorithm with EAX crossover and the famous Lin-Kernighan heuristic. The authors demonstrate that the Guided Evolution Strategy algorithms they developed successfully solve this class of discrete constrained optimization problems. The efficiency of the proposed algorithm is demonstrated on standard TSP problems and on three genetic/genomic problems with up to 2,500 points.
Hospital institutions' managers are called upon to combine and utilize finite financial resources efficiently toward the goal of maximizing the number and quality of health services offered. The research aim of Chapter 2 is primarily to estimate relative technical efficiency using a sample of public hospital units that provide obstetrical and gynaecological services in Greece and, secondly, to emphasize the policy implications for health-sector policy-makers. In order to effectively address the above goals, a comparative analysis of 32 Greek public hospital units was conducted. The research was based on data collected from official public sources. Quantitative analysis, specifically data envelopment analysis (DEA), is used to estimate the efficiency of hospital units. Based on the results that emerge from the application of Data Envelopment Analysis, information is provided to their managers which refers to: (i) the degree of utilization of their production factors, (ii) the particular weight of each production factor in the modulation of the relative technical efficiency score, (iii) the utilization level of each production factor, and (iv) those hospital units that utilize their resources in an optimal way and constitute models for the exercise of effective management. Particular emphasis is given to the economic efficiency of central-region hospital units relative to those of the outlying regions. The derived information assists in the modulation of an appropriate policy mix per hospital unit, to be applied by their management teams along with a set of administrative measures that need to be undertaken in order to promote efficiency.

Manpower Planning is a fundamental aspect of Human Resource Management in organizations. The objective of Manpower Planning is to develop plans to meet future human resource requirements. A shortage as well as a surplus of (skilled) staff would be highly undesirable: it would lead to lower production, loss of orders and customers, higher costs and/or less profit. Especially for companies confronted with an ageing workforce or shortages on the labor market, Manpower Planning becomes a crucial instrument for creating a sustainable competitive advantage. Since the 1960s, Operational Research techniques have been extensively developed to support organizations in their Manpower Planning challenge. Those techniques are especially interesting tools for large organizations in gaining insight into their complex manpower system and in clarifying the future dynamics of its workforce: employees might leave the organization, acquire experience and qualifications, or develop a broader range of skills. Although alternative approaches have recently been introduced in Manpower Planning (e.g., simulation techniques), in general Markov Chain Theory remains useful to model the dynamics in a manpower system: Manpower Planning models based on Markov Chains aim to predict the evolution of the manpower system and/or control it by setting the organization's human resource policies (e.g., recruitment, promotion, training). The analytical Markov approach allows identifying interesting characteristics of the manpower system which influence its future dynamics. Although Markov manpower models are by nature stochastic, many researchers use a deterministic approach, assuming that all parameters of the model are known and precisely determined. This indeed allows focusing on special characteristics of specific manpower systems.
However, this knowledge is only applicable if the basic assumptions of Markov Manpower Planning are respected and the parameters of the model are reliably estimated. More specifically, the aggregated Markov models are defined by transition probabilities between homogeneous subgroups of the manpower system. Therefore stochastic approaches take into account the uncertainty and imprecision in real-world applications and suggest methodologies for model building.
There is a rich variety of publications on Markov models in Manpower Planning, in which properties of manpower systems are investigated under very specific assumptions. This results in different types of models. Chapter 3 offers a review of those different types of models and covers the latest advances in the field. The structure of the overview follows the successive stages of the Markov Manpower Planning methodology in real-world applications, from model building and selection, parameter estimation and model validation to prediction and control.

An essential characteristic of decision making over time is that though the decision-maker may gather past and present information about his environment, the future is inherently not completely knowable and is therefore uncertain. An empirically meaningful theory of optimization must therefore incorporate uncertainty in an appropriate manner. Important forms of structural uncertainty follow from uncertainty in future payoff structures and in the future configurations of the state dynamics. Chapter 4 presents a general class of stochastic differential games in which future payoff structures and configurations of the state dynamics are not known with certainty; only the probability distributions of the payoff structures and those of the configurations of the state dynamics are known. A mechanism for solving this class of games is derived and examples are provided. The analysis is also extended to cover the case of infinite-horizon games. It is the first time that stochastic differential games with uncertain payoff structures and state-dynamics configurations are presented. Novel subclasses of differential games and control problems can be derived from the model. The results can also be applied to single decision-maker optimization theory. In sum, the analysis has widened the application of game theory by providing a paradigm for modeling game-theoretic situations over time with more content and realism.

Consider a congested urban road network with one depot and many geographically dispersed retailers facing demands at a constant and deterministic rate over the planning horizon, but with lead times that vary due to traffic congestion. All stock enters the network through the depot, from where it is distributed to the retailers by a fleet of vehicles. In Chapter 5, the authors propose a new class of strategies for determining the optimal inventory replenishments for each retailer while taking the efficient delivery design into account, such that the minimization of total inventory cost and transportation cost is achieved. A mathematical program is formulated for this combined problem. Numerical computations are conducted and good results are obtained with reasonable computational effort.

In Chapter 6, the authors aim at increasing the usage of iterative methods in all possible fields by accelerating such solvers using Reconfigurable Hardware. To demonstrate the acceleration of these solvers, the authors implement the Jacobi solver on different classes of FPGAs, such as Virtex II Pro, Altera Stratix and Spartan3L. The design presented is implemented using Handel-C, a compiler with hardware output. The obtained results show that reconfigurable hardware is suitable for realizing accelerated versions of such solvers. The purpose of Chapter 7 is to investigate fair division problems in which each player has nonadditive utility functions on a σ-algebra.
To this end, the authors report the known results on the characterization and existence of solutions for additive utilities and demonstrate how the additive case can(not) be extended to the nonadditive case. Moreover, the authors axiomatize preference orderings on σ-algebras representable by numerical functions (utility functions). In this chapter, they formulate representations of partial orderings on a σ-algebra in terms of nonadditive set functions that satisfy the appropriate requirements of convexity and continuity. The authors provide several nonadditive representation theorems.
Chapter 8 presents an optimization method for the material layout of incompressible rubber components. Topology optimization is realized within the framework of the material approach, in which the material distribution is described on an Eulerian mesh. In order to avoid the occurrence of checkerboard patterns and intermediate densities, the VOF method is employed for the representation of the material region. In this method an optimal shape results from the advection of the VOF function governed by the Hamilton-Jacobi equation. The relaxation of incompressibility in void regions is achieved by replacing the rubber with a compressible linear material. Through numerical examples, the validity of the developed method is examined.

In multidisciplinary optimization the designer needs to find a solution to an optimization problem that includes a number of usually contradicting criteria. Such a problem is mathematically related to the field of nonlinear vector optimization with constraints. It is well known that the solution to this problem is far from unique and is given by a Pareto surface. In real-life design the decision-maker is able to analyze only several Pareto optimal (trade-off) solutions. Therefore, a well-distributed representation of the entire Pareto frontier is especially important. At present, there are only a few methods that are capable of evenly generating a Pareto frontier in a general formulation. In Chapter 9 these methods are compared to each other, with the main focus being on a general strategy combining the advantages of the known algorithms. The approach is based on shrinking a search domain to generate a Pareto optimal solution in a selected area on the Pareto frontier. The search domain can be easily constructed in the general multidimensional formulation. The efficiency of the method is demonstrated on different test cases. For the problem in question, it is also important to carry out a local analysis. This provides an opportunity for sensitivity analysis and local optimization. In general, the local approximation of a Pareto frontier is able to complement a quasi-evenly generated Pareto set.

In Hart and Kurz (1983), the stability and formation of coalition structures have been investigated in a noncooperative framework in which the strategy of each player is the coalition he wishes to join. However, given a strategy profile, the coalition structure formed is not unequivocally determined. In order to solve this problem, they proposed two rules of coalition structure formation: the γ- and the δ-models. In Chapter 10 the authors look at evolutionary games arising from the γ-model for situations in which each player can choose mixed strategies and has vague expectations about the formation rule of the coalitions in which he is not involved; players determine their strategies at every instant, and the authors study how, for every player, subjective beliefs on the set of coalition structures evolve coherently with the strategic choices. Coherency is regarded as a viability constraint for the differential inclusions describing the evolutionary game. Therefore, the authors investigate viability properties of the constraints and characterize velocities of belief/strategy pairs which guarantee that coherency of beliefs is always satisfied. Finally, among the many coherent belief revisions (evolutions), the authors investigate those characterized by minimal change and provide existence results.
The twofold purpose of Chapter 11 is to state relaxed stability conditions for linear time-varying systems with possible parametric uncertainties and then to consider their application to robust stabilization problems. Sufficient conditions for exponential stability are derived with reference to linear time-varying (LTV) systems of the form Dx(t) = A(t)x(t), where D denotes the time-derivative (t ∈ ℝ) or forward-shift (t ∈ ℤ) operator and A(·) is uniformly bounded. The proposed approach derives and uses the notion of the perturbed frozen time (PFT) form that can be associated with any LTV system. Exploiting the Bellman–Gronwall lemma, relaxed stability conditions are then stated in terms of "average" parameter variations. This leads to the notion of a "small average variation plant". Salient features of the approach are: pointwise stability of A(·) is not required, the derivative Ȧ(·) may not be bounded, and the stability conditions also apply to uncertain systems. As shown in the second part of this chapter, the developed stability analysis represents a powerful tool that can also be employed in robust stabilization problems. The main advantage of the proposed synthesis method is that it can be applied without assuming an accessible state vector or a particular parametric dependence. The approach is illustrated by numerical examples.

In Chapter 12 the authors present a decomposition method to solve linear non-symmetric variational inequalities. Some preliminary ideas concerning this method, especially related to the symmetric case, can be seen in [6]–[8] and [13]–[20], together with some concrete applications. The method itself stems from the general principles exposed in [11]. The procedure was developed to solve the set of junction problems presented in [10]. Basically, the problem consists in solving the variational inequality: find ū ∈ K such that

a(ū, v − ū) ≥ (f, v − ū) for all v ∈ K,  (1)

where K is a closed convex set. In their work, the authors suppose that K can be decomposed in the following form:

K = ∪ K̂(v_I), v_I ∈ K_I,  (2)

where v_I is an auxiliary variable which belongs to a convex set K_I. The decomposition of K given by (2) implies that the original problem – which is equivalent to a saddle-point problem on the whole set K × K – can be decomposed into a set of variational inequalities defined on the sets K̂(v_I) for each value of the auxiliary variable v_I. These variational inequalities correspond to simplified saddle-point problems defined on the sets K̂(v_I) × K̂(v_I) which, generally, are smaller than K × K. In the second part of the method, the privileged v̄_I is found such that ū ∈ K̂(v̄_I), where ū is the original solution; ū itself is computed by solving the simplified variational inequality:

a(ū, v − ū) ≥ (f, v − ū) for all v ∈ K̂(v̄_I).
The chapter is organized in the following form: in Section 2 the authors present the original VI, its properties and an equivalent reformulation as a saddle-point problem. In Section 3 they present the methodology of decomposition and the solution by a system of hierarchically coupled variational inequalities. In Section 4 an iterative algorithm is described and its convergence is proved. In Appendix 1 some properties of continuity and convexity of some auxiliary functions are proved, and in Appendix 2 some properties of differentiability of the same auxiliary functions are proved.

In Chapter 13 the authors develop numerical procedures to solve Quasi-Variational Inequality (QVI) systems which appear in the optimization of some problems related to a multi-item single machine, where at any time the machine is either idle or producing one of m
different items. Their aim is to obtain an optimum production schedule for each of the cases they propose. The authors focus their attention on three cases: the discount case, where the cost to be optimized takes into account a running cost and the production switching costs (as the integral cost functional has an infinite horizon, a discount factor is used to guarantee that the integral converges); the ergodic case, where the objective is to find an optimal production schedule that minimizes the average cost over an infinite horizon; and the piecewise deterministic case, where the demand varies randomly according to a piecewise deterministic process, the demand changes are described by a Poisson process, and the demand may take a finite number of values.

In Chapter 14 the authors use a generic approach to study vector minimization problems on a complete metric space. They discuss their recent results which show that solutions of vector minimization problems exist generically for certain classes of problems. Each of these classes of problems is identified with a space of functions equipped with a natural complete metric, and it is shown that there exists a Gδ everywhere dense subset of the space of functions such that for any element of this subset the corresponding vector minimization problem possesses solutions. The authors also discuss the stability and the structure of the set of solutions of a vector minimization problem.

Chapter 15 provides a novel design method for an output feedback suboptimal control problem for a class of uncertain discrete-time systems using additive gain perturbations. Based on the linear matrix inequality (LMI), a class of fixed output feedback controllers is established, and some sufficient conditions for the existence of the suboptimal controller are derived. The novel contribution is that time-variant additive gain perturbations are included in the feedback systems. Although the additive gain perturbations act on the feedback systems, both stability of the closed-loop systems and an adequate suboptimal cost are attained. A numerical example demonstrates that the large cost due to the LMI design can be reduced by using additive gain perturbations.

In Chapter 16, linear quadratic infinite-horizon Nash games for large-scale singularly perturbed stochastic systems (LSPSS) are studied. After establishing the local uniqueness and the asymptotic structure of the solutions to the cross-coupled large-scale stochastic algebraic Riccati equation (CLSARE), a new algorithm on the basis of Newton's method is established. It is shown that the quadratic convergence of the proposed method under an appropriate initial guess is guaranteed by using this structure of the solutions. Furthermore, in order to avoid large-dimensional matrix computations, fixed point iterations are also given. As a result, the results obtained in this paper represent very powerful tools for simplified computations with high-order accuracy. Computational examples are given to demonstrate the efficiency and feasibility of the proposed algorithm.

In Chapter 17, the authors investigate the securitization of subprime residential mortgage loans (RMLs) into structured notes such as subprime residential mortgage-backed securities (RMBSs) and collateralized debt obligations (CDOs). In this regard, their deliberations separately focus on capital, information, risk and valuation under RMBSs and RMBS CDOs.
With regard to the former, their contribution discusses credit (including counterparty and default), market (including interest rate, price and liquidity), operational (including house appraisal, valuation and compensation), tranching (including maturity mismatch and synthetic) and systemic (including maturity transformation) risks. The hypothesis of this chapter is that the SMC was mainly caused by the intricacy and design of subprime mortgage origination and securitization as well as systemic agents. This led to information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation. This claim is illustrated via several examples.

In Chapter 18, the authors derive Lévy process-based models of jump diffusion type for originator (OR) operations involving subprime residential mortgage loans (RMLs) and their securitization, capital as well as profitability. The main motivation for their study is the fact that RMLs, residential mortgage-backed securities (RMBSs) and related mortgage products are inextricably linked to the causes, consequences and cures of the subprime mortgage crisis (SMC). A further motivation is the need to generalize the more traditional discrete- and continuous-time models of RMBSs, regulatory capital, returns on assets (ROA) and returns on equity (ROE) in the context of interacting RML portfolios. Prior to determining an optimal price for RMBSs, the authors construct stochastic models for RMBS price dynamics, Basel II regulatory capital and OR's net income after taxes and before the cost of funds (NIATBCF) in a semi-martingale setting. As far as OR's optimization problem is concerned, their main conclusion is that both sub- and super-optimal pricing may be characterized in terms of a constant-valued pricing error term. Besides subprime RML securitization, regulatory capital and profits, the authors highlight the role of ORs' problematic RML subportfolios, RML rates, demand, default and loss provisioning, risk premia, deposits, the London InterBank Offered Rate (LIBOR) as well as liquidity. Furthermore, the authors consider the connections between the aforementioned variables and the SMC. Also, the authors provide numerical examples involving the dynamics of OR profitability via the indicators ROA and ROE. Here, the data is sourced from 36 anonymous U.S. banks for the period 2002-2007.

The slow progress in the analytic investigation of queueing networks with retrials is explained by the impossibility of having product-form limiting distributions in those networks in which the retrials are due to blocking. In Chapter 19, the authors introduce a new class of queueing networks in which customers who are not satisfied after receiving a primary service have a chance to perform new attempts later on. In this context, the authors prove the existence of product-form solutions. They deal with both open and closed networks with retrials due to unsatisfactory service. A number of illustrative motivating examples are considered.

In Chapter 20, the authors give mixed and componentwise condition numbers of the orthogonal projector. Some explicit and computable expressions, in terms of the data, for the Frobenius norm and spectral norm of the solution of underdetermined linear systems with full row rank are also presented.

Subprime residential mortgage loan securitization and its associated risks have been a major topic of discussion since the onset of the mortgage crisis in 2007. In Chapter 21, the authors solve a stochastic optimal credit default insurance problem that has the cash outflow rate for satisfying depositor obligations, the investment in securitized loans and credit default insurance as controls. As far as the latter is concerned, the authors compute the credit default swap premium and accrued premium by considering the credit rating of the securitized mortgage loans.
Finally, the authors provide an analysis of the aforementioned optimal insurance problem and its connections with the mortgage crisis.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 1-39
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 1
DISCRETE OPTIMIZATION FOR SOME TSP-LIKE GENOME MAPPING PROBLEMS

D. Mester1,*, Y. Ronin1, M. Korostishevsky2, Z. Frenkel1, O. Bräysy3, W. Dullaert4, B. Raa5 and A. Korol1,**

1) Institute of Evolution, University of Haifa, Mount Carmel, Haifa 31905, Israel
2) Department of Anatomy and Anthropology, Sackler School of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
3) Agora Innoroad Laboratory, Agora Center, P.O. Box 35, FI-40014 University of Jyväskylä, Finland
4) Institute of Transport and Maritime Management Antwerp, University of Antwerp, Keizerstraat 64, B-2000 Antwerp, Belgium
5) Department of Management Information and Operations Management, Ghent University, Tweekerkenstraat 2, 9000 Gent, Belgium
Abstract

Several problems in modern genome mapping analysis belong to the field of discrete optimization on a set of all possible orders. In this paper we propose formulations, mathematical models and algorithms for genetic/genomic mapping problems that can be formulated in TSP-like terms. These include: ordering of marker loci (or genes) in multilocus genetic mapping (MGM), multilocus consensus genetic mapping (MCGM), and the physical mapping problem (PMP). All these problems are considered computationally challenging because of noisy marker scores, large-size data sets, specific constraints on certain classes of orders, and other complications. The presence of specific constraints on the ordering of some elements in these problems prevents effective application of well-known, powerful discrete optimization algorithms such as Cutting-plane, the Genetic Algorithm with EAX crossover and the famous Lin-Kernighan heuristic. In this paper we demonstrate that the Guided Evolution Strategy algorithms we developed successfully solve this class of discrete constrained optimization problems. The efficiency of the proposed algorithm is demonstrated on standard TSP problems and on three genetic/genomic problems with up to 2,500 points.
* E-mail address: [email protected].
** E-mail address: [email protected].
Corresponding authors: D. Mester and A. Korol. Phone: (972)-48240-449; Fax: (972)-48288-788, 48286-78.
1. General Introduction

Several computationally challenging problems related to genetic (genomic) analysis belong to the field of discrete optimization on a set of all possible orders. In particular, genetic and genomic problems that can be formulated in such terms include: ordering of marker loci (or genes) in multilocus genetic mapping (MGM), multilocus consensus genetic mapping (MCGM) and the physical mapping problem (PMP). Different elements can be ordered in these problems, including genes, DNA markers and DNA clones. The essence of multipoint genome mapping problems is to unravel the linear order of the elements (genes, DNA markers, or clones) based on the measured matrix of pairwise distances dij between these elements. With n elements the number of possible orders is n!/2, out of which only one is considered the true order. A primary difficulty in ordering genomic elements is the large number of possible orders. In real problems, n may vary from dozens to thousands and more. On a modern 3000 MHz computer, an exact solution can be obtained within about one hour only up to n = 14. Clearly, already with n > 15 it would not be feasible to evaluate all n!/2 possible orders using two-point linkage data. In addition to the large number of possible orders, the mapping problems are difficult to solve because of noisy marker scores, large-size data sets, specific constraints on certain classes of orders, and other complications.

Historically, the main approach to ordering markers within linkage groups to produce genetic maps was based on multipoint maximum likelihood analysis. Several effective algorithms have been proposed using various optimization tools, including the branch and bound method (Lathrop et al., 1985), simulated annealing (Thompson, 1984; Weeks and Lange, 1987; Stam, 1993; Jansen et al., 2001), and seriation (Buetow and Chakravarti, 1987). Computational complexity does not allow applying this approach to large-scale problems. Olson and Boehnke (1990) compared eight different methods for marker ordering. In addition to multilocus likelihood, they also considered simpler criteria for multipoint marker ordering in large-scale problems based on two-point linkage data (minimizing the sum of adjacent recombination rates or adjacent genetic distances). The simple criteria are founded on the biologically reasonable assumption that the true order of a set of linked loci will be the one that minimizes the total map length of the chromosome segment. As an alternative to maximum likelihood multilocus analysis, the multipoint ordering problem can be addressed using the methods and algorithms developed for the classical formulation of the Traveling Salesperson Problem (TSP) (Press et al., 1986; Weeks and Lange, 1987; Falk, 1992; Schiex and Gaspin, 1997). Here we consider three groups of genome mapping problems that can be efficiently solved based on heuristics developed for the TSP: multilocus genetic mapping (MGM), multilocus consensus genetic mapping (MCGM) and the physical mapping problem (PMP). MGM can be reduced to a particular case of the TSP referred to as the Wandering Salesperson Problem (WSP). In fact, the genomic ordering problems are "uni-dimensional" WSPs (UWSP) because all elements belong to one coordinate axis only, in accordance with the basic organization of genetic material in the chromosomes. The WSP is a particular case of the TSP in which the salesperson can start wherever he/she wishes and does not have to return to the starting city after visiting all cities (Papadimitriou and Steiglitz, 1981).
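To make the "minimal total map length" criterion concrete, here is a small Python sketch (our illustration, not from the chapter; the 5-marker distance matrix is made up) that evaluates the criterion for a candidate order and brute-forces the n!/2 distinct orders of a toy instance:

import itertools

def map_length(order, d):
    # Total map length of an open path: the sum of distances between
    # adjacent elements in the candidate order.
    return sum(d[order[k]][order[k + 1]] for k in range(len(order) - 1))

def brute_force_order(d):
    # An order and its reverse give the same map, so only n!/2 of the n!
    # permutations are distinct. Feasible only for very small n, which is
    # exactly why the heuristics discussed below are needed.
    n = len(d)
    best, best_len = None, float("inf")
    for perm in itertools.permutations(range(n)):
        if perm[0] > perm[-1]:
            continue  # skip the reversed duplicate of an order already counted
        length = map_length(perm, d)
        if length < best_len:
            best, best_len = perm, length
    return best, best_len

# Toy 5-marker instance with a symmetric (illustrative, made-up) distance matrix.
d = [[0, 2, 5, 9, 12],
     [2, 0, 3, 7, 10],
     [5, 3, 0, 4, 8],
     [9, 7, 4, 0, 3],
     [12, 10, 8, 3, 0]]
print(brute_force_order(d))  # -> ((0, 1, 2, 3, 4), 12)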
In contrast to the classical TSP, the UWSP formulation reflects the fact that the genetic maps of all eukaryotic organisms have beginning and end points. Moreover, genetic/genomic problems reduced
formally to the TSP have some specific particularities and complications: (a) before construction of the maps we must divide the datasets of markers (or clones in physical mapping) into non-overlapping sets corresponding to chromosomes and linkage groups (or contigs of clones); this step can be referred to as the clustering stage; (b) some points of the solution may have a known (predefined) order (anchor markers); (c) the genetic or physical distance between two points on a chromosome, d(i, j), cannot be measured exactly due to noise in scoring and other complications. Because of these complications, the true ordering of the points may not provide the optimum value of the optimization criterion (minimal total distance). For this reason, to unravel the true order we must verify the stability of the solution with respect to stochastic variation of the distance matrix.

For solving the TSP, several well-known heuristic algorithms can be applied: Tabu Search (TS), Simulated Annealing (SA), Guided Local Search (GLS), Genetic Algorithm (GA), Evolution Strategy (ES), Guided Evolution Strategy (GES), Ant Colony Behavior (ACB), Artificial Neural Networks (ANN), and Cutting-plane (for detailed references see Mester et al., 2004). The presence of specific constraints on the ordering of some elements in MGM prevents effective application of the well-known powerful discrete optimization algorithms like Cutting-plane (Chvatal et al., 1999), GA with EAX crossover (Nagata and Kobayashi, 1997; Nagata, 2007), and Lin-Kernighan (Helsgaun, 2000; Applegate et al., 2003).

MCGM is a further considerable complication of the genome mapping problem caused by the need to combine mapping results from different labs. Two approaches have been suggested to solve MCGM problems, both looking for shared orders with the maximum number of shared markers. The first is based on "giving credit" to the available maps. To obtain the consensus solution, it employs different heuristics, e.g., a graph-analytical method based on voting over median partial orders (Jackson et al., 2007). The second approach is based on a two-phase algorithm that in Phase I performs multilocus ordering combined with iterative re-sampling to evaluate the stability of marker orders. In this phase, the problem is reduced to the TSP. A powerful metaheuristic, referred to as GES, can be applied to solve the TSP (Mester et al., 2004). In Phase II, we consider consensus mapping as a new variant of the TSP that can be formulated as a synchronized TSP (sTSP), and MCGM is solved by minimizing the criterion of the weighted sum of recombination lengths along all multilocus maps. For the considered mapping problem we developed a new version of the GES algorithm which defines the consensus order for all shared markers (Mester et al., 2005; Korol et al., 2009). This approach is extended in the present chapter.

PMP is a genomic problem that also includes multipoint ordering to be addressed by reduction to the TSP. The essence of PMP is assembling contigs from overlapping DNA clones using marker data and fingerprinting information (reviewed in Marra et al., 1999). The true order of clones will be the one that minimizes the total number of gaps. The main complications in solving PMP are the large number of clones in a cluster (with n ~ 100-500), noisy data and the possibility of similarity between non-adjacent clones due to the abundance of repeated DNA in the genomes of higher eukaryotes (Lander et al., 2001; Coe et al., 2002; Gregory et al., 2002; Faris and Gill, 2002).
We developed a new effective method of clustering and contig ordering based on global optimization assisted by a re-sampling verification process (analogously to Mester et al., 2003a). Consequently, longer contigs can be obtained compared to ordering results based on local optimization methods (e.g., those implemented in the standard FPC software; Soderlund et al., 2000).
Genomic applications described in this paper are based on our powerful Guided Evolution Strategy (GES) algorithms that combine two heuristics: Guided Local Search (GLS) (Voudouris, 1997; Tsang and Voudouris, 1997) and Evolution Strategy (Mester et al., 2003a, 2004). Recently we developed and successfully tested some GES algorithms for more challenging discrete optimization problems known as Vehicle Routing Problems (Mester and Bräysy, 2005; Mester and Bräysy, 2006). The ES stage in GES algorithms is implemented as a random search by asexual reproduction, which uses mutation-derived variation and selection. Mutation changes of the current vector of parameters can be introduced by adding a vector of normally distributed variables with zero means; the level of change is defined by the variances of these disturbances. Selection is another important stage of any ES algorithm. In GES algorithms we use (1+1) evolution strategies (Rechenberg, 1973) in which after each mutation the best solution vector is selected. During the optimization of the objective function, all mutations are performed on the best solution vector. In the sections below we successively show applications and adaptations of the GES algorithm to solve the standard TSP and three genetic/genomic problems.
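For illustration, a minimal (1+1)-ES for a continuous parameter vector might look as follows (our sketch of the general Rechenberg scheme, not the chapter's TSP implementation):

import random

def one_plus_one_es(f, x, sigma=0.5, iters=2000, rng=random):
    # (1+1)-evolution strategy: mutate the current best vector with zero-mean
    # normally distributed disturbances and keep the mutant only if it does
    # not worsen the objective.
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy  # selection: further mutations start from the best vector
    return x, fx

# Example: minimize a simple quadratic (sphere) function.
sphere = lambda v: sum(t * t for t in v)
print(one_plus_one_es(sphere, [3.0, -2.0]))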
2. Guided Evolution Strategy Algorithm for Classic TSP as a Basis for Solving the Genetic/Genomic TSP-Like Problems

2.1. Introduction

In this section we present the basic variant of the GES metaheuristic for the classical symmetric TSP. This simple version of the GES algorithm is referred to as Guided Local Search with Small Mutation (GLS-SM). The TSP is one of the most basic, most important, and most investigated problems in combinatorial optimization. It consists of finding the cheapest tour (minimum total distance or some other cost measure associated with the performed trajectory) to sequentially visit a set of clients (cities, locations), starting and ending at the same client. In this paper we focus on the undirected (symmetric) TSP. The undirected TSP can be defined as follows. Let G = (V, A) be a graph where V = {v1, ..., vn} is the vertex set and A = {(vi, vj) | vi, vj ∈ V, i ≠ j} is the edge set, with a non-negative distance (or cost) matrix C = (cij) associated with A. The problem's resolution consists in determining the minimum-cost Hamiltonian cycle on the problem graph. The symmetry is implied by the use of undirected edges (i.e., cij = cji). In addition, it is assumed that the distance matrix satisfies the triangle inequality (cij + cjk ≥ cik).
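As a quick illustration of this definition, the following sketch (function names are ours) evaluates the cost of a Hamiltonian cycle and checks the symmetry and triangle-inequality assumptions on a given matrix:

import itertools

def tour_cost(tour, c):
    # Length of the Hamiltonian cycle: consecutive edges plus the edge
    # closing the tour back to the starting client.
    n = len(tour)
    return sum(c[tour[k]][tour[(k + 1) % n]] for k in range(n))

def check_assumptions(c):
    # Verify symmetry (c_ij = c_ji) and the triangle inequality
    # (c_ij + c_jk >= c_ik) assumed in the problem definition above.
    n = len(c)
    symmetric = all(c[i][j] == c[j][i] for i in range(n) for j in range(n))
    triangle = all(c[i][j] + c[j][k] >= c[i][k]
                   for i, j, k in itertools.product(range(n), repeat=3))
    return symmetric and triangle

# Usage with any cost matrix c: tour_cost(list(range(len(c))), c)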
The TSP is known to be an NP-hard combinatorial optimization problem, implying that there is no known algorithm capable of solving all problem instances in polynomial time. Heuristics are often the only feasible alternative to provide high-quality but not necessarily optimal solutions. The TSP's apparent simplicity but intrinsic difficulty in finding the optimal solution has resulted in hundreds of publications. For excellent surveys, we refer to (Lawler et al., 1985; Reinelt, 1994; Burkard et al., 1998; Johnson and McGeoch, 2002). Descriptions of the most successful and recent algorithms can be found in (Renaud et al., 1996; Tsang and Voudouris, 1997; Chvatal et al., 1999; Helsgaun, 2000; Applegate et al., 2003; Fisher and Merz, 2004; Walshaw, 2002; Schneider, 2003; Tsai et al., 2004; Cowling and Keuthen, 2005; Gamboa et al., 2006). There are also numerous industrial applications of the TSP and its variants, ranging from problems in transport and logistics (Lawler et al., 1985) to different problems in scheduling (Tsang and Voudouris, 1997; Pekney and Miller, 1991; Cowling, 1995), genetics (Mester et al., 2003a; Mester et al., 2003b; Mester et al., 2004; Mester et al., 2005) and electronics (Lin and Chen, 1996). The main contribution of this section is the development of a simple and efficient GLS-SM metaheuristic and the demonstration of its performance on standard TSP benchmarks. The suggested metaheuristic combines the strengths of the well-known GLS metaheuristic (Voudouris, 1997) with a simple mutation phase to further facilitate escape from local minima. The mutation phase is based on the principles of evolution strategies (Rechenberg, 1973; Schwefel, 1977) and the 1-interchange improvement heuristic (Osman, 1993). In addition, we suggest a strategy for automatic tuning of the optimization parameters. Experimental tests on standard TSP benchmarks demonstrate that the proposed algorithm is efficient and competitive with state-of-the-art algorithms. The section is structured as follows. In the next part we describe the suggested metaheuristic, whereas Section 2.3 details the algorithm configurations, the experimental test setting and an analysis of the computational results. Finally, in Section 2.4 conclusions are drawn.
2.2. The Problem Solving Methodology

The guided local search with small mutations (GLS-SM) metaheuristic starts with an initial solution in which cities are listed in ascending order according to their sequence number in the input data. Then, a modified 2-opt improvement heuristic (Flood, 1956), described in the next subsection, is optionally applied to the solution before starting the metaheuristic search. The metaheuristic search is based on the guided evolution strategies (Mester et al., 2004) and active guided evolution strategies metaheuristics (Mester and Bräysy, 2005; Mester and Bräysy, 2006). It consists of two phases. The first phase makes use of the GLS metaheuristic as an aid to escaping local minima by augmenting the objective function with a penalty term based on particular solution features (e.g., long arcs) not considered to be part of a near-optimal solution. Here the GLS is used to guide a modified version of the classical 2-opt improvement heuristic, described in the next subsection. When no more improvements have been found for a given number of iterations, the second phase is started. In the second phase the GLS-SM further attempts to find an improved solution by performing a series of random 1-interchange moves, followed by local optimization with the modified 2-opt heuristic. As the possible improvement is checked only after the 2-opt, and because the number and type of modifications done each time are random, the second phase follows the principles of the (1+1)-evolution strategies metaheuristic. The second phase is repeated until no more improvements can be found. The search then goes back to the first phase, iterating repeatedly between the two phases. The search is stopped by the user if no more improvements can be found in either phase.
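Schematically, the alternation between the two phases can be written as the driver loop below (a structural sketch; gls_sm and the no-op stand-in phases are hypothetical names, with the real procedures detailed in the following subsections):

def gls_sm(tour, phase1, phase2, max_rounds=1000):
    # Structural sketch of the GLS-SM driver: alternate the GLS-guided
    # local-search phase with the mutation phase until neither improves.
    # phase1/phase2 are callables returning (improved_flag, tour).
    for _ in range(max_rounds):
        improved1, tour = phase1(tour)
        improved2, tour = phase2(tour)
        if not (improved1 or improved2):
            break  # stop: no improvement found in either phase
    return tour

# Control-flow demo with no-op stand-in phases:
print(gls_sm([0, 1, 2, 3], lambda t: (False, t), lambda t: (False, t)))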
2.2.1. The Improvement Heuristics Before proceeding to the detailed description of the suggested metaheuristic, we first describe the main features of the two improvement heuristics applied within our solution method. The standard 2-opt heuristic works by replacing two edges in the current solution by two other edges and iterates until no further improvement is possible. Figure 2.1 illustrates the 2-opt operator.
Figure 2.1. 2-opt exchange operator. The edges (i, i+1) and (j, j+1) are replaced by edges (i, j) and (i+1, j+1), thus reversing the direction of customers between i+1 and j.
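In Python, the 2-opt move of Figure 2.1 and a first-accept 2-opt loop can be sketched as follows (a generic baseline of our own, not the flexible/fast variants described next; for simplicity the edge closing the cycle is left fixed):

def two_opt_move(tour, i, j):
    # Replace edges (i, i+1) and (j, j+1) by (i, j) and (i+1, j+1),
    # i.e. reverse the segment tour[i+1 .. j] (cf. Figure 2.1).
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

def two_opt(tour, c):
    # Standard 2-opt with the first-accept strategy: apply the first
    # improving exchange found and rescan until no improvement remains.
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 3):
            for j in range(i + 2, n - 1):
                a, b = tour[i], tour[i + 1]
                p, q = tour[j], tour[j + 1]
                if c[a][p] + c[b][q] < c[a][b] + c[p][q]:
                    tour = two_opt_move(tour, i, j)
                    improved = True
    return tour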
To speed up the search, we suggest two variants of the standard 2-opt heuristic. Both variants work only on a limited neighborhood (except in the construction of the initial solution), denoted the penalty variable neighborhood (PVN), detailed in the next subsection. The first variant, referred to as flexible 2-opt, adjusts the size of the PVN dynamically according to the possible improvements found. To be more precise, each time an improving move is found, the PVN is extended by four additional points, related to the end points l and m of the second exchanged edge located lower down in the current tour (j and j+1 in Figure 2.1). The goal of this extension is to perturb the search. Therefore, the algorithm selects the points l-1, l+1, m-1 and m+1 that are next to the indexes of l and m in the original problem data, provided that they are not already included in the PVN. After the improving move, the 2-opt is restarted using the extended PVN. The second variant, referred to as fast 2-opt, does not modify the PVN and does not restart the search after improving moves. According to our experiments, fast 2-opt is 10–100 times faster than the flexible 2-opt, but it results in lower solution quality, as illustrated in Table 2.1 in Section 2.3. This disadvantage of fast 2-opt is partially compensated by the subsequent PVNs, since successive PVNs may partially overlap during the optimization process. The fast and flexible 2-opt operators are applied here with the first-accept strategy, i.e., the solution information is updated right after each improving move. The 1-interchange was originally proposed for inter-route improvements in vehicle routing problems, but here it is applied within a single TSP route. The idea is to swap the positions of two customers in the current tour simultaneously. As described above, here the 1-interchange is applied together with a 2-opt variant and the possible improvement is checked only at the end, after the 2-opt. In addition, only a limited set of random 1-interchange moves is considered.
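The bare 1-interchange move can be sketched as follows (our helper; as described above, the improvement test happens only later, after the 2-opt):

import random

def one_interchange(tour, rng=random):
    # 1-interchange within a single TSP route: swap the positions of two
    # randomly chosen customers in the current tour.
    i, j = rng.sample(range(len(tour)), 2)
    tour = tour[:]  # work on a copy
    tour[i], tour[j] = tour[j], tour[i]
    return tour

# e.g. mutated = one_interchange([0, 1, 2, 3, 4])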
2.2.2. Phase 1: A Guided Local Search Metaheuristic for the TSP The GLS-SM metaheuristic consists of two phases: a GLS phase and a mutation phase. In the first phase a standard GLS metaheuristic is combined with either the flexible or the fast 2-opt
variant. The GLS works by penalizing only a single edge each time the algorithm gets stuck in a local minimum. The edge chosen for penalization is the edge in the current solution that achieves the highest value of the utility function U = cij*/(1 + pij), where pij is the penalty counter for edge (i, j), which holds the number of times the edge has been penalized so far. The variable cij* refers to the (virtual) penalized cost of the edges during the evaluation of the moves in the local search; it is only actually imposed for the arc with the highest utility value. This cost is calculated as cij* = cij + λpij, where cij refers to the original distance of the edge and λ = αL is a dynamic coefficient for determining the (virtual) penalized cost of an edge. Here α is a parameter and L = Σcij/n is the average length of the edges in the current (in the first phase) or in the current best (in the second phase) solution. By basing L, and thus the penalties, on the current and current best solution, it becomes possible to take the solution improvement into account during the search. This makes it possible to avoid penalty coefficients that are too high, which would otherwise allow the solution to deteriorate too much during the search. To diversify the search, the edge to penalize is instead selected randomly with probability 0.001. A new penalized edge is chosen every time an improving move is found and at the beginning of both phases. In order to avoid calculating the penalized distances cij* several times, they are stored in a matrix. After determining the edge to be penalized, the next step is to define the PVN that is used to restrict the local search to the neighborhood of the currently penalized edge (i, j). The forming of the PVN is illustrated in Figure 2.2.
Figure 2.2. Illustration of the creation of the PVN. Here (i, j) is the currently penalized edge, and two radii R1 = cij + ci-1,i and R2 = cij + cj,j+1 define the size of the PVN.
Based on the currently penalized edge (i, j), two radii R1 = cij + ci-1,i and R2 = cij + cj,j+1 are calculated. All points of the problem within either R1 or R2 then belong to the current PVN. Given that the PVN depends on the currently penalized edge and its neighboring edges, the size of the PVN varies dynamically during the search. The old PVN is erased and a new PVN is defined at the beginning of each phase and also whenever an improvement is found. If the number of non-improving iterations c¹ exceeds a user-defined maximum, c¹max, and if the PVN includes at least k points, the second phase is started.
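Combining the penalty rule and the PVN construction just described, a possible Python sketch follows (names are ours; the membership test is our reading of Figure 2.2, keeping every point within R1 of endpoint i or within R2 of endpoint j):

def dynamic_lambda(tour, c, alpha):
    # λ = α·L, where L is the average edge length of the reference solution.
    edges = [(tour[k], tour[k + 1]) for k in range(len(tour) - 1)]
    return alpha * sum(c[i][j] for i, j in edges) / len(edges)

def select_penalty_edge(tour, c, penalties, lam):
    # Penalize the edge maximizing U = c*_ij / (1 + p_ij),
    # where c*_ij = c_ij + λ·p_ij is the (virtual) penalized cost.
    best_edge, best_u = None, -1.0
    for k in range(len(tour) - 1):
        i, j = tour[k], tour[k + 1]
        p = penalties.get((i, j), 0)
        u = (c[i][j] + lam * p) / (1.0 + p)
        if u > best_u:
            best_edge, best_u = (i, j), u
    penalties[best_edge] = penalties.get(best_edge, 0) + 1
    return best_edge

def build_pvn(tour, c, k):
    # PVN around the penalized edge (i, j) = (tour[k], tour[k+1]), with
    # radii R1 = c_ij + c_{i-1,i} and R2 = c_ij + c_{j,j+1}; the tour is
    # treated cyclically at its ends.
    i, j = tour[k], tour[k + 1]
    r1 = c[i][j] + c[tour[k - 1]][i]
    r2 = c[i][j] + c[j][tour[(k + 2) % len(tour)]]
    return {v for v in range(len(c)) if c[i][v] <= r1 or c[j][v] <= r2}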
2.2.3. Phase 2: Attempting Small Mutations

At the beginning of the second phase both the penalty edge and the PVN are redefined, as described above. Then, an attempt to improve the solution is made by performing r 1-interchanges randomly within the PVN, followed by local optimization of the obtained solution with either 2-opt variant. The number of moves is calculated as r = 3 + 22ξ², where ξ is a random value uniformly distributed between 0 and 1. As opposed to the first phase, in the second phase the improvements are evaluated using the original distance matrix instead of the GLS-based augmented objective function. In doing so, direct (real) improvements to the objective function are evaluated after the mutation and selection processes. Local search within the second phase is continued until no more improvements can be found. In case improvements have been found in at least one of the two phases of the previous iteration of the GLS-SM, the search goes back to the first phase. Otherwise, the search is terminated. As deterioration is allowed, the algorithm maintains in memory the best solution found during the entire search and returns that solution at the end.
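Putting the pieces together, phase 2 might be sketched as below (our names; two_opt is any 2-opt variant such as the earlier sketch, pvn is the current point set, and the return signature matches the driver loop shown in Section 2.2):

import random

def tour_length(t, c):
    # True (unpenalized) objective used for acceptance in phase 2.
    return sum(c[t[k]][t[k + 1]] for k in range(len(t) - 1))

def mutation_phase(tour, c, pvn, two_opt, rng=random):
    # Phase 2 sketch: r = 3 + 22·ξ² random 1-interchanges restricted to the
    # PVN (ξ uniform on [0, 1]), re-optimization by a 2-opt variant, then
    # (1+1)-ES acceptance against the original distance matrix. Assumes the
    # PVN holds at least k >= 10 points, as required before phase 2 starts.
    xi = rng.random()
    r = int(3 + 22 * xi * xi)
    cand = tour[:]
    positions = [p for p, v in enumerate(cand) if v in pvn]
    for _ in range(r):
        a, b = rng.sample(positions, 2)
        cand[a], cand[b] = cand[b], cand[a]
    cand = two_opt(cand, c)
    if tour_length(cand, c) < tour_length(tour, c):
        return True, cand
    return False, tour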
2.3. Experimental Results

In this section we first describe the parameter configurations used, as well as the problem data and the test environment. Then, the results of computational testing of the suggested algorithm are presented, followed by a comparative analysis with state-of-the-art methods from the literature.
2.3.1. The Problem Data and Parameter Setting

The GLS-SM algorithm includes only three optimization parameters: the minimum number of points in the PVN required to launch phase 2 (k), the weighting parameter α used to set the dynamic coefficient determining the (virtual) penalized cost of an edge, and the maximum number of non-improving iterations of the first phase, c¹max. In addition, one can define whether or not fast 2-opt is applied in the construction of the initial solution and which of the two proposed 2-opt variants is used within the metaheuristic. We experimented with detailed tuning of these parameters for each test problem individually, and we also tested a strategy for automatic tuning. The detailed parameter values and the results are presented in Table 2.1. The value of parameter k is set to k = 10 in all cases.

The following strategy, based on the information obtained during the individual tuning of the parameters, is used for the automatic tuning. For the construction of the initial solution and for the local search, only the fast 2-opt variant is applied. At the beginning, c¹max is set to 2000 and α = 0.9. Each time c¹ exceeds 1000, the value of α is decreased by 10%, c¹max is set to 200 and c¹ is reset to 0. Correspondingly, after each improving move c¹max is set to 2000 and c¹ is reset to 0. Once α reaches the value 0.2, it is fixed at α = 0.2 and c¹max = 200 is used during the rest of the search.
Each time the parameter setting is changed, the penalty counters pij are reinitialized to zero.
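A minimal sketch of this self-tuning schedule, as we read it from the description above (the `state` dictionary and function name are our own):

```python
def self_tuning_update(state, improved):
    """One update of the automatic parameter schedule for GLS-SM.

    Starts from alpha = 0.9, c1_max = 2000; every 1000 non-improving
    iterations alpha decays by 10% and c1_max drops to 200; improving
    moves reset the counter and restore c1_max = 2000; once alpha
    reaches 0.2 it stays fixed with c1_max = 200."""
    if improved:
        state["c1"] = 0
        if state["alpha"] > 0.2:
            state["c1_max"] = 2000
        return
    state["c1"] += 1
    if state["c1"] >= 1000 and state["alpha"] > 0.2:
        state["alpha"] = max(0.2, 0.9 * state["alpha"])  # decrease by 10%
        state["c1_max"] = 200
        state["c1"] = 0
        # the penalty counters p_ij are also reset to zero at this point

state = {"alpha": 0.9, "c1_max": 2000, "c1": 0}
```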
The proposed GLS-SM metaheuristic has been implemented in Visual Basic 6.0, and the algorithm was executed on a Pentium IV NetVista PC (2800 MHz, 512 MB RAM). The computational tests were carried out on 43 Euclidean symmetric TSP benchmarks (with 51–2392 points) taken from the TSPLIB library (Reinelt, 1991). The selected problems are detailed in Table 2.1. The other problems from TSPLIB were not considered because the current implementation of the GLS-SM can only handle integer coordinates.
2.3.2. Analysis of Different Algorithm Configurations
In Table 2.1 we provide a comparative analysis of different algorithm configurations. The leftmost column lists the 43 tested problems with integer datasets; the number in the problem name indicates the problem size. The second column gives the known optimal solution value (total distance) for each problem. The rest of the table is divided into two parts; in both parts we present the CPU time in seconds and the percentage excess with respect to the optimal solution. In the first part we examine the results obtained with automatic parameter tuning and compare the results obtained during the first phase of the algorithm only (GLS) with the results of the whole algorithm (GLS-SM). In the second part we list the values used for the individually optimized parameters α and c¹max. For the parameter optimization, the following values were tested: for α, 0.05, 0.1, 0.2, 0.25, 0.3, 0.35 and 0.4; for c¹max, 20, 50, 100, 200, 250, 500, 1000, 2500 and 5000. The sensitivity of the results with respect to parameter α is illustrated in Figure 2.3. According to the figure, there are quite significant differences in the results obtained with different values of α. The value α = 0.2 and values close to it seem to provide the best results; these values were therefore used in most of the experiments.
Figure 2.3. Distribution of the parameter α. The vertical axis represents the number of problems for which an optimal solution was found with the corresponding value of α (horizontal axis).
Figure 2.4 illustrates the sensitivity of the results with respect to parameter c¹max. Again, it seems to be important to select a good parameter value. None of the parameter values can provide the optimal solution for all test problems, but in general a value of 20 seems to be the
best. On the other hand, for several problems very high values of c¹max, close to 5000, appear to work well.
Figure 2.4. Distribution of the parameter c¹max. As in Figure 2.3, the vertical axis describes the number of optimal solutions obtained with each parameter value.
In addition, for certain individual problems some other exceptional values were attempted as well, as can be seen from Table 2.1. The results in the first set are obtained without applying 2-opt in the construction of the initial solution (a poorer starting point for the metaheuristic), whereas in the second set 2-opt is used to provide higher-quality initial solutions. The two sets also differ in that the fast 2-opt variant was applied in the first set and the flexible 2-opt operator in the second. In general, it appears from Table 2.1 that the results obtained by the GLS-SM metaheuristic are very close to optimal. With optimal parameter adjustment and usage of 2-opt in the initial solution construction, we obtained the optimal solution to all problems. The CPU times also seem reasonable, although for larger-scale problems the required CPU time increases significantly. Comparing the first and second parts, it can be observed that the results obtained with automatic parameter tuning are only slightly worse than those obtained with optimized parameter values; it is interesting to note that in terms of CPU time the automatic tuning is faster. From the first part of Table 2.1 one can see that the GLS-SM algorithm gives, on average, both better and faster results than the first GLS phase alone. Based on the second part, it appears that better results can be obtained if a higher-quality initial solution is created with 2-opt and if the flexible 2-opt is applied. The differences are, however, small in terms of both CPU time and solution quality, indicating the robustness of the procedures.
Table 2.1. Comparison of different algorithm configurations. For each configuration, CPU is the time in seconds and % is the excess over the optimal solution; the two rightmost groups give the individually optimized parameter values (α, c¹max) and the corresponding results, without and with 2-opt in the initial solution construction.

Problem   | Optimal | GLS (auto): CPU, % | GLS-SM (auto): CPU, % | Optimized, no 2-opt init: α, c¹max, CPU, % | Optimized, 2-opt init: α, c¹max, CPU, %
a280      | 2579    | 14.00, 0      | 2.90, 0       | 0.2, 20, 0.40, 0        | 0.3, 200, 1.90, 0
berlin52  | 7542    | 0.02, 0       | 0.01, 0       | 0.2, 200, 0.03, 0       | 0.2, 200, 0.01, 0
bier127   | 118282  | 14.00, 0      | 2.80, 0       | 0.2, 200, 0.31, 0       | 0.25, 200, 0.40, 0
eil51     | 426     | 1.30, 0       | 0.06, 0       | 0.2, 200, 0.06, 0       | 0.3, 250, 0.06, 0
eil76     | 538     | 0.05, 0       | 0.06, 0       | 0.3, 200, 0.06, 0       | 0.2, 250, 0.05, 0
eil101    | 629     | 0.10, 0       | 0.30, 0       | 0.3, 200, 0.12, 0       | 0.2, 250, 0.10, 0
kroA100   | 21282   | 0.08, 0       | 0.30, 0       | 0.2, 200, 0.06, 0       | 0.2, 200, 0.10, 0
kroB100   | 22141   | 0.05, 0       | 0.35, 0       | 0.3, 200, 0.12, 0       | 0.25, 200, 0.10, 0
kroC100   | 20749   | 0.05, 0       | 0.06, 0       | 0.3, 200, 0.06, 0       | 0.25, 200, 0.10, 0
kroD100   | 21294   | 3.50, 0       | 0.06, 0       | 0.2, 200, 0.06, 0       | 0.25, 200, 0.10, 0
kroE100   | 22068   | 2.50, 0       | 2.90, 0       | 0.3, 200, 0.12, 0       | 0.25, 200, 0.20, 0
kroA150   | 26524   | 1.98, 0       | 2.20, 0       | 0.2, 100, 0.40, 0       | 0.2, 200, 0.30, 0
kroB150   | 26130   | 0.05, 0       | 1.60, 0       | 0.2, 200, 0.18, 0       | 0.2, 200, 0.30, 0
kroA200   | 29368   | 11.00, 0      | 3.50, 0       | 0.2, 200, 0.30, 0       | 0.2, 200, 0.67, 0
kroB200   | 29437   | 5.50, 0       | 1.60, 0       | 0.2, 200, 0.50, 0       | 0.25, 200, 0.58, 0
lin105    | 14379   | 0.06, 0       | 0.30, 0       | 0.2, 200, 0.10, 0       | 0.25, 200, 0.08, 0
linhp318  | 41345   | 14.00, 0      | 4.96, 0       | 0.3, 200, 1.20, 0       | 0.3, 2500, 1.40, 0
nrw1379   | 56638   | 187.00, 0.26  | 126.00, 0.04  | 0.25, 500, 260.00, 0.04 | 0.25, 2000, 309.00, 0
pcb442    | 50778   | 26.00, 0      | 27.00, 0.03   | 0.35, 250, 3.80, 0      | 0.25, 500, 3.20, 0
pr76      | 108159  | 0.06, 0       | 1.40, 0       | 0.35, 25, 0.12, 0       | 0.2, 20, 0.40, 0
pr107     | 44303   | 5.10, 0       | 0.68, 0       | 0.2, 200, 0.30, 0       | 0.2, 200, 0.67, 0
pr124     | 59030   | 0.30, 0       | 0.37, 0       | 0.3, 200, 0.06, 0       | 0.3, 200, 0.20, 0
Table 2.1. Continued

Problem   | Optimal | GLS (auto): CPU, % | GLS-SM (auto): CPU, % | Optimized, no 2-opt init: α, c¹max, CPU, % | Optimized, 2-opt init: α, c¹max, CPU, %
pr136     | 96772   | 10.88, 0      | 1.00, 0       | 0.3, 200, 0.06, 0        | 0.2, 50, 0.40, 0
pr144     | 58537   | 6.95, 0       | 2.18, 0       | 0.1, 200, 0.25, 0        | 0.2, 50, 0.08, 0
pr152     | 73682   | 4.05, 0       | 3.88, 0       | 0.07, 50, 0.12, 0        | 0.2, 20, 0.07, 0
pr226     | 80369   | 19.89, 0      | 11.60, 0.11   | 0.05, 200, 1.00, 0       | 0.02, 250, 0.66, 0
pr264     | 49135   | 34.77, 0      | 4.60, 0       | 0.3, 100, 6.40, 0        | 0.2, 20, 3.00, 0
pr299     | 48191   | 13.92, 0      | 3.50, 0       | 0.25, 200, 1.70, 0       | 0.2, 200, 1.88, 0
pr439     | 107217  | 30.00, 0.14   | 66.00, 0.01   | 0.2, 200, 35.00, 0       | 0.35, 400, 12.00, 0
pr1002    | 259045  | 106.00, 0.19  | 119.00, 0     | 0.25, 3000, 84.00, 0     | 0.2, 700, 148.00, 0
pr2392    | 378032  | 0.05, 0       | 0.05, 0       | 0.2, 200, 0.05, 0        | 0.2, 200, 0.05, 0
rat99     | 1211    | 0.14, 0.25    | 0.29, 0       | 0.2, 200, 0.12, 0        | 0.3, 250, 0.09, 0
rat195    | 2323    | 7.99, 0       | 2.38, 0       | 0.25, 50, 0.60, 0        | 0.3, 250, 0.18, 0
rat575    | 6773    | 34.86, 0.13   | 41.00, 0.01   | 0.05, 200, 7.00, 0.05    | 0.3, 250, 12.88, 0
rat783    | 8806    | 15.00, 0.70   | 339.00, 0     | 0.1, 2000, 13.80, 0      | 0.25, 500, 8.87, 0
rl1304    | 252948  | 150.00, 0.08  | 207.00, 0.34  | 0.1, 200, 345.00, 0      | 0.4, 200, 200.00, 0
rl1323    | 270199  | 2500.00, 0.12 | 160.00, 0.05  | 0.2, 500, 341.00, 0      | 0.2, 5000, 555.00, 0
st70      | 675     | 1.70, 0       | 0.48, 0       | 0.2, 200, 0.06, 0        | 0.2, 200, 0.08, 0
ts225     | 126643  | 4.00, 0       | 0.50, 0       | 0.25, 20, 0.20, 0        | 0.25, 20, 1.08, 0
u159      | 42080   | 4.20, 0       | 0.20, 0       | 0.2, 200, 0.06, 0        | 0.2, 200, 0.08, 0
u2319     | 234256  | 1045.00, 0    | 323.00, 0.01  | 0.3, 2500, 182.00, 0.01  | 0.35, 1000, 652.00, 0
vm1084    | 239297  | 108.00, 0.12  | 36.00, 0.05   | 0.2, 2000, 52.00, 0      | 0.05, 500, 225.00, 0
vm1748    | 336556  | 180.00, 0.12  | 547.00, 0.09  | 0.25, 5000, 1272.00, 0.03| 0.25, 5000, 910.00, 0
Average   |         | 106.14, 0.05  | 47.63, 0.02   | –, –, 60.72, 0.003       | –, –, 70.96, 0.00
2.3.3. Results for Standard TSP Benchmarks
In this section we present a comparative analysis of the results obtained with our algorithm against two state-of-the-art algorithms from the literature, namely version 1.1 of the Concorde cutting plane (Chvatal et al., 1999) and Concorde LandK (Applegate et al., 2003; Fisher and Merz, 2004). In our experiments the original implementations of both algorithms (with default parameter settings) were tested on the same computer as the GLS-SM. Because both Concorde algorithms are implemented in C++ and the GLS-SM in Visual Basic (which results in slower computation times compared to a C++ implementation), a direct comparison of CPU times is difficult. Table 2.2 first lists the tested algorithms, then the number of optimal solutions obtained with each algorithm and the average percentage difference of the results with respect to the optimal solutions, based on a single test run. Finally, the average CPU time in seconds and the number of optimization parameters related to each algorithm are given.

Table 2.2. Comparison of results for the 43 TSP benchmarks from Table 2.1

Algorithm              | NS | AI    | ACPU   | n
Concorde Cutting Plane | 43 | 0     | 416.05 | 1
Concorde LandK         | 18 | 0.61  | 0.80   | 17
GLS-SM                 | 43 | 0     | 70.95  | 2
GLS-SM Auto            | 33 | 0.018 | 47.62  | 3
GLS Auto               | 33 | 0.051 | 115.25 | 2
NS – Number of optimal solutions, AI – Average inaccuracy (%), ACPU – Average CPU time (sec), n – number of adjusted parameters.
Based on Table 2.2, only Concorde cutting plane and the suggested GLS-SM with optimized parameters found the optimal solution to all problems. However, the CPU time of Concorde appears to be significantly (six times) higher. Concorde LandK is clearly the fastest but comes at a price, producing lower quality solutions and requiring a detailed tuning of 17 parameters. To obtain higher quality solutions with Concorde LandK, more test runs would be required, given the random nature of the algorithm. In general it appears that the suggested metaheuristic is competitive with Concorde software both in terms of computation time and solution quality.
3. Multilocus Genetic Mapping
3.1. Introduction
This section is devoted to genetic mapping, i.e., the one-dimensional ordering along chromosomes of such elements as genes and various types of DNA markers. With n such elements, the number of all possible orders is n!/2, out of which only one order is considered the true one, representing the organization of the real chromosome. As we noted in section 1, the MGM is a TSP-like UWSP because all ordered elements are placed on a single coordinate axis.
One possibility in addressing this problem is to recover the marker order from a known matrix dij of pairwise marker distances. A special case of the problem may include restrictions on the sequence of some (anchor) markers. Revealing the "true" marker order requires solving the UWSP with high precision and within a reasonable CPU time (Mester et al., 2003a, b), since even a small change in the optimization criterion (e.g., total map length) may result in a different order of the markers. These requirements are further complicated by the necessity of applying computation-intensive methods for testing the reliability of the constructed map based on estimates of the local stability of marker neighborhoods. For example, one may use bootstrap or jackknife re-sampling methods (Efron, 1979, 1993; Wang et al., 1994; Liu, 1998) that require repeatedly solving the problem (e.g., 100–1000 times) in order to verify the obtained multilocus order (Mester et al., 2003a). In this section we demonstrate the application of a highly effective metaheuristic, referred to as Guided Evolution Strategies (GES), to multipoint genetic mapping problems. GES combines the strengths of Guided Local Search (Voudouris, 1997) and Evolution Strategy (Mester et al., 2003a). It was successfully applied to a more complex combinatorial problem, the so-called Vehicle Routing Problem (Mester and Bräysy, 2005; Mester et al., 2007). The proposed metaheuristic proved efficient and very competitive in comparison with previous heuristic methods, providing best-known solutions to 86% of 300 standard VRPTW benchmarks (see www.top.sintef.no/vrp/benchmarks.html). In the sections below we present the particulars of adapting the GES algorithm to MGM. For this case of the TSP (UWSP), some new metaheuristics and evolution strategies were developed (Mester et al., 2003a, 2005).
3.2. Evolution Strategies for Combinatorial Optimization Problems
ES is a heuristic algorithm mimicking natural population processes. The numerical procedures in such an optimization are based on simulation of mutation, followed by selection of the fittest "genotypes" based on the obtained values of the optimization criterion. In contrast to GA, ES does not employ a sexual process, i.e., recombination (or crossover). Various approaches were proposed for choosing the population size and selection type in ES, including the (1+1)-strategy (Rechenberg, 1973) and the (μ, λ)-strategy (Schwefel, 1977). Clearly, combinatorial problems cannot be directly represented in terms of ES with a real-value formulation. Combinatorial versions of ES differ from the real-value formulation by a specific representation of the solution vector x and by the mutation mechanisms. Homberger and Gehring (1999) and Mester et al. (2003a) adopted (μ, λ)-ES and (1+1)-ES algorithms, respectively, and proposed combinatorial formulations for solving the vehicle routing problem with time window restrictions, which is similar to multipoint analysis of markers belonging to several chromosomes (linkage groups). In the combinatorial formulation, the solution of the TSP and the UWSP can be represented as a vector x = (x1, x2,…, xn) that consists of n ranked discrete coordinates. At generation k, the mutation operator (referred to hereafter as the mutator) changes the order of some components of vector x^k, thereby producing a new solution vector x^(k+1). The fitness function assigns to each arc (ai, aj), or pair of coordinates (xi, xj), of the solution vector x^(k+1) a non-negative cost dij of moving from element i to element j. For optimization of a combinatorial problem, one needs to define an order of the vector coordinates (or nodes) that provides the minimum total
cost f(x). If, after the current selection step, f(x^(k+1)) is better than f(x^k), the optimization process continues with the new solution vector x^(k+1). The central question in ES algorithms concerns mutation strategies. Contrary to GA, mutation is the only way to change a solution in an ES algorithm. Three components of the mutation strategy have to be defined for an ES algorithm. The first component mimics the mechanism of mutation. For that, one can use move-generation and solution-generation mechanisms (Osman, 1993). The move-generation mechanism can be effectively applied only to a TSP where no constraints are imposed on the variables. For constrained problems, the solution-generation mechanisms based on the "remove-insert" scheme (Shaw, 1998; Homberger and Gehring, 1999; Mester et al., 2003a) are usually applied. The basic idea of solution generation is to remove a selected set of components from the current solution, and then reinsert the removed components at optimal cost. Different ways to utilize this approach are reviewed by Bräysy and Gendreau (2001a). The second component forms the neighborhood for mutation. An optimal size of the neighborhood is very important for solving large-scale problems. Local search on large neighborhoods (LNS) increases the CPU time. Perturbation with small neighborhoods (SN) accelerates the optimization process but does not allow reaching remote points of a large solution vector. The Variable Neighborhood strategy (VN) of Mester et al. (2003a, 2007) combines the ideas of both approaches (LNS and SN); for large-scale problems, the solution vector is divided into specific parts (the set V of VN). The third component defines the size of mutation on the selected neighborhood. This is the remove step in the "remove-insert" mutation mechanism. In ES algorithms, usually small mutation disturbances to the solution vector are desirable. We found no clear formulation of the notion of small mutation in the earlier literature on ES algorithms. Consequently, we attempted to provide such a formulation, together with the notion of Variable Mutation Size (VMS) (see Mester et al., 2003a). The number of removed points β is determined by β = (0.2 + 0.5ξ²)n, where n is the number of points in the VN and ξ is a random value uniformly distributed between 0 and 1. The formula means that smaller VMS values are used more often than intermediate and high values (close to (0.2 + 0.5·1)n = 0.7n). Additionally, a large VMS (i.e., β = n) is used with small probability (0.01).
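A minimal Python sketch of the VMS rule (our own illustration; the exponent on ξ follows the reconstruction above):

```python
import random

def mutation_size(n, big_prob=0.01):
    """Variable Mutation Size: beta = (0.2 + 0.5 * xi**2) * n, with xi
    uniform on [0, 1]; the quadratic term favors small sizes, and a
    full-size mutation (beta = n) is drawn with small probability."""
    if random.random() < big_prob:
        return n
    xi = random.random()
    return max(1, int((0.2 + 0.5 * xi ** 2) * n))
```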
3.3. The Evolution Strategy Algorithm with Multi-Parametric Mutator (ES-MPM)
The quality of ES algorithms depends on how efficiently the mutation process contributes to the diversity of solutions subjected to selection. The ES algorithm described by Mester et al. (2003a, b) was intended to solve moderate-scale UWSPs with up to 200–300 points. For this algorithm we developed the multi-parametric mutator (MPM) based on the three components of the mutation strategy M{V, ω, β}, where V is a set of VN formed via division of the data set into special subsets, ω is a parameter of the insertion operation in the "remove-insert" mutation mechanism (Mester et al., 2003a), and β is the mutation-size parameter. To accelerate the solving of such problems, the large-scale UWSP is divided into 8 specific parts (see Figure 3.1), and the optimization process is carried out consecutively on each VN in V, so that the largest VN (numbered v8 in the set V) is employed with
frequency 1/8 only. After a successful mutation of the specific subset, MPM adaptively repeats the mutation on the current VN with another value of the parameter ω.
Figure 3.1. Forming the set of Variable Neighborhoods V = {v1,…, v8} in the ES-MPM algorithm for large-scale UWSP problems.
To generate new mutated solutions, the ES algorithm removes β random components from the current solution vector and reinserts them by the best-insertion criterion with parameter ω, i.e., each rejected component k is inserted between components i and j = i + 1, beginning from the first component of the set, while (dik + dkj − dij) > ω(di+1,k + dk,j+1 − di+1,j+1). By varying the value of the inserting parameter ω within some range (say, from 0.6 to 1.4) with some increment (e.g., 0.2 units), we can obtain several new solutions. Three well-known improving heuristics are applied to each new solution: Reinsert (see the survey by Bräysy and Gendreau, 2001b), 1-interchange (Osman, 1993), and 2-opt (Lin and Kernighan, 1973). In our algorithm, these three procedures, merged into a subroutine called Simple Local Search (SLS), are repeated in a loop for as long as the current solution keeps improving. The best solution (out of these new ones and the last best) is selected as the current best solution so far, and the mutation process continues in the same manner. The high efficiency of the ES approach in solving one-dimensional multilocus ordering problems (of about 50–300 markers) was demonstrated by Mester et al. (2003a, b). However, the size of real-world ordering problems can be significantly larger (up to 1000 or more markers per chromosome, with a consequent nonlinear increase in CPU time).
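One literal reading of this remove-insert mutation, as a Python sketch (the function names, the `dist` callable, and the scanning rule are our assumptions; ω stands in for the symbol lost in the original):

```python
import random

def omega_position(order, k, dist, omega):
    """Scan insertion gaps from the start of the order; advance while the
    insertion cost of k at the current gap exceeds omega times the cost
    at the next gap, then insert at the first gap where it does not."""
    def cost(p):  # cost of inserting k between order[p - 1] and order[p]
        i, j = order[p - 1], order[p]
        return dist(i, k) + dist(k, j) - dist(i, j)
    p = 1
    while p + 1 < len(order) and cost(p) > omega * cost(p + 1):
        p += 1
    return p

def remove_insert(order, beta, dist, omega):
    """ES 'remove-insert' mutation: remove beta random components and
    reinsert each by the omega-parameterized rule above."""
    order = order[:]
    removed = random.sample(order, beta)
    for k in removed:
        order.remove(k)
    for k in removed:
        order.insert(omega_position(order, k, dist, omega), k)
    return order
```

Sweeping ω over the range 0.6–1.4, as in the text, yields several candidate solutions per mutation, each then polished by the SLS.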
3.4. Guided Evolution Strategies (GES) for MGM
In section 2 we noted that both the GLS and ES metaheuristics are powerful and competitive in solving combinatorial problems. However, each of these approaches is a heuristic, and there is no guarantee of reaching the exact solution. From this point of view, we expect that hybrid algorithms combining the strengths of several metaheuristics will work somewhat better than each of them taken separately. Our successful experiments with more complicated combinatorial problems, both VRP and VRPTW, confirmed the usefulness of combining the positive properties of both algorithms (GLS and ES-MPM) in one hybrid scheme (Mester and Bräysy, 2005). In this section we describe the same hybrid (GES) algorithm, adapted for UWSP-based genomic applications. The GES algorithm works in two phases. In the first phase, an initial solution is created by the 2-opt local search of Lin and Kernighan (1973), whereas in the second phase an attempt is made to improve the initial solution by the GES algorithm. In this hybrid algorithm,
GLS is a memory-based heuristic that punishes "bad" arcs of the solution vector (the longest and most often visited in local optima) and forms the Penalty Variable Neighborhood (PVN) as a mutation region for optimization by the ES-MPM algorithm. In GES, we define the PVN as γ random components in the vicinity of a penalty arc: γ = (0.1 + 0.6ξ²)n, where 0 ≤ ξ ≤ 1 is a random value and n = |x^k|. In our algorithm, a large PVN (γ = n) is applied with probability 0.01. GLS generates small PVN values more often, thereby accelerating the optimization process. The GLS and ES steps are repeated iteratively in GES, one after another, until a stopping criterion is met (e.g., a limit on execution time, or on the time during which the best solution achieved so far has not been improved). More precisely, the ES algorithm is switched on if GLS cannot improve the solution during a predetermined time interval. Each ES step produces six mutations (with different values of the inserting parameter ω), and after each mutation the newly constructed solution vector is improved by the SLS. Moreover, the ES steps are repeated after successful mutations. Since the GLS step is 20–30 fold faster than the ES step, GLS runs while the counter of unsuccessful iterations (trials) b is less than a user-predefined factor bmax, or while the size of the current PVN is greater than some limit (say, 200 points). The parameter bmax determines the ratio between the number of GLS steps and the number of ES steps in the optimization process. With large bmax (range 25–100) the number of generated GLS solutions is also large, whereas with small bmax (5–25) ES participates more frequently and the resulting solutions may be of higher quality. A short scheme of the GES algorithm in pseudocode is presented below:
Phase I
Create initial solution S0; k = 1; Sk = S0.
Define bmax = 5÷100; b = 0.
Initiate penalty parameters: L = Sk/n and λ (where λ = 0.05÷0.3).
Set pi = 0 for each penalty counter; S** = S0.

Phase II
1. GLS steps: Do
   1.1. Define the penalty arc a(i,i+1) on the current Sk by maximum utility.
   1.2. Correct the penalty distance matrix (di,i+1 = di,i+1 + λL) and the penalty counter (pi = pi + 1).
   1.3. Define the current PVNk around the penalty arc.
   1.4. Produce 2-opt local search on the augmented objective function h(PVNk) → S*.
   1.5. If g(S*) < g(S**) Then S** = S*; b = 0 Else b = b + 1.
   Loop While b < bmax Or |PVNk| > 200.
2. ES-MPM steps: Do
   2.1. success = 0.
   2.2. For ω = 0.6 To 1.4 Step 0.2 Do {
   2.3.    Produce mutation M{PVNk, ω, β}.
   2.4.    Produce the SLS on the augmented objective function h(PVNk) → S*.
   2.5.    Select the best of S* and S**: If g(S*) < g(S**) Then S** = S*; success = 1. }
   Loop While success = 1
   2.6. b = 0; k = k + 1.

3. If not terminated Then GoTo step 1.1.
4. Print S**.
3.5. Experiments on MGM Using Simulated Datasets
For experiments with the TSP, researchers usually try standard problems from the internet library of TSP benchmarks (Moscato, 1996). Standard UWSP benchmarks for genomic problems are absent from the literature; thus, for our computational experiments we simulated different examples with various complications and different levels of signal/noise ratio. The simulation algorithm repeatedly generated a single-chromosome F2 mapping population for a given number of markers, with dominant and codominant markers, marker misclassification, negative and positive interference, and missing data, as in our previous study of ES-based algorithms (Mester et al., 2003a, b), but with a several-fold increase in problem size (e.g., 800÷1000 markers per chromosome instead of 50÷200). In order to compare different situations, a coefficient of restoration quality Kr = (n − 1)/∑|xi − xi+1| was employed, where xi is the digit code of the i-th marker in the currently ordered marker sequence. Table 3.1 presents the classification of the simulated UWSPs based on three factors that complicate multilocus ordering and on the reached Kr level. Clearly, the "true" marker ordering on the simulated data has Kr = 1.0. All experiments were run on a Pentium IV processor (2000 MHz, 1 GB RAM) under the Windows-2003XP operating system. The software for our GES and ES-MPM algorithms was written in Visual Basic 6.0.

Table 3.1. Classification of the simulated UWSP

Class of problem | Restoring factor Kr | MC, % | I, % | MD, %
E – easy         | 0.95–1.0            | 15    | 15   | 30

MC – misclassification, I – interference, MD – missing data.
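The restoration-quality coefficient introduced above is straightforward to compute from the digit codes of the ordered markers; a minimal Python sketch:

```python
def restoration_quality(order):
    """Kr = (n - 1) / sum(|x_i - x_{i+1}|), where x_i is the digit code
    of the i-th marker in the recovered order; the true order gives 1.0."""
    n = len(order)
    return (n - 1) / sum(abs(order[i] - order[i + 1]) for i in range(n - 1))

print(restoration_quality([1, 2, 3, 4, 5]))  # 1.0 (true order)
print(restoration_quality([1, 3, 2, 4, 5]))  # ~0.67
```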
For a comparison of the efficiency of the three algorithms (GLS, ES-MPM, and GES) on the multipoint genetic mapping problem, five classes of UWSP were simulated (with 50÷800 loci). The performance characteristics of the algorithms are compared in Table 3.2. During the experiments, for each algorithm we registered the best (min), the worst (max), and the average (mean) CPU time to reach the optimal solution. These parameters were obtained using 100 random runs for each problem. Before each run, the components of the generated vector (x^k) were carefully reshuffled; thus, we started from arbitrary initial points in each run. In this benchmark, all tested algorithms proved to be very fast and rather similar on problems with 50÷200 loci, but GES was the leader on the difficult large-scale problems (D-400 and D-800). Some notes about the stopping (termination) criteria of the optimization process are needed. Usually, in algorithms based on mimicking natural processes (e.g., GA and ES or their hybrids GES, GGA, GTS), a stopping rule is required. Thus, the user needs to terminate the optimization process by a predetermined time or by a certain condition on the value of the goal function. This fact is especially important if one has to use such algorithms not only for getting the solution, but also for verification of the solution based on resampling procedures such as jackknife and bootstrap (e.g., Mester et al., 2003a, b). One way to deal with this problem is to conduct preliminary tests in order to get a reasonable empirical stopping rule. Using this approach, we defined the termination factor Tr for the GES algorithm in the resampling-based verification process for multilocus genetic mapping (see the last row in Table 3.2). For example, a 100-cycle bootstrap or jackknife verification on the D-800 UWSP with a time limit of 350 seconds requires nearly 10 hours, which is still a reasonable CPU time for a problem of this size. In fact, even a seven-fold reduction of the time limit gives very good results (see the average CPU in Table 3.2). Simpler algorithms (e.g., 2-opt) require less CPU time and should be applied to very large TSPs (up to 5,000–1,000,000 points) arising in technical engineering (Codenotty et al., 1996). However, for genomic problems, obtaining a high-quality solution is by far more important than economy of CPU time.

Table 3.2. Comparison of the performance of the GLS, ES-MPM and GES algorithms on the five simulated UWSP classes (CPU in seconds; Tr is the termination factor; – : not available)

CPU  | Algorithm | M-50 | M-100 | M-200 | D-400 | D-800
Min  | GLS       | 0.1  | 0.1   | 0.10  | 6     | 36
     | ES-MPM    | –    | 0.1   | 0.10  | 18    | 58
     | GES       | –    | 0.1   | 0.10  | 4     | 13
Max  | GLS       | –    | 0.05  | 0.60  | 200   | 418
     | ES-MPM    | –    | 0.05  | 0.90  | 350   | 500
     | GES       | –    | 0.05  | 0.70  | 40    | 113
Mean | GLS       | –    | –     | 0.29  | 83    | 205
     | ES-MPM    | –    | –     | 0.31  | 112   | 185
     | GES       | –    | –     | 0.26  | 25    | 49
Tr   |           | –    | 0.3   | 2.0   | 120   | 350
This requirement is explained by the small differences in the objective function even for very dissimilar multilocus orders. For example, in the D-800 problem, the 2-opt simple local search produced a solution with minimal total length L = 619780 and restoration coefficient Kr = 0.464.
Application of the GES gave a slightly better solution, L = 619560, an improvement in L of only 0.03%, but the quality of ordering was significantly better: Kr = 0.859 (an improvement of 85%). We conducted special experiments to analyze the effect of the neighborhood size (i.e., the size of mutations) on the rate and quality of optimization. It appeared that applying a large neighborhood (PLN) increases the execution time 5–20 fold or more compared to the PVN strategy, while with a small neighborhood (PSN) the increase in rate is outweighed by a reduced quality of the solution compared to PVN. Our adaptive mutation strategy (PVN) ensures both low CPU time and high-quality solutions by using both PLN and PSN (PSN being used more frequently). Typical dependencies of the solution quality on the neighborhood size are shown in Figure 3.2.
Figure 3.2. Influence of the neighborhood size on GES solutions for the E-200 (a) and M-400 (b) UWSP.
Figure 3.3. Influence of the ratio parameter bmax on the CPU time for two simulated UWSP: (a) M-800, (b) M-400.
The optimization process in the GES algorithm is controlled by the ratio parameter bmax and by the penalty parameter λ. The typical dependence of the CPU time (average, minimal, and maximal) on these parameters is shown in Figures 3.3 and 3.4. In Figure 3.3 the last value corresponds to the GLS case (when the ES algorithm is turned off in GES). As one can see, the GES algorithm was three-fold faster than GLS for all tested ratio values. According to our experiments, the optimal values of the parameters under discussion were bmax = 10 and λ = 0.05.
Figure 3.4. Influence of the λ (lambda) parameter on the CPU time for the M-400 simulated UWSP.
3.6. An Approach to Increase Map Reliability by Using a Verification Process
The objective of the verification procedure is to detect unstable neighborhoods in the constructed map. This can be achieved by a jackknife procedure (Efron, 1979): repeated re-sampling of the initial data set, e.g., using each time a certain proportion (say, 80%) of randomly chosen genotypes of the mapping population, and building a new map on each such sub-sample. The identification of unstable regions can be conducted based on the frequency distribution of the right-side and left-side neighbors of each marker. The higher the deviation from 1 (i.e., from the "diagonal" pattern), the less certain the local order is (Figure 3.5a, b). Clearly, the unstable neighborhoods result from fluctuations of the estimates of recombination rates across the repeated samples; the range of fluctuations depends on the sample size and on the proportion of genotypes sampled at each jackknife run. In fact, this analysis is a modeling tool to quantify the diversity of map versions for the treated chromosome, representing the sampling (stochastic) nature of the map. The results of such an evaluation can be visualized to facilitate decision making about candidate problematic markers that should be removed from the map, with subsequent re-building of the map. The algorithm of this process is presented below (a runnable sketch follows the list):

1. Define the distance matrix d(i, j) = F(population size).
2. Build the map Si via a TSP algorithm.
3. Accumulate the solutions Si.
4. Jackknife on the population dataset.
5. If i < 100 then go to 1.
6. Detect and remove the marker(s) causing local map instability.
7. If a marker was deleted then go to 2.
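A compact Python sketch of this jackknife loop (our illustration; `build_map` stands for any TSP-based ordering routine and is assumed, not defined in the chapter):

```python
import random
from collections import Counter

def jackknife_verification(genotypes, markers, build_map, cycles=100, frac=0.8):
    """Repeatedly re-sample genotypes, rebuild the map, and count how often
    each marker keeps the same right-side neighbor across the maps."""
    neighbor_counts = {m: Counter() for m in markers}
    for _ in range(cycles):
        sample = random.sample(genotypes, int(frac * len(genotypes)))
        order = build_map(sample, markers)      # one jackknife map
        for a, b in zip(order, order[1:]):
            neighbor_counts[a][b] += 1
    # a marker is 'stable' if one right-side neighbor dominates across runs
    return {m: (max(c.values()) / cycles if c else 0.0)
            for m, c in neighbor_counts.items()}
```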
Figure 3.5a demonstrates a part of the map for the simulated data that contains some unstable areas. After removing three "bad" markers (numbers 6, 7 and 8), we obtained a map (Figure 3.5b) with stable marker ordering.
Figure 3.5. A fragment of the jackknife-based grid table for the map built on simulated data. (a) Initial order with unstable neighborhoods; (b) stabilization of the order after removing problematic markers (# 6, 7 and 8).
4. Multilocus Consensus Genetic Mapping: Formulation, Model and Algorithms
4.1. Introduction
Numerous mapping projects conducted on various organisms have generated an abundance of mapping data. Consequently, many multilocus maps were constructed using diverse mapping populations and marker sets for the same species. The quality of maps varies broadly between studies that are based on different populations, marker sets, and software. As one would expect, there may be some inconsistencies between different versions of the maps for the same species. This proved to be the case for many organisms, calling for efforts to integrate the mapping information and generate consensus maps (Klein et al., 2000; Menotti-Raymond et al., 2003). Recently we proposed formulations, mathematical models, and algorithms for some multilocus consensus genetic mapping (MCGM) problems (Mester et al., 2005; Korol et al., 2009). The main aspect of the MCGM approach is the requirement of an identical order of shared markers for any set and subset of mapping populations. The problem of consensus mapping is even more challenging than multilocus mapping based on one data set, due to additional complications: differences in recombination rate and distribution along chromosomes; variations in dominance of the employed markers; and different subsets of markers used by different labs. As a result, there is a clear need to be able to handle arbitrary patterns of shared sets of markers. Various formulations of MCGM problems can be considered:
a) Multilocus genetic maps with a "dominance" complication: When building genetic maps using F2 or F3 data with dominant markers in the repulsion phase, we split the marker set into two groups, each with dominant markers in the coupling phase only plus the shared codominant markers (Peng et al., 2000). Multilocus maps are then ordered for the two sets with the requirement that the shared (codominant) markers have an identical order (Mester et al., 2003b).
b) Multilocus genetic maps with sex-dependent recombination: These maps are built on the basis of male and female recombination rates represented as sex-specific matrices. Sex-specificity of the "distance" matrix may force an optimization
algorithm to produce different marker orders (maps). Thus, our goal is to find the optimal solution under the restriction of the same order in the two maps.
c) Multilocus ordering in building consensus maps: Such maps should be constructed based on re-analysis of the raw recombination data generated in different genomic centers on different mapping populations. To solve this problem we developed a new version of the GES algorithm, which defines a consensus order for all shared markers (Mester et al., 2005; Korol et al., 2009).
4.2. Main Idea of the Approach to Solve MCGM
As noted above, to solve MCGM we developed a two-phase algorithm that in Phase I performs multilocus ordering combined with iterative re-sampling-based evaluation of the stability of marker orders. In Phase II we consider consensus mapping as a synchronized TSP, and MCGM is solved by minimizing the criterion of the weighted sum of recombination lengths along all multilocus maps. For the subsequent consideration of consensus mapping we will need some definitions. Let capital letters A, B and C denote sets of objects, while small letters are used for single objects. Let L(A) denote an order of the objects of A, and let l(a, L(A)) be the index number of object a in that order. Let C be a subset of A. We say that L(C) is a partial order of L(A) if and only if the sequence of the C objects is the same in both orders; namely, L(C) is a partial order of L(A) if, for any pair of C objects c and c′, l(c, L(C)) < l(c′, L(C)) implies l(c, L(A)) < l(c′, L(A)). We say that L(C) is a consensual order (CO) of two orders L(A) and L(B) if it is a partial order of both of maximal length. In the same way, a consensual order of more than two orders may be defined as their maximal partial order. Then we define the term consensus (CS) as the consensual order of the consensual orders. According to these definitions, only a single consensus exists, if any; it includes the shared objects of the consensual orders, since they have the same order in all consensual orders. In the case of genetic maps based on overlapping (shared) markers, two complementary sets can be defined: MS, the set of shared markers present in two or more maps, and MU, the set of unique markers present in a single map:

MS = ∪ (Mi ∩ Mj),   (4.1)

where i ≠ j; i, j ∈ {1,…, n}; and n is the number of maps. MS includes the consensus markers (MCS) and the remaining markers, named from here on conflicted markers (MCF):

MCS = MS − MCF.   (4.2)
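The partial-order relation underlying these definitions is easy to check operationally. The following Python sketch (our own; names are illustrative) tests whether L(C) is a partial order of L(A):

```python
def is_partial_order(LC, LA):
    """True iff all objects of LC occur in LA in the same relative sequence,
    i.e., L(C) is a partial order of L(A)."""
    pos = {obj: k for k, obj in enumerate(LA)}
    idx = [pos[obj] for obj in LC if obj in pos]
    return len(idx) == len(LC) and idx == sorted(idx)

print(is_partial_order(["m1", "m3"], ["m1", "m2", "m3", "m4"]))  # True
print(is_partial_order(["m3", "m1"], ["m1", "m2", "m3", "m4"]))  # False
```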
For a pair of maps, the degree of conflict of an MCF marker can be defined as the proportion of consensual orders that do not include this marker. The degree of conflict of MCS markers is always equal to zero, but the reverse is not necessarily true; it holds true for consensus markers, i.e., common markers present in all maps. In further consideration we will denote consensus orders by LCO. The main idea in solving MCGM is to generate a set of possible orders of the shared markers, L(MCS), and evaluate them by the criterion of minimal total distance R over the datasets i ∈ {1,…, n} (chromosomes) of the problem:
R = Σi ki Ri → min,   (4.3)

where ki is a coefficient of quality of dataset i, and Ri is the map length of dataset i. Each dataset i contains some shared markers MS(i) ⊆ MCS, named anchors, and some unique markers MU(i). For the proposed two-phase approach, we developed two different algorithms for the second phase, i.e., for searching for the consensus solution of MCGM. The first one was named Full Frame (FF); it uses special heuristics for global discrete optimization in the synchronized TSP for all markers (unique, shared conflicting and non-conflicting). Our tests show that the FF algorithm is effective for up to k = 10–15 data sets with a total number of shared markers n < 50. For larger problems, we developed another algorithm based on defining regions of local conflicts in the orders of shared markers (referred to as Specific Conflicted Frames, SCF), followed by "local" multilocus consensus ordering for each such region. This approach allows solving much larger MCGM problems (e.g., k > 20–30 and N > 100) by moving along the SCF. Solving MCGM via dissecting the chromosome into SCF includes defining the sets of conflicted marker regions obtained in Phase I (based on the non-synchronized solutions). Then, SCF are formed by analysis of all pairs of the resulting individual maps. Each SCF contains shared conflicting and non-conflicting markers, and some set-specific ("unique") markers. The remaining non-conflicting shared markers between the SCF regions are considered "frozen" anchors during the solution process for each SCF region, i.e., only SCF markers participate in the optimization process. This approach significantly reduces CPU time, and for SCF of small size (m ≤ 14–16) an exact solution can be obtained. For the Full Frame (FF) approach, the optimal order of the MU(i) markers relative to MS(i) is defined by the heuristic GES algorithm (Mester et al., 2004), adapted to work with anchor markers. Both the FF and SCF algorithms are described in detail in the next sections.
4.3. Full Frame (FF) Algorithm for MCGM
The two-phase FF algorithm, based on the synchronized GES algorithm (see Mester et al., 2005), was improved in several aspects. For Phase I, the GES algorithm described in section 3 was strengthened by three additional local search procedures: "Reinsert", "Reverse-Reinsert", and "Exchange 1*1" (Bräysy and Gendreau, 2001b). These procedures were adapted to work with anchor-marker constraints. As a result, acceleration of the optimization process and better accuracy of the solution were achieved. For Phase II, a new random multi-parametric generator of shared-marker orders was developed, comprising the mutation stage of the optimization algorithm. For each generated order of shared markers, the ordering of the non-shared (chromosome-specific) markers is determined by minimizing the criterion of the weighted sum of recombination lengths along all individual multilocus maps (the selection stage of the optimization algorithm). To obtain the exact solution for the shared markers, one would have to generate all n!/2 possible orders. One way to improve the mutation stage is to strengthen the procedure by making the main parameters of the process variable, analogous to natural processes where mutation and recombination may depend on the environment and the organism's fitness (Korol et al., 1994). In our scheme, six parameters define the mutation strategy:

size of the mutation neighborhood, z = 1 + 0.25f²;
random position p1 of the neighborhood;
new random position p2 ≠ p1 of the neighborhood;
probability s = 0.2 of shifting (transposing) the neighborhood;
probability v = 0.2 of reversing the neighborhood;
number of mutations h = 1 + 2f² on the newly generated order,

where f is a random value uniformly distributed between 0 and 1. This mutation strategy generates short neighborhoods, so that small shifting values appear more often among the newly generated orders of shared markers. Each mutation is performed on the best order achieved so far, in accordance with the usual scheme of evolution strategy algorithms; a sketch of the mutator is given below.
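A Python sketch of this six-parameter mutator (ours; since the neighborhood-size formula lost its scaling in extraction, scaling z by the number of markers is our assumption):

```python
import random

def mutate_shared_order(order, s=0.2, v=0.2):
    """One application of the six-parameter mutation strategy for shared
    markers: choose a short neighborhood, possibly reverse and/or shift
    (transpose) it, and repeat h times."""
    order = order[:]
    f = random.random()
    h = 1 + int(2 * f * f)                      # number of mutations, 1..3
    for _ in range(h):
        f = random.random()
        z = 1 + int(0.25 * f * f * len(order))  # neighborhood size (short)
        p1 = random.randrange(len(order) - z + 1)
        segment = order[p1:p1 + z]
        if random.random() < v:                 # reverse the neighborhood
            segment.reverse()
        if random.random() < s:                 # shift it to a new position p2
            del order[p1:p1 + z]
            p2 = random.randrange(len(order) + 1)
            order[p2:p2] = segment
        else:
            order[p1:p1 + z] = segment
    return order
```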
The efficiency of the algorithms is affected by the quality of the initial solutions. Two parameters are used for estimating the quality of mapping during the optimization process: the total distance T of the shared markers, named the skeleton criterion (SKC); and the total distance R, calculated according to eq. (4.3) and named the primary criterion (PRC).
The SKC is accepted as one of the starting points for the ordering of shared markers. The minimum of the SKC is found using the GES algorithm; then, during the process of PRC optimization, all current T(Li(MS)) values and the corresponding best R(Li(MS), MU) values are stored in a "learning" list. The process of accumulation of T(Li(MS)) and R(Li(MS), MU) pairs can be considered a "learning" process. The values of Ti and Ri from the list are correlated, although the extremes do not necessarily coincide (i.e., the minimum of R may not correspond to the minimum of T). The computation time for evaluating criterion R can range from 0.1 s to 10 s, depending on the problem size. Therefore, checking each generated order Li(MS) that deviates considerably from the best PRC does not seem to be a good idea; thus we check only those Li(MS) that satisfy the following condition:

T(Li(MS)) ≤ q,   (4.4)

where the parameter q is the threshold value of T (see the Appendix for details on the calculation of q). This limitation reduces the computation time significantly (20–100 fold). The typical form of the dependence of the primary criterion on the skeleton criterion is shown in Figure 4.1. We keep sampling new orders for choosing q until the synchronized GES algorithm stops.
4.4. Specific Conflicted Frames (SCF) Algorithm for MCGM
The main idea of solving MCGM by SCF is to form sets of conflicted marker zones on the non-synchronized solutions. The SCF are formed by analyzing all pairs of chromosomes. Figure 4.2 presents an example of forming SCF for a pair of chromosomes.
Figure 4.1. Typical dependence of the primary criterion R on the skeleton criterion T.
Figure 4.2. Examples of SCF on the non-synchronized solutions.
In this example, three SCFs were formed by the conflicted markers (*mar1, *mar2, *mar3; *mar11, *mar13; and *mar40, *mar41). Each SCF contains shared conflicting and non-conflicting markers, and some unique (non-shared) markers. For the other markers (between the SCFs), we preserve the consensus order obtained from the non-synchronized solutions. Therefore, only SCF markers participate in the optimization process by criterion (4.3). This approach significantly reduces CPU time. Moreover, for an SCF of small size, the problem can be solved exactly by trying all possible orders (full perturbation, FP) of this SCF. The combinatorial complexity of an SCF is a function of both the number of shared markers k and the number of unique markers n; the total number of possible orders is (n + k)!. An example of an SCF with k = 5 and n = 10 is shown in Figure 4.3.
Figure 4.3. Forming order combinations of markers for an SCF, where gi ∈ MS and ui ∈ MU.
The main idea of fast testing of alternative orders is based on the fact that any interval between two markers from the set MS includes a subset of markers from the set MU (in some cases this subset may be empty). The revealed optimal order of markers from MU will be optimal for any interval [b1, b2] from MS. In other words, the proposed accelerated perturbation (AP) algorithm prevents repeated perturbations of markers from MU on MS. The dependence of the CPU time (in seconds) of the AP and FP algorithms on n is illustrated in Table 4.1; clearly, the advantage of the AP algorithm grows with increasing n. Table 4.2 presents the consensus solution of MCGM for a simulated example with ten chromosomes and 50 markers in each chromosome (only the shared markers are shown). As one can see, the resulting consensus solution does not contain conflicted markers, and the ordering of the shared markers is correct.
5. TSP-Like Problem in Physical Mapping (PMP)
5.1. Introduction
In the last decade, the genomes of many organisms were cloned in bacterial artificial chromosomes (BAC). The resulting BAC libraries can be characterized by specific markers and/or via restriction fingerprinting, resulting for each clone in a "barcode" of DNA fragments. The availability of BAC libraries with overlapping clones allows employing the barcode profiles as a tool for quantifying clone overlaps, with the objective of constructing physical maps (Shizuya et al., 1992; Vanhouten and Mackenzie, 1999). Such maps characterize the relative positions of cloned DNA fragments in the genome and can be used in large-scale DNA sequencing projects (e.g., the human genome project, Lander et al., 2001; McPherson et al., 2001; the maize mapping project, Coe et al., 2002; mouse, Gregory et al., 2002), high-resolution gene mapping (Faris and Gill, 2002), and map-based gene cloning (Wang et al., 1995; Tanksley et al., 1995). Physical mapping is a long, laborious and expensive
procedure; hence the development of algorithms and methods making this process more effective is important, especially for dealing with large complex genomes. One of the major steps in physical map construction is clone assembly into ordered contigs. The main idea of contig assembly is that the shared part of the DNA in two clones is expected to produce fragments of shared size in the fingerprint profiles of the clones. Hence, the presence of fragments (bands) of the same length in two clones ci and cj can indicate a possible overlap of these clones. However, the abundance of repeated elements in the genome and the limited accuracy of scoring the band lengths make the data on shared bands less informative: a part of the shared bands of two clones may derive from different parts of the chromosome (Soderlund et al., 2000). The overlap of clones can also be tested via sequencing of clone ends and scoring the overlap of sequences. Such a procedure is much more powerful, but it is also much more expensive and laborious than the scoring of common bands (Venter et al., 1996).
5.1.1. The Model and Problem Formulation
A chromosome can be considered a sequence of bands (bi), i = 1,…, I. Each band is an integer number from 1 to L. Theoretically, each clone c represents a part of this sequence: Bc = (bi), i = ibegin(c),…, iend(c). Only the sets of clone bands can be observed, not their orders and abundance within the clones. In practice, an observed band size in a clone can deviate from the real one due to the limited resolution of the technical system. Moreover, some of the clone bands can be missed, and some extra (false) bands can be observed. Additional difficulties come from artificial ("chimerical") clones, which physically contain parts from different (non-adjacent) places of the chromosome. Using such sets of observed clone bands, we need to reconstruct the relative positions of the clones within the chromosome. To solve this problem, the clone overlaps for each pair of clones are scored as the numbers of common bands. Using the theoretical distribution of clone overlaps for random clones, the statistical significance of a clone overlap (p-value) can be calculated. It is expected that a highly significant clone overlap corresponds to a physical overlap. Therefore, we are looking for clone orders satisfying the following requirements: (i) adjacent clones are significantly overlapping (orders with higher significances of adjacent clones are preferable); (ii) orders are long (orders containing more clones are preferable); (iii) orders are effective by overlap (i.e., orders with a higher sum of clone overlaps for adjacent clones are preferable). Note that some (short) clones can represent a part of the genome covered by another (longer) clone; such short clones are referred to as "buried". One-dimensional ordering of a set of clones containing buried clones can be problematic (for example, in a situation of four clones c1, c2, c3 and c, such that c is buried in c2, and clones c1 and c3 overlap only with c2). Nevertheless, buried clones can be considered as attached to the ordered chain of non-buried clones. Buried clones can provide additional information about the relative positions of bands within clones and prove clone ordering (Fasulo et al., 1998).
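The band-sharing count that underlies the overlap score can be illustrated with a small sketch (ours; the greedy tolerance matching is an assumption — the chapter's p-value itself is derived from the theoretical distribution of overlaps for random clones):

```python
def shared_bands(bands_a, bands_b, tol=0):
    """Count bands of (approximately) equal length in two clone fingerprints.

    Band lengths are matched greedily within +/- tol units; because of
    repeats and limited sizing accuracy, equal-length bands may still come
    from different genome locations, so the count only suggests overlap."""
    remaining = sorted(bands_b)
    used = [False] * len(remaining)
    count = 0
    for x in sorted(bands_a):
        for k, y in enumerate(remaining):
            if not used[k] and abs(x - y) <= tol:
                used[k] = True
                count += 1
                break
    return count

print(shared_bands([120, 340, 560, 900], [118, 345, 900, 1500], tol=5))  # 3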
5.1.2. Standard Methodology for Solving the Physical Mapping Problem
The standard program package FingerPrinted Contigs (FPC) assembles clones into contigs by using either the end-labeled double digest method (Coulson et al., 1986; Gregory et al., 1997) or the complete digest method (Olson et al., 1986; Marra et al., 1999). Because of the
computational challenge of ordering a huge number of clones (about 10⁴–10⁵ per chromosome), FPC subdivides the set of clones into relatively small clusters of strongly overlapping clones (up to 20–50 per cluster). The clones of each cluster are ordered using local optimization and building band maps. The next step is merging the resulting orders into contigs under less restrictive conditions for clone overlaps, allowing only end-to-end merging (Soderlund et al., 2000). FPC uses a uniform significance stringency to decide about clone overlap. To obtain relatively small cluster sizes, FPC applies a very stringent level of significance, so that many pairs of clones that in fact overlap physically do not pass this level. As a result, many singletons and short clusters appear in which the ordering of clones is questionable, and end-to-end merging of short contigs is also problematic. Using a more liberal significance stringency can lead to the appearance of contigs with a non-linear structure, caused by problematic ("questionable", Q-) clones and by falsely significant overlaps. Local optimization can also fail to find the optimal clone order and a proper band map construction: even a small error in clone ordering can lead to situations where clones "jump" from the correct position to the first vacant place. In particular, this can result in a high inconsistency between clone overlaps and their positions in the resulting map of bands within the contig. If clones adjacent within the contig have a non-significant number of common bands, then one can expect that these clones will also show poor overlap at the sequence level; this will lead to splitting of the contig into shorter sub-contigs.
5.1.3. An Alternative Approach to Contig Assembly
Here we present elements of our new approach to contig assembly. In contrast to the standard FPC approach, we start from clone clustering with a relatively liberal overlap stringency (cutoff of the significance level). Then we conduct stepwise clustering with ever-increasing stringency, coordinated with an assessment of the topological structure of the resulting clustering. In each cluster we order clones by using the global optimization heuristics developed for solving the standard TSP. Based on a computing-intensive jackknife resampling analysis, we detect and exclude from the contig the clones that disturb the consensus clone order. This leads to the appearance of well-ordered contigs that can be merged into longer contigs by relaxing the cutoff, under topological control of the cluster structure. Our methods allow the construction of reliable and longer contigs, the detection of "weak" connections in contigs and their "repair", as well as the elongation of contigs obtained by other assembly methods.
5.2. Reducing PMP to the Standard TSP
Our algorithm of contig construction includes the following steps: (i) calculation of p-values for clone overlaps; (ii) grouping clones into clusters of reliable size with a linear topological structure; (iii) one-dimensional ordering using global optimization; (iv) re-sampling-based verification of the obtained order; and (v) merging of contigs into longer contigs. In the first stage we calculate all pair-wise p-values Pr(ci, cj) of clone overlapping and select a threshold Pr0 (cutoff) to define the clones ci and cj with Pr(ci, cj) < Pr0 that can be referred to as significantly overlapping. A proper choice of the threshold Pr0 should provide a reasonable tradeoff between two requirements: (a) to provide a sufficient number of pairs of overlapping clones, and (b) to reduce the proportion of false overlaps among the selected clone pairs.
In contrast to the standard FPC strategy, where falsely significant clone overlaps and chimerical clones are identified by using the band map while ordering highly significantly overlapping clones, we exclude putatively false significant overlaps and putatively chimerical clones at the clustering stage. The excluded clones and overlaps can be used later in attempts to merge contigs. Our main idea for the identification of problematic clones and clone overlaps is that each part of the chromosome is most probably covered by several clones (although, in fact, some parts can be uncovered or poorly covered by clones). We expect that chimerical clones and false clone overlaps are usually not proven by parallel clones. Clustering subdivides clones into groups covering different parts of the chromosome. We cluster clones in such a way that the whole chromosome part (without the ends) covered by clones from the cluster is covered by several significantly overlapping clones. Moreover, we require that, even after excluding any single clone or clone overlap from consideration, for any pair of clones ci and cj from the cluster C0 there exists a sequence of clones c(1),…, c(n) from C0 such that c(1) = ci, c(n) = cj, and the overlap of clones c(k) and c(k+1) is significant for all k = 1,…, n−1. For clones from such clusters we define a distance based on the p-value of clone overlaps, set equal to infinity for non-significantly overlapping clones (see below). After excluding buried clones, the problem of clone ordering is reduced to the standard TSP without requiring a return to the initial point.
5.2.1. Clustering Algorithm Let Pr 0 be a liberal level of cutoff (we used 10-3/N2, where N~104-106 is the number of clones in a physical mapping project). With calculating all pair-wise clone overlaps Pr we subdivide the clone database into clusters by single-linkage algorithm (Jain and Dubes, 1988). Ordering of clusters with large number of clones is a computationally challenging problem. This is why stringent cutoffs are usually used to subdivide large clusters into small-to-intermediate ones. FPC uses uniform cutoff for entire database to decide if a pair of clones belong to one cluster in single-linkage clustering algorithm. However, database can contain cluster(s) with a very large number of highly significantly overlapped clones. In such situation, applying uniform cutoff may result in a solution with a few very big clusters and a lot of extremely small clusters. Further dividing of the big clusters leads to the appearance of numerous very small clusters reducing the chance of getting large contigs. Instead of uniform cutoff we propose a procedure with adaptively increasing cutoff stringency (say by three orders of magnitude). We start clustering with a liberal cutoff Pr 0, and select the resulting reasonable size (rs) clusters, say clusters with size up to 300 clones. For each cutoff level we consider each cluster of clones as a net of significant overlaps (where net vertices correspond to clones and edges correspond to significant relative to Pr 0 clone overlaps). In this net we identify and temporally exclude from the analysis clones and clone overlaps not proven by parallel paths (see Figure 5.1a). For convenience, we refer to such procedure as TENPP-procedure with a respective cutoff Pr . After TENPP-procedure we run single linkage algorithm on large clusters. At the next step, we increase the stringency, but only after removing from the consideration the selected reasonable size clusters (i.e., protecting them from further “dissolving”). After running this algorithm, rs-clusters are again considered as a net of significant overlaps and is subdivided into sub-clusters such as corresponding net is subdivided into parts having linear topological structure (see Figure 5.1b and paragraph 5 .2.2. below). The scheme of our clustering algorithm is illustrated in Figure 5.2.
Figure 5.1. Splitting clones into clusters with linear topological structure. The set of significant clone overlaps is considered as a net: vertices correspond to clones; a pair of vertices is connected by an edge if the overlap of the respective clones is significant. (a) Excluding clones and clone overlaps not proven by short (of 2-4 edges) parallel paths. First we detect and exclude from the analysis non-proven clone overlaps, and only after this do we detect and exclude non-proven clones. Excluded edges and vertices in the presented example are marked by arrows. (b) Excluding clones at branchings. In the presented example the set of such clones is marked by an ellipse. The remaining set of clones is subdivided into four clusters having linear topological structure by the standard single-linkage algorithm.
Figure 5.2. Scheme of clone clustering with adaptive cutoff. Diamonds denote single-linkage clustering with the corresponding cutoff. Circles denote the procedure of excluding from the analysis clones and clone overlaps not proven by parallel paths in the net of significant (relative to the corresponding cutoff) clone overlaps.
5.2.2. Looking for Linear Topological Structure Although after the TENPP procedure every non-excluded significant clone overlap has a parallel path (possibly passing through temporarily excluded clones), some of the overlaps can still be falsely significant and some of the non-excluded clones can be chimerical. Such overlaps and chimeras can lead to situations where a cluster of clones, represented as a net of significant overlaps, has a non-linear topological structure (Figure 5.1b), incompatible with the one-dimensional structure of a eukaryotic chromosome. Clearly, ordering such clusters is problematic. To overcome this problem, we propose splitting clusters into sub-clusters having linear topological structure by excluding from the analysis the clones at the branching nodes (Figure 5.1b). Non-linearity of the cluster structure can be detected by scoring the ranks of vertices relative to some diametric path (any of the possible diametric paths can be employed). Recall that, by definition, a diametric path is the longest non-reducible path in terms of the number of edges (e.g., Bollobas, 2002). The presence of vertices with not-too-small ranks (e.g., more than 2) points to long offshoots from the selected diametric path and, hence, a possibly non-linear structure. The maximal rank can depend on the selection of the diametric path (by up to a factor of two).
Hence, the formulated criterion for non-linearity detection is sufficient but not necessary. Clones and overlaps causing non-linearity are excluded from the analysis only at the stage of sub-contig ordering, although some of them may be used later for merging sub-contigs into a contig.
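As an illustration of the rank criterion, the following sketch (our reading, not the authors' code) picks one diametric path by a double breadth-first search (an approximation, since any diametric path may be employed) and scores each clone's rank as its hop distance to that path:

import networkx as nx

def max_offshoot_rank(net):
    """Rank clones by hop distance to one diametric path of the cluster
    net (assumed connected); ranks above 2 point to offshoots that are
    incompatible with a linear structure."""
    start = next(iter(net))
    d0 = nx.single_source_shortest_path_length(net, start)
    u = max(d0, key=d0.get)                     # far end of a long path
    du = nx.single_source_shortest_path_length(net, u)
    v = max(du, key=du.get)                     # opposite end
    path = set(nx.shortest_path(net, u, v))     # one diametric path
    ranks = nx.multi_source_dijkstra_path_length(net, path)
    branching = [c for c, r in ranks.items() if r > 2]
    return max(ranks.values()), branching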
5.2.3. Multipoint Ordering Using Global Optimization As mentioned above, we propose ordering the clusters proven to have linear topological structure based on the statistical significance of clone overlaps. We formulate the ordering problem in terms of global maximization of a criterion (similar to Fickett and Cinkosky, 1992; Alizadah et al., 1993; Ben-Dor and Chor, 1997; Flibotte et al., 2004). For simplicity, we consider only the situation where no clones are buried. This situation can be achieved by temporarily excluding buried clones from the analysis, although it can sometimes lead to losing contig connectedness. The criterion W(Ω) of a clone order Ω=(cΩ(1),…, cΩ(n)) is calculated as:

W(Ω) = Σ(i=1,…,n-1) Wi(Ω) - b(Ω)W0 = Σ(i=1,…,n-1) [-log Pr(cΩ(i), cΩ(i+1))] - b(Ω)W0,
(5.1)
where b(Ω) is the number of adjacent (within the order Ω) clone pairs with Pr(cΩ(i), cΩ(i+1)) > Pr0, and W0 is the penalty for non-significant overlapping of adjacent clones. Maximization of this criterion can be reformulated as a standard TSP without requiring a return to the starting point. Let Wmax be the maximum of -log Pr(ci, cj). We define the distance between two clones as

d(ci, cj) = Wmax - (-log Pr(ci, cj)) + W0·1{Pr(ci, cj) > Pr0},
(5.2)
where 1{Pr(ci, cj) > Pr0} is the indicator function, equal to 1 if Pr(ci, cj) > Pr0 and to zero otherwise. Global optimization is especially effective if additional information on DNA markers is also available (Flibotte et al., 2004). The TSP is NP-hard. Nevertheless, good heuristics (e.g., based on evolution strategy optimization) for solving the TSP have been developed for situations where the number of vertices is up to the order of 10³ (Mester et al., 2004).
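For concreteness, criterion (5.1) and distance (5.2) translate directly into code; the sketch below assumes the pairwise overlap p-values are held in a matrix with entries in (0, 1]. Since minimizing the summed distance over an open path equals (n-1)·Wmax minus W(Ω), solving the open (no-return) TSP on d maximizes W:

import numpy as np

def tsp_distances(pr, pr0, w0):
    """Distance matrix of eq. (5.2); pr is an (n, n) array of pairwise
    overlap p-values in (0, 1] (the diagonal is ignored)."""
    neglog = -np.log(pr)
    off = ~np.eye(pr.shape[0], dtype=bool)
    wmax = neglog[off].max()                 # Wmax = max of -log Pr
    d = wmax - neglog + w0 * (pr > pr0)      # penalty via the indicator
    np.fill_diagonal(d, 0.0)
    return d

def order_score(order, pr, pr0, w0):
    """Criterion W(Omega) of eq. (5.1) for a clone order (index list)."""
    pairs = list(zip(order, order[1:]))
    w = sum(-np.log(pr[i, j]) for i, j in pairs)
    b = sum(pr[i, j] > pr0 for i, j in pairs)  # non-significant adjacencies
    return w - b * w0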
5.2.4. Re-Sampling Verification of the Obtained Solution The quality of the ordering of clones within a contig is characterized not only by the value of the chosen criterion, but also by its robustness to small uncertainties in the band content of the clones, which can be referred to as contig stability. To evaluate this stability we use jackknife iterations. Namely, we first construct the order using clone overlaps scored over all bands. In addition, we construct orders using clone overlaps based on randomly selected subsets of bands (95% of the total set). Then, unstable regions can be identified based on the frequency distribution of the right-side and left-side neighbors of each clone in the contig order. The higher the deviation from 1 (i.e., from the "diagonal" pattern), the less certain the local order (Mester et al., 2003). One of the main reasons for the appearance of unstable orders is the high similarity of parallel clones, which may differ mainly due to noise unavoidable under any technology. Excluding parallel clones allows constructing a stable
"skeleton" map, analogously to the approach suggested for building genetic maps (see section 3.6).
5.2.5. Merging of Sub-Contigs After ordering, we try to elongate the resulting contigs by merging contigs that display end-to-end significant overlaps (which may also be achievable by adding 1-2 connecting clones or by adding singletons). First, we return to the analysis all clones and clone overlaps temporarily excluded at previous stages. To elongate a given contig, we search for all clones connected (by a significant overlap, or via a short path of significant overlaps) with the clones at the ends of the contig. If adding all these clones (for one of the two contig ends) does not violate contig linearity, then such an elongation does not seem to be problematic. If adding these clones does lead to branching (i.e., contradicts the linear structure of the chromosome), then each of the possible linear elongations (Figure 5.3) must be considered. The correct elongation can be detected by testing clone overlapping based on clone-end sequencing (Venter et al., 1996). The same problem arises if clones from one contig significantly overlap with middle clones of another contig. The availability of DNA markers (in clones) with known chromosomal positions can help to reject merging of contigs from different chromosomal zones. Contigs having clones with markers from different chromosomal zones must be divided. Contigs resulting from elongation should be reordered (see section 5.2.3).
Figure 5.3. End-to-end merging of contigs. Three contigs with linear topological structure (significant clone overlaps marked by solid lines) are connected end to end via additional clones and significant clone overlaps (possibly excluded at previous stages; marked by dotted lines). There are several possibilities of merging: (a), (b), (c) end-to-end merging of two contigs (overlaps with clones from the third contig are considered falsely significant); (d) merging of all three contigs with reordering of the clones in the second one.
6. Conclusions Several problems in modern genome mapping analysis belong to the field of discrete optimization on the set of all possible orders. In this paper we propose formulations, mathematical models and algorithms for genetic/genomic problems that can be formulated in TSP-like terms. These problems are computationally challenging because of noisy marker scores, large data sets, specific constraints on certain classes of orders, and other complications. These complications do not allow the direct use of known exact and heuristic discrete optimization methods (e.g., cutting-plane methods, the genetic algorithm with EAX crossover, and the famous Lin-Kernighan heuristic). For solving the genome mapping problems we developed the Guided Evolution Strategy (GES) heuristic, based on the Guided Local Search (GLS) and Evolution Strategy (ES) algorithms. The GLS and ES algorithms in GES work together; the employment of a variable neighborhood and a multi-parametric mutation process empowers the optimization algorithm. An approach to increase the reliability of multilocus map ordering was presented here, based on jackknife re-sampling as a tool for testing map quality. Detecting and removing markers responsible for local map instabilities and non-monotonic changes in recombination rates allows building stable skeleton maps with minimal total length. Further improvement of mapping quality is achievable by joint analysis of mapping data from different mapping populations. Separate ordering of different data sets does not guarantee identical orders for shared markers in the resulting maps, calling for the detection and removal of conflicting markers. An alternative (presented above) is building de novo a consensus multilocus map based on "synchronized ordering" rather than merging the previously derived maps. This approach is also applicable in situations of gender-dependent recombination and in the combined analysis of genetic and physical mapping data, possibly in a sequential experimentation manner. In this paper we demonstrated, on genetic and genomic TSP-like applications, that the proposed Guided Evolution Strategy algorithm successfully solves these constrained discrete optimization problems. The efficiency of the proposed algorithms is demonstrated on standard TSP problems and on three genetic/genomic problems with up to 2,500 points.
Acknowledgment This research was partially supported by Binational Agricultural Research and Development Fund (BARD research project US-3873-06), by the Ministry of Absorption, Israel and German-Israeli Cooperation Project (DIP project funded by the BMBF and supported by BMBF's International Bureau at the DLR), FP7-212019 research grant, and by the EDGE project funded by the Research Council of Norway and Jenny and Antti Wihuri Foundation.
Appendix. Choosing Threshold Q Value for PRC Calculation Let PRC and SKC be already calculated for n different orders of skeleton markers (we used n ≈ 100). For the current SKC value, we want to decide whether the calculation of PRC (which
takes much more CPU time than SKC) is desirable. As a model of the relationship between SKC and PRC, the following linear approximation is used:

Ri = Rmean + b(Ti - Tmean) + ei,
where Ri and Ti are the PRC and SKC values for the i-th order of skeleton markers, respectively; Rmean and Tmean are the mean values of PRC and SKC over all possible orders of skeleton markers; b is the regression coefficient; and ei is the current difference between Ri and Rmean + b(Ti - Tmean). Let p0 be the level of significance (for example, p0 = 5%), and let e0 be the quantile of level p0 for the sample distribution of e, i.e., the np0-th element of the sequence of ei values ordered by increasing value. The values Rmean and b can be estimated by the least squares method. Let Rbest = min_i Ri, and let T be the observed SKC value for the current order of skeleton markers. To decide whether PRC should be calculated, we score Re = Rmean + b(T - Tmean). If Re + e0 > Rbest, then we suppose that the PRC for such a skeleton order is higher (with probability 1 - p0) than the obtained Rbest, and we do not calculate it. Thus, q = Tmean - (Rmean - Rbest + e0)/b.
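For illustration, the decision rule might be coded as below (a sketch under the stated linear model, with least-squares estimates via numpy; the function name is ours):

import numpy as np

def prc_decision(T, skc, prc, p0=0.05):
    """skc, prc: arrays of SKC and PRC values for previously evaluated
    skeleton orders; T: SKC of the current order.  Returns whether PRC
    is worth computing, and the threshold q."""
    slope, intercept = np.polyfit(skc, prc, 1)   # least-squares estimates of b, etc.
    resid = prc - (intercept + slope * skc)      # e_i
    e0 = np.quantile(resid, p0)                  # p0-level quantile of e
    r_best = prc.min()                           # Rbest
    r_e = intercept + slope * T                  # Re = Rmean + b(T - Tmean)
    # the fit passes through (Tmean, Rmean), so q matches the formula in the text
    q = (r_best - e0 - intercept) / slope
    return r_e + e0 <= r_best, q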
References

Alizadah, F., Karp, R., Newberg, L. A., and Weisser, D. (1993). Physical mapping of chromosomes: A combinatorial problem in molecular biology. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 371-381.
Applegate, D., Cook, W., and Rohe, A. (2003). Chained Lin-Kernighan for large traveling salesman problems. IJOC, 15, pp. 82-92.
Ben-Dor, A., and Chor, B. (1997). On constructing radiation hybrid maps. Journal of Computational Biology, 4, pp. 517-534.
Bollobas, B. (2002). Modern Graph Theory (1st ed.). Springer.
Bräysy, O., and Gendreau, M. (2001a). Metaheuristics for the vehicle routing problem with time windows. Internal Report STF42 A01025, SINTEF Applied Mathematics, Department of Optimization, Norway.
Bräysy, O., and Gendreau, M. (2001b). Route construction and local search algorithms for the vehicle routing problem with time windows. Internal Report STF42 A01025, SINTEF Applied Mathematics, Department of Optimization, Norway.
Burkard, R., Deineko, V., van Dal, R., van der Veen, J., and Woeginger, G. (1998). Well-solvable special cases of the travelling salesman problem: a survey. SIAM Rev., 40, pp. 496-546.
Chvatal, V., Applegate, D., Bixby, R., and Cook, W. (1999). Concorde: a code for solving travelling salesman problems (http://www.math.princeton.edu/tsp/concorde.html).
Codenotti, B., Margara, L., and Resta, G. (1996). Perturbation: An efficient technique for the solution of very large instances of the Euclidean TSP. Informs Journal on Computing, 8(2), pp. 125-133.
Coe, E., Cone, K., McMullen, M., Chen, S., Davis, G., Gardiner, J., Liscum, E., Polacco, M., Paterson, A., Sanchez-Villeda, H., Soderlund, C., and Wing, R. (2002). Access to the maize genome: an integrated physical and genetic map. Plant Physiology, 128, pp. 9-12.
Coulson, A., Sulston, J., Brenner, S., and Karn, J. (1986). Toward a physical map of the genome of the nematode C. elegans. Proc. Natl. Acad. Sci. USA, 83, pp. 7821-7825.
Cowling, P. (1995). Optimization in steel hot rolling. In: Optimization in Industry. Wiley, Chichester, pp. 55-66.
Cowling, P., and Keuthen, R. (2005). Embedded local search approaches for routing optimization. Comp. and Oper. Res., 32, pp. 465-490.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Ann. Stat., 7, pp. 1-26.
Efron, B., and Tibshirani, R. (1993). An Introduction to the Bootstrap. Chapman and Hall, New York.
Ellis, T. (1997). Neighbour mapping as a method for ordering genetic markers. Genet. Res. Camb., 69, pp. 35-43.
Emrich, S. J., Aluru, S., Fu, Y., Wen, T. J., Narayanan, M., Guo, L., Ashlock, D. A., and Schnable, P. S. (2004). A strategy for assembling the maize (Zea mays L.) genome. Bioinformatics, 20, pp. 140-147.
Falk, C. T. (1992). Preliminary ordering of multiple linked loci using pairwise linkage data. Genetic Epidemiology, 9, pp. 367-375.
Faris, J. T., and Gill, B. S. (2002). Genomic targeting and high-resolution mapping of the domestication gene Q in wheat. Genome, 45, pp. 706-718.
Fasulo, D., Jiang, T., Karp, R. M., and Sharma, N. (1998). Constructing maps using the span and inclusion relations. In: Istrail, S., Pevzner, P., and Waterman, M. (eds.), RECOMB '98: Proceedings of the Second Annual International Conference on Computational Molecular Biology. ACM, New York, pp. 64-73.
Fickett, J., and Cinkosky, M. (1992). A genetic algorithm for assembling chromosome physical maps. In: Lim, H., Fickett, J., Cantor, C., and Robbins, R. (eds.), The Second International Conference on Bioinformatics, Supercomputing and Complex Genomic Analysis. World Scientific, New Jersey, pp. 273-285.
Fischer, T., and Merz, P. (2004). Embedding a chained Lin-Kernighan algorithm into a distributed algorithm. Report 331/04, University of Kaiserslautern.
Flibotte, S. R., Chiu, R., Fjell, C., Krzywinski, M., Schein, J. E., Shin, H., and Marra, M. A. (2004). Automated ordering of fingerprinted clones. Bioinformatics, 20(8).
Flood, M. M. (1956). The travelling-salesman problem. Oper. Res., 4, pp. 61-75.
Gamboa, D., Rego, C., and Glover, F. (2006). Implementation analysis of efficient heuristic algorithms for the traveling salesman problem. Comp. and Oper. Res., 33, pp. 1154-1172.
Givry, S., Bouchez, M., Chabrier, P., Milan, D., and Schiex, T. (2001). CarthaGene: multipopulation integrated genetic and radiation hybrid mapping. Bioinformatics, 8, pp. 1703-1704.
Gregory, S. G., Howell, G., and Bentley, D. (1997). Genome mapping by fluorescent fingerprinting. Genome Res., 7, pp. 1162-1168.
Gregory, S., Sekhon, M., Schein, J., et al. (2002). A physical map of the mouse genome. Nature, 418, pp. 743-750.
Hall, D., Bhandarkar, M., Arnold, J., and Jiang, T. (2001). Physical mapping with automatic capture of hybridization data. Bioinformatics, 3, pp. 205-213.
Helsgaun, K. (2000). An effective implementation of the Lin-Kernighan traveling salesman heuristic. Eur. J. Oper. Res., 1, pp. 106-130.
Homberger, J., and Gehring, H. (1999). Two evolutionary metaheuristics for the vehicle routing problem with time windows. INFOR, 37, pp. 297-318.
Jackson, B., Schnable, P., and Aluru, S. (2007). Consensus genetic maps as median orders from inconsistent sources. Transactions on Computational Biology and Bioinformatics, in press.
Jain, A. K., and Dubes, R. C. (1988). Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, N.J.
Johnson, D., and McGeoch, L. (2002). Experimental analysis of heuristics for the STSP. In: Gutin, G., and Punnen, A. (eds.), The Traveling Salesman Problem and Its Variations. Kluwer, Dordrecht, pp. 369-443.
Lander, E. S., Linton, L. M., Birren, B., et al. (2001). Initial sequencing and analysis of the human genome. Nature, 409, pp. 860-921.
Lawler, E., Lenstra, J., Rinnooy Kan, A., and Shmoys, D. (1985). The Traveling Salesman Problem. Wiley, New York.
Lin, S., and Kernighan, B. (1973). An effective heuristic algorithm for the TSP. Operations Research, 21, pp. 498-516.
Lin, K., and Chen, C. (1996). Layout-driven chaining of scan flip-flops. IEE Proc. Comp. Dig. Tech., 143, pp. 421-425.
Liu, B. H. (1998). Statistical Genomics: Linkage, Mapping, and QTL Analysis. CRC Press, New York.
Korol, A. B., Preygel, I. A., and Preygel, S. I. (1994). Recombination Variability and Evolution. Chapman and Hall, London.
Korol, A., Mester, D., Frenkel, Z., and Ronin, Y. (2009). Methods for genetic analysis in the Triticeae. In: Feuillet, C., and Muehlbauer, G. (eds.), Genetics and Genomics of the Triticeae. Springer (in press).
Marra, M., Kucaba, T., Sekhon, M., et al. (1999). A map for sequence analysis of the Arabidopsis thaliana genome. Nat. Genet., 22, pp. 265-275.
McPherson, J. D., Marra, M., Hillier, L., et al. (2001). A physical map of the human genome. Nature, 409, pp. 934-941.
Mester, D., Ronin, Y., Minkov, D., Nevo, E., and Korol, A. (2003a). Constructing large-scale genetic maps using an evolutionary strategy algorithm. Genetics, 165, pp. 2269-2282.
Mester, D., Ronin, Y., Nevo, E., and Korol, A. (2003b). Efficient multipoint mapping: making use of dominant repulsion-phase markers. Theor. Appl. Genet., 107, pp. 1002-1112.
Mester, D., Korol, A., and Nevo, E. (2004). Fast and high precision algorithms for optimization in large-scale genomic problems. Comput. Biol. and Chem., 28, pp. 281-290.
Mester, D., Ronin, Y., Korostishevsky, M., Picus, V., Glazman, A., and Korol, A. (2005). Multilocus consensus genetic maps: formulation, algorithms and results. Computational Biology and Chemistry, 30, pp. 12-20.
Mester, D., and Braysy, O. (2005). Active guided evolution strategies for large scale vehicle routing problems with time windows. Comp. and Oper. Res., 32, pp. 1593-1614.
Mester, D., and Braysy, O. (2007). Active guided evolution strategies for large scale capacitated vehicle routing problems. Comp. and Oper. Res., 34, pp. 2964-2975.
Mester, D., Bräysy, O., and Dullaert, W. (2007). A multi-parametric evolution strategies algorithm for vehicle routing problems. Expert Systems with Applications, 32, pp. 508-517.
Moscato, P. (1996). TSPBIB. Available from: http://www.densis.fee.unicamp.br/~moscato/TSPBIB_home.html.
Mott, R. F., Grigoriev, A. V., Maier, E., Hoheisel, J. D., and Lehrach, H. (1993). Algorithms and software tools for ordering clone libraries: application to the mapping of the genome of Schizosaccharomyces pombe. Nucleic Acids Research, 21, pp. 1965-1974.
Nagata, Y., and Kobayashi, S. (1997). Edge assembly crossover: A high-power genetic algorithm for the traveling salesman problem. In: Proc. of the 7th Int. Conf. on Genetic Algorithms, pp. 450-457.
Nagata, Y. (2007). Edge assembly crossover for the capacitated vehicle routing problem. In: Proc. of the 7th Int. Conf. on Evolutionary Computation in Combinatorial Optimization, pp. 142-153.
Olson, M. V., Dutchik, J. E., Graham, M. Y., Brodeur, G. M., Helms, C., Frank, M., MacCollin, M., Scheinman, R., and Frank, T. (1986). Random-clone strategy for genomic restriction mapping in yeast. Proc. Natl. Acad. Sci., 83, pp. 7826-7830.
Olson, J. M., and Boehnke, M. (1990). Monte Carlo comparison of preliminary methods of ordering multiple genetic loci. American Journal of Human Genetics, 47, pp. 470-482.
Osman, I. H. (1993). Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem. Annals of Operations Research, 41, pp. 421-451.
Ott, J. (1991). Analysis of Human Genetic Linkage. The Johns Hopkins University Press, Baltimore and London.
Papadimitriou, C., and Steiglitz, K. (1981). Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs.
Pekny, J., and Miller, D. (1991). Exact solution of the no-wait flowshop scheduling problem with a comparison to heuristic methods. Comp. and Chem. Eng., 15, pp. 741-748.
Rechenberg, I. (1973). Evolutionsstrategie. Frommann-Holzboog, Stuttgart.
Renaud, J., Boctor, F., and Laporte, G. (1996). A fast composite heuristic for the symmetric TSP. IJOC, 2, pp. 134-143.
Reinelt, G. (1991). TSPLIB - a travelling salesman problem library. ORSA J. Comput., 3, pp. 376-384.
Reinelt, G. (1994). The Travelling Salesman. Lecture Notes in Computer Science 840. Springer, Berlin.
Schiex, T., and Gaspin, C. (1997). CarthaGene: constructing and joining maximum likelihood genetic maps. ISMB, 5, pp. 258-267.
Shaw, P. (1998). Using constraint programming and local search methods to solve vehicle routing problems. In: Maher, M., and Puget, J.-F. (eds.), Principles and Practice of Constraint Programming - CP98, Lecture Notes in Computer Science. Springer-Verlag, New York, pp. 417-431.
Schneider, J. (2003). Searching for backbones - a high-performance parallel algorithm for solving combinatorial optimization problems. Fut. Gen. Comp. Syst., 19, pp. 121-131.
Schwefel, H.-P. (1977). Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie. Birkhäuser, Basel.
Shizuya, H., Birren, B., Kim, U. J., Mancino, V., Slepak, T., Tachiiri, Y., and Simon, M. (1992). Cloning and stable maintenance of 300-kilobase-pair fragments of human DNA in Escherichia coli using an F-factor-based vector. PNAS, 89, pp. 8794-8797.
Soderlund, C., Humphray, S., Dunham, A., and French, L. (2000). Contigs built with fingerprints, markers, and FPC V4.7. Genome Res., 10, pp. 1772-1787.
Tanksley, S. D., Ganal, M. W., and Martin, G. B. (1995). Chromosome landing: a paradigm for map-based gene cloning in plants with large genomes. Trends Genet., 11, pp. 63-68.
Tsai, C.-F., Tsai, C.-W., and Tseng, C.-C. (2004). A new hybrid heuristic approach for solving large traveling salesman problems. Inf. Sciences, 166, pp. 67-81.
Tsang, E., and Voudouris, C. (1997). Fast local search and guided local search and their application to British Telecom's workforce scheduling problem. Oper. Res. Lett., 20, pp. 119-127.
Vanhouten, W., and Mackenzie, S. (1999). Construction and characterization of a common bean bacterial artificial chromosome library. Plant Mol. Biol., 40, pp. 977-983.
Venter, J. C., Smith, H. O., and Hood, L. (1996). A new strategy for genome sequencing. Nature, 381, pp. 364-366.
Voudouris, C. (1997). Guided local search for combinatorial problems. PhD Thesis, University of Essex, Colchester.
Walshaw, C. (2002). A multilevel approach to the travelling salesman problem. Oper. Res., 50, pp. 862-877.
Weeks, D., and Lange, K. (1987). Preliminary ranking procedures for multilocus ordering. Genomics, 1, pp. 236-242.
Wang, Y., Prade, R., Griffith, J., et al. (1994). S_BOOTSTRAP: Assessing the statistical reliability of physical maps by bootstrap resampling. Cabios, 10, pp. 625-634.
Wang, G. L., Holsten, T. E., Song, W. Y., Wang, H. P., and Ronald, P. C. (1995). Construction of a rice bacterial artificial chromosome library and identification of clones linked to the Xa-21 disease resistance locus. Plant J., 7, pp. 525-533.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 41-65
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 2
BENCHMARKING HOSPITAL UNITS’ EFFICIENCY USING DATA ENVELOPMENT ANALYSIS: THE CASE OF GREEK OBSTETRIC AND GYNAECOLOGY PUBLIC UNITS

Maria Katharaki¹

Quantitative Methods, Statistics and Econometrics, Faculty of Economics, National and Kapodistrian University², Athens, Greece
Abstract Hospital institutions’ managers are called upon to combine and utilize efficiently the finite financial resources toward the goal of maximizing the number and quality of the health services offered. The research aim of this study is, primarily, to estimate relative technical efficiency using a sample of public hospital units that provide obstetrical and gynaecological services in Greece and, secondly, to emphasize the policy implications for health sector policy-makers. In order to address these goals effectively, a comparative analysis of 32 Greek public hospital units was conducted. The research was based on data collected from official public sources. Quantitative analysis, specifically data envelopment analysis (DEA), is used to estimate the efficiency of the hospital units. Based on the results that emerge from the application of Data Envelopment Analysis, information is provided to their managers referring to: (i) the degree of utilization of their production factors, (ii) the particular weight of each production factor in the modulation of the relative technical efficiency score, (iii) the utilization level of each production factor, and (iv) those hospital units that utilize their resources in an optimal way and constitute models for the exercise of effective management. Particular emphasis is given to the economic efficiency of central region hospital units relative to those of the outlying regions. The derived information assists in the modulation of an appropriate policy mix per hospital unit, which should be applied by their management
¹ E-mail address: [email protected]; [email protected]
² 8 Pesmazoglou Str., 10559, Athens, Greece, phone: +302103237319, fax: +3021203223758.
teams, along with a set of administrative measures that need to be undertaken in order to promote efficiency.
Keywords: Hospital management; efficiency; quantitative analysis; data envelopment analysis; health economics
1. Introduction The health system worldwide is facing significant problems, which in large part are attributable to under-funding. A result of the under-funding of the health system as a whole is the under-funding of the public hospital units, the consequence of which is the emergence of strict and effective monitoring of expenditures. Through economic resources, a production factor connected with the services provided by the hospital units, it is possible to express the efficiency of the management of those units. The Greek health care system is characterized by the coexistence of the National Health Service (NHS), a compulsory social insurance and a voluntary private health insurance system (Allin et al, 2004). The NHS provides universal coverage to the population, operating on the principles of equity, social cohesion and equal access to health services for all. Under this context, citizens are not directly dependent on a specific healthcare institution; rather, they are free to choose amongst a variety of healthcare units depending on the type of treatment they wish to follow. It should be pointed out that the Greek Ministry of Health decides on the overall national health strategy and the relevant health policy issues within Greek healthcare organizations. Its main responsibilities, amongst others, are the definition of priorities, the approval and extension of funding for proposed activities, and resource allocation at a national level. With the latest reforms, the main objectives were the decentralization of the system through the establishment of 17 Regional Health Authorities. Decentralization efforts devolved political and operational authority to the Regional Health Authorities but were only partially fulfilled (Tountas, Karnaki and Pavi, 2002). Decision making and all administrative procedures continued to depend on a very centralized and bureaucratic Ministry of Health (Tountas, Karnaki and Pavi, 2002). The consequences of this fragmentation, combined with the lack of a monitoring system, had an impact on the extent and quality of services provided to beneficiaries of different funds, leading to overconsumption of services and serious socioeconomic health inequalities (Center for Health Services Research, 2000). Moreover, Greek public hospital units operate within a framework characterized by limited economic resources, a restricted number of beds and a geographically unequal distribution of both personnel and patients (Tountas, Karnaki and Pavi, 2002; Giokas, 2001). There are wide discrepancies between the number of hospitals and the number of hospital beds allocated in different regions (Tountas, Karnaki and Pavi, 2002) and a wide variation in the distribution of resources between urban and rural areas (Center for Health Services Research, 2001). For example, in the greater Athens area in 2000 there were 6.4 hospital beds per 1,000 population, while the corresponding ratio in Central Greece was 1.2 beds per 1,000 population (Tountas, Karnaki and Pavi, 2002). These characteristics are more vivid in the provision of health care services in obstetrical and gynaecological (OandG) cases than in others (Gatzonis, 2000; Desai, 2003) and are attributable to demographic and national factors. The lack of
gynaecologists/obstetricians, together with the limited experience of the serving staff on obstetric issues in rural areas, poses difficulties in the routine follow-up of pregnant women (Gatzonis, 2000; Katharaki, 2006). As a consequence, women choose to seek health services, and even follow-up examinations, at tertiary hospitals in Athens (Katharaki, 2006). Under this context, hospital OandG management staff are expected to achieve an optimal utilization of resources in terms of the quantity and quality of the offered services. In other words, managers are expected to achieve efficiency despite the fact that reality imposes certain well-known limitations. Under these circumstances, the use of quantitative methods can provide the management staff with useful information concerning: 1) the evaluation of the efficiency score regarding the utilization of the available production factors; 2) the contribution of every production factor used and of the health services provided (outputs) to the formation of the efficiency score; 3) the policy mixture, that is, the combination of inputs and outputs which must be applied to improve the degree of utilization of the production factors; 4) the “prototype, models of best practice” hospital units which constitute “models to be emulated” in managing the other hospital units. This information, when complete, assures the improvement of efficiency and quality when combined with the managers’ ability to match the needs of the receivers and the producers of health services to reality. The objective of the current study is two-fold. Primarily, to estimate relative technical efficiency using a sample of public hospital units that provide obstetrical and gynaecological services in Greece; in doing so, it pinpoints the degree of economic resource utilization in terms of expenditures, determines the size of the expenses that would lead to their more efficient utilization, and localizes those functions of the hospital units which must be improved in order to optimize their performance. Secondly, to emphasize the policy implications for health sector policy-makers. These implications can prompt the associated policy-makers to conduct a national efficiency study amongst all healthcare organizations in Greece. The present paper is organized as follows. Section 2 below describes the materials and methods used. This section includes a thorough discussion of the data sources, the Data Envelopment Analysis approach and the inputs and outputs of the study. In addition, the selection strategy for the appropriate sample is presented in conjunction with a discussion of the analysis plan of the study. Subsequently, Section 3 provides an outline of the results obtained along with their interpretation. Section 4 then presents a discussion of the overall study and its outcomes, whereas the last section, Section 5, provides summary and concluding remarks.
2. Materials and Methods 2.1. Data Sources Data availability and the notification of hospital units’ managers are very important, as they facilitate decision making and optimize efficiency. The research aim of this study is to
provide such a framework by using a comparative analysis of 32 Greek public hospital units with obstetrical and gynaecological services. The research is based on data collected from official public sources (Center for Health Services Research, 2000; Center for Health Services Research, 2001; NSSG, 2001; NSSG, 1992-2000; NSSG, 2000-2002; OECD, 2003) and on data published in the Yearbook of Health 1994 by the Greek Ministry of Health. Gaps in the timeliness of the data were partially covered by data provided by the Greek National Statistical Service for the decade 1992-2002 and by OECD (Organization for Economic Co-operation and Development) health data 2002. These were also combined with primary data collected directly from the hospital units of the sample.
2.2. Data Envelopment Analysis The method of quantitative analysis in the context of the current research study is Data Envelopment Analysis (DEA), a technique that can be used to evaluate the efficiency of a number of producers or decision-making units (Charnes, Cooper and Rhodes, 1978; Charnes and Cooper, 1985; Cooper, Seiford and Zhu, 2004; Al-Shammari, 1999; Salinas-Jimenez and Smith, 1996; O’Neil and Dexter, 2004). DEA works by estimating a piece-wise linear envelopment surface, known as the best-practice frontier (O’Neil and Dexter, 2004). Thus, efficient units obtained by DEA are those that produce a certain amount of outputs or more while spending a given amount of inputs (output-oriented model), or use the same amount of inputs or less to produce a given amount of outputs (input-oriented model), as compared with the other firms in the test group. Despite its limitations (Salinas-Jimenez and Smith, 1996; O’Neil and Dexter, 2004; Anderson, 1996), the DEA model has several important advantages over parametric and econometric approaches. Two of the most important are, first, that it does not impose a particular functional form on the production frontier (Cooper, Seiford and Zhu, 2004; Al-Shammari, 1999; Salinas-Jimenez and Smith, 1996) and, second, its ability to handle multiple-output, multiple-input technologies in a straightforward way, a feature which is especially important when assessing efficiency in public sector activities (Cooper, Seiford and Zhu, 2004; Salinas-Jimenez and Smith, 1996). The DEA model has been widely applied in similar recent research studies to estimate relative technical efficiency in healthcare services and, consequently, in healthcare organizations (Al-Shammari, 1999; Salinas-Jimenez and Smith, 1996; O’Neil and Dexter, 2004; Anderson, 1996; Hollingsworth, Dawson and Maniadakis, 1999; Chilingerian and Sherman, 2004; Butler and Li, 2005; Ballestero and Maldonado, 2004; Chang, 1998; Maniadakis and Thanassoulis, 2000; Sarkis and Talluri, 2002; Chilingerian, 1995; Goni, 1999; Banker, Conrad and Strauss, 1986; Thanassoulis, Boussofiane and Dyson, 1995; Miller and Adam, 1996; Kirigia et al, 2004). Concerning health, technical efficiency refers to the technological relationship between inputs (capital, employment and medical equipment) and the health results (Gounaris, Sissouras, Athanassopoulos, 2000). These results can be expressed in terms of intermediate outputs, such as “number of patients subject to specific treatment”, “waiting time” and “treatment days”, or even as a result of total usage (i.e., decrease of mortality rates, increase in life expectancy) (Palmer and Torgerson, 1999).
For a review of DEA health care studies see Hollingsworth et al. (1999), Ozcan et al. (2004) and Chilingerian and Sherman (2004). Several other areas of application include hospitals, perioperative services, surgical operating rooms, and physicians. This study extends the use of DEA in health care to hospital OandG services. The analysis is extended to provide detailed information at the level of the performance of Greek units that provide health services to the female population of the urban and rural areas of the country. This level of detail is necessary for policymakers to decide which individual units should undergo changes. To estimate the efficiency of Greek public OandG units, the CCR (Charnes, Cooper and Rhodes) input-oriented model was used (Charnes, Cooper and Rhodes, 1978; Cooper, Seiford and Zhu, 2004). With the application of DEA, information is acquired about the inadequate provision of services, that is, information on the “slackness” of the productive forces in the way the health units function, and on the proposed changes in the utilization of the production factors that can improve the efficiency of every OandG unit (Katharaki, 2006). Low efficiency is usually due to excess resources, and the model can be used to explore the effect on efficiency of decreasing input resources. The DEA model can also be used to explore some of the underlying reasons for inefficiency.
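For readers who want to see the mechanics, the input-oriented CCR envelopment model can be written as a small linear program. The sketch below is a generic textbook formulation solved with scipy's linprog, not the software actually used in this study; all data in the toy example are illustrative:

import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y, o):
    """Input-oriented CCR envelopment model for unit o.
    X: (m inputs x n units), Y: (s outputs x n units).
    LP variables: [theta, lambda_1, ..., lambda_n]."""
    (m, n), s = X.shape, Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # sum_j l_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j l_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0], res.x[1:]                  # efficiency score, peer weights

# toy example (numbers illustrative): 2 inputs, 1 output, 4 units
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [20.0, 10.0, 30.0, 15.0]])
Y = np.ones((1, 4))
for o in range(4):
    theta, lam = ccr_input_oriented(X, Y, o)
    print(o, round(theta, 3))

A score theta = 1 marks a relatively efficient unit; for an inefficient unit, the positive lambda values identify its efficient peers, in the sense discussed for Table 4 below.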
2.3. Defining Inputs and Outputs To carry out a DEA assessment for a group of units, it is necessary to construct an input set reflecting the resources used by the units and an output set of the results obtained (Charnes, Cooper and Rhodes, 1978; Anderson, 1996; Chilingerian and Sherman, 2004). Under the production model, a hospital unit uses physical resources, such as labour and plant, and economic resources in order to produce health care services. With the application of DEA to the quantitative investigation of the relative technical efficiency of the 32 OandG hospital units, an attempt is made to evaluate the degree of utilization of the following production factors (inputs):
- number of OandG beds;
- number of OandG medical personnel;
- total expenditure for the provision of care.
Regarding the selected inputs, hospital size and capacity were measured by the number of beds. Most studies exclude the number of physicians because there exist independent contractors who may admit patients. For the purposes of the current study, it is important to include them as an input, since there are wide discrepancies between the numbers of specialized physicians in different regions of the country, which largely determine the volume of OandG services that a hospital can perform. Moreover, the shortage of OandG physicians in rural units influences the flow of female patients to Attica units (Gatzonis, 2000; Katharaki, 2006). The importance of more evenly distributed finances throughout the healthcare regions was the primary reason for performing a DEA analysis with the input “total expenditure”. The focus of the current study is on grand total expenditure and not on the individual resource component costs (doctors’ salaries, nurses’ salaries, etc.). Therefore, OandG expenditures do not include medical personnel expenses.
The corresponding group of outputs that describe the health care services offered are:
- the number of OandG hospitalization days (bed-days);
- the number of female patients treated;
- the number of OandG examinations in outpatient clinics;
- the number of OandG lab tests.
The number of OandG lab tests and patient days were selected as outputs of the study to serve as criteria for the efficiency assessment of the units, acting as proxy factors for the degree of resource utilization. These criteria have been utilized in a plethora of related studies (Giokas, 2001; Chilingerian and Sherman, 2004; Chilingerian and Sherman, 1996).
2.3.1. Scenarios which are Applied The DEA was based on two different scenarios: one with all three inputs (beds, staffing and total expenditure) and one with only a single input (total expenditure). The output set was the same for both scenarios, except that the output “number of female patients treated” was not used in scenario B (single input). As noted above, the importance of more evenly distributed finances throughout the healthcare regions was the primary reason for performing a DEA analysis with “total expenditure” as the only input. The questions that the study attempted to address were:
1) Which units currently utilise existing resources most efficiently (i.e., 100%)?
2) To what degree do inefficient units use available resources?
3) In which units can slacks of production resources be observed?
4) What should the “virtual” set of inputs and outputs be to ensure optimisation of allocated resource utilisation?
2.4. Sample Selection and Analysis Plan The study was based on 32 Obstetrical and Gynaecological (OandG) units located in 5 of the 10 geographical Greek NHS regions. With regard to the sample of the public OandG Units, it must be noted that they are located in the following geographical districts of Greece:
Attica (central region), remaining Continental Greece, the Peloponnese, Thessaly, and the Aegean Islands.
Eleven of the 32 hospitals are located in the central region (which includes Athens and Piraeus) and the remaining 21 units are in districts outside these major metropolitan areas (Table 1). On the basis of the criterion of the number of beds in these units, 40% of the beds offered by the 32 units of the sample belong to 2 units of the central region: the “Alexandra General Hospital of Attica” and the “Elena Venizelou General Hospital of Attica”. In terms of how representative the sample is of the entire country, figures above 50% are considered satisfactory (Table 2).
Table 1. The thirty-two Obstetrical-Gynaecological Hospital Units in the Region being Studied

Region | OandG units in the hospitals in the sample | Total OandG units of the hospitals in the five geographical districts | Percent %
Attica (Central Region) | 11 | 14 | 78.57%
Remaining continental Greece | 5 | 10 | 50.00%
Aegean Islands | 3 | 9 | 33.33%
Thessaly | 3 | 4 | 75.00%
Peloponnese | 10 | 14 | 71.43%
Total | 32 | 51 | 62.75%

Hospitals in the study area (sample): 92, of which the 32 sample units represent 34.78%.
Country total: 119 hospitals, of which the 32 sample units represent 26.89%.
Source: Ministry of Health and Social Insurance.
Table 2. The Representativeness of the Sample of 32 OandG Hospital Units

INPUTS | Percentage of the 32 OandG Units in the 5 Districts | Percentage of the 32 OandG Units in the OandG field of the Country | Percentage of the OandG Units of the 5 Districts in the OandG field of the Country
Beds | 90.8% | 50.0% | 55.1%
Medical Personnel | 93.0% | 56.2% | 60.4%
Total expenditures | 91% | 53% | 57.3%

OUTPUTS | Percentage of the 32 OandG Units in the 5 Districts | Percentage of the 32 OandG Units in the OandG field of the Country | Percentage of the OandG Units of the 5 Districts in the OandG field of the Country
Bed-days | 95.7% | 52.6% | 54.9%
Patients hospitalized | 94.2% | 51.7% | 54.9%
Patients in outpatient clinics | 86.5% | 53.5% | 61.9%
Lab tests | 89% | 52% | 57%
Source: Ministry of Health and Social Insurance.
The separation of the sample into the OandG units of Attica (central region) and of the outlying regions is essential. This is because the geographically unequal distribution is real, both in terms of resources and of patients who demand and receive health care services outside their region of residence, that is, in the large urban centers (Katharaki, 2006).
Maternity services are of major importance for the rural areas and the Aegean Islands of Greece, mainly due to their isolation, especially during winter. Traditionally, maternity and gynaecology services in those areas are offered by the Healthcare Centres, as well as by obstetricians working in private practices (Gatzonis, 2000). Healthcare Centres are staffed by internists, general practitioners and young, non-specialized physicians (Gatzonis, 2000). The shortage of gynaecologists/obstetricians, together with the limited experience of the serving staff on obstetric-gynaecology issues, poses difficulties in the delivery of health services and in the handling of emergency cases. Private practices, on the other hand, are usually not adequately equipped to handle difficult or emergency cases. Under these circumstances, the primary reason that many female patients travel from the peripheral regions to Athens is that they are seeking higher quality healthcare services (Katharaki, 2006). These are inevitably available in city hospitals, which have the facilities and expertise to deal with a large variety of cases. The birth rate at the mother’s place of permanent residence, compared with the birth rate at the place of childbirth, serves as a key indicator of the internal flow of women to urban hospital units (NSSG, 2000-2002). Furthermore, emergency cases are evacuated to the tertiary hospitals of Attica either by boat or by aeroplane, depending on the severity of the case and on weather conditions, resulting in more intensive utilization of the available resources and, therefore, in increased expenditures and lab tests (Gatzonis, 2000; Katharaki, 2006). As a consequence, it can be deduced that different types of problems are encountered in the administrative practice of the hospital units of the central region than those encountered by the administrators of the units of the outlying regions. More specifically, in the hospitals of Attica a build-up of patients is observed, resulting in the administration having to confront functional and performance problems in its effort to satisfy the existing demand both qualitatively and quantitatively. The hospitals operating in the outlying regions are characterized by a limited utilization of the production factors (Tountas, Karnaki and Pavi, 2002; Center for Health Services Research, 2000). As a result, their administrators are interested in providing services of a higher quality and in increasing demand; that is to say, their administrations confront problems in the utilization of the available resources. Furthermore, the implementation of scenario B was based on an investigation of the financial management problems, an identification of the problem areas, as well as a differentiation of needs between hospital units of the central region and those of the outlying regions. Finally, the comparative evaluation of hospital units on the basis of their economic management is of fundamental importance, particularly for the authority in charge and for the governmental bodies responsible for supervising the functioning of the hospital units. This also holds for pinpointing those units which function as models of excellent practice, facilitate decision making for the managers of the other units and promote well-intentioned rivalry. The composition of the sample, based on geographic location and the figures used as inputs and outputs, is presented in Table 3.
The data in the table show that the sample figures related to the activities of the hospital units of the central region are more than twice those of the outlying regions. Thus, the sample composition between the central region and the outlying regions illustrates the geographic inequalities at the level of the available production factors and the health care services provided.
Table 3. Composition of the Sample of 32 OandG Hospital Units between the Central and Outlying Regions

 | Total Sample | Central Region (Attica) | % | Outlying Regions | %
INPUTS
Beds | 1,082 | 649 | 59.98% | 433 | 40.02%
Medical Personnel | 412 | 293 | 71.12% | 119 | 28.88%
Total expenditures | 49,353,333 | 28,295,903 | 57.33% | 21,057,430 | 42.67%
OUTPUTS
Bed-days | 226,853 | 148,954 | 65.66% | 77,899 | 34.34%
Patients hospitalized | 50,209 | 30,667 | 61.08% | 19,543 | 38.92%
Patients in outpatient clinics | 215,088 | 137,277 | 63.82% | 77,811 | 36.18%
Lab tests | 1,900,462 | 1,098,124 | 57.78% | 802,338 | 42.22%
Source: Ministry of Health and Social Insurance.
3. Results: Model Interpretation 3.1. Scenario A Results The implementation of DEA, with the use of the inputs and outputs described, led to the results presented in Table 4, which concern the evaluation of the performance of the OandG units of the sample. DEA identified 18 OandG units as efficient and 14 as inefficient. Attica’s hospital units A, B, C and H are tertiary units with the biggest market share in the volume of patients treated from all over the country.

Table 4. The Relative Technical Efficiency of the 32 Units of the Sample
No. | Hospital with OandG unit | Technical efficiency score | Benchmarks
(For efficient units the Benchmarks column gives the number of times the unit serves as a peer; for inefficient units it lists the efficient peer units with their weights in parentheses.)

ATTICA
N1 | A (GH ATTICA ALEXANDRA) | 100.00% | 4
N2 | B (OandG E. VENIZELOU) | 100.00% | 4
N3 | C (GH AGIA OLGA) | 100.00% | 9
N4 | D (GH ATHINA) | 81.81% | 1 (0.0004), 3 (0.0245), 8 (0.4530), 21 (0.0469)
N5 | E (GH LAIKO) | 99.55% | 3 (0.3223), 8 (0.3610), 11 (0.0025), 21 (0.1354)
N6 | F (GH ATTICA EVAGELISMOS) | 79.65% | 1 (0.0230), 3 (0.2394), 8 (0.1927)
N7 | G (GH ATTICA ELPIS) | 59.16% | 3 (0.2275), 8 (0.0475)
N8 | H (GH AGIOS SAVVAS) | 100.00% | 9
N9 | I (GH METAXAS) | 100.00% | 0
N10 | J (GH JANNEIO PIRAEUS) | 90.78% | 1 (0.0728), 2 (0.0300), 3 (0.4357), 8 (0.1247), 11 (0.0042), 21 (0.1159)
N11 | K (GH NIKAIA PIRAEUS) | 100.00% | 4

REMAINING CONTINENTAL GREECE
N12 | L (GH AGRINIO) | 88.92% | 2 (0.0044), 3 (0.1183), 8 (0.1125), 31 (0.3541)
N13 | M (GH PATRA) | 100.00% | 1
N14 | N (GH UNIVERSITY OF PATRA) | 47.78% | 3 (0.0984), 8 (0.0879)
N15 | O (GH AMALIADA) | 100.00% | 0
N16 | P (GH LEIVADIA) | 41.08% | 8 (0.0510), 29 (0.1584)
N17 | Q (GH HALKIDA) | 68.52% | 2 (0.0013), 3 (0.0659), 11 (0.0507), 13 (0.0304), 28 (0.2669)
N18 | R (GH LAMIA) | 74.59% | 23 (0.0837), 30 (0.3329), 31 (0.0537)
N19 | S (GH AMFISSA) | 87.88% | 3 (0.0220), 21 (0.2342), 24 (0.0100)

THESSALY
N20 | T (GH LARISA) | 93.70% | 1 (0.0128), 21 (0.2697), 28 (0.3121)
N21 | U (GH VOLOS) | 100.00% | 5
N22 | V (GH TRIKALA) | 81.93% | 30 (0.2470), 31 (0.0672)

PELOPONNESE
N23 | W (GH ARGOS) | 100.00% | 1
N24 | X (GH NAVPLIO) | 100.00% | 1
N25 | Y (GH TRIKALA) | 100.00% | 0
N26 | Z (GH KORINTHOS) | 100.00% | 0
N27 | AA (GH SPARTI) | 100.00% | 0
N28 | AB (GH KALAMATA) | 100.00% | 3
N29 | AC (GH-HC KYPARISSIA) | 100.00% | 1

AEGEAN ISLANDS
N30 | AD (GH MYTILINI) | 100.00% | 2
N31 | AE (GH RHODOS) | 100.00% | 3
N32 | AF (GH SPYROS VARVAKIOS) | 63.43% | 2 (0.006), 8 (0.670), 11 (0.366), 28 (0.773)
Source: DEA results.
Thus, an evaluation of the relative technical efficiency of a unit at less than 100% demonstrates the degree to which the unit in question lags behind the best-practice unit of the sub-category of reference with which it is compared. One of the by-products of DEA is that, for OandG units it deems inefficient, it produces a set of efficient peers with which the apparently inefficient unit is compared (Al-Shammari, 1999; Salinas-Jimenez and Smith, 1996; Thanassoulis, Boussofiane and Dyson, 1995). The comparison is formed by taking a weighted average of each of the inputs and outputs of the efficient units. The performance of the “composite” unit formed by this weighting procedure gives achievable targets for the inefficient unit. For instance, the efficiency reference set for the peripheral unit “L” is a combination of the actual outputs and inputs of the reference subset of hospitals and results in a composite hospital that produces as much or more output as unit L, but uses as much or less input than this unit. The composite unit L is formed by multiplying the weights (0.03, 0.26, and 0.16) of the individual hospitals with the actual inputs and outputs of the 100% efficient OandG units (B, C and H). The results of the multiplication for the three hospitals are then combined to arrive at a hypothetical best-practice hospital.
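Forming such a composite unit is a simple weighted combination. The sketch below uses the weights quoted above for unit L; the peer input/output numbers are purely illustrative placeholders, not data from the study:

import numpy as np

# peer weights for unit L quoted in the text (peers B, C, H)
weights = np.array([0.03, 0.26, 0.16])

# inputs (beds, medical personnel, expenditure) and outputs of the three
# peers; the numbers below are illustrative placeholders only
peer_inputs = np.array([[120, 35, 4.1e6],
                        [210, 60, 7.9e6],
                        [95, 28, 3.2e6]])
peer_outputs = np.array([[30000, 7000, 25000, 210000],
                         [52000, 9500, 41000, 390000],
                         [21000, 5200, 18000, 150000]])

targets_in = weights @ peer_inputs     # composite uses no more inputs ...
targets_out = weights @ peer_outputs   # ... while producing no fewer outputs
print(targets_in, targets_out)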
Using Table 4 as a basis, and using the arithmetic mean of the efficiency scores of the units of the sample, Table 5 emerges. The utilization of production factors (technical efficiency) over the total of the 32 units of the sample is 89.33%. The differentiation in the score between the hospital units of the central region and the outlying regions must, however, be noted. It becomes clear that the efficiency score of hospital units in the central region surpasses that of the hospital units of the outlying regions. This is confirmed by reality, since in the current state of functioning of the health care units, patients demand and receive services in Attica (Gatzonis, 2000; Katharaki, 2006). The logical consequence of this is the inadequate utilization of the available production factors in the OandG units of the outlying regions (83.23%) relative to the OandG units of Attica (91.90%).

Table 5. Efficiency score based on the comparison of the arithmetic mean of the results of DEA

Total Sample | 89.33%
OandG Units Attica | 91.90%
OandG Units Outlying Regions | 83.23%
From the quantitative analysis, the results of Table 6 emerge with regard to the significance coefficient of the individual production factors, i.e., the percentage contribution of every production factor to the configuration of the efficiency score.

Table 6. Percentage contribution of individual production factors in the determination of the efficiency score based on the arithmetic mean

Production Factors | Total Sample | OandG Units Attica | OandG Units Outlying Regions
OandG Medical Personnel | 36.3% | 5.3% | 52.5%
OandG Beds | 31.9% | 50.3% | 22.2%
OandG Expenditures | 31.9% | 44.3% | 25.3%
Table 6 demonstrates that the technical efficiency score of the OandG units of Attica depends primarily on bed utilization and economic management, while the utilization of the potential of the medical personnel contributes to a lesser degree. On the contrary, for the hospital units of the outlying regions it is the utilization of the potential of the medical personnel (52.5%) that contributes primarily to the shaping of the technical efficiency score, while bed utilization and economic management make a relatively limited contribution. Moreover, the differences in the population means of the bed and medical personnel variables between Attica’s and the outlying units were statistically significant under the Mann-Whitney test, with p = 0.046 and 0.000 (p < 0.05), respectively. These results point to the areas of activity of the hospital units in which actions need to be developed to improve efficiency. The differences in the contribution of the inputs between hospital units in the
central region and in the outlying regions indicate how different the problems and operating conditions of these units are. With regard to the contribution of the outputs to the formation of the technical efficiency score, the results presented in Table 7 emerge.

Table 7. Percentage contribution of the outputs to the determination of the efficiency score of the production factors based on the arithmetic mean

Outputs | Total Sample | OandG Units Attica | OandG Units Outlying Regions
OandG Patients Hospitalized | 20.5% | 14.8% | 23.6%
OandG Lab tests | 22.7% | 24.8% | 21.6%
OandG Bed-days | 34.0% | 43.2% | 29.0%
OandG Examinations in outpatient clinics | 22.8% | 17.2% | 25.9%
From Table 7 it is clear that bed-days constitute the basic determining factor in the efficiency score of the hospital units of the central region, while, on the contrary, the contribution of the number of patients hospitalized is not a basic factor. In contrast with the practices in the hospital units of Attica, there is a difference in the contribution of the outputs to the determination of the efficiency score in the respective units of the outlying regions. However, these differences of the population means are not statistically significant under the Mann-Whitney tests. The comparatively greater contribution of the number of patient days and lab tests to the formulation of the efficiency score of the Attica units can be explained by the fact that the Attica units, by definition, serve as the reception point for acute cases from all over the country. On the other hand, it should be noted that patients accommodated in Attica units usually spend more time within these units; reasons include the time-consuming process of returning back home, the high level of patient succession and the high level of bed coverage within these units. In addition, the fact that the Attica units are often academic units as well cannot be neglected, since this also contributes to the increased lab test utilization. The greater contribution of outpatient examinations in formulating the efficiency score of the outlying regions’ units can be explained by the fact that patients tend to seek their initial treatment in a unit close to their location and, as a result, to be admitted to a clinic depending on bed availability. For inefficient units, DEA provided information on the sources of inefficiency as given by the slack values (increased output, decreased input) (Miller and Adam, 1996). The DEA methodology, moreover, allows information to be provided at the level of each unit. In addition to the identification of inefficient OandG units and their efficiency reference sets, DEA provides additional insight into the magnitude of inefficiency of the inefficient units (Giokas, 2001). The magnitude of inefficiency is obtained from the magnitude of the slack inputs and/or deficient outputs produced by inefficient hospitals (Cooper, Seiford and Zhu, 2004; Ozcan et al, 2004). Slack inputs and/or deficient output production must be eliminated before a given unit can be said to be relatively efficient compared with its composite reference set of units. Thus, from the analysis the “policy mix” emerges, that is to say, the target group of inputs and outputs proposed to the managers of the units in order to improve and optimize the effective utilization of the production factors (Table 8).
Table 8. Proposed policy mix (percentage reduction of the production factors based on the arithmetic mean)

Production Factors        Total Sample   OandG Units Attica   OandG Units Outlying Regions
OandG Medical Personnel   15.0%          4.10%                57.90%
OandG Beds                6.0%           0.40%                14.03%
OandG Expenditures        6.0%           4.30%                9.40%
The slack values of the inputs indicate the extent to which personnel and beds are inadequately utilized in rural units, owing to the crowding of patients into Attica’s units (Table 8). The difference between the proposed policy mix for the hospital units of the central region and those of the outlying regions should be emphasized, since the significant changes that must be made in the utilization of personnel serving in the units of the outlying regions are clear. In addition, the almost non-existent reduction in beds of the units of the central region must be emphasized, in contrast with what is taking place in the outlying regions. This reduction of beds relative to the health services provided demonstrates their inadequate utilization. The alternative proposal to the limitation of the inputs is the development of initiatives and administrative measures which could contribute to the utilization of the available production factors, especially when the intention to reduce these is, in practical terms, difficult to implement. The results that have been presented refer to the total sample and to its two subcategories. Due to limitations of space, results for each unit cannot be given in this paper. Indicatively, the estimates which emerged from the quantitative analysis for four hospital units are presented. Two of these are in Attica (Obstetric Units of the Central Region [OUCR] “A” and “F”) and the other two are from the sub-category of the outlying regions (Obstetric Units of the Outlying Regions [OUOR] “O” and “Q”).

Table 9. Efficiency score and the inputs and outputs weights
                                           OUCR “A”   OUCR “F”   OUOR “O”   OUOR “Q”
Relative efficiency (Score)                100%       79.65%     100%       68.52%
Contribution of the production factors (inputs) (%)
OandG Beds                                 43%        47%        40.6%      2.6%
OandG Expenditures                         57%        53%        0%         88.8%
OandG Medical Personnel                    0%         0%         59.4%      8.4%
Contribution of outputs (%)
OandG Patients Hospitalized                6.9%       0%         38.1%      57.7%
OandG Lab tests                            0%         19.2%      0%         36.6%
OandG Bed-days                             93.1%      80.8%      0%         0%
OandG Examinations in outpatient clinics   0%         0%         61.9%      5.7%
According to Table 9, it can be seen that in the hospital units of the central region, the primary factors determining the efficiency score are (i) economic management and (ii) the number of bed-days. Differences, however, are observed in the hospital units of the outlying regions, where the basic determining factors are (i) the examinations in outpatient clinics and (ii) the number of patients hospitalized, and secondarily economic management and the number of beds. We must note the particularity displayed by the productive factor “medical personnel” between the two hospital units of the outlying regions, which makes it necessary for the administration of these units to take different measures in order for this factor to be utilized. With regard to the proposed policy mix which should be used in order to optimize the efficiency of the two hospital units (OUCR “F”, OUOR “Q”), see Table 10.

Table 10. Proposed policy mix expressed as a percentage of the change in inputs and outputs, of which the efficiency score falls short of optimal

                                           OUCR “F”   OUOR “Q”
Relative efficiency (Score)                79.65%     68.52%
Percentage reduction of production factors (inputs)
OandG Beds                                 -22.3%     -33.4%
OandG Expenditures                         -20.4%     -31.5%
OandG Medical Personnel                    -27.3%     -40.0%
Possible percentage increase in services offered
OandG Patients Hospitalized                +41.5%     0%
OandG Lab tests                            0%         0%
OandG Bed-days                             0%         0%
OandG Examinations in outpatient clinics   +129.6%    0%
Increased efficiency of the two hospital units will be achieved through the reduction of inputs at different levels for each input, a fact which also highlights the areas in which, and the extent to which, administrative measures need to be taken. Particularly in OUOR “Q”, the reduction in beds and in medical personnel is larger, which means not that they should be limited but rather that they must be utilized more productively to cover all hospital needs.
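To make the proposed percentage changes concrete: a unit’s target level for an input is simply its current level scaled by the proposed change, target = current × (1 + change). For instance, if OUOR “Q” hypothetically operated 30 beds (a figure invented here purely for illustration, since unit-level input data are not reported), the proposed bed reduction of −33.4% implies a target of 30 × (1 − 0.334) ≈ 20 fully utilized beds; analogous arithmetic applies to expenditures and medical personnel.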
3.2. Scenario B Results

The implementation of DEA, with the use of the inputs and outputs described, led to the results presented in Table 11, which concern the evaluation of the performance of the OandG units in the sample.
Table 11. Relative Efficiency of the Hospitals in the Sample using the DEA Method

       HOSPITALS WITH OandG UNITS IN THE REGIONS   RELATIVE EFFICIENCY SCORE   BENCHMARKS
ATTICA
N1     A (GH ATTICA ALEXANDRA)                     94.83%                      3 (0.23) 8 (0.05)
N2     B (OandGH E. VENIZELOU)                     100.00%                     23
N3     C (GH NEA IONIA AGIA OLGA)                  100.00%                     19
N4     D (GH ATHINA)                               74.00%                      2 (0.04) 3 (0.04) 8 (0.32)
N5     E (GH LAIKO)                                65.18%                      2 (0.10) 3 (0.38)
N6     F (GH ATTICA EVAGELISMOS)                   66.65%                      2 (0.07) 3 (0.21)
N7     G (GH ATTICA ELPIS)                         59.16%                      2 (1.39)
N8     H (GH AGIOS SAVVAS)                         100.00%                     19
N9     I (GH METAXAS)                              86.65%                      2 (0.21)
N10    J (GH JANNEIO PIRAEUS)                      84.61%                      2 (0.16) 3 (0.37) 8 (0.14)
N11    K (GH NIKAIA PIRAEUS)                       89.15%                      2 (0.06) 3 (0.73)
REMAINING CONTINENTAL GREECE
N12    L (GH AGRINIO)                              77.57%                      2 (0.03) 3 (0.26) 8 (0.16)
N13    M (GH PATRA)                                91.12%                      2 (0.23) 3 (0.78)
N14    N (GH UNIVERSITY OF PATRA)                  47.78%                      3 (0.10) 8 (0.09)
N15    O (GH AMALIADA)                             65.72%                      3 (0.03) 8 (0.21)
N16    P (GH LEIVADIA)                             30.06%                      8 (0.07)
N17    Q (GH HALKIDA)                              55.98%                      2 (0.04) 3 (0.17)
N18    R (GH LAMIA)                                56.09%                      2 (0.06) 8 (0.04)
N19    S (GH AMFISSA)                              54.11%                      2 (0.05) 3 (0.11) 8 (0.00)
THESSALY
N20    T (GH LARISA)                               68.37%                      2 (0.12)
N21    U (GH VOLOS)                                83.14%                      2 (0.19) 3 (0.35) 8 (0.06)
N22    V (GH TRIKALA)                              44.47%                      2 (0.05)
PELOPONNESE
N23    W (GH ARGOS)                                72.26%                      2 (0.01) 3 (0.20) 8 (0.15)
N24    X (GH NAVPLIO)                              82.62%                      3 (0.46) 8 (0.05)
N25    Y (GH TRIKALA)                              73.62%                      2 (0.09) 8 (0.15)
N26    Z (GH KORINTHOS)                            69.89%                      2 (0.03) 3 (0.27) 8 (0.05)
N27    AA (GH SPARTI)                              77.53%                      2 (0.10) 8 (0.07)
N28    AB (GH KALAMATA)                            73.59%                      2 (0.10) 3 (0.17)
N29    AC (GH-HC KYPARISSIA)                       39.00%                      8 (0.10)
AEGEAN ISLANDS
N30    AD (GH MYTILINI)                            80.11%                      2 (0.14) 8 (0.06)
N31    AE (GH RHODOS)                              84.14%                      2 (0.11) 3 (0.32) 8 (0.08)
N32    AF (GH SPYROS VARVAKIOS)                    47.31%                      2 (0.01) 3 (0.09) 8 (0.08)
Source: DEA results.
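A note on reading the benchmarks column: in standard DEA practice (the chapter does not spell this convention out, so this reading is an assumption), the unit numbers refer to the efficient peers (2 = B, 3 = C, 8 = H), the single integers next to the efficient units count how often each serves as a peer, and the parenthesized values are the intensity weights λ with which the peers are combined into the composite reference unit against which an inefficient unit is compared. A minimal sketch, with invented peer output figures:

    # Composite reference point for the inefficient unit D (N4), whose
    # benchmark entry in Table 11 reads 2 (0.04), 3 (0.04), 8 (0.32).
    # The peer output vectors below are invented for illustration only.
    import numpy as np

    lambdas = {"B": 0.04, "C": 0.04, "H": 0.32}   # intensity weights, Table 11
    peer_outputs = {                              # hypothetical peer outputs:
        "B": np.array([9000, 150000, 90000, 30000]),  # patients, lab tests,
        "C": np.array([7000, 110000, 70000, 25000]),  # bed-days, outpatient exams
        "H": np.array([8000, 130000, 80000, 28000]),
    }
    # The composite reference unit produces the lambda-weighted sum of the
    # peers' outputs; unit D should be able to match it with reduced inputs.
    target = sum(w * peer_outputs[u] for u, w in lambdas.items())
    print(target)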
According to Table 11, DEA identified 3 OandG units as efficient and 29 as inefficient, which means that expenditures are used in the best way, comparatively speaking, in only 3 hospital units in the sample, which display a relative efficiency score equal to 100%. Attica’s hospital units B, C and H are tertiary units with the biggest market share of the volume of patients treated from all over the country. In addition, a limited utilization of the economic resources
of the OandG hospital units of the outlying regions is observed to a considerable degree. This can be seen from the relative efficiency scores. Using Table 11 as a basis, and applying the arithmetic mean of the efficiency scores of the units of the sample as a statistical measure, Table 12 emerges. Thus, the utilization of economic resources (efficiency score) across the 32 units of the sample is 71.71%. The differentiation in the score between the hospital units of the central region and those of the outlying regions must, however, be noted. It becomes clear that the efficiency score of hospital units in the central region surpasses that of the hospital units of the outlying regions. This is confirmed by reality, since in the current state of functioning of the health care units, patients demand and receive services in Attica. The logical consequence of this is the inadequate utilization of resources, in terms of the “total expenditures” input, in the OandG units of the outlying regions (65.45%) relative to the OandG units of Attica (83.66%).

Table 12. Efficiency Score based on a Comparison of the arithmetic mean of the DEA Results for each Unit

Total Sample                    71.71%
OandG Units Attica              83.66%
OandG Units Outlying Regions    65.45%
From the data above it is clear that all the hospital units of the total sample clearly display problems regarding the management of their total expenditures. To further explain the above findings, one could say that the managers of the hospital units of the sample could take measures to increase the efficiency of their economic resources by 28-29%, or, in other words, that a waste of the available economic resources of that proportion is taking place. If we add to the above that only three hospital units, all located in the central region, achieve a final degree of comparative utilization of 100%, then the size of the problem becomes apparent. The range of measures which must be taken by the managers, particularly in the hospital units of the outlying regions, also becomes clear. At once the question arises concerning the identification of the functions, areas or categories of services offered to which the above unfavorable result could be attributed, and, as a consequence, the identification of those areas which must be reorganized in order to increase the effective use of economic resources. From the DEA method, the results of Table 13 emerge with regard to the significance coefficient of each individual output, that is, the percentage of participation of each category of services offered in the configuration of the efficiency score.

Table 13. Percentage Contribution of Output to the Formation of the Efficiency Score

                                           OandG Units Attica   OandG Units Outlying Regions   Total Sample
OandG Lab tests                            32.06                22.29                          25.65
OandG Bed-days                             50.56                48.03                          48.90
OandG Examinations in outpatient clinics   17.39                29.68                          25.46
From the above data it can be seen that, for the total sample, about half (approximately 50%) of the efficiency score is attributable to bed-days. As a result, appropriate measures must be taken to ensure a more rational utilization of beds as well as of the duration of hospitalization. It is noteworthy that this observation holds for the hospital units of the central region as well as for those of the outlying regions, albeit for different reasons. It is believed that the hospital units of the outlying regions do not fully utilize their available beds, while the units of the central region utilize them in a wasteful fashion or simply uneconomically. Beyond this, we must note the underlying differences in the evaluations of “specific weight” between the units of the central region and of the outlying regions with regard to the number of lab tests and the number of patients seen at the outpatient clinics. These differences constitute classic examples of the tendencies displayed by patients with regard to the choice of the hospital units to which they will turn to deal with their problems. From the quantitative analysis undertaken, an evaluation also emerges concerning the degree to which the functional expenditures should be limited, so that, in combination with the services provided, the activities of the hospital units of the sample can be optimized. Table 14 presents the results of the “total expenditures” slack analysis for the OandG units which did not achieve 100% efficiency. A slack value for an input indicates that the input is not being fully exploited in terms of the economic resources available.

Table 14. Reduction of the Input “OandG Expenditures” proposed by DEA for Optimizing the Performance of the Hospital Units in the Sample

                                OandG Expenditures
Total Sample                    -17.47%
OandG Units Attica              -9.40%
OandG Units Outlying Regions    -28.31%
According to Table 14, the necessity of limiting expenditures by the corresponding percentage, in order to optimize the use of the available economic resources, can be observed. It is clear that a greater effort must be made to take substantive measures to manage the expenditures of the hospital units of the outlying regions efficiently. From a comparison of the results presented in Tables 14 and 15, it is clear that there is a difference in the intensity of the problem of the utilization of the available economic resources. At this point, however, the estimates must be presented that emerge from the quantitative analysis regarding the existence of “surplus” potential for the provision of the hospital services that are used as evaluation criteria or outputs. First, however, the concept of “surplus” potential should be clarified: as hospital units function (using the same amount of inputs or less), they are able to provide more of the services that they already provide, and as a result an underutilization or a waste of the production factors, in our case of economic resources, takes place. In Table 15 below, the existence of “surplus” potential is presented by category of service provided (output) for the total sample and for the two sub-categories that make it up. It can be seen that hospital units in the outlying regions display surplus potential of bed-days and
examinations at the outpatient clinics which reach 3.41% and 13.35% respectively, while the corresponding figures in the units of the central region are 0.10% and 2.30%. On the contrary, in the hospital units of the central region, a significantly higher level of slack possibilities in the provision of lab tests is observed, at 15.26%, as opposed to the 3.83% observed in the units of the outlying regions.

Table 15. Surplus Potential of services provided (outputs) based on a comparison of the arithmetic mean

                                           OandG Units Attica   OandG Units Outlying Regions   Total Sample
OandG Lab tests                            15.26%               3.83%                          10.44%
OandG Bed-days                             0.10%                3.41%                          1.24%
OandG Examinations in outpatient clinics   2.30%                13.35%                         6.30%
The results that have been presented refer to the total sample and to its two subcategories. Due to limitations of space, results for each unit cannot be given in this paper. Indicatively, the estimates which emerged from the quantitative analysis for two hospital units are presented. One of these is in Attica (Obstetric Units of the Central Region [OUCR] “A”) and the other is from the sub-category of the outlying regions (Obstetric Units of the Outlying Regions [OUOR] “AD”). According to Table 16, the following can be demonstrated: The relative evaluation score of the unit in the central region is comparatively larger than that of the corresponding unit in the outlying regions. However, both fall short of the highest evaluation score for the utilization of their economic resources. For both hospital units, the bed-days are the services offered that contribute substantively to the formation of the efficiency score. It is noteworthy that there is latent potential for the provision of lab tests both in the unit of the centre and in the corresponding unit of the regions. Finally, the proposed reduction of expenditures for the optimization of their evaluation scores is 5.2%, that is, relatively limited, for the unit of the center, but significantly larger, around 22%, for the unit of the outlying regions. With regard to the proposed policy mixture which should be used in order to optimize the efficiency of the two hospital units (OUCR “A”, OUOR “AD”), see Table 16. Slack inputs and/or deficient output production must be eliminated before a given unit can be said to be relatively efficient compared with its composite reference set of units (benchmarks). Thus, from the analysis, the “policy mix” emerges, that is to say, the target group of inputs and outputs proposed to the managers of the units in order to improve and optimize the effective utilization of the production factors. Particularly in OUOR “AD”, the reduction in total expenditures is larger, which means not that expenditures should be limited but rather that they must be utilized more productively to cover all hospital needs.
Table 16. Estimates for two Hospital Units of the Center and of the Outlying Regions

                                                   OUCR “A”   OUOR “AD”
Relative efficiency (Score)                        94.83%     80.11%
% Contribution of the production factors
OandG (total) Expenditures                         100%       100%
% Contribution of outputs in the formation of score
OandG Lab tests                                    99%        78.3%
OandG Bed-days                                     0          -
OandG Examinations in outpatient clinics           0          21.7%

Proposed policy mix expressed as a percentage of the change in inputs and outputs, of which the efficiency score falls short of optimal
Percentage reduction of inputs
OandG (total) Expenditures                         -5.2%      -20%
Possible percentage increase in services offered
OandG Lab tests                                    +49.8%     +34.6%
OandG Bed-days                                     0          0
OandG Examinations in outpatient clinics           0          0
4. Discussion

The existing homogeneous statutory framework – the Greek National Health System – under which all public healthcare units operate, along with the common public source of funding, forms the basis for a comparative evaluation amongst them. This evaluation is based on the level of utilization of available resources and the determination of areas of responsibility for any possible pathogenesis within all Greek healthcare units, independent of their location. In this study, DEA was applied to 32 Greek public OandG units. The OandG units’ operations were represented by means of an input–output model whereby each unit uses quantities of inputs to generate outputs in the form of services. Specifically, clinics were considered to transform labour (physicians) and capital (approximated by the number of beds and expenditures) into services, which were assumed to be approximated by the number of female patients, in-patient days, lab tests and outpatient exams. By utilizing specific data sets in two scenarios, we applied a DEA approach in order to deduce useful conclusions about the existing economic resources management situation in Greek healthcare organizations and hence provide a roadmap for future directions. The results of the quantitative analysis differ both with regard to the type of production factors used as well as to the geographical location of the hospital units. This necessitates the taking of different administrative measures both at a general level and at the level of individual units. Consequently, for the administration, the areas of activity where problems exist are noted, while at the same time indications are provided on the breadth of and the needs for
measures to be taken. More specifically, prioritizing the problem areas, the following are noted. For the hospital units in the central region, measures concerning the more rational utilization of economic resources should take priority, in addition to measures concerning personnel, which are also important but to a lesser degree. At the same time, a more genuine evaluation of bed-days and laboratories is essential. It is imperative that measures be taken for the more rational management of patient hospitalization in the units of the central region, in combination with bed utilization. From the analysis of scenario B emerges the degree of the utilization of economic resources by the individual hospital units. In this way, those units were identified which do not utilize the economic resources at their disposal to the degree to which they should. At the same time, the activities of those units to which the inadequate utilization of economic resources can be attributed were also demonstrated on the basis of the geographical separation of the sample. In addition, a proposal is made for the mixture of services that the administration should aim to offer. All of the above constitute useful data for the evaluation of the capabilities of the administrations of the individual hospital units. At the same time, they constitute clear indications of the activities which display serious problems and, as a result, underline the need for financial management decisions to be made. The results which emerge for the hospital unit OUCR “F” are a characteristic example. In order to optimize its efficiency score, measures are assumed that increase the effective use of inputs by more than 20%, in addition to increasing some of its outputs, up to doubling the number of lab tests in the outpatient clinics. The assumption of doubling the number of lab tests in outpatient clinics is based on the ascertainment that the use of telemedicine will assist in delivering healthcare services to remote locations. Thus, the number of patients remaining for treatment at their home location will increase, which will consequently lead to more frequent patient monitoring and an increase in lab tests. Besides, the main objective of telemedicine is the geographical spreading of all the medical incidents and the proportional exploitation of all the productive resources (Hovenga et al, 1998) within healthcare units, at both the urban and the regional level. With reference to the problem areas and to the breadth of the needs for the hospital units of the outlying regions to take measures, the following should be noted. It is imperative that measures be taken immediately to utilize the potential of the personnel, which is underemployed by more than 50%. It is necessary, first, for the beds available in the units of the outlying regions to be utilized and, secondly, for a more rational utilization of expenditures to take place. At the same time, it is considered crucial that policies be implemented that will result in an increase of (almost all) services provided by approximately 20%. In support of the above, the information which is provided for the administration of the hospital unit OUOR “Q” justifies the taking of measures which will increase the utilization of the available factors of production by more than 30%.
5. Conclusion

The research aim of this paper was, primarily, to estimate relative technical efficiency using a sample of public hospital units that provide obstetrical and gynaecological services in Greece and, secondly, to emphasize the policy implications for health sector policy-makers. In order to effectively address the above goals, a comparative analysis of 32
Greek Public Hospital Units with obstetrical and gynaecological services was conducted. The research was based on data collected from official public sources. From an analysis of the evaluation of the utilization of the production factors of these 32 Greek Public Hospital Units of the central and outlying regions, using the DEA method, the following emerged: (i) The areas were noted which must be reorganized in order to increase the efficiency of the functioning of the hospital units. (ii) Estimates were made of the breadth of the interventions which must be made by the administrations of the hospitals. (iii) It was ascertained that both the problems and their breadth differ between the hospital units of the central region and those of the outlying regions and, as a consequence, the administrative measures to be taken also differ. (iv) The “model” hospital units that should be emulated, as well as the production factors which contribute to the creation of these models, were identified. These results give an overall picture of the benefits. Inevitably, to implement changes it is necessary for policy and decision makers to know exactly which units need changes and the magnitude of these changes. In this respect DEA is a powerful tool, as it provides information about the efficiency of individual units taking into consideration multiple inputs and multiple outputs. It can thus enable decision makers to know exactly which inputs (beds or staff) need to be increased or decreased in an individual unit to maximise efficiency and cost savings. Furthermore, economic considerations are important factors in decisions to accept and pay for health and medical services (Bloom, 2004). Drummond and colleagues (2003) clearly delineate the issues with regard to the adoption of economic evaluations in decision making; they offer guidance and recommendations to policy makers on the incorporation of economic analysis results in their health policy making processes. The research aim of this paper was also to estimate the efficiency of economic resources management in a sample of public hospital units and, secondly, to emphasize the policy implications for health sector stakeholders. In the field of technical efficiency with the use of DEA, the study extends to the validity control of the model that includes financial resources as an input. At this point, it should be stressed that the DEA results underline the need for a more rational utilization of economic resources within the healthcare units, if we further consider that the input “total expenditures” refers to variable (functional) expenses. A strict monitoring of the “spending” of financial resources is required, particularly in the units of the outlying regions, so that their limitation or an increase in their utilization will be achieved. In this effort, the study and the monitoring of the utilization of beds and of the length of hospitalization are necessary. Inevitably, the reliability and validity of the DEA results are vital if decisions are to be made based on them, and further work may be required to establish their accuracy. However, even if the predictions from DEA are not completely accurate, DEA will still almost certainly indicate which areas could be targeted to improve the use of resources and to maximize efficiency. Consequently, the following benefits should be achieved to a large extent by implementing changes:
- Prevention of unnecessary journeys by female patients.
- Better geographical distribution of OandG cases.
- More efficient use of resources in rural areas.
- Increased provision of services in rural areas.
- Information indicating which resources should be targeted to improve efficiency, maximise resources and provide more equitable care.
DEA results can help administrators by providing new insights into the distribution of health resources to individual hospital units. The present study provides valuable information regarding the deployment of medical staff and beds, the utilization of financial resources and the deployment of medical supplies and equipment. Resource planning and the identification of the needs of OandG units are greatly aided by the availability of up-to-date DEA results. Briefly, all of the above are believed to constitute useful information for the managers of the hospital units, which will assist them in making decisions that will lead to the more effective operation of the units. Over and above these measures, which fall within the competencies and responsibilities of the managers of the hospital units, the possibility of comparison between these units facilitates monitoring by those in charge, while at the same time it contributes to the creation of a spirit of rivalry and competition. Finally, the differences in the evaluations between the hospitals of the central region and those of the outlying regions underline the necessity for the geographic redistribution of the cases. They also underline the necessity to investigate the reasons why patients prefer the central Attica hospitals, which are reputed to be better. Marketing campaigns and regulation to create barriers to the movement of non-local patients into the centrally located hospitals are issues for further research. Social marketing is an approach to changing behaviour and thus improving public health. It could help to facilitate this critical review, the object of which would be to isolate those approaches that really do enable individuals and communities to gain greater control over their health and the quality of their lives (Walsh et al, 1993). Nevertheless, the redistribution is assumed to be achieved through the implementation of Information and Communication Technologies (ICT). These can be used not only to store and transfer patient information but also to improve decision making, to improve institutional efficiency, to promote better health behaviour, and to enhance a more rational management of resources (ECA, 1999; Rao, 2001; Charles, 2000; Hakansson and Gavelin, 2000). The essence of telemedicine lies in transferring expertise and not the patient (Rao, 2001). This enables the needs and demands of healthcare to be met across large distances and is an important means of achieving the geographical redistribution of resources; it would thus facilitate the taking of more effective measures at the management level of the units. Telemedicine allows local services to be provided to patients wherever and whenever possible, eliminating unnecessary journeys for patients. The feasibility of the introduction of such a national system of telemedicine and its impact on the efficiency of hospital units of the central and outlying regions is an issue which requires further research.
References

Allin, S., Bankauskaite, V., Dubois, H., Figueras, J., Golna, C., Grosse-Tebbe, S., et al. (2004) In: S. Grosse-Tebbe, J. Figueras (Eds), Snapshots of health systems. WHO, European Observatory on Health Systems and Policies. Available from: www.euro.who.int/observatory/Hits/20060518
Al-Shammari, M. (1999) A multi-criteria data envelopment analysis model for measuring the productive efficiency of Hospitals. International Journal of Operations and Production Management, 19, 879–90.
Anderson, T. (1996) A data envelopment analysis (DEA) home page. Available from: http://www.emp.pdx.edu/dea/homedea.html.
Ballestero, E., Maldonado, J.A. (2004) Objective measurement of efficiency: applying single price model to rank hospital activities. Computers and Operations Research, 31(4), 515–32.
Banker, R.D., Conrad, R.F., Strauss, R.P. (1986) A comparative application of Data Envelopment Analysis and translog methods: an illustrative study of hospital production. Management Science, 32(1), 30–44.
Basson, M.D., Butler, T. (2006) Evaluation of operating room suite efficiency in the Veterans Health Administration system by using data envelopment analysis. American Journal of Surgery, 192, 649–56.
Bloom, B.S. (2004) Use of formal Benefit/Cost Evaluations in Health System Decision Making. American Journal of Managed Care, 10, 329-335.
Butler, T.W., Li, L. (2005) The utility of returns to scale in DEA programming: An analysis of Michigan rural hospitals. European Journal of Operational Research, 161(2), 469–77.
Central Council on Health (1994) Yearbook of health. Athens: Ministry of Health and Welfare [in Greek].
Center for Health Services Research (2000) The state of health in Greece. Athens: Ministry of Health and Welfare [in Greek].
Center for Health Services Research (2001) Health services in Greece. Athens: Ministry of Health and Welfare [in Greek].
Charles, B.L. (2000) Telemedicine can lower costs and improve access. Healthcare Financial Management, 4(4), 66–9.
Chang, H. (1998) Determinants of hospital efficiency: the case of central government-owned hospitals in Taiwan. Omega, 26(2), 307–17.
Charnes, A., Cooper, W.W., Rhodes, E. (1978) Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–44.
Charnes, A., Cooper, W.W. (1985) Preface to topics in data envelopment analysis. Annals of Operations Research, 2, 59–94.
Chilingerian, J.A. (1995) Evaluating physician efficiency in hospitals: a multivariate analysis of best practices. European Journal of Operational Research, 80, 548–74.
Chilingerian, J.A., Sherman, H.D. (2004) Health care applications. From Hospitals to Physicians, from productive efficiency to quality frontiers. In: Cooper, W.W., Seiford, L.M., Zhu, J. (Eds) Handbook on data envelopment analysis. Boston/London: Kluwer Academic Publisher.
Cooper, W.W., Seiford, L.M., Zhu, J. (2004) Data envelopment analysis: history, models and interpretations. In: Cooper, W.W., Seiford, L.M., Zhu, J. (Eds), Handbook on data envelopment analysis. Boston/London: Kluwer Academic Publisher; 1–39 [Chapter 1].
Desai, J. (2003) The cost of emergency obstetric care: concepts and issues. International Journal of Gynecology and Obstetrics, 81, 74–82.
Drummond, M., Brown, R., Fendrick, A.M., Fullerton, P., Neumann, P., Taylor, R., Barbieri, M. (2003) Use of pharmacoeconomics information-report of the ISPOR task force on use of pharmacoeconomic/health economic information in health-care decision making. Value Health, 6, 407-416.
Economic Commission for Africa (ECA) (1999) Information and Communication Technology for Health Sector. The African Development Forum '99. United Nations Economic Commission for Africa.
Gatzonis, M., Deftereos, S., Vasiliou, P., Dimitriou, F., Creatsas, G., Sotiriou, D., et al. (2000) Maternity Telemedicine Services in the Aegean Islands. In: Proceedings of 2nd International conference on telemedicine, p. 22–4.
Giokas, D.I. (2001) Greek Hospitals: how well their resources are used. Omega, 29(1), 73–83.
Goni, S. (1999) An analysis of the efficiency of Spanish primary health care teams. Health Policy, 48, 107–17.
Gounaris, C., Sissouras, A., Athanassopoulos, A. (2000) The Problem of Efficiency Measurement of the General Hospitals in Greece. In: Dolgeras, A., Kyriopoulos, J. (Eds) Equity, Efficiency and Effectiveness in Health services, Themelio Publications, Athens [in Greek].
Hakansson, S., Gavelin, C. (2000) What do we really know about the cost-effectiveness of telemedicine? Journal of Telemedicine and Telecare, 6(Suppl. 1), 133–6.
Hollingsworth, B., Dawson, P.J., Maniadakis, N. (1999) Efficiency measurement of health care: a review of non-parametric methods and applications. Health Care Management Science, 2, 161–72.
Hovenga, E.J.S., Hovel, J., Klotz, J., Robins, P. (1998) Infrastructure for reaching disadvantaged consumers. Telecommunications in rural and remote nursing in Australia. Journal of the American Medical Informatics Association, 5, 269–75.
Katharaki, M. (2006) The efficiency impact of Telemedicine on Obstetric and Gynaecology services: effects on Hospital units' management. Dissertation. Greece: University of Athens [in Greek].
Kirigia, M.J., Emrouznejad, A., Sambo, G.L., Munguti, N., Liambila, W. (2004) Using Data Envelopment Analysis to measure the technical efficiency of public health centers in Kenya. Journal of Medical Systems, 28, 155-166.
Maniadakis, N., Thanassoulis, E. (2000) Assessing productivity changes in UK hospitals reflecting technology and input prices. Applied Economics, 32, 1575–89.
Miller, J.L., Adam, E.E. (1996) Slack and performance in health care delivery. International Journal of Quality and Reliability Management, 13(8), 63–74.
National Statistical Service of Greece (2001) Statistical yearbook of Greece 2000. Athens: National Statistical Service of Greece [in Greek].
National Statistical Service of Greece (1992–2002) Statistics for social care and health. Athens: National Statistical Service of Greece [in Greek].
National Statistical Service of Greece (2000–2002) Vital statistics of Greece. Athens: National Statistical Service of Greece [in Greek].
OECD (2003) Health data software, Version 06/15/2003.
O'Neill, L., Dexter, F. Evaluating the efficiency of Hospitals' perioperative services using DEA. In: Brandeau, M.L., Sainfort, F., Pierskalla, W.P. (Eds) Operations research and health care. Norwell, MA: Kluwer Academic Publishers.
Ozcan, Y.A., Merwin, E., Lee, K., Morrisey, J.P. (2004) Benchmarking using DEA: the case of Mental Health Organizations. In: Brandeau, M.L., Sainfort, F., Pierskalla, W.P. (Eds) Operations research and health care. Norwell, MA: Kluwer Academic Publishers.
Palmer, S., Torgerson, D.J. (1999) Definitions of efficiency. British Medical Journal, 318, 1136.
Rao, S.S. (2001) Integrated health care and telemedicine. Work Study, 50, 222–9.
Salinas-Jimenez, J., Smith, P. (1996) Data envelopment analysis applied to quality in primary health care. Annals of Operations Research, 67, 141–61.
Sarkis, J., Talluri, S. (2002) Efficiency measurement of hospitals: issues and extensions. International Journal of Operations and Production Management, 22(3), 306–13.
Thanassoulis, E., Boussofiane, A., Dyson, R.G. (1995) Exploring output quality targets in the provision of perinatal care in England using data envelopment analysis. European Journal of Operational Research, 80, 588–607.
Tountas, Y., Karnaki, P., Pavi, E. (2002) Reforming the reform: the Greek national health system in transition. Health Policy, 62, 15–29.
Walsh, D.C., Rudd, R.E., Moeykens, B.A., Moloney, T.W. (1993) Social marketing for public health. Health Affairs, 104-119.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 67-88
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 3
MARKOV MODELS IN MANPOWER PLANNING: A REVIEW

Tim De Feyter (1) and Marie-Anne Guerry (2)

(1) Hogeschool Universiteit Brussel, K.U.Leuven Association, Belgium
(2) Vrije Universiteit Brussel, Belgium
Abstract

Manpower Planning is a fundamental aspect of Human Resource Management in organizations. The objective of Manpower Planning is developing plans to meet the future human resource requirements. A shortage as well as a surplus of (skilled) staff would be highly undesirable: it would lead to lower production, loss of orders and customers, higher costs and/or less profit. Especially for companies confronted with an ageing workforce or shortages on the labor market, Manpower Planning becomes a crucial instrument to create a sustainable competitive advantage. Since the 1960s, Operational Research techniques have extensively been developed to support organizations in their Manpower Planning challenge. Those techniques are especially interesting tools for large organizations in gaining insights into their complex manpower system and in clarifying the future dynamics of their workforce: employees might leave the organization, acquire experience and qualifications, or develop a broader range of skills. Although recently alternative approaches have been introduced in Manpower Planning (e.g. simulation techniques), in general, Markov Chain Theory remains useful to model the dynamics in a manpower system: Manpower Planning models based on Markov Chains aim to predict the evolution of the manpower system and/or control it by setting the organization's human resource policies (e.g. recruitment, promotion, training). The analytical Markov approach allows identifying interesting characteristics of the manpower system which influence its future dynamics. Although Markov manpower models are by nature stochastic, a lot of researchers use a deterministic approach, by assuming that all parameters of the model are known and precisely determined. This indeed allows focusing on special characteristics of specific manpower systems. However, this knowledge is only applicable if the basic assumptions of Markov Manpower Planning are respected and the parameters of the model are reliably estimated. More specifically, the aggregated Markov models are defined by transition probabilities between homogeneous subgroups of the manpower system. Therefore stochastic approaches take into account the uncertainty and imprecision in real-world applications and suggest methodologies for model building.
There is a rich variety of publications on Markov models in Manpower Planning, in which properties of manpower systems are investigated under very specific assumptions. This results in different types of models. This paper offers a review of those different types of models and covers the latest advances in the field. The structure of the overview clearly follows the successive stages of the Markov Manpower Planning methodology in real-world applications, from model building and selection, parameter estimation, model validation to prediction and control.
Introduction

While the term Manpower Planning (MP) originates from the 1960s (Walker, 1992; Smith and Bartholomew, 1988; Walker, 1980), the first applications of quantitative techniques for personnel management go back to 1779. The British Marine used a mathematical model to plan military careers and to analyze turnover (McClean, 1991). It concerned an application of a model from actuarial sciences. The techniques, originally developed to analyze demography, were suitable to study manpower behaviour: recruitments, turnover and promotions were studied in the same way as, respectively, births, deaths and changes in socioeconomic characteristics (Bartholomew, 1971). This allowed predicting internal personnel availability. Dill, Graver and Weber (1966) refer to the use of the replacement table before World War II. This was an early version of the skills inventory (Ivancevich, 2003; Mondy, Noe and Premeaux, 1999), which had the objective to predict turnover by actuarial methods and to assure succession from within the organization. The largest problem with applications of actuarial methods in personnel management was the lack of accuracy. The estimation of birth, death and transition probabilities in demography is based on large datasets, while populations within organizations are much smaller. Therefore, other stochastic methods for personnel management were developed from the 1940s onwards (Bartholomew, 1971). While previously Manpower Planning was mainly applied in the army, during World War II, because of a large personnel shortage at the national level, individual American companies were obligated to estimate their future personnel needs (Dill, Graver and Weber, 1966). It was only in the mid-1960s that the term Manpower Planning became widely recognized. The interest in the Manpower Planning problem had strongly grown, especially in the UK, originating from a specific phenomenon on the labor market. On the one hand, technological developments caused a shortage in specific personnel categories, while on the other hand a surplus of other employees occurred (Smith and Bartholomew, 1988). This dual problem shaped the theoretical development of Manpower Planning. Firstly, the awareness grew that Manpower Planning should consider differences in personnel characteristics. Secondly, the understanding grew that Manpower Planning has a supply as well as a demand side. The personnel shortages provoked a view of Manpower Planning as a process with different stages (Walker, 1992; Verhoeven, 1980): 1) forecasting the internal demand for employees, 2) forecasting the internal supply of employees, 3) planning actions to match future demand and supply. Following the emphasis in the 1950s on forecasting models in management thinking (Vloeberghs, 1998), Manpower Planning was mainly concerned with quantitative
management methods. This way, it became an interesting topic in Operations Research. Since then, for several reasons (Bell, 1994; Meehan and Ahmed, 1990), almost all academic attention for quantitative Manpower Planning has gone to personnel supply forecasting. Although nowadays new management paradigms have shifted away from the old focus on structure and planning (Vloeberghs, 2004; Evans and Doz, 1989), the focus on internal personnel supply still remains very useful in current perspectives on Human Resource Management. HR managers nowadays are confronted with two main goals: efficiency and innovation (Boxall and Purcell, 2003). Walker (1978) introduced the implementation of Manpower Planning in strategic management, with the main objective of predicting and minimizing future personnel costs, for which insights into the future personnel system are necessary. Since the 1980s, two different approaches to Human Resource Management can be distinguished, increasing the relevance of the focus on internal personnel supply even more:
- The market-based view presumes that companies will gain a sustainable competitive advantage by adjusting their internal resources to the organizational strategy, which is fully determined by external factors (Schuler and Jackson, 2005). Consequently, the company will try to match its internal personnel supply with the externally determined personnel demand. A mismatch between demand and supply is highly undesirable. A personnel surplus would result in inefficiency and excessive costs. A personnel shortage would be an obstruction for the company to execute its strategy, leading to lower production and loss of orders and customers. Especially employers confronted with an ageing workforce or shortages on the labor market face a higher risk of personnel shortages. For them, effective Manpower Planning is of vital importance.
- The resource-based view, on the other hand, presumes that companies will gain a sustainable competitive advantage by adjusting their strategy to the strengths (and weaknesses) of their future internal resources (Wright, Dunford and Snell, 2001). This way, predicting the evolution of the internal manpower supply (mainly in terms of personnel characteristics) becomes crucial to identify a successful competitive strategy.
In summary, Manpower Planning involves long term strategic management decisions. Besides this, strong Operations Research efforts have been taken to solve problems at the short term tactical level of personnel management. Personnel Scheduling and Rostering assigns the available employees to specific tasks or shifts that should be performed by the company (Ernst et al., 2004; Burke et al., 2004). While in personnel scheduling the available personnel is more or less fixed, Manpower Planning tries to adapt the long term availability of employees in the company and/or the forecasted long-term needs. Of course, in real-world applications, a strong interaction exists between Manpower Planning and Personnel Scheduling. Decisions about future personnel availability, taken at the long term planning level, will have an impact on the conditions which short term planning should take into account. On the other hand, Personnel Scheduling and Rostering might incorporate some specific needs about future personnel characteristics (e.g. willingness to work flexible hours) which long term planning should consider. Operations Research Models that integrate both long and short term problems (to become an optimal solution at both levels) are still quite
rare. This forms an interesting but rather complex challenge for future research in Operations Research (Petrovic and Vanden Berghe 2007).
Markov Models in Manpower Planning

Young and Almond (1961) were the first to introduce the application of Markov Theory in Manpower Planning (Smith and Bartholomew, 1988; Verbeek 1991). They hereby laid the foundations for decades of ongoing research on Markov manpower models for predicting and controlling an organization's internal personnel supply. In practice, manpower systems can also be analyzed by other operations research techniques, such as computer simulation models, optimization models and models based on system dynamics (Wang, 2005; Parker and Cain, 1996; Purkiss, 1981). In general those alternative models are much more complex. While complex models have the objective to improve the accuracy of results, they often require very specific data which are difficult to collect. Especially when a large number of parameters have to be estimated, this could harm the reliability of the results. Owing to their simplicity, simpler and more robust models (like Markov manpower models) are therefore often much more attractive (Skulj, Vehovar and Stamfelj, 2008). Moreover, for real-world practitioners, the use of very complex models might be too costly and time-consuming. Finally, unlike other more complex models, the analytical Markov approach allows identifying interesting characteristics of the manpower system which influence its future dynamics.
1. General Concepts

The introductory section clarified the relevance of Markov models in Manpower Planning. It explains the ongoing efforts of researchers to further develop knowledge in this field. This chapter offers a review of the current state-of-the-art in research on Markov models in Manpower Planning. In this section an overview is given of the basic concepts and assumptions underlying those models.
Dynamics in a Manpower System

The evolution of personnel availability fully depends on the future dynamics of the workforce. To study these dynamics, Markov Manpower Planning models consider organizations as manpower systems of stocks and flows.

Stocks. The manpower system is classified into k exclusive subgroups, resulting in the states of the system $S_1, \dots, S_k$. Those states form a partition of the total population. The number of members in state i at time t is called the stock of $S_i$ and is denoted as $n_i(t)$. The stock vector is given by $n(t) = (n_i(t))$ and is called the personnel distribution at time t. This row vector gives information on the total number of personnel members at time t, denoted by

$$N(t) = \sum_{i=1}^{k} n_i(t).$$
Sometimes it is useful to express stocks as a proportion of the total number of members in the manpower system, resulting in a stochastic vector $q(t) = (q_i(t))$ with

$$q_i(t) = \frac{n_i(t)}{\sum_{i=1}^{k} n_i(t)}.$$

So, having information on the stocks in absolute numbers, the stochastic vector q(t) can be computed. The other way around, knowing q(t) and the total number N(t) of members in the system, the stock can be computed as $n(t) = N(t)\, q(t)$.
Flows. The stock vector provides a snapshot of the system, but gives no information about changes in the personnel distribution over time. Markov manpower models consider time intervals [t-1, t) and denote the number of individuals moving between categories i and j in this interval by the flow $n_{ij}(t-1,t)$. For time interval [t-1, t), the flows are denoted in a square matrix $N(t-1,t) = (n_{ij}(t-1,t))$. It is much more common to express the flows as rates. By simply dividing the flows from state i to state j by the stock of $S_i$ at time t-1, we obtain the transition rate

$$p_{ij}(t-1,t) = \frac{n_{ij}(t-1,t)}{n_i(t-1)}.$$

The transition matrix is given by $P(t-1,t) = (p_{ij}(t-1,t))$.

The transition matrix gives an overview of the internal flows within the manpower system. Depending on the definition of the states, a transition could be interpreted as e.g. gaining experience, acquiring specific qualifications, developing a broader range of skills or simply a promotion. To model the future manpower dynamics, external flows should also be taken into consideration. We distinguish incoming and outgoing employees, in other words recruitments and wastage. The total recruitments R(t) in the time interval [t-1, t) are divided over the k states according to the distribution $r(t-1,t) = (r_i(t-1,t))$, with $r_i(t-1,t)$ being the proportion of the recruits assigned to state i. The flow rates from state i out of the manpower system can be obtained by

$$w_i(t-1,t) = 1 - \sum_{j=1}^{k} p_{ij}(t-1,t).$$

Functional Relation between Stocks and Flows. Using the notations above, the dynamics in a manpower system can be expressed as a system of difference equations:

$$n(t) = n(t-1)\, P(t-1,t) + R(t)\, r(t-1,t).$$
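As a small numerical illustration of these definitions (all figures invented for the purpose), the transition rates can be estimated from observed stocks and internal flows, after which the difference equation yields the next stock vector; a sketch in Python:

    # Stocks and flows for a hypothetical 3-state manpower system.
    import numpy as np

    n_prev = np.array([100.0, 60.0, 20.0])     # n(t-1): stocks per state
    N_flow = np.array([[80.0, 10.0,  0.0],     # n_ij(t-1, t): internal flows
                       [ 0.0, 50.0,  5.0],
                       [ 0.0,  0.0, 18.0]])
    P = N_flow / n_prev[:, None]               # p_ij = n_ij(t-1,t) / n_i(t-1)
    w = 1.0 - P.sum(axis=1)                    # wastage rates per state
    R, r = 15.0, np.array([0.8, 0.2, 0.0])     # total recruits, distribution
    n_next = n_prev @ P + R * r                # n(t) = n(t-1) P + R(t) r
    print(P.round(3), w.round(3), n_next)      # n_next = [92, 63, 23]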
Model Assumptions

The functional relation between stocks and flows is very convenient for predicting and controlling the internal personnel supply. To allow this, Markov manpower models simplify this relation by making the following assumptions:
- Markov manpower models are discrete-time models. Stocks and flows are only studied over equal time intervals. Consequently, the model ignores multiple flows during [t-1, t). Only the flow between the state at t-1 and the state at t is considered.
- The system is memory-less, meaning that future transitions only depend on the present state and are independent from previous transitions in the past (e.g. previous promotions, length of service).
- The flow rates are time-independent. In every time interval, the transition rates $p_{ij}(t-1,t)$ are the same and independent from earlier transitions. The constant transition rates are denoted by $p_{ij}$; the constant transition matrix is denoted by P.
- The constant recruitment vector is given by $r = (r_i)$.
- The states are homogeneous with respect to transition rates.
Under the Markov manpower model assumptions, the relation between stocks and flows is given by:

$$n(t) = n(t-1)\, P + R(t)\, r.$$
Notations:
k        number of states in the system
n_i(t)   number of staff in state i at time t
n(t)     (1 × k) stock vector with entries n_i(t)
p_ij     time-independent transition rate from state i to state j
P        time-independent (k × k) transition matrix with entries p_ij
R(t)     total number of recruitments during interval [t-1, t) who remain at time t
r_i      time-independent proportion of R(t) recruited in state i
r        time-independent (1 × k) recruitment distribution vector with entries r_i
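Under these assumptions, forecasting reduces to iterating the relation above. A minimal sketch (all parameter values invented); note that a what-if analysis, discussed later in this chapter, amounts to re-running the same loop with a modified P or r:

    # Iterative stock prediction under the time-homogeneous Markov manpower
    # model n(t) = n(t-1) P + R(t) r; all parameter values are invented.
    import numpy as np

    P = np.array([[0.80, 0.10, 0.00],   # constant transition matrix
                  [0.00, 0.85, 0.05],   # (row deficits from 1 are wastage)
                  [0.00, 0.00, 0.90]])
    r = np.array([0.7, 0.3, 0.0])       # constant recruitment distribution
    R = [20, 20, 25, 25, 30]            # planned recruitment per interval
    n = np.array([100.0, 60.0, 20.0])   # current stocks

    for t, R_t in enumerate(R, start=1):
        n = n @ P + R_t * r             # one step of the difference equation
        print(f"t={t}: expected stocks {n.round(1)}")
    # A what-if analysis re-runs the same loop with an alternative P or r,
    # e.g. a higher promotion rate p_12 or recruitment shifted to state 2.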
Although it falls beyond the scope of this chapter, we mention other Markov approaches in Manpower Planning that relax the restrictive assumptions of the Markov manpower model. Vassiliou and his research team focused on Non-homogeneous Markov models. Those models are characterized by transition rates that are not time-homogeneous (Georgiou 1992; Georgiou and Vassiliou, 1997; Tsantas, 1995; Vassiliou, 1982b; Vassiliou, 1984; Vassiliou, 1986; Vassiliou, 1992; Vassiliou, 1998; Young, 1974). More recently, results on Non-homogeneous Markov models can also be found in Yadavalli et al. (2002). Semi-Markov models on the other hand relax the assumption of homogeneous subgroups, by assuming conditional transition rates, depending on the duration in the grade. Indeed, the longer a person stays in a particular state, the less likely it might be for him to leave. The probability of leaving may therefore vary substantially with duration and destination. A semi-Markov manpower model combines a transition matrix of probabilities of moving between grades with a conditional distribution of duration in a grade (e.g. Yadavalli and Natarajan, 2001; McClean and Montgomery, 2000; Yadavalli, 2001; Vassiliou, 1992). Some researchers combine those two alternatives in Non-homogeneous Semi-Markov models (e.g. Janssen and Manca, 2002; McClean, Montgomery and Ugwuowo, 1998). Although those alternative approaches have the objective to improve accuracy in forecasting the manpower system dynamics, they suffer the same bottlenecks as other complex models (cfr. introduction). For
real-world applications, practitioners might therefore be better off with strict Markov manpower models. In Section 2, some guidelines are given to build models that conform to the strict model assumptions.
Deterministic and Stochastic Models

Markov Theory is by definition a stochastic approach. Nevertheless, in Markov Manpower Planning research, stochastic as well as deterministic models can be distinguished.

Deterministic models. The stochastic nature of Markov Theory is inconvenient for extensive manpower system investigation and therefore many researchers ignore it. They take the model assumptions for granted. Firstly, they assume a given personnel classification in homogeneous groups, mostly called grades. Consequently, internal transitions are referred to by the term promotions. Secondly, deterministic models assume a known transition matrix and recruitment distribution (e.g. Georgiou and Tsantas, 2002; Davies, 1975). This might be reasonable in some real-world applications, for example when the transitions are completely under management control.

Stochastic models. In most cases, however, there is no functional but a statistical relation between successive stocks. In line with the Markov philosophy, the stock is considered to be a random variable. It is well known that n(t) follows a multinomial distribution, for which the parameters (i.e. the flow rates) should be estimated. In this case, the flow rates are transition probabilities and the model can be used for predicting future stocks and flows. Of course, prediction in stochastic models incorporates uncertainty which cannot be ignored in real-world applications. Three sources of error can be distinguished. Firstly, a statistical error arises from the fact that stocks are associated with a probability distribution. Secondly, an estimation error occurs because the transition probabilities have to be estimated (De Feyter and Guerry, 2009; Vassiliou and Gerontidis, 1985). Finally, if the model assumptions are not satisfied, prediction involves a specification error. Bartholomew (1975) provides estimators for the prediction error in stochastic Markov manpower models under specific alternative hypotheses, i.e. Markov cohort analysis, Markov census analysis with given or estimated recruitment and Markov census analysis with known total size (cfr. Section 3).

Partially stochastic models. Besides the manpower models in which the flows are considered as deterministic and the models in which the flows are treated as stochastic, Manpower Planning research has paid attention to partially stochastic models in which some flows are deterministic and others are stochastic. In practice, on the one hand, the number of promotions is usually largely in the hands of the management, while on the other hand natural wastage, for example, is uncontrollable. For these reasons manpower models have been discussed in which the description of promotions is deterministic and the wastage variable is considered as stochastic (Davies, 1982; Davies, 1983; Guerry, 1993; McClean, 1991).
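To illustrate the stochastic reading of the model, the sketch below simulates the first of these error sources, the statistical error: each individual's transition is drawn from a multinomial distribution whose extra category represents wastage. All parameter values are invented for illustration:

    # Monte Carlo simulation of one period of a stochastic Markov manpower
    # model; parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    P = np.array([[0.80, 0.10, 0.00],
                  [0.00, 0.85, 0.05],
                  [0.00, 0.00, 0.90]])
    n = np.array([100, 60, 20])                # current integer stocks
    R, r = 20, np.array([0.7, 0.3, 0.0])       # recruits and their distribution

    samples = []
    for _ in range(1000):
        stocks = np.zeros(3, dtype=int)
        for i, n_i in enumerate(n):
            # destinations: states 1..k plus an extra wastage category
            probs = np.r_[P[i], 1.0 - P[i].sum()]
            stocks += rng.multinomial(n_i, probs)[:3]
        stocks += rng.multinomial(R, r)        # allocate recruits over states
        samples.append(stocks)
    samples = np.array(samples)
    print("mean:", samples.mean(axis=0).round(1),
          "std:", samples.std(axis=0).round(1))

The sample mean approximates the deterministic prediction n P + R r, while the sample standard deviation quantifies the statistical error around it.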
Prediction and Control

Whether the Markov manpower model is applied from a deterministic or a stochastic approach, the relation between stocks and flows allows forecasting internal personnel supply. Moreover, it offers insights into possible actions to match future personnel demand and supply.
Therefore, research on Markov manpower models is concerned with several specific problems, briefly described further in this section. In Section 3 a current state-of-the-art is given of the answers offered by scientific research. Predictions of the stocks. In a Markov manpower model the flows (internal, incoming and outgoing) are characterized by transition probabilities. The values of these parameters of the model reflect the personnel strategy of the manpower system. In fact the parameters of the model are a quantification of aspects such as promotion and recruitment policy. An interesting question is how the system would evolve in case the currently considered strategy remains unchanged in the future. In case the model assumptions as well as the values of the parameters of the model remain relevant for the future, the (expected) stocks can be forecasted: starting from the current stocks and based on the model, predictions for the stocks can be computed in an iterative way. What-if analyses. In fact, the forecasting procedure can be repeated for alternative personnel strategies. An alternative strategy can be characterized by modified values of the parameters of the model. The forecast of the stocks under these conditions gives an answer to the question of what the evolution of the personnel system would be if the strategy were characterized by those modified parameter values. Such what-if analyses give insight into the effect of a particular change in the strategy on the evolution of the stocks. Moreover, those analyses allow comparing the quality of different strategies in terms of certain goals to be achieved in the system. Control. For reasons of efficiency and effectiveness it is desirable to have the right number of employees at the right places in the organization at any time. Therefore, in Manpower Planning it is important to deal with the problem of controlling parameters in order to maintain or to attain a desired stock vector. In control problems, first of all, it has to be clarified which aspects are under control of the management in order to steer the evolution of the personnel system in a desired direction. The control actions that can be taken under consideration depend on the set of parameters of the model that can be controlled and on the direction(s) in which the control of these parameters is acceptable in order to minimize the discrepancy between the desired and the actual stocks in the future. The aspects that are under control of the management determine the list of parameters of the model that can be controlled. Moreover, the way/direction in which these aspects are controllable, and to what degree, determines the extreme values between which choices for these parameters are realistic for the manpower system (Skulj, Vehovar and Stamfelj, 2008). These insights can be translated into a characterization of the personnel strategies that are acceptable for the company. In case alternative recruitment strategies are under consideration for the manpower system, the parameters in relation to recruitment can be (arbitrarily) chosen and the system is under control by recruitment (Davies, 1975; Davies, 1976; Davies, 1982). Similarly, the system is under control by promotion in case alternative internal transition probabilities can be considered for the personnel strategy (Bartholomew, Forbes and McClean, 1991). Maintainability and attainability. Postulated objectives of a company can be of different kinds.
In case the current stock vector is a desirable one for the future, the goal is to maintain the stocks. In case there is a desirable stock vector for the future, not corresponding with the current one, the goal is to attain these preferred stocks. Maintainability and attainability are studied, among other conditions, under control by promotion and control by recruitment (Abdallaoui, 1987; Bartholomew, 1977; Bartholomew, Forbes and McClean, 1991; Davies, 1975; Davies, 1981; Guerry, 1991; Haigh, 1983; Haigh, 1992; Nilikantan and Raghavendra, 2005; Tsantas and Georgiou, 1998). Maintainability refers to the requirement that the stock vector be kept constant in time. Besides the discussion on maintainability (after one period of time), in manpower studies there has also been attention for the more generalized concept of maintainability after n steps (Davies, 1973). In quasi-maintainability the goal is less restrictive than in maintainability, since the proportional personnel structure is kept constant in time, although the stocks can vary in time because of a change in the total size (Nilikantan, 2005). Asymptotic behavior. The objective of a manpower system can also be formulated in terms of the asymptotic behavior of the stock vector. The limiting structure of the system reflects properties of the evolution in the long run (Keenay, 1975; Vassiliou, 1982a). Preferences of strategies. Within the set of strategies that are acceptable for the company, preferences for the strategy in achieving the goal(s) can be taken into account. In this way it can be reflected that some ways of achieving the goal(s) may be preferable to others. For example, a personnel strategy may be more preferred as the corresponding cost is lower (Vajda, 1978; Glen, 1996; Georgiou, 1997), or as the speed of convergence is higher (Keenay, 1975), or in case the strategy is at the same time efficient in reaching the goal(s) and deviates as little as possible from a preferred strategy (Mehlmann, 1980). Besides, in case a desired stock vector is not attainable, a preferred strategy can be selected based on the concept of the grade of attainability, i.e. the degree of similarity between attainable stock vectors and the desired vector (Guerry, 1999).
2. Building the Markov Manpower Model

In the previous section, we presented an overview of the basic concepts, assumptions and notations in Markov manpower models. We discussed its interesting possibilities for prediction and control. However, the adequacy of the model depends on some strict assumptions. These include homogeneous groups and time-independent transition probabilities. In this section, we consider a strategy for building an adequate Markov manpower model which estimates its parameters and minimizes the specification error. Four different phases in the model-building process are distinguished:
1) Data collection
2) Identification of homogeneous groups
3) Model estimation
4) Model validation
Problem Description

Markov manpower models can be used for prediction and control in various ways. In the simplest case, practitioners are only interested in the future available personnel of the organization in its entirety. In other cases, the interest is in the future supply of employees with a certain characteristic, e.g. grade, salary, qualifications or experience level (Wijngaard, 1983). Before building the Markov manpower model, it is useful to describe the nature of the problem. This defines the states in the personnel system under study. However, to meet the model assumptions, those preliminary groups might need further division in the model-building process.
Data Collection

A historical dataset of (former and current) employee transitions between (preliminary) states is necessary to investigate the dynamics of a personnel system. But this might not be enough to build a Markov manpower model that satisfies the assumption of homogeneous groups. Therefore, building a Markov manpower model requires data on personnel characteristics which might have a direct or indirect influence on employees' transition behavior (e.g. gender, number of children, full-time equivalent). The dataset should include all changes in the considered states and in the influential factors. The most natural way of collecting data to model the dynamics of a personnel system is to observe a group of entrants. Such a group, joining the personnel system in the same time interval, is called a cohort. In practice, it is often only possible to have data on stocks and flows for recent time periods. This means that the available cohort information is related to a restricted period. In case a personnel dataset consists of incomplete data on different cohorts, the data are called transversal data (Bartholomew, Forbes and McClean, 1991).
Identification of Homogeneous Groups

In a Markov manpower model the states are homogeneous groups of personnel for which the transition probabilities are assumed to be equal for each of the individuals within a group. Once the data have been collected, homogeneous groups can be identified. The problem description has determined a preliminary group classification. In De Feyter (2006) a general framework is presented to determine homogeneous subgroups in a personnel system. Homogeneous groups. For every state, a multinomial logistic regression analysis is suggested to investigate the relation between personal characteristics and the transition probabilities. This identifies the significant variables for further division of the preliminary groups into more homogeneous subgroups. Guerry (2008) offers an alternative recursive partitioning algorithm to determine homogeneous subgroups of personnel profiles under time-discrete Markov assumptions. In the final definition of the states of the Markov manpower model, one can decide to aggregate subgroups that have comparable transition probabilities (Wijngaard, 1983). An aggregation of subgroups results in personnel groups with a greater number of members, which leads to better estimations of the parameters. Memory-less system. A matrix of observed flows $n_{ij}$ can be tested for the presence of a Markov memory based on a $\chi^2$-test (Hiscott, 1981). This test indicates to what extent the probability for a member to be in state j at time t depends on the state i at time t-1 and not on the states at t-2, t-3, etc.
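As a hedged illustration of such a memory test (the counts below are hypothetical, not data from this chapter), one can cross-tabulate, for members found in a fixed state i at time t-1, the state occupied at t-2 against the state occupied at t; under the memory-less hypothesis the two variables are independent:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts for members observed in a fixed state i at time t-1:
# rows = state occupied at t-2, columns = state occupied at t.
table = np.array([[40, 10,  5],
                  [12, 35,  8],
                  [ 6,  9, 30]])

chi2, p_value, dof, expected = chi2_contingency(table)
# Under the Markov (memory-less) hypothesis the row and column variables
# are independent; a small p-value suggests second-order dependence.
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")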
Time-homogeneity. Another consideration in this stage of the model-building process is the assumption of time-homogeneous transition probabilities. The heterogeneity in the preliminary states most often causes a problem with time-homogeneity. Since the preliminary groups consist of subgroups with different transition probabilities, the composition of the overall preliminary group would change over time. This way, it is unlikely that the preliminary groups satisfy the assumption of time-homogeneity. However, the possibility exists that the transition probabilities of homogeneous subgroups also suffer from time-dependency. Therefore, De Feyter (2006) suggests, besides other personal variables, to also enter time as an explanatory variable in the multivariate regression analysis, as well as the interaction effects between time and the other variables. For this, two approaches can be used in a complementary way. By considering time as a continuous variable, a functional relation with the transition probabilities is tested; in case of a significant relation, this could be incorporated in the model by a transformation (Bartholomew, 1975). By considering time as a discrete variable, less predictable time-heterogeneity can be tested. In case of time-dependent transition probabilities which are difficult to incorporate in the model, the practitioner might still prefer non-homogeneous Markov models (cfr. Section 1). However, at the cost of accuracy, the practitioner might be better off accepting this specification error (cfr. introduction). In this case, the best subset of explanatory variables without the variable time is chosen. Alternatively, Sales (1971) offers a goodness-of-fit statistic for testing time-homogeneity of the transition probabilities in state i:

$\chi^2(i) = \sum_{t} n_i(t-1) \sum_{j \in J(i)} \frac{\big(\hat{p}_{ij}(t-1,t) - \hat{p}_{ij}\big)^2}{\hat{p}_{ij}}$

which follows a $\chi^2$-distribution with $(T-1)(m(i)-1)$ degrees of freedom, with:

J(i) = all values of j for which $\hat{p}_{ij} > 0$
$\hat{p}_{ij}(t-1,t)$ = transition rate during the interval [t-1, t)
$\hat{p}_{ij}$ = the estimated transition probability based on the whole historical dataset
T = number of observed time periods in the historical dataset
m(i) = number of possible flows originating from state i.

(A computational sketch of this statistic is given at the end of this subsection.) Hidden Markov models. The general framework suggested above determines homogeneous groups in a personnel system based on observable variables available in the historical dataset. Nevertheless, in practice some flows depend on individual traits even within such a homogeneous group. In case there is a lack of observations on these sources of heterogeneity, parameter estimation is not possible for these subgroups in a Markov model. Earlier work (Ugwuowo and McClean, 2000) concerning manpower models points to the importance of making a distinction between two types of sources of heterogeneity, namely observable sources and latent sources. Guerry (2005) dealt with the problem of latent sources of heterogeneity by introducing a hidden Markov manpower model. This specifies a technique to improve homogeneity of the subgroups of the manpower system. When latent sources are considered in the model-building process, the statistical relation between stocks and flows needs further specification to allow prediction and control (Guerry, 2005).
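The computational sketch of the Sales (1971) time-homogeneity statistic announced above follows; the stock and flow numbers are hypothetical and the helper name sales_statistic is ours:

import numpy as np
from scipy.stats import chi2

def sales_statistic(n_i, flows_i):
    """Time-homogeneity statistic for one state i (Sales, 1971).

    n_i     : length-T array, stock of state i at the start of each period
    flows_i : T x m(i) array, flows_i[t, j] = observed flow from i to
              destination j during [t, t+1); all pooled rates assumed > 0
    """
    N_ij = flows_i.sum(axis=0)
    p_hat = N_ij / n_i.sum()                  # pooled estimates over all periods
    p_hat_t = flows_i / n_i[:, None]          # per-period transition rates
    stat = (n_i[:, None] * (p_hat_t - p_hat) ** 2 / p_hat).sum()
    T, m_i = flows_i.shape
    dof = (T - 1) * (m_i - 1)
    return stat, 1.0 - chi2.cdf(stat, dof)    # statistic and p-value

# Hypothetical data: stocks of state i over 4 periods, flows to 3 destinations.
n_i = np.array([100, 95, 90, 92])
flows_i = np.array([[10, 5, 3],
                    [ 9, 6, 2],
                    [11, 4, 3],
                    [ 8, 6, 4]])
print(sales_statistic(n_i, flows_i))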
Model Estimation

Once the homogeneous groups are determined, the transition probabilities have to be estimated. During the identification of homogeneous groups, the relations between the individuals' characteristics and the transition probabilities are investigated. Consequently, the fitted response functions could be used to estimate the parameters of the Markov manpower model. However, a closed-form solution exists for the values of the transition parameters. Already in 1957, Anderson and Goodman developed a maximum likelihood estimator for the transition probabilities under the strict Markov assumptions:

$\hat{p}_{ij} = \frac{N_{ij}}{N_i}$

with $N_{ij} = \sum_{t=0}^{T-1} n_{ij}(t)$ and $N_i = \sum_{t=0}^{T-1} n_i(t)$, where $n_i(t)$ and $n_{ij}(t)$ are the observed stocks and flows in the historical dataset. This estimator is shown to be a minimum variance unbiased estimator.
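A minimal sketch of this estimator, assuming the stocks and flows are available as arrays (the two-period, two-grade dataset shown is hypothetical):

import numpy as np

def estimate_transition_matrix(stocks, flows):
    """Maximum likelihood estimate of P under the strict Markov assumptions
    (Anderson and Goodman, 1957): p_ij = N_ij / N_i.

    stocks : T x k array, stocks[t, i] = n_i(t) for t = 0..T-1
    flows  : T x k x k array, flows[t, i, j] = n_ij(t)
    """
    N_ij = flows.sum(axis=0)        # total observed flows from i to j
    N_i = stocks.sum(axis=0)        # total exposure of state i
    return N_ij / N_i[:, None]

stocks = np.array([[100, 50],
                   [ 98, 52]])
flows = np.array([[[80, 10], [0, 40]],
                  [[78, 12], [0, 42]]])
print(estimate_transition_matrix(stocks, flows))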
Model Validation

In validating the manpower model, the goal is to measure the extent to which the model is able to reproduce the data on the observed stock vectors. With respect to validity, in most previous work in Manpower Planning a distinction was made between internal and predictive validity (Bartholomew, Forbes and McClean, 1991). For the internal validation, the parameters of the model are estimated based on the available observations of the stocks and flows for all time periods $[t, t+1)$, $t = 0, \ldots, T-1$. Based on the estimated parameters and the initial stock vector at time $t = 0$, projections are computed for the stock vectors at the subsequent time points. In comparing these predicted stocks with the observed ones, it becomes clear to what extent the model is able to reproduce the observations. For the predictive validity, the observations are divided into two sets. The observations of the stocks and flows available for the time periods $[t, t+1)$, $t = 0, \ldots, \tau - 1$ with $\tau < T$, are used to estimate the parameters of the model. Based on these estimated parameters and the observed stock vector at time $\tau$, projections for the stocks at times $\tau + 1, \ldots, T$ are computed and compared with the actual observed stocks. In fact, in the predictive validation method the quality of the model is tested by treating the time $\tau$ as the present and the time points $\tau + 1, \ldots, T$ as the future (for which there are observations available). Moreover, an n-fold cross-validation approach can be useful in testing the goodness of fit of a manpower model. In previous work a goodness-of-fit test is expressed in terms of the observed stocks and their estimated values (Sales, 1971). Alternatively, since the flows from a state i are multinomial, the validity of the Markov model can be examined by a $\chi^2$-test based on the observed flows and their estimations.
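A sketch of the predictive validation scheme, under the simplifying assumption (ours) that recruitment is ignored so only the internal flows are projected; all numbers are hypothetical:

import numpy as np

def predictive_validation(stocks, flows, tau):
    """Estimate P on periods [0, tau) and project expected stocks for
    t = tau+1..T-1, starting from the observed stock vector at time tau."""
    P_hat = flows[:tau].sum(axis=0) / stocks[:tau].sum(axis=0)[:, None]
    projections = [stocks[tau]]
    for _ in range(len(stocks) - tau - 1):
        projections.append(projections[-1] @ P_hat)
    return np.array(projections)

# Hypothetical dataset with 5 observed stock vectors and 4 periods of flows.
stocks = np.array([[100, 50], [95, 52], [92, 55], [90, 56], [88, 58]], float)
flows = np.array([[[80, 10], [0, 42]],
                  [[76, 11], [0, 44]],
                  [[74, 10], [0, 46]],
                  [[72, 11], [0, 47]]], float)
proj = predictive_validation(stocks, flows, tau=3)
print(proj)   # compare the projected rows with stocks[3:], e.g. by a chi-square test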
3. Some Markov Manpower Models

The model-building process, as discussed in Section 2, ensures the acceptability of the Markov model assumptions. It results in well-defined states. These states are personnel subgroups that are homogeneous with respect to the transition probabilities (estimated based on observable variables). Under the Markov model assumptions, additional alternative hypotheses on the manpower system result in different models. In each of these models, the aspects of prediction, control and asymptotic behavior can be examined in a very specific manner.
Markov Cohort Analysis

In modelling a cohort, and under the assumption that nobody can join the cohort at a later stage, there is no incoming flow to be considered for the manpower system. The personnel system can be described by an absorbing Markov chain in which the transient states correspond with the homogeneous categories of the personnel system and the absorbing states correspond with the different types of wastage flows (retiring, accepting a job in another organisation, …). Based on the matrix $P = (p_{ij})$ of the internal transition probabilities from a personnel category $S_i$ $(1 \le i \le k)$ to a personnel category $S_j$ $(1 \le j \le k)$, the evolution of the stock vector can be described as:

$n(t) = n(t-1)\,P = n(0)\,P^t$.

The fact that there are no recruitments at a later stage results in stocks evolving towards zero, under the condition that there is wastage out of each of the personnel categories:

$\lim_{t \to \infty} n(t) = (0, \ldots, 0)$.

The fundamental matrix $N = (n_{ij}) = (I - P)^{-1}$ of the absorbing Markov chain provides information on the (expected) total number of times $n_{ij}$ that the process, starting from category $S_i$, is in category $S_j$ (Bartholomew, 1982). Consequently, for members starting from category $S_i$, the average seniority at the moment of leaving the manpower system is given by $\sum_j n_{ij}$.
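A short numerical sketch of the fundamental-matrix computation (the internal transition matrix is hypothetical):

import numpy as np

# Hypothetical internal transition matrix of an absorbing (cohort) system.
P = np.array([[0.60, 0.20, 0.00],
              [0.00, 0.65, 0.15],
              [0.00, 0.00, 0.80]])

k = P.shape[0]
N = np.linalg.inv(np.eye(k) - P)    # fundamental matrix (I - P)^(-1)

# N[i, j] = expected number of periods spent in S_j when starting from S_i;
# the row sums give the average seniority at the moment of leaving the system.
print(N)
print(N.sum(axis=1))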
Markov Census Model

In general, a manpower system is not composed only of the members of one cohort: staff members will leave the system and others will join the system. Based on census personnel data the transition matrix, the wastage vector and the recruitment vector can be estimated (cfr. Section 2). For some manpower systems, prognoses or assumptions on the trend of the total number of recruitments in the future will be made based on efficiency and effectiveness considerations (Gallisch, 1989). For other systems it is more natural to formulate targets for the evolution of the total number of employees (Zanakis, 1980). Depending on the aspects for which insights about the future are available, and depending on the assumptions that are realistic for the manpower system, the evolution of the stock vector will be described in a different way. In what follows, Markov census models are built and discussed under several hypotheses.
Markov Census Model with Known Recruitment

In a manpower model with known recruitment, at any time t a known targeted number R(t) of staff members is recruited. The incoming flow can therefore be considered as starting from an additional state of which the stock at time t equals R(t). The probability that a new recruit will enter into state i is given by $r_i$, the i-th component of the recruitment vector r. The evolution of the stock vector can be described as:

$n(t) = n(t-1)\,P + R(t)\,r$

with P the matrix of the internal transition probabilities. Although the hypotheses of this model correspond with the Markov properties, in general the manpower model is not a Markov chain model. Under the more restrictive hypothesis of a constant total number R of recruitments at any time, the asymptotic behavior is characterized by the limiting stock vector:

$\lim_{t \to \infty} n(t) = R\,r\,(I - P)^{-1}$

which is also a fixed point for the strategy characterized by P, R and r (Bartholomew, Forbes and McClean, 1991).
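The iterative forecast and the limiting stock vector can be sketched as follows (P, r, R and the starting stocks are hypothetical):

import numpy as np

P = np.array([[0.60, 0.20],
              [0.00, 0.75]])                  # internal transitions (hypothetical)
r = np.array([0.8, 0.2])                      # recruitment distribution
R = 20                                        # constant recruitment per period
n = np.array([100.0, 60.0])                   # current stocks

for t in range(1, 6):                         # n(t) = n(t-1) P + R(t) r
    n = n @ P + R * r
    print(t, n.round(2))

limit = R * r @ np.linalg.inv(np.eye(2) - P)  # limiting stock vector
print("limit:", limit.round(2))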
Markov Models with Known Total Size

At time t the stocks in the different states are the coordinates of the stock vector n(t), from which one can calculate the total size of the system at that moment as:

$N(t) = \sum_i n_i(t)$.

The probability that a member of the personnel category i has left the system after one period of time is $w_i = 1 - \sum_j p_{ij}$. The vector $w = (w_i)$ is the wastage vector. The evolution of the stock vector can be described as:

$n(t) = n(t-1)\,[P + w' r] + [N(t) - N(t-1)]\,r$

in which the matrix $Q = P + w' r$ is a row-stochastic matrix. If the evolution of the size of the system is of such a nature that after each time interval the size has increased/decreased by a fixed proportion $\alpha$, i.e. the expanding/contracting rate, then $N(t) = (1+\alpha)\,N(t-1)$, and the proportional personnel structure $q(t) = \frac{n(t)}{N(t)}$ can be forecasted based on:

$q(t) = \frac{1}{1+\alpha}\, q(t-1)\,Q + \frac{\alpha}{1+\alpha}\, r$.

According to Bartholomew, Forbes and McClean (1991), for a system with expansion rate $\alpha$ that is under control by recruitment, the proportional personnel structure $q(t)$ is attainable from $q(t-1)$ iff

$q(t-1)\,P \le (1+\alpha)\,q(t)$.

And in the context of quasi-maintainability, it can be stated that the proportional personnel structure q is maintainable iff $q\,P \le (1+\alpha)\,q$. In the more restrictive situation of a manpower system with a constant total size, the evolution of $q(t)$ is characterized by the Markov chain with transition matrix Q:

$q(t) = q(t-1)\,Q$.

In case the matrix Q is the transition matrix of a regular Markov chain, the fixed point theorem for regular Markov chains provides a characterization of the asymptotic behavior of the proportional personnel structure (Seneta, 1973). Namely, independent of the initial distribution q(0), the limiting proportional personnel structure

$\lim_{t \to \infty} q(0)\,Q^t = q^*$

is equal to the unique probability vector $q^*$ that is a fixed point of Q (a numerical sketch follows at the end of this subsection). It is possible that a particular structure is not maintainable/attainable after one period of time but that it is after n steps. The set of n-step maintainable structures and the set of n-step attainable structures are discussed in Davies (1973), both for systems with constant total size and for systems with a constant expanding/contracting rate. In Davies (1975), for a constant size system with k states and controllable by recruitment, the set $M_n$ of n-step maintainable structures is described geometrically as the convex hull of $k^n$ points of $\mathbb{R}^k$. In order to attain a preferred proportional structure, in Vajda (1978) an optimization algorithm is introduced to find a preferred strategy as the result of minimizing the corresponding cost, taking into account the cost of supporting a state as well as the cost of recruiting. Hereby the cost may differ from state to state and may vary from step to step. Mehlmann (1980) deals with the problem of attainability for a system with known total size by introducing a dynamic programming approach to determine optimal strategies. Hereby the goal is to get from the initial structure to the desired proportional personnel distribution in a reasonably short time and to hold deviations from the preferred recruitment distribution and transition matrix as small as possible. Another criterion in selecting the most preferred strategy can be, for example, efficiency (Keenay, 1975). In case a desired stock vector is attainable by several acceptable strategies, a strategy resulting in the desired stocks after a minimum number of time periods is the most efficient in achieving the goal. In Keenay (1975) convergence properties, such as the speed of convergence, are studied for systems with expanding/contracting rate $\alpha$. In Bartholomew (1982) the set of attainable structures is described for constant size systems under control by recruitment and for systems under control by promotion. In this approach a structure is attainable in case it is attainable in a finite number of steps from at least one other structure. The interest of the study lies in the complement of the set, since for a structure not belonging to the set of attainable structures it is known that it is not attainable from whatever starting structure. In case a desired stock vector is not attainable, a preferred strategy can be selected based on the degree of similarity between attainable stock vectors and the desired vector, resulting in a strategy with an attainable stock vector that is very similar to the desired one. This concept of the grade of attainability is studied for constant size systems under control by recruitment in Guerry (1999).
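Returning to the constant-size case, the announced sketch: the fixed point q* of Q can be obtained numerically as the left eigenvector of Q associated with eigenvalue 1 (all parameters hypothetical):

import numpy as np

P = np.array([[0.60, 0.20],
              [0.00, 0.75]])             # internal transitions (hypothetical)
r = np.array([0.7, 0.3])                 # recruitment distribution
w = 1.0 - P.sum(axis=1)                  # wastage vector
Q = P + np.outer(w, r)                   # row-stochastic matrix Q = P + w'r

# Fixed point q* of a regular chain: left eigenvector of Q for eigenvalue 1.
vals, vecs = np.linalg.eig(Q.T)
q_star = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
q_star = q_star / q_star.sum()
print(q_star)                            # limiting proportional personnel structure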
Markov Models Applied to Age and Length of Service Distributions

In general, wastage probabilities depend to an important degree on the age and/or the length of service of the members (Forbes, 1971). For this reason, in several Markov manpower models the states are defined based on age and length of service (e.g. Bartholomew, 1977; Keenay, 1975; Nilikantan and Raghavendra, 2008). For example, in Woodward (1983) a Markov manpower model in which the states are defined by grade, age and length of service provides projections of the age and grade distributions of academics in UK universities. The discussed model is equispaced: the prediction intervals are all equal and the personnel are classified in age and length of service classes of the same size as the prediction intervals.
Proportionality Markov Manpower Models

More recently, Nilikantan and Raghavendra (2005) introduced the concept of proportionality into the Markov manpower models. The condition of proportionality refers to the fact that in the same time interval the recruitment inflow into a personnel category j equals a prespecified proportion of the promotion inflow into that category. Under the assumption of proportionality the incoming flows and the internal flows are not considered as independent, as is the case in general manpower models. In Nilikantan (2005) maintainability, quasi-maintainability and attainability are examined for proportionality policies.
Mixed Push-Pull Models

Human Resource Management literature distinguishes two recruitment approaches, as a function of the firm's competitive strategy (Sonnenfeld et al., 1988; Schuler and Jackson, 1987). Firstly, vacancies can be filled by external recruitment. Once hired, employees grow in terms of skills, knowledge and abilities. In this case, Markov manpower models are suitable to investigate the personnel dynamics. Markov manpower models are therefore also often called push models, because in each time interval [t-1, t) a certain number of employees is expected to make a transition from state i to state j, independently of vacancies in state j. Secondly, vacancies can be filled by internal recruitment. For this purpose, the Manpower Planning literature offers a pull approach, based on renewal models (Bartholomew, Forbes and McClean, 1991; Sirvanci, 1984). In such systems, transitions from state i to state j are only possible in case of vacancies in state j. Vacancies are assumed to follow a binomial distribution, based on the wastage probability in state j. More recently, the consensus has grown that firms seldom apply one unique competitive or recruitment strategy, but mix several strategies to enable success on several separate markets (Ferris et al., 1999). Consequently, a mix of push and pull promotions might occur in the same personnel system at the same time. Georgiou and Tsantas (2002) and De Feyter (2007) therefore introduced mixed push-pull models for prediction, control and investigation of asymptotic behavior. Georgiou and Tsantas (2002) introduced the Augmented Mobility Model, which allows modeling push as well as pull flows within the system. Besides the push flows between the active classes in the system, a trainee class is introduced from which individuals can be pulled towards the active classes in case vacancies arise in the active states. However, the discussion in Georgiou and Tsantas (2002) is restricted to an embedded Markov model assuming known total size (cfr. above), implying that the total number of individuals in the system is fixed or at least known and that the vacancies are calculated at an aggregated level. For some companies, however, it might be more interesting to model vacancies at the level of the individual states. De Feyter (2007) weakened the assumptions of the Augmented Mobility Model by modeling push and pull flows between all states in the personnel system and by estimating vacancies in all individual states using a binomial distribution.
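A deliberately simplified one-step sketch of such a mixed system, in which the push flows are taken in expectation and the binomially distributed vacancies are refilled one-for-one by external recruits (a simplifying assumption of ours, not the full model of Georgiou and Tsantas (2002) or De Feyter (2007)):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state system.
P_push = np.array([[0.70, 0.10],
                   [0.00, 0.80]])   # push flows, row sums < 1
w = 1.0 - P_push.sum(axis=1)        # wastage probabilities per state
n = np.array([80, 40])

pushed = n @ P_push                 # expected push step, independent of vacancies

# Pull side: vacancies in each state are binomially distributed leavers,
# here assumed to be refilled one-for-one by external recruitment.
vacancies = rng.binomial(n, w)
n_next = pushed + vacancies
print(n_next)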
4. Conclusion

A Markov manpower model implicitly assumes several hypotheses that are not realistic for every organisation. This aspect can be experienced as a disadvantage of the Markov manpower models. Nevertheless, a Markov manpower model is in any case an interesting tool, for example for gaining insights by what-if analyses. What-if analyses based on a manpower model with Markov assumptions result in properties of the evolution of the manpower system under the assumption that the actual promotion rates, wastage rates and recruitment vector would be applied in the future. These insights can be helpful in deciding whether the actual personnel strategy is a preferable one for the future. Moreover, an analysis based on a Markov manpower model has the advantage that the results can easily be communicated in terms of rates and/or numbers of employees in well-defined personnel categories. As a consequence of its simplicity, no advanced quantitative concepts are needed to understand a report on the results of a Markov analysis.
This chapter offered a review of Markov manpower models. First of all, it clarified the interesting characteristics of those models within current Human Resource Management, especially in comparison with other Operations Research techniques. This explains the ongoing efforts of researchers in the field. Moreover, we offered an overview of all steps in a real-world application and explained the specific problems dealt with by academic researchers. This makes it easier for the scientific community to place further research within the current state-of-the-art. Important elements in the operationalisation of these models (finding homogeneous groups, estimating the model parameters and model validation) were discussed. Under specific characteristics of the manpower system, attention was paid to the main aspects of prediction, asymptotic behavior and control. An overview was given of properties with respect to maintainability and attainability. The validity of the Markov models is to a great extent determined by the degree of homogeneity of the states. One of the challenges for further research is to develop an integrated approach in which the definition of the states of the manpower model takes into account observable as well as hidden heterogeneity, in order to end up with more homogeneous states. In this context, another point that has to be clarified is in what way a good balance can be found between the level of subdivision of the personnel system on the one hand and the quality of the estimated parameters with respect to the corresponding states on the other hand. A division of the personnel system into more subgroups can result in states of the Markov model that are more homogeneous, but not necessarily in parameter estimations of a better quality.
References G. Abdallaoui, Probability of maintaining or attaining a structure in one step, Journal of applied probability, 24 (1987), 1006-1011. T.W. Anderson and L.A. Goodman, Statistical inferences about Markov chains, Annals of mathematical statistics, 28(1) (1957), 89-110. D.J. Bartholomew. The statistical approach to manpower planning, The Statistician, 20(1) (1971), 3-26. D.J. Bartholomew, Errors of prediction for Markov chain models, Journal of the Royal Statistical Society, Series B, 3 (1975), 444-456. D.J. Bartholomew, Maintaining a grade or age structure in a stochastic environment, Advances in Applied Probability, 9 (1977), 1-17. D.J. Bartholomew, Stochastic models for social processes (3rd ed), John Wiley and Sons, New York, 1982. D.J. Bartholomew, A.F. Forbes and S.I. McClean, Statistical techniques for manpower planning, Chichester: Wiley publishers, 1991. D. Bell, New demands on manpower planning, Personnel Management Plus, 1994. E.K. Burke, P. De Causmaecker, G. Vanden Berghe and H. Van Landeghem. The state of the art of nurse rostering, Journal of Scheduling, 7 (2004), 441-499. P. Boxall and J. Purcell, Strategy and Human Resource Management, New York: Palgrave Macmillan, 2003. G.S. Davies, Structural control in a graded manpower system, Management Science, 20 (1973), 76-84.
G.S. Davies, Maintainability of structures in Markov chain models under recruitment control, Journal of Applied Probability, 12 (1975), 376-382. G.S. Davies, Consistent recruitment in a graded manpower system, Management Science, 22 (1976), 1215-1220. G.S. Davies, Maintainable regions in a Markov manpower model, Journal of Applied Probability, 18 (1981), 738-742. G.S. Davies, Control of grade sizes in partially stochastic Markov manpower model, Journal of Applied Probability, 19 (1982), 439-443. G.S. Davies, A note on the geometric/probabilistic relationship in a Markov manpower model, Journal of Applied Probability, 20 (1983), 423-428. T. De Feyter, Modeling heterogeneity in Manpower Planning: dividing the personnel system in more homogeneous subgroups, Applied Stochastic Models in Business and Industry, 22(4) (2006), 321-334. T. De Feyter, Modeling mixed push and pull promotion flows in Manpower Planning, Annals of Operations Research, 155(1) (2007), 25-39. T. De Feyter and M.A. Guerry, Evaluating recruitment strategies using fuzzy set theory in stochastic manpower planning, Stochastic analysis and applications, 27(6) (2009), 11481162. W. Dill, D.P. Graver and W.L. Weber, Models and Modelling for Manpower Planning, Management Science, 13(4) (1966), 142-167. A.T. Ernst, H. Jiang, M. Krishnamoorthy and D. Sier, Staff scheduling and rostering: A review of applications, methods and models, European Journal of Operational Research , 153 (2004), 3-27. P. Evans and Y. Doz, The Dualistic Organization, In: P. Evans, Y. Doz and A. Laurant (eds.), Human Resource Management in International firm, London, (1989), 219-242. G. Ferris, W. Hochwarter, R. Buckley, et al, Human Resources Management: Some new directions, Journal of Management, 25(3) (1999), 385-415. A.F. Forbes, Non-Parametric Methods of Estimating the Survivor Function, Statistician, 20 (1971), 27-52. E. Gallisch, On the development of manpower number and recruitment in the civil services, Methods and Models of Operations Research , 33 (1989), 267-286. A.C. Georgiou, Partial maintainability and control in nonhomogeneous Markov manpower systems, European Journal of Operational Research, 62(2) (1992), 241-251. A.C. Georgiou and P.-C.G. Vassiliou, Cost models in nonhomogeneous Markov systems, European Journal of Operations research, 100 (1997), 81-96. A.C. Georgio and N. Tsantas, Modelling recruitment training in mathematical human resource planning, Applied Stochastic Models in Business and Industry, 18 (2002), 53-74. J.J. Glenn and C.L. Yang, A model for promotion rate control in hierarchical manpower systems, IMA Journal of Management Mathematics, 7(3) (1996), 197-206. M.A. Guerry, Monotonicity property of t-step maintainable structures in three-grade manpower systems: a counterexample, Journal of Applied Probability, 28 (1991), 221224. M.A. Guerry, The probability of attaining a structure in a partially stochastic model, Advances in Applied Probability, 25(4) (1993), 818-824. M.A. Guerry, Using fuzzy sets in manpower planning, Journal of Applied Probability, 36(1) (1999), 155-162.
M.A. Guerry, Hidden Markov chains as a modelisation tool in manpower planning, Working paper MOSI/18, (2005). M.A. Guerry, Profile based push models in manpower planning, Applied Stochastic Models in Business and Industry, 24(1) (2008), 13-20. J. Haigh, Maintainability of manpower structures – counterexamples, results and conjectures, Journal of Applied Probability, 20 (1983), 700-705. J. Haigh, Further counterexamples to the monotonicity property of t-step maintainable structures, Journal of Applied Probability, 29(2) (1992), 441-447. J.M. Ivancevich, Human Resource Management (9th edition), McGraw-Hill, New York, 2003. J. Janssen and R. Manca, Salary cost evaluation by means of nonhomogeneous Semi-Markov processes, Stochastic models, 18(1) (2002), 7-23. G. Keenay, Convergence properties of age distributions, Journal of Applied Probability, 12 (1975), 684-691. S. McClean, E. Montgomery and F. Ugwuowo, Non-homogeneous continuous-time Markov and semi-Markov manpower models, Applied Stochastic Models and Data Analysis 13 (1998), 191-198. S.I. McClean and E.J. Montgomery, Estimation for Semi-Markov Manpower Models in a Stochastic Environment, in: Semi-Markov Models and Applications. Eds. J. Janssen and N. Limnios, Kluwer Academic Publishers, (2000), 219-227. S. McClean, Manpower planning models and their estimation, European Journal of Operational Research, 51 (1991), 179-187. A. Mehlmann, An approach to optimal recruitment and transition strategies for manpower systems using dynamic programming, Journal of the Operational Research Society, 31 (11) (1980), 1009- 1015. R.H. Meehan and S.B. Ahmed, Forecasting human resources requirements: a demand model, Human resource planning, 13(4) (1990), 297-307. R.W. Mondy, R.M. Noe and S.R. Premeaux, Human resource management (7e ed), Prentice Hall, Upper Saddle River, N.J., 1999. K. Nilikantan and B.G. Raghavendra,. Control aspects in proportionality Markov manpower systems, Applied Mathematical Modelling, 29 (2005), 85-116. K. Nilikantan and B.G. Raghavendra, Length of service and age characteristics in proportionality Markov manpower systems, IMA Journal of Management Mathematics, 19 (2008), 245-268. B. Parker and D. Caine, Holonic modeling: human resource planning and the two faces of Janus, International Journal of Manpower , 17(8) (1996), 30-45. S. Petrovic and G. Vanden Berghe, Preface Special Issue on Personnel Planning and Scheduling, Annals of Operations Research, 155(1) (2007), 1-4. C. Purkiss, Corporate manpower planning: a review of models, European Journal of Operational Research, 8 (1981), 315-323. P. Sales, The validity of the Markov chain model for a class of the civil service, The statistician, 20(1) (1971), 85-110. R.S. Schuler and S.E. Jackson, Linking competitive strategies with human resource management practices, The Academy of Management Executive, 3 (1987), 207-219. R.S. Schuler and S.E. Jackson, A quarter-century review of human resource management in the U.S.: The growth in importance of the international perspective, Management Revue, 16(1) (2005), 1-25.
E. Seneta, Non-negative Matrices. An Introduction to Theory and Applications. George Allen and Unwin, London, 1973. M. Sirvanci, Forecasting manpower losses by the use of renewal models, European Journal of Operational Research, 16(1) (1984), 13-18. D. Skulj, V. Vehovar and D. Stamfelj, The modelling of manpower by Markov chains – a case study of the Slovenian armed forces, Informatica , 32 (2008), 289-291. J. Sonnenfeld, M.A Peiperl and J.P. Kotter, Strategic determinants of managerial labour markets: a career systems view, Human Resource Management, 4 (1988), 369-388. A.R. Smith and D.J. Bartholomew, Manpower planning in the United Kingdom: An historical review, Journal of the operational research society, 39(3) (1988), 235-248. N. Tsantas, Stochastic analysis of a non-homogeneous Markov system, European Journal of Operational Research, 85 (1995), 670-685. N. Tsantas and A.C. Georgiou, Partial maintainability of a population model in a stochastic environment, Applied stochastic Models and Data Analysis, 13 (1998), 183-189. N. Tsantas, Ergodic behavior of a Markov chain model in a stochastic environment, Mathematical Methods of Operations Research, 54(1) (2001), 101-118. F.I. Ugwuowo and S.I. McClean, Modelling heterogeneity in a manpower system: a review, Applied stochastic models in business and industry, 16 (2000), 99-110. S. Vajda, Mathematics of manpower planning, John Wiley, London, 1978. P. Vassiliou, Asymptotic behavior of Markov system, Journal of Applied Probability, 19 (1982a), 851-857. P. Vassiliou, On the limiting behaviour of a non-homogeneous Markovian manpower model with independent Poisson input, Journal of Applied Probability, 19 (1982b), 433-438. P. Vassiliou, Cyclic behaviour and asymptotic stability of non-homogeneous Markov system, Journal of Applied Probability, 21 (1984), 315-325. P. Vassiliou and I. Gerontidis, Variances and covariances of the grade sizes in manpower systems, Journal of Applied Probability, 22(3) (1985), 583-597. P. Vassiliou, Asymptotic variability of nonhomogeneous Markov systems under cyclic behaviour, European Journal of Operational Research, 27(2) (1986), 215-228. P. Vassiliou, Non-homogeneous semi-Markov systems and maintainability of the state sizes, Journal of Applied Probability, 29 (1992), 519-534. P. Vassiliou, The evolution of the theory of non-homogeneous Markov systems, Applied stochastic Models and Data Analysis, 13 (1998), 159-176. P.J. Verbeek, Learning about Decision Support Systems: two case studies on Manpower Planning in an airline, Tinbergen Institute Research Series, (1991). C.J. Verhoeven, Instruments for corporate manpower planning. Applicability and applications, (PhD) Technische Universiteit Eindhoven, Nederland, 1980. D. Vloeberghs, Handboek Human Resource Management, Uitgeverij Acco, Leuven, 1988. D. Vloeberghs, Human Resource Management: Fundamenten en Perspectieven. Op weg naar de intelligente organisatie, Lannoo Campus Uitgeverij, Leuven, 2004. J.W. Walker, Linking human resource planning and strategic planning, Human Resource Planning, 1(1) (1978), 1-18. J.W. Walker, Human Resource Strategy. McGraw-Hill, New-York, 1992. J. Wang, A review of operations research applications in workforce planning and potential modelling of military training . DSTO Systems Science Laboratory. Australia, 2005.
J. Wijngaard, Aggregation in manpower planning, Management Science, 29 (1983), 14271435. M. Woodward, On forecasting grade, age and length of service distributions in manpower systems, Journal of Royal Statistical Society, 146 (1983), 74-84. P. Wright, B. Dunford and A. Snell, Human Resources and the Resource-Based view on the Firm, Journal of Management, 27 (2001), 701-721. V.S.S. Yadavalli and R. Natarajan, A semi-markov model of a manpower system, Stochastic analysis and applications, 19(6) (2001), 1077-1086. V.S.S. Yadavalli, R. Natarajan and S. Udayabhaskaran, Time dependent behavior of stochastic models of manpower system – impact of pressure on promotion, Stochastic analysis and applications, 20(4) (2002), 863-882. A. Young and G. Almond, Predicting Distributions of Staff, The Computer Journal, 3(4) (1961), 246-250. A. Young and P. Vassiliou, A non-linear model on the promotion of staff, J.R. Statist. Soc., A 137 (1974), 584-595. S.A. Zanakis and M.W. Maret, A Markov chain application to manpower supply planning, Journal of the Operational Research Society, 31 (1980), 1095-1102.
Reviewed by Prof. Dr. Ir. Greet Vanden Berghe, KaHo Sint-Lieven and K.U. Leuven.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 89-116
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 4
STOCHASTIC DIFFERENTIAL GAMES WITH STRUCTURAL UNCERTAINTIES: A PARADIGM FOR INTERACTIVE STOCHASTIC OPTIMIZATION

David W.K. Yeung1

SRS Consortium for Advanced Study in Dynamic Cooperative Games, Hong Kong Shue Yan University, and Center of Game Theory, Faculty of Applied Mathematics-Control Processes, St Petersburg State University
Abstract

An essential characteristic of decision making over time is that though the decision-maker may gather past and present information about his environment, inherently the future is not completely knowable and is therefore uncertain. An empirically meaningful theory of optimization must therefore incorporate uncertainty in an appropriate manner. Important forms of structural uncertainty follow from uncertainty in future payoff structures and in the future configurations of the state dynamics. This analysis presents a general class of stochastic differential games in which future payoff structures and configurations of the state dynamics are not known with certainty; only the probability distributions of the payoff structures and those of the configurations of the state dynamics are known. A mechanism for solving this class of games is derived and examples are provided. The analysis is also extended to cover the case of infinite horizon games. It is the first time that stochastic differential games with uncertain payoff structures and state dynamics configurations are presented. Novel sub-classes of differential games and control problems can be derived from the model. The results can also be applied to single decision-maker optimization theory. In sum, the analysis has widened the application of game theory by providing a paradigm for modeling game-theoretic situations over time with more content and realism.
Keywords: stochastic differential games, structural uncertainties, feedback strategies, Nash equilibrium.

1 E-mail address: [email protected]. Corresponding author: David W. K. Yeung, SRS Consortium for Advanced Study in Dynamic Cooperative Games, Hong Kong Shue Yan University, Braemar Hill, Hong Kong.
1. Introduction

A particularly complex and fruitful branch of interactive optimization over time is differential games. Classic contributions in this field include Isaacs (1965), Berkovitz (1964), Leitmann and Mon (1967) and Pontryagin (1966). Since institutions like firms, markets and governments function through human interactions over time, it also follows that "the social world is a differential game". An essential characteristic of decision making over time is that though the individual may gather past and present information about his environment, inherently the future is not completely knowable and is therefore uncertain (in the mathematical sense). There is no escaping from this fact, regardless of the resources devoted to obtaining data and to forecasting. An empirically meaningful theory of optimization must therefore incorporate uncertainty in an appropriate manner. Random considerations introduced as stochastic dynamics gave rise to stochastic differential games. Basar (1977a, 1977b, 1980) derived explicit solutions to stochastic linear quadratic differential games. Other examples of solvable stochastic differential games include Clemhout and Wan (1985), Kaitala (1993), Jørgensen and Yeung (1996), and Yeung (1998, 1999). Besides stochastic dynamics, another source of stochasticity comes from structural uncertainties that follow from uncertainty in future payoff structures and in the future configurations of the state dynamics. Yeung (2001 and 2003) introduced stochastic changes in the payoff structure to formulate randomly-furcating stochastic differential games. Petrosyan and Yeung (2006 and 2007) applied cooperative game theory to randomly-furcating games. In this analysis, we present a general class of stochastic differential games in which future payoff structures and configurations of the state dynamics are not known with certainty. Only the probability distributions of payoff structures and those of the configurations of the state dynamics are known. Interactive problems involving structural uncertainties include transboundary environmental management, technology research and development, natural resources extraction, capital accumulation and corporate investment. A mechanism for solving this class of games is derived and examples are provided. The analysis is also extended to cover the case of infinite horizon games. It is the first time that stochastic differential games with uncertain payoff structures and state dynamics configurations are presented. This class of stochastic differential games provides a paradigm for modeling game-theoretic situations over time with increased realism. Some novel subclasses of differential games and control problems can be derived from the model. For instance, replacing the stochastic dynamics with deterministic dynamics yields differential games with structural uncertainties in payoffs and dynamics. Removing uncertainty in future payoff structures, one can obtain stochastic differential games with uncertain configurations of future state dynamics. In sum, the analysis has widened the application of game theory to more complicated and realistic environments of uncertainty. The results can also be applied to single decision-maker optimization theory. In the case when the number of players equals one, a stochastic control problem with structural uncertainties will result. The organization of the analysis is as follows. Section 2 presents a stochastic differential game formulation with structural uncertainties in payoffs and dynamics.
The characterization of Nash equilibria is given in Section 3. An application in resource extraction is provided in Section 4. The analysis is extended to cover the case of infinite horizon games in Section 5. Concluding remarks are provided in Section 6.
2. Game Formulation with Structural Uncertainties in Payoffs and Dynamics

Consider a class of stochastic differential games in which the game horizon is $[t_0, T]$. The game horizon is divided into $\ell + 1$ time intervals: $[t_0, t_1)$, $[t_1, t_2)$, $\ldots$, $[t_{\ell-1}, t_\ell)$, $[t_\ell, t_{\ell+1}] \equiv [t_\ell, T]$. The players' payoffs and state dynamics are affected by a series of random events. In particular, $\theta^k$, for $k \in \{1, 2, \ldots, \ell\}$, are independently distributed random variables with range $\{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ and corresponding probabilities $\{\lambda^k_1, \lambda^k_2, \ldots, \lambda^k_{\eta_k}\}$, which will be realized in the time interval $[t_k, t_{k+1})$. At time $t_0$, $\theta^0_{a_0} \equiv \theta^0_1$ is known to prevail in the time interval $[t_0, t_1)$. The payoff of player $i$ in the period $[t_0, t_1)$ is $g^i[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]$, for $i \in N$, and the state dynamics is:

$dx(s) = f[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\,ds + \sigma[\theta^0_{a_0}; s, x(s)]\,dz(s)$, $x(t_0) = x_0$, for $s \in [t_0, t_1)$,  (1)

where $x(s) \in X \subset R^m$ denotes the state variables of the game, and $u_i \in U^i$ is the control of player $i$, for $i \in N$.

If the event $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ is realized in the time interval $[t_k, t_{k+1})$, for $k \in \{1, 2, \ldots, \ell\}$, the payoff of player $i$ in this period of time becomes $g^i[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]$, for $i \in N$, and the state dynamics:

$dx(s) = f[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\,ds + \sigma[\theta^k_{a_k}; s, x(s)]\,dz(s)$,  (2)

for $s \in [t_k, t_{k+1})$, where $\sigma[\theta^k_{a_k}; s, x(s)]$ is an $m \times \Theta$ matrix, $z(s)$ is a $\Theta$-dimensional Wiener process and the initial state $x_0$ is given. Let $\Omega[\theta^k_{a_k}; s, x(s)] = \sigma[\theta^k_{a_k}; s, x(s)]\,\sigma[\theta^k_{a_k}; s, x(s)]^T$ denote the covariance matrix, with its element in row $h$ and column $\zeta$ denoted by $\Omega^{h\zeta}[\theta^k_{a_k}; s, x(s)]$.

Moreover, at time $T$, if the random variable $\theta^T_{a_T}$ with range $\{\theta^T_1, \theta^T_2, \ldots, \theta^T_{\eta_T}\}$ and corresponding probabilities $\{\lambda^T_1, \lambda^T_2, \ldots, \lambda^T_{\eta_T}\}$ occurs, the terminal payoff of player $i$ becomes $q^i[\theta^T_{a_T}, x(T)]$, for $i \in N$. Player $i$, $i \in N$, then seeks to maximize:

$E_{t_0} \Big\{ \int_{t_0}^{t_1} g^i[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\big[-\int_{t_0}^{s} r(y)\,dy\big]\,ds$
$\quad + \sum_{k=1}^{\ell} \sum_{a_k=1}^{\eta_k} \lambda^k_{a_k} \int_{t_k}^{t_{k+1}} g^i[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\big[-\int_{t_0}^{s} r(y)\,dy\big]\,ds$
$\quad + \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x(T)] \exp\big[-\int_{t_0}^{T} r(y)\,dy\big] \Big\}$,  (3)

subject to the furcating state dynamics (1) and (2).

The game (1)-(3) allows random shocks in its stock dynamics, and future stochastic changes in payoffs and in the structure of the stock dynamics. Since future payoffs and the structures of the state dynamics are not known with certainty, the term "randomly furcating" is introduced to emphasize a particular way of analyzing a situation in which the payoff and dynamics structures change at any future time instant according to (known) probability distributions defined in terms of branching stochastic processes. This new approach widens the application of differential game theory to problems where future environments are not known with certainty. Important cases abound in finance and economics: in particular, the (real) returns to major asset classes such as equities and bonds over time are subject to significant uncertainty from stochastic shocks in economic activity, government policy, and unanticipated effects of strategic behavior in imperfectly competitive markets. Finally, this approach also represents a new way of modeling dynamic game situations under uncertainty.
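Although the chapter develops the model analytically, a simulation sketch may clarify the furcating structure of (1)-(2): on each interval a regime is drawn and the drift and diffusion switch accordingly. All functions and numbers below are hypothetical, and a fixed feedback control is assumed rather than an equilibrium one:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar example: on each interval [t_k, t_{k+1}) one of two
# regimes theta in {0, 1} is realized, switching the drift and volatility.
t_knots = [0.0, 1.0, 2.0, 3.0]                # t_0, t_1, t_2, T
drift = [lambda x, u: 0.5 * u - 0.1 * x,      # f under theta = 0
         lambda x, u: 0.3 * u - 0.2 * x]      # f under theta = 1
sigma = [0.10, 0.25]                          # diffusion under each regime
lam = [0.6, 0.4]                              # regime probabilities lambda
u_rule = lambda x: 0.5 * x                    # assumed fixed feedback control

dt, x = 0.01, 1.0
for k in range(len(t_knots) - 1):
    theta = 0 if k == 0 else rng.choice(2, p=lam)   # theta^0 is known at t_0
    steps = int(round((t_knots[k + 1] - t_knots[k]) / dt))
    for _ in range(steps):                           # Euler-Maruyama step
        u = u_rule(x)
        x += drift[theta](x, u) * dt + sigma[theta] * np.sqrt(dt) * rng.normal()
    print(f"interval {k}: theta = {theta}, x(t_{k + 1}) = {x:.4f}")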
3. Characterization of Nash Equilibria

In an environment with uncertainty, feedback strategies have to be used. A feedback strategy is a decision rule $u_i = \phi^k_i(\theta^k_{a_k}; s, x)$, for $i \in N$, contingent upon the realization of $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$, such that it is continuous in $t$ and uniformly Lipschitz in $x$ for each $t$. We denote the set of all feedback strategies for player $i$ by $U^i_k$, for $i \in N$.

To obtain a feedback Nash solution for the game (1)-(3), we first consider the solution for the subgame in the last time interval, that is $[t_\ell, T]$. For the case where $\theta^\ell_{a_\ell} \in \{\theta^\ell_1, \theta^\ell_2, \ldots, \theta^\ell_{\eta_\ell}\}$ occurs in the time interval $[t_\ell, T]$ and $x(t_\ell) = x_\ell$ at time $t_\ell$, the subgame in question becomes an $n$-person game with duration $[t_\ell, T]$, in which player $i$ maximizes the expected payoff:

$E_{t_\ell} \Big\{ \int_{t_\ell}^{T} g^i[\theta^\ell_{a_\ell}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\big[-\int_{t_\ell}^{s} r(y)\,dy\big]\,ds + \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x(T)] \exp\big[-\int_{t_\ell}^{T} r(y)\,dy\big] \Big\}$, $i \in N$,  (4)

subject to

$dx(s) = f[\theta^\ell_{a_\ell}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\,ds + \sigma[\theta^\ell_{a_\ell}; s, x(s)]\,dz(s)$, $x(t_\ell) = x_\ell$, for $s \in [t_\ell, T]$.  (5)

For the $n$-person stochastic differential game (4)-(5), an $n$-tuple of feedback strategies $\{\phi^{\ell*}_i(\theta^\ell_{a_\ell}; s, x) \in U^i$, for $s \in [t_\ell, T]$ and $i \in N\}$ constitutes a Nash equilibrium solution if there exist functionals $V^{(\ell)i}(\theta^\ell_{a_\ell}; t, x)$ defined on $[t_\ell, T] \times R^m$ and satisfying the following relations for each $i \in N$:

$V^{(\ell)i}(\theta^\ell_{a_\ell}; T, x) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x] \exp\big[-\int_{t_\ell}^{T} r(y)\,dy\big]$,

$V^{(\ell)i}(\theta^\ell_{a_\ell}; t, x) = E_{t} \Big\{ \int_{t}^{T} g^i[\theta^\ell_{a_\ell}; s, x^*(s), \phi^{\ell*}_1(\theta^\ell_{a_\ell}; s, x^*(s)), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; s, x^*(s)), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; s, x^*(s))] \exp\big[-\int_{t_\ell}^{s} r(y)\,dy\big]\,ds + \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x^*(T)] \exp\big[-\int_{t_\ell}^{T} r(y)\,dy\big] \Big\}$

$\ge E_{t} \Big\{ \int_{t}^{T} g^i[\theta^\ell_{a_\ell}; s, x^{i}(s), \phi^{\ell*}_1(\theta^\ell_{a_\ell}; s, x^{i}(s)), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; s, x^{i}(s)), \ldots, \phi^{\ell*}_{i-1}(\theta^\ell_{a_\ell}; s, x^{i}(s)), u_i(s, x^{i}(s)), \phi^{\ell*}_{i+1}(\theta^\ell_{a_\ell}; s, x^{i}(s)), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; s, x^{i}(s))] \exp\big[-\int_{t_\ell}^{s} r(y)\,dy\big]\,ds + \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x^{i}(T)] \exp\big[-\int_{t_\ell}^{T} r(y)\,dy\big] \Big\}$, for all $u_i(s, x) \in U^i$,  (6)

where on the interval $[t_\ell, T]$,

$dx^*(s) = f[\theta^\ell_{a_\ell}; s, x^*(s), \phi^{\ell*}_1(\theta^\ell_{a_\ell}; s, x^*(s)), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; s, x^*(s)), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; s, x^*(s))]\,ds + \sigma[\theta^\ell_{a_\ell}; s, x^*(s)]\,dz(s)$, $x^*(t_\ell) = x_\ell \in X$;

and

$dx^{i}(s) = f[\theta^\ell_{a_\ell}; s, x^{i}(s), \phi^{\ell*}_1(\theta^\ell_{a_\ell}; s, x^{i}(s)), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; s, x^{i}(s)), \ldots, \phi^{\ell*}_{i-1}(\theta^\ell_{a_\ell}; s, x^{i}(s)), u_i(s, x^{i}(s)), \phi^{\ell*}_{i+1}(\theta^\ell_{a_\ell}; s, x^{i}(s)), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; s, x^{i}(s))]\,ds + \sigma[\theta^\ell_{a_\ell}; s, x^{i}(s)]\,dz(s)$, $x^{i}(t_\ell) = x_\ell \in X$; for $i \in N$.
Invoking the principle of optimality from Fleming (1969) and Fleming and Rishel (1975), we obtain the conditions characterizing a feedback solution of the game (4)-(5).
Lemma 3.1. An $n$-tuple of feedback strategies $\{\phi^{\ell*}_i(\theta^\ell_{a_\ell}; t, x)$, for $t \in [t_\ell, T]$ and $i \in N\}$ constitutes a Nash equilibrium solution to the game (4)-(5) if there exist suitably smooth functions $V^{(\ell)i}(\theta^\ell_{a_\ell}; t, x) : [t_\ell, T] \times R^m \to R$, $i \in N$, satisfying the following set of partial differential equations:

$-V^{(\ell)i}_t(\theta^\ell_{a_\ell}; t, x) - \frac{1}{2} \sum_{h,\zeta=1}^{m} \Omega^{h\zeta}(\theta^\ell_{a_\ell}; t, x)\, V^{(\ell)i}_{x^h x^\zeta}(\theta^\ell_{a_\ell}; t, x)$
$= \max_{u_i} \Big\{ g^i[\theta^\ell_{a_\ell}; t, x, \phi^{\ell*}_1(\theta^\ell_{a_\ell}; t, x), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; t, x), \ldots, \phi^{\ell*}_{i-1}(\theta^\ell_{a_\ell}; t, x), u_i(t, x), \phi^{\ell*}_{i+1}(\theta^\ell_{a_\ell}; t, x), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; t, x)] \exp\big[-\int_{t_\ell}^{t} r(y)\,dy\big]$
$\quad + V^{(\ell)i}_x(\theta^\ell_{a_\ell}; t, x)\, f[\theta^\ell_{a_\ell}; t, x, \phi^{\ell*}_1(\theta^\ell_{a_\ell}; t, x), \phi^{\ell*}_2(\theta^\ell_{a_\ell}; t, x), \ldots, \phi^{\ell*}_{i-1}(\theta^\ell_{a_\ell}; t, x), u_i(t, x), \phi^{\ell*}_{i+1}(\theta^\ell_{a_\ell}; t, x), \ldots, \phi^{\ell*}_n(\theta^\ell_{a_\ell}; t, x)] \Big\}$,

$V^{(\ell)i}(\theta^\ell_{a_\ell}; T, x) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i(\theta^T_{a_T}, x) \exp\big[-\int_{t_\ell}^{T} r(y)\,dy\big]$, $i \in N$.
Proof. This result follows readily from the optimality conditions in stochastic control as derived by Fleming (1969) and Fleming and Rishel (1975), and from the definition of Nash equilibrium. ■
Lemma 3.1 characterizes the players' value functions during the time interval $[t_\ell, T]$ in the case where $\theta^\ell_{a_\ell} \in \{\theta^\ell_1, \theta^\ell_2, \ldots, \theta^\ell_{\eta_\ell}\}$ has occurred. The value functions $V^{(\ell)i}(\theta^\ell_1; t, x)$, $V^{(\ell)i}(\theta^\ell_2; t, x)$, $\ldots$, $V^{(\ell)i}(\theta^\ell_{\eta_\ell}; t, x)$ for the various realizations of the random variable $\theta^\ell_{a_\ell} \in \{\theta^\ell_1, \theta^\ell_2, \ldots, \theta^\ell_{\eta_\ell}\}$ can be derived accordingly.

In order to formulate the subgame in the second last time interval $[t_{\ell-1}, t_\ell)$, it is necessary to identify the terminal payoffs at time $t_\ell$. To do this, first note that if $\theta^\ell_{a_\ell} \in \{\theta^\ell_1, \theta^\ell_2, \ldots, \theta^\ell_{\eta_\ell}\}$ occurs at time $t_\ell$, the value function of player $i$ at $t_\ell$ is $V^{(\ell)i}(\theta^\ell_{a_\ell}; t_\ell, x)$. The expected terminal payoff for player $i$ at time $t_\ell$ can be evaluated as:

$\sum_{a_\ell=1}^{\eta_\ell} \lambda^\ell_{a_\ell}\, V^{(\ell)i}(\theta^\ell_{a_\ell}; t_\ell, x)$.  (7)

For the case where $\theta^{\ell-1}_{a_{\ell-1}} \in \{\theta^{\ell-1}_1, \theta^{\ell-1}_2, \ldots, \theta^{\ell-1}_{\eta_{\ell-1}}\}$ occurs in the time interval $[t_{\ell-1}, t_\ell)$ and $x(t_{\ell-1}) = x_{\ell-1}$ at time $t_{\ell-1}$, the subgame in question becomes an $n$-person game with duration $[t_{\ell-1}, t_\ell)$, in which player $i$ maximizes the expected payoff:

$E_{t_{\ell-1}} \Big\{ \int_{t_{\ell-1}}^{t_\ell} g^i[\theta^{\ell-1}_{a_{\ell-1}}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\big[-\int_{t_{\ell-1}}^{s} r(y)\,dy\big]\,ds + \sum_{a_\ell=1}^{\eta_\ell} \lambda^\ell_{a_\ell}\, V^{(\ell)i}(\theta^\ell_{a_\ell}; t_\ell, x) \exp\big[-\int_{t_{\ell-1}}^{t_\ell} r(y)\,dy\big] \Big\}$, $i \in N$,  (8)

subject to

$dx(s) = f[\theta^{\ell-1}_{a_{\ell-1}}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\,ds + \sigma[\theta^{\ell-1}_{a_{\ell-1}}; s, x(s)]\,dz(s)$, $x(t_{\ell-1}) = x_{\ell-1}$, for $s \in [t_{\ell-1}, t_\ell)$.  (9)
Applying Lemma 3.1, one can characterize the players' value functions $V^{(\theta-1)i}(\theta^{\theta-1}_1; t, x), V^{(\theta-1)i}(\theta^{\theta-1}_2; t, x), \ldots, V^{(\theta-1)i}(\theta^{\theta-1}_{\eta_{\theta-1}}; t, x)$ for the subgame interval $[t_{\theta-1}, t_\theta)$. For the preceding subgame in the interval $[t_{\theta-2}, t_{\theta-1})$, the expected terminal payoff for player $i$ at time $t_{\theta-1}$ can be evaluated as:

$$\sum_{a_{\theta-1}=1}^{\eta_{\theta-1}} \lambda^{\theta-1}_{a_{\theta-1}}\, V^{(\theta-1)i}(\theta^{\theta-1}_{a_{\theta-1}}; t_{\theta-1}, x).$$
Following the above analysis, the expected terminal payoff for player $i$ at time $t_{k+1}$ for the subgame in the interval $[t_k, t_{k+1})$, for $k \in \{0, 1, 2, \ldots, \theta-1\}$, can be evaluated as:

$$\sum_{a_{k+1}=1}^{\eta_{k+1}} \lambda^{k+1}_{a_{k+1}}\, V^{(k+1)i}(\theta^{k+1}_{a_{k+1}}; t_{k+1}, x), \quad \text{for } i \in N. \qquad (10)$$

For the case where $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ occurs in time interval $[t_k, t_{k+1})$ and $x(t_k) = x_k$ at time $t_k$, for $k \in \{0, 1, 2, \ldots, \theta-1\}$, the subgame in question becomes an $n$-person game with duration $[t_k, t_{k+1})$, in which player $i$ maximizes the expected payoff:

$$E_{t_k}\Bigg\{ \int_{t_k}^{t_{k+1}} g^i[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\Big[-\int_{t_k}^{s} r(y)\,dy\Big] ds + \sum_{a_{k+1}=1}^{\eta_{k+1}} \lambda^{k+1}_{a_{k+1}}\, V^{(k+1)i}(\theta^{k+1}_{a_{k+1}}; t_{k+1}, x(t_{k+1})) \exp\Big[-\int_{t_k}^{t_{k+1}} r(y)\,dy\Big] \Bigg\}, \quad i \in N, \qquad (11)$$

subject to

$$dx(s) = f[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\,ds + \sigma[\theta^k_{a_k}; s, x(s)]\,dz(s), \quad x(t_k) = x_k \in X, \quad \text{for } s \in [t_k, t_{k+1}). \qquad (12)$$
Decomposing the game (1)-(3) into $\theta+1$ subgames as formulated in (4)-(5) and (11)-(12) allows the solution of the game to be characterized as follows.

Theorem 3.1. A set of feedback strategies $\{\phi^{k*}_i(\theta^k_{a_k}; t, x)$, for $t \in [t_k, t_{k+1})$, $i \in N$, $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$, $k \in \{0, 1, 2, \ldots, \theta\}\}$ contingent upon $\theta^k_{a_k}$ constitutes a Nash equilibrium solution to the game (1)-(3) if there exist suitably smooth functions $V^{(k)i}(\theta^k_{a_k}; t, x): [t_k, t_{k+1}) \times R^m \to R$, $i \in N$, $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$, $k \in \{0, 1, 2, \ldots, \theta\}$, satisfying the following set of partial differential equations:

$$-V^{(\theta)i}_t(\theta^\theta_{a_\theta}; t, x) - \frac{1}{2} \sum_{h,\zeta=1}^{m} \Omega^{h\zeta}(\theta^\theta_{a_\theta}; t, x)\, V^{(\theta)i}_{x^h x^\zeta}(\theta^\theta_{a_\theta}; t, x)$$
$$= \max_{u_i}\Big\{ g^i[\theta^\theta_{a_\theta}; t, x, \phi^{\theta*}_1(\theta^\theta_{a_\theta}; t, x), \ldots, \phi^{\theta*}_{i-1}(\theta^\theta_{a_\theta}; t, x), u_i(t, x), \phi^{\theta*}_{i+1}(\theta^\theta_{a_\theta}; t, x), \ldots, \phi^{\theta*}_n(\theta^\theta_{a_\theta}; t, x)] \exp\Big[-\int_{t_\theta}^{t} r(y)\,dy\Big] + V^{(\theta)i}_x(\theta^\theta_{a_\theta}; t, x)\, f[\theta^\theta_{a_\theta}; t, x, \phi^{\theta*}_1(\theta^\theta_{a_\theta}; t, x), \ldots, u_i(t, x), \ldots, \phi^{\theta*}_n(\theta^\theta_{a_\theta}; t, x)] \Big\},$$
$$V^{(\theta)i}(\theta^\theta_{a_\theta}; T, x) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i(\theta^T_{a_T}, x) \exp\Big[-\int_{t_\theta}^{T} r(y)\,dy\Big], \quad \text{for } i \in N \text{ and } \theta^\theta_{a_\theta} \in \{\theta^\theta_1, \theta^\theta_2, \ldots, \theta^\theta_{\eta_\theta}\};$$

and

$$-V^{(k)i}_t(\theta^k_{a_k}; t, x) - \frac{1}{2} \sum_{h,\zeta=1}^{m} \Omega^{h\zeta}(\theta^k_{a_k}; t, x)\, V^{(k)i}_{x^h x^\zeta}(\theta^k_{a_k}; t, x)$$
$$= \max_{u_i}\Big\{ g^i[\theta^k_{a_k}; t, x, \phi^{k*}_1(\theta^k_{a_k}; t, x), \ldots, \phi^{k*}_{i-1}(\theta^k_{a_k}; t, x), u_i(t, x), \phi^{k*}_{i+1}(\theta^k_{a_k}; t, x), \ldots, \phi^{k*}_n(\theta^k_{a_k}; t, x)] \exp\Big[-\int_{t_k}^{t} r(y)\,dy\Big] + V^{(k)i}_x(\theta^k_{a_k}; t, x)\, f[\theta^k_{a_k}; t, x, \phi^{k*}_1(\theta^k_{a_k}; t, x), \ldots, u_i(t, x), \ldots, \phi^{k*}_n(\theta^k_{a_k}; t, x)] \Big\},$$
$$V^{(k)i}(\theta^k_{a_k}; t_{k+1}, x) = \sum_{a_{k+1}=1}^{\eta_{k+1}} \lambda^{k+1}_{a_{k+1}}\, V^{(k+1)i}(\theta^{k+1}_{a_{k+1}}; t_{k+1}, x) \exp\Big[-\int_{t_k}^{t_{k+1}} r(y)\,dy\Big],$$
for $i \in N$, $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ and $k = 0, 1, 2, \ldots, \theta-1$.

Proof. As demonstrated above, the game (1)-(3) can be decomposed into $\theta+1$ subgames as formulated in (4)-(5) and (11)-(12). Invoking Lemma 3.1, these results follow from the optimality conditions in stochastic control as derived by Fleming (1969) and Fleming and Rishel (1975), and from the definition of Nash equilibrium for each relevant subgame. ■
Theorem 3.1 suggests that a set of sufficiently informed paths, in the form of strategies contingent upon information regarding the realization of the random variables $\theta^k_{a_k}$, for $k \in \{1, 2, \ldots, \theta\}$, forms an essential element in the feedback solution of the game (1)-(3). Information is produced in temporal sequence, and in each interval it is sufficient (in the statistical sense) for the execution of feedback strategies within that interval of time. This result captures some stylized real-life facts. For instance, market news rapidly disseminates over the trading day until it becomes market information that is commonly known and available to all individuals for the framing of optimal (feedback) investment strategies. At the beginning of the next day more news arrives, after which the process repeats itself. The informational sufficiency and recursive nature (in the sense of dynamic programming) of this sequential realization of payoffs and dynamics render the analysis operational. Given any specification of the stochastic process, it is possible to simulate a set of predicted paths, which in turn would supply sufficient informational support for the execution of optimal feedback strategies within the corresponding time interval, as the sketch below illustrates.
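To make the simulation point concrete, the following minimal sketch generates one predicted state path under a given realization of the regime sequence, using an Euler-Maruyama discretization; the drift, diffusion and regime values are hypothetical stand-ins for $f$, $\sigma$ and $\theta^k_{a_k}$, chosen only for illustration.

```python
# A minimal sketch of simulating a predicted state path under a realized
# sequence of structural regimes (all functional forms are hypothetical).
import math
import random

def simulate_path(regimes, f, sigma, x0, t_grid, dt=0.01):
    """Euler-Maruyama simulation of dx = f(theta; x) ds + sigma(theta; x) dz,
    where the regime theta switches at the block times in t_grid
    (len(regimes) must equal len(t_grid) - 1)."""
    x, path, t = x0, [x0], t_grid[0]
    for k, theta in enumerate(regimes):
        while t < t_grid[k + 1]:
            dz = random.gauss(0.0, math.sqrt(dt))
            x += f(theta, x) * dt + sigma(theta, x) * dz
            x = max(x, 0.0)          # keep the state non-negative
            t += dt
        path.append(x)               # record the state at each block end
    return path

# Hypothetical two-regime example: regime 1 grows faster than regime 2.
f = lambda th, x: (1.0 if th == 1 else 0.5) * math.sqrt(x) - 0.1 * x
sigma = lambda th, x: 0.05 * x
print(simulate_path([1, 2, 1], f, sigma, x0=100.0, t_grid=[0, 1, 2, 3]))
```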
Finally, more complicated stochastic processes can be adopted in the analysis. For instance, a series of random events $\theta^k_{a_k}$, for $k = 1, 2, \ldots, \theta$, stemming from a randomly branching process can be presented as below.

$\theta^1_{a_1} \in \{\theta^1_1, \theta^1_2, \ldots, \theta^1_{\eta_1}\}$ with corresponding probabilities $\{\lambda^1_1, \lambda^1_2, \ldots, \lambda^1_{\eta_1}\}$.

Given that $\theta^1_{a_1}$ is realized in time interval $[t_1, t_2)$, for $a_1 \in \{1, 2, \ldots, \eta_1\}$, $\theta^2_{a_2} \in \{\theta^{2[(1,a_1)]}_1, \theta^{2[(1,a_1)]}_2, \ldots, \theta^{2[(1,a_1)]}_{\eta_{2[(1,a_1)]}}\}$ would be realized, with the corresponding probabilities $\{\lambda^{2[(1,a_1)]}_1, \lambda^{2[(1,a_1)]}_2, \ldots, \lambda^{2[(1,a_1)]}_{\eta_{2[(1,a_1)]}}\}$.

Given that $\theta^1_{a_1}$ is realized in time interval $[t_1, t_2)$ and $\theta^{2[(1,a_1)]}_{a_2}$ is realized in time interval $[t_2, t_3)$, for $a_1 \in \{1, 2, \ldots, \eta_1\}$ and $a_2 \in \{1, 2, \ldots, \eta_{2[(1,a_1)]}\}$, $\theta^3_{a_3} \in \{\theta^{3[(1,a_1)(2,a_2)]}_1, \theta^{3[(1,a_1)(2,a_2)]}_2, \ldots, \theta^{3[(1,a_1)(2,a_2)]}_{\eta_{3[(1,a_1)(2,a_2)]}}\}$ would be realized, with the corresponding probabilities $\{\lambda^{3[(1,a_1)(2,a_2)]}_1, \lambda^{3[(1,a_1)(2,a_2)]}_2, \ldots, \lambda^{3[(1,a_1)(2,a_2)]}_{\eta_{3[(1,a_1)(2,a_2)]}}\}$.

In general, given that $\theta^1_{a_1}$ is realized in time interval $[t_1, t_2)$, $\theta^{2[(1,a_1)]}_{a_2}$ is realized in time interval $[t_2, t_3)$, $\ldots$, and $\theta^{(k-1)[(1,a_1)(2,a_2)\cdots(k-2,a_{k-2})]}_{a_{k-1}}$ is realized in time interval $[t_{k-1}, t_k)$, for $a_1 \in \{1, 2, \ldots, \eta_1\}$, $a_2 \in \{1, 2, \ldots, \eta_{2[(1,a_1)]}\}$, $\ldots$, $a_{k-1} \in \{1, 2, \ldots, \eta_{(k-1)[(1,a_1)(2,a_2)\cdots(k-2,a_{k-2})]}\}$, then $\theta^k_{a_k} \in \{\theta^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_1, \theta^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_2, \ldots, \theta^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_{\eta_{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}}\}$ would be realized, with the corresponding probabilities $\{\lambda^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_1, \lambda^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_2, \ldots, \lambda^{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}_{\eta_{k[(1,a_1)(2,a_2)\cdots(k-1,a_{k-1})]}}\}$, for $k = 1, 2, \ldots, \theta$.
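A realization of such a branching sequence is straightforward to sample stage by stage, since the support and probabilities at stage $k$ depend only on the realized branch indices $(a_1, \ldots, a_{k-1})$. The sketch below uses a hypothetical history-dependent branching rule purely for illustration.

```python
# A minimal sketch of sampling one realization of the randomly branching
# process: the support and probabilities of theta_k depend on the whole
# history (a_1, ..., a_{k-1}).  The branching rule here is hypothetical.
import random

def branch(history):
    """Return (values, probabilities) for the next stage given the realized
    branch indices so far; any history-dependent rule can be plugged in."""
    shift = 0.1 * sum(history)                 # hypothetical dependence
    return [1.0 + shift, 2.0 + shift], [0.6, 0.4]

def sample_regime_sequence(n_stages, seed=0):
    rng = random.Random(seed)
    history, thetas = [], []
    for _ in range(n_stages):
        values, probs = branch(history)
        a = rng.choices(range(len(values)), weights=probs)[0]
        history.append(a + 1)                  # record branch index a_k
        thetas.append(values[a])
    return history, thetas

print(sample_regime_sequence(4))
```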
4. An Application in Resource Extraction

Consider an economy endowed with a single renewable resource, with $n$ resource extractors (firms). Let $u_i(s) \in U_i$ denote the rate of resource extraction of firm $i$ and $x(s)$ the size of the resource stock at time $s$. In particular, we have $U_i = R^+$ for $x > 0$, and $U_i = \{0\}$ for $x = 0$.

The extraction cost for firm $i \in N$ depends on the quantity of resource extracted $u_i(s)$, the resource stock size $x(s)$, and a parameter $c$. In particular, the extraction cost can be specified as follows:

$$C_i = \frac{c}{x(s)^{1/2}}\, u_i(s). \qquad (13)$$

This specification implies that the cost per unit of resource extracted by firm $i$, $c\, x(s)^{-1/2}$, decreases when $x(s)$ increases. The above cost structure was also adopted by Jørgensen and Yeung (1996). A decreasing unit cost follows from two assumptions: (i) the cost of extraction is proportional to extraction effort, and (ii) the amount of resource extracted, seen as the output of a production function of two inputs (effort and stock level), is increasing in both inputs (cf. Clark 1990). The market price of the resource depends on the total amount extracted and supplied to the market. The price-output relationship at time $s$ is given by the following downward-sloping inverse demand curve:
$$P(s) = \theta^k_{a_k}\, Q(s)^{-1/2}, \quad \text{for } s \in [t_k, t_{k+1}), \qquad (14)$$

where $Q(s) = \sum_{j=1}^{n} u_j(s)$ is the total amount of resource extracted and marketed at time $s$, and $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ with corresponding probabilities of occurrence being $\lambda^k_{a_k} \in \{\lambda^k_1, \lambda^k_2, \ldots, \lambda^k_{\eta_k}\}$.

The state of the game evolves according to:
$$dx(s) = \Big[\varepsilon^k_{a_k}\, x(s)^{1/2} - b^k_{a_k}\, x(s) - \sum_{j=1}^{n} u_j(s)\Big] ds + \sigma\, x(s)\, dz(s), \qquad (15)$$

in the time interval $[t_k, t_{k+1})$ if $\theta^k_{a_k}$ occurs, where $\sigma$ is a positive constant and $\varepsilon^k_{a_k} \in \{\varepsilon^k_1, \varepsilon^k_2, \ldots, \varepsilon^k_{\eta_k}\}$ and $b^k_{a_k} \in \{b^k_1, b^k_2, \ldots, b^k_{\eta_k}\}$. Note that when $\theta^k_{a_k}$ occurs, $\varepsilon^k_{a_k}$ and $b^k_{a_k}$ would also occur. Two types of stochasticity enter the stock dynamics. First, the dynamics itself is governed by a stochastic differential equation. Second, the dynamics is subject to random structural changes through $\varepsilon^k_{a_k}$ and $b^k_{a_k}$. In the absence of human harvesting, the resource stock will grow according to the dynamics:

$$dx(s) = \big[\varepsilon^k_{a_k}\, x(s)^{1/2} - b^k_{a_k}\, x(s)\big] ds + \sigma\, x(s)\, dz(s).$$

The structure of the natural dynamics is subject to random evolution brought about by phenomena like climate change. It is also known that at time $t_0$, $\theta^0_{a_0} = \theta^0_1$, $\varepsilon^0_{a_0} = \varepsilon^0_1$, $b^0_{a_0} = b^0_1$ and $x(t_0) = x_0$. At terminal time $T$, firm $i$ would receive a terminal payoff $q^i_{a_T}\, x^{1/2}$, where $q^i_{a_T}$ is a random variable with range $\{q^i_1, q^i_2, \ldots, q^i_{\eta_T}\}$ and corresponding probabilities $\{\lambda^T_1, \lambda^T_2, \ldots, \lambda^T_{\eta_T}\}$. The discount rate is $r$, a constant. Firm $i$ would seek to maximize the expected present value of profits:
$$E_{t_0}\Bigg\{ \int_{t_0}^{t_1} \Bigg[\theta^0_{a_0}\, u_i(s) \Big(\sum_{j=1}^{n} u_j(s)\Big)^{-1/2} - \frac{c}{x(s)^{1/2}}\, u_i(s)\Bigg] \exp[-r(s - t_0)]\, ds$$
$$+ \sum_{k=1}^{\theta} \sum_{a_k=1}^{\eta_k} \lambda^k_{a_k} \int_{t_k}^{t_{k+1}} \Bigg[\theta^k_{a_k}\, u_i(s) \Big(\sum_{j=1}^{n} u_j(s)\Big)^{-1/2} - \frac{c}{x(s)^{1/2}}\, u_i(s)\Bigg] \exp[-r(s - t_0)]\, ds$$
$$+ \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}\, x(T)^{1/2} \exp[-r(T - t_0)] \Bigg\},$$

subject to

$$dx(s) = \Big[\varepsilon^0_{a_0}\, x(s)^{1/2} - b^0_{a_0}\, x(s) - \sum_{j=1}^{n} u_j(s)\Big] ds + \sigma\, x(s)\, dz(s), \quad x(t_0) = x_0,$$

in the time interval $[t_0, t_1)$, and

$$dx(s) = \Big[\varepsilon^k_{a_k}\, x(s)^{1/2} - b^k_{a_k}\, x(s) - \sum_{j=1}^{n} u_j(s)\Big] ds + \sigma\, x(s)\, dz(s), \qquad (16)$$

in the time interval $[t_k, t_{k+1})$ if $\theta^k_{a_k}$ occurs.
Invoking Theorem 3.1, the conditions characterizing the solution in the subgame $[t_\theta, T]$ yield:

$$-V^{(\theta)i}_t(\theta_a; t, x) - \frac{1}{2}\sigma^2 x^2\, V^{(\theta)i}_{xx}(\theta_a; t, x)$$
$$= \max_{u_i}\Bigg\{ \Bigg[\theta_a\, u_i(t, x) \Big(\sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^{\theta*}_j(\theta_a; t, x) + u_i(t, x)\Big)^{-1/2} - \frac{c}{x^{1/2}}\, u_i(t, x)\Bigg] \exp[-r(t - t_\theta)]$$
$$+\; V^{(\theta)i}_x(\theta_a; t, x) \Bigg[\varepsilon_a\, x^{1/2} - b_a\, x - \sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^{\theta*}_j(\theta_a; t, x) - u_i(t, x)\Bigg] \Bigg\}, \qquad (17)$$

$$V^{(\theta)i}(\theta_a; T, x) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}\, x^{1/2} \exp[-r(T - t_\theta)],$$

for $i \in N$, $\theta_a \in \{\theta_1, \theta_2, \ldots, \theta_{\eta_\theta}\}$, $\varepsilon_a \in \{\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{\eta_\theta}\}$ and $b_a \in \{b_1, b_2, \ldots, b_{\eta_\theta}\}$, where the stage superscript $\theta$ on $\theta^\theta_{a_\theta}$, $\varepsilon^\theta_{a_\theta}$ and $b^\theta_{a_\theta}$ is suppressed for notational convenience.
Applying the maximization operator in (17) for player $i$ yields the condition for a maximum as:

$$\theta_a\, \frac{\displaystyle\sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^{\theta*}_j(\theta_a; t, x) + \frac{1}{2}\phi^{\theta*}_i(\theta_a; t, x)}{\Big[\displaystyle\sum_{j=1}^{n} \phi^{\theta*}_j(\theta_a; t, x)\Big]^{3/2}} - \frac{c}{x^{1/2}} - V^{(\theta)i}_x(\theta_a; t, x) \exp[r(t - t_\theta)] = 0, \qquad (18)$$

for $i \in N$. Summing over $i = 1, 2, \ldots, n$ in (18) yields:

$$\Big[\sum_{j=1}^{n} \phi^{\theta*}_j(\theta_a; t, x)\Big]^{-1/2} = \frac{2}{(2n-1)\theta_a} \Bigg[\frac{nc}{x^{1/2}} + \exp[r(t - t_\theta)] \sum_{j=1}^{n} V^{(\theta)j}_x(\theta_a; t, x)\Bigg]. \qquad (19)$$
Substituting (19) into (18) produces:

$$\theta_a \Bigg[\sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^{\theta*}_j(\theta_a; t, x) + \frac{1}{2}\phi^{\theta*}_i(\theta_a; t, x)\Bigg] \frac{8}{(2n-1)^3 \theta_a^3} \Bigg[\frac{nc}{x^{1/2}} + \exp[r(t - t_\theta)] \sum_{j=1}^{n} V^{(\theta)j}_x(\theta_a; t, x)\Bigg]^{3} - \frac{c}{x^{1/2}} - \exp[r(t - t_\theta)]\, V^{(\theta)i}_x(\theta_a; t, x) = 0, \quad \text{for } i \in N. \qquad (20)$$

Re-arranging terms in (20) yields:

$$\sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^{\theta*}_j(\theta_a; t, x) + \frac{1}{2}\phi^{\theta*}_i(\theta_a; t, x) = \frac{(2n-1)^3 \theta_a^2}{8} \cdot \frac{\dfrac{c}{x^{1/2}} + \exp[r(t - t_\theta)]\, V^{(\theta)i}_x(\theta_a; t, x)}{\Bigg[\dfrac{nc}{x^{1/2}} + \exp[r(t - t_\theta)] \displaystyle\sum_{j=1}^{n} V^{(\theta)j}_x(\theta_a; t, x)\Bigg]^{3}}, \quad \text{for } i \in N. \qquad (21)$$

Condition (21) represents a system of equations which is linear in $\phi^{\theta*}_1(\theta_a; t, x), \phi^{\theta*}_2(\theta_a; t, x), \ldots, \phi^{\theta*}_n(\theta_a; t, x)$. Solving (21) yields:
$$\phi^{\theta*}_i(\theta_a; t, x) = \frac{(2n-1)^2 \theta_a^2\, x}{4\Big[nc + \exp[r(t - t_\theta)] \displaystyle\sum_{j=1}^{n} V^{(\theta)j}_x(\theta_a; t, x)\, x^{1/2}\Big]^{3}} \times \Bigg[2 \sum_{\substack{j=1 \\ j \ne i}}^{n} \Big(c + \exp[r(t - t_\theta)]\, V^{(\theta)j}_x(\theta_a; t, x)\, x^{1/2}\Big) - (2n-3)\Big(c + \exp[r(t - t_\theta)]\, V^{(\theta)i}_x(\theta_a; t, x)\, x^{1/2}\Big)\Bigg], \quad \text{for } i \in N. \qquad (22)$$

By straightforward substitution, one can verify that (22) does solve (21). Using (22) and (19), one can express (17) as:
$$-V^{(\theta)i}_t(\theta_a; t, x) - \frac{1}{2}\sigma^2 x^2\, V^{(\theta)i}_{xx}(\theta_a; t, x)$$
$$= \frac{(2n-1)\theta_a^2\, x^{1/2}}{2\, S(t,x)^2} \Big[2S(t,x) - (2n-1)\alpha_i(t,x)\Big] \exp[-r(t - t_\theta)] - \frac{(2n-1)^2 c\, \theta_a^2\, x^{1/2}}{4\, S(t,x)^3} \Big[2S(t,x) - (2n-1)\alpha_i(t,x)\Big] \exp[-r(t - t_\theta)]$$
$$+\; V^{(\theta)i}_x(\theta_a; t, x)\Big[\varepsilon_a\, x^{1/2} - b_a\, x\Big] - \frac{(2n-1)^2 \theta_a^2\, x}{4\, S(t,x)^2}\, V^{(\theta)i}_x(\theta_a; t, x), \qquad (23)$$

where, purely for compactness, $S(t,x) \equiv nc + \exp[r(t - t_\theta)] \sum_{j=1}^{n} V^{(\theta)j}_x(\theta_a; t, x)\, x^{1/2}$ and $\alpha_i(t,x) \equiv c + \exp[r(t - t_\theta)]\, V^{(\theta)i}_x(\theta_a; t, x)\, x^{1/2}$;

$$V^{(\theta)i}(\theta_a; T, x) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}\, x^{1/2} \exp[-r(T - t_\theta)], \quad \text{for } i \in N. \qquad (24)$$
Proposition 4.1. The system (23)-(24) admits a solution

$$V^{(\theta)i}(\theta_a; t, x) = \exp[-r(t - t_\theta)]\Big[A_{\theta_a}(t)\, x^{1/2} + B_{\theta_a}(t)\Big], \quad \text{for } i \in N, \qquad (25)$$

where $A_{\theta_a}(t)$ and $B_{\theta_a}(t)$ satisfy:

$$\dot{A}_{\theta_a}(t) = \Big[r + \frac{\sigma^2}{8} + \frac{b_a}{2}\Big] A_{\theta_a}(t) - \frac{(2n-1)\theta_a^2}{2n^2\big[c + A_{\theta_a}(t)/2\big]} + \frac{(2n-1)^2 c\, \theta_a^2}{4n^3\big[c + A_{\theta_a}(t)/2\big]^2} + \frac{(2n-1)^2 \theta_a^2\, A_{\theta_a}(t)}{8n^2\big[c + A_{\theta_a}(t)/2\big]^2},$$

$$\dot{B}_{\theta_a}(t) = r B_{\theta_a}(t) - \frac{\varepsilon_a}{2} A_{\theta_a}(t),$$

$$A_{\theta_a}(T) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}, \quad \text{and} \quad B_{\theta_a}(T) = 0.$$

Proof. See Appendix A. ■
Proposition 4.2. The value function of firm $i \in N$ in the subgame $[t_k, t_{k+1})$, contingent upon the occurrence of $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$ for $k \in \{0, 1, 2, \ldots, \theta-1\}$, is:

$$V^{(k)i}(\theta^k_{a_k}; t, x) = \exp[-r(t - t_k)]\Big[A^k_{a_k}(t)\, x^{1/2} + B^k_{a_k}(t)\Big],$$

where $A^k_{a_k}(t)$ and $B^k_{a_k}(t)$ satisfy:

$$\dot{A}^k_{a_k}(t) = \Big[r + \frac{\sigma^2}{8} + \frac{b^k_{a_k}}{2}\Big] A^k_{a_k}(t) - \frac{(2n-1)(\theta^k_{a_k})^2}{2n^2\big[c + A^k_{a_k}(t)/2\big]} + \frac{(2n-1)^2 c\, (\theta^k_{a_k})^2}{4n^3\big[c + A^k_{a_k}(t)/2\big]^2} + \frac{(2n-1)^2 (\theta^k_{a_k})^2\, A^k_{a_k}(t)}{8n^2\big[c + A^k_{a_k}(t)/2\big]^2},$$

$$\dot{B}^k_{a_k}(t) = r B^k_{a_k}(t) - \frac{\varepsilon^k_{a_k}}{2} A^k_{a_k}(t),$$

$$A^k_{a_k}(t_{k+1}) = \sum_{a_{k+1}=1}^{\eta_{k+1}} \lambda^{k+1}_{a_{k+1}}\, A^{k+1}_{a_{k+1}}(t_{k+1}) \exp[-r(t_{k+1} - t_k)], \quad \text{and}$$

$$B^k_{a_k}(t_{k+1}) = \sum_{a_{k+1}=1}^{\eta_{k+1}} \lambda^{k+1}_{a_{k+1}}\, B^{k+1}_{a_{k+1}}(t_{k+1}) \exp[-r(t_{k+1} - t_k)].$$

Proof. Invoke Theorem 3.1 and follow the proof of Proposition 4.1. ■
The optimal strategies of firm $i$ in the time interval $[t_k, t_{k+1})$, given that $\theta^k_{a_k} \in \{\theta^k_1, \theta^k_2, \ldots, \theta^k_{\eta_k}\}$, $\varepsilon^k_{a_k} \in \{\varepsilon^k_1, \varepsilon^k_2, \ldots, \varepsilon^k_{\eta_k}\}$ and $b^k_{a_k} \in \{b^k_1, b^k_2, \ldots, b^k_{\eta_k}\}$ have occurred, can be obtained by using (22) as:

$$\phi^{k*}_i(\theta^k_{a_k}; t, x) = \frac{(2n-1)^2 (\theta^k_{a_k})^2\, x}{4n^3\big[c + A^k_{a_k}(t)/2\big]^2}, \quad \text{for } t \in [t_k, t_{k+1}) \text{ and } i \in N. \qquad (26)$$
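Propositions 4.1-4.2 and (26) lend themselves to direct numerical evaluation: integrate the pair $(A, B)$ backwards within each stage and stitch the stages together through the expected-value boundary conditions. The sketch below does this with plain Euler steps; every parameter value, including the use of stage-invariant regime sets, is a hypothetical choice for illustration only.

```python
import math

# Hypothetical parameters: 3 firms, 2 regimes per stage, 3 stages of unit
# length; theta, eps, b hold one value per regime, lam the regime
# probabilities, q the terminal-payoff values (all illustrative only).
n, c, r, sig = 3, 1.0, 0.05, 0.1
theta, eps, b = [2.0, 3.0], [1.0, 1.2], [0.1, 0.15]
lam, q, stage_len = [0.5, 0.5], [1.0, 1.5], 1.0

def A_dot(A, th, bb):
    # Right-hand side of the A-equation in Propositions 4.1-4.2.
    m = c + A / 2.0
    return ((r + sig**2 / 8.0 + bb / 2.0) * A
            - (2*n - 1) * th**2 / (2 * n**2 * m)
            + (2*n - 1)**2 * c * th**2 / (4 * n**3 * m**2)
            + (2*n - 1)**2 * th**2 * A / (8 * n**2 * m**2))

def integrate_stage(A_end, B_end, th, ee, bb, steps=2000):
    # Euler integration backwards from t_{k+1} to t_k.
    h, A, B = stage_len / steps, A_end, B_end
    for _ in range(steps):
        A -= h * A_dot(A, th, bb)
        B -= h * (r * B - ee * A / 2.0)
    return A, B

# Last stage: A(T) = sum_a lam_a q_a, B(T) = 0 (Proposition 4.1).
A_T = sum(l * qq for l, qq in zip(lam, q))
pairs = [integrate_stage(A_T, 0.0, theta[a], eps[a], b[a]) for a in range(2)]
for k in range(1, -1, -1):          # earlier stages (Proposition 4.2)
    disc = math.exp(-r * stage_len)
    EA = disc * sum(l * p[0] for l, p in zip(lam, pairs))
    EB = disc * sum(l * p[1] for l, p in zip(lam, pairs))
    pairs = [integrate_stage(EA, EB, theta[a], eps[a], b[a]) for a in range(2)]

# Equilibrium extraction rate (26) at the start of stage 0 with stock x.
x = 100.0
print([(2*n - 1)**2 * theta[a]**2 * x / (4 * n**3 * (c + pairs[a][0] / 2.0)**2)
       for a in range(2)])
```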
5. Infinite Horizon Problems

In many game situations, the terminal time of the game, $T$, is either very far in the future or unknown to the players. For example, the value of a publicly listed firm is the present value of its discounted expected future earnings, and nobody knows when the firm will be out of business. As argued by Dockner et al. (2000), in this case setting $T = \infty$ may very well be the best approximation of the true game horizon. Even if the firm's management restricts itself to considering profit maximization over the next year, it should value its asset positions at the end of the year by the earning potential of these assets in the years to come. In this section, infinite-horizon stochastic differential games with structural uncertainties are examined.
5.1. Game Formulation and Solution Characterization

Consider a class of stochastic differential games in which the game horizon is $[t_0, \infty)$. The game horizon is divided into time intervals of equal length: $[t_0, t_1), [t_1, t_2), [t_2, t_3), \ldots$. The players' payoffs and state dynamics are affected by a series of random events. In particular, $\theta_{a_k}$, $k \in \{1, 2, \ldots\}$, are independent and identically distributed random variables with range $\{\theta_1, \theta_2, \ldots, \theta_\eta\}$ and corresponding probabilities $\{\lambda_1, \lambda_2, \ldots, \lambda_\eta\}$, which will be realized in the time interval $[t_k, t_{k+1})$. At time $t_0$, $\theta_{a_0} = \theta_1$ is known to prevail in the time interval $[t_0, t_1)$.
Player $i$, $i \in N$, then seeks to maximize:

$$E_{t_0}\Bigg\{ \int_{t_0}^{t_1} g^i[\theta_{a_0}; x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp[-r(s - t_0)]\, ds + \sum_{k=1}^{\infty} \sum_{a_k=1}^{\eta} \lambda_{a_k} \int_{t_k}^{t_{k+1}} g^i[\theta_{a_k}; x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp[-r(s - t_0)]\, ds \Bigg\}, \qquad (27)$$

subject to the furcating state dynamics:
$$dx(s) = f[\theta_{a_0}; x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta_{a_0}; x(s)]\, dz(s), \qquad (28)$$

for $s \in [t_0, t_1)$, and

$$dx(s) = f[\theta_{a_k}; x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta_{a_k}; x(s)]\, dz(s), \quad x(t_k) = x_k, \qquad (29)$$

for $s \in [t_k, t_{k+1})$, if $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$ is realized in the time interval $[t_k, t_{k+1})$.

The lengths of the intervals $[t_k, t_{k+1})$, for all $k$, are the same (and equal $\hat{T}$). Let $\hat{\Omega}[\theta_{a_k}; x(s)] = \sigma[\theta_{a_k}; x(s)]\, \sigma[\theta_{a_k}; x(s)]^T$ denote the covariance matrix, with its element in row $h$ and column $\zeta$ denoted by $\hat{\Omega}^{h\zeta}[\theta_{a_k}; x(s)]$.
To characterize the solution of the infinite-horizon autonomous problem (27)-(29), consider first the game in which player $i$, $i \in N$, maximizes

$$E_{t_k}\Bigg\{ \int_{t_k}^{t_{k+1}} g^i[\theta_{a_k}; x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp[-r(s - t_k)]\, ds + \sum_{\ell=k+1}^{\infty} \sum_{a_\ell=1}^{\eta} \lambda_{a_\ell} \int_{t_\ell}^{t_{\ell+1}} g^i[\theta_{a_\ell}; x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp[-r(s - t_k)]\, ds \Bigg\}, \quad i \in N, \qquad (30)$$

subject to the furcating state dynamics:

$$dx(s) = f[\theta_{a_k}; x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta_{a_k}; x(s)]\, dz(s), \quad \text{for } s \in [t_k, t_{k+1}) \text{ and } \theta_{a_k} \text{ known}; \qquad (31)$$

$$dx(s) = f[\theta_{a_\ell}; x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta_{a_\ell}; x(s)]\, dz(s), \qquad (32)$$

for $s \in [t_\ell, t_{\ell+1})$, if $\theta_{a_\ell} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$ is realized in the time interval $[t_\ell, t_{\ell+1})$.
The problem (30)-(32) is independent of the choice of $t_k$ and depends only upon $x_k$ (the state $x$ at time $t_k$) and the random outcome $\theta_{a_k}$. In particular, consider another game starting at $t_h \ne t_k$ but with $\theta_{a_h} = \theta_{a_k}$ and $x(t_h) = x(t_k)$; this game is identical to the game (30)-(32).

A set of feedback strategies $\{\phi^{\ell*}_i(\theta_{a_\ell}; s, x)$, for $s \in [t_\ell, t_{\ell+1})$, $i \in N$, $\theta_{a_\ell} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$, $\ell \in \{k, k+1, k+2, \ldots\}\}$ contingent upon $\theta_{a_\ell}$ constitutes a Nash equilibrium solution to the game (30)-(32) if there exist functions $V^{(k)i}(\theta_{a_k}; t, x): [t_k, t_{k+1}) \times R^m \to R$ satisfying the following relations for each $i \in N$:

$$V^{(k)i}(\theta_{a_k}; t, x) = E_{t_k}\Bigg\{ \int_{t}^{t_{k+1}} g^i[\theta_{a_k}; x(s), \phi^{k*}_1(\theta_{a_k}; s, x(s)), \phi^{k*}_2(\theta_{a_k}; s, x(s)), \ldots, \phi^{k*}_n(\theta_{a_k}; s, x(s))] \exp[-r(s - t_k)]\, ds$$
$$+ \sum_{\ell=k+1}^{\infty} \sum_{a_\ell=1}^{\eta} \lambda_{a_\ell} \int_{t_\ell}^{t_{\ell+1}} g^i[\theta_{a_\ell}; x(s), \phi^{\ell*}_1(\theta_{a_\ell}; s, x(s)), \phi^{\ell*}_2(\theta_{a_\ell}; s, x(s)), \ldots, \phi^{\ell*}_n(\theta_{a_\ell}; s, x(s))] \exp[-r(s - t_k)]\, ds \Bigg\}$$
$$= \max_{\{u^k_i, u^{k+1}_i, \ldots\}} E_{t_k}\Bigg\{ \int_{t}^{t_{k+1}} g^i[\theta_{a_k}; x(s), \phi^{k*}_1(\theta_{a_k}; s, x(s)), \ldots, \phi^{k*}_{i-1}(\theta_{a_k}; s, x(s)), u^k_i(s, x(s)), \phi^{k*}_{i+1}(\theta_{a_k}; s, x(s)), \ldots, \phi^{k*}_n(\theta_{a_k}; s, x(s))] \exp[-r(s - t_k)]\, ds$$
$$+ \sum_{\ell=k+1}^{\infty} \sum_{a_\ell=1}^{\eta} \lambda_{a_\ell} \int_{t_\ell}^{t_{\ell+1}} g^i[\theta_{a_\ell}; x(s), \phi^{\ell*}_1(\theta_{a_\ell}; s, x(s)), \ldots, \phi^{\ell*}_{i-1}(\theta_{a_\ell}; s, x(s)), u^\ell_i(s, x(s)), \phi^{\ell*}_{i+1}(\theta_{a_\ell}; s, x(s)), \ldots, \phi^{\ell*}_n(\theta_{a_\ell}; s, x(s))] \exp[-r(s - t_k)]\, ds \Bigg\}, \qquad (33)$$

for all $i \in N$, subject to the dynamics (31)-(32).

Invoking the definition of $V^{(k+1)i}(\theta_{a_{k+1}}; t, x)$ according to (33), the value function $V^{(k)i}(\theta_{a_k}; t, x)$ can be expressed as:
$$V^{(k)i}(\theta_{a_k}; t, x) = E_{t_k}\Bigg\{ \int_{t}^{t_{k+1}} g^i[\theta_{a_k}; x(s), \phi^{k*}_1(\theta_{a_k}; s, x(s)), \phi^{k*}_2(\theta_{a_k}; s, x(s)), \ldots, \phi^{k*}_n(\theta_{a_k}; s, x(s))] \exp[-r(s - t_k)]\, ds$$
$$+ \exp[-r(t_{k+1} - t_k)] \sum_{a_{k+1}=1}^{\eta} \lambda_{a_{k+1}}\, V^{(k+1)i}(\theta_{a_{k+1}}; t_{k+1}, x(t_{k+1})) \Bigg\}, \quad \text{for all } i \in N. \qquad (34)$$

As noted before, the outcome of the infinite-horizon autonomous problem (30)-(32) is independent of the choice of initial time $t_k$. Therefore we can define $V^{(k)i}(\theta_{a_k}; t, x): [t_k, t_{k+1}) \times R^m \to R$ alternatively as $W^i(\theta_{a_k}; t, x): [0, \hat{T}) \times R^m \to R$, $i \in N$. The feedback equilibrium strategy $\phi^{k*}_i(\theta_{a_k}; s, x)$, for $s \in [t_k, t_{k+1})$, can be alternatively stated as $\phi^*_i(\theta_{a_k}; s, x)$, for $s \in [0, \hat{T})$.

The expression in (34) can then be written as:

$$W^i(\theta_{a_k}; t, x) = E_{0}\Bigg\{ \int_{t}^{\hat{T}} g^i[\theta_{a_k}; x(s), \phi^*_1(\theta_{a_k}; s, x(s)), \phi^*_2(\theta_{a_k}; s, x(s)), \ldots, \phi^*_n(\theta_{a_k}; s, x(s))] \exp(-rs)\, ds + \exp(-r\hat{T}) \sum_{a=1}^{\eta} \lambda_a\, W^i(\theta_a; 0, x(\hat{T})) \Bigg\}, \quad \text{for all } i \in N. \qquad (35)$$

Making use of (33), (34) and (35), a feedback Nash solution of the game (30)-(32) can be characterized as follows.
Theorem 5.1. A set of feedback strategies $\{\phi^*_i(\theta_{a_k}; t, x)$, for $t \in [0, \hat{T})$, $i \in N$, $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}\}$ contingent upon the occurrence of $\theta_{a_k}$ in the time interval $[t_k, t_{k+1})$ constitutes a Nash equilibrium solution to the game (30)-(32) if there exist suitably smooth functions $W^i(\theta_{a_k}; t, x): [0, \hat{T}) \times R^m \to R$, $i \in N$, $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$, satisfying the following set of partial differential equations:

$$-W^i_t(\theta_{a_k}; t, x) - \frac{1}{2} \sum_{h,\zeta=1}^{m} \hat{\Omega}^{h\zeta}(\theta_{a_k}; x)\, W^i_{x^h x^\zeta}(\theta_{a_k}; t, x)$$
$$= \max_{u_i}\Big\{ g^i[\theta_{a_k}; x, \phi^*_1(\theta_{a_k}; t, x), \ldots, \phi^*_{i-1}(\theta_{a_k}; t, x), u_i(t, x), \phi^*_{i+1}(\theta_{a_k}; t, x), \ldots, \phi^*_n(\theta_{a_k}; t, x)] \exp(-rt)$$
$$+\; W^i_x(\theta_{a_k}; t, x)\, f[\theta_{a_k}; x, \phi^*_1(\theta_{a_k}; t, x), \ldots, \phi^*_{i-1}(\theta_{a_k}; t, x), u_i(t, x), \phi^*_{i+1}(\theta_{a_k}; t, x), \ldots, \phi^*_n(\theta_{a_k}; t, x)] \Big\},$$

$$W^i(\theta_{a_k}; \hat{T}, x) = \sum_{a=1}^{\eta} \lambda_a\, W^i(\theta_a; 0, x) \exp(-r\hat{T}),$$

for $i \in N$ and $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$.

Proof. As demonstrated above, the value functions of the game (30)-(32) can be expressed as in (35). Invoking Lemma 3.1, these results follow from the optimality conditions in stochastic control as derived by Fleming (1969) and Fleming and Rishel (1975), and from the definition of Nash equilibrium for each relevant subgame. ■

An interesting feature is that, by the tenet of stochastic recurrence, the solution to an infinite-horizon problem is reduced to the characterization of a solution to a problem with a finite horizon $[0, \hat{T})$, within which the realized random element $\theta_a$ remains unchanged.
5.2. Infinite Horizon Resource Extraction

Consider the resource extraction game in Section 4 with an infinite game horizon. Firm $i$ would seek to maximize the expected present value of profits:

$$E_{t_0}\Bigg\{ \int_{t_0}^{t_1} \Bigg[\theta_{a_0}\, u_i(s) \Big(\sum_{j=1}^{n} u_j(s)\Big)^{-1/2} - \frac{c}{x(s)^{1/2}}\, u_i(s)\Bigg] \exp[-r(s - t_0)]\, ds$$
$$+ \sum_{k=1}^{\infty} \sum_{a_k=1}^{\eta} \lambda_{a_k} \int_{t_k}^{t_{k+1}} \Bigg[\theta_{a_k}\, u_i(s) \Big(\sum_{j=1}^{n} u_j(s)\Big)^{-1/2} - \frac{c}{x(s)^{1/2}}\, u_i(s)\Bigg] \exp[-r(s - t_0)]\, ds \Bigg\}, \qquad (36)$$

subject to

$$dx(s) = \Big[\varepsilon_{a_0}\, x(s)^{1/2} - b_{a_0}\, x(s) - \sum_{j=1}^{n} u_j(s)\Big] ds + \sigma\, x(s)\, dz(s), \quad x(t_0) = x_0,$$

in the time interval $[t_0, t_1)$, and

$$dx(s) = \Big[\varepsilon_{a_k}\, x(s)^{1/2} - b_{a_k}\, x(s) - \sum_{j=1}^{n} u_j(s)\Big] ds + \sigma\, x(s)\, dz(s), \qquad (37)$$

in the time interval $[t_k, t_{k+1})$ if $\theta_{a_k}$ occurs.

Invoking Theorem 5.1, the conditions characterizing the solution in any subgame interval $[t_k, t_{k+1})$, given that $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$, $\varepsilon_{a_k} \in \{\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_\eta\}$ and $b_{a_k} \in \{b_1, b_2, \ldots, b_\eta\}$ have occurred, can be expressed as:
$$-W^i_t(\theta_{a_k}; t, x) - \frac{1}{2}\sigma^2 x^2\, W^i_{xx}(\theta_{a_k}; t, x)$$
$$= \max_{u_i}\Bigg\{ \Bigg[\theta_{a_k}\, u_i(t, x) \Big(\sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^*_j(\theta_{a_k}; t, x) + u_i(t, x)\Big)^{-1/2} - \frac{c}{x^{1/2}}\, u_i(t, x)\Bigg] \exp(-rt)$$
$$+\; W^i_x(\theta_{a_k}; t, x) \Bigg[\varepsilon_{a_k}\, x^{1/2} - b_{a_k}\, x - \sum_{\substack{j=1 \\ j \ne i}}^{n} \phi^*_j(\theta_{a_k}; t, x) - u_i(t, x)\Bigg] \Bigg\},$$

$$W^i(\theta_{a_k}; \hat{T}, x) = \sum_{a=1}^{\eta} \lambda_a\, W^i(\theta_a; 0, x) \exp(-r\hat{T}), \qquad (38)$$

for $i \in N$, $\theta_{a_k} \in \{\theta_1, \theta_2, \ldots, \theta_\eta\}$, $\varepsilon_{a_k} \in \{\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_\eta\}$ and $b_{a_k} \in \{b_1, b_2, \ldots, b_\eta\}$.

Following the analysis in Section 4, one can obtain:

$$\phi^*_i(\theta_{a_k}; t, x) = \frac{(2n-1)^2 \theta_{a_k}^2\, x}{4\Big[nc + \exp(rt) \displaystyle\sum_{j=1}^{n} W^j_x(\theta_{a_k}; t, x)\, x^{1/2}\Big]^{3}} \times \Bigg[2 \sum_{\substack{j=1 \\ j \ne i}}^{n} \Big(c + \exp(rt)\, W^j_x(\theta_{a_k}; t, x)\, x^{1/2}\Big) - (2n-3)\Big(c + \exp(rt)\, W^i_x(\theta_{a_k}; t, x)\, x^{1/2}\Big)\Bigg], \quad \text{for } i \in N. \qquad (39)$$
Using (39) and the analysis in Section 4, condition (38) can be expressed as:

$$-W^i_t(\theta_{a_k}; t, x) - \frac{1}{2}\sigma^2 x^2\, W^i_{xx}(\theta_{a_k}; t, x)$$
$$= \frac{(2n-1)\theta_{a_k}^2\, x^{1/2}}{2\, S(t,x)^2} \Big[2S(t,x) - (2n-1)\alpha_i(t,x)\Big] \exp(-rt) - \frac{(2n-1)^2 c\, \theta_{a_k}^2\, x^{1/2}}{4\, S(t,x)^3} \Big[2S(t,x) - (2n-1)\alpha_i(t,x)\Big] \exp(-rt)$$
$$+\; W^i_x(\theta_{a_k}; t, x)\Big[\varepsilon_{a_k}\, x^{1/2} - b_{a_k}\, x\Big] - \frac{(2n-1)^2 \theta_{a_k}^2\, x}{4\, S(t,x)^2}\, W^i_x(\theta_{a_k}; t, x), \qquad (40)$$

where, for compactness, $S(t,x) \equiv nc + \exp(rt) \sum_{j=1}^{n} W^j_x(\theta_{a_k}; t, x)\, x^{1/2}$ and $\alpha_i(t,x) \equiv c + \exp(rt)\, W^i_x(\theta_{a_k}; t, x)\, x^{1/2}$;

$$W^i(\theta_{a_k}; \hat{T}, x) = \sum_{a=1}^{\eta} \lambda_a\, W^i(\theta_a; 0, x) \exp(-r\hat{T}), \quad \text{for } i \in N. \qquad (41)$$
Proposition 5.1. The system (40)-(41) admits a solution

$$W^i(\theta_{a_k}; t, x) = \exp(-rt)\Big[A_{\theta_{a_k}}(t)\, x^{1/2} + B_{\theta_{a_k}}(t)\Big], \quad \text{for } i \in N, \qquad (42)$$

where $A_{\theta_{a_k}}(t)$ and $B_{\theta_{a_k}}(t)$ satisfy:

$$\dot{A}_{\theta_{a_k}}(t) = \Big[r + \frac{\sigma^2}{8} + \frac{b_{a_k}}{2}\Big] A_{\theta_{a_k}}(t) - \frac{(2n-1)\theta_{a_k}^2}{2n^2\big[c + A_{\theta_{a_k}}(t)/2\big]} + \frac{(2n-1)^2 c\, \theta_{a_k}^2}{4n^3\big[c + A_{\theta_{a_k}}(t)/2\big]^2} + \frac{(2n-1)^2 \theta_{a_k}^2\, A_{\theta_{a_k}}(t)}{8n^2\big[c + A_{\theta_{a_k}}(t)/2\big]^2},$$

$$\dot{B}_{\theta_{a_k}}(t) = r B_{\theta_{a_k}}(t) - \frac{\varepsilon_{a_k}}{2} A_{\theta_{a_k}}(t),$$

$$A_{\theta_{a_k}}(\hat{T}) = \sum_{a=1}^{\eta} \lambda_a\, A_{\theta_a}(0) \exp(-r\hat{T}), \quad \text{and} \quad B_{\theta_{a_k}}(\hat{T}) = \sum_{a=1}^{\eta} \lambda_a\, B_{\theta_a}(0) \exp(-r\hat{T}).$$

Proof. Follow the proofs of Propositions 4.1 and 4.2. ■
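Numerically, the recursive boundary condition in Proposition 5.1 is a fixed point: guess the stage-initial coefficients $A_{\theta_a}(0)$, form the boundary values $A_{\theta_a}(\hat{T}) = \exp(-r\hat{T})\sum_{a'}\lambda_{a'}A_{\theta_{a'}}(0)$, integrate backwards over $[0, \hat{T})$, and repeat until the guesses stabilize. A minimal self-contained sketch with hypothetical parameters follows; $B$ is omitted because its equation is linear and can be treated the same way once $A$ is known.

```python
import math

# Hypothetical parameters (one value per regime), illustrative only.
n, c, r, sig, That = 3, 1.0, 0.05, 0.1, 1.0
theta, b, lam = [2.0, 3.0], [0.1, 0.15], [0.5, 0.5]

def A_dot(A, th, bb):
    # Right-hand side of the A-equation in Proposition 5.1.
    m = c + A / 2.0
    return ((r + sig**2 / 8.0 + bb / 2.0) * A
            - (2*n - 1) * th**2 / (2 * n**2 * m)
            + (2*n - 1)**2 * c * th**2 / (4 * n**3 * m**2)
            + (2*n - 1)**2 * th**2 * A / (8 * n**2 * m**2))

def back_integrate(A_end, th, bb, steps=2000):
    # Euler integration of A backwards over one stage of length That.
    h, A = That / steps, A_end
    for _ in range(steps):
        A -= h * A_dot(A, th, bb)
    return A

A0 = [1.0, 1.0]                       # initial guess for A_theta_a(0)
for _ in range(500):
    A_end = math.exp(-r * That) * sum(l * a for l, a in zip(lam, A0))
    A_new = [back_integrate(A_end, theta[a], b[a]) for a in range(2)]
    done = max(abs(x - y) for x, y in zip(A_new, A0)) < 1e-9
    A0 = A_new
    if done:
        break
print(A0)
```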
6. Conclusion

In this analysis, we present a general class of stochastic differential games in which future payoff structures and configurations of the state dynamics are not known with certainty; only the probability distributions of the payoff structures and of the configurations of the state dynamics are known. A mechanism for solving this class of games is derived and examples are provided. This is the first time that stochastic differential games with uncertain payoff structures and uncertain state-dynamics configurations have been presented. This class of stochastic differential games provides a paradigm for modeling game-theoretic situations over time with more content and realism.

Some novel sub-classes of differential games and control problems can be derived from the model. For instance, replacing the stochastic dynamics with deterministic dynamics yields differential games with structural uncertainties in payoffs and dynamics. The game becomes
$$E_{t_0}\Bigg\{ \int_{t_0}^{t_1} g^i[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\Big[-\int_{t_0}^{s} r(y)\,dy\Big] ds$$
$$+ \sum_{k=1}^{\theta} \sum_{a_k=1}^{\eta_k} \lambda^k_{a_k} \int_{t_k}^{t_{k+1}} g^i[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\Big[-\int_{t_0}^{s} r(y)\,dy\Big] ds$$
$$+ \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i[\theta^T_{a_T}, x(T)] \exp\Big[-\int_{t_0}^{T} r(y)\,dy\Big] \Bigg\}, \qquad (43)$$

subject to the state dynamics:

$$\dot{x}(s) = f[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)], \quad x(t_0) = x_0, \quad \text{for } s \in [t_0, t_1),$$

and

$$\dot{x}(s) = f[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)], \quad \text{for } s \in [t_k, t_{k+1}), \ k \in \{1, 2, \ldots, \theta\}. \qquad (44)$$

Removing uncertainty in future payoff structures, one can obtain stochastic differential games with uncertain configurations of future state dynamics. The game becomes:
$$E_{t_0}\Bigg\{ \int_{t_0}^{T} g^i[s, x(s), u_1(s), u_2(s), \ldots, u_n(s)] \exp\Big[-\int_{t_0}^{s} r(y)\,dy\Big] ds + q^i[x(T)] \exp\Big[-\int_{t_0}^{T} r(y)\,dy\Big] \Bigg\}, \qquad (45)$$

subject to the state dynamics:

$$dx(s) = f[\theta^0_{a_0}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta^0_{a_0}; s, x(s)]\, dz(s), \quad x(t_0) = x_0, \quad \text{for } s \in [t_0, t_1),$$

and

$$dx(s) = f[\theta^k_{a_k}; s, x(s), u_1(s), u_2(s), \ldots, u_n(s)]\, ds + \sigma[\theta^k_{a_k}; s, x(s)]\, dz(s), \qquad (46)$$

for $s \in [t_k, t_{k+1})$ and $k \in \{1, 2, \ldots, \theta\}$. In the case where the number of players equals one, a stochastic control problem with structural uncertainties will result. In sum, the analysis has widened the application of game theory to more complicated and realistic environments of uncertainty.
Appendix A

Proof of Proposition 4.1. Using (25) we obtain:

$$V^{(\theta)i}_t(\theta_a; t, x) = -r \exp[-r(t - t_\theta)]\Big[A_{\theta_a}(t)\, x^{1/2} + B_{\theta_a}(t)\Big] + \exp[-r(t - t_\theta)]\Big[\dot{A}_{\theta_a}(t)\, x^{1/2} + \dot{B}_{\theta_a}(t)\Big],$$

$$V^{(\theta)i}_x(\theta_a; t, x) = \frac{1}{2} \exp[-r(t - t_\theta)]\, A_{\theta_a}(t)\, x^{-1/2}, \quad \text{and} \quad V^{(\theta)i}_{xx}(\theta_a; t, x) = -\frac{1}{4} \exp[-r(t - t_\theta)]\, A_{\theta_a}(t)\, x^{-3/2}. \qquad \text{(A.1)}$$

Note that with (25), $\exp[r(t - t_\theta)]\, V^{(\theta)j}_x(\theta_a; t, x)\, x^{1/2} = A_{\theta_a}(t)/2$ for every $j$, so the terms $S$ and $\alpha_i$ in (23) reduce to $n\big[c + A_{\theta_a}(t)/2\big]$ and $c + A_{\theta_a}(t)/2$, respectively. Upon substituting these values into (23), and cancelling the common factor $\exp[-r(t - t_\theta)]$, we obtain:

$$r\Big[A_{\theta_a}(t)\, x^{1/2} + B_{\theta_a}(t)\Big] - \Big[\dot{A}_{\theta_a}(t)\, x^{1/2} + \dot{B}_{\theta_a}(t)\Big] + \frac{\sigma^2}{8}\, A_{\theta_a}(t)\, x^{1/2}$$
$$= \frac{(2n-1)\theta_a^2\, x^{1/2}}{2n^2\big[c + A_{\theta_a}(t)/2\big]} - \frac{(2n-1)^2 c\, \theta_a^2\, x^{1/2}}{4n^3\big[c + A_{\theta_a}(t)/2\big]^2} + \frac{A_{\theta_a}(t)}{2}\Big[\varepsilon_a - b_a\, x^{1/2}\Big] - \frac{(2n-1)^2 \theta_a^2\, A_{\theta_a}(t)\, x^{1/2}}{8n^2\big[c + A_{\theta_a}(t)/2\big]^2}, \qquad \text{(A.2)}$$

together with, from (24),

$$\exp[-r(T - t_\theta)]\Big[A_{\theta_a}(T)\, x^{1/2} + B_{\theta_a}(T)\Big] = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}\, x^{1/2} \exp[-r(T - t_\theta)], \quad \text{for } i \in N.$$

Upon collecting the terms in $x^{1/2}$ and the constant terms separately, we have:

$$\dot{A}_{\theta_a}(t) = \Big[r + \frac{\sigma^2}{8} + \frac{b_a}{2}\Big] A_{\theta_a}(t) - \frac{(2n-1)\theta_a^2}{2n^2\big[c + A_{\theta_a}(t)/2\big]} + \frac{(2n-1)^2 c\, \theta_a^2}{4n^3\big[c + A_{\theta_a}(t)/2\big]^2} + \frac{(2n-1)^2 \theta_a^2\, A_{\theta_a}(t)}{8n^2\big[c + A_{\theta_a}(t)/2\big]^2},$$

$$\dot{B}_{\theta_a}(t) = r B_{\theta_a}(t) - \frac{\varepsilon_a}{2} A_{\theta_a}(t),$$

$$A_{\theta_a}(T)\, x^{1/2} + B_{\theta_a}(T) = \sum_{a_T=1}^{\eta_T} \lambda^T_{a_T}\, q^i_{a_T}\, x^{1/2}, \quad \text{for } i \in N. \qquad \text{(A.3)}$$

A solution for $A_{\theta_a}(t)$ and $B_{\theta_a}(t)$ must satisfy the conditions stated in Proposition 4.1. Hence Proposition 4.1 follows. ■
Acknowledgments

The author is grateful for research support provided by HK Research Grants Council grant CERG-HKBU202807, European Commission TOCSIN Project RTD REG/I.5(2006)D/553242, and HKBU Strategic Development Fund 03-17-224.
References

Basar, T., 1977a: Existence of unique equilibrium solutions in nonzero-sum stochastic differential games, in Differential Games and Control Theory II, E. O. Roxin, P. T. Liu, and R. Sternberg (eds.), Marcel Dekker, Inc., 201-228.
Basar, T., 1977b: Informationally nonunique equilibrium solutions in differential games, SIAM J. Control Optim., 15, 636-660.
Basar, T., 1980: On the existence and uniqueness of closed-loop sampled-data Nash controls in linear-quadratic stochastic differential games, in Optimization Techniques, K. Iracki et al. (eds.), Lecture Notes in Control and Information Sciences, Springer-Verlag, New York, ch. 22, 193-203.
Berkovitz, L. D., 1964: A variational approach to differential games, in Advances in Game Theory, M. Dresher, L. S. Shapley, and A. W. Tucker (eds.), Princeton University Press, Princeton, NJ, 127-174.
Clark, C. W., 1990: Mathematical Bioeconomics: The Optimal Management of Renewable Resources, 2nd edn, Wiley, New York.
Clemhout, S., and H. Y. Wan, Jr., 1985: Dynamic common-property resources and environmental problems, J. Optim. Theory Appl., 46, 471-481.
Dockner, E., S. Jørgensen, N. V. Long and G. Sorger, 2000: Differential Games in Economics and Management Science, Cambridge University Press, Cambridge.
Fleming, W. H., 1969: Optimal continuous-parameter stochastic control, SIAM Review, 11, 470-509.
Fleming, W. H. and R. W. Rishel, 1975: Deterministic and Stochastic Optimal Control, Applications of Mathematics, vol. 1, Springer-Verlag, New York, Heidelberg and Berlin, 222 pp.
Isaacs, R., 1965: Differential Games, Wiley, New York.
Jørgensen, S. and D. W. K. Yeung, 1996: Stochastic differential game model of a common property fishery, J. Optim. Theory Appl., 90, 381-403.
Kaitala, V., 1993: Equilibria in a stochastic resource management game under imperfect information, Eur. J. Oper. Res., 71, 439-453.
Leitmann, G. and G. Mon, 1967: Some geometric aspects of differential games, J. Astron. Sci., 14, 56-.
Petrosyan, L. and D. W. K. Yeung, 2006: Dynamically stable solutions in randomly-furcating differential games, Trans. Steklov Inst. Math., 253 (Supplement 1), S208-S220.
Petrosyan, L. and D. W. K. Yeung, 2007: Subgame-consistent cooperative solutions in randomly-furcating stochastic differential games, Math. Comput. Model. (Special Issue on Lyapunov's Methods in Stability and Control), 45, 1294-1307.
Pontryagin, L. S., 1966: On the theory of differential games, Uspekhi Mat. Nauk, 21, 219-274.
Yeung, D. W. K., 1998: A class of differential games which admits a feedback solution with linear value functions, Eur. J. Oper. Res., 107, 737-754.
Yeung, D. W. K., 1999: A stochastic differential game model of institutional investor speculation, J. Optim. Theory Appl., 102, 463-477.
Yeung, D. W. K., 2001: Infinite-horizon stochastic differential games with branching payoffs, J. Optim. Theory Appl., 111, 445-460.
Yeung, D. W. K., 2003: Endogenous-horizon randomly furcating differential games, in Game Theory and Applications, Volume IX, L. Petrosyan and V. Mazalov (eds.), Nova Science Publishers, New York, 199-217.
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 117-137
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.

Chapter 5

AN OPTIMIZATION APPROACH FOR INVENTORY ROUTING PROBLEM IN CONGESTED ROAD NETWORK

Suh-Wen Chiou
Department of Information Management, National Dong Hwa University, 1, Sec. 2, Da Hsueh Rd., Shou-Feng, Hualien, 974, Taiwan
E-mail address: [email protected]
Abstract

Consider a congested urban road network with one depot and many geographically dispersed retailers facing demand at a constant and deterministic rate over the planning horizon, where the lead time is variable due to traffic congestion. All stock enters the network through the depot, from where it is distributed to the retailers by a fleet of vehicles. In this paper, we propose a new class of strategies that determines the optimal inventory replenishment for each retailer while taking efficient delivery design into account, so that the minimization of total inventory cost and transportation cost is achieved. A mathematical program is formulated for this combined problem. Numerical computations are conducted, and good results are obtained with reasonable computational effort.
Keywords: optimization; routing problem; inventory allocation; simplicial decomposition; traffic congestion
1. Introduction

For a congested urban road network, the role of logistics management is changing. Companies are recognizing that value for customers can be realized through an integrated service combining effective logistics management and product availability. For such an integrated service, inventory allocation and vehicle routing are two important and closely interrelated decisions that arise in management contexts, and they have been investigated extensively as the inventory routing problem ([1]-[14]). The inventory routing problem (IRP) addresses the coordination of inventory replenishment and transportation; it refers to inventory replenishment at a number of locations controlled by a central manager with a fleet of vehicles. A good distribution strategy needs to be developed by which the minimization of distribution cost is achieved while the demands of retailers are satisfied without stock-outs occurring during the planning horizon. For urban road networks, traffic congestion is often neglected in analyzing replenishment policies for the IRP, as discussed in [4]-[7] and [12]. However, as noted in [4], travel times and the corresponding costs are severely affected by the traffic conditions of the logistics network; for the sake of realism, more attention needs to be paid to the variable nature of transportation times when modelling the combined problem.

Contrary to conventional designs for inventory replenishment, e.g. the Economic Order Quantity (EOQ) model where the lead times are regarded as fixed, in this paper we consider the lead time to be a mapping of the result of the vehicle routings, which is difficult to express in closed form due to the NP-hard nature of vehicle routing problems. The objective at the retailers is thus to minimize the total inventory cost with respect to both the inventory replenishment quantities and the vehicle routings. For such a combined problem of inventory allocations and vehicle routings, our mathematical programming formulation becomes non-convex in the following ways. Firstly, in the inventory cost function the lead time of on-route orders depends on the results of the vehicle routings, and thus the lead-time demand is not constant as in the classical EOQ model, but an implicit function of the routing results. Secondly, the vehicle routings are strongly influenced by the inventory replenishments once the latter have been determined by the optimization model. In order to deal effectively with this problem, a new class of iterative solution strategies is developed for simultaneously solving the two interrelated problems. Numerical computations have been conducted on a series of experimental scenarios. In comparison with a method that separately solves the two independent problems of optimal inventory replenishment and vehicle routing, where the lead times are regarded as fixed, the proposed approach obtained better results in decreasing both transportation and inventory costs over the planning horizon, with reasonable computational overhead.

The remainder of this paper is organized as follows. In the next section, relevant literature in the field of inventory allocation and vehicle routing is reviewed. Mathematical programs for the combined problem of inventory allocations and vehicle routings with variable lead time are formulated in Section 3. A new solution scheme for the IRP is developed in Section 4. In Section 5, numerical computations are conducted on three randomly generated instances, where good results are obtained. Conclusions and further research opportunities are remarked in Section 6.
2. Literature Review

Golden et al. [10] were among the first to investigate the interrelated problems of inventory allocation and vehicle routing. For an energy-products company that distributes liquid propane to its customers, Golden et al. proposed a simulation model to determine the set of customers to be serviced, the corresponding amounts to supply to the selected customers, and the way to route the vehicles to deliver the allocated amounts. Federgruen and Zipkin [9] approached the inventory routing problem as a special case of the vehicle routing problem for a single-day period. They considered stochastic demands and non-linear inventory costs, and suggested a non-linear integer programming formulation for the inventory and routing problem. Chien et al. [6] also developed a single-day model of the inventory and routing problem and proposed a mixed integer programming model, which attempts to find a less myopic solution by passing inventory information from one day to the next. A Lagrangian-based procedure was proposed to generate upper and lower bounds for the feasible solutions to the IRP, and good results have shown the effectiveness of the proposed procedure. For the inventory allocation and vehicle routing problems over a long time period, Dror and Ball [8] proposed an approach to take into account what happens after the single-day planning period. Dror and Ball gave a reduction procedure by which the long-term effect of the problem can be brought into a short-term period, such that the long-term delivery cost is minimized while no customer runs out of stock at any time over the planning horizon of interest. Anily and Federgruen [1] considered minimizing long-run average transportation and inventory costs by determining long-term routing patterns, where a fixed partition policy was analyzed for the IRP with constant deterministic demand rates and an unlimited number of vehicles. The routing patterns are determined using a modified circular partition scheme. A lower bound for the long-run average cost is also determined, by which the performance of the determined routing patterns can be evaluated. Following the fixed partition policy of Anily and Federgruen, Chan et al. [5] further analyzed zero-inventory ordering policies for the IRP and derived asymptotic worst-case bounds on performance for setting replenishment policies that minimize long-term average costs. However, as has been noted, travel times and the corresponding costs are severely affected by the traffic conditions of urban road networks; for the sake of realism, more attention needs to be paid to the variable nature of transportation times when modelling this routing problem.
3. The Inventory Routing Problem

In this section, mathematical programs are given for the IRP with variable lead times when traffic congestion is taken into account. First, the notation used is given below.

3.1. Notation

$K$: number of vehicles.
$n$: number of locations, indexed from 1 to $n$; index 0 denotes the central depot.
$Q$: total amount of product available at the central depot.
$b_k$: capacity of vehicle $k$.
$A$: ordering cost.
$u_i$: retailer $i$ demand rate.
$h$: inventory carrying cost.
$c_{ij}$: Euclidean distance from location $i$ to $j$.
$x_{ijk}$: binary variable, 1 if vehicle $k$ travels directly from location $i$ to $j$; 0 otherwise.
$y_{ik}$: binary variable, 1 if delivery point $i$ is assigned to route $k$; 0 otherwise.
$w_i$: amount delivered to location $i$.
$\tau_i$: lead time at location $i$.

3.2. Problem Formulation
Let $\rho$ be a converting factor from Euclidean distance to monetary unit, and let the inventory cost, expressed as a composite of order quantity and monetary unit at retailer $i$, be denoted by $q_i(w_i)$; specifically, let

$$q_i(w_i) = A\,\frac{u_i}{w_i} + \frac{h}{2}\,w_i,$$

which implies the unconstrained minimizer $w_i = \sqrt{2Au_i/h}$. The inventory routing problem can then be formulated as follows:

IRP

$$\min_{x, y, w} \; \rho \sum_{i,j,k} c_{ij}\, x_{ijk} + \sum_{i} q_i(w_i) \qquad (1)$$

subject to

$$w_i \ge u_i\, \tau_i(x, y), \quad i = 1, \ldots, n \qquad (2)$$

$$\sum_{i=0}^{n} w_i\, y_{ik} \le b_k, \quad k = 1, \ldots, K \qquad (3)$$

$$\sum_{i=1}^{n} w_i \le Q \qquad (4)$$

$$\sum_{k=1}^{K} y_{ik} = \begin{cases} K, & i = 0 \\ 1, & i = 1, \ldots, n \end{cases} \qquad (5)$$

$$y_{ik} = 0 \text{ or } 1, \quad i = 1, \ldots, n, \quad k = 1, \ldots, K \qquad (6)$$

$$\sum_{i=0}^{n} x_{ijk} = y_{jk}, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (7)$$

$$\sum_{j=0}^{n} x_{ijk} = y_{ik}, \quad i = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (8)$$

$$\sum_{(i,j) \in S \times S} x_{ijk} \le |S| - 1, \quad S \subseteq \{1, \ldots, n\}, \quad 2 \le |S| \le n - 1, \quad k = 1, \ldots, K \qquad (9)$$

$$x_{ijk} = 0 \text{ or } 1, \quad i = 0, \ldots, n, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (10)$$
The problem in (1)-(10) can be further decomposed into the following two closely related problems: the inventory allocation problem (IA) and the vehicle routing problem (VRP). The inventory allocation problem can be expressed as follows.

IA

$$\min_{w_i} \; q(w) = \sum_{i=1}^{n} q_i(w_i) \qquad (11)$$

subject to

$$w_i \ge u_i\, \tau_i(x, y), \quad i = 1, \ldots, n \qquad (12)$$

$$\sum_{i=1}^{n} w_i \le Q \qquad (13)$$
The vehicle routing problem can be expressed as follows.

VRP

$$\min_{x, y} \; \sum_{i,j,k} c_{ij}\, x_{ijk} \qquad (14)$$

subject to

$$\sum_{i=0}^{n} w_i\, y_{ik} \le b_k, \quad k = 1, \ldots, K \qquad (15)$$

$$\sum_{k=1}^{K} y_{ik} = \begin{cases} K, & i = 0 \\ 1, & i = 1, \ldots, n \end{cases} \qquad (16)$$

$$y_{ik} = 0 \text{ or } 1, \quad i = 1, \ldots, n, \quad k = 1, \ldots, K \qquad (17)$$

$$\sum_{i=0}^{n} x_{ijk} = y_{jk}, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (18)$$

$$\sum_{j=0}^{n} x_{ijk} = y_{ik}, \quad i = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (19)$$

$$\sum_{(i,j) \in S \times S} x_{ijk} \le |S| - 1, \quad S \subseteq \{1, \ldots, n\}, \quad 2 \le |S| \le n - 1, \quad k = 1, \ldots, K \qquad (20)$$

$$x_{ijk} = 0 \text{ or } 1, \quad i = 0, \ldots, n, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (21)$$
Consider a fixed route $k$ and the retailers $i$ with $y_{ik} = 1$; let $Y_k = \{i : y_{ik} = 1\}$. The vehicle routing problem in (14)-(21) can then be decomposed into a number of traveling salesman problems (TSP). For a fixed route $Y_k$, $k = 1, \ldots, K$, consider $x$ solving TSP($Y_k$); we have

TSP($Y_k$)

$$\min_{x} \; \sum_{i,j} c_{ij}\, x_{ij} \qquad (22)$$

subject to

$$\sum_{i=0}^{n} x_{ij} = 1, \quad j = 0, \ldots, n \qquad (23)$$

$$\sum_{j=0}^{n} x_{ij} = 1, \quad i = 0, \ldots, n \qquad (24)$$

$$\sum_{(i,j) \in S \times S} x_{ij} \le |S| - 1, \quad S \subseteq \{1, \ldots, n\}, \quad 2 \le |S| \le n - 1 \qquad (25)$$

$$x_{ij} = 0 \text{ or } 1, \quad i = 0, \ldots, n, \quad j = 0, \ldots, n \qquad (26)$$
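Because of the subtour-elimination constraints (25), even this fixed-route subproblem is combinatorial; for the very small tours that arise per vehicle, however, it can be solved exactly by enumeration. The following sketch, with hypothetical coordinates, does exactly that:

```python
# A tiny brute-force illustration of TSP(Y_k) in (22)-(26): for a small
# fixed tour set, enumerate all visiting orders out of the depot (node 0)
# and keep the cheapest; the coordinates below are hypothetical.
import itertools
import math

coords = {0: (0, 0), 1: (10, 5), 2: (-4, 8), 3: (6, -7)}   # depot = 0

def dist(i, j):
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

def tsp_bruteforce(customers):
    best_tour, best_len = None, float("inf")
    for perm in itertools.permutations(customers):
        tour = (0,) + perm + (0,)
        length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

print(tsp_bruteforce([1, 2, 3]))
```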
The corresponding inventory allocation problem, for a fixed tour $Y_k$, can be expressed as follows.

IA($Y_k$)

$$\min_{w_i} \; \sum_{i \in Y_k} q_i(w_i) \qquad (27)$$

subject to

$$w_i \ge u_i\, \tau_i(x, y), \quad i \in Y_k \qquad (28)$$
Therefore the inventory allocation problem in (11)-(13) can be re-written as follows. Let $W_k = \sum_{i \in Y_k} w_i$, $k = 1, \ldots, K$, and let $Q_k(W_k) = \sum_{i \in Y_k} q_i(w_i)$; we have

$$\min_{W_k} \; \sum_{k=1}^{K} Q_k(W_k) \qquad (29)$$

subject to

$$W_k \ge \sum_{i \in Y_k} u_i\, \tau_i(x, y) \qquad (30)$$

$$\sum_{k=1}^{K} W_k \le Q \qquad (31)$$

$$W_k \le b_k, \quad k = 1, \ldots, K \qquad (32)$$
For the inventory allocation problem (29)-(32), the lead time is determined by the vehicle routing problem (14)-(21); it is expressed implicitly through a VRP solution and has no closed form that can be solved directly. On the other hand, for the traveling salesman problem (22)-(26), or the vehicle routing problem, the inventory replenishment amounts $w$ are not determined without solving the inventory allocation problem. It has been noted by Federgruen and Zipkin [9] and Chien et al. [6] that exact solutions for the inventory routing problem can be difficult to find because of the interrelationship between inventory replenishments and routing patterns. In the following sections, we propose a new conceptual solution procedure for the inventory routing problem in (1)-(10) and employ the simplicial decomposition technique to deal effectively with the closely related problems of inventory allocation and vehicle routing in (11)-(13) and (14)-(21). We also develop a new class of strategies with tractable computational effort, as demonstrated in later sections.
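For intuition, the sketch below shows one simple heuristic way (not the exact optimizer developed in this paper) to produce a feasible allocation for (11)-(13) once lead times are fixed: start from the EOQ quantity, enforce the lead-time constraint (12), and rescale if the depot capacity (13) binds. All data values are hypothetical.

```python
# A hedged heuristic sketch of the inventory allocation problem (11)-(13):
# each w_i starts from the unconstrained EOQ quantity sqrt(2*A*u_i/h), is
# raised to cover lead-time demand u_i * tau_i as in (12), and the vector
# is scaled down if the depot capacity (13) is violated (which may in turn
# re-violate (12); a full solver would reconcile the two).
import math

def allocate(u, tau, A, h, Q):
    w = [max(math.sqrt(2.0 * A * ui / h), ui * ti) for ui, ti in zip(u, tau)]
    total = sum(w)
    if total > Q:                      # scale back to respect capacity (13)
        w = [wi * Q / total for wi in w]
    return w

# Hypothetical data: 4 retailers, lead times taken from current routes.
print(allocate(u=[5, 5, 5, 5], tau=[0.5, 0.8, 0.3, 1.1], A=1200, h=1.0, Q=50000))
```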
3.3. The Solution Procedure

Let $Y = \{Y_k, k = 1, \ldots, K\}$, $X = \{X_k, k = 1, \ldots, K\}$, and $W = \{W_k, k = 1, \ldots, K\}$, where $W_k = \sum_{i \in Y_k} w_i$, and let $\tau = \{\tau_k, k = 1, \ldots, K\}$, where $\tau_k = \sum_{i \in Y_k} \tau_i$. The solution set of the inventory allocation problem (29)-(32) is denoted by $I3(Y, \tau)$, and thus $W \in I3(Y, \tau)$; the solution set of the IA($Y_k$) problem in (27)-(28) is denoted by $I2(Y_k, \tau_k)$, and thus $W_k \in I2(Y_k, \tau_k)$. The solution set of the traveling salesman problem TSP($Y_k$) in (22)-(26) is denoted by $TSP(Y_k, W_k)$, and thus $X_k \in TSP(Y_k, W_k)$. The solution set of the vehicle routing problem VRP in (14)-(21) is denoted by $VRP(W)$, and thus $Y \in VRP(W)$, with $\tau \in \tau_{VRP}(W)$ denoting the set of lead times associated with routings in $VRP(W)$. The proposed solution procedure for the variable lead-time inventory routing problem in (1)-(10) can be conducted conceptually as follows. Let subscript $t$ denote the iteration index.
STEP 0. Given a set of routing patterns $Y_t$ and initial lead times $\tau_t$, set index $t = 0$.

STEP 1. Solve the inventory allocation problem in (29)-(32) and find the optimal inventory replenishment $W_t$ such that $W_t \in I3(Y_t, \tau_t)$. Also solve the corresponding traveling salesman problem TSP($Y_k$) in (22)-(26) and find the sequence of visiting orders to each retailer, $X_{kt}$, on a given route $k$, such that $X_{kt} \in TSP(Y_{kt}, W_{kt})$, $k = 1, \ldots, K$.

STEP 2. Improve $X_{kt}$, for $k = 1, \ldots, K$, by the TSP-MOD procedure.

STEP 3. Solve the vehicle routing problem VRP in (14)-(21) and find a new set $Y_{t+1}$ such that $Y_{t+1} \in VRP(W_t)$, via the VRP-COS procedure given below. Update the new lead-time set $\tau_{t+1}$ by applying the distance-to-time conversion factor, such that $\tau_{t+1} \in \tau_{VRP}(W_t)$. Set $t \leftarrow t + 1$.

STEP 4. Termination test. For a given value TMAX, if $t \ge$ TMAX then stop; otherwise return to STEP 1. A sketch of this loop is given below.
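The control flow of STEPs 0-4 can be summarized as follows; the four sub-procedures are stubbed with trivial placeholders so the skeleton runs, and they stand in for problems (29)-(32), TSP($Y_k$) with TSP-MOD, VRP-COS, and the lead-time update, respectively.

```python
# A runnable control-flow sketch of STEPs 0-4 (placeholder sub-procedures).
def solve_inventory_allocation(Y, tau):     # placeholder for (29)-(32)
    return {k: 100.0 for k in range(len(Y))}

def solve_and_improve_tsp(route, W):        # placeholder for TSP + TSP-MOD
    return route

def vrp_cos(W, Y):                          # placeholder for VRP-COS
    return Y

def lead_times(Y, factor):                  # convert route distance to time
    return [factor * len(route) for route in Y]

def inventory_routing(Y0, tau0, factor=0.01, TMAX=10):
    Y, tau, W = Y0, tau0, None
    for t in range(TMAX):                             # STEP 4: TMAX cycles
        W = solve_inventory_allocation(Y, tau)        # STEP 1
        Y = [solve_and_improve_tsp(r, W) for r in Y]  # STEPs 1-2
        Y = vrp_cos(W, Y)                             # STEP 3: re-route
        tau = lead_times(Y, factor)                   # STEP 3: new lead times
    return Y, tau, W

print(inventory_routing([[1, 2], [3, 4]], [0.5, 0.5]))
```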
For the traveling salesman problem, a tour construction is considered using the sweep method together with the improvement heuristic TSP-MOD, which can be conducted as follows.

TSP-MOD

T-STEP 1. For a given route $k$, construct an initial tour $X_k$ by the sweep method such that each retailer with replenishment $w_{it}$ within this tour is serviced and the corresponding route distance is minimized.

T-STEP 2. Improve the current tour $X_k$ by interchanging visiting retailers such that a lower route distance is achieved, using the 2-opt or 3-opt procedure.

T-STEP 3. Iterate T-STEPs 1-2 until no improvement is achieved.
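T-STEP 2 relies on standard 2-opt (and 3-opt) interchanges. A minimal 2-opt pass, with hypothetical coordinates and the depot held fixed at position 0, looks like this:

```python
# A standard 2-opt pass of the kind T-STEP 2 relies on: reverse a segment
# whenever doing so shortens the tour, and repeat until no move helps.
import math

def tour_length(tour, coords):
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, coords):
    improved = True
    while improved:                      # cf. T-STEP 3: iterate until no gain
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, coords) < tour_length(tour, coords) - 1e-9:
                    tour, improved = cand, True
    return tour

coords = {0: (0, 0), 1: (10, 0), 2: (10, 10), 3: (0, 10), 4: (5, 5)}
print(two_opt([0, 2, 1, 4, 3], coords))
```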
For the vehicle routing problem, when vehicle capacity is taken into account, the problem can be regarded as multiple TSPs, which can be solved iteratively in the following manner:

VRP-COS

V-STEP 1. Conduct TSP-MOD for each fixed route $k$.

V-STEP 2. Check the feasibility of each route $k$, $k = 1, \ldots, K$. If route $k$ violates the feasibility of the VRP in (14)-(21), remove visiting retailers in increasing order of inventory replenishment until feasibility is satisfied.

V-STEP 3. Make new routes $k'$ to include the removed retailers and satisfy the feasibility test.

V-STEP 4. Improve the current routes by using the branch interchange technique conducted in the following composites.

V-COS1. Use 2-opt first to interchange visiting retailers within the same tour until there is no improvement in minimizing the routing distance.

V-COS2. Use 3-opt secondly to interchange visiting retailers within the same tour until there is no improvement in minimizing the routing distance.

V-COS3. Use 2-opt first to interchange visiting retailers across different tours until there is no improvement in minimizing the routing distance.

V-COS4. Use 3-opt secondly to interchange visiting retailers across different tours until there is no improvement in minimizing the routing distance.

V-STEP 5. Iterate procedures V-COS1-4 until no improvement is achieved.
4. A New Solution Scheme for the IRP

In this section, a new class of implementation heuristics for conducting STEPs 0-4 of the solution procedure is developed, by which better, mutually consistent solutions for the inventory routing problem in (1)-(10) can be found in comparison with two individually separate solutions. The simplicial decomposition technique is employed, following the work of Von Hohenbalken [16] and Holloway [15]. Simplicial decomposition is a simple and direct approach for dealing with large-scale non-linear programming problems with simple constraints. In the simplicial decomposition approach, extreme points of a bounded polyhedral set are generated algorithmically by approximate sub-problems, and a master problem, alternately defined over a set of extreme points, is solved to produce a new iteration point. In the following, firstly, two sub-problems for the inventory allocation problem in (11)-(13) are given to effectively generate the extreme points in a bounded polyhedral set defined by the solutions of the vehicle routing problem VRP in (14)-(21), denoted by $VRP(W)$. Secondly, the master problem for (11)-(13) is solved over the convex hull of the extreme points generated by the sub-problems, and these processes are alternated iteratively until a predetermined threshold is achieved. Thirdly, the convergence of the simplicial decomposition for the inventory routing problem is established with mathematical theorems.
4.1. Sub-Problems for Inventory Allocations

In order to generate the extreme points for the inventory allocation problem in (11)-(13), two sub-problems are established. For a given iteration $t$, a sub-problem defining the bounded polyhedral set for the problem (11)-(13) can be characterized as a VRP in the following way. Given an iteration $t$, for each retailer $i$, $i = 1, \ldots, n$, with order quantity $w_{it}$, the variable lead time $\tau_{it}$ can be obtained, via the conversion factor from Euclidean distance to travel time, from the following solution $x_t$ and $y_t$.

Sub-VRP

$$\min_{x_t, y_t} \; \sum_{i,j,k} c_{ij}\, x_{ijkt} \qquad (33)$$

subject to

$$\sum_{i=0}^{n} w_{it}\, y_{ikt} \le b_k, \quad k = 1, \ldots, K \qquad (34)$$

$$\sum_{k=1}^{K} y_{ikt} = \begin{cases} K, & i = 0 \\ 1, & i = 1, \ldots, n \end{cases} \qquad (35)$$

$$y_{ikt} = 0 \text{ or } 1, \quad i = 1, \ldots, n, \quad k = 1, \ldots, K \qquad (36)$$

$$\sum_{i=0}^{n} x_{ijkt} = y_{jkt}, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (37)$$

$$\sum_{j=0}^{n} x_{ijkt} = y_{ikt}, \quad i = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (38)$$

$$\sum_{(i,j) \in S \times S} x_{ijkt} \le |S| - 1, \quad S \subseteq \{1, \ldots, n\}, \quad 2 \le |S| \le n - 1, \quad k = 1, \ldots, K \qquad (39)$$

$$x_{ijkt} = 0 \text{ or } 1, \quad i = 0, \ldots, n, \quad j = 0, \ldots, n, \quad k = 1, \ldots, K \qquad (40)$$
For each tour $Y_{kt}$, $k = 1, \ldots, K$, the linear sub-problem for inventory allocation to retailers can be determined as

Sub-IA

$$\min_{z_{it}} \; \sum_{i \in Y_{kt}} \nabla q_i(w_{it})\, z_{it} \qquad (41)$$

subject to

$$z_{it} \ge u_i\, \tau_{it}(x_t, y_t), \quad i \in Y_{kt}, \qquad (42)$$

where $z_{it}$ denotes the extreme point in the bounded set defined by (33)-(40).
4.2. The Master Problem

The master problem for inventory allocations in (11)-(13) can be determined by finding a convex combination of the extreme points generated from (41)-(42) that minimizes the objective function (11), as follows. Let $z_{it}$ denote a feasible solution solved from (41)-(42); the task is to find the non-negative weights $\beta_{it}$, for each retailer $i$, $i = 1, \ldots, n$, such that the minimum of the objective function value of the linear combination is achieved:

$$\min_{\beta_{it}} \; \sum_{i=1}^{n} q_i\Big(\sum_{t} \beta_{it}\, z_{it}\Big) \qquad (43)$$

subject to

$$\sum_{t} \beta_{it} = 1, \quad i = 1, \ldots, n \qquad (44)$$

$$\beta_{it} \ge 0, \quad \forall\, i, t. \qquad (45)$$
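Since each retailer's weights $\beta_{it}$ in (44)-(45) form a simplex over scalar extreme points $z_{it}$, the convex combination $\sum_t \beta_{it} z_{it}$ can attain any value between $\min_t z_{it}$ and $\max_t z_{it}$; for a convex $q_i$, the master problem therefore reduces, retailer by retailer, to a one-dimensional minimization over that interval. The sketch below exploits this with a golden-section search; the extreme-point values and cost parameters are hypothetical.

```python
# A hedged sketch of the master problem (43)-(45) for one retailer: minimize
# the convex inventory cost q over the interval spanned by the extreme
# points z_{it}, which is equivalent to optimizing the simplex weights.
import math

def q(w, A=1200.0, u=5.0, h=1.0):       # inventory cost A*u/w + h*w/2
    return A * u / w + h * w / 2.0

def golden_min(f, lo, hi, tol=1e-6):
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

z = [60.0, 95.0, 130.0]                 # hypothetical extreme points from (41)-(42)
w_star = golden_min(q, min(z), max(z))
print(w_star, q(w_star))
```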
4.3. A New Solution Scheme

Considering STEPs 0-4, a new solution scheme is addressed as follows. Let $\Gamma$ denote a set of extreme points bounded by the solution set of $VRP(W)$, and let $w_{it}$, $i = 1, \ldots, n$, be a feasible point of the IRP in (1)-(10).

SD-STEP 0. Set $\Gamma_t = \emptyset$ and index $t = 0$.

SD-STEP 1. (Sub-problem-VRP): Conduct the Sub-VRP in (33)-(40) to find the variable lead times $\tau_{it}$. Identify the tours $Y_{kt} = \{i : y_{ikt} = 1\}$, $i = 1, \ldots, n$, $k = 1, \ldots, K$.

SD-STEP 2. (Sub-problem-IA): For each fixed tour $Y_{kt}$, $k = 1, \ldots, K$, solve the linear sub-problem IA in (41)-(42) to obtain the extreme points. If

$$\nabla q(w_t)(z_t - w_t) \ge 0, \quad \forall\, z_t \in TSP(Y_{kt}), \qquad (46)$$

stop, and $w_t$ is optimal. Otherwise, augment the feasible set $\Gamma_{t+1} = \Gamma_t \cup \{z_t\}$ and go to SD-STEP 3.

SD-STEP 3. (Master problem): Let $w_{t+1}$ solve the master problem in (43)-(45), so that $w_{t+1} \in \operatorname{ArgMin}\{q(w) : w \in H(\Gamma_{t+1})\}$, where $H(\Gamma_{t+1})$ denotes the convex hull of the feasible point set $\Gamma_{t+1}$. Remove from the set $\Gamma_{t+1}$ all extreme points with zero weight in the expression of $w_{it}$, $i = 1, \ldots, n$, as a convex combination of elements of $\Gamma_{t+1}$. Let $t \leftarrow t + 1$ and return to SD-STEP 1.
As seen from the sub-problem IA, the feasible region for the master problem, $H(\Gamma_{t+1})$, contains the current iterate $w_t$ and the incoming feasible point $z_t$ generated by (41)-(42); when $w_t$ is not a solution, $z_t - w_t$ defines a descent direction along which a lower value of the objective function in problem (1)-(10) can be achieved, because a lower value of the lead time $\tau_t$ is taken into account. Therefore, a decrease in the objective function value in problem (1)-(10) at the new iterate $w_{t+1}$ can be ensured. The convergence proof then follows from the fact that $w_{t+1}$ solves the master problem in (43)-(45), in the following way.

Lemma 1. In the master problem (43)-(45),

$$\nabla q(w^*)(z - w^*) \ge 0 \qquad (47)$$

for all $z \in VRP(W)$ if and only if $w^*$ is optimal for the inventory routing problem in (1)-(10).

Proof. This is a standard nonlinear programming result; see Zangwill [17]. □

Lemma 2. If $w_t$ is not optimal for the inventory allocation problem in (11)-(13), then the objective function value satisfies

$$q(w_{t+1}) < q(w_t). \qquad (48)$$

Proof. As observed earlier, $w_t$ is a feasible solution to the master problem in (43)-(45). Because $w_{t+1}$ solves the master problem, we have

$$q(w_{t+1}) \le q(w_t). \qquad (49)$$

If $q(w_{t+1}) = q(w_t)$, then $w_t$ is also a minimum at iteration $t+1$, and

$$\nabla q(w_t)(z - w_t) \ge 0, \quad \forall\, z \in H(\Gamma_{t+1}), \qquad (50)$$

but this contradicts our assumption for the sub-problem IA in (41)-(42), since $z_t \in \Gamma_{t+1}$. □
Lemma 3. Let $\{w_t\}$ be the sequence generated by the sub-problems in (33)-(40) and (41)-(42). Then there cannot be a sub-sequence $\{w_t\}$, $t \in T_{\max}$, with the following three properties:

(1) $w_t \to w'$, $t \in T_{\max}$;
(2) $z_t \to z'$, $t \in T_{\max}$;
(3) $\nabla q(w')(z' - w') < 0$.

Proof. We prove this lemma by contradiction. Suppose there exists such a sub-sequence $T_{\max}$; that is, there exists an $\epsilon > 0$ such that

$$\nabla q(w')(z' - w') \le -\epsilon. \qquad (51)$$

Because the objective function $q(w)$ is continuously differentiable, there exists a $\bar{t} \in T_{\max}$ sufficiently large so that for any $t \ge \bar{t}$ we have

$$\nabla q(w_t)(z_t - w_t) \le -\frac{\epsilon}{3}, \qquad (52)$$

and a $\delta > 0$ such that for $0 \le \alpha \le \delta$,

$$\nabla q(w_t + \alpha(z_t - w_t))(z_t - w_t) \le -\frac{\epsilon}{9}. \qquad (53)$$

At iterate $t$, both $z_t$ and $w_t$ are feasible for the master problem in (43)-(45). Thus, there must exist $\bar{\alpha} \in (0, \delta)$ such that $w_t + \bar{\alpha}(z_t - w_t)$ is feasible for the master problem. According to the optimality of $w_{t+1}$, we have

$$q(w_{t+1}) \le q(w_t + \bar{\alpha}(z_t - w_t)), \qquad (54)$$

and by Taylor's expansion, with $0 \le a \le 1$,

$$q(w_{t+1}) \le q(w_t) + \bar{\alpha}\, \nabla q(w_t + a\bar{\alpha}(z_t - w_t))(z_t - w_t). \qquad (55)$$

Because $0 \le a\bar{\alpha} \le \delta$,

$$q(w_{t+1}) \le q(w_t) - \frac{\bar{\alpha}\epsilon}{9}. \qquad (56)$$

Since $q(w)$ is continuous and $q(w_t)$ is monotonically decreasing (see Lemma 2), the limit of $q(w_t)$, namely $q(w')$, exists. For $t$ sufficiently large,

$$q(w') \ge q(w_t) - \frac{\bar{\alpha}\epsilon}{27}. \qquad (57)$$

For $t \ge \bar{t}$ and sufficiently large so that (57) holds, expressions (56)-(57) imply that

$$q(w') \ge q(w_t) - \frac{\bar{\alpha}\epsilon}{27} > q(w_t) - \frac{\bar{\alpha}\epsilon}{9} \ge q(w_{t+1}), \qquad (58)$$

which contradicts the fact that $q(w_t)$ decreases monotonically to its limit $q(w')$; thus the lemma is proved. □

Theorem 4. Given that $q(w)$ is continuously differentiable, the simplicial decomposition approach either terminates at a solution or generates a sequence $\{w_t\}$ for which every sub-sequence limit is a solution to the inventory routing problem in (1)-(10).

Proof. If the simplicial decomposition algorithm terminates, the current iterate $w_t$ must satisfy the stopping condition (in SD-STEP 2); by Lemma 1, $w_t$ must be a solution. When a sequence $\{w_t\}$ is generated, Lemma 3 ensures that the limit of every convergent sub-sequence is a solution. □
5. Numerical Computations

In this section, the proposed new class of implementation heuristics, the SD-STEPs given in Section 4, is tested on three randomly generated instances of the inventory routing problem. In the instances of interest, the retailers are scattered over X-Y coordinates, at integer points with $X \in [-100, 100]$ and $Y \in [-100, 100]$. The daily demand rate $u_i$, $i = 1, \ldots, n$, is set to 5 items. The ordering cost $A$ is set to $1200 and the inventory carrying cost $h$ to $1.0 per day per item. The converting factor from Euclidean distance to monetary unit, $\rho$, is set to $150 per unit distance. For the following three instances, the numbers of retailers are 10, 20 and 50. The number of vehicles is 10, each with capacity 3900 units. The depot capacity is set to 50000 units.
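A small generator matching the stated settings can be used to reproduce instances of this type; the random seed and the placement of the depot at the origin are our assumptions, not the exact instances used in the experiments.

```python
# A small generator for test instances matching the stated settings
# (retailer coordinates uniform on the integer grid [-100, 100]^2,
# demand 5 items/day); depot location and seed are hypothetical.
import random

def make_instance(n_retailers, seed=0):
    rng = random.Random(seed)
    coords = {0: (0, 0)}                        # depot at the origin (assumed)
    for i in range(1, n_retailers + 1):
        coords[i] = (rng.randint(-100, 100), rng.randint(-100, 100))
    params = dict(u=5, A=1200.0, h=1.0, rho=150.0,
                  K=10, b=3900, Q=50000)
    return coords, params

print(make_instance(10))
```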
Regarding the time periods of the planning horizon, 10 replenishment cycles are taken into account, i.e., TMAX = 10. The performance indices are expressed as three kinds of composite costs in problem (1): the composite transportation cost evaluated from the vehicle routing problem, the composite inventory cost evaluated from the inventory allocation problem, and the composite total cost, which is the sum of the composite transportation and inventory costs. As the conventional approach to solving the inventory allocation and vehicle routing problems, an EOQ-based stock policy is used, where the lead time is regarded as fixed and the inventory allocation and vehicle routing problems are accordingly solved separately and iteratively until the termination condition holds. Computational results are summarized in Figures 1-9. For the first instance, as seen from Figures 1-3, the proposed SD-STEPs (SD for short) outperform the EOQ-based stock policy (EOQ for short), yielding approximately 7.6% improvement in composite transportation cost and 5.4% in composite inventory cost. For the composite total cost, SD achieved nearly 6.3% improvement over EOQ. For the second instance, n = 20, as seen from Figures 4-6, SD outperformed EOQ by approximately 18% in composite transportation cost, 20% in composite inventory cost and 19% in composite total cost. For the third instance, n = 50, as seen from Figures 7-9, SD again outperformed EOQ, by approximately 8% in composite total cost. The numerical experiments reported above were run on a Sun SPARC machine and coded in C++. Total computation times are within 1 minute of CPU time on that machine.
Figure 1. Composite transportation cost for 10 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 2. Composite inventory cost for 10 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 3. Composite total cost for 10 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 4. Composite transportation cost for 20 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 5. Composite inventory cost for 20 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 6. Composite total cost for 20 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 7. Composite transportation cost for 50 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 8. Composite inventory cost for 50 retailers (composite cost vs. iteration; EOQ and SD series).
Figure 9. Composite total cost for 50 retailers (composite cost vs. iteration; EOQ and SD series).
6. Conclusions and Discussions

In this paper, we considered a combined inventory allocation and vehicle routing problem for one depot and many geographically dispersed retailers when variable lead-times are taken into account in congested urban logistics networks. Mathematical programs were given for this combined problem, and a new solution scheme with global convergence was developed to solve this complicated problem effectively. Numerical experiments were conducted on three randomly generated IRPs, where the proposed class of implementation heuristics was carried out and good results were consistently obtained. To cover wider variations of inventory routing instances, further computations are being undertaken to investigate the efficiency and robustness of the proposed solution procedure.
Acknowledgements

Thanks go to the Taiwan National Science Council for support via grants NSC 96-2416-H-259-010-MY2 and NSC 98-2410-H-259-009-MY3.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 139-151
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 6
ACCELERATING ITERATIVE SOLVERS WITH RECONFIGURABLE HARDWARE

Issam Damaj*
Division of Sciences and Engineering, American University of Kuwait, Safat, Kuwait
Abstract

In this chapter, we aim at increasing the usage of iterative methods in all possible fields by accelerating such solvers using Reconfigurable Hardware. To demonstrate the acceleration of these solvers, we implement the Jacobi solver on different classes of FPGAs, such as Virtex II Pro, Altera Stratix and Spartan3L. The design presented is implemented using Handel-C, a compiler with hardware output. Obtained results show that reconfigurable hardware is suitable for realizing accelerated versions of such solvers.
1. Introduction

Physical, chemical and biological phenomena are modeled using Partial Differential Equations (PDEs). Interpreting and solving PDEs is the key to understanding the behavior of the modeled system. The broad field of modeling real systems has drawn researchers' attention to designing efficient algorithms for solving PDEs. The two basic approaches for finding the solution of the modeled system are the Direct and the Iterative Methods. In Direct Methods, the exact solution of the system is found using a finite number of operations. In Iterative Methods, an approximate guess of the solution to the PDE is made; this guess is then used in the second iteration to generate another approximate solution with a higher accuracy than the initial guess [1]. Iterative Methods are more powerful than Direct Methods and are thus the methods of choice for solving the complex systems modeled today. Examples of Iterative Methods are: Multigrid, Successive Over Relaxation, Gauss-Seidel, and Jacobi.
* E-mail address: [email protected]
Researchers have benefited from the continuous advances in hardware devices and software tools to accelerate the computation of complex mathematical problems [2]. At early stages, algorithms were designed and implemented to run on a general purpose processor (software). Techniques for optimizing and parallelizing the algorithm, when possible, were then devised to achieve better performance. As applications get more complicated, the performance provided by processors deteriorates. Better performance can be achieved using dedicated hardware, where the algorithm is digitally mapped onto a silicon chip. Though it provides better performance than traditional processors, customized hardware lacks flexibility. In the last decade, a new computing paradigm, Reconfigurable Computing (RC), has emerged [3]. RC overcomes the limitations of the processor and the IC technology. RC benefits from the flexibility offered by software and the performance offered by hardware [3-5]. RC has successfully accelerated a wide variety of applications including cryptography and signal processing [6]. This requires a reconfigurable hardware device, such as a Field Programmable Gate Array (FPGA), and a software design environment that aids in the creation of configurations for the reconfigurable hardware [3].

In this chapter, we aim at increasing the usage of iterative methods in all possible fields by accelerating such solvers using Reconfigurable Hardware. To demonstrate the acceleration of these solvers, we implement the Jacobi solver on different classes of FPGAs, such as Virtex II Pro, Altera Stratix and Spartan3L. The design presented is implemented using Handel-C, a compiler with hardware output. Obtained results show that reconfigurable hardware is suitable for realizing accelerated versions of such solvers.
2. Reconfigurable Computing

Today, it has become possible to benefit from the advantages of both software and hardware with the presence of the Reconfigurable Computing paradigm [3]. Actually, the first idea to fill the gap between the two computing approaches, hardware and software, goes back to the 1960s, when Gerald Estrin proposed the concept of RC [7]. The basic idea of Reconfigurable Computing is the "ability to perform certain computations in hardware to increase the performance, while retaining much of the flexibility of a software solution" [3]. The realization of the RC paradigm is made possible by the presence of programmable hardware such as large-scale Complex Programmable Logic Device (CPLD) and Field Programmable Gate Array (FPGA) chips [8]. Reconfigurable computing involves the modification of the logic within the programmable device to suit the application at hand.
2.1. Hardware Compilation

There are certain procedures to be followed before implementing a design on an FPGA. First, the user should prepare his/her design by using either a schema editor or one of the Hardware Description Languages (HDLs), such as VHDL (Very high scale integrated circuit Hardware Description Language) and Verilog. With schema editors, the designer draws his/her design by choosing from the variety of available components (multiplexers, adders,
resistors, etc.) and connects them by drawing wires between them. A number of companies supply schema editors where the designer can drag and drop symbols into a design and clearly annotate each component [9]. Schematic design is considered simple and easy for relatively small designs. However, the emergence of big and complex designs has substantially decreased the popularity of schematic design while increasing the popularity of HDL design.

Using an HDL, the designer has the choice of describing either the structure or the behavior of his/her design. Both VHDL and Verilog support structural and behavioral descriptions of the design at different levels of abstraction; a half adder, for example, can be modeled in either style. In structural design, a detailed description of the system's components, sub-components and their interconnects is specified; the system appears as a collection of gates and interconnects [9]. Though it has the great advantage of yielding an optimized design, structural representation becomes hard as the complexity of the system increases. In behavioral design, the system is considered as a black box with inputs and outputs only, without paying attention to its internal structure. In other words, the system is described in terms of how it behaves rather than in terms of its components and the interconnections between them. Though it requires more effort, structural representation is more advantageous than behavioral representation in the sense that the designer can specify the information at the gate level, allowing optimal use of the chip area [10]. It is possible to have more than one structural representation for the same behavioral program. Noting that modern chips are too complex to be designed using the schematic approach, we will choose an HDL instead of the schematic approach to describe our designs.

Whether the designer uses a schematic editor or an HDL, the design is fed to an Electronic Design Automation (EDA) tool to be translated to a netlist. The netlist can then be fitted on the FPGA using a process called place and route, usually completed by the FPGA vendors' tools. Then the user has to validate the place and route results by timing analysis, simulation and other verification methodologies. Once the validation process is complete, the binary file generated is used to (re)configure the FPGA device. More about this process is found in the coming sections.
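To make the behavioral/structural distinction concrete without the lost VHDL listing, here is a language-neutral sketch of a half adder in C++ (our illustration; the chapter's own example was in VHDL):

```cpp
#include <cstdio>

// Behavioral view: state what the half adder computes.
void half_adder_behavioral(bool a, bool b, bool& sum, bool& carry) {
    sum = a != b;    // sum is the XOR of the inputs
    carry = a && b;  // carry is the AND of the inputs
}

// Structural view: compose the same circuit from explicit "gate" primitives.
bool xor_gate(bool x, bool y) { return (x || y) && !(x && y); }
bool and_gate(bool x, bool y) { return x && y; }

void half_adder_structural(bool a, bool b, bool& sum, bool& carry) {
    sum = xor_gate(a, b);
    carry = and_gate(a, b);
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b) {
            bool s1, c1, s2, c2;
            half_adder_behavioral(a != 0, b != 0, s1, c1);
            half_adder_structural(a != 0, b != 0, s2, c2);
            std::printf("%d + %d -> sum=%d carry=%d (views agree: %s)\n",
                        a, b, int(s1), int(c1),
                        (s1 == s2 && c1 == c2) ? "yes" : "no");
        }
    return 0;
}
```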
Implementing a logic design on an FPGA is depicted in Figure 1.

Figure 1. FPGA design flow: HDL source code, synthesize, place and route, generate bitstream file, configure.
The above process consumes a remarkable amount of time, largely because of the design entry the user must provide in an HDL, most probably VHDL or Verilog. The complexity of designing in HDL, which has been compared to programming in assembly language, is overcome by raising the abstraction level of the design; this move has been made by a number of companies such as Agility, Cadence and Synopsys. These companies offer higher-level languages with concurrency models to allow faster design cycles for FPGAs than traditional HDLs. Examples of higher-level languages are Handel-C, SystemC, and Superlog [9] [11].
2.2. Handel-C Language

Handel-C is a high-level language for the implementation of algorithms in hardware. It compiles programs written in a C-like syntax with additional constructs for exploiting parallelism [9]. The Handel-C compiler comes packaged with the Agility DK Design Suite, which also includes functions and a memory controller for accessing the external memory on the FPGA. A big advantage, compared to other C-to-FPGA tools, is that Handel-C targets hardware directly and provides a few hardware optimizing features [12]. In contrast to other HDLs, such as VHDL, Handel-C does not support gate-level optimization. As a result, a Handel-C design uses more resources on an FPGA than a VHDL design and usually takes more time to execute. In the following subsections, we describe the Handel-C features that we have used in our design [12] [13].
2.2.1. Types and Type Operators

Almost all ANSI-C types are supported in Handel-C, with the exception of float and double. Yet, floating point arithmetic can still be performed using the floating point library provided by Agility. Handel-C also supports all ANSI-C storage class specifiers and type qualifiers except volatile and register, which have no meaning in hardware. Handel-C offers additional types for creating hardware components such as memories, ports, buses and wires. Handel-C variables can only be initialized if they are global or if declared as static or const. Handel-C types are not limited to a fixed width: when targeting hardware, there is no need to be tied to a certain width, so variables can be of different widths, minimizing the hardware usage.
2.2.2. Par Statement

The notion of time in Handel-C is fundamental. Each assignment happens in exactly one clock cycle; everything else is "free" [12]. An essential feature in Handel-C is the 'par' construct, which executes instructions in parallel.
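As a loose software analogue only (our sketch; std::thread does not model the single-cycle semantics of hardware), the effect of a Handel-C block such as par { a = 1; b = 2; } can be pictured in C++ as:

```cpp
#include <cstdio>
#include <thread>

int main() {
    int a = 0, b = 0;
    // Both assignments proceed concurrently; in Handel-C's `par` they
    // would both complete in exactly one clock cycle.
    std::thread t1([&a] { a = 1; });
    std::thread t2([&b] { b = 2; });
    t1.join();
    t2.join();
    std::printf("a=%d b=%d\n", a, b);
    return 0;
}
```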
2.2.3. Handel-C Targets

Handel-C supports two targets. The first is a simulator that allows development and testing of code without the need to use hardware (P1 in Figure 2). The second is the synthesis of a netlist for input to place and route tools, which are provided by the FPGA vendors (P2 in Figure 2).
Figure 2. Handel-C targets.
The remainder of this section describes the phases involved in P2; as is clear from P1, we can test and debug our design when it is compiled for simulation. The flow of the second target involves the following steps:
Compile to netlist: The input to this phase is the source code. A synthesis engine, usually provided by the FPGA vendor, translates the original behavioral design into gates and flip-flops. The resultant file is called the netlist. Generally, the netlist is in the Electronic Design Interchange Format (EDIF). An estimate of the logic utilization can be obtained from this phase.

Place and Route (PAR): The input to this phase is the EDIF file generated from the previous phase, i.e. after synthesis. All the gates and flip-flops in the netlist are physically placed and mapped to the FPGA resources. The FPGA vendor tool should be used to place and route the design. All design information regarding timing, chip area and resource utilization is generated and controlled for optimization at this phase.

Programming and configuring the FPGA: After synthesis and place and route, a binary file will be ready to be downloaded into the FPGA chip [14] [15].
3. Iterative Methods

A large number of physical phenomena can be expressed as systems of linear equations. The two basic approaches for finding the solution of the modeled system are the Direct and the Iterative Methods. In Direct Methods, the exact solution of the system is found using a finite number of operations. In Iterative Methods, an approximate guess of the solution is made; this guess is then used in the second iteration to generate another approximate solution with a higher accuracy than the initial guess [1]. Iterative Methods are more powerful than Direct Methods and are thus the methods of choice for solving the systems modeled today. The well-known iterative methods are: Multigrid, Successive Over Relaxation, Gauss-Seidel, and Jacobi. In this chapter, we have chosen the Jacobi method to study the feasibility of accelerating iterative methods using reconfigurable hardware.
3.1. Description of the Jacobi Algorithm

The Jacobi Method is considered the simplest iterative method for solving a linear system. As is the case with the other iterative methods (Gauss-Seidel, SOR and Multigrid), the Jacobi technique starts with an initial estimate of the true solution, and at each step the current approximate solution is used to produce a better approximation of the true solution. This iteration continues until the approximate solution is sufficiently close to the true solution. Unlike the Gauss-Seidel and SOR strategies, where the update of the (i+1)-th element depends on the update of the i-th element, the Jacobi strategy updates all the elements at the same time [16]. Given the linear system of equations:
Ax = b

where A can be split into three matrices, the diagonal (D), an upper triangular (U) and a lower triangular (L), with D the diagonal part of A, U the upper part of A, and L the lower part of A, A can be written as:

A = D + L + U

Therefore, (D + L + U)x = b can be rewritten as:

Dx = −(L + U)x + b

and

x = −D⁻¹(L + U)x + D⁻¹b

This leads to the iterative Jacobi technique:

x^(k) = −D⁻¹(L + U)x^(k−1) + D⁻¹b, where k = 1, 2, ...

The convergence of the Jacobi technique is guaranteed if the matrix A is diagonally dominant; i.e., in every row of the matrix, the magnitude of the diagonal entry in that row is larger than the sum of the magnitudes of all the other entries in that row [16].
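To make the update rule concrete, the following is a minimal C++ sketch of the Jacobi iteration (our own illustration, not the chapter's Handel-C or C++ source; the test system and tolerance are arbitrary):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One Jacobi sweep computes x_new = D^{-1} (b - (L + U) x): each component
// uses only values from the previous iterate, so all n updates are
// independent -- this is the parallelism exploited in hardware.
std::vector<double> jacobi(const std::vector<std::vector<double>>& A,
                           const std::vector<double>& b,
                           double tol = 1e-10, int max_iter = 1000) {
    const std::size_t n = b.size();
    std::vector<double> x(n, 0.0), x_new(n, 0.0);
    for (int k = 0; k < max_iter; ++k) {
        for (std::size_t i = 0; i < n; ++i) {
            double sigma = 0.0;                   // row i of (L + U) x
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) sigma += A[i][j] * x[j];
            x_new[i] = (b[i] - sigma) / A[i][i];  // divide by the diagonal
        }
        double diff = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            diff = std::max(diff, std::fabs(x_new[i] - x[i]));
        x.swap(x_new);
        if (diff < tol) break;                    // converged
    }
    return x;
}

int main() {
    // Diagonally dominant system, so Jacobi is guaranteed to converge.
    std::vector<std::vector<double>> A = {{4, 1, 1}, {1, 5, 2}, {1, 2, 6}};
    std::vector<double> b = {6, 8, 9};
    std::vector<double> x = jacobi(A, b);
    std::printf("x = (%.6f, %.6f, %.6f)\n", x[0], x[1], x[2]);
    return 0;
}
```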
4. Hardware Implementation of Jacobi

We used Handel-C, a higher-level hardware design tool, to design the Jacobi method. Handel-C comes packaged with the DK Design Suite from Agility. It allows the designer to focus more on the specification of the algorithm rather than adopting a structural approach to coding [12]. Handel-C syntax is similar to ANSI-C with additional extensions for expressing parallelism [12]. One of the most important features of Handel-C used in our implementation is the 'par' construct, which allows statements in a block to be executed in parallel and in the same clock cycle. Our design has been tested using the Handel-C simulator; afterwards, we targeted a Xilinx Virtex II Pro FPGA, an Altera Stratix FPGA, and a Spartan3L, which is embedded in an RC10 FPGA-based system from Agility. We used the proprietary software provided by the device vendors to synthesize, place and route, and analyze the design [12], [17], [18].

In Figure 3 and Figure 4, we present a sequential and a parallel version of the Jacobi design. In the parallel version, we used the 'par' construct whenever it was possible to execute more than one instruction in parallel in the same clock cycle without affecting the logic of the source code. The dots in the combined flowchart/concurrent process model shown in Figure 4 represent replicated instances. Figure 3 shows the traditional way of sequentially executing instructions on a general purpose processor. Executing instructions in parallel has shown a substantial improvement in the execution of the algorithm.

To handle the floating point arithmetic operations that are essential in finding the solution to a PDE using iterative methods, we used the Pipelined Floating Point Library provided by Agility [12]. However, an unresolved bug in the current version of the DK simulator limited the usage of floating point operations to four in the design. The only possible way to avoid this failure was to convert/unpack the floating point numbers to integers and perform integer arithmetic on the obtained unpacked numbers. Though it costs more logic, the integer operations on the unpacked floating point numbers have a minor effect on the total number of the design's clock cycles.
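The chapter does not detail its conversion scheme; one common way to unpack an IEEE-754 single-precision value into integer fields, shown purely for illustration, is:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Split a 32-bit float into sign, exponent and mantissa fields so that
// subsequent arithmetic can be carried out in integer logic.
void unpack_float(float f, uint32_t& sign, uint32_t& exponent, uint32_t& mantissa) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);  // reinterpret the raw bit pattern
    sign     = bits >> 31;
    exponent = (bits >> 23) & 0xFFu;
    mantissa = bits & 0x7FFFFFu;
}

int main() {
    uint32_t s, e, m;
    unpack_float(3.5f, s, e, m);
    std::printf("sign=%u exponent=%u mantissa=0x%06X\n", s, e, m);
    return 0;
}
```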
5. Experimental Results

Our results are based on two criteria:

Speed of convergence: the time it takes the Jacobi method to find the solution to the PDE at hand; in other words, the time needed to execute the Jacobi algorithm. In the hardware implementation, the speed of convergence is measured as the number of clock cycles of the design divided by the frequency at which the design operates. The first quantity is found using the simulator, while the second is found using the timing analysis report generated by the FPGA vendor's tool.

Chip area: this performance criterion measures the number of occupied slices on the FPGA on which the design is implemented. The number of occupied slices is reported by the FPGA vendor's place and route tool.
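In symbols (notation ours), the measured execution time is

$$ T_{\mathrm{exec}} = \frac{N_{\mathrm{cycles}}}{f_{\mathrm{clock}}}, $$

with $N_{\mathrm{cycles}}$ taken from the simulator and $f_{\mathrm{clock}}$ from the timing analysis report.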
Figure 3. Jacobi flowchart, sequential version.
We use the respective design tools (DK Design Suite, Xilinx ISE 8.1i and Quartus II 5.1) to analyze and report the performance results on each FPGA. The software version was written in C++, compiled using Microsoft Visual Studio .NET, and run on a 2.0 GHz Pentium M processor with 1.99 GB of RAM.
Figure 4. Jacobi parallel version, showing the combined flowchart/concurrent process model. The dots represent replicated instances.
The execution time of Jacobi in hardware (Handel-C) and in software (C++) is shown in Figure 5 for different problem sizes. As the figure shows, there is a significant improvement in the execution time of the hardware implementation of the algorithm over the software implementation. The speedup of the Jacobi algorithm is presented directly in Table 1. The Handel-C code was synthesized for a Xilinx Virtex II Pro (2vp7ff672-1), an Altera Stratix (ep1s10f484C5), and a Spartan3L (3s1500lfg320-4), which is embedded on the RC10 board from Agility. Tables 2, 3 and 4 report the obtained synthesis results.
Figure 5(a). Jacobi execution time results in both versions, Handel-C and C++ (execution time in seconds vs. mesh size).

Figure 5(b). Jacobi execution time results in both versions, Handel-C and C++.
Table 1. Design Speedup: Execution Time (C++) / Execution Time (Handel-C)

Mesh Size     Speedup
8x8           223.8095
16x16         56.2
32x32         5.676667
64x64         2.89267
128x128       1.409887
256x256       2.287887
512x512       2.386364
1024x1024     0.753898
2048x2048     1.391351
Table 2. Virtex II Pro Synthesis Results

Mesh Size     Number of Occupied Slices     Total equivalent gate count
8x8           146                           3,229
16x16         159                           3,397
32x32         299                           5,090
64x64         380                           7,849
128x128       499                           11,897
256x256       839                           17,864
512x512       1286                          23,649
1024x1024     1890                          31,327
2048x2048     3198                          35,839
Table 3. RC10 Spartan3L Synthesis Results

Mesh Size     Number of Occupied Slices     Total equivalent gate count
8x8           416                           356,109
16x16         599                           357,631
32x32         7326                          359,989
64x64         9010                          342,768
128x128       1198                          389,999
256x256       1665                          397,987
512x512       2810                          498,030
Table 4. Altera Stratix Synthesis Results

Mesh Size     Total Logic Elements     Logic element usage by number of LUT inputs     Total Registers
8x8           610                      354                                             189
16x16         709                      401                                             232
32x32         880                      556                                             300
64x64         1001                     681                                             385
128x128       1286                     801                                             390
256x256       1590                     950                                             476
512x512       2589                     1,101                                           560
1024x1024     3342                     1,499                                           689
2048x2048     3927                     1,941                                           819
6. Conclusion

In this chapter, we aimed at increasing the usage of iterative methods in all possible fields by accelerating such solvers using reconfigurable hardware. To demonstrate the acceleration of these solvers, we implemented the Jacobi solver on different classes of FPGAs. The design was coded and implemented using Handel-C, a compiler with hardware output, and then mapped onto high-performance FPGAs: Virtex II Pro, Altera Stratix, and Spartan3L, the last embedded in the RC10 board from Agility. We used the FPGA vendors' tools to analyze the performance of our hardware implementation. For testing purposes, we designed a software version of the algorithm and compiled it using Microsoft Visual Studio .NET. The synthesis results obtained show that it is feasible to implement iterative methods on reconfigurable hardware. The timing results show that the hardware implementation of the Jacobi algorithm outperforms the software implementation; thus, iterative methods can be accelerated using reconfigurable hardware. The Jacobi speedup can be further increased by designing a pipelined version of the algorithm, and its efficiency can be further improved by moving from Handel-C to a lower-level HDL such as VHDL. We also consider accelerating more complicated iterative solvers, such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG), by implementing them on reconfigurable hardware devices.
References

[1] D. Young. Iterative Methods for Solving Partial Difference Equations of Elliptic Type. Ph.D. thesis, Harvard University, 1950.
[2] D. Bailey and J.M. Borwein. Future Prospects for Computer-assisted Mathematics. Canadian Mathematical Society Notes, 37(8), 2-6, 2005.
[3] K. Compton and S. Hauck. Reconfigurable Computing: A Survey of Systems and Software. ACM Computing Surveys, 34(2), 171-210, 2002.
[4] R. Enzler. The Current Status of Reconfigurable Computing. Technical Report, Electronics Lab., Swiss Federal Institute of Technology (ETH) Zurich, 1999.
[5] Y. Li, T. Callahan, E. Darnell, R. Harr, U. Kurkure and J. Stockwood. Hardware-Software Co-Design of Embedded Reconfigurable Architectures. In 37th Design Automation Conference, Los Angeles, CA, pp. 507-512, 2000.
[6] A.J. Elbirt and C. Paar. An FPGA Implementation and Performance Evaluation of the Serpent Block Cipher. ACM/SIGDA International Symposium on FPGAs, pp. 33-40, 2000.
[7] F. Vahid and T. Givargis. Embedded Systems Design: A Unified Hardware/Software Introduction. New York: Wiley, 2002.
[8] T.J. Todman, G.A. Constantinides, S.J.E. Wilton, O. Mencer, W. Luk and P.Y.K. Cheung. Reconfigurable computing: architectures and design methods. IEE Proceedings - Computers and Digital Techniques, vol. 152, no. 2, pp. 193-197, 2005.
[9] J. Turley. How Chips are Designed. Prentice Hall Professional Technical Reference, 2003. http://www.phptr.com/articles/article.asp?p=31679&seqNum=2&rl=
[10] S.K. Valentina. Designing A Digital System with VHDL. Academic Open Internet Journal, vol. 11, 2004.
[11] D. Pellerin and S. Thibault. Practical FPGA Programming in C. Upper Saddle River, NJ: Prentice Hall Professional Technical Reference, 2005.
[12] Agility, http://www.agilityds.com, 2008.
[13] C. Peter. Overview: Hardware Compilation and the Handel-C Language. Oxford University Computing Laboratory, 2000. http://web.comlab.ox.uk/oucl/work/christian.peter/overview_handelc.html
[14] J. Cong. FPGAs Synthesis and Reconfigurable Computing. University of California, Los Angeles, 1997. http://www.ucop.edu/research/micro/96_97/96_176.pdf
[15] J. Shewel. A Hardware/Software Co-Design System using Configurable Computing Technology, 1998. http://ipdps.cc.gatech.edu/1998/it/schewel.pdf
[16] D.P. Bertsekas and J.N. Tsitsiklis. Some aspects of parallel and distributed iterative algorithms -- a survey. Automatica, vol. 27, no. 1, pp. 3-21, 1991.
[17] Altera Inc., www.altera.com, 2008.
[18] Xilinx, http://www.xilinx.com, 2008.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 153-184
ISBN: 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 7
FAIR DIVISION PROBLEMS WITH NONADDITIVE EVALUATIONS: EXISTENCE OF SOLUTIONS AND REPRESENTATION OF PREFERENCE ORDERINGS

Nobusumi Sagara∗
Faculty of Economics, Hosei University, 4342 Aihara, Machida, Tokyo 194-0298, Japan
∗E-mail address: [email protected]
Abstract

The purpose of this chapter is to investigate fair division problems in which each player has nonadditive utility functions on a σ-algebra. To this end, we report the known results on the characterization and existence of solutions for additive utilities and demonstrate how the additive case can(not) be extended to the nonadditive case. Moreover, we axiomatize preference orderings on σ-algebras representable by numerical functions (utility functions). In this chapter, we formulate representations of partial orderings on a σ-algebra in terms of nonadditive set functions that satisfy the appropriate requirements of convexity and continuity. We provide several nonadditive representation theorems.
Keywords: Fair division; nonatomic finite measure; nonadditive set function; Pareto optimality; α-maximin optimality; α-equitability; envy-freeness; envy minimality; convexity; supermodularity; Choquet integral; core of a game; preference ordering; utility function. 2000 Mathematics Subject Classification: Primary 28B05, 28E10; Secondary 91A30, 91B16.
1. Introduction
Dividing fixed resources among members of a society to fulfill efficiency and fairness is a central theme of social decision making. This chapter discusses the problem of dividing a “cake” among a finite number of players in such a way that every player is satisfied with
the piece of cake that he or she receives. This problem can be formulated as the partitioning of a measurable space among finitely many players. Although this mathematical problem has attracted much attention in recent years, it has a long history (for classical studies, see Steinhaus [55, 56]). Since the seminal work by Dubins and Spanier [20], many solution concepts have been proposed and the existence of their solutions investigated. While Pareto optimality is a standard criterion for efficiency, many other fairness criteria are known in the literature: α-envy freeness (Ichiishi and Idzik [29]), super envy freeness (Barbanel [3]), group envy freeness (Berliant et al. [7]), egalitarian equivalence [7], α-fairness [20], α-maximin optimality (Legut and Wilczyński [36]), α-leximin optimality [20], α-equitability [36], envy minimality (Dall'Aglio and Hill [15]), and lexicographic envy minimality [15]. Among these solution concepts, the compatibility of efficiency and fairness should be addressed.

A common assumption in the theory of fair division is that the preference orderings of each player are represented by a nonatomic probability measure. Representing a preference ordering by a probability measure means that the corresponding utility functions are countably additive on a σ-algebra, and consequently assumes constant marginal utility. This implies that given an element E in a σ-algebra F, a utility function ν on F satisfies ν(A ∪ E) − ν(A) = ν(E) for every element A ∈ F with A ∩ E = ∅. Obviously, this severe restriction on preference orderings is difficult to justify from an economics point of view. The conditions under which the players' preference orderings can have additive or nonadditive representations should also be addressed.

The purpose of this chapter is twofold. First, we investigate fair division problems in which each player has nonadditive utility functions on a σ-algebra. To this end, we report known results on the characterization and existence of solutions for the case of additive utilities and demonstrate how the additive case can(not) be extended to the nonadditive case. If the preference orderings of the players are represented by a nonatomic probability measure, then Lyapunov's convexity theorem (see Lyapunov [38]) guarantees the convexity and compactness of the utility possibility set, crucial to establishing the logical implications and the existence of solutions. In consideration of nonadditive utility functions on σ-algebras, such a strong property is no longer guaranteed. For the compactness of the utility possibility set, we introduce an adequate continuity condition on utility functions and the closedness condition on the utility possibility set along the lines of Sagara [47]. As an alternative, there are two classes of functions that assure the convexity of the utility possibility set: the µ-concave functions introduced by Sagara and Vlach [48, 49, 50, 51] and the supermodular functions investigated by Choquet [12], Marinacci and Montrucchio [40], and Schmeidler [53], among others. The characterization and existence of solutions are investigated under these classes of nonadditive set functions.

Second, we axiomatize the preference orderings on σ-algebras representable by numerical functions (utility functions). A large number of works deal with the representation of partial orderings on algebras by means of additive set functions.
This line of research goes back to de Finetti [19], Koopman [31], Savage [52], and others who introduced axioms on the likelihood ordering of events to obtain a “subjective probability”. In this chapter, we present an alternative representation of partial orderings on a σ-algebra in terms of nonadditive set functions that satisfy the appropriate requirements of convexity and continuity.
To this end, we formulate three nonadditive representation theorems. Two are provided by Sagara and Vlach [48] and the other is given by Dall'Aglio and Maccheroni [17]. For an additive representation theorem, we introduce the representation via nonatomic finite measures by Villegas [60].

The chapter is organized as follows. In Section 2, we first collect some mathematical results on the partition range of a nonatomic finite measure needed for the analysis in the sequel. A fundamental result is Lyapunov's convexity theorem and its variants. We next state the fair division problem and introduce solution concepts on efficiency and fairness. Section 3 investigates fair division problems where the utility function of each player is given by some finite measure. For the characterization and existence of solutions in the additive case, two important conditions on utility functions are imposed: nonatomicity and mutual absolute continuity. Classical results in the additive case are benchmarks for an extension to the nonadditive case in the following section.

Section 4 comprises the main part of this chapter. To characterize Pareto optimality and α-maximin optimality, we introduce a more general notion of monotonicity and continuity for nonadditive utility functions. We then introduce the closedness condition on the lower partition range of the utility functions of the players and establish the existence of α-leximin optimal and lexicographic envy minimal partitions. Next, we investigate µ-concave functions and demonstrate the convexity of their lower partition range to establish the compatibility of Pareto optimality and α-fairness. Finally, we introduce utility functions that satisfy submodularity and demonstrate the existence of α-fair partitions.

In Section 5, we address a significant issue, the axiomatization of preference orderings represented by additive or nonadditive functions on σ-algebras satisfying certain continuity and convexity conditions. While the problem of additive representation originates from the study of subjective probability, it is also indispensable in the fair division problem. The representation results in Section 5 equip us with the theoretical grounding needed for the use of the nonatomic probability measures in Section 3 and the µ-concave and submodular functions in Section 4. We close this chapter by raising some open issues in fair division theory (Section 6).
2. Partitioning of a Measurable Space

2.1. Lyapunov's Convexity Theorem
Let (Ω, F) be a measurable space with F a σ-algebra of subsets of a nonempty set Ω. An m-partition of Ω is an m-tuple (A1, ..., Am) of mutually disjoint elements A1, ..., Am in F whose union is Ω. We denote by P^m the set of m-partitions of Ω. Finite measures µ1, ..., µn on F constitute an n-dimensional vector-valued measure (µ1, ..., µn) : F → R^n. For a given (A1, ..., Am) ∈ P^m, the n × m matrix (µi(Aj)) constitutes a partition matrix, which is identified with an element of R^{nm}. A measure µ on F is said to be nonatomic if every set A ∈ F with µ(A) > 0 contains a set E ∈ F such that 0 < µ(E) < µ(A). The following celebrated result is attributed to Lyapunov [38].
Lyapunov's Convexity Theorem. If µ1, ..., µn are finite measures, then the range

R(µ1, ..., µn) = { (µ1(A), ..., µn(A)) ∈ R^n | A ∈ F }

of the vector measure (µ1, ..., µn) is compact in R^n. If, moreover, µ1, ..., µn are nonatomic, then R(µ1, ..., µn) is also convex in R^n.

There are a large number of elaborated proofs of Lyapunov's convexity theorem. For example, Dubins and Spanier [20] and Halmos [25] presented measure-theoretic proofs, and Lindenstrauss [37] provided a proof based on fundamental results of functional analysis. Gouweleeuw [24] also provided necessary and sufficient conditions for the convexity of the range of an R^n-valued nonatomic vector measure. The following is a useful variant of Lyapunov's convexity theorem provided by Dvoretsky et al. [22].

Theorem 2.1. If µ1, ..., µn are finite measures, then the partition matrix range

MR^m(µ1, ..., µn) = { (µi(Aj)) ∈ R^{nm} | (A1, ..., Am) ∈ P^m }

of (µ1, ..., µn) is compact in R^{nm}. If, moreover, µ1, ..., µn are nonatomic, then it is also convex in R^{nm}.

Applications of Theorem 2.1 to fair division problems can be found in Akin [1], Barbanel and Zwicker [6], Dall'Aglio [14], and Gouweleeuw [24], and applications to partitioning problems with coalition formation can be found in Hüsseinov [27], Legut [35], and Sagara [46]. For later use, we present the following simple observation derived from Theorem 2.1.

Corollary 2.1. If µ1, ..., µn are finite measures, then the partition range

PR(µ1, ..., µn) = { (µ1(A1), ..., µn(An)) ∈ R^n | (A1, ..., An) ∈ P^n }

of (µ1, ..., µn) is compact in R^n. If, moreover, µ1, ..., µn are nonatomic, then it is also convex in R^n.

Proof. Define the continuous linear mapping T : R^{n²} → R^n by T(xij) = (x11, ..., xnn). As PR(µ1, ..., µn) = T(MR^n(µ1, ..., µn)), the compactness of PR(µ1, ..., µn) follows from the continuity of T and the compactness of MR^n(µ1, ..., µn), and the convexity of PR(µ1, ..., µn) follows from the linearity of T and the convexity of MR^n(µ1, ..., µn). □
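A standard counterexample (ours, not the chapter's) shows why nonatomicity is needed for convexity in the results above: for a Dirac measure δ_x, which is purely atomic (δ_x(A) = 1 if x ∈ A and δ_x(A) = 0 otherwise),

$$ R(\delta_x) = \{ \delta_x(A) \mid A \in \mathcal{F} \} = \{0, 1\}, $$

which is compact but clearly not convex.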
2.2. Solution Concepts
The fair division of a cake among a finite number of players is formulated as the partitioning of a measurable space (Ω, F ). Here, the cake Ω (nonempty set) is a metaphor for a divisible heterogeneous commodity and a σ-algebra F of subsets of Ω describes a possible collection of pieces of the cake. There are n players, each of whom is indexed by i = 1, . . . , n. Each player’s preference on F is given by a real-valued function νi : F → R, called a utility function, in terms of
which the inequality νi(A) ≥ νi(B) means that A is preferred over B by player i. A utility function νi is normalized if 0 ≤ νi ≤ 1, νi(∅) = 0 and νi(Ω) = 1. In the sequel, by a partition we simply mean an element of P^n. A partition (A1, ..., An) is said to be positive if νi(Ai) > 0 for each i = 1, ..., n.

Let ∆^{n−1} denote the (n − 1)-dimensional unit simplex in R^n, that is,

∆^{n−1} = { (α1, ..., αn) ∈ R^n | α1 + ··· + αn = 1 and αi ≥ 0, i = 1, ..., n }.
The relative interior ∆^{n−1}_+ of ∆^{n−1} is the set of elements of ∆^{n−1} whose components are positive. For a vector α = (α1, ..., αn) in ∆^{n−1}, each αi denotes a share of a piece of the cake agreed among the players. When referring to a generic element α in the following definitions, we mean that α belongs to ∆^{n−1}_+.

For a given partition (A1, ..., An) and α ∈ ∆^{n−1}_+, arrange the profile {αi⁻¹νi(Ai)} of utility values of the players in nondecreasing order and denote the resulting sequence by {ασ(i)⁻¹νσ(i)(Aσ(i))} with:

ασ(1)⁻¹νσ(1)(Aσ(1)) ≤ ··· ≤ ασ(n)⁻¹νσ(n)(Aσ(n)),
where σ is a permutation of {1, ..., n}. Given a partition (A1, ..., An), define the maximum envy ei(A1, ..., An) of player i by:

ei(A1, ..., An) = max_{j≠i} { νi(Aj) − νi(Ai) }.
Contrary to the above, arrange the profile {ei(A1, ..., An)} in nonincreasing order and denote its configuration by {eσ(i)(A1, ..., An)} with:

eσ(1)(A1, ..., An) ≥ ··· ≥ eσ(n)(A1, ..., An).

Definition 2.1. A partition (A1, ..., An) is:

(i) Weakly Pareto optimal if there exists no partition (B1, ..., Bn) such that νi(Ai) < νi(Bi) for each i = 1, ..., n.

(ii) Pareto optimal if no partition (B1, ..., Bn) exists such that νi(Ai) ≤ νi(Bi) for each i = 1, ..., n and νj(Aj) < νj(Bj) for some j.

(iii) Envy free if νi(Aj) ≤ νi(Ai) for each i, j = 1, ..., n.

Given a utility profile ν1, ..., νn of the players, a partition with property P is said to be scale invariant if P remains true under any utility profile f1 ∘ ν1, ..., fn ∘ νn with a strictly increasing transformation fi : R → R for each i. It is evident that the partitions in Definition 2.1 are scale invariant, though the partitions in the following definition are not. This makes sense only when the utility functions of the players are normalized.

Definition 2.2. Let ν1, ..., νn be normalized utility functions. A partition (A1, ..., An) is:

(i) α-fair if νi(Ai) ≥ αi for each i = 1, ..., n. An α-fair partition for α = (1/n, ..., 1/n) is said to be fair.
(ii) α-envy-free if αj⁻¹νi(Aj) ≤ αi⁻¹νi(Ai) for each i, j = 1, ..., n. An α-envy-free partition for α = (1/n, ..., 1/n) is said to be envy-free.

(iii) Super envy-free if νi(Ai) > 1/n for each i = 1, ..., n and νi(Aj) < 1/n for each i and each j ≠ i.

(iv) α-maximin optimal if it is a solution of the problem

max { min_{1≤i≤n} αi⁻¹νi(Ai) | (A1, ..., An) ∈ P^n }.

An α-maximin optimal partition for α = (1/n, ..., 1/n) is said to be maximin optimal.

(v) α-leximin optimal if, for every partition (B1, ..., Bn), either

ασ(i)⁻¹νσ(i)(Aσ(i)) = ατ(i)⁻¹ντ(i)(Bτ(i)) for each i = 1, ..., n

or

ασ(k)⁻¹νσ(k)(Aσ(k)) > ατ(k)⁻¹ντ(k)(Bτ(k))

if k is the smallest integer with ασ(k)⁻¹νσ(k)(Aσ(k)) ≠ ατ(k)⁻¹ντ(k)(Bτ(k)).¹ An α-leximin optimal partition for α = (1/n, ..., 1/n) is said to be leximin optimal.
(vi) α-equitable if αi⁻¹νi(Ai) = αj⁻¹νj(Aj) for each i, j = 1, ..., n. An α-equitable partition for α = (1/n, ..., 1/n) is said to be equitable.

(vii) Envy minimal if it is a solution of the problem

min { max_{1≤i≤n} ei(A1, ..., An) | (A1, ..., An) ∈ P^n }.   (P)

(viii) Lexicographic envy minimal if, for every partition (B1, ..., Bn), either

eσ(i)(A1, ..., An) = eτ(i)(B1, ..., Bn) for each i = 1, ..., n

or

eσ(k)(A1, ..., An) < eτ(k)(B1, ..., Bn)

if k is the smallest integer with eσ(k)(A1, ..., An) ≠ eτ(k)(B1, ..., Bn).²

Among these solution concepts, the compatibility of “efficiency” and “fairness” should be addressed. Pareto optimality is a standard criterion for efficiency, and α-maximin and α-leximin optimality are joint concepts of efficiency and fairness. It is immediate from the definition of the solutions that the following implications are true without any additional assumption.

• Pareto optimality implies weak Pareto optimality.
• α-maximin optimality implies weak Pareto optimality.

¹ Here, {ατ(i)⁻¹ντ(i)(Bτ(i))} is a configuration of {αi⁻¹νi(Bi)} with a permutation τ.
² Here, {eτ(i)(B1, ..., Bn)} is a configuration of {ei(B1, ..., Bn)} with a permutation τ.
• α-leximin optimality implies α-maximin optimality.
• Lexicographic envy minimality implies envy minimality.
• Super envy-freeness implies envy-freeness and fairness.

It is worth pointing out that the classical paper by Dubins and Spanier [20] had already devised the notions of envy-freeness, maximin optimality, α-leximin optimality and equitability long before economists presented an analytical framework for the problem of fair allocations in an exchange economy (see Schmeidler and Vind [54] and Varian [59]). Thereafter, α-envy freeness was introduced by Ichiishi and Idzik [29], α-maximin optimality and α-equitability by Legut and Wilczyński [36], super envy freeness by Barbanel [3], and envy minimality and lexicographic envy minimality by Dall'Aglio and Hill [15].
3. Fair Division with Additive Evaluations

3.1. Characterization of Solutions
Pareto optimality obviously implies weak Pareto optimality, but not vice versa. The two concepts coincide when the underlying measures are nonatomic and mutually absolutely continuous.³ This is analogous to the corresponding result for the allocation problem in an exchange economy with strictly monotone, continuous preference orderings and a finite-dimensional commodity space (see Aliprantis et al. [2], Theorem 1.5.2). Moreover, Pareto optimal partitions can be characterized by means of solutions to the maximization problem of the weighted utility sum of the players (for the case of allocations in an exchange economy, see Mas-Colell et al. [42], Proposition 16.E.2).

Theorem 3.1. If ν1, ..., νn are nonatomic finite measures that are mutually absolutely continuous, then the following conditions are equivalent:

(i) (A1, ..., An) is Pareto optimal;

(ii) (A1, ..., An) is weakly Pareto optimal;

(iii) (A1, ..., An) is a solution to the maximization problem

max { α1ν1(A1) + ··· + αnνn(An) | (A1, ..., An) ∈ P^n }   (Q_α)

for some α ∈ ∆^{n−1}.

In proving Theorem 3.1, the role of mutual absolute continuity was recognized by Dubins and Spanier [20] and Barbanel and Zwicker [6], but it is impossible to find an available proof of the equivalence of (i) and (ii) in the literature.
³ Measures µ1, ..., µn are mutually absolutely continuous if µj(A) = 0 with A ∈ F for some j implies µi(A) = 0 for each i = 1, ..., n. The condition states that the null sets with respect to each measure coincide with those with respect to the other measures.
The equivalence of (ii) and (iii) was given by Barbanel [3]. We present a more general result than Theorem 3.1 later in Theorems 4.1 and 4.6. The following characterization of α-maximin optimality is a special case of Sagara [47] (see Theorem 4.1).

Theorem 3.2. If ν1, ..., νn are nonatomic probability measures that are mutually absolutely continuous, then the following hold:

(i) For every α ∈ ∆^{n−1}_+, a partition is α-maximin optimal if and only if it is Pareto optimal and α-equitable.

(ii) A partition is positive and Pareto optimal if and only if it is α-maximin optimal for some α ∈ ∆^{n−1}_+.
3.2.
Existence of Solutions
To clarify the respective role of the hypothesis for the existence theorems, we discriminate between the results that need neither nonatomicity nor mutual absolute continuity (Theorem 3.4), those that require only nonatomicity (Corollary 3.2, Theorems 3.5 and 3.6) and those that impose both nonatomicity and mutual absolute continuity (Theorems 3.7 and 3.8). Theorem 3.4. If ν1 , . . . , νn are finite measures, then: 4
A finite sequence of measures ν1 , . . . , νn is linearly independent if α1 ν1 + · · · + αn νn = 0 for real numbers α1 . . . , αn implies αi = 0 for each i = 1, . . . , n. Thus, linear independence rules out the situation where some νi is represented by a convex combination of other measures.
Fair Division Problems with Nonadditive Evaluations
161
(i) There exists a weakly Pareto optimal partition. (ii) There exists a lexicographic envy minimal partition. n−1 (iii) For every α ∈ ∆+ , there exists an α-leximin optimal partition.
Proof. (i): Define the finite measure ν by ν =
n P
νi . Then each νi is absolutely continuous
i=1
with respect to ν. Let fi be the Radon–Nikodym derivative of νi with respect to ν. We then have: Z n n Z n Z X X X νi (Ai ) = fi dν ≤ max fj dν = max fj dν i=1
i=1 A
i=1 A
i
1≤j≤n
1≤j≤n
Ω
i
for every (A1 , . . . , An ) ∈ P n . Define the partition (A∗1 , . . . , A∗n ) of Ω inductively by: ∗ A1 = ω ∈ Ω | f1 (ω) = max fj (ω) 1≤j≤n
and A∗k = Then
n X
k−1 [ A∗i , ω ∈ Ω | fk (ω) = max fj (ω) \ 1≤j≤n
νi (A∗i ) =
i=1
Thus,
n P
i=1
n Z X
i=1 A∗ i
fi dν =
k = 2, . . . , n.
i=1
n Z X
i=1 A∗ i
max fj dν =
1≤j≤n
Z
max fj dν.
1≤j≤n
Ω
νi is maximized at (A∗1 , . . . , A∗n ), from which the weak Pareto optimality of
(A∗1 , . . . , A∗n ) follows. (ii): Given (A1 , . . . , An ) ∈ P n , the n × n matrix (νi (Aj ) − νi (Ai )) denotes an envy matrix of the players. The set of maximum envy matrices constitutes the envy matrix range given by: n o 2 E R(ν1 , . . . , νn ) = (xij − xii ) ∈ Rn | (xij ) ∈ PR(ν1 , . . . , νn ) , 2
which is compact in Rn by Corollary 2.1. The set of maximum envy vectors, defined by: ( ) xi = maxj6=i {xij − xii }, i = 1, . . . , n E = (x1 , . . . , xn ) ∈ Rn , (xij − xii ) ∈ E R(ν1 , . . . , νn )
is also compact in Rn . Let E1 be the minimizers of the problem: min xσ(1) | (x1 , . . . , xn ) ∈ E ,
where {xσ(i) } is a configuration of {xi } with nonincreasing order xσ(1) ≥ · · · ≥ xσ(n) . Given the projection map (x1 , . . . , xn ) 7→ xi is continuous for each i, E1 is nonempty and the minimizers E2 of the problem: min xσ(2) | (x1 , . . . , xn ) ∈ E1
162
Nobusumi Sagara (n)
(n)
is nonempty. Continuing in this way, we obtain a solution (x1 , . . . , xn ) to the problem: min xσ(n) | (x1 , . . . , xn ) ∈ En−1 . (n)
Thus, there exists a partition (A1 , . . . , An ) such that νi (Ai ) = xi for each i. By construction, (A1 , . . . , An ) is lexicographic envy minimal. −1 (iii): For arbitrary (x1 , . . . , xn ) ∈ Rn , let {ασ(i) xσ(i) } be a configuration of {αi−1 xi }
−1 −1 xσ(1) ≤ · · · ≤ ασ(n) xσ(n) . Let X1 be the set of solutions to with nondecreasing order ασ(1) the maximization problem: n o −1 max ασ(1) xσ(1) | (x1 , . . . , xn ) ∈ PR(ν1 , . . . , νn ) .
Given PR(ν1 , . . . , νn ) is compact by Corollary 2.1 and the function given by −1 (x1 , . . . , xn ) 7→ ασ(i) xσ(i) is continuous for each i, the maximizers X1 is a nonempty and compact subset of PR(ν1 , . . . , νn ). Then the maximizers X2 of the problem: n o −1 max ασ(2) xσ(2) | (x1 , . . . , xn ) ∈ X1 (n)
(n)
is nonempty. Continuing in this way, we obtain a solution (x1 , . . . , xn ) to n o −1 max ασ(n) xσ(n) | (x1 , . . . , xn ) ∈ Xn−1 . (n)
Thus, there exists a partition (A1 , . . . , An ) such that νi (Ai ) = xi tion, (A1 , . . . , An ) is α-leximin optimal.
for each i. By construc-
The proof of Theorem 3.4(i) is easy if one employs Corollary 2.1 as a variant of Lyapunov's convexity theorem. Here, employing Radon–Nikodym derivatives, we have presented an alternative proof along the lines of Dubins and Spanier [20] without using Lyapunov's convexity theorem. Dall'Aglio [14] investigated the existence of leximin optimal and equitable solutions to an allocation problem of integrable functions induced by the partitioning problem of a measurable space. For an extension of Theorem 3.4, see Theorem 4.3.

The Role of Nonatomicity

The following result follows from Theorem 3.3.

Corollary 3.2. If ν1, ..., νn are nonatomic probability measures that are linearly independent, then there exists a super envy-free partition.

If one imposes the nonatomicity of the measures, then the existence of envy-free fair partitions and α-fair partitions is derived from the following sharper result established by Dubins and Spanier [20].

Theorem 3.5. If ν1, ..., νn are nonatomic probability measures, then:

(i) For every α ∈ ∆^{n−1}, there exists a partition (A1, ..., An) such that νi(Aj) = αj for each i, j = 1, ..., n.
(ii) If, moreover, νi ≠ νj for some i ≠ j, then for every α ∈ ∆^{n−1}_+, there exists a partition (A1, ..., An) such that νi(Ai) > αi for each i = 1, ..., n.
Proof. (i): If Pk = (E1, ..., En) is a partition in which Ek = Ω and Ej = ∅ for j ≠ k, then the partition matrix M(Pk) = (νi(Ej)) has values of 1 in the kth column and values of zero elsewhere. Given that M(P1), ..., M(Pn) belong to MR^n(ν1, ..., νn), Theorem 2.1 implies that Σ_{i=1}^n αi M(Pi) is in MR^n(ν1, ..., νn) for every α ∈ ∆^{n−1}. Therefore, there exists a partition P = (A1, ..., An) of Ω such that M(P) = Σ_{i=1}^n αi M(Pi); that is, νi(Aj) = αj for each i, j.

(ii): Suppose, for example, that ν1 and ν2 are not identical and ν1(A) > ν2(A) for some A ∈ F. Then ν2(Ω \ A) > ν1(Ω \ A). Without loss of generality (by symmetry), we can assume that α1⁻¹ν1(A) ≥ α2⁻¹ν2(Ω \ A). Let P0 be the partition defined by P0 = (A, Ω \ A, ∅, ..., ∅) and Pi be the partitions given in the proof of (i). For every (x1, ..., xn) ∈ ∆^{n−1}, it follows from (i) that there exists a partition P = (A1, ..., An) such that:

M(P) = x1 M(P0) + Σ_{i=2}^n xi M(Pi).

Letting D denote the diagonal of M(P) (so that the ith entry of D is νi(Ai)) and Di denote the diagonal of M(Pi), we see that D = x1 D0 + Σ_{i=2}^n xi Di. We shall attempt to choose the xi so that all entries of D are in the same ratios as the αi. Hence, we wish to solve the equations:

x1 ν1(A) = tα1,
x1 ν2(Ω \ A) + x2 = tα2,
xi = tαi for i = 3, ..., n.

Solving for the xi, summing, and using the fact that Σ_{i=1}^n αi = 1, we find that if we choose:

t = [ 1 + (α1/ν1(A))(1 − ν1(A) − ν2(Ω \ A)) ]⁻¹,

we have a solution. As ν1(A) + ν2(Ω \ A) > 1 and α1/ν1(A) < 1, it follows that t > 1. Therefore, choosing the xi to satisfy the above equations for this value of t, we find that the ith entry of D is tαi > αi for each i. □
we have a solution. As ν1 (A) + ν2 (Ω \ A) > 1 and α1 /ν1 (A) < 1, it follows that t > 1. Therefore, choosing xi to satisfy the above equations for this value t, we find that ith entry of D is tαi > αi for each i. The existence of α-envy free partitions on a general measurable space is still unsolved, even in the additive case. If one restricts a measurable space to a unit simplex, the existence of α-envy free partitions is guaranteed (see Ichiishi and Idzik [29]). A remarkable feature of their result is that α-envy free partitions are obtained, not only as a measurable partition of the unit simplex, but also as a division of the unit simplex into subsimplexes. Theorem 3.6. Let L be the σ-algebra of Lebesgue measurable subsets of ∆n−1 . If n−1 ν1 , . . . , νn are nonatomic finite measures on (∆n−1 , L ), then for every α ∈ ∆+ , there n−1 exists a partition of ∆ into n subsimplexes ∆1 , . . . , ∆n such that (∆1 , . . . , ∆n ) is an α-envy-free partition. A partial extension of Theorem 3.6 to nonadditive evaluations was given by Dall’Aglio and Maccheroni [17], who showed the existence of envy-free fair partitions of ∆n−1 with respect to the preference orderings satisfying certain convexity axioms (see Subsection 5.3).
The existence of α-envy-free partitions on a general measurable space is still unsolved, even in the additive case. If one restricts the measurable space to a unit simplex, the existence of α-envy-free partitions is guaranteed (see Ichiishi and Idzik [29]). A remarkable feature of their result is that α-envy-free partitions are obtained not only as a measurable partition of the unit simplex, but also as a division of the unit simplex into subsimplexes.

Theorem 3.6. Let L be the σ-algebra of Lebesgue measurable subsets of ∆^{n−1}. If ν1, . . . , νn are nonatomic finite measures on (∆^{n−1}, L), then for every α ∈ ∆^{n−1}_+, there exists a partition of ∆^{n−1} into n subsimplexes ∆1, . . . , ∆n such that (∆1, . . . , ∆n) is an α-envy-free partition.

A partial extension of Theorem 3.6 to nonadditive evaluations was given by Dall'Aglio and Maccheroni [17], who showed the existence of envy-free fair partitions of ∆^{n−1} with respect to preference orderings satisfying certain convexity axioms (see Subsection 5.3).

The Role of Absolute Continuity

If one imposes both nonatomicity and mutual absolute continuity of the measures, then we obtain the compatibility of efficiency and fairness.

Theorem 3.7. If ν1, . . . , νn are nonatomic probability measures that are mutually absolutely continuous, then:

(i) For every α ∈ ∆^{n−1}, there exists a Pareto optimal α-fair partition.

(ii) For every α ∈ ∆^{n−1}_+, there exists an α-maximin optimal, α-equitable partition.
Theorem 3.7(i) is a special case of Sagara [47] and Sagara and Vlach [49], and Theorem 3.7(ii) follows from Theorems 3.1 and 3.2(i) (see Theorem 4.8). Legut and Wilczyński [36] proved Theorem 3.7(ii) by employing the minimax theorem without mentioning mutual absolute continuity, but this is implicitly assumed in deriving that α-maximin optimality implies α-equitability.

The compatibility of Pareto optimality and envy freeness is given by the following result of Weller [61]. This is an analogue of the result of Varian [59], who showed the existence of Pareto optimal envy-free allocations in an exchange economy with a finite-dimensional commodity space.

Theorem 3.8. If ν1, . . . , νn are nonatomic finite measures that are mutually absolutely continuous, then there exists a Pareto optimal envy-free partition.

The proof of Theorem 3.8 exploits a useful geometric relation between the set of Pareto optimal partitions and the unit simplex, together with Kakutani's fixed point theorem. For an exhaustive investigation of the geometric characterization of the set of Pareto optimal partitions and the proof of Theorem 3.8, see Barbanel [4]. Another geometric characterization is given by Thomson [58], who concentrated on partitioning a circle into arcs.
4. Fair Division with Nonadditive Evaluations

4.1. Monotonicity and Continuity
The monotonicity and continuity of measures follow from the countable additivity of measures, and the convexity of the range of vector measures is a consequence of the nonatomicity of measures. The additive case thus suggests that when one admits nonadditivity in utility functions, a certain kind of monotonicity and continuity of the utility functions, together with the convexity of their range, is required to establish the existence and characterization of solutions.

A set function is a real-valued function on F that vanishes on the empty set. A function ν : F → R is monotone if ν(A) ≤ ν(B) for every A, B ∈ F with A ⊆ B; ν is strictly monotone if ν(A) < ν(B) for every A, B ∈ F with A ⊊ B; monotone set functions are nonnegative. A function ν is continuous from below if A1 ⊆ A2 ⊆ · · · and A = ∪_{k=1}^{∞} Ak imply ν(Ak) → ν(A); ν is continuous from above if A1 ⊇ A2 ⊇ · · · and A = ∩_{k=1}^{∞} Ak imply ν(Ak) → ν(A); ν is continuous if it is continuous from both below and above. A monotone continuous set function is called a capacity (or a fuzzy measure). It is clear that continuity and strict monotonicity are automatically satisfied whenever a set function is a finite measure.

The following weaker notions of continuity from below and strict monotonicity for functions on F were proposed by Sagara [47].

Definition 4.1. Let µ be a finite measure. A function ν : F → R is:

(i) µ-continuous from below if A1 ⊆ A2 ⊆ · · · ⊆ A and µ(A \ ∪_{k=1}^{∞} Ak) = 0 imply ν(Ak) → ν(A).

(ii) Strictly µ-monotone if A ⊊ B and µ(A) < µ(B) imply ν(A) < ν(B).
The following result is a generalization of Theorems 3.1 and 3.2, and their promised proof is presented here.

Theorem 4.1. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If νi is µi-continuous from below and strictly µi-monotone for each i = 1, . . . , n, then the following hold.

(i) A partition is Pareto optimal if and only if it is weakly Pareto optimal.

If, moreover, ν1, . . . , νn are normalized, then:

(ii) For every α ∈ ∆^{n−1}_+, a partition is α-maximin optimal if and only if it is Pareto optimal and α-equitable.

(iii) A partition is positive and Pareto optimal if and only if it is α-maximin optimal for some α ∈ ∆^{n−1}_+.

Proof. (i): It is evident that Pareto optimality implies weak Pareto optimality. We show the converse implication. Let (A1, . . . , An) be a weakly Pareto optimal partition. Suppose that (A1, . . . , An) is not Pareto optimal. There then exists a partition (B1, . . . , Bn) such that νi(Ai) ≤ νi(Bi) for each i and νj(Aj) < νj(Bj) for some j. The µj-continuity of νj from below and the nonatomicity of µj imply that there exists some F ⊆ Bj such that νj(Aj) < νj(Bj \ F) and µj(F) > 0. By the nonatomicity and mutual absolute continuity of the µi, we can decompose F into n − 1 disjoint sets Fi for i ≠ j such that ∪_{i≠j} Fi = F and µi(Fi) > 0 for i ≠ j. Let Ci = Bi ∪ Fi for i ≠ j and Cj = Bj \ F. Then the resulting partition (C1, . . . , Cn) satisfies νi(Ai) < νi(Ci) for each i by the strict µi-monotonicity of νi. This contradicts the weak Pareto optimality of (A1, . . . , An).

(ii): For an arbitrary α ∈ ∆^{n−1}_+, let (A1, . . . , An) be an α-maximin optimal partition. Suppose that (A1, . . . , An) is not Pareto optimal. Then there exists a partition (B1, . . . , Bn) such that νi(Ai) < νi(Bi) for each i by (i), and hence min_{1≤i≤n} αi^{−1}νi(Ai) < min_{1≤i≤n} αi^{−1}νi(Bi). This contradicts the fact that (A1, . . . , An) is α-maximin optimal. Therefore, (A1, . . . , An) is Pareto optimal.

Because α-maximin optimality implies Pareto optimality, it suffices to show the α-equitability of (A1, . . . , An). Suppose to the contrary that (A1, . . . , An) is not α-equitable. We
then have min_{1≤i≤n} αi^{−1}νi(Ai) < αj^{−1}νj(Aj) for some j. The µj-continuity of νj from below and the nonatomicity of µj imply the existence of E ⊆ Aj satisfying µj(E) > 0 and min_{1≤i≤n} αi^{−1}νi(Ai) < αj^{−1}νj(Aj \ E). By the nonatomicity and mutual absolute continuity of the µi, we can decompose E into n − 1 disjoint sets Ei for i ≠ j such that ∪_{i≠j} Ei = E and µi(Ei) > 0 for i ≠ j. Define Bi = Ai ∪ Ei for i ≠ j and Bj = Aj \ E. By the strict µi-monotonicity of νi, the resulting partition (B1, . . . , Bn) satisfies αi^{−1}νi(Ai) < αi^{−1}νi(Bi) for i ≠ j, and hence min_{1≤i≤n} αi^{−1}νi(Ai) < min_{1≤i≤n} αi^{−1}νi(Bi), which contradicts the α-maximin optimality of (A1, . . . , An).

Conversely, let (A1, . . . , An) be a Pareto optimal α-equitable partition. Suppose to the contrary that (A1, . . . , An) is not α-maximin optimal. Then min_{1≤i≤n} αi^{−1}νi(Ai) < min_{1≤i≤n} αi^{−1}νi(Bi) for some partition (B1, . . . , Bn). As αi^{−1}νi(Ai) = αj^{−1}νj(Aj) for each i, j in view of the α-equitability of (A1, . . . , An), we have αi^{−1}νi(Ai) < αi^{−1}νi(Bi) for each i, which contradicts the Pareto optimality of (A1, . . . , An).

(iii): Let (A1, . . . , An) be a positive Pareto optimal partition. Define:

αi = νi(Ai) / Σ_{k=1}^{n} νk(Ak) for i = 1, . . . , n.

By the positivity of (A1, . . . , An), we have α = (α1, . . . , αn) ∈ ∆^{n−1}_+, and hence αi^{−1}νi(Ai) = αj^{−1}νj(Aj) = Σ_{k=1}^{n} νk(Ak) for each i, j = 1, . . . , n. Therefore, (A1, . . . , An) is α-equitable, and its α-maximin optimality follows from (ii).

Conversely, let (A1, . . . , An) be an α-maximin optimal partition. As (A1, . . . , An) is Pareto optimal by (ii), it suffices to show its positivity. Assume by way of contradiction that νj(Aj) = 0 for some j. By the strict µj-monotonicity of νj, we have µj(Aj) = 0. Because µi(Ω \ Aj) = µi(Ω) > 0 for each i by the nonatomicity and mutual absolute continuity of the µi, we can decompose Ω \ Aj into n disjoint sets E1, . . . , En such that µi(Ei) > 0 for each i and ∪_{i=1}^{n} Ei = Ω \ Aj. Define Bi = Ei for i ≠ j and Bj = Aj ∪ Ej. The strict µi-monotonicity of νi implies νi(Bi) > 0 for each i. Then the resulting partition (B1, . . . , Bn) yields 0 = min_{1≤i≤n} νi(Ai) < min_{1≤i≤n} νi(Bi), which contradicts the α-maximin optimality of (A1, . . . , An).

Theorem 4.1 is due to Sagara [47], who proved the equivalence (iii) under a closedness condition on the utility possibility set to be introduced in Subsection 4.2. Hüsseinov [28] pointed out that this condition can be removed from (iii), thereby making the result more general.

By virtue of Theorem 4.1, Corollary 3.1 can be extended as follows.

Corollary 4.1. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If νi is normalized, µi-continuous from below and strictly µi-monotone for each i = 1, . . . , n, then α-leximin optimal partitions are Pareto optimal and α-equitable.
Transformations of a Nonatomic Measure

We here restrict ourselves to the case where each individual's utility function is represented by a transformation of a finite measure. This means that νi = fi ∘ µi, where fi : R(µi) → R is a real-valued function and R(µi) is the range of the finite measure µi.

Lemma 4.1. Let µ be a nonatomic finite measure and f : R(µ) → R be a real-valued function. Then:

(i) f ∘ µ is continuous if and only if f is continuous.

(ii) f ∘ µ is strictly µ-monotone if and only if f is strictly monotone.

Proof. (i): It is evident that f ∘ µ is continuous whenever f is continuous. To show the converse implication, suppose that f is discontinuous at some point x in R(µ). Because R(µ) is convex in R by Lyapunov's convexity theorem and contains the origin of R, for each k = 1, 2, . . . , the point xk = (1 − 1/k)x belongs to R(µ) and xk ↑ x. By the discontinuity of f at x, there exist some subsequence {xk} (which we do not relabel) and ε > 0 such that |f(xk) − f(x)| ≥ ε for each k. Let A ∈ F be such that x = µ(A). By the nonatomicity of µ, there exists a measurable subset Ak ⊆ A such that µ(Ak) = (1 − 1/k)µ(A) for each k and Ak ↑ A. It follows from |f(µ(Ak)) − f(µ(A))| = |f(xk) − f(x)| ≥ ε for each k that f ∘ µ is discontinuous at A.

(ii): It is evident that if f is strictly monotone, then f ∘ µ is strictly µ-monotone. Conversely, suppose that f ∘ µ is strictly µ-monotone. Choose any x and y in R(µ) satisfying x < y. By the nonatomicity of µ, there exist A and B in F such that µ(A) = x, µ(B) = y and A ⊊ B. We then have f(x) = f(µ(A)) < f(µ(B)) = f(y); hence, f is strictly monotone.

Theorem 3.8 can be generalized as follows.

Theorem 4.2. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If f1, . . . , fn are strictly monotone and continuous, then there exists a Pareto optimal envy-free partition.

Proof. Theorem 3.8 implies that there exists a Pareto optimal partition (A1, . . . , An) with respect to µ1, . . . , µn such that µi(Ai) ≥ µi(Aj) for each i ≠ j. It follows from the strict monotonicity of fi that fi(µi(Ai)) ≥ fi(µi(Aj)) for each i ≠ j. Thus, (A1, . . . , An) is envy-free with respect to f1 ∘ µ1, . . . , fn ∘ µn. Suppose that (A1, . . . , An) is not Pareto optimal relative to f1 ∘ µ1, . . . , fn ∘ µn. As it is then not weakly Pareto optimal by Theorem 4.1(i) and Lemma 4.1, there exists a partition (B1, . . . , Bn) such that fi(µi(Ai)) < fi(µi(Bi)) for each i = 1, . . . , n. The strict monotonicity of fi implies that µi(Ai) < µi(Bi) for each i = 1, . . . , n. This contradicts the fact that (A1, . . . , An) is Pareto optimal with respect to µ1, . . . , µn. Therefore, (A1, . . . , An) is Pareto optimal and envy-free with respect to f1 ∘ µ1, . . . , fn ∘ µn.
4.2. Closedness of the Lower Partition Range
One of the technical problems with admitting nonadditivity of the utility functions lies in the fact that it is difficult to find a suitable topology on a σ-algebra F under which the set of partitions P^n is compact. This difficulty had already been recognized by Dubins and Spanier [20], p. 9. To avoid it, we introduce the following condition along the lines of Sagara [47].

Closedness Condition. The lower partition range of (ν1, . . . , νn), defined by

PR(ν1, . . . , νn) = { (x1, . . . , xn) ∈ R^n | ∃(A1, . . . , An) ∈ P^n : xi ≤ νi(Ai), i = 1, . . . , n },

is closed in R^n.

The lower partition range corresponds to the utility possibility set of the players. The significance of a closedness condition of this type was pointed out by Mas-Colell [41] in the context of an exchange economy whose infinite-dimensional commodity space is a topological vector lattice. It is widely known that in an infinite-dimensional commodity space, the set of feasible allocations (a bounded closed set) may not be compact in a suitable topology, which leads to the lack of closedness of the utility possibility set even if the utility functions are continuous (see also Aliprantis et al. [2], Chapter 3). This situation is essentially the same in the fair division problem. Note that if the σ-algebra F is endowed with a topology that makes each νi continuous on F and the set of partitions P^n compact in its product topology, the closedness condition is obviously satisfied. However, it is much more general than imposing a topological requirement, as the following example suggests.

Example 4.1. Let µ1, . . . , µn be nonatomic finite measures and fi be a real-valued function on R(µi) for each i = 1, . . . , n. Then the lower partition range PR(f1 ∘ µ1, . . . , fn ∘ µn) coincides with the set:

{ (x1, . . . , xn) ∈ R^n | ∃(y1, . . . , yn) ∈ PR(µ1, . . . , µn) : xi ≤ fi(yi), i = 1, . . . , n }.

If f1, . . . , fn are continuous, then PR(f1 ∘ µ1, . . . , fn ∘ µn) is closed in R^n without imposing any topology on F, by virtue of Corollary 2.1. Under the closedness condition, Theorem 3.4 can be extended as follows:
Theorem 4.3. Let µ1, . . . , µn be finite measures. If the closedness condition is satisfied, then:

(i) There exists a weakly Pareto optimal partition.

If, moreover, ν1, . . . , νn are normalized, then:

(ii) There exists a lexicographic envy minimal partition.

(iii) For every α ∈ ∆^{n−1}_+, there exists an α-leximin optimal partition.
Proof. (i): Let (x*1, . . . , x*n) be a solution to the maximization problem

max { Σ_{i=1}^{n} xi | (x1, . . . , xn) ∈ PR(ν1, . . . , νn) }.

As PR+(ν1, . . . , νn) := PR(ν1, . . . , νn) ∩ R^n_+ is bounded from above, and hence compact by the closedness condition, it is evident that such a solution exists in PR+(ν1, . . . , νn). Here, R^n_+ is the nonnegative orthant of R^n. Choose any partition (A1, . . . , An) with x*i ≤ νi(Ai) for each i = 1, . . . , n. It is evident that (A1, . . . , An) is weakly Pareto optimal.

(ii): The proof is a slight modification of that for Theorem 3.4(ii). The changes are that ER(ν1, . . . , νn) is replaced with

ER+(ν1, . . . , νn) = { (xij − xii) ∈ R^n | (xij) ∈ PR+(ν1, . . . , νn) },

the set E is replaced with

E+ = { (x1, . . . , xn) ∈ R^n | xi = max_{j≠i}{xij − xii}, i = 1, . . . , n, (xij − xii) ∈ ER+(ν1, . . . , νn) },

and in the last step of the proof there exists a partition (A1, . . . , An) such that νi(Ai) ≥ x_i^{(n)} for each i. The remainder of the proof is as before.

(iii): The proof is a fine tuning of that for Theorem 3.4(iii). The areas that require changes are where PR(ν1, . . . , νn) is replaced with PR+(ν1, . . . , νn) and where, in the last step of the proof, there exists a partition (A1, . . . , An) such that νi(Ai) ≥ x_i^{(n)} for each i. The remainder of the proof is as before.

Employing Theorem 4.3, it is easy to see that Theorem 3.7(ii) can be extended as follows:

Corollary 4.2. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If νi is normalized, strictly µi-monotone and µi-continuous from below for each i = 1, . . . , n and the closedness condition is satisfied, then for every α ∈ ∆^{n−1}_+, there exists an α-maximin optimal, α-equitable partition.
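The maximization in the proof of Theorem 4.3(i) has a transparent discrete analogue: when the νi are additive with densities and Ω is split into small cells, Σ_i νi(Ai) is maximized by assigning each cell to the player whose density is largest there. The following sketch (ours; the densities and grid size are hypothetical) illustrates this.

```python
import numpy as np

# Discrete analogue of the maximization in the proof of Theorem 4.3(i)
# (our illustration; the chapter works with abstract measurable partitions).
# Omega = [0,1] split into m cells; two additive evaluations with densities
# f1(x) = 2x and f2(x) = 2(1 - x).

m = 1000
mid = (np.arange(m) + 0.5) / m              # cell midpoints
cell = 1.0 / m
dens = np.vstack([2 * mid, 2 * (1 - mid)])  # densities of nu1, nu2

# Assigning each cell to the player with the larger density maximizes
# sum_i nu_i(A_i) over all cell-respecting partitions, because the sum
# decomposes cell by cell; the resulting partition is weakly Pareto optimal
# among such partitions.
owner = np.argmax(dens, axis=0)
values = [dens[i, owner == i].sum() * cell for i in range(2)]

print(values, sum(values))   # each player gets about 0.75; the sum is about 1.5
```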
4.3. µ-Concave Functions

Another difficulty with allowing nonadditive utility functions is the observation that the utility possibility set may not be convex. To overcome this difficulty, we formulate concave functions on σ-algebras. Let A ∈ F and t ∈ [0, 1] be arbitrarily given, and let K_t^µ(A) denote the family of measurable subsets of A defined by:

K_t^µ(A) = { E ∈ F | µ(E) = tµ(A), E ⊆ A }.

The nonatomicity of µ implies that K_t^µ(A) is nonempty for every A ∈ F and t ∈ [0, 1]. Note that E ∈ K_t^µ(A) if and only if A \ E ∈ K_{1−t}^µ(A), and that µ(A) = 0 if and only if K_t^µ(A) contains the empty set for every t ∈ [0, 1]. Denote by K_t^µ(A, B) the family of sets C ∈ F such that C is the union of some disjoint sets E ∈ K_t^µ(A) and F ∈ K_{1−t}^µ(B). If µ is a nonatomic finite measure, then K_t^µ(A, B) is nonempty for every A, B ∈ F and t ∈ [0, 1]. The family K_t^µ(A, B) of sets in F is regarded as a "convex combination" of the measurable sets A and B. The notion of convex combinations of measurable sets developed here can easily be extended to R^n-valued nonatomic finite measures with the aid of Lyapunov's convexity theorem; see Sagara and Vlach [51].

Definition 4.2. A function ν : F → R is:

(i) µ-quasiconcave if A, B ∈ F and t ∈ [0, 1] imply

min{ν(A), ν(B)} ≤ ν(C) for every C ∈ K_t^µ(A, B)

(ν is µ-quasiconvex if −ν is µ-quasiconcave);

(ii) µ-concave if A, B ∈ F and t ∈ [0, 1] imply

tν(A) + (1 − t)ν(B) ≤ ν(C) for every C ∈ K_t^µ(A, B)

(ν is µ-convex if −ν is µ-concave).

The notion of µ-(quasi)concavity was introduced by Sagara and Vlach [48, 49, 50, 51]. The definition bears an obvious resemblance to the definition of (quasi)concave functions on vector spaces.

Example 4.2. Let f be a real-valued function on the range R(µ) of a nonatomic finite measure µ. Then f ∘ µ is µ-(quasi)concave if and only if f is (quasi)concave. Suppose that f is quasiconcave. Because C ∈ K_t^µ(A, B) implies

µ(C) = tµ(A) + (1 − t)µ(B),

we have

f(µ(C)) = f(tµ(A) + (1 − t)µ(B)) ≥ min{f(µ(A)), f(µ(B))}

for every C ∈ K_t^µ(A, B) and t ∈ [0, 1]. Therefore, f ∘ µ is µ-quasiconcave. Conversely, suppose that f is such that f ∘ µ is µ-quasiconcave. Choose x, y ∈ [0, µ(Ω)] and t ∈ [0, 1] arbitrarily. By the nonatomicity of µ, there exist measurable sets A and B such that µ(A) = x and µ(B) = y. As there exist E ∈ K_t^µ(A) and F ∈ K_{1−t}^µ(B) such that E ∩ F = ∅, we have:

f(tx + (1 − t)y) = f(tµ(A) + (1 − t)µ(B)) = f(µ(E) + µ(F)) = f(µ(E ∪ F)) ≥ min{f(µ(A)), f(µ(B))} = min{f(x), f(y)};

hence, f is quasiconcave. The proof for the concave case is similar to the above.
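Example 4.2 can also be verified numerically. The sketch below (ours) takes µ to be Lebesgue measure on [0, 1], builds a set C ∈ K_t^µ(A, B) for disjoint intervals A and B from an initial piece of A of measure tµ(A) and an initial piece of B of measure (1 − t)µ(B), and checks the µ-concavity inequality for the concave function f(x) = √x.

```python
import math

# Numeric check of Example 4.2 for f(x) = sqrt(x), mu = Lebesgue on [0,1].
# A and B are disjoint intervals, so a set C in K_t^mu(A, B) can be built
# from an initial piece of A with measure t*mu(A) and an initial piece of B
# with measure (1-t)*mu(B). (Our illustration; the text works abstractly.)

f = math.sqrt

def convex_combination(A, B, t):
    """Return C = E ∪ F with E ⊆ A, mu(E) = t*mu(A), F ⊆ B, mu(F) = (1-t)*mu(B)."""
    (a0, a1), (b0, b1) = A, B
    E = (a0, a0 + t * (a1 - a0))            # initial segment of A
    F = (b0, b0 + (1 - t) * (b1 - b0))      # initial segment of B
    return [E, F]

def mu(intervals):
    return sum(hi - lo for lo, hi in intervals)

A, B = (0.0, 0.3), (0.5, 0.9)               # disjoint intervals
for k in range(11):
    t = k / 10
    C = convex_combination(A, B, t)
    lhs = t * f(mu([A])) + (1 - t) * f(mu([B]))
    # mu-concavity of f o mu: t f(mu(A)) + (1-t) f(mu(B)) <= f(mu(C))
    assert lhs <= f(mu(C)) + 1e-12
```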
A useful property of a µ-concave set function is that it dominates some nonatomic finite signed measure.

Theorem 4.4. Let µ be a nonatomic finite measure. If ν is a µ-concave set function, then there exists a nonatomic finite signed measure λ such that λ ≤ ν and ν(Ω) = λ(Ω).

Proof. Choose A ∈ F arbitrarily and define t = µ(A)/µ(Ω) ∈ [0, 1]. Because ν(∅) = 0 and A ∈ K_t^µ(Ω, ∅), we have ν(A) ≥ tν(Ω) + (1 − t)ν(∅) = tν(Ω) by the µ-concavity of ν, which yields ν(A) ≥ λ(A) for every A ∈ F with λ := (ν(Ω)/µ(Ω)) µ.

In the terminology of cooperative game theory, Theorem 4.4 is equivalent to stating that the µ-convex game⁵ −ν has a nonatomic finite signed measure in its core. The core C(ν′) of a game ν′ : F → R is defined by:

C(ν′) = { λ ∈ ba(Ω, F) | ν′ ≤ λ, ν′(Ω) = λ(Ω) },
where ba(Ω, F) is the space of finitely additive set functions on F of bounded variation. For the argument concerning the core of a (supermodular) game, see Marinacci and Montrucchio [40] and Schmeidler [53].

⁵ A real-valued set function is called a game.

Convexity of the Lower Partition Range

Admitting nonadditivity of the utility functions means that Lyapunov's convexity theorem can no longer be applied, which readily yields the nonconvexity of the utility possibility set. This difficulty can be overcome by considering the lower partition range of utility functions that satisfy µi-concavity. For functions ν1, . . . , νn on F, define the lower range by:

R(ν1, . . . , νn) = { (x1, . . . , xn) ∈ R^n | ∃A ∈ F : xi ≤ νi(A), i = 1, . . . , n }

and the lower partition matrix range by:

MR^m(ν1, . . . , νn) = { (xij) ∈ R^{nm} | ∃(A1, . . . , Am) ∈ P^m : xij ≤ νi(Aj), i = 1, . . . , n; j = 1, . . . , m }.

Example 4.3. In the setup of Example 4.1, suppose that f1, . . . , fn are concave and continuous. Then PR(f1 ∘ µ1, . . . , fn ∘ µn) is closed and convex in R^n. Similarly, as R(f1 ∘ µ1, . . . , fn ∘ µn) and MR^m(f1 ∘ µ1, . . . , fn ∘ µn) coincide with the sets

{ (x1, . . . , xn) ∈ R^n | ∃(y1, . . . , yn) ∈ R(µ1, . . . , µn) : xi ≤ fi(yi), i = 1, . . . , n }

and

{ (xij) ∈ R^{nm} | ∃(y1, . . . , yn) ∈ PR(µ1, . . . , µn) : xij ≤ fi(yj), i = 1, . . . , n; j = 1, . . . , m },

respectively, they are closed and convex in R^n and in R^{nm}, respectively.
The following result is due to Sagara and Vlach [51]; it is a variant of Lyapunov's convexity theorem, Theorem 2.1, and Corollary 2.1, and its proof is a direct application of Lyapunov's convexity theorem.

Theorem 4.5. Let µ1, . . . , µn be nonatomic finite measures. If νi is µi-concave for each i = 1, . . . , n, then R(ν1, . . . , νn) and PR(ν1, . . . , νn) are convex in R^n and MR^m(ν1, . . . , νn) is convex in R^{nm}.

We now present the remaining proof of Theorem 3.1.

Theorem 4.6. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If νi is strictly µi-monotone, µi-continuous from below and µi-concave for each i = 1, . . . , n and the closedness condition is satisfied, then a partition is Pareto optimal if and only if it is a solution to (Qα) for some α ∈ ∆^{n−1}.

Proof. It is evident by Theorem 4.1(i) that solutions to (Qα) are Pareto optimal for every α ∈ ∆^{n−1}. We shall show the converse implication. Choose any Pareto optimal partition (A*1, . . . , A*n). Then the utility vector (ν1(A*1), . . . , νn(A*n)) is in the boundary of PR(ν1, . . . , νn). Because PR(ν1, . . . , νn) is closed by the closedness condition and convex by Theorem 4.5, the supporting hyperplane theorem implies that there exists a nonzero vector (α1, . . . , αn) in R^n such that

Σ_{i=1}^{n} αi νi(A*i) ≥ Σ_{i=1}^{n} αi xi for every (x1, . . . , xn) ∈ PR(ν1, . . . , νn).

Suppose that some αj < 0. Because PR(ν1, . . . , νn) is unbounded from below in R^n, letting xj → −∞ yields Σ_{i=1}^{n} αi νi(A*i) = +∞, which is impossible. Thus, we may assume without loss of generality that (α1, . . . , αn) is in ∆^{n−1}. As every utility vector (ν1(A1), . . . , νn(An)) of a partition is contained in PR(ν1, . . . , νn), we have Σ_{i=1}^{n} αi νi(A*i) ≥ Σ_{i=1}^{n} αi νi(Ai) for every (A1, . . . , An) ∈ P^n. Therefore, (A*1, . . . , A*n) is a solution to (Qα).
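As an illustration of Theorem 4.6 (ours, with hypothetical data): for two players on Ω = [0, 1] with Lebesgue measure µ and νi = fi ∘ µ for strictly increasing concave fi, only the measures of the pieces matter, so partitions of the form ([0, s], (s, 1]) suffice, and (Qα) reduces to a one-dimensional concave maximization whose solutions are, by the theorem, Pareto optimal.

```python
import numpy as np

# Solving (Q_alpha) in a two-player example (ours, with hypothetical data):
# Omega = [0,1] with Lebesgue measure mu, nu1 = sqrt(mu(A1)), nu2 = mu(A2)**0.75.
# Only the measures of the pieces matter for such utilities, so partitions
# ([0,s], (s,1]) suffice and (Q_alpha) becomes a concave problem in s.

alpha = (0.6, 0.4)
s = np.linspace(0.0, 1.0, 100001)
objective = alpha[0] * np.sqrt(s) + alpha[1] * (1.0 - s) ** 0.75
s_star = s[np.argmax(objective)]

# By Theorem 4.6, the maximizing partition is Pareto optimal.
print(s_star, np.sqrt(s_star), (1.0 - s_star) ** 0.75)
```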
The existence of α-fair partitions can be obtained from the following extension of Theorem 3.5.

Theorem 4.7. Let µ1, . . . , µn be nonatomic finite measures. If νi is normalized and µi-concave for each i = 1, . . . , n, then for every α ∈ ∆^{n−1}, there exists a partition (A1, . . . , An) such that νi(Aj) ≥ αj for each i, j = 1, . . . , n.

Proof. The proof is a slight modification of that for Theorem 3.5(i). Because M(P1), . . . , M(Pn) ∈ MR^n(ν1, . . . , νn), Theorem 4.5 implies that Σ_{i=1}^{n} αi M(Pi) ∈ MR^n(ν1, . . . , νn) for every α ∈ ∆^{n−1}. Therefore, there exists a partition P = (A1, . . . , An) of Ω such that M(P) ≥ Σ_{i=1}^{n} αi M(Pi); that is, νi(Aj) ≥ αj for each i, j.
Theorem 4.7 is quite unsatisfactory in that it is insufficient to obtain envy-free partitions, unlike Theorem 3.5(i). Regarding the existence of envy-free partitions, Theorem 4.2 is the best result known so far for a general measurable space. Stromquist [57] used nonadditive continuous utility functions on a unit simplex to demonstrate the existence of envy-free partitions.

Theorem 4.8. Let µ1, . . . , µn be nonatomic finite measures that are mutually absolutely continuous. If νi is normalized, monotone, µi-continuous from below and µi-concave for each i = 1, . . . , n, some νj is strictly µj-monotone, and the closedness condition is satisfied, then for every α ∈ ∆^{n−1}, there exists a Pareto optimal α-fair partition.
i = 1, . . . , n
(Rα )
(x1 , . . . , xn ) ∈ PR + (ν1 , . . . , νn )
exist for every α ∈ ∆n−1 . Here, we assume without loss of generality that ν1 is strictly µ1 -monotone. Take any solution (x1 , . . . , xn ) to (Rα ). Then there exists a partition (A1 , . . . , An ) such that νi (Ai ) ≥ xi ≥ αi for each i. It suffices to show that (A1 , . . . , An ) is Pareto optimal. Suppose to the contrary that there exists a partition (B1 , . . . , Bn ) such that νi (Ai ) ≤ νi (Bi ) for each i and νj (Aj ) < νj (Bj ) for some j. If j = 1, then x1 ≤ ν1 (A1 ) < ν1 (B1 ), which obviously violates the fact that (x1 , . . . , xn ) is a solution to (Rα ). Thus, we investigate the case for j 6= 1. As νj is µj -continuous from below and µj is nonatomic, there exists a measurable subset F of Bj such that νj (Aj ) < νj (Bj \ F ) and µj (F ) > 0. By the mutual absolute continuity of each µi , we have µi (F ) > 0 for each i. Define a partition (C1 , . . . , Cn ) by C1 = B1 ∪ F , Cj = Bj \ F and Ci = Bi for i 6= 1, j. The strict µ1 -monotonicity of ν1 implies that α1 ≤ x1 ≤ ν1 (A1 ) ≤ ν1 (B1 ) < ν1 (C1 ), which contradicts the fact that (x1 , . . . , xn ) is a solution to (Rα ) in view of νi (Ci ) ≥ αi for each i. Note that to derive the Pareto optimality in Theorem 4.8, it suffices to impose strict µi monotonicity at least on one player’s utility function. Without resorting to any convexity hypothesis, Sagara and Vlach [49] demonstrated the existence of Pareto optimal α-fair partitions for the class of µi -average anti-monotone set functions, which is more general than that of µi -concave set functions. Thus, Theorem 4.8 follows from the special case of Sagara and Vlach [49].
4.4. Submodular Functions
Another important class of nonadditive set functions on σ-algebras is that of submodular functions, which have been investigated independently for many years in game theory, discrete convex analysis, fuzzy measure theory and statistical decision theory.

Given a set function ν, an element N ∈ F is ν-null if ν(A ∪ N) = ν(A) for every A ∈ F. If N ∈ F is ν-null, then ν(N) = 0. A set function ν is null-additive if A ∩ N = ∅ and ν(N) = 0 imply ν(A ∪ N) = ν(A). For a null-additive monotone set function ν, an element N ∈ F is ν-null if and only if ν(N) = 0 (see Pap [43], Theorem 2.1). Set functions ν1, . . . , νn are mutually absolutely continuous if, whenever A ∈ F is νj-null for some j, it is νi-null for each i = 1, . . . , n.
A nonnegative set function ν is nonatomic if for every A ∈ F with ν(A) > 0 there exists a measurable subset B of A such that 0 < ν(B) < ν(A). A set function ν : F → R is submodular (or concave) if ν(A ∪ B) + ν(A ∩ B) ≤ ν(A) + ν(B) for every A, B ∈ F; ν is supermodular (or convex) if −ν is submodular.

Example 4.4. Example 4.2 continues. Suppose that f : R(µ) → R is continuous. Then f ∘ µ is submodular if and only if f is concave. Note that the continuous function f is concave if and only if f has decreasing differences; that is, x, y ∈ R(µ), x ≤ y, x + z, y + z ∈ R(µ) and z ≥ 0 imply f(x + z) − f(x) ≥ f(y + z) − f(y). The submodularity of f ∘ µ is equivalent to f ∘ µ having decreasing differences in the sense that f(µ(A ∪ E)) − f(µ(A)) ≥ f(µ(B ∪ E)) − f(µ(B)) for every A, B, E ∈ F with A ⊆ B and B ∩ E = ∅ (for a proof, see Marinacci and Montrucchio [40], Proposition 4.15); using the nonatomicity of µ, it is then easy to see that f has decreasing differences if and only if f ∘ µ is submodular. Therefore, if f is continuous, then the following conditions are equivalent: (i) f ∘ µ is µ-concave; (ii) f ∘ µ is submodular; (iii) f is concave.

As the following theorem indicates, the monotonicity of submodular (or supermodular) set functions is automatically satisfied when they are normalized (see Dall'Aglio and Maccheroni [16]).

Theorem 4.9. A normalized submodular (supermodular) function is monotone.

Proof. Let A ⊇ B. It follows from the submodularity of ν that:

ν(A) + ν((Ω \ A) ∪ B) ≥ ν(A ∪ (Ω \ A) ∪ B) + ν(A ∩ ((Ω \ A) ∪ B)) = ν(Ω) + ν(A ∩ B) = ν(Ω) + ν(B),

and hence 0 ≤ ν(Ω) − ν((Ω \ A) ∪ B) ≤ ν(A) − ν(B) as ν is normalized. Therefore, ν is monotone. To show the supermodular case, consider the dual ν* of ν defined by ν*(A) = ν(Ω) − ν(Ω \ A) for A ∈ F. It is easy to verify that ν** = ν and that ν* is monotone and supermodular if and only if ν is monotone and submodular. Thus, for supermodular ν, applying the above result to ν* yields the desired result.
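Theorem 4.9 is easy to confirm exhaustively on a small finite σ-algebra. The snippet below (ours) encodes subsets of a four-point Ω as bitmasks, takes the normalized submodular set function ν(A) = √(|A|/|Ω|) (a concave function of cardinality), and verifies both the submodularity inequality and the monotonicity asserted by the theorem.

```python
from math import sqrt

# Exhaustive check of Theorem 4.9 on a four-point Omega (our toy example).
# Subsets are encoded as bitmasks; nu(A) = sqrt(|A|/|Omega|) is a normalized
# submodular set function (a concave function of cardinality).

N = 4
FULL = (1 << N) - 1

def nu(A):
    return sqrt(bin(A).count("1") / N)

assert nu(0) == 0.0 and nu(FULL) == 1.0        # normalized

for A in range(FULL + 1):
    for B in range(FULL + 1):
        # submodularity: nu(A | B) + nu(A & B) <= nu(A) + nu(B)
        assert nu(A | B) + nu(A & B) <= nu(A) + nu(B) + 1e-12
        if A & B == A:                          # A is a subset of B
            assert nu(A) <= nu(B)               # monotonicity (Theorem 4.9)

print("submodularity and monotonicity verified on all pairs of subsets")
```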
The existence of α-fair partitions for the case of submodular set functions was given by Dall'Aglio and Maccheroni [16] and Maccheroni and Marinacci [39].

Theorem 4.10. If ν1, . . . , νn are normalized, nonatomic and submodular set functions that are continuous from above at ∅, then the following hold:

(i) For every α ∈ ∆^{n−1}, there exists an α-fair partition.

(ii) Moreover, if νi ≠ νj for some i ≠ j, then for every α ∈ ∆^{n−1}_+, there exists a partition (A1, . . . , An) such that νi(Ai) > αi for each i = 1, . . . , n.

Proof. (i): Because each −νi is a bounded supermodular game, its core C(−νi) is nonempty, and by the continuity from above and the nonatomicity of νi, every element of C(−νi) is a nonatomic finite signed measure (see Marinacci and Montrucchio [40]). This implies
that νi ≥ µi and νi(Ω) = µi(Ω) hold for every nonatomic probability measure µi with −µi ∈ C(−νi). Thus, Theorem 3.5(i) applied to the µi yields the result.

(ii): Because νi(A) = max{µi(A) | −µi ∈ C(−νi)} for every A ∈ F (see Marinacci and Montrucchio [40]), one can choose nonatomic probability measures µi ≠ µj with −µi ∈ C(−νi) and −µj ∈ C(−νj) whenever νi ≠ νj. Then apply Theorem 3.5(ii) to the µi to obtain the result.

Dall'Aglio and Maccheroni [17] axiomatized preference orderings that encompass the preference orderings induced by submodular set functions and derived the result corresponding to Theorem 4.10 with respect to preference orderings. (For this axiomatization, see also Subsection 5.3.)

Choquet Integral

Let B(Ω, F) be the space of bounded measurable functions on Ω with the supremum norm. An element µ of ba(Ω, F) has a continuous linear extension to B(Ω, F) by the duality relation ⟨µ, f⟩ := ∫_Ω f dµ for f ∈ B(Ω, F). The support functional σ_C : B(Ω, F) → R ∪ {±∞} of a subset C of ba(Ω, F) is defined by σ_C(f) = sup{⟨µ, f⟩ | µ ∈ C}. For nonadditive set functions on F, continuous extensions to B(Ω, F) are given by Choquet integrals (see Choquet [12]). The Choquet integral ν̂ : B(Ω, F) → R of a set function ν is defined by an improper Riemann integral of the form

ν̂(f) = ∫_0^{+∞} ν(f ≥ t) dt + ∫_{−∞}^{0} [ν(f ≥ t) − ν(Ω)] dt.

Here, (f ≥ t) denotes the measurable set {ω ∈ Ω | f(ω) ≥ t}. Note that this integral exists whenever ν is of bounded variation⁶ (see Pap [43], Theorem 7.21), and that ν(A) = ν̂(χA) for every A ∈ F, where χA is the characteristic function of A ∈ F. For the Choquet integral ν̂ of a set function ν of bounded variation, the following conditions are equivalent: (i) ν is submodular; (ii) ν̂ is submodular, that is, ν̂(f ∨ g) + ν̂(f ∧ g) ≤ ν̂(f) + ν̂(g) for every f, g ∈ B(Ω, F);⁷ (iii) ν̂ is convex; (iv) ν̂ is subadditive, that is, ν̂(f + g) ≤ ν̂(f) + ν̂(g) for every f, g ∈ B(Ω, F); (v) ν̂(f) = max{⟨µ, f⟩ | −µ ∈ C(−ν)} for every f ∈ B(Ω, F).⁸ (For a proof, see Marinacci and Montrucchio [40].)

⁶ A set function ν is of bounded variation if sup Σ_{i=1}^{k} |ν(Ai) − ν(Ai−1)| is finite, where the supremum is taken over all finite chains ∅ = A0 ⊆ A1 ⊆ · · · ⊆ Ak = Ω in F. A bounded submodular set function is of bounded variation (see Marinacci and Montrucchio [40]).
⁷ Here, f ∨ g and f ∧ g are the pointwise maximum and pointwise minimum of f and g, respectively, defined by (f ∨ g)(ω) = max{f(ω), g(ω)} and (f ∧ g)(ω) = min{f(ω), g(ω)}.
⁸ Condition (v) states that ν̂ is the support functional of −C(ν).
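For a simple function on a finite Ω, the improper Riemann integral above reduces to a finite sum over the level sets of f: with the values a1 ≤ · · · ≤ an of f arranged in increasing order, ν̂(f) = a1 ν(Ω) + Σ_{i=2}^{n} (ai − ai−1) ν(f ≥ ai). A minimal implementation of this finite formula (ours; `capacity` is a hypothetical dictionary mapping frozensets to values with ν(∅) = 0, defined on every upper level set of f that occurs) is:

```python
# A minimal Choquet integral for simple functions on a finite Omega
# (our sketch; it implements the finite-sum form of the definition above).

def choquet(f, capacity, omega):
    """f: dict mapping the points of omega to real values."""
    pts = sorted(omega, key=lambda w: f[w])       # arrange values increasingly
    total, prev = 0.0, 0.0
    for i, w in enumerate(pts):
        a = f[w]
        if i == 0:
            total += a * capacity[frozenset(omega)]       # a_1 * nu(Omega)
        else:
            upper = frozenset(pts[i:])                    # level set {f >= a_i}
            total += (a - prev) * capacity[upper]         # (a_i - a_{i-1}) * nu(...)
        prev = a
    return total

omega = {"x", "y"}
capacity = {frozenset(): 0.0, frozenset({"x"}): 0.5 ** 0.5,
            frozenset({"y"}): 0.5 ** 0.5, frozenset(omega): 1.0}
f = {"x": 1.0, "y": 0.0}
print(choquet(f, capacity, omega))   # equals capacity[{"x"}], i.e. nu-hat(chi_A) = nu(A)
```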
Submodularity vs. Supermodularity

As demonstrated in Example 4.4, the submodularity of ν is equivalent to ν having decreasing differences in the sense that

ν(A ∪ E) − ν(A) ≥ ν(B ∪ E) − ν(B)

for every A, B, E ∈ F with A ⊆ B and B ∩ E = ∅. Thus, a submodular utility function indeed exhibits "decreasing marginal utility". However, there is a seemingly contradictory property of submodular utility functions when one extends them by Choquet integrals: a utility function is submodular on F if and only if its Choquet integral is convex on B(Ω, F). When one treats utility functions on the vector space B(Ω, F), it is quite natural to require them to be monotone and concave on B(Ω, F). This is because the monotone concavity of the utility functions implies a "decreasing marginal rate of substitution", a more general property than decreasing marginal utility, if Ω is a finite set. It also captures the risk averseness of individuals confronting choices under uncertainty if each element of B(Ω, F) is regarded as a random variable. If one wishes the extension of a utility function ν by Choquet integrals to preserve this standard assumption, one must require ν to be supermodular; that is, ν should have increasing marginal utility.

In practical terms, the set functions chosen depend on the specific situation under investigation. For example, Dall'Aglio and Maccheroni [17] propose the use of submodular functions in a land division context, because convex Choquet extensions mean that the concentration of land is desirable. A converse interpretation is, however, possible when one considers the division of a pizza (where a half-and-half mixture is desirable), in which case concave Choquet extensions (or, equivalently, supermodular utility functions) appear plausible. The pursuit of existence theorems for supermodularity as well as submodularity is thus of independent interest in its own right.
5. Representation of Preference Orderings on σ-Algebras

5.1. Representation by µ-Quasiconcave Functions
We have assumed in Subsection 4.3 that the preference orderings of the players have numerical representations by µi-concave functions. Here, we provide a theoretical framework for representing a preference ordering in terms of µ-quasiconcave functions.

A preference ordering ≿ on F is a complete transitive binary relation on F. The strict ordering A ≻ B means that A ≿ B but not B ≿ A; the indifference ordering A ∼ B means that both A ≿ B and B ≿ A hold. If ν is a real-valued function on F such that ν(A) ≥ ν(B) if and only if A ≿ B, then ν is called a utility function for ≿.

Axiom 5.1 (µ-upper semicontinuity). For every A, B ∈ F with A ≻ B there exists some δ > 0 such that µ(B △ B′) < δ implies A ≻ B′.

Axiom 5.2 (µ-lower semicontinuity). For every A, B ∈ F with A ≻ B there exists some δ > 0 such that µ(A △ A′) < δ implies A′ ≻ B.

≿ is said to be µ-continuous if it satisfies both µ-upper and µ-lower semicontinuity. The continuity of preference orderings states that if A is preferred to B, then any A′ close enough to A is preferred to any B′ close enough to B, where closeness is induced by the subjective measure µ.

Axiom 5.3 (µ-convexity). A ≿ X, B ≿ X and t ∈ [0, 1] imply Y ≿ X for every Y ∈ K_t^µ(A, B).
The convexity of preference orderings can be interpreted as follows. If A and B are disjoint elements of F, then any union of a "half" of A and a "half" of B, evaluated by the subjective measure µ, is preferred to A or to B; "mixing" is desirable, in contrast to additive preferences, for which mixing is a matter of indifference.

Definition 5.1. A function ν : F → R is µ-continuous at A ∈ F if for every ε > 0 there exists some δ > 0 such that µ(A △ B) < δ implies |ν(A) − ν(B)| < ε. If ν is µ-continuous at every element of F, it is said to be µ-continuous.

The following result is due to Sagara and Vlach [48].

Theorem 5.1. Let (Ω, F, µ) be a nonatomic finite measure space such that F is countably generated. A preference ordering on F satisfies Axioms 5.1 to 5.3 if and only if it is representable by a µ-continuous, µ-quasiconcave function. This representation is unique up to a monotone increasing transformation.

Proof. Two measurable sets A and B in F are µ-equivalent if µ(A △ B) = 0. It can easily be seen that µ-equivalence is an equivalence relation on F. We denote the µ-equivalence class of A ∈ F by [A] and the set of µ-equivalence classes in F by Fµ. For any two µ-equivalence classes [A] and [B], we define the metric dµ on Fµ by dµ([A], [B]) = µ(A △ B). If F is countably generated, then the metric space (Fµ, dµ) is complete and separable (see Dunford and Schwartz [21], Lemma III.7.1 and Halmos [26], Theorem 40.B).

Let ≿ be a preference ordering satisfying Axioms 5.1 to 5.3. We first show that µ(A △ B) = 0 implies A ∼ B. Suppose to the contrary that there exist A and B in F such that µ(A △ B) = 0 and A ≻ B. By the µ-upper semicontinuity of ≿, there exists some δ > 0 such that for every B′ ∈ F with µ(B △ B′) < δ we have A ≻ B′. Choosing B′ = A, we obtain A ≻ A, a contradiction. Therefore, we must have B ≿ A whenever µ(A △ B) = 0. By interchanging the roles of A and B in the above argument, we obtain A ∼ B. Thus, ≿ induces a preference relation ≿µ on the metric space (Fµ, dµ) defined by [A] ≿µ [B] if and only if A ≿ B.

By the µ-continuity of ≿, for every [A] ∈ Fµ, both the upper contour set {[B] ∈ Fµ | [B] ≿µ [A]} and the lower contour set {[B] ∈ Fµ | [A] ≿µ [B]} are closed in the metric topology of Fµ. Because ≿µ is a continuous preference ordering on the separable metric space Fµ, by virtue of the celebrated theorem of Debreu [18] there exists on Fµ a continuous utility function νµ for ≿µ. This representation is unique up to a monotone increasing transformation. Define ν(A) = νµ([A]). Then ν is a µ-continuous utility function for ≿. Choose A, B ∈ F arbitrarily. Without loss of generality, we may assume A ≿ B. Because the µ-convexity of ≿ implies X ≿ B for every X ∈ K_t^µ(A, B) and t ∈ [0, 1], we then have ν(X) ≥ ν(B) = min{ν(A), ν(B)}. Therefore, ν is µ-quasiconcave.

Conversely, let ν be a µ-continuous, µ-quasiconcave utility function for ≿. To show the µ-upper semicontinuity of ≿, let A ≻ B. By the µ-continuity of ν, there exists some δ > 0 such that µ(B △ B′) < δ implies ν(A) > ν(B′), which is equivalent to A ≻ B′. The proof of the µ-lower semicontinuity of ≿ is similar. We finally show that ≿ is µ-convex. Let A ≿ X, B ≿ X, and t ∈ [0, 1]. Because ν(Y) ≥ min{ν(A), ν(B)} ≥ ν(X) for every Y ∈ K_t^µ(A, B), we then have Y ≿ X.
Fine [23] and Roberts [44] obtained a representation by nonadditive set functions by imposing the continuity with respect to an order interval topology on an algebra. However, it remains unclear whether the utility functions obtained by their approach exhibit decreasing marginal utility.
5.2. Representation by Capacities
Recall that a capacity is a monotone continuous set function on F. We establish a sufficient condition for the representability of monotone µ-continuous preference orderings by means of capacities.

Axiom 5.4 (Monotonicity). A ⊆ B implies A ≾ B.

The following result is due to Sagara and Vlach [48].

Theorem 5.2. Let (Ω, F, µ) be a nonatomic finite measure space such that F is countably generated. A preference ordering on F is representable by a capacity if it satisfies Axioms 5.1, 5.2, and 5.4.

Proof. By virtue of the proof of Theorem 5.1, it is immediate that a preference ordering satisfies Axioms 5.1, 5.2, and 5.4 if and only if it is representable by a µ-continuous monotone function ν.

Let {A^k} be a sequence in F with A^k ↑ ∪_{k=1}^{∞} A^k. It follows from the continuity of µ from below that µ(A^k) ↑ µ(∪_{k=1}^{∞} A^k). We then have:

µ( (∪_{k=1}^{∞} A^k) △ A^q ) = µ(∪_{k=1}^{∞} A^k) − µ(A^q) ↓ 0 as q → ∞.

If {A^k} is a monotonically decreasing sequence in F, the above argument, applied to the monotonically increasing sequence {B^k} defined by B^k = Ω \ A^k, yields:

µ( (∩_{k=1}^{∞} A^k) △ A^q ) = µ(A^q) − µ(∩_{k=1}^{∞} A^k) ↓ 0 as q → ∞.

Therefore, the µ-continuity of ν implies continuity from above and below, and hence ν is a capacity.

The proof of Theorem 5.2 also demonstrates the following:

Corollary 5.1. A monotone µ-continuous set function on F is a capacity.
5.3. Representation by Support Functionals
To formulate another nonadditive representation result, we investigate preference orderings on the space B0(Ω, F) of measurable simple functions on Ω taking values in [0, 1]. Denote by M the space of nonatomic finite signed measures that assign 1 to Ω, endowed with the relative weak* topology from ba(Ω, F).
Let ≿ be a complete transitive binary relation on B0(Ω, F). The preference χA ≿ χB between characteristic functions of A and B in F is equivalently written as A ≿ B. An element N ∈ F is null for ≿ if A ∪ N ∼ A for every A ∈ F. An atom for ≿ is a nonnull element A ∈ F such that for every E ⊆ A, either E or A \ E is null for ≿.

Axiom 5.5 (Nontriviality). Ω ≻ ∅.

Axiom 5.6 (Nonatomicity). There is no atom for ≿.

Axiom 5.7 (Lower semicontinuity). fk → f pointwise and fk ≾ g for each k imply f ≾ g.

Axiom 5.8 (Convexity). f ∼ g implies (1/2)f + (1/2)g ≿ f.

Axiom 5.9 (Constant independence). For every α, β ∈ [0, 1] with α ≠ 0: f ≿ g if and only if αf + (1 − α)βχΩ ≿ αg + (1 − α)βχΩ.

The following result is due to Dall'Aglio and Maccheroni [17].

Theorem 5.3. A preference ordering ≿ on B0(Ω, F) satisfies Axioms 5.5 to 5.9 if and only if there exists a unique weak*-compact, convex subset C of M such that:

f ≿ g ⟺ min{⟨µ, f⟩ | µ ∈ C} ≥ min{⟨µ, g⟩ | µ ∈ C}.⁹
A preference ordering ≿ on F induced by a normalized nonatomic supermodular set function ν can be extended to B(Ω, F) (we do not relabel the extension) by means of the Choquet integral ν̂; that is, f ≿ g ⟺ ν̂(f) ≥ ν̂(g). It is easy to verify that the restriction of ≿ to B0(Ω, F) satisfies Axioms 5.5 and 5.6. Because ν̂ is Lipschitz continuous, concave and translation invariant (see Marinacci and Montrucchio [40]), Axioms 5.7 to 5.9 are met as well. Moreover, ν̂(f) = min{⟨µ, f⟩ | µ ∈ C(ν)} for f ∈ B(Ω, F), where the core C(ν) of ν is a weak*-compact, convex subset of M by the nonatomicity of ν. Conversely, Theorem 5.3 implies that if ≿ on B0(Ω, F) is restricted to the space of characteristic functions of F, which can be identified with F, the restriction of ≿ is representable in the form:

A ≿ B ⟺ min{µ(A) | µ ∈ C} ≥ min{µ(B) | µ ∈ C}

for some weak*-compact, convex subset C of M.

⁹ When concentration is desirable, as in land divisions, Axiom 5.7 is replaced with upper semicontinuity (fk → f pointwise and fk ≿ g for each k imply f ≿ g) and Axiom 5.8 is replaced with the axiom that f ∼ g implies f ≿ (1/2)f + (1/2)g. In this case, the assertion of Theorem 5.3 becomes:

f ≿ g ⟺ max{⟨µ, f⟩ | µ ∈ C} ≥ max{⟨µ, g⟩ | µ ∈ C}.
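In finite dimensions, the representation in Theorem 5.3 is simply a worst-case expectation over the set C. A toy illustration (ours; C is a hypothetical two-element set of probability vectors on a three-point Ω; since the minimum of a linear functional over a convex hull is attained at an extreme point, listing the extreme priors suffices):

```python
import numpy as np

# Toy illustration of Theorem 5.3 on a three-point Omega (our example):
# f is weakly preferred to g iff its worst-case expectation over the set C
# of priors is at least that of g.

C = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])

def worst_case(f):
    return float(np.min(C @ f))      # min over mu in C of <mu, f>

f = np.array([1.0, 0.0, 0.0])        # the characteristic function of A = {omega_1}
g = np.array([0.4, 0.4, 0.4])        # a constant act

print(worst_case(f), worst_case(g))  # 0.2 versus 0.4, so g is preferred to f
```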
5.4. Representation by Probability Measures
We assumed in Section 3 that the preference orderings of the players are represented by nonatomic finite measures. Theorems 5.2 and 5.3 do not necessarily imply that the utility functions representing preference orderings are nonatomic finite measures. We thus need an axiomatization of preference orderings that admit a representation in terms of nonatomic finite measures.

Axiom 5.10 (Nontriviality). Ω ≻ ∅ and Ω ≿ A ≿ ∅ for every A ∈ F.

Axiom 5.11 (Nonatomicity). For every A ∈ F with A ≻ ∅ there exists B ⊆ A such that A ≻ B ≻ ∅.

When ≿ is monotone, Axiom 5.11 is equivalent to Axiom 5.6, which can be shown by an argument similar to Pap [43], Theorem 2.1.

Axiom 5.12 (Disjoint additivity). A1 ≿ B1, A2 ≿ B2 and A1 ∩ A2 = ∅ imply A1 ∪ A2 ≿ B1 ∪ B2.

Axioms 5.10 to 5.12 were proposed by de Finetti [19]. A preference ordering satisfying the above axioms is a subjective probability (or qualitative probability).

Axiom 5.13 (Continuity from below). Ak ↑ A (i.e., A1 ⊆ A2 ⊆ · · · and ∪_{k=1}^{∞} Ak = A) and Ak ≾ B for each k imply A ≾ B.
The following result is due to Villegas [60].

Theorem 5.4. A preference ordering on F satisfies Axioms 5.10 to 5.13 if and only if it is representable by a unique nonatomic probability measure.

Generalizations of Theorem 5.4 are found in, for example, Chuaqui and Malitz [13] and Zhang [62]. The representation by finitely additive measures on algebras was given by, for example, Barbanel and Taylor [5], Koopman [31], and Savage [52]. For a good introduction to the additive representation, see Krantz et al. [32], Chapter 5 and Kreps [33], Chapter 8.
6. Conclusion
We conclude this chapter by posing some open problems and noting some topics not discussed. As mentioned earlier, the existence of α-envy-free partitions in a general measurable space is unsolved, even in the additive case (see Theorem 3.6), and the existence of Pareto optimal envy-free partitions is unknown for general nonadditive utility functions (see Theorem 4.2).

We have not pursued the representability of µ-convex preference orderings by µ-concave utility functions, which is a nontrivial problem (see Theorem 5.1). The situation here is similar to the possibility that convex preference orderings may not have a representation in terms of concave utility functions on a commodity space. For a finite-dimensional commodity space, Kannai [30] characterized the representability of convex preference orderings by concave utility functions. At present, we do not know whether the approach of Kannai is applicable to µ-convex preference orderings on a σ-algebra in our framework. In addition, the representability of preference orderings by submodular functions is unsolved for an infinite set (see Theorem 5.3), although the case of a finite lattice was settled by Chambers and Echenique [11].

This chapter is silent about algorithms or protocols for obtaining solutions in a finite number of cake cuts. For instance, Brams and Taylor [9] constructed a procedure that derives fair partitions and envy-free partitions in finitely many steps when the utility function of each player is given by a nonatomic probability measure and the number of players is an arbitrary finite number. Likewise, Brams et al. [8] constructed a procedure for obtaining Pareto optimal, α-equitable partitions with two individuals. No similar procedure is known for nonadditive utility functions. For the construction of solutions as a game form in the additive case, see Brams and Taylor [10], Kuhn [34], and Robertson and Webb [45].
Acknowledgement

This research is supported by a Grant-in-Aid for Scientific Research (No. 21530277) from the Ministry of Education, Culture, Sports, Science and Technology, Japan.
References

[1] Akin, E., (1995). "Vilfredo Pareto cuts the cake", J. Math. Econom. 24, 23–44.
[2] Aliprantis, C. D., Brown, D. J. and O. Burkinshaw, (1990). Existence and Optimality of Competitive Equilibria, Springer-Verlag, Berlin.
[3] Barbanel, J. B., (1996). "Super envy-free cake division and independence of measures", J. Math. Anal. Appl. 197, 54–60.
[4] Barbanel, J. B., (2005). The Geometry of Efficient Fair Division, Cambridge University Press, Cambridge.
[5] Barbanel, J. B. and A. D. Taylor, (1995). "Preference relations and measures in the context of fair division", Proc. Amer. Math. Soc. 123, 2061–2070.
[6] Barbanel, J. B. and W. S. Zwicker, (1997). "Two applications of a theorem of Dvoretsky, Wald, and Wolfovitz to cake division", Theory and Decision 43, 203–207.
[7] Berliant, M., W. Thomson and K. Dunz, (1992). "On the fair division of a heterogeneous commodity", J. Math. Econom. 21, 201–216.
[8] Brams, S. J., M. A. Jones and C. Klamler, (2008). "Proportional pie-cutting", Internat. J. Game Theory 36, 353–367.
[9] Brams, S. J. and A. D. Taylor, (1995). "An envy-free cake division protocol", Amer. Math. Monthly 102, 9–18.
[10] Brams, S. J. and A. D. Taylor, (1996). Fair Division: From Cake-Cutting to Dispute Resolution, Cambridge University Press, Cambridge.
[11] Chambers, C. P. and F. Echenique, (2009). "Supermodularity and preferences", J. Econom. Theory 144, 1004–1014.
[12] Choquet, G., (1955). "Theory of capacities", Ann. Inst. Fourier (Grenoble) 5, 131–295.
[13] Chuaqui, R. and J. Malitz, (1983). "Preorderings compatible with probability measures", Trans. Amer. Math. Soc. 279, 811–824.
[14] Dall'Aglio, M., (2001). "The Dubins-Spanier optimization problems in fair division theory", J. Comput. Anal. Appl. 130, 17–40.
[15] Dall'Aglio, M. and T. P. Hill, (2003). "Maximin share and minimax envy in fair-division problems", J. Math. Anal. Appl. 281, 346–361.
[16] Dall'Aglio, M. and F. Maccheroni, (2005). "Fair division without additivity", Amer. Math. Monthly 112, 363–365.
[17] Dall'Aglio, M. and F. Maccheroni, (2009). "Disputed lands", Games Econom. Behav. 66, 57–77.
[18] Debreu, G., (1964). "Continuity properties of Paretian utility", Internat. Econom. Rev. 5, 285–293.
[19] de Finetti, B., (1937). "La prévision: ses lois logiques, ses sources subjectives", Ann. Inst. H. Poincaré Probab. Statist. 7, 1–68. English translation by H. E. Kyburg, "Foresight: Its logical laws, its subjective sources", in: H. E. Kyburg and H. E. Smokler (eds.), Studies in Subjective Probability, John Wiley & Sons, New York.
[20] Dubins, L. E. and E. H. Spanier, (1961). "How to cut a cake fairly", Amer. Math. Monthly 68, 1–17.
[21] Dunford, N. and J. T. Schwartz, (1958). Linear Operators, Part I: General Theory, John Wiley & Sons, New York.
[22] Dvoretsky, A., A. Wald and J. Wolfowitz, (1951). "Relations among certain ranges of vector measures", Pacific J. Math. 1, 59–74.
[23] Fine, T., (1971). "A note on the existence of quantitative probability", Ann. Math. Statist. 42, 1182–1186.
[24] Gouweleeuw, J., (1995). "A characterization of vector measures with convex range", Proc. London Math. Soc. 70, 336–362.
[25] Halmos, P. R., (1948). "The range of a vector measure", Bull. Amer. Math. Soc. 54, 416–421.
[26] Halmos, P. R., (1950). Measure Theory, Van Nostrand, New York.
[27] Hüsseinov, F., (2008). "Existence of the core in a heterogeneous divisible commodity exchange economy", Internat. J. Game Theory 37, 387–395.
[28] Hüsseinov, F., (2009). "α-maximin solutions to fair division problems and the structure of the set of Pareto utility profiles", Math. Social Sci. 57, 279–281.
[29] Ichiishi, T. and A. Idzik, (1999). "Equitable allocation of divisible goods", J. Math. Econom. 32, 389–400.
[30] Kannai, Y., (1977). "Concavifiability and constructions of concave utility functions", J. Math. Econom. 4, 1–56.
[31] Koopman, B. O., (1940). "The axioms and algebra of intuitive probability", Ann. of Math. 41, 269–292.
[32] Krantz, D., R. D. Luce, P. Suppes and A. Tversky, (1971). Foundations of Measurement, Vol. I: Additive and Polynomial Representations, Academic Press, San Diego.
[33] Kreps, D. M., (1988). Notes on the Theory of Choice, Westview Press, Boulder.
[34] Kuhn, H. W., (1967). "On games of fair division", in: M. Shubik, ed., Essays in Mathematical Economics: In Honor of Oskar Morgenstern, Princeton University Press, Princeton.
[35] Legut, J., (1986). "Market games with a continuum of indivisible commodities", Internat. J. Game Theory 15, 1–7.
[36] Legut, J. and M. Wilczyński, (1988). "Optimal partitioning of a measurable space", Proc. Amer. Math. Soc. 104, 262–264.
[37] Lindenstrauss, J., (1966). "A short proof of Lyapunov's convexity theorem", J. Math. Mech. 15, 971–972.
[38] Lyapunov, A., (1940). "Sur les fonctions-vecteurs complètement additives", Bull. Acad. Sci. URSS. Sér. Math. 4, 465–478 (in Russian).
[39] Maccheroni, F. and M. Marinacci, (2003). "How to cut a pizza fairly: Fair division with decreasing marginal evaluations", Soc. Choice Welf. 20, 457–465.
[40] Marinacci, M. and L. Montrucchio, (2004). "Introduction to the mathematics of ambiguity", in: I. Gilboa, ed., Uncertainty in Economic Theory, Routledge, New York, 46–107.
[41] Mas-Colell, A., (1986). "The price equilibrium existence problem in topological vector lattices", Econometrica 54, 1039–1053.
[42] Mas-Colell, A., M. D. Whinston and J. R. Green, (1995). Microeconomic Theory, Oxford University Press, Oxford.
[43] Pap, E., (1995). Null-Additive Set Functions, Kluwer Academic Publishers, Dordrecht.
[44] Roberts, F. S., (1973). "A note on Fine's axioms for qualitative probability", Ann. Probab. 1, 484–487; "Correction", (1974), ibid. 2, p. 182.
[45] Robertson, J. and W. Webb, (1998). Cake-Cutting Algorithms: Be Fair If You Can, A K Peters, Natick.
[46] Sagara, N., (2006). "An existence result on partitioning of a measurable space: Pareto optimality and core", Kybernetika 42, 475–481.
[47] Sagara, N., (2008). "A characterization of α-maximin solutions of fair division problems", Math. Social Sci. 55, 273–280.
[48] Sagara, N. and M. Vlach, (2009). "Representation of preference relations on σ-algebras of nonatomic measure spaces: Convexity and continuity", Fuzzy Sets and Systems 160, 624–634.
[49] Sagara, N. and M. Vlach, (2009). "A new class of convex games on σ-algebras and optimal partitioning of measurable spaces", Faculty of Economics, Hosei University, mimeo., ⟨http://home.v00.itscom.net/nsagara/⟩.
[50] Sagara, N. and M. Vlach, (2010). "Convex functions on σ-algebras of nonatomic measure spaces", Pac. J. Optim. 6, 89–102.
[51] Sagara, N. and M. Vlach, (2010). "Convexity of the lower partition range of a concave vector measure", Adv. Math. Econ. 13, 155–160.
[52] Savage, L. J., (1972). The Foundations of Statistics, second revised ed., Dover, New York.
[53] Schmeidler, D., (1972). "Cores of exact games, I", J. Math. Anal. Appl. 40, 214–225.
[54] Schmeidler, D. and K. Vind, (1972). "Fair net trades", Rev. Econom. Stud. 40, 637–641.
[55] Steinhaus, H., (1948). "The problem of fair division", Econometrica 16, 101–104.
[56] Steinhaus, H., (1949). "Sur la division pragmatique", Econometrica 17 (Suppl.), 315–319.
[57] Stromquist, W., (1980). "How to cut a cake fairly", Amer. Math. Monthly 87, 640–644. Addendum in: Amer. Math. Monthly 88, 613–614.
[58] Thomson, W., (2007). "Children crying at birthday parties. Why?", Econom. Theory 31, 501–521.
[59] Varian, H. R., (1974). "Equity, envy, and efficiency", J. Econom. Theory 9, 63–91.
[60] Villegas, C., (1964). "On qualitative probability σ-algebras", Ann. Math. Statist. 35, 1787–1796.
[61] Weller, D., (1985). "Fair division of a measurable space", J. Math. Econom. 14, 5–17.
[62] Zhang, J., (1999). "Qualitative probabilities on λ-systems", Math. Social Sci. 38, 11–20.
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 185–210

ISBN 978-1-60876-500-3
© 2011 Nova Science Publishers, Inc.
Chapter 8
A VOF-BASED SHAPE OPTIMIZATION METHOD FOR INCOMPRESSIBLE HYPERELASTIC MATERIALS

Kazuhisa Abe
Department of Civil Engineering and Architecture, Niigata University
Abstract

This chapter presents an optimization method for the material layout of incompressible rubber components. Topology optimization is realized within the framework of the material approach, in which the material distribution is described on an Eulerian mesh. In order to avoid the occurrence of the checkerboard pattern and intermediate density, the VOF method is employed for representation of the material region. In this method an optimal shape results from the advection of the VOF function governed by the Hamilton-Jacobi equation. The relaxation of incompressibility in void regions is achieved by replacing the rubber with a compressible linear material. Through numerical examples, the validity of the developed method is examined.
1. Introduction
Rubber components are utilized in various structural members and machine parts to reduce the vibrations or shocks acting on structures. Since these materials undergo little volume change under deformation, in general, their mechanical behavior can be modeled by incompressible hyperelasticity. Owing to this quasi-incompressibility, the mechanical characteristics depend strongly on the shape. Consequently, it is very important to find a material layout endowed with the required performance. For this purpose, shape or topology optimization methods can be helpful tools in the design process. Kim & Kim [1] have developed a shape optimization method for an engine mount. In their work the rubber was modeled by an incompressible hyperelastic material given by the Mooney-Rivlin form. Choi & Duan [2] have attempted the shape optimization of structural components. In their paper the material is also described by the Mooney-Rivlin model. A topology optimization approach for rubber components has been developed by Lee &
Youn [3]. They considered both static and dynamic (viscous damping) effects in the optimization. Since the topology optimization method allows any topological change during the optimization process, an optimal solution can be found without any restrictions on the shape. This is realizable by means of the so-called material approaches [4], in which the material layout is represented by a distribution of material density on an Eulerian mesh fixed in a design domain. In this method the structural shape is given by the region in which the material density is equal to 1. Hence, topology optimization reduces to an optimal sizing problem for the density in the design domain, with the design variables given by the material density and controlled element-wise. The topological change arises from this process. Although the material approach enables us to realize any topology, it involves some issues, such as the occurrence of the checkerboard pattern and of intermediate density. Various remedies for the former have been proposed; filtering of the material distribution [5] is a simple and effective approach for that anomaly. The latter can be reduced with the aid of a simple method called solid isotropic microstructure with penalty (SIMP) [6, 7], in which a penalty is imposed on the intermediate density. Although this cannot avoid shading completely, the result is usually sufficient to recognize the profile. Nevertheless, the residue of intermediate density may cause a serious problem in the topology analysis of incompressible materials. In the material approach, to ensure the solvability of the finite element equations, voids or holes are replaced by very soft materials; still, the incompressibility remains. This may induce unnatural pressure inside holes under deformation and may affect the optimization process. Abe, Nishinomiya & Koro [8] attempted to suppress the unwanted pressure by restoring the compressibility and controlling the Poisson's ratio within the framework of the SIMP method. In that strategy the Poisson's ratio is given as a function of the density, so that the material region possesses quasi-incompressibility while the voids preserve compressibility. Although that method can reduce the pressure in holes, the relaxation of incompressibility for the intermediate density means that incompressibility is sacrificed in narrow material regions. The reduction of intermediate density by the SIMP method is thus sufficient for recognition of the shape, but insufficient to conserve mechanical properties such as stiffness and incompressibility. The occurrence of the checkerboard pattern and of intermediate density is closely related to the way the material approaches carry out the optimization: the material density in each finite element is updated individually, and the geometrical connectivity of the distribution pattern is not considered in the optimization process. An essential resolution of these issues can be attained by virtue of the level set method [9]. In this method the profile of a structure is given sharply by the zero contour lines of the level set function, defined on a fixed mesh as a signed distance function from the shoreline. Hence, intermediate density never appears in the design domain. A change of shape and topology reduces to the advection of the level set function governed by the Hamilton-Jacobi equation, which also contributes to suppression of the checkerboard pattern.
Though topology optimization with the level set approach can be an effective method, its formulation and implementation are more complicated than
those of the density methods. A new method that possesses both the simplicity of the material approaches and the performance of the level set method has been proposed by Abe & Koro [10]. In that approach the volume of fluid (VOF) method [11] is utilized to depict the shape. The material layout is given by the VOF function, which represents the volume fraction of material occupying each finite element; the VOF function can therefore be regarded as a material density. The difference between the two classes of methods lies in how the function is updated. While, as mentioned above, the material density of each element is optimized independently, the VOF function is changed through its advection, as is the level set function. Because of this, the method is free from the checkerboard pattern. Moreover, as long as the advection of the VOF function is computed accurately, the boundary can be captured clearly. The reduction of the intermediate density region is advantageous especially for incompressible materials. In this chapter, application of the VOF-based topology optimization method to rubber components subjected to large deformation is attempted. In order to attain compressibility of the empty regions, holes are filled with a compressible soft material. Since the VOF function is given by an element-wise constant distribution, a boundary region with intermediate values about one mesh wide is unavoidable; the mechanical modeling therefore has to cope with this material transition zone. For this purpose, the finite element equations are constructed as a linear combination of the incompressible hyperelasticity and compressible linear elasticity forms, with the weighting function given as a function of the VOF value. As numerical examples, applications of the developed method to the minimum compliance and load-displacement curve-fitting problems are demonstrated. Based on these results, the validity of the optimization method is confirmed.
2. Finite Element Equations for Incompressible Hyperelastic Materials
2.1. Constitutive Form of Incompressible Hyperelasticity
Since rubber components possess quasi-incompressibility, they can be modeled approximately by incompressible hyperelastic materials. In the following, the Mooney-Rivlin form is employed as a constitutive equation. In this case, stresses can be derived from the potential function [12] given by
W = \bar{W} - p(j - 1), \qquad \bar{W} = c_1 (I_C III_C^{-1/3} - 3) + c_2 (II_C III_C^{-2/3} - 3),  (1)
where p is the pressure and I_C, II_C, III_C are the principal invariants of the right Cauchy-Green deformation tensor C, j = \det F, where F is the deformation gradient tensor, and c_1 and c_2 are material constants. F and C are described in terms of the displacement u as
F = I + u \otimes \nabla, \qquad C = F^T \cdot F, \qquad \nabla := \frac{\partial}{\partial x_i} e_i,  (2)
where I is the identity tensor, \otimes denotes the tensor product, x_i and e_i are the coordinates and the basis vectors before deformation, and ( )^T stands for the transpose of a tensor. The invariants I_C, II_C, III_C can be described by C as
I_C = \mathrm{tr}\, C, \qquad II_C = \frac{1}{2}\left[ (\mathrm{tr}\, C)^2 - \mathrm{tr}(C^2) \right], \qquad III_C = \det C \equiv j^2,  (3)
here tr( ) stands for the trace of a tensor. The second Piola-Kirchhoff stress tensor S is given by
S = 2 \frac{\partial W}{\partial C} = 2 \frac{\partial \bar{W}}{\partial C} - 2p \frac{\partial j}{\partial C}.  (4)
From eqs (1), (2) and (3), S can be expressed as follows,
S = 2 c_1 III_C^{-1/3} \left( I - \frac{1}{3} I_C C^{-1} \right) + 2 c_2 III_C^{-2/3} \left( I_C I - C - \frac{2}{3} II_C C^{-1} \right) - p\, j\, C^{-1}.  (5)
2.2. Variational Formulation
Let us consider a static finite deformation problem. The governing equation and the boundary conditions are given by the following equations:
\nabla \cdot (S \cdot F^T) + \rho_0 g = 0 (in \Omega), \qquad u = \bar{u} (on \Gamma_u), \qquad (S \cdot F^T)^T \cdot n = \bar{t} (on \Gamma_t),  (6)
where \Omega is the domain of the structure, \rho_0 is the mass density at the initial state, g is the acceleration due to gravity, n is the unit outward normal vector in the undeformed frame, \bar{u} is a prescribed displacement on the sub-boundary \Gamma_u and \bar{t} is a prescribed traction on \Gamma_t. It is assumed that the sub-boundaries satisfy the condition \Gamma_u \cap \Gamma_t = \emptyset. In what follows, for the sake of simplicity, no body force is assumed. In addition to eq (6), incompressible materials have to be subjected to the equivolumial condition. This restriction is described by
1 - j = 0.  (7)
In the context of the total Lagrangian form, the variational statements corresponding to eqs (6) and (7) are given by
\int_\Omega \frac{1}{2} S(u) : \delta C \, d\Omega = \int_{\Gamma_t} \bar{t} \cdot \delta u \, d\Gamma, \qquad u = \bar{u} on \Gamma_u, \quad \forall\, \delta u \in D,  (8)
\int_\Omega \delta p (1 - j) \, d\Omega = 0 \quad \forall\, \delta p,  (9)
where \delta( ) is a variation and D denotes the function space of displacements u such that u = 0 on \Gamma_u. From eq (2), \delta C is expressed by
\delta C = \delta u \otimes \nabla + \nabla \otimes \delta u + (\nabla \otimes \delta u) \cdot (u \otimes \nabla) + (\nabla \otimes u) \cdot (\delta u \otimes \nabla).  (10)
The finite element analysis is performed with the displacement-pressure form, i.e., both u and p are discretized as unknown variables. For this purpose, the stabilized finite element method [13] is used. The variational form corresponding to the equivolumial condition is enhanced as
\int_\Omega (1 - j)\, \delta p \, d\Omega + \sum_e^{n_e} \tau_e \int_{\Omega_e} 2 \frac{\partial j}{\partial C} : (\nabla p \otimes \nabla \delta p) \, d\Omega = 0,  (11)
where \tau_e is the stabilizing coefficient of the e-th element \Omega_e, and n_e is the number of elements. Notice that the first term is the original equivolumial condition given by eq (9), and the second term is the stabilizing term. If we use triangular linear elements for 2-dimensional problems or tetrahedral linear elements for 3-dimensional problems, the stabilizing term is given only by the integration of \nabla p, as in eq (11). The finite element equations are composed of eqs (8) and (11).
2.3. Formulation for Linear Elasticity
As mentioned in the prior section, the empty regions are approximated by a compressible linear elastic material with very soft stiffness. Since the mechanical problem of the rubber components is described by the mixed form, for consistency of the discretization the linear elastic field is also formulated using both displacement and pressure. The variational equations are then given by
\int_\Omega 2G\, \epsilon : \delta\epsilon \, d\Omega - \int_\Omega \frac{3\nu}{1+\nu} p\, \mathrm{tr}(\delta\epsilon) \, d\Omega = \int_{\Gamma_t} \bar{t} \cdot \delta u \, d\Gamma, \qquad u = \bar{u} on \Gamma_u, \quad \forall\, \delta u \in D,
\int_\Omega \left( p + \frac{2G(1+\nu)}{3(1-2\nu)} \mathrm{tr}(\epsilon) \right) \delta p \, d\Omega + \sum_e^{n_e} \tau_e \int_{\Omega_e} (\nabla p) \cdot (\nabla \delta p) \, d\Omega = 0 \quad \forall\, \delta p,  (12)
where G is the shear modulus and \nu is the Poisson's ratio. \epsilon is the linear strain tensor defined as
\epsilon = \frac{1}{2} (\nabla \otimes u + u \otimes \nabla).  (13)
3. Optimization Problem with VOF Function
3.1. Optimization Problem
We consider an optimization problem for incompressible hyperelastic structures defined by
\min_\Gamma J(u) := \int_0^S \tilde{J} \, ds, \qquad \tilde{J} := \int_\Omega F(u, s) \, d\Omega
subject to
a(u(s), p(s), \delta u) = \ell(s, \delta u), \qquad u(s) = \bar{u}(s) for 0 \le s \le S on \Gamma_u, \quad \forall\, \delta u \in D,
b(u(s), \delta p) + R(u(s), p(s), \delta p) = 0 \quad \forall\, \delta p,
V := \int_\Omega d\Omega \le V_{max},  (14)
where the objective function J is specified by a function F(u, s), s stands for a parameter representing the loading process, V is the volume of the structure and V_{max} is an allowable volume limit. The operators a, \ell, b and R correspond to the terms in eqs (8) and (11), i.e.,
a(u(s), p(s), \delta u) = \int_\Omega \frac{1}{2} S(u, p) : \delta C \, d\Omega,
\ell(s, \delta u) = \int_{\Gamma_t} \bar{t}(s) \cdot \delta u \, d\Gamma,
b(u(s), \delta p) = \int_\Omega (1 - j)\, \delta p \, d\Omega,
R(u(s), p(s), \delta p) = \sum_e^{n_e} \tau_e \int_{\Omega_e} 2 \frac{\partial j}{\partial C} : (\nabla p \otimes \nabla \delta p) \, d\Omega.  (15)
3.2. Introduction of VOF Function
The VOF function \psi is defined as the volume fraction of material in each element. That is, \psi = 1 for an element filled with the material and \psi = 0 for an empty element. Stiffness in an element is given in proportion to \psi. Therefore, in order to ensure the solvability of the finite element equations, a positive lower limit 0 < \psi_{min} is specified. In this case, the empty regions are replaced with a soft material, and the material region is extended to the entire design domain. In this context, the stiffness is defined in the whole domain as
\bar{c}_1 = \psi c_1, \qquad \bar{c}_2 = \psi c_2, \qquad \bar{G} = \psi G \quad in \bar{\Omega},  (16)
where \bar{\Omega} is the design domain in which the body is included.
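As a concrete illustration of eq (16), the following Python fragment sketches the element-wise stiffness scaling with the lower limit \psi_{min}; the function name and the default constants (borrowed from the numerical example of Section 6.2) are only illustrative, not part of the chapter's implementation.

import numpy as np

def scaled_material(psi, c1=0.3, c2=0.1, G=0.8, psi_min=1e-3):
    # Clamp psi from below so that the FE equations remain solvable;
    # elements with psi = psi_min act as a very soft filler material.
    psi = np.clip(psi, psi_min, 1.0)
    return psi * c1, psi * c2, psi * G   # c1_bar, c2_bar, G_bar of eq (16)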
Moreover, in order to avoid pressure in an empty region, such a region is replaced with a compressible material. According to this strategy, the optimization problem is recast as follows,
\min_\psi \bar{J}(u, p; \psi) := J(u, p; \psi) + \lambda^+ (V - V_{max}),
subject to
a(u, p, \delta u; \psi) = \ell(\delta u), \qquad u = \bar{u} on \Gamma_u, \quad \forall\, \delta u \in D,
b(u, p, \delta p; \psi) + R(u, p, \delta p; \psi) = 0 \quad \forall\, \delta p,
\lambda^+ (V - V_{max}) = 0, \qquad \lambda^+ \ge 0,  (17)
where \lambda^+ is a Lagrange multiplier. J, V, a, b and R are given by
J(u, p; \psi) = \int_0^S \tilde{J} \, ds, \qquad \tilde{J} = \int_{\bar{\Omega}} \psi F(u, p) \, d\Omega, \qquad V = \int_{\bar{\Omega}} \psi \, d\Omega,
a(u, p, \delta u; \psi) = \int_{\bar{\Omega}} \psi \left( \omega_1 \frac{1}{2} S : \delta C + \omega_2\, 2G\, \epsilon : \delta\epsilon \right) d\Omega - \int_{\bar{\Omega}} p \left( \omega_1 \frac{\partial j}{\partial C} : \delta C + \omega_2 \frac{3\nu}{1+\nu} \mathrm{tr}(\delta\epsilon) \right) d\Omega,
b(u, p, \delta p; \psi) = \int_{\bar{\Omega}} \left[ \omega_1 \psi (1 - j) + \omega_2 \left( p + \psi \frac{2G(1+\nu)}{3(1-2\nu)} \mathrm{tr}(\epsilon) \right) \right] \delta p \, d\Omega,
R(u, p, \delta p; \psi) = \sum_e \tau_e \int_{\Omega_e} \left[ \omega_1\, 2 \frac{\partial j}{\partial C} : (\nabla p \otimes \nabla \delta p) + \omega_2 (\nabla p \cdot \nabla \delta p) \right] d\Omega,  (18)
where \omega_1 and \omega_2 are weighting functions corresponding to the incompressible hyperelasticity and the compressible linear elasticity, respectively. They are given as functions of \psi and satisfy the condition \omega_1 + \omega_2 = 1; concrete expressions of \omega_1 and \omega_2 will be given in the numerical examples. Since the finite element equations in eq (18) are nonlinear, the solution at each incremental step is calculated iteratively by means of the Newton-Raphson method based on the following incremental forms,
\frac{\partial a^k}{\partial u} \cdot \Delta u^{k+1} + \frac{\partial a^k}{\partial p} \Delta p^{k+1} = \ell(\delta u) - a^k,
\frac{\partial b^k}{\partial u} \cdot \Delta u^{k+1} + \frac{\partial b^k}{\partial p} \Delta p^{k+1} + \frac{\partial R^k}{\partial u} \cdot \Delta u^{k+1} + \frac{\partial R^k}{\partial p} \Delta p^{k+1} = -b^k - R^k,  (19)
where ( )^k stands for an operator evaluated at the k-th iterative solution in the convergence process, and \Delta u^{k+1} and \Delta p^{k+1} are the (k+1)-th correctors. Once the solution is obtained within an error tolerance, we proceed to the next incremental step. Notice that the displacement corrector must satisfy the Dirichlet condition, i.e., \Delta u^{k+1} = 0 on \Gamma_u. Eq (19) is expressed by the following matrix equation:
[K(u^k, p^k)] \begin{Bmatrix} \Delta U^{k+1} \\ \Delta P^{k+1} \end{Bmatrix} = \begin{Bmatrix} \Delta r_u^k \\ \Delta r_p^k \end{Bmatrix},  (20)
where \Delta U^{k+1} and \Delta P^{k+1} are the nodal vectors of displacement and pressure, and \Delta r_u^k and \Delta r_p^k are residual terms corresponding to the right-hand side of eq (19).
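The following sketch shows the Newton-Raphson iteration of eqs (19)-(20) in schematic form; assemble_K and residual are assumed user-supplied callbacks returning the tangent matrix [K] and the stacked residual (\Delta r_u; \Delta r_p), and these names are hypothetical.

import numpy as np

def newton_solve(U, P, assemble_K, residual, tol=1e-8, max_iter=25):
    # One incremental step: iterate eq (20) until the residual is small.
    nU = U.size
    for _ in range(max_iter):
        r = residual(U, P)                         # right-hand side of eq (19)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(assemble_K(U, P), r)   # eq (20)
        U, P = U + d[:nU], P + d[nU:]              # correctors Delta U, Delta P
    return U, P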
4. Topology Optimization Using VOF Method
4.1. Design Sensitivity Analysis
The sensitivity of the objective function \bar{J} with respect to variation of the VOF function has to be evaluated in the optimization. The increment \Delta\bar{J} of the objective function resulting from a change \Delta\psi of the VOF function is expressed as
\Delta\bar{J} = \int_0^S \left( \frac{\partial \tilde{J}}{\partial u} \cdot \Delta u + \frac{\partial \tilde{J}}{\partial p} \Delta p + \frac{\partial \tilde{J}}{\partial \psi} \Delta\psi \right) ds + \lambda^+ \Delta V,  (21)
where \Delta u and \Delta p are the variations of u and p due to the change \Delta\psi. Notice that \partial\tilde{J}/\partial(\,) should be interpreted in the sense of the Fréchet derivative. The volume change \Delta V can be expressed in terms of \Delta\psi as
\Delta V = \int_{\bar{\Omega}} \Delta\psi \, d\Omega.  (22)
As stated above, \Delta u and \Delta p depend on \Delta\psi. Unlike \Delta V, however, these terms cannot be derived explicitly. To evaluate the first and second terms in eq (21) in terms of \Delta\psi, we introduce the adjoint equations:
\frac{\partial}{\partial u} a(u, p, w_u; \psi) \cdot \tilde{u} + \frac{\partial}{\partial p} a(u, p, w_u; \psi)\, \tilde{p} = \frac{\partial \tilde{J}}{\partial u} \cdot \tilde{u},
\frac{\partial}{\partial u} b(u, p, w_p; \psi) \cdot \tilde{u} + \frac{\partial}{\partial p} b(u, p, w_p; \psi)\, \tilde{p} + \frac{\partial}{\partial u} R(u, p, w_p; \psi) \cdot \tilde{u} + \frac{\partial}{\partial p} R(u, p, w_p; \psi)\, \tilde{p} = \frac{\partial \tilde{J}}{\partial p}\, \tilde{p},
\forall\, \tilde{u} \in D, \quad \forall\, \tilde{p},  (23)
where u and p are the finite element solutions at the present step, and w_u and w_p are unknowns. From eq (23), we obtain the following matrix equation,
[K(u, p)]^T \begin{Bmatrix} W_u \\ W_p \end{Bmatrix} = \begin{Bmatrix} \partial\tilde{J}/\partial u \\ \partial\tilde{J}/\partial p \end{Bmatrix},  (24)
where [K]^T denotes the transpose of [K]; w_u and w_p satisfy eq (23) for all \tilde{u} and \tilde{p}. Consequently, if the variations \Delta u and \Delta p are used as \tilde{u} and \tilde{p}, respectively, eq (23) leads to
\frac{\partial}{\partial u} a(u, p, w_u; \psi) \cdot \Delta u + \frac{\partial}{\partial p} a(u, p, w_u; \psi) \Delta p = \frac{\partial \tilde{J}}{\partial u} \cdot \Delta u,
\frac{\partial}{\partial u} b(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} b(u, p, w_p; \psi) \Delta p + \frac{\partial}{\partial u} R(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} R(u, p, w_p; \psi) \Delta p = \frac{\partial \tilde{J}}{\partial p} \Delta p.  (25)
The displacement u + \Delta u and pressure p + \Delta p corresponding to the updated shape \psi + \Delta\psi satisfy the variational equations:
a(u + \Delta u, p + \Delta p, \delta u; \psi + \Delta\psi) = \ell(\delta u) \quad \forall\, \delta u \in D,
b(u + \Delta u, p + \Delta p, \delta p; \psi + \Delta\psi) + R(u + \Delta u, p + \Delta p, \delta p; \psi + \Delta\psi) = 0 \quad \forall\, \delta p.  (26)
Substituting w_u and w_p for \delta u and \delta p in eq (26), we obtain the first-order form
a(u, p, w_u; \psi) + \frac{\partial}{\partial u} a(u, p, w_u; \psi) \cdot \Delta u + \frac{\partial}{\partial p} a(u, p, w_u; \psi) \Delta p + \frac{\partial}{\partial \psi} a(u, p, w_u; \psi) \Delta\psi = \ell(w_u),
b(u, p, w_p; \psi) + \frac{\partial}{\partial u} b(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} b(u, p, w_p; \psi) \Delta p + \frac{\partial}{\partial \psi} b(u, p, w_p; \psi) \Delta\psi + R(u, p, w_p; \psi) + \frac{\partial}{\partial u} R(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} R(u, p, w_p; \psi) \Delta p + \frac{\partial}{\partial \psi} R(u, p, w_p; \psi) \Delta\psi = 0.  (27)
Recall that, from eq (17), the solution u and p satisfy the following equations
a(u, p, w_u; \psi) = \ell(w_u), \qquad b(u, p, w_p; \psi) + R(u, p, w_p; \psi) = 0.  (28)
Subtraction of eq (28) from eq (27) yields
\frac{\partial}{\partial u} a(u, p, w_u; \psi) \cdot \Delta u + \frac{\partial}{\partial p} a(u, p, w_u; \psi) \Delta p = -\frac{\partial}{\partial \psi} a(u, p, w_u; \psi) \Delta\psi,
\frac{\partial}{\partial u} b(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} b(u, p, w_p; \psi) \Delta p + \frac{\partial}{\partial u} R(u, p, w_p; \psi) \cdot \Delta u + \frac{\partial}{\partial p} R(u, p, w_p; \psi) \Delta p = -\frac{\partial}{\partial \psi} b(u, p, w_p; \psi) \Delta\psi - \frac{\partial}{\partial \psi} R(u, p, w_p; \psi) \Delta\psi.  (29)
Since the left-hand sides of eqs (25) and (29) are identical, the following relations are obtained,
\frac{\partial \tilde{J}}{\partial u} \cdot \Delta u = -\frac{\partial}{\partial \psi} a(u, p, w_u; \psi) \Delta\psi,
\frac{\partial \tilde{J}}{\partial p} \Delta p = -\frac{\partial}{\partial \psi} b(u, p, w_p; \psi) \Delta\psi - \frac{\partial}{\partial \psi} R(u, p, w_p; \psi) \Delta\psi.  (30)
Substituting eq (30) into eq (21), we can evaluate \Delta\bar{J} in terms of \Delta\psi, i.e.,
\Delta\bar{J} = \int_0^S \left[ -\frac{\partial}{\partial \psi} a(u, p, w_u; \psi) \Delta\psi - \frac{\partial}{\partial \psi} b(u, p, w_p; \psi) \Delta\psi - \frac{\partial}{\partial \psi} R(u, p, w_p; \psi) \Delta\psi + \frac{\partial \tilde{J}}{\partial \psi} \Delta\psi \right] ds + \lambda^+ \Delta V.  (31)
In eq (31) we define \beta and \bar{\beta} by
\int_{\bar{\Omega}} \beta \, d\Omega := \left( -\frac{\partial a}{\partial \psi} - \frac{\partial b}{\partial \psi} - \frac{\partial R}{\partial \psi} + \frac{\partial \tilde{J}}{\partial \psi} \right) \Delta\psi,  (32)
\bar{\beta} := \int_0^S \beta \, ds.  (33)
Eq (31) can then be rewritten as
\Delta\bar{J} = \int_{\bar{\Omega}} (\bar{\beta} + \lambda^+) \Delta\psi \, d\Omega.  (34)
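In an implementation, the adjoint problem (24) can reuse the tangent matrix of the last converged Newton step; the sketch below, with hypothetical argument names, shows the transposed solve that yields w_u and w_p before \bar{\beta} is accumulated via eqs (32)-(33).

import numpy as np

def solve_adjoint(K, dJdU, dJdP):
    # [K]^T {W_u; W_p} = {dJ/dU; dJ/dP}, cf. eq (24).
    w = np.linalg.solve(K.T, np.concatenate([dJdU, dJdP]))
    return w[:dJdU.size], w[dJdU.size:]   # W_u, W_p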
4.2. Update of VOF Function
The optimization of the VOF function results from the advection of \psi governed by the Hamilton-Jacobi equation:
\frac{\partial \psi}{\partial t} = -v \cdot \nabla\psi \quad in \bar{\Omega},  (35)
where v is a velocity vector defined in \bar{\Omega}. Notice that the parameter t corresponds to the optimization process. A velocity v leading to an optimum topology can be obtained by
v = (\bar{\beta} + \lambda^+) \nabla\psi.  (36)
From eqs (35) and (36), the increment \Delta\psi at an optimization step is given by
\Delta\psi = -(\bar{\beta} + \lambda^+) |\nabla\psi|^2 \Delta t,  (37)
where \Delta t > 0 is an incremental parameter. Substituting eq (37) into (34), we obtain \Delta\bar{J} as
\Delta\bar{J} = -\int_{\bar{\Omega}} (\bar{\beta} + \lambda^+)^2 |\nabla\psi|^2 \, d\Omega\, \Delta t \le 0.  (38)
Eq (38) ensures a monotonically decreasing sequence of \bar{J}. Once the volume V reaches the allowable limit V_{max}, it should be kept at that value during the optimization. From eq (37), the Lagrange multiplier \lambda^+ conserving the volume at the limit value can be found as
\lambda^+ = -\frac{\int_{\bar{\Omega}} \bar{\beta} |\nabla\psi|^2 \, d\Omega}{\int_{\bar{\Omega}} |\nabla\psi|^2 \, d\Omega}.  (39)
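A minimal sketch of the update (37) with the volume-conserving multiplier (39) follows; beta_bar and grad_psi_sq hold the discretized values of \bar{\beta} and |\nabla\psi|^2, and w holds the corresponding integration weights, all assumed precomputed elsewhere.

import numpy as np

def update_vof(psi, beta_bar, grad_psi_sq, w, dt, at_volume_limit):
    lam = 0.0
    if at_volume_limit:   # eq (39): choose lambda+ so that V stays at V_max
        lam = -np.sum(w * beta_bar * grad_psi_sq) / np.sum(w * grad_psi_sq)
    return psi - (beta_bar + lam) * grad_psi_sq * dt   # eq (37)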
5. Numerical Implementations
5.1. Advection Analysis of VOF Function
Figure 1. Advection analysis of the VOF function at the i-th node.
In order to avoid the occurrence of intermediate VOF values, the advection of \psi should be computed with high accuracy. For this purpose, numerical diffusion in the advection analysis is reduced by using a method called cubic interpolation with volume/area coordinates (CIVA) [14]. This method can be regarded as a generalization of the CIP method proposed by Yabe [15] for rectangular grids; that is, the CIVA method realizes cubic interpolation on triangular and tetrahedral elements. As illustrated in Figure 1, the nodal value of the VOF function \psi at the next time step t + \Delta t, governed by the Hamilton-Jacobi equation, is given by the current solution as
\psi(x_i, t + \Delta t) = \psi(x_i - v_i \Delta t, t),  (40)
where x_i is the position of the i-th node, and v_i is the convection velocity. Although the VOF function is originally defined by an element-wise constant distribution, in the above equation it is extended to a smooth continuous function by the cubic interpolation. Eq (40) implies that the advection analysis reduces to interpolation of the VOF function. Highly accurate interpolation is attained by using not only nodal values but also gradients in the CIP and CIVA methods. A further reduction of the intermediate region is attempted by using a tangent-function transformation of \psi [16]. In this method \psi is transformed into T(\psi) as
T(\psi) = \tan\left[ \pi \left( \psi - \frac{1}{2} \right) (1 - \varepsilon) \right],  (41)
where \varepsilon is a small parameter needed to avoid infinity. T(\psi) is governed by the Hamilton-Jacobi equation, as is \psi, i.e.,
\frac{\partial T(\psi)}{\partial t} = -v \cdot \nabla T(\psi).  (42)
The advection analysis by the CIVA method is applied to eq (42), and the VOF function \psi used for the design sensitivity analysis (DSA) is obtained by the inverse transform
\psi = \frac{1}{\pi(1 - \varepsilon)} \tan^{-1} T + \frac{1}{2}.  (43)
Since |T(\psi)| becomes very large for \psi \approx 1, 0, even if some numerical diffusion occurs during the advection analysis, the VOF function evaluated from eq (43) keeps a clear boundary.
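The transform pair (41), (43) is easy to verify numerically; the sketch below uses an illustrative \varepsilon and checks that the inverse recovers \psi exactly, which is why diffusion added to T barely moves \psi near 0 and 1.

import numpy as np

def to_T(psi, eps=1e-2):
    return np.tan(np.pi * (psi - 0.5) * (1.0 - eps))    # eq (41)

def from_T(T, eps=1e-2):
    return np.arctan(T) / (np.pi * (1.0 - eps)) + 0.5   # eq (43)

psi = np.array([0.01, 0.5, 0.99])
assert np.allclose(from_T(to_T(psi)), psi)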
5.2. Adaptive Setting of Time Increment
The shorter the time increment \Delta t, the more accurate and stable the advection analysis. However, a small \Delta t results in a long computation time, so it is desirable to set \Delta t as large as possible. To meet this requirement, \Delta t_k at the k-th optimization step is determined adaptively by
\Delta t_k = \frac{c\, h}{v_{max}^k},  (44)
where h is the mesh size, v_{max}^k is the maximum velocity at the current step and c is a constant.
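One advection step then consists of the adaptive increment (44) followed by the semi-Lagrangian update (40); in the sketch below, interp stands for an assumed CIVA-like interpolant of the (transformed) VOF field at arbitrary points, so the function is schematic rather than a complete CIVA implementation.

import numpy as np

def advect_step(nodes, v, interp, h, c=0.5):
    dt = c * h / np.max(np.linalg.norm(v, axis=1))   # eq (44)
    departure = nodes - v * dt    # foot of the characteristic at each node
    return interp(departure), dt  # eq (40): psi(x_i, t+dt) = psi(x_i - v_i dt, t)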
5.3. Curtailment of Optimization Steps
A large deformation analysis consists of a number of incremental steps. Since this sequence of finite element computations has to be repeated at each optimization step, it is desirable to reduce the number of DSAs. The optimization process can be accelerated by performing a DSA only every several advection steps. In this case the advection analysis is carried out for several time steps without updating the velocity field, and therefore without finite element analysis.
6. Minimum Compliance Problem
6.1. Minimization of End-Compliance
Since this chapter deals with finite deformation problems, the stiffness of a structure will change during the loading process; it should therefore be clarified which definition of stiffness is used. The simplest form is the so-called end-compliance given by
J = \int_{\Gamma_p} \bar{t} \cdot u \, d\Gamma = \int_{\Gamma_p} n \cdot \Pi \cdot u \, d\Gamma = a(u, p, u; \psi),  (45)
where \bar{t} is the prescribed traction at the final loading step, \Pi is the first Piola-Kirchhoff stress tensor, and u and p are the solutions corresponding to the traction \bar{t}. Notice that the integration with respect to s in eq (14) is replaced by an evaluation at the final state. Buhl et al. [17] discussed the influence of the definition of stiffness on the optimization. Though the end-compliance may generate an unnatural structure, the objective function takes the simple form of eq (45). Therefore, in the following, minimization problems for the end-compliance are considered.
6.2. Analytical Conditions
A two-dimensional plane strain condition is considered. The material constants are c_1 = 0.3, c_2 = 0.1 (N/mm^2) for the hyperelasticity, and \nu = 0, G = 2(c_1 + c_2) = 0.8 (N/mm^2) for the linear elasticity. The minimum limit of the VOF function is \psi_{min} = 0.001; that is, the stiffness of voids is 0.001 \times G. Based on numerical experiments, the weighting functions \omega_1 and \omega_2 in eq (18) are determined as
\omega_1 = \begin{cases} 0, & \psi < \frac{1}{2} \\ 2\left( \psi - \frac{1}{2} \right), & otherwise \end{cases}, \qquad \omega_2 = 1 - \omega_1.  (46)
The constant in eq (44) is set to c = 0.5. Even if the CIVA method with the tangent-function transformation is employed, the material boundary may become blurred over many advection steps. To cope with this issue, the VOF function is reinitialized by resetting \psi to 1 or \psi_{min} [17] based on the threshold \psi = 0.5. In general, profiles at early steps tend to be intricate, which may hinder the convergence of the nonlinear equations under large deformation. In order to avoid this difficulty, loading is restricted to the first increment during the early optimization stages.
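The weighting functions of eq (46) can be coded directly; a brief sketch follows, showing that voids (\psi < 1/2) are treated as fully linear-compressible while \psi = 1 recovers the incompressible hyperelastic form.

import numpy as np

def omega1(psi):
    return np.where(psi < 0.5, 0.0, 2.0 * (psi - 0.5))   # eq (46)

def omega2(psi):
    return 1.0 - omega1(psi)

print(omega1(np.array([0.2, 0.5, 0.75, 1.0])))   # [0.  0.  0.5 1. ]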
6.3. Topology Optimization of Cantilever
Figure 2. Analytical conditions of cantilever (2.0 mm x 3.0 mm design domain).
A design domain of 2×3 (mm) is considered. The left side is fixed to a rigid wall. Vertical loads act on the right-hand side as depicted in Figure 2.
Figure 3. Evolution of topology of cantilever (0th, 4th, 8th, 12th, 16th and 20th steps).
The discretization with triangular linear elements is also shown in Figure 2. The design sensitivity analysis is performed 20 times. The velocity field is updated every 5 advection steps up to the 10th DSA step, and every 3 steps after that. A load increment of \Delta P = 0.003 (N) is considered. The velocity is calculated at the first load increment up to the 10th DSA, and at the 40th increment after that. The volume limit V_{max} is set to 50% of the initial volume. The VOF function is reinitialized at the 10th optimization step. Topologies during the optimization process are shown in Figure 3. As shown in the figure, the initial topology has a hole in the material region. It should be noted that, in the final shape, the lower bar is thinner than the upper one due to the effect of geometrical nonlinearity. The final topology is already established at the 8th step, and finally a two-bar truss is obtained. Load-displacement curves for the 4th and the final topologies are shown in Figure 4.
Figure 4. Load-displacement curves at the 4th and the final steps of cantilever (load in N versus displacement in mm).
It can be confirmed that an increase in stiffness is attained by the optimization. The evolutionary process obtained with a model in which incompressibility is applied to the whole design domain, irrespective of the VOF value, is shown in Figure 5. It is found that the material distribution stagnates and the shape develops incompletely. Moreover, in order to prove the validity of the proposed decompression scheme, the initial structure subjected to a horizontal load is considered. In Figure 6 the pressure distribution obtained by the present method is compared with the one given by the incompressible form. It can be seen that, unless the incompressibility is relaxed in voids, unwanted pressure arises in the hole, while it is removed adequately by the proposed method.
6.4. Topology Optimization of Simple Beam
A simply supported rectangular beam of size 6×2 (mm) is considered. Finite element nodes located at the center of the bottom side are subjected to vertical forces. Due to the symmetry, the right half of the structure is discretized as illustrated in Figure 7. The design sensitivity analysis is performed for 40 steps. The velocity field is updated every 4 advection steps up to the 20th DSA step, and every 2 steps after that. The advection velocity is calculated at the first load increment up to the 10th DSA step, and at the 10th increment after that, with the load increment \Delta P = 0.003 (N). The volume limit is set to 50% of the initial volume. The VOF function is reinitialized every 10 DSA steps. The evolutionary process of the layout is shown in Figure 8. The initial structure has 4×4 rectangular holes. Through the optimization, a truss-like structure appears. Though intricate boundaries emerge at the early stage, they degenerate as the topology progresses. Load-displacement curves at the 11th and the final steps are shown in Figure 9. The stiffness of the structure is improved remarkably by the optimization, as in the previous example. In both examples configurations similar to those for linear problems are obtained [10], and thus the efficiency of the developed method is assured.
Figure 5. Evolution of topology with incompressible voids (0th, 50th, 100th, 150th, 200th and 500th steps).
7. Load-Displacement Curve Fitting
7.1. Objective Function and Design Sensitivity
In general, the stiffness of rubber components hardens with the progress of deformation. In many cases this is an undesirable property. As an example, rubber pads installed in railway tracks can be cited. The main role of this equipment is the reduction of the vibration caused by running trains; the softer the stiffness, the lower the vibration level. However, a soft pad suffers large deformation under vehicle loading, and the deformed pad hardens. Consequently, the performance of the pads deteriorates during the passage of trains. Moreover, large displacement may affect the running stability of vehicles.
Figure 6. Pressure distribution (N/mm^2): (a) compressible void; (b) incompressible void.
The improvement of pads can be attained by controlling the load-displacement curve in such a manner that the stiffness decreases with increasing displacement. Since, in this case, the initial stiffness has the highest value, the pad can bear a large load under small deformation. Besides, the vibration reduction can be enhanced by decreasing the stiffness at large loads. In order to find a shape realizing such mechanical properties, application of the developed optimization method is attempted. As shown in Figure 10, a body subjected to a uniform displacement u on the upper side is considered. A rigid plate is attached to the top end and a vertical load P acts on it. Let us introduce an objective function given by
J = \int_0^{u_f} \tilde{J} \, du, \qquad \tilde{J} := (P(u) - \bar{P}(u))^2,  (47)
where u_f is an upper bound of the displacement and \bar{P}(u) is a given load specifying a load-displacement curve (Figure 11). Curve fitting can be attained by minimizing the objective function. The total load P is obtained by boundary integration:
P = \int_{\Gamma_L} n \cdot \Pi^T \cdot n \, d\Gamma,  (48)
where \Gamma_L is the loading boundary, and the integration is performed with respect to the undeformed configuration. From eq (48), the finite element discretization leads to the matrix expression
P = [b]^T \{P\},  (49)
Figure 7. Analytical conditions of simple beam (3.0 mm x 2.0 mm half model).
where \{P\} is the nodal force vector and \{b\} is a vector whose components, 0 or 1, extract the components contributing to the load P. From eq (17), \{P\} can be defined by
[\delta U]^T \{P\} = \ell(\delta u) = a(u, p, \delta u; \psi) \quad \forall\, \delta u \in D.  (50)
Evaluation of the three terms \partial\tilde{J}/\partial u, \partial\tilde{J}/\partial p and (\partial\tilde{J}/\partial\psi)\Delta\psi is needed to proceed to the design sensitivity analysis. From eqs (47) and (49), \partial\tilde{J}/\partial u \cdot \Delta u can be expressed by
\frac{\partial \tilde{J}}{\partial u} \cdot \Delta u = 2(P - \bar{P}) \frac{\partial P}{\partial u} \cdot \Delta u = 2(P - \bar{P}) [b]^T \frac{\partial P}{\partial U} \{\Delta U\}.  (51)
From eq (50), we can also obtain the following equation
[\delta U]^T \frac{\partial P}{\partial U} \{\Delta U\} = \frac{\partial}{\partial u} a(u, p, \delta u; \psi) \cdot \Delta u = [\delta U]^T [K_{uu}] \{\Delta U\},  (52)
where [K_{uu}] is a submatrix of [K] in eq (20) defined by
[\delta U\; \delta P]^T [K] \begin{Bmatrix} \Delta U \\ \Delta P \end{Bmatrix} = [\delta U\; \delta P]^T \begin{bmatrix} K_{uu} & K_{up} \\ K_{pu} & K_{pp} \end{bmatrix} \begin{Bmatrix} \Delta U \\ \Delta P \end{Bmatrix}.  (53)
From eq (52), [\partial P/\partial U]\{\Delta U\} is given by
\frac{\partial P}{\partial U} \{\Delta U\} = [K_{uu}] \{\Delta U\}.  (54)
Figure 8. Evolution of topology of simple beam (0th, 5th, 10th, 15th, 20th and 40th steps).
Therefore, from eqs (51) and (54), the following relation is finally obtained
\left[ \frac{\partial \tilde{J}}{\partial U} \right]^T = 2(P - \bar{P}) [b]^T [K_{uu}].  (55)
A similar deduction leads to
\left[ \frac{\partial \tilde{J}}{\partial P} \right]^T = 2(P - \bar{P}) [b]^T [K_{up}].  (56)
Figure 9. Load-displacement curves at the 11th and the final steps of simple beam.
Figure 10. A body subjected to uniform displacement u(t) on its upper side, with total load P on the loading boundary \Gamma_L.
From eqs (47) and (49), (\partial\tilde{J}/\partial\psi)\Delta\psi is expressed as
\frac{\partial \tilde{J}}{\partial \psi} \Delta\psi = 2(P - \bar{P}) [b]^T \frac{\partial P}{\partial \psi} \Delta\psi.  (57)
Here recall that the following relation is valid for all \delta u:
[\delta U]^T \frac{\partial P}{\partial \psi} \Delta\psi = \frac{\partial}{\partial \psi} a(u, p, \delta u; \psi) \Delta\psi.  (58)
Replacing [\delta U]^T in eq (58) with [b]^T, we obtain
[b]^T \frac{\partial P}{\partial \psi} \Delta\psi = \frac{\partial}{\partial \psi} a(u, p, b; \psi) \Delta\psi.  (59)
Therefore, the variation of \tilde{J} with respect to \Delta\psi is given by
\frac{\partial \tilde{J}}{\partial \psi} \Delta\psi = 2(P - \bar{P}) \frac{\partial}{\partial \psi} a(u, p, b; \psi) \Delta\psi.  (60)
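For reference, a discrete version of the objective (47) can be evaluated from the sampled load history by a simple quadrature; the sketch below uses the trapezoidal rule and a hypothetical prescribed curve P_bar, purely for illustration.

import numpy as np

def curve_fit_objective(u, P, P_bar):
    # Trapezoidal approximation of J = int_0^{u_f} (P - P_bar)^2 du, eq (47).
    r2 = (P - P_bar(u)) ** 2
    return float(np.sum(np.diff(u) * (r2[:-1] + r2[1:]) / 2.0))

u = np.linspace(0.0, 3.0, 41)   # sampled settlements (mm)
J = curve_fit_objective(u, 8.0 * u, lambda s: 20.0 * np.tanh(s))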
Figure 11. Load-displacement curve fitting (load P, prescribed load \bar{P} and deviation P - \bar{P} versus displacement up to u_f).
7.2. Shape Optimization of Railway Rubber Pad
Figure 12. Design conditions of sleeper pad (dimensions in mm; load P; the shaded region is the analytical unit in the x-z plane).
Rubber pads used for a railway track are installed between the rail and the sleepers (rail pad) and beneath the sleepers (sleeper pad). Since a reduction of the stiffness of the former would induce inclination of the rail under a wheel load, its stiffness should be kept at about 50 MN/m or more. On the other hand, the stiffness of the latter pad can be reduced to below 10 MN/m; hence, the larger deformation is observed in the sleeper pads. In the following, shape optimization of a sleeper pad, as illustrated in Figure 12, is attempted. It is assumed that the profile in the x-z section does not change in the y direction; that is, the plane strain condition can be assumed in the deformation analysis. Due to the symmetry with respect to the middle horizontal plane and the periodicity in the x direction, the mechanical behavior can be represented approximately by one unit, shown by the shaded region in the figure. The material parameters c_1 and c_2 are set to 0.6 and 0.2 (N/mm^2), respectively. Figure 13 shows the unit design domain with boundary conditions.
Figure 13. Analytical conditions of unit domain (7.5 mm x 15 mm).
Figure 14. Prescribed load-displacement curve (load in kN versus displacement in mm).
The finite deformation analysis is performed up to a vertical settlement of 1.5 mm with increments of 0.075 mm. The optimization is iterated 50 times, and reinitialization is performed every 10 steps. The load-displacement curve is specified as in Figure 14. Notice that the ordinate shows the total load acting on the pad, and the displacement corresponds to the entire thickness.
Figure 15. Evolution of topology of unit domain (0th, 10th, 20th, 30th, 40th and 50th steps).
The curve was chosen so as to have a stiffness of 3 MN/m at a load of 20 kN, which corresponds to a somewhat light wheel load, with a displacement of about 3 mm.
Figure 16. Load-displacement curves (prescribed curve and curves at the 0th, 10th, 20th, 30th, 40th and 50th steps).
The evolutionary process of the shape and the corresponding load-displacement curves are shown in Figures 15 and 16. During the optimization analysis, the upper portion with a thickness of 3 mm is fixed as a material region. It can be seen that the stiffness quickly approaches the prescribed curve, and thus the developed method can find an optimal shape successfully. Figure 17 shows the deformation of the pad with the final shape at 3 mm displacement. From the figure, we can see that the reduction of stiffness at the higher loads is realized by quasi-buckling behavior. Finally, based on the layout obtained by the optimization, the shape of the rubber pad is designed as shown in Figure 18. The load-displacement curve is also shown in Figure 19.
Figure 17. Deformation of design domain at 3mm settlement.
From the figure, it can be confirmed that the shape possesses the required mechanical property. Figure 20 shows the pressure distribution in the deformed shape; in the figure, tension is positive. Stress concentration takes place at the upper left corner and at the middle portion of the right side. It would be possible to improve the stress distribution by taking into account restrictions on stress in the optimization analysis.
Figure 18. Shape of rubber pad designed based on the optimization.
8. Conclusion
A VOF-based topology optimization method was developed for incompressible rubber components. Since the VOF function is updated by advection, the method is free from the checkerboard pattern. Moreover, intermediate density is suppressed adequately with the aid of the CIVA method enhanced by the tangent transformation; in the numerical examples, no anomalies such as the checkerboard pattern or a wide distribution of intermediate density were observed. In order to avoid unnatural pressure inside holes, voids are replaced by a compressible soft material. The efficiency of the proposed method was confirmed by numerical analysis. It was also found that the relaxation of incompressibility in voids is necessary in order to attain an optimal shape.
Figure 19. Load-displacement curve for the designed shape (prescribed versus resulting curve).
Figure 20. Pressure distribution (N/mm^2, from about 0.3 to -0.9).
References
[1] Kim, J.J.; Kim, H.Y. Comput Struct. 1997, 65, 725-731.
[2] Choi, K.K.; Duan, W. Comput Methods Appl Mech Engrg. 2000, 187, 219-243.
[3] Lee, W.S.; Youn, S.K. Struct Multidisc Optim. 2004, 27, 284-294.
[4] Eschenauer, H.A.; Olhoff, N. Appl Mech Rev. 2001, 54, 331-390.
[5] Sigmund, O. Int J Solids Struct. 1994, 17, 2313-2329.
[6] Bendsøe, M.P. Struct Optim. 1989, 1, 193-202.
[7] Bendsøe, M.P.; Sigmund, O. Arch Appl Mech. 1999, 69, 635-654.
[8] Abe, K.; Nishinomiya, Y.; Koro, K. In Proceedings of APCOM'07-EPMESC XI; 2007 [CD-ROM].
[9] Allaire, G.; Jouve, F.; Toader, A.-M. C R Acad Sci Paris Ser I. 2002, 334, 1125-1130.
[10] Abe, K.; Koro, K. Struct Multidisc Optim. 2006, 31, 470-479.
[11] Hirt, C.W.; Nichols, B.D. J Comput Phys. 1981, 39, 201-225.
[12] Watanabe, H.; Hisada, T. Trans JSME. 1996, 62, 745-752 (in Japanese).
[13] Klaas, O.; Maniatty, A.; Shephard, M.S. Comput Methods Appl Mech Engrg. 1999, 180, 65-79.
[14] Tanaka, N. Int J Num Methods Fluids. 1999, 30, 957-976.
[15] Yabe, T. Comput Phys Commun. 1991, 66, 219-242.
[16] Xiao, F.; Yabe, T.; Ito, T.; Tajima, M. Comput Phys Commun. 1997, 102, 147-160.
[17] Buhl, T.; Pedersen, C.B.W.; Sigmund, O. Struct Multidisc Optim. 2000, 19, 93-104.
In: Handbook of Optimization Theory, Editors: J. Varela and S. Acuña, pp. 211-235
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 9
Multiobjective Optimization: Quasi-even Generation of Pareto Frontier and Its Local Approximation
Sergei V. Utyuzhnikov
School of Mechanical, Aerospace and Civil Engineering, University of Manchester, P.O. Box 88, Manchester, M60 1QD, UK
Abstract
In multidisciplinary optimization the designer needs to find a solution to an optimization problem that includes a number of usually contradicting criteria. Such a problem is mathematically related to the field of nonlinear vector optimization with constraints. It is well known that the solution to this problem is far from unique and is given by a Pareto surface. In real-life design the decision-maker is able to analyze only several Pareto optimal (trade-off) solutions; therefore, a well-distributed representation of the entire Pareto frontier is especially important. At present, there are only a few methods capable of evenly generating a Pareto frontier in a general formulation. In the present work they are compared with each other, with the main focus on a general strategy combining the advantages of the known algorithms. The approach is based on shrinking a search domain to generate a Pareto optimal solution in a selected area on the Pareto frontier. The search domain can easily be constructed in the general multidimensional formulation. The efficiency of the method is demonstrated on different test cases. For the problem in question, it is also important to carry out a local analysis, which provides an opportunity for a sensitivity analysis and local optimization. In general, the local approximation of a Pareto frontier is able to complement a quasi-evenly generated Pareto set.
1. Introduction
In real-life design, the decision-maker (DM) has to take into account many different criteria, such as low cost, manufacturability, long life and good performance, which cannot all be satisfied simultaneously. In fact, it is only possible to consider a trade-off among all (or almost all) criteria. The task becomes even more complicated because of additional constraints, which always exist in practice.
Mathematically, the trade-off analysis can be formulated as a vector nonlinear optimization problem under constraints. Generally speaking, the solution of such a problem is not unique. It is natural to exclude from consideration any design solution that can be improved without deterioration of any discipline and without violation of the constraints; in other words, a solution that can be improved without any trade-off. This leads to the notion of a Pareto optimal solution [1]. Mathematically, each Pareto point is a solution of the multiobjective optimization (MOO) problem. A designer selects the ultimate solution from the Pareto set on the basis of additional requirements, which may be subjective. Generally speaking, it is desirable to have a sufficient number of Pareto points to represent the entire Pareto frontier in the objective space. Yet it is also important for the Pareto set to be evenly distributed; otherwise, generation of the Pareto set becomes inefficient. Despite the existence of many numerical methods for vector nonlinear optimization, there are few methods potentially suitable for real-design applications, because the problem in question is usually very time-consuming. In many practical multidisciplinary optimization applications the design cycle includes time-consuming and expensive computations in each discipline. For example, in the aerospace industry this is most evident in the solution of the aerodynamics and stress analysis tasks. The solutions of these subtasks influence each other and usually demand iterations to reach the ultimate solution. Under such conditions, it is important to minimize the number of iterations required to find a Pareto optimal solution. In general, optimization methods can be split into two principal categories: classical, preference-based methods and evolutionary genetic-based algorithms (GA). The classical methods usually use deterministic approaches, whereas GA typically employ stochastic algorithms. It goes without saying that such a division is not strict, and a combination of classical and GA methods is also possible. In classical MOO methods, vector optimization approaches are often reduced to the minimization of an aggregate objective function (AOF) (also called a preference function), which combines the objective (cost) functions. The simplest and most widespread example of an AOF is given by a linear (weighted) combination of the objective functions [1]. This method has many drawbacks, related mainly to the uncertainty in the weight coefficients. Many iterations may be required to find the combination of weights leading to a solution that corresponds to the DM's expectations [2]. Furthermore, it is well known that this method can generate only the convex part of a Pareto surface [3], while real-life problems often result in non-convex Pareto frontiers. This drawback can be avoided by using either a more complex form of the AOF [4] or the weighted Tchebychev method [1]. However, the weights remain unknown functions of the objectives [2]. In real design, the DM is able to consider only a few possible solutions (Pareto points). In such a context, it is important to have a well-spread distribution of Pareto points to obtain maximum information on the Pareto surface at minimum (computational) cost. Das and Dennis [5] showed that an even spread of weights in the AOF does not necessarily result in an even distribution of points in the Pareto set.
In the literature, there are only a few methods that can be considered for evenly generating the entire Pareto frontier in the general multidimensional formulation [6].
1.1. Quasi-even Set Generators of Pareto Frontier
The Normal Boundary Intersection (NBI) method [7], [8] was developed by Das and Dennis for generating a quasi-even distribution of Pareto solutions. The method has a clear geometrical interpretation. It is based on the well-known fact that a Pareto surface belongs to the boundary of the feasible domain towards minimization of the objective functions [1]. First, so-called anchor points are obtained in the objective space. An anchor point corresponds to the optimal value of one and only one objective function in the feasible domain; thus, n objective functions provide up to n anchor points. Second, the utopia plane passing through the anchor points is obtained. The Pareto surface is then constructed from the intersections of lines normal to the utopia plane with the boundary of the feasible domain. A single optimization problem with respect to one of the objective functions is solved along each line. An even distribution of Pareto points is provided by an even distribution of lines orthogonal to the utopia plane. One can note that this is valid as long as the cosine of the angle between such a line and the normal to the boundary of the feasible space is not locally close to zero. The method can generate non-Pareto and locally Pareto solutions, which require a filtering procedure [9], [10]. In addition, the NBI method might be non-robust because the feasible domain is reduced to a line. The Normal Constraint (NC) method [9], [12] by Messac et al. represents a modification of the NBI approach. The single optimization problem used in the NC is based only on inequality constraints. This modification makes the method more flexible and stable. Both methods may fail to generate Pareto solutions over the entire Pareto frontier [12] in a multidimensional case; the modification of the NC in [12] partially eliminates this drawback. However, both methods may generate non-Pareto and locally Pareto solutions [9], although the NC is less likely to do so [12]. In [10], [6] it is shown that both the NBI and NC methods can be inefficient because of a significant number of redundant solutions. In the example given in [6], sixty-six points on the utopia plane lead to only twenty-four Pareto solutions; other examples can be found in [10]. Both methods can have significant difficulties in the case of a disconnected frontier [10]; in particular, they do not always find the entire Pareto frontier. Meanwhile, one can note that a recent modification of the NC method [14] is able to improve these methods. The Physical Programming (PP) method was suggested in [13] by Messac. This method also generates Pareto points on both convex and non-convex Pareto frontiers, as shown in [15]. The method does not use any weight coefficients and allows one to take the DM's experience into account directly. In the PP, the designer assigns each objective to one of four categories (class-functions). The optimization is based on minimization of an AOF determined by the preference functions (class-functions). The algorithm given in [15] is able to generate an evenly distributed Pareto set. However, it contains several free parameters, the optimal choice of which requires some preliminary information on the location of the Pareto frontier. In [16] and [6], Utyuzhnikov et al. modified the PP to make it simpler and more efficient for practical applications. A simpler structure of the class-functions is suggested, and the class-functions are generalized to shrink the search domain and make its location in space more optimal.
This is critical for generating an even set on the Pareto frontier. The proposed modification combines the advantages of the PP, NBI and NC methods. One of the main
advantages of the approach [6] is that it does not produce non-Pareto solutions, while local Pareto solutions can be easily recognized and removed. In [6] it is shown that the modified PP is able to generate a quasi-even Pareto set in the general formulation, and it is proven that the method is able to capture the entire Pareto frontier in the general case. As will be shown in this Chapter, the algorithm suggested in [6] can be applied beyond the PP technique. In contrast to the classical, preference-based approaches, the class of evolutionary methods, such as genetic-based algorithms (GA), generates a set of Pareto solutions simultaneously (see, e.g., [17], [18]). This class of methods seems to be very promising for solving multiobjective problems. Unfortunately, GA do not usually guarantee either the generation of a well-distributed Pareto set or the representation of the entire Pareto frontier. In [11], GA is combined with the generalized data envelopment analysis to remove dominated design alternatives. The method is capable of efficiently generating both convex and concave frontiers, although all examples in [11] are obtained only for two objective functions. A modification of the evolutionary algorithm is suggested in [10] to generate a well-distributed Pareto set. The general problem of GA is that their efficiency drops significantly as the number of objective functions increases, more so in the presence of constraints. The global information on the Pareto frontier gained from a well-distributed Pareto set can be complemented via a local analysis. If a local approximation of the Pareto frontier is available, then it can be used for a sensitivity (trade-off) analysis. In [22], Utyuzhnikov et al. derived precise formulae for the linear and quadratic approximations in the multidimensional formulation. They showed that the formulae [23], widely used in the literature for the linear approximation, should be corrected in the general formulation. The Chapter is organized as follows. In Section 2. the general mathematical formulation of multiobjective optimization under constraints is given. Then, in Section 3. an algorithm for evenly generating a Pareto set is described. The algorithm is given in the general formulation for an arbitrary number of design variables, objective functions and constraints. Some examples of the application of the algorithm are shown in Section 4. Local approximations of a smooth Pareto frontier in the objective space are derived in Section 5. Finally, an example of the local approximation is given in Section 6.
2. Multiobjective Optimization Problem. Pareto Optimal Solution
Assume that there are N design variables to be found. Then, we can introduce a design space X \subset R^N. Each element of the design space is represented by a design vector x = (x_1, x_2, \dots, x_N)^T: x \in X. Suppose that the quality of each combination of design variables is characterized by M objective (cost) functions. Accordingly, along with the design space X, we introduce the space of objective functions Y \subset R^M. Each element of the objective space Y is represented by a vector y = (y_1, y_2, \dots, y_M)^T, where y_i = f_i(x), f_i : R^N \to R^1, i = 1, 2, \dots, M. Thus, X is mapped onto Y by f \in R^M : X \mapsto Y. Suppose that there are constraints, formulated via either equations or inequalities. Then, we arrive at the following multiobjective optimization problem under constraints:
\min [y(x)]  (1)
subject to K inequality constraints
g_i(x) \le 0, \quad i = 1, 2, \dots, K  (2)
and P equality constraints
h_j(x) = 0, \quad j = 1, 2, \dots, P.  (3)
The feasible design space X^* is defined as the set of design variables satisfying all the constraints (2) and (3). The feasible criterion (objective) space Y^* is defined as the set \{y(x) \mid x \in X^*\}. Thus, feasibility means that no constraint is violated.
Definition. A design vector a (a \in X^*) is called a Pareto optimum iff there exists no b \in X^* such that y(b) \le y(a) and there exists l \le M : y_l(b) < y_l(a).
A design vector is called a local Pareto optimum if it is a Pareto optimum in some vicinity of itself. In other words, a design vector is Pareto optimal if it cannot be improved with regard to any objective function without deterioration of at least one of the other functions. Due to the constraints, we are not able to minimize all objectives simultaneously. Thus, the solution of the MOO problem (1), (2), (3) is not unique, and any solution represents a trade-off between different objectives. Consider any element x \in X such that the vector y(x) belongs to the interior of the feasible space Y^* rather than to its boundary. Obviously, such an element cannot be a Pareto solution. More precisely, the general solution of an MOO problem is represented by a Pareto surface, which always belongs to the boundary of the feasible space Y^*. In real-life design, as a rule, it is impossible to obtain the entire Pareto frontier; in fact, the entire Pareto frontier is not usually required. In the next Section, we consider an algorithm that provides a well-distributed representation of the entire Pareto surface. We describe a strategy to seek the Pareto frontier based on a Directed Search Domain (DSD) algorithm, which was first applied in the modification of the PP method in [16] and [6]. One can see that the DSD approach can be used with very different search engines. The main idea of the DSD is to shrink the search domain so as to obtain a Pareto solution in a selected area of Y^*. A well-spread distribution of the selected search domains should then give a quasi-even Pareto set.
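The definition above translates directly into a dominance test; the following sketch filters a finite set of objective vectors down to its non-dominated subset (all objectives minimized), which is also how locally Pareto candidates produced by a generator can be screened.

import numpy as np

def pareto_filter(Y):
    # Keep row a unless some other row b satisfies b <= a with b < a somewhere.
    keep = [i for i, a in enumerate(Y)
            if not any(np.all(b <= a) and np.any(b < a)
                       for j, b in enumerate(Y) if j != i)]
    return Y[keep]

print(pareto_filter(np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 3.0]])))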
3. Generation of a Well-Distributed Pareto Set. DSD Algorithm
3.1. Trade-off Matrix. Utopia Plane
For further consideration, we introduce the trade-off matrix T:
T = \begin{pmatrix} f_{1,min} & f_{12} & \dots & f_{1M} \\ f_{21} & f_{2,min} & \dots & f_{2M} \\ \dots & \dots & \dots & \dots \\ f_{M1} & f_{M2} & \dots & f_{M,min} \end{pmatrix}.  (4)
Here, the i-th row represents the coordinates of the anchor point \mu_i^* corresponding to the solution of the single-objective optimization problem \min f_i in the feasible criterion space Y^* (see, e.g., [12]):
\mu_i^* = f(X_i^*), \qquad X_i^* = \arg\min_{Y^*} f_i.  (5)
In the feasible space Y^* we consider a hypercube H limiting the search domain. For that we define pseudo-nadir points [12]: f_{i,max} = \max_j f_{ij}, where the f_{ij} are the elements
of the trade-off matrix T. The hypercube H is then represented by H = [f_{1,min}, f_{1,max}] \times [f_{2,min}, f_{2,max}] \times \dots \times [f_{M,min}, f_{M,max}]. One can show that the hypercube H always contains the Pareto frontier [12]. Next, similarly to the NC method, we introduce the utopia plane created by the anchor points \mu_i^*. One can show that the polygon spanned by all M vertices \mu_i^* is convex. Then, any point f^* belonging to the interior of this polygon is represented by
f^* = \sum_{i=1}^{M} \alpha_i \mu_i^*.  (6)
Here, the parameters \alpha_i satisfy the following conditions:
0 \le \alpha_i \le 1 \quad (i = 1, \dots, M), \qquad \sum_{j=1}^{M} \alpha_j = 1.  (7)
As shown in [6], the definition of an anchor point can strongly affect the efficiency of the algorithm for Pareto set generation. The standard definition does not always lead to a uniquely determined point. To resolve this problem, a modified lexicographic-based definition of the anchor point is given in the next Section, following [16] and [6]; it guarantees the uniqueness of the anchor point for each objective.
3.2. Modified Anchor Points
The standard definition of an anchor point (5) may lead to non-uniqueness. If the solution of problem (5) is not unique, then the point corresponding to the minimal value of the other objective functions is to be chosen, which leads to a problem of trade-off minimization for the remaining objectives. To avoid this, priority in the minimization is introduced as follows. First, instead of the space Y^*, we consider the domain Y^{**} that includes all ultimate points of Y^*. Then, we minimize f_i, then f_{i+1}, and so on up to f_{i-1}. Thus, we use the following lexicographic prioritization in a circular order: i+1, i+2, \dots, M, 1, 2, \dots, i-1. The k-th prioritization assumes that the k-th minimization must not violate any of the previous k-1 ones. One can prove that all anchor points belong to the Pareto frontier. Indeed, the anchor points belong to the boundary of the feasible domain Y^*, and no objective can be improved without deterioration of another. As soon as we know the coordinates of the anchor points, we can determine the polygon (6), (7) on the utopia plane.
3.3. Reference Points on the Utopia Plane
Next, in the objective space consider a search domain D: f_i \le f_i^* (i = 1, \dots, M), where the vector f^* is determined by (6) and (7). The lower boundary of D can be quite arbitrary in each direction; however, the values of the lower boundaries must be small enough for the search domain to contain a part of the Pareto surface if possible. It is natural to require that f_{i,min} \le f_i \le f_i^* (i = 1, \dots, M), where the f_{i,min} are determined by the trade-off matrix T in (4). The end of the vector f^* determines a reference point M: M = (f_1^*, f_2^*, \dots, f_M^*)^T, which belongs to the interior of the utopia polygon (6), (7), including its boundaries. Obviously, there is no guarantee that a local search domain D has any intersection with Y^*. Then, and only then, we switch the search to the opposite direction: f_i^* \le f_i (i = 1, \dots, M). In turn, it is natural to limit the search domain by f_i^* \le f_i \le f_{i,max} (i = 1, \dots, M), where the f_{i,max} are determined by the trade-off matrix T. A well-spread distribution of the search domains can be reached via an even distribution of the coefficients \alpha_i in (7). An algorithm for calculating the coefficients \alpha_i is given in [12]. The approach is based on an induction procedure (see the sketch below). First, a uniform distribution of the coefficient \alpha_1 is considered. From conditions (7) it is clear that the sum of the remaining coefficients \sum_{j=2}^{M} \alpha_j equals 1 - \alpha_1 for each selected value of \alpha_1. Then, we consider a uniform distribution of the coefficient \alpha_2 for each of these variants. The algorithm is repeated until either the last coefficient \alpha_M is reached or the sum of the coefficients already determined equals 1; in the latter case we set the remaining coefficients to zero. In contrast to the NC and NBI algorithms, the DSD approach allows us to represent the entire Pareto frontier considering only non-negative coefficients \alpha_i satisfying conditions (7). As shown in [6], it is important to avoid redundant solutions. A distribution of reference points leads to different search domains; as a result, one can expect different Pareto solutions to be generated. However, as shown in [6], this is not always the case. Although the search domain is limited for each case, it is large enough that the distribution of the Pareto set may be sensitive to the displacement of the box D along the utopia plane, especially if the Pareto frontier is concave. A more efficient and flexible algorithm, based on the introduction of new objective functions, is described in the next Section. It relies on an affine transform of the coordinate system in the objective space, which substantially shrinks the search domain.
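One way to realize the induction procedure just described is the recursive sketch below: \alpha_1 is swept uniformly, the remainder 1 - \alpha_1 is distributed over the remaining coefficients in the same manner, and duplicates that arise when the remainder vanishes can be removed afterwards; n_div is an assumed resolution parameter, not taken from the chapter.

def alpha_grid(M, n_div):
    # Generate vectors satisfying conditions (7): 0 <= alpha_i <= 1, sum = 1.
    if M == 1:
        return [[1.0]]          # the last coefficient takes whatever remains
    out = []
    for k in range(n_div + 1):
        a1 = k / n_div
        for tail in alpha_grid(M - 1, n_div):
            out.append([a1] + [(1.0 - a1) * t for t in tail])
    return out

print(len(alpha_grid(3, 4)))    # 25 reference points for M = 3 objectives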
3.4.
Shrinking of Search Domain
Following to [6], let us introduce new objective functions fei via an affine transform fei =
M X
fj Bji (i = 1, . . . , M ).
(8)
j=1
In the objective space Y, this transform is equivalent to the introduction of a new coordinate system with the basis vectors ai =
M X
Aij ej (i = 1, . . . , M ),
j=1
A−1 = B,
(9)
218
Sergei V. Utyuzhnikov
where ej (j = 1, . . . , M ) are the basis vectors of the original coordinate system. For the new objective functions we can shrink the search domain to the following: fei ≤ fei∗ (i = 1, . . . , M )),
where fei∗ is determined by the transform (8): fei∗ =
M X
fj∗ Bji (i = 1, . . . , M ).
(10)
(11)
j=1
Then, the search domain can be changed as shown in Figure 1. In particular, we can choose the basis vectors ai (i = 1, . . . , M ) to form an angle γc to a selected direction l. In 2D case the matrices A and B can be easily determined: 1 sin γ+ − sin γ− cos γ− sin γ− , (12) A= , B= cos γ+ sin γ+ 2γc − cos γ+ sin γ− where γ+ = γn + γc , γ− = γn − γc , l = (cos γn , sin γn )T .
Figure 1. Search domain [6].
Figure 2. Basis vectors [6].
Multiobjective Optimization
219
To extend this approach to a multidimensional space RM , we set the following conditions on the basis vectors ai : (ai , l) = cos γc (i = 1, . . . , M ). Here, (·, ·) corresponds to the inner product. Thus, all the vectors ai are parallel to the lateral area of the hypercone that has the angle γc and the axis directed along vector l. It is important to guarantee a spread distribution of these vectors. Then, it is clear that the basis created by vectors ai must not vanish. The following algorithm guarantees a fully uniform distribution of the basis vectors ai (i = 1, . . . , M ) around an axis l in RM . First, we introduce vector l: l = l0 ,
(13)
l0 = (l0 , l0 , . . . , l0 )T . If the vector l is unit, then it has the coordinates: 1 l0 ≡ cos γ0 = √ . M The basis vector ai can be determined in the plane created by the vectors ei and l0 (see Figure 1). One can show that ai =
sin γc sin(γ0 − γc ) ei + l0 sin γ0 sin γ0
(i = 1, . . . , M ).
(14)
Obviously, the basis of the vectors ai (i = 1, . . . , M ) does not vanish. The basis vectors form a search cone similar to the 2D cone shown in Figure 1. It is clear that if M = 2 and γ0 = π/4, then we obtain formula (12). From (9), (13) and (14) we have A = A0 ≡
sin(γ0 − γc ) sin γc I+ E, sin γ0 sin γ0
(15)
where all elements of the matrix E are unities: kEij k = 1. In (15), the angle γc is quite arbitrary. However, it should be small enough to provide shrinking the search domain. On the other hand, it should not be too small in order to avoid any stiffness related to that [6]. Finally, it should be noted that the matrix A0 has to be inverted to find the matrix B. However, it needs to be done only once because the matrix A0 is the same for all the search domains. Transform (8) allows us to shrink the search domain and focus on a much smaller area on the Pareto surface. It makes the algorithm more flexible and much less sensitive to the displacement of box D. It generates a “light beam” emitting from point M and highlighting a spot on the Pareto frontier [6]. As noted above, if no solution is found, the direction is switched to the opposite.
220
3.5.
Sergei V. Utyuzhnikov
Arbitrary Direction of Search Domain
The direction of the search domain along the lines parallel to the vector l0 can be sufficient. However, in the general case the appropriate matrix A is required for an arbitrary unit vector l. For this purpose, we consider a linear transform mapping the previous pattern in such a way that the vector l0 is mapped onto the vector l. It can be obtained by multiplying both parts of equation (9) by an orthogonal matrix R: Rl0 = l. Next, we obtain the basis of vectors {a′i } (i = 1, . . . , M ) uniformly distributed on the lateral area of the hypercone that has the axis parallel to the vector l: a′i =
sin γc ′ sin(γ0 − γc ) e + l (i = 1, . . . , M ), sin γ0 i sin γ0
(16)
where e′i = Rei are the basis vectors of the Cartesian coordinate system in which the vector l has equal coordinates. One can see that the columns of transition matrix R are the coordinates of the vectors e′i (i = 1, . . . , M ) in the basis {ej } (j = 1, . . . , M ). It is clear that all angles are preserved because the transform is orthogonal. In particular, (a′i , l) = cos γ0 . Hence, we obtain the matrix A in the general form: A = A0 RT =
sin γc T sin(γ0 − γc ) R + E, sin γ0 sin γ0
(17)
where kEij k = klj k. If γc = γ0 , obviously a′i = e′i that means the transform becomes orthogonal and is only reduced to a turn of the original Cartesian coordinate system. As such, the matrix A is orthogonal and B = AT . It is clear that in the general case B = RA−1 0 . The matrix A0 is only assigned to the vector l0 . Thus, in the entire algorithm only matrix A0 is to be inverted, which is important for multidimensional applications. The general presentation requires the calculation of the orthogonal matrix R, the components of which must satisfy the following additional requirements: cos γ0
M X
Rij = li .
(18)
j=1
It is clear that the matrix R in (18) is not unique. The simplest way to obtain it is to consider the rotation from the vector l0 to the vector l in a Cartesian coordinate system related to these vectors: R = DTR DT . Here, TR is an elementary rotation matrix describing the rotation in the plane created by the first two basis vectors p − 1 − (l0 , l) 0 . . . 0 p (l0 , l) 1 − (l0 , l) (l0 , l) 0 ... 0 TR = 0 0 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 0 0 ... 1
Multiobjective Optimization
221
and the matrix D is the transition matrix from the original basis to an orthogonal basis {bi } assigned to the vectors l and l0 . For example, it can be obtained as follows. First, we conp 2 sider orthogonal unit vectors b1 = l0 and b2 = (l − (l, l0 )l0 ) / 1 + (l, l0 ) . Then, we complement these two vectors by vectors {ei } upon the Gram-Schmidt orthogonolization procedure. According to this procedure, each subsequent vector is orthogonal to all the previous. More precisely, from all basis vectors {ei } we eliminate two vectors that create the smallest angle with the vector l. Without loss of generality, assume that vectors ei (i = 3, . . . , M ) are retained. Then, we have ek − bk = s
i=k−1 P
(ek , bi )bi
i=1
1+
(k = 3, . . . , M ).
i=k−1 P
(ek , bi )2
i=1
Thus, the columns of the matrix D consist of the vectors bi (i = 1, . . . , M ): D = (b1 , b2 , . . . , bM ). It is clear that the obtained matrix D is orthogonal: D−1 = DT . Thus, the direction of the search can be easily conducted. Finally, it is to be noted that the rotation from the vector l0 to a vector l makes sense only if the angle between the two vectors is large enough.
3.6.
Rotation of Search Domain
The general representation of the matrix A given by equation (17) can be important for seeking the Pareto set nearby its boundary. If we consider the orthogonal projection of the Pareto set onto the utopia hyperplane, the images of some Pareto points may not belong to the interior of the convex polygon (6), (7) spanned by the M vertexes µ∗i . This fact was first noted in [12]. One of the possibilities to resolve this problem, suggested in [12] for the NC method, is based on the use of negative coefficients αi . However, this can result in too many redundant solutions [6]. Another opportunity is suggested in [6] and described below. Let us consider the edge vectors of polygon (6), (7): νi = µi+1 −µi (i = 1, . . . , M −1). The point pi belongs to a k-th edge of the polygon if and only if αm = 0 (m 6= k, k + 1). Assume that the vector l is related to the normal of the utopia hyperplane. Then, if the point M belongs to an edge of the polygon, we rotate the vector l in the direction opposite to the polygon. In other words, l is changed in such a way that the orthogonal projection of the end of the vector, drawn from an edge, onto the utopia hyperplane does not fall in the interior of the polygon. For this purpose, in the utopia plane we introduce a unit vector which is the outer normal to the edge in question. The vector can be defined as: si =
νi+1 + βi νi , |νi+1 + βi νi |
βi = −
(νi−1 , νi ) . (νi , νi )
222
Sergei V. Utyuzhnikov
Then, the current vector lr is determined via si and normal n to the utopia hyperplane towards the decrease of all objective functions as follows: lr = cos θr n + sin θr si ,
(19)
0 < θr < π/2.
(20)
The angle θr is a parameter. Changing θr from 0 to π/2, the vector lr is turned from the normal vector n to the vector si (see Figures 3a and 3b). Thus, we arrive at the following algorithm. If the reference point M strictly belongs to the interior of the polygon, then the vector lr coincides with the normal n. If point M is on an edge of the polygon, then an additional rotation of the vector may be required. To obtain an even distribution of the Pareto set, the number of additional points Nr related with the rotation of the vector lr depends on the distance to the vertexes of the edge. For example, the rotation is not required at the anchor points. Generally speaking, it is reasonable to choose the maximal value of Nr at the center of an edge. The following evaluation of Nr is suggested for a k-th edge: Nr = integer (4mαk αk+1 )
(m ≥ 1).
(21)
Finally, it is worth noting that this number can be substantially optimized if the information on the current local distribution of the Pareto set is taken into account. For example, as noted above, if a Pareto solution appears to be at an edge of the polygon, no additional rotation is needed and Nr = 0.
Figure 3. Rotation of search domain [6].
Multiobjective Optimization
3.7.
223
Finding a Pareto Solution
The DSD algorithm gives us a well-spread distribution of search domains. In each selected search domain we seek only one Pareto solution. To do this it is enough to introduce an aggregate objective function (AOF), which is a combination of objective functions. Obviously, the AOF cannot be arbitrary. Requirements to admissible AOF can be found in [19]. According to [19], an admissible AOF must be a locally coordinate-wise increasing function of the objective functions. On the definition, a function is a coordinate-wise increasing function if it is a strict monotonically increasing function with respect to each argument [20]. Partial examples of AOF are given in [13], [16] and [6] in the framework of PP method.
3.8.
Filtering Procedure
As noted in [6], the algorithm can generate local Pareto solutions. However, they can be removed via the filtering procedure based on the contact theorem. Consider the sketch of an example shown in Figure 4. A point P is a local Pareto solution rather than a global one. To filter local Pareto solutions, we put the reference point M of a search domain at a point considered as a candidate Pareto solution and set A = I in (14), (15). If the point corresponds to a global Pareto solution (e.g., a point P ′ ), then no any other solution can be obtained, which immediately follows from the contact theorem [21]. Thus, if D ∩ Y∗ = P , then and only then the point P represents a Pareto solution. Thus, we have a criterion for verification if the solution is a global Pareto solution.
Figure 4. Filtering local Pareto solutions [6].
3.9.
Scaling Procedure
In the general formulation, to avoid undesirable severe skewing of the search domain in the algorithm, the objective functions should be preliminary scaled [1]: fisc =
fi − fi,min . fi,max − fi,min
(22)
224
3.10.
Sergei V. Utyuzhnikov
Step-by-Step Algorithm
The entire DSD method can shortly be formulated as a 12-step algorithm. Step 1: apply the scaling procedure (22) to the objective functions. Step 2: find anchor points according to the procedure described in Section 3.2.. Step 3: find the utopia polygon determined by (6) and (7). Step 4: introduce a distribution of reference points according to Section 3.3.. Step 5: determine the angle γc and matrix A0 according to (15). Step 6: find matrix B = A−1 0 . Step 7: identify the local search domain according to (10) and (11). Step 8: find a local Pareto solution. Step 9: if no solution is found, switch the direction of the search on the opposite. Step 10: displace the search domain to another reference point. Step 11: at the edges of the utopia polygon (6), (7), apply the rotation algorithm described in Section 3.6.. Then, Steps 6-8 are repeated for matrix A determined by (17). Step 12. apply the filtering procedure described in Section 3.8..
3.11.
Efficiency of the Algorithm
Thus, the algorithm described above is able to generate an entire well distributed Pareto set. This is achieved by solving a number of single-objective optimization problems for an AOF. The algorithm is efficient because the number of the single-objective problems solved mostly equals the number of the Pareto points obtained. The method can efficiently be applied in multidimensional case because most matrices are known explicitly in an analytical form. In the entire algorithm only matrix A0 is to be inverted once. In addition, it is to be noted that the method can naturally be realized on parallel processors because each Pareto search can be done independently from the others.
4.
Test Cases
The DSD algorithm was implemented in the PP method in [6]. In that paper, the efficiency of the approach is demonstrated on a number of test cases including comparison against the NBI, NC and original PP methods. The suggested algorithm performs quite well on multidimensional test cases, convex and concave frontiers.
4.1.
Criterion of Evenness
To compare different methods, the following criterion of evenness is suggested in [6]. It is based on a coefficient ke characterizing how evenly a Pareto set is distributed on the Pareto surface. For this purpose, we introduce a curvilinear coordinate system {xi } (i = 1, . . . , M − 1) on the Pareto surface in the objective space. In the Riemann space RM −1 , related to the Pareto surface, the Riemann metric is given by dr2 =
−1 M −1 M X X i=1 j=1
gij dxi dxj .
(23)
Multiobjective Optimization
225
Then, the coefficient ke is defined via the Hausdorff metric: max min rij ke =
i
j
min min rij i
j
(i, j = 1, . . . , Np ; i 6= j).
Here, Np is the number of Pareto points, rij is the distance between an i-th and j-th Pareto points in metric (23): rij = |xi − xj |. The coefficient ke represents the ratio of the maximal possible distance between any two nearest Pareto points to the minimal one. The following three relatively simple test cases from [6] demonstrate how the algorithm works. Example 1. First, we consider a two-dimensional test case: min x, y
(24)
x2 + y 2 ≤ 1.
(25)
subject to a constraint
It is clear that in this test case the feasible domain Y∗ corresponds to the unit circle and the Pareto surface is convex. The solution of problem (24), (25) is represented by a segment of the unit circle as shown in Figure 5. The DSD algorithm provides an even representation of the Pareto frontier with ke = 1.6 if Np = 11. Without shrinking the search domain, if the angle γc = γ0 = 450 , the generated Pareto set turns out not to be well spread with ke = 5.6. The difference between the two strategies is even more impressive if a similar test case with a concave Pareto frontier is tackled [6]. Example 2. Next, we consider an example of a concave Pareto frontier in R3 : min x, y, z
(26)
x2 + y 2 + z 2 ≥ 1,
(27)
subject to
x > 0,
y > 0, z > 0. In contrast to Example 1, the standard definition of an anchor point does not lead to a unique point. For example, the solution of a single-objective problem min x leads to a segment of the unit circle in the plane x = 0. Meanwhile, it is easy to see that the modified definition of the anchor point given in Section 3.2. results in only three anchor points: (0, 0, 1), (1, 0, 0) and (0, 1, 0). The entire orthogonal projection of the Pareto surface onto the utopia plane does not necessarily appear to be in the triangle created by the anchor points (see Figure 6). For this
226
Sergei V. Utyuzhnikov
Figure 5. Segment-of-circle frontier [6].
reason, a method such as NBI, for example, is not able to catch the entire Pareto frontier if the coefficients αi in (7) are not negative. In [6], the rotation strategy described in Section 3.6. is used to generate a complete representation of the Pareto frontier. The utopia plane and reference points, distributed in the polygon (triangle) according to algorithm (6), (7), are given in Figure 7. Then, the generated Pareto set is shown in Figure 8. As noted above, the definition of the anchor point may be very important for the efficiency of the algorithm. This statement is illustrated by several examples available in [6]. Example 3. The next test case, suggested in [9], includes a Pareto frontier with both convex and concave parts, which are created by three ellipsoid segments centered at the origin. The problem reads:
min x, y
Multiobjective Optimization
Figure 6. 3D test case. Orthogonal projection of Pareto surface [6].
Figure 7. 3D test case. Utopia plane [6].
227
228
Sergei V. Utyuzhnikov
Figure 8. 3D test case. Pareto frontier [6].
subject to x2 + (y/3)2 ≥ 1,
x4 + y 4 ≥ 16,
(x/3)3 + y 3 ≥ 1,
0 ≤ x ≤ 2.9,
0 ≤ y ≤ 2.9.
The exact Pareto curve is shown by the dashed line in Figure 9. It is important to note that the Pareto frontier is not smooth and is located on both sides of the utopia line. As shown in Figure 9, the algorithm is capable of capturing the entire frontier and generating a well-distributed Pareto set. As soon as a quasi-even distributed Pareto set is available, the information on the Pareto optimal solutions can be complemented by a local approximation of the Pareto surface in the objective space. This is also important for a sensitivity analysis. A local approximation can be achieved by the either linear or quadratic approximation described in the next Section. It is based on the results obtained in [22], in which it is shown that a linear approximation known in the literature is not applicable in the general formulation. A corrected linear approximation is suggested and is proven to be accurate in the multidimensional case.
Multiobjective Optimization
229
Figure 9. Hyperellipsoid frontier [6].
5.
Pareto Local Approximation
Consider a Pareto solution and assume that in its vicinity the Pareto surface is smooth enough. To obtain a local approximation of the Pareto surface, we should identify only active constraints and consider their local approximation [23]. It is clear that in the general case not all constraints (2), (3) are necessarily active on the Pareto frontier. A constraint is said to be active at a Pareto point x∗ of the design space X if a strict equality holds at this point [23]. We suppose that if some constraints are active at a Pareto point, then they remain active in its vicinity. Without loss of generality, we assume that all constraints (2), (3) are active. Note the set of active constraints as G ∈ RI , where I = K + P . Then, at the given point x∗ of the feasible design space X∗ we have: G(x∗ ) = 0.
(28)
If G ∈ C 1 (RI ), then the constraints can be linearized: J(x − x∗ ) = 0. where J is the Jacobian of the active constraints set at x∗ : J = ∇G. A point x∗ is said to be regular if all gradients of the active constraints are linearly independent [21]. It is clear that at a regular point rank J = I. In our further analysis we consider only regular points.
230
Sergei V. Utyuzhnikov In the objective space Y let the Pareto surface be represented by: S(y) = 0
(29)
and S ∈ C 2 (RI ) in a vicinity of x∗ . The gradient of any differentiable function F at point x∗ under constraints is given by the reduced gradient formula [25]: ∇F|Sl = P ∇F.
(30)
Here, Sl is the hyperplane tangent to X∗ in the design space: Sl = {x| J(x − x∗ ) = 0}, and P is the projection matrix onto hyperplane Sl : P = I − J T (JJ T )−1 J. Then, in the objective space the tangential derivatives of the function F on the boundary of the feasible domain Y∗ are give by dF dx dF = dfi dx |Sl dfi
(i = 1, . . . , M ).
(31)
In equality (31) the meaning of the right-hand side is the following. The first term represents the reduced gradient (30), whereas the second term gives the derivative of the design vector x with respect to an objective function fi along the direction that is tangent to the Pareto surface. The latter term can be represented via the gradients of the objective functions in the design space. One can prove that the columns of matrix P ∇f , f = (f1 , f2 , . . . , fM )T are linearly dependent [22]. Without loss of generality, we assume that the first nf < M columns are linearly independent and represented by P ∇ef. Thus, def = (P ∇ef)T dx, (32) where matrix P ∇ef ≡ (P ∇fe1 , P ∇fe2 , . . . , P ∇fenf ) has all the columns linearly independent. Next, we represent dx as dx = Adfe. (33) To find the matrix A, we multiply both sides of equation (33) by (P ∇ef)T . Then, we obtain: ((P ∇ef)T dx)T Adef = def and
h i−1 . A = P ∇ef (P ∇ef)T P ∇ef
(34)
It is easy to see that the inverse matrix [(P ∇ef)T (P ∇ef)]−1 in (34) is always non-singular because all the vectors P ∇fi (i = 1, . . . , nf ) are linearly independent. Thus, the matrix A
Multiobjective Optimization
231
is the right-hand generalized inverse matrix to (P ∇ef)T . From the definition of matrix A it follows that (P ∇ef)T A = I and ATi P ∇fej = δij , where I is the unit matrix and δij is the Kronecker symbol. From equation (34) it follows that P A = A and in equation (32) dx belongs to the tangent plane Sl at the Pareto point. Thus, h i−1 dx (35) A = P ∇ef (P ∇ef)T P ∇ef def and for any i ≤ nf dx = Ai , dfi where A = (A1 , A2 , . . . , Anf ). Then, from equations (30), (31) and (35), we obtain for any i ≤ nf : dF = (P ∇F )T Ai = ATi P ∇F = ATi ∇F. dfi In particular, if F = fj (nf < j < M ), then we get the first order derivative of objective fj with respect to objective fi on the Pareto surface. Hence, dfj = ATi ∇fj (0 ≤ i ≤ nf , dfi
f
< j ≤ M ).
(36)
As soon as we know the tangential derivatives (36), we are able to obtain a linear local approximation of the surface S. One can derive the formula of sensitivity of objective fj with respect to objective fi along the greatest feasible descent direction for objective fi [23]: dfj (P ∇fj , P ∇fi ) (fj , P ∇fi ) = ≡ . dfi (P ∇fi , P ∇fi ) (fi , P ∇fi )
(37)
It is to be noted that formulae (37) is usually used as a linear approximation of the Pareto surface [24]. However, formulae (37) exactly corresponds to the linear approximation if and only if either nf = 1 or the vectors P ∇ef create an orthogonal basis. In the latter case the matrix (P ∇ef)T P ∇ef is diagonal. If there are only two objectives functions, (36) and (37) always coincide because nf = 1. However, in the general case, formulae (37) is not always applicable. This is demonstrated in the next Section. On the Pareto surface in the objective space the operator of the first derivative can be defined as: d = ATi ∇. dfi By applying this operator to the first order derivative, we arrive at the reduced Hessian: d2 F = ATi ∇(ATj ∇F ) ≈ ATi ∇2 Aj (0 ≤ i, j ≤ nf ). dfi dfj
(38)
Thus, we obtain a local approximation of the Pareto surface. It can be represented by either a linear hyperplane: nf X dS ∆fi = 0 (39) dfi i=1
232
Sergei V. Utyuzhnikov
or a quadratic surface: nf X dS i=1
nf 1 X d2 S ∆fi + ∆fj ∆fk = 0, dfi 2 dfj dfk
(40)
j,k=1
where ∆f = f − f(x∗ ). Approximations (39) and (40) can be rewritten with respect to the trade-off relations between the objective functions as follows: fp = fp∗ +
nf X dfp 1
and fp =
fp∗
+
∆fi (p = nf + 1, . . . , M )
nf 1 X (p) Hjk ∆fj ∆fk (p = nf + 1, . . . , M ), ∆fi + dfi 2
nf X dfp i=1
dfi
(41)
(42)
j,k=1
where (p)
Hjk =
d2 fp . dfj dfk
In [23], to obtain a quadratic approximation, it is suggested that the reduced Hessian matrix Hij should be evaluated via a least-squared minimization using the Pareto set generated around the original Pareto point. However, in the case of a well distributed Pareto set the accuracy of this approximation may not be sufficient. The determination of the reduced Hessian (38) is based entirely on the local values in vicinity of a Pareto point. One should note that the approximations (41) and (42) derived in [22] precisely correspond to the first three terms of the Taylor expansion in the general case. It is clear that a local approximation of the Pareto surface allows us to carry out a sensitivity analysis and trade-off between different local Pareto optimal solutions. The considered local approximation assumes that the Pareto surface under study is smooth enough. The case of a non-differentiable Pareto frontier and its local analysis is addressed in [22].
6.
Example of a Local Approximation
Following [22], in this section, we compare the linear approximations of the Pareto surface (41) based on the derivatives (36) and (37). For this purpose we consider the optimization problem (26), (27). It is easy to see that the first order tangent derivatives to the Pareto surface are given by: df3 dz −x , = =p df1 dx 1 − x2 − y 2 df3 dz −y = =p . df2 dy 1 − x2 − y 2
(43)
Multiobjective Optimization
233
Next, we apply formulas (36) and (37). It is clear that on the Pareto surface the matrices J, P and A are given by: J = [−2x, −2y, −2z], 2 y + z2 −xy −xz x2 + z 2 −yz , P = −xy 2 −xz −yz x + y2 1 0 1 . A= 0 −x/z −y/z
(44)
Then, from (37) we obtain
df3 (x2 + y 2 − 1)x −xz p = , =p df1 |Eq.37 x2 + y 2 (1 − x2 ) 1 − x2 − y 2 −yz df3 (x2 + y 2 − 1)y p =p = . df2 |Eq.37 x2 + y 2 (1 − y 2 ) 1 − x2 − y 2
(45)
One can see that the derivatives (45) do not coincide with the exact derivatives (43). Thus, the approach [23], based on (37), leads only to an approximate linear approximation. This is demonstrated in Figure 10.
Figure 10. Linear approximation based on [23] (from [22]). In turn, from equation (36) we obtain the exact first order derivatives (43). It is easy to verify that equation (38) gives the exact second order derivatives. The linear approximation based on equations (36) and (41) is shown in Figure 11. More examples including industrial applications are available in [22].
7.
Conclusion
The DSD algorithm provides an efficient way for quasi-even generating a Pareto set in a quite arbitrary multidimensional formulation. The approach is based on shrinking a search
234
Sergei V. Utyuzhnikov
Figure 11. Linear approximation [22]. domain. The orientation of the search domain in space can be easily conducted. The approach can be combined with different search engines. The use of the DSD algorithm with the PP technique [6] demonstrates that the approach is capable of generating both convex and concave Pareto sets and filter local Pareto solutions. The algorithm can be efficiently complemented by a local first- or second-order approximation of the Pareto frontier described in [22].
References [1] Miettinen, K. M. Nonlinear Multiobjective Optimization. Boston: Kluwer Academic, 1999. [2] Messac, A. AIAA Journal. 2000, 38 (1), 155–163. [3] Koski, J. Communications in Applied Numerical Methods. 1985, 1, 333–337. [4] Athan, T. W.; Papalambros, P. Y. Engineering Optimization. 1996, 27, 155–176. [5] Das, I.; Dennis, J. E. Structural Optimization. 1997, 14, 63–69. [6] Utyuzhnikov, S. V.; Fantini, P.; Guenov, M. D. Journal of Computational and Applied Mathematics. 2009, 223 (2), 820–841. [7] Das, I.; Dennis, J. E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems, SIAM Journal of Meter Design Optimization Problems, In Proceeding of ASME Design Automation Conference, pp. 77–89, Montreal, Quebec, Canada, Sept. 17–20, 1989. [8] Das, I. An Improved Technique for Choosing Parameters for Pareto Surface Generation Using Normal-Boundary Intersection, WCSMO-3 Proceedings, Buffalo, NY, March, 1999.
Multiobjective Optimization
235
[9] Messac, A.; Ismail-Yahaya, A.; Mattson, C. A. Structural and Multidisciplinary Optimization. 2003, 25 (2), 86–98. [10] Shukla, P. K.; Deb, K. European Journal of Operational Research. 2007, 181, 1630– 1652. [11] Yun, Y. B.; Nakayam, H.; Tanino, T.; Arakawa, M. European Journal Operational Research. 2001, 129, 586-595. [12] Messac, A.; Mattson, C. AIAA Journal. 2004, 42 (10), 2101–2111. [13] Messac, A. AIAA Journal. 1996, 34 (1), 149–158. [14] Sanchis, J.; Martn´ınez, M.; Blasco, X.; Salcedo, J.V. Structural and Multidisciplinary Optimization. 2008, 36 (5), 537–546. [15] Messac, A.; Mattson, C. Optimization and Engineering. 2002, 3, 431–450. [16] Utyuzhnikov, S. V.; Fantini, P.; Guenov, M. D. Numerical method for generating the entire Pareto frontier in multiobjective optimization, Proceedings of Eurogen’2005, Munich, September 12-14, 2005. [17] Collette, Y.; Siarry, P. Multiobjective Optimization: Principles and Case Studies, Berlin, Heidelberg, New York: Springer, 2003. [18] Deb, K. Multi-objective Optimization Using Evolutionary Algorithms. Chichester, J. Wiley & Sons, 2001. [19] Messac, A.; Melachrinoudis, A.; Sukam, C. Optimization and Engineering Journal. 2000, 1 (2), 171–188. [20] Steuer, E. R. Multiple Criteria Optimization: Theory, Computation, and Application. Melbourne, Florida: Krieger, 1989. [21] Vincent, T. L.; Grantham, W. J. Optimality in Parametric Systems. New York: John Wiley & Sons, 1981. [22] Utyuzhnikov, S. V.; Maginot, J.; Guenov, M. D. Journal of Engineering Optimization. 2008, 40 (9), 821–847. [23] Tappeta, R. V.; Renaud, J. E. Interactive multi-objective optimization procedure. AIAA 99-1207, April 1999. [24] Tappeta, R. V.; Renaud, J. E.; Messac, A.; Sundararaj, G. J. AIAA Journal. 2000, 38 (5), 917–926. [25] Fletcher, R. Practical Methods of Optimization. New York: John Wiley & Sons, 1989.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 237-258
ISBN 978-1-60876-500-3 c 2011 Nova Science Publishers, Inc.
Chapter 10
O N THE DYNAMICS OF C OALITION S TRUCTURE B ELIEFS Giuseppe De Marco1,∗ and Maria Romaniello2 1 Dipartimento di Statistica e Matematica per la Ricerca Economica, Universit`a di Napoli Parthenope, Via Medina 40, Napoli 80133, Italia 2 Dipartimento di Strategie Aziendali e Metodologie Quantitative, Seconda Universit`a di Napoli, Corso Gran Priorato di Malta, Capua 81043, Italia
Abstract In Hart and Kurz (1983), stability and formation of coalition structures has been investigated in a noncooperative framework in which the strategy of each player is the coalition he wishes to join. However, given a strategy profile, the coalition structure formed is not unequivocally determined. In order to solve this problem, they proposed two rules of coalition structure formation: the γ and the δ models. In this paper we look at evolutionary games arising from the γ model for situations in which each player can choose mixed strategies and has vague expectations about the formation rule of the coalitions in which is not involved; players determine at every instant their strategies and we study how, for every player, subjective beliefs on the set of coalition structures evolve coherently to the strategic choices. Coherency is regarded as a viability constraint for the differential inclusions describing the evolutionary game. Therefore, we investigate viability properties of the constraints and characterize velocities of pairs belief/strategies which guarantee that coherency of beliefs is always satisfied. Finally, among many coherent belief revisions (evolutions), we investigate those characterized by minimal change and provide existence results.
Key words and phrases: Coalition formation; coherent beliefs; differential inclusions; viability theory; minimal change belief revision. ∗
E-mail address: [email protected] Part of the results in this paper have been presented at the XVII European Workshop on General Equilibrium.
238
1.
Giuseppe De Marco and Maria Romaniello
Introduction
As recognized by von Neumann and Morgenstern (1944) in their seminal work, the problem of coalition formation plays a central role in game theory. A significative topic in the theory of coalition formation is the investigation of mechanisms and processes which determine how individuals choose to partition themselves into mutually exclusive and exhaustive coalitions, that is, into a coalition structure. It has been shown that in many situations the payoffs of the players belonging to a coalition S depend also on the way players outside S are organized in coalitions and, as a consequence, the strategic choices of the agents depend on the whole coalition structure (see for example Greenberg (2002)). In the context of classical TU characteristic form games, such a point of view has been taken into account, for instance, by defining “power indexes” deriving from the coalition structure formed or by considering “characteristic functions” depending on the whole coalition structure (see also Owen (1977), Hart and Kurz (1983) or Myerson (1978)). Stability of coalition structures has been analyzed via concepts of equilibrium in associated strategic form games. The key feature for this approach is that the strategy set of each player j is the set of all subgroups of players containing j and his choice represents the coalition he wishes to join. However, given a strategy profile (i.e. a coalition for each player), the coalition structure formed is not unequivocally determined. So, different rules of coalition structure formation can be considered, namely, functions associating to every strategy profile a coalition structure. In Hart and Kurz (1983) the following rules are proposed: the so called model γ in which a coalition S forms if and only if all its members have chosen it; the other players become singletons. The model δ in which a coalition S forms if and only if it is a maximal coalition in which all its members have chosen the same coalition T (which might be different from S); the other players become singletons. Note that, given a strategy profile, the γ and the δ rules determine whether each coalition forms or not and consequently determine a unique coalition structure. A fundamental assumption in the Hart and Kurz model is that each player j makes his choice knowing not only the strategies of every other player, but also the formation rule of coalitions in which is not involved. However, in many situations it happens that the formation of a coalition is the outcome of private communication within the members of the coalition (see Moreno and Wooders (1996) and references therein). Hence, differently from the previous literature, in this paper, we consider the case in which each player has vague expectations about the choices of his opponents corresponding to the coalitions in which is not involved and about the formation rule of these coalitions. Moreover, in this paper, we are interested in the evolutionary games arising from the static model of coalition formation. More precisely, we consider the situation in which players determine at every instant the set of players they wish to join and then we study how the coalition structure evolves according to these strategic choices. To this purpose the classical concept of coalition as a subset of players, also called crisp coalition, seems to be not well suited. 
In fact, such concept implicitly presupposes that a group of players signs a contract (either they cooperate or not), the realization of which requires the involved players to cooperate with a full commitment. In a dynamical setting, it seems natural to assume that a player is not asked to sign a contract at any instant, but rather to announce the set of players he wishes to join. According to this point of view, Konishi and Ray (2003) state that uncertainty enters into the coalition
On the Dynamics of Coalition Structure Beliefs
239
formation process in a natural way: whenever a player has more than one reasonable move he might randomize among them. So we assume that, at any instant, each player j can randomize his choice by playing a mixed strategy, that is, a probability distribution on the set of all coalitions containing j, called mixed coalition (De Marco and Romaniello (2006)). Therefore, in our approach, each player j, knowing only the components of the strategies of the other players corresponding to the coalitions S containing j, can only infer, via a mixed rule of coalition formation, the subjective probabilities that coalitions containing j will eventually form and, consequently, subjective “coalition structure beliefs”, cs beliefs for short, (that is, probability distribution on the set of coalition structures). However, this approach embodies two fundamental problems: i) Even if player j knows the components of the strategies of the other players corresponding to the coalitions S containing j, there exist many rules which assign to every mixed strategy profile the probabilities that coalitions containing j will eventually form and which generalize the rules of coalition formation in Hart and Kurz (1983). ii) Given a probability assignment on coalitions containing j corresponding to a mixed strategy profile under one of the possible coalition formation rules then, the coalition structure belief is not unequivocally determined. In other words, there might exist many cs beliefs which are consistent (in terms of laws of probability) with the probability assignment. In this paper we focus only on the γ model and, to tackle the first question, we introduce the so called mixed γ model, which gives back the pure γ model whenever agents play only pure strategies and which is the natural probabilistic extension of the pure γ model since the probability of each coalition S is calculated as the product of the probabilities given by every player in S to this coalition. The second question is unsolvable in a sense. In fact, if one interprets the probability of a coalition S as the probability of the event “coalition structures containing S”, then, there might exist multiple coalition structure beliefs which are coherent in the sense of de Finetti (1931) with this probability assignment (roughly speaking, such that the total probability theorem is satisfied for the probability of every event/coalition). Therefore, such multiplicity problem do not allow for an unambiguous and well-defined decision mechanism of each player. For instance, the multiplicity of coherent coalition structure beliefs for player j implies multiplicity of von Neumann - Morgestern expected utilities to player j, given the mixed strategy profile. We will show below that it is possible to deal with these difficulties by selecting a mechanism of revision of prior beliefs in a dynamical environment. In particular, we consider the coalition structure beliefs updating problem of the generic player j and state the condition that coalition structures beliefs be consistent (in terms of de Finetti’s coherency) with his subjective probabilities that coalition containing j will eventually form, at all instants, as a viability constraint. Then, we give characterizations for continuous evolutions in both the players’ strategies and (at least one) corresponding subjective coherent belief, by applying the main viability theorem for differential inclusions. 
More precisely, we consider evolutionary games in which players act on the velocities of the strategies which are regarded as decisions (controls) and used by the players to govern the evolution of coherent coalition structure beliefs. For every player j, the evolution
240
Giuseppe De Marco and Maria Romaniello
of the strategies determines, through the mixed γ model, an unique evolution of the player j’s subjective probability assignment on coalitions containing j. Moreover, among every possible evolution of player j subjective coalition structure belief we consider only those which are coherent with the given probability assignments at every instant. The coherency condition is regarded as a viability constraint and we characterize the so called regulation map (Aubin (1991)) which gives the velocities which guarantee viable evolution of couples coalition structure belief/mixed strategy profile starting from every point in the viability constraint’s domain. Moreover, starting from such characterizations, we exhibit some paradoxes for differentiable evolutions of pairs coalition structure belief/mixed strategy profile whenever they start from or arrive at a pure cs belief. Finally, given feedback (state dependent) controls of the players, the evolution of the beliefs can be regarded as a problem of probabilistic belief revision. Namely, the classical question in belief revision theory is the following: Suppose one holds a certain belief about the states of the world and at a given moment something that contradicts these belief is observed. How should the belief be revised? Of course, different approaches might be considered; we focus on the idea of minimal change revision (see Schulte (2002) and Perea (2007)). In fact, in belief revision theory, it is a generally accepted idea that if one observes an event that contradicts the previous belief, then the new belief about the world should explain the event just observed, and should be “as close as possible” to the previous ones. The intuition behind this principle is that previous belief should change only as far as necessary. In our case, belief revision works in continuous time and revised beliefs explain observations at every instant through the coherency conditions, since observations are in terms of probability assignments on coalitions rather than events; moreover, the idea of minimal change is translated in terms of revision with minimal velocity. Therefore we provide an existence theorem for evolutions of cs belief of minimal velocity in the mixed γ. As a final remark we recall that, to describe uncertainty in cooperative games, a different approach is the use of the concept of fuzzy coalition (Aubin (1974, 1981) in which each player is characterized by his participation rate. However this concept is more suited to describe stability of the grand coalition rather than coalition structures formation. Moreover differential cooperative games were firstly introduced by Filar and Petrosjan (2000) and then developed for fuzzy coalitions by Aubin (2002, 2003) in the framework of characteristic form games.
2. 2.1.
Mixed Strategies and Coherent Coalition Structure Beliefs Preliminaries
Let I = {1, . . . , n} be the set of players. Then the coalition structures set (cs set, for short) is the set B of all partitions of I, that is [ S = I, S ∩ T = ∅ ∀ S, T ∈ B. B ∈ B ⇐⇒ S∈B
In Hart and Kurz (1983), the strategy set of each Q agent j is Sj = {S ⊆ I | j ∈ S} so that a Si and the strategy Sj is the set of players strategy profile is the n-tuple (S1 , . . . , Sn ) ∈
that player j wishes to join.
i∈I
On the Dynamics of Coalition Structure Beliefs
241
The γ model proposed in Hart Q and Kurz (1983) for coalition structure formation can be γ Si → B defined as represented by a function h : i∈I
T ∈ hγ (S1 , . . . , Sn ) ⇐⇒ T = Si for all i ∈ T or T = {l} for some l ∈ I.
Now we provide a relation between the number of coalition structures and the number of not empty coalitions in a finite set, which will be useful in the next sections. Recall that the total number of partitions of a set with n elements is given by the Bell number, denoted with B(n), which can be defined by the following recursive equation B(n) =
n−1 X k=0
Moreover B(n) =
n P
n−1 B(k). k
(1)
S(n, k) where S(n, k) are the Stirling numbers of the second kind
k=0
which are a doubly indexed sequence of natural numbers, each element representing the number of ways to partition a set of n objects into k groups. On the other hand, it is well known that the number C(n) of nonempty coalitions of a set with n elements is given by C(n) =
n X n k=1
k
= 2n − 1.
Note that, since the number of strategies of each player is equal to the number of nonempty coalitions in a set with n − 1 players plus 1 (corresponding to the strategy singleton), then we get |Si | = 2n−1 for every i ∈ I. Moreover, we have Lemma 2.1. If B(n) > C(n) then B(n + 1) > C(n + 1). Proof. Trivially C(n + 1) =
n+1 X k=1
Consider
n+1 = 2n+1 − 1 = 2(2n − 1) + 1 = 2C(n) + 1. k n X n B(k), B(n + 1) = k k=0
by Pascal’s rule
so B(n + 1) =
n−1 n−1 n + = k−1 k k n−1 X n − 1 n − 1 n n + B(k) + B(n) B(0) + k−1 k n 0 k=1
242
Giuseppe De Marco and Maria Romaniello
since n−1 X n − 1 n−1 = C(n − 1), B(k) > k−1 k−1 k=1 k=1 n−1 X n − 1 n−1 B(k) = B(n) − B(0) k 0 n−1 X
k=1
then B(n+1) >B(0)+C(n−1)+B(n)−B(0)+B(n) = 2B(n)+C(n−1) > 2C(n)+1 = C(n+1). Note that, since B(5) = 52 > 31 = C(5), then B(n) > C(n) for all n ≥ 5.
2.2.
Mixed Strategies and Coalition Formation Rules
Differently from Hart and Kurz (1983), we assume that every player is allowed to choose a mixed strategy, called mixed coalition. Denote with Sj = {S ⊆ I | j ∈ S} then, a mixed coalition for player jPis a vector of probabilities mj = (mj,S )S∈Sj such that mj,S ≥ 0 for every S ∈ Sj and mj,S = 1. The set of mixed strategies of player j is the simplex S∈Sj
Rk ,
∆j ⊂ where we denote with k = |Sj |. As stated in the Introduction, we consider the situation in which the generic player j observes only the probability µj,S that coalition S will eventually form, for every S ∈ Sj . n Q ∆i → Following the idea of the γ model, each µj,S should be given by a function λj,S : i=1 [0, 1]; that is, given the strategy profile (mi )i∈I , the probability that coalition S will eventually form is µj,S = λj,S (mi )i∈I . Each function λj,S represents the subjective coalition S formation rule to player j. The mixed γ model There are different ways to generalize the γ model to the case of mixed strategies, that is there are different set of functions (λj,S )S∈Sj which extend the γ model. Among the possible extensions to the mixed strategy case we consider the following: Y mi,S ∀ S ∈ Sj , |S| ≥ 2 µj,S = λj,S (m1 , . . . , mn ) = i∈S X . (2) mixed γ model := µj,S µ = λ (m , . . . , m ) = 1 − 1 n j,{j} j,{j} S∈Sj , |S|≥2
In this case µj,S =
Q
i∈S
mi,S translates the idea that player j evaluates the probability of
coalition S as the product of the probabilities announced by players in S and the probabilities left are assigned to the singleton. Observe that in the mixed γ model, the functions λj,S are multiaffine and therefore continuously differentiable. Finally, note that whenever players choose only pure strategies the mixed γ model is equivalent to the (pure) γ model restricted to Sj . In the following part of the paper, we consider the functions λj,S always defined as in (2).
On the Dynamics of Coalition Structure Beliefs
2.3.
243
Coherent Beliefs
A coalition structure belief (cs belief for short) is a probability distribution on PB, that is a vector of probabilities ̺ = (̺B )B∈B such that ̺B ≥ 0 for all B ∈ B and ̺B = 1. B∈B
Denote the set of all of cs beliefs with ∆B ⊂ Rb with b = B(n) = |B|. It is obvious that a coalition S can be interpreted as an event in the set of all coalition structures B, more precisely as the event ES = {B ∈ B | S ∈ B} for every S ⊆ I, so that the probability µj,S can be regarded as µj,S = prob{B ∈ B | S ∈ B} for every S ⊆ I. Therefore, as stated in the Introduction, the generic player j considers feasible only those coalition structure beliefs which are coherent in the sense of de Finetti (1931) with his subjective probability assignment on the event/coalition S for every S ∈ Sj . This is equivalent to say that, for every event/coalition S ∈ Sj the total probability theorem should be satisfied, that is, a cs belief must satisfy the following coherency constraint: X ̺B = µj,S ∀ S ∈ Sj . (3) B∋S
Remark 2.2. Usually, Probability Theory works with probability measures on σ-algebras and needs the specification of probabilities of all the events in the σ-algebra. There are, however, situations in which one can be interested in working with partial assignments of probability. In such cases the collection of all events for which probabilities are known (or believed to be something) need not have any algebraic structure (for example, do not form a σ-algebra). In such cases, one would like to know if there is a probability space (Ω, Σ, P) such that Σ contains all events of interest to us and P assigns the same probabilities to these as we believe them to be. In other words, one would like to know if the probability assignment is coherent in the sense of de Finetti (1931). In our model the probability assignments to player j are the probabilities µj,S to the event/coalition S. The probability distributions on B are the cs-beliefs ̺ and constraints (3) determine coherency, that is, if constraints (3) are satisfied then the cs belief assigns to the coalitions the same probabilities as player j believes them to be in light of his strategies and the corresponding mixed coalition rule (λj,S )S∈Sj . Existence The system of equations in constraints (3) define a linear system in the unknowns (̺B )B∈B where the number of unknowns is greater than the number of equations. The next proposition gives sufficient conditions for the existence of coherent cs beliefs: Proposition 2.3. For every probability assignment (µj,S )S∈Sj satisfying the following condition X µj,S = 1 (4) S∈Sj
there exists at least a cs belief satisfying the coherency constraints (3). Proof. For every coalition S with at least two players, let BS be the coalition structure defined by BS = {S, ({l})l∈S 6 BT . Given the assignment / } . Obviously S 6= T ⇐⇒ BS =
244
Giuseppe De Marco and Maria Romaniello
(µj,S )S∈Sj satisfying (4), let B ′ be the coalition structure having only singletons as elements and ̺b a cs belief defined by ̺bBS = µj,S
It results that
∀ S ∈ Sj , |S| ≥ 2; X
B∋S
̺bB′ = 1 −
X
µj,S ;
S∈Sj ,|S|≥2
̺bB = 0 otherwise.
̺bB = ̺bBS forall S ∈ Sj , |S| ≥ 2
so, being ̺bBS = µj,S , the coherency constraints for coalitions with at least two players is satisfied. Moreover, the only coalition structures containing {j} which have positive probability in the cs belief ̺b is B ′ . Therefore, from the assumption (4), it results that µj,{j} = 1 −
X
S∈Sj , |S|≥2
µj,S = ̺bB′ =
X
B∋{j}
̺bB .
Hence the coherency constraint for coalition {j} is satisfied. Since ̺b is a probability distribution on B then the assertion follows. Multiplicity In the next example we show that multiple coalition structure beliefs might be supported by the same probability assignment (µj,S )S∈Sj . Example 2.4. i) ii) iii)
Consider a 3 player game and the following strategies: m1,{1,2,3} = 1/2, m1,{1,2} = 1/4, m1,{1,3} = 0, m1,{1} = 1/4 m2,{1,2,3} = 1/3, m2,{1,2} = 0, m2,{2,3} = 0, m2,{2} = 2/3
.
(5)
m3,{1,2,3} = 1, m3,S = 0 otherwise
Consider player 1 and calculate µ1,S for all S ∈ S1 as in the γ model, we obtain the following µ = 1/6 1,{1,2,3} µ1,{1,2} = 0, µ1,{1,3} = 0 µ1,{1} = 5/6
then it easily follows that coherent cs beliefs must satisfy the following coherency conditions 1) µ1,{1,2} = ̺{{1,2},{3}} = 0, µ1,{1,3} = ̺{{1,3},{2}} = 0 2) µ1,{1,2,3} = ̺{{1,2,3}} = 1/6 3) 5/6 = µ{1} = ̺{{1},{2},{3}} + ̺{{2,3},{1}}
and therefore, from 3), we get infinite solutions.
On the Dynamics of Coalition Structure Beliefs
3.
245
Evolution of Coherent Coalition Structure Beliefs
Now we introduce the evolutionary games arising from the γ model of coalition formation. We consider the situation in which players determine at every instant the set of players they wish to join, more precisely players act on the velocities of the strategies which are regarded as controls. Then, fixed a generic player j, we study how his subjective coalition structure beliefs might evolve, governed by Nature, according to these strategic choices, that is, coherently with his subjective probability assignment on coalitions determined by the strategies. n Q ∆l Rk is the set-valued map of feasible For every player i ∈ I, Ui : ∆B × l=1
controls of player i; the set valued map of the a-priori feasible dynamics of cs beliefs is n Q ∆l Rb , with b = B(n) = |B|. Note that H could be constant and H : ∆B × l=1
given, for instance, by the entire space Rb or by a closed ball with radius η and center 0, B(0, η) ⊂ Rb . Moreover, the evolution of pairs cs belief/mixed strategy profile (̺(t), m(t)) should satisfy the following simplex constraints ∀ S ∈ Si ∀B ∈ B ̺X m i,S ≥ 0 B ≥0 X for all i ∈ I (6) ; mi,S = 1 ̺B = 1 S∈Si
B∈B
and the coherency constraints of player j, which can be rewritten as X ̺B = λj,S (m1 , . . . , mn ) ∀ S ∈ Sj . B∋S
Summarizing, from the point of view of player j, evolutions of coherent pairs of coalition structure belief/mixed strategies should be solutions of the following dynamical system (i.e. absolutely continuous functions satisfying the following system for almost all t: ̺′B (t) = hB (t) ∀B ∈ B m′ (t) = ui (t) ∀i ∈ I i (7) u (t) ∈ U (̺(t), m(t)) ∀ i ∈ I i i (h ) B B∈B ∈ H(̺(t), m(t))
under the viability constraints Kj given by: i) ̺B ≥ 0 ∀ B ∈ B X ii) ̺B − 1 = 0 B∈B iii) mi,S ≥ 0 ∀ S ∈ Si , and ∀ i ∈ I (̺, m) ∈ Kj ⇐⇒ . X mi,S − 1 = 0 ∀ i ∈ I iv) S∈S i X ̺B − λj,S (m1 , . . . , mn ) = 0 ∀ S ∈ Sj v) χ (̺, m) = S B∋S
(8)
246
Giuseppe De Marco and Maria Romaniello Note that the control ui (t) is the vector ui (t) = ui,S (t) S∋i , where each component ui,S (t) governs the velocity of mi,S . We emphasize again that ̺ represents subjective beliefs of player j. Moreover, even if the dynamics of each mi are completely given by players’ controls, player j only knows the components mi,S for all S ∈ Sj and all i ∈ S, which, in turn, are the only components involved in the coherency constraints. Of course, there is no a-priori reason why a solution (̺(t), m(t)) of the system (7) should be viable in the constraints (8) for every t ∈ [0, +∞[, that is, satisfy the constraints (8) for every t ∈ [0, +∞[. So we are interested to characterize velocities (h(t), u(t)) = (hB (t))B∈B , (ui (t))i∈I such that the corresponding solutions are viable. To this purpose we will apply the main viability theorems for control systems as stated in Aubin (1991). The viability theorem Now we recall some classical definitions and state the main viability theorem for the previous system. The constraints set Kj is said to be viable under the control system (7) if for every point (̺0 , m0 ) ∈ Kj there exists at least one solution (̺(·), m(·)) starting at (̺0 , m0 ) and governed by (7) such that (̺(t), m(t)) ∈ Kj for all t ≥ 0. We recall that if K ⊂ Rq and y ∈ K, a direction v ∈ Rq belongs to the contingent cone TK (y) if there exist a sequence εn > 0 and vn ∈ Rq converging to 0 and v respectively, such that: y + εn vn ∈ K ∀ n ∈ N. n Q Ui (̺, m) , then the Viability To simplify notations denote M(̺, m) = H(̺, m) × i=1
Theorem (Aubin, 1991, 1997) for system (7) under the constraints (8) reads:
Theorem 3.1. Assume that the set-valued maps H and Ui , with i ∈ I, are Marchaud, that is with closed graphs, not empty, compact and convex images for every point in the domain and bounded by linear growth, that is, there exist c, ψ1 , . . . , ψn > 0 such that, for all (̺, m) ∈ Kj : sup kyk ≤ c k(̺, m)k + 1 y∈H(̺,m)
and
sup zi ∈Ui (̺,m)
kyk ≤ ψi k(̺, m)k + 1 ∀ i ∈ I
Then, Kj is viable under the control system (7) if and only if for every (̺, m) ∈ Kj , the n Q Rk are not empty, where RKj is Rb × images of the regulation map RKj : Kj j=1
defined by:
o n RKj (̺, m) = (h, u) ∈ M(̺, m) | (h, u) ∈ TKj (̺, m) .
(9)
Moreover, every evolution (̺(·), m(·)) viable in Kj is regulated by (it is a solution of) the system: ̺′ (t) = hB (t) ∀ B ∈ B B m′i (t) = ui (t) ∀ i ∈ I . (10) (u(t), h(t)) ∈ RKj (̺(t), m(t))
On the Dynamics of Coalition Structure Beliefs
247
This previous theorem provides existence conditions and characterization of continuous evolutions in both the players’ strategies and corresponding coherent beliefs to player j for a general class of set valued maps of feasible controls (satisfying classical regularity assumptions). This approach obviously might include, as particular cases, set valued maps of feasible controls related, for instance, to myopic optimization criteria (such as best reply dynamics). However, at this point, the definition of suitable preference relations over strategy profiles is not straightforward since we deal with ambiguous probabilities or expected payoffs (Ellsberg (1961)) arising from the multiplicity of beliefs for a given strategy profile and therefore deserves a future accurate analysis. Conditional viability and belief revision Let (̺, m, u)
A(̺, m, u) be the set valued map defined by
A(̺, m, u) = h ∈ H(̺, m) | (h, u) ∈ RKj (̺, m)
and u(̺, m) be a profile of feedback controls of the players. (b ̺(t), m(t)) b of the following differential inclusion: (
̺′ (t) ∈ A ̺(t), m(t), u ̺(t), m(t) m′ = u(̺(t), m(t))
(11)
Consider a solution
(12)
then it follows that the evolution of the cs belief ̺b(t) satisfies the coherency constraints at every instant t given the probability assignment λj,S (m(t)). b Therefore, it can be regarded as a revision, in continuous time, of player j subjective probabilistic belief conditioned (in terms of the coherency constraints) by the evolution, in continuous time, of the assignment of probability on coalitions in Sj which, on the other hand, is determined by the evolution of the mixed strategy profile m(t). b Of course, there are no a-priori reasons why system (12) should admit a solution. Existence could be guaranteed, for instance, by the lower semicontinuity of the set valued map A (see, for example, the proof of Theorem 9.2.4 in Aubin (1997)) which, however, is not assured in general even when the model satisfies the hypothesis of the main Viability Theorem 3.1. Finally, observe that, even if system (12) admits a solution, then it could be not unique. Therefore it could be reasonable to refine the set of feasible solutions of system (12) by restricting the set valued map A. In Section 4 we will apply this procedure to select belief revision of minimal velocity.
4. The Regulation Map in the Mixed γ Model
We give the formula for the regulation map $R_{K_j}(\varrho, m)$ of system (7) under the constraints (8) in the mixed $\gamma$ model. To this purpose, denote by $\nabla(\varrho_B)$, $\nabla\big(\sum_{B \in \mathcal{B}} \varrho_B\big)$, $\nabla(m_{i,S})$, $\nabla\big(\sum_{S \ni i} m_{i,S}\big)$, $\nabla\big(\sum_{B \ni S} \varrho_B - \prod_{i \in S} m_{i,S}\big)$ the gradients of the corresponding functions with respect to the variables $(\varrho, m)$. Moreover, recall that a set $C \subseteq \mathbb{R}^n$ is said to be regular at a point $x \in C$ (also called sleek; see Aubin and Frankowska (1990)) if the set-valued map
$x \rightsquigarrow T_C(x)$ is lower semicontinuous at $x$. If the set $C$ is regular at $x$, then the normal cone of $C$ at $x$, $N_C(x)$, is given by the set of regular normal vectors, i.e.:
$$N_C(x) = \big\{ \omega \;\big|\; \langle \omega, x' - x \rangle \leq o(\|x' - x\|) \;\; \forall\, x' \in C \big\}.$$
A function $f \colon \mathbb{R}^n \to \mathbb{R}$ is said to be lower subdifferentially regular at a point $x$ if the epigraph
$$\operatorname{epi} f = \big\{ (x, y) \in \mathbb{R}^n \times \mathbb{R} \;\big|\; y \geq f(x) \big\}$$
is a regular set at $(x, f(x))$. Moreover, $f$ is said to be upper subdifferentially regular at $x$ if $-f$ is lower subdifferentially regular at $x$.

The characterization theorem

Let $L_B(\varrho, m)$ (resp. $L_{i,S}(\varrho, m)$) be the functions defined by $L_B(\varrho, m) = 1$ if $\varrho_B = 0$ and $L_B(\varrho, m) = 0$ otherwise (resp. $L_{i,S}(\varrho, m) = 1$ if $m_{i,S} = 0$ and $L_{i,S}(\varrho, m) = 0$ otherwise).

Proposition 4.1. Let $(\varphi_B)_{B \in \mathcal{B}}$, $(\eta_{i,S})_{i \in I, S \ni i}$ be nonnegative real numbers and $\zeta$, $(\beta_i)_{i \in I}$, $(\alpha_S)_{S \in \mathcal{S}_j, |S| \geq 2}$, $\theta$ be real numbers, and assume that the following transversality condition holds:
$$\begin{aligned}
&\text{i)} \;\; \varphi_B L_B(\varrho, m) + \zeta + \alpha_S = 0, \;\; \forall\, B \in \mathcal{B} \text{ such that } B \cap \mathcal{S}_j = S \text{ with } |S| \geq 2 \\
&\text{ii)} \;\; \varphi_B L_B(\varrho, m) + \zeta + \theta = 0, \;\; \forall\, B \in \mathcal{B} \text{ such that } B \cap \mathcal{S}_j = \{j\} \\
&\text{iii)} \;\; \eta_{i,S} L_{i,S}(\varrho, m) + \beta_i = 0, \;\; \forall\, (i, S) \text{ such that } S \notin \mathcal{S}_j,\; i \in S \\
&\text{iv)} \;\; \eta_{i,S} L_{i,S}(\varrho, m) + \beta_i - \alpha_S \prod_{l \in S \setminus \{i\}} m_{l,S} + \theta \prod_{l \in S \setminus \{i\}} m_{l,S} = 0, \;\; \forall\, (i, S) \text{ such that } S \in \mathcal{S}_j,\; i \in S
\end{aligned}$$
$$\Longrightarrow \;\; \varphi_B = \zeta = \theta = \alpha_S = \eta_{i,S} = \beta_i = 0 \;\text{ for all } B, S, i. \tag{13}$$
Then $(h, u) \in \mathcal{M}(\varrho, m)$ belongs to $R_{K_j}(\varrho, m)$ if and only if:
$$\begin{aligned}
&\text{i)} \;\; h_B \geq 0 \;\text{ whenever } \varrho_B = 0 \\
&\text{ii)} \;\; \sum_{B \in \mathcal{B}} h_B = 0 \\
&\text{iii)} \;\; u_{i,S} \geq 0 \;\text{ whenever } m_{i,S} = 0 \\
&\text{iv)} \;\; \sum_{S \in \mathcal{S}_i} u_{i,S} = 0, \;\; \forall\, i \in I \\
&\text{v)} \;\; \sum_{B \ni S} h_B - \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} = 0, \;\; \forall\, S \in \mathcal{S}_j,\; |S| \geq 2 \\
&\text{vi)} \;\; \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} + \sum_{B \ni \{j\}} h_B = 0.
\end{aligned} \tag{14}$$
For the proof of the previous proposition the following lemma is needed.

Lemma 4.2. Let $f_i \colon \mathbb{R}^n \to \mathbb{R}$, $i \in J_1$, and $g_l \colon \mathbb{R}^n \to \mathbb{R}$, $l \in J_2$, be continuously differentiable functions and let $K$ be the set defined by:
$$K = \big\{ x \in \mathbb{R}^n \;\big|\; f_i(x) = 0 \;\; \forall\, i \in J_1 \;\text{ and }\; g_l(x) \leq 0 \;\; \forall\, l \in J_2 \big\}.$$
Let $H(x) \subseteq J_2$ denote the set of active constraints at a point $x$, that is, $l \in H(x) \iff g_l(x) = 0$. Assume that $\nabla f_i(x) \neq 0$ for all $i \in J_1$ and $\nabla g_l(x) \neq 0$ for all $l \in H(x)$, and that the following transversality condition holds: given $v_i \in \mathbb{R}$ for all $i \in J_1$ and $q_l \in \mathbb{R}_+$ for all $l \in H(x)$, then
$$\sum_{i \in J_1} v_i \nabla f_i(x) + \sum_{l \in H(x)} q_l \nabla g_l(x) = 0 \;\Longrightarrow\; v_i = 0 \;\; \forall\, i \in J_1 \;\text{ and }\; q_l = 0 \;\; \forall\, l \in H(x). \tag{15}$$
Then
$$w \in T_K(x) \iff \langle \nabla f_i(x), w \rangle = 0 \;\; \forall\, i \in J_1 \;\text{ and }\; \langle \nabla g_l(x), w \rangle \leq 0 \;\; \forall\, l \in H(x). \tag{16}$$
Proof. Consider the sets $E_i = \{x \in \mathbb{R}^n \mid f_i(x) = 0\}$, $\forall\, i \in J_1$, and $F_l = \{x \in \mathbb{R}^n \mid g_l(x) \leq 0\}$, $\forall\, l \in J_2$. From the assumptions it follows that $E_i$ and $F_l$ are regular sets at every point and
$$T_{E_i}(x) = \{w \mid \langle \nabla f_i(x), w \rangle = 0\} \;\; \forall\, i \in J_1, \qquad T_{F_l}(x) = \{w \mid \langle \nabla g_l(x), w \rangle \leq 0\} \;\; \forall\, l \in H(x).$$
The normal cones are
$$N_{E_i}(x) = \{w \nabla f_i(x) \mid w \in \mathbb{R}\} \;\; \forall\, i \in J_1, \qquad N_{F_l}(x) = \{w \nabla g_l(x) \mid w \in \mathbb{R}_+\} \;\; \forall\, l \in H(x).$$
So, in light of (15) in the assumptions, from Proposition 6.42 in Rockafellar and Wets (1998) it follows that
$$T_K(x) = \Big( \bigcap_{i \in J_1} T_{E_i}(x) \Big) \cap \Big( \bigcap_{l \in H(x)} T_{F_l}(x) \Big). \tag{17}$$
Hence, we get the assertion.
Proof of Proposition 4.1. Note that the functions defining the constraints in (8) are continuously differentiable. Since the functions in equations v) in (8) are defined as in the mixed $\gamma$ model (2), they can be rewritten as
$$\sum_{B \ni S} \varrho_B - \prod_{i \in S} m_{i,S} = 0 \quad \text{for all } S \in \mathcal{S}_j,\; |S| \geq 2$$
and
$$\sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \Big[ \prod_{i \in S} m_{i,S} \Big] + \sum_{B \ni \{j\}} \varrho_B - 1 = 0.$$
One first computes the gradients with respect to the variables $(\varrho, m)$: $\nabla(\varrho_B)$ is a vector with $1$ as the entry corresponding to the partition $B$ and $0$ elsewhere; $\nabla\big(\sum_{B \in \mathcal{B}} \varrho_B\big)$ is a vector with $1$ as the first $B(n)$ entries (associated to all the partitions $B$) and $0$ elsewhere; $\nabla(m_{i,S})$ is a vector with $1$ as the entry associated to the pair $(i, S)$ and $0$ elsewhere; $\nabla\big(\sum_{S \in \mathcal{S}_i} m_{i,S}\big)$ is a vector with $1$ as the entries associated to the pairs $(i, S)$ for all $S \in \mathcal{S}_i$ and $0$ elsewhere. Then one considers the constraints v) in (8). Let $S$ contain at least two players and consider the entries of the gradient
$$\nabla\Big( \sum_{B \ni S} \varrho_B - \prod_{i \in S} m_{i,S} \Big);$$
among the first $B(n)$ components (those associated to the partitions) one finds $1$ corresponding to the partitions containing $S$ and $0$ elsewhere; for the remaining entries, $\prod_{l \in S \setminus \{i\}} m_{l,S}$ appears in the entries corresponding to the pairs $(i, S)$, for every $i \in S$, while the remaining entries are equal to $0$. Finally, consider the entries of the gradient
$$\nabla\Big( \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \Big[ \prod_{i \in S} m_{i,S} \Big] + \sum_{B \ni \{j\}} \varrho_B - 1 \Big);$$
among the first $B(n)$ components (those associated to the partitions) one finds $1$ corresponding to the partitions containing $\{j\}$ and $0$ elsewhere; for the remaining entries, $\prod_{l \in S \setminus \{i\}} m_{l,S}$ appears in the entries corresponding to the pairs $(i, S)$, for all $i \in S$ and all $S \in \mathcal{S}_j$ with $|S| \geq 2$, while the entries are equal to $0$ elsewhere. So the gradients are
different from $0$ and
$$\begin{aligned}
&\text{i)} \;\; \langle \nabla(\varrho_B), (h, u) \rangle = h_B, \;\; \forall\, B \in \mathcal{B} \\
&\text{ii)} \;\; \Big\langle \nabla\Big( \sum_{B \in \mathcal{B}} \varrho_B \Big), (h, u) \Big\rangle = \sum_{B \in \mathcal{B}} h_B \\
&\text{iii)} \;\; \langle \nabla(m_{i,S}), (h, u) \rangle = u_{i,S}, \;\; \forall\, S \subseteq I \text{ with } S \ni i,\; \forall\, i \in I \\
&\text{iv)} \;\; \Big\langle \nabla\Big( \sum_{S \in \mathcal{S}_i} m_{i,S} \Big), (h, u) \Big\rangle = \sum_{S \in \mathcal{S}_i} u_{i,S}, \;\; \forall\, i \in I \\
&\text{v)} \;\; \Big\langle \nabla\Big( \sum_{B \ni S} \varrho_B - \prod_{i \in S} m_{i,S} \Big), (h, u) \Big\rangle = \sum_{B \ni S} h_B - \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S}, \;\; \forall\, S \in \mathcal{S}_j,\; |S| \geq 2 \\
&\text{vi)} \;\; \Big\langle \nabla\Big( \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \Big[ \prod_{i \in S} m_{i,S} \Big] + \sum_{B \ni \{j\}} \varrho_B - 1 \Big), (h, u) \Big\rangle = \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} + \sum_{B \ni \{j\}} h_B.
\end{aligned} \tag{18}$$
Condition (13) in the assumptions guarantees that
$$\begin{aligned}
&\sum_{B \in \mathcal{B}} \varphi_B \big[ \nabla(\varrho_B) \big] L_B(\varrho, m) + \zeta\, \nabla\Big( \sum_{B \in \mathcal{B}} \varrho_B \Big) + \sum_{i \in I} \sum_{S \ni i} \eta_{i,S} \big[ \nabla(m_{i,S}) \big] L_{i,S}(\varrho, m) \\
&\quad + \sum_{i \in I} \beta_i\, \nabla\Big( \sum_{S \in \mathcal{S}_i} m_{i,S} \Big) + \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \alpha_S\, \nabla\Big( \sum_{B \ni S} \varrho_B - \prod_{i \in S} m_{i,S} \Big) \\
&\quad + \theta\, \nabla\Big( \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \Big[ \prod_{i \in S} m_{i,S} \Big] + \sum_{B \ni \{j\}} \varrho_B - 1 \Big) = 0 \\
&\Longrightarrow \;\; \varphi_B = \zeta = \beta_i = \eta_{i,S} = \alpha_S = \theta = 0 \;\text{ for all } i, S, B,
\end{aligned} \tag{19}$$
which implies that condition (15) in Lemma 4.2 holds true.
Hence $(h, u) \in T_{K_j}(\varrho, m)$ if and only if
$$\begin{aligned}
&\text{i)} \;\; h_B \geq 0 \;\text{ whenever } \varrho_B = 0 \\
&\text{ii)} \;\; \sum_{B \in \mathcal{B}} h_B = 0 \\
&\text{iii)} \;\; u_{i,S} \geq 0 \;\text{ whenever } m_{i,S} = 0 \\
&\text{iv)} \;\; \sum_{S \in \mathcal{S}_i} u_{i,S} = 0, \;\; \forall\, i \in I \\
&\text{v)} \;\; \sum_{B \ni S} h_B - \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} = 0, \;\; \forall\, S \in \mathcal{S}_j,\; |S| \geq 2 \\
&\text{vi)} \;\; \sum_{S \in \mathcal{S}_j,\, |S| \geq 2} \sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} + \sum_{B \ni \{j\}} h_B = 0.
\end{aligned} \tag{20}$$
Hence, the assertion follows.
A paradox in the mixed γ model

Proposition 4.3. Suppose the assumptions of Proposition 4.1 are satisfied and $(\varrho, m) \in K_j$ is such that $\varrho_{B'} = 1$ and $\varrho_B = 0$ for all $B \neq B'$. Then
$$(h, u) \in R_{K_j}(\varrho, m) \;\Longrightarrow\; h_B = 0 \;\; \forall\, B \in \mathcal{B} \text{ such that } \exists\, S \in B \cap \mathcal{S}_j \text{ with } S \notin B' \text{ and } |S| \geq 2.$$
Proof. Let $(h, u) \in R_{K_j}(\varrho, m)$ and consider a coalition $S \in \mathcal{S}_j$ such that $S \notin B'$ and $|S| \geq 2$. Since $m_{i,S} = 0$ for all $i \in S$, we have:
$$\sum_{i \in S} u_{i,S} \prod_{l \in S \setminus \{i\}} m_{l,S} = 0.$$
Therefore, in light of condition v) in (14), it follows that $\sum_{B \ni S} h_B = 0$. However, since $S \notin B'$ and $\varrho_B = 0$ for all $B \neq B'$, from i) in (14) it follows that $h_B \geq 0$ for all $B \ni S$, and so $h_B = 0$ for all $B \ni S$.

Remark 4.4. The "only if" part of Proposition 4.1 does not require the transversality assumption (13). In fact, from Proposition 6.42 in Rockafellar and Wets (1998), the contingent cone to an intersection of sets is always a subset of the intersection of the contingent cones, while the transversality condition is required only for the converse inclusion. Therefore Proposition 4.3 extends also to the case in which condition (13) is not satisfied.

Proposition 4.3 can also be interpreted as follows: whenever at a given time the cs belief is pure, that is, the players are partitioned into coalitions with probability $1$, then any differentiable deviation of a player $j$ from his pure strategy has the only effect of increasing the subjective probability that player $j$ stays alone. In other words, even if two or more players jointly
deviate from a pure coalition in order to form a new one, the feasible cs beliefs evolve in such a way that the probability of this new coalition remains $0$. We illustrate this paradox in the following example.

Example 4.5. Let $I = \{1, 2, \dots, 5\}$ be the set of players and consider $(\varrho, m)$ such that $\varrho_{B'} = 1$, with $B' = \big\{ \{1, 2, \dots, 5\} \big\}$, and $\varrho_B = 0$ for all $B \neq B'$. Of course this implies that, for all $i \in I$, $m_{i,\{1,2,\dots,5\}} = 1$ and $m_{i,S} = 0$ otherwise. Consider the following controls of the players:
$$\begin{cases} u_{1,\{1,3\}} = -u_{1,\{1,2,\dots,5\}} > 0, \quad u_{3,\{1,3\}} = -u_{3,\{1,2,\dots,5\}} > 0 \\ u_{i,\{2,4,5\}} = -u_{i,\{1,2,\dots,5\}} > 0, \;\; \forall\, i = 2, 4, 5 \\ u_{i,S} = 0 \;\text{ otherwise.} \end{cases} \tag{21}$$
Let $h$ be velocities of coalition structure beliefs such that $(h, u)$ belongs to the regulation map. From v) and ii) in (14),
$$h_{B'} = \sum_{i=1}^{5} u_{i,\{1,2,\dots,5\}} < 0, \qquad \sum_{B \in \mathcal{B}} h_B = 0.$$
Consider the evolution of the beliefs of player $1$. In light of Proposition 4.3, $h_B = 0$ for all $B \neq B'$ such that $\exists\, S \in B \cap \mathcal{S}_1$ with $S \notin B'$ and $|S| \geq 2$. In particular, $h_B = 0$ for all $B$ containing $\{1, 3\}$. Therefore, letting $\overline{\mathcal{B}} = \{B \in \mathcal{B} \mid \{1\} \in B\}$,
$$h_{B'} + \sum_{B \in \overline{\mathcal{B}}} h_B = 0 \;\Longrightarrow\; \sum_{B \in \overline{\mathcal{B}}} h_B > 0.$$
This means that even if players’ deviations are somehow in the direction of coalition structure {{1, 3}, {2, 4, 5}}, the beliefs evolve only in the direction of the coalition structure in which player 1 stays alone.
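The mechanics of the example can be checked numerically. The following sketch (the encoding of coalitions as frozensets, the names m, u, coupling_term and the magnitude eps are our own illustrative assumptions, not from the text) evaluates the coupling quantity appearing in condition v) of (14) at the pure profile, under the controls (21):

```python
players = frozenset({1, 2, 3, 4, 5})
grand = frozenset({1, 2, 3, 4, 5})
S13, S245 = frozenset({1, 3}), frozenset({2, 4, 5})

def m(i, S):
    # pure profile: every player puts probability 1 on the grand coalition
    return 1.0 if S == grand else 0.0

eps = 0.1  # common magnitude of the deviations in (21), illustrative
def u(i, S):
    if i in {1, 3} and S == S13:
        return eps
    if i in {2, 4, 5} and S == S245:
        return eps
    if S == grand:
        return -eps
    return 0.0

def coupling_term(S):
    # sum_{i in S} u_{i,S} * prod_{l in S\{i}} m_{l,S}, as in condition v) of (14)
    total = 0.0
    for i in S:
        prod = 1.0
        for l in S - {i}:
            prod *= m(l, S)
        total += u(i, S) * prod
    return total

print(coupling_term(S13))   # 0.0: so sum of h_B over B containing {1,3} must vanish
print(coupling_term(S245))  # 0.0: same for {2,4,5}
print(coupling_term(grand)) # -0.5 = -5*eps: h_{B'} must be negative
```

Since the coupling term vanishes for $\{1,3\}$ and $\{2,4,5\}$ while it is negative for the grand coalition, the sign constraints i) and ii) of (14) force exactly the belief dynamics described above.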
5. Minimal Change of Beliefs
As stated in Section 2, given a profile of feedback controls of the players $u(\varrho, m) = (u_i(\varrho, m))_{i \in I}$, a solution $(\hat\varrho(t), \hat m(t))$ of the differential inclusion (12) provides a revision, in continuous time, of the cs belief conditioned (in terms of the coherency constraints) by the control $u(\varrho, m)$, and therefore by the corresponding evolution of the mixed strategy profile $\hat m(t)$. However, some evolutions of cs beliefs might show inconsistencies. Consider the following example:
Example 5.1. Fix $(\hat\varrho, \hat m)$ and let $\hat u$ be a profile of feedback controls such that $\hat u(\hat\varrho, \hat m) = 0$. Consider velocities of the cs belief $\hat h$ satisfying
$$\begin{cases} \hat h_{\{\{1,2,3\},\{4,5\}\}}(\hat\varrho, \hat m) = \hat h_{\{\{1\},\{2\},\{3\},\{4\},\{5\}\}}(\hat\varrho, \hat m) = 1 \\ \hat h_{\{\{1,2,3\},\{4\},\{5\}\}}(\hat\varrho, \hat m) = \hat h_{\{\{1\},\{2\},\{3\},\{4,5\}\}}(\hat\varrho, \hat m) = -1 \\ \hat h_B(\hat\varrho, \hat m) = 0 \;\text{ otherwise.} \end{cases}$$
It follows that for every coalition $S \in \mathcal{S}_j$, conditions v) and vi) in (14) are satisfied, so that $\hat h$ belongs to $A(\hat\varrho, \hat m, \hat u(\hat\varrho, \hat m))$. However, notice that in this case the velocities lead to a change of the coalition structure belief even though the players are not changing their strategies. From (14), it is easy to check that $0 \in A(\hat\varrho, \hat m, \hat u(\hat\varrho, \hat m))$. Hence, if we follow the idea of minimal change belief revision (see, for instance, Schulte (2002) or Perea (2007)), which states that the new belief should be as similar as possible to the previous one, we expect that whenever the mixed strategy profile reaches $\hat m$ with velocity $0$, the corresponding cs belief reaches $\hat\varrho$ and remains there in equilibrium. The previous example shows that, in order to capture the idea of minimal change belief revision, we can restrict the velocities to those of minimal norm and then consider the corresponding solutions. More precisely:
Definition 5.2. An evolution $\varrho(t)$ is a minimal change cs belief revision of system (12) for a given continuous feedback control profile $(\varrho, m) \to \tilde u(\varrho, m)$ if there exists an evolution of the strategy profile $m(t)$ such that $(\varrho(t), m(t))$ is a solution of the following system
$$\begin{cases} \varrho'(t) = \tilde h\big(\varrho(t), m(t), \tilde u(\varrho(t), m(t))\big) \\ m' = \tilde u(\varrho(t), m(t)) \end{cases} \tag{22}$$
for a function $\tilde h(\varrho, m, \tilde u(\varrho, m))$ defined by
$$\|\tilde h(\varrho, m, \tilde u(\varrho, m))\| = \min_{h \in A(\varrho, m, \tilde u(\varrho, m))} \|h\|.$$
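If the admissible set $A(\varrho, m, \tilde u(\varrho, m))$ is approximated by a finite sample of candidate velocities, the minimal change selection of Definition 5.2 reduces to picking the candidate of smallest norm. A minimal sketch under that assumption (the helper name minimal_change_velocity is hypothetical):

```python
import numpy as np

def minimal_change_velocity(candidates):
    """Return the candidate h of smallest Euclidean norm (minimal change selection)."""
    candidates = [np.asarray(h, dtype=float) for h in candidates]
    return min(candidates, key=np.linalg.norm)

# toy usage: among admissible belief velocities, the zero vector is selected,
# matching the discussion of Example 5.1 (0 in A implies the belief stays put)
A_sample = [np.array([1.0, 1.0, -1.0, -1.0]), np.zeros(4)]
print(minimal_change_velocity(A_sample))  # [0. 0. 0. 0.]
```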
Of course, there is no a priori reason why system (12) should admit a minimal change cs belief revision. We give some existence results below. Note also that the concept of minimal change cs belief revision corresponds to a slight modification of the concept of heavy solution to a differential inclusion, which has been investigated in Aubin (1991, 1997) and Aubin and Saint-Pierre (2006).

Lemma 5.3. If the assumptions of Theorem 3.1 are satisfied, $A(\varrho, m, \tilde u(\varrho, m))$ is a lower semicontinuous set-valued map with nonempty and compact values for all $(\varrho, m) \in K_j$, and the feedback control $\tilde u(\varrho, m)$ is continuous and bounded by linear growth on $K_j$, then every point of $K_j$ is the starting point of a minimal change cs belief revision.
Proof. Since $A(\varrho, m, \tilde u(\varrho, m))$ has nonempty and compact values for all $(\varrho, m)$, the function
$$(\varrho, m) \to \pi(\varrho, m) = \min_{h \in A(\varrho, m, \tilde u(\varrho, m))} \|h\|$$
is well defined; moreover, $A$ is a lower semicontinuous set-valued map, so the assumptions of the Marginal Function Theorem (see, for instance, Theorem 1.4.16 in Aubin and Frankowska (1990)) hold true, and $\pi$ is an upper semicontinuous function, that is,
$$\limsup_{(\varrho, m) \to (\bar\varrho, \bar m)} \pi(\varrho, m) \leq \pi(\bar\varrho, \bar m) \quad \forall\, (\bar\varrho, \bar m) \in K_j.$$
The set-valued map $(\varrho, m) \rightsquigarrow B(0, \pi(\varrho, m))$ has closed graph; in fact, consider a sequence $\{(\varrho_\nu, m_\nu)\}_{\nu \in \mathbb{N}}$ converging to $(\varrho, m)$ and a sequence $\{h_\nu\}_{\nu \in \mathbb{N}}$ converging to $h$ with $h_\nu \in B(0, \pi(\varrho_\nu, m_\nu))$ for all $\nu$; then
$$\|h_\nu\| \leq \pi(\varrho_\nu, m_\nu) \;\Longrightarrow\; \|h\| = \limsup_{\nu \to \infty} \|h_\nu\| \leq \limsup_{\nu \to \infty} \pi(\varrho_\nu, m_\nu) \leq \pi(\varrho, m),$$
therefore $h \in B(0, \pi(\varrho, m))$ and $(\varrho, m) \rightsquigarrow B(0, \pi(\varrho, m))$ has closed graph. From the assumptions, the set-valued map $H$ has closed graph, so the set-valued map $(\varrho, m) \rightsquigarrow W(\varrho, m)$ defined by
$$W(\varrho, m) = B(0, \pi(\varrho, m)) \cap H(\varrho, m) \quad \forall\, (\varrho, m) \in K_j$$
has closed graph. Moreover, $W(\varrho, m)$ is the intersection of compact and convex sets and so it is compact and convex for every $(\varrho, m)$. Finally, from $W(\varrho, m) \subseteq H(\varrho, m)$ it follows that $W$ is bounded by linear growth. Therefore, the system
$$\begin{cases} \varrho'(t) = \tilde h(t) \\ m'(t) = \tilde u(\varrho(t), m(t)) \\ \tilde h(t) \in W(\varrho(t), m(t)) \end{cases} \tag{23}$$
satisfies the assumptions of Theorem 3.1, hence $K_j$ is viable under this auxiliary system. Therefore, every point $(\varrho, m) \in K_j$ is the starting point of at least one solution $(\varrho(t), m(t))$ of system (22) which remains in $K_j$, that is, an evolution $(\varrho(t), m(t))$ such that $m(\cdot)$ is governed by $\tilde u(\varrho(t), m(t))$ and $\varrho(\cdot)$ by a control $\tilde h(t)$ which satisfies
$$\|\tilde h(t)\| \leq \min_{h \in A(\varrho(t), m(t), \tilde u(\varrho(t), m(t)))} \|h\| = \pi(\varrho(t), m(t)) \tag{24}$$
for almost all $t \geq 0$. Hence, there exists a minimal change cs belief revision starting from every point of $K_j$.
Proposition 5.4. Suppose the assumptions of Theorem 3.1 are satisfied and the set-valued maps $H$, $U_i$, $i = 1, \dots, n$, are lower semicontinuous on $K_j$ and satisfy the following: for every $(\varrho, m) \in K_j$ there exist $\eta > 0$, $\theta > 0$ such that
$$B(0, \eta) \subset \mathcal{M}(\hat\varrho, \hat m) - T_{K_j}(\hat\varrho, \hat m) \quad \forall\, (\hat\varrho, \hat m) \in B\big((\varrho, m), \theta\big). \tag{25}$$
Then, given a feedback control $\tilde u(\varrho, m)$ continuous and bounded by linear growth on $K_j$ such that $A(\cdot, \cdot, \tilde u(\cdot, \cdot))$ has nonempty values, for every initial condition in $K_j$ there exists a minimal change cs belief revision.

Proof. Since the functions $\lambda_{j,S}$ are defined by system (2), the set $K_j$ defined by (8) is regular (as the functions that define the constraints are differentiable or concave); then, by definition, it follows that $(\varrho, m) \rightsquigarrow T_{K_j}(\varrho, m)$ is a lower semicontinuous set-valued map. We claim that the regulation map $R_{K_j}$ is lower semicontinuous on $K_j$. In fact, fix $(\varrho, m) \in K_j$, $z = (h, u) \in R_{K_j}(\varrho, m)$ and a sequence $(\varrho_\nu, m_\nu)$ in $K_j$ converging to $(\varrho, m)$. Since the set-valued maps $T_{K_j}$ and $\mathcal{M}$ are lower semicontinuous at $(\varrho, m)$, there
exist sequences $x_\nu \to z$ and $y_\nu \to z$ such that $x_\nu \in \mathcal{M}(\varrho_\nu, m_\nu)$ and $y_\nu \in T_{K_j}(\varrho_\nu, m_\nu)$ for all $\nu \in \mathbb{N}$. From the assumptions there exist $\eta > 0$ and $\bar\nu$ such that
$$B(0, \eta) \subset \mathcal{M}(\varrho_\nu, m_\nu) - T_{K_j}(\varrho_\nu, m_\nu) \quad \forall\, \nu \geq \bar\nu.$$
Set $\|x_\nu - y_\nu\| = \varepsilon_\nu$ and $\alpha_\nu = \frac{\eta}{\eta + \varepsilon_\nu} \in \,]0, 1[$; it follows that $\alpha_\nu \varepsilon_\nu = (1 - \alpha_\nu)\eta$ and then
$$\alpha_\nu (x_\nu - y_\nu) \in B(0, \alpha_\nu \varepsilon_\nu) = B(0, (1 - \alpha_\nu)\eta) \subset (1 - \alpha_\nu)\big( \mathcal{M}(\varrho_\nu, m_\nu) - T_{K_j}(\varrho_\nu, m_\nu) \big).$$
Thus
$$\alpha_\nu (x_\nu - y_\nu) = (1 - \alpha_\nu)(\varphi_\nu - \psi_\nu) \quad \text{with } \varphi_\nu \in \mathcal{M}(\varrho_\nu, m_\nu),\; \psi_\nu \in T_{K_j}(\varrho_\nu, m_\nu).$$
Therefore, $\alpha_\nu (x_\nu - y_\nu) = (1 - \alpha_\nu)(\varphi_\nu - \psi_\nu) \iff \alpha_\nu x_\nu + (1 - \alpha_\nu)\varphi_\nu = \alpha_\nu y_\nu + (1 - \alpha_\nu)\psi_\nu$. Since $\mathcal{M}(\varrho_\nu, m_\nu)$ and $T_{K_j}(\varrho_\nu, m_\nu)$ are convex sets,
$$\alpha_\nu x_\nu + (1 - \alpha_\nu)\varphi_\nu \in \mathcal{M}(\varrho_\nu, m_\nu) \quad \text{and} \quad \alpha_\nu y_\nu + (1 - \alpha_\nu)\psi_\nu \in T_{K_j}(\varrho_\nu, m_\nu).$$
So $\xi_\nu = \alpha_\nu x_\nu + (1 - \alpha_\nu)\varphi_\nu \in \mathcal{M}(\varrho_\nu, m_\nu) \cap T_{K_j}(\varrho_\nu, m_\nu) = R_{K_j}(\varrho_\nu, m_\nu)$. Moreover, $\alpha_\nu \to 1$ as $\nu \to \infty$ and then $\xi_\nu \to z$ as $\nu \to \infty$, which means that $R_{K_j}$ is lower semicontinuous at $(\varrho, m)$. It follows that $A$ is lower semicontinuous. Since $\tilde u$ is continuous on $K_j$, $A(\cdot, \cdot, \tilde u(\cdot, \cdot))$ is lower semicontinuous on $K_j$; in light of (20), the images of $A(\cdot, \cdot, \tilde u(\cdot, \cdot))$ are convex and compact and, from the assumptions, nonempty. Then the assumptions of Lemma 5.3 are satisfied and we get the assertion.
6. Conclusion
This paper proposes an evolutionary-game-style model for the dynamics of coalition structure beliefs when players announce the coalition they wish to join by using a mixed strategy. The model extends the static γ model of coalition formation introduced in Hart and Kurz (1983) to situations in which each player has vague expectations about the choices of his opponents concerning the coalitions in which he is not involved, and about the formation rule of these coalitions as a consequence of private communication among the members of each coalition. In particular, an evolutionary game is considered in which strategies and coalition structure beliefs are state variables and players act on the velocities of their strategies. Having fixed a generic player $j$, we state the condition that his subjective coalition structure beliefs be consistent (in terms of de Finetti's coherency) with the mixed strategy choices of the players at every instant as a viability constraint, and we then give characterizations of continuous evolutions in both the players' strategies and the corresponding coherent beliefs by applying the main viability theorem. Finally, we relate the evolution of the beliefs to probabilistic belief revision; in particular, we propose to reduce the set of viable evolutions of beliefs by selecting the changes with minimal norm, and we provide existence results.
As a final remark, in this paper we considered a general class of set-valued maps of feasible controls of the players (satisfying classical assumptions). Further research might focus on the problem of considering particular controls, for instance those related to some myopic optimization criterion (such as best-reply dynamics); however, this approach is not straightforward, since it requires the definition of suitable preference relations in the case of ambiguous probabilities arising from the multiplicity of coherent beliefs for a given strategy profile.
References

[1] Aubin J.-P. (1974): Cœur et Valeur des Jeux Flous à Paiements Latéraux, C.R. Acad. Sci. Paris, 279 A, 891–894.
[2] Aubin J.-P. (1981): Locally Lipschitz Cooperative Games, Journal of Mathematical Economics, 8, 241–262.
[3] Aubin J.-P. (1991): Viability Theory, Birkhäuser.
[4] Aubin J.-P. (1997): Dynamic Economic Theory: A Viability Approach, Springer.
[5] Aubin J.-P. (2002): Dynamic Core of Fuzzy Cooperative Games, Annals of Dynamic Games, 7.
[6] Aubin J.-P. (2003): Regulation of the Evolution of the Architecture of a Network by Connectionist Tensors Operating on Coalitions of Actors, Journal of Evolutionary Economics, 13, 95–124.
[7] Aubin J.-P. and H. Frankowska (1990): Set Valued Analysis, Birkhäuser.
[8] Aubin J.-P. and P. Saint-Pierre (2006): Guaranteed Inertia Functions in Dynamical Games, International Game Theory Review, 8(2), 185–218.
[9] Clarke F.H., Ledyaev Y.S., Stern R.J. and P.R. Wolenski (1998): Nonsmooth Analysis and Control Theory, Springer.
[10] de Finetti B. (1931): Sul Significato Soggettivo della Probabilità, Fundamenta Mathematicae, T. XVIII, 298–329.
[11] De Marco G. and M. Romaniello (2006): Dynamics of Mixed Coalitions under Social Cohesion Constraints, Mathematical Population Studies, 13(1), 39–62.
[12] Ellsberg D. (1961): Risk, Ambiguity, and the Savage Axioms, Quarterly Journal of Economics, 75(4), 643–669.
[13] Filar J.A. and L.A. Petrosjan (2000): Dynamic Cooperative Games, International Game Theory Review, 2, 47–65.
[14] Greenberg J. (2002): Coalition Structures. Chapter 37 in Handbook of Game Theory with Economic Applications, Volume 2, Elsevier Science Publishers (North-Holland), Amsterdam.
[15] Hart S. and M. Kurz (1983): Endogenous Coalition Formation, Econometrica, 51(4), 1047–1064.
[16] Konishi H. and D. Ray (2003): Coalition Formation as a Dynamic Process, Journal of Economic Theory, 110, 1–41.
[17] Moreno D. and J. Wooders (1996): Coalition-Proof Equilibrium, Games and Economic Behavior, 17, 80–112.
[18] Myerson R. (1978): Graphs and Cooperation in Games, Mathematics of Operations Research, 2, 225–229.
[19] von Neumann J. and O. Morgenstern (1944): Theory of Games and Economic Behavior, Princeton.
[20] Owen G. (1977): Values of Games with a priori Unions, in Essays in Mathematical Economics and Game Theory, R. Henn and O. Moeschlin (eds.), New York: Springer-Verlag, pp. 76–88.
[21] Perea A. (2007): A Model of Minimal Probabilistic Belief Revision, Theory and Decision, to appear (online version available).
[22] Rockafellar R.T. and R.J-B. Wets (1998): Variational Analysis, Springer.
[23] Schulte O. (2002): Minimal Belief Change, Pareto-Optimality and Logical Consequences, Economic Theory, 19, 105–144.
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 259-289
ISBN 978-1-60876-500-3
© 2011 Nova Science Publishers, Inc.
Chapter 11
Relaxed Stability Conditions for Linear Time-Varying Systems with Applications to the Robust Stabilization Problem

Leopoldo Jetto and Valentina Orsini∗
Dipartimento di Ingegneria Informatica, Gestionale e dell'Automazione, Università Politecnica delle Marche, Ancona, Italy
Abstract

The twofold purpose of this chapter is to state relaxed stability conditions for linear time-varying systems with possible parametric uncertainties and then to consider their application to robust stabilization problems. Sufficient conditions for exponential stability are derived with reference to linear time-varying (LTV) systems of the form $Dx(t) = A(t)x(t)$, where $D$ denotes the time-derivative ($t \in \mathbb{R}_+$) or forward shift operator ($t \in \mathbb{Z}_+$) and $A(\cdot)$ is uniformly bounded. The proposed approach derives and uses the notion of the perturbed frozen time (PFT) form that can be associated to any LTV system. Exploiting the Bellman–Gronwall lemma, relaxed stability conditions are then stated in terms of "average" parameter variations. This leads to the notion of a "small average variation plant". Salient features of the approach are: pointwise stability of $A(\cdot)$ is not required, $\|\dot A(\cdot)\|$ may be unbounded, and the stability conditions also apply to uncertain systems. As shown in the second part of this chapter, the developed stability analysis represents a powerful tool that can also be employed in robust stabilization problems. The main advantage of the proposed synthesis method is that it can be applied without assuming an accessible state vector or a particular parametric dependence. The approach is illustrated by numerical examples.
1. Introduction

The first part of this chapter deals with the stability analysis of linear time-varying (LTV) systems possibly affected by parametric uncertainties. The second part exploits the developed analysis tools to define a synthesis method to be employed in the robust stabilization problem. Part of the results presented here can be found in [21], [22].

∗E-mail address: [email protected]
The stability analysis is here accomplished with reference to linear time-varying (LTV) systems described by
$$Dx(t) = A(t)x(t), \quad x(0) = x_0, \; t \geq 0, \tag{1}$$
where $D$ denotes the time-derivative ($t \in \mathbb{R}_+$) or forward shift operator ($t \in \mathbb{Z}_+$) and $A(\cdot)$ is uniformly bounded. Many authors have investigated this problem using the frozen-time approach (FTA), whose main advantage is the possibility of exploiting the large set of tools developed for linear time-invariant (LTI) systems. The first papers dealing with this topic show that pointwise stability implies stability of the LTV system provided that $\sup_{t \geq 0} \|\dot A(t)\| < \delta_1$ [12], [33], or $\|A(t) - A(t-1)\| < \delta_2$ [13], for sufficiently small $\delta_1$ and $\delta_2$. For continuous-time systems, the requirement of pointwise stability is also used in [19]. This latter reference extends previous results [8], [24], [25] to derive explicit upper bounds on different measures of parameter variations guaranteeing stability. Qualitative conditions on the allowed variation rate of parameters guaranteeing stability are stated in [11] and [38], using the notion of a slowly time-varying operator. Under a slightly weaker assumption on pointwise stability, the FTA has also been used in [1] to derive sufficient stability conditions for both continuous and discrete-time LTV systems. Pointwise stability has also been exploited recently in [27], where the stability analysis is performed by solving successive Lyapunov equations defined on a time grid. In [35], sufficient stability conditions are derived requiring only that the eigenvalues of the frozen continuous-time $A(t)$ be stable "on average" for $t \geq 0$.

The approach developed in this chapter is based on the notion of the perturbed frozen time (PFT) form of an LTV system and uses the continuous and discrete-time versions of the Bellman–Gronwall lemma [14]. The system is not required to be pointwise slowly varying, and the dynamical operator $A(\cdot)$ is not required to take values over small compact subsets of $\mathbb{R}^{n \times n}$. The relaxed sufficient stability conditions derived here concern an "average" of the time variations of $\|A(\cdot)\|$ and lead to the notion of a "small average variation plant". Given the existing literature on the topic, the stability analysis carried out in this chapter has the following salient features: 1) both the continuous and the discrete-time case are considered, 2) pointwise stability is not required, 3) the stability conditions are easy to check, 4) the method also applies when little a priori information on $A(\cdot)$ is available, 5) the more refined the a priori information, the less restrictive the stability conditions, 6) the robust stabilization problem can be easily dealt with.

The chapter is organized as follows. Some preliminaries are stated in Section 2, the continuous and discrete-time stability conditions are stated in Sections 3.1 and 3.2 respectively, and some numerical examples concerning the stability conditions are reported in Section 4. The robust stabilization problem and the related numerical examples are reported in Sections 5 and 6 respectively.
2. Preliminaries

2.1. The Perturbed Frozen Time Form of a LTV Plant

Consider the following linear, unforced, time-varying system $\Sigma$:
$$Dx(t) = A(t)x(t), \quad x(0) = x_0, \; t \geq 0, \tag{2}$$
where $D$ denotes the derivative ($t \in \mathbb{R}_+$) or forward difference operator ($t \in \mathbb{Z}_+$). The frozen-time version of $\Sigma$ at time $\bar t$ is the time-invariant system corresponding to $A(\bar t)$. Let $\{t_\ell\}$, $\ell = 0, 1, \dots$, be a sequence of non-negative integers with $t_0 = 0$; let $\chi_{[t_\ell, t_{\ell+1})}(t)$ be the characteristic function of $[t_\ell, t_{\ell+1})$, namely $\chi_{[t_\ell, t_{\ell+1})}(t) = 1$, $\forall\, t \in [t_\ell, t_{\ell+1})$; $\chi_{[t_\ell, t_{\ell+1})}(t) = 0$, $\forall\, t \notin [t_\ell, t_{\ell+1})$. This allows us to define the following matrices:
$$A(t_\ell) \triangleq A_\ell, \qquad (A(t) - A_\ell)\,\chi_{[t_\ell, t_{\ell+1})}(t) \triangleq \Delta A_\ell(t), \tag{3}$$
$$\sum_{\ell=0}^{\infty} A_\ell\, \chi_{[t_\ell, t_{\ell+1})}(t) \triangleq A'(t), \qquad \sum_{\ell=0}^{\infty} \Delta A_\ell(t) \triangleq \Delta A'(t). \tag{4}$$
Using the above notation, it is easily seen that (2) can be rewritten in the following equivalent perturbed frozen-time (PFT) form:
$$Dx(t) = (A'(t) + \Delta A'(t))x(t), \quad x(0) = x_0, \; t \geq 0. \tag{5}$$
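The decomposition (3)–(5) is straightforward to implement. A minimal sketch, assuming $A(t)$ is available as a function and the frozen instants $\{t_\ell\}$ are given (the helper pft_terms and the toy plant are our own illustrative assumptions):

```python
import numpy as np

def pft_terms(A, t_seq):
    """Return functions A'(t) and dA'(t) built from the frozen matrices A(t_l)."""
    t_seq = sorted(t_seq)
    frozen = [A(tl) for tl in t_seq]

    def index(t):
        # largest l with t_l <= t (intervals I_l = [t_l, t_{l+1}))
        return max(l for l, tl in enumerate(t_seq) if tl <= t)

    def A_prime(t):
        return frozen[index(t)]

    def dA_prime(t):
        return A(t) - frozen[index(t)]

    return A_prime, dA_prime

# toy usage with a scalar-parameter plant (values illustrative)
A = lambda t: np.array([[-2.0, 0.15], [-0.1, -1.0 + 0.5 * np.sin(t)]])
Ap, dAp = pft_terms(A, t_seq=[0.0, np.pi, 2 * np.pi])
t = 1.3
assert np.allclose(Ap(t) + dAp(t), A(t))  # (5): A(t) = A'(t) + dA'(t) on each I_l
```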
Equation (5) and the Bellman–Gronwall Lemma will be used in Section 3 to state the stability conditions. Let $\Phi(\cdot, \cdot)$ denote the state transition matrix of $\Sigma$. System $\Sigma$ is said to be uniformly asymptotically stable if $\forall\, \omega > 0$ there exists a positive $t(\omega) < \infty$ such that
$$\|\Phi(t, \bar t)\| \leq \omega, \quad \forall\, t \geq \bar t + t(\omega),\; \forall\, \bar t \geq 0. \tag{6}$$
System $\Sigma$ is said to be exponentially $\lambda$-stable if there exist constants $c > 0$ and $\lambda < 0$ ($t \in \mathbb{R}_+$), or $0 \leq \lambda < 1$ ($t \in \mathbb{Z}_+$), such that
$$\|\Phi(t, \bar t)\| \leq c\, e^{\lambda (t - \bar t)}, \quad \forall\, \bar t,\; \forall\, t \geq \bar t,\; t \in \mathbb{R}_+, \tag{7}$$
$$\|\Phi(t, \bar t)\| \leq c\, \lambda^{(t - \bar t)}, \quad \forall\, \bar t,\; \forall\, t \geq \bar t,\; t \in \mathbb{Z}_+. \tag{8}$$
The uniform asymptotic stability of $\Sigma$ is equivalent to its exponential stability [37].
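Definitions (6)–(8) can be illustrated numerically by approximating the transition matrix with a product of Euler steps. In the following sketch the step size and the test plant are illustrative assumptions, not taken from the chapter; the decay of $\|\Phi(t, 0)\|$ in the spectral norm is roughly $e^{-t}$ for this plant:

```python
import numpy as np

def transition_matrix(A, t_final, dt=1e-3):
    """Approximate Phi(t_final, 0) for x' = A(t) x by a product of Euler steps."""
    n = A(0.0).shape[0]
    Phi = np.eye(n)
    t = 0.0
    while t < t_final:
        Phi = (np.eye(n) + dt * A(t)) @ Phi
        t += dt
    return Phi

A = lambda t: np.array([[-1.0, 1.1 * np.sin(t)], [-1.1 * np.sin(t), -1.0]])
for tf in (1.0, 3.0, 6.0):
    print(tf, np.linalg.norm(transition_matrix(A, tf), 2))
```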
2.2. The Prior Knowledge on the Plant
The sufficient stability conditions for $\Sigma$ stated here are derived assuming that $A(\cdot)$ satisfies the following mild assumptions.

A1: $\|A(\cdot)\| \leq m_a < \infty$;

A2: there exists an infinite known sequence of time instants $\{t_\ell\}$, $\ell \in \mathbb{Z}_+$, at which
$$\operatorname{Re}[\lambda_i\{A_\ell\}] \leq \alpha_\ell < 0, \quad i = 1, \dots, n, \; t \in \mathbb{R}_+, \tag{9}$$
$$|\lambda_i\{A_\ell\}| \leq \alpha_\ell < 1, \quad i = 1, \dots, n, \; t \in \mathbb{Z}_+, \tag{10}$$
so that the frozen-time plant at time $t_\ell$ is exponentially $\alpha_\ell$-stable, namely
$$\|\exp(A_\ell t)\| \leq m_\ell \exp(\alpha_\ell t), \quad t \in \mathbb{R}_+, \tag{11}$$
$$\|A_\ell^{\,t}\| \leq m_\ell\, \alpha_\ell^{\,t}, \quad t \in \mathbb{Z}_+, \tag{12}$$
for some scalar $m_\ell \geq 1$ depending on $A_\ell$;
A3: the length of each interval $I_\ell = [t_\ell, t_{\ell+1})$ is assumed to take values in the finite set $\{T_i,\; i = 1, \dots, l\}$, with $T \triangleq \max_i T_i < \infty$;

A4: for each fixed $i = 1, \dots, l$, let $\{I_\ell^i\} \subseteq \{I_\ell\}$ be the subsequence of all the intervals $I_\ell^i$ whose length has the fixed value $T_i$, $i = 1, \dots, l$. Then, for each fixed $i$, $n_i$ different possible sets of conditions $S_k^i$, $k = 1, \dots, n_i$, on the shape of the function $\|\Delta A_\ell(\tau)\|$, $\tau \in I_\ell^i$, $I_\ell^i \in \{I_\ell^i\}$, are given. Each set $S_k^i$ is defined in the following way: there exist positive scalars $\rho_{k,j}^{i+}$, $j = 1, \dots, p_k^i$, and $\rho_{k,j}^{i-}$, $j = 1, \dots, q_k^i$, with $\rho_{k,1}^{i+} < \rho_{k,2}^{i+} < \cdots < \rho_{k,p_k^i-1}^{i+} < \rho_{k,p_k^i}^{i+} < \rho_{k,1}^{i-} < \rho_{k,2}^{i-} < \cdots < \rho_{k,q_k^i-1}^{i-} < \rho_{k,q_k^i}^{i-}$, such that
$$0 \leq \|\Delta A_\ell(\tau)\| \leq \rho_{k,1}^{i+}, \quad \tau \in I_{k,1}^{i+} \subset I_\ell^i, \tag{13}$$
$$\rho_{k,j}^{i+} < \|\Delta A_\ell(\tau)\| \leq \rho_{k,j+1}^{i+}, \quad \tau \in I_{k,j+1}^{i+} \subset I_\ell^i, \;\; j = 1, \dots, p_k^i - 1, \tag{14}$$
$$\rho_{k,p_k^i}^{i+} < \|\Delta A_\ell(\tau)\| \leq \rho_{k,1}^{i-}, \quad \tau \in I_{k,1}^{i-} \subset I_\ell^i, \tag{15}$$
$$\rho_{k,j}^{i-} < \|\Delta A_\ell(\tau)\| \leq \rho_{k,j+1}^{i-}, \quad \tau \in I_{k,j+1}^{i-} \subset I_\ell^i, \;\; j = 1, \dots, q_k^i - 1, \tag{16}$$
with $\bigcup_j I_{k,j}^{i+} \triangleq I_k^{i+}$, $\bigcup_j I_{k,j}^{i-} \triangleq I_k^{i-}$, $I_k^{i+} \cup I_k^{i-} = I_\ell^i$.
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
263
˙ pointwise bound on kA(·)k, or alternatively, a bound on its time average. The first bound leads to conservative results, the second one reduces conservatism but leads to conditions requiring an analytical description of A(·), and hence a perfect knowledge of the plant. The prior knowledge on the plant assumed here is general enough to encompasses a more general class of systems with respect to the other approaches. Clearly it includes ˙ perfectly known systems without the usual constraints on kA(·)k, t ∈ R+ or on kA(t) − A(t − 1)k, t ∈ Z+ . It also refers to control situations involving systems with mode-switch dynamics, each mode is time-varying and the different operating conditions are such that an accurate knowledge on the plant model is available only from time to time. It is intuitive that under A1 and A2, the introduction of A4 could guarantee the uniform asymptotic stability of Σ. This is precisely stated in the next section, exploiting assumptions A1-A4 and the PFT form (5).
3. 3.1.
Stability Conditions Continuous Time Systems
For t ∈ R+ , the PFT form of Σ is x(t) ˙ = (A′ (t) + ∆A′ (t))x(t),
x(0) = x0 , t ≥ 0.
(17) △
Equation (17) is now exploited to express the state response of Σ inside each interval Iℓ = [tℓ ,tℓ+1 ), ℓ ∈ Z+ , where each tℓ is an element of the sequence {tℓ } of time instants at which the frozen-time plant is exponentially αℓ -stable. From (17) one has x(t) = exp(Aℓ (t − tℓ ))x(tℓ ) +
Zt tℓ
exp(Aℓ (t − τ ))∆Aℓ (τ )x(τ ) d τ ,
t ∈ Iℓ ∈ {Iℓ },
(18)
△
△
moreover (11) implies k exp(Aℓt)k ≤ m¯ exp(α t), t ∈ R+ , where m¯ = max mℓ , α = max αℓ < ℓ
ℓ
0. Taking the norm of both sides of (18) and multiplying them by exp(−α t), one obtains kx(t)k exp(−α t) ≤ m¯ exp(−α tℓ )kx(tℓ )k +
Zt tℓ
m¯ exp(−ατ )k∆Aℓ (τ )k kx(τ )k d τ ,
and applying the continuous-time Bellman–Gronwall inequality kx(t)k exp(−α t) ≤ m¯ exp(−α tℓ )kx(tℓ )k · exp
Hence, ∀t ∈ Iℓ ∈ {Iℓ }, kx(t)k is decreasing as
kx(t)k ≤ mkx(t ¯ ¯ ℓ )k exp α (t − tℓ ) + m
Zt tℓ
Zt tℓ
mk∆A ¯ ℓ (τ )k d τ .
k∆Aℓ (τ )k d τ .
(19)
(20)
(21)
264
Leopoldo Jetto and Valentina Orsini
As (21) holds $\forall\, \ell \in \mathbb{Z}_+$, it is easy to see that for any two arbitrary $t, \bar t \in \mathbb{R}_+$ with $t \geq \bar t$ one has
$$\|x(t)\| \leq \bar m \exp\Big( \alpha (t - t_{\bar\ell}) + \bar m \int_{t_{\bar\ell}}^{t} \|\Delta A_{\bar\ell}(\tau)\|\, d\tau \Big) \cdot \prod_{\ell=\ell_0}^{\bar\ell - 1} \bar m \exp\Big( \alpha (t_{\ell+1} - t_\ell) + \bar m \int_{t_\ell}^{t_{\ell+1}} \|\Delta A_\ell(\tau)\|\, d\tau \Big)\, \tilde m\, \|x(\bar t)\|, \tag{22}$$
where: $\bar\ell = \ell_0$ and $t_{\bar\ell} = t_{\ell_0}$ if $t$ and $\bar t$ belong to the same $I_\ell$; otherwise $t_{\bar\ell}$ is the maximum $t_\ell \in \{t_\ell\}$ such that $t_{\bar\ell} \leq t$ and $t_{\ell_0}$ is the minimum $t_\ell \in \{t_\ell\}$ such that $t_{\ell_0} \geq \bar t$; the empty product is taken as $1$; and $\tilde m \triangleq \sup_{\bar t \geq 0,\, \bar t \leq t \leq \bar t + T} \|\Phi(t, \bar t)\|$. Assumptions A1 and A3 imply the existence of a positive scalar $\bar m_s < \infty$ such that
$$\bar m \exp\Big( \alpha (t - t_{\bar\ell}) + \bar m \int_{t_{\bar\ell}}^{t} \|\Delta A_{\bar\ell}(\tau)\|\, d\tau \Big)\, \tilde m \leq \bar m_s, \tag{23}$$
hence the uniform asymptotic stability of $\Sigma$ follows if
$$\bar m \exp\Big( \alpha (t_{\ell+1} - t_\ell) + \bar m \int_{t_\ell}^{t_{\ell+1}} \|\Delta A_\ell(\tau)\|\, d\tau \Big) \leq \delta < 1, \quad \forall\, \ell \in \mathbb{Z}_+. \tag{24}$$
In fact, (22)–(24) imply
$$\|x(t)\| \leq \bar m_s\, \delta^{(\bar\ell - \ell_0)}\, \|x(\bar t)\|, \quad \forall\, \bar t,\; \forall\, t \geq \bar t,\; \forall\, x(\bar t), \tag{25}$$
with $\bar\ell = \ell_0$ if $t$ and $\bar t$ belong to the same $I_\ell$. So, by the arbitrariness of $\|x(\bar t)\|$, inequality (25) implies that $\forall\, \omega > 0$ and for each fixed $\bar t \geq 0$ (to which a fixed $\ell_0$ corresponds), a sufficiently large $\ell(\omega) \triangleq \bar\ell$ can be found such that (6) holds for $t(\omega) \geq (\ell(\omega) - \ell_0)T$. Hence $\Sigma$ is uniformly asymptotically stable. This in turn is equivalent to the exponential stability of $\Sigma$ according to (7), where, applying e.g. the procedure given in [37], it is found that $\forall\, \omega < 1$ one has $c = c(\omega) \triangleq \bar m_s\, \omega^{-1}$ and $\lambda = \lambda(\omega) \triangleq \ln(\omega)/t(\omega)$, so that $\lambda < 0$. Condition (24) holds if
$$\alpha (t_{\ell+1} - t_\ell) + \bar m \int_{t_\ell}^{t_{\ell+1}} \|\Delta A_\ell(\tau)\|\, d\tau < \ln\frac{1}{\bar m} < 0, \quad \forall\, \ell \in \mathbb{Z}_+, \tag{26}$$
so, considering that $\alpha < 0$, the exponential stability of $\Sigma$ follows if the quantity $\int_{t_\ell}^{t_{\ell+1}} \|\Delta A_\ell(\tau)\|\, d\tau$ is not too large. Exploiting (13)–(16), the following theorem states easy-to-check sufficient conditions on the parameters $\rho_{k,j}^{i+}$, $j = 1, \dots, p_k^i$, and $\rho_{k,j}^{i-}$, $j = 1, \dots, q_k^i$, for (26) to hold.
265
+
Theorem 1. For each i = 1, . . . , l, and k = 1, . . . , ni , let the scalars ρk,i j , j = 1, . . . , pik , and − ρk,i j , j = 1, . . . , qik , of (13)–(16) be such that +
△
+
j = 1, . . . , pik ,
(27)
−
△
−
j = 1, . . . , qik ,
(28)
γk,i j = α + m¯ ρk,i j < 0, γk,i j = α + m¯ ρk,i j > 0, −
+
−
+
and let Tk,i j and Tk,i j be the lengths of the intervals Ik,i j and Ik,i j respectively, then (26) holds if pik
∑
j=1
+ + |γk,i j |Tk,i j
qik
>
¯ ∑ γk,i j Tk,i j + ln(m), −
−
i = 1, . . . , l, k = 1, . . . , ni .
(29)
j=1
Proof. By (13)–(16), and taking into account (27), (28), the l.h.s. of (26) is upperly bounded by
pik
qik
pik
qik
α ∑ Tk, j + ∑ Tk, j + m¯ ∑ ρk, j Tk, j + ∑ ρk, j Tk, j i−
j=1
j=1
+
i+
pik
i+
i+
i−
i−
j=1
j=1
qik
pik
qik
= ∑ (α + m¯ ρk,i j )Tk,i j + ∑ (α + m¯ ρk,i j )Tk,i j = ∑ γk,i j Tk,i j + ∑ γk,i j Tk,i j . (30) +
+
j=1
−
−
j=1
+
j=1
+
j=1
−
−
As γk,i j < 0, j = 1, . . . , pik , (30) implies (26) if (29) is satisfied.
In words, the condition stated by Theorem 1 means that Σ may be stable even if − k∆Aℓ (τ )k takes large values over the sub-interval Iki of each Iℓi ∈ {Iℓi }, provided this is + compensated by small enough values assumed by k∆Aℓ (τ )k over the other sub-interval Iki . ˙ It is also stressed that no requirement is imposed on A(·), it does not need to be defined ˙ ∀t ∈ R+ , and kA(·)k may be unbounded. Large values of pik and qik mean an accurate prior knowledge on the plant, hence it is expected that the degree of conservatism of conditions (29) is inversely proportional to pik and qik . The purpose of the following calculations is to quantitatively study this aspect of the problem. Assume that for some of the possible values of i (i = 1, . . . , l), and k (k = 1, . . . , ni ), an upper bound on k∆Aℓ (τ )k, τ ∈ Iℓi ∈ {Iℓi } is available, but the whole information on k∆Aℓ (τ )k is not as refined as that corresponding to A4 for large pik and qik . The worst case is when for some i and k, the information carried by (13)–(16) on k∆Aℓ (τ )k reduces to +
+
0 ≤ k∆Aℓ (τ )k ≤ ρk,i j¯, τ ∈ I¯k,i j¯ ⊂ Iℓi , i−
i+
i−
ρk, j¯ < k∆Aℓ (τ )k ≤ ρk,qi , τ ∈ I¯k,qi ⊂ Iℓi , k
+
(31) (32)
k
−
i i with I¯k,i j¯ ∪ I¯k,q i = Iℓ , and k
+
△
$$\gamma_{k,\bar j}^{i+} \triangleq \alpha + \bar m\, \rho_{k,\bar j}^{i+} < 0, \tag{33}$$
−
−
i i γk,q ¯ ρk,q i = α +m i > 0, k
(34)
k
+ + i− where ρk,i j¯ is one of the parameters ρk,i j of (13), (14) for some fixed 1 ≤ j¯ ≤ pik , and ρk,q i
k
is the same parameter of (16) for j = qik − 1. The values of pik and qik corresponding to (31), (32) are denoted by p¯ik and q¯ik respectively, to avoid any confusion with the pik and qik , corresponding to (13)–(16). Clearly one has p¯ik = 1 and q¯ik = 1. + By (31), (32) it follows that the subset I¯k,i j¯, where k∆Aℓ (τ )k is known to be small i
enough, has a reduced length with respect to the corresponding
+ Iki
pk S + = Ik,i j resulting from j=1
i− , where k∆A (τ )k is known to be large (13)–(16). As a consequence, the time interval I¯k,q ℓ i k
i
enough, has an increased length with respect to the corresponding
− Iki
qk S − = Ik,i j . j=1
+
More precisely, comparing (31), (32) with (13)–(16) one has that the intervals I¯k,i j¯ and
i− are given by I¯k,q i k
j [
i
i
¯
+ I¯k,i j¯ =
+ Ik,i j ,
j=1
i− I¯k,q i k
=
pk [
+ Ik,i j
j= j¯+1
qk [
−
Ik,i j ,
(35)
j=1
+ i− the lengths of I¯i+ and I¯i− respectively, one has T¯ i+ = hence, denoting by T¯k,i j¯ and T¯k,q i k, j¯ k, j¯ k,qi k
j¯
k
i+
∑ Tk, j , and
j=1
i− T¯k,q i k
pik
=
∑
+ Tk,i j +
j= j¯+1
qik
∑ Tk,i j . −
j=1
Recalling that p¯ik = q¯ik = 1, condition (29) gives the following lower bound on the length + + T¯k,i j¯ of the whole interval I¯k,i j¯ over which (31) must be verified to guarantee stability −
i γk,q i ln(m) ¯ i− ¯ Tk, j¯ > i+ + i+k T¯k,q i. k |γk, j¯| |γk, j¯| i+
(36)
It is now assumed that the more refined information carried by (13)–(16) is available and + T¯k,i j¯ is computed on the basis of such information to show that an estimate less conservative than (36) is obtained. The first step is to rewrite (36) as i− pik qik γ i − + + ln(m) ¯ k,q (37) T¯k,i j¯ > i+ + i+k ∑ Tk,i j + ∑ Tk,i j , |γk, j¯| |γk, j¯| j= j¯+1 j=1
the second step is to exploit (29) in order to state an explicit stability condition in terms of + T¯k,i j¯ even in the case pik > 1, qik > 1, according to (13)–(16). To this purpose it is enough to rewrite (29) as pik
∑
j=1
+ + |γk,i j |Tk,i j −
j¯−1
∑
j=1
+ + |γk,i j¯|Tk,i j +
j¯−1
∑
j=1
+ + |γk,i j¯|Tk,i j
qik
> ln(m) ¯ + ∑ γk,i j Tk,i j . j=1
−
−
(38)
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
267
Taking into account that pik
∑
j=1
+ + |γk,i j |Tk,i j
j¯−1
∑
=
j=1
+ + + + |γk,i j |Tk,i j + |γk,i j¯|Tk,i j¯ +
pik
∑
+
j= j¯+1
+
|γk,i j |Tk,i j ,
and j¯
∑ |γk,i j¯|Tk,i j = |γk,i j¯|T¯k,i j¯, +
+
+
+
j=1
inequality (38) can be rewritten in the following way + + |γk,i j¯|T¯k,i j¯ +
j¯−1
∑
j=1
pik
∑
+ + |γk,i j |Tk,i j +
j= j¯+1
+ + |γk,i j |Tk,i j −
j¯−1
∑
j=1
+ + |γk,i j¯|Tk,i j
qik
> ln(m) ¯ + ∑ γk,i j Tk,i j , −
−
j=1
whence,
− j¯−1 γk,i j i− T − i+ k, j j=1 j=1 |γk, j¯|
qik
+ ln(m) ¯ T¯k,i j¯ > i+ + ∑ |γk, j¯|
∑
+
+
(|γk,i j | − |γk,i j¯|) + |γk,i j¯|
+
Tk,i j +
+
+ |γk,i j | i+ T . i+ | k, j | γ j= j¯+1 k, j¯
pik
∑
(39)
Inequality (39) gives the lower bound on T¯k,i j¯ guaranteeing stability, obtained by the refined information carried by (13)–(16). Comparing (36) with (39), one has that the difference between the r.h.s of these two inequalities is pik qik − 1 − − i △ i i+ i i i− ∆k = i+ ∑ γk,qi − γk, j Tk, j + ∑ γk,q i Tk, j k k |γk, j¯| j=1 j= j¯+1 j¯−1
+ ∑ |γk, j | − |γk, j¯| Tk, j + i+
i+
j=1
+
+
−
i+
pik
∑
j= j¯+1
−
i+
i+
|γk, j |Tk, j .
(40)
i | and i < i for j < r, so that it is apparent that ∆i > 0 By (13)–(16) one has |γk,i j | > |γk,r γk, j γk,r k i and that its value increases with pk and qik .
3.2.
Discrete Time Systems
For t ∈ Z+ , the PFT form of Σ is x(t + 1) = (A′ (t) + ∆A′ (t))x(t).
(41)
△
By (41), the state response of Σ inside each interval Iℓ = [tℓ ,tℓ+1 ), ℓ ∈ Z+ is given by (t−tℓ )
x(t) = Aℓ
t−1
x(tℓ ) +
j−1 ∆Aℓ ( j)x( j), ∑ At− ℓ
j=tℓ
(42)
268
Leopoldo Jetto and Valentina Orsini
where the empty sum is taken as zero. △
△
By (12) one has kAtℓ k ≤ m¯ α t , where m¯ = max mℓ , α = max αℓ . Taking the norm of both ℓ
ℓ
sides of (42) and multiplying them by α −t , one obtains −tℓ ¯ + kx(t)kα −t ≤ mkx(t ℓ )kα
t−1
∑ m¯ α −1 k∆Aℓ ( j)k kx( j)kα − j ,
j=tℓ
and applying the discrete Bellman–Gronwall inequality t−1 −tℓ ¯ )k α kx(t)kα −t ≤ mkx(t ℓ ∏ 1 + m¯ α −1 k∆Aℓ ( j)k . j=tℓ
Hence, ∀t ∈ Iℓ ∈ {Iℓ }, kx(t)k is decreasing as t−1
¯ kx(t)k ≤ mkx(t ¯ ℓ ( j)k) . ℓ )k ∏ (α + mk∆A
(43)
j=tℓ
As (43) holds ∀ ℓ ∈ Z+ , then, for any two arbitrary t, t¯, with t ≥ t¯, one has # " ¯ t−1
ℓ−1
tℓ+1 −1
τ =tℓ¯
ℓ=ℓ0
τ =tℓ
kx(t)k ≤ m¯ ∏ (α + mk∆A ¯ ¯ ℓ¯(τ )k) ∏ m
∏
(α + mk∆A ¯ ˜ t¯)k, ℓ (τ )k) mkx(
(44)
where: tℓ¯ = tℓ0 if t and t¯ belong to the same Iℓ , otherwise tℓ¯ is the maximum tℓ ∈ S such that tℓ¯ ≤ t and tℓ0 is the minimum tℓ ∈ {tℓ } such that tℓ0 ≥ t¯, the empty product is taken as 1, m˜ =
sup t¯≥0, t¯≤t≤t¯+T
kΦ(t, t¯)k.
Assumptions A1 and A3 imply the existence of a positive m¯ s < ∞, such that ! t−1
¯ m˜ m¯ ∏ (α + mk∆A ℓ¯(τ )k) τ =tℓ¯
≤ m¯ s ,
(45)
hence, the uniform asymptotic stability of Σ follows if m¯
tℓ+1 −1
∏
τ =tℓ
(α + mk∆A ¯ ℓ (τ )k) < δ < 1,
∀ ℓ ∈ Z+ .
(46)
In fact (44)–(46) imply ¯
kx(t)k ≤ m¯ s δ (ℓ−ℓ0 ) kx(t¯)k,
∀ t¯, ∀t ≥ t¯,
∀ x(t¯),
(47)
with ℓ¯ = ℓ0 if t and t¯ belong to the same Iℓ . So, by the arbitrariness of kx(t¯)k, one has △ that ∀ ω > 0 and ∀ t¯ ≥ 0, a sufficiently large ℓ(ω ) = ℓ¯ can be found such that (6) holds for 1
t(ω ) = (ℓ(ω ) − ℓ0 )L. This, in turn, implies that (8) holds for c = m¯ s ω −1 and λ = (ω ) t(ω ) . As for the case of continuous time systems, sufficient conditions for (46) to hold can be stated in terms of bounds on k∆Aℓ (τ )k. Assume that conditions (13)–(16) of assumption A4 still hold, then the following theorem can be proved.
+
Theorem 2. For each i = 1, . . . , l, and k = 1, . . . , ni , let the scalars ρk,i j , j = 1, . . . , pik , and − ρk,i j , j = 1, . . . , qik , of (13)–(16) be such that +
△
+
−
△
−
|α + m¯ ρk,i j | = γk,i j < 1, d j = 1, . . . , pik ,
(48)
|α + m¯ ρk,i j | = γk,i j > 1, j = 1, . . . , qik ,
(49)
then (46) holds if pik
∑
j=1 +
+ + |λk,i j |Tk,i j
qik
>
¯ ∑ λk,i j Tk,i j + ln(m),
i = 1, . . . , l, k = 1, . . . , ni ,
(50)
j=1
−
+
−
−
−
−
+
+
−
where λk,i j = ln γk,i j , λk,i j = ln γk,i j , Tk,i j and Tk,i j are the lengths of Ik,i j and Ik,i j respectively. Proof. By (48), (49), condition (46) can be expressed as pik
∏ exp j=1
+ + (ln γk,i j )Tk,i j
1 i− i− exp (ln )T γ , ∏ k, j k, j < m¯ j=1 qik
(51)
namely pik
∑
+ + ln exp(λk,i j Tk,i j ) +
△
+
¯ ∑ ln exp(λk,i j Tk,i j ) < − ln(m), −
−
(52)
j=1
j=1 +
qik
−
△
−
−
+
where ln γk,i j = λk,i j , ln γk,i j = λk,i j . As λk,i j < 0 and λk,i j > 0, condition (52) is equivalent to (50). As for continuous time systems, assume that for some of the possible values of i (i = 1, . . . , l), and k (k = 1, . . . , ni ), an upper bound on k∆Aℓ (τ )k, τ ∈ Iℓi ∈ {Iℓi } is available, but the whole information on k∆Aℓ (τ )k is not as refined as that corresponding to A4 for large pik and qik . The worst case is when for some i and k, the information carried by (13)–(16) on k∆Aℓ (τ )k reduces to (31) and (32), with (33), (34) replaced by +
△
+
γk,i j¯ = |α + m¯ ρk,i j¯| < 1, △
−
−
i i γk,q ¯ ρk,q i = |α + m i | > 1. k
(53) (54)
k
respectively. By arguing as for (36), condition (50) particularizes as i− pik qik λ i + ln(m) ¯ + − k,q T¯k,i j¯ > i+ + i+k ∑ Tk,i j + ∑ Tk,i j , |λk, j¯| |λk, j¯| j= j¯+1 j=1
(55)
+ + where T¯k,i j¯ is the length of the interval I¯k,i j over which (31) must be verified to guarantee the + stability of Σ. If the lower bound on T¯k,i j¯ were computed on the basis of the information carried by (13)–(16), then, by arguing as for (39), the following less conservative estimate would be obtained ¯−1 (|λ i+ | − |λ i+ |) pik qik i− i+ | j | λk, j − λk, j i+ + + ln(m) ¯ k, j k, j¯ (56) Tk,i j + ∑ T . T¯k,i j¯ > i+ + ∑ i+ Tk,i j − ∑ + i i+ | k, j | |λk, j¯| j=1 |λk, j¯| | λ | λ ¯ j=1 j= j+1 k, j¯ k, j¯
270
Leopoldo Jetto and Valentina Orsini
The difference ∆ik between the r.h.s.’s of (55) and (56) is
∆ik
− k k 1 i+ i− i− i− i = i+ ∑ λk,q i − λk, j Tk, j + ∑ λk,qi Tk, j k k |λk, j¯| j=1 j= j¯+1 pi
qi
j¯−1
+ ∑ |λk,i j | − |λk,i j¯| Tk,i j + +
j=1
+
−
+
+
+
pik
∑
j= j¯+1
+
+
|λk,i j |Tk,i j .
(57)
−
i | and λ i < λ i for j < ℓ, it is apparent that ∆i > 0 and that its value As |λk,i j | > |λk,ℓ k k, j k,ℓ increases with pik and qik .
Remark. The stability conditions (29) and (50) can be interpreted as a requirement that the plant dynamics be slowly varying on average (and not pointwise) with respect to some stable frozen-time configurations. For this reason, continuous or discrete-time plants satisfying (29) or (50), respectively, are called "small average variation plants".
4. Numerical Examples on Stability Analysis
The matrix norm used in all the numerical examples is the spectral norm.

Example 4.1. Consider the following plant $\Sigma$:
$$A(t) = \begin{bmatrix} -2 & 0.15 \\ -0.1 & a(t) \end{bmatrix}, \quad a(t) = -1 + f(t), \quad 0 \leq f(t) \leq 1.8. \tag{58}$$
When $f(t) = 1.8$, the corresponding frozen-time $A(t)$ is such that $\max_i \operatorname{Re}[\lambda_i\{A(t)\}] = 0.795$. Hence the stability of (58) cannot be studied with any of the usual methods based on pointwise stability. It is known that $a(t) = -1$ at the frozen times $t_\ell$, $\ell \in \mathbb{Z}_+$, with $t_\ell = \frac{\ell}{2} L_1 + \frac{\ell}{2} L_2$ for even $\ell$ and $t_\ell = \big(\frac{\ell-1}{2} + 1\big) L_1 + \frac{\ell-1}{2} L_2$ for odd $\ell$, with $L_1 = 40$, $L_2 = 50$. According to the notation of Section 2, this corresponds to $l = 2$, $T_1 = L_1 = 40$, $T_2 = L_2 = 50$ and $T = \max_{i=1,2} T_i = 50$. The stability of $A(t)$ has been investigated assuming that, for each fixed $i = 1, 2$, the prior knowledge on $\|\Delta A_\ell(\tau)\| = |f(t)|$ inside each $I_\ell^i \in \{I_\ell^i\}$ is given by two possible sets of conditions $S_k^i$, $k = 1, \dots, n_i$, with $n_1 = n_2 = 2$. The sets $S_k^i$, $k, i = 1, 2$, are reported below, together with the corresponding values of $\gamma_{k,j}^{i+}$, $j = 1, \dots, p_k^i$, and $\gamma_{k,j}^{i-}$, $j = 1, \dots, q_k^i$. These values have been computed according to (27), (28), considering that in correspondence of each $t_\ell$, $\ell \in \mathbb{Z}_+$, one has $\alpha_\ell = -1.0152$ and $m_\ell = 1.2906$, so
that $\alpha = \max_\ell \alpha_\ell = -1.0152$ and $\bar m = \max_\ell m_\ell = 1.2906$:
$$S_1^1 = \begin{cases} 0 \leq \|\Delta A_\ell(\tau)\| \leq 0.7 \triangleq \rho_{1,1}^{1+} = \rho_{1,p_1^1}^{1+}, & \tau \in I_{1,1}^{1+},\; T_{1,1}^{1+} = 30,\; \gamma_{1,1}^{1+} = -0.1118, \\ 0.7 < \|\Delta A_\ell(\tau)\| \leq 0.9 \triangleq \rho_{1,1}^{1-}, & \tau \in I_{1,1}^{1-},\; T_{1,1}^{1-} = 6,\; \gamma_{1,1}^{1-} = 0.1463, \\ 0.9 < \|\Delta A_\ell(\tau)\| \leq 1.2 \triangleq \rho_{1,2}^{1-} = \rho_{1,q_1^1}^{1-}, & \tau \in I_{1,2}^{1-},\; T_{1,2}^{1-} = 4,\; \gamma_{1,2}^{1-} = 0.5335, \end{cases}$$

$$S_2^1 = \begin{cases} 0 \leq \|\Delta A_\ell(\tau)\| \leq 0.4 \triangleq \rho_{2,1}^{1+}, & \tau \in I_{2,1}^{1+},\; T_{2,1}^{1+} = 6,\; \gamma_{2,1}^{1+} = -0.499, \\ 0.4 < \|\Delta A_\ell(\tau)\| \leq 0.6 \triangleq \rho_{2,2}^{1+} = \rho_{2,p_2^1}^{1+}, & \tau \in I_{2,2}^{1+},\; T_{2,2}^{1+} = 20,\; \gamma_{2,2}^{1+} = -0.2408, \\ 0.6 < \|\Delta A_\ell(\tau)\| \leq 1.1 \triangleq \rho_{2,1}^{1-}, & \tau \in I_{2,1}^{1-},\; T_{2,1}^{1-} = 10,\; \gamma_{2,1}^{1-} = 0.4045, \\ 1.1 < \|\Delta A_\ell(\tau)\| \leq 1.4 \triangleq \rho_{2,2}^{1-} = \rho_{2,q_2^1}^{1-}, & \tau \in I_{2,2}^{1-},\; T_{2,2}^{1-} = 4,\; \gamma_{2,2}^{1-} = 0.7916, \end{cases}$$

$$S_1^2 = \begin{cases} 0 \leq \|\Delta A_\ell(\tau)\| \leq 0.3 \triangleq \rho_{1,1}^{2+}, & \tau \in I_{1,1}^{2+},\; T_{1,1}^{2+} = 16,\; \gamma_{1,1}^{2+} = -0.628, \\ 0.3 < \|\Delta A_\ell(\tau)\| \leq 0.5 \triangleq \rho_{1,2}^{2+} = \rho_{1,p_1^2}^{2+}, & \tau \in I_{1,2}^{2+},\; T_{1,2}^{2+} = 10,\; \gamma_{1,2}^{2+} = -0.3699, \\ 0.5 < \|\Delta A_\ell(\tau)\| \leq 1 \triangleq \rho_{1,1}^{2-}, & \tau \in I_{1,1}^{2-},\; T_{1,1}^{2-} = 14,\; \gamma_{1,1}^{2-} = 0.2754, \\ 1 < \|\Delta A_\ell(\tau)\| \leq 1.5 \triangleq \rho_{1,2}^{2-} = \rho_{1,q_1^2}^{2-}, & \tau \in I_{1,2}^{2-},\; T_{1,2}^{2-} = 10,\; \gamma_{1,2}^{2-} = 0.9207, \end{cases}$$

$$S_2^2 = \begin{cases} 0 \leq \|\Delta A_\ell(\tau)\| \leq 0.2 \triangleq \rho_{2,1}^{2+}, & \tau \in I_{2,1}^{2+},\; T_{2,1}^{2+} = 10,\; \gamma_{2,1}^{2+} = -0.7571, \\ 0.2 < \|\Delta A_\ell(\tau)\| \leq 0.4 \triangleq \rho_{2,2}^{2+}, & \tau \in I_{2,2}^{2+},\; T_{2,2}^{2+} = 5,\; \gamma_{2,2}^{2+} = -0.499, \\ 0.4 < \|\Delta A_\ell(\tau)\| \leq 0.6 \triangleq \rho_{2,3}^{2+} = \rho_{2,p_2^2}^{2+}, & \tau \in I_{2,3}^{2+},\; T_{2,3}^{2+} = 20,\; \gamma_{2,3}^{2+} = -0.2408, \\ 0.6 < \|\Delta A_\ell(\tau)\| \leq 1.3 \triangleq \rho_{2,1}^{2-}, & \tau \in I_{2,1}^{2-},\; T_{2,1}^{2-} = 9,\; \gamma_{2,1}^{2-} = 0.6626, \\ 1.3 < \|\Delta A_\ell(\tau)\| \leq 1.8 \triangleq \rho_{2,2}^{2-} = \rho_{2,q_2^2}^{2-}, & \tau \in I_{2,2}^{2-},\; T_{2,2}^{2-} = 6,\; \gamma_{2,2}^{2-} = 1.308. \end{cases} \tag{59}$$
The l.h.s. and r.h.s. of (29) reported in Table 1 have been computed on the basis of the above values. The table clearly shows that (29) is satisfied for each $S_k^i$, $i, k = 1, 2$, so that plant (58) is exponentially stable.

Table 1.

                          $S_1^1$   $S_2^1$   $S_1^2$   $S_2^2$
    r.h.s. value of (29)  3.267     7.47      13.31     14.06
    l.h.s. value of (29)  3.354     7.81      13.74     14.88
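The entries of Table 1 can be recomputed directly from the data in (59). The following sketch uses our own encoding of the $(\gamma, T)$ pairs (not part of the chapter) and reproduces both rows of the table:

```python
import math

data = {  # set name: (gamma_plus list, T_plus list, gamma_minus list, T_minus list)
    "S11": ([-0.1118], [30], [0.1463, 0.5335], [6, 4]),
    "S21": ([-0.499, -0.2408], [6, 20], [0.4045, 0.7916], [10, 4]),
    "S12": ([-0.628, -0.3699], [16, 10], [0.2754, 0.9207], [14, 10]),
    "S22": ([-0.7571, -0.499, -0.2408], [10, 5, 20], [0.6626, 1.308], [9, 6]),
}
m_bar = 1.2906
for name, (gp, Tp, gm, Tm) in data.items():
    lhs = sum(abs(g) * T for g, T in zip(gp, Tp))          # l.h.s. of (29)
    rhs = sum(g * T for g, T in zip(gm, Tm)) + math.log(m_bar)  # r.h.s. of (29)
    print(name, round(rhs, 2), round(lhs, 2), lhs > rhs)   # matches Table 1, all True
```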
The stability analysis is now performed on the basis of the coarser information carried by (31) and (32). More precisely, it is assumed that for $k = 2$ and $i = 2$ the set $S_2^2$ of conditions representing the prior knowledge on $\|\Delta A_\ell(\cdot)\|$ reduces to
$$S_2^2 = \begin{cases} 0 \leq \|\Delta A_\ell(\tau)\| \leq 0.4 \triangleq \rho_{2,2}^{2+} = \rho_{2,\bar j}^{2+}, & \tau \in \bar I_{2,2}^{2+},\; \bar T_{2,2}^{2+} = \bar T_{2,\bar j}^{2+} = 15,\; \gamma_{2,2}^{2+} = -0.499, \\ 0.4 < \|\Delta A_\ell(\tau)\| \leq 1.8 \triangleq \rho_{2,2}^{2-}, & \tau \in \bar I_{2,2}^{2-},\; \bar T_{2,2}^{2-} = 35,\; \gamma_{2,2}^{2-} = 1.308, \end{cases} \tag{60}$$
which means to consider a set of conditions with p¯22 = 1 and q¯22 = 1. Comparing (60) with 2+ of I¯2+ is obtained as T¯ 2+ = T 2+ +T 2+ = 15, while the (59) and recalling (35), the length T¯2,2 2,2 2,2 2,1 2,2 2− of I¯2− is obtained as T¯ 2+ + T 2− + T 2− = 35. Of course, the values of 2+ = 2+ length T¯2,2 γ2, j¯ γ2,2 2,2 2,3 2,1 2,2 −
2 are the same of the corresponding parameters in (59). On the basis of (60), the and γ2,2 2+ = 15, and value 92.247 is obtained for the r.h.s of (36). Hence (36) is not satisfied as T¯2,2 the stability of Σ can not be inferred on the basis of the information carried by (60). On the
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
273
+
2 = 15 satisfies (39), whose r.h.s. assumes the value 13.364, as it is contrary, the actual T¯2,2 obtained from (59). A comparison has been also made with the sufficient stability conditions stated in [35]. With reference to the given sequence {tℓ } and to Theorem 2 of the above paper, the matrix ¯ + P(t), where A(t) given by (58) can be written as A(t) △
¯ = A¯ = A(t)
−2 0.15 −0.1 −1
,
P(t) =
0 0
0 f (t)
.
Given parameters α¯ ′ , A, δ and β such that: lim L
L→∞
−1
tZ 0 +L t0
lim L
L→∞
−1
tZ 0 +L t0
¯ kA(s)k ds ≤ A, ∀t0 ∈ R+ ,
¯ + h) − A(t)k ¯ kA(t ≤ β hγ , ∀t ∈ R+ , 0 < γ ≤ 1,
¯ ds ≤ α¯ ′ < 0, max Re [λi (A(s))] i
lim L
L→∞
−1
tZ 0 +L t0
kP(s)k ds ≤ δ , ∀t0 ∈ R+ ,
the same theorem states that A(t) is exponentially stable provided that: (i) a sufficiently small ε > 0 can be chosen, such that α¯ ′ + 2ε < 0, (ii) then δ is so small that α¯ ′ + 2ε + Mε δ < 0, (with Mε = 32 2A ε + 1 ),
(iii) then β is so small that
1 (γ +1) < 0, α¯ ′ + 2ε + Mε δ + 2(ln Mε )γ /(γ +1) β (Mε + ε A) / (iv) and also so small that 1 (γ +1) < ε, σ β 1/(γ +1) ln Mε Mε + ε A /
where σ (·) is a certain non increasing function defined in the proof of Theorem 2 of [35].
In the present case one has α¯ ′ = −1.0152, A = 2, β = 0 and δ = 0.7778. It is easily seen that ∀ ε > 0, condition (ii) can not be satisfied because 3 6 ′ ′ α¯ + 2ε + Mε δ = α¯ + 2ε + 2A ε + 1 δ = −1.0152 + 2ε + + 1.5 0.7778 2 ε 4.6668 = 0.1515 + 2ε + > 0, ∀ ε > 0. ε
274
Leopoldo Jetto and Valentina Orsini
Example 4.2. Consider the LTV plant Σ described by the following dynamic matrix A(t) A(t) =
−1 1.1 sint −1.1 sint −1
.
(61)
The prior knowledge on the plant is: at time instants tℓ = π ℓ, ℓ = 0, 1, . . . one has A(tℓ ) = Aℓ = −I, so that (11) holds with αℓ = −1 and mℓ = 1, ∀ ℓ ∈ Z+ . According to the notation of Section 2 one has l = 1, Ti = T1 = T = π , {Iℓi } = {Iℓ1 } = {Iℓ }, so that Iℓ1 = Iℓ , ∀ ℓ ∈ Z+ . Moreover, inside each Iℓ the prior knowledge on k∆Aℓ (τ )k is assumed to be given by a unique set of conditions Ski , k = 1, . . . , ni , with ni = n1 = 1, so that Ski = S11 . The + set of conditions S11 , is reported beneath as well as the corresponding values of the γ1,1 j , − j = 1, . . . , p11 , and γ1,1 j , j = 1, . . . , q11 which are computed according to (27), (28) recalling that α = max αℓ = −1, m¯ = max mℓ = 1, ℓ
ℓ
S11 =
△ 1+ △ 1+ 0 ≤ k∆Aℓ (τ )k ≤ 0.8 = ρ1,1 = ρ1,p1 , 1 τ ∈ I 1+ T 1+ = 1.628, γ 1+ = −0.2, 1,1 1,1 1,1
△ 1− △ 1− = ρ1,q1 , 0.8 < k∆Aℓ (τ )k ≤ 1.1 = ρ1,1 1 − − 1− = 0.1 . 1 1 τ ∈ I1,1 , T1,1 = 1.513, γ1,1
It is straightforward to verify that on the basis of the above values, the l.h.s. and r.h.s. of (29) result to be 0.3256 and 0.1513, respectively. Hence condition (29) is satisfied and one may conclude that plant (61) is exponentially stable. The considered plant Σ is pointwise stable because the eigenvalues of each frozen A(t), are λ1,2 = −1 ± j1.1 sin(t), t ∈ R+ . This makes it possible to compare the present method with the sufficient stability conditions stated in Theorem 3.2 of [19], which enhances previous results. This theorem considers (n × n), time-varying matrices A(·) such that Re[λi {A(t)}] ≤ α˜ < 0, i = 1, . . . , n, ∀t ≥ 0 and states that A(·) is stable if one of the following conditions holds ∀t ≥ 0: (i) |α˜ | > 4ma , ˙ (ii) kA(·)k is piecewise differentiable and kA(t)k ≤ δ1
0, such that (|α˜ | − ε − (log kε )/k∗) > 0, where kε = (2ma /ε ), and log kε 1 |α˜ | − ε − ∗ > 0, ∀t ≥ 0, sup kA(t + τ ) − A(t)k ≤ δ1 < kε k 0≤τ ≤k∗ (iv) |α˜ | > n − 1 and for some η ∈ (0, 1),
A(t + h) − A(t)
≤ δ1 < 2η n−1 |α˜ | − 2ma η + (n − 1) log η . sup
h h>0
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
275
In the present case one has ma = sup kA(t)k = 2.1, and |α˜ | = 1, so that (i) is not satisfied. t≥0
˙ ˙ Condition (ii) imposes kA(·)k < 0.0161, which is not satisfied because sup kA(t)k = 1.1. t
The above version of condition (iii) is that reported on page 161 of [19]. It has been checked comparing sup kA(t + τ ) − A(t)k with k1ε (|α˜ | − ε ), and choosing k∗ > 0 as the minimum 0≤τ ≤k∗
value compatible with the condition (|α˜ | − ε − (log kε )/k∗ ) > 0. This surely defines a relaxed bound on sup kA(t + τ ) − A(t)k, because reduces the time interval over which (iii) 0≤τ ≤k∗
has to be satisfied and increases the upper bound on the r.h.s. of (iii) because log kε > 0 (kε = 2ma /ε > 1, ∀ ε ∈ (0, |α˜ |)). The minimum value of k∗ is 3.76 and for t = 0 one has sup kA(τ ) − A(0)k = 1.1, while max k1ε (|α˜ | − ε ) = 0.06. Hence, even this simplified
0≤τ ≤3.76
ε ∈(0,|α˜ |)
reduced conservatism condition is not satisfied. Finally, it is easily seen that condition (iv) can not be applied because in this case one has |α˜ | = 1 = n − 1. The comparison with [35] has been performed using the same notation and procedure described in Example 4.1. Writing matrix (61) as 0 1.1 sint ¯ + P(t) = −I + , A(t) −1.1 sint 0 it is found that the prior available information gives α¯ ′ = −1, A = 1, β = 0 and δ = 0.944. It is easily seen that ∀ ε > 0, condition (ii) can not be satisfied because 3 3 ′ ′ α¯ + 2ε + Mε δ = α¯ + 2ε + + 1.5 0.944 2A ε + 1 δ = −1 + 2ε + 2 ε 2.832 > 0, ∀ ε > 0. = 0.416 + 2ε + ε Example 4.3. The following plant Σ has been considered in several papers dealing with robust stability analysis (see [7] and references therein), 0 1 , 0 ≤ p(t) ≤ p. ¯ (62) A(t) = −(2 + p(t)) −1 The plant is pointwise stable ∀ p¯ and, for unbounded | p(·)|, ˙ it is quadratically stable ∀ p¯ ≤ 3.82. Imposing a uniform bound on | p(·)|, ˙ less conservative conditions on p¯ can be found. For example, it has been recently shown in [7] that, using parameter dependent homogeneous Lyapunov functions, the plant is asymptotically stable also for p¯ = 10 if | p(·)| ˙ ≤ 23. It is now shown that if p(·) is known to satisfy some conditions of the kind (13)–(16), the above bounds can be greatly relaxed. As an example, the time varying term p(t) has been generated as the positive part of a sin function centered around the middle point of each Iℓ = [10ℓ, 10(ℓ + 1)), ℓ ∈ Z+ . This corresponds to l = 1 and Ti = T1 = T = 10. Inside each Iℓ , p(·) has been generated as p(t) = 10 sin(ω1 (t − a)), ∀t ∈ [a, a + π /(ω1 )], p(t) = 0 otherwise, where: a = (tℓ + tℓ+1 )/2 − π /2ω1 , ω1 = 40, tℓ = 10ℓ. This corresponds to assume p¯ = 10, | p(·)| ˙ ≤ 400. The assumed prior knowledge on A(t) is: p(t) = 0 at the frozen time instants tℓ = 10ℓ, ℓ = 0, 1, . . . , moreover inside each Iℓ , k∆Aℓ (τ )k = |p(t)| satisfies a unique set of conditions
276
Leopoldo Jetto and Valentina Orsini
(13)–(16) given by Ski , k = 1, . . . , ni , with ni = n1 = 1, so that Ski = S11 . The set of conditions + S11 , is reported beneath as well as the corresponding values of the γ1,1 j , j = 1, . . . , p11 , and − γ1,1 j , j = 1, . . . , q11 . These values have been computed according to (27), (28) considering that in correspondence of each tℓ , ℓ ∈ Z+ , one has αℓ = −0.5 and, mℓ = 2.268, so that α = max αℓ = −0.5, and m¯ = max mℓ = 2.268, ℓ
ℓ
S11 =
△
+
1 , 0 ≤ k∆Aℓ (τ )k ≤ 0.1 = ρ1,1 + + + 1 T 1 = 9.922, γ 1 ∀ τ ∈ I1,1 1,1 1,1 = −0.2732, △
+
△
+
1 = 1 , 0.1 < k∆Aℓ (τ )k ≤ 0.2 = ρ1,2 ρ1,p1 +
+
+
1
1 , T 1 = 0.5 · 10−3 , γ 1 ∀ τ ∈ I1,2 1,2 1,2 = −0.0464,
△ 1− , 0.2 < k∆Aℓ (τ )k ≤ 5 = ρ1,1 − − − 1 1 1 ∀ τ ∈ I1,1 , T1,1 = 0.0251, γ1,1 = 10.839, △ 1− △ 1− = ρ1,q1 , 5 < k∆Aℓ (τ )k ≤ 10 = ρ1,2 1 − − − 1 1 1 ∀ τ ∈ I1,2 , T1,2 = 0.0524, γ1,2 = 22.178.
It is straightforward to verify that on the basis of the above values, the l.h.s. and r.h.s. of (29) result to be 2.7109 and 2.253 respectively. Hence condition (29) is satisfied and plant (62) is exponentially stable also for p¯ = 10, | p(·)| ˙ ≤ 400. As Σ is pointwise stable ∀ p(·) ≥ 0, a comparison with [19] can be made assuming that the analytical description of p(·) is really available (see Example 4.2 for the stability conditions of [19]). △ The considered plant is such that Re[λi (A(·))] ≤ −0.5 = α˜ , and ma = 12.04, so that (i) ˙ is not satisfied. Condition (ii) imposes kA(·)k < 2.48 · 10−7 , which is not satisfied because ˙ sup kA(t)k = 400. A relaxed version of condition (iii) has been checked arguing as in t
Example 4.2. It is found that the minimum k∗ compatible with the condition (|α˜ | − ε − (log kε )/k∗ ) > 0 is k∗ = 13.6, so that the relaxed version of (iii) becomes sup kA(t + 0≤τ ≤13.6
τ ) − A(t)k ≤ δ1 < k1ε (|α˜ | − ε ), ∀t ≥ 0. It is seen that for t = 0 there is not any ε ∈ (0, |α˜ |) such that the above condition is satisfied. In fact one has sup kA(τ ) − A(0)k = 10, and max
ε ∈(0,|α˜ |)
1 kε
0≤τ ≤13.6
(|α˜ | − ε ) = 2.5 · 10−3 . Finally, it is easily seen that condition (iv) can not be
applied because in this case one has |α˜ | = 0.5 < n − 1 = 1. The comparison with [35] has been performed using the same notation and procedure of the previous examples. Writing (62) as 0 0 0 1 ¯ + A(t) + P(t) = −p(t) 0 −2 −1
and exploiting the prior information on A(t), it is easily found that α¯ ′ = −0.5, A = 2.2882, β = 0,δ = 0.1642. It is easily seen that condition (ii) given by α¯ ′ + 2ε + Mε δ = α¯ ′ + 2ε + 6.8646 3 + 1.5)0.1642 = −0.2537 + 2ε + 6.8646 < 0 can not 2 2A ε + 1 δ = −0.5 + 2ε + ( ε ε be satisfied by any ε > 0.
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
277
Example 4.4. Consider the discrete-time LTV plant Σ described by the following dynamic matrix A(t) 0.2 0.8 sin(ω tTc ) A(t) = (63) , ω = 1, t ∈ Z+ , Tc = 10−4 . 0.6 sin(ω tTc ) 0.3 Let {tℓ } be the sequence of frozen time instants, such that ω tℓ Tc = π ℓ, ℓ = 0, 1, . . . . According to the notation of section 2 this corresponds to l = 1, Ti =T1 = T = π /(ω Tc ) and 0.2 0 , so that (12) holds with Iℓi = Iℓ1 = Iℓ , ∀ ℓ ∈ Z+ . At each tℓ one has A(tℓ ) = Aℓ = 0 0.3 αℓ = 0.3 and mℓ = 1, and therefore α = max αℓ = 0.3, and m¯ = max mℓ = 1. The only a ℓ
ℓ
priori knowledge on k∆Aℓ (τ )k consistent with (63) inside each Iℓ1 is given by a unique set of conditions Sk1 , k = 1, . . . , , n1 with n1 = 1,
S11 =
+
△ 1+ △ 1+ 0 ≤ k∆Aℓ (τ )k ≤ 0.6 = ρ1,1 = ρ1,p1 , 1 + + + 1 1 1 τ ∈ I , T = 16, 961, γ = 0.9, 1,1 1,1 1,1
△ 1− △ 1− 0.6 < k∆Aℓ (τ )k ≤ 0.8 = ρ1,1 = ρ1,q1 , 1 − − − 1 1 1 = 1.1 . τ ∈ I1,1 , T1,1 = 14, 455, γ1,1 −
+
+
−
−
1 and 1 , one has | 1 | = | ln 1 | = 0.1054 and 1 = ln 1 = From the values of γ1,1 γ1,1 λ1,1 γ1,1 λ1,1 γ1,1 0.0953 respectively. It is straightforward to verify that condition (50) is satisfied so that Σ is uniformly asymptotically stable. The considered plant Σ is pointwise stable because sup max |λi {A(t)}| = 0.9446. This t≥0
i
makes it possible to compare the present method with that one proposed in [13]. Given matrix (63), the sufficient stability condition stated in [13] imposes sup kA (t) − A (t − 1)k ≤ t
4.1621 · 10−17 , while it is easy to see that for the considered plant one has sup kA (t) − A (t − 1)k = 8 · 10−5 . t
Example 4.5. Consider the discrete-time LTV plant Σ described by the following dynamic matrix a(t) 0.2 , a(t) = 0.3 + f (t), t ∈ Z+ , (64) A(t) = 0.15 0.1 where f (t) is a discrete periodic function, with period P = 236, whose behaviour is reported in Figure 1. The considered plant is not pointwise stable because sup max |λi {A(t)}| = 1.4768. t≥0
i
Let {tℓ } be the sequence of frozen time instants tℓ = 236ℓ, ℓ = 0, 1, . . . . According to i 1 the notation of section 2 this correspondsto l = 1, Ti = T1 = T = 236 and Iℓ = Iℓ = Iℓ , 0.3 0.2 , to which the eigenvalues λ1 = 0 ∀ ℓ ∈ Z+ . At each tℓ one has A(tℓ ) = Aℓ = 0.15 0.1 and λ2 = 0.4 correspond. It can be seen that (12) holds with αℓ = 0.4 and mℓ = 2, so that α = max αℓ = 0.4, and m¯ = max mℓ = 2. Suppose that, ∀ τ ∈ Iℓ = [tℓ ,tℓ+1 ), ℓ = 0, 1, . . . , ℓ
ℓ
the a priori knowledge on k∆Aℓ (τ )k consistent with (64), be given by a unique set S11 of
278
Leopoldo Jetto and Valentina Orsini 1.4 1.2 1
f(t)
0.8 0.6 0.4 0.2 0 0
110 126
236
t
Figure 1. Diagram of f (t). conditions △ 1+ , 0 ≤ k∆Aℓ (τ )k ≤ 0.25 = ρ1,1 1+ , T 1+ = 200, γ 1+ = 0.9, τ ∈ I 1,1 1,1 1,1 △ 1+ △ 1+ 0.25 ≤ k∆A (τ )k ≤ 0.275 = ρ1,2 = ρ1,p1 ℓ 1 S11 = + + + 1 , T 1 = 20, γ 1 = 0.95, τ ∈ I 1,1 1,2 1,2 △ 1− △ 1− 0.275 < k∆Aℓ (τ )k ≤ 1.155 = ρ1,1 = ρ1,q1 , 1 − − − 1 , T 1 = 16, γ 1 τ ∈ I1,1 1,1 1,1 = 2.71 . +
−
+
+
+
+
+
+
1 , 1 , and 1 , | 1 | = | ln 1 | = 0.1054, | 1 | = | 1 | = | ln 1 | = γ1,2 γ1,1 λ1,1 γ1,1 λ1,2 λ1,p1 γ1,2 From the values of γ1,1 −
−
1
−
1 = 1 = ln 1 = 0.9969 are obtained, respectively. It is straightforward to 0.0513, λ1,1 λ1,q1 γ1,1 1 verify that condition (50) is satisfied so that Σ is uniformly asymptotically stable. Suppose now that, according to (31)–(32), the following coarser information on k∆Aℓ (τ )k is available over each interval Iℓ = [tℓ ,tℓ+1 ), ℓ ∈ Z+ : △ 1+ △ 1+ 0 ≤ k∆Aℓ (τ )k ≤ 0.25 = ρ1,1 = ρ1, j¯, 1+ = 0.9, τ ∈ I¯1+¯ , T¯ 1+¯ = 200, γ1,1 1, j 1, j S11 = △ 1− △ 1− 0.25 < k∆Aℓ (τ )k ≤ 1.155 = ρ1,1 = ρ1,q11 , 1− , T¯ 1− = 36, γ 1− τ ∈ I¯1,1 1,1 1,1 = 2.71 . +
+
+
−
−
1 | = | ln 1 | = 0.1054 and 1 = ln 1 = 0.9969. Condition (55) gives γ1,1 λ1,1 γ1,1 One has |λ1,1 j¯ | = |λ1,1 + + + + 1 1 1 1 ¯ ¯ ¯ ¯ T1, j¯ = T1,1 ≥ 348. As in this case T1, j¯ = T1,1 = 200, the stability of Σ can not be inferred from the coarser information, because the corresponding condition (55) is not satisfied. Es1+ by means of (56), the condition T¯ 1+ = T¯ 1+ ≥ 149 is obtained, so that the actual timating T¯1,1 1,1 1, j¯
Relaxed Stability Conditions for Linear Time-Varying Systems. . . +
279
+
1 = 200 guarantees the uniform asymptotic stability of Σ (in agreement with value T¯1,1 j¯ = T¯1,1 the fulfilment of (50)).
5.
The Robust Stabilization Problem
Robust stabilization of uncertain LTV systems has been widely investigated in the past from different points of view. Necessary and sufficient conditions for the quadratic stabilizability of uncertain LTV systems have been estabilished in [5]. Owing to the numerical complexity of the approach proposed in this reference, a notable effort has been later devoted to the problem of deriving easy to check conditions. A category of methods considers the LTV (continuous or discrete-time) plant as a LTI one affected by time-varying perturbations with a relatively “small” size bound and particular structured forms [2], [16], [30], [31], [40]. The approach proposed in [18], [32], [36], [39], assumes that the system matrices may have some arbitrarily large varying terms, but the uncertainties are required to satisfy the so called “matching conditions”. In the case of polytopic uncertainties, the conservatism inherent the quadratic stability paradigm has been greatly reduced using LMI techniques based on parameter dependent Lyapunov functions (see e.g. [9], [10], [17] and references therein). All the above references (save [2]) also assume a full state information. A static output feedback controller based on the LMI approach has been recently proposed in [15]. A comprehensive overview of methods for analysis and synthesis of uncertain LTV systems can be found in [3]. The robust synthesis method described in this section is based on the previously developed stability analysis and can be applied without assuming an accessible state vector or a particular structure on the way the physical parameters affect the system dynamics.
5.1.
Robust Controller Design
Consider the following LTV uncertain plant Σ p Dx(t) = A(t)x(t) + Bu(t), y(t) = Cx(t),
(65) (66)
where operator D is defined as in (1), u(·) ∈ Rm is the control input, x(·) ∈ Rn is the state, y(·) ∈ Rq is the output. It is assumed that: A5) A(·) satisfies the same assumptions A1, A3, A4 specified in Section 2.2; A6) A(t) is exactly known only at t = tℓ , ℓ ∈ Z+ , A7) the triplet (C, A(tℓ ), B) is pointwise reachable and observable ∀ ℓ ∈ Z+ . It is remarked that the assumption of constant B and C matrices is not strong. In fact, as shown in [4], it is always possible to define a suitable pre-filtering and/or post-filtering of u(·) and/or y(·) such that the resulting extended plant has constant actuator and sensor matrices, still keeping the reachability and observability properties of the original plant. The robust control of uncertain plants like Σ p originates, for example, in some situations where the plant dynamics is affected by changing operating conditions which make unrealistic the assumption of a perfect knowledge of all physical parameters at each time instant [20]. The idea is to control Σ p through a piecewise time invariant observer based controller Σc so that the resulting closed-loop system Σ f be characterized by a dynamical matrix A f (·)
280
Leopoldo Jetto and Valentina Orsini
satisfying assumptions A1-A4. In particular Σc must guarantee that each frozen A f (tℓ ) be exponentially αℓ -stable, ∀ ℓ ∈ Z+ . Then the stability of Σ f would be guaranteed imposing the same conditions of Theorem 1 to A f (·). △
With reference to the frozen plant Σ p,ℓ ≡ (C, Aℓ , B) (with A(tℓ ) = Aℓ ), define the following LTI observer based controller Σc,ℓ Dzℓ (t) = (Aℓ − LℓC)zℓ (t) + Buℓ (t) + Lℓ y(t), uℓ (t) = −Kℓ zℓ (t),
(67) (68)
where zℓ (·) ∈ Rn is the state of Σc,ℓ and the gains Kℓ , Lℓ are determined imposing the desired eigenvalues to Aℓ − BKℓ and Aℓ − LℓC respectively, which is surely possible by A7. The piecewise time invariant Σc is obtained keeping each fixed Σc,ℓ only acting for t ∈ Iℓ , ℓ ∈ Z+ . The feedback connection Σ f ,ℓ of the LTI Σc,ℓ with the LTV Σ p over each Iℓ = [tℓ ,tℓ+1 ), ℓ ∈ Z+ is described by the following pair Aℓ + ∆Aℓ (t) −BKℓ , Cf = C 0 . A f ,ℓ (·) = LℓC Aℓ − LℓC − BKℓ
Hence A f (t) = A f ,ℓ (t), ∀t ∈ Iℓ , ℓ ∈ Z+ , and using the same notation of (3), matrix A f ,ℓ (t) can be rewritten as A f ,ℓ (t) = A f ,ℓ + ∆A f ,ℓ (t), t ∈ Iℓ , ℓ ∈ Z+ , (69) where Aℓ −BKℓ , A f ,ℓ = A f ,ℓ (tℓ ) = LℓC Aℓ − LℓC − BKℓ ∆Aℓ (t) 0 ∆A f ,ℓ (t) = (A f ,ℓ (t) − A f ,ℓ )χ (t)[tℓ ,tℓ+1 ) = χ (t)[tℓ ,tℓ+1 ) . 0 0
△
(70) (71)
Let Φ f ,ℓ (·, ·) be the state transition matrix of the closed loop system Σ¯ f ,ℓ ≡ (C f , A f ,ℓ ) frozen at time tℓ . Recalling that by A7 it possible to choose the two matrices Kℓ and Lℓ such that λi {A f ,ℓ } = λi {Aℓ − BKℓ } ∪ λi {Aℓ − LℓC}, i = 1, 2, . . . , can be arbitrarily assigned, ∀ ℓ ∈ Z+ , one has kΦ f ,ℓ (t,t0 )k = k exp(A f ,ℓ (t−t0 ))k ≤ m f ,ℓ exp(α f ,ℓ (t−t0 )), ∀t ≥t0 , ∀t0 ≥ 0, t ∈ R+ , (72) △
where α f ,ℓ = max λi {A f ,ℓ }, or i
(t−t )
(t−t0 )
kΦ f ,ℓ (t,t0 )k = kA f ,ℓ 0 k ≤ m f ,ℓ α f ,ℓ
, ∀t ≥ t0 , ∀t0 ≥ 0, t ∈ Z+ ,
(73)
△
where α f ,ℓ = max |λi {A f ,ℓ }|. i
This suggests to control Σ p using a piecewise time-invariant controller Σc which, over each Iℓ = [tℓ ,tℓ+1 ), assumes the configuration of the corresponding LTI Σc,ℓ stabilizing the frozen plant Σ p,ℓ ≡ (C, Aℓ , B). The following theorem can now be proved.
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
281
Theorem 3. Let Σ p be the plant described by (65), (66), with k∆Aℓ (τ )k satisfying conditions (13)–(16), ∀ ℓ ∈ Z+ . Consider the family of LTI controllers Σc,ℓ , ℓ ∈ Z+ , given by (67), (68), stabilizing the corresponding frozen plant Σ p,ℓ . Then the piecewise time varying controller Σc assuming the configuration of the LTI Σc,ℓ over Iℓ = [tℓ ,tℓ+1 ), ℓ ∈ Z+ , gives an exponentially stable closed-loop system Σ f , if k∆Aℓ (·)k = k∆A f ,ℓ (·)k is such that:
i) in the case t ∈ R+ , condition (29) is satisfied, ∀t ∈ Iℓ , ℓ ∈ Z+ , where parameters + − γk,i j , i = 1, . . . , l, k = 1, . . . , ni , j = 1, . . . , pik and γk,i j , i = 1, . . . , l, k = 1, . . . , ni , j = △
1, . . . , qik , are computed as in (27), (28) with m¯ and α replaced by m¯ f = max m f ,ℓ ℓ
△
and α f = max α f ,ℓ respectively and ℓ
+ ρk,i j ,
− ρk,i j
which are the same parameters defining
conditions (13)–(16) on the norm of the open loop matrix ∆Aℓ (τ ), ∀ τ ∈ Iℓ , ∀ ℓ ∈ Z+ , ii) in the case t ∈ Z+ condition (50) is satisfied, ∀t ∈ Iℓ , ℓ ∈ Z+ , where parameters + − γk,i j , i = 1, . . . , l, k = 1, . . . , ni , j = 1, . . . , pik and γk,i j , i = 1, . . . , l, k = 1, . . . , ni , j = △
1, . . . , qik , are computed as in (48), (49) with m¯ and α replaced by m¯ f = max m f ,ℓ , ℓ
△
α f = max α f ,ℓ respectively and ℓ
+ − ρk,i j , ρk,i j
are the same parameters defining conditions
(13)–(16) on the norm of the open loop matrix ∆Aℓ (τ ), ∀ τ ∈ Iℓ , ∀ ℓ ∈ Z+ . Proof. By (69) and using the same notation of (3) and (4), the dynamical matrix A f (·), can be written as A f (·) = A′f (·) + ∆A′f (·), where A′f (t) =
∞
∑ A f ,ℓ χ (t)[t ,t
ℓ ℓ+1 )
,
∆A′f (t) =
ℓ=0
∞
∑ ∆A f ,ℓ (t),
(74)
ℓ=0
and k∆A f ,ℓ (t)k = k∆Aℓ (t)k, ∀t ∈ Iℓ , ℓ ∈ Z+ , by (71). Hence, recalling (72) and (73), one has that A f (·) satisfies the assumptions A1-A4 defined in Section 2.2. It follows that part i) of Theorem 3 is a direct consequence of Theorem 1, and part ii) of Theorem 2.
6.
Numerical Examples on Robust Stabilization
Example 6.1. Consider the classical example of a force f (·) applied to a mass, spring and damper shown in Figure 2, [34]. The values of physical parameters are: m = 1, b = 1, while the stiffness factor k(·) of the spring is assumed to be time-varying in the interval k(·) ∈ [5, 10]. Defining the state vector as x(·) = [x1 (·), x2 (·)]T = [y(·), y(·)] ˙ T , the following state space representation of Σ p is obtained x(t) ˙ = A(t)x(t) + Bu(t),
(75)
y(t) = Cx(t),
(76)
where u(·) = f (·), and C = [1, 0],
0 A(t) = k(t) − m
1 b , − m
B=
"
0 1 m
#
(77)
282
Leopoldo Jetto and Valentina Orsini
y0 b
k(t)
m y(t) f(t)
Figure 2. System containing a spring, a mass and a damper. The following prior knowledge on the way k(·) varies inside [5, 10] is assumed: △
i) the only known values of k(t) are kℓ = k(tℓ ) where tℓ = 20ℓ, ℓ ∈ Z+ . Hence, according to the notation of Section 2.2 one has l = 1, Iℓ = Iℓ1 = [20ℓ, 20(ℓ + 1)), ℓ ∈ Z+ , T = T1 = 20, ii) defining ∆kℓ (t) = k(t) − kℓ , t ∈ Iℓ , it is known that according to (13)–(16) inside each Iℓ one has +
+
+
+
1 1 1 1 0 ≤ k∆Aℓ (τ )k = |∆kℓ (τ )| ≤ ρ1,1 = ρ1,p 1 = 0.01, τ ∈ I1,1 = I1,p1 , −
−
−
−
1 1 1 1 0.01 < k∆Aℓ (τ )k = |∆kℓ (τ )| ≤ ρ1,1 = ρ1,q 1 = 1, τ ∈ I1,1 = I1,q1 , −
+
(79)
1
1
+
(78)
1
1
−
1 and I 1 are T 1 = 19.98, and T 1 = 0.02, respectively. and the lengths of I1,1 1,1 1,1 1,1 It is required to find a controller Σc yielding an internally exponentially stable closedloop system Σ f and a zero steady state error for a step function input. It is easily seen that each frozen plant Σ p,ℓ ≡ (C, Aℓ , B) is reachable and observable, so that the piecewise LTI controller Σc can be defined computing the corresponding family of observer based controllers Σc,ℓ , ∀ ℓ ∈ Z+ . Each Σc,ℓ is implemented according to the scheme corresponding to a tracking problem (see e.g. [28]). Each frozen closed loop dynamical matrix A f ,ℓ has the form Aℓ BKℓ1 −BKℓ , 0 (80) A f ,ℓ = −BcC Ac 1 LℓC BKℓ Aℓ − LℓC − BKℓ
where Bc = 1 and Ac = 0, define the state-space representation of the accessible state system △ providing the internal model of a step input and matrices K¯ ℓ = [Kℓ , −Kℓ1 ] and Lℓ are designed to assign the desired eigenvalues to B ¯ Aℓ 0 △ △ Kℓ , and Nℓ = Aℓ − LℓC, (81) − Mℓ = 0 −BcC Ac
Relaxed Stability Conditions for Linear Time-Varying Systems. . . respectively. One also has −BKℓ Aℓ + ∆Aℓ (t) BKℓ1 , ∀t ∈ Iℓ , ℓ ∈ Z+ , −BcC Ac 0 A f ,ℓ (t) = 1 LℓC BKℓ Aℓ − LℓC − BKℓ
283
(82)
so that k∆A f ,ℓ (t)k = k∆Aℓ (t)k = |∆kℓ (t)|, ∀t ∈ Iℓ , ℓ ∈ Z+ . In the present case, matrices K¯ ℓ and Lℓ have been computed imposing λi {Mℓ } = {−1, −2, −1.6}, λi {Nℓ } = {−1.5, −2.5}, so that α f = max α f ,ℓ = −1. Moreover, by gridℓ
ding the uncertainty domain of k(·), it is found that m¯ f = max m f ,ℓ = 69. Hence, accordℓ
ing to (72) one has k exp(A f ,ℓ (t − t0 ))k ≤ 69 exp(−(t − t0 )), ∀t0 , ∀t ≥ t0 . As (82) implies k∆A f ,ℓ (·)k = k∆Aℓ (·)k = |∆kℓ (·)|, then by (78) and (79), the stability condition (29) applied to the dynamical matrix A f (·) of the closed loop system Σ f results in +
−
+
−
1 1 1 1 |γ1,1 |T1,1 T1,1 > γ1,1 + ln(m¯ f ),
(83)
with +
+
−
−
1 1 γ1,1 = α f + m¯ f ρ1,1 = −1 + 69 · 0.01 = −0.31, 1 1 γ1,1 = α f + m¯ f ρ1,1 = −1 + 69 · 1 = 68,
+
1 T1,1 = 19.98,
−
1 T1,1 = 0.02.
It is seen that condition (83) is satisfied so that the considered robust tracking problem admits a solution in terms of a piecewise LTI controller Σc designed as explained in Section 5. Figure 3 shows the obtained output response for t ∈ [0, 100). y(t)
1.4 1.2 1 0.8 0.6 0.4 0.2 0 0
20
40
60
80
100
t
Figure 3. Output response of the closed loop system Σ f . The controller gains K¯ ℓ and Lℓ , computed at tℓ , ℓ = 0, 1, 2, 3, 4, are reported in Table 2. Example 6.2. The discrete time plant Σ p whose state space representation is given by the following triplet 1 0 0 C = [1, 0], A(·) = , (84) , B= 0 0.99θ (·) 0.0787
284
Leopoldo Jetto and Valentina Orsini Table 2. tℓ
Lℓ
K¯ ℓ
0
L0 = [3, −4.25]T
K¯ 0 = [1.8, 3.6, −3.2]
20
L1 = [3, −5.26]T
K¯ 1 = [0.79, 3.6, −3.2]
40
L2 = [3, −6.27]T
K¯ 2 = [−0.22, 3.6, −3.2]
60
L3 = [3, −7.28]T
K¯ 3 = [−1.23, 3.6, −3.2]
80
L4 = [3, −8.29]T
K¯ 4 = [−2.24, 3.6, −3.2]
has been considered by various authors (see e.g. [6], [23], [29]) assuming that θ (·) ∈ [0, 1] is a priori unknown but exactly measurable on line. The following prior knowledge on the way θ (·) varies inside [0, 1] is here assumed: △
i) the only known values of θ (t) are θℓ = θ (tℓ ) where tℓ = 90ℓ, ℓ ∈ Z+ . Hence, according to the notation of Section 2.2 one has l = 1, Iℓ = Iℓ1 = [90ℓ, 90(ℓ + 1)), ℓ ∈ Z+ , T = T1 = 90, ii) defining ∆θℓ (τ ) = θ (τ ) − θℓ , τ ∈ Iℓ , it is known that inside each Iℓ one has +
+
+
+
1−
1−
−4 1 1 1 1 0 ≤ k∆Aℓ (τ )k = 0.99|∆θℓ (τ )| ≤ ρ1,1 , τ ∈ I1,1 = ρ1,p = I1,p 1 = 10 1, 1
1−
1−
(85)
1
10−4 < k∆Aℓ (τ )k = 0.99|∆θℓ (τ )| ≤ ρ1,1 = ρ1,q1 = 10−1 , τ ∈ I1,1 = I1,q1 , 1
(86)
1
+
1 and where the relation k∆Aℓ (τ )k = 0.99|∆θℓ (τ )| is implied by (84) and the lengths of I1,1 1− are T 1+ = 89, and T 1− = 1, respectively. I1,1 1,1 1,1 Also in this case it is required to find a controller Σc robustly stabilizing Σ f and giving a null steady state error for a step input function. As each frozen plant Σ p,ℓ ≡ (C, Aℓ , B) is reachable and observable, the controller Σc can be computed as in Example 1. Each frozen closed loop dynamical matrix A f ,ℓ has the form (80) where Bc = 1 and Ac = 1 define the state-space representation of the accessible state system providing the internal model of a △ discrete step input and matrices K¯ ℓ = [Kℓ , −Kℓ1 ] and Lℓ are designed to assign the desired eigenvalues to matrices Mℓ and Nℓ given by (81). As in the continuous time case, matrix A f ,ℓ (t), t ∈ Iℓ , ℓ ∈ Z+ , is again given by (69), so that k∆A f ,ℓ (t)k = k∆Aℓ (t)k, ∀t ∈ Iℓ , ℓ ∈ Z+ , with k∆Aℓ (t)k = 0.99|∆θℓ (t)|, by (84). Matrices K¯ ℓ and Lℓ have been determined imposing λi {Mℓ } = {0.9, −0.85, 0.8}, λi {Nℓ } = {−0.8, 0.85}, so that α f = max α f ,ℓ = 0.9. Moreover, by gridding the uncerℓ
tainty domain of θ (·), it is found that m¯ f = max m f ,ℓ = 136. Hence, according to (73) ℓ
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
285
y(t)
1.4 1.2 1 0.8 0.6 0.4 0.2 0 0
100
200
300
400
500
600
700
800
900
t
Figure 4. Output response of the closed loop system Σ f . (t−t )
one has kA f ,ℓ 0 k ≤ 136 · 0.9(t−t0 ) , ∀t0 , ∀t ≥ t0 . As k∆A f ,ℓ (·)k = k∆Aℓ (·)k = 0.99|∆θℓ (·)|, by (78) and (79), the stability condition (50) applied to the dynamical matrix A f (·) of the closed loop system Σ f results in +
+
−
−
i i i i |T1,1 T1,1 |λ1,1 > λ1,1 + ln(m¯ f ), +
−
+
−
+
(87)
−
1 = ln 1 and 1 = ln 1 , T 1 = 89, T 1 = 1 and with λ1,1 γ1,1 λ1,1 γ1,1 1,1 1,1 +
+
−
−
1 1 γ1,1 = α f + m¯ f ρ1,1 = 0.9 + 136 · 10−4 = 0.9136, 1 1 γ1,1 = α f + m¯ f ρ1,1 = 0.9 + 136 · 10−1 = 14.5, +
−
1 | = 0.0904, and λ 1 = 2.6741. It is seen that condition (87) is satisfied so whence |λ1,1 1,1 that the considered robust tracking problem admits a solution in terms of a piecewise LTI controller Σc designed as explained in Section 5. Figure 4 shows the obtained output response for t ∈ [0, 900). The controller gains K¯ ℓ and Lℓ , computed at tℓ , ℓ = 0, 1, 2, . . . , 9 are reported in Table 3.
7.
Conclusion
This chapter has introduced the notion of “small average variation plant” and used it to define relaxed stability conditions whose fulfillment does not require pointwise stability and tolerates physical parameters which may be quickly time varying and/or to exhibit large excursions over some time intervals. In other words, conditions (29) and (50) can be rephrased saying that the exponential stability can be also attained letting the frozen-time, time-varying system be unstable over some time-intervals, provided this is compensated by a sufficiently large stability margin at some other frozen-times. The numerical simulations and comparisons with other methods for stability analysis show that really relaxed conditions are obtained. Besides its utility as a new analysis tool, another merit of the present
286
Leopoldo Jetto and Valentina Orsini Table 3. tℓ
Lℓ
K¯ ℓ
0
L0 = [0.95, −6.8]T
K¯ 0 = [73.0623, 14.6125, −4.7014]
90
L1 = [1.0501, −6.7499]T
K¯ 1 = [73.0623, 15.8842, −4.7014]
180
L2 = [1.1502, −6.4994]T
K¯ 2 = [73.0623, 17.1560, −4.7014]
270
L3 = [1.2503, −6.0485]T
K¯ 3 = [73.0623, 18.4278, −4.7014]
360
L4 = [1.3504, −5.3973]T
K¯ 4 = [73.0623, 19.6996, −4.7014]
450
L5 = [1.4505, −4.5457]T
K¯ 5 = [73.0623, 20.9714, −4.7014]
540
L6 = [1.5505, −3.4938]T
K¯ 6 = [73.0623, 22.2432, −4.7014]
630
L7 = [1.6506, −2.2415]T
K¯ 7 = [73.0623, 23.5150, −4.7014]
720
L8 = [1.7507, −0.7888]T
K¯ 8 = [73.0623, 24.7868, −4.7014]
810
L9 = [1.8508, 0.8642]T
K¯ 9 = [73.0623, 26.0586, −4.7014]
approach is its impact in the robust control of uncertain LTV plants. As shown in Section 5, the stability analysis developed in the first part of the chapter allows the definition of an easy to use robust synthesis method without strong assumptions like an accessible state vector or a particular parametric dependence.
References [1] Amato, F., Celentano, G., & Garofalo, F. (1993). New sufficient conditions for the stability of slowly varying linear systems. IEEE Transaction Automatic Control, 38, 1409–1411. [2] Amato, F., Pironti, A. & Scala, S. (1996). Necessary and sufficient conditions for quadratic stability and stabilizability of uncertain linear time-varying systems. IEEE Transaction Automatic Control, 41, 125–128. [3] Amato, F. Robust control of linear systems subject to uncertain time varying parameters; Springer-Verlag: Berlin, 2006.
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
287
[4] Apkarian, P. & Gahinet, P. (1995). A convex characterization of gain scheduled H∞ controllers. IEEE Transaction Automatic Control, 40, 853–864. [5] Barmish, B. (1983). Stabilization of uncertain systems via linear control. IEEE Transaction Automatic Control, 28, 848-850. [6] Casavola, A., Famularo, D. & Franze, G. (2002). A feedback min-max MPC algorithm for LPV systems subject to bounded rate of change of parameters. IEEE Transaction Automatic Control, 47, 1147–1153. [7] Chesi, G. , Garulli, A. , Tesi, A. , & Vicino, A. (2007). Robust stability of time-varying polytopic systems via parameter-dependent homogeneous Lyapunov functions. Automatica, 43, 309–316. [8] Coppel, W. A. Dichomoties in stability theory; Lecture Notes in Mathematics, 629; Springer: Berlin–New York, 1978. [9] Daafouz, J. & Bernussou, J. Poly-quadratic stability and H∞ performance for discretetime systems with time varying uncertainties. in Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL, 267–272, 2001. [10] Daafouz, J. & Bernussou, J. (2001). Parameter dependent Lyapunov functions for discrete-time systems with time varying parametric uncertainties. Systems & Control Letters, 43, 355–359. [11] Dahleh, M., & Dahleh, M. (1991). On slowly time-varying systems. Automatica, 27, 201–205. [12] Desoer, C. A. (1969). Slowly varying systems x(t) ˙ = Ax(t). IEEE Transaction Automatic Control, 14, 780–781. [13] Desoer, C. A. (1970). Slowly varying discrete-systems xi+1 = Ai xi . Electronics Letters, 6, 339–340. [14] Desoer, C. A. ; Vidyasagar, M. Feedback systems: input-output properties; Academic Press: NY, 1975; 252–254. [15] Dong J. & Yang G. H. (2008). Robust static output feedback control for linear discretetime systems with time-varying uncertainties. Systems & Control Letters, 57, 123– 131. [16] Garcia, G., Bernussou, J. & Arzelier, D. (1994). Robust stabilization of discrete-time linear systems with norm bounded time-varying uncertainty Systems & Control Letters, 22, 327–339. [17] Geromel, J. C. & Colaneri, P. (2006). Robust stability of time varying polytopic systems. Systems & Control Letters, 55, 81–85. [18] Hu, S. & Wang, J. (2001). Quadratic stabilizability of a new class of linear systems with structural independent time-varying uncertainty. Automatica, 37, 51–59.
288
Leopoldo Jetto and Valentina Orsini
[19] Ilchmann, A., Owens, D. H., & Pratzel-Walters, D. (1987). Sufficient conditions for stability of linear time-varying systems. Systems & Control Letters, 9, 157–163. [20] Ippoliti, G., Jetto, L. & Longhi, S. (2005). Switching based supervisory control of underwater vehicles. In: G.N. Roberts & R. Sutton (Eds.), Advances in unmanned marine vehicles, IEE Control Engineering Series (pp. 105–121). [21] Jetto, L. & Orsini, V. (2007). Relaxed sufficient conditions for the stability of continuous and discrete-time linear time-varying systems in Proceedings of the 46 th. IEEE Conference on Decision and Control, New Orleans, Louisiana. [22] Jetto, L. & Orsini, V. (2009). Relaxed conditions for the exponential stability of a class of linear time-varying systems. IEEE Transaction Automatic Control, 54, 1580-1585. [23] Kothare, M. V., Balakrishnan, V. & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities, Automatica, 32, 1361-1379. [24] Krause J. M., & Kumar, K. S. P. (1986). An alternative stability analysis framework for adaptive control. Systems & Control Letters, 7, 19–24. [25] Kreisselmeier, G. (1985). An approach to stable indirect adaptive control. Automatica, 21, 425-431. [26] Lien, C. H. (2004). Robust observer based controller of systems with state perturbations via LMI approach. IEEE Transaction Automatic Control, 49, 1365-1370. [27] Mullhaupt, Ph., Buccieri, D., & Bonvin, D. (2007). A numerical sufficiency test for the asymptotic stability of linear time-varying systems. Automatica, 43, 631-638. [28] Ogata, K. Modern Control Engineering; Prentice-Hall: New Jersey, 2002; Vol. 12, pp. 847–850. [29] Park, P. & Jeong, S. C. (2004). Constrained RHC for LPV systems with bounded rates of parameter variations. Automatica, 40, 865-872. [30] Petersen, I. R., & Hollot, C. V. (1986). A Riccati equation approach to the stabilization of uncertain linear systems. Automatica, 22, 397–411. [31] Petersen, I. R. (1987). A stabilization algorithm for a class of uncertain linear systems. Systems & Control Letters, 8, 351-357. [32] Petersen, I. R. (1988). Quadratic stabilizability of uncertain linear systems containing both constant and time-varying uncertain parameters. Journal of Optimization Theory and Applications, 57, 439-461. [33] Rosenbrock, H. H. (1963). The stability of linear time-dependent control systems. International Journal of Electronic Control, 15, 73-80. [34] Shinners, S. Modern Control System Theory and Design, J. Wiley and Sons: New York, 1998; Vol. 3, pp. 190.
Relaxed Stability Conditions for Linear Time-Varying Systems. . .
289
[35] Solo, V. (1994). On the stability of slowly time-varying linear systems. Mathematics of Control, Signals, and Systems, 7, 331–350. [36] Thorp, J. S. & Barmish, B. R. (1981). On guaranteed stability of uncertain linear systems via linear control. Journal of Optimization Theory and Applications, 35, 559579. [37] Vidyasagar, M. Nonlinear Systems Analysis; Prentice Hall, Englewood Cliffs: NJ, 1978; Vol. 5, pp. 170–171. [38] Voulgaris, P. G., & Dahleh. M. A. (1995). On ℓ∞ to ℓ∞ , performance of slowly varying systems. Systems & Control Letters, 24, 243–249. [39] Wei, K. (1990). Quadratic stabilizability of linear systems with structural independent time-varying uncertainties. IEEE Transaction Automatic Control, 35, 268–277. [40] Zhou, K., & Khargonekar, P. (1988). Robust stabilization of linear systems with normbounded time varying uncertainty. Systems & Control Letters, 10, 17–20.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 291-328
ISBN 978-1-60876-500-3 c 2011 Nova Science Publishers, Inc.
Chapter 12
A D ECOMPOSITION M ETHOD TO S OLVE N ON -S YMMETRIC VARIATIONAL I NEQUALITIES Gabriela F. Reyero∗ and Rafael V. Verdes† Depto. Matem´atica - Fac. Ciencias Exactas, Ingenier´ıa y Agrimensura Universidad Nacional de Rosario - Argentina
1.
Abstract
In this chapter we present a decomposition method to solve linear nonsymmetric variational inequalities. Some preliminary ideas concerning this method and especially related to symmetric case can be seen in [6]– [8], [13]– [20], as well as some concrete applications. The method itself stems from the general principles exposed in [11]. The procedure was developed to solve the set of junction problems which were presented in [10]. Basically the problem consists in solving the variational inequality: Find u ¯ such that a (¯ u, v − u ¯) ≥ (f, v − u ¯) ∀ v ∈ K,
(1)
where K is a closed convex set. In our work, we suppose that K can be decomposed in the following form: [ ˆ I ), K(v (2) K= vI ∈KI
where vI is an auxiliary variable which belongs to a convex set KI . The decomposition of K given by (2) implies that the original problem – which is equivalent to a saddle-point problem on the whole set K × K – can be deˆ I ), for each composed in a set of variational inequalities defined on the sets K(v value of the auxiliary variable vI . These variational inequalities correspond to ∗ †
E-mail address: [email protected] E-mail address: [email protected]
292
Gabriela F. Reyero and Rafael V. Verdes ˆ I ) × K(v ˆ I ) which, simplified saddle-point problems defined on the set K(v generally, is smaller than K × K. In the second part of the method it is found ˆ vI ), where u the privileged v¯I such that u ¯ ∈ K(¯ ¯ is the original solution; u ¯ itself is computed solving the simplified variational inequality: ˆ vI ). a (¯ u, v − u ¯) ≥ (f, v − u ¯) ∀ v ∈ K(¯ The chapter is organized in the following form: In the Section 2 we present the original VI, its properties and an equivalent reformulation as a saddle-point problem. In the Section 3 we present the methodology of decomposition and the solution by a system of hierarchically coupled variational inequalities. In the Section 4 an iterative algorithm is described and its convergence is proved. In the Appendix 1 some properties of continuity and convexity of some auxiliary functions are proved. In the Appendix 2 some properties of differentiability of the same auxiliary functions are proved.
2. 2.1.
Variational Problem The Original Problem
Let V = ℜn , we consider on V × V the bilinear coercive and continuous form a , i.e. there exist α > 0, and β > 0 such that a(v, v) ≥ α kvk2 V |a(v, u)| ≤ β kvk kuk V V
∀ v ∈ V, ∀ v, u ∈ V.
(3)
We also consider the continuous linear form L defined by L (v) = (f, v) where f ∈ V , and (·, ·) denotes the scalar product in V. Let K be a non empty, compact and convex subset of V . The problem consists of solving the variational inequality: VI
Find u ¯ ∈ K such that a (¯ u, v − u ¯) ≥ (f, v − u ¯) ∀ v ∈ K.
(4)
Let A be the linear operator associated to the bilinear form a, i.e. a(v, u) = (Av, u) ∀ u ∈ V and A∗ the adjoint of A. Since A is monotone, hemicontinuous and coercive on K, the VI can be stated in the following equivalent way (see [3]) Find u ¯ ∈ K such that a (v, v − u ¯) ≥ (f, v − u ¯) ∀ v ∈ K. We know from ( [3] and [12]) that this VI has a unique solution.
(5)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
2.2.
293
Reformulation as a Saddle-Point Problem
We consider the following non linear function F, defined on V × V F (v, u) = a (v, v − u) − L (v − u) ∀ v, u ∈ V.
(6)
Remark 1. It is clear that F (u, u) = 0, ∀ u ∈ V. It is easy to prove that F is a convexconcave function (see the proof in Appendix 1), so it is natural to associate F with its set of saddle- points. Let us remember here the definition of saddle-point. Definition 1. Let Y ⊂ ℜp , Z ⊂ ℜq and ψ : Y × Z → ℜ, we say that (ˆ y , zˆ) ∈ Y × Z is a saddle-point of ψ if ψ(ˆ y , z) ≤ ψ(ˆ y , zˆ) ≤ ψ(y, zˆ) ∀ y ∈ Y, ∀ z ∈ Z. We study in the following paragraphs the saddle-points of F and its relation with the solution of the VI. The following proposition gives the equivalence between the search of saddle-points of F and the computation of the solution u ¯ of VI . Proposition 1. The functional F has a unique saddle-point on K × K. This point belongs to the diagonal of K × K and it has the form (¯ u, u ¯) , where u ¯ is the solution of the VI; in other words, we have F (¯ u, u) ≤ F (¯ u, u ¯) ≤ F (v, u ¯) ∀ (v, u) ∈ K × K.
(7)
Proof. 1. Existence: Let u ¯ be the solution of VI and (6) we have that F (¯ u, u) ≤ 0 ∀ u ∈ K, F (v, u ¯) ≥ 0 ∀ v ∈ K. As F (¯ u, u ¯) = 0, we have
F (¯ u, u) ≤ F (¯ u, u ¯) ≤ F (v, u ¯) ∀ v, u ∈ K and then (¯ u, u ¯) is a saddle-point of F on K × K. 2. Uniqueness: Let (u1 , u2 ) be a saddle-point of F on K × K, so F (u1 , u) ≤ F (u1 , u2 ) ≤ F (v, u2 ) ∀ (v, u) ∈ K × K.
(8)
Taking u = u1 and v = u2 in (8) , we have 0 = F (u1 , u1 ) ≤ F (u1 , u2 ) ≤ F (u2 , u2 ) = 0 and then F (u1 , u2 ) = 0.
(9)
294
Gabriela F. Reyero and Rafael V. Verdes From (8) and (9) , we obtain F (u1 , u) ≤ 0 ≤ F (v, u2 ) which can be written in the following form a (u1 , u − u1 ) ≥ (f, u − u1 ) ∀ u ∈ K, a (v, v − u2 ) ≥ (f, v − u2 ) ∀ v ∈ K.
This means that u1 and u2 are solutions of VI. Since u ¯ is the unique solution of VI, we conclude that u1 = u2 = u ¯.
Remark 2. Proposition 1 means that the VI is equivalent to the following problem SP
3.
Find u ¯ ∈ K such that (¯ u, u ¯) is a saddle-point of F in K × K.
Solution by a Decomposition Method
To find by a decomposition method the element u ¯ (the unique element of K such that (¯ u, u ¯) is a saddle-point of F on K × K or the unique solution of VI), we analyze the case where the convex set K is the union of convex sets and we propose a hierarchical decomposition problem using this family of convex sets.
3.1.
Definitions and Preliminary Results
Definition 2. Let K be a non empty convex subset of V, ϕ ∈ F (V, ℜ) a real function, we say that 1. ϕ is a convex function on K if ∀ λ ∈ (0, 1), ∀ x1 , x2 ∈ K, ϕ (λx1 + (1 − λ) x2 ) ≤ λϕ (x1 ) + (1 − λ) ϕ (x2 ) . 2. ϕ is a strictly convex function on K if ∀ λ ∈ (0, 1), ∀ x1 , x2 ∈ K, x1 6= x2 ϕ (λx1 + (1 − λ) x2 ) < λϕ (x1 ) + (1 − λ) ϕ (x2 ) . 3. ϕ is a strongly convex function on K if there exists δ > 0 (named coercivity coefficient) such that ∀ λ ∈ (0, 1) , ∀ x1 , x2 ∈ K, ϕ (λx1 + (1 − λ) x2 ) ≤ −δλ (1 − λ) kx1 − x2 k2 + λϕ (x1 ) + (1 − λ) ϕ (x2 ) . Definition 3. Let Y be a convex subset of ℜp , Z a convex subset of ℜq , a function ψ : Y × Z → ℜ is convex-concave in Y × Z if ∀ z ∈ Z, y → ψ(y, z) is a convex function and ∀ y ∈ Y , z → ψ(y, z) is a concave function. Definition 4. Let E : Dom (E) → V be an operator, if W ⊂ Dom (E) ⊂ V then 1. E is monotone on W if (E (u) − E (v) , u − v) ≥ 0 ∀ u, v ∈ W,
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
295
2. E is strongly monotone on W if there exists η > 0 such that (E (u) − E (v) , u − v) ≥ η ku − vk2 ∀ u, v ∈ W. The following propositions are valid (see its proof in [3], [21]) Proposition 2. Let Y ⊂ ℜp , Z ⊂ ℜq , a function ψ : Y × Z → ℜ has a saddle-point in Y × Z if and only if max min ψ(y, z) = min max ψ(y, z). z∈Z y∈Y
y∈Y z∈Z
Proposition 3. The set of saddle-points of ψ has the form Yo × Zo where Yo ⊂ Y and Zo ⊂ Z. Let Y and Z be two spaces of finite dimension and ψ : Y × Z → ℜ such that (H1) Y1 ⊂ Y is a non empty, compact and convex set, (H2) Z1 ⊂ Z is a non empty, compact and convex set, (H3) ∀ y ∈ Y1 , z → ψ(y, z) is a concave and upper-semicontinuous function (u.s.c.). (H4) ∀ z ∈ Z1 , y → ψ(y, z) is a convex and lower-semicontinuous function (l.s.c.). Proposition 4. Under the hypotheses (H1)–(H4) the set of saddle-points of ψ in Y1 × Z1 is Yo × Zo , where Yo and Zo are compact and convex sets. Proposition 5. If z → ψ(y, z) is strictly concave ∀ y ∈ Y, Zo has at most one point. Proposition 6. If y → ψ(y, z) is strictly convex ∀ z ∈ Z, Yo has at most one point.
3.2.
Decomposition of the Convex K
Let XI = ℜm (we will call it the intermediate space). We suppose that there exists a compact and convex set KI ⊂ XI , and an application which assigns to each vI ∈ KI a ˆ (vI ) of the space V in such a way that the convex set K compact and convex subset K verifies the following decomposition [ ˆ I ). K(v (10) K= vI ∈KI
Definition 5. Let Y, Z ⊂ K, we define
!
d1 (Y, Z) = max sup inf ky − zk , sup inf ky − zk , y∈Y z∈Z
d2 (Y, Z) = inf inf ky − zk . y∈Y z∈Z
z∈Z y∈Y
296
Gabriela F. Reyero and Rafael V. Verdes
Remark 3. It can be seen that d1 is a distance between subsets of K named the Hausdorff distance. ˆ (vI ) Properties of the family of convex sets K ˆ (vI ) : vI ∈ KI } verifies the following We will suppose that the family of convex sets {K hypotheses: 1. ∀ uI , vI ∈ KI and ∀ λ ∈ (0, 1) , ˆ I ) + (1 − λ) K(v ˆ I) ⊂ K ˆ (λuI + (1 − λ) vI ) . λK(u 2. Given uI , vI ∈ KI there exist positive constants c and γ such that ˆ (uI ) , K ˆ (vI ) ≤ c kuI − vI k , d1 K XI d2
ˆ ˆ K (uI ) , K (vI ) ≥ γ kuI − vI kXI .
(11)
(12) (13)
ˆ (vI ) there exists a linear and continuous 3. For each vI ∈ KI and for each v ∈ K operator Tv : X I → V such that ˆ (vI + δvI ) . if vI ∈ KI , δvI ∈ XI and vI + δvI ∈ KI =⇒ v + Tv (δvI ) ∈ K (14) Moreover, the family of operators (Tv )v∈K(v ˆ I ) is uniformly continuous in norm with respect to the parameter v, i.e. kTv − Tv˜ k ≤ C kv − v˜kV ∀ v, v˜ ∈ K.
3.3.
(15)
Hierarchical Solution of the Saddle-Point Problem
Taking into account (10) we can decompose hierarchically the problem SP. From Proposition 2 we know that this problem is associated to the equation max min F (v, u) = 0. u∈K v∈K
Under the hypothesis (10) we have the following concatenated max-min relation max min F (v, u) = max u∈K v∈K
max
min
min F (v, u) = 0.
uI ∈KI u∈K(u ˆ I ) vI ∈KI v∈K(v ˆ I)
(16)
We introduce the function FI (vI , u) = min F (v, u) . ˆ I) v∈K(v
(17)
Now (16) becomes max min F (v, u) = max u∈K v∈K
max
min FI (vI , u) .
uI ∈KI u∈K(u ˆ I ) vI ∈KI
(18)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
297
We will prove in the Appendix 1 that FI is convex-concave, then we have that max
min FI (vI , u) = min
ˆ I ) vI ∈KI u∈K(u
max FI (vI , u) .
vI ∈KI u∈K(u ˆ I)
(19)
Replacing (17) and (19) , the condition (16) is written hierarchically as: max min
max
min F (v, u) = 0.
uI ∈KI vI ∈KI u∈K(u ˆ I ) v∈K(v ˆ I)
(20)
We consider the function FII (vI , uI ) = max FI (vI , u) , ˆ I) u∈K(u
then (20) has the form max min FII (vI , uI ) = 0.
uI ∈KI vI ∈KI
(21)
We will see in Appendix 1 that the following properties hold: • FII is convex-concave in KI × KI . • FII is continuous in KI × KI . • FII (·, uI ) : KI → ℜ is strictly convex ∀ uI ∈ KI . • FII (uI , uI ) = 0 ∀ uI ∈ KI . At the light of these properties, it is natural to consider the searching of saddle-points of FII . Therefore, we will associate to (21) the saddle-point problem SPI
Find v¯I , u ¯I ∈ KI such that (¯ vI , u ¯I ) is a saddle-point of FII in KI × KI .
For this problem we have this following result: Lemma 1. There is a unique saddle-point (¯ uI , u ¯I ) of FII . Proof. FII is a continuous, convex-concave function and KI is a compact and convex set, then the set of saddle-point of FII is not empty. Moreover, by Propositions 4 and 6, the set of saddle-points has the form {¯ uI } × U. So, let (¯ uI , u ˆI ) be a saddle-point of FII , then FII (¯ uI , uI ) ≤ FII (¯ uI , u ˆI ) ≤ FII (vI , u ˆI ) ∀ vI , uI ∈ KI .
(22)
Using u ¯I at the left hand side of (22) and u ˆI at the right hand side of (22) , we get FII (¯ uI , u ¯I ) ≤ FII (¯ uI , u ˆI ) ≤ FII (ˆ uI , u ˆI ) , but FII (¯ uI , u ¯I ) = FII (ˆ uI , u ˆI ) = 0, then FII (¯ uI , u ˆI ) = 0. Moreover, FII (·, uI ) is strictly uI + u ˆI ), u ˆI ) < 0 and as (¯ uI , u ˆI ) is a saddle-point, we convex; so, if u ˆI 6= u ¯I then FII ( 12 (¯ have 0 = FII (¯ uI , u ˆI ) ≤ FII ( 12 (¯ uI + u ˆI ), u ˆI ) < 0. Therefore we arrive at a contradiction, this contradiction implies that u ˆI = u ¯I and then the set of saddle-points comprises only the point (¯ uI , u ¯I ).
298
Gabriela F. Reyero and Rafael V. Verdes
3.3.1. A Subordinated Saddle-Point Problem In order to find a relation between the solution of SP and SPI , we will associated to any ˆ (uI ) defined in the following way: point uI ∈ KI the point SuI ∈ K Definition 6. 1. SuI is the unique solution of the restricted variational inequality: VIS
ˆ (uI ) , SuI ∈ K ˆ (uI ) , a (SuI , v − SuI ) ≥ (f, v − SuI ) ∀ v ∈ K
ˆ (uI ) such that (SuI , SuI ) is a saddle-point of F in 2. SuI is the unique element of K ˆ ˆ K (uI ) × K (uI ) . Proposition 7. The two forms of the previous definition are equivalent. ˆ (uI ) instead of K. Proof. Is the same one as in Proposition 1, considering K We will prove now, using only the hypothesis (12) that the application uI → SuI is continuous in KI . Proposition 8. If SuI is the solution of VIS , then SuI is a H¨older continuous function with respect to the parameter vI , i.e. 1
kSvI − SuI k ≤ kvI − uI k 2 .
(23)
Proof. Let uI ∈ KI and vI ∈ KI , by definition of SuI and SvI we have ˆ (uI ) , (ASuI − f, v − SuI ) ≥ 0 ∀ v ∈ K ˆ (vI ) . (ASvI − f, v − SvI ) ≥ 0 ∀ v ∈ K
(24) (25)
ˆ I ) such that By definition of d1 and (12) we have that ∃ u ˆ ∈ K(v ˆ I ), K(v ˆ I )) ≤ c kuI − vI k kˆ u − SuI kV ≤ d1 (K(u XI
(26)
ˆ I ) such that and ∃ vˆ ∈ K(u ˆ I ), K(v ˆ I )) ≤ c kuI − vI k . kˆ v − SvI kV ≤ d1 (K(u XI
(27)
By virtue of (24) and (25) we have (ASuI − f, vˆ − SuI ) ≥ 0,
(ASvI − f, u ˆ − SvI ) ≥ 0.
We obtain from (28) and (29) (ASuI , vˆ − SuI ) − (ASvI , SvI − u ˆ) ≥ (f, u ˆ − SuI + vˆ − SvI )
(28) (29)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
299
then A (SuI − SvI ) , SvI − SuI + (ASuI , vˆ − SvI ) + (ASvI , u ˆ − SuI ) ≥ (f, u ˆ − SuI + vˆ − SvI ) (30)
Considering that K is bounded and the relations (3), (26) and (27), we obtain
|(f, u ˆ − SuI )| ≤ kf k kˆ u − SuI k ≤ C kuI − vI kXI , |(f, vˆ − SvI )| ≤ kf k kˆ v − SvI k ≤ C kuI − vI kXI ,
|(ASuI , vˆ − SvI )| ≤ C kˆ v − SvI k ≤ C kuI − vI kXI ,
|(ASvI , u ˆ − SuI )| ≤ C kˆ u − SuI k ≤ C kuI − vI kXI , A (SuI − SvI ) , SvI − SuI ≤ −α kSuI − SvI k2 .
So, from (30) we get
α kSuI − SvI k2 ≤ C kuI − vI k , or, in the equivalent form 1
kSvI − SuI k ≤ C kvI − uI k 2 . If we use now the hypotheses (14) and (15) we can strengthen the previous continuity result to a Lipschitz continuity result. Proposition 9. The operator S : KI → V is Lipschitz continuous, i.e. there exists kS which verifies, ∀ uI ∈ KI and ∀ δuI ∈ XI such that uI + δuI ∈ KI
S (uI + δuI ) − S (uI ) ≤ kS kδuI k .
Proof. To simplify the notation we will use in the following paragraphs a common letter C to denote a generic constant independent of any particular point u ∈ K, uI ∈ KI , etc. (i.e. those constants C depend only on the data of the problem: the form a (·, ·) , the convex K, etc.). Let uI ∈ KI , uI + δuI ∈ KI . By definition of the operator S we have
We put
ˆ (uI ) (ASuI − f, v − SuI ) ≥ 0 ∀ v ∈ K ˆ (uI + δuI ) . AS (uI + δuI ) − f, v − S (uI + δuI ) ≥ 0 ∀ v ∈ K
(31) (32)
ˆ (uI + δuI ) in (32) and v = SuI + TSuI (δuI ) ∈ K ˆ (uI ) in (31) , v = S (uI + δuI ) + TS(uI +δuI ) (−δuI ) ∈ K we obtain AS (uI + δuI ) − f, S (uI + δuI ) − SuI − TSuI (δuI ) ≤ 0 ASuI − f, S (uI + δuI ) − SuI − TS(uI +δuI ) (δuI ) ≥ 0.
(33) (34)
300
Gabriela F. Reyero and Rafael V. Verdes
Substracting (33)–(34): AS (uI + δuI ) − ASuI , S (uI + δuI ) − SuI ≤ AS (uI + δuI ) , TSuI (δuI ) − ASuI , TS(uI +δuI ) (δuI ) + f, TS(uI +δuI ) − TSuI (δuI ) = AS (uI + δuI ) − ASuI , TSuI (δuI ) + ASuI , TSuI − TS(uI +δuI ) (δuI ) + f, TS(uI +δuI ) − TSuI (δuI ) . Then, by virtue of (3), we have
α kS (uI + δuI ) − SuI k2 ≤ kAS (uI + δuI ) − ASuI k kTSuI k kδuI k
+ kASuI k TSuI − TS(uI +δuI ) kδuI k + kf k TS(uI +δuI ) − TSuI kδuI k
≤ β kS (uI + δuI ) − SuI k kTSuI k kδuI k + kAS uI k C kS (uI + δuI ) − SuI k kδuI k + kf k C kS (uI + δuI ) − SuI k kδuI k ,
therefore α kS (uI + δuI ) − SuI k ≤ β kTSuI k + C kAS uI k + C kf k kδuI k .
Let uoI ∈ KI , we have
kTSuI k ≤ TSuI − TSuoI + TSuoI ≤ C kSuI − SuoI k + TSuoI .
(35)
(36)
Putting uoI instead of uI and uI instead of uI + δuI , in (35) it results
α kSuI − SuoI k ≤ β TSuoI + C kAS uoI k + C kf k kuI − uoI k ≤ C
since KI is bounded. Then kSuI − SuoI k ≤ C (C independent of uI ). Consequently kSuI k ≤ C ∀ uI ∈ KI .
(37)
kTSuI k ≤ C ∀ uI ∈ KI .
(38)
Then in (36) we obtain: Moreover, kAS uI k ≤ kAS uI − AS uoI k+kAS uoI k ≤ β kSuI − SuoI k+kAS uoI k ≤ C ∀ uI ∈ KI . (39) Taking into account (38) and (39) , we obtain that (35) is α kS (uI + δuI ) − SuI k ≤ kS kδuI k , where kS is a constant independent of uI and δuI .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
301
3.3.2. Relations between Saddle-Points of F and Saddle-Points of FII The operator S carries the elements of the parametric space KI to elements of the original convex set K. Now we define the operator P, which identifies the parametrization of the set K in terms of the parameter uI ∈ KI. We introduce the operator P P : K → KI , P u = uI ,
ˆ (uI ). where uI is the unique element of KI such that u ∈ K The relations between SP–SPI are depicted by the following diagram SP
⇐⇒
SPI
K
←→
KI
P (K)
=
K
=
KI [
ˆ (uI ) K
uI ∈KI
u ¯ ←→ u ¯I
Pu ¯
=
u ¯I
Su ¯I
=
u ¯.
The two last fundamental relations are proved below in Theorem 1 and Theorem 2. Theorem 1. Let u ¯ be such that (¯ u, u ¯) is a saddle-point of F in K × K then (P u ¯, P u ¯) is a saddle-point of FII in KI × KI . Proof. By definition of saddle-points we have F (¯ u, u) ≤ F (¯ u, u ¯) ≤ F (v, u ¯) ∀ v, u ∈ K.
(40)
F (u, u) = 0 ∀ u ∈ K,
(41)
We know that ˆ (vI ) and therefore then 0 ≤ F (v, u ¯) ∀ v ∈ K
0 ≤ FI (vI , u ¯) . So, denoting u ˆI = P u ¯ we obtain 0 ≤ FI (vI , u ¯) ≤ max FI (vI , u) = FII (vI , u ˆI ) . ˆ uI ) u∈K(ˆ
From (40) , we have F (¯ u, v) ≤ 0 ∀ v ∈ K, then FI (ˆ uI , v) =
min F (u, v) ≤ F (¯ u, v) ≤ 0 ∀ v ∈ K
ˆ uI ) u∈K(ˆ
(42)
302
Gabriela F. Reyero and Rafael V. Verdes
ˆ (uI ) , consequently, and in particular ∀ v ∈ K FII (ˆ uI , uI ) = max FI (ˆ uI , v) ≤ 0. ˆ I) v∈K(u
(43)
We also know that FII (uI , uI ) = 0, ∀ uI ∈ KI then from (42) and (43) we have FII (ˆ uI , uI ) ≤ FII (ˆ uI , u ˆI ) ≤ FII (vI , u ˆI ) ∀ vI , uI ∈ KI , which proves that (P u ¯, P u ¯) is a saddle-point of FII in KI × KI . We present here a procedure to construct saddle-points of F from saddle-points of FII . Theorem 2. If (¯ uI , u ¯I ) is a saddle-point of FII in KI × KI then (S u ¯I , S u ¯I ) is a saddlepoint of F in K × K. Proof. By definition of u ¯I we have min FII (vI , u ¯I ) = max min FII (vI , uI ) = 0,
vI ∈KI
uI ∈KI vI ∈KI
then min
max FI (vI , u) = min FII (vI , u ¯I ) = 0. vI ∈KI
vI ∈KI u∈K(¯ ˆ uI )
From (19) we have min FI (vI , u) = 0.
max
ˆ uI ) vI ∈KI u∈K(¯
ˆ (¯ Let u ˇ (¯ uI ) ∈ K uI ) such that min FI (vI , u ˇ (¯ uI )) = max
vI ∈KI
min FI (vI , u)
ˆ uI ) vI ∈KI u∈K(¯
then, we have min
min F (v, u ˇ (¯ uI )) = 0,
vI ∈KI v∈K(v ˆ I)
so, by virtue of (10) we obtain min F (v, u ˇ (¯ uI )) = 0. v∈K
Therefore F (v, u ˇ (¯ uI )) ≥ 0 ∀ v ∈ K.
(44)
By definition of u ¯I we have max FII (¯ uI , uI ) = min max FII (vI , uI ) = 0,
uI ∈KI
vI ∈KI uI ∈KI
then max
max FI (¯ uI , u) = 0.
uI ∈KI u∈K(u ˆ I)
(45)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
303
Now max
max FI (¯ uI , u) = max
max
min F (v, u) = max
min
max F (v, u) .
uI ∈KI v∈K(¯ ˆ uI ) u∈K(u ˆ I)
uI ∈KI u∈K(u ˆ I ) v∈K(¯ ˆ uI )
uI ∈KI u∈K(u ˆ I)
(46)
It can be proved that max F (v, u) is a convex-concave function in the same way as it ˆ I) u∈K(u
was proved that FI is convex-concave, then from (45) and (46) we have max
min
max F (v, u) = min
uI ∈KI v∈K(¯ ˆ uI ) u∈K(u ˆ I)
max
max F (v, u) = 0.
ˆ uI ) uI ∈KI u∈K(u ˆ I) v∈K(¯
(47)
ˆ (¯ We choose u ˆ (¯ uI ) ∈ K uI ) such that it realizes the minimum in (47), i.e. max
max F (ˆ u (¯ uI ) , u) = 0,
uI ∈KI u∈K(u ˆ I)
then, again by virtue of (10) we obtain max F (ˆ u (¯ uI ) , u) = 0. Therefore u∈K
F (ˆ u (¯ uI ) , u) ≤ 0 ∀ u ∈ K.
(48)
Particularly, getting u ˆ (¯ uI ) in place of v in (44) and u ˇ (¯ uI ) in place of u in (48) , we obtain F (ˆ u (¯ uI ) , u ˇ (¯ uI )) = 0. Then F (ˆ u (¯ uI ) , u) ≤ F (ˆ u (¯ uI ) , u ˇ (¯ uI )) ≤ F (v, u ˇ (¯ uI )) ∀ v, u ∈ K. Consequently (ˆ u (¯ uI ) , u ˇ (¯ uI )) is a saddle-point of F in K × K. Clearly (ˆ u (¯ uI ) , u ˇ (¯ uI )) is ˆ ˆ also a saddle-point of F in K (¯ uI )× K (¯ uI ) . By definition of S u ¯I , (S u ¯I , S u ¯I ) is the unique ˆ (¯ ˆ (¯ saddle-point of F in K uI )×K uI ) and so, we obtain the identity u ˆ (¯ uI ) = u ˇ (¯ uI ) = S u ¯I , which proves that (S u ¯I , S u ¯I ) is a saddle-point of F in K × K. Remark 4. As (S u ¯I , S u ¯I ) is a saddle-point of F in K × K we get, by Proposition 1, Su ¯I = u ¯. Remark 5. The above result can be obtained in a shorter way using arguments of uniqueness instead of a constructive procedure. This is shown in the following theorem: Theorem 3. Let u ¯I be such that (¯ uI , u ¯I ) is a saddle-point of FII in KI × KI then S u ¯I = u ¯ and (S u ¯I , S u ¯I ) is the unique saddle-point of F en K × K. ˆ (ˆ Proof. Let u ˆI ∈ KI such that u ¯∈K uI ) , by Theorem 1 (ˆ uI , u ˆI ) is a saddle-point of FII . By the uniqueness of saddle-point of FII (Lemma 1) we have that u ˆI = u ¯I . By definition, ˆ (¯ ˆ (¯ ˆ (¯ ˆ (¯ (S u ¯I , S u ¯I ) is a saddle-point of F in K uI ) × K uI ), but (¯ u, u ¯) ∈ K uI ) × K uI ) , then ˆ (¯ ˆ (¯ by restriction of the saddle-point of F in K ×K to K uI )× K uI ) , (¯ u, u ¯) is a saddle-point ˆ (¯ ˆ (¯ ˆ (¯ ˆ (¯ of F in K uI ) × K uI ) . But, by the uniqueness of saddle-point of F in K uI ) × K uI ), we have S u ¯I = u ¯.
304
3.4.
Gabriela F. Reyero and Rafael V. Verdes
Hierarchical Systems of Variational Inequalities
We have introduced in the previous subsections a couple of saddle-point problems and we have shown that the original solution can be found solving in a first place the simpler saddle-point problem SPI and, using that solution u ¯I , constructing the global solution S u ¯I ˆ ˆ by solving a saddle-point problem in the restricted set K (¯ uI ) × K (¯ uI ) (set smaller than K × K). We want here to associate a variational inequality to each saddle-point problems and to obtain a system of coupled variational inequalities. SP ~ w
SPI
⇐⇒ ⇐⇒
VI ~ w
VII
The variational inequality associated to SP. We have seen in Section 2 that (¯ u, u ¯) is a saddle-point of F iff the VI holds. The variational inequality associated to SPI . To obtain the VII , we state the following result: Lemma 2. Let W1 , W2 be compact, convex subsets of ℜn , ℜm and let ψ : W1 × W2 → ℜ be a convex-concave function, ψ differentiable at any point of W1 × W2 . Then (w ¯1 , w ¯2 ) is a saddle-point of ψ iff ∂ψ (49) (w ¯1 , w ¯2 ) , w1 − w ¯ 1 ≥ 0 ∀ w1 ∈ W 1 , ∂w1 ∂ψ (50) (w ¯1 , w ¯2 ) , w2 − w ¯ 2 ≤ 0 ∀ w2 ∈ W 2 . ∂w2 Proof. It can be seen in [3]. We will prove in the Appendix 2 that the function FII is differentiable in the diagonal of KI × KI . This property allows us to associate to the unique saddle-point (¯ uI , u ¯I ) of FII in KI × KI a variational inequality using Lemma 2 and the property of differentiability of FII . To apply Lemma 2 to SPI , we take W1 = W2 = KI , ψ = FII and (w ¯1 , w ¯2 ) = (¯ uI , u ¯I ). In the Appendix 2 we will prove that ∂FII (¯ uI , u ¯I ) = TS∗u¯I (AS u ¯I − f ) , ∂vI ∂FII (¯ uI , u ¯I ) = TS∗u¯I (−AS u ¯I + f ) . ∂uI So, (¯ uI , u ¯I ) ∈ KI × KI is a saddle-point of FII if and only if u ¯I is a solution of TS∗u¯I (AS u ¯I − f ) , vI − u ¯I ≥ 0 ∀ vI ∈ KI , VII
(51) (52)
because by virtue of (51) and (52) both conditions (49) and (50) give rise to the same condition VII .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
305
Remark 6. The VII is a well defined variational inequality because, as it is proved in the Appendix 2, the operator B associated to VII is hemicontinuous and strongly monotone,
B : KI → XI ,
(53)
∗ uI → B (uI ) = TSu (ASuI − f ) . I
Remark 7. The strongly monotone property of B implies that the VII has a unique solution u ¯I ∈ KI . Remark 8. Taking into account the VIS which defines SuI , we have that the original problem is reduced to find the solution of the system of coupled variational inequalities: T ∗ (AS u ¯ − f ) , u − u ¯ ≥0 I I I Su ¯I (AS u ¯I − f, u − S u ¯I ) ≥ 0
∀ uI ∈ KI ,
VII
ˆ (¯ ∀v ∈ K uI ) .
VIS
The following theorem summarizes the equivalence above described.
Theorem 4.
1. For each uI ∈ KI , the variational inequality VIS has unique solution SuI . The point ˆ (uI ) × K ˆ (uI ) . (SuI , SuI ) is the unique saddle-point of F on K 2. The variational inequality VII has unique solution u ¯I ∈ KI . Moreover let u ¯I be the unique solution of VII then (¯ uI , u ¯I ) is a saddle-point of FII in KI × KI . 3. If u ¯I is the solution of VII then S u ¯I is the solution of the original VI, i.e.
a (v, v − S u ¯I ) ≥ (f, v − S u ¯I ) ∀ v ∈ K.
(54)
306
Gabriela F. Reyero and Rafael V. Verdes
Proof. ˆ (uI ) instead of K. 1. It is consequence of Proposition 1 considering the convex K ∗ (ASu − f ) is strongly monotone the VI has 2. Since the operator B (uI ) = TSu I I I unique solution u ¯I ∈ KI .
Now, let u ¯I ∈ KI the solution of VII , i.e.
TS∗u¯I (AS u ¯I − f ) , vI − u ¯I ≥ 0 ∀ vI ∈ KI ,
where S u ¯I is the solution of VIS
ˆ (¯ (AS u ¯I − f, u − S u ¯I ) ≥ 0 ∀ u ∈ K uI ) .
(55)
Taking into account that TS∗u¯I (AS u ¯I − f ) is a sub-gradient (we will prove it in the Appendix 2) of the function vI → FII (vI , u ¯I ) in u ¯I ∈ KI , and VII we have FII (vI , u ¯I ) − FII (¯ uI , u ¯I ) ≥ TS∗u¯I (AS u ¯I − f ) , vI − u ¯I ≥ 0 ∀ vI ∈ KI , then
FII (¯ uI , u ¯I ) ≤ FII (vI , u ¯I ) ∀ vI ∈ KI .
(56)
FII (¯ uI , vI ) ≤ FII (¯ uI , u ¯I ) ∀ vI ∈ KI .
(57)
As TS∗u¯I (−AS u ¯I + f ) is a super-gradient (see Appendix 2) of the function vI → FII (¯ uI , vI ) in u ¯I ∈ KI , and VII we have FII (¯ uI , vI ) − FII (¯ uI , u ¯I ) ≤ TS∗u¯I (−AS u ¯I + f ) , vI − u ¯I ≤ 0 ∀ vI ∈ KI ,
therefore
From (56) and (57) we obtain FII (¯ uI , vI ) ≤ FII (¯ uI , u ¯I ) ≤ FII (vI , u ¯I ) ∀ vI ∈ KI , i.e. (¯ uI , u ¯I ) is a saddle-point of FII in KI × KI . 3. Let u ¯I be the solution of VII . We will prove that the solution S u ¯I of VIS is the solution of the original VI, i.e. (AS u ¯I − f, v − S u ¯I ) ≥ 0 ∀ v ∈ K. Let u ¯ be the solution of VI, we have
Since K =
S
uI ∈KI
(A¯ u − f, v − u ¯) ≥ 0 ∀ v ∈ K.
(58)
ˆ (uI ) and K ˆ (uI ) ∩ K ˆ (vI ) = ∅, if uI 6= vI , then there exists a K
ˆ (ˆ unique u ˆI ∈ KI such that u ¯∈K uI ) . From (58) we have ˆ (ˆ (A¯ u − f, v − u ¯) ≥ 0 ∀ v ∈ K uI ) ,
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
307
consequently, u ¯ = Su ˆI
(59)
i.e., it is the unique solution of VIS for uI = u ˆI . We will prove that u ˆI is the unique solution of VII , i.e. (Tu¯∗ (A¯ u − f ) , vI − u ˆI ) ≥ 0 ∀ vI ∈ KI .
(60)
Let vI ∈ KI , then we have (Tu¯∗ (A¯ u − f ) , vI − u ˆI ) = (A¯ u − f, Tu¯ (vI − u ˆI ))
= (A¯ u − f, u ¯ + Tu¯ (vI − u ˆI ) − u ¯) = (A¯ u − f, v − u ¯) ,
ˆ (vI ) ⊂ K. Since u where v = u ¯ + Tu¯ (vI − u ˆI ) ∈ K ¯ is the solution of the original VI, we have (A¯ u − f, v − u ¯) ≥ 0, then (60) is verified. We have proved that u ˆI is the solution of VII , i.e. u ˆI = u ¯I . From (59) it follows that the solution of the original VI is u ¯ = Su ¯I .
4.
Iterative Solution of the Decomposition Procedure
The system VII − VIS can be solved by the following iterative method.
4.1.
Description of the Iterative Algorithm
The backbone of the procedure is the following one: given a tentative point uνI ∈ KI and using the information given by the element SuνI (which is computed in terms of uνI through the solution of VIS ), we modify this point uνI in order to satisfy the condition VII . To describe the algorithm we will need the following: Definition 7. Pr (u, Ω) is the projection of a point u on a closed convex set Ω, i.e. kPr (u, Ω) − uk ≤ kw − uk ∀ w ∈ Ω, Pr (u, Ω) ∈ Ω. Specifically, the algorithm has the following structure: Algorithm 1 Set uoI ∈ KI , ρ > 0, ν = 0 2 Solve VIS and obtain uν = SuνI 3 Compute TSuνI ∗ (ASuν − f ) 4 Compute B ν = TSu ν I I
5 Set uν+1 = Pr (uνI − ρB ν , KI ) I set ν = ν + 1, and go to 2 .
308
Gabriela F. Reyero and Rafael V. Verdes
Analysis of the Algorithm This algorithm generates a sequence (uνI , uν ) which converges to the solution (¯ uI , u ¯) of (VII −VI) for all ρ < ρ¯, being ρ¯ a suitable positive number. At step 2, given uνI ∈ KI we solve the VI ˆ (uνI ) , (ASuνI − f, v − SuνI ) ≥ 0 ∀ v ∈ K and we obtain the solution uν = SuνI . At step 3, for these uνI ∈ KI and the associated ˆ (uν ) , we compute TSuν . At step 4 we compute the vector B ν = B(uν ), uν = SuνI ∈ K I I I where B is the strongly monotone operator defined by (53). To describe step 5 we introduce the following applications Q, M. Definition 8. • We define for ρ > 0 the application Q : KI → XI in the following form Q (uI ) = uI − ρB (uI ) . • We also define the application M : XI → KI in the following form M (uI ) = Pr (Q (uI ) , KI ) . At step 5 we compute the element uν+1 applying the operator M, i.e. we define I uν+1 = M (uνI ). I
4.2.
Convergence of the Algorithm
Remark 9. As S is Lipschitz continuous, B is also Lipschitz continuous (see Appendix 2), then if we denote kB (uI ) − B (˜ uI )k Ξ = sup , kuI − u ˜I k uI ∈KI u ˜I ∈KI
Ξ is finite and ∀ uI ∈ KI , ∀ u ˜I ∈ KI we have that ˜I k2 . B (uI ) − B (˜ u I ) , uI − u ˜I ≥ βFII kuI − u In addition, we obtain that
βFII Ξ
(61)
< 1.
Using these parameters we can prove the following: 2βFII Ξ2
Proposition 10. If 0 < ρ < unique fixed point u ¯M ∈ KI .
, then Q and M are contractive operators and M has a
Proof. Let uI ∈ KI , u ˆI ∈ KI , we have Q (uI ) = uI − ρB (uI ) and Q (ˆ uI ) = u ˆI − ρB (ˆ uI ) . We will estimate the difference
2 kQ (uI ) − Q (ˆ uI )k2 = uI − ρB (uI ) − (ˆ uI − ρB (ˆ uI )) =
2 = kuI − u ˆI k2 + ρ2 B (uI ) − B (ˆ uI ) = −2ρ uI − u ˆI , B (uI ) − B (ˆ uI ) .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
309
Since B is strongly monotone, from (61) we obtain kQ (uI ) − Q (ˆ uI )k2 ≤ 1 − 2ρβFII + ρ2 Ξ2 kuI − u ˆI k2 . 2β
Therefore, if 0 < ρ < ΞF2II , there exists σ < 1 such that kQ (uI ) − Q (ˆ uI )k ≤ σ kuI − u ˆI k and then Q is contractive. Moreover
Pr (Q (uI ) , KI ) − Pr (Q (ˆ uI ) , KI ) ≤ kQ (uI ) − Q (ˆ uI )k ,
consequently,
kM (uI ) − M (ˆ uI )k ≤ σ kuI − u ˆI k and then M has a unique fixed point u ¯M . Lemma 3. The fixed point u ¯M of M is the solution of the VII i.e. u ¯M = u ¯I and TS∗u¯M (AS u ¯M − f ) , vI − u ¯M ≥ 0 ∀ vI ∈ KI ,
consequently, S u ¯M is the solution of the original VI.
Proof. Let u ¯M be the fixed point of M . We know that u ¯M = Pr (¯ uM − ρB (¯ uM ) , KI ) ⇐⇒ (¯ uM − ρB (¯ uM ))−¯ uM , uI −¯ uM ≤ 0 ∀ uI ∈ KI . We have that ∀ uI ∈ KI
¯ M − f ) , uI − u ¯M ≤ 0, (−ρB (¯ u M ) , uI − u ¯M ) = − ρ TS∗u¯M (AS u
then, since ρ > 0
TS∗u¯M (AS u ¯ M − f ) , uI − u ¯M ≥ 0 ∀ uI ∈ KI
and, consequently, u ¯M is solution of VII . By the uniqueness of solution we have u ¯M = u ¯I ; ˆ finally by virtue of Theorem 4, S u ¯I ∈ K (¯ uI ) is the solution of the original VI. The following theorem summarizes the properties of the algorithm: 2β
Theorem 5. The algorithm generates a sequence {uνI , SuνI } such that, if 0 < ρ < ΞF2II , uνI converges to the unique solution u ¯I of VII and SuνI converges to S u ¯I the unique solution of the original problem. Proof. As M is contractive, the sequence uνI converges to u ¯I , the fixed point of M. By the ν continuity established in Proposition 8, the sequence SuI converges to S u ¯I = u ¯.
310
5. 5.1.
Gabriela F. Reyero and Rafael V. Verdes
Appendix 1. Continuity and Convexity Issues for F, FI and FII . The Function F
5.1.1. Convexity-Concavity We consider the function F (v, u) = a (v, v − u) − L (v − u) defined in V × V. Proposition 11. F (·, u) is strongly convex in v for each u ∈ V. Proof. The coercivity of the bilinear form a implies that F is a strongly convex function, in fact, let v1 6= v2 and 0 < λ < 1, we put F λv1 + (1 − λ) v2 , u = a λv1 + (1 − λ) v2 , λv1 + (1 − λ) v2 − u − L (λv1 + (1 − λ) v2 − u) = a λv1 +(1−λ) v2 , λ (v1 −u)+(1−λ) (v2 −u) −L λ (v1 −u)+ (1−λ) (v2 −u) = λ2 a (v1 , v1 − u) + λ (1 − λ) a (v1 , v2 − u) + λ (1 − λ) a (v2 , v1 − u)
+ (1 − λ)2 a (v2 , v2 − u) − λL (v1 − u) − (1 − λ) L (v2 − u) = λ2 − λ a (v1 , v1 − u) + λ a (v1 , v1 − u) − L (v1 − u) + (1 − λ)2 − (1 − λ) a (v2 , v2 − u) + (1 − λ) a (v2 , v2 − u) − L (v2 − u) + λ (1 − λ) a (v1 , v2 − u) + λ (1 − λ) a (v2 , v1 − u)
= λ (1 − λ) a (v2 − v1 , v1 − u) − λ (1 − λ) a (v2 − v1 , v2 − u) + λF (v1 , u) + (1 − λ) F (v2 , u)
= −λ (1 − λ) a (v2 − v1 , v2 − v1 ) + λF (v1 , u) + (1 − λ) F (v2 , u) ≤ −αλ (1 − λ) kv2 − v1 k2 + λF (v1 , u) + (1 − λ) F (v2 , u) .
So, F (λv1 + (1 − λ) v2 , u) ≤ −αλ (1 − λ) kv2 − v1 k2 + λF (v1 , u) + (1 − λ) F (v2 , u) . (62) Then F is strongly convex and therefore is strictly convex, i.e. F (λv1 + (1 − λ) v2 , u) < λF (v1 , u) + (1 − λ) F (v2 , u) . Proposition 12. F (v, ·) is concave in u for each v ∈ V. Proof. Let u1 , v2 ∈ V and λ ∈ (0, 1) , then F v, λu1 + (1 − λ) u2 = a v, v − (λu1 + (1 − λ) u2 ) − L (v − (λu1 + (1 − λ) u2 )) = a v, λ (v − u1 ) + (1 − λ) (v − u2 ) − L λ (v − u1 ) + (1 − λ) (v − u2 ) = λ (a (v, v − u1 ) − L (v − u1 )) + (1 − λ) a (v, v − u2 ) − L (v − u2 ) = λF (v, u1 ) + (1 − λ) F (v, u2 ) .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
311
5.1.2. Continuity Lemma 4. F (·, ·) is Lipschitz continuous in K × K. Proof. In Appendix 2 we will see that F is differentiable and ∂F ∂F = Av − f, = Av − f + A∗ (v − u) . ∂u ∂v
and ∂F are uniformly bounded in K ×K and so, F is uniformly As K is bounded, ∂F ∂u ∂v Lipschitz continuous in K × K.
5.2.
The function FI
Lemma 5. Let us consider the function Jw : V → ℜ such that Jw (v) = F (v, w) . There exists a unique element that realizes the minimum of Jw in W, where W is any non empty, closed and convex subset of V . Proof. F (·, w) is continuous and strongly convex, then there exists and it is unique the element that minimizes Jw . Remark 10. By virtue of Lemma 5 the function FI defined by FI (vI , u) = min F (v, u) ˆ I) v∈K(v
is well defined. Definition 9. We define v¯ (vI , u) the unique point that realizes the minimum of F (·, u) in ˆ (vI ) . K Remark 11. In Appendix 2 we will see that the point v¯ (vI , u) , which realizes the minimum ˆ (vI ) is the unique solution of of F (·, u) in K ∂F ˆ (vI ) , (¯ v (vI , u) , u) , v − v¯ (vI , u) ≥ 0 ∀ v ∈ K ∂v in other words, ˆ (vI ) . (A + A∗ ) v − A∗ u − f, v − v¯ (vI , u) ≥ 0 ∀ v ∈ K
Proposition 13. If v¯ (vI , u) is the unique point that realizes the minimum of F (·, u) in ˆ (vI ) then v¯ (vI , u) is a continuous function of vI . i.e. K
lim v¯ (vI , u) − v¯ (uI , u) = 0. (63) vI →uI
Proof. From the previous Remark v¯ (vI , u) is the solution of the variational inequality: ˆ (vI ) . (A + A∗ ) v − A∗ u − f, v − v¯ (vI , u) ≥ 0 ∀ v ∈ K
(64)
312
Gabriela F. Reyero and Rafael V. Verdes
Let uI ∈ KI and vIλ → uI , by definition of v¯ (uI , u) and v¯ vIλ , u we have, denoting Aˆ = A + A∗ and fˆ = A∗ u + f ˆv (uI , u) − fˆ, v − v¯ (uI , u) ≥ 0 ∀ v ∈ K ˆ (uI ) , A¯ ˆv (vIλ , u) − fˆ, v − v¯(vIλ , u) ≥ 0 ∀ v ∈ K ˆ vIλ . A¯ (65) We want prove that v¯(vIλ , u) → v¯ (uI , u) . ˆ (uI ) such that By definition of d1 and (12) we have that ∀ λ there exists uλo ∈ K ˆ (uI ) , K(v ˆ Iλ ) ≤ ckuI − vIλ kX . kuλo − v¯(vIλ , u)kV ≤ d1 K (66) I
ˆ (uI ) is a compact set, uλ admits a convergent subsequence, i.e. there exists uo ∈ As K o ˆ (uI ) such that uλo → uo . From (66) , k¯ K v (vIλ , u) − uλo k → 0 then we also have the following convergence v¯(vIλ , u) → uo . ˆ (uI ) , i.e. We will prove that uo is solution of (64) in K ˆ (uI ) . ˆ o − fˆ, v − uo ≥ 0 ∀ v ∈ K Au ˆ (uI ) , then there exists v λ ∈ K ˆ v λ such that Let v ∈ K I kv λ − vkV → 0 as λ → 0.
(67)
From (65) we have ˆ − fˆ, v λ − v¯(vIλ , u) ˆ λ − Av, ˆ v λ − v¯(vIλ , u) + Av ˆ λ − fˆ, v λ − v¯(vIλ , u) = Av 0 ≤ Av ˆ − fˆ, v − v¯(vIλ , u) . ˆ − fˆ, v λ − v + Av ˆ λ − Av, ˆ v λ − v¯(vIλ , u) + Av = Av We also have ˆ λ − Av, ˆ v λ − v¯(vIλ , u) −→ 0 because v λ → v and Aˆ is continuous, Av ˆ − fˆ, v λ − v −→ 0 because v λ → v, Av ˆ − fˆ, v − v¯(vIλ , u) −→ Av ˆ − fˆ, v − uo , Av then
ˆ − fˆ, v − uo ≥ 0. Av
ˆ (uI ) and so, by uniqueness of solution we obtain Consequently uo is solution of (64) in K that uo = v¯ (uI , u) ; then v¯(vIλ , u) → v¯ (uI , u) . Remark 12. If we use now the hypotheses (14) and (15) we can strengthen the continuity result concerning v¯(vI , u) to the following Lipschitz continuity result: Proposition 14. The mapping v¯(vI , u) is Lipschitz continuous in both variables, vI ∈ KI and u ∈ K. The proof of this result is similar to the proof of Proposition 9 and it is omitted.
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
313
5.2.1. Convexity-Concavity Proposition 15. If the hypothesis (13) holds, i.e. if ˆ (vI ) , K ˆ (¯ vI ) ≥ γ kvI − v¯I k d2 K
XI
then FI (·, u) is a strongly convex function on KI . Proof. Let vI 6= v¯I ∈ KI , from (11) we have FI (λ¯ vI + (1 − λ) vI , u) =
min
ˆ vI +(1−λ)vI ) v∈K(λ¯
F (v, u) ≤
min
ˆ vI )+(1−λ)K(v ˆ I) v∈λK(¯
F (v, u) . (68)
ˆ (¯ ˆ (vI ) such that There exist v1 ∈ K vI ) and v2 ∈ K F (v1 , u) = FI (¯ vI , u) and F (v2 , u) = FI (vI , u) . Since F is strongly convex we have F λv1 + (1 − λ) v2 , u ≤ −αλ (1 − λ) kv1 − v2 k2 + λF (v1 , u) + (1 − λ) F (v2 , u) and from (13) we have
ˆ (vI ) , K ˆ (¯ vI ) ≥ γ kvI − v¯I k , kv1 − v2 k ≥ d2 K
then
F λv1 + (1 − λ) v2 , u ≤ −αγλ (1 − λ) kvI − v¯I k2 + λFI (¯ vI , u) + (1 − λ) FI (vI , u)
and therefore we get min
ˆ vI )+(1−λ)K(v ˆ I) v∈λK(¯
F (v, u) ≤ −αγλ (1 − λ) kvI − v¯I k2 +λFI (¯ vI , u)+(1 − λ) FI (vI , u) .
So, by virtue of (68) we have FI (λ¯ vI + (1 − λ) vI , u) ≤ −αγλ (1 − λ) kvI − v¯I k2 + λFI (¯ vI , u) + (1 − λ) FI (vI , u) (69) and then it follows that FI is strongly convex in vI ∈ KI . Proposition 16. FI (vI , ·) is concave in K.
Proof. Let u1 6= u2 in K and 0 ≤ λ ≤ 1. FI vI , λu1 + (1 − λ) u2 = min F (v, λu1 + (1 − λ) u2 ) . ˆ I) v∈K(v
(70)
ˆ (vI ) such that Let v˜ ∈ K
FI vI , λu1 + (1 − λ) u2 = F v˜, λu1 + (1 − λ) u2 .
From the concavity of F (v, ·) we have v , u1 ) + (1 − λ) F (˜ v , u2 ) F v˜, λu1 + (1 − λ) u2 = λF (˜
≥ λ min F (v, u1 ) + (1 − λ) min F (v, u2 ) ˆ I) v∈K(v
ˆ I) v∈K(v
= λFI (vI , u1 ) + (1 − λ) FI (vI , u2 ) . Then
FI vI , λu1 + (1 − λ) u2 ≥ λFI (vI , u1 ) + (1 − λ) FI (vI , u2 ) .
(71)
(72)
314
Gabriela F. Reyero and Rafael V. Verdes
5.2.2. Continuity Proposition 17. The function vI → FI (vI , u) is Lipschitz continuous on KI , for each u ∈ K. ˆ (vI ) such that Proof. Being FI (vI , u) = min F (v, u) , there exists v¯ (vI , u) ∈ K ˆ I) v∈K(v
F v¯ (vI , u) , u = FI (vI , u) .
ˆ (vI + δvI ) such that We consider vI + δvI ∈ KI , then there exists v¯ (vI + δvI , u) ∈ K F v¯ (vI + δvI , u) , u = FI (vI + δvI , u) . Also, by definition of FI (vI + δvI , u) we have FI (vI + δvI , u) =
min
ˆ I +δvI ) v∈K(v
then
ˆ (vI + δvI ) , u F (v, u) ≤ F Pr v¯ (vI , u) , K
ˆ (vI + δvI ) , u − F v¯ (vI , u) , u . FI (vI + δvI , u) − FI (vI , u) ≤ F Pr v¯ (vI , u) , K (73) Since F is Lipschitzian in v, there exists LF such that ˆ (vI + δvI ) , u − F v¯ (vI , u) , u F Pr v¯ (vI , u) , K
ˆ (vI + δvI ) ≤ LF v¯ (vI , u) − Pr v¯ (vI , u) , K ˆ (vI ) , K ˆ (vI + δvI ) ≤ LF c kδvI k ≤ LF d1 K XI (74) from (73) and (74) , there exists LFI such that
FI (vI + δvI , u) − FI (vI , u) ≤ LFI kδvI kXI . The inverse inequality follows in the same way, then FI (vI + δvI , u) − FI (vI , u) ≤ LF kδvI k . I XI
Proposition 18. The function u → FI (vI , u) is Lipschitz continuous on K for each vI ∈ KI . ˆ (vI ) such that Proof. Being FI (vI , u) = min F (v, u) , there exists v¯ (vI , u) ∈ K ˆ I) v∈K(v
F (¯ v (vI , u) , u) = FI (vI , u) . As FI (vI , u ˆ) ≤ F (¯ v (vI , u) , u ˆ) , we have −FI (vI , u) + FI (vI , u ˆ) ≤ −F (¯ v (vI , u) , u) + F (¯ v (vI , u) , u ˆ) . Since F is Lipschitzian in u, there exists LF such that −F (¯ v (vI , u) , u) + F (¯ v (vI , u) , u ˆ) ≤ LF ku − u ˆk , then −FI (vI , u) + FI (vI , u ˆ) ≤ LF ku − u ˆk .
The inverse inequality follows in the same form, then FI (vI , u) − FI (vI , u ˆ) ≤ LF ku − u ˆk .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
5.3.
315
The Function FII
Definition 10. We define the function FII (vI , uI ) = max FI (vI , u) .
(75)
ˆ I) u∈K(u
Remark 13. The function FII is well defined because for vI ∈ KI , the function w → FI (vI , w) is Lipschitz continuous in V. If W is a non empty, bounded and convex set of V , then there exists a non empty closed convex set where FI attains the maximum in W. 5.3.1. Convexity-Concavity Proposition 19. FII (vI , ·) is concave in KI for each vI and FII (·, uI ) is strongly convex in KI for each uI . Proof. • We will prove that FII (vI , ·) is concave in KI .
Let 0 < λ < 1 and uI , u ˜I ∈ KI , from (11) we have
FII (vI , (1 − λ) uI + λ˜ uI ) =
max
ˆ u∈K((1−λ)u uI ) I +λ˜
FI (vI , u) ≥
max
ˆ I )+λK(˜ ˆ uI ) u∈(1−λ)K(u
FI (vI , u) .
ˆ (uI ) and u2 ∈ K ˆ (˜ There exist u1 ∈ K uI ) such that
FI (vI , u1 ) = max FI (vI , u) and FI (vI , u2 ) = max FI (vI , u) . ˆ I) u∈K(u
ˆ uI ) u∈K(˜
As FI (vI , u) is concave in u, we have FI (vI , (1 − λ) u1 + λu2 ) ≥ (1 − λ) FI (vI , u1 ) + λ FI (vI , u2 )
≥ (1 − λ) FII (vI , uI ) + λ FII (vI , u ˜I ) . ˆ (uI ) + λK ˆ (˜ Since (1 − λ) u1 + λu2 ∈ (1 − λ) K uI ) , it follows that FII (vI , (1 − λ) uI + λ˜ uI ) ≥
max
ˆ I )+λK(˜ ˆ uI ) u∈(1−λ)K(u
FI (vI , u)
≥ (1 − λ) FII (vI , uI ) + λ FII (vI , u ˜I ) . • We will prove that FII (·, uI ) is strongly convex in KI . Let 0 ≤ λ ≤ 1 and vI , v˜I ∈ KI , we put
FII ((1 − λ) vI + λ˜ vI , uI ) = max FI ((1 − λ) vI + λ˜ vI , u) . ˆ I) u∈K(u
ˆ (uI ) we have From the strong convexity of FI (vI , u) in vI , then ∀ u ∈ K vI , u ≤ −αγλ (1 − λ) kvI − v˜I k2 + (1 − λ) FI (vI , u) + λ FI (˜ vI , u) FI (1−λ) vI +λ˜
≤ −αγλ (1−λ) kvI −˜ vI k2 +(1−λ) FII (vI , uI )+λFII (˜ vI , u I ) ,
therefore vI , u I FII (1 − λ) vI + λ˜
≤ −αγλ (1 − λ) kvI − v˜I k2 + (1 − λ) FII (vI , uI ) + λ FII (˜ vI , u I )
and so, it follows that FII is strongly convex in vI .
(76)
316
Gabriela F. Reyero and Rafael V. Verdes
5.3.2. Continuity Proposition 20. The function uI → FII (vI , uI ) is Lipschitz continuous in KI for each vI ∈ KI and the function vI → FII (vI , uI ) is Lipschitz continuous in KI for each uI ∈ KI . Proof. • FII (vI , ·) : KI → ℜ is Lipschitz continuous in uI ∈ KI for each vI ∈ KI . By definition, FII (vI , uI ) = max FI (vI , u) , ˆ I) u∈K(u
ˆ (uI ) such that so, there exists at least one u ˘∈K FII (vI , uI ) = FI (vI , u ˘) . Also by definition of FII (vI , u ˜I ) we have ˆ (˜ ˘, K uI ) , FII (vI , u ˜I ) = max FI (vI , u) ≥ FI vI , Pr u ˆ uI ) u∈K(˜
consequently, ˆ (˜ FII (vI , uI ) − FII (vI , u ˜I ) ≤ FI (vI , u ˘) − FI vI , Pr u ˘, K uI ) .
(77)
Since FI is Lipschitz continuous in u, then
ˆ (˜ ˆ (˜ FI (vI , u ˘) − FI vI , Pr u ˘, K uI ) ≤ LFI u ˘ − Pr u ˘, K uI ) ˆ (uI ) , K ˆ (˜ ≤ LFI d1 K uI ) ≤ LFI c kuI − u ˜I kXI ,
(78)
from (77) and (78) , there exists LFII such that
FII (vI , uI ) − FII (vI , u ˜I ) ≤ LFII kuI − u ˜I kXI . The inverse inequality follows in the same way, then FII (vI , uI ) − FII (vI , u ˜I kXI . ˜I ) ≤ LFII kuI − u
• FII (·, uI ) : KI → ℜ is Lipschitz continuous in vI ∈ KI for each uI ∈ KI . ˆ (uI ) such that FII (vI , uI ) = FI (vI , u Let u ˘∈K ˘) . Now, as FII (˜ vI , uI ) = max FI (˜ vI , u) ≥ FI (˜ vI , u ˘) , ˆ I) u∈K(u
then FII (vI , uI ) − FII (˜ vI , uI ) ≤ FI (vI , u ˘) − FI (˜ vI , u ˘)
(79)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
317
and since FI is Lipschitz continuous in vI , we have FI (vI , u ˘) − FI (˜ vI , u ˘) ≤ LFI kvI − v˜I kXI then, there exists LFII such that FII (vI , uI ) − FII (˜ vI , uI ) ≤ LFII kvI − v˜I kXI The inverse inequality follows in the same form, then FII (vI , uI )−FII (˜ vI , uI ) ≤ LFII kvI − v˜I kXI .
6.
Appendix 2. Differential Properties of F, FI and FII
6.1.
Differentiability of F
Lemma 6. i) v → F (v, u) is differentiable for any u ∈ V , being ∂F (v, u) = (A + A∗ ) v − A∗ u − f. ∂v
(80)
ii) u → F (v, u) is differentiable for any v ∈ V , being ∂F (v, u) = −Av + f. ∂u
(81)
Proof. i) Let v, δv ∈ V, then F (v + δv, u) = a (v + δv, v + δv − u) − L (v + δv − u)
= a (v, v − u) + a (δv, v − u) + a (v, δv) + a (δv, δv) − L (v − u) − L (δv) = F (v, u) + a (δv, v − u) + a (v, δv) − L (δv) + a (δv, δv)
therefore F (v + δv, u) − F (v, u) = (Av, δv) + (Aδv, v − u) − (f, δv) + (Aδv, δv) = (A + A∗ ) v − A∗ u − f, δv + (Aδv, δv) .
Then, using the notation G (v, u) = (A + A∗ ) v − A∗ u − f ∈ V we have F (v + δv, u) = F (v, u) + (G (v, u) , δv) + o (δv) , because by virtue of (3) the following estimate holds |(Aδv, δv)| ≤ kAδvk kδvk ≤ kAk kδvk2 ≤ β kδvk2 and then
(Aδv, δv) = 0. kδvk kδvk→0 lim
(82)
318
Gabriela F. Reyero and Rafael V. Verdes
So, it is proved that F (v, u) is differentiable in v and ∂F (v, u) = (A + A∗ ) v − A∗ u − f. ∂v ii) Let u, δu ∈ V, then F (v, u + δu) = a (v, v − u − δu) − L (v − u − δu)
= a (v, v − u) − L (v − u) − a (v, δu) + L (δu) = F (v, u) + (−Av + f, δu) .
This shows that
Corollary 1.
∂F ∂v
∂F (v, u) = −Av + f. ∂u (·, u) is strongly monotone and hemicontinuous.
The proof is trivial and it is here omitted for the sake of brevity. The minimum problem associated to computing FI as a variational problem ˆ (vI ) which The function F (·, u) is convex, so the problem of finding v¯ (vI , u) ∈ K ˆ (vI ) , i.e. realizes the minimum of F (·, u) in K FI (vI , u) = min F (v, u) = F (¯ v (vI , u) , u) ˆ I) v∈K(v
(83)
is equivalent to find ˆ (vI ) such that v¯ (vI , u) ∈ K
∂F ˆ (vI ) . (¯ v (vI , u) , u) , z − v¯ (vI , u) ≥ 0 ∀ z ∈ K ∂v (84)
Remark 14. As a result of Corollary 1 the solution of FI (vI , u) = min F (v, u) = F (¯ v (vI , u) , u) ˆ I) v∈K(v
ˆ (vI ) which is the unique solution of is given by the point v¯ (vI , u) ∈ K ∂F ˆ (vI ) (¯ v (vI , u) , u) , v − v¯ (vI , u) ≥ 0 ∀ v ∈ K ∂v or, in other words, ˆ (vI ) . (A + A∗ ) v¯ (vI , u) , v − v¯ (vI , u) ≥ A∗ u + f, v − v¯ (vI , u) ∀ v ∈ K
6.2.
(85)
Differentiability of FI
Proposition 21. The function FI (vI , ·) : K → ℜ is differentiable for each vI ∈ KI , moreover ∂FI (vI , u) = −A v¯ (vI , u) + f. (86) ∂u
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
319
Proof. Let u ∈ K and δu be an admissible increment, i.e. u + δu ∈ K. We have FI (vI , u) = F (¯ v (vI , u) , u) ,
(87)
ˆ (vI ) is the unique solution of where v¯ (vI , u) ∈ K ˆ (vI ) . (A + A∗ ) v¯ (vI , u) − A∗ u − f, v − v¯ (vI , u) ≥ 0 ∀ v ∈ K Also
FI (vI , u + δu) = F v¯ (vI , u + δu) , u + δu ,
(88)
ˆ (vI ) is the unique solution of where v¯ (vI , u + δu) ∈ K ˆ (vI ) . (A + A∗ ) v¯ (vI , u + δu) − A∗ (u + δu) − f, v − v¯ (vI , u + δu) ≥ 0 ∀ v ∈ K For the sake of simplicity we will denote
vˆo = v¯ (vI , u) ,
vˆ1 = v¯ (vI , u + δu) .
Then, considering (87) and (88) we get: FI (vI , u + δu) − FI (vI , u) = F (ˆ v1 , u + δu) − F (ˆ vo , u) =
= F (ˆ v1 , u + δu) − F (ˆ vo , u + δu) + F (ˆ vo , u + δu) − F (ˆ vo , u) .
Taking now into account that F (ˆ v1 , u + δu) ≤ F (ˆ vo , u + δu)
(89)
and as F is differentiable in u, we have FI (vI , u + δu) − FI (vI , u) ≤ (−A vˆo + f, δu) .
(90)
We consider now the difference: FI (vI , u + δu) − FI (vI , u) = F (ˆ v1 , u + δu) − F (ˆ vo , u) =
= F (ˆ v1 , u + δu) − F (ˆ v1 , u) + F (ˆ v1 , u) − F (ˆ vo , u) ,
so, since it follows that
F (ˆ vo , u) ≤ F (ˆ v1 , u) ,
(91)
FI (vI , u + δu) − FI (vI , u) ≥ F (ˆ v1 , u + δu) − F (ˆ v1 , u) Therefore
= (−A vˆ1 + f, δu) = (−A vˆo + f, δu) − A (ˆ v1 − vˆo ) , δu .
FI (vI , u + δu) − FI (vI , u) ≥ (−A vˆo + f, δu) − A (ˆ v1 − vˆo ) , δu .
(92)
But (A (ˆ v1 − vˆo ) , δu) is of order o (δu) , then from (90) and (92) it follows that FI (vI , u + δu) = FI (vI , u) + (−A vˆo + f, δu) + o (δu) . We conclude that FI (vI , u) is differentiable in the variable u, and that ∂FI (vI , u) = −A v¯ (vI , u) + f. ∂u
(93)
320
Gabriela F. Reyero and Rafael V. Verdes
I Corollary 2. The function u → (− ∂F ∂u ) is monotone and hemicontinuous.
Proof. The monotony stems from the concavity of the function u → FI (vI , u) . The hemicontinuity follows from (93) and the continuity of the mapping u → v¯ (vI , u). Proposition 22. The function vI → FI (vI , u) is differentiable for each u ∈ K, and ∂FI (vI , u) = Tv¯∗(vI ,u) (A + A∗ ) v¯ (vI , u) − A∗ u − f . ∂vI
(94)
Proof. Let vI ∈ KI and δvI be an admissible increment, this means that vI + δvI ∈ KI . We have FI (vI , u) = F (¯ v (vI , u) , u) , (95) ˆ (vI ) is the unique solution of where v¯ (vI , u) ∈ K ˆ (vI ) . (A + A∗ ) v¯ (vI , u) − A∗ u − f, v − v¯ (vI , u) ≥ 0 ∀ v ∈ K Then
FI (vI + δvI , u) = F v¯ (vI + δvI , u) , u ,
(96)
ˆ (vI + δvI ) is the unique solution of where v¯ (vI + δvI , u) ∈ K ˆ (vI + δvI ) . (A + A∗ ) v¯ (vI + δvI , u) − A∗ u − f, v − v¯ (vI + δvI , u) ≥ 0 ∀ v ∈ K For the sake of simplicity we denote
vˆo = v¯ (vI , u) ,
vˆ1 = v¯ (vI + δvI , u) .
Let Tv : KI → V be a linear continuous operator. As vI and vI + δvI ∈ KI and from the ˆ (vI + δvI ) , so ˆ (vI ) , we obtain that vˆo + Tvˆ δvI ∈ K fact that vˆo ∈ K o FI (vI + δvI , u) ≤ F (ˆ vo + Tvˆo δvI , u) .
(97)
As vI + δvI ∈ KI and vI = (vI + δvI ) + (−δvI ) ∈ KI and considering that vˆ1 ∈ ˆ (vI + δvI ) , we obtain that vˆ1 + Tvˆ (−δvI ) ∈ K ˆ (vI ) , so K 1 FI (vI , u) ≤ F vˆ1 − Tvˆ1 (δvI ) , u . (98) The function F is differentiable with respect to v, then, we have vo , u) + G (ˆ vo , u) , Tvˆo δvI + o (Tvˆo δvI ) , F (ˆ vo + Tvˆo δvI , u) = F (ˆ F (ˆ v1 − Tvˆ1 (δvI ) , u) = F (ˆ v1 , u) − G (ˆ v1 , u) , Tvˆ1 (δvI ) + o (Tvˆ1 (δvI )) .
(99) (100)
From (95) , (97) and (99) it follows that
FI (vI + δvI , u) − FI (vI , u) ≤ G (ˆ vo , u) , Tvˆo δvI + o (Tvˆo δvI ) .
(101)
From (96) , (98) and (100) it follows that
FI (vI + δvI , u) − FI (vI , u) ≥ F (ˆ v1 , u) − F (ˆ v1 − Tvˆ1 (δvI ) , u) = G (ˆ v1 , u) , Tvˆ1 (δvI ) + o Tvˆ1 (δvI ) .
(102)
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
321
Considering now that G (ˆ v1 , u) = (A + A∗ ) vˆ1 − A∗ u − f = G (ˆ vo , u) + (A + A∗ ) (ˆ v1 − vˆo ) ,
(103)
we get vo , u) , Tvˆ1 (δvI ) − Tvˆo (δvI ) vo , u) , Tvˆo (δvI ) + G (ˆ G (ˆ v1 , u) , Tvˆ1 (δvI ) = G (ˆ v1 − vˆo ) , Tvˆ1 (δvI ) − Tvˆo (δvI ) . + (A + A∗ ) (ˆ v1 − vˆo ) , Tvˆo (δvI ) + (A + A∗ ) (ˆ
(104)
In Proposition 23 we will see that the last three terms in (104) are o (δvI ) . We will prove that (105) G (ˆ v1 , u) , Tvˆ1 (δvI ) = G (ˆ vo , u) , Tvˆo (δvI ) + o (δvI ) .
From the continuity of Tv we get
o (Tvˆo δvI ) = o (δvI ) ,
o (Tvˆ1 (δvI )) = o (δvI ) .
Taking into account (105) we can rewrite (101) and (102) in the following way vo , u) , Tvˆo δvI +o (δvI ) . G (ˆ vo , u) , Tvˆo δvI +o (δvI ) ≤ FI (vI + δvI , u)−FI (vI , u) ≤ G (ˆ We conclude that
FI (vI + δvI , u) − FI (vI , u) = Tvˆ∗o G (ˆ vo , u) , δvI + o (δvI ) ,
(106)
therefore FI (vI , u) is differentiable with respect to vI , for each u ∈ K and ∂FI (vI , u) = Tv¯∗(vI ,u) (A + A∗ ) v¯ (vI , u) − A∗ u − f . ∂vI
ˆ (vI ) and vˆ1 = Proposition 23. Let δˆ v = vˆ1 − vˆo where vˆo = v¯ (vI , u) ∈ K ˆ (vI + δvI ) , then v¯ (vI + δvI , u) ∈ K 1. G (ˆ vo , u) , Tvˆ1 (δvI ) − Tvˆo (δvI ) = o (δvI ), 2. (A + A∗ ) δˆ v , Tvˆo (δvI ) = o (δvI ), 3. (A + A∗ ) δˆ v , Tvˆ1 (δvI ) − Tvˆo (δvI ) = o (δvI ). Proof.
1. By virtue of Lemma 13 and the uniform continuity property of the operator T, we have G (ˆ vo , u)k kTvˆ1 (δvI ) − Tvˆo (δvI )k vo , u) , Tvˆ1 (δvI ) − Tvˆo (δvI ) ≤ kG (ˆ ≤ kG (ˆ vo , u)k kTvˆ1 − Tvˆo k kδvI k = o (δvI ) . {z } | ˜ ≤Ckδv Ik
322
Gabriela F. Reyero and Rafael V. Verdes
2. From Lemma 13 and the continuity of A + A∗ , we obtain that ((A + A∗ ) δˆ v , Tvˆo (δvI )) ≤ 2 kAk kδˆ v k kTvˆo k kδvI k = o (δvI ) .
3. In a same form, we have v k kTvˆ1 − Tvˆo k kδvI k = o (δvI ) . v , Tvˆ1 (δvI )−Tvˆo (δvI ) ≤ 2 kAk kδˆ (A + A∗ ) δˆ | {z } ˜ ≤Ckδv Ik
Theorem 6. (ˆ vI , u ˆ) is a saddle-point of FI (vI , u) if and only if ∂FI vI , u) , vI − vˆI ≥ 0 ∀ vI ∈ KI , ∂v (ˆ I ∂F I ˆ (uI ) . (vI , u ˆ) , u − u ˆ ≤0 ∀u ∈ K ∂u
(107)
In that case holds
max
min FI (vI , u) = min
ˆ I ) vI ∈KI u∈K(u
max FI (vI , u) = FI (ˆ vI , u ˆ) .
ˆ I) vI ∈KI u∈K(u
(108)
Proof. This result is an immediate consequence of Propositions 6, 15, 16, 17, 21 and 22. Then the system (107) becomes ∗) v ∗u − f , v − v T∗ ≥ 0 ∀ vI ∈ KI , (A + A ¯ (ˆ v , u) − A ˆ I I I v¯(ˆ vI ,u) ˆ (uI ) . A¯ v (ˆ vI , u) − f, u − u ˆ ≥0 ∀u ∈ K
ˆ (vI ) × K ˆ (uI ) , the set of saddleRemark 15. As F is a convex-cancave function on K points in this set is non empty. So, using the necessary condition of saddle-point we have the following result: Let vI , uI ∈ KI , then the VI system which consists of finding (vo , uo ) ∈ ˆ (vI ) × K ˆ (uI ) such that K (
ˆ (vI ) (A + A∗ ) vo , v − vo ≥ (A∗ uo + f, v − vo ) ∀ v ∈ K
(−Avo + f, u − uo ) ≤ 0
ˆ (uI ) ∀u ∈ K
(109)
admits a solution. Remark 16. The set of solutions S (vI , uI ) of the above system has the form: S (vI , uI ) = v¯ (vI , uI ) × U (vI , uI ) , ˆ (uI ) . where U (vI , uI ) is a suitable subset of K Remark 17. If uI = vI the set U (uI , uI ) is a singleton and we have S (uI , uI ) = SuI × SuI .
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
6.3.
323
Differentiability of FII
Proposition 24. Let (ˆ v, u ˆ) verify the system (109) , then Tvˆ∗ ((A + A∗ ) vˆ − A∗ u ˆ − f ) is a sub-gradient of FII with respect to vI in KI . ˆ (uI ) such that Proof. For vI , uI ∈ KI , there exits u ˆ∈K FII (vI , uI ) = FI (vI , u ˆ)
(110)
and for any admissible increment δvI , i.e. vI + δvI ∈ KI , it follows FII (vI + δvI , uI ) = max FI (vI + δvI , u) ≥ FI (vI + δvI , u ˆ) . ˜ I) u∈K(u
(111)
Taking to account that FI is differentiable in vI , we have FI (vI + δvI , u ˆ) = FI (vI , u ˆ) + Tvˆ∗ ((A + A∗ ) vˆ − A∗ u ˆ − f ) , δvI + o (δvI ) .
(112)
From (110) , (111) and (112) we obtain
FII (vI + δvI , uI )−FII (vI , uI ) ≥ Tvˆ∗ ((A + A∗ ) vˆ − A∗ u ˆ − f ) , δvI +o (δvI ) . (113)
Now, let wI ∈ KI and 0 < t < 1, we put δvI = t (wI − vI ) in (113) , then
FII (vI + t (wI − vI ) , uI ) − FII (vI , uI ) ˆ − f , t (wI − vI ) + o t (wI − vI ) . ≥ Tvˆ∗ (A + A∗ ) vˆ − A∗ u
Since FII is convex in vI , it follows that
o (t (w − v )) I I ˆ − f , wI − vI + Tvˆ∗ (A + A∗ ) vˆ − A∗ u ≤ FII (wI , uI ) − FII (vI , uI ) . t
Now, taking limits t → 0+ we obtain ˆ − f , wI − vI ≤ FII (wI , uI ) − FII (vI , uI ) , Tvˆ∗ (A + A∗ ) vˆ − A∗ u therefore
ˆ−f Tvˆ∗ (A + A∗ ) vˆ − A∗ u is a sub-gradient of FII (vI , uI ) in vI .
6.4.
Differentiability of FII in the Diagonal of KI × KI
ˆ (vI ) × K ˆ (uI ) if and only if it is Proposition 25. (ˆ v, u ˆ) is a saddle-point of F (·, ·) in K solution of: ( ˆ (vI ) , ((A + A∗ ) vˆ − A∗ u ˆ − f, v − vˆ) ≥ 0 ∀ v ∈ K (114) ˆ (uI ) . (−Aˆ v + f, u − u ˆ) ≤ 0 ∀u ∈ K
324
Gabriela F. Reyero and Rafael V. Verdes
Proof. It follows from Lemma 2 taking into account that and ∂F v, u ˆ) = −Aˆ v + f. ∂u (ˆ
∂F ∂v
(ˆ v, u ˆ) = (A + A∗ ) vˆ−A∗ u ˆ −f
Corollary 3. If vI = uI ∈ KI and (SuI , SuI ) is the unique solution of the system (114) , ∗ (ASu − f ) is a sub-gradient of F . then TSu I II I ◦
◦
Theorem 7. FII (uI , ·) is differentiable in the diagonal of K I × K I and FII (·, uI ) is ◦ ◦ differentiable in the diagonal of K I × K I . Proof. ◦
◦
• FII (uI , ·) is F-differentiable in the diagonal of K I × K I . We remember that SuI verifies 0 = FII (uI , uI ) = max FI (uI , u) = FI (uI , SuI ) , ˆ I) u∈K(u
and FII (uI , uI + δuI ) =
max
ˆ I +δuI ) u∈K(u
FI (uI , u) ,
then ˆ (uI + δuI ) . FII (uI , uI + δuI ) ≥ FI (uI , u ˜) ∀ u ˜∈K Since FI is differentiable in u and ˆ (uI + δuI ) , SuI + TSuI δuI ∈ K FI (uI , SuI ) = 0, ∂FI (uI , u) = −ASuI + f, ∂u
we have FI (uI , SuI + TSuI δuI ) = (−ASuI + f, TSuI δuI ) + o (δuI ) , then FII (uI , uI + δuI ) ≥ (−ASuI + f, TSuI δuI ) + o (δuI ) .
(115)
Let us consider now uI − δuI , by a similar argument we get FII (uI , uI − δuI ) ≥ − (−ASuI + f, TSuI δuI ) + o (δuI ) .
(116)
Taking now into account that FII (uI , ·) is a concave function we have for any supergradient p and for any t in a suitable neighborhood of 0 ∈ ℜ FII (uI , uI + tδuI ) ≤ (p, tδuI ) . So, FII (uI , uI + tδuI ) ≤ (p, δuI ) . t→0 t lim
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
325
In the same form, from (115) and (116) we get FII (uI , uI + tδuI ) ∗ ≥ TSu (−ASuI + f ) , δuI . I t→0 t lim
Consequently, as we are at an interior point of KI , δuI is arbitrary and so we get ∗ p = TSu (−ASuI + f ) , I
∗ FII (uI , uI + δuI ) = TSu (−ASu + f ) , δu + o (δuI ) . I I I
Therefore FII (uI , ·) is differentiable and we have
∂FII ∗ (uI , uI ) = TSu (−ASuI + f ) . I ∂uI ◦
(117)
◦
• FII (·, uI ) is F-differentiable in the diagonal of K I × K I . Since FII (uI , uI ) = 0 ∀ uI ∈ KI , we get, using (117) ∗ FII (uI + δuI , uI ) = − TSu (−ASuI + f ) , δuI + o (δuI ) . I
Consequently,
∂FII ∗ (uI , uI ) = TSu (ASuI − f ) . I ∂vI Remark 18. The gradients ◦
◦
∂FII ∂vI
and
∂FII ∂uI
are uniformly Lipschitz functions of uI at the
diagonal of K I × K I . Consequently the function FII is also differentiable in the complete diagonal set: {(uI , uI ) : uI ∈ KI }, moreover ∂FII ∂FII (uI , uI ) = − (uI , uI ) . ∂vI ∂uI
6.5.
(118)
Monotony of the Gradients of FII
Lemma 7. The operator B : KI → XI ,
∗ uI → B (uI ) = TSu (ASuI − f ) I
is strongly monotone. Proof. Taking to account that FII (·, ·) is strongly convex-concave and it is F-differentiable in both variables (see [1]), we have FII (uI , uI ) − FII (˜ uI , u ˜I ) ≥
+ βFII
∂FII (˜ uI , u ˜ I ) , uI − u ˜I ∂vI ∂FII 2 kuI − u ˜I k + (uI , uI ) , uI − u ˜I , ∂uI
326
Gabriela F. Reyero and Rafael V. Verdes
moreover, FII (uI , uI ) = F (SuI , SuI ) = 0 where SuI is the solution of ˆ (uI ) (ASuI − f, v − SuI ) ≥ 0 ∀ v ∈ K and by Remark 18 ∂FII ∂FII (˜ uI , u ˜I ) = − (˜ uI , u ˜I ) , ∂vI ∂uI then we have ∂FII ∂FII (uI , uI ) , uI − u ˜I − (˜ uI , u ˜ I ) , uI − u ˜I ≥ βFII kuI − u ˜I k2 , ∂vI ∂vI therefore
˜I k2 , B (uI ) − B (˜ u I ) , uI − u ˜I ≥ βFII kuI − u
then the operator B is strongly monotone.
Now, we will prove properties of lipschitzianity of the operator B. ∗ (ASu − f ) is Lipschitz continuous in K , Proposition 26. The operator B (uI ) = TSu I I I i.e. there exists kB such that
B (uI + δuI ) − B (uI ) ≤ kB kδuI k .
Proof. We have
∗ ∗ B (uI + δuI ) − B (uI ) = TS(u (AS (uI + δuI ) − f ) − TSu (ASuI − f ) I I +δuI ) ∗ ∗ ∗ = TS(u − TSu AS (uI + δuI ) − f + TSu AS (uI + δuI ) − ASuI I I I +δuI )
then
B (uI + δuI ) − B (uI )
∗
∗ ∗
− T ≤ TS(u SuI AS (uI + δuI ) − f + TSuI AS (uI + δuI ) − ASuI I +δuI )
∗ ≤ C S (uI + δuI ) − SuI kAS (uI + δuI )k + kf k + kTSu kβ S (uI + δuI ) − SuI I ≤ C kS (C + kf k) + C β kS kδuI k .
Therefore B (uI + δuI ) − B (uI ) ≤ kB kδuI k.
References
[1] Auslender A., Optimisation, m´ethodes num´eriques, Masson, Paris, 1976. [2] Ciarlet P. G., Plates and junctions in elastic multi-structures: an asymptotic analysis, Masson, Paris, 1990. [3] Ekeland I., Temam R., Analyse convexe et probl`emes variationnels, Dunod, Paris, 1974.
A Decomposition Method to Solve Non-Symmetric Variational Inequalities
327
[4] Ferris M. C., Pang J.-S., (Eds.), Complementary and variational problems. State of the Art, SIAM Review, Vol 39, N◦ 4 pp. 669–713, 1997. [5] Glowinski R., P´eriaux J., Shi Z-C., Widlund O., (Eds.), Domain decomposition methods in sciences and engineering, J. Wiley & Sons, Chichester, 1997. [6] Gonz´alez R.L.V., Rofman E., On some junction problems, Rapport de Recherche INRIA N◦ 2937, Rocquencourt, France, 1996. [7] Gonz´alez R. L. V., Rofman E., On the role of the interface in the constructive or approximate solution of junction problems, in Computational Science for the 21st Century, pp. 549–557, J. Wiley & Sons, Chichester, 1997. [8] Gonz´alez R. L. V., Rofman E., Reyero G. F., On coupled systems and decomposition techniques related to junction problems. Study of the symmetric case. Optimal Control and Partial Differential Equations (Book in honour of Professor Alain Bensoussan’s 60th birthday), J.-L. Menaldi, E. Rofman and A. Sulem. (Eds.). IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington DC., pp. 466–474, 2001. ISBN: 158603-096-5. [9] Le Dret H., Probl`emes variationnels dans les multi-domaines. Mod´elisation des jonctions et applications, Masson, Paris, 1991. [10] Lions J.-L., Some more remarks on boundary value problems and junctions, in Asymptotic Methods for Elastic Structures, Ph.G. Ciarlet, L. Trabucho, J. M. Viano (Eds.), pp. 103-118, Walter de Gruyter, Berlin, 1995. [11] Lions J.-L., Marchouk G. I., Sur les m´ethodes num´eriques en sciences physiques et e´ conomiques, Dunod, Paris, 1974. [12] Lions J-L., Stampacchia G., Variational inequalities, Comm. Pure Appl. Math., Vol. 20, pp. 493-519, 1967. [13] Lotito P. A., Reyero G. F., Gonz´alez R. L. V., Numerical solution of a junction problem, in Mec´anica Computacional, Vol. 18, E. Dari - C. Padra - R. Saliba (Comp.), AMCA, pp. 619–628, Bariloche, November 1997. [14] Lotito P. A., Reyero G. F., Gonz´alez R. L. V., Numerical solution of a bilateral constrained junction problem, in Finite Difference Methods: Theory and Applications, (Eds.) Samarskii A., Vabishchevich P., Vulkov L., Ch. 17, Nova Science, USA, ISBN 1-56072-645-8, pp. 151–160, 1999. [15] Reyero G. F., Gonz´alez R. L. V., Some applications of decomposition techniques to systems of coupled variational inequalities, in Mec´anica Computacional, Vol. 16–17, G. Etse - B. Luccioni (Comp.), AMCA, pp. 403–412, Tucum´an, September 1996. [16] Reyero G. F., Gonz´alez R. L. V., Some applications of decomposition techniques to systems of coupled variational inequalities, Rapport de Recherche INRIA N◦ 3145, Rocquencourt, France, 1997.
328
Gabriela F. Reyero and Rafael V. Verdes
[17] Reyero G. F., Verdes R. V., Gonz´alez R. L. V.: Un procedimiento de descomposici´on para resolver inecuaciones variacionales no lineales. M´etodos Num´ericos en Ingenier´ıa, (Eds.) Abascal R., Dom´ınguez J., Bugeda G., SEMNI, ISBN: 84-89925-45-3, Spain, 1999. [18] Verdes R.V., On the solution of a non symmetrical variational inequalities system. Mathematicae Notae, Vol. 40, pp. 33–59, Rosario, 2004. [19] Verdes R. V., Sobre cierto tipo de inecuaciones variacionales no lineales. Mathematicae Notae, Vol XLIII , pp. 29–50, Rosario, 2005. [20] Verdes, R. V.: Complemento al trabajo “Sobre cierto tipo de inecuaciones variacionales no lineales”. Mathematicae Notae, Vol XLIV, pp. 17–22, Rosario, 2006. [21] Vorob’ev N. N., Game theory, Springer, New York, 1977.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 329-376
ISBN 978-1-60876-500-3 c 2011 Nova Science Publishers, Inc.
Chapter 13
N UMERICAL A PPROXIMATION TO S OLVE QVI S YSTEMS R ELATED TO S OME P RODUCTION O PTIMIZATION P ROBLEMS Laura S. Aragone∗ and Elina M. Mancinelli CONICET - FCEIA, Universidad Nacional de Rosario, Argentina
Abstract In this paper we develop numerical procedures to solve Quasi Variational Inequalities (QVI) Systems which appear in the optimization of some problems related to a multi-item single machine, which at any time the machine is either idle or producing any of m different items. Our aim is to obtain an optimum production schedule for each of the cases we propose. We focus our attention on three cases: the discount case, where the cost to be optimized takes into account a running cost and the production switching costs, as the integral cost functional has an infinite horizon a discount factor is used to guarantee that the integral converges; the ergodic case, where the objective is to find an optimal production schedule that minimizes the average cost for an infinite horizon; and the piecewise deterministic case where the demand varies randomly according to a piecewise deterministic process, the demand changes are described by a Poisson processes and the demand value may take a finite number of values.
1.
Introduction
In this article we present numerical procedures to solve Quasi Variational Inequalities (QVI) Systems which appear in the optimization of some problems related to a multi-item single machine, where at any time the machine is either idle or producing any of m different items. Our aim is to obtain an optimum production schedule for each of the cases we propose, therefore we define a control policy as a pair (production state, time) which states at each instant of time which item must be produced. The cost to be optimized takes into account a running cost and the production switching costs. ∗
E-mail address: [email protected]
330
Laura S. Aragone and Elina M. Mancinelli
This type of problems belong to the class of optimal control problems of jumping processes with state space constraints. The space constraint means that the trajectories of the controlled process must remain in a given bounded closed subset (a polyhedron in our case) of Rm . Using dynamical programming methods we arrive to Hamilton-Jacobi-Bellman (HJB) equations associated to these problems which are given by QVI systems. The main theoretical result is the characterization of the optimal cost function as the unique solution in the viscosity sense of this QVI systems (see [15]). We focus our attention on three particular cases: the deterministic problem with discount rate, the ergodic case and the piecewise deterministic cases. For the first one, in [14]– [15], it is considered a QVI system arising from the application of Dynamic Programming methodology to an optimal switching control problem. The optimal cost function U must verify special boundary conditions resulting from the state restrictions imposed to the state x ∈ Q of the dynamical system. In a portion of ∂Q, U must tend to +∞, while in the remaining points of ∂Q, a regular Dirichlet type condition holds. Both boundary conditions are essential to the definition of the viscosity solution of the QVI and to the proof of the uniqueness of its solution. In Part I, we define the discrete problem by discretizing the corresponding QVI system. Taking advantage of the fact that the inventory level is a piecewise linear function we create a mesh such that the trajectory associated to admissible controls reaches a node of the mesh at every switching time. This property of the mesh plays a key role in relation to the precision of the method and√the rate of convergence of its computational algorithm, which in this case is k instead of k, avoiding the dispersion error present when using standard meshes. In Part II we analyze the ergodic problem. The objective is to find an optimal production schedule that minimizes the average cost for an infinite horizon. By using dynamic programming techniques and taking into account the switching cost, it is possible to find an optimal feedback policy, in terms of any solution in the viscosity sense of a first order QVI system. This system is obtained by considering a sequence of optimization problems with non zero discount rate. We prove the existence of a solution to this QVI system and the uniqueness of the optimal average cost. A method of discretization and a computational procedure are described. They allow us to compute the solution in a short time and with precision of order k. We obtain an estimate for the discretization error and develop an algorithm that converges in a finite number of steps. Finally in Part III we deal with the case where the demand varies randomly according to a piecewise deterministic process. The demand changes are described by a Poisson processes and the demand value takes a finite number of values. This type of problems belong to the class of optimal control problems of jumping processes with state space constraints. We describe the optimal scheduling problem of a multi-item single machine production system where, besides the inner production, external purchases are allowed to cope with the demands. We analyze the HJB equation associated to this problem, considering both the integral and the differential expressions of this equation. We present the existence, uniqueness and regularity of the optimal cost function. 
We present a discretization procedure and we prove (in addition to the existence and uniqueness of the discrete solution) the convergence of the discrete solution to the continuous one. We estimate the rate of convergence.
Numerical Approximation to Solve QVI Systems
331
Part I
The Discount Rate Problem We present a numerical method to optimize the production schedule of a multi-item single machine. The purpose is to find an optimal production schedule that minimizes the following functional J, involving instantaneous and switching costs. That is: θi Z ∞ X J(β(·)) = (1) f (y(s), β(s))e−αs ds + q(di−1 , di )e−αθi . i=1
θi−1
We define the value function
n o Udα (x) = inf J(β(·)) : β(·) ∈ Adx ,
(2)
¯ ¯ ∈ Ad such that J(β(·)) β(·) = Ud (x). x
(3)
which is the minimum cost of operation starting from the initial state x and the initial state of production d. Adx is the set of admissible policies starting from those initial conditions. We want to find, for each (x, d), an optimal schedule α ¯ (·), i.e.
In [13], we can see the way an optimal feedback policy can be found in terms of the optimal cost function Ud . In this work we develop a numerical method which computes Udα with a fast and accurate procedure. Related results can be seen in [4], [14], [15]. The most important properties of our method are the following • The discrete approximations have precision of order k. • The iterative algorithm converges in a finite number of steps.
2.
Description of the Control Problem
2.1.
Production System
At any time the machine is either idle or producing any of m different items. We will denote D = {0, 1, . . . , m} and we assign the following values to the machine setting • d = 0 the idle state of the machine, • d = 1, . . . , m, when it is producing item d. For each item d = 1, . . . , m; we define the problem data as follows • rd the demand by unit time of item d, • pd the production quantity by unit time at the machine setting d, • Md the inventory capacity constraint of item d,
332
Laura S. Aragone and Elina M. Mancinelli e the switching cost of the machine from state d to d, e • q(d, d)
• f (x, d) the instantaneous inventory-holding/production cost, • α the discount rate. We will always assume a non zero loop cost condition: ∃ q0 > 0 such that for any closed loop d0 , d1 , . . . , dp , dp+1 , with d0 = dp+1 , p ≤ m, we have p X i=0
q(di , di−1 ) ≥ q0 .
(4)
Also, we assume the following conditions are verified e ≥ 0 ∀ de 6= d, q(d, d) = 0 ∀ d ∈ D, q(d, d) ¯ ≤ q(d, d) e + q(d, e d) ¯ ∀ d 6= de 6= d. q(d, d)
(5)
In addition, we suppose that the switching time is small enough to be disregarded and that the following condition, under which a feasible schedule exists, holds m X rd < 1. pd
(6)
d=1
In fact, we will always assume (6), as condition the idle state except for a total time τ =
m P
d=1
problem with infinite horizon.
2.2.
m P
d=1 xd pd ,
rd pd
= 1, forbids the machine to be in
and this is not a natural condition for a
Admissible States
Let yd (t) be the inventory level of item d at time t, starting at yd (0) = xd . Therefore, for the global state “y” of the system, we have y(t) = (y1 (t), . . . , ym (t)), (y1 (0), . . . , ym (0)) = (x1 , . . . , xm ). As neither backlogging nor production over the capacity constraints are allowed for the inventory state yd , the following restriction holds
We denote Q =
n Q
0 ≤ yd ≤ Md ∀ d = 1, . . . , m.
(7)
[0, Mi ]. The set Q of admissible states comprises only the set of points
i=1
with at most one zero component, because if we start from other points that do not verify this condition, we cannot avoid the shortage of at least one item, i.e. Q = (x1 , . . . , xi , . . . , xm ) ∈ Q : at most one component xi = 0 . Let us denote with ∂Q+ the points of ∈ Q that are not admissible, i.e. ∂Q+ = (x1 , . . . , xi , . . . , xm ) ∈ Q : at least two xi = 0 .
We denote with Ω the interior of Q, i.e. Ω ≡ {x : 0 < xi < Mi , i = 1, . . . , m}.
Numerical Approximation to Solve QVI Systems
2.3.
333
System Evolution
For any step function β : [0, ∞) → D from the definition of rd , pd , the following equation of evolution holds −rd dy β 6= d, (8) = g(β(t)), g(β) = (g1 (β), . . . , gm (β)), gd (β) = pd − rd β = d. dt
Remark 1. Since g is piece-wise constant, the equation (8) has global solution for any control policy. At the same time we always suppose that the function f is uniformly Lipschitz in Q, ∀ d ∈ D.
Remark 2. No admissible trajectory can start at a point of ∂Q+ . To understand this phenomenon, let us consider the case m = 2. In this case, ∂Q+ = {0}. If we start at x = 0, then, for any control policy, we have x1 (t) < 0 ∀ t ∈ (0, θ1 ), or x2 (t) < 0 ∀ t ∈ (0, θ1 ). So, condition (7) is not satisfied and the policy is not admissible.
2.4.
Admissible Controls
Any admissible schedule, which is generally denoted by β(·), as it is a step function may be also characterized by a sequence of pairs {θi , di }, where θi is the switching time, (with lim θi = +∞), 0 ≤ θ0 ≤ θ1 < · · · < θi < θi+1 < · · · and di ∈ D; di 6= di+1 ; i→∞
i = 0, 1, . . . is the state of production in (θi , θi+1 ] . For each x ∈ Q, d ∈ D, we denote Adx the set of all admissible schedules with initial state x and initial machine setting d n o + Adx = β(·) = (θi , di )∞ i=0 : d0 = d, ∀ t ∈ R , y(t) ∈ Q . In other words, we will consider sequences {θi , di } such that the associated trajectories remain in Q, ∀ t ≥ 0. In [13] it is proved that an admissible control exists.
2.5.
Optimal Cost Function
We consider the cost function defined by (1) and the associated optimal cost function U α (x) defined in (2); U α is a vector with components Udα . By virtue of the properties of the dynamical system studied, some properties of regularity for functions Udα hold; in particular, they are locally Lipschitz continuous. On each compact subset of Ω, the Lipschitz coefficient is independent of the parameter α. This property is crucial for the estimation of the convergence rate in numerical solutions, and so it is for the study of the optimization problem, when the time average criterion is used.
3. 3.1.
Dynamic Programming Solution Properties of the Optimal Cost Function
Hypotheses (4), (5), (8), together with the feasibility condition (6), enable the proof of strong regularity properties. The feasibility condition (6) implies a property of controllability that plays a key role to prove that the optimal cost function is bounded and locally Lipschitz-continuous. A detailed study of these properties can be seen in [13].
334
3.2.
Laura S. Aragone and Elina M. Mancinelli
Optimal Feedback Policy
We define
e + U α (x) , x ∈ Q, d ∈ D. S d (U α )(x) = min q(d, d) de d6e=d
(9)
The optimal cost function and the operator S d can be used to define an optimal feedback control policy β¯ = {θi , di }∞ i=0 in the following way (see [13]): We define θ0 = 0, d0 = d and recursively o n θ = min t ≥ θ di−1 (U α ))(y(t)) , α i i−1 : Udi−1 (y(t)) = (S (10) o n α α d i−1 di ∈ d ∈ D : d 6= di−1 , S (U ) (y(θi )) = Ud (y(θi )) + q(di−1 , d) .
3.3.
Dynamic Programming Approach
The dynamic programming method allows us to obtain the (HJB) equations associated to this problem and its boundary conditions. The proof can be seen in [1]. 3.3.1. Boundary Conditions for the HJB Equation Since the system is constrained to the set Q, conditions involving the values of function U α appear on the boundary of Q (see [13], [14], [15]). These conditions depend only on the values of U α and they do not involve its derivatives – as it occurs in problems with continuous controls, where constraints concerning the values of the Hamiltonian appear (see [10], [6]). Definition 1. Let us define ∂Qe =
m [
i=1
where
γi+ ∪ γi− ,
γd+ = (x1 , . . . , xd , . . . , xm ) ∈ Q, : xd = Md , γd− = (x1 , . . . , xd , . . . , xm ) ∈ Q : xd = 0 .
Theorem 1. In ∂Qe the following boundary conditions are verified α e U e (x) = (S d U α )(x) ∀ x ∈ γd− , ∀ de 6= d, d α Ud (x) = (S d U α )(x), d 6= 0 ∀ x ∈ γd+ .
3.3.2. The HJB Equation
The optimal cost function U α is uniformly Lipschitz continuous on compact subsets of Ω and then it is differentiable a.e. (see [12]). In consequence, we can associate – in terms of its derivatives – the following HJB inequality to the optimal cost function.
Numerical Approximation to Solve QVI Systems
335
Theorem 2. For each d ∈ D, the following relations are verified Udα (x) ≤ S d (U α )(x) ∀ x ∈ Q, (11) ∂U (x) d g(d) − f (x, d) ≤ 0, a.e. x ∈ Q, (12) αUdα (x) − ∂x ∂Udα (x) (Udα (x) − S d (U α )(x)) αUdα (x) − g(d) − f (x, d) = 0, a.e. x ∈ Q, (13) ∂x e U α (x) = (S d U α )(x) ∀ de 6= d, if x ∈ γ − (d 6= 0), (14) de
d
Udα (x)
d
α
= (S U )(x) if x ∈
γd+ ,
d 6= 0.
(15)
For the proof see [1], [15]. 3.3.3. Maximum Subsolution of the HJB Equation In this step we identify the optimal cost U α as a maximum subsolution, as a preliminary step to define the discrete problem. The proof of the following theorem is classical (see [13] for details). Definition 2. Set of subsolutions W n o W = w(·) : D × Q → R|wd (·) ∈ W1,∞ (Q), (11), (12) . loc Theorem 3. U α is the maximum element of the set W , i.e. U α ∈ W and Udα (x) ≥ wd (x) ∀ x ∈ Q, ∀ d ∈ D, ∀ w ∈ W. By virtue of this theorem, conditions (13)–(15) are ignored and in its place the concept of maximum element is introduced. In this form, the computation of U α is transformed into the problem Problem Pα : Find the maximum element U α of the set W.
4. 4.1.
Discrete Problem Elements of the Discrete Problem
To define the discrete problem, we introduce an approximation which comprises a dis1,∞ cretization of the space Wloc (Ω) and a discretization of conditions (11)–(12). We use some techniques analyzed in [13], [5]. S Domain of approximation. We will approximate Q with Qk = S̺k , where S̺k is a ̺
m finite set of quadrilateral ielements and, in consequence, Qk is a polyhedron of R . We k will denote by V = x , i = 1, . . . , N the set of nodes of Qk and we will denote the cardinal of V k by N . The typical shape of this mesh can be seen in Figure 1. Let k = max(diam(Sρk )). ̺
336
Laura S. Aragone and Elina M. Mancinelli x2 x2 M2
Stock 2
✻ ✉
❳❳❳ ❳❳❳ ❈ ❳❳❳ ❈ ❳❳❳ ❳❳ ❳❳ ❈ ❈ ❈ ❈ ❳❳ ❳❳❳ ❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❳ ❳❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❈ ❳❈❳ ❈❳ ❳ ❳ ❈ ❈ ❈ ❳ ❳ ❈ ❈ ❈ ❳❳❈❳ ❈ ❳❳❈❳ ❳ ❳ ❈ ❈ ❳❳❳❈ ❳❳❳❈ ❈ ❈ ❈ ❈ ❳ ❳❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❈ ❈ ❳ ❳❳❳ ❈ ❳❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❈❳ ❈ ❳❳❈❳ ❈ ❈ ❳❳ ❈ ❳❳ ❈ ❈ ❈ ❈ ❳❳❈❳ ❈ ❳❳❈❳ ❈ ❈ ❳ ❳ ❈ ❈ ❳❳❳❈ ❳❳❳❈ ❈ ❈ ❈ ❈ ❈ ❳ ❈ ❈ ❈ ❈ ❈ ❳❳❳❳❈❈❳ ❈❳ ❈ ❈ ❈ ❈ ❳❳❳❳❈❈❳ ❈ ❳❳❳❳❈❈❳ ❈ ❈ ❈ ❈ ❳❳❳❳❈❈❳ ❈ ❳❳❳❳❈❈ ❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❈❳ ❳❳ ❈ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❈❳ ❈❳ ❳❳ ❈ ❳❳ ❈ ❈ ❈ ❈ ❳❳❈❳ ❈ ❳❳❈❳ ❈ ❳ ❳ ❈ ❈ ❳❳❳❈ ❳❳❳❈ ❈ ❈ ❈ ❈ ❳ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❳❳❈❈❳ ❈ ❈ ❈ ❈ ❈ ❈ ❳❳❳❳❈❈❳ ❈❳❳ ❈ ❳ ❳❳ ❈ ❈ ❳❳ ❈❈ ❈ ❳❈❳❳ ❈ ❳❳❳ ❈ ❳❳❳❈ ❳❳❈❈❳ ❳❳ ❈ ❳❳❈ ✉ ✉✲ x1
(0, 0)
M1
Stock 1
(a) The mesh of Ω
M2
Stock 2
✻ ✉
◗ ❆❑❆ ◗ ❆ ◗◗ ◗ ❆ ◗ ❆ ◗ ◗ ❆ ◗ ❆ ◗ ◗ ❆ ◗ ❆ ◗ ◗ ❆ ◗ ❆ ◗ ◗ ❆ ◗ ❆ ◗ ◗ ❆ ◗ ❆ ✁❆❑◗ ✁❆❑ ✁❆❑ ❆ ✁ ❆ ◗◗ ✁ ❆ ✁ ❆ ❆ ❆ ❆ ❆ ✁ ✁ ◗ ✁ ◗ ❆ ❆ ❆ ❆ ✁ ✁◗ ✁ ❆ ❆ ❆ ✁ ✁ ✁ ◗ ❆ M1 ◗ ❆ ❆ ✁ ❆ ✁ ❆ ✁ ◗◗ ✉ s❆ ✉ ❆✁☛ ❆✁☛ ❆✁☛ ♣ ✲ x1
(0, 0)
0.82 Stock 1
(b) State space trajectory
Figure 1. Mesh and trajectory. We use a special uniform mesh B k of the space Rm . This mesh is defined in terms of an arbitrary parameter h, in the following way ) (m X rd d k ςd e : ςd integer , hd = B = h, pd d=0 ! (16) m X rd h, e0 = (−r1 , . . . , −ri , . . . , −rm )h0 , h0 = 1 − pd d=1
d
e = (−r1 , . . . , −rd−1 , pd − rd , −rd+1 , . . . , −rm )hd .
We will say that S̺k is an elementary domain of Qk if it has the following form ) (m X ζd ed : ζd ∈ [0, 1] , x̺ ∈ B k , S̺k ⊂ Q. S̺k = x̺ + d=0
If k is small enough, for any two vertices of V k , there always exists a path given by a natural system trajectory, which joins the first vertex to the second one. From (6) and (16) it results that B k can be generated by ) (m X d k ςd e : ςd is an integer . B = d=1
Definition 3. Discrete Controls Associated to the Mesh. We introduce a special family of controls by constraining the distance between switching times in the following way n o d Ad,k = β(·) ∈ A : θ − θ = ςh , ς integer . (17) i+1 i di x x Interpretation of the mesh. The special mesh used, originates a discrete optimal control problem. In that problem, the system has an evolution given by the differential equation
Numerical Approximation to Solve QVI Systems
337
(8), but controls di are applied during intervals whose length is ςhdi and the initial state x must be a node of V k . Consequently, the trajectory associated to this control reaches a node of the mesh at every switching time. Taking into account the interpretation of the discrete equations as the optimality conditions over the Markov chain associated to this discretization, this interpretation implies that the chain is deterministic in the sense that Pi,j = 0 or 1. This property of the mesh plays a key role in the precision of the method and the velocity of convergence of its computational algorithm. Boundary approximation. We define, ∀ d = 1, . . . , m + γk,d = xi ∈ V k : xi + hd g(d) ∈ / Qk , − ˆ ∈ γk,d = xi ∈ V k : xi + hdˆ g(d) / Qk , ∀ dˆ 6= d .
Approximation space. We consider the set F k of functions w : Qk × D → R, w(·, d) ∈ W 1,∞ (Qk ), such that in each quadrilateral element Qk , w(·, d) is a polynomial of the form a0 +
m X
aj xj +
j=1
XX
aij xi xj .
i=1 j6=i
It is obvious that any w ∈ F k is uniquely characterized by the values w(xi , d), xi ∈ V k , d ∈ D. 4.1.1. Discretization of HJB Inequalities We will use the following discretization of conditions (11), (12); which take, respectively, the following forms w(xi , d) ≤ S d (w)(xi ) ∀ xi ∈ V k , ∀ d ∈ D, w(xi , d) ≤ (Dk w)(xi ) ∀ xi ∈ V k , ∀ d ∈ D. d Here, Ddk is defined by: (Ddk w)(xi ) = e−αhd w(xi +hd g(d), d)+hd f (xi , d) !!! S + − , ∀ xi ∈ V k \ γk,d ∪ γk,r r6=d [ − (Dk w)(xi ) = +∞, ∀ xi ∈ γ + ∪ γk,r . d k,d r6=d
(18)
We can see that the definition of Ddk is consistent with (12) and that it also takes into account the boundary constraints (14)–(15). It can be easily proved that the corresponding associated discrete problem has the same convergence properties as the discrete problem Pk defined below. Operator Pkα . We define the operator Pk : Fk → Fk as (Pkα w)(xi , d) = min (Ddk w)(xi ) , (Sd (w))(xi ) ∀ xi ∈ V k , ∀ d ∈ D.
338
Laura S. Aragone and Elina M. Mancinelli
As it is easy to prove that there exists a unique fixed point of Pkα , i.e. U k = Pkα U k , then U k can be computed with the following iterative algorithm which converges from any starting point U 0 . Algorithm A0 : Step 0: Set ν = 0, U 0 ∈ Fk . Step 1: Set U ν+1 = Pkα U ν . Step 2: If U ν+1 = U ν , stop; else ν = ν + 1, and go to Step 1.
4.2.
Convergence of the Solutions
Using the property that the trajectory associated to any discrete admissible control reaches a node of the mesh at every switching time, in [1] it is proved the following result. ¯ an optimal policy for the initial conditions Theorem 4. Let xk ∈ V k , d ∈ D and β(·) (xk , d), with y(·) its corresponding trajectory. Then there exists a discrete control αk ∈ k Ad,k x such that, if we denote with y its associated trajectory, it is verified: ky k (s) − y(s)k ≤ M k ∀ s ∈ [0, T ], and if θνk are the switching times of β k , then θνk ≤ θν ,
θν − θνk ≤ C(ν)k.
Let ǫ >0, and Kǫ = {x ∈ Q : kxk ≥ ǫ}, then there exists C2 (ǫ) ∈ R+ such that Udk (xi ) − Udα (xi ) ≤
C2 (ǫ) k ∀ xi ∈ V k ∩ Kǫ , ∀ d ∈ D. α
The convergence of U k to U is (locally) of order k. The convergence is uniform in closed subsets of Q not intersecting ∂Q+ . Corollary 1. The discretization error can be estimated in the following way kU α − U k k ≤
C2 (ǫ) k. α
(19)
α k d Proof. As Ad,k x ⊂ Ax , we have U ≤ U ; this, together with the previous theorem implies that (19) holds.
4.3.
Accelerated Algorithm
In [1] a faster algorithm is presented with a combination of the value iteration algorithm and the policy iteration algorithm. From the finiteness of the discrete admissible policies set follows that the algorithm finishes in a finite number of steps. The solution to the linear systems in the algorithm is obtained explicitly and using elemental operations, instead of inverting a matrix at each step, as it is usual in policy iteration procedures. This accelerates
Numerical Approximation to Solve QVI Systems
339
the velocity of the numerical algorithm and it is clear that the procedure can be easily programmed.
Part II
Ergodic Case The objective of this Part is to find an optimal production schedule that minimizes the average cost for an infinite horizon for a multi-item single machine. The production system is the same as in the previous case, but as we now deal with the average cost there is no discount rate. We use the same notation and the same hypothesis as in Section 2.1.. Specifically we will try to minimize the following criterion θi Z ν 1 X J (β (·)) = lim sup f (y(s), di−1 ) ds + q(di−1 , di ) , (20) θ ν→∞ ν i=1
θi−1
where θi are the switching times of the control policies used and y(·) is the trajectory of the system when the control β(·) is applied (see [4], [17], [?], [?] and [23]) for a more general description of similar problems). By using dynamic programming techniques and taking into account the switching cost, it is possible to find an optimal feedback policy, in terms of any solution in the viscosity sense of the following first order Quasi-Variational Inequalities (QVI) system:
∂Ud g (d) + f − µ ≥ 0 in Ω, ∂x Ud ≤ S d (U ) in Ω, ∂Ud g (d) + f − µ Ud − S d (U ) = 0 in Ω, ∂x
(21)
with Q, Ω and D as defined in Section 2.2.. n o ˜ + U ˜(x) , x ∈ Q, d ∈ D. S d (U )(x) = min q(d, d) d d6=d˜
This system is obtained considering a sequence of optimization problems with non zero discount rate. In a strict sense, the relation between the discount problem and the optimization problem with average cost is: ∀ x ∈ Q, ∀ d ∈ D lim αU α = µ = inf J (β (·)) .
α→0
β(·)
(22)
The average cost. To each control policy β (·) we associate the cost function (20). For each d ∈ D and x ∈ Q, we define the minimum average cost n o µd (x) = inf J (β (·)) : β (·) ∈ Adx . (23)
340
Laura S. Aragone and Elina M. Mancinelli
Our objective is to find ∀ x ∈ Q and ∀ d ∈ D, a policy β¯xd (·) ∈ Adx , such that J(β¯xd (·)) = µd (x). The following proposition states that µd (x) does not depend on d and x (see [2] for the proof). Proposition 1. ∃ µ ∈ R such that µd (x) = µ ∀ x ∈ Q, ∀ d ∈ D.
5.
Use of Viscosity Techniques
We will use viscosity techniques (see [8]), in order to consider general solutions of the Hamilton–Jacobi–Bellman (HJB) equations system associated to this problem. Also with these techniques, we can easily prove properties of uniqueness of solution.
5.1.
Definition of the QVI System and Its Viscosity Solution
By using the same methodology as that employed in [14], [15], we will say that (Ud , µ) is a viscosity solution of (21), with boundary conditions (24)–(26) if • the functions Ud are continuous functions in Q, • the functions Ud satisfy Ud ≤ S d (U ), • the functions Ud verify the following boundary conditions, for x ∈ (∂Qe ∪ ∂Q+ ) ˜
Ud˜(x) = S d (U )(x) ∀ d˜ 6= d if x ∈ γd− ,
lim
x→∂Q+
•
Ud (x) = S d (U )(x) if x ∈ γd+ ,
Ud (x) = +∞,
(24) (25) (26)
∂Ud ∂x
g (d) + f − µ ≥ 0 in Ω in the viscosity sense, i.e. ∀ ψ ∈ C 1 (Ω), if Ud − ψ has a local maximum in x0 , then ∂ψ (x0 ) g(d) + f (x0 , d) − µ ≥ 0, ∂x
• ∀ x ∈ Ω / Ud (x) < S d (U )(x), then ∃ δ (x) / viscosity sense, i.e. ∀ ψ ∈ C 1 (Bδ(x) (x)),
∂Ud g (d) + f − µ = 0 in Bδ(x) in the ∂x
∂ψ (x0 ) g(d) + f (x0 , d) − µ ≥ 0, ∂x ∂ψ (x0 )g(d) + f (x0 , d) − µ ≤ 0. if Ud − ψ has a local minimum in x0 → ∂x
if Ud − ψ has a local maximum in x0 →
Numerical Approximation to Solve QVI Systems
5.2.
341
Uniqueness Property in Terms of Viscosity
A constructive procedure for an optimal policy. Let Ud be any set of continuous functions, which are solutions in the viscosity sense to the QVI system (21), with boundary conditions (24)–(26). By using them, an optimal feedback policy β ∗ = {θi , di } ∈ Adx can be obtained in the following way: We define θ0 = 0, d0 = d, and recursively o n θi = min t ≥ θi−1 : Udi−1 (y(t)) = S di−1 (U ) (y(t)) , (27) o n (28) di ∈ d ∈ D : (S di−1 (U ))(y(θi )) = q(di−1 , di ) + Ud (y(θi )), di−1 6= di . Next theorem (see [2]) establishes the optimality of the procedure.
Theorem 5. If U is a continuous viscosity solution of the system (21), (24)–(26), then the policy constructed according to (27)–(28), satisfies n o J(β ∗ (·)) = inf J(β(·)) : β ∈ Adx .
Corollary 2. There exists at most one value of the parameter µ such that (21), (24)–(26) has a solution in the viscosity sense.
Remark 3. If U is a solution of (21), (24)–(26), then U + c · e, is also a solution ∀ c ∈ R, being e = (1, . . . , 1) ∈ Rm+1 .
5.3.
Existence of Viscosity Solution
The discount problem. A Lipschitz-continuous solution of the system (21) with boundary conditions (24)–(26) can be obtained considering a sequence of optimization problems with non zero discount rate (we denote this coefficient with α). For this type of problem, the solution is given by the unique solution in the viscosity sense of the (QVI) system given in Theorem 2. Relation between the two problems. The relation between the discount problem and the optimal average problem is given by the following theorem (see [2]). It provides a strict statement of the intuitive fact that problems with a low discount rate or with average cost are optimized by similar policies. Theorem 6. By virtue of the feasibility condition (6), the following properties hold • lim αUdα (x) = µ, ∀ x ∈ Q, ∀ d ∈ D, α→0
• ∀U ∈
6. 6.1.
T
S
ς>0 ς>α>0
(U α (·) − Udα0 (x0 ) · e), (U, µ) is solution of (21), (24)–(26).
The Discrete Problem Elements of the Discrete Problem
To define the discrete problem, we use the approximation which comprises a discretization of the space W1,∞ loc (Ω) and a discretization of conditions (24)–(25) introduced in Section 4.1.
342
Laura S. Aragone and Elina M. Mancinelli
6.1.1. Discretization of HJB Inequalities We will use the following discretization of conditions (21) w(xi , d) ≤ Dk (w, µk )(xi ) ∀ (xi , d) ∈ V k × D, d i d w(x , d) ≤ S (w)(xi ) ∀ (xi , d) ∈ V k × D.
We define Ddk (w, µk )(xi ) in the following form (Dk (w, µk ))(xi ) = w(xi + hd g(d), d) + hd (f (xi , d) − µk ) d [ − + ∀ xi ∈ V k \ γk,d γk,d ∪ , r6=d [ − (Dk (w, µk ))(xi ) = +∞ ∀ xi ∈ γ + ∪ γk,d . d k,d r6=d
(29)
(30)
Remark 4. We can see that the definition of Ddk is consistent with (21) and takes into account the constraints (24)–(26). 6.1.2. Definition of Operator P k We define the operator Pk : Fk × R → Fk in the following form Pk (w, µk )(xi , d) = min (Ddk (w, µk )), S d (w)(xi ) ∀ (xi , d) ∈ V k × D.
(31)
6.1.3. Definition of the Discrete Problem In relation to the QVI system, we introduce the following problem which is defined in Part I. Problem Pk : Find (w, µk ) such that w = Pk (w, µk ).
(32)
Proposition 2. There exists at most one value of parameter µk such that (32) has a solution w ∈ F k. Proof. Can be seen in [2]. Remark 5. If (w, µ) is a solution to (32), then w + c · e is a solution ∀ c ∈ R, being e = (1, . . . , 1) ∈ R(m+1)×N . 6.1.4. Definition of Operator Pkα We repeat here the definition of operator Ddλ,k , introduced in [1] [ − (Dλk w)(xi ) = (1−λhd )w(xi +hd g(d), d)+hd f (xi , d) ∀ xi ∈V k \ γ + ∪ γk,d , d k,d r6=d [ − (Ddk w)(xi ) = +∞ ∀ xi ∈ γ + ∪ γk,d . k,d r6=d
Numerical Approximation to Solve QVI Systems The operator Pkα : Fk → Fk is defined by (Pk w)(xi , d) = min (Ddk w)(xi ), (S d (w))(xi ) ∀ xi ∈ V k , ∀ d ∈ D,
343
(33)
and the following problem enables us to find the unique solution U α,k of the discrete discounted cost problem Problem Pαk : Find the fixed point of operator Pkα .
6.2.
Relation between Problems Pk and Pαk
The relation between these two problems is established in the following form and the proof can be seen in [2]. Proposition 3. By virtue of feasibility condition (6), we have lim α U α,k (xi , d) = µk ∀ xi ∈ V k , ∀ d ∈ D.
α→0
Let us denote e ∈ F k the element with constant value 1. Then, ∀ xi0 ∈ V k , ∀ d0 ∈ D, ∀ w / ∃ a subsequence αν → 0 such that w = lim U α,k (·) − U α,k (xi0 , d0 ) · e α→0
(w, µk ) is a solution of Pk . The following estimation of the convergence velocity holds |αU α,k − µk | ≤ C(k) α.
6.3.
Convergence of the Method
Theorem 7. The following estimation for the difference between the optimal average cost of the original continuous problem and the cost corresponding to its discrete approximation holds |µ − µk | ≤ C k. (34)
6.4.
Accelerated Algorithm
In [2] is presented an algorithm which uses value iteration and policy iteration techniques and also makes use of the properties established in Theorem 6. This algorithm is a natural modification of the algorithm used in the discount problem. Theorem 8. The algorithm converges in a finite number of steps
Part III
Piecewise Deterministic Problem (PWD) We consider in this part the optimization of a production system comprising a multi-item single machine again but with piecewise deterministic demands in this case. The demands
344
Laura S. Aragone and Elina M. Mancinelli
change are described by Poisson processes and the demands take a finite number of values. See [20], [21], [22]. These piecewise deterministic processes were introduced by Davis in [9] and some results about their control can be seen in [18]– [19].
7.
Description of the PWD Problem
7.1.
Production System
The production system is, as it was in the discount problem, composed by a multi-item single machine, in which at any time the machine is either idle or producing any of m different items. We use the notation introduced in Section 2.1. and for each item d = 1, . . . , m ; we denote the new problem data as follows • nd the quantity of possible demands for item d. • The set of possible demands J =
m Q
d=1
{1, . . . , nd } and |J | =
m Q
nd .
d=1
• For each j ∈ J , rj = (r1 j , . . . , rm j ) the demand vector, i.e. rdj is the demand rate for item d. • αij the commutation rate between the states of demand i and j. • α the discount rate. The commutation costs satisfy: q(d, d) = 0 ∀ d ∈ D, ¯ ≥ q0 > 0 ∀ d¯ 6= d, q(d, d)
(35)
¯ ≤ q(d, d) e + q(d, e d) ¯ ∀ d 6= de 6= d q(d, d)
We also suppose instantaneous commutations and that the compatibility condition between demands and production holds (similar to condition (6)): ∀ j ∈ J m X rd j d=1
7.2.
pd
< 1.
(36)
Admissible States
Let yd (t) be the inventory level of item d at time t, starting at yd (0) = xd . Therefore, for the global inventory state y we have y (t) ≡ (y1 (t), . . . , ym (t)), (y1 (0), . . . , ym (0)) = (x1 , . . . , xm ) . As neither backlogging nor production over capacity constraints are allowed for the inventory state yd , the following restrictions for yd hold 0 ≤ yd ≤ Md , ∀ d = 1, . . . , m .
Numerical Approximation to Solve QVI Systems
345
The demand rates rd j are always positive, then if the stock levels of at least two items reach zero simultaneously, the shortage of at least one item is inevitable with any admissible control policy. Therefore, we consider here that it is possible to make external purchases to modify the stock by other means besides production. To simplify the analysis we suppose that each time that an external purchase is done, it has a fixed cost A and the system jumps instantaneously to the point of jointly maximum stock e = (M1 , M2 , . . . , Mm ). Then, in this case the admissible state space is Q=
m Y
[0, Md ].
d=1
We also define
γd− = (x1 , . . . , xd , . . . , xm ) ∈ Q : xd = 0 , γd+ = (x1 , . . . , xd , . . . , xm ) ∈ Q : xd = Md , m [ [ , γ0 = γd = γd+ ∪ γ− γ− e e. d6e=d
7.3.
d
e d=1
(37)
d
System Evolution
The state y follows an evolution given by dy = g(β(t), j), dt
where gd (β, j) =
−rd j pd − rd j
if β 6= d, if β = d.
(38)
The evolution of the system depends on the current state of demands. The state of demands r is given by a continuous time Markov chain with transition rate αi j between states of demand i and j. Remark 6. We suppose that the following inequalities hold |f (x, d)| ≤ Mf , ∀ x ∈ Q, ∀ d ∈ D , q(d, d) e ≤ Mq , ∀ d, de ∈ D , kg(d, j)k ≤ Mg , ∀j ∈ J , ∀d ∈ D, |f (x, d) − f (x, d)| ≤ Lf kx − xk , ∀ x, x ∈ Q, ∀ d ∈ D .
7.4.
(39)
Admissible Controls
An admissible control is characterized by a sequence of pairs (θi , di ) , where θi are the commutation times of the control (state of production) and di are the state of production after the time θi (θi are stopping times adapted to the demand process and di are random step functions also adapted to the demand process (see [3], [18])). The control must also verify the additional constraint condition: y(t) ∈ Q , ∀ t ∈ [0, ∞). We denote with Bxd j the set of these admissible controls.
346
Laura S. Aragone and Elina M. Mancinelli
Remark 7. It can be proved that for the optimization purpose it is only necessary to consider the subset of Markovian admissible controls. Given a function D : Q×D ×J −→ D, such that D(x, D(x, d, j), j) = D(x, d, j), ∀ x ∈ Q, ∀ j ∈ J , ∀ d ∈ D,
(40)
the feedback control d(·) (see [11]) is defined as d(t) = D(y(t), d(t−), j(t)). Under suitable additional conditions, (i.e. the equation (38) has unique solution when β(t) = D(y(t), β(t−), j(t)) ), the production state d (t) results in a piecewise constant function with a finite number of switchings in any finite interval, i.e. d = di in the interval (θi , θi+1 ] , with 0 = θ0 ≤ θ1 < · · · < θi < θi+1 < · · · ;
di ∈ {0, 1, . . . , m} ; di 6= di+1 ; i = 0, 1, . . . . Remark 8. Condition (40) is a technical restriction aimed at avoiding the existence of instantaneous closed loops of control switchings.
8. 8.1.
The Optimal Cost Function V The Optimization Target
The purpose of the optimization is to find an optimal feedback control policy that minimizes the criterion Jxd j (β) . This criterion takes into account the production cost and the commutation cost of control policies and has the following explicit form Zθi ∞ X dj −αs −αθi Jx (β) = E f (y(t), di−1 )e ds + q(di−1 , di )e (41) . i=1 θi−1
Related to this cost functional we define the optimal cost V, as n o Vdj (x) = inf Jxdj (β(·)) : β(·) ∈ Bxdj ,
(42)
where j represents the initial state of demands and d the initial state of production. A special feature of our problem is that neither the shortage of products nor the production over the maximum stocks are allowed. This additional requirement imposes constraints to the admissible trajectories and to the admissible policies. These constraints appear as algebraic conditions involving the values of the optimal cost function over the boundaries of the domain Q.
8.2.
The Optimal Cost Function V and Its Properties
In the following section we introduce an auxiliary operator P , we prove that P has a unique fixed point which is a H¨older continuous function. It can be proved, using verification techniques, that the optimal cost function V is this unique fixed point. We also present the Dynamic Programming Principle (DPP) verified by the optimal cost function of our problem.
Numerical Approximation to Solve QVI Systems
8.3.
347
A Fixed Point Pproblem and Its Solution
We use the following notation and auxiliary variables α ˜j = α +
P
λji , and
i6=j
τ (x, d, j) = sup t : x + t g(d, j) ∈ Q ,
(43)
the time when the trajectory reaches the boundary from the initial position x if there have not been demand changes, is given by ! xde Md − xd τ (x, d, j) = min , min . pd − rdj d6e=d rde
This function is a minimum of Lipschitz functions and therefore it is a Lipschitz function, 1 1 in fact, using Lτ = max max max pd −rdj , r e , it results d=1,m j∈J
d
τ (x, d, j) − τ (x, d, j) ≤ Lτ kx − xk .
Definition 4. Let B(Q)m+1)×|J | be the set of Borel-measurable and bounded functions, we define the operator S A : B(Q)(m+1)×|J | → B(Q)(m+1)×|J | , in the following way (∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q) ! e , Wdj (e) + A . S A W (x) = min min W e (x) + q(d, d) (44) dj
dj
d6e=d
Definition 5. The operator P : B(Q)(m+1)×|J | → B(Q)(m+1)×|J | , is defined as follows (∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q, ∀ w ∈ B(Q)(m+1)×|J | )
=
inf
τ ≤τ (x,d,j)
(P w)d j (x) τ Z X e−α˜ j s f (y(s), d) + λj i wdi (y(s)) ds + e−α˜ j τ (Sw)d j (y(τ )) , 0
i6=j
(45)
where y(s) = x + s g(d, j). This operator verifies the following theorem. Theorem 9. There exists in B(Q)(m+1)×|J | a unique fixed point of P , W. Besides, W is a uniform H¨older continuous function in Ω. The proof is based on techniques introduced in [16] and used in [13]. In order to simplify the reading we have included the complete proof of this theorem in Section 9.. Construction of an optimal feedback policy. When W , the fixed point of operator P , is known, an optimal control policy can be found in the following way: given a state (x, d, j), while no demands commutation appear, the policy is to continue the production of item d until the time of commutation n o tx,d,j = inf t ≥ 0 : Wdj (y(t)) = (S A W )dj(t) (y(t)) . (46)
348
Laura S. Aragone and Elina M. Mancinelli
is reached. tx,d,j is a well defined non anticipative commutation time (see [7], [11]). If in the interval (0, tx,d,j ) it does appear a demand commutation in a generic time s, (y(s), d, j(s+ )) is taken as the new state of the system and the definition of the new commutation time given by (46) is again applied. Computing the cost of this policy we obtain W , therefore W = V (the optimal cost). Remark 9. The proof of optimality of this policy can be obtained following the verification technique (see [26]). We can state from now on that the optimal cost function V is the unique fixed point of operator P . Remark 10. By virtue of (35) technical condition (40) is verified.
8.4.
Dynamic Programming Principle
From the properties of operator P we can obtain the differential conditions (the usual expression for the HJB equation) and the associated boundary conditions which are verified by the function V over the sets γd− and γd+ . Integral form of the HJB equation. Theorem 10. Let V be the fixed point of operator P , then ∀ x ∈ Ω, ∀ d ∈ D, ∀ j ∈ J the following conditions hold: (C1) Vdj (x) ≤ S A V dj (x) . (C2) ∀ t ≤ τ (x, d, j), if y(s) = x + sg(d, j) then Zt X λj,i Vdi (y(s)) ds + e−α˜ j t Vdj (x + tg(d, j)). Vdj (x) ≤ e−α˜ j s f (y(s), d) + i6=j
0
(C3) Furthermore if for some x ∈ Ω a strict inequality holds in (C1) then there exists tx,d, j > 0 such that ∀ 0 ≤ t ≤ tx,d,j ≤ τ (x, d, j) Zt X λj,i Vdi (y(s)) ds+e−α˜ j t Vdj (x +t g(d, j)), Vd j (x) = e−α˜ j s f (y(s), d) + i6=j
0
(47)
Boundary conditions for the HJB equation. The evolution of the system is restricted to the set Q and there are some conditions involving the values of the function V at the boundary ∂Q. Theorem 11. In ∂Q the following boundary conditions are verified ∀ d 6= 0: V e (x) = (S A V ) e (x), ∀ x ∈ γ − , ∀ de 6= d, d dj dj V (x) = (S A V ) (x), ∀ x ∈ γ + , dj
dj
(48)
d
where γd− and γd+ are defined in (37).
The proof is obvious from the definition of operator P and the fact that V is the unique fixed point of P .
Numerical Approximation to Solve QVI Systems
8.5.
349
Differential Form of the HJB Equation
If we assume that the function V is differentiable enough it can be immediately proven from the DPP that function V verifies the following differential equation in the classical sense, i.e. ◦ min (LV )dj , (S A V )dj − Vdj = 0 in Q, (49) with boundary conditions: Vdj = (S A V )dj in γd , where (LV )dj (x) =
X ∂Vdj λij (Vdi − Vdj ) (x). g(d, j) + f (x, d) − α Vdj (x) + ∂x i6=j
Remark 11. Given the fact that V is in general only H¨older-continuous, V verifies this equation in the viscosity sense. The solution of the HJB equation in the viscosity sense. w ∈ (C(Q))(m+1)×|J | is a viscosity solution of equation: ◦ min (Lϕ)dj , (S A w)dj − wdj = 0 in Q,
(50)
with boundary conditions
wdj = (S A w)dj in γd ,
(51)
if it verifies the algebraic conditions (51) in γd , and it is simultaneously a viscosity subso◦
lution and a viscosity supersolution of (50), where that means: ∀ d, j, ∀ ϕ ∈ C 1 (Q): ◦
• Subsolution) if wdj − ϕ has a local maximum in x0 ∈Q then
n o A min (Lϕ)d j (x0 ), (S w)d j (x0 ) − wd j (x0 ) ≥ 0 ◦
• Supersolution) if wd j − ϕ has a local minimum in x0 ∈Q then
n o min (Lϕ)d j (x0 ), (S A w)d j (x0 ) − wd j (x0 ) ≤ 0.
Existence and uniqueness of viscosity solution. It is easy to prove, using classical techniques based on the DPP, that V is the unique viscosity solution of (49). The uniqueness of the viscosity solution can be prove making a similar analysis to that presented in [15]. The proof of these properties is omitted for the sake of briefness.
9.
The Operator P
This section contains a detailed study of the operator P defined in Section 2 and the analysis of existence and uniqueness of its fixed point.
350
Laura S. Aragone and Elina M. Mancinelli
Let us define ∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q, ∀ w ∈ B(Q)(m+1)×|J | , 0 ≤ θ ≤ τ (x, d, j), Zθ X λji wdi (y(s)) ds+(S A w)dj (x+θg(d, j))e−α˜ j θ , βxdj (θ, w) = e−α˜ j s f (y(s), d) + i6=j
0
where y(s) = x + sg(d, j), then the operator P : B(Q)(m+1)×|J | → B(Q)(m+1)×|J | , is defined as (P w)dj (x) = min βxdj (θ, w). θ≤τ (x,d,j)
We also use the following notation n o τ (x, d, j) = inf θ ≤ τ (x, d, j) : (P w)dj (x) = βxdj (θ, w) .
9.1.
Operator P Properties
Lemma 1. P is an operator from (W 1,∞ (Q))(m+1)×|J | to (W 1,∞ (Q))(m+1)×|J | and verifies k∇P wk∞ ≤ (1 + η) (k∇wk∞ + ρ) , where
1 X λji , η = 1 + Lτ Mg + α ˜j i6=j X ρ = Lf + L M + A λji . τ f α ˜j i6=j
(52)
In other words, if w is a Lipschitz function (with associated Lipschitz constant Lw ), then P w is a Lipschitz function also, whose associated Lipschitz constant is LP w = (1+η)(Lw + ρ). Proof. We want to prove that
k(P w)dj (x) − (P w)dj (x)k = min βxdj (θ, w) − min βxdj (θ, w)
θ≤τ (x,d,j) θ≤τ (x,d,j)
= kβxdj (θx , w) − βxdj (θx , w)k ≤ LP w kx − xk.
where θx and θx denote the times which realize the minimum of β associated to the initial conditions x and x respectively. We will suppose that τ (x, d, j) ≤ τ (x, d, j) • Case 1: θx ≤ τ (x, d, j) (P w)dj (x) = βxdj (θx , w), (P w)dj (x) = βx d j (θx , w) ≤ βxdj (θx , w)
then
Numerical Approximation to Solve QVI Systems
351
(P w)dj (x) − (P w)dj (x) ≤ βxdj (θx ) − βxdj (θx ) ≤
Zθx 0
−α ˜j s
e
f (y(s), d) − f (y(s), d) ds +
Zθx
e−α˜ j s
X i6=j
λji wdi (y(s)) − wdi (y(s)) ds
0 + e−α˜ j θx (S A w)dj (x + θx g(d, j)) − (S A w)dj (x + θx g(d, j)) Zθx X ≤ e−α˜ j s kx − xk Lf + Lw λji ds + e−α˜ j θ Lw (1 + Lτ Mg )kx − xk 0
≤
i6=j
X
1 λji kx − xk + Lw (1 + Lτ Mg )kx − xk Lf + Lw α ˜j i6=j X L 1 f λji + kx − xk. ≤ Lw 1 + Lτ Mg + α ˜j α ˜j i6=j
Considering
(P w)dj (x) = βxdj (θx , w) ≤ βxdj (θx , w), (P w)dj (x) = βxdj (θx , w)
and by similar calculations we have
X Lf 1 (P w)dj (x) − (P w)dj (x) ≤ Lw 1 + Lτ Mg + λji + kx − xk, α ˜j α ˜j i6=j
therefore
X
L 1 f
(P w)dj (x) − (P w)dj (x) ≤ Lw 1 + Lτ Mg + λji + kx − xk. α ˜j α ˜j i6=j
• Case 2: τ (x, d, j) ≤ θx ≤ τ (x, d, j) (P w)dj (x) = βxdj (θx ) ≤ βxdj (τ (x, d, j)), (P w)dj (x) = βx d j (θx ) .
From these relations we have
(P w)dj (x) − (P w)dj (x) ≤ βxdj (τ (x, d, j)) − βxdj (θx ) ≤
τ (x,d,j) Z 0
e−α˜ j s f (y(s), d)−f (y(s), d) ds+ +
Zθx
τ (x,d,j)
τ (x,d,j) Z
e−α˜ j s
0
e−α˜ j s f (y(s), d) +
X i6=j
X i6=j
λji wdi (y(s))−wdi (y(s)) ds
λji wdi (y(s)) ds
352
Laura S. Aragone and Elina M. Mancinelli + e−α˜ j τ (x,d,j) (S A w)dj x + τ (x, d, j)g(d, j) − e−α˜ j θx (S A w)dj x + θx g(d, j) X X 1 1 Mf + A λji kx − xk + λji e−α˜ j τ (x,d,j) − e−α˜ j θx Lf + Lw ≤ α ˜j α ˜j i6=j i6=j + e−α˜ j τ (x,d,j) (S A w)dj x + τ (x, d, j)g(d, j) − (S A w)dj (x + θx g(d, j)) X X 1 ≤ Lf + Lw λji + Lw (1 + Lτ Mg ) kx − xk λji + Lτ Mf + A α ˜j i6=j i6=j X X Lf 1 ≤ Lw 1 + Lτ Mg + λji kx − xk. λji + + Lτ Mf + A α ˜j α ˜j i6=j
i6=j
We can also consider the following inequalities (P w)dj (x) = βxdj (θx ), (P w)dj (x) = βxdj (θx ) ≤ βxdj (θx ),
then
X Lf 1 (P w)dj (x) − (P w)dj (x) ≤ Lw 1 + Lτ Mg + λji + kx − xk, α ˜j α ˜j i6=j
therefore
(P w)dj (x) − (P w)dj (x) X X Lf 1 λji kx − xk. λji + + Lτ Mg + Mf + A ≤ Lw 1 + α ˜j α ˜j i6=j
i6=j
Hence by the symmetry between x and x this inequality holds for any x, x ∈ Q. By virtue of (52) it results LP w ≤ ηLw + ρ and so we arrive to k(P w)dj (x) − (P w)dj (x)k ≤ LP w kx − xk. Remark 12. The following relation between the Lipschitz constants of P w and w holds LP w ≤ (Lw + ρ)(η + 1)ν , ∀ ν ∈ N . Corollary 3. P : C(Q)(m+1)×|J | → C(Q)(m+1)×|J | . (m+1)×|J | Proof. Let w ∈ C(Q)(m+1)×|J | and wν be a sequence in W 1,∞ (Q) that approximates w in the following sense lim kw − wν k∞ = 0. It is easy to check that ν→∞
kP w − P wk∞ ≤ kw − wk∞ , therefore P w is the uniform limit of a sequence of Lipschitz continuous functions. Consequently P w is itself a continuous function.
Numerical Approximation to Solve QVI Systems
9.2.
353
Existence and Uniqueness of Fixed Point
We consider now the following fixed point problem Find W ∈ B(Q)(m+1)×|J | such that W = P W. We prove for this problem, the existence and uniqueness of solution. Definition 6. w is a subsolution of operator P if it verifies w dj (x) ≤ (P w) dj (x) ∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q, s is a supersolution of operator P if it verifies sdj (x) ≥ (P s)dj (x), ∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q. Remark 13. For any u, v ∈ B(Q)(m+1)×|J | we consider the partial order u ≤ v iff udj ≤ vdj ∀ x ∈ Q, ∀ d ∈ D, ∀ j ∈ J . Lemma 2. The set of subsolutions to operator P is not empty −M
Proof. We prove that if the constant function w verifies w ≤ α f , then w is a subsolution. Applying the definition (45) we get Zτ X P w dj = e−α˜ j s f (y(s), d) + λji w di ds + e−α˜ j τ S A w dj 0
=
Zτ 0
≥ = ≥
Zτ 0
i6=j
e−α˜ j s f (y(s), d) +
e−α˜ j s −Mf +
(1 −
(1 −
e−α˜ j τ )
α ˜j e−α˜ j τ
)
X i6=j
X i6=j
And so w is a subsolution.
λji w di ds + e−α˜ j τ
λji w di ds + e−α˜ j τ
−Mf + w dj
α ˜j
X i6=j
e w dj + q(d, d)
e w dj + q(d, d)
e λji + e−α˜ j τ w dj + q(d, d)
−Mf − α w dj + w dj + e−α˜ j τ q0 ≥ w dj + e−α˜ j τ q0 ≥ w dj .
Lemma 3. The set of supersolutions of operator P is not empty. Proof. Let c = min min(τ (e, d, j)) and ρ = max j∈J d∈D
If we define ∀ d ∈ D, ∀ j ∈ J sdj (x) =
j∈J
α ˜ j A + Mf (1 − ecα˜ j ) . α(1 − ecα˜ j )
ρ, x 6= e, we have s¯ is a supersolution. ρ − A, x = e.
354
Laura S. Aragone and Elina M. Mancinelli
Lemma 4. Given ρ > 0, ∃ w = w(ρ), w(ρ), 0 < µ(ρ) ≤ 1, such that ∀ u, v ∈ B(Q)(m+1)×kJ k verifying kuk ≤ ρ, kvk ≤ ρ, it holds that (P1) w ≤ u ≤ w, (P2) w ≤ v ≤ w, (P3) P w ≤ w, (P4) w + µ(w − w) ≤ P w. Proof. We consider −Mf w dj (x) = min −ρ, , ∀ d ∈ D, ∀ j ∈ J , ∀ x ∈ Q, wdj (x) = sdj (x) + ρ, α (53) where sdj (x) is given by Lemma (3). As kuk ≤ ρ, kvk ≤ ρ, properties (P1) and (P2) follow. sdj is a supersolution, so if we add a constant to it, the new function is also a supersolution. Therefore (P w)dj ≤ wdj and then property (P3) holds. By (53) we have w dj ≤
−Mf =⇒ w dj + e−α˜ j τ q0 ≤ (P w) dj , ∀ d ∈ D, ∀ j ∈ J . α
To obtain (P4) it should be µ(w − w) ≤ e−α˜ j τ q0 or the equivalent expression µ≤
e−α˜ j τ q0 , (w − w)
which is possible because w − w 6= 0. Besides w − w ≥ ksk + ρ + e−α˜ j τ q0 ksk + ρ +
Mf α
Mf α ,
as for any ρ
< 1,
and because ksk ≥ e−α˜ j τ q0 , then there exists 0 < µ(ρ) ≤ 1 such that (P4) holds 0 0 (a constant independent of the discount rate α) such that |Vdj (x)| ≤ C1 1 − log(d(x, ∂Q+ )) + 1/α ,
•
lim Vdj (x) = +∞.
x→∂Q+
In the next theorems we prove these properties. To simplify the proof we only consider the case Q ⊂ R2 . o n 2 : x1 + x2 ≤ b and 1 M2 Theorem 14. Let 0 < b < min M , , E = Q ∩ x ∈ R p1 p2 p1 p2 x ∈ Q \ E, then there exists C1 > 0 such that: |Vd,j (x)| ≤ C1 (1 + 1/α) .
(56)
Numerical Approximation to Solve QVI Systems
M2
B = {x ∈ Q : η2 < x1 + x2 },
✻ ❅ ❅
Ξ = {x ∈ Q : η1 < x1 + x2 ≤ η2 }, F = {x ∈ Q : b ≤ x1 + x2 ≤ η1 }.
❅
359
❅B ❅ ❅ Ξ ❅
F
❅ ❅
E❅ ❅
❅ ❅
✲
M1
Proof. Without losing generality, we consider p1 = p2 = 1. Let max(M1 , M2 ) < η1 < η2 < M1 + M2 and We define the following feedback control policy (∀ j ∈ J ) and x belonging to the state set D(x, 0, j) = 1, F \ (γ2− ∪ γ1+ ), D(x, 0, j) = 0, Ξ, D(x, 0, j) = 2, F ∩ (γ2− ∪ γ1+ ), D(x, 0, j) = 0, B, + − D(x, 1, j) = 1, (Ξ ∪ F ) \ (γ2 ∪ γ1 ), D(x, 1, j) = 1, B \ γ1+ , (57) D(x, 1, j) = 2, (Ξ ∪ F ) ∩ (γ2− ∪ γ1+ ), D(x, 2, j) = 2, B \ γ2+ , D(x, 2, j) = 2, (Ξ ∪ F ) \ (γ1− ∪ γ2+ ), D(x, 1, j) = 0, B ∩ γ1+ , D(x, 2, j) = 1, (Ξ ∪ F ) ∩ (γ1− ∪ γ2+ ), D(x, 2, j) = 0, B ∩ γ2+ . It can be proved (after some lengthy calculation) that, if we use this feedback control policy D starting at a point x ∈ Q \ E, we obtain a control policy β(·) whose associated switching times verify: ∀ ν ≥ 1, 0 < ǫ ≤ θν+1 − θν , where ǫ is a positive constant defined in terms of η1 , η2 and the data of the problem (but independent of the discount factor α). Consequently, for the associated functional cost Jxdj (β) we have
Jxdj (β) = E
∞ Zθν X ν=1
f (y(s), dν−1 )e−αs ds + q(dν−1 , dν )e−αθν
θν−1
)
) (∞ X Mf −ανǫ Mq e Mq e ≤ +E α ν=1 ν=1 Mf Mf Mq 1 1 = ≤ C1 + ≤ + Mq 1 + +1 . α 1 − eαǫ α αǫ α
Mf ≤ +E α
(∞ X
−αθν
Theorem 15. Let x ∈ E then there exists a constant C1 > 0 (which depends only on problem data but not on the discount factor α) such that Vdj (x) ≤ C1 1 + (log(d(x, ∂Q+ )))− + 1/α .
Proof. Let x ∈ E, and suppose x1 = 0, x2 < b. We apply the control policy defined in (57). So, at the switching time θν we have y(θ1 ) = (x11 , 0),
y(θ2 ) = (0, x22 ),
y(θ3 ) = (x31 , 0)
................................................... 1
360
Laura S. Aragone and Elina M. Mancinelli y(θ2ν ) = (0, lν ),
y(θ2ν+1 ) = (lν+1 , 0).
Is is clear that ν
θν+1 − θ1 ≥ l min min j
d
1 rdj
and lνn+1 ≥ γlν , where l0 = kxk and γ = min min(1 − r1j ) min(1/r2i ), min(1 − r2j ) min(1/r1i ) . j
i
j
i
h i kxk From (36) it results γ > 1. Consequently, for a time t ≥ θν , where ν = log b−log + 1, log γ the system reaches Q \ E. If after t¯ we apply the feedback policy defined in (57), we have for the corresponding cost Mf 1 log b − log kxk Mq + + C1 +1 . Jxdj (β) ≤ 1 + log γ α α And so for a suitable constant C we have 1 dj − Jx (β) ≤ C 1 + + (log kxk) . α For the general case x ∈ / E, it is possible to get a similar result and then we have the general relation Vdj (x) ≤ C 1 + α1 + log(kxk)− .
Theorem 16. For each x ∈ Q it holds that Vdj (x) ≥ −C1 log kxk − C2 .
Proof. Let β(.) ∈ Bxdj . We want to obtain a bound of the first switching time θ1 as a linear function of kxk. It always happens that at least one of the stocks yd (·) is a decreasing function in [0, θ1 ] and the shortage of any item is forbidden, then we have
1 1
θ1 ≤ max xd max ≤ kxk
r , j d rdj ∞
where 1r ∞ = max max r11j , r12j . For a general time θν+1 we have j
By virtue of (39) we get
1
θν+1 − θν ≤ ky(θν )k
r . ∞
ky(θν+1 )k ≤ ky(θν )k + Mg (θν+1 − θν ) ≤ χ ky(θν )k , where
1
χ = 1 + Mg
r . ∞
(58)
(59)
Numerical Approximation to Solve QVI Systems
361
From (58) and (59) we obtain
ν
1
kxk 1 − χ . θν ≤ ∞
r 1−χ ∞
(60)
Let us define ν = min{ν : θν ≥ 1}. Then we have
log 1 + (−1 + χ)/( 1r ∞ kxk∞ ) ν≥ log χ
(61)
and consequently, for the corresponding functional value Jxdj (β) we have Jxdj (β) ≥ −
Mf + (ν − 1)q0 e−α . α
So, by virtue of (61) we get Mf + e−α q0 Vdj (x) ≥ − α
log(Mg /kxk) −1 log(χ)
then Vdj (x) ≥ −C1 log kxk − C2 , where
−α q0 C1 = e log χ , Mf log Mg −α C2 = . + e q0 1 − α log χ
Remark 14. When m > 2 the preceding result takes the form Vdj (x) ≥ −C1 log(d(x, ∂Q+ )) − C2 .
10.3.
Approximation of the Singular Problem
Properties of the function V A related to V . Lemma 6. The relationship between the two problems is given by the following properties: ∀ x ∈ Q, ∀ d ∈ D, ∀ j ∈ J , 1) VdjA (x) ≤ Vdj (x), e e 2) VdjA (x) ≤ VdjA (x), ∀ A ≥ A,
3)
lim VdjA (x) = Vdj (x).
A→+∞
362
Laura S. Aragone and Elina M. Mancinelli
˜xdj ⊂ Bxdj . Proof. 1) From the definition of the admissible policies sets we conclude that B Then from the definition of V A (42) and V (55) it results that ∀ x ∈ Q, ∀ d ∈ D, ∀ j ∈ J , VdjA (x) ≤ Vdj (x). 2) This property is obvious because the policies sets are the same but the cost of applye ing them for parameter A is greater or equal to the cost of applying parameter A. 3) Let {An } be an increasing sequence. According to property 2, the function sequence An {Vdj } is monotone non decreasing and upper bounded by function Vdj . Then for each n o x ∈ Q we have that VdjAn (x) converges pointwisely to a function that we denote with V which is lower semi-continuous because V = sup VdjAn (x). We also notice from property 1 n
that V dj ≤ Vdj obviously holds. We prove that V is a viscosity supersolution to the following variational inequality min ((LV )dj , (SV )dj − Vdj ) = 0 in Ω,
(62)
Vdj = (SV )dj in γd .
(63)
with boundary condition The proof of the boundary condition (63) is immediate from the fact that in γd it is verifies the relation VdjA = (S A VdjA ) is verified for all A. To prove (62), let ϕ ∈ C 1 (Ω) such that V dj − ϕ has a strict local minimum in x0 , and let xn be a local minimum point of VdjAn − ϕ in a compact neighborhood of x. We prove in the first place that xn → x0 . We can suppose without loss of generality that (V dj − ϕ)(x0 ) = 0, besides VdjAn ր V dj then it results (VdjAn − ϕ)(xn ) ≤ (VdjAn − ϕ)(x0 ) ≤ 0. Let us suppose that the sequence {xn } does not converge to x0 . The sequence is contained in a compact set then there exists a subsequence (still denoted by xn ) that converges to x b. As x). x0 is the point that realizes the minimum it holds that 0 = (V dj − ϕ)(x0 ) < (V dj − ϕ)(b As V dj is lower semi-continuous function, we have lim (V dj − ϕ)(xn ) ≥ (V dj − ϕ)(b x) > 0,
n→∞
hence ∃ N such that ∀ n ≥ N we have (V dj − ϕ)(xn ) ≥
x) (V dj − ϕ)(b > 0. 2
¯ (compact As V is a pointwise limit of continuous functions V An and the fact that {xn } ⊂ Q ˜ ˜ set), then there exists N such that ∀ n ≥ N (VdjAn − ϕ)(xn ) ≥ (V dj − ϕ)(b x)/4 > 0, hence Ank
(V dj − ϕ)(x0 ) ≥ (Vdj
Ank
− ϕ)(x0 ) ≥ (Vdj
− ϕ)(xnk ) ≥ (V dj − ϕ)(b x)/4 > 0
Numerical Approximation to Solve QVI Systems
363
which is a contradiction to the fact that (V dj − ϕ)(x0 ) = 0. Therefore x b = x0 . Let us now show that VdjAn (xn ) → V dj (x0 ). (VdjAn − ϕ)(xn ) is an increasing sequence because A
A
(Vdj n−1 − ϕ)(xn−1 ) ≤ (Vdj n−1 − ϕ)(xn ) ≤ (VdjAn − ϕ)(xn ) and moreover it is bounded by V dj (x0 ) − ϕ(x0 ) (VdjAn − ϕ)(xn ) ≤ (VdjAn − ϕ)(x0 ) ≤ (V dj − ϕ)(x0 ) ∀ n ∈ IN, therefore lim VdjAn (xn ) ≤ V dj (x0 ).
(64)
Given ε > 0, ∃ n0 such that ∀ n ≥ n0 we have V dj (x0 ) − ε ≤ VdjAn (x0 ). An0
Besides Vdj
(xn ) ≤ VdjAn (xn ) ∀ n ≥ n0 , in consequence An0
lim Vdj
n→∞ An0
but as Vdj
(xn ) ≤ lim VdjAn (xn ), n→∞
An0
is a continuous function then lim Vdj n→∞
An0
V dj (x0 ) − ε ≤ Vdj
An0
(x0 ) = lim Vdj n→∞
An0
(xn ) = Vdj
(x0 ) and so
(xn ) ≤ lim VdjAn (xn ) n→∞
this is valid for all ε > 0 then V dj (x0 ) ≤ lim VdjAn (xn ). n→∞
(65)
From (64) and (65) we get V dj (x0 ) = limn→∞ VdjAn (xn ). For each n we know that n o min (LAn ϕ)dj (xn ), (S An V An )dj (xn ) − VdjAn (xn ) ≤ 0.
Case 1: There exists a sub-sequence (still denoted by xn ) such that it verifies (LAn ϕ)dj (xn ) ≤ 0, ∀ n, i.e. X ∂ϕd, j λi,j VdiAn − VdjAn (xn ) ≤ 0, g(d, j)(xn ) + f (xn , d) − αVdjAn (xn ) + ∂x i6=j
as ϕ is a continuous function, the hypothesis about f and g and the properties: lim VdjAn (xn ) = V¯dj (x0 ),
n→∞
lim VdiAn (xn ) ≥ V¯di (x0 )
(66)
n→∞
we conclude that (Lϕ)dj (x) ≤ 0. Case 2: Let us suppose that the condition VdjAn (xn ) = (S An V An )dj (xn ), is always satisfied, there are two possibilities:
(67)
364
Laura S. Aragone and Elina M. Mancinelli
i) The jumps are always of type “A” then VdjAn (xn ) = VdjAn (e) + An . This is impossible because the left member converges to V (x0 ) while the right one tends to ∞. ii) There exists a sub-sequence where only jumps of type “not A” appear, i.e. e . VdjAn (xn ) = S(V An )(xn ) = min V eAn (xn ) + q(d, d) d6e=d
Therefore for some particular de we have
dj
e VdjAn (xn ) = V eAn (xn ) + q(d, d) dj
taking limit and considering (66) we obtain
e V dj (x0 ) ≥ V dej (x0 ) + q(d, d)
and in consequence
e , V dj (x0 ) ≥ min V dej (x0 ) + q(d, d) d6e=d
from that we get
V dj (x0 ) ≥ (SV )dj (x0 ).
(68)
From (67) and (68) we have that the function V is a viscosity supersolution of (62), it means that min (LV¯ )dj , (S V¯ )dj − V¯dj (x0 ) ≤ 0.
We wish to remark from the last inequality that ∀ x such that V¯ < S V¯ we have LV¯ ≤ 0. We define a feedback policy in the following way: if (x, d) is a point such that V¯dj (x) < (S V¯ )dj (x), we continue with the policy d while j does not change or it reaches a point e where V¯dj (y(t)) ≥ (S V¯ )dj (t), in other words V¯dj (y(t)) ≥ V¯dj e (y(t)), for some d. Then for t = t1 ∧ T1 (t1 is the first production switching time and T1 is the first jump time of the process) we have (LV¯ )dj (y(t)) ≤ 0 and so Zt e −α t + e−αs f (y(s)) ds ≤ 0 ¯ E e−αt V¯dj ˜ (y(t)) − Vdj (x) + q(d, d)e 0
it means that
e −α t + V¯dj (x) ≥ E e−αt V¯dj ˜ (y(t)) + q(d, d)e
Zt 0
e−αs f (y(s)) ds ,
for V¯dj (y(t)) the same inequality is valid, hence Ztn n X + −α tl V¯dj (x) ≥ E e−αtn V¯dj q d− e−αs f (y(s)) ds + ˜ (y(tn )) + l , dl e 0
l=1
Numerical Approximation to Solve QVI Systems Ztn n X ≥ E e−αtn f (y(t)) ds + q d− , d+ e−α tl
l
365
l
l=1
0
taking limit we obtain V¯dj (x) ≥ JV¯dj (x) then V¯dj (x) ≥ Vdj (x) therefore V A → V .
11. 11.1.
Discrete Problem Elements of the Discrete Problem
To define the discrete problem, we introduce an approximation which comprises a discretization of the space W 1,∞ (Ω) and the equation V = P V . We use techniques and results presented in [1], [13]. Domain approximation. We identify the discretization of the space variables with the parameter k, which also indicates the size of discretization. Let h > 0; for each j ∈ J , we define the uniform discretization B jk of Rm given by: •
B jk
•
h0j
=
m P
d=0
ζd edj
m P = 1−
• hdj =
d=1
rdj pd
: ζd is integer ,
rdj pd
h,
h,
• e0j = (−r1j , . . . , −rdj , . . . , −rmj ) h0j ,
• edj = −r1j , . . . , −r(d−1)j , pd − rdj , −r(d+1) j , . . . , −rmj hdj . For each state of demand j we approximate Q with Qjk = maximum set of polyhedrons of Rm which verify S̺jk
=
xj̺
+
(m X d=1
ξd edj
S ̺
S̺jk , where {S̺jk } is the
)
: ξd ∈ [0, 1] , xj̺ ∈ B jk , S̺jk ⊂ Q;
i.e. for each j we define the same kind of polyhedron as in Part I for the discount problem. We define kj = max(diam S̺jk ), k = max(kj ). ̺ j∈J j jk We denote V = x̺ , ̺ = 1, . . . , Nj the set of nodes of Qjk , where Nj is the cardinal of V jk . Remark 15. If k is small enough, then for any two vertices of V jk , there always exists a path given by a system trajectory, (in fact, an especial trajectory without demand changes) which joins the first vertex to the second one.
366
Laura S. Aragone and Elina M. Mancinelli Approximation of the boundary. We define ∀ d = 1, . . . , m, ∀ j ∈ J , n o + γk,d,j = xj̺ ∈ V jk : xj̺ + hd g(d, j) ∈ / Qjk , n o e e − γk,d,j = xj̺ ∈ V jk : xj̺ + hd g(d, j) ∈ / Qjk , ∀ de 6= d .
Approximation space F k . We divide each quadrilateral element S̺jk in simplices such that the edges are coincident with lines of the form xj̺ + sg(d, j), s ≥ 0. We consider the set F k of functions Y Qjk × D → R, wdj (·) ∈ W 1,∞ (Qjk ), w: j∈|J |
that any w ∈ F k is entirely characterized by the values ̺ = 1, . . . , Nj .
11.2. 11.2.1.
m P
ai xi . It is obvious i=1 wdj (xj̺ ), xj̺ ∈ V jk , d ∈ D, j ∈ J ,
such that in each simplex wdj (·) is an affine function of the type a0 +
The Discrete HJB Equation A Discrete Fixed Point Problem
We define X 1 wdj (xj̺ +hdj g(d, j))+hdj f (xj̺ , d)+hdj λji wdi πQik (xj̺ ) , 1+ α ˜ j hdj i6=j [ − + ∀ xj̺ ∈ V jk \ γk,d,j (Lk w)dj (xj̺ ) = γk,r,j ∪ , r6 = d [ − + +∞ ∀ xj̺ ∈ γk,d,j γk,r,j ∪ , r6=d
where πQik (xj̺ ) is the projection of xj̺ over the set Qik and (Sk w)dj (xj̺ )
= min min d6e=d
j wdj e (x̺ )
! e , wdj (πQ (e)) + A , + q(d, d) jk
where πQjk (e) is the projection of e over the set Qjk . In this way, (Lk w)dj (xj̺ ) is a natural discretization of (47) and it includes the boundary conditions (48). We define Pk : F k → F k such that ∀ xj̺ ∈ V jk , ∀ d ∈ D, ∀ j ∈ J (Pk w)dj (xj̺ ) = min (Lk w)dj (xj̺ ), (SSk w)dj (xj̺ )
(69)
Numerical Approximation to Solve QVI Systems
367
and the following discrete problem: P roblem Pk : Find the fixed point of operator Pk . Existence and uniqueness of the discrete solution. Using the techniques introduced in [16] we can prove the following characterization of the unique solution U k of problem Pk . Theorem 17. There exists a unique fixed point of operator Pk , i.e. ∃ ! U k such that U k = Pk U k . Moreover ∀ w ∈ F k we have U k = lim (Pk )ν w and the following estimation of ν →∞ the convergence rate holds kPkν w − U k k ≤ K(ρ)(1 − ̺(ρ))ν ,
(70)
where 0 < ̺(ρ) ≤ 1 and K(ρ) > 0, ρ depends of kwk . Proof. To prove this theorem we use the techniques given in ( [13]). To use this techniques we should prove that the set of supersolutions and the set of subsolutions are not empty, i.e. there exist, at least, two functions s¯ and w ¯ such that: a) Pk s ≤ s, b) Pk w ≥ w. a) Let s be the function such that ∀ d, ∀ j is given by !!! M S Mq f j − + , γk,r,j + , x̺ ∈ V jk \ γk,d,j ∪ α ηα r6=d j sdj (x̺ ) = !! M S − Mq f j + , γk,r,j + + Mq , x̺ ∈ γk,d,j ∪ ηα α r6=d
where η verify
η ≤ min(hdj ). dj
(71)
We want to show that s is a supersolution. We consider two cases: S − + γk,r,j . ∪ (i) xj̺ ∈ V jk \ γk,d,j r6=d
+ ∪ (ii) xj̺ ∈ γk,d,j
Case (i)
S
r6=d
− γk,r,j
.
X 1 sdj xj̺ + hdj g(d, j) + hdj f (xj̺ , d) + (Pk s)dj (xj̺ ) ≤ λji sdi (πQik (xj̺ )) 1+α ˜ j hdj i6=j X Mf Mq 1 Mf + Mq + Mq + hdj Mf + hdj λji + ≤ d α ηα α ηα 1+α ˜ j hj i6=j
368
Laura S. Aragone and Elina M. Mancinelli d X d X h h 1 1 1 j Mf + hdj + j = λji + Mq λji +1+ α α ηα ηα 1+α ˜ j hdj i6=j i6=j X Mq 1 d d Mf (1 + α = λji ˜ h ) + 1 + ηα + h j j j α ηα 1+α ˜ j hdj i6=j
≤
Mf Mq + = sdj (xj̺ ), α ηα
this last inequality holds by virtue of (71). Case (ii) e + s e (xj ) ≤ Mf + Mq + Mq = sdj (xj ). (Pk s)dj (xj̺ ) = (Ss)dj (xj̺ ) = q(d, d) ̺ dj ̺ α ηα
The analysis of cases (i) and (ii) proves that s is a supersolution. b) Let w be the function given by wdj = −
Mf , ∀ d ∈ D, ∀ j ∈ J . α
We want to show that w is a subsolution. e ≥ 0 we have Since q(d, d)
(Sk w)dj ≥ wdj , ∀ d ∈ D, ∀ j ∈ J .
Let us see that operator Lk verifies the following inequality (Lk w)dj ≥ w, ∀ d ∈ D, ∀ j ∈ J .
=
(Lk w)dj (xj̺ )
X
1 wdj (xj̺ + hdj g(d, j)) + hdj f (xj̺ , d) + λji wdi (πQi k (xj̺ )) 1+α ˜ j hdj i6=j X M M 1 − f − hdj Mf − hdj f λji ≥ α α 1+α ˜ j hdj i6=j X −Mf Mf Mf + αhdj + hdj = λji ≥ − = wdj (xj̺ ). d α α(1 + α ˜ j hj ) i6=j
Then we have ∀ d ∈ D, ∀ j ∈ J
wdj ≤ min (Lk w)kdj , (Sk w)dj
and so w is a subsolution of Pk . The remaining part of the proof is analogous to what was presented for the continuous operator P in Lema (2).
Numerical Approximation to Solve QVI Systems
11.3.
369
Convergence Results
We have the following result for the convergence of the discretization procedure to the solution of the original continuous problem. Theorem 18. The following rate of discretization holds kU k − V k∞ ≤ Khγ . Proof. Let h be the positive value associated to the parameter of discretization k of the domain Ω, U k the discrete solution associated to this parameter, then U k is given by k Udj (xj̺ ) = min (Lk U k )dj (xj̺ ), (Sk U k )dj (xj̺ ) ∀ xj̺ ∈ V jk , ∀ d ∈ D, ∀ j ∈ J ,
We want to estimate the difference between this function and the solution of the original problem. We consider first the difference: V − U k . Let n o k ∆1 = max Vdj (xi ) − Udj (xi ), xi ∈ V jk , j ∈ J , d ∈ D .
Then, if we call x0 to the point that realizes this maximum, we have
k k x0 + hdj g(d, j) + hdj (f (x0 , d) Udj (x0 ) = min (1 − α ˜ j hdj )Udj +
X
.
k λji Udi (πQik x0 )), (SSk U k )dj (x0 )
i6=j
By using a recursive argument we can assume that the minimum is attained by the first component appearing in the min operation, i.e. X k k λji Udki (πQi (x0 )) . Udj (x0 ) = (1 − α ˜ j hdj )Udj (x0 + hdj g(d, j)) + hdj f (x0 , d) + i6= j
+ Then x0 ∈ / γh,d,j ∪
and Vdj verifies:
Vdj (x0 ) ≤
Zhdj 0
S
r6=d
− γh,r,j
, and so x0 + hdj g(d, j) ∈ Qjk ⊂ Ω; hence
hdj < τ (x0 , d, j) = sup t : x0 + t g(d, j) ∈ Ω
e−α˜ j s f (y (s) , d) +
Therefore we have
k ∆1 = Vdj (x0 ) − Udj (x0 )
X i6=j
λji Vd i (y(s)) ds+e−α˜ j hdj Vdj (x0 + hdj g(d, j)) .
370
Laura S. Aragone and Elina M. Mancinelli Zhdj X λji Vd i (y(s)) ds + e−α˜ j hdj Vdj (x0 + hdj g(d, j)) e−α˜ j s f (y(s), d) + ≤
0
i6=j
k − (1 − α ˜ j hdj )Udj (x0 + hdj g(d, j)) − hdj f (x0 , d) + hdj
≤
Zhdj 0
X
k λji Udi (πQik (x0 ))
i6=j
X k λji Vdi (y(s)) − Udi πQik (x0 ) ds e−α˜ j s f (y(s), d) − f (x0 , d) +
i6=j
−α ˜ j hdj
+ Vdj (x0 + hdj g(d, j))e ≤
Zhdj
−α ˜j s
e
k − (1 − α ˜ j hdj )Udj (x0 + hdj g(d, j))
X λji Vd i (y(s)) − Vdi (x0 ) Lf hdj + i6=j
0
k ds + Vdi (x0 ) − Vdi (πQik (x0 )) + Vdi (πQik (x0 )) − Udi (πQik (x0 )) k + (1 − α ˜ j hdj ) Vdj (x0 + hdj g(d, j)) − Udj (x0 + hdj g(d, j)) + O(h2dj ) X λji (1 + Mgγ )LV hγdj + ∆1 hdj + (1 − α ≤ Lf hdj + ˜ j hdj )∆1 + O(h2dj ). i6=j
As a result we obtain X X λji LV hγdj hdj +O(h2dj ), λji − 1 + α ˜ j hdj ≤ Lf hdj + (1 + Mgγ ) ∆1 1 − hdj i6=j
i6=j
which implies
X 1 ∆1 ≤ λji LV hγdj + O(h2dj ). Lf hdj + (1 + Mgγ ) α i6=j
Then for any node of Qjk ∀ j, ∀ d the same estimation is verified. For any x in the domain Qjk there exists a point, which we denote by xi ∈ Qjk , such that kx−xi k ≤ hdj . Therefore, k is an affine function we get taking in to account that Vdj is H¨older continuous and that Udj k (x) ≤ ∆1 + Ckx − xi kγ . Vdj (x) − Udj Then for x ∈ Ω
X
1 λji LV hγdj + O(hdj ) + O(hγdj ). Lf hdj + (1 + Mgγ ) α i6=j Defining K = max (1 − (rdj )/(pd )), rdj /pd and remembering that h0j = (1 − rdj /pd )h k Vdj (x) − Udj (x) ≤
dj
and hdj = hrdj /pd , we have
k Vdj (x) − Udj (x) ≤
K Lf h + (1 + Mgγ ) α
X i6=j
λji LV hγ + O(hγ ).
Numerical Approximation to Solve QVI Systems
371
We consider now the difference uh − V. Let x be such that k k ∆2 = Udj (x) − Vdj (x) = max Udj (x) − Vdj (x) . dj x
k is given by The function Udj k Udj (xj̺ ) = min (Lk U k )dj (xj̺ ), (SSk uh )dj (xj̺ ) ∀ xj̺ ∈ V j,k , ∀ d ∈ D, ∀ j ∈ J ,
then
X 1 k k k Udj λji Udi (πQik (xj̺ )) , xj̺ + hdj g(d, j) + hdj f (xj̺ , d) + Udj (xj̺ ) ≤ 1+α ˜ j hdj i6=j k k xj̺ + hdj g(d, j) , (1 + α ˜ j hdj )Udj (xj̺ ) ≤ Udj X k πQik (xj̺ ) λji Udi + hdj f (xj̺ , d) + i6=j
k α ˜ j hdj Udj (xj̺ )
≤
k + hdj g(d, j) − Udj (xj̺ ) X k πQik (xj̺ ) λji Udi + hdj f (xj̺ , d) +
k Udj
xj̺
i6=j
k α ˜ j hdj Udj (xj̺ ) ≤ Ddj hdj + hdj f (xj̺ , d) +
where
Ddj =
X i6=j
(72)
k πQik (xj̺ ) , λji Udi
k (xj + h g(d, j)) − U k (xj ) Udj ̺ dj dj ̺
hdj
is the discrete derivative. The function xj̺
k Udj
is affine along the edges (which coincide with
segments of lines of the type + sg(d, j), s ≥ 0). Hence, in (72) we can replace hdj by b hdj ≤ hdj being b hdj such that Vdj (x) =
b
Zhdj 0
−α ˜j s
e
f (y(s), d) +
In consequence we have
X i6=j
b b λji Vdi (y(s)) ds + Vdj x + hdj g(d, j) e−α˜ j hdj .
k α ˜j b hdj Udj (xj̺ ) ≤ Ddj b hdj + b hdj f (xj̺ , d) +
X
k λji Udi
i6=j
+ Moreover there exists an xi such that xi ∈ Qjk , xi ∈ / γh,d,j ∪ k k Udj (x) − Udj (xi ) = O(hγdj )
πQik (xj̺ ) .
S
r6=d
− γh,r,j
and
372
Laura S. Aragone and Elina M. Mancinelli
Then we have
k k Udj (x) ≤ (1− α ˜j b hdj )Udj (x+ b hdj g(d, j))+ b hdj f (x, d)+
∆2 =
≤
≤
k Udj (x) b Zhdj 0
b
0
i6=j
k λji Udi (πQik (x)) + b hdj O(hγdj ),
− Vdj (x) X k λji Udi e−α˜ j s f (x, d) − f (y (s) , d) + (πQik (x)) − Vdi (y(s)) ds
k + (1− α ˜j b hdj )Udj
Zhdj
X
e−α˜ j s Lf b hdj +
+
b
Zhdj 0
e−α˜ j s
X i6=j
i6=j
b x+ b hdj g(d, j) −Vdj x+ b hdj g(d, j) e−α˜ j hdj + b hdj O(hγdj )
X i6=j
k λji Udi (πQik (x)) − Vdi (πQik (x)) ds
λji Vdi (πQik (x)) − Vdi (x) + Vdi (x) − Vdi (y(s)) ds
+ 1−α ˜j b hdj ∆2 + o(b hdj ) + b hdj O(hγdj ) X hγdj + ∆2 b λji (1 + Mgγ )LV b hdj ≤ Lf b hdj + i6=j
+ (1 − α ˜j b hdj )∆2 + o(b hdj ) + b hdj O(hγdj ).
Therefore it results X X λji LV b hγdj b hdj ˜j b hdj ≤ Lf b hdj + (1 + Mgγ ) ∆2 1− b hdj λji −1+ α i6=j
i6=j
+ O(b h2dj ) + b hdj O(hγdj ), X 1 λji LV b hγdj + O(hγdj ). hdj + (1 + Mgγ ) ∆2 ≤ Lf b α i6=j
Similar to what we have done for ∆1 we have ∀ x, ∀ d, ∀ j k Udj (x) − Vdj (x) ≤
then
k U − V
K2 Lf h + (1 + Mgγ ) α
∞
≤
X i6=j
K2 Lf h + (1 + Mgγ ) α
λji LV hγ + O(hγ ), X i6=j
λji LV hγ + O(hγ ).
Numerical Approximation to Solve QVI Systems
12.
373
Applications
We have applied the presented numerical procedure to an example with m = 2 items, being the discount rate α = 0.1 . Production rate by unite time: p1 = p2 = 1. Demand’s data J = {1, 2, 3, 4}, |J | = 4. Commutation rates: λi j i/j 1 2 3 4 r11 = 0.07415 r21 = 0.37230 1 0 0.03 0.031 0 r12 = 0.07415 r22 = 0.06741 2 0.2 0 0 0.031 r13 = 0.32300 r23 = 0.37230 3 0.2 0 0 0.03 r14 = 0.32300 r24 = 0.06741 4 0 0.2 0.2 0 Maximum stocks: M1 = 0.525, M2 = 1.67. The instantaneous cost f is a linear function in both variables and it does not depend on the parameter d, i.e.f (x1 , x2 ) = 4x1 + 5x2 . e = 7, ∀ d 6= d. e Commutation costs: q(d, d) In Figure 2 the simulation results corresponding to the use of a sub-optimal control policy given by the computational procedure for 100 units time, are shown. Item 1 evolution
0.4
0.2
0 0
20
40
60
Item 1 evolution
0.6
STOCK 1
STOCK 1
0.6
80
0.4
0.2
0
100
0
20
40
Time
80
100
60
80
100
1
Item 1 production
0.3
DEMAND 1
60 Time
0.2 0.1
0.5
0 0 0
20
40
60
80
0
100
(a) Stock and demand of item 1
20
(b) Stock and production of item 1
Item 2 evolution
Item 2 evolution 1.5
STOCK 2
STOCK 2
1.5 1 0.5 0 0
40
20
40
60
80
1 0.5 0
100
0
20
40
Time
60
80
100
60
80
100
Time
0.4
Item 2 production
DEMAND 2
1 0.3 0.2 0.1
0.5
0 0 0
20
40
60
80
(c) Stock and demand of item 2
100
0
20
40
(d) Stock and production of item 2
Figure 2. Simulation results for 100 units time.
374
Laura S. Aragone and Elina M. Mancinelli
Conclusion We have developed in this work a numerical method of approximation for the optimization of a production system comprising a multi-item single machine. We presented a discretization procedure for the numerical solution based on the finite element method. We showed that finding the discrete solution consists in determining the unique fixed point of a non linear contractive operator P h : IRn → R and we gave an explicit estimation of the error between the discrete solution and the exact one. In addition, we have presented fast computational algorithms that obtain the solution of the discrete problem in a finite number of steps. This property is also a consequence of the special type of mesh used.
References [1] Aragone L. S., Gonz´alez R. L. V. (1997), Fast computational procedure for solving multi-item single machine lot scheduling optimization problem, Journal of Optimization Theory & Applications, Vol. 93, N◦ 3, pp. 491–515. [2] Aragone L. S., Gonz´alez R. L. V. (2000), A fast computational procedure to solve the multi-item single machine lot scheduling optimization problems. The average cost case, Mathematics of Operations Research, Vol. 25, N◦ 3, pp. 455–475. [3] Bensoussan A., Lions J. L. (1982), Contrˆole impulsionnel et in´equations quasivariationnelles, Dunod, Paris. [4] Capuzzo Dolcetta I., Evans L. C. (1984) Optimal switching for ordinary differential equations, SIAM Journal Control & Optimization, Vol. 22, pp. 143–161. [5] Capuzzo Dolcetta I., Ishii H. (1984), Approximate solution of the Bellman equation of deterministic control theory, Applied Mathematics & Optimization, Vol. 11, pp. 161–181. [6] Capuzzo Dolcetta I., Lions P.-L. (1990), Hamilton-Jacobi Equations with State Constraints, Transaction American Mathematical Society, Vol. 318, pp. 643–683. [7] Ciarlet P. G. (1970), Discrete maximum principle for finite-difference operators, Aequationes Mathematicae, Vol. 4, N◦ 3, pp. 338–352. [8] Crandall M.G., Evans L.C., Lions P.-L. (1984), Some properties of viscosity solutions of Hamilton-Jacobi equations, Transaction American Math. Society, 282, pp. 487-502. [9] Davis M. H. A. (1984), Piecewise-deterministic Markov processes: A general class of nondiffusion models, Journal of the Royal Statistical Society, Series B, Vol. 46, pp. 353–388. [10] Fleming W. H. and Rishel R. W. (1975), Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, New York.
Numerical Approximation to Solve QVI Systems
375
[11] Fleming W. H., Soner H. M. (1993), Controlled Markov processes and viscosity solutions, Springer Verlag, New York. [12] Friedman A. (1971), Differential Games, Wiley-Interscience, New York. [13] Gonz´alez R. L. V., Muramatsu K., Rofman E. (1992), Quasi-variational inequality approach to multi-item single machine lot scheduling problem, In System Modeling and Optimization, Lecture Notes in Control and Information Sciences Vol. 180, pp. 885–893, Springer Verlag, New York. [14] Gonz´alez R. L. V., Rofman E. (1993) Sur des solutions non born´ees de l’´equation de Bellman associ´ee aux probl`emes de commutation optimale avec contraints sur l’´etat, Comptes Rendus Acad. Sc. Paris, Serie I, Vol. 316, 1193–1198. [15] Gonz´alez R. L. V., Rofman E. (1995), On unbounded solutions of Bellman’s equation associated to optimal switching control problems with state constraints, Applied Mathematics & Optimization, Vol. 31, pp. 1–17. [16] Hanouzet B., Joly J. L. (1978), Convergence uniforme des iter´es d´efinissant la solution d’une in´equation quasi variationnelle abstraite, Comptes Rendus Acad. Sc. Paris, Serie A, Tome 286, pp. 735–738. [17] Lions P.-L., Perthame B. (1986), Quasi variational inequalities and ergodic impulse control, SIAM Journal Control and Optimization, Vol. 24, N◦ 4, pp. 604–615. [18] Lenhart S. M. (1987), Viscosity solutions associated with switching control problems for piecewise deterministic processes, Houston Journal of Mathematics, Vol. 13, N◦ 3, pp. 405–426. [19] Lenhart S. M., Liao Y. C. (1988), Switching control of piecewise deterministic processes, Journal of Optimization Theory & Applications, Vol. 59, N◦ 1, pp. 99–115. [20] Mancinelli E. M. (1994), Optimizaci´on de m´aquinas multiproducto con demandas seccionalmente determin´ısticas, Mec´anica Computacional, Vol. 14, pp. 561–570, AMCA, Santa Fe, Argentina. [21] Mancinelli E. M. (1999), Sobre la resoluci´on de algunos problemas de optimizaci´on determin´ısticos y estoc´asticos, Tesis Doctoral, Universidad Nacional de Rosario, Argentina. [22] Mancinelli E. M., Gonz´alez R. L. V. (1997), Multi-item single machine scheduling optimization. The case with piecewise deterministic demands, Rapport de Recherche N◦ 3144, INRIA, Rocquencourt, France. [23] Robin M. (1983), Long run average control of continuous time Markov processes: A survey, Acta Applied Mathematics, 1, 281-299. [24] Soner H. M. (1986), Optimal control with state-space constraint II, SIAM Journal on Control and Optimization, Vol. 24, pp. 1110–1122.
376
Laura S. Aragone and Elina M. Mancinelli
[25] Soner H. M. (1987), Optimal Control Problems with State Space Constraint I, SIAM Journal on Control and Optimization, Vol. 24, pp. 551–561. [26] Soner H. M. (1993), Singular perturbations in manufacturing, SIAM Journal on Control and Optimization, Vol. 31, pp. 132–146. [27] Souganidis P. E. (1985), Approximation Schemes for Viscosity Solutions of Hamilton-Jacobi Equations, Journal of Differential Equations, Vol. 59, pp. 1–43.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 377-385
ISBN 978-1-60876-500-3 c 2011 Nova Science Publishers, Inc.
Chapter 14
V ECTOR O PTIMIZATION ON M ETRIC S PACES Alexander J. Zaslavski Department of Mathematics The Technion-Israel Institute of Technology 32000 Haifa, Israel
Abstract In this paper we use a generic approach in order to study vector minimization problems on a complete metric space. We discuss our recent results which show that solutions of vector minimization problems exist generically for certain classes of problems. Any of these classes of problems is identified with a space of functions equipped with a natural complete metric and it is shown that there exists a Gδ everywhere dense subset of the space of functions such that for any element of this subset the corresponding vector minimization problem possesses solutions. We also discuss the stability and the structure of a set of solutions of a vector minimization problem.
1.
Introduction
The study of vector optimization problems has recently been a rapidly growing area of research. See, for example, [1–3, 5, 7, 9–12] and the references mentioned therein. In this paper we use a generic approach in order to study vector minimization problems on a complete metric space. We discuss our recent results obtained in [13–17] which show that solutions of vector minimization problems exist generically for certain classes of problems. Any of these classes of problems is identified with a space of functions equipped with a natural complete metric and it is shown that there exists a Gδ everywhere dense subset of the space of functions such that for any element of this subset the corresponding vector minimization problem possesses solutions. We also discuss the stability and the structure of a set of solutions of a vector minimization problem [13, 14, 16]. It should be mentioned that the generic approach, when a certain property is investigated for the whole space and not just for its single point, has already been successfully applied in many areas of Analysis (see, for example, [4, 6, 8] and the references mentioned there).
378
2.
Alexander J. Zaslavski
Generic and Density Results in Vector Optimization
We use the convention that ∞/∞ = 1 and denote by Card(E) the cardinality of the set E. Let R1 be the set of real numbers and let n be a natural number. Consider the finitedimensional space Rn with the norm kxk = k(x1 , . . . , xn )k = max |xi | : i = 1, . . . , n , x = (x1 , . . . , xn ) ∈ Rn .
Let x = (x1 , . . . , xn ), y = (y1 , . . . , yn ) ∈ Rn . We equip the space Rn with the natural order and say that x ≥ y if xi ≥ yi for all i ∈ {1, . . . , n}, x > y if x ≥ y and x 6= y and x >> y if xi > yi for all i ∈ {1, . . . , n}. We say that x > x (respectively, y > x, y ≥ x). Let (X, ρ) be a complete metric space such that each of its bounded closed subsets is compact. Fix θ ∈ X. Denote by A the set of all continuous mappings F = (f1 , . . . , fn ) : X → Rn such that for all i ∈ {1, . . . , n} lim fi (x) = ∞. ρ(x,θ)→∞
For each F = (f1, . . . , fn), G = (g1, . . . , gn) ∈ A set

d̃(F, G) = sup{|fi(x) − gi(x)| : x ∈ X and i = 1, . . . , n},
d(F, G) = d̃(F, G)(1 + d̃(F, G))⁻¹.

Clearly the metric space (A, d) is complete. Note that

d̃(F, G) = sup{‖F(x) − G(x)‖ : x ∈ X}
for all F, G ∈ A.
Let A ⊂ Rⁿ be a nonempty set. An element x ∈ A is called a minimal element of A if there is no y ∈ A for which y < x. Let F ∈ A. A point x ∈ X is called a point of minimum of F if F(x) is a minimal element of F(X). If x ∈ X is a point of minimum of F, then F(x) is called a minimal value of F. Denote by M(F) the set of all points of minimum of F and put v(F) = F(M(F)). The following proposition was obtained in [14].

Proposition 2.1. Let F = (f1, . . . , fn) ∈ A. Then M(F) is a nonempty bounded subset of (X, ρ) and for each z ∈ F(X) there is y ∈ v(F) such that y ≤ z.

Assume that n ≥ 2 and that the space (X, ρ) has no isolated points. The following theorem was also obtained in [14].
Theorem 2.1. There exists a set F ⊂ A which is a countable intersection of open everywhere dense subsets of A such that for each F ∈ F the set v(F) is infinite.

The following theorem was obtained in [13].

Theorem 2.2. Suppose that the space (X, ρ) is connected. Let F = (f1, . . . , fn) ∈ A and let ε > 0. Then there exists G ∈ A such that d̃(F, G) ≤ ε and the set v(G) is not closed.

It is clear that if X is a finite-dimensional Euclidean space, then X is a complete metric space such that all its bounded closed subsets are compact, and Theorems 2.1 and 2.2 hold. It is also clear that Theorems 2.1 and 2.2 hold if X is a convex compact subset of a Banach space, or if X is a convex closed cone generated by a convex compact subset of a Banach space which does not contain zero.
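For a finite image F(X), minimal elements in the sense above are just the Pareto-minimal points and can be computed directly. The following sketch (our illustration, not part of the chapter; Python with NumPy is assumed) implements the definition of a minimal element used throughout this section:

```python
import numpy as np

def minimal_elements(points):
    """Return the minimal elements of a finite set A in R^n.

    A point x is minimal if no y in A satisfies y <= x componentwise
    with y != x (i.e., y < x in the order used in the text).
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, x in enumerate(pts):
        dominated = any(
            np.all(y <= x) and np.any(y < x)
            for j, y in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(x)
    return np.array(keep)

# Example: image of a finite set under some F = (f1, f2).
A = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.0, 3.0), (3.0, 3.0)]
print(minimal_elements(A))   # the Pareto-minimal points (1,3), (2,2), (3,1)
```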
3. Vector Optimization with Continuous Objective Functions
In this section we consider a class of vector minimization problems on a complete metric space X without compactness assumptions. This class of problems is associated with a complete metric space of continuous bounded from below vector functions A which is defined below. Let F ∈ A. An element z ∈ X is called a solution of the vector minimization problem
F(x) → min, x ∈ X
if F(z) is a minimal element of the image F(X) = {F(x) : x ∈ X}. The set of all solutions of the minimization problem above is denoted by KF. We show that for most (in the sense of Baire category) functions F ∈ A the set KF is nonempty and compact. Let (X, ρ) be a complete metric space and n be a natural number. For each vector x = (x1, . . . , xn) of the n-dimensional Euclidean space Rⁿ set ‖x‖ = max{|xi| : i = 1, . . . , n}.
Let x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ Rⁿ. We say that x ≤ y if xi ≤ yi for all i = 1, . . . , n. We say that x < y if x ≤ y and x ≠ y, and say that x ≪ y if xi < yi for all i = 1, . . . , n. Denote by A the set of all continuous mappings F = (f1, . . . , fn) : X → Rⁿ which are bounded from below, equipped with the complete metric d defined, as in Section 2, by d(F, G) = d̃(F, G)(1 + d̃(F, G))⁻¹, where d̃(F, G) = sup{‖F(x) − G(x)‖ : x ∈ X}. Let e = (1, 1, . . . , 1) ∈ Rⁿ. The following theorem is the main result of [15].

Theorem 3.1. There exists a set F ⊂ A which is a countable intersection of open everywhere dense subsets of A such that for each G ∈ F the following assertions hold.
1. There is a nonempty compact set KG ⊂ X such that G(KG) is the set of all minimal elements of cl(G(X)).
2. For each ε > 0 there is δ > 0 such that for each H ∈ A satisfying d(H, G) ≤ δ, each minimal element a of cl(H(X)) and each z ∈ X satisfying H(z) ≤ a + δe there exists x ∈ KG such that ρ(x, z) ≤ ε, ‖H(z) − G(x)‖ ≤ ε and ‖a − G(x)‖ ≤ ε.
3. For each ε > 0 there is δ > 0 such that for each x ∈ KG and each z ∈ X satisfying G(z) ≤ G(x) + δe the inequalities ρ(z, x) ≤ ε and ‖G(z) − G(x)‖ ≤ ε hold.
4. Vector Optimization Problems with Semicontinuous Objective Functions
In this section we consider a class of vector minimization problems on a complete metric space X which is identified with the corresponding complete metric space of lower semicontinuous bounded from below objective functions A. We show the existence of a Gδ everywhere dense subset F of A such that for any objective function belonging to F the corresponding minimization problem possesses a solution. Let (X, ρ) be a complete metric space. A function f : X → R¹ ∪ {∞} is called lower semicontinuous if for each convergent sequence {xk}_{k=1}^{∞} in (X, ρ) the inequality

f(lim_{k→∞} xk) ≤ lim inf_{k→∞} f(xk)

holds. For each function f : X → R¹ ∪ {∞} set dom(f) = {x ∈ X : f(x) < ∞}.
We use the convention that ∞ + ∞ = ∞, ∞ − ∞ = 0, ∞/∞ = 1, x + ∞ = ∞ and x − ∞ = −∞ for all x ∈ R¹, λ·∞ = ∞ for all λ > 0 and λ·∞ = −∞ for all λ < 0. We also assume that −∞ < x < ∞ for all x ∈ R¹. Set R̄ = R¹ ∪ {∞, −∞} and let n be a natural number. For each x = (x1, . . . , xn) ∈ R̄ⁿ set ‖x‖ = max{|xi| : i = 1, . . . , n}. Let x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ R̄ⁿ. We say that x ≤ y if xi ≤ yi for all i = 1, . . . , n. We say that x < y if x ≤ y and x ≠ y, and say that x ≪ y if xi < yi for all i = 1, . . . , n. Let e = (1, 1, . . . , 1) ∈ Rⁿ. Denote by A the set of all mappings F = (f1, . . . , fn) : X → (R¹ ∪ {∞})ⁿ such that each fi is lower semicontinuous and bounded from below and the following condition holds:
(C1) For each z ∈ F(X) and each ε > 0 there exists y ∈ F(X) ∩ Rⁿ such that y − εe ≤ z.
For each F = (f1, . . . , fn), G = (g1, . . . , gn) ∈ A define

d̃(F, G) = sup{‖F(x) − G(x)‖ : x ∈ X} and d(F, G) = d̃(F, G)(1 + d̃(F, G))⁻¹.
It is not difficult to see that the metric space (A, d) is complete. The topology induced by d in A is called the strong topology. Denote by cl(E) the closure of a set E ⊂ Rⁿ. Let F = (f1, . . . , fn) ∈ A. Set

epi(F) = {(y, a) ∈ X × Rⁿ : a ≥ F(y)}.

Clearly, epi(F) is a closed subset of X × Rⁿ. Define a function ∆F : X × Rⁿ → R¹ by

∆F(x, a) = inf{ρ(x, y) + ‖a − b‖ : (y, b) ∈ epi(F)}, (x, a) ∈ X × Rⁿ.
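As a concrete illustration of the function ∆F (our own sketch, not from the chapter), the infimum can be approximated by brute force over a finite sample of X, using the fact that for fixed y the closest epigraph point above F(y) is b = max(F(y), a) componentwise; the grid, F and ρ below are arbitrary choices:

```python
import numpy as np

def epigraph_distance(x, a, sample_xs, F, rho):
    """Brute-force approximation of Delta_F(x, a) over a finite sample of X."""
    best = np.inf
    a = np.asarray(a, dtype=float)
    for y in sample_xs:
        Fy = np.asarray(F(y), dtype=float)
        b = np.maximum(Fy, a)                 # closest epigraph point above F(y)
        best = min(best, rho(x, y) + np.max(np.abs(a - b)))
    return best

# Example on X = R with rho(x, y) = |x - y| and F(y) = (y^2, (y - 1)^2):
xs = np.linspace(-3.0, 3.0, 601)
F = lambda y: (y**2, (y - 1.0)**2)
rho = lambda x, y: abs(x - y)
print(epigraph_distance(0.5, (0.0, 0.0), xs, F, rho))
```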
For each integer q ≥ 1 put

E(q) = {(F, G) ∈ A × A : |∆F(x, a) − ∆G(x, a)| ≤ 1/q for all (x, a) ∈ X × Rⁿ}.
We equip the space A with the uniformity determined by the base E(q), q = 1, 2, . . . . The topology in A induced by this uniformity is called the weak topology. It is clear that this topology is weaker than the strong topology. Let G = (g1, . . . , gn) ∈ A. A sequence {zi}_{i=1}^{∞} ⊂ X is called (G)-minimizing if there exist a sequence {ai}_{i=1}^{∞} ⊂ Rⁿ of minimal elements of cl(G(X) ∩ Rⁿ) and a sequence {∆i}_{i=1}^{∞} ⊂ (0, ∞) such that

lim_{i→∞} ∆i = 0 and G(zi) ≤ ai + ∆i e for all integers i ≥ 1.

For each F ∈ A denote by Ω(F) the set of all x ∈ X such that F(x) is a minimal element of cl(F(X) ∩ Rⁿ). The following theorem is the main result of [17].

Theorem 4.1. There exists a set F ⊂ A which is a countable intersection of open (in the weak topology) everywhere dense (in the strong topology) subsets of A such that for each F = (f1, . . . , fn) ∈ F the following assertions hold.
1. Any (F)-minimizing sequence of elements of X possesses a convergent subsequence.
2. For each minimal element a of cl(F(X) ∩ Rⁿ) there is x ∈ X such that F(x) = a.
3. F(Ω(F)) is the set of all minimal elements of cl(F(X) ∩ Rⁿ).
4. Any sequence of elements of Ω(F) has a convergent subsequence.
5. Let ε > 0. Then there are δ > 0 and an open neighborhood U of F in A with the weak topology such that the following properties hold: (a) for each G ∈ U, each minimal element a of cl(G(X) ∩ Rⁿ) and each x ∈ X satisfying G(x) ≤ a + δe the inequality inf{ρ(x, z) : z ∈ Ω(F)} < ε holds; (b) for each G ∈ U and each z ∈ Ω(F) there exists a sequence {zi}_{i=1}^{∞} ⊂ X such that ρ(zi, z) < ε for all natural numbers i and there is lim_{i→∞} G(zi), which is a minimal element of cl(G(X) ∩ Rⁿ).
6. For each x ∈ X there is y ∈ Ω(F) such that F(y) ≤ F(x).

Remark. Assume that F = (f1, . . . , fn) : X → (R¹ ∪ {∞})ⁿ is bounded from below and that fi is lower semicontinuous for all i = 1, . . . , n. Then (C1) holds if the following condition holds:
(C2) For each z ∈ F(X) there is y ∈ F(X) ∩ Rⁿ such that y ≤ z.
Note that if the space X is compact, then the conditions (C1) and (C2) are equivalent, and if they hold, then the sets F(X), F(X) ∩ Rⁿ, the closure of F(X) and its intersection with Rⁿ have the same minimal points. We are interested in solutions x ∈ X of the vector minimization problem with the objective function F such that F(x) is finite-valued. For this reason we consider functions F satisfying the condition (C1) and solutions of the minimization problems which are minimal points of the closure of F(X) ∩ Rⁿ. By assertion 6 of Theorem 4.1, a generic function F ∈ A satisfies condition (C2). Now we present an example of a function F ∈ A with a noncompact space X which does not satisfy (C2) and such that the set F(X) possesses a minimal point which does not belong to Rⁿ. Assume that X is the set of nonnegative integers, ρ(x, y) = |x − y| for x, y ∈ X, n = 2, and F = (f1, f2), where

f1(0) = 0, f1(i) = 1/i, i = 1, 2, . . . ,
f2(0) = ∞, f2(i) = i, i = 1, 2, . . . .

Clearly, F ∈ A, F does not satisfy (C2), F(0) is a minimal point of F(X) which does not belong to R², and the set F(X) ∩ R² is closed.
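The claims of this example can be checked numerically on a truncation of X (a small sketch of ours; the names and the truncation level are illustrative):

```python
import numpy as np

INF = float("inf")

def leq(y, x):
    """Componentwise order on the extended plane (conventions of the text)."""
    return all(yi <= xi for yi, xi in zip(y, x))

# F on the truncated space X = {0, 1, ..., N}.
N = 10_000
F = {0: (0.0, INF)}
F.update({i: (1.0 / i, float(i)) for i in range(1, N + 1)})

F0 = F[0]
# F(0) is minimal: no F(i) satisfies F(i) <= F(0), because 1/i > 0.
print(any(leq(F[i], F0) for i in range(1, N + 1)))      # False

# (C2) fails at z = F(0): no finite y = F(i) satisfies y <= z.
# (C1) still holds: for any eps > 0, pick i > 1/eps, then y - eps*e <= z.
eps = 1e-3
i = int(1.0 / eps) + 1
y = np.array(F[i])
print(bool(np.all(y - eps <= np.array(F0))))            # True
```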
5. Density Results
We use the notation and definitions introduced in Section 4. Suppose that the complete metric space (X, ρ) does not contain isolated points and that n ≥ 2. The following results were obtained in [17].

Theorem 5.1. There exists an everywhere dense (in the weak topology) set F ⊂ A such that for each F ∈ F the set of all x ∈ X such that F(x) is a minimal element of cl(F(X) ∩ Rⁿ) is nonempty and not closed.
Proposition 5.1. Assume that F = (f1, . . . , fn) ∈ A, F(x) ∈ Rⁿ for all x ∈ X, the mapping F : X → Rⁿ is continuous and γ ∈ (0, 1). Then there exists G = (g1, . . . , gn) ∈ A such that d̃(F, G) ≤ γ and the set

{x ∈ X : G(x) is a minimal element of cl(G(X))}

is nonempty and not closed.
Proposition 5.2. Assume that F = (f1, . . . , fn) ∈ A, γ ∈ (0, 1) and that a natural number q satisfies γ < (4q)⁻¹. Then there exists F^(0) = (f1^(0), . . . , fn^(0)) ∈ A such that (F, F^(0)) ∈ E(q), the set of all minimal elements of cl(F^(0)(X) ∩ Rⁿ) is a nonempty finite subset of F^(0)(X) ∩ Rⁿ, and the following property holds: for any natural number k satisfying 1/k < γ/64 there exist G ∈ A and x̄ ∈ X such that (G, F^(0)) ∈ E(k), G(x̄) is a minimal element of cl(G(X) ∩ Rⁿ) and ‖G(x̄) − z‖ ≥ γ/64 for all minimal elements z of cl(F^(0)(X) ∩ Rⁿ).
6. Vector Constrained Optimization
In this section we consider a class of vector constrained minimization problems on a complete metric space X,

F(x) → min, x ∈ A,

where F belongs to a complete metric space of continuous bounded from below vector functions A which is defined below, and A is a nonempty closed subset of X. This class of vector constrained minimization problems is identified with the complete metric space of pairs (F, A). We show that for most (in the sense of Baire category) pairs (F, A) the corresponding vector constrained optimization problem has a nonempty compact set of solutions which is stable under small perturbations of the pair (F, A). Let (X, ρ) be a complete metric space and n be a natural number. For each vector x = (x1, . . . , xn) of the n-dimensional Euclidean space Rⁿ set ‖x‖ = max{|xi| : i = 1, . . . , n}.
Let x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ Rⁿ. We say that x ≤ y if xi ≤ yi for all i = 1, . . . , n. We say that x < y if x ≤ y and x ≠ y, and say that x ≪ y if xi < yi for all i = 1, . . . , n. Denote by A the set of all continuous mappings F = (f1, . . . , fn) : X → Rⁿ which are bounded from below, equipped with the complete metric d defined in Section 3. Denote by S(X) the collection of all nonempty closed subsets of X and by H(A, B) the Hausdorff distance between A, B ∈ S(X). We equip S(X) with the uniformity determined by the base {(A, B) ∈ S(X) × S(X) : H(A, B) ≤ ε}, ε > 0. It is known that the space S(X) with this uniformity is metrizable by the metric H and complete. This uniformity induces a topology in S(X). The space A × S(X) is equipped with the product topology, which is induced by the complete metric d1 defined by

d1((F, A), (G, B)) = d(F, G) + H(A, B), (F, A), (G, B) ∈ A × S(X).

Set R₊ⁿ = {x = (x1, . . . , xn) ∈ Rⁿ : x ≥ 0} and let e = (1, 1, . . . , 1) be the element of Rⁿ all of whose coordinates are unity. Denote by cl(E) the closure of a set E ⊂ Rⁿ. The following theorem is the main result of [16].
Theorem 6.1. There exists a set F ⊂ A × S(X) which is a countable intersection of open everywhere dense subsets of A × S(X) such that for each (G, A) ∈ F the following assertions hold:
1. There is a nonempty compact set KG,A ⊂ A such that G(KG,A) is the set of all minimal elements of cl(G(A)).
2. For each ε > 0 there is δ > 0 such that for each F ∈ A satisfying d̃(F, G) ≤ δ, each B ∈ S(X) satisfying H(A, B) ≤ δ, each minimal element a of cl(F(B)) and each z ∈ B satisfying F(z) ≤ a + δe there exists x ∈ KG,A such that ρ(x, z) ≤ ε, ‖F(z) − G(x)‖ ≤ ε and ‖a − G(x)‖ ≤ ε.
3. For each ε > 0 there is δ > 0 such that for each x ∈ KG,A and each z ∈ A satisfying G(z) ≤ G(x) + δe the inequalities ρ(z, x) ≤ ε and ‖G(z) − G(x)‖ ≤ ε hold.
References
[1] T. Q. Bao, P. Gupta and B. S. Mordukhovich, Necessary conditions in multiobjective optimization with equilibrium constraints, J. Optim. Theory Appl. 135, 179–203 (2007).
[2] T. Q. Bao and B. S. Mordukhovich, Variational principles for set-valued mappings with applications to multiobjective optimization, Control and Cybernetics 36, 531–562 (2007).
[3] G. Chen, X. Huang and X. Yang, Vector Optimization, Springer, Berlin (2005).
[4] S. Cobzas, Generic existence of solutions for some perturbed optimization problems, J. Math. Anal. Appl. 243, 344–356 (2000).
[5] J. P. Dauer and R. J. Gallagher, Positive proper efficient points and related cone results in vector optimization theory, SIAM J. Control Optim. 28, 158–172 (1990).
[6] F. S. De Blasi and J. Myjak, Sur la porosité des contractions sans point fixe, C. R. Acad. Sci. Paris 308, 51–54 (1989).
[7] C. Finet, L. Quarta and C. Troestler, Vector-valued variational principles, Nonlinear Anal. 52, 197–218 (2003).
[8] A. D. Ioffe and A. J. Zaslavski, Variational principles and well-posedness in optimization and calculus of variations, SIAM J. Control Optim. 38, 566–581 (2000).
[9] J. Jahn, Vector Optimization. Theory, Applications and Extensions, Springer, Berlin (2004).
[10] T. Tanino, Stability and sensitivity analysis in convex vector optimization, SIAM J. Control Optim. 26, 521–536 (1988).
[11] M. Villalobos-Arias, C. A. Coello Coello and O. Hernandez-Lerma, Asymptotic convergence of a simulated annealing algorithm for multiobjective optimization problems, Math. Methods Oper. Res. 64, 353–362 (2006).
[12] G. Wanka, R. I. Bot and S. M. Grad, Multiobjective duality for convex programming problems, Z. Anal. Anwendungen 22, 711–728 (2003).
[13] A. J. Zaslavski, A density result in vector optimization, Intern. J. Mathematics and Math. Sci. 2006, 1–10 (2006).
[14] A. J. Zaslavski, A generic result in vector optimization, Journal of Inequalities and Applications 2006, 1–14 (2006).
[15] A. J. Zaslavski, Generic existence of solutions in vector optimization on metric spaces, J. Optim. Theory Appl. 136, 139–153 (2008).
[16] A. J. Zaslavski, Generic existence of solutions in vector constrained optimization, International Journal of Mathematics and Statistics, in press.
[17] A. J. Zaslavski, Existence of solutions of a vector optimization problem with a generic lower semicontinuous objective function, J. Optim. Theory Appl., in press.
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 387-406
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 15

ROBUST STATIC AND DYNAMIC OUTPUT FEEDBACK SUBOPTIMAL CONTROL OF UNCERTAIN DISCRETE-TIME SYSTEMS USING ADDITIVE GAIN PERTURBATIONS
Hiroaki Mukaidani¹∗, Yasuhisa Ishii², Yoshiyuki Tanaka² and Toshio Tsuji²
¹ Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, 739-8524 Japan
² Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, 739-8524 Japan
Abstract
This paper provides a novel design method for the output feedback suboptimal control problem for a class of uncertain discrete-time systems using additive gain perturbations. Based on the linear matrix inequality (LMI), a class of fixed output feedback controllers is established, and some sufficient conditions for the existence of the suboptimal controller are derived. The novel contribution is that time-varying additive gain perturbations are included in the feedback system. Although the additive gain perturbations act within the feedback system, both stability of the closed-loop system and an adequate suboptimal cost are attained. A numerical example demonstrates that the large cost due to the LMI design can be reduced by using additive gain perturbations.
1. Introduction
It is well known that uncertainty occurs in many dynamic systems and is frequently a source of instability and performance degradation. In recent years, the problem of designing robust controllers for linear systems with parameter uncertainty has received considerable attention in the control systems literature (see, e.g., [32] and the references therein). ∗
E-mail address: [email protected]
In the study of the robust control of discrete-time uncertain systems, considerable effort has been devoted to finding a controller that guarantees robust stability. However, it is also desirable to design a control system that is not only asymptotically stable but also guarantees an adequate cost performance level. One design approach to this problem is the so-called guaranteed cost control [25]. This approach has the advantage of placing an upper bound on a given performance index, and it is guaranteed that the system performance degradation due to the uncertainty is smaller than the cost bound. When controlling a practical system, it is not always possible to have access to the state vector, and only partial information from a measured output vector is available. Therefore, the output feedback problem for uncertain systems is an important problem. For example, the guaranteed cost control of an uncertain discrete-time system by static output feedback based on a Riccati equation has been discussed in [4]. However, for the existing static output feedback control system design, the implementation of the controller appears to be difficult because a conservative condition is assumed. In recent years, the linear matrix inequality (LMI) has gained considerable attention for its computational efficiency and usefulness in control theory. For example, a necessary and sufficient condition for stabilizability via static output feedback has been proposed in terms of two LMIs under a coupling condition [13]. On the other hand, the guaranteed cost control problem for a class of uncertain systems with delay, based on the LMI design approach, has been solved by using output feedback [17]. Furthermore, the output feedback guaranteed cost control problems for uncertain discrete delay systems and discrete-time systems have been tackled through the LMI optimization technique [24, 34]. However, due to the presence of the design parameters included in the LMI technique, it is known that the cost bound becomes fairly large. Moreover, controller gain perturbations have not been considered. In the past decade, several stable adaptive neural control approaches have been introduced [3, 23]. Moreover, closely related fuzzy control schemes have been studied [22, 29]. Later, the theoretical foundations for the efficient design of NN controllers based on inverse control were reported [2]. The stability properties of a learning scheme using neural networks were investigated for a restricted case [27]. In [28], a feedback control law that guarantees semiglobal uniform ultimate boundedness was proposed. On the other hand, several good NN control approaches have been proposed based on Lyapunov's stability theory [5, 6, 15]. However, these studies have focused mainly on the analysis of stability. As another important study, the linear quadratic regulator (LQR) problem using NNs or fuzzy logic has been investigated [1, 9, 10, 30]. The advantage of these approaches is that the controllers can be implemented even without exact knowledge of the plant dynamics. However, the stability may not be guaranteed, because in these studies the stability of the original overall closed-loop system that includes the neurocontroller or fuzzy controller has not been considered. In fact, it has been shown that the system stability is destroyed when the degree of the system nonlinearity is high [9].
In [16], a nonlinear optimal design method that integrates linear optimal control techniques and neural networks has been investigated. The global asymptotic stability is guaranteed under the assumption that the nonlinear function is known completely [16]. However, if such an assumption is not met, only a uniformly
ultimate boundedness is attained. Moreover, an output feedback scheme has not been considered. Although the stability of the closed-loop system with a neurocontroller has been studied via the LMI-based design approach [11, 18–20], the output feedback system that has uncertainty in the input matrix has also not been considered. In this paper, the output feedback suboptimal control problem of a discrete-time uncertain system that has uncertainty in both the state and input matrices is discussed. The new contributions of our study are as follows. First, it is newly shown that the output feedback control can be designed by adapting an additive control input, such as a neurocontroller or fuzzy controller. Second, although the neurocontroller or fuzzy controller is included in the discrete-time uncertain system, robust stability of the closed-loop system is guaranteed and the cost is reduced. Another important feature is that a class of fixed output feedback controllers for the discrete-time uncertain system with additive gain perturbations [33] is newly established by means of the LMI. Furthermore, in order to reduce the large cost incurred by the LMI approach, the fuzzy controller is substituted into the additive gain perturbations. As a result, although additive gain perturbations such as the fuzzy controller are included in the discrete-time uncertain system, robust stability of the closed-loop system and a reduction in the cost are both attained. It is noteworthy that such a controller synthesis concept has not appeared before. Finally, in order to demonstrate the efficiency of our design approach, a numerical example is given in which the fuzzy controller is used.
2. Novel Concept
First, in order to show the effectiveness of the novel suboptimal control concept via the additive control gain, one example is demonstrated. The example is scalar, and the controller is designed to trace the required trajectory with minimum energy. Let us consider the following system:

x(k + 1) = [A + DF(k)Ea]x(k) + [B + DF(k)Eb]u(k) = F(k)x(k) + u(k), 0 < F(k) ≤ 0.5,  (1a)
u(k) = [K + Dk N(k)Ek]x(k) = [−1 + N(k)]x(k), 0 ≤ N(k) ≤ 1,  (1b)
J = Σ_{k=0}^{∞} [xᵀ(k)Qx(k) + uᵀ(k)Ru(k)] = Σ_{k=0}^{∞} [x²(k) + u²(k)].  (1c)

Then, taking 0 < F(k) ≤ 0.5 and 0 ≤ N(k) ≤ 1 into account, the closed-loop system (2) is stable:

x(k + 1) = [−1 + F(k) + N(k)]x(k), −1 < −1 + F(k) + N(k) ≤ 0.5.  (2)
It is assumed that the uncertainty F(k) changes as a step response; e.g., an uncertain parameter such as a mass jumps from its original value to an updated value at some time. It should be noted that N(k) is the proposed novel time-varying additive control gain. In this situation, the LQR technique is used to minimize the cost function (1c). However, the existing LQR theory cannot be applied directly to this problem because the system has
Figure 1. New concept using additive control gain: (a) without additive control gain; (b) with additive control gain, where the gain is recalculated after the mass changes from m to m + ∆m.
uncertainty that changes with the step response. On the other hand, it is possible to solve this optimization problem by introducing the additive control gain N(k). Furthermore, the closed-loop system is stable because −1 ≤ −1 + N(k) ≤ 0 holds. In fact, when the uncertainty F(k) changes as follows at some time k = k̄,

F(k) = 0.5, 0 ≤ k ≤ k̄  →  F(k) = 0.25, k̄ ≤ k,  (3)

the optimal gains can be computed by applying the LQR theory in each regime:

F(k) = 0.5 ⇒ −1 + N(k) = −0.26556, 0 ≤ k ≤ k̄,
F(k) = 0.25 ⇒ −1 + N(k) = −0.12695, k̄ ≤ k.

It should be noted that these values satisfy the stability margin for N(k). Finally, the suboptimal trajectory can also be attained by recalculating and applying the new gain after the change of mass. The concept of the novel control synthesis is illustrated in Figure 1. The above concept seems natural and reliable. In this paper, the adaptation of the additive control gain will be carried out artificially by using fuzzy logic control, while the fixed control gain can be computed by applying the guaranteed cost control technique via the LMIs.
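The regime-wise gains above can be reproduced with a standard discrete-time Riccati solver. The following sketch (ours, not from the chapter; SciPy is assumed) computes the LQR gain for each value of F and accumulates the cost (1c) along a trajectory in which the gain is recomputed after the jump (3):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(a, b=1.0, q=1.0, r=1.0):
    """Scalar discrete-time LQR gain u = K x for x(k+1) = a x(k) + b u(k)."""
    A, B, Q, R = (np.array([[v]]) for v in (a, b, q, r))
    P = solve_discrete_are(A, B, Q, R)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return float(K)

# With u(k) = [-1 + N(k)]x(k), the plant reduces to x(k+1) = F(k)x(k) + u(k),
# so the LQR gain for A = F gives the total gain -1 + N(k).
print(lqr_gain(0.50))   # ~ -0.26556
print(lqr_gain(0.25))   # ~ -0.12695

def cost(k_bar=50, horizon=400, x0=1.0):
    """Cost (1c) when the gain is recomputed after the jump (3)."""
    x, J = x0, 0.0
    for k in range(horizon):
        F = 0.5 if k <= k_bar else 0.25
        u = lqr_gain(F) * x          # recalculated gain after the change
        J += x**2 + u**2
        x = F * x + u
    return J

print(cost())
```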
Figure 2. Block diagram of a new proposed system.
3. Preliminary
Consider the following class of uncertain discrete-time linear systems:

x(k + 1) = [A + ∆A(k)]x(k) + [B + ∆B(k)]u(k),  (4a)
y(k) = Cx(k),  (4b)
u(k) = [K + ∆K(k)]y(k),  (4c)

where x(k) ∈ ℜⁿ is the state, y(k) ∈ ℜˡ is the output and u(k) ∈ ℜᵐ is the control input. A, B and C are known constant matrices, and K ∈ ℜ^{m×l} is the fixed control matrix of the controller (4c). ∆A(k) and ∆B(k) are parameter uncertainties, and ∆K(k) is the additive gain perturbation. The parameter uncertainties and the additive gain perturbations considered here are assumed to be of the following form:

[∆A(k) ∆B(k)] = DF(k)[Ea Eb], ∆K(k) = Dk N(k)Ek,  (5)

where D, Dk, Ea, Eb and Ek are known constant matrices, F(k) ∈ ℜ^{pa×qa} is an unknown matrix function and N(k) ∈ ℜ^{pn×qn} is an arbitrary function. It is assumed that F(k) and N(k) satisfy (6):

Fᵀ(k)F(k) ≤ I_{qa}, Nᵀ(k)N(k) ≤ I_{qn}.  (6)

Although these assumptions (6) appear to be a conservative condition, they are necessary to establish the LMI condition. It should be noted that the assumptions are based on the control-oriented assumptions of the existing results [4, 31, 32]. Moreover, N(k) will be used as the additive gain perturbation realized by, e.g., a neurocontroller or fuzzy controller. The block diagram of the new proposed method is shown in Figure 2, where L is a time lag. It should be noted that the controller (4c) has additive gain perturbations in the matrix function ∆K(k), as compared with the existing results [9, 10]. In this paper, it is considered that the suboptimal control can be achieved by adapting the arbitrary function N(k) with a NN or fuzzy logic. The quadratic performance index (7) is associated with the system (4):

J = Σ_{k=0}^{∞} [xᵀ(k)Qx(k) + uᵀ(k)Ru(k)],  (7)
where Q ∈ ℜ^{n×n} and R ∈ ℜ^{m×m} are given positive definite symmetric matrices. It should be noted that the transient response can be improved appropriately by changing the weight matrices Q and R. In this situation, the definition of the suboptimal control with additive gain perturbations is given below.

Definition 3.1. For the uncertain discrete-time system (4) and cost function (7), if there exist a fixed control matrix K and a positive scalar J* such that for all admissible uncertainties and additive control gains (5) the closed-loop system is asymptotically stable and the closed-loop value of the cost function (7) satisfies J < J*, then J* and K are said to be the suboptimal cost and the suboptimal control gain matrix, respectively.

The above definition is very popular for dealing with time-varying uncertainties and is also used in [25]. The following lemma gives a sufficient condition for the existence of the suboptimal control.

Lemma 3.2. Suppose that the following matrix inequality holds for the uncertain discrete-time system (4) with the cost function (7) for all x(k) ≠ 0:

xᵀ(k + 1)Px(k + 1) − xᵀ(k)Px(k) + xᵀ(k)[Q + CᵀK̃ᵀRK̃C]x(k) < 0,  (8)

where K̃ := K + Dk N(k)Ek. If such a condition is met, the matrix K of the controller (4c) is the suboptimal control matrix associated with the cost function (7). That is, the closed-loop uncertain system

x(k + 1) = [(A + DF(k)Ea) + (B + DF(k)Eb)K̃C]x(k)  (9)

is stable and achieves the following inequality:

J < J* = xᵀ(0)Px(0),  (10)

where x(0) ≠ 0.

Proof. Let us define the following Lyapunov function candidate:

V(x(k)) = xᵀ(k)Px(k),  (11)

where P is a positive definite matrix. By (8), it follows that V(x(k + 1)) − V(x(k)) = xᵀ(k + 1)Px(k + 1) − xᵀ(k)Px(k) < 0 holds. Thus, the closed-loop uncertain system is stable. Moreover, summing the inequality (8) from zero to N results in

xᵀ(N + 1)Px(N + 1) − xᵀ(0)Px(0) + Σ_{k=0}^{N} xᵀ(k)[Q + CᵀK̃ᵀRK̃C]x(k) < 0.

Since the closed-loop uncertain system is stable, x(N + 1) → 0 as N → ∞. Finally, (10) holds. This is the desired result.
The objective of this section is to design a fixed suboptimal control gain matrix K for the uncertain system (4) via the LMI design approach.
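Before turning to the LMI conditions, note that the decrease condition (8) of Lemma 3.2 can be spot-checked numerically for a given candidate pair (K, P). The following sketch (ours, with NumPy assumed; random sampling is only a sanity check, not a proof) draws admissible F(k) and N(k) with spectral norm at most one and tests the matrix in (8):

```python
import numpy as np

def check_lemma_3_2(A, B, C, D, Ea, Eb, Dk, Ek, K, P, Q, R,
                    n_samples=200, seed=0):
    """Spot-check the decrease condition (8) of Lemma 3.2.

    Samples admissible F(k), N(k) with spectral norm <= 1 (so that
    F^T F <= I and N^T N <= I) and verifies that the matrix in (8)
    is negative definite for each sample.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        F = rng.uniform(-1.0, 1.0, (D.shape[1], Ea.shape[0]))
        F /= max(1.0, np.linalg.norm(F, 2))
        N = rng.uniform(-1.0, 1.0, (Dk.shape[1], Ek.shape[0]))
        N /= max(1.0, np.linalg.norm(N, 2))
        Kt = K + Dk @ N @ Ek                       # perturbed gain K~
        Acl = (A + D @ F @ Ea) + (B + D @ F @ Eb) @ Kt @ C
        M = Acl.T @ P @ Acl - P + Q + C.T @ Kt.T @ R @ Kt @ C
        if np.max(np.linalg.eigvalsh((M + M.T) / 2.0)) >= 0.0:
            return False                            # (8) violated
    return True            # J < x(0)' P x(0) supported for the samples
```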
Theorem 3.3 below characterizes the fixed gain in terms of three conditions in the variables X = Xᵀ ∈ ℜ^{n×n}, Y = Yᵀ ∈ ℜ^{n×n} and the positive scalars µ1, µ2. The first two are LMIs of the left-annihilator form of Lemma 3.4 below,

U⊥ Ψ U⊥ᵀ < 0,  (12a)
Vᵀ⊥ Ψ Vᵀ⊥ᵀ < 0,  (12b)

where U and V are constant block matrices built from B, I_m, Eb and Cᵀ, respectively, and Ψ = Ψ(X, Y, µ1, µ2) is a symmetric block matrix assembled from the data of (4)–(7): its diagonal blocks are −Y, −X + (µ1 + µ2)DDᵀ, −R⁻¹ + Dk Dkᵀ, −I_m, −µ1 I_{qa}, −µ2 I_{qn} and −(Q + Ekᵀ Ek)⁻¹, and its off-diagonal blocks are formed from A, I_n, Ea, B Dk, Eb Dk, B Dk Dkᵀ and Eb Dk Dkᵀ. The third condition is the coupling condition

[X I_n; I_n Y] ≥ 0,  rank [X I_n; I_n Y] ≤ n.  (12c)
Theorem 3.3. Consider the uncertain discrete-time system (4) and cost function (7). For the unknown matrix function F(k) and arbitrary function N(k), if the LMIs (12) have a feasible solution consisting of symmetric positive definite matrices X ∈ ℜ^{n×n} and Y ∈ ℜ^{n×n} and positive scalars µ1 > 0 and µ2 > 0, then K is a fixed suboptimal control gain matrix. Furthermore, the corresponding value of the cost function (7) satisfies the following inequality (13) for all admissible uncertainties F(k) and arbitrary functions N(k):

J < J* = xᵀ(0)X⁻¹x(0).  (13)

In order to prove Theorem 3.3, the following lemmas will be used [12, 14].

Lemma 3.4 ([12]). Let matrices U ∈ ℜ^{dn×dm}, V ∈ ℜ^{dk×dn} and W = Wᵀ ∈ ℜ^{dn×dn} be given. Suppose rank(U) = dm < dn and rank(V) = dk < dn. Then there exists a matrix K ∈ ℜ^{dm×dk} satisfying UKV + (UKV)ᵀ + W < 0 if and only if the matrices U, V and W satisfy U⊥WU⊥ᵀ < 0 and Vᵀ⊥WVᵀ⊥ᵀ < 0, where M⊥ denotes a left annihilator of M.

Lemma 3.5 ([14]). Let G, H and F be real matrices of appropriate dimensions with FFᵀ ≤ I_n. Then, for any given φ > 0, the inequality GFH + (GFH)ᵀ ≤ φGGᵀ + φ⁻¹HᵀH holds.
Proof. Applying the Schur complement [35] and the standard inequality of Lemma 3.5 to the matrix inequality (8), and using the existing results [13] via Lemma 3.4, yields the inequalities (12). On the other hand, since the cost bound (10) can be proved by an argument similar to the proof of Lemma 3.2, it is omitted.
It is shown that the overall stability of the closed-loop system is guaranteed in Theorem 3.3. We propose that a neurocontroller or fuzzy controller can be substituted for the controller based on additive gain perturbations. Based on this proposal, the proof has been completed by regarding these additive gain perturbations as uncertainties. Finally, although the neurocontroller or fuzzy controller is included in the discrete-time uncertain system, the robust stability of the closed-loop system is attained.
Although the considered problem has additive gain perturbations in the output feedback gain, it might appear that the results obtained via the LMI condition are not a novel contribution because they could be derived by applying the existing results [24]. However, the method has succeeded in avoiding the bilinear matrix inequality (BMI) condition that was established in [21]. Moreover, the main contribution is to propose that additive gain perturbations such as a neurocontroller or fuzzy controller, instead of an uncertain output feedback controller, can be used for achieving a reduction in the enormous cost caused by the conservative LMI conditions. It is worth pointing out that such a controller synthesis concept has not previously existed.
The following algorithm [7] can be used to find the matrix pair (X, Y) such that the LMIs (12) with X = Y⁻¹ > 0 are satisfied.
Algorithm 1 ([7]). The linearization algorithm is conceptually described as follows. 1) Find a feasible point (µ1⁰, µ2⁰, X⁰, Y⁰) that satisfies the LMIs (12). If there is no such point, exit. Set r = 0. 2) Set Vʳ = Yʳ, Wʳ = Xʳ, and find Xʳ⁺¹, Yʳ⁺¹ that solve the following LMI problem: minimize Trace(VʳX + WʳY) subject to the LMIs (12). 3) If a stopping criterion is satisfied, exit. Otherwise, set r = r + 1 and go to step 2).
Based on [7], it was shown that the algorithm converges to some value. On the other hand, it should be noted that Algorithm 1 may not be able to find the smallest-order controller in all cases [7]. Moreover, it is possible to observe a vibration phenomenon and a very slow rate of convergence in simulation in some cases [8]. It is easy to obtain a solution set (µ1, µ2, X, Y) because the algorithm is a simple LMI problem. If the solution set satisfies the relation X = Y⁻¹ > 0, the suboptimal control gain matrix K is obtained by using the Matlab toolbox with Lemma 3.4. For the uncertain discrete-time system (4) associated with the cost function (7), the suboptimal cost J* can be achieved if a feasible solution exists.
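A compact sketch of this cone complementarity linearization loop is given below (our illustration, not the authors' code; it assumes CVXPY with an SDP-capable solver, and the problem-specific LMIs (12a)–(12b) are passed in as a callback, since their block structure depends on the plant data):

```python
import numpy as np
import cvxpy as cp

def ccl_iterate(n, extra_constraints, max_iter=30, tol=1e-7):
    """Cone complementarity linearization (Algorithm 1, sketch).

    Seeks X = Y^{-1} > 0 in the feasible set by minimizing
    Trace(V X + W Y) with V, W frozen at the previous iterate.
    `extra_constraints(X, Y)` should return the LMIs (12a), (12b).
    """
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((n, n), symmetric=True)
    coupling = cp.bmat([[X, np.eye(n)], [np.eye(n), Y]]) >> 0  # (12c)
    cons = [coupling] + list(extra_constraints(X, Y))

    # Step 1): feasibility solve for the initial point (may fail if infeasible).
    cp.Problem(cp.Minimize(0), cons).solve()
    Xr, Yr = X.value, Y.value
    for _ in range(max_iter):
        # Step 2): linearized trace objective with frozen V^r = Y^r, W^r = X^r.
        obj = cp.Minimize(cp.trace(Yr @ X) + cp.trace(Xr @ Y))
        cp.Problem(obj, cons).solve()
        Xr, Yr = X.value, Y.value
        # Step 3): simplified stopping criterion X ~ Y^{-1}.
        if np.linalg.norm(Xr @ Yr - np.eye(n)) < tol:
            break
    return Xr, Yr
```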
4. Main Idea
The LMI approach for uncertain discrete-time systems usually results in a conservative controller design due to the existence of the uncertainties ∆A, ∆B and the additive
gain perturbations ∆K. As a result, the cost J becomes large. The main contribution of this paper is to apply a NN or fuzzy logic as the additive gain perturbation in order to improve the cost performance. It is well known that NNs have found wide potential applications in system control because of their ability to perform nonlinear mappings. Since a sufficiently accurate model of the system is generally not available, using the nonlinear mapping provided by the neural output, with the uncertainty determined, will result in a better performance. On the other hand, fuzzy control is a theory that can be the best option when an existing control method is hard to apply because of difficulties with the mathematical model or because of nonlinearity. Since fuzzy control translates rules of thumb into certain inputs/outputs, it is easy to design the controller, and it is possible to implement it without knowledge of the exact system model. It should be noted that the proposed neurocontroller and fuzzy controller regulate their outputs in real time while robust stability is ensured by the LMI approach. Hence, a reduction in the cost can be expected when the response of the uncertain discrete-time system approaches that of the nominal closed-loop system. That is, the neurocontroller or fuzzy controller is required to compensate so that the system behaves like the nominal one. This idea can be explained as follows. First, it should be noted that the gain for the nominal system will be derived from the LQR theory. Since the nominal system attains the minimum cost, if the neurocontroller or fuzzy controller is programmed such that the resulting system response closely approaches that of the nominal system, the controlled system can be expected to achieve better performance. Let us consider the following nominal system without uncertainties:

x̂(k + 1) = Ax̂(k) + Bû(k),  (14a)
ŷ(k) = Cx̂(k),  (14b)
û(k) = K̂ŷ(k),  (14c)

where x̂(k) ∈ ℜⁿ is the state, ŷ(k) ∈ ℜˡ is the output and û(k) ∈ ℜᵐ is the control input. K̂ ∈ ℜ^{m×l} is the output feedback gain for the nominal system (14). The quadratic cost function (15) is associated with the system (14):

Ĵ = Σ_{k=0}^{∞} [x̂ᵀ(k)Qx̂(k) + ûᵀ(k)Rû(k)].  (15)

The control gain K̂ is derived by means of the existing LMI approach [13] for the nominal system (14) and cost function (15). For the nominal system (14) and the cost function (15), it is known that the cost of the nominal system Ĵ* is smaller than that of the uncertain system J*. As a result, if the behavior of the additive gain perturbation that is obtained from the NN or fuzzy logic is sufficiently close to that of the closed-loop nominal system that is based on the LQR theory, the increase in the cost caused by the LMI design can be reduced.

Remark 4.1. Without loss of generality, using the result in [26], it can be assumed that the uncertainty F(k) is a Gaussian white noise process with zero mean. Moreover, the condition E[x(0)xᵀ(0)] = I_n is also assumed, where E[·] denotes the expectation value. It may be noted that although these conditions seem to be conservative, they can be checked
before control is implemented. In this situation, the cost of the uncertain system under the proposed control without an intelligent control scheme such as a NN can be computed as follows:

J* := E[ Σ_{k=0}^{∞} ( xᵀ(k)Qx(k) + uᵀ(k)Ru(k) ) ]
    = Σ_{k=0}^{∞} E[ xᵀ(k)(Q + CᵀKᵀRKC)x(k) ]
    = Σ_{k=0}^{∞} Trace[ (Q + CᵀKᵀRKC) ( (A + BKC)ᵏ((A + BKC)ᵀ)ᵏ + 𝓔(k) ) ],

where

x(k) = 𝓐(k−1) ··· 𝓐(1)𝓐(0)x(0), k ≥ 1,
𝓐(k) := A + BKC + 𝓕(k), 𝓕(k) := DF(k)(Ea + Eb KC),
E[F(k)] := 0, E[Fᵀ(k)F(k)] := σI_{qa},
𝓔(k) := E[ 𝓕(k−1) ··· 𝓕(1)𝓕(0)𝓕ᵀ(0)𝓕ᵀ(1) ··· 𝓕ᵀ(k−1) ] + ···
      = kσD(Ea + Eb KC)(Ea + Eb KC)ᵀDᵀ + ··· ≥ 0, k ≥ 1, 𝓔(0) = 0,

and K is the output feedback control gain that is based on the proposed LMIs (12). On the other hand, the cost of the nominal system under the control of [13] is given below:

Ĵ* := E[ Σ_{k=0}^{∞} ( xᵀ(k)Qx(k) + uᵀ(k)Ru(k) ) ]
    = Σ_{k=0}^{∞} E[ xᵀ(k)(Q + CᵀK̂ᵀRK̂C)x(k) ]
    = Σ_{k=0}^{∞} Trace[ (Q + CᵀK̂ᵀRK̂C)(A + BK̂C)ᵏ((A + BK̂C)ᵀ)ᵏ ],

where K̂ is the suboptimal gain such that Ĵ* is minimized. Thus, it can be shown that the cost of the nominal system is smaller than that of the uncertain system without NN or fuzzy control.
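The expected costs discussed in this remark can also be estimated by simple Monte Carlo simulation (our sketch under the remark's assumptions: a scalar uncertainty channel pa = 1, Gaussian F(k) and E[x(0)x(0)ᵀ] = I_n; the infinite sum is approximated by a truncated horizon):

```python
import numpy as np

def expected_cost(A, B, C, D, Ea, Eb, K, Q, R, sigma=0.1,
                  horizon=2000, trials=200, seed=0):
    """Monte Carlo estimate of E[J] for the closed loop with gain K."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(trials):
        x = rng.standard_normal(n)        # E[x(0) x(0)^T] = I_n
        for _k in range(horizon):
            # Gaussian white-noise uncertainty; for pa = 1 the entry
            # variance sigma matches E[F^T F] = sigma * I.
            F = np.sqrt(sigma) * rng.standard_normal((D.shape[1], Ea.shape[0]))
            u = K @ C @ x
            total += x @ Q @ x + u @ R @ u
            x = (A + D @ F @ Ea) @ x + (B + D @ F @ Eb) @ u
    return total / trials
```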
5. Control Algorithm Using Fuzzy Logic
In order to ensure easy implementation and simple design, fuzzy control is proposed for reducing the cost. The proposed fuzzy controller regulates the arbitrary function N(k) so that the response of the uncertain system approaches that of the nominal system. It should be noted that although a neurocontroller can also be regulated in the same way and such a discussion appears to be helpful, it is omitted due to page limitations.
In this paper, the error Ef(k) between the proposed system (4) and the nominal system (14) and the difference of the error ∆Ef(k) are defined as the performance indices used to decide the fuzzy rules. They are defined as

Ef(k) = ŷ(k) − y(k),  (16a)
∆Ef(k) = Ef(k) − Ef(k − 1).  (16b)
Ef(k) and ∆Ef(k) are defined as the inputs of the fuzzy controller. Hence, the fuzzy controller outputs the arbitrary function N(k). The membership functions and their ranges are shown in Figure 3 and Figure 4. As symbols that denote the degree and sign of Ef(k), ∆Ef(k) and N(k), Negative Big (NB), Negative Middle (NM), Negative Small (NS), Zero (ZO), Positive Small (PS), Positive Middle (PM) and Positive Big (PB) are defined. The range of the membership functions is selected according to the maximum error and the maximum difference of the error when N(k) = 0 for the proposed system (4). The relationship between the input and output of the fuzzy controller is the most important part. This relationship, which is expressed by if-then rules, must be obtained correctly to improve the performance of the fuzzy logic control system. The fuzzy logic is determined not by strict values but by vague expressions. Therefore, the proposed fuzzy rules can be stated with expressions such as "Big" or "Small". The process of determining the rules is to decide whether the arbitrary function N(k) should be increased or decreased according to the error Ef(k) and the difference of the error ∆Ef(k). As a result, the control rules for the case where the initial condition of the proposed system moves from positive values to the origin are considered. In order to determine the amount of increment or decrement for the arbitrary function N(k), if-then rules are used. These rules are converted into a table as given in Table 1. For example, when ∆Ef(k) is Zero (ZO) and Ef(k) is Negative Big (NB), then N(k) should be Positive Big (PB) to increase the absolute value of ‖K + ∆K(k)‖; as a result, the convergence (change) will be fast (large). In the other case, when ∆Ef(k) is Positive Big (PB) and Ef(k) is Zero (ZO), then N(k) should be Negative Big (NB) so that the absolute value of ‖K + ∆K(k)‖ is decreased; thus, the convergence (change) will be slow (small). In this way, the control rules are set by considering how Ef(k) and ∆Ef(k) change. In this paper, fuzzy subsets of the output Hi(k) are given in the following form:

If Ef(k) is h1i(k) and ∆Ef(k) is h2i(k), then N(k) is Hi(k), i = 1, . . . , M,  (17)
where M is the total number of rules, and h1i (k) and h2i (k) are fuzzy subsets of the input at step k. An OR operation is applied to the fuzzy subsets Hi (k), and N (k) can be obtained by calculating its center of gravity. Then, N (k) is given by
N(k) = [ Σ_{i=1}^{M} φi S(φi) ] / [ Σ_{i=1}^{M} S(φi) ],  (18)
Figure 3. Membership function for input of fuzzy controller.
Figure 4. Membership function for output of fuzzy controller.
Figure 5. Two cart systems.
where S(φi ) is the OR operation set of Hi (k), and φi is the horizontal axis of the membership function for the output of the fuzzy controller. Using (17), (18) and the proposed If-then rules, the fuzzy controller can regulate N (k) so that the cost J at each step k is decreased.
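A minimal implementation of this fuzzy inference step is sketched below (our code, not the authors'; the triangular membership shapes, the normalized universe [−1, 1] and the rule pattern of Table 1 are assumptions, since Figures 3 and 4 fix the actual ranges):

```python
import numpy as np

# Triangular membership functions on a normalized universe [-1, 1];
# centers for NB, NM, NS, ZO, PS, PM, PB (labels as in Table 1).
CENTERS = np.linspace(-1.0, 1.0, 7)

def membership(v, width=1.0 / 3.0):
    """Degree of membership of v in each of the 7 fuzzy sets."""
    return np.maximum(0.0, 1.0 - np.abs(v - CENTERS) / width)

def fuzzy_N(e, de):
    """Center-of-gravity output N(k) for inputs Ef(k), dEf(k), cf. (17)-(18).

    Rule table: output level = -(i + j) clipped to [-3, 3], which
    reproduces the sample rules in the text (Ef = NB, dEf = ZO -> PB;
    Ef = ZO, dEf = PB -> NB).
    """
    mu_e, mu_de = membership(e), membership(de)
    num = den = 0.0
    for i in range(7):                 # levels -3..3 shifted to 0..6
        for j in range(7):
            w = min(mu_e[i], mu_de[j])                 # rule firing strength
            out = int(np.clip(-((i - 3) + (j - 3)), -3, 3)) + 3
            num += w * CENTERS[out]
            den += w
    return num / den if den > 0.0 else 0.0

print(fuzzy_N(-1.0, 0.0))   # Ef = NB, dEf = ZO -> 1.0 (PB)
```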
6. Numerical Example
In order to demonstrate the effectiveness of the proposed fuzzy controller, a numerical example is given. Let us consider the two-cart system shown in Figure 5. x1(t) and x2(t) are the positions of cart A and cart B, respectively, u(t) is the control input, k1 and k2 are the spring constants, c1 and c2 are the damper constants, and m1 and m2 are the masses of cart A and cart B, respectively. In this system, frictional force between the floor and the wheels of the carts is not considered. By choosing the cart positions and their
Table 1. If-then rules for the uncertain system (output N(k); rows: Ef(k), columns: ∆Ef(k)).

Ef(k) |  NB   NM   NS   ZO   PS   PM   PB
------+------------------------------------
  NB  |  PB   PB   PB   PB   PM   PS   ZO
  NM  |  PB   PB   PB   PM   PS   ZO   NS
  NS  |  PB   PB   PM   PS   ZO   NS   NM
  ZO  |  PB   PM   PS   ZO   NS   NM   NB
  PS  |  PM   PS   ZO   NS   NM   NB   NB
  PM  |  PS   ZO   NS   NM   NB   NB   NB
  PB  |  ZO   NS   NM   NB   NB   NB   NB
velocities as the state variables and observing the cart positions as the output variables, the continuous-time state-space model of the two-cart system is given by

ẋ(t) = [ 0, 0, 1, 0;
         0, 0, 0, 1;
         −(k1 + k2)/m1, k2/m1, −(c1 + c2)/m1, c2/m1;
         k2/m2, −k2/m2, c2/m2, −c2/m2 ] x(t) + [ 0; 0; 1/m1; 0 ] u(t),  (19a)

y(t) = [ 1, 0, 0, 0;
         0, 1, 0, 0 ] x(t),  (19b)

where x(t) = [x1(t) x2(t) ẋ1(t) ẋ2(t)]ᵀ and y(t) = [x1(t) x2(t)]ᵀ. In this paper, the parameters of the cart system (19) are chosen as m1 = 2.0 [kg], m2 = 1.0 [kg], k1 = 1.3 [N/m], k2 = 0.7 [N/m], c1 = 0.9 [Ns/m] and c2 = 0.5 [Ns/m]. Converting the continuous-time description into a discrete-time one with the sampling time ts = 0.01 [s], the matrices of the system (4) are given by

A = [ 1.0000, 0.0000, 0.0100, 0.0000;
      0.0000, 1.0000, 0.0000, 0.0100;
      −0.0100, 0.0035, 0.9930, 0.0025;
      0.0070, −0.0070, 0.0050, 0.9950 ],
B = [ 0.0000; 0.0000; 0.0050; 0.0000 ],
C = [ 1.0, 0.0, 0.0, 0.0;
      0.0, 1.0, 0.0, 0.0 ],
D = [ 0.0; 0.0; 0.012; 0.0 ],
Ea = [ 0.15, −0.052, 0.1, −0.037 ], Eb = −0.05,
Dk = [ 0.5, 0.1 ], Ek = 1.0, F(k) = f(k), N(k) = [ N1(k), 0; 0, N2(k) ],

where N1(k) and N2(k) are the outputs of the fuzzy control.
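For reference, the discretization step can be reproduced numerically; the sketch below (our own, assuming SciPy) applies a zero-order hold at ts = 0.01 s to the continuous model (19) and should recover the discrete matrices above up to rounding:

```python
import numpy as np
from scipy.signal import cont2discrete

# Parameters from the text.
m1, m2 = 2.0, 1.0
k1, k2 = 1.3, 0.7
c1, c2 = 0.9, 0.5
ts = 0.01

Ac = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [-(k1 + k2) / m1,  k2 / m1, -(c1 + c2) / m1,  c2 / m1],
    [ k2 / m2,        -k2 / m2,  c2 / m2,        -c2 / m2],
])
Bc = np.array([[0.0], [0.0], [1.0 / m1], [0.0]])
C  = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])

# Zero-order-hold discretization at the sampling time ts = 0.01 s.
Ad, Bd, *_ = cont2discrete((Ac, Bc, C, np.zeros((2, 1))), ts, method='zoh')
print(np.round(Ad, 4))   # should match A above to about 4 decimals
print(np.round(Bd, 4))
```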
Table 2. The actual costs. (The cost of the nominal system is Ĵ = 1.3941e+04.)

F(k)            With fuzzy     Without fuzzy
1               1.2669e+04     1.5441e+04
exp(−0.002k)    1.2309e+04     1.4720e+04
cos(πk/18.0)    1.1709e+04     1.3990e+04
In this system, it is assumed that the mass m1 of cart A can vary from 1.7 to 2.4, and D, Ea and Eb are fixed. The initial system condition is x(0) = [2.0 3.0 0.0 0.0]ᵀ, and the weighting matrices are chosen as Q = diag(4.0, 3.0, 2.0, 2.0) and R = 1.0, respectively. The assumption in inequalities (6) is rather restrictive and difficult to verify. Moreover, the partitioning method for the matrices D, Ea and Eb is not unique. It should be noted that a detailed method of identification of these matrices along with many examples has been given in [26]. The output feedback control gain K that is based on the proposed LMIs (12) is given by

K = [ K1 K2 ] = [ −2.1232 0.4290 ].  (20)

For the nominal system (14), the output feedback control gain K̂ based on the LMI design method in [13] is given by

K̂ = [ K̂1 K̂2 ] = [ −0.9636 −0.1631 ].  (21)

In order to compare with the proposed method, let us consider the following system without the proposed additive gain:

x̄(k + 1) = [A + ∆A(k)]x̄(k) + [B + ∆B(k)]ū(k),  (22a)
ȳ(k) = Cx̄(k),  (22b)
ū(k) = K̄ȳ(k),  (22c)

where x̄(k) ∈ ℜⁿ is the state, ȳ(k) ∈ ℜˡ is the output and ū(k) ∈ ℜᵐ is the control input. K̄ ∈ ℜ^{m×l} is the output feedback gain for the uncertain system (22). The quadratic cost function (23) is associated with the system (22):

J̄ = Σ_{k=0}^{∞} [ x̄ᵀ(k)Qx̄(k) + ūᵀ(k)Rū(k) ].  (23)

K̄ is designed by using the proposed LMI approach for the uncertain system (22) without the additive gain perturbations:

K̄ = [ K̄1 K̄2 ] = [ −1.14359 −0.1719 ].  (24)

The results of the cost for the proposed system (4) with the fuzzy controller and for the uncertain system (22) without the additive gain perturbations are shown in Table 2. In all
cases, the cost J with the fuzzy controller is smaller than the cost J̄ without the fuzzy controller. Therefore, Table 2 also shows that it is possible to improve the cost by applying the newly proposed fuzzy controller. The simulation results for f(k) = 1 are shown in Figure 6. It is verified from Figure 6 (a), . . . , (e) that the response of the proposed fuzzy controller is faster than that of the controller without the fuzzy controller. Figure 6 (f) shows the result for the feedback gain with the additive gain perturbations K + ∆K(k). It is also verified that the proposed fuzzy rules can reduce the cost and compensate for the uncertainties of the system. Figure 7 shows the response of the system with the proposed fuzzy controller and of the nominal control system under f(k) = 1. The state variables xi, i = 1, . . . , 4 can trace the state variables x̂i, i = 1, . . . , 4 well, as shown in Figure 7 (a), . . . , (d). Since K + ∆K(k) changes to compensate for the system uncertainties and its response can be close to the nominal response, the proposed fuzzy controller can reduce the cost. Therefore, the control rules are adequate for the fuzzy logic. Moreover, since the fuzzy logic is easier to design than the conservative conditions of the learning algorithm of a neurocontroller, the proposed fuzzy controller is expected to be very useful and reliable.
6.1. Comparison between the Static Feedback and Proposed Augmented Controllers

In the classical LQR theory, we first form a mathematical model of the plant dynamics based on the existing information about the plant. If the model equation is an accurate representation of the plant dynamics, it can generate a suboptimal control input. However, in actual applications, the knowledge of the plant dynamics is rarely exhaustive, and it is difficult to express the actual plant dynamics precisely in terms of mathematical equations. As a result, these factors generate a suboptimality gap. Therefore, it is preferable to adapt the static output feedback gain by using a time-varying feedback gain to account for the uncertainty and unmodelled nonlinearity. Moreover, a fixed controller may not attain suboptimality under variations in the initial condition, because the proposed static output feedback gain is designed under the conservative condition E[x(0)xᵀ(0)] = I_n [25]. Thus, using the time-varying feedback gain is recommended because suboptimality can be achieved under various initial conditions.
7. Dynamic Output Feedback

Consider the uncertain system (4a) and (4b), and suppose that a dynamic controller is to be constructed such that the closed-loop system is quadratically stable:

ξ(k + 1) = Ac ξ(k) + Bc y(k),  (25a)
u(k) = [K + ∆K(k)]ξ(k) + Cc y(k).  (25b)

Here, ξ(k) ∈ ℜ^{nc} is the state vector of the dynamic controller with nc ≤ n. Furthermore, the controller is required to minimize a bound on the given quadratic cost function (7).
Figure 6. Simulation results of the fuzzy controller when f (k) = 1.0. (a), (b), (c), (d): State variables. (e): Control input. (f): Output feedback gain as additive gain perturbations.
Figure 7. Simulation results of the proposed system versus the nominal control system of the fuzzy controller when f (k) = 1.0. (a), (b), (c), (d): State variables. (e): Control input. (f): Output feedback gain as additive gain perturbations.
This problem can be transformed into the problem of designing a static output feedback controller of the form

[ u(k); z(k) ] = [ Cc, K + ∆K(k); Bc, Ac ] [ y(k); ξ(k) ]  (26)

for the uncertain system

x̃(k + 1) = [Ã + ∆Ã(k)]x̃(k) + [B̃ + ∆B̃(k)]ũ(k),  (27a)
ỹ(k) = C̃x̃(k),  (27b)
ũ(k) = [K̃ + ∆K̃(k)]ỹ(k),  (27c)
where

x̃(k) := [ x(k); ξ(k) ], Ã := [ A, 0; 0, 0 ], ∆Ã(k) := [ ∆A(k), 0; 0, 0 ],
B̃ := [ B, 0; 0, I_{nc} ], ∆B̃(k) := [ ∆B(k), 0; 0, 0 ],
C̃ := [ C, 0; 0, I_{nc} ], K̃ := [ Cc, K; Bc, Ac ],
ũ(k) := [ u(k); z(k) ], ỹ(k) := [ y(k); ξ(k) ], ∆K̃(k) := [ 0, ∆K(k); 0, 0 ],

and the cost function

J = Σ_{k=0}^{∞} [ x̃ᵀ(k)Q̃x̃(k) + ũᵀ(k)R̃ũ(k) ]  (28)

with

Q̃ := [ Q, 0; 0, Q0 ] > 0, R̃ := [ R, 0; 0, R0 ] > 0.

Finally, since C̃ has full row rank, the static output feedback results can be used to solve the above problem.
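As a small illustration of this augmentation (our sketch, not part of the chapter), the block matrices of (27) can be assembled as follows:

```python
import numpy as np

def augment(A, B, C, dA, dB, nc):
    """Build the augmented matrices of (27) for the dynamic output
    feedback problem; nc is the controller order (nc <= n)."""
    n, m = B.shape
    l = C.shape[0]
    At  = np.block([[A,                 np.zeros((n, nc))],
                    [np.zeros((nc, n)), np.zeros((nc, nc))]])
    dAt = np.block([[dA,                np.zeros((n, nc))],
                    [np.zeros((nc, n)), np.zeros((nc, nc))]])
    Bt  = np.block([[B,                 np.zeros((n, nc))],
                    [np.zeros((nc, m)), np.eye(nc)]])
    dBt = np.block([[dB,                np.zeros((n, nc))],
                    [np.zeros((nc, m)), np.zeros((nc, nc))]])
    Ct  = np.block([[C,                 np.zeros((l, nc))],
                    [np.zeros((nc, n)), np.eye(nc)]])
    return At, dAt, Bt, dBt, Ct

# With these definitions the closed loop of (25) is recovered by the
# static gain K_tilde = [[Cc, K], [Bc, Ac]] applied to y_tilde = (y, xi).
```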
8. Conclusions

The applicability of additive gain perturbations to the output feedback suboptimal control problem of discrete-time systems that have uncertainties in both the state and input matrices has been investigated. Compared with the existing results [11, 18–20], a new LMI condition has been derived. In order to reduce the cost, a fuzzy controller has been newly introduced. By substituting the fuzzy controller into the additive gain perturbations, the robust stability and an adequate suboptimal cost of the closed-loop system are guaranteed even though the system includes these artificial controllers. The numerical example has shown that the fuzzy controller succeeds in reducing the large cost caused by the LMI technique.
References
[1] Ahmed, M. S., & Al-Dajani, M. A. Neural regulator design. Neural Networks 11 (1998), no. 9, 1695–1709.
[2] Cabrera, J. B. D., & Narendra, K. S. Issues in the application of neural networks for tracking based on inverse control. IEEE Transactions on Automatic Control 44 (1999), no. 11, 2007–2027.
[3] Diao, Y., & Passino, K. M. Stable adaptive control of feedback linearizable time-varying nonlinear systems with application to fault tolerant engine control. Int. Journal of Control 77 (2004), no. 17, 1463–1480.
[4] Garcia, G., Pradin, B., Tarbouriech, S., & Zeng, F. Robust stabilization and guaranteed cost control for discrete-time linear systems by static output feedback. Automatica 39 (2003), no. 9, 1635–1641.
[5] Ge, S. S., & Wang, C. Adaptive NN control of uncertain nonlinear pure-feedback systems. Automatica 38 (2002), no. 4, 671–682.
[6] Ge, S. S., & Li, Y. Adaptive NN control for a class of strict-feedback discrete-time nonlinear systems. Automatica 39 (2003), no. 5, 807–819.
[7] Ghaoui, L. E., Oustry, F., & AitRami, M. A cone complementarity linearization algorithm for static output-feedback and related problems. IEEE Transactions on Automatic Control 42 (1997), no. 8, 1171–1176.
[8] Henrion, D., Arzelier, D., & Peaucelle, D. Robust stabilization of matrix polytopes with the cone complementarity linearization algorithm: numerical issues. In Proceedings of the Process Control Conference, Slovakia (2001).
[9] Iiguni, Y., Sakai, H., & Tokumaru, H. A nonlinear regulator design in the presence of system uncertainties using multi-layered neural networks. IEEE Transactions on Neural Networks 2 (1991), no. 4, 410–417.
[10] Iiguni, Y. A robust neurocontroller incorporating a prior knowledge of plant dynamics. Math Comput Model 23 (1996), no. 1–2, 143–157.
[11] Ishii, Y., Mukaidani, H., Tanaka, Y., Bu, N., & Tsuji, T. LMI based neurocontroller for output-feedback guaranteed cost control of discrete-time uncertain system. Proceedings of the IEEE Int. Midwest Symp. Circuits and Systems, Hiroshima III (2004), 141–144.
[12] Iwasaki, T., & Skelton, R. E. All controllers for the general H∞ control problem: LMI existence conditions and state space formulas. Automatica 30 (1994), no. 8, 1307–1317.
[13] Iwasaki, T., Skelton, R. E., & Geromel, J. C. Linear quadratic suboptimal control with static output feedback. Systems and Control Letters 23 (1994), no. 6, 421–430.
[14] Khargonekar, P. P., Petersen, I. R., & Zhou, K. Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory. IEEE Transactions on Automatic Control 35 (1990), no. 3, 356–361.
[15] Lewis, F. L., Campos, J., & Selmic, R. Neuro-Fuzzy Control of Industrial Systems With Actuator Nonlinearities. Philadelphia, SIAM (2002).
[16] Kim, Y. H., Lewis, F. L., & Dawson, D. M. Intelligent optimal control of robotic manipulators using neural networks. Automatica 36 (2000), no. 9, 1355–1364.
[17] Li, Y., & Furong, G. Optimal guaranteed cost control of discrete-time uncertain systems with both state and input delays. Journal of the Franklin Institute 338 (2001), no. 1, 101–110.
[18] Mukaidani, H., Ishii, Y., Tanaka, Y., Bu, N., & Tsuji, T. LMI based neurocontroller for guaranteed cost control of discrete-time uncertain system. Proceedings of the 43rd IEEE Conference on Decision and Control, Bahamas (2004), 809–814.
[19] Mukaidani, H., Ishii, Y., & Tsuji, T. Decentralized guaranteed cost control for discrete-time uncertain large-scale systems using neural networks. Proceedings of the IFAC World Congress, Czech Republic, CD-Rom (2004).
[20] Mukaidani, H., Ishii, Y., Bu, N., Tanaka, Y., & Tsuji, T. LMI based neurocontroller for state-feedback guaranteed cost control of discrete-time uncertain system. Special issue on recent advances in circuits and systems of the IEICE Transactions on Information and Systems E88-D (2005), no. 8, 1903–1911.
[21] Mukaidani, H., Sakaguchi, S., Ishii, Y., & Tsuji, T. BMI-based neurocontroller for state-feedback guaranteed cost control of discrete-time uncertain system. Proceedings of the IEEE International Symposium on Circuits and Systems, Kobe (2005), 3055–3058.
[22] Ordonez, R., & Passino, K. M. Stable multi-input multi-output adaptive fuzzy/neural control. IEEE Transactions on Fuzzy Systems 7 (1999), no. 3, 345–353.
[23] Narendra, K. S., & Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks 1 (1990), no. 1, 4–27.
[24] Park, J. H. On dynamic output feedback guaranteed cost control of uncertain discrete-delay systems: LMI optimization approach. Journal of Optimization Theory and Applications 121 (2004), no. 1, 147–162.
[25] Petersen, I. R., & McFarlane, D. C. Optimal guaranteed cost control and filtering for uncertain linear systems. IEEE Transactions on Automatic Control 39 (1994), no. 9, 1971–1977.
[26] Petersen, I. R., Ugrinovskii, V. A., & Savkin, A. V. Robust Control Design Using H∞ Methods. London: Springer (2000).
[27] Polycarpou, M. M., & Helmicki, A. J. Automated fault detection and accommodation: a learning systems approach. IEEE Transactions on Systems, Man, and Cybernetics 25 (1995), no. 11, 1447–1458.
[28] Polycarpou, M. M. Stable adaptive neural control scheme for nonlinear systems. IEEE Transactions on Automatic Control 41 (1996), no. 3, 447–451.
[29] Spooner, J. T., & Passino, K. M. Stable adaptive control using fuzzy systems and neural networks. IEEE Transactions on Fuzzy Systems 4 (1996), no. 3, 339–359.
[30] Tanaka, K., Ikeda, T., & Wang, H. O. Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs. IEEE Transactions on Fuzzy Systems 6 (1998), no. 2, 250–265.
406
Hiroaki Mukaidani et al.
[31] Xie, L., & Soh, Y. C. Guaranteed cost control of Uncertain discrete-time systems. Control Theory and Advanced technology 10 (1995), no. 4, 1235–1251. [32] Xie, L., Souza, C. E. D., & Wang, Y. Robust control of discrete time uncertain dynamical systems. Automatica 29 (1993), no. 4, 1133–1137. [33] Yang, G.-H., Wang, J. L., & Soh, Y. C. Guaranteed cost control for discrete-time linear systems under controller gain perturbations. Linear Algebra and its Applications 312 (2000), 161–180. [34] Yu, L. & Gao, F. (2002). Output feedback guaranteed cost control for uncertain discrete-time systems using linear matrix inequalities. Journal of Optimization Theory and Applications 113 (2002), no. 3, 621–634. [35] Zhou, K. Essentials of Robust Control. New Jersey: Prentice Hall (1998).
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 407-424
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 16
N UMERICAL C OMPUTATION FOR S OLVING C ROSS -C OUPLED L ARGE -S CALE S INGULARLY P ERTURBED S TOCHASTIC A LGEBRAIC R ICCATI E QUATION Hiroaki Mukaidani1,∗ and Vasile Dragan2 1 Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, 739-8524 Japan 2 Institute of Mathematics of the Romanian Academy, 1-764, Ro-70700, Romania
Abstract In this paper, the linear quadratic infinite horizon Nash games for large-scale singularly perturbed stochastic systems (LSPSS) are studied. After establishing the local uniqueness and the asymptotic structure of the solutions to the cross-coupled largescale stochastic algebraic Riccati equation (CLSARE), a new algorithm on the basis of the Newton’s method is established. It is shown that the quadratic convergence of the proposed method under an appropriate initial guess is guaranteed by using this structure of the solutions. Furthermore, in order to avoid the large dimensional computations of matrix calculation, the fixed point iterations are also given. As a result, the results obtained in this paper represent very powerful tools for simplified computations with the high-order accuracy. The computational examples are given to demonstrate the efficiency and feasibility of the proposed algorithm.
1.
Introduction
The linear quadratic Nash games and their applications have been extensively investigated in analysis of deterministic dynamic systems (see e.g., [1]). It is well-known that in order to obtain the Nash equilibrium strategy, the cross-coupled algebraic Riccati equation (CARE) has to be solved. Various reliable approaches to the theory of the CARE have been well ∗
E-mail address: [email protected]
408
Hiroaki Mukaidani and Vasile Dragan
documented in many literatures (see e.g., [2, 3]). One of the approaches is the Newton method [2]. Although this algorithm has the useful property of quadratic convergence, the large dimension for the computation is needed. On the other hand, in order to reduce the dimension of the computing, the Lyapunov iterations have been derived [3]. However, these computational approaches cannot be applied to the dynamic systems with the stochastic uncertainty such as standard Wiener process. The stochastic control problems governed by Itˆo’s differential equation have become a popular research topic during the past decade. It has attracted much attention and has been widely applied to various control problems. Recently, the stochastic H2 /H∞ control with state-dependent noise has been addressed [4]. Although the results in [4] are very elegant in theory and despite it being easy to obtain a strategy pair by solving the crosscoupled stochastic algebraic Riccati equations (CSARE), the numerical analysis has not been discussed. Meanwhile, a few reliable results have been obtained on the Newton’s method for solving the CSARE related to the weakly coupled large-scale systems [17]. It has been shown that the quadratic convergence and the reduced-order computation are both attained by using a hybrid algorithm. The control problems for the multiparameter singularly perturbed systems (MSPS) have been investigated extensively (see e.g., [5,6] and reference therein). It is well known that the multimodeling problems arise in large-scale dynamic systems such as multimachine power systems. In general, large-scale systems that consists of MSPS are composed of pure slow and fast dynamics and weak interconnections among state variables. For these problems, the concept of a composite control methodology has been introduced. Although the composite strategy is valid in general for the control problem of MSPS, it has been shown from [6] that a Nash composite strategy may not satisfy the Nash equilibrium property. Therefore, decision makers need to use a numerical strategy that converges to the exact Nash strategy as long as the small parameters are exactly known. Namely, since an approximate Nash solution may not possess the Nash equilibrium, it is necessary to establish the validity of the exact strategy on the basis of the numerical one. Recent advance in the numerical computation approach for the MSPS has allowed us to expand the study on the Nash games [7, 14, 15]. It can be utilized to find the feasible solutions with the adequately high-order accuracy of the Nash strategy. However, a weakness of these theoretical results is that the stochastic uncertainty has not been considered. Therefore, the stochastic uncertainty such as standard Wiener process should be included in the treatment because the existing results on the Nash games are restricted to the deterministic MSPS. In this paper, the linear quadratic infinite horizon N -players Nash game for large-scale singularly perturbed stochastic systems (LSPSS) is discussed. After defining the crosscoupled large-scale stochastic algebraic Riccati equation (CLSARE) to obtain the Nash strategy, the local uniqueness and the asymptotic structure of the solutions for the CLSARE is formulated. Our attention in this paper is not to solve the general stochastic Nash games for the MSPS but to obtain numerical strategy for the given algorithms. For this purpose, first the existing algorithm [17] is extended to CLSARE. 
That is, a revised algorithm on the basis of the Newton’s method with the reduced-order computations is established. As a result, the quadratic convergence and the local uniqueness are both attained. Furthermore, a hybrid algorithm by means of two fixed point algorithms for solving the CLSARE is con-
Numerical Computation...
409
sidered. Using this algorithm, the required work space to solve the reduced-order equations is the same as the reduced-order slow and fast subsystems that is smaller than the dimension of the full-order system. Two different computational examples are given to demonstrate the efficiency and feasibility of the proposed analysis. Notation. The notations used in this paper are fairly standard. detL denotes the determinant of square matrix L. In denotes the n × n identity matrix. || · || denotes its Euclidean norm for a matrix. block diag denotes the block diagonal matrix. vecM denotes the column vector of the matrix M [18]. ⊗ denotes the Kronecker product. ⊕ denotes the Kronecker sum such that M ⊕ N := M ⊗ In + Im ⊗ N , M ∈ ℜm×m , N ∈ ℜn×n . Ulm denotes a permutation matrix in the Kronecker matrix sense [18] such that Ulm vecM = vecM T , M ∈ ℜl×m . E[·] denotes the expection operator.
2.
Problem Formulation
Let us consider the following LSPSS that consist of N -fast subsystems with specific structure of lower level interconnected through the dynamics of a higher level slow subsystem. N N X X B0j uj (t) dt A0j xj (t) + dx0 (t) = j=0
+
M X p=1
j=1
Ap00 x0 (t) + µ
N X j=1
Ap0j xj (t) dwp (t), x0 (0) = x00 ,
(1a)
εi dxi (t) = [Ai0 x0 (t) + Aii xi (t) + Bii ui (t)]dt +¯ εδ
M X
[Api0 x0 (t) + Apii xi (t)]dwp (t),
p=1
xi (0) = x0i , i = 1, . . . , N,
(1b)
where xi (t) ∈ ℜni , i = 0, 1, . . . , N are the state vectors, ui (t) ∈ ℜmi , i = 1, . . . , N are the control inputs. εi > 0, i = 1, . . . , N and µ > 0 are small parameters and δ > 1/2 is independent of ε¯ := min{ε1 , . . . , εN } [9–11]. It should be noted that the parameters µ and δ have been introduced in [9–11] for the first time. wp (t) ∈ ℜ, p = 1, . . . , M is a one-dimensional standard Wiener process defined in the filtered probability space [4,9–11]. It is assumed that the ratios of the small positive parameters εi , i = 1, . . . , N and µ are bounded by some positive constants kij , k¯ij , l and ¯l and only these bounds are assumed to be known [5, 6]. In other words, they have the same order of magnitude. 0 < k ij ≤ αij ≡
εj ε¯ ≤ k¯ij < ∞, 0 < l ≤ ≤ ¯l < ∞. εi µ
(2)
Note that one of the fast state matrices Ajj , j = 1, . . . , N may be singular. The performance criterion is given by Z ∞ [xT (t)Qi x(t) + uTi (t)Ri ui (t)]dt, i = 1, . . . , N, (3) Ji (u1 , . . . , uN ) = E 0
410
Hiroaki Mukaidani and Vasile Dragan
where xT (t) := Qi := Qi0f :=
CiT Ci
xTN (t)
xT0 (t) xT1 (t) · · · =
0 ···
Qi00 Qi0f QTi0f Qif 0 Qi0i
T
∈ ℜn¯ , n ¯ :=
T , Qi00 := Ci0 Ci0 , 0 ··· 0 ,
Qif := block diag 0 · · · 0 Qiii 0 · · · C1 := C10 C11 0 · · · 0 , .. . Ci := Ci0 0 · · · 0 Cii 0 · · · 0 , .. . CN := C0N 0 · · · 0 CN N .
0
N X
nj ,
j=0
,
Let us introduce the partitioned matrices A00 A0f A := , A0f := A01 · · · A0N , Af 0 Af T T , Af := block diag A11 · · · AN N , Af 0 := A10 · · · ATN 0 Ap00 µAp0f Ap := , Ap0f := Ap01 · · · Ap0N , δ δ ε¯ Apf 0 ε¯ Apf T T Apf 0 := Ap10 · · · ATpN 0 , Apf := block diag Ap11 · · · ApN N , T T T B11 0 ··· 0 , B1 := B10 .. . T T 0 · · · 0 BiiT 0 · · · 0 , Bi := Bi0 .. . T T T 0 · · · 0 BN BN := B0N . N
Without loss of generality, the following basic assumptions (see e.g., [3, 6]) are made.
Assumption 1. The triples (Aii , Bii , Cii ), i = 1, . . . , N are stabilizable and detectable. These conditions are quite natural since at least one control agent has to be able to control and observe unstable modes. Our purpose is to find a linear feedback strategy set (u∗1 , . . . , u∗N ) such that Ji (u∗1 , . . . , u∗N ) ≤ Ji (u∗1 , . . . , u∗i−1 , ui , u∗i+1 , . . . , u∗N ), i = 1, . . . , N.
(4)
The decision makers are required to select the closed loop strategy u∗i (t), if they exist, such that (4) holds. Moreover, each player uses the strategy u∗i (t) such that the closed-loop
Numerical Computation...
411
system is asymptotically mean square stable for sufficiently small εi [20]. The following lemma is already known [17]. Lemma 1. There exists an admissible strategy such that the inequality (4) holds iff the CLSARE T M N N X X X Sje Pje ATpe Pie Ape Sje Pje + Ae − Pie + Pie Ae − p=1
j=1
j=1
+Pie Sie Pie + Qi = 0,
(5)
have solutions Pie := Φe Pi ≥ 0, where Φe := In0 Πe , Πe := block diag
ε1 In1
···
εN InN
,
−1 T −1 −1 Ae := Φ−1 e A, Ape := Φe Ap , Bie := Φe Bi , Sie := Bie Ri Bie , Pi00 PifT 0 Πe T Pi := , Pi00 := Pi00 , Pif 0 Pif T T T · · · PiN Pif 0 := Pi10 , Πe Pif := PifT Πe , 0 T T T ··· α1N PiN α13 Pi31 Pi11 α12 Pi21 1 T T Pi21 P α P · · · α P i22 23 2N i32 iN 2 .. .. .. .. .. Pif := . . . . . T Pi(N −1)1 Pi(N −1)2 Pi(N −1)3 · · · α(N −1)N PiN (N −1)
PiN 1
PiN 2
PiN 3
···
PiN N
.
Then the closed-loop linear Nash equilibrium solutions to the full-order problem are given by u∗i (t) = −Ri−1 BiT Pi x(t).
3.
(6)
Asymptotic Structure and Local Uniqueness
In order to obtain the numerical Nash strategies for the CLSARE (5), asymptotic structure and local uniqueness are investigated. Underq Assumption 1, the following zeroth-order equations of the CLSARE (5) are given as ||ν|| :=
ε21 + ε22 + · · · + ε2N + µ2 → 0+ .
P¯i00 As − +
M X
N X j=1
Ssj P¯j00 + As −
N X j=1
T
Ssj P¯j00 P¯i00
ATp00 P¯i00 Ap00 + P¯i00 Ssi P¯i00 + Qsi = 0,
(7a)
p=1
ATii P¯iii + P¯iii Aii − P¯iii Siii P¯iii + Qiii = 0, P¯ikl = 0, k > l, P¯ijj = 0, i 6= j
(7b) (7c)
412
=
=
P¯110 P¯111 P¯120 P¯222 .. .
Hiroaki Mukaidani and Vasile Dragan P¯210 · · · P¯N 10 −1 I 0 ··· 0 −In1 T111 , T110 ¯n0 ¯ P100 P200 · · · P¯N 00 P¯220 · · · P¯N 20 −1 0 In0 · · · 0 −In2 T222 T220 ¯ , P100 P¯200 · · · P¯N 00
P¯1N 0 P¯2N 0 · · · P¯N N 0 −1 0 0 · · · I n 0 = P¯N N N −InN TN N N TN N 0 ¯ , P100 P¯200 · · · P¯N 00
where
As * T * −As
=
A00 * T * −A00
−
N X
−1 Ti0i Tiii Tii0 ,
i=1
−Ssi −1 = Ti00 − Ti0i Tiii Tii0 , * A0i A00 −Si00 , Ti0i := Ti00 := T −Qi0i −Qi00 −A00 T Aii Ai0 −Si0i , Tiii := Tii0 := T T −Qi0i −A0i −Qiii * −Qsi
(7d)
−Si0i −ATi0
−Siii −ATii
,
, i = 1, . . . , N.
Before establishing the asymptotic structure of the reduced-order solution, we introduce the following assumption. Assumption 2. The cross-coupled stochastic algebraic Riccati equation (CSARE) (7a) has stabilizing solution P¯i00 , i = 1, . . . , N . This means that the solution x0 (t) = 0 of the closed-loop stochastic system M N X X dx0 (t) = As − Ssj P¯j00 x0 (t)dt + Ap00 x0 (t)dwp (t) (8) p=1
j=1
is exponentially stable in mean square.
It may be noted that the stochastic stabilizability is necessary condition for the existence of the stabilizing solution of CSARE. The following theorem shows the relation between the solutions Pi and the zeroth-order solutions P¯ikl , i = 1, . . . , N, k ≥ l, 0 ≤ k, l ≤ N . P ˆT T T −(Ss2 P¯100 ) ⊕ (Ss2 P¯100 ) ··· As ⊕ AˆTs + M p=1 Ap00 ⊗ Ap00 PM T T T T ˆ ¯ ¯ ˆ −(Ss1 P200 ) ⊕ (Ss1 P200 ) As ⊕ As + p=1 Ap00 ⊗ Ap00 · · · det . .. .. .. . . ¯ ¯ ¯ ¯ −(Ss PN 00 ) ⊕ (Ss PN 00 ) −(Ss PN 00 ) ⊕ (Ss PN 00 ) · · · 1
1
2
2
Numerical Computation... −(SsN P¯100 ) ⊗ In0 −In0 ⊗ (SsN P¯100 ) −(SsN P¯200 ) ⊕ (SsN P¯200 ) 6= 0, .. . PM T T T T ˆ ˆ As ⊕ As + p=1 Ap00 ⊗ Ap00 where Aˆs := As −
N X
413
(9)
Ssj P¯j00 and Aˆs are stable matrix.
j=1
Theorem 1. Under Assumptions 1 and 2, there is a neighborhood V(0) of ||ν|| = 0 such that for all ||ν|| ∈ V(0) there exists a solution Pi = Pi (ε1 , . . . , εN ). These solutions are unique in a neighborhood of P¯i = Pi (0, . . . , 0). Then, the CLSARE (5) possess the power series expansion at ||ν|| = 0. That is, the following form is satisfied. Pi = P¯i + O(||ν||) P¯i00 0 · · · 0 0 P¯i10 0 · · · 0 0 .. .. . . .. .. . . . . . = ¯ ¯ Pii0 0 · · · 0 Piii .. .. . . .. .. . . . . . ¯ PiN 0 0 · · · 0 0
0 ··· 0 ··· .. . . . . 0 ··· .. . . . .
0 0 .. . 0 .. .
0 ··· 0
+ O(||ν||).
(10)
Proof. First, zeroth-order solutions for the asymptotic structure of CLSARE (5) are established. Under Assumption 1, the following equality holds.
Aii −Siii −Qiii −ATii
=
Ini 0 P¯iii Ini
Aˆii −Sii 0 −AˆTii
Ini 0 ¯ −Piii Ini
,
(11)
where Aˆii := Aii − Siii P¯iii . Since Tiii is nonsingular, Aˆii is also nonsingular. This means −1 that Tiii can be expressed explicitly in terms of Aˆ−1 ii . Therefore, using the above result, the formulations (7) are obtained. These transformations can be done by the lengthy, but direct algebraic manipulations [15, 16], which are omitted here. For the local uniqueness of the solutions Pi = Pi (ε1 , . . . , εN ), it is enough to verify that the corresponding Jacobian is nonsingular at ||ν|| = 0. Formally calculating the derivative of the CLSARE (5) and after some tedious algebra, the left-hand side of (9) is obtained by setting ||ν|| = 0 and using (7). Then, since Assumption 2 holds, the condition (9) is automatically verified. Finally, the implicit function theorem implies that there is a unique solutions map Pi = Pi (ε1 , . . . , εN ) and a neighborhood V(0) of ||ν|| = 0 because the condition (9) is equivalent to the corresponding Jacobian at ||ν|| = 0. It is noteworthy that the local uniqueness is newly shown compared with the existing results [5, 6, 14–16]. Moreover, it may be noted that the formulas under the equation (7) have been used to simplify the expressions for the first time to the stochastic case.
414
4.
Hiroaki Mukaidani and Vasile Dragan
Newton’s Method
First, in order to obtain the optimal strategies, a useful algorithm that is based on Newton’s method is given as follows. T N N X X (k+1) (k) (k) (k+1) Sje Pje Pie Sje Pje + Ae − Pie Ae − j=1
j=1
+
M X
(k+1)
ATpe Pie
p=1
+
N X
j=1, j6=i (k)
Ape −
(k)
N X
j=1, j6=i
(k)
(k+1)
Pje
(k)
(k)
Pje Sje Pie + Pie Sje Pje
(k)
(k)
(k)
(k+1)
Sje Pie + Pie Sje Pje
+Pie Sie Pie + Qi = 0, k = 0, 1, . . . ,
(12)
(0)
with the initial conditions Pie = Φe P¯i . Theorem 2. Suppose the positive semidefinite solutions of the CLSARE (5) exist. Under Assumptions 1 and 2, there exists a small σ ¯ such that for all ||ν|| ∈ (0, σ ¯ ), σ ¯ ≤ σ ∗ the Newton’s method (12) converges to the exact solution Pie∗ = Φe Pi∗ = Pi∗T Φe with the (k) (k) (k)T rate of quadratic convergence, where Pie = Φe Pi = Pi Φe is positive semidefinite. Moreover, the convergence solutions attain a local unique solution in the neighborhood of the initial condition. In other words, the following condition is satisfied. (k)
k
− Pi∗ || = O(||ν||2 ), k = 0, 1, 2, . . . .
||Pi
(13)
Proof. Since it is clear that this proof can be derived by applying the Newton-Kantorovich theorem [19], it has been omitted. See e.g., [15] for details.
5.
Fixed Point Iterations
When we apply the Newton’ method, the large dimension for the computations are needed. In order to avoid this drawback, we give another numerical computation method on the basis of the fixed point iterations. Now, let us consider the following fixed point algorithm for solving the CLSARE (5). T N N X X (n+1) (n) (n) (n+1) Pie A− Sje Pje + A − Sje Pje Pie j=1
+
M X
(n+1)
ATpe Pie
j=1
(n)
(n)
Ape + Pie Sie Pie + Qi = 0, n = 0, 1, . . . ,
(14)
p=1
(0)
where Pie is the solutions of the following stochastic algebraic Riccati equation (SARE). (0)
(0)
(0)
(0)
P1e A + AT P1e − P1e S1e P1e +
M X p=1
(0)
ATpe P1e Ape + Q1 = 0,
Numerical Computation... (0)
(0)
(0)
415
(0)
P2e (A − S1e P1e ) + (A − S1e P1e )T P2e (0)
(0)
−P2e S2e P2e +
(0)
.. .
PN e A −
N −1 X j=1
(0)
M X
(0)
ATpe P2e Ape + Q2 = 0,
p=1
(0)
Sje Pje + A − (0)
−PN e SN e PN e +
M X
N −1 X j=1
(0)
T
(0)
(0)
Sje Pje PN e
ATpe PN e Ape + QN = 0.
p=1
Theorem 3. Suppose the positive semidefinite solutions of the CLSARE (5) exist. Under Assumptions 1 and 2, there exists a small σ ˆ such that for all ||ν|| ∈ (0, σ ˆ ), σ ˆ ≤ σ ∗ the (n) fixed point algorithm (14) converges to the exact solution Pie∗ . Moreover, Pie is positive N X (n) Sje Pj is stable. semidefinite and Aie (n) := Ae − j=1
Proof. We give the proof by using the successive approximation technique [13]. Firstly, we (0) T P (0) x(t). Then, the following take any stabilizable linear strategy ui (t, x) = −Ri−1 Bie ie minimization problems need to be considered. M N X X (0) Sje Pje x(t) + Bie ui (t) dt + Ape x(t)dwp (t), (15a) dx(t) = A − j=1, j6=i Z ∞
Vi (t, x) = min E ui
t
p=1
[xT (τ )Qi x(τ ) + uTi (τ )Ri ui (τ )]dτ.
(15b)
Corresponding Hamiltonians to the stochastic Nash differential games for each control agent are given below. (0) (0) (0) (0) (0) Hi t, x, u1 , . . . , ui−1 , ui , ui+1 , . . . , uN , pi M (0) 2 X ∂ Vi 1 = Tr Ape x(t) + x(t)T Qi x(t) + ui (t)T Ri ui (t) xT (t)ATpe 2 ∂x2 p=1 ! N (0) T X ∂Vi (0) A − + Sje Pje x(t) + Bie ui (t), (16) ∂x j=1, j6=i
where
dx(t) = A −
(0) Vi
:=
(0) Vi (t,
N X j=1
(0) Sje Pje x(t)dt
x) = E
Z
∞
t
+
M X
Ape x(t)dwp (t),
p=1
(0) (0) xT (τ ) Qi + Pie Sie Pie x(τ )dτ.
416
Hiroaki Mukaidani and Vasile Dragan
The equilibrium controls must satisfy the following equation ∂Hi 1 (1) T = 0 ⇒ ui (t, x) = − Ri−1 Bie ∂ui 2
(0)
∂Vi ∂x
!
.
(17)
(0)
∂Vi along the system trajectory can be calculated from (18). Note that ∂x (0)
dVi (t, x) N (0) X ∂Vi (0) = A− Sje Pje x(t)dt ∂x j=1 M M (0) 2 X X ∂ Vi 1 A x(t) xT (t)ATpe Pie dwp (t) dt + 2 xT (t)ATpe + Tr pe 2 ∂x2 p=1 p=1 d (0) V (t, x) dt, i = 1, . . . , N. = dt i
(18)
Assume that these simple partial differential equations (18) have solutions of the following form (0)
(1)
Vi (t, x) = xT (t)Pie x(t).
(19)
A partial differentiation to (19) gives ∂ (0) ∂ 2 (0) (1) (1) Vi (t, x) = 2Pie x(t), Vi (t, x) = 2Pie . 2 ∂x ∂x
(20)
Moreover, we have
d (0) Vi (t, x) dt dt M X (0) (0) = xT (t) Qi + Pie Sie Pie x(t)dt + 2 xT (t)ATpe Pie dwp (t). (0)
dVi (t, x) =
(21)
p=1
Therefore, using (18) and (20), for any x(t) we have
(1) Pie A −
+
M X
N X j=1
(0)
Sje Pje + A −
(1)
(0)T
ATpe Pie Ape + Pie
N X j=1
(0)
T
(0)
(1)
Sje Pje Pie
Sie Pie + Qi = 0.
(22)
p=1
Thus, from (17) and (20), we get (1)
(1)
(1)
T Pie x(t), Pie ≥ 0. ui (t, x) = −Ri−1 Bie
(23)
Numerical Computation... (2)
417 (2)
(2)
TP Repeating the above steps, we get ui (t, x) = −Ri−1 Bie ie x(t), Pie ≥ 0. Continuing the same procedure, we get the sequences of the solution matrices. Finally, by using the monotonicity result of the successive approximations and the minimization technique in the negative gradient direction [13, 20], we get a monotone decreasing sequence (n+1)
Vi
(n)
(t, x) ≤ Vi
(t, x),
(24)
(n)
where Vi (t, x) ≥ 0. Thus, these sequences are convergent. Note that the sequence (n) (n) T P (n) x(t). Consequently, from ui (t, x) is also convergent, since ui (t, x) = −Ri−1 Bie ie the method of successive approximations [13, 20], the convergence proof is completed. N X (n) (n) Sje Pj Second, we prove that Pie is positive semidefinite and Aie (n) := Ae − j=1
is stable. The first stage is to prove that Aie (n) is stable. The proof is done by using (0) mathematical induction. When n = 0, Aie (0) is stable because Pie is the stabilizing solution of the SARE. Next n = q, we assume that Aie (q) is stable. Substituting n = q into (14) instead of n = 0, the minimization problem (14) produce a stabilizing control given by (q+1)
ui
(q+1)
T (t, x) = −Ri−1 Bie Pie
x(t).
(25)
It is obvious from the method of successive approximations that Aie (q + 1) is stable since it is the stable matrix of the closed-loop stochastic system. Thus, Aie (n) is stable for all (n) n ∈ N. The next stage is to prove that Pie is positive semidefinite matrix. This proof is (n) also done by using mathematical induction. When n = 0, it is obvious that Pie is positive (0) semidefinite matrix because Pie is the positive semidefinite solution of the SARE. Next (q) n = q, we assume that Pie is positive semidefinite matrix. Using the theory of [12] and (q+1) (n) fact that Aie (q) is stable, Pie is positive semidefinite matrix. Thus, Pie is positive semidefinite matrix for all n ∈ N. Consequently, the proof of Theorem 3 is completed.
6.
Reduced-Order Computation for Solving Stochastic Algebraic Lyapunov Equation
We must solve the large-scale stochastic algebraic Lyapunov equation (LSALE) (14) with P n . the dimension n ¯ := N j=0 j Thus, in order to reduce the dimension of the workspace, a new algorithm for solving LSALE (14) which is based on the fixed point algorithm is established. Let us consider the following LSALE (14), in a general form. ΛTe Xe + Xe Λe + AT1e Xe A1e + U = 0,
(26)
where Xe is the solution of the LSALE (14), and Λe and U are known matrices defined by T T X00 ε1 X10 ε2 X20 Λ00 Λ01 Λ02 T , Λ = Φ−1 Λ Λ11 φΛ12 , Xe = Φe X10 X11 α12 X21 10 e e X20 X21 X22 Λ20 φΛ21 Λ22
418 A1e
Hiroaki Mukaidani and Vasile Dragan A100 µA101 µA02 U00 U01 U02 T , U = U01 ε¯δ A110 ε¯δ A111 0 U11 φU12 , = Φ−1 e T T δ δ U02 φU12 U22 ε¯ A120 0 ε¯ A122
T T T T T T X00 = X00 , X11 = X11 , X22 = X22 , U00 = U00 , U11 = U11 , U22 = U22 ,
X00 , Λ00 , U00 ∈ ℜn0 ×n0 , X11 , Λ11 , U11 ∈ ℜn1 ×n1 , √ X22 , Λ22 , U22 ∈ ℜn2 ×n2 , φ = ε1 ε2 . It should be noted that the following special case of N = 2 and M = 1 is considered because it is easy to extend it to the general case. In order to solve the LSALE (26) corresponding to the iterative algorithm (14), we need another assumption. −1 Assumption 3. Λ11 , Λ22 and Λ0 := Λ00 − Λ01 Λ−1 11 Λ10 − Λ02 Λ22 Λ20 are stable.
We propose the following algorithm (27) for solving the LSALE (14). (l+1)T
X21
(l+1)T
Λ22 + α12 ΛT11 X21 (l)
(l)
(l)T
+ ε1 X10 Λ02 + ε2 ΛT01 X20
(l)
(l)
+φ(X11 Λ12 + ΛT21 X22 ) + µ2 AT101 X00 A102 (l)T
(l)
+¯ εδ µ(AT111 X10 A102 + AT101 X20 A122 ) (l)T
+¯ ε2δ ε1 AT111 X21 A122 + φU12 = 0, (l+1)
ΛT11 X11
(l+1)
+ X11
(27a)
(l)T
(l)
Λ11 + ε1 (ΛT01 X10 + X10 Λ01 )
(l)
(l)T
(l)
+φ(ΛT21 X21 + X21 Λ21 ) + µ2 AT101 X00 A101 (l)
(l)T
+¯ εδ µ(AT111 X10 A101 + AT101 X10 A111 ) +
ε¯2δ T (l) A X A111 + U11 = 0, ε1 111 11
(l+1)
ΛT22 X22
(l+1)
+ X22
(27b) (l)T
(l)
Λ22 + ε2 (ΛT02 X20 + X20 Λ02 )
(l)T
(l)
(l)
+φα12 (ΛT12 X21 + X21 Λ12 ) + µ2 AT102 X00 A102 (l)
(l)T
+¯ εδ µ(AT122 X20 A102 + AT102 X20 A122 ) +
ε¯2δ T (l) A X A122 + U22 = 0, ε2 122 22
(l+1)
ΛT0 X00
(l+1)
+ X00
(l)
(27c)
(l+1)
Λ0 + AT100 X00 (l)T
−1 −ΛT20 Λ−T 22 Ξ20 − Ξ20 Λ22 Λ20 (l)
(l)
(l)T
−1 A100 − ΛT10 Λ−T 11 Ξ10 − Ξ10 Λ11 Λ10
(l)T
(l)
(l)T
+¯ εδ (AT110 X10 A100 + AT100 X10 A110 + AT120 X20 A100 + AT100 X20 A120 ) ε¯2δ T ε¯2δ T (l) (l) A110 X11 A110 + A120 X22 A120 ) ε1 ε2 ε¯2δ T (l)T (l) + (A110 X21 A120 + AT120 X21 A110 ) + U00 = 0, ε1
+
(l+1)
Xj0
(l+1)
T = −Λ−T jj (Λ0j X00
(l)
+ Ξj0 ), j = 1, 2,
(27e)
where l = 0, 1, 2, . . . , (l)
(l)
(l)
(l+1)
Ξ10 = φΛT21 X20 + ε1 X10 Λ00 + X11
(27d)
(l+1)T
Λ10 + X21
Λ20
Numerical Computation... (l)
419
(l)T
(l)T
+µAT101 X00 A100 + ε¯δ µ(AT101 X10 A110 + AT101 X20 A120 ) (l)
+¯ εδ AT111 X10 A100 + (l)
(l)
ε¯2δ T (l) (l)T T (A X A110 + AT111 X21 A120 ) + U01 , ε1 111 11
(l)
(l+1)
Ξ20 = φΛT12 X10 + ε2 X20 Λ00 + X22
(l+1)
Λ10 (l)T (l)T T T + ε¯ µ(A102 X10 A110 + A102 X20 A120 ) ε¯2δ T (l) (l) (l) (A X A120 + α12 AT122 X21 A110 ) +¯ εδ AT122 X20 A100 + ε2 122 22 (0) ¯ 10 , X (0) = X ¯ 20 , X (0) = X ¯ 11 , X (0) = X ¯ 22 , X (0) = 0, X10 = X 20 11 22 21 T ¯ T ¯ ¯ Λ0 X00 + X00 Λ0 + A100 X00 A100 −1 −1 T T −T T −ΛT10 Λ−T 11 U01 − U01 Λ11 Λ10 − Λ20 Λ22 U02 − U02 Λ22 Λ20 −1 −1 T −T +ΛT10 Λ−T 11 U11 Λ11 Λ10 + Λ20 Λ22 U22 Λ22 Λ20 + U00 = 0, T ¯ j0 ¯ 00 Λ0j + ΛTj0 X ¯ jj + U0j )Λ−1 , X = −(X jj T ¯ ¯ Λjj Xjj + Xjj Λjj + Ujj = 0, j = 1, 2. (l) +µAT102 X00 A100
Λ20 + α12 X21
δ
T + U02 ,
The following theorem indicates the convergence of the algorithm (27). Theorem 4. Under Assumption 3, the fixed point (27) converges to the exact algorithm l+1 2δ−1 solution Xpq with the rate of convergence of O ε¯ , that is (l) ||Xpq
h il+1 2δ−1 − Xpq || = O ε¯ , l = 0, 1, 2, . . . ,
(28)
pq = 00, 10, 20, 11, 21, 22.
Proof. Since the proof of Theorem 2 can be done by using mathematical induction similarly as in [15], it is omitted.
7.
Computational Examples
In order to demonstrate the efficiency of the proposed algorithm, the computational examples are given.
7.1.
Example 1 Table 1. Error per iterations. n 0 1 2 3
||F(k) (ε1 , ε2 , ε1 , ε2 )|| 1.2582e − 01 1.1408e − 04 5.0074e − 10 7.8340e − 12
420
Hiroaki Mukaidani and Vasile Dragan
First, the Newton’s method (12) is demonstrated to verify the reliability of the proposed algorithm. Consider the system (1) with
n0 = 2, n1 = 1, n2 = 2, n3 = 3, n4 = 4, q µ = h ε21 + ε22 + ε23 + ε24 ,
ε1 = 4.0e − 04, ε2 = 3.0e − 04, ε3 = 2.0e − 04, ε4 = 1.0e − 04, A00 = rand(2, 2), A0f = rand(2, 10),
Af 0 = rand(10, 2), Af = rand(10, 10), Ap00 = hrand(2, 2), Ap0f = hrand(2, 10), Apf 0 = hrand(10, 2), Apf = hrand(10, 10), h = 0.001, B0i = rand(n0 , 1), Bii = rand(ni , 1), Ri = 0.1, i = 1, . . . , 4, Q1 = diag 1 1 1 0 0 0 0 0 0 0 0 0 , Q2 = diag 1 1 0 1 1 0 0 0 0 0 0 0 , Q3 = diag 1 1 0 0 0 1 1 1 0 0 0 0 , Q4 = diag 1 1 0 0 0 0 0 0 1 1 1 1 , where rand(m, n) denotes a random valued matrix drawn from a uniform distribution on the unit interval with m-by-n matrix of the same dimension. In order to verify the exactitude of the solution, the remainder per iteration is computed (k) by substituting Pie into CLSARE (5). Table 1 shows the errors F(ε) per iteration, where
||F(k) (ε1 , ε2 , ε3 , ε4 )|| := (k) (k) Fi (P1e ,
(k) P2e ,
:= Pie Ae − +
M X
4 X j=1
(k) P3e ,
4 X
(k)
(k)
i=1 (k) P4e )
(k)
(k)
(k)
||Fi (P1e , P2e , P3e , P4e )||,
Sje Pje + Ae −
2 X j=1
T
Sje Pje Pie
ATpe Pie Ape + Pie Sie Pie + Qi .
p=1
It should be noted that algorithm (12) converges to the exact solution with an accuracy of ||F(k) (ε1 , ε2 , ε1 , ε2 )|| < 1.0e − 11 after three iterations. Hence, it can be observed from Table 1 that algorithm (12) attains quadratic convergence. It should be noted that the Newton’s algorithm requires quite large dimension for solving the LSALE (12). In this case, ℜ48×48 dimensions are needed for the matrix algebraic computation.
Numerical Computation...
421
Table 2. Error per iterations. n 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
7.2.
||F(n) (ε1 , ε2 )|| 1.1013 2.5151 1.3611 3.3242e − 01 7.4747e − 02 1.7252e − 02 3.8764e − 03 8.9936e − 04 2.0398e − 04 4.7611e − 05 1.0898e − 05 2.5583e − 06 5.9064e − 07 1.3935e − 07 3.2419e − 08 7.6865e − 09 1.7988e − 09 4.2884e − 10 9.9901e − 11 2.7457e − 11 1.1876e − 11 8.9389e − 12
Example 2
In this subsection, we verify the usefulness of the hybrid fixed point algorithms (14) and (27). The system matrices of the LSPSS (1) are given as follows.
A00
=
A01 = A10 =
0 4.5 0 1 0 0 4.5 −1 0 −0.05 0 −0.1 , 0 0 −0.05 0.1 0 32.7 −32.7 0 0 0 0 0 0 0 0 0 0 0 , 0.1 0 , A = 02 0.1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 , , A20 = 0 0 0 −0.4 0 0 0 −0.4 0 0 0 0 0 0 0
422
Hiroaki Mukaidani and Vasile Dragan −0.05 0.05 , A11 = A22 = 0 −0.1 p = 1, A100 = hA00 , A10f = hA0f , A1f 0 = hAf 0 , A1f = hAf , h = 0.01, T 0 , B01 = B02 = 0 0 0 0 0 , B11 = B22 = 0.1 R11 = R22 = 20, Q1 = diag 1 1 1 1 1 1 1 0 0 , Q2 = diag 1 1 1 1 1 0 0 1 1 .
The small parameters are chosen as ε1 = 0.001, ε2 = 0.0001. It should be noted that the algorithm (14) converges to the exact solution with accuracy of ||F(n) (ε1 , ε2 )|| < 1.0e − 11 after 21 iterations, where (n)
||F
(n)
(ε1 , ε2 )|| :=
2 X i=1
(n)
(n)
(n)
(n)
(n)
||Fi (P1e , P2e )||,
Fi (P1e , P2e ) := Pie Ae −
+AT1e Pie A1e
2 X j=1
Sje Pje + Ae −
2 X j=1
T
Sje Pje Pie
+ Pie Sie Pie + Qi , i = 1, 2.
In order to verify the exactitude of the solution, the remainder per iteration by substituting (n) Pie , i = 1, 2 into the CLSARE (5) is computed. In Table 2, the results for the error ||F(n) (ε1 , ε2 )|| per iterations are given. It can be seen that the algorithm (14) with (27) has the linear convergence. From this example point of view, it is worth pointing out that even if number of the player is more than three, the required work space for calculating the strategies is the same as the dimension of the original LSPSS.
8.
Conclusion
In this paper, the new design methodology for solving the stochastic N -players Nash games has been addressed. In particular, two algorithms for solving CLSARE have been proposed. Since the first one is based on the Newton’s method, the quadratic convergence and the local uniqueness are both attained. Since another one is based on the fixed point algorithm, it is possible to compute the solution with each system dimension. Furthermore, the convergence and the positive semidefiniteness of the solutions have been newly proved. In order to obtain these algorithms, the asymptotic structure of the solution for the CLSARE have been established. It would be worth pointing out that the local uniqueness of the original solutions is proved via implicit function theorem for the first time. Finally, the convergence rate of the algorithms were verified in the computational examples.
Numerical Computation...
423
References [1] Starr, A. W. & Ho, Y. C. Nonzero-sum differential games. J. Optim. Theory Appl. 3 (1969), no. 3, 184–206. [2] Krikelis, N. J. & Rekasius, Z. V. On the solution of the optimal linear control problems under conflict of interest. IEEE Trans. Automatic Control 16 (1971), no. 2, 140–147. [3] Li, T-Y. & Gaji´c, Z. Lyapunov iterations for solving coupled algebraic Lyapunov equations of Nash differential games and algebraic Riccati equations of zero-sum games. New Trends in Dynamic Games and Applications, Boston: Birkhauser (1994), 333– 351. [4] Chen B. S. & Zhang, W. Stochastic H2 /H∞ Control with State-Dependent Noise. IEEE Trans. Automatic Control 49 (2004), no. 1, 45–57. [5] Khalil, H. K. & Kokotovi´c, P. V. Control of linear systems with multiparameter singular perturbations. Automatica 15 (1979), no. 2, 197–207. [6] Khalil, H. K. Multimodel design of a Nash strategy, J. Optim. Theory Appl. 31 (1980), no. 4, 553–564. [7] Koskie, S., Skataric, D. & Petrovic, B. Convergence proof for recursive solution of linear-quadratic Nash games for quasi-singularly perturbed systems. Dynamics Continuous, Discrete and Impulsive Systems 9b (2002), no. 2, 317–333. [8] Abou-Kandil, H., Freiling, G., Ionescu, V. & Jank, G. Matrix Riccati Equations in Control and Systems Theory. Basel: Birkhauser (2003). [9] Dragan, V., Morozan, T. & Shi, P. Asymptotic properties of input–output operators norm associated with singularly perturbed systems with multiplicative white noise. SIAM Journal on Control and Optimization 41 (2002), no. 1, 141–163. [10] Dragan, V. The linear quadratic optimization problem for a class of singularly perturbed stochastic systems. International Journal of Innovative Computing, Information and Control, 1 (2005), no. 1, 53–64. [11] Dragan, V. Stabilization of linear stochastic systems modelled singularly perturbed Itˆo differential equations. Proceedings of the 9th International Symposium on Automatic Control on Computer Science Romania (2007). [12] Wonham, W. M. On a matrix Riccati equation of stochastic control. SIAM Journal on Control and Optimization 6 (1968), no. 4, 681–697. [13] Wang, F. -Y. & Saridis, G. N. On successive approximation of optimal control of stochastic dynamic systems. International Series in Operations Research & Management Science, New York: Springer Chapter 16 (2005), 333–358. [14] Mukaidani, H. & Xu, H. Recursive computation of Nash strategy for multiparameter singularly perturbed systems. Dynamics of Continuous, Discrete and Impulsive Systems 11b (2004), no. 6, 673–700.
424
Hiroaki Mukaidani and Vasile Dragan
[15] Mukaidani, H. A new design approach for solving linear quadratic Nash games of multiparameter singularly perturbed systems. IEEE Trans. Circuits & Systems I 52 (2005), no. 5, 960–974. [16] Mukaidani, H. Nash games for multiparameter singularly perturbed systems with uncertain small singular perturbation parameters. IEEE Trans. Circuits & Systems II 52 (2005), no. 9, 586–590. [17] Sagara, M., Mukaidani, H. & Yamamoto, T. Numerical solution of stochastic Nash games with state-dependent noise for weakly coupled large-scale systems. Applied Mathematics and Computation 197 (2008), no. 2, 844–857. [18] Magnus, J. R. & Neudecker H. Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: John Wiley and Sons (1999). [19] Ortega, J. M. Numerical Analysis, A Second Course. Philadelphia: SIAM (1990). [20] Afanas’ev, V. N., Kolmanowskii, V. B. & Nosov, V. R. Mathematical Theory of Control Systems Design. Dordrecht: Kluwer Academic (1996).
In: Handbook of Optimization Theory Editors: J. Varela and S. Acu˜na, pp. 425-490
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 17
S UBPRIME M ORTGAGES AND T HEIR S ECURITIZATION WITH R EGARD TO C APITAL , I NFORMATION , R ISK AND VALUATION M.A. Petersen∗, S. Thomas, M.C. Senosi, J. Mukuddem-Petersen, T. Bosch, M.P. Mulaudzi, I.M. Schoeman and B. De Waal North-West University
Abstract In this book chapter, we investigate the securitization of subprime residential mortgage loans (RMLs) into structured notes such as subprime residential mortgage-backed securities (RMBSs) and collateralized debt obligations (CDOs). In this regard, our deliberations seperately focus on capital, information, risk and valuation under RMBSs and RMBS CDOs. With regard to the former, our contribution discusses credit (including counterparty and default), market (including interest rate, price and liquidity), operational (including house appraisal, valuation and compensation), tranching (including maturity mismatch and synthetic) and systemic (including maturity transformation) risks. The hypothesis of this chapter is that the SMC was mainly caused by the intricacy and design of subprime mortgage origination and securitization as well as systemic agents. This led to information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation. This claim is illustrated via several examples.
JEL CLassification: G10, IM01, IM10 Keywords: Residential Mortgages; Residential Mortgage Securitization; Capital; Information; Risk; Valuation; Subprime Mortgage Crisis
1.
Introduction
The 2007-2009 subprime mortgage crisis (SMC) was preceded by a decade of low interest rates that spurred significant increases in both residential mortgage loan (RML) financing ∗
E-mail address: [email protected]
426
M.A. Petersen et al.
and house prices. This environment encouraged investors (including investment banks) to pursue instruments that offer yield enhancement. Subprime RMLs offer higher yields than standard RMLs and consequently have been in demand for securitization. In essence, securitization offers the opportunity to transform below investment grade assets (the investment or reference portfolio) into AAA and investment grade liabilities. The demand for increasingly intricate structured products such as residential mortgage backed securities (RMBSs) and collateralized debt obligations (CDOs) which embed leverage within their structure exposed investing banks (IBs) to an elevated risk of default. In the light of relatively low interest rates, rising house prices and investment grade credit ratings (usually AAA) given by the credit rating agencies (CRAs), this risk was not considered to be excessive. A surety wrap – insurance purchased from a monoline insurer (MLI) – may also be used to ensure such credit ratings. The process of subprime mortgage securitization is explained below. The first step is where MRs – many first-time buyers – or individuals wanting to refinance seeked to exploit the seeming advantages offered by subprime RMLs. Next, mortgage brokers (MBs) entered the lucrative subprime market with MRs being charged high fees. Thirdly, originators (ORs) offering subprime RMLs solicited funding that was often provided by Wall Street money. After extending RMLs, these ORs quickly sold them to dealer (investment) banks and associated special purpose vehicles (SPVs) for more profits. In this way, ORs outsourced credit risk while relying on income from securitizations to fund new mortgages. The fourth step involved Wall Street dealer banks (DBs) pooling risky subprime RMLs that did not meet the standards of the government sponsored enterprises (GSEs) such as Fannie Mae and Freddie Mac and sold them as ”private label,” non-agency securities. This is important because the structure of securitization will have special features reflecting the design of the subprime mortgages themselves. Fifthly, CRAs such as Standard and Poors assisted DBs in structuring RMBSs. In this way DBs received the best possible bond ratings, earned exorbitant fees and made RMBSs attractive to investors including money market, mutual and pension funds. However, during the SMC, defaults on subprime reference mortgage portfolios increased and the appetite for securities backed by mortgages decreased. The market for these securities came to a standstill. ORs no longer had access to funds raised from pooled mortgages. The wholesale lending market shrunk. Intraday and interday markets became volatile. In the sixth step, the RMBSs were sold to investors worldwide thus distributing the risk. The main hypothesis of this paper is that the SMC was partly caused by the intricacy and design of subprime mortgage origination, securitization and systemic agents that led to information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation. More specifically, information was lost due to intricacy resulting from an inability to look through the chain of mortgages and structured products – reference RML portfolios and RMBSs, ABS CDOs, structured investment vehicles (SIVs) etc. This situation was exacerbated by a lack of understanding of how subprime securities and structures are designed and intertwined. 
It is our opinion that the interlinked or nested unique security designs that were necessary to make the subprime market function resulted in information loss among IBs as the chain of structured products stretched longer and longer. Also, asymmetric information arose because IBs could not penetrate the portfolio far enough to make a determination of the exposure to the financial sector. An additional problem involved infor-
Subprime Mortgages and Their Securitization...
427
mation contagion that played a crucial role in shaping defensive retrenchment in interbank as well as mortgage markets during the SMC. The valuation opaqueness problem related to structured products results from the dependence of valuation on house prices and its independence from the performance of the reference RML portfolios. Finally, we claim that SMC partly resulted from mortgage agents’ appetite for rapid growth and search for high yields – both of which were very often pursued at the expense of risk mitigation practices. The subprime structure described above is unique to the SMC and will be elaborated upon in the sequel.
1.1.
Literature Review
The hypothesis outlined above and subsequently in Sections 2., 3. and 5. is supported by various strands of existing literature. The paper [7] examines the different factors that have contributed to the SMC (see, also, [3] and [11]). These papers have discussions about yield enhancement, investment management, agency problems, lax underwriting standards, credit rating agency (CRA) incentive problems, ineffective risk mitigation, market opaqueness, extant valuation model limitations and structured product intricacy (see Sections 2. and 3. for more details) in common with our contribution (see, also, [21]). Furthermore, this article discusses the aforementioned issues and offers recommendations to help avoid future crises (compare with [10] and [23]). In [4], light is shed on subprime MRs, RML design and their historical performance. Their discussions involve predatory borrowing and lending and are cast within the context of real-life examples. The working paper [8] firstly quantifies how different determinants contributed to high delinquency and foreclosure rates for vintage 2006 mortgages. More specifically, they analyze loan quality as the performance of loans adjusted for differences in MR characteristics (such as credit score, level of indebtedness, ability to provide documentation), loan characteristics (such as product type, amortization term, loan amount, interest rate) and subsequent house appreciation (see, also, [11] and [21]). Their analysis suggests that different loan-level characteristics as well as low house price appreciation was quantitatively too small to explain the bad performance of 2006 loans. Secondly, they observed a deterioration in lending standards with a commensurate downward trend in loan quality and a decrease in the subprime-prime RML rate spread during the 2001–2006 period. Thirdly, Demyanyk and Van Hemert show that loan quality deterioration could have been detected before the SMC1 (see, also, [10] and [23]). The literature about RML securitization and the SMC is growing and includes the following contributions. Our contribution has close connections with [4] where the key structural features of a typical subprime securitization are presented. Also, the paper demonstrates how CRAs assign credit ratings to asset-backed securities (ABSs) and how these agencies monitor the performance of reference RML portfolios (see Subsections 2.2., 2.3. and 2.4.). Furthermore, this paper discusses RMBS and CDO architecture and is related to [17] that illustrates how misapplied bond ratings caused RMBSs and ABS CDO market disruptions (see Subsections 3.2., 3.3. and 3.4.). In [8], it is shown that the subprime (secu1
We consider ”before the SMC” to be the period prior to July-August 2007 and ”during the SMC” to be the period thereafter.
428
M.A. Petersen et al.
ritized) mortgage market deteriorated considerably subsequent to 2007 (see, also, [21]). We believe that RML standards became slack because securitization gave rise to moral hazard, since each link in the securitization chain made a profit while transferring associated credit risk to the next link (see, for instance, [20]). At the same time, some financial institutions retained significant amounts of the RMBSs they originated, thereby retaining credit risk and so were less guilty of moral hazard (see, for instance, [9]). The increased distance between ORs and the ultimate bearers of risk potentially reduced ORs’ incentives to screen and monitor MRs (see [19]). The increased intricacy of markets related to mortgages and their securitization also reduces IB’s ability to value them correctly where the value depends on the correlation structure of default events (see, for instance, [9] and [11]). [13] considers parameter uncertainty and the credit risk of ABS CDOs (see, also, [10], [21] and [23]). The working paper [7] asserts that since the end of 2007 monolines have been struggling to keep their triple-A rating. Only the two major ones, MBIA and Ambac, and a few others less exposed to subprime RMLs such as Financial Security Assurance (FSA) and Assured Guaranty, have been able to inject enough new capital to keep their AAA credit rating. In [22] it is claimed that ABS CDOs opened up a whole new category of work for MLIs who insured the senior tranches as part of the credit enhancement process (see, also, [21]). Before the SMC, risk management and control put excessive confidence in credit ratings provided by CRAs and failed to provide their own analysis of credit risks in the underlying securities (see, for instance, [14]). The paper [6] investigates the anatomy of the SMC that involves mortgages and their securitization with operational risk as the main issue. At almost every stage in the subprime process – from mortgage origination to securitization – operational risk was insiduously present but not always acknowledged or understood. For instance, when ORs sold mortgages, they were outsourcing their credit risk to IBs, but what they were left with turned out to be something much larger – significant operational and reputational risk (see, for instance, [6]). The quantity of loans was more important than the quality of loans issued. More and more subprime loans were extended that contained resets. The underwriting of new subprime RMLs embeds credit and operational risk. House prices started to decline and default rates increased dramatically. On the other hand, credit risk was outsourced via securitization of mortgage loans which funded new subprime loans. Securitization of subprime RMLs involves operational, tranching and liquidity risk. During the SMC, the value of these securities decreased as default rates increased dramatically. The RMBS market froze and returns from these securities were cut off with mortgages no longer being funded. Financial markets became unstable with a commensurate increase in market risk which led to a collapse of the whole financial system (compare with Subsections 2.2. and 3.2.). The paper [14] discusses several aspects of systemic risk. Firstly, there was excessive maturity transformation through conduits and SIVs. This ended in August 2007 with the overhang of SIV ABSs subsequently putting additional downward pressure on securities prices. 
Secondly, as the financial system adjusted to mortgage delinquencies and defaults and to the breakdown of the aforementioned maturity transformation, the interplay of market malfunctioning or even breakdown, fair value accounting and the insufficiency of equity capital at financial institutions, and, finally, systemic effects of prudential regulation created a detrimental downward spiral in the overall banking system. The paper argues that these developments have not only been caused by identifiably faulty decisions, but also by flaws in financial system architecture. We agree with this paper that regulatory reform
Subprime Mortgages and Their Securitization...
429
must go beyond considerations of individual incentives and supervision and pay attention to issues of systemic interdependence and transparency. The paper [14] also discusses credit (including counterparty), market (including interest rate) and tranching (including maturity mismatch) risks. Furthermore, [18] and [21] provides further information about subprime risks such as credit (including counterparty), market (including interest rate, basis, prepayment, liquidity and price), tranching (including maturity mismatch and synthetic), operational and systemic risks. Our hypothesis involves the intricacy and design of subprime mortgage origination, securitization and systemic agents as well as information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation. In this regard, [16] investigates the effects of agency and information asymmetry issues embedded in structural form credit models on bank credit risk evaluation, using American bank data from 2001 to 2005. Findings show that both the agency problem and information asymmetry significantly cause deviations in the credit risk evaluation of structural form models from agency credit ratings (see, also, [11] and [14]). Additionally, both the effects of information asymmetry and debtequity agency positively relate to the deviation while that of management-equity agency relates to it negatively. The paper [15] is specifically focused on the issue of counterparty risk and claim that the effects on counterparties in the SMC are remarkably small.
1.2.
Preliminaries about Subprime Mortgage Securitization
In this subsection, we provide preliminaries about subprime mortgages and risks as well as a subprime mortgage model that describes the main subprime agents. All events take place in either period t or t + 1. 1.2.1. Preliminaries about Subprime Mortgages Subprime mortgage origination involves the following issues. Subprime RMLs are financial innovations that aim to provide house ownership to riskier MRs. A design feature of these RMLs is that over short periods, MRs are able to finance and refinance their houses based on gains from house price appreciation (see [21] for more details). House appraisals were often inflated with ORs having too much influence on appraisal companies. No-incomeverification mortgages led to increased cases of fraud. Subprime mortgages contain resets. MBs compensate on volume rather than mortgage quality. This increased volume led to a poor credit culture. House values started to decline. MRs were unable to meet mortgage terms when they reset resulting in increased defaults. An traditional mortgage model for profit with RMLs at face value is built by considering the difference between cash inflow and outflow in [21]. For this profit, in period t, cash inflow is constituted by returns on risky marketable securities, rtB Bt , RMLs, rtM Mt and Treasuries, rtT Tt . Furthermore, we denote the recovery amount, mortgage insurance (MI) payments per loss and present value of future profits from additional RMLs based on current RMLs by Rt , C(S(Ct )), and Πpt , respectively. Also, we consider the cost of funds for M, cM ω Mt , face value of RMLs in default, rS Mt , recovery value of RMLs in default, rR Mt , MI premium, pi (Ct )Mt , the all-in cost of holding risky marketable securities, D D cB t Bt , interest paid to depositors, rt Dt , cost of taking deposits, c Dt , interest paid to IBs,
430
M.A. Petersen et al.
rtB Bt , the cost of borrowing, cB Bt , provisions against deposit withdrawals, P T (Tt ), and the value of RML losses, S(Ct ), to be cash outflow. Here rD and cD are the deposit rate and marginal cost of deposits, respectively, while rB and cB are the borrower rate and marginal cost of borrowing, respectively. In this case, we have that a traditional model for profit with defaulting, refinancing and fully amortizing RMLs at face value may be expressed as
Πt
p f M Mω i R S = rt − ct − pt + ct rt − (1 − rt )rt Mt + C(E[S(Ct )])
(1.1)
B B T T D D B B + rt − ct Bt + rt Tt − P (Tt ) − rt + ct Dt − rt + c Bt + Πpt .
Also, OR’s total capital constraint for subprime RMLs at face value is given by B M Kt = nt Et−1 + Ot ≥ ρ ω(Ct )Mt + ω Bt + 12.5f (mV aR + O) ,
(1.2)
where ω(Ct ) and ω B are the risk-weights related to M and B, respectively, while ρ – Basel II pegs ρ at approximately 0.08 – is the Basel capital regulation ratio of regulatory capital to risk-weighted assets. Furthermore, for the function
Jt
B M dw = Πt + lt Kt − ρ ω(Ct )Mt + ω Bt + 12.5f (mV aR + O) − ct Kt+1 (1.3) +Et δt,1 V Kt+1 , xt+1 ,
the optimal OR valuation problem is to maximize the value of OR by choosing rM , D, T, and K, for V (Kt , xt ) =
max
rtM , Dt , Tt
Jt ,
(1.4)
subject to RML, balance sheet, cash flow and financing constraints given by Mt = m0 − m1 rtM + m2 Ct + σtM ,
(1.5)
Furthermore, (1.13), (1.1) and Kt+1 = nt (dt + Et ) + (1 + rtO )Ot − Πt + ∆Ft ,
(1.6)
respectively. In the value function, lt is the Lagrange multiplier for the capital constraint, cdw t is the deadweight cost of capital and δt,1 is a stochastic discount factor. In the profit function, cΛω is the constant marginal cost of loans (including the cost of monitoring and
Subprime Mortgages and Their Securitization...
431
screening). In each period, banks invest in fixed assets (including buildings and equipment) which we denote by Ft . OR is assumed to maintain these assets throughout its existence so that it must only cover the costs related to the depreciation of fixed assets, ∆Ft . These activities are financed through retaining earnings and the eliciting of additional debt and equity, Et , so that
∆Ft = Etr + (nt+1 − nt )Et + Ot+1 .
(1.7)
Suppose that J and V are given by (1.3) and (1.4), respectively. When the capital constraint given by (1.2) holds (i.e., lt > 0), a solution to OR’s optimal valuation problem yields an optimal M and rM of the form
Mt∗ =
Kt ω B Bt + 12.5f M (mV aR + O) − ρω(Ct ) ω(Ct )
(1.8)
and
rtM ∗
1 M ∗ = m0 + m2 Ct + σt − Mt , m1
(1.9)
respectively. In this case, OR’s corresponding optimal deposits, provisions for deposit withdrawals and profits are given by
Dt∗
=
D 1 1 B B (rtD + cD D + p rtT + (rtB − cB t ) t ) + (rt + ct ) − 1−γ rt 1−γ +
T∗t
and
Kt ω B Bt + 12.5f M (mV aR + O) − + Bt − Kt − Bt , ρω(Ct ) ω(Ct )
1 D T B B B B D D = D + p rt + (rt − ct ) + (rt + ct ) − (r + ct ) rt 1−γ t
(1.10)
(1.11)
432
Π∗t
M.A. Petersen et al.
=
Kt ω B Bt + 12.5f M (mV aR + O) − ρω(Ct ) ω(Ct )
1 ω B Bt + 12.5f M (mV aR + O) Kt + + m2 Ct + σtM m0 − m1 ρω(Ct ) ω(Ct )
ω − cpt rtf + pit + (1 − rtR )rtS + (rtD + cD − cM t t ) −
(rtD
+
cD t )
1 (1 − γ)
Bt − Kt − Bt
1 (1 − γ)
(1.12)
D 1 1 B B + D + p rtT + (rtB − cB (rtD + cD rtT − (rtD + cD t ) t ) t ) + (rt + ct ) − rt 1−γ (1 − γ) +
rtB
−
cB t
Bt −
rtB
+
cBt
Bt + C(E[S(Ct )]) − P T (Tt ) + Πpt ,
respectively. From [21], OR’s balance sheet with RMLs at face value may be represented as Mt + Bt + Tt = (1 − γ)Dt + Bt + Kt .
(1.13)
1.2.2. Preliminaries about Subprime Mortgage Models Next, we introduce a subprime mortgage model with default to explain aspects of the SMC. Figure 1 presents a subprime mortgage model involving ten subprime agents, four banks and three types of markets. As far as subprime agents are concerned, we note that circles 1a, 1b and 1c represent flawed independent checks by house appraisers (HAs), mortgage brokers (MBs) and credit rating agencies (CRAs), respectively. Regarding the former agent, the process of subprime mortgage origination is flawed with house appraisers not performing their duties with integrity and independence. According to [6] this type of fraud is the ”linchpin of the house buying transaction” and is an example of operational risk. Also, the symbol X indicates that the cash flow stops as a consequence of defaults. Before the SMC, HAs estimated house values based on data that showed that the house market would continue to grow (compare with 1A and 1B). In steps 1C and 1D, independent MBs arrange RML deals and perform checks of their own while OR originates RMLs to MRs in 1E. Subprime MRs generally pay high RML interest rates to compensate for their increased risk from poor credit histories (compare with 1F). Next, the servicer (SR) collects monthly payments from MRs and remits payments to DBs and SPVs. In this regard, 1G is the RML interest rate paid by MRs to the SR of the RMLs while the interest rate 1H (RML interest rate minus the servicing fee), is passed by the SR to the DB/SPV for the payout to IBs. OR mortgage insurers (OMIs) compensate ORs for losses due to mortgage defaults. Several subprime agents interact with the DB and SPV. For instance, the trustee holds or manages and invests RMLs and RMBSs for the benefit of another. Also, the underwriter is a banking
agent who assists the SPV in underwriting new RMBSs. CRAs rate the RMBSs and credit enhancement (CE) providers devise methods of reducing the risk of securitizing RMLs. Monoline insurers (MLIs) guarantee the repayments of bonds such as, for instance, the senior tranches of RMBSs and ABS CDOs. In monoline insurance², default risk is transferred from the bondholders – in our case IBs – to the MLIs. IBs are then left only with the residual risk that the MLI will default. Given the low perceived risk of these products, MLIs generally have very high leverage, with outstanding guarantees often amounting to 150 times capital. MLIs carry enough capital to earn AAA ratings and, as a result, often do not have to post collateral. As a consequence, MLIs make bond issues easy to market, since the credit risk is essentially that of the highly rated MLI, which simplifies analysis for most IBs. In this case, the analysis of the MLI is closely connected with the analysis of the default risk of all the bonds it insures.

[Figure 1. Diagrammatic Overview of a Subprime Mortgage Model With Default. The diagram arranges the subprime agents (HA, MB, MR, OMI, trustee, underwriter, CRA, CE provider and MLI), the banks (IL, OR, SR, DBs & SPVs and IBs) and the markets (RML market, RMBS & ABS CDO bond market and money/hedge fund market) in three columns, connected by the cash flows 1a–1c and 1A–1X described in the text; the symbol X marks flows that stop when defaults occur.]

An OR has access to subprime RML investments that may be financed by borrowing from an IL, represented by 1I. The IL, acting in the interest of risk-neutral shareholders, either invests its deposits in Treasuries or in OR's subprime RML projects. In return, OR pays interest on these investments to IL, represented by 1J. Next, the OR deals with the RML market, represented by 1O and 1P, respectively. Also, the OR pools its RMLs and sells them to dealer banks (DBs) and SPVs (see 1K). The DB or SPV pays the OR an amount which is slightly greater than the value of the pool of RMLs, as in 1L. A DB or SPV is an organization formed for a limited purpose that holds the legal rights over RMLs transferred by ORs during securitization. In addition, the DB or SPV divides this pool into sen, mezz and jun tranches which are exposed to different levels of credit risk. Moreover, the DB or SPV sells these tranches as securities backed by subprime RMLs to investing banks (IBs) (see 1N); the securities are paid out at an interest rate determined by the RML default rate, prepayment and foreclosure (see 1M). Also, DBs and SPVs deal with the RMBS and asset-backed security (ABS) collateralized debt obligation (CDO) bond market for their own investment purposes (compare with 1Q and 1R). Furthermore, ORs have securitized RMLs on their balance sheets that have connections with the RMBS and ABS CDO bond market. IBs invest in this bond market, represented by 1S, and receive returns on securities in 1T. The money market and hedge fund market are secondary markets where previously issued marketable securities such as RMBSs and ABS CDOs are bought and sold (compare with 1W and 1X). IBs invest in these short-term securities (see 1U) to receive profit, represented by 1V.

During the SMC, the model represented in Figure 1 was placed under major duress as house prices began to plummet. As a consequence, there was a cessation in subprime agent activities and the cash flows to the markets began to dry up, causing the whole subprime mortgage model to collapse.

We note that the traditional mortgage model is embedded in Figure 1 and consists of the agents MR, IL and OR, as well as the RML market. In this model, IL lends funds to OR to fund mortgage originations (see 1I and 1J). Home valuation as well as income and credit checks were done by the OR before issuing the mortgage. The OR then extends mortgages to MRs and receives mortgage payments in return, which are represented by 1E and 1F, respectively. The OR also deals with the RML market in 1O and 1P. When an MR defaults on repayments, the OR repossesses the house.

² The effect of MLI insurance – having the character of a guarantee – is that the risk premium on the bond shrinks, thus reducing the return IBs receive. However, the DB has to pay a price for this, as the MLI must be paid. In a perfectly efficient market, MLIs would be superfluous, as the cost of insuring bonds would equal the savings from the lower risk premium. There are many reasons why such guarantees are viable in the real world. Differences in access to information and in demand for credit risk are but two of them.
1.2.3. Preliminaries about Subprime Risks

The main risks that arise when dealing with subprime residential mortgage products (RMPs) are credit (including counterparty and default), market (including interest rate, price and liquidity), operational (including house appraisal, valuation and compensation), tranching (including maturity mismatch and synthetic) and systemic (including maturity transformation) risks. For the sake of argument, risks falling in the categories described above are cumulatively known as subprime risks. In Figure 2 below, we provide a diagrammatic overview of the aforementioned subprime risks.

[Figure 2. Diagrammatic Overview of Subprime Risks. The figure decomposes subprime risk into credit risk (counterparty, default), market risk (interest rate, with basis and prepayment subcategories; price, with investment and re-investment subcategories; liquidity, with funding and credit crunch subcategories), operational risk (house appraisal, valuation, compensation), tranching risk (maturity mismatch, synthetic) and systemic risk (maturity transformation).]

The most fundamental of the above risks are credit and market risk (refer to Subsections 6.1. and 6.2.). The former involves OR's risk of loss from an MR who does not make scheduled payments, and its securitization equivalent. This risk category generally includes counterparty risk which, in our case, is the risk that a banking agent does not pay out on a bond, credit derivative or credit insurance contract (see, for instance, Subsection 6.1.). It refers to the ability of banking agents – such as ORs, MRs, servicers, IBs, SPVs, trustees, underwriters and depositors – to fulfill their obligations towards each other (see Section 6. for more details). During the SMC, even banking agents who thought they had hedged their bets by buying insurance – via credit default swap contracts or MLI insurance – still faced the risk that the insurer would be unable to pay. In our case, market risk is the risk that the value of the mortgage portfolio will decrease, mainly due to changes in the value of securities prices and interest rates. Interest rate risk arises from the possibility that subprime RMP interest rates will change. Subcategories of interest rate risk are basis and prepayment risk. The former is the risk associated with yields on RMPs and costs on deposits which are based on different bases with different
rates and assumptions (discussed in Subsection 6.2.2.). Prepayment risk results from the ability of subprime MRs to voluntarily (refinancing) and involuntarily (default) prepay their RMLs under a given interest rate regime. Liquidity risk arises from situations in which a banking agent interested in selling (buying) RMPs cannot do so because nobody in the market wants to buy (sell) those RMPs (see, for instance, Subsections 6.1.2., 6.1.3. and 6.2.2.). Such risk includes funding and credit crunch risk. Funding risk refers to the lack of funds or deposits to finance RMLs, while credit crunch risk refers to the risk of tightened loan supply and increased credit standards.

We consider price risk to be the risk that RMPs will depreciate in value, resulting in financial losses, markdowns and possibly margin calls; this is discussed in Subsections 6.1.3. and 6.2.3.. Subcategories of price risk are valuation risk (resulting from the valuation of long-term RMP investments) and re-investment risk (resulting from the valuation of short-term RMP investments). Valuation issues are a key concern that must be dealt with if the capital markets are to be kept stable, and they involve a great deal of operational risk.

In the early '80s, in many European countries and the United States, house financing changed from fixed-rate mortgages (FRMs) to adjustable-rate mortgages (ARMs), with the interest rate risk shifting to MRs. However, when market interest rates rose again in the late '80s, ORs found that many MRs were unable or unwilling to fulfil their obligations at the newly adjusted rates. Essentially, this meant that the interest rate (market) risk that ORs thought they had eradicated had merely been transformed into counterparty credit risk. Presently, it seems that the lesson of the '80s – that ARMs cause credit risk to be higher – has been lost: perhaps forgotten, perhaps neglected because, after all, the credit risk would affect the RMBS bondholders rather than the ORs (see, for instance, [14]). The system of house financing based on RMBSs has some eminently reasonable features. Firstly, this system permits ORs to divest themselves of the interest rate risk that is associated with such financing. The experience of the US Savings & Loans debacle has shown that banks cannot cope with this risk. The experience with ARMs has also shown that debtors are not able to bear this risk, and that the attempt to burden them with it may merely transform the interest rate risk into counterparty credit risk. Securitization shifts this risk to a third party.

Operational risk is the risk of incurring losses resulting from insufficient or inadequate procedures, processes, systems or improper actions taken (see, also, Subsection 6.1.). As we have commented before, for subprime mortgage origination, operational risk involves documentation, background checks and the integrity of the loan process. Also, mortgage securitization embeds operational risk via mis-selling, valuation and IB issues. Operational risk related to mortgage origination and securitization results directly from the design and intricacy of loans and structured products. Moreover, IBs carry operational risk associated with mark-to-market issues, the worth of securitized mortgages when sold in volatile markets and the uncertainty involved in investment payoffs. Also, market reactions include increased volatility, leading to behavior that can increase operational risk, such as unauthorized trades, dodgy valuations and processing issues.
Often additional operational risk issues such as model validation, data accuracy and stress testing lie beneath large market risk events (see, for instance, [6]). Tranching risk is the risk that arises from the intricacy associated with the slicing of securitized RMLs into tranches in securitization deals (refer to Subsections 6.1.2. and 6.1.3.). Prepayment, interest rate, price and tranching risk are also discussed in Subsection 6.4.
where the intricacy of subprime RMPs is considered. Another tranching risk that is an issue for RMPs is maturity mismatch risk, which results from the discrepancy between the economic lifetimes of RMPs and the investment horizons of IBs. Synthetic risk can be traded via credit derivatives – like credit default swaps (CDSs) – referencing individual subprime RMBS bonds, synthetic CDOs or via an index linked to a basket of such bonds. Synthetic risk is discussed in Subsection 6.2.2..

In banking, systemic risk is the risk that problems at one bank will endanger the rest of the banking system (compare with Subsection 6.1.). In other words, it refers to the risk imposed by interlinkages and interdependencies in the system, where the failure of a single entity or cluster of entities can cause a cascading effect which could potentially bankrupt the banking system or market. In Table 1 below, we identify the links in the chain of subprime risks with comments about the information created and the agents involved.
Table 1. Chain of Subprime Risk; Compare with [11]

Step in Chain | Information Generated | Agents Involved
Mortgages | Underwriting Standards; RML Risk Characteristics; Credit Risk Involved | ORs & MBs
Mortgage Securitization | Reference RML Portfolio Selected; RMBS Structured; Maturity Mismatch Risk Involved | Dealer Banks; Servicers; CRAs; IBs Buying Deal
Securitization of ABSs, RMBSs, CMBSs into ABS CDOs | ABS Portfolio Selected; Manager Selected; CDO Structured | Dealer Banks; CDO Managers; CRAs; IBs Buying Deal
CDO Risk Transfer via CDSs in Negative Basis Trade | CDOs & Tranche Selected; Credit Risk in the Form of Counterparty Risk Introduced | Dealer Banks; Banks with Balance Sheets; CDOs
CDO Tranches Sale to SIVs & Other Vehicles | CDOs & Tranche Selected for SIV Portfolio | SIV Manager; SIV Investors Buy SIV Liabilities
Investment in SIV Liabilities by Money Market Funds | Choice of SIV & Seniority | Only Agents Directly Involved: Buyer & Seller
CDO Tranches Sale to Money Market Funds via Liquidity Puts | CDOs & Tranche Selected | Dealer Banks; Money Market Funds; Put Writers
Final Destination of Cash RMBS Tranches, Cash CDO Tranches & Synthetic Risk | Location of Risk | Only Agents Directly Involved: Buyer & Seller
1.3. Main Problems and Outline of the Paper
In this subsection, we identify the main problems addressed in the paper and give an outline of it.

1.3.1. Main Problems

The main problems that are solved in this paper may be formulated as follows.

Problem 1.1. (Modeling of Capital, Information, Risk and Valuation under Mortgage Securitization): Can we construct a discrete-time subprime mortgage model for capital, information, risk and valuation that incorporates losses, MLI insurance, costs of funds and profits under mortgage securitization? (see Subsection 2.2. of Section 3.).

Problem 1.2. (Intricacy and Design Leading to Information Problems, Valuation Opaqueness and Ineffective Risk Mitigation): Was the SMC partly caused by the intricacy and design of subprime mortgage securitization that led to information (loss and asymmetry) problems, valuation opaqueness and ineffective risk mitigation? (see Theorem 2.3 of Subsection 2.4.).

Problem 1.3. (Optimal Valuation Problem under Subprime Mortgage Securitization): In order to obtain an optimal valuation under subprime mortgage securitization, which decisions regarding RML rates, deposits and Treasuries must be made? (see Theorems 2.3 and 3.3 of Subsections 2.4. and 3.4., respectively).

1.3.2. Outline of the Paper

Section 2. contains a discussion of an optimal profit problem under RMBSs. To make this possible, capital, information, risk and valuation for a subprime mortgage model under RMBSs is analyzed. In this regard, we consider a mechanism for subprime RML securitization and subprime RMBS bonds, as well as a motivating example for such securitization. More specifically, SPVs, RMBS bond structure, subordination and excess spread, as well as other forms of credit enhancement such as shifting interest, performance triggers and interest rate swaps, are also considered. Some additional subprime RMBS parameters, such as the cost of funds for subprime RMBSs, financing, adverse selection, MLI contracts for subprime RMBSs as well as residuals, are discussed. Section 3. investigates an optimal profit problem under RMBS CDOs. Section 5. provides examples involving subprime mortgage securitization, while Section 6. discusses some of the key issues in our paper. Finally, an appendix is provided in Section 8..
2. Risk, Profit and Valuation under RMBSs

In this section, we provide more details about securitized RMLs. In the sequel, we assume that the notation $\Pi$, $r^M$, $M$, $c^{M\omega}$, $p^i$, $c^p$, $r^f$, $r^R$, $r^S$, $S$, $C$, $C(E[S(C)])$, $r^B$, $c^B$, $B$, $r^T$, $T$, $P^T(T)$, $r^D$, $c^D$, $D$, $r^{\mathbf{B}}$, $c^{\mathbf{B}}$, $\mathbf{B}$, $\Pi^p$, $K$, $n$, $E$, $O$, $\omega(C)$, $\omega^B$, $f^M$, $mVaR$ and $O$
corresponds to that of Subsection 1.2.2.. Furthermore, the notation $r^{S\Sigma}$ represents the loss rate on securitized RMLs, $f^\Sigma$ is the fraction of $M$ that is securitized and $\hat{f}^\Sigma$ denotes the fraction of $M$ realized as new securitized RMLs, where $\hat{f}^\Sigma \in f^\Sigma$. The following assumption about the relationship between IB's and OR's profit is important.

Assumption 2.1. (Relationship between OR and IB): We suppose that IB and OR share the same balance sheet given by (1.13). Furthermore, we assume that IB is the only recipient of OR's securitized mortgages and that IB's profit can be expressed as a function of the components of this securitization. This assumption enables us to subsequently derive an expression for IB's profit under mortgage securitization as in (2.14) from OR's profit formula given by (1.1).
2.1. Subprime RMBSs

The process of RML securitization takes a portfolio of illiquid RMLs – called the reference RML portfolio – with high yields and places them into a special purpose vehicle (SPV). To finance the purchase of this portfolio, the SPV plans to issue highly rated bonds paying lower yields. The trust issues bonds that are partitioned into tranches, with covenants structured to generate a desired credit rating in order to meet investor demand for highly rated assets. The usual trust structure results in a majority of the bond tranches being rated investment grade. This is facilitated by running the reference RML portfolio's cash flows through a waterfall payment structure. The cash flows are allocated to the bond tranches from the top down: the senior bonds get paid first, then the junior bonds, and then the equity. To ensure that a majority of the bonds get rated AAA, the waterfall specifies that the senior bonds get accelerated payments (and the junior bonds get none) if the reference RML portfolio appears to be under stress³. As was mentioned before, these credit ratings can also be assured via the use of an MLI surety wrap. In addition, the super senior tranches are often unfunded, making them more attractive to IBs. The costs related to RML securitization emanate from managerial time, legal fees and rating agency fees. The RMBS equity holders would only perform securitization if the process generated a positive nett present value. This could occur if the other tranches were mispriced, for example, if AAA rated tranches added a new RMBS that attracted new sources of funds. However, asset securitization started in the mid 1980s, so it is difficult to attribute the demand that we have witnessed over the last few years for AAA rated tranches to new sources of funds. After this length of time, investors should have learnt to price tranches in a way that reflects the inherent risks. If RMBS bond mispricing occurred, why? The AAA rated liabilities could be over-priced due to either mispriced liquidity or inaccurate ratings of the SPV's bonds.
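The top-down allocation described above can be illustrated with a short Python sketch. It is a stylized sequential waterfall only: the tranche names, sizes and collateral cash figures are our assumptions for illustration, and real deals add triggers, accelerated senior payments and other covenants.

```python
# Stylized top-down waterfall: collateral cash pays tranches in order of
# seniority; junior claims absorb shortfalls first. Sizes are assumptions.

def waterfall(cash, tranches):
    """tranches: list of (name, amount_due), ordered senior -> equity."""
    payments = {}
    for name, due in tranches:
        paid = min(cash, due)   # senior claims are served first
        payments[name] = paid
        cash -= paid            # only the remainder flows down
    return payments

deal = [("senior", 81.0), ("mezzanine", 18.0), ("equity", 1.0)]
print(waterfall(100.0, deal))  # full collateral: all tranches paid
print(waterfall(85.0, deal))   # stressed collateral: losses hit bottom-up
```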
2.2. Risk and Profit under RMBSs

In this subsection, we discuss a subprime mortgage model for capital, information, risk and valuation and its relation to retained earnings.

³ Stress is usually measured by collateral/liability and cashflow/bond-payment ratios remaining above certain trigger levels.
2.2.1. A Subprime Mortgage Model for Risk and Profit under RMBSs

In this paper, a subprime mortgage model for capital, information, risk and valuation under RMBSs can be constructed by considering the difference between cash inflow and outflow. In period $t$, cash inflow is constituted by returns on the residual from RML securitization, $r^r_t \hat f^\Sigma_t f^\Sigma_t M_t$; securitized subprime RMLs, $r^M_t(1-\hat f^\Sigma_t) f^\Sigma_t M_t$; unsecuritized RMLs, $r^M_t(1-f^\Sigma_t)M_t$; unsecuritized RMLs that are prepaid, $c^p_t r^f_t (1-f^\Sigma_t)M_t$; $r^T_t T_t$; $(r^B_t - c^B_t)B_t$; as well as $C(E[S(C)])$ and the present value of future gains from subsequent mortgage origination and securitizations, $\Pi^{\Sigma p}_t$. On the other hand, in period $t$, we consider the average weighted cost of funds to securitize RMLs, $c^{M\Sigma\omega}_t \hat f^\Sigma_t f^\Sigma_t M_t$; losses from securitized RMLs, $r^{S\Sigma}_t \hat f^\Sigma_t f^\Sigma_t M_t$; the cost of MLI insurance for securitized RMLs, $c^{i\Sigma}_t \hat f^\Sigma_t f^\Sigma_t M_t$; the transaction cost to extend RMLs, $c^t_t (1-\hat f^\Sigma_t) f^\Sigma_t M_t$; and transaction costs from securitized RMLs, $c^{t\Sigma}_t (1-\hat f^\Sigma_t) f^\Sigma_t M_t$, as part of cash outflow. Additional components of outflow are the weighted average cost of funds for extending RMLs, $c^{M\omega}_t (1-f^\Sigma_t)M_t$; the MI premium for unsecuritized RMLs, $p^i(C_t)(1-f^\Sigma_t)M_t$; nett losses for unsecuritized RMLs, $(1-r^R_t) r^S_t (1-f^\Sigma_t)M_t$; the decreasing value of adverse selection, $a f^\Sigma_t M_t$; losses from suboptimal SPVs, $E_t$; and the cost of funding SPVs, $F_t$. From the above and (1.1), we have that a subprime mortgage model for profit under subprime RMBSs may have the form

$$
\begin{aligned}
\Pi^\Sigma_t ={} & \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t M_t + \big(r^M_t - c^t_t - c^{t\Sigma}_t\big)(1-\hat f^\Sigma_t) f^\Sigma_t M_t \\
& + \big(r^M_t - c^{M\omega}_t - p^i_t(C_t) + c^p_t r^f_t - (1-r^R_t)r^S_t\big)(1-f^\Sigma_t)M_t - a f^\Sigma_t M_t + r^T_t T_t \\
& + \big(r^B_t - c^B_t\big)B_t - \big(r^{\mathbf B}_t + c^{\mathbf B}_t\big)\mathbf B_t - \big(r^D_t + c^D_t\big)D_t + C(E[S(C_t)]) - P^T(T_t) + \Pi^{\Sigma p}_t - E_t - F_t,
\end{aligned} \qquad (2.14)
$$

where $\Pi^{\Sigma p}_t = \Pi^p_t + \widetilde\Pi^\Sigma_t$. Furthermore, by considering $\frac{\partial S(C_t)}{\partial C_t} < 0$ and (2.14), $\Pi^\Sigma$ is an increasing function of $C$, so that $\frac{\partial \Pi^\Sigma_t}{\partial C_t} > 0$. Furthermore, the MI cost term, $c^{i\Sigma}$, is a function of the MI premium and payment terms.
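A direct transcription of (2.14) into code may help the reader trace the individual cash flows. The sketch below is our own: the argument names mirror the notation of the text, and all inputs must be supplied by the reader.

```python
# Minimal sketch of the profit model (2.14); names mirror the text's notation.

def profit_rmbs(M, f_S, fhat_S, r_r, c_MSw, r_SS, c_iS, r_M, c_t, c_tS,
                c_Mw, p_i, c_p, r_f, r_R, r_S, a,
                T, r_T, B, r_B, c_B, D, r_D, c_D, Bb, r_Bb, c_Bb,
                C_E_S, P_T, Pi_Sp, E, F):
    residual = (r_r - c_MSw - r_SS - c_iS) * fhat_S * f_S * M
    securitized = (r_M - c_t - c_tS) * (1 - fhat_S) * f_S * M
    unsecuritized = (r_M - c_Mw - p_i + c_p * r_f - (1 - r_R) * r_S) * (1 - f_S) * M
    adverse_selection = a * f_S * M
    return (residual + securitized + unsecuritized - adverse_selection
            + r_T * T + (r_B - c_B) * B             # Treasuries and securities
            - (r_D + c_D) * D - (r_Bb + c_Bb) * Bb  # deposits and borrowings
            + C_E_S - P_T + Pi_Sp - E - F)
```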
Below we roughly attempt to associate different risk types with the different cash inflow and outflow terms in (2.14). We note that the cash inflow terms $r^r_t \hat f^\Sigma_t f^\Sigma_t M_t$ and $r^M_t(1-\hat f^\Sigma_t)f^\Sigma_t M_t$ embed credit, market (in particular, interest rate), tranching and operational risks, while $r^M_t(1-f^\Sigma_t)M_t$ carries market (specifically, interest rate) and credit risks. Also, $c^p_t r^f_t(1-f^\Sigma_t)M_t$ can be associated with market (in particular, prepayment) risk, while $(r^B_t - c^B_t)B_t$ mainly embeds market risk. $C(E[S(C)])$ and $\Pi^{\Sigma p}_t$ involve at least credit (particularly, counterparty) and market (more specifically, interest rate, basis, prepayment, liquidity and price) risks, respectively. In (2.14), the cash outflow terms $c^{M\Sigma\omega}_t \hat f^\Sigma_t f^\Sigma_t M_t$, $c^t_t(1-\hat f^\Sigma_t)f^\Sigma_t M_t$ and $c^{t\Sigma}_t(1-\hat f^\Sigma_t)f^\Sigma_t M_t$ involve credit, tranching and operational risks, while $c^{M\omega}_t(1-f^\Sigma_t)M_t$ and $p^i_t(C_t)(1-f^\Sigma_t)M_t$ carry credit and operational risks. Also, $r^{S\Sigma}_t \hat f^\Sigma_t f^\Sigma_t M_t$ embeds credit, market (including valuation), tranching and operational risks, and $c^{i\Sigma}_t \hat f^\Sigma_t f^\Sigma_t M_t$ involves credit (in particular, counterparty), tranching and operational risks. Furthermore, $(1-r^R_t)r^S_t(1-f^\Sigma_t)M_t$ and $a f^\Sigma_t M_t$ both carry credit and market (including valuation) risks. Finally, $E_t$ and $F_t$ embed credit (in particular, counterparty and valuation), market and operational risks, respectively. In reality, the risks that we associate with each of the cash inflow and outflow terms in (2.14) are more complicated than presented above. For instance, these risks are inter-related and may be strongly correlated with each other. All of the above risk-carrying terms contribute to systemic risk that affects the entire banking system.

2.2.2. Profit under RMBSs and Retained Earnings

As for OR's profit under RMLs, $\Pi$, we conclude that IB's profit under RMBSs, $\Pi^\Sigma$, is used to meet its obligations, which include dividend payments on equity, $n_t d_t$. The retained earnings, $E^r_t$, subsequent to these payments may be computed by using

$$ \Pi_t = E^r_t + n_t d_t + (1 + r^O_t)O_t. \qquad (2.15) $$
After adding and subtracting $\big(r^M_t - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t\big)M_t$ from (2.14), we obtain

$$ \Pi^\Sigma_t = \Pi_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - r^M_t + c^t_t + c^{t\Sigma}_t\big) f^\Sigma_t \hat f^\Sigma_t M_t + \big(c^{M\omega}_t + p^i_t + (1-r^R_t)r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t M_t - E_t - F_t + \widetilde\Pi^\Sigma_t. $$

If we replace $\Pi_t$ by using (2.15), $\Pi^\Sigma_t$ is given by

$$ \Pi^\Sigma_t = E^r_t + n_t d_t + (1+r^O_t)O_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - r^M_t + c^t_t + c^{t\Sigma}_t\big) f^\Sigma_t \hat f^\Sigma_t M_t + \big(c^{M\omega}_t + p^i_t + (1-r^R_t)r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t M_t - E_t - F_t + \widetilde\Pi^\Sigma_t. \qquad (2.16) $$

From (1.6) and (2.16) we may derive an expression for IB's capital of the form

$$ K^\Sigma_{t+1} = n_t(d_t + E_t) - \Pi^\Sigma_t + \Delta F_t + (1+r^O_t)O_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - r^M_t + c^t_t + c^{t\Sigma}_t\big) f^\Sigma_t \hat f^\Sigma_t M_t + \big(c^{M\omega}_t + p^i_t + (1-r^R_t)r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t M_t - E_t - F_t + \widetilde\Pi^\Sigma_t, \qquad (2.17) $$

where $K_t$ is defined by (1.13).
2.3. Valuation under RMBSs

If the expression for retained earnings given by (2.16) is substituted into (1.7), the nett cash flow under RMBSs generated by IB is given by

$$ N^\Sigma_t = \Pi^\Sigma_t - \Delta F_t = n_t(d_t + E_t) - K^\Sigma_{t+1} + (1+r^O_t)O_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - r^M_t + c^t_t + c^{t\Sigma}_t\big) f^\Sigma_t \hat f^\Sigma_t M_t + \big(c^{M\omega}_t + p^i_t + (1-r^R_t)r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t M_t - E_t - F_t + \widetilde\Pi^\Sigma_t. \qquad (2.18) $$
We know that valuation is equal to IB's nett cash flow plus ex-dividend value. This translates to the expression

$$ V^\Sigma_t = N^\Sigma_t + K^\Sigma_{t+1}, \qquad (2.19) $$

where $K_t$ is defined by (1.13). Furthermore, the stock analyst evaluates the expected future cash flows in $j$ periods based on a stochastic discount factor, $\delta_{t,j}$, such that IB's value is

$$ V^\Sigma_t = N^\Sigma_t + E_t\bigg[ \sum_{j=1}^{\infty} \delta_{t,j} N^\Sigma_{t+j} \bigg]. \qquad (2.20) $$
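As a simple illustration of (2.19)-(2.20), the sketch below discounts a deterministic path of nett cash flows with a constant discount factor; both the factor and the cash-flow path are our assumptions (the chapter allows a general stochastic discount factor $\delta_{t,j}$).

```python
# Sketch of (2.20) under a constant discount factor and a known (deterministic)
# cash-flow path; delta and the future cash flows are illustrative assumptions.

def value(N_t, future_N, delta=0.95):
    # V_t = N_t + E_t[ sum_{j>=1} delta^j N_{t+j} ]
    return N_t + sum(delta ** j * N for j, N in enumerate(future_N, start=1))

print(value(439.7, [450.0, 460.0, 470.0]))  # illustrative nett cash flows
```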
2.4. Optimal Valuation under RMBSs

In this subsection, we make use of the modeling of assets, liabilities and capital of the preceding section to solve an optimal valuation problem.

2.4.1. Statement of Optimal Valuation Problem under RMBSs

Suppose that IB's valuation performance criterion, $J^\Sigma$, at $t$ is given by

$$ J^\Sigma_t = \Pi^\Sigma_t + l_t\Big[ K_t - \rho\big(\omega(C_t)M_t + \omega^B B_t + 12.5 f^M (mVaR + O)\big) \Big] - c^{dw}_t K^\Sigma_{t+1} + E_t\Big[ \delta_{t,1} V^\Sigma\big(K^\Sigma_{t+1}, x_{t+1}\big) \Big], \qquad (2.21) $$

where $l_t$ is the Lagrangian multiplier for the total capital constraint, $K_t$ is defined by (1.13), $E_t[\cdot]$ is the expectation conditional on IB's information in period $t$ and $x_t$ is the Treasuries in period $t$ with probability distribution $f(x_t)$. Also, $c^{dw}_t$ is the deadweight cost of total capital, which consists of equity. The optimal valuation problem is to maximize the IB value given by (2.20). We can now state the optimal valuation problem as follows.

Problem 2.2. (Statement of IB's Optimal Valuation Problem under RMBSs): Suppose that the total capital constraint and the performance criterion, $J^\Sigma$, are given by (1.2) and (2.21), respectively. The optimal valuation problem under RMBSs is to maximize IB's value given by (2.20) by choosing the RML rate, deposits and regulatory capital for
$$ V^\Sigma(K^\Sigma_t, x_t) = \max_{r^M_t,\, D_t,\, \Pi^\Sigma_t} J^\Sigma_t, \qquad (2.22) $$

subject to the RML, balance sheet, cash flow and financing constraints given by (1.5), (1.13), (2.14) and (2.17), respectively.
2.4.2. Solution to Optimal Valuation Problem under RMBSs

In this subsection, we find a solution to Problem 2.2 when the capital constraint (1.2) holds. In this regard, the main result can be stated and proved as follows.

Theorem 2.3. (Solution to IB's Optimal Valuation Problem under RMBSs): Suppose that $J^\Sigma$ and $V^\Sigma$ are given by (2.21) and (2.22), respectively. When the capital constraint given by (1.2) holds (i.e., $l_t > 0$), a solution to the optimal valuation problem under RMBSs yields optimal RMLs at face value, an optimal RML rate, and IB's optimal deposits and provisions for deposit withdrawals via Treasuries of the form (1.8), (1.9), (1.10) and (1.11), respectively. In this case, IB's optimal profit under RMBSs is given by

$$
\begin{aligned}
\Pi^{\Sigma*}_t ={} & \Big[ \frac{n_t E_{t-1} + O_t}{\rho\,\omega(C_t)} - \frac{\omega^B B_t + 12.5 f^M (mVaR + O)}{\omega(C_t)} \Big] \times \\
& \bigg\{ \hat f^\Sigma_t f^\Sigma_t \Big[ r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - \frac{1}{m_1}\Big( m_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^B B_t + 12.5 f^M(mVaR+O)}{\omega(C_t)} + m_2 C_t + \sigma^M_t \Big) + c^t_t + c^{t\Sigma}_t \Big] \\
& \quad + f^\Sigma_t \Big[ c^{M\omega}_t + p^i_t + (1-r^R_t) r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a \Big] \\
& \quad + \frac{1}{m_1}\Big( m_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^B B_t + 12.5 f^M(mVaR+O)}{\omega(C_t)} + m_2 C_t + \sigma^M_t \Big) - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t) r^S_t - \frac{r^D_t + c^D_t}{1-\gamma} \bigg\} \\
& + \Big[ D + \frac{1}{r^p_t}\Big( r^T_t + (r^B_t - c^B_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{r^D_t + c^D_t}{1-\gamma} \Big) \Big] \Big[ r^T_t - \frac{r^D_t + c^D_t}{1-\gamma} \Big] - \frac{r^D_t + c^D_t}{1-\gamma}\Big( B_t - n_t E_{t-1} - O_t - \mathbf B_t \Big) \\
& + (r^B_t - c^B_t) B_t - (r^{\mathbf B}_t + c^{\mathbf B}_t)\mathbf B_t + C(E[S(C_t)]) - P^T(T_t) + \Pi^{\Sigma p}_t - E_t - F_t,
\end{aligned} \qquad (2.23)
$$
respectively.

The next corollary follows immediately from Theorem 2.3.

Corollary 2.4. (Solution to the Optimal Valuation Problem under RMBSs (Slack)): Suppose that $J^\Sigma$ and $V^\Sigma$ are given by (2.21) and (2.22), respectively, and that $P(C_t) > 0$. When the capital constraint (1.2) does not hold (i.e., $l_t = 0$), a solution to the optimal valuation problem under RMBSs stated in Problem 2.2 yields an optimal RML supply and rate of the form
$$
\begin{aligned}
M^{\Sigma n*}_t ={} & \frac{1}{2}\big(m_0 + m_2 C_t + \sigma^M_t\big) - \frac{m_1}{2(1-2\hat f^\Sigma_t f^\Sigma_t)}\bigg[ c^{M\omega}_t + p^i_t(C_t) + (1-r^R_t) r^S_t(C_t) + \frac{r^D_t + c^D_t}{1-\gamma} - c^p_t r^f_t \\
& - 2\big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t + c^t_t + c^{t\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t - 2\big(c^{M\omega}_t + p^i_t(C_t) + (1-r^R_t) r^S_t(C_t) - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t \bigg]
\end{aligned} \qquad (2.24)
$$

and

$$
\begin{aligned}
r^{M\Sigma n*}_t ={} & \frac{1}{2m_1}\big(m_0 + m_2 C_t + \sigma^M_t\big) + \frac{1}{2(1-2\hat f^\Sigma_t f^\Sigma_t)}\bigg[ c^{M\omega}_t + p^i_t(C_t) + (1-r^R_t) r^S_t(C_t) + \frac{r^D_t + c^D_t}{1-\gamma} - c^p_t r^f_t \\
& - 2\big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t + c^t_t + c^{t\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t - 2\big(c^{M\omega}_t + p^i_t(C_t) + (1-r^R_t) r^S_t(C_t) - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a\big) f^\Sigma_t \bigg],
\end{aligned} \qquad (2.25)
$$

respectively. In this case, the corresponding $T_t$, deposits and profits are given by
$$ T^{\Sigma*}_t = D + \frac{1}{r^p_t}\Big( r^T_t + (r^B_t - c^B_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{1}{1-\gamma}(r^D_t + c^D_t) \Big), \qquad (2.26) $$

$$ D^{\Sigma*}_t = \frac{1}{1-\gamma}\bigg[ D + \frac{1}{r^p_t}\Big( r^T_t + (r^B_t - c^B_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{1}{1-\gamma}(r^D_t + c^D_t) \Big) + M^{\Sigma n*}_t + B_t - K_t - \mathbf B_t \bigg] \qquad (2.27) $$

and
$$
\begin{aligned}
\Pi^{\Sigma*}_t ={} & M^{\Sigma n*}_t \bigg\{ \big(1-\hat f^\Sigma_t f^\Sigma_t\big)\, r^{M\Sigma n*}_t + \hat f^\Sigma_t f^\Sigma_t \big( r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t + c^t_t + c^{t\Sigma}_t \big) \\
& \quad + f^\Sigma_t \big( c^{M\omega}_t + p^i_t(C_t) - c^p_t r^f_t + (1-r^R_t) r^S_t(C_t) - c^t_t - c^{t\Sigma}_t - a \big) \\
& \quad - c^{M\omega}_t - p^i_t(C_t) - (1-r^R_t) r^S_t(C_t) - \frac{r^D_t + c^D_t}{1-\gamma} + c^p_t r^f_t \bigg\} \\
& + \Big[ D + \frac{1}{r^p_t}\Big( r^T_t + (r^B_t - c^B_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{r^D_t + c^D_t}{1-\gamma} \Big) \Big]\Big[ r^T_t - \frac{r^D_t + c^D_t}{1-\gamma} \Big] - \frac{r^D_t + c^D_t}{1-\gamma}\big( B_t - K_t - \mathbf B_t \big) \\
& + (r^B_t - c^B_t)B_t - (r^{\mathbf B}_t + c^{\mathbf B}_t)\mathbf B_t + C(E[S(C_t)]) - P^T(T_t) + \Pi^{\Sigma p}_t - E_t - F_t,
\end{aligned} \qquad (2.28)
$$

where $M^{\Sigma n*}_t$ and $r^{M\Sigma n*}_t$ are given by (2.24) and (2.25), respectively.
3. Risk, Profit and Valuation under RMBS CDOs

In this section, we discuss optimal profit under RMBS CDOs. In the sequel, we assume that the notation $\Pi$, $r^M$, $M$, $c^{M\omega}$, $p^i$, $c^p$, $r^f$, $r^R$, $r^S$, $S$, $C$, $C(E[S(C)])$, $r^B$, $c^B$, $B$, $r^T$, $T$, $P^T(T)$, $r^D$, $c^D$, $D$, $r^{\mathbf{B}}$, $c^{\mathbf{B}}$, $\mathbf{B}$, $\Pi^p$, $K$, $n$, $E$, $O$, $\omega(C)$, $\omega^B$, $f^M$, $mVaR$, $O$, $r^{S\Sigma}$, $f^\Sigma$ and $\hat f^\Sigma$ corresponds to that of Sections 1. and 2.. Further suppositions are that $r^r$, $c^{M\Sigma\omega}$, $c^{i\Sigma}$, $c^t$, $c^{t\Sigma}$, $a$, $\Pi^{\Sigma p}$, $E$ and $F$ denote the same parameters as in Section 2..

Assumption 3.1. (Senior Tranches of RMBSs): We assume that the risky marketable securities, $B$ – appearing in the balance sheet (1.13) – consist entirely of the senior tranches of RMBSs that are wrapped by an MLI. Also, IB has an incentive to retain an interest in these tranches. This assumption implies that the CDO structure depends on the securitization of senior tranches of RMBSs.
3.1. Subprime ABS CDOs

A cash CDO is an SPV that buys a portfolio of fixed income loans and finances the purchase of the portfolio via issuing different tranches of risk in the capital markets. Of particular interest are ABS CDOs – CDOs whose underlying portfolios consist of ABSs, including RMBSs. CDO portfolios typically include tranches of subprime and Alt-A deals, sometimes in quite significant amounts. A representation of the creation of an RMBS deal is given on the left-hand side of Figure 3 below. Some of the bonds issued in this deal go into ABS CDOs. In particular, as shown on the right-hand side of the figure, RMBS bonds
rated AAA, AA and A form part of a high grade CDO portfolio, so called because the portfolio bonds have these ratings. The BBB bonds from the RMBS deal go into a mezz CDO, so named because its portfolio consists entirely, or almost entirely, of BBB rated ABS and RMBS tranches. If bonds issued by mezz CDOs are put into another CDO portfolio, then the new CDO – now holding mezz CDO tranches – is called a CDO squared or CDO². These CDOs are rated in the following way. The CDO trust partners, the equity holders, would work with a CRA to get the CDO's liabilities rated, and they paid the CRA for this service. The CRA informed the CDO trust about the procedure – the methods, historical default rates, prepayment rates and recovery rates – it would use to rate bonds. The CDO trust structured the liabilities and waterfall to obtain a significant percentage of AAA bonds (with the assistance of the CRA). The rating process was thus a fixed target, and the CDO equity holders designed the liability structure to reflect that target. Note that, given the use of historic data, the ratings did not reflect current asset characteristics, such as the growing number of no-documentation RMLs and the large loan-to-value ratios of subprime RMLs. The chain of RMLs, RMBSs and CDOs is portrayed in Figure 3 below.
[Figure 3. Chain of Subprime Securitizations; Source: [21]. The figure traces subprime RMLs (classified by MR credit on the X-axis and MR down payment on the Y-axis) into RMBS bonds (AAA 81 %, AA 11 %, A 4 %, BBB 3 %, BB/NR 1 %, not all deals). The highly rated RMBS bonds feed a high grade ABS CDO (senior AAA 88 %, junior AAA 5 %, AA 3 %, A 2 %, BBB 1 %, NR 1 %), while the BBB bonds feed a mezz ABS CDO (senior AAA 62 %, junior AAA 14 %, AA 8 %, A 6 %, BBB 6 %, NR 4 %), which in turn feeds a CDO² (senior AAA 60 %, junior AAA 27 %, AA 4 %, A 3 %, BBB 3 %, NR 2 %). Excess spread and over-collateralization provide further credit enhancement at each stage.]
3.2. Risk and Profit under RMBS CDOs

In this subsection, we investigate a subprime mortgage model for profit under RMBS CDOs and its relationship with retained earnings.

3.2.1. A Subprime Mortgage Model for Risk and Profit under RMBS CDOs

In this paper, a subprime mortgage model for profit under RMBS CDOs can be constructed by considering the difference between cash inflow and outflow. For this profit,
in period $t$, cash inflow is constituted by returns on the residual from RMBS securitization, $r^{rb}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$; securitized subprime RMBSs, $r^B_t (1-\hat f^{\Sigma b}_t) f^{\Sigma b}_t B_t$; unsecuritized securities, $r^B_t (1-f^{\Sigma b}_t) B_t$; Treasuries, $r^T_t T_t$; and RMLs, $r^M_t M_t$; as well as the recovery amount, $R_t$, MLI protection leg payments, $C(S(C_t))$, and the present value of future gains from subsequent RMBS purchases and their securitizations, $\Pi^{\Sigma p}_t$. On the other hand, we consider the average weighted cost of funds to securitize RMBSs, $c^{M\Sigma\omega b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$; losses from securitized RMBSs, $r^{S\Sigma b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$; the cost of MLI insurance for securitized RMBSs, $c^{i\Sigma b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$; the transaction cost to extend RMBSs, $c^{tb}_t (1-\hat f^{\Sigma b}_t) f^{\Sigma b}_t B_t$; and transaction costs from securitized RMBSs, $c^{t\Sigma b}_t (1-\hat f^{\Sigma b}_t) f^{\Sigma b}_t B_t$, as part of cash outflow. Additional components of outflow are the weighted average cost of funds for extending RMBSs, $c^{M\omega b}_t (1-f^{\Sigma b}_t) B_t$; the fraction of the face value of unsecuritized RMBSs corresponding to losses, $(1-r^{Rb}_t) r^{Sb}_t (1-f^{\Sigma b}_t) B_t$; the MLI premium for unsecuritized RMBSs, $p^{ib}_t (1-f^{\Sigma b}_t) B_t$; the decreasing value of adverse selection, $a^b f^{\Sigma b}_t B_t$; the all-in cost of holding RMBSs, $c^M_t M_t$; interest paid to depositors, $r^D_t D_t$; the cost of taking deposits, $c^D D_t$; provisions against deposit withdrawals, $r^{\mathbf B}_t \mathbf B_t$ and $c^{\mathbf B}_t \mathbf B_t$, where $r^{\mathbf B}$ and $c^{\mathbf B}$ are the borrower rate and marginal cost of borrowing, respectively; $P^T(T_t)$; losses from suboptimal SPVs, $E_t$; and costs for funding RMBS securitization, $F_t$. From the above, we have that the model for profit under subprime RMBS CDOs may have the form

$$
\begin{aligned}
\Pi^{\Sigma b}_t ={} & \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t M_t + \big(r^M_t - c^t_t - c^{t\Sigma}_t\big)(1-\hat f^\Sigma_t) f^\Sigma_t M_t \\
& + \big(r^M_t - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t\big)(1-f^\Sigma_t)M_t - a f^\Sigma_t M_t \\
& + \big(r^{rb}_t - c^{M\Sigma\omega b}_t - r^{S\Sigma b}_t - c^{i\Sigma b}_t\big)\hat f^{\Sigma b}_t f^{\Sigma b}_t B_t + \big(r^B_t - c^{tb}_t - c^{t\Sigma b}_t\big)(1-\hat f^{\Sigma b}_t) f^{\Sigma b}_t B_t \\
& + \big(r^B_t - c^{M\omega b}_t - p^{ib}_t + c^{pb}_t r^{fb}_t - (1-r^{Rb}_t)r^{Sb}_t\big)(1-f^{\Sigma b}_t)B_t - a^b f^{\Sigma b}_t B_t + r^T_t T_t \\
& - \big(r^{\mathbf B}_t + c^{\mathbf B}_t\big)\mathbf B_t - \big(r^D_t + c^D_t\big)D_t + C(E[S(C_t)]) - P^T(T_t) + \Pi^{\Sigma p}_t - E_t - F_t,
\end{aligned} \qquad (3.29)
$$

where $\Pi^{\Sigma p}_t = \Pi^p_t + \widetilde\Pi^\Sigma_t$. Furthermore, by considering $\frac{\partial S(C_t)}{\partial C_t} < 0$ and (3.29), we know that $\Pi^{\Sigma b}$ is an increasing function of the current credit rating, $C$.

From the above, we note that in (3.29) the cash inflow terms $r^{rb}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$ and $r^B_t(1-\hat f^{\Sigma b}_t)f^{\Sigma b}_t B_t$ carry credit, market (in particular, interest rate), tranching and operational risks, while $r^B_t(1-f^{\Sigma b}_t)B_t$ embeds credit (in particular, counterparty) and market (in particular, interest rate) risks. In (3.29), the cash outflow terms $c^{M\Sigma\omega b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$, $r^{S\Sigma b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$, $c^{i\Sigma b}_t \hat f^{\Sigma b}_t f^{\Sigma b}_t B_t$, $c^{tb}_t(1-\hat f^{\Sigma b}_t)f^{\Sigma b}_t B_t$ and $c^{t\Sigma b}_t(1-\hat f^{\Sigma b}_t)f^{\Sigma b}_t B_t$ involve credit (for instance, counterparty), market (specifically, liquidity and valuation), tranching and operational risks. Also, $c^{M\omega b}_t(1-f^{\Sigma b}_t)B_t$ and $p^{ib}_t(1-f^{\Sigma b}_t)B_t$ carry credit, market (particularly, liquidity) and operational risks, while $a^b f^{\Sigma b}_t B_t$ embeds credit and market (in the form of liquidity and valuation) risks. As before, the risks that we associate with each of the cash inflow and outflow terms in (3.29) are less straightforward. For instance, strong correlations may exist between each of the aforementioned risks. Also, the risk-carrying terms found in (3.29) affect the entire banking system via systemic risk.
3.2.2. Profit under RMBS CDOs and Retained Earnings

We know that profits, $\Pi_t$, are used to meet the bank's obligations, which include dividend payments on equity, $n_t d_t$. The retained earnings, $E^r_t$, subsequent to these payments may be computed by using (2.15). After adding and subtracting $\big(r^B_t - c^{M\omega b}_t - p^{ib}_t + c^{pb}_t r^{fb}_t - (1-r^{Rb}_t)r^{Sb}_t\big)B_t$ from (3.29), we get

$$
\begin{aligned}
\Pi^{\Sigma b}_t ={} & \Pi_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t M_t + \big(r^M_t - c^t_t - c^{t\Sigma}_t\big)(1-\hat f^\Sigma_t) f^\Sigma_t M_t \\
& + \big(r^M_t - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t\big)(1-f^\Sigma_t)M_t - a f^\Sigma_t M_t \\
& + \big(r^{rb}_t - c^{M\Sigma\omega b}_t - r^{S\Sigma b}_t - c^{i\Sigma b}_t - r^B_t + c^{tb}_t + c^{t\Sigma b}_t\big) f^{\Sigma b}_t \hat f^{\Sigma b}_t B_t \\
& + \big(c^{M\omega b}_t + p^{ib}_t + (1-r^{Rb}_t)r^{Sb}_t - c^{tb}_t - c^{t\Sigma b}_t - c^{pb}_t r^{fb}_t - a^b\big) f^{\Sigma b}_t B_t - E_t - F_t + \widetilde\Pi^\Sigma_t.
\end{aligned}
$$
ΠΣb t
=
Etr
+ nt dt + (1 +
rtO )Ot
r M Σω SΣ iΣ bΣ Σ + rt − ct − rt − ct ft ft Mt
M t tΣ + rt − ct − ct (1 − fbtΣ )ftΣ Mt
p f M Mω i R S + rt − ct − pt + ct rt − (1 − rt )rt (1 − ftΣ )Mt − aftΣ Mt M Σωb SΣb iΣb B tb tΣb rb − r t − ct − r t + ct + ct ftΣb fbtΣb Bt + rt − ct
pb f b M ωb ib Rb Sb tb tΣb b + pt + (1 − rt )rt − ct − ct − ct rt − a ftΣb Bt + ct
eΣ −Et − Ft + Π t .
For (3.30) and (1.6) to obtain an expression for capital of the form
(3.30)
$$
\begin{aligned}
K^{\Sigma b}_{t+1} ={} & n_t(d_t + E_t) - \Pi^{\Sigma b}_t + \Delta F_t + (1+r^O_t)O_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t M_t \\
& + \big(r^M_t - c^t_t - c^{t\Sigma}_t\big)(1-\hat f^\Sigma_t) f^\Sigma_t M_t + \big(r^M_t - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t\big)(1-f^\Sigma_t)M_t - a f^\Sigma_t M_t \\
& + \big(r^{rb}_t - c^{M\Sigma\omega b}_t - r^{S\Sigma b}_t - c^{i\Sigma b}_t - r^B_t + c^{tb}_t + c^{t\Sigma b}_t\big) f^{\Sigma b}_t \hat f^{\Sigma b}_t B_t \\
& + \big(c^{M\omega b}_t + p^{ib}_t + (1-r^{Rb}_t)r^{Sb}_t - c^{tb}_t - c^{t\Sigma b}_t - c^{pb}_t r^{fb}_t - a^b\big) f^{\Sigma b}_t B_t - E_t - F_t + \widetilde\Pi^\Sigma_t,
\end{aligned} \qquad (3.31)
$$

where $K_t$ is defined by (1.13).
3.3. Valuation under RMBS CDOs

If the expression for retained earnings given by (3.30) is substituted into (1.7), the nett cash flow generated for a shareholder is given by

$$
\begin{aligned}
N^{\Sigma b}_t ={} & \Pi^{\Sigma b}_t - \Delta F_t = n_t(d_t + E_t) - K^{\Sigma b}_{t+1} + (1+r^O_t)O_t + \big(r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t\big)\hat f^\Sigma_t f^\Sigma_t M_t \\
& + \big(r^M_t - c^t_t - c^{t\Sigma}_t\big)(1-\hat f^\Sigma_t) f^\Sigma_t M_t + \big(r^M_t - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t\big)(1-f^\Sigma_t)M_t - a f^\Sigma_t M_t \\
& + \big(r^{rb}_t - c^{M\Sigma\omega b}_t - r^{S\Sigma b}_t - c^{i\Sigma b}_t - r^B_t + c^{tb}_t + c^{t\Sigma b}_t\big) f^{\Sigma b}_t \hat f^{\Sigma b}_t B_t \\
& + \big(c^{M\omega b}_t + p^{ib}_t + (1-r^{Rb}_t)r^{Sb}_t - c^{tb}_t - c^{t\Sigma b}_t - c^{pb}_t r^{fb}_t - a^b\big) f^{\Sigma b}_t B_t - E_t - F_t + \widetilde\Pi^\Sigma_t.
\end{aligned} \qquad (3.32)
$$

We know that valuation is equal to the nett cash flow plus ex-dividend value. This translates to the expression

$$ V^{\Sigma b}_t = N^{\Sigma b}_t + K^{\Sigma b}_{t+1}, \qquad (3.33) $$

where $K_t$ is defined by (1.13). Furthermore, under RMBS CDOs, the analyst evaluates the expected future cash flows in $j$ periods based on a stochastic discount factor, $\delta_{t,j}$, such that IB's value is

$$ V^{\Sigma b}_t = N^{\Sigma b}_t + E_t\bigg[ \sum_{j=1}^{\infty} \delta_{t,j} N^{\Sigma b}_{t+j} \bigg]. \qquad (3.34) $$
3.4. Optimal Valuation under RMBS CDOs

In this subsection, we make use of the modeling of assets, liabilities and capital of the preceding section to solve an optimal valuation problem.

3.4.1. Statement of Optimal Valuation Problem under RMBS CDOs

Suppose that the valuation performance criterion, $J^{\Sigma b}$, at $t$ is given by

$$ J^{\Sigma b}_t = \Pi^{\Sigma b}_t + l_t\Big[ K_t - \rho\big(\omega(C_t)B_t + \omega^M M_t + 12.5 f^B (mVaR + O)\big) \Big] - c^{dw}_t K^{\Sigma b}_{t+1} + E_t\Big[ \delta_{t,1} V^{\Sigma b}\big(K^{\Sigma b}_{t+1}, x_{t+1}\big) \Big], \qquad (3.35) $$

where $l_t$ is the Lagrangian multiplier for the total capital constraint, $K_t$ is defined by (1.13), $E_t[\cdot]$ is the expectation conditional on information in period $t$ and $x_t$ is the Treasuries in period $t$ with probability distribution $f(x_t)$. Also, $c^{dw}_t$ is the deadweight cost of total capital, which consists of equity. The optimal valuation problem is to maximize the value given by (3.34). We can now state the optimal valuation problem as follows.

Problem 3.2. (Statement of Optimal Valuation Problem under RMBS CDOs): Suppose that the total capital constraint and the performance criterion, $J^{\Sigma b}$, are given by
$$ K^{\Sigma b}_t = n_t E_{t-1} + O_t \ge \rho\big(\omega(C_t)M_t + \omega^M M_t + 12.5 f^B (mVaR + O)\big) \qquad (3.36) $$
and (3.35), respectively. IB’s optimal valuation problem is to maximize its value given by (3.34) by choosing the RMBS rate, deposits and regulatory capital for
$$ V^{\Sigma b}(K_t, x_t) = \max_{r^B_t,\, D_t,\, \Pi^{\Sigma b}_t} J^{\Sigma b}_t, \qquad (3.37) $$
subject to the RMBS, balance sheet, cash flow and financing constraints given by

$$ B_t = b_0 - b_1 r^B_t + b_2 C_t + \sigma^B_t, \qquad (3.38) $$

$$ D_t = \frac{B_t + M_t + T_t - \mathbf B_t - n_t E_{t-1} - O_t}{1-\gamma}, \qquad (3.39) $$

(3.29) and (3.31), respectively.
3.4.2. Solution of Optimal Valuation Problem under RMBS CDOs

In this subsection, we find a solution to Problem 3.2 when the capital constraint (3.36) holds. In this regard, the main result can be stated and proved as follows.
Theorem 3.3. (Solution to an Optimal Valuation Problem under RMBS CDOs): Suppose that $J^{\Sigma b}$ and $V^{\Sigma b}$ are given by (3.35) and (3.37), respectively. When the capital constraint given by (3.36) holds (i.e., $l_t > 0$), a solution to the optimal valuation problem yields an optimal RMBS supply and an RMBS bond rate of the form

$$ B^*_t = \frac{K_t}{\rho\,\omega(C_t)} - \frac{\omega^M M_t + 12.5 f^B (mVaR + O)}{\omega(C_t)} \qquad (3.40) $$

and

$$ r^{B*}_t = \frac{1}{b_1}\Big( b_0 + b_2 C_t + \sigma^B_t - \frac{K_t}{\rho\,\omega(C_t)} + \frac{\omega^M M_t + 12.5 f^B (mVaR + O)}{\omega(C_t)} \Big), \qquad (3.41) $$

respectively. In this case, optimal deposits, provisions for deposit withdrawals via Treasuries and profits under RMBS securitization are given by
$$ D^{\Sigma b*}_t = \frac{1}{1-\gamma}\bigg[ D + \frac{1}{r^p_t}\Big( r^T_t + (r^M_t - c^M_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{1}{1-\gamma}(r^D_t + c^D_t) \Big) + \frac{K_t}{\rho\,\omega(C_t)} - \frac{\omega^M M_t + 12.5 f^B (mVaR + O)}{\omega(C_t)} + M_t - K_t - \mathbf B_t \bigg], \qquad (3.42) $$

$$ T^{\Sigma b*}_t = D + \frac{1}{r^p_t}\Big( r^T_t + (r^M_t - c^M_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{1}{1-\gamma}(r^D_t + c^D_t) \Big) \qquad (3.43) $$

and
$$
\begin{aligned}
\Pi^{\Sigma b*}_t ={} & \Big[ \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} - \frac{\omega^B B_t + 12.5 f^M(mVaR+O)}{\omega(C_t)} \Big] \times \\
& \bigg\{ \hat f^\Sigma_t f^\Sigma_t \Big[ r^r_t - c^{M\Sigma\omega}_t - r^{S\Sigma}_t - c^{i\Sigma}_t - \frac{1}{m_1}\Big( m_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^B B_t + 12.5 f^M(mVaR+O)}{\omega(C_t)} + m_2 C_t + \sigma^M_t \Big) + c^t_t + c^{t\Sigma}_t \Big] \\
& \quad + f^\Sigma_t \Big[ c^{M\omega}_t + p^i_t + (1-r^R_t)r^S_t - c^t_t - c^{t\Sigma}_t - c^p_t r^f_t - a \Big] \\
& \quad + \frac{1}{m_1}\Big( m_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^B B_t + 12.5 f^M(mVaR+O)}{\omega(C_t)} + m_2 C_t + \sigma^M_t \Big) - c^{M\omega}_t - p^i_t + c^p_t r^f_t - (1-r^R_t)r^S_t \bigg\} \\
& + \Big[ \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} - \frac{\omega^M M_t + 12.5 f^B(mVaR+O)}{\omega(C_t)} \Big] \times \\
& \bigg\{ \hat f^{\Sigma b}_t f^{\Sigma b}_t \Big[ r^{rb}_t - c^{M\Sigma\omega b}_t - r^{S\Sigma b}_t - c^{i\Sigma b}_t - \frac{1}{b_1}\Big( b_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^M M_t + 12.5 f^B(mVaR+O)}{\omega(C_t)} + b_2 C_t + \sigma^B_t \Big) + c^{tb}_t + c^{t\Sigma b}_t \Big] \\
& \quad + f^{\Sigma b}_t \Big[ c^{M\omega b}_t + p^{ib}_t + (1-r^{Rb}_t)r^{Sb}_t - c^{tb}_t - c^{t\Sigma b}_t - c^{pb}_t r^{fb}_t - a^b \Big] \\
& \quad + \frac{1}{b_1}\Big( b_0 - \frac{n_t E_{t-1}+O_t}{\rho\,\omega(C_t)} + \frac{\omega^M M_t + 12.5 f^B(mVaR+O)}{\omega(C_t)} + b_2 C_t + \sigma^B_t \Big) - c^{M\omega b}_t - p^{ib}_t + c^{pb}_t r^{fb}_t - (1-r^{Rb}_t)r^{Sb}_t - \frac{r^D_t+c^D_t}{1-\gamma} \bigg\} \\
& + \Big[ D + \frac{1}{r^p_t}\Big( r^T_t + (r^M_t - c^M_t) + (r^{\mathbf B}_t + c^{\mathbf B}_t) - \frac{r^D_t+c^D_t}{1-\gamma} \Big) \Big]\Big[ r^T_t - \frac{r^D_t+c^D_t}{1-\gamma} \Big] - \frac{r^D_t+c^D_t}{1-\gamma}\big( M_t - n_t E_{t-1} - O_t - \mathbf B_t \big) \\
& - (r^{\mathbf B}_t + c^{\mathbf B}_t)\mathbf B_t + C(E[S(C_t)]) - P^T(T_t) + \Pi^{\Sigma p}_t - E_t - F_t,
\end{aligned} \qquad (3.44)
$$

respectively.
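Mirroring the earlier sketch for (1.8)-(1.9), the following lines (our own illustration) evaluate (3.40)-(3.41); with the period-$t$ inputs of Subsection 5.1. they reproduce $B^*_t = 10\,400$ and $r^{B*}_t = -1.715$.

```python
# Sketch of (3.40)-(3.41); inputs follow the period-t values of Subsection 5.1.

def optimal_rmbs(K, rho, omega_C, omega_M, M, f_B, mVaR, O_op,
                 b0, b1, b2, C, sigma_B):
    B = K / (rho * omega_C) - (omega_M * M + 12.5 * f_B * (mVaR + O_op)) / omega_C
    r_B = (b0 + b2 * C + sigma_B - B) / b1
    return B, r_B

print(optimal_rmbs(K=500, rho=0.08, omega_C=0.5, omega_M=0.05, M=10000,
                   f_B=0.08, mVaR=400, O_op=150,
                   b0=5000, b1=5000, b2=5000, C=0.5, sigma_B=-5675))
# -> (10400.0, -1.715), matching Subsection 5.1.2.
```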
4. Mortgage Securitization and Capital under Basel Regulation

In this section, we deal with a model where both subprime RML losses and RML risk-weights are a function of the current level of credit rating, $C_t$. The capital constraint is described by the expression in (1.2), where the risk-weights on short- and long-term marketable securities, $\omega^B \ne 0$, are considered. Also, in this situation, the risk-weight on mortgages, $\omega(C_t)$, is a decreasing function of the current level of credit rating, i.e., $\frac{\partial \omega(C_t)}{\partial C_t} < 0$.
In particular, the risk-weights for short- and long-term marketable securities are kept constant, i.e., $\omega^B = 1$. In this case, the capital constraint (1.2) becomes

$$ K_t \ge \rho\big(\omega(C_t)M_t + \omega^B B_t + 12.5 f^M (mVaR + O)\big). \qquad (4.45) $$

4.1. Quantity and Pricing of Mortgages and Capital under Basel Regulation (Securitized Case)
In this subsection, we examine how capital, $K$, and the quantity and price of mortgages, $M$, are affected by changes in the level of credit rating, $C$, when the risk-weight on mortgages, $\omega(C_t)$, is allowed to vary.

Theorem 4.1. (Mortgages and Capital under Basel Regulation (Securitized Case)): Suppose that $B(C_t) > 0$ and the loan risk-weight, $\omega(C_t)$, is allowed to vary. In this case, we have that

1. if $\dfrac{\partial \sigma^{M*}_{t+1}}{\partial C_t} < 0$ then $\dfrac{\partial K_{t+1}}{\partial C_t} > 0$;

2. if $\dfrac{\partial \sigma^{M*}_{t+1}}{\partial C_t} > 0$ then $\dfrac{\partial K_{t+1}}{\partial C_t} < 0$.

Proof. The full proof of Theorem 4.1 is contained in Subsection 8.6..
4.2. Subprime Mortgages and Their Rates under Basel Capital Regulation (Slack Constraint; Securitized Case)

Next, we consider the effect of a shock to the current level of credit rating, $C_t$, on mortgages, $M$, and the subprime loan rate, $r^M$. In particular, we analyze the case where the capital constraint (4.45) is slack.

Proposition 4.2. (Subprime RMLs under Basel II (Slack Constraint; Securitized Case)): Under the same hypothesis as Theorem 4.1, when $l_t = 0$ we have that

$$ \frac{\partial M^{n*}_{t+j}}{\partial C_t} = \frac{1}{2}\,\mu^C_j \bigg[ m_2 - m_1 \frac{(1-2f^\Sigma_t)}{(1-2\hat f^\Sigma_t f^\Sigma_t)} \Big( \frac{\partial p^i(C_{t+j})}{\partial C_{t+j}} + \frac{\partial r^S_t(C_{t+j})}{\partial C_{t+j}} \Big) \bigg] \qquad (4.46) $$

and

$$ \frac{\partial r^{Mn*}_{t+j}}{\partial C_t} = \frac{1}{2}\,\mu^C_j \bigg[ \frac{m_2}{m_1} + \frac{(1-2f^\Sigma_t)}{(1-2\hat f^\Sigma_t f^\Sigma_t)} \Big( \frac{\partial p^i(C_{t+j})}{\partial C_{t+j}} + \frac{\partial r^S_t(C_{t+j})}{\partial C_{t+j}} \Big) \bigg]. \qquad (4.47) $$

Proof. The full proof of Proposition 4.2 is contained in Subsection 8.7..
4.3. Subprime Mortgages and Their Rates under Basel Capital Regulation (Holding Constraint; Securitized Case)
Next, we present results about the effect of changes in the level of credit rating, $C$, on loans when the capital constraint (4.45) holds.

Proposition 4.3. (Subprime RMLs under Basel II (Holding Constraint; Securitized Case)): Assume that the same hypothesis as in Theorem 4.1 holds. If $l_t > 0$, then by taking the first derivative of equation (1.8) with respect to $C_t$, and using the fact that the risk-weights for short- and long-term marketable securities, $\omega^B$, are constant, we obtain

$$ \frac{\partial M^*_t}{\partial C_t} = - \frac{K_t - \rho\big(\omega^B B_t + 12.5 f^M (mVaR + O)\big)}{[\omega(C_t)]^2 \rho} \, \frac{\partial \omega(C_t)}{\partial C_t}. \qquad (4.48) $$

In this situation, the subprime mortgage loan rate response to changes in the level of credit rating is given by

$$ \frac{\partial r^{M*}_t}{\partial C_t} = \frac{m_2}{m_1} + \frac{K_t - \rho\big(\omega^B B_t + 12.5 f^M (mVaR + O)\big)}{[\omega(C_t)]^2 \rho\, m_1} \, \frac{\partial \omega(C_t)}{\partial C_t}. \qquad (4.49) $$

Proof. The full proof of Proposition 4.3 is contained in Subsection 8.8..
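The sign prediction of (4.48) can be checked numerically. The sketch below assumes a hypothetical decreasing risk-weight $\omega(C) = 0.5\,e^{-\kappa C}$ – the chapter only requires $\partial\omega(C_t)/\partial C_t < 0$, not this particular form – while the other inputs follow the period-$t$ values of Table 2.

```python
# Numerical check of (4.48) under an assumed decreasing risk-weight
# omega(C) = 0.5 * exp(-kappa * C); kappa is a hypothetical parameter.

import math

def M_star(C, K=500, rho=0.08, omega_B=0.5, B=1300,
           f_M=0.08, mVaR=400, O_op=150, kappa=0.2):
    omega = 0.5 * math.exp(-kappa * C)   # assumed functional form
    return K / (rho * omega) - (omega_B * B + 12.5 * f_M * (mVaR + O_op)) / omega

h = 1e-6
dM_dC = (M_star(0.5 + h) - M_star(0.5 - h)) / (2 * h)
print(dM_dC > 0)  # True: a better credit rating lowers omega and raises M*
```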
4.4. Subprime Mortgages and Their Rates under Basel Capital Regulation (Future Time Periods; Securitized Case)

In the sequel, we examine the effect of a current credit rating shock in future periods on subprime RMLs, $M$, and RML rates, $r^M$.

4.4.1. Capital Constraint Slack (Securitized Case)

If the capital constraint is slack, the response of subprime RMLs and RML rates in period $j \ge 1$ to current fluctuations in the level of credit rating is described by Theorem 4.1. Nevertheless, as time goes by, the impact of the credit rating shock is minimized, since $\mu^C_j < 1$.

4.4.2. Capital Constraint Holding (Securitized Case)

In future periods, if the capital constraint holds, the response of subprime RMLs and RML rates to a change in the level of credit rating, $C_t$, is described by

$$ \frac{\partial M^*_{t+j}}{\partial C_t} = \frac{\mu^C_{j-1}}{\omega(C_{t+j})\rho}\, \frac{\partial\big(K_{t+j} - \rho(\omega^B B_{t+j} + 12.5 f^M (mVaR + O))\big)}{\partial C_{t-1+j}} - \frac{\mu^C_{j-1}}{\omega(C_{t+j})\rho}\,\frac{\mu^C}{\omega(C_{t+j})}\,\big(K_{t+j} - \rho(\omega^B B_{t+j} + 12.5 f^M (mVaR + O))\big)\, \frac{\partial \omega(C_{t+j})}{\partial C_{t+j}} \qquad (4.50) $$
and

$$ \frac{\partial r^{M*}_{t+j}}{\partial C_t} = \frac{m_2}{m_1}\,\mu^C_j - \frac{\mu^C_{j-1}}{\omega(C_{t+j})\rho\, m_1}\, \frac{\partial\big(K_{t+j} - \rho(\omega^B B_{t+j} + 12.5 f^M (mVaR + O))\big)}{\partial C_{t-1+j}} + \frac{\mu^C_j}{[\omega(C_{t+j})]^2 \rho}\,\big(K_{t+j} - \rho(\omega^B B_{t+j} + 12.5 f^M (mVaR + O))\big)\, \frac{\partial \omega(C_{t+j})}{\partial C_{t+j}}. $$
From equation (4.50), it can be seen that future subprime RMLs can either rise or fall in response to positive credit rating shocks, depending on the relative magnitudes of the terms in equation (4.50). If capital rises in response to positive credit rating shocks, subprime RMLs can fall provided that the effect of the shock on capital is greater than the effect of the shock on subprime mortgage loan risk-weights.
5. Examples Involving Subprime Mortgage Securitization

In this section, we provide examples to illustrate some of the results obtained in the preceding sections. In one way or another, all of the examples in this section support the claim that the SMC was mainly caused by the intricacy and design (refer to Subsections 5.1., 5.2., 5.3. and 5.4.) of systemic agents (refer to Subsections 5.1. and 5.2.), subprime mortgage origination (refer to Subsections 5.2., 5.3. and 5.4.) and securitization (refer to Subsections 5.1., 5.2., 5.3. and 5.4.), which led to information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation (refer to Subsections 5.1., 5.2., 5.3. and 5.4.).
5.1. Numerical Example Involving Subprime Mortgage Securitization

In this subsection, we present a numerical example to highlight some issues in Sections 2. and 3.. In particular, we address the role of valuation in house prices. Here we bear in mind that we solve a subprime mortgage securitization maximization problem subject to the financing and regulatory capital constraints, with and without CDO tranching. The choices of the values of the economic variables in this subsection are justified by considering data from LoanPerformance (LP), Bloomberg, ABSNET, the Federal Housing Finance Agency (FHFA; formerly known as OFHEO), the Federal Reserve Bank of St Louis (FRBSL) database, the Financial Service Research Program's (FSRP) subprime mortgage database, Securities Industry and Financial Markets Association (SIFMA) Research and Statistics, as well as Lender Processing Services (LPS; formerly called McDash Analytical), for selected periods before and during the crisis. Additional parameter choices are made by looking at, for instance, [3] and [11]. These provide enough information to support the choices for prices, rates and costs, while the parameter amounts are arbitrary.

5.1.1. Choices of Subprime Mortgage Securitization Parameters

In Table 2 below, we make choices for subprime securitization, profit and valuation parameters.
Table 2. Choices of Capital, Information, Risk and Valuation Parameters under Securitization

Parameter | Period t | Period t + 1
M | $10 000 | $12 000
m0 | $5 000 | $5 000
L | 0.909 | 1.043
ω^M | 0.05 | 0.05
m1 | $5 000 | $5 000
L^{1*} | 0.9182 | 1.0554
r^M | 0.051 | 0.082
m2 | $5 000 | $5 000
L^{n*} | 0.3802 | 0.4439
c^{Mω} | 0.0414 | 0.0414
C | 0.5 | 0.5
L^{2*} | 0.9201 | 1.0563
p^i | 0.01 | 0.01
c^p | 0.05 | 0.05
a | 0.05 | 0.05
C(E[S(C_t)]) | 400 | 400
c^{p*} | 0.0611 | 0.0633
∆F | 2 062.3 | –
r^f | 0.01 | 0.01
r^p | 0.1 | 0.1
b0 | $5 000 | $5 000
r^R | 0.5 | 0.5
K | $500 | $650
b1 | $5 000 | $5 000
r^S | 0.15 | 0.25
O | $150 | $150
b2 | $5 000 | $5 000
H | $11 000 | $11 500
r^O | 0.101 | 0.101
c^{iΣ} | 0.05 | 0.06
Λ | $50 000 | $63 157.89
E | $250 | $250
c^{MΣω} | 0.045 | 0.045
ρ | 0.08 | 0.08
E^p | $150 | $150
c^t | 0.03 | 0.03
C | $1 000 | $1 000
E^c | $100 | $100
c^{tΣ} | 0.04 | 0.04
r^B | 0.105 | 0.105
r^𝐁 | 0.1 | 0.1
r^r | 0.041 | 0.072
r^{SΣ} | 0.15 | 0.25
f̂^Σ | 0.3 | 0.2
c^B | 0.101 | 0.101
c^𝐁 | 0.09 | 0.09
B | $1 300 | $1 500
𝐁 | $5 200 | $6 200
ω^B | 0.5 | 0.5
Π^p | $6 000 | $6 000
r^T | 0.036 | 0.04
D | $9 300 | $11 100
f^Σ | 0.65 | 0.5
F | 700 | 500
T | $2 000 | $2 000
r^D | 0.105 | 0.105
E | 500 | 300
a^b | 0.05 | 0.05
n | 1.75 | 2
r^L | 0.02 | 0.05
Π^{Σp} | $8 000 | $8 000
d | 5.4 | 5.4
f^M | 0.08 | 0.19
Π̃^Σ | 2 000 | 2 000
mVaR | 400 | 400
ω(C) | 0.5 | 0.5
r^{rb} | 0.041 | 0.072
P^T | $800 | $1 200
c^D | 0.101 | 0.101
c^{MΣωb} | 0.05 | 0.05
γ | 0.1828 | 0.2207
ϱ | 0.031 | 0.032
S | 750 | 1 500
R | 550 | 575
r^{SΣb} | 0.15 | 0.25
E^r | $1 849.8 | $518.65
σ^M | 2 755 | 4 910
c^{tb} | 0.035 | 0.035
u | 0.03621 | 0.06587
v | 0.90329 | 0.90329
c^{tΣb} | 0.045 | 0.045
w | 0.04429 | 0.04398
Π | $2 024.4 | $694.6
f^{Σb} | 0.4 | 0.3
f̂^{Σb} | 0.2 | 0.15
r^{fb} | 0.01 | 0.01
c^{Mωb} | 0.05 | 0.055
p^{ib} | 0.01 | 0.01
r^{Rb} | 0.5 | 0.5
r^{Sb} | 0.15 | 0.25
c^{pb} | 0.05 | 0.05
f^B | 0.08 | 0.08
c^{iΣb} | 0.05 | 0.06
c^M | 0.04 | 0.06
r^{pΣ} | 0.01 | 0.01
M^* | $10 100 | $12 137.5
r^{M*} | 0.031 | 0.0545
D^* | $15 387.34 | $18 631.82
T^* | $7 246.53 | $7 732.28
Π^* | $656.02 | −$974.36
M^{n*} | $8 601.42 | $9 606.93
r^{Mn*} | 0.8365 | 1.0209
D^{n*} | $7 659.27 | $8 918.64
T^{n*} | $7 246.53 | $7 732.28
Π^{n*} | $4 182.55 | $5 104.4
5.1.2. Computation of Subprime Mortgage Securitization Parameters

We compute important equations by using the values from Table 2. For $\hat E_{t-1} = \$200$ and $\hat E_t = \$250$, in period $t$, IB's profit under RMBSs and retained earnings in (2.16) is given by $\Pi^\Sigma_t = 2502$. IB's capital (2.17) is given by $K^\Sigma_{t+1} = 650$, while the nett cash flow under RMBSs given by (2.18) is computed as $N^\Sigma_t = 439.7$. Furthermore, the valuation in (2.19) is equal to $V^\Sigma_t = 1089.7$, while the optimal profit under RMBSs in (2.23) is $\Pi^{\Sigma*}_t = 1169.79$. In this case, the optimal RML supply in (2.24) and its rate (2.25) are given by $M^{\Sigma n*}_t = 3395.65$ and $r^{M\Sigma n*}_t = 1.3719$, respectively. The corresponding Treasuries (2.26), deposits (2.27) and profits (2.28) are given by $T^{\Sigma*}_t = 7246.53$, $D^{\Sigma*}_t = 7638.5$ and $\Pi^{\Sigma*}_t = 7613.93$, respectively.

The following values are computed in period $t$ under RMBS CDOs. IB's profit under RMBS CDOs and retained earnings in (3.30) is given by $\Pi^{\Sigma b}_t = 1731$, while IB's capital in (3.31) is of the form $K^{\Sigma b}_{t+1} = 650$. IB's nett cash flow (3.32) for a shareholder is given by $N^{\Sigma b}_t = -331.3$. IB's valuation in (3.33) is equal to the nett cash flow plus ex-dividend value, with a value of $V^{\Sigma b}_t = 318.7$. The total capital constraint (3.36) is given by $K^{\Sigma b}_t = 500 \ge 484$. IB's optimal valuation problem under RMBS CDOs is to maximize the value by choosing the RMBS rate, deposits and regulatory capital for (3.37), subject to the RMBS, balance sheet, cash flow and financing constraints given by (3.38), (3.39), (3.29) and (3.31), respectively. Here, we have $\sigma^B_t = -5675$ and $D_t = 9300$. IB's optimal RMBS supply (3.40) and its rate (3.41) are given by $B^*_t = 10400$ and $r^{B*}_t = -1.715$, respectively. In this case, optimal deposits (3.42), provisions for deposit withdrawals via Treasuries (3.43) and profits under RMBS securitization (3.44) are given by $D^{\Sigma b*}_t = 27652.39$, $T^{\Sigma b*}_t = 7897.53$ and $\Pi^{\Sigma b*}_t = -19141.33$, respectively.

In period $t+1$, profit under RMBSs and retained earnings in (2.16) is given by $\Pi^\Sigma_{t+1} = 1876$, while the optimal profit under RMBSs in (2.23) is computed as $\Pi^{\Sigma*}_{t+1} = 240.21$. Also, the optimal RML level (2.24) and its rate (2.25) are given by $M^{\Sigma n*}_{t+1} = 4870.81$ and $r^{M\Sigma n*}_{t+1} = 1.5078$, respectively. The corresponding Treasuries (2.26), deposits (2.27) and profits (2.28) are given by $T^{\Sigma*}_{t+1} = 7732.28$, $D^{\Sigma*}_{t+1} = 9307.19$ and $\Pi^{\Sigma*}_{t+1} = 9405.41$, respectively.

The following values are computed in period $t+1$ under RMBS CDOs. IB's profit under RMBS CDOs and retained earnings in (3.30) is given by $\Pi^{\Sigma b}_{t+1} = 754.85$, while the total capital constraint (3.36) is given by $K^{\Sigma b}_{t+1} = 650 \ge 572$. IB's optimal valuation problem is to maximize the value by choosing the RMBS rate, deposits and regulatory capital for (3.37), subject to the RMBS, balance sheet, cash flow and financing constraints given by (3.38), (3.39), (3.29) and (3.31), where $\sigma^B_{t+1} = -5475$ and $D_{t+1} = 11099.7$. IB's optimal RMBS supply (3.40) and its rate (3.41) are given by $B^*_{t+1} = 13950$ and $r^{B*}_{t+1} = -2.385$, respectively. In this case, optimal deposits (3.42), provisions for deposit withdrawals via Treasuries (3.43) and profits under RMBS securitization (3.44) are given by $D^{\Sigma b*}_{t+1} = 36995.1$, $T^{\Sigma b*}_{t+1} = 9730.28$ and $\Pi^{\Sigma b*}_{t+1} = -37811.06$, respectively. We provide a summary of the computed profit and valuation parameters under RMBSs and RMBS CDOs in Table 3 below.
Table 3. Computed Capital, Information, Risk and Valuation Parameters under Securitization

Parameter | Period t | Period t + 1
Π^Σ | $2 502 | $1 876
N^Σ | $439.7 | –
Π^{Σ*} | $1 169.79 | $240.21
K^Σ | $650 | $650
V^Σ | $1 089.7 | –
M^{Σn*} | $3 395.65 | $4 870.81
r^{MΣn*} | 1.3719 | 1.5078
D^{Σ*} | $7 638.5 | $9 307.19
T^{Σ*} | $7 246.53 | $7 732.28
Π^{Σ*} (slack) | $7 613.93 | $9 405.41
Π^{Σb} | $1 731 | $754.85
N^{Σb} | −$331.3 | –
K^{Σb} | $500 | $650
V^{Σb} | $318.7 | –
σ^B | −5 675 | −5 475
B^* | $10 400 | $13 950
r^{B*} | −1.715 | −2.385
D^{Σb*} | $27 652.39 | $36 995.1
T^{Σb*} | $7 897.53 | $9 730.28
Π^{Σb*} | −$19 141.33 | −$37 811.06
5.2. Example Involving Profit from Mortgage Securitization
The ensuing example illustrates issues involving mortgage securitization and its connections with profit and capital. The sale of OR's original RML portfolio via securitization is intended to save economic capital⁴, $K^e$, where $K^e \ge E$. $K^e$ differs from regulatory capital, $K$, in the sense that the latter is the mandatory capital that regulators require to be maintained, while $K^e$ is the best estimate of required capital that ORs use internally to manage their own risk and the cost of maintaining $K$ (compare with (1.6)). In the ensuing example, we assume that OR's profit from RML securitization, $\Pi^\Sigma \propto \Pi$, where $\Pi$ is expressed as return-on-equity (ROE⁵; denoted by $r^E$) and return-on-assets (ROA⁶; denoted by $r^A$).

Subsection 5.2. contains the following discussions. The purpose of the analysis in Subsection 5.2.1. is to determine the costs and benefits of securitization and to assess the impact on ROE. The three steps involved in this process are the description of OR's unsecuritized RML portfolio, the calculation of OR's weighted cost of funds for on-balance sheet items, $c^{M\omega}$, as well as OR's weighted cost of funds for securitization, $c^{M\Sigma\omega}$. In Subsection 5.2.2., the influence on ROE results from both a lower level of $E$ and a reduced $c^{M\Sigma\omega}$. The value gained is either the present value or an improvement of annual margins averaged over the life of the securitization. In our example, the capital saving, $K^{es}$, is calculated with a preset forfeit percentage of 4 % used as an input. In this regard, the impact on ROE follows in Subsection 5.2.3.. Under a forfeit valuation of capital as a function of $f^\Sigma M$, we are able to determine whether securitization enhances $r^E$, by how much, and what the constraints are. Under full economic capital analysis, $K^e$ results from a direct calculation involving an RML portfolio model with and without securitization. The enhancement issue involves finding out whether the securitization enhances the risk-return profile of OR's original RML portfolio and, more practically, whether the post-securitization $r^{E\Sigma}$ is higher or lower than the pre-securitization $r^E$.

5.2.1. Cost of Funds

In the sequel, we show how $K^{es}$ results from securitization, where $K^{es} \propto f^\Sigma M$ is valid for $K$ under Basel capital regulation. For the sake of illustration, we assume that the balance sheet (1.13) can be rewritten as $M_t = D_t + n_t E_{t-1} + O_t$. For OR's original RML portfolio, $M = 10\,000$, suppose that the weight $\omega^M = 0.5$ and the market cost of equity is $c^E = 0.25$ before tax. In the case where $\rho = 0.08$ (see [21] for more information), regulatory capital is given by

$$ K_t = n_t E_{t-1} + O_t = 1.25 \times 200 + 150 = 400 = 0.08 \times 0.5 \times 10\,000, $$

⁴ Economic capital is the amount of risk capital (equity, $E$) which OR requires to mitigate against risks such as market, credit, operational and subprime risk. It is the amount of money which is needed to secure survival in a worst case scenario.
⁵ The ratio of profit after tax to the equity capital employed.
⁶ A ratio which measures the return OR generates from its total assets.
with nt = 1.25. Furthermore, we assume that cE is equivalent to a minimum accounting rE = 0.25 and that OR considers RML securitization of an amount f Σ M = 6 500, while K includes subordinate debt with rO = 0.102 and deposits (liabilities) with cost cD = 0.101. We suppose that OR's original RML portfolio, M, has an effective duration of 7 years despite its 10 year theoretical duration, due to early voluntary prepayments as before and during the SMC. The return net of statistical losses and direct monitoring and transaction costs is rM = 0.102. Then

K es = 260 = 0.04 × 6 500,

where the K es calculation uses a 4 % forfeit applied to f Σ M. K es is constituted by 130 of equity and 130 of subordinate debt and is the marginal risk contribution of f Σ M as evaluated with a portfolio model. The resulting K es would depend on the selection of f Σ M and its correlation with M. In the following table, we provide an example of a subprime RML securitization with two classes of tranches, viz., sen (AAA rating) and sub (including mezz and jun tranches; BBB rating). Given such ratings, the required rate of return for sub tranches is rBSub = 0.1061 and that of sen tranches is rBS = 0.098 ≤ cD = 0.101. However, in order to obtain a BBB rating, the CRA imposes that the sub tranches satisfy B sub ≥ 0.101 × f Σ M. The direct costs include the initial cost of organizing the structure, ci, plus the servicing fees, f s. The annual servicing fees are f s = 0.002 × f Σ M.

Table 4. OR's Original RML Portfolio

Current Funding
  Cost of Equity (cE)                             0.25
  Cost of Subordinate Debt (cSD)                  0.102
  Cost of Deposits (cD)                           0.101
Structure
  Cost of Sen Tranches (cBS)                      0.098
  Weight of Sen Tranches (ω BS)                   0.9
  Maturity of Sen Tranches (mBS)                  10 years
  Cost of Sub Tranches (cBSub)                    0.1061
  Weight of Sub Tranches (ω BSub)                 0.1
  Maturity of Sub Tranches (mBSub)                10 years
  Direct Costs of the Structure (cΣ)              0.002
Original Assets
  Reference RML Portfolio Rate of Return (rM)     0.102
  Reference RML Portfolio Duration (y M)          7 years
Outstanding Balances
  Outstanding RMLs (M)                            10 000
  Securitized RML Amount (f Σ M)                  6 500
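As a quick sanity check, the following minimal Python sketch (ours, not part of the original example; all variable names are assumptions) reproduces the capital arithmetic above: regulatory capital K = ρ × ω M × M and the 4 % forfeit saving K es on the securitized amount.

rho, omega_M = 0.08, 0.5        # Basel ratio and RML risk weight
M, f_sigma_M = 10_000, 6_500    # original and securitized RML amounts

K = rho * omega_M * M                        # 0.08 * 0.5 * 10 000 = 400
K_check = 1.25 * 200 + 150                   # n_t E_{t-1} + O_t = 400, as in the text
K_es = 0.04 * f_sigma_M                      # 4 % forfeit saving: 260
K_after = rho * omega_M * (M - f_sigma_M)    # capital on the retained 3 500: 140

print(round(K), K_check, round(K_es), round(K_after))   # 400 400.0 260 140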
For f Σ M = 6 500, the sen tranches fund 5 850 and the sub tranches fund 650, with M decreasing from 10 000 to 3 500, so that the weighted RMLs are M ω = 1 750 = 0.5 × 3 500. The capital required against this portfolio is K = 140 = 0.08 × 1 750. With an initial K e = 400, the deal saves and frees K es = 260 for further utilization. This is shown in the next table.
Table 5. OR's Required Capital Before and After Securitization

Outstanding Balances                     Value     Required Capital
OR's Original RML Portfolio (M)          10 000    400
OR's Reference RML Portfolio (f Σ M)     6 500     260
Sen Tranches (B S)                       5 850     Sold
Sub Tranches (B Sub)                     650       Sold
Final RML Portfolio ((1 − f Σ)M)         3 500     140
Total RMLs ((1 − f Σ)M)                  3 500     –
Total Weighted RMLs (M ω)                1 750     140
The cost of funds structure consists of K at 4 % – divided into 2 % E at cE = 0.25 and 2 % O at cSD = 0.102 – as well as 96 % deposits at cD = 0.101. We consider the face value of D by using book values for weights, so that the weighted cost of funds is cM ω = 0.96 × 0.101 + 0.08 × 0.5 × (0.25 × 0.5 + 0.102 × 0.5) = 0.104,
where cM ω is consistent with the required return-on-equity (ROE) of rE = 0.25 before tax. If OR's original RML portfolio fails to generate this required return, rE adjusts. Since OR's original RML portfolio generates only rM = 0.102 < cM ω = 0.104, the actual rE < 0.25. The return actually obtained by shareholders is such that the average cost of funds is identical to rA = 0.102. Setting rA = cM ω, the effective rE may be computed from

rA = 0.96 × 0.101 + 0.08 × 0.5 × (rE × 0.5 + 0.102 × 0.5).
After calculation, rE = 0.15 < 0.25 before tax. In this case, it would be impossible to raise new capital, since the portfolio return does not compensate for its risk. Therefore, OR cannot originate any additional RMLs without securitization. In addition, the securitization needs to improve the return to shareholders from (1 − f Σ)M. The potential benefit of securitization is a reduction in cM ω. The cost of funds via securitization, cM Σω, is the weighted cost of the sen and sub tranches (denoted by cBSω and cBSubω, respectively) plus any additional cost of the structure, cΣ = 0.002. Without considering differences in duration, the cost of sen notes is cBSω = 9.8 % and that of sub notes is cBSubω = 10.61 %. The weighted average, before monitoring and transaction costs, is

cM Σω = 0.09881 = 0.9 × 0.098 + 0.1 × 0.1061.
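The chain of rates above is easy to mislay, so here is a hedged Python sketch (ours; the weighting identity is the one used in the text, the variable names are assumptions) that reproduces cM ω = 0.104, the effective rE = 0.15 and cM Σω = 0.09881.

c_D, c_SD, c_E = 0.101, 0.102, 0.25   # deposits, subordinate debt, equity
rho, omega_M = 0.08, 0.5

# On-balance-sheet weighted cost of funds (with the required r_E = 0.25):
c_M_omega = 0.96 * c_D + rho * omega_M * (0.5 * c_E + 0.5 * c_SD)
print(round(c_M_omega, 3))            # 0.104

# The portfolio only returns r_A = 0.102; solving the same identity for the
# effective return on equity r_E:
r_A = 0.102
r_E = ((r_A - 0.96 * c_D) / (rho * omega_M) - 0.5 * c_SD) / 0.5
print(round(r_E, 2))                  # 0.15

# Weighted cost of funds via securitization (sen and sub tranche weights):
c_M_sigma_omega = 0.9 * 0.098 + 0.1 * 0.1061
print(round(c_M_sigma_omega, 5))      # 0.09881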
The overall cost of securitization, cM ΣωA, is the sum of cM Σω and the annual cΣ = 0.002 averaged over the life of the deal. The overall cost, cM ΣωA, becomes the aggregate weighted cost of funds via securitization, so that
cM ΣωA = 0.10081 = cM Σω + 0.002 = 0.09881 + 0.002.

From the above, we draw the following preliminary conclusions. Firstly, we note that cM ΣωA = 0.10081 < cM ω = 0.104. This is sufficient to make RML securitization beneficial. Also, cM ΣωA = 0.10081 < rM = 0.102. Therefore, selling OR's reference RML portfolio to SPV generates a capital gain which improves OR's profitability. However, the change in rE remains to be quantified.

5.2.2. Return on Equity (ROE)

The value of OR's reference RML portfolio, f Σ M, is the discounted value of cash flows calculated at a rate equal to the required return to ORs, rB. The average required return to ORs who buy SPV's securities is rB = 0.10081 = cM ΣωA. For OR's lenders and shareholders, the average return on OR's unsecuritized RML portfolio is rls = 0.102. Nevertheless, existing shareholders would like to have rE = 0.25 instead of the rE = 0.15 resulting from the insufficient rA = 0.102. In order to obtain rE = 0.25, OR's ROA should be higher and reach rA = 0.104. The present value of f Σ M for ORs is the discounted value of future flows at δ = 0.1008. The value of this RML portfolio for those who fund it results from discounting the same cash flows either at δ = 0.102, with the current effective rE = 0.15, or at δ = 0.104, with the required rE = 0.25. In both cases, these discount rates exceed the δ = 0.1008 required by ORs. Therefore, the price of OR's reference RML portfolio at the δ = 0.1008 discount rate required by ORs will be higher than the price calculated with either δ = 0.102 or δ = 0.104. The difference is a capital gain for OR's existing shareholders.

Since the details of the projected cash flows generated by f Σ M are unknown, an accurate calculation of its present value is not feasible. In practice, a securitization model generates the entire cash flows, with all interest received from OR's reference RML portfolio, rM, voluntary and involuntary prepayments as well as recoveries. For this example, we simplify the entire process by circumventing model intricacy for the capital structure. The duration formula offers an easier way to obtain a valuation for OR's reference RML portfolio. We know that the discounted value of the future flows generated by OR's reference RML portfolio at rA = 0.102 is exactly its face value, because its return is 0.102. With another discount rate, the present value differs from this face value. An approximation of this new value can be obtained from the duration formula via

(p1 f Σ M − 100)/100 = −Duration/(1 + i) × (δ − rM),

where the present value of OR's reference RML portfolio is denoted by p1 f Σ M. In this case, the rate of return from f Σ M is rM and the discount rate is δ, while the ratio (p1 f Σ M − 100)/100 provides this value as a percentage of the face value, M. The duration formula provides p1 f Σ M given all three other parameters, so that

p1 f Σ M (% of Face Value) = 100 % + Duration × (rM % − δ %).
Since rM = 0.102, the value of f Σ M at the discount rate δ = 10.08 % is
p1 f Σ M = 1 + 7 × (0.102 − 0.1008) = 1.0084.

This means that the sale of RMLs to the securitization structure generates K es = 0.0084 over an amount of 6 500, or 54.6 in value. The sale of OR's reference RML portfolio will generate a capital gain only when cM ΣωA < rM = 0.102, so that
cM ΣωA < cM ω. In this case, the capital gain from the sale of f Σ M will effectively increase revenues, thereby increasing the average rA on the balance sheet. This is a sufficient condition to improve rE under the present assumptions. The reason is that the effective ROE remains a linear function of the effective rM, inclusive of capital gains from the sale of f Σ M to SPV, as long as the weights used to calculate it from rA as a percentage of OR's original RML portfolio remain approximately constant. This relation remains
rA = 0.96 × 0.101 + 0.08 × 0.5 × (rE × 0.5 + 0.102 × 0.5).

This is true as long as E ∝ f Σ M, which is the case in this example. However, in general, K e is not proportional to f Σ M, and the linear relationship collapses. One has to take uncertainty into account if one is required to determine the effective rE. Note that the current cM ω = 0.102, by definition, since it equates rM with the weighted average cost of capital. The implied return to shareholders is rls = 0.15. Whenever cM ΣωA < rM, it is by definition lower than the effective cM ω. If the shareholders were to obtain rE = 0.25, instead of the effective rE = 0.15 only, then cM ω = 0.104. This securitization would be profitable as long as cM ΣωA < 0.104. Since, in this case, cM ΣωA = 0.1008, the deal meets both conditions. However, only the first condition is needed to generate a capital gain. Using the current effective rE = 0.15, we find that OR's capital gain from selling RMLs is 0.0084, as shown in Table 6 below.

It is possible to convert K es from securitization into an additional annualized margin obtained over the life of the deal. A simple proxy for this annual margin is the instantaneous capital gain averaged over the life of the deal (ignoring the time value of money). The gain is K es = 54.6 = 0.0084 × 6 500. This implies that, subsequent to securitization, OR's reference RML portfolio provides rM = 0.102 plus an annual return of K es = 0.0084 applicable only to f Σ M = 6 500. Once OR's original RML portfolio has been securitized, the size of the balance sheet drops to 3 500, which still provides rM = 0.102. There is an additional return due to the capital gain. Since this annualized capital gain is K es = 0.0084 of 6 500, it is 0.0084/6 500 = 0.000001292 in percentage terms, applicable to 3 500. Accordingly, rM = 0.102 increases to rM = 0.102001292 after mortgage securitization. This increased rM also implies a higher rE (see Subsection 5.2.3. below).
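A small Python sketch (ours; the names are assumptions) of the duration approximation used above, reproducing the revaluation of f Σ M and the resulting capital gain:

duration = 7          # effective duration of the reference RML portfolio (years)
r_M = 0.102           # portfolio rate of return
delta = 0.1008        # discount rate required by ORs (cM SigmaOmegaA)
f_sigma_M = 6_500     # securitized amount

p1 = 1 + duration * (r_M - delta)    # value as a multiple of face: 1.0084
gain = (p1 - 1) * f_sigma_M          # 0.0084 * 6 500 = 54.6 in value
print(round(p1, 4), round(gain, 1))  # 1.0084 54.6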
Table 6. OR's Costs and Benefits from Mortgage Securitization

cM ω                                               0.102
cBSub                                              0.1061
cM Σω                                              0.0988
cM ΣωA                                             0.1008
OR's Reference RML Portfolio Value at cM ω         1
OR's Reference RML Portfolio Value at cM ΣωA       1.0084
K es                                               0.0084
5.2.3. Enhancing ROE Via Securitization

Under a forfeit valuation of capital as a function of f Σ M, it is relatively easy to determine whether the securitization enhances rE, by how much and what the limitations are. Under a full economic analysis, the capital results from a direct calculation of OR's reference and unsecuritized RML portfolios. The enhancement issue consists of finding out whether the securitization enhances the risk-return profile of OR's original RML portfolio and, more practically, whether the post-securitization rE is higher or lower than the pre-securitization rE. We address both these issues in the sequel.

Table 7 shows the income statement under OR's reference and unsecuritized RML portfolios. The deposits, subordinate debt and equity represent the same percentages of OR's original RML portfolio, viz., 0.96, 0.02 and 0.02, respectively. Their costs are identical to the above. Before OR's original RML portfolio is securitized, rM = 0.102, while thereafter rM = 0.102001292. This gain influences the ROE directly, with an increase from rE = 0.15 to rE = 0.20263. In general, an increase in rM causing an increase in rE is not guaranteed, since K es is the marginal risk contribution of f Σ M. Therefore, an increase in rM due to K es from the sale of OR's reference RML portfolio to SPV might not increase rE if K es is lower. For instance, if K decreases to 190 and subordinate debt does so as well, the remaining deposits being the complement to 9 000 or 9 620, the same calculations show that the new ROE becomes rE = 0.15789 = 30/190. It is necessary to determine K e before and after OR's original RML portfolio is securitized in order to determine the size of K es and to perform return calculations on the new capital subsequent to securitization. Once K e is determined and converted into a percentage of OR's original RML portfolio, we have the same type of formula as above.
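The pre-securitization block of Table 7 below can be reproduced with a few lines of Python; the sketch is ours (the income-statement layout and names are assumptions, the balances and rates are the paper's).

RMLs, deposits, equity, sub_debt = 10_000, 9_600, 200, 200
r_M, c_D, c_SD = 0.102, 0.101, 0.101   # Table 7 charges subordinate debt at 0.101

income = RMLs * r_M                        # 1 020
costs = deposits * c_D + sub_debt * c_SD   # 969.6 + 20.2
profit = income - costs                    # about 30, as in Table 7
print(round(profit / equity, 2))           # 0.15, the pre-securitization ROE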
Table 7. Effect of Securitization on OR's Return on Capital

                                   Balances   Returns and Costs (%)   Returns and Costs
Pre-Securitization of 6 500 RMLs
RMLs                               10 000     0.102                   1 020
Deposits                           9 600      −0.101                  −969.6
Equity Capital                     200                                30
Subordinate Debt                   200        −0.101                  −20.2
Return on Capital                             0.15
Post-Securitization
RMLs                               3 500      0.102001292             357.0
Deposits                           3 360      −0.101                  −339.36
Equity Capital                     70
Subordinate Debt                   70         −0.101                  −7.07
Return on Capital                             0.20263
Post-Securitization (Alternative, K = 190)
RMLs                               3 500      0.102001292             357.0
Deposits                           3 360      −0.101                  −339.36
Equity Capital                     190                                29.9991
Subordinate Debt                   190        −0.101                  −19.19
Return on Capital                             0.15789

5.3. Example of a Subprime RMBS Deal

The example in this section is about specific RMBSs and discusses SAIL 2006-2 in terms of the evolution of different tranches' riskiness, with the refinancing of reference RML portfolios affecting the loss triggers for subordinated tranches and changing the sensitivity of the values of different claims to the house prices that drive collateral values. Furthermore, in Subsection 5.3., we provide an argument about how the speed of securitization affects the optionality of RMBS CDO tranches with respect to the underlying house prices.

The example contained in this subsection is based on [11] and discusses the subprime RMBS deal Structured Asset Investment Loan Trust 2005-6 (SAIL 2005-6) issued in July 2005. The bond capital structure is outlined in Table 8 below. From Table 8 we see that the majority of tranches in SAIL 2005-6 have an investment-grade rating of BBB- or higher, with Class A1 to A9 certificates being rated AAA. On a pro rata basis, Class A1 and A2 certificates receive principal payments, ϖ1 f Σ M, concurrently, unless cumulative reference RML portfolio losses or delinquencies exceed specified levels. In the latter case, these classes will be treated as senior, sequential pay tranches. The classes of certificates listed in Table 8 were offered publicly via the SAIL 2005-6 prospectus supplement, while others, like the Class P, Class X and Class R certificates, were not.

Four types of reference RML portfolios constitute the deal, with limited cross-collateralization. Principal payments, ϖ1 f Σ M, on the senior certificates will mainly depend on how the reference RML portfolios are constituted. However, the senior certificates will have the benefit of CE in the form of OC and subordination from each RML portfolio. As a consequence, even if the rate of loss per reference RML portfolio related to any class of sen certificates is low, losses in unrelated RMLs may reduce the loss protection for those certificates.

At initiation, we note that the mezz tranches (AA+ to BBB-) were very thin, assuming minimal defaults. This thinness may be offset by a significant prepayment amount, ϖ2 f p f Σ M, entering the deal at the outset.
Table 8. Structured Asset Investment Loan Trust 2005-6 Capital Structure; Source: [25]

Class   RML Reference Portfolios   Type                Principal Amount (Dollars)   Tranche Thickness (%)   Moody's   S&P    Fitch
A1      1                          Senior              455 596 000                  20.18 %                 Aaa       AAA    AAA
A2      1                          Senior              50 622 000                   2.24 %                  Aaa       AAA    AAA
A3      2                          Senior              506 116 000                  22.42 %                 Aaa       AAA    AAA
A4      3                          Senior Seqntl Pay   96 977 000                   4.30 %                  Aaa       AAA    AAA
A5      3                          Senior Seqntl Pay   45 050 000                   2.00 %                  Aaa       AAA    AAA
A6      3                          Senior Seqntl Pay   23 226 000                   1.03 %                  Aaa       AAA    AAA
A7      4                          Senior Seqntl Pay   432 141 000                  19.14 %                 Aaa       AAA    AAA
A8      4                          Senior Seqntl Pay   209 009 000                  9.26 %                  Aaa       AAA    AAA
A9      4                          Senior Seqntl Pay   95 235 000                   4.22 %                  Aaa       AAA    AAA
M1      1,2,3,4                    Subordinated        68 073 000                   3.02 %                  Aa1       AA+    AA+
M2      1,2,3,4                    Subordinated        63 534 000                   2.81 %                  Aa2       AA     AA
M3      1,2,3,4                    Subordinated        38 574 000                   1.71 %                  Aa3       AA-    AA-
M4      1,2,3,4                    Subordinated        34 036 000                   1.51 %                  A1        A+     A+
M5      1,2,3,4                    Subordinated        34 036 000                   1.51 %                  A2        A      A
M6      1,2,3,4                    Subordinated        26 094 000                   1.16 %                  A3        A-     A-
M7      1,2,3,4                    Subordinated        34 036 000                   1.51 %                  Baa2      BBB    BBB
M8      1,2,3,4                    Subordinated        22 691 000                   1.01 %                  Baa3      BBB-   BBB-
M9      1,2,3,4                    Subordinated        11 346 000                   0.50 %                  N/R       BBB-   BBB-
M10-A   1,2,3,4                    Subordinated        5 673 000                    0.25 %                  N/R       BBB-   BB+
M10-F   1,2,3,4                    Subordinated        5 673 000                    0.25 %                  N/R       BBB-   BB+
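A sketch (ours) confirming that the tranche thickness column of Table 8 is simply each class's principal as a share of total deal principal:

principal = {
    "A1": 455_596_000, "A2": 50_622_000, "A3": 506_116_000, "A4": 96_977_000,
    "A5": 45_050_000, "A6": 23_226_000, "A7": 432_141_000, "A8": 209_009_000,
    "A9": 95_235_000, "M1": 68_073_000, "M2": 63_534_000, "M3": 38_574_000,
    "M4": 34_036_000, "M5": 34_036_000, "M6": 26_094_000, "M7": 34_036_000,
    "M8": 22_691_000, "M9": 11_346_000, "M10-A": 5_673_000, "M10-F": 5_673_000,
}
total = sum(principal.values())
for cls in ("A1", "M9"):
    print(cls, round(100 * principal[cls] / total, 2), "%")
# A1 gives 20.18 % and M9 gives 0.5 % (50 bps), matching the table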
An example of such a thin tranche is M9, with a thickness of 50 bps but a BBB- investment-grade rating. Although the rating may not necessarily be wrong, the underlying assumption is that the cash flow dynamics of SAIL 2005-6 have a high probability of success. Some of the characteristics of the reference RML portfolios are shown below.
5.4. Comparisons between Two Subprime RMBS Deals
In this subsection, we follow [11] by considering the subprime securitization deals Ameriquest Mortgage Securities Inc. 2005-R2 (AMSI 2005-R2) and Structured Assets Investment Loan Trust 2006-2 (SAIL 2006-2). Both AMSI 2005-R2 and SAIL 2006-2 possess the basic structures of securitization deals outlined in Subsection 5.3., with OC and various triggers determining the dynamics of CE. AMSI 2005-R2 consists of three portfolios, while both deals have OC. Our aim is to compare the performance of AMSI 2005-R2 and SAIL 2006-2 with 2005 vintage RMLs and 2006 vintage RMLs, respectively. The 2006 vintage subprime RMLs under-performed as H started to decline in that year. The ensuing examples demonstrate how the extent of refinancing of the underlying RMLs affects securitization.

Table 9. Summary of the Reference RML Portfolios' Characteristics; Source: [11]

                       Pool 1     Pool 2     Pool 3      Pool 4
% First Lien           94.12 %    98.88 %    100.00 %    93.96 %
% 2/28 ARMs            59.79 %    46.68 %    75.42 %     37.66 %
% 3/27 ARMs            20.82 %    19.14 %    19.36 %     9.96 %
% Fixed Rate           13.00 %    8.17 %     2.16 %      11.46 %
% Full Doc             59.98 %    56.74 %    44.05 %     35.46 %
% Stated Doc           39.99 %    37.47 %    34.30 %     33.17 %
% Primary Residence    90.12 %    90.12 %    80.61 %     82.59 %
WA FICO                636        615        673         635

5.4.1. Details of AMSI 2005-R2 and SAIL 2006-2

Tables 10 and 11 present the AMSI 2005-R2 deal structure, tranche thickness and ratings at the outset as well as in Q1:07. The initial thicknesses of the BBB tranches, measured as a percentage of collateral, are extremely small. CRAs do not usually allow such thin tranches, but it is anticipated that these tranches will grow as more sen tranches amortize as a result of refinancing and sequential amortization. Further, we note the subordination percentages for BBB tranches at inception. For instance, the M9 tranche of AMSI 2005-R2 has only 1.1 % of subordination. However, as amortization occurs, CE accumulates and reference RMLs refinance, so this situation could improve. In that case, the deals shrink as amortization occurs. Also, after the step-down date, the BBB tranches will seem attractive, depending on H.

Tables 12 and 13 present the SAIL 2006-2 deal structure, tranche thickness and ratings at the outset as well as in Q1:07. Once again, the initial thicknesses of the BBB tranches, measured as a percentage of collateral, are extremely small. As far as the subordination percentages for BBB tranches at inception are concerned, the M8 tranche of SAIL 2006-2, for instance, has only 0.7 % subordination. As before, as amortization occurs, CE accumulates and reference RMLs refinance, so this situation could improve.
Table 10. Ameriquest Mortgage Securities Inc. 2005-R2 (AMSI 2005-R2) At Issue in 2005; Source: [2]

Class    Size            Related Mortgage Pool(s)   Ratings (Fitch/Moody's/S&P)   % of Collateral   Subordination
Publicly-Offered Certificates
A-1A     258 089 000     I          AAA/Aaa/AAA       21.5 %    35.48 %
A-1B     64 523 000      I          AAA/Aaa/NR        5.4 %     19.35 %
A-2A     258 048 000     II         AAA/Aaa/AAA       21.5 %    35.48 %
A-2B     64 511 000      II         AAA/Aaa/NR        5.4 %     19.35 %
A-3A     124 645 000     III        AAA/Aaa/AAA       10.4 %    19.35 %
A-3B     139 369 000     III        AAA/Aaa/AAA       11.6 %    19.35 %
A-3C     26 352 000      III        AAA/Aaa/AAA       2.2 %     19.35 %
A-3D     32 263 000      III        AAA/Aaa/NR        2.7 %     19.35 %
M1       31 200 000      I,II,III   AA+/Aa1/AA+       2.6 %     16.75 %
M2       49 800 000      I,II,III   AA/Aa2/AA         4.1 %     12.6 %
M3       16 800 000      I,II,III   AA-/Aa3/AA-       1.4 %     11.2 %
M4       28 800 000      I,II,III   A+/A1/A+          2.4 %     8.8 %
M5       16 800 000      I,II,III   A/A2/A            1.4 %     7.4 %
M6       12 000 000      I,II,III   A-/A3/A-          1.0 %     6.4 %
M7       19 200 000      I,II,III   BBB+/Baa1/BBB+    1.6 %     4.8 %
M8       9 000 000       I,II,III   BBB/Baa2/BBB      0.7 %     4.05 %
M9       13 200 000      I,II,III   BBB/Baa2/BBB-     1.1 %     2.95 %
Non-Publicly-Offered Certificates
M10      7 800 000       I,II,III   BB+/Ba1/BB+       1.0 %     1.3 %
M11      12 000 000      I,II,III   BB/Ba2/BB         1.3 %     0.0 %
CE       15 600 000                 NR/NR/NR
Total    1 200 000 000
Collateral   1 200 000 147
5.4.2. Comparisons between AMSI 2005-R2 and SAIL 2006-2
Judging from Q1:07, the two deals differ dramatically. AMSI 2005-R2 is older than SAIL 2006-2 and, by Q1:07, AMSI 2005-R2 has passed its triggers. As expected, the tranche thicknesses and subordination levels have increased. For example, M9 initially had a 1.1 % subordination level, but by Q1:07 its subordination is 9.06 %. Despite this, Fitch has downgraded the BBB tranches to B. By contrast, SAIL 2006-2 took place during a period in which H was flat and the frequency of refinancing had declined. Neither tranche thickness nor subordination had increased significantly, weakening the SAIL 2006-2 deal. This is reflected by the mezz tranche ratings.
Table 11. Ameriquest Mortgage Securities Inc. 2005-R2 (AMSI 2005-R2) In Q1:07; Source: [2]

Class    Size            Related Mortgage Pool(s)   Ratings (Fitch/Moody's/S&P)   % of Collateral   Subordination
Publicly-Offered Certificates
A-1A     258 089 000     I          AAA/Aaa/AAA       21.5 %    35.48 %
A-1B     64 523 000      I          AAA/Aaa/NR        5.4 %     19.35 %
A-2A     258 048 000     II         AAA/Aaa/AAA       21.5 %    35.48 %
A-2B     64 511 000      II         AAA/Aaa/NR        5.4 %     19.35 %
A-3A     124 645 000     III        AAA/Aaa/AAA       10.4 %    19.35 %
A-3B     139 369 000     III        AAA/Aaa/AAA       11.6 %    19.35 %
A-3C     26 352 000      III        AAA/Aaa/AAA       2.2 %     19.35 %
A-3D     32 263 000      III        AAA/Aaa/NR        2.7 %     19.35 %
M1       31 200 000      I,II,III   AA+/Aa1/AA+       2.6 %     16.75 %
M2       49 800 000      I,II,III   AA/Aa2/AA         4.1 %     12.6 %
M3       16 800 000      I,II,III   AA-/Aa3/AA-       1.4 %     11.2 %
M4       28 800 000      I,II,III   A+/A1/A+          2.4 %     8.8 %
M5       16 800 000      I,II,III   A/A2/A            1.4 %     7.4 %
M6       12 000 000      I,II,III   A-/A3/A-          1.0 %     6.4 %
M7       19 200 000      I,II,III   BBB+/Baa1/BBB+    1.6 %     4.8 %
M8       9 000 000       I,II,III   BBB/Baa2/BBB      0.7 %     4.05 %
M9       13 200 000      I,II,III   BBB/Baa2/BBB-     1.1 %     2.95 %
Non-Publicly-Offered Certificates
M10      7 800 000       I,II,III   BB+/Ba1/BB+       1.0 %     1.3 %
M11      12 000 000      I,II,III   BB/Ba2/BB         1.3 %     0.0 %
CE       15 600 000                 NR/NR/NR
Total    1 200 000 000
Collateral   1 200 000 147
6. Discussions on Subprime Mortgage Securitization and the SMC
In this section, we discuss the relationships between the SMC and optimal profit under RMBSs and RMBS CDOs as well as examples involving subprime mortgage securitization and its relationships with risk, profit and valuation. In particular, we focus our discussion on the main hypothesis of this paper that the SMC was partly caused by the intricacy and design of subprime mortgage securitization and systemic components that led to information (loss, asymmetry and contagion) problems, valuation opaqueness and ineffective risk mitigation.
6.1. Risk, Profit and Valuation under RMBSs and the SMC
In this subsection, we discuss the relationships between the SMC and risk, profit and valuation under RMBSs.
Table 12. Structured Asset Investment Loan Trust 2006-2 (SAIL 2006-2) At Issue in 2006; Source: [25]

Class    Size            Related Mortgage Pool(s)   Ratings (Moody's/S&P/Fitch)   % of Collateral   Subordination
Publicly-Offered Certificates
A1       607 391 000     I          Aaa/AAA/AAA         45.3 %    16.75 %
A2       150 075 000     I          Aaa/AAA/AAA         5.4 %     19.35 %
A3       244 580 000     II         Aaa/AAA/AAA         21.5 %    35.48 %
A4       114 835 000     II         Aaa/AAA/AAA         5.4 %     19.35 %
M1       84 875 000      III        Aa2/AA/AA           10.4 %    19.35 %
M2       25 136 000      III        Aa3/AA-/AA-         11.6 %    19.35 %
M3       20 124 000      III        A1/A+/A+            2.2 %     19.35 %
M4       20 124 000      III        A2/A/A              2.7 %     19.35 %
M5       15 428 000      I,II,III   A3/A-/A-            2.6 %     16.75 %
M6       15 428 000      I,II,III   Baa1/BBB+/BBB+      4.1 %     12.6 %
M7       11 404 000      I,II,III   Baa2/BBB/BBB        1.4 %     11.2 %
M8       10 733 000      I,II,III   Baa3/BBB-/BBB-      0.7 %     4.05 %
Non-Publicly-Offered Certificates
B1       7 379 000       I,II,III   Ba1/Ba1/Ba1         0.6 %     1.05 %
B2       7 379 000       I,II,III   Ba2/Ba2/Ba2         0.6 %     0.5 %
CE       6 708 733
Total    1 341 599 733
6.1.1. Subprime RMBSs and the SMC

The key design feature of subprime RMLs was the ability of MRs to finance and refinance their houses based on capital gains due to house price appreciation over short horizons, turning this into collateral for a new mortgage or extracting equity for consumption. The unique design of subprime mortgages resulted in unique structures for their securitizations (response to Problem 1.2). Further, the subprime RMBS bonds resulting from the securitization often populated the underlying portfolios of CDOs, which in turn were often designed as managed, amortizing portfolios of ABSs, RMBSs and CMBSs.

One always suspects that the securitization of credit risks would be a source of moral hazard that could endanger banking system stability. The practice of splitting the claims to a reference RML portfolio into tranches can actually be seen as a response to this concern. In this regard, sen and mezz tranches can be considered to be senior and junior debt, respectively. If ORs held equity tranches and if, because of packaging and diversification, the probability of default, i.e., the probability that reference portfolio returns fall short of the sum of sen and mezz claims, were (close to) zero, we would (almost) be subscribing to
Table 13. Structured Asset Investment Loan Trust 2006-2 (SAIL 2006-2) In Q1:07; Source: [25]

Class    Size            Related Mortgage Pool(s)   Ratings (Moody's/S&P/Fitch)   % of Collateral   Subordination
Publicly-Offered Certificates
A1       607 391 000     I          Aaa/AAA/AAA         45.3 %    16.75 %
A2       150 075 000     I          Aaa/AAA/AAA         5.4 %     19.35 %
A3       244 580 000     II         Aaa/AAA/AAA         21.5 %    35.48 %
A4       114 835 000     II         Aaa/AAA/AAA         5.4 %     19.35 %
M1       84 875 000      III        Aa2/AA/AA           10.4 %    19.35 %
M2       25 136 000      III        Aa3/AA-/AA-         11.6 %    19.35 %
M3       20 124 000      III        A1/A+/A+            2.2 %     19.35 %
M4       20 124 000      III        A2/A/A              2.7 %     19.35 %
M5       15 428 000      I,II,III   A3/A-/A-            2.6 %     16.75 %
M6       15 428 000      I,II,III   Baa1/BBB+/BBB+      4.1 %     12.6 %
M7       11 404 000      I,II,III   Baa2/BBB/BBB        1.4 %     11.2 %
M8       10 733 000      I,II,III   Baa3/BBB-/BBB-      0.7 %     4.05 %
Non-Publicly-Offered Certificates
B1       7 379 000       I,II,III   Ba1/?/?             0.6 %     1.05 %
B2       7 379 000       I,II,III   Ba2/?/?             0.6 %     0.5 %
CE       6 708 733
Total    1 341 599 733
the Diamond model, in which moral hazard in banking is negligible. Why then did this system fail? The answer to this question is straightforward: both ifs in the preceding statement failed to hold. ORs did not, in general, hold the equity tranches of the portfolios that they generated; indeed, as time went on, ever greater portions of equity tranches were sold to external IBs. Moreover, default probabilities for sen and mezz tranches were significant because, by contrast with the Diamond model, packaging did not provide for sufficient diversification of the returns on the reference RML portfolios in RMBS portfolios (see, for instance, [14]).

6.1.2. Risk and Profit under RMBSs and the SMC

In Subsection 2.2.1., a subprime mortgage model for profit under subprime RMBSs from (2.14) reflects the fact that OR sells RMLs and distributes risk to IBs through RMBSs. This way of mitigating risks involves operational, liquidity and tranching risk that returned to ORs when the SMC unfolded. ORs are more likely to securitize more RMLs if they hold less capital, are less profitable and/or less liquid, and hold RMLs of low quality. This situation was prevalent before the SMC, when ORs' pursuit of yield did not take decreased capital, liquidity and RML quality into account. Investments in RMBSs embed credit risk, which involves bankruptcy if investors cannot raise funds.
In Subsection 2.2.2., ΠΣ is given by (2.16) while K Σ has the form (2.17). It is interesting to note that the formulas for ΠΣ and K Σ depend on Π and K, respectively, and are far more complicated than the latter. Losses on RMBSs increased significantly as the crisis expanded from the housing market to other parts of the economy, causing ΠΣ (as well as retained earnings in (2.15)) to decrease. During the SMC, capital adequacy ratios declined as K Σ levels became depleted while banks were highly leveraged. Therefore, methods and processes which embed operational risk failed. Such risk had risen as banks succeeded in decreasing their capital requirements. This risk was not fully understood or acknowledged, which resulted in a loss of liquidity and failed operational risk management.

6.1.3. Valuation under RMBSs and the SMC

When U.S. house prices declined in 2006 and 2007, refinancing became more difficult and ARMs began to reset at higher rates. This resulted in a dramatic increase in subprime RML delinquencies, so that RMBSs began to lose value. Since these mortgage products are on the balance sheet of most banks, their valuation, given by (2.20) in Subsection 2.3., began to decline. Before the SMC, moderate reference RML portfolio delinquency did not affect valuation in a significant way. However, the value of mortgages and related structured products decreased significantly due to operational, tranching and liquidity risks during the SMC. The yield from these mortgage products decreased as a consequence of high default rates (credit risk), which caused illiquidity with a commensurate rise in the incidence of credit crunch and funding risk.

The imposition of fair value accounting for subprime mortgages enhances the scope for systemic risk that does not involve the intrinsic solvency of the debtors but rather the functioning or malfunctioning of the financial system. Under fair value accounting, the values at which securities are held in the banks' books depend on the prices that prevail in the market. If these prices change, the bank must adjust its books, even if the price change is due to market malfunctioning and even if it has no intention of selling the security but intends to hold it to maturity. Under currently prevailing capital adequacy requirements, this adjustment has immediate implications for the bank's financial activities. In particular, if the market prices of securities held by the bank have gone down, the bank must either recapitalize by issuing new equity or retrench its overall operations. The functioning of the banking system thus depends on how well asset markets are functioning. Impairments of the ability of markets to value mortgages and related structured products can have a large impact on the banking system.

6.1.4. Optimal Valuation under RMBSs and the SMC

In Subsection 2.4., IB's valuation performance criterion, J, at t is given by (2.21). The optimal valuation problem under RMBSs is to maximize the value given by (2.20) by choosing the RML rate, deposits and regulatory capital for (2.22), subject to the RML, balance sheet, cash flow and financing constraints given by (1.5), (1.13), (2.14) and (2.17), respectively. When the capital constraint given by (1.2) holds (i.e., lt > 0), a solution to the optimal valuation problem under RMBSs yields optimal profit under RMBSs of the form (2.23). When the capital constraint (1.2) does not hold (i.e., lt = 0) and P (Ct) > 0, then
optimal RML supply and its rate, (2.24) and (2.25) respectively, are solutions to the optimal valuation problem stated in Problem 2.3.

Notwithstanding the above, during the SMC, CRAs were reprimanded for giving investment-grade ratings to RMBSs backed by risky subprime RMLs. Before the SMC, these high ratings enabled such RMBSs to be sold to investors, thereby financing and exacerbating the housing boom. The issuing of these ratings was believed to be justified because of risk-reducing practices, such as CDI and equity investors willing to bear the first losses. However, during the SMC, it became clear that some role players in rating subprime-related securities knew at the time that the rating process was faulty. Uncertainty in financial markets spread to other financial role players, increasing the counterparty risk, which caused interest rates to increase. Refinancing became almost impossible and default rates exploded. All these operations embed systemic risk, which finally caused the whole financial system to collapse.
6.2. Risk, Profit and Valuation under RMBS CDOs and the SMC
In this subsection, we discuss the relationships between the SMC and capital, information, risk and valuation under RMBS CDOs. In this regard, Table 14 below shows CDO issuance, with Column 1 showing total issuance of CDOs while the next column presents total issuance of ABS CDOs. This table suggests that CDO issuance has been significant, with the majority being CDOs with structured notes as collateral. Also, the motivation for CDO issuance has primarily been arbitrage. Issuance of ABS CDOs roughly tripled over the period 2005-07 and ABS CDO portfolios became increasingly concentrated in subprime RMBSs. Table 15 shows estimates of the typical collateral composition of high grade and mezz ABS CDOs. As is demonstrated in Table 16, increased volumes of origination in the subprime RML market led to an increase in subprime RMBSs as well as CDO issuance.

6.2.1. Subprime RMBS CDOs and the SMC

Certain features of ABS CDOs make their design more intricate (compare with Problem 1.2). For instance, many cash ABS CDOs are managed, with managers being allowed to buy and sell bonds to a limited extent over a limited period of time. The reason for this is that structured products amortize. In order to achieve a longer maturity for the CDOs, managers are allowed to re-invest. They can take cash that is paid to the CDO from amortization and re-invest it and, with limitations, as mentioned, they can sell bonds in the portfolio and buy other bonds. However, there are restrictions on the portfolio composition that must be maintained. CDO managers typically owned all or part of the CDO equity, so they would benefit from higher yielding assets for a given liability structure. Essentially, a managed CDO was a fund with term financing and some constraints on the manager in terms of trading and portfolio composition.

Table 1 implies that IBs purchased tranches of structured products such as RMBSs, ABS CDOs, SIV liabilities and money market funds without an intimate knowledge of the dynamics of the products they were purchasing. These IBs likely relied on repeated relationships, bankers and credit ratings.
Table 14. Global CDO Issuance ($ Millions); Source: [24]

Period      Total Issuance   Structured Finance   Cash Flow & Hybrid   Synthetic Funded   Arbitrage    Balance Sheet
Q1:04       24 982.5         NA                   18 807.8             6 174.7            23 157.5     1 825.0
Q2:04       42 864.6         NA                   25 786.7             17 074.9           39 715.5     3 146.1
Q3:04       42 864.6         NA                   36 106.9             5 329.7            38 207.7     3 878.8
Q4:04       47 487.8         NA                   38 829.9             8 657.9            45 917.8     1 569.9
2004 Tot.   157 418.5        NA                   119 531.3            37 237.2           146 998.5    10 419.8
% of Tot.   –                –                    75.9 %               23.7 %             93.4 %       6.6 %
Q1:05       49 610.2         28 171.1             40 843.9             8 766.3            43 758.8     5 851.4
Q2:05       71 450.5         46 720.3             49 524.6             21 695.9           62 050.5     9 400.0
Q3:05       52 007.2         34 517.5             44 253.1             7 754.1            49 636.7     2 370.5
Q4:05       98 735.4         67 224.2             71 604.3             26 741.1           71 957.6     26 777.8
2005 Tot.   271 803.3        176 639.1            206 225.9            64 957.4           227 403.6    44 399.7
% of Tot.   –                65.0 %               75.9 %               23.9 %             83.7 %       16.3 %
Q1:06       108 012.7        66 220.2             83 790.1             24 222.6           101 153.6    6 859.1
Q2:06       124 977.9        65 019.6             97 260.3             24 808.4           102 564.6    22 413.3
Q3:06       138 628.7        89 190.2             102 167.4            14 703.8           125 945.2    12 683.5
Q4:06       180 090.3        93 663.2             131 525.1            25 307.9           142 534.3    37 556.0
2006 Tot.   551 709.6        314 093.2            414 742.9            89 042.7           472 197.7    79 511.9
% of Tot.   –                56.9 %               75.2 %               16.1 %             85.6 %       14.4 %
Q1:07       186 467.6        101 074.9            140 319.1            27 426.2           156 792.0    29 675.6
Q2:07       175 939.4        98 744.1             135 021.4            8 403.0            153 385.4    22 554.0
Q3:07       93 063.6         40 136.8             56 053.3             5 198.9            86 331.4     6 732.2
Q4:07       47 508.2         23 500.1             31 257.9             5 202.3            39 593.7     7 914.5
2007 Tot.   502 978.8        263 455.9            362 651.7            46 230.4           436 102.5    66 876.3
% of Tot.   –                52.4 %               72.1 %               9.1 %              86.8 %       13.3 %
Q1:08       12 846.4         NA                   12 771.0             75.4               18 607.1     1 294.6
Q2:08       16 924.9         NA                   15 809.7             1 115.2            15 431.1     6 561.4
Q3:08       11 875.0         NA                   11 875.0             –                  10 078.4     4 255.0
Q4:08       3 290.1          NA                   3 140.1              150.0              3 821.4      1 837.8
2008 Tot.   44 936.4         NA                   43 595.8             1 340.6            47 938.0     13 948.8
% of Tot.   –                32.4 %               91.2 %               1.6 %              89.4 %       10.6 %
Q1:09       296.3            NA                   196.8                99.5               658.7        99.5
Q2:09       1 345.5          NA                   1 345.5              –                  1 886.4      –
Q3:09       442.9            NA                   337.6                105.3              208.7        363.5
Q4:09       730.5            NA                   681.0                49.5               689.5        429.7
2009 Tot.   2 815.2          NA                   2 560.9              254.3              3 443.3      892.7
% of Tot.   –                40.4 %               91.2 %               1.6 %              89.4 %       10.6 %
Q1:10       2 420.8          NA                   2 378.5              42.3               –            2 420.7
Q2:10       1 655.8          NA                   1 655.8              –                  598.1        1 378.9
Q3:10       2 002.7          NA                   2 002.7              –                  2 002.7      –
Q4:10       –                NA                   –                    –                  –            –
2010 Tot.   6 079.3          NA                   6 037.0              42.3               2 600.8      3 799.6
% of Tot.   –                44.1 %               91.2 %               1.6 %              89.4 %       10.6 %
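The yearly rows of Table 14 are aggregates of the quarters; the following Python sketch (ours) checks the 2006 figures.

q_total = [108_012.7, 124_977.9, 138_628.7, 180_090.3]   # Q1:06 .. Q4:06
arbitrage_2006, total_2006 = 472_197.7, sum(q_total)

print(round(total_2006, 1))                          # 551 709.6, the 2006 Tot. row
print(round(100 * arbitrage_2006 / total_2006, 1))   # 85.6, the arbitrage % of Tot.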
Essentially, IBs do not have the resources to individually analyze such complicated structures and, ultimately, they rely to a lesser extent on information about the structure and the fundamentals and more on the relationship with the product seller. Agency relationships are substituted for actual information.
Table 15. Typical Collateral Composition of ABS CDOs (%); Source: Citigroup

                          High Grade ABS CDOs   Mezzanine ABS CDOs
Subprime RMBS Tranches    50 %                  77 %
Other RMBS Tranches       25 %                  12 %
CDO Tranches              19 %                  6 %
Other                     6 %                   5 %
Table 16. Subprime-Related CDO Volumes; Source: [26]

Vintage   Mezz ABS CDOs   High Grade ABS CDOs   All CDOs
2005      27              50                    290
2006      50              100                   468
2007      30              70                    330
2008      30              70                    330
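Reading Table 16, the subprime-related share of all CDO issuance per vintage can be computed as below (a Python sketch of ours; the interpretation that mezz plus high grade ABS CDO volumes form the subprime-related portion follows the table's framing).

volumes = {  # vintage: (mezz ABS CDOs, high grade ABS CDOs, all CDOs)
    2005: (27, 50, 290), 2006: (50, 100, 468),
    2007: (30, 70, 330), 2008: (30, 70, 330),
}
for vintage, (mezz, high_grade, all_cdos) in volumes.items():
    share = 100 * (mezz + high_grade) / all_cdos
    print(vintage, round(share, 1), "%")   # 26.6, 32.1, 30.3 and 30.3 per cent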
To emphasize: this is not surprising, and it is not unique to structured products such as RMBSs and ABS CDOs. However, in the case of the SMC, the length of the chain of subprime risks is a huge problem.
6.2.2. Risk and Profit under RMBS CDOs and the SMC

A subprime mortgage model for profit under subprime RMBS CDOs has the form (3.29) given in Subsection 3.2. Under RMBS CDOs, ΠΣb t is given by (3.30), while capital is of the form (3.31). In this regard, before the SMC, IBs sought higher profits than those offered by U.S. Treasury bonds. Continued strong demand for RMBSs and ABS CDOs began to drive down the lending standards used to sell RMLs along the supply chain. RMBS CDOs lost most of their value, which resulted in a large decline in the capital of many banks and GSEs, with a resultant tightening of credit globally.

Before the SMC, CDOs purchased subprime RMBS bonds because it was profitable. At first, the lower-rated BBB tranches of subprime RMBSs were difficult to sell, since they were thin and, hence, unattractive. Despite this, a purchasing CDO may not be aware of the subprime risks inherent in the deal, including credit and synthetic risk. Tranching added complexity to securitization. Risks were underestimated and RMPs were over-rated. By 2005, spreads on subprime BBB tranches seemed to be wider than those of other structured notes with the same rating, creating an incentive to arbitrage the ratings between subprime RMBS and CDO tranches. Subprime RMBSs increasingly dominated CDO portfolios, suggesting that the pricing of risk was inconsistent with the ratings. Also, concerning the higher rated tranches, CDOs may have been motivated to buy large amounts of structured notes because
their AAA tranches would serve as inputs to profitable negative basis trades7. As a consequence, the willingness of CDOs to purchase subprime RMBS bonds increased.

6.2.3. Valuation under RMBS CDOs and the SMC

In Subsection 3.3., we note that value under RMBS CDOs is given by (3.34). In this regard, we note that there is no standardization of triggers across CDOs, with some having sequential cash flow triggers while others have OC trigger calculations based on ratings changes. In fact, each ABS CDO must be separately modeled, which may not be possible. This played a role in the problems IBs faced when they attempted a valuation of CDO tranches. Furthermore, CDOs backed with subprime RMBSs, widely held by financial firms, lost most of their value during the SMC. Naturally, this led to a dramatic decrease in IBs' valuations from holding such structured notes, which increased the subprime risks in the financial market. Future studies should consider how the information about house prices, delinquencies and foreclosures was linked to valuations at the various links of the chain. In this regard, we note that accounting rules put the accountant at the forefront of decision-making about the valuation of intricate financial instruments. While the accounting outcome is basically negotiated, the rules put management at a bargaining disadvantage.

6.2.4. Optimal Valuation under RMBS CDOs and the SMC

In Subsection 3.4., the valuation performance criterion, J Σb, in period t is given by (3.35). The total capital constraint is given by (3.36). The optimal valuation problem under RMBS CDOs is to maximize the value given by (3.34) by choosing the RMBS rate, deposits and regulatory capital for (3.37), subject to the RMBS, balance sheet, cash flow and financing constraints given by (3.38), (3.39), (3.29) and (3.31), respectively. When the capital constraint given by (3.36) holds (i.e., lt > 0), a solution to the optimal valuation problem yields an optimal RMBS and RMBS rate of the form (3.40) and (3.41), respectively.
6.3. Mortgage Securitization and Capital under Basel Regulation and the SMC
The SMC fallout did not originate from lightly-regulated hedge funds, but from banks regulated by governments.
7 According to [11], in a negative basis trade, a bank buys the AAA-rated CDO tranche while simultaneously purchasing protection on the tranche under a physically settled CDS. From the bank's viewpoint, this is the simultaneous purchase and sale of a CDO security, which means that IB could book the NPV of the excess yield on the CDO tranche over the protection payment on the CDS. If the CDS spread is less than the bond spread, the basis is negative. An example of this is given below. Suppose the bank borrows at LIBOR + 5 and buys a AAA-rated CDO tranche which pays LIBOR + 30. Simultaneously, IB buys protection (possibly from a monoline insurer) for 15 bps (basis points). So IB makes 25 bps over LIBOR net on the asset, and has 15 bps in costs for protection, for a 10 bps profit. Note that a negative basis trade swaps the risk of the AAA tranche to a CDS protection writer. Now, the subprime-related risk has been separated from the cash host. Consequently, even if we were able to locate the AAA CDO tranches, this would not be the same as finding out the location of the risk. We do not know the extent of negative basis trades.
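The arithmetic in footnote 7 is easy to replicate; the Python sketch below (ours; all figures in basis points over LIBOR) shows why the trade is called a negative basis trade.

funding_spread = 5      # bank borrows at LIBOR + 5
asset_spread = 30       # AAA-rated CDO tranche pays LIBOR + 30
cds_premium = 15        # cost of protection bought on the tranche

net_carry = asset_spread - funding_spread   # 25 bps over funding
profit = net_carry - cds_premium            # 10 bps, as in the footnote
basis = cds_premium - asset_spread          # negative, hence "negative basis"
print(net_carry, profit, basis)             # 25 10 -15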
For instance, in the fourth quarter of 2007, Citigroup Inc. had its worst-ever quarterly loss of $9.83 billion and had to raise more than $20 billion in capital from outside investors, including foreign-government investment funds. This was done in order to augment the depleted capital on its balance sheet after bad investments in RMBSs. According to the FDIC, at the time, Citigroup held $80 billion in core capital on its balance sheet to protect against its $1.1 trillion in assets. In the second half of 2007, Citigroup wrote down about $20 billion.

Interestingly, at the end of 2007, major U.S. banks like J.P. Morgan Chase & Co., Wachovia Corp., Washington Mutual Inc. and Citigroup lobbied for leaner, European-style capital cushions. These banks urged the U.S. government that "to help ensure U.S. banking institutions remain strong and competitive, the federal banking agencies should avoid imposing domestic capital regulation that provides an advantage to non-U.S. banks." They argued that tighter rules would make it tougher for them to compete globally, since more of their money would be tied up in the capital cushion. Eventually, in July 2008, the U.S. Federal Reserve and regulators acceded to the banks' requests by allowing them to follow rules similar to those in Europe. That ruling could potentially enable American banks to hold leaner, European-style capital cushions. However, by then, cracks in the global financial system were already spreading rapidly.

In this subsection, we discuss the connections between Basel capital regulation and the SMC. The capital constraint in Section 4. for the securitized case is described by the expression in (1.2). In particular, the risk-weights for short- and long-term marketable securities are kept constant, i.e., ω I = 1. In this case, the capital constraint (1.2) becomes (4.45).

6.3.1. Quantity and Pricing of Mortgages and Capital under Basel Regulation (Securitized Case) and the SMC

Systemic risk explains why the SMC turned into a worldwide financial crisis, unlike the S&L crisis of the late eighties. There were warnings at the peak of the S&L crisis that the overall losses of U.S. savings institutions might well amount to 600-800 billion dollars. This is no less than the IMF's estimates of losses in subprime MBSs. However, these estimates never translated into market prices, and the losses of the S&Ls were confined to the savings institutions and to the deposit insurance institutions that took them over. By contrast, critical securities are now being traded in markets, and market prices determine the day-to-day assessments of the equity capital positions of the institutions holding them. This difference in institutional arrangements explains why the fallout from the current crisis has been so much more severe than that of the S&L crisis.

6.3.2. Subprime Mortgages and Their Rates under Basel Capital Regulation (Slack Constraint; Securitized Case) and the SMC
Before the SMC, there was a relative decline in equity related to the capital that banks held in fulfilment of capital adequacy requirements as well as the buffers that they held in excess of required capital. A decline in required capital was made possible by changes in statutory rules relating to the prudential regulation of bank capital. The changes in rules provided banks with the option to determine regulatory capital requirements by assessing value-at-risk in the context of their own quantitative risk models, which they had developed for their own risk management. In particular, internationally active banks were able to determine capital requirements for market risks on the basis of these internal models. The amount of capital they needed to hold against any given asset was thereby greatly reduced. 6.3.3. Subprime Mortgages and Their Rates under Basel Capital Regulation (Holding Constraint; Securitized Case) and the SMC Subsection 4.3. present results about the effect of changes in the level of credit rating, C, on loans when the capital constraint (4.45) holds. If lt > 0 then by taking the first derivatives of equation (1.8) with respect to Ct and using the fact that the risk-weights for short- and long-term marketable securities, ω B , are constant we obtain (4.48). In this situation, the subprime mortgage loan rate response to changes in the level of credit rating (4.49). Unlike in the 19th century, there is no modern equivalent to clearinghouses that allowed information asymmetry to dissipate. During the SMC, there was no information producing mechanism that was implemented. Instead, accountants follow rules by, for instance, enforcing ”marking.” Even for earlier vintages, accountants initially seized on the ABX indices in order to determine ”price,” but were later willing to recognize the difficulties of using ABX indices. However, marking-to-market implemented during the SMC, has very real effects because regulatory capital and capital for CRA purposes is based on generally accepted accounting principles (GAAP). The GAAP measure of capital is probably a less accurate measure of owner-contributed capital than the Basel measure of capital since the latter takes into account banks’ exposure to credit, market and operational risk and their offbalance sheet activities. There are no sizeable platforms that can operate ignoring GAAP capital. During the SMC, partly as a result of GAAP capital declines, banks are selling large amounts of assets or are attempting to sell assets to clean up their balance sheets, and in so doing raise cash and de-levering. This pushes down prices, and another round of marking down occurs and so on. This downward spiral of prices – marking down then selling then marking down again – is a problem where there is no other side of the market (see, for instance, [14]). 6.3.4. Subprime Mortgages and Their Rates under Basel Capital Regulation (Future Time Periods; Securitized Case) and the SMC In Subsection 4.4. we examine the effect of a current credit rating shock in future periods on subprime RMLs, M, and RML rates, rM . The response of subprime RMLs and RML rates to a change in the level of credit rating, Ct , is described by (4.50) if the capital constraint holds. The incidence of systemic risk in the SMC has been exacerbated by an insufficiency of equity capital held against future mortgage losses. As the system of risk management on the basis of quantitative risk models was being implemented, banks were becoming more
conscious of the desirability of "economizing" on equity capital and of the possibility of using the quantitative risk models for this purpose. Some of the economizing on equity capital involved improvements in the attribution of equity capital to different activities, based on improvements in the awareness and measurement of these activities' risks. Some of the economizing on equity capital led to the relative decline in equity that is one of the elements shaping the dynamics of the downward spiral of the financial system since August 2007. One may assume that the loss of resilience that was caused by the reduction in equity capital was to some extent outweighed by the improvements in the quality of risk management and control. However, there may also have been something akin to the effect whereby the installation of seat belts or anti-lock braking systems in cars induces people to drive more daringly. A greater feeling of protection from harm, or a stronger sense of being able to maintain control, may induce people to take greater risks.
6.4. Examples Involving Subprime Mortgage Securitization and the SMC
In this subsection, we discuss the relationships between the SMC and a numerical example involving mortgage securitization, securitization economics, an example of a subprime RMBS deal as well as a comparison between two typical subprime deals.

6.4.1. Numerical Example Involving Subprime Mortgage Securitization and the SMC

The example in Subsection 5.1. shows that under favorable economic conditions (for instance, where RML default rates are low and C is high), huge profits can be made from securitizing subprime RMLs, as was the case before the SMC. On the other hand, during the SMC, when conditions are less favorable (for instance, where RML default rates are high and C is low), IBs suffer large mortgage securitization losses. We observe from the numerical example that costs of funds and capital constraints from Basel capital regulation have important roles to play in subprime mortgage securitization, profit and valuation. We see that the profit under securitization in period t + 1 is less than the profit under securitization in period t. This is mainly due to higher reference RML portfolio defaults as a result of higher RML rates in period t + 1. This was a major cause of the SMC.

6.4.2. Example Involving Profit from Mortgage Securitization and the SMC

The part of our example of mortgage securitization in Subsection 5.2.1. does not suggest that it is generally true that all outcomes will be favorable, because it is possible that cM Σω < cM ω but still higher than the reference RML portfolio return, rM, thereby generating a loss when selling RMLs to SPV. Even if cM Σω > cM ω, there would be room to improve OR's portfolio rE because of K es. As a consequence, the discussion presented in Subsection 5.2.1. is not representative of all possible situations.

In Subsection 5.2.2., we note that the influence on rE comes from both a lower level of E and a reduced cM ΣωA. The value gained is either a present value or an improvement of annual margins averaged over the life of the deal. In the example, K es is a preset forfeit percentage of 4 % used as an input. When considering the SMC, this same percentage of RMLs could result from modeling K e and could be an output of a RML portfolio model. In both cases, an
analysis of the securitization economics should strive to determine whether securitization improves the risk-return profile of OR's original RML portfolio. Enhancing the risk-return profile means optimizing the efficient frontier or increasing rE for OR's reference RML portfolio. We may ascertain whether this is true by calculating rE and rEΣ as well as comparing them.

From Subsection 5.2.3., if securitization improves rE, OR might be inclined to increase f Σ M. Potentially, OR could benefit even more from the relationship between f Σ M and rE, known as the leverage effect of securitization. Leverage is positive as long as cM Σω = 0.1008 remains fixed, with a higher f Σ M leading to a higher final rE subsequent to securitization. For instance, using the example in Subsection 5.2.3., securitizing 2 000 instead of 1 000, and keeping the same proportions of RMLs to D and K, would automatically increase rE. This increase does not result from an additional capital gain in f Σ M, since this gain remains 0.00833 of RMLs. Instead, it results from the fact that the additional annualized rate of return, ra, is proportional to the ratio of RMLs before and after securitization. In the example, with f Σ M = 1 000, ra as a percentage of the retained RMLs is ra = 0.0000926 = 0.000833 × 1 000/9 000. Should OR sell 2 000, the same percentage would increase to 0.0002082 = 0.000833 × 2 000/8 000, the earnings before tax (EBT) would become re = 0.33346 and the return on capital (now 160) would be rK = 0.0020841 = 0.33346/160. Another simulation demonstrates that f Σ M = 5 000 would provide re = 0.23965 and rK = 0.23965. In fact, f Σ M = 5 553 would allow hitting the 25 % target return on (1 − f Σ)M. This is the leverage effect of securitization, which is more than proportional to f Σ M.
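A Python sketch (ours; variable names are assumptions) of the leverage-effect arithmetic just described: the annualized gain rate carried by the retained book grows more than proportionally with the securitized amount.

M, gain_rate = 10_000, 0.000833   # original RMLs; annualized gain rate on f Σ M

def r_a(securitized):
    """Additional annualized return carried by the retained (M - securitized) RMLs."""
    return gain_rate * securitized / (M - securitized)

for s in (1_000, 2_000, 5_000):
    print(s, r_a(s))
# securitizing 1 000 gives about 0.0000926 and 2 000 about 0.0002082,
# the figures used in the text; 5 000 gives 0.000833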
480
M.A. Petersen et al.
Thus, even if the loss rate per reference portfolio related to any sen certificates class is low, losses in unrelated RMLs may reduce the loss protection for those certificates. This is because the sen certificates have the benefit of CE in the form of OC and subordination from each RML pool. This is typically what happened during the SMC, with toxic RMLs reducing protection for sen certificates. Initially, the mezz tranches are thin and small with respect to defaults, which makes the investment-grade rating BBB- of these tranches somewhat surprising. This may be offset by a significant amount of prepayment, $\varpi_2 f^p f^\Sigma M$, coming into the SAIL 2005-6 deal at the onset. Despite the fact that the underlying supposition is that the deal's cash flow dynamics have a high probability of success, the accuracy of these ratings is being questioned in the light of the SMC. The procedure by which $\varpi_2 f^p f^\Sigma M$ from $f^\Sigma M$ is allocated will differ depending on the occurrence of several different triggers given in Subsection 5.3. (some of these triggers were simply ignored before and during the SMC). As noted in [11] and described in the SAIL 2005-6 prospectus supplement, the triggers have the following specifications; a sketch of the trigger logic follows the list.

• A step-down test determines whether a distribution date occurs before, on or after the step-down date, which is the later of (1) the distribution date in July 2008 and (2) the first distribution date on which the ratio
$$\frac{\text{Total Principal Balance of the Subordinate Certificates} + \text{Any OC Amount}}{\text{Total Principal Balance of the RMLs in the Trust Fund}}$$
equals or exceeds the percentage specified in the prospectus supplement;

• a cumulative loss trigger event occurs when cumulative losses on the RMLs are higher than certain levels specified in the prospectus supplement;

• a delinquency event occurs when the rate of delinquencies of the RMLs over any 3-month period is higher than certain levels set forth in the prospectus supplement; and

• in the case of reference RML portfolio 1, a sequential trigger event occurs if (a) before the distribution date in July 2008, a cumulative loss trigger event occurs or (b) on or after the distribution date in July 2008, a cumulative loss trigger event or a delinquency event occurs.
6.4.4. Comparisons between Two Subprime RMBS Deals and the SMC

The example involving AMSI 2005-R2 and SAIL 2006-2 in Subsection 5.4. illustrates how the option on $H$ implicitly embedded in subprime RML securitization (the tranche thickness and extent of CE) depends on cash flow coming into the deal from $\varpi_2 f^p f^\Sigma M$ from $f^\Sigma M$ via refinancing that is itself $H$-dependent. The deals AMSI 2005-R2 and SAIL 2006-2 illustrate this link to $H$ very effectively. The former passed its triggers and achieved the CE and subordination levels hoped for by the original structure. This was largely due to the refinancing and prepayments of the reference RMLs. By contrast, the SAIL 2006-2 deal deteriorated. In 2006, subprime MRs did not accumulate enough house equity to refinance, with the result that they defaulted on their repayments. Consequently, SAIL 2006-2 was not able to pass its triggers. However, at fire sale prices, the 2006 bond could still prove to be a prudent purchase.
7. Conclusions and Future Directions
This paper investigates modeling aspects of the securitization of subprime RMLs into structured products such as subprime RMBSs and CDOs (compare with Problem 1.1). In this regard, our discussions in Sections 2. and 3. focus on risk, profit and valuation as well as the role of capital under RMBSs and RMBS CDOs, respectively. With regard to the former, our paper discusses credit, maturity mismatch, basis, counterparty, liquidity, synthetic, prepayment, interest rate, price, tranching and systemic risks. As posed in Problem 1.2, the main hypothesis of this paper is that the SMC was largely caused by the intricacy and design of subprime mortgage securitization, which led to information (loss and asymmetry) problems, valuation opaqueness (compare with Problem 1.3) and ineffective risk mitigation. This claim is illustrated via the examples presented in Section 5. and their discussions in Section 6.. On the face of it, the securitization of housing finance through MBSs appears, in principle, to be an excellent way of shifting risks resulting from the mismatch between the economic lifetimes of housing investments and IBs' horizons away from ORs and their debtors without impairing ORs' incentives to originate mortgages. Securitization would thus appear to provide a substantial improvement in risk allocation in the global banking system. The question is then what went wrong. In several important respects, the practice was different from the theory. Firstly, moral hazard in origination was not eliminated, but was actually enhanced by several developments. Secondly, many of the MBSs did not end up in the portfolios of insurance companies or pension funds, but in the portfolios of highly leveraged IBs that engaged in substantial maturity transformation and were in constant need of refinancing. Finally, the markets for refinancing these highly leveraged banks broke down during the SMC. As far as subprime risks are concerned, we identify that IBs carry credit, market and operational risks involving mark-to-market issues, the worth of mortgage securitizations when sold in volatile markets, uncertainty involved in investment payoffs and the intricacy and design of structured products. Market reactions include market risk; operational risk, where increased volatility leads to behavior that can increase operational risk further, such as unauthorized trades, dodgy valuations and processing issues; and credit risk related to the possibility of bankruptcies if ORs, DBs and IBs cannot raise funds. Recent market events, which demonstrate how credit, market and operational risks come together to create volatility and losses, suggest that it is no longer appropriate to dissect, delineate and catalogue credit and market risk in distinct categories without considering their interconnection with operational risk. Underlying many of the larger credit events are operational risk practices that have been overlooked, such as documentation, due diligence, suitability and compensation. Our contribution also underlies the following SMC-related timeline. In the years since 2000, with low interest rates, low intermediation margins and depressed stock markets, many private investors were eagerly looking for structured products offering better yields and many banks were looking for better margins and fees. The focus on yields and on growth blinded them to the risk implications of what they were doing. In particular, they found it convenient to rely on CRA assessments of credit risks, without appreciating that these assessments involved some obvious flaws. Given IBs' hunger for the business of securitization and high-yielding securities, there was little to contain moral hazard in mortgage
origination, which, indeed, seems to have risen steadily from 2001 to 2007. For a while, the flaws in the system were hidden because house prices were rising, partly in response to the inflow of funds generated by this very system. However, after house prices began to fall in the summer of 2006, the credit risk in the reference RML portfolios became apparent. Often, additional operational risk issues such as model validation, data accuracy and stress testing lie beneath large market risk events. Market events demonstrate that risk cannot always be eliminated and can rarely be completely outsourced. It tends to come back in a different and often more virulent form. For instance, Countrywide Financial had outsourced its credit risk through the packaging and selling of subprime mortgages. However, in doing so, the company created sizeable operational risks through its business practices and strategy. A shortcoming of this paper is that it does not provide a complete description of what would happen if the economy were to deteriorate or improve from one period to the next. This is especially interesting in the light of the fact that in the real economy one has yield curves that are not flat and that therefore describe changes in the dynamics of the structured note market. More specifically, we would like to know how this added structure will affect the results obtained in this paper. This is a question for future consideration. We would also like to investigate the relationships between subprime agents more carefully. In addition, scenarios need to be more robust: they need to account for what will happen when more than one type of risk actualizes. For instance, a scenario can be constructed from 2007-2008 events that could include the mis-selling of products to IBs, the securitizing of that same product and the bringing of lawsuits by retail investors who bought the product and by IBs who acquired the securitized mortgages. Several questions that need urgent answers arise. How would this scenario affect ILs, ORs, DBs and IBs? What would the liquidity consequences be? Can the real-life examples be readily studied?
8. Appendix
In this section, we prove some of the main results in the paper.
8.1. Appendix A: Proof of Theorem 2.3
An immediate consequence of the prerequisite that the capital constraint (1.2) holds is that RML supply is closely related to the capital adequacy constraint and is given by (1.8). Also, the dependence of changes in the RML rate on credit rating may be fixed as
$$\frac{\partial r_t^{M\Sigma*}}{\partial C_t} = \frac{m_2}{m_1}.$$
Equation (3.40) follows from (1.2) and the fact that the capital constraint holds. This also leads to equality in (1.2). In (1.9) we substituted the optimal value for $M_t$ into control law (1.5) to get the optimal default rate. We obtain the optimal $T_t$ using the following steps. Firstly, we rewrite (1.13) to make deposits the dependent variable, so that
$$D_t^\Sigma = \frac{M_t + B_t + T_t - B_t - n_t E_{t-1} - O_t}{1-\gamma}.$$
Next, we note that the first-order conditions are given by
$$\frac{\partial \Pi_t^\Sigma}{\partial r_t^M}\left[1 + c_t^{dw} - \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\}\right] - m_1\left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{i\Sigma} - r_t^M + c_t^{t} + c_t^{t\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma - \widehat{f}_t^\Sigma f_t^\Sigma M_t - m_1\left(c_t^{M\omega} + p_t^i + (1 - r_t^R)\,r_t^S - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma + l_t\,\rho\, m_1\,\omega(C_t) = 0; \qquad (8.51)$$

$$\frac{\partial \Pi_t^\Sigma}{\partial D_t}\left[1 + c_t^{dw} - \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\}\right] = 0; \qquad (8.52)$$

$$\rho\left[\omega(C_t)\,M_t + \omega^B B_t + 12.5\, f^M (mVaR + O)\right] \le K_t; \qquad (8.53)$$

$$-c_t^{dw} + \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\} = 0. \qquad (8.54)$$
Here $F(\cdot)$ is the cumulative distribution of the shock to the RML. Using (8.54), the bracketed term in (8.52) reduces to $1 + c_t^{dw} - c_t^{dw} = 1$, so that (8.52) becomes
$$\frac{\partial \Pi_t^\Sigma}{\partial D_t} = 0.$$
Looking at the form of $\Pi_t^\Sigma$ given in (2.14) and the equation
$$P^T(T_t) = \frac{r_t^p}{2D}\left[D - T_t\right]^2, \qquad (8.55)$$
it follows that
$$\begin{aligned}
\Pi_t^\Sigma ={}& \left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{i\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma M_t + \left(r_t^M - c_t^{t} - c_t^{t\Sigma}\right)\left(1 - \widehat{f}_t^\Sigma\right) f_t^\Sigma M_t \\
&+ \left(r_t^M - c_t^{M\omega} - p_t^i + c_t^p r_t^f - (1 - r_t^R)\,r_t^S\right)\left(1 - f_t^\Sigma\right) M_t - a f_t^\Sigma M_t + r_t^T T_t \\
&+ \left(r_t^B - c_t^{B}\right) B_t - \left(r_t^B + c_t^{B}\right) B_t - \left(r_t^D + c_t^D\right) D_t + C(\mathbf{E}[S(C_t)]) - \frac{r_t^p}{2D}\left[D - T_t\right]^2 + \Pi_t^{\Sigma p} - E_t - F_t.
\end{aligned} \qquad (8.56)$$
Finding the partial derivative of OR's profit, $\Pi_t^\Sigma$, with respect to deposits, $D_t$, we have that
$$\frac{\partial \Pi_t^\Sigma}{\partial D_t} = (1-\gamma)\left[r_t^T + \left(r_t^B + c_t^B\right) + \left(r_t^B - c_t^B\right) + \frac{r_t^p}{D}\left(D - T_t\right)\right] - \left(r_t^D + c_t^D\right). \qquad (8.57)$$
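For concreteness, under this reading of (8.57), setting the derivative equal to zero can be solved in closed form for $T_t$ (a sketch of the intermediate step, not part of the original text):
$$\frac{r_t^p}{D}\left(D - T_t\right) = \frac{r_t^D + c_t^D}{1-\gamma} - r_t^T - 2 r_t^B \quad\Longrightarrow\quad T_t^* = D - \frac{D}{r_t^p}\left[\frac{r_t^D + c_t^D}{1-\gamma} - r_t^T - 2 r_t^B\right].$$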
This would then give us the optimal value for $D_t$. Using (2.17) and all the optimal values calculated to date, we can find optimal deposits as well as optimal profits. To derive equations (8.51) to (8.54), we rewrite equation (2.22) to become
$$V^\Sigma(K_t^\Sigma, x_t) = \max_{r_t^M,\, D_t,\, \Pi_t^\Sigma}\left\{\Pi_t^\Sigma + l_t\left[K_t^\Sigma - \rho\left(\omega(C_t)\,M_t + \omega^B B_t + 12.5\, f^M (mVaR + O)\right)\right] - c_t^{dw} K_{t+1}^\Sigma + \mathbf{E}_t\left[\delta_{t,1}\, V^\Sigma\!\left(K_{t+1}^\Sigma, x_{t+1}\right)\right]\right\}. \qquad (8.58)$$
By substituting equations (1.5) and (2.17), equation (8.58) becomes
$$\begin{aligned}
V^\Sigma(K_t^\Sigma, x_t) = \max_{r_t^M,\, D_t,\, \Pi_t^\Sigma}\Big\{& n_t(d_t + E_t) - K_{t+1}^\Sigma + \Delta F_t + (1 + r_t^O)\,O_t \\
&+ \left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{i\Sigma} - r_t^M + c_t^{t} + c_t^{t\Sigma}\right) f_t^\Sigma \widehat{f}_t^\Sigma\left(m_0 - m_1 r_t^M + m_2 C_t + \sigma_t^M\right) \\
&+ \left(c_t^{M\omega} + p_t^i + (1 - r_t^R)\,r_t^S - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma\left(m_0 - m_1 r_t^M + m_2 C_t + \sigma_t^M\right) - E_t - F_t + \widetilde{\Pi}_t^\Sigma \\
&+ l_t\left[K_t^\Sigma - \rho\left(\omega(C_t)\left(m_0 - m_1 r_t^M + m_2 C_t + \sigma_t^M\right) + \omega^B B_t + 12.5\, f^M (mVaR + O)\right)\right] \\
&- c_t^{dw} K_{t+1}^\Sigma + \mathbf{E}_t\left[\delta_{t,1}\, V^\Sigma\!\left(K_{t+1}^\Sigma, x_{t+1}\right)\right]\Big\}.
\end{aligned} \qquad (8.59)$$
Finding the partial derivative of OR's value in (8.59) with respect to the capital constraint, $K_{t+1}^\Sigma$, we have
$$\frac{\partial V^\Sigma}{\partial K_{t+1}^\Sigma} = -1 - c_t^{dw} + \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial K_{t+1}^\Sigma}\, dF\!\left(\sigma_{t+1}^M\right)\right\}. \qquad (8.60)$$
Next, we discuss the formal derivation of the first-order conditions (8.51) to (8.54).
8.2. Appendix B: First Order Condition (8.51)
Choosing the RML rate, $r_t^M$, from equation (8.59) and using equation (8.60) above, the first-order condition (8.51) for Problem 2.2 is
$$\frac{\partial \Pi_t^\Sigma}{\partial r_t^M}\left[1 + c_t^{dw} - \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\}\right] - m_1\left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{i\Sigma} - r_t^M + c_t^{t} + c_t^{t\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma - \widehat{f}_t^\Sigma f_t^\Sigma M_t - m_1\left(c_t^{M\omega} + p_t^i + (1 - r_t^R)\,r_t^S - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma + l_t\,\rho\, m_1\,\omega(C_t) = 0.$$

8.3. Appendix C: First Order Condition (8.52)
Choosing the deposits, $D_t$, from equation (8.59) and using equation (8.60) above, the first-order condition (8.52) for Problem 2.2 is
$$\frac{\partial \Pi_t^\Sigma}{\partial D_t}\left[1 + c_t^{dw} - \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\}\right] = 0.$$
8.4. Appendix D: First Order Condition (8.53)
We now find the partial derivative of OR's value in (8.59) with respect to the Lagrangian multiplier, $l_t$:
$$\frac{\partial V^\Sigma}{\partial l_t} = K_t - \rho\left[\omega(C_t)\,M_t + \omega^B B_t + 12.5\, f^M (mVaR + O)\right].$$
In this case, the first-order condition (8.53) for Problem 2.2 is given by
$$\rho\left[\omega(C_t)\,M_t + \omega^B B_t + 12.5\, f^M (mVaR + O)\right] \le K_t.$$
8.5. Appendix E: First Order Condition (8.54)
Choosing the regulatory capital, $\Pi_t^\Sigma$, from equation (8.59) and using equation (8.60) above, the first-order condition (8.54) for Problem 2.2 is
$$-1 - c_t^{dw} + \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\} + 1 = 0,$$
which is the same as
$$-c_t^{dw} + \mathbf{E}_t\left\{\int_{\underline{M}}^{\overline{M}} \delta_{t,1}\,\frac{\partial V^\Sigma}{\partial (K_{t+1}^\Sigma)}\, dF(\sigma_{t+1}^M)\right\} = 0.$$

8.6. Appendix F: Proof of Theorem 4.1
We equate IB's optimal mortgages with $l_t = 0$ and $l_t > 0$ in order to obtain
$$\begin{aligned}
&\frac{1}{2}\left(m_0 + m_2 C_t + \sigma_t^M\right) - \frac{m_1}{2\left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right)}\Bigg[c_t^{M\omega} + p_t^i(C_t) + (1 - r_t^R)\,r_t^S(C_t) + \frac{r_t^D + c^D}{1-\gamma} - c_t^p r_t^f \\
&\qquad - 2\left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{Si} + c_t^{t} + c_t^{t\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma - 2\left(c_t^{M\omega} + p_t^i(C_t) + (1 - r_t^R)\,r_t^S(C_t) - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma\Bigg] \\
&= \frac{K_t}{\omega(C_t)\rho} - \frac{B_t + 12.5\, f^M (mVaR + O)}{\omega(C_t)}.
\end{aligned}$$
Solving for $\sigma_t^M$, we get
$$\begin{aligned}
\sigma_t^{M*} ={}& 2\left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right)\left[\frac{K_t}{\omega(C_t)\rho} - \frac{B_t + 12.5\, f^M (mVaR + O)}{\omega(C_t)}\right] - \left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right)(m_0 + m_2 C_t) \\
&+ m_1\Bigg[c_t^{M\omega} + p_t^i(C_t) + (1 - r_t^R)\,r_t^S(C_t) + \frac{r_t^D + c^D}{1-\gamma} - c_t^p r_t^f \\
&\qquad - 2\left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{Si} + c_t^{t} + c_t^{t\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma - 2\left(c_t^{M\omega} + p_t^i(C_t) + (1 - r_t^R)\,r_t^S(C_t) - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma\Bigg]
\end{aligned}$$
and
$$\begin{aligned}
m_1\rho\,\omega(C_t)\, l_t ={}& 2 m_1\left(r_t^r - c_t^{M\Sigma\omega} - r_t^{S\Sigma} - c_t^{Si} - r_t^M + c_t^{t} + c_t^{t\Sigma}\right)\widehat{f}_t^\Sigma f_t^\Sigma \\
&+ 2 m_1\left(c_t^{M\omega} + p_t^i(C_t) + (1 - r_t^R)\,r_t^S(C_t) - c_t^{t} - c_t^{t\Sigma} - c_t^p r_t^f - a\right) f_t^\Sigma + 2\widehat{f}_t^\Sigma f_t^\Sigma M_t \\
&+ m_1\left(r_t^M - c_t^{M\omega} + c_t^p r_t^f - (1 - r_t^R)\,r_t^S - \frac{r_t^D + c^D}{1-\gamma}\right) - M_t.
\end{aligned}$$
Substituting $r_t^{M*}$ and $M_t^*$ into the expression above, we obtain
$$l_t^* = \frac{\sigma_t^M - \sigma_t^{M*}}{\omega(C_t)\rho\, m_1}.$$
Using equation (2.22) to find the partial derivative of the value function with respect to OR capital, we obtain
$$\frac{\partial V}{\partial K_t} = \begin{cases} l_t + \dfrac{1}{1-\gamma}\left(r_t^D + c^D\right), & \\[2mm] \dfrac{1}{1-\gamma}\left(r_t^D + c^D\right), & \text{for } \underline{M} \le \sigma_t^M \le \sigma_t^{M*}, \\[2mm] \dfrac{1}{1-\gamma}\left(r_t^D + c^D\right) + \dfrac{\sigma_t^M - \sigma_t^{M*}}{\omega(C_t)\rho\, m_1}, & \text{for } \sigma_t^{M*} \le \sigma_t^M \le \overline{M}. \end{cases}$$
By substituting the above expression into the optimal condition for total capital (8.54), we obtain
$$c_t^{dw} - \mathbf{E}_t\left[\delta_{t,1}\,\frac{1}{1-\gamma}\left(r_t^D + c^D\right)\right] - \mathbf{E}_t\left[\delta_{t,1}\,\frac{1}{\omega(C_{t+1})\rho\, m_1}\int_{\sigma_{t+1}^{M*}}^{\overline{M}}\left(\sigma_{t+1}^M - \sigma_{t+1}^{M*}\right) dF\!\left(\sigma_{t+1}^M\right)\right] = 0.$$
We denote the last term on the left-hand side of the above expression by $Y$, so that
$$Y = \frac{1}{\omega(C_{t+1})\rho\, m_1}\,\mathbf{E}_t\left[\int_{\sigma_{t+1}^{M*}}^{\overline{M}} \delta_{t,1}\left(\sigma_{t+1}^M - \sigma_{t+1}^{M*}\right) dF\!\left(\sigma_{t+1}^M\right)\right]. \qquad (8.61)$$
From the Implicit Function Theorem, we can calculate $\dfrac{\partial Y}{\partial C_t}$ by using equation (8.61) in order to obtain
$$\frac{\partial Y}{\partial C_t} = -\frac{\left(-\mu_t^C\right)\dfrac{\partial \omega}{\partial C_{t+1}}}{\rho\, m_1\left[\omega(C_{t+1})\right]^2}\,\mathbf{E}_t\left[\int_{\sigma_{t+1}^{M*}}^{\overline{M}} \delta_{t,1}\left(\sigma_{t+1}^M - \sigma_{t+1}^{M*}\right) dF\!\left(\sigma_{t+1}^M\right)\right] - \frac{1}{\rho\, m_1\,\omega(C_{t+1})}\,\mathbf{E}_t\left[\int_{\sigma_{t+1}^{M*}}^{\overline{M}} \delta_{t,1}\,\frac{\partial \sigma_{t+1}^{M*}}{\partial C_t}\, dF\!\left(\sigma_{t+1}^M\right)\right],$$
where
$$\frac{\partial \sigma_{t+1}^{M*}}{\partial C_t} = -\mu_t^C\,\frac{\partial \omega}{\partial C_{t+1}}\;\frac{2\left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right)\left[K_t - \rho\left(B_t + 12.5\, f^M (mVaR + O)\right)\right]}{\rho\left[\omega(C_{t+1})\right]^2} - \left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right) m_2\,\mu_t^C + m_1\,\mu_t^C\left[\frac{\partial p^i}{\partial C_{t+1}} + \left(1 - 2 f_t^\Sigma\right)\frac{\partial r_t^S}{\partial C_{t+1}}\right] \qquad (8.62)$$
and
$$\frac{\partial Y}{\partial K_{t+1}} = \frac{2\left(1 - 2\widehat{f}_t^\Sigma f_t^\Sigma\right)}{m_1\left[\omega(C_{t+1})\rho\right]^2}\,\mathbf{E}_t\left[\int_{\sigma_{t+1}^{M*}}^{\overline{M}} \delta_{t,1}\, dF\!\left(\sigma_{t+1}^M\right)\right].$$
As a consequence, we have that $\dfrac{\partial K_{t+1}}{\partial C_t} > 0$ only if $\dfrac{\partial \sigma_{t+1}^{M*}}{\partial C_t} < 0$.

8.7. Appendix G: Proof of Proposition 4.2
In order to prove Proposition 4.2, we find the partial derivatives of OR's RML supply, $M^*$, and OR's subprime loan rate, $r^{M*}$, with respect to the current level of credit rating, $C_t$. Here, we consider (2.24), (2.25) and the conditions $\dfrac{\partial r_t^S(C_{t+j})}{\partial C_{t+j}} < 0$ and $r_t^R = 0$. Differentiating the optimal RML supply and rate, we are now able to calculate
$$\frac{\partial M_{t+j}^{n*}}{\partial C_t} = \frac{1}{2}\,\mu_j^C\left[m_2 - m_1\left(\frac{\partial p^i(C_{t+j})}{\partial C_{t+j}} + \frac{\partial r_t^S(C_{t+j})}{\partial C_{t+j}}\,\frac{1 - 2 f_t^\Sigma}{1 - 2\widehat{f}_t^\Sigma f_t^\Sigma}\right)\right]$$
and
$$\frac{\partial r_{t+j}^{M\,n*}}{\partial C_t} = \frac{1}{2}\,\mu_j^C\left[\frac{m_2}{m_1} + \frac{\partial p^i(C_{t+j})}{\partial C_{t+j}} + \frac{\partial r_t^S(C_{t+j})}{\partial C_{t+j}}\,\frac{1 - 2 f_t^\Sigma}{1 - 2\widehat{f}_t^\Sigma f_t^\Sigma}\right].$$

8.8. Appendix H: Proof of Proposition 4.3
In order to prove Proposition 4.3, we find the partial derivatives of the optimal bank loan supply, $M^*$, and OR's subprime loan rate, $r^{M*}$, with respect to $C_t$. This involves using equations (1.8) and (1.9) and the condition $\dfrac{\partial \omega(C_{t+j})}{\partial C_{t+j}} < 0$ in order to find $\dfrac{\partial M_t^*}{\partial C_t}$ and $\dfrac{\partial r_t^{M*}}{\partial C_t}$, respectively. We are now able to determine that
$$\frac{\partial M_t^*}{\partial C_t} = \frac{\partial}{\partial C_t}\left[\frac{K_t}{\rho\,\omega(C_t)} - \frac{\omega^B B_t + 12.5\, f^M (mVaR + O)}{\omega(C_t)}\right] = -\frac{K_t - \rho\left(12.5\, f^M (mVaR + O) + \omega^B B_t\right)}{\left[\omega(C_t)\right]^2\rho}\,\frac{\partial \omega(C_t)}{\partial C_t}$$
and
$$\frac{\partial r_t^{M*}}{\partial C_t} = \frac{\partial}{\partial C_t}\left\{\frac{1}{m_1}\left[m_0 + m_2 C_t + \sigma_t^M - \frac{K_t}{\rho\,\omega(C_t)} + \frac{\omega^B B_t + 12.5\, f^M (mVaR + O)}{\omega(C_t)}\right]\right\} = \frac{m_2}{m_1} + \frac{K_t - \rho\left(12.5\, f^M (mVaR + O) + \omega^B B_t\right)}{\left[\omega(C_t)\right]^2\rho\, m_1}\,\frac{\partial \omega(C_t)}{\partial C_t},$$
as required to complete the proof of Proposition 4.3.
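As a quick sanity check, the Proposition 4.3 derivatives can be verified symbolically. The following is a hedged sketch (the symbol names and the use of sympy are ours), with $\omega(C_t)$ treated as a generic function of the credit rating:

```python
# Symbolic check of the Proposition 4.3 derivatives (a sketch; symbol names
# are ours and omega is left as a generic function of the credit rating C_t).
import sympy as sp

C = sp.symbols('C_t')
K, rho, fM, mVaR, O, wB, B, m0, m1, m2, sigma = sp.symbols(
    'K_t rho f^M mVaR O omega^B B_t m_0 m_1 m_2 sigma_t^M', positive=True)
w = sp.Function('omega')(C)                    # risk weight omega(C_t)

X = sp.Rational(25, 2) * fM * (mVaR + O)       # 12.5 f^M (mVaR + O)
M_star = K / (rho * w) - (wB * B + X) / w      # optimal RML supply, as in (1.8)
r_star = (m0 + m2 * C + sigma - M_star) / m1   # inverted RML demand, as in (1.9)

dM_target = -(K - rho * (X + wB * B)) / (rho * w**2) * sp.diff(w, C)
dr_target = m2 / m1 + (K - rho * (X + wB * B)) / (rho * m1 * w**2) * sp.diff(w, C)

print(sp.simplify(sp.diff(M_star, C) - dM_target))  # 0
print(sp.simplify(sp.diff(r_star, C) - dr_target))  # 0
```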
References

[1] Allen, F., & Carletti, E. (2006). Credit risk transfer and contagion. Journal of Monetary Economics, 53, 89-111.
[2] Ameriquest Mortgage Securities Inc. (AMSI) 2005-R2. AMSI 2005-R2 Prospectus. Available: http://www.dbrs.com/research/221070/ameriquest-mortgage-securities-inc-2005-r2/.
[3] Ashcraft, A. B., Goldsmith-Pinkham, P., & Vickery, J. I. (2010). MBS ratings and the mortgage credit boom. FRB of New York Staff Report No. 449. Available at SSRN: http://ssrn.com/abstract=1615613.
[4] Ashcraft, A. B., & Schuermann, T. (2008). Understanding the securitization of subprime mortgage credit. New York: Now Publishers Inc.
[5] Bear Stearns. (September 2006). Bear Stearns quick guide to non-agency mortgage-backed securities. Residential Mortgage Criteria Report.
[6] Cagan, P. (2008). What lies beneath: Operational risk issues underlying the subprime crisis. The RMA Journal, 96-99.
[7] Crouhy, M., Turnbull, S. M., & Jarrow, R. A. (2008). The subprime credit crisis of '07. Working Paper. University of Houston, Natixis and Cornell University.
[8] Demyanyk, Y., & Van Hemert, O. (2008). Understanding the subprime mortgage crisis. Social Science Research Network. Available: http://ssrn.com/abstract=1020396.
[9] Fouche, C. H., Mukuddem-Petersen, J., Petersen, M. A., & Senosi, M. C. (2008). Bank valuation and its connections with the subprime mortgage crisis and Basel II Capital Accord. Discrete Dynamics in Nature and Society, DOI:10.1155/2008/740845.
[10] Financial Services Authority. (July 2010). Mortgage market review: Responsible lending. Consultation Paper 10/16. Available: http://www.fsa.gov.uk/.
[11] Gorton, G. B. (2008). The subprime panic. Yale ICF Working Paper No. 08-25. Available: http://ssrn.com/abstract=1276047.
[12] Harrington, S. D., & Moses, A. (2008). Credit swap disclosure obscures true financial risk. Bloomberg.com. Available: http://www.bloomberg.com/apps/news?pid=20601109&sid=aKKRHZsxRvWs&refer=home [Thursday, 6 November 2008].
[13] Heitfeld, E. (2008). Parameter uncertainty and the credit risk of collateralized debt obligations. Federal Reserve Board, Working Paper.
[14] Hellwig, M. (2009). Systemic risk in the financial sector: An analysis of the subprime-mortgage financial crisis. De Economist, 157, 129-207.
[15] Jorion, P., & Zhang, G. (2009). Credit contagion from counterparty risk. Journal of Finance, 64, 2053-2087.
[16] Liao, H-H., Chen, T-K., & Lu, C-W. (2009). Bank credit risk and structural credit models: Agency and information asymmetry perspectives. Journal of Banking and Finance, 33, 1520-1530.
[17] Mason, J., & Rosner, J. (2007). Where did the risk go? How misapplied bond ratings cause mortgage backed securities and collateralized debt obligation market disruptions. Working Paper. Drexel University.
[18] Mukuddem-Petersen, J., Mulaudzi, M. P., Petersen, M. A., De Waal, B., & Schoeman, I. M. (2010). The subprime banking crisis and its risks: Residential mortgage products with regret. Discrete Dynamics in Nature and Society, DOI:10.1155/2010/950413.
[19] Petersen, M. A., & Rajan, R. G. (2002). Does distance still matter? The information revolution in small business lending. Journal of Finance, 57(6), 2533-2570.
[20] Petersen, M. A., Mulaudzi, M. P., Schoeman, I. M., & Mukuddem-Petersen, J. (2010). A note on the subprime mortgage crisis: Dynamic modeling of bank leverage profit under loan securitization. Applied Economics Letters, 17(15), 1469-1474.
[21] Petersen, M. A., Senosi, M. C., & Mukuddem-Petersen, J. (2010). Subprime Mortgage Models. New York: Nova. ISBN: 978-1-61728-694-0.
[22] Roberts, J., & Jones, M. (2009). Accounting for self interest in the credit crisis. Accounting, Organizations and Society, 34, 856-867.
[23] Securities and Exchange Commission. (14 January 2010). Testimony by SEC Chairman Mary L. Schapiro concerning the state of the financial crisis. Before the Financial Crisis Inquiry Commission. Available: http://www.sec.gov/news/testimony/2010/ts011410mls.htm.
[24] Securities Industry and Financial Markets Association. (14 October 2007). SIFMA Research and Statistics. Available: http://www.sifma.org.
[25] Structured Asset Investment Loan Trust (SAIL) 2006-2. SAIL 2006-2 Prospectus. Available: http://www.secinfo.com/d12atd.z3e6.html.
[26] UBS. (13 November 2007). Mortgage strategist. Available: http://www.ubs.org.
[27] UBS. (13 December 2007). Mortgage and ABS CDO losses. Available: http://www.ubs.org.
[28] Wikipedia: The Free Encyclopedia. (December 2010). Subprime mortgage crisis. Available: http://en.wikipedia.org/wiki/Subprime_mortgage_crisis.
In: Handbook of Optimization Theory Editors: J. Varela and S. Acuña, pp. 491-563
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 18
Mortgage Loan Securitization, Capital and Profitability and Their Connections with the Subprime Banking Crisis

M.A. Petersen∗, M.P. Mulaudzi, I.M. Schoeman, J. Mukuddem-Petersen and B. De Waal
North-West University (Potchefstroom Campus)
Abstract. In this book chapter, we derive Lévy process-based models of jump diffusion-type for originator (OR) operations involving subprime residential mortgage loans (RMLs) and their securitization, capital as well as profitability. The main motivation for our study is the fact that RMLs, residential mortgage-backed securities (RMBSs) and related mortgage products are inextricably linked to the causes, consequences and cures for the subprime mortgage crisis (SMC). A further motivation is the need to generalize the more traditional discrete- and continuous-time models of RMBSs, regulatory capital, returns on assets (ROA) and returns on equity (ROE) in the context of interacting RML portfolios. Prior to determining an optimal price for RMBSs, we construct stochastic models for RMBS price dynamics, Basel II regulatory capital and OR's nett income after taxes and before the cost of funds (NIATBCF) in a semi-martingale setting. As far as OR's optimization problem is concerned, our main conclusion is that both sub- and super-optimal pricing may be characterized in terms of a constant-valued pricing error term. Besides subprime RML securitization, regulatory capital and profits, we highlight the role of ORs' problematic RML subportfolios, RML rates, demand, default and loss provisioning, risk premia, deposits, London InterBank Offered Rate (LIBOR) as well as liquidity. Furthermore, we consider the connections between the aforementioned variables and the SMC. Also, we provide numerical examples involving the dynamics of OR profitability via the indicators ROA and ROE. Here, the data is sourced from 36 anonymous U.S. banks for the period 2002-2007.
Keywords: banking; originator (OR); residential mortgage loan (RML); residential mortgage-backed securities (RMBSs); RML losses; regulatory capital; profit; subprime mortgage crisis; Lévy process-based modeling and optimization.
E-mail address: [email protected].
1. Introduction
The main motivation for studying originator (OR) securitization, capital and profitability problems in this book chapter involves extending the discrete- and continuous-time models constructed in the analysis of banking behavior and regulation in previous contributions (see, for instance, [2] and [8]) to the more general class of Lévy process-driven models of jump diffusion-type. (The originator is the main agent in this book chapter; its primary function is to extend and securitize residential mortgage loans.) In particular, despite the extent of the existing literature, the use of discrete-time banking models beyond two periods is limited. In this regard, a broader stochastic calculus can potentially make dynamic securitization, capital and profitability models tractable and widen the scope for risk analysis and regulation via Basel capital regulation (see, for instance, [5], [6] and [28]). Some of these objectives have already been achieved by Petersen and co-authors in [30] (see, also, [29]). In the sequel, we shall connect the aforementioned models with issues related to the subprime mortgage crisis (SMC). In our contribution, subprime lending is defined as the practice of extending RMLs to mortgagors (MRs) who do not qualify for market interest rates owing to various risk factors, such as income level, size of the down payment made, credit history and employment status. The traditional mortgage model involves an OR originating a RML and extending it to MR, thus retaining credit risk. With the advent of financial innovation via securitization, this lending practice has made way for the "originate-to-distribute" (OTD) model, in which credit risk is transferred to investors through mortgage-backed securities (MBSs) and collateralized debt obligations (CDOs). In this regard, securitization is a form of structured finance that involves the pooling of financial assets, especially those for which there is no obvious secondary market, such as RMLs (see Figures 2 and 3 in Subsection 1.2.2.). In effect, securitization created such a market for RMLs and meant that those issuing RMLs were no longer required to hold them to maturity. The pooled assets serve as collateral for new financial assets issued by the entity (mostly investment banks) owning the underlying assets. This practice has turned out to be a major cause of the SMC. In particular, securitization, along with elevated investor demand for highly rated (by credit rating agencies) RMBSs, led to the fact that mortgages with a high risk of default could easily be originated. This essentially resulted in credit risk shifting from the mortgage issuers to investors. Also, securitization enabled these issuers to increase their cash inflow by repeatedly re-lending funds. Since issuers no longer carried any default risk, they had every incentive to lower their underwriting standards to increase their RML volume and total profit. However, both the failure of the Lehman Brothers investment bank and the acquisition in September 2008 of Merrill Lynch and Bear Stearns by Bank of America and JP Morgan, respectively, were preceded by an increase in such volume and profit. A similar trend was discerned for the government sponsored enterprises (GSEs), Fannie Mae and Freddie Mac, which had to be bailed out by the U.S. government at the beginning of September 2008. RML pricing models usually have components related to the financial funding cost, a risk premium to compensate for the risk of MR default, a premium reflecting OR's market power and the sensitivity of the cost of capital raised to changes in RMLs extended. By contrast to ordinary RMLs, after the $ 700 billion bailout, the U.S. Treasury has the problem of how to price copious amounts of RMBSs that no one wants to buy. The U.S.
government warns that these RMBSs have to be bought, otherwise credit crises will continue to deepen, with dire consequences for the global financial system. But determining how much to pay for these RMBSs is one of the most difficult SMC-related issues facing the government. A consequence of overpricing is that the U.S. government will appear to have been taken advantage of by the securities industry. On the other hand, underpricing the RMBSs will result in the U.S. Treasury precipitating the failure of some financial institutions. Nonetheless, regardless of what the U.S. government pays for troubled securities from Wall Street, role players (like investors, taxpayers and politicians) will in all likelihood argue that RMBSs were mispriced in the first place (see, for instance, [39] and [40]). As far as RML defaults and securitization are concerned, RML losses can be associated with an offsetting expense called the RML loss provision (RLP), which is charged against nett profit. This offset will reduce reported income but has no impact on taxes, although when the assets are finally written off, a tax-deductible expense is created. For the purposes of our discussion, we consider primary RML losses to result from defaults on reference RML portfolios, while secondary losses coincide with losses suffered from RML securitization (refer to Figures 2 and 3). Another factor related to RML loss provisioning is regulation and supervision. Measures of capital adequacy are generally calculated using the book values of assets and equity. RML provisioning and its associated write-offs will cause a decline in these capital adequacy measures, and may precipitate increased regulation by bank authorities. Greater levels of regulation generally entail additional costs for OR. Currently, this regulation mainly takes the form of the Basel capital regulation (see [5] and [6]) that has been implemented on a worldwide basis. As far as profitability is concerned, our contribution mainly involves a discussion of the nett income after taxes and before the cost of funds (NIATBCF) as a component of nett profit (see, for instance, [1]). In this regard, an important open problem in financial economics is to develop a nonlinear model for NIATBCF by means of general semi-martingale theory. The importance of this problem is that nonlinear models are generally closer to reality than linear models. In the case where the RML market is imperfectly competitive, profit is assured and loss is not a possibility. Here, profitability is discussed within the framework of the "bank capital channel" (see, for instance, [34]). This is based on the hypothesis of an imperfect market for bank equity: banks cannot easily raise new equity because of the presence of agency costs and tax disadvantages (see [1] and [34]). After a decline in bank profitability, if equity is sufficiently low and it is too expensive to raise new deposits, banks reduce lending, otherwise they fail to meet regulatory capital requirements. A consequence of this is the real effects on consumption and investment. Decisions related to how much bank capital to hold play an important role in bank profitability, with the association between capital, RML extension and macroeconomic activity being of considerable significance. In this contribution, some of our findings are corroborated by data from the U.S. Federal Reserve (Fed) on profitability via return on assets (ROA) and return on equity (ROE) for the period Tuesday, 1 January 2002 to Saturday, 31 March 2007.
In this chapter, we consider the operational problem of pricing and optimizing RMBSs by means of stochastic analytic methods. A motivation for studying this problem is to extend the discrete-time models used in the analysis of bank behavior and regulation (see, for instance, [2] and [8]) to a more general class of models. A further motivation is the fact that securitization has been responsible for financial crises for households, companies and
financial institutions (see [11]). An example of the latter, from the SMC, is that both the failure of the Lehman Brothers investment bank and the acquisition in September 2008 of Merrill Lynch and Bear Stearns by Bank of America and JP Morgan, respectively, were preceded by an increase in securitization. A similar trend was discerned for the U.S. mortgage companies, the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac), which had to be bailed out by the U.S. government at the beginning of September 2008. The SMC was initiated by the decline of the United States (U.S.) housing price (see, for instance, [7] and [25]) and high default rates on "subprime" and adjustable rate mortgages (ARMs). Financial institutions from around the world had recognized subprime-related losses and write-downs exceeding U.S. $ 501 billion by October 2008 (compare with [34]). The international financial sector first began to feel the consequences of the SMC in February 2007 with the $ 10.5 billion writedown of the Hongkong and Shanghai Banking Corporation (HSBC), which was the first major CDO- or RMBS-related loss to be reported (compare with [34]). During 2007, at least 100 mortgage companies either shut down, suspended operations or were sold. Top management did not escape unscathed, as the CEOs of Merrill Lynch and Citigroup were forced to resign within a week of each other (see, for instance, [34]). Various institutions followed up with merger deals. For instance, Northern Rock and Bear Stearns (see, for instance, [34]) required emergency assistance from central banks. IndyMac was shut down by the FDIC on 11 July 2008. The crisis also affected Indian banks that had ventured into the U.S. ICICI, India's second-largest bank, reported a mark-to-market loss of $ 263 million on its RMLs and investment exposures by October 2008. At the same time, other state-owned banks such as the State Bank of India and Bank of Baroda refused to release their figures. As increasing amounts of bad debt were passed on to professional debt collectors, that industry grew by 9.5 % in 2008 and will continue to experience growth as long as delinquencies continue to mount. In the insurance industry, on Tuesday, 16 September 2008, the U.S. Federal Reserve extended an $ 85 billion loan to the insurer, American International Group (A.I.G.), in exchange for an 80 % stake. A.I.G. had been expected to declare bankruptcy the next day if no intervention occurred (compare with [34]).
1.1. Literature Review
In this subsection, we provide a brief literature review of RML securitization, capital and profitability as well as the SMC.

1.1.1. Brief Literature Review of the Subprime Mortgage Crisis

The SMC began with the bursting of the U.S. housing bubble (see, for instance, [7] and [25]; a quote from the latter article states that "It's now conventional wisdom that a housing bubble has burst. In fact, there were two bubbles, a housing bubble and a financing bubble. Each fueled the other, but they didn't follow the same course.") and high default rates on "subprime" and ARMs. The working paper [12] provides evidence that the rise and fall of the subprime mortgage market follows a classic lending boom-bust scenario, in which unsustainable growth leads to the collapse of the market. RML incentives, such as easy initial terms, in conjunction with an acceleration in rising
housing prices encouraged MRs to assume difficult RMLs in the belief that they would be able to quickly refinance at more favorable terms. However, once housing prices started to drop moderately in 2006-2007 in many parts of the U.S., refinancing became more difficult. Defaults and foreclosure activity increased dramatically, as easy initial terms expired, home prices failed to go up as anticipated and ARM interest rates reset higher. A model that has become important during this crisis is the Diamond-Dybvig model (see, for instance, [13] and [14]). Despite the fact that these contributions consider a simpler model than ours, they are able to explain important features of bank liquidity that reflect reality. The quarterly reports [16] and [17] of the Federal Deposit Insurance Corporation (FDIC) intimate that profits decreased from $ 35.6 billion to $ 19.3 billion during the first quarter of 2008 versus the previous year, a decline of 46 %. Foreclosures accelerated in the U.S. in late 2006 and triggered a global financial crisis through 2007 and 2008. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 79 % from 2006 (see [4] for more details). ORs that retained credit risk were the first to be affected, as MRs became unable or unwilling to make payments. Corporate, individual and institutional investors holding RMBSs or CDOs faced significant losses, as the value of the underlying mortgage assets declined. Stock markets in many countries declined significantly.

1.1.2. Brief Literature Review of RMLs and Their Securitization

In the U.S., asset securitization began with the creation of private mortgage pools in the 1970s (see, for instance, [3]). In 1995, the Community Reinvestment Act (CRA) was revised to allow CRA mortgages to be securitized. In 1997, Bear Stearns was the first to take advantage of this law (see, for instance, [34]). Under the CRA guidelines, OR receives credit for originating subprime RMLs, or for buying RMLs on a whole loan basis, but not for holding subprime RMLs. This rewarded ORs for originating subprime RMLs and then selling them to others who would securitize them. Thus any credit risk in subprime RMLs was passed from ORs to others, including financial firms and investors around the globe. In this regard, the OTD model of lending, as for RMBSs, where OR sells the RML to various third-party investors, has become a popular vehicle for credit and liquidity risk management. This method of lending was very popular in the RML market until the freeze in this market began in June-July 2007. The total amount of RMBSs issued almost tripled between 1996 and 2007, to $ 7.3 trillion. The securitized share of subprime RMLs (i.e., those passed to third-party investors via RMBSs) increased from 54 % in 2001 to 75 % in 2006 (refer to [12]). The article [35] shows that the extent of OR's participation in the OTD market prior to the June-July 2007 freeze positively predicts its RML charge-offs in the post-freeze period during the SMC. These losses are more pronounced among banks that were unable to sell their OTD RMLs to third-party investors in the SMC period. The aforementioned article also provides some evidence in support of higher foreclosure rates for OTD mortgages. These findings support the view that the credit risk transfer through the OTD market resulted in the origination of inferior quality RMLs. Further, [35] explores the effect of bank capital and liability structure on this behavior and shows that these effects are larger for capital-constrained ORs and ORs that rely less on demand deposits. Overall, Purnanandam's paper provides evidence that the lack of screening incentives coupled with leverage-induced
risk-taking behavior significantly contributed to the SMC. Lastly, [35] found that the fragility of banks' capital structure acted as a moderating device. Alan Greenspan stated that the current global credit crisis cannot be blamed on RMLs being issued to MRs with poor credit, but rather on the securitization of such RMLs (see [34]). Investment banks sometimes placed the RMBSs they originated or purchased into off-balance sheet entities called structured investment vehicles (SIVs) or special purpose vehicles (SPVs). Moving the debt "off the books" enabled large financial institutions to circumvent capital requirements, thereby increasing profits but augmenting risk. Such off-balance sheet financing is sometimes referred to as the shadow banking system, and is thinly regulated (see [34]). Some believe that mortgage standards became lax because securitization gave rise to a form of moral hazard, whereby each link in the mortgage chain made a profit while passing any associated credit risk to the next link in the chain (see, for instance, [34] and [35]). At the same time, some financial institutions retained significant amounts of the RMBSs they originated, thereby retaining significant amounts of credit risk, and so were less guilty of moral hazard. Some argue this was not a flaw in the securitization concept per se, but in its implementation (see, for instance, [34]). According to the Nobel laureate Dr. A. Michael Spence, "systemic risk escalates in the financial system when formerly uncorrelated risks shift and become highly correlated. When that happens, then insurance and diversification models fail. There are two striking aspects of the current crisis and its origins. One is that systemic risk built steadily in the system. The second is that this buildup went either unnoticed or was not acted upon. That means that it was not perceived by the majority of participants until it was too late. Financial innovation, intended to redistribute and reduce risk, appears mainly to have hidden it from view. An important challenge going forward is to better understand these dynamics as the analytical underpinning of an early warning system with respect to financial instability."

1.1.3. Brief Literature Review of Bank Capital

In our contribution, in the presence of RML market frictions, OR value is dependent on its financial structure (see, for instance, [15]). In this case, it is well known that OR's decisions about lending and other issues may be driven by the capital adequacy ratio (CAR) (see, for instance, [28] and [38]). Further evidence of the impact of capital requirements on bank lending activities is provided by [34]. A new line of research into credit models for monetary policy has considered the association between bank capital and RML demand and supply (see, for instance, [2]). This credit channel is commonly known as the bank capital channel and posits that a change in interest rates can affect lending via bank capital. The discussion paper [19] examines the optimal allocation of equity and debt across banks and industrial firms when both are faced with incentive problems and firms borrow from banks. Gersbach finds that increasing bank equity mitigates OR-level moral hazard but may exacerbate firm-level moral hazard due to the dilution of firm equity. In this case, competition among banks does not result in a socially efficient level of equity. Furthermore, [19] asserts that imposing capital requirements on banks leads to the socially optimal
capital structure of the economy in the sense of maximizing aggregate output. Such capital regulation is second-best and must balance three costs: excessive risk-taking of banks, credit restrictions banks impose on firms with low equity and credit restrictions due to high RML rates.
1.1.4. Brief Literature Review of Bank Profitability In general, OR’s profit is computed by taking the difference between the income and all expenses (see, also, [34]). The paper [9] claims that profitability by bank function is determined by subtracting all direct and allocable indirect expenses from total gross revenue generated by that function. This computation results in the nett revenue (yield) that excludes cost of funds. From the nett yield the cost of funds is subtracted to determine OR’s nett profit by function. Coyne represents four major leading functions, viz., investments, RMLs, installment RMLs and commercial and agricultural loans. Our paper has a strong connection with [9] in that we restrict bank functions to RML subportfolio activities that may include all functions mentioned by Coyne except investments. The contribution [23] (see, also, [22]) demonstrates by means of a technical argument that OR’s profits will not decrease if the growth rate of sales is higher than the absolute growth rate of OR’s lending rate. The mathematical discussion contained in [23] provides a condition for a bank remaining profitable. In our contribution, by contrast to [34], in the presence of competition imbalances, OR’s value is dependent on its financial structure. Several discussions related to OR modeling problems in discrete- and continuous-time settings have recently surfaced in the literature (see, for instance, [2], [8], [20] and [38]). In this regard, in [38], a discretetime dynamic bank model of imperfect competition is presented. To a certain extent, OR’s stochastic model is an analogue of the one presented in [2] (see, also, [8] and [34]). In particular, the latter mentioned contribution analyzes the effect of monetary policy in an economy underpinned by banks operating in an imperfectly competitive RML market. In our paper, a present value formula for continuous cash flows with continuous discounting plays an important role. In this regard, [20] comments on traditional discounted cash flow models and their relation with the option value embedded in banks.
The paper [33] studies a risk management problem which involves interacting RML portfolios in a discrete-time stochastic framework through a difference equations (DEs) approach (see, also, [34]). In this regard, the current paper is different to [33] since it presents nonlinear stochastic models in a L´evy process of jump diffusion-type setting. As a consequence, the approach to deriving profitability models are different.
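The function-level profit computation described in [9] can be written as a small sketch (the function and field names are ours, not Coyne's):

```python
# Sketch of profit-by-function accounting as described in [9]: nett yield is
# gross revenue less direct and allocable indirect expenses; nett profit by
# function then subtracts the cost of funds. All figures are illustrative.

def nett_yield(gross_revenue: float, direct_expenses: float,
               allocable_indirect: float) -> float:
    """Nett revenue (yield) of a bank function, excluding the cost of funds."""
    return gross_revenue - direct_expenses - allocable_indirect

def nett_profit_by_function(gross_revenue: float, direct_expenses: float,
                            allocable_indirect: float, cost_of_funds: float) -> float:
    """Nett profit of the function once the cost of funds is subtracted."""
    return nett_yield(gross_revenue, direct_expenses, allocable_indirect) - cost_of_funds

# Illustrative figures for a single RML subportfolio:
print(nett_profit_by_function(12.0, 3.5, 1.5, 4.0))  # 3.0
```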
1.2.
Preliminaries About the SMC, Securitization, Capital and Profitability
In this section, we give some of the preliminaries about securitization, indicators of OR profitability, Poisson processes, L´evy processes of jump diffusion-type, shot noise processes for primary and secondary RML losses.
498
M.A. Petersen et al.
1.2.1. Preliminaries about the Subprime Mortgage Crisis In this subsection, we provide a diagrammatic overview of and sketch a background to the SMC. A diagrammatic overview of the SMC (see, for instance, [42]) may be represented as follows. Housing Market (HM)
START
Excess
Housing
Inability To
Mortgage
Negative
Housing
Price
Refinance
Delinquency
Effects on
Inventory
Decline
Mortgage
& Foreclosure
Economy
(HM1)
(HM2)
(HM3)
(HM4)
(HM5)
• Overbuilding During Boom Period • Speculation • Easy Credit Policies
• Housing Bubble Burst • Household Wealth Declines
• Poor Lending & Borrowing Decision • ARM Adjustments
Mortgage Cash Flow Declines
Financial Market (FM)
• Business Investment Declines • Risk of Increasing Unemployment • Stock market Declines Further Reduce Household Wealth
• Home Building Declines • Downward Pressure on Consumption as Household Wealth Declines
Negative
Liquidity
Effects on Economy (FM5)
Crunch for Businesses (FM4) • Harder to Get Loans • Higher Interest Rate for Loans
(HM6)
Bank Capital Bank Failures (FM3)
• Washington Mutual • Wachovia • Lehman Brothers
Levels Depleted (FM2) • High Bank Debt Levels (“Leverage”)
Bank Losses (FM1)
• Loss on Mortgage Retained • Loss on Mortgage-Backed Securities (MBS)
Government & Industry Responses (GIR)
Central Bank Actions (GIR1)
Fiscal Stimulus Package (GIR2)
• Lower Interest Rates • Increased Lending
• Economic Stimulus Act of 2008
Homeowner Assistance (GIR3)
• Hope Now Alliance • Housing & Economic Recovery Act of 2008
Once-Off Bailout (GIR4)
• Fannie & Freddie • Bear Sterns • Northern Rock • AIG
Systemic Rescue (GIR5)
• Emergency Economic Stabilization Act ($ 700 Billion Rescue) • Bank Recapitalizations Globally
Figure 1. Diagrammatic Overview of the Subprime Mortgage Crisis (compare [42]).

Most of the information contained in this subsection was sourced from [42]. The SMC was initiated by the deflation of the U.S. housing bubble (see, for instance, [7] and [25]) and high default rates on subprime and adjustable rate mortgages (ARMs). Loan incentives, such as easy initial terms and low loan rates, in combination with escalating housing prices, encouraged MRs to accept potentially problematic RMLs in the belief that they would be able to refinance at more favorable rates (see HM1 in Figure 1). During this time, great concern was also expressed about the rapid growth in corporate loans from banks with excessively easy credit standards. Some analysts claimed that competition between ORs had greatly increased, causing banks to reduce RML rates and ease credit standards in order to issue new credit. Others were of the opinion that as the economic expansion (up to December 2007, when the U.S. economy officially entered recession) continued and past loan losses were forgotten, banks exhibited a greater propensity for risk. However, once U.S. housing prices started to fall moderately in 2006-2007, refinancing became more difficult (see HM2 and HM3 in Figure 1). Defaults and foreclosure activity increased dramatically, as easy initial terms expired, home prices failed to go up as anticipated and ARM interest rates reset higher. Foreclosures accelerated in the U.S. in late 2006 and triggered a global financial crisis through 2007 and 2008. During 2007, nearly 1.3 million U.S. housing properties were subjected to foreclosure activity, up 79 % from 2006 (see [42] for more details; also HM4 in Figure 1). As was mentioned before, as MRs became unable or unwilling to make payments, ORs that retained credit risk were the first to be affected (see HM5 and HM6 in Figure 1). Major banks and other financial institutions globally had reported losses of approximately $ 435 billion from SMC-related activities by Thursday, 17 July 2008 (see [34] and [42]; also FM1 in Figure 1). As was mentioned in Subsection 2.2.3., by using securitization strategies, many ORs passed the rights to RML repayments and the related credit risk to third-party investors via RMBSs and CDOs. Corporate, institutional and individual investors holding RMBSs and/or CDOs suffered major losses, as the underlying mortgage-backed asset value decreased dramatically. As a consequence, stock markets throughout the world declined significantly (see FM2 in Figure 1). In particular, the broader international financial sector first began to experience the fallout from the SMC in February 2007 with the $ 10.5 billion writedown of HSBC, which was the first major CDO- or MBA-related loss to be reported. During 2007, at least 100 mortgage companies had either failed, suspended operations or been sold. In addition, Northern Rock and Bear Stearns required emergency assistance from central banks. IndyMac was shut down by the FDIC on Friday, 11 July 2008. Moreover, on Sunday, 14 September 2008, after performing banking duties for more than 150 years, Lehman Brothers filed for bankruptcy as a consequence of losses stemming from the SMC (see FM3 in Figure 1). Subsequent to this, many U.S. and other banks throughout the world also failed. Top management did not escape unaffected either, as the CEOs of Merrill Lynch and Citigroup were forced to resign within a week of each other. Subsequently, merger deals were struck by many institutions. The widespread dispersion of credit risk and the unclear effect on financial institutions caused reduced lending activity and increased interest rate spreads. Similarly, the ability of corporations to obtain funds through the issuance of commercial paper was affected (see, for instance, [42] for more details). This aspect of the crisis is consistent with a credit crunch. There were a number of reasons why banks suddenly made obtaining a loan more difficult or increased the costs of obtaining a loan during the SMC.
This was due to a decline in the value of the collateral used by banks when issuing loans, an increased perception of risk regarding the solvency of other banks within the banking system, a change in monetary conditions (for example, where the Central Bank suddenly and unexpectedly raises interest rates or capital requirements) as well as the central government imposing direct credit controls and instructing banks not to engage in further lending activity (see, for instance, [42] for more information). Fewer and more expensive loans tend to result in decreased business investment and consumer spending (see FM4 and FM5 in Figure 1). Because of problems with liquidity, central banks throughout the world took action by providing funds to member banks to encourage lending to creditworthy MRs and to restore
faith in the commercial paper markets (see GIR1 in Figure 1). With interest rates on a large number of subprime and other ARMs adjusting upward during 2008, U.S. legislators, the U.S. Treasury Department and financial institutions took action. A systematic program to limit or defer interest rate adjustments was implemented to reduce the effect (see, for instance, [42] for more information). In addition, ORs and MRs facing defaults were encouraged to cooperate to, for instance, enable MRs to stay in their homes. During this period, banks sought and received over $ 250 billion in additional funds from investors to offset losses (see [26] for more information). The risks to the broader economy created by the financial market crisis and the housing market downturn were primary factors in several decisions by the U.S. Federal Reserve to cut interest rates and encourage the implementation of the Economic Stimulus Package (ESP) passed by Congress and signed by President Bush on Wednesday, 13 February 2008 (see, for instance, [4] and [34]; also GIR2 in Figure 1). Bush also announced a plan to voluntarily and temporarily freeze the mortgages of a limited number of mortgage debtors holding ARMs. A refinancing facility called FHA-Secure was also created. This action was part of an ongoing collaborative effort between the U.S. government and private industry, called the Hope Now Alliance, to help some subprime MRs (see GIR3 in Figure 1). During 2008, the U.S. government also bailed out key financial institutions, assuming significant additional financial commitments (see GIR4 in Figure 1). Also, key risk indicators became highly volatile during the last quarter of 2008, a factor leading the U.S. government to pass the Emergency Economic Stabilization Act of 2008. In this regard, following a series of ad-hoc market interventions to bail out particular firms, a $ 700 billion systemic rescue plan was accepted by the U.S. House of Representatives on Friday, 3 October 2008. These actions were designed to stimulate economic growth and inspire confidence in the financial markets (see GIR5 in Figure 1). By November 2008, banks in Europe, Asia, Australia and South America had followed the example of the U.S. government by implementing rescue plans. By the end of 2008, the crisis had also spread to the U.S. motor industry (see, for instance, [42] for more details). In this regard, in December 2008, the U.S. government announced that it would give $ 17.4 billion in loans to assist three of the nation's automobile makers, viz., Chrysler, General Motors and Ford, to avoid bankruptcy. The money was taken from the $ 700 billion bailout package originally intended to rescue U.S. banks. In particular, General Motors received $ 9.4 billion and Chrysler $ 4 billion (see GIR5 in Figure 1).

1.2.2. Preliminaries about RML Securitization

In this subsection, we provide more information about securitization with regard to borrowing under securitization strategies (see Figure 2 below) and the financial leverage profit engine (see Figure 3 below), as presented on the website [42]. RMBSs are asset-backed securities whose cash flows are backed by the principal and interest payments of a set of RMLs (see, for instance, [42] for more information). Payments are typically made monthly over the lifetime of the underlying RMLs. Such mortgages in the U.S. give MR the option to pay more than the required monthly payment (curtailment) or to pay off the RML in its entirety (prepayment).
Because curtailment and prepayment affect the remaining RML principal, the monthly cash flow of RMBSs is not known in advance, and therefore presents an additional risk to RMBS investors. A diagrammatic overview of
borrowing under securitization strategies may be represented as follows (the flows of RMLs, cash, securities and monthly payments between the MR, Mortgage Broker, OR, Issuer, Servicer, Trustee, Underwriter, Rating Agency, Credit Enhancement Provider and Investor are depicted in Figure 2):

Step 1 - MR obtains a RML from an OR, possibly with help from a Mortgage Broker. In many cases the Lender and the Mortgage Broker have no further interaction with the MR after the RML is made.

Step 2 - OR sells the RML to the Issuer and MR begins making monthly payments to the Servicer.

Step 3 - The Issuer sells securities to the Investors. The Underwriter assists in the sale, the Rating Agency rates the securities, and Credit Enhancement may be obtained.

Step 4 - The Servicer collects monthly payments from MR and remits payments to the Issuer. The Servicer and the Trustee manage delinquent RMLs according to terms set forth in the Pooling & Servicing Agreement.
Figure 2. Diagrammatic Overview of Borrowing Under Securitization Strategies (compare [42]).

Next, a diagrammatic overview of the financial leverage profit engine, which is closely related to borrowing under securitization strategies, will be given. Further insight into securitization may be gained from the following illustration (compare with Figure 3). Let us assume that an investment bank borrows money from an investor or money market fund and agrees to pay, for instance, 4 % in interest. The RMBS portfolio is collateral, which the investors can seize in the event of a default on interest payments. The investment bank uses the funds to expand its RMBS portfolio, which pays, for instance, a 7 % interest rate. The 3 % difference between the two rates is called the spread. This provides an incentive to borrow and invest as much as possible, a practice known as leveraging. This was considered safe during the early 2007 housing boom, as RMBS portfolios typically received high credit ratings and defaults were minimal.
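To make the leverage arithmetic above concrete, the following sketch (in Python; every figure in it is an invented illustration rather than data from this chapter) shows how a 3 % spread at 30 times nett worth produces the outsized returns described below, and how a small decline in portfolio value is magnified by the same factor.

```python
# Illustrative leverage arithmetic for the financial leverage profit engine.
# All figures are hypothetical; the 4 %/7 % rates echo the example in the text.
net_worth = 1.0                 # bank's own funds (normalized)
leverage = 30                   # portfolio size as a multiple of nett worth
portfolio = net_worth * leverage

borrow_rate = 0.04              # interest paid to investors / money market funds
rmbs_yield = 0.07               # interest received from the RMBS portfolio
spread = rmbs_yield - borrow_rate

profit = portfolio * spread     # annual profit on the levered portfolio
print(f"Return on nett worth: {profit / net_worth:.0%}")       # 90%

price_drop = 0.04               # a modest fall in RMBS values...
loss = portfolio * price_drop   # ...is magnified by the same leverage factor
print(f"Loss as share of nett worth: {loss / net_worth:.0%}")  # 120%
```

At 30 times leverage, a 3 % spread earns roughly 90 % of nett worth per year, while a 4 % fall in portfolio value already exceeds the entire nett worth; this asymmetry is the profit engine running in reverse, as described in the sequel.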
Figure 3. Diagrammatic Overview of the Financial Leverage Profit Engine (compare [42]). The figure depicts the circular flow of funds: investors lend cash to the financial institution, which invests in a RMBS portfolio funded by mortgagors' cash payments; in the figure, 5 % interest is paid to the investors and 8 % interest is received from the RMBS portfolio.

Since investment banks do not have the same capital reserve requirements as depository banks, many borrowed and lent amounts exceeding 30 times their nett worth. By contrast, depository banks rarely lend more than 15 times their nett worth. At the time of its nationalization, Freddie Mac was leveraged nearly 70 times its nett worth. With increasing delinquencies and foreclosures during 2007-2008, the value of RMBS portfolios declined. Investors became concerned and in some cases demanded their money back, resulting in margin calls, that is, the need to sell or liquidate the RMBS portfolios to pay them. Having been so highly leveraged prior to the SMC, many investment banks and mortgage companies faced huge losses, went bankrupt or merged with other institutions. Because RMBSs became "toxic" during the SMC due to uncertainty in the housing market, they became illiquid and their values declined. The market value was further penalized by the inability to sell RMBSs, whose quoted worth may then be less than the value of the actual cash inflows they generate. During the SMC, the ability of financial institutions to obtain funds via RMBSs was dramatically curtailed. Spreads narrowed, as investors demanded higher returns to lend money to highly leveraged institutions.

A related problem is the pricing of RMBSs. Unlike the pricing of a vanilla corporate bond, the pricing of a RMBS must take into account three sources of uncertainty, viz., credit risk, interest rate exposure and early redemption or prepayment (see, for instance, [39] and [40]). The number of MRs with securitized RMLs who prepay increases when interest rates decrease. One reason for this is that MRs can refinance at a lower fixed interest rate. Commercial MBSs often mitigate this risk using call protection. Since interest rate risk and prepayment are linked, determining tractable stochastic models for RMBS value is a difficult proposition. The level of difficulty rises with the complexity of the interest rate model and the sophistication of the prepayment-interest rate dependence, to the point that no closed-form solution (that is, a solution that can be written down explicitly) is widely known. In such models, numerical methods provide approximate theoretical prices. These are also required in most models which specify the credit risk as a stochastic function with an interest rate correlation. Practitioners typically use the Monte Carlo method or binomial tree numerical solutions (see, for instance, [39] and [40]). Nevertheless, in the sequel, we assume that the RMBS price follows a jump diffusion process.
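Since we assume in the sequel that the RMBS price follows a jump diffusion process, a minimal Monte Carlo sketch of such a price path may help fix ideas. The drift, volatility, jump intensity and jump-size parameters below are illustrative assumptions, not calibrated values, and the discretization is the simplest possible one.

```python
# Minimal Monte Carlo sketch of a jump-diffusion price path with
# compound-Poisson jumps; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps = 1.0, 250
dt = T / n_steps

alpha, sigma = 0.05, 0.15            # drift and diffusion coefficients (assumed)
lam, mu_j, sig_j = 0.5, -0.1, 0.05   # jump intensity and log-jump-size law (assumed)

p = np.empty(n_steps + 1)
p[0] = 100.0
for k in range(n_steps):
    dz = rng.normal(0.0, np.sqrt(dt))
    n_jumps = rng.poisson(lam * dt)
    jump = np.sum(rng.normal(mu_j, sig_j, n_jumps))  # aggregate log-jump this step
    p[k + 1] = p[k] * np.exp((alpha - 0.5 * sigma**2) * dt + sigma * dz + jump)

print(f"Terminal price: {p[-1]:.2f}")
```

Averaging the discounted terminal payoff over many such paths is the Monte Carlo pricing approach referred to above.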
1.2.3. Preliminaries about Profitability

Bank profitability is influenced by various factors such as bank characteristics, macroeconomic conditions, taxes, regulation of deposit insurance and several underlying legal and institutional indicators. For our purposes, bank profitability may be defined by the system of equations

Income Before Taxes & Cost of Funds = Gross Revenue − Expenses;
Income After Taxes & Before Cost of Funds = Income Before Taxes & Cost of Funds − Tax-Free Securities − Taxes;
Nett Income After Taxes & Before Cost of Funds = Income Before Taxes & Cost of Funds − Actual Tax-Free Securities;
Nett Income After Taxes & Before Cost of Funds = Nett Profit + Cost of Funds.
From the last equation, it is clear that

Nett Profit (NP) = Nett Income After Taxes & Before Cost of Funds (NIATBCF) − Cost of Funds (CF).    (1.1)
Roughly speaking, the cost of funds is the interest cost that OR must pay for the use of funds (money), while the gross revenue is the funds generated by all banking operations before deductions for expenses. Studies show that bank profitability should be considered an important indicator of financial crises (see [11]). In this regard, we use ROA and ROE to measure bank profitability for illustrative purposes. In Section 4., with data from the Fed about the ROA and ROE for the period Tuesday, 1 January 2002 to Saturday, 31 March 2007, we plot the trend of bank profitability from real data against the trend given by the stochastic dynamics of ROA and ROE. In this regard, A^{ri}, Π^i, Θ^i, as well as E^{ri}, E^i and A^i denote the ROA, NIATBCF, cost of funds, ROE, equity capital and assets of the i-th RML subportfolio, respectively. The first measure is the ROA which, in the i-th RML subportfolio, is defined as

ROA (A^{ri}_t) = [NIATBCF (Π^i_t) − Cost of Funds (Θ^i)] / Bank Assets (A^i_t).
The ROA provides information about how much profit is generated on average by each unit of assets. Therefore, the ROA is an indicator of how efficiently a bank is being managed. Secondly, we express the ROE in terms of the difference between the NIATBCF and the cost of funds, relative to equity capital, so that

ROE (E^{ri}_t) = [NIATBCF (Π^i_t) − Cost of Funds (Θ^i)] / Equity Capital (E^i_t).
The ROE provides information about how much shareholders are earning on their investment in bank equity.
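As a simple numerical illustration of these two indicators (with invented balance-sheet figures), the computations amount to the following.

```python
# ROA and ROE for a single RML subportfolio; all inputs are hypothetical.
niatbcf = 12.0        # nett income after taxes & before cost of funds (Pi)
cost_of_funds = 4.0   # Theta
assets = 400.0        # A
equity = 40.0         # E

roa = (niatbcf - cost_of_funds) / assets
roe = (niatbcf - cost_of_funds) / equity
print(f"ROA = {roa:.2%}, ROE = {roe:.2%}")  # ROA = 2.00%, ROE = 20.00%
```

The tenfold gap between the two values reflects nothing more than the assets-to-equity ratio, which is why ROE is so sensitive to leverage.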
1.2.4. Preliminaries about Poisson Processes, Jump Diffusions and Lévy Processes

The preliminaries presented in this subsection are important for a number of reasons. Firstly, a Poisson process is a stochastic process with a discontinuous path and is used as a building block for constructing more complex jump processes, e.g., Lévy processes. Moreover, jump diffusions are solutions of SDEs that are driven by Lévy processes. Since a Lévy process can be expressed as the linear sum of t, a Brownian motion, Z, and a pure jump process, jump diffusions represent a natural and useful generalization of Itô diffusions. Throughout our contribution, we suppose for the sample space Ω, σ-algebra F, filtration F = (F_t)_{t≥0} and real probability measure P, that (Ω, F, F, P) is a filtered probability space. In this subsection, for the sake of completeness, we firstly provide general descriptions of different types of Poisson processes and define a Lévy process and its measure. Moreover, we define a Lévy-Itô decomposition as well as the concept of a stopping time.

Definition 1.1. (Poisson Process): Let (t_j)_{j≥1} be a sequence of independent exponential random variables with parameter λ and T_m = Σ_{j=1}^{m} t_j. The process (N_t)_{t≥0} which is given by

N_t = Σ_{k=1}^{∞} 1_{t ≥ T_k}
is called a Poisson process with intensity λ.

In line with the definition above, a Poisson process can be considered to be a counting process which counts the number of random times, T_k, which occur between 0 and t, where (T_k − T_{k−1})_{k≥1} is an independent, identically distributed (iid) sequence of exponential variables.

Definition 1.2. (Compound Poisson Process): A compound Poisson process with intensity, λ > 0, and jump size distribution function, F̃, is a process, N^c_t, which is given by

N^c_t = Σ_{j=1}^{N_t} J̃_j,

where the J̃_j represent the jump sizes, which are iid with distribution function F̃, and N_t is a Poisson process with intensity λ which does not depend on (J̃_j)_{j≥1}.

Next, we assume that φ(ξ) is the characteristic function of a distribution. If for every positive integer n, φ(ξ) is also the n-th power of a characteristic function, we say that the distribution is infinitely divisible. For each infinitely divisible distribution, a stochastic process L = (L_t)_{t≥0} called a Lévy process exists.

Definition 1.3. (Lévy Process): A Lévy process initiates at zero, has independent and stationary increments and has (φ(u))^t as the characteristic function of the distribution of an increment L_{t+s} − L_s over [s, s + t], 0 ≤ s, t.
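A compound Poisson path can be simulated directly from Definitions 1.1 and 1.2: draw exponential inter-arrival times, then sum iid jump sizes up to each time. The sketch below assumes, purely for illustration, lognormal jump sizes.

```python
# Simulate a compound Poisson process N^c_t = sum_{j<=N_t} J_j on [0, T].
import numpy as np

rng = np.random.default_rng(1)
T, lam = 10.0, 2.0                      # horizon and Poisson intensity

# Arrival times: cumulative sums of Exp(lam) inter-arrival times.
arrivals = []
t = rng.exponential(1.0 / lam)
while t <= T:
    arrivals.append(t)
    t += rng.exponential(1.0 / lam)

jumps = rng.lognormal(mean=0.0, sigma=0.5, size=len(arrivals))  # iid jump sizes

def compound_poisson(t):
    """Value of the compound process at time t."""
    return jumps[:np.searchsorted(arrivals, t, side="right")].sum()

print(f"N_T = {len(arrivals)}, N^c_T = {compound_poisson(T):.3f}")
```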
Next, we provide more important definitions and a useful result.
Definition 1.4. (Càdlàg Stochastic Process): A stochastic process X is said to be càdlàg if it almost surely (a.s.) has sample paths which are right continuous with left-hand limits.

The jump process ∆L = (∆L_t)_{t≥0} associated with a Lévy process, L, is defined by ∆L_t = L_t − L_{t−}, for each t ≥ 0, where L_{t−} = lim_{s↑t} L_s is the left limit at t. Let L = (L_t)_{0≤t≤τ} with L_0 = 0 a.s. be the càdlàg version of a Lévy process. Also, we assume that the Lévy measure, ν, satisfies the integrability conditions

∫_{|y|<1} |y|^2 ν(dy) < ∞ and ∫_{|y|≥1} ν(dy) < ∞.

The compound Poisson process with intensity λ > 0 of the underlying Poisson process, N, has Lévy measure ν(dy) = λF̃(dy), where F̃ is the common distribution function of the iid sequence of random variables (J̃_j)_{j∈N}. In this spirit, we consider a Lévy-Itô decomposition of the form

L_t = at + bZ_t + γ ∫_0^t ∫_R y M̃(ds, dy),    (1.3)

where a = E[L_1] and (bZ_t)_{0≤t≤τ} is a Brownian motion with standard deviation b ≥ 0. From (1.3) it follows that

dL_t = a dt + b dZ_t + γ ∫_R y M̃(dt, dy).
Furthermore, we have that

M̃(dt, dy) = M(dt, dy) − ν(dy)dt

is the compensated Poisson random measure on [0, ∞) × R \ {0} related to L, where M(dt, dy) is the Poisson random measure on R_+ × R \ {0} with intensity measure dt × ν. Here dt is the Lebesgue measure and ν is the Lévy measure as before. The measure dt × ν (or sometimes just ν) is called the compensator of M(dt, dy). In the sequel, if ν = 0 then we will have that L_t = Z_t, where Z_t is an appropriately defined Brownian motion.

Definition 1.6. (Quadratic Covariation): Suppose X and Y are two semimartingales. Then the quadratic covariation of X and Y, denoted by [X, Y]_t, is the unique semimartingale such that

d(X_t Y_t) = X_{t−} dY_t + Y_{t−} dX_t + d[X, Y]_t, t > 0.

Proposition 1.7. (Stopping Time): Let X be an adapted càdlàg stochastic process, and let Λ be a closed set. Then the random variable

T(ω) = inf{t > 0 : X_t(ω) ∈ Λ or X_{t−}(ω) ∈ Λ}

is a stopping time.

Proof. The proof is contained in [37] and will not be reproduced here.

Definition 1.8. (Optimal Stopping Time): Let (X_n)_{n≥0} be a Markov chain on B, with transition matrix Õ. Assume we have two bounded functions c : B → R and Ξ : B → R, the continuation cost and the stopping cost, respectively. A random variable, τ, which can take any value in Z_+ ∪ {∞}, is called an optimal stopping time if for every η^s ∈ Z_+, the event {τ = η^s} depends only on X_1, X_2, . . . , X_{η^s}.

1.2.5. Preliminaries about Shot Noise Processes for Primary & Secondary RML Losses

In order to describe RML losses from securitization-related events, we appeal to the theory of shot noise processes (SNPs). In this regard, a significant time lapse exists between the initial RML loss event and the time at which all RML losses have materialized and the event's fallout is considered to be a thing of the past. RML loss events such as internal fraud frequently take months or years to settle once detected. The settlement process related, for instance, to the SMC may also involve multiple losses in the form of write-downs, restitution, legal liability, fines and others. For example, the bankruptcy of Lehman Brothers (a primary loss) caused secondary losses via a depreciation in the price of commercial real estate. Fears of ORs liquidating their holdings in such real estate led to other holdings being sold in anticipation. It is also expected that the unloading of Lehman's debt and equity pieces of the $ 22 billion purchase of Archstone could cause a similar reaction
in apartment building sales (see, for instance, [42]). Furthermore, many banks, real estate investment trusts (REITs) and hedge funds suffered significant losses (secondary losses) as a result of mortgage payment defaults or mortgage asset devaluation (primary losses).

Consider a Poisson counting process, N_t, on the interval [0, t]. If the intensity of the Poisson process is a constant λ, then we have a time-invariant or homogeneous Poisson process with mean λt. On the other hand, if the intensity depends on time, λ(t), then we have a non-homogeneous Poisson process with mean expressed by the cumulative intensity

∫_0^t λ(s) ds
over the interval [0, t]. Poisson processes whose intensity is itself stochastic are called doubly stochastic Poisson or Cox processes. We postulate a model for credit risk where, for each of the i-th RML subportfolios or combination of subportfolios, the cumulative RML loss process on [0, t] follows a compound Poisson process of the form

L^i_t = Σ_{j=0}^{N^i_t} S^{ji}.
Our model is a special case of a non-homogeneous compound Cox process model in the i-th RML subportfolio. In particular, we study the behavior of the two-dimensional process J = (S^i, N^i) in which the RML losses, S^i, and the Poisson process, N^i, are dependent. We consider a non-homogeneous, time-dependent Poisson point process with intensity parameter, λ^i(t), in the i-th RML subportfolio. If we assume that λ^i is itself the sample function of some stationary continuous stochastic process, then the counting process is known as a doubly stochastic process. In the case of SNPs, λ undergoes random changes due to individual jumps that coincide with the arrivals of events. Shot noise is often referred to as a self-exciting process in the sense that the intensity depends on the location of the points at which counting process events occur prior to t. These properties of SNPs provide a suitable framework for modeling sequences of RML losses, each of which is followed by secondary RML losses that may be associated with, for instance, RMBSs.

We begin the description of the SNP model by introducing some notation and assumptions. In our model, we have the pairs

(S^{1′i}, S^{2′i}) and (N^{1′i}, N^{2′i}).

We define the sizes of primary RML losses, S^{1′ji}, in the i-th RML subportfolio associated with RMBSs j, j = 0, . . . , n, as the expected and unexpected RML losses that occur randomly at times T^{1′ji} with realizations {τ^{1′ji}}_{j≥0} at a constant rate λ^{1′i}. Let S^{1′i} be an iid positive random variable independent of the Poisson process, N^{1′i}, and let the sequence {s^{1′ji}}_{j≥0} be the realizations of the sizes associated with these losses, representing random draws from a continuous distribution. Any primary RML loss is the first in a possible sequence of losses that are associated with a particular event. Therefore, we say that in the i-th RML subportfolio there is an associated sequence of secondary events (or aftershocks) with sizes, S^{2′i}, and realizations, {s^{2′jki}}_{k≥0}, arriving at the frequency rate λ^{2′ji}(t).
1.3. Main Problems and Outline of the Chapter
In this chapter, we derive Lévy process-based models of jump diffusion-type for banking operations involving securitization, capital and profitability. The main motivation for our study is the need to generalize the more traditional discrete- and continuous-time models of banking variables such as RMBSs, regulatory capital, ROA and ROE in the context of interacting RML portfolios. A further motivation is the fact that RMBSs have, for instance, been at the forefront of investigations into the SMC. An example of the latter is that both the failure of the Lehman Brothers investment bank and the acquisitions of Merrill Lynch (September 2008) and Bear Stearns (March 2008) by Bank of America and JP Morgan, respectively, were preceded by an increase in RML securitization. A similar trend was discerned for the U.S. mortgage companies, Fannie Mae and Freddie Mac, which had to be bailed out by the U.S. government at the beginning of September 2008. As a precursor to our main conjectures, we construct stochastic models for banking items such as RMBSs, regulatory capital and profits. These include models that describe the price dynamics of RMBSs, Basel II regulatory capital and bank NIATBCF by means of semi-martingale theory. Here the cost of funds is related to the cost of banking activities such as deposit withdrawals, RML extension and operations. As far as securitization pricing is concerned, our main conclusion is that both sub- and super-optimal RMBS pricing may be characterized in terms of a constant-valued pricing error term. Furthermore, we apply the aforementioned models to the ongoing SMC. In particular, we will highlight the role of OR's problematic RML subportfolios, RML demand, risk premium, sensitivity of changes in capital to RML extension, Central Bank base rate, OR's own RML rate, securitization via RMBSs, pricing of RMBSs, RML losses and default rate, RML loss provisions (see, for instance, [21]), deposit raising activities, LIBOR, asset portfolio, liquidity, capital valuation as well as profit in the SMC. Furthermore, we provide numerical examples of the dynamics of bank profitability via the indicators ROA and ROE. Here, the data is sourced from 36 anonymous Southern African banks for the period 2002-2007.

1.3.1. Main Problems

The main problems to emerge from the previous paragraph can be formulated as follows.

Problem 1.9. (Modeling Bank Securitization, Capital and Profitability): Can we model the dynamics of RMBS prices, regulatory capital and profitability via Lévy processes of jump diffusion-type? (Section 2.).

Problem 1.10. (Optimal Bank Securitization Problem): Which decisions about optimal stopping times must be made in order to attain an optimal price for RMBSs? (Theorem 3.3 in Subsection 3.2. of Section 3.).

Problem 1.11. (Numerical and Illustrative Examples): Can we confirm that our banking models for profitability and its indicators are realistic in some respects? (Section 4.).

Problem 1.12. (Connections with the SMC): How do the banking models developed in our paper relate to the SMC? (Section 5.).
1.3.2. Outline of the Chapter

In the current subsection, an outline of our contribution is given. In Section 2., we present OR's stochastic model. In this regard, we give the list of possible RML subportfolios that OR may wish to hold. Section 2. describes the behavior of bank items (liabilities, cash, bonds, shares, RMLs and capital) and their mathematical models. Furthermore, this section investigates the relationship between RML demand and supply and other parameters (RML rate, rate of inflow of deposits and shift parameters). In addition, our contribution presents a formula for OR's own RML rate, r^{Λi}. The same section also focusses on the types of losses that OR may experience. In this regard, our chapter identifies the expected and unexpected losses which may arise from RML extension. In Section 2.2.3., we construct the stochastic dynamics of the price of a RMBS. The analysis in Section 2. enables us to develop a stochastic model for the NIATBCF for banks with interacting RML subportfolios. The solution of the dynamic model of NIATBCF (see (2.38)) is given by Lemma 2.4. This solution is achieved via the one-dimensional Itô formula for jump diffusion processes. In Section 3., we also compute the total nett bank profitability generated by banks with interacting RML subportfolios. In the sequel, we determine the dynamics of ROE and ROA in order to indicate the evolution of bank profit (see Section 2.6.2.). These dynamics are determined via the SDEs for bank assets and equities (see Subsections 2.3. and 2.5.).

The development described in the previous paragraph enables us to set up an optimal securitization problem in Section 3.. In particular, we solve an optimal stopping time problem (see Theorem 3.3 in Subsection 3.2.). The optimal securitization problem in Theorem 3.3 only makes sense if OR can decide when to sell the RMBSs. Where p^i is the initial RMBS price associated with the i-th RML subportfolio, the formulas in (3.41) only yield the correct value function, V_s(p^i), provided the sale of RMBSs actually happens at p^i = p^{i*}. Moreover, the value function has a power form before profit-taking from the sale of RMBSs and a linear form thereafter. Section 4. compares bank profitability from real data with our simulation models for ROA and ROE. In Section 5., we analyze the connections between bank securitization, capital and profitability and the SMC. Also, in this section, we provide a background and diagrammatic overview of the SMC. Section 6. presents a few concluding remarks and highlights some topics for future research. Finally, in Section 7., we provide appendices containing the background results needed to prove Lemma 2.4 and our main result (Theorem 3.3 in Subsection 3.2.). In particular, the proof of Theorem 3.3 makes use of Theorem 2.2 and Proposition 2.3 of Chapter 2 in [32].
2. OR's Stochastic Model
Throughout the chapter, we choose n to be the number of RML subportfolios with i being the index of each RML subportfolio so that i = 1, 2, . . . , n.
2.1. OR's General Assets
In this subsection, we discuss issues that are related to OR’s capital, liabilities, cash, bonds and shares as well as RMLs. In order to model the uncertainty associated with these items
we consider the filtered probability space (Ω, F, (F_t)_{0≤t≤T}, P).

2.1.1. OR's Cash, Bonds and Shares

Below, we construct OR's stochastic model involving cash, bonds and shares. We denote the deterministic rates of return on cash and bonds for the i-th RML subportfolio by r^{Ci} and r^{Bi}, respectively, and the stochastic rate of return on shares by r^{Si}. Furthermore, our notation for the proportions invested in cash, bonds and shares from the i-th RML subportfolio at time t is π^{Ci}_t, π^{Bi}_t and π^{Si}_t, respectively. In this situation, we have that

π^{Ci}_t + π^{Bi}_t + π^{Si}_t = 1.

The standard values of the proportions invested in bonds and shares in the i-th RML subportfolio are denoted by π^{Bi}_0 and π^{Si}_0, respectively.

2.1.2. OR's Treasuries and Reserves

Treasuries, T_t, coincide with securities that are issued by national Treasuries at a rate denoted by r^T. In essence, they are the debt financing instruments of the federal government. There are four types of Treasuries, viz., Treasury bills, Treasury notes, Treasury bonds and savings bonds. All of the Treasury securities besides savings bonds are very liquid and are heavily traded on the secondary market. Bank reserves are the deposits held in accounts with a national institution (for instance, the Federal Reserve) plus money that is physically held by banks (vault cash). Such reserves are constituted by money that is not lent out but is earmarked to cater for withdrawals by depositors. Since it is uncommon for depositors to withdraw all of their funds simultaneously, only a portion of total deposits may be needed as reserves. OR uses the remaining deposits to earn profit, either by issuing RMLs or by investing in assets such as Treasuries and shares.

2.1.3. OR's Intangible Assets

In the contemporary banking industry, shareholder value is often created by intangible assets which consist of patents, trademarks, brand names, franchises and economic goodwill. Such goodwill consists of the intangible advantages a bank has over its competitors, such as an excellent reputation, strategic location and business connections. In addition, such assets can comprise a large part of OR's total assets and provide a sustainable source of wealth creation. Intangible assets are used to compute Tier 1 bank capital and have a risk weight of 100 % according to Basel II regulation (see Table 1 below). In practice, valuing these off-balance sheet items constitutes one of the principal difficulties with the process of bank valuation by a stock analyst. The reason for this is that intangibles may be considered to be "risky" assets for which the future service potential is hard to measure. Despite this, our model assumes that the measurement of these intangibles is possible (see, for instance, [24] and [41]).
2.2. Residential Mortgage Loans (RMLs) and Their Securitization
We use the notation r^{Λi} to denote OR's own RML rate, a stochastic variable which covers the cost of funds (deposits, partial insurance and regulatory capital) of the i-th RML subportfolio at time t. In addition, p^i_t and ρ^i_t denote the MR repayments-or-defaults-to-total RML value ratio at time t and the i-th subportfolio's total RML value-to-total RML value ratio at time t, respectively. The variable p^i_t can take both positive and negative values: it takes negative values in the case of MR defaults and positive values when MRs make multiple repayments with some time delay. This means that

p^i_t < 0 for MR defaults; p^i_t > 0 for MR delayed multiple repayments.    (2.1)

Also, it follows that

Σ_{i=1}^{m} ρ^i_t = 1.
In the sequel, the total RML value at time t is denoted by Λ_t, with the corresponding total RML value of the i-th RML subportfolio at time t having the notation Λ^i_t, where Λ^i_t = ρ^i_t Λ_t. In addition, Λ^{τi}_t denotes the total RML value that corresponds to MRs that regularly repay their RMLs from the i-th RML subportfolio at time t, where

Λ^{τi}_t := (1 + p^i_t) Λ^i_t,

and p^i is given by (2.1). The cost of capital of the i-th RML subportfolio at time t is denoted by c^{Ki}_t. The cost of capital for a bank is a weighted sum of the cost of equity and the cost of debt (subordinate debt). In particular, the cost of equity is the minimum rate of return a bank must offer shareholders to compensate for waiting for their returns and for bearing some risk. Furthermore, the cost of debt is the return paid by the bank to its debtholders.

2.2.1. RML Demand and Supply

In the sequel, we assume that OR operates in the primary market. In this regard, OR raises deposits from individuals and uses these deposits to issue RMLs. The paper [13] regards banks as intermediaries between savers who prefer to deposit in liquid accounts and MRs who prefer to take out long-maturity RMLs. Under ordinary circumstances, ORs can provide a valuable service by channeling funds from many individual deposits into RMLs for MRs. Individual depositors might not be able to make these RMLs themselves, since they know they may suddenly need immediate access to their funds, whereas business investments will only pay off in the future. Moreover, by aggregating funds from many different
depositors, banks help depositors save on the transactions costs they would have to pay in order to lend directly to businesses. Since banks provide a valuable service to both sides (providing the long-maturity RMLs MRs want and the liquid accounts depositors want), they can charge a higher RML interest rate than they pay on deposits and thus profit from the difference. Diamond and Dybvig's crucial point about how banking works is that under ordinary circumstances, savers' unpredictable needs for cash are unlikely to occur at the same time (see, for instance, [13] and [14]). Therefore, since depositors' needs reflect their individual circumstances, by accepting deposits from many different sources, OR expects only a small fraction of withdrawals in the short term, even though all depositors have the right to take their deposits back at any time. Thus OR can make RMLs over a long horizon, while keeping only relatively small amounts of cash on hand to pay any depositors that wish to make withdrawals. This means that, because individual expenditure needs are largely uncorrelated, by the law of large numbers ORs expect few withdrawals on any one day. Furthermore, OR also operates in the secondary market in order to bridge the gap between surpluses and deficits in its reserves. This involves transactions with other banks (for instance, interbank borrowing), with the Central Bank (monetary loans or deposits with the Central Bank) and in financial markets (for instance, purchases and sales of Treasury securities).

Moreover, the financial variables that are used to determine RML demand, ζ^i, and supply, ג^i, in the i-th RML subportfolio are OR's own RML rate, the rate of inflow of deposits, and macroeconomic and business cycle shift parameters, denoted by r^{Λi}, r^{di}, α̃^i and β̃^i, respectively. More specifically, we write the demand function for bank credit in the i-th RML subportfolio as ζ^i(r^{Λi}, α̃^i). The shift parameter, α̃^i, represents macroeconomic factors such as changes in the level of macroeconomic activity, M, and conditions in the capital markets, υ^i. The relationship between the demand function, ζ^i, the bank's own RML rate, r^{Λi}, and the macroeconomic shift parameter, α̃^i, may be given by

∂ζ^i/∂r^{Λi} < 0, ∂ζ^i/∂M > 0 and ∂ζ^i/∂υ^i < 0.    (2.2)
From the inequalities above, we see that when banks increase their own RML rates, then RML demand, ζ^i, will decrease and vice versa. Furthermore, when the level of macroeconomic activity, M, improves, the RML demand, ζ^i, will increase and vice versa. Lastly, when conditions in the capital markets, υ^i, deteriorate, the RML demand, ζ^i, will increase and vice versa. Also, we write the supply function of the public's deposits of the i-th RML subportfolio as ג^i(r^{di}, β̃^i). In this case, the shift parameter, β̃^i, represents the business cycle, and its relationship with ג^i(r^{di}, β̃^i) is written as

∂ג^i/∂β̃^i > 0.

This shows that during an economic upturn OR will supply more credit, but less credit during a downturn. A further relation between the rate of inflow of deposits, r^{di}, and ג^i(r^{di}, β̃^i) is as follows:
∂ג^i/∂r^{di} > 0.

In this regard, when the rate of inflow of deposits increases, OR will supply more credit to MRs. In our contribution, we represent OR's activity in the secondary market by

Υ^i = Λ^i − (1 − R^{pi}) ג^i,    (2.3)
where R^{pi} is the reserve requirement on public deposits in the i-th RML subportfolio. Equation (2.3) can be positive, negative or zero. In this regard, if Υ^i > 0 then OR has a shortage of fund sources in the primary market and will have to raise funds in the secondary market at an interest rate of r^{wi} via, for instance, interbank borrowing. Moreover, if Υ^i < 0 then OR has excess fund sources and is able to buy assets in the secondary market via, for instance, deposits with the Central Bank or Treasuries, earning an interest rate r^{awi}.

2.2.2. RML Pricing

We suppose that, after providing liquidity, OR lends in the form of RMLs, Λ^i, at OR's own RML rate, r^{Λi}_t. In the i-th RML subportfolio, this RML rate for profit-maximizing ORs is determined by the risk premium (or yield differential), given by

ϱ^i_t = r^{Λi}_t − r_t,    (2.4)
the industry's market power as determined by its concentration, N^i, the market elasticity of RML demand, η^i, the Central Bank base rate, r_t, the marginal cost of raising funds in the secondary market, c^{rwi}, and the product of the cost of equity raised, c^{Ei}, and the sensitivity of the required capital to changes in the amount of RMLs extended,

∂K^i/∂Λ^i.    (2.5)
In this situation, in the i-th RML subportfolio, we may express OR's own RML rate, r^{Λi}, as

r^{Λi}_t = ϱ^i + (1 + r_t) N^i/η^i + c^{rwi} + c^{Ei} ∂K^i/∂Λ^i + E[l^i],    (2.6)

where

N^i = Σ_{j=1}^{m} (Ŝ^{ji})^2

is the Herfindahl-Hirschman index of the concentration in the RML market,

Ŝ^{mi} = Λ^{mi}/Λ^i

is the market share of bank m in the RML market and

η^i = − (∂Λ^i/∂r^{Λi}_t)(r^{Λi}_t/Λ^i)

is the elasticity of RML demand.
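A back-of-the-envelope evaluation of the RML rate formula (2.6) as reconstructed above, with invented values for every ingredient, may clarify how the components combine.

```python
# OR's own RML rate per (2.6); every input below is a hypothetical value.
risk_premium = 0.02       # varrho, the risk premium from (2.4)
base_rate = 0.05          # central bank base rate r
herfindahl = 0.2          # N, concentration of the RML market
elasticity = 1.5          # eta, market elasticity of RML demand
c_secondary = 0.005       # marginal cost of funds raised in the secondary market
c_equity = 0.10           # cost of equity raised
dK_dLambda = 0.08         # sensitivity of required capital to RML extension
expected_losses = 0.01    # E[l], provisioning for average expected RML losses

rml_rate = (risk_premium
            + (1 + base_rate) * herfindahl / elasticity
            + c_secondary
            + c_equity * dK_dLambda
            + expected_losses)
print(f"r^Lambda = {rml_rate:.2%}")
```

Note how market concentration enters through the ratio N/η: a more concentrated market, or less elastic demand, pushes the RML rate up.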
Also, in our model, besides the risk premium, we include E[l^i], which constitutes the amount of provisioning that is needed to match the average expected RML losses.

2.2.3. Residential Mortgage-Backed Securities (RMBSs)

In the modeling and pricing of RMBSs we must consider both the dynamic behavior of the term structure of interest rates and the prepayment behavior of MRs. In this regard, RMBS valuation depends on assumptions about a particular stochastic process for term structure movements and uses specific statistical models for prepayment. In order to price RMBSs, we suppose that a set of k variables, denoted by v, are the underlying factors that determine the dynamics of interest rates and prepayment behavior. In our case, we consider a v that includes only interest rate variables (e.g., the level of interest rates) as well as a RMBS pricing error term, χ (e.g., the marginal cost of securitization). The pricing error term allows for the fact that model prices based on a small number of pricing factors will not be the same as quoted market prices. There are several reasons why model prices differ from market prices. Firstly, bid prices may not be synchronized with respect to the interest rate quotes. Secondly, the RMBS prices may refer to prices of unspecified reference RML portfolios in the marketplace. Thirdly, certain pricing factors may not be specified in the model. In the light of the above, the RMBS price at time t, denoted by P_t, can be written as

P_t = h(v_t) + χ_t,    (2.7)
where h(v_t) is a function of the state variable, v_t. In our case, we consider the RMBS pricing error term, χ_t = χ, to be a constant. Suppose the price, P^i, at time t of RMBSs associated with the i-th RML subportfolio may be represented via a geometric Lévy process given by

dP^i_t / P^i_{t−} = α^{Pi} dt + σ^{Pi} dZ^{Pi}_t + γ^{Pi} ∫_R y^i M̃^i(dt, dy^i), P^i_0 = p^i,    (2.8)

where α^{Pi}, σ^{Pi} and γ^{Pi} are constants, with γ^{Pi} y^i > −1 a.s. ν^i. Furthermore, we assume that OR is able to sell the RMBSs at time s + τ, with the expected discounted nett payoff being given by
J_τ(s, p^i) := E^{s,p^i}[ exp{−δ^i(s + τ)} (P^i_τ − χ^i) · 1_{τ<∞} ],    (2.9)

where δ^i > 0 and χ^i are the discounting exponent and RMBS error term, respectively. Next, we let the integro-differential operator, G^p, of P_t on C^2_0(R), of the controlled process

Q_t = (s + t, P^i_t), t ≥ 0, Q_{0−} = q = (s, p^i),

be given by

G^p ξ(s, p^i) = ∂ξ/∂s + α^{Pi} p^i ∂ξ/∂p^i + (1/2)(σ^{Pi} p^i)^2 ∂^2ξ/∂(p^i)^2 + ∫_{−1}^{∞} { ξ(s, p^i + γ^{Pi} y^i p^i) − ξ(s, p^i) − γ^{Pi} y^i p^i ∂ξ/∂p^i } ν^i(dy^i).    (2.10)
In the boom period, ORs became highly leveraged as a result of increasing their securitization activities. Moreover, the financial leverage profit engine (see Figures 3 and 1) involving RMBSs played a major role in the SMC discussed in Section 5..

2.2.4. RML Losses and their Provisioning

In the i-th RML subportfolio, the frequency process in our RML loss SNP model is doubly stochastic: the underlying Poisson count process has a varying intensity rate with a homogeneous and a non-homogeneous component. The homogeneous component arises from a constant intensity, λ^{1′i}, of the frequency of the primary losses due to defaults on mortgage payments, S^{1′i}, with observed realizations {s^{1′ji}}_{j≥0}. The additional non-homogeneous term is induced when the frequency rate experiences jumps at the points of arrival of primary RML losses, with a subsequent decay. For each mortgage payment default, the frequency rate of secondary losses due to RML securitization is modeled as a function of two elements, viz., the size of the observed primary RML losses and the time that has elapsed since the occurrence of these losses, given by
λ^{ji} = λ(s^{1′ji}, t − τ^{1′ji}) I_{(0,t]}(τ^{1′ji}),

where the indicator function is defined in the standard way as

I_{(0,t]}(τ^{1′ji}) = 1 if τ^{1′ji} ∈ (0, t], and 0 if τ^{1′ji} ∈ (t, ∞).    (2.11)
The dependence of the frequency rate of secondary losses from securitization on the size of the primary loss from defaults on mortgage payments and on time may be further formulated as

λ^{ji}(t) = ρ(s^{1′ji}) Ψ^i(t − τ^{1′ji}) I_{(0,t]}(τ^{1′ji}),    (2.12)
where the term ρ(s^{1′ji}) dictates the contribution of the primary RML losses to the size of the jump in the associated frequency process, and the time-dependent term can be further formulated as

Ψ^i(t − τ^{1′ji}) = exp{−γ(t − τ^{1′ji})}, γ ≥ 0 (Exponential Decay in Time);
Ψ^i(t − τ^{1′ji}) = (t − τ^{1′ji})^{−γ}, γ ≥ 0 (Power Decay in Time).    (2.13)
When γ ≠ 0, the frequency decays with time and the two-dimensional process (S^i, N^i) is a RML losses and number of losses process. On the other hand, when γ = 0, the frequency is time-invariant and the process (S^i, N^i) is a RML loss process. The term ρ(s^{1′ji}) is a constant and may take many different forms. In our model, the jump size is related to the size of the primary RML losses due to MR payment defaults. For example, consider

ρ(s^{1′ji}) = k exp{−α̂ s^{1′ji}} (s^{1′ji})^{−β̂}.
Then, if α̂ > 0 and β̂ = 0, the relation of the frequency of secondary losses due to RML securitization to s^{1′ji} is an exponential decay in the i-th RML subportfolio. On the other hand, when α̂ = 0 and β̂ > 0, the decay in s^{1′ji} obeys a power function. Also, α̂ = 0 and β̂ = −1 give a linear case, and α̂ = β̂ = 0 refers to the scenario in which the frequency is independent of s^{1′ji}. If α̂ = β̂ = γ = 0, then the frequency is constant at k, and k = 0 means that no secondary losses take place, in which case the process (S^i, N^i) reduces to a homogeneous Poisson process. In the presence of shot noise in the RML loss frequency data, given I_{(0,t]}(τ^{1′ji}), the intensity rate of the Poisson process at any given point t becomes

λ^i(t) = λ^{1′i} + Σ_{j=0}^{N^{1′i}_t} ι^{1′ji}(t).    (2.14)
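A minimal simulation sketch of the shot-noise intensity in (2.14), where primary losses arrive at a constant rate and each adds a decaying contribution to the frequency of secondary losses, might read as follows; all parameter values, and the choice of the exponential decay from (2.13), are assumptions for illustration.

```python
# Shot-noise intensity: lambda(t) = lam1 + sum over primary events j of
# rho_j * exp(-gamma * (t - tau_j)) for tau_j <= t. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
T, lam1, gamma = 10.0, 1.0, 2.0

# Primary loss arrival times on [0, T] (homogeneous Poisson with rate lam1).
n = rng.poisson(lam1 * T)
taus = np.sort(rng.uniform(0.0, T, n))
rhos = rng.exponential(0.5, n)      # jump contributed by each primary loss

def intensity(t):
    """Intensity of secondary-loss arrivals at time t."""
    past = taus <= t
    return lam1 + np.sum(rhos[past] * np.exp(-gamma * (t - taus[past])))

print([round(intensity(t), 3) for t in (2.5, 5.0, 7.5)])
```

Each primary event raises the intensity by its jump size and the effect dies away exponentially, which is exactly the self-exciting behavior described above.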
In the case where a primary RML loss due to MR payment default, j, of observed size, s^{1′ji}, occurred at time τ^{1′ji}, the expected number of secondary losses before time t associated with this MR payment default event, j, is given by

E[N^{1′i}_t | τ^{1′ji}, s^{1′ji}] = ∫_0^{t−τ^{1′ji}} λ(s^{1′ji}, u) du.    (2.15)
Then, the expected number of secondary RML securitization-related losses for any random size of primary RML losses, drawn from a continuous distribution having density h_θ(s^i) with parameter (or parameter set) θ on support Ω, is

E[N^{1′i}_t | τ^{1′ji}] = ∫_Ω ∫_0^{t−τ^{1′ji}} λ(u, v) h_θ(u) du dv.    (2.16)
Note that, because the inter-arrival times between the occurrences of primary RML losses are distributed as iid exponential random variables with rate λ^{1′i}, the corresponding arrival times T^{1′ji}, j = 0, . . . , m, follow a gamma distribution with parameter set {j, λ^{1′i}}, such that

T^{1′ji} − T^{1′(j−1)i} ∼ Exp(λ^{1′i}), j = 0, . . . , m, with T^{1′0i} = 0;
T^{1′ji} ∼ Γ(j, λ^{1′i}), j = 0, . . . , m.    (2.17)
This property allows us to write the general expression for the expected number of secondary losses due to RML securitization before time t, triggered by any given primary loss from a RML default event, j, as

E[N^{1′i}_t | τ^{1′ji}] = ∫_0^{∞} ∫_Ω ∫_0^{t−w} λ(u, v) h_θ(u) h_{j,λ^{1′i}}(w) dv du dw.    (2.18)
The formula (2.18) implies that, since the variables T^{1′ji} are not iid, the sequence {N^{ji}_t}_{j≥0, t>0} is also not iid and depends on the number of events (i.e., shocks) that have occurred up until time t. The expected number of aggregate losses (primary RML losses plus secondary RML securitization losses) by time t is

E[N^i_t] = λ^{1′i} t + Σ_{j=0}^{[λ^{1′i} t]} E[N^{ji}_t].    (2.19)
The sizes of the primary RML losses due to MR payment default in the i-th RML subportfolio, S^{1′i}, are assumed to be iid, drawn from a continuous parametric distribution F^{θi} of, for instance, lognormal, Weibull or Pareto type. The magnitude of the secondary loss associated with the RML securitization sequence, S^{2′ji}(t), from default event j, with realizations {s^{2′ji}(t)}_{j≥0}, is a function that depends on the size of the primary RML losses, S^{1′ji}, and on time, such that

S^{2′ji}(t) = S^{1′ji} exp{−ø(t − τ^{1′ji})} I_{(0,t]}(τ^{1′ji}) + ε(t), ø ≠ 0 (Exponential Decay in Time);
S^{2′ji}(t) = S^{1′ji} (t − τ^{1′ji})^{−ø} I_{(0,t]}(τ^{1′ji}) + ε(t), ø ≠ 0 (Power Decay in Time);
S^{2′ji}(t) = S^{1′ji}, ø = 0, where S^{1′i} and S^{2′i} are iid,    (2.20)
with the assumption that the noise term in the first two cases is a mean-zero normal random variable with a varying scale parameter that tends to zero as time increases, so that ε(t) ∼ N(0, σ(t)). In a realistic scenario, the sizes of secondary losses due to RML securitization would decay over time, as larger losses get settled first and smaller losses get settled last, so that ø > 0. In a special case, if secondary loss sizes increase as time elapses, then ø < 0. The equality ø = 0 implies that the secondary losses are iid and are drawn from the same distribution as the primary losses. The characterization in (2.20) allows for a stochastic randomization of the secondary loss values from RML securitization, centered around a deterministic conditional mean function of the form

μ̃^{ji}(t) = s^{1′ji} exp{−ø(t − τ^{1′ji})} I_{(0,t]}(τ^{1′ji}), ø ≠ 0.    (2.21)
Under the conditions described above, the aggregate loss process, S^i, in the i-th RML subportfolio may be represented by

S^i_t = Σ_{j=0}^{N^{1′i}_t} ( S^{1′ji}_t + Σ_{k=0}^{N^{ji}_t} S^{jki}(t) I_{(τ^{1′ji}, t]}(t) ).    (2.22)
In principle, we can decompose OR's aggregate RML losses, S^i_t, in (2.22) into a component for expected RML losses, S^{ei}_t, and unexpected RML losses, S^{ui}_t, such that S^i_t = S^{ei}_t + S^{ui}_t. In the i-th RML subportfolio, (α̃^i ϱ^i + E[l^i])Λ^i_t, where 0 ≤ α̃^i ≤ 1 and ϱ^i is the risk premium from (2.4), and RML loss reserves, R^{li}, act as buffers against expected and unexpected RML losses, respectively (see, for instance, [34]). In this case, we can distinguish between total provisioning for loan losses, P^i, and loan loss reserves, R^{li}. Provisioning entails a decision made by OR about the size of the buffer that must be set aside in a particular time period in order to cover RML losses, S^i. However, not all of P^i may be used in a time period, with the amount left over constituting the RML loss reserves, R^{li}, so that for the i-th RML subportfolio we have R^{li}_t = P^i_t − S^i_t when P^i > S^i. Our model for provisioning at time t can be taken to be

P^i_t = (α̃^i ϱ^i + E[l^i]) Λ^i_{s_0} for P^i > S^i (Expected Losses);
P^i_t = (α̃^i ϱ^i + E[l^i]) Λ^i_{s_0} + R^{li}_t for P^i ≤ S^i (Expected + Unexpected Losses), s_0 < t,    (2.23)
where s0 is the time at which the RML was originated.
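Read as a decision rule, (2.23) says that provisions cover expected losses and are topped up by loss reserves when losses exceed provisions. A literal transcription, with hypothetical inputs, is the following.

```python
# Provisioning rule (2.23): cover expected losses; add reserves when P <= S.
def provisions(alpha, premium, exp_loss_rate, loans_at_origination,
               reserves, losses_exceed_provisions):
    """Total provisioning P_t for one RML subportfolio (illustrative)."""
    expected = (alpha * premium + exp_loss_rate) * loans_at_origination
    if losses_exceed_provisions:      # P <= S: expected + unexpected losses
        return expected + reserves
    return expected                   # P > S: expected losses only

print(provisions(alpha=0.8, premium=0.02, exp_loss_rate=0.01,
                 loans_at_origination=1000.0, reserves=5.0,
                 losses_exceed_provisions=True))
```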
2.3. Other Issues Related to OR's Assets
In this subsection, we discuss matters related to OR's assets, such as risk-weighted assets (RWAs), aggregate OR assets, OR's asset price processes and asset portfolios.

2.3.1. Risk-Weighted Assets

We consider RWAs that are defined by placing each on- and off-balance sheet item into a risk category. The more risky assets are assigned a larger weight. Table 1 below provides a few illustrative risk categories, their risk weights and representative items.

Table 1. Risk Categories, Risk Weights and Representative Items

Risk Category | Risk Weight | Banking Items
1 | 0 % | Cash, Bonds, Treasuries, Reserves
2 | 20 % | Shares
3 | 50 % | RMLs
4 | 100 % | Intangible Assets
5 | 100 % | Loans to Private Agents
As a result, RWAs are a weighted sum of OR's various assets. In the sequel, we can identify a special risk weight on RMLs, ω^{Λi} = ω^i(M_t), that is a decreasing function of current macroeconomic conditions, so that

∂ω^i(M_t)/∂M_t < 0.    (2.24)
This is in line with the procyclical notion that during booms, when macroeconomic activity increases, the risk weights will decrease. On the other hand, during recessions, risk weights may increase because of an elevated probability of default and/or loss given default on RMLs. In the sequel, it may be more useful to represent the RWAs in the following way. We recall that the charge to cover credit risk equals the sum of OR's long and short trading positions multiplied by the risk weights for specific assets. As a result, if we let ω ∈ [0, 1]^m denote the m × 1 vector of asset risk weights, then the capital charge to cover credit risk at time t equals

a^{ri} = ω^{iT} (ρ^{ia+}_t + ρ^{ia−}_t),    (2.25)

where for any ρ^{ia} we denote by ρ^{ia+} the m × 1 vector with components ρ^{ia+} = max[0, ρ^{ia}] (long trading positions)
and by ρ^{ia−} the m × 1 vector with components ρ^{ia−} = max[0, −ρ^{ia}] (short trading positions).
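Computed literally from Table 1 and (2.25), the credit-risk charge is an inner product of the risk-weight vector with the long and short position vectors; the toy positions below are assumptions.

```python
# Risk-weighted assets and credit-risk charge per Table 1 and (2.25).
import numpy as np

weights = np.array([0.0, 0.2, 0.5, 1.0, 1.0])             # risk weights per category
positions = np.array([100.0, 40.0, 250.0, 10.0, -30.0])   # signed positions (toy)

long_part = np.maximum(0.0, positions)    # rho^{a+}, long trading positions
short_part = np.maximum(0.0, -positions)  # rho^{a-}, short trading positions
credit_charge = weights @ (long_part + short_part)
print(f"Credit-risk charge: {credit_charge:.1f}")
```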
2.3.2. OR's Aggregate Assets

In this subsection, we provide a description of OR's aggregate risk-free and risky assets and their respective rates. The objective is to identify the key characteristics of broad swings in the prices of these assets that may be masked by differences in the behavior of individual prices, so as to highlight their relationship to macroeconomic performance and monetary policy. Two basic criteria for selecting the assets included in the aggregate should be that they constitute a sizeable proportion of said assets and that they are traded with some frequency on primary and secondary markets. There are two methods of defining and constructing the aggregate. The first is to define the aggregate price as a measure of the values of the underlying assets. Secondly, we may define the aggregate price as a measure of the change in the value of the risk-free and risky assets themselves. Nevertheless, as a purely practical matter, it would be of interest to know whether the differences in methodology produce substantial differences in the implied behavior of the aggregate assets over time. The calculation of the weights for these assets may be based on flow-of-funds balance sheets. In the i-th RML subportfolio, we suppose that an aggregate risk-free banking asset (ARFBA) may be expressed in terms of cash, C^i, bonds, B^i, Treasuries, T^i, and reserves, R^i. The aggregate risk-free asset, A^i, can be written as

A^i = a^i_1 C^i + a^i_2 B^i + a^i_3 T^i + a^i_4 R^i, 0 ≤ a^i_1, a^i_2, a^i_3, a^i_4 ≤ 1,    (2.26)

where a^i_1, a^i_2, a^i_3 and a^i_4 are weights for the respective risk-free banking assets. The corresponding aggregate risk-free asset interest rate term may be expressed as

r^{Ai} = f^i(r^{Ci}, r^{Bi}, r^{Ti}, r^{Ri}).

We can define the price of the ARFBA at time t, A^i_t, in the i-th RML subportfolio, as following the process dA^i_t = r^{Ai} A^i_t dt for some fixed r^{Ai} ≥ 0. In a manner analogous to that for an ARFBA, in the i-th RML subportfolio, we can define an aggregate risky banking asset (ARBA) as the weighted sum of RMLs, Λ^i, shares, S^i, and intangible assets, I^i. In this case, the aggregate risky asset, Â^i, has the form

Â^i = â^i_1 Λ^i + â^i_2 S^i + â^i_3 I^i, 0 ≤ â^i_1, â^i_2, â^i_3 ≤ 1,    (2.27)

where â^i_1, â^i_2 and â^i_3 are corresponding weights and the ARBA interest rate may be expressed as

r^{Âi} = g^i(r^{Λi}, r^{Si}, r^{Ii}).
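The two aggregates in (2.26) and (2.27) are plain weighted sums, so a direct transcription with invented weights and values suffices to illustrate the construction.

```python
# Aggregate risk-free (2.26) and risky (2.27) banking assets; toy values.
def aggregate(values, weights):
    """Weighted sum with weights constrained to [0, 1], as in (2.26)-(2.27)."""
    assert all(0.0 <= w <= 1.0 for w in weights)
    return sum(v * w for v, w in zip(values, weights))

risk_free = aggregate([50.0, 120.0, 80.0, 30.0],   # cash, bonds, Treasuries, reserves
                      [1.0, 0.9, 0.8, 1.0])
risky = aggregate([400.0, 60.0, 25.0],             # RMLs, shares, intangibles
                  [0.95, 0.7, 0.5])
print(risk_free, risky)
```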
2.3.3. OR's Asset Price Processes

In the sequel, we discuss bank asset price processes in the i-th RML subportfolio. OR's investment portfolio is constituted by n + 1 assets including RMLs and bonds. We pick the first asset to be an ARFBA that earns a constant, continuously-compounded interest rate of r^{Ai}. In the i-th RML subportfolio, profit-maximizing banks set their rates of return on ARBAs as the sum of the risk-free rate r^{Ai}1 and a risk premium, ϱ^i. Here the unitary vector and risk premium are given by

1 = (1, 1, . . . , 1)^T and ϱ^i = (ϱ^{i1}, ϱ^{i2}, . . . , ϱ^{im})^T,

respectively. The sum r^{Ai}1 + ϱ^i covers, for instance, the cost of monitoring and screening of RMLs and the cost of capital in the i-th RML subportfolio. The m assets besides bonds are risky and their price process with reinvested dividends included, Y^{ij}, j = 1, . . . , m, follows a geometric Lévy process with drift vector r^{Ai}1 + ϱ^i and diffusion matrix σ̃^{Yij}, as in

Y^{ij}_t = Y^{ij}_0 + ∫_0^t I^Y_u (r^{Ai}1 + ϱ^i) du + ∫_0^t I^Y_u σ^{Yij} a^{Yij} du + ∫_0^t I^Y_u σ^{Yij} b^{Yij} dZ^Y_u + σ^{Yij} γ^{Yij} ∫_0^t ∫_R I^Y_u G^Y M̃(du, dy)
= Y^{ij}_0 + ∫_0^t I^Y_u (r^{Ai}1 + ϱ^r + σ^{Yij} a^{Yij}) du + ∫_0^t I^Y_u σ̃^{Yij} dZ^Y_u + γ̃^{Yij} ∫_0^t ∫_R I^Y_u G^Y M̃(du, dy),    (2.28)

where 1 ≤ i ≤ n, 1 ≤ j ≤ m, γ̃^{Yij} = σ^{Yij} γ^{Yij} and σ̃^{Yij} = σ^{Yij} b^{Yij}. Therefore, it follows that

Y^{ij}_t = (Y^{i1}_t, Y^{i2}_t, . . . , Y^{in}_t)^T and Y_0 = (Y^{i1}_0, Y^{i2}_0, . . . , Y^{in}_0)^T;

M̃(du, dy) = ( M̃^i(du, dy^{i1}), M̃^i(du, dy^{i2}), . . . , M̃^i(du, dy^{im}) )^T
= ( M^i(du, dy^{i1}) − ν^i(dy^{i1})dt, . . . , M^i(du, dy^{im}) − ν^i(dy^{im})dt )^T,

where G^Y is the m × m diagonal matrix with entries (y^{i1}, y^{i2}, . . . , y^{im}) ∈ R^n; I^Y_u denotes the m × m diagonal matrix with entries Y^{ij}_t and Z^Y is an m-dimensional Brownian motion. Also, γ̃^Y is the m × m diagonal matrix of the constant jump coefficients (γ̃^{Yi1}, γ̃^{Yi2}, . . . , γ̃^{Yim}).
M.A. Petersen et al.
2.3.4. OR’s Asset Portfolio In the sequel, the following assumptions are important. Assumptions 2.1. (Rank and Frictionless Trading): Without loss of generality, we have ij that rank (e σ Y ) = n. Furthermore, OR is allowed to engage in continuous frictionless trading over the planning horizon [0, T ]. In this case, we suppose that ρi is the n-dimensional stochastic process that represents the current value of the risky assets in the i-th RML subportfolio. In this case, the dynamics of the current value of the total bank’s assets in the i-th RML subportfolio, Ait , over any reporting period may be given by i
dAit = Ait (rA + ρiT ̺r )dt + Ait ρiT σ Ai aAi dt + Ait ρiT σ Ai bAi dZtAi Z i iT Ai Ai fi (dt, dy i ) − rAi Dti dt − dSti +At ρ σ γ yi M R iT r i Ai = At (r + ρ (̺ + σ Ai aAi ))dt + Ait ρiT σ Ai bAi dZtAi Z fi (dt, dy i ) − rAi Dti dt − k i dAit . yi M +Ait ρiT σ Ai γ Ai R
It then follows that dAit where
=
Ait
Z µ eAi dt + σ eAi dZtAi + γ eAi
R
y M (dt, dy ) − reDi Dti dt i fi
i
(2.29)
µ eAi = (1 + k i )−1 (rA + ρiT (̺r + σ Ai aAi )), σ eAi = (1 + k i )−1 ρiT σ Ai bAi , γ eAi = (1 + k i )−1 ρiT γ A and r˜Di = (1 + k i )−1 rA ; Di − is the face value of the deposits which is described in the usual way; µ eAi , σ eAi and γ eAi are the drift vector, volatility and constant jump coefficient corresponding to the assets portfolio, Ait , respectively.
2.4.
OR's liabilities are constituted by deposits and borrowings from other banks.

2.4.1. OR's Deposits

For simplicity, we assume that the face value (or outstanding value) of deposits, D, is fixed over the planning horizon and that complications related to equity issues and dividend payments are removed over this period.
OR’s liabilities are constituted by deposits and borrowings from other banks. 2.4.1. OR’s Deposits For simplicity, we assume that the face value (or outstanding value) of deposits, D, is fixed over the planning horizon and that complications related to equity issues and dividend payments are removed over this period.
Mortgage Loan Securitization, Capital and Profitability...
523
Remark 2.2. (Deposit Withdrawals and Bank Liquidity): A vital component of the process of deposit withdrawal is liquidity. The level of liquidity in the banking sector affects the ability of banks to meet commitments as they become due (such as deposit withdrawals) without incurring substantial losses from liquidating less liquid assets. Liquidity, therefore, provides the defensive cash or near-cash resources to cover banks’ liabilities. 2.4.2. OR’s Borrowings from Other Banks Interbank borrowing including borrowing from the Central Bank provides a further source of funds. Interbank markets have at least two key roles to play in modern financial systems. Firstly and most importantly, it is in such markets that central banks actively intervene to guide their policy interest rates. Secondly, efficiently run interbank markets effectively channel liquidity from institutions with a surplus of funds to those in need. This ultimately leads to more efficient financial intermediation. As a consequence, policymakers prefer a financial system with a well functioning and robust interbank market, viz., one in which the Central Bank can achieve its desired rate of interest and one that allows institutions to efficiently trade liquidity at favorable (LIBOR which represents the rate at which banks typically lend to each other) interest rates. In the sequel, in the i-th RML subportfolio, the amount borrowed from other banks is denoted by Bi , while the interbank borrowing rate and marginal borrowing costs are i i denoted by rB and cB , respectively. Of course, when our bank borrows from the Central i Bank, we have rB = r, where r is the Central Bank base rate appearing in (2.4). Another important issue here is the comparison between the cost of raising and holding deposits, i i i i (rD + cD )Di , and the cost of interbank borrowing, (rB + cB )Bi . In this regard, a bank in need of capital would have to choose between raising deposits and borrowing from other banks on the basis of overall cost. In other words, the expression i
i
i
i
min{(rD + cD )Di , (rB + cB )Bi }
(2.30)
is of some consequence.
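Expression (2.30) is simply a cost comparison between the two funding sources; a sketch with assumed rates and volumes follows.

```python
# Funding choice per (2.30): deposits versus interbank borrowing (toy inputs).
def cheapest_funding(r_d, c_d, deposits, r_b, c_b, interbank):
    deposit_cost = (r_d + c_d) * deposits
    interbank_cost = (r_b + c_b) * interbank
    if deposit_cost <= interbank_cost:
        return ("deposits", deposit_cost)
    return ("interbank", interbank_cost)

print(cheapest_funding(r_d=0.03, c_d=0.01, deposits=100.0,
                       r_b=0.045, c_b=0.002, interbank=100.0))
```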
2.5. OR's Capital
In this section, we discuss OR's regulatory capital, K^i (see, for instance, [15]), and its stochastic dynamics as well as capital adequacy.

2.5.1. OR's Regulatory Capital

OR's total capital, K^i, has the form

K^i_t = K^{T1i}_t + K^{T2i}_t + K^{T3i}_t,    (2.31)
where K^{T1i}_t, K^{T2i}_t and K^{T3i}_t are Tier 1, Tier 2 and Tier 3 capital, respectively. Tier 1 (T1) capital is the book value of bank capital, defined as the difference between the accounting value of the assets and that of the liabilities. In our contribution, T1 capital is represented at t− as the market
value of OR's equity, n_t E_{t−}, where n_t is the number of shares and E_t is the market price of OR's common equity at t. Tier 2 (T2) and Tier 3 (T3) capital consist of preferred stock and subordinate debt (collectively known as supplementary capital). Subordinate debt is subordinate to deposits and hence faces greater default risk. Tier 2 capital, O^i_t, issued at t− is represented by bonds that pay an interest rate, r^{Oi} (see [2]). Let us define OR's regulatory capital, K^i, as

K^i_t = A^i_t − D^i_t − B^i_t,    (2.32)
where A^i is the current value of the total assets and D^i is the face value of the deposits. According to Basel II regulation, OR is required to maintain regulatory capital above a minimum level equal to the sum of the charges to cover general market risk, credit risk and operational risk (see, for instance, [5]). As far as the charge to cover market risk is concerned, we suppose that OR divulges its current VaR at the start of each reporting period as well as the value of its profits and losses from the preceding reporting period. In fact, the market risk charge equals the VaR reported at the beginning of the current reporting period times a capital reserve multiple k^i. As a consequence of the above, if VaR ≥ 0 is the VaR reported to regulators at the beginning of the current reporting period and k^i is the multiple that currently applies, OR must satisfy the constraint
K^i_t ≥ k^i (VaR)^i + ω^{iT} (ρ^{i+}_t + ρ^{i−}_t) + O^i    (2.33)
throughout the duration of the reporting period. In essence, the reported VaR can differ from the true VaR because OR's future trading strategy, and hence its true VaR, cannot be observed by regulators. Moreover, Basel II prescribes that O^i in inequality (2.33) may be written as

O^i = max{ Σ_{l=1}^{8} β^{li} g^{li}, 0 }
and constitutes the capital charge to cover operational risk under the standardized approach.

2.5.2. Stochastic Dynamics of OR's Capital

From this point forward we do not consider operational risk (compare the last term in (2.33)), since it may be considered to be constant over all reporting periods. It may happen that OR incurs a cost, c^i, at the termination of each reporting period in which the actual loss exceeds the reported VaR. This cost does not take the increase in the capital reserve multiple k^i into account and is meant to relate to further regulatory interventions that can occur as a result of exceptions or reputation losses. For simplicity, we refer to these costs simply as reputation costs and assume that they are proportional to the amount by which the actual loss exceeds the reported VaR. This implies that
$$c^i = k_i\left(K^{bi} - K^{ei} - (VaR)^i\right)^+,$$

where $k_i \geq 0$ is the proportionality cost constant for capital. Here, $K^{bi}$ and $K^{ei}$ are the values of OR's regulatory capital at the onset and termination of the particular reporting period, respectively. At the end of each back-testing period, the number $e = 0, 1, \ldots, d$ of reporting periods in which the actual loss exceeded the reported VaR is determined. In this case, the reserve multiple, $k^i$, for the next back-testing period is set equal to $k^i(e)$ such that
$$k^i(0) \leq k^i(1) \leq k^i(2) \leq \ldots \leq k^i(d).$$

It is clear that the spectre of reputation costs and the revision of the value of $k^i$ at the end of each back-testing period removes the incentive to under-report the true $(VaR)^i$. On the other hand, capital requirements provide an incentive not to over-report. Besides the market risk emanating from volatility in the value of its assets (as given in (2.29)), OR experiences unhedgeable credit risk. In this regard, at the termination of each reporting period, a small probability, $p^i$, may exist that an anomaly will occur that will lead to the loss of an amount $q^i K^{bi}$, where $q^i \in [0, 1]$. These anomalies can result in bank failure if the value of OR's capital becomes negative so that debts cannot be paid. While these shocks are unhedgeable, OR can manipulate the default probability by controlling the probability of losses in the market value of its assets exceeding $(1 - q^i)K^{bi}$ in any given period. In essence, this means that OR avoids excessively risky investment strategies. Since the market value of deposits is kept constant and issues related to equity do not play a role, it follows from (2.29) that the dynamics of OR's regulatory capital may be represented by
$$dK_t^i = K_{t^-}^i\left[\tilde{\mu}^{Ai}\,dt + \tilde{\sigma}^{Ai}\,dZ_t^{Ai} + \tilde{\gamma}^{Ai}\int_{\mathbb{R}} y^i\,\widetilde{M}^i(dt, dy^i)\right] \qquad (2.34)$$

in the absence of anomalies.
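To make the interplay of drift, diffusion and jumps in dynamics of the form (2.34) concrete, the sketch below simulates a discretized geometric jump-diffusion by a simple Euler scheme with compound Poisson jumps. It is a simplified stand-in for (2.34), not the chapter's calibrated model: the uncompensated jump term and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

# Euler-type simulation of jump-diffusion capital dynamics of the form
# dK = K^- [ mu dt + sigma dZ + gamma * y dN ] (a simplified, uncompensated
# stand-in for (2.34)). All parameter values are hypothetical.
rng = np.random.default_rng(0)

mu, sigma, gamma = 0.05, 0.10, 0.8   # drift, volatility, jump coefficient
lam = 2.0                            # jump intensity (jumps per unit time)
T, n = 1.0, 1_000                    # horizon and number of time steps
dt = T / n

K = np.empty(n + 1)
K[0] = 100.0                         # initial regulatory capital
for t in range(n):
    dZ = rng.normal(0.0, np.sqrt(dt))           # Brownian increment
    n_jumps = rng.poisson(lam * dt)             # jumps in (t, t + dt]
    y = rng.normal(-0.02, 0.05, n_jumps).sum()  # aggregated jump marks
    K[t + 1] = K[t] * (1.0 + mu * dt + sigma * dZ + gamma * y)

print(f"terminal capital K_T = {K[-1]:.2f}")
```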
2.5.3. Stochastic Dynamics of OR's Equity Capital

In order to describe the dynamics of the equity capital we have to make some assumptions.

Assumptions 2.3. (Dynamics of Equity Capital): Assume that the equity capital follows a geometric Lévy process with the Lévy-Itô decomposition given by

$$L_t^{Ei} = a^{Ei}\,t + b^{Ei}\,Z_t^{Ei} + \gamma^{Ei}\int_0^t\int_{\mathbb{R}} y^i\,\widetilde{M}^i(dt, dy^i).$$
Under this assumption, we describe the evolution of equity capital in the $i$-th RML subportfolio, $E^i$, as

$$dE_t^i = E_{t^-}^i\left[(\mu^{Ei} + \sigma^{Ei} a^{Ei})\,dt + \sigma^{Ei} b^{Ei}\,dZ_t^{Ei} + \sigma^{Ei}\gamma^{Ei}\int_{\mathbb{R}} y^i\,\widetilde{M}^i(dt, dy^i)\right] \qquad (2.35)$$
$$= E_{t^-}^i\left[\tilde{\mu}^{Ei}\,dt + \tilde{\sigma}^{Ei}\,dZ_t^{Ei} + \tilde{\gamma}^{Ei}\int_{\mathbb{R}} y^i\,\widetilde{M}^i(dt, dy^i)\right],$$

where $\tilde{\mu}^{Ei} = \mu^{Ei} + \sigma^{Ei} a^{Ei}$, $\tilde{\sigma}^{Ei} = \sigma^{Ei} b^{Ei}$ and $\tilde{\gamma}^{Ei} = \sigma^{Ei}\gamma^{Ei}$. Here $\tilde{\mu}^{Ei}$, $\tilde{\sigma}^{Ei}$, $\mu^{Ei}$, $\tilde{\gamma}^{Ei}$ and $Z_t^{Ei}$ are the total expected returns on $E^i$, the volatility of $E^i$, the rate of return on $E^i$, the constant jump coefficient and a standard Brownian motion, respectively.

2.5.4. OR's Capital Valuation

In the sequel, we assume that

$$V^i(K^i, K_-^i, (VaR)^i, e, k^i, t)$$
denotes OR's value function at time $t$, conditional on current capital being $K^i$, capital at the beginning of the current reporting period being $K_-^i$, the VaR reported at the beginning of the current reporting period being $(VaR)^i$, the number of exceptions in the current backtesting period being $e$ and the current capital reserve multiple being $k^i$ (compare with [10]). Without loss of generality, suppose that $t$ is in the $l$-th reporting period, i.e., that $t \in [(l-1)\tau, l\tau)$. Finally, let $\mathcal{T} = \{1, \ldots, T\}$ denote the set of backtesting dates. Then it follows from the principle of dynamic programming that the capital valuation may be given by

$$V^i(K^i, K_-^i, (VaR)^i, e, k^i, t) = \max_{c^i}\, \mathbb{E}\left[v^i(K_{l\tau}^i, K_-^i, (VaR)^i, e, k^i, l\tau)\,\middle|\, K_t^i = K^i\right] \qquad (2.36)$$
such that

$$dK_s^i = K_{s^-}^i\left[\tilde{\mu}^{Ai}\,ds + \tilde{\sigma}^{Ai}\,dZ_s^{Ai} + \tilde{\gamma}^{Ai}\int_{\mathbb{R}} y^i\,\widetilde{M}^i(ds, dy^i)\right];$$

$$K_s^i \geq k^i (VaR)^i + \omega^{iT}\left(\rho_s^{i+} + \rho_s^{i-}\right) + O^i, \quad \text{for all } s \in [t, l\tau),$$
for $K^i \geq k^i (VaR)^i$, where $v^i(K^i, K_-^i, (VaR)^i, e, k^i, l\tau)$ represents the value of having capital, $K^i$, at the end of the current period, before the capital shock is realized. In turn,
$$v^i(K^i, K_-^i, (VaR)^i, e, k^i, l\tau) = (1 - p^i)\,\tilde{v}^i(K^i, K_-^i, (VaR)^i, e, k^i, l\tau) + p^i\,\tilde{v}^i(K^i - q^i K_-^i, K_-^i, (VaR)^i, e, k^i, l\tau),$$
where

$$\tilde{v}^i(K^i, K_-^i, (VaR)^i, e, k^i, l\tau) = \max_{(VaR)^{i1} \geq 0} V^i(K^{i1}, K_-^{i1}, (VaR)^{i1}, e_1, k_1, l\tau)\,\mathbb{I}_{\{K^i \geq K_-^i - (VaR)^i\}}$$
$$\qquad + \max_{(VaR)^{i2} \geq 0} V^i(K^{i2}, K_-^{i2}, (VaR)^{i2}, e_2, k_2, l\tau)\,\mathbb{I}_{\{K^i < K_-^i - (VaR)^i\}}.$$

Furthermore, $\xi > 0$ for $0 < p^i < p^{i*}$ and (3.-38) is true. Thus $\xi \geq g$ on $G$ holds. In the sequel, the
Lipschitz surface of the continuation region, $H$, is given by

$$\partial H = \left\{(s, p^i) : p^i = p^{i*}\right\}.$$

Therefore, we have that
$$\mathbb{E}^{p^i}\left[\int_0^\infty \mathbb{I}_{\partial H}(Q_t)\,dt\right] = \int_0^\infty \mathbb{P}^{p^i}\left[P_t^i = p^{i*}\right]dt = 0.$$
In addition, it is trivial that $\xi$ is twice continuously differentiable on $G \setminus \partial H$, with locally bounded derivatives near $\partial H$. However, outside of the continuation region, $H$, i.e., for $p^i \geq p^{i*}$, we have

$$\xi(s, p^i) = \exp\{-\delta^{Pi} s\}\,(p^i - \chi^i),$$

$f = 0$, and by (2.10) we have

$$G^{Pi}\xi + f(s, p^i) = \exp\{-\delta^{Pi} s\}\left[-\delta^{Pi}(p^i - \chi^i) + \alpha^{Pi} p^i\right] = \exp\{-\delta^{Pi} s\}\left[(\alpha^{Pi} - \delta^{Pi})p^i + \delta^{Pi}\chi^i\right]. \qquad (3.-37)$$
Now, if we assume that $\alpha^{Pi} < \delta^{Pi}$, then we get $(\alpha^{Pi} - \delta^{Pi})p^i + \delta^{Pi}\chi^i \leq 0$ for every $p^i \geq p^{i*}$. This inequality holds if and only if $(\alpha^{Pi} - \delta^{Pi})p^{i*} + \delta^{Pi}\chi^i \leq 0$. Again, the latter condition holds if and only if

$$p^{i*} \geq \frac{\delta^{Pi}\chi^i}{\delta^{Pi} - \alpha^{Pi}}.$$
In order to verify that $\tau_H = \inf\left\{t > 0 : P_t^i \notin H\right\} < \infty$ a.s., we proceed as follows. Consider the solution, $P_t^i$, of (2.38) given by Lemma 2.4. Then, by the law of the iterated logarithm for Brownian motion, we see that if Assumption 3.2.3 of Assumptions 3.2 in Subsection 3.2.1. holds, then

$$\lim_{t\to\infty} P_t^i = \infty \quad \text{a.s.}$$

and, in particular,

$$\tau_H < \infty \quad \text{a.s.} \qquad (3.-36)$$
Furthermore, since $\xi$ is bounded on $[0, p^{i*}]$, we can verify that $\left\{\exp\{-2\delta^{Pi}\tau\}P_\tau^{i2}\right\}_{\tau\in\mathcal{T}}$ is uniformly integrable. In this regard, we must show that there exists a constant $C$ such that

$$\mathbb{E}\left[\exp\{-2\delta^{Pi}\tau\}P_\tau^{i2}\right] \leq C, \quad \text{for all } \tau \in \mathcal{T}. \qquad (3.-35)$$
Applying Lemma 2.4, together with the properties of the mathematical expectation, we compute the left-hand side of (3.-35) as shown below:

$$\mathbb{E}\left[\exp\{-2\delta^{Pi}\tau\}P_\tau^{i2}\right] = P_0^{i2}\,\mathbb{E}\left[\exp\left\{\left(2\alpha^{Pi} - 2\delta^{Pi} + \sigma^{Pi2} + \int_{\mathbb{R}}\left[(1 + \gamma^{Pi} y^i)^2 - 1 - 2\gamma^{Pi} y^i\right]\nu^i(dy^i)\right)\tau\right\}\right].$$
If we consider Assumption 3.2.5 in Subsection 3.2.1., together with the possibility of obtaining the equality sign in that assumption, then from the above equation we can deduce (3.-35). It then follows that
$$\left\{\exp\{-2\delta^{Pi}\tau\}P_\tau^{i2}\right\}_{\tau\in\mathcal{T}}$$
is uniformly integrable. Next, we assume that $y^i > -1$ a.s. $\nu^i$. Then $P_{\tau_G}^i \in \partial G$ a.s. on $\{\tau_G < \infty\}$ and

$$\lim_{t \to \tau_G^-} \xi(P_t^i) = g(P_{\tau_G}^i) \cdot \mathbb{I}_{\{\tau_G < \infty\}}.$$

A situation may arise in which actual losses exceed $\alpha\varrho + E[l]$ (see equation (2.22) for RML losses and default rate). In the latter case, OR's capital may be needed to cover these excess (and unexpected) losses. If this capital is not enough, then OR will face insolvency. During the latter half of 2008, we saw a rapid decline in such capital. The most significant relationships between our models and the SMC are established via OR's own RML rate, $r^\Lambda$, given by (2.6) (see also HM1, HM3, FM4 and GIR1 in Figure 1). Low RML rates and large inflows of foreign funds created easy credit conditions for a number of years prior to the crisis. Such RML rates, together with increasing housing prices, encouraged MRs to take on potentially difficult mortgages in the belief that they would be able to refinance such mortgages at more favorable rates in the future (see HM1 in Figure 1). With interest rates on a large number of subprime and other ARMs adjusting upward during
the SMC, U.S. legislators, the U.S. Treasury Department and financial institutions took action. A systematic program to limit or defer interest rate adjustments was implemented to reduce the effect. In addition, ORs and MRs facing defaults were encouraged to cooperate to enable MRs to retain their homes (see, for instance, [42] for additional information).
5.7. Securitization via RMBSs as in Subsection 2.2.3.
Currently, there is a greater interdependence between the U.S. housing market and global financial markets due to RMBSs than before. When MRs default, the amount of cash flowing into RMBSs declines and becomes uncertain. Investors and businesses holding RMBSs have been significantly affected during the SMC. MBSs enabled financial institutions and investors around the world to invest in the U.S. housing market. Major banks and financial institutions borrowed and invested heavily in RMBSs and reported significant losses. In this regard, during the SMC, the decline in mortgage payments reduced the value of RMBSs, which eroded the nett worth and financial health of banks. This vicious cycle is at the heart of the crisis. Of the $ 10.6 trillion of U.S. residential mortgages outstanding as of midyear 2008, $ 6.6 trillion were held by mortgage pools and $ 3.4 trillion by traditional depository institutions (see, for instance, [42] for further details). Between the third quarter of 2007 and the second quarter of 2008, rating agencies lowered the credit ratings on $ 1.9 trillion in RMBSs. Financial institutions felt they had to lower the value of their RMBSs and acquire additional capital so as to maintain capital ratios. If this involved the sale of new shares of stock, the value of the existing shares was reduced. Thus ratings downgrades lowered the stock prices of many financial firms (see, for instance, [42] for more information). In December 2008, during congressional hearings on the collapse of Freddie Mac and Fannie Mae, the economist Arnold Kling testified that a high-risk RML could be "laundered" by Wall Street and returned to the banking system as a highly rated security for sale to investors, obscuring its true risks and avoiding capital reserve requirements (see, for instance, [34]).
5.8. Pricing of RMBSs as in (2.7) from Subsection 2.2.3.
Subsequent to the $ 700 billion bailout, the U.S. Treasury faced the problem of how to determine the price of vast quantities of RMBSs that were hard to sell. The U.S. government warned that these RMBSs had to be bought, otherwise the credit crisis would continue to deepen, with dire consequences for the entire global financial system. But determining how much to pay for these RMBSs was one of the most difficult SMC-related issues faced by the government. In general, overpricing would make the U.S. government appear to have been taken advantage of by the securities industry. On the other hand, underpricing the RMBSs might result in the U.S. Treasury causing some financial institutions to fail. Nevertheless, no matter what the government pays for Wall Street's toxic securities, investors, taxpayers and politicians will argue that the RMBSs were mispriced (see, for instance, [39] and [40]). The solution to the optimal securitization pricing problem presented in Theorem 3.3 of Section 3.2. has a few interesting ramifications for the SMC. From (3.-41), we note for the optimality exponent $\kappa > 1$ and $\chi^i > 0$ that $p^{i*}$ must have a positive value that is greater
than the value of $\chi^i$ by a specific proportionality constant. At optimality, (3.-41) and (3.-40) allow a direct comparison between $\kappa$ on the one hand and the discounting exponent, $\delta$, and the coefficient, $\alpha^{Pi}$, of the second term of the generator, $G^p$, on the other. More specifically, for an optimal profit, $p^{i*}$, we have that

$$\frac{\kappa}{\kappa - 1} \geq \frac{\delta^{Pi}}{\delta^{Pi} - \alpha^{Pi}}.$$
In addition, (3.-40) and (3.-39) seem to imply, for $(t, p^i) \in H$, that $p^{i\kappa}$ is an important index for profit-taking purposes. This may suggest that banks with a suboptimal RMBS price should be more concerned with the RMBS price itself rather than with its relationship to the RMBS error term, $\chi^i$. The suboptimality of the RMBS price may result in a negative nett RMBS price if the RMBS error term is very high. This will lead to the breakdown of the securitization process, since it would not be profitable to securitize RMLs under these conditions. A practical example from the SMC is that both the failure of the Lehman Brothers investment bank in September 2008 and the acquisitions of Merrill Lynch and Bear Stearns by Bank of America and JP Morgan, respectively, were preceded by an increase in securitization and followed by a decrease. A similar trend was discerned for the U.S. mortgage companies, Fannie Mae and Freddie Mac, which had to be bailed out by the U.S. government at the beginning of September 2008. By contrast, for $(t, p^i) \notin H$, the value of $\chi^i$ now plays a significant role. This may have something to do with the effect that the RMBS error term has on the overall cost of securitization and its profitability. Finally, (3.-39) intimates that the optimal RMBS price, $p^{i*}$, is bounded away from the RMBS error term, $\chi^i$, by a constant factor that depends on the discounting exponent for the RMBS price, $\delta^{Pi}$, and the coefficient of the second term, $\alpha^{Pi}$, of the generator, $G^p$. The securitization problem in Theorem 3.3 of Section 3.2. only makes sense if the issuer can decide when to sell RMBSs. In particular, the formulas in (3.-41) only yield the correct value function, $V_s(p^i)$, provided that RMBSs are actually sold at $p^i = p^{i*}$. Specifically, the value function has the power form $p^{i\kappa}$ before the sale occurs and the linear form $p^{i*} - \chi^i$ after selling RMBSs. In addition, the pricing error term can influence ORs to raise lending rates on their RMLs in the situation where banks need to recover higher securitization costs.
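The relationships just described can be checked numerically. In the sketch below we assume, purely for illustration, that the optimal threshold takes the proportional form $p^{i*} = \kappa\chi^i/(\kappa - 1)$ suggested by the discussion above, and we verify the comparison $\kappa/(\kappa-1) \geq \delta^{Pi}/(\delta^{Pi} - \alpha^{Pi})$; all parameter values are hypothetical.

```python
# Hypothetical illustration of the optimal RMBS price threshold discussed above.
# Assumed form (an illustration, not a formula quoted from the chapter):
# p_star = kappa / (kappa - 1) * chi.
kappa = 1.8      # optimality exponent, kappa > 1 (hypothetical)
chi = 0.5        # RMBS pricing error term (hypothetical)
delta_P = 0.10   # discounting exponent for the RMBS price (hypothetical)
alpha_P = 0.04   # coefficient of the second term of the generator (hypothetical)

p_star = kappa / (kappa - 1.0) * chi
lhs = kappa / (kappa - 1.0)            # proportionality factor
rhs = delta_P / (delta_P - alpha_P)    # lower bound at optimality
print(f"p* = {p_star:.3f} (factor {lhs:.3f} times chi)")
print(f"kappa/(kappa-1) = {lhs:.3f} >= delta/(delta-alpha) = {rhs:.3f}: {lhs >= rhs}")
```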
5.9. RML Losses and Default Rate as in (2.22) from Subsection 2.2.4.
In this subsection, we discuss RML losses and the default rate given by (2.22) in Subsection 2.2.4. (see HM4, FM1, FM3, GIR3 and GIR5 in Figure 1). An acceleration in RML growth, as was experienced prior to the SMC, eventually led to a surge in RML losses (see, for instance, (2.22) for RML losses and default rate) resulting in reduced bank profitability (see equations (2.38) and (2.39) for bank profit). This ultimately precipitated a round of bank failures. As experience during the mortgage crisis has shown, such a slump in the banking sector not only threatened the deposit insurance fund but also slowed the economy by entrenching credit crunches (see, for instance, [42] for further details). In this case, it should be borne in mind that faster RML growth leads to higher RML losses (see, for instance, [18] and [27]). When RML growth increases because ORs become more willing to lend, credit standards fall and RML losses eventually rise (see, for instance, [21] for more information).
5.10. RML Loss Provisions as in (2.23) from Subsection 2.2.4.
Experiences in the SMC have reinforced the fact that RML loss provisions (covering expected and unexpected RML losses) matter a great deal when it comes to earnings performance in the banking industry. For many distressed ORs suffering because of the SMC, increased RML loss provisions as in (2.23) translated to a decrease in earnings (see, for instance, [18], [21] and [27]). In the U.S., higher RML loss provisions were the primary reason that industry earnings for the first quarter of 2008 totaled only $ 19.3 billion, compared to $ 35.6 billion a year earlier (see [17] for more information). FDIC-insured commercial banks and savings institutions set aside $ 37.1 billion in RML loss provisions during the aforementioned quarter, more than four times the $ 9.2 billion provisioned in the first quarter of 2007. Provisions absorbed 24 % of the industry’s nett operating revenue (nett interest income plus total noninterest income) in the first quarter of 2008, compared to only 6 % in the first quarter of 2007.
5.11. OR's Asset Portfolios as in (2.29) from Subsection 2.3.
During the SMC, the quality of assets held in portfolios as in (2.29) deteriorated dramatically. Empirical evidence in [17] showed that deteriorating asset quality, concentrated in RML portfolios, continued to take a toll on the earnings performance of many FDIC-insured institutions in the first quarter of 2008. Two examples of the deterioration of asset portfolios held by prominent banks, and of their subsequent acquisition, are given below. The significant losses suffered by Merrill Lynch during the SMC in 2008 were partly blamed on the fall in value of its unhedged portfolio of CDOs after AIG ceased offering CDSs on Merrill's CDOs. The loss of confidence of trading partners in Merrill Lynch's solvency and its ability to refinance its short-term debt led to it being acquired by the Bank of America (see, for instance, [34]). Also, the British bank Bradford & Bingley was nationalized on Monday, 29 September 2008 by the U.K. government as a result of the poor quality of its asset portfolios. More specifically, the government took control of the bank's troubled 50 billion pound RML portfolio, while its deposit and branch network were sold to Spain's Grupo Santander (see, for instance, [42] for further details).
5.12. LIBOR as in (2.30) from Subsection 2.4.2.
The LIBOR as in (2.30) from Subsection 2.4.2. has had a significant part to play in the SMC. In this regard, the TED spread is a measure of credit risk for inter-bank lending. In the U.S., it is the difference between three-month LIBOR and the risk-free three-month U.S. Treasury bill (t-bill) rate. A higher TED spread indicates that banks perceive each other as riskier counterparties, as witnessed during the SMC. The t-bill is considered to be a risk-free asset because it is backed by the U.S. government. As far as the SMC is concerned, the TED spread reached record levels in late September 2008. In this regard, we note that the Treasury yield movement was a more significant driver than the changes in LIBOR. A three-month t-bill yield so close to zero means that people are willing to forgo interest just to keep their money (principal) safe for three months. This situation is indicative of a very high level of risk aversion and tight lending conditions. Driving this change were investors
shifting funds from money market funds (generally considered nearly risk free but paying a slightly higher rate of return than t-bills) and other investment types to t-bills. These issues are consistent with the September 2008 aspects of the SMC which prompted the Emergency Economic Stabilization Act of 2008 signed into law by the U.S. President on Thursday, 2 October 2008. In addition, an increase in LIBOR means that financial instruments with variable interest terms are increasingly expensive. For example, car loans and credit card interest rates are often tied to LIBOR. In the U.S., it is estimated that as much as $ 150 trillion in loans and derivatives are tied to LIBOR. During the SMC, higher interest rates placed downward pressure on consumption while increasing the risk of recession (see, for instance, [42] for more information).
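Since the TED spread is simply the gap between two quoted rates, a one-line computation suffices. The quotes below are hypothetical, chosen only to illustrate the calculation.

```python
# TED spread = 3-month LIBOR minus 3-month T-bill yield (both in percent).
libor_3m = 4.05   # hypothetical 3-month LIBOR quote, in percent
tbill_3m = 0.90   # hypothetical 3-month T-bill yield, in percent

ted_spread_bps = (libor_3m - tbill_3m) * 100  # spread in basis points
print(f"TED spread = {ted_spread_bps:.0f} bps")
# A wide spread signals that banks view each other as risky counterparties.
```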
5.13. Raising Deposits as in (2.30) from Subsection 2.4.2.
By the end of 2008, U.S. banks had raised over $ 250 billion in deposits from investors to offset losses suffered during the SMC (see [26] for more information). Deposits can provide financial institutions with a strong base for funding operations when many other avenues for raising capital, such as the securitization market, are frozen. Several financial institutions, such as Goldman Sachs Group Inc. and Morgan Stanley, have become bank holding companies that aim to build stable deposit bases and allow for wider access to government programs that were originally meant for commercial banks. A great deal of financial innovation has also been used by struggling companies to raise deposits. An example of this is American Express which, amid the ongoing SMC in November 2008, received approval to convert to a bank holding company. This enables American Express to access more government lending programs aimed at stimulating lending between financial institutions and extending credit to consumers. As we have mentioned before, the status change also allows American Express to create a large deposit base to help fund its operations (compare with [26] and [42]).
5.14. Contracted Liquidity as in Remark 2.2 from Subsection 2.4.
The SMC is characterized by shrinking liquidity in global credit markets and banking systems. As such, contracted liquidity (as described by Remark 2.2 from Subsection 2.4.; see also FM4 in Figure 1) is an ongoing economic problem (see, for instance, [42]). These liquidity concerns resulted in central banks around the world taking action to provide funds to member banks, to encourage lending to worthy MRs and to restore confidence in the commercial paper markets (see GIR1 in Figure 1). We consider the PD to be an important parameter that affects liquidity in the credit markets, because a higher probability of default may deter banks from lending to each other, causing a liquidity problem in global credit markets. According to [42], the U.S. Federal Reserve made a concerted effort during the SMC to support market liquidity. In this regard, along with other central banks worldwide, it undertook open market operations to ensure that affiliated banks remain liquid. These interventions mainly involved short-term loans (liquidity) to banks that were collateralized by government securities. In particular, the U.S. Federal Reserve used the Term Auction Facility (TAF) to accomplish this. Also, it increased the monthly amount of these auctions
throughout the SMC, raising it from $ 20 billion at inception to $ 300 billion by November 2008. In order to address continued liquidity concerns, the U.S. Federal Reserve had made a total of $ 1.6 trillion in loans to banks, against various types of collateral, by November 2008. In this regard, by October 2008, it expanded the collateral it lent against to include commercial paper. By November 2008, the U.S. Federal Reserve had purchased $ 271 billion of such paper, out of a program limit of $ 1.4 trillion. In November 2008, it announced the $ 200 billion Term Asset-Backed Securities Loan Facility (TALF), which supported the issuance of ABSs collateralized by loans related to motor vehicles, credit cards, education and small businesses. Once again, this drastic step was taken by the U.S. Federal Reserve to offset liquidity concerns.
5.15. Capital Adequacy as in (2.36) from Subsection 2.5.4.
During the SMC, it was strongly recommended that banks hold larger amounts of capital because of the increased possibility of large losses from their RML subportfolios. However, the SMC was characterized by sharp declines in bank capital. In this regard, we note that nondepository financial institutions (e.g., investment banks and mortgage companies) were not subjected to the same capital requirements as depository banks. As a consequence, during the SMC, many investment banks had limited capital to offset declines in their holdings of RMBSs, or to uphold their role in credit default insurance agreements. In December 2008, the former U.S. Federal Reserve chairman, Alan Greenspan, called for banks to maintain a 14 % capital ratio, rather than the historical 8-10 %. Interestingly, major U.S. banks had capital ratios of around 12 % in December 2008 after the initial round of bailout funding. As from January 2008, in many countries, the minimum capital ratio has been regulated via the Basel II Capital Accord. Some analysts claimed that the prescripts of that capital accord exacerbated the problems experienced during the SMC. The level of capital held by banks also affects profitability. For instance, in situations where banks hold large amounts of capital, the ROE will be lowered. Furthermore, when banks are confident that losses are decreasing, they hold less capital, resulting in a higher ROE.
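The capital ratios quoted above are capital divided by risk-weighted assets, and the trade-off with ROE mentioned in the last paragraph is mechanical. The sketch below illustrates it with invented balance-sheet figures.

```python
# Capital ratio vs. return on equity (ROE): a hypothetical illustration.
risk_weighted_assets = 1_000.0   # invented figure
net_income = 12.0                # invented figure, held fixed across scenarios

for capital in (80.0, 120.0, 140.0):            # hypothetical capital levels
    capital_ratio = capital / risk_weighted_assets
    roe = net_income / capital
    print(f"capital ratio = {capital_ratio:.1%}, ROE = {roe:.1%}")
# Raising the capital ratio from 8% toward 14% mechanically lowers ROE.
```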
5.16. Profitability as in (2.38) and (2.39) from Subsection 2.6.
Profitability as in (2.38) and (2.39) from Subsection 2.6. has a part to play in explaining some of the aspects of the SMC. For the fourth quarter of 2007 it was reported that profits at the 8 533 U.S. banks insured by the FDIC fell from $ 35.2 billion to $ 646 million (a decline of roughly 98 %) year-on-year. This was largely due to escalating RML losses and provisions for RML losses (see, for instance, [21]). The aforementioned decline in profits contributed to the worst bank and thrift quarterly performance since 1990. In 2007, these banks earned approximately $ 100 billion, which represented a decline of 31 % from the record profit of $ 145 billion in 2006. Furthermore, profits decreased from $ 35.6 billion to $ 19.3 billion during the first quarter of 2008 versus the previous year, a decline of 46 % (see [16] and [17] for more detail). The average ROA in the first quarter of 2008 was 0.59 %, falling from 1.20 % in the first quarter of 2007. The ROA in the first quarter of 2008 is the second lowest since the fourth quarter of 1991. The downward trend in profitability was relatively broad, with more than half of all insured institutions (50.4 %) reporting year-on-year declines in quarterly earnings. However, the brunt of the earnings decline was
borne by large institutions. Almost two out of every three institutions with more than $ 10 billion in assets (62.4 %) reported lower nett income in the first quarter of 2008, and four large institutions accounted for more than half of the $ 16.3 billion decline in industry nett income. The advantage of computing nett profit per RML subportfolio is that OR can determine its most profitable lending functions, arrange them in descending order of profitability and make loanable funds available to those RML departments that make the largest profits. This must be borne in mind for the SMC, where central banks around the world provided funds to affiliated banks in order to encourage lending to worthy MRs and to restore faith in the commercial paper markets. In this regard, the difference in performance between RML departments should also be used to select future MRs. Since RML departments constituted by worthy MRs are likely to generate profit, these departments may be funded more than the others. It must be remembered that ORs strive to extend credit in all subportfolios despite the fact that some generate more profit than others. Some of the issues that we highlight in this subsection are corroborated by numerical examples in Section 4.
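The year-on-year comparisons above are straightforward percentage declines; the sketch below recomputes them from the dollar figures quoted in this subsection.

```python
# Recomputing the year-on-year earnings declines quoted in this subsection.
def pct_decline(before: float, after: float) -> float:
    return (before - after) / before * 100.0

# Figures (in $ billions) quoted above.
print(f"Q4 2007 vs Q4 2006: {pct_decline(35.2, 0.646):.0f}% decline")   # ~98%
print(f"FY 2007 vs FY 2006: {pct_decline(145.0, 100.0):.0f}% decline")  # ~31%
print(f"Q1 2008 vs Q1 2007: {pct_decline(35.6, 19.3):.0f}% decline")    # ~46%
```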
6. Conclusions and Future Directions
The period 2007-2009 has been significant in the history of banking. Innovative financial practices that have earned banks a reputation for erudite sophistication have now come back to haunt them. In 2007, when the SMC was sparked by defaulting subprime MRs in the U.S., it would have been hard to predict that it would lead to the virtual nationalization of some of the world's largest banks. The remaking of the model for international banking will set new standards and procedures. In particular, there is strong evidence to support the introduction of a revised form of the global banking risk rule book, Basel II. In this chapter, we have found an expression for the optimal RMBS price, $p^{i*}$, which is bounded away from the pricing error term, $\chi^i$, by a constant factor that depends on the discounting exponent, $\delta^{Pi}$, and the coefficient of the second term, $\alpha^{Pi}$, of the generator, $G^p$. The securitization model takes interacting RML subportfolios into account. In addition, we also incorporated the SMC, which enabled us to see how some of the financial variables in our work may impact the credit crisis. Currently, banks are kept occupied by consumer and corporate defaults. An unresolved issue relates to what the banking industry will look like in the future. In the main, two factors, viz., regulation and competition, are likely to determine the answer to this question. Firstly, there will be a regulatory backlash, just like the Sarbanes-Oxley Act, which followed the U.S. corporate accounting scandals in 2000-2001 and changed that industry. Despite the existence of a safety net, banks will be forced to worry more about credit risk. Governments will have to figure out ways to get banks to realize that imprudent behavior will lead to them losing part of their equity capital and to the dismissal of managers. Although it is not clear how this can be achieved in practice, one way of accomplishing this is to make banks smaller, so each is less systemically dangerous. Others will argue that big banks are inherently more stable. So far, the direction has been towards larger institutions, as banks have been forced to merge with each other and with other financial institutions. Much debate will rage over the reintroduction of something like the Glass-Steagall Act
in the U.S., abandoned in 1999, which prevented banks from straddling too many industries or jurisdictions. That would also deal with the problem of banks confusing taking deposits with creating securities as a source of funding for their loans. The desire for simplicity will force regulators to want banks to do one or the other. There will also be new rules on how banks and other financial institutions measure their risk. A casualty will be the international ratings agencies, whose opinions on riskiness underlie much of the way Basel II works. The SMC has shown that rating agencies' opinions are questionable. The mind-boggling array of risk distribution instruments, from securitization through to credit default swaps, will be narrowed to fewer, better-regulated ones. New methods will have to be devised in order to measure the risk of those instruments. In this dispensation, trading in risky instruments for their own accounts or lending to hedge funds by banks will not be allowed. The second major factor in play will be competition between banks. Just how will banks that have survived as independent institutions compete with those that have not? For instance, how will Barclays in the U.K., which has so far survived without a government-imposed bailout, compete with the Royal Bank of Scotland (RBS), now 58 % government-owned? In this regard, the academic literature argues that government-owned financial institutions are not very efficient at achieving the main positive impact banks are meant to have on their economies, viz., optimizing the distribution of capital. This fact is understandable. Politicians will be most concerned that banks do not lose taxpayers' money, while being unconcerned about innovation and new products. Risk aversion will probably allow Barclays, JP Morgan, HSBC, Deutsche Bank and other survivors to grab market share aggressively. In this situation, governments are in the unenviable position of being both players and referees in the banking market, making them bad shareholders. The best that we can hope for is that governments manage these two factors (regulation and competition) optimally while extricating themselves from an ownership role in the industry. In doing that, they can change its structure, setting new rules for the types of institutions and products that can compete, which will lead to a duller, less vulnerable, international financial industry. Another future study involves finding a stochastic dynamic model that incorporates the state variable, $\Pi_t^i$, the control variables, $\pi_t^{Bi}$, $\pi_t^{Si}$ and $\varsigma_t^i$, and the input variables $r^{Ci}$, $r^{Bi}$, $r^{Si}$, $r^\Lambda$, $p_t$, $\rho_t^i$, $\Lambda_t^i$, $c_t^{Ki}$, $\phi_t^{i\to j}$ and $\varphi_t^{i\to j}$. In this process, we can compare the control variables $\pi_t^{Bi}$, $\pi_t^{Si}$ and $\varsigma_t^i$ with the standard variables $\pi_0^{Bi}$, $\pi_0^{Si}$ and $\varphi_0^{i\to j}$, mentioned in Subsections 2.1.1. and 2.6.1., respectively. In this regard, we may introduce weights, $\theta_{ij}$, that measure the impact of changes in the state variable, $\Pi_t^i$, and the control variables mentioned earlier. The weight parameters would be obtained after research and negotiations with interested parties such as financial institution managers, creditors, international banking authorities, regulators, etc. Furthermore, we may wish to incorporate RML loss provisioning in OR's model, since some of the literature (see [17]) shows that making RML loss provisions affects OR's earnings performance. In this regard, we will strive to measure the RML loss provisions that ORs must hold against expected RML losses while still producing optimal profits.
In future research, we can diverge slightly from our current problem by assuming that ORs earn real interest rates on RML subportfolios. This will enable us to study the connection between inflation and bank profitability, since the real interest rate is defined as the nominal interest rate minus the expected rate of inflation. In addition, this will reveal what nett interest rates ORs must charge on RML subportfolios in order to obtain an optimal profit despite changes in inflation. Moreover, we
may also like to study Problem 3.1 with a zero error term, i.e., $\chi^i = 0$. This will enable us to analyze the effect of various component costs in the securitization process. Finally, the model presented in (2.39) lends itself to the formulation of a profit maximization problem (compare the contrasting discussions in [22] and [29]) that may involve optimal choices of the depository value, investment in risky assets and RML loss provisions (see, for instance, [21]). In this regard, a realistic goal would be to maximize the expected utility of the discounted depository value during a fixed time interval, $[t, T]$, and of the final profit at time $T$. In addition, we could place some restricting conditions on the optimization problem mentioned earlier by introducing constraints arising from cash flow, RML demand, financing and the balance sheet. Another factor that could influence the profit optimization procedure is banking regulation and supervision via the Basel II Capital Accord (see, for instance, [5] and [6]). The aforementioned issues provide ample opportunities for future research.
7. Appendices
In this section, we present the background results needed to prove Lemma 2.4 and to solve our main problem (Theorem 3.3 in Subsection 3.2.). In particular, the proof of Theorem 3.3 makes use of Theorem 2.2 and Proposition 2.3 of Chapter 2 in [32], which are presented as Theorem 7.3 and Proposition 7.4 below, respectively.
7.1. Appendix A1: Itô's Formula for Jump Diffusions
In this section, we state without proof the one-dimensional Itô formula for jump processes, which is relevant to our study.

Lemma 7.1. (One-Dimensional Itô Formula for Lévy Processes): Suppose $X_t \in \mathbb{R}$ is an Itô-Lévy process of the form

$$dX_t = \alpha(t, \omega)\,dt + \beta(t, \omega)\,dZ_t + \int_{\mathbb{R}} \Gamma(t, y, \omega)\,\bar{M}(dt, dy), \qquad (7.18)$$

where

$$\bar{M}(dt, dy) = \begin{cases} M(dt, dy) - \nu(dy)\,dt, & \text{if } |y| < R;\\ M(dt, dy), & \text{if } |y| \geq R, \end{cases} \qquad (7.19)$$

for some $R \in [0, \infty]$. Let $h \in C^2(\mathbb{R}^2)$ and define $H_t = h(t, X_t)$. Then $H_t$ is again an Itô-Lévy process and
$$dH_t = \frac{\partial h}{\partial t}(t, X_t)\,dt + \frac{\partial h}{\partial x}(t, X_t)\left[\alpha(t, \omega)\,dt + \beta(t, \omega)\,dZ_t\right] + \frac{1}{2}\beta^2(t, \omega)\frac{\partial^2 h}{\partial x^2}(t, X_t)\,dt$$
$$\qquad + \int_{|y| < R}\left[h(t, X_{t^-} + \Gamma(t, y)) - h(t, X_{t^-}) - \Gamma(t, y)\frac{\partial h}{\partial x}(t, X_{t^-})\right]\nu(dy)\,dt$$
$$\qquad + \int_{\mathbb{R}}\left[h(t, X_{t^-} + \Gamma(t, y)) - h(t, X_{t^-})\right]\bar{M}(dt, dy).$$

Let $\tau_G = \inf\left\{t > 0 : X_t \notin G\right\}$ be the bankruptcy time and let $\mathcal{T}$ denote the set of all stopping times $\tau \leq \tau_G$. Let $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}$ be continuous functions satisfying the conditions

$$\mathbb{E}^x\left[\int_0^{\tau_G} f^-(X_t)\,dt\right] < \infty, \quad \text{for all } x \in \mathbb{R}^n,$$
and the family $\left\{g^-(X_\tau)\cdot\mathbb{I}_{\{\tau < \infty\}};\ \tau \in \mathcal{T}\right\}$ is uniformly integrable. Suppose that for all $x \in G$ there exists a neighbourhood $N_x$ of $x$ such that
$$\tau_{N_x} := \inf\left\{t > 0 : X_t \notin N_x\right\} < \infty \quad \text{a.s.}$$

Then $L \subset \left\{x \in G : V(x) > g(x)\right\} = H$. Hence it is never optimal to stop while $X_t \in L$.
7.5. Appendix B: Definitions
In this section, we provide definitions of some of the key concepts discussed in the chapter. The discount rate is the rate at which the U.S. Federal Reserve lends to banks. The federal funds rate is the interest rate banks charge each other for loans. The London Interbank Offered Rate (LIBOR) is a daily reference rate based on the interest rates at which banks borrow unsecured funds from banks in the London wholesale money market (or interbank market). Mortgage loan value may be characterized in several different ways. The face or nominal or par value of a mortgage loan is the stated fixed value of such a loan as given on the agreement. By contrast, the market value of a loan is its value in the credit market, which fluctuates. Outstanding value refers to the outstanding payments on the loan. The current selling price or current worth of a loan is called its present value. The nett present value (NPV) is the difference between the present value of cash inflows and the present value of cash outflows. NPV is used in capital budgeting to analyze the profitability of extending a loan. Fair value is a method of determining what a troubled loan would be worth (its present value) if its present owner sold it in the current market. Fair value assumes a reasonable marketing period, a willing buyer and a willing seller. It assumes that the current selling price (its present value) would rise or fall in relation to the asset's future earnings potential.
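As a small illustration of the NPV definition above, the following sketch discounts a hypothetical loan's cash flows; the discount rate and cash-flow values are invented for the example.

```python
# Nett present value (NPV) of a hypothetical loan: PV of inflows minus outflows.
def npv(rate: float, cash_flows: list[float]) -> float:
    """cash_flows[0] is the time-0 flow (negative for money lent out)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical: lend 100 now, receive 40 at the end of each of 3 years.
flows = [-100.0, 40.0, 40.0, 40.0]
print(f"NPV at 8% = {npv(0.08, flows):.2f}")  # positive => profitable to extend
```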
An adjustable-rate mortgage (ARM) is a mortgage loan whose interest rate is adjustable throughout its term. On the other hand, a fixed-rate mortgage (FRM) is a loan whose interest rate is fixed for the duration of its term. A deadweight loss, which may also be referred to as excess burden or allocative inefficiency, is the cost to OR created by an inefficiency in the economy. Causes of deadweight loss can include taxes or subsidies. The deadweight cost is dependent on the elasticity of supply and demand for a loan. Cost of loans is the interest cost that a bank must pay for the use of funds to extend a loan. The delinquency rate includes loans that are at least one payment past due but does not include loans somewhere in the process of foreclosure. Foreclosure is the legal proceeding in which a mortgagee, or other loanholder[3], usually OR, obtains a court ordered termination of MR's equitable right of redemption. Usually OR obtains a security interest from MR, who pledges collateral to secure the RML. If MR defaults and OR tries to repossess the property, courts of equity can grant the owner the right of redemption if MR repays the debt. When this equitable right exists, OR cannot be sure that it can successfully repossess the property; thus OR seeks to foreclose the equitable right of redemption. Other loanholders can and do use foreclosure, such as for overdue taxes, unpaid contractors' bills or overdue HOA dues or assessments. The foreclosure process as applied to RMLs is a bank or other secured creditor selling or repossessing a parcel of real property (immovable property) after the owner has failed to comply with an agreement between OR and MR called a "mortgage" or "deed of trust". Commonly, the violation of the mortgage is a default in payment of a promissory note, secured by a lien on the property. When the process is complete, OR can sell the property and keep the proceeds to pay off its mortgage and any legal costs, and it is typically said that "OR has foreclosed its RML or lien." If the promissory note was made with a recourse clause, then if the sale does not bring enough to pay the existing balance of principal and fees, the mortgagee can file a claim for a deficiency judgment. Subprime residential mortgage lending is the practice of extending RMLs to MRs who do not qualify for market interest rates owing to various risk factors, such as income level, size of the down payment made, credit history and employment status. The term subprime describes a RML that in some respects may be inferior to a prime RML. MRs may find subprime RMLs worse because of the high interest rates or high fees that ORs charge. ORs also may charge larger penalties for late payments or prepayments. A subprime RML is worse from OR's perspective because it is considered riskier than a prime RML - there may be a higher probability of default - so ORs require those higher rates and fees to compensate for the extra risk, compared to prime RMLs. These RMLs can also be worse for all role players in the economy if this risk does materialize. In general, a RML is subprime if 1. it is made to a MR with a poor credit history (for instance, with a FICO score below 620);
[3] In law, a lien is a form of security interest granted over an item of property to secure the payment of a debt or performance of some other obligation. The owner of the property, who grants the lien, is referred to as the loanor and the person who has the benefit of the lien is referred to as the loanee.
2. it is issued by an OR who specializes in high-cost RMLs; 3. it became part of a so-called reference subprime RML portfolio, to be traded on a secondary market; or 4. it is made to a MR with prime credit characteristics (e.g., a high FICO score) but is a subprime-only contract type, such as a 2/28 hybrid, a product not generally available in the prime RML market. (A 2/28 hybrid mortgage carries a fixed rate for the first two years; after that, the rate resets to an index rate [usually six-month LIBOR] plus a margin.) Credit crunch is a term used to describe a sudden reduction in the general availability of loans (or credit) or a sudden increase in the cost of obtaining loans from banks (usually via raising interest rates). Securitization is a structured finance process which involves pooling and repackaging cash-flow producing financial assets into securities that are then sold to investors. In other words, securitization is a structured finance process in which assets, receivables or financial instruments are acquired, classified into pools, and offered for sale to third-party investors. The term "securitization" is derived from the fact that the financial instruments used to obtain funds from investors are securities. Credit enhancement is the amount of loss on underlying reference RML portfolios (collateral) that can be absorbed before the tranche absorbs any loss. Equity is a term used to describe investment in the bank. Two types of equity are described below: 1. Common equity is a form of corporation equity ownership represented in the securities. It is a stock whose dividends are based on market fluctuations. It is risky in comparison to preferred shares and some other investment options, in that, in the event of bankruptcy, common stock investors receive their funds after preferred stockholders, bondholders, creditors, etc. On the other hand, common shares on average perform better than preferred shares or bonds over time. 2. Preferred equity, also called preference equity, is typically a higher ranking stock than voting shares, and its terms are negotiated between the bank and the regulator. The leverage of a bank refers to its debt-to-capital reserve ratio. A bank is highly leveraged if this ratio is high.
References [1] Albertazzi U, Gambacorta L. Bank profitability and taxation. Banca d’Italia, Economic Research Department; Friday, 20 January 2006. [2] Altug S, Labadie P. Dynamic Choice and Asset Markets. San Diego CA: Academic Press. 1994.
[3] Asset Securitization Comptroller's Handbook. U.S. Comptroller of the Currency Administrator of National Banks. Available: http://www.dallasfed.org/news/ca/2005/05wallstreet_assets.pdf [November 1997]. [4] Aversa J. Rebate checks in the mail by spring. The Huffington Post, Arianna Huffington. Available: http://www.huffingtonpost.com/2008/02/13/rebate-checks-in-the-mail_n_86525.html [Wednesday, 13 February 2008]. [5] Basel Committee on Banking Supervision. The new Basel Capital Accord. Bank for International Settlements 2001. Available: http://www.bis.org/publ/bcbsca.htm. [6] Basel Committee on Banking Supervision. International convergence of capital measurement and capital standards: A revised framework. Bank for International Settlements June 2006. Available: http://www.bis.org/publ/bcbs107.pdf. [7] Bill Moyers Journal. PBS. Episode 06292007. Available: http://www.pbs.org/moyers/journal/06292007/transcript5.html [Friday, 29 June 2007]. [8] Chami R, Cosimano TF. Monetary policy with a touch of Basel. International Monetary Fund 2001; Working Paper WP/01/151. [9] Coyne TJ. Commercial bank profitability by function. Financial Management 1973; 2:64-73. [10] Cuoco D, Liu H. An analysis of VaR-based capital requirements. Journal of Financial Intermediation 2006; 15:362-394. [11] Demirgüç-Kunt A, Detragiache E. Monitoring banking sector fragility: A multivariate logit approach. IMF Working Paper 1999; No. 106. [12] Demyanyk Y, Van Hemert O. Understanding the subprime mortgage crisis. Available at SSRN: http://ssrn.com/abstract=1020396 [Tuesday, 19 August 2008]. [13] Diamond DW. Banks and liquidity creation: A simple exposition of the Diamond-Dybvig model. Economic Quarterly 2007; 93:189-200. [14] Diamond DW, Dybvig PH. Bank runs, deposit insurance and liquidity. Journal of Political Economy 1983; 91:401-419. [15] Diamond DW, Rajan RG. A theory of bank capital. Journal of Finance 2000; 55:2431-2465. [16] FDIC Quarterly Banking Profile (Pre-Adjustment). Fourth Quarter 2007; 29(1). Available: http://www2.fdic.gov/qbp/qbpSelect.asp?menuItem=QBP. [17] FDIC Quarterly Banking Profile. First Quarter 2008; 29(2). Available: http://www2.fdic.gov/qbp/qbpSelect.asp?menuItem=QBP.
[18] Fouche CH, Mukuddem-Petersen J, Petersen MA, Senosi MC. Bank valuation and its connections with the subprime mortgage crisis and Basel II Capital Accord. Discrete Dynamics in Nature and Society 2008; 2008:50 pp., doi:10.1155/2008/740845. [19] Gersbach H. The optimal capital structure of an economy. Centre for Economic Policy Research (CEPR) Discussion Paper No. 4016. August 2003. [20] Gheno A. Corporate valuations and the Merton model. Applied Financial Economics Letters 2007; 3:47-50. [21] Gideon F, Mukuddem-Petersen J, Mulaudzi MP, Petersen MA. Optimal provisioning for bank loan losses in a robust control framework. Optimal Control Applications and Methods 2009; 30(1):27-52. [22] Granero LM, Reboredo JC. Competition, risk taking and governance structures in retail banking. Applied Financial Economics Letters 2005; 1:37-40. [23] Halkos GM, Georgiou MN. Bank sales, spread and profitability: An empirical analysis. Applied Financial Economics Letters 2005; 1:293-296. [24] Hand JRM, Lev B. Intangible Assets: Values, Measures and Risks. Oxford University Press 2003. [25] Lahart J. Egg cracks differ in housing, finance shells. Wall Street Journal. Available: http://online.wsj.com/article/SB119845906460548071.html [Monday, 24 December 2007]. [26] Matthews S, Lanman S. Bernanke urges "hunkering" banks to raise capital. Available: http://www.bloomberg.com/apps/news [Thursday, 15 May 2008]. [27] Senosi MC, Petersen MA, Mukuddem-Petersen J, Mulaudzi MP, Schoeman IM, De Waal B. Comparing originators holding securitized and unsecuritized subprime mortgage loans with regard to profit and valuation. Chapter 17, Handbook of Optimization Theory: Decision Analysis and Application. Series: Mathematics Research Developments. Editors: Juan Varela and Sergio Acuña. Nova Science Publishers, New York (Invited Chapter in Book), ISBN: 978-1-60876-500-3, 2010. [28] Mukuddem-Petersen J, Petersen MA. Stochastic behavior of risk-weighted bank assets under the Basel II Capital Accord. Applied Financial Economics Letters 2005; 1:133-138. [29] Mukuddem-Petersen J, Petersen MA, Schoeman IM, Tau BA. Maximizing banking profit on a random time interval. Journal of Applied Mathematics 2007; Volume 2007, Issue 1, July 2007, pp. 62-86, doi:10.1155/2007/29343. [30] Mukuddem-Petersen J, Petersen MA, Schoeman IM, Tau BA. Dynamic modeling of bank profits. Applied Financial Economics Letters 2008; 4:151-157. [31] Mulaudzi MP, Petersen MA, Schoeman IM. Optimal allocation between bank loans and Treasuries with regret. Optimization Letters 2008; 2:555-566.
[32] Øksendal B, Sulem A. Applied Stochastic Control of Jump Diffusions. Springer: Berlin, 2005. [33] Pantelous AA. Dynamic risk management of the lending rate policy of an interacted portfolio of loans via an investment strategy into a discrete stochastic framework. Economic Modelling 2008; 25(4):658-675. [34] Petersen MA, Senosi MC, Mukuddem-Petersen J. Subprime Banking Models. New York: Nova, ISBN: 978-1-61728-694-0, 2010. [35] Purnanandam AK. Originate-to-distribute model and the subprime mortgage crisis. Available at SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1167786 [Saturday, 30 August 2008]. [36] Pindyck RS, Rubinfeld DL. Econometric Models and Economic Forecasts, Third Edition. Economics Series, 1991. [37] Protter P. Stochastic Integration and Differential Equations, Second Edition. Springer, Berlin, 2004. [38] Repullo R. Capital requirements, market power and risk-taking in banking. Journal of Financial Intermediation 2004; 13:156-182. [39] Ross SA, Westerfield RW, Jordan BD. Essentials of Corporate Finance, Fourth Edition. McGraw-Hill/Irwin. 158-186. ISBN 0-07-251076-5. [40] Waggoner J, Krantz M. Pricing mortgage-backed securities will be tough. Available: http://www.usatoday.com/money/industries/banking/2008-09-23-toxic-paper-bailout_N.htm [Tuesday, 7 October 2008]. [41] Whitwell GJ, Lukas BA, Hill P. Stock analysts' assessments of the shareholder value of intangible assets. Journal of Business Research 2007; 60:84-90. [42] Wikipedia: The Free Encyclopedia. Subprime mortgage crisis. Available: http://en.wikipedia.org/wiki/Subprime_mortgage_crisis [Monday, 14 June 2010].
In: Handbook of Optimization Theory, Editors: J. Varela and S. Acuña, pp. 565-576
ISBN 978-1-60876-500-3 © 2011 Nova Science Publishers, Inc.
Chapter 19

QUEUEING NETWORKS WITH RETRIALS DUE TO UNSATISFACTORY SERVICE

Jesus R. Artalejo∗
Faculty of Mathematics, Complutense University of Madrid, Madrid 28040, Spain

∗E-mail address: jesus [email protected]
Abstract

The slow progress in the analytic investigation of queueing networks with retrials is explained in terms of the impossibility of having product-form limiting distributions in those networks in which the retrials are due to blocking. In this chapter, we introduce a new class of queueing networks in which customers who are not satisfied after receiving a primary service have a chance to perform new attempts later on. In this context, we prove the existence of product-form solutions. We deal both with open and closed networks with retrials due to unsatisfactory service. A number of illustrative motivating examples are considered.
1. Introduction
Many queueing situations have the feature that customers who cannot receive service leave the service area and repeat their request after a random amount of time. Most retrial queues arise from computer and communication applications in which repeated attempts appear due to blocking in a system with limited service capacity. This situation is similar to the experience of a subscriber finding the line engaged in a mobile telephone network. A retrial queue can be regarded as a special type of queueing network with two nodes: the service facility and the retrial group. There exists a vast amount of literature that includes a variety of analytical results for the basic M/M/c and M/G/1 retrial queues and their extensions [6], as well as a number of approximate techniques and algorithmic methods for solving the analytically intractable retrial queues [2]. Conversely, the available results for multiple node queueing networks with retrials are very limited and concern particularly
simple network topologies such as two tandem queues. In some papers, approximate and heuristic methods are proposed. Queueing networks having product-form distributions have been extensively studied in the literature. After the seminal work by Jackson, who proved that Markovian networks with state independent Markovian routing possess product-form limiting distributions, many authors have extended the product-form methodology to investigate important classes of queueing networks, including general service times, specific queueing disciplines, batch movements, negative arrivals, etc. For a comprehensive overview of classical and advanced perspectives in this matter, we refer to the books [7, 9, 14, 16]. In contrast, there are very few papers dealing with queueing networks with retrials. Moutzoukis and Langaris [12] obtained some exact performance measures for a network consisting of two single server nodes in tandem. Since there are no waiting positions between the two nodes, the blocking phenomenon is observed, which motivates the consideration of a retrial node. Gómez-Corral [8] extended the study to a tandem network with blocking and linear retrial policy. The matrix-analytic formalism is helpful in dealing with arrivals occurring according to a Markovian arrival process and service times of phase type. Klimenok et al. [10] studied another variant of a tandem network with retrials motivated by a computer maintenance application. The so-called direct truncated method and the generalized truncated methods [2] play a fundamental role in simplifying approximations in both papers [8, 10]. In a recent paper [3], another simple tandem network with constant retrial rate is considered. Other authors [13, 15] have investigated networks with more general topologies but always use approximations and heuristic methods. Unfortunately, it does not seem possible to apply the product-form methodology to queueing networks with retrials caused by blocking. Van Dijk [16] pointed out this fact, and Artalejo and Economou [1] used a general characterization of Chao et al. [4] to prove the non-existence of product-form solutions for networks of single server nodes with linear retrial policy and Markovian routing. Our main objective in this chapter is to introduce a new class of queueing networks in which, after receiving a first fundamental service, customers become either satisfied or unsatisfied. In the former case, the customer is routed to another node or leaves the system. The distinguishing conceptual feature is that unsatisfied customers have a chance to receive a secondary service and reattempt fundamental service after a random amount of time. As a related work, we mention the call-routing system with callbacks presented in [5], but in that paper the emphasis is put on the investigation of an optimal routing policy rather than on presenting the model as a queueing network. The following motivating situations provide examples in which retrials due to unsatisfactory service arise naturally. 1. Hospitals and health systems. Patients are subject to a certain test (e.g., glucose analysis) in a medical unit. In the case of a positive result, they are diagnosed and routed to the next unit (e.g., clinical surgery). In contrast, those patients who do not get a successful result must complete a treatment (e.g., drugs or aids), but they receive a new appointment to check whether or not they are then ready for the necessary intervention. 2. Computer repair system. In [10], a real-life computer repair experience is described.
The primary service consists of the technical advisement of an expert who has
recommended a scanning test (a secondary service provided on a self-service basis or by an external supplier) before coming back to the manufacturer's technical service for a proper repair of the equipment (a second round of fundamental service). 3. Call centers. Several modes of returning to a call center can be distinguished. Redials after finding all agents busy and redials due to balking and/or impatience are widely considered in the existing literature. In contrast, revisits after completing service indicate the grade of satisfaction with the received service. The organization of the rest of the chapter is as follows. Sections 2 and 3 deal, respectively, with the study of open and closed networks with retrials due to unsatisfactory service. We first state general results establishing the existence of product-form solutions. Then, a number of particularizations of practical interest are presented. Finally, concluding remarks are given in Section 4.
2. Open Queueing Networks
This section deals with open queueing networks with retrials due to unsatisfactory service. After receiving a satisfactory fundamental service at any service node, a customer may join another service node or may leave the network. On the other hand, a customer receiving an unsatisfactory service must join a satellite node, called the orbit of the preceding node, to receive a secondary service, but after some time the unsatisfied customer comes back to the preceding node in order to again request a fundamental service. A schematic diagram describing the transitions in a network with M = 3 nodes is shown in Figure 2.1. The arrows interconnecting the nodes are associated with external arrivals (a), departures outside the network (d), circulating customers among the nodes (c), unsatisfied customers (u) and retrials (r).
[Figure 2.1 depicts a network with fundamental nodes 1, 3 and 5 and their associated orbit nodes 2, 4 and 6; the arrows are labelled a = arrival, c = circulation, d = departure, r = retrial and u = unsatisfaction.]

Figure 2.1. General network topology.
We start by introducing the class of open Jackson networks with state dependent service, characterized by the following assumptions:

i) The network has M service nodes.

ii) The service rate at node i, when there are x_i customers at node i, is δ_i(x_i) > 0, for 1 ≤ i ≤ M and x_i ≥ 1. Moreover, δ_i(0) = 0, for 1 ≤ i ≤ M.

iii) The capacity of the service facility is infinite at each node.

iv) The external arrivals at node i follow a Poisson process with rate λ_i, for 1 ≤ i ≤ M.

v) All the external arrival processes and the service times are mutually independent.

vi) When a customer completes service at node i, he joins node j with probability p_ij or leaves the network with probability r_i. Note that r_i + Σ_{j=1}^M p_ij = 1, for 1 ≤ i ≤ M.

vii) The routing matrix P = [p_ij] is such that I − P is invertible.
Before giving the fundamental result, we need some preliminaries. Let X_i(t) be the number of customers at the ith node at time t, for 1 ≤ i ≤ M. Thus, the vector X(t) = (X_1(t), ..., X_M(t)) denotes the state of the Jackson network at time t. We note that the process {X(t); t ≥ 0} is a multidimensional continuous-time Markov chain with state space S = Z_+^M. The elements of its infinitesimal generator Q = [q(x, x′)] are as follows:

\[
q(x, x') = \begin{cases}
\lambda_i, & \text{if } x' = x + e_i, \\
\delta_i(x_i) r_i, & \text{if } x' = x - e_i, \\
\delta_i(x_i) p_{ij}, & \text{if } x' = x - e_i + e_j, \\
-\left( \sum_{i=1}^{M} \lambda_i + \sum_{i=1}^{M} \delta_i(x_i)(1 - p_{ii}) \right), & \text{if } x' = x, \\
0, & \text{otherwise},
\end{cases}
\]

where x = (x_1, ..., x_M) and e_i is a row vector of dimension M whose entries are all 0 except for the ith one, which equals 1. The above infinitesimal rates are easily interpreted. An external arrival at node i occurs with rate λ_i and moves the system from state x to state x + e_i. A departure from node i to the outside takes place with rate δ_i(x_i)r_i, after which the system moves to state x − e_i. The rate δ_i(x_i)p_ij corresponds to the transfer of a customer from node i to node j; that is, the final state is x − e_i + e_j. Finally, q(x) = −q(x, x) gives the rate of leaving the current state x. Now consider the limiting probabilities

\[
p(x) = \lim_{t \to \infty} P\{X_1(t) = x_1, \dots, X_M(t) = x_M\}, \quad x \in S,
\]
and the balance equations

\[
a_j = \lambda_j + \sum_{i=1}^{M} a_i p_{ij}, \quad 1 \le j \le M, \tag{2.1}
\]

where a_i represents the total arrival rate (external input plus internal input), which equals the effective departure rate from node i. We note that {a_i; 1 ≤ i ≤ M} satisfy the matrix equation

\[
a = \lambda (I - P)^{-1},
\]

where a = (a_1, ..., a_M) and λ = (λ_1, ..., λ_M).
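As a quick illustration of how the traffic equations (2.1) are solved in practice, the following minimal sketch computes the total arrival rates a_i for a hypothetical three-node open network; the routing matrix P and the external rates are made-up values, not taken from the chapter.

```python
# Solve the traffic equations (2.1), i.e. a = lambda (I - P)^{-1}, for a
# hypothetical 3-node open network; P and lam below are made-up values.
import numpy as np

P = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.3],
              [0.1, 0.4, 0.0]])      # routing probabilities p_ij (rows sum to < 1)
lam = np.array([1.0, 0.5, 0.8])      # external Poisson arrival rates lambda_i

# a (I - P) = lambda is a row-vector equation, so solve (I - P)^T a^T = lambda^T
a = np.linalg.solve((np.eye(3) - P).T, lam)
print(a)                             # total arrival rate a_i at each node
```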
The next theorem shows that the open Jackson networks with state dependent service have a product-form solution. A similar result can be found in [11, Theorem 7.6] without proof.

Theorem 2.1. The process {X(t); t ≥ 0} is positive recurrent if and only if

\[
\sum_{x_i=1}^{\infty} \prod_{j=0}^{x_i-1} \frac{a_i}{\delta_i(j+1)} < \infty, \quad \text{for all } 1 \le i \le M. \tag{2.2}
\]

Then, the limiting distribution of the system state has the product form

\[
p(x) = \prod_{i=1}^{M} \phi_i(x_i), \tag{2.3}
\]

where

\[
\phi_i(x_i) = \phi_i(0) \prod_{j=0}^{x_i-1} \frac{a_i}{\delta_i(j+1)}, \quad x_i \ge 1, \ 1 \le i \le M, \tag{2.4}
\]

\[
\phi_i(0) = \left( 1 + \sum_{x_i=1}^{\infty} \prod_{j=0}^{x_i-1} \frac{a_i}{\delta_i(j+1)} \right)^{-1}, \quad 1 \le i \le M. \tag{2.5}
\]
Proof. The limiting probabilities p(x) satisfy the Kolmogorov equations

\[
q(x)p(x) = \sum_{i=1}^{M} \lambda_i p(x - e_i) + \sum_{i=1}^{M} \delta_i(x_i+1) r_i\, p(x + e_i) + \sum_{j=1}^{M} \sum_{\substack{i=1 \\ i \ne j}}^{M} \delta_i(x_i+1) p_{ij}\, p(x + e_i - e_j). \tag{2.6}
\]

From (2.4)–(2.5) we easily conclude that

\[
\frac{\phi_i(x_i-1)}{\phi_i(x_i)} = \frac{\delta_i(x_i)}{a_i}, \quad x_i \ge 1, \ 1 \le i \le M. \tag{2.7}
\]

Substituting (2.3) in equations (2.6), we find that

\[
q(x) \prod_{i=1}^{M} \phi_i(x_i) = \sum_{i=1}^{M} \lambda_i \frac{\phi_i(x_i-1)}{\phi_i(x_i)} \prod_{i=1}^{M} \phi_i(x_i) + \sum_{i=1}^{M} \delta_i(x_i+1) r_i \frac{\phi_i(x_i+1)}{\phi_i(x_i)} \prod_{i=1}^{M} \phi_i(x_i) + \sum_{j=1}^{M} \sum_{\substack{i=1 \\ i \ne j}}^{M} \delta_i(x_i+1) p_{ij} \frac{\phi_i(x_i+1)}{\phi_i(x_i)} \frac{\phi_j(x_j-1)}{\phi_j(x_j)} \prod_{i=1}^{M} \phi_i(x_i).
\]
Now removing \(\prod_{i=1}^{M} \phi_i(x_i)\) from both sides and employing the identities (2.7), we get

\[
q(x) = \sum_{i=1}^{M} \lambda_i \frac{\delta_i(x_i)}{a_i} + \sum_{i=1}^{M} a_i r_i + \sum_{j=1}^{M} \frac{\delta_j(x_j)}{a_j} \sum_{\substack{i=1 \\ i \ne j}}^{M} a_i p_{ij}. \tag{2.8}
\]
From the definition of {a_i; 1 ≤ i ≤ M} in (2.1) we observe that

\[
\sum_{i=1}^{M} \lambda_i = \sum_{i=1}^{M} a_i r_i,
\]
and, as a result, equation (2.8) yields \(q(x) = \sum_{i=1}^{M} \lambda_i + \sum_{i=1}^{M} \delta_i(x_i)(1 - p_{ii})\), which concludes the verification of formula (2.3). The positive recurrence condition (2.2) follows from the uniqueness of the limiting distribution and the fact that φ_i(0) > 0 if and only if the series in (2.2) converges.

The product-form solution says that the system state at node i, independently of the other nodes, behaves as a birth-and-death process with arrival rate a_i and state dependent rates δ_i(x_i). Once the results of Theorem 2.1 are stated, many open queueing networks with retrials due to unsatisfactory service can be modeled as particular cases. To this end, we take M = 2N and think of any odd node as a service facility providing fundamental service, whereas even nodes represent the associated orbits where the unsatisfied customers receive secondary services. The fundamental and secondary service rates are denoted by

\[
\delta_i(x_i) = \begin{cases} \nu_i(x_i), & \text{if } i = 2k-1, \ 1 \le k \le N, \\ \mu_i(x_i), & \text{if } i = 2k, \ 1 \le k \le N. \end{cases}
\]

We may deal with service facilities consisting of s_i identical servers, for 1 ≤ i ≤ 2N. This case corresponds to δ_i(x_i) = min(x_i, s_i)δ_i, where δ_i represents the service rate of each server at node i. In particular, we are interested in the case where the fundamental nodes have s_{2k−1} < ∞ servers whereas the orbit nodes operate as self-service facilities; that is, s_{2k} = ∞. An external arrival may only join a fundamental node, so λ_i = 0, for i = 2k and 1 ≤ k ≤ N. We remark the existence of some conditions affecting the routing probabilities. Since there is no communication among different orbit nodes, we assume that p_{2k,2k−1} = 1, for 1 ≤ k ≤ N, reflecting that an orbit customer must return to the preceding fundamental node. Moreover, the conditions p_{2k−1,2k′} = 0, for k ≠ k′, show that transitions from a fundamental node to orbit nodes are forbidden, except to the associated orbit labelled 2k. Under the above specifications, the positive recurrence condition reduces to ρ_{2k−1} = a_{2k−1}/(s_{2k−1}ν_{2k−1}) < 1, for 1 ≤ k ≤ N, and the limiting distribution of the network state
is given by

\[
p(x) = \prod_{k=1}^{N} \phi_{2k-1}(x_{2k-1})\, \phi_{2k}(x_{2k}),
\]

where the fundamental-node factors take the familiar M/M/s form

\[
\phi_{2k-1}(x_{2k-1}) = \left( \sum_{y=0}^{s_{2k-1}-1} \left( \frac{a_{2k-1}}{\nu_{2k-1}} \right)^{y} \frac{1}{y!} + \frac{s_{2k-1}^{s_{2k-1}}\, \rho_{2k-1}^{s_{2k-1}}}{s_{2k-1}!\,(1-\rho_{2k-1})} \right)^{-1} \left( \frac{a_{2k-1}}{\nu_{2k-1}} \right)^{x_{2k-1}} \left( \frac{I_{\{x_{2k-1} < s_{2k-1}\}}}{x_{2k-1}!} + \frac{I_{\{x_{2k-1} \ge s_{2k-1}\}}}{s_{2k-1}!\; s_{2k-1}^{\,x_{2k-1}-s_{2k-1}}} \right),
\]

and the orbit-node factors follow from Theorem 2.1 with δ_{2k}(x_{2k}) = x_{2k}µ_{2k} (self-service).

3. Closed Queueing Networks

This section deals with closed queueing networks with retrials due to unsatisfactory service. We introduce the class of closed Jackson networks with state dependent service, characterized by the following assumptions:

i) The network has M service nodes among which a fixed number K of customers circulate.

ii) The service rate at node i, when there are x_i customers at node i, is δ_i(x_i) > 0, for 1 ≤ i ≤ M and x_i ≥ 1. If x_i = 0, then δ_i(0) = 0, for 1 ≤ i ≤ M.
iii) After completing service at node i, a customer is routed to node j with probability p_ij. Of course, Σ_{j=1}^M p_ij = 1, for 1 ≤ i ≤ M. The routing matrix P = [p_ij] is assumed to be irreducible.
The network state at time t can be described by a multidimensional continuous-time Markov chain {X(t); t ≥ 0}, where X(t) = (X_1(t), ..., X_M(t)) and X_i(t) represents the number of customers at node i at time t, for 1 ≤ i ≤ M. Its state space is S = {x = (x_1, ..., x_M) : x_i ≥ 0, Σ_{i=1}^M x_i = K}. The infinitesimal generator Q = [q(x, x′)] has elements

\[
q(x, x') = \begin{cases}
\delta_i(x_i) p_{ij}, & \text{if } x' = x - e_i + e_j, \\
-\sum_{i=1}^{M} \delta_i(x_i)(1 - p_{ii}), & \text{if } x' = x, \\
0, & \text{otherwise}.
\end{cases}
\]
Since the state space is finite and the chain is irreducible, the limiting probabilities p(x) = lim_{t→∞} P{X_1(t) = x_1, ..., X_M(t) = x_M}, x ∈ S, exist and are positive. The following result states that closed Jackson queueing networks with state dependent service have product-form distributions. Since this is a known result [11, Theorem 7.8], we summarize it but omit the proof.

Theorem 3.1. The limiting distribution of the chain {X(t); t ≥ 0} is given by

\[
p(x) = G(M, K) \prod_{i=1}^{M} \phi_i(x_i),
\]

where

\[
\phi_i(x_i) = \prod_{j=1}^{x_i} \frac{\pi_i}{\delta_i(j)}, \quad 1 \le i \le M, \ x \in S, \tag{3.1}
\]

\[
\phi_i(0) = 1, \quad 1 \le i \le M,
\]

and {π_i; 1 ≤ i ≤ M} is the solution to the system of equations

\[
\pi_j = \sum_{i=1}^{M} \pi_i p_{ij}, \quad 1 \le j \le M, \tag{3.2}
\]

\[
\sum_{i=1}^{M} \pi_i = 1. \tag{3.3}
\]

The normalizing constant G(M, K) is determined by Σ_{x∈S} p(x) = 1.
We next discuss some particularizations of interest to deal with retrials due to unsatisfactory service. Following the approach developed in Section 2 for the case of open networks, we can take M = 2N and distinguish between fundamental nodes providing the essential service and orbit nodes where the unsatisfied customers receive a secondary service. The routing matrix and the service rates δ_i(x_i) can be chosen as in the case of open networks. Obviously, the departure probabilities r_i do not exist now. This formulation can be generalized by adding a node 0, which represents the outside world. In our closed network, node 0 can be used to collect those customers who are not currently receiving any type of service. Then, M = 2N + 1 and x = (x_0, x_1, ..., x_{2N}). The notation for the permanence rates at node 0 is as follows:

\[
\delta_0(x_0) = \begin{cases} \theta_0(x_0), & \text{if } x_0 > 0, \\ 0, & \text{if } x_0 = 0. \end{cases}
\]

The next example illustrates a simple application. Consider a medium-size company with K employees. At any time t, an employee can be active (i.e., rendering service) or down due to illness. Let X_0(t) be the number of active employees at time t. We subdivide the down period into two phases. In phase I the employee visits the doctor for a physical examination and tests. The diagnosis determines whether the employee is discharged (maybe subject to a minor treatment) and comes back to the active regime, or whether the employee needs to be on sick leave for some time. The former possibility occurs with probability
p ∈ (0, 1). Customers on sick leave are said to be in phase II. When a phase II period is completed, the employee visits the doctor again, who decides after a new examination either to discharge the employee (with probability p) or to extend the certificate once more to continue on sick leave (with probability q = 1 − p). The numbers of down employees of type I and type II are denoted by X_1(t) and X_2(t), respectively. Thus, we have only M = 3 nodes, but K could be moderately large.
[Figure 3.1 depicts the outside node, the single-server service facility and the orbit; after service, a customer returns to the outside with probability p or joins the orbit with probability q, and orbit customers return to the service facility with probability 1.]

Figure 3.1. Closed network with retrials.
To be more specific, we may assume that

\[
\theta_0(x_0) = x_0 \theta, \qquad \nu_1(x_1) = \begin{cases} \nu, & \text{if } x_1 > 0, \\ 0, & \text{if } x_1 = 0, \end{cases} \qquad \mu_2(x_2) = x_2 \mu.
\]
Active periods and sick leave periods concern each individual employee, so the rates θ_0(x_0) and µ_2(x_2) are proportional to the number of customers at the corresponding nodes. We assume that each visit to the doctor generates a special action (e.g., a blood test to be performed on a first-in-first-out basis by an external laboratory), while the other aspects of the visit take negligible time. Thus, we have s_1 = 1. We notice that active employees, employees in phase I and employees in phase II in the above description correspond to customers at the outside world, at the service facility node and at the orbit node, respectively; see also Figure 3.1. The routing matrix for this network is

\[
P = \begin{pmatrix} 0 & 1 & 0 \\ p & 0 & q \\ 0 & 1 & 0 \end{pmatrix}.
\]

Thus, the solution to the system (3.2)–(3.3) is

\[
\pi_0 = \frac{p}{2}, \qquad \pi_1 = \frac{1}{2}, \qquad \pi_2 = \frac{q}{2},
\]
and the quantities in (3.1) and the constant G(M, K) are given by

\[
\phi_0(x_0) = \left( \frac{p}{2\theta} \right)^{x_0} \frac{1}{x_0!}, \qquad \phi_1(x_1) = \left( \frac{1}{2\nu} \right)^{x_1}, \qquad \phi_2(x_2) = \left( \frac{q}{2\mu} \right)^{x_2} \frac{1}{x_2!},
\]

\[
G(M, K) = \left( \frac{1}{(2\nu)^K} \sum_{x_0=0}^{K} \left( \frac{p\nu}{\theta} \right)^{x_0} \frac{1}{x_0!} \sum_{x_1=0}^{K-x_0} \left( \frac{q\nu}{\mu} \right)^{x_1} \frac{1}{x_1!} \right)^{-1}.
\]
It is possible to prove that the limiting probabilities associated with the rates (θ, ν, µ) are equal to those associated with (θ/(2ν), 1/2, µ/(2ν)). This allows some simplification, yielding

\[
p(x_0, x_1, x_2) = \frac{ \left( \dfrac{p\nu}{\theta} \right)^{x_0} \dfrac{1}{x_0!} \left( \dfrac{q\nu}{\mu} \right)^{K-x_0-x_1} \dfrac{1}{(K-x_0-x_1)!} }{ \displaystyle \sum_{x_0=0}^{K} \left( \frac{p\nu}{\theta} \right)^{x_0} \frac{1}{x_0!} \sum_{x_1=0}^{K-x_0} \left( \frac{q\nu}{\mu} \right)^{x_1} \frac{1}{x_1!} },
\]
for 0 ≤ x_0 ≤ K, 0 ≤ x_1 ≤ K − x_0, x_2 = K − x_0 − x_1. Then, the expectation of the number of customers at each node can be evaluated numerically. Table 3.1 shows the influence of K on the three expected values for a network with parameters θ = 0.05, ν = 1.6, µ = 0.7 and p = 0.2.

Table 3.1. The influence of K

K        1         2         5         10        20         30
E[X_0]   0.69349   1.37090   3.26284   5.55471   6.39857    6.39999
E[X_1]   0.10835   0.23741   0.80491   2.85821   11.77325   21.77142
E[X_2]   0.19814   0.39168   0.93224   1.58706   1.82816    1.82857
Now, it is easy to see that lim_{K→∞} E[X_0] = pν/θ, lim_{K→∞} E[X_2] = qν/µ and lim_{K→∞} E[X_1] = lim_{K→∞} (K − E[X_0] − E[X_2]) = ∞. In other words, for a sufficiently large K, we have

\[
E[X_0] \approx \frac{p\nu}{\theta}, \tag{3.4}
\]

\[
E[X_1] \approx K - \frac{p\nu}{\theta} - \frac{q\nu}{\mu}, \tag{3.5}
\]

\[
E[X_2] \approx \frac{q\nu}{\mu}. \tag{3.6}
\]
In our numerical example, we have pν/θ = 6.40000 and qν/µ = 1.82857. Hence, the approximate expectations (3.4)–(3.6) are corroborated by the entries in Table 3.1.
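The entries of Table 3.1 can be reproduced directly from the closed-form distribution p(x_0, x_1, x_2) above by summing the unnormalized weights over the finite state space. The following minimal sketch does this for the parameters of the table.

```python
# Evaluate p(x0, x1, x2) above and reproduce the expectations of Table 3.1
# (theta = 0.05, nu = 1.6, mu = 0.7, p = 0.2).
from math import factorial

def expectations(K, theta=0.05, nu=1.6, mu=0.7, p=0.2):
    q = 1.0 - p
    a, b = p * nu / theta, q * nu / mu
    # unnormalized weight of the state (x0, x1) with x2 = K - x0 - x1
    w = {(x0, x1): a**x0 / factorial(x0) * b**(K - x0 - x1) / factorial(K - x0 - x1)
         for x0 in range(K + 1) for x1 in range(K - x0 + 1)}
    Z = sum(w.values())
    E0 = sum(x0 * v for (x0, _), v in w.items()) / Z
    E1 = sum(x1 * v for (_, x1), v in w.items()) / Z
    return E0, E1, K - E0 - E1      # E[X2] = K - E[X0] - E[X1]

for K in (1, 2, 5, 10, 20, 30):
    print(K, expectations(K))       # matches Table 3.1 up to rounding
```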
4. Conclusion

In this chapter, a class of queueing networks with retrials due to unsatisfactory service is investigated. We provide illustrative examples showing that this class of queues arises frequently in applications to computer networks, health systems and call centers. The starting point is the general theory for open and closed Jackson networks. Then, we adapt the general theory to the network structure under study and derive a number of particular cases of interest. We stress that the applicability of the product-form methodology is a distinguishing feature separating networks with retrials caused by an unsatisfactory previous service from classical networks in which the retrials are due to blocking.
Acknowledgements

This work was supported by MEC, grant no. MTM2005–01248.
References

[1] Artalejo, J. R.; Economou, A. On the non-existence of product-form solutions for queueing networks with retrials. Electron. Model. 27 (2005), 13–19.
[2] Artalejo, J. R.; Gómez-Corral, A. Retrial Queueing Systems: A Computational Approach. Springer, Berlin (2008).
[3] Avrachenkov, K.; Yechiali, U. Retrial networks with finite buffers and their application to Internet data traffic. Probab. Eng. Inf. Sci. 22 (2008), 519–536.
[4] Chao, X.; Miyazawa, M.; Serfozo, R. F.; Takada, H. Markov network processes with product form stationary distribution. Queueing Syst. Theory Appl. 28 (1998), 377–401.
[5] de Véricourt, F.; Zhou, Y.-P. Managing response time in a call-routing problem with service failure. Oper. Res. 53 (2005), 968–981.
[6] Falin, G. I.; Templeton, J. G. C. Retrial Queues. Chapman & Hall, London (1997).
[7] Gelenbe, E.; Pujolle, G. Introduction to Queueing Networks. J. Wiley & Sons, Chichester (1999).
[8] Gómez-Corral, A. A matrix-geometric approximation for tandem queues with blocking and repeated attempts. Oper. Res. Lett. 30 (2002), 360–374.
[9] Kelly, F. P. Reversibility and Stochastic Networks. J. Wiley & Sons, New York (1979).
[10] Klimenok, V. I.; Chakravarthy, S. R.; Dudin, A. N. Algorithmic analysis of a multiserver Markovian queue with primary and secondary services. Comput. Math. Appl. 50 (2005), 1251–1270.
[11] Kulkarni, V. G. Modeling and Analysis of Stochastic Systems. Chapman & Hall, London (1995).
[12] Moutzoukis, E.; Langaris, C. Two queues in tandem with retrial customers. Probab. Eng. Inf. Sci. 15 (2001), 311–325.
[13] Pourbabai, B. Tandem behavior of a telecommunication system with finite buffers and repeated calls. Queueing Syst. Theory Appl. 6 (1990), 89–108.
[14] Serfozo, R. F. Introduction to Stochastic Networks. Springer, New York (1999).
[15] Takahara, G. K. Fixed point approximations for retrial networks. Probab. Eng. Inf. Sci. 10 (1996), 243–259.
[16] Van Dijk, N. M. Queueing Networks and Product Forms: A Systems Approach. J. Wiley & Sons, Chichester (1993).
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 577-586
ISBN 978-1-60876-500-3
© 2011 Nova Science Publishers, Inc.
Chapter 20
SOME RESULTS ON CONDITION NUMBERS

Zhao Li¹, Seak-Weng Vong², Yi-Min Wei³,* and Xiao-Qing Jin²
¹ Institute of Mathematics, School of Mathematical Sciences, Fudan University, Shanghai, 200433, China
² Department of Mathematics, University of Macau, Macao, China
³ Shanghai Key Laboratory of Contemporary Applied Mathematics and School of Mathematical Science, Fudan University, Shanghai, 200433, China
Abstract

We give mixed and componentwise condition numbers of the orthogonal projector. Some explicit and computable expressions on the data for the Frobenius norm and spectral norm of the solution of underdetermined linear systems with full row rank are also presented.
Keywords: Moore-Penrose inverse; underdetermined system; orthogonal projector; componentwise condition number; mixed condition number; normwise condition number. AMS Subject Classifications: 15A09, 65F20.
1. Introduction

A general theory of condition numbers was first given by Rice [21] in 1966. A lot of research results have been documented on the theory of condition numbers since then. Higham [14] gave the condition numbers for the inverse of a nonsingular matrix and the condition numbers for the nonsingular linear system Ax = b. Geurts [8] and Malyshev [18] considered perturbations on A only and gave an expression for the normwise condition number for linear least squares (LS) problems min_x ||Ax − b||_2 where A has full column rank. Gratton [12] studied perturbations on both A and b and obtained an expression for the normwise condition number with respect to a "weighted" Frobenius norm of the pair (A, b). Grcar [13] gave an optimal backward error analysis for linear LS problems and used it to obtain an expression for the condition number. Arioli et al. [1] discussed the partial
* E-mail address: [email protected].
condition number for linear LS problems and gave tight bounds. Xu et al. [26] studied the condition numbers of linear LS problems with structured matrices such as Toeplitz, Hankel, and circulant matrices. In this chapter, we study mixed and componentwise condition numbers of the orthogonal projector. Some explicit and computable expressions, in terms of the data, for the Frobenius norm and spectral norm condition numbers of the solution of underdetermined linear systems with full row rank are also given. The paper is organized as follows. In Section 2, we first give the definitions of different condition numbers and then discuss mixed and componentwise condition numbers for the orthogonal projector. In Section 3, we consider the Frobenius norm and spectral norm condition numbers of an underdetermined linear system with full row rank. We conclude with some remarks in Section 4.
2. Mixed and Componentwise Condition Numbers for the Orthogonal Projector

2.1. Preliminaries
To define mixed and componentwise condition numbers, the following form of "distance" function is useful. Let a, b ∈ R^n. We denote by a/b the element in R^n whose ith component is a_i/b_i if b_i ≠ 0 and 0 otherwise. Then we define

\[
d(a, b) = \left\| \frac{a - b}{b} \right\|_\infty = \max_{\substack{i = 1, \dots, n \\ b_i \ne 0}} \frac{|a_i - b_i|}{|b_i|}.
\]

Note that

\[
d(a, b) = \min \{ \nu \ge 0 : |a_i - b_i| \le \nu |b_i|, \ i = 1, \dots, n \}.
\]

Also, if b = 0 then d(a, b) = 0. We can extend the function d to matrices in an obvious manner. We introduce a notation allowing us to do so smoothly. For a matrix A ∈ R^{m×n} we define vec(A) ∈ R^{mn} by

\[
\mathrm{vec}(A) = [a_1^T, \dots, a_n^T]^T,
\]
where A = [a_1, ..., a_n] with a_i ∈ R^m for i = 1, ..., n. Then, we define d(A, B) = d(vec(A), vec(B)). Note that vec is a homeomorphism between R^{m×n} and R^{mn}. In addition, it transforms norms in the sense that for all A = [a_ij] ∈ R^{m×n},

\[
\|\mathrm{vec}(A)\|_2 = \|A\|_F \quad \text{and} \quad \|\mathrm{vec}(A)\|_\infty = \|A\|_{\max},
\]

where ||·||_F is the Frobenius norm defined by

\[
\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2 \right)^{1/2}
\]
and ||·||_max is the max norm defined by ||A||_max = max_{i,j} |a_ij|. Let ||·||_α be a norm on R^p. We denote

\[
B_\alpha(a, \varepsilon) = \{ x \in \mathbb{R}^p : \|x - a\|_\alpha \le \varepsilon \}, \qquad B'(a, \varepsilon) = \{ x : d(x, a) \le \varepsilon \}.
\]
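For concreteness, here is a minimal NumPy sketch of the distance d and the vec operator with the conventions just introduced; the function names are ours, not from the paper.

```python
# NumPy sketch of the componentwise distance d and the column-stacking vec
# operator, following the conventions above.
import numpy as np

def d(a, b):
    """d(a, b) = max |a_i - b_i| / |b_i| over indices with b_i != 0."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ratio = np.zeros_like(a)
    mask = b != 0
    ratio[mask] = np.abs(a[mask] - b[mask]) / np.abs(b[mask])
    return ratio.max() if ratio.size else 0.0

def vec(A):
    """vec(A) = [a_1^T, ..., a_n^T]^T, i.e. the columns of A stacked."""
    return np.asarray(A).reshape(-1, order="F")

def d_mat(A, B):
    """d extended to matrices via d(A, B) = d(vec(A), vec(B))."""
    return d(vec(A), vec(B))
```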
For a partial function F : R^p → R^q we denote by Dom(F) its domain of definition.

Definition 1 ([9]). Let F : R^p → R^q be a continuous mapping defined on an open set Dom(F) ⊂ R^p such that 0 ∉ Dom(F). Let a ∈ Dom(F) be such that F(a) ≠ 0.

(i) Let ||·||_α and ||·||_β be norms on R^p and R^q, respectively. The normwise condition number of F at a (with respect to the norms ||·||_α and ||·||_β) is defined by

\[
\kappa(F, a) = \lim_{\varepsilon \to 0} \sup_{\substack{x \in B_\alpha(a, \varepsilon) \\ x \ne a}} \frac{\|F(x) - F(a)\|_\beta}{\|x - a\|_\alpha} \cdot \frac{\|a\|_\alpha}{\|F(a)\|_\beta}.
\]

(ii) The mixed condition number of F at a is defined by

\[
m(F, a) = \lim_{\varepsilon \to 0} \sup_{\substack{x \in B'(a, \varepsilon) \\ x \ne a}} \frac{\|F(x) - F(a)\|_\infty}{\|F(a)\|_\infty} \cdot \frac{1}{d(x, a)}.
\]

(iii) Suppose F(a) = [f_1(a), ..., f_q(a)] is such that f_j(a) ≠ 0 for j = 1, ..., q. Then the componentwise condition number of F at a is defined by

\[
c(F, a) = \lim_{\varepsilon \to 0} \sup_{\substack{x \in B'(a, \varepsilon) \\ x \ne a}} \frac{d(F(x), F(a))}{d(x, a)}.
\]
Explicit expressions for the mixed and componentwise condition numbers of F at a are given by the following lemma from [9].

Lemma 1. For a mapping F satisfying the conditions in Definition 1, suppose that F is Fréchet differentiable at a, and denote the corresponding Fréchet derivative by DF(a). We then have:

(i) If F(a) ≠ 0, then

\[
m(F, a) = \frac{\|DF(a)\,\mathrm{Diag}(a)\|_\infty}{\|F(a)\|_\infty} = \frac{\|\,|DF(a)|\,|a|\,\|_\infty}{\|F(a)\|_\infty},
\]

where Diag(a) = Diag(a_1, a_2, ..., a_p) denotes the diagonal matrix whose diagonal entries are given by the vector a = (a_1, a_2, ..., a_p)^T.
(ii) If F(a) = [f_1(a), f_2(a), ..., f_q(a)] with f_j(a) ≠ 0 for j = 1, 2, ..., q, then

\[
c(F, a) = \|\mathrm{Diag}(F(a))^{-1} DF(a)\,\mathrm{Diag}(a)\|_\infty = \left\| \frac{|DF(a)|\,|a|}{|F(a)|} \right\|_\infty.
\]
We recall that the Moore-Penrose inverse A† of A ∈ R^{m×n} is the unique n × m matrix satisfying the following four matrix equations [2, 24]:

\[
AA^{\dagger}A = A, \quad A^{\dagger}AA^{\dagger} = A^{\dagger}, \quad (AA^{\dagger})^T = AA^{\dagger}, \quad (A^{\dagger}A)^T = A^{\dagger}A,
\]

where M^T denotes the transpose of a real matrix M. If A = [a_ij] ∈ R^{m×n} and B ∈ R^{p×q}, then the Kronecker product A ⊗ B ∈ R^{mp×nq} is defined by

\[
A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \dots & a_{1n}B \\ a_{21}B & a_{22}B & \dots & a_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \dots & a_{mn}B \end{pmatrix}.
\]
It is proven in [11] that there exists a matrix Π ∈ R^{mn×mn}, called the vec-permutation matrix, such that for all A ∈ R^{m×n},

\[
\Pi\,\mathrm{vec}(A) = \mathrm{vec}(A^T). \tag{1}
\]
The following results can be found in [11, 16, 19, 23]. |A ⊗ B| = |A| ⊗ |B|,
vec(AXB) = (B T ⊗ A)vec(X),
(2)
where |A| = [|aij |] if A = [aij ].
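These identities are easy to check numerically; a minimal sketch with arbitrary random matrices (our own choice of sizes) follows.

```python
# Numerical check of |A (x) B| = |A| (x) |B| and vec(AXB) = (B^T (x) A) vec(X).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

vec = lambda M: M.reshape(-1, order="F")     # column-stacking vec
print(np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X)))             # True

C = rng.standard_normal((2, 3))
D = rng.standard_normal((4, 2))
print(np.allclose(np.abs(np.kron(C, D)), np.kron(np.abs(C), np.abs(D))))  # True
```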
2.2. Main Result

Consider a matrix A ∈ R^{m×n} (m ≥ n) such that rank(A) = n. The orthogonal projector onto the range of A is given by P_A = AA†. Let

\[
V = \{ g \in \mathbb{R}^{mn} : g = \mathrm{vec}(G) \text{ with } G \in \mathbb{R}^{m \times n} \text{ and } \mathrm{rank}(G) = n \}.
\]
We study mixed and componentwise condition numbers of the mapping φ : V → R^{m²} given by φ(vec(G)) = vec(P_G). Continuity of φ follows from the following lemma [22, p. 150].

Lemma 2. Let A ∈ R^{m×n} and let {A_k} be a sequence of m × n matrices satisfying lim_{k→∞} A_k = A. A necessary and sufficient condition for lim_{k→∞} P_{A_k} = P_A is that rank(A_k) = rank(A) for all sufficiently large k.
The main result in this section is given as follows.
Theorem 1. Let A ∈ R^{m×n} (m ≥ n) with rank(A) = n, and consider the mapping φ(vec(A)) = vec(P_A). Then

(i) \( m(\phi, \mathrm{vec}(A)) = \dfrac{\|\,|M(A)|\,\mathrm{vec}(|A|)\,\|_\infty}{\|\mathrm{vec}(P_A)\|_\infty}; \)

(ii) \( c(\phi, \mathrm{vec}(A)) = \left\| \dfrac{|M(A)|\,\mathrm{vec}(|A|)}{\mathrm{vec}(P_A)} \right\|_\infty \) if vec(P_A)_i ≠ 0 for all i,

where

\[
M(A) \equiv (A^{\dagger})^T \otimes (I_m - AA^{\dagger}) + \big[ (I_m - AA^{\dagger}) \otimes (A^{\dagger})^T \big] \Pi,
\]

with Π being the vec-permutation matrix given in (1).

Proof. Following the proof in [4], the main step is to derive the Fréchet derivative of φ at a = vec(A). To this end, note that

\[
P_{A+\Delta A} - P_A = (A + \Delta A)\big( (A + \Delta A)^{\dagger} - A^{\dagger} \big) + \Delta A\, A^{\dagger}.
\]

Under the assumption that (A + ΔA) ∈ V, one has (see [22])

\[
(A + \Delta A)^{\dagger} - A^{\dagger} = -A^{\dagger}(\Delta A)A^{\dagger} + (A^T A)^{-1}(\Delta A)^T (I_m - AA^{\dagger}) + O(\|\Delta A\|^2). \tag{3}
\]

Denoting δa = vec(ΔA) (note that ||ΔA||_F = ||δa||_2), we see that

\[
\begin{aligned}
\phi(a + \delta a) - \phi(a) &= \mathrm{vec}\big[ -(A + \Delta A)A^{\dagger}\Delta A A^{\dagger} + (A + \Delta A)(A^T A)^{-1}(\Delta A)^T (I_m - AA^{\dagger}) + \Delta A A^{\dagger} \big] + O(\|\delta a\|_2^2) \\
&= \mathrm{vec}\big[ -AA^{\dagger}\Delta A A^{\dagger} + A(A^T A)^{-1}(\Delta A)^T (I_m - AA^{\dagger}) + I_m \Delta A A^{\dagger} \big] + O(\|\delta a\|_2^2) \\
&= \mathrm{vec}\big[ (I_m - AA^{\dagger})\Delta A A^{\dagger} + (A^{\dagger})^T (\Delta A)^T (I_m - AA^{\dagger}) \big] + O(\|\delta a\|_2^2),
\end{aligned}
\]

where we used the fact that (A†)^T = [(A^TA)^{-1}A^T]^T = A(A^TA)^{-1}. Thus we have, by (1) and (2),

\[
\phi(a + \delta a) - \phi(a) = \Big\{ (A^{\dagger})^T \otimes (I_m - AA^{\dagger}) + \big[ (I_m - AA^{\dagger}) \otimes (A^{\dagger})^T \big] \Pi \Big\} \delta a + O(\|\delta a\|_2^2),
\]

i.e., the Fréchet derivative of φ at a is given by M(A). By Lemma 1, the remaining part of the proof follows [4].
The following corollary gives an easier way to compute upper bounds for these condition numbers.

Corollary 1. Under the conditions of Theorem 1, we have

(i) \( m(\phi, \mathrm{vec}(A)) \le \dfrac{2\,\|\,|I_m - AA^{\dagger}|\,|A|\,|A^{\dagger}|\,\|_{\max}}{\|P_A\|_{\max}}; \)

(ii) \( c(\phi, \mathrm{vec}(A)) \le 2 \left\| \dfrac{\mathrm{vec}(|I_m - AA^{\dagger}|\,|A|\,|A^{\dagger}|)}{\mathrm{vec}(P_A)} \right\|_\infty \) if vec(P_A)_i ≠ 0 for all i.

The corollary is a direct consequence of Theorem 1 and Lemma 5 in [4].
Remark. For A ∈ R^{m×n} (m ≤ n) with rank(A) = m, similar results for P_{A^T} can be obtained by considering φ at b = vec(A^T).
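The quantities in Theorem 1 and Corollary 1 are directly computable. The following sketch evaluates both for a random full column rank matrix and checks that the bound holds; the matrix sizes and data are arbitrary choices of ours.

```python
# Compute m(phi, vec(A)) from Theorem 1 and the bound of Corollary 1 (i)
# for a random full column rank A; a sketch, not optimized code.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))           # full column rank with probability 1

Ap = np.linalg.pinv(A)                    # Moore-Penrose inverse A^dagger
P_A = A @ Ap                              # orthogonal projector onto range(A)
R = np.eye(m) - P_A                       # I_m - A A^dagger

vec = lambda M: M.reshape(-1, order="F")

# vec-permutation matrix Pi of (1): Pi @ vec(X) = vec(X^T) for X in R^{m x n}
Pi = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(n):
        Pi[i * n + j, j * m + i] = 1.0

# Frechet derivative M(A) of vec(A) -> vec(P_A), as in Theorem 1
M_A = np.kron(Ap.T, R) + np.kron(R, Ap.T) @ Pi

m_exact = (np.linalg.norm(np.abs(M_A) @ np.abs(vec(A)), np.inf)
           / np.linalg.norm(vec(P_A), np.inf))
m_bound = 2 * (np.abs(R) @ np.abs(A) @ np.abs(Ap)).max() / np.abs(P_A).max()
print(m_exact, m_bound)                   # m_exact <= m_bound
```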
3. Normwise Condition Number for Underdetermined Linear Systems with a Full Row-rank Matrix
This section extends the result in [6]. Consider the underdetermined system Ax = b, where A ∈ R^{m×n} (m ≤ n) with rank(A) = m. There are infinitely many solutions to the system; among them, the unique solution that minimizes ||x||_2 is given by x = A†b [5]. Assume that the singular value decomposition of A is given by

\[
A = U \begin{pmatrix} \Sigma & 0 \end{pmatrix} V^T, \tag{4}
\]

where U = [u_1, u_2, ..., u_m] ∈ R^{m×m} and V = [v_1, v_2, ..., v_n] ∈ R^{n×n} are unitary matrices, and Σ = Diag(σ_1, σ_2, ..., σ_m) with σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_m > 0. Then the Moore-Penrose inverse A† of A is given by

\[
A^{\dagger} = V \begin{pmatrix} \Sigma^{-1} \\ 0 \end{pmatrix} U^T.
\]

To study the Frobenius normwise condition number for the least-squares problem, we define a mapping ψ : R^{m×n} × R^m → R^n by ψ(A, b) = A†b, and consider the norm on R^{m×n} × R^m given by

\[
\|(\Delta A, \Delta b)\|_{\mathrm{Fro}} = \max \left\{ \frac{\|\Delta A\|_F}{\|A\|_F},\ \frac{\|\Delta b\|_2}{\|b\|_2} \right\}.
\]

The normwise condition number is defined as

\[
\kappa_F(A, b) = \lim_{\varepsilon \to 0} \sup_{\substack{\|\Delta A\|_F \le \varepsilon \|A\|_F \\ \|\Delta b\|_2 \le \varepsilon \|b\|_2}} \frac{\|\Delta x\|_2}{\varepsilon \|x\|_2}.
\]
We have the following main result in this section.

Theorem 2. Let A ∈ R^{m×n} with m ≤ n and rank(A) = m. We have

\[
\kappa_F(A, b) = \|A\|_F \|A^{\dagger}\|_2 + \frac{\|A^{\dagger}\|_2 \|b\|_2}{\|x\|_2}.
\]

Proof. First we derive the Fréchet derivative of ψ at (A, b). For ΔA ∈ R^{m×n}, one gets (see (20.7), p. 417 in [15])

\[
D\psi(A, b) \cdot (\Delta A, \Delta b) = (I_n - A^{\dagger}A)(\Delta A)^T (A^{\dagger})^T x + A^{\dagger}(\Delta b - \Delta A x).
\]

By using A†Ax = x, one has

\[
\|D\psi(A, b) \cdot (\Delta A, \Delta b)\|_2 \le \|(I_n - A^{\dagger}A)(\Delta A)^T (A^{\dagger})^T - A^{\dagger}\Delta A A^{\dagger}A\|_2 \|x\|_2 + \|A^{\dagger}\Delta b\|_2. \tag{5}
\]
Noting that

\[
\|(I_n - A^{\dagger}A)(\Delta A)^T (A^{\dagger})^T - A^{\dagger}\Delta A A^{\dagger}A\|_2^2 \le \|(I_n - A^{\dagger}A)(\Delta A)^T (A^{\dagger})^T\|_2^2 + \|A^{\dagger}\Delta A A^{\dagger}A\|_2^2 \le \big( \|(I_n - A^{\dagger}A)(\Delta A)^T\|_2^2 + \|A^{\dagger}A(\Delta A)^T\|_2^2 \big) \|A^{\dagger}\|_2^2 \tag{6}
\]

and

\[
\|(I_n - A^{\dagger}A)(\Delta A)^T\|_2^2 + \|A^{\dagger}A(\Delta A)^T\|_2^2 \le \|(I_n - A^{\dagger}A)(\Delta A)^T\|_F^2 + \|A^{\dagger}A(\Delta A)^T\|_F^2 = \|\Delta A\|_F^2,
\]

it follows that

\[
\|D\psi(A, b) \cdot (\Delta A, \Delta b)\|_2 \le \|A^{\dagger}\|_2 \|\Delta A\|_F \|x\|_2 + \|A^{\dagger}\|_2 \|\Delta b\|_2 \le \|A^{\dagger}\|_2 \|A\|_F \|x\|_2 + \|A^{\dagger}\|_2 \|b\|_2
\]

for ||(ΔA, Δb)||_Fro = 1. Recalling that

\[
|||D\psi(A, b)||| = \max_{\|(\Delta A, \Delta b)\|_{\mathrm{Fro}} = 1} \|D\psi(A, b) \cdot (\Delta A, \Delta b)\|_2,
\]
we now prove that the above upper bound is attainable. With u_m given in (4), let

\[
\Delta A = -\|A\|_F\, u_m \frac{x^T}{\|x\|_2}, \qquad \Delta b = \|b\|_2\, u_m.
\]

It is easy to see that ||(ΔA, Δb)||_Fro = 1. Hence, we obtain

\[
\begin{aligned}
D\psi(A, b) \cdot (\Delta A, \Delta b) &= -A^{\dagger}\Delta A x + (I_n - A^{\dagger}A)(\Delta A)^T (AA^T)^{-1} A x + A^{\dagger}\Delta b \\
&= -A^{\dagger}\Delta A x - \frac{\|A\|_F}{\|x\|_2} \big[ (I_n - A^{\dagger}A) A^{\dagger} b \big] u_m^T (AA^T)^{-1} A x + A^{\dagger}\Delta b \\
&= -A^{\dagger}\Delta A x + A^{\dagger}\Delta b \\
&= \|A\|_F\, A^{\dagger} u_m \frac{x^T x}{\|x\|_2} + \|b\|_2\, A^{\dagger} u_m \\
&= \|A^{\dagger}\|_2 \|A\|_F \|x\|_2\, v_m + \|A^{\dagger}\|_2 \|b\|_2\, v_m.
\end{aligned}
\]

We thus conclude that

\[
\kappa_F(A, b) = \|A\|_F \|A^{\dagger}\|_2 + \frac{\|A^{\dagger}\|_2 \|b\|_2}{\|x\|_2}.
\]

It is easy to obtain the following corollary for nonsingular linear equations Ax = b.

Corollary 2 ([14, Theorem 4.1]). Let A ∈ R^{n×n} with rank(A) = n and b ∈ R^n. Then

\[
\kappa_F(A, b) = \|A\|_F \|A^{-1}\|_2 + \frac{\|A^{-1}\|_2 \|b\|_2}{\|x\|_2}.
\]
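The formula of Theorem 2 is easy to validate numerically: build the worst-case perturbation from the proof and compare the observed sensitivity with κ_F. The sketch below uses arbitrary random data of our own choosing.

```python
# Validate Theorem 2: compare kappa_F with the sensitivity realized by the
# worst-case perturbation constructed in the proof (random data, small eps).
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6                                  # underdetermined, m <= n
A = rng.standard_normal((m, n))              # full row rank with probability 1
b = rng.standard_normal(m)

Ap = np.linalg.pinv(A)
x = Ap @ b                                   # minimum 2-norm solution of Ax = b

nA, nAp, nb = np.linalg.norm(A, "fro"), np.linalg.norm(Ap, 2), np.linalg.norm(b)
kappa_F = nA * nAp + nAp * nb / np.linalg.norm(x)

U, s, Vt = np.linalg.svd(A)
u_m = U[:, -1]                               # left singular vector of sigma_m
eps = 1e-7
dA = -eps * nA * np.outer(u_m, x) / np.linalg.norm(x)
db = eps * nb * u_m

x_pert = np.linalg.pinv(A + dA) @ (b + db)
print(np.linalg.norm(x_pert - x) / (eps * np.linalg.norm(x)), kappa_F)
# the two printed numbers agree to several digits for small eps
```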
Using the same technique as in Theorem 2, we can prove the following perturbation result on the spectral normwise condition number. Similar results can be found in [10, 17, 25]. Although our bound is sharper only by a factor of √2, we still present it as the following theorem. We first define the spectral normwise condition number for an underdetermined system with full row rank as

\[
\kappa_2(A, b) := \lim_{\varepsilon \to 0} \sup_{\substack{\|\Delta A\|_2 \le \varepsilon \|A\|_2 \\ \|\Delta b\|_2 \le \varepsilon \|b\|_2}} \frac{\|\Delta x\|_2}{\varepsilon \|x\|_2}.
\]

Theorem 3. Under the hypotheses of Theorem 2, we have

\[
\kappa_2(A, b) = \rho \|A\|_2 \|A^{\dagger}\|_2 + \frac{\|A^{\dagger}\|_2 \|b\|_2}{\|x\|_2}, \quad \text{where } 1 \le \rho \le \sqrt{2}.
\]

In particular, when m = n,

\[
\kappa_2(A, b) = \|A\|_2 \|A^{-1}\|_2 + \frac{\|A^{-1}\|_2 \|b\|_2}{\|x\|_2}.
\]

Proof. Obviously

\[
\kappa_2(A, b) \ge \|A\|_2 \|A^{\dagger}\|_2 + \frac{\|A^{\dagger}\|_2 \|b\|_2}{\|x\|_2},
\]

so we only need to prove that

\[
\kappa_2(A, b) \le \sqrt{2}\, \|A\|_2 \|A^{\dagger}\|_2 + \frac{\|A^{\dagger}\|_2 \|b\|_2}{\|x\|_2}.
\]

Since

\[
\|(I_n - A^{\dagger}A)(\Delta A)^T\|_2^2 + \|A^{\dagger}A(\Delta A)^T\|_2^2 \le 2\|\Delta A\|_2^2,
\]

it follows from (5) and (6) that

\[
\|D\psi(A, b) \cdot (\Delta A, \Delta b)\|_2 \le \sqrt{2}\, \|A^{\dagger}\|_2 \|\Delta A\|_2 \|x\|_2 + \|A^{\dagger}\|_2 \|\Delta b\|_2,
\]

and the result follows. When n = m, one has I_n − A†A = 0 and the assertion obviously holds.
4. Concluding Remark

In this chapter, we give mixed and componentwise condition numbers of the orthogonal projector. Some explicit and computable expressions, in terms of the data, for the Frobenius norm and spectral norm condition numbers for the solution of underdetermined linear systems with full row rank are also presented. It is natural to ask about the minimum L¹-norm solution to an underdetermined linear system ([3, 7, 27, 28]),

\[
\min_{x \in \mathbb{R}^n} \{ \|x\|_1 : Ax = b \},
\]

which will be our future research topic.
Acknowledgements
Z. Li is supported by Doctoral Program of the Ministry of Education under grant 20090071110003 and KLMM0901. Y. Wei is supported by the National Natural Science Foundation of China under grant 10871051, Shanghai Science & Technology Committee under grant 09DZ2272900 and Shanghai Education Committee (Dawn Project). X. Jin is supported by the research grant RG-UL/07-08S/Y1/JXQ/FST from University of Macau.
References

[1] M. Arioli, M. Baboulin and S. Gratton, A partial condition number for linear least squares problems, SIAM J. Matrix Anal. Appl. 29 (2007), 413–433.
[2] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd Edition, Springer Verlag, New York, 2003.
[3] D. Cheung, F. Cucker and Y. Ye, Linear programming and condition numbers under the real number computational model, in Handbook of Numerical Analysis, Vol. XI, P. G. Ciarlet (Editor), Special Volume Foundations of Computational Mathematics, F. Cucker (Guest Editor), 141–207, Elsevier Science, 2003.
[4] F. Cucker, H. Diao and Y. Wei, On mixed and componentwise condition numbers for Moore-Penrose inverse and linear least squares problems, Math. Comp. 76 (2007), 947–963.
[5] J. Demmel and N. Higham, Improved error bounds for underdetermined system solvers, SIAM J. Matrix Anal. Appl. 14 (1993), 1–14.
[6] H. Diao and Y. Wei, On Frobenius normwise condition numbers for Moore-Penrose inverse and linear least-squares problems, Numer. Linear Algebra Appl. 14 (2007), 603–610.
[7] D. Donoho and J. Tanner, Sparse nonnegative solution of underdetermined linear equations by linear programming, Proc. Natl. Acad. Sci. USA 102 (2005), 9446–9451.
[8] A. Geurts, A contribution to the theory of condition, Numer. Math. 39 (1982), 85–96.
[9] I. Gohberg and I. Koltracht, Mixed, componentwise, and structured condition numbers, SIAM J. Matrix Anal. Appl. 14 (1993), 688–704.
[10] G. Golub and C. Van Loan, Matrix Computations, 3rd Edition, Johns Hopkins University Press, Baltimore, MD, 1996.
[11] A. Graham, Kronecker Products and Matrix Calculus: With Applications, Wiley, New York, 1981.
[12] S. Gratton, On the condition number of linear least squares problems in a weighted Frobenius norm, BIT 36 (1996), 523–530.
[13] J. Grcar, Optimal sensitivity analysis of linear least squares, Report LBNL-52434, Lawrence Berkeley National Laboratory, 2003.
[14] D. Higham, Condition numbers and their condition numbers, Linear Algebra Appl. 214 (1995), 193–213.
[15] N. Higham, Accuracy and Stability of Numerical Algorithms, 2nd Edition, SIAM, Philadelphia, 2002.
[16] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.
[17] C. Lawson and R. Hanson, Solving Least Squares Problems, Revised reprint of the 1974 original, Classics in Applied Mathematics 15, SIAM, Philadelphia, 1995.
[18] A. Malyshev, A unified theory of conditioning for linear least squares and Tikhonov regularization solution, SIAM J. Matrix Anal. Appl. 24 (2003), 1186–1196.
[19] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.
[20] O. Pourquier and M. Sadkane, On the normwise backward error of large underdetermined least squares problems, Int. J. Pure Appl. Math. 14 (2004), 365–376.
[21] J. Rice, A theory of condition, SIAM J. Numer. Anal. 3 (1966), 217–232.
[22] G. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, New York, 1990.
[23] C. Van Loan, The ubiquitous Kronecker product, J. Comput. Appl. Math. 123 (2000), 85–100.
[24] G. Wang, Y. Wei and S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, 2004.
[25] P. Wedin, Perturbation theory for pseudo-inverses, BIT 13 (1973), 217–232.
[26] W. Xu, Y. Wei and S. Qiao, Condition numbers for structured least squares, BIT 46 (2006), 203–225.
[27] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for l1-minimization with applications to compressed sensing, SIAM J. Imaging Sciences 1 (2008), 143–168.
[28] Y. Zhang, Solution-recovery in L1-norm for non-square linear systems: deterministic conditions and open questions, Technical Report TR 05-06, Department of Computational and Applied Mathematics, Rice University, Houston, TX, 2005.
In: Handbook of Optimization Theory
Editors: J. Varela and S. Acuña, pp. 587-621
ISBN 978-1-60876-500-3
© 2011 Nova Science Publishers, Inc.
Chapter 21
THE SUBPRIME MORTGAGE CRISIS: OPTIMAL CASH FLOWS FROM THE FINANCIAL LEVERAGE PROFIT ENGINE

M.A. Petersen*, M.P. Mulaudzi, J. Mukuddem-Petersen, B. De Waal and I.M. Schoeman
North-West University (Potchefstroom Campus), Private Bag X 6001, Potchefstroom 2520, SA
Abstract

Subprime residential mortgage loan securitization and its associated risks have been a major topic of discussion since the onset of the mortgage crisis in 2007. In this paper, we solve a stochastic optimal credit default insurance problem that has the cash outflow rate for satisfying depositor obligations, the investment in securitized loans and credit default insurance as controls. As far as the latter is concerned, we compute the credit default swap premium and accrued premium by considering the credit rating of the securitized mortgage loans. Finally, we provide an analysis of the aforementioned optimal insurance problem and its connections with the mortgage crisis.
JEL Classification: G10, IM01, IM10

Key words: Residential mortgage loan (RML); residential mortgage-backed securities (RMBSs); investing bank (IB); special purpose vehicle (SPV); credit risk; credit default swap (CDS); tranching risk; counterparty risk; liquidity risk; subprime mortgage crisis.
1. Introduction
The 2007-2010 subprime mortgage crisis (SMC) can be attributed to a confluence of factors such as lax screening by mortgage originators (ORs) and a rise in the popularity of new structured financial products whose risks were difficult to evaluate. As far as the latter

* E-mail address: [email protected]
is concerned, subprime residential mortgage loan (RML) securitization involves the pooling of RMLs that are subsequently repackaged into interest-bearing securities. The interest and principal payments from RMLs are passed through to credit market investors such as investing banks (IBs). The risks associated with RML securitization are transferred from ORs to special purpose vehicles (SPVs) and securitized RML bond holders such as IBs. RML securitization thus represents an alternative and diversified source of housing finance based on the transfer of credit risk. Some of the other risks involved are tranching, counterparty and liquidity risks. In this paper, we specifically investigate the securitization of subprime RMLs as illustrated in Figure 1. The first step in the process involves an OR that extends RMLs that are subsequently removed from its balance sheet and pooled into RML reference portfolios. OR then sells these portfolios to SPV, an entity set up by a financial institution specifically to purchase RMLs and realize their off-balance-sheet treatment for legal and accounting purposes. Next, the SPV finances the acquisition of subprime RML portfolios by issuing tradable, interest-bearing securities that are sold to IBs. They receive fixed or floating rate coupons from the SPV account funded by the cash flows generated by RML reference portfolios. In addition, servicers service the RML portfolios, collect payments from the original mortgagors, and pass them on, less a servicing fee, directly to SPV. A diagrammatic overview of the securitization of RMLs is given below.
[Figure 1: Step 1: transfer of RMLs from OR to the issuing SPV; the RMLs are immune from bankruptcy of OR, and OR retains no legal interest in the RMLs. Step 2: SPV issues RMBSs to credit market investors (IBs), typically structured into various classes/tranches (senior, mezzanine and equity), rated by one or more CRAs.]

Figure 1. Diagrammatic Overview of RML Securitization Process.

In our paper, subprime RML securitization mainly refers to the securitization of such RMLs into residential mortgage-backed securities (RMBSs). For this reason we use the terms "securitized RML" and "RMBS" interchangeably. However, most of our arguments also apply to the securitization of RMBSs into RMBS collateralized debt obligations (CLOs) as well as RMBS CLOs into RMBS CLO²s. Unfortunately, the analysis in the latter cases is much more complicated and will not be attempted. The RMBSs themselves
are structured into tranches. As in Figure 1, this paper involves three such tranches: the senior (usually AAA rated and abbreviated as sen), mezzanine (usually AA, A, BBB rated and abbreviated as mezz) and junior (equity) tranches (usually BB, B rated or unrated and abbreviated as jun), in order of contractually specified claim priority. At this stage, the location and extent of subprime risk cannot be clearly described. This is due to the chain of interacting securities that causes the risk characteristics to be opaque. Another contributing factor is the derivatives that resulted in negative basis trades moving CLO risk, and credit derivatives that created additional long exposure to subprime RMLs. Determining the extent of the risk is also difficult because the effects on expected RML losses depend on house prices as the first order risk factor. Simulating the effects of this through the chain of interacting securities is very difficult. By way of motivating our study and illustrating the aforementioned risk issues and their cascading effects, we consider profits from an interacting subprime RML, a sen/sub tranche RMBS securitization of this single RML and a sen/sub tranche RMBS CLO, which has purchased the sen tranche of the RMBS (compare with [13]). In our example, all profits take place at time v. The RML has a face value of M^f. At time v, the RML experiences a step-up rate, r_τ, and will either be refinanced or not. If it is not refinanced, then it defaults, in which case OR will recover R_v. Therefore, OR will suffer a loss of S_v, which is given by S_v = M_v^f − R_v, where S_v and R_v are the RML loss and recovery, respectively. In the case where no default occurs, the new RML is expected to be worth E(M_v). If we assume no dependence of R_v and E(M_v) on house prices, the profit to OR is given by Π_v = max[M_v, R_v], where M_v is the value of the new RML after refinancing. If M_v < R_v then OR does not refinance and the mortgagor defaults. OR finances RML extensions via securitization, where the RML is sold at par of M^f. The subprime RMBS transaction has two tranches: the first tranche attaches at 0 and detaches at N_v, and the second tranche attaches at N_v and detaches at the end value M^f. The face value of the sen tranche is the difference between the face value of the RML and the first loss to be absorbed by the equity tranche, i.e., M^f − N_v. It then follows that the loss that may occur on the sen tranche is given by

\[
S_v^s = \max[S_v - N_v,\ 0], \tag{1.1}
\]

where N_v is the value at which the first RMBS tranche detaches. Here, the profit to the RMBS bond holder on the sen tranche has the form
\[
\Pi_v^s = \min \Big\{ \max[M_v^f - N_v,\ 0];\ M_v^f - N_v - S_v^s \Big\}.
\]

In this case, if max[M_v^f − N_v, 0] = M_v^f − N_v, then S_v^s ≤ M_v^f − N_v. This implies that

\[
\Pi_v^s = M_v^f - N_v - S_v^s,
\]
which, in turn, implies that

\[
\Pi_v^s = \min[M_v^f - N_v,\ M_v^f - S_v].
\]

Next, we consider a situation in which the sen tranche of the subprime RMBS is sold to a CLO, which has two tranches: the first tranche attaches at 0 and detaches at N_v^c; the second tranche attaches at N_v^c and detaches at the end value M^f − N_v. We note that the size of the CLO is M^f − N_v, since it only purchases the sen tranche of the subprime RMBS. Moreover, the amount N_v^c will be less than N_v because the CLO portfolio is smaller; the sub tranche of the CLO could be large in percentage terms, though. In this case, we have that the loss on the sen tranche is

\[
S_v^c = \max \Big\{ \min[S_v^s,\ M_v^f - N_v] - N_v^c,\ 0 \Big\}. \tag{1.2}
\]

Furthermore, the profit to the RMBS CLO holder on this tranche is given by

\[
\Pi_v^c = \min \Big\{ \max[M_v^f - N_v - N_v^c,\ 0];\ M_v^f - N_v - N_v^c - S_v^c \Big\}. \tag{1.3}
\]

If we substitute (1.2) into (1.3), then Π_v^c takes the form

\[
\Pi_v^c = \min \Big\{ \max[M_v^f - N_v - N_v^c,\ 0];\ M_v^f - N_v - N_v^c - \max\big\{ \min[S_v^s,\ M_v^f - N_v] - N_v^c,\ 0 \big\} \Big\}. \tag{1.4}
\]

Finally, substituting (1.1), we obtain

\[
\Pi_v^c = \min \Big\{ \max[M_v^f - N_v - N_v^c,\ 0];\ M_v^f - N_v - N_v^c - \max\big\{ \min[\max[S_v - N_v,\ 0],\ M_v^f - N_v] - N_v^c,\ 0 \big\} \Big\}.
\]
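The cascading formulas (1.1)–(1.4) are straightforward to encode. The sketch below evaluates them for made-up face values, detachment points and a default loss, chosen only to illustrate how a loss on the underlying RML propagates through the structure.

```python
# Evaluate the cascading tranche formulas (1.1)-(1.4); all numbers are
# hypothetical and chosen only to illustrate the loss waterfall.

def sen_rmbs_loss(S_v, N_v):                     # eq. (1.1)
    return max(S_v - N_v, 0.0)

def sen_rmbs_profit(M_f, S_v, N_v):              # min[M^f - N_v, M^f - S_v]
    return min(M_f - N_v, M_f - S_v)

def sen_clo_loss(S_s, M_f, N_v, N_c):            # eq. (1.2)
    return max(min(S_s, M_f - N_v) - N_c, 0.0)

def sen_clo_profit(M_f, S_s, N_v, N_c):          # eqs. (1.3)-(1.4)
    loss = sen_clo_loss(S_s, M_f, N_v, N_c)
    return min(max(M_f - N_v - N_c, 0.0), M_f - N_v - N_c - loss)

M_f, N_v, N_c, S_v = 100.0, 10.0, 5.0, 30.0      # face value, detachments, RML loss
S_s = sen_rmbs_loss(S_v, N_v)                    # 20.0 passed to the sen RMBS tranche
print(sen_rmbs_profit(M_f, S_v, N_v))            # 70.0
print(sen_clo_profit(M_f, S_s, N_v, N_c))        # 70.0 = 85.0 - 15.0 absorbed loss
```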
1.1. Literature Review
The literature about RML securitization and the SMC is growing and includes the following contributions. The article [11] (see, also, [2] and [6]) shows that RML charge-offs are more pronounced among ORs that are unable to sell their originate-to-distribute (OAD) RMLs to investors. This finding supports the view that the credit risk transfer through the OAD market resulted in the origination of inferior quality RMLs by ORs. We believe that RML standards became slack because securitization gave rise to moral hazard, since each link in the RML chain made a profit while transferring associated credit risk to the next link (see, for instance, [11] and [13]). The increased distance between ORs and the ultimate bearers
of risk potentially reduced ORs' incentives to screen and monitor mortgagors (see [12]). The increased complexity of RMBSs and markets also reduces IBs' ability to value them correctly (see, for instance, [13]). CDSs are financial instruments that are used as a hedge and protection for debtholders, in particular subprime RMBS investors, against the risk of default (see, for instance, [7]). Like all swaps and other credit derivatives, CDSs may either be used to hedge risks (specifically, to insure IBs against default) or to profit from speculation. In the SMC, as the net profit to IBs decreased because of subprime RML losses, the probability increased that protection sellers would have to compensate their counterparties (see [17] for further discussion). This created uncertainty across the system, as IBs wondered which agents would be required to pay to cover RML defaults. Our work has a strong connection with this issue via IB's profit model under RML securitization, which incorporates CDS dynamics and the rate of cash outflow to fulfill depositor obligations. CDSs are largely not regulated. As of 2008, there was no central clearinghouse to honor CDSs in the event that a party to a CDS proved unable to perform its obligations under the CDS contract. Required disclosure of CDS-related obligations has been criticized as inadequate (compare with [3] and [7]).
1.2. Preliminaries About RML Securitization, CDSs and Risks
In this subsection, we provide preliminaries about RML securitization, CDSs and risks.

1.2.1. Preliminaries about Subprime RML Securitization

A diagrammatic overview of RML securitization strategies is given in Figure 2. At the outset, OR extends RMLs, M_t, to mortgagors (see 2A). By agreement, mortgagors are required to pay an interest rate, r_M, on their RMLs (compare with 2b). Next, OR pools its RMLs and sells them to SPV (see 2D). SPV pays OR an amount which is slightly greater than the value of the pool of RMLs, as in 2C. As was mentioned before, the SPV divides this pool into sen, mezz and jun tranches which are exposed to differing levels of credit risk. Moreover, the SPV sells these tranches as securities backed by subprime RMLs to IB (see 2E). IB is paid a coupon, r_B, that is determined by the RML default rate, prepayment and foreclosure (see 2f). On the other hand, IB has the option of investing in Treasuries with a return of r_T (see 2G and 2h). In our IB profit model under RML securitization, depositor obligations are fulfilled at a stochastic interest rate, k (see 2j). Also, the rate of cash inflow from depositors to IB for investments in securitized subprime RMLs is denoted by µ_B (represented by 2i). Depositors may also invest funds in safe assets, where their realized rates of return, r̂, are known in advance (see 2K and 2l). IB may attract depositors because of deposit insurance and the possibility that it pays higher rates than those for riskless assets. In anticipation of losses arising from investment in securitized subprime RMLs, IB purchases credit protection from a swap protection seller (see 2M). 2n represents the payment made by the swap protection seller after a credit event.¹ More is said about CDSs in Subsection 1.2.2. Furthermore, 2o is the interest rate paid by mortgagors to the servicer of the RMLs, and, net of the servicing fee, 2p is the interest rate passed by the servicer of the RMLs to the SPV.
¹ A credit event is a legally defined event that typically includes bankruptcy, failure-to-pay and restructuring.
[Figure 2 depicts the securitization strategy as four steps. Step 1: OR extends RMLs to MRs. Step 2: OR sells RMLs to SPV; MRs make monthly payments to the servicer (SR). Step 3: SPV sells RMBSs to IB; the underwriter assists in the sale, the credit rating agency (CRA) rates the RMBSs, and credit enhancement (CE) may be obtained. Step 4: SR collects monthly payments from MRs and remits them to SPV; the trustee submits monthly remittance reports to IB; SR and the trustee manage delinquent RMLs according to the Pooling & Servicing Agreement. The labels 2A-2p mark the cash flows described in the text.]
Figure 2. Diagrammatic Overview of RML Securitization Strategies.

1.2.2. Preliminaries about Credit Default Swaps

In this subsection, a diagrammatic overview of a CDS protecting a RMBS is provided. Our dynamic model allows for protection against securitized RML losses via CDS contracts. The CDS counterparty, IB, who is the protection buyer, makes a regular stream of payments constituting the premium leg (see 3A) to the RMBS SPV. This SPV, in turn, makes regular coupon payments to the protection seller (refer to 3B). These payments are made until a credit event occurs or until maturity, whichever happens first. The size of the premium payments depends on the quoted default swap spread, which is paid on the face value of the protection and is directly related to credit ratings. If there is no credit event, the
[Figure 3 depicts the protection buyer (IB, the CDS counterparty) paying the CDS premium in basis points (3A) to the RMBS SPV, which passes the RMBS coupon (LIBOR plus spread) to the protection seller (3B); protection payments flow back through the SPV to the buyer (3C, 3D); the value of the contract responds to the credit quality of the RML reference portfolio (3E); collateral or eligible investments, purchased with the RMBS proceeds (3G), contribute the LIBOR index portion of the coupon (3f).]
Figure 3. Diagrammatic Overview of a RMBS Protected by a Credit Default Swap (CDS).

If there is no credit event, the seller of protection receives the periodic fee from the buyer, and profits if the RML reference portfolio remains fully functional through the life of the contract and no credit event takes place. However, the protection seller is taking the risk of big losses if a credit event occurs. Depending on the terms agreed upon at the onset of the contract, when such an event takes place, the protection seller may deliver either the current cash value of the referenced bonds or the actual bonds to the protection buyer via the RMBS SPV (refer to 3C and 3D). This payment to the protection buyer is known as the protection leg (see 3D). It equals the difference between par and the price of the cheapest to deliver (CTD) asset associated with the RML portfolio on the face value of the protection, and compensates the protection buyer for the RML loss. The value of a CDS contract fluctuates based on the increasing or decreasing probability that a RML reference portfolio will have a credit event (compare with 3E). Increased probability of such an event would make the contract worth more for the buyer of protection, and worth less for the seller. The opposite occurs if the probability of a credit event decreases. Collateral or eligible investments are highly rated, highly liquid financial instruments purchased from the sale proceeds of the initial RMBS (represented by 3G). These investments contribute the index portion (see 3f) of the RMBS coupon and provide protection payments or the return of principal to RMBS bond holders.

1.2.3. Preliminaries about RML Securitization Risks

The main risks from subprime RML securitization that we are interested in are credit (including prepayment), tranching, counterparty and liquidity risks. Credit risk emanates from the ability of subprime mortgagors to make regular repayments on the RML reference portfolio under any interest rate regime. This risk category generally includes both default and delinquency risk. Prepayment risk results from the ability of the subprime mortgagor to repay his/her RML after an interest rate change – usually from teaser to step-up rate –
has been implemented by OR. Counterparty risk refers to the ability of economic agents – such as ORs, mortgagors, servicers, investors, SPVs, trustees, underwriters and depositors – to fulfill their obligations towards each other. Liquidity risk arises from situations in which SPV or IB, as holders of RMBSs, cannot trade because no economic agent in the credit market is willing to do so. For the sake of argument, risks falling into these categories are cumulatively known as securitization risks – just called risks hereafter. In Figure 4 below, we provide a diagrammatic overview of the aforementioned risks and their relationship with returns for IBs.
[Figure 4 depicts mortgagors (MRs) feeding the RML reference portfolio, which is tranched from AAA/Aaa (last loss position, lowest credit risk, lower expected yield) through AA/Aa, A/A and BBB/Baa down to BB/Ba, B/B and unrated (first loss position, highest risk, higher expected yield).]

Figure 4. Diagrammatic Overview of Risk and Return for IBs.
1.3. Main Problems and Outline of the Paper
The main problems to emerge from the above discussion are formulated below.

Problem 1.1. (IB's Profit From Investment in Securitized Subprime RMLs): Can we construct a stochastic dynamic model to describe IB's profit from subprime RML securitization in continuous time? (Section 2.2.)

Problem 1.2. (Stochastic Optimal Credit Default Insurance Problem): Which decisions about the rate of cash outflow for fulfilling depositor obligations, the value of IB's investment in securitized subprime RMLs and credit default insurance must be made in order to attain an optimal profit for IB? (Theorem 3.2 in Section 3.2.)

Problem 1.3. (Connections with the SMC): How does our stochastic model for credit default insurance and its associated risks relate to the SMC? (Section 4.)
The current section is introductory in nature. In Section 2., we present the dynamics of the RMBS price process (see Subsection 2.1.1.), subprime RMBS losses (see Subsection 2.1.2.) and credit ratings (see Subsection 2.1.3.). Furthermore, Subsection 2.1.4. describes the type of CDS contract that we consider and includes the mathematical formulation of the associated premium in (2.-1). The above discussions about subprime mortgage credit enable us to develop a stochastic model for IB's profit under RML securitization in (2.0) from Subsection 2.2.. The stochastic differential equation (2.1) allows us to state and prove a stochastic optimal credit default insurance problem in Subsections 3.1. and 3.2., respectively. More specifically, Theorem 3.2 and Proposition 3.3 give the general solution to Problem 3.1. Subsequently, we determine explicit solutions for Problem 3.1 when exponential, power and logarithmic utility functions are chosen. For instance, Proposition 3.4 provides an explicit solution to the stochastic optimal credit default insurance problem with the choice of an exponential utility function. In this case, IB's optimal investment in securitized subprime RMLs, rate of cash outflow for fulfilling depositor obligations and accrued premium² for CDSs, denoted by B*, k* and Φ*, respectively, are not random variables, since they do not depend on IB's profit, Π. On the other hand, the choice of a power utility function in Proposition 3.5 yields an explicit solution where the optimal control processes are expressed as linear functions of IB's optimal profit, Π*. Moreover, Proposition 3.6 provides an explicit solution to Problem 3.1 with the choice of a logarithmic utility function – here the optimal controls are found to be comparable with those in Proposition 3.5. In Section 4., we provide an analysis of issues related to IB's profit model under RML securitization and the stochastic optimal credit default insurance problem, as well as their connections with the SMC. For instance, in Subsection 4.1.1., we discuss credit, tranching, counterparty and liquidity risks as they pertain to our models. Furthermore, Section 5. presents a few concluding remarks and highlights some topics for future research.
2. Subprime Securitization Models
In this section, we discuss subprime RML securitization and construct a stochastic model of IB’s profit under such securitization. In order to model the uncertainty associated with these items, we consider the filtered probability space, (Ω, F, (Ft )0≤t≤T , P), throughout.
2.1. Subprime Residential Mortgage Loan Securitization
In this subsection, we consider the modeling of subprime RMBSs, RMBS losses, credit ratings and credit default insurance.

2.1.1. Subprime RMBS Price Process

Excess spread/overcollateralization (XS/OC) transactions are prevalent in subprime RML securitization. They are more complex than straight sen/sub 6-pack deals for typical prime and Alt-A structures. For subprime RML securitization, further complications for IB's
² Accrued premium is the amount owing to the swap protection seller for IB's credit default protection for the period between the previous premium payment and the negative credit event.
profit arise from the available funds cap (AFC) risk. Generally, RMBS bonds (liabilities) in XS/OC deals pay a floating coupon, r_B, while subprime RML reference portfolios (collateral) typically pay a fixed rate, r_M, until the reset date on hybrid adjustable rate mortgages (ARMs). In this case, the risk may arise that the interest paid into the deal from the RML reference portfolio, r_M, is not sufficient to make the coupon payments, r_B, to RMBS bond holders. To mitigate this situation, the deal may be subject to an AFC. Here, IBs receive interest as the minimum of the sum of the index rate, r_L (i.e., 6-month LIBOR), and margin, ϱ, or the weighted average AFC, r_a. Symbolically, this means that

\[
r_B = \min[r_L + \varrho,\ r_a]. \tag{2.-3}
\]
Given (2.-3), the stochastic dynamics of the securitized subprime RML price process, P^B, may be represented via geometric Brownian motion as

\[
dP_t^B = P_t^B \big( r_t^B\, dt + \sigma_t^B\, dZ_t^B \big), \tag{2.-2}
\]

where σ^B and Z^B are the RMBS price volatility and a standard Brownian motion, respectively.
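A minimal Euler-Maruyama discretization of (2.-2) with constant coefficients looks as follows; the rate, volatility and initial price below are placeholder values of ours, not calibrated figures from the paper.

```python
# Euler-Maruyama simulation of the RMBS price process (2.-2), assuming
# constant r^B and sigma^B; all parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(42)
T, steps = 1.0, 250
dt = T / steps
r_B, sigma_B = 0.06, 0.25        # assumed coupon rate and price volatility
P = 100.0                        # assumed initial RMBS price P_0^B

for _ in range(steps):
    dZ = np.sqrt(dt) * rng.standard_normal()
    P *= 1.0 + r_B * dt + sigma_B * dZ

print(P)                         # one simulated terminal price P_T^B
```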
2.1.2. Subprime RMBS Losses

We suppose that the loss suffered by IB from RML reference portfolio defaults is a random variable, S, with distribution function F(S). In the sequel, we define this loss as S : Ω → R_+ = [0, ∞), where S takes on nonnegative real values that may not necessarily be measurable. Moreover, let θ ≥ 0 be a nonnegative real number which is an upper bound of S(η), for all η ∈ Ω, where η is defined as IB's profit. Therefore, {η ∈ Ω : S(η) > θ} is empty. This enables us to define the smallest essential upper bound for the aggregate securitization losses, S, as

\[
\operatorname{ess\,sup} S(\eta) = \inf\{ \theta \in \mathbb{R}_+ : \mathbb{P}(\{\eta : S(\eta) > \theta\}) = 0 \}.
\]

Furthermore, we assume that S is modeled as a compound Poisson process, for which Ñ is a Poisson process with a deterministic frequency parameter, φ(t). In this case, Ñ is stochastically independent of the Brownian motion Z^B.

2.1.3. Credit Ratings

Concerns about credit ratings have resurfaced during the SMC, where banks have been allowed to use ratings to determine the risk attached to their subprime RML securitizations. Based on private and public information about RMBS quality and a published credit rating, IBs have to decide whether or not to continue investing in securitized RMLs.
We suppose that at time t, a continuum of RMBSs is eligible to be rated. To simplify notation, this mass of RMBSs is normalized to 1. There are two types of RMBSs, viz., A and B. If no shock occurs, type-A RMBSs have a low default probability, p^A, and type-B RMBSs a high default probability, p^B, where 0 < p^A < p^B < 1. Let Γ(t) denote the mass of type-A RMBSs at time t, where 0 ≤ Γ(t) ≤ 1. In principle, the value of Γ rises when perceived credit risk (or probability of default) is low and falls when such risk is high. In general, there is substantial evidence to suggest that credit rating changes exhibit procyclical behavior. At each t, RMBSs are uniformly located along the unit interval according to their type. The CRA chooses a fee and offers a rating to each RMBS. There are two rating categories, viz., A and B. Rating category A indicates that an RMBS is of type A (for instance, AAA, AA and Aaa) and rating category B indicates that an RMBS is of type B (for instance, BBB, BB and B). The CRA chooses a fee f_t ∈ R_+ and a rating threshold a_t ∈ [0, 1]. The CRA offers RMBSs located on or to the right of a_t on the unit interval an A rating, and RMBSs located to the left of a_t a B rating. If a_t = Γ(t), the CRA gives all type-A RMBSs an A rating and all type-B RMBSs a B rating. If a_t > Γ(t), the CRA extends inflated ratings to the RMBS portfolio, i.e., it offers some type-B RMBSs an A rating.

2.1.4. Credit Default Swaps

IB's investment in securitized RMLs may yield substantial returns but may also result in losses, as suggested in Subsection 2.1.2.. In particular, our dynamic IB profit model allows for protection against such losses via CDS contracts (see Subsection 1.2.2. for more details). Our contribution uses CDSs to hedge risk rather than for speculation. In this process, we assume that the CDS premium paid by IB takes the form of a continuous contribution that is expressed as
$$ \Theta(C(S)) = [1 - \Gamma(u)]\,\phi(u)\,\mathbb{E}[C_u(S)], \quad u \ge t, \qquad (2.-1) $$
where Γ is given as in Subsection 2.1.3., S is IB's aggregate loss from RML securitization investments (see Subsection 2.1.2.) and C is the payment made to the swap protection buyer for such losses, as it enters the premium (2.-1) through its expectation. This means that if the losses are S = l at time u, then the payment, C_u(l), equals the difference between par and the price of the cheapest-to-deliver (CTD) RML obligation on the face value of protection.
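As an illustration of (2.-1), the premium rate can be estimated by Monte Carlo once a loss distribution is fixed. The sketch below assumes, purely for illustration, lognormal losses and the per-loss payment C_u(S) = max(S − Φ, 0) that appears later in (3.5); none of the numeric values come from the chapter.

    # A minimal Monte Carlo sketch of the CDS premium rate (2.-1),
    # Theta = [1 - Gamma(u)] * phi(u) * E[C_u(S)].
    # The lognormal loss distribution and all parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    gamma = 0.85      # mass of type-A RMBSs, Gamma(u)
    phi = 0.20        # frequency parameter of the loss process, phi(u)
    Phi = 2.0         # accrued premium (retention level) in C(S) = max(S - Phi, 0)

    S = rng.lognormal(mean=1.0, sigma=0.5, size=1_000_000)   # simulated losses
    C = np.maximum(S - Phi, 0.0)                             # payment to the protection buyer

    theta = (1.0 - gamma) * phi * C.mean()
    print(f"estimated premium rate Theta = {theta:.4f}")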
2.2. IB's Profit under Optimal Subprime RML Securitization
In this subsection, we provide a stochastic model for IB's profit under subprime RML securitization. In the sequel, we denote IB's rate of return on investments in securitized RMLs by r^B, the rate of cash inflow from depositors to IB for investments in securitized subprime RMLs by µ^B, and the stochastic rate of cash outflow for fulfilling depositor obligations by k_u. In this case, the stochastic model of IB's profit under subprime RML securitization is given by
$$ d\Pi_u = \Big[ r^B_u B_u + \mu^B_u - k_u - [1-\Gamma(u)]\phi(u)\mathbb{E}[C_u(S)] \Big] du + \sigma B_u\, dZ^B_u - \Big[ S(\Pi_u, u) - C_u(S(\Pi_u, u)) \Big] d\widetilde N_u, \quad u \ge t, \quad \Pi_t = \eta. \qquad (2.0) $$
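Equation (2.0) is a jump diffusion, so it can be simulated with an Euler-Maruyama step plus a Poisson jump count per step. The sketch below is a bare-bones discretization under simplifying assumptions (constant coefficients, deterministic proportional losses); every numeric value is hypothetical.

    # A minimal Euler-Maruyama sketch of the profit dynamics (2.0):
    # drift plus Brownian term, minus the compound Poisson drain S - C(S).
    # Constant coefficients and all numeric values are simplifying assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    T, n = 3.0, 750
    dt = T / n
    r_B, B, mu_B, k, sigma = 0.057, 10.0, 0.8, 0.2, 0.15
    gamma, phi = 0.85, 0.20          # Gamma(u) and phi(u), held constant
    Phi = 0.5                        # accrued premium; C(S) = max(S - Phi, 0)
    loss_frac = 0.05                 # assumed proportional loss: S = loss_frac * Pi

    Pi = 35.2                        # initial profit, Pi_t = eta
    for _ in range(n):
        S = loss_frac * Pi
        C = max(S - Phi, 0.0)
        premium = (1.0 - gamma) * phi * C            # E[C_u(S)] collapses to C here
        drift = r_B * B + mu_B - k - premium
        dZ = np.sqrt(dt) * rng.standard_normal()
        dN = rng.poisson(phi * dt)                   # jump count over the step
        Pi += drift * dt + sigma * B * dZ - (S - C) * dN

    print(f"simulated terminal profit: {Pi:.2f}")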
If we assume that IB receives fixed coupons from its investment in RMBSs, i.e., r^B_u = r^B, then the stochastic model (2.0) may be rewritten as
$$ d\Pi_u = \Big[ r^B B_u + \mu^B_u - k_u - [1-\Gamma(u)]\phi(u)\mathbb{E}[C_u(S)] \Big] du + \sigma B_u\, dZ^B_u - \Big[ S(\Pi_u, u) - C_u(S(\Pi_u, u)) \Big] d\widetilde N_u, \quad u \ge t, \quad \Pi_t = \eta. \qquad (2.1) $$

3. The Stochastic Optimal Credit Default Insurance Problem
In this section, we solve IB's stochastic optimal credit default insurance problem using the model of IB's profit under RML securitization described by equation (2.1) in Subsection 2.2..
3.1. Statement of the Optimal Credit Default Insurance Problem
Let the set of control laws, A, adapted to IB's profit under subprime RML securitization, Π, be given by
$$ \mathcal{A} = \big\{ (k_t, B_t, C_t) : \text{measurable w.r.t. the filtration } \mathcal{F}_t;\ (2.1) \text{ has a unique solution} \big\}. \qquad (3.-4) $$

The objective function of the stochastic optimal credit default insurance problem is given by
$$ J(\eta, t) = \sup_{\mathcal{A}} \mathbb{E}^{t,\eta}\bigg[ \int_t^T \exp\{-\delta^r (u-t)\}\, U^{(1)}(k_u)\, du + \exp\{-\delta^r (T-t)\}\, U^{(2)}(\Pi_T) \bigg], \qquad (3.-3) $$

where DU^(1)(·) > 0, D²U^(1)(·) < 0, DU^(2)(·) > 0 and D²U^(2)(·) < 0. Here, D and D² denote the first- and second-order differential operators. Also, U^(1) and U^(2) are increasing, concave utility functions and δ^r > 0 is the rate at which the utilities of IB's rate of cash outflow for fulfilling depositor obligations, k, and of terminal profit, Π_T, are discounted. We consider several choices of utility function, viz., power, logarithmic and exponential. We are now in a position to state the stochastic optimal credit default insurance problem for a fixed adjustment period, [t, T].
Problem 3.1. (Optimal Rate of Depositor Cash Outflow and Profit): Suppose that the admissible class of control laws, A ≠ ∅, is given by (3.-4). Moreover, let the controlled stochastic differential equation for the Π-dynamics be given by (2.1) and the objective function, J : A → R_+, by (3.-3). In this case, we solve

$$ \sup_{\mathcal{A}} J(\Pi_t;\, k_t, B_t, C_t), $$

and determine the optimal control law (k^*_t, B^*_t, C^*_t), if it exists, as

$$ (k^*_t, B^*_t, C^*_t) = \arg\sup_{\mathcal{A}} J(\Pi_t;\, k_t, B_t, C_t) \in \mathcal{A}. $$

3.2. Solution to the Optimal Credit Default Insurance Problem
In this subsection, we determine a solution to Problem 3.1 by using the dynamic programming method involving a Hamilton-Jacobi-Bellman equation (HJBE).

3.2.1. General Solution to the Optimal Credit Default Insurance Problem

In the sequel, we assume that the optimal control laws exist and that the objective function, J, given by (3.-3), is continuous and twice differentiable. Next, we set F(u, k_u) = exp[−δ^r(u − t)]U^(1)(k_u) and J(Π_T, T) = exp[−δ^r(T − t)]U^(2)(Π_T). Then (3.-3) takes the form

$$ J(\eta, t) = \sup_{\mathcal{A}} \mathbb{E}^{t,\eta}\bigg[ \int_t^T F(u, k_u)\,du + J(\Pi_T, T) \bigg], \qquad (3.-2) $$

and from (3.-2), we see that

$$ J(\eta, t) \ge \mathbb{E}^{t,\eta}\bigg[ \int_t^T F(u, k_u)\,du + J(\Pi_T, T) \bigg] = \mathbb{E}^{t,\eta}\bigg[ \int_t^T F(u, k_u)\,du \bigg] + \mathbb{E}^{t,\eta}\big[ J(\Pi_T, T) \big]. $$

Applying a form of Itô's formula that is appropriate to our problem, we have

$$ \begin{aligned} dJ(\Pi_u, u) &= \frac{\partial J(\Pi_u, u)}{\partial t}\,du + \frac{\partial J(\Pi_u, u)}{\partial \eta}\,d\Pi_u + \frac{1}{2}\frac{\partial^2 J(\Pi_u, u)}{\partial \eta^2}\,(d\Pi_u)^2 \\ &= \frac{\partial J(\Pi_u, u)}{\partial \eta}\Big[ r^B B_u + \mu^B_u - k_u - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)] \Big] du + \frac{\partial J(\Pi_u, u)}{\partial \eta}\,\sigma B\, dZ^B_u \\ &\quad + \Big[ J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \Big] d\widetilde N_u + \frac{1}{2}\frac{\partial^2 J(\Pi_u, u)}{\partial \eta^2}\,\sigma^2 B^2\, du + \frac{\partial J(\Pi_u, u)}{\partial t}\,du. \end{aligned} $$
The above expression can be rewritten as

$$ \begin{aligned} dJ(\Pi_u, u) &= \Big[ r^B B J_\eta(\Pi_u, u) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_u, u) + \mu^B_u J_\eta(\Pi_u, u) - k_u J_\eta(\Pi_u, u) \\ &\quad - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)]J_\eta(\Pi_u, u) + J_t(\Pi_u, u) \Big] du + J_\eta(\Pi_u, u)\,\sigma B\, dZ^B_u \\ &\quad + \Big[ J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \Big] d\widetilde N_u. \end{aligned} $$

Furthermore, if we integrate from t to T, we obtain

$$ \begin{aligned} J(\Pi_T, T) &= J(\Pi_t, t) + \int_t^T \Big[ r^B B J_\eta(\Pi_u, u) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_u, u) + \mu^B_u J_\eta(\Pi_u, u) - k_u J_\eta(\Pi_u, u) \\ &\quad - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)]J_\eta(\Pi_u, u) + J_t(\Pi_u, u) \Big] du + \int_t^T J_\eta(\Pi_u, u)\,\sigma B\, dZ^B_u \\ &\quad + \int_t^T \Big[ J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \Big] d\widetilde N_u. \end{aligned} $$
Taking the mathematical expectation of the above expression and combining it with the inequality following (3.-2), we get

$$ \begin{aligned} \mathbb{E}\big[ J(\Pi_T, T) \big] &\ge \mathbb{E}\big[ J(\Pi_T, T) \big] + \mathbb{E}\bigg[ \int_t^T F(u, k_u)\,du \bigg] + \mathbb{E}\bigg[ \int_t^T \Big( r^B B J_\eta(\Pi_u, u) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_u, u) \\ &\quad + \mu^B_u J_\eta(\Pi_u, u) - k_u J_\eta(\Pi_u, u) - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)]J_\eta(\Pi_u, u) + J_t(\Pi_u, u) \Big) du \bigg] \\ &\quad + \mathbb{E}\bigg[ \int_t^T J_\eta(\Pi_u, u)\,\sigma B\, dZ^B_u \bigg] + \mathbb{E}\bigg[ \int_t^T \Big( J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \Big) d\widetilde N_u \bigg]. \end{aligned} $$

Assuming integrability as in [9], we have

$$ \mathbb{E}\bigg[ \int_t^T J_\eta(\Pi_u, u)\,\sigma B\, dZ^B_u \bigg] = 0. $$

It then follows that
$$ \begin{aligned} &\mathbb{E}\bigg[ \int_t^T F(u, k_u)\,du \bigg] + \mathbb{E}\bigg[ \int_t^T \Big( r^B B J_\eta(\Pi_u, u) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_u, u) + \mu^B_u J_\eta(\Pi_u, u) - k_u J_\eta(\Pi_u, u) \\ &\qquad - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)]J_\eta(\Pi_u, u) + J_t(\Pi_u, u) \Big) du \bigg] + \mathbb{E}\bigg[ \int_t^T \Big( J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \Big) d\widetilde N_u \bigg] \le 0 \end{aligned} $$

and, compensating the Poisson integral by its intensity,

$$ \begin{aligned} &\mathbb{E}\bigg[ \int_t^T \Big( F(u, k_u) + r^B B J_\eta(\Pi_u, u) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_u, u) + \mu^B_u J_\eta(\Pi_u, u) - k_u J_\eta(\Pi_u, u) \\ &\qquad - [1-\Gamma(u)]\phi(u)\mathbb{E}[C(S)]J_\eta(\Pi_u, u) + J_t(\Pi_u, u) + \mathbb{E}\big[ J(\Pi_u - (S - C(S)), u) - J(\Pi_u, u) \big]\phi(u) \Big) du \bigg] \le 0. \end{aligned} $$
If we assume regularity as in [9], the above expression can be reduced to

$$ \begin{aligned} &F(t, k_t) + r^B B J_\eta(\Pi_t, t) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\Pi_t, t) + \mu^B_t J_\eta(\Pi_t, t) - k_t J_\eta(\Pi_t, t) \\ &\quad - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]J_\eta(\Pi_t, t) + J_t(\Pi_t, t) + \mathbb{E}\big[ J(\Pi_t - (S - C(S)), t) - J(\Pi_t, t) \big]\phi(t) \le 0. \end{aligned} $$

In the sequel, we recall that Π_t = η, and that equality is attained at the maximum. Thus

$$ \begin{aligned} &J_t(\eta, t) + \max_{\mathcal{A}} \Big[ F(t, k_t) + r^B B J_\eta(\eta, t) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\eta, t) + \mu^B_t J_\eta(\eta, t) - k_t J_\eta(\eta, t) \\ &\quad - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]J_\eta(\eta, t) + \phi(t)\big( \mathbb{E}[J(\eta - (S - C(S)), t)] - J(\eta, t) \big) \Big] = 0. \end{aligned} $$

Again, since F(t, k_t) = U^(1)(k_t) and A is given by (3.-4), it follows that

$$ \begin{aligned} &J_t(\eta, t) + \max_k \Big[ U^{(1)}(k_t) - k_t J_\eta(\eta, t) \Big] + \max_B \Big[ r^B B J_\eta(\eta, t) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\eta, t) \Big] \\ &\quad + \max_C \Big[ \phi(t)\big( \mathbb{E}[J(\eta - (S - C(S)), t)] - J(\eta, t) \big) - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]J_\eta(\eta, t) \Big] + \mu^B_t J_\eta(\eta, t) = 0. \end{aligned} $$
Also, the boundary condition from (3.-2) implies that J(η, T) = U^(2)(η). Then J satisfies the HJBE given by
$$ \begin{aligned} 0 &= \max_k \Big[ U^{(1)}(k) - k J_\eta(\eta, t) \Big] + \max_B \Big[ r^B B J_\eta(\eta, t) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\eta, t) \Big] \\ &\quad + \max_C \Big[ \phi(t)\big( \mathbb{E}[J(\eta - (S - C(S)), t)] - J(\eta, t) \big) - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]J_\eta(\eta, t) \Big] \\ &\quad + \mu^B_t J_\eta(\eta, t) + J_t(\eta, t), \qquad J(\eta, T) = U^{(2)}(\eta). \end{aligned} \qquad (3.-1) $$
Note that

$$ J_t = \frac{\partial J}{\partial t}, \qquad J_\eta = \frac{\partial J}{\partial \eta} \qquad \text{and} \qquad J_{\eta\eta} = \frac{\partial^2 J}{\partial \eta^2}. $$
The objective function, J, is increasing and concave with respect to IB's profit, η, because the utility functions U^(1) and U^(2) are increasing and concave. It is important to note that verification theorems can be used to show that if the related HJBE has a smooth solution, Ĵ, then, under the regularity conditions of this paper, we have Ĵ = J.
Theorem 3.2. (Optimal Rate of Depositor Cash Outflow and Profit): Suppose that the objective function, J(η, t), solves the HJBE (3.-1). In this case, a solution to IB's stochastic optimal credit default insurance problem is

$$ B^*_t = -\frac{r^B J_\eta(\eta, t)}{\sigma^2 J_{\eta\eta}(\eta, t)}, \qquad (3.0) $$

where Π^*_t = η is IB's optimally controlled profit under subprime RML securitization. Also, the optimal rate of cash outflow for satisfying depositor obligations, {k^*_t}_{t≥0}, solves the equation

$$ D_k U^{(1)}(k^*_t) = J_\eta(\eta, t), \qquad (3.1) $$

where D_k represents the ordinary derivative with respect to k.

Proof. In our proof, we consider the static optimization problem given by

$$ \max_k \Big[ U^{(1)}(k) - k J_\eta(\eta, t) \Big] + \max_B \Big[ r^B B J_\eta(\eta, t) + \frac{1}{2}\sigma^2 B^2 J_{\eta\eta}(\eta, t) \Big]. \qquad (3.2) $$

In order to verify (3.0) and (3.1), we differentiate the expression inside the square brackets of (3.2) with respect to B and k. Setting the resulting partial derivatives to zero gives

$$ r^B J_\eta(\eta, t) + \sigma^2 B^* J_{\eta\eta}(\eta, t) = 0 \qquad (3.3) $$

and

$$ D_k U^{(1)}(k^*) - J_\eta(\eta, t) = 0. \qquad (3.4) $$
From (3.3) and (3.4), we have that

$$ B^*_t = -\frac{r^B J_\eta(\eta, t)}{\sigma^2 J_{\eta\eta}(\eta, t)} \qquad \text{and} \qquad D_k U^{(1)}(k^*_t) = J_\eta(\eta, t), $$

respectively. An alternative method of proof of Theorem 3.2 is the martingale approach that is expounded in [1] and the references contained therein.

3.2.2. Optimal Credit Default Swap Contracts

In this subsection, we determine the optimal CDS contract, C^*_t, when the premium is given by the functional Θ(C) in (2.-1). In our paper, we consider a CDS market which can trade the type of CDS contract described earlier. In addition, we assume that 0 ≤ C ≤ S. Taking our lead from insurance theory and the assumption that Θ(C) is proportional to the net CDS premium for a portfolio with mass of type-A RMBSs, Γ, the optimal CDS contract takes the form
$$ C(S) = \begin{cases} 0, & \text{if } S \le \Phi; \\ S - \Phi, & \text{if } S > \Phi. \end{cases} \qquad (3.5) $$
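In implementation terms, (3.5) is simply a deductible-style payoff; a two-function helper (hypothetical names, for illustration only) makes the split between the insured and retained parts of a loss explicit:

    # The per-loss CDS payment (3.5) and the loss retained by IB, S - C(S).
    def cds_payment(S: float, Phi: float) -> float:
        return max(S - Phi, 0.0)

    def retained_loss(S: float, Phi: float) -> float:
        return S - cds_payment(S, Phi)   # equals min(S, Phi)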
Some features of the aforementioned CDS contract are as follows. If S ≤ Φ, then it is optimal for IB not to buy CDS protection; if S > Φ, then it is optimal to buy CDS protection. In the sequel, the choice of the CDS contract purchased by IB is thus reduced to the problem of determining the optimal accrued premium, Φ.

Proposition 3.3. (Optimal Credit Default Insurance): The optimal CDS contract is either no swap protection or a per-loss accrued premium CDS, in which the accrued premium, Φ, varies with time. In particular, at a specified time, the optimal accrued premium, Φ^*_t, solves

$$ J_\eta(\Pi^*_t - \Phi_t, t) = [1 - \Gamma(t)]\, J_\eta(\Pi^*_t, t). \qquad (3.6) $$

No CDS contract is optimal at time t if and only if

$$ J_\eta\big(\Pi^*_t - \operatorname{ess\,sup} S(\Pi^*_t, t),\, t\big) \le [1 - \Gamma(t)]\, J_\eta(\Pi^*_t, t). \qquad (3.7) $$
Proof. We consider

$$ \max_C \Big[ \phi(t)\big( \mathbb{E}[J(\eta - (S - C(S)), t)] - J(\eta, t) \big) - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]\,J_\eta(\eta, t) \Big]. \qquad (3.8) $$
Let H be the expression inside the square brackets of (3.8), so that

$$ H = \phi(t)\big( \mathbb{E}[J(\eta - (S - C(S)), t)] - J(\eta, t) \big) - [1-\Gamma(t)]\phi(t)\mathbb{E}[C(S)]\,J_\eta(\eta, t). \qquad (3.9) $$

If we utilize (3.5), then (3.9) becomes

$$ H = \phi(t)\bigg[ \int_0^{\Phi} J(\eta - S, t)\, dF(S) + \int_{\Phi}^{\infty} J(\eta - \Phi, t)\, dF(S) - J(\eta, t) \bigg] - [1-\Gamma(t)]\phi(t)\, J_\eta(\eta, t) \int_{\Phi}^{\infty} (S - \Phi)\, dF(S). \qquad (3.10) $$

Differentiating (3.10) with respect to Φ, we obtain

$$ \frac{dH}{d\Phi} = -\phi(t)\, J_\eta(\eta - \Phi, t) \int_{\Phi}^{\infty} dF(S) + [1-\Gamma(t)]\phi(t)\, J_\eta(\eta, t) \int_{\Phi}^{\infty} dF(S) = -\phi(t)\, J_\eta(\eta - \Phi, t)\,[1 - F(\Phi)] + [1-\Gamma(t)]\phi(t)\, J_\eta(\eta, t)\,[1 - F(\Phi)]. \qquad (3.11) $$

To determine the optimal accrued premium, we set (3.11) to zero, so that

$$ -\phi(t)\, J_\eta(\eta - \Phi, t)\,[1 - F(\Phi)] + [1-\Gamma(t)]\phi(t)\, J_\eta(\eta, t)\,[1 - F(\Phi)] = 0. \qquad (3.12) $$

From the above equation we can produce (3.6), so that

$$ J_\eta(\eta - \Phi, t) = [1-\Gamma(t)]\, J_\eta(\eta, t). \qquad (3.13) $$
The expression (3.7) follows immediately from (3.13).

In order to determine an exact (closed-form) solution to the stochastic optimization problem in Theorem 3.2, we are required to make a specific choice of the utility functions U^(1) and U^(2). Essentially, these functions can be almost any increasing, concave functions of k and η, respectively. However, in order to obtain smooth analytic solutions to the stochastic optimal credit default insurance problem, in the ensuing discussion we choose power, logarithmic and exponential utility functions and analyze the effect of the different choices.

3.2.3. Boundary Value Problem

From what we have done so far, we see that, by substituting the optimal control processes, B^*_t and k^*_t, our problem can be further reduced to a boundary value problem (BVP) that consists of a partial differential equation involving the unknown objective function, J, and one boundary condition. Our argument leads to a BVP that may be expressed as
$$ \begin{aligned} &J_t(\eta, t) + U^{(1)}(k^*) - k^* J_\eta(\eta, t) + r^B B^* J_\eta(\eta, t) + \frac{1}{2}\sigma^2 (B^*)^2 J_{\eta\eta}(\eta, t) \\ &\quad + \phi(t)\big( \mathbb{E}[J(\eta - \Phi^*, t)] - J(\eta, t) \big) - [1-\Gamma(t)]\phi(t)\mathbb{E}[S - \Phi^*]\, J_\eta(\eta, t) + \mu^B_t J_\eta(\eta, t) = 0, \\ &J(\eta, T) = U^{(2)}(\eta). \end{aligned} \qquad (3.14) $$
In the sequel, we determine the objective function J(η, t) which solves (3.14).

3.2.4. Stochastic Optimal Credit Default Insurance with Exponential Utility

Assume that

$$ U^{(1)}(k) = 0 \qquad \text{and} \qquad U^{(2)}(\eta) = -\frac{1}{\varrho}\exp\{-\varrho\eta\}, \quad \varrho > 0. \qquad (3.15) $$
The following proposition provides a closed-form solution to Problem 3.1 in the case of an exponential utility function.

Proposition 3.4. (Stochastic Optimal Credit Default Insurance with Exponential Utility): Let the exponential utility functions be given by (3.15) and assume that IB's RML securitization losses, S, are independent of its profit, with the probability distribution of S being a deterministic function of time. Then, under exponential utility, the objective function is given by

$$ J(\eta, t) = -\frac{1}{\varrho}\exp\Big\{ -\varrho\eta - \frac{(r^B)^2}{2\sigma^2}(T - t) \Big\}\,\vartheta(t), \qquad (3.16) $$

with ϑ(t) being given by

$$ \vartheta(t) = \exp\Big( \int_t^T Y(r)\, dr \Big), \qquad (3.17) $$

where Y has the form

$$ Y(r) = \phi(r)\,\mathbb{E}[\exp(\varrho\Phi^*)] - \phi(r) + [1-\Gamma(r)]\phi(r)\,\varrho\,\mathbb{E}[S - \Phi^*] - \mu^B_r\,\varrho. $$

Then IB's optimal investment in securitized subprime RMLs is

$$ B^*_t = \frac{r^B}{\sigma^2 \varrho}. \qquad (3.18) $$

Furthermore, U^(1)(k_t) = 0 leads to the optimal rate of cash outflow for satisfying depositor obligations being equal to zero, so that

$$ k^*_t = 0. \qquad (3.19) $$

The optimal accrued premium is given by

$$ \Phi^*_t = \min\Big\{ \frac{1}{\varrho}\ln(1 - \Gamma(t)),\ \operatorname{ess\,sup} S(t) \Big\}. \qquad (3.20) $$
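Before turning to the proof, a quick numerical feel for (3.18): using the risk-aversion values ̺ and the rates r^B from Table 2 in Section 4., together with an assumed RMBS price volatility σ (the chapter does not report σ, so the value below is an assumption), the fixed optimal holding B∗ = r^B/(σ²̺) contracts sharply between the two periods.

    # A minimal numeric illustration of (3.18), B* = r_B / (sigma**2 * rho),
    # with r_B and rho taken from Table 2 (Section 4.); sigma = 0.20 is assumed.
    sigma = 0.20

    for period, r_B, rho in [("before SMC", 0.0570, 0.30),
                             ("during SMC", 0.0546, 0.90)]:
        B_star = r_B / (sigma**2 * rho)
        print(f"{period}: B* = {B_star:.2f}")
    # Higher risk aversion rho during the SMC cuts the optimal securitized
    # RML position by roughly a factor of three, in line with Subsection 4.2.3.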
Proof. For the objective function

$$ J(\eta, t) = -\frac{1}{\varrho}\,h(\eta, t)\,\vartheta(t), \qquad \text{where, for brevity,} \quad h(\eta, t) := \exp\Big\{ -\varrho\eta - \frac{(r^B)^2}{2\sigma^2}(T - t) \Big\}, $$

we have that

$$ J_t(\eta, t) = -\frac{(r^B)^2}{2\sigma^2\varrho}\,h(\eta, t)\,\vartheta(t) - \frac{1}{\varrho}\,h(\eta, t)\,\vartheta'(t), \qquad J_\eta(\eta, t) = h(\eta, t)\,\vartheta(t), \qquad J_{\eta\eta}(\eta, t) = -\varrho\,h(\eta, t)\,\vartheta(t). $$

If we consider the above partial derivatives as well as Theorem 3.2 and Proposition 3.3, we obtain the optimal control laws (3.18), (3.19) and (3.20). If we substitute these control laws, i.e., B^*_t, k^*_t and Φ^*_t, into (3.-1), then we obtain a solution of the BVP in (3.14), so that

$$ -\frac{1}{\varrho}\,h\,\vartheta'(t) - \frac{\phi(t)}{\varrho}\,h\,\vartheta(t)\,\mathbb{E}[\exp(\varrho\Phi^*)] + \frac{\phi(t)}{\varrho}\,h\,\vartheta(t) - [1-\Gamma(t)]\phi(t)\,h\,\vartheta(t)\,\mathbb{E}[S - \Phi^*] + \mu^B_t\,h\,\vartheta(t) = 0. $$

Multiplying through by −̺/h yields

$$ \vartheta'(t) + \phi(t)\vartheta(t)\,\mathbb{E}[\exp(\varrho\Phi^*)] - \phi(t)\vartheta(t) + [1-\Gamma(t)]\phi(t)\,\varrho\,\vartheta(t)\,\mathbb{E}[S - \Phi^*] - \mu^B_t\,\varrho\,\vartheta(t) = 0, $$

that is,

$$ \vartheta'(t) + \vartheta(t)\Big( \phi(t)\,\mathbb{E}[\exp(\varrho\Phi^*)] - \phi(t) + [1-\Gamma(t)]\phi(t)\,\varrho\,\mathbb{E}[S - \Phi^*] - \mu^B_t\,\varrho \Big) = 0. $$
The boundary condition corresponding to the above ordinary differential equation is given by

$$ J(\eta, T) = -\frac{1}{\varrho}\exp(-\varrho\eta)\,\vartheta(T) = -\frac{1}{\varrho}\exp(-\varrho\eta), \qquad \text{so that} \quad \vartheta(T) = 1. $$

If we set

$$ Y(t) = \phi(t)\,\mathbb{E}[\exp(\varrho\Phi^*)] - \phi(t) + [1-\Gamma(t)]\phi(t)\,\varrho\,\mathbb{E}[S - \Phi^*] - \mu^B_t\,\varrho, $$

then ϑ(t) is a solution to the BVP given by

$$ \vartheta'(t) + Y(t)\,\vartheta(t) = 0, \qquad \vartheta(T) = 1. $$

This verifies that ϑ(t) is indeed of the form (3.17). The objective function (3.16), together with the optimal control processes (3.18) and (3.19), solves the BVP (3.14). From the verification theorems, we can also conclude that (3.16) is the optimal value function. In addition, the uniqueness of the solution to (2.1) under exponential utility can be established via [15, Chapter V, Section 3].

3.2.5. Stochastic Optimal Credit Default Insurance with Power Utility

For a choice of power utility we have that
$$ U^{(1)}(k) = \frac{k^{\varrho}}{\varrho} \qquad \text{and} \qquad U^{(2)}(\eta) = \widetilde b\,\frac{\eta^{\varrho}}{\varrho}, \qquad (3.21) $$

for some ̺ < 1, ̺ ≠ 0, and b̃ ≥ 0. The parameter b̃ represents the weight that IB assigns to terminal profit relative to the rate of cash outflow for fulfilling depositor obligations, and can be viewed as a measure of IB's propensity to retain earnings. This leads to the following result.

Proposition 3.5. (Stochastic Optimal Credit Default Insurance with Power Utility): Suppose that the power utility functions are given as in (3.21) and assume that the RML securitization losses, S, are proportional to IB's profit under subprime RML securitization, so that
$$ S(\eta, t) = \varphi(t)\,\eta, $$

for some deterministic severity function, ϕ(t), where 0 ≤ ϕ(t) ≤ 1. Then, under power utility, the objective function may be represented by
$$ J(\eta, t) = \frac{\eta^{\varrho}}{\varrho}\,\zeta(t), \qquad (3.22) $$

where ζ(t) is given by

$$ \zeta(t) = \bigg[\, \widetilde b\,\exp\Big( -\int_t^T \frac{Q(s)}{1-\varrho}\,ds \Big) + \int_t^T \frac{\varrho}{1-\varrho}\exp\Big( -\int_t^s \frac{Q(u)}{1-\varrho}\,du \Big) ds \,\bigg]^{1-\varrho}, \qquad (3.23) $$

with Q having the form

$$ Q(t) = \phi(t)\big[ (1 - m(t))^{\varrho} - 1 \big] + \Delta - [1-\Gamma(t)]\phi(t)\,\varrho\,\big[ \varphi(t) - m(t) \big] + \mu^B_t\,\varrho\,\eta^{-1}, $$

where, for brevity,

$$ m(t) := \min\Big\{ 1 - (1-\Gamma(t))^{-\frac{1}{1-\varrho}},\ \varphi(t) \Big\} \qquad \text{and} \qquad \Delta = \frac{(r^B)^2\,\varrho}{2\sigma^2(1-\varrho)}. $$

In this case, IB's optimal rate of cash outflow for fulfilling depositor obligations is given by

$$ k^*_t = \zeta(t)^{-\frac{1}{1-\varrho}}\,\eta, \qquad (3.24) $$

and IB's optimal investment in securitized RMLs is

$$ B^*_t = \frac{r^B}{\sigma^2(1-\varrho)}\,\eta. \qquad (3.25) $$

Furthermore, under power utility, the optimal accrued premium is given by

$$ \Phi^*_t = m(t)\,\eta = \min\Big\{ 1 - (1-\Gamma(t))^{-\frac{1}{1-\varrho}},\ \varphi(t) \Big\}\,\eta. \qquad (3.26) $$
Proof. The proof of Proposition 3.5 is analogous to that of Proposition 3.4. In this regard, for the objective function

$$ \bar J(\eta, t) = \frac{\eta^{\varrho}}{\varrho}\,\zeta(t) $$

we have that

$$ \bar J_t(\eta, t) = \frac{\eta^{\varrho}}{\varrho}\,\zeta'(t), \qquad \bar J_\eta(\eta, t) = \eta^{\varrho-1}\zeta(t) \qquad \text{and} \qquad \bar J_{\eta\eta}(\eta, t) = (\varrho - 1)\,\eta^{\varrho-2}\zeta(t). $$
Using the partial derivatives above and Theorem 3.2 as well as Proposition 3.3, we can verify that the optimal control laws are given by (3.24), (3.25) and (3.26). If we substitute these control laws into (3.14), we obtain

$$ \begin{aligned} &\frac{\eta^{\varrho}}{\varrho}\,\zeta'(t) + \frac{\eta^{\varrho}}{\varrho}\,\zeta(t)^{-\frac{\varrho}{1-\varrho}} - \eta^{\varrho}\,\zeta(t)^{-\frac{\varrho}{1-\varrho}} + \frac{1}{2}\,\frac{(r^B)^2}{\sigma^2(1-\varrho)}\,\eta^{\varrho}\,\zeta(t) \\ &\quad + \phi(t)\,\frac{\eta^{\varrho}}{\varrho}\,\zeta(t)\big[ (1 - m(t))^{\varrho} - 1 \big] - [1-\Gamma(t)]\phi(t)\big[ \varphi(t) - m(t) \big]\eta^{\varrho}\,\zeta(t) + \mu^B_t\,\eta^{\varrho-1}\zeta(t) = 0. \end{aligned} $$

Multiplying through by ̺η^{−̺} yields

$$ \zeta'(t) + (1-\varrho)\,\zeta(t)^{-\frac{\varrho}{1-\varrho}} + \zeta(t)\Big[ \Delta + \phi(t)\big( (1 - m(t))^{\varrho} - 1 \big) - [1-\Gamma(t)]\phi(t)\,\varrho\,\big( \varphi(t) - m(t) \big) + \mu^B_t\,\varrho\,\eta^{-1} \Big] = 0, $$

where ∆ = (r^B)²̺/(2σ²(1−̺)) as before. In turn, the above expression implies that

$$ \zeta'(t) + Q(t)\,\zeta(t) = [\varrho - 1]\,\zeta(t)^{-\frac{\varrho}{1-\varrho}}. $$

This ordinary differential equation is of Bernoulli type, with boundary condition ζ(T) = b̃ ≥ 0. Furthermore, we set

$$ \widetilde\varrho := \varrho - 1 \qquad \text{and} \qquad \bar\varrho := \frac{\varrho}{1-\varrho}. $$
We restate our problem as

$$ \zeta'(t) + Q(t)\,\zeta(t) = \widetilde\varrho\,\zeta(t)^{-\bar\varrho}, \qquad \zeta(T) = \widetilde b \ge 0. \qquad (3.27) $$

In order to solve the above ordinary differential equation, we let z = ζ^{1+̺̄} and divide (3.27) by ζ^{−̺̄}, so that

$$ \zeta^{\bar\varrho}\,\frac{d\zeta}{dt} + Q(t)\,\zeta^{1+\bar\varrho} = \widetilde\varrho. $$

Since z = ζ^{1+̺̄}, we have that

$$ \zeta^{\bar\varrho}\,\frac{d\zeta}{dt} = \frac{1}{1+\bar\varrho}\,\frac{dz}{dt}, $$

and it follows that

$$ \frac{dz}{dt} + (1+\bar\varrho)\,Q(t)\,z = \widetilde\varrho\,(1+\bar\varrho). $$

From the above equation, we see that the integrating factor is given by

$$ \exp\Big( \int (1+\bar\varrho)\,Q(t)\,dt \Big). $$

Solving this linear ODE subject to the terminal condition z(T) = Ā = b̃ ≥ 0, carrying out the integration, recalling that 1 + ̺̄ = 1/(1 − ̺), and transforming back via ζ(t) = z(t)^{1/(1+̺̄)} = z(t)^{1−̺}, yields precisely the expression (3.23). As before, (3.22) and the corresponding optimal control laws (3.24) and (3.25) satisfy the BVP (3.14), with (3.22) being an optimal value function. A consideration of [15, Chapter V, Section 3] yields a unique solution to (2.1) under power utility.

3.2.6. Stochastic Optimal Credit Default Insurance with Logarithmic Utility

Suppose we let ̺ → 0 in Proposition 3.5, so that

$$ \widetilde U^{(1)}(k) = \ln k \qquad \text{and} \qquad \widetilde U^{(2)}(\eta) = \widetilde b\,\ln\eta. \qquad (3.28) $$
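A brief remark on this limit, which the passage from (3.21) to (3.28) leaves implicit: the power utilities must first be shifted by the constants 1/̺ and b̃/̺, respectively, which leaves every maximizer in the HJBE (3.-1) unchanged; the shifted utilities then converge to the logarithmic ones. In LaTeX shorthand:

    % The rho -> 0 limit of the affinely shifted power utility recovers ln k.
    % Subtracting the constant 1/rho does not affect any maximizer in (3.-1).
    \lim_{\varrho \to 0} \frac{k^{\varrho} - 1}{\varrho}
      = \lim_{\varrho \to 0} \frac{e^{\varrho \ln k} - 1}{\varrho}
      = \ln k,
    \qquad
    \lim_{\varrho \to 0} \widetilde b\,\frac{\eta^{\varrho} - 1}{\varrho}
      = \widetilde b \ln \eta .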
In this case, it is clear that the following special extension to Proposition 3.5 holds.

Proposition 3.6. (Stochastic Optimal Credit Default Insurance with Logarithmic Utility): Suppose that the logarithmic utility functions are given as in (3.28). In this case, the objective function has the form

$$ \widetilde J(\eta, t) = \widetilde b\,\ln\eta + P(t), \qquad (3.29) $$

where P(t) is defined by

$$ P(t) = -\int_t^T \widetilde C(s)\,ds \qquad (3.30) $$

and

$$ \widetilde C(t) = \ln\frac{\eta}{\widetilde b} + \frac{1}{2}\,\frac{(r^B)^2}{\sigma^2}\,\widetilde b + \phi(t)\,\widetilde b\,\ln\big( 1 - \widetilde m(t) \big) - [1-\Gamma(t)]\phi(t)\,\widetilde b\,\big[ \varphi(t) - \widetilde m(t) \big] + \mu^B_t\,\widetilde b\,\eta^{-1} - 1, $$

where, for brevity, m̃(t) := min{ −Γ(t)/[1 − Γ(t)], ϕ(t) }. In this case, IB's optimal rate of cash outflow for fulfilling depositor obligations is given by

$$ k^*_t = \frac{\eta}{\widetilde b} \qquad (3.31) $$

and its optimal investment in securitized RMLs is
$$ B^*_t = \eta\,\frac{r^B}{\sigma^2}. \qquad (3.32) $$

Also, in this case, IB's optimal accrued premium is given by

$$ \Phi^*_t = \min\Big\{ \frac{-\Gamma(t)}{1-\Gamma(t)},\ \varphi(t) \Big\}\,\eta. \qquad (3.33) $$
Proof. As before, Theorem 3.2 is the main tool for verifying (3.31) and (3.32) as optimal control laws. In this regard, we note that the objective function

$$ \widetilde J(\eta, t) = \widetilde b\,\ln\eta + P(t) $$

has the partial derivatives

$$ \widetilde J_t = P'(t), \qquad \widetilde J_\eta = \frac{\widetilde b}{\eta} \qquad \text{and} \qquad \widetilde J_{\eta\eta} = -\frac{\widetilde b}{\eta^2}. $$

Substituting (3.31), (3.32) and (3.33) into (3.14), we obtain

$$ P'(t) + \ln\frac{\eta}{\widetilde b} + \frac{1}{2}\,\frac{(r^B)^2}{\sigma^2}\,\widetilde b + \phi(t)\,\widetilde b\,\ln\big( 1 - \widetilde m(t) \big) - 1 - [1-\Gamma(t)]\phi(t)\,\widetilde b\,\big[ \varphi(t) - \widetilde m(t) \big] + \mu^B_t\,\widetilde b\,\eta^{-1} = 0. $$

If we let C̃(t) be as defined in the statement of the proposition, then it follows that P'(t) + C̃(t) = 0. Since P(T) = 0, the solution of this ODE is given by (3.30). From the above, we conclude that substituting the optimal objective function (3.29) as well as (3.31) and (3.32) into the right-hand side of (3.14) leads to the desired result. Finally, [15, Chapter V, Section 3] yields the unique solution to (2.1) under logarithmic utility.
4. Optimal Credit Default Insurance and the Subprime Mortgage Crisis
In this section, we briefly discuss some issues emanating from the subprime banking models constructed in Section 2. and the optimal credit default insurance problem (see Section 3.), as well as their relationship with the SMC. As is well known, this ongoing crisis is characterized by shrinking liquidity in global credit markets and the opaqueness of risks associated with structured financial products. A downturn in the U.S. housing market, risky practices by IBs and mortgagors, and excessive individual and corporate debt levels have affected the world economy adversely on a number of levels. As a result, the SMC has exposed pervasive weaknesses in the global banking system and regulatory framework.
4.1. Subprime Banking Models and Their Connections with the SMC
In this subsection, we provide a brief analysis of subprime securitization risk, credit default insurance and IB's profit under subprime RML securitization, as well as their connections with the SMC. Furthermore, we furnish a numerical example to illustrate the influence of some of the key parameters.

4.1.1. Subprime Securitization Risks and the SMC

RML securitization largely involves credit, tranching, counterparty and liquidity risks. As was evident from the motivating example in Section 1., tranching makes RMBS deals very complex. Besides the difficulties related to estimating the RML reference portfolio's loss distribution, tranching requires comprehensive, deal-specific documentation to ensure that the desired characteristics, such as claim seniority, will be carried through (see, for instance, Figure 4). Moreover, complexity may be further exacerbated by regret-averse asset managers and other agents, whose incentives to act in the interest of some investor classes at the expense of others may need to be curtailed. As complexity increases, less sophisticated IBs have more difficulty understanding RMBS tranching and thus a limited capacity to make prudent investment decisions (see the role of IB in Figure 2). For instance, tranches from the same deal may have different risk, reward and/or maturity characteristics. Modeling the performance of tranched transactions on historical performance may have led to the over-rating (by CRAs) and the underestimation of risks (by IBs) of RMBSs with high-yield RML reference portfolios. All of these factors contributed to the SMC.

In a CDS contract, the protection seller assumes the credit risk that IB does not wish to bear in exchange for periodic premiums, and is obligated to pay only if a negative credit event occurs. As a consequence, in addition to credit risk, IB is also exposed to counterparty risk. In our paper, counterparty risk refers to the risk that a swap protection seller will not be able to make a payment to IB when a credit event occurs. For instance, the credit default insurance term, C_u(S(Π_u, u)), in equation (2.1) for IB's profit dynamics may be compromised. This could happen because CDSs are traded over-the-counter and unregulated. Also, the contracts often get heavily traded, so that defining the roles of all security transaction agents becomes difficult. There is the possibility that the protection seller may lack the financial muscle to honor the CDS contract's provisions, making such contracts difficult to value. The leverage involved in many CDS contracts, and the possibility that a widespread market downturn could cause massive RML reference portfolio defaults and challenge the ability of protection sellers to meet their obligations, add to this uncertainty. During the SMC, as IB charter values deteriorated on the back of subprime RML losses, the likelihood increased that swap protection sellers would have to compensate their counterparties.
Also, counterparty risk is important in IB-depositor relationships, where IB is under pressure to fulfill obligations towards depositors (see, for instance, the role of k in equation (2.1)). In our case, liquidity refers to the degree to which RMBSs can be bought or sold in the secondary market without affecting their price. Liquidity is characterized by a high level of trading in RMBSs in this market, with complexity reducing liquidity. In circumstances where RML default rates increase, RMBSs that mixed and matched credit risk types in an intricate web of transactions become "toxic." Toxicity spreads across the banking sector, the wholesale markets, the retail markets, insurance companies, the asset management industry, and into households. In particular, as liquidity dries up in the secondary RML market, the situation deteriorates further when RMBS bond holders experience difficulty finding other IBs to trade with. Those holding RMBSs, who may have raised deposits to do so, may face margin calls that force them to trade illiquid RMLs and their securities at a discount (see, for instance, the role of µ^B in equation (2.1)). During the SMC, liquidity was further restricted when financially distressed IBs hoarded cash as a buffer against rampant subprime RML reference portfolio losses.

4.1.2. Credit Default Insurance and the SMC

The volume of CDSs outstanding increased 100-fold from 1998 to 2008, with estimates of the debt covered by CDS contracts, as of November 2008, ranging from $33 to $47 trillion. During the SMC, IBs were not sure whether swap protection sellers would be able to pay to cover subprime RML defaults (see, for instance, [17]). For instance, when the investment bank Lehman Brothers went bankrupt on Monday, 15 September 2008, there was much uncertainty as to which financial firms would be required to honor the CDS contracts on its $600 billion of bonds outstanding (see, for instance, [17]). Merrill Lynch's large losses in 2008 were attributed in part to the drop in value of its unhedged portfolio of collateralized debt obligations (CDOs) after the American International Group (AIG) ceased offering CDSs on Merrill's CDOs. The loss of confidence of trading partners in Merrill Lynch's solvency, and in its ability to refinance its short-term debt, led to its acquisition by Bank of America.

4.1.3. IB's Profit under Subprime RML Securitization and the SMC

In Subsection 2.2., the dynamic model of IB's profit under subprime RML securitization, given by (2.0), involves the mass of type-A RMBSs, Γ, and a CDS premium, Θ(C). In this regard, our model shows that when Γ is high this premium is low, because a high Γ means that a low probability of default (PD) is associated with the securitized subprime RMLs. In our profit model, the size of Γ is indicative of credit risk, while Π, k and µ^B are indicators of tranching, counterparty and liquidity risk, respectively. Before the SMC, low credit risk created liquidity in the credit market, since IBs were more willing to engage in RML securitization activities. This liquidity caused some financial institutions to lend more readily to subprime mortgagors because of competition in the credit market. During the SMC, depreciation in house prices contributed to a decline in the profits (related to IB's profit model in (2.0)) of many U.S. banks. IBs that retained credit risk were the first to be affected, as mortgagors became unable or unwilling to make payments and the value
of RML reference portfolios declined. In this regard, profits at the 8 533 U.S. banks insured by the Federal Deposit Insurance Corporation (FDIC) fell from $35.2 billion to $646 million (effectively by 98 %) during Quarter 4 of 2007 when compared with the same quarter of the previous year, largely due to escalating RML losses and provisions for such losses. This decline in profits contributed to the worst bank and thrift quarterly performance since 1990. In 2007, these banks earned approximately $100 billion, which represented a decline of 31 % from the record profit of $145 billion in 2006. Profits then decreased from $35.6 billion to $19.3 billion during the first quarter of 2008 versus the same quarter of the previous year, a decline of 46 %, as intimated by the FDIC quarterly reports [4] and [5].
4.1.4. Simulation

In this subsection, for an anonymous IB, we compute the inputs of our model before (i.e., 2001-2006) and during the SMC (i.e., 2007-2009) and explore changes in optimal credit default insurance between the two periods. In order to accomplish this, data was sourced from the U.S. Federal Reserve Bank (see [16]) and the paper [8].

Table 1. Numerical Example Variables; Source: U.S. Federal Reserve Bank and [8]

    Date    Type-A RMBS Mass    Rate of Return    Index Rate    Margin
    2001    0.86                6.19 %            3.40 %        3.51 %
    2002    0.88                4.67 %            2.17 %        2.50 %
    2003    0.89                4.12 %            2.11 %        2.01 %
    2004    0.90                4.34 %            2.34 %        2.00 %
    2005    0.91                6.19 %            4.19 %        2.00 %
    2006    0.91                7.96 %            5.96 %        2.00 %
    2007    0.50                8.05 %            5.86 %        2.19 %
    2008    0.20                5.09 %            2.39 %        2.70 %
    2009    0.15                3.25 %            0.50 %        2.75 %
The following parameter choices were used for the periods before and during the SMC.

Table 2. Parameter Choices

    Period        η        r^B      S        C(S)      ̺     ϕ       b̃     Γ(t)    φ(t)   µ^B
    Before SMC    35.2 bn  5.70 %   13.2 bn  5.28 bn   0.30  0.2074  0.50  0.2833  0.20   0.80
    During SMC    19.3 bn  5.46 %   7.3 bn   2.92 bn   0.90  0.6839  0.50  0.8917  0.70   0.30
By inputting the data above into our stochastic model (2.1), it follows that the dynamics of IB's profit before and during the SMC are as given in Figures 5 and 6, respectively.
[Figure 5. Dynamics of IB's Profit Before the SMC.]

[Figure 6. Dynamics of IB's Profit During the SMC.]
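Figures 5 and 6 can be reproduced in outline from Table 2 by stepping (2.1) forward annually. The sketch below is a simplified, deterministic-drift version (the diffusion term is dropped and the jump term is replaced by its mean); the investment B and outflow rate k are not reported in the chapter, so the values used here are assumptions chosen only to display the qualitative pre- and post-2007 profiles.

    # A simplified, deterministic sketch of the profit paths in Figures 5-6:
    # expected drift of (2.1), with the jump term replaced by its mean phi*(S - C(S)).
    # Table 2 supplies eta, r_B, S, C(S), Gamma, phi and mu_B; the investment B
    # and outflow rate k are NOT reported in the chapter and are assumptions here.
    def profit_path(eta, years, r_B, S, C, gamma, phi, mu_B, B, k):
        path = [eta]
        for _ in range(years):
            premium = (1.0 - gamma) * phi * C          # CDS premium leg, as in (2.-1)
            drift = r_B * B + mu_B - k - premium       # du-terms of (2.1)
            jump = phi * (S - C)                       # mean securitization loss drain
            path.append(path[-1] + drift - jump)
        return path

    before = profit_path(eta=35.2, years=5, r_B=0.0570, S=13.2, C=5.28,
                         gamma=0.2833, phi=0.20, mu_B=0.80, B=40.0, k=0.50)
    during = profit_path(eta=19.3, years=2, r_B=0.0546, S=7.3, C=2.92,
                         gamma=0.8917, phi=0.70, mu_B=0.30, B=40.0, k=0.50)
    print([round(x, 1) for x in before])   # slowly rising pre-SMC profile
    print([round(x, 1) for x in during])   # declining profile during the SMC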
4.2. Stochastic Optimal Credit Default Insurance and Its Connections with the SMC

This subsection discusses connections between the optimal credit default insurance problem and the SMC. Here, we include an analysis of the results from Propositions 3.3, 3.4, 3.5 and 3.6.
4.2.1. Statement and Solution of the Optimal Credit Default Insurance Problem

The objective function in (3.-3) is additively separable in U^(1) and U^(2), which is not necessarily appropriate for all IBs. In our problem, the discount rate, δ^r, used to discount these utility functions is not the market discount rate. The stochastic credit default insurance problem determines the optimal rate of cash outflow for satisfying depositor obligations, k∗, IB's optimal investment in securitized RMLs, B∗, and the optimal credit default insurance, Φ∗. In this regard, Theorem 3.2 provides the general solution to this optimization problem (see Problem 3.1). In the sequel, connections between specific solutions of the optimal credit default insurance problem and the SMC are forged.

4.2.2. Optimal Credit Default Swap Contracts and the SMC

From Proposition 3.3, we deduce that the optimal CDS contract is completely characterized by the optimal accrued premium, Φ∗. In this regard, Φ∗ is attained when the marginal cost of decreasing or increasing Φ equals the marginal benefit of the CDS contract. Moreover, if Γ = 0 then the optimal accrued premium should be zero, i.e., Φ∗ = 0. In this case, where IB holds no type-A RMBSs (indicative of a high PD for reference portfolio RMLs), it may be optimal for IB to purchase a CDS contract that protects against all such losses. However, full protection may also introduce high costs in the event that the swap protection seller fails to honor its obligations. In particular, during the SMC, many IBs that purchased CDS contracts promising to cover all losses regretted this decision when the swap protection sellers were unable to make payments after a credit event. Notwithstanding this, certain IBs that bought CDS contracts that only pay when losses exceed a level set by the swap protection seller found swap protection beneficial. In particular, they did not experience the same volume of losses as those who purchased full protection (see, for instance, [2]).

4.2.3. Optimal Credit Default Insurance with Exponential Utilities and the SMC

In Proposition 3.4, IB's optimal investment in securitized subprime RMLs, B∗, the optimal rate of cash outflow for fulfilling depositor obligations, k∗, and the optimal accrued premium, Φ∗, given by (3.18), (3.19) and (3.20), respectively, are not random variables, since they do not depend on IB's profit, Π. Furthermore, IB's optimal investment in securitized RMLs (3.18) from Proposition 3.4 is fixed. Moreover, the expression (3.18) also involves the risk aversion measure, ̺. Ceteris paribus, before the SMC, if ̺ was very high, IB was unlikely to engage in extensive RML securitization. Moreover, in the aforementioned result, k∗ = 0. This may be indicative of the fact that a troubled IB may no longer be able to fulfill depositor obligations (compare with the formula (3.17)), thus exacerbating counterparty risk. This does not mean, however, that deposits will necessarily dry up, since µ^B − k∗ = µ^B, where µ^B is the rate of cash inflow from depositors to IB for investments in securitized subprime RMLs. Another factor that could deter IB from investing in lower-rated RMBSs is the cost of credit default insurance. In this case, we note from (3.20) that Φ∗ depends on ̺, Γ and S, with Γ having a major effect on the cost of CDS contracts.
The relevance for the SMC of the analysis in the previous paragraph may be identified as follows. It is clear that during the SMC the ability of IBs to obtain funds for subprime RML securitization investment (compare with µ^B in equation (2.1)) was dramatically curtailed. Also, spreads narrowed, as depositors demanded higher returns to lend money to highly leveraged IBs (compare with k in equation (2.1)). Furthermore, there is substantial evidence to suggest that, with increasing delinquencies and foreclosures during 2007-2008, the credit ratings of subprime RMBSs declined dramatically. Depositors became concerned and, in some cases, demanded refunds, resulting in margin calls, that is, an immediate need to sell or liquidate these securities at fire-sale prices. Consequently, highly leveraged IBs suffered huge losses, went bankrupt or merged with other institutions. Because RMBSs were considered toxic due to uncertainty in the housing market and could not be sold readily (i.e., they were illiquid), their values plummeted. In addition, the market value of houses was penalized by the inability to sell the RMBSs; sometimes it was less than the value that the actual cash inflow would merit.

4.2.4. Optimal Credit Default Insurance with Power Utilities and the SMC

In Proposition 3.5, the optimal controls in (3.24), (3.25) and (3.26) are expressed as linear functions of IB's optimal profit under subprime RML securitization, Π∗. In this case, the optimal rate of cash outflow for satisfying depositor obligations, k∗, depends on the frequency and severity parameters, φ and ϕ, of the aggregate securitized RML losses, S, respectively. The linearity in profit holds because the power utility function exhibits constant relative risk aversion, which means that

$$ -\frac{\eta\, D^2 U^{(2)}(\eta)}{D U^{(2)}(\eta)} = 1 - \varrho. $$
Here, we see that if relative risk aversion increases, the amount invested in RMBSs decreases, which may be indicative of the mass of type-A RMBSs, Γ, being low at that time. The expression for ζ in (3.23) reveals that not only the objective function, J, but also the optimal rate of cash outflow for satisfying depositor obligations, k∗, is affected by the horizon T. Moreover, IB's optimal investment in securitized subprime RMLs, B∗, is affected by the time horizon T via the optimal rate of cash outflow, k∗, which impacts IB's profit. In addition, the expression for ζ in (3.23) shows that k∗ depends on the frequency and severity parameters, φ and ϕ, of the RMBS losses, S, respectively. Furthermore, IB's optimal investment, B∗, is affected by RML losses in a way that indirectly involves k∗. From Proposition 3.5, it is clear that the amount invested in RMBSs, B, depends on the profit, Π. RML reference portfolio defaults will cause a decrease in IB's profit under subprime RML securitization, which will later affect the rate of cash outflow for fulfilling depositor obligations, k. In particular, this may cause a liquidity problem in the secondary RML market, since µ^B may decrease as a result of this effect on k. If profits, Π, decrease, it is natural to expect that some IBs will fail, as happened in the SMC (see, for instance, [10]). For instance, both the failure of the investment bank Lehman Brothers and the acquisitions in 2008 of Merrill Lynch and Bear Stearns by Bank of America and JP Morgan, respectively, were preceded by decreases in profits from securitization.
A similar trend was discerned for the U.S. mortgage companies Fannie Mae and Freddie Mac, which had to be bailed out by the U.S. government at the beginning of September 2008.

4.2.5. Optimal Credit Default Insurance with Logarithmic Utility and the SMC

As expected, the optimal controls (3.31), (3.32) and (3.33) in Proposition 3.6 are consistent with those in Proposition 3.5, corresponding to the limit ̺ → 0. In particular, these optimal controls may be expressed as linear functions of IB's optimal profit under subprime RML securitization, Π∗. By contrast with Proposition 3.5, the optimal rate of cash outflow for fulfilling depositor obligations, k∗, does not depend on the frequency and severity parameters, φ and ϕ, respectively. If IB's inclination towards retaining earnings, b̃ (see (3.31)), is high, the optimal rate of cash outflow, k∗, decreases. Furthermore, if the RMBS price process is very volatile, IB's optimal investment in securitized RMLs, B∗, declines. Indeed, high volatility in the price of RMBSs is indicative of high credit risk, which deters investment in such structured financial products.

The relevance of the above discussion for the SMC is as follows. Our contention is that if the inclination of IB towards retaining earnings, b̃, is very high, the rate of cash outflow for satisfying depositor obligations, k, decreases. Before the SMC, k was very high, as depositors demanded higher returns; this meant that not much of IB's profit was retained. On the other hand, during the SMC, there was a tendency for IBs to hoard cash, so that the value of b̃ in (3.31) increased.
5. Conclusions and Future Directions
In this paper, we have built a stochastic dynamic model for IB's profit under subprime RML securitization. This model enabled us to set up a stochastic optimal credit default insurance problem (see Problem 3.1) for a fixed term, [t, T], which optimizes the rate of cash outflow for fulfilling depositor obligations, k, the value of IB's investment in securitized subprime RMLs, B, and the CDS contract accrued premium, Φ. Explicit closed-form solutions to this problem were determined by choosing exponential, power and logarithmic utility functions. We also highlighted some of the risk issues involved and related key outcomes to the SMC.

In future studies, an open problem is to solve the stochastic optimal credit default insurance problem considered in this paper in a Lévy process framework (see, for instance, Protter in [15, Chapter I, Section 4]). This will involve expected subprime securitized RML losses that are modeled via exponential Lévy processes. Such processes have an advantage over more traditional modeling tools such as Brownian motion in that they describe the noncontinuous evolution of the value of economic and financial items more accurately. For instance, because the behavior of RMLs, profit, capital and capital adequacy ratios (CARs) is characterized by jumps, the representation of the dynamics of these items by means of Lévy processes is more realistic (see, for instance, [11]). Another problem of interest is the simultaneous optimization of IB's profit from securitized and unsecuritized subprime RMLs. In this case, we would like to ascertain what investment decision must be made about the proportions invested in securitized and unsecuritized RMLs in order to generate optimal profit (compare with [14]).
References

[1] Cox JC., Huang CF.: A variational problem arising in financial economics. Journal of Mathematical Economics, 20, 183-204 (1991).
[2] Demyanyk Y., Van Hemert O.: Understanding the subprime mortgage crisis. Available at SSRN: http://ssrn.com/abstract=1020396 (19 August 2008).
[3] Fabozzi FJ., Cheng X., Chen R-R.: Exploring the components of credit risk in credit default swaps. Finance Research Letters, 4, 10-18 (2007).
[4] FDIC Quarterly Banking Profile (Pre-Adjustment): Fourth Quarter 2007; 29(1), 2008. Available: http://www2.fdic.gov/qbp/qbpSelect.asp?menuItem=QBP.
[5] FDIC Quarterly Banking Profile: First Quarter 2008; 29(2), 2008. Available: http://www2.fdic.gov/qbp/qbpSelect.asp?menuItem=QBP.
[6] Fouche CH., Mukuddem-Petersen J., Petersen MA., Senosi MC.: Bank valuation and its connections with the subprime mortgage crisis and Basel II Capital Accord. Discrete Dynamics in Nature and Society, 2008, Article ID 740845, 44 pages (2008).
[7] Hull J., Predescu M., White A.: The relationship between credit default swap spreads, bond yields and credit rating announcements. Journal of Banking and Finance, 28, 2789-2811 (2004).
[8] Jaffee DM.: The U.S. subprime mortgage crisis: Issues raised and lessons learned. Commission on Growth and Development, Working Paper No. 28 (2008).
[9] Merton RC.: Continuous-Time Finance, 2nd Edition. Cambridge, Massachusetts: Blackwell Publishers (1992).
[10] Mukuddem-Petersen J., Petersen MA., Schoeman IM., Tau BA.: Dynamic modelling of bank profits. Applied Economics Letters, 1, 1-5 (2008).
[11] Petersen MA., Mulaudzi MP., Schoeman IM., Mukuddem-Petersen J.: A note on the subprime mortgage crisis: Dynamic modeling of bank leverage profit under loan securitization. Applied Economics Letters, DOI:10.1080/13504850903035907.
[12] Petersen MA., Rajan RG.: Does distance still matter? The information revolution in small business lending. Journal of Finance, 57(6), 2533-2570 (2002).
[13] Petersen MA., Senosi MC., Mukuddem-Petersen J.: Subprime Banking Models. New York: Nova, ISBN: 978-1-61728-694-0 (2010).
[14] Petersen MA., Senosi MC., Mukuddem-Petersen J., Mulaudzi MP., Schoeman IM.: Did bank capital regulation exacerbate the subprime mortgage crisis? Discrete Dynamics in Nature and Society, 2009, Article ID 742968, 37 pages (2009).
[15] Protter P.: Stochastic Integration and Differential Equations (2nd ed.). Berlin: Springer (2004).
[16] U.S. Federal Reserve Bank: Available: http://www.federalreserve.gov/releases/h15/data.htm [Wednesday, 19 May 2010].
[17] Wikipedia: The Free Encyclopedia. Subprime Mortgage Crisis. Available: http://en.wikipedia.org/wiki/Subprime_mortgage_crisis [Wednesday, 5 June 2010].
INDEX A
B
absorption, 544 abstraction, 142 accounting, 428, 459, 471, 475, 477, 523, 552, 588 accuracy, xii, 24, 28, 61, 68, 70, 72, 77, 139, 143, 195, 232, 407, 408, 420, 422, 436, 480, 482 actual output, 50 actuarial methods, 68 adaptation, 4, 14, 390 adjustment, 471, 598 administrators, 48, 62 advantages, x, 44, 140, 211, 213, 214, 426, 510 aerospace, 212 Africa, 64 agencies, 426, 427, 432, 476, 492, 547, 553 aggregation, 76 allocative inefficiency, 559 amortization, 427, 466, 472 analytical framework, 159 anatomy, 428 annealing, 2, 38 appraisals, 429 Arabidopsis thaliana, 37 arbitrage, 472, 474 architecture, 427, 428 Argentina, 329, 375 arithmetic, 51, 52, 53, 56, 58, 142, 145 armed forces, 87 Artificial Neural Networks, 3 assessment, 29, 44, 45, 46 assets, xiii, 105, 426, 430, 431, 439, 442, 450, 458, 472, 476, 477, 479, 491, 492, 493, 495, 503, 509, 510, 513, 519, 520, 521, 522, 523, 524, 525, 530, 549, 552, 554, 560, 561, 562, 563, 591 asymmetric information, 426 asymmetry, xiii, 425, 426, 429, 438, 455, 468, 477, 481, 489 attribution, 478 aversion, 553 axiomatization, 155, 175, 180
BAC, 27 backlash, 552 balance sheet, 430, 432, 434, 439, 443, 445, 450, 457, 462, 471, 475, 476, 477, 479, 520, 554, 588 bank failure, 548 bank profits, 562, 620 bankers, 472 banking industry, 510, 549, 552 banking sector, 523, 548, 561, 614 bankruptcy, 470, 494, 499, 500, 502, 506, 555, 560, 591, 618 banks, xiii, 426, 431, 432, 434, 436, 471, 474, 475, 476, 477, 481, 491, 493, 494, 495, 496, 497, 498, 499, 500, 502, 507, 508, 509, 510, 511, 512, 521, 522, 523, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 558, 560, 562, 588, 596, 614, 615 bargaining, 475 barriers, 62 base rate, 513, 523, 543, 546 basis points, 475, 545 BBB, 446, 459, 464, 466, 467, 468, 469, 470, 474, 589, 594, 597 Beijing, 586 Belgium, 1, 67 benchmarks, 5, 9, 13, 14, 18, 58, 155 birth rate, 48 births, 68 BMI, 394 bond market, 434 bondholders, 434, 436, 560 bonds, 92, 433, 434, 437, 438, 439, 445, 446, 469, 472, 474, 475, 509, 510, 520, 521, 524, 546, 560, 593, 596, 614 boundary conditions, 188, 205, 330, 334, 340, 341, 348, 349, 358 boundary value problem, 327, 604 bounds, 119, 260, 268, 275, 357, 409, 578, 581, 585 branching, 31, 33, 92, 98 breakdown, 428, 548
Index
624
Brownian motion, 504, 505, 506, 521, 526, 528, 536, 538, 596, 619 buildings, 431 Bush, President, 500 business cycle, 512 buyer, 558, 592, 593, 597 by-products, 50
C calculus, 385, 492 call centers, 571, 575 campaigns, 62 capital accumulation, 90 capital employed, 458 capital gains, 462, 469 capital markets, 436, 445, 512 case study, 87 cash flow, 430, 432, 434, 439, 442, 443, 449, 450, 456, 457, 461, 465, 475, 480, 497, 500, 554 cation, 88 CCR, 45 Census, 79, 80 central bank, 494, 499, 513, 523, 546, 550, 552 certificate, 574 China, 546, 577, 584 chromosome, 2, 3, 13, 16, 18, 21, 24, 27, 28, 29, 30, 31, 33, 36, 39 circulation, 567 civil service, 85, 86 class, vii, ix, xii, xiii, 1, 83, 86, 89, 90, 91, 105, 112, 116, 117, 118, 123, 125, 130, 136, 142, 173, 177, 184, 213, 214, 247, 257, 263, 287, 288, 330, 374, 379, 380, 383, 387, 388, 389, 391, 404, 423, 464, 480, 492, 493, 565, 566, 568, 575, 599 climate, 100 climate change, 100 clone, 27, 28, 29, 30, 31, 32, 33, 38 cloning, 27, 39 closure, 380, 381, 382, 384 clustering, 3, 29, 30, 31, 37 clusters, 29, 30, 31, 32 CNN, 410 coding, 145 codominant, 18, 22 collateral, 434, 439, 463, 466, 469, 472, 492, 499, 501, 533, 551, 559, 560, 596 commercial bank, 479, 550 commodity, 156, 159, 164, 168, 180, 181, 183 community, 84 compatibility, 154, 155, 158, 164, 344 compensation, xii, 425, 435, 481 competition, 62, 496, 497, 498, 552, 553, 614 competitive advantage, viii, 67, 69 competitors, 510
complement, x, 82, 211, 221, 394, 463 complementarity, 404 complexity, 2, 27, 38, 141, 142, 279, 474, 502, 591, 613, 614 compliance, 187, 196 complications, vii, 1, 2, 3, 18, 22, 34, 522, 595 composites, 125 composition, 48, 77, 472 compressibility, 186, 187 computation, 13, 18, 25, 118, 123, 131, 140, 196, 293, 335, 408, 414, 420, 423, 497 computer simulation, 70 computer systems, 572 computing, 140, 150, 282, 318, 408, 552, 558 concurrency, 142 conditioning, 586 conference, 64 configuration, 51, 56, 157, 158, 161, 162, 201, 280, 281 configurations, ix, 5, 8, 9, 11, 89, 90, 112, 140, 199, 270 conflict, 23, 423 conflict of interest, 423 connectivity, 186 consensus, vii, 1, 2, 3, 22, 23, 24, 27, 29, 34, 37, 83 constant rate, 507 consumption, 469, 493, 550 contour, 177, 186 contradiction, 166, 297, 363 convention, 378, 380, 384 convergence, xi, xii, 82, 125, 128, 136, 144, 145, 191, 197, 292, 312, 330, 333, 337, 338, 343, 355, 356, 367, 369, 385, 394, 397, 407, 408, 414, 417, 419, 420, 422, 561 coordination, 118 correlation, 428, 459, 502 correlations, 447 cost saving, 61 covering, 30, 549 CPU, 9, 10, 11, 12, 13, 14, 15, 16, 19, 20, 21, 24, 27, 35, 131 credit history, 492, 559 credit market, 543, 546, 550, 558, 588, 594, 613, 614 credit rating, xiii, 426, 427, 428, 429, 432, 439, 447, 452, 453, 454, 455, 472, 477, 482, 487, 492, 501, 545, 547, 587, 592, 595, 596, 597, 618, 620 creditors, 553, 560 cryptography, 140 culture, 429 cycles, 131, 142, 145, 545 Czech Republic, 405
D damping, 186
Index data set, vii, 1, 2, 15, 21, 22, 24, 34, 59 database, 30, 455 datasets, 3, 9, 23, 68 deaths, 68 debtors, 436, 471, 481, 500 debts, 525 decay, 515, 516, 518 decentralization, 42 decision makers, 61, 408, 410 decomposition, xi, 117, 123, 125, 130, 136, 137, 291, 292, 294, 295, 327, 504, 505, 525, 582 deduction, 203 deficiency, 559 deflation, 498 deformation, 185, 186, 187, 188, 196, 197, 200, 201, 205, 207 degradation, 387, 388 delinquency, 427, 471, 480, 559, 593 demand curve, 99 demography, 68 dependent variable, 482, 541 deposits, xiii, 429, 430, 431, 434, 435, 436, 438, 442, 443, 444, 447, 450, 451, 456, 457, 459, 460, 463, 471, 475, 479, 482, 483, 484, 491, 493, 495, 509, 510, 511, 512, 513, 522, 523, 524, 525, 532, 546, 550, 553, 614, 617 depreciation, 431, 506, 614 derivatives, 162, 230, 231, 232, 233, 334, 437, 454, 477, 483, 487, 536, 550, 557, 589, 591, 602, 606, 609, 612 destination, 72 detection, 29, 32, 34 devaluation, 507 developing countries, 546 deviation, 21, 32, 252, 429 diagnosis, 573 differential equations, 423 diffusion, 195, 196, 502, 505, 509, 521, 538, 555 diffusion process, 502, 505, 509, 538 disadvantages, 493 disclosure, 489, 591 discontinuity, 167 discounted cash flow, 497 discretization, 189, 201, 330, 335, 337, 338, 341, 342, 365, 366, 369, 374 dispersion, 330, 499 displacement, 187, 188, 189, 191, 192, 199, 200, 201, 204, 205, 206, 207, 209, 217, 219 distribution function, 504, 505, 596 disturbances, 4, 15 diversification, 469, 470, 496 diversity, 15, 21 DNA, 2, 3, 13, 27, 28, 32, 33, 38 DNA sequencing, 27 doctors, 45 dominance, 22
625
drugs, 566 duality, 175, 385 dynamic control, 401 dynamic systems, 387, 407, 408, 423 dynamical systems, 405, 406
E earnings, 105, 431, 441, 479, 549, 551, 553, 558, 607, 619 economic activity, 92 economic efficiency, viii, 41 economic evaluation, 61 economic growth, 500, 543 economic problem, 550 economic resources, 42, 43, 45, 55, 56, 57, 58, 59, 60, 61 economy, 19, 99, 159, 164, 168, 183, 471, 482, 497, 500, 543, 544, 546, 548, 559, 562, 613 editors, 140, 141 Efficiency, 43, 45, 47, 49, 51, 53, 55, 56, 57, 59, 61, 63, 64, 65, 224 eigenvalues, 260, 262, 274, 277, 280, 282, 284 elasticity of supply, 559 elongation, 29, 33 Emergency Economic Stabilization Act, 500 employment, 34, 44, 186, 492, 559 employment status, 492, 559 engineering, 19, 327 England, 65, 546 equality, 215, 229, 230, 357, 413, 482, 518, 537, 601 equilibrium, 89, 95, 96, 108, 115, 183, 238, 254, 384, 416 equipment, 62, 200, 431, 567 equities, 92, 509 equity, xiii, 42, 428, 429, 431, 439, 441, 442, 446, 448, 450, 458, 459, 463, 469, 470, 471, 472, 476, 477, 478, 480, 491, 493, 496, 497, 503, 506, 511, 513, 522, 524, 525, 526, 530, 546, 552, 559, 560, 589 Euclidean space, 379, 383 European Central Bank, 546 European Commission, 115 evolutionary games, x, 237, 238, 239, 245 examinations, 43, 46, 54, 58 execution, 98, 145, 147, 148 expenditures, 42, 43, 45, 47, 48, 49, 55, 56, 57, 58, 59, 60, 61 experiences, 515, 525, 589 expertise, 48, 62 exploitation, 60 exploration, 574 exposure, 426, 477, 502, 589 extraction, 90, 99, 109
Index
626
F fairness, 153, 154, 155, 158, 159, 164 fault detection, 405 federal funds, 558 Federal Reserve Board, 489 feedback, vii, xii, 89, 93, 94, 96, 98, 106, 108, 116, 240, 247, 253, 254, 255, 279, 280, 287, 330, 331, 334, 339, 341, 346, 347, 359, 360, 364, 387, 388, 389, 394, 395, 396, 400, 401, 402, 403, 404, 405, 406, 410 FFT, 393 filtration, 504, 598 financial crisis, 476, 489, 490, 495, 499, 543, 546 financial instability, 496 financial market, 472, 500, 512, 547 financial markets, 472, 500, 512, 547 financial resources, viii, 41, 61, 62 financial sector, 426, 489, 494, 499 Financial Services Authority, 489 financial system, 428, 471, 472, 478, 493, 496, 523, 543, 547 fine tuning, 169 fingerprints, 39 finite element method, 374 Finland, 1 fitness, 14, 24 fixed rate, 560, 596 flexibility, 140 fluctuations, 21, 454, 530, 545, 560 fluid, 187 Ford, 500 forecasting, 68, 69, 72, 73, 74, 88, 90, 541 foreclosure, 427, 434, 495, 499, 559, 591 formula, 15, 214, 219, 230, 231, 247, 439, 461, 463, 497, 509, 517, 528, 530, 531, 532, 533, 554, 570, 571, 599, 617 foundations, 70, 388 fragility, 496, 561 fragments, 27, 28, 38 framing, 98 France, 327, 375 fraud, 429, 432, 506 freedom, 77 frequency distribution, 32 functional analysis, 156 funding, 42, 59, 426, 436, 440, 447, 471, 492, 531, 550, 551, 553 fuzzy sets, 85
G

game theory, ix, 89, 90, 92, 171, 173, 238
gene mapping, 27
General Motors, 500
genes, vii, 1, 2, 13
genetic marker, 36
genetics, 5
genome, vii, 1, 2, 3, 27, 28, 34, 35, 36, 37, 38, 39
glucose, 566
governance, 562
government policy, 92
government securities, 550
grades, 72, 73
graph, 4, 255, 557
gravity, 188, 397
Greece, viii, 41, 42, 43, 46, 47, 48, 60, 63, 64
grounding, 155
grouping, 29
growth rate, 497
guidance, 61
guidelines, 73, 495
guilty, 428, 496
H

Hamiltonian, 4, 334
harvesting, 100
health care system, 42
health insurance, 42
health policy issues, 42
health services, 42, 43, 45, 48, 53
health systems, 63, 566, 575
heterogeneity, 77, 84, 85, 87
holding company, 550
homogeneity, 77, 84
Hong Kong, 89
hospitalization, 46, 57, 60, 61
host, 475
housing, 471, 472, 481, 494, 495, 498, 499, 500, 501, 502, 543, 545, 546, 547, 562, 588, 613, 618
human genome, 27, 37
human interactions, 90
Human Resource Management, vii, viii, 67, 69, 83, 84, 85, 86, 87
human resources, 86
hybrid, 16, 35, 36, 39, 137, 408, 421, 560, 596
hybridization, 36
hypercube, 216
hypothesis, xii, 80, 160, 173, 247, 296, 298, 313, 339, 363, 425, 426, 427, 429, 453, 454, 468, 481, 493, 584
I

IMA, 85, 86
images, 221, 246, 256, 379
imbalances, 497
IMF, 476, 561
impacts, 618
in transition, 65
incidence, 477
independence, 160, 179, 181, 427, 432
India, 494
induction, 217, 355, 356, 417, 419
inefficiency, 45, 52, 69, 559
inequality, xi, xii, 4, 157, 213, 215, 263, 264, 267, 268, 291, 292, 298, 304, 305, 311, 314, 316, 317, 334, 348, 352, 362, 364, 368, 375, 380, 382, 387, 388, 392, 393, 394, 411, 524
inferences, 84
inflation, 553
initial state, 92, 188, 331, 337, 346
initiation, 464
insertion, 15, 16
integration, 189, 196, 201
interaction effect, 77
interbank market, 523, 558
interdependence, 429, 547
interest rates, 426, 432, 435, 436, 472, 481, 492, 495, 496, 499, 500, 502, 514, 523, 545, 546, 550, 553, 558, 559, 560
interface, 327
interference, 18
intermediaries, 511
International Monetary Fund, 561
intervention, 494, 566
invariants, 187, 188
investment bank, 426, 492, 494, 501, 502, 508, 548, 551, 614, 618
investors, 426, 439, 470, 472, 476, 481, 482, 492, 493, 495, 499, 500, 501, 502, 546, 547, 549, 550, 560, 588, 590, 591, 594
isolation, 48
Israel, 1, 34, 377
Italy, 259
iteration, 8, 124, 125, 126, 128, 131, 132, 133, 134, 135, 139, 143, 338, 343, 420, 422
iterative solution, 191
J

Japan, 153, 181, 387, 407
Jordan, 563
K

Kenya, 64
Kuwait, 139
L

languages, 142
lattices, 168, 183
learning, 25, 388, 401, 405
lending, 426, 427, 474, 479, 489, 492, 493, 494, 495, 496, 497, 499, 544, 545, 548, 549, 550, 552, 553, 559, 563, 620
life expectancy, 44
lifetime, 500, 532
light beam, 219
linear function, 293, 330, 373, 462, 595, 619
linear model, 493
linear programming, 585
linear systems, xiii, 286, 287, 288, 289, 387, 404, 405, 423, 577, 578, 584, 586
liquid assets, 523
liquidate, 502, 618
liquidity, xii, xiii, 425, 428, 429, 435, 439, 440, 447, 470, 471, 481, 482, 491, 495, 499, 508, 513, 523, 543, 544, 546, 550, 551, 561, 587, 588, 593, 595, 613, 614, 618
loan securitization, vii, xiii, 489, 587
local order, 21, 32
locus, 39
logical implications, 154
logistics, 5, 117, 118, 136, 137
Louisiana, 288
Lyapunov function, 275, 279, 287, 392
M

majority, 439, 464, 472, 496
management, viii, 41, 42, 43, 48, 51, 54, 56, 57, 59, 60, 61, 62, 64, 68, 69, 73, 74, 86, 90, 105, 117, 427, 475, 478, 494, 499, 614
manpower, viii, ix, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88
manufacturing, 376
mapping, vii, 1, 2, 3, 13, 14, 18, 19, 21, 22, 23, 25, 27, 30, 34, 35, 36, 37, 38, 118, 156, 220, 312, 320, 383, 395, 579, 580, 581, 582
marginal costs, 532
marginal utility, 154, 176, 178
markers, 2, 3, 13, 14, 16, 18, 21, 22, 23, 24, 25, 27, 32, 33, 34, 35, 37, 39
market share, 49, 55, 514, 553
market structure, 546
marketing, 62, 65, 558
marketplace, 514
Markov chain, 79, 80, 81, 84, 85, 86, 87, 88, 337, 345, 506, 568, 572
mathematical programming, 118
mathematics, 183
matrix algebra, 420
mechanical properties, 186, 201
median, 3, 37
membership, 397, 398
memory, 8, 142
metaphor, 156
methodology, ix, xi, 52, 68, 292, 330, 340, 408, 422, 520, 566, 575
metric spaces, 385
Microsoft, 146, 150
microstructure, 186
military, 68, 87
Ministry of Education, 181, 584
mixing, 177
modeling, ix, 21, 83, 86, 89, 90, 92, 112, 139, 187, 199, 442, 450, 478, 481, 489, 491, 497, 507, 514, 562, 595, 619, 620
modelling, 79, 87, 118, 119, 620
modification, 140, 169, 172, 213, 214, 215, 254, 343
modulus, 189
molecular biology, 35
monetary policy, 496, 497
monitoring, 42, 60, 61, 62, 430, 459, 460, 521, 530, 532
Monte Carlo method, 502
moral hazard, 428, 470, 481, 496, 590
mortality rate, 44
mortgage-backed securities, xiii, 491, 492, 563, 587, 588
Moses, 489
motivation, xiii, 472, 491, 492, 493, 508, 571
multidimensional, x, 211, 212, 213, 214, 219, 220, 224, 228, 233, 568, 572
multiplication, 50
multiplier, 191, 194, 430, 442, 450, 485
mutation, 4, 5, 6, 8, 14, 15, 16, 17, 18, 20, 24, 25, 34
N

Nash equilibrium, 93, 94, 97, 107, 108, 109, 407, 408, 411
natural resources, 90
nematode, 36
neural network, 388, 403, 404, 405
neural networks, 403, 404, 405
nodes, 14, 31, 199, 335, 365, 565, 566, 567, 568, 570, 571, 572, 573, 574
noise, 18, 32, 395, 408, 423, 424, 497, 506, 507, 516, 518, 540
nonlinear dynamics, 530
nonlinear systems, 403, 404, 405
Norway, 34, 35
numerical analysis, 408
numerical computations, 118
nursing, 64
O

obstruction, 69
opaqueness, xiii, 425, 426, 427, 429, 438, 455, 468, 481, 613
open market operations, 550
operations research, 70, 87
opportunities, 118, 554
optimization method, vii, x, 3, 34, 185, 186, 187, 201, 209, 212
orbit, 567, 570, 571, 573, 574
organism, 24
organizing, 459
outsourcing, 428
overlap, 28, 29, 30, 31, 33
ownership, 429, 553, 560
ox, 151
P

Pacific, 182
parallel, 30, 31, 32, 38, 142, 145, 147, 151, 219, 220, 224
parallel algorithm, 38
parallelism, 142, 145
parameter estimation, ix, 68, 77, 84
Pareto, x, 153, 154, 155, 157, 158, 159, 160, 161, 164, 165, 166, 167, 168, 169, 172, 173, 180, 181, 183, 184, 211, 212, 213, 214, 215, 216, 217, 219, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 517
Pareto optimal, x, 153, 155, 157, 158, 159, 160, 161, 164, 165, 166, 167, 168, 169, 172, 173, 181, 211, 212, 215, 232
partial differential equations, 94, 97, 108, 139, 416
partition, 70, 119, 155, 156, 157, 158, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 171, 172, 173, 174, 184, 238, 241, 250
patents, 510
pathogenesis, 59
penalties, 7, 559
performance, 5, 19, 38, 43, 45, 48, 49, 50, 54, 64, 119, 131, 140, 145, 146, 150, 185, 187, 200, 211, 287, 289, 387, 388, 391, 395, 396, 397, 427, 438, 442, 450, 466, 471, 475, 520, 533, 534, 549, 551, 552, 553, 556, 559, 566, 613, 615
perinatal, 65
periodicity, 205
personnel costs, 69
plants, 39, 270, 279, 286
policy iteration, 338, 343
policy making, 61
policy-makers, viii, 41, 43, 60, 61
pools, 434, 495, 547, 560, 591
population growth, 572
population size, 14, 21
portfolio, 426, 435, 439, 445, 446, 458, 459, 460, 461, 462, 463, 464, 469, 471, 472, 478, 479, 480, 501, 502, 508, 510, 521, 522, 543, 549, 560, 563, 590, 593, 596, 597, 603, 613, 614, 617, 618
Potchefstroom, 491, 587
predictive validity, 78
present value, 100, 105, 109, 429, 439, 440, 447, 458, 461, 478, 497, 532, 558
primary function, 492
prior knowledge, 263, 265, 270, 272, 274, 275, 282, 284, 404
prioritizing, 60
private practice, 48
probability, ix, 7, 15, 17, 25, 35, 72, 73, 76, 77, 80, 81, 83, 84, 85, 89, 90, 92, 112, 154, 155, 160, 162, 164, 175, 180, 181, 182, 183, 184, 239, 240, 242, 243, 244, 245, 247, 252, 253, 409, 442, 450, 465, 469, 480, 504, 510, 519, 525, 533, 545, 550, 559, 568, 571, 572, 573, 574, 591, 593, 595, 597, 605, 614
probability distribution, ix, 73, 89, 90, 92, 112, 239, 243, 244, 442, 450, 605
producers, 43, 44
production function, 99
productive efficiency, 63
productivity, 64
profitability, vii, xiii, 461, 491, 492, 493, 494, 497, 502, 503, 508, 509, 527, 530, 533, 538, 542, 548, 551, 552, 553, 558, 560, 561, 562
programming, 38, 63, 82, 86, 98, 119, 125, 128, 137, 151, 330, 334, 339, 385, 526, 585, 599
project, 27, 30, 34
propane, 118
proportionality, 82, 86, 525, 548
proposition, 243, 249, 293, 340, 378, 502, 558, 605
prototype, 43
public health, 59, 62, 64, 65
public sector, 44
Q

qualifications, viii, 67, 71, 76
quality of service, 42
quantitative concept, 83
quantitative technique, 68
R

radiation, 35, 36
radius, 245
rate of return, 459, 461, 479, 510, 550, 591, 597
rating agencies, 553
reactions, 436, 481
reading, 347
real estate, 506, 507, 544, 545
real numbers, 248, 378
real time, 395
realism, ix, 89, 90, 112
reality, 43, 51, 56, 440, 493, 495, 510
recall, 204, 240, 246, 247, 519, 532, 580, 601
recalling, 272, 274, 281
recession, 499, 545, 550
recognition, 186
recombination, 2, 3, 14, 21, 22, 23, 24, 34
recommendations, 61, 427
recruiting, 82
recurrence, 109
redistribution, 62
referees, 553
reforms, 42
regression, 35, 76, 77
regression analysis, 76, 77
regulatory framework, 543, 613
reintroduction, 552
relaxation, x, 185, 186, 209
relevance, 69, 70, 618, 619
reliability, 14, 34, 39, 61, 70, 420
rent, 544
repackaging, 560
repair, 29, 566, 567
replacement, 68, 533
reproduction, 4
reputation, 510, 524, 525, 552
reserves, 510, 512, 518, 520
residuals, 438, 540, 541
resilience, 478
resistance, 39
resolution, 4, 27, 28, 532
resource allocation, 42
resource management, 86, 115
resource policies, viii, 67
resources, viii, 41, 42, 43, 45, 46, 47, 48, 56, 57, 59, 60, 61, 62, 64, 69, 90, 115, 142, 143, 153, 472, 523
response time, 576
restitution, 506
restructuring, 591
retail, 482, 562, 614
retained earnings, 439, 441, 446, 448, 449, 456, 457, 471
returns to scale, 63
revenue, 497, 503, 549
rights, 434, 499
risk aversion, 549, 617, 618
risk factors, 492, 559
risk management, 428, 471, 477, 495, 497, 563
risk-taking, 497, 563
Romania, 407, 423
root-mean-square, 542
rubber, vii, x, 185, 187, 189, 200, 207, 208, 209
rural areas, 42, 43, 45, 48, 62
S

Sarbanes-Oxley Act, 552
savings, 434, 476, 510, 549
scaling, 224
scheduling, 5, 38, 39, 69, 85, 330, 374, 375
schema, 17, 140, 141
SCP, 26, 27
screening, 431, 495, 521, 587
SEC, 490
sensing, 586
sensitivity, x, 9, 192, 195, 198, 199, 202, 211, 214, 228, 231, 232, 385, 463, 492, 508, 513, 543, 545, 546, 585
sequencing, 28, 33, 37, 39
servers, 570
set theory, 85
shape, x, 185, 186, 187, 193, 198, 199, 201, 205, 207, 208, 209, 262, 335
shareholder value, 510, 563
shareholders, 434, 460, 461, 462, 503, 511, 531, 553
shear, 189
shock, 453, 454, 455, 476, 477, 483, 526, 597
shoreline, 186
shortage, viii, 45, 48, 67, 68, 69, 332, 345, 346, 357, 360, 513
significance level, 29
signs, 238
silicon, 140
simulation, vii, viii, 14, 18, 67, 118, 141, 143, 373, 394, 401, 479, 509, 538, 539
single test, 13
skeleton, 25, 26, 33, 34, 35
Slovakia, 404
small businesses, 551
SNP, 507, 515
social care, 64
software, 3, 13, 18, 22, 38, 65, 140, 145, 146, 147, 150
source code, 143
Southern Africa, 508
space constraints, 330
Spain, 328, 549, 565
species, 22
specifications, 480, 570
speculation, 116, 591, 597
stabilization, x, xi, 259, 260, 279, 287, 288, 289, 404
staffing, 46
stakeholders, 61
standard deviation, 505, 538
standardization, 475
statistics, 64, 84, 541, 542
steel, 36
stochastic model, xiii, 73, 85, 87, 88, 491, 497, 502, 508, 509, 510, 527, 543, 594, 595, 597, 598, 615
stochastic processes, 98
stock markets, 481, 499
stock price, 547
storage, 142
strategic management, 69
strategic planning, 87
strong interaction, 69
structural changes, 100
structure formation, x, 237, 238, 241
structuring, 426
subgame, 93, 95, 96, 97, 101, 104, 109
subgroups, viii, 67, 70, 72, 76, 77, 79, 84, 85, 238
subprime loans, 428
substitution, 102, 176
succession, 52, 68
successive approximations, 417
supervision, 429, 493, 554
supply chain, 474
suppression, 186
surplus, viii, 57, 67, 68, 69, 523, 545
survey, 16, 35, 151, 375
survival, 458
Survivor, 85
survivors, 553
Sweden, 546
Switzerland, 546
symmetry, 4, 163, 199, 205, 352
synthesis, xi, 142, 143, 147, 150, 259, 279, 286, 389, 390, 394
systemic risk, 428, 429, 437, 441, 447, 471, 472, 477, 481, 496
T

Taiwan, 63, 117, 136
taxation, 560
technical efficiency, 41, 44, 45, 50, 51, 52, 60, 61, 64
tension, 208
testing, 8, 14, 27, 33, 34, 77, 78, 142, 150, 436, 482
time increment, 196
time periods, 76, 78, 82, 131
time series, 539, 542
tin, 521
topology, 167, 168, 177, 178, 185, 186, 187, 194, 198, 199, 200, 203, 207, 209, 381, 382, 384, 567
tracks, 200
trademarks, 510
trade-off, x, 211, 212, 214, 215, 216, 217, 232
trading partner, 549, 614
trading partners, 549, 614
training, viii, 67, 85, 87
traits, 77
trajectory, 4, 330, 333, 336, 337, 338, 339, 347, 365, 389, 390, 416
tranches, 428, 433, 434, 436, 439, 445, 446, 459, 460, 463, 464, 466, 467, 469, 470, 472, 474, 475, 480, 589, 590, 591, 613
transaction costs, 440, 447, 459, 460, 532
transactions, 512, 545, 595, 613, 614
transformation, xii, 77, 157, 167, 177, 195, 197, 209, 425, 428, 435, 481
transformations, 413
transition rate, 71, 72, 77, 345
translation, 179, 182
transparency, 429
transport, 5
transportation, ix, 117, 118, 119, 131, 133, 134
Treasury bills, 510
triggers, 438, 463, 466, 467, 475, 480
turnover, 68
U

U.S. economy, 499
U.S. Treasury, 474, 492, 493, 500, 547, 549
underwater vehicles, 288
uniform, 29, 30, 201, 204, 217, 219, 261, 262, 263, 264, 268, 275, 279, 321, 336, 338, 347, 352, 356, 365, 388, 420
unit cost, 99
United Kingdom, 64, 68, 82, 87, 211
United Nations, 41, 49, 50, 55, 64
updating, 144, 187, 196, 239

V

vacancies, 83, 544
validation, ix, 68, 75, 78, 84, 141, 436, 482
valuation, xii, xiii, 425, 426, 427, 429, 430, 431, 434, 435, 436, 438, 439, 440, 442, 443, 447, 449, 450, 451, 455, 456, 457, 458, 461, 463, 468, 471, 472, 475, 478, 481, 489, 508, 510, 514, 526, 562, 620
variations, xi, 22, 37, 136, 192, 259, 260, 288, 385
vehicles, ix, 117, 118, 119, 130, 200, 288, 426, 496, 551, 588
velocity, 194, 195, 196, 198, 199, 240, 246, 247, 254, 337, 339, 343
vibration, 185, 200, 201, 394
viscosity, 330, 339, 340, 341, 349, 362, 364, 374, 375
volatility, 436, 481, 522, 525, 526, 528, 596, 619
voting, 3, 560

W

waste, 56, 57
weakness, 408
wealth, 510, 546
web, 151, 614
wholesale, 426, 558, 614
windows, 35, 37
wires, 141, 142
withdrawal, 523
world order, 16