International Series in Operations Research & Management Science Volume 162
Series Editor Frederick S. Hillier Stanford University, CA, USA Special Editorial Consultant Camille C. Price Stephen F. Austin State University, TX, USA
For further volumes: http://www.springer.com/series/6161
Ahti Salo Jeffrey Keisler Alec Morton Editors
Portfolio Decision Analysis Improved Methods for Resource Allocation
Editors Ahti Salo Aalto University School of Science Systems Analysis Laboratory PO Box 11100 00076 Aalto Finland [email protected]
Jeffrey Keisler Department of Management Science and Information Systems University of Massachusetts, Boston Morrissey Boulevard 100 Boston, MA 02125 USA jeff [email protected]
Alec Morton Department of Management London School of Economics Houghton Street WC2A 2AE London United Kingdom a [email protected]
ISSN 0884-8289 ISBN 978-1-4419-9942-9 e-ISBN 978-1-4419-9943-6 DOI 10.1007/978-1-4419-9943-6 Springer New York Dordrecht Heidelberg London Library of Congress Control Number: 2011932226 © Springer Science+Business Media, LLC 2011 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Foreword
Resource allocation problems are ubiquitous in business and government organizations. Stated quite simply, we typically have more good ideas for projects and programs than funds, capacity, or time to pursue them. These projects and programs require significant initial investments in the present, with the anticipation of future benefits. This necessitates balancing the promised return on investment against the risk that the benefits do not materialize. An added complication is that organizations often have complex and poorly articulated objectives and lack a consistent methodology for determining how well alternative investments measure up against those objectives. The field of decision analysis (or DA) has recognized the ubiquity of resource allocation problems for three or four decades. The uncertainty of future benefits clearly calls for the application of standard approaches for modelling uncertainties – such as decision trees, influence diagrams, and Monte Carlo simulation – and for using utility functions to model decision makers' risk preferences. The problem of many objectives is an opportunity to apply multi-attribute utility or value models – or similar techniques – to decompose a difficult multidimensional problem into a series of relatively simple judgments about one-dimensional preferences and relative trade-off weights. Application of these models in real-world organizations also requires state-of-the-art techniques and processes for eliciting probabilities and preferences, performing sensitivity analyses, and presenting clear and compelling results and recommendations. As such, these problems clearly fall within the mainstream of DA methods and practice. However, in contrast to traditional DA, it is important to recognize that resource allocation is a portfolio problem, where the decision makers must choose the best subset of possible projects or investments subject to resource constraints. Several other disciplines have also focused on the portfolio aspects of resource allocation problems, particularly operations research (OR) and finance. OR techniques such as mathematical programming are directly relevant to finding optimal portfolios of projects subject to resource constraints. However, mathematical optimization is not typically considered part of routine DA practice, and combining the best of
decision analytic models with mathematical programming has received only limited attention. DA practitioners commonly solve portfolio problems with heuristic approaches, such as prioritization based on benefit–cost ratios, despite foundational work done by Donald Keefer, Craig Kirkwood, and their colleagues in the 1970s. In finance, the considerable body of work on applications of OR to financial portfolios is relevant to understanding trade-offs of risk and return. However, finance and DA use different approaches and make different assumptions for assessing and modelling risk preferences. Portfolio DA also poses significant practical challenges. In large organizations, hundreds or thousands of projects and programs may be considered for inclusion in a portfolio. Assessing multiple evaluation criteria across hundreds or thousands of projects can be daunting. If probability assessments are required, then good DA practice typically emphasizes careful and rigorous assessment and modelling of the uncertainties. This amounts to building hundreds of models and assessing probability distributions for many thousands of uncertain parameters. In my own experience working on resource allocation in hospitals, healthcare systems, and large manufacturing firms, these organizations sometimes struggle to develop even straightforward deterministic financial models for their projects. Conducting a full-blown decision analysis for each project is simply infeasible. Stated quite simply, the most critical scarce resources are the resources required to obtain and evaluate data, assess probabilities, construct models, and evaluate alternative portfolios of projects. Decision makers typically start with limited information about each project and portfolio, and need guidance about which projects to analyze, and at what level of detail. Given the ubiquity of portfolio problems, the rich intersection of multiple relevant disciplines, and the very real practical challenges associated with these problems, one would naturally expect to see decision analysts rise to the challenge. The applied side of the decision analysis profession appears to have responded – particularly in oil and gas production, pharmaceuticals, and military applications. A number of vendors have also responded by providing commercial software tools specifically designed to support portfolio DA. Despite the apparent success of portfolio DA in practice, there has been surprisingly limited attention to portfolio theory and methods in academic DA. With only a few notable exceptions, introductory DA textbooks either provide no discussion of portfolio methods and problems, or at best, a cursory overview of a few applications. Nor have the theory and methods of portfolio DA received more than limited attention in the research literature. In recent years, in part to address this relative neglect, Ahti Salo, Jeffrey Keisler, and Alec Morton organized a series of invited sessions at various national and international conferences. These sessions brought together researchers and practitioners with a shared interest in portfolio decision analysis. I believe these sessions succeeded in encouraging academic–practitioner communication, and perhaps have persuaded more academic decision analysts to focus their attention on portfolio problems, issues, and challenges. This volume, Portfolio Decision Analysis: Improved Methods for Resource Allocation, is the next logical step in Salo, Keisler, and Morton's efforts to foster a
rich and productive interaction of theory, method, and application in portfolio DA. I commend and applaud the editors' objectives, and I am pleased to see that they have succeeded in attracting a diverse collection of contributions from a distinguished set of authors. The editors do an excellent job of providing an overview of these contributions in their introductory chapter. I simply conclude by encouraging you to take up where these authors leave off, either by furthering the development of new theory and methods, or by putting these theories and methods to the test in real organizations struggling to make the best use of scarce resources.

Don N. Kleinmuntz, Ph.D.
Executive Vice President
Strata Decision Technology, L.L.C.
Champaign, IL, USA
Acknowledgments
This book is the result of the collective efforts of many people. Chapter authors have contributed in the most direct and tangible way; but there are many others who deserve mention, not least for having convinced us of the need for a dedicated book on Portfolio Decision Analysis and also for having provided us with the intellectual foundation and the financial means to bring such a book to fruition. In particular, we wish to thank the panelists in the series of panel discussions that we organized at the INFORMS and EURO conferences in 2008–2010: José Figueira, Don Kleinmuntz, Jack Kloeber, Jim Matheson, Larry Phillips, Martin Schilling, Theo Stewart, Christian Stummer, and Alexis Tsoukiàs (in alphabetical order). These panelists presented thought-provoking ideas and engaged prominent scholars and practitioners like Carlos Bana e Costa, Robert Dyson, Ron Howard, and Carl Spetzler in lively discussion. We are pleased to note that echoes of these discussions – which were thus enabled by EURO and INFORMS conference organizers – are reflected in many places in this book. The three of us are indebted to our educators who instilled in us a deep and abiding interest in decision analysis and who have mentored us in our professional lives. In particular, Ahti Salo wishes to thank Raimo P. Hämäläinen for an excellent research environment and numerous occasions for fruitful collaboration; Alec Morton acknowledges the support and guidance of Larry Phillips and Carlos Bana e Costa, who introduced him to this area, and thanks them for many long and insightful discussions. Jeffrey Keisler thanks his former employers and colleagues at General Motors Corp., Argonne National Laboratory and Strategic Decisions Group for opportunities to work on, experiment with, and inquire about portfolio applications. He also thanks the many other decision analysis friends and colleagues (including a substantial number of the contributors to this book) who have been engaged in this discussion over the years. Jeffrey also thanks Elana Elstein for her support during the development of this book. We have received financial support from several sources. The Academy of Finland provided the grant which allowed Ahti Salo to devote a substantial share of his time to the development of this book, while the Game Changers project made it
possible for him to work on the book together with Alec Morton in December 2010 at IIASA (International Institute for Applied Systems Analysis). The Suntory and Toyota International Centres for Economics and Related Disciplines (STICERD) at the London School of Economics provided a grant which allowed us to work together in Helsinki and London in summer 2010. Finally, we wish to express our most sincere gratitude to the authors with whom it has been our pleasure to work and who have succeeded in writing 15 chapters which, taken together, push the frontiers of research and practice in this fascinating subfield of decision analysis still further.
Contents
Part I Preliminaries

1 An Invitation to Portfolio Decision Analysis (Ahti Salo, Jeffrey Keisler, and Alec Morton) 3
2 Portfolio Decision Quality (Jeffrey Keisler) 29
3 The Royal Navy's Type 45 Story: A Case Study (Lawrence D. Phillips) 53

Part II Methodology

4 Valuation of Risky Projects and Illiquid Investments Using Portfolio Selection Models (Janne Gustafsson, Bert De Reyck, Zeger Degraeve, and Ahti Salo) 79
5 Interactive Multicriteria Methods in Portfolio Decision Analysis (Nikolaos Argyris, José Rui Figueira, and Alec Morton) 107
6 Empirically Investigating the Portfolio Management Process: Findings from a Large Pharmaceutical Company (Jeffrey S. Stonebraker and Jeffrey Keisler) 131
7 Behavioural Issues in Portfolio Decision Analysis (Barbara Fasolo, Alec Morton, and Detlof von Winterfeldt) 149
8 A Framework for Innovation Management (Gonzalo Arévalo and David Ríos Insua) 167
9 An Experimental Comparison of Two Interactive Visualization Methods for Multicriteria Portfolio Selection (Elmar Kiesling, Johannes Gettinger, Christian Stummer, and Rudolf Vetschera) 187

Part III Applications

10 An Application of Constrained Multicriteria Sorting to Student Selection (Julie Stal-Le Cardinal, Vincent Mousseau, and Jun Zheng) 213
11 A Resource Allocation Model for R&D Investments: A Case Study in Telecommunication Standardization (Antti Toppila, Juuso Liesiö, and Ahti Salo) 241
12 Resource Allocation in Local Government with Facilitated Portfolio Decision Analysis (Gilberto Montibeller and L. Alberto Franco) 259
13 Current and Cutting Edge Methods of Portfolio Decision Analysis in Pharmaceutical R&D (Jack Kloeber) 283
14 Portfolio Decision Analysis: Lessons from Military Applications (Roger Chapman Burk and Gregory S. Parnell) 333
15 Portfolio Decision Analysis for Population Health (Mara Airoldi and Alec Morton) 359

About the Editors 383
About the Authors 387
Index 405
Contributors
Mara Airoldi  Department of Management, London School of Economics and Political Science, London, UK, [email protected]
Gonzalo Arévalo  International RTD Projects Department, Carlos III National Health Institute, Ministry of Science and Innovation, Madrid, Spain, [email protected]
Nikolaos Argyris  Management Science Group, Department of Management, London School of Economics and Political Science, London, UK, [email protected]
Roger Chapman Burk  Department of Systems Engineering, US Military Academy, West Point, NY 10996, USA, [email protected]
Zeger Degraeve  Department of Management Science & Operations, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK, [email protected]
Bert De Reyck  Management Science & Innovation, University College London, Gower Street, London WC1E 6BT, UK, [email protected]; Department of Management Science & Operations, London Business School, Sussex Place, Regent's Park, London NW1 4SA, UK, [email protected]
Barbara Fasolo  Department of Management, London School of Economics and Political Science, London, UK, [email protected]
José Rui Figueira  CEG-IST, Instituto Superior Técnico, Universidade Técnica de Lisboa, Lisbon, Portugal, [email protected]; LORIA, Département de Génie Industriel, Ecole des Mines de Nancy, Nancy, France, [email protected]
L. Alberto Franco  Warwick Business School, Coventry, UK, [email protected]
Johannes Gettinger  Institute of Management Science, Vienna University of Technology, Theresianumgasse 27, 1040 Vienna, Austria, [email protected]
Janne Gustafsson  Ilmarinen Mutual Pension Insurance Company, Porkkalankatu 1, 00018 Ilmarinen, Helsinki, Finland, [email protected]
Jeffrey Keisler  Management Science and Information Systems Department, College of Management, University of Massachusetts Boston, Boston, MA, USA, [email protected]
Elmar Kiesling  Department of Business Administration, University of Vienna, Bruenner Str. 72, 1210 Vienna, Austria, [email protected]
Jack Kloeber  Kromite, LLC, Boulder, CO, USA, [email protected]
Juuso Liesiö  Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland, [email protected]
Gilberto Montibeller  Management Science Group, Department of Management, London School of Economics and Political Science, London, UK, [email protected]
Alec Morton  Management Science Group, Department of Management, London School of Economics and Political Science, London, UK, [email protected]
Vincent Mousseau  Laboratoire Génie Industriel, Ecole Centrale Paris, Grande Voie des Vignes, 92295 Châtenay-Malabry, Cedex, France, [email protected]
Gregory S. Parnell  Department of Systems Engineering, US Military Academy, West Point, NY 10996, USA, [email protected]
Lawrence D. Phillips  Management Science Group, Department of Management, London School of Economics and Political Science, London, UK, [email protected]
David Ríos Insua  Royal Academy of Sciences, Valverde 22, 28004 Madrid, Spain, [email protected]
Ahti Salo  Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland, [email protected]
Julie Stal-Le Cardinal  Laboratoire Génie Industriel, Grande Voie des Vignes, 92295 Châtenay-Malabry, Cedex, France, [email protected]
Jeffrey S. Stonebraker  Department of Business Management, College of Management, North Carolina State University, Campus Box 7229, Raleigh, NC 27695-7229, USA, [email protected]
Christian Stummer  Faculty of Business Administration and Economics, Universitaetsstr. 25, 33615 Bielefeld, Germany, [email protected]
Antti Toppila  Systems Analysis Laboratory, Aalto University School of Science, P.O. Box 11100, 00076 Aalto, Finland, [email protected]
Rudolf Vetschera  Department of Business Administration, University of Vienna, Bruenner Str. 72, 1210 Vienna, Austria, [email protected]
Detlof von Winterfeldt  International Institute for Applied Systems Analysis, Laxenburg, Austria, [email protected]
Jun Zheng  Laboratoire Génie Industriel, Grande Voie des Vignes, 92295 Châtenay-Malabry, Cedex, France, jun.zheng@ecp.fr
Part I
Preliminaries
Chapter 1
An Invitation to Portfolio Decision Analysis Ahti Salo, Jeffrey Keisler, and Alec Morton
Abstract Portfolio Decision Analysis (PDA) – the application of decision analysis to the problem of selecting a subset or portfolio from a large set of alternatives – accounts for a significant share, perhaps the greater part, of decision analysis consulting. PDA has a sound theoretical and methodological basis, and its ability to contribute to better resource allocation decisions has been demonstrated in numerous applications. This book pulls together some of the rich and diverse efforts as a starting point for treating PDA as a promising and distinct area of study and application. In this introductory chapter, we first describe what we mean by PDA. We then sketch the historical development of some key ideas, outline the contributions contained in the chapters and, finally, offer personal perspectives on future work in this sub-field of decision analysis that merits growing attention.
1.1 What Is Portfolio Decision Analysis? Practically all organizations and individuals have goals that they seek to attain by allocating resources to actions that consume resources. Industrial firms, for example, undertake research and development (R&D) projects, expecting that these projects allow them to introduce new products that generate growing profits. Municipalities allocate public funds to initiatives that deliver social and educational services to their citizens. Regulatory bodies attempt to mitigate harmful consequences of human activity by imposing alternative policy measures which contribute to objectives such as safety and sustainability. Even many individual decisions can be viewed analogously. For instance, university students need to consider what academic courses and recreational pursuits to engage in, recognizing that time is a limited
resource when aspiring to complete one's studies successfully and on schedule while having a rewarding social life. Decision problems such as these are seemingly different. Yet from a methodological point of view, they share so many similarities that it is instructive to consider them together. Indeed, all the above examples involve one or several decision makers who are faced with alternative courses of action which, if implemented, consume resources and enable consequences. The availability of resources is typically limited by constraints while the desirability of consequences depends on preferences concerning the attainment of multiple objectives. Furthermore, the decision may affect several stakeholders who are impacted by the decision even if they are not responsible for it. There can be uncertainties as well: for instance, at the time of decision making, it may be impossible to determine what consequences the actions will lead to or how many resources they will consume. These, in short, are the key concepts that characterize decision contexts where the aim is to select a subset consisting of several actions with the aim of contributing to the realization of consequences that are aligned with the decision maker's preferences. They are also key parts of the following definition of Portfolio Decision Analysis:
By Portfolio Decision Analysis (PDA) we mean a body of theory, methods, and practice which seeks to help decision makers make informed multiple selections from a discrete set of alternatives through mathematical modeling that accounts for relevant constraints, preferences, and uncertainties.
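One stylized way of operationalizing this definition, offered here purely as an illustration (the notation below is ours, not a formulation given in the chapter), is as a constrained binary selection model in which uncertainties and interactions are suppressed:

\[
\max_{z \in \{0,1\}^{m}} \; \sum_{j=1}^{m} v_{j} z_{j}
\qquad \text{subject to} \qquad
\sum_{j=1}^{m} c_{kj} z_{j} \le b_{k}, \quad k = 1, \ldots, K,
\]

where \(z_{j} = 1\) if alternative \(j\) is included in the portfolio, \(v_{j}\) denotes its value (possibly an expected or multicriteria aggregate), \(c_{kj}\) its consumption of resource \(k\), and \(b_{k}\) the available amount of that resource. Richer PDA models relax one or more of these simplifications, for example by treating \(v_{j}\) as uncertain or by adding interaction terms between alternatives.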
A few introductory observations about this definition are in order. To begin with, theory can be viewed as the foundation of PDA in that it postulates axioms that characterize rational decision making and enable the development of functional representations for modeling such decisions. Methods build on theory by providing practicable approaches that are compatible with these axioms and help implement decision processes that seek to contribute to improved decision quality (see Keisler, Chapter 2). Practice consists of applications where these methods are deployed to address real decision problems that involve decision makers and possibly even other stakeholders (see, e.g. Salo and Hämäläinen 2010). Thus, applications build on decision models that capture the salient problem characteristics, integrate relevant factual and subjective information, and synthesize this information into recommendations about what subset of alternatives (or portfolio) should be selected. In general, PDA follows the tradition of decision analysis (and, more broadly, of operations research) in that it seeks to improve decision making by using mathematical models in the development of decision recommendations. In effect, the breadth of application domains where PDA has already been applied, combined with the strategic nature of many resource allocation decisions, suggests that PDA is one of the most important branches of decision analysis (see, e.g. Kleinmuntz 2007).
This notwithstanding, PDA does not seem to have received comparable attention in the literature. In part, this may be due to its many connections to other areas. For example, a considerable share of applied research is scattered across specialized journals that are not regularly read by methodologically oriented researchers or practicing decision analysts. PDA differs somewhat from the standard decision analysis paradigm in its focus on portfolio choice (as opposed to the choice of a single alternative from a set, such as choosing a single site to drill for oil). In this setting, it is possible to put forth analytical arguments as to why the pooling of several single choice problems into a more encompassing portfolio choice problem can be beneficial. First, the solution to the portfolio problem will be at least as good, because the combination of single choice problems, when considered together, constitutes a portfolio problem where there is a constraint to choose one alternative from each single choice problem. Thus, when considering these single choice problems together, the removal of these (possibly redundant) single choice constraints may lead to a better solution. Second, if the single choice problems are interconnected – for instance, due to the consumption of shared resources or interactions among alternatives in different subsets – the portfolio frame may provide a more realistic problem representation and consequently better decision recommendations. Although these remarks do not account for concerns such as computational complexity or possible delays formulating portfolio problems, they suggest that some problems that have been addressed by looking at independent single choice sub-problems may benefit from re-formulations as portfolio problems. As a rule, PDA problems involve more alternatives than single choice problems. From the viewpoint of problem structuring, a key question in PDA is therefore what alternatives can be meaningfully analyzed as belonging to the “same” portfolio. While PDA methods do not impose inherent constraints on what alternatives can be analyzed together, there are nevertheless considerations which suggest that some alternatives can be more meaningfully treated as a portfolio. This is the case, for instance, when the alternatives consume resources from the same shared pool; when the alternatives are of the same “size” (measured, e.g. in terms of cost, or the characteristics of anticipated consequences); when the future performance of alternatives is contingent on decisions about what other alternatives are selected; or when the consideration of alternatives together as part of the same portfolio seems justified by shared responsibilities in organizational decision making. The fact that there are more alternatives in portfolio choice suggests also that stakes may be higher than in single choice problems, as measured, for example, by the resources that are committed, or by the significance of economic, environmental or societal impacts of consequences. As a result, the adoption of a systematic PDA approach may lead to particularly substantial improvements in the attainment of desired consequences. But apart from the actual decision recommendations, there are even other rationales that can be put forth in favor of PDA-assisted decision processes. For example, PDA enhances the transparency of decision making, because the structure of the decision process can be communicated to stakeholders
and the process leaves an auditable trail of the evaluation of alternatives with regard to the relevant criteria. This, in turn, is likely to enhance the efficiency of later implementation phases and the accountability of decision makers. Overall, the aim of this book is to consolidate and strengthen PDA as a vibrant sub-discipline of decision analysis that merits the growing attention of researchers and practitioners alike. Towards this end, we offer 15 chapters that present new theoretical and methodological advances, describe high-impact case studies, and analyze “best practices” in resource allocation in different application domains. In this introductory chapter, we draw attention to key developments in the evolution of PDA. We then summarize the key contributions in each chapter and, finally, outline avenues for future work, convinced that there are exciting opportunities for theoretical, methodological and practical work.
1.2 Evolution of Portfolio Decision Analysis In this section, we discuss developments which have contributed to the emergence of PDA over the past few decades, based on our reading of the scientific literature and personal conversations with some key contributors. Our presentation is intended as neither a systematic historical account nor a comprehensive literature review but, rather, as a narrative that will give the reader a reasonable sense of how what we call PDA has evolved. By way of its origins, PDA can be largely viewed as a subfield of decision analysis (DA). We will therefore give most of our attention to developments in DA, particularly from the viewpoint of activities in the USA. However, we start by briefly discussing interfaces to neighboring fields where related quantitative methods for resource allocation have also been developed.
1.2.1 Financial Portfolio Optimization As a term, “portfolio” is often associated with finance and, in particular, with optimization models that provide recommendations for making investments into market-tradable assets that are characterized by expected return and risk. The evolution of such models began with the celebrated Markowitz mean-variance model (Markowitz 1952) and continued with the development of the Capital Asset Pricing Model (Sharpe 1964). Already in the 1960s, optimization techniques were becoming increasingly important in forming investment portfolios that would exhibit desired risk and return characteristics. Unlike PDA models, however, models for financial portfolio optimization typically consider tradable assets whose past market performance provides some guidance for estimating the risk-return characteristics of assets. Another difference is that the decision variables in financial optimization models are usually continuous so that assets can be purchased or sold in any given quantities, whereas PDA models
are built for selection problems which usually involve binary choices. These differences notwithstanding, advances in financial optimization are relevant to PDA. For example, in many cases financial optimization models have spearheaded the adoption of risk measures that are applicable also in PDA (see, e.g. Artzner et al. 1999).
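The contrast between the two modeling traditions can be made concrete with two textbook formulations; these are standard statements included here only for illustration and are not taken from the chapter. A common mean-variance model chooses continuous, divisible asset weights, whereas a PDA selection model chooses binary inclusion variables:

\[
\max_{w \ge 0} \; \mu^{\top} w - \lambda\, w^{\top} \Sigma\, w \ \ \text{s.t.}\ \ \mathbf{1}^{\top} w = 1
\qquad \text{versus} \qquad
\max_{z \in \{0,1\}^{m}} \; v^{\top} z \ \ \text{s.t.}\ \ C z \le b,
\]

where \(w\) denotes the portfolio weights over tradable assets with expected returns \(\mu\) and covariance matrix \(\Sigma\), \(\lambda\) is a risk-aversion parameter, and \(z\), \(v\), \(C\) and \(b\) are, respectively, the binary selection variables, project values, resource consumptions and resource limits of the selection model.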
1.2.2 Capital Budgeting Models While financial optimization deals with portfolios of tradeable assets, capital budgeting is typically concerned with intra-organizational investment, most notably the allocation of resources to real assets (Lorie and Savage 1955; Brealey and Myers 2004). Capital budgeting models have early antecedents in the optimization models that were formulated for resource allocation and asset deployment during the Second World War (see, e.g. Mintzberg 1994). Related models were then adopted in the corporate world. The main responsibility would often rest with a centralized corporate finance function that would establish fixed top-down budgets for divisions or departments which, in turn, would take detailed decisions on what activities to pursue with these budgets. Technically, some archetypal capital budgeting models can be formulated as linear programming problems (Asher 1962), even if the explicit modeling of problem characteristics such as nonlinearities may call for more general mathematical programming approaches. Capital budgeting and its variants received plenty of attention in finance and operations research in the 1960s and 1970s (see, e.g. Weingartner 1966, 1967). For example, program budgeting sought to elaborate what resources would be used by the elements of a large public sector program and how program objectives would be affected by these elements, in order to improve the transparency of planning processes and the efficiency of resource allocation (Haveman and Margolis 1970). Marginal analysis, in turn, focused on the comparative analysis of marginal costs and marginal benefits as a step towards finding the best allocation of resources to activities, in recognition of the broader concerns that are not necessarily under the direct control of the decision maker (Fox 1966; see also Loch and Kavadias 2002). At present, formalized approaches to capital budgeting are practically pervasive in corporations and public organizations. Yet it is our impression that it is still rare to find capital budgeting models that make systematic use of all the advanced features of decision modeling – such as probabilities, multiple criteria, or dynamic modeling with real options – that belong to the modern PDA toolkit.
1.2.3 Quantitative Models for Project Selection Project selection was recognized as an important decision problem in operations research by the late 1950s (Mottley and Newton 1959). Methods for this problem help
determine which combinations of proposed projects or activities an organization should fund in order to maximize the attainment of its objectives (see, e.g. Heidenberger and Stummer 1999 for a good review). Between the mid-1960s and the mid-1970s, these methods received plenty of attention in the literature. Growing demand for these methods was generated by large centralized R&D departments of major corporations where R&D managers had to decide which project proposals to fund. Helping to meet this early demand were the efforts of consulting firms such as Arthur D. Little (see Roussel et al. 1991, for a review). By the mid-1970s, the literature on project selection methods had grown to the point where various multicriteria and multiobjective approaches (Lee and Lerro 1974; Schwartz and Vertinsky 1977), probabilistic approaches, and even approaches that account for project interactions were being proposed (see Aaker and Tyebjee 1978; Heidenberger and Stummer 1999; Henriksen and Traynor 1999). These approaches typically led to mathematical optimization formulations where the desired results were captured by the objective function that was to be maximized subject to budgetary and other resource constraints. The scope of these approaches was gradually extended so that the projects would be assessed not only with regard to financial return but also in view of objectives such as strategic fit and innovativeness (see, e.g. Souder 1973). Research on project selection methods then slowed markedly in the mid-1970s. One plausible cause is that organizations became less centralized with less formal planning cycles. It has also been suggested that the methods became excessively complex and time consuming, even to the point where their practical utility was being undermined (cf. Shane and Ulrich 2004). Experiences from this period should serve as a reminder that while a high degree of mathematical sophistication makes it possible to consider complex project portfolios elaborated at a great level of detail, the successful uptake of these methods is still crucially dependent on securing an appropriate fit with actual organizational decision-making practices.
1.2.4 Decision Analysis Although linkages to finance, capital budgeting and project selection are relevant, the origins and philosophy of PDA stem from the field of decision analysis. Prior to 1960, Ramsey (1978), de Finetti (1937, 1964), and Savage (1954) had built a mathematical theory of decision under uncertainty based on subjective probability. Edwards (1954, 1961) then outlined a research agenda for exploring the descriptive accuracy of subjective expected utility theory, but there was yet no decision analysis, in the sense of a managerial technology based on these concepts. In this setting, Pratt et al. (1965) made key steps towards the development of an advanced statistical decision theory with a stronger orientation towards prospective applications than the philosophical discussions and theoretic system-building of early Bayesian statisticians. In this context, von Neumann–Morgenstern’s utility functions (von Neumann and Morgenstern 1947) were increasingly seen as a viable tool for helping decision makers clarify their attitudes to risk.
As an applied field, decision analysis was established in the 1960s (Raiffa and Schlaifer 1961; Howard 1966; Raiffa 1968). It was quickly recognized as essential particularly in capital intensive industries in the USA. For example, oil wildcatters' drilling decisions (Grayson 1960) served as a testbed for decision tree analysis, resulting in some of the most canonical pedagogical examples of high-impact applications. By the end of the 1960s, decision analysis gained prominence in management education at leading US research universities (e.g. Raiffa 1968). For instance, Howard began to develop a Stanford-based group to expand the uptake of DA in addressing important decision problems (see, e.g. Howard and Matheson 1984). Multi-attribute utility theory (MAUT) was formulated by Keeney and Raiffa (1976) who built on the work on utility theory (e.g. Fishburn 1964, 1970) and measurement theory (Krantz et al. 1971). MAUT and its riskless sister theory, multiattribute value theory (MAVT), were subsequently applied to a growing range of public policy problems where the scoring of alternatives with regard to multiple attributes contributed to more transparent and systematic decision making. The application of DA also expanded due to the efforts that the Stanford group made by developing the DA cycle (Howard 1968; see also Keeney 1982) in order to create an organizational decision support process to facilitate the analysis of decisions. Contemporaneous advances included tornado diagrams, which show the sensitivity of DA results to uncertainties, and influence diagrams, which are simple yet powerful graphical tools for representing decision models and for structuring the elicitation of probability assessments (Howard 1988). A more interactive approach to decision analysis, decision conferencing, evolved from consulting activities where relatively simple quantitative models were employed to facilitate interactive decision processes. For example, the firm Decisions and Designs applied decision analytic methods with particular attention to the process of modeling (see Buede 1979). In the 1970s, this group started to work with the US Marine Corps using a PDA approach called Balanced Beam which was to last for several decades. Beginning in the late 1970s, Cam Peterson, his colleagues and later also Larry Phillips began to expand their work on decision conferencing with multiple parties (Phillips and Bana e Costa 2007). An alternative family of methods has been developed largely by francophone researchers under the heading "decision aid" (Roy 1985, 1991). The general philosophy of these methods differs somewhat from the above approaches, in that considerable emphasis is placed on the constructive nature of the interaction between analyst and the decision maker, as well as the social context of the decision. Although similar ideas also surface in the decision analysis tradition (Phillips 1984), they are arguably even more prominent in the decision aid tradition, most notably in the so-called "outranking" approaches that employ different procedures in the aggregation of criterion scores. Many of the assumptions of classical decision theory – such as complete comparability and ability to fully discriminate between alternatives – are not taken for granted by proponents of outranking approaches (see, e.g. Bouyssou and Vansnick 1986; Brans and Vincke 1985).
Overall, the theoretic and axiomatic basis of outranking approaches has a more discrete and ordinal flavor than that of value/utility-based approaches.
Another decision analytic tradition which departs from the axiomatic foundation of MAUT/MAVT theory has grown around the Analytic Hierarchy Process (AHP; see, e.g. Saaty 2005) which, however, has been criticized for methodological shortcomings such as the possibility of rank reversals where the introduction or removal of an alternative may change the relative rankings of other alternatives (see, e.g. Salo and Hämäläinen 1997 and references therein). Despite such shortcomings, the AHP has won popularity among practitioners, and it has been applied to various PDA problems such as the selection of R&D projects (Ruusunen and Hämäläinen 1989).
1.2.5 From Decision Analysis to Portfolio Decision Analysis As DA methods matured, they were increasingly applied to portfolio problems. As early as the 1960s, Friend and Jessop (1969) were experimenting with the use of formal methods in coordinating planning decisions within local government, and by the mid-to-late 1970s there was much practical work which could readily fall under our definition of PDA. Several methods for project selection, for instance, incorporated systematic approaches to quantification, such as project trees which resembled the more formal DA methods deployed by decision analysts. In particular, the DA group at the Stanford Research Institute (which later morphed into several consulting firms, most notably Strategic Decisions Group, or SDG) expanded its portfolio activities. Specifically, several companies (for examples, see Howard and Matheson 1984) requested consultants to examine their entire R&D plan once they had conducted analyses of isolated R&D projects and strategic decisions. In these analyses, the cost, the probability of success (in some cases as the combined probability of clearing multiple technical hurdles), and the market value given success (based on forecasts of market share, price, etc.) were first assessed for each project using DA methods. Combining these analyses at the portfolio level offered several useful outputs. For instance, after the expected net present values (ENPVs) had been calculated for each project, the projects could be sorted in order of “bang-for-the-buck” (calculated as the ENPV of a project divided by its cost) in order to establish the efficient frontier. For the given budget, the highest value portfolio could then be identified as the intersection of the budget line with this frontier. This portfolio typically contained a substantially better set of projects than those the company would have selected without the analysis. These kinds of successful DA engagements were conducted at major companies such as Eastman-Kodak (Rzasa et al. 1990; Clemen and Kwit 2001) and Du Pont (Krumm and Rolle 1992). Graphical structuring tools, too, have been essential in project and portfolio management. For example, while financial portfolios are often shown with plots of risk versus return for potential investments, the BCG matrix (Henderson 1970) prompts business managers to view their sets of products holistically by classifying
them into quadrants of dogs, question marks, stars and cash cows, based on the current status and potential for growth and profitability (and companies were advised to prune their businesses to achieve a healthy mix). One related result from early PDA efforts is an analogous risk-return matrix (Owen 1984) where projects are placed into quadrants that reflect value and probability of success. When using such a matrix, decision makers, most notably portfolio managers, are encouraged to build a balanced portfolio with some “bread and butter” projects (high probability, not very high value), some “oysters” (low probability but high potential value) so that there would be a stream of “pearls” (successful projects that were revealed as oysters were developed) and to eliminate “white elephants” (projects with low prospects for success and low potential value). Researchers and consultants have also been developing dedicated decision analysis-based tools for portfolio choice problems. The DESIGN software was developed by Decisions and Designs (which later evolved into the Equity software), with a view to supporting interactive modeling in a decision conferencing setting (von Winterfeldt and Edwards 1986, pp. 397–399). By using project-level costs and benefits as inputs, the software produced what is now regarded as a classic display showing the efficient portfolios in a cost–benefit space. According to Larry Phillips (personal communication), the original motivation for DESIGN was not portfolio choice but negotiation. In this frame, “costs” and “benefits” were viewed not as different objectives of a single decision maker, but rather as the differing, but not completely opposed, objectives of different parties. In the late 1970s, Keefer (1978) and Keefer and Kirkwood (1978) formulated resource allocation decisions as non-linear optimization problems where the objective function could be assessed as multi-attribute utility function. Such combinations of formal optimization and decision analytic preference elicitation called for methodological advances, including the use of mixed integer programming in solving problems with multiple decision trees containing uncertainties. Also, approaches for three point approximations were developed in response to the need to build optimization models where the probability distributions could be estimated at each point (Keefer and Bodily, 1983). In effect, these developments foreshadow current PDA efforts to combine practical methods for assessing uncertainties and for eliciting preferences when deriving inputs for optimization models. At the same time, PDA approaches were applied to guide the development of major governmental project and R&D portfolios by practitioners in organizations such as Woodward Clyde, Inc (Golabi et al. 1981) and US National Laboratories (Peerenboom et al. 1989), in a line of work that continues through today (see, for instance, Chapter 14 by Parnell in this book). In the 1980s and 1990s, various approaches to product portfolio management were developed, and motivated by project management problems, but largely uninformed by the formal discipline of decision analysis. These approaches employ various means of scoring projects heuristically (e.g. innovativeness, risk, profitability) as a step in providing guidance to the management of project lifecycles. Clients of DA firms, internal experts applying DA, and DA consultants were exposed to
these approaches, while specialists in these approaches were in turn exposed to DA. This has led to a mingling of ideas which is not easy to track in detail. This notwithstanding, Cooper et al. (2001) provides an excellent comparison of many of these approaches which lie outside the mainstream of decision analysis. In the 1990s, and largely, but not exclusively in the USA, the portfolio approach gained prominence at several major corporations. Together with this development, decision analysis consulting grew and portfolio consultancy became a major part of its business. Portfolio projects were especially successful in the pharmaceutical industry (e.g. Sharpe and Keelin 1998) and the oil and gas industry (e.g. Skaf 1999; Walls et al. 1995) – perhaps because these industries make large capital outlays, are exposed to uncertain returns, have clearly staged decisions and face large numbers of comparable investment opportunities with a clear separation between the financial and the operational sides of the business. Many, if not most, large companies in these industries have formalized some sort of DA portfolio planning processes. Equally important, large numbers of people from these industries have received formal DA training. Thanks to this breadth of practice, scholars engaged in consulting activities began to codify principles for high-quality R&D decision-making based on the use of DA methods (Matheson and Matheson 1998). Portfolio approaches have evolved with advances in software. For example, when spreadsheets and presentation tools became more widespread and user-friendly, it became possible to harness them in building companywide portfolio processes that were integrated with company project/finance databases. Indeed, companies such as General Motors (Kusnic and Owen 1992; Bordley et al. 1999; Barabba et al. 2002) established DA approaches to portfolio management as part of formal planning processes in their R&D, product groups, and even business units (Bodily and Allen 1999). When these DA groups gained expertise, they tended to develop and manage their processes with less outside help. Subsequently, these practitioners have formed networks such as the Decision Analysis Affinity Group (DAAG). Various smaller consulting firms have also sprouted, some with variations of the SDG type of approach, with a focus on project portfolio management. The resulting proliferation of decentralized activity has spawned innovations at the firm level and also conferences and books about lessons learned (e.g. Pennypacker and Retna 2009). Thus, practitioner-level knowledge sharing proceeds apace, with some, but not necessarily close, connections to the scholarly research community. In the 1990s and 2000s, there has been an expansion of multicriteria decision-making approaches to portfolio problems both in public and private spheres. For example, Phillips and Bana e Costa have championed the application of PDA methods to significant resource allocation problems in the UK and Portuguese public sectors (see Phillips and Bana e Costa 2007, for a description of their approach and experiences). There has also been a trend towards processes that seek to inform decision makers and help them to reduce the set of candidate solutions to a manageable size, partly based on the notion that a "unique" optimum solution is not necessarily best aligned with complex multi-stakeholder processes (Liesiö et al. 2007; Vilkkumaa et al. forthcoming).
Advances in computer and software technology have been important drivers, too, because increasing computing power
and visualization technology have permitted the near real-time presentation of what the model assumptions signify in terms of recommended portfolio decisions. Indeed, recent portfolio management systems (see, e.g. Portfolio Navigator of SmartOrg, Inc.) and integrated enterprise resource management systems collate project data and offer both standard and query-based portfolio graphs and reports. After the turn of the millennium, we have seen a proliferation of PDA approaches that build on many of the above streams. Recent advances by Gustafsson and Salo (2005), Stummer and Heidenberger (2003), Stummer et al. (2009), Kavadias and Loch (2004) and Liesiö et al. (2007), for instance, exemplify new approaches that combine preference assessment techniques, multicriteria methods, decision trees, optimization algorithms and interactive software tools for engaging stakeholders in organizational decision-making processes. The nature of applied work seems to be evolving as well. Some companies in the pharmaceutical industry, for instance, that have employed simpler productivity indices to rank projects are now modeling project-level interactions in more detail and using advanced structured optimization methods. PDA approaches are also finding uses in a growing range of application domains, such as the selection of nature reserves, for instance (see, e.g. Bryan 2010). In conclusion, PDA has well-established roots that go back to the very origins of operations research. It is based on sound approaches for problem structuring, preference elicitation, assessment of alternatives, characterization of uncertainties and engagement of stakeholders. Recent advances in "hard" and "soft" methodologies, together with improved computer technology and software tools, have fostered the adoption of PDA approaches so that many organizations are now explicitly thinking in terms of portfolios. PDA methods have gained ground especially in capital intensive industries such as pharmaceuticals and energy; but they have penetrated even many other industries and public organizations where decision-making activities can benefit from participatory processes (cf. Hämäläinen 2003, 2004) through which the stakeholders' expertise is systematically brought to bear on the decisions.
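As a concrete, deliberately simplified illustration of the "bang-for-the-buck" prioritization and efficient-frontier logic recalled earlier in this section, the following sketch ranks projects by the ratio of expected net present value (ENPV) to cost and funds them greedily within a budget. The project data, function name and budget figure are all invented for illustration; real applications involve far richer project models.

# Illustrative sketch (not from the chapter): greedy "bang-for-the-buck"
# prioritization of projects. Projects are ranked by ENPV/cost and funded
# in that order until the budget is spent; the cumulative (cost, ENPV)
# points trace the efficient frontier. All project data are hypothetical.

projects = [
    # (name, cost, expected net present value)
    ("A", 4.0, 10.0),
    ("B", 2.0, 6.0),
    ("C", 5.0, 9.0),
    ("D", 1.0, 1.5),
]

def bang_for_buck_frontier(projects, budget):
    """Rank projects by ENPV/cost and select greedily within the budget."""
    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    selected, spent, value, frontier = [], 0.0, 0.0, [(0.0, 0.0)]
    for name, cost, enpv in ranked:
        if spent + cost <= budget:
            selected.append(name)
            spent += cost
            value += enpv
            frontier.append((spent, value))
    return selected, frontier

if __name__ == "__main__":
    chosen, frontier = bang_for_buck_frontier(projects, budget=7.0)
    print("Funded portfolio:", chosen)          # here: ['B', 'A', 'D']
    print("Cumulative (cost, ENPV):", frontier)

Because projects are indivisible, ranking by this ratio is a heuristic rather than a guaranteed optimum, which is one reason why the optimization formulations discussed in this section remain relevant.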
1.3 Contributions in this Book The 15 chapters in this book give an overview of the range of perspectives which can be brought to bear on PDA. These chapters are structured in three parts: Preliminaries (of which this “invitation” is the first chapter); Theory; and Applications. The chapters in Preliminaries expand on this chapter and give a stronger sense of what PDA is. The chapters in Theory present new theoretical advances in PDA, while the part on Applications illustrates the diversity and commonality of PDA practice across problem domains. We emphasize that there are important synergies between the three parts: the part on Theory, for example, presents advances that are relevant for future applied work while the chapters in Applications demonstrate the diversity of decision contexts where PDA can be deployed.
1.3.1 Preliminaries In Chapter 2, Keisler tackles the critical question: What is a "good" decision – and a good decision analysis – in a portfolio context? One perspective is provided by the celebrated decision quality chain of Matheson and Matheson (1998): Keisler argues that the dimensions of decision quality proposed by Matheson and Matheson – sound decision framing, diverse and well-specified alternatives, meaningful and accurate information, clear and operationally useful statements of value, logical synthesis, and an orientation towards implementation – provide a useful frame for elaborating the distinctive features of portfolio decisions. Keisler also reminds us that all modeling is a simplification of reality, and distinguishes four different levels of modeling detail in portfolio decisions. Costs and values may be observed more or less accurately; projects may be more or less well-specified; multiple objectives may be taken into account completely, incompletely or not at all; and synergies between projects may be considered to a varying degree of detail. Keisler also summarizes a body of evidence, based on simulation models, which sheds light on the likely loss in value from assuming away a particular aspect of modeling complexity. Thus, his work offers new perspectives on what constitutes a suitably detailed or even "requisite" model (see also Phillips 1984). In Chapter 3, Phillips provides a detailed account of a successful high-profile application of PDA to the design of a battleship for the UK's Royal Navy. His story begins with the breakdown of the Horizon project, a three-nation collaboration to produce a new warship, and the decision by the UK to strike out on its own. Despite this inauspicious start, the author helped the Ministry of Defence to make critical design decisions to arrive at an agreed design concept in only 15 months. A critical element of the multicriteria approach was the use of "decision conferences," facilitated modeling meetings with representation and active engagement from key players and experts. With its rich detail, this chapter invites a careful study by all those who are interested in instructive details of "how to do it." Phillips also argues for the importance of being attentive to the social process, and of the role that PDA can play in helping members of an organization to construct preferences as a step towards agreeing on how to go forward.
1.3.2 Theory The part on theory begins with a chapter “Valuation of Risky Projects and Other Illiquid Investments Using Portfolio Selection Models” where Gustafsson, de Reyck, Degraeve and Salo present an approach to the valuation of projects in a setting where the decision maker is able to invest in lumpy projects (which cannot necessarily be traded in the market) and financial assets (which can be traded). Starting from the assumption that future uncertainties can be captured through scenario trees and that alternative actions for the investment projects can
be explicated for these scenarios, their approach essentially generalizes standard financial methods for project valuation, most notably option pricing and "contingent claims analysis" (the authors call their method "contingent portfolio analysis"), resulting in an approach that is consistent with the standard decision analysis approach to investment under uncertainty. Gustafsson et al. also show that an important property of option pricing methodology, the equality of break-even buying and selling prices, generalizes to their environment. Overall, the approach is a significant and practically relevant extension of existing theory which benefits from the rigor of a DA framework. In Chapter 5, Argyris, Figueira and Morton discuss multicriteria approaches. Building on the themes of Keisler, they discuss what constitutes a "good" DA process. They underscore the dangers of simply assuming, as is common in practice, that value functions over criteria are linear. While endorsing the view that analysis should be an interactive process, they point out that the term "interactive" is somewhat loose and, in fact, compatible with a range of different practices of varying degrees of interactivity. They then present two formal optimization models which can produce multicriteria non-dominated solutions based on limited preference information. They also discuss alternative uses of these models in an interactive setting. A natural question about any problem helping technique is "What does practice actually look like?" In Chapter 6, Stonebraker and Keisler provide intriguing insights into this question in the context of pharmaceutical drug development. In effect, drug development is a highly cost-intensive activity with clear-cut binary outcomes – either it leads to a marketable product or not – and thus one would expect that DA methods are particularly well received in this environment. This is indeed the case, and the (unnamed) company in question has built up an impressive database of its decision analyses. Yet Stonebraker and Keisler's analysis shows that there is considerable variability in the structure of the underlying DA models even in the same organization. This raises the question of whether such differences are warranted while it also points back to themes in Chapter 2: How can analyses of such differences be exploited, in order to improve practice? Applying PDA in practice is essentially a human endeavor. This realization – which we endorse emphatically – implies that psychological and organizational issues are central. In Chapter 7, Fasolo, Morton and von Winterfeldt bring this perspective to bear on portfolio decision making. Taking their cue from the celebrated work of Tversky and Kahneman on heuristics and biases, they observe that much of decision modeling builds on normative assumptions that are made when seeking to guide decision behavior. In particular, they discuss how the (often implicit) assumptions underlying this normative stance can be violated behaviorally, drawing on, first of all, laboratory-based research on individual decision making, and field experience of supporting organizational resource allocation decisions. One of their observations is that the portfolio frame itself can serve as a debiasing device, because it forces decision makers to think of decisions more broadly. In Chapter 8, Arévalo and Rios Insua turn their attention to the technology underpinning portfolio management in the context of innovation management.
They survey web-based tools for innovation management and outline an IT system, called SKITES, for managing a portfolio of innovations either within an organization or in a collaborative network. SKITES users may have a sponsoring or facilitative role, or they may propose or assess innovative projects. The proposed system is based around a workflow in which proposals are sought, screened, evaluated and managed through to delivery of the promised benefits. This chapter serves to remind us of the potential that technology can have in ensuring that the decision-making process unfolds in an orderly manner and that the players involved have access to the tools they need to help them structure choice and make thoughtful decisions.

Chapter 9, the last chapter in the Theory section, draws explicitly on psychological theory (like Fasolo, Morton and von Winterfeldt's chapter) and, like Arévalo and Rios Insua, is concerned with the information technology which underpins PDA. Specifically, the authors, Kiesling, Gettinger, Stummer and Vetschera, are interested in the human–computer interface aspects of PDA software. They report an experiment comparing how users respond to two different information displays in a multicriteria setting – a classical display, namely the parallel coordinates plot, and a more innovative "heatmap" display. Their chapter presents a compelling case that the choice of display makes a striking difference both to how users experience the system and to their actual behavior in exploring the solution space. They conclude, echoing a theme that arises often in this book, that the most appropriate choice of information display may depend on task and user characteristics – that there may be no all-purpose "best" solution.
1.3.3 Applications

In Chapter 10, Le Cardinal, Mousseau and Zheng present an application of an outranking method to the selection of students for an educational program. This set-up differs slightly from the typical set-up in PDA: decisions are not simply yes/no, because the students are assigned to four categories – definitely yes, possibly yes, possibly no and definitely no; moreover, the constraints are not monetary but pertain to demands such as gender balance. Such a problem can be modeled as a sorting problem in which the set of objects is to be partitioned into ordered classes, each with different action implications. They employ the ELECTRE TRI method, a member of the ELECTRE family, combined with a mathematical programming model to arrive at an overall portfolio. Because the direct elicitation of preference parameters can be difficult, they allow the decision maker to provide judgments that characterize attractive solutions and place restrictions on the distribution of the items across the categories. Importantly, this chapter reminds us that in many cases one may wish to take a coordinated system of decisions which are more complex (and thus more interesting) than a simple "do" or "don't do", and that the need for portfolio balance must be taken into account.
Toppila et al. in Chapter 11 present a case study which describes the development of a PDA model for helping a high-technology telecommunication company make investments in its standardization activities. The model explicitly recognizes the uncertainties associated with these standardization investments and admits incomplete information about model parameters. Building on these inputs, the decision model helps determine which standardization activities should be either strengthened or weakened, based on a "core index" metric computed from all non-dominated resource allocations. Another feature of the decision model is that it specifically captures interaction terms, thus distinguishing it from, for example, the model of Phillips (Chapter 3) and contrasting with the advice of Phillips and Bana e Costa (2007), who recommend handling interactions outside the formal model.

In Chapter 12, Montibeller and Franco discuss the implementation of PDA in local government in the UK. They emphasize that local authorities have historically tended to budget in an incremental fashion, and describe an environment in which transparency and accessibility are more important than technical sophistication. Their approach – of which they provide a number of case studies – blends ideas about structuring drawn from the British "Problem Structuring Methods" tradition with the decision conferencing approach already foreshadowed in Chapter 3 by Phillips. They close by presenting some ideas for the development of a standardized toolkit – in the broadest sense, comprising both software and process templates – for practitioners of PDA in this domain.

Chapter 13 by Kloeber deals with the practice of PDA in the pharmaceutical sector. He describes an industry with high costs, long development cycles, and opportunities to earn huge sums of money – or fail utterly. In such an environment, a company's fortunes are critically dependent on the quality of decisions made about R&D investment and the management of the R&D portfolio. Based on extensive consulting experience, Kloeber describes tools for both project-level and portfolio-level analysis, and discusses the role of both probabilistic and multicriteria models.

Burk and Parnell survey PDA activity in the military area in Chapter 14. They point out that in this decision context there is no single dimension of value, and consequently most of the work they survey uses multicriteria methods. Specifically, they present a comprehensive, structured literature review and describe six case studies in detail using a comparative framework which, in terms of its approach, may be useful for other authors looking to perform a structured review of practice. Drawing on this review, they discuss different modeling choices and levels of information (gold, silver, platinum and combined) in model development. Burk and Parnell are not bound to any particular modeling approach but, rather, report experiences from different approaches in different settings. Their chapter thus serves as a reminder of the importance of broadmindedness in modeling and the pragmatic use of whatever tools seem most appropriate for the problem at hand.

In the final chapter, Airoldi and Morton begin with an overview of PDA practice in public sector healthcare systems, especially in settings where the implementing body is a health authority that is responsible for a geographically defined population.
They draw attention to two indigenous traditions in healthcare, starting with the Generalized Cost Effectiveness tradition and continuing with an approach
derived from Program Budgeting and Marginal Analysis which incorporates certain multicriteria ideas. By way of contrast, they then present a case study of their own, using a decision conferencing approach that has some similarities with the work of Phillips (Chapter 3) and Montibeller and Franco (Chapter 12), but draws on ideas in health economics in scaling up benefits to the population level. The population aspect features significantly in Airoldi and Morton's ideas for future work, and they describe concerns about the nature, timing and distribution of benefits and costs which are not explicitly incorporated in their process. They also speculate whether models which handle these important and decision-relevant aspects could be developed without losing the accessibility which comes from the use of simpler models. The chapter therefore serves as a reminder that to be genuinely valuable and useful, PDA methods have to be adapted and contextualized to meet the specific challenges of a given domain.
1.4 Perspectives for the Future

Taken together, the above chapters highlight the diversity of contexts where PDA can be deployed. Combined with the messages in our storyline on the evolution of PDA, they also suggest prospective topics for future work. We structure our open-ended discussion of these topics under three headings. We begin with "Embedding PDA in organizational decision making." We then proceed by addressing topics in relation to "Extending PDA theory, methods and tools" and "Expanding the PDA knowledge base."
1.4.1 Embedding PDA in Organizational Decision Making

As pointed out by the then President of INFORMS, Don N. Kleinmuntz, in a panel discussion on PDA at the EURO XXIV meeting in 2009, the real value of PDA is realized when an organization institutionalizes the use of PDA in its planning processes. There are highly encouraging examples of such institutionalization: for instance, companies such as Chevron have built a strong competitive advantage by making systematic use of PDA methods (Bickel 2010). Yet such institutionalizations are not very common and, at times, when institutionalization has taken place, the early focus on portfolio decisions may have been diluted by data management or other implementation issues. Seen from this perspective, we need to know more about what factors contribute to successful institutionalization. Here, lessons from the broad literature on the diffusion of innovations (Rogers 2005) can offer insights. Further guidance can be obtained from reflective studies of attempted institutionalizations, along the lines of Morton et al. (2011) who have examined such attempted institutionalizations in two areas of the UK public sector. Furthermore, since there exists a rich knowledge base of "one-shot" PDA interventions, it is of interest to study how uses of PDA can be
integrated into organizational decision-making processes on a more recurrent basis. Thus, topics for future activities under this broad heading include the following:

• Transcending levels of organizational decision making: Resource allocation decisions often involve decision makers at different levels of the organization with differentiated roles and responsibilities, which leads to the question of how PDA activities at different levels can best be interlinked. For instance, PDA activities may focus on strategic and long-term perspectives that provide "top-down" guidance for operational and medium-term activities (see, e.g. Brummer et al. 2008). But one can also build "bottom-up" processes where individual departments first carry out their own PDA processes to generate inputs that are taken forward to higher levels of decision making within corporate management teams or executive boards (see, e.g. Phillips and Bana e Costa 2007). Such processes can be complementary, which means that the design of the process that is most appropriate for a given organization calls for careful consideration.

• Interlinking organizations with PDA methods: In many sectors, the boundaries between organizations and their environment are becoming more porous. For example, the competitiveness of industrial firms is increasingly dependent on how effectively they collaborate in broader innovation networks. In parallel with this development, demands for transparency and accountability in policy making have fostered the development of methods which explicitly capture the interests of different stakeholder groups. Hence, PDA methods with group decision support capabilities for inter-organizational planning seem highly promising (see, e.g. Vilkkumaa et al. forthcoming). We therefore believe that more work is needed to explore which PDA approaches best suit problem domains that involve several organizations.

• Re-using PDA models and processes: Deploying a similar PDA process from one year to the next may offer substantial benefits: for instance, the costs of training are likely to be lower, while software tools can also be re-used effectively. But there may be drawbacks as well. For instance, strict adherence to a "standardized" PDA process may stifle creativity or, in some settings, it can be harmful if the characteristics of the decision context change so that the initial PDA formulation is no longer valid, say, due to the emergence of additional decision criteria. As a result, there is a need to understand when PDA models can really be re-used on a recurrent basis. This stands in contrast to most reported case studies, which typically assume that the PDA model is built from scratch.

• Facilitating the diffusion of PDA models across problem domains: Even though the specifics of PDA models vary from one planning context to another, there are nevertheless archetypal models – such as the multi-attribute capital budgeting model – which can be deployed across numerous problem contexts subject to minor variations. This suggests that when PDA models are being developed, it may be useful to extract the salient modeling features of specific applications as a step towards promoting the diffusion of PDA approaches by way of "transplanting" models from one context to another. Yet, in such model transplantation, concerns of model validity merit close attention, because a model that is valid in one problem context may not be so in another.
1.4.2 Extending PDA Theory, Methods and Tools

Embedding PDA in organizational decision making calls for adequate theory, methods and tools. Over the years, the literature on PDA methods has grown so that there are now several tested and well-founded approaches for addressing most of the concerns that are encountered in PDA problems (including uncertainties, constraints, preferences, and interactions). The relevant tool set, too, has become broader and spans tools that range from the relatively simple (e.g. scoring templates) to the quite complicated (e.g. dedicated PDA tools with integrated optimization algorithms).

As an overall remit for extending PDA methods and tools, we believe that the advancement of PDA benefits from an appropriate balance of theory, methods and practice. This means, for instance, that the development of exceedingly complex mathematical PDA models whose complexity is not motivated by the challenges of real problems may be counter-productive. In effect, the modest uptake of some of the most sophisticated R&D planning models can be seen as an indication of the risk of developing models that are impractical for one reason or another; hence, requisite models (Phillips 1984) may prove more useful than the results of more onerous and comprehensive modeling efforts. There are also important tradeoffs even in the framing of PDA processes. That is, while truly comprehensive formulations make it possible to provide more "systemic" recommendations that span more alternatives and even organizational units, such formulations can be more difficult to build, for instance, due to difficulties of establishing criteria that can be meaningfully applied to a broader set of alternatives. All in all, we believe there are many promising avenues for advancing the frontiers of theory, methods and tools:

• Theoretical development: Although PDA inherits much of its theoretical axiomatic machinery from the wider field of decision analysis, interpreting that theory in terms of the particular models used in PDA still requires additional work. A recent example of such an apparent gap in theory is the baseline problem documented in Clemen and Smith (2009) and Morton (2010), where certain plausible procedures for setting the zero for value measurement can lead to rank reversals when the option set is changed. Moreover, while axiomatizations of the portfolio choice problem exist (Fishburn 1992), these axioms seem to require strong additive separability assumptions between projects. Thus, a further direction for theoretical research would be to investigate what sorts of relaxations of these separability assumptions might be possible (see Liesiö 2011).

• Advances in IT and software tools: Progress in ICT, mathematical programming and interactive decision support tools has been an essential enabler of the adoption of PDA approaches: for instance, the computing power of modern PCs makes it possible to solve large portfolio problems, while visualization tools help communicate salient data properties and decision recommendations. Advances in ICT also offer untapped potential for approaching new kinds of problems that
have not yet been addressed with PDA methods. For example, PDA methods can be deployed in collaborative innovation processes for eliciting and synthesizing information from stakeholder groups. A related use of PDA methods is participatory budget formation, where stakeholders have shared interests in resource allocation decisions and the priority-setting process must fulfill demanding quality standards, for instance, due to accountability requirements (Rios Insua et al. 2008; Danielson et al. 2008).

• Interfacing PDA tools with other IT systems: Many PDA software tools are standalone programs (see Lourenço et al. 2008 for a review of currently available tools). Yet portfolio decisions often rely on data that may reside in other IT systems. This would suggest that a greater degree of integration with other IT systems can be helpful in designing decision processes that are linked to sources of information and, moreover, generate information that can be harnessed in other contexts. Such aspirations need to be tempered by the recognition that the development of integrated software solutions and IT systems can be costly and time consuming. Moreover, it is possible that radical changes in organizational structures and processes lead to different requirements for decision support so that existing IT systems lose some of their relevance. These caveats notwithstanding, the identification of viable opportunities for tool integration can expand the use of PDA tools.

• Harnessing incomplete information: Typically, the estimation of parameters for a PDA model requires a large number of judgments which can be time consuming and cognitively burdensome to elicit. There exist some tools which reduce the elicitation effort by providing recommendations based on incomplete information (Liesiö et al. 2007, 2008; Argyris et al. 2011), or which help decision makers understand how sensitive the recommendations are with respect to model parameters (Lourenço et al. 2011); a minimal numerical sketch of this idea follows this list. Nevertheless, there are still significant opportunities for tool development, for instance, by exploring how sensitive the results are to assumptions concerning the underlying representation of value or by transferring ideas from the reasonably well-developed multi-criteria context to the probabilistic context (see, e.g. Liesiö and Salo 2008).

• Building models for the design of PDA processes: Most PDA models help make choices from the set of available alternatives. But analytical models can also assist in the design of PDA-assisted decision support processes. For example, by capturing the salient characteristics of the PDA problem (e.g. number of alternatives, uncertainty of model parameters), it is possible to evaluate alternative PDA process designs ex ante and to guide decisions about screening thresholds or strategies for acquiring information about the alternatives. Indeed, as demonstrated by Keisler (2004), the combination of simulation/optimization approaches for evaluating PDA processes holds much promise in terms of improving the appropriate use of PDA methods.
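To make the incomplete-information idea referenced in the list above more concrete, the following minimal sketch (not the algorithm of Liesiö et al.; the project names, cost figures, value intervals, and budget are all invented for illustration) enumerates the feasible portfolios of a tiny problem, identifies those that are non-dominated when project values are only known to lie in intervals, and computes a simple "core index" for each project, i.e., the share of non-dominated portfolios in which it appears.

```python
from itertools import combinations

# Hypothetical projects: name -> (cost, (value lower bound, value upper bound)).
projects = {
    "A": (4, (4.0, 6.0)),
    "B": (3, (2.0, 5.0)),
    "C": (5, (5.0, 7.0)),
    "D": (2, (1.0, 2.0)),
}
budget = 9

names = list(projects)
feasible = [frozenset(c) for r in range(len(names) + 1)
            for c in combinations(names, r)
            if sum(projects[n][0] for n in c) <= budget]

def dominates(p, q):
    """p dominates q if p is at least as valuable for every value realization
    within the intervals, and strictly better for some realization."""
    only_p, only_q = p - q, q - p
    worst_gap = (sum(projects[n][1][0] for n in only_p)
                 - sum(projects[n][1][1] for n in only_q))
    best_gap = (sum(projects[n][1][1] for n in only_p)
                - sum(projects[n][1][0] for n in only_q))
    return worst_gap >= 0 and best_gap > 0

non_dominated = [p for p in feasible
                 if not any(dominates(q, p) for q in feasible)]

# Core index: share of non-dominated portfolios that include each project.
for n in names:
    core = sum(n in p for p in non_dominated) / len(non_dominated)
    print(n, round(core, 2))
```

Projects with a core index of 1 would be recommended despite the incomplete value information, those with an index of 0 would be rejected, and the remainder would be flagged for further elicitation.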
1.4.3 Expanding the PDA Knowledge Base

This third heading covers topics on which work is needed to improve knowledge of how successful PDA-assisted planning processes can best be enabled. These topics have a strong element of empirical research aimed at a better understanding of the use of PDA methods and tools to improve resource allocation processes:

• Pursuing behavioral research: Decision analysis – more than any other subfield of operations research – has a close relationship with psychology and with behavioral decision theory in particular. From the perspective of psychology, PDA problems have several specific features. First, decision makers may have to consider very large numbers of alternatives, which may pose challenges (for visualization, for instance). Second, portfolio decisions are often taken in planning contexts which involve stakeholders from around the organization. These decisions are therefore social undertakings, which means that social psychological issues (such as what constitutes a persuasive argument) are important. Third, PDA problems may give rise to behavioral biases beyond those considered in the earlier literature on single choice problems. There is consequently a need for studies on when such biases are likely to occur, what impacts these biases have on decision quality, and how they can best be avoided. More generally, there is a need for knowledge about how decisions are shaped by the choice of PDA methodologies and what methods can be expected to work "best" in specific decision contexts.

• Enabling successful facilitation: Facilitation is often a significant part of PDA interventions, especially when the PDA activity takes place in workshops or "decision conferences" (Phillips and Phillips 1993). The existing literature on facilitation – much of which is associated with the discipline of Organizational Development – helps understand these interventions by addressing processual concerns such as the management of power, politics and personalities. In the PDA context, there is an additional challenge in that modeling becomes central in guiding content issues (Eden 1990). There is therefore a need to better understand the role of facilitation skills in PDA (e.g. Ackermann 1996). Such an understanding is vital because becoming an effective facilitator is not easy, and facilitation skills are not taught in most operations research/management science programs.

• Reflective analyses of real PDA-assisted processes: To better understand the preconditions for the successful deployment of PDA methods, there is a need to build evidence from reflective analyses of real case studies. Here, relevant perspectives (see, e.g. Hämäläinen 2004) extend well beyond the choice of PDA methods to the broader characterization of the decision context, including questions about how the PDA process is framed, what roles the participants enact, and what the "status" of the recommendations is as an input to subsequent decisions. Even though the enormous variety of decision problems and the plurality of decision-making cultures may thwart attempts at generalization, these kinds of analyses – which are likely to benefit from the use of systematic frameworks that
resemble those proposed for MCDA methods (see, e.g. Montibeller 2007) – may generate valuable insights into what methods work best as a function of problem characteristics.

• Organizational learning through evaluation: In organizations, PDA interventions are substantial undertakings which, as a rule, should consequently be subjected to appropriate evaluations. These evaluations can serve two purposes. First, they help the organization understand what opportunities for improvement there are and how a re-designed PDA process could constitute an improvement over past practice. Second, if there is a well-established evaluation culture, consistent evaluations serve to build a database of "what works, where, and when." One attendant challenge for research is to develop practical evaluation frameworks for PDA, for instance, along the lines discussed by Schilling et al. (2007).

• Strengthening skill sets: In PDA, skills for appropriate framing and scoping are pivotal because they lay the foundation for the analysis and co-determine how effectively PDA will inform organizational decision making (see, e.g. Spetzler 2007). Furthermore, skills that pertain to modeling – such as the choice of functional representation or the specification of model parameters – are crucial, too, because even seemingly innocuous modeling assumptions (such as the choice of baselines; Clemen and Smith 2009) may have repercussions for the recommendations. As a result, those in charge of PDA processes need to master a broad range of skills, including social skills that help organizations articulate their objectives and technical skills that are needed to make well-founded choices among alternative approaches.

In summary, PDA has reached its current position through an evolutionary path, enabled through the fruitful interplay of theory, methods and practice. At present, organizations are arguably faced with growing challenges in their decision making. In many business environments, for instance, firms must reach decisions more quickly in order to reap profits under accelerated product life cycles, while the complexity of the issues at stake may make it imperative to involve more experts and stakeholders. Similarly, in the public sphere, legitimate demands for accountability and the efficiency of resource allocations impose ever more stringent quality requirements on decision processes. In this setting, PDA is well positioned to address these kinds of challenges, thanks to the concerted efforts of researchers and practitioners who pursue opportunities for theoretical and methodological work and, by doing so, help organizations improve their decision making.
References

Aaker DA, Tyebjee TT (1978) A model for the selection of interdependent R&D projects. IEEE Trans Eng Manage EM-25:30–36
Ackermann F (1996) Participants' perceptions of the role of facilitators using a GDSS. Group Decis Negot 5:93–112
Argyris N, Figueira JR, Morton A (2011) A new approach for solving multi-objective binary optimisation problems, with an application to the knapsack problem. J Global Optim 49:213–235 Artzner P, Delbaen F, Eber J-M, Heath D (1999) Coherent measures of risk. Math Finance 9(3):203–228 Asher DT (1962) A linear programming model for the allocation of R and D efforts. IRE Trans Eng Manage EM-9:154–157 Barabba V, Huber C, Cooke F, Pudar N, Smith J, Paich M (2002) A multimethod approach for creating new business models: the General Motors OnStar project. Interfaces 32(1):20–34 Bickel E (2010) Decision analysis practice award. Decis Anal Today 29(3):8–9 Bodily SE, Allen MS (1999) A dialogue process for choosing value-creating strategies. Interfaces 29(6):16–28 Bouyssou D, Vansnick J-C (1986) Noncompensatory and generalized noncompensatory preference structures. Theory Decis 21:251–266 Bordley RF, Beltramo M, Blumenfeld D (1999) Consolidating distribution centers can reduce lost sales. Int J Prod Econ 58(1):57–61 Brans JP, Vincke P (1985) A preference ranking organisation method: (The PROMETHEE method for multiple criteria decision-making). Manage Sci 31(6):647–656 Brealey RA, Myers S (2004) Principles of corporate finance, 7th edn. McGraw-Hill, New York Brummer V, K¨onn¨ol¨a T, Salo A (2008) Foresight within ERA-NETs: experiences from the preparation of an international research program. Technol Forecast Soc Change 75(4):483–495 Bryan BA (2010) Development and application of a model for robust, cost-effective investment in natural capital and ecosystem services. Biol Conserv 143(7):1737–1750 Buede DM (1979) Decision analysis: engineering science or clinical Art? Decisions and Designs, McLean Clemen RT, Kwit RC (2001) The value of decision analysis at Eastman Kodak company. Interfaces 31(5):74–92 Clemen RT, Smith JE (2009) On the choice of baselines in multiattribute portfolio analysis: a cautionary note. Decis Anal 6(4):256–262 Cooper RJ, Edgett S, Kleinschmidt E (2001) Portfolio management for new products, 2nd edn.. Perseus, Cambridge Danielson M, Ekenberg L, Ekengren A, H¨okby T, Lid´en J (2008) Decision support process for participatory democracy. J Multi-Criteria Decis Anal 15(1–2):15–30 de Finetti B (1964) Foresight: its logical laws, its subjective sources (translation of the article La Pr´evision: ses lois logiques, ses sources subjectives, Annales de l’Institut Henri Poincar´e, 1937). In: Kyburg HE, Smokler HE (eds) Studies in subjective probability. Wiley, New York Eden C (1990) The unfolding nature of group decision support: two dimensions of skill. In: Eden C, Radford J (eds) Tackling strategic problems: the role of group decision support. Sage, London Edwards W (1954) The theory of decision making. Psychol Bull 51(4):380–417 Edwards W (1961) Behavioral decision theory. Ann Rev Psychol 12:473–498 Fishburn PC (1964) Decision and value theory. Wiley, New York Fishburn PC (1970) Utility theory for decision making. Wiley, New York Fishburn PC (1992) Utility as an additive set function. Math Oper Res 17 (4):910–920 Fox B (1966) Discrete optimization via marginal analysis. Manage Sci 13(3):210–216 Friend JK, Jessop WN (1969) Local government and strategic choice. Tavistock, London Golabi K, Kirkwood CW, Sicherman A (1981) Selecting a portfolio of solar energy projects using multiattribute preference theory. Manage Sci 27:174–189 Grayson CJ (1960) Decisions under uncertainty: drilling decisions by oil and gas operators. 
Graduate School of Business, Harvard University, Boston Gustafsson J, Salo A (2005) Contingent portfolio programming for the management of risky projects. Oper Res 53(6):946–956 H¨am¨al¨ainen RP (2003) Decisionarium – aiding decisions negotiating and collecting opinions on the Web. J Multi-Criteria Decis Anal 12(2–3):101–110 H¨am¨al¨ainen RP (2004) Reversing the perspective on the applications of decision analysis. Decis Anal 1(1):26–31
Haveman RH, Margolis J (eds) (1970) Public expenditures and policy analysis. Markham, Chicago Heidenberger K, Stummer C (1999) Research and development project and resource allocation: a review of quantitative approaches. Int J Manage Rev 1(2):197–224 Henderson BD (1970) Perspectives on the product portfolio. Boston Consulting Group, Boston Henriksen A, Traynor A (1999) A practical R&D project-selection scoring tool. IEEE Trans Eng Manage 46(2):158–170 Howard RA (1966) Decision analysis: applied decision theory. Proceedings of the Fourth International Conference on Operational Research, Wiley-Interscience, New York, pp 55–71 Howard RA (1968) The foundations of decision analysis. IEEE Trans Syst Man Cybern SSC4:211–219 Howard RA (1988) Decision analysis: practice and promise. Manage Sci 34(6):679–695 Howard RA, Matheson JE (eds) (1984) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park Kavadias S, Loch CH (2004) Project selection under uncertainty: dynamically allocating re-sources to maximize value. Kluwer, Dordrecht Keefer DL (1978) Allocation planning for R & D with uncertainty and multiple objectives. IEEE Trans Eng Manage EM-25:8–14 Keefer DL, Bodily SE (1983) Three-point approximations for continuous random variables. Manage Sci 29(5):595–603 Keefer DL, Kirkwood CW (1978) A multiobjective decision analysis: budget planning for product engineering. J Oper Res Soc 29:435–442 Keeney RL (1982) Decision analysis: an overview. Oper Res 30(5):803–838 Keeney RL, Raiffa H (1976) Decisions with multiple objectives: preferences and value trade-offs. Wiley, New York Keisler J (2004) Value of information in portfolio decision analysis. Decis Anal 1(3):177–189 Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winter-feldt D (eds) Advances in decision analysis. Cambridge University Press, Cambridge Krantz DH, Luce DR, Suppes P, Tversky A (1971) Foundations of measurement, vol. I. Academic, New York Krumm FV, Rolle CF (1992) Management and application of decision analysis in Du Pont. Interfaces 22(6):84–93 Kusnic MW, Owen D (1992) The unifying vision process: value beyond traditional decision analysis in multiple-decision-maker environments. Interfaces 22(6):150–166 Lee SM, Lerro AJ (1974) Capital budgeting for multiple objectives. Financ Manage 3(1):51–53 Liesi¨o J (2011) Measurable multiattribute value functions for project portfolio selection and resource allocation. Aalto University, Systems Analysis Laboratory, manuscript Liesi¨o J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181(3):1488–1505 Liesi¨o J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695 Liesi¨o J, Salo A (2008) Scenario-based portfolio selection of investment projects with incomplete probability and utility information. Helsinki University of Technology, Systems Analysis Laboratory Research Reports, E23 Loch CH, Kavadias S (2002) Dynamic portfolio selection of NPD programs using marginal returns. Manage Sci 48(10):1227–1241 Lorie JH, Savage LJ (1955) Three problems in rationing capital. J Business 28(4):229–239 Lourenc¸o JC, Bana e Costa C, Morton A (2011) PROBE: a multicriteria decision support system for portfolio robustness evaluation (revised version). LSE Operational Research Group Working Paper Series, LSEOR 09.108. 
LSE, London Lourenc¸o JC, Bana e Costa C, Morton A (2008) Software packages for multi-criteria resource allocation. In: Proceedings of the international management conference, Estoril, Portugal Markowitz HM (1952) Portfolio selection. J Finance 7(1):77–91 Matheson D, Matheson JE (1998) The Smart Organization: creating value through strategic R&D. Harvard Business School Press, Boston
Mintzberg H (1994) The rise and fall of strategic planning: reconceiving the roles for planning, plans, planners. Prentice-Hall, Glasgow Montibeller G (2007) Action-researching MCDA interventions. In: Shaw D (ed), Keynote Papers, 49th British Operational Research Conference (OR 49), 4–6 Sep, University of Edinburgh. The OR Society, Birmingham Morton A (2010) On the choice of baselines in portfolio decision analysis. LSE Management Science Group working paper, LSEOR10.128. LSE, London Morton A, Bird D, Jones A, White M (2011) Decision conferencing for science prioritisation in the UK public sector: a dual case study. J Oper Res Soc 62:50–59 Mottley CM, Newton RD (1959) The selection of projects for industrial research. Oper Res 7(6):740–751 Owen DL (1984) Selecting projects to obtain a balanced research portfolio. In: Howard RA, Matheson JE (eds) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park Peerenboom JP, Buehring WA, Joseph TW (1989) Selecting a portfolio of environmental programs for a synthetic fuels facility. Oper Res 37(5):689–699 Pennypacker J, Retna S (eds) (2009) Project portfolio management: a view from the trenches. Wiley, New York Phillips LD (1984) A theory of requisite decision models. Acta Psychol 56(1–3):29–48 Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68 Phillips LD, Phillips MC (1993) Facilitated work groups: theory and practice. J Oper Res Soc 44(3):533–549 Pratt JW, Raiffa H, Schlaifer R (1965) Introduction to statistical decision theory. McGraw-Hill, New York Raiffa H (1968) Decision analysis. Addison-Wesley, Reading Raiffa H, Schlaifer R (1961) Applied statistical decision theory. Harvard Business School, Boston Ramsey FP (1978) Truth and probability. In: Mellor DH (ed) Foundations: essays in philosophy, logic, mathematics and economics. Routledge & Kegan Paul, London Rios Insua D, Kersten GE, Rios J, Grima C (2008) Towards decision support for participatory democracy. In: Burstein F, Holsapple CW (eds) Handbook on decision support systems 2. Springer, Berlin, pp 651–685 Rogers EM (2005) Diffusion of INNOVATIONS (5th edn). Free Press, New York Roussel PA, Saad KN, Erickson TJ (1991) Third generation R&D: managing the link to corporate strategy. Harvard Business School Press, Boston Roy B (1985) M´ethodologie multicrit`ere d’aide a` la d´ecision. Economica, Paris Roy B (1991) The outranking approach and the foundations of ELECTRE methods. Theory Decis 31:49–73 Ruusunen J, H¨am¨al¨ainen RP (1989) Project selection by an integrated decision aid. In: Golden BL, Wasil EA, Harker PT (eds) The analytic hierarchy process: applications and studies. Springer, New York, pp 101–121 Rzasa P, Faulkner TW, Sousa NL (1990) Analyzing R&D portfolios at Eastman Kodak. Res Technol Manage 33:27–32 Saaty TL (2005) The analytic hierarchy and analytic network processes for the measurement of intangible criteria and for decision-making. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Kluwer, Boston, pp 345–408 Salo A, H¨am¨al¨ainen RP (1997) On the measurement of preferences in the analytic hierarchy process. J Multi-Criteria Decis Anal 6(6):309–319 Salo A, H¨am¨al¨ainen RP (2010) Multicriteria decision analysis in group decision processes. In: Kilgour DM, Eden C (eds) Handbook of group decision and negotiation. 
Springer, New York, pp 269–283 Savage LJ (1954) The foundations of statistics. Wiley, New York Schilling M, Oeser N, Schaub C (2007) How effective are decision analyses? Assessing decision process and group alignment effects. Decis Anal 4(4):227–242
Schwartz SL, Vertinsky I (1977) Multi-attribute investment decisions: a study of R&D project selection. Manage Sci 42(3):285–301 Shane SA, Ulrich KT (2004) Technological innovation, product development, and enterpreneurship in management science. Manage Sci 50(2):133–144 Sharpe WF (1964) Capital asset prices: a theory of market equilibrium under conditions of risk. J Finance 19:425–442 Sharpe P, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Business Rev 76(2):45–46 Skaf MA (1999) Portfolio management in an upstream oil and gas organization. Interfaces 29(6):84–104 Souder WE (1973) Analytical effectiveness of mathematical models for R&D project selection. Manage Sci 19(8):907–923 Spetzler CS (2007) Building decision competency in organizations. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. Cambridge University Press, Cambridge Stummer C, Heidenberger K (2003) Interactive R&D portfolio analysis with project interdependencies and time profiles of multiple objectives. IEEE Trans Eng Manage 50:175–183 Stummer C, Kiesling E, Gutjahr WJ (2009) A multicriteria decision support systems for competence-driven project portfolio selection. Int J Inform Technol Decis Making 8(2):379–401 Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131 Vilkkumaa E, Salo A, Liesi¨o J (forthcoming) Multicriteria portfolio modeling for the development of shared action agendas. Group Decis Negot von Neumann J, Morgenstern O (1947) Theory of games and economic behavior. Princeton University Press, Princeton von Winterfeldt D, Edwards W (1986) Decision analysis and behavioral research. CUP, Cambridge Walls MR, Morahan GT, Dyer SD (1995) Decision analysis of exploration opportunities in the onshore US at Phillips petroleum company. Interfaces 25(6):39–56 Weingartner HM (1966) Capital budgeting of interrelated projects: survey and synthesis. Manage Sci 12(7):485–516 Weingartner HN (1967) Mathematical programming and the analysis of capital budgeting problems. Markham, Chicago
Chapter 2
Portfolio Decision Quality
Jeffrey Keisler
Abstract The decision quality framework has been useful for integrating decision analytic techniques into decision processes in a way that adds value. This framework extends to the specific context of portfolio decisions, where decision quality is determined at both the project level and the portfolio level, as well as in the interaction between these two levels. A common heuristic says that the perfect amount of decision quality is the level at which the additional cost of improving an aspect of the decision is equal to the additional value of that improvement. A review of several models that simulate portfolio decision-making approaches illustrates how this value added depends on characteristics of the portfolio decisions, as does the cost of the approaches.
2.1 Introduction

The nature of portfolio decisions suggests particularly useful interpretations for some of the elements of decision quality (DQ). Because of the complexity of these problems, portfolio decision makers stand to benefit greatly from applying DQ. This chapter aims to facilitate such an application by characterizing the role of DA in portfolios; describing elements of portfolio decision quality (PDQ); defining levels of achievement on different dimensions of DQ; relating such achievement to value added (using a value-of-information analogy); and considering the drivers of cost in conducting portfolio decision analysis (PDA).

It is helpful to start by characterizing the role of DA in portfolio problems, particularly in contrast to the role of optimization algorithms. Portfolio optimization algorithms range from simple to quite complex, and may require extensive data
inputs. DA methods largely serve to elicit and structure these often subjective inputs, thereby improving the corresponding aspects of DQ.

A portfolio decision process should frame the decision problem by defining what is in the portfolio under consideration and what can be considered separately, who is to decide, and what resources are available for allocation. At their simplest, portfolio alternatives are choices about which proposed activities are to be supported, e.g., through funding. But there may also be richer alternatives at the project level, or different possible funding levels, as well as portfolio-level choices that affect multiple entities. For each project, information may be needed about the likelihood of outcomes and costs associated with investments – and for portfolio decisions especially, the resulting estimates should be consistent and transparent across projects. The appropriate measure of the value of the portfolio may be as simple as the sum of the project expected monetary values, or as complex as a multiattribute utility function of individual project and aggregate portfolio performance; this involves balancing tradeoffs among different attributes as well as between risk and return, short term and long term, small and large, etc. Decisions that logically synthesize information, alternatives and values across the portfolio can be especially complex, as they must comprehend many possible interactions between projects. Implementation of portfolio decisions requires careful scheduling and balancing to assure that sufficient resources and results will be available when projects need them.

In practice, it has been common to calculate the value added by PDA as the difference between the value of the "momentum" portfolio that would have been funded without the analysis and the value of the (optimal) portfolio ultimately funded. This idea can be refined to understand the value added by specific analysis that focuses on any of the DQ elements. This is similar to an older idea of calculating the value of analysis by calculating the value of information "revealed" to the decision maker by the analysis, and it suggests specific, conceptually illuminating, structural models for the value of different improvements to portfolio decision quality.

It is productive to think in terms of four steps in the different dimensions of DQ: no information, information about the characteristics of the portfolio in general, partial specific information about some aspects of the portfolio in a particular situation, or complete information about those aspects. These steps often correspond (naturally, as it were) to discrete choices about formal steps to be used in the portfolio decision process. Thus, it is possible to discuss clearly rather detailed questions about the value added by PDA efforts.

This chapter synthesizes some of my past and current research (some mathematical, some simulation, some empirical) where this common theme has emerged. Without going to the level of detail of the primary research reports, this chapter describes how various PDQ elements can be structured for viewing through this lens. A survey of results shows that no one step is clearly most valuable. Rather, for each step, there are conditions (quantitative characteristics of the portfolio) that make quality relatively more or less valuable (perhaps justifying a low-cost pre-analysis or an organizational diagnosis step assessing those conditions).
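The "momentum versus optimal portfolio" calculation described above can be illustrated with a toy example. The sketch below is only an illustration: the project names and figures are invented, the momentum portfolio is crudely stood in for by a largest-projects-first rule, and the optimum is found by brute force rather than by any of the simulation models discussed later in the chapter.

```python
from itertools import combinations

# Hypothetical projects: (name, cost, expected value); all figures invented.
projects = [("P1", 5, 9), ("P2", 4, 5), ("P3", 3, 6), ("P4", 6, 8), ("P5", 2, 3)]
budget = 10

def value(sel):
    return sum(v for _, _, v in sel)

def cost(sel):
    return sum(c for _, c, _ in sel)

# "Momentum" portfolio: a stand-in for what would be funded without analysis,
# here the largest (most visible) projects taken in cost order until the budget runs out.
momentum = []
for p in sorted(projects, key=lambda p: -p[1]):
    if cost(momentum) + p[1] <= budget:
        momentum.append(p)

# Optimal portfolio: brute-force search over all feasible subsets.
optimal = max((list(c) for r in range(len(projects) + 1)
               for c in combinations(projects, r)
               if cost(list(c)) <= budget), key=value)

print("momentum value:", value(momentum))
print("optimal value :", value(optimal))
print("value added by the analysis:", value(optimal) - value(momentum))
```

With these numbers the momentum portfolio is worth 13 while the optimum is worth 18, so the analysis would be credited with 5 units of value; the same difference can be computed after improving any single DQ element to attribute value to that element alone.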
Several related queries help to clarify how PDQ could apply in practical settings. We consider (briefly) how choices about analysis affect the cost of analysis, e.g., number of variables, number of assessments, number of meetings, etc. Because these choices go hand-in-hand with DQ levels that can be valued, this provides a basis for planning to achieve DQ. We review (briefly) case data from several organizations to understand where things stand in current practice and which elements of DQ might need reinforcement. Finally, in order to illustrate how the PDQ framework facilitates discussion of decision processes, we review several applications that describe approaches taken. The chapter concludes with a PDQ-driven agenda to motivate future research. A richer set of decision process elements could be modeled in order to deepen the understanding of what drives PDQ. Efforts to find more effective analytic techniques can be focused on the aspects of portfolio decisions that have the highest potential value from increased quality. Alternatively, in areas where lower quality may suffice, research can focus on finding techniques that are simpler to apply. The relative value of PDQ depends on the situation, and it may be that simple and efficient approaches ought to be refined for one area in one application setting, while more comprehensive approaches could be developed for another setting.
2.2 Decision Portfolios

A physical portfolio is essentially a binder or folder in which some related documents are carried together, a meaning which arises from the Latin roots port (carry) and folio (leaf or sheet). Likewise, an investment portfolio is a set of individual investments which a person or a firm considers as a group, while a project portfolio is a set of projects considered as a group. PDA applies decision analysis (DA) to make decisions about portfolios of projects, assets, opportunities, or other objects. In this context, we can speak more generally of (a) portfolio (of) decisions. We define a portfolio decision as a set of decisions that we choose to consider together as a group. I emphasize the word choose because the portfolio is an artificial construct – an element is a member of the portfolio only because the person considering the portfolio deems it so. I emphasize the word decisions because PDA methods are applied to decisions and not to projects or anything else.

A portfolio of project decisions starts with a portfolio of projects and (often) maps each project simply to the decision of whether or not to fund it. Likewise, other portfolios of concern may be mapped to portfolios of decisions. Considering these decisions in concert is harder than considering them separately. There are simply more facts to integrate and there are coordination costs. Therefore, decisions are (or ought to be) considered as a portfolio only when there is some benefit in doing so. That benefit is typically that the best choice in one decision depends on the status of the other decisions.
Brown 1978). This approach is hard to apply in general. With portfolio decisions, there are reasons why it is not so hopeless. The ultimate value resulting from a portfolio decision is a function of the analytic strategy used (the steps taken to improve decision quality in different dimensions) and certain portfolio characteristics. I have used an approach that has been dubbed optimization simulation (Nissinen 2007) to model the value added by the process. This approach works if there is an appropriate way to specify the steps taken and to quantify the portfolio characteristics. I am not sure if it is applicable as a precise tool to plan efforts for a specific portfolio decision, but it is able to generate results for simulated portfolios, and these give insight that may guide practice.
2.4 Portfolio Decision Quality

We now discuss how the decision quality framework applies in the context of portfolio decisions. Note that this is a perspective piece, and so the classifications that follow are somewhat arbitrary (as may be the classifications of the original decision quality framework), but they are intended as a starting point for focusing discussion on the portfolio decision process itself.
2.4.1 Framing

In DA practice, "framing" (drawing rather loosely on Tversky and Kahneman's (1981) work on framing effects) typically refers to understanding what is to be decided and why. In many decisions (single or portfolio), before even discussing facts, it is necessary to have in mind the right set of stakeholders, as it is their views that drive the rest of the process. There are various methods for doing so (e.g., McDaniels et al. 1999) and, because portfolios involve multiple decisions, it is typical for there to be multiple stakeholders associated with them.

In portfolio settings, a first step in framing is to determine what the portfolio of decisions is. Sometimes the decisions are simple fund/do-not-fund decisions for a set of candidate projects. In other situations, it is not as clear. For example, there might be a set of physical assets, the disposition of each of which has several dimensions that can be influenced by a range of levers. Then the portfolio might be viewed in terms of the levers (what mix of labor investment, capital improvements, new construction, outsourcing, and closure should be applied), the dimensions of disposition (how much should efforts be directed toward achieving efficiency, effectiveness, quality, profitability, growth, and societal benefit), the assets themselves (how much effort should be applied to each site), or richer combinations of these.
Once the presenting portfolio problem has been mapped to a family of decisions, a key issue in framing is to determine what is in and out of the portfolio of decisions. The portfolio is an artificial construct, and decisions are considered together in a portfolio when the decision maker deems it so. There is extra cost in considering decisions as a portfolio rather than as a series of decentralized and one-off decisions. Thus, the decisions that should be joined in a portfolio are the ones where there is enough benefit from considering them together to offset the additional cost (note: Cooper et al. 2001 discuss the related idea of strategic buckets, elements of which are considered together). Such benefit arises when the optimal choice on one decision depends on the status of other decisions or their outcomes.

A common tool for framing decisions is the decision hierarchy (Howard 2007), which characterizes decisions as already decided policy, downstream tactics (which can be anticipated without being decided now), out of scope (and not linked to each other in an important way at this level), or as the strategic decisions in the current context. Portfolios themselves may also form a hierarchy, e.g., a company has a portfolio of business units, each of which has a portfolio of projects (e.g., Manganelli and Hagen 2003). The decision frame, like a window frame, determines what is in view and what is not – what alternatives could reasonably be considered, what information is relevant, what values are fundamental for the context, etc.

In the standard project selection problem, the most basic interaction between decisions arises from their drawing on the same set of constrained resources. Thus, with a vector of decision variables X = (x1, ..., xn) (e.g., the amount of funding for projects 1 to n, or simply an indicator of whether or not to fund each project), the decision problem is max_X V(X) subject to C(X) ≤ B. The first part of the framing problem is to determine the elements of X, and the related question of identifying the constraints B. The set of all project proposals can be divided into clusters in terms of timing (e.g., should all proposals received within a given year be considered in concert, or should proposals be considered quarterly?), department or division (should manufacturing investments compete for funds alongside marketing investments?), geography, level, or other dimensions. Montibeller et al. (2009) give some attention to this grouping problem. Even if there are no interactions between projects other than competition for resources, the larger the set of projects considered within a portfolio, the less likely it is that productive projects in one group (often a business unit or department) will go without resources while unproductive projects elsewhere (often in another business unit or department) obtain them.

Related to the bounding of decisions in the portfolio problem is defining the constraints. Of course, a portfolio consisting of all proposals received in a 3-month period ought to involve allocating a smaller budget than a larger portfolio of proposals received over a longer time. But sometimes the budget has to be an explicit choice. For example, Sharpe and Keelin (1998) describe their success with the portfolio at SmithKline Beecham, and in particular, how they persuaded management to increase the budget for the portfolio when the analysis showed that there were worthwhile projects that could not be funded. In this case, the
higher budget required the company to align its R&D strategy with its financial strategy for engaging in capital markets, essentially allowing the portfolio of possible investments to spread over a greater time span. In other cases, allowing the portfolio budget to vary could imply that R&D projects are competing for funds with investments in other areas, so that the relevant portfolio contains a wider range of functions. We see here that portfolio decisions involve distinctions that are not salient in general. Mapping from a portfolio of issues to a portfolio of decisions (what is to be decided) frames at the level of the individual projects, although this also necessarily drives how analysis proceeds at the portfolio level. Scoping frames explicitly at the portfolio level, but it automatically affects the primary question at the project level of whether a proposal is even under consideration. Bounding the solution space in terms of budget/resource constraints frames at the portfolio level, and has little bearing on efforts at the project level.
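The framing point about pooling decisions into one portfolio can be made concrete with the selection problem max_X V(X) subject to C(X) ≤ B sketched above. The following is only an illustration: the two departments, the proposal figures, and the fifty-fifty budget split are all invented, and a brute-force search stands in for whatever optimization would be used in practice.

```python
from itertools import combinations

# Hypothetical proposals per department: (name, cost, value); all numbers invented.
dept_a = [("A1", 4, 10), ("A2", 3, 7), ("A3", 2, 5)]
dept_b = [("B1", 4, 3), ("B2", 3, 2), ("B3", 2, 2)]

def best(projects, budget):
    """Brute-force solution of max V(X) s.t. C(X) <= B over 0-1 selections X."""
    feasible = (list(c) for r in range(len(projects) + 1)
                for c in combinations(projects, r)
                if sum(p[1] for p in c) <= budget)
    return max(feasible, key=lambda sel: sum(p[2] for p in sel))

# Framing 1: two separate portfolios, each with half of the total budget.
separate = best(dept_a, 5) + best(dept_b, 5)
# Framing 2: one pooled portfolio with the combined budget.
pooled = best(dept_a + dept_b, 10)

print("separate budgets:", sum(p[2] for p in separate))
print("pooled budget   :", sum(p[2] for p in pooled))
```

With these numbers the pooled framing funds all of the productive department A proposals (total value 22), whereas the separate framing spends part of the same total budget on the less productive department B proposals (total value 16), illustrating why widening the set of decisions considered together can pay for the extra coordination cost.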
2.4.2 Alternatives

In the DQ framework, the quality of alternatives influences the quality of the decision because, if the best alternatives are not under consideration, they will simply not be selected. High-quality alternatives are said to be well specified (so that they can be evaluated correctly), feasible (so their analysis is not a waste of time), and creative (so that surprising potential sources of value will not be overlooked). In a portfolio, there are alternatives defined at the project level (essentially, these are mutually exclusive choices with respect to one object in the portfolio) and at the portfolio level (e.g., the power set of projects).

At the project level, the simplest alternatives are simply "select" or "don't select." A richer set of alternatives may contain different funding levels and a specification of what the project would be at each funding level; although these variations take time to prepare, richer variation at the project level allows for a better mix at the portfolio level. A related issue is that the level of detail with which projects are defined determines what may be recombined at the portfolio level. For example, if a company is developing two closely related products, it may or may not be possible to consider portfolio alternatives including one but not the other product, depending on whether they are defined as related projects (with detail assembled for each) or as a single project.

At the portfolio level, one may consider as the set of alternatives the set of all feasible combinations of project-level decisions. Here, we get into issues of what is computationally tractable, as well as what is practical for incorporating needed human judgments. How alternatives are defined (and compared) at the portfolio level affects what must be characterized at the project level. Where human input is needed, high-quality portfolio-level alternatives may be organized with respect to events (following Poland 1999), objectives (following value-focused thinking, Keeney 1996), resources (e.g., strategy tables, as in Spradlin and Kutoloski 1999), or constraints, or via interactive decision support tools (e.g., visualization methods such as heatmaps, as in Kiesling et al. 2011).
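The effect of richer project-level alternatives on the portfolio-level alternative set can be illustrated with a small sketch. The funding levels, costs, and values below are invented, and exhaustive enumeration is used only because the example is tiny; the point is simply that each portfolio alternative now picks one of several mutually exclusive options per project rather than a yes/no.

```python
from itertools import product

# Hypothetical funding levels per project: (cost, value); level (0, 0) means "do not fund".
levels = {
    "P1": [(0, 0), (3, 5), (5, 8)],   # none / partial / full funding
    "P2": [(0, 0), (2, 3), (4, 5)],
    "P3": [(0, 0), (4, 6), (6, 7)],
}
budget = 9

names = list(levels)
best_choice, best_value = None, -1
# A portfolio alternative picks exactly one funding level for each project.
for combo in product(*(levels[n] for n in names)):
    cost = sum(c for c, _ in combo)
    value = sum(v for _, v in combo)
    if cost <= budget and value > best_value:
        best_choice, best_value = dict(zip(names, combo)), value

print("best mix:", best_choice, "value:", best_value)
```

With only yes/no choices there would be 8 portfolio alternatives to compare; with three funding levels per project there are 27, and the best mix here partially funds projects rather than simply dropping one, which is exactly the richer recombination the text describes.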
2.4.3 Information

The quality of information about the state of the world (especially as it pertains to the value of alternatives) is driven by its completeness, precision, and accuracy. High-quality information about what is likely to happen enables estimates that closely predict the actual value of alternatives. In portfolio decisions, the quality of information at the project level largely has the same drivers. But rather than feeding a go/no-go decision about the project in isolation, this information feeds choices about what is to be funded within a portfolio, and this can mean that less detailed estimates of project value are needed. On the other hand, because projects may be in competition for the same resources, project-level information about costs may be more significant in feeding the portfolio decision. Furthermore, as has been noted, consistency across projects is important – a consistent bias may have less impact on the quality of the portfolio decision (and its implementation) than a smaller bias that is less consistent. At the level of the portfolio, interactions between projects are important – synergies and dissynergies, dynamic dependencies/sequencing, and correlations may all make the value of the portfolio differ from the value of its components considered in isolation. When these characteristics are present, the search for an optimal portfolio is not so simple as ranking projects in order of productivity index to generate the efficient frontier. Therefore, it is important not only to collect this portfolio information appropriately (and perhaps iteratively), but also to structure it so as to facilitate later operations.
2.4.4 Values

In decision analysis and other prescriptive approaches to decision making, decision makers seek to select the most preferred option. Much effort may go into identifying preferences and values in order to facilitate that selection. In valuing a portfolio of projects, since the options are different possible portfolios, it helps to construct a value function that comprehends preferences and then represents them in a form amenable to making the necessary comparisons, i.e., the options that have higher values ought to be the ones the decision maker prefers. If project value is additive, then high quality on values requires mainly that the value function include the right attributes and the right weights for each project. If the value function is additive across attributes and across projects, then the best portfolio decision arises from specifying the right value function and applying it consistently across the set of projects. A more complex value function, e.g., a nonlinear multi-attribute utility function over the sum of project-level contributions, requires that the projects be characterized and measured in the right terms, but the hard judgments must be made at the portfolio level. Finally, with portfolio decisions there are often more stakeholders affected by the set of projects, and who therefore have values to be integrated. This is an area within PDA where there are numerous common approaches.
Of particular note, at the portfolio level there is often a range of interacting objectives (or constraints). Rather than undertaking the sometimes prohibitive task of formally structuring a utility function for them all, the portfolio manager may strive for "balance" (see Cooper et al. 2001, the "SDG grid" in Owen 1984 and elsewhere, or Farquhar and Rao 1976), which is commonly done through 2 × 2 matrices showing how much of one characteristic and how much of another each project has. This could include balance between risk and return, across risks (i.e., diversification), benefits and costs, short term and long term, internal and external measures, one set of stakeholders and another (i.e., fairness), resources of different types used, balance over time, or other characteristics. Balance grids are easier to work with conceptually than are convex multi-attribute utility functions with interaction terms. With such grids, and with projects mapped to them, the decision maker can then envision the effect of putting in or pulling out individual projects, thus directly relating decisions about individual projects to portfolio-level value. Other interactive approaches utilizing feedback or questions based on an existing portfolio model may also help to identify values and interactions between them, e.g., Argyris et al. (2011).
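A balance grid of the kind described above is easy to construct. The sketch below (project scores and thresholds are invented) sorts projects into the four cells of a 2 × 2 risk/return grid so that the decision maker can see at a glance how adding or removing a project would shift the balance.

```python
# Invented projects: name -> (expected return score, risk score), both on a 0-100 scale.
projects = {"P1": (80, 20), "P2": (65, 75), "P3": (30, 15), "P4": (55, 60)}
return_cut, risk_cut = 50, 50   # illustrative thresholds splitting the grid

grid = {}
for name, (ret, risk) in projects.items():
    cell = ("high return" if ret >= return_cut else "low return",
            "high risk" if risk >= risk_cut else "low risk")
    grid.setdefault(cell, []).append(name)

for cell, names in sorted(grid.items()):
    print(cell, "->", names)
```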
2.4.5 Logical Synthesis

In a standard DA setting, the logic element of DQ means the assurance that information, values, and alternatives will be properly combined to identify the course of action most consistent with the decision maker's preferences. Standard DA utilizes devices such as decision trees, probability distributions, and utility functions to ensure that the decision maker's actions and beliefs conform to normative axioms. Certainly, much of this still applies at the project level within portfolios. Detailed inputs at the project level ought to be logically synthesized to obtain value scores that are then incorporated in the portfolio-level decision. For example, in the classic SDG-style PDA, decision trees at the project level identify the ENPV and cost of each project. If there is minimal interaction between projects, the portfolio-level synthesis is simply to rank projects by productivity index and fund until the budget is exhausted. However, interactions – as mentioned previously – can include synergies and dissynergies, as well as logical constraints: under some circumstances some combinations of activities may be impossible, while in other cases certain activities may only be possible if other activities are also undertaken. In simpler cases, it may still be possible to capture this primarily with simple spreadsheet-level calculations based on the DA-derived inputs, but often optimization and mathematical programming techniques are required, and in their absence it is unlikely that an unaided decision maker would be able to approach the best portfolio. When such techniques are to be used, it is especially important to have coherence between the algorithms to be deployed and the project-level inputs. Furthermore, because optimization models often require simplifying assumptions (as do all models to some extent), this element of quality
may require a feedback loop in which managers review the results of the model and refine it where necessary. Such feedback is commonly prescribed in decision modeling (see Fasolo et al. 2011).
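When project values are additive and the only interaction is the shared budget, the portfolio-level synthesis reduces to a ranking rule. A minimal sketch of the SDG-style procedure, rank by productivity index (value per unit cost) and fund until the budget is exhausted, is shown below with invented figures; once synergies or logical constraints enter, this greedy rule is no longer guaranteed to find the best portfolio and optimization is needed.

```python
# Hypothetical projects: (name, expected value, cost); figures are invented.
projects = [("A", 120, 40), ("B", 90, 45), ("C", 60, 15), ("D", 200, 120)]
budget = 100

# Productivity index = value / cost; fund in descending order until funds run out.
ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
funded, remaining = [], budget
for name, value, cost in ranked:
    if cost <= remaining:
        funded.append(name)
        remaining -= cost

print("Funded:", funded, "unused budget:", remaining)
```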
2.4.6 Commitment and Implementation

Producing the desired results once portfolio decisions are made (which SDG calls Value Delivery) requires effort at the border of project and portfolio management. At the portfolio level, resources must be obtained and distributed to projects as planned. The portfolio plan itself, with more precise timing, targets, and resource requirements, must be translated back into detailed project plans, as the initial specifications for all individual approved projects must be organized into a consistent and coherent set of activities for the organization. Project managers monitor progress and adapt plans when the status changes. Sets of projects may affect each other's execution, e.g., one project might need to precede another, or it may be impossible to execute simultaneously two projects that require the same resource. In this case, in addition to orchestrating a multiproject plan, the portfolio manager must monitor the fleet of projects and make adjustments to keep them in concordance over time with respect to resource use and product release. One important event that can occur at the project level is failure. When projects really do fail, the portfolio is better off if they are quickly abandoned. At the project level, this requires incentives not to hide failure and to move on (as was embodied in Johnson & Johnson's value of "Freedom to Fail," Bessant 2003). All these information flows between projects and between project and portfolio managers benefit if the portfolio decision process has organizational legitimacy – if it is transparent and perceived as fair (Matheson and Matheson 1998).
2.4.7 Interacting Levels of Analysis

Thus, PDQ is determined at the project level, the portfolio level, and in both directions of the interface between those two levels, as shown in this partial listing (Table 2.1). Since the required level of each element of decision quality depends on the value added by that element and its costs (we can think of this in terms of bounded rationality), it would help to have a way to measure the value and cost. The cost of efforts to create decision quality is a subject that has not been much studied, and we shall only consider it in abstract terms in this chapter, but it is not so difficult to think about: if specific efforts are contemplated, the main cost of those efforts is the time of the individuals involved, and there are many areas of management in which methods are applied to quantify such costs, e.g., software development cost estimation.
Table 2.1 Some determinants of portfolio decision quality

Framing. Portfolio level: resources and budgets (bounding). Interchange between project and portfolio levels: which projects are in which portfolio (scoping). Project level: mapping issues to decisions.
Alternatives. Portfolio level: subsets of candidate projects, portfolio strategies. Interchange: projects suitably decomposed. Project level: well-specified plans for multiple funding levels.
Information. Portfolio level: specifying synergies, dynamic dependency, correlations between projects. Interchange: consistency. Project level: probability distribution over outcomes.
Values. Portfolio level: utility function, balance. Interchange: summary statistics. Project level: attributes and measures.
Logical synthesis. Portfolio level: optimization. Interchange: dealing with dependencies between projects. Project level: decision tree.
Implementation. Portfolio level: alignment, monitoring and correcting. Interchange: ensuring resource availability. Project level: project management, incentives, buy-in, etc.
Value depends on the information and decisions involved, and perhaps other context, which makes it difficult to judge intuitively. We now explore how models of the portfolio decision process can be used to gain insight about the way specific efforts affect portfolio value.
2.5 Valuing Portfolio Decision Quality

2.5.1 Four Discrete Levels of Portfolio Decision Quality

It can be useful in some of these cases to think about four possible levels of information being brought to bear on the decision (Fig. 2.2). The first level is "ignorance" – not that decision makers have no information, but rather that their decisions take a perspective other than that of using the parametric information they do have. For example, in highly politicized situations, information about project value may have little to do with funding. The value of the portfolio under such a process can be used as a baseline for measuring improvement, and the process is modeled as if it selects projects at random. The second level is "analog": portfolio-level information, but not information about specific projects. In this case, a decision rule can be developed that takes into account the characteristics of the average project – or even of the distribution of projects. In the pharmaceutical industry, for example, it is common to use "portfolio-wide averages
The implication is that portfolio managers should focus first on creating a culture that supports discipline in sticking to prioritization, and only focus on more precise estimates where there is relatively large uncertainty. Under some circumstances, reasonable shortcuts can yield most of the value added by analysis with proportionally lower cost of analysis than the brute-force approach in which all projects are analyzed. Specifically, if organizational conditions allow for a well-defined threshold productivity level to be identified before analysis, a project can be funded or not merely based on whether its value-to-cost ratio exceeds the threshold. A modified threshold rule works well if there is large variance among the value-to-cost ratios of different projects: apply triage and fund projects that exceed the productivity threshold by a certain amount, do not fund projects that fall a certain amount below it, and analyze the rest of the projects so that they can be funded if, after analysis, they are shown to exceed the threshold. This model considers the general question of improving estimates. The three other models essentially take this as a starting point and consider choices about which bases of the value estimate to improve, or, alternatively, what more complex portfolio value measures should be based on the general value estimates. Decision analysts who saw these results found them interesting, but also stressed that analysts do more than obtain precise value estimates. For example, they make the alternatives better – in particular by identifying a range of alternatives for different funding levels.
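A sketch of the modified threshold rule, with hypothetical thresholds and productivity estimates, shows how triage limits the amount of detailed analysis: only projects in the middle band near the threshold are analyzed further.

```python
def triage(projects, threshold, band, analyze):
    """Fund projects clearly above the productivity threshold, reject those clearly
    below, and analyze only the middle band before deciding.
    projects: name -> rough value-to-cost estimate; analyze: name -> refined estimate."""
    decisions = {}
    for name, rough in projects.items():
        if rough >= threshold + band:
            decisions[name] = "fund"          # clearly above threshold
        elif rough <= threshold - band:
            decisions[name] = "reject"        # clearly below threshold
        else:
            refined = analyze(name)           # costly analysis, only for the middle band
            decisions[name] = "fund" if refined > threshold else "reject"
    return decisions

# Invented example: threshold 1.0, triage band of +/- 0.3 around it.
rough_estimates = {"P1": 1.6, "P2": 0.5, "P3": 1.1, "P4": 0.9}
refined_estimates = {"P3": 1.2, "P4": 0.8}
print(triage(rough_estimates, threshold=1.0, band=0.3, analyze=refined_estimates.get))
```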
2.5.3 Modeling the Value of Higher Quality Alternatives

The next model (Keisler 2011) compares different tactics for soliciting alternatives for various budget levels for each potential investment. In this model, C denotes cost, and each project is assumed to have an underlying value trajectory (colloquially called a buyup curve):

V_i(C_i) = r_i [1 − exp(−k_i C_i / C_i,max)] / [1 − exp(−k_i)].

Project-level parameter values for the return (r_i) and curvature (k_i) are drawn from random distributions. Funding decisions to maximize E[Σ_i V_i] subject to Σ_i C_i ≤ B are compared for analytic strategies that result in stronger or weaker information states about k and r. As before, in a baseline situation projects are selected at random; after a first-cut analysis, projects are funded based on their productivity parameter, also essentially as before. At the other extreme is the gold-standard analysis in which a full continuum of funding levels and corresponding project values is determined for each project. In between are strategies where a small number of intermediate alternatives are defined for each project, as well as "haircut" strategies that trim each project's funding from its requested level, with the rationale that projects tend to have decreasing returns to scale. If returns to scale are not decreasing, the optimal
portfolio allocates projects either 0 or 100% of their requested funds. Otherwise, the relative value added by different analytic strategies varies with the distribution of returns to scale across projects. This study showed that the value added by generating project-level alternatives can be as high as 67% of the value added by getting precise value estimates for the original simple projects as in the first model, as seen in Fig. 2.4. Most of the additional value can be obtained by generating just one or two intermediate funding-level alternatives per project. In some circumstances, similar gains are possible from applying a formula that utilizes project-specific information about k along with a portfolio-wide estimate of r, for a sophisticated type of haircut (layered). The results were sensitive to various parameters. For example, at high budget levels where most of the funding requested by projects is available, there is little benefit to having additional alternatives between full and no funding.

Fig. 2.4 Portfolio value versus budget when projects may be funded at intermediate levels, comparing six strategies: continuous project funding levels; funding proportional to the project's productivity index at full funding; four possible funding levels per project; yes-no funding decisions; each project receiving the same portion of its request (haircuts); and funding projects at random
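The buyup curve can be evaluated directly. The fragment below (parameter values invented; it assumes the parametrization in which positive k gives decreasing returns) computes V_i(C_i) for one project and compares full funding with uniform haircuts, illustrating why haircuts retain most of the value when returns are decreasing.

```python
import math

def buyup_value(c, r, k, c_max):
    """Value obtained from funding level c on the buyup curve
    V(c) = r * (1 - exp(-k * c / c_max)) / (1 - exp(-k))."""
    return r * (1 - math.exp(-k * c / c_max)) / (1 - math.exp(-k))

# Invented project parameters: return r, curvature k, requested funding c_max.
r, k, c_max = 100.0, 2.0, 50.0
for fraction in (1.0, 0.7, 0.5):
    v = buyup_value(fraction * c_max, r, k, c_max)
    print(f"Funding at {fraction:.0%} of request yields value {v:.1f}")
```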
2.5.4 Modeling the Value of Higher Quality on Values

Another PDA technique is scoring projects on multiple attributes, computing total project value as a weighted average of these scores (e.g., Kleinmuntz and Kleinmuntz 1999); weights themselves may be derived from views of multiple
could share the cost of that element if both were pursued, i.e., they have cost synergy. Alternatively, projects can have value synergy, e.g., while each product has a target market, new markets can be pursued only if multiple products are present (e.g., peanut butter and chocolate). In these cases where portfolio cost and value are not actually the sum of individual project costs and values, PDA can add value by identifying such synergies prior to funding decisions. A decentralized PDA will not identify synergies, but where the process has a specific step built in to identify synergies, it will most likely identify those that exist. This is not trivial, as the elements from which potential synergies emerge are not labeled as such – in the example above, it would be necessary to identify the peanut butter cup market, and this would require creative interaction involving the chocolate and peanut butter product teams. The model in Keisler (2005) compares analytic strategies that evaluate all synergies, cost synergies or value synergies against the simplest strategy that considers no synergies and, again, a baseline in which projects are not funded or funded at random. (This model does not include possible dissynergies, e.g., in weapon selection problems where if an enemy is going to be killed by one weapon, there is no added value in another weapon that is also capable of killing the enemy.) In this model, each project can require successful completion of one or more atomic cost elements, where S_ik = 1 or 0 depending on whether or not cost element k is required to complete project i. Similarly, R_ij = 1 or 0 depending on whether project i is required to achieve value element j. The jth value element is worth V_j, and the kth cost element costs C_k. With F_i indicating whether project i is funded, portfolio value is Σ_j V_j Π_i F_i^R_ij, and portfolio cost is Σ_k C_k max_i (F_i S_ik). In the simulation, V and C are drawn from known distributions, and R_ij and S_ik have randomly generated values of 0 or 1 with known probability. The relative value added by strategies that comprehend synergies compared to myopic strategies depends on the munificence of the environment (Lawrence and Lorsch 1967). At low levels of actual synergy, value added is small because little is worth funding, and above a saturation point, value added is small because many projects are already worth funding. At a sweet spot in the middle, completely considering synergies can increase portfolio value by more than 100% and in some cases over 300%. In certain cases, comprehension of the possibility of synergies is a substantial improvement over the case where each project is evaluated myopically. Figure 2.6 shows results from a less extreme cluster of projects simulated in this study, where cross-project cost synergies were assumed to be present in 30% of the cost elements in the cluster, and cross-project value synergies were assumed to be present in 10% of the value elements. In this example, the value of identifying synergies is substantial, but far more substantial if both cost and value synergies are identified. Most important in determining the value added at each level are the prevalence of synergies between value elements and cost elements of different projects, as well as the relative size of value elements to cost elements (and thus the likelihood that projects will merit funding even in isolation).
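A minimal sketch of the value and cost calculations in this synergy model, with invented element values and incidence matrices, shows how a value element is realized only if every project it requires is funded, while a cost element is paid once if any funded project needs it.

```python
import numpy as np

# Invented data: 3 projects, 2 value elements, 3 cost elements.
V = np.array([50.0, 80.0])          # worth of each value element j
C = np.array([20.0, 30.0, 10.0])    # cost of each cost element k
R = np.array([[1, 1],               # R[i, j] = 1 if project i is required for value element j
              [0, 1],
              [1, 0]])
S = np.array([[1, 0, 1],            # S[i, k] = 1 if project i requires cost element k
              [1, 1, 0],
              [0, 1, 1]])

def portfolio_value_and_cost(F):
    """F[i] = 1 if project i is funded. A value element pays off only if all projects
    it requires are funded; a cost element is incurred if any funded project needs it."""
    F = np.asarray(F)
    value = sum(V[j] * np.prod(F ** R[:, j]) for j in range(len(V)))
    cost = sum(C[k] * max(F * S[:, k]) for k in range(len(C)))
    return value, cost

print(portfolio_value_and_cost([1, 1, 0]))  # fund projects 0 and 1 only
```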
understanding what drives the cost of a process. Assessment costs ought to vary systematically with the number of assessments that must be done of each type. This way of thinking about decision process costs is similar to the way that computation times for algorithms are estimated – considering how often various steps are deployed as a function of the dimensions of the problem and then accounting for the cost of each step (and in simpler decision contexts, psychologists (Payne et al. 2003) have similarly modeled the cost of computation). Beyond assessments, there are organizational hurdles to successfully implementing some approaches. The treatment of both issues below is speculative, but in practice, consultants use similar methods to define the budget and scope for their engagements.

Estimating assessment costs

A: number of projects analyzed
B: cost of analysis per project
C: number of alternatives generated per project
D: cost per alternative generated/evaluated
E: number of attributes in value function
F: cost of assessing an attribute's weight
G: cost of scoring a project on an attribute
H: projects per cluster
I: number of value elements
J: number of cost elements
K: cost per synergy possibility checked

Analysis for estimation of project productivity: A × B. For example, with a portfolio with 30 (= A) projects for which productivity must be estimated at a cost of $5,000 (= B) of analysis per project, the cost of analysis would be $150,000.

Analysis for estimation of intermediate alternatives: A × C × D. For example, with a portfolio with 30 (= A) projects, each having a total of 3 (= C) nonzero funding levels whose cases must be detailed at a cost of $10,000 (= D) per case, the cost of analysis would be $900,000.

Analysis involving use of multiple criteria: E × F + E × A × G. For example, with a portfolio with 30 (= A) projects being evaluated on 8 (= E) attributes with a cost of $5,000 (= F) per attribute to judge its importance (e.g., translate to a dollar scale), and $1,000 (= G, e.g., the cost of 1 hour of a facilitated group meeting with five managers) to score a single attribute on a single project, the cost of analysis would be $40,000 + $240,000 = $280,000.
Analysis to identify synergy: (A/H) × H! × (I + J) × K. For example, with a portfolio of 30 (= A) projects divided into clusters of size 5 (= H), searching for potential synergies among 7 (= I) value elements and 8 (= J) cost elements at an average cost of $50 per synergy checked (= K; some short interviews checked with some brief spreadsheet analysis), the cost of analysis would be 6 × 5! × (7 + 8) × $50 = $540,000. These costs apply only to the parts of analysis that are at the most complete level. Frugal strategies replace some of the multiplicative cost factors here with one-time characterizations, and it is also possible to omit some of the analysis entirely. Excluded are certain fixed costs of analysis.
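These back-of-the-envelope formulas are straightforward to put into a small calculator. The sketch below simply reproduces the worked examples above (the counts and dollar figures are the same illustrative assumptions); note how the factorial term for synergy checking grows quickly with cluster size, one reason to keep clusters small.

```python
from math import factorial

A, B = 30, 5_000        # projects, cost of productivity analysis per project
C, D = 3, 10_000        # nonzero funding levels per project, cost per detailed case
E, F, G = 8, 5_000, 1_000   # attributes, cost per weight judged, cost per project-attribute score
H, I, J, K = 5, 7, 8, 50    # projects per cluster, value elements, cost elements, cost per synergy check

print("Productivity estimates:   ", A * B)                                   # 150,000
print("Intermediate alternatives:", A * C * D)                                # 900,000
print("Multi-criteria evaluation:", E * F + A * E * G)                        # 280,000
print("Synergy identification:   ", (A // H) * factorial(H) * (I + J) * K)    # 540,000
```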
2.7 Observations from Practice and from Other Research

Both the PDQ framework and the associated value-of-analysis as value-of-information approach have, in my experience, been useful lenses for examining organizations' portfolio decision processes. At one company (Stonebraker and Keisler 2011), this framework facilitated analysis of the process used at a major pharmaceutical corporation. In general, the data showed (at a coarse level) that the organization was putting more effort in where the value added was higher. We were able to identify specific areas in which it may have spent too much or too little effort developing analytic detail. This led to discussion about the reasons for such inconsistencies – which were intentional in some cases but not in others – and whether the organization could improve its effectiveness in managing those subportfolios. At another major company (Keisler and Townson 2006), analysis of the data from a recent round of portfolio planning revealed that adding more alternatives at the project level would add substantial value to some of the portfolio by allowing important tasks on relatively low-priority projects to go ahead, and that this would be an important consideration for management of certain specific portions of the portfolio. We were able to identify some simple steps to gain some of this potential value with minimal disruption. At this same company, we also considered the measures used for project evaluation and were able to find a simpler set of criteria that could lead to the same value as the existing approach – or better, if some of the measures were better specified. In another application (described at the end of Keisler 2008), a quick look at the portfolio characteristics showed the way to a satisfactory approach involving a modest amount of detail on criteria and weights, to successfully recover from an unwieldy design for a portfolio decision support tool. Finally, another PDA effort covered multiple business units at a major pharmaceutical company and involved many projects and products with a substantial
amount of interaction in terms of value and cost; it looked as though it would be very challenging to handle such a volume of high-quality analysis. At the outset of the project, the engagement manager and I discussed the value of identifying potential synergies in PDA. In setting up the effort, the analysis team divided up to consider subportfolios within which synergies were considered most likely. The resulting engagement went efficiently and was considered a strong success.
2.8 Research Agenda

Within this framework, we can discuss certain broad directions for research. Only a few pieces of the longer list of decision quality elements were modeled. More such optimization and simulation models could enrich our understanding of what drives the value of PDQ. Additionally, the description here of the elements of PDQ can be fleshed out and used to organize lessons about practices that have been successful in various situations. As we consider where to develop more effective analytic techniques, we can focus on aspects of portfolio decisions where the value from increased quality would be highest – that is, as it becomes easier to make a decision process perform at a high level, the "100% quality level" will be higher, as will the total value added by the decision process. Alternatively, in areas where the value added by greater precision etc. is minimal, research can focus on finding techniques – shortcuts – that are simpler to apply. The relative value of PDQ depends on the situation, and it may be that simple and efficient approaches ought to be refined for one area in one application setting, while more comprehensive approaches could be developed for another setting.
2.9 Conclusions

Interpreting DQ in the context of portfolios allows us to use it for the same purposes as in other decision contexts. We aim to improve decisions as much as is practical by ensuring that all the different aspects of the decision are adequately considered. But we also recognize – especially with portfolio decisions – that many parts of the decision process are themselves costly due to the number of elements involved, e.g., eliciting probability judgments. Resources for decision making (as opposed to resources allocated as a result of the decision) are generally quite limited, and time may be limited as well. Therefore, we can use the DQ checklist to test whether the resources applied to the decision process match the requirements of the situation. To the extent we can think in concrete terms of the drivers of the value added by a decision process, we can use the DQ framework more skillfully. To the extent that portfolio decisions have common characteristics that are not shared by other classes of decisions, more detailed descriptions of their DQ elements help ensure that attention goes to the right parts of the process. As organizations formally integrate
PDA into their planning processes, choices such as what is to be centralized and decentralized, and what is to be done in parallel should relate in a clear way to the overhead cost of the process and to the levels of quality and hence the value added by the process.
References

Argyris N, Figueira JR, Morton A (2011) Interactive multicriteria methods in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Bessant J (2003) Challenges in innovation management. In: Shavinin LV (ed) The international handbook on innovation. Elsevier, Oxford, pp 761–774
Cooper RJ, Edgett S, Kleinschmidt E (2001) Portfolio management for new products, 2nd edn. Perseus, Cambridge
Farquhar PH, Rao VR (1976) A balance model for evaluating subsets of multiattributed items. Manage Sci 22(5):528–539
Fasolo B, Morton A, von Winterfeldt D (2011) Behavioural issues in portfolio decision analysis. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Howard RA (1988) Decision analysis: practice and promise. Manage Sci 34(6):679–695
Howard RA (2007) The foundations of decision analysis revisited. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. Cambridge University Press, Cambridge, pp 32–56
Keeney RL (1996) Value focused thinking. Harvard University Press, Boston
Keisler J (2004) Value of information in portfolio decision analysis. Decis Anal 1(3):177–189
Keisler J (2005) When to consider synergies in project portfolio decision analysis. UMass Boston College of Management Working Paper
Keisler J (2008) The value of assessing weights in multi-criteria portfolio decision analysis. J Multi-Criteria Decis Anal 15(5–6):111–123
Keisler J (2011) The value of refining buyup alternatives in portfolio decision analysis. UMass Boston College of Management Working Paper UMBCMWP 1048
Keisler J, Townson D (2006) Granular portfolio analysis at Pfizer. Presented at POMS Conference, Boston
Kiesling E, Gettinger J, Stummer C, Vetschera V (2011) An experimental comparison of two interactive visualization methods for multi-criteria portfolio selection. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Kleinmuntz CE, Kleinmuntz DN (1999) A strategic approach to allocating capital in healthcare organizations. Healthcare Finan Manage 53(4):52–58
Lawrence P, Lorsch J (1967) Organization and environment. Harvard Business School Press, Boston
Manganelli R, Hagen BW (2003) Solving the corporate value enigma. American Management Association, New York
Matheson D, Matheson JE (1998) The smart organization. Harvard Business School Press, Boston
McDaniels TL, Gregory RS, Fields D (1999) Democratizing risk management: successful public involvement in local water management decisions. Risk Anal 19(3):497–510
McNamee J, Celona P (2001) Decision analysis for the practitioner, 4th edn. SmartOrg, Menlo Park
Montibeller G, Franco LA, Lord E, Iglesias A (2009) Structuring resource allocation decisions: a framework for building multi-criteria portfolio models with area-grouped options. Eur J Oper Res 199(3):846–856
Nissinen J (2007) The impact of evaluation information in project portfolio selection. Master's Thesis, Helsinki University of Technology
Owen DL (1984) Selecting projects to obtain a balanced research portfolio. In: Howard RA, Matheson JE (eds) Readings on the principles and applications of decision analysis. Strategic Decisions Group, Menlo Park
Payne JW, Bettman JR, Johnson EJ (2003) The adaptive decision maker. Cambridge University Press, Cambridge
Poland WB (1999) Simple probabilistic evaluation of portfolio strategies. Interfaces 29:75–83
Sharpe PT, Keelin T (1998) How SmithKline Beecham makes better resource-allocation decisions. Harvard Business Rev 76(2):45–57
Spradlin CT, Kutoloski DM (1999) Action oriented portfolio management. Res Technol Manage 42(2):26–32
Stonebraker J, Keisler J (2011) Characterizing the analytical process used in evaluating the commercial potential of drug development projects in the portfolio for a large pharmaceutical company. In: Salo A, Keisler J, Morton A (eds) Advances in portfolio decision analysis. Springer, New York
Tversky A, Kahneman D (1981) The framing of decisions and the psychology of choice. Science 211(4481):453–458
Watson SR, Brown RV (1978) The valuation of decision analysis. J R Stat Soc A 141:69–78
Chapter 3
The Royal Navy’s Type 45 Story: A Case Study Lawrence D. Phillips
Abstract This chapter presents systems engineering as portfolio analysis carried out with multiple stakeholders who hold different perspectives about the system elements, and where conflicting objectives must be accommodated in deciding what is affordable. A case study of the United Kingdom's Royal Navy Type 45 destroyer shows how a portfolio approach traded off time, cost and performance to arrive at a plausible way forward. The combination of technical system modelling with group processes that engaged all stakeholders enabled a solution to be agreed within only 15 months. This set a record for major equipment procurement in the Ministry of Defence, and saved the contractor 2 years of design work.
3.1 Introduction

Deciding what combination of capabilities for a new ship is affordable, within a budget and delivered in time, provides a near-perfect example of how a portfolio approach to prioritisation and resource allocation can apply to systems engineering and provide an efficient path for procurement. An example is the United Kingdom's Type 45 Daring Class destroyer (Fig. 3.1), a major project that was commended as "innovative" by a National Audit Office report. They went on to say that "…the Department used a Capability Cost Trade-Off Model to determine the optimum affordable capability" (National Audit Office 2002). The Type 45 destroyer was the first major project undertaken under the new Smart Procurement strategy, introduced by the UK's Defence Secretary in 1997, and now known as Smart Acquisition. This approach allowed trade-offs of cost against capability to be considered prior to the design phase, and included through-life
costing of the system. With this approach, the operational requirements ceased to be a straightjacket, making it possible to meet new priorities, accommodate emerging requirements and solve outstanding design problems. Risks were thereby reduced, and it was hoped that uncertainty about costs would be reduced.

Fig. 3.1 HMS Daring, the first in class of the United Kingdom's Type 45 destroyers, commissioned 23 July 2009. Copyright © Brian Burnell, made available under Creative Commons Attribution-Share Alike 3.0 Unported license

This chapter explains how this new approach to procurement strategy emerged from a combination of a social process that engaged all the key players in a succession of 1-to-3-day meetings, and a prioritisation model based on multi-criteria decision analysis (MCDA). Three goals governed this approach: (1) to determine the best combination of capabilities that would be affordable (the "solution"), thereby ensuring (2) that the first-of-class ship could be delivered by 2006, and (3) to align the key players to the solution so that subsequent revisions would be minimised.
3.2 Background

The precursor to Type 45 was the Horizon project, a joint effort by Italy, France and the UK that began in 1992 to design a frigate. Differing requirements of the countries soon surfaced, and despite compromises, a more formal structure was required to handle disagreements. An International Joint Venture Company (IJVC) was established to bring each country's contractors together with key naval and civilian personnel in an attempt to agree a plausible and affordable solution.
3 The Royal Navy’s Type 45 Story: A Case Study
55
In 1997, I was commissioned to facilitate meetings of the combined teams, using a structured process of deliberative discourse (Renn 1999), decision conferencing (Phillips 2007) and MCDA modelling (Keeney and Raiffa 1976) to help the group achieve an affordable solution. Three challenging decision conferences, lasting between 3 and 4 days each, carried out in late 1997 and early 1998, with much work between sessions by the ten teams formed to provide support for each of the ship’s functions, managed to work through the many differences of opinion, resulting in a possible solution. At the final decision conference, participants agreed that the decision conference provided a sound foundation for the development of an affordable ship. Despite this progress, in April 1999 the UK withdrew from the project as it became apparent that they differed in their assessment of the level and complexity of the threat, and because the UK wanted to use the technology of the Sampson radar system. The French and Italians continued their collaboration, and the UK’s subsequent independent development of the Type 45 adopted the Horizon project’s Principal Anti-Air Missile System (PAAMS) and some aspects of the hull design.
3.3 Method

This section explains our approach as process consultants concerned to develop a helping relationship with a client (Schein 1999), building trust and enabling the client to develop and own the model, thereby helping them achieve their objectives, while also developing a shared understanding of the issues among team members, creating a sense of common purpose, and building commitment to the way forward. We begin by describing our first two meetings, then the kick-off meeting and the workshops. An explanation of the MCDA model and of decision conferencing follows. We describe the decision conferences leading to the solution, and the subsequent specialised decision conferences that resolved a few remaining issues about the ship's propulsion system.
3.3.1 The Initial Meetings and Workshops

From our first meeting on 21 July 1999, with the shipbuilder BAE Systems (which had taken over the Horizon project's UK contractor, Marconi), the Royal Navy customer and the naval architect, Chris Lloyd, it was made clear that the Type 45 must be capable of fitting different systems in the future, so that an upgrade path would have to be accommodated by the ship's design. We showed the model developed in the Horizon project, which demonstrated how solution trade-offs were made. The naval architect said this is how naval architects think – trading off one design solution against another. This persuaded the leader of the 10-week study
period to buy in to the approach. During the meeting, we were apprised of the structure of the study period, the directors of the six areas being studied (commercial, programme, warship design, combat systems, integrated logistics support and habitability), and the many tasks being pursued in each of those areas. At a second meeting 8 days later, we were introduced to participants representing the operational requirements, the procurement authority and industry. We showed the Horizon approach, explained that the final model represented the collective views of all participants, that the model provided insight, not "the right answer," and that it helped the group to achieve their objectives, even if the individual paths differed. Chris Lloyd suggested updated capability solutions to be considered in the revised Horizon model, and he outlined the basic systems, called the "Sump," that were essential (such as the hull, internal structure, electrical and fire fighting systems), the starting point for adding systems that would make the ship operational by adding value needed to realise mission objectives. This kind of initial meeting, with just a few senior key players, is a typical start for a project of this type. Its purpose, from our perspective, is threefold: first, to ensure that the senior decision makers understand and are committed to the sociotechnical process that we will be applying; second, to plan the roll-out of the project; and third, to begin the process of structuring the problem. Participants at these two initial meetings agreed to engage all the leaders of the Integrated Project Teams at a kick-off meeting on 9 August. At this meeting, participants began the process of modifying the structure of the Horizon model to make it suitable for the UK. They agreed definitions of costs and benefit criteria, reviewed the definitions of the ship's main functions, agreed the constituents of the Sump, and began defining new capability solutions for the functions, avoiding use of the phrase "system solutions" because that implied the purchase of a particular system, which at this stage would be premature. Instead, "capability" indicated what performance was needed, not how it would be achieved, thereby avoiding any commitment to a particular system that could deliver the capability. Another 1-day meeting a week later reviewed the previous week's work, made some changes and completed the structure, recognising that subsequent work would require further changes. At this stage, the structure of the model covered 116 capability options distributed very unevenly across 25 functional areas, for a total number of possible whole-ship capability solutions just short of 6 × 10¹⁵. Two costs and five benefits contributed further complexity to the task of finding an affordable whole-ship solution. With a "good enough" structure of functions, capability options, costs and criteria agreed, we facilitated a series of workshops, one for each functional area, in the last week of August. The purpose was to further refine the structure, receive cost data and score the solution options against the criteria. Each workshop was attended by about 20 people, including the Warship Integration civilian, the relevant Team Leader, operational requirements staff, operational analysts, specialists, military and procurement support staff and contractors knowledgeable about the function under consideration. A final merge meeting was intended to assess trade-off weights and bring everything together in September.
However, this was not what happened.
3 The Royal Navy’s Type 45 Story: A Case Study
57
3.3.2 The Prioritisation Model

For a ship, once the most basic functions were agreed, the key functions that determine the ship's capabilities to achieve missions were defined as separate areas for considering solutions. Many solutions were considered for each function, from modest-performing, low-cost solutions, to high-performing, costly solutions. It is, of course, possible to establish a requirement for each function, and then send separate teams away to provide specific solutions that meet the requirements. This approach, however, inevitably imposes the Commons Dilemma: individually optimal solutions rarely, if ever, provide an overall best use of the available resource (Hardin 1968). So, which options constitute the best overall portfolio of solutions so as to generate the most effective warship for the available budget? The answer requires considering trade-offs between solution options, but some solutions meet objectives of performance, time and cost better than others, so we adopted multi-criteria modelling to enable those solution options to be traded off through the exercise of military judgement. The basic principle for prioritising the solutions is embodied in the priority index, PI:

PI = (Risk-adjusted benefit) / Cost

MCDA is applied to combine all benefits onto one preference value for each solution. The preference value is then multiplied by the probability that the capability option will deliver the expected performance. That product is divided by the project's total forward cost to obtain the project's priority index. The overall preference value for each solution requires two types of judgement. The first is a preference value representing the value of the capability option's performance on each criterion, while the second is a set of criterion weights, which are scale constants that ensure the units of value on all criterion scales are equal. For the Type 45 project, the preference values were mostly determined by direct judgements of a group of specialists, civilian and military, knowledgeable about the capability options and the military requirements for them. Often, scores were assessed directly onto preference scales as judged by the military participants, occasionally assisted by operational analysts who had studied performance and effectiveness. At other times, preferences were judged to be simply linear, inverse linear or non-linear value functions related to measurable performance. For example, a specialist group concerned with accommodation assessed an S-shaped value function relating preference values to square metres of deck space for accommodation: low preference scores for cramped accommodation, improving with more square metres, but then levelling off as the provision becomes too hotel-like. In all cases, the capability option or combination of capability options within a function associated with the highest performance was assigned a score of 100, with the least well-performing capability option given a zero. All other capability options
were scored on that 0–100 scale. Participants were reminded that a preference value of zero did not mean that there was no value associated with that capability option, just as 0 °C does not represent the absence of temperature. Consistency checks comparing ratios of differences of preference values helped to ensure the realism and consistency of the preference scaling process. Participants were frequently reminded that a preference judgement associated with a capability option represented the added value the option provides to achieving the criterion's objective. In one case, an officer asked, "What if a capability option gives me more performance than I need?" to which our answer was, "Then it doesn't add value and its preference value should be no more than represents what you need." This was an important idea for the group, as many found it difficult initially to distinguish between performance and the value of that performance. Operational analysts present at the meetings could provide information about performance and technical effectiveness, but none had ever studied the value of that performance and effectiveness to the accomplishment of missions. The MCDA approach showed that while an understanding of performance, and possibly effectiveness, is required for scoring capability options, it is not sufficient without considering military judgement about the value that performance brings in a military context. The second judgement concerns the scale constants associated with each criterion scale: paired comparisons of swing-weights were applied to all the capability options in one function compared to all the capability options in another function. Swing weights represent the comparability of preference units on one criterion compared to another, just as 9 °F equate to 5 °C of temperature. As such, they represent trade-offs between criterion scales, and we put a two-part trade-off question to participants: "First, consider the difference in performance between the lowest and highest performing capability options in this function compared to that function. Now, how much do you care about those differences?" By asking this question repeatedly between pairs of criterion scales, the largest difference that matters is identified and assigned an arbitrary weight of 100 points. Each of the other scales is then compared to that one and assigned a weight equal to or less than 100. It is, of course, a matter for military judgement to assess how much a difference in performance matters to the accomplishment of missions, and the idea of "caring" about a performance difference was quickly grasped by the experienced military participants. Preference values and weights were combined by computer software, Equity¹, which is programmed to calculate the overall preference value, v_i, for each capability option:

v_i = Σ_j w_j v_ij
¹ Developed initially by the Decision Analysis Unit at the London School of Economics, Equity is now available from Catalyze Limited, www.catalyze.co.uk. Equity is used to prioritise options across many areas to create cost-effective portfolios that can inform the processes of budgeting and resource allocation.
3 The Royal Navy’s Type 45 Story: A Case Study
59
The preference value of capability option i on criterion j is given by v_ij, and the weight associated with the criterion is w_j. Multiplying the value by the weight and summing over the criteria gives the overall value. Details about the Equity calculation are given by Phillips and Bana e Costa (2007). Two types of weights are required by Equity. One set compares the scales for a given criterion from one function to the next, known as within-criterion weights. The method of assessing these weights leaves at least one function's scale with a weight of 100. The second set compares those 100-weighted function scales across the benefit criteria, called across-criteria weights. The within-criterion weights ensure that all units of added value are equivalent from one function to the next but only within each criterion, while the across-criteria weights equate the units of added value across the criteria, thereby ensuring that all benefit scales throughout the entire model are based on the same unit of added value. Equity provides a natural structure for classifying possible capability solutions: as options within functions. Twenty-five functions defined areas of required capability, including Propulsion (power generation facility), Stealth (signature reduction), Vulnerability (resistance to shock, blast, fragments, and other threats), Accommodation, Helicopter Facilities, Underwater Detection, and so forth. Capability options for each of these functions spanned the whole range of possibilities, from below the minimum deployable capability to well above requirements. The initial structure of the model, as displayed by Equity, is shown in Fig. 3.2.
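A small sketch of this calculation (scores, weights, probability and cost are all invented, and Equity's exact scaling conventions are not reproduced) combines weighted preference values into an overall benefit, applies the probability of delivery, and divides by forward cost to obtain the priority index.

```python
# Invented capability option: preference scores v_ij on each criterion and criterion weights w_j.
scores = {"performance": 80, "growth_potential": 40, "time_to_deliver": 100, "logistic_cost": 60}
weights = {"performance": 100, "growth_potential": 30, "time_to_deliver": 50, "logistic_cost": 40}

# Overall preference value v_i = sum_j w_j * v_ij (units are arbitrary preference points).
benefit = sum(weights[c] * scores[c] for c in scores)

p_delivery = 0.8     # probability the option delivers the expected performance
forward_cost = 12.0  # total forward cost, in arbitrary money units

priority_index = p_delivery * benefit / forward_cost
print(f"Risk-adjusted benefit: {p_delivery * benefit:.0f}, priority index: {priority_index:.1f}")
```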
3.3.3 The Decision Conferences

Scoring the options and weighting the criteria in August raised many questions that required modifications to the Sump, clarification of assumptions, improved definitions of the functions and better precision in defining the capability options. It was clear by the middle of August that a "preliminary" decision conference (held without the attendance of any senior officers) would be required "in order to weed out the areas requiring more work by the industry/project/customer team and start the pricing process, before a further conference … that will begin to firm up on the design." That wording of the 24 August invitation to the preliminary decision conference proved to be prophetic. It took one further decision conference, several workshops and a shorter decision conference in 2002 to realise a finally agreed affordable solution. The preliminary decision conference, held over 3 days, 2, 3 and 6 September (the break was a weekend), spent the first day-and-a-half reviewing and revising options, costs and scores for the capability options in all functions. A social process, the "nominal group technique" (Delbecq et al. 1974), was applied to assessing weights in the afternoon. First, paired comparisons enabled the entire team to identify, for the Performance criterion, the function whose capability options would provide the most added preference value from performance; the selected function was given a weight of 100 points. Next, another function was chosen, and all team members
Fig. 3.2 The initial structure of the Equity model. Functions are shown in the left column. The adjacent rows show the capability options. Note that for some functions the first option is "None," whereas for others it is the least-performing, lowest-cost capability option. The options in many areas, such as Propulsion, are mutually exclusive: one and only one will be chosen for inclusion in the portfolio. For other functions, options prefixed with a "+" sign are cumulative: any number of those options might be included, or even none if that is shown in column 1
were asked to privately write down their judged weight, compared to the 100, for the added value of that function’s capability options. The facilitator then quickly constructed on a flip chart a frequency distribution for the full range of weights by asking for a show of hands of all those assigning a weight greater than 100, equal to 100, in the 90s, 80s, and so on down until all views had been obtained. The members assigning the highest and lowest weights were then asked for their reasons, followed by discussion about the weights by the whole group. After a brief discussion, the final judgement was given to the senior naval officers representing those responsible for the acquisition of the ship, and the eventual users. Initially, this process was difficult for the group, but as it enabled everyone to express a view, and minimised anchoring phenomena – the first person to speak can bias the entire group (Roxburgh 2003; Tversky and Kahneman 1974) – it was adopted as standard procedure, and became easier. A first run of the model and examination of the benefit–cost curves for each function showed some disparities that were the subject of much discussion the next morning, provoking further changes and additions. By the end of the day, the group felt that the model was now plausible, but still needed further work, especially on costs. The next decision conference engaged senior officers, who sat at a separate table, and provided the final judgements when disagreements, especially about weights, arose.
3.4 Results

This section explains the cost and benefit criteria against which the capability options were evaluated in the preliminary decision conference, the results obtained with that model, the further development of the model over the next decision conference, the subsequent workshops, a follow-through workshop, and the final results.
3.4.1 Cost and Benefit Criteria

Two costs were represented in the Equity model:
1. Unit Production Cost (UPC). Incremental total cost (recurring for every ship) of each capability solution, including the cost of installation and setting to work, for one first-of-class ship.
2. Non-Recurring Cost (NRC). This includes the cost of developing each solution, system engineering costs, facility development and production, documentation, initial logistics package, initial training, and first-of-class trials.
The Equity model took a weighted sum of these costs. The weight on UPC was taken as 90% of the stated figure; this represented the effect of learning across the
whole class. A weight of 9% was used for NRC to spread the development costs across a class of 12 ships, thereby not penalising the first ship by requiring it to take the full NRC. It is important to note that a substantial portion of the whole-ship cost is hidden from the Equity model in the Sump. Five benefit criteria were included in the model. The first two captured the extent to which a capability solution is expected to add value, and the remaining three acted as penalties or dis-benefits.
1. Performance. The added value of a capability option was assessed against subcriteria associated uniquely with each function. With this definition, the common performance criterion could be defined differently for each function.
2. Growth Potential. The extent to which a capability option provides potential to grow to provide a better solution with more capability. (This was considered to be a radical criterion, for it would be possible to trade out capability to gain more growth potential.)
3. Time to Deliver Solution. Score 100 if the option supports at least a 90% chance of a ship in-service date (ISD) of September 2007; score 0 if the option has 50% or less probability of meeting that date. This analysis was done for the first-of-class only.
4. Risk. Probability that the solution will deliver the expected performance, at the expected cost. A logarithmic proper scoring rule was applied to convert the probabilities into a negative penalty score (Bernardo and Smith 1994)², thereby providing proportionately more negative penalty scores for low probabilities.
5. Logistic Cost. This includes fuel, ammunition, maintenance, training, manning and lifetime support over a 25-year ship life. Evaluations were derived from rating models related to each function's performance criteria, with the total rating scores converted to preference-scale judgements – the higher the cost, the lower the preference.
These last three penalty criteria made it possible for their effects to so substantially reduce the positive benefits that the prioritisation index could be very low, even negative if the negative score from the Risk criterion exceeded the benefits. As the MoD had been accused by the National Audit Office of failing to control risk on previous major projects, the idea that risks could outweigh benefits was appealing to the Type 45 team.
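The cost weighting and the risk penalty can be sketched as follows; the 90% and 9% weights come from the text, while the scale of the logarithmic scoring rule is an assumption made purely for illustration, since the chapter does not give the constants that were used.

```python
import math

def weighted_cost(upc, nrc):
    """Equity cost input: 90% of Unit Production Cost plus 9% of Non-Recurring Cost
    (the NRC being spread across a class of 12 ships)."""
    return 0.90 * upc + 0.09 * nrc

def risk_penalty(p, scale=20.0):
    """Logarithmic proper scoring rule turned into a negative penalty: zero when delivery
    is certain, increasingly negative as the probability falls. The scale constant is an
    illustrative assumption."""
    return scale * math.log(p)   # p = probability of delivering expected performance at expected cost

print(weighted_cost(upc=100.0, nrc=40.0))                        # 93.6
print(round(risk_penalty(0.9), 1), round(risk_penalty(0.5), 1))  # -2.1 -13.9
```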
3.4.2 Results of the Preliminary Decision Conference

On completion of the scoring and double-weighting assessments, the model was complete and Equity's calculations left every capability option characterised by a single cost and single risk-adjusted benefit, and, therefore, a priority score, shown graphically in Fig. 3.3 – the higher the ratio of benefit to cost, the steeper the slope, and the better the value for money. We started the process of examining results by looking at the triangles for each function to see if the priorities there made sense. For example, the Accommodation function is shown in Fig. 3.4. There, each triangle represents an additional cost and benefit over the previous triangle, on which it depends for its added value – they are not separate, independent projects whose triangles can be arranged in order of declining benefit to yield a wholly concave function. In addition, the starting point, option 1, is a bare minimum standard of accommodation. Although the Accommodation benefit–cost function looked about right to the team, many of the others revealed inconsistencies. Some were very jagged: more cost delivered less benefit in many cases. In others the penalties outweighed the benefits, giving negative slopes.

Fig. 3.3 The prioritisation triangle: risk-adjusted benefit plotted against cost, with the slope representing value for money

Fig. 3.4 Benefits versus costs for the Accommodation options

2 See Section 2.7.2, "The Utility of a Probability Distribution."
Still others showed how decisions about the first-of-class might be affected by the September 2007 ISD. Curves that were mostly convex instead of concave suggested that the options might be improved for that function. These curves stimulated several suggestions about how the options should be revised, which demonstrated the importance of being able to show these graphs separately for each function. One of the obstacles to obtaining better cost data was a vicious circle: team members who were potential contractors were reluctant to provide costs for the options until they received an exact specification for a system from the Royal Navy. But the naval attendees could not do that without a better idea of what a specification might cost. Each side, in the absence of better guidance from the other, felt they would be guessing. To overcome the stalemate the naval officers pointed out that systems were not being costed – capabilities were. The facilitators pointed out that the Equity model is not sensitive to precise costs, and that "ball-park" estimates are sufficient. And the armed forces specialists in costs asked that only representative, realistic costs be presented by the contractors. The contractors realised that if they gave costs that were too high, that could call into question the viability of the project, and could lead to its cancellation. If the contractors gave costs that were too low, they might eventually be held to them. Everyone knew that costs had to be squeezed, now and in the future. Realism was encouraged. Equity next combined the triangles across all the functions into one overall prioritisation, which enabled the group for the first time to see whole-ship solutions. They discovered that a high proportion of high-priority solutions generated benefits out of proportion to their costs. It also showed that many capability solutions could not be justified on the basis of the benefit they generated for the cost; more beneficial solutions existed for the same or less cost. Overall, it was clear that too much capability was required, too little money was available, and too little time was left to deliver the first ship. The group proposed a possible whole-ship solution, i.e. a portfolio of proposed options, but the model showed that better solutions were available at no additional cost. However, the better solutions often went beyond requirements for some functions while not going far enough in others, so they were unacceptable to the group. Participants then introduced constraints on the model to prevent unacceptable combinations, and an affordable whole-ship solution emerged for ships later in the class. Further constraints and a reduction in cost provided a ball-park view of a first-of-class ship. The model indicated how this solution could be substantially improved with some additional funding. Sensitivity analyses conducted by changing weights and scores showed that the same solutions resulted for many of the functions, so the solutions for these functions would be largely unaffected by new information. This finding helped the group to suggest functions for which further study of the options, and their costs and benefits, would be warranted.
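Equity's internal algorithm is not described in the chapter, but the way the per-function triangles combine into a single prioritisation can be illustrated with a simple greedy sketch. All numbers below are hypothetical: within each function the options form a cumulative chain (each depends on its predecessor), and across functions the next available increment with the steepest benefit-to-cost slope is taken until the budget is exhausted.

```python
# Hypothetical cumulative (cost, benefit) options per function; option k+1 depends on option k.
functions = {
    "Accommodation": [(2, 10), (5, 30), (9, 38)],
    "Gun":           [(4, 20), (10, 55)],
    "Sonar":         [(3, 25), (6, 32)],
}

budget = 18.0
chosen = {name: 0 for name in functions}   # number of options taken per function
spent = 0.0

while True:
    # Candidates: the next increment of each function, with its incremental benefit/cost ratio.
    candidates = []
    for name, opts in functions.items():
        k = chosen[name]
        if k < len(opts):
            prev_cost, prev_ben = (0, 0) if k == 0 else opts[k - 1]
            d_cost = opts[k][0] - prev_cost
            d_ben = opts[k][1] - prev_ben
            if spent + d_cost <= budget:
                candidates.append((d_ben / d_cost, d_cost, name))
    if not candidates:
        break
    _, d_cost, name = max(candidates)      # steepest remaining slope = best value for money
    chosen[name] += 1
    spent += d_cost

print(chosen, "total cost:", spent)
```

With these toy figures the sketch buys two Accommodation options, both Gun options and one Sonar option before the budget of 18 runs out; the real model, of course, also carried weights, penalties and the constraints discussed above.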
Fig. 3.5 The position of the Class solution at £152 million
3.4.3 Results of the Decision Conference

Key flag officers attended this final decision conference, the purposes of which were to update the Equity model and examine possible affordable solutions for the Class and first-of-class. Participants began by reviewing capability solutions for the same 25 ship functions considered in the preliminary decision conference. Updates provided before the start of the decision conference by the Integrated Project Team included a substantially expanded Sump and additional assumptions, several new or redefined options, and revisions of costs and benefits. The decision conference began with a review of these changes and additions, with participants ensuring that this new material was realistic and consistent across all functions. The group then re-assessed the trade-offs between the functions as new within-criterion weights, but these did not signal the need for a change to the across-criteria weights. In examining the results, the group started with the Class model (which placed zero weight on the Time criterion), and then found a plausible best solution at the cost target. The affordable portfolio, at point F for a budget of £152 million (recall that this is not the whole-ship cost, only the cost of the capability options), is shown in Fig. 3.5. The options included in that portfolio are shown to the left of the solid vertical lines in Fig. 3.6. Next, the group repeated this process by including the Time criterion to find a best solution for the first-of-class at its cost target of £280 million.
Fig. 3.6 The Class solution at £152 million: all capability options to the left of the solid vertical lines
The resulting portfolio was wholly unsatisfactory to the group, as it precluded migration paths to the Class solution for some of the functions. This proved to be a major turning point for the group as they began an intensive process of using the model to develop compatible first-of-class and Class solutions. They began by blocking off high-priority options in the first-of-class model that limited the migration path, caused incompatible combinations, were clearly unacceptable or were thought to provide unnecessary extra benefit. These changes caused a reconsideration of the Class model, so several iterations had to be completed before consistency was achieved between the two models. By the fifth iteration, plausible solutions emerged that were affordable. Figure 3.7 gives a clear graphical picture of the constraints that led to the affordable solution. The result of eliminating these solutions from the portfolio can be seen in Fig. 3.8. The possible portfolios are shown on the upper surface of the inner figure, and the Class solution lies at the knee of the inner curve. Immediately after the decision conference, a smaller number of key players carried out numerous sensitivity analyses, changing some within-criterion and across-criteria weights, to identify configurations that would improve on the first-of-class solutions. Their efforts confirmed an uncomfortable truth: under the current budget, the 12 ships would have to go to sea without either a gun or sonar. Keeping the budget fixed, the ten lowest-priority capability options would have to be forgone to pay for the gun (Fig. 3.9). In a November briefing, the Royal Navy's Commander-in-Chief said that this was not acceptable, though he was clear that the other recommendations about the Type 45 configuration could be used as the basis for agreeing next steps. By January 2000, new information had come to light suggesting that instead of the gas turbine propulsion system that had been agreed at the second decision conference, a fully integrated electrical propulsion system could now be considered. To test the feasibility of this type of system, 23 participants from the MoD and industry gathered for a third decision conference, on 25–26 January, to evaluate five propulsion options: (1) a low-risk option that provided a benchmark, (2) a variation on the system used in the Type 23 frigate, (3) the gas turbine system from the previous decision conference, (4) a variation on the gas turbine system and (5) the electric propulsion technology. These options were evaluated against five costs, eight benefit criteria, and two risk criteria, with cost estimates provided by a Rolls Royce/Alstom industry report, preference values for the benefit criteria assessed by participants on 0–100 scales as in the Equity model, and probabilities of success judged by the group for the two risk scales. These data were input to the Hiview computer program3, which converted cost data to preference values by applying an inverse linear transformation, and a logarithmic scoring rule transformed the probabilities to negative preference values.
3 Hiview is used for evaluating options on many criteria. Like Equity, it is a commercial product available from Catalyze Limited, www.catalyze.co.uk.
Fig. 3.7 The Class solution at the end of the decision conference. The lightly shaded capability solutions to the left of the solid vertical lines were blocked out of the whole-ship portfolio if they hindered the migration path, or were clearly unacceptable for other reasons. The lightly shaded solutions to the right of the vertical lines were blocked if they provided low or unnecessary added value. The up arrow indicates the next priority, the down arrow the previous one
Fig. 3.8 The position of the Class solution at £152 million, shown at point F, with constraints imposed on the model. The outer curve is the original, unconstrained result
Swing-weights assessed by participants equated the units of added value on all scales. The overall weighted scores revealed a surprise: the electrical propulsion system was both lowest in overall cost and best in overall risk-adjusted benefit. This conclusion withstood numerous sensitivity analyses on the weights of the criteria. In January 2002, a fourth 2-day facilitated decision conference of 23 participants from the MoD and BAE Systems reviewed priorities in light of new information, as work was soon to begin on building the first three ships. All the functions were reviewed, scores changed as required, and new weights assessed. BAE Systems made further slight refinements after the meeting, and on re-running the model found that very little had changed. At a half-day follow-through meeting in mid-February, the author helped the four BAE Systems participants to use the model to develop a short-term plan.
3.5 Discussion

From the UK's withdrawal from the Horizon Project in April 1999, which signalled the "Initial Gate" for the planning start of the Type 45, through to the "Main Gate" approval by the MOD and subsequent announcement by the UK's Defence Secretary in July 2000, just 15 months passed, a record for a major equipment programme
Fig. 3.9 Forcing the 155 mm gun into the affordable portfolio (right-most option in MCGS) causes Equity to trade out ten lowest-priority options (darker shading) in order to fund the gun
in the MOD. Why did this happen so quickly? Of course, the ship configuration from the Horizon project provided a head start, but a major reason is that all the key players were aligned at the end of the decision conferencing process. As a result, few changes were made during the subsequent design process, and the built ships show pretty much the same configuration as was agreed at the end of the decision conferencing process. We believe that five features contributed to the success of the project. First, all the key perspectives on the capabilities of an affordable ship were represented by participants in the decision conferences – people were not just brought into meetings as they were needed. At first, we met considerable resistance to the idea of meetings that lasted 2 or 3 days, and it was necessary for senior military staff to mandate attendance. Very quickly, though, participants discovered the value of interacting with others as it became evident that decisions about one system often created knock-on consequences elsewhere, and as new insights developed that affected the ship’s capabilities. Second, it was possible to keep working meetings of 20–25 people on track and focussed through the efforts of the facilitators – the author and two Royal Naval personnel, one of whom supported the computer modelling, while the other worked with the author in facilitating the group. The discussion moved from one part of the MCDA model to another, enabling the group to maintain focus on one piece of the overall picture at a time. Third, the modelling enabled military judgement to be made explicit, as in the case of the S-shaped value function for accommodation space. Indeed, the accommodation team argued during the first two decision conferences about the importance of good accommodation to morale, which led to the development of the value function. (A judgement later confirmed by the glowing pride of a seaman as he showed his personal space on the Daring to the interviewer in a national network TV program broadcast in 2010.) Fourth, working together and building trust enabled the group to circumvent the potential impasse between being specific about requirements and costs. Initially, industry asked for detailed requirements to develop realistic costs, but the MoD needed to know what level of performance industry could provide before being too specific about requirements. By exploring capabilities rather than specific system requirements, and approximate costs for the capabilities, it became possible to break this stalemate. Fifth, the senior Navy staff effectively used the MCDA model to “steer” the answer to better and more realistic solutions. After seeing the incompatibility between the first-of-class and Class solutions, they tried out different judgements, looked at the results, made new judgements, saw those results, shaping and reshaping their judgements until they arrived at solutions that were consistent and realistic. Indeed, all participants were constructing their preferences throughout the process (Fischhoff 2006; Lichtenstein and Slovic 2006), learning from each other, gaining new perspectives, trying out novel ideas, developing a shared understanding of the issues and gaining a higher-level view of the whole technical system. Everyone recognised there was no single “best” solution that would be revealed by the MCDA
model; rather, they saw the model as a way to test ideas, to try out different capability solutions before having to commit to purchasing specific systems, and to find the collectively best solution possible. We had not anticipated the extensive re-working that characterised this exploration of an affordable configuration. Although the project began with a thorough study to gain agreement about the Sump, this was revised over and over again at successive workshops and decision conferences. The 25 functional areas of required ship capability were defined early in the project, but by the third decision conference had been reduced to 18. Objectives were agreed early and translated into four criteria, the most important one being performance, which was defined differently for each functional capability area, and these continued to be refined during the project. Logistic costs proved to be a particularly difficult challenge because there were, of course, no data available for such a new type of ship. As the developing MCDA model only required relative scores for the benefits, a separate group developed a point-scoring model that saw revisions at each successive decision conference. Again, the value of working as a group became evident, as new ideas developed. It is clear to us that we were engaged in a socio-technical system. The many workshops and decision conferences engaged all the key players, and the group processes helped them to construct coherent preferences that contributed to an overall integrated solution. Each participant spoke from his or her own area of expertise, but was subject to instant peer review by the whole group. Many inputs were changed as this process of judging and reviewing took place, a process that helped the group construct new, robust solutions before they had to be tested in reality. The model played a role similar to that of an architect's model (Gregory et al. 1993): representing a to-be-constructed future, which could be modified, shaped and improved by examining the overall result, eliminating inconsistencies, improving sub-system designs, preventing over-designing for excess capability, and keeping costs and risks under control.
3.6 Conclusions

The original plan for 12 ships was reduced to eight ships in 2003, but delays and rising costs no doubt contributed to the decision in June 2008 to further scale back the programme to six ships. HMS Daring was launched on 1 February 2006 and commissioned on 23 July 2009, after extensive sea trials, and the second ship, HMS Dauntless, was commissioned on 3 June 2010. At this writing the next four ships were either under construction at BAE Systems Surface Fleet, fitting out or in sea trials. Perhaps the best people to provide an overall view of this project are those who were directly involved and responsible for delivering a solution. I asked four key participants to provide their views about the decision conferencing/MCDA modelling process; their roles in the Type 45 project are in parentheses.
Rear Admiral Philip Greenish (from 1997 to 2000, Director of Operational Requirements, Sea Systems, and then, following reorganization, Director of Equipment Capability, Above Water Battlespace) We were faced with a very urgent need to define an affordable capability for the Type 45 destroyer when the UK withdrew from the Horizon Project. We were unlikely to have sufficient funding to provide the full Horizon capability – at least in the first of class – and we did not have time for traditional approaches to trading cost and capability. With some limited experience in the use of decision conferencing and multi-criteria decision analysis, we embarked on what could best be termed a voyage of exploration and discovery. The result was extraordinary: consensus was achieved across all aspects of capability with a rapidity that was scarcely believable for such a complex scenario and with such difficult constraints. Furthermore, the decisions made [were] then sustained. It was widely agreed by all participants that there had been a thorough, comprehensive and open debate which had been supported by an appropriate methodology that everyone had bought in to. It is a pleasure to see the first ships of the class at sea looking pretty much as we had envisaged.
Rear Admiral Richard Leaman (Assistant Director Above Water Battlespace – responsible for requirements definition for the T45, Future Carrier and Future Escort) When initially introduced, this process was sorely needed, and ground-breaking thinking for the MoD. For the first time, highly complex decisions on cost-capability trade-offs could be made in a consensual, analytical and structured way. Whilst not everybody agreed with the final results, all the key stakeholders had been given the chance to argue their case, and then had no option but to accept the broad conclusions that emerged. A great tool and great process; I have used it successfully several times since – in a $500 million complex NATO transformation programme, all the way to a £60 million National charity budget. Skilled facilitation is critical – I would recommend the reader chooses contractor very carefully.
Captain Malcolm Cree (Requirements Desk Officer, Equipment Capability Customer, Above Water Battlespace Directorate) I took over desk level MoD responsibility for reconciling the "requirements" (performance), time and cost conundrum of the Type 45 Destroyer project from the moment the UK pulled out of the tri-national Horizon Frigate programme. I was faced with a new programme, a very well developed and over-ambitious set of requirements, an extremely demanding timeline and a budget that was insufficient to deliver the capability required in the time allowed. However, the MoD had just introduced "Smart Procurement," and Type 45 was the first major project that would face approval under this new regime. I was initially sceptical about using externally facilitated Decision Conferencing to help decipher the conundrum but I rapidly became a fervent supporter and helped to bring the key decision makers in the MoD on side with the process. The critical factor was that everyone involved – the IPT, MoD, Fleet HQ and Industry – had a common interest in succeeding, because failure to solve the problem would have seriously threatened the entire project. The MCDA approach drove all parties, who would not normally have engaged in an open, honest debate, to recognise the viewpoints of others and realise what was vital, important, desirable and superfluous, enhancing understanding of the issues, generating a sense of common purpose and eventually a commitment to the way forward. The Prime Contractor was obliged to produce realistic capability solutions and costs; over-estimating would have made the project unaffordable, under-estimating would have laid them open to serious criticism down-stream. I suspect that they became as intellectually and socially committed to the process as the MoD participants.
I was able to argue the case for longer term "growth paths" and for including capabilities that were of minimal cost if "designed in" but would have cost much more to "add in" later (such as the requirement to carry an embarked military force of up to 60 troops and strengthen the flight deck to enable a Chinook to land on it). Several capabilities that formerly would have been regarded as unaffordable "nice to have" options became highly cost-effective realities. We were also able to justify building in "growth potential," such as space for additional (and longer) missile silos for a range of possible weapons – the common sense of including this "future-proofing" at the design stage would simply not have been possible without the rigour of MCDA and the inclusion of "soft" criteria. The end result made "selling" the plan to senior decision makers, and the production of a convincing Main Gate Business Case, much simpler. The project sailed through Main Gate and won a commendation from the Board and its Chairman (the Chief Scientific Advisor). I am convinced that this would not have been possible with a traditional procurement approach. The use of MCDA and Decision Conferencing did not stop there; with the support of several senior officers, I was able to roll out the process across the whole Above Water Battlespace Directorate, enabling the prioritisation of all the current and future projects. The process was then applied across the whole of the central Defence procurement budget. The former worked well because fundamentally we all shared a desire to achieve the best from the money available for the Royal Navy, even to the point of accepting that projects individuals had worked on for years were not the best use of resources after all. The latter also worked but was poorly applied in many areas and fell foul of inter-service rivalry and the lack of robust, independent facilitation. Decision Conferencing supported by Equity and Hiview, with experienced independent facilitation, for Type 45 and later work in the MoD enabled complex problems to be solved and encouraged real teamwork. Some of the results were undermined by subsequent MoD savings (6 ships instead of 12) but when I first embarked and went to sea in HMS DARING, I was able to point out to the Captain all the things that had only been made possible by a process, a model, a decision analyst and a group of sufficiently enlightened people with a common aim. I have used MCDA and Decision Conferencing many times since, from prioritising savings and enhancement measures for the Fleet Command to deciding the optimum Base Port arrangements for the Navy's submarines and mine countermeasures vessels. The steps of MCDA have much in common with the military "appreciation" or "campaign planning" process (that we tend to ignore when faced with a budget or a programme instead of an enemy!). When applied with care, MCDA is the most helpful process I have come across in many years of appointments in which prioritisation and decision making were the key requirements. I only wish it worked at home!
Chris Lloyd (Type 45 Design Authority & Engineering Director, BAE Systems, 1999–2003) Traditional conceptual design trade-off studies can take many months to laboriously iterate through numerous design solutions and costing exercises. One of the great benefits of Decision Conferencing was its ability to mix and match the major capability elements and achieve a near instant assessment of the relative performance and cost. In this respect, the sensitivity analysis was crucial to build confidence that simple models were suitable for even the most complex problems. The other key element was the concept of agreeing a "target cost" and focussing on what could be afforded rather than fixing the scope and commercially negotiating the "best" price. Whilst common in commercial sectors, this was, and still is, a novelty in Defence. Another revelation of Decision Conferencing to the Type 45 Programme participants was how it dealt with the subjectivity of capability requirements, prioritisation and arriving at shared conclusions. This "social" process of joint workshops with all leading stakeholders was fundamental to its success. In my 25 years in naval procurement, it was the first and
only time that I have seen Flag Officers openly debating the relative merits of radars and electronic warfare systems with senior procurement managers, technical specialists and ship designers. The exchange of knowledge and thus convergence on a jointly understood solution was phenomenal. Much is made of Integrated Project Teams (IPT) and Customer/Industry collaboration in Defence, but this is one of the few occasions where I have seen an approach that actually steers the participants towards a conclusion in days, not weeks or years.

Acknowledgements I am grateful to Malcolm Cree and the editors for helpful suggestions that improved the manuscript. I also belatedly thank the many participants in the decision conferences, particularly Commander Dean Molyneaux and Lieutenant Commander Bill Biggs, who assisted in facilitating the decision conferences, Commodore Philip Greenish, who ably led participants through the tangled web of conflicting objectives, Brigadier General Keith Prentiss, who continued to exert pressure on costs, and Captain Joe Kidd, Captain Joe Gass and Commander Malcolm Cree, who championed the process from the start and saw it through to the final propulsion decision conference.
References

Bernardo JM, Smith AFM (1994) Bayesian theory. Wiley, Chichester
Delbecq A, Van de Ven A, Gustafson D (1974) Group techniques for program planning. Scott Foresman, Glenview
Fischhoff B (2006) Constructing preferences from labile values. In: Lichtenstein S, Slovic P (eds) The construction of preference. Cambridge University Press, New York
Gregory R, Lichtenstein S, Slovic P (1993) Valuing environmental resources: a constructive approach. J Risk Uncertain 7(2):177–197
Hardin G (1968) The tragedy of the commons. Science 162:1243–1248
Keeney RL, Raiffa H (1976) Decisions with multiple objectives: preferences and value tradeoffs. Wiley, New York
Lichtenstein S, Slovic P (eds) (2006) The construction of preference. Cambridge University Press, New York
National Audit Office (2002) Ministry of Defence major project reports. The Stationery Office, London
Phillips LD (2007) Decision conferencing. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis: from foundations to applications. Cambridge University Press, Cambridge
Phillips LD, Bana e Costa CA (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154(1):51–68
Renn O (1999) A model for an analytic-deliberative process in risk management. Environ Sci Technol 33(18):3049–3055
Roxburgh C (2003) Hidden flaws in strategic thinking. McKinsey Quarterly, 2
Schein EH (1999) Process consultation revisited: building the helping relationship. Addison-Wesley, Reading
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science 185:1124–1131
Part II
Methodology
Chapter 4
Valuation of Risky Projects and Illiquid Investments Using Portfolio Selection Models

Janne Gustafsson, Bert De Reyck, Zeger Degraeve, and Ahti Salo

J. Gustafsson (corresponding author), Ilmarinen Mutual Pension Insurance Company, Porkkalankatu 1, 00018 Ilmarinen, Helsinki, Finland; e-mail: [email protected]
Abstract We develop a portfolio selection framework for the valuation of projects and other illiquid investments for an investor who can invest in a portfolio of private, illiquid investment opportunities as well as in securities in financial markets, but who cannot necessarily replicate project cash flows using financial instruments. We demonstrate how project values can be solved using an inverse optimization procedure and prove several general analytical properties of project values. We also provide an illustrative example on the modeling and pricing of multiperiod projects that are characterized by managerial flexibility.
4.1 Introduction

Project valuation and selection has attracted plenty of attention among researchers and practitioners over the past few decades. Suggested methods for this purpose include (1) discounted cash flow analysis (DCF, see e.g. Brealey and Myers 2000) to account for the time value of money, (2) project portfolio optimization (see Luenberger 1998) to account for limited resources and several competing projects, and (3) options pricing analysis, which has focused on the recognition of the managerial flexibility embedded in projects (Dixit and Pindyck 1994; Trigeorgis 1996). Despite the research efforts that have been made to address challenges in project valuation, traditional methods tend to suffer from shortcomings which limit their practical use and theoretical relevance. For example, DCF analysis does not specify how the discount rate should be derived so as to properly account for (a) the time value of money, (b) risk adjustment implied by (1) the investor's risk aversion (if any) and (2) the risk of the project and its impact on the aggregate risk faced
by the investor through diversification and correlations; and (c) opportunity costs imposed by alternative investment opportunities such as financial instruments and competing project opportunities. Traditionally, it is proposed that a project’s cash flows should be discounted at the rate of return of a publicly traded security that is equivalent in risk to the project (Brealey and Myers 2000). However, the definition of what constitutes an “equivalent” risk is problematic; in effect, unless a publicly traded instrument (or a trading strategy of publicly traded instruments) exactly replicates the project cash flows in all future states of nature, it is questionable whether a true equivalence exists. (Also, it is not clear if such a discount rate would account for anything else than opportunity costs implied by securities.) Likewise, traditional options pricing analysis requires that the project’s cash flows are replicated with financial instruments. Still, most private projects and other similar illiquid investments are, by their nature, independent of the fluctuations of the prices of publicly traded securities. Thus, the requirement of the existence of a replicating portfolio for a private project seems unsatisfactory as a theoretical assumption. In this chapter, we approach the valuation of private investment opportunities through portfolio optimization models. We examine a setting where an investor can invest both in market-traded, infinitely divisible assets as well as lumpy, nonmarkettraded assets so that the opportunity costs of both classes of investments can be accounted for. Examples of such lumpy assets include corporate projects, but in general the analysis extends to any nontraded, all-or-nothing-type investments. Market-traded assets include relevant assets that are available to the investor, such as equities, bonds, and the risk-free asset which provides a risk-free return from one period to the next. In particular, we demonstrate how portfolio optimization models can be used to determine the value of each nontraded lumpy asset within the portfolio. We also show that the resulting values are consistent with options pricing analysis in the special case that a replicating portfolio (or trading strategy) exists for such a private investment opportunity. Our procedure for the valuation of a project resembles the traditional net present value (NPV) analysis in that we determine what amount of money at present the investor equally prefers to the project, given all the alternative investment opportunities. Because this equivalence can be determined in two similar but different ways (where the choice of the appropriate way depends on the setting being modeled), we present two pricing concepts: breakeven selling price (BSP) and breakeven buying price (BBP), which have been used frequently in decision analytic approaches to investment decision making and even other settings (Luenberger 1998; Raiffa 1968; Smith and Nau 1995). Regarding the investor’s preferences, we seek to keep the treatment quite generic by allowing the use of virtually any rational preference model. The preference model merely influences the objective function of the portfolio model and possibly introduces some risk constraints, which do not alter the core of the valuation procedure, on condition that the preference model makes it possible to determine when two investment portfolios are equally preferred. 
In particular, we show that the required portfolio models can be formulated for both expected utility maximizers and mean-risk optimizers.
4 Valuing Investments with Portfolio Selection Methods
81
To illustrate how the project valuation procedure can be implemented in a multiperiod setting with projects that are characterized by varying degrees of managerial flexibility, we use contingent portfolio programming (CPP, see Gustafsson and Salo 2005) to formulate a multiperiod project-security portfolio selection model. This approach uses project-specific decision trees to capture real options that are embedded in projects. Furthermore, we provide a numerical example that parallels the example in Gustafsson and Salo (2005). This example suggests that while there is a wide array of issues to be addressed in multiperiod settings, it is still possible to deal with them all with the help of models that remain practically tractable. Relevant applications of our valuation methodology include, for instance, pharmaceutical development projects, which are characterized by large nonmarket-related risks and which are often illiquid due to several factors (see Chapter 13 in this book). For example, if the project is very specialized, the expertise to evaluate it may be available only within the company, in which case it may be difficult for outsiders to verify the accuracy and completeness of evaluation information. Also, details concerning pharmaceutical compounds may involve corporate secrets that can be delivered only to trusted third parties, which may lower the number of potential buyers and decrease the liquidity of the project. This chapter is structured as follows. Section 4.2 introduces the basic structure of integrated project-security portfolio selection models and discusses different formulations of portfolio selection problems. In Section 4.3, we introduce our valuation concepts and examine their properties. Section 4.4 discusses the valuation of investment opportunities such as real options embedded in the project. Section 4.5 gives an example of the framework in a multiperiod setting, which allows us to compare the results with a similar example in Gustafsson and Salo (2005). In Section 4.6, we summarize our findings and discuss implications of our results.
4.2 Integrated Portfolio Selection Models for Projects and Securities

4.2.1 Basic Model Structure

We consider investment opportunities in two categories: (1) securities, which can be bought and sold in any quantities, and (2) projects, lumpy all-or-nothing type investments. From a technical point of view, the main difference between these two types of investments is that the projects' decision variables are binary, while those of the securities are continuous. Another difference is that the cost, or price, of securities is determined by a market equilibrium model, such as the Capital Asset Pricing Model (CAPM, see Lintner 1965; Sharpe 1964), while the investment cost of a project is an endogenous property of the project.
Portfolio selection models can be formulated either in terms of rates of return and portfolio weights, as in Markowitz-type formulations, or by specifying a budget constraint, expressing the initial wealth level, subject to which the investor's terminal wealth level is maximized. The latter approach is more appropriate for project portfolio selection, because the investor is often limited by a budget constraint and it is natural to characterize projects in terms of cash flows rather than in terms of portfolio weights and returns.
4.2.2 Types of Preference Models

Early portfolio selection formulations (see, e.g., Markowitz 1952) were bi-criteria decision problems minimizing risk while setting a target for expected return. Later, the mean-variance model was formulated in terms of expected utility theory (EUT) using a quadratic utility function. However, there are no similar utility functions for most other risk measures, including the widely used absolute deviation (Konno and Yamazaki 1991). In effect, due to the lexicographic nature of bi-criteria decision problems, most mean-risk models cannot be represented by a real-valued functional, thus being distinct from the usual preference functional models such as the expected utility model. Therefore, we distinguish between two classes of preference models: (1) preference functional models, such as the expected utility model, and (2) bi-criteria optimization models or mean-risk models. For example, for EUT, the preference functional is $U[X] = E[u(X)]$, where $u(\cdot)$ is the investor's von Neumann–Morgenstern utility function. Many other kinds of preference functional models, such as Choquet-expected utility models, have also been proposed. In addition to preference functional models, mean-risk models have been widely used in the literature. These models are important, because much of the modern portfolio theory, including the CAPM, is based on a mean-risk model, namely the Markowitz mean-variance model (Markowitz 1952). Table 4.1 describes three possible formulations for mean-risk models: risk minimization, where risk is minimized for a given level of expectation (Luenberger 1998), expected value maximization, where expectation is maximized for a given level of risk (Eppen et al. 1989), and the additive formulation, where the weighted sum of mean and risk is maximized (Yu 1985) and which is effectively a preference functional model. The models employed by Sharpe (1970) and Ogryczak and Ruszczynski (1999) are special cases of this model. In Table 4.1, $\rho$ is the investor's risk measure, $\mu$ is the minimum level for expectation, and $R$ is the maximum level for risk. The parameters $\lambda$ are tradeoff coefficients. In our setting, the sole requirement for the applicable preference model is that it can uniquely identify equally preferable portfolios. By construction, models based on preference functionals have this property. Also, risk minimization and expected value maximization models can be employed if we define that equal preference prevails whenever the objective function values are equal and the applicable constraints are satisfied (but not necessarily binding).
Table 4.1 Formulations of mean-risk preference models

  Model                              Objective                                    Constraints
  Risk minimization                  $\min \rho[X]$                               $E[X] \ge \mu$
  Expected value maximization        $\max E[X]$                                  $\rho[X] \le R$
  General additive                   $\max \lambda_1 E[X] - \lambda_2 \rho[X]$    (none)
  Sharpe (1970)                      $\max E[X] - \lambda \rho[X]$                (none)
  Ogryczak and Ruszczynski (1999)    $\max E[X] - \lambda \rho[X]$                (none)
In general, there is no particular reason to favor any one of these models, because the choice of the appropriate model may depend on the setting. For example, in settings involving the CAPM, the mean-variance model may be more appropriate, while decision theorists may prefer to opt for the expected utility model.
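For concreteness, the snippet below evaluates one assumed instance of the additive form in Table 4.1, with the risk measure $\rho$ taken to be the absolute deviation mentioned above; the payoffs, probabilities and trade-off coefficient are illustrative only.

```python
import numpy as np

def mean_risk_objective(payoffs, probs, lam=0.5):
    """Additive mean-risk objective E[X] - lam * rho[X], with rho chosen here
    to be the mean absolute deviation (an illustrative special case)."""
    mean = float(np.dot(probs, payoffs))
    mad = float(np.dot(probs, np.abs(payoffs - mean)))
    return mean - lam * mad

# Portfolio payoff X over three scenarios with probabilities 0.3 / 0.4 / 0.3
print(mean_risk_objective(np.array([120.0, 100.0, 70.0]), np.array([0.3, 0.4, 0.3])))  # 88.9
```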
4.2.3 Single-Period Example Under Expected Utility

A single-period portfolio model under expected utility can be formulated as follows. Let there be $n$ risky securities, a risk-free asset (labeled as the 0th security), and $m$ projects. Let the price of asset $i$ at time 0 be $S_i^0$ and let the corresponding (random) price at time 1 be $\tilde{S}_i^1$. The price of the risk-free asset is 1 at time 0 and $1 + r_f$ at period 1, where $r_f$ is the risk-free interest rate. The amounts of securities in the portfolio are denoted by $x_i$, $i = 0, \ldots, n$. The investment cost of project $k$ at time 0 is $C_k^0$ and the (random) cash flow at time 1 is $\tilde{C}_k^1$. The binary variable $z_k$ indicates whether project $k$ is started or not. The investor's budget is $b$. We can then formulate the model using utility function $u$ as follows:

1. Maximize utility at time 1:
$$\max_{x,\,z} \; E\!\left[ u\!\left( \sum_{i=0}^{n} \tilde{S}_i^1 x_i + \sum_{k=1}^{m} \tilde{C}_k^1 z_k \right) \right]$$

subject to

2. Budget constraint at time 0:
$$\sum_{i=0}^{n} S_i^0 x_i + \sum_{k=1}^{m} C_k^0 z_k \le b$$

3. Binary variables for projects: $z_k \in \{0, 1\}$, $k = 1, \ldots, m$

4. Continuous variables for securities: $x_i$ free, $i = 0, \ldots, n$

In typical settings, the budget constraint could be formulated as an equality, because in the presence of a risk-free asset all of the budget will normally be expended at the optimum. In this model and throughout the chapter, it is assumed that there are no transaction costs or taxes on capital gains, and that the investor is able to borrow and lend at the risk-free interest rate without limit.
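As a concrete illustration, the following sketch implements a small scenario-based instance of this model. It is not taken from the chapter: the prices, cash flows and the exponential utility function are assumptions, and the risk-free holding is used to absorb the remaining budget so that the risky positions and project decisions can be optimised directly.

```python
from itertools import product

import numpy as np
from scipy.optimize import minimize

rf = 0.05                                  # risk-free rate
p  = np.array([0.3, 0.4, 0.3])             # scenario probabilities
S0 = np.array([10.0, 20.0])                # security prices at time 0
S1 = np.array([[12.0, 10.0,  8.0],         # security prices at time 1
               [25.0, 21.0, 15.0]])        # (one row per security, one column per scenario)
C0 = np.array([4.0, 6.0])                  # project investment costs at time 0
C1 = np.array([[9.0, 5.0, 1.0],            # project cash flows at time 1
               [8.0, 8.0, 2.0]])
b  = 20.0                                  # budget
R  = 15.0                                  # risk tolerance of the exponential utility

def expected_utility(x_risky, z):
    """E[u(terminal wealth)] for risky holdings x_risky and project decisions z."""
    x0 = b - S0 @ x_risky - C0 @ z                     # risk-free holding fills the budget
    wealth = x0 * (1 + rf) + x_risky @ S1 + z @ C1     # wealth in each scenario
    return float(np.sum(p * -np.exp(-wealth / R)))

best = None
for z_tuple in product([0, 1], repeat=len(C0)):        # enumerate project portfolios
    z = np.array(z_tuple, dtype=float)
    res = minimize(lambda x: -expected_utility(x, z), x0=np.zeros(len(S0)))
    if best is None or -res.fun > best[0]:
        best = (-res.fun, z, res.x)

eu, z_opt, x_opt = best
print("projects:", z_opt, "risky holdings:", np.round(x_opt, 2), "E[u]:", round(eu, 4))
```

For larger instances one would hand the problem to a mixed-integer solver rather than enumerate the project combinations, but the structure of the model is the same.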
4.3 Valuation of Projects and Illiquid Investments

4.3.1 Breakeven Buying and Selling Prices

Because we consider projects as illiquid, nontradable investment opportunities, there is no market price that can be used to value the project. In such a setting, it is reasonable to define the value of the project as the cash amount at present that is equally preferred to the project. In a portfolio context, this can be interpreted so that the investor is indifferent between the following two portfolios: (A1) a portfolio with the project and (B1) a portfolio without the project and additional cash equal to the value of the project. Alternatively, however, we may define the value of a project as the indifference between the following two portfolios: (A2) a portfolio without the project and (B2) a portfolio with the project and a reduction in available cash equal to the value of the project. The project values obtained in these two ways will not, in general, be the same. Analogous to (Luenberger 1998; Raiffa 1968; Smith and Nau 1995), we refer to the first value as the "breakeven selling price," as the portfolio comparison can be understood as a selling process, and the second type of value as the "breakeven buying price." A central element in BSP and BBP is the determination of equal preference for two different portfolios, which holds in many portfolio optimization models when the optimal values for the objective function match for the two portfolios. This works straightforwardly under preference functional models where the investor is, by definition, indifferent between two portfolios with equal utility scores, but the situation becomes slightly more complicated for mean-risk models where equal preference might be regarded to hold only when two values – mean and risk – are equal for the two portfolios. However, as we discussed before, if the risks are modeled as constraints, the investor can be said to be indifferent if the expectations of the two portfolios are equal and they both satisfy the risk constraints. Thus, we can establish equal preference by comparing the optimal objective function values also in this case. Table 4.2 describes the four portfolio selection settings and, in particular, the necessary modifications to the base portfolio selection model for the calculation of breakeven prices. Here, the base portfolio selection model is simply the model that is appropriate for the setting being modeled; for example, it can be the simple one-period project-security model given in Section 4.2.3 or the complex multiperiod model described in Section 4.5. All that is required for the construction of the underlying portfolio selection problem is that it makes it possible to establish equal preference between two portfolios (in this case through the objective function value) and to have a parameter that describes the initial budget. Here, $v_j^s$ and $v_j^b$ are unknown modifications to the budget in Problem 2 such that the optimal objective function value in Problem 2 matches that of Problem 1. The aim of our valuation methodology is to determine the values of these unknown parameters.
Table 4.2 Definitions of the value of project j

A. Breakeven selling price
  Definition: $v_j^s$ such that $W_s^+ = W_s^-$
  Problem 1 (Problem A1): mandatory investment in the project; optimal objective function value $W_s^+$; budget at time 0: $b^0$
  Problem 2 (Problem A2): project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value $W_s^-$; budget at time 0: $b^0 + v_j^s$

B. Breakeven buying price
  Definition: $v_j^b$ such that $W_b^+ = W_b^-$
  Problem 1 (Problem B1): project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value $W_b^-$; budget at time 0: $b^0$
  Problem 2 (Problem B2): mandatory investment in the project; optimal objective function value $W_b^+$; budget at time 0: $b^0 - v_j^b$
4.3.2 Inverse Optimization Procedure

Finding a BSP and BBP is an inverse optimization problem (see e.g. Ahuja and Orlin 2001): one has to find for what budget the optimal value of Problem 2 matches a certain desired value (the optimal value of Problem 1). Indeed, in an inverse optimization problem, the challenge is to find the values for a set of parameters, typically a subset of all model parameters, that yield the desired optimal solution. Inverse optimization problems can broadly be classified into two groups: (a) finding an optimal value for the objective function and (b) finding a solution vector. The problem of finding a BSP or BBP falls within the first class. In principle, the task of finding a BSP is equivalent to finding a root of the function $f^s(v_j^s) = W_s^-(v_j^s) - W_s^+$, where $W_s^+$ is the optimal value of Problem 1 and $W_s^-(v_j^s)$ is the corresponding optimal value in Problem 2 as a function of the parameter $v_j^s$. Similarly, the BBP can be obtained by finding the root of the function $f^b(v_j^b) = W_b^- - W_b^+(v_j^b)$. In typical portfolio selection problems, where the investor exhibits normal risk preferences and a risk-free asset is available, these functions are normally increasing with respect to their parameters. To solve such root-finding problems, we can use any of the usual root-finding algorithms (see, e.g. Belegundu and Chandrupatla 1999) such as the bisection method, the secant method, and the false position method. These methods do not require knowledge of the functions' derivatives, which are typically not known. If the first derivatives are known, or when approximated numerically, we can also use the Newton–Raphson method. Solution of BSP and BBP in more complex settings with discontinuous or nonincreasing functions may require more sophisticated methods. It may be noted that, in some extreme, possibly unrealistic settings, equal preference cannot be established for any budget amount, which would imply that BSP and BBP would not exist.
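The root-finding step can be sketched as follows. The routine is an assumption-laden illustration rather than the authors' implementation: `optimal_value(budget, include_j)` stands in for whatever base portfolio model is used (for example, the single-period model above), and the bracket-expansion step relies on the monotonicity noted in the text.

```python
def breakeven_selling_price(optimal_value, b0, lo=0.0, hi=100.0, tol=1e-6):
    """Bisection for the BSP of a hypothetical project j.

    optimal_value(budget, include_j) -- optimal objective of the base model with the
      given budget; include_j=True forces investment in project j, False prohibits it.
    """
    target = optimal_value(b0, include_j=True)           # W_s^+ of Problem A1

    def f(v):                                            # f^s(v) = W_s^-(v) - W_s^+
        return optimal_value(b0 + v, include_j=False) - target

    # Expand the bracket until the root is enclosed (f is assumed increasing in v).
    while f(hi) < 0:
        hi *= 2
    while f(lo) > 0:
        lo -= (hi - lo)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy check: if the optimum were simply budget + 5*include_j, the BSP should be 5.
print(breakeven_selling_price(lambda budget, include_j: budget + 5.0 * include_j, b0=20.0))
```

The breakeven buying price follows the same pattern with the roles of the two problems exchanged and the adjustment subtracted from, rather than added to, the budget.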
4.3.3 General Analytical Properties

4.3.3.1 Sequential Consistency

Breakeven selling and buying prices are not, in general, equal to each other. While this discrepancy is accepted as a general property of risk preferences in EUT (Raiffa 1968), it may also seem to contradict the rationality of these valuation concepts. It can be argued that if the investor were willing to sell a project at a lower price than that at which he/she would be prepared to buy it, the investor would create an arbitrage opportunity and lose an infinite amount of money when another investor repeatedly bought the project at its selling price and sold it back at the buying price. In the reverse situation, where the investor's selling price for a project is greater than the respective buying price, the investor would be irrational in the sense that he/she would not take advantage of an arbitrage opportunity – if such an opportunity existed – where it would be possible to buy the project repeatedly at the investor's buying price and to sell it at a slightly higher price below the investor's BSP. However, these arguments ignore the fact that the breakeven prices are affected by the budget and that therefore these prices may change after obtaining the project's selling price and after paying its buying price. Indeed, it can be shown that in a sequential setting where the investor first sells the project, adds the selling price to the budget, and then buys the project back, the investor's selling price and the respective (sequential) buying price are always equal to each other. This observation is formalized as the following proposition. The proof is given in the Appendix. It is assumed in this proof and throughout Section 4.3.3 that the objective function is continuous and strictly increasing with respect to the investor's budget (money available at present). Thus, an increase in the budget will always result in an increase in the objective function value.

Proposition 1. A project's breakeven selling (buying) price and its sequential breakeven buying (selling) price are equal to each other.
4.3.3.2 Consistency with Contingent Claims Analysis

Option pricing analysis, or contingent claims analysis (CCA; see, e.g., Brealey and Myers 2000; Luenberger 1998), can be applied to value projects whenever the cash flows of a project can be replicated using financial instruments. According to CCA, the value of project j is given by the market price of the replicating portfolio (the portfolio required to initiate a replicating trading strategy) less the investment cost of the project:

$$v_j^{CCA} = I_j - I_j^0$$

where $I_j^0$ is the time-0 investment cost of the project and $I_j$ is the cash needed to initiate the replicating trading strategy. A replicating trading strategy is a trading strategy using financial instruments that exactly replicates the cash flows of the project in each future state of nature.
It is straightforward to show that, when CCA is applicable, i.e., if there exists a replicating trading strategy, then the breakeven buying and selling prices are equal to each other and yield the same result $v_j^{CCA}$ as CCA (cf. Smith and Nau 1995). This follows from the fact that a portfolio consisting of (A) the project and (B) a shorted replicating portfolio is equal to getting a cash flow for sure now and zero cash flows at all future states of nature. Therefore, whenever a replicating portfolio exists, the project can effectively be reduced to obtaining a sure amount of money at present. To illustrate this, suppose first that $v_j^{CCA}$ is positive. Then, any rational investor will invest in the project (and hence its value must be positive for any such investors), since it is possible to make money for sure, amounting to $v_j^{CCA}$, by investing in the project and shorting the replicating portfolio. Furthermore, any rational investor will start the project even when he/she is forced to pay a sum $v_j^b$ less than $v_j^{CCA}$ to gain a license to invest in the project, because it is now possible to gain $v_j^{CCA} - v_j^b$ for sure. On the other hand, if $v_j^b$ is greater than $v_j^{CCA}$, the investor will not start the project, because the replicating portfolio makes it possible to obtain the project cash flows at a lower cost. A similar reasoning applies to BSPs. These observations are formalized in Proposition 2. The proof is straightforward and is given in the Appendix. Due to the consistency with CCA, the breakeven prices can be regarded as a generalization of CCA to incomplete markets.

Proposition 2. If there is a replicating trading strategy for a project, the breakeven selling price and breakeven buying price are equal to each other and yield the same result as CCA.
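To make Proposition 2 concrete, the following sketch computes $v_j^{CCA}$ in an assumed one-period, two-state market, where the project's cash flows can be replicated exactly with one risky security and the risk-free asset. All figures are hypothetical.

```python
rf = 0.05
S0, Su, Sd = 10.0, 12.0, 8.0        # security price now and in the up/down state
Cu, Cd = 9.0, 3.0                   # project cash flows in the up/down state
I0 = 5.0                            # project investment cost at time 0

x = (Cu - Cd) / (Su - Sd)           # units of the security in the replicating portfolio
y = (Cu - x * Su) / (1 + rf)        # risk-free holding completing the replication
I_replicate = x * S0 + y            # cash needed to initiate the replicating strategy
print("replicating portfolio:", x, round(y, 3), "v_CCA:", round(I_replicate - I0, 3))
```

In this complete-market case the breakeven buying and selling prices would coincide with the value of roughly 1.43 computed here; when no replicating strategy exists, the inverse-optimization prices of Section 4.3 still apply.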
4.3.3.3 Sequential Additivity

The BBP and BSP for a project depend on what other assets are in the portfolio. The value obtained from breakeven prices is, in general, an added value, which is determined relative to the situation without the project. When there are no other projects in the portfolio, or when we remove them from the model before determining the value of the project, we speak of the isolated value of a project. We define the respective values for a set of projects as the joint added value and joint value. Figure 4.1 illustrates the relationship between these concepts. Isolated project values are, in general, non-additive; they do not sum up to the value of the project portfolio composed of the same projects. However, in a sequential setting where the investor buys the projects one after the other at the prevailing buying price at each time, the obtained project values do add up to the joint value of the project portfolio. These prices are the projects' added values in a sequential buying process, where the budget is reduced by the buying price after each step. We refer to these values as sequential added values. This sequential additivity property holds regardless of the order in which the projects are bought. Individual projects can, however, acquire different added values depending on the sequence in which they are bought. These observations are formalized in the following proposition. The proof is in the Appendix.
Fig. 4.1 Different types of valuations for projects: depending on the number of projects being valued (a single project or a portfolio of projects) and the number of other projects in the portfolio (none or additional projects), the relevant concept is the isolated value, the added value, the joint value or the joint added value
Proposition 3. The breakeven buying (selling) prices of sequentially bought (sold) projects add up to the breakeven buying (selling) price of the portfolio of the projects regardless of the order in which the projects are bought (sold).
4.4 Valuation of Opportunities

4.4.1 Investment Opportunities

When valuing a project, we can either value an already started project or an opportunity to start a project. The difference is that, although the value of a started project can be negative, that of an opportunity to start a project is always nonnegative, because a rational investor does not start a project with a negative value. While BSP and BBP are appropriate for valuing started projects, new valuation concepts are needed for valuing opportunities. Since an opportunity entails the right but not the obligation to take an action, we need selling and buying prices that rely on the comparison of settings where the investor can and cannot invest in the project, instead of does and does not. The lowest price at which the investor would be willing to sell an opportunity to start a project can be obtained from the definition of the BSP by removing the requirement to invest in the project in Problem A1. We define this price as the opportunity selling price (OSP) of the project. Likewise, the opportunity buying price (OBP) of a project can be obtained by removing the investment requirement in Problem B2. It is the highest price that the investor is willing to pay for a license to start the project. Opportunity selling and buying prices have a lower bound of zero; it is also straightforward to show that the opportunity prices can be computed by taking a maximum of 0 and the respective breakeven price. Table 4.3 gives a summary of opportunity selling and buying prices.
Table 4.3 Definitions of a value of an opportunity

A. Opportunity selling price
  Definition: $v^s$ such that $W_s^+ = W_s^-$
  Problem A1: no alteration to the base portfolio model; optimal objective function value: $W_s^+$; budget at time 0: $b_0$
  Problem A2: opportunity is excluded from the portfolio; optimal objective function value: $W_s^-$; budget at time 0: $b_0 + v^s$

B. Opportunity buying price
  Definition: $v^b$ such that $W_b^+ = W_b^-$
  Problem B1: opportunity is excluded from the portfolio; optimal objective function value: $W_b^-$; budget at time 0: $b_0$
  Problem B2: no alteration to the base portfolio model; optimal objective function value: $W_b^+$; budget at time 0: $b_0 - v^b$
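Because the text notes that an opportunity price is simply the maximum of zero and the corresponding breakeven price, its computation is a one-line clamp once the breakeven price is available; a minimal sketch:

```python
def opportunity_price(breakeven_price):
    # An opportunity carries the right but not the obligation to invest,
    # so its value is bounded below by zero.
    return max(0.0, breakeven_price)
```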
4.4.2 Real Options and Managerial Flexibility

Opportunity buying and selling prices can also be used to value real options (Brosch 2008; Trigeorgis 1996) contained in the project portfolio. These options result from the managerial flexibility to adapt later decisions to unexpected future developments. Typical examples include possibilities to expand production when markets are up, to abandon a project under bad market conditions, and to switch operations to alternative production facilities. Real options can be valued much in the same way as opportunities to start projects. However, instead of comparing portfolio selection problems with and without the possibility to start a project, we compare portfolio selection problems with and without the real option. This can typically be implemented by preventing the investor from taking a particular action (e.g., expanding production) when the real option is not present. Since breakeven prices are consistent with CCA, opportunity prices inherit this property and can thus be regarded as a generalization of the standard CCA real option valuation procedure to incomplete markets.
4.4.3 Opportunity to Sell the Project to a Third-Party Investor

From the perspective of finance theory, an important application of the real options concept is the management's ability to sell the project to a third-party investor (or to the market). Indeed, a valuation where the option to sell the project to a third-party investor is accounted for, in addition to the opportunity costs implied by other investment opportunities, can be regarded as a holistic valuation that fully accounts for both private and market factors that influence the value of the project. Projects where options to sell are important include pharmaceutical development projects, where the rights to develop compounds further can be sold to bigger companies after a certain stage in clinical trials is reached. The possibility to sell the project to the market is effectively an American put option embedded in the project. When selling the project is a relevant management
option, the related selling decisions typically need to be implemented as a part of the project's decision tree, which necessitates the use of an approach where projects are modeled through decision trees, such as CPP (Gustafsson and Salo 2005). That is, at each state, in addition to any other options available to the firm, the firm can opt to sell the project at the highest price offered by any third-party investor. The offer price may depend on several factors, including who the investor is, but if market-implied pricing is used, then it may be possible to compute the offer price by using standard market pricing techniques, such as the market-implied risk-neutral probability distribution. Like any other real option, the opportunity to sell the project can increase but cannot decrease the value of the project. Such an American put option also sets a lower bound for the value of the project.
4.5 Implementation of a Multiperiod Project Valuation Model

4.5.1 Framework

We develop an illustrative multiperiod model using the CPP framework (Gustafsson and Salo 2005). In CPP, uncertainties are modeled using a state tree, representing the structure of future states of nature, as depicted in the leftmost chart in Fig. 4.2. The state tree need not be binomial or symmetric; it may also take the form of a multinomial tree with different probability distributions in its branches. In each nonterminal state, securities can be bought and sold in any, possibly fractional, quantities. The framework includes budget balance constraints that allow the transfer of cash from one time period to the next, adding interest while doing so, so that the accumulated cash and the impact of earlier cash flows can be measured in the terminal states. Projects are modeled using decision trees that span the state tree. The two rightmost charts in Fig. 4.2 describe how project decisions, when combined with the state tree, lead to project-specific decision trees. The specific feature of these decision trees is that the chance nodes are shared by all projects, since they are generated using the common state tree. Security trading is implemented through state-specific trading variables, which are similar to the ones used in financial models of stochastic programming (e.g., Mulvey et al. 2000) and in Smith and Nau's method (Smith and Nau 1995). Similar to the single-period portfolio selection model in Section 4.2.3, the investor seeks either to maximize the utility of the terminal wealth level, or the expectation of the terminal wealth level subject to a risk constraint.
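As a concrete, simplified picture of the state-tree ingredient of CPP, the sketch below encodes a tree as a map from each state to its predecessor $B(\omega)$, from which the predecessor path $B^*(\omega)$ and the terminal states can be derived; the state names and tree shape are invented for illustration.

```python
# Hypothetical three-period state tree: each state maps to its predecessor B(w).
predecessor = {
    "w0": None,
    "w_u": "w0", "w_d": "w0",
    "w_uu": "w_u", "w_ud": "w_u", "w_du": "w_d", "w_dd": "w_d",
}

def path_to_root(state):
    """B*(w): the prevailing state together with all of its predecessors."""
    path = []
    while state is not None:
        path.append(state)
        state = predecessor[state]
    return path

# Terminal states are those that never appear as someone's predecessor.
terminal_states = [w for w in predecessor
                   if w not in set(predecessor.values())]
```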
4.5.2 Model Components

The two main components of the model are (a) states and (b) the investor's investment decisions, which imply the cash flow structure of the model.
each decision point $d$ are bound by the restriction that only one $z_a$, $a \in A_d$, can be equal to one. The state in which the action at decision point $d$ is chosen is denoted by $\omega(d)$. For a project $k$, the vector of all action variables $z_a$ relating to the project, denoted by $z_k$, is called the project management strategy of $k$. The vector of all action variables of all projects, denoted by $z$, is the project portfolio management strategy. We call the pair $(x, z)$, composed of all trading and action variables, the aggregate portfolio management strategy.
4.5.2.3 Cash Flows and Cash Surpluses

Let $CF_k^p(z_k, \omega)$ be the cash flow of project $k$ in state $\omega$ with project management strategy $z_k$. When $C_a(\omega)$ is the cash flow in state $\omega$ implied by action $a$, this cash flow is given by

$$CF_k^p(z_k, \omega) = \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)} \ \sum_{a \in A_d} C_a(\omega)\, z_a,$$

where the restriction in the summation over the decision points guarantees that actions yield cash flows only in the prevailing state and in the future states that can be reached from the prevailing state. The set $B^*(\omega)$ is defined as $B^*(\omega) = \{\omega' \in \Omega \mid \exists\, k \ge 0 \text{ such that } B^k(\omega) = \omega'\}$, where $B^n(\omega) = B(B^{n-1}(\omega))$ is the $n$th predecessor of $\omega$ ($B^0(\omega) = \omega$). The cash flows from security $i$ in state $\omega \in \Omega$ are given by

$$CF_i^s(x_i, \omega) = \begin{cases} -S_i(\omega)\, x_{i,\omega} & \text{if } \omega = \omega_0, \\ S_i(\omega)\, x_{i,B(\omega)} & \text{if } \omega \in \Omega_T, \\ S_i(\omega)\, \bigl(x_{i,B(\omega)} - x_{i,\omega}\bigr) & \text{if } \omega \ne \omega_0 \wedge \omega \notin \Omega_T. \end{cases}$$
Thus, the aggregate cash flow $CF(x, z; \omega)$ in state $\omega \in \Omega$, obtained by summing up the cash flows for all projects and securities, is

$$CF(x, z; \omega) = \sum_{i=1}^{n} CF_i^s(x_i, \omega) + \sum_{k=1}^{m} CF_k^p(z_k, \omega)$$
$$= \begin{cases} -\sum_{i=1}^{n} S_i(\omega)\, x_{i,\omega} + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)} \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega = \omega_0, \\ \sum_{i=1}^{n} S_i(\omega)\, x_{i,B(\omega)} + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)} \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega \in \Omega_T, \\ \sum_{i=1}^{n} S_i(\omega)\, \bigl(x_{i,B(\omega)} - x_{i,\omega}\bigr) + \sum_{k=1}^{m} \sum_{d \in D_k:\ \omega(d) \in B^*(\omega)} \sum_{a \in A_d} C_a(\omega)\, z_a, & \text{if } \omega \ne \omega_0 \wedge \omega \notin \Omega_T. \end{cases}$$
Together with the initial budget in each state, cash flows define the cash surpluses that would result in state $\omega \in \Omega$ if the investor chose portfolio management strategy $(x, z)$. Assuming that excess cash is invested in the risk-free asset, the cash surplus in state $\omega \in \Omega$ is given by

$$CS_\omega = \begin{cases} b(\omega) + CF(x, z; \omega) & \text{if } \omega = \omega_0, \\ b(\omega) + CF(x, z; \omega) + (1 + r_{B(\omega)\to\omega})\, CS_{B(\omega)} & \text{if } \omega \ne \omega_0, \end{cases}$$

where $b(\omega)$ is the initial budget in state $\omega \in \Omega$ and $r_{B(\omega)\to\omega}$ is the short rate at which cash accrues interest from state $B(\omega)$ to $\omega$. The cash surplus in a terminal state is the investor's terminal wealth level in that state.
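The cash-surplus recursion above can be evaluated state by state once the aggregate cash flows for a fixed strategy are known; a minimal sketch, assuming the state tree is given as a predecessor map and that budgets, short rates, and cash flows are supplied as dictionaries (all illustrative inputs, not values from the chapter):

```python
def tree_depth(predecessor, w):
    """Number of predecessors between state w and the root."""
    n = 0
    while predecessor[w] is not None:
        w, n = predecessor[w], n + 1
    return n

def cash_surpluses(predecessor, budget, short_rate, cash_flow):
    """Compute CS_w for every state, walking from the root downwards.

    predecessor[w] is B(w) (None for the root), budget[w] is b(w),
    short_rate[w] is r_{B(w)->w}, and cash_flow[w] is CF(x, z; w)
    for a fixed portfolio management strategy (x, z).
    """
    cs = {}
    # Process states in order of depth so a predecessor is handled first.
    for w in sorted(predecessor, key=lambda s: tree_depth(predecessor, s)):
        if predecessor[w] is None:
            cs[w] = budget[w] + cash_flow[w]
        else:
            cs[w] = (budget[w] + cash_flow[w]
                     + (1.0 + short_rate[w]) * cs[predecessor[w]])
    return cs
```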
4.5.3 Optimization Model

When using a preference functional $U$, the objective function for the model can be written as a function of the cash surplus variables in the last time period, i.e.

$$\max_{x, z, CS}\ U(CS_T),$$

where $CS_T$ denotes the vector of cash surplus variables in period $T$. Under the risk-constrained mean-risk model, the objective is to maximize the expectation of the investor's terminal wealth level

$$\max_{x, z, CS}\ \sum_{\omega \in \Omega_T} p(\omega)\, CS_\omega.$$
Three types of constraints are imposed on the model: (a) budget constraints, (b) decision consistency constraints, and (c) risk constraints (in the case of risk-constrained models). The formulation of a multiperiod portfolio selection model under both a preference functional and a mean-risk model is given in Table 4.4.
4.5.3.1 Budget Constraints

Budget constraints ensure that there is a nonnegative amount of cash in each state. They can be implemented using continuous cash surplus variables $CS_\omega$, which measure the amount of cash in state $\omega$. These variables lead to the budget constraints

$$CF(x, z; \omega_0) - CS_{\omega_0} = -b(\omega_0),$$
$$CF(x, z; \omega) + (1 + r_{B(\omega)\to\omega})\, CS_{B(\omega)} - CS_\omega = -b(\omega), \quad \forall \omega \in \Omega \setminus \{\omega_0\}.$$
Table 4.4 Multi-period models

Preference functional model
  Objective function: $\max_{x, z, CS} U(CS_T)$
  Budget constraints: $CF(x, z; \omega_0) - CS_{\omega_0} = -b(\omega_0)$; $CF(x, z; \omega) + (1 + r_{B(\omega)\to\omega})\, CS_{B(\omega)} - CS_\omega = -b(\omega),\ \forall \omega \in \Omega \setminus \{\omega_0\}$
  Decision consistency constraints: $\sum_{a \in A_{d_k^0}} z_a = 1,\ k = 1, \ldots, m$; $\sum_{a \in A_d} z_a = z_{a_{p(d)}},\ \forall d \in D_k \setminus \{d_k^0\},\ k = 1, \ldots, m$
  Variables: $z_a \in \{0, 1\},\ \forall a \in A_d,\ \forall d \in D_k,\ k = 1, \ldots, m$; $x_{i,\omega}$ free, $\forall \omega \in \Omega,\ i = 1, \ldots, n$; $CS_\omega$ free, $\forall \omega \in \Omega$

Mean-risk model
  Objective function: $\max_{x, z, CS} \sum_{\omega \in \Omega_T} p(\omega)\, CS_\omega$
  Budget constraints: as in the preference functional model
  Decision consistency constraints: as in the preference functional model
  Risk constraints: $\rho(CS_T) \le R$; $CS_\omega - \mu(CS_T) - \Delta^+_\omega + \Delta^-_\omega = 0,\ \forall \omega \in \Omega_T$
  Variables: as in the preference functional model, plus $\Delta^-_\omega \ge 0,\ \Delta^+_\omega \ge 0,\ \forall \omega \in \Omega_T$
Note that if $CS_\omega$ is negative, the investor borrows money at the risk-free interest rate to cover a funding shortage. Thus, $CS_\omega$ can also be regarded as a trading variable for the risk-free asset.

4.5.3.2 Decision Consistency Constraints

Decision consistency constraints ensure the logical consistency of the projects' decision trees. They require that (a) at each decision point reached, only one action is selected, and that (b) at each decision point that is not reached, no action is taken. Decision consistency constraints can be written as

$$\sum_{a \in A_{d_k^0}} z_a = 1, \quad k = 1, \ldots, m,$$
$$\sum_{a \in A_d} z_a = z_{a_{p(d)}}, \quad \forall d \in D_k \setminus \{d_k^0\},\ k = 1, \ldots, m,$$

where the first constraint ensures that one action is selected at the first decision point, and the second implements the above requirements for the other decision points.
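As an illustration of how such constraints might be generated in practice, the sketch below builds the decision consistency constraints for a single, made-up project decision tree using the open-source PuLP modelling library; the decision points, actions, and variable names are hypothetical.

```python
import pulp

# Hypothetical decision tree for one project: a start decision d0 with
# actions {start, skip}, and a continuation decision d1 (reached only if
# the project was started) with actions {continue, terminate}.
actions = {"d0": ["start", "skip"], "d1": ["continue", "terminate"]}
parent_action = {"d1": "start"}          # a_p(d1): the action that leads to d1

z = {a: pulp.LpVariable(f"z_{a}", cat="Binary")
     for acts in actions.values() for a in acts}

model = pulp.LpProblem("decision_consistency_demo", pulp.LpMaximize)

# Exactly one action is selected at the first decision point.
model += pulp.lpSum(z[a] for a in actions["d0"]) == 1

# At every other decision point, an action is taken iff the point is reached.
for d, a_p in parent_action.items():
    model += pulp.lpSum(z[a] for a in actions[d]) == z[a_p]
```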
4.5.3.3 Risk Constraints

A risk-constrained model includes one or more risk constraints. We focus on the single-constraint case. When $\rho$ denotes the risk measure and $R$ the risk tolerance, a risk constraint can be expressed as

$$\rho(CS_T) \le R.$$

In addition to variance ($V$), several other risk measures have been proposed. These include semivariance (Markowitz 1959), absolute deviation (Konno and Yamazaki 1991), lower semi-absolute deviation (Ogryczak and Ruszczynski 1999), and their fixed target value counterparts (Fishburn 1977). Semivariance (SV), absolute deviation (AD), and lower semi-absolute deviation (LSAD) are defined as

$$\mathrm{SV}:\ \bar{\sigma}_X^2 = \int_{-\infty}^{\mu_X} (x - \mu_X)^2\, dF_X(x), \qquad \mathrm{AD}:\ \delta_X = \int_{-\infty}^{\infty} |x - \mu_X|\, dF_X(x), \quad \text{and}$$
$$\mathrm{LSAD}:\ \bar{\delta}_X = \int_{-\infty}^{\mu_X} |x - \mu_X|\, dF_X(x) = \int_{-\infty}^{\mu_X} (\mu_X - x)\, dF_X(x),$$

where $\mu_X$ is the mean of random variable $X$ and $F_X$ is the cumulative distribution function of $X$. The fixed target value statistics are obtained by replacing $\mu_X$ by an appropriate constant target value $\tau$. All these measures can be formulated in an optimization program by introducing deviation constraints. In general, deviation constraints are expressed as

$$CS_\omega - \mu(CS_T) - \Delta^+_\omega + \Delta^-_\omega = 0 \quad \forall \omega \in \Omega_T,$$

where $\mu(CS_T)$ is a function that defines the target value from which the deviations are calculated, and $\Delta^+_\omega$ and $\Delta^-_\omega$ are nonnegative deviation variables which measure how much the cash surplus in state $\omega \in \Omega_T$ differs from the target value. For example, when the target value is the mean of the terminal wealth level, the deviation constraints are written as

$$CS_\omega - \sum_{\omega' \in \Omega_T} p(\omega')\, CS_{\omega'} - \Delta^+_\omega + \Delta^-_\omega = 0, \quad \forall \omega \in \Omega_T.$$
With the help of these deviation variables, some common dispersion statistics can now be written as follows:

$$\mathrm{AD}:\ \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta^-_\omega + \Delta^+_\omega), \qquad \mathrm{LSAD}:\ \sum_{\omega \in \Omega_T} p(\omega)\, \Delta^-_\omega,$$
$$\mathrm{V}:\ \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta^-_\omega + \Delta^+_\omega)^2, \qquad \mathrm{SV}:\ \sum_{\omega \in \Omega_T} p(\omega)\, (\Delta^-_\omega)^2.$$

The respective fixed-target value statistics can be obtained with the deviation constraints

$$CS_\omega - \tau - \Delta^+_\omega + \Delta^-_\omega = 0, \quad \forall \omega \in \Omega_T,$$

where $\tau$ is the fixed target level. Expected downside risk (EDR), for example, can then be obtained from the sum $\sum_{\omega \in \Omega_T} p(\omega)\, \Delta^-_\omega$.
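Given terminal-state probabilities and cash surpluses, the dispersion statistics above reduce to weighted sums of the deviation terms; the following small sketch computes them directly for made-up numbers (in an optimization model the deviations would instead be variables tied to the deviation constraints):

```python
# Illustrative terminal-state data: probabilities p(w) and cash surpluses CS_w.
p  = [0.25, 0.25, 0.25, 0.25]
cs = [12.0, 8.0, 3.0, -1.0]

mean = sum(pi * ci for pi, ci in zip(p, cs))
dev_plus  = [max(ci - mean, 0.0) for ci in cs]   # upside deviations
dev_minus = [max(mean - ci, 0.0) for ci in cs]   # downside deviations

AD   = sum(pi * (dm + dp) for pi, dm, dp in zip(p, dev_minus, dev_plus))
LSAD = sum(pi * dm for pi, dm in zip(p, dev_minus))
V    = sum(pi * (dm + dp) ** 2 for pi, dm, dp in zip(p, dev_minus, dev_plus))
SV   = sum(pi * dm ** 2 for pi, dm in zip(p, dev_minus))

tau = 5.0                                        # fixed target level (invented)
EDR = sum(pi * max(tau - ci, 0.0) for pi, ci in zip(p, cs))
```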
4.5.3.4 Other Constraints

Other constraints can also be modeled, including short-selling limitations, upper bounds on the number of shares bought, and credit limit constraints (Markowitz 1987). For the sake of simplicity, however, we assume in the following sections that there are no such additional constraints in the model.
4.5.4 Example

We next illustrate project valuation in a multiperiod setting with an example similar to the one in Gustafsson and Salo (2005). In this setting, the investor can invest in projects A and B in two stages as illustrated in Fig. 4.3. At time 0, he/she can start either one or both of the projects. If a project is started, he/she can make a further investment at time 1. If the investment is made, the project generates a positive cash flow at time 2; otherwise, the project is terminated with no further cash flows. In the spirit of the CAPM, it is assumed that the investor can also borrow and lend money at a risk-free interest rate, in this case 8%, and invest in the equity market portfolio. The investor is able to buy and short the market portfolio and the risk-free asset in any quantities. The initial budget is $9 million. The investor is a mean-LSAD optimizer with a risk (LSAD) tolerance of R = $5 million. Here, we use a risk-constrained model instead of a preference functional model as in Gustafsson and Salo (2005), because otherwise the optimal strategy would be unbounded, with the investor investing an infinite amount in the market portfolio and financing this by going short in the risk-free asset, or vice versa, depending on the value of the mean-LSAD model's risk aversion parameter. Uncertainties are captured through a state tree where uncertainties are divided into market and private uncertainties (see Figs. 4.3 and 4.4). The price of the market portfolio is entirely determined by the prevailing market state, while the projects'
Based on Figs. 4.5 and 4.6, the budget constraints can now be written as:

$$-z_{ASY} - 2z_{BSY} - x_{\omega_0} - CS_{\omega_0} = -9$$
$$-3z_{ACY1u} - 2z_{BCY1u} + 1.24x_{\omega_0} - 1.24x_{\omega_{1u}} + 1.08\,CS_{\omega_0} - CS_{\omega_{1u}} = 0$$
$$-3z_{ACY2u} - 2z_{BCY2u} + 1.24x_{\omega_0} - 1.24x_{\omega_{2u}} + 1.08\,CS_{\omega_0} - CS_{\omega_{2u}} = 0$$
$$-3z_{ACY1d} - 2z_{BCY1d} + x_{\omega_0} - x_{\omega_{1d}} + 1.08\,CS_{\omega_0} - CS_{\omega_{1d}} = 0$$
$$-3z_{ACY2d} - 2z_{BCY2d} + x_{\omega_0} - x_{\omega_{2d}} + 1.08\,CS_{\omega_0} - CS_{\omega_{2d}} = 0$$
$$20z_{ACY1u} + 2.5z_{BCY1u} + 1.5376x_{\omega_{1u}} + 1.08\,CS_{\omega_{1u}} - CS_{\omega_{1u1u}} = 0$$
$$20z_{ACY1u} + 2.5z_{BCY1u} + 1.24x_{\omega_{1u}} + 1.08\,CS_{\omega_{1u}} - CS_{\omega_{1u1d}} = 0$$
$$20z_{ACY1d} + 2.5z_{BCY1d} + 1.24x_{\omega_{1d}} + 1.08\,CS_{\omega_{1d}} - CS_{\omega_{1d1u}} = 0$$
$$20z_{ACY1d} + 2.5z_{BCY1d} + x_{\omega_{1d}} + 1.08\,CS_{\omega_{1d}} - CS_{\omega_{1d1d}} = 0$$
$$10z_{ACY1u} + z_{BCY1u} + 1.5376x_{\omega_{1u}} + 1.08\,CS_{\omega_{1u}} - CS_{\omega_{1u2u}} = 0$$
$$10z_{ACY1u} + z_{BCY1u} + 1.24x_{\omega_{1u}} + 1.08\,CS_{\omega_{1u}} - CS_{\omega_{1u2d}} = 0$$
$$10z_{ACY1d} + z_{BCY1d} + 1.24x_{\omega_{1d}} + 1.08\,CS_{\omega_{1d}} - CS_{\omega_{1d2u}} = 0$$
$$10z_{ACY1d} + z_{BCY1d} + x_{\omega_{1d}} + 1.08\,CS_{\omega_{1d}} - CS_{\omega_{1d2d}} = 0$$
$$5z_{ACY2u} + 25z_{BCY2u} + 1.5376x_{\omega_{2u}} + 1.08\,CS_{\omega_{2u}} - CS_{\omega_{2u1u}} = 0$$
$$5z_{ACY2u} + 25z_{BCY2u} + 1.24x_{\omega_{2u}} + 1.08\,CS_{\omega_{2u}} - CS_{\omega_{2u1d}} = 0$$
$$5z_{ACY2d} + 25z_{BCY2d} + 1.24x_{\omega_{2d}} + 1.08\,CS_{\omega_{2d}} - CS_{\omega_{2d1u}} = 0$$
$$5z_{ACY2d} + 25z_{BCY2d} + x_{\omega_{2d}} + 1.08\,CS_{\omega_{2d}} - CS_{\omega_{2d1d}} = 0$$
$$10z_{BCY2u} + 1.5376x_{\omega_{2u}} + 1.08\,CS_{\omega_{2u}} - CS_{\omega_{2u2u}} = 0$$
$$10z_{BCY2u} + 1.24x_{\omega_{2u}} + 1.08\,CS_{\omega_{2u}} - CS_{\omega_{2u2d}} = 0$$
$$10z_{BCY2d} + 1.24x_{\omega_{2d}} + 1.08\,CS_{\omega_{2d}} - CS_{\omega_{2d2u}} = 0$$
$$10z_{BCY2d} + x_{\omega_{2d}} + 1.08\,CS_{\omega_{2d}} - CS_{\omega_{2d2d}} = 0$$

For each terminal state $\omega_T \in \Omega_T$, there is a deviation constraint $CS_{\omega_T} - EV - \Delta^+_{\omega_T} + \Delta^-_{\omega_T} = 0$, where $EV$ is the expected cash balance over all terminal states, viz.

$$EV = \sum_{\omega_T \in \Omega_T} p_{\omega_T}\, CS_{\omega_T}.$$
Table 4.5 Investments in securities ($ million)

State   $x_M$ (market portfolio)   $CS$ (risk-free position)
0       71.37                      64.37
1u      16.40                      4.35
1d      105.46                     106.61
2u      27.28                      16.85
2d      98.71                      98.86
In addition, the following decision consistency constraints apply:

$$z_{ASY} + z_{ASN} = 1 \qquad\qquad z_{BSY} + z_{BSN} = 1$$
$$z_{ACY1u} + z_{ACN1u} = z_{ASY} \qquad z_{BCY1u} + z_{BCN1u} = z_{BSY}$$
$$z_{ACY2u} + z_{ACN2u} = z_{ASY} \qquad z_{BCY2u} + z_{BCN2u} = z_{BSY}$$
$$z_{ACY1d} + z_{ACN1d} = z_{ASY} \qquad z_{BCY1d} + z_{BCN1d} = z_{BSY}$$
$$z_{ACY2d} + z_{ACN2d} = z_{ASY} \qquad z_{BCY2d} + z_{BCN2d} = z_{BSY}$$
The risk constraint is now

$$\sum_{\omega_T \in \Omega_T} p_{\omega_T}\, \Delta^-_{\omega_T} \le 5,$$

and the objective function is

$$\text{Maximize } EV = \sum_{\omega_T \in \Omega_T} p_{\omega_T}\, CS_{\omega_T}.$$
In this portfolio selection problem, the optimal strategy is to start both projects; project A is terminated at time 1 if private state 2 occurs and project B if private state 1 occurs, i.e., variables $z_{ASY}, z_{ACY1u}, z_{ACY1d}, z_{BSY}, z_{BCY2u}, z_{BCY2d}$ are one and all other action variables are zero. The optimal amounts invested in the market portfolio and the risk-free asset are given in columns 2 and 3 of Table 4.5, respectively. There is an expected cash balance of EV = $25.63 million and an LSAD of $5.00 million at time 2. The portfolio has its least value, $0.32 million, in state 1d2d. It is worth noting that the value of the portfolio can be negative in some terminal states at higher risk levels, because the states' cash surplus variables are not restricted to nonnegative values. Thus, the investor will borrow money at the risk-free rate and invest it in the market portfolio, and may hence default on his/her loan obligations if the market does not go up in either of the time periods and the project portfolio performs poorly. Breakeven selling and buying prices for projects A and B, as well as for the entire project portfolio, are given in the last row of Table 4.6. These prices are now equal, and we therefore record them in a single cell. (This is a property of the employed preference model.) For the sake of comparison, we also give the terminal wealth levels when the investor does and does not invest in the project/portfolio being valued, denoted by $W^+$ and $W^-$, respectively. The portfolio value differs from the value of $5.85 million obtained in Gustafsson and Salo (2005), where the investor did not have the possibility to invest in
Table 4.6 Project values ($ million)

                   Portfolio   A       B
$W^+$              25.63       25.63   25.63
$W^-$              17.16       22.17   20.54
$W^+ - W^-$        8.47        3.46    5.09
$v = v^b = v^s$    7.26        2.97    4.36
the market portfolio. There are two reasons for this difference. First, due to the possibility to invest limitless amounts in the market portfolio, we use a risk-constrained preference model with R = $5.00 million, whereas the example in Gustafsson and Salo (2005) used a preference functional model with a risk aversion parameter of 0.5. Different preference models also imply different risk adjustments. Second, because here it is possible to invest in the market portfolio, the optimal portfolio mix is likely to be different from the setting where investments in market-traded securities are not possible, and thus the project portfolio is also likely to obtain a value different from the one obtained in Gustafsson and Salo (2005).
4.6 Summary and Conclusions

In this chapter, we have considered the valuation of private projects in a setting where an investor can invest in a portfolio of projects as well as securities traded in financial markets, but where the replication of project cash flows with financial securities may not be possible. Specifically, we have developed a valuation procedure based on the concepts of breakeven selling and buying prices. This inverse optimization procedure requires the solution of portfolio selection problems with and without the project that is being valued and the determination of the lump sum that makes the investor indifferent between the two settings. We have also offered analytical results concerning the properties of breakeven prices. Our results show that the breakeven prices are, in general, consistent valuation measures in that they exhibit sequential additivity and consistency; they are also consistent with CCA. Quite importantly, the proposed methodology overcomes several deficiencies in earlier approaches to the valuation of projects and other illiquid investments. That is, the methodology accounts systematically for the time value of money, explicates the investor's risk preferences, and captures the projects' risk characteristics and their impacts on aggregate risks at the portfolio level. The methodology also accounts for the opportunity costs of alternative investment opportunities, which are explicitly included in the portfolio selection model. Overall, the methodology constitutes a new, complete, and theoretically well-founded approach to the valuation of non-market-traded investments. Furthermore, we have shown that it is possible to include real options and managerial flexibility in the projects by modeling them as decision trees in the CPP framework (Gustafsson and Salo 2005). We have also shown how such real
options can be valued using the concepts of opportunity buying and selling prices, and demonstrated that the resulting real option values are consistent with CCA. Indeed, since the present framework does not require the existence of a replicating trading strategy, a key implication is that the proposed methodology makes it possible to generalize CCA to the valuation of private projects in incomplete markets. This work suggests several avenues for further research. In particular, it is of interest to investigate settings where the investor can opt to sell the project to the market (or to a third-party investor), as the resulting valuations would then holistically account for the impact that markets can have on the value of the project (i.e., implicit market pricing and opportunity costs). Analysis of specific preference models such as the mean-variance model also seems appealing, not least because the CAPM (Lintner 1965; Sharpe 1964) is based on such preferences. More work is also needed to facilitate the use of the methodology in practice and to link it to existing theories of pricing.
Appendix

Proof of Proposition 1

Let us prove the proposition first for the BSP and the sequential buying price. Let the BSP for the project be $v_j^s$. Then, based on Table 4.2, $v_j^s$ will be defined by the portfolio setting in the middle column of Table A.1. Next, we can observe that Problem B1 in determining the sequential buying price is the same as Problem A2 for the BSP, wherefore the optimal objective function values will also be the same, i.e., $W_b^- = W_s^-$.

Table A.1 Definition of the sequential buying price value of project j
A. Breakeven selling price
  Definition: $v_j^s$ such that $W_s^+ = W_s^-$
  Problem A1: mandatory investment in the project; optimal objective function value: $W_s^+$; budget at time 0: $b_0$
  Problem A2: project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value: $W_s^-$; budget at time 0: $b_0 + v_j^s$
B. Sequential buying price
  Definition: $v_j^b$ such that $W_b^+ = W_b^-$
  Problem B1: project is excluded from the portfolio (investment in the project is prohibited); optimal objective function value: $W_b^-$; budget at time 0: $b_0 + v_j^s$
  Problem B2: mandatory investment in the project; optimal objective function value: $W_b^+$; budget at time 0: $b_0 + v_j^s - v_j^b$

Since, by the definition of breakeven prices,
we have $W_s^+ = W_s^-$ and $W_b^+ = W_b^-$, it follows that we also have $W_s^+ = W_b^+$. Since Problem A1 and Problem B2 are otherwise the same, except that the first has the budget of $b_0$ and the second $b_0 + v_j^s - v_j^b$, and because the optimal objective function value is strictly increasing with respect to the budget, it follows that $b_0 + v_j^s - v_j^b$ must be equal to $b_0$ to get $W_s^+ = W_b^+$, and therefore $v_j^b = v_j^s$. The proposition for the BBP and the respective sequential selling price is proven similarly.
Proof of Proposition 2

A replicating trading strategy for a project is a trading strategy that produces exactly the same cash flows in all future states of nature as the project. Thus, by definition, starting the project and shorting the replicating trading strategy leads to a situation where the cash flows net each other out in each state of $\Omega$ except at time 0. At time 0, the cash flow will be $I_k - I_k^0$, where $I_k^0$ is the time-0 investment cost of the project and $I_k$ is the cash needed to initiate the replicating trading strategy. Therefore, a setting where the investor starts the project and shorts the replicating trading strategy is exactly the same as a case where the project is not included in the portfolio and the time-0 budget is increased by $I_k - I_k^0$ (and hence the objective function values are the same). Therefore, the investor's BSP for the project is, by the definition of the BSP, $I_k - I_k^0$. The proposition for the BBP can be proven similarly.
Proof of Proposition 3

Let us begin with the BBP and a setting where the portfolio does not include projects and the budget is $b_0$. Suppose that there are $n$ projects that the investor buys sequentially. Let us denote the optimal value for this problem by $W_{b,1}^-$. Suppose then that the investor buys a project, indexed by 1, at his or her BBP, $v_1^b$. Let us denote the resulting optimal value for the problem by $W_{b,1}^+$. By the definition of the BBP, $W_{b,1}^+ = W_{b,1}^-$. Suppose then that the investor buys another project, indexed by 2, at his or her BBP, $v_2^b$. The initial budget is now $b_0 - v_1^b$, and after the second project is bought, it is $b_0 - v_1^b - v_2^b$. Since the second project's Problem 1 and the first project's Problem 2 are the same, the optimal values for these two problems are the same, i.e., $W_{b,2}^-$ is equal to $W_{b,1}^+$. Add then the rest of the projects in the same manner, as illustrated in Table A.2. The resulting budget in the last optimization problem, which includes all the projects, is $b_0 - v_1^b - v_2^b - \cdots - v_n^b$.

Table A.2 Buying prices in a sequential buying process
First project:  Problem 1: optimal value $W_{b,1}^-$, budget at time 0: $b_0$.  Problem 2: optimal value $W_{b,1}^+$, budget at time 0: $b_0 - v_1^b$
Second project: Problem 1: optimal value $W_{b,2}^- = W_{b,1}^+$, budget at time 0: $b_0 - v_1^b$.  Problem 2: optimal value $W_{b,2}^+$, budget at time 0: $b_0 - v_1^b - v_2^b$
Third project:  Problem 1: optimal value $W_{b,3}^- = W_{b,2}^+$, budget at time 0: $b_0 - v_1^b - v_2^b$.  Problem 2: optimal value $W_{b,3}^+$, budget at time 0: $b_0 - v_1^b - v_2^b - v_3^b$
$n$th project:  Problem 1: optimal value $W_{b,n}^- = W_{b,n-1}^+$, budget at time 0: $b_0 - v_1^b - \cdots - v_{n-1}^b$.  Problem 2: optimal value $W_{b,n}^+$, budget at time 0: $b_0 - v_1^b - v_2^b - \cdots - v_n^b$

Because Problem 2 of each project (except for the last) in the sequence is always Problem 1 of the next project, and because, by the definition of the BBP, the objective function values in Problems 1 and 2 are equal for each project, we have $W_{b,n}^+ = W_{b,n}^- = W_{b,n-1}^+ = W_{b,n-1}^- = W_{b,n-2}^+ = \cdots = W_{b,1}^-$. Therefore, by the definition of the BBP, the BBP for the portfolio including all the projects must be $v_{ptf}^b = v_1^b + v_2^b + \cdots + v_n^b$. By re-indexing the projects and using the above procedure, we can change the order in which the projects are added to the portfolio. In doing so, the projects can obtain different values, but they still sum up to the same joint value of the portfolio. Similar logic proves the proposition for BSPs.
References

Ahuja RK, Orlin JB (2001) Inverse optimization. Oper Res 49(5):771–783
Belegundu AD, Chandrupatla TR (1999) Optimization concepts and applications in engineering. Prentice Hall, New York
Brealey R, Myers S (2000) Principles of corporate finance. McGraw-Hill, New York
Brosch R (2008) Portfolios of real options. Lecture notes in economics and mathematical systems, vol 611. Springer, Berlin
Clemen RT (1996) Making hard decisions – an introduction to decision analysis. Duxbury, Pacific Grove
Dixit AK, Pindyck RS (1994) Investment under uncertainty. Princeton University Press, Princeton
Eppen GD, Martin RK, Schrage L (1989) A scenario based approach to capacity planning. Oper Res 37(4):517–527
Fishburn PC (1977) Mean-risk analysis with risk associated with below-target returns. Am Econ Rev 67(2):116–126
French S (1986) Decision theory – an introduction to the mathematics of rationality. Ellis Horwood, Chichester
Gustafsson J, Salo A (2005) Contingent portfolio programming for the management of risky projects. Oper Res 53(6):946–956
Konno H, Yamazaki H (1991) Mean-absolute deviation portfolio optimization and its applications to the Tokyo stock market. Manage Sci 37(5):519–531
Lintner J (1965) The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Rev Econ Stat 47(1):13–37
Luenberger DG (1998) Investment science. Oxford University Press, New York
Markowitz HM (1952) Portfolio selection. J Finance 7(1):77–91
Markowitz HM (1959) Portfolio selection: efficient diversification of investments. Cowles Foundation, Yale
Markowitz HM (1987) Mean-variance analysis in portfolio choice and capital markets. Frank J. Fabozzi Associates, New Hope
Mulvey JM, Gould G, Morgan C (2000) An asset and liability management model for Towers Perrin-Tillinghast. Interfaces 30(1):96–114
Ogryczak W, Ruszczynski A (1999) From stochastic dominance to mean-risk models: semideviations as risk measures. Eur J Oper Res 116(1):33–50
Raiffa H (1968) Decision analysis – introductory lectures on choices under uncertainty. Addison-Wesley, Reading
Sharpe WF (1964) Capital asset prices: a theory of market equilibrium under conditions of risk. J Finance 19(3):425–442
Sharpe WF (1970) Portfolio theory and capital markets. McGraw-Hill, New York
Smith JE, Nau RF (1995) Valuing risky projects: option pricing theory and decision analysis. Manage Sci 41(5):795–816
Trigeorgis L (1996) Real options: managerial flexibility and strategy in resource allocation. MIT Press, Cambridge, MA
Yu P-L (1985) Multiple-criteria decision making: concepts, techniques, and extensions. Plenum, New York
Chapter 5
Interactive Multicriteria Methods in Portfolio Decision Analysis

Nikolaos Argyris, José Rui Figueira, and Alec Morton
Abstract Decision Analysis is a constructive, learning process. This is particularly true of Portfolio Decision Analysis (PDA), where the number of elicitation judgements is typically very large and the alternatives under consideration form a combinatorial set and so cannot be listed and examined explicitly. Consequently, PDA is to some extent an interactive process. In this chapter we discuss what form that interactivity might take, considering first how the process of asking for judgements should be staged and managed, and second what assumptions should be made about the substantive value models which the analyst assumes. To make the discussion concrete, we present two interactive procedures based on extended dominance concepts which assume linear additive and concave piecewise-linear additive value functions, respectively.
5.1 Introduction

Recent years have seen Portfolio Decision Analysis (PDA) grow in practical importance and in the attention which it has received from Decision Analysis scholars and practitioners. The characteristic feature of PDA is the application of Decision Analysis concepts of preference and uncertainty modelling to the construction of portfolios of organisational activities. As such, PDA represents the application of management science techniques to one of the most significant problems which an organisation faces, that of arriving at a systematic and "rational" allocation of internal funds and workforce. Numerous examples of practical applications exist in military procurement (Austin and Mitchell 2008; Ewing et al. 2006), in the management of R&D in pharmaceutical and high-technology companies as well as
in the government sector (Phillips and Bana e Costa 2007; Lindstedt et al. 2008; Morton et al. 2011), in planning healthcare provision (Kleinmuntz 2007), prioritising maintenance funds for public infrastructure such as roads and flood defences (Mild and Salo 2009), and the like. Supporting the resource allocation process is not a new ambition in Operations Research. Indeed, the very term "programming" in mathematical programming is borrowed from the military planning problems which were the original inspiration for the seminal work of Dantzig in the late 1940s. Viewed from a mathematical programming perspective, the tools of decision analysis provide a systematic way to specify an objective function in situations where values are contested and decision makers face significant uncertainties, as is typical in substantial applications. This is an important aspect of the analysis process, as solutions are often highly sensitive to the way in which objectives are specified and operationalised mathematically, and decision makers (DMs) cannot be expected to simply write down or draw out objective functions without support. Taking this theme somewhat further, a common view (which we share) is that preference elicitation is a constructive process (Slovic and Lichtenstein 2006) whereby the DM reflects, deliberates, learns, and comes to form a relatively stable set of values (Phillips 1984; Roy 1993). Often this involves a reconciliation of alternative perspectives, for example, the "gut feel" ranking of options and the modelled ranking which comes from the disaggregate judgements assembled in the decision model. Learning takes place in all choice situations; however, this is particularly true in PDA (as compared to single choice, where only one object is to be selected) as the set of projects which the DM has to assess may be projects with which she is not deeply familiar, and the number of elicitation judgements is typically very large. Moreover, the set of possible portfolios is typically so large that it cannot be contemplated directly and instead has to be defined implicitly. If one accepts this view, it seems clear that any approach to PDA should be (in some sense) interactive. Further, anyone who wishes to design a tool or process for PDA has to confront two critical questions which we focus on in this chapter.

• What assumptions are to be made substantively about the form of the value function (if any) which is to be used to characterise the DM's preferences?
• How should the process of asking for judgements from the DM be staged and managed, so as to support learning and ensure that the time available for elicitation is used most productively?

In this chapter, drawing on ideas from the sister discipline of multiobjective programming, we discuss these two questions, focusing simultaneously on questions of computational tractability and efficient use of elicited judgement. An important idea in this chapter is the use of extended dominance concepts to guide interactive search. Because these concepts have been developed and extensively discussed in the multiobjective programming literature, we focus particularly on multicriteria or multiobjective PDA rather than probabilistic PDA. However, there is a strong formal analogy between the additive value model $\sum_l \lambda_l v_l(\cdot)$ and the expected utility
model $\sum_l p_l u(\cdot)$, and the reader who is more interested in choice under uncertainty than under conflicting objectives may find the chapter retains a semblance of comprehensibility if she reads "probability" for "criterion weight", "utility" for "value", and so forth throughout. The reader who is interested in finding out more about multiobjective programming is referred to Steuer (1986), Miettinen (1999), or Ehrgott (2005) for further information. The chapter proceeds as follows. First, we address the two questions above. In Section 5.2, we discuss assumptions relating to the form of the value function; in Section 5.3, we discuss questions relating to the staging of the interactive procedure. Section 5.4 presents our own interactive procedure, which we think is promising in terms of bridging the gap between the decision analysis and multiobjective optimisation literatures. Section 5.5 presents some numerical examples of this procedure. Section 5.6 concludes.

5.2 Assumptions About Value Functions

In this section, we will discuss the implications of assumptions which the analyst and the DM together make about the value functions which characterise the DM's preferences. To do this, we briefly outline some formal concepts and notation which will underpin our analysis. We suppose that there is a DM who can undertake a number of projects, subject to resource constraints, for example, on cash or manpower. For expositional simplicity we will frame our discussion in terms of a single DM, while recognising that resource allocation decisions are often – indeed typically – made collectively. This DM has a number of criteria or objectives which she wishes to achieve, that is to say "maximise". Formal notation is presented in Table 5.1. Again for expositional simplicity, we will suppose that projects are not linked by logical constraints (for example, one project is not a superset of another project). Such constraints pose no essential computational difficulties from the point of view of the methods we will discuss but would complicate notation, as we would have to define criterion functions $f$ over a subset of $\{0,1\}^N$, as some combinations of projects might be logically impossible and could have no criterion value associated with them. One way to look at this problem and data is as a multiobjective programme:

$$\text{``max''}\ f(x) = (f_1(x), f_2(x), \ldots, f_l(x), \ldots, f_P(x))$$
$$\text{subject to:}\quad \sum_{j \in J} w_{ji}\, x_j \le W_i, \quad i \in I,$$
$$x_j \in \{0, 1\}, \quad j \in J. \tag{5.1}$$
Specifically, this is a multidimensional, multiobjective knapsack problem (Gomes da Silva et al. 2004; Mavrotas et al. 2009).
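To make problem (5.1) tangible, the brute-force sketch below enumerates all feasible portfolios of a toy two-criterion knapsack instance and filters out the dominated ones; the benefit and cost data are invented, and full enumeration is of course only viable for small instances.

```python
from itertools import product

# Toy instance: benefits c[l][j] on two criteria, costs w[j], and a budget W.
benefits = [[4, 7, 3, 5],      # criterion 1
            [6, 2, 5, 4]]      # criterion 2
costs    = [3, 4, 2, 3]
budget   = 7

def dominates(a, b):
    """Standard Pareto dominance: a is at least as good everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

feasible = []
for x in product([0, 1], repeat=len(costs)):
    if sum(w * xi for w, xi in zip(costs, x)) <= budget:
        f = tuple(sum(c * xi for c, xi in zip(row, x)) for row in benefits)
        feasible.append((x, f))

# Keep only portfolios whose criterion vectors are non-dominated.
pareto = [(x, f) for x, f in feasible
          if not any(dominates(g, f) for _, g in feasible)]
```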
Table 5.1 Main notation

Projects: $J = \{1, 2, \ldots, j, \ldots, N\}$ denotes an index set of indivisible projects.
Portfolios: $X = \{x^1, x^2, \ldots, x^j, \ldots, x^N\} \subseteq \{0,1\}^N$ denotes a set of feasible portfolios, where a portfolio is a set containing one or more projects. This set may be too large to list explicitly.
Resources: $G = \{g_1, g_2, \ldots, g_i, \ldots, g_M\}$ denotes a set of non-decreasing resource functions. The set of constraints $g_i(x) \le W_i$, for all $i \in I$, are resource constraints where the right-hand side corresponds to the available amount of a given resource. Resources provide an alternative, implicit way to define $X$.
Criteria: $F = \{f_1, f_2, \ldots, f_l, \ldots, f_P\}$ denotes a set of non-decreasing criterion functions. The criterion functions $f_l$ from $F$ are such that $f_l: \{0,1\}^N \mapsto \mathbb{R}_+$ and $f = (f_1, f_2, \ldots, f_l, \ldots, f_P)$.
Another way to look at this problem is through the lens of decision analysis. In this case, the focus of interest is a value function $V(\cdot)$, that is to say, a functional which maps the criterion space (i.e. the whole of the space of $P$-dimensional vectors of non-negative real numbers, in which $f(X)$ is contained) into $[0,1]$ and represents the DM's preferences in the sense that the DM prefers $z'$ to $z''$ iff $V(z') > V(z'')$. To see the relationship between these complementary perspectives, we define the following extended dominance concepts. Given a set of value functions $\mathcal{V}$, we define:

Definition 1. ($\mathcal{V}$-efficient portfolios and $\mathcal{V}$-non-dominated points). A feasible portfolio $x' \in X$ is called $\mathcal{V}$-efficient iff, for some $V \in \mathcal{V}$, there is no other feasible portfolio $x'' \in X$ such that $V(f(x'')) > V(f(x'))$. In this case, $f(x')$ is called a $\mathcal{V}$-non-dominated point. Any portfolio which is not $\mathcal{V}$-efficient is $\mathcal{V}$-inefficient, and any feasible point which is not $\mathcal{V}$-non-dominated is $\mathcal{V}$-dominated.

Definition 2. ($\mathcal{V}$-supported portfolios and points). A feasible portfolio $x' \in X$ is called a $\mathcal{V}$-efficient supported portfolio iff, for some $V \in \mathcal{V}$, there is no convex combination $z = \sum_{k=1}^{K} \mu^k f(x^k)$ (with $\sum_{k=1}^{K} \mu^k = 1$ and $\mu^k \ge 0\ \forall k$) such that $V(z) > V(f(x'))$. $f(x')$ is called a $\mathcal{V}$-non-dominated supported point. Any efficient portfolios or non-dominated points which are not supported are called unsupported.

Where $\mathcal{V}$ is the set of all possible monotonic increasing functionals, these extended dominance concepts collapse to standard multiobjective definitions of supported and unsupported dominance and efficiency. It is also well known that when $\mathcal{V}$ is the set of linear additive functions $\mathcal{LA} = \{V(\cdot): V(z) = \sum_{l=1}^{P} \lambda_l z_l \text{ for some } \lambda_l \in \mathbb{R}_+\ \forall l\}$, then all efficient portfolios/non-dominated points are supported and the two definitions coincide. The implications of restricting oneself to a linear additive value model are worth exploring in some depth, as this is a model form which is commonly used in practice. As is well known, the use of linear additive value models has considerable advantages in terms of both conceptual simplicity and computational
tractability. For example, once the DM has specified weights, an attractive approach is to prioritise projects in the order of the ratio of their weighted contribution on each criterion to their resource cost (Phillips and Bana e Costa 2007). However, if the linear additive model is not a correct characterisation of the DM's preferences, it can be rather misleading. For example, suppose the analyst (male) asks the DM (female) for "swing weights" for two benefit dimensions ("revenue" and "social welfare", for example) using the following approach: he asks her whether she would prefer all the revenue gains of doing all the projects or the social welfare gains of doing all the projects, and when she concludes in favour of the former, he asks her how great the value of the said revenue gains is as a fraction of the said social welfare gains. Normalising gives a pair of scaling factors, $\lambda_{revenue}$ and $\lambda_{welfare}$, with $\lambda_{revenue} + \lambda_{welfare} = 1$. In the linear value model, these "weights" are effectively extrapolated across the whole of the criterion space: an obvious danger is that there could be some curvature in the organisation's value functions over aggregate levels of revenue and social welfare, and a consequent curvature in the isopreference curves (see MacCrimmon and Wehrung 1988 for some empirical evidence on this point). If the isopreference curvature is rather slight in the neighbourhood of the points chosen as the basis for the weighting question, it may not be evident to DM or analyst. We illustrate this danger with an example, depicted in Fig. 5.1. Suppose that revenue can be delivered at a lower cost than social welfare, in the sense that if the organisation devoted its money budget to delivering revenue, it could deliver almost all the revenue which it would achieve from implementing all its candidate projects; on the other hand, if it devoted all its budget to delivering social welfare, it would generate only a fraction of the social welfare which it would gain from implementing all the projects on the table. The (extended convex hull of the) feasible set is thus the grey area in the figure and the implied isopreference curves of the linear additive model are the dashed lines. However, suppose the DM attaches diminishing value to revenue (for example, in the case of a not-for-profit organisation which has to pay salaries but really exists to undertake more altruistic activities). In this case, the isopreference curves which would be a better representation of the DM's preferences (referred to for
[Fig. 5.1 Isopreference lines for an organisation trading off revenue and social welfare (axes: revenue, social welfare; points a and b as referenced in the text).]
only the most provisional preference information at the first pass, and using that to narrow the space of possible attractive portfolios (for example, using triage rules) before returning to firm up key preference judgements (Liesiö et al. 2007). Still other analysts might perform mini-Portfolio Decision Analyses within different areas (e.g. budget categories) of an organisation with a view to building to an overall global value function which synthesises and subsumes the value functions across multiple areas (Phillips and Bana e Costa 2007).

Nature of preference information sought? The preference information sought may differ in terms of precision (Are point values required or are interval or qualitative assessments adequate?), comprehensiveness (Do all parameters of the value model have to be assessed, at least provisionally, before proceeding to a subsequent stage?), and whether preference intensities or only ordinal preferences are admitted. But different analysts may also differ in terms of the objects over which preferences are sought: Are preferences sought over the portfolios in X directly (as is the case in the Balanced Beam procedure of Kuskey et al. 1981) or are they sought over the criterion space with the construction of specific portfolios coming rather late in the process (Keeney 1992; Keeney and McDaniels 1992, 1999)? This has significant consequences for the construction of a value function: If preferences are elicited over the (discrete) portfolio space, the solvability properties necessary for the identification of a unique value function will not generally hold, whereas if questioning takes place in the criterion space, the analyst may be able to ask for indifference judgements which would allow the identification of a unique value function.

Definition of attractive portfolios and termination criterion? The general aim of the analysis is that the DM learns enough that she can home in on a small number of "attractive" portfolios. But is attractiveness defined by the model (for example, portfolios which are efficient with respect to some possible value function) or should they rather be portfolios which are attractive for some other reason (for example, the holistic evaluation of the DM or of the DM's boss)? The former suggests that the model structure can be taken for granted, and the latter that the model structure itself is always tentative. Similarly, if there are multiple attractive portfolios, or the portfolios which the value model suggests are attractive are not so regarded by the DM, does this suggest that the model needs refinement, or might one terminate the analysis at this stage if the DM has a sense of having adequately explored the problem (Phillips 1984)?

To summarise, in designing a PDA procedure one typically has several design elements or parameters to play with. There is disappointingly little in the literature in the way of frameworks to guide this design process (as is the case in the multicriteria decision analysis and multiobjective programming literature more generally – see, for example, Vanderpooten (1989) or Gardiner and Steuer (1994)). It seems clear that such design choices should be made on the basis of both efficiency considerations, based on analytic and computational work, and psychological considerations, based on empirical investigation of DMs' responses to these methods. However, such research is thin on the ground and where it does exist gives ambiguous guidance. For
example, in an interactive single-choice multicriteria setting, Stewart (1993) finds that obtaining indifference statements is highly effective in narrowing down the space of possible value functions. One can always elicit an indifference statement, if one accepts intensity-of-preference judgements, by asking for a bisection point in the criterion space. However, such questions are also more cognitively challenging for DMs than simple questions of ordinal preference, and so it is not possible to infer from this a context-free recommendation that bisection questions should always be used in all circumstances.
5.4 An Interactive Scheme

In Sections 5.2 and 5.3, we have discussed two critical questions arising in the design of a PDA process: the question of what the analyst can assume about the DM's value function and the question of how the interactive procedure itself should be staged and managed. Possible answers to these questions are linked, and that linkage is heavily dependent on the existence of supporting software and computing technology. In this section, to illustrate this point, we outline an interactive scheme and present the mathematical and computational machinery needed to make this scheme operational. The scheme uses the sort of extended dominance concept outlined in Section 5.2. It thus differs from multiobjective approaches based on a free search concept, in which assumptions about value functions are not made explicit (Vanderpooten and Vincke 1989; Vanderpooten 1989), or from approaches such as the Zionts–Wallenius procedure (Zionts and Wallenius 1976, 1983), where qualitative assumptions (such as quasiconcavity) are made about value functions but the implications of a DM's judgements are used to direct local search rather than to characterise the set of possible value functions. Extended dominance is made concrete in our scheme in the following way. Judgements of ordinal preference $\succsim$ between pairs of alternatives or intensity of preference $\succsim^*$ between four-tuples of alternatives are elicited from DMs, and these judgements are translated into constraints on the set of possible value functions. As usual, $\sim$ and $\sim^*$ denote the symmetric parts of these preference relations. Our scheme relies on three particular assumptions about the problem structure:

1. Value functions are additive and concave, and the partial value functions are monotonically increasing. Concavity is often a plausible feature of value and utility functions. Empirically, value functions are likely to be concave if there is some sort of satiation effect, whereby benefits give less marginal value when there are already large quantities, although non-concavities may exist if there is, for example, a target level of performance (see MacCrimmon and Wehrung 1988). An interesting subcase is where value functions are linear – as we have mentioned above, this is a common assumption in practice, and it makes our scheme particularly tractable.
2. Criterion functions are additive, that is to say $f_l(x) = c_l x$, where $c_l$ is a vector of project-level benefits. This assumption is not as restrictive as it might appear – see, for example, Liesiö et al. (2008) for some ways to handle criterion-level interactions between projects within a similar framework.
3. Resource functions are linear additive, that is to say $g_i(x) = w_i x$, where $w_i$ is a vector of project-level costs (which again is not an especially restrictive assumption).

We do not necessarily recommend the use of the procedures we propose for all PDA problems, or indeed, for any specifically. Nor are they in any way typical of the vast literature on interactive multiobjective methods (for overviews see, e.g., Vanderpooten 1989; Gardiner and Steuer 1994; Korhonen 2004; Branke et al. 2008). Rather, we present them here from the point of view of illustrating what is possible. Using as it does extended dominance, this scheme has the attractive property (from a Decision Analysis point of view) of being based on an explicit characterisation of the set of possible value functions derived from elicited information, and so can be related to approaches which are popular in the Decision Analysis community. A program flowchart representing the logic of the interactive scheme is shown in Fig. 5.3. Stage 1 involves identifying criteria, projects, and resources, and thus feasible portfolios; Stage 2 involves the identification of the reference set and the expression of preferences by the DM. In Stage 3, a preference disaggregation procedure is used to test for possible compatible value functions and perhaps generate such value functions if they exist and incompatibilities if they do not. In Stage 4, some sort of portfolio optimisation model is solved and the results passed back to the DM for consideration, and depending on whether the model is judged requisite, the process either cycles back to an earlier stage or terminates. To make this scheme practical, we need some sort of machinery for steps 3 and 4, in particular, which can "find compatible value functions", "identify inconsistencies", and "solve portfolio optimisation models". We present in Section 5.4.1 two formulations which involve representing the space of possible value functions as a polyhedron and discuss how these formulations might help.
5.4.1 Formulations

We suppose that the possible portfolios range in value, in each dimension $Z_j$ of the criterion space, from $\underline{z}_j$ to $\overline{z}_j$, with the respective vectors written $\underline{z}$ and $\overline{z}$. We suppose that the DM has expressed preferences $\succsim$ or preference differences $\succsim^*$ over some portfolios corresponding to a "reference set" $R$ of points in criterion space. "$r^1 \succsim r^2$" is read as "$r^1$ is at least as preferred as $r^2$", and "$(r^1 - r^2) \succsim^* (r^3 - r^4)$" is read as "$r^1$ is preferred to $r^2$ at least as intensely as $r^3$ is preferred to $r^4$". If $r^1 \succsim r^2$ and $r^2 \succsim r^1$, then $r^1$ and $r^2$ are indifferent, $r^1 \sim r^2$, and the corresponding relation $\sim^*$ is defined in the obvious way. We suppose for presentational convenience that the DM has judged $\overline{z} \succsim r \succsim \underline{z}$ for all $r \in R$.
[Fig. 5.3 Program flow of the interactive scheme: Stage 1, structuring (benefits, projects, resources, giving the feasible portfolios); Stage 2, elicitation of preference information (construct the set of reference portfolios and express preferences over them); Stage 3, construction of value functions (test whether compatible value functions exist, find them if so, identify inconsistencies if not); Stage 4, identification of attractive portfolios (solve portfolio optimisation models); if the resulting model is not judged requisite, the process returns to an earlier stage, otherwise it ends.]
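The control flow of Fig. 5.3 can be summarised as a simple loop; the sketch below is a schematic skeleton in which each stage is a placeholder callable supplied by the analyst, not an implementation of any particular elicitation or optimisation routine.

```python
def interactive_pda(structure, elicit, fit_value_functions, optimise, is_requisite):
    """Schematic skeleton of the interactive scheme in Fig. 5.3.

    Each argument is a caller-supplied function:
      structure()             -> criteria, projects, resources (feasible portfolios)
      elicit(problem)         -> preference statements over reference portfolios
      fit_value_functions(p)  -> (compatible value functions, inconsistencies)
      optimise(problem, vfs)  -> candidate attractive portfolios
      is_requisite(result)    -> True when the DM is satisfied with the model
    """
    problem = structure()
    while True:
        preferences = elicit(problem)
        value_functions, inconsistencies = fit_value_functions(preferences)
        if inconsistencies:
            # Feed the inconsistencies back so the DM can revise judgements.
            continue
        candidates = optimise(problem, value_functions)
        if is_requisite(candidates):
            return candidates
```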
Bounding the value function so that all portfolios are evaluated in the $[0,1]$ interval and translating the DM's expressed preference judgements into constraints, we can arrive at a space of possible value functions, as shown in (5.2). (Some writers also propose modelling a weak preference relation by "padding" the constraints with a very small number $\varepsilon$, but there is in our view no theoretic reason for any particular choice of the value of $\varepsilon$.)

$$(\Pi 1)\ U(r^1) \ge U(r^2) \quad \forall (r^1, r^2) \in R \times R:\ r^1 \succsim r^2$$
$$(\Pi 2)\ U(r^1) = U(r^2) \quad \forall (r^1, r^2) \in R \times R:\ r^1 \sim r^2$$
$$(\Pi 3)\ U(r^1) - U(r^2) \ge U(r^3) - U(r^4) \quad \forall (r^1, r^2, r^3, r^4) \in R \times R \times R \times R:\ (r^1 - r^2) \succsim^* (r^3 - r^4)$$
$$(\Pi 4)\ U(r^1) - U(r^2) = U(r^3) - U(r^4) \quad \forall (r^1, r^2, r^3, r^4) \in R \times R \times R \times R:\ (r^1 - r^2) \sim^* (r^3 - r^4)$$
$$(\Pi 5)\ U(\overline{z}) = 1 \qquad \text{(set ``1'')}$$
$$(\Pi 6)\ U(\underline{z}) = 0 \qquad \text{(set ``0'')} \tag{5.2}$$
The intended interpretation of (5.2) is as follows: $U(r)$ is the value or utility assigned to point $r$. The first two constraints ($\Pi 1$–$\Pi 2$) impose that whenever $r^1$ is preferred to (indifferent to) $r^2$, then the value assigned to $r^1$ is greater than (equal to) the value assigned to $r^2$ (this is what it means for a value function to represent a system of preferences). The second two constraints ($\Pi 3$–$\Pi 4$) impose similar restrictions on values for statements involving preference intensities. The third pair of constraints ($\Pi 5$–$\Pi 6$) bound the value scores over the feasible criterion values to the $[0,1]$ interval. In other words, (5.2) provides us with the space of possible value assignments which respect the DM's expressed preferences. However, if we are to perform an optimisation, it is not enough to know the value assignments to these portfolios only; we have to know something about value functions over the whole of the criterion space. One simple way to extend the value function from these assignments is to introduce a vector of non-negative variables $(\lambda_1, \lambda_2, \ldots, \lambda_l, \ldots, \lambda_P)$ and impose an additional constraint $U(r^k) = \sum_{l \in F} \lambda_l r_l^k$ for each $r^k \in R$. This requires that the value function is of a linear additive form ($U \in \mathcal{LA}$), which will lead to easier and more convenient formulations. However, it is plausible that we may wish to use more complex value models. One approach is to borrow ideas from preference disaggregation (Figueira et al. 2009). Our presentation here draws on the UTA approach (Jacquet-Lagrèze and Siskos 1982, 2001). We show how it is possible to use such ideas to impose a value function of the concave piecewise-linear additive form ($U \in \mathcal{CPA}$). Introduce variables $u_l(f_l(r_l^k))$ for each $r^k \in R$ and impose the constraint $U(f(r^k)) = \sum_{l \in F} u_l(f_l(r_l^k))$ (thus rather than weighting the criteria, we are transforming them by utility functions). Define $\underline{z}_l := z_l^0$ and $\overline{z}_l := z_l^{L_l}$. Then each range $[z_l^0, z_l^{L_l}]$ $(= [\underline{z}_l, \overline{z}_l])$, for all $l \in F$, can be divided into $L_l$ subintervals:
Œz0l ; z1l ; Œz1l ; z2l ; : : :; Œzl
q
l ; zl ; : : :; ŒzlLl 1 ; zL l 8l 2 F;
(5.3)
where q
zl D z0l C
q .zl zl /; q D 0; 1; : : : ; Ll ; and 8l 2 F: Ll
(5.4)
The value ul ./ of a criterion level rlk , i.e. ul .rlk /, can be determined by linear q qC1 interpolations, as follows, for rlk 2 Œzl ; zl for all l 2 F : q
q
ul .rlk / D ul .zl / C
rlk zl qC1
zl
qC1
q
zl
.ul .zl
q
/ ul .zl //:
(5.5)
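To make the interpolation in (5.5) concrete, the following is a minimal sketch (in Python, with invented numbers) of the breakpoint grid of (5.4) and the piecewise-linear evaluation of (5.5) for a single criterion. The breakpoint values `u_break` stand in for the quantities $u_l(z^q_l)$, which in the formulations below are decision variables rather than fixed numbers.

```python
def breakpoints(z_low, z_up, n_pieces):
    """Evenly spaced breakpoints z^0 .. z^L for one criterion, as in (5.4)."""
    return [z_low + q * (z_up - z_low) / n_pieces for q in range(n_pieces + 1)]

def partial_value(r, grid, u_break):
    """Linear interpolation of u_l at criterion level r, as in (5.5)."""
    for q in range(len(grid) - 1):
        if grid[q] <= r <= grid[q + 1]:
            frac = (r - grid[q]) / (grid[q + 1] - grid[q])
            return u_break[q] + frac * (u_break[q + 1] - u_break[q])
    raise ValueError("criterion level outside [z_low, z_up]")

# One criterion scaled to [0, 100] with three pieces; an increasing set of
# breakpoint values (0 at the worst level, 1 at the best level) whose slopes
# decrease, so the resulting partial value function is concave.
grid = breakpoints(0.0, 100.0, 3)          # [0.0, 33.33..., 66.66..., 100.0]
u_break = [0.0, 0.55, 0.85, 1.0]
print(partial_value(50.0, grid, u_break))  # value of criterion level 50 -> 0.7
```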
The piecewise-linear function is thus defined through the determination of the values at the breakpoints (i.e. the $u_l(z^q_l)$). Increasing the number of breakpoints enlarges the space of possible value functions and so makes more accurate assessments possible; however, it also increases the size of the associated mathematical programmes and has computational implications.
In the $\mathcal{LA}$ formulation we impose monotonicity by requiring that the weights are non-negative; in this new environment, we achieve this by imposing the family of constraints $u_l(z^{q+1}_l) - u_l(z^q_l) \geq 0$ for all $l \in F$ and $q = 0, \ldots, L_l - 1$. If we were just concerned with feasible value functions this would be enough. However, we are specifically interested in value functions which we can use in an optimisation setting, and to achieve this we impose concavity through an additional set of constraints

$\dfrac{u_l(z^{q+2}_l) - u_l(z^{q+1}_l)}{z^{q+2}_l - z^{q+1}_l} \;\leq\; \dfrac{u_l(z^{q+1}_l) - u_l(z^q_l)}{z^{q+1}_l - z^q_l}$   (5.6)
for all $l \in F$ and $q = 0, \ldots, L_l - 2$. To summarise, we have the following polyhedra of possible value functions in the $\mathcal{LA}$ and $\mathcal{CPA}$ cases, as shown in (5.7) and (5.8).

Linear additive case: constraints ($\Pi1$)–($\Pi6$) as in (5.2), together with

$(L1)\;\; U(r^k) = \sum_{l \in F} \lambda_l r^k_l \quad \forall r^k \in R$
$(L2)\;\; \lambda_l \geq 0 \quad \forall l \in F$
(5.7)

Concave piecewise-linear additive case: constraints ($\Pi1$)–($\Pi6$) as in (5.2), together with

$(C1)\;\; U(r^k) = \sum_{l \in F} u_l(r^k_l) \quad \forall r^k \in R$
$(C2)\;\; u_l(r^k_l) = u_l(z^q_l) + \dfrac{r^k_l - z^q_l}{z^{q+1}_l - z^q_l}\bigl(u_l(z^{q+1}_l) - u_l(z^q_l)\bigr) \quad \forall r^k \in R,\ \forall l \in F$
$(C3)\;\; u_l(z^{q+1}_l) - u_l(z^q_l) \geq 0 \quad \forall l \in F,\ \forall q = 0, \ldots, L_l - 1$
$(C4)\;\; \dfrac{u_l(z^{q+2}_l) - u_l(z^{q+1}_l)}{z^{q+2}_l - z^{q+1}_l} \leq \dfrac{u_l(z^{q+1}_l) - u_l(z^q_l)}{z^{q+1}_l - z^q_l} \quad \forall l \in F,\ \forall q = 0, \ldots, L_l - 2$
(5.8)
The interpretation of these systems of inequalities is straightforward. Constraints ($\Pi1$–$\Pi6$), which are shared by both systems, are the same as in (5.2) and ensure that the utility assignments are compatible with the stated preferences. In the linear additive case, they are supplemented by constraints $L1$–$L2$, which ensure that the value function can be written as a non-negative weighted sum of the criterion values. In the concave piecewise-linear additive case the additional constraints are the $C$ constraints: $C1$ ensures that the value of a point can be written as the sum of its partial criterion values; $C2$ ensures that these partial values are piecewise-linear interpolations of the values at the adjacent grid points; $C3$ ensures that the partial value functions are increasing; and $C4$ that they are concave.
5.4.2 Use of Formulations

We can use these polyhedra to address various questions of interest:
1. Do value functions compatible with the DM's expressed preference information exist?
2. If no such value functions exist, can we suggest to the DM preference judgements which she may wish to revise?
3. If such value functions do exist, what would a representative sample of them look like?
4. What efficient portfolios might we wish to present back to the DM to elicit further preferences?

Question 1 can be easily addressed using a penalty idea. We replace constraints $\Pi1$–$\Pi4$ in the $\mathcal{LA}$ or $\mathcal{CPA}$ polyhedra with the following "soft" constraints $\Pi'1$–$\Pi'4$:

$(\Pi'1)\;\; U(r^1) + \sigma^+(r^1,r^2) \geq U(r^2) \quad \forall (r^1,r^2) \in R \times R : r^1 \succsim r^2$
$(\Pi'2)\;\; U(r^1) + \sigma^+(r^1,r^2) - \sigma^-(r^1,r^2) = U(r^2) \quad \forall (r^1,r^2) \in R \times R : r^1 \sim r^2$
$(\Pi'3)\;\; U(r^1) - U(r^2) + \sigma^+(r^1,r^2,r^3,r^4) \geq U(r^3) - U(r^4) \quad \forall (r^1,r^2,r^3,r^4) \in R \times R \times R \times R : (r^1,r^2) \succsim (r^3,r^4)$
$(\Pi'4)\;\; U(r^1) - U(r^2) + \sigma^+(r^1,r^2,r^3,r^4) - \sigma^-(r^1,r^2,r^3,r^4) = U(r^3) - U(r^4) \quad \forall (r^1,r^2,r^3,r^4) \in R \times R \times R \times R : (r^1,r^2) \sim (r^3,r^4)$
(5.9)

These constraints $\Pi'1$–$\Pi'4$ contain non-negative slack terms $\sigma^+$ and $\sigma^-$ which measure the deviation of a solution from feasibility with respect to $\Pi1$–$\Pi4$. The slack terms are required to be non-negative, and we minimise their sum. If the value of the resulting programme is 0, there is a feasible assignment of values; otherwise there is not. In this latter case, we have found that the preferences expressed by the DM are collectively not consistent with any possible value function (this is not uncommon – see Korhonen et al. 1990). It is possible to identify candidate sets of judgements which the DM may wish to relax by formulating and solving a sequence of integer programmes, based on the ideas of Mousseau et al. (2003), thus supplying an answer to question 2.
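As an illustration of the penalty idea, here is a minimal sketch – not the authors' implementation – of the feasibility check for the $\mathcal{LA}$ case using `scipy.optimize.linprog`. The reference points, preference statements and criterion bounds are invented; only statements of the form "$r^a$ is at least as preferred as $r^b$" are handled, and one non-negative slack is attached to each statement as in $\Pi'1$.

```python
import numpy as np
from scipy.optimize import linprog

# Criterion bounds (illustrative): worst point z_low, best point z_up.
z_low = np.array([0.0, 0.0])
z_up = np.array([1.0, 1.0])

# Reference portfolios in criterion space (illustrative data).
r = {"r1": np.array([0.9, 0.2]),
     "r2": np.array([0.4, 0.6]),
     "r3": np.array([0.2, 0.9])}

# DM statements of the form "a is at least as preferred as b".
prefs = [("r1", "r2"), ("r2", "r3")]

m = len(z_up)             # number of criteria
n_slack = len(prefs)      # one slack per preference statement
n = m + n_slack           # decision vector: [lambda_1..lambda_m, slacks]

# Objective: minimise the sum of the slacks.
c = np.concatenate([np.zeros(m), np.ones(n_slack)])

# Soft preference constraints: lambda.(r_b - r_a) - slack_k <= 0.
A_ub = np.zeros((n_slack, n))
for k, (a, b) in enumerate(prefs):
    A_ub[k, :m] = r[b] - r[a]
    A_ub[k, m + k] = -1.0
b_ub = np.zeros(n_slack)

# Normalisation: U(z_up) = 1 and U(z_low) = 0.
A_eq = np.vstack([np.concatenate([z_up, np.zeros(n_slack)]),
                  np.concatenate([z_low, np.zeros(n_slack)])])
b_eq = np.array([1.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
print("total slack:", res.fun, "weights:", res.x[:m])
```

If the minimised total slack is zero, a compatible linear additive value function exists; a strictly positive optimum signals the kind of inconsistency discussed next.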
First we solve the following integer programme, where $M$ is a large real number:

$\min \;\; \sum_{(r^1,r^2) \in R \times R : r^1 \succsim r^2} \delta(r^1,r^2) \;+\; \sum_{(r^1,r^2) \in R \times R : r^1 \sim r^2} \bigl(\delta^+(r^1,r^2) + \delta^-(r^1,r^2)\bigr)$
$\qquad +\; \sum_{(r^1,r^2,r^3,r^4) : (r^1,r^2) \succsim (r^3,r^4)} \delta(r^1,r^2,r^3,r^4) \;+\; \sum_{(r^1,r^2,r^3,r^4) : (r^1,r^2) \sim (r^3,r^4)} \bigl(\delta^+(r^1,r^2,r^3,r^4) + \delta^-(r^1,r^2,r^3,r^4)\bigr)$

subject to:

$(\Pi''1)\;\; U(r^1) + M\,\delta(r^1,r^2) \geq U(r^2) \quad \forall (r^1,r^2) \in R \times R : r^1 \succsim r^2$
$(\Pi''2)\;\; U(r^1) + M\,\delta^+(r^1,r^2) \geq U(r^2)$ and $U(r^2) + M\,\delta^-(r^1,r^2) \geq U(r^1) \quad \forall (r^1,r^2) \in R \times R : r^1 \sim r^2$
$(\Pi''3)\;\; U(r^1) - U(r^2) + M\,\delta(r^1,r^2,r^3,r^4) \geq U(r^3) - U(r^4) \quad \forall (r^1,r^2,r^3,r^4) : (r^1,r^2) \succsim (r^3,r^4)$
$(\Pi''4)\;\; U(r^1) - U(r^2) + M\,\delta^+(r^1,r^2,r^3,r^4) \geq U(r^3) - U(r^4)$ and $U(r^3) - U(r^4) + M\,\delta^-(r^1,r^2,r^3,r^4) \geq U(r^1) - U(r^2) \quad \forall (r^1,r^2,r^3,r^4) : (r^1,r^2) \sim (r^3,r^4)$
$(\Pi 5)\;\; U(\bar z) = 1$
$(\Pi 6)\;\; U(\underline z) = 0$
$(1)\;\; \delta(r^1,r^2) \in \{0,1\} \quad \forall (r^1,r^2) \in R \times R : r^1 \succsim r^2$
$(2)\;\; \delta^+(r^1,r^2),\, \delta^-(r^1,r^2) \in \{0,1\} \quad \forall (r^1,r^2) \in R \times R : r^1 \sim r^2$
$(3)\;\; \delta(r^1,r^2,r^3,r^4) \in \{0,1\} \quad \forall (r^1,r^2,r^3,r^4) : (r^1,r^2) \succsim (r^3,r^4)$
$(4)\;\; \delta^+(r^1,r^2,r^3,r^4),\, \delta^-(r^1,r^2,r^3,r^4) \in \{0,1\} \quad \forall (r^1,r^2,r^3,r^4) : (r^1,r^2) \sim (r^3,r^4)$
(5.10)

This uses the same idea as formulation (5.9). The difference is that in this formulation the $\delta$s are binary variables. We multiply these $\delta$s by a large number $M$, and this product serves as the slack in the inequalities $\Pi''1$–$\Pi''4$, which govern whether the value assignments represent the preferences (which they must do if all $\delta$s equal 0). The mathematical programme minimises the number of violated constraints (the sum of the indicator variables of violated constraints). If it has been shown that there are no feasible value functions in $\mathcal{LA}$ or $\mathcal{CPA}$, but the objective function of (5.10) is 0 at optimality, the DM's preference judgements do not violate normative principles (e.g. transitivity) but they are inconsistent with linear additivity/concave additivity. In this case, it might be advisable to begin a discussion with the DM to re-establish whether these structural assumptions are appropriate, perhaps with a view to restructuring the value model. If the objective function is not 0 at optimality, it may be possible to restore feasibility if the DM is prepared to reconsider some of her preference or indifference
statements. Denote the index set of the $\delta$s ($\delta^+$s, $\delta^-$s) which take the value 1 in the optimal solution of (5.10) as $S_1$ ($S_1^+$, $S_1^-$). Then append to (5.10) the constraint

$\sum_{\delta(r^1,r^2) \in S_1} \delta(r^1,r^2) + \sum_{\delta^+(r^1,r^2) \in S_1^+} \delta^+(r^1,r^2) + \sum_{\delta^-(r^1,r^2) \in S_1^-} \delta^-(r^1,r^2)$
$\quad + \sum_{\delta(r^1,r^2,r^3,r^4) \in S_1} \delta(r^1,r^2,r^3,r^4) + \sum_{\delta^+(r^1,r^2,r^3,r^4) \in S_1^+} \delta^+(r^1,r^2,r^3,r^4) + \sum_{\delta^-(r^1,r^2,r^3,r^4) \in S_1^-} \delta^-(r^1,r^2,r^3,r^4)$
$\quad \leq\; |S_1| + |S_1^+| + |S_1^-| - 1$   (5.11)
and resolve it. This constraint ensures that when we solve our modified version of (5.10) we will find a new set of infeasible constraints. Denote the index set of the $\delta$s ($\delta^+$s, $\delta^-$s) which take the value 1 in the optimal solution to this new programme as $S_2$ ($S_2^+$, $S_2^-$), define a new constraint corresponding to (5.11), and iterate until the resulting mathematical programme is infeasible. This procedure will generate a succession of sets of preference and indifference statements which may be relaxed to restore feasibility.

Turning now to question 3, and assuming for the moment that we obtain a positive indication that there is indeed a set of feasible value functions, a natural next question might be how to find a representative set – perhaps a sophisticated DM might be able to choose between value functions directly. An idea which has been expressed in the literature, though in an MOLP rather than a combinatorial context (Jacquet-Lagrèze et al. 1987; Stewart 1987), is to obtain a set of value functions by finding the feasible value functions which maximise $\lambda_l$ or $u_l(\bar z_l)$ for each $l \in F$. In the $\mathcal{CPA}$ case, however, it would seem to make sense also to consider "most concave" and "least concave" value functions, perhaps obtained by a procedure such as maximising/minimising the utility of some central point in the criterion space, as this could make a significant difference to the character of the eventual solution: intuitively, the "more concave" a utility function is, the more evenly distributed the preferred solution vector will be in the criterion space.

Once a number of suitable sets of $\lambda_l$s or $u_l(r^k_l)$s and $u_l(z^q_l)$s have been found, any member of that set can be used as an objective function in a combinatorial programme, and solving that programme will result in an efficient solution. In the $\mathcal{LA}$ case, given a vector $\lambda^o$ of weights, one can form the following programme (writing $c_l x = \sum_{j \in J} c_{jl} x_j$ for the aggregate score of portfolio $x$ on criterion $l$):

$\max\;\; U = \sum_{l \in F} \lambda^o_l c_l x$
subject to:
$(O1)\;\; \sum_{j \in J} w_{ij} x_j \leq W_i \quad \forall i \in I$
$(O2)\;\; x_j \in \{0,1\}, \quad j \in J$
(5.12)
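For concreteness, a minimal sketch of solving (5.12) for one given weight vector is shown below. It uses `scipy.optimize.milp` (available in SciPy 1.9 and later) on invented data with a single budget resource; it is not the authors' implementation, and the weight vector is simply assumed to be one of the compatible vectors found earlier.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative data: 6 candidate projects, 2 criteria, one budget resource.
cost = np.array([4.0, 6.0, 3.0, 5.0, 2.0, 4.0])      # w_{1j}
W = 12.0                                              # budget W_1
c = np.array([[10.0, 7.0, 4.0, 8.0, 3.0, 5.0],        # criterion 1 scores c_{j1}
              [2.0, 6.0, 7.0, 3.0, 8.0, 4.0]])        # criterion 2 scores c_{j2}
lam = np.array([0.6, 0.4])                            # an assumed compatible weight vector

# Objective: maximise sum_l lam_l * (c_l . x), i.e. minimise its negative.
obj = -(lam @ c)

res = milp(c=obj,
           constraints=LinearConstraint(cost[np.newaxis, :], -np.inf, W),
           integrality=np.ones(len(cost)),            # x_j integer, bounded to {0,1}
           bounds=Bounds(0, 1))

x = np.round(res.x).astype(int)
print("selected projects:", np.nonzero(x)[0] + 1, "value:", -res.fun)
```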
Constraint $O1$ ensures that the aggregate cost of feasible solutions falls within each budget constraint, and $O2$ ensures that projects are either done or not done, i.e. they cannot be fractionally done. In the $\mathcal{CPA}$ case matters are somewhat more involved, as we have to linearise the partial value functions. To do this, let $(z^{q-1}_l, u_l(z^{q-1}_l))$ and $(z^q_l, u_l(z^q_l))$ denote the two points that define piece $q$ of the piecewise-linear function $u_l(\cdot)$ for criterion $l \in F$. This piece has slope $s^q_l = \bigl(u_l(z^q_l) - u_l(z^{q-1}_l)\bigr) / \bigl(z^q_l - z^{q-1}_l\bigr)$, and since the function is concave the following relation holds:

$s^1_l \geq s^2_l \geq \cdots \geq s^{L_l}_l.$

Let $d^q_l = z^q_l - z^{q-1}_l$ denote the difference between two consecutive breakpoints, and let $\hat z^q_l$ denote the portion of the $q$-th subinterval that is used. Thus $\hat z^q_l$ can be treated as a new decision variable such that $0 \leq \hat z^q_l \leq d^q_l$ for all $l \in F$ and $q = 1, 2, \ldots, L_l$; the argument $z_l$ of each function $u_l(z_l)$ becomes $z_l = \sum_{q=1}^{L_l} \hat z^q_l$, and this in turn has to equal $\sum_{j \in J} c_{jl} x_j$ to ensure that there is a feasible point in the decision space. We can then write the linearised partial value function as $u_l(z_l) = \sum_{q=1}^{L_l} s^q_l \hat z^q_l$ for all $l \in F$. So, by adding constraints $D1$ and $D2$ which impose these conditions, given some value function $U(f(x))$, we can find the optimal portfolio for each of the extremal value functions by solving a new mixed $\{0,1\}$ LP model:

$\max\;\; U = \sum_{l \in F} \sum_{q=1}^{L_l} s^q_l \hat z^q_l$
subject to:
$(O1)\;\; \sum_{j \in J} w_{ij} x_j \leq W_i \quad \forall i \in I$
$(O2)\;\; x_j \in \{0,1\}, \quad j \in J$
$(D1)\;\; \sum_{q=1}^{L_l} \hat z^q_l = \sum_{j \in J} c_{jl} x_j \quad \forall l \in F$
$(D2)\;\; 0 \leq \hat z^q_l \leq d^q_l \quad \forall l \in F,\ \forall q = 1, 2, \ldots, L_l$
(5.13)
Given the way we built the concave functions, it is easy to see that the optimal objective value will be the value of the optimal portfolio according to the value function consistent with the $u_l(r^k_l)$s and $u_l(z^q_l)$s. Because of the concavity of the function, it is also easy to see that in an optimal solution the subintervals fill up in order: $\hat z^q_l = d^q_l$ for all $q$ up to some index, and $\hat z^q_l = 0$ thereafter.

The approach sketched above has given us one way of tackling question 4: in the context of an interactive procedure, one could use the above method to find a number of very different possible value functions; use these value functions to identify (hopefully also very different) portfolios, each of which is efficient with respect to some value function; and present these portfolios back to the DM to elicit new judgements. Suppose, however, that our DM is particularly demanding and wants not just some but all efficient portfolios. Consider first of all the $\mathcal{LA}$ case. In this case, we do
not have a determinate $\lambda^o$. To comply with the DM's request, we could form a programme, parametric in $\lambda$, of the following form:

$\max\;\; U(\lambda) = \sum_{l \in F} \lambda_l c_l x$
subject to:
$(\Pi 1)\;\; U(r^1) \geq U(r^2) \quad \forall (r^1,r^2) \in R \times R : r^1 \succsim r^2$
$(\Pi 2)\;\; U(r^1) = U(r^2) \quad \forall (r^1,r^2) \in R \times R : r^1 \sim r^2$
$(\Pi 3)\;\; U(r^1) - U(r^2) \geq U(r^3) - U(r^4) \quad \forall (r^1,r^2,r^3,r^4) \in R \times R \times R \times R : (r^1,r^2) \succsim (r^3,r^4)$
$(\Pi 4)\;\; U(r^1) - U(r^2) = U(r^3) - U(r^4) \quad \forall (r^1,r^2,r^3,r^4) \in R \times R \times R \times R : (r^1,r^2) \sim (r^3,r^4)$
$(\Pi 5)\;\; U(\bar z) = 1$
$(\Pi 6)\;\; U(\underline z) = 0$
$(L1)\;\; U(r^k) = \sum_{l \in F} \lambda_l r^k_l \quad \forall r^k \in R$
$(L2)\;\; \lambda_l \geq 0 \quad \forall l \in F$
$(O1)\;\; \sum_{j \in J} w_{ij} x_j \leq W_i \quad \forall i \in I$
$(O2)\;\; x_j \in \{0,1\}, \quad j \in J$
(5.14)
We have seen all the constraints of this programme before: the reader is referred back to (5.2), (5.7), and (5.12) for an interpretation. It is not immediately obvious how one should solve this parametric programme. One strategy, explored in detail in Argyris et al. (2011), is to treat the programme as an optimisation not just in $x$ but also in $\lambda$. Because of the Boolean nature of the $x$ vector, it is possible to substitute a third set of variables $\alpha_{jl}$ for the product term $\lambda_l x_j$ as long as a suitable set of constraints is added. Solving this problem yields one efficient solution; if constraints are then appended to "cut off" this efficient solution and the programme is resolved, one obtains another efficient solution, and it is possible to show that all efficient solutions can be obtained in this way. The $\mathcal{CPA}$ case can be dealt with in a completely analogous way.

Another possibility is that the DM wants to identify portfolios which are "as different as possible" from some incumbent portfolio $x^o$ which she has in mind. In this case, we can solve a programme with the constraint set of (5.14) but in which we maximise $\sum_{l \in F} \lambda_l c_l x - \sum_{l \in F} \lambda_l c_l x^o$. Argyris et al. (2011) present the formulation in full for the $\mathcal{LA}$ case and describe a procedure whereby a DM interacts with a Decision Support System. In this procedure, at each iteration, the Decision Support System solves this programme to obtain a portfolio $x^*$, and the DM expresses a preference between her incumbent portfolio and $x^*$. The more preferred of these two portfolios becomes the new incumbent, and the Decision Support System resolves the programme updated with the new preference information. Argyris et al. show that if the DM knows her value function, the procedure will eventually recover her optimal portfolio. These results can also be generalised to the $\mathcal{CPA}$ case.
5.5 Example

Finally, we present for illustrative purposes an example of how our formulations might work. The data we use are based on work with a Primary Care Trust (a body which combines the roles of local health authority and health insurer in England) reported in Airoldi et al. (2011). The DM has a cash surplus of £1m per annum and is interested in thinking through how these funds might be spent. There are 21 candidate packages, and the two criteria in this example are health benefit and inequality reduction. The data for this example are shown in Table 5.2.

We start by considering the linear additive case, where $V \subseteq \mathcal{LA}$. Using the enumeration procedure described earlier we can identify all $V$-efficient supported portfolios and the associated points in the criterion space. It turns out that there is only a handful of these in this example, specifically the five portfolios listed in Table 5.3. This provides a very considerable reduction in problem complexity, as we have effectively transformed an instance of a PDA problem into a typical instance of a single-choice problem. Figure 5.4 plots all supported points in the criterion space. It is particularly interesting that the single portfolio that has the greatest impact in reducing inequalities seems to be relatively unattractive in terms of overall health benefits. Similarly, the group of portfolios that offer higher health benefits seems relatively unattractive in terms of reducing inequalities. Looking back at the portfolio compositions and data, there is a simple explanation for this.
Table 5.2 Data for numerical example

Project   Cost   Health benefit   Inequality reduction
1          130        45              75
2          650     4,500             100
3          100         2.5            25
4          600        19.2            50
5          300        12               5
6          300        16              50
7          760        90              25
8           50         6               0
9           75       100              60
10          50        72               8
11         150        18.9            80
12         120        18              56
13         300        16.2            76
14         300        50.4            40
15         100        43.2            49
16          60        45              17.5
17         160        67.5            35
18          80         9              35
19         600       101.25           70
20         480        72              56
21         140        36              28
Table 5.3 Supported portfolios and points in the criterion space

Portfolio   Projects                            Cost   Health    Inequity
r1          (1, 3, 9, 11, 12, 15, 16, 17, 18)    975     349.1     432.5
r4          (1, 2, 9, 12)                        975   4,663.0     291.0
r3          (1, 2, 9, 10, 18)                    985   4,726.0     278.0
r2          (1, 2, 9, 10, 16)                    965   4,762.0     260.5
r5          (2, 8, 9, 10, 15, 16)                985   4,766.2     234.5
Fig. 5.4 Non-dominated portfolios for healthcare example (inequality reduction plotted against health benefit; the supported points $r^1$–$r^5$ are shown, together with the unsupported point $r^6$)
Project 2 delivers very considerable health benefits – more than an order of magnitude greater than any other project.

We now illustrate the interactive procedure based on an $\mathcal{LA}$ value function which was discussed in Section 5.4. For the purposes of "simulating" a DM, we shall assume that her preference ordering is represented by the $\mathcal{LA}$ value function $u^*(z) = z_1 + 5 z_2$, where $z_1, z_2$ are the aggregate health benefit and inequality reduction criterion scores, respectively. To initialise the procedure we need to identify an efficient supported portfolio. This can easily be done by maximising one of the criteria over the feasible portfolio set. Choosing to maximise the reduction in inequalities leads to the identification of supported portfolio $r^1$. We then initialise $R := \{r^1\}$ and the incumbent portfolio $r^o := r^1$. We proceed to identify an alternative portfolio which maximises the difference in value with the incumbent with respect to a compatible value function (any $\mathcal{LA}$ value function at this first iteration). This leads to the identification of $r^5$ as the alternative portfolio. At this point the DM is asked to state her preferences with respect to $r^1$ and $r^5$. Since $u^*(r^5) > u^*(r^1)$, she states that $r^5 \succsim r^1$. We incorporate this information in the characterisation of all compatible value functions by introducing the constraint $U(r^5) \geq U(r^1)$ in the instance of the formulation of Argyris et al. (2011), set the incumbent portfolio $r^o := r^5$ and then resolve. This
Fig. 5.5 Two value functions compatible with expressed preference information (partial value functions for health benefit and for inequality reduction under two compatible value functions, labelled value function 1 and value function 2)
leads to the identification of $r^4$, which is preferred to $r^5$, so we set $r^o := r^4$, introduce the additional constraint $U(r^4) \geq U(r^5)$ and repeat the process. In the next two iterations, we set $r^o := r^3$ and then $r^o := r^2$, progressively adding the constraints $U(r^3) \geq U(r^4)$ and $U(r^2) \geq U(r^3)$. Resolving at this point we identify $r^2$ again, with an optimal objective function value equal to zero. From this we establish that there exists no portfolio which could possibly be preferred to $r^2$ on the basis of any compatible value function. It is immediate to verify that this is indeed the case by using the value function $u^*(z)$ which simulates the DM.

Suppose on the other hand that the DM has structurally quite different preferences, of the concave piecewise-linear additive ($\mathcal{CPA}$) type. In particular, she faces a government target requiring her to achieve a certain reduction in health inequalities and she is intent on hitting this target. (Suppose this target is around the 310 mark, illustrated in the figure by the dashed line.) She does not only care about inequalities, but when her total score falls below the inequality target, her preferences are largely driven by the existence of the target: achieving the target, she says, is worth about 15 times as much as achieving the health benefit of doing all the interventions, for a total health benefit of 5,340 points. However, once she has achieved the target, she is not particularly concerned about further inequality reductions: in the neighbourhood of $r^1$, a unit of health inequality reduction is worth between 2 and 5 points of health benefit. If we were to find value functions which respect these expressed preferences (with three linear pieces in each criterion), we might find feasible partial value functions as shown in Fig. 5.5. Although rather hard to distinguish by visual inspection, these two value functions do yield different solutions. Optimising value function 2 returns us $r^1$, but value function 1 returns
us a point we have not previously encountered, $r^6$ – the portfolio of projects $(1, 9, 10, 11, 12, 15, 16, 17, 21)$. This portfolio yields 445.6 units of health benefit and 408.5 units of reduction in health inequalities, and costs £985,000. It is shown in Fig. 5.4 marked with a star. Portfolio $r^6$ is thus an unsupported solution, which we could not have found had we restricted ourselves to value functions of the linear additive type.
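As a check on the kind of enumeration used above, the following is a brute-force sketch – not the procedure of Argyris et al. (2011) – which scans a grid of linear additive weight vectors and solves each weighted selection problem exactly by dynamic programming over cost, using the Table 5.2 data and the £1m budget. Because the original analysis may have imposed additional constraints or a slightly different budget rule that are not fully specified here, the supported portfolios it recovers should be compared with, rather than assumed identical to, those in Table 5.3; and, being weight-based, it cannot find unsupported points such as $r^6$.

```python
# Data of Table 5.2; budget is the GBP 1m surplus (costs in GBP 000s).
cost = [130, 650, 100, 600, 300, 300, 760, 50, 75, 50, 150,
        120, 300, 300, 100, 60, 160, 80, 600, 480, 140]
health = [45, 4500, 2.5, 19.2, 12, 16, 90, 6, 100, 72, 18.9,
          18, 16.2, 50.4, 43.2, 45, 67.5, 9, 101.25, 72, 36]
inequality = [75, 100, 25, 50, 5, 50, 25, 0, 60, 8, 80,
              56, 76, 40, 49, 17.5, 35, 35, 70, 56, 28]
BUDGET = 1000

def best_portfolio(w_health, w_ineq):
    """Maximise w_health*health + w_ineq*inequality subject to the budget (0-1 knapsack DP)."""
    value = [w_health * h + w_ineq * q for h, q in zip(health, inequality)]
    dp = [(0.0, frozenset())] * (BUDGET + 1)   # dp[c]: best (value, projects) with cost <= c
    for j, cj in enumerate(cost):
        new = dp[:]
        for cap in range(cj, BUDGET + 1):
            cand_val = dp[cap - cj][0] + value[j]
            if cand_val > new[cap][0]:
                new[cap] = (cand_val, dp[cap - cj][1] | {j + 1})
        dp = new
    return max(dp, key=lambda t: t[0])[1]

# Scan weights on health from 0 to 1; collect the distinct optimal portfolios.
supported = {best_portfolio(k / 200, 1 - k / 200) for k in range(201)}
for s in sorted(supported, key=lambda s: sum(health[j - 1] for j in s)):
    print(sorted(s),
          "cost", sum(cost[j - 1] for j in s),
          "health", round(sum(health[j - 1] for j in s), 1),
          "inequality", round(sum(inequality[j - 1] for j in s), 1))
```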
5.6 Conclusion

This chapter has been predicated on a belief that incorporating interactive elements in PDA can often be highly beneficial. In our exploration of the form which that interactivity should take we have discussed two interlinked questions: first, how should the process of asking for judgements be staged and managed, and second, what assumptions might an analyst want to make about the structure of a DM's value functions. We have presented and contrasted two interactive procedures for the cases where value functions are linear additive or concave piecewise-linear additive. In the former case, it is relatively easy both to identify feasible value functions and to build formulations which integrate the weight and decision spaces (like (5.14)), but the disadvantage is that it imposes a strong and possibly unrealistic assumption on the DM's preferences. In the latter case, on the other hand, identifying value functions and solving the integrated formulation is computationally more complex but holds out the possibility of representing a richer set of value functions.

One way to look at PDA is that it represents an attempt to broaden the traditional decision analysis paradigm beyond the "single choice" setting, to a setting where the set of alternatives may be very large and only available implicitly (in particular because they are members of a combinatorial set defined by a system of constraints). An implication is that alongside the activities associated with performing a decision analysis – assessment of utility or value functions, elicitation of criterion weights or subjective probabilities, and so on – there is a need to select a small number of attractive alternatives from this feasible set, in other words to solve some sort of optimisation problem. Thus two (interlinked) questions arise:
• How can the elicitation best be structured (or what assumptions have to be made about the DM's beliefs and preferences) in order that the underlying optimisation problem becomes tractable?
• How can the optimisation problem be best exploited to produce results which can guide the elicitation procedure?
Neither of these important questions falls squarely within the boundaries of any one of the conventional OR/MS subfields, and therein lies the excitement and challenge.

Because of this linkage between elicitation and optimisation, opportunities for advancing PDA seem to us to depend heavily on the availability of suitable software
(although it is quite possible to undertake Portfolio Decision Analyses with rather minimal technology – see, e.g. Kirkwood 1996). Several commercialised PDA software packages exist (Lourenço et al. 2008), and experimental software with some interactive features is also available and documented in the research literature (Liesiö et al. 2007, 2008; Lourenço et al. 2011). However, many apparently rather simple tasks, such as the enumeration of all efficient unsupported solutions given some set of preference information, are in fact extremely technically challenging. As a result, we can expect PDA software to go through multiple iterations and, by the time it stabilises, to incorporate some rather sophisticated algorithms. Building usable technology is itself an interactive, learning process.

In this chapter, we have tried to highlight that in designing a PDA procedure one typically has several design elements or parameters to play with – indeed, there are combinatorially many ways in which a PDA could be conducted (when one takes into account that choices have to be made about the structure of the value function; the nature of the questioning of the decision maker, whether cardinal or ordinal, and whether in the decision or criterion space; the intensity of interactivity; and so on). There is disappointingly little in the literature in the way of frameworks to guide this design process, and we hope that this chapter may represent a small step towards remedying this. The focus of this chapter has been on what is technically possible, but in the long run, we would hope that a broader suite of research work (analytic, numerical or empirical) may actually try to answer some of the questions about what works best, and where, and why.

Acknowledgements The authors gratefully acknowledge the support of the RAMS grant from the Council of Rectors of Portuguese Universities and the British Council under the Treaty of Windsor programme, and COST Action Research grant IC0602 on Algorithmic Decision Theory. José Rui Figueira also acknowledges a RENOIR research grant from FCT (PTDC/GES/73853/2006) and financial support from LORIA (project: Multiple Criteria in ROAD). Alec Morton wishes to thank the Instituto Superior Técnico at the Technical University of Lisbon for the hospitality shown to him over several visits. We are grateful to Mara Airoldi and Jenifer Smith for permission to use the data on which our worked example is based. Thanks to Nuno Gonçalves for proof reading an earlier version and to Ahti Salo and Jeff Keisler for helpful comments.
References

Airoldi M, Morton A, Smith J, Bevan G (2011) Healthcare prioritisation at the local level: a sociotechnical approach. Sympose Working Paper 7. LSE, London
Argyris N, Figueira JR, Morton A (2011) A new approach for solving multi-objective binary optimisation problems, with an application to the Knapsack problem. J Global Optim 49:213–235
Austin J, Mitchell IM (2008) Bringing value focused thinking to bear on equipment procurement. Mil Oper Res 13:33–46
Branke J, Deb K, Miettinen K, Słowiński R (eds) (2008) Multiobjective optimisation. Lecture notes in computer science: theoretical computer science and general issues, vol 5252. Springer, Berlin
Ehrgott M (2005) Multicriteria optimisation. Springer, Berlin
Ewing PL, Tarantino W, Parnell GS (2006) Use of decision analysis in the army base realignment and closure (BRAC) 2005 military value analysis. Decis Anal 3:33–49
Figueira JR, Greco S, Słowiński R (2009) Building a set of additive value functions representing a reference preorder and intensities of preference: GRIP method. Eur J Oper Res 195:460–486
Gardiner LR, Steuer RE (1994) Unified interactive multiple-objective programming. Eur J Oper Res 74:391–406
Gomes da Silva C, Clímaco J, Figueira J (2004) A scatter search method for the bi-criteria multidimensional {0,1}-knapsack problem using surrogate relaxation. J Math Model Algorithms 3:183–208
Jacquet-Lagrèze E, Siskos J (1982) Assessing a set of additive utility functions for multi-criteria decision-making, the UTA method. Eur J Oper Res 10:151–184
Jacquet-Lagrèze E, Siskos Y (2001) Preference disaggregation: 20 years of MCDA experience. Eur J Oper Res 130:233–245
Jacquet-Lagrèze E, Meziani R, Słowiński R (1987) MOLP with an interactive assessment of a piecewise linear utility function. Eur J Oper Res 31:350–357
Keeney RL (1992) Value focused thinking: a path to creative decision-making. Harvard University Press, Cambridge
Keeney RL, McDaniels TL (1992) Value-focused thinking about strategic decisions at BC Hydro. Interfaces 22:94–109
Keeney RL, McDaniels TL (1999) Identifying and structuring values to guide integrated resource planning at BC Gas. Oper Res 47:651–662
Kirkwood C (1996) Strategic decision making: multiobjective decision analysis with spreadsheets. Duxbury, Belmont, CA
Kleinmuntz DN (2007) Resource allocation decisions. In: Edwards W, Miles RF, von Winterfeldt D (eds) Advances in decision analysis. CUP, Cambridge
Korhonen P (2004) Interactive methods. In: Figueira J, Greco S, Ehrgott M (eds) Multiple criteria decision analysis: state of the art surveys. Springer, Berlin
Korhonen P, Moskowitz H, Wallenius J (1990) Choice behavior in interactive multiple-criteria decision making. Ann Oper Res 23(1):161–179
Kuskey KP, Waslow KA, Buede DM (1981) Decision analytic support of the United States Marine Corp's program development: a guide to the methodology. Decisions and Designs, Inc., McLean, VA
Liesiö J, Mild P, Salo A (2007) Preference programming for robust portfolio modeling and project selection. Eur J Oper Res 181:1488–1505
Liesiö J, Mild P, Salo A (2008) Robust portfolio modeling with incomplete cost information and project interdependencies. Eur J Oper Res 190(3):679–695
Lindstedt M, Liesiö J, Salo A (2008) Participatory development of a strategic product portfolio in a telecommunication company. Int J Technol Manag 42:250–266
Lourenço J, Bana e Costa C, Morton A (2008) Software packages for multi-criteria resource allocation. In: International engineering management conference, Europe, Estoril, Portugal
Lourenço J, Bana e Costa C, Morton A (2011) PROBE: a multicriteria decision support system for portfolio robustness evaluation (revised version). LSE Operational Research Group Working Paper Series, LSEOR 09.108. LSE, London
MacCrimmon K, Wehrung D (1988) Taking risks. Macmillan, Glasgow
Mavrotas G, Figueira JR, Florios K (2009) Solving the bi-objective multidimensional knapsack problem exploiting the concept of core. Appl Math Comput 215(7):2502–2514
Miettinen K (1999) Nonlinear multiobjective optimisation. Kluwer, Dordrecht
Mild P, Salo A (2009) Combining a multiattribute value function with an optimization model: an application to dynamic resource allocation for infrastructure maintenance. Decis Anal 6(3):139–152
Morton A, Bird D, Jones A, White M (2011) Decision conferencing for science prioritisation in the UK public sector: a dual case study. J Oper Res Soc 62:50–59
Mousseau V, Figueira J, Dias L, Gomes da Silva C, Clímaco J (2003) Resolving inconsistencies among constraints on the parameters of an MCDA model. Eur J Oper Res 147:72–93
Phillips LD (1984) A theory of requisite decision models. Acta Psychol 56:29–48
Phillips LD, Bana e Costa C (2007) Transparent prioritisation, budgeting and resource allocation with multi-criteria decision analysis and decision conferencing. Ann Oper Res 154:51–68
Roy B (1993) Decision science or decision-aid science? Eur J Oper Res 66:184–203
Slovic P, Lichtenstein S (eds) (2006) The construction of preference. CUP, Cambridge
Steuer RE (1986) Multiple criteria optimisation: theory, computation and application. Wiley, New York
Stewart TJ (1987) An interactive multiple objective linear programming method based on piecewise linear additive value functions. IEEE Trans Syst Man Cybern SMC-17:799–805
Stewart TJ (1993) Use of piecewise linear value functions in interactive multi-criteria decision support: a Monte Carlo study. Manag Sci 39:1369–1381
Vanderpooten D (1989) The interactive approach in MCDA: a technical framework and some basic conceptions. Math Comput Model 12:1213–1220
Vanderpooten D, Vincke P (1989) Description and analysis of some representative interactive multicriteria procedures. Math Comput Model 12:1221–1238
Zionts S, Wallenius J (1976) Interactive programming method for solving multiple criteria problem. Manag Sci 22(6):652–663
Zionts S, Wallenius J (1983) An interactive multiple objective linear-programming method for a class of underlying non-linear utility-functions. Manag Sci 29(5):519–529
Chapter 6
Empirically Investigating the Portfolio Management Process: Findings from a Large Pharmaceutical Company

Jeffrey S. Stonebraker and Jeffrey Keisler
Abstract This exploratory study analyzes cross-sectional project data from the enterprise information system at a large pharmaceutical company in order to gain insight into the company's portfolio decision process and the determinants of the ways in which decision analytic tools were applied in practice. Statistical measures based on the economic parameters describe individual projects and various partitions of the company's portfolio of projects. Other statistics, such as the number of scenarios, describe aspects of the structured decision process. The study found significant differences across project groups, and its results suggest reasons for these differences. More generally, obtainable empirical data proved to be useful for studying patterns in the use of portfolio decision analysis.
6.1 Introduction

6.1.1 Research Motivation

Applications of decision analysis are often more complicated than merely conducting probability or utility assessments and then performing calculations. Numerous variations on the textbook approaches exist in practice. These variations arise as practitioners attempt to adapt to their clients' needs. Doing so is as much an art as a science. Practitioners have by this time developed those elements of this art that can be identified simply by reflecting on client needs and project experience. To build knowledge that goes beyond a collection of lessons learned about the
application of decision analysis, it will be useful to explicitly consider empirical data on what happens when decision analysis is applied. The empirical study of portfolio decision analysis could be especially fruitful. Every portfolio creates coherent data about a large number of projects. Portfolio decision analysis itself involves complexities that drive variation in application – both organizational and analytic, as discussed elsewhere in this book. We were inspired to seek empirical data because of questions we had that were not answered elsewhere. To what extent do portfolios vary? To what extent does practice vary? And to what extent does practice align with portfolio characteristics? We obtained access to raw data from a large pharmaceutical company (henceforth referred to as LPC) where portfolio decision analysis was widely used. This chapter presents exploratory work based on a rich case study using archival portfolio planning data. We characterized the portfolios and processes used and the relationships between them. As practitioners and scholars, we found the results to be informative and in some ways surprising. More importantly, this effort holds insights for how empirical work can be a worthwhile direction for future portfolio decision analysis research.
6.1.2 Overview

The analytical process and methods used by pharmaceutical companies to evaluate the commercial potential of the drug development projects in their portfolios may be a contributing factor to lower-than-expected returns. Over the last three decades less than a third of new drugs launched into the marketplace have returned their research and development (R&D) investment (Grabowski et al. 2002; Grabowski and Vernon 1994; Grabowski and Vernon 1990). Credible estimates that account for the uncertainty when evaluating the commercial potential of drug development projects are often lacking and difficult to assess (Evans 1996; Steven 2002; Stonebraker 2002). For example, less than half of pharmaceutical firms in the 1990s used economic evaluations to aid in go/no-go decisions during development (DiMasi et al. 2001). Furthermore, decisions to terminate development of new drugs for primarily economic reasons have occurred very late in the drug development process (DiMasi 2001). Decision processes and drivers of decision quality in pharmaceutical PDA are well characterized in Chapter 13 of this volume (Kloeber 2011).

Portfolio management typically focuses on characterizing the performance indicators of the portfolio, that is, maximizing the value of the portfolio, balancing the portfolio, and aligning the portfolio with the company's strategy (see, e.g., Cooper et al. 1998), but little on the analytical process. The analytical process is how project-level information is obtained and then used to make go/no-go decisions for each project in the portfolio. Key pieces of project-level information include the following: probabilities of technical success, development cost and time, and
commercial potential. Drug development is, due to its regulatory nature, a stage-gate process. Thus, it is customary to characterize risk in terms of the probabilities of technical success for each remaining stage of development. Many pharmaceutical companies have a rich, historical database of past projects which they use to estimate probabilities of technical success and development cost and timing. The other key piece of project-level information needed for portfolio management is an estimate of the commercial potential of a drug development project. This is often the weakest piece of information for portfolio management, especially in cases where the company has limited experience to gauge the market response. This chapter focuses on the analytical process used for evaluating the commercial potential of the drug development projects in a portfolio at LPC. We extract project-level information from this company's enterprise information system, explore this information quantitatively, and compare the analytical process characteristics through a decision-quality lens (Matheson and Menke 1994; Matheson and Matheson 1998). In the next section, we provide background on the decision context – drug development and applications of decision analysis in drug development. Section 6.3 presents our hypotheses, while Section 6.4 describes the methods we used to characterize the analytical process of evaluating the commercial potential of drug development projects in a portfolio for a pharmaceutical organization. Section 6.5 describes the results, Section 6.6 presents a discussion of our findings, and Section 6.7 offers concluding remarks.
6.2 Background

6.2.1 Decision Context of Drug Development

The long timelines and enormous investments from preclinical through clinical testing to regulatory approval to product launch, as well as the staggering odds against technical success, render drug development a challenging undertaking. Typically, it takes 10–15 years to successfully complete the R&D of a new drug, during which time approximately $1.2 billion (DiMasi and Grabowski 2007) is spent, including the cost of failures, since a drug's success cannot be predetermined. Of those drugs that begin clinical testing, approximately 20% are approved by regulatory agencies (e.g., US Food and Drug Administration – FDA) for launch, with success rates varying by therapeutic disease area from 12% for respiratory diseases to 28% for anti-infective disorders (DiMasi 2001). Even if drug development is successful as defined by regulatory approval for market launch, there are no guarantees of commercial success – judged by payback of the R&D investment once the drug is launched into the market. In fact, since the mid-1970s, approximately one out of three drugs launched into the marketplace returned its R&D investment (Grabowski et al. 2002; Grabowski and Vernon 1994,
1990). Lower returns on new drug introductions have been consistent for the past three decades while R&D investments and life-cycle sales have increased in recent years, development times have slightly decreased (DiMasi 2002), and attrition rates have remained stable (DiMasi 2001). Drug development decision making is characterized by a sequence of decision points or gates (Cooper et al. 1998) where a drug development project is either terminated due to some undesirable feature or progressed to the next development stage and resources allocated. The development stages – Preclinical, Phase 1, Phase 2, and Phase 3 – are designed to gather information prior to the next decision point. At each decision point, information collected on the technical feasibility and commercial potential of the drug development project is evaluated and these results are used in deciding whether development should continue. Technical feasibility includes the time and costs needed to develop the drug and the likelihood of the drug succeeding in each stage of development. Commercial potential focuses on the uncertainty of internal factors (e.g., production costs, marketing costs) as well as on the uncertainty of external factors (e.g., the size of the market, the drug’s potential share of that market, and the drug’s price) that affect a drug’s economic viability over its product life cycle (Bauer and Fischer 2000).
6.2.2 Drug Development Decision Analysis

For several decades, decision analysis has proven to be of tremendous value in resolving complex business decisions (Keefer et al. 2004; Corner and Kirkwood 1991), especially in the high-risk industry of drug development. Decision analysis has been used in the pharmaceutical industry for choosing among research drug candidates (Loch and Bode-Greuel 2001), deciding whether to continue development for a new drug (Beccue 2001; Johnson and Petty 2003; Pallay and Berry 1999; Poland 2004; Poland and Wada 2001; Stonebraker 2002; Viswanathan and Bayney 2004), deciding whether to pursue a joint venture to develop a new drug (Thomas 1985), planning production capacity for marketed drugs (Abt et al. 1979), assessing new technologies (Peakman and Bonduelle 2000), and evaluating drug development projects in a portfolio (Evans 1996; Sharpe and Keelin 1998; Spradlin and Kutoloski 1999; Steven 2002). During a drug development decision analysis, information is collected from experts on the key issues or parameters affecting the technical feasibility and commercial potential of the drug development project; a financial model is then constructed to describe relationships among the relevant model parameters. To be most effective, the application of decision analysis to a drug development project requires a structured interaction between a cross-functional drug development team and its senior management, with periodic exchanges of key deliverables (Bodily and Allen 1999). These teams typically have expertise in preclinical research, clinical testing, medicine, regulatory affairs, manufacturing, sales and marketing, finance, project management, and decision analysis. Instead of using
life years or quality-adjusted life years to describe outcomes, as is commonly done in clinical decision analysis (e.g., Cantor 2004), drug development decision analysis uses net present value (NPV) as the financial outcome measure. NPV is a standard financial method for evaluating projects; it is the present value of free, future cash flows, with each cash inflow (including sales revenue) and cash outflow (including cost) discounted back to its present value and then summed. Pharmaceutical companies often use a small set of common model parameters across projects in a portfolio (Hess 1993). In our instance, the most influential model parameters for NPV are analyzed probabilistically using a decision tree to evaluate the technical feasibility and commercial potential of the drug development project and to determine a probability distribution of NPV and an expected NPV. Drug development decision analyses are essential for building support to improve the quality of decisions within the pharmaceutical organization.
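As a reminder of the mechanics, here is a minimal sketch of the NPV calculation described above; the cash flows (in $m) and the discount rate are invented for illustration.

```python
def npv(cash_flows, rate):
    """Net present value of free future cash flows.

    cash_flows[t] is the net cash flow (inflows minus outflows) in year t,
    with t = 0 the present; rate is the annual discount rate.
    """
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative project: development spend followed by sales revenue.
flows = [-150.0, -200.0, -250.0, 100.0, 300.0, 400.0, 350.0, 200.0]
print(round(npv(flows, 0.10), 1))
```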
6.3 Conjectures

Besides the three objectives that typically characterize the performance indicators of the portfolio, there is a fourth objective that is needed for portfolio management. The three performance objectives are the following (Cooper et al. 1998): (1) maximizing the total value of the portfolio (e.g., expected NPV, expected return-on-investment), (2) balancing the portfolio (e.g., by therapeutic disease area, development stage), and (3) aligning the portfolio with the company's strategy (e.g., resource allocation). The fourth objective, the focus of this chapter, characterizes the analytical process, that is, how the company ensures a credible evaluation of the technical feasibility and commercial potential of a drug development project. The analytical process must be consistent, comprehensive, transparent, and realistic. It must be consistent between projects in the portfolio (interproject) and within a project from evaluation to evaluation (intraproject). The analytical process must be comprehensive in that it includes all important, decision-relevant considerations. Throughout the organization the process must be transparent by clearly communicating assumptions and relationships. Finally, the analytical process should be realistic, that is, the process should embrace uncertainty. The portfolios of large pharmaceutical companies consist of many drug development projects in different therapeutic disease areas and in different stages of development, with far too many projects competing for scarce resources. For each drug development project in the portfolio, the technical feasibility and commercial potential are evaluated. Given the analytical effort required to evaluate the projects in their portfolios, large pharmaceutical companies often streamline their analytical process (Evans 1996; Stonebraker 2002). Many large pharmaceutical companies have extensive historical databases of project-level information for the time and costs to complete each stage of development as well as probabilities of technical success for each stage of development. This information is also well documented through industry
benchmark studies and the literature (see, e.g., The Tufts Center for the Study of Drug Development). Instead of using the well-proven probability elicitation methods from decision analysis (Keeney and von Winterfeldt 1991; Merkhofer 1987; Morgan and Henrion 1990; Shephard and Kirkwood 1994; Spetzler and Staël von Holstein 1975; Wallsten and Budescu 1983) to estimate project-specific probabilities of technical success, large pharmaceutical companies often use historical or industry benchmarks for their probabilities of technical success (Stonebraker 2002). Since attrition rates remained relatively stable for a long period of time starting in the 1970s (DiMasi 2001), these benchmarks are good approximations of project-specific probabilities of technical success. Pharmaceutical companies have difficulty in forecasting credible estimates of the commercial potential of a drug development project. Unlike development costs, time, and probabilities of technical success, there are rarely historical databases and industry benchmark studies that catalog commercial potential, especially for novel therapies addressing unmet medical needs (Evans 1996; Steven 2002; Stonebraker 2002). Across the many drug development projects in the portfolio of a large pharmaceutical company, there should be a consistent level of analytical rigor when evaluating the commercial potential within a project (intraproject) and between projects in the portfolio (interproject). On the other hand, a one-size-fits-all approach to project evaluation does not allow for more streamlined analysis in areas where this might be suitable. As we search for interesting relationships, the tension between these two views leads us to consider two specific hypotheses about the processes used at LPC. Note that our analysis here only tests the hypotheses with respect to this specific case; testing whether the hypotheses hold more generally would require data from additional cases.

Hypothesis 1. The analytical process used in evaluating the commercial potential of drug development projects differs significantly across the portfolio by therapeutic disease area.

Hypothesis 2. The analytical process used in evaluating the commercial potential of drug development projects differs significantly across the portfolio by development stage.
6.4 Approach

We collected drug development project-level information from LPC's enterprise information system. This project-level information is a cross-section of the drug development projects in LPC's portfolio. The portfolio today for this company is quite different from the portfolio when the information was collected, given attrition rates of its drug development projects, in-licensing deals, etc. To make comparisons between different groups of projects, we calculated summary statistics for them. We used the mean, standard deviation (SD), and coefficient of variation (CV) to describe the distribution of the number of commercial scenarios
used to evaluate the commercial potential of each drug development project in the portfolio, by therapeutic disease area and development stage. We also used the mean, SD, and CV to describe the distribution of the expected commercial potential of each drug development project in the portfolio, by therapeutic disease area and development stage. The CV is the SD expressed as a percent of the mean and is useful for comparing the amount of variation in dissimilar data sets. An analysis of variance (ANOVA) compared the means of the number of commercial scenarios between therapeutic disease areas and stages of development. ANOVA was also used to compare the means of each project's expected commercial potential by therapeutic disease area and development stage. Finally, ANOVA was used to compare the means of each project's expected commercial potential by the number of commercial scenarios used to evaluate its commercial potential. To accomplish this, we grouped the drug development projects into three categories – Category 1: one commercial scenario (deterministic case), Category 2: two to six commercial scenarios, and Category 3: seven to nine commercial scenarios. Correlation analyses were performed on a project-by-project basis among therapeutic disease areas and development stages to examine the associations between the number of commercial scenarios and expected commercial potential. For all correlation analyses, the strength of the association between two variables was assessed by the correlation coefficient (R). P < 0.05 was considered statistically significant.
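A minimal sketch of these summary statistics and tests, using `scipy.stats` on invented data (the groupings, scenario counts, and ENPVs below are illustrative, not LPC's):

```python
import numpy as np
from scipy import stats

# Invented illustration: number of commercial scenarios per project, by disease area.
scenarios = {"A": [1, 2, 4, 4, 6, 7, 7],
             "B": [1, 1, 1, 3, 5, 7],
             "C": [1, 1, 5, 5, 7, 7, 7, 9]}

for area, x in scenarios.items():
    x = np.asarray(x, dtype=float)
    cv = 100.0 * x.std(ddof=1) / x.mean()        # CV as a percent of the mean
    print(area, "mean", round(x.mean(), 2),
          "SD", round(x.std(ddof=1), 2), "CV%", round(cv, 1))

# One-way ANOVA: do mean scenario counts differ across disease areas?
f_stat, p_value = stats.f_oneway(*scenarios.values())
print("ANOVA F =", round(f_stat, 2), "p =", round(p_value, 3))

# Correlation between number of scenarios and expected commercial potential
# (the ENPVs are invented, paired with the pooled scenario counts).
n_scen = np.concatenate([np.asarray(v, dtype=float) for v in scenarios.values()])
enpv = 50.0 + 30.0 * n_scen + np.random.default_rng(0).normal(0, 40, n_scen.size)
r, p = stats.pearsonr(n_scen, enpv)
print("R =", round(r, 2), "p =", round(p, 3))
```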
6.5 Results

6.5.1 Description of Data

We obtained project-level information from LPC's enterprise information system on the 223 drug development projects in its portfolio. Each drug development project was classified as belonging to one of six therapeutic disease areas (indexed as A, B, C, D, E, and F) and to one of four development stages (preclinical, Phase 1, Phase 2, and Phase 3). The enterprise information system provided project-level information on costs by development stage, probabilities of technical success for each of the remaining development stages, timing (regional market launch dates), and NPVs by commercial scenario. When evaluating the commercial potential of each drug development project in its portfolio, LPC considers up to nine possible commercial scenarios. The company defines a commercial scenario by two factors – product profile and market environment. The company describes the product profile using the product attributes of the drug development project – namely, its efficacy, safety/tolerability, dosage and administration, patient convenience, and treatment cost (Tiggemann et al. 1998). For each project, the company uses these attributes to define up to three outcomes for the product profile – an upside outcome, a most-likely outcome,
Fig. 6.1 Commercial scenarios for drug development projects (a tree crossing the upside, most-likely, and downside outcomes for the product profile with the upside, most-likely, and downside outcomes for the market environment, giving scenarios 1–9, each with a probability $p_i$ and a net present value $\mathrm{NPV}_i$)
and a downside outcome. For example, an upside outcome could be described as having better efficacy and safety than the standard of care, with the remaining attributes being equivalent to the standard of care. The most-likely and downside outcomes would be described in a similar fashion. The company describes the market environment using various parameters, such as patient population, percent of patients treated, price per treatment, class share of market, number of products in class, order of market entry in class, product share within class, year of launch, and patent expiry. The company defines up to three outcomes for the market environment – an upside outcome, a most-likely outcome, and a downside outcome. LPC combines these outcomes with the outcomes for the product profile, resulting in the nine potential commercial scenarios shown in Fig. 6.1. For each scenario, that is, along each path of the tree in Fig. 6.1, there is an NPV and a probability. The NPV is determined in a separate, off-line market model based on settings of the marketing parameters (e.g., patient population, percent of patients treated, etc.), and the probabilities of the upside, most-likely, and downside outcomes for the product profile as well as the market environment are estimated by the drug development project team. The expected commercial potential of a drug development project is the expectation over the NPVs and probabilities of the commercial scenarios considered in Fig. 6.1 and is given by the following equation:

$\mathrm{ENPV} = \sum_{i=1}^{n} \mathrm{NPV}_i \, p_i$
where ENPV is the expected net present value, $\mathrm{NPV}_i$ is the net present value of commercial scenario $i$, $p_i$ is the probability of commercial scenario $i$, and $n$ is the number of commercial scenarios, ranging from one to nine.
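A minimal sketch of this expectation, with invented scenario NPVs and probabilities:

```python
def expected_npv(scenarios):
    """Expected commercial potential from (NPV, probability) pairs."""
    total_prob = sum(p for _, p in scenarios)
    assert abs(total_prob - 1.0) < 1e-9, "scenario probabilities should sum to 1"
    return sum(npv_i * p_i for npv_i, p_i in scenarios)

# Illustrative project with three commercial scenarios (NPV in $m, probability).
print(expected_npv([(900.0, 0.2), (400.0, 0.5), (-50.0, 0.3)]))   # -> 365.0
```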
Table 6.1 Portfolio information obtained from LPC's enterprise information system: number of drug development projects by therapeutic disease area and development stage

                        Disease area
Development stage     A    B    C    D    E    F   Total
Phase 3               5    4    7    9   11   11    47
Phase 2              14    4   12   16   17    7    70
Phase 1               7    2   11   17    7    8    52
Preclinical           5   10   13   12    4   10    54
Total                31   20   43   54   39   36   223
There were six therapeutic disease areas, labeled A, B, C, D, E, and F, and a total of 223 drug development projects in the portfolio. Therapeutic disease area A has a total of 31 drug development projects, consisting of 5 Phase 3 projects, 14 Phase 2 projects, 7 Phase 1 projects, and 5 preclinical projects. Therapeutic disease area D has the most drug development projects (54), whereas B has the fewest (20). Thus, there are differences in the maturity of the therapeutic disease areas, although these are not drastic. The portfolio also has more drug development projects in Phase 2 (70) than in any other development stage.

These numbers indicate that the portfolio is not balanced across therapeutic disease areas or stages of development. The number of drug development projects ranges from 20 to 54 across therapeutic disease areas. While balance across therapeutic disease areas is not, a priori, required for good business practice, the relative balance may influence the character of the analytic process in each area. The portfolio is also heavily concentrated in Phase 2, with 70 drug development projects. Given typical pharmaceutical attrition rates, a balanced portfolio should have the largest number of drug development projects in preclinical, followed by Phase 1, Phase 2, and Phase 3 (see the illustrative sketch below). As we consider the analytical processes used, we note that this lack of balance suggests that different groups of projects may differ in ways that affect how they should be managed.

Table 6.2 shows the number of commercial scenarios used in evaluating the commercial potential of a drug development project by therapeutic disease area and stage of development. For example, of the five Phase 3 projects in therapeutic disease area A, one project used one commercial scenario to evaluate its commercial potential, one project used two scenarios, two projects each used four scenarios, and one project used six scenarios. For the entire portfolio of 223 drug development projects, most projects (70) used only one commercial scenario to evaluate their commercial potential; only six projects used all nine commercial scenarios. The number of commercial scenarios used in the evaluation of commercial potential varied considerably across therapeutic disease areas.
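Returning to the point about stage balance made above, the following back-of-the-envelope sketch shows why attrition implies that a balanced pipeline should taper from preclinical down to Phase 3. The stage success probabilities are purely illustrative assumptions, not LPC data, and the target of 47 Phase 3 projects is taken from Table 6.1 only to anchor the arithmetic.

    # Illustrative stage success probabilities (assumed for this sketch only;
    # they are not LPC figures).
    p_success = {"Preclinical": 0.40, "Phase 1": 0.55, "Phase 2": 0.35}

    # To sustain a given number of Phase 3 projects, each earlier stage must
    # hold roughly 1 / p_success(stage) times as many projects as the stage
    # that follows it.
    required = {"Phase 3": 47}  # Phase 3 count from Table 6.1, used as an anchor
    required["Phase 2"] = required["Phase 3"] / p_success["Phase 2"]
    required["Phase 1"] = required["Phase 2"] / p_success["Phase 1"]
    required["Preclinical"] = required["Phase 1"] / p_success["Preclinical"]

    for stage in ["Preclinical", "Phase 1", "Phase 2", "Phase 3"]:
        print(f"{stage:>11}: roughly {required[stage]:.0f} projects")

Under any such attrition assumptions, the required counts decrease monotonically from preclinical to Phase 3, which is not the shape exhibited by the portfolio in Table 6.1.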
Table 6.2 Portfolio information obtained from LPC's enterprise information system: number of commercial scenarios used in evaluating the commercial potential of a drug development project by therapeutic disease area and stage of development

                              # of commercial scenarios evaluated
Phase        Disease area    1    2    3    4    5    6    7    8    9   Total
Phase 3      A               1    1    0    2    0    1    0    0    0       5
             B               3    0    1    0    0    0    0    0    0       4
             C               5    0    1    0    0    1    0    0    0       7
             D               5    3    0    0    0    0    0    0    1       9
             E               8    1    2    0    0    0    0    0    0      11
             F               8    1    2    0    0    0    0    0    0      11
             All            30    6    6    2    0    3    0    0    1      47
Phase 2      A               0    2    2    0    1    1    8    0    0      14
             B               1    0    1    0    2    0    0    0    0       4
             C               5    1    0    0    1    0    5    0    0      12
             D               2    1    0    0    3    1    7    0    2      16
             E               4    1   11    0    0    1    0    0    0      17
             F               4    1    0    0    0    1    0    0    1       7
             All            16    6   14    0    7    4   20    0    3      70
Phase 1      A               0    0    0    0    1    1    5    0    0       7
             B               0    0    0    0    0    0    2    0    0       2
             C               0    0    0    0    5    0    6    0    0      11
             D               0    0    1    0    3    0   13    0    0      17
             E               3    0    0    0    1    3    0    0    0       7
             F               5    0    0    0    1    0    1    0    1       8
             All             8    0    1    0   11    4   27    0    1      52
Preclinical  A               0    1    0    0    0    0    4    0    0       5
             B               1    0    0    0    6    0    3    0    0      10
             C               6    0    0    0    3    0    4    0    0      13
             D               1    1    2    0    1    0    7    0    0      12
             E               0    0    0    0    1    2    1    0    0       4
             F               8    0    0    0    1    0    0    0    1      10
             All            16    2    2    0   12    2   19    0    1      54
Total                       70   14   23    2   30   12   66    0    6     223
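The descriptive statistics and pairwise P values reported in Table 6.3 (below) could be computed along the following lines. The chapter does not state which procedure produced those P values; this sketch uses made-up per-project scenario counts (only the group sizes match Table 6.3) and assumes a one-way ANOVA for the overall comparison and Welch's t-tests for the pairwise comparisons, which may differ from the authors' actual analysis.

    from itertools import combinations
    import numpy as np
    from scipy import stats

    # Made-up scenario counts per project for three disease areas; the group
    # sizes match Table 6.3, but the counts themselves are not LPC data.
    rng = np.random.default_rng(0)
    areas = {
        "A": rng.integers(1, 10, size=31),
        "E": rng.integers(1, 6, size=39),
        "F": rng.integers(1, 6, size=36),
    }

    # Descriptive statistics as reported in Table 6.3: mean, SD, and CV = SD / mean.
    for name, counts in areas.items():
        mean, sd = counts.mean(), counts.std(ddof=1)
        print(f"{name}: mean={mean:.2f}  SD={sd:.2f}  CV={100 * sd / mean:.0f}%  N={len(counts)}")

    # Overall comparison across areas, then pairwise comparisons between areas.
    f_stat, p_overall = stats.f_oneway(*areas.values())
    print(f"One-way ANOVA across areas: F = {f_stat:.2f}, P = {p_overall:.3f}")
    for a, b in combinations(areas, 2):
        _, p = stats.ttest_ind(areas[a], areas[b], equal_var=False)
        print(f"{a} vs {b}: P = {p:.2f}")

The P values in Table 6.3 can be read the same way: pairs of areas with values above a chosen significance level (e.g., 0.05) correspond to those the text describes as not significantly different.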
Table 6.3 presents the mean, SD, and CV of the number of commercial scenarios used per drug development project by therapeutic disease area, together with ANOVA results. The number of commercial scenarios used by therapeutic disease area A to evaluate the commercial potential of its drug development projects was the highest, at 5.48 ± 2.05 (mean ± SD), whereas the lowest was 2.36 ± 2.55 (mean ± SD), for therapeutic disease area F. The number of commercial scenarios used for evaluating the commercial potential of drug development projects differed significantly between therapeutic disease areas, except for the following comparisons: therapeutic disease areas A and B, A and D, B and C, B and D, and E and F.

The expected commercial potential varied considerably across therapeutic disease areas. Table 6.4 presents the mean, SD, and CV of each drug development project's expected commercial potential by therapeutic disease area and ANOVA
Table 6.3 Statistical analysis of the number of commercial scenarios used to evaluate the commercial potential of drug development projects by therapeutic disease area (P compares therapeutic disease areas)

Therapeutic
disease area   Mean   SD     CV (%)    N     P vs B   P vs C   P vs D
A              5.48   2.05     37      31     0.06     0.02     0.65
B              4.30   2.27     53      20              0.79     0.14
C              4.12   2.66     65      43                       0.03
D              5.26   2.53     48      54
E              2.85   1.91     67      39
F              2.36   2.55    108      36
All            4.09   2.62     64     223
E