Additional praise for Project Risk Management: Essential Methods for Project Teams and Decision Makers

This book provides an overview of the risk landscape and zeroes in on the most practical and efficient risk methodologies. This no-nonsense angle stems from Yuri’s hands-on experience with a number of mega-projects. A must-read for all project practitioners who wish to separate the wheat from the chaff!
Manny Samson, President, MRK2 Technical Consulting Limited

Over the years that I received Yuri’s risk management support, I have found his approach to identifying, addressing, and assessing project risks very efficient, refreshing, and thought provoking, even in areas of work and projects that were new to him!
Martin Bloem, PQS, Principal, Project Cost Services Inc.

Dr. Raydugin’s practical approach to risk management provides the reader with a refreshing and rich take on an old subject. His thorough examples and attention to detail allow the reader to unlock the black-box mysteries of risk management and further enhance the project management toolbox. This is a must-read for project management practitioners of all levels in all industries.
Mikhail Hanna, PhD, PMP, Manager of Project Services, SNC-Lavalin Inc., and Project Management Lecturer
Project Risk Management
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Asia, and Australia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding. The Wiley Corporate F&A series provides information, tools, and insights to corporate professionals responsible for issues affecting the profitability of their company, from accounting and finance to internal controls and performance management.
Project Risk Management Essential Methods for Project Teams and Decision Makers
YURI RAYDUGIN
Cover image: Courtesy of the Author. Cover design: Wiley Copyright © 2013 by John Wiley & Sons, Inc. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. 
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com. Library of Congress Cataloging-in-Publication Data: ISBN 978-1-118-48243-8 (Hardcover) ISBN 978-1-118-74624-0 (ebk) ISBN 978-1-118-74613-4 (ebk) Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
To my uncle Yuri, Lt., 97th Special Brigade, 64th Army, killed in action October 27, 1942, in Stalingrad, age 19. To my parents, Nina and Grigory, my wife, Irina, and my sons, Eugene and Roman. To my former, current, and future co-workers.
Contents

Foreword
Preface
Acknowledgments

PART I: FUNDAMENTAL UNCERTAINTY OF A PROJECT OUTCOME

Chapter 1: Nature of Project Uncertainties
    Phases of Project Development and Project Objectives
    Quest for Predictability of Project Outcome
    Sources and Types of Deviations from Project Objectives
    Key Objects of Risk (or Uncertainty) Management: Do We Really Know What We Try to Manage?
    Uncertainty Exposure Changers
    Conclusion
    Notes

Chapter 2: Main Components of a Risk Management System
    Risk Management Plan
    Organizational Framework
    Risk Management Process
    Risk Management Tools
    Conclusion
    Notes

Chapter 3: Adequacy of Methods to Assess Project Uncertainties
    Review of Deterministic Qualitative (Scoring) Methods
    Review of Deterministic Quantitative Methods
    Review of Probabilistic Qualitative Methods
    Review of Probabilistic Quantitative Methods
    Conclusion
    Notes

PART II: DETERMINISTIC METHODS

Chapter 4: Uncertainty Identification
    When Risk Management Becomes Boring
    Three Dimensions of Risk Management and Uncertainty Identification
    Risk Identification Workshops
    Sources of Uncertainties and Risk Breakdown Structure
    Bowtie Diagrams for Uncertainty Identification
    Three-Part Uncertainty Naming
    Role of Bias in Uncertainty Identification
    Room for Unknown Unknowns
    Conclusion
    Notes

Chapter 5: Risk Assessment and Addressing
    Developing a Risk Assessment Matrix
    Using a Risk Assessment Matrix for Assessment As-Is
    Five Addressing Strategies
    Assessment after Addressing
    Project Execution through Risk Addressing (PETRA)
    Role of Bias in Uncertainty Assessment
    Conclusion
    Notes

Chapter 6: Response Implementation and Monitoring
    Merging Risk Management with Team Work Plans
    Monitor and Appraise
    When Uncertainties Should Be Closed
    When Should Residual Uncertainties Be Accepted?
    Conclusion
    Note

Chapter 7: Risk Management Governance and Organizational Context
    Risk Management Deliverables for Decision Gates
    Ownership of Uncertainties and Addressing Actions
    Management of Supercritical Risks
    Risk Reviews and Reporting
    Bias and Organizational Context
    Conclusion
    Notes

Chapter 8: Risk Management Tools
    Three Dimensions of Risk Management and Structure of the Uncertainty Repository
    Risk Database Software Packages
    Detailed Design of a Risk Register Template in MS Excel
    Commercial Tools for Probabilistic Risk Analyses
    Conclusion
    Notes

Chapter 9: Risk-Based Selection of Engineering Design Options
    Criteria for Engineering Design Option Selection
    Scoring Risk Method for Engineering Design Option Selection
    Decision Tree for Engineering Design Option Selection (Controlled Options)
    Conclusion
    Note

Chapter 10: Addressing Uncertainties through Procurement
    Sources of Procurement Risks
    Quantitative Bid Evaluation
    Package Risk Management Post-Award
    Conclusion
    Notes

Chapter 11: Cost Escalation Modeling
    Overview of the Cost Escalation Approach
    Example of Cost Escalation Modeling
    Selecting the Right Time to Purchase
    Conclusion
    Notes

PART III: PROBABILISTIC MONTE CARLO METHODS

Chapter 12: Applications of Monte Carlo Methods in Project Risk Management
    Features, Value, and Power of Monte Carlo Methods
    Integration of Deterministic and Probabilistic Assessment Methods
    Uncertainty Objects Influencing Outcome of Probabilistic Analyses
    Origin and Nature of Uncertainties
    Role of Correlations in Cost and Schedule Risk Analyses
    Project Cost Reserve
    Project Schedule Reserve
    Anatomy of Input Distributions
    Probabilistic Branching
    Merge Bias as an Additional Reason Why Projects Are Often Late
    Integrated Cost and Schedule Risk Analysis
    Including Unknown-Unknown Allowance in Probabilistic Models
    Conclusion
    Notes

Chapter 13: Preparations for Probabilistic Analysis
    Typical Workflows of Probabilistic Cost and Schedule Analyses
    Planning Monte Carlo Analysis
    Baselines and Development of Proxies
    Why Using Proxies Is the Right Method
    Mapping of Uncertain Events
    Building and Running Monte Carlo Models
    Conclusion
    Notes

Chapter 14: Using Outputs of Monte Carlo Analyses in Decision Making
    Anatomy of Output Distributions
    Overall Project Uncertainty and Confidence Levels of Baselines
    Project Reserve Criteria
    Uncertainty of Cost Outcome and Classes of Base Estimates
    Cost Reserve Drawdown
    Sensitivity Analysis
    Using What-if Scenarios for Advanced Sensitivity Analysis
    Are We Ready for Construction, Logistics, or Turnaround Windows?
    Validating Results and Closing Probabilistic Analysis
    Conclusion
    Notes

PART IV: RISK MANAGEMENT CASE STUDY: PROJECT CURIOSITY

Chapter 15: Putting Together the Project Curiosity Case Study
    Scope of the Case Study
    Project Curiosity Baselines
    Project Risk Management System Adopted by Project Curiosity
    Overview of Project Uncertainty Exposure of Project Curiosity
    Templates for Probabilistic Cost and Schedule Analyses
    Building and Running Project Probabilistic Cost and Schedule Models
    Three What-If Scenarios
    Conclusion
    Notes

Chapter 16: Decision Making
    Key Points of the Probabilistic Analysis Report
    Decision Gate Review Board Findings and Recommendations
    Conclusion
    Note

About the Author
Index
Foreword
I asked the author why he would want me to write a foreword to a book on risk management when this is not an area of my expertise. He replied that I was exactly the type of reader the book was aimed at: decision makers and project team members. He wanted the book to be a resource for both decision makers and project team members like myself. The book is written at a high level, unencumbered by much mathematics, yet with enough detail to make clear how important a thorough assessment of a project’s uncertainties is to its successful management. This piqued my curiosity, so I agreed to read the book. To my surprise, I found it an enjoyable read, with tips on how to address issues that I might have ignored (e.g., broiler black swans) or not even realized existed (e.g., unknown unknowns). There is even some Russian humor, such as in the sidebar on “flying to the sun.” Gems like avoiding the risk of flying to the sun by flying only at night made me laugh, but I can see how similarly ridiculous risk-avoidance strategies could pass muster if the right people aren’t at the table. Although not specifically called expert opinion, the problem of anchoring or subconscious bias on the part of experts, as well as hidden agendas or conscious bias, is pointed out as a failure of the process to properly quantify risks (uncertainties). The hierarchy, or vertical integration, of risk management is particularly sensitive to this. If the error occurs at the corporate level and is passed down to the business unit charged with delivering the project, an impossible situation arises. This, however, is not the focus of the book; it is concerned with uncertainty issues at the business-unit and project levels, though there still has to be communication across corporate levels. Although I earlier mentioned two terms that might be considered jargon, the book attempts to minimize their use and even suggests alternative terminology.
I had a special reason for mentioning these two because they are particularly applicable to the oil and gas industry. The best recent example of unknown unknowns is the explosion in the supply of natural gas. The United States is going from a country that was building terminals to import LNG
(liquefied natural gas) to now building export terminals for surplus gas. This has developed over the past 10 years owing to technology breakthroughs in horizontal drilling and multistage fracturing, which allow natural gas to be developed commercially from tight petroleum source rocks. This has far-reaching implications, as natural gas is a cleaner-burning fuel than coal and will therefore be the domestic fuel of choice for utilities in the future, replacing coal. The integration of unknown unknowns seems an impossible task at first glance. Some light at the end of the tunnel appears in this book through correlation of past experience with their severity by industry. A black swan is an event that is a game changer when it occurs in a project’s external environment but that has a very low probability of happening. A broiler black swan is politically or commercially motivated and may mask a hidden agenda. An example of this is the development of the oil sands of Alberta, which has been labeled the filthiest oil on the planet and has been subject to condemnation globally by environmentally motivated groups. The author does not offer solutions to these issues; rather, he is concerned with their identification and impact on the project. Once they are identified, methods to reduce their impact would be part of the risk management plan. Being a scientist reading a practical book on risk management authored by an engineer who has also dabbled in science and business is rather a novel exercise. Certainty is paramount in the engineer’s discipline, while uncertainty is the scientist’s mantra. My discipline, geology, where decisions are made based on relatively few data, is probably one of the most important disciplines in which to apply a quantitative approach to risk management.
Being from the old school, where risk was addressed in a strictly deterministic manner, it was nice to see that the author supports the premise that the uncertainties are first identified, assessed, and addressed in a deterministic manner in the initial project phases, followed by integrated probabilistic cost and schedule risk analysis (based on expert opinion and Monte Carlo statistical methodology) to define project reserves and sensitivities (including reputation, environment, and safety). Deterministic analysis is concerned with approval and implementation of addressing actions. Probabilistic analysis is concerned with optimization of baselines and allocation of adequate reserves to ensure a high probability of project success. Issues occur when moving from deterministic to probabilistic analysis. One should be aware from the start of the need to convert deterministic data to probabilistic inputs. Double counting is an issue. There is a need to get correlations properly introduced in the probability model when the uncertainties aren’t truly independent. This book addresses these issues.
Not being a statistician, I appreciate the author’s explanation of the difference between deterministic modeling and the probabilistic approach using the Monte Carlo method: “The Monte Carlo method . . . mimics . . . thousands of fully relevant albeit hypothetical projects.” An analogy to this would be government. The ideal ruler of a country is a monarch who makes all the decisions (akin to the deterministic model) while rule by the people (akin to the probabilistic model) is focused on compromise. The best system depends on the situation and the individual monarch. Neither is perfect, and checks and balances are needed, which have evolved into a system where there are elected representatives of the people (with biases) with the lead representative being a “monarch with diminished powers” due to the checks and balances in place. This is exactly what the author proposes, the “integration of deterministic and probabilistic assessment.” It is interesting that the author is a nuclear engineer, and that Monte Carlo techniques were developed to help build the atomic bomb and now have come full circle where they are the most widely applied method in risk management. One of the strengths (and there are many) of this book is that the author is able to explain many of the procedures developed for risk assessment by analogy to physics principles while keeping the mathematics to a minimum. This is consciously done based on experience, as the author says: “My practical experience of introducing mathematical definitions related to utility functions . . . 
was a real disaster.” I fully agree with the author when he quotes Leonardo da Vinci: “Simplicity is the ultimate sophistication.” Following this theme, Plato advised that the highest stage of learning is when experience allows one to understand the forest, and not just the trees (not Plato’s phraseology; he used the analogy of a ladder); perhaps a slight connection can be made with Nike, which popularized the expression “Just Do It!” This would suggest that a risk assessment is only as good as the quality of the expert opinion given. Risks are identified and their importance assessed so that the focus is on only the most important risks. The less important risks are adequately addressed so that they fall within the company’s risk tolerance. The more important risks are also addressed and tracked, to be able to respond quickly if they approach being critical (nuclear terminology again). The author states, “Unfortunately, it is not unusual that project risks are identified and addressing actions are proposed just for show. This show is called a decision gate.” I think this is only a warning. Decision gates play an important role in both risk management and work plans. While deterministic scoring methods have their place, probabilistic assessment methods can be a utopia for practitioners who do not fully understand uncertainty
assessment. To avoid falling into these traps, it is important to have someone well versed in risk management (a risk manager) to lead the team. The book is nicely laid out in four parts. The first part looks at the risk management process at a high level, followed by more detailed descriptions of deterministic methods, followed by the probabilistic Monte Carlo method. Finally, a detailed example is presented on carbon capture and storage, illustrating the methodology of risk management step by step to the final outcome. The book accomplishes its purpose of being a practical recipe book for the nonexpert who is a decision maker or a project team member and who wishes to understand how to conduct a robust risk management process while avoiding the many pitfalls. Dr. Bill Gunter, P. Geol., Honorary Nobel Laureate, Alberta Innovates—Technology Futures
Preface
That was the day when a grandiose multibillion-dollar mega-project that I had recently joined was pronounced dead. It had drilling pads, processing facilities, an upgrader, pipelines, roads, camps, and an airfield in its scope. It turned out that several critical risks were not properly addressed, which made the uncertainty of the project outcome unacceptably high. As a result of a decision gate review, a few-million-dollar de-risking project was announced instead, to prove the core in situ technologies standing behind the project and to address some other critical risks stemming from the key project assumptions. “The King is dead, long live risk management!” This was a pivotal point in my ongoing interest in project risk management, one that defined my career path for many years ahead. There were two valuable lessons learned related to that project. First, despite the fact that the majority of the project team members were high-level specialists in project management, engineering, procurement, construction, project services, stakeholder relations, safety, environment, and so on, they were not comfortable enough in the selection and application of project risk management methods. Second, even though decision makers at the divisional and corporate levels had tons of project development experience, they did not pay due attention to some particular categories of uncertainties. In both cases, the situation was exacerbated by manifestations of bias based on a degree of overconfidence and a desire to push the project through decision gates to sanctioning. Another bias factor that blindfolded decision makers was the impressively growing price of oil at that time. These two lessons led to a quest for a few simple but really effective, adequate, and practical methods of project risk management. Those methods should be understandable enough by both project teams and decision makers. To simplify, input–black box–output engagement relations between project teams, risk management, and decision makers were contemplated.
In this simplified picture of the risk management world, project team members should provide high-quality unbiased information (input) related to their disciplines to feed just a few risk methods. To do this they should know the logic behind the required input information as well as its specs. It would not hurt if project team members were aware of the outputs expected after processing the information they provided. Similarly, decision makers should not merely be familiar with the information required as inputs to the risk management black box. They should be utterly comfortable interpreting the results (outputs) coming from the black box and be able to use them in informed, risk-based and, again, unbiased decision making. To assure quality inputs and outputs, project team members and decision makers (project practitioners) should know the methods of project risk management well enough. In a way, the black box should be seen by them as a practical risk management toolbox (not a mysterious black box) that contains a small number of slick and handy instruments. The practitioners would not be able to use all of those tools on their own, but they certainly should be familiar with their purpose, value, and applicability. Obviously, this approach requires a project risk manager who maintains the toolbox and applies the tools in the right ways. This book defines his or her role as a custodian of the toolbox and should help to ensure that the correct inputs and outputs are provided and used. This book is based on my project risk management involvement in almost two dozen mega-projects in owner and engineering, procurement, and construction (EPC) organization environments, but I won’t name them, since I am bound by multiple confidentiality agreements (unless my participation was recognized in project reports available in the public domain). The majority of those projects belong to the oil and gas, petrochemical, and energy industries.
Pipeline, conventional oil, heavy oil and oil sands production, conventional and unconventional gas extraction, refinery, upgrader, chemical plant, CO2 sequestration, power generation, transmission line, gasification, and liquefied natural gas (LNG) projects are among them. Only the people I worked with on those projects could have guessed that some of them (both projects and people) would implicitly shine through this book. My former and current co‐workers may also recognize and recall our multiple discussions as well as training sessions I provided. The methods and insights described in this book are applicable to more than just mega‐projects. The same methods and insights could be relevant to a few‐hundred‐thousand‐dollar tie‐in project, a major project to construct a several‐hundred‐kilometer pipeline in Alberta, and a pipeline mega‐project
connecting Alberta with refineries in the southern United States or eastern Canada or LNG export terminals in British Columbia. (According to common industry practice, a project is conditionally defined as mega if it has a budget of at least $1 billion. Capital projects of more than $10 million to $100 million or so could be considered major, depending on their complexity and organization.) Are the methods and insights of this book applicable to other industries? Yes and no. They are certainly applicable to any infrastructure, civil engineering, mining, metallurgy, chemicals, or wind or nuclear power generation projects, and so on, regardless of their size. However, this book probably has too strong a capital-projects flavor to be directly applicable to IT, pharmaceutical, consumer goods, defense, or air/space initiatives, especially if R&D activities are part of those projects. I am not familiar enough with those industries and the project development practices adopted there. Prudently speaking, some effort would be required to adapt the insights of this book to them. At the same time, a general discussion on the applicability of the project risk management methods of this book to R&D projects is provided. (This does not mean that I discourage representatives of those industries from buying this book. On the contrary, I strongly believe that the main ideas and principles of risk management are universal.)
Second, it will not be an exaggeration to say that most project practitioners, including risk managers, do not read a lot of academic books. Even if they do, they do not often apply the risk management methods found in academic books in practice. Quite often risk management methodologies are forced on projects by vendors of the corresponding risk management software packages. Along with their IT tools they impose their understanding of risk management, which is not always adequate. Third, to engage project practitioners and not scare them away, I use references to a very few really good academic books, as well as to some relevant academic articles that I did read (or wrote), but only where absolutely necessary. Some authors of academic articles might discover that this book reflects on ideas that are similar to theirs. This shows my practical contribution to supporting and confirming those ideas, which was done even without my knowledge of their origin. Where there are contradictions with some brilliant ideas,
please write such misalignments off as my being biased, under-informed, or too practical. This book is not a quest for full completeness of all known project risk management methods. On the contrary, it is a quest for selective incompleteness. Only the few practically important risk methods that make a real difference in the project environment are part of this book. The rest are either briefly discussed, just to outline the “edge of practical importance,” or not cited at all. On the other hand, there is a certain completeness to the risk methods included, as they represent a sufficiently minimal set that covers all main aspects of modern risk management of capital projects and all types of uncertainties, called objects in this book. No more and no less than that! The attempt by some authors to include all known risk methods and produce a sort of risk management War and Peace is one of the key reasons that practitioners are reluctant to read academic books: it is not always clear what is really important for practice when reading those types of books. A limited number of methods are required when managing the risks of real capital projects. The rest are not very relevant to practice. This explains the words “Essential Methods” in the book’s subtitle. Several preliminary subtitles discussed with my publisher actually contained words such as “Pareto Principle,” “20/80 Rule,” “Lean Approach,” and so on. Even though these were not used, in essence I preach (but do not pontificate upon) these concepts here. The style of the book is close to a field manual or travel notes from a journey in the field of risk management. I tried not to ride a “high horse” on that journey, which is not always true of the risk management book genre. Needless to say, pontificating on a risk topic is an absolute taboo.
Although the title of this book is Project Risk Management, I understand this as “project uncertainty management.” This contradiction is explained by the fact that the purpose of risk management is to reduce overall uncertainty of project outcome. Risks, in the narrow understanding of the term, are just one of several uncertainty factors contributing to overall outcome uncertainty. The term risks in a wider understanding could mean almost everything and nothing. As the term uncertainty is less often used in project management and currently less searchable online, it was decided to keep the word Risk in the book title. Part I of the book begins with a discussion on all main categories of uncertainties (or objects of uncertainty management) that together give rise to overall project outcome uncertainty. Selection of adequate methods for managing a particular category of uncertainty depends on the nature of the challenge. Physicists like to speculate about
a method’s or model’s distance to reality. Many fundamental discoveries and their explanations in physics were made using simple but sufficient analytical techniques well before the age of computers. Even powerful computers that facilitate modeling sometimes make errors and mistakes. A selected risk method should be simple enough to be understandable by practitioners but adequate enough to produce meaningful results. We do try to find that golden mean in this book. The level of practicality and simplicity depends on the particular risk management topic. If the reader finds some topics too simple and some too complicated, it is due to my searching for a robust trade-off between simplicity and adequacy. For example, we will discuss features of robust deterministic methods for the initial identification, assessment, and addressing of project risks in Part II of the book. These are also good for selection of engineering design and procurement options, managing procurement risks, and evaluating cost escalation, being straightforward but quite informative for those tasks. At the same time they are utterly useless for identifying project sensitivities, developing and allocating project reserves, and evaluating the overall cost and schedule uncertainty associated with a project. They have too big a distance to reality for those challenges. In Part III, probabilistic (Monte Carlo) methods are discussed, including information on required inputs and on using results in decision making. I tried to refrain from making their distance to reality too short, in order to avoid excessive complexity. However, it is pointless to promote more sophisticated probabilistic Monte Carlo techniques as a replacement for deterministic scoring methods in all situations. In the same way, quantum mechanics is not required for good old mechanical engineering!
Probabilistic Monte Carlo techniques stemmed from statistical quantum mechanics and came of age in the 1940s in the study of the behavior of neutron populations, which had a certain relevance to the Manhattan Project and its Russian counterpart. Branching out from deterministic to probabilistic methods in risk management resembles the transition from classic Newtonian physics to the quantum physics of Schrödinger et al. Deterministic risk methods usually view uncertainties individually, in isolation from the other uncertainties. Probabilistic methods treat them as a population in which uncertainties, losing their individuality, can be correlated with each other and mapped to baselines to collectively represent overall project risk exposure quantitatively. (This slightly resembles the statistical behavior of the neutron population in a nuclear reactor, for example.) My challenge here is to describe some pretty sophisticated probabilistic methods used in project risk management, including their inputs and outputs, in very simple terms.
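To make that contrast concrete, here is a minimal Monte Carlo sketch of my own (not a model from this book; all names and numbers are invented) in which two hypothetical cost items share a common market driver and are therefore correlated. The percentiles of their sum represent overall cost uncertainty in a way that no deterministic one-point sum can.

```python
import random

def simulate_total_cost(n_trials=20_000, seed=42):
    """Monte Carlo sketch: two hypothetical cost items (in $M) whose
    uncertainties share a common market driver, so they are positively
    correlated rather than independent."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        market = rng.gauss(0.0, 0.10)                  # shared driver, ~10% swing
        item_a = 100.0 * (1 + market + rng.gauss(0.0, 0.05))
        item_b = 60.0 * (1 + market + rng.gauss(0.0, 0.08))
        totals.append(item_a + item_b)
    totals.sort()
    return totals[n_trials // 2], totals[int(0.9 * n_trials)]

p50, p90 = simulate_total_cost()
# The deterministic one-point sum is 160; the gap between P50 and P90
# quantifies the overall uncertainty, widened here by the correlation.
```

The P90 minus P50 gap widens when the shared driver is strong; dropping the shared term for one item would narrow it, which is exactly the population effect that deterministic methods, treating items in isolation, cannot show.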
Hence, my overall goal is for a project practitioner to consider my book valuable despite the fact (or rather because) it is not a comprehensive academic volume. I would not even resent it if an experienced risk manager, seasoned consultant, or sophisticated academician called it rather simple. Let’s keep in mind what Leonardo da Vinci said: “Simplicity is the ultimate sophistication.” To explain the last statement I need to reflect on my background a bit. My teenage skydiving experience aside, my first formal learning of risk management took place as part of the engineering and scientific curriculum at an engineering nuclear physics department. That was a fascinating experience! The level of complexity and sophistication was enormous, to say the least. But what we were taught was constructive simplicity based on the following “two commandments”:

1. Do not come up with a solution that is more sophisticated than is required!
2. Do not come up with a solution that is overly simple and inadequate!

I know that this was an attempt to address certain types of technical risks as well as psychological and organizational bias. It was also an attempt to teach young kids to use what is now called “out‐of‐the‐box” thinking. Whenever the topic of out‐of‐the‐box thinking comes up in conversations with my friends and co‐workers, I usually reply that employment as a scientist implied the full absence of in‐the‐box thinking, which was just part of the job description. There was no box or out‐of‐the‐box thinking at all; there was just independent thinking. My readers should find traces of this in the book. A while ago I asked myself whether, living and working in North America, I should get rid of those lessons. Probably not! Comparing education in Europe and in North America, I cannot help but share my observation: the purpose of education in Europe is knowledge, whereas the purpose of education in North America is immediate action.
(My son’s university curriculum provides additional confirmation of this.) Neither system of education is better than the other. In some cases the North American system is much better, but not always. I feel that a broader knowledge base helps one find optimal solutions that are as simple as possible and as adequate as required. At least it should prevent psychological impulses to buy more sophisticated and expensive hardware and software tools or hire more expensive consultants in fancier ties when a challenge with any degree of novelty is looming. Upon getting my Ph.D. in physics and mathematics in the late 1980s, I felt that I had a big void in my education related to various aspects of business and management. So, my next degree was from Henley Management College.
Henley‐on‐Thames is a lovely and jolly old English town with unbelievably high property prices, which eventually encouraged me to choose Canada as my next destination. One key discovery on my journey to the Henley MBA was the substantial cross‐pollination between mathematics and business. Mathematical methods were broadly used in business for decision making, although often in a relatively naive form. But that naïveté seemed to be justified by a quest for practicality and simplicity of applications. This made me a proponent of informed risk‐based decision making whenever a decision should be made. Building simple but fully adequate decision‐making models still makes my day. Simplicity of applications is what I kept in mind when writing this book. However, the quest for adequacy of the discussed methods defined the exact level of acceptable simplicity. It’s like a simple supply‐and‐demand curve in economics that defines how much is required and at what price. “Everything should be made as simple as possible, but not simpler,” as Albert Einstein used to say. Besides anonymous examples from mega‐projects I took part in, I refer to my pre‐Canadian experience for additional allusions, explanations, stories, and anecdotes. For instance, I use several analogies from physics that seem to be related to risk management. Please do not regard those insights as terribly deep philosophically. However, some of them may shed additional light on the topics of this book for readers with a technical background. Several risk‐related topics are not included in this book for the sake of staying focused. 
For instance, the features of and a detailed comparison between risk management in owner and EPC environments, risk management in business development, integration of project risk management with corporate risk management, probabilistic project economics, process hazard analysis (PHA)/hazard and operability (HAZOP) studies, advanced schedule risk analyses of resource‐loaded schedules, and so on, are not discussed in this book but could become subjects for my next book, which depends on readers’ and editors’ support. As is often done in physics, this book is based on a few first principles. First, the three‐dimensional (3D) nature of risk management is introduced: the vertical (work package–project–business unit–corporation), horizontal (all project disciplines), and in‐depth (partners, vendors, contractors, investors) dimensions. The importance of a fourth dimension (time) is also pointed out. Second, it is shown that to be adequate in risk management we need to talk about uncertainty management, not risk management. Degrees of freedom of uncertainties are introduced, including time. Based on those, a comprehensive list of uncertainty “objects” is formulated to ensure that we do not miss or overlook anything major.
Third, the main external and internal “uncertainty changers” are introduced that should influence and transform project uncertainty exposure in the course of project development and execution. Uncertainty addressing actions are positioned as one of the key internal uncertainty changers and risk management controls. Fourth, each of the identified uncertainty object types needs adequate but constructively simple methods to be managed. A minimal but comprehensive set of the most efficient and adequate methods (both deterministic and probabilistic) is selected. Those are discussed one by one in Parts II and III of the book. Some topics are repeated several times in the book with increasing levels of detail and complexity. So, readers interested in getting to the bottom of things through the layers of information should read the whole book. The corresponding chapters could also be used independently for reference purposes. Part I may be seen as a “helicopter view” of risk management. Parts II and III are devoted to specific deterministic and probabilistic methods. Finally, Part IV provides a simplified “straw man” case study of a hypothetical project, Curiosity, where the key concepts and methods introduced in Parts I, II, and III are demonstrated again, practically showing their power and value. A simplified sample project base estimate, project schedule, risk register, and integrated cost and schedule risk model are introduced to link the deterministic and probabilistic methods overviewed in this book. The case study of Part IV could have been devoted to various types of capital projects, from offshore oil production, to power generation, to LNG, and so on. I decided to develop a simplified case study of a carbon capture and storage (CCS) project for several reasons.
This type of project promotes a “green” approach, has a higher level of complexity, deals with new technologies, includes integration of three subprojects (CO2 capture, pipeline, and geological sequestration), and is characterized by close involvement of external stakeholders, including branches of government. It also has severe commercial risks related to immaturity of the CO2 market and the lack of infrastructure to deliver CO2 for enhanced oil recovery (CO2‐EOR). At the same time, putting aside some obvious features of CCS projects, the uncertainty exposure of a CCS project is comparable to that of any capital project. Similar methods would be used to manage the uncertainties of any type of capital project as those methods are universal. However, the key reason for selecting a CCS project for the case study was that Dr. Gunter, my former co‐worker and a top world expert in CCS, kindly agreed to write a foreword to my book. As I had done risk management for more than one CCS project, his interest facilitated my choice immensely. I highly
appreciate the valuable comments and insights that Dr. Gunter has contributed to the shaping of this book. This book can be used not only by project practitioners but also by instructors who teach courses related to project risk management including PMP certification. To facilitate teaching, additional instructor’s ancillaries can be found on www.wiley.com in the form of PowerPoint presentations. These presentations are developed on a chapter‐by‐chapter basis for all four parts of the book. The information provided in this book is fully sufficient for the development and implementation of a lean, effective, and comprehensive risk management system for a capital project.
Acknowledgments
I thank my family for support in writing this book, which turned out to be a second full‐time job for several months. It deprived us of Christmas holidays, many weekends, and most evenings together. I highly appreciate the valuable contributions that Dr. Bill Gunter has made. It would be a very different book without his support. Although I cannot mention them all, there are dozens of former and current co‐workers whom I would like to thank. They all contributed directly or indirectly to this book and made it possible, sometimes without knowing it. I am grateful to Manny Samson and Martin Bloem, two of the top cost‐estimating specialists in the industry, who shaped my practical understanding of risk management. They set limits on my attempts to be too sophisticated and theoretical. I often recall working together with the prominent scientists Professor Valentine Naish and Corresponding Member of the Russian Academy of Sciences Professor Eugene Turov. I will always remember them and their professional and moral authority. Special thanks go to Doug Hubbard for his practical support in publishing this book. The influence of his excellent books on my writing style cannot be overestimated.
PART ONE
Fundamental Uncertainty of a Project Outcome
In words attributed to Abraham Lincoln, Peter Drucker, Steve Jobs, and several other prominent individuals, the best way to predict the future is to create it. Project development could be understood as an activity to predict the future project outcome through creating it. The role of risk management is to ensure a certain level of confidence in what is supposed to get created as a result.
CHAPTER ONE
Nature of Project Uncertainties
Questions Addressed in Chapter 1
▪ What could be expected as a project outcome?
▪ What factors are behind deviations from the expected project outcome?
▪ Do we really know what we try to manage?
▪ What degrees of freedom do uncertainties have?
▪ What are major uncertainty objects and their changers?
▪ When is a decision really a decision and when is it an opportunity?
▪ Is it really risk management? Or is it actually uncertainty management?
Multiple factors influence the overall project outcome. Their nature and influence depend on how a project is developed and executed, what the project objectives and the expectations of stakeholders are, and so on. It is not possible to manage factors influencing project outcome without properly understanding their definition. Only when all relevant uncertainty elements are pinned down and all factors leading to uncertainty changes
are clearly understood can a minimal set of adequate methods be selected to manage all of them effectively. Those multiple uncertainty elements are called uncertainty objects in this book. Systematic definitions are proposed for all of them from first principles based on the intrinsic nature of project uncertainties along with main factors that change the objects (uncertainty changers). The purpose of these definitions is not to come up with linguistically flawless descriptions of the objects, but to reflect on their intrinsic nature. The degrees of freedom are used to classify various realizations of uncertainties. This formalized systematic consideration, which resembles symmetry analysis of physical systems, is converted to specific and recognizable types of uncertainties and changers that pertain to any capital project.
PHASES OF PROJECT DEVELOPMENT AND PROJECT OBJECTIVES

Phases of project development used in industries vary, as do their definitions. They also differ within the same industry, for instance, between project owners and contractors. We will use a simplified set of project phases that is common in the oil and gas industry in the owner environment (see Table 1.1).

TABLE 1.1 Project Development Phases

Identify: Commercial and technical concept is pinned down; its feasibility is considered proven by the end of Identify.
Select: Several conceptual options are outlined; one is selected for further development by the end of Select.
Define: The selected option is developed, including all baselines; it is sanctioned by the end of Define [final investment decision (FID)].
Execute: The approved project is implemented and completed by the end of Execute.
Operate: After commissioning and startup, the project is in operations during its lifetime and is decommissioned by the end of Operate.

The first three phases are often combined into front‐end loading (FEL). They precede the final investment decision (FID), which is supposed to be made by the end of Define and is a crucial point for any project (no FID, no project’s
future). All project objectives and baselines are supposed to be well developed prior to FID to be reviewed and (hopefully) sanctioned. The main focus of this book is on the phases preceding FID (i.e., on FEL). For this reason, two main project lifecycle periods could be conditionally introduced and told apart: “Now” (FEL) and “Future.” Operate certainly belongs to Future, which could include dozens of years of project lifetime before decommissioning. Execute seems to sit in a gray area since it is the beginning of Future: it starts at FID and doesn’t end until the project is complete. One could imagine the high spirits of a project team, decision makers, and project stakeholders when a project FID is made and announced. The boost in energy, enthusiasm, and excitement following a positive FID is certainly an attribute of a “Now‐to‐Future quantum leap.” After a positive FID a project is likely to have a future. So, decision makers, team members, and stakeholders are interested in knowing what actual characteristics it might acquire upon completion. If we regard project objectives and baselines as a sketchbook put together for FID, how closely would the original (i.e., the project completed in the Future) resemble the sketches done Now? The answer to this question becomes clear in the course of project execution. By the end of Execute there will be a pretty clear picture. The original could appear even more beautiful and attractive than the sketches of the past. This is a sign of the project’s success. But the original could also turn out ugly, with the sketches being quite irrelevant to reality. To continue the artistic analogy, the sketches may be done using various styles and techniques. The variety of styles could resemble anything from cubism, expressionism, and pop art, to impressionism, to realism. (Guess what these styles could mean in project management!)
A “project development style” adopted by a project in FEL depends on many factors: from the maturity of the company’s project development and governance processes and the biases of team members and decision makers, to previous project development experience, to stakeholders’ expectations and activism. But what is even more important is the “project execution style.” Its abrupt change right after FID could make pre‐FID sketches completely irrelevant (see Figure 1.1).
QUEST FOR PREDICTABILITY OF PROJECT OUTCOME

Figure 1.1 represents a concept of a value associated with project definition and execution. The term definition here means all activities related to FEL, not only to the Define phase, whereas the term value could be perceived as an amalgamation of project objectives, baselines, and stakeholders’ expectations compared with the completed project. (In a simplified interpretation it could relate to either project cost or duration.)

[FIGURE 1.1 Project Definition, Execution, Value, and Outcome: project value plotted against time through Identify, Select, Define, FID, Execute, Startup, and Operate for the four combinations of good/poor definition with good/poor execution, illustrating the resulting uncertainty of project outcome.]

According to Figure 1.1, a project value may be characterized by a broad spectrum of outcomes, from unconditional success to complete failure. According to benchmarking data and the definition of project failure by the IPA Institute, a staggering 56% of major projects fail due to
▪ Budget overspending of more than 25%, and/or
▪ Schedule slippage of more than 25%, and/or
▪ Severe and continuing operational problems lasting at least one year.1

Imagine what the failure numbers would be if we used 15% or 20% thresholds instead. Project definition and execution is a battle against multiple factors of uncertainty of the project outcome. Multiple uncertainties and deviations from project objectives should be understood as inputs to project definition and execution that drive overall uncertainty of outcome. Depending on the features of project development and execution, this could be either an uphill or a downhill battle. Accumulated deviations from multiple project objectives and baselines upon
project completion could be both favorable and unfavorable to various degrees. Decision makers, project team members, and stakeholders have a vested interest in the final outcome of a project. Was the project delivered within scope and quality, according to the sanctioned budget and by the approved completion date, or was the discrepancy between baselines and reality appalling? Were changes made during project development and execution? Were they properly taken into account? What was the safety record or environmental impact in the course of project delivery? Has the owner’s reputation suffered? All these questions emphasize the multiple dimensions of project goals and of uncertainty of project outcome. All project disciplines—engineering, procurement, construction, quality, project services, safety, environment, stakeholder management, and so on—take part in shaping corresponding baselines and managing multiple uncertainties at the work package and project levels. Project risk management has a unique positioning, though. It not only evaluates the credibility of all project baselines but must identify and manage deviations from them in all their conceivable realizations due to multiple uncertainties.
SOURCES AND TYPES OF DEVIATIONS FROM PROJECT OBJECTIVES

Multiple uncertainty factors give rise to the overall project outcome and, hence, to deviations from the initially stated project objectives and baselines. A combination of all particular deviations from objectives in the course of project development and execution contributes to the overall uncertainty of the project outcome. Any project objectives or baselines, such as project base estimates, schedules, or engineering design specifications, are models that try to mimic future project reality. As mentioned in the Preface, each such model may be characterized by its distance to reality.2 It would not be an exaggeration to say that those baselines have quite a large distance to reality by default. All of those baselines are developed in a perfectly utopian, uncertainty‐free world. For instance, all costs, durations, or performance parameters are one‐point numbers, implying that they are fully certain! Such a wonderful level of certainty could be achievable only if all stakeholders of a project welcome it North‐Korean style; all subcontractors and suppliers cannot wait to ensure the highest possible quality and just‐in‐time delivery, demonstrating Gangnam‐style excitement; technology license providers and financial institutions greet the project by performing Morris dancing enthusiastically; and regulatory bodies are engaged
in encouraging Russian‐Cossack‐style dancing. It is a nice, utopian picture of a project environment (although those dances more often resemble the daily Indo‐Pakistani border military dancing of carefully choreographed contempt). All those multiple uncertainties give rise to multiple deviations from the utopian risk‐free baselines, shaping the project reality. Let’s introduce a set of standard project objectives in this section and review the reasons for deviations from them that are observed in any capital project. Traditionally, three project objectives have been considered (the triple constraint, iron triangle, etc.):

1. Scope/Quality/Performance
2. Capital expenditure budget (CapEx)
3. Schedule

These three objectives imply constraints on each other to exclude apparent dichotomies, as fast delivery of a good project cheaply is not quite possible. First, there should be a reason for undertaking a project. It should bring up a certain level of utility in Operate. The usefulness of a capital project could relate to characteristics of a structure to be constructed (a building of a certain size and purpose, a bridge of expected load rating, etc.) or to performance of a production facility of a certain production output (barrels of oil or cubic feet of natural gas produced and/or processed per day, tons of fertilizer produced per month, tons of carbon dioxide captured and sequestered in an aquifer per year, etc.). Both structures and production facilities should be durable enough in the course of their operations, which brings up the topics of reliability, availability, and maintainability.3 A budget for operating and maintaining a facility (operating expenditure budget, OpEx) should be reasonable economically to support the required level of reliability and availability and, hence, planned operating income. The Scope/Quality/Performance objective is the first of the classic triple constraints.
There are many uncertainty factors that could lead to deviations from this objective. Second, a structure or facility of concern should be delivered according to an approved capital expenditure budget (CapEx). A base estimate of a project takes into account all required expenses to deliver it. Needless to say, the accuracy of the base estimate depends on the phase of project development. The level of engineering development is a major driver of accuracy for every cost account. For instance, when developing a concept design of a production facility, the amount of pipes of small (few‐inch) diameter is not quite clear. To estimate inevitable additional expenditures that will be justified later, corresponding
design allowances are used to address uncertainties related to material takeoff. If a project is unique and adopts a new technology, the required design allowances could be quite high. In the case of repetitive projects they could be just a few percentage points of the corresponding base estimate cost accounts. Each work package could have an estimate of a specific accuracy level in a given phase. Besides the level of engineering development, the accuracy of a package estimate strongly depends on the progress of procurement activities, with the highest accuracy expected when prices are locked in. Nobody doubts that there will be a price of, say, construction materials in the base estimate of a given project. (Formally speaking, the probability of the presence of this cost in the base estimate is exactly 100%.) It is usually questionable which particular price it will be. Moreover, if several estimators are asked to prepare an estimate of a project or its work packages independently, no doubt there will be a visible spread in their numbers. Differences in the estimate numbers could be explained by variations in the historic data and methods used for estimating as well as by the personal experience and qualifications of the estimators. Some of them are more aggressive (optimistic) by nature and some are more prudent (pessimistic). The latter example is one of the realizations of psychological bias in project management discussed in this book, and is also a realization of general uncertainty. A base estimate is normally developed in the currency of a base period. One dollar (or ruble, yuan, dinar, yen, euro, and so on) today won’t have the same purchasing power several months or years from now when estimated items are actually purchased. The issue here is not just general inflation, which is normally equal to a few percentage points per year in North America.
General inflation, which is usually measured as the consumer price index (CPI), has almost nothing to do with future prices of line pipes or structural steel, or the cost of labor in Texas or Alberta. For example, prices of several types of line pipes grew 20–40% in 2008 before dropping drastically by year‐end. Supply–demand imbalances in particular industry segments could exceed CPI manifold. Cost escalation is a special type of general uncertainty that could be predicted relatively adequately using the correct sets of macroeconomic indexes. Obviously, prices do not rise indefinitely. Drops in prices of materials, equipment, and services used by capital projects could be significant. Cost de‐escalation could be predicted and used for selecting the right purchasing time. These last two statements could generate some skeptical remarks. To “predict” here means to forecast the general upward or downward trend of prices in a particular segment, not an absolute level of prices. For instance, existing escalation models for line pipes based on the right macroeconomic indexes forecast annual
growth of around 10–15% in the first three quarters of 2008. That was much less than the actual growth, but informative enough for sober decision making. If some materials or services are purchased abroad, currency exchange rate volatility causes additional uncertainty in the CapEx. It could be managed similarly to cost escalation/de‐escalation. The capital expenditure budget (CapEx) objective is the second of the three traditional constraints. As a project is supposed to be delivered not only on Scope and on Cost but on time, too, the third classic constraint is Schedule, meaning project duration. A schedule developed as a project baseline has a completion date, not a range of dates. Similarly, all normal activities of the schedule have unambiguous one‐point durations. Is it too bold to state certainty like this? Of course, actual durations of most (if not all) of the normal activities will differ from the model. The Schedule is just a model that has its own distance to reality, too. The real project completion date will differ from the planned one. I recall several occasions when mega‐projects developed behind the Iron Curtain were completed right as declared, which had deep political and reputational meaning back then. All of those projects were complete disasters in terms of CapEx, though, and especially Scope/Quality/Performance. My recent experience points to the fact that this can happen on both sides of the former Iron Curtain. Again, the level of project development (especially engineering and procurement development) as well as the methods and data used for planning are major drivers of general uncertainties of duration. We know there will be a duration for a particular normal activity. We just don’t know exactly what that duration will be. Similar to estimating, if several schedulers take on schedule development independently, the durations of normal activities and overall project durations will not be the same, bringing up corresponding spreads.
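The escalation forecasts discussed earlier in this section reduce, arithmetically, to compounding forecast index growth onto a base-period cost. A sketch of that arithmetic only (the account and rates are invented for illustration; real escalation models rest on sets of macroeconomic indexes):

```python
def escalate(base_cost, annual_rates):
    """Compound forecast escalation rates onto a base-period cost.
    base_cost: estimate in base-period currency; annual_rates: forecast
    year-over-year growth of the relevant industry index (not CPI).
    Negative rates model de-escalation."""
    cost = base_cost
    for rate in annual_rates:
        cost *= 1 + rate
    return cost

# Hypothetical line-pipe account: $10M in base-period dollars,
# with 12% and then 8% forecast index growth over two years.
escalated = escalate(10.0, [0.12, 0.08])   # about 12.1
```

The same compounding with a negative rate gives the de-escalated cost used for timing a purchase; the value of the forecast lies in the direction and rough magnitude of the trend, not in the exact number.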
Again, different historic information and methods could be applied by different schedulers, who themselves could have different levels of experience, expertise, and aggressiveness. Moreover, the schedule logics proposed by different schedulers could differ. It is obvious that if an engineering or construction team is deployed to deliver a particular project, there will be costs incurred in the case of schedule delays. Construction crews, engineering teams, rented equipment or buildings, and so on should be paid for whether they work or stay idle. This brings up an additional general uncertainty factor that should be taken into account to assess schedule‐driven costs, or rather, schedule‐delay‐driven cost impacts. We will use “burn rates” that may be treated as general uncertainties to integrate probabilistic cost and schedule analyses.
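The burn-rate mechanism just described can be sketched in sampling terms (my own illustration; the delay distribution and burn-rate range are invented): a sampled delay multiplied by a sampled burn rate yields the schedule-delay-driven cost impact that an integrated cost and schedule model adds on top of the base estimate.

```python
import random

def mean_schedule_driven_cost(n_trials=10_000, seed=7):
    """Sample a possible schedule delay (months) and a burn rate ($M per
    month for idle crews, rented equipment, etc.) and return the mean
    schedule-delay-driven cost impact in $M."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        delay = max(0.0, rng.gauss(2.0, 1.5))   # months; negatives mean no delay
        burn_rate = rng.uniform(0.8, 1.2)       # $M per idle month
        total += delay * burn_rate
    return total / n_trials

mean_impact = mean_schedule_driven_cost()
# Roughly two months of delay at about $1M/month of burn rate.
```

Treating the burn rate itself as a range rather than a single number is what lets the cost and schedule analyses be integrated probabilistically instead of bolted together after the fact.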
The deviations from the three classical project objectives described earlier are actually part of “business as usual,” being attached to the Scope, Cost, or Schedule baselines. No unexpected factors or events that are not part of the baselines are contemplated. However, in place of precise baseline one‐point values we actually assume ranges of those values. In an attempt to reflect this ambiguity, project engineers, estimators, and schedulers introduce the most reasonable or likely representatives of those scope, cost, and schedule ranges as one‐point values surrounded by an entourage of the other values from the ranges. Obviously, the spread associated with the existence of the entourage relates to an ambiguity of possible impacts on objectives, and not to probabilities of the impacts. There is no uncertainty associated with the impact’s existence, which is absolutely certain. Let’s call those one‐dimensional uncertainties, which have impact ambiguity only, general uncertainties. However, myriad unplanned events might occur in a real project. If they occur, those could be seen as “business unusual” as they are not part of the baselines at all. Probabilistic methods allow one to take those uncertain events into account, but outside of the baselines, by attaching probabilities of occurrence to them. Project delays related to the permitting process, or additional requirements related to protection of the environment that are imposed as conditions of project approval by the government, are examples of uncertain events that impact the Scope, Cost, and Schedule objectives. Their exact impacts and probabilities stay uncertain. Being associated with business unusual, they have not one but two dimensions of uncertainty: uncertainty of impact and uncertainty of occurrence. Let’s call the business‐unusual events uncertain events. Uncertainty of impact means general uncertainty; hence, general uncertainty merged with uncertainty of likelihood gives rise to an uncertain event.
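The distinction just drawn can be expressed in sampling terms. In the hypothetical sketch below (distributions and numbers are invented), a general uncertainty always hits and only its impact is ambiguous, while an uncertain event must first pass its game of chance:

```python
import random

def sample_general_uncertainty(rng):
    """One dimension of uncertainty: the impact always occurs (100%
    likelihood); only its size is ambiguous ($M, triangular range)."""
    return rng.triangular(0.8, 1.5, 1.0)         # low, high, mode

def sample_uncertain_event(rng, probability=0.05):
    """Two dimensions of uncertainty: a game of chance on occurrence,
    then an ambiguous impact ($M) if the event does happen."""
    if rng.random() < probability:               # uncertainty of occurrence
        return rng.triangular(2.0, 8.0, 4.0)     # uncertainty of impact
    return 0.0

rng = random.Random(1)
n = 50_000
general_mean = sum(sample_general_uncertainty(rng) for _ in range(n)) / n
event_mean = sum(sample_uncertain_event(rng) for _ in range(n)) / n
# general_mean is near (0.8 + 1.5 + 1.0) / 3 = 1.1; event_mean is near
# 0.05 * (2 + 8 + 4) / 3, i.e. about 0.23.
```

Setting `probability=1.0` collapses the uncertain event into a general uncertainty, which mirrors the subclass relationship discussed in the text.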
Mathematically, general uncertainties may look like a subclass of uncertain events whose probability reaches 100%. Technically, this is correct. However, our previous discussion of "business as usual" versus "business unusual" requires that we tell them apart. Philosophically speaking, when an uncertain event loses one dimension of uncertainty and becomes absolutely certain to happen (a 100% chance), the whole nature of the uncertainty is redefined, as the "game of chance" is over. It is like gambling in Las Vegas and knowing for sure that you will beat the wheel every time it spins; only the size of each win would be unclear. How could a casino profitably operate like that?

What technically happens when an uncertain event actually occurs? In place of a probability of a possible deviation from project objectives equal to, say,
5%, the deviation suddenly becomes a reality. It becomes a fact or a given, with a probability of 100%, as no game of chance is involved anymore. This leads directly to an impact on project objectives. As an uncertain event loses one degree of freedom of uncertainty, it becomes a general uncertainty if its impact is not yet fully understood. When the impact of that former uncertain event eventually becomes known, it becomes an issue and redefines the project baselines. An issue has lost both characteristics of uncertainty: it is certain in terms of both likelihood (100%) and impact (a one-point value with no range).

If an uncertain event has occurred, it is too late to prevent it proactively. The only way to address it is to try to screen out its impacts on project objectives through recovery actions. This should be considered part of reactive crisis management. Ideally, those reactive actions should be planned beforehand for cases in which risk prevention does not work. Crisis management is usually more costly than preventive actions: an ounce of prevention is worth a pound of cure!

Imagine a situation where an uncertain event characterized by a probability of occurrence has a clear impact without any impact range. It would belong to the category of discrete uncertain events: the impact on, or deviation from, objectives is fully predictable. For instance, in the case of circus equilibrists working without a safety net, the probability of failure is quite low, although the possible safety impact is rather certain.

Some uncertain events could have very low probabilities and extremely high impacts, so they cannot be viewed as moderate deviations from project objectives. Their possible impacts might be comparable in magnitude with the baselines themselves.
Being in the same or a higher "weight category" than the baselines, natural disasters, some force majeure conditions, changing regulations, opposition to projects, general economic crises, and so on could lead to devastating scope changes, critical cost increases, and knockout schedule delays. In a sense, such events destroy the initial baselines or fully redefine them. For instance, if a pipeline project with an initial duration of two years was delayed for three years due to opposition and strict environmental requirements, all of its initial baselines should be redone. Moreover, the project owner might simply cancel the project and pursue other business opportunities. Eventually, if approved and moved forward, it will be a different project with a different scope, cost, and schedule. Uncertain events like this are called "show-stoppers" (a knockout blow to objectives), "game changers" (a knockdown blow to objectives), or "black swans" among risk practitioners.4 Despite the fact that these belong to enterprise or corporate risk management (ERM), they should be identified, monitored, and reported by the project teams. A project team cannot effectively manage events like this, unless a sort of insurance coverage is contemplated in some cases. The owner's organization should assume ownership of the show-stoppers and game changers in most cases. For this reason, such supercritical project uncertain events are also called "corporate risks."

Using an analogy from physics, we may compare baselines with the base states of a physical system. For instance, the crystal lattice of solid matter at absolute zero represents its base state. If the temperature slightly increases, linear vibrations of atoms around their equilibrium positions take place. These independent small deviations from the base state are called phonons. The deviations are relatively small and do not disrupt the base state. We may conditionally compare these linear vibrations with general uncertainties. However, if the temperature rises further, new nonlinear effects occur when phonons are examined as a population or superposition of quasi-particles. Their collective behavior acquires clear statistical features, and in some situations this behavior modifies the characteristics of the solid. We may compare this with correlated uncertain events. If the temperature increases further, the energy of the phonon population, which by now exhibits clearly nonlinear behavior, could be enough to change the base state: a phase transition occurs, leading to a change of crystal lattice type. Isn't this a game changer? And if the temperature reaches the solid's melting point, there is no solid matter anymore. This sounds like a show-stopper in the case of corporate risks. Moreover, some impurities or defects of a crystal lattice that are not expected in ideal solid matter could be compared with unknown uncertainties.
We don't want to exaggerate the depth and significance of this analogy between solid-state physics and project risk management. However, it could be informative for readers with a technical background.

Besides the traditional triple-constraint objectives, organizations use additional project constraints these days. For example, a project might be delivered on Scope, on Cost, and on Schedule, but result in multiple fatalities, have a devastating environmental impact, and ruin the organization's reputation. Concerning fatalities, a polluted environment, or a devastated reputation, should an organization be ready to ask, "How much should we pay to compensate for the environmental, safety, and reputation risks that occurred?"5 Or should it rather manage the corresponding "soft" objectives, and deviations from them, consistently? The following "soft" objectives are widely used, or should be used, these days to make risk management more comprehensive:
▪ Safety
▪ Environment
▪ Reputation

The Safety objective will be understood as health and safety (H&S) in this book in most cases. These soft objectives are normally treated as additional "goal-zero" types of constraints: no fatalities or injuries, and no negative impacts on Environment and Reputation. A longer list of additional constraints could be adopted by a project to also reflect stakeholder management, legal objectives, profit-margin goals, access to oil reserves or new contracts, and so on. Those objectives are more often used at the corporate level. Deviations from these goal-zero-type objectives might also be realized as both general uncertainties and uncertain events. As in the case of the Scope, Cost, and Schedule objectives, these could become "corporate risks."

There is an obvious inconsistency in the terminology of modern risk management. First, the word risk means something unfavorable. However, when risk is used in risk management it covers both upside (favorable) and downside (unfavorable) deviations from objectives. Second, use of the term risk implies some probability of occurrence, even according to the ISO 31000 standard.6 However, general uncertainties have certainty of occurrence. To fully resolve these two inconsistencies we have to presume the following four categories:

1. Downside uncertain events
2. Upside uncertain events
3. Downside general uncertainties
4. Upside general uncertainties

All deviations from project objectives discussed so far are understood as known. They are normally identified and put on a project's radar screen. We know that they should be part of risk management activities due to one or both dimensions of uncertainty. In other words, their relevance to a project is believed to be fully certain. The bad news is that some deviations could be overlooked and would not appear on the risk management radar screen at all (at least until they occur).
Following the fundamental observations of the modern philosopher Donald Rumsfeld, we cannot help introducing the unknown–unknown part of his picture of the world (see "Unknown Unknowns"). Those stealth types of objects may be called unknown unknowns. We will call them unknown uncertainties.
UNKNOWN UNKNOWNS

"There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know."

Donald Rumsfeld
Discussing various types of uncertainties earlier, we implicitly assumed that unbiased workers with plenty of time and resources identify, assess, and address risks using ideal risk methods that they fully understand. Unfortunately, project teams usually work under a severe time crunch; they are normally understaffed; workers have different perceptions of possible risks depending on their previous experience and background; and they have various levels of training in and understanding of risk methods. Moreover, some of them could have an interest in the identification (or non-identification) of particular risks and in particular assessments and addressing. Even more significant, a company might not have adequate risk methods in its toolbox at all.

The various systematic inconsistencies outlined earlier are referred to in risk management as bias. Bias could stem from both the organizational and the psychological aspects of risk management. The main types of bias will be discussed in Part II. All of them could have an impact on the identification and assessment of general uncertainties and uncertain events by introducing systematic identification and assessment errors.7 Some of them may even lead to overlooking uncertainties during identification, turning them into the infamous unknown unknowns. The room left for unknown uncertainties could serve as a measure of the quality of a project risk management system.
KEY OBJECTS OF RISK (OR UNCERTAINTY) MANAGEMENT: DO WE REALLY KNOW WHAT WE TRY TO MANAGE?

The previous section brought up quite a few possible deviations from project objectives that define the overall uncertainty of project outcome. Surprisingly, discussion of definitions of terms such as risk, opportunity, threat, chance, possibility, uncertainty, ambiguity, issues, consequences, likelihood, and so on can be found in many modern publications on risk management and in professional forums. Implicit links among these terms, as well as nuances of their meanings in various languages, are still being discovered and fathomed. It seems that all those discussions have a linguistic nature that does not necessarily have a lot to do with the fundamental nature of uncertainty. They are all signs of a lack of solid structure and logic in risk management as a discipline. Do representatives of more mature disciplines, such as mechanical engineering, discuss fundamental differences between compressor and pump stations? It seems to me that having English as a second language is a benefit that justifies ignoring all those nuances. It also seems that project team members and decision makers, even native English speakers, are not very interested in those nuances, either. At the same time, a need to apply first principles related to the fundamental nature of uncertainty to clarify all those definitions has been on my mind for a while.

The recent ISO 31000 standard was an attempt to draw the line under these endless and often futile discussions. As a general guideline supposedly applicable to any industry, the ISO 31000 standard does not provide, and is not expected to provide, a toolbox to adequately assess and manage all the types of uncertainties that usually give rise to overall uncertainty of project outcome. Some concepts, definitions, and tools required for this are either missing or confusing. For instance, the standard defines risk as a potential event, as well as the effect of uncertainty on objectives, characterized by consequences and likelihood of occurrence. This is a core definition that I fully support, but only in the case of uncertain events.
What if there is no event but there is an uncertain impact on project objectives? In other words, what if a given (not potential) uncertainty exists only due to a deficiency of knowledge about its impact on objectives, the likelihood of that impact being 100% certain? What if the probability of an event is uncertain but its possible impact is fully and clearly defined? What would be the classification and role of uncertain events that did occur (issues)? How about the various types of bias and unknown uncertainties? Do we need an ISO 31000-A standard to manage additional uncertainties like these?

This section is devoted to the introduction of a comprehensive set of practical definitions that can be used in modern project risk management regardless of whatever names or tags we attach to them. A key incentive is to come up with a clear-cut set of basic notions that can be utilized in the practice (not the theory) of risk management of capital projects. Hence, they could be
used by project teams and decision makers as an international risk management lingo. We don't intend to develop definitions that fit all risk methods, all industries, or all types of (past, current, and future) projects, or that comply with some "Risk Management Kremlin's" requirements. There is no intent to dogmatize risk management of capital projects, either.

Let's call these definitions uncertainty objects. All of them have already been introduced indirectly, in a narrative way, in the previous section. They will be meaningful regardless of the linguistic labels we attach to them. We will keep them in our toolbox not because of the labels and nuances related to them, but because of the utility and consistency they provide when waging war against uncertainty of project outcomes. Table 1.2 recaps the discussion of the previous section in a structured way. (The reader may change the labels of the objects to his or her taste, or according to his or her native language, as long as their fundamental nature and role are well understood.) The contributions to project cost and schedule reserves mentioned in Table 1.2 are discussed in Chapters 12 and 14.

All the new objects are adequately described by combinations of words that contain the word uncertainty, not the word risk. The term risk plays a rudimentary role in Table 1.2. In a way, the term risk becomes irrelevant. (What an achievement, given the title of this book!) We may use a physics analogy describing the relations among molecules and atoms to better understand what happened. In place of the traditional object risk ("a molecule") we have new objects ("atoms"). Those new objects are defined by possibilities of upside or downside deviations, in the case of either general uncertainties or uncertain events, which could be known (identified by a project team) or stay unknown until they occur. To further follow the physics analogy, we may define three degrees of freedom associated with an uncertainty:

1. Uncertainty of impact (general uncertainty) versus uncertainty of impact and likelihood (uncertain event)
2. Downside (unfavorable) deviation versus upside (favorable) deviation
3. Known (identified) uncertainty versus unknown (unidentified) uncertainty

These three degrees of freedom generate eight main uncertainty types, as Figure 1.2 implies:

1. General Uncertainty (GU)—Downside (↓)—Known (K) {GU↓K}
2. General Uncertainty (GU)—Upside (↑)—Known (K) {GU↑K}
TABLE 1.2 Key Uncertainty Objects

Uncertainty Type: N/A
  Uncertainty Object: Project Baselines
  Deviation From: N/A
  Upside or Downside? N/A
  Comments: Hypothetical set of one-point values that are proclaimed fully certain to represent uncertainty-free project objectives.

Uncertainty Type: Given [probability: certain; impact: certain]
  Uncertainty Object: Issue
  Deviation From: Any objective
  Upside or Downside? ↑↓
  Comments: Issues redefine baselines. Issues appear when a general uncertainty becomes certain in terms of impact or an uncertain event occurs and becomes certain in terms of both probability and impact. An issue can be associated with either upside or downside deviation depending on the nature of the realized uncertainty.

Uncertainty Type: Known (or Unknown) General Uncertainty [probability: certain; impact: uncertain]
  Uncertainty Object: Design Uncertainty
  Deviation From: CapEx, Scope
  Upside or Downside? ↑↓
  Comments: Design allowance is used to address design uncertainty. Normally it is not part of risk management, although it should be kept in mind when assessing the other objects.

  Uncertainty Object: General Cost Uncertainty
  Deviation From: CapEx
  Upside or Downside? ↑↓
  Comments: Applicable to each estimate's cost account. Contributes to project cost contingency.

  Uncertainty Object: General Duration Uncertainty
  Deviation From: Schedule
  Upside or Downside? ↑↓
  Comments: Applicable to each normal activity. Contributes to project schedule reserve.

  Uncertainty Object: Cost Escalation
  Deviation From: CapEx
  Upside or Downside? ↑↓
  Comments: Taken into account through escalation reserve; in case of de-escalation could be used for selection of purchasing time.

  Uncertainty Object: Currency Exchange Rate Uncertainty
  Deviation From: CapEx
  Upside or Downside? ↑↓
  Comments: Taken into account through exchange rate reserve. Could be avoided/transferred through hedging.

  Uncertainty Object: Schedule-Driven Costs
  Deviation From: CapEx
  Upside or Downside? ↓
  Comments: Distribution of completion dates and associated range of schedule delays converted to extra costs; taken into account through application of burn rates in integrated probabilistic cost and schedule risk models as a cost general uncertainty; gives rise to project cost reserve. Burn rates could be considered auxiliary general uncertainties.

  Uncertainty Objects: Organizational Bias, Subconscious Bias, Conscious Bias
  Deviation From: Any objective
  Upside or Downside? ↑↓
  Comments: Systematic errors in identification and assessment of all project uncertainties. Due to human nature and business realities, bias pertains to all project teams to a certain degree as a general uncertainty. In some cases it may lead to upside deviations from project objectives. Various anti-bias and calibration methods should be in place to suppress these systematic errors.

Uncertainty Type: Discrete Uncertain Event [probability: uncertain; impact: certain]
  Uncertainty Object: Discrete Uncertain Event
  Deviation From: Any objective
  Upside or Downside? ↑↓
  Comments: (Known or unknown) uncertain events with clearly predictable one-point impacts on objectives could be singled out as discrete uncertain events. Known discrete uncertain events of cost impact contribute to project cost risk reserve along with uncertain events.

Uncertainty Type: Known Uncertain Event [probability: uncertain; impact: uncertain]
  Uncertainty Objects: Downside Uncertain Events (↓) and Upside Uncertain Events (↑)
  Deviation From: Any objective
  Comments: These two categories are the traditional subjects of risk management, called risks, or separately threats and opportunities, in risk jargon. Known uncertain events of cost impact give rise to project cost risk reserve.

  Uncertainty Object: Unacceptable Performance and LDs
  Deviation From: CapEx, Scope, Quality/Performance, Reputation
  Upside or Downside? ↓
  Comments: These could be standalone downside uncertain events depending on corporate financial reporting. If the Quality/Performance objective is not met upon project completion, performance allowance would be used to address poor performance (performance warranty). In case some clauses stipulated by a project contract (consequential damages, schedule delays, turnover of key personnel, etc.) are not obeyed, liquidated damages may be incurred.

  Uncertainty Object: Known Show-Stopper/Game Changer
  Deviation From: Any objective
  Upside or Downside? ↓
  Comments: Probability of occurrence is usually very low and impacts are devastating, destroying (show-stopper: knockout impact) or drastically redefining (game changer: knockdown impact) baselines. Considered part of corporate risk management and not included in project reserves.

Uncertainty Type: Unknown Uncertain Event [existence: uncertain]
  Uncertainty Object: Unknown Downside Uncertain Event
  Deviation From: Any objective
  Upside or Downside? ↓
  Comments: Stays unidentified until it occurs; new technology and geography, as well as various types of bias, are major sources. Project changes (baseline redefinition), if not adequately managed, could also be a source. May be covered by a special cost or schedule unknown–unknown reserve (allowance).

  Uncertainty Object: Unknown Upside Uncertain Event
  Deviation From: Any objective
  Upside or Downside? ↑
  Comments: Stays unidentified until it occurs; new technology and geography, as well as various types of bias, are major sources.

  Uncertainty Object: Unknown Show-Stopper/Game Changer
  Deviation From: Any objective
  Upside or Downside? ↓
  Comments: Being a type of unknown, stays unidentified until it occurs; new technology and geography, as well as various types of bias, are major sources. Project changes (baseline redefinition), if not adequately managed, could also be a source. Probability of occurrence is usually very low and impacts are devastating, destroying (show-stopper: knockout impact) or drastically redefining (game changer: knockdown impact) baselines. Considered part of corporate risk management.
3. General Uncertainty (GU)—Downside (↓)—Unknown (U) {GU↓U}
4. General Uncertainty (GU)—Upside (↑)—Unknown (U) {GU↑U}
5. Uncertain Event (UE)—Downside (↓)—Known (K) {UE↓K}
6. Uncertain Event (UE)—Upside (↑)—Known (K) {UE↑K}
7. Uncertain Event (UE)—Downside (↓)—Unknown (U) {UE↓U}
8. Uncertain Event (UE)—Upside (↑)—Unknown (U) {UE↑U}
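The eight labels follow mechanically from the three binary degrees of freedom. As a sketch (mine, not the book's), the combinations can be enumerated in a few lines of Python:

```python
from itertools import product

# The three binary degrees of freedom from the text. The iteration order
# matches the book's numbered list: direction varies fastest, then
# known/unknown, then general uncertainty vs. uncertain event.
natures = [("General Uncertainty", "GU"), ("Uncertain Event", "UE")]
knowledge = [("Known", "K"), ("Unknown", "U")]
directions = [("Downside", "↓"), ("Upside", "↑")]

types = [
    f"{n_name} ({n})—{d_name} ({d})—{k_name} ({k}) {{{n}{d}{k}}}"
    for (n_name, n), (k_name, k), (d_name, d) in product(natures, knowledge, directions)
]

for i, label in enumerate(types, start=1):
    print(f"{i}. {label}")  # prints the eight types, {GU↓K} through {UE↑U}
```

Two options per degree of freedom give 2 × 2 × 2 = 8 types, one per cell of the cube in Figure 1.2.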
Moving from molecules to atoms represented a natural path of science, as accumulated knowledge allowed one to better fathom the objects of Nature and describe them more adequately and in more detail. The same thing happens when we better understand what we used to call "risk." One may wonder about the placement of some additional or missing objects, such as issues and discrete uncertain events, in Figure 1.2. Those were discussed in the previous section and mentioned in Table 1.2. We might make another step "from atoms to electrons, protons, and neutrons" on the way to higher complexity and better understanding. This would double the size of the cube in Figure 1.2. As a result, in place of two categories describing probabilities and impacts we should get four (see Figure 1.3):

1. Issue (probability: certain; impact: certain)
2. General uncertainty (probability: certain; impact: uncertain)
3. Discrete uncertain event (probability: uncertain; impact: certain)
4. Uncertain event (probability: uncertain; impact: uncertain)
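The four categories are fully determined by whether probability and impact are certain. A minimal sketch (the function and its name are mine, for illustration only):

```python
def classify(probability_certain: bool, impact_certain: bool) -> str:
    """Map the two certainty dimensions to one of the four objects above."""
    if probability_certain and impact_certain:
        return "Issue"
    if probability_certain:
        return "General Uncertainty"
    if impact_certain:
        return "Discrete Uncertain Event"
    return "Uncertain Event"

print(classify(True, True))    # Issue
print(classify(True, False))   # General Uncertainty
print(classify(False, True))   # Discrete Uncertain Event
print(classify(False, False))  # Uncertain Event
```

The transformations marked by the arrows in Figure 1.3 correspond to one of these booleans flipping from uncertain to certain, e.g., an uncertain event that occurs and whose impact becomes known turns into an issue.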
FIGURE 1.2 Three Uncertainty Degrees of Freedom

[Cube diagram; its three axes are: Downside Deviation (↓) versus Upside Deviation (↑); General Uncertainty (GU) versus Uncertain Event (UE); Known Uncertainty (K) versus Unknown Uncertainty (U).]
The addition of extra objects generates a bit too much complexity, which could overshoot the purpose of this book; let's leave further increases in complexity to academic studies and books. To keep things simple, let's treat an issue as the limiting realization of a general uncertainty in which the impact uncertainty vanishes. Similarly, a discrete uncertain event should be understood as the limiting realization of an uncertain event in which the possible impact is fully certain. This convention allows one to avoid adding extra objects to Figure 1.2, which is informative enough. At the same time, all the major objects of risk management are comprehensively defined from first principles by Figures 1.2 and 1.3. Moreover, the arrows in Figure 1.3 represent possible transformations among the discussed objects in the course of project development and execution. To be consistent, let's summarize this discussion through the formal introduction of four degrees of freedom of uncertainties as follows:
[FIGURE 1.3: Uncertain impact objects. General Uncertainty: range of impacts, probability 100%; Uncertain Event: range of impacts, probability below 100%.]

[...]

(...99% or so.) So, the selected probability ranges of Figure 3.2 are as follows:
▪ Very low: < 1%
▪ Low: 1–20%
▪ Medium: 20–50%
▪ High: 50–90%
▪ Very high: > 90%
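These ranges map directly onto the 1 to 5 probability scores used by the RAM. A sketch of the lookup (the boundary convention, assigning each cutoff to the higher category, is my assumption; the book leaves it open):

```python
def probability_score(p: float) -> int:
    """Return the 1-5 probability score for a probability given as a fraction."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p < 0.01:
        return 1  # Very low: < 1%
    if p < 0.20:
        return 2  # Low: 1-20%
    if p < 0.50:
        return 3  # Medium: 20-50%
    if p < 0.90:
        return 4  # High: 50-90%
    return 5      # Very high: > 90%

print(probability_score(0.35))  # 3 (Medium)
```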
Am I fully satisfied with the selected probability ranges? Not necessarily. But at the end of the day, the ranges should be selected in a way that the project team members are comfortable with. On one hand, they will inevitably reflect some types of organizational bias and the organization's risk appetite. On the other hand, as soon as they are established they will generate a good deal of consistency, although that consistency will persistently include all the systematic errors introduced by the selection of the particular RAM's probability ranges. Those ranges could be amended after
the first run of risk assessments, similarly to the impact ranges, although everyone understands that all those ranges are just a convention. They should not be taken too precisely. More precise evaluations of probabilities and impacts will be undertaken for risks with Cost and Schedule impacts when running probabilistic risk analyses.

Whatever the ranges selected for each category of impacts on project objectives and probabilities, each range gets a corresponding score. As we consider only 5 × 5 RAMs in this book, there should be five scores, 1 to 5, for both impacts and probabilities. The expected value notion routinely used in statistics (a product of impact and probability) could be conditionally applied to the probability and impact scores, too; the product could be understood as the expected value score or consequence score. Mathematically these terms sound terrible, but they are quite useful in practice. The resulting scores of 1–25 represent the 25 RAM cells.

The standard approach is to group risks belonging to those cells into three or four risk-level or risk-severity categories based on ranges of scores. Common practice is to associate these three or four groups with colors: red, yellow, and green for a three-color code, and red, amber, yellow, and green for a four-color code. Obviously, the four-color code brings more resolution to the project RAM. However, there is some doubt that a higher level of resolution is fully justifiable in a qualitative method like this, so we stay with a three-color code in this book. As an example, I have observed five categories of risks with five introduced colors. This is apparent overshooting in terms of precision of risk assessment.
One aesthetic advantage of that five-color approach was that the RAMs and corresponding risk registers looked colorful and picturesque and seemed very "sophisticated, significant, and important."

Using an artistic analogy to define the applicability of RAMs for assessment of project risks, a set of objectives could be understood as a simplistic project sketch drawn using project baselines. If only one or two baselines were used, the project would not be adequately depicted. If a dozen baselines were used, the sketch would become too busy. As discussed in Chapter 2, five to seven objectives is an optimal number. However, due to all the possible uncertainties associated with the objectives, this sketch will get blurred. Namely, uncertainties of all possible sorts and degrees will impact the baselines: in place of black-and-white baselines there will be blurred colored lines. Using the color semiotics analogy further, hues of green should mean benign deviations from the initial sketch, whereas hues of yellow and especially red should require special attention. A simple semiotics code of three colors is used in risk management to reflect the severity of project uncertainties and deviations from baselines.
Another reason I introduce this color analogy is that none of the illustrations in this book are in color (except the book jacket, which depicts a four-color RAM for show purposes), although risk management extensively uses a color code for risk ranking. Three shades of gray are introduced in Figure 3.2 in place of the red, yellow, and green RAM cells. Accordingly, three categories of uncertainty scores, called levels, severities, or colors, may be introduced for the three-color RAM (see Table 5.1). The four ranking parameters (the columns of Table 5.1) defining the three categories (the rows of Table 5.1) are virtually synonyms. Some people are more artistic and prefer more drama (Severity) and color (Color Code) when ranking risks. I prefer risk Scores and their Levels for ranking, although all four parameters are used interchangeably in this book.

Short descriptions of the various deviations from objectives included in the RAM in Figure 3.2 require further clarification. Similar to the description of project objectives, definitions of the degrees of deviation should be included in a project risk management plan. This brings up a fundamental rule based on the assessment of impacts on multiple objectives: risks belonging to the same ranges of probability and impact are fully comparable and commensurate with each other. For instance, if one risk has medium probability and high impact on Schedule and another has medium probability and high impact on Reputation, these two risks are considered to be at the same level or severity. This allows apples-to-apples ranking of uncertainties impacting different objectives, which would not be possible without a multi-objective RAM. This implies another rule for assessing overall level or severity: if an uncertainty is identified as having impacts on several objectives, the top impact is counted to represent the uncertainty as a whole.
For instance, if an uncertainty is characterized as having low impact on Schedule, high impact on Reputation, and very high impact on Safety, the very high impact on Safety defines its overall severity, after taking its probability of occurrence into consideration. This could be an uncertain event with a Safety impact leading to a fatality; reputational damage and stoppage of work for investigation are possible additional impacts.

TABLE 5.1 Three Categories of Uncertainties Depending on Scores

Score    Level     Severity    Color Code
1–5      Low       Small       Green
6–12     Medium    Material    Yellow
15–25    High      Critical    Red

(Scores of 13 and 14 cannot occur as products of two integer scores from 1 to 5, so the three bands cover all 25 cells.)
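The scoring arithmetic and the Table 5.1 bands can be sketched as follows (a hypothetical illustration; the function names are mine). The last function applies the top-impact rule for uncertainties impacting several objectives:

```python
def ram_score(probability_score: int, impact_score: int) -> int:
    """Cell score of a 5 x 5 RAM: the product of the two 1-5 scores."""
    if not (1 <= probability_score <= 5 and 1 <= impact_score <= 5):
        raise ValueError("scores must be integers from 1 to 5")
    return probability_score * impact_score

def severity(score: int) -> tuple:
    """(Level, Severity, Color Code) per Table 5.1. Products of two 1-5
    integers never equal 13 or 14, so the three bands cover every cell."""
    if score <= 5:
        return ("Low", "Small", "Green")
    if score <= 12:
        return ("Medium", "Material", "Yellow")
    return ("High", "Critical", "Red")

def overall_severity(probability_score: int, impact_scores: dict) -> tuple:
    """Top-impact rule: the highest impact score across objectives
    represents the uncertainty as a whole."""
    top_impact = max(impact_scores.values())
    return severity(ram_score(probability_score, top_impact))

# Medium probability (3); impacts on Schedule, Reputation, and Safety:
print(overall_severity(3, {"Schedule": 2, "Reputation": 4, "Safety": 5}))
# ('High', 'Critical', 'Red') because 3 x 5 = 15
```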
Even though 5 × 5 RAMs seem to be the current standard widely used in capital projects, one might come across 3 × 3, 4 × 4, 4 × 5, 5 × 6, 6 × 6, 7 × 7, and other RAMs. Based on my experience, 3 × 3 and 4 × 4 RAMs do not offer high enough resolution to distinguish risks. This situation is similar to the compression of ranges discussed earlier, where risks of quite different impacts and probabilities could still belong to the same RAM cell. When addressing risks, and when comparing assessments before and after addressing, risks might too often remain in the same cells and severity categories. (This may be the only justification for using a four-color RAM with four categories of risk scores/levels/severities/colors instead of three: the better resolution should allow one to better distinguish uncertainties before and after addressing.) At the other end of the spectrum, 6 × 6 and 7 × 7 RAMs represent risk micromanagement and assessment overshooting. So a 5 × 5 RAM with three (maybe four) risk-level categories seems to strike the right balance.

Even though the value of using a 5 × 5 RAM in the assessment of risks is underscored in this section, we discuss a conditional sixth impact category in Chapter 7 to establish definitions of project show-stoppers and game changers. The reason is that the very high RAM impact categories are unlimited (Figure 3.2). Answering the question "When does it become too much for the impacted objectives to stay viable?" will provide guidelines for defining project show-stoppers and game changers in a project RAM.

An additional complication when assessing the impact of risks on Schedule is that the impact is schedule-logic specific. If a risk impacts an activity that does not belong to the critical path, the overall impact on the project completion date could be lower than initially assessed, or zero, as the risk will be partially or fully absorbed by the available float.
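As a toy illustration of float absorption (my example, not the book's), only the part of a delay that exceeds an activity's total float propagates to the completion date:

```python
def completion_date_impact(delay: float, total_float: float) -> float:
    """Portion of an activity delay that reaches the project completion
    date after the activity's total float absorbs what it can."""
    if delay < 0 or total_float < 0:
        raise ValueError("delay and float must be non-negative")
    return max(0.0, delay - total_float)

print(completion_date_impact(20, 30))  # 0.0: fully absorbed by float
print(completion_date_impact(20, 5))   # 15.0: partially absorbed
print(completion_date_impact(20, 0))   # 20.0: critical-path activity (zero float)
```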
If, however, a risk is mapped to an activity that belongs to the project critical path, it will impact the project completion date if it occurs. The general rule is that when assessing impact on Schedule, the impact on a particular activity is evaluated as if that activity belonged to the critical path. This assessment becomes input to probabilistic risk analysis, where the features of the impact are sized up while keeping the schedule logic in mind.

A scoring method could also be used to assess upside deviations from project objectives, although the RAM discussed earlier clearly features downside deviation language. Ideally, a separate upside RAM should be developed that recasts the downside ranges as upside ones; it should be the mirror image of the downside RAM (Figure 3.2). An additional reason for developing one is that it is more difficult to come across opportunities on the same scale as threats. For instance, it is easier, or more likely, to come across a risk that
128
◾ Risk Assessment and Addressing
decreases performance by 5% than to run into an opportunity that increases it by 5%. The same reasoning applies to the other two hard objectives, Cost and Schedule. The situation is different with goal‐zero‐type soft objectives: if zero‐safety‐incident or zero‐environmental‐impact goals are project baselines, what could be opportunities to improve on those? At the same time, the impact on Reputation could be positive. For instance, many of the current CO2 sequestration projects are not expected to be terribly profitable; a breakeven level of profitability is often acceptable, with a boost to Reputation considered a top priority. So, four out of the six objectives we introduced are eligible for upside deviations.

I hesitated about whether a separate upside‐deviation RAM should be included in this book. The decision not to include one could be seen as totally biased. The bias stems from the fact that there was no obvious need to develop a separate RAM for the mega‐projects I worked on. The same RAM can be used to assess upside uncertainties, taking into account the need for a more prudent assessment approach, as discussed earlier. A possible assessment error will fall inside the precision tolerance of the scoring method anyway, and these scoring assessments will be reviewed and corrected when preparing for probabilistic cost and schedule risk analysis in any case. So, normally there is not much need to be excessively fancy unless there are some "sales and marketing" reasons.

Among the topics discussed in the majority of academic risk management books are utility functions, risk aversion, risk appetite, and so on. These are supposed to delineate organizations that are risk averse from those that are risk takers. This is a relevant topic that should be understood by project teams and decision makers.
However, my practical experience of introducing mathematical definitions related to utility functions and using them in real projects was a complete disaster. The responses of project teams and decision makers (whether verbal or via body language) were "Are you kidding?" or "Forget about it!" My suggestion would be to include this angle through a well‐developed RAM. If a project team is risk averse, all ranges of impacts should be shrunk accordingly. Probability ranges could be compressed, too. For instance, a very high impact of more than $50 million (Figure 3.2) could be replaced by $30 million, which redefines the rest of the cost impact ranges. The safety example discussed in this section points to the different numbers of fatalities that could be considered very high in a project RAM. This is a scary angle on risk appetite, reflecting the variety of value‐of‐life utility curves in different parts of the world.

An extensive discussion on the validity of scoring methods based on RAMs may be found in an excellent book by Doug Hubbard,2 where the scoring method gets, in the words of Winston Churchill, "limited high respect."
Probabilistic assessment methods are proclaimed the only viable alternative. The criticism of the scoring methods, the associated drama, and the elements of righteousness would be fully justifiable as a purely academic "ivory tower" exercise. I would join in, except that I have been involved in the practical risk management of real capital mega‐projects. Just as quantum mechanics is not required for mechanical engineering, probabilistic methods are not justifiable for all risk management challenges. Whatever assessment method is selected, it should support the core risk management activity, which is uncertainty addressing. The scoring method is a wonderful match for this when handling multiple project objectives, whereas probabilistic methods are good only for quantifiable ones. New applications of the scoring method to engineering design and procurement option selection (introduced in Chapters 9 and 10) demonstrate additional value and efficiency that a probabilistic method cannot effectively provide.
USING A RISK ASSESSMENT MATRIX FOR ASSESSMENT AS‐IS

Assuming that a good RAM has been engineered based on the recommendations of the previous section, the next step is assessment of identified risks before addressing (Figure 2.2). In practitioners' jargon this is often called assessment before or assessment as‐is. This assessment is required to evaluate possible impacts without any additional attempt to manage a risk. However, all relevant controls and measures currently in place that define the as‐is state should be taken into account. According to Figure 2.2, risk response actions should then be developed and approved, which serve as input to risk assessment after addressing (assessment after or to‐be).

The assessment of impacts on objectives starts with a review of the definitions of identified uncertainties developed using three‐part naming. Supposedly, the third part (impacts) lists all relevant impacts. In other words, assessment depends on what has been identified and formulated. There are four major sources of information that could be used for assessment of risks:

1. Historic data
2. Modeling
3. Expert opinion
4. Lessons learned
Consistent or relevant historic data are not always available. Modeling of risks is not often possible. Lessons learned are not always systematically collected and formalized, although they are kept informally in the minds of specialists. So in the majority of cases the only source of assessment information is expert opinion. Being quite subjective, these opinions substitute for possibly objective historic data, lessons learned, and modeling. This again brings up the traditional topic of bias. The key challenge a facilitator faces is to collect biased opinions and average out the elements of bias as much as practically possible. Uncertainty assessment is the business of quantifying stories and biases.

A facilitator who leads a risk assessment might want to use standard leading words and questions. These questions are based on discussions of the possible scenarios of risks happening and their impacts. It is reasonable to start with evaluating the probability of a risk happening. This parameter is attached to the risk and, of course, should be the same for all impacts. If it is discovered during discussion that some impacts seem to get different probabilities, this is a sign that the identified risk should be split into two or more detailed risks.

The first obvious guideline for probability assessments is based on the structure of the bowtie diagram (Figure 4.2) and the probability rules reflected in Figure 4.1. Namely, each cause–event link of the bowtie may be viewed conditionally as a separate uncertain event. According to Figure 4.1, the probabilities of all independent cause–event realizations should be summed. For instance, let's distinguish four links, cause A–event X, cause B–event X, cause C–event X, and cause D–event X, as independently identified uncertain events. Let's also assume that the probabilities of these four risks are P1, P2, P3, and P4, respectively. Then the overall probability of the risk X event should be P1 + P2 + P3 + P4.
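The cause–event probability arithmetic just described can be sketched in a few lines of code (an illustrative sketch, not from the book; the function names and example probabilities are my own):

```python
# Sketch of the bowtie probability rules of Figure 4.1 (illustrative).
# OR gate: probabilities of independent cause-event links are summed
# (the simple-summation rule; a reasonable approximation for small probabilities).
# AND gate: probabilities of causes that must occur together are multiplied.

def or_gate(probabilities):
    """Combine independent cause-event links by summation."""
    return sum(probabilities)

def and_gate(probabilities):
    """Combine causes that must all occur simultaneously."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

# Four independent causes A, B, C, D leading to event X:
p1, p2, p3, p4 = 0.05, 0.10, 0.02, 0.03
print(round(or_gate([p1, p2, p3, p4]), 4))              # P1 + P2 + P3 + P4 -> 0.2

# If causes A and B can trigger X only together, with C and D independent:
print(round(or_gate([and_gate([p1, p2]), p3, p4]), 4))  # P1*P2 + P3 + P4 -> 0.055
```

Note how combining causes A and B under the AND gate pulls the overall probability down, exactly as the text describes.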
This is correct only if these four causes are independent, which is normally assumed when using a bowtie diagram. However, if only a combination of two causes, say A and B, can lead to the risk X event along with the independent causes C and D, the overall probability of the risk X event should be P1 × P2 + P3 + P4 according to Figure 4.1. Technically, causes A and B should be combined into one, leading to an event of rather lower probability P1 × P2: a condition of simultaneous occurrence (the AND gate of Figure 4.1) yields a probability lower than that of either independent event.

This leads us to the firm conclusion that the probability of occurrence depends on the level at which an uncertain event is defined and understood. For instance, when assessing quality related to construction contractors, one may guess that at the work‐package level the corresponding uncertainty should have quite a low probability. However, at the project level it might get medium
likelihood if several contractors deliver construction packages. At the business unit/portfolio level this uncertainty could appear almost certain! That is why some business‐unit or corporate risks look more like given issues or general uncertainties with probabilities close or equal to 100%, and not like uncertain events.

Another guideline to keep in mind when assessing probability of occurrence is that there are very few uncertain events in the project external or internal environment that are truly random. This brings up the question of whether frequency‐based historic data can be used for probability assessments of some events. By truly random probabilistic events I mean those comparable to the flipping of a coin.3 For instance, there are probabilities that one gets heads two, three, four, or more times in a row, although those probabilities drop with an increased number of tries. So, the more tries, the higher the confidence level that heads and tails have equal 50–50 chances.

If a project team does not have a clue about the probability of a risk occurring, the safest possible probability assessment is 50%. If the right and objective assessment were 100% or 0%, the maximum possible error would be no more than 50%. If instead an assessment of, say, 60% or 70% is made, the possible maximum error might be 60% or 70% if the right assessment were 0%. I guess politicians should love 50% probabilistic assessments of events in politics, economics, and society as being the most "politically correct." One should be careful when such 50% assessments are produced. Are they a reflection of the real nature of a possible event, based on good analysis and justification as in the case of coin flipping; a confession of a complete lack of clarity and understanding; or an attempt to hide information? "Broken but Safe Logic" contains a joke about unsubstantiated but scientific‐sounding results.
BROKEN BUT SAFE LOGIC

The following is a common joke aimed at poorly justified results used by physicists. It sounds like a citation from a scientific paper, and includes a funny dichotomy and a "safe" assessment of expected results: "If we divide it into three unequal halves, the probability of desired outcome should be 50%."
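The minimax reasoning behind the 50% default assessment can be checked numerically (a throwaway sketch; the function name is my own):

```python
# If the true probability may turn out to be anywhere between 0% and 100%,
# the worst-case error of a guess g is max(g, 1 - g); g = 0.5 minimizes it at 0.5.

def worst_case_error(guess):
    """Largest possible error if the true probability turns out to be 0.0 or 1.0."""
    return max(guess, 1.0 - guess)

for g in (0.5, 0.6, 0.7):
    print(g, worst_case_error(g))
# -> 0.5 0.5
# -> 0.6 0.6
# -> 0.7 0.7
```

This reproduces the argument in the text: a 60% or 70% guess can be off by 60% or 70%, while 50% can never be off by more than 50%.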
Projects are managed differently by different organizations. Moreover, different projects are managed differently by the same organization. A competitor
may react to a particular situation differently today than two years ago. One tries to assess the possibility that a project will be sanctioned by the government next year, keeping in mind that a similar project was sanctioned a few years ago. But the political situation today might be very different from what it was then, due to upcoming presidential elections. Moreover, a decision by the government to reject or put off the project might already have been made but not announced, for political reasons, at the moment when the probability assessment is undertaken.

So where is the randomness, the "game of chance," in all these examples? There is none. Assessment of probability is done based on the best available knowledge about a particular situation. Previous similar situations are not exactly relevant. In a way, a probability assessment that is not based on historic frequency data is a measure of the depth of knowledge and quality of information about a particular situation. We talked about unknown uncertainties in risk identification previously. Here we are talking about unknowns in probability assessment. We try to treat some facts that are actually known (but not to us) as uncertain events (to us) due to lack of information. (We refer again to our previously discussed broiler black swans.)

The bottom line of this discussion is that a simple question must be answered first: the probability of what is being assessed? For truly probabilistic events we are talking about the probability of a random event happening. Such an assessment should be based on historic frequency data. But in many cases of "untruly" probabilistic events we should be talking about the probability of discovering a particular outcome that is already predetermined and premeditated. This discussion points to the rejection of frequency‐based assessments of probabilities in most cases. They could still be used in truly random situations where people have no control of or influence over the outcome.
It looks like most of these situations belong to the technical area. However, I know of several technical examples where events previously understood as perfectly random turned out to be more, or even fully, predetermined. The level of predetermination is proportional to the depth of understanding and knowledge of those technical events.

First, in the case of a hydropower generation project I worked on, the probability of the water level exceeding 20 m in any particular year was assessed as 5% based on 30 years of river observations. This information was important for the design of cofferdams. Unexpectedly, it was discovered that this assessment was no longer valid due to recent hydro‐engineering activities upstream. The risk of flooding got a much higher assessment, which
had implications for the cofferdam design. The previously perceived full randomness of flooding was rejected.

Second, hurricanes in the Gulf of Mexico were previously treated as random events, and their probabilities were assessed accordingly based on frequency data. Newly developed forecasting methods, based on extensive use of sophisticated computer modeling and satellite monitoring, predict their occurrence and severity in a given period of time and place with amazing accuracy. Although accuracy is still below 100%, hurricanes are no longer seen as purely random uncertain events. The level of randomness is relatively low now, thanks to the current level of understanding of hurricane mechanisms, which allows one to predict them with relatively high accuracy. Practically, this means the possibility of developing and implementing adequate and timely evacuation plans.

Third, similar trends are observed in the prediction of earthquakes. Advanced monitoring methods, some knowledge about precursors, and sophisticated modeling techniques allow us to predict earthquake occurrences with some accuracy. This accuracy is not high enough yet, given the current level of understanding of earthquake triggers. In other words, the level of randomness of earthquake occurrences still seems high due to the lower level of understanding of their mechanisms. The situation should eventually change for the better. However, some politicians overestimate the accuracy on purpose, for finger‐pointing purposes. I was surprised to discover that several Italian scientists had been jailed for their inability to predict the L'Aquila earthquake in 2009!

We have just discussed three examples of technically predetermined or semi‐predetermined events where the frequency‐based approach to probability assessment is not adequate.
In situations where people and their decisions, policies, biases, focused efforts, learning curves, competitiveness, and so on may influence an event's likelihood, frequency‐based assessments could be completely misleading. Getting back to physics, this reminds me of Heisenberg's uncertainty principle and the related observer effect: a system cannot be measured without being affected. People's interference affects most of the events related to projects, making historic frequency data irrelevant. People's interference and influence make any uncertain event more unique and not directly comparable to similar previous events. Using the frequency method for probability assessments of uncertain events that are not random in nature is a substitute for real knowledge about them.
Frequency data may be used as a starting point in probability assessments but only after corresponding conditioning. This conditioning should be an attempt to do an apples‐to‐apples comparison of previous risk occurrences with currently discussed ones, making the former relevant to current assessment. A better way to assess probabilities of untruly probabilistic events is by:
▪ Collecting more information
▪ Better learning and understanding their drivers and mechanisms
▪ Trying to influence their outcomes

In some cases, scenario planning and game theory modeling methods could be among the approaches to better understand those events.4 We won't discuss these in this book, although the existence of these other methods of assessing the probabilities and outcomes of untruly probabilistic events should be kept in mind.

Taking into account the earlier discussion, standard guiding questions should be used to assess both probabilities and impacts:

▪ "Is the uncertain event truly random?"
▪ "Do we assess the probability of happening or the probability of discovery of a particular predetermined outcome?"
▪ "Are project stakeholders behind this event, and do they define the probability of its occurrence?"
▪ "Do we have any knowledge of or historic data on this event?"
▪ "What are the elements of novelty in this project?"
▪ "What potential steps, if any, should be undertaken to better understand this event and reduce its randomness?"
▪ "Can we model this event?"
▪ "What are the chances/odds/likelihood that this event happens or that we discover its existence?"
▪ "What is the worst/most expectable/least damaging scenario?"
▪ "What makes you believe that?"
▪ "What happened to that other project, and is it relevant to ours?"
▪ "What is your confidence level that the impact will belong to this range, and why?"
▪ "What are the chances that the real impact will be outside of the discussed range?"
▪ "What types of bias might we come across when assessing this uncertainty, and why?"
Questions such as "How often has this event happened previously?" or "Is this a one‐in‐five‐ or one‐in‐ten‐years event?" might not be appropriate, being based on the frequency method, which is frequently wrong. The key intent in asking these questions is to provoke thinking but suppress bias, if possible. See the final section of this chapter for a discussion on bias.

As a result of assessment as‐is, a project team justifies the most appropriate range for probability and ranges for the corresponding impacts. While the probability assessment is represented by one score (Figure 3.2), several different assessments are expected for the impacts on the corresponding objectives. For instance, the risk discussed in Chapter 4 has two impacts, one on Reputation and the other on Schedule. Let's say that the project team assessed the impact on Reputation as high (score 4) and the impact on Schedule as very high (score 5). If the probability of this event were assessed as medium (score 3), the Reputation score would be 12 and the Schedule score 15. According to the risk level rules of Table 5.1, the Reputation score gets a yellow color and the Schedule score gets red. Some project teams tend to consider these scores separately, as the manageability of the corresponding impacts may differ. Some assign the highest score as representative of the whole risk. Figure 5.1 depicts this uncertain event and its two impacts using the 5 × 5 RAM format (Figure 3.2). The black star of Figure 5.1, which corresponds to the Schedule impact before
[Figure 5.1  Depicting Downside Uncertainty Assessments: a 5 × 5 probability‐versus‐impact RAM showing the Schedule impact as‐is (score 15, red; represents the whole risk as‐is), the Reputation impact as‐is (score 12, yellow), the Reputation impact to‐be (score 6, yellow; represents the whole risk to‐be), and the Schedule impact to‐be (score 4, green).]
addressing (score 15), represents the entire uncertainty as‐is. Another star represents the Reputation impact as‐is (score 12). We discuss assessment after addressing in a later section.

To a certain degree, assessment as‐is is less important than assessment to‐be. We may assign high or very high impacts and probabilities to the majority of uncertainties without much deep thinking. Most important is the right assessment after addressing, which requires a clear‐cut understanding of the required addressing actions and their possible outcomes. We introduce five main addressing strategies in the next section.
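The scoring arithmetic of the worked example above can be sketched as follows (an illustrative sketch, not the book's own tooling; the color thresholds follow Table 5.1, and the function names are my own):

```python
# Risk score = probability score x impact score on a 5 x 5 RAM.
# Color bands per Table 5.1: 1-5 green, 6-12 yellow, 15-25 red.

def risk_color(score):
    if score <= 5:
        return "green"
    if score <= 12:
        return "yellow"
    return "red"

def assess(probability_score, impact_scores):
    """Score each impacted objective; some teams let the highest score represent the whole risk."""
    return {objective: (probability_score * impact, risk_color(probability_score * impact))
            for objective, impact in impact_scores.items()}

# Worked example from the text: probability medium (3),
# Reputation impact high (4), Schedule impact very high (5).
print(assess(3, {"Reputation": 4, "Schedule": 5}))
# -> {'Reputation': (12, 'yellow'), 'Schedule': (15, 'red')}
```

The highest-scoring objective (Schedule, 15, red) is what some teams would carry forward as the representative score of the whole risk.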
FIVE ADDRESSING STRATEGIES

Recall that any uncertainty can be characterized by one (general uncertainty) or two (uncertain event) factors: uncertainty of likelihood and/or uncertainty of impact. If we succeed in influencing and managing those two factors, we may be sure that we are successful in managing any uncertainty.

As briefly discussed in Chapter 1, overall project uncertainty exposure evolves in time because the uncertainty objects that give rise to it evolve in time. Those evolve due to the presence of dynamic uncertainty changers in the project internal and external environment, which were introduced in Chapter 1. The key types of uncertainty changers discussed in this section are uncertainty addressing actions. They are referred to in Steps 5 and 7 of the risk management process in Chapter 2 (Figure 2.2). It is important to keep in mind that any changer (an addressing action, a decision, a choice, or any relevant development in the project external environment) could lead to a new uncertainty or amend an existing one. The possibility of such dynamics and transformations in the form of a domino effect is a major reason why the risk management process should be ongoing and evergreen (Figure 2.2).

The structure of the bowtie diagram (Figure 4.2) promotes such thinking, but only for uncertain events. Barrier 1 constitutes the first prevention line of defense. This barrier is supposed to reduce the probability of an uncertain event happening by breaking or weakening the logical links between causes and possible events.5 Needless to say, barrier 1 does not work for general uncertainties, which have an irreducible 100% probability.

In some cases all those logical links can be successfully or naturally broken so that the event cannot happen. It becomes fully prevented or avoided, its probability of occurrence equal to zero. This happens mostly when the initial causes become irrelevant to the project. Change of scope of a project is
one such possibility. Sometimes the time window for an event to happen may have passed. This is often the case when project permitting is done: the corresponding risks cannot occur anymore and their probabilities become zero. Of course, a project team should not play a game of chance and just wait for the occurrence window to pass. Focused preventive measures should be developed to at least reduce the probability of an uncertain event happening. Those measures are called mitigation preventive actions. Hopefully, due to the reduced probability of occurrence, the risk will not happen. However, if it is not avoided or fully prevented, it still might occur.

If the risk does happen, the second line of defense should be ready. Barrier 2 represents the crisis management measures that should be in place if an uncertain event does occur. For general uncertainties, however, it is not about crisis management, as the general uncertainty has been there all along. Addressing measures associated with barrier 2 are called mitigation recovery actions; here the mitigation relates to possibilities of reducing the impact of occurred uncertain events and given general uncertainties on project objectives.

One such possibility is transferring the corresponding uncertainty to third parties. In some cases the project owner might keep some uncertainties to manage; in other cases they could be transferred to EPC contractors, joint venture (JV) or consortium partners, vendors, and so on. The process of uncertainty transferring is called risk brokering. The key principle is that the party that can manage the risk most efficiently should be assigned to manage it. What does this mean? Let's assume that an owner could manage an uncertainty effectively but succeeds in transferring it to an EPC contractor. The risk may turn out to be quite expensive for the contractor to manage, so the contractor includes a substantial risk premium in the contract price.
This will lead to an increase in the overall project cost. If the EPC contractor does not include a corresponding risk premium, it might incur losses. This would give rise to an indirect impact on the project through project delays, claims, change orders, soured relations with the contractor, damaged reputation, and so on. So transferring at all costs should not be the ultimate goal, because there might be an unreasonably high price for it. Smart optimization through risk brokering is required.

The most traditional way of transferring risks is purchasing insurance policies that cover specific types of risks. High premiums and deductibles could make this option unreasonable. As most insured risks belong to the corporate category, some companies choose self‐insurance. If a company has a portfolio of projects supported by adequate financial resources, self‐insurance of some categories of risks may be preferable.
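The risk‐brokering principle, assigning the risk to the party that can carry it at the lowest overall cost, can be illustrated with a toy comparison (all figures below are hypothetical, invented purely for illustration):

```python
# Compare the owner's expected cost of retaining a risk with the risk premium
# a contractor would charge to take it on. All figures are hypothetical.

def expected_cost(probability, impact):
    """Expected monetary value of a retained risk."""
    return probability * impact

owner_retained = expected_cost(0.10, 20_000_000)   # 10% chance of a $20M impact
contractor_premium = 3_500_000                     # premium quoted in the bid

if owner_retained < contractor_premium:
    print("Retain: owner manages the risk more cheaply")
else:
    print("Transfer: the contractor's premium is the better deal")
# -> Retain: owner manages the risk more cheaply
```

In this invented case the owner's expected cost ($2M) is below the contractor's premium ($3.5M), so blind transferring would overpay, which is exactly the point the text makes.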
If all economically viable preventive and recovery addressing actions are developed and an uncertain event still might occur and impact objectives, the last resort is to accept the possibility of residual risk occurrence and put away some risk reserve money. The risk reserve could be used to cover residual risks of various impacts, not only of cost impact. Money is an addressing tool for residual risks of all impacts should they occur. For instance, reputational damage could be reduced by conducting a public relations or advertising campaign in the mass media, which requires corresponding expenses. Money spent or planned to be spent on any addressing action should be counted and included in the project budget (CapEx or OpEx, depending on when an action will be implemented).

To sum up the discussion on uncertainty addressing, here are the five addressing strategies:

1. Avoid (barrier 1—preventive)
2. Mitigate‐Prevent (barrier 1—preventive)
3. Mitigate‐Recover (barrier 2—recovery)
4. Transfer (barrier 2—recovery)
5. Accept (barrier 2—recovery)

As pointed out, the Avoid and Mitigate‐Prevent strategies of barrier 1 are not applicable to downside general uncertainties. In practice, a combination of several appropriate addressing actions based on the five strategies should be developed by an uncertainty owner and approved by management, which corresponds to the Plan and Approve Response step of the risk management process (Figure 2.2).

Obviously, actions based on avoiding risks are mostly viable in the early phases of project development. The second obvious choice, and the most popular strategy, is Mitigate‐Prevent, which might be supported by several actions for each particular risk. The Mitigate‐Recover strategy is the one most overlooked by project teams. One possible reason for this is overconfidence: a belief that Mitigate‐Prevent actions should be efficient enough to reduce probability to an acceptable level.
But the question stays the same: what do you do if an uncertain event does happen, becoming an issue? For instance, for the risk of H2S release during drilling, the usual preventive measure is the use of blowout preventers (BOPs). If the BOPs do not work, proper training of the rig personnel, personal protective equipment, H2S monitoring, and an evacuation plan constitute a set of Mitigate‐Recover actions.

Recently I was asked by a co‐worker in one of the Persian Gulf states about addressing the risk of a new war in the region.
None of us was in the business of reducing the probability of such an event. So, what could be done to address it? My response was that we needed to develop and implement business continuity and evacuation plans (the Mitigate‐Recover strategy). It seemed that no preventive actions were available to us, unless it was decided that it was time to shut down operations in the region and leave, fully avoiding the risk.

The Transfer strategy is widely used during the negotiation of contracts among project owners, EPC contractors, vendors, JV partners, and so on. Corresponding risk assessments are required from the angles of the impacted parties. A tug‐of‐war usually follows to negotiate risk responsibilities versus risk premiums.

The Accept strategy comes last and requires assessment of the project risk reserves needed to cover all accepted residual risks. It is important to remember that project cost risk reserves cover residual risks of various impacts, not only cost impacts. The cost of addressing should be included in a project's base estimate. The only exception here could be the addressing of Schedule impacts: the risk reserve might take the form of additional float in the schedule or a different sequencing of activities on the schedule's critical path. But even in this case, schedule‐driven costs should be kept in mind. Needless to say, schedule acceleration usually requires additional spending (schedule‐driven costs to crash the schedule), which should be taken into account in the base estimate, too.

Traditionally, practitioners keep in mind only four addressing strategies: Avoid, Mitigate, Transfer, and Accept. Sometimes the mnemonic abbreviation 4T is used to highlight Terminate, Treat, Transfer, and Take. It's nice to have mnemonic abbreviations like this. The problem is that the Mitigate strategy in this case is confusing and not clearly defined.
According to the bowtie diagram, the Mitigate‐Prevent strategy (barrier 1) and the Mitigate‐Recover strategy (barrier 2) have totally different purposes, natures, and consequences. The former is part of proactive and efficient risk management; the latter is about reactive crisis management! The only link between them is the word mitigate. But this is again about linguistics and not the core essence of risk management.

All of those strategies are applicable to downside known uncertainties only. What should our thinking be for upside known uncertainties (Figure 1.2)? It should be the reverse, or reciprocal: in place of attempts to reduce the severity of a downside uncertainty, one should try to increase the positive consequences of an upside one.

A downside uncertainty as‐is might have a very high (red) level of severity (score 15–25), which is a major concern for a project team. Let's see that uncertainty as a monster. Assuming we have a good set of addressing actions based
◾ Risk Assessment and Addressing
on the five strategies discussed earlier, one should expect the risk to become much less severe after addressing (green, score 1–5). In other words, the monster should get domesticated and become a nice pet. It still may bark and bite, but nothing major is expected. Reciprocally, an upside uncertainty before addressing is expected to have a smaller positive impact on the objectives (“diamond‐in‐the‐rough”) than after (“fine diamond”). As we decided not to develop a separate RAM for upside uncertainties, we may apply the RAM for downside deviations (Figure 3.2). Using the language of Table 5.1, an upside deviation before addressing could be small (score 1–5, green) and might become critical after addressing (score 15–25, red) or at least material (score 6–12, yellow). Using zoological and diamond‐cutting analogies, the essence of managing project uncertainties is domesticating monsters into more predictable pets and polishing rough diamonds to make them fine.

We have already discussed five addressing strategies for downside uncertainties; now let’s formulate them for upside ones. Here are five addressing strategies for upside uncertainties:

1. Exploit (barrier 1—magnify): to undertake actions to make it relevant to the project (“pull to the scope”)
2. Enhance‐Magnify (barrier 1—magnify): to try to increase probability
3. Enhance‐Amplify (barrier 2—amplify): to try to increase positive impact
4. Share (barrier 2—amplify): to share it with partners for further amplification
5. Take (barrier 2—amplify): to accept it as‐is (a “do nothing more” strategy after the other strategies are applied) and reduce the risk reserve accordingly

The Exploit and Enhance‐Magnify strategies are not applicable to upside general uncertainties, as those have a 100% probability of occurrence. The terms used for the definition of the five addressing strategies for upside uncertainties may sound inconsistent to some of my readers.
I am a bit concerned about the terms magnify and amplify. I am not keen on particular labels; I would encourage readers to come up with better terms that reflect the nature of addressing actions more adequately. The same applies to the downside addressing strategies. The two Enhance strategies (in place of Mitigate) should be split and understood as totally different, belonging to two different places on the bowtie diagram. (Perhaps terms such as trampoline or springboard should be used for upside uncertainties in place of the word barrier.)
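The five downside and five upside strategies map onto the two bowtie barriers in a mirrored way; a minimal lookup sketch (Python, with names invented for illustration) makes the symmetry explicit. Accept and Take sit outside the barriers: they apply after the other strategies and determine what the risk reserve must cover.

```python
from enum import Enum

class Barrier(Enum):
    PREVENT = 1   # barrier 1: acts on causes/probability (left side of the bowtie)
    RECOVER = 2   # barrier 2: acts on impacts (right side of the bowtie)

# Downside strategies and the barrier each one works through.
DOWNSIDE = {
    "Avoid": Barrier.PREVENT,
    "Mitigate-Prevent": Barrier.PREVENT,
    "Mitigate-Recover": Barrier.RECOVER,
    "Transfer": Barrier.RECOVER,
    "Accept": None,              # residual exposure goes to risk reserves
}

# Upside strategies mirror them ("magnify" and "amplify" in place of barriers).
UPSIDE = {
    "Exploit": Barrier.PREVENT,
    "Enhance-Magnify": Barrier.PREVENT,
    "Enhance-Amplify": Barrier.RECOVER,
    "Share": Barrier.RECOVER,
    "Take": None,                # reduces the risk reserve accordingly
}
```

For upside (or downside) general uncertainties, the barrier-1 entries drop out, since probability is already 100%.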
ASSESSMENT AFTER ADDRESSING

The next step in the risk management process (Figure 2.2) is assessment after addressing. The previous discussion on assessment as‐is is fully relevant here. However, the assessment to‐be should take into account both the initial as‐is assessment and the approved set of addressing actions. As discussed in Chapter 2, any assessment to‐be should be clearly linked with a particular point in time, which is usually associated with a particular decision gate or major project milestone. The first complicating factor for such an assessment is that it should take into account the likelihood of successful implementation of approved actions. The second is that approved addressing actions may become causes of new uncertainties. The dotted line in Figure 2.2 reminds us about the need to keep an eye on new uncertainties generated by addressing actions in the course of their implementation. This includes the possibility of an action’s implementation failure.

The importance of assessment to‐be for project budgeting is dictated by the fact that decisions about project reserves are usually made based on assessment after addressing for a particular point in time. The general efficiency of project risk management may be evaluated in monetary terms as follows:

[Project cost reserve as‐is] − [Project cost reserve to‐be] > [Cost of addressing]

Certainly, the cost of addressing is an investment in reduction of overall project uncertainty exposure. If the difference in project reserves before and after addressing is higher than but comparable with the cost of addressing, the risk management is not that efficient. Needless to say, if the cost of addressing is higher than the difference, such risk management yields a negative value. This could be the case, for instance, when addressing actions introduce additional uncertainties or when risk management assumes mostly “ritual” functions (see “Cargo Cult Science” in Chapter 2).
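The efficiency inequality above can be turned into a tiny worked example; all dollar figures here are invented for illustration.

```python
def addressing_value(reserve_as_is, reserve_to_be, cost_of_addressing):
    """Net monetary value of risk addressing:
    the reserve reduction achieved minus what was spent to achieve it."""
    return (reserve_as_is - reserve_to_be) - cost_of_addressing

# Hypothetical figures in $M: the reserve drops from 40 to 25 after
# addressing that cost 5, so addressing created 10 of net value.
value = addressing_value(40.0, 25.0, 5.0)
assert value == 10.0

# The "ritual" case: addressing costs more than the reserve
# reduction it buys, so risk management yields a negative value.
assert addressing_value(40.0, 38.0, 5.0) < 0
```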
The cost of addressing should include the cost of all addressing actions that address not only impacts on the Cost objective but also on Schedule, Scope/Quality/Performance, Reputation, Safety, and Environment. This approach is a basis for evaluation of engineering design and procurement options, as discussed in Chapters 9 and 10. Project teams should try to develop addressing actions that do not cost a lot of money. The most efficient and smartest addressing actions are those aimed at fulfilling the same activities in different ways. For instance, engineers may carry out project design activities differently to address a particular uncertainty, which would not necessarily require extra costs. Certainly this is
not always possible, and extra spending is rather inevitable. Hence, the cost of addressing introduced earlier should be treated as an additional budget to be spent above previously approved annual budgets for project development and execution. Assuming some addressing actions were developed and approved, an assessment after addressing could be done as if all or some of the proposed actions were already implemented. This defines the timing that corresponds to the assessment to‐be.

Figure 5.1 depicts the situation where, after addressing, two impacts switch levels of severity. The Schedule impact, which was higher before addressing, became lower after addressing (score 4, green) than the impact on Reputation (score 6, yellow). So, the impact on Reputation, having a higher score after addressing, becomes representative of the entire risk. This observation brings up risk manageability. In the previous example, the Reputation impact has lower manageability, which is often the case in life. The Schedule impact, on the contrary, seems to be better managed. Hence, the manageability of impacts of the same uncertainty can be very different. Some risk management tools have manageability of an entire risk as a standalone parameter along with probability and impacts, which is quite confusing in light of this discussion. However, if just one objective is managed, which is an obvious manifestation of the previously discussed tunnel‐vision organizational bias, this could be an acceptable concept. But what a “wonderful” but deceptive risk management system this would be: one would manage only impacts on a “favorite” objective (usually Cost), ignoring the rest of the objectives.

Figure 5.2 is an illustration of an upside uncertainty assessment before (diamond‐in‐the‐rough) and after (fine diamond) addressing. To sum up the discussion, Figure 5.3 depicts the conceptual design of a project risk register representing assessment before and after addressing along with addressing actions.
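The rule illustrated by Figure 5.1, where the highest‐scoring impact represents the whole uncertainty, can be sketched in a few lines; the scores below are hypothetical and only echo the shape of that example.

```python
def representative(probability, impacts):
    """Severity score per objective = probability score x impact score.
    The objective with the top score represents the whole uncertainty."""
    scores = {obj: probability * imp for obj, imp in impacts.items()}
    top = max(scores, key=scores.get)
    return top, scores[top]

# Hypothetical to-be scores: Schedule drops to 4 (green) while
# Reputation stays at 6 (yellow), so Reputation now represents
# the entire risk.
obj, score = representative(2, {"Schedule": 2, "Reputation": 3, "Cost": 1})
assert (obj, score) == ("Reputation", 6)
```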
The uncertainty introduced in the boxed example in Chapter 4 is placed in the risk register for demonstration purposes. The six project objectives discussed so far are assessed before and after addressing using the RAM of Figure 3.2. Two sample addressing actions are proposed to distinguish assessments before and after addressing. We discuss the detailed specifications of the risk register template in Chapter 8. Many major project management decisions, such as the final investment decision (FID), are made based on uncertainty assessments after addressing linked to those decisions. The three most typical assessments to‐be are:

1. When a project passes a decision gate by the end of Select (assessment as‐is), keeping in mind the expected uncertainty exposure to‐be by the end of Define (“to‐be‐FID”)
FIGURE 5.2 Depicting Upside Uncertainty Assessments. [The RAM matrix (probability scores 1–5 on one axis, impact scores 1–5 on the other) shows the Cost impact as‐is (score 3, green); the Reputation impact as‐is (score 6, yellow), which represents the whole opportunity as‐is; the Reputation impact to‐be (score 20, red); and the Cost impact to‐be (score 25, red), which represents the whole risk to‐be.]
2. When a project reaches the end of Define to be considered for FID: an assessment as‐is is conditionally considered an assessment to‐be to define an approved project risk reserve for Execute.
3. Regular assessments in Execute, which are used for forecasting of the risk reserve draw‐down (Chapter 14).

In the course of the actions’ implementation, they will move from assessment to‐be to assessment as‐is. Hence, the assessment as‐is is constantly changing, being attached to “Now.” The closer a project is to FID, the more similar the assessments as‐is and to‐be at FID (to‐be‐FID) are supposed to become, due to the actions’ implementation in Define. By the decision gate preceding FID (the end of Define), both assessments should become equal and reflect the residual uncertainty exposure for the rest of the project (Execute) if it is sanctioned. A set of addressing actions that have yet to be implemented in Execute should become an agreement between the project team and management: sanctioning of a project is subject to full implementation of the remaining approved actions. How often this promise is forgotten! This sort of “under‐addressing” after sanctioning is one of the major sources of uncertainty of project outcomes and failures (Figure 1.1). Assessments of uncertainties to‐be that are used to develop project reserves are subject to assumed project execution efficiency. This must be factored into all to‐be uncertainty assessments. Practically speaking, this means that project teams could assume excellent (at least good) project execution implicitly, which means hypothetically excellent project economics and
FIGURE 5.3 Conceptual Template of an Uncertainty Register. [The template combines DEFINITION, ATTRIBUTES, ASSESSMENT AS‐IS, ADDRESSING, and ASSESSMENT TO‐BE sections. The sample entry, “Project Sanctioning Delay” (a downside uncertain event; RBS category Stakeholders; owner Mike Gorbiff; status Active), carries a three‐part definition: due to (a) general opposition by some NGOs to oil sands projects, (b) concerns by local communities about the project’s environmental impact, and (c) environmental issues associated with a similar project in the past, the project XXX might be challenged during public hearings, leading to (A) permitting and final investment decision delay (Schedule); (B) engineering, procurement, and construction delays (Schedule); (C) the company’s reputational damage in general (Reputation); (D) complication of relations with local communities in particular (Reputation); and (E) extra owner’s costs (CapEx). Assessment as‐is: probability 3; impacts on Cost 2, Schedule 5, Product Quality 0, Safety 0, Environment 0, Reputation 4; level/severity 15 (red). Two addressing actions are listed, each with a cost of K$500, start and completion dates, an owner, and a status: a Mitigate‐Recover action to review sequencing of FEL/pre‐FID works and develop additional float in the schedule to absorb the schedule impact should the risk occur, and a Mitigate‐Prevent action to establish a community engagement and communication plan including a schedule of open house meetings. Assessment to‐be: probability 2; impacts on Cost 1, Schedule 2, Product Quality 0, Safety 0, Environment 0, Reputation 3; level/severity 6 (yellow). A comment notes that in the worst case of project cancellation or delay for more than one year this should be considered a show‐stopper: to add to the corporate RR and discuss with the VP.]
performance. On one hand, the role of organizational governance is to assure proper execution. On the other hand, realistic and not utopian execution efficiency should be factored into uncertainty assessments to‐be and clearly communicated. Unfortunately, bias based on pie‐in‐the‐sky execution overconfidence gives rise to the good–bad execution gap depicted in Figure 1.1. A project that is prone to overconfidence bias usually needs more reserves than approved at FID. This is another major reason for project failures.
PROJECT EXECUTION THROUGH RISK ADDRESSING (PETRA)

The addressing principles described in the previous section are applicable to any known upside or downside uncertainty. However, some uncertainties that are perceived as critical require special attention. Usually two or three top project uncertainties should be subject to more detailed addressing reviews. Let’s call this project execution through risk addressing (PETRA). PETRA is no more than a very logical and simple upgrade of standard risk register templates to better accommodate the bowtie diagram philosophy. As previously discussed, several causes may lead to an uncertain event. In turn, an event may lead to several impacts on project objectives. These many‐to‐one and one‐to‐many logical relationships need to be broken in a consistent way. There should be assurance that all identified logical links will be broken or significantly weakened consistently. This could require development of several detailed addressing actions for each logical link. This sounds onerous, but it is very consistent and effective.

Figure 5.4 demonstrates the PETRA concept. This is a simplified and high‐level example. A real PETRA could lead to dozens of preventive and recovery actions that have owners, start and completion dates, notes on progress, and so on. A risk practitioner could easily add the required fields to the template of Figure 5.4 as soon as the concept is well understood. In essence, this is a fragment of a traditional project risk register but developed in a very detailed fashion. Its structure assures that every single cause and every single impact on objectives is properly and individually addressed. As I have not yet seen an adequate commercial application for the PETRA technique, project teams may, for obvious reasons, use an MS‐Excel template as introduced in Figure 5.4. Should a regular uncertainty register template (Figure 5.3) be redeveloped the PETRA way?
It is possible to do so, although not all uncertainties require this level of detail, only the most critical. Hence, the only constraint is about
FIGURE 5.4 Conceptual Template for PETRA Methodology for Uncertain Events
the resources and time available to do such a detailed analysis. So, to avoid overshooting and overspending, normal practice is to use the regular risk register template shown in Figure 5.3 for all uncertainties and the PETRA method for the very few critical uncertainties as an extra. For general uncertainties, the left part representing causes and preventive actions will be irrelevant. It is too late to prevent an event that has already happened or was there all along as a given. In place of an uncertain event there will be an issue/given/fact, and so on. So, only recovery actions should be considered, as part of barrier 2, to rule out impacts on objectives.

I came across an application of a similar methodology to develop addressing actions related to labor productivity uncertainties in a capital project. That was a really impressive and detailed document that contained 14 pages in an 11" × 17" format. I cannot share this information in this book due to confidentiality. However, this example demonstrates how detailed and sophisticated PETRA or a similar exercise could be. There are several methodologies that resemble PETRA, which various companies have adopted and shared. Their origin is not exactly traceable. All of them look way more complex than really required. The most complicated versions are linked with the WBS, project schedule, or budget in a similar way as nodes are used in hazard and operability (HAZOP) studies. Some utilize fishbone diagrams. All those fancy extras might yield 20% extra value but generate 80% extra complexity, frustration, and time spent. It is amazing how a simple and effective idea can be overlooked or overloaded with extras to the point of impracticality! I feel that those futile bells and whistles may be left out for practicality and efficiency reasons. Once again, let’s keep things simple or at least efficient.
The lean PETRA methodology introduced in this section has nothing to do with those overcomplicated techniques. It is just a slight modification of a standard risk register template based on clear understanding of the bowtie diagram logic.
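Under that reading of the bowtie logic, a lean PETRA register can be sketched as a simple data structure in which every cause link and every impact link carries its own addressing actions. All class and field names below are illustrative, not taken from any commercial tool or from the book’s template.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    description: str  # a single cause (left side) or a single impact (right side)
    actions: list = field(default_factory=list)  # actions that break this link

@dataclass
class PetraEntry:
    event: str
    causes: list   # Link objects handled by preventive actions (barrier 1)
    impacts: list  # Link objects handled by recovery actions (barrier 2)

    def unaddressed_links(self):
        """Assurance check: every identified link must carry at least one action."""
        return [l.description for l in self.causes + self.impacts if not l.actions]

entry = PetraEntry(
    event="Project sanctioning delay",
    causes=[Link("NGO opposition", ["community engagement plan"]),
            Link("local environmental concerns", [])],
    impacts=[Link("permitting delay", ["re-sequence pre-FID work, add float"])],
)
# One cause link still has no preventive action attached:
assert entry.unaddressed_links() == ["local environmental concerns"]
```

For a general uncertainty the `causes` list would simply be empty (the event is a given), leaving only recovery actions, which matches the point made above about barrier 2.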
ROLE OF BIAS IN UNCERTAINTY ASSESSMENT

Uncertainty assessment is the business of quantifying stories and averaging out bias. As we all know, storytelling, even in the form of expert opinion, can be biased. We pointed out the manifestation of overconfidence bias when discussing decision making based on assessment of uncertainties after addressing. Let’s discuss the main types of bias more systematically in this section. The two most notorious manifestations of subconscious bias during risk assessment are anchoring and overconfidence. These seem to be linked
148
◾ Risk Assessment and Addressing
to the frequency method, at least indirectly. Moreover, they seem to feed each other. Anchoring is an interesting psychological effect based on previous or initial perception and overconfidence. I worked with one project manager who used to tell his team: “I am pregnant with this number!” This statement indicated that he was very aware of anchoring but could not easily overcome it due to his overconfidence. To convince him of any other assessment was extremely difficult but eventually possible. The questions listed in the previous section, along with challenging his opinions, helped to do this.

Overconfidence is a characteristic of many technical people. Empowered by their previous training, knowledge, experience, and successes, they follow a very specific and results‐oriented thinking process. Any ambiguity would be judged as a lack of confidence. Any possibility of project execution lower than good would be dismissed right off the bat. This could lead to excessively narrow assessments of impacts and underestimating of probabilities. This looks like an almost quasi‐deterministic and wildly optimistic semi‐one‐point‐value approach, which means overconfidence in terms of uncertainty assessment. I describe overconfidence bias partially from my personal experience and background. Overcoming overconfidence is a difficult exercise that requires special effort.6 I tried to apply the calibration methods developed by Doug Hubbard on a couple of occasions. Although the calibration questions were not that project specific (“Please provide your estimate of the height of the Eiffel Tower in feet with confidence level 80%”), the results were real eye‐openers for some participants. Some of the results made direct impacts on participants’ egos and helped to manage overconfidence.

Conscious bias is very possible during risk assessments.
It is usually observed in the form of hidden agendas and normally manifested as exaggeration (overly conservative assessments done on purpose) or window dressing (overly optimistic assessments done on purpose). The role of a facilitator is to recognize motives for conscious bias and to try to challenge and manage it, which could be an extremely political and not‐too‐lucrative activity. My method of managing conscious bias is openness. When such bias is identified it should be discussed openly. I usually propose to take personal responsibility for its particular expression or a specific assessment and make the corresponding clear record in a risk register or minutes of a discussion. The efficiency and “success rate” of this method in avoiding hidden agendas is about 80%.
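The Hubbard‐style calibration mentioned earlier can be scored in a few lines: with well‐calibrated 80% confidence intervals, roughly 80% of an estimator’s intervals should contain the true values. The interval data below is invented purely for illustration.

```python
# Each tuple: (low estimate, high estimate, true value) for one
# 80%-confidence calibration question.
answers = [
    (200, 400, 330),     # hit: true value inside the interval
    (5, 15, 11),         # hit
    (1000, 3000, 4200),  # miss: interval too narrow
]

# Fraction of intervals that actually contain the true value.
hit_rate = sum(lo <= t <= hi for lo, hi, t in answers) / len(answers)

# Well below the target 0.80: this estimator is overconfident and
# should widen his or her intervals.
assert round(hit_rate, 2) == 0.67
```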
Using any RAM brings up a sort of organizational bias, too. There are five standard ranges for probabilities and impacts. Any assessed risk should be allocated to one specific cell of the RAM. Let’s say an uncertain event is assessed as having a probability of 5–15% and a cost impact of $7–$10 million. Using the RAM from Figure 3.2, one may easily assign this uncertainty to the medium category (yellow, score 6), because the probability complies with the low range (1–20%, score 2) and the cost impact with the medium range ($5–$20 million, score 3). What about a situation where the probability is assessed at 10–30% and the cost impact at $15–$25 million? Of course, a prudent approach would be to use the higher probability and impact ranges. But this might be overshooting, anyway. This Procrustean‐bed shortcoming may be fully resolved only when converting deterministic risk data to inputs to probabilistic risk analysis.

Another weak point of using a RAM is that all assessments are RAM‐range specific. If we redefine the ranges tomorrow morning, most of the assessment scores will be different. To address this changeability issue, we have discussed methods of RAM development in this chapter. Those methods do assume the possibility of range changes if the ranges are not developed properly in the first place. The bottom line of the discussion on organizational bias is that a project RAM is a convention that should not be taken as absolute truth written in stone. It is not an ideal but rather a useful tool that supports the right thinking processes. At the end of the day, the most important part of risk management is implementation of addressing actions. The rest of it, including assessments, supports action implementation. If the assessments as‐is and to‐be are about right, they justify reasonable and viable addressing actions that should be consistently implemented to reduce the overall project uncertainty exposure.
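The RAM allocation just described is a simple range lookup. The sketch below assumes illustrative band boundaries: only the 1–20% probability band (score 2) and the $5–$20 million cost band (score 3) are taken from the worked example; the remaining bounds are invented for the sketch.

```python
import bisect

# Upper bounds of the five bands, mapped to scores 1-5.
PROB_UPPER = [0.01, 0.20, 0.50, 0.70, 1.00]        # score-2 band is 1-20% (per text)
COST_UPPER = [1.0, 5.0, 20.0, 50.0, float("inf")]  # $M; score-3 band is $5-$20M (per text)

def ram_score(probability, cost_impact_m):
    """Score = probability band score x cost-impact band score."""
    p = bisect.bisect_left(PROB_UPPER, probability) + 1
    i = bisect.bisect_left(COST_UPPER, cost_impact_m) + 1
    return p * i

# The worked example: probability 5-15% (low band, score 2) and cost
# $7-$10M (medium band, score 3) land in the yellow cell, score 6.
assert ram_score(0.15, 10.0) == 6
```

Using the upper ends of the higher ranges (probability 30%, cost $25M) gives 3 × 4 = 12, which shows how a borderline assessment jumps cells; this is exactly the Procrustean‐bed effect the text describes.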
CONCLUSION

Along with their identification, assessment and addressing of uncertainties lay the foundation for the deterministic risk management method. In many organizations this is the only method used for risk management—no probabilistic methodology, no cost escalation modeling, no selection of options, and so on. Despite criticism of the deterministic method and its obvious shortcomings, I believe this is a very simple and adequate methodology that should be used as the basis of an overall risk management system. It provides clear guidelines on how uncertainties should be managed. When understood properly, it is not
necessary to have sophisticated training, experience, or software tools to start actively and successfully using the deterministic method. No doubt, though, some other more sophisticated and adequate tools should be kept in the risk management “tool box.” I consider the deterministic method a necessary input to those other methods, including probabilistic ones. They will be discussed later in the book. The next chapter accentuates the need to make the deterministic method as practical as possible.
NOTES

1. IPA Institute, “Successful Mega Projects” Seminar (Calgary, Canada, 2009), p. 18.
2. D. Hubbard, The Failure of Risk Management: Why It’s Broken and How to Fix It (Hoboken, NJ: John Wiley & Sons, 2009).
3. J. Haigh, Taking Chances: Winning with Probability (Oxford, UK: Oxford University Press, 2003).
4. G. Owen, Game Theory (Bingley, UK: Emerald Group, 1995).
5. Readers who have had at least basic military training might recall the contents of the field manual for an infantry rifle platoon. The part on defense operations might provide additional insights on the topic.
6. D. Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business (Hoboken, NJ: John Wiley & Sons, 2010).
CHAPTER SIX
Response Implementation and Monitoring
Questions Addressed in Chapter 6
▪ Where does the rubber meet the road in risk management?
▪ Why should addressing actions become part of project team work plans?
▪ Why is assessment after addressing (to‐be) more important than assessment as‐is?
▪ Do we keep our promise to successfully implement all approved addressing actions?
▪ When should uncertainties be closed or accepted?
▪ What is the role of bias? ◾

EVEN THOUGH THIS CHAPTER is short, it is one of the most important chapters of this book. Response implementation is where the rubber meets the road in risk management. The rest is theory or auxiliary steps and methods. Only development of project reserves using probabilistic methods is equally important because that is about time and hard currency.
MERGING RISK MANAGEMENT WITH TEAM WORK PLANS

Three‐dimensional integration of risk management (Figure 2.1) implies that any specialist at any level of the organization may become a risk or action owner upon approval by management (Table 2.1). The most sensitive challenge here is to ensure that risk or action ownership does not conflict with his or her direct responsibilities. Working on a risk or an action should become part of everyday activities and be fully aligned with them. That is easier said than done. How often has a specialist who was assigned as a risk or action owner during one of those grandiose annual two‐day workshops totally forgotten about it afterwards and needed to be reminded during the next annual workshop? And sometimes uncertainty or action ownership is assigned in absentia, which makes forgetting about ownership even more convenient.

Unfortunately, it is not unusual for project risks to be identified and addressing actions to be proposed just for show. This show is called a decision gate. The top show of this kind is the final investment decision (FID) decision gate. I heard about a situation where a project team credibly reported to reviewers about identified risks, proposed risk response plans, and so on, with all those risks having been identified a week prior to the review sessions. Somebody had just discovered that a risk review should be part of the FID review. The project was gloriously approved, with some decent and not‐too‐onerous recommendations to slightly improve the project risk management system, and miserably failed in the course of its execution. This tick‐the‐box type of organizational bias is not as rare as we might think and leads to quite predictable consequences. Both the organizational framework and the tools of the risk management system should support merging risk management with working plans.
The project risk management plan should contain general guidelines on frequency and triggers for reviews of risks of various severity and sources. But this is not enough. Ideally, the risk management plan should have an appendix with an annual schedule of reviews of particular risks of various severity and sources (risk breakdown structure [RBS] categories). This should include major expected milestones of project development or at least placeholders wherever exact dates are not yet defined. For instance, moving to a new phase, purchasing of major equipment, and signing of a major construction contract all could be triggers for risk reviews. The major source of such information is usually a software tool used by engineering and procurement for work‐package development.
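Such a review-schedule appendix could be encoded very simply. The frequencies, severity labels, and trigger names below are assumptions for illustration, not prescriptions from the risk management plan described above.

```python
# Review frequency by severity level (days between reviews) plus
# milestone triggers that force an out-of-cycle review.
REVIEW_FREQUENCY_DAYS = {"red": 30, "yellow": 90, "green": 180}

MILESTONE_TRIGGERS = [
    "moving to a new phase",
    "purchasing of major equipment",
    "signing of a major construction contract",
]

def next_review_due(severity_level, days_since_last_review):
    """True when a risk of the given severity is overdue for review."""
    return days_since_last_review >= REVIEW_FREQUENCY_DAYS[severity_level]

# A red risk last reviewed 45 days ago is overdue; a green one is not.
assert next_review_due("red", 45) is True
assert next_review_due("green", 45) is False
```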
MONITOR AND APPRAISE

The loop of the risk management process in Figure 2.2 may repeat itself several times for each uncertainty until the uncertainty is finally accepted or closed. It reflects amendments and transformations of a particular uncertainty due to the presence of various internal and external changers, including addressing actions. When some addressing actions have been completed, the as‐is assessment should be revisited to take the completed actions into account. Some additional addressing actions could be proposed. Figure 2.2 points to the possibility of an extended loop that could head for Identify (dotted line) instead of Assess as‐is. This reminds us of the fact that approved addressing actions are internal and often uncontrollable uncertainty changers (Figure 1.4): they could fully or partially fail, or their implementation might lead to new uncertainties. This possibility should be checked during Monitor and Appraise. And, of course, new uncertainties are often identified during the Monitor and Appraise steps.

The discussion about new uncertainties and failure of actions has practical ramifications when defining project reserves. Decision making about risk reserves is normally done based on project uncertainty exposure to‐be. Inputs to probabilistic cost models take in data on general uncertainties and uncertain events after addressing. This means the credibility of the developed cost reserve is subject to full implementation of all approved addressing actions and assurance that no new uncertainties are induced. In other words, the credibility of the developed project cost reserve has a condition: all internal uncertainty changers (addressing actions) should be fully controllable in the sense shown in Figure 1.4. Moreover, this assessment is done before the FID, which is the end of the front‐end loading (FEL) phases (Figure 1.1).
By that time some of the addressing actions have been implemented or have failed, and some of them have induced new uncertainties, which are all supposedly factored into the project reserve. This reserve is taken into account as part of the FID. However, the majority of approved addressing actions will be implemented after the FID in Execute. Some of them will fail and some will generate new uncertainties, too. This situation provides room for subconscious and conscious bias when assessing the level of control over proposed addressing actions. The real required project cost reserve might be bigger than the one approved during the FID. The bottom line is that the adequacy of project cost and schedule reserves is a function of execution. If execution were not carried out as presumed explicitly
or implicitly when developing the project reserve, the models to derive reserves would be at quite a big distance from future project reality. Such a project could become a failure due to overspending related to higher‐than‐anticipated uncertainty exposure in Execute, stemming from partial implementation of approved addressing actions. In other words, uncertainty exposure in Execute would not change to the extent contemplated during the FID. The same could happen to the schedule risk analysis and the corresponding schedule reserve (floats). We discuss methods of handling project reserves in Chapter 12. When all proposed addressing actions are implemented and no additional actions proposed, the as‐is and to‐be assessments of an uncertainty become identical, giving rise to residual uncertainty exposure. After this the uncertainty exits the risk management process, being either accepted or closed.
WHEN UNCERTAINTIES SHOULD BE CLOSED

The general rule is that an uncertainty may be closed when it becomes irrelevant or insignificant. Technically, a closed uncertainty is a nonexistent uncertainty. It is off the project risk management radar screen. It does not require addressing any more and does not contribute to risk reserves. An uncertainty may be closed in only four cases:

1. When it is avoided and not relevant to the project any more (e.g., new reference case, change of design, etc.)
2. When it is fully mitigated or reduced to at least the green level after all the response actions are completed
3. When the time window that defines the uncertainty’s relevance is past (e.g., permits received, no permitting risks)
4. When it is fully transferred (e.g., good insurance policy purchased, no deductibles)

A closed uncertainty stays in the corresponding log as inactive and should not be erased. It becomes a part of the project risk management legacy system. This can be used by another project the organization undertakes in the future. It also could be reopened if risk exposure changes. Possible confusion could stem from application of the ALARP (as low as reasonably practicable) approach toward addressing uncertainties. ALARP rarely means zero residual exposure. Very often risks that have reached the
ALARP level are being closed. But in many cases ALARP still means quite material possible downside deviations from the project objectives. For instance, the actual loss of two vehicles over 135 space shuttle missions corresponds to a frequency‐based probability of 1 in 67.5. A Monte Carlo evaluation of the uncertainty distribution for the probability of loss of the space shuttle1 produced a probability of 1 in 89 (mean value). This was quite a material risk, which was considered ALARP by NASA. Needless to say, it was certainly kept active and was never closed in NASA risk registers until the space shuttle program ended in 2011. Closed ALARP uncertainties are uncertainties neglected from now on. The ALARP level depends on the risk appetite adopted by an organization or a project and does not necessarily mean zero exposure. If an uncertainty cannot be closed, it should be accepted.

There may be a tendency to close all risks that are qualified as green after addressing, according to the RAM of Figure 3.2. This may be done in most cases, but not before all approved addressing actions are successfully implemented. Until then, the green status of such a risk should be treated as conditional and the risk should stay active.
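The frequency‐based figure above reduces to a few lines of code. The sketch below is illustrative only and is not NASA's probabilistic risk assessment model: it simply samples a Beta posterior around the observed 2‐losses‐in‐135‐missions record (the prior and the resulting mean are my assumptions, so the mean will not match the 1‐in‐89 figure from the cited study).

```python
import random

losses, missions = 2, 135
freq = losses / missions                  # frequency estimate: 1 in 67.5

# Illustrative Monte Carlo over the uncertainty in the loss probability:
# sample a Beta(losses + 1, missions - losses + 1) posterior (uniform prior).
random.seed(7)
samples = [random.betavariate(losses + 1, missions - losses + 1)
           for _ in range(100_000)]
mean_p = sum(samples) / len(samples)      # close to 3/137

print(f"frequency estimate: 1 in {1 / freq:.1f}")   # → 1 in 67.5
print(f"posterior mean:     1 in {1 / mean_p:.0f}")
```

A real PRA, of course, models mission phases and failure modes explicitly rather than treating each flight as a coin toss; the point here is only that a "1 in N" ALARP figure carries its own uncertainty distribution.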
WHEN SHOULD RESIDUAL UNCERTAINTIES BE ACCEPTED?

The short answer is: when they cannot be closed after addressing. Once again, only active uncertainties give rise to project cost or schedule reserves. As discussed in the previous section, the ALARP concept might be a major source of confusion here. ALARP means residual exposure; the question is whether this exposure should be neglected. For instance, if it is not reasonable to further address some Cost and Schedule uncertain events or general uncertainties, their residual levels give rise to project cost and schedule reserves unless neglected. If those levels are low, they may be neglected in the reserves. If an uncertainty is closed, it becomes inactive and does not give rise to reserves by default.
CONCLUSION

This chapter accentuates the obvious importance of proper uncertainty response implementation and monitoring. Whereas uncertainty response implementation and monitoring is the single most important risk management activity,
proper assessment of uncertainties is the single most important activity for development of project cost and schedule reserves.
NOTE

1. T. Hamlin, M. Canga, R. Boyer, and E. Thigpen, “2009 Space Shuttle Probabilistic Risk Assessment Overview” (Houston, TX: NASA, 2009).
CHAPTER SEVEN
Risk Management Governance and Organizational Context
Questions Addressed in Chapter 7

▪ What risk management keys are required for decision gates?
▪ How unique should a project be allowed to be?
▪ What is the role of ownership in risk management?
▪ What is the terminology for various types of risks?
▪ What forms should risk reporting take?
▪ What sort of risk management system health self‐check should be in place?
▪ Is a risk manager really a manager or a leader?
▪ What is the role of bias? ◾

THE METHODS AND TOOLS described in this book should become
part of project governance in order to assure their consistent application and real positive impact on project performance. Consistent development and execution of a project according to corporate processes enhances the predictability of the project outcome. Besides this, the role of project governance has two specific implications. First, project cost and schedule reserves approved as part of a sanctioned project budget and duration at the final investment
decision (FID) are subject to presumed execution. Second, these reserves are meaningful only if project execution includes all approved addressing actions. These two additional conditions are crucial for the credibility of project cost and schedule reserves derived from probabilistic models. Development of project reserves is discussed in detail in Chapter 12. In the meantime, this chapter overviews risk management deliverables from the angle of organizational context and the corresponding requirements.
RISK MANAGEMENT DELIVERABLES FOR DECISION GATES

Table 1.1 describes the main phases of project development. A decision gate follows each of these phases. The particular project development standards and requirements for various project disciplines depend on the size, scope, and complexity of a project. However, the key intent is to ensure an acceptable level of value after each phase so that a decision to proceed to the next phase can be made. The most important decision in terms of commitment to the project is the FID, which should be made after Define.

The fundamental issue with the organizational context of project management relates to the definition of a project, which is a temporary endeavor to create a unique product, service, or result.1 When a project attains its objectives it should terminate, whereas operations may adopt evolving sets of objectives and the work continues. The intrinsic “uniqueness and temporariness” of projects implicitly assumes a degree of uniqueness, instability, inconsistency, or “tailor‐made‐ness” of the processes and procedures used to deliver projects. This is a fundamental source of all kinds of surprises, unknown uncertainties, and deviations from project objectives, stemming from the organizational category of a project’s risk breakdown structure (RBS). To a certain degree uniqueness is a synonym for unpredictability (and not only in the project environment). However, the intrinsic uniqueness of a project should be distinguished from inconsistent application of project development and execution processes and procedures, if these are well developed at all. In some cases project processes are well defined but not followed because the project’s uniqueness is accentuated. In other cases the processes and procedures are not established at all. For instance, how many times have you heard about issues with change management, interface management, or the timely provision of
vendor data for engineering, and so on? Operations come across such issues on an everyday basis but have to react immediately. This allows operations to sharpen and hone all their aspects continually. Not surprisingly, the Six Sigma approach came from operations, not from projects. The statistical approach requires repetitiveness to promote the frequency‐based evaluation of deviations from a norm, whatever the norm is.

Postmortem lessons‐learned activities do not provide much value for a given project because they come too late. Benchmarking could be more relevant, although it usually requires some conditioning to be fully applicable to a particular project. If an organization has well‐developed lessons‐learned and benchmarking processes, those are the closest analog a project has to the feedback of ongoing operations. Some organizations develop and execute the same or very similar projects year after year, based on the same technologies and construction methods and in the same regions. Such semi‐operational projects or semi‐project operations usually lose their uniqueness substantially, leaving room for only a few uncertainties. However, even very standardized projects or operations will come across some uncertainties. Otherwise, there would not be a discipline called “operational risk management.”

Hence, one of the key goals of project decision gates from the organizational governance viewpoint is to ensure that a project is not too unique to bring forth unexpected outcomes. The more a project manifests that it is rather “an operation,” the higher its chances of passing a decision gate. In a way, risk management along with benchmarking is a tool that allows one to fathom and cut down the degree of project uniqueness, especially in terms of deviations from established practices, processes, and procedures in various disciplines. It is a Procrustean bed type of mechanism, in quite a good value assurance sense.
If the uniqueness of a project is too high and cannot be fully controlled, the substantial uncertainty exposure scares decision makers stiff. It is not practical to list all the value assurance requirements for all decision gates for all project disciplines. However, Table 7.1 points to the general project governance requirements related to risk management. Although risk management should be integrated with all project disciplines, it most closely supports estimating and planning in the development of project budgets and schedules. The role of risk management is to justify required reserves based on uncertainty exposure. In terms of organizational governance, exposure to uncertainty could be treated as exposure to uniqueness.
TABLE 7.1 Generic Risk Management Requirements for Decision Gates

Identify
  General Requirement: Identification of major risks and their use for assessment purposes.
  Key Deliverables: Preliminary risk management plan (RMP) approved for Identify; preliminary risk register.

Select
  General Requirement: Identification, assessment as‐is, development of response plans, and assessment to‐be for all main concepts/options; use of risk information for option selection.
  Key Deliverables: RMP approved for Select at the beginning of Select; risk registers for each option; list of risks of Cost and Schedule impacts for probabilistic risk analysis for each option; project execution through risk addressing (PETRA) reviews for most critical risks; probabilistic Cost and Schedule risk analysis by the end of Select for a selected option; cost escalation reserve.

Define
  General Requirement: Full development of response plans and more precise assessment to‐be for a selected option for Execute and Operate.
  Key Deliverables: RMP approved for Define; expanded risk register; PETRA reviews for most critical risks; probabilistic Cost and Schedule risk analyses to develop project cost and schedule reserves; updated cost escalation reserve.

Execute
  General Requirement: Management of risks to ensure project delivery as planned; transfer of risk information to Operations at the end of Execute.
  Key Deliverables: RMP approved for Execute; expanded risk register; PETRA reviews for most critical risks; regular probabilistic Cost and Schedule risk analyses and cost escalation reviews to draw down cost reserves.

Operate
  General Requirement: Management of risks to ensure continued operations and planned value.
  Key Deliverables: RMP approved for Operate; updated risk register containing only operational risks.
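Governance checks like Table 7.1 lend themselves to simple automation. The sketch below is a hypothetical, abbreviated encoding (the dictionary, deliverable labels, and function name are mine, not from the book) that a gate‐review checklist script could use to flag missing risk deliverables.

```python
# Hypothetical, abbreviated encoding of Table 7.1 (not the complete list).
GATE_DELIVERABLES = {
    "Identify": ["preliminary RMP", "preliminary risk register"],
    "Select": ["RMP approved for Select", "risk register per option",
               "probabilistic cost/schedule risk analysis for selected option",
               "cost escalation reserve"],
    "Define": ["RMP approved for Define", "expanded risk register",
               "PETRA reviews for most critical risks",
               "project cost and schedule reserves"],
    "Execute": ["RMP approved for Execute", "expanded risk register",
                "regular probabilistic analyses to draw down reserves"],
    "Operate": ["RMP approved for Operate", "operational risk register"],
}

def missing_deliverables(phase, produced):
    """Return the Table 7.1 deliverables not yet produced for a phase."""
    return [d for d in GATE_DELIVERABLES[phase] if d not in produced]

print(missing_deliverables("Identify", {"preliminary RMP"}))
# → ['preliminary risk register']
```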
OWNERSHIP OF UNCERTAINTIES AND ADDRESSING ACTIONS

It is mandatory that each identified uncertainty and addressing action gets an owner. Combined, Table 2.1, Figure 2.1, and Table 5.1 promote a line‐of‐sight approach. Namely, the higher the severity of the uncertainty,
▪ The more senior the project team member who should own it
▪ The higher in the package–project–business unit hierarchy it should be reported
▪ The more frequently it should be reviewed
A risk owner is supposed to coordinate the activities of action owners to ensure that all approved preventive and recovery actions are implemented on time. According to the 3D concept of risk management (Figure 2.1), a risk owner and action owners could belong to totally different disciplines and levels of the organization. Moreover, those individuals may work for different organizations. This implies a lot of dotted reporting lines in a virtual matrix‐type “risk addressing organization.” Following this logic, an uncertainty owner should be treated as the CEO of that uncertainty addressing organization, with full accountability for addressing the uncertainty.

Several similar risks could be managed by a category owner (Table 2.1). For instance, the engineering manager for the whole project could become the category owner for all risks stemming from the engineering category of the project’s risk breakdown structure. The civil or mechanical lead could become the risk category owner for his or her discipline (the corresponding engineering subcategory in the RBS). This picture could be complicated by the fact that estimators, schedulers, and procurement people are intimately involved in managing corresponding general uncertainties, too. Their responsibility and engagement should be clearly defined. So, risk‐related interface management among disciplines is a particular challenge that is the responsibility of the project risk manager.

For the most critical risks that require addressing actions delivered by several disciplines, which may belong to different organizations, special task teams could be created. For instance, managing labor productivity uncertainty might require involvement of representatives of the owner’s organization, consortium/joint venture (JV) partners, the engineering, procurement, and construction (EPC) contractor, and several construction subcontractors. It is a must to establish a process that ensures timely implementation of addressing actions.
Both the project risk manager and the risk owners should be interested in having such a process, which should be part of the RMP. The process could be supported by the risk tools selected by the project. Most modern risk database software tools have variations of notification functionality, usually linked to action completion dates. Two major types of such functionality are notification emails sent to action owners and warning signs or marks appearing in reports retrieved from the database. The first early alarm mark or notification email is triggered, say, two months prior to the planned action completion date: a pleasant email is sent to the action owner automatically, or a green mark appears in front of the action in the report. Another (yellow) mark or a more formal email could be sent one month prior to the deadline. If a deadline has passed and an action is still not closed, a less‐pleasant email or red mark could be generated. The email could
also be sent to management. This functionality implies a clear requirement for maintaining risk databases: all actions should have completion dates. Some risk managers, and especially corporate risk management, can become obsessed with email alert functionality. This is a relatively new feature, not easily maintained by projects and not available in all commercial tools. It is a nice‐to‐have feature if properly used. Additional discussion of the required level of automation is provided in Chapter 2. Automation is a double‐edged sword in terms of the adequacy of a risk management system.
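The two‐months/one‐month/overdue escalation described above maps naturally onto a small scheduling routine. This is a sketch of the logic only (the function name and exact day thresholds are my assumptions); commercial risk databases implement variations of it.

```python
from datetime import date

def alert_status(completion_date, today):
    """Color-code an open addressing action by its planned completion date."""
    days_left = (completion_date - today).days
    if days_left < 0:
        return "red"      # deadline passed, action still open: escalate to management
    if days_left <= 30:
        return "yellow"   # about one month out: a more formal reminder
    if days_left <= 60:
        return "green"    # about two months out: a pleasant early notice
    return "none"         # too early for any notification

print(alert_status(date(2013, 7, 15), today=date(2013, 6, 1)))  # → green
```

A nightly job would run this over every open action in the database and either send the corresponding email or stamp the mark into the monthly report.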
MANAGEMENT OF SUPERCRITICAL RISKS

Several terms related to the most critical uncertainties have been mentioned in this book. Black swans (broiler and wild), show‐stoppers, and game changers, both known and unknown, have been discussed. The purpose of this section is to provide an overview of the terminology for the most critical project uncertainties, the ones that I would call supercritical. This discussion is based on a 5 × 5 risk assessment matrix (RAM) (Figure 3.2) and on the possibility of its expansion for very high impacts.

Previously we discussed the three main categories of risks (Table 5.1) based on a ranking established by use of the RAM. Using three categories—critical, material, and small—is a major risk classification approach, although other ranking classifications could be used. Some of them are well justified, some not. Identified or known show‐stoppers and game changers should be listed in a project RMP to outline situations where project baselines would no longer be viable. An obvious contradiction with the RAM‐based approach is that show‐stoppers and game changers usually have very low probability of occurrence (RAM score 1) and very high impacts (score 5). This qualifies them as small (green) risks only (overall score 5). Even if we treated risks of score 5 as material (yellow), they would still be treated as relatively unimportant.

Recall that the very high impact range of the RAM does not have an upper limit. In the sample RAM in Figure 3.2, any uncertainty with a possible impact on project objectives of more than $50 million, 6 months, one fatality, and so on should be considered very high. It was assumed that one or two events like this could be handled by a project without clear failure. What if overspending due to the occurrence of some uncertainties were $100 million, $0.5 billion, $1 billion, or more? What if the delay were nine months, one year, two years, or more?
What if the number of fatalities were 5, 10, 100, and so forth?
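The scoring paradox just described is easy to see in a multiplicative RAM. In the sketch below the red/yellow thresholds are hypothetical stand‐ins for the bands of Figure 3.2, which is not reproduced here.

```python
def ram_rank(probability_score, impact_score, red_from=15, yellow_from=6):
    """Rank an uncertainty on a multiplicative 5 x 5 RAM.
    The thresholds are hypothetical; Figure 3.2 defines the real bands."""
    score = probability_score * impact_score
    if score >= red_from:
        return score, "red (critical)"
    if score >= yellow_from:
        return score, "yellow (material)"
    return score, "green (small)"

# A show-stopper: very low probability (1) but very high impact (5).
print(ram_rank(1, 5))  # → (5, 'green (small)'): ranked as a small risk
# Compare with a routine mid-range risk:
print(ram_rank(3, 4))  # → (12, 'yellow (material)')
```

The first call shows the contradiction: a risk capable of destroying the project baselines lands in the green cell, below an ordinary medium risk.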
One might imagine a sixth impact range in the RAM (Figure 3.2) that defines when a very high risk should be regarded as supercritical, or a show‐stopper or game changer.2 So, let’s treat show‐stoppers and game changers as supercritical risks. These should be listed and communicated as a waiver or demarcation of responsibilities between a project team and corporate risk management, along with the threshold definitions. They should become part of the corporate or business‐unit portfolio risk management, being promoted to the business‐unit or corporate risk registers. For instance, a project team may state, subject to agreement with corporate management, that if a delay related to project sanctioning by government is more than 12 months, this should become a game changer and be taken over by the corporation. If sanctioning is done after such a delay, previous baselines should be recycled and developed from scratch.

This angle has additional ramifications related to developing project risk reserves. Risk reserves underpin the desired and reasonable level of confidence of baselines. Show‐stoppers and game changers destroy these baselines, or, mathematically speaking, they ensure that those baselines have close‐to‐zero confidence levels. For this reason, project show‐stoppers and game changers, even if they are known, usually are not taken into account in project probabilistic risk models, as they drastically redefine (knock down or knock out) project baselines. This outlines the limits of project probabilistic risk models through excluding certain known show‐stoppers and game changers from probabilistic calculations.3 Simply, there is no point in running a project model that includes factors that destroy the project.
A good discussion on this is provided by Chapman and Ward in their great book on risk management.4 Namely, if part of the corresponding reserve is estimated as the product of a very low probability and a very high impact, this reserve would not be nearly enough to cover the catastrophic event if it occurred. For instance, if an impact is assessed at $100 million and its probability at 1%, the required risk reserve could be assessed at $1 million. On the other hand, if the catastrophic event does not happen at all during the project lifecycle, which is likely due to its very low probability, this reserve becomes free pocket money for the project. This is not an efficient way of doing business; the better method is portfolio risk management at the corporate level through self‐insurance or purchasing third‐party insurance.

The issue with unknown‐unknown corporate risks is that it is impossible to come up with an explicit disclaimer as in the case of known‐unknown show‐stoppers and game changers. However, very general force majeure types of disclaimers would be worthwhile to develop. Project teams should learn
this skill from contract lawyers. These generic demarcation‐type unknown show‐stoppers and game changers should be taken into account when making final investment decisions. There is a method to evaluate project unknown unknowns if unknown show‐stoppers and game changers are excluded. As discussed in Chapter 4, the room for unknown uncertainties could be understood as a measure of the quality of a project risk management system. The corresponding project‐level unknown‐unknown allowance is discussed in Chapter 12.

Some practitioners distinguish strategic and tactical risks. Tactical risks are general uncertainties related to the level of engineering and procurement development. In the course of project development these are progressively reduced. Corresponding engineering reviews (30%, 60%, 90% reviews) and major procurement milestones where prices for work packages are locked are the major occasions to review and downgrade such general uncertainties. So, tactical risks are implicitly defined as package general uncertainties. According to this logic, strategic risks are possible uncertain events related to project development and execution (project level or higher). I have always had an issue with the full justification of this logic, as some general uncertainties that should be treated as tactical risks could be more devastating than most of the uncertain events or “strategic risks.” For instance, labor productivity general uncertainty, a tactical risk by default, may blow out any project if not managed properly. If a project team elects to delineate strategic and tactical risks at all, clear rules defining what is considered strategic and what tactical should be set up in the RMP. These rules should not contradict the line‐of‐sight approach and 3D risk management (Figure 2.1), the three risk level/severity categories (Table 5.1), and, most important, the six major types of uncertainties (Figure 1.2).
I would call all the red uncertainties of Table 5.1 “strategic,” although they are called “critical” in the table. I would not mind if they were called “critical/strategic.”
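The expected‐value argument against reserving for supercritical risks, discussed above with the Chapman and Ward reference, amounts to simple arithmetic:

```python
# Figures from the text's example of a catastrophic (supercritical) risk.
impact = 100_000_000   # $100M impact if the event occurs
probability = 0.01     # 1% chance over the project lifecycle

ev_reserve = probability * impact
print(f"expected-value reserve: ${ev_reserve:,.0f}")  # → $1,000,000

# If the event occurs, the reserve covers only 1% of the loss;
# if it does not occur (the likely case), the $1M sits idle.
shortfall = impact - ev_reserve
print(f"shortfall if the event occurs: ${shortfall:,.0f}")  # → $99,000,000
```

Either way the project loses: the reserve is useless against the event and wasted without it, which is why such exposure belongs in corporate portfolio risk management or insurance rather than in a project reserve.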
RISK REVIEWS AND REPORTING

Management has to keep its finger on the pulse of project risk management in order to make timely decisions. The nature of good risk management is that it provides ammunition for proactive decision making (preventive actions). But it should also be ready for reactive recovery actions as part of crisis management.
This requires regularly answering various questions related to project uncertainty exposure and its dynamics. Is overall project risk exposure the same as last month, or are there signs that it is evolving due to some uncertainty changers in the internal and external environment? Are new uncertainties emerging, or are existing ones changing in severity? Are some package or project uncertainties growing in severity such that they should be reported (escalated) to business‐unit or corporate management? Have some addressing actions failed, or been successfully implemented, changing the overall uncertainty exposure as‐is? Should additional addressing actions be developed and implemented to ensure the required assessments to‐be at a particular moment in the future? Timely answering of such questions requires the right frequency of uncertainty reviews and efficient methods of reporting and escalation, based on the previously discussed line of sight, the 3D risk management concept (Figure 2.1), and the classification of Table 5.1. Normal practice is that a monthly risk report is prepared by the risk manager, based on the risk reviews and events that have occurred during the reporting period. Not all identified and managed active risks should be reviewed every month. Table 7.2 describes the standard frequency of reviews. Table 7.2 is too generic to reflect all risk review possibilities, but it sets up minimal formal requirements. In practice, risks may be discussed as part of engineering, constructability, package, and other reviews. Major project milestones, including decision gates, could be triggers and additional (not the only) occasions for risk reviews. Stages of package development, discussed in Chapter 10, contain a lot of risk review possibilities, too. Risk reviews of particular risk categories could be initiated by risk category owners and task team leaders.
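The review‐frequency rule of Table 7.2 is the kind of policy that can be written straight into a risk tool's configuration. A minimal sketch (the dictionary and function names are mine) that turns the table into a next‐review calculation:

```python
# Encoding of Table 7.2: severity -> (color code, review interval in months).
REVIEW_POLICY = {
    "critical": ("red", 1),     # high-level uncertainties: monthly
    "material": ("yellow", 2),  # medium-level uncertainties: bimonthly
    "small":    ("green", 3),   # low-level uncertainties: quarterly
}

def review_interval_months(severity):
    """How many months may pass before the next mandatory review."""
    return REVIEW_POLICY[severity][1]

print(review_interval_months("material"))  # → 2
```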
TABLE 7.2 Standard Frequency of Reviews

Uncertainty Level   Uncertainty Severity   Color Code   Review Frequency
Low                 Small                  Green        Quarterly
Medium              Material               Yellow       Bimonthly
High                Critical               Red          Monthly

A project RMP should have a full description of rules on the types and frequencies of risk reviews. Ideally, an annual schedule of those reviews should be part of the plan as a realization of those rules. Information collected about risks should be included in a monthly risk report, which normally contains four main parts. First is an executive summary that describes current risk exposure and major developments related
to project risk management. No more than two paragraphs should provide management at any level of the organization with an informative synopsis. Five to seven points on main risk management activities, highlights, lowlights, concerns, realized risks, failed or successfully implemented addressing actions, and so on may be added.

Second is a snapshot of the current project uncertainty exposure. Statistics on the overall number of risks by severity and RBS category should be provided. Figure 7.1 is a sample of reporting charts developed for a mega‐project that had four semi‐independent components or subprojects. Some software packages provide visualization statistics as part of their standard functionality; this can also easily be done manually. Ideally, the risk database tool used by a project should be tailor‐made to support the reporting required by management. All the charts in Figure 7.1 were generated manually using MS Excel. The beauty of MS Excel–based reporting is that slicing‐and‐dicing of information can be done exactly according to tailor‐made project reporting requirements, and not merely as the developers of a commercial risk database adopted by a project envisioned and imposed it.

Third, besides the static snapshot statistics of Figure 7.1, dynamic statistics on changes of uncertainty exposure during the reporting period should be provided. This usually reflects:
▪ New risks identified by RBS category and severity (assessments before and after addressing)
▪ Risks that changed severity (assessments before and after addressing)
▪ Risks closed during the reporting period by RBS category and severity (assessments before and after addressing)
▪ New and closed addressing actions

Fourth, it would be beneficial to include a list of all critical‐red downside uncertainties (assessment before addressing) and upside uncertainties (assessment after addressing), accompanied by the names of their owners. All addressing actions for these, including action owners’ names and completion dates, should be shown for accountability purposes. Additional statistics based on plotting of uncertainties on a 5 × 5 RAM could be adopted. This approach is based on the visualization of uncertainties introduced by Figures 5.1 and 5.2. Uncertainties may be plotted in corresponding RAM cells depending on their probabilities and impacts before and after addressing. If an uncertainty has impacts on more than one objective, the top
impact score is used to assign it to a particular RAM cell. If the number of reported uncertainties is high, such visualization could be done separately for each selected project objective. As an option, the total number of uncertainties belonging to each particular RAM cell could be shown. This statistic is shown in Figure 7.1.

[FIGURE 7.1 Sample of Monthly Risk Reporting: master risk register statistics for a mega‐project with four components, showing counts of upside and downside uncertainties before (as‐is) and after (to‐be) addressing, by severity level (red/yellow/green), by RBS category, and by RAM cell (probability × impact).]

Figure 7.1, as well as Figures 5.1 and 5.2, demonstrates options for statistical reporting and visualization of uncertainties. A particular project may develop its own set of standard charts and templates for monthly reporting to better reflect the organizational context of project risk management and the needs of decision makers. The monthly risk report should be concise enough to provide decision makers with a quick uncertainty snapshot, yet detailed enough to provide management with sufficient and comprehensive information for efficient and timely decision making.
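The snapshot statistics described for Figure 7.1 reduce to simple counting over the risk register. A sketch with a hypothetical four‐entry register (the field names and entries are mine, for illustration only):

```python
from collections import Counter

# Hypothetical mini risk register; "severity" is the as-is (before addressing) rank.
register = [
    {"category": "Commercial",   "severity": "critical", "upside": False},
    {"category": "Commercial",   "severity": "material", "upside": False},
    {"category": "Construction", "severity": "critical", "upside": False},
    {"category": "Regulatory",   "severity": "small",    "upside": True},
]

by_severity = Counter(r["severity"] for r in register)
by_category = Counter(r["category"] for r in register)
upside_count = sum(r["upside"] for r in register)

print(dict(by_severity))    # → {'critical': 2, 'material': 1, 'small': 1}
print(dict(by_category))    # → {'Commercial': 2, 'Construction': 1, 'Regulatory': 1}
print("upside:", upside_count)  # → upside: 1
```

The same counters, run month over month and differenced, yield the dynamic statistics (new, changed, and closed risks) required for the third part of the report.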
BIAS AND ORGANIZATIONAL CONTEXT

Previously we discussed several expressions of bias in risk management, that is, systematic error in the identification, assessment, and addressing of risks. Obviously, if bias were present in its most notorious form—organizational—the whole project risk management system could be compromised. However, the milder conscious and subconscious manifestations of bias that pertain to individuals involved in risk management should be properly managed, too. The dividing point between organizational and psychological types of bias is a gray area. For instance, if a project, business‐unit, or corporate risk manager/director/VP is biased in any particular way, the corresponding systematic error of psychological bias could be amplified as it propagates across the organization, becoming organizational bias. Some examples of this were discussed earlier. One overconfident corporate risk manager enthusiastically developed and tried to implement a probabilistic schedule risk analysis guideline for a whole company. You might wonder, “What’s wrong with that?” First, the guideline was remarkably inadequate. Second, it turned out that the risk manager had never before built or run probabilistic schedule risk models! These two points have a clear cause‐and‐effect link. But my point is that imposing guidelines like this on an entire organization is a perfect example of the conversion of personal cognitive bias into organizational bias. Lack of knowledge and experience combined with abundant overconfidence and arrogance in one individual became a cause of organizational bias.
It is not possible to exclude all kinds of bias, for a simple reason: people who work in risk management are not robots. They have their unique experiences, backgrounds, education, preferences, psychological characteristics, and so on. Hence, we do not view bias of any type, including organizational bias, as an uncertain event. Being a given phenomenon, bias is a type of general uncertainty. The general strategy to handle any type of bias is awareness: it is important to inform project team members and decision makers about the major types of bias. The main strategy to address manifestations of bias, as a systematic error of the risk management system and a general uncertainty, would be Mitigate‐Recover: averaging out bias and calibrating it. This brings up three additional topics: (1) internal health self‐checks, (2) external sanity checks, and (3) the role of the risk manager.
Internal Health Self‐Checks An organization may develop an internal health self‐check list to ensure that there are no obvious or notorious incidences of bias in its risk management system. This should reflect all three components of the risk management system: organizational framework, risk management process, and tools. At a minimum it should indicate that all three components are in place, which is not always the case. The most important component—organizational framework—is often missed. Table 7.3 represents a typical risk management self‐check. It is based on a document developed for a CO2 sequestration project I worked on as a consultant. Despite the obvious importance and value of self‐checks, including sanity self‐checks, their role should not be exaggerated. They themselves could reflect organizational bias. One of the key expressions of this is the slogan, “That’s our way of doing things!” Using this approach based on doubtful uniqueness, any lame risk management system could be announced as brilliant by default and the best in the universe. External health checks might be the only remedy against this type of psychological and organizational phenomenon.
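Tallying such a self‐check can be mechanized trivially. The sketch below is purely illustrative: the three‐level scoring scale, the shortened aspect names, and the sample ratings are my assumptions, not prescriptions from this book.

```python
# Illustrative tally of a risk management health self-check.
# Scale (invented): Unacceptable = 0, Acceptable = 1, Excellent = 2.
LEVELS = {"Unacceptable": 0, "Acceptable": 1, "Excellent": 2}

ratings = {  # hypothetical ratings for the aspects of the self-check
    "Risk Register Quality": "Acceptable",
    "Assessments": "Excellent",
    "Addressing": "Acceptable",
    "Implementation": "Unacceptable",
    "Review and Monitor": "Acceptable",
    "Organizational Framework": "Acceptable",
    "Project Reserves Integration": "Excellent",
    "Risk Manager": "Acceptable",
}

score = sum(LEVELS[r] for r in ratings.values())
# Any "Unacceptable" aspect is flagged for follow-up regardless of the total.
flags = [aspect for aspect, r in ratings.items() if r == "Unacceptable"]
print(score, flags)
```

An external reviewer could re‐rate the same aspects independently; a large gap between the two tallies is itself a symptom of the organizational bias discussed above.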
External Sanity Checks In order to reduce the possibility that an internally developed project risk management system will miserably fail, regular sanity checks should be undertaken. These may take the form of cold‐eye and benchmarking reviews. Both could be done as related to either a risk management system at large or the particular results produced by it.
TABLE 7.3 Risk Management Health Self‐Check

Risk Register (RR) Quality
Unacceptable: Contains many non‐uncertainty and common items (issues, concerns, etc.). No upside uncertainties in the RR. Descriptions/definitions of uncertainties unclear. Bowtie diagram and RBS not used. Only one or two objectives used.
Acceptable: Contains less than 20% of non‐uncertainty or common items (issues, concerns, etc.). A few upside uncertainties in the RR cover some aspects of the project. Key uncertainty descriptions/definitions clear. Bowtie diagram and RBS used but not consistent. Few objectives used.
Excellent: No non‐uncertainty or common items (issues, concerns, etc.) in the RR. Upside uncertainties cover all aspects of the project. Uncertainty descriptions/definitions clear without further reference. Urgent/critical uncertainties and show‐stoppers and “game changers” identified and highlighted. Several (package, discipline, etc.) logs kept separately in the uncertainty repository to feed the master RR. Fully developed and consistent bowtie diagram and RBS used. Adequate set of multiple objectives used.

Assessments
Unacceptable: RAM inconsistent or incomplete. Less than 80% of uncertainties assessed. Assessment logic logged for less than 50% of risks. Key risks not identified/understood.
Acceptable: RAM internally consistent. Key uncertainties assessed and more than 50% of assumptions logged.
Excellent: Standard RAM in use. All uncertainties assessed and all assumptions logged. Basis for assessments clear without further reference.

Addressing
Unacceptable: Addressing plans not in place, approved, resourced, or realistic. More than 300 open items.
Acceptable: Addressing plans of key uncertainties approved and resourced. Residual uncertainties assessed and realistic.
Excellent: All addressing plans approved and resourced. All residual uncertainties assessed and realistic. Addressing costs and residual uncertainty levels integrated in project plans and probabilistic Cost and Schedule models. PETRA methodology used to address the most critical uncertainties.

Implementation
Acceptable: Management use RR information for decision making at decision gates.
Excellent: Management engaged in RM and explicitly approve, resource, and track addressing. All project uncertainties are explicitly approved, resourced, and tracked by project and risk managers. RR information used for informed decision making and prioritization.

Review and Monitor
Unacceptable: Uncertainties only drafted or reviewed prior to major decision gates or in (less than) quarterly meetings, resulting in only partial review of RR. No reviews between uncertainty and action owners. Action deadlines not tracked. Action closure not logged.
Acceptable: Uncertainties reviewed regularly. Reviews between risk and action owners held regularly. Key actions tracked against their deadlines on regular basis. Action audit trails maintained. Reason for action closure logged.
Excellent: Uncertainties reviewed continuously according to risk ranking, line‐of‐sight, and planned frequency. Schedule of reviews developed and followed. Standard meeting structure employed. Risk manager leads review meetings and nominates uncertainties for review based on urgency/severity. Action deadlines tracked on regular basis. Action audit trails fully maintained. Reason for closure logged.

Organizational Framework
Unacceptable: No RMP developed. Team members not aware of their roles and responsibilities in RM.
Acceptable: RMP available and used as basis for RM activities. Team members have access to RR and are familiar with top project risks and those that impact their work.
Excellent: Fully developed RMP available and consistently used as basis for all RM activities. Uncertainty escalation rules established and in use. Team member participation in RM is recognized and rewarded.

Project Reserves and Execution Plan Integration
Unacceptable: RM not part of the project execution plan. RR not used for development of project reserves.
Acceptable: RM included in project execution plan. Existing RR updated and used as basis of probabilistic risk analysis for development of project reserves. Adequate cost escalation model developed.
Excellent: RM is integrated as part of project execution plan. RR is used as basis of probabilistic risk analysis for development of project reserves. Cost escalation model fully developed. Project reserves are tracked and drawn down.

Risk Manager
Unacceptable: Unfamiliar with rules of RM process, organizational framework, tools, etc. Unfamiliar with principles of probabilistic risk analysis.
Acceptable: Adequately understands theory of RM (process, organizational framework, tools, etc.). Adequately understands and applies principles of probabilistic risk analysis in most cases. Asks for help from consultants as appropriate. Supports project team, PM, and decision makers with training and information on uncertainty exposure and RM health.
Excellent: Fully understands RM in projects (process, organizational framework, tools, etc.). A recognized RM subject matter expert (SME) in the industry, including probabilistic risk analysis. Is sought out for opinions and advice by others. Takes leadership role in RM company‐wide and helps in setting company’s RM policy. Supports management team with training, quality information, and insights. Tracks and highlights value of RM. Challenges and drives management to prioritize and resource RM.
For evaluation of the risk management system at large, only external parties can be considered even reasonably unbiased. But even they might succumb to some form of nice‐guy bias related to the hope of getting repeat business from the organization. Good external reviews are not cheap, which could be a major reason to refrain from them. However, their cost should be treated as the cost of addressing the corresponding organizational risks and associated biases: risks related to the “insanity” of the risk management system in place. For evaluation of particular results (quality of a project risk register, development of project reserves using probabilistic methods, etc.), specialists from other parts of an organization may be involved. These third parties bring a degree of their own bias to the reviews. But most likely, existing organizational bias will be identified and either calibrated or averaged out. Efforts to manage bias using internal and external checks might be extremely political. This leads us to the role and job description of the project risk manager and his or her personal qualities and traits.
Leadership Role of Risk Managers It would be an understatement to say that the leadership role of the project risk manager is important. What are the characteristics of leadership in a good risk manager? Here are five of them:
1. Independence
2. Confidence based on knowledge of risk management and psychology
3. Value proposition
4. Out‐of‐box (or no‐box) thinking
5. Training and coaching skills
Let’s review these in more detail. Some project managers select risk managers who do not have independent opinions. It is not unusual for project risk managers to be part‐timers who started learning risk management only a few weeks ago. Those convenient nice guys have low, zero, or negative value for projects, but they do not irritate management with independent viewpoints. Well, what could be more important than peace of mind for a project manager or business‐unit manager? The obvious answer is that project success is of higher importance. A key discrepancy here is the direct reporting of a risk manager to a project manager versus the need for “balance of power,” “counterweights,” “separation of powers,” or whatever political terms you care to name. Dotted‐line reporting
◾ Risk Management Governance and Organizational Context
to corporate risk management (if it is adequate) could be a solution. However, if a risk manager is independent and confident enough and represents real value not only for a project and an organization but for the industry at large, he or she should be ready to leave a project where risk management is not adequately supported. This type of positioning expressed even implicitly usually settles things down. But this is the last line of defense. More important is the ability to deliver added value. The second major trait of a good risk manager is confidence based on deep understanding of risk management principles and psychological aspects, including biases. There should be a certain degree of technical leadership in development or adaptation of new risk methods. Technical leadership combined with understanding of psychology accentuates the requirement to have high regular and emotional IQs.5 Full integration of a risk manager into all project activities according to the 3D picture of risk management (Figure 2.1) is often not that easy. Personality and ego clashes are not unusual. A good risk manager should be able to recognize and manage manifestations of the main types of psychological complexes, including complexes of God, Hero, Napoleon, Ego, Superiority, Martyr, Inferiority, Guilt, and so on. Discovering and managing various manifestations of these complexes adds a lot of fun and furor to the lives of risk managers. What is the most efficient way to overcome all these obstacles and get risk management integrated within the rest of the project? The short answer is the value proposition. This suggests the third major trait of a good risk manager. He or she should provide added value. The famous but loaded WIIFM question (“What’s in it for me?”) should be the key one a good risk manager answers for his or her co‐workers every day. The fourth important trait is out‐of‐box thinking. A risk manager should have a fairly high personal IQ. 
A good risk manager should be able to see interdependencies and logical links among possible causes, events, and impacts, even when most other people do not see them. As I mentioned earlier, I have a reservation about that thinking‐out‐of‐the‐box buzzword. Working in science, one assumes that there is no in‐box thinking at all, because that would be tantamount to “no thinking.” (Otherwise, one would not be qualified for the job.) So, I would prefer “no in‐box thinking” as part of a good risk manager’s job description. The fifth important trait of a good risk manager relates to training and coaching skills. This is a fairly obvious trait. A risk manager should be able to show the value of risk methods, ensuring their adequate implementation through building a risk management culture based on the four previous traits. Lack of independence, confidence, value proposition, or independent thinking would make this trait unnecessary, or even turn its absence into a positive thing: one should not expect risk training and coaching to be based on or promote dependency, lack of confidence, no value, and narrow‐minded thinking. A project risk manager should project credible leadership in order to establish an effective project risk management system integrated with all project disciplines. Real added value and a positive impact on project performance are the criteria of his or her success.
CONCLUSION The anecdote in “Incredible Leadership and Unbelievable Risk Addressing” could serve as the acid test of any leadership, including leadership in project and risk management.
INCREDIBLE LEADERSHIP AND UNBELIEVABLE RISK ADDRESSING

This anecdote was relayed to me by my old friend who used to work for the Russian Space Agency. Soviet leader Leonid Brezhnev was personally challenged by the fact that NASA had successfully sent people to the moon. He called for his cosmonauts and ordered, “We need to beat the Americans! To do this, you men are ordered to fly to the sun!” After a moment of speechless silence, one of the cosmonauts dared to ask, “How would this be possible, Comrade Brezhnev? We will get burned up!” His answer revealed his incredible leadership and unbelievable risk addressing skills: “We took this into account—you guys will be flying at night!”
NOTES 1. Project Management Institute, PMBOK Guide (Newtown Square, PA: Project Management Institute, 2004). 2. A project team may define both thresholds if it wishes, which will increase the number of impact ranges to seven(!) and produce a 5 × 7 RAM. So, just one extra threshold for game changers should suffice.
3. It would be easy to define Cost and Schedule show‐stoppers and game changers mathematically if we did include them in Monte Carlo models. Using “reverse mathematics,” any uncertain event that may lead to reduction of the baseline’s confidence level to, say, 1% (P1) or lower would be declared a game changer or show‐stopper. Corresponding Cost or Schedule impact thresholds could also be derived. 4. C. Chapman and S. Ward, Project Risk Management: Processes, Techniques and Insights (Chichester, UK: John Wiley & Sons, 2003). 5. D. Goleman, Working with Emotional Intelligence (New York: Bantam Books, 1998).
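The “reverse mathematics” of note 3 can be sketched with a toy Monte Carlo model. Everything numeric here is an invented assumption (a triangular base cost distribution and a baseline set at P80); the point is only the mechanics: the show‐stopper cost threshold is the impact that drags the baseline’s confidence level down to P1.

```python
import random

random.seed(42)
N = 20_000
# Hypothetical base cost distribution, $M (triangular: low, high, mode).
base = [random.triangular(90, 130, 100) for _ in range(N)]

def percentile(xs, p):
    s = sorted(xs)
    return s[min(int(p / 100 * len(s)), len(s) - 1)]

baseline = percentile(base, 80)   # assume the baseline is set at P80
p1 = percentile(base, 1)
# An event with cost impact x leaves the baseline at confidence
# P(base + x <= baseline) = P(base <= baseline - x); this hits P1
# when x = baseline - P1, so that impact is the threshold.
threshold = baseline - p1
print(round(threshold, 1))
```

A Schedule show‐stopper threshold would be derived the same way from a duration distribution instead of a cost distribution.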
CHAPTER EIGHT

Risk Management Tools
Questions Addressed in Chapter 8
▪ Why should the structure of a project uncertainty repository be based on three dimensions of risk management?
▪ What commercial software packages are available for risk database management?
▪ How could automation make making errors more efficient?
▪ Why should the tail stop wagging the dog?
▪ Why is MS Excel still the best risk management software package?
▪ What is a detailed specification for a do‐it‐yourself risk register?
▪ Why should we use commercial probabilistic Monte Carlo tools?
THIS CHAPTER EXAMINES REQUIREMENTS and specifications to select or develop adequate risk management tools. Unfortunately, the available software packages force project teams to use risk management tools that support specific realizations of risk management systems as they are understood by their producers. Hence, the number of risk management systems supported by corresponding tools is equal to the number of producers. Each producer is heavily involved in sales and marketing
activities to promote its own version of a risk management system as the only right one. Due to the general immaturity of risk management as a discipline, many project teams and organizations fall prey to these sales and marketing activities. This chapter describes general requirements from early‐introduced organizational context and risk management process. This is an attempt to preclude situations where sales and marketing define both. A generic uncertainty repository template is introduced that may be adopted by any project team.
THREE DIMENSIONS OF RISK MANAGEMENT AND STRUCTURE OF THE UNCERTAINTY REPOSITORY An initial discussion on the structure of an uncertainty repository appears in Chapter 2. The repository should be organized as several logs, inventories, and registers to reflect the three dimensions of risk management (Figure 2.1) and in accordance with the other two components of the risk management system (the organizational framework and the process). This should reflect several key points, including:
▪ What does the project do in general (its scope, components, packages, etc.)?
▪ What are the project’s stated objectives used in risk management?
▪ What are the main sources of uncertainties (risk breakdown structure)?
▪ What types of uncertainties does it manage (“objects” of Table 1.2)?
▪ How are the risks managed (the process and status of risks and actions in various steps of the process)?
▪ What responsibilities do team members have in risk management (risk, action, risk category, and so on, ownership)?
▪ What is the level of integration of risk management with the other project disciplines, with package and business‐unit risk management, and with risk systems of customers, partners, stakeholders, subcontractors, and so on (three dimensions of organizational context)?
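The key points above boil down to a small data model. The sketch below is a hypothetical minimum (the class names, fields, and escalation rule are all invented for illustration), not a specification of any commercial tool:

```python
from dataclasses import dataclass, field

@dataclass
class Uncertainty:
    uid: str
    title: str
    kind: str           # "GU" (general uncertainty) or "UE" (uncertain event)
    owner: str
    rbs_category: str   # element of the risk breakdown structure

@dataclass
class Log:
    name: str
    items: list = field(default_factory=list)

@dataclass
class Repository:
    logs: dict = field(default_factory=dict)
    master: Log = field(default_factory=lambda: Log("Project Master Risk Register"))

    def add_log(self, name):
        self.logs[name] = Log(name)

    def promote(self, log_name, uid):
        """Copy an uncertainty from a feeder log into the master register."""
        item = next(u for u in self.logs[log_name].items if u.uid == uid)
        self.master.items.append(item)

repo = Repository()
repo.add_log("Schedule Uncertainty Log")
repo.logs["Schedule Uncertainty Log"].items.append(
    Uncertainty("S-01", "Late module delivery", "UE", "J. Smith", "Procurement"))
repo.promote("Schedule Uncertainty Log", "S-01")
print([u.uid for u in repo.master.items])  # -> ['S-01']
```

Keeping RBS categories as plain attributes makes slicing and dicing the repository for reporting a matter of filtering.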
This sounds very scientific. To keep it simple, Figure 8.1 represents a possible structure of the uncertainty repository and a hierarchy of uncertainty logs, inventories, and registers in full accordance with Figure 2.1. The in‐depth dimension is not reflected explicitly in Figure 8.1 but could be added if required. Usually it is done by creating so‐called internal project risk registers that collect uncertainties related to partners, investors, contractors, external stakeholders,
FIGURE 8.1 Sample Project Uncertainty Repository
[Figure: the feeder logs and registers (Cost Escalation and Exchange Rate Log; Cost Estimate Uncertainty Log; Schedule Uncertainty Log; Value Engineering Log; Constructability Review Log; Engineering Register; Stakeholder Commitments Log; Environmental Register; Hazard and Operability (HAZOP) Register; Health and Safety Register; Issue Log; Package #1 through #N Risk Inventories) feed the Project Master Risk Register, which in turn feeds the Business Unit Risk Register.]
and so on. Being highly confidential and sensitive documents, these are never shared with the other parties. Internal project risk registers feed business‐unit risk registers. Elements of the advanced risk breakdown structure should be used as attributes of the risk database to slice‐and‐dice the repository information related to all three dimensions. Depending on the contracting strategy, a capital mega‐project may have several‐dozen work packages. Some small and standard work packages don’t require risk management and risk inventory development. The majority of them should be subject to risk management, though, which requires development of package risk inventories (see Chapter 10). One of the mega‐projects I worked on recently contained about two dozen work‐package inventories that were owned by small work‐package development teams. A health and safety (H&S) risk register should be part of the repository, too, but not part of the project master risk register. The H&S risk register may easily contain dozens and dozens of hazards. It seems to be reasonable to understand them as causes of project risks that may lead to impacts on project objectives. The key point here is that any of those risks may lead to an impact not only on the Safety objective but on Reputation, Schedule, and so on. For this reason we will not talk about safety, schedule, or reputation risks at all, which sounds like a lot of risk jargon. Instead, we will discuss risks of impacts on the Safety, Schedule, and Reputation objectives. As discussed in Chapter 5, conversion of H&S hazards into project risks is not straightforward. I was the risk manager for a several‐billion‐dollar power generation mega‐ project a while ago. Corporate risk people came up with an initiative to automate a link between H&S and the project master risk register. They believed there would be great value in this. Despite my opposition to this low‐value exercise it was implemented as a pilot. 
The result was a complete disaster as dozens of H&S hazards clogged up the previously neat master risk register. The link (mapping) between the two registers was not stable and generated unpredictable results every time we accessed the database. Everyone was happy when the pilot was scrapped. We did include H&S‐related risks in the master risk register and they were part of our monthly reporting. The key issue here is that this was done manually using analytical skills, and not automation. Multiple H&S hazards were rolled up intelligently to a few groups at a higher level and placed into the master risk register. The discussion regarding the first vending machines in Chapter 2 describes an almost‐ideal automation solution for grouping and rolling up risks from lower levels and promoting them to higher levels of the repository. No need for blind automation just for the sake of automation!
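The intelligent roll‐up described above (hazards grouped analytically rather than mapped one‐to‐one) can be sketched in a few lines. All hazard names and categories below are invented examples:

```python
from collections import defaultdict

# Hypothetical H&S hazards: (id, title, higher-level category).
hazards = [
    ("H-01", "Working at height", "Construction safety"),
    ("H-02", "Dropped objects", "Construction safety"),
    ("H-03", "H2S exposure", "Process safety"),
    ("H-04", "Confined space entry", "Construction safety"),
]

groups = defaultdict(list)
for hid, title, category in hazards:
    groups[category].append(hid)

# Each group becomes a single summarized entry in the master risk register
# instead of clogging it with every individual hazard.
master_register_entries = [
    f"{category}: {len(ids)} hazard(s) rolled up into one entry"
    for category, ids in sorted(groups.items())
]
print(master_register_entries)
```

The judgment call (which category a hazard belongs to, and whether a group is material enough to promote) stays with the analyst; only the bookkeeping is automated.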
Several logs should be maintained in the course of the base estimate and schedule development. For instance, general uncertainties of base estimate cost accounts and the schedule’s normal durations are best understood by project estimators and schedulers and should be kept by them in corresponding logs. It is very important to keep track of all commitments and promises made to project stakeholders in the course of project development and execution. I witnessed a situation where a midsized Canadian exploration and production company was purchased by the multinational oil and gas major I worked for at that time. Due to high turnover of employees following the acquisition, no records of previous commitments made by the company to local communities were kept. This became a major risk that impacted reputation and relationships with local communities and First Nations groups. Value engineering and constructability reviews are a common source of project opportunities to reduce capital costs and accelerate schedules. The review records should be included in the repository. Engineering, HAZOP, environmental, project execution through risk addressing (PETRA), and stakeholder reviews should not be ignored by project risk management and should be included in the repository, too.
RISK DATABASE SOFTWARE PACKAGES All these files should be kept in web‐based database applications. This would support multiuser access, which is especially important when project team members work in different locations. This also avoids problems with multiple versions of registers and logs when using MS Excel–based registers. Some of the logs in Figure 8.1 could be kept as MS Excel files. There is no need for overshooting and pushing all disciplines into using one tool or nothing. Moreover, for small or midsized projects that have all workers in one location, the Excel‐based registers would be adequate. Among the commercial web‐based software packages, I would point out EasyRisk by DNV and Stature by Dyadem. These have database structures adequate for maintaining project master registers. Stature has several other applications including failure mode and effect analysis (FMEA) and HAZOP templates. The question with Stature is what kind of template should be developed to keep a risk register fully supporting the risk management process and organizational framework. My recommendation is that a risk register template in MS Excel should be developed and tried for several months before getting formalized in Stature. This should ensure that all organizational framework
and risk process requirements are met and all organizational biases are averaged out. EasyRisk has a very advanced risk reporting module that allows plotting risk data using several graphic output templates. It also has advanced risk breakdown structure (RBS) capability that allows one to slice‐and‐dice risk data for reporting purposes and consolidate them at the work‐package, project, business‐unit, or corporate level. Although far from ideal, this is the most adequate risk database software package available on the market (according to my biased opinion). Besides EasyRisk and Stature, the PertMaster software package could be used for maintaining project master registers. The latter is mostly used for schedule risk analysis, though, due to its high price.1 However, a common issue with all existing commercial tools is that they do not allow creation of uncertainty repositories as introduced in Figure 8.1, not to mention their complete lack of PETRA support (Figure 5.4). In addition, those tools do not have the functionality to support engineering design and procurement option selection, as discussed in Chapters 9 and 10. As a result (and according to my biased opinion), MS Excel is still the best risk management software package for keeping risk registers. A general pitfall for project teams is too much focus on selection of risk database software tools, and not enough focus on risk process and organizational framework. Selection of the correct risk register tool is somewhat important. However, this should be established as priority #3 after the development and implementation of an adequate organizational framework (priority #1) and risk management process (priority #2). The ISO 31000 standard,2 devoted to risk management, covers only the risk management process and the framework; risk tools are not even mentioned. They are merely supposed to effectively support the risk process and the framework. 
Table 8.1 presents some general must‐have and nice‐to‐have features and functionalities of a risk register database to support the risk process and the organizational framework described. Too many requirements or over‐integration with some other risk tools (@Risk, PertMaster, Crystal Ball, etc.) may yield zero or negative value. Keep in mind that a well‐developed MS Excel–based risk register template would ensure much greater value to a project than would a commercial web‐ based application that does not support the risk process and the organizational framework. Moreover, the standard data management systems that many companies have in place could be used to keep the project uncertainty repository and, hence, provide access to multiple users. If required, only the latest version of documents could be visible.
TABLE 8.1 General Specifications of a Risk Register Software Package

Web‐based database. Purpose: access from multiple locations. Note: assumes secured access.
Multiuser functionality. Purpose: simultaneous access for several users. Note: this assures that one version of a risk register exists.
Use of RBS. Purpose: introduction of major sources of risks. Note: RBS should be utilized for portfolio management to include several projects.
Assessment of impacts on several project objectives. Purpose: both hard (Scope, Cost, Schedule) and soft (Reputation, Safety, Environment) objectives should be in focus. Note: including only traditional hard objectives is not enough and may lead to overlooking important parts of project risk exposure; ability to manage upside uncertainties should be included.
Use of project risk assessment matrix (RAM). Purpose: introduction of project RAM as a dropdown menu to assess impacts on multiple objectives and probabilities. Note: may include development of project‐specific RAMs for each project in portfolio.
Various access levels. Purpose: access to defined areas of the repository (based on RBS) and various rights (read, read and write, custodian). Note: important for confidentiality and tracking purposes, based on job descriptions of project team members and their roles and responsibilities in risk management.
Uncertainty and action ownership, action start and completion dates, types of addressing action, status of uncertainties and actions. Purpose: to enforce responsibility of uncertainty and action owners to manage risks. Note: these fields are the basis for action tracking.
Action tracking functionality. Purpose: to enforce responsibility of action owners to address risks. Note: may include email alerts or alarm marks in risk reports.
Support of three‐part naming. Purpose: specific definition and understanding of uncertainties. Note: this might not require specific functionality such as three separate windows for causes, events, and impacts, as the three‐part naming may be written as a sentence in the uncertainty description window.
Customization and flexibility. Purpose: amendment of input and output templates to accommodate specific requirements. Note: tailor‐made templates may be required for selection of engineering design and procurement options (Chapters 9, 10) and PETRA.
Helpdesk support. Purpose: to provide quick support with assignment of access rights to new users, briefing and training on functionality, etc. Note: if effective support is not provided, project teams tend to use MS Excel–based risk registers.
Exportation of data. Purpose: utilization of data from risk register in various applications including probabilistic Cost and Schedule risk analysis. Note: integration of the risk register database with some other tools (@Risk, PertMaster, Crystal Ball, etc.) is not required; however, smooth output through MS Excel is a minimum requirement. Importation to the risk database is not usually considered important.
Reporting. Purpose: availability of visual and statistics reporting features based on RBS categories, project objectives, uncertainty/action owners, severity of uncertainties, etc. Note: this is important to support both risk process and framework including risk escalation and communications, responsibilities, etc. Reporting features should be flexible enough to accommodate data slice‐and‐dice requirements of project leadership and decision makers.
DETAILED DESIGN OF A RISK REGISTER TEMPLATE IN MS EXCEL As discussed earlier, it would be beneficial to develop all components of the risk repository using MS Excel. The transition of adopted MS Excel templates to a suitable commercial application may be done later, if at all. Figure 5.3 introduced a conceptual template for a project risk log, inventory, or register. Of course, this does not reflect the requirements for more
specialized types of analyses such as process hazard analysis (PHA), HAZOP, and so on as those are outside the scope of this book. But the rest of the repository components described in Figure 8.1 may be based on the concept of Figure 5.3. If not, a tailor‐made template could be used when required in MS Excel. To be prudent the template of Figure 5.3 was introduced as conceptual. I believe though that this contains a minimal required set of fields that should not be increased or reduced. If even one of the fields is lost, the template may lose a significant piece of information. If any other fields are added, this should not yield much additional value. But, of course, this opinion is biased and may be ignored if there are reasons to think differently. I suggest using this template for most of the logs and registers of a project repository. I also believe that this template is much better and more consistent than all existing commercial software packages developed for risk databases. It would be interesting to discuss the development of this template as a part of a risk software package with representatives of corresponding IT companies. In any case, it takes into account all aspects of deterministic (scoring) risk management discussed in this book so far. The purpose of this section is to provide a detailed description of the fields included in the template of Figure 5.3 when it is developed in MS Excel. Table 8.2 provides the required information for all fields of the template. Table 8.2 and Figure 5.3 could be used as do‐it‐yourself instructions for building simple and effective risk registers. Table 8.2 looks quite detailed. However, any risk analyst who is at least a basic MS Excel user can develop the required risk register template according to the specifications of Table 8.2 in two hours or less.
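As a library‐free stand‐in for the MS Excel build, the sketch below mocks up the first few Table 8.2 fields in CSV, with the dropdown lists enforced by a validation function instead of Excel’s Data Validation. The field names follow Table 8.2; the sample row is invented.

```python
import csv
import io

FIELDS = [
    "ID", "Upside or Downside?", "General Uncertainty or Uncertain Event?",
    "Title", "Three-Part Definition", "Comments",
]
DROPDOWNS = {  # allowed values, as in the Excel Data Validation lists
    "Upside or Downside?": ["US", "DS", "US and DS"],
    "General Uncertainty or Uncertain Event?": ["GU", "UE"],
}

def validate(row):
    """Reject rows whose dropdown fields hold values outside the allowed lists."""
    return all(row[f] in allowed for f, allowed in DROPDOWNS.items())

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
row = {
    "ID": "R-001",
    "Upside or Downside?": "DS",
    "General Uncertainty or Uncertain Event?": "UE",
    "Title": "Permit delay",
    "Three-Part Definition": ("Due to slow regulatory approvals, the permit "
                              "may arrive late, delaying construction start."),
    "Comments": "",
}
assert validate(row)
writer.writerow(row)
print(buf.getvalue().splitlines()[1][:5])  # -> R-001
```

In the actual Excel template the two dropdown columns would be built with the “Data/Data Validation/Setting/List” functionality described in Table 8.2, making the validation function unnecessary.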
COMMERCIAL TOOLS FOR PROBABILISTIC RISK ANALYSES

Earlier in this chapter I was skeptical about the need to use commercially available software packages for developing and maintaining uncertainty repository databases. Such fully adequate uncertainty repository tools do not yet exist. The situation is different for probabilistic applications. Virtually nothing can be done without a specialized probabilistic tool when developing project cost and schedule reserves, unless some of the archaic methods described in Chapter 3 are used. Descriptions of the three probabilistic software packages I am intimately familiar with are provided in this section. Some users might like other probabilistic tools better, although they might not yet be as popular as the three that I mention here. In addition, a promising new tool is introduced at the end of this section.
TABLE 8.2 Specifications of the MS Excel–Based Risk Register Template

Definition
▪ ID: Identification number according to rules adopted by a project. Excel functionality: text or number. Standard values: 1, 2, 3, etc., or a combination of letters and digits.
▪ Upside or Downside?: Known deviations from project objectives according to Table 1.2, Figure 1.2. Both upside and downside deviations are possible for general uncertainties but issues/givens. Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: US; DS; US and DS.
▪ General Uncertainty or Uncertain Event?: Known general uncertainty (impact uncertainty) vs. uncertain event (impact and probability uncertainty) according to Table 1.2, Figure 1.2. Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: GU; UE.
▪ Title: Short name/tag of uncertainty (2 or 3 words max.). Excel functionality: text. Standard values: N/A.
▪ Three-Part Definition: Detailed three-part definition cause(s)–events–impact(s) for uncertain events and cause(s)–impact(s) for general uncertainties. Excel functionality: text. Standard values: N/A.
▪ Comments: Any comment related to definition of an uncertainty, including description of show-stopper, game changer, or broiler black swan where applicable. Excel functionality: text. Standard values: N/A.

Attributes
▪ Status: Status of uncertainty according to risk management process (Figure 2.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: Proposed; Proposed Closed; Approved; In Progress; Accepted; Closed.
▪ RBS Category: RBS category from the list adopted by the project. Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: Engineering, Procurement, Construction, Commissioning and Startup, Operations, Regulatory, Stakeholders, Commercial, Partner(s), Interface Management, Change Management, Organizational.
▪ Owner: Name of an individual accompanied by position and organization (not just position or name of organization). Excel functionality: could be set up as a dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: list of names.

Assessment As-Is
▪ Probability: Score representing assessment of probability of uncertainty before addressing using project RAM (Figure 3.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: 0, 1, 2, 3, 4, 5; score 5 for a general uncertainty.
▪ Cost, Schedule, Product Quality, Safety, Environment, Reputation: Scores representing assessment of impacts before addressing using RAM (Figure 3.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: 0, 1, 2, 3, 4, 5, where 0 corresponds to N/A.
▪ Level/Severity: Score equal to product of probability and top impact score before addressing. Excel functionality: (1) formula MAX(P × I1, P × I2, P × I3, P × I4, P × I5, P × I6), where P is the probability score of an uncertainty and I1, I2, …, I6 are impact scores for the six selected project objectives (RAM of Figure 3.2); (2) "Conditional Formatting/Highlight Cells Rules/Between" functionality according to Table 5.1. Standard values: 0, 1, 2, 3, … 20, 25; red, yellow, or green color of the cell depending on severity/level score.

Addressing
▪ Response Strategy: Points out one of five main strategies for downside uncertainties and one of five strategies for upside uncertainties. Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: for downside uncertainties: Avoid, Mitigate-Prevent, Mitigate-Recover, Transfer, Accept; for upside uncertainties: Exploit, Enhance-Magnify, Enhance-Amplify, Share, Take.
▪ Action: Description of action. Excel functionality: text. Standard values: N/A.
▪ Cost of Action, $K: Amount to be spent to implement the action. Excel functionality: number. Standard values: N/A.
▪ Start: Indicates planned start of addressing activities. Excel functionality: date. Standard values: N/A.
▪ Completion Date: Indicates a deadline when the addressing action should be complete. Excel functionality: date. Standard values: N/A.
▪ Action Owner: Name of an individual accompanied by position and organization (not just position or name of organization). Excel functionality: could be set up as a dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: text or list of names.
▪ Action Status: Status of addressing action according to risk management process (Figure 2.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: Proposed; Proposed Closed; Approved; In Progress; Completed; Closed.
▪ Comments: Any comment related to addressing and its progress. Excel functionality: text. Standard values: N/A.

Assessment To-Be
▪ Probability: Score representing assessment of probability of uncertainty after addressing using project RAM (Figure 3.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: 0, 1, 2, 3, 4, 5; score 5 for a general uncertainty.
▪ Cost, Schedule, Product Quality, Safety, Environment, Reputation: Scores representing assessment of impacts after addressing using RAM (Figure 3.2). Excel functionality: dropdown menu using "Data/Data Validation/Setting/List" functionality. Standard values: 0, 1, 2, 3, 4, 5, where 0 corresponds to N/A.
▪ Level/Severity: Score equal to product of probability and top impact score after addressing. Excel functionality: (1) formula MAX(P × I1, P × I2, P × I3, P × I4, P × I5, P × I6), where P is the probability score of an uncertainty and I1, I2, …, I6 are impact scores for the six selected project objectives (RAM of Figure 3.2); (2) "Conditional Formatting/Highlight Cells Rules/Between" functionality according to Table 5.1. Standard values: 0, 1, 2, 3, … 20, 25; red, yellow, or green color of the cell depending on severity/level score.
◾ Risk Management Tools
Crystal Ball is an MS Excel–based tool that has all the major Monte Carlo simulation functionalities. Its interface is very user friendly and intuitive, and it is affordably priced. Crystal Ball is used for cost risk analysis; it was not developed for schedule risk analysis. Currently it is produced by Oracle.

Another spreadsheet-based tool is @Risk by Palisade. This is quite a sophisticated Monte Carlo tool with advanced functionalities. Its interface is more sophisticated and less intuitive than that of Crystal Ball. Due to the high level of sophistication, MS Excel files loaded with @Risk data can get very big (15 to 25 MB), which causes problems with sending them by email. Otherwise, this is a fully adequate cost risk analysis tool for advanced users. There is an @Risk for Projects module that allows carrying out schedule risk analyses using MS Project as a scheduling tool. I have not yet heard anything about applications of MS Project for capital mega-projects; @Risk for Projects does not seem to be terribly popular at the moment. Both Crystal Ball and @Risk can be used in engineering for high-level integration of a system's components to identify and manage all possible performance bottlenecks, although this topic is outside the scope of this book.

Another risk software package is Primavera Risk Analysis by Oracle, which had a different name (PertMaster) before its acquisition by Oracle. (Risk practitioners still call it PertMaster.) This seems to be the most sophisticated and adequate project risk analysis tool on the market that integrates deterministic (scoring) and probabilistic (Monte Carlo) methods. It has a risk register module that can be used for deterministic and probabilistic assessments of risks, but it does not support the variety of logs of the uncertainty repository introduced in Figure 5.3, supporting just a generic template. It enables building integrated cost and schedule risk models using its risk register module.
Its functionality includes mapping of schedule risks to normal activities as well as resource‐loaded schedule risk modeling. It has a seamless interface with the Primavera scheduling tool, which allows easy uploading of *.xer files from Primavera. This is the tool of choice for schedule risk analysis. However, it is not widely used for cost risk analysis or keeping risk registers. There are two reasons for this. First, estimators use MS Excel and understand only the corresponding spreadsheet‐based tools such as Crystal Ball and @Risk. The usual practice is that schedule risk data are retrieved from PertMaster and uploaded to Crystal Ball or @Risk to take into account schedule‐driven costs (see Chapter 12). This is the current way to merge the estimating and scheduling worlds in
project risk management practice. However, cost and schedule risk analyses are often done separately. It would be fair to mention that two traditional project services/controls disciplines—estimating/cost control and planning/scheduling—have an influence on the methods and tools used by project risk management. Representatives of those disciplines often become new risk managers. Due to tradition, or the lack of knowledge of advanced (probabilistic) methods on the part of some representatives of these disciplines, some outdated deterministic methods and tools discussed in Chapter 3 are still widely used in risk management instead of the more modern and adequate probabilistic methods. Another obstacle to PertMaster becoming the only mass-market tool required for project cost and schedule risk analyses is its high price.

The recently developed Acumen Risk software package is an obvious attempt to minimize the shortcomings of PertMaster and compete with it. This new software package is positioned as a tool that does not require knowledge of statistics; hence, it has the potential to become a mass-market product. This positioning seems slightly deceptive, as some mandatory functionalities of PertMaster that do require some knowledge of statistics are missing. Lack of the functionality to take correlations into account is a fundamental issue with the current version of Acumen Risk. Additional discussion on this can be found in Chapters 12 and 13. We discuss the required integration of deterministic (scoring) methods with probabilistic tools in Part III of this book. This integration does not presume integration of software packages, though. We instead discuss required inputs and their specs.
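All of the packages above are, at their core, Monte Carlo engines over distributions of cost or duration. As an illustration of what such a cost model does (a sketch with hypothetical cost ranges, not an implementation of any package named here): sample each estimate element from a low/most-likely/high triangular distribution, sum, and read the reserve off percentiles of the simulated distribution.

```python
import random

# Hypothetical work-package cost ranges in $M: (low, most likely, high).
# Triangular distributions are a common simplification in cost risk models.
elements = {
    "Engineering":  (40, 50, 70),
    "Procurement":  (90, 100, 130),
    "Construction": (150, 180, 260),
}

def simulate_total_cost(n_trials: int = 20_000, seed: int = 42) -> list:
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        totals.append(sum(rng.triangular(lo, hi, ml)  # note: mode is the 3rd argument
                          for lo, ml, hi in elements.values()))
    return sorted(totals)

def percentile(sorted_vals: list, p: float) -> float:
    return sorted_vals[int(p / 100 * (len(sorted_vals) - 1))]

totals = simulate_total_cost()
print(f"P50 = {percentile(totals, 50):.0f} $M, P90 = {percentile(totals, 90):.0f} $M")
```

The gap between a chosen confidence level (say P90) and the base estimate is what becomes the cost reserve; correlations between elements, which this sketch omits, widen that gap, which is why their absence in a tool matters.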
CONCLUSION

This chapter provides an overview of the software packages that could be used in risk management. This is a "cart" (tools) that is often put before "two horses" (organizational framework and process of risk management). Ideally, the cart should be put back in its place by using simple MS Excel–based risk register templates. Probabilistic tools may be allowed to stay before the two horses for the purpose of probabilistic analyses and development of project reserves. But this is justifiable only when those two horses have already delivered the cart (a.k.a. the probabilistic tools) to the correct location.
NOTES

1. The recent introduction to the market of Acumen Risk software is a good next step in support of risk register functionality with probabilistic capability, although it requires thorough testing. It looks like this tool has a different shortcoming related to lack of functionality to take correlations into account in probabilistic models.
2. ISO 31000 International Standard: Risk Management: Principles and Guidelines (Switzerland: International Organization for Standardization, 2009).
CHAPTER NINE
Risk‐Based Selection of Engineering Design Options
Questions Addressed in Chapter 9
▪ Why is risk-based selection of options better than and preferable to risk management of a given option?
▪ How may a standard risk management methodology help engineers?
▪ What kind of template can be used for engineering design option selection?
▪ What is a controlled option selection decision tree and how might it help when there are numerous options? ◾

THE TRADITIONAL SITUATION IN risk management is that project teams define baselines first and then investigate given project risk exposure. The usual logic is that project objectives and baselines are developed and all major project choices and decisions are made first by corresponding functions such as engineering, procurement, construction, and so on. Risk management steps in to manage given uncertainties that are predetermined by the preceding choices and decisions. It is certainly not proactive, but it is a common methodology. It is more reasonable to start using risk management methods at an earlier stage, when options are being contemplated and selected. This is the most efficient way to control uncertainty changers. Only project options with minimal risk exposure should be selected in the first place. This risk-based informed decision making is ideally applicable to options at both the work-package and project levels and beyond. Any decision made by a project, business unit, or corporation should ideally be informed and risk-based. This should include the change management methodology at various levels of an organization, too. As decision making at the business-unit and corporate levels is not within the scope of this book, this chapter and the following chapter are devoted to the proactive approach in selection of options at the work-package level.
CRITERIA FOR ENGINEERING DESIGN OPTION SELECTION

Engineers of a power generation mega-project I worked for faced several technical challenges in selecting optimal engineering design options in several areas. One exacerbating issue was interference from the project owner's representatives in the process. Those representatives did not have enough hands-on engineering experience to make informed engineering decisions, but they insisted on the cheapest options in every particular case. Debates related to several engineering studies continued for several months. Eventually, I decided to step in and help out. Five risk-based engineering studies were initiated and finalized in a few weeks. All selection decisions were made, documented, and approved by the project owner. The corresponding work packages were on the street soon after. That was a remarkable achievement.

One $100 million decision was related to the selection of a major type of equipment; it will be used to introduce the option selection methodology in this chapter. Another, smaller $10–$15 million decision related to the reliability and maintainability of an electric transmission line in a particular geographic area depending on features of design and operations. This study should help us introduce decision tree analysis principles for controlled options, initially introduced in Chapter 3.

This method is based on the idea that each project objective imposes restrictions and constraints on the others. This allows one to consider all objectives as equal by default. For instance, using both CapEx and OpEx as equals allows one to consolidate the opinions of project and operations people while considering project lifecycle cost. Keeping in mind the Reputation, Safety, and Environment objectives precludes options that are not acceptable from the reputational, safety, or environmental angles. Criteria for engineering design option selection coincide with the project objectives selected for project risk management. These are all discussed in the first three chapters. For the studies mentioned earlier, they are:
▪ Cost: CapEx + present value (PV) of OpEx
▪ Schedule
▪ Quality
▪ Safety
▪ Environment
▪ Reputation
An additional point here is that, according to the bowtie diagram (Figure 4.2), causes of uncertainties, uncertain events, and impacts might belong to different phases of project development. For instance, if engineering design option decisions are made in Select or Define (causes), the associated uncertain events and general uncertainties impact objectives in Execute or Operate. Both option baselines and overall uncertainty exposures are option differentiators used for decision making. It is important to clearly define the scope of the study. Any part of a project that is not impacted by the option selection should be left out. In some cases the initially defined study scope has to be expanded as the study progresses, because parts of the project that were believed to be unchanging turn out to contain additional differentiators. Often these are newly identified uncertainties that should play a role as option differentiators. The method discussed here is referred to as controlled options selection in Chapter 3. This implies that all options shortlisted for analysis should be well defined baseline-wise. Potentially any of them could be selected; this is not a roll of the dice. This should involve a diligent review of option baselines and overall risk exposures. Where options have additional sub-options (branching) with variations, especially in Operate, consideration of the corresponding decision trees should help. The decision tree samples are discussed in the final section of this chapter.
SCORING RISK METHOD FOR ENGINEERING DESIGN OPTION SELECTION

There were three main options in selecting the major type of equipment mentioned earlier (no branching). In the transmission line study, there was
branching that led to the development of a controlled options decision tree. This is discussed in the next section.

In the first step, project engineers, estimators, and schedulers developed baselines for three shortlisted options. Those options were selected taking into account preliminary feasibility studies as well as preferences expressed by the project owner.

In the second step, three Delphi technique risk identification workshops were held to identify uncertainties associated with each option. Needless to say, the majority of identified uncertainties were the same for all three options. However, some of them were unique and relevant to only one or two options. More than a dozen uncertainties were identified for the three options; most of them were uncertain events.

In the third step, after validation of identified uncertainties, another workshop was held to assess uncertainties before addressing. A risk assessment matrix (RAM) similar to the one introduced in Chapter 3 (Figure 3.2) was used. Only addressing measures already in place and included in the baselines were considered; no additional addressing was included in the as-is assessment. Some uncertainties that were relevant to two or three options had different assessments of probabilities and/or impacts, while some had the same ones.

In the fourth step, after validation of the as-is assessments for all three options, addressing actions were developed for all identified uncertainties, and the costs of addressing were assessed. Where addressing was to be applied in Operate, a model of OpEx was developed for the project lifecycle (50 years). The corresponding annual OpEx expenses were rolled up to a base period (present value) using the agreed discount rate of about 7%. This allowed an apples-to-apples comparison of OpEx and CapEx costs in dollars of the base period and evaluation of lifetime total costs for each option.
Obviously, addressing actions were aimed at both cost and non-cost impacts (such as impacts on Safety, Schedule, and so on), although the costs of addressing were measured in dollars in both cases.

In the fifth step, after validation of the proposed addressing actions, to-be assessments of all uncertainties were undertaken. This step, as well as the previous one, led to very intensive discussions and revealed several types of bias. For instance, representatives of the project owner tried to play down the assessments of uncertainties and the costs of addressing them for their favorite, least expensive option. However, close monitoring of group dynamics and participants' awareness of bias helped to settle things down and produce relatively objective assessments. In addition, the structure of the discussion helped to compartmentalize topics and narrow them down to very specific technical tasks. That left little room for bias.
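The OpEx roll-up in the fourth step is a plain present-value calculation. A minimal sketch, assuming a flat annual OpEx stream (the $2.0M/yr OpEx and $150M CapEx are hypothetical; the 50-year lifecycle and roughly 7% discount rate come from the study):

```python
def present_value(annual_opex: float, rate: float = 0.07, years: int = 50) -> float:
    """Discount each year's OpEx back to the base period and sum."""
    return sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical numbers: $2.0M/yr OpEx for one option.
pv_opex = present_value(2.0)          # ~ $27.6M in base-period dollars
lifecycle_cost = 150.0 + pv_opex      # assumed CapEx of $150M + PV of OpEx
print(round(pv_opex, 1), round(lifecycle_cost, 1))
```

Expressing every option's OpEx this way is what makes the apples-to-apples comparison of CapEx and OpEx in base-period dollars possible.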
Assessment of uncertainties after addressing was eventually reduced to one uncertain event. The rest of the uncertainties were believed to be reduced to low (green) levels (Table 5.1) after application of all proposed addressing actions. Costs of those actions were added to the corresponding option’s base estimates. While all uncertainties reduced to a low level were excluded from further consideration as differentiators, the costs of their addressing became differentiators instead. This approach assumes that low/small uncertainties have negligible and acceptable residual uncertainty exposures. This allows one to avoid making residual cost risk exposure assessments, which is in line with the overall precision of the methodology. Figure 9.1 represents a simplified decision‐making template that allows one to make an engineering option selection. This shows only one most critical uncertain event as a differentiator for three options. The only difference between this template and the register template of Figure 5.3 is that one uncertainty is assessed against each of the three shortlisted options. The template does not contain specific information about this particular risk and its addressing actions due to confidentiality. The specifics were replaced by “to be developed (TBD).” According to Figure 9.1, Option 1 did not have economically viable addressing actions. Preliminary assessment pointed to the need for tens of millions of dollars to be spent in Operate to reduce the risk to a low level. Even though that option was actively promoted by the project owner as least expensive CapEx‐wise, it was disqualified from further consideration. Option 2 had a low risk level all along. If money were not a constraint, this would be the best choice. However, the problem with this option was that this was the most expensive one. Option 3 had a material/medium level of risk exposure before addressing. 
However, two addressing actions were proposed that should reduce its uncertainty exposure to a low level. The cost of those actions was assessed at about $3.2 million, which included costs to be incurred in Engineering (CapEx) and Operations (OpEx). Comparison of Option 2 with Option 3 allowed us to select Option 3 as most preferable.

FIGURE 9.1 Sample Template for Engineering Design Option Selection (template columns: Definition, Attributes, Assessment As-Is, Addressing, Assessment To-Be, filled in for Options 1–3)

Figure 9.2 is a summary template used for decision making. Only the cost of addressing the most critical uncertain event discussed earlier ($3.2 million in the case of Option 3) is included, for demonstration and simplicity purposes. In reality, the other addressing costs did not lead to a different decision.

FIGURE 9.2 Decision-Making Template

All assessments of uncertainties were done deterministically using the scoring method and RAM (Figure 3.2). All estimates of addressing action costs were done deterministically, too. Technically, all deterministic assessments of both risks and addressing actions could be done probabilistically if really required. Ranges around base estimates could be discussed. Uncertainties of impacts and probabilities could be discussed and introduced. Even the discounting factor used for developing the PV of OpEx could be represented as a spread. This would be overshooting in most cases. However, in situations where the two most preferable options are close cost-wise, some additional probabilistic steps may be justified to accentuate the roles of some differentiators. However, finding more distinctive additional differentiators instead should be a better option.

Baseline cases for the study cited earlier were well defined and did not have branching in terms of sub-options. In some situations there could be sub-options based on the main ones. For instance, this would be the case for developing an oil or gas pipeline when there are options for some of its sections to use various rights of way. The example in the next section relates to developing and operating a transmission line in a certain geographical area.
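The decision logic applied across the summary template can be sketched as follows. All numbers are illustrative stand-ins (the study data are confidential), and the "green" threshold is an assumption since Table 5.1 is not reproduced here: disqualify options with no economically viable addressing, then among options whose residual severity is low, pick the lowest lifecycle cost.

```python
# Hypothetical option data mirroring the narrative: residual severity after
# addressing (25-point scale) and lifecycle cost = CapEx + PV(OpEx) +
# cost of addressing, in $M.
options = {
    "Option 1": {"residual_severity": 16, "lifecycle_cost": 95.0},   # no viable addressing
    "Option 2": {"residual_severity": 2,  "lifecycle_cost": 120.0},  # low risk, most expensive
    "Option 3": {"residual_severity": 2,  "lifecycle_cost": 103.2},  # includes $3.2M addressing
}

LOW = 5  # assumed top of the "green" band

# Screen out options that cannot be brought to an acceptable risk level...
viable = {name: o for name, o in options.items() if o["residual_severity"] <= LOW}
# ...then choose the cheapest of what remains.
best = min(viable, key=lambda name: viable[name]["lifecycle_cost"])
print(best)  # prints: Option 3
```

Note that non-cost objectives act as screens, not as terms in a weighted sum; this is what keeps an unacceptably risky but cheap option (here, Option 1) from winning on cost alone.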
DECISION TREE FOR ENGINEERING DESIGN OPTION SELECTION (CONTROLLED OPTIONS)

The example described in this section would be of particular interest to electrical engineers. Imagine that a transmission line crosses a strait between two Japanese islands. Two transition compounds should be designed and built to connect the underground cable with aerial transmission lines on both sides of the strait. A key decision factor for the transition compound location is distance to the shore. The reason for this is the saltwater spray effect, which impacts the reliability of electrical insulators. Of course, a transition compound could be built several kilometers from the shoreline where the salt spray effect is negligible. Some studies indicate that
locations that are three to five kilometers from shore would fully negate the saltwater spray effect. This would lead to very high costs for the underground cable, which makes such an approach too expensive. As an alternative, the electrical insulators in the vicinity of the transition compound and its bushings could be washed regularly, which would increase operating costs. Another factor that must be taken into account is the acceptable level of reliability and availability of the transmission line, which is directly linked to the Reputation of the transmission line owner. If the line were down, this would mean blackout for the whole island! The impact on sales revenue as part of OpEx, and especially on the owner's Reputation, could be immense.

Three transition compound locations were initially studied: onshore, 900 meters from the shore, and 1,800 meters away. These distances follow from the standard length of cable on one spool; obviously, if two spools are used, a joint between them is required, which is an additional CapEx expense. In the final study, only two options were considered: onshore and 900 meters from shore. However, several sub-options were introduced for each of these relating to the type and frequency of washing. Corresponding OpEx budgets were developed for the sub-options' lifecycles that took into account expected levels of availability (Quality) as well as sales revenues (OpEx). Those sub-options' OpEx budgets were rolled up to a base period as present values to be directly compared with the options' CapEx budgets. Sub-options' CapEx budgets included the purchase of washing equipment where required. In this study, the level of availability was directly linked to the Quality objective. It was also directly linked to the project Reputation objective. Any drop in availability was considered an impact on both OpEx (revenue) and Reputation.
A risk assessment matrix (RAM) similar to the one in Figure 3.2 was used for risk assessments. I will not further discuss the details of decision making, but I would like to demonstrate a simplified decision tree used for option selection. Figure 9.3 introduces this.1

FIGURE 9.3 Controlled Options Decision Tree

Source: © IGI Global. Reprinted by permission of the publisher.

Four nodes (1A, 1B, 2A, and 2B) represent four generic sub-options of two main options, although the number of options (nodes) in the real study was higher. The method discussed in this chapter should be referred to as controlled option selection. Any option shortlisted for consideration could be selected after consistent comparison of the differentiators. Here, decision making is not a crapshoot. This is the first principal difference from the traditional decision tree cost analysis mentioned in Chapter 3, where "chance options" are reviewed and each chance option has a probability of realization. The second principal difference is that only cost-related impacts are included in the traditional chance option tree analysis. The new method discussed here takes into account impacts on any project objectives, which makes it way more comprehensive and effective than traditional decision tree analysis. One may guess that commercially available software packages are not suitable to support this method. The examples and MS Excel templates introduced in this chapter are the only viable solutions for the time being. It would be interesting to see what solutions might be developed by the corresponding IT companies to support this methodology.
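To make the contrast with a classic chance-node tree concrete, here is a sketch of evaluating a controlled options tree. The nodes mirror the 1A/1B/2A/2B structure of Figure 9.3, but all costs and severity scores below are hypothetical, and the "green" threshold is an assumption: every branch is evaluated on lifecycle cost and must pass non-cost screens; nothing is probability-weighted.

```python
# Hypothetical controlled-options tree for the transmission line study:
# two main options (onshore, 900 m) each with washing sub-options.
# Each branch is a real, selectable alternative, not a chance outcome.
branches = [
    # (node, capex $M, pv_opex $M, residual severity on the 25-point scale)
    ("1A: onshore, frequent washing",   20.0, 14.0, 4),
    ("1B: onshore, occasional washing", 19.0, 10.0, 9),   # availability risk stays medium
    ("2A: 900 m, frequent washing",     28.0,  9.0, 3),
    ("2B: 900 m, occasional washing",   26.0,  7.0, 5),
]

LOW = 5  # assumed top of the "green" band

# Screen on the non-cost objectives first (Quality/Reputation via availability)...
acceptable = [b for b in branches if b[3] <= LOW]
# ...then rank surviving branches on lifecycle cost (CapEx + PV of OpEx).
best = min(acceptable, key=lambda b: b[1] + b[2])
print(best[0], best[1] + best[2])
```

In a traditional chance-option tree, the branches would instead carry probabilities and be collapsed into one expected value; here every surviving node remains a legitimate choice, compared on all objectives.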
CONCLUSION

The role of informed risk-based selection of engineering design options should not be underestimated. On one hand, no one is against informed risk-based selections. On the other hand, previously available methods had almost nothing to do with support of this declared approach. Methods proposed in this chapter are based on first principles of proactive risk management that allow one to shape baselines instead of merely identifying and managing deviations from given ones.
NOTE

1. Y. Raydugin, "Consistent Application of Risk Management for Selection of Engineering Design Options in Mega-Projects," International Journal of Risk and Contingency Management 1(4), 2012, 44–55.
CHAPTER TEN
Addressing Uncertainties through Procurement
Questions Addressed in Chapter 10
▪ How can we distinguish the contracts we should not touch from the ones we must win?
▪ Where do the uncertainties of procurement come from?
▪ What are three major groups of work-package risks?
▪ By what process do we manage risks in procurement?
▪ Why is risk-based selection of procurement options better than risk management of a given procurement option?
▪ How might a standard risk management methodology make procurement people's lives easier? ◾

THIS CHAPTER IS DEVOTED to an overview of the applications of risk management in procurement. In some situations, a project or contract looks so attractive that a company jumps on it without diligent consideration. Then, its execution opens up a can of worms, leading to huge impacts on the company's objectives.
For the most part, no one is interested in a contract that looks terribly risky. However, a company might be able to identify the key contract uncertainties and come up with smart and effective means of addressing them, and such a project or contract could become the most profitable one the company has ever had. The major steps of the procurement process are described in this chapter as they relate to risk management. There is a similarity between the risk methods used for the selection of engineering design options1 and for the selection of bidders. Their applications to procurement are outlined in this chapter. The pre‐award part of the procurement process is the focus of this discussion. The methodology presented in this chapter could be used by both project owners and engineering, procurement, and construction (EPC) contractors, and would require a bit of adjustment for each particular case. Ideally, cost escalation modeling (Chapter 11) should go along with it. The post‐award part of the process is discussed at the end of the chapter.
SOURCES OF PROCUREMENT RISKS

The standard approach to project development and execution is that project scope is broken down into a number of engineering and construction packages. The role of procurement, along with the other project disciplines, is to ensure delivery of work packages according to project objectives. As discussed in previous chapters, a set of project objectives, besides Scope, includes Budget and Schedule as well as Safety, Environment, and Reputation. The main realization of this approach is based on the project’s contracting strategy, which is part of the overall project execution strategy. The contracting strategy should be established as a procurement baseline, which is a combination of project objectives. Depending on its realism and consistency, its implementation might be characterized by some difficulties. In other words, some deviations from the assumed contracting strategy might occur. There are three groups of factors that may be treated as sources of deviations from the contracting strategy and project objectives. Using risk breakdown structure (RBS) terminology, the following sources of procurement risks may be defined:
▪ Package specific
▪ External
▪ Bidder specific

These three subcategories could be developed under the main Procurement category of the project risk breakdown structure.
FIGURE 10.1 Simplified Bidding Process and Risk Management Prior to Contract Award
[Flowchart: identification of package‐specific and external risks → questionnaire for RFP → bidder’s response and clarifications → bidder‐specific risks → final package risk inventory → quantitative bid evaluation → negotiations, bid ranking, and contract award → package risk register for post‐award management]
Figure 10.1 represents a simplified package bidding process from the risk management angle. Its steps will be explained in the following section.
Package‐Specific Sources of Risks

The contracting strategy is supposed to provide the procurement baseline, which should address questions related to the work package such as:
▪ What should be the optimal size, scope, and schedule of the package?
▪ What interfaces should be managed due to the selected size, scope, and schedule?
▪ What type of contract should be proposed and why?
▪ If applicable, how should technical novelty be handled?
▪ Are there severe or unusual safety hazards and how should they be managed?
The contracting strategy defines the sizes of work packages that are believed to be optimal for a project versus the types of contracts. This should comply with the project schedule and budget, taking into account competition for vendor services. Competition for vendor services may lead a project into cost escalation due to market forces, extra costs and delays due to lower contractor productivity, and so on. Proper sequencing of work should ensure timely delivery, especially if a work package belongs to the critical path. The assumed optimal sizes of work packages presume a number of soft and hard interfaces among the project disciplines and vendors. For a technical novelty, the contracting strategy should ensure that technology risks are properly addressed. Some construction work packages might involve unusual or severe safety hazards, which should be addressed by procurement, too.
For the purposes of this discussion we treat the project contracting strategy as a procurement baseline. There will be deviations from that baseline in the form of general uncertainties and uncertain events. All these uncertainties should be identified beforehand (i.e., before a request for proposal [RFP] is on the street). Identified package‐specific uncertainties should be collected in a package risk inventory (risk register). A two‐hour Delphi technique workshop (discussed in Chapter 4) should be held to create the inventory. Identified risks should be reflected indirectly in the bidder’s qualification questionnaire included in the RFP. Bidders’ responses to the risk questionnaire will be used for bidder evaluations and ranking.
External Sources of Package Risks

External sources of package risks relate to the external environment of a project as well as to weather conditions or features of climate. These include:
▪ Various construction permits
▪ Related logistics issues where they are not part of the package scope, including delivery windows due to weather/climate constraints
▪ Weather at the construction site
▪ Force majeure
▪ Possibility of blockage of delivery routes and/or sites by protesters, including representatives of nongovernmental organizations (NGOs) and local communities, that could delay the package execution
Some of the sources of uncertainty just listed may be relevant to the delivery of a particular package. It would be prudent to know in advance how a bidder would handle the various risks stemming from those external sources. External uncertainties should be added to the package risk inventory. Their identification should be part of the Delphi technique workshop held to identify package‐specific risks. Corresponding questions should also be included in the RFP risk questionnaire.
Bidder‐Specific Sources of Risks

When bidders’ RFP responses are received, they should be evaluated against the previously formulated package‐specific and external questionnaires. The variety of responses to the risk questionnaires should become the basis for bidder rankings. Two additional sources of information may also be used to facilitate ranking.
First, assessments of the engineering, construction, safety, logistics, and other parts of the RFP responses could lead to identification of additional uncertainties. Second, clarification discussions with bidders may lead to either the addressing of previously identified risks or the identification of new ones. In both cases these could be considered bidder‐specific risks that may be related to the following topics:
▪ Bidder’s core competencies versus package size, scope, and schedule
▪ Availability of required skills and history of labor relations
▪ Workload forecast
▪ Concerns about quality assurance/quality control, technical capabilities, management efficiency, risk management, sub‐vendors, financial stability, safety, and so on
▪ Lack of previous (or positive) experience with a bidder
Bidder‐specific risks should be added to the package‐specific and external uncertainties, forming the final package risk inventory. Although package‐specific and external uncertainties could be the same for all bidders, the bidder‐specific ones are unique by definition. In spite of this, it is beneficial to develop a single final package risk inventory that will be used as common ground for the final quantitative evaluation of bidders. There is a simple key rule here: if a specific uncertainty is apparently relevant to at least one bidder, then the rest of the bidders should be evaluated against that uncertainty, too.
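The key rule just stated amounts to taking the union of the package‐specific, external, and per‐bidder uncertainties, so that every bidder is scored against the same final inventory. A minimal sketch in Python (the bidder names and risks are hypothetical illustrations, not from the book):

```python
# Build a single final package risk inventory as the union of
# package-specific, external, and bidder-specific uncertainties,
# so that every bidder is evaluated on common ground.

common_risks = {"permitting delay", "winter delivery window"}  # package-specific + external

bidder_specific = {  # hypothetical bidders and their unique risks
    "Bidder A": {"busy production program"},
    "Bidder B": {"no prior experience with owner"},
    "Bidder C": set(),
}

# Key rule: relevant to at least one bidder => evaluated for all bidders.
final_inventory = set(common_risks)
for risks in bidder_specific.values():
    final_inventory |= risks

for bidder in bidder_specific:
    # Each bidder is assessed against the full, shared inventory.
    print(bidder, "evaluated against", len(final_inventory), "uncertainties")
```

Here even Bidder C, which contributed no risks of its own, is still assessed against the risks identified for the other bidders.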
QUANTITATIVE BID EVALUATION

As soon as the final package risk inventory is developed, quantitative bid evaluation becomes very similar to engineering design option selection. The main difference from engineering is that negotiations and clarifications are part of the evaluations and may influence the contract award. The structure of the risk register shown in Figure 9.1 could be used. The number of options should be equal to the number of shortlisted bidders prequalified for the bid using standard procurement criteria. Addressing actions should be part of the risk register so that assessments before and after addressing can be developed and compared. Only uncertainties that could have impacts on the package owner’s objectives are considered here. Bidders include risk premiums in their bid prices to manage the risks assigned to them as a result of risk brokering. Obviously, the amount of
risk bidders assume depends on the type of contract. For lump‐sum types of contracts, contractors assume most of the package risks. For reimbursable types of contracts, most of the risk exposure is kept by the package owner. A unit‐price contract exposes the package owner to the number‐of‐units uncertainty but transfers the pricing risks to the contractor. Modifications to these basic types of contracts may make risk transferring very tricky. For instance, a reimbursable contract with a cap would return a lot of risks to the contractor.2 Any discrepancy between the letter and the spirit of a contract could be a major source of risks. Be very careful with how a contract is labeled. For instance, a contract might be declared reimbursable or unit‐price based but contain quite a few clauses that make it virtually a lump‐sum contract. Development and review of contract drafts is within the domain of the legal department but should be done with the close involvement of risk management. The actual package contract type should be taken into account when doing quantitative bid evaluation. As discussed in the final section of this chapter, the final risk exposure becomes clear only after the negotiations preceding the contract award. Part of the risks could be reassigned, along with acceptance of the costs of their addressing. This defines the actual demarcation of risk ownership among the parties. Only the uncertainties that are material or severe in the case of at least one bidder should be included in the evaluation. Usually there are fewer than a dozen of them. In reality, no more than two or three uncertainties are the real differentiators, or game makers, that define the outcome of the evaluation. The rest have the same or similar assessments for all bids. As in the case of engineering option selection (Chapter 9), if some risks for some bidders cannot be economically reduced to the low (green) level, those bids should be disqualified.
For example, during the recent bid evaluation of a package to produce tanks for a SAGD project in Alberta, one of the frontrunners was disqualified due to its busy production program. Despite the location of the workshop in Alberta, the high quality of its equipment, and the very reasonable pricing, a project delay of at least half a year was almost certain. So the contract award came down to the two other shortlisted bidders: one located in Asia and the other in North America. The selection was based on a decision‐making template similar to the one used for engineering option selection (Figure 9.2), except that in place of option base estimates, the bid prices found in the RFP responses were used. All risks associated with both bidders were reduced to a low (green) level. However, the costs of addressing were rather different. The costs of addressing all uncertainties were added to the corresponding bid prices. Again, we are talking about risks that are kept
by the package owner, not the bidders. This means that the bid price is not the full package lifecycle price for the package owner. Based on this pre‐negotiation approach, one bidder was prequalified as the potential (preferable) contract winner. However, following negotiations the picture changed. The other bidder agreed to accept some of the important risks and the costs of their addressing that initially were assumed by the package owner. As a result of negotiations, this other bidder was awarded the contract.
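The pre‐negotiation comparison described in this example can be sketched as follows; the bidder names, bid prices, and addressing costs are hypothetical, and the “reduced to a low (green) level” check is collapsed into a single flag:

```python
# Rank bidders on the full package cost to the owner: bid price plus
# the owner's cost of addressing the risks it keeps. Bidders whose
# risks cannot be economically reduced to a low (green) level are
# disqualified before the price comparison.

bids = {  # hypothetical RFP responses ($M)
    "Asia":          {"price": 9.5,  "addressing_cost": 1.2, "all_risks_green": True},
    "North America": {"price": 10.0, "addressing_cost": 0.4, "all_risks_green": True},
    "Alberta":       {"price": 9.0,  "addressing_cost": 0.3, "all_risks_green": False},  # schedule risk stays red
}

# Keep only qualified bids; score = bid price + owner-retained addressing costs.
qualified = {
    name: b["price"] + b["addressing_cost"]
    for name, b in bids.items()
    if b["all_risks_green"]
}

preferred = min(qualified, key=qualified.get)
print("preferred (pre-negotiation):", preferred)
```

Note that, as the book’s example shows, negotiations can still overturn this pre‐negotiation ranking when a bidder agrees to take back some of the owner‐retained risks and their addressing costs.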
PACKAGE RISK MANAGEMENT POST‐AWARD

When a contract is awarded, the final package risk register should be cleaned up to exclude the bidders that did not win the contract. Only evaluations of risks that relate to the winner should be kept. For reimbursable contracts, risks that are transferred to the contract award winner should become subject to regular reviews. Ideally, the package risk management plan should be part of the final negotiations and the contract. This is to define the frequency of risk reviews, the risk process, tools, and so on. It represents the in‐depth and time dimensions of risk management as shown in Figure 2.1. A standard contract is often used by organizations for all types of packages regardless of their size, scope, and so on. This is a sure way to be charged risk premiums for risks that are not relevant to a particular package. It is also a way to accept risks that were not identified beforehand. This inflexibility could lead to inadequacy of the procurement process. The clauses of the contract should result from consistent risk analysis and negotiations on optimal risk brokering. The risk‐based procurement process described in this chapter allows one to make informed decisions and knowingly identify, accept, or transfer relevant risks. This approach makes it possible to take on contracts that looked too risky initially. It is based on the idea that a risk should be managed by the party that can do so in the most efficient way. Such optimization should lead to a more positive outcome: less expensive, higher‐quality projects delivered on time, safely, and without damage to the environment or anybody’s reputation.
CONCLUSION

The method described in this chapter is a cousin of the method to select engineering design options presented in Chapter 9. It is becoming popular among
the procurement people I have been working with recently. It allows one to select successful bidders quantitatively and to fully justify, document, and defend such decisions. In some cases this should also address risks related to ethics. The word quantitatively points to the possibility of arriving at the real contract price (not the one stated in the RFP response) through proper evaluation of the uncertainty exposure that goes along with such a decision. Cost escalation modeling, introduced in the next chapter, is usually part of it.
NOTES

1. Y. Raydugin, “Consistent Application of Risk Management for Selection of Engineering Design Options in Mega‐Projects,” International Journal of Risk and Contingency Management, 1(4), 2012, 44–55.
2. R. Wideman, Project and Program Risk Management: A Guide to Managing Project Risks & Opportunities (Newtown Square, PA: Project Management Institute, 1992).
CHAPTER ELEVEN
Cost Escalation Modeling
Questions Addressed in Chapter 11
▪ How average should an average really be?
▪ Why is the consumer price index the worst possible macroeconomic index for evaluating project cost escalation?
▪ Why should first and second market transactions be delineated?
▪ How reliable can cost escalation modeling be?
▪ Should cost escalation modeling be probabilistic?
ONE OF THE KEY PROJECT cost uncertainties that should be managed through the procurement process is cost escalation. This standalone project general uncertainty is directly related to the development of adequate project cost estimates and reserves.
OVERVIEW OF THE COST ESCALATION APPROACH

Let’s assume that all quotes received by a project owner in 2014 for a construction package are firm and valid for the next several weeks or months. Does this
help to predict real expenditures? The answer is no. One reason for this is very basic. A dollar in 2014 will have rather different purchasing power in three or four years over the course of the project execution. So, quite often project teams use an average annual inflation rate to escalate future expenditures from the base estimate. In North America it is around 2% these days. In a few years, a smaller amount of goods or services could be purchased using the same amount of money, as the value of money will be lower. The question is: What goods and services are we talking about?
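As a quick numerical check of this escalation logic (the 2% rate, the $100M base cost, and the three‐year horizon are illustrative assumptions, not figures from the book):

```python
# Escalate a base-estimate expenditure from the estimate's base year
# to the expected year of purchase using an average annual rate.

def escalate(base_cost: float, annual_rate: float, years: int) -> float:
    """Compound an average annual escalation rate over `years`."""
    return base_cost * (1.0 + annual_rate) ** years

# A $100M package priced in 2014 dollars, spent three years later at ~2%/yr:
cost_2017 = escalate(100.0, 0.02, 3)
print(round(cost_2017, 2))  # 106.12
```

The compounding matters: over a long execution period, even a “calm” 2% annual rate adds a material amount to a mega‐project base estimate.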
General Inflation and Consumer Price Indexes (CPIs)

General inflation is directly measured (or well approximated) by the consumer price index (CPI), which includes a standard set of goods and services the general population consumes on average. This set, or consumer basket, is usually different for various areas of a country. Moreover, the typical consumer baskets in Los Angeles and in Detroit would be different because the cost of living is different. Here we compare two major U.S. cities. What about the countryside in the U.S. Midwest? And are consumer baskets the same in North America as in the Middle East or Europe? It does not seem so. It would be reasonable to delineate CPIs for particular cities, for cities versus rural areas, and for different regions and countries. In addition to the geographical dimension, economists exclude some products and services from consumer baskets on purpose to compare results. For example, some CPI variants may or may not include the price of gasoline. It sounds quite complicated. But wait a minute: what do all those consumer goods defining general inflation have to do with purchasing pressure vessels or line pipes and hiring welders to install them in a particular geographical location? Not a lot.
Market Imbalances and Macroeconomic Indexes

To continue the inflation analogy, we need to know the “inflation” for pressure vessels, line pipes, and hiring welders using “pressure vessel,” “pipeline,” and “hiring welders” baskets. Of course, there are no such inflations and baskets in economics. There is something similar, though. Macroeconomic indexes, or rather their time series for the next several years, are usually developed on an annual and quarterly basis. They are still based on the basket approach, organized in hierarchies, and developed on a regional principle. For instance, a general cost of labor index may be introduced for Canada. This is developed as an average for all Canadian provinces and territories and all
major occupations. However, if a project is planned in Alberta, the Alberta cost of labor index would be more relevant. But if a project is planned in Northern Alberta somewhere around Fort McMurray to develop oil sands, even the labor index for Alberta would be rather misleading (too optimistic). Logically, the Northern Alberta or “Fort Mac” labor index should be used instead. However, escalation of labor costs in Northern Alberta would be more apparent in the case of qualified welders than for general labor, and so on. So, the Canadian labor index is a very big basket that contains various types of professions in various provinces of Canada. “Average of average of average of average, and so on” gives rise to an “undistinguished and calm” time series of the Canadian labor index, which has (almost) nothing to do with hiring welders in Northern Alberta. This index is better than the CPI index, but not much. The next question should be about the method to define the right size of basket to adequately represent cost escalation for particular items. Natural limitation comes into play in the form of commercial availability of macroeconomic indexes. Some consulting and research organizations provide indexes based on “midsized baskets.” For instance, it could be a macroeconomic index to construct a refinery in the United States. It is still an “average of average of average,” although it generally covers the relevant scope of a project. This index should be applied to a project’s base estimate at large. Of course, it doesn’t take into account the size of a refinery, a particular technology used, or the location of a project. It feels like a few more details are required, but how many more? For instance, if we think about line pipes, there are pipes of various diameters. Intuitively, pipes of larger diameter (20″–48″), such as are used for large oil and gas pipeline projects, should be distinguished from smaller‐diameter pipes. 
Usually they are produced by different mills, at least by their different divisions, and assume different technical and quality standards. At the same time, some averaging should be used as we don’t want to use different macroeconomic indexes for 24″ and 36″ pipes. This means that our “pipe basket” for large‐diameter pipes will be averaged for several diameters. Supply–demand imbalances in smaller baskets are usually more acute as they are not subject to much averaging. For this reason, escalation of some products and services could be way more severe than the “calm”‐averaged CPI index. This discussion may be clarified by consideration of two types of macroeconomic indexes. All commercially available macroeconomic indexes are available in either “real” or “nominal” forms. Cost of money, which reflects general inflation and could be approximated by the consumer price index,
is excluded from real indexes. In other words, general inflation is a benchmark for real cost escalation. Simply speaking, nominal indexes are sums of real ones and general inflation. So if a CPI index is used in place of a specialized macroeconomic index, its substantial and most informative portion is simply missed, leading to incorrect conclusions about future cost escalation. The real escalation in the case of approximation by the CPI is always equal to zero by definition. There are a finite number of macroeconomic indexes commercially available to assess cost escalation. So if a project team would like to use a very specific macroeconomic index for a particular product or service, there is a high likelihood that it is not available. There are four ways to resolve this. First, a similar index might be found and used; let’s call it a proxy for the required index. Second, a higher‐level (more averaged) index may be a fit in some cases. Third, a similar index for a different geography might be selected. This situation is more easily resolved and more justifiable for products and services that are part of the global market. Turbine generators, compressors, and large‐diameter pipes are examples of those. At the same time, markets for gravel, rented construction equipment, or construction labor are usually more local. Fourth, it is theoretically possible to engage an economic consultancy to develop the required index using economic modeling and polling. However, this might become a substantial part of the project cost, so this method is not often used. As an alternative, a project team may evaluate some indexes on its own; this approach is discussed in this chapter. If purchasing of materials or services is done abroad using another currency, an additional contribution to cost escalation could stem from exchange rate volatility.
The same approach is used to assess required exchange rate reserves as described above, although the discussion about secondary transactions is not relevant. Selection of relevant macroeconomic indexes or their proxies for escalation modeling is always a creative process that should involve cost estimators. The best source of macroeconomic indexes for North America that I know of is Global Insight. All time series are represented as historic data for the previous 10 years and as forecasts for the next 10 years on a quarterly and annual basis. Rolling updates of indexes are done every quarter. Global Insight provides almost 400 various indexes for the United States and almost 100 indexes for Canada. (The exchange rate forecasts for major world currencies are also provided.) Unfortunately, the number of available indexes for other parts of the world is lower. It is still possible to get the required geographical indexes for an additional fee.
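Numerically, the split between nominal and real escalation can be checked as follows. The rates are illustrative; the exact compounded form is shown alongside the “sum” approximation used in the text:

```python
# Split a nominal escalation rate into general inflation (CPI) and
# real escalation. General inflation is the benchmark: a product whose
# nominal escalation equals CPI has zero real escalation by definition.

cpi = 0.02      # general inflation, approximated by the CPI (assumed)
nominal = 0.07  # nominal escalation of, say, a line-pipe index (assumed)

real_approx = nominal - cpi                 # "nominal = real + inflation" approximation
real_exact = (1 + nominal) / (1 + cpi) - 1  # exact compounded form

print(round(real_approx, 4))  # 0.05
print(round(real_exact, 4))   # 0.049
```

The two forms differ only slightly at low rates, which is why the additive simplification is serviceable; the key point survives either way: using the CPI as the escalation index discards the real component entirely.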
First and Second Market Transactions

A common issue with most of the available macroeconomic indexes is that they represent so‐called first transactions. In other words, they are producer price indexes (PPIs) and reflect the prices paid to producers. As project owners usually use engineering, procurement, and construction (EPC) companies to deliver particular work packages, they pay those companies, not the producers; this is a second transaction. The differences between first and second transactions are gross margin and risk premium. Both could be higher at the moment of purchasing than anticipated by the project base estimate. This is relevant not only to materials and equipment. The same situation normally occurs with labor hired by an EPC company for a project. Moreover, the situation could be exacerbated due to the involvement of trade unions, employment commitments to local communities, aboriginal groups, and so on. The particular exposures of a project owner and EPC contractors depend on the nature of the signed contracts. Taking the second transaction into account is too often missed, which leads to rather optimistic (lower) cost escalation assessments. At the same time, this is quite difficult to evaluate, as macroeconomic indexes that take second transactions into account are rarely available. The example of the cost escalation of line pipes in 2008 introduced in Chapter 1 is a good illustration of first and second transactions. Even though Global Insight predicted growth of pipe prices by 10 to 15% (PPI, first transactions), real growth in the first part of 2008 was 20 to 40% (owner’s prices, second transactions). In any case, Global Insight pinned down the right trend. There are several methodologies to evaluate additional contributions to price volatility as differences between first and second transactions. All of them are based on multipliers that condition PPIs to take second transactions into account.
The development of multipliers and the methodology to justify them differ significantly. Any such methodology is based on some type of calibration. Due to the lack of relevant historic data for calibration, and their substitution by the judgments of specialists who are prone to various types of bias, the calibration of models is quite challenging and not always convincing. It would not be an exaggeration to state that there is no fully consistent methodology to evaluate second transactions.1 All of them remain at a significant distance from reality. But it is better to take the second transaction into account even inaccurately than to ignore it altogether. One of the methodologies is based on the addition of a multiplier to all PPI indexes that is proportional to the speed of the index change. Mathematically, this multiplier is proportional to the time derivative (dZ/dt) of a macroeconomic index Z.
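As a rough illustration of this first method, the conditioning of a PPI series by a term proportional to dZ/dt can be sketched as follows; the quarterly index values and the calibration coefficient k are purely illustrative assumptions:

```python
# Condition a quarterly PPI series (first transactions) into an
# owner-price series (second transactions) by adding a term
# proportional to the index's rate of change, dZ/dt.

ppi = [100.0, 102.0, 106.0, 112.0]  # hypothetical quarterly index Z
k = 0.5                             # calibration coefficient (judgment-based assumption)

owner_price = [ppi[0]]
for t in range(1, len(ppi)):
    dz = ppi[t] - ppi[t - 1]        # discrete dZ/dt, per quarter
    owner_price.append(ppi[t] + k * dz)

print(owner_price)  # fast-rising quarters get a larger second-transaction markup
```

The same sign logic gives de‐escalation: a falling PPI produces a negative dZ/dt and thus a deeper cost reduction in the second‐transaction price.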
As in the example of the growth of pipe prices in 2008, quicker growth of PPI (first transaction) should mean an even higher growth of the addition related to the second transaction. The same should be true for cost de‐escalation: the quicker the PPI index declines, the deeper the cost reduction related to the second transaction. The second method is based on evaluation of the multipliers depending on economic activity related to a particular PPI index. It is important to keep in mind that some PPI indexes are global in nature. For instance, production of line pipes or turbines for compressor stations is perfectly global. At the same time labor or rented equipment PPIs have clearly regional features. Various indexes of economic activities in particular countries or regions (growth of economy, capital spending, etc.) could be used to justify conditioning of PPIs. Unfortunately, relevant indexes on capital project activities in particular regions are not usually available. Some consulting companies could carry out studies based on a collection of information about planned capital projects in a particular region, which would cost an arm and a leg. The reliability of such studies is not always clear. Polling of knowledgeable project specialists from engineering, procurement, construction, strategy and business development, and so on could be a viable and practical alternative to collecting the required regional macroeconomic information.2 Governmental organizations that handle project permits are a common source of public information about planned projects. I have applied this method for several capital mega‐projects, which produced somewhat credible results. The two previous multiplier evaluation methods take into account the macroeconomic situation only. There is a third method to evaluate the second transaction multipliers that is based on a combination of microeconomic and macroeconomic factors. 
Besides the macroeconomic activities in a region related to capital projects, the particular competitive situation, contractual obligations, and types of contracts define the final contract prices that include the second transactions. If the type of contract is defined and fixed, it is important to understand the competitive situation a particular project faces. Three different possible competitive situations may be distinguished for particular work packages to evaluate local market factors:

1. Mass market
2. Oligopoly
3. Monopoly

If a project may receive the required services, materials, or equipment from multiple suppliers/vendors/contractors, such a competitive situation should not
encourage large differences between PPIs and second transaction prices (mass market). The situation will differ if only three or four suppliers/vendors/contractors are available. Some of them could be quite busy. As a result, bidders may lean toward additional price increases. If there are only one or two prequalified suppliers/vendors/contractors available, the addition to the PPI price could be remarkable. So in many cases a local market factor depends on the number of prequalified bidders that are available and decide to reply to a particular RFP. There is a certain correlation between this and the regional economic situation, too. The local market factor is more important for a particular project than the regional economic situation in general. However, elements of double counting should be avoided when reviewing both factors. In some cases a monopoly or oligopoly could be created quite artificially for a particular project regardless of the economic situation in the region. The following example led me to the conclusion that microeconomic local factors and regional macro‐economic factors should be taken into account separately. I observed cases where projects were bound by labor agreements or arrangements with aboriginal groups, which led to remarkable spikes in contract prices due to inflated second transactions. A similar project across the street could get similar products or services on the open market at much lower prices. Artificial monopoly practices are normal in some regions of the world. They may take various forms, although the bottom line is always the same. Conditioning of PPI indexes for corresponding regions should take into account local features of second transactions realistically. One of the challenges when handling the regional features of second transactions is defining the region. In many instances it is quite obvious. 
For example, when we are talking about a project in Northern Alberta, the Gulf of Mexico, or the Persian Gulf, the definitions of the regions are self‐explanatory. However, when we discuss the continental United States or Europe, it is a different story. Some reasoning about the types and geographies of the companies and labor force interested in, and allowed to, compete for the project work is needed. The degree of globalization related to the project should not be ignored, either. As already mentioned, the biggest problem common to all three methods is the calibration of the second transaction multipliers. Historic data similar to those on the growth of pipe prices in 2008, the judgments of experienced procurement specialists and cost estimators, and the collection of information about economic activity in the region where a project is based would in some cases help to properly develop and calibrate the second transaction multipliers for particular PPI indexes. However, in my experience, the third method, based on evaluation of the competitive situation for a particular project, is most adequate and
218
◾ Cost Escalation Modeling
understandable by project teams. Project team members may actively contribute to the evaluation of second transactions for particular PPI indexes, which matters both for the method's practicality and for team members' engagement. The application of this method is discussed in this chapter.
Credibility of Cost Escalation Modeling

I have a reservation about the value of macroeconomic indexes developed for the next 10 years. Usually, they show a lot of dynamics for the first three to five years and then settle down ("flat and calm") afterward. This is an indicator that the economic models used for developing the indexes represent future reality relatively well for the first few years and gradually lose their essence for the rest of the 10-year period. And of course they do not capture in advance a change in "economic paradigm" such as happened, for example, in 2008. If we look at the macroeconomic indexes developed in early 2000 for the following 10 years, there is of course no hint of the 2008 economic downturn. However, quarterly updates of all models and their outputs (indexes) allow one to capture signs and precursors of coming changes in advance, say by a year or a year and a half, as happened in the case of the line pipe example.

Another point relates to the selection of relevant PPI indexes. Proxies of the desired PPI indexes are commonly used in cost escalation modeling; this leads to a certain reduction in the accuracy of modeling. However, such a reduction is not critical.

Obviously, cost escalation modeling, including currency exchange rate volatility modeling, does not look like a probabilistic exercise, although it could be developed probabilistically. It was pointed out earlier that the time series of commercially available macroeconomic indexes are sold as one-point numbers (i.e., one number per quarter or year). There is no uncertainty (ranges) around those indexes and exchange rate forecasts. It is possible to introduce those uncertainties (ranges) manually and run probabilistic models. A broader uncertainty should be used for later indexes in the series, with a zero range being assigned to the base period index.
As there will be inevitable issues in justifying those growing ranges, probabilistic cost escalation modeling may be viewed as overshooting. Unfortunately, staying with one-point numbers means an absolute (+/-0%) level of accuracy in cost escalation modeling, which is not credible. Artificially introduced accuracy spreads do not help, either. The only reason to get involved in such an exercise is a full assessment of the primary accuracy ranges of a base estimate, where all possible uncertainties, including cost escalation, should be counted. This challenge is discussed in Chapter 14.
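If one did want to run cost escalation probabilistically, the growing ranges just described could be attached to a one-point index forecast along the following lines. The 2%-per-forecast-year half-width below is an invented illustration, not a calibrated value; that arbitrariness is exactly the justification problem noted above.

```python
# Illustrative sketch: attach widening uncertainty ranges to a one-point
# macroeconomic index forecast -- zero range at the base period, broader
# ranges for later years. The half-width growth rate is an assumption.
def index_with_ranges(one_point_indexes, half_width_per_year=0.02):
    """Return (low, mid, high) triples; ranges widen with forecast horizon."""
    banded = []
    for years_out, idx in enumerate(one_point_indexes):
        half_width = half_width_per_year * years_out * idx
        banded.append((idx - half_width, idx, idx + half_width))
    return banded

# One-point construction-labor proxy index, 2014 (base) through 2020
icl = [1.00, 1.03, 1.09, 1.13, 1.17, 1.21, 1.26]
banded_icl = index_with_ranges(icl)  # banded_icl[0] has a zero range
```
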
Instead, the one-point escalation numbers may be understood as expected/mean values, which are the best one-point representatives of distributions. The distributions (accuracy "ranges" around one-point values) of macroeconomic indexes are not available along with their one-point values, although I am sure the economists who run the economic models do derive the indexes' accuracy ranges as outputs. Developing expected values of cost escalation is sufficient for evaluating the contribution of escalation to the project cost reserve. However, the contribution of the cost escalation spread to the primary accuracy range associated with all cost uncertainties will not be taken into account (Chapter 14).

The weakest point in developing a reliable cost escalation reserve fund relates to taking second transactions into account. There is no straightforward and fully justifiable method to calibrate PPI indexes for the impacts of second transactions. Too many talking points, assumptions, and unsubstantiated calibrations are used to build these models. Still, it is better to take second transactions into account somewhat inaccurately than to ignore them, which would certainly be wrong. The best way to do this is to examine both the regional macroeconomic factors and the local competition factors that a project encounters. Modeling cost escalation using macroeconomic indexes is not ideal. However, when used properly, it is the best available method to support decision making.
EXAMPLE OF COST ESCALATION MODELING

Real cost escalation models can be quite large, as dozens of macroeconomic indexes may be used to represent dozens of base estimate cost accounts. Let's review a simplified example related to the hypothetical concrete works of a capital project. This example might relate to an offshore concrete gravity-based structure, a dam at a hydropower-generation project, and so on. Let's also assume that according to the project cost estimate, the cost of the concrete works is $100 million in prices of a base period. The estimate puts the cost of labor at 40% of the base estimate, broken down into 30% construction labor plus 10% engineering labor. The cost of construction equipment is 20% and the cost of materials is 40%, the two main types of materials being cement (25%) and structural steel (15%). A further breakdown of costs is possible, but for simplicity we will stay with just five cost components of the concrete works. These shares define the weight coefficients of a composite index for the concrete works.
A composite index is a representation of a particular cost account of a base estimate through basic macroeconomic indexes. We use fictional macroeconomic indexes in this example. (It would not be prudent to use these in any real cost escalation modeling; real indexes can be purchased from companies like Global Insight.) As discussed earlier, finding macroeconomic indexes that are a perfect match to the modeling requirements is not usually possible. Proxies are normally used instead. Table 11.1 presents five hypothetical indexes that may serve as proxies for the five cost contributions to the overall cost of the concrete works. They are all normalized to the base period, which is 2014 in this example. These indexes, as well as the resulting composite index, are time series spanning several years.

TABLE 11.1  Macroeconomic Indexes Selected for Modeling

Index     Index Description       Weight   2014 (Base)   2015   2016   2017   2018   2019   2020
I_CL(t)   Construction labor      30%      1.00          1.03   1.09   1.13   1.17   1.21   1.26
I_EL(t)   Engineering labor       10%      1.00          1.03   1.06   1.11   1.14   1.14   1.18
I_CE(t)   Construction equipment  20%      1.00          1.03   1.06   1.11   1.14   1.14   1.18
I_C(t)    Cement                  25%      1.00          1.03   1.07   1.11   1.13   1.19   1.20
I_SS(t)   Structural steel        15%      1.00          0.99   1.00   1.04   1.08   1.12   1.18

It is easy to construct the composite index for the concrete works, I_CONCR, once the weight coefficients and macroeconomic indexes are identified:

    I_CONCR(t) = 30% × I_CL(t) + 10% × I_EL(t) + 20% × I_CE(t) + 25% × I_C(t) + 15% × I_SS(t)

As all five macroeconomic indexes represent PPIs or first transactions, they should be conditioned to include the second transactions. This should be done in two steps. First, the general economic situation related to overall capital spending in a region should be evaluated. Figure 11.1 plots the results of a regional capital expenditure (regional CapEx) assessment that could be done by any project team. Second, the particular competitive situation should be discussed for every package cost component. Sometimes a project has no choice but to use a particular monopolist as a supplier or a contractor; in other cases either very few or several vendors could be available. The level of competition will define the significance of the second transactions. In keen competition, the second transaction addition
FIGURE 11.1  Assessment of Regional Capital Expenditure

[Line chart: the regional CapEx assessment plotted from 2014 (base) through 2021 on a vertical scale running from 90% to 120%.]
might even be negative. Table 11.2 assigns local market factors to the five cost components of the concrete works; for this discussion, we treat these five cost components as relating to five different work packages. Table 11.3 introduces the rules for selecting the local market factors used in Table 11.2, depending on the competitive situation.

TABLE 11.2  Local Market Factors

Macroeconomic Index      Local Market Factor   Rationale
Construction Labor       0.5                   Unionized labor and agreement with aboriginal groups
Engineering Labor        0.5                   Unionized labor
Construction Equipment   0.3                   Three major providers in the region
Cement                   0.1                   Multiple suppliers
Structural Steel         0.1                   Multiple suppliers

TABLE 11.3  Assessment of Competitive Situation

Competition                        Local Market Factor
Monopoly (1 supplier)              0.4–0.5
Oligopoly (2–4 suppliers)          0.1–0.4
Mass market (multiple suppliers)   0–0.1

As the composite index is a time series, it is important to know when the corresponding concrete works and expenditures are planned. If the works are planned for several years, each annual portion will be escalated differently. Table 11.4 contains the cash flow for the concrete works and the results of the cost escalation calculations. The composite index in this example is already conditioned by taking into account both the regional CapEx (Figure 11.1) and the local market factors (Table 11.2).

TABLE 11.4  Calculation of the Cost Escalation Reserve

Account                            Estimate   2014 (Base)   2015      2016      2017      2018      2019      2020
Composite Index: Concrete Works    n/a        100.00%       102.69%   106.06%   112.14%   116.43%   120.23%   127.14%
Cash Outflow (Base), 000$          100,000    0             0         2,000     40,000    47,000    11,000    0
Cash Outflow (Escalated), 000$     114,926    0             0         2,121     44,856    54,724    13,225    0
Escalation, % of base estimate     14.93%

In this example, the cost escalation of the concrete works is about 15% for the duration of the works, which is higher than if general inflation were used to evaluate escalation. The difference between this result and CPI defines real escalation, which excludes inflation. Real escalation is often used in project economic models. In this example all PPI indexes were assumed to be nominal, meaning that they include inflation; for this reason, the escalation number derived here is escalation in nominal terms.

Markets in equilibrium tend to give rise to escalation that is close to inflation. However, markets in equilibrium are the exception rather than the rule. There are always niches of the market relevant to a particular project that are out of equilibrium. The line pipe market in 2008 discussed earlier is one such
example. This makes using CPI as a basis for cost escalation inadequate and irrelevant in most cases.

Most of the numbers used in escalation modeling, including the macroeconomic indexes, could be represented as distributions. Additional rationale would be required to justify all those ranges. This would turn cost escalation modeling into a probabilistic exercise, which increases the complexity of the models enormously. In my experience, I have built several models like this to include cost escalation (and, where required, exchange rate volatility) in the overall project cost reserve and to evaluate its primary accuracy range. Even though those were quite successful efforts, I still recommend that escalation modeling be done deterministically as shown previously. To be adequate, this modeling should take second transactions into account. The terms regional CapEx and local market factor could be replaced by the macroeconomic and microeconomic factors of second transactions if needed. As indicated previously, it is not about the terminology of particular uncertainty objects; it is about their essence.
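The arithmetic of the concrete-works example can be sketched in a few lines. The sketch below first builds the raw composite index as the weighted sum of the five Table 11.1 proxies, then applies the conditioned composite from Table 11.4 to the planned cash flow; small differences from the printed 14.93% come from the two-decimal rounding of the published index values.

```python
# Composite index and escalation reserve, using the fictional numbers
# from Tables 11.1 and 11.4.
weights = [0.30, 0.10, 0.20, 0.25, 0.15]  # I_CL, I_EL, I_CE, I_C, I_SS
proxies = [  # 2014 (base) .. 2020, from Table 11.1
    [1.00, 1.03, 1.09, 1.13, 1.17, 1.21, 1.26],  # construction labor
    [1.00, 1.03, 1.06, 1.11, 1.14, 1.14, 1.18],  # engineering labor
    [1.00, 1.03, 1.06, 1.11, 1.14, 1.14, 1.18],  # construction equipment
    [1.00, 1.03, 1.07, 1.11, 1.13, 1.19, 1.20],  # cement
    [1.00, 0.99, 1.00, 1.04, 1.08, 1.12, 1.18],  # structural steel
]
raw_composite = [sum(w * p[t] for w, p in zip(weights, proxies))
                 for t in range(7)]
# raw_composite[1] is about 1.024; the conditioned value in Table 11.4 is
# 1.0269 because regional CapEx and local market factors are applied on top.

conditioned = [1.0000, 1.0269, 1.0606, 1.1214, 1.1643, 1.2023, 1.2714]
cash_base = [0, 0, 2_000, 40_000, 47_000, 11_000, 0]  # 000$, 2014-2020

cash_escalated = [c * i for c, i in zip(cash_base, conditioned)]
escalation = sum(cash_escalated) / sum(cash_base) - 1.0
# escalation comes out near 0.149, i.e. about the 14.93% shown in Table 11.4
```
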
SELECTING THE RIGHT TIME TO PURCHASE

Using composite indexes is the basis for cost escalation modeling, which in most cases implies downside deviations from project base estimates. More fortunate, upside cost escalation situations may occur, too (cost de-escalation). These could be used to select the most favorable time to purchase some types of materials and services, assuming that the project schedule allows such flexibility. The huge cost escalation of line pipes in the first part of 2008, and the price drop in the second part of that year, is a practical example. It was recommended to delay the purchasing of line pipes for a mega-project in early 2008 and put it off until later that year. It was not exactly clear what would happen in late 2008, although some hints of an overall slowdown were there despite aggressive cost escalation in late 2007 and early 2008. In any case, implementation of this recommendation led to significant savings, even greater than predicted.

The approach to selecting the right purchasing time is the same as for modeling cost escalation. (Taking into account exchange rate volatility could be part of this exercise when required.) The only difference is running several scenarios (early purchases versus late purchases) that are allowed by the overall project schedule and comparing the results.
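Such scenario comparison can be sketched directly: price the same package in each year the schedule permits and pick the cheapest scenario. The index values below are the fictional conditioned composite from Table 11.4, and the purchase window is invented for illustration.

```python
# Purchase-timing comparison using the conditioned composite index
# (fictional values from Table 11.4).
composite = {2016: 1.0606, 2017: 1.1214, 2018: 1.1643, 2019: 1.2023, 2020: 1.2714}
base_cost = 11_000  # 000$, package cost in base-period prices

def escalated_cost(year):
    return base_cost * composite[year]

allowed_years = [2019, 2020]  # purchase window permitted by the schedule
best_year = min(allowed_years, key=escalated_cost)
# Buying in 2019 rather than 2020 avoids roughly
# 11,000 * (1.2714 - 1.2023), i.e. about 760 (000$).
```
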
CONCLUSION

Project teams and decision makers understand the need to adjust project base estimates made for a particular base period due to future price volatility. The point is to select the right methods and macroeconomic indexes. Selecting the CPI index as a measure of inflation is inadequate but better than fully ignoring this topic. Assessment of the required cost escalation reserve as part of the project cost reserve is not straightforward. However, it provides clarity about the expected final project cost. Being able to proactively define the best time for purchasing is an additional bonus.
NOTES

1. J. Hollmann and L. Dysert, "Escalation Estimation: Working with Economic Consultants," AACE International Transactions (Morgantown, WV: AACE International, 2007).
2. Consulting companies that sell corresponding economic indexes carry out these research activities on a broader scale.
PART III

Probabilistic Monte Carlo Methods
EVEN STANDARD PROBABILISTIC METHODS for assessing project cost and schedule reserves are relatively sophisticated. Aside from their recognized power and value, they entail several unwritten but fundamental rules that must be followed to stay adequate and avoid confusing results. The purpose of Part III is to summarize all the relevant rules for developing adequate probabilistic cost and schedule models. This includes discussion of the data specifications that should be used as inputs and outputs of probabilistic analyses.
CHAPTER TWELVE
Applications of Monte Carlo Methods in Project Risk Management
Questions Addressed in Chapter 12
▪ What is the value and power of the Monte Carlo methodology?
▪ Why should deterministic and probabilistic methods be seamless?
▪ What should be done to avoid double dipping?
▪ Is it really challenging to quantify unknowns?
▪ How do 5,000 deterministic scenarios give rise to one probabilistic distribution?
▪ What are the origins and roles of general uncertainties and uncertain events in Monte Carlo models?
▪ Why should correlations never be ignored?
▪ What is included in project reserves?
▪ What should project teams and decision makers know about probabilistic branching and merge bias?
▪ Why should integrated cost and schedule risk analyses come to maturity as soon as possible?
THIS CHAPTER EXAMINES UNCERTAINTY objects typically taken into account as inputs to probabilistic cost and schedule models. The major features of standard probabilistic cost and schedule risk analysis techniques are outlined. Some advanced modeling techniques are also introduced that might be regarded as either exotic or too complicated by some readers. Their application is a must in specific situations to adequately reflect reality in models. It is important to be aware of such techniques in the risk management toolbox.
FEATURES, VALUE, AND POWER OF MONTE CARLO METHODS

In the course of project development and execution, an issue arises regularly: comparing the project's baselines with its actual results, and its actual results with the outcomes of other, similar projects. This relates to all project objectives but first and foremost to Cost and Schedule.

A company may have an internal database that collects information about its previous projects. Some of these are comparable with the project of interest. Some are not, and either require proper conditioning of results to assure an apples-to-apples comparison or should not be used at all. Internal comparisons like this are not exactly bias free. One reason is that all types of organizational bias that an organization has (quality of project management processes and procedures, culture, risk appetite, etc.) would be factored in. This gives rise to systematic errors in the form of "standard" amendments of project costs and durations. If an organization compares its project with similar projects of other organizations, those comparisons will have their own systematic errors, so additional conditioning will be required for an apples-to-apples comparison. Conditioning of data can be an additional source of bias.

Commercial consulting organizations may provide data about dozens of relevant projects in a particular industry. This allows one to identify a range of plausible cost and duration outcomes using a higher number of projects in the sample. But apples-to-apples conditioning of data is inevitable there, too, due to the geographic and industry variety of the projects' execution.

All the types of benchmarking mentioned earlier are valuable to a certain degree. They provide additional insights into project progress. However, all of them have a powerful competitor in the form of Monte Carlo statistical simulations, for the following reasons.
First, this method resolves the issue of conditioning right off the bat. The reason for this is very simple: the Monte Carlo method, being an advanced sampling technique, is based on the given project cost and schedule baselines, which excludes the need for any conditioning.

Second, all possible deviations from baselines are evaluated at quite a detailed level. Each deviation is identified, discussed, and assessed to ensure adequate sampling.

Third, practitioners who work with Monte Carlo models usually feel that any particular inaccurate assessment of uncertainties is not that important unless there is a systematic error across the whole process based on various types of bias. This ensures stability of results where bias is well controlled.

Fourth, random sampling of data, in runs called iterations, can be done many times. The minimum standard in the industry is 1,000 iterations, but there is nothing preventing 5,000 or 10,000 iterations. Each iteration is technically fully deterministic and represents a standalone relevant project based on the same scope as the one investigated. Moreover, the schedule structure and logic of each such project are the same as those of the project of interest, and the same holds for the structure of the base estimate. However, the particular one-point numbers sampled in any given iteration differ, drawn from the assessed uncertainty ranges. So, in place of a few relevant projects to compare against during benchmarking, we may be talking about 1,000 or 5,000 or 10,000 fully relevant hypothetical projects that don't require any conditioning.

Fifth, what if our project is quite unique and no similar projects are available for benchmarking? What if we need benchmarking for a CO2 sequestration project? What if the benchmark should be an Arctic drilling project? What if a new technology is used in a project of interest?
In these cases, parts of a unique project could be benchmarked based on data conditioning but never the whole project. Therefore, the key value of the Monte Carlo method is the capability to mimic or imitate data statistically for thousands of fully relevant albeit hypothetical projects. The essence of the Monte Carlo method is multiple sampling of uncertainties as inputs to the mathematical models to get information about possible overall project cost and schedule uncertainty. Each input requires due diligence and contributions from specialists of various disciplines. This defines the main power of the Monte Carlo method. Specifically, the main power of the Monte Carlo method in project risk management is that it integrates opinions and inputs of multiple specialists belonging to various disciplines into decision making.
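The iteration idea can be illustrated with a minimal sketch. The three cost accounts and their (min, most likely, max) ranges below are invented for illustration only; each iteration samples one fully deterministic "hypothetical project" cost, and the sorted totals approximate the output distribution.

```python
import random

# Invented (min, most likely, max) cost ranges for three accounts, $M.
accounts = {
    "civil":      (40, 50, 75),
    "mechanical": (25, 30, 45),
    "electrical": (10, 12, 20),
}

def one_iteration(rng):
    """One deterministic 'hypothetical project': sample each account once."""
    return sum(rng.triangular(low, high, ml) for low, ml, high in accounts.values())

rng = random.Random(42)          # fixed seed so the run is reproducible
totals = sorted(one_iteration(rng) for _ in range(5_000))

p50 = totals[len(totals) // 2]        # median project cost
p90 = totals[int(len(totals) * 0.9)]  # 90th percentile, a common reserve anchor
```

Note that `random.triangular` takes its arguments as (low, high, mode), which is why the tuple is unpacked out of order.
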
INTEGRATION OF DETERMINISTIC AND PROBABILISTIC ASSESSMENT METHODS

The levels of development and sophistication of the deterministic and probabilistic methods adopted by organizations differ significantly. Sometimes either a deterministic or a probabilistic methodology (or both) has yet to be adopted. It is not uncommon for deterministic and probabilistic methods to be developed and used independently by organizations.

Sometimes an organization has a deterministic scoring methodology that is used along with deterministic quantitative methods for cost and schedule reserve development (see Chapter 3). In some extreme instances, fixed 15% or 20% cost and schedule reserves are applied across the board regardless of project scopes, budgets, schedules, complexities, and novelties.

A common situation is that a project team follows adequate deterministic risk management practices, develops good risk registers, and addresses risks. At the same time, it uses qualitative probabilistic or very simplistic quantitative probabilistic methods such as QuickRisk (Chapter 3). But those methods do not utilize the available project risk registers at all! One project manager who was not very advanced in risk management asked me why an existing project risk register was not used in the probabilistic schedule risk analysis for his mega-project. The answer was that his project services people were not well trained in probabilistic risk methods. They knew only the basics of QuickRisk and sold that to him as the most advanced probabilistic method ever known to humankind.

I am familiar with an organization that has been engaged in a number of mega-projects in North America and has a very strong probabilistic methodology. That sounds great except for the fact that it does not manage risks consistently using deterministic methods. This means that its assessments of risk exposure before and after addressing are pretty much the same, at least during project development. Of course, experienced project managers manage risks in Execute regardless of corporate risk procedures. However, project baselines would be more aggressive and realistic if risks were properly addressed.

The previous examples illustrate various types of organizational bias that lower the efficiency of adopted risk management systems. We presented the 3D risk management integration approach in Chapter 2. Another integration effort should be undertaken to integrate the deterministic and probabilistic assessment methods. One without the other, or without proper integration with the other, leaves big voids in the credibility of the project risk management system. But their integration yields a robust synergy that
drastically improves the quality and reliability of the overall project risk management system. Figure 12.1 introduces a generic risk management workflow that integrates the deterministic and probabilistic steps. It reflects the major steps of the risk management process (Figure 2.2) with a focus on the two assessment steps. In reality, this workflow may repeat itself several times and include several what-if scenarios. Moreover, it is not unusual that new uncertainties are identified, or that existing ones are reassessed, accepted, or closed, during the probabilistic steps of the workflow. Upon completion of the probabilistic activities, both the deterministic and the probabilistic registers should be updated.
UNCERTAINTY OBJECTS INFLUENCING OUTCOME OF PROBABILISTIC ANALYSES

The purpose of this section is to outline the major uncertainty objects that are commonly used as inputs to probabilistic models. Some objects (Table 1.2) are not subject to probabilistic cost and schedule risk analyses and will not be discussed in this section. However, a very important point is that in
FIGURE 12.1  Integrated Deterministic and Probabilistic Workflow

[Workflow diagram: risk identification feeds a deterministic risk register; specific impact and probability ranges and correlations form a probabilistic risk register; Monte Carlo simulation outputs the reserve and sensitivity figures; these feed reviews and validation, sensitivity analysis, what-if scenarios, reserve allocation, and probabilistic project economics, concluding with the risk report and decision making.]
232
◾ Applications of Monte Carlo Methods in Project Risk Management
spite of this, they should always be kept in mind when working with the uncertainty objects included in the analyses. The reason for this is possible double counting of inputs, which practitioners call double dipping. For instance, cost escalation uncertainty could be taken into account in a cost escalation model but also included in a project risk register as an uncertain event and/or in ranges around the costs of particular items in the base estimate as a general uncertainty. As a result, the same uncertainty object may be taken into account two or three times, hence double (or even triple) dipping. Double dipping is a type of bias that could have both psychological (subconscious and conscious) and organizational roots. It is not the only type that should be managed in probabilistic analysis, though. Various systematic errors are embedded in assessments of uncertainties due to the previously discussed types of psychological and organizational bias. According to Table 1.2, these should be judged as general uncertainties that influence the accuracy of all inputs.

Some uncertainty objects of Table 1.2 should not be included in probabilistic models. As previously discussed, known show-stoppers and game changers should be listed explicitly as exclusions from the inputs to probabilistic models. When they occur, they destroy or drastically redefine the Cost and Schedule baselines. Should a show-stopper or a game changer occur, there is no longer any uncertainty management of the project as previously contemplated. In other words, a new reality must be dealt with in which there is no previously defined project and no need for its uncertainty assessments: either a cardinally redefined project (in the case of game changers) or no project at all (in the case of show-stoppers). Such situations constitute the previously discussed corporate risks. Mathematically, they define the limits of applicability of probabilistic cost and schedule models.
We saw in Chapter 4 that various factors might lead to the situation where some project uncertainties stay unidentified. Project novelty, the phase of its development, and various types of bias may provide room for unidentified uncertainties. To a certain degree these define the level of adequacy, quality, and efficiency of the project risk management system. In a way, if the level of adequacy, quality, and efficiency of a risk management system were assessed, the impact of unknown uncertainties on results of probabilistic cost and schedule analyses would be assessed. Corresponding allowances may be introduced to the models as discussed next. This method of assessment of unknown uncertainties is already used by some multinational oil and gas companies as part of their corporate procedures for project probabilistic risk analyses.
To sum up, the following uncertainty objects from Table 1.2 will be taken into account when developing inputs to probabilistic cost and schedule models:
▪ General cost uncertainties (cost models)
▪ General duration uncertainties (schedule models)
▪ Burn rates (integrated cost and schedule models)
▪ Downside uncertain events
▪ Upside uncertain events
▪ Unacceptable performance uncertain events
▪ Organizational bias
▪ Subconscious bias
▪ Conscious bias
▪ Unknown uncertain events
Previously we discussed various types of bias when assessing uncertainties. That discussion is fully relevant to developing inputs to probabilistic risk models. Unacceptable performance uncertain events will be treated as part of the broader class of downside uncertain events. Burn rates will be examined when discussing the integrated model. In the next section we discuss the nature, origin, and examples of general uncertainties and uncertain events. All inputs to probabilistic models depend on contract types and on particular contract terms, conditions, and clauses. For instance, let's consider a work package awarded to an engineering, procurement, and construction (EPC) contractor as a reimbursable, unit-price, or lump-sum contract. The same uncertainties would have different realizations and assessments for the project owner and for the EPC contractor due to the corresponding risk brokering and transferring. Additional negotiations and contract clauses could further change the overall uncertainty exposure of each side. In other words, conversion of contract terms, conditions, and clauses into inputs to probabilistic models requires special attention and due diligence.
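The effect of risk brokering through contract type can be illustrated with a deliberately simplified sketch: the same underlying cost uncertainty is kept by the owner under a reimbursable contract but converted into a fixed (and typically premium-priced) exposure under lump sum. All numbers below are invented for illustration.

```python
import random

# Same underlying cost uncertainty, two contract forms. Numbers are invented.
rng = random.Random(7)
actual_costs = [rng.triangular(90, 140, 100) for _ in range(5_000)]  # $M

LUMP_SUM_PRICE = 115  # the contractor prices in a premium for carrying the risk

# Reimbursable: the owner keeps the cost uncertainty.
owner_reimbursable = actual_costs
# Lump sum: the owner's exposure is fixed; the contractor absorbs the spread.
owner_lump_sum = [LUMP_SUM_PRICE] * len(actual_costs)

mean_reimbursable = sum(owner_reimbursable) / len(owner_reimbursable)
spread_reimbursable = max(owner_reimbursable) - min(owner_reimbursable)
# The owner trades a wide range of outcomes for a (likely higher) certain price.
```
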
ORIGIN AND NATURE OF UNCERTAINTIES

As previously discussed, general uncertainties are characterized by ranges of impacts. The probability of their occurrence is totally certain, being equal to 100%; as such, they are attached to project baselines. For uncertain events, besides the uncertainty of impacts, the probability of occurrence is also uncertain and may take any value less than 100%. To better understand
the origin and nature of cost and schedule general uncertainties and uncertain events let’s examine the following four examples.
Example of General Cost Uncertainty

Imagine that someone decided to renovate the basement in his or her house, and that five contractors came up with quotes. Those quotes took into account the general instructions of the homeowner. Let's say the quotes were $15,000, $19,500, $21,000, $23,000, and $35,000. An overly diligent homeowner might want to carry out an additional market study and ask for pricing from five additional contractors. Upon receiving their responses, the owner has the following numbers: $15,000, $16,000, $19,500, $20,000, $20,500, $21,000, $23,000, $27,000, $33,000, $35,000.

It seems that the majority of the numbers concentrate around $20,000 to $21,000. Those look the most reasonable and natural for this type of work in this particular market. Lower numbers might mean great market opportunities but also inferior quality of materials and work. Higher numbers imply exceptional quality but might turn out to be just plain overpricing. Additional research to check the quality and reputation of the contractors would be required to make a purchasing decision. This would narrow down the initial options and allow one to come to a conclusion. But until this is done, the numbers around $20,000 to $21,000 seem most reasonable and likely, the other conditions being equal. Those "other conditions" could become differentiators of options when an additional study is undertaken. Figure 12.2 represents the general uncertainty related to the cost of the basement renovation after the initial collection of marketing information. Using the risk management process terminology of Figure 2.2, this corresponds to uncertainty assessment as-is.
FIGURE 12.2 General Cost Uncertainty before Addressing (cost axis $15K–$35K; Min = P0 at $15K, Most Likely around $20K–$21K, Max = P100 at $35K; P10 and P90 marked)
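The as-is range in Figure 12.2 can be reproduced with a quick Monte Carlo sketch. The most likely value of $20,500 is an assumption chosen to sit in the $20,000–$21,000 cluster of quotes; min and max come from the lowest and highest bids.

```python
import random

# Assumed triangular parameters read off the quotes in the text:
# min = $15,000 (P0), most likely ~ $20,500, max = $35,000 (P100).
MIN, ML, MAX = 15_000, 20_500, 35_000

random.seed(1)
samples = sorted(random.triangular(MIN, MAX, ML) for _ in range(200_000))

def percentile(sorted_xs, p):
    """Return the value below which a fraction p of the samples fall."""
    return sorted_xs[int(p * (len(sorted_xs) - 1))]

p10 = percentile(samples, 0.10)  # ~ $18,300 per the text
p90 = percentile(samples, 0.90)  # ~ $29,600 per the text
print(f"P10 = ${p10:,.0f}, P90 = ${p90:,.0f}")
```

With these assumed parameters the empirical P10 and P90 land close to the values quoted for Figure 12.2, which is exactly the kind of cutoff pair a trigen distribution uses in place of hard min/max limits.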
A triangular distribution was used in Figure 12.2 along with a beta distribution to represent the likelihood of particular costs. This accentuates the fact that a set of statistical data may be approximated by several different distributions. The key point here is that some distributions, such as the triangular one, may be fully defined by just three intuitively understandable parameters (minimum [min], most likely [ML], and maximum [max]). Other distributions require three or more parameters that are not as intuitively understandable as those of the triangular distribution. For example, the beta distribution shown in Figure 12.2 requires parameters that are not directly understandable by people without a substantial mathematical background, which makes it impractical for everyday use in project risk management. The only justifiable use of such sophisticated distributions in project risk management would be the availability of relevant historic data about a particular uncertainty that follows such a distribution.1 Besides the triangular distribution, the other distribution commonly used in project risk management is the trigen distribution, which is also fully described by three parameters. As a matter of fact, the minimum and maximum values of a triangular distribution are too certain: they imply improbable precision when describing ranges associated with uncertainties. For this reason, two probabilistic cutoff values on the wings of the distribution are used as parameters along with the ML value, and the values outside of the cutoff range are left imprecisely defined. This reduces overshooting in terms of precision. The cutoff values are defined the following way. The minimum number of $15,000 represents the 100% likelihood that all values of the range are higher than $15,000: there are no values below $15,000.
A zero probability of finding values below $15,000 is indicated in risk management as P0: there is a zero level of confidence that values exist below $15,000. Similarly, a P100 level of confidence is associated with the value $35,000, meaning that there is an absolute (100%) level of confidence that all values of the distribution are below $35,000. The cutoff values usually used to define trigen distributions are P10 and P90 or P5 and P95. The reason why the P5 and P95 range is preferable is discussed in Chapter 14. A P10 level of confidence associated with a value means that there is a 10% probability that existing values are lower than this value, or a 90% probability that they are higher. Similarly, a P90 level of confidence associated with a value means a 90% probability that existing values are lower than this value, or a 10% probability that they are higher. In Figure 12.2 the P10
value is about $18,300 and the P90 value is about $29,600. In any case, a particular level of confidence is associated with each particular value of the impact range. When additional steps are undertaken to explore the possible renovation options it might be discovered that the two cheapest options are not acceptable due to low quality of imported materials and an unqualified labor force. It might be that these two contractors have bad records and reputations on similar projects. The two most expensive options included premium quality of materials and some extras and upgrades that were not initially required by the homeowner. The contractor that bid the price of $27,000 could not justify it by providing higher quality of materials or work. Quality seemed the same for options in the range $19,000 to $23,000. Figure 12.3 represents the same general uncertainty but after additional study or addressing. Obviously, the spread of uncertainty has become narrower. More research could be required to make an informed purchasing decision, which should narrow the range to one option. One might guess that the remaining shortlisted options are characterized by different timelines and schedules of works, different borrowing options, and so on. As discussed in Chapter 10, these procurement options have different associated uncertainties. Hence, they could be used as additional differentiators to make a final decision. There will no longer be general price uncertainty when the job is completed. However, even if one option is selected, this does not mean that the uncertainty evaporates entirely. The selected contractor might come up with some change orders to adjust the pricing due to upgrades or substitutions of materials even after the contract is signed.
FIGURE 12.3 General Cost Uncertainty after Addressing (narrowed range: Min = P0 near $19K, Most Likely around $21K, Max = P100 near $23K–$25K)
In real capital projects that have multiple cost accounts in their base estimates, the general uncertainties of those accounts could be correlated. For instance, a larger (and costlier) building is usually associated with a higher cost for its foundation. The paramount importance of correlations for ensuring the realism of probabilistic modeling is discussed later in this chapter. The key source of the general uncertainties of a capital project is the level of engineering and procurement development of the project.2 Various factors, from the uncertainty of labor productivity to the uncertainties and assumptions related to the scope of the project, give rise to general cost uncertainties. All ranges usually shrink as engineering progresses, solid quotes are received for work packages, package commitments are made, and so on. At the same time, the methods and quality of data used for development of the base estimate define the ranges, too. Various kinds of bias, from organizational to psychological, are also always present. Double counting of general uncertainties along with other uncertainties should be avoided; design allowances, uncertain events, and cost escalation should be kept in mind.
Example of Cost Uncertain Event The description of the selection of a contractor for a basement renovation looks like routine business‐as‐usual. No big surprises, just collection and analysis of marketing data, trying to reduce pricing uncertainty to an acceptable level. However, upon selection of a contractor and start of work, a big crack in the foundation could be discovered, leading to the need to repair it first before proceeding with the basement renovation. This might sound like a game changer if it redefines the initial project cost estimate, adding a substantial amount of money to the initial baseline. Discovery of the foundation damage is certainly not a business‐as‐usual event. It is truly a “business‐unusual” event. I am not sure about the exact statistics on the probability of such discoveries in particular areas but they could be assessed by knowledgeable specialists. Let’s say the probability of this is 1–5% in a given area. This would depend on various factors such as age of the house, presence of groundwater, types of soil, type of foundation, differences in temperature in the summer and winter seasons, and so on. The cost impact on the owner’s budget will depend on the size of the foundation crack, access to the damaged area by excavation, and so forth. There is a certain range of possible damages and costs of repair. Even for a given damage there would be a spread of offers from contractors, the same as in the previous
example. Now the homeowner should find another contractor, one that specializes in foundation repair, undertaking the steps described in the previous section. Potential events like these should be considered along with general uncertainties. First, they are characterized by their probabilities of occurrence or discovery (1–5% in our example) and, second, by the associated spread of impacts. The latter is a general uncertainty (a range of possible impacts) that applies if the event or discovery does occur. For a real capital project, multiple uncertain events might occur, both upside and downside ones. The impacts of some of them should be correlated. This means that if two risks occur in the same iteration, their impacts are synchronized in terms of magnitude: if the impact of one event belongs to the higher end of its range, the impact of the other should belong to the higher end of its range, too. For instance, an uncertain event of discovering poor subsurface geotechnical conditions, say boulders, and an uncertain event of overspending on the foundation should be correlated. Obviously, only distributions that have ranges can be correlated; one-point numbers cannot. When assessing uncertain events, be sure that there is no double dipping with other uncertainties, especially with general uncertainties and cost escalation.
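A sketch of how such an uncertain event enters a Monte Carlo model: a Bernoulli trial for occurrence multiplied by a sampled impact. The 3% probability and the $5K/$15K/$40K repair-cost spread are illustrative assumptions, not figures from the text.

```python
import random

random.seed(7)

# Assumed illustrative numbers: 3% chance of discovering a foundation crack,
# repair cost spread of $5,000 / $15,000 / $40,000 (min / ML / max).
P_OCCUR = 0.03
C_MIN, C_ML, C_MAX = 5_000, 15_000, 40_000

n = 500_000
total = 0.0
for _ in range(n):
    if random.random() < P_OCCUR:                       # does the event occur?
        total += random.triangular(C_MIN, C_MAX, C_ML)  # sampled impact if it does
sim_mean = total / n

# Analytically: expected cost = p * mean impact = 0.03 * (5K + 15K + 40K)/3 = $600
print(f"simulated expected cost contribution = ${sim_mean:,.0f}")
```

The simulated mean converges to probability times mean impact, which is the expected-value property used later in this chapter for quick reserve evaluations.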
Example of Schedule General Uncertainty Imagine that an office worker commutes from her suburban home to downtown every working day of the week. Her method of commuting may include a bus, a streetcar, a subway, a bike, a car, and so on, or any combination of them, depending on the town she lives in and her preferences. Commute times depend on these factors, too. Assume we are talking about Calgary and the average or most likely commute time is 30 minutes. However, depending on the day of the week, the season of the year, and slight weather variations, the commute time actually falls in a range of 25–40 minutes. We are not talking about a major snowstorm or traffic incident impacting the commute times. Small incidents, such as the bus or train the commuter was expecting to take being out of service, may delay the commute by a few minutes, but nothing major. This is an example of general duration uncertainty. When a planner is developing a schedule for a project, each normal activity is represented as a bar with an absolutely certain duration. This duration is treated as the most reasonable or most likely one according to the project scope and the data available for planning. Project specialists and the planner utilize their previous experience and some benchmarking data to come up with the most likely durations.
Of course, one-point duration numbers are utopian. Real numbers might be close to the proposed ones but are almost never the same. The first reason for this is the intrinsic ambiguity of any input data used in planning, including the judgments of specialists. This is the case even if nothing unusual that could impact the schedule is contemplated. Using risk jargon, this is a risk-free situation, meaning that no uncertain events are possible, only general uncertainties. This is the business-as-usual case: project execution follows the schedule with some deviations from it. A business-unusual situation will be discussed in the next section. Recall that a project schedule is a model of project execution. It has a certain distance to future reality, which is defined by several factors. As with general cost uncertainties, the level of engineering development is a major cause of general duration uncertainties. Scheduling information from the subcontractors that are supposed to deliver the corresponding work packages is also important. The qualifications and experience of the schedulers and the information and methods they use for planning give rise to general duration uncertainties, too.
Example of Schedule Uncertain Event Continuing the commuting example, let's imagine that a water-main break occurred in the early hours of the day so that the usual commuting route is blocked, causing a traffic jam. Now, instead of the usual commute time, it took two hours to get to the office that day. Traffic incidents may be other, less exotic, reasons for delays. Let's say such incidents are observed on the way downtown once in one or two months. Based on 20 working days per month, the probability of occurrence may be assessed at 2.5–5% or so. Some of these incidents don't have any impact on commute times at all, some lead to modest delays, and some to major delays. The possible delay depends on the incident's severity, its location, the level of blockage of the route, the availability of a detour, and so on. The most likely or average delay seems to be 20 minutes, although some incidents lead to major commute delays of two hours. Identification of uncertain events with schedule impacts is a major task in investigating the level of confidence of a deterministic project completion date defined by a project schedule. As is true for uncertain events with cost impacts, uncertain events with schedule impacts may be correlated. The cardinal difference between cost and schedule uncertain events is that a schedule uncertain event should be mapped to the normal activities it impacts. The possible delay of a particular impacted normal activity, if the event does occur, should be assessed by the project specialists. Such an assessment should include min, ML, and max impacts. Uncertain events may also be
mapped to schedule milestones, although it is more logical to map events to corresponding impacted normal activities that are predecessors of the milestones.
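Combining the commute example's general uncertainty with its uncertain event shows how a mapped schedule risk changes the confidence of a target arrival time. The 4% daily incident probability and the 0/20/120-minute delay spread are assumed values within the ranges the text suggests.

```python
import random

random.seed(3)

# Commute example: general duration uncertainty of 25/30/40 minutes (min/ML/max),
# plus an uncertain event (traffic incident) with an assumed 4% daily probability
# and an assumed 0/20/120-minute delay spread if it occurs.
def one_day():
    duration = random.triangular(25, 40, 30)       # general uncertainty
    if random.random() < 0.04:                     # uncertain event occurs?
        duration += random.triangular(0, 120, 20)  # mapped schedule impact
    return duration

n = 200_000
times = sorted(one_day() for _ in range(n))
confidence_45 = sum(t <= 45 for t in times) / n
p90 = times[int(0.90 * (n - 1))]
print(f"P(commute <= 45 min) = {confidence_45:.1%}, P90 = {p90:.1f} min")
```

The rare event barely moves the P90, but it puts a long tail on the distribution, which is why uncertain events must be modeled explicitly rather than folded into the duration ranges.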
ROLE OF CORRELATIONS IN COST AND SCHEDULE RISK ANALYSES Let's review a general macroeconomic example of correlations and anti-correlations. If inflation rates start growing, it is expected that interest rates will grow, too. However, interest rates are established and used by central banks to regulate the growth of the economy; inflation is a major but not the only factor defining interest rates for a given period of time. To simplify, it would be fair to state that if inflation grows, interest rates should grow, too, in most cases. The two indicators are somewhat similar in behavior, or correlated to a certain degree. Interest rates defined by a central bank or similar institution in a particular country define the mortgage interest rates the general public pays for borrowing money to buy real estate. The lower the mortgage rates, the higher the number of property sales. This is true in most markets, with some exceptions: sales of properties in Beverly Hills or Manhattan are not that affected by the current level of mortgage rates, at least not in the same way as property sales in the rest of the United States. To simplify, mortgage rates and real estate sales are anti-correlated to a high degree. Statistical methods allow one to discover correlations between two functions (indicators, parameters, time series, etc.) if required. They may be plotted against each other as two-dimensional charts or as time series. We will not review the corresponding methods in this book but introduce correlation coefficients to measure correlations and anti-correlations. Figure 12.4 shows the concept of different correlation coefficients. Their spread is from 100% or +1 (full correlation) to –100% or –1 (full anti-correlation). The situation where correlation between two functions does not exist is described by the
FIGURE 12.4 Correlation Coefficients (scale from –100% through 0% to +100%)
zero correlation coefficient. The behavior of the two functions looks perfectly random relative to each other in this case. The concept of correlated sampling of distributions in the cost and schedule risk analysis of projects was introduced in Figure 3.6, and the effect of correlations on the results of probabilistic analysis is demonstrated in Figure 3.7. Figure 3.7 represents the impact of general uncertainties (ranges around base estimate cost accounts) on the cost distribution of a capital project for two scenarios: one fully uncorrelated (all correlation coefficients equal to zero) and one where all cost accounts of the base estimate are fully correlated (all correlation coefficients equal to 100%). Why should correlations be taken seriously at all? In the macroeconomic example, it is not realistic that higher mortgage rates coincide with higher property sales; such unrealistic scenarios must be suppressed in probabilistic models. In the case of evaluating the costs of a building's foundation and the costs of the building itself, it should be expected that the higher the building's cost, the higher the foundation's cost in most instances. However, a higher foundation cost might also be driven by geotechnical subsurface conditions alone. Hence, these two cost distributions cannot be 100% correlated; a high (70–90%) but not 100% correlation coefficient could be applied. Correlations bring realism to the models by suppressing unrealistic scenarios; as such, they reduce a model's distance to reality. The general rule of thumb is that probabilistic models with a higher level of correlations produce wider output distributions, which was demonstrated in Figure 3.7. The width of a distribution coming from a correlated model grows with the correlation coefficients of the model.
If two distributions are convoluted into a resulting distribution by a Monte Carlo model, the width of the resulting distribution depends on three factors. The first and second factors are the widths of the input distributions σ1 and σ2. The third factor is proportional to the correlation coefficient C between the two distributions. The following formula introduces the width of the resulting distribution:

σ = [σ1² + σ2² + C × σ1 × σ2]^(1/2)

This formula is correct if we understand σ, σ1, and σ2 as standard deviations of the corresponding distributions. Standard deviations are measures of distribution width defined in a certain standard mathematical way.
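The qualitative effect (more correlation between inputs, wider convolved output) can be checked with a short simulation. This sketch builds a correlated pair of standard normal inputs with the usual bivariate construction; only the trend across the C values matters here, since exact widths depend on the variance convention used.

```python
import math
import random

random.seed(11)

def correlated_pair(c):
    """Two standard normal samples with correlation coefficient c."""
    x = random.gauss(0, 1)
    y = c * x + math.sqrt(1 - c * c) * random.gauss(0, 1)
    return x, y

def width_of_sum(c, n=100_000):
    """Standard deviation of the convolved (summed) distribution."""
    sums = [sum(correlated_pair(c)) for _ in range(n)]
    mean = sum(sums) / n
    return math.sqrt(sum((s - mean) ** 2 for s in sums) / n)

widths = {c: width_of_sum(c) for c in (0.0, 0.5, 1.0)}
for c, w in widths.items():
    print(f"C = {c:.1f}: width of summed distribution = {w:.2f}")
```

The uncorrelated case reproduces the familiar square-root-of-two width for two equal inputs, and the width grows monotonically with C, which is the practical point: ignoring correlations understates the output spread and hence the reserve.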
Let's analyze this formula for the simplest situation where two equal distributions are convoluted (σ1 = σ2 = σ0). In this particular case the previous formula takes the following form:

σ = σ0 × (2 + C)^(1/2)

If the two input distributions are fully correlated (C = 1), then σ ≈ 1.73 × σ0. If there is no correlation (C = 0), then σ ≈ 1.41 × σ0. In the case of full anti-correlation (C = –1) the resulting distribution has the width σ = σ0. This simple example demonstrates quite a broad range of possible widths of the outcome distribution depending on the degree of correlation. If the correlation is not properly analyzed and reflected through an adequate coefficient C, the outcome distribution will have nothing to do with reality. A wrongly derived distribution means a wrongly derived project reserve, and a wrongly derived project reserve means wrong decision making. The higher the degree of correlation among the model's inputs, the wider the outcome distribution. Wider distributions mean bigger cost and schedule reserves. For anti-correlations the situation is reciprocal: anti-correlations lead to narrower distributions and lower cost and schedule reserves. In real project probabilistic cost and schedule models that contain dozens of input distributions, both correlations and anti-correlations usually exist, so the outcome of the model is the interplay of both. Incorrect handling of these, where unrealistic scenarios are not suppressed, will lead to unrealistic results and wrong decisions. Hence, the topic of correlations is not just mathematical theory; it has direct financial and planning implications. The recently developed tool Acumen Risk seems to have entirely missed this point. It does not even have the functionality to set up correlations among input distributions!
It looks like this omission has a sales-and-marketing origin, as Acumen Risk is positioned as a tool for users who do not have knowledge of statistics and supposedly do not know what correlations are all about. The credibility of such a correlation-free approach is doubtful when developing project cost and schedule reserves. I would pass on using such a tool until adequate functionality to correlate inputs becomes a standard feature.
PROJECT COST RESERVE It is important to come up with a clear definition of a project cost reserve at this point. We need to be absolutely clear about all the contributions that give
rise to the overall project cost reserve. It would be logical to do this based on the definitions of the uncertainty objects in Table 1.2 related to project costs. The following components of an overall project reserve should be kept to cover residual uncertainties cost‐wise, according to the objects in Table 1.2:
▪ Design allowance: Standard estimating term related to general uncertainties of scope; it is usually not subject to uncertainty management as estimators fear that it could be stripped when discussing the other contributions to the project cost reserve with decision makers.
▪ Cost contingency: Reflection of cost general uncertainties including schedule-driven costs. Design allowances should be kept in mind when developing cost contingency. In some cases costs of all addressing actions are included in the contingency.
▪ Cost escalation reserve: Takes into account a future change of project costs initially developed in money of a base period due to future market imbalances and volatility.
▪ Exchange rate reserve: In purchasing materials, equipment, and services in currencies different from the estimating currency, reflects future changes and volatility of currency exchange rates.
▪ Cost risk reserve: Takes into account all known uncertain events except show-stoppers and game changers. All components of design allowances, cost contingencies, cost escalation, and exchange rate reserves should be excluded from development of the project cost risk reserve.
▪ Performance allowance and liquidated damages: Usually part of project cost risk reserve but may be reported separately as required by corporate financial reporting; could be part of financial reporting of EPC companies. We consider this part of the project cost risk reserve.
▪ Unknown-unknown reserve: Used as a method to assess quality of project risk management system, project novelty, and the phase of project development as an additional cost allowance; usually relates to unknown downside uncertain events but could include unknown downside general uncertainties.
The project team should come up with a clear definition of the project reserve and its components in the project risk management plan. As will be discussed later, if an integrated cost and schedule risk analysis is run, schedule-driven costs should become a contribution to the project contingency. All components of the project reserve except the design allowance and the cost escalation and exchange rate reserves should be developed using probabilistic methods. (As discussed
in Chapter 11, it is technically possible to apply probabilistic methods to cost escalation and exchange rate reserve modeling, but this seems to be overshooting due to the lack of the required information.) It is possible to build an integrated probabilistic model that produces an overall project cost reserve including all the components listed above, including schedule-driven costs. I have developed several probabilistic models like this that produced overall project reserves, including contingencies, cost escalation, exchange rate, cost risk, and unknown-unknown reserves. Even though such models bring a certain value, they appeared appalling to anyone who looked at them. It is more reasonable to build the corresponding models separately. One exception is the development of an unknown-unknown reserve: as will be discussed, it is usually linked with project cost risk reserve development. One important point related to general uncertainties is often missed when developing project cost reserves. As mentioned earlier, risk reserves are normally developed for to-be cases where all identified uncertain events have been addressed through corresponding sets of response actions. The same approach should be taken toward general uncertainties, but usually this is not the case. The reason for this is a deficiency of the traditional risk management approach, in which ranges around cost accounts are not viewed as manageable uncertainties at all in probabilistic models. As a result, the as-is and to-be assessments differ only for uncertain events. In practice, the timelines for as-is and to-be assessments should be clearly defined first. If the as-is assessment relates to the end of Select and the to-be assessment relates to the end of Define or the final investment decision (FID), corresponding reductions of ranges for the to-be-FID assessment should also be made. General uncertainties should get reduced in the course of project development.
In other words, a project team should predict the uncertainty exposure that would exist in a particular point in the future (to‐be). This approach is widely used (or should be) for project reserve drawdown. The drawdown could be done for any part of the project reserve separately on a monthly or quarterly basis using models initially built to baseline the project cost reserves at FID. This topic is discussed in Chapter 14.
PROJECT SCHEDULE RESERVE A slightly different approach is used when developing schedule probabilistic models. Their primary goal is to investigate confidence levels of sanctioned completion dates. Their secondary goal is to estimate additional floats to ensure the required level of confidence. However, before these two goals are achieved it
is necessary to ensure that the developed schedule is attainable despite the presence of general uncertainties. These tasks are usually pursued in three steps. First, the proposed schedule is evaluated in terms of the ranges around normal activities' durations (general uncertainties). Project planners and specialists evaluate those ranges based on the uncertainty information used for development of the durations of the normal activities. A simple probabilistic model utilizing the QuickRisk functionality of PertMaster is built and run to get distributions of possible project durations or completion dates. This is a model with no uncertain events included. Such models are called risk-free-world models, or the "sniff test." Despite their apparent utopianism and long distance to reality, such models are used as quality checks of proposed schedules. If the level of confidence of a sanctioned project completion date is high enough, the schedule may be used for further probabilistic analysis. If the level of confidence is too low even in the risk-free world, the schedule is unrealistic and should be recycled and redone. Second, when a proposed schedule passes the sniff test, the project's uncertain events with schedule impacts should be added to the probabilistic model. The resulting distribution, which now includes both general uncertainties and uncertain events with schedule impacts, should be studied. Third, by the end of the building and testing of the probabilistic schedule model, development of an unknown-unknown allowance for the model is required. This is discussed later in this chapter. After evaluating and adding the unknown unknowns to the model and running it, a distribution of project completion dates or durations is obtained. Needless to say, all relevant correlations should be included at each of these steps.
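The mechanics of the first, risk-free step can be sketched without specialized tools. This is not PertMaster's QuickRisk, just a toy model: the four sequential activities, their duration ranges, and the sanctioned date are all hypothetical.

```python
import random

random.seed(5)

# Hypothetical risk-free ("sniff test") model: four sequential activities,
# each with (min, ML, max) duration ranges in working days.
activities = [
    ("engineering",   (55, 60, 75)),
    ("procurement",   (40, 45, 60)),
    ("construction",  (85, 90, 120)),
    ("commissioning", (18, 20, 30)),
]
SANCTIONED = 240  # assumed sanctioned completion, in days from start

n = 100_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for _, (lo, ml, hi) in activities)
    for _ in range(n)
)
confidence = sum(t <= SANCTIONED for t in totals) / n
p80 = totals[int(0.80 * (n - 1))]
print(f"P(finish <= {SANCTIONED} d) = {confidence:.0%}; P80 = {p80:.0f} d")
```

Note that the deterministic sum of ML durations is 215 days, yet the P80 completion sits well above it: even in a risk-free world, right-skewed duration ranges push the probabilistic completion date past the deterministic one, which is exactly what the sniff test is meant to expose.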
It is important to keep in mind that the functionality of PertMaster allows one to obtain distributions for completion dates for any project’s milestone or normal activity. This may be used to investigate the level of confidence for milestones related to readiness for construction, turnaround, and logistics windows, or major decisions such as the final investment decision. A usual application of this functionality is to investigate the confidence levels of the mechanical completion dates or fulfillment of various commercial obligations and milestones. To sum up, the following components of the schedule risk reserve should be kept in mind according to Table 1.2:
▪ General schedule uncertainties: Investigation of the attainability of the proposed project schedule in a risk-free world.
▪ Project float: Contribution of general uncertainties and uncertain events (except game changers and show-stoppers) to the definition of the desired confidence level of the project completion milestone.
▪ Total project float: Addition of an unknown-unknown allowance to the project float as a measure of the efficiency and quality of the project risk management system and project novelty. Total project float is used in decision making related to the project completion date.
The same reasoning as for general cost uncertainties applies to as-is and to-be assessments of general duration uncertainties. In the course of project development, the general uncertainties usually get reduced due to better levels of engineering, procurement, and other types of development. As is true for general cost uncertainties, this point is often missed when making to-be assessments.
ANATOMY OF INPUT DISTRIBUTIONS We discuss the key parameters of input distributions in this section. The discussion is relevant to both cost and schedule distributions. As discussed in Chapter 3, the mean value of a distribution is the best one-point representative of the whole curve. However, the true uniqueness of mean values is based on their behavior when input distributions are convoluted to produce probabilistic results. In business applications, the mathematical term mean value is replaced by expected value. Assume that we have several distributions used as inputs to probabilistic models describing the general uncertainties of a project (Figure 3.6). If those distributions have mean/expected values M1, M2, . . ., Mn, the curve obtained as a result of their convolution will have mean/expected value M1 + M2 + . . . + Mn. For uncertain events, the mean value of the convoluted curve will be p1 × M1 + p2 × M2 + . . . + pm × Mm, where p1, p2, . . ., pm are the probabilities and M1, M2, . . ., Mm the mean impact values of uncertain events 1, 2, . . ., m. This fundamental property of mean/expected values is the justification for all the probabilistic qualitative methods based on the evaluation of expected values (Chapter 3). If the spreads of the outcome curves are ignored and the criteria used for development of project reserves are always based on mean/expected values, probabilistic modeling is not required at all. Potentially it is possible to use this for quick project reserve evaluations through evaluation of the mean/expected values of inputs. This is exactly the approach of the project/
program evaluation and review technique (PERT)/critical path analysis (CPA) method, which is based on the approximation of the expected values of the general duration uncertainties. The problem with this is that three‐point triangular distributions usually used as inputs to probabilistic models are based on (min, ML, max), not (min, mean, max) values, where ML is the most likely value corresponding to the peak of the input distribution. The ML value is also called the mode of a distribution. In general uncertainties (ranges around baseline one‐point values) the most likely values represent the baselines, not mean values. Figure 12.5 illustrates the difference between ML and mean values, which could be substantial for skewed distributions. Besides ML and mean values, Figure 12.5 introduces the median value, which splits a distribution into two halves of equal probability. A special term is used to mark median values: P50, where P means probability. In other words, there is 50% probability of finding a particular value below the median as well as 50% probability of finding it above the median value. Some risk practitioners try to discuss the differences among ML, mean, and median values as related to inputs to probabilistic models. Such attempts turn off project team members pretty quickly even if some of them have engineering or mathematics backgrounds. So, mathematically, using mean values would be the right thing to do. However, it is not practically possible. It requires assessment of the input distribution using the fitting functionality of probabilistic software packages, which is quite beyond the discussed qualitative probabilistic methods. As ML values are much better understood and handled by project teams, it is reasonable to stay with simple (min, ML, max) inputs.

FIGURE 12.5 Triangular Input Distribution (showing the mode (ML), mean, and median (P50), together with the P0 (min), P10, P90, and P100 (max) points)
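The relationships among the mode (ML), median, and mean of a triangular distribution can be written in closed form. The following Python sketch is purely illustrative (the ranges around a hypothetical $100M baseline are invented); it shows how a right‐skewed range pushes the mean above the ML baseline and a left‐skewed range pushes it below:

```python
# Mean, mode (ML), and median of a triangular distribution in closed form.
# The ranges below are hypothetical, for illustration only.

def tri_mean(lo, ml, hi):
    # The mean of a triangular distribution is the average of its three points.
    return (lo + ml + hi) / 3.0

def tri_median(lo, ml, hi):
    # Median (P50): the point splitting the area under the triangle in half.
    if ml >= (lo + hi) / 2.0:
        return lo + ((hi - lo) * (ml - lo) / 2.0) ** 0.5
    return hi - ((hi - lo) * (hi - ml) / 2.0) ** 0.5

# Right-skewed range around a $100M baseline: -5%/+10%
lo, ml, hi = 95.0, 100.0, 110.0
print(round(tri_mean(lo, ml, hi), 2))    # 101.67 -- mean sits above the ML baseline
print(round(tri_median(lo, ml, hi), 2))  # 101.34 -- mode < median < mean

# Left-skewed range (-10%/+5%) to compensate for a conservative baseline
print(round(tri_mean(90.0, 100.0, 105.0), 2))  # 98.33 -- mean moves below baseline
```

Because mean values are additive under convolution, these closed‐form means can be summed for a quick reserve evaluation without running a simulation, which is exactly the logic behind the PERT/CPA approximation discussed above.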
◾ Applications of Monte Carlo Methods in Project Risk Management
For symmetrical distributions (bell curve, symmetrical triangular distribution, etc.) all three parameters (mean, median, and mode) are identical.3 However, symmetrical distributions rarely exist as inputs to probabilistic models. Usually inputs are skewed to the right toward higher numbers (Figure 12.5). The general explanation for this is based on the presumption that it would be much more difficult to discover possible upside deviations from baselines (values lower than ML) than to run into downside deviations. In addition, zero values of impacts are natural limits for upside deviations, whereas downside deviations don't have such limits. Therefore, downside deviations could be much higher than upside ones. Usually this is true, but not always. One of the situations where upside deviations should be higher than downside ones is where a project baseline (estimate or schedule) is way too conservative. Unusually high design allowances could be included and conservative quantities and prices might be factored into the base estimate for some reasons including bias. Durations for some activities could be quite conservative and seem to include some uncertain events implicitly. Ideally, the quality and integrity of the base estimate should be ensured through cold‐eye reviews and benchmarking. However, the assessment of ranges may take this into account by application of distributions skewed to the left. So, in place of the usual ranges around the base estimate accounts or durations, say, –5%/+10% or –15%/+25%, ranges such as –10%/+5% or –25%/+15% could be used to compensate for conservatism or overshooting of the inflated baselines. Mathematically, this means moving the input distribution's mean/expected value to the left, toward the lower numbers. If impacts around ML values are very likely by default, values close to min values (P0) or max values (P100) should be rather exotic. At the same time they are defined with amazing precision.
This looks like another overshooting: the range of an uncertainty is defined with absolute precision as P0 and P100 one‐point numbers. To avoid this sort of discomfort, practitioners use the trigen distribution. In place of P0 and P100 values determined with amazing precision, P5 or P10 and P90 or P95 values are used. Figure 12.5 shows P10 and P90 values. In practice, P10 and P90 cutoff values are used much more often than P5 and P95. However, we discuss why the P5 and P95 values are much more preferable when analyzing probabilistic model outputs in Chapter 14. The P5/P10 value points to the situation where any value of the distribution could be found below that particular value with probability 5% or 10%, and above it with probability 95% or 90%, correspondingly.
Similarly, the P90/P95 values define the rule that any value could be found above that particular value with probability 10% and 5%, correspondingly. Technically this means that trigen distributions stay uncertain about what the P0 and P100 values should be. Such an understatement is a better fit for defining uncertainties. Using trigen input distributions is always preferable to using triangular ones if project team members are comfortable with estimating inputs as (P5, ML, P95) or (P10, ML, P90). One of the tricks risk managers use is to discuss inputs with project teams as (min, ML, max), as for triangular distributions, but treat them as inputs to trigen distributions. Justification of this is based on recognition of subconscious bias. Namely, it is fairly normal that project specialists come up with too‐narrow ranges of impacts of both general uncertainties and uncertain events. Applying a systematic correction like this addresses the systematic error related to overconfidence. The project risk manager should make the call whether such a correction is appropriate based on assessment of the overconfidence bias. Namely, if P10 and P90 values were used in place of collected min and max data, the overconfidence bias would be addressed in a more radical way. In other words, if the overconfidence bias is not that severe, the P5 and P95 cutoff values of trigen distributions should be preferable. Intuitively, one might dislike the shapes of both the triangular and the trigen distribution as they do not look smooth enough. Indeed, it is quite unusual to observe phenomena in physics that have such unsmooth and crude dependencies and curves. Figure 12.2 introduces an alternative to the triangular and trigen distributions in the form of a beta‐distribution. It looks more natural and smooth.
However, using a beta‐distribution requires definition of parameters that are not obvious or straightforward for project teams to interpret; lower efficiency is the normal price for higher aesthetics. A PERT distribution would be a good alternative to the triangular distribution if the project team is comfortable using it. However, replacement of all triangular distributions with PERT ones with the same parameters (min, ML, max) does not change the outcome curve significantly. It is counterproductive to spend too much time discussing exact shapes of input distributions. In the majority of situations it does not matter at all. Slight differences in shapes of output curves could be ignored as long as about the same mean values of comparable input distributions are used. Much more important are correlations among distributions, which make a real difference. I prefer to stay with trigen
distributions unless some historic or modeling data are handy and dictate other choices.
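The "trick" of collecting (min, ML, max) values but treating them as (P10, ML, P90) can be sketched numerically. The fixed‐point iteration below widens a triangular distribution until its P10 and P90 coincide with the collected values; it is an illustrative stand‐in for the trigen functions built into packages such as @RISK, and all numbers are hypothetical.

```python
# Hedged sketch: interpret elicited (min, ML, max) as (P10, ML, P90) of a
# trigen-style input -- the overconfidence correction described above.
import math, random

def tri_cdf(x, a, c, b):
    """CDF of a triangular distribution with min a, mode c, max b."""
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))

def widen_to_trigen(p10, ml, p90, lo=0.10, hi=0.90, iters=200):
    """Find (min, max) of a triangular distribution whose P10/P90 equal the
    collected p10/p90 values (simple fixed-point iteration)."""
    a, b = p10, p90
    for _ in range(iters):
        a = p10 - math.sqrt(lo * (b - a) * (ml - a))
        b = p90 + math.sqrt((1.0 - hi) * (b - a) * (b - ml))
    return a, b

# Collected (min, ML, max) = (90, 100, 120), treated as (P10, ML, P90):
a, b = widen_to_trigen(90.0, 100.0, 120.0)
print(round(a, 1), round(b, 1))          # the implied true range is wider
sample = random.triangular(a, b, 100.0)  # draw from the widened input
```

The design point is that the widened (min, max) pair stays deliberately unstated to the project team; they discuss the familiar three numbers while the model quietly compensates for the overconfidence bias.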
PROBABILISTIC BRANCHING Earlier in the chapter it was mentioned that it would be important to investigate the readiness of a project for some construction, turnaround, or logistics windows. For instance, deliveries of equipment to be installed at a construction site somewhere in Alaska, Chukotka, or Labrador from overseas would not be possible during several winter months due to freezing of seaports. Such features of the project schedule should be realistically reflected in the schedule and the schedule risk analysis. Figure 12.6 introduces an example that contains equipment delivery downtime in the schedule for five months from mid‐November until mid‐April of the next year. If the delivery is not made by November 15, the next unloading of the equipment in the port could be done from mid‐April on, which delays the equipment installation by five months. The date of November 15 serves as a trigger to initiate the delay in the probabilistic model. Figure 12.7 shows the equipment installation completion date distributions. There are two peaks separated by several months of winter downtime.
FIGURE 12.6 Example of Probabilistic Branching (equipment delivery completion distribution; November 15 trigger; winter downtime until April 15; installation delayed if the trigger works out)
FIGURE 12.7 Probabilistic Branching
If the confidence level to meet the mid‐November deadline is quite high, the likelihood that equipment installation will be done on time is high, too. The tall peak on the left side of the chart points this out. However, there is still the possibility that the deadline will be missed, which gives rise to a short peak on the right side of the chart. If the confidence level to be on time is around P50, either outcome is about equally possible. That is why the two peaks have comparable heights. If the confidence level to meet the deadline is relatively low, the peak on the right side is much taller. In two extreme cases, which are not shown, there will be only one peak on either side of the chart. Obviously, if the confidence level to meet the deadline is about P100, there will be only one tall peak on the left. If the confidence level to be on time is extremely low (about P0), there will be one tall peak on the right side. The origin and effect of probabilistic branching were introduced in this section to demonstrate that some very advanced modeling techniques might be required to adequately represent a project schedule and run its probabilistic analysis. Integration of probabilistic branching into the schedule risk model requires advanced knowledge of macros in the PertMaster environment and can be adequately done only by a qualified risk analyst. Project team members and decision makers should be aware of this possibility and the corresponding modeling tool in the risk toolbox.
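A minimal sketch of this branching logic follows (day numbers, the delivery range, and the installation duration are hypothetical; a production model would implement the trigger with PertMaster macros):

```python
# Hedged sketch of the Figure 12.6 branching logic: if equipment delivery
# finishes after the November 15 trigger, installation slips past the
# winter downtime to mid-April. Dates are day numbers; all values hypothetical.
import random

random.seed(1)
TRIGGER = 319          # November 15 as a day-of-year number
WINDOW_REOPENS = 470   # mid-April next year, after ~5 months of downtime
INSTALL_DAYS = 60      # deterministic installation duration

N = 10_000
made_window = 0
completions = []
for _ in range(N):
    delivery = random.triangular(290, 360, 310)   # (min, max, mode)
    if delivery <= TRIGGER:
        made_window += 1
        start = delivery
    else:
        start = WINDOW_REOPENS    # the trigger works out: five-month delay
    completions.append(start + INSTALL_DAYS)

print(f"confidence of meeting the window: {made_window / N:.0%}")  # ~52%
# The histogram of completions is bimodal: one peak for on-time delivery
# and a second, post-downtime peak when the trigger works out.
```

With the delivery range chosen here the confidence level is near P50, so the two peaks of the completion distribution have comparable heights, exactly as described above.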
MERGE BIAS AS AN ADDITIONAL REASON WHY PROJECTS ARE OFTEN LATE Merge bias has nothing to do with the various types of psychological and organizational bias discussed so far. Merge bias relates to the extra schedule delays that arise when several paths converge in a node. Traditionally the baseline schedule of a project defines the project completion date according to the critical path method (CPM). A path that does not have
any float constitutes the critical path and determines the project's deterministic completion date. General uncertainties and uncertain events associated with the project lead to delays beyond the completion date identified by the CPM method and shape more realistic distributions of possible project completion dates. However, such delays could be further exacerbated where the baseline schedule has parallel paths. Extra delays are generated in the path convergence nodes. This type of delay is called merge bias in project management and stochastic variance in mathematics. As project schedules normally have parallel paths, managing node convergence is vitally important to every project. On one hand, it is possible to run standard probabilistic schedule risk analysis without paying special attention to converging nodes. General uncertainties and uncertain events will work out in nodes automatically. The merge biases belonging to the nodes will be factored into the final project completion date. On the other hand, there is a way to proactively manage the merge biases in order to reduce the associated stochastic variance delays. First, a proactive approach includes amendment of the baseline schedule to consider earlier start dates for some activities belonging to the converging paths. Second, introduction of uncertainty addressing actions that reduce the impact of general uncertainties and uncertain events associated with the paths would reduce the extra delays, too. Third, if quite a few paths converge in a node, it would be reasonable to split the node to reduce the number of converging paths. Fourth, additional float should be reserved right after the node according to the results of the probabilistic analysis. The magnitude of the float should be sufficient to provide the node's date with the required level of confidence. A final simulation should be run to ensure that the effect of merge bias on the schedule is reduced to a comfortable level.
The origin of the merge bias effect may be illustrated by a simple example. If three people are to attend a meeting, the probability that they all show up on time is reasonably high. If five people are expected, the probability that all of them show up on time becomes lower. What if seven or ten people are expected? The probability of all of them turning up on time would be even lower, and so forth. Let's assume that there are no possible "correlations" among people, such as having attended a previous meeting or traveling from another location together. We might evaluate the probability of each participant being on time as 90% (or 0.9). The probability assessments for all participants being on time may be done as shown in Table 12.1. These simple calculations for independent simultaneous events are made according to the AND logic introduced in Figure 4.1. The fundamental characteristic of the elements of the discussed system, the probability of being late for every particular participant, was not changed, although the number of elements/participants was. This gives
TABLE 12.1 Probabilistic Origin of Merge Bias

Number of Participants    Probability of All Being on Time
 2                        0.81
 3                        0.73
 4                        0.66
 5                        0.59
 6                        0.53
 7                        0.48
 8                        0.43
 9                        0.38
10                        0.35
rise to a general rule on the merge bias effect: the more paths converge in a node, the lower the confidence level that the convergence will happen on time. Figure 12.8 provides an additional illustration of the origin of merge bias for two converging paths. Activities A and B in the schedule are representatives of the two paths. They have their own uncertainties leading to distributions of completion dates of those activities. The distribution of node N is a convolution of the distributions of activities A and B.
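Both the AND logic of Table 12.1 and the merge bias at a two‐path node can be reproduced in a few lines. The sketch below is illustrative only; the activity ranges are hypothetical.

```python
# AND logic behind Table 12.1 plus a hedged Monte Carlo sketch of merge
# bias: a node completes only when ALL converging paths are done, so its
# mean completion date exceeds the mean of any single path.
import random

# Table 12.1: probability that n independent participants (or paths) are
# all on time, each on time with probability 0.9
for n in (2, 5, 10):
    print(n, round(0.9 ** n, 2))   # prints 0.81, 0.59, 0.35

random.seed(7)
N = 20_000
node_dates = []
for _ in range(N):
    a = random.triangular(90, 130, 100)   # activity A completion (min, max, mode)
    b = random.triangular(90, 130, 100)   # activity B completion
    node_dates.append(max(a, b))          # the node waits for both paths

mean_path = (90 + 100 + 130) / 3          # mean of each input, ~106.7
mean_node = sum(node_dates) / N
print(round(mean_node - mean_path, 1))    # positive extra delay: the merge bias
```

Even though the two paths are identical and independent, the node's mean completion date comes out several days later than either path's mean, which is the stochastic variance delay described above.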
FIGURE 12.8 Merge Bias: Converging Two Paths in a Node

INTEGRATED COST AND SCHEDULE RISK ANALYSIS Needless to say, the downtime of construction crews deployed at a site due to schedule delays is not free. Every day costs a certain amount of money even if
the crews stay idle (burn rates per day). Similarly, rented construction equipment not used due to delays or an idle engineering design team also costs money. However, some costs are not time dependent at all (fixed costs). The adequate representation of both fixed and variable costs in probabilistic models is a must to evaluate extra schedule‐driven costs adequately. Unfortunately, schedule‐driven costs related to project delays are one of the most important contributors to project cost reserves, yet they are either regularly overlooked or taken into account inadequately. It is standard practice that cost and schedule risk analyses are run independently. Two separate models are built using different software packages. Sometimes project risk registers that are used as inputs to cost risk models contain standalone items related to additional costs generated by schedule delays. Some probability and cost impact assessments are elaborated that don't look terribly credible or even adequate. But such an approach is still widely used, which is only slightly better than if schedule‐driven extra costs are just ignored. In the meantime there is a robust method to consistently take schedule‐driven cost impacts into account in probabilistic cost models. Obviously, it is necessary to bridge the gap between the two types of probabilistic models. Integrating them allows one to adequately evaluate those schedule‐driven costs. First, it is not quite right to regard schedule‐driven costs as a type of uncertain event. They have two origins: general duration uncertainties and schedule uncertain events. Those two factors give rise to duration or completion date distributions for the project as a whole as well as for any normal activity. Spreads of those distributions predetermine the possibilities of delays and upside deviations. In other words, the possibility of delays or accelerations has a 100% probability for a given distribution.
Hence schedule‐driven costs are quite certain; they are typical general uncertainties in cost models. Second, to link cost and schedule probabilistic models it is necessary to clearly define cost per day, week, or month (burn rates) for all impacted normal activities. The problem here is that project schedules of level 3 or 4 may easily have hundreds or thousands of normal activities. Defining burn rates for all of them would be a problem even if the same work breakdown structure (WBS) was used in both the project schedule and the base estimate. (Unfortunately, they usually drift away from each other over the course of project development.) As we discuss in the next chapter, probabilistic schedule models are normally built based on high‐level project schedules that are called proxies and correspond to level 1.5. Those level 1.5 proxies that should adequately reflect the
main logic of project schedules still contain at least 40 to 70 normal activities. Development of burn rates for all those normal activities would still be onerous and impractical. Third, as synchronization of cost and schedule probabilistic models through development of burn rates is a must, practically this could be done only for a low number of activities. As a trade‐off, synchronization of the models is done for major project deliverables. Those correspond to level‐1 summary activities. Their number should not exceed a dozen. Those should have exactly the same definitions according to the base estimate and schedule WBS. Figure 12.9 demonstrates why this is important.

FIGURE 12.9 Origin of Schedule‐Driven Costs (Construction summary activity: fixed cost $30M, burn rate $0.05M/day, deterministic duration 180 days, deterministic total cost 30 + 0.05 × 180 = $39M; replacing the deterministic duration with the probabilistic duration distribution yields the total schedule‐driven cost distribution 30 + 0.05 × probabilistic duration)

In place of the deterministic duration of the construction activity (180 days) in the base estimate there will be input from the probabilistic schedule analysis in the form of a duration distribution for this construction activity. The deterministic duration of this activity is pointed out by an arrow in the distribution charts of Figure 12.9. The spread around the deterministic duration defines the range of possible delays/accelerations and corresponding schedule‐driven costs/savings. A key rule to exclude double counting in integrated cost and schedule risk analysis is that the initial cost risk analysis should be carried out as if all schedule‐driven costs are ignored. This corresponds to the project schedule completion by the deterministic completion date. All schedule delays should be taken into account only through inputs of schedule risk analysis results to the cost risk model. One element of developing schedule‐driven costs is proper allocation of fixed and variable costs. If this is not done adequately, the results of the
calculations of schedule‐driven costs will be misleading. On one hand, if the variable part is inflated, the overall project cost will be excessively sensitive to delays and the model will produce huge schedule‐driven additions. On the other hand, if the schedule portion of variable costs is underestimated, the sensitivity to delays will be played down. Such insensitivity will also lead to wrong results. A detailed and consistent consideration of burn rates as well as truly fixed costs for summary activities is required. According to my experience, schedule‐driven costs are one of the top uncertainty factors leading to project overspending. Strangely enough, this is also one of the top factors that is systematically overlooked by project teams and decision makers. One of the fundamental reasons for this is lack of integration of several aspects of risk management, including integration of the cost and schedule modeling. Only integrated cost and schedule analysis can guarantee adequate representation of schedule‐driven costs. In practice, even if those are not fully ignored they become semi‐professional talking points that are not correctly represented in project reserves. The good news is that the methodology to take schedule‐driven costs into account does not have the same level of complexity as probabilistic branching. At the same time it requires knowledge of uploading distribution data from PertMaster to @Risk or Crystal Ball and using function fitting. It has yet to become an industry standard, although it should be kept in mind as the most adequate way of taking schedule‐driven costs into account. This task is significantly simplified if both models are built in PertMaster. This is rarely the case, as estimators prefer to stay with MS Excel–based applications such as @Risk and Crystal Ball.
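The core of the Figure 12.9 logic fits in a few lines: sample a duration from the schedule model and price it as fixed cost plus burn rate times duration. The fixed cost, burn rate, and deterministic duration below follow the Construction example in Figure 12.9; the duration range is hypothetical.

```python
# Hedged sketch of schedule-driven costs (Figure 12.9): total cost of a
# summary activity = fixed cost + burn rate x duration, with the
# deterministic duration replaced by a duration distribution.
import random

FIXED = 30.0        # $M, time-independent costs
BURN_RATE = 0.05    # $M per day while the activity runs
DET_DURATION = 180  # days, deterministic baseline

print(FIXED + BURN_RATE * DET_DURATION)   # deterministic total: 39.0 $M

random.seed(3)
costs = []
for _ in range(10_000):
    # Duration distribution from the schedule risk model (hypothetical range)
    duration = random.triangular(170, 240, 180)   # (min, max, mode)
    costs.append(FIXED + BURN_RATE * duration)

mean_cost = sum(costs) / len(costs)
print(round(mean_cost, 2))   # mean exceeds 39 $M: the schedule-driven addition
```

Because the duration distribution is skewed toward delay, the expected total cost comes out above the deterministic $39M; that gap is exactly the schedule‐driven contribution to the cost reserve that separate, unlinked models tend to miss.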
INCLUDING UNKNOWN‐UNKNOWN ALLOWANCE IN PROBABILISTIC MODELS An initial examination of unknown unknowns was made in Chapter 4. Four dimensions of unknown unknowns were introduced and discussed.4 Novelty of the project (technological and geographical factors), phase of its development, the industry a project belongs to, and various types of bias define the room for unidentified uncertainties. However, it would not be a proactive, practical approach to succumb to unknown uncertainties just because of their nature. It is possible that two different project teams will manage these four dimensions with different levels of success. The main reason for this would be a difference in the quality of the two project risk management systems. The difference
in quality will certainly define the amount of room for unknown uncertainties in these two projects. Unknown uncertainties cannot be fully eradicated even if a project enjoys an excellent risk management system. Residual room for unidentified uncertainties will always exist. The question is how big that room might be. It is not unusual that some uncertainties occur during project development that were never part of the project risk registers. For instance, unidentified multi‐hundred‐million‐dollar unknown uncertainties pop up every time a multi‐billion‐dollar offshore oil and gas project is undertaken. And every time they are different. This is especially true for “new frontier” Arctic drilling. Due to the high level of uncertainty associated with unknown unknowns, development of corresponding allowances is not about a high level of precision but about the corresponding thinking process. It is certainly better to do the right thing not exactly right than to ignore the topic altogether. Moreover, high precision in evaluation of unknown‐unknown allowances is neither possible nor credible due to the nature of the subject. More important is managing unknown unknowns (at least partially) through addressing various biases and properly addressing identified project uncertainties. Some scarce historical data (all oil and gas [O&G] industry related) were used to very roughly calibrate the unknown‐unknown allowances that are introduced in Tables 12.2 and 12.3.5 These tables explicitly take into account only two dimensions of unknown uncertainties—novelty and phase of development. However, in each particular case this calibration should be challenged and revised by a project team and decision makers. There are at least two reasons for this. First, type of industry and its maturity should be taken into account. For instance, a space exploration project should have far more unknowns than a railway transportation project. The numbers in Tables 12.2 and 12.3 are based
TABLE 12.2 Unknown‐Unknown Allowances When One Novelty Factor Is Relevant

                                                   Phase of Project Development
Novelty of a Project                               Identify   Select   Define   Execute
High degree of novelty                               12%        9%       6%       3%
Medium degree of novelty                              8%        6%       4%       2%
Standard project (technology readiness level
[TRL] score 10/known geography)                       4%        3%       2%       1%

Source: © IGI Global. Reprinted by permission of the publisher.
TABLE 12.3 Unknown‐Unknown Allowances When Either One or Two Novelty Factors Are Relevant

                                                   Phase of Project Development
Novelty of a Project                               Identify   Select   Define   Execute
High degree of novelty: two factors                  18%       14%       9%       5%
High degree of novelty: one factor                   12%        9%       6%       3%
Medium degree of novelty: two factors                12%        9%       6%       3%
Medium degree of novelty: one factor                  8%        6%       4%       2%
Standard project (TRL score 10/known geography)       4%        3%       2%       1%

Source: © IGI Global. Reprinted by permission of the publisher.
on gut feelings related to the O&G industry. However, O&G has several sectors that are expected to have different allowances for unknown uncertainties (e.g., offshore vs. onshore projects, etc.). Second, quantification of various types of bias as systematic errors in the identification and assessment of uncertainties should be tailor‐made in each particular project as a measure of the health and quality of the project risk management system. Ideally, a third party should make recommendations about the required unknown‐uncertainty allowance. It is recommended to add the selected unknown‐unknown cost allowance as a line in the general uncertainties ("ranges") model. A very broad range around this allowance is recommended, say a +/–100% triangular distribution. The minimum number, –100%, corresponds to the situation where unknown unknowns don't occur. (As an alternative, this allowance may be put into the project risk register with high probability. This adds another angle for consideration, probability, which seems to be an unnecessary complication or overshooting given the nature of the discussed topic.) One might speculate about reasons to either correlate or anti‐correlate this distribution with some or all ranges and/or risks. To keep things simple and exclude overshooting, keeping the unknown‐unknown allowance distribution non‐correlated should suffice. In schedule risk analysis the unknown‐unknown allowance should be introduced as an additional normal activity at the very end of the project schedule. This allowance should correspond to a percentage of the project duration. Again, a very broad range around that additional activity duration
(say, +/–100%) may be introduced. As an alternative, this allowance may be treated as an additional risk mapped to the project completion milestone. It may be reasonable to assign some probability to that risk in the project risk register and schedule risk model. A very broad probability range (say, 0–100%) may be used. This approach reflects the difference between cost and schedule risk models, as the latter is schedule‐logic specific. A probability of much less than 100% (say, 50%) reflects the possibility that some unknown unknowns might occur off the critical path, having minimal or no impact on the project completion date. Either alternative may suffice due to the very high level of uncertainty associated with unknown unknowns. The risk register alternative seems to be a bit more justifiable for schedule models. An additional challenge is to develop unknown‐unknown allowances for R&D projects in various industries that may be used in probabilistic cost and schedule models. This topic is outside the scope of this book, although we started with the example of a grandiose project failure related to the attempt to use unproven in situ technology in a capital mega‐project. Any company involved in capital projects will want the procedure that best reflects its risk culture and appetite as well as its line of business. The guidelines introduced earlier provide enough ammunition for this. Such a procedure should be applied consistently across the project portfolio when developing project cost and schedule reserves. Historic data on completed projects should be collected along the way. Along with managing various biases and the ongoing improvement of the corporate and project risk management system, this might be the best way to treat unknown unknowns as accurately as possible.
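A sketch of the recommended cost‐model treatment follows: the allowance enters as an extra "ranges" line with a very broad spread whose minimum corresponds to unknown unknowns not occurring. The 6% allowance matches a Define‐phase project with one high‐novelty factor (Table 12.2); the base cost and the other uncertainty range are hypothetical.

```python
# Hedged sketch: unknown-unknown cost allowance as an additional "ranges"
# line with a +/-100% triangular spread, as recommended above.
import random

BASE_COST = 500.0                 # $M, hypothetical base estimate
allowance_ml = 0.06 * BASE_COST   # 6%: Define phase, one high-novelty factor

random.seed(11)
totals = []
for _ in range(10_000):
    # Ordinary general uncertainty around the base estimate (hypothetical)
    other_uncertainty = random.triangular(-0.05, 0.10, 0.0) * BASE_COST
    # +/-100% spread: the minimum of 0 means unknown unknowns don't occur
    unk_unk = random.triangular(0.0, 2.0 * allowance_ml, allowance_ml)
    totals.append(BASE_COST + other_uncertainty + unk_unk)

print(round(sum(totals) / len(totals), 1))   # mean total cost, $M
```

The allowance line is kept non‐correlated with the other ranges, which matches the keep‐it‐simple recommendation above; correlating it would require speculation the subject cannot support.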
CONCLUSION This chapter introduces key objects that serve as inputs to probabilistic analyses and shape their outcomes. Several variants of probabilistic analysis are reviewed. Using some of them, such as probabilistic branching, could be overshooting. However, using cost and schedule risk analysis in isolation leads to inadequate representation of schedule‐driven costs in project cost reserves. This leads us to the conclusion that probabilistic integrated cost and schedule analysis should become the standard for capital projects for adequate assessment of project cost and schedule uncertainties and the development of project reserves.
NOTES

1. The correct mathematical term for curves discussed here is probability density function, not "probability distribution."
2. AACE International Recommended Practice No. 17R‐97: Cost Estimate Classification System (Morganton, WV: AACE International, 2003).
3. The difference in these three parameters is a measure of distribution asymmetry or skewness.
4. Y. Raydugin, "Unknown Unknowns in Probabilistic Cost and Schedule Risk Models" (Palisade White Paper, 2011; www.palisade.com/articles/whitepapers.asp).
5. Y. Raydugin, "Quantifying Unknown Unknowns in Oil and Gas Capital Project," International Journal of Risk and Contingency Management 1(2), 2012, 29–42.
CHAPTER THIRTEEN
Preparations for Probabilistic Analysis
Questions Addressed in Chapter 13
▪ What are the main goals of probabilistic analyses?
▪ What is the method statement in probabilistic analysis?
▪ What is the high‐level workflow in probabilistic analysis?
▪ What factors influence the duration of probabilistic analysis?
▪ What is the typical specification of inputs to probabilistic models?
▪ Why must correlations never be forgotten?
▪ "Probabilistic analysis? What do you mean by that?"
This chapter is devoted to the steps in getting prepared for probabilistic risk analysis. The typical workflows of cost and schedule risk analysis will be outlined. Specifications of required input data will be introduced.
TYPICAL WORKFLOWS OF PROBABILISTIC COST AND SCHEDULE ANALYSES Four major goals of probabilistic cost and schedule analyses are:

1. Investigation of confidence levels of planned or sanctioned project budgets and completion dates
2. Development of cost and schedule reserves to ensure the required or comfortable confidence level of total project costs and completion dates
3. Identification of the most sensitive general uncertainties and uncertain events for their further addressing, allocation of cost and schedule reserves against them, or optimization of baselines
4. Running and investigation of additional what‐if scenarios to evaluate impacts of particular groups of uncertainties

The fourth point has not yet been discussed and requires some clarification. It concerns running probabilistic models with and without particular groups of uncertainties. For instance, it would be interesting to understand the role of uncertainties stemming from the governmental permitting process in a CO2 sequestration project. Is this group of uncertainties a major driver of possible project delay, or are uncertainties associated with internal project development and execution the main driver? All uncertainties related to governmental delays may be excluded from the corresponding what‐if scenario and such a "sequestrated" model could be run. The comparison of the results would be extremely informative. The number and scope of what‐if scenarios depend on the level of curiosity of the decision makers. They should come up with a list of what‐if scenarios for investigation. Figures 13.1 and 13.2 outline typical workflows to develop and run schedule and cost risk analyses. These workflows come from Figure 12.2 and provide a higher level of detail and logic. They are almost identical at first glance. Their major difference relates to the modeling of schedule‐driven costs. Figures 13.1 and 13.2 accentuate the need for close integration of deterministic and probabilistic methods.
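Goal 4 can be illustrated with a toy model: run the simulation twice, with and without one group of uncertainties, and compare the outputs. The permitting probability and all duration ranges below are hypothetical.

```python
# Hedged sketch of a what-if scenario run: compare project duration with
# and without the governmental-permitting group of uncertainties.
import random

random.seed(5)

def completion(include_permitting):
    # Internal execution uncertainty (months, hypothetical range)
    months = random.triangular(24, 36, 27)
    if include_permitting and random.random() < 0.40:
        months += random.triangular(2, 10, 4)   # permitting delay event
    return months

N = 20_000
full = [completion(True) for _ in range(N)]
sequestrated = [completion(False) for _ in range(N)]   # group excluded

print(round(sum(full) / N - sum(sequestrated) / N, 1))
# the difference in means isolates the contribution of the permitting group
```

Comparing the two output distributions (not just their means) shows how much of the delay risk decision makers could remove by addressing the excluded group, which is exactly the kind of informative comparison described above.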
General uncertainties and uncertain events initially identified and assessed are major inputs to probabilistic models. However, these require some conditioning when retrieved from the project uncertainty repository. Detailed specifications of inputs to probabilistic cost and schedule models will be provided in this chapter.
FIGURE 13.1 Probabilistic Schedule Analysis Workflow. (The workflow starts from the project uncertainty repository and the frozen scope, cost, and schedule baseline. A proxy schedule yields the deterministic completion date; schedule general uncertainties supply most likely (ML) durations and duration ranges; schedule uncertain events, mapped to normal activities, supply ML impacts, duration ranges, and probability ranges. These feed the schedule model and Monte Carlo simulation, which produces the completion date distribution, confidence level, and sensitivity, plus an output to the cost model.)
FIGURE 13.2 Probabilistic Cost Analysis Workflow. (The workflow starts from the project uncertainty repository and the frozen scope, cost, and schedule baseline. A proxy base estimate yields the deterministic project cost; cost general uncertainties supply ML costs and cost ranges; cost uncertain events supply ML impacts, cost ranges, and probability ranges; the input from the schedule model carries schedule-driven costs. These feed the cost model and Monte Carlo simulation, which produces the cost distribution, confidence level, and sensitivity.)
◾ Preparations for Probabilistic Analysis
PLANNING MONTE CARLO ANALYSIS

Due to the need for integration of deterministic and probabilistic methods, it is difficult to distinguish where planning of probabilistic analyses should really begin. The planned probabilistic analyses should be kept in mind when initial uncertainty identification commences, to ensure a seamless integration of both methods. Some information that may be helpful when converting deterministic data to probabilistic inputs should be captured in the uncertainty repository in the form of comments and notes. Possible areas of double counting of uncertainties should be identified. Special attention should be paid to the approved addressing actions and assessments of uncertainties after addressing. At the same time, any probabilistic analysis should be managed as a small project that has a charter, schedule, start and finish milestones, interfaces with various disciplines, and a budget. The method statement of a probabilistic analysis is developed in its charter; it defines the goal, scope, and methodology of the study. "Probabilistic Analysis Method Statement" examines a method statement for a hypothetical carbon-capture-and-storage project called Curiosity, which is discussed in Part IV of the text.
PROBABILISTIC ANALYSIS METHOD STATEMENT

Scope of the study: (A) Investigation of confidence levels of project budget and duration at the end of Select (case as-is) and at the end of Define/final investment decision (FID) (case to-be-FID); (B) development of cost and schedule reserves required to reach P80 confidence level of contractual project completion (sustained operations) for the to-be-FID case; (C) investigation of confidence levels of completion dates for planned FID milestone for the to-be-FID case; (D) evaluation of primary accuracy range of project cost estimate for the to-be-FID case.

Method of the study: Integrated cost and schedule risk analysis (CSRA).

Main scenarios: Before addressing and after addressing; additional after-addressing scenarios, if required.

Baselines: Proxy base estimate (24 cost accounts) and proxy schedule (23 normal activities).

Selected uncertainties: Project downside and upside uncertainties of cost and schedule impacts selected from deterministic project
register; schedule-driven cost impacts excluded from the risk register, to be taken into account through integration of cost and schedule models.

Factors included in the cost analysis: General uncertainties and uncertain events and major correlations among those; schedule-driven costs to be taken into account as general uncertainties through inputs from schedule risk analysis; agreed unknown-unknown allowance to be included in the cost model.

Factors excluded from the integrated cost and schedule risk analysis: Project show-stoppers and game changers according to the approved list; cost escalation to be modeled separately; a check to exclude double counting of input factors to be undertaken.

What-if scenarios: (A) Investigation of possible impact of governmental permitting delays on project completion and FID; (B) investigation of impact of an upside uncertain event related to early procurement; (C) investigation of readiness for the tie-in window.

Start date of the study: April 1, 2015
Completion of the study: April 24, 2015
Reporting: CSRA report validated by the project team and third-party reviewers.
The probabilistic analysis should have a well‐defined and viable schedule. The following factors should be kept in mind when developing it:
▪ Is this exercise being done for the first time by the project team? If yes, special attention should be paid to training the team members, who should provide adequate and unbiased input information. This is the core message of this book. If most of the key members of the project team (project, engineering, procurement, construction, and project services managers, and lead specialists, including estimators and schedulers) have never taken part in such activities, the schedule duration should be almost doubled in comparison with that of an experienced team.
▪ Are major interfaces and dates of key inputs clearly specified, including dates of freezing of the project base estimate and schedule? It is quite common that changes to project base estimates and schedules are ongoing and never stop unless a project manager defines clear dates for their freezing. Normally, cost and schedule development eats up 50–80% of the time allocated for probabilistic analysis. This is another reason to double the schedule duration, if possible. Stress is a major work hazard in risk analysis when risk managers and analysts try to complete probabilistic analyses under a severe time crunch.
▪ Is enough float allocated in case additional probabilistic analyses were to be required? It is not unusual that probabilistic analysis based on initial to-be input data discovers that the confidence level of one or several milestones is unacceptably low. A full cycle of uncertainty reviews according to the risk management process (Figure 2.2) could be required. Additional addressing actions should be developed for the most sensitive uncertainties. This may include negotiations with subcontractors and stakeholders, better coordination among functional and support teams, additional engineering, constructability, and value engineering reviews, and so on. This could be quite a lengthy process that requires a corresponding float in the schedule.
▪ Are external stakeholders a major source of possible project delays? I took part in the development of a what-if scenario that resulted in a cardinal amendment of a project schedule as related to permitting, as well as restructuring of the project team. Often completion of one what-if scenario leads to new ideas about additional scenarios. Extra float should be reserved for tackling what-if scenarios.
Any practitioner who has taken part in real-life probabilistic analyses knows that most of these recommendations are nice but rather utopian. Any initially prudent schedule might quickly become unrealistic due to the causes listed above. There is a high level of furor, and 14-to-16-hour workdays for risk analysts and managers of capital projects during the period of the probabilistic analyses. Realistically, longer workdays and ruined weekends are the only source of viable schedule floats. A risk analyst who runs probabilistic analysis should be qualified to carry out adequate model building and to interpret results meaningfully. He or she should have experience in running several models for similar projects under the supervision of experienced specialists. There should be at least a sort of internal informal certification of analysts before permitting them to run probabilistic analyses on their own. All results should be validated by peers or third parties. There is a clear reason for this: a well-developed model with several what-if scenarios would be a valuable source of information for decision making. Now let's imagine that both model and results are inadequate but taken seriously. The "Cargo-Cult" types of analysis pointed out in Chapter 2 could be quite expensive in terms of project failures.
BASELINES AND DEVELOPMENT OF PROXIES

In an ideal world the probabilistic analysis workflows (Figures 13.1 and 13.2) start after the project scope, base estimate, and schedule are frozen. In reality these baselines continue evolving during and after the probabilistic analysis; hence, the analysis represents a snapshot of the project baselines and uncertainties at a particular moment. Unfortunately, the freezing rule is rarely obeyed in practice. For instance, a project team gets new information from vendors that changes equipment delivery timelines and risk exposure quite significantly. An attempt to amend the input data used in probabilistic models may be undertaken. Some changes are easy to accommodate. However, if the schedule logic is changed, the whole schedule model should be redone, leading to inevitable delays. In any case, there should be a cutoff date that precludes any further changes of input data.

Base estimates might have hundreds of cost accounts, and the project schedule might have thousands of normal activities. Theoretically it is possible to run probabilistic analysis based on such detailed baselines; technically there is no obstacle to doing this. The major constraint is time. Hundreds and hundreds of general uncertainties should be discussed and validated for each cost account and normal activity. Hundreds and hundreds of correlations among them should also be identified. After that, detailed uncertain events should be discussed as deviations from those hundreds and hundreds of cost accounts and normal activities, not to mention the correlations among them. Besides time and level of effort, such granularization has intrinsic restrictions, especially in the case of schedule risk analysis. If a project uncertainty repository contains 20 high-level uncertain events, those 20 should be mapped to hundreds and hundreds of detailed normal activities. There are two technical challenges with this.
First, the impacts of those high‐level uncertain events should be scaled down to be commensurate with the durations of those normal activities. These events are a better match to the summary activities of the schedule, and not to very detailed normal activities. Second, as discussed previously, the probability assessment of an uncertain event depends on the level of its definition. The more general the event, the higher its probability. Initially assessed probabilities of high‐level uncertain events will be all wrong when mapped to detailed normal activities. Due to these two points, the level of detail in the uncertainty repository and in the detailed schedule, including the level of detail of general uncertainties, will not be commensurate with each other. The results of such
an analysis will be quite distorted, inconsistent, and misleading; they will be a waste of time and effort at best. A similar situation may occur when running cost probabilistic models. A high level of detail in the case of general uncertainties developed for hundreds of cost accounts will conflict with the few uncertain events of higher impacts and probabilities. The resulting cost distribution curve might have two separate peaks, which is an indication of the presence of two groups of factors in the model that are not commensurate with each other. These groups should be in the same or a comparable “weight category.” (This is usually the case when levels of detail are comparable.) Otherwise, they will not blend in the model and will produce distorted results. The bottom line is that the level of detail in baselines should be comparable with the level of detail of uncertainty definitions kept in the project uncertainty repository. There are two consistent possibilities to resolve this “incommensurability crisis.” First, an extremely detailed project uncertainty repository should be developed. Hundreds of detailed general uncertainties and uncertain events should be managed even before the probabilistic analysis is planned. They should be managed according to the risk management process in Figure 2.2, making the lives of the project team miserable. Each of those should be relevant to one or several detailed cost accounts and normal activities. This is absolutely the correct approach to match hundreds and hundreds of items in baselines with hundreds and hundreds of items in the project uncertainty repository, and is absolutely impractical. Months and months will be spent on such probabilistic analysis, which will have high complexity and likelihood of errors and omissions. 
The synchronization of cost and schedule models at such a level of detail in order to take into account schedule‐driven costs would be a complete nightmare: hundreds of burn rates should be developed and justified to adequately integrate the two models. Automation of this process would be bogus as good analytical skills cannot be replaced by a tool. Such an automated method will provide some results based on the infamous GIGO1 principle that is so popular in the IT industry. Second, the establishment of commensurability may be undertaken at a higher level. The detailed project base estimate and schedule may be rolled up to a much higher level that corresponds to a low level of detail. Similarly, the uncertainty repository should contain a lower level of general uncertainties and uncertain events. All relevant general uncertainties will be rolled up to ranges around one‐point baseline numbers. A small number of relevant uncertain events with
impacts and probabilities defined according to their level of definition will be a match with baseline items in terms of the level of detail. This simplifies probabilistic analysis enormously and makes it less onerous. Table 13.1 presents the general specification of inputs to such a probabilistic analysis.
TABLE 13.1 Specification of Inputs to Probabilistic Models

1. Proxy schedule
   Specification: "Level 1.5" schedule: 30–150 normal activities (usually 40–70); logic of master schedule retained; introduction of lags is required to keep logic and obey all milestones.
   Note: The same WBS at summary level for proxy schedule and proxy base estimate.

2. Proxy cost estimate
   Specification: "Class 4.5" estimate: 15–50 cost accounts; introduction of burn rates for summary accounts.
   Note: The same WBS at summary level for proxy base estimate and proxy schedule.

3. General uncertainty register (deterministic)
   Specification: Information required to justify ranges around cost accounts and normal activities of proxies.
   Note: Source: deterministic uncertainty repository. Full deterministic impact assessment for as-is and to-be cases. Set of approved addressing actions.

4. Uncertain event register (deterministic)
   Specification: 10–50 (usually 15–30) red and yellow uncertain events (assessment as-is for downside events and assessment to-be for upside events).
   Note: Source: deterministic uncertainty repository. Full deterministic impact and probability assessment for as-is and to-be cases. Set of approved addressing actions. Show-stoppers and game changers are listed and excluded as inputs.

5. Probabilistic general uncertainty register
   Specification: Full definition of impact distributions; main correlations (coefficient +0.8 to +1) and anti-correlations (coefficient –1 to –0.8) among distributions.
   Note: Conversion of deterministic general uncertainty register to inputs according to PertMaster and @Risk/Crystal Ball specs. Usually triangular/trigen impact distributions are used unless historic or modeling data indicate differently.

6. Probabilistic uncertain event register
   Specification: Full definition of impact distributions and probabilities; main correlations (coefficient +0.8 to +1) and anti-correlations (coefficient –1 to –0.8) among distributions; mapping of schedule uncertainties to impacted normal activities.
   Note: Conversion of deterministic uncertain event register to inputs according to PertMaster and @Risk/Crystal Ball specs. Usually triangular/trigen impact distributions are used unless historic or modeling data say differently. Assessment of (min; max) probability range is required.

7. Description of probabilistic branching scenarios (if required)
   Specification: Short description of construction/logistics/turnaround, etc. windows and associated triggers and delays.
   Note: Information according to PertMaster input requirements.

8. Description of what-if scenarios of interest
   Specification: Short description of what-if scenarios of interest in cost or schedule models.
   Note: Groups of factors related to external or internal sources of uncertainties to investigate (e.g., permitting delays, engineering delays, etc.).

List of key proxy milestones for investigation of their confidence levels.
   Note: Key milestones such as FID, mechanical completion, first oil, etc.
The schedule proxy introduced in Table 13.1 has a typical number of normal activities and a level of detail that is higher than level 1 but lower than level 2 of typical project schedules. I call these "level 1.5" or "level one-and-a-half" proxies for probabilistic schedule analysis. A similar approach is used for development of cost proxies. They usually resemble estimates of class 4 or class 5 in terms of their level of detail. Let's call those "class 4.5" or "class four-and-a-half" proxies. This does not mean that all the ranges around cost accounts should comply with the typical accuracy ranges of class 4 or 5 estimates. Some parts of the cost proxies could have a higher level of detail to single out unique cost accounts that have anomalously wide or narrow ranges due to the current level of engineering and procurement development.
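The roll-up from a detailed base estimate to a class 4.5 proxy can be sketched as follows. The WBS codes, account names, and costs are invented for illustration; only the grouping mechanism matters.

```python
from collections import defaultdict

# Hypothetical detailed base estimate: (WBS code, cost account, cost in $M).
# Rolling up to the first WBS segment yields summary-level proxy accounts.
detailed = [
    ("ENG.10", "FEED engineering", 12.0),
    ("ENG.20", "Detailed engineering", 35.0),
    ("PRO.10", "Long-lead equipment", 80.0),
    ("PRO.20", "Bulk materials", 45.0),
    ("CON.10", "Civil works", 60.0),
    ("CON.20", "Mechanical installation", 90.0),
]

def roll_up(accounts):
    """Sum detailed cost accounts into summary-level proxy accounts."""
    proxy = defaultdict(float)
    for wbs, _name, cost in accounts:
        proxy[wbs.split(".")[0]] += cost
    return dict(proxy)

proxy = roll_up(detailed)
print(proxy)  # {'ENG': 47.0, 'PRO': 125.0, 'CON': 150.0}
```

The ranges and uncertain events from the repository are then attached to these few summary accounts rather than to the hundreds of detailed ones, which keeps the two sides of the model commensurate.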
WHY USING PROXIES IS THE RIGHT METHOD

We saw in the previous section that commensurability between the number of cost accounts and normal activities in baselines and the number of uncertain events could be established at two levels: at a very low level with a very high level of detail, or at quite a high level with a low level of detail. A high level of detail makes building and running such models practically impossible. One additional contribution to the level of complexity would be the need to establish mandatory correlations among base estimate accounts and normal activity distributions. This stems from the high granularity of base estimates and schedules containing hundreds and hundreds of items and the need to compensate for such granularity mathematically. Here is the reason. Models that take in uncorrelated distributions produce much narrower distribution curves than models with a certain level of correlations (Figure 3.7). I do not intend to scare my readers away with topics such as the Central Limit Theorem, but it would be correct to state that uncorrelated inputs to probabilistic models will produce a narrow outcome curve that resembles a normal distribution (bell curve). As previously discussed, this gives rise to smaller-than-reasonable cost and schedule reserves. Once upon a time I checked a project probabilistic schedule model that was run without any correlations among duration distributions and with a dozen uncertain events, also uncorrelated. An amazingly low 0.8% schedule reserve at the P80 level of confidence was declared as an outcome of such analysis. And yes, the model had almost 1,000 normal activities with duration distributions around them. The distance to reality of this model was immense due to its intrinsic granularity; the only benefit was the ease and speed of building and running it.
The granularity of the model and the lack of correlations to compensate for it were among the key sources of its audacious inadequacy. An additional reason why uncorrelated granularization should be avoided relates to the fact that a higher level of definition and detail in base estimates and schedules means narrower ranges around the most likely baseline values. For instance, ranges around the most likely values of level 4 schedules or class 3 estimates are usually narrower than the ranges of level 3 schedules and class 4 base estimates—they are better defined. If such inputs are not adequately correlated, sampling of narrow uncorrelated distributions will lead to a narrower outcome curve. That was exactly what happened in the previous example. A word of caution relevant to the granularity of baselines: the transition from deterministic cost or schedule baselines to probabilistic models without proper
correlations among components will lead to incorrect results. This just means that a deterministic baseline directly converted into inputs to a probabilistic model, without proper compensation for the granularity, will have nothing to do with the project probability-wise. Automation of modeling without proper correlations at the detailed level will consistently produce wrong results; automation just makes the production of wrong results easier and quicker. Unfortunately, the recently introduced software package Acumen Risk, which is positioned as a tool that does not require much knowledge of statistics, does not seem to address correlations properly. Even though it does have the functionality to condition the granularity of detailed schedules, it does not have the functionality to establish particular correlations among distributions in order to exclude unrealistic scenarios (!). It is not quite clear how Acumen Risk can produce realistic results. Sales- and marketing-driven user-friendliness and the targeting of laypeople as prospects seem to increase the distance of Acumen Risk models from reality to an unacceptable level. Hopefully, future versions of Acumen Risk will bring back the correlating of distributions as standard functionality to avoid this systematic error.
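The narrowing effect of uncorrelated granular inputs can be demonstrated with a toy simulation. The blending of a shared driver below only mimics correlation (the mixing weight rho is not a calibrated correlation coefficient), and all numbers are invented; the point is the qualitative difference in spread.

```python
import random
import statistics

def outcome_spread(n_items=100, n_trials=5000, rho=0.0, seed=7):
    """Standard deviation of the sum of n_items symmetric input ranges.
    rho blends a shared driver into every item to mimic correlated inputs;
    rho=0 gives fully independent items."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        shared = rng.uniform(-1, 1)  # common driver sampled once per trial
        total = sum(rho * shared + (1 - rho) * rng.uniform(-1, 1)
                    for _ in range(n_items))
        totals.append(total)
    return statistics.stdev(totals)

uncorrelated = outcome_spread(rho=0.0)
correlated = outcome_spread(rho=0.8)
print(f"spread of outcome curve, uncorrelated inputs: {uncorrelated:.1f}")
print(f"spread of outcome curve, correlated inputs:   {correlated:.1f}")
```

With 100 independent items the deviations cancel and the outcome curve collapses toward a narrow bell curve; with a strong shared driver the spread is several times wider, which is exactly why an uncorrelated granular model declares absurdly small reserves.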
MAPPING OF UNCERTAIN EVENTS

Uncertain events do not impact a schedule as a whole; they impact specific normal activities. If such an impacted activity belongs to the project critical path or a near-critical path, there will be an impact on the project completion date. However, if an impacted activity precedes a substantial float in the schedule, there may be no overall impact at all. In other words, schedule impacts of schedule uncertain events are schedule-logic specific. This means in turn that any uncertain event of schedule impact should be mapped to one or several relevant normal activities to follow the schedule logic. Such a probabilistic viewpoint conflicts with the deterministic method of schedule impact assessment discussed in Chapter 5. It was recommended there to regard the schedule impact of any uncertain event as an overall impact on the schedule completion date. This implies that such an impact belongs to a project's critical path, which is not always true. Technically, that corresponds to a recommendation to add the contemplated overall impact to the project completion milestone, which certainly belongs to the critical path. This contradiction dictates the need to substantially condition deterministic schedule impact data when converting them into inputs to probabilistic schedule models.
Any initially identified schedule uncertain event should be viewed from the angle of the normal activities it impacts. For instance, a delay in receiving vendor data may impact directly several engineering and procurement activities. Particular impacts (delays) for each of those activities could differ. So, a deterministically assessed impact should be reviewed and tailor‐made (split) for each of these impacted activities. This makes the deterministic schedule impact assessment data unusable directly in the schedule probabilistic models. Some practitioners even believe that deterministic assessment methods are completely inadequate.2 It’s okay to engage in some drama when occupied with academic activities. I believe that the main practical value of deterministic methods is about addressing identified uncertainties using high‐level rule‐of‐thumb assessments. They also provide some indicators of expected impact assessments, although those assessments cannot be used in probabilistic models directly without proper conditioning. Existing probabilistic schedule risk analysis tools such as PertMaster correlate the impacts of an uncertain event on several normal activities with correlation coefficient 90% by default. This is a reasonable level of correlation. However, the correlation coefficients could be tailor‐made if necessary. Obviously, the same probability is used for all impacts of an uncertain event, which is the probability of its occurrence. A range of occurrence probability (min; max) should be discussed while assessing an uncertain event. When using a project RAM such a range is defined by one of five probability categories (Figure 3.2). When the conversion of deterministic data into probabilistic inputs is undertaken, the initially identified probability range should be challenged. A more precise or relevant range should be defined to get rid of certainty imposed by rigid RAM categories. 
For instance, if the initial probability assessment based on project RAM (Figure 3.2) were low (1–20%), a review of this RAM‐based range could lead to a more specific assessment, for example, 5–15% or 10–30%. In the latter case, a new probability range covers two RAM categories (low and medium) and legitimately challenges the Procrustean‐bed approach imposed by the deterministic RAM. Technically, probability ranges are not required as inputs to probabilistic models. All existing probabilistic tools digest simple average values. For instance, the range 10–30% will be converted immediately by PertMaster into a single number, 20%. So, ranges are important and appropriate for assessment discussions, not for mathematical modeling. The conversion of a deterministic uncertainty register (Figure 5.3) into probabilistic inputs to a schedule model is shown in Figure 13.3. Uncertain events of real project models could be mapped to several normal activities.
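The conditioning just described can be sketched in a few lines: the (min; max) probability range collapses to its simple average, and the rough deterministic impact is split across the mapped activities, all sharing one occurrence draw. The function names are hypothetical; the numbers echo the Figure 13.3 example, though the assignment of the two impact splits to Engineering versus Procurement is illustrative.

```python
import random

def point_probability(p_min, p_max):
    """Reduce a (min; max) probability range to the simple average,
    mirroring how tools such as PertMaster digest a single number."""
    return (p_min + p_max) / 2

def sample_mapped_event(rng, prob, impacts):
    """One trial of an uncertain event mapped to several normal activities:
    a single occurrence draw is shared by all mapped activities, and each
    activity gets its own (min, most likely, max) slice of the rough impact."""
    if rng.random() >= prob:                      # event does not occur this trial
        return {name: 0.0 for name in impacts}
    return {name: rng.triangular(lo, hi, ml)      # delay in days per activity
            for name, (lo, ml, hi) in impacts.items()}

prob = point_probability(0.10, 0.30)  # the 10-30% range becomes 0.20
rng = random.Random(0)
delays = sample_mapped_event(rng, prob, {"Engineering": (20, 30, 90),
                                         "Procurement": (5, 10, 25)})
print(prob, delays)
```

Strictly speaking, each per-activity impact behaves as a separate event of the same probability; in a real model the per-activity impacts would also be correlated with each other (PertMaster's 90% default).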
FIGURE 13.3 Concept of Conversion of Deterministic Data into Inputs to Probabilistic Schedule Model. (The template carries downside uncertain event UE 1, "Project Sanctioning," with its three-part definition: due to (a) general opposition by some NGOs to the oil sands project, (b) concerns by local communities about the project's environmental impact, and (c) environmental issues associated with a similar project in the past, project XXX might be challenged during public hearings, leading to (A) permitting and final investment decision, engineering, and procurement delays (Schedule), (B) the company's reputational damage in general (Reputation), (C) complication of relations with local communities in particular (Reputation), and (D) extra owner's costs (CapEx). As-is assessment: probability medium (20%–50%), schedule impact very high (>6 months). Addressing actions: mitigate-prevent by establishing a community engagement and communication plan including a schedule of open house meetings; mitigate-recover by reviewing sequencing of front-end loading (FEL)/pre-FID works and developing additional float in the schedule to absorb the schedule impact in case the risk does occur. To-be assessment: probability low (1%–20%), schedule impact low (0.5–1 month). Probabilistic to-be inputs: probability 10%–30%; the impact is mapped to Engineering and Procurement with splits of (min 20; ML 30; max 90) and (min 5; ML 10; max 25) days, and an 80% correlation to a construction uncertain event.)
As opposed to probabilistic schedule modeling, probabilistic cost modeling is not base-estimate-logic specific; such logic in cost models does not exist. There should, of course, be robust logic behind base estimates, but not in the sense of the schedule logic based on links among normal activities. It is still possible to associate particular uncertain events with particular cost accounts of base estimates for tracking purposes. Moreover, this would be good style when converting deterministically identified and assessed uncertain events of cost impact into inputs to cost models. It could be done easily using the mapping in the schedule model, as cost impacts could be mapped to the same WBS accounts as in the schedule model. However, such mapping makes no difference in terms of the mathematics of probabilistic cost modeling. In addition, there is no input functionality that allows one to take such cost mapping into account. This simplifies probabilistic cost modeling a lot. Figure 13.4 introduces a template used for developing probabilistic cost inputs. A side-by-side comparison of Figures 13.3 and 13.4 reveals the internal logic and links between impacts of the uncertain event on project cost and schedule. Obviously, the probability of occurrence of an uncertain event that has both cost and schedule impacts should be the same in both models. Mapping that is crucial in the schedule model is shown in the cost template for tracking purposes only; the event is mapped to the same activities, procurement and engineering. It would be logical to expect the same correlations for this particular event with the same other uncertain events in both models: to an unspecified construction uncertain event with correlation coefficient 80% in this example. Once again, the probabilistic input data in both templates point to the fact that both probability and impact assessments ignore the Procrustean-bed ranges of the deterministic RAM.
These should be used as starting points for development of probabilistic inputs and not as final inputs. A split of an initially rough impact is required to evaluate impacts on mapped normal activities individually. Strictly speaking, such individual impacts of one uncertain event on more than one normal activity should be treated as absolutely separate events, although correlated ones and of the same probability. Both templates contain input data for probabilistic models for only the to‐be case for simplification. If required, the as‐is data could be easily added. Obviously, mapping of general uncertainties is not required in either type of model as they are attached to the corresponding cost accounts or normal activities by default.
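The schedule-driven link between the two models relies on the burn rates introduced in Table 13.1: a sampled duration overrun on a summary account is converted into cost at that account's burn rate. The function name, baseline, burn rate, and duration range below are invented for illustration.

```python
import random

def schedule_driven_cost(rng, base_duration, burn_rate, duration_range):
    """One trial of a schedule-driven cost for a summary account:
    the sampled duration overrun (months) times the burn rate ($M/month)."""
    lo, ml, hi = duration_range
    duration = rng.triangular(lo, hi, ml)      # sampled duration, months
    overrun = max(0.0, duration - base_duration)
    return burn_rate * overrun                 # extra cost, $M

rng = random.Random(3)
# Hypothetical summary account: 18-month baseline, $2.5M/month burn rate,
# sampled duration ranging 16-26 months with 19 most likely.
samples = [schedule_driven_cost(rng, 18, 2.5, (16, 19, 26)) for _ in range(10_000)]
mean_extra = sum(samples) / len(samples)
print(f"expected schedule-driven cost: ${mean_extra:.1f}M")
```

In a full CSRA this feed from the schedule model enters the cost model as a general uncertainty, which is why the occurrence probabilities of shared events must agree between the two models.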
FIGURE 13.4 Concept of Conversion of Deterministic Data into Inputs to Probabilistic Cost Model. (The cost template carries the same downside uncertain event UE 1, "Project Sanctioning," with the same three-part definition and addressing actions as in Figure 13.3. As-is assessment: probability medium (20%–50%), cost impact low ($0.5M–$5M). To-be assessment: probability low (1%–20%), cost impact very low (