RISK MODELING, ASSESSMENT, AND MANAGEMENT Third Edition Supplementary Problems and Exercises
YACOV Y. HAIMES Lawrence R. Quarles Professor of Systems and Information Engineering and Civil Engineering Founding Director (1987), Center for Risk Management of Engineering Systems, University of Virginia, Charlottesville
JOOST R. SANTOS Research Assistant Professor, Department of Systems and Information Engineering Assistant Director, Center for Risk Management of Engineering Systems, University of Virginia, Charlottesville
Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved.
A Wiley-Interscience Publication
JOHN WILEY & SONS, INC. New York / Chichester / Weinheim / Brisbane / Singapore / Toronto
About the Supplementary Problems and Exercises

For the first time, the 3rd edition of the book Risk Modeling, Assessment, and Management comes with a set of supplementary problems and exercises resulting from a longstanding collaboration with my colleague and former student, Joost Santos. This set contains a compilation of 150 exercises and problems featuring risk analysis theories, methodologies, and applications. Its objective is to provide reinforced learning experiences for risk analysis scholars and practitioners through a diverse set of problems and hands-on exercises. The problems and exercises encompass a broad spectrum of applications including disaster analysis, industrial safety, transportation security, production efficiency, and portfolio selection, among others.

Ideas and raw materials for the supplementary problems and exercises are attributable to the numerous students who have participated in the Risk Analysis course offerings over the last 20 years. The production of this supplement would not have been possible without the help of the following student encoders: Dexter Galozo, Jonathan Goodnight, Miguel Guerra, Sung Nam Hwang, Jeesang Jung, Mark Orsi, Oliver Platt-Mills, Chris Story, Scott Tucker, and Gen Ye. We would like to acknowledge, in particular, Chris Story for his tireless efforts and attention to quality. He devoted more than a year of his time to providing valuable assistance with computer encoding as well as checking the accuracy of each problem and exercise. Last but not least, I would like to once again acknowledge Grace Zisk for her meticulous editing and suggestions to standardize the structure of each solved problem.

For better tractability, the problems and exercises are organized in a similar manner as the chapters of the book and progress from foundation topics (e.g., building blocks of modeling and structuring of risk scenarios) to relatively more complex concepts (e.g., multiobjective trade-off analysis and statistics of extremes). The table of contents provides an itemized list of the 150 problems and exercises. Of the 150 problems and exercises, 80 are solved. The remaining 70 are unsolved exercises, which are labeled with asterisks (*).
Yacov Y. Haimes Joost R. Santos
TABLE OF CONTENTS
Chapter I: Building Blocks (BB)
I.1. BB – Hazmat Transport
I.2. BB – Snow Removal
I.3. BB – Fruit Stand
I.4. BB – Sales Forecast
I.5. BB – Internet Browsing
I.6. BB – Stadium Construction*
I.7. BB – Tank Irrigation*
I.8. BB – Lumber Marketing*
I.9. BB – Overflow Management*
I.10. BB – Car Design*
I.11. BB – Tram Construction*
I.12. BB – Transit Service*

Chapter II: Hierarchical Holographic Model (HHM)
II.1. HHM – Airliner Risk
II.2. HHM – Army Modularity
II.3. HHM – Ordnance System
II.4. HHM – Air Vehicle
II.5. HHM – Tunnel Project*
II.6. HHM – River Flooding*
II.7. HHM – Rail Project*
II.8. HHM – Waste Management*
II.9. HHM – Mall Development*
II.10. HHM – Airline Industry*

Chapter III: Decision Analysis (DA)
III.1. DA – Software Development
III.2. DA – Rental Service
III.3. DA – Menu Addition
III.4. DA – Seatbelt Replacement
III.5. DA – Engine Test
III.6. DA – Magic Beanstalk
III.7. DA – Card Approval*
III.8. DA – Flavor Selection*
III.9. DA – Wine Quality*
III.10. DA – Aircraft Procurement*
III.11. DA – Machine Management*
III.12. DA – Production Decision*
III.13. DA – Resort Management*

Chapter IV: Surrogate Worth Tradeoff (SWT)
IV.1. SWT – Station Location
IV.2. SWT – Investment Option
IV.3. SWT – Apartment Rental
IV.4. SWT – Student Life
IV.5. SWT – Spaceship Mission
IV.6. SWT – Generic Problem*
IV.7. SWT – Employment Dilemma*
IV.8. SWT – Investment Decision*
IV.9. SWT – Chemical Company*
IV.10. SWT – Plant Construction*
IV.11. SWT – Concert Security*
IV.12. SWT – Overflow Modeling*
IV.13. SWT – Design Decision*

Chapter V: Uncertainty Sensitivity Index Method (USIM)
V.1. USIM – Sales Forecast
V.2. USIM – Cantilevered Structure
V.3. USIM – Automobile Purchase
V.4. USIM – Generic Problem
V.5. USIM – Cyber Security
V.6. USIM – Art Museum
V.7. USIM – Multiobjective Optimization*
V.8. USIM – Building Design*
V.9. USIM – Portfolio Evaluation*
V.10. USIM – Infectious Disease*
V.11. USIM – Catalyst Reaction*
V.12. USIM – Cost Management*
V.13. USIM – Electroplating Plant*

Chapter VI: Risk Filtering, Ranking, and Management (RFRM)
VI.1. RFRM – Concert Security
VI.2. RFRM – Electric Power
VI.3. RFRM – Online Banking
VI.4. RFRM – Nano Material
VI.5. RFRM – Department Assessment
VI.6. RFRM – Acquisition Investment
VI.7. RFRM – Healthcare System
VI.8. RFRM – Hurricane Katrina
VI.9. RFRM – Athletic Program
VI.10. RFRM – Home Security
VI.11. RFRM – Party Planning

Chapter VII: Partitioned Multiobjective Risk Method (PMRM)
VII.1. PMRM – Project Management
VII.2. PMRM – Supplier Selection
VII.3. PMRM – Job Hunting
VII.4. PMRM – Facility Development
VII.5. PMRM – Meteorological Observatory
VII.6. PMRM – Construction Contractor
VII.7. PMRM – Water Treatment
VII.8. PMRM – Architectural Style*
VII.9. PMRM – Highway Project*
VII.10. PMRM – Courier Choice*
VII.11. PMRM – O-ring Reliability*
VII.12. PMRM – Budget Planning*
VII.13. PMRM – Road Safety*
VII.14. PMRM – Investor’s Dilemma*
VII.15. PMRM – Welding Process*
VII.16. PMRM – Vehicle Safety*
VII.17. PMRM – Cost Estimation*

Chapter VIII: Multiobjective Decision Trees (MODT)
VIII.1. MODT – Flood Control
VIII.2. MODT – Bridge Maintenance
VIII.3. MODT – Consulting Needs
VIII.4. MODT – Business Decision
VIII.5. MODT – E-mail Service*
VIII.6. MODT – Call Center*
VIII.7. MODT – Reservation Strategy*
VIII.8. MODT – Alternative Routing*
VIII.9. MODT – Bobsled Training*

Chapter IX: Multiobjective Risk Impact Analysis Method (MRIAM)
IX.1. MRIAM – Cholesterol Control
IX.2. MRIAM – River Channel
IX.3. MRIAM – Generic Problem
IX.4. MRIAM – Insurgent Terrorism
IX.5. MRIAM – Cancer Treatment
IX.6. MRIAM – Flood Control*
IX.7. MRIAM – Road Construction*
IX.8. MRIAM – Research Fund*
IX.9. MRIAM – Overflow Modeling*
IX.10. MRIAM – Design Phase*

Chapter X: Extreme Event (EE)
X.1. EE – Maximum Discharge
X.2. EE – Circuit Duration
X.3. EE – Population Growth
X.4. EE – Snow Removal*
X.5. EE – Investment Opportunity*
X.6. EE – Oxygen Concentration*

Chapter XI: Fault Tree/Reliability Analysis (FTRA)
XI.1. FTRA – Desktop Malfunction
XI.2. FTRA – Control System
XI.3. FTRA – Organization Efficiency
XI.4. FTRA – Machine Gun
XI.5. FTRA – Airplane System
XI.6. FTRA – Demand Fulfillment*
XI.7. FTRA – Bicycle Brake*
XI.8. FTRA – Train Wreck*
XI.9. FTRA – Circuit Duration*
XI.10. FTRA – SCADA System*
XI.11. FTRA – Computer Failure*
XI.12. FTRA – Electronic Product*
XI.13. FTRA – MTTF Computation*

Chapter XII: Multiobjective Statistical Method (MSM)
XII.1. MSM – Water Treatment
XII.2. MSM – Network Optimization
XII.3. MSM – Wetlands Mitigation
XII.4. MSM – Football Strategy
XII.5. MSM – Manufacturing Process
XII.6. MSM – Gas Station
XII.7. MSM – Newsstand Company
XII.8. MSM – Resource Allocation*
XII.9. MSM – School Bus*
XII.10. MSM – Construction Time*
XII.11. MSM – Pollution Control*
I. Building Blocks
PROBLEM I.1: Analyzing Hazardous Materials Transportation An electronics manufacturer is conducting a study to select a treatment plant to which it will ship its hazardous materials. DESCRIPTION An electronics manufacturer in Charlottesville, VA produces hazardous wastes as byproducts of its processes. These wastes must be shipped to appropriate treatment plants. There are three possible plants, located in N. Hornell, NY, Hot Springs, AR, and Jacksonville, IL. The plants have different per-tonnage processing fees, as shown in Table I.1.1. The manufacturer needs to determine to which plant it should ship. The shipping arrangement is expected to remain in place for a few years. The manufacturer is concerned about HazMat transport accidents, population exposure, costs, and distances (in decreasing order of importance). METHODOLOGY The objective of this exercise is to identify the building blocks of mathematical modeling to provide decisionmaking insights for selecting the optimal path for transporting hazardous materials.

TABLE I.1.1. Per-tonnage Processing Fees
Location of Plant      Per-tonnage Processing Fee
N. Hornell, NY         $250/ton
Hot Springs, AR        $100/ton
Jacksonville, IL       $150/ton
Figure I.1.1. GIS Map of Routes and Impact Areas
SOLUTION Identifying the Different Variables The variables below are defined based on the following parameters. Note that the designations of a specific variable type are not necessarily distinct and they may overlap:

Client: Private manufacturing company producing hazardous waste
Time Frame: Long-term
Objectives (ordered from most important to least):
Z1: Minimize accidents during transport
Z2: Minimize population exposure
Z3: Minimize cost of transport
Z4: Minimize travel distance
TABLE I.1.2. Description of Variables

Decision Variables:
d1 – Routes to be taken by transport vehicle; will affect the decision of which treatment plant j to select
d2 – Mode of transport (land, air, sea, rail, etc.)
d3 – Other alternatives (cleaner technologies, relocating plant, etc.)

Inputs:
i1 – Population over time within impact radius of accident
i2 – Cost of treatment process in plant j (j = NY, AR, IL)
i3 – Cost of transport
i4 – Existing road type (interstate, urban, rural, etc.)

Random Variables:
r1 – Weather conditions
r2 – Road condition (accidents, traffic, etc.)
r3 – Vehicle and driver condition

Exogenous Variables:
α1 – Gasoline prices
α2 – Transport worker union strike

Output Variables:
o1 – Production per unit time
o2 – Profit
o3 – Public perception/corporate image

State Variables:
S1 – Amount of chemicals transported per time period
S2 – Number of available transport units
S3 – Number and nature of transport incidents

Constraints:
c1 – Existing road network
c2 – Treatment plant location
c3 – State and other local hazmat regulations (hazmat routes, volume, etc.)
Figure I.1.2. Schematic Diagram of the Model Building Blocks: decision variables (x), constraints (c), inputs (i), system states (s), objectives Z* = f(s), outputs (o), exogenous variables (α), and random variables (r)
Mathematical Model

Parameters of System States:
s1 = s1(x1, x2, x3, i2, i3, i4, o2, o3, α1, α2, r1, r2, r3, c1, c2, c3)
s2 = s2(x2, i3, o1, o2, o3, α2, r1, r2, r3, c3)
s3 = s3(x1, x2, x3, i1, i4, o1, o2, o3, α2, r1, r2, r3, c1, c2, c3)

Multiple Objectives:
Min Z1 = f1(s1, s2, s3)
Min Z2 = f2(s1, s3)
Min Z3 = f3(s1, s2, s3)
Min Z4 = f4(s1, s2)
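To make the building blocks concrete, here is a minimal computational sketch (in Python) of how the four objectives could be evaluated for each candidate plant. Only the processing fees come from Table I.1.1; the distances, accident rates, exposed populations, shipment volume, and trucking cost are hypothetical placeholders introduced solely for illustration, not data from the problem.

```python
# Hypothetical encoding of the HazMat transport building blocks.
# Fees are from Table I.1.1; every other number is an assumed placeholder.
plants = {
    "N. Hornell, NY":   {"fee_per_ton": 250, "distance_mi": 450,  "accident_rate": 1.2e-6, "exposed_pop": 60_000},
    "Hot Springs, AR":  {"fee_per_ton": 100, "distance_mi": 1000, "accident_rate": 1.5e-6, "exposed_pop": 90_000},
    "Jacksonville, IL": {"fee_per_ton": 150, "distance_mi": 800,  "accident_rate": 1.0e-6, "exposed_pop": 40_000},
}

TONS_PER_YEAR = 200      # assumed annual hazmat volume
TRIPS_PER_YEAR = 20      # assumed number of shipments per year
COST_PER_MILE = 3.0      # assumed trucking cost per mile per trip ($)

def objectives(p):
    """Return (Z1, Z2, Z3, Z4): expected accidents per year, population exposure,
    total annual cost, and one-way travel distance for a candidate plant."""
    z1 = p["accident_rate"] * p["distance_mi"] * TRIPS_PER_YEAR
    z2 = p["exposed_pop"]
    z3 = p["fee_per_ton"] * TONS_PER_YEAR + COST_PER_MILE * p["distance_mi"] * TRIPS_PER_YEAR
    z4 = p["distance_mi"]
    return z1, z2, z3, z4

for name, p in plants.items():
    z1, z2, z3, z4 = objectives(p)
    print(f"{name:18s} Z1={z1:.5f}  Z2={z2:,}  Z3=${z3:,.0f}  Z4={z4} mi")
```

Under such assumptions no single plant minimizes all four objectives at once, which is exactly why the problem is posed as a multiobjective trade-off rather than a single optimization.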
PROBLEM I.2: Preparing for Snowfall Removal Procedures Examples of model variables are given in the following problem. An initial model is offered, demonstrating the relationships among the variables. DESCRIPTION You are the superintendent of a rural area headquarters for the Virginia Department of Transportation (VDOT). Your responsibilities include planning, scheduling, and supervising snow removal operations. You have four authorized crew member positions. Two positions are filled; two are open. METHODOLOGY The objective of this exercise is to use building blocks of mathematical modeling in planning, scheduling, and supervising snow removal operations. The latest weather forecast for the upcoming weekend indicates that there is a 60% probability of 3 to 6 inches of snow falling in your area of snow removal responsibility sometime between Saturday at 6 pm and Sunday at 12 noon. The snow may be mixed with ice, depending on how the weather system develops. Both crew members have already worked four 12-hour shifts on snow removal operations earlier this week. Their work week is Monday through Sunday; any weekend work is considered part of the current week’s work and is categorized as “overtime.” You have three other sources of labor available for staffing snow removal operations this weekend: the headquarters’ Management Operations Manager (MOM), a retired VDOT crew member, and a local contractor. However, you cannot spend more on these sources than the amount budgeted for the two open positions. You also have limited time to train and orient the chosen source(s) of labor. Your problem/challenge/opportunity: Develop staffing and scheduling plans to perform snow removal operations for the upcoming weekend. Your goals are to minimize the operation’s labor costs while fielding the most experienced crew possible and maximizing the efficacy and efficiency of snow removal operations. SOLUTION Note the close (and possibly overlapping) relationships among the state variables, the constraints, and the objective functions. The variables involved in this model include: Decision Variables: x1: Crew Member 1 x2: Crew Member 2 x3: the MOM
x4: the retired crew member x5: the local contractor The above variables are binary decisions. A value of 1 means the services of that person are utilized, and a value of 0 means otherwise. Exogenous Variables: α1: number of miles of roads in the snow removal operations area α2: number of pieces of equipment available for snow removal operations (e.g., plow/hopper spreader/liquid calcium chloride tank-equipped tandem dump trucks, plow/hopper spreader/liquid calcium chloride tank-equipped standard dump trucks, tractors with v-plow attachment, tractors with loader attachment, front end loaders, and motor graders) α3: time it takes to clear the roads once, per VDOT Snow Removal Guidelines Inputs: Cost per hour of using: u1: Crew Member 1 (overtime) u2: Crew Member 2 (overtime) u3: the MOM u4: the retired crew member u5: the local contractor Time required to train and orient: u6: the MOM u7: the retired crew member u8: the local contractor Outputs: Number of hours worked: y1: by Crew Member 1 y2: by Crew Member 2 y3: by the MOM y4: by the retired crew member y5: by the local contractor Total cost of use in this weekend’s operations: c1: Crew Member 1 (c1 = u1y1) c2: Crew Member 2 (c2 = u2y2) c3: the MOM (c3 = u3y3) c4: the retired crew member (c4 = u4y4) c5: the local contractor (c5 = u5y5)
Random Variables: r1: amount of actual snowfall r2: duration of the storm r3: number of miles of roads receiving enough snow to warrant snow removal operations State Variables: s1: miles of roads that actually need to be cleared during and after the storm s2: pieces of equipment actually suitable for use given the amount of snow and the areas that must be cleared, s3: number of workers needed and available to clear area of responsibility during and after the storm according to VDOT Snow Removal Guidelines Constraints: g1: number of additional hours Crew Member 1 can work this weekend according to fatigue level, contract provisions, and labor laws g2: number of additional hours Crew Member 2 can work this weekend according to fatigue level, contract provisions, and labor laws g3: total amount of money available to spend on overtime for crew members g4: total amount of money available to spend on outside labor forces g5: total amount of time available for training/orienting outside labor forces
Initial Model
The initial model addresses only the superintendent’s goal of minimizing the cost of the weekend’s snow removal operations. A goal-programming model or Pareto-optimization model would be the preferred method of finding a way to satisfy all of the superintendent’s goals: minimize costs and maximize experience, efficiency, and efficacy.

Minimize Σ (i = 1 to 5) ci xi

s.t.
y1 x1 ≤ g1 (Crew Member 1 can’t work more than the allowed number of additional hours)
y2 x2 ≤ g2 (Crew Member 2 can’t work more than the allowed number of additional hours)
c1 x1 + c2 x2 ≤ g3 (the overtime pay for Crew Members 1 and 2 can’t be more than the budgeted/available amount)
c3 x3 + c4 x4 + c5 x5 ≤ g4 (the pay for the “outside labor forces” can’t be more than the budgeted/available amount)
u6 + u7 + u8 ≤ g5 (time spent on training and orientation for “outside labor forces” can’t be more than the time available for those activities)
xi = 0 or 1 (integer constraint on the decision variables)
αi, ui, yi, ri, si, gi ≥ 0 for all i (non-negativity constraint on the other variables)
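Because the cost-only model above involves just five binary staffing decisions, it can be solved by direct enumeration. The sketch below does that; all numeric values (hourly rates, hours, budgets, training times) and the added requirement of at least two workers are hypothetical assumptions used only to make the sketch runnable, not data from the problem.

```python
# Brute-force enumeration of the 2^5 binary staffing decisions in the
# cost-only snow removal model. All numbers are assumed placeholders.
from itertools import product

u = [40, 40, 30, 35, 55]    # hourly cost: crew 1 (OT), crew 2 (OT), MOM, retiree, contractor
y = [12, 12, 12, 12, 12]    # hours each would work this weekend (assumed)
train = [0, 0, 2, 1, 3]     # training/orientation hours for the three outside sources
g1, g2 = 16, 16             # additional hours crew members 1 and 2 may work
g3, g4, g5 = 1200, 1500, 4  # overtime budget, outside-labor budget, training-time limit

best = None
for x in product([0, 1], repeat=5):
    cost = sum(u[i] * y[i] * x[i] for i in range(5))            # c_i = u_i * y_i
    feasible = (
        y[0] * x[0] <= g1 and y[1] * x[1] <= g2                 # hour limits for crew members
        and u[0] * y[0] * x[0] + u[1] * y[1] * x[1] <= g3       # overtime budget
        and sum(u[i] * y[i] * x[i] for i in range(2, 5)) <= g4  # outside-labor budget
        and sum(train[i] * x[i] for i in range(2, 5)) <= g5     # training time for those actually used
        and sum(x) >= 2                                         # assumed: at least two workers needed
    )
    if feasible and (best is None or cost < best[0]):
        best = (cost, x)

print("Cheapest feasible staffing (cost, x1..x5):", best)
```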
Relationships among Variables The state variables are a function of the decision variables, the exogenous variables, the inputs, and the random variables. That is, the number of miles of roads that need to be cleared of snow and the number of workers needed to clear those roads according to VDOT Snow Removal Guidelines depend on how much snow falls, where and how long it falls, and how long it takes to clear the snow from the roads, as well as the number of pieces of equipment available for use, and the number of workers participating in snow removal operations. Under this model, the workers are chosen based on the cost of using a particular worker, the time required to train a worker from an outside labor force, and the availability of crew members.
PROBLEM I.3: Opening a Corner Fruit Stand A local supermarket owner wants to expand her business by opening a fruit stand on a busy downtown corner. Her son plans to run it. DESCRIPTION To help her son get started, the owner contacted a small consulting firm in town to model the various factors involved in making this venture successful. METHODOLOGY The consulting firm helped identify the components of the building blocks of modeling a corner fruit stand. SOLUTION There are two main objectives: 1) maximize profit, and 2) reduce the risk of losing money due to the short shelf life of fruit. This problem can be modeled by identifying the relevant variables as below: Decision variables: • The kinds of fruit to sell, and the price and quantity of each kind. Papayas, for example, could be sold at a higher margin, but may require special handling since they are not grown locally. Apples could be cheaper to stock, but may be sold only at low margins. • How often to place inventory orders The frequency of inventory orders is extremely important, because if orders are large and placed less frequently, some of the fruit may spoil. However, smaller and more frequent orders may incur larger procurement charges or may not be enough to meet demand. Input variables: • Quantity of fruit arriving from wholesaler • Cash from customers buying fruit Random variables: • Percentage of fruit possibly infected by bugs A shipment could be infected by a random level of bugs; this could be seasonal, weather-related, or due to various other causes. • Number of customers (demand) The number of customers can vary depending on season and other temporal factors.
Exogenous variables: • Population growth The town could be growing due to added jobs in the area. This could increase the population and thus have an impact on the state variables. • Local regulation policies and events A fruit seller must obey all official regulations regarding food sales. During the year, the town may schedule various events near the busy corner location. This could increase demand by drawing people into the fruit store. Output variables: • Fresh fruit sold to customers • Spoiled fruit that must be thrown away • Cash paid to wholesaler and also taken home as profit State variables: Identifying the state variables is critical in this modeling, since the system can be monitored through them: the inventory level (quantity) and the freshness level (quality) of the fruit. State variables represent the entire system, since input, decision, random, and exogenous variables affect the levels of the state variables. • Inventory level: the quantity of fruit available for purchase • Quality level of the fruit at any specific time period These objectives, constraints, and variables can be visualized in Figure I.3.1.
Figure I.3.1. A Block Diagram for a Fruit Stand (inputs: fruit quantity from the wholesaler, cash; random variables: insects, customer demand; exogenous variables: local policies, events, weather conditions; state variables: inventory level, quality of the fruit; outputs: cash, fresh and spoiled fruit; decisions: how often to order/replenish inventory, what type and quality of fruit)
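As a purely illustrative complement to the block diagram, the toy simulation below shows how the inventory-level state variable links the ordering decision to profit and spoilage. The demand range, prices, and the assumption that unsold fruit spoils the same day are hypothetical, not data from the problem.

```python
# Toy simulation of the spoilage-vs-stockout tradeoff for the fruit stand.
# All numbers are assumed placeholders.
import random

random.seed(1)
PRICE, COST = 2.0, 1.0   # assumed retail price and wholesale cost per unit of fruit

def average_daily_profit(order_qty, days=1000):
    """Average daily profit for a fixed daily order quantity."""
    profit = 0.0
    for _ in range(days):
        demand = random.randint(20, 60)   # assumed daily customer demand
        sold = min(order_qty, demand)     # the inventory level caps sales
        # every ordered unit is paid for; unsold units spoil and earn nothing
        profit += sold * PRICE - order_qty * COST
    return profit / days

for q in (30, 40, 50, 60):
    print(f"order {q} units/day -> average daily profit ~ {average_daily_profit(q):.1f}")
```

Ordering too little forgoes sales (disappointing customers), while ordering too much pays for fruit that spoils; balancing the two is precisely the trade-off the state variables are meant to track.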
PROBLEM I.4: Florist’s Valentine’s Day Dilemma For Valentine’s Day, the only florist in town must decide on the number of red roses to order from the wholesale rose nursery as well as the price she will charge her customers. DESCRIPTION To make the problem a bit simpler, assume that the price the wholesale nursery charges the florist is independent of the total amount the florist purchases. Also assume that each order is for a dozen roses and each customer will buy precisely one dozen. Some people have pre-ordered roses for Valentine’s Day at a previously determined price, and others will be purchasing them on February 14th. The florist must fulfill the pre-orders, or those deprived customers will not only receive a full refund but will also be disappointed. If the florist runs out of roses for walk-in customers, then they will also be disappointed. We can assume that disappointed customers will turn to the Internet or other shopping malls for roses next Valentine’s Day. Therefore, it is important to minimize the expected number of disappointed customers. If the florist has roses left at the end of the day, she will have to keep them fresh overnight and sell them at a lower price the next day. Assuming that all roses will be sold, another goal is to maximize the expected total profit. METHODOLOGY This problem can be modeled using the building blocks of mathematical models. SOLUTION Developing a model can be initiated by identifying the relevant variables and constraints, as follows: State variables (S): S = (S1, S2) S1: expected number of orders of roses sold S1_1 is for pre-order, S1_2 for February 14th, S1_3 for February 15th S2: expected number of disappointed customers S2_1 is for pre-order, S2_2 is for walk-in Random variables (R): R = (R1, R2) R1: number of walk-in customers on Valentine’s Day R2: number of pre-orders that are not picked up Decision variables (X): X = (X1, X2) X1: number of orders to place X2: price per order to charge customers on Valentine’s Day
Exogenous variables (A): A = (A1, A2) A1: wholesale price per order charged to the florist A2: cost per order to maintain freshness for one night Input variables (U): U = (U1, U2, U3) U1: price per order for pre-ordered roses (may also be considered as an exogenous variable) U2: regular price per order U3: number of pre-orders (also a random variable) Output variables (Y): Y = (Y1, Y2, Y3, Y4) Y1: expected total revenue from selling roses Y2: expected total cost to maintain freshness for one night Y3: total cost of purchasing roses Y4: expected total number of disappointed customers (also a random variable) Constraints (G): G = (G1, G2) G1: total number of orders the florist can handle G2: the upper limit of disappointed customers in order for the florist to survive With these variables and constraints, the model can be stated as follows:

Objective functions:
Max Y1 – Y2 – Y3
Min Y4

Subject to:
X1 ≤ G1 (total number of orders can’t exceed what the florist can handle)
Y4 ≤ G2 (expected total number of disappointed customers must be below the limit for the florist’s business to survive)
X1, X2 ≥ 0 (nonnegativity)

where:
A1 * X1 = Y3 (wholesale price per order × number of orders purchased = total cost of purchase)
S2_1 + S2_2 = Y4 (expected number of disappointed pre-order customers + expected number of disappointed walk-in customers = total number of disappointed customers)
U1 * U3 + X2 * S1_2 + U2 * S1_3 = Y1 (expected total revenue from pre-orders + expected total revenue from February 14th + expected total revenue from February 15th = expected total revenue from selling roses)
S1_3 = max(0, X1 – U3 + E[R2] – E[R1]) (expected roses left at the end of February 14th to be sold on the 15th)
A2 * S1_3 = Y2 (cost per order to keep freshness × the expected number of roses left over at the end of February 14th = expected total cost to maintain freshness)
S1_2 = E[R1]
S2_1 = max(0, –X1 + U3 – E[R2]) (expected number of disappointed pre-order customers)
S2_2 = max(0, E[R1] – X1 + U3 – E[R2]) (expected number of walk-ins not fulfilled)
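A minimal numerical sketch of the florist model follows. It implements the equations above directly; all of the numbers (prices, expected walk-in demand, number of pre-orders, limits) are hypothetical placeholders, so the output only illustrates how candidate order quantities X1 could be screened against the two objectives.

```python
# Hypothetical evaluation of the florist model; all input values are assumed.
U1, U2, U3 = 30.0, 40.0, 50   # pre-order price, regular (Feb 15) price, number of pre-orders
A1, A2 = 20.0, 2.0            # wholesale price per order, overnight freshness cost per order
ER1, ER2 = 80, 5              # E[R1] walk-in customers, E[R2] pre-orders not picked up
G1, G2 = 200, 10              # order-handling capacity, disappointed-customer limit

def evaluate(X1, X2):
    """Return (expected profit, expected disappointed customers) for order quantity X1
    and Valentine's Day price X2, following the model equations above."""
    S1_2 = ER1                                   # expected walk-in sales on February 14th
    S1_3 = max(0, X1 - U3 + ER2 - ER1)           # expected leftovers sold on February 15th
    S2_1 = max(0, -X1 + U3 - ER2)                # disappointed pre-order customers
    S2_2 = max(0, ER1 - X1 + U3 - ER2)           # disappointed walk-in customers
    Y1 = U1 * U3 + X2 * S1_2 + U2 * S1_3         # expected total revenue
    Y2 = A2 * S1_3                               # expected freshness cost
    Y3 = A1 * X1                                 # purchase cost
    Y4 = S2_1 + S2_2                             # expected disappointed customers
    return Y1 - Y2 - Y3, Y4

for X1 in (100, 125, 150):
    profit, disappointed = evaluate(X1, X2=45.0)
    feasible = X1 <= G1 and disappointed <= G2
    print(f"X1={X1:3d}: profit={profit:7.1f}, disappointed={disappointed:4.1f}, feasible={feasible}")
```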
The problem can be solved through multiobjective trade-off analysis, but without identifying pertinent variables in the system it is hard to obtain any specific solutions. Therefore, finding relevant components in each variable is the key to modeling the problem effectively.
PROBLEM I.5: Limiting Computer Browsing at Work This problem analyzes the impact of Internet browsing on worker productivity. DESCRIPTION How can a company limit the amount of time employees spend on non-work-related sites while they are supposed to be doing their jobs? This is one of the problems that companies face now more than ever, because while the Internet is a valuable resource for increasing efficiency, it can also negatively affect worker productivity. METHODOLOGY Guided by the questions of risk assessment and management, the objective of this exercise is to identify the building blocks that support decisions on the extent to which companies should limit use of the Internet for non-work-related activities. Risk Assessment: We can assess the risk associated with this problem by answering three questions: What can go wrong? What is the likelihood that it would go wrong? What are the consequences? Risk Management: To manage the risk, we answer the following three questions: What can be done and what options are available? What are their associated tradeoffs in terms of all costs, benefits, and risks? What are the impacts of current management decisions on future options? SOLUTION Risk Assessment What can go wrong? Employees with wide-open internet access could spend many hours of their day surfing sites that have nothing to do with their jobs, such as eBay.com, online shopping sites, and weather sites. In addition to a loss in productivity, there is also the chance that a virus could be transmitted to the employee’s computer, which is inside the company network. Finally, surfing non-work-related sites uses up valuable company bandwidth. This holds especially true for the streaming video and audio sites that many people enjoy. What is the likelihood that it would go wrong? The likelihood is near 100% that there would be a loss in productivity for an employee surfing non-work-related sites. As we know, time slips away when looking at topics of personal interest on the internet; this also affects other
employees if content is shared among coworkers. The likelihood of an employee’s computer contracting a virus depends on the type of site visited; some sites are more harmful than others, but there is always some risk present. The likelihood of increased bandwidth utilization is 100%, and the usage can grow exponentially depending on the applications accessed over the web. What are the consequences? The consequences for each element that could go wrong range from an increase in monetary costs to the possibility of bringing down the entire network. The main consequence for a loss in productivity is that more people are needed to perform the job formerly done by one. This, of course, will lead to an increase in costs. The consequences of contracting viruses involve spending man-hours to remove them and damages ranging from incapacitating one computer to having the entire network brought down due to the virus replicating itself. There is also a possibility of a replicating virus attempting to broadcast out of the network onto the internet. A specific virus acting as a backdoor into the network could allow outside traffic to bypass the firewall due to its Trojan horse nature. Finally, the consequence of an increase in bandwidth utilization is very simply an increase in cost. Risk Management What can be done and what options are available? The options are: 1. Shut down all internet usage. 2. Implement web-content filtering software. 3. Implement and publicize acceptable use policy. 4. Implement employee internet navigation-tracking software; publicize results. 5. No action. What are their associated tradeoffs in terms of all costs, benefits, and risks?
Option 1: Shut down all internet usage
  Cost: None
  Benefit: No loss of productivity
  Risk: Employees may be unable to do all necessary tasks for the job

Option 2: Implement web-content filtering software
  Cost: Cost of software; cost of man-hours to implement software
  Benefit: Increased productivity; decreased access to potentially harmful sites; decreased network downtime; decreased loss of bandwidth
  Risk: Sites required to do work-related tasks could be blocked

Option 3: Implement and publicize acceptable use policy
  Cost: Cost of man-hours to develop and communicate policy; cost of man-hours to legally review policy
  Benefit: Emphasizes good communication with employees since policies are clearly stated; legal enforcement of policies; increased productivity; decreased access to potentially harmful sites; decreased network downtime; decreased loss of bandwidth
  Risk: Negative employee reaction to strict rules and perceived lack of trust; lack of response by employees

Option 4: Implement employee internet navigation-tracking software; publicize results
  Cost: Cost of software; cost of man-hours to implement software
  Benefit: Increased productivity; decreased access to potentially harmful sites; decreased network downtime; decreased loss of bandwidth
  Risk: Negative employee reaction to strict rules and public disclosure of personal information; negative employee reaction to perceived lack of trust; lack of response by employees

Option 5: No action
  Cost: No improvement in employee productivity; no avoidance of harmful viruses or increased costs
  Benefit: No employee negative reaction; no cost for software application or implementation
  Risk: Lack of productivity; harmful viruses; network downtime
The overall recommendation is a combined strategy that implements Options 2, 3, and 4 to minimize risk.
What are the impacts of current management decisions on future options? In this scenario, the impact of any or all of the possible options on future options is minimal, because selecting any of these options does not eliminate or effectively change the available future options. This is unlike other scenarios where an engineering design results in a system that drives other decisions and eliminates options. Fortunately for current managers, this real-world scenario is less constraining in terms of future options.
PROBLEM I.6: Building a New Major League Baseball Stadium The objective of this exercise is to identify the building blocks of mathematical modeling to evaluate the feasibility and cost-effectiveness of constructing a new stadium. A new stadium is needed to bring Major League Baseball (MLB) back to Washington, DC. Where should it be built, and would bringing professional baseball back generate enough revenue to justify the initial capital investment? The biggest issue associated with this question is long-term economic growth opportunity. At the forefront of this issue is the building of a new stadium, which MLB demanded. The Mayor of DC promised MLB that the city would build a new stadium complex for the right to have a team. He further promised to build the stadium on the Anacostia Waterfront; this would lead to considerable economic development, including residential and commercial real estate. According to some, this could translate into “billions of dollars of real estate investment and tens of thousands of new jobs.” It would also mean the revitalization of an underdeveloped area of DC. In order to build the new stadium at this site, a $440 million financing package would have to be approved by the DC City Council. At first glance, the Mayor appears to have the majority of the Board on his side. However, in public meetings there has been a strong backlash against the new stadium. Opponents argue that the public funding should go to schools and other social initiatives instead of wealthy major league owners. The Council Chairperson has suggested an alternative to the Anacostia Waterfront site: build the new stadium right next to the Robert F. Kennedy (RFK) Stadium. Fewer dollars would be required to build it at this site, and thus more money could be used elsewhere. The RFK site cannot be expanded because it is located in a residentially dense area. Also, this site would not be open to new investors; thus real estate revenue and new jobs would be limited to the stadium itself. The generation of the building blocks for this problem requires recognition of the following questions of risk assessment and management: Risk Assessment: (1) What can go wrong? (2) What is the likelihood that it would go wrong? (3) What are the consequences? Risk Management: (1) What can be done and what options are available? (2) What are the associated tradeoffs in terms of all costs, benefits, and risks? (3) What are the impacts of current management decisions on future options? (Note: This exercise does not require identification of the building blocks of mathematical modeling. Answers to the above questions of risk assessment and management will suffice.)
PROBLEM I.7: Controlling Tank Irrigation for Crops in India A tank irrigation system predominates in south India, as there is no power plant and no long-distance water transportation there. Like a small-scale reservoir, the tank releases water to irrigate crops immediately downstream. Water from the tank is essential to the rice in the field. However, in the dry season, if water were released evenly to all villages, the rice would probably die. The Veeranam Tank is the second-largest tank in south India. It provides water to 120 villages through multiple channels. Before 1994, the volume of water released was usually determined by expert experience. Arumugam and Mohan [1] developed an Integrated Decision Support System to support tank operation. The tank and the fields downstream form an integrated system here. Thus, we need to determine not only the amount of water to release, but also the crop area to be irrigated. Identify all relevant building blocks in this problem.
[1] Arumugam, M., “Integrated decision support system (DSS) for tank irrigation system operation,” Journal of Water Resources Planning and Management, Sept.–Oct. 1997, pp. 266–273.
PROBLEM I.8: Marketing Lumber Ecologically A logging company faces multiple decisions when managing its forests. The company wishes to maximize the amount of money made from cutting down trees, but has to balance that with keeping the forest healthy. This will ensure that the forest will produce a marketable product for years to come. Using the building blocks methodology in formulating a mathematical model, define six basic groups of variables as follows: • Decision variables • Input variables • State variables • Exogenous variables • Random variables • Output variables
PROBLEM I.9: Controlling River Channel Overflow The Marikina River in the Philippines has posed a big challenge to the city government in terms of controlling its channel overflow during monsoon season. The Marikina River is the city of Marikina’s main waterway. The river has supported the economic and social activities of the residents of the surrounding community. However, it has caused the city millions of pesos in terms of emergency response activities, economic losses, and the rehabilitation of affected communities, among others. Identify the components of the building blocks of mathematical modeling to address the adverse effects of the river flooding scenario described above.
PROBLEM I.10: Designing a Car A car designer wishes to develop a car prototype. Since the goal is to ultimately mass-produce this car design to a broad marketplace, several factors need to be considered including reliability, make, style, and other design attributes. The functional and technical requirements of a car prototype are being developed from the standpoint of a designer. The design of the car should increase profit/market share and meet a variety of customer/government specifications. The objective of this exercise is to use building blocks of mathematical modeling to enumerate multiple variables that represent considerations in designing a car prototype. These considerations range from corporate profitability to customer preferences.
PROBLEM I.11: Building a Light Rail System The city council plans to build a light rail system to transport citizens within the city, reduce increasing automobile traffic, and help attract new residents. The rail cars will be powered by electricity and the city plans to use funds that the federal government has promised to help cover the cost of construction. The council would like to optimize the locations of the stations in order to maximize the number of citizens that will use the rail system. In addition, they want it to be cost effective. Identify all relevant building blocks of modeling for the above problem.
PROBLEM I.12: A Reliable School Transit Service A School Transit Service (STS) handles much of the off- and on-grounds transit for students in a university area. STS handles over 3 million rides annually with a focus on service during the academic year. How can STS decisionmakers keep the buses running on schedule? The overall goal of STS is to provide safe, reliable, and courteous transportation to university students, employees, and visitors. Safety comprises passenger security and health while in or around STS vehicles. Reliability refers to on-time service. Courtesy refers to having drivers treat customers with respect. With these goals in mind, identify the relevant building blocks of modeling for a school transit service.
II. Hierarchical Holographic Method
PROBLEM II.1: Analyzing Risks to General Aviation The purpose of this problem is to identify diverse characteristics and attributes of a general aviation system. DESCRIPTION Building a Hierarchical Holographic Model (HHM) will help visualize all of the perspectives as well as the risks for general aviation. General aviation is defined as pertaining to non-commercial and non-military aircraft. METHODOLOGY To identify risk factors, an HHM is generated. The Head Topics cover the major risks to general aviation, with specific details listed under each area as Subtopics. The analysis focuses on the most likely risks. SOLUTION The generated HHM is shown in Figure II.1.1. ANALYSIS Weather: One of the greatest risk factors in flying is bad weather, and this is especially true in general aviation. While nearly all commercial carriers are equipped for flight in instrument weather conditions, many general aviation aircraft are not. Also, many pilots are either untrained or not proficient in instrument operations. Additionally, many general aviation aircraft are smaller and more susceptible to problems with high winds, and often have little or no de-icing equipment. Emergency Response: Limited emergency response capabilities are another area in which risks are higher for general aviation. Airports with large volumes of commercial traffic are likely to have emergency response resources and plans in place. Small general aviation airports with little traffic are much less likely to have good emergency response capabilities. Also, it can be difficult to locate small airplanes that have crashed away from an airport. Because small aircraft are frequently not tracked by air traffic control, they may be difficult to find after a crash or emergency landing.
Flight Operations: There are numerous risks associated with flight operations, many of which are closely tied to the other Head Topics identified in the HHM. Mechanical and electrical failures and human errors are all closely tied to the risks of flight operations. Disorientation, getting lost, running out of fuel, and flying into terrain are just a few of the risks associated with general aviation flight operations. Mechanical Failure: In an aircraft, failure of nearly any mechanical system can be potentially dangerous. Engine failure, flight-control failure, or any structural failure may be catastrophic. Failures in the landing gear, doors, windows, propellers, and even seats can also be serious problems. Electrical Failure: The electrical system is an important component of nearly any aircraft. The engine ignition system is of course critical to engine operation, and some instruments depend on the electrical system. Communications systems are also extremely important and are electrically operated. Particularly around airports, exterior lighting is important to reduce the risk of collision. Human Errors: In almost any endeavor there are risks associated with human errors. In general aviation, the largest risk is pilot error. Because general aviation pilots tend to be less experienced and not as well trained as commercial pilots, the risks of pilot error are greater. Also, there are risks associated with errors by inexperienced pilots in other planes. Errors by air traffic control personnel, both flight and ground controllers, as well as by preflight briefers, constitute other risks. Ground Operations: At busy airports, ground operations can become very hazardous and are a major concern for all air traffic. For general aviation pilots who may be less familiar than commercial pilots are with a given airport, the risks may be increased. Also, because general aviation aircraft tend to be small, a collision with a larger aircraft is likely to be catastrophic. Security: Though general aviation is rarely a target of terrorism or sabotage, this possibility cannot be completely discounted. Also, attacks against commercial carriers could cause secondary effects which affect general aviation, such as damage to airport air traffic control facilities. Also, some level of security is important to keep people from unintentionally causing problems by straying onto runways and taxiways. Keeping animals out of airport operation areas is still another important consideration.
Figure II.1.1. HHM for General Aviation
PROBLEM II.2: Transforming the US Army The purpose of this problem is to identify diverse characteristics and attributes of the U.S. Army. DESCRIPTION The supreme test of any army is to adapt rapidly to circumstances that it cannot foresee. Transformation is the US Army’s answer to this challenge. Modularity, a major part of the transformation campaign, allows the Army to retain a wide range of capabilities while significantly improving its agility and versatility. Joint and expeditionary units, with interchangeable force structures, satisfy the requirements of current and future operational challenges. However, Army modularity is not without its risks. METHODOLOGY A Hierarchical Holographic Model (HHM) for the US Army modularity initiative provides a holistic approach to risk assessment by considering many dimensions as Head Topics. SOLUTION The solution is shown in the HHM (Figure II.2.1). ANALYSIS As a way of addressing emerging security threats and adapting to a changing strategic theatre, the US Army needs the capability to conduct swift, simultaneous, and non-contiguous military operations. The major risk topics considered were: temporal, institutional, organizational, force management, operational, resource allocation, future challenges, and consequences. Under these Head Topics, Subtopics were listed in a risk hierarchy in order to identify areas of risk within Army modularization. This decomposition allows tradeoff analyses to be performed among subsystems, while maintaining the integrity of the overall system. In the real world we are faced with resource constraints, which prevent us from eliminating risk from all possible sources. The HHM approach to the risks of Army modularization allows relaxing some of these constraints by focusing on smaller, more manageable subproblems.
Figure II.2.1. HHM for US Army Modularity
PROBLEM II.3: Developing an Ordnance System The purpose of this problem is to identify diverse characteristics and attributes of an ordnance system. DESCRIPTION Supportive elements as well as risks need to be considered when developing an ordnance system. METHODOLOGY A Hierarchical Holographic Model (HHM) is constructed to reveal the many perspectives of an ordnance system and eliminate sources of failure. SOLUTION The constructed HHM appears in Figure II.3.1. ANALYSIS The HHM helps to identify the sources of risk in the acquisition and development of ordnance systems by breaking down the concepts and developmental aspects into comprehensive categories that support the project. At the same time, the HHM identifies the multiple perspectives of the system that are required for successful development. For example, the HHM identifies data that can be categorized as technical, engineering, management, and support data, as well as data depository reserves. Each of these data subcategories is unique and describes the many and diverse components required for a successful ordnance system. Technical data provides the ordnance specifications, while the engineering data provides the interfaces and assembly of the technical systems.
Figure II.3.1. HHM for Ordnance System
PROBLEM II.4: Assessing Risk to an Air Vehicle The purpose of this problem is to identify diverse characteristics and attributes of an air vehicle. DESCRIPTION We need to determine the components that might contribute a certain amount of risk to the overall performance of the air vehicle. METHODOLOGY To solve this problem, we construct a Hierarchical Holographic Model (HHM) for an air vehicle. This is followed by utilizing the Risk Filtering, Ranking, and Management (RFRM) method to prioritize the most critical sources of risk. SOLUTION The constructed HHM is shown in Figure II.4.1. ANALYSIS The HHM provides a framework of the components in the larger system. As the possible risks are analyzed, they can be prioritized using RFRM to determine which ones pose the most risk and are most important to mitigate. When a new list has been constructed, this process is repeated to identify and manage the components that may contribute the most risk to the successful fielding of the air vehicle. In the HHM below, for example, Weight Management and Control; Fire Control and Stores; Avionics Definition; and Offboard Mission Support could be determined to pose the most risk for a successful design, while Propulsion; System Design; and Pilot Systems may be thought to have little risk. Consequently, the first group will receive more attention in an attempt to reduce the risks.
Figure II.4.1. HHM for an Air Vehicle
PROBLEM II.5: Analyzing Risks for a New River Tunnel Project The purpose of this problem is to identify diverse characteristics and attributes of a new river tunnel building project. A metropolitan area is divided by a river with a few bridges over it. The average highway speed during hurricanes or heavy snowfalls has been down by 20 mph compared to the normal rate of 40 mph. This condition causes heavy traffic jams and can lead to huge economic losses. The state department of transportation is now considering building a river tunnel. To assure the success of this project, the department’s staff needs to characterize the risks in terms of various perspectives. Using HHM, develop comprehensive risk scenarios to capture societal elements taking into account the presence of multiple decision makers and stakeholders.
PROBLEM II.6: Controlling River Flooding A river supports the economic and social activities of the surrounding community. However, controlling its seasonal channel overflow also poses a huge challenge to the government. The adverse effects of the annual river channel overflow have caused the city millions in terms of emergency response activities, economic losses, and the rehabilitation of affected communities, among other aspects. To address the problem, this exercise aims to apply risk assessment through the art and science of modeling. Construct a Hierarchical Holographic Model (HHM) to identify the different perspectives of the river flooding problem described above.
PROBLEM II.7: Developing an HHM for a High-Speed Rail Project The purpose of this problem is to identify diverse characteristics and attributes of a high-speed rail project. A High-Speed Rail (HSR) project is currently the most important transportation goal in one state and will be a key component in its economic future. This is the first state public infrastructure project based on the Build, Operate, Transfer (BOT) method. This means that the state invites investment proposals from the private sector in order to select the main contractor for the project. The state government then negotiates a Construction & Operation Agreement (C&OA) with the main contractor, who will secure financing for the project; design, construct, operate, and maintain the complete system; and ultimately transfer it back to the government. According to the C&OA, the main contractor will build and operate the HSR system for a period of 35 years. The government also grants the main contractor the exclusive right to undertake property development around station areas for a period of 50 years. Construct a Hierarchical Holographic Model (HHM) to identify risks in the HSR project described above. Through identification of potential risks in the HSR project, government decisionmakers will be able to gain insights into all the system components as well as to identify risks and their relationships to system components.
PROBLEM II.8: Waste Management The purpose of this problem is to identify diverse characteristics and attributes of a waste management system. An acid distillation plant located in a small city processes raw-grade reagents such as sulfuric acid (H2SO4). An urgent concern of the company is the proper disposal of the acid byproducts, which are assumed to be extremely injurious. The company currently commissions a trucking company to dispose of its waste to the three treatment facilities located in New York, Arkansas, and Illinois. The company has also considered options such as building its own treatment facility or, better yet, improving its materials and processes to minimize, if not eliminate, the accumulation of the hazardous byproducts. Sulfuric acid is widely used in industrial products such as car batteries. The typical transformation of a raw-grade reagent to a commercial-grade sulfuric acid consists of the following activities: 1. 2. 3. 4. 5. 6. 7. 8. 9.
Procurement of raw-grade reagents from suppliers Airing of reagents to remove sulfates (SO3) and other impurities Distillation Separation of toxic byproducts Cooling of distillate Laboratory testing and inspection Bottling (1-L Containers) and sealing Packaging Delivery of commercial-grade H2SO4 to end-users
The 1-L bottles are double-checked for cleanliness. In-house subcontractors perform three-step washing, which includes dipping the bottles into a detergent solution, rinsing with tap water, and a final rinsing using distilled water. Develop a Hierarchical Holographic Model (HHM) to identify the sources of risk to help determine the best option for disposing of the waste.
PROBLEM II.9: Evaluating Risks for a New Shopping Mall A construction firm wants to develop a new shopping mall. Before starting the project, they would like to check any potential risk scenarios involved in order to avoid or minimize financial and non-financial losses. Develop a Hierarchical Holographic Model (HHM) to capture and represent the diverse characteristics and attributes of building, operating, and maintaining a shopping mall.
PROBLEM II.10: Identifying Risks for the Airline Industry The purpose of this problem is to identify diverse characteristics and attributes of the airline industry. Risk identification is a critical first step to effectively manage and mitigate risks to the airline industry to the maximum extent possible and provide safe and efficient air travel while maintaining profitability and stability. Construct a Hierarchical Holographic Model (HHM) to identify sources of risk for the airline industry by considering planning factors such as level of complexity and industry impact.
III. Decision Analysis
PROBLEM III.1: Developing New Software for the Department of Defense The Department of Defense (DoD) is considering two companies to develop a new software application to improve guidance and control on ballistic munitions. DESCRIPTION Two software manufacturers responded with proposals: we’ll call them Company A and Company B. The DoD’s software performance specifications required cost overrun estimates to be included in each proposal to allow adequate budgeting. Both companies bid approximately $3.0 million for DoD’s initial software development expenditure. Because the proposal specifications from both companies were compatible, DoD will award the contract based on cost overrun estimates.

PART A: METHODOLOGY
Using the fractile method, find the solution and analyze the result.

DoD Preferred Cost Overruns (in Thousands):
Best Case: $0; Worst Case: $250; Most Likely: $75; 50-50 Chance: $25 +/- from Most Likely

SOLUTION
The cost overrun estimates from the software manufacturers were as follows:
Company A (in Thousands): Best Case: $0; Worst Case: $350; Most Likely: $150; 50-50 Chance: $75 +/- from Most Likely
Company B (in Thousands): Best Case: $0; Worst Case: $500; Most Likely: $75; 50-50 Chance: $50 +/- from Most Likely
Table III.1.1. Parameters for the Fractile Distribution Representing Cost Overruns (in $ thousands)
Fractile   DoD     A       B
0.00       $0      $0      $0
0.25       $50     $75     $25
0.50       $75     $150    $75
0.75       $100    $225    $125
1.00       $250    $350    $500
E[XDoD] = (0.25)($0 + ($50 - $0)/2) + (0.25)($50 + ($75 - $50)/2) + (0.25)($75 + ($100 - $75)/2) + (0.25)($100 + ($250 - $100)/2)
        = (0.25)($25) + (0.25)($62.5) + (0.25)($87.5) + (0.25)($175)
        = $87.5

Figure III.1.1. DoD Preferred Cumulative Cost Distribution
Figure III.1.2. DoD Preferred Cost Exceedance (Over Expense) Distribution
Figure III.1.3. Fractile Distribution for DoD’s Preferred Software (Cost, in $)
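The expected-value arithmetic in Part A can be checked with a short script. This is only a verification sketch of the fractile method as applied here: the five fractile points in Table III.1.1 define four equally likely intervals, and E[X] is the average of the four interval midpoints.

```python
# Fractile-method expected values for the cost overrun distributions in Table III.1.1.
def fractile_expected_value(points):
    """points: cost values at the 0, 0.25, 0.50, 0.75, and 1.00 fractiles."""
    midpoints = [(points[k] + points[k + 1]) / 2 for k in range(4)]
    return sum(midpoints) / 4   # each of the four intervals carries probability 0.25

for name, pts in [("DoD", [0, 50, 75, 100, 250]),
                  ("Company A", [0, 75, 150, 225, 350]),
                  ("Company B", [0, 25, 75, 125, 500])]:
    print(f"{name:10s} E[X] = ${fractile_expected_value(pts):.2f} thousand")
# Prints 87.50, 156.25, and 118.75, matching the hand calculations in this part.
```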
E[XA] = (0.25)($0 + ($75 - $0)/2) + (0.25)($75 + ($150 - $75)/2) + (0.25)($150 + ($225 - $150)/2) + (0.25)($225 + ($350 - $225)/2)
      = (0.25)($37.5) + (0.25)($112.5) + (0.25)($187.5) + (0.25)($287.5)
      = $156.25

Figure III.1.4. Company A Cumulative Cost Distribution
Figure III.1.5. Company A Cost Exceedance Distribution
Figure III.1.6. Company A Cost PDF (Cost, in $)
E[XB] = (0.25)($0 + ($25 - $0)/2) + (0.25)($25 + ($75 - $25)/2) + (0.25)($75 + ($125 - $75)/2) + (0.25)($125 + ($500 - $125)/2)
      = (0.25)($12.5) + (0.25)($50) + (0.25)($100) + (0.25)($312.5)
      = $118.75

Figure III.1.7. Company B Cumulative Cost Distribution
Figure III.1.8. Company B Cost Exceedance Distribution
Figure III.1.9. Company B Cost PDF

ANALYSIS
From the calculations, the Department of Defense has a preferred cost overrun expected value of $87.5 thousand. Company A’s proposal cost overrun estimate has an expected value of $156.25 thousand, and Company B’s has an expected value of $118.75 thousand. Neither of the two software manufacturers is within the cost overrun range preferred by the Department of Defense. DoD can: 1) accept the lower estimated cost overrun of $118.75 thousand from Company B; 2) relax some of its software requirements and lower the proposal costs and cost overruns; or 3) budget extra funds for the project to cover cost overruns, including the additional $31.25 thousand ($118.75 - $87.5 = $31.25). Furthermore, 50% of Company B’s cost overruns are less than or equal to DoD’s preferences. Company B exceeds DoD slightly at the 0.75 fractile, and beyond that point Company B’s cost experiences a tremendous jump. Company A is not within DoD’s range for any of the values, but experiences a much smaller jump beyond the 0.75 fractile. Given that the decisionmakers have α = 0.6 and are determined to move forward on awarding the contract to either Company A or B, they would choose Company B.

PART B: METHODOLOGY
Solve the same problem and analyze the result using the following triangular distribution for the construction of the probabilities.

Table III.1.2. Parameters for the Triangular Distribution Representing Cost Overruns (in $ thousands)
Values            DoD    A      B
Lowest (a)        0      0      0
Highest (b)       250    350    500
Most Likely (c)   75     150    75
SOLUTION
Department of Defense (DoD):
Density triangle height: p(c) = 2/(b − a) = 2/(250 − 0) = 0.008

Density p(x):
  p(x) = 2(x − 0)/[(250 − 0)(75 − 0)]         if 0 ≤ x ≤ 75
  p(x) = 2(250 − x)/[(250 − 0)(250 − 75)]     if 75 < x ≤ 250
  p(x) = 0                                    otherwise

Probability distribution P(x):
  P(x) = 0                                        if x < 0
  P(x) = (x − 0)²/[(250 − 0)(75 − 0)]             if 0 ≤ x ≤ 75
  P(x) = 1 − (250 − x)²/[(250 − 0)(250 − 75)]     if 75 < x ≤ 250
  P(x) = 1                                        if x > 250
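The piecewise density and distribution above can be wrapped in two small helper functions of the parameters a (minimum), c (mode), and b (maximum). This sketch is added for illustration only (the function names are ours, not the book's); it matches the DoD case a = 0, c = 75, b = 250.

```python
# Triangular density and CDF for parameters a (min), c (mode), b (max).

def tri_pdf(x, a, c, b):
    if x < a or x > b:
        return 0.0
    if x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    return 2 * (b - x) / ((b - a) * (b - c))

def tri_cdf(x, a, c, b):
    if x < a:
        return 0.0
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x <= b:
        return 1 - (b - x) ** 2 / ((b - a) * (b - c))
    return 1.0

print(tri_pdf(75, 0, 75, 250))   # peak height 2/(b - a) = 0.008
print(tri_cdf(75, 0, 75, 250))   # P(X <= mode) = (c - a)/(b - a) = 0.3
```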
Figure III.1.10. Triangular Distribution for DoD’s Preferred Software
Figure III.1.11. Cumulative Distribution for DoD's Preferred Software

Mean = E[XDoD] = (a + b + c)/3 = ($0 + $250 + $75)/3 = $108.33
Variance = (a² + b² + c² − ab − ac − bc)/18
         = (0² + 250² + 75² − (0)(250) − (0)(75) − (250)(75))/18
         = (0 + 62500 + 5625 − 0 − 0 − 18750)/18 = 49375/18 = 2743.06 dollars²
Standard deviation = √Variance = √2743.06 = $52.37
Company A:
Density triangle height: p(c) = 2/(b − a) = 2/(350 − 0) = 0.005714

Density p(x):
  p(x) = 2(x − 0)/[(350 − 0)(150 − 0)]         if 0 ≤ x ≤ 150
  p(x) = 2(350 − x)/[(350 − 0)(350 − 150)]     if 150 < x ≤ 350
  p(x) = 0                                     otherwise

Probability distribution P(x):
  P(x) = 0                                         if x < 0
  P(x) = (x − 0)²/[(350 − 0)(150 − 0)]             if 0 ≤ x ≤ 150
  P(x) = 1 − (350 − x)²/[(350 − 0)(350 − 150)]     if 150 < x ≤ 350
  P(x) = 1                                         if x > 350
Figure III.1.12. Company A Cost Overrun PDF
Figure III.1.13. Company A Cost Overrun Cumulative Distribution
Mean = E[XA] = (a + b + c)/3 = (0 + 350 + 150)/3 = $166.67
Variance = (a² + b² + c² − ab − ac − bc)/18
         = (0² + 350² + 150² − (0)(350) − (0)(150) − (350)(150))/18
         = (0 + 122500 + 22500 − 0 − 0 − 52500)/18 = 92500/18 = 5138.89 dollars²
Standard deviation = √Variance = √5138.89 = $71.69
Company B:
Density triangle height: p(c) = 2/(b − a) = 2/(500 − 0) = 0.004

Density p(x):
  p(x) = 2(x − 0)/[(500 − 0)(75 − 0)]         if 0 ≤ x ≤ 75
  p(x) = 2(500 − x)/[(500 − 0)(500 − 75)]     if 75 < x ≤ 500
  p(x) = 0                                    otherwise

Probability distribution P(x):
  P(x) = 0                                        if x < 0
  P(x) = (x − 0)²/[(500 − 0)(75 − 0)]             if 0 ≤ x ≤ 75
  P(x) = 1 − (500 − x)²/[(500 − 0)(500 − 75)]     if 75 < x ≤ 500
  P(x) = 1                                        if x > 500
Figure III.1.14. Company B Overrun PDF
Figure III.1.15. Company B Overrun Cumulative Distribution

Mean = E[XB] = (a + b + c)/3 = (0 + 500 + 75)/3 = $191.67
Variance = (a² + b² + c² − ab − ac − bc)/18
         = (0² + 500² + 75² − (0)(500) − (0)(75) − (500)(75))/18
         = (0 + 250000 + 5625 − 0 − 0 − 37500)/18 = 218125/18 = 12118.06 dollars²
Standard deviation = √Variance = √12118.06 = $110.08
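The triangular means, variances, and standard deviations above follow directly from the closed-form expressions E[X] = (a + b + c)/3 and Var = (a² + b² + c² − ab − ac − bc)/18. The sketch below is an illustration only, using the parameters of Table III.1.2 (in $ thousands).

```python
import math

# Closed-form moments of a triangular distribution with minimum a,
# maximum b, and mode c (parameters taken from Table III.1.2).

def triangular_moments(a, b, c):
    mean = (a + b + c) / 3
    var = (a**2 + b**2 + c**2 - a*b - a*c - b*c) / 18
    return mean, var, math.sqrt(var)

cases = {"DoD": (0, 250, 75), "Company A": (0, 350, 150), "Company B": (0, 500, 75)}
for name, (a, b, c) in cases.items():
    mean, var, sd = triangular_moments(a, b, c)
    print(f"{name}: mean = {mean:.2f}, variance = {var:.2f}, std dev = {sd:.2f}")
# Roughly 108.33 / 2743.06 / 52.37, 166.67 / 5138.89 / 71.69, 191.67 / 12118.06 / 110.08
```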
ANALYSIS
Using triangular distributions, the DoD prefers a cost overrun expected value of $108.33, with a variance of 2743.06 dollars² and a standard deviation of $52.37. Company A has a software development cost overrun expected value of $166.67, a variance of 5138.89 dollars², and a standard deviation of $71.69. Company B has a software development cost overrun expected value of $191.67, a variance of 12118.06 dollars², and a standard deviation of $110.08. Neither of the two manufacturers is within the cost overrun desired by the DoD. Using the triangular distribution, Company A is closer to the desired cost overrun expectation, but still requires a $58,330 decrease to meet the DoD expectation mean value. A summary of results from Part A (the fractile method) and Part B (the triangular method) is shown in the following table.

Table III.1.3. Comparison of Expected Overrun Values from Fractile and Triangular Distributions

                     Cost Overrun Expectations ($)
              DoD        Company A    Company B
Fractile      $87.50     $156.25      $118.75
Triangular    $108.33    $166.67      $191.67
Difference    23.81%     6.66%        61.40%
Comparing the two methods for this scenario, the triangular distribution gave higher expected values for all three organizations. Two of the three fractile-method expected values are within one standard deviation of the triangular-distribution expected values; the largest percentage change is Company B at 61.40%. The closest agreement between the two methods is for Company A, with a 6.66% difference. Because of the spread of the data points in this particular scenario, we can have more confidence in the results from the fractile method, since more data points are considered. Hence, from the fractile distribution, Company B is the logical choice. The bottom line for both manufacturers is that neither one has a cost overrun estimate for its software development proposal that meets DoD's desired expectations, regardless of which method is used for the comparison.
PROBLEM III.2: Selection of a Car Service
A businessman who travels often must decide which airport car service to hire, considering his preferences and risk aversion toward being late.

DESCRIPTION
Bill, a business traveler, is attempting to determine which car service he will use for his frequent trips to the airport. He elicits information from three different companies about price, on-time arrival information, and delay statistics. Bill figures that in addition to the cost of the car service, every minute he is late costs him $5 in the first 10 minutes. For every minute he is late after ten minutes, it costs him $20 a minute in stress, increased odds of missed flights, possible missed customer time, etc.

PART A: METHODOLOGY
Use the fractile method to derive the lowest expected cost for the car service.

SOLUTION
Car Service A – Cost to Airport, $40.00
Best-case arrival time = 0 minutes late (on time)
Worst-case arrival time = 30 minutes late
Median arrival time = 7 minutes late
There is a 50-50 chance that the arrival time is within ±5 minutes of the median arrival time.

Car Service B – Cost to Airport, $45.00
Best-case arrival time = 0 minutes late (on time)
Worst-case arrival time = 15 minutes late
Median arrival time = 10 minutes late
There is a 50-50 chance that the arrival time is within ±2 minutes of the median arrival time.

Car Service C – Cost to Airport, $30.00
Best-case arrival time = 0 minutes late (on time)
Worst-case arrival time = 60 minutes late
Median arrival time = 5 minutes late
25% chance arrival time = 2 minutes late
75% chance arrival time = 20 minutes late
Table III.2.1. Comparative Cumulative Distribution Functions (CDFs)

             Time Delay (minutes)
Fractile   Car Service A   Car Service B   Car Service C
0.00       0               0               0
0.25       2               8               2
0.50       7               10              5
0.75       12              12              20
1.00       30              15              60
Figure III.2.1. Graph of Car Service Delay

Generate the expected value of risk of time delay for each choice.

Car Service A
Expected value delay = E[X] = 0.25 * [0 + (2 − 0)/2] + 0.25 * [2 + (7 − 2)/2] + 0.25 * [7 + (12 − 7)/2] + 0.25 * [12 + (30 − 12)/2]
E[X] = 0.25 * 1 + 0.25 * 4.5 + 0.25 * 9.5 + 0.25 * 21 = 9 min.
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $40 + 9 min * ($5/min) = $85

Car Service B
Expected value delay = E[X] = 0.25 * [0 + (8 − 0)/2] + 0.25 * [8 + (10 − 8)/2] + 0.25 * [10 + (12 − 10)/2] + 0.25 * [12 + (15 − 12)/2]
E[X] = 0.25 * 4 + 0.25 * 9 + 0.25 * 11 + 0.25 * 13.5 = 9.375 min.
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $45 + 9.375 min * ($5/min) = $91.88

Car Service C
Expected value delay = E[X] = 0.25 * [0 + (2 − 0)/2] + 0.25 * [2 + (5 − 2)/2] + 0.25 * [5 + (20 − 5)/2] + 0.25 * [20 + (60 − 20)/2]
E[X] = 0.25 * 1 + 0.25 * 3.5 + 0.25 * 12.5 + 0.25 * 40 = 14.25 min.
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $30 + 10 min * ($5/min) + 4.25 min * ($20/min) = $165
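The expected delays and total costs above can be reproduced with a short script. This is a sketch added for illustration, not part of the original solution; it hard-codes the fractile data of Table III.2.1 and the $5/min (first 10 minutes late) and $20/min (thereafter) lateness costs stated in the problem, and, as in the solution, it applies the lateness cost to the expected delay.

```python
# Fractile expected delay plus the piecewise lateness cost used above.

def expected_delay(fractile_points):
    return sum(0.25 * (lo + (hi - lo) / 2)
               for lo, hi in zip(fractile_points, fractile_points[1:]))

def delay_cost(minutes):
    # $5 per minute for the first 10 minutes late, $20 per minute thereafter.
    return 5 * min(minutes, 10) + 20 * max(minutes - 10, 0)

services = {                      # fare, delays at fractiles 0, 0.25, 0.5, 0.75, 1
    "A": (40, [0, 2, 7, 12, 30]),
    "B": (45, [0, 8, 10, 12, 15]),
    "C": (30, [0, 2, 5, 20, 60]),
}

for name, (fare, delays) in services.items():
    e_delay = expected_delay(delays)
    print(f"Service {name}: E[delay] = {e_delay:.3f} min, "
          f"total cost = ${fare + delay_cost(e_delay):.2f}")
# Approximately: A: 9.0 min / $85; B: 9.375 min / $91.88; C: 14.25 min / $165
```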
The cost vs. the expected value of risk is plotted in Figure III.2.2.
Figure III.2.2. Cost vs. the Expected Value of Risk, Fractile Method

ANALYSIS
Based on the results of the fractile method, Car Service A has the lowest total expected cost at $85, even though it is the second most expensive car service. This is because it has the lowest expected delay at 9 minutes. Car Service B is close at a total cost of $91.88. It is slightly more expensive due to a larger initial price and a slightly greater expected delay of 9.375 minutes. Car Service C, with a total expected cost of $165, is considerably more expensive although it has the lowest price; this is due to the high expected delay.
PART B: METHODOLOGY
For the same problem, use the triangular distribution for the construction of the probabilities. Follow the same stages as in Part A.

SOLUTION
Car Service A – Cost to Airport, $40.00
Best-case arrival time (a) = 0 minutes late (on time)
Most-likely arrival time (c) = 7 minutes late
Worst-case arrival time (b) = 30 minutes late

Car Service B – Cost to Airport, $45.00
Best-case arrival time (a) = 0 minutes late (on time)
Most-likely arrival time (c) = 10 minutes late
Worst-case arrival time (b) = 15 minutes late

Car Service C – Cost to Airport, $30.00
Best-case arrival time (a) = 0 minutes late (on time)
Most-likely arrival time (c) = 5 minutes late
Worst-case arrival time (b) = 60 minutes late

Use the triangular distribution as follows:

Car Service A
Expected value delay = E[X] = (a + b + c)/3 = (0 + 7 + 30)/3 = 12.333 min
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $40 + 10 min * ($5/min) + 2.333 min * ($20/min) = $136.67

Car Service B
Expected value delay = E[X] = (a + b + c)/3 = (0 + 10 + 15)/3 = 8.333 min
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $45 + 8.333 min * ($5/min) = $86.67

Car Service C
Expected value delay = E[X] = (a + b + c)/3 = (0 + 5 + 60)/3 = 21.667 min
Total cost = cab cost + expected delay * (cost/delay)
Total cost = $30 + 10 min * ($5/min) + 11.667 min * ($20/min) = $313.33
Figure III.2.3. Cost vs. Expected Value of Risk, Triangular Distribution Method

ANALYSIS
Based on the results of the triangular distribution, Car Service B has the lowest total expected cost at $86.67, even though it is the most expensive option. This is because it has the lowest expected delay at 8.33 minutes. Car Service A is second with a total cost of $136.67, which is more expensive due to the larger expected delay of 12.33 minutes. Car Service C, with a total expected cost of $313.33, is considerably more expensive although it has the lowest price; this is due to the high expected delay of 21.67 minutes.

The solutions obtained using the fractile (Part A) and triangular (Part B) distributions vary because of the different methods used to analyze the car services. In both cases Car Service C is the least viable option. This is because its worst-case scenario is considerably higher than for options A and B, which contributes to high expected delay times under both methods and, in turn, to a high expected total cost. Car Service A has the lowest expected cost using the fractile method and the second-lowest using the triangular distribution; Car Service B has the lowest expected cost using the triangular distribution and the second-lowest using the fractile method. The key difference between the two methods is that Car Service A has lower delay times than Car Service B up through the 0.75 fractile, yet A has a considerably higher worst-case scenario (30 min vs. 15 min). This results in a lower expected value when all the fractiles are taken into account, yet a higher expected value when using the triangular distribution, where the worst-case scenario carries more weight.
PROBLEM III.3: Cafeteria Entrée
A cafeteria is planning to add a new entrée to its menu.

DESCRIPTION
The cafeteria menu choices are steak, chicken, and fish, and their estimated profits are as shown in Table III.3.1.

Table III.3.1. Estimated Profits as a Function of Entrée and Market Appetite

                        MARKET APPETITE
ENTRÉE     Hungry (Excellent)   Moderate (Good)   Full (Poor)
Steak      $4000                $3000             -$2000
Chicken    $3000                $1500             -$1000
Fish       $3000                $750              $500
PART A: METHODOLOGY
Using the Hurwitz rule for decision analysis, solve the problem and analyze the result.

SOLUTION
Based on the estimated profits in Table III.3.1, the payoff matrix is shown in Table III.3.2.

Table III.3.2. Payoff Matrix in $Thousands

             j = 1 (s1)   j = 2 (s2)   j = 3 (s3)
i = 1 (a1)   4            3            -2
i = 2 (a2)   3            1.5          -1
i = 3 (a3)   3            0.75         0.5
The opportunity loss matrix in Table III.3.3 represents the potential profits we lose out on if we choose a particular entrée. For example, if we choose chicken and the market is hungry, the corresponding entry gives the profit forgone relative to the best entrée for that market condition.
Table III.3.3. Opportunity Loss Matrix

                        MARKET APPETITE
ENTRÉE     Hungry (Excellent)   Moderate (Good)   Full (Poor)
Steak      0                    0                 2.5
Chicken    1                    1.5               1.5
Fish       1                    2.25              0
1) Pessimistic case: Minimizing our losses (we want to lose the least):
For a1: min(4, 3, -2) = -2
For a2: min(3, 1.5, -1) = -1
For a3: min(3, 0.75, 0.5) = 0.5

We can take the maximum of the a values to minimize our losses:
Max(a1, a2, a3) ⇒ Max(-2, -1, 0.5) = 0.5 ⇒ a3

2) Optimistic case: Maximizing potential profit:
For a1: max(4, 3, -2) = 4
For a2: max(3, 1.5, -1) = 3
For a3: max(3, 0.75, 0.5) = 3

We can take the maximum of the a values to maximize our potential profit:
Max(a1, a2, a3) ⇒ Max(4, 3, 3) = 4 ⇒ a1
3) Apply the Hurwitz rule: We want to find a compromise between the pessimistic and optimistic rules used in (1) and (2) using the index α (α = 1: pessimistic; α = 0: optimistic):

max_{1≤i≤3} { μi(α) = α · min_{1≤j≤3} μij + (1 − α) · max_{1≤j≤3} μij },   0 ≤ α ≤ 1

For (a1, a2, a3):
At a1: μ1(α) = −2α + 4(1 − α) ⇒ μ1(α) = 4 − 6α
At a2: μ2(α) = −1α + 3(1 − α) ⇒ μ2(α) = 3 − 4α
At a3: μ3(α) = 0.5α + 3(1 − α) ⇒ μ3(α) = 3 − 2.5α
4) The graph below is the result of plotting these equations as a function of alpha.
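The three lines can also be tabulated numerically. The sketch below is an illustration only, not part of the original solution; it evaluates the Hurwitz index for each entrée over a grid of α values and reports which entrée scores highest, reproducing the switch from steak to fish near α = 2/7 noted in the analysis that follows.

```python
# Hurwitz index for the cafeteria problem:
# mu_i(alpha) = alpha * (row minimum) + (1 - alpha) * (row maximum).

payoff = {            # Table III.3.2, in $ thousands
    "steak (a1)":   [4, 3, -2],
    "chicken (a2)": [3, 1.5, -1],
    "fish (a3)":    [3, 0.75, 0.5],
}

def hurwitz(row, alpha):
    return alpha * min(row) + (1 - alpha) * max(row)

for alpha in [0.0, 0.2, 2 / 7, 0.4, 0.6, 0.8, 1.0]:
    scores = {name: hurwitz(row, alpha) for name, row in payoff.items()}
    best = max(scores, key=scores.get)
    print(f"alpha = {alpha:.3f}: best = {best}, mu = {scores[best]:.2f}")
```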
Figure III.3.1. Hurwitz Rule for Cafeteria Entrée Selection

Since 0 ≤ α ≤ 1, the x-axis is the lower bound of 0 and the dashed blue line is the upper bound of 1.

ANALYSIS
Since α ≥ 0 and a3 dominates a2 for all α above 0, we can rule out chicken as an entrée. In order to have the potential for the best profits, management should never choose chicken (a2). For 0 ≤ α ≤ 2/7, the best entrée is steak (a1); and for 2/7 ≤ α ≤ 1, the best entrée is fish (a3).

PART B: METHODOLOGY
Modify the above problem by adding your knowledge of the probabilities of payoff. To decide which entrée to offer, use the triangular method to create a decision tree to minimize the opportunity loss. Analyze your results.
Figure III.3.2. Decision Tree for Cafeteria Entrée Choices

SOLUTION
Referring to Table III.3.1 (Market Appetite), we assign the following probabilities: Hungry (Excellent) = 0.5; Moderate (Good) = 0.4; Full (Poor) = 0.1.

Multiplying the probabilities by the figures from Table III.3.1, we get:
Steak:   0.5*$4,000 + 0.4*$3,000 + 0.1*(-$2,000) = $3,000
Chicken: 0.5*$3,000 + 0.4*$1,500 + 0.1*(-$1,000) = $2,000
Fish:    0.5*$3,000 + 0.4*$750 + 0.1*$500 = $1,850

ANALYSIS
Using these probabilities, we would choose the steak because it will provide the highest expected profits. The Hurwitz Rule approach provides the cafeteria a flexible entrée selection by varying the level of optimism, as shown in Figure III.3.1.
PROBLEM III.4: Replacing Seat Belts on School Buses
Seat belts are wearing out on a county's school buses.

DESCRIPTION
The School Board wants to find out the tradeoffs between potentially preventing students' injuries and the cost of replacing seatbelts on all school buses, some buses, or not replacing any at all.

PART A: METHODOLOGY
Solve the problem and analyze the results using the fractile method. The values in Table III.4.1 represent the percent of possible student injuries on school buses under the three policies being considered.

Table III.4.1. Rate of Potential School Bus Injuries

Fractile   Policy 1: Replace Seatbelts   Policy 2: Replace Seatbelts   Policy 3: Do Not Replace
           on All Buses                  on Some Buses                 Any Seatbelts
0.00       0%                            0%                            0%
0.25       5%                            10%                           20%
0.50       10%                           15%                           40%
0.75       15%                           25%                           60%
1.00       20%                           30%                           80%
SOLUTION
The cumulative distribution functions (CDFs) of the possible injuries under each policy are represented in the following graphs.
Figure III.4.1. Policy 1 CDF: All Buses have Seat Belts
Figure III.4.2. Policy 2 CDF: Some Buses have Seat Belts
Figure III.4.3. Policy 3 CDF: No Buses have Seat Belts
The following graphs show the probability density functions (PDFs).
Figure III.4.4. Policy 1 PDF: All Buses have Seat Belts
Figure III.4.5. Policy 2 PDF: Some Buses have Seat Belts
Figure III.4.6. Policy 3 PDF: No Buses have Seat Belts

Policy 1: E[X] = Σ (i = 1 to 4) pi xi
                = 0.25(0 + (5 − 0)/2) + 0.25(5 + (10 − 5)/2) + 0.25(10 + (15 − 10)/2) + 0.25(15 + (20 − 15)/2)
                = 0.625 + 1.875 + 3.125 + 4.375 = 10

Policy 2: E[X] = Σ (i = 1 to 4) pi xi
                = 0.25(0 + (10 − 0)/2) + 0.25(10 + (15 − 10)/2) + 0.25(15 + (25 − 15)/2) + 0.25(25 + (30 − 25)/2)
                = 1.25 + 3.125 + 5 + 6.875 = 16.25

Policy 3: E[X] = Σ (i = 1 to 4) pi xi
                = 0.25(0 + (20 − 0)/2) + 0.25(20 + (40 − 20)/2) + 0.25(40 + (60 − 40)/2) + 0.25(60 + (80 − 60)/2)
                = 2.5 + 7.5 + 12.5 + 17.5 = 40
Next, we assign costs to each policy, then graph the costs vs. the expected value of risk.
Policy 1 = $1M
Policy 2 = $750K
Policy 3 = $250K
Figure III.4.7. Expected Injury Rate vs. Cost

ANALYSIS
In this case, the number of injuries drops dramatically as spending increases. Spending $750,000 on Policy 2 will save more lives per dollar than spending the $1M it would take to achieve an expected value of risk of 10% injuries with Policy 1.

PART B: METHODOLOGY
For the same problem, use the triangular distribution to construct the probabilities. Then compare the results to those in Part A.

SOLUTION
In this solution, most of the values are based on the results we obtained from the fractile method.

Mean = E[X] = (a + b + c)/3

where a = minimum, c = mode, and b = maximum parameters of a triangular distribution.
Solving for c:
3E[X] = a + b + c
c = 3E[X] − a − b

Now we must find the value of c for each policy:
Policy 1: c1 = 3(10) − 0 − 20 = 10
Policy 2: c2 = 3(16.25) − 0 − 30 = 18.75
Policy 3: c3 = 3(40) − 0 − 80 = 40

Using the values we found for c, we can find the height of each triangle, p(c) = 2/(b − a):
Policy 1: p(c1) = 2/(20 − 0) = 1/10
Policy 2: p(c2) = 2/(30 − 0) = 1/15
Policy 3: p(c3) = 2/(80 − 0) = 1/40

The following graphs summarize the distributions above.
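Backing out the mode c from the fractile-method mean and then computing the peak height p(c) can be scripted directly. A small sketch under the same assumptions (a = 0 and the b and E[X] values used above); it is illustrative only, not part of the original solution.

```python
# Recover the triangular mode from the fractile-method mean:
# c = 3*E[X] - a - b, and peak height p(c) = 2/(b - a).

policies = {            # (a, b, fractile-method E[X]) in percent of injuries
    "Policy 1": (0, 20, 10),
    "Policy 2": (0, 30, 16.25),
    "Policy 3": (0, 80, 40),
}

for name, (a, b, mean) in policies.items():
    c = 3 * mean - a - b
    height = 2 / (b - a)
    print(f"{name}: mode c = {c:.2f}%, peak height p(c) = {height:.4f}")
# c = 10, 18.75, 40 and p(c) = 0.1, 0.0667, 0.025, matching the values above.
```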
Figure III.4.8. Policy 1 Triangular Distribution
Figure III.4.9. Policy 2 Triangular Distribution
Figure III.4.10. Policy 3 Triangular Distribution

ANALYSIS
Recall that the expected value of a triangular distribution is as follows:

E[X] = (a + b + c)/3
Calculating the expected value of the three policies gives the following values (in %):
For Policy 1: E[X] = (0 + 20 + 10)/3 = 10
For Policy 2: E[X] = (0 + 30 + 18.75)/3 = 16.25
For Policy 3: E[X] = (0 + 80 + 40)/3 = 40
Assuming that the most effective policy is the one with the lowest expected value of the percentage of injuries, Policy 1 (all buses have seatbelts) is recommended.
PROBLEM III.5: Testing Aircraft Parts before Installation
The objective of this problem is to analyze how to reduce the probability of installing a faulty aircraft engine part and to minimize the costs associated with testing and repair.

DESCRIPTION
A part of an aircraft engine can be given a test before installation. The test is only 75% reliable: it reveals a defective part, or passes a non-defective part, with probability 0.75. Whether or not the part has been tested, it may undergo an expensive reworking which is certain to produce a part free from defects. If a defective part is installed in the engine, the property loss is $1,000,000. If the reworking is done, the cost is $200,000. Initially, one out of every eight parts is defective. Calculate how much you should pay for the test and determine all the optimum decisions in order to minimize the expected property loss (including cost).

METHODOLOGY
Use decision tree analysis to solve the aircraft problem using the following specifications.

States:
θ1 = Defective
θ2 = Not Defective

Actions:
A1 = Install
A2 = Rework

Priors:
P(Defective) = 1/8 = 0.125
P(Not Defective) = 7/8 = 0.875

Test Results:
X1 = Reveals Part as Defective
X2 = Reveals Part as Not Defective

Conditional Probabilities:
P(X1 | θ1) = 0.75,  P(X2 | θ1) = 0.25
P(X1 | θ2) = 0.25,  P(X2 | θ2) = 0.75
SOLUTION
Find the marginal and posterior probabilities.

Marginal probabilities:
P(X1) = P(X1 | θ1)P(θ1) + P(X1 | θ2)P(θ2) = (0.75)(0.125) + (0.25)(0.875) = 0.3125
P(X2) = (0.25)(0.125) + (0.75)(0.875) = 0.6875

Posterior probabilities (by Bayes' theorem): P(θi | Xj) = P(Xj | θi)P(θi)/P(Xj)
P(θ1 | X1) = (0.75)(0.125)/(0.3125) = 0.3
P(θ2 | X1) = (0.25)(0.875)/(0.3125) = 0.7
P(θ1 | X2) = (0.25)(0.125)/(0.6875) = 0.04545
P(θ2 | X2) = (0.75)(0.875)/(0.6875) = 0.9545
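The marginal and posterior probabilities, the expected losses with and without the test, and the break-even test price can all be reproduced with a few lines. The sketch below is an illustration of the same Bayesian pre-posterior calculation (costs in millions of dollars), added here for convenience rather than taken from the original solution.

```python
# Bayesian pre-posterior analysis for the engine-part test.
# theta = "def" (defective) or "ok"; X1 = test says defective, X2 = test passes.

p_theta = {"def": 0.125, "ok": 0.875}                        # priors
p_x_given_theta = {("X1", "def"): 0.75, ("X2", "def"): 0.25,
                   ("X1", "ok"): 0.25, ("X2", "ok"): 0.75}

loss_if_installed_defective = 1.0                            # $M
rework_cost = 0.2                                            # $M, removes all risk

def expected_loss(p_def):
    """Best action given P(defective): install or rework, whichever is cheaper."""
    return min(p_def * loss_if_installed_defective, rework_cost)

# Without the test: act on the prior probability of a defect.
no_test = expected_loss(p_theta["def"])

# With the test: weight the best action under each posterior by P(X).
with_test = 0.0
for x in ("X1", "X2"):
    p_x = sum(p_x_given_theta[(x, t)] * p_theta[t] for t in p_theta)
    posterior_def = p_x_given_theta[(x, "def")] * p_theta["def"] / p_x
    with_test += p_x * expected_loss(posterior_def)

print(no_test, with_test, (no_test - with_test) * 1e6)   # 0.125, 0.09375, 31250.0
```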
Figure III.5.1. Decision Tree

ANALYSIS
Based on the decision tree in Figure III.5.1:
• The expected loss without testing amounts to $0.125 million.
• The expected loss with testing amounts to $0.09375 million.
• Therefore, the break-even cost of the test would be (0.125 − 0.09375) × 10^6 = $31,250.
• If the test reveals the part as defective → Rework part.
• If the test reveals the part as not defective → Install part.
• If no test is performed → Install part.
PROBLEM III.6: Magic Beanstalk
Jack, a savvy businessman, climbs up a magic beanstalk and finds himself in a giant's kingdom. He sees two huge bags of gold coins, a gold-feathered duck that lays golden eggs, and a gold harp that plays really sweet music. Then a friendly giant appears and says that Jack may take one item, any one he chooses. Jack realizes that he can sell any of those items on the market and make a fortune. As he can take only one item, which should it be?

DESCRIPTION
Without knowing the prevailing market conditions, should Jack choose the gold coins (A), the gold duck (B), or the gold harp (C)? The market for these items fluctuates between excellent, good, and poor, affecting the price he may receive for each of them.

Table III.6.1. Profit as a Function of Market Condition and Item Choice

               Sales Potential ($)
Item     Excellent    Good       Poor
a1       900,000      600,000    150,000
a2       850,000      700,000    50,000
a3       400,000      300,000    200,000
PART A: METHODOLOGY
If Jack had the time, he could first solve this problem using decision analysis (DA) and the Hurwitz Rule.

SOLUTION
Table III.6.2. Payoff Matrix ($1000s)

              j = 1 (s1)   j = 2 (s2)   j = 3 (s3)
i = 1 (a1)    900          600          150
i = 2 (a2)    850          700          50
i = 3 (a3)    400          300          200
Table III.6.3. Opportunity Loss Matrix with Pessimistic and Optimistic Rules

              M1 − μi1    M2 − μi2    M3 − μi3
              (j = 1)     (j = 2)     (j = 3)     Pessimistic   Optimistic
i = 1 (a1)    0           100         50          100           0
i = 2 (a2)    50          0           150         150           0
i = 3 (a3)    500         400         0           500           0
Summarizing the pessimistic and optimistic outcomes for each decision (α = 1: pessimistic; α = 0: optimistic):

mu1(α) = 100,000 · α + 0 · (1 − α) = 100,000 · α
mu2(α) = 150,000 · α + 0 · (1 − α) = 150,000 · α
mu3(α) = 500,000 · α + 0 · (1 − α) = 500,000 · α

Table III.6.4. Applying the Hurwitz Rule
alpha    mu1        mu2        mu3
0        0          0          0
0.1      10,000     15,000     50,000
0.2      20,000     30,000     100,000
0.3      30,000     45,000     150,000
0.4      40,000     60,000     200,000
0.5      50,000     75,000     250,000
0.6      60,000     90,000     300,000
0.7      70,000     105,000    350,000
0.8      80,000     120,000    400,000
0.9      90,000     135,000    450,000
1        100,000    150,000    500,000
Figure III.6.1. Hurwitz Rule Results

ANALYSIS
The line denoting mu1 in Figure III.6.1 lies below the other options for every value of α, so the Hurwitz rule tells us that Option a1 (gold coins) should be chosen. In the fully optimistic case (α = 0), every option has an opportunity loss of 0. Jack picks the gold coins because their $900,000 best-case value makes a1 the option with the smallest maximum opportunity loss. Because no probabilities are used, the Hurwitz rule shows that if one option has a much higher maximum than the others, the opportunity loss can lead to a biased decision.

PART B: METHODOLOGY
Jack can also apply the Decision Tree method to solve the problem.
SOLUTION
a1 (Gold coins): Excellent (0.3)*$0 + Good (0.5)*$100,000 + Poor (0.2)*$50,000 = $60,000
a2 (Gold duck):  Excellent (0.3)*$50,000 + Good (0.5)*$0 + Poor (0.2)*$150,000 = $45,000
a3 (Gold harp):  Excellent (0.3)*$500,000 + Good (0.5)*$400,000 + Poor (0.2)*$0 = $350,000
Figure III.6.2. Decision Tree with EOL Measure

ANALYSIS
The decision tree shows the gold duck (a2) as the choice with the lowest expected opportunity loss ($45,000), which differs from the Hurwitz Rule result of Part A because the decision tree weights each market condition by its probability. One reason is that in the good market, which has the highest probability, the duck has an opportunity loss of 0. The expected opportunity losses for the duck and the gold coins are fairly close and may change with different probabilities of market conditions, while the gold harp has by far the highest expected opportunity loss.
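The expected-opportunity-loss figures in the tree can be checked with a short calculation. This sketch is illustrative only, using the opportunity-loss values of Table III.6.3 and the market probabilities given above.

```python
# Expected opportunity loss (EOL) for each item in the Magic Beanstalk problem.

probs = {"excellent": 0.3, "good": 0.5, "poor": 0.2}
opportunity_loss = {                                  # in dollars
    "gold coins (a1)": {"excellent": 0, "good": 100_000, "poor": 50_000},
    "gold duck (a2)":  {"excellent": 50_000, "good": 0, "poor": 150_000},
    "gold harp (a3)":  {"excellent": 500_000, "good": 400_000, "poor": 0},
}

eol = {item: sum(probs[s] * loss[s] for s in probs)
       for item, loss in opportunity_loss.items()}

for item, value in sorted(eol.items(), key=lambda kv: kv[1]):
    print(f"{item}: EOL = ${value:,.0f}")
# Gold duck $45,000 < gold coins $60,000 < gold harp $350,000
```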
PROBLEM III.7: Issuing a Credit Card
A credit card ("bankcard") company must decide whether to extend a line of credit to an individual who has applied for one. This is a common situation, and one in which the decision made by the institution has ramifications on its future ability to make similar decisions. This is a simple decision process under uncertainty. The primary uncertainty arises from the behavior of the individual once the line of credit has been issued. We consider three possible behavior patterns:

• "Good" Behavior: This is the situation where the individual granted the line of credit pays at least the minimum amount due for the billing cycle every time. The credit card company makes money on the interest of the balances carried between cycles, but the customer adheres to the credit agreement.
• "Late" Behavior: An individual who is granted credit misses some percentage of payments, but still pays with sufficient regularity for the credit card company to leave the line of credit open. We assume that these individuals will ultimately fulfill their credit obligation and not default on the debt. This is the most profitable (therefore most desirable) situation for the company because money is made on interest (on balances carried from period to period) and from late fees.
• Charge-Off: An individual is granted a line of credit, but defaults on the debt. The credit card company either pursues the individual through some type of internal asset recovery division or sells the debt to another company. This situation is expensive for the company and little money is made; there is also the potential for loss.

This is a single-stage decision under uncertainty. The problem can be solved by formulating a decision tree incorporating the following descriptions. The action under consideration, a, is whether to grant a line of credit to a certain individual: a ∈ {a1, a2}, where:
a1 = grant application for line of credit
a2 = reject application for line of credit

We define the random variable b to describe the customer behavior: b ∈ {b1, b2, b3}, where:
b1 = "good" customer behavior
b2 = "late" customer behavior
b3 = charge-off behavior

On the other hand, only one outcome can stem from the action a2 (reject application for line of credit), denoted by b4 (no customer).
The probability of the outcome of bi is given by pi, which is assumed to be constant and independent of the action a. Build the corresponding decision tree. In addition, complete your analysis by supplementing reasonable and relevant decision rules on the decision tree you build.
PROBLEM III.8: Stocking a Specialty Ice Cream Parlor
A popular ice cream parlor in an upscale neighborhood wants to offer a new flavor to its patrons. Which one of three types of flavors should it add to its stock? The parlor has the following payoff matrix for different flavors of ice cream.

Table III.8.1. Payoff Matrix

            Daily Profits (Dollars)
Flavor    Good    Most Likely    Poor
a1        200     175            -25
a2        150     125            -50
a3        125     100            25

Use the Hurwitz rule to decide how to minimize the opportunity loss by following this procedure: (1) Based on the payoff matrix, create the opportunity loss matrix; (2) Applying the pessimistic rule, minimize the maximum loss; (3) Applying the optimistic rule, minimize the minimum loss; (4) Apply the Hurwitz rule, which compromises between the two extremes through the use of the index α; (5) Show your results graphically; and (6) Analyze your results.

In addition, construct a decision tree for the ice cream selection problem. For your analysis, assume that the probabilities of the three states are assigned as follows:
• Pr(Good) = 0.3
• Pr(Most Likely) = 0.5
• Pr(Poor) = 0.2
PROBLEM III.9: Choosing Quality of Wine to Produce
A vineyard is trying to determine which quality of wine it should produce, given that it has 100 acres of spare land that can be used for planting. After soil tests come in, the vineyard's owner realizes that 100 more acres than expected are ready for planting. The owner needs to decide which of his three wines to produce from the extra acreage: high, medium, or low quality wine. Depending on the weather during the upcoming year, the revenue the three wines will bring in is displayed in Table III.9.1.

Table III.9.1. Revenues of Different Types of Wine Given Weather Conditions

                              Weather
Quality of Wine    Favorable    Fair        Poor
High               $500,000     $250,000    -$200,000
Medium             $300,000     $200,000    -$10,000
Low                $100,000     $80,000     $25,000

Use the Hurwitz rule to decide how to minimize the opportunity loss by following this procedure: (1) Based on the payoff matrix, create the opportunity loss matrix; (2) Applying the pessimistic rule, minimize the maximum loss; (3) Applying the optimistic rule, minimize the minimum loss; (4) Apply the Hurwitz rule, which determines the sensitivity of the wine qualities through the use of the index α; (5) Show your results graphically; and (6) Analyze your results.

Use a decision tree to derive the expected opportunity loss based on the weather. Assume that the probabilities of the three states are assigned as follows:
• Pr(Favorable) = 0.3
• Pr(Fair) = 0.55
• Pr(Poor) = 0.15
PROBLEM III.10: Expanding Aircraft Fleet
An airline is interested in expanding its fleet of aircraft and must decide what type of aircraft to purchase as an addition to the fleet. The airline is interested in the growth and development of its business; to support this, it decides to purchase a new aircraft to add to its existing fleet. Through the use of decision analysis and several other techniques, the airline will decide in which new aircraft to invest.

PART A
A regional airline is interested in expanding its current fleet of aircraft. It would like to determine which size plane to purchase based on profits (gross ticket revenue), number of passengers per type of plane, and amount of cargo space. For simplicity, this problem uses the metric "passenger capacity" to denote both number of passengers and associated cargo space (Table III.10.1).

Table III.10.1. Profit as a Function of Ticket Sales and Plane Size

                                  Ticket Sales Revenue
Plane Size (Passenger Capacity)   Full         Half-full    Mostly Empty
Small                             $200,000     $50,000      ($80,000)
Medium                            $300,000     $175,000     ($40,000)
Large                             $150,000     $100,000     $40,000

To make optimal decisions, utilize the Pessimistic, Optimistic, and Hurwitz rules in your analysis.

PART B
Modify the above problem by adding your knowledge of the payoff probabilities. Create a decision tree to decide how to minimize the opportunity loss. Analyze your results. Assume that the probabilities of the three states are assigned as follows:
• Pr(Full) = 0.3
• Pr(Half-full) = 0.5
• Pr(Mostly Empty) = 0.2
PROBLEM III.11: Dealing with Inefficient Machines
A manufacturing company must decide how to deal with equipment that is not producing at its full capacity. The company has realized that one of its machines is producing at 50% of its usual capacity. After performing some diagnostic tests, the company realizes that it has three options, as follows:
1) Do nothing and continue manufacturing: Cost = $0
2) Fix the machine: Cost = $40
3) Replace the machine: Cost = $100

Derive the probability density functions (PDFs) and cumulative distribution functions (CDFs) from Table III.11.1 using the fractile method. Analyze your result by adding the expected productivity loss.

Table III.11.1. Fractile Method (% Productivity Loss)

              Best (0)   25th (0.25)   Median (0.5)   75th (0.75)   Worst (1)
Do Nothing    50         60            75             85            100
Fix           20         30            40             50            60
Replace       0          10            15             20            25

In addition, do the same analysis using the triangular distribution.

Table III.11.2. Triangular Distribution

              Best   Most Likely   Worst
Do Nothing    50     75            100
Fix           20     40            60
Replace       0      15            25
PROBLEM III.12: Minimizing Opportunity Loss
A cardboard box manufacturer is trying to decide what size boxes to produce for the upcoming Christmas season. For this problem, the decisionmaker must estimate profits as a function of sales potential and box size, as shown in Table III.12.1.

Table III.12.1. Sales Potential (in thousands of dollars)

Box Size    Excellent    Good    Fair
Small       1500         1200    1000
Medium      1700         1400    1200
Large       1600         1500    1400

Perform decision analysis to solve the above problem by following the steps below:
• Create the Payoff Matrix.
• Based on the Payoff Matrix, create the Opportunity Loss Matrix.
• Apply the Pessimistic Rule (maximize the minimum gain).
• Apply the Optimistic Rule (maximize the maximum gain).
• Apply the Hurwitz Rule, which compromises between the two extremes through the use of the index α.
• Show your results graphically.
• Analyze your results.

Extend your results with the Decision Tree method by assuming that the probabilities of the three states are as follows:
• Pr(Excellent) = 0.1
• Pr(Good) = 0.4
• Pr(Fair) = 0.5
PROBLEM III.13: Snow for a Ski Resort
The owner of the Sliding By Ski Slopes in the southern Pennsylvania mountains is trying to decide whether or not to rent, and perhaps later buy, snow-making equipment for the coming years. He has lived in the area and operated the ski resort for only the past four winters. There are three rental equipment options: 1) rent none; 2) rent enough to provide snow for about 30% of the trails; and 3) rent enough to provide snow for about 60% of the trails. He has projected profits for each of these alternatives under three conditions: very little snow, average snow, and heavy snow. The data are included in the following table.

Table III.13.1. Projected Profits

                                          State
Alternative Actions    Little or no snow    Average snow    Heavy snow
No Rental              -$200,000            $200,000        $600,000
30% trails open        -$100,000            $200,000        $500,000
60% trails open        $100,000             $200,000        $400,000

The owner has decided to use decision analysis to help him make the decision. Previously, he was leaning toward renting the minimal amount of equipment. However, the past two or three years have had decent amounts of snowfall, and the owner intends to draw on recent experience to help him make up his mind. Also, being a cautious man, if he does decide to rent the equipment, he will use the results of the rental season(s) to help him decide whether or not to buy the equipment.

Analyze the decisions of the owner by using the Hurwitz Rule and a Decision Tree. Assume the following probability values:
• Pr(Little or no snow) = 0.25
• Pr(Average snow) = 0.5
• Pr(Heavy snow) = 0.25
IV. Surrogate Worth Tradeoff
PROBLEM IV.1: Selecting the Location of a New University Bus Stop
A university plans to add a shuttle stop near the stadium, aimed at helping students easily walk to their classrooms and a dining hall.

DESCRIPTION
If the stadium is the center of the coordinates, the classroom building is at (100, 300) and the dining hall is at (200, 400). We assume that 1) students can walk directly to either of them from the bus stop, and that 2) the stadium area has the space for it. Where should we locate this new bus stop so that the walking distance can be shortest?

METHODOLOGY
The objective of this problem is to minimize the walking distance from the bus stop. We use the Surrogate Worth Tradeoff (SWT) method to determine a bus stop location that will satisfy both destinations. The objective function is:
min  f1(x1, x2) = √[(x1 − 100)² + (x2 − 300)²]
     f2(x1, x2) = √[(x1 − 200)² + (x2 − 400)²]          (IV.1.1a)

Because the quantity under the square-root operation is always equal to or larger than zero, the objective function is equivalent to:

min  f1(x1, x2) = (x1 − 100)² + (x2 − 300)²
     f2(x1, x2) = (x1 − 200)² + (x2 − 400)²             (IV.1.1b)
SOLUTION
First convert Eq. (IV.1.1b) into the ε-constraint form presented by Eq. (IV.1.2):

min  f1(x1, x2)
subject to  f2(x1, x2) ≤ ε2          (IV.1.2)
Form the Lagrangian function:

L(x1, x2, λ12) = (x1 − 100)² + (x2 − 300)² + λ12[(x1 − 200)² + (x2 − 400)² − ε2]          (IV.1.3)

By the Kuhn-Tucker necessary conditions, derive Eqs. (IV.1.4) to (IV.1.7):

∂L(·)/∂x1 = 2(x1 − 100) + 2λ12(x1 − 200) = 0          (IV.1.4)
∂L(·)/∂x2 = 2(x2 − 300) + 2λ12(x2 − 400) = 0          (IV.1.5)
∂L(·)/∂λ12 = (x1 − 200)² + (x2 − 400)² − ε2 ≤ 0       (IV.1.6)
λ12[(x1 − 200)² + (x2 − 400)² − ε2] = 0               (IV.1.7)
λ12 ≥ 0

Equation (IV.1.4) yields:

λ12 = −(x1 − 100)/(x1 − 200)          (IV.1.8)

Equation (IV.1.5) yields:

λ12 = −(x2 − 300)/(x2 − 400)          (IV.1.9)

From Eqs. (IV.1.8) and (IV.1.9) we get:

λ12 = −(x1 − 100)/(x1 − 200) = −(x2 − 300)/(x2 − 400)

x2 = x1 + 200          (IV.1.10)

Upper and lower limits on x1 and x2 may easily be derived by satisfying Eqs. (IV.1.8) and (IV.1.9), as follows:

100 < x1 < 200
300 < x2 < 400
Samples of Pareto-optimal solutions are shown in Table IV.1.1.

Table IV.1.1. Noninferior Solutions and Tradeoff Values

x1     x2     f1(x1, x2)    f2(x1, x2)    λ12
110    310    200           16200         0.1111
120    320    800           12800         0.2500
140    340    3200          7200          0.6667
160    360    7200          3200          1.5000
180    380    12800         800           4.0000
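The rows of Table IV.1.1 can be generated directly from the closed-form relations derived above (x2 = x1 + 200 on 100 < x1 < 200, with tradeoff λ12 = −(x1 − 100)/(x1 − 200)). The sketch below is illustrative only, not part of the original solution.

```python
# Noninferior solutions for the bus-stop problem from the Kuhn-Tucker relations.

def bus_stop_point(x1):
    x2 = x1 + 200                                  # Eq. (IV.1.10)
    f1 = (x1 - 100) ** 2 + (x2 - 300) ** 2         # distance^2 to classroom
    f2 = (x1 - 200) ** 2 + (x2 - 400) ** 2         # distance^2 to dining hall
    lam = -(x1 - 100) / (x1 - 200)                 # Eq. (IV.1.8)
    return x2, f1, f2, lam

for x1 in (110, 120, 140, 150, 160, 180):
    x2, f1, f2, lam = bus_stop_point(x1)
    print(f"x1={x1}, x2={x2}, f1={f1}, f2={f2}, lambda12={lam:.4f}")
```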
The values of surrogate worth functions generated by the decisionmaker selecting this bus stop are tabulated as W12 in the following table:
Table IV.1.2. Noninferior Solutions, Tradeoff Values, and Surrogate Worth Function Values
x1     x2     f1(x1, x2)    f2(x1, x2)    λ12       W12
110    310    200           16200         0.1111    +8
120    320    800           12800         0.2500    +6
140    340    3200          7200          0.6667    +3
150    350    5000          5000          1.0000    0
160    360    7200          3200          1.5000    -3
180    380    12800         800           4.0000    -6

When W12 = 0, we get the preferred solution: x1 = 150, x2 = 350.
Figure IV.1.1. Noninferior Solution in the Functional Space
Figure IV.1.2. Noninferior Solution in the Decision Space
Figure IV.1.3. Tradeoff Function λ12(f2) versus f2(x)

ANALYSIS
From the plots in Figures IV.1.1 to IV.1.3, if the decision is on the Pareto frontier and we want a smaller f2(x) (that is, a shorter distance to the classroom building), both λ12 and f1 will increase. As shown in Figure IV.1.1, if we want a smaller f2 value than that of Point A, we could move to Point B (on the Pareto frontier) or Point C (not on the Pareto frontier), and both of these points have larger f1 values than Point A. This means we cannot obtain more benefit (a shorter distance to classes) from f2 without sacrificing f1 (i.e., students will have a longer walk to the dining hall). Similarly, on the Pareto frontier we cannot obtain more benefit from f1 without sacrificing f2. This validates the Pareto solution yielded above.
PROBLEM IV.2: Investing in Stock Market and Small Family Business
The purpose of this problem is to maximize profit by allocating a person's time and budget between the stock market and a small family business.

DESCRIPTION
Todd has a total of $20,000 to split up between investing in the stock market and reinvesting in a small family business. He has 8 hours available every day to divide between researching stocks and working on his small business. It goes without saying that he wants to maximize the profit of both his stock investment and his business pursuit.

METHODOLOGY
We use the Surrogate Worth Tradeoff (SWT) method to solve this multiobjective problem. Let x1 denote the hours Todd would spend researching stocks, and x2 the amount of money (measured in thousands of dollars) he would invest in the stock market. Also, let f1(x1, x2) denote the profit he could make through stock investment in 5 years, and f2(x1, x2) denote the profit he could make from his business in 5 years. Now let us describe f1(x1, x2) and f2(x1, x2) by the following simplified mathematical relations:

f1(x1, x2) = 10 x2 (1 − e^(−x1/8))          (IV.2.1)
f2(x1, x2) = (8 − x1)(20 − x2)              (IV.2.2)

Hence, we can formulate our problem in the following fashion:

Max f1(x1, x2)          (IV.2.3)
Max f2(x1, x2)          (IV.2.4)
xi ≥ 0,   x1 ∂L/∂x1 = 0,   x2 ∂L/∂x2 = 0

SOLUTION
This is a typical multiobjective tradeoff problem. We need to find the noninferior solutions as well as the Pareto-optimal one via analysis based on the Kuhn-Tucker theorem. To facilitate our analysis, we first reformulate the model in the standard ε-constraint form:
Min  −f1(x1, x2)          (IV.2.5)
s.t. −f2(x1, x2) ≤ ε2     (IV.2.6)
     x1 ≥ 0,  x2 ≥ 0

Equivalently, if we define f̃1(x1, x2) = −f1(x1, x2) and f̃2(x1, x2) = −f2(x1, x2), then we can rewrite (IV.2.5) and (IV.2.6) as follows:

Min  f̃1(x1, x2)          (IV.2.5')
s.t. f̃2(x1, x2) ≤ ε2     (IV.2.6')
     x1 ≥ 0,  x2 ≥ 0

The Lagrangian formulation is defined as:

L = f̃1(x1, x2) + λ12[f̃2(x1, x2) − ε2]
  = 10 x2 (e^(−x1/8) − 1) + λ12[(x1 − 8)(20 − x2) − ε2]          (IV.2.7)
The Kuhn-Tucker conditions lead to:

∂L/∂x1 = 0  ⇒  10 x2 (−1/8) e^(−x1/8) + λ12(20 − x2) = 0
            ⇒  λ12 = 10 x2 e^(−x1/8) / [8(20 − x2)]          (IV.2.8)

∂L/∂x2 = 0  ⇒  10(e^(−x1/8) − 1) + λ12(8 − x1) = 0
            ⇒  λ12 = 10(1 − e^(−x1/8)) / (8 − x1)            (IV.2.9)

∂L/∂λ12 ≤ 0  ⇒  (x1 − 8)(20 − x2) − ε2 ≤ 0                   (IV.2.10)

λ12 ∂L/∂λ12 = 0  ⇒  λ12[(x1 − 8)(20 − x2) − ε2] = 0          (IV.2.11)

λ12 > 0          (IV.2.12)
Since λ12 > 0 guarantees a noninferior solution, (IV.2.10)–(IV.2.12) lead to

(x1 − 8)(20 − x2) − ε2 = 0,   λ12 > 0

Eq. (IV.2.12) requires:

20 − x2 > 0  ⇒  0 < x2 < 20          (IV.2.13)
8 − x1 > 0   ⇒  0 < x1 < 8           (IV.2.14)

From (IV.2.8) and (IV.2.9) we know that:

10 x2 e^(−x1/8) / [8(20 − x2)] = 10(1 − e^(−x1/8)) / (8 − x1)

This leads to:

x2 = 160(1 − e^(−x1/8)) / (8 − x1 e^(−x1/8))          (IV.2.15)

Eqs. (IV.2.13), (IV.2.14), and (IV.2.15) together determine the Pareto-optimal solutions to our problem. Table IV.2.1 shows the data, which are depicted in Figure IV.2.1 (the noninferior solutions in the decision space). Figure IV.2.2 depicts the solution in the functional space. Needless to say, the negative of the slope of the Pareto frontier represents the tradeoff, i.e., λ12 = −∂f2(·)/∂f1(·).

Table IV.2.1. Noninferior Solutions and Tradeoff Values
x1      x2       f1(x1, x2)   f2(x1, x2)   λ12
0.5     1.29     0.78         140.35       0.08
1.0     2.64     3.10         121.51       0.17
1.5     4.05     6.92         103.68       0.26
2.0     5.49     12.15        87.04        0.37
2.5     6.96     18.68        71.73        0.49
3.0     8.43     26.35        57.87        0.63
3.5     9.88     35.00        45.55        0.79
4.0     11.29    44.44        34.82        0.98
4.5     12.66    54.48        25.68        1.23
5.0     13.97    64.91        18.10        1.55
5.5     15.20    75.55        12.01        1.99
6.0     16.34    86.23        7.32         2.64
6.5     17.40    96.78        3.90         3.71
7.0     18.36    107.06       1.64         5.83
7.5     19.23    116.97       0.39         12.17
7.99    19.99    126.24       0.00         631.66
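Table IV.2.1 follows from Eq. (IV.2.15) together with the definitions of f1, f2, and λ12. The sketch below, which is illustrative only and not part of the original solution, regenerates a few of the rows (values in the problem's units of hours and thousands of dollars).

```python
import math

# Noninferior solutions for the stock/business problem:
# x2 from Eq. (IV.2.15), lambda_12 from Eq. (IV.2.9).

def investment_point(x1):
    g = math.exp(-x1 / 8)
    x2 = 160 * (1 - g) / (8 - x1 * g)          # Eq. (IV.2.15)
    f1 = 10 * x2 * (1 - g)                     # stock profit
    f2 = (8 - x1) * (20 - x2)                  # business profit
    lam = 10 * (1 - g) / (8 - x1)              # Eq. (IV.2.9)
    return x2, f1, f2, lam

for x1 in (0.5, 2.0, 4.0, 6.0, 7.5):
    x2, f1, f2, lam = investment_point(x1)
    print(f"x1={x1:.1f}: x2={x2:.2f}, f1={f1:.2f}, f2={f2:.2f}, lambda12={lam:.2f}")
```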
ANALYSIS
In this example, we have discovered the noninferior solutions and the Pareto optimum for a multiobjective tradeoff problem (see Figure IV.2.1). Figure IV.2.2 shows the tradeoffs that Todd has to make to arrive at his decision.
Figure IV.2.1. Noninferior solutions in the decision space
Figure IV.2.2. Noninferior solutions in the functional space
PROBLEM IV.3: Finding a Student Apartment
The purpose of this problem is to find an apartment that provides easy access to a particular university and requires the minimum possible rent.

DESCRIPTION
A student is looking for an apartment near the university. For an apartment to be "optimal," it must be inexpensive and allow for a quick, easy commute to campus. Thus, two of the student's objectives are to minimize cost and to minimize commute time.

METHODOLOGY
For this multiobjective problem, we use the Surrogate Worth Tradeoff (SWT) method to calculate the tradeoffs between the two objectives to be minimized: rent cost (f1) and commuting time (f2). The two decision variables used to optimize these objectives are distance (x1, in kilometers) and access (x2, a subjective score of how easy the commute is, with a smaller absolute value being much easier). Off-campus housing does not start until three-quarters of a kilometer from the school, and commute time is measured in minutes.

SOLUTION
In functional form, the objective functions are as follows:
min  f1(x1, x2) = 500/x1² + 75 x2
     f2(x1, x2) = 30 x1 + 5 x2²

Translating this into the ε-constraint form gives:

min  f1(x1, x2)
s.t. f2(x1, x2) ≤ ε2
     x1, x2 ≥ 0

where ε2 is set no smaller than the minimum of the second objective function. The Lagrangian function for this optimization problem is:

L(x1, x2, λ12) = 500/x1² + 75 x2 + λ12[30 x1 + 5 x2² − ε2]
Taking the partial derivatives of this function and using the Kuhn-Tucker conditions yields:

∂L/∂x1 = −1000/x1³ + 30 λ12 = 0          (IV.3.1)
∂L/∂x2 = 75 + 10 λ12 x2 = 0              (IV.3.2)
∂L/∂λ12 = 30 x1 + 5 x2² − ε2 = 0,  where λ12 > 0 ensures Pareto optimality.

Solving (IV.3.1) and (IV.3.2) for λ12 yields:

λ12 = 100/(3 x1³)   from (IV.3.1),   and   λ12 = −75/(10 x2)   from (IV.3.2).

This leads to:  x2 = −(225/1000) x1³.
ANALYSIS
Plotting this in the decision space gives the student a picture of how the two decision variables relate to each other, as shown in Figure IV.3.1. Plotting the values of the two objective functions gives a picture of the Pareto-optimal frontier, as shown in Figure IV.3.2. Figures IV.3.1 and IV.3.2 show that the student can expect to pay a lower rent if the residence is farther away from the campus. However, this benefit can be offset by the longer commute. Thus, these two factors construct a noninferior solution frontier. Considering this frontier would help the student select his or her optimal accommodation.
Figure IV.3.1. Noninferior Solution in Decision Space
Figure IV.3.2. Noninferior Solution in Functional Space
PROBLEM IV.4: Balancing Exercise and Sleep
A student wants to exercise every morning, but also likes to sleep longer.

DESCRIPTION
A student likes to work out for 60 minutes in the morning, but she only has that much free time before classes begin. The physiological benefit of exercise per minute increases the longer she works out. Because she is usually up late studying, she would also like to be able to sleep longer. Obviously, the longer she sleeps, the less time she can spend at the gym. In other words, the increase in rest per minute of sleeping in will decrease the time available for working out.

METHODOLOGY
In order to analyze the tradeoff between the two objectives of increased rest and increased benefit from exercise, we utilize the Surrogate Worth Tradeoff (SWT) method.

SOLUTION
The first step is to develop the following model.

Definition of variables:
R = rest (units of well-being)
E = benefit of exercise (units of health)
r = sleeping in (activity)
e = exercising (activity)
Ti = time spent on activity i (min), where i = {r, e}

Decision variables:
fR = objective function, maximize rest
fE = objective function, maximize benefit from exercise

Model:
E = (TE/30)²
R = (TR/15)^(1/2)
fR = max {R} = max {(TR/15)^(1/2)}
fE = max {E} = max {(TE/30)²}
s.t.  TR + TE ≤ 60,  TR ≥ 0,  TE ≥ 0
Figure IV.4.1. Well-being and Health in the Decision Space

If TR + TE < 60, one objective could be improved without compromising the other, so such a point cannot be Pareto optimal; at a Pareto-optimal solution, time spent sleeping in plus time spent exercising must equal 60 minutes. Given TR + TE = 60, the student can solve for TE and substitute the solution into the first objective function. The two objectives then become:

fR = max {R} = max {(TR/15)^(1/2)}
fE = max {E} = max {((60 − TR)/30)²}
s.t.  0 ≤ TR ≤ 60

Multiobjective problems like this one can also be restructured as a single-objective problem by converting the other objectives into constraints. This is done by mandating that each converted objective meet some minimal requirement. By doing this, a Lagrangian function can be formed that incorporates the minimum requirements for the additional objectives into one objective function. According to the Surrogate Worth Tradeoff (SWT) method, the value of the Lagrange multiplier can be found by taking the negative of the derivative of one objective function with respect to the other. The Lagrange multiplier, denoted by λE,R, is the cost or value added to the objective function per unit of change in the constraint (the second objective function). In this case, the Lagrangian is:

L = (TR/15)^(1/2) + λE,R[((60 − TR)/30)² − εE]
∂L/∂TR = 1/(2√(15 TR)) − λE,R · 2(60 − TR)/30² = 0

λE,R = 900/[(120 − 2TR) · 2√(15 TR)] = 225/[(60 − TR)√(15 TR)]

ANALYSIS
In the SWT method, λE,R must not be negative. Examining this equation, it can be seen that the feasible region for a noninferior solution is 0 ≤ TR ≤ 60. This confirms the bounds previously established for time spent sleeping in. Table IV.4.1 and Figures IV.4.2, IV.4.3, and IV.4.4 show noninferior solutions to this multiobjective problem.

Table IV.4.1. Noninferior Solutions and Tradeoff Values

TR    TE    fE      λE,R    fR
60    0     0       0       2
45    15    0.25    0.12    1.73
30    30    1       0.19    1.41
15    45    2.25    0.2     1
0     60    4       ∞       0
Figure IV.4.2. Noninferior Solution in the Functional Space
Figure IV.4.3. Noninferior Solution in the Decision Space
Figure IV.4.4. Tradeoff Function λE,R versus fR
PROBLEM IV.5: Survival and Colonization in Space
Planet Zysk has had space travel for several centuries and its inhabitants have traveled to all the planets in its solar system. Their science is extremely well-advanced, and the planet's scientists and top administrators have been aware for some time that their sun would become a supernova sometime within the next 300–700 years. The projected date of the disaster is still somewhat unclear, but for the past 100 years, the planet's scientists have been trying desperately to develop a faster-than-light (FTL) spaceship to transport its population to other solar systems so that they can survive the coming catastrophe. They have finally succeeded. The spaceship has been tested and is successful over fairly short distances (several light-years), and the planet's leaders are now turning their attention to planning colonizing expeditions.

DESCRIPTION
As soon as practicable, Planet Zysk wants to send off as many expeditions as possible so that they can learn which colonies are most successful in order to plan full-scale evacuations. From their previous planetary expeditions, Zysk's scientists have developed a so-called survival index. This helps them to estimate the chances of survival of both the colony ship and the colony itself. It is not known whether this index is completely applicable to long-range FTL expeditions, but it must be used in lieu of any other information.

METHODOLOGY
From empirical data, the scientists have also formed a function which predicts the amount of time necessary to prepare expeditions of different sizes. They hope to minimize their time function while maximizing their survival index. Like those on Earth, Planet Zysk's risk analysts have developed the Surrogate Worth Trade-off (SWT) method. They will use this to help them decide how long they should take to prepare and how many Zyskians to send on the expeditions. Because the SWT method is meant to minimize two functions, Zysk's scientists realize that maximizing the positive survival index is the same as minimizing the negative survival index. Both the survival index and time are functions of the number of Zyskians and the number of pounds of equipment that will be sent. A panel of three leading scientists will decide on the worth of any choices given: whether time should be minimized at the expense of the survival index, or the opposite. There is a general consensus, before looking at the figures, that one Zysk year (1003 of their days) is probably a reasonable time to shoot for, but there is considerable disagreement on what the Negative Survival Index (NSI) should be: figures range from -2000 to as high (or low) as -9000.

Decision Variables:
x1 = number of Zyskians to be on any one ship
x2 = number of pounds of equipment to be sent on any ship
Problem Statement: These are the functions to be minimized:

Time = f1 = x1² + 3x2² + 40   (in Zyskian days)
Negative Survival Index = f2 = −5(x1 − 50)² − (1/2)(x2 − 10)²   (x2 in Zyskian pounds)

SOLUTION
1) Put in ε-constraint form:

Minimize f1 = x1² + 3x2² + 40
s.t.  f2 = −5(x1 − 50)² − (1/2)(x2 − 10)² ≤ ε2
      ε2 ≥ f2 + δ2,  δ2 ≥ 0

2) Form the Lagrangian:

L(x1, x2, λ12) = f1 + λ12(f2 − ε2) = x1² + 3x2² + 40 + λ12[−5(x1 − 50)² − (1/2)(x2 − 10)² − ε2]

3) Apply the Kuhn-Tucker necessary conditions:

∂L/∂x1 = 2x1 − 10λ12(x1 − 50) = 0
∂L/∂x2 = 6x2 − λ12(x2 − 10) = 0

4) Solve for λ12 in each and set them equal:

x2 = −10x1/(29x1 − 1500)
Spaceship Mission Table IV.5.1. Noninferior Solutions
x1
Time x12 + 3x22 + 40
x2
Negative Survivability -5(x1-5)2 – ½(x2-10)2
50.1
10.63694
2,889.44
-10,170
50.2
11.35747
2,947.02
-10,216
50.3
12.17918
3,015.09
-10,263
50.4
13.125
3,096.96
-10,311
50.5
14.22535
3,197.33
-10,360
50.6
15.52147
3,323.11
-10,412
50.7
17.07071
3,484.72
-10,467
50.8
18.95522
3,698.54
-10,528
50.9
21.29707
3,991.51
-10,598
51.0
24.28571
4,410.39
-10,682
51.1
28.23204
5,042.35
-10,792
51.2
33.68421
6,065.32
-10,953
51.3
41.70732
7,890.19
-11,221
51.4
54.68085
11,651.95
-11,763
51.5
79.23077
21,524.79
-13,208
51.6
143.3333
64,335.89
-19,747
51.7
738.5714
1,639,176.16
-276,313
Figure IV.5.1. Noninferior solution in the functional space
ANALYSIS

Table IV.5.2. Noninferior Solutions and Trade-off Values

x1      x2          f1              f2          λ12         Worth
50.1    10.63694    2,889.44        -10,170     100.2       5
50.5    14.22535    3,197.33        -10,360     20.2        4
51      24.28571    4,410.39        -10,682     10.2        3
51.2    33.68421    6,065.32        -10,953     8.533333    1
51.5    79.23077    21,524.79       -13,208     6.866667    -2
51.7    738.5714    1,639,176.16    -276,313    6.082353    -4
[Plot: surrogate worth (vertical axis) versus time in days (horizontal axis) for the six noninferior points]
Figure IV.5.2. Surrogate worth trade-off curve

1) Time is minimal here but the negative survival index (NSI) is not desirable; its absolute value is too low to ensure much chance of survival at all (remember, we are minimizing the NSI, so we want it to be as negative as possible, that is, to have as great an absolute value as possible).
2) Time has increased to over three Zysk years, which is still well within the desired time frame; the NSI is still not desirable for those who wish the expeditions to be greater in quality than quantity, so the general consensus is to allow a longer interval for preparation.
3) Time is now over four Zysk years, and the NSI is better, but the panel's average worth function still indicates that a further trade-off of time for a more negative NSI is fairly desirable.
4) At this point, some of the panel obviously feels that an increase in NSI negativity is becoming less and less worth the increase in time; however, the
average worth function still slightly favors allowing more time for trip preparation.
5) Here the average worth function clearly indicates that any further trade-off of time for NSI is undesirable; we have passed the minimum that even the most conservative members of the panel felt was necessary, and it is more important to minimize time.
6) This point is clearly desirable to all.

There are several technologies currently under development that might cause the Zyskians to change their evaluation of worth at the points above. For instance, there is great hope that a way can be found to transport embryos so that reproductive capability need not be a prime consideration of whom to send. If so, fewer adults would need to be sent for breeding purposes, freeing up space for much-needed specialists in various fields. This might change the NSI sufficiently to alter the relative worth of some points on the curve. Also, if an error has been found in calculating when the sun will become a supernova, or if new information on that event becomes available, a change in the number of years left before the catastrophe would alter the trade-off decisions. For example, if the sun were to become a supernova in 100 years instead of 300, time would become increasingly dear.
PROBLEM IV.6: Solving a Problem with Two Non-linear Objectives

A student is asked to develop the non-inferior solutions and a table of trade-off values to minimize two objective functions simultaneously. Using the data, the student is also asked to plot the non-inferior solution in both the decision space and the functional space. The two objective functions are as follows:

Minimize
f1(x1, x2) = x1² + 2(x2 - 1)² + 4
f2(x1, x2) = (x1 - 4)² + x2² + 3
PROBLEM IV.7: Employment and Inflation

Consider the relationship between employment (or unemployment) and inflation in a national economy. This example problem involves at least two objective functions; it is desirable that both of them, unemployment and inflation, be minimized. Determine a preferred solution for a computer company that needs to reduce its staff without contributing to an increase in inflation. Let U(x1, x2) be the unemployment function and I(x1, x2) be the inflation function, where x1 = money supply and x2 = government spending. (Assume that the federal bank of this government has no independence; therefore the money supply is also controlled by the executive branch. Also, inflation is measured by the price index. Dividing by 100 gives us the inflation rate.)

Decision Variables:
x1 = money supply
x2 = government spending

Problem Statement: These are the functions which are to be minimized:
Unemployment = U(x1, x2) = 100 - x1² - x2²
Inflation = I(x1, x2) = x1³ + x2³

Use the Surrogate Worth Trade-off (SWT) method to generate Pareto-optimal solutions (at least 5) and their associated trade-offs.
PROBLEM IV.8: Risk-Return Tradeoff for an Investment Decision

Investors do not usually hold a single asset; they hold groups of assets referred to as a portfolio. The following is a partial list of the possible financial instruments that investors can consider:
• Money market: US Treasury bills (T-bills), Bonds
• Currency market
• Stock market

Portfolio investing assumes that for a given level of return, an investor would prefer less risk. Similarly, an investor would prefer a higher return for a given level of risk. Some financial assets, such as T-bills, bonds and notes, are considered to be risk-free because they yield a fixed rate of return. Unlike these, stocks have an element of risk. This risk-return tradeoff is the motivation for this analysis to focus on stock investments.

Several simplifications were employed in the analysis. The scope was limited to the stock market only, and to two stocks in particular: Compaq (CPQ) and Microsoft (MSFT). Additionally, the analyses were based on the following assumptions:
• A stock's return is based purely on the price changes for several time periods.
• There is no lending and borrowing.
• Transaction costs are not taken into consideration.
• Uninvested money earns no return (for simplification purposes), x1 + x2 < 1.

Five years (1995-1999) of data on the annual rates of return (AROR) of the two stocks were recorded, as shown in Table IV.8.1.
Table IV.8.1. AROR of Stocks CPQ and MSFT for Years 1995-1999

                     1995    1996    1997    1998    1999    r̄
Stock i=1 (CPQ)      0.55    0.90    0.49   -0.36   -0.08    0.30
Stock i=2 (MSFT)     0.88    0.56    1.15    0.68   -0.09    0.64
r̄ is the average rate of return (AROR) for the stocks, computed as:

r̄i = (1/5) Σ_{j=1}^{5} r_ij,  where i = Stocks 1 (CPQ) and 2 (MSFT), and j = Periods 1 to 5.
Expected Return (mean) of a Combination of Assets: The expected return of a combination of n assets is just the sum of the expected value of each return.

E(r1 + r2) = r̄1 + r̄2

The return on a portfolio of assets rP is now simply a weighted average of the return on the individual assets, where xi is the fraction of the investor's funds invested in the ith asset.

r̄P = x1·r̄1 + x2·r̄2  ⇔  r̄P = 0.3x1 + 0.64x2  ⇒  f1(x1, x2)

Risk of a Combination of Assets: The variance is the expected value of the squared deviations from the mean return on the portfolio:

σP² = E(rP - r̄P)² = xᵀQx

where Q is the covariance matrix of the stocks and is given by:

Q = [ρ11  ρ12; ρ21  ρ22] = [0.21  0.07; 0.07  0.17]

Risk (σP²) can be expressed as:

σP² = [x1  x2] Q [x1  x2]ᵀ
    = x1²ρ11 + x2²ρ22 + x1x2(ρ12 + ρ21)
    = 0.21x1² + 0.17x2² + (0.07 + 0.07)x1x2  ⇒  f2(x1, x2)

The requirement of this problem is to use the Surrogate Worth Tradeoff (SWT) method to perform multiobjective optimization with respect to the two objectives: f1 (expected return) and f2 (risk). Generate a plot of the Pareto optimal solutions in both decision space and functional space.
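For readers who want to experiment numerically, the sketch below is an illustration only (it assumes Python with NumPy, which the problem does not require): it recomputes the mean returns and covariance entries from Table IV.8.1 and scans fully invested portfolios for noninferior (return, risk) pairs. It is not the SWT procedure itself, only a way to visualize the Pareto-optimal candidates.

```python
# Illustrative sketch (not part of the original problem): recompute the portfolio
# statistics from Table IV.8.1 and scan weights for noninferior (return, risk) pairs.
import numpy as np

r = np.array([[0.55, 0.90, 0.49, -0.36, -0.08],   # CPQ annual rates of return, 1995-1999
              [0.88, 0.56, 1.15,  0.68, -0.09]])  # MSFT annual rates of return, 1995-1999

r_bar = r.mean(axis=1)          # approximately [0.30, 0.64]
Q = np.cov(r, bias=True)        # population covariance, approximately [[0.21, 0.07], [0.07, 0.17]]
print("mean returns:", np.round(r_bar, 2))
print("covariance matrix Q:\n", np.round(Q, 2))

candidates = []
for x1 in np.linspace(0.0, 1.0, 101):             # fully invested portfolios: x2 = 1 - x1
    x = np.array([x1, 1.0 - x1])
    f1 = float(x @ r_bar)                         # expected portfolio return (to be maximized)
    f2 = float(x @ Q @ x)                         # portfolio variance / risk (to be minimized)
    candidates.append((x1, f1, f2))

# Keep noninferior points: no other candidate offers both a higher return and a lower risk.
pareto = [c for c in candidates
          if not any(o[1] > c[1] and o[2] < c[2] for o in candidates)]
for x1, f1, f2 in pareto[::5]:
    print(f"x1={x1:.2f}  x2={1 - x1:.2f}  return={f1:.3f}  risk={f2:.4f}")
```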
PROBLEM IV.9: Carcinogenic Chemicals in a New Product

Company X is expanding its product line by producing Product Y. This product's main ingredients are Chemical A and Chemical B, both of which are carcinogenic. The labor union has requested that an occupational exposure limit (OEL) be established for Chemicals A and B, for the following reasons: (1) both chemicals are listed as human carcinogens by the International Agency for Research on Cancer (IARC), and (2) neither chemical is regulated by the Occupational Safety and Health Administration (OSHA). Company X's management has tasked its Safety Department with developing an OEL, which would minimize the number of cancer incidents and the costs associated with the new product.

Use the Surrogate Worth Tradeoff (SWT) method to analyze the risk of cancer incidents with respect to the associated costs. The relationships for (i) cancer incidents due to exposure (f1) and (ii) costs of exposure to Chemicals A and B (f2) are the following:

f1(DA, DB) = 1 + DA + DB²
f2(DA, DB) = 100 + 16(DA + DB) - (DA + DB)²

where:
f1(DA, DB) = cancer incidents as a function of exposure
f2(DA, DB) = cost as a function of exposure
DA = exposure to Chemical A
DB = exposure to Chemical B
PROBLEM IV.10: Building a New Rubber Manufacturing Plant

A Taiwan company is one of the main rubber-product suppliers in the world. It plans to establish a new plant in a new industrial zone located on the southwestern coast of Taiwan. The company intends to produce two new rubber products, which will increase its market share. However, these two products will surely cause a change in air pollution. Today, most people in Taiwan are very aware of environmental protection problems. Before the local government approved the plant proposal, there was a large-scale citizens' protest in front of city hall, and the Mayor promised that the government would supervise the plant closely. The rubber company will pay extra fees to deal with the pollution problem. How can the company management maximize profits while minimizing the cost of environmental protection? Solve the following multiobjective optimization model using the Surrogate Worth Tradeoff (SWT) method.

(max profit)  f1(x1, x2) = (5x1 - 12)² + (3x2 - 5)²
(min cost)    f2(x1, x2) = 4x1² + 3x2² + 2x1x2

where:
x1: tons of product 1 produced per day
x2: tons of product 2 produced per day
f1: profit model in terms of unit price and cost, which includes fixed and variable components
f2: cost model for dealing with environmental protection
PROBLEM IV.11: Providing Security for a College Concert

The purpose of this problem is to minimize the injuries and the cost of security for a rock star performance at a concert hall. A university is planning to have a rock star perform at its concert hall. In order for the event to go smoothly and to ensure future business, the university would like to minimize the number of injuries at the event (f1). At the same time, it would like to minimize the cost of security (f2). Because a majority of injuries occur closest to the stage, those planning the concert would like to determine the optimal number of security officers to deploy to the stage area (x1) per hour. In addition, security officers must also be dispersed throughout the rest of the arena to guarantee the safety of the entire audience. This will be another decision variable, x2, which will also be on a per-hour basis. It is known that the concert hall has a maximum capacity of 15,000 attendees, which is reflected in the first objective function. The officers deployed around the stage are in a more chaotic situation and may have a better chance of dealing with injury. Because of this, they are given a little more pay than those officers standing guard throughout the arena. The two objective functions can be seen below:

min f1(x1, x2) = 15000 - 20x1² - 7.5x2²
min f2(x1, x2) = 100x1 + 75x2

Solve the multiobjective problem described above and plot the injury and security cost tradeoff using the Surrogate Worth Tradeoff (SWT) Method.
PROBLEM IV.12: Marikina River Overflow Modeling

The Marikina River overflow scenario in the Philippines is a chronic problem. Hence, the analysis and modeling of the impact of the river overflows require significant attention and management by policymakers. The objective of this problem is to identify the impact of the current policy decision on future concerns using the multiobjective multi-stage risk impact assessment method. In particular, the channel overflow scenario will be formulated as a multiobjective optimization problem. The Surrogate Worth Trade-off (SWT) method is useful in multiobjective problems. Faced with multiple objectives, the approach is to select a primary objective and optimize it while constraining the decisions considered so that the other objectives are attained at least at minimum levels. A set of Pareto optimal points is generated, and trade-off analysis can proceed with single or multiple decisionmakers. In this particular problem, use SWT to optimize the following two objectives:

• Minimize investment cost (f1)
• Minimize risk of flood (f2)

Investment cost:       f1(x1, x2) = (x1 - 3)² + (x2 - 7)² + 2
Risk of flood damage:  f2(x1, x2) = (x1 - 8)² + (x2 - 10)² + 7

where:
x1 = number of floodways built
x2 = number of drainage systems established
PROBLEM IV.13: Designing an Enormous Ice-Cream Cone

The purpose of this problem is to find the optimal height and radius of an ice-cream cone that maximize its volume while minimizing its surface area. Every summer, ice-cream companies are likely to run promotions to increase their sales. One company wants to make the biggest ice-cream cone that uses the smallest amount of material, just for advertising display. What should the radius and the height of the ice-cream cone be? Solve this problem using the Surrogate Worth Tradeoff (SWT) method to calculate the volume and surface tradeoffs for the ice-cream cone.

Function for volume:        f1(R, H) = πR²H/3
Function for surface area:  f2(R, H) = πR√(R² + H²)

where R: radius, H: height
V. Uncertainty Sensitivity Index Method
PROBLEM V.1: Developing a Computer Program

This example presents a sales model of a computer program which is based on two variables: technological complexity and market price. It is assumed that the product is affected by two choices: (i) the program's level of complexity, and (ii) its market price, which is an exogenous parameter of the company's simple profit model.

DESCRIPTION
The producer's dilemma is further described as follows. As the program to be sold becomes more complex, it becomes more powerful and more and more nonprofessional consumers want to purchase it. However, after a certain point (such as the global maximum in Figure V.1.1), the program can become so complex that sales to the general public will drop drastically. Past this point, however, the program becomes complex/powerful enough for academic professionals to use. Therefore, with more technological complexity, sales begin to rise again. Nevertheless, the new local maximum (at high levels of complexity) is not as high as the global maximum because there are fewer academic professionals than there are general consumers.
[Figure: sensitivity band A(x) versus the decision variable x, with the points x1*, xc, and x2* marked]
Figure V.1.1. Sensitivity band, adapted from the companion textbook [Haimes 2009]

Note that due to manufacturing constraints, the company can make only one version, with technological complexity (x) as the decision variable. Also, the objective of maximizing sales can be alternatively represented by minimizing lost
sales (defined here as the objective function f1(x, α), where α is the market price parameter).

METHODOLOGY
Use the Uncertainty Sensitivity Index Method (USIM) to solve this problem. The mathematical formulation of the objective function is as follows:

f1(x, α) = lost sales = 4x² - 2.25α(x - 2) + 1.5α²     (V.1.1)

where:
x = technological complexity
α = market price

We look at the problem from two angles: 1) the "business-as-usual" case (minimize the objective function to get x*) and 2) the most conservative case (minimize the sensitivity function to get x̂).

SOLUTION
Based on market prices and the company's long history in this industry, the nominal value of α has been set equal to $20. To determine x*, we take the derivative of the objective function with respect to x, with α = α̂ = $20:

f1(x, α̂) = 4x² - 45(x - 2) + 600 = 4x² - 45x + 690     (V.1.2)
∂f1(x, α̂)/∂x = 8x - 45 = 0                              (V.1.3)
∴ x* = 45/8

Now we need to determine x̂ by deriving a sensitivity function from (V.1.1):

f1(x, α) = 4x² - 2.25α(x - 2) + 1.5α²                            (V.1.4)
f2(x, α̂) = [∂f1(x, α)/∂α]²|α=α̂ = (-2.25(x - 2) + 3α̂)²
         = 5.0625x² + 20.25 + 9α̂² - 20.25x - 13.5xα̂ + 27α̂     (V.1.5)
         = 5.0625x² + 20.25 + 3600 - 20.25x - 270x + 540         (V.1.6)
         = 5.0625x² - 290.25x + 4160.25                          (V.1.7)
Taking the derivative gives us:

∂f2(x, α̂)/∂x = 10.125x - 290.25 = 0     (V.1.8)
∴ x̂ = 28.67

Now we want to minimize the two objectives together. To do that, we will use the ε-constraint form:

min { f1(x, α̂) = 4x² - 45x + 690,  f2(x, α̂) = 5.0625x² - 290.25x + 4160.25 }     (V.1.9)

Thus, we have:

min [f1(x, α̂) = 4x² - 45x + 690]     (V.1.10)
subject to: 5.0625x² - 290.25x + 4160.25 ≤ ε2     (V.1.11)

Thus, we can formulate the Lagrangian equation:

L(x, α̂, λ12) = 4x² - 45x + 690 + λ12(5.0625x² - 290.25x + 4160.25 - ε2)     (V.1.12)

The Kuhn-Tucker necessary conditions yield:

∂L(·)/∂x = 8x - 45 + λ12(10.125x - 290.25) = 0     (V.1.13)
∂L(·)/∂λ12 = 5.0625x² - 290.25x + 4160.25 - ε2 ≤ 0     (V.1.14)
λ12[5.0625x² - 290.25x + 4160.25 - ε2] = 0     (V.1.15)
λ12 ≥ 0     (V.1.16)

Using the partial derivative of the Lagrangian function with respect to x from (V.1.13), we have:

∴ λ12 = (45 - 8x)/(10.125x - 290.25)     (V.1.17)
Table V.1.1 displays the results.

Table V.1.1. Non-inferior Solutions and Tradeoff Values

x        f1(x, α̂)    f2(x, α̂)    λ12
5.625    563.44       2687.77      0.0000
10.00    640.00       1764.00      0.1852
15.00    915.00       945.56       0.5420
20.00    1390.00      380.25       1.3105
25.00    2065.00      68.06        4.1751
28.67    2687.11      0.00         ∞
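As a cross-check of Table V.1.1, the following sketch is an illustration only (it assumes SymPy is available; it is not part of the original solution): it rederives f2, x*, x̂, and λ12(x) symbolically from (V.1.1) and evaluates a few of the tabulated points.

```python
# Illustrative sketch (not part of the original solution): rederive the USIM
# quantities of Problem V.1 symbolically and tabulate a few noninferior points.
import sympy as sp

x, alpha = sp.symbols('x alpha', real=True)
alpha_hat = 20

f1 = 4*x**2 - sp.Rational(9, 4)*alpha*(x - 2) + sp.Rational(3, 2)*alpha**2  # lost sales, (V.1.1)
f1_hat = sp.expand(f1.subs(alpha, alpha_hat))                               # 4x^2 - 45x + 690
f2_hat = sp.expand(sp.diff(f1, alpha).subs(alpha, alpha_hat)**2)            # sensitivity index

x_star = sp.solve(sp.diff(f1_hat, x), x)[0]     # business-as-usual solution: 45/8
x_hat = sp.solve(sp.diff(f2_hat, x), x)[0]      # most conservative solution: 86/3 (about 28.67)
lam12 = sp.simplify(-sp.diff(f1_hat, x) / sp.diff(f2_hat, x))   # trade-off lambda_12(x)

print("f1(x, alpha_hat) =", f1_hat)
print("f2(x, alpha_hat) =", f2_hat)
print("x* =", x_star, "   x_hat =", x_hat)
for xv in [x_star, 10, 15, 20, 25]:
    print(float(xv), float(f1_hat.subs(x, xv)), float(f2_hat.subs(x, xv)), float(lam12.subs(x, xv)))
# At x = x_hat the sensitivity objective is at its minimum and lambda_12 grows without bound.
```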
To dramatize the tradeoffs between the sensitivity objective function and the optimality objective function, the latter is evaluated at x* and at x̂ as a function of α:

f1(x*, α) = 4(45/8)² - 2.25α(45/8 - 2) + 1.5α² = 126.5625 - 8.15625α + 1.5α²     (V.1.18)
f1(x̂, α) = 4(28.67)² - 2.25α(28.67 - 2) + 1.5α² = 3287.11 - 60α + 1.5α²          (V.1.19)

Given the nominal value of α̂ = 20:

∂f1(x*, α)/∂α |α=α̂ = 3α̂ - 8.15625 = 51.84375     (V.1.20)
∂f1(x̂, α)/∂α |α=α̂ = 3α̂ - 60 = 0                  (V.1.21)

Furthermore, we can analyze the changes that take place in f1(x*, α) and f1(x̂, α) when the nominal value α̂ is perturbed by Δα = -5. The corresponding variations are given below:

f1(x*, α̂) = 4(45/8)² - 2.25(20)(45/8 - 2) + 1.5(20)² = 563.4375               (V.1.22)
f1(x*, α̂ - 5) = 4(45/8)² - 2.25(20 - 5)(45/8 - 2) + 1.5(20 - 5)² = 341.71875  (V.1.23)
f1(x*, α̂) - f1(x*, α̂ - 5) = 221.719                                           (V.1.24)

Let η(x*, 0.75α̂) denote the percentage change in f1(x*, α̂) with a perturbation of 25% in α̂. Then:
η(x*, 0.75α̂) = 39.35%
Similarly,

f1(x̂, α̂) = 4(28.67)² - 45(28.67) + 690 = 2687.1111                              (V.1.25)
f1(x̂, α̂ - 5) = 4(28.67)² - 2.25(20 - 5)(28.67 - 2) + 1.5(20 - 5)² = 2724.6111   (V.1.26)
f1(x̂, α̂) - f1(x̂, α̂ - 5) = -37.5                                                 (V.1.27)

Let η(x̂, 0.75α̂) denote the percentage change in f1(x̂, α̂) with a perturbation of 25% in α̂. Then η(x̂, 0.75α̂) = 1.40%.

ANALYSIS
The results given in Figure V.1.2 indicate that following a conservative policy that trades optimality (cost objective) for a less sensitive outcome provides a very stable solution (a 1.4% versus a 39.4% change in f1(·) with a deviation of 25% from the nominal value α̂). Clearly, neither the solution x* nor x̂ is likely to be recommended. From the use of Table V.1.1 and the SWT method, with an interaction with a decisionmaker, a preferred level of x should be selected, where:

5.63 ≤ x ≤ 28.67
[Plot: f1(x*, α) and f1(x̂, α) under the Δα = 5 perturbation in α]
Figure V.1.2. The Functions f1(x*, α) and f1( xˆ , α) versus Perturbation in α
PROBLEM V.2: Structural Remodeling
We seek to minimize the stress-related deformation of cantilevered beams on a silicon substrate resulting from stress orientations in the pre-etched thin film. The question is whether or not to replace the current beam with a longer beam.

DESCRIPTION
The question of replacing the current beam with a longer beam can be addressed by pursuing the following two objectives:
1) Minimize the height of maximum deformation.
2) Minimize the sensitivity of the deformation with respect to the thin-film stress orientation.

METHODOLOGY
We use the Uncertainty Sensitivity Index Method (USIM) to solve the problem, as follows. The stress orientation is given as the parameter α, which denotes the angular difference between the stress axes and the etching film orientation. The system output, f1(x, α), is measured as the degree of deformation of the etched cantilevered beam from true horizontal with respect to the substrate wafer. Our decision variable, x, is the length of the beam.

SOLUTION
Two objective functions are given as follows:

f1(x, α) = 3x² - 2xα + 5α²     (V.2.1)
f2(x, α) = [∂f1(x, α)/∂α]² = (-2x + 10α)² = 4x² - 40αx + 100α²     (V.2.2)

By applying the ε-constraint form, we can formulate the Lagrangian equation:
min f1 (⋅) s.t. f2 3 , there is no trade-off between f1 and f 2 since nonnegativity of λ12 does not hold, given the range of x. If we know α ∈ [0,1] , then no matter how it will change, any solution of x ∈ [0,1] is non-inferior (i.e., X * = [0,1]. ).
In general, when f1(x, α) satisfies certain conditions, we have:

X* = ∩α Xα,  where Xα = { x | λ12 = -∂f1/∂f2 ≥ 0 },  for any α ∈ [α̲, ᾱ]
EXERCISE B: Envelope Solution Approach
Using the original function in Exercise A, solve the multiobjective optimization problem using the envelope approach.

SOLUTION
Use the functions:

f1(x, α) = x² + (1 - α²)x + α²
f2(x, α) = [∂f1/∂α]² = 4α²(x - 1)²

∂f1/∂x = 2x + 1 - α²        ∂f1/∂α = -2α(x - 1)
∂f2/∂x = 8α²(x - 1)         ∂f2/∂α = 8α(x - 1)²

The envelope condition is:

(∂f1/∂x)(∂f2/∂α) - (∂f1/∂α)(∂f2/∂x) = 0
⇒ 2x + 1 + α² = 0, or α² = -2x - 1

(In the above problem, α has a real solution only if x ≤ -1/2; if x ≥ -1/2, there is no trade-off between f1 and f2.) Figure V.4.1 graphically shows the tradeoffs between f1 and f2.
[Plot: f1 versus f2, showing curves for α = 0, α = 1/2, and α = 1, and the point x = 0]
Figure V.4.1. f1 versus f2
EXERCISE C: USIM Problem Using Envelope Solution Approach
Consider the original function used in Exercise B, modified as follows:

f1(x, α) = x² + (1 + α²)x - α²

Solve the multiobjective optimization problem using the envelope approach.

SOLUTION
From

f1(x, α) = x² + (1 + α²)x - α²
f2(x, α) = [∂f1/∂α]² = 4α²(x - 1)²

and the envelope condition

(∂f1/∂x)(∂f2/∂α) - (∂f1/∂α)(∂f2/∂x) = 0

we obtain α² = 2x + 1 (x ≥ -1/2). So

f1[x, α(x)] = x² + (1 + α²)x - α² = 3x² - 1
f2[x, α(x)] = 4(2x + 1)(x - 1)²

We could get x = F(f1) and then plug it into f2(x), so we have the envelope f1 ~ f2. Generally, given a desired λ12, we can choose the most compromised solution x* and α* (parameter design) in the following way:

λ12 = -(2x + 1 + α²) / [8α²(x - 1)] = given value
2x + 1 = α²

(two equations with two unknowns). Simplifying:

λ12 = -1 / [4(x - 1)]
Actually, on the envelope we can easily calculate the trade-off from the envelope equation:

(∂f1/∂x)(∂f2/∂α) = (∂f1/∂α)(∂f2/∂x)

λ12 = -(∂f1/∂x)/(∂f2/∂x) = -(∂f1/∂α)/(∂f2/∂α)

So

λ12 = -(∂f1/∂α) / (∂f2/∂α) = -(∂f1/∂α) / [2(∂f1/∂α)(∂²f1/∂α²)] = -1 / (2 ∂²f1/∂α²)
The necessary condition for the Pareto optimum is:

λ12 ≥ 0,  or equivalently  ∂²f1/∂α² ≤ 0

This can be used to determine the existence of the envelope on the Pareto frontier (see Figure V.4.2). In the above example:

Original:  ∂²f1/∂α² = 2(1 - x) ≥ 0, so there is no envelope on (α² - 1)/2 ≤ x ≤ 1
Modified:  ∂²f1/∂α² = 2(x - 1) ≤ 0, so the envelope exists, since -(α² + 1)/2 ≤ x ≤ 1
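The envelope computations in Exercises B and C can be verified symbolically. The sketch below is an illustration only (it assumes SymPy, which the exercises do not require): it forms f2 = (∂f1/∂α)², evaluates the envelope condition, and reports ∂²f1/∂α² for both the original and the modified functions.

```python
# Illustrative sketch (not part of the original solution): symbolic check of the
# envelope condition for the two functions in Exercises B and C of Problem V.4.
import sympy as sp

x, a = sp.symbols('x alpha', real=True)

for label, f1 in [("original (Exercise B)", x**2 + (1 - a**2)*x + a**2),
                  ("modified (Exercise C)", x**2 + (1 + a**2)*x - a**2)]:
    f2 = sp.diff(f1, a)**2                        # sensitivity index function
    envelope = sp.factor(sp.diff(f1, x)*sp.diff(f2, a) - sp.diff(f1, a)*sp.diff(f2, x))
    curvature = sp.simplify(sp.diff(f1, a, 2))    # its sign decides whether an envelope exists
    print(label)
    print("  envelope condition:", envelope, "= 0")
    print("  d2f1/dalpha2 =", curvature)
```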
[Plot: f1 versus f2, showing the curves for α = 1 and α = 1.73 and their envelope]
Figure V.4.2. f1 versus f2

EXERCISE D: USIM Extension
Consider a system with the following objective function:

f(x, α) = ½(x3 - 1)² + (1 - α²)x1 + α²
s.t.  h(x) = βx1 + β²x2 + x3 - 1 = 0

α and β have nominal values 0.5 and 1, respectively.
SOLUTION
Consider minimizing the sensitivity of the constraint. Let

f3(x) = [∂h(x, β)/∂β]²|β=β̂ = (x1 + 2x2)²

Minimize it under h(x, β) = 0:

min f3(x) = (x1 + 2x2)²
s.t.  x1 + x2 + x3 - 1 = 0

Solve the problem and get: x3 = 1 - x2, x1 = -2x2. Substitute these values in f(x, α) and calculate:

f(x, α) = x2² + (1 - α²)x2 + α²
Since this is exactly the same function that was minimized in Exercise A, the remainder of the USIM solution is the same as before.
PROBLEM V.5: Budget Allotment for Cyber-Security
A large company is considering a change in the amount of money to budget for cyber-security for the upcoming year. DESCRIPTION
Changing the budget (negative means reduce it, positive means increase) could have an effect on the number of cyber attacks during that year, and can be calculated using the following equation:

f1(x, α) = y(x, α) = 3x² - 4x(α - 2) - α²
METHODOLOGY
Since the company is uncertain as to which fiscal policy to adopt, they use the Uncertainty Sensitivity Index Method (USIM) to help guide their decision. The decision variable, x, represents the change in budget for the upcoming year. This is used to calculate the change in number of cyber attacks for that time period. Thus, a negative value for the objective function means there will be a reduction in the number of attacks. While increasing the budget will generally reduce the number of attacks, at some point attacks will begin increasing as the budget increases. Also, a decrease in the budget may still cause a decrease in attacks. This happens because allocating money to cyber-security takes away from (or gives more to) facility security allocations, thus more (or fewer) attacks will occur. The objective is to find the change in budget that minimizes the number of attacks. The USIM should be applied because the model parameter is unknown. All budget values are in units of $1 million. All attack values are in units of tens (i.e., -9 means 90 fewer attacks). SOLUTION
An alpha value of 3 was determined using a systems identification procedure. Given α̂ = 3:

y(x, α̂) = 3x² - 4x - 9     (V.5.1)
f2(x, α̂) = [-4x - 2α]² = 16x² + 16αx + 4α²     (V.5.2)

(V.5.1) and (V.5.2) can be written as a joint optimality and sensitivity problem as follows:

min { f1(x, α̂), f2(x, α̂) }     (V.5.3)
min { f1(x, α̂) = 3x² - 4x - 9,  f2(x, α̂) = 16x² + 48x + 36 }     (V.5.4)

Use the ε-constraint form:

min [3x² - 4x - 9]     (V.5.5)
s.t.  16x² + 48x + 36 ≤ ε2     (V.5.6)

From (V.5.5) and (V.5.6) formulate the Lagrangian function:

L(x, α̂, λ12) = 3x² - 4x - 9 + λ12[16x² + 48x + 36 - ε2]     (V.5.7)

According to the Kuhn-Tucker necessary conditions, (V.5.7) can be solved:

∂L/∂x = 6x - 4 + λ12[32x + 48] = 0     (V.5.8)
∂L/∂λ12 = 16x² + 48x + 36 - ε2 ≤ 0     (V.5.9)
λ12[16x² + 48x + 36 - ε2] = 0,  λ12 ≥ 0     (V.5.10)

From (V.5.8) solve for λ12:

λ12 = (4 - 6x)/(32x + 48)     (V.5.11)
ANALYSIS
Given equations (V.5.8) through (V.5.11), Table V.5.1 and Figure V.5.1 show noninferior solutions and trade-off values.

Table V.5.1. Noninferior Solutions & Trade-off Values

x       f1(x, α̂)   f2(x, α̂)   λ12
0.67    -10.33      75.11       0.00
0.5     -10.25      64          0.02
0.4     -10.12      57.76       0.03
0.3     -9.93       51.84       0.04
0.2     -9.68       46.24       0.05
0.1     -9.37       40.96       0.07
0       -9          36          0.08
-0.1    -8.57       31.36       0.10
-0.2    -8.08       27.04       0.13
-0.3    -7.53       23.04       0.15
-0.4    -6.92       19.36       0.18
-0.5    -6.25       16          0.22
-0.6    -5.52       12.96       0.26
-0.7    -4.73       10.24       0.32
-0.8    -3.88       7.84        0.39
-0.9    -2.97       5.76        0.49
-1      -2          4           0.63
-1.1    -0.97       2.56        0.83
-1.2    0.12        1.44        1.17
-1.3    1.27        0.64        1.84
-1.4    2.48        0.16        3.88
-1.5    3.75        0           ∞
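The entries of Table V.5.1 can be regenerated with a few lines of code. The sketch below is illustrative only (it assumes Python with NumPy, which the problem does not require) and simply evaluates (V.5.4) and (V.5.11) over the range of x between x* and x̂.

```python
# Illustrative sketch (not part of the original solution): reproduce the rows of
# Table V.5.1 for Problem V.5 from equations (V.5.4) and (V.5.11).
import numpy as np

xs = np.concatenate(([2.0 / 3.0], np.round(np.arange(0.5, -1.51, -0.1), 2)))
f1 = 3 * xs**2 - 4 * xs - 9              # change in number of attacks (tens)
f2 = 16 * xs**2 + 48 * xs + 36           # sensitivity index
with np.errstate(divide='ignore'):
    lam12 = (4 - 6 * xs) / (32 * xs + 48)   # trade-off; tends to infinity at x_hat = -1.5
for row in zip(xs, f1, f2, lam12):
    print("x=%6.2f  f1=%7.2f  f2=%7.2f  lambda12=%8.2f" % row)
```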
[Plot: noninferior solution in the functional space, f2(x, α̂) versus f1(x, α̂), with the points f1(x*, α̂) and f1(x̂, α̂) marked]
Figure V.5.1. Noninferior solution in functional space

min f1(x, α̂) = f1(x*, α̂) = -10 1/3     (V.5.12)
min f2(x, α̂) = f2(x̂, α̂) = 0            (V.5.13)

From (V.5.12) and (V.5.13) two critical values are computed as:
x* = 2/3 (Business-as-usual policy)
x̂ = -1.5 (Most conservative policy)

Given x* and x̂, two functions are derived with respect to α and are plotted versus α:

f1(x*, α) = 4/3 - (8/3)(α - 2) - α²     (V.5.14)
f1(x̂, α) = 27/4 + 6(α - 2) - α²        (V.5.15)
[Plot: f1(x*, α) and f1(x̂, α) versus α]
(V.5.16) (V.5.17)
From (V.5.16) and (V.5.17), we can distinguish stabilities for two objective functions. A most conservative policy will lead to a more stable state than a business-as-usual policy.
Cyber Security
135
Along with perturbation in α, another plot will help us gain better understanding of a situation involving uncertainty: f1 ( x*, αˆ ) = −10.33 f1 ( x*, αˆ − .5) = −6.25
(V.5.18) (V.5.19)
f1 ( x*, αˆ ) − f1 ( x*, αˆ − .5) = −4.08
(V.5.20)
η ( x*,.75αˆ ) = 39.5% f1 ( xˆ , αˆ ) = 3.75 f1 ( xˆ , αˆ − .5) = 3.5
(V.5.21) (V.5.22)
f1 ( xˆ , αˆ ) − f1 ( x*, αˆ − .5) = .25
(V.5.23)
η ( xˆ ,.75αˆ ) = 6.7% 8
∆α = −0.5
4
f1 ( xˆ , αˆ ) = 3.75
f1 ( xˆ , αˆ − 0.5) = 3.5
0 0
1
2
3
4
5
6
-4
f1 ( xˆ , αˆ − 0.5) = −6.25
-8 -12
f1 ( x*, αˆ ) = −10.333
-16
Figure V.5.3. The functions f1 ( x*, α ) and f1 ( xˆ , α ) versus perturbation in α
The results given in Figure V.5.3 indicate that following a conservative policy that trades optimality for a less sensitive outcome provides a very stable solution (6.7% versus 39.5%). Using the Surrogate Worth Trade-off (SWT) method, and talking to the person in the company who is in charge of this decision, the preferred change in budget should be between –$1.5 million and $670,000. It does seem logical to choose a value that is a reduction in budget that also causes a reduction in attacks. Thus, it may make sense to choose a budget change between –$1.1 million and no change.
PROBLEM V.6: Art Museum Temperature Maintenance
In order to keep the artworks housed in an art museum in their best condition, the interior temperature of the building must be closely controlled and monitored. The problem is to determine the desired temperature at an optimal cost within the given climate of the museum. DESCRIPTION
Suppose that the goal is to set the temperature in the low 60-degree range. Let x represent the temperature and y(x, α) denote the cost. The cost (in thousands of dollars) is a function of both the temperature x and a parameter α and can be written as follows:

y(x, α) = (x - 60)² - αx - α²
METHODOLOGY
Use the Uncertainty Sensitivity Index Method (USIM) to solve this problem. Let the cost objective function be redefined as f1(x, α), which we wish to minimize:

f1(x, α) = y(x, α)
or
f1(x, α) = (x - 60)² - αx - α²

Let the nominal value of α be α̂, where α̂ = 10. Then y(x, α̂) can be rewritten as y(x, α̂) = x² - 130x + 3500. Since

∂y(x, α)/∂α = -x - 2α,

we define the sensitivity index function f2 to be f2(x, α) = x² + 4αx + 4α². Substituting α with α̂, we have

f1(x, α̂) = x² - 130x + 3500     (V.6.1)
f2(x, α̂) = x² + 40x + 400       (V.6.2)
Suppose there are no constraints on x. The joint optimality and sensitivity problem can be written in a multiobjective framework as follows:

Min f1(x, α̂) = x² - 130x + 3500     (V.6.3)
Min f2(x, α̂) = x² + 40x + 400       (V.6.4)

Solve via the SWT method. The first phase is converting the second objective f2 into the ε-constraint as follows:

Min f1(x, α̂)           (V.6.5)
s.t. f2(x, α̂) ≤ ε2     (V.6.6)

The problem can be written as:

Min x² - 130x + 3500         (V.6.7)
s.t. x² + 40x + 400 ≤ ε2     (V.6.8)

Form the Lagrangian function,

L(x, α̂, λ12) = x² - 130x + 3500 + λ12(x² + 40x + 400 - ε2)     (V.6.9)

The Kuhn-Tucker necessary conditions for stationarity are as follows:

∂L(·)/∂x = 2x - 130 + λ12(2x + 40) = 0     (V.6.10)
∂L(·)/∂λ12 = x² + 40x + 400 - ε2 ≤ 0       (V.6.11)
λ12(x² + 40x + 400 - ε2) = 0               (V.6.12)
λ12 ≥ 0                                    (V.6.13)

Solving Equation (V.6.10) yields

λ12 = (130 - 2x)/(2x + 40)     (V.6.14)
See Table V.6.1 for several noninferior solutions with the corresponding tradeoff values.
Table V.6.1. Noninferior Solutions and Tradeoff Values

x      f1(x, α̂)   f2(x, α̂)   λ12
-19    6331        1           84
-10    4900        100         7.5
0      3500        400         3.25
10     2300        900         1.833
20     1300        1600        1.125
30     500         2500        0.7
40     -100        3600        0.41667
50     -500        4900        0.214286
60     -700        6400        0.0625
Figure V.6.1. Noninferior Solution in the Functional Space
Figure V.6.1 depicts the noninferior solution in the functional space of f1 and f2. Let x* and x̂ denote the decision variables which minimize f1(x, α̂) and f2(x, α̂), respectively. In other words:

Min f1(x, α̂) = f1(x*, α̂)     (V.6.15)
Min f2(x, α̂) = f2(x̂, α̂)     (V.6.16)
Then we can compute x* and x̂ with a straightforward method of looking for stationary points in the respective functions to yield:

x* = 65
x̂ = -20

To study the tradeoffs between the sensitivity objective function f2 and the optimality objective function f1, the latter is evaluated at x* and x̂ as a function of α. The resulting functions f1(x*, α) and f1(x̂, α) are plotted in Figure V.6.2. The functions are as follows:

f1(x*, α) = -α² - 65α + 25       (V.6.17)
f1(x̂, α) = -α² + 20α + 6400     (V.6.18)
Note that at the nominal value of α, f1(x*, α) changes rapidly, with a slope equal to -85, whereas f1(x̂, α) has a rate of change of zero at the nominal value α̂ = 10.

[Plot: f1(x*, α) and f1(x̂, α) versus α]
Figure V.6.2. Sensitivity as a function of the Parameter α
Now we focus on the changes that take place in f1(x*, α̂) and f1(x̂, α̂) when the nominal value α̂ is perturbed by the amount Δα = -5. Then as a result we have:

f1(x*, α̂) = -725
f1(x*, α̂ - 5) = -325
f1(x*, α̂) - f1(x*, α̂ - 5) = -400
ANALYSIS
Further analysis is performed to determine the performance of the cost function f1(x*, α̂) relative to f1(x̂, α̂). Let η(x*, 0.5α̂) denote the percentage change in f1(x*, α̂) with a perturbation of 50% in α̂. Then

η(x*, 0.5α̂) = 55%

Similarly,

f1(x̂, α̂) = 6500
f1(x̂, α̂ - 5) = 6475
f1(x̂, α̂) - f1(x̂, α̂ - 5) = 25

and

η(x̂, 0.5α̂) = 0.38%.
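The perturbation comparison above can be reproduced directly. The following sketch is an illustration only (plain Python, no special libraries; it is not part of the original solution): it evaluates f1 at x* = 65 and x̂ = -20 for α̂ = 10 and α̂ - 5 and reports the percentage changes.

```python
# Illustrative sketch (not part of the original solution): perturbation comparison
# for Problem V.6 (business-as-usual x* versus the conservative x_hat).
def f1(x, alpha):
    return (x - 60)**2 - alpha * x - alpha**2    # cost in thousands of dollars

alpha_hat, d_alpha = 10, -5
x_star, x_hat = 65, -20      # minimizers of f1(x, alpha_hat) and of the sensitivity index

for name, xv in [("x*", x_star), ("x_hat", x_hat)]:
    base = f1(xv, alpha_hat)
    perturbed = f1(xv, alpha_hat + d_alpha)
    eta = abs(base - perturbed) / abs(base) * 100
    print(f"{name}: f1(alpha_hat)={base}, f1(alpha_hat-5)={perturbed}, eta={eta:.2f}%")
```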
See Figure V.6.3 for the comparison of η(x*, 0.5α̂) and η(x̂, 0.5α̂). It is clear that the conservative policy that trades optimality for a less sensitive outcome provides an extremely stable solution. In the case of η(x*, 0.5α̂), we have a deviation of about 55% given a 50% perturbation in the nominal value of α. In the latter case, the deviation is essentially negligible given the same perturbation. Therefore, if the nominal value of α is incorrectly assessed, the result could be rather disastrous if we choose x* over x̂, even though x* would give us the better value for the optimality problem. On the other hand, x̂ makes the problem very parameter-insensitive, but its objective value is not as good. In the end, however, we still need to interact with a decisionmaker about which preferred x is chosen, with x̂ ≤ x ≤ x*.
Figure V.6.3. Cost as a function of the Parameter α
PROBLEM V.7: Multiobjective Optimization and Sensitivity Analysis
This problem demonstrates how to integrate sensitivity analysis with multiobjective optimization. Solve the following multiobjective optimization problem using the Uncertainty Sensitivity Index Method (USIM) and analyze your results.

f1 = (x - 2)² + 2xα + α²
f2 = (∂f1/∂α)² = (2x + 2α)² = 4(x + α)²

Assume the nominal value α̂ = 1.
PROBLEM V.8: Earthquake-Proofing a Building
How can a building be structurally fortified to counteract the vibration caused by an earthquake? The vibration of a building caused by an earthquake may be dampened by placing shock-absorbing materials under and around its foundation, as can be seen in Figure V.8.1. Vibration risk can be related to the “work” exerted by the building structure to counteract the forces acting on it. Suppose that greater magnitudes of “work” lessen the susceptibility of the building to vibration risk, which consequently leads to less structural stress. A simple schematic of the problem is depicted in the given diagram consisting of only two active forces: (i) weight of the structure; and (ii) “spring” force.
[Schematic: the building's weight W acts downward and the spring force S acts upward; the vertical displacement x is measured from the equilibrium position x = 0]

Figure V.8.1. Demonstration of shock-absorbing materials
Consider the following model describing the "work" ω exerted by the building due to the forces present in the above diagram:

ω = ωS + ωW = -0.5αx² + Wx

where:
• x: vibration-triggered vertical displacement of the building in meters (m), measured relative to an equilibrium position (i.e., the "initial deformation" of the spring)
• ωS = -0.5αx²: "work" component due to the spring force S, where α is the Hooke's Law spring constant, whose nominal value is α̂ = 2×10⁹ Newtons per meter (N/m)
• ωW = Wx: "work" component due to the weight of the building (W = 2×10⁹ N)

Use the Uncertainty Sensitivity Index Method (USIM) and the Surrogate Worth Trade-off (SWT) method in order to incorporate the uncertainty of the parameters.
PROBLEM V.9: Evaluating Investments for a Portfolio
Risk must be evaluated in a portfolio of two investments. Portfolio risk can be assessed using the variance metric (denoted here by f1). For the case of two investments:
f1 = σ1²x1² + σ2²x2² + ρσ1σ2x1x2

Note that, in general, the above expression is derived as follows:

f1 = Variance = Var(x1·Inv1 + x2·Inv2)
   = x1²Var(Inv1) + x2²Var(Inv2) + 2x1x2Cov(Inv1, Inv2)
   = x1²σ1² + x2²σ2² + 2x1x2Cov(Inv1, Inv2)

For portfolio selection consisting of two investments:

Cov(Inv1, Inv2) = ½ρσ1σ2
where:
f1 = portfolio risk (this risk is measured in terms of variance of portfolio returns, hence f1 is unitless) σ1 = standard deviation (or volatility) of returns of Investment 1 (Inv1) σ2 = standard deviation (or volatility) of returns of Investment 2 (Inv2) ρ = correlation of returns of Investments 1 and 2 x1 = portfolio weight to allocate to Investment 1 x2 = 1 – x1 = portfolio weight to allocate to Investment 2 Use the Uncertainty Sensitivity Index Method (USIM) to answer the following questions: (a) Derive f1(x1,α), given the following parameters:
σ1 = 0.2 σ2 = α ρ = –0.8 (b) Derive the sensitivity function f2(x1,α). (c) Plot the noninferior solution in the function space using the functions obtained in Steps (a) and (b). Use a nominal value of αˆ = 0.3 . (d) Analyze the results and discuss the sensitivity of portfolio risk to different values of α.
PROBLEM V.10: Preventing West Nile Viral Disease
The West Nile virus is spread to humans through a bite from the Culex species of mosquito. Once in the bloodstream, the virus can reach the brain and cause encephalitis, a brain inflammation that can affect the entire nervous system. Unfortunately, there is no specific treatment for West Nile encephalitis other than supportive therapy (such as hospitalization, intravenous fluids, and respiratory support) for severe cases. Antibiotics will not work because a virus, not bacteria, causes West Nile disease. No vaccine for the virus is currently available. Applying a DEET-based insect repellent (DEET is short for N,N-diethyl-m-toluamide) is recommended to minimize the risk of acquiring the disease. The downside is that such repellents have been thought to cause adverse skin reactions when used in excessive quantities (especially when combined with sunscreen). The question is: how much DEET can a person apply to avoid the risk of contracting West Nile disease without suffering an adverse skin reaction? This problem can be solved using the Uncertainty Sensitivity Index Method (USIM), as follows. Consider a health risk function which has the following form:
f1(x; α) = 1 - αx^(α-1) exp(-x^α),   x > 0

where:
f1(x; α) = health risk function, normalized such that a value of 1 means maximum health risk and 0 means minimum health risk
x = amount of repellent applied (in grams per square inch of skin)
α = concentration of active ingredient (in parts per 10)

When α̂ = 2 (i.e., the nominal value of α), the health risk function behaves like a bathtub curve, as depicted below.
[Plot: f1 (risk, 0 = minimum, 1 = maximum) versus x (amount of DEET in g/sq. in. of skin) for α̂ = 2]
Figure V.10.1. Risk Function for Amount Of DEET Applied to Skin
Conduct USIM and analyze the results. (Note: You may have to resort to numerical methods when generating noninferior solutions for the multiobjective problem comprising f1(x; αˆ ) and its corresponding sensitivity function f2(x; αˆ )).
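One possible numerical set-up is sketched below. It is an illustration only (it assumes Python with NumPy; the grid and the finite-difference step h are arbitrary choices, not part of the problem): it approximates ∂f1/∂α at α̂ = 2 by central differences, forms the sensitivity function f2, and filters the resulting (f1, f2) pairs for noninferior points.

```python
# Illustrative sketch (not part of the original problem): numerical USIM set-up for
# Problem V.10 using a central-difference estimate of df1/dalpha at alpha_hat = 2.
import numpy as np

def f1(x, alpha):
    return 1.0 - alpha * x**(alpha - 1.0) * np.exp(-x**alpha)

alpha_hat, h = 2.0, 1e-4
x = np.linspace(0.05, 3.0, 300)

risk = f1(x, alpha_hat)
df1_dalpha = (f1(x, alpha_hat + h) - f1(x, alpha_hat - h)) / (2.0 * h)
sensitivity = df1_dalpha**2                     # f2(x; alpha_hat)

# Noninferior points: no other x gives both lower risk and lower sensitivity.
pts = list(zip(x, risk, sensitivity))
pareto = [p for p in pts if not any(q[1] < p[1] and q[2] < p[2] for q in pts)]
for xv, r, s in pareto[::10]:
    print(f"x={xv:5.2f}  f1={r:6.3f}  f2={s:8.5f}")
```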
PROBLEM V.11: Optimizing Amount of Catalyst in a Chemical Substance
This problem examines the sensitivity of the reaction time of a given chemical substance to temperature, as a function of the amount of catalyst. This exercise is concerned with an industrial process in which the reaction time of a chemical substance depends on the temperature and the amount of catalyst. We are interested in minimizing the reaction time and its sensitivity to the temperature. Currently we are not satisfied with the specified reaction time of the catalyst we are using. We have to determine the amount of catalyst that will give us the least reaction time. The difference in the reaction time (from the original reaction time) is given by the following equation:

y(x, α) = (3/2)x² + (3/2)x(1 - α) - (2/3)α²
Solve the problem using the Uncertainty Sensitivity Index Method (USIM). A negative value of y indicates a reduction in the original reaction time, while a positive value indicates an increase. In short, the greater the negative value we obtain the better, because we are reducing our original reaction time.
y(x, α) denotes the difference in reaction time
x denotes the difference from the original amount of catalyst (a negative value denotes a reduction of the original amount, while a positive value denotes an increase)
α denotes the model's parameter (temperature); assume a nominal value of α̂ = 2
PROBLEM V.12: Uncertainty Regarding Costs in a Widget Factory
A company that produces widgets needs to balance the costs of labor and materials. The company has two objectives: to minimize costs in general, and to minimize the fluctuating costs of labor. For each widget, the factory must use a specific type of expensive paint. There is a cost function that depends on the amount of paint used:

f1(x, α) = (x - 6)² - α²(x - 3)(α - 4)²

Let x represent the amount of paint used, and α represent some price fluctuation in the cost of labor. Assume that α̂ = 6. Use the Uncertainty Sensitivity Index Method (USIM) to solve this problem.
PROBLEM V.13: Determining Safe Proportions of Chemical Components
Electrostatic deposition, also known as electroplating, is a manufacturing process wherein a metal is deposited onto the surface of a plastic or another metal to affect the latter's physical, mechanical, or chemical properties. Prior to the actual deposition process, the surface of the material requires extensive cleaning to assure proper adhesion. To avoid environmental hazards, the process engineer wants to determine the correct amounts of two chemicals that can be used as cleansing agents. In the cleaning process, the electroplating plant must minimize toxic fumes and totally avoid producing heavy metal. Two major chemicals, X1 and X2, are commonly present in the cleansing agent and are also available in 50% solutions. These produce toxic fumes in the reduction process given by:

y = 0.0003x1²β - 0.02x2α²

where:
y is the amount of toxic fumes
x1 is the amount of Chemical X1
x2 is the amount of Chemical X2
α is the concentration of Chemical X1
β is the concentration of Chemical X2

Furthermore, the cleaning process causes the two chemicals to react with the material being cleaned to produce a heavy metal. As there is no economically feasible way of filtering this out of the cleansing agent, producing the heavy metal must be totally avoided. The heavy metal production is given by the chemical process:

x1²(1 - β) + x2β = 0.00008

Given the two objective functions and the variability of α and β, apply the Uncertainty Sensitivity Index Method (USIM) to solve this problem. The functions representing the sensitivities need to be formulated as well. Assume nominal parameter values of α̂ = 0.5 and β̂ = 0.5.
VI. Risk Filtering, Ranking and Management

PROBLEM VI.1: Security at a Concert

Security at public places (e.g., football stadiums, airports, and concert halls) is a priority, since any accident could have severe effects on those who are present.

DESCRIPTION
The objective of this problem is to identify and prioritize the potential risk scenarios that can disrupt security at a concert event. The interim general managers of the concert hall are responsible for any problems at the events. Any accident would seriously affect their jobs as well as their future employment prospects. The following parameters apply:

Risks:
• Loss of life of performers, personnel, and audience members.
• Property loss in the venue.
• Loss of public satisfaction with venue facilities and security.

Temporal domain:
• Short-term.

Level of decisionmaking:
• Interim general managers.

METHODOLOGY
Risk Filtering, Ranking, and Management (RFRM) is a helpful tool for diagnosing risk scenarios on an ongoing basis. There are eight phases, or steps, in this methodology.

SOLUTION
Several phases of the RFRM as applied to the analysis of security at a concert venue are discussed below.

Phase I: Hierarchical Holographic Modeling (HHM)
The preliminary phase in performing RFRM is a description of the system and all components with associated risks. Figure VI.1.1 depicts applicable risk scenarios using a hierarchical holographic model.
Figure VI.1.1. HHM for Concert Venue Security

Phase II: Scenario Filtering
After studying the HHM, the interim managers filter the subtopics down to the following:
1. Bomb planted
2. Sniper attack on performer
3. Biological attack
4. Chemical attack
5. Security officers incapacitated
6. Loss of security officers
7. Heavy rain
8. Icy road
9. Short-term power outage
10. Loss of telecommunications

Phase III: Bicriteria Filtering
The managers remove two subtopics evaluated as moderate, so the 8 remaining subtopics are: Bomb planted, Sniper attack on performer, Biological attack, Chemical attack, Loss of security officers, Icy road, Short-term power outage, and Loss of telecommunications. These subtopics are placed into a matrix which visually indicates both the probabilities and which subtopics are high, moderate, or low risk (see Table VI.1.1).
Table VI.1.1. Risk Matrix for Phase III
Phase IV: Multicriteria Filtering
The managers have defined the remaining risk scenarios as follows:

Table VI.1.2. Risk Scenarios for Remaining Subtopics

Subtopic                      Risk Scenario
Bomb planted                  Failure to detect any bomb before the concert
Sniper attack on performer    Failure to detect any pistol on snipers before the concert
Biological attack             Failure to detect any malicious biological materials before the concert
Chemical attack               Failure to detect any chemical weapons before the concert
Loss of security officer      Illness of security officer
Icy road                      Failure to clear icy roads before and after the concert
Short-term power outage       Failure to have back-up generator
Loss of telecommunications    Failure to test telecommunications before the concert
The risk scenarios are further filtered down to the three considered most critical: bomb, sniper, or biological attack. Probabilities are assessed according to the risk matrix shown in Table VI.1.1, with Table VI.1.3 summarizing the results.
Table VI.1.3. Rating Risk Scenarios in Phase IV
(Ratings are listed in the order: Bomb / Sniper / Biological attack / Chemical attack / Loss of security officers / Icy road / Short-term power outage / Loss of telecommunications)

Criteria                                      Ratings
Undetectability                               Low, Low, Low, Low, Med, Med, Med, High
Uncontrollability                             Low, Low, Low, Low, Low, Low, Med, High
Multiple Paths to Failure                     Low, Low, Low, Low, High, Med, High, Med
Irreversibility                               High, High, High, High, Low, Low, Med, Low
Duration of Effects                           High, Med, High, High, Low, Low, Low, Low
Cascading Effects                             High, High, High, High, Low, Low, High, Med
Operating Environment                         High, High, High, High, Med, High, High, Med
Wear and Tear                                 Low, Low, Low, Low, Low, Low, Med, Med
Hardware/Software/Human/Organizational        High, High, High, High, Low, Low, Med, Med
Complexity and Emergent Behaviors             High, High, High, High, Low, Low, Med, Med
Design Immaturity                             Low, Low, Low, Low, Low, Med, Med, Med
Phase V: Quantitative Ranking
With the quantitative severity scale matrix and the criteria assessment above, the managers focus on only 6 risk scenarios by excluding Loss of security officers and Icy road. The probability in the matrix is subjective, so we will use Bayes' theorem, assuming the prior probability of Pr(Ai) = 1/(6 - i), where i stands for the Accident Likelihood (ranging from Unlikely: 1, Seldom: 2, ..., Frequent: 5). Following the notation for conditional probability typically used in Bayesian analysis, Pr(E|Ai) = 0.01 and Pr(E|not Ai) = 0.995 for all i. Then we can calculate Pr(E) by the theorem of total probability, where E indicates observing evidence of an accident before the concert. The conditional probabilities Pr(Ai|E) are calculated as follows:

Pr(E) = (0.01)(0.2) + (0.995)(0.8) = 0.002 + 0.796 = 0.798
Pr(A1|E) = (0.01)(0.2)/(0.798) = 0.0025
Pr(A2|E) = (0.01)(0.25)/(0.798) = 0.003
Pr(A3|E) = (0.01)(0.33)/(0.798) = 0.004
Pr(A4|E) = (0.01)(0.5)/(0.798) = 0.006
Pr(A5|E) = (0.01)(1)/(0.798) = 0.0125

A quantitative version of the bicriteria risk matrix is presented below using the 6 remaining risk scenarios.
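The arithmetic above can be reproduced with a few lines of code. The sketch below is illustrative only (plain Python, no external libraries) and mirrors the computation as given, using the same single denominator Pr(E) = 0.798 for every scenario.

```python
# Illustrative sketch (not part of the original solution): reproduce the Phase V
# Bayesian update with priors Pr(Ai) = 1/(6 - i) and the denominator used above.
p_e_given_a, p_e_given_not_a = 0.01, 0.995
priors = {i: 1.0 / (6 - i) for i in range(1, 6)}          # 0.2, 0.25, 0.33, 0.5, 1.0

p_e = p_e_given_a * priors[1] + p_e_given_not_a * (1 - priors[1])   # total probability, = 0.798
print(f"Pr(E) = {p_e:.3f}")
for i, prior in priors.items():
    posterior = p_e_given_a * prior / p_e                  # Bayes' theorem: Pr(Ai | E)
    print(f"Pr(A{i}|E) = {posterior:.4f}")
```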
Table VI.1.4. Risk Matrix for Phase V
Phase VI: Risk Management
After following Phases I to V, the managers need to consider how to minimize the costs and/or maximize the benefits for the 6 risk scenarios. In Phase VI, they consider the tradeoffs between these benefits and costs and decide how to manage the risks with the most effective options.

Estimates of cost:
• Install metal detectors: $100,000 ($10,000 × 10)
• Implement booking system to check personal history with attendees' consent: $50,000
• Hire enough security officers to check hand baggage: $100/(officer per day) × number of gates (i.e., 10) = $1,000
• Install emergency power generator: $50,000
• Purchase microphone systems: $10,000
Total estimated cost: $211,000

Benefits:
Monetary benefits:
• Insurance premium deduction: $10,000/event
• Expected revenue from audience: 1,000 × $100 = $100,000
Non-monetary benefits:
• Public confidence in a secure environment

Risk reduction: Implementing each option will reduce the probability of each accident by half.

Management options: As the managers' priority is to prevent any loss of life or injury, they will implement the first three options above, and then implement other options if enough resources are left.

Phase VII: Safeguarding Against Missing Critical Items
In Phase VII the managers need to consider previously eliminated risk scenarios (e.g., a rocket-propelled grenade (RPG) attack in the case of an outdoor concert), and re-evaluate the options previously selected. Upon analyzing filtered scenarios and current options, they will update the risk matrix after every concert.

Phase VIII: Operational Feedback
The managers hired temporary security officers for the events in order to reduce costs, but received numerous complaints because the officers' incompetence and inexperience in security checks delayed the entrance procedure. If this continues, they will consider employing a regular security staff or seeking assistance from local police officers. Collecting feedback is also central to updating the HHM. The managers will rebuild the HHM by adding new scenarios, editing present scenarios, or deleting obsolete ones. For instance, they could add a risk scenario of a tornado if an occurrence is reported nearby, even though tornadoes rarely happen in this region.
PROBLEM VI.2: Risk to an Electric Power System

A consulting company analyzes the risks associated with a large electric power system.

DESCRIPTION
The consulting company plans to submit a report to the Federal Utility Committee on an electric power system with respect to potential risk scenarios. By deploying the Risk Filtering, Ranking, and Management (RFRM) methodology, it can provide the Committee with an in-depth report that addresses all possible risk scenarios and suggests how to manage the pertinent risks. Below is the analysis of the system.

METHODOLOGY
The eight phases of the RFRM will be implemented to identify and prioritize risks associated with the operation of an electric power system.

SOLUTION
Phase I: Scenario Identification
Figure VI.2.1. HHM for an Electric Power System

The first step of the RFRM is to develop a Hierarchical Holographic Model (HHM) in which all possible sources of risk are identified in head topics and subtopics. The resulting HHM is depicted in Figure VI.2.1.
Phase II: Scenario Filtering
In line with its interests, the consulting company will next assess the physical assets component of the HHM. The following subtopics will be considered:
- Coal
- Hydroelectric
- Solar
- Nuclear
- Wind
- Tidal
- Transmission Lines
- Transformers
- Substations
- Maintenance Equipment
- Information Systems
- Fault Detection Systems
- Monitoring Systems
- Servers

Out of these, it will filter out the risk scenarios that are not of immediate interest to the company. Thus, the new set of reduced scenarios is as follows:
- Coal
- Hydroelectric
- Nuclear
- Transmission Lines
- Maintenance Equipment
- Information Systems
- Fault Detection Systems
- Monitoring Systems
Phase III: Bicriteria Filtering and Ranking To further reduce the number of risk scenarios, in this phase the company subjects the 8 remaining risk scenarios to the Qualitative Severity Scale Matrix as shown in Table VI.2.1 below. It is assumed that the decisionmaker’s analysis of the risk scenarios resulted in removing those that received a moderate or low risk valuation from the subtopic set. Based on the decisionmaker’s preferences, the subtopics Maintenance Equipment, Information Systems, and Transmission Lines, which were evaluated as low risks, were removed. The remaining five risk scenarios are: Coal, Nuclear, Hydroelectric, Fault Detection System, and Monitoring System.
Table VI.2.1. Qualitative Severity Scale Risk Matrix for Phase III
Phase IV: Multicriteria Evaluation
More specific definition is then given to each remaining subtopic, as shown in Table VI.2.2.

Table VI.2.2. Risk Scenarios for 5 Remaining Subtopics

Subtopic                  Risk Scenario
Coal                      Failure of any portion of the Coal power plant for more than 24 hours
Nuclear                   Failure of any portion of the Nuclear power plant for more than 24 hours
Hydroelectric             Failure of any portion of the Hydroelectric power plant for more than 24 hours
Fault Detection System    Failure of the Fault Detection System for more than 8 hours
Monitoring System         Failure of the Monitoring System for more than 8 hours
Next, the remaining subtopics are assessed in terms of the 11 criteria defined in Table VI.2.3 below. The table summarizes these assessments.

Table VI.2.3. Scoring of Subtopics Using the Criteria Hierarchy
(Ratings are listed in the order: Coal / Nuclear / Hydroelectric / Fault Detection System / Monitoring System)

Criteria                                        Ratings
Undetectability                                 Low, High, Med, High, Low
Uncontrollability                               Med, High, Med, Med, Med
Multiple Paths to Failure                       High, High, High, Med, Med
Irreversibility                                 Med, High, Med, Low, Low
Duration of Effects                             High, High, High, High, High
Cascading Effects                               Med, High, Med, Med, Med
Operating Environment                           Med, High, Med, Med, Med
Wear and Tear                                   Med, Med, Med, Low, Low
Hardware/Software/Human/Organizational Errors   Med, High, Med, High, High
Complexity and Emergent Behaviors               High, High, High, Med, Med
Design Immaturity                               Med, Med, Med, Low, Low
Phase V: Quantitative Ranking
The set of scenarios is now reduced further using a Quantitative Severity Scale Matrix. In other words, this new matrix expresses the likelihood quantitatively.

Table VI.2.4. Quantitative Severity Scale Risk Matrix
The results of the quantitative matrix are expressed as follows:
Coal: Likelihood of Failure = 0.1; Effect = A (Loss of Life); Risk = Extremely High
Historical coal mining incidents have demonstrated the potential for loss of life. Based on the consulting team's brainstorming and surveying, coal plant failure appears to be occasional, so they assign only 10% probability to this scenario.
Nuclear: Likelihood of Failure = 0.25; Effect = A (Loss of Life); Risk = Extremely High
Failure of a nuclear power plant can lead to loss of life, and the chance of Nuclear failure appears higher than that of any other type of power plant failure. They assign 25% probability to this scenario.
Hydroelectric: Likelihood of Failure = 0.2; Effect = A (Loss of Life); Risk = Extremely High
Though the probability of hydroelectric failure (and consequently loss of life) is not as high as that of Nuclear plant failure, it is higher than that of Coal power plant failure. They assign 20% probability to this scenario.
Fault Detection System: Likelihood of Failure = 0.01; Effect = B (Plant Shutdown); Risk = High
Failure of the Fault Detection System will cause a plant shutdown, but such a failure is highly unlikely. They assign 1% probability to this failure.
Monitoring System: Likelihood of Failure = 0.07; Effect = C (Prolonged Power Outage); Risk = Moderate
Failure of the Monitoring System will cause a prolonged power outage. The chances of such a failure are greater than those of a Fault Detection System failure. They assign 7% probability to this failure.
The firm decided to filter out all risk scenarios that have moderate or low risk; thus, the Monitoring System is filtered out at this stage. Based on the Quantitative Severity Scale Matrix and the above analysis, resources should be concentrated on the remaining four critical risk scenarios: Coal, Nuclear, Hydroelectric, and Fault Detection System.
Phase VI: Risk Management
In this phase a complete quantitative analysis should be performed. This involves estimating cost, performance benefits, and risk reduction for the different management options for dealing with the remaining scenarios.
Phase VII: Safeguarding Against Missing Critical Items
In Phase VII, the performance of each option selected in Phase VI is evaluated against the scenarios previously filtered out during Phases II to V.
Phase VIII: Operational Feedback
This last phase represents the operational stage of the system under consideration, during which the experience and information gained are used to continually update the scenario filtering and decision processes.
PROBLEM VI.3: Launching an Online Banking System
A bank has invested significant monetary and human resources in an online banking system, but the system seems too complicated for its employees to understand and manage.
DESCRIPTION
The internal and external risks pertinent to this system have not yet been fully identified. The bank called upon a well-known IT company to analyze its system intensively.
METHODOLOGY
The company will use the Risk Filtering, Ranking, and Management (RFRM) methodology to identify all of the feasible risk scenarios. There are eight phases in this process, and the first step is to develop a Hierarchical Holographic Model (HHM).
SOLUTION
Phase I: HHM Development
The five head topics and many subtopics in the HHM cover the multiple and varied aspects of the banking system, as shown in Figure VI.3.1.
Phase II: Scenario Filtering by Domain of Interest
For an online banking system, there are two major levels of decisionmakers: strategic and operational. Strategic decisionmakers determine the goal and purpose of the system, the available functionality, the importance of online banking in the organization, and so on. Operational decisionmakers establish how the online banking system is developed, maintained, and supported. For the purpose of this analysis, the company will filter risk scenarios based on the interests and responsibilities of operational decisionmakers. The following surviving set of risk scenarios becomes the input to Phase III: Server Storage, Memory, CPU, Power Supply, and Fan; Network Switch, Router, and Cable; Physical Power, Temperature Control, Air Quality Control, and Network; and, under Software, Vendor-Provided and In-house-Developed Application, as well as Vendor-Provided Database, Database Program, OS, Clustering, Vendor-Provided Network Management, and Network Management Developed In-house.
Figure VI.3.1. HHM for online banking system
Phase III: Bicriteria Filtering
The company considers the likelihood and effects of each risk scenario on a graphic matrix, as shown in Figure VI.3.2. Most of the scenarios are easy to place in the matrix, so the company's consultants will touch only on some important and perhaps less obvious points. There are strict regulations regarding privacy and security when it comes to online banking systems and banking systems in general. A program that corrupts the database or misplaces information in it will result in a security breach and perhaps the illegal release of personal information. Similarly, in-house developed applications can result in the same scenario. On the other hand,
if such a breach is caused by vendor-provided software, there will be a loss of customers, but no regulation would be violated since it would be the vendor's mistake. In Figure VI.3.2, the shades correspond to extremely high, high, moderate, and low risk, with extremely high risk the lightest shade. What remain at the end of this phase are the scenarios with extremely high and high risks: Storage, Power, Network, In-house Developed Application, Database Program, OS, Clustering, Vendor-Provided Database, and Network Management Developed In-house.
Figure VI.3.2. Risk Matrix for Phase III
Phase IV: Multicriteria Filtering
This phase defines, in Table VI.3.1, the risk scenarios for those subtopics classified as high or extremely high risk in the Phase III risk matrix (Figure VI.3.2). These subtopics are then scored as low, medium, or high risk using the hierarchy of seven criteria in Table VI.3.2.
Table VI.3.1. Risk Scenarios for Remaining Subtopics
Subtopics                               Risk Scenario
Storage                                 Failure of any storage with no redundancy
Power                                   Power outage lasting over 15 minutes
Network                                 Failure of the external network for more than an hour
In-house Developed Application          Malfunction that affects bank operation
Database Program                        Malfunction that leads to data corruption or data mismanagement
OS                                      Malfunction that leaves the computer inoperable
Clustering                              Failure of clustering for over an hour
Vendor-Provided Database                Failure of the database for more than an hour
Network Management Developed In-house   Failure of the internal network for more than 30 minutes
Table VI.3.2. Scoring of Subtopics
Criteria                    Storage   Power   Network   Application   Database Program   OS    Clustering   Database   Network Management
Undetectability             L         L       L         M             H                  H     M            L          M
Uncontrollability           H         H       H         L             L                  H     L            H          L
Multiple Paths to Failure   H         L       H         M             H                  H     M            H          M
Irreversibility             H         L       L         L             H                  L     L            H          L
Duration of Effects         H         L       L         H             H                  M     H            H          M
Cascading Effects           L         H       H         H             H                  H     H            H          H
Wear and Tear               H         L       L         L             H                  L     M            H          L
Phase V: Quantitative Ranking Using the Cardinal Version of the Risk Matrix
Next, using the effects listed in the Figure VI.3.2 Risk Matrix, the company performs quantitative ranking of the nine remaining scenarios. Since it does not actually have any conditional probability distribution, it does not use Bayes' Theorem here. Instead, it obtains the likelihood of failure for each of the subtopics in Table VI.3.1 directly from past experience.
Storage: Likelihood of failure 0.3; Effect = B; Risk = High
Power: Likelihood of failure 0.009; Effect = B; Risk = High
Network: Likelihood of failure 0.09; Effect = B; Risk = High
In-house Developed Application: Likelihood of failure 0.025; Effect = A; Risk = Extremely High
Database Program: Likelihood of failure 0.025; Effect = A; Risk = Extremely High
OS: Likelihood of failure 0.02; Effect = B; Risk = High
Clustering: Likelihood of failure 0.02; Effect = B; Risk = High
Vendor-Provided Database: Likelihood of failure 0.015; Effect = B; Risk = High
Network Management Developed In-house: Likelihood of failure 0.3; Effect = B; Risk = Extremely High
These scenarios are summarized in Figure VI.3.3.
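As an aside, the four scenarios that Phase VI (below) singles out can be recovered from these numbers with a simple sort. The ordering rule used in this sketch (effect severity first, then likelihood) is an illustrative proxy and is not part of the RFRM methodology.

# Illustrative ranking of the nine Phase V scenarios by (effect, likelihood).
scenarios = {
    "Storage": (0.3, "B"), "Power": (0.009, "B"), "Network": (0.09, "B"),
    "In-house Developed Application": (0.025, "A"), "Database Program": (0.025, "A"),
    "OS": (0.02, "B"), "Clustering": (0.02, "B"), "Vendor-Provided Database": (0.015, "B"),
    "Network Management Developed In-house": (0.3, "B"),
}

effect_rank = {"A": 0, "B": 1, "C": 2}  # effect A is the most severe
ranked = sorted(scenarios.items(), key=lambda kv: (effect_rank[kv[1][1]], -kv[1][0]))

for name, (likelihood, effect) in ranked[:4]:
    print(f"{name}: likelihood={likelihood}, effect={effect}")
# Prints: In-house Developed Application, Database Program, Storage,
# and Network Management Developed In-house (the four scenarios of Phase VI).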
Figure VI.3.3. Quantitative Severity Scale Matrix
Phase VI: Risk Management
The IT company found that four scenarios constituted most of the risk for the online banking system from the operational decisionmaker's point of view. These were the Database Program, In-house Developed Application, Network Management, and Storage. Next it recommended solution options and the respective costs, benefits, and risks for each. For Database Programs, the most important solution option is implementing a rigorous testing process. The cost consists of designing and maintaining a separate
test database that can be used for testing only. Testing may prolong the development of database programs. However, the benefits outweigh the costs for online banking systems because database program failure is so detrimental. For In-house Developed Applications, the option is the same as for the Database Program, and the analysis is similar. For Storage, the most widely used option is to provide substantial redundancy; clustering within individual servers and also within the network provides sufficient redundancy. There are costs associated with the extra hardware and the extra complexity in terms of design and maintenance. Again, this is probably worthwhile in this situation, as recovery of failed storage is typically unlikely and it is important to be able to fall back on the backup as soon as possible. For Network Management, the analysis is the same as for Storage.
Phase VII: Safeguarding against Missing Critical Items
For Phase VII, the bank would consider how the options proposed in Phase VI can affect the risk scenarios that were filtered out in the previous phases. For example, the decisionmaker may opt for an alternate design for hard disks in any single server. This greatly increases a single server's need for power; consequently, the power supply may become critical. Additionally, the alternate design will also increase the internal temperature of a given server, so fans and other server cooling components may become critical. Another example is when clustering is chosen to increase redundancy; clustering itself then becomes a critical item. (Note that clustering as a risk scenario refers to the clustering software rather than the concept of clustering.) Clustering servers together requires much more interconnection between servers and also additional network equipment. All of this may lead to additional critical items such as cables, switches, and routers. Many other items would not become critical even though their locations in these matrices may change. For example, hardware failures such as CPU and memory would be less critical now that the system is clustered.
Phase VIII: Operational Feedback
In Phase VIII, the bank utilizes the operational feedback received during deployment to further refine the HHM and the benefits, costs, and risks of the risk management options. Probably the most obvious result is a better assessment of failure likelihood for the various hardware and software. As the online banking system operates, there will be better statistics on how the various hardware and software behave. Consequences of these behaviors would also become available, such as how long it will take a server to reach a critical temperature if the fans fail, or what kind of capacity loss the bank will face if a server drops out of the cluster. This enables the bank to perform a better analysis of the earlier phases, starting from Phase III. Additional feedback will come from the online banking system's customers. They will report experiences that do not meet their standards as a result of network outages, application malfunctions, or simply slow servers. They may also require more functionality, which would surely alter the subtopics under the Functional perspective of the HHM.
PROBLEM VI.4: Safety Issues on Nanomaterials
Because nanomaterials are new, government agencies need to decide what, if any, regulations need to be changed or implemented for the safe manufacture and use of nanomaterials and nanomaterial-based products.
DESCRIPTION
The major products that may need regulation are cosmetics, deodorant sprays, and bone and tooth implants.
METHODOLOGY
Risk Filtering, Ranking, and Management (RFRM) can be used to assess, evaluate, and manage the risk scenarios in this problem. RFRM consists of eight phases.
SOLUTION
Phase I: Develop a Hierarchical Holographic Model (HHM)
The following head topics and subtopics are the various risk scenarios involved in the production and use of nanomaterials and nanomaterial-based products:
Contact during manufacture of nanoparticles:
- Particles escaping into the air during packaging – Inhalation
- Gloves and masks – Absorption into the skin
- Accidental ingestion
- High temperatures – fires, blasts, etc.
Transport of nanoparticles to various places of product manufacture:
- Particles escaping into the air during manufacture – Inhalation
- Gloves and masks – Absorption into the skin, inhalation
- High temperatures – fires, blasts, etc.
Direct usage:
- Applying onto the skin – absorption in skin
- Spraying in the air – inhalation (lungs)
- Implants in the body – absorption in blood
- Cellular and genetic structure – nanoparticles might be absorbed into cells and cause irreversible damage
- Handling – accidental ingestion (babies/adults unaware of the hazards)
Transport through the environment:
- Product washed off while taking a shower – transport through water and sewer
- Nanoparticles in the sewer go into waste streams – transport through soil
- Spraying into the air – transport through air
- Soil quality, water quality
Indirect contact with nanoparticles:
- Animals – drink contaminated water
- Plants – absorb nanoparticles in soil and water
Disposal of containers:
- Landfills – soil quality, groundwater quality
- Incineration – air quality
Phase II: Filter out scenarios that are not likely to be controlled by regulation
From the scenarios enumerated in Phase I, and based on discussions with environmental health and safety experts, the following have been identified as scenarios that can be controlled by regulation:
1. Contact during manufacture of nanoparticles
   1.a. Particles escaping into the air during packaging – Inhalation
   1.b. Gloves and masks – Absorption into the skin
   1.c. Accidental ingestion
   1.d. High temperatures – fires, blasts, etc.
2. Transport of nanoparticles to various places of product manufacture
   2.a. Particles escaping into the air during manufacture – Inhalation
   2.b. Gloves and masks – Absorption into the skin, inhalation
   2.c. Accidental ingestion
   2.d. High temperatures – fires, blasts, etc.
3. Direct usage
   3.a. Applying onto the skin – absorption in skin
   3.b. Spraying in the air – inhalation (lungs)
   3.c. Implants in the body – absorption in blood
   3.d. Handling – accidental ingestion (babies/adults ignorant of the product)
4. Disposal of containers
   4.a. Landfills or incineration – soil quality, groundwater quality
For ease of use, a mnemonic identifier is assigned to each scenario, as enumerated in Table VI.4.1. Each of these 13 risk scenarios will now be filtered further against a variety of criteria in the subsequent phases, leading to a short list of the most important risk scenarios to be addressed for risk mitigation.
Table VI.4.1. Specific scenarios filtered for consideration from Phase I
Identifier   Specific Scenario
1.a          Manufacture – Inhalation: Workers inhaling particles which escape into the air during manufacture and packaging due to ineffective masks.
1.b          Manufacture – Absorption: Nanomaterials being absorbed into workers' skin due to ineffective gloves and filters.
1.c          Manufacture – Ingestion: Accidental ingestion of nanoparticles during manufacture.
1.d          Manufacture – Explosion: Blasts in the manufacture chamber due to improper regulation of temperature/pressure and inadequate safety.
2.a          Production – Inhalation: Workers inhaling particles that escape into the air during manufacture and packaging of the nanomaterial-based product due to ineffective masks.
2.b          Production – Absorption: Nanomaterial being absorbed into workers' skin due to ineffective gloves and filters during manufacture of the nanomaterial-based product.
2.c          Production – Ingestion: Accidental ingestion of nanoparticles during manufacture of the nanomaterial-based product.
2.d          Production – Explosion: Blasts in the manufacture chamber due to improper regulation of temperature/pressure and inadequate safety during manufacture of the nanomaterial-based product.
3.a          Skin Absorption: Effect of absorption of the nanomaterial in the skin during application of the product.
3.b          Consumer Inhalation: Effect of inhalation of nanomaterial when spray products are used.
3.c          Blood Absorption: Effects of absorption of nanomaterial in the blood from the body implants.
3.d          Ignorance – Ingestion: Accidental ingestion of products due to ignorance of handling the products.
4.a          Environment: Effect of improper disposal of the nanomaterial-based products on soil, water, and air quality.
Phase III – Bi-Criteria Ranking and Filtering
For this phase, each of the scenarios selected in Phase II will be evaluated based on two criteria – severity of the consequences and their frequency of occurrence – as seen in Figure VI.4.1.
Figure VI.4.1. Qualitative severity scale matrix
Some risk scenarios are eliminated due to lack of severity and frequency of occurrence. The scenarios that make it to the next phase of the risk filtering and ranking are:
1.a. Manufacture – Inhalation
1.d. Manufacture – Explosion
2.a. Production – Inhalation
2.d. Production – Explosion
3.b. Consumer – Inhalation (lungs)
3.c. Blood Absorption
4.a. Environment
Phase IV – Multi-Criteria Evaluation
The top 7 risk scenarios from Phase III are now evaluated against 11 criteria based on the properties of robustness, resilience, and redundancy of the system under study. The results appear in Table VI.4.2.
Table VI.4.2. Scoring of subtopics for the safe use of nanomaterials
Criterion                           1.a    1.d    2.a    2.d    3.b    3.c    4.a
Undetectability                     High   Low    High   Low    High   High   High
Uncontrollability                   Low    Low    Low    Low    High   Low    Low
Multiple paths to failure           Low    High   Low    High   Low    Low    High
Irreversibility                     High   High   High   High   High   High   High
Duration of effects                 High   High   High   High   High   High   High
Cascading effects                   Med    High   Med    High   Low    Low    High
Operating environment               Low    Med    Low    Med    Low    Med    Med
Wear and tear                       Low    Low    Low    Low    Low    Med    High
HW/SW/HU/OR                         Med    High   Med    High   Med    N/A    High
Complexity and emergent behaviors   High   Med    High   Med    Low    Low    Med
Design immaturity                   High   Low    High   Low    High   High   High
This assessment is derived by examining the columns from left to right and ranking the scenarios from highest to lowest severity on the eleven criteria.
Phase V – Quantitative Ranking
1.a Manufacture – Inhalation: Likelihood = 0.05; Effect = Serious Injury; Risk = High
Inhalation occurs when improper or ineffective masks and filters are provided to the workers. OSHA has stringent regulations for mask usage, so the probability of an improper mask being used is quite low, and we assign a probability of 0.05 to this scenario. From Phase IV we see that this scenario is largely undetectable although controllable.
1.d Manufacture – Explosion: Likelihood = 0.013; Effect = Loss of Life; Risk = Extremely High
Nanoparticles are manufactured using a technique called Physical Vapor Synthesis, which requires high temperatures and pressures. These conditions create a high-risk situation in which a slight leakage or loss of insulation can cause a fatal explosion.
We were told that current systems are leak proof, and hence the chance of a leakage occurring is very slim. Let L denote the event of a leakage and E denote the event of an explosion. From previous statistical data, the probability of an explosion occurring for any reason is about 0.05; that is, P(E) = 0.05. The probability that there was a leakage given that there has been an explosion is 0.1, i.e., P(L|E) = 0.1. Also, the probability of a leakage given that there is no explosion is known to be 0.4, so P(L|not E) = 0.4. Using Bayes' theorem, the probability of an explosion given a leakage is
P(E|L) = P(E)P(L|E)/P(L), where P(L) = P(L|E)P(E) + P(L|not E)P(not E) = (0.1)(0.05) + (0.4)(0.95) = 0.385.
Hence P(E|L) = (0.05)(0.1)/0.385 = 0.013. Thus the likelihood of an explosion occurring is 0.013; the explosion is detectable but not controllable.
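The Bayesian arithmetic above can be verified with a few lines of code; the probabilities below are simply the assumptions stated in the text.

# Bayesian update for the Manufacture - Explosion scenario (numerical check only).
p_e = 0.05             # prior probability of an explosion, P(E)
p_l_given_e = 0.1      # probability of a leakage given an explosion, P(L|E)
p_l_given_not_e = 0.4  # probability of a leakage given no explosion, P(L|not E)

# Total probability of a leakage, P(L)
p_l = p_l_given_e * p_e + p_l_given_not_e * (1 - p_e)

# Bayes' theorem: P(E|L) = P(E) * P(L|E) / P(L)
p_e_given_l = p_e * p_l_given_e / p_l

print(f"P(L)   = {p_l:.3f}")          # 0.385
print(f"P(E|L) = {p_e_given_l:.3f}")  # 0.013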
2.a Production – Inhalation: Likelihood = 0.01; Effect = Serious Injury; Risk = High
As with inhalation during the manufacture of nanoparticles, inhalation during production of the product (cosmetics, sprays, etc.) occurs due to a lack of proper masks and filters. Since OSHA regulates masks at production facilities and since the nanoparticles are dispersed in a solution, the likelihood of this scenario is very low, say 0.01.
2.d Production – Explosion: Likelihood = 0.001; Effect = Loss of Life; Risk = Extremely High
An explosion during production of products containing nanoparticles occurs for reasons similar to explosions during manufacture of the nanoparticles themselves, but at much lower temperatures and pressures. Thus the likelihood of an explosion at this stage is much lower than that of an explosion during manufacture of the nanoparticles, and we give this scenario a likelihood of 0.001.
3.b Consumer – Inhalation: Likelihood = 0.45; Effect = Minor Injury; Risk = Extremely High
Inhalation occurs when nanomaterial sprays are used to deodorize the air in a room. Since there can be no control on inhalation of a room freshener, the likelihood of this scenario is quite high, and we assign a probability of 0.45.
3.c Blood Absorption: Likelihood = 0.01; Effect = Serious Injury; Risk = Med
Nanoparticles would get into the bloodstream of a person who has a nanocomposite implant only if the nanocomposite has not been produced properly, so that loose nanoparticles are still present in the composite. Since nanocomposite making is highly regulated, the likelihood of producing an improper composite is very low.
Hence we assign a probability of 0.01 to this scenario. Absorption of nanoparticles in the blood would not trigger any other failures; hence we can consider this scenario to be only of medium (moderate) risk.
4.a Environment: Likelihood = 0.05; Effect = Environmental Damage; Risk = High
The product containers can be treated to remove the nanomaterial that sticks to them before they are either landfilled or incinerated, thus decreasing the likelihood of this scenario. Hence we assign a probability of only 0.05 to this scenario. This scenario is controllable although not detectable.
Figure VI.4.2. Quantitative severity scale matrix
From Figure VI.4.2 above, it can be concluded that scenario 3.c, the effects of absorption of nanomaterial in the blood from body implants, can be eliminated since it poses only a moderate overall risk. The other scenarios are carried into the subsequent phases.
Phase VI: Risk Management
In this phase a complete quantitative analysis should be performed. This involves estimating cost, performance benefits, and risk reduction for the different management options for dealing with the remaining risk scenarios associated with nanomaterial production. The identification of risk management options is beyond the scope of the current analysis.
Phase VII: Safeguarding Against Missing Critical Items
In Phase VII, the efficacy of the options identified in the risk management phase is evaluated.
Phase VIII: Operational Feedback
This last phase represents the operational stages of nanomaterial production, during which the experience and information gained from empirical results are used to continually update the scenario filtering and decision processes.
PROBLEM VI.5: Department of Statistics Assessment
A Department of Statistics wishes to maintain its prominence among universities that offer a statistics program.
DESCRIPTION
The Department of Statistics at a university wishes to be more successful in attracting students by hiring more professors and offering innovative courses. Success criteria range from being better equipped with physical resources (e.g., computer laboratories) to enhancements in course curricula.
METHODOLOGY
This exercise can be analyzed by using the eight phases of the RFRM. In going through the phases of RFRM, the Department of Statistics can determine how to maintain success.
SOLUTION
Phase I: Scenario Identification
A Hierarchical Holographic Model (HHM) was developed to describe a university's Department of Statistics' "as planned" or "success" scenario. The HHM in Figure VI.5.1 identifies the system's risk scenarios.
Figure VI.5.1. HHM for Department of Statistics’ Risk Identification
Phase II: Scenario Filtering by Domain of Interest
Phase II is concerned with filtering the risk scenarios identified in the HHM to match the perspective of the current system user, or decisionmaker. In this case, it is the Chair of the Department, who is concerned with both short-term and long-term viability. Eleven subtopics survived the filtering: Hardware, Software, Full Professors, Associate Professors, Assistant Professors, Undergraduate Students, Master's Students, Ph.D. Students, Research Funding, Graduate Courses, and Undergraduate Courses.
Phase III: Bicriteria Filtering and Ranking Using the Ordinal Version of the US Air Force Risk Matrix
The remaining scenarios are further filtered using qualitative likelihoods and consequences. There are two different types of information: the likelihood of what can go wrong, and the associated consequences. Their joint contributions are estimated on the basis of the available evidence and are displayed in the risk matrix in Table VI.5.1.
Table VI.5.1. Risk Matrix for Phase III
Phase IV: Multicriteria Evaluation
In Phase III, the Department of Statistics judged the individual risk sources by the consequence and likelihood categories and placed them into a risk matrix. In Phase IV, the process is taken one step further: the ability of a risk scenario to defeat the defenses of the system is tested against a set of eleven criteria (see Table VI.5.3). Each scenario of interest is rated as "high," "medium," or "low" against each criterion.
The first step was to define the most likely risks among the scenarios shown in the above matrix. These are listed in Table VI.5.2.
Table VI.5.2. Risk Scenarios for the Identified Subtopics
Subtopic           Risk Scenario
Grad. Courses      Failure of the department to offer sufficient graduate courses.
Research Funding   Failure to secure funds to support research activities.
Professors         Failure to hire and keep competent and active professors.
Grad. Students     Failure of graduate students to enroll in graduate courses or to apply for the graduate program.
These risk scenarios were then rated against the eleven criteria, as shown in Table VI.5.3.
Table VI.5.3. Phase IV: Rating Risk Scenarios against Eleven Criteria
Criteria                                        Grad. Courses   Research Funding   Professors   Grad. Students
Undetectability                                 Low             Low                Medium       Low
Uncontrollability                               Medium          Medium             Medium       Low
Multiple Paths to Failure                       Low             High               High         Medium
Irreversibility                                 n/a             Low                High         Medium
Duration of Effects                             Medium          High               High         Low
Cascading Effects                               High            High               High         High
Operating Environment                           High            Medium             Medium       Medium
Wear and Tear                                   n/a             n/a                n/a          n/a
Hardware/Software/Human/Organizational Errors   Low             n/a                Medium       Medium
Complexity and Emergent Behaviors               Low             Low                Medium       Medium
Design Immaturity                               Low             High               Low          Medium
Phase V: Quantitative Ranking Using the Cardinal Version of the Risk Matrix
In Phase V, filtering and ranking of scenarios continue based on quantitative and qualitative matrix scales of likelihood and consequence. Table VI.5.4 shows the risk matrix for this phase.
Table VI.5.4. Risk Matrix for Phase V
The results of the risk filtering and ranking point to the following likelihoods of failure, effects, and degrees of risk:
Professors (Full, Associate, Assistant): Likelihood of Failure = 0.0197; Effect = A (Loss of department); Risk = Extremely High
The faculty ranks are currently depleted. Should more professors choose to leave, the department will no longer be able to operate. The remaining faculty members appear intent on staying, so the department assigns a probability of approximately 2% to this scenario; the department will be notified if any professors decide to leave. The Bayesian reasoning behind this assignment is as follows. Let A denote the event that the remaining faculty choose to leave, and let E denote the relevant evidence—that the current professors intend to stay. By Bayes' theorem,
Pr(A | E) = Pr(A) Pr(E | A) / Pr(E), where Pr(E) = Pr(E | A) Pr(A) + Pr(E | not A) Pr(not A).
The prior state of knowledge about A, before receiving the evidence, is P0(A) = 0.5 = P0(not A). The probability of seeing evidence E (no stated intention of leaving) if the faculty were in fact going to leave is small; the department takes P(E | A) = 0.02. The probability of seeing that evidence given that they are indeed staying is high: P(E | not A) = 0.995. Therefore:
Pr(E) = (0.02)(0.5) + (0.995)(0.5) = 0.01 + 0.4975 = 0.5075
Pr(A | E) = (0.5)(0.02) / 0.5075 = 0.0197
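The same update can be checked numerically; the prior and conditional probabilities in the sketch below are the assumptions stated above.

# Bayesian update for the Professors scenario (numerical check of the arithmetic above).
prior_a = 0.5            # prior P(A): remaining faculty choose to leave
p_e_given_a = 0.02       # P(E|A): evidence "intend to stay" observed even though they will leave
p_e_given_not_a = 0.995  # P(E|not A): evidence observed and they indeed stay

# Total probability of the evidence, P(E)
p_e = p_e_given_a * prior_a + p_e_given_not_a * (1 - prior_a)

# Posterior P(A|E) by Bayes' theorem
posterior_a = prior_a * p_e_given_a / p_e

print(f"P(E)   = {p_e:.4f}")          # 0.5075
print(f"P(A|E) = {posterior_a:.4f}")  # 0.0197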
PhD Students: Likelihood of Failure = 0.05; Effect = B (Loss of mission); Risk = High
PhD students are a driving force in maintaining an academic department; without them there would be a loss of mission. Given the current group of PhD students, the department believes there is a 5% likelihood that they will leave.
Graduate-level Courses: Likelihood of Failure = 0.30; Effect = A (Loss of department); Risk = Extremely High
Currently, the demand for graduate-level classes offsets the lack of sponsored research in the department. If the demand from non-major students for Statistics graduate classes were to fall away, the department would shut down. Based on currently declining enrollments, the department assigned this scenario a likelihood of 30%.
MS Students: Likelihood of Failure = 0.40; Effect = B (Loss of mission); Risk = High
Educating master's students is part of the reason the department exists, and the departure of the current MS students would result in a failure to accomplish the mission. Based on its assessment of the current group of MS students, the department believes it is 40% likely that they will leave.
Research Funding: Likelihood of Failure = 1.00; Effect = C (Loss of capability with some loss of mission); Risk = Extremely High
There is currently no research funding available in the Statistics department. This makes it extremely difficult to give the graduate students the level of academic experience that the department would like to give them, although they can carry on as teaching assistants in some cases. Because there is no research funding available, this scenario is a certainty and the department has assigned it a likelihood of 1.
Phase VI: Risk Management
Based on the results of the quantitative ranking in the previous phase, risk management options can be identified and developed to mitigate the critical risk scenarios. Trade-off analysis needs to be performed to account for the performance of each option relative to cost, performance benefits, and risk reduction criteria.
Phase VII: Safeguarding Against Missing Critical Items
In Phase VII, the performance of each option developed in Phase VI is evaluated against the scenarios previously filtered out during Phases II to V.
Phase VIII: Operational Feedback
This last phase represents the operational stage of the system under consideration, during which the experience and information gained are used to continually update the scenario filtering and decision processes.
PROBLEM VI.6: Risk Assessment and Management of Acquisition Investment
Investors analyze the risk of acquiring an existing company.
DESCRIPTION
A group of investors is considering a major investment. They are looking at the possibility of buying out a company and need a framework to evaluate any offers. In particular, a method is needed to allow risk assessment of the company from all angles, to ensure that all of the risks involved are considered. Thus, they would like to identify a set of risks for the company, prioritize them, and decide what they would do to manage these risks if they were indeed to buy the company.
METHODOLOGY
Hierarchical Holographic Modeling (HHM) and Risk Filtering, Ranking, and Management (RFRM) are two methodologies that provide the analytical framework for this risk assessment and management problem. The following documentation describes an implementation of these tools.
SOLUTION
Phase I: Hierarchical Holographic Modeling (HHM)
To help answer their question of business value, the investors must construct a Hierarchical Holographic Model (HHM), which allows them to decompose the system at hand (i.e., the company they are evaluating) from multiple perspectives. Figure VI.6.1 displays the resulting HHM, where each subtopic represents a risk scenario.
Figure VI.6.1. HHM Diagram
Now that the investors have their HHM, they use the remaining phases of the Risk Filtering, Ranking, and Management (RFRM) technique to filter, rank, and create ideas for managing the company's risks.
Phase II: Scenario Filtering
In Phase II, the investors take the risk scenarios identified in the HHM subtopics and filter them according to the investors' interests and responsibilities, temporal considerations, domain expertise, and desired system functionality. Based on the scope of their analysis, the investors narrowed the 28 initial subtopics (see Figure VI.6.1) down to 12 items of top concern. Clearly, not all of the initial HHM subtopics can be of immediate concern to all levels of decisionmaking at all times. The 12 items are detailed below:
• Technology is physically used to manufacture the product as well as for communication within the office.
• Personnel function involves the different employment sectors of the company. The interrelatedness of these sectors and their ability to respond to various shocks or increased demand is important to the company's performance.
• Personnel experience is important because it indicates the value of the employees within the company. The workforce is what drives the company, and it must be able to identify and adapt to new options in the future.
• Market share represents how much of the current market the company holds and implies both its present strength and its potential for future growth through capturing more market share.
• Trade secrets (e.g., the Coca-Cola formula) are inside information, hidden beyond patents, which ensure value to the business because they provide a unique product to the consumer.
• The strategy, the organizational and future business plan of the company, must also be considered so that its future value and success can be predicted.
• Debts strongly reflect the company's financial situation, as their magnitude and terms can dictate available business options.
• Stock structure examines how the company is owned and which voting rights will be important for major company decisions. It explicitly states who "owns" the company.
• Investments will show how the company plans to make money and must be evaluated when considering profitability.
• Current interest rates also must be considered, as they affect previous subtopics such as debts and investments and also drive business activity.
• Competitors are also a huge factor, perhaps the most important, as company performance can really be seen as a function of how well it does relative to other companies in the same field. Future competitor strengths and weaknesses will strongly affect company profitability.
• Finally, the 5-year temporal domain is considered because this is the most applicable time frame for the potential investment. Five years gives enough time to see how the company is currently doing as well as how it positions itself for future growth.
After this period, another decision will be made by the investors: either to stick with the current company or to reinvest the money.
Phase III: Bicriteria Filtering
In Phase III, the investors give qualitative assessments of the likelihood and consequences of failure of the subtopics already filtered through Phase II. They define failure as the subtopic contributing negatively towards profits, i.e., costing the firm money. The subtopics listed above have different specifications, and in some cases it is necessary to clarify how a subtopic was assessed. For instance, it is obvious how the leaking of trade secrets would hurt the firm's profitability. However, how does a "5-Year Time Frame" fail? The following further describes some risks of failure:
• Personnel Experience: failure when costly mistakes are made due to a staff's lack of practice
• Debts and Stock Structure: failure when the amount or structure of each subtopic burdens the company's ability to finance itself
• 5-Year Time Frame: failure when the company will profit only if decisions are made based on time frames shorter or longer than 5 years
The categories for likelihood are:
1. Unlikely
2. Seldom
3. Occasional
4. Likely
5. Frequent
The categories for consequence are:
1. 0% Investment Loss
2. 25% Investment Loss
3. 50% Investment Loss
4. 75% Investment Loss
5. 100% Investment Loss—i.e., bankruptcy or business failure
Table VI.6.1 and Figure VI.6.2 show how the investors assessed the subtopics filtered from Phase II.
Table VI.6.1. Phase III Risk Assessments
Subtopic               Likelihood of Subtopic Failure   Consequence of Failure
Technology             4                                5
Personnel Function     3                                3
Personnel Experience   2                                2
Market Share           3                                4
Trade Secrets          2                                5
Strategy               3                                4
Debts                  2                                4
Stock Structure        2                                4
Investments            4                                5
Interest Rates         2                                2
Competitors            5                                5
5-Year Time Frame      1                                2
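Purely as an illustration (this computation is not part of the RFRM procedure itself), the bicriteria filtering rule the investors state below in connection with Figure VI.6.2 (keep a subtopic whose consequence is a total loss, or whose consequence is a 75% loss with a likelihood of occasional or higher) can be applied to the Table VI.6.1 assessments in a few lines; the numeric encoding of that rule is an assumption made for this sketch.

# Bicriteria (Phase III) filtering sketch for Table VI.6.1.
# Likelihood: 1=Unlikely ... 5=Frequent; Consequence: 1=0% loss ... 5=100% loss.
assessments = {
    "Technology": (4, 5), "Personnel Function": (3, 3), "Personnel Experience": (2, 2),
    "Market Share": (3, 4), "Trade Secrets": (2, 5), "Strategy": (3, 4),
    "Debts": (2, 4), "Stock Structure": (2, 4), "Investments": (4, 5),
    "Interest Rates": (2, 2), "Competitors": (5, 5), "5-Year Time Frame": (1, 2),
}

def survives(likelihood: int, consequence: int) -> bool:
    """Keep scenarios with catastrophic consequence, or a 75% loss at occasional likelihood or above."""
    return consequence == 5 or (consequence == 4 and likelihood >= 3)

surviving = [s for s, (lik, con) in assessments.items() if survives(lik, con)]
print(surviving)
# ['Technology', 'Market Share', 'Trade Secrets', 'Strategy', 'Investments', 'Competitors']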
Figure VI.6.2. Phase III Risk Matrix with Qualitative Probabilities
The risk matrix shown in Figure VI.6.2 is used to again filter out any failure scenarios that the investors deem to be of low priority. From their value judgments, they declare the criteria for a failure to "move on" to Phase IV to be: a consequence greater than 75% investment loss, or a consequence of 75% or more investment loss combined with a likelihood of occasional failure or higher. Based on these criteria, they refocus their analysis on the following subtopics: competitors, technology, investments, trade secrets, market share, and strategy.
Phase IV: Multicriteria Evaluation
In Phase III, the investors placed the individual risk sources into the risk matrix, using the consequence and likelihood categories described above. This matrix gave them an intuitive feel for the scenarios requiring priority attention, and narrowed their focus down to six components. They chose the four subtopics that could lead to business failure (catastrophic) and the two subtopics that had a consequence of 75% investment loss with probability assessments of occasional or above. In Phase IV, they take the process one step further by reflecting on the ability of each scenario to defeat three defensive properties of the underlying system: resilience, robustness, and redundancy, defined as follows:
• Redundancy refers to the ability of extra components in the system to assume the function of failed components.
• Robustness refers to the insensitivity of system performance to external stresses.
• Resilience is the ability of a system to recover following an emergency.
Scenarios able to defeat these properties are of greater concern, and thus are scored as more severe. As an aid to this reflection, they considered the set of eleven criteria explained in Table VI.6.2 below.
Table VI.6.2. Eleven Criteria for Rating the Ability of a Risk Scenario to Defeat the Defenses of the System
Undetectability | the absence of modes by which the initial events of a scenario can be discovered before harm occurs
Uncontrollability | the absence of control modes that make it possible to take action or make an adjustment to prevent harm
Multiple paths to failure | the multiple and possibly unknown ways for the events of a scenario to harm the system, such as circumventing safety devices, for example
Irreversibility | the adverse condition cannot be returned to the initial, operational (pre-event) condition
Duration of effects | a long duration of adverse consequences
Cascading effects | the effects of an adverse condition readily propagate to other systems or subsystems, i.e., cannot be contained
Operating environment | external stressors that affect the system
Wear and tear | the effects of use, leading to degraded performance
HW/SW/HU/OR interfaces | the adverse outcome is magnified by interfaces among diverse subsystems (e.g., hardware, software, human, and organizational)
Complexity/emergent behaviors | the potential for system-level behaviors that are not anticipated even with knowledge of the components and the laws of their interactions
Design immaturity | there are adverse consequences related to the newness of the system design or other lack of a proven concept
Table VI.6.3 shows how high-, medium-, and low-risk scenarios are rated against the 11 criteria in Table VI.6.2.
Table VI.6.3. Rating Risk Scenarios in Phase IV against the Eleven Criteria
Criterion | High | Medium | Low | Not Applicable
Undetectability | Unknown or undetectable | Late detection | Early detection | Not applicable
Uncontrollability | Unknown or uncontrollable | Imperfect control | Easily controlled | Not applicable
Multiple Paths to Failure | Unknown or many paths to failure | Few paths to failure | Single path to failure | Not applicable
Irreversibility | Unknown or no reversibility | Partial reversibility | Reversible | Not applicable
Duration of Effects | Unknown or long duration | Medium duration | Short duration | Not applicable
Cascading Effects | Unknown or many cascading effects | Few cascading effects | No cascading effects | Not applicable
Operating Environment | Unknown sensitivity or very sensitive | Sensitive to operating environment | Not sensitive to operating environment | Not applicable
Wear and Tear | Unknown or much wear and tear | Some wear and tear | No wear and tear | Not applicable
Hardware/Software/Human/Organizational | Unknown sensitivity or very sensitive | Sensitive to interfaces | No sensitivity to interfaces | Not applicable
Complexity and Emergent Behaviors | Unknown or high degree of complexity | Medium complexity | Low complexity | Not applicable
Design Immaturity | Unknown or highly immature design | Immature design | Mature design | Not applicable
Finally, Table VI.6.4 shows how the six subtopics of concern score against the 11 criteria defined by Table VI.6.2. Now that the risk scenarios have been narrowed down to a more manageable set, the decisionmakers can perform a more thorough analysis of each subtopic.
Table VI.6.4. Scoring of Subtopics for Business Evaluation Using the Criteria Hierarchy
Criteria | Technology | Market Share | Trade Secrets | Strategy | Investments | Competitors
Undetectability | Low | Med | Low | High | Low | High
Uncontrollability | Med | High | Med | High | Med | High
Multiple Paths to Failure | High | Med | High | High | Med | High
Irreversibility | Med | High | Med | High | High | Low
Duration of Effects | High | High | High | High | High | High
Cascading Effects | Med | Med | Low | Low | High | High
Operating Environment | High | High | High | High | Med | High
Wear and Tear | Med | High | Low | High | Med | High
Hardware/Software/Human/Organizational | High | High | Med | High | High | High
Complexity and Emergent Behaviors | Med | High | Low | High | High | High
Design Immaturity | Med | High | Med | High | High | Med
Phase V: Quantitative Ranking
During Phase V, the investors typically use data (e.g., historical data, event probability distributions) to determine numerical probabilities. These probabilities are then used to replace the qualitative probability descriptions from the matrices in Phase III. However, in this example there is very little data that can guide the choice of numerical probabilities. Therefore, the matrices from Phase III will again be utilized here, and a risk matrix with quantitative probabilities will not be developed.
Phase VI: Risk Management
Phase VI requires the investors to conduct a thorough analysis of the quantitative aspects of their decisions. This involves calculating costs, benefits, risk reduction, and options for managing the most dire subtopic scenarios. Completing Phase VI
requires expert analysis of the scenario subtopics to help devise risk management options.
Phase VII: Safeguarding Against Missing Critical Items
During Phase VII, the performance of the management options from Phase VI is compared with the scenarios that were filtered out between Phases II and V.
Phase VIII: Operational Feedback
In this final phase, investors would update their scenario filtering dynamically while acquiring evolving data about market trends, business cycles, and stock fluctuations. These data would allow them to improve the quality and accuracy of their HHM and RFRM analyses.
PROBLEM VI.7: Healthcare System Modeling
Modeling the US healthcare system is vital. The risks in healthcare are immense because everyone is affected by it and it deals with human lives.
DESCRIPTION
The healthcare system in the United States is a very complex industry with high levels of interaction between the government, private companies, and the public.
METHODOLOGY
This example demonstrates applying Risk Filtering, Ranking, and Management (RFRM) to model the healthcare system.
SOLUTION
Phase I: Scenario Identification with HHM
The first step is to develop a complete Hierarchical Holographic Model (HHM) for the healthcare system, as shown in Figure VI.7.1.
Figure VI.7.1. HHM for healthcare system
Phase II: Scenario Filtering
Let us suppose that the National Science Foundation (NSF) seeks to fund research and development for new technology in healthcare. The NSF is concerned with the risks in three different types of technology—IT, surgical equipment, and patient equipment. Thus, for the purpose of this analysis, we will focus only on the technology aspect of the HHM.
Phase III: Bicriteria Filtering and Ranking
Figure VI.7.2 below shows the risk matrix for the technology risk scenarios (IT, surgical equipment, and patient equipment risks). The technology may be some sort of surgical equipment or perhaps something implanted into a patient's body, such as a pacemaker. Clearly, the worst thing that could happen is for the technology to fail, causing a patient to lose his/her life. Figure VI.7.2 clearly shows which consequences, along with the probability of those consequences, constitute different levels of risk.
Figure VI.7.2. Qualitative risk matrix
Phase IV: Multicriteria Evaluation
Table VI.7.1 contains a more specific definition of each of the three remaining risk scenarios.
Table VI.7.1. Risk Scenarios for Remaining Subtopics
Subtopic                 Risk Scenario
Information Technology   Failure to collect or transmit information into a designated database for more than 24 hours.
Surgical Equipment       Failure of any part of surgical equipment during surgery for any amount of time.
Patient Equipment        Failure of a patient's equipment for any amount of time.
Table VI.7.2 scores the Healthcare subtopics using the 11 criteria.
Table VI.7.2. Rating Risk Scenarios in Phase IV
Criteria                                 IT     Surgical Equipment   Patient Equipment
Undetectability                          High   Med                  Med
Uncontrollability                        Med    Low                  Low
Multiple Paths to Failure                Med    High                 High
Irreversibility                          Low    Med                  Med
Duration of Effects                      Med    Med                  Med
Cascading Effects                        High   High                 High
Operating Environment                    Med    Med                  Med
Wear and Tear                            Low    High                 High
Hardware/Software/Human/Organizational   Med    High                 High
Complexity and Emergent Behaviors        Low    Med                  High
Design Immaturity                        Med    Med                  Med
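As a rough, purely illustrative summary of Table VI.7.2 (not a step prescribed by RFRM), one can tally how many criteria each subtopic rates as High; the Phase V observation below that IT failure is less serious than the equipment failures is consistent with this tally.

# Tally of "High" ratings per subtopic from Table VI.7.2 (illustrative summary only).
ratings = {
    "IT":                 ["High","Med","Med","Low","Med","High","Med","Low","Med","Low","Med"],
    "Surgical Equipment": ["Med","Low","High","Med","Med","High","Med","High","High","Med","Med"],
    "Patient Equipment":  ["Med","Low","High","Med","Med","High","Med","High","High","High","Med"],
}

high_counts = {subtopic: scores.count("High") for subtopic, scores in ratings.items()}
print(high_counts)  # {'IT': 2, 'Surgical Equipment': 4, 'Patient Equipment': 5}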
Phase V: Quantitative Ranking
Based on Phases III and IV, it appears that failure of information technology is not as serious as failures of surgical and patient equipment. Thus, for the rest of the analysis, IT will not be considered.
Surgical Equipment: Likelihood of Failure = 0.05; Effect = A (Loss of life); Risk = Extremely High
A failure of surgical equipment during an operation can definitely cause the loss of a patient's life. Based on hospital protocols for inspecting surgical equipment, this risk scenario is assigned a probability of 0.05. Should a technological malfunction occur, the failure would be detectable.
Patient Equipment: Likelihood of Failure = 0.15; Effect = C (Minor injuries occur); Risk = Extremely High
A failure of a patient's equipment (such as a pacemaker or oxygen supply) may be detrimental to the individual and cause serious injury; in most cases, however, minor injuries will occur. Since many patients are not supervised when using this equipment, there is a significant probability of failure, which was assigned a value of 0.15. Failure of this equipment would be detectable in most cases.
Figure VI.7.3. Quantitative scale matrix
Phase VI: Risk Management
This section briefly describes the trade-offs, costs, and benefits associated with this analysis. In terms of technology, there are clearly some options available, and there is most likely a trade-off between cost and quality. The higher the cost of developing a healthcare technology, the greater the work and quality that go into the product. Thus, to reduce the risk associated with equipment failure, a greater investment is usually needed. Of course, a lower cost is a benefit, but if the quality of a product is not high, then the risks are immense. In healthcare, people's lives are at stake, and it is nearly impossible to put a value on them; this may make it very difficult to compromise quality for cost. However, companies have no motivation if they cannot make money on a product, and thus the trade-off still exists. The clear benefit of a high-quality product is saving people's lives. It appears that in these scenarios a large increase in cost would greatly reduce the risk. To keep costs relatively low, one option is exactly what the NSF does: it funds universities to do research.
Phase VII: Safeguarding Against Missing Critical Items
The risk management options developed in Phase VI interact with information technology, which was discarded in an earlier phase. This is especially true of the patient equipment. Generally, this equipment is attached to the patient, and measurements are taken to show how it is helping the patient's body. For example, continuous glucose monitors measure and keep track of the amount of glucose in a patient with diabetes. Thus, this equipment helps to inject the patient with the correct amount of insulin, and it also provides information to doctors and clinicians about the patient's general glucose trends. The information works along with the physical equipment to improve the quality of the patient's life. If either of these fails, it compromises the value of the other.
The option suggested in Phase VI, to fund universities to develop new healthcare technologies, can be revised to include the full integration of information technology with physical technology. This helps ensure the most efficient and effective outcome. While one can operate without the other, the full potential can only be realized when these two parts work together. However, this creates another trade-off: when funding research and development, how much effort and cost should be put towards the IT part, and how much towards the physical part? There is a delicate balance here, and it would not be beneficial to lean too far towards one side. Clearly, the physical technology grants should be directed towards biomedical and electrical engineers, while the IT grants should be directed towards systems and computer science engineers. Getting various views on the same problem can improve the probability of scenario success and decrease the chance of catastrophic failure.
Phase VIII: Operational Feedback
Going through the process of developing healthcare technology will allow us to recognize other potential risks that may occur. One important head topic to discuss is Culture, because it affects every other subtopic. Different groups of people have varying beliefs about the types of medicine and treatments they should take. Thus, when considering risk scenarios of the healthcare system, it is always important to evaluate these different cultural perspectives. Another possible subtopic that could be added to the HHM is epidemics. Although disease is currently included as a subtopic, epidemics may have significantly different effects and should be a separate subtopic. The possible risk scenarios from epidemics are immense and are definitely a concern for the entire country. Finally, to perhaps better understand the healthcare system as a whole, the entire world should be considered, not just the United States. This may produce a whole new set of risk scenarios.
PROBLEM VI.8: Disaster Relief Risk Analysis
After the disaster caused by Hurricane Katrina, the city of New Orleans needs to evaluate its preparedness to handle any future natural disasters.
DESCRIPTION
New Orleans needs to minimize the possibilities of disaster in an extreme event. In particular, the city wants to gain insight into which risks are of highest priority, and how these and other risks should be effectively managed given the city's finite resources.
METHODOLOGY
A Risk Filtering, Ranking, and Management (RFRM) approach is taken to rank and order the risks.
SOLUTION
Phase I: Scenario Identification (HHM)
A Hierarchical Holographic Model (HHM) is developed to describe the system's "as planned" or "success" scenario. The resulting perspective of scenarios surrounding natural disasters in New Orleans is depicted in Figure VI.8.1. This example does not address the crucial and complex issue of rebuilding levees.
Figure VI.8.1. Hierarchical Holographic Model for natural disasters
Phase II: Scenario Filtering
The risk scenarios identified in Phase I are filtered according to the responsibilities and interests of the current system user.
A group of experts is asked to determine which subtopics, or sources of risk, are of greatest concern. The priority ordering of the sources of risk can be seen in the filtered version of the HHM in Figure VI.8.2. Given the limited resources, some subtopics, or sources of risk, are deleted as the decisionmakers and experts determine that they are not of utmost importance to address.
Figure VI.8.2. Filtered Hierarchical Holographic Model
Phase III: Bicriteria Filtering and Ranking
The remaining risk scenarios are further filtered using qualitative likelihoods and consequences. Two different types of information—the likelihoods of what can go wrong and the associated consequences—are estimated on the basis of available evidence. The qualitative severity scale matrix in Figure VI.8.3 was determined using these likelihoods and consequences:
Figure VI.8.3. Qualitative severity scale matrix
Phase IV: Multicriteria Evaluation Eleven criteria are developed that relate the ability of an at-risk scenario to defeat the defenses of the system. The 11 criteria listed in Table VI.8.1 were used to further rank and sort the risk scenarios on a scale of 1 (low) to 3 (high). Phase V: Quantitative Ranking Filtering and ranking of scenarios continues based on quantitative and qualitative matrix scales of likelihood and consequence. Using prior knowledge of distribution and probabilities, Figure VI.8.4 is constructed to demonstrate the probabilities of events occurring.
Figure VI.8.4. Probability of Events Occurring
Phase VI: Risk Management
Risk management options are identified for dealing with the filtered scenarios, and the cost, performance, benefits, and risk reduction of each are estimated. The scenarios of highest risk in the system are those concerning a flood and cellular communication, so the city must ensure that it is adequately prepared for such events. A flood management plan for New Orleans would include the following elements (listed after Table VI.8.1 below):
Table VI.8.1. Evaluation of Risk Scenarios
Criteria (scored 1 = low to 3 = high): (1) Undetectability, (2) Uncontrollability, (3) Multiple Paths to Failure, (4) Reversibility, (5) Duration of Effects, (6) Cascading Effects, (7) Operation Environment, (8) Wear and Tear, (9) HW/SW/HU/OR, (10) Complexity/Emergent Behaviors, (11) Design Immaturity

Risk Scenario                                         (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11)  Total
Flood                                                  3   3   3   2   3   3   2   3   3   2    2     29
Hurricane                                              2   2   2   2   2   2   2   2   2   2    3     23
Air, Water, Hardware, Facility                         2   2   2   1   3   2   2   1   3   2    3     23
Earthquake                                             2   1   3   3   2   2   1   3   2   1    2     22
Misc. Natural Disaster                                 2   3   1   1   3   1   2   2   3   1    1     20
Ground Transport, Water Management, Human Resource     1   1   3   2   2   2   1   1   1   2    3     19
Water Treatment, Medical Supply                        3   1   3   2   3   1   1   2   1   1    3     21
Landline                                               1   1   2   1   2   1   3   2   2   3    3     21
Radio                                                  2   1   2   3   2   1   2   3   3   2    1     22
Cellular                                               1   2   1   2   2   3   1   1   3   1    1     18
1. Communication: adequate emergency communication capability to coordinate disaster relief.
2. Medical: medical facilities and personnel that can respond to the crisis and are prepared for such natural disaster scenarios.
3. Transport: adequate transportation to relocate large numbers of displaced persons.
4. Housing: emergency housing options to accommodate temporarily displaced persons.
All of these would be needed in order to adequately address the highest-risk scenario, another flood in New Orleans.
Phase VII: Safeguarding against Missing Critical Items
The performance of the options selected in Phase VI is evaluated against the scenarios previously filtered out during Phases II to V. Phase VII is essential to ensure the relative accuracy of the multiple solutions of the model. One of the main purposes of both HHM and RFRM is to serve as a tool for learning more about the system under study; during this modeling and learning process, some discoveries may change prior assumptions or assertions. Phase VII ensures that important critical items have not been overlooked. In the case of the New Orleans flood scenario, the options identified in Phase VI map well onto the options selected in Phases II through V.
Phase VIII: Operational Feedback
Experience and information gained during the application are used to refine the scenario filtering and decision processes of the earlier phases. A central purpose of the original HHM is learning about the system itself; through "flipping," the analyst is able to switch perspectives and viewpoints and discover additional relevant information. Like Phase VII, this phase reevaluates and interprets the final results of the model, providing the analyst and the decisionmaker with an iterative process for evaluating the model's solutions and assumptions. The ultimate purpose of the HHM/RFRM model is not to output a single answer, but to give decisionmakers a greater understanding of the system itself. In addition, a cursory Bayesian analysis could be applied to the final results. Such application of the knowledge gained would greatly increase the value and meaning of the risk management plan suggested in Phase VI.
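The Phase IV scoring behind Table VI.8.1 is simple bookkeeping: each scenario receives a 1-3 rating on the eleven criteria, and the ratings are totaled. The following minimal Python sketch illustrates that step; the scenario names and ratings follow the reconstructed Table VI.8.1 and are intended only as an illustration, not as authoritative data.

# Illustrative sketch of the Phase IV multicriteria scoring (ratings follow the
# reconstructed Table VI.8.1 and are illustrative only).
CRITERIA = [
    "Undetectability", "Uncontrollability", "Multiple Paths to Failure",
    "Reversibility", "Duration of Effects", "Cascading Effects",
    "Operation Environment", "Wear and Tear", "HW/SW/HU/OR",
    "Complexity/Emergent Behaviors", "Design Immaturity",
]

# One 1-3 rating per criterion, in the order listed above.
scores = {
    "Flood":     [3, 3, 3, 2, 3, 3, 2, 3, 3, 2, 2],
    "Hurricane": [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3],
    "Cellular":  [1, 2, 1, 2, 2, 3, 1, 1, 3, 1, 1],
}

# Total each scenario's ratings and rank the scenarios from highest to lowest.
totals = {name: sum(ratings) for name, ratings in scores.items()}
for name, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{name:10s} total score = {total}")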
PROBLEM VI.9: Risk Modeling a University Athletics Program
A small university needs to update its athletic program to attract more students.
DESCRIPTION
A few of the concerns facing the new Athletics Director as he plans the next season are fundraising, attracting good athletes, and stadium security.
METHODOLOGY
This example demonstrates the Risk Filtering, Ranking, and Management (RFRM) process applied to the university-wide athletics program. The methodology progresses through eight phases to identify and filter all possible risk scenarios.
SOLUTION
Phase I: Scenario Identification
The first step is to develop a Hierarchical Holographic Model (HHM) to identify all relevant scenarios under head topics and subtopics. For brevity, the Phase I HHM is not shown here in order to focus on the filtered HHM (Phase II).
Phase II: Scenario Filtering
The following head topics and their subtopics were of greatest interest to the Athletics Director.
• Athletics
  o Fundraising/Boosters: Alumni
  o Athletes: Scholarships, Academic Performance, Injuries
  o Coaches, Staff: Recruiting, Competence, Salaries
• Transportation
  o University Parking & Transportation: Parking Lots, Sidewalks, Bus Routes
• Structural Engineering
  o Aesthetic Design
  o Size
  o Practical Design: Signage, Ingress/Egress
• Maintenance and Operations
  o Communications: Media, PA System, Event Advertising
• Safety and Security
  o Game Security: Surveillance
• Sports Facility Staff
  o Ticket Takers
  o Concessions
  o Ushers
• Economy
  o Businesses: Services, Industry
  o Food/Refreshments (at game)
Phase III: Bicriteria Ranking and Filtering
The topics in Phase II are further filtered into a risk matrix showing consequences and likelihoods, as shown in Figure VI.9.1.
Figure VI.9.1. Risk Matrix for Phase III
Phase IV: Multicriteria Evaluation
The subtopics are further filtered down to the five shown as extremely high risks in the matrix above. These risks, spelled out more specifically in Table VI.9.1, are scored against the 11 criteria listed in Table VI.9.2.
Table VI.9.1. Risk Scenarios for Five Remaining Subtopics
Subtopic             Risk Scenario
Practical Design     Failure to provide a safe emergency exit
Game Security        Failure to provide adequate security for a game
Facility Staff       Failure to provide adequate personnel to staff a game
Food/Refreshments    Contaminated food or water supply
Alumni               Loss of alumni support for an extended period
Table VI.9.2. Scoring of Subtopics for Sports Facility
Criteria                            Practical Design  Game Security  Facility Staff  Food/Refreshments  Alumni
Undetectability                     Low               Low            Medium          High               Medium
Uncontrollability                   Low               Low            Low             High               High
Multiple Paths to Failure           High              High           High            High               High
Irreversibility                     High              Low            Low             High               Low
Duration of Effects                 High              Low            Low             Medium             Medium
Cascading Effects                   High              Medium         Medium          High               High
Operating Environment               Low               High           High            Medium             Medium
Wear and Tear                       High              Medium         Medium          Low                N/A
HW/SW/Human/Org.                    Low               High           Low             High               N/A
Complexity and Emergent Behaviors   High              High           Low             Low                Medium
Design Immaturity                   Low               Low            Low             Low                Low
Phase V: Quantitative Ranking
In this phase, the five issues of greatest interest are ranked quantitatively, and their severity is illustrated graphically in the matrix in Figure VI.9.2 below.
Figure VI.9.2. Quantitative severity-scale matrix
Phase VI: Risk Management
The quantitative ranking in the previous phase indicates the critical risk scenarios that need to be managed. Risk management options should therefore be developed and prioritized according to the categories of risk, as follows:
Extremely High Risk Category:
• Practical Design: The probability of this scenario is believed to be low, because a failure to provide safe emergency exits can be detected and corrected early. However, unexpected events could prove the exits insufficient; hence the assignment of a probability of 1/100.
High Risk Category:
• Game Security: Because of factors such as health, weather, game schedule, and stress, there could be a lack of adequate security for a game. We assigned this a probability between 0.02 and 0.1.
• Food/Refreshments: Because of the high consumption of food and the risk associated with contamination, we assigned this a probability between 0.02 and 0.1.
• Facility Staff: Providing adequate personnel to staff a game depends on the same operating-environment factors as Game Security: health, weather, game schedule, stress, and so on. We assigned this scenario a probability between 0.02 and 0.1.
Moderate Risk Category:
• Alumni: The probability of loss of alumni support is extremely low.

1. Let pi be the probability density on each fractile interval. Since each interval carries a probability of 0.2,
0.2 = p1 (3 − 0) => p1 = 0.067
0.2 = p2 (8 − 3) => p2 = 0.040
0.2 = p3 (16 − 8) => p3 = 0.025
0.2 = p4 (26 − 16) => p4 = 0.020
0.2 = p5 (40 − 26) => p5 = 0.014
2. We plot the PDF curve as shown in Figure VII.1.2.
Figure VII.1.2. PDF Curve for Project Cost Increase (%) for Plan A
The PDF and CDF results are summarized in Table VII.1.3.
Table VII.1.3. CDF and PDF Summary of Plan A
CDF     Project Cost Increase (%)    PDF
0.00    0                            0.000
0.20    3                            0.067
0.40    8                            0.040
0.60    16                           0.025
0.80    26                           0.020
1.00    40                           0.014
The unconditional expected value of cost overrun, f_5(\cdot), was calculated as follows:
f_5(\cdot) = \int_0^3 0.067x\,dx + \int_3^8 0.040x\,dx + \int_8^{16} 0.025x\,dx + \int_{16}^{26} 0.020x\,dx + \int_{26}^{40} 0.014x\,dx
= 0.067(4.5) + 0.040(27.5) + 0.025(96) + 0.020(210) + 0.014(462)
= 0.3015 + 1.1 + 2.4 + 4.2 + 6.468 = 14.47\%
So the expected value of the cost overrun is $95 million × 14.47% = $13.75 million, and the total expected cost is $95 + 13.75 = $108.75 million.
3. Using the information in Figure VII.1.1, we can obtain the exceedance probability shown in Figure VII.1.3.
Figure VII.1.3. Exceedance Probability for Project Cost Increase (%) for Plan A
• In the worst 10% scenario: from Figure VII.1.3 there is a one-to-one relationship, given that the overrun occurs with a probability of 0.1 or lower. We need the conditional expected value for 1 − α = 0.1, i.e., α = 0.9. Since the exceedance probability is linear in the overrun cost, Figure VII.1.3 gives the overrun cost at 1 − α = 0.1 as
26 + (40 − 26)/2 = 26 + 7 = 33%  for α = 0.9.
The conditional expected value of cost overrun, given a 0.1 probability of exceeding the original cost estimate, is computed as follows:
f_4(\cdot) = \frac{\int_{33}^{40} x\,p(x)\,dx}{\int_{33}^{40} p(x)\,dx} = \frac{\int_{33}^{40} kx\,dx}{\int_{33}^{40} k\,dx} = \frac{\left[x^2/2\right]_{33}^{40}}{\left[x\right]_{33}^{40}} = \frac{255.5}{7} = 36.5\%
From the value of f4, 36.5% or $34.68 million, we see that although the unconditional expected project cost increase is $13.75 million, there is a 10% chance that the overrun will exceed 33% of the planned cost; under that condition the expected increase is 36.5%, or $34.68 million.
• In the better 15% scenario, using the same method as above, we let α = 0.15, so 1 − α = 0.85; there is again a one-to-one relationship over the interval from 0% to 3%. The threshold cost is obtained from
\frac{1.0 - 0.85}{1.0 - 0.8} = \frac{0.15}{0.20} = \frac{x - 0}{3 - 0} \Rightarrow x = 2.25\%
and the conditional expected overrun cost in this scenario is
f_2(\cdot) = \frac{\int_0^{2.25} x\,p(x)\,dx}{\int_0^{2.25} p(x)\,dx} = \frac{\int_0^{2.25} kx\,dx}{\int_0^{2.25} k\,dx} = \frac{\left[x^2/2\right]_0^{2.25}}{\left[x\right]_0^{2.25}} = \frac{2.53}{2.25} = 1.125\%
There is thus a 15% chance that the overrun cost will be below 2.25%, or $95 × 2.25% = $2.1 million, and the corresponding conditional expected overrun cost is $95 × 1.125% = $1.1 million.

CDF Curve for Plan B:
1. Using the same method as in Plan A, we summarize the PDF data in Table VII.1.4 and plot them in Figure VII.1.4.
Table VII.1.4. CDF and PDF of Plan B
CDF     Project Cost Increase (%)    PDF
0.00    0                            0.000
0.20    8                            0.025
0.40    14                           0.033
0.60    20                           0.033
0.80    28                           0.025
1.00    50                           0.009
Figure VII.1.4 shows the PDF curve.
Figure VII.1.4. PDF for Project Cost Increase (%) for Plan B
The unconditional expected value of cost overrun, f_5(\cdot), was calculated as follows:
f_5(\cdot) = \int_0^8 0.025x\,dx + \int_8^{14} 0.033x\,dx + \int_{14}^{20} 0.033x\,dx + \int_{20}^{28} 0.025x\,dx + \int_{28}^{50} 0.009x\,dx
= 0.025(32) + 0.033(66) + 0.033(102) + 0.025(192) + 0.009(858)
= 0.8 + 2.178 + 3.366 + 4.8 + 7.722 = 18.866\%
So the expected value of the cost overrun is $105 million × 18.866% = $19.81 million, and the total expected cost is $105 + 19.81 = $124.81 million.
2. Again using the information in Figure VII.1.1, we obtain the exceedance probability shown in Figure VII.1.5.
Figure VII.1.5. Exceedance Probability for Project Cost Increase (%) for Plan B
• In the worst 10% scenario: from Figure VII.1.5 there is a one-to-one relationship, given that the overrun occurs with a probability of 0.1 or lower. We need the conditional expected value for 1 − α = 0.1, i.e., α = 0.9. Since the exceedance probability is linear in the overrun cost, Figure VII.1.5 gives the overrun cost at 1 − α = 0.1 as
28 + (50 − 28)/2 = 28 + 11 = 39%  for α = 0.9.
The conditional expected value of cost overrun, given a 0.1 probability of exceeding the original cost estimate, is computed as follows:
f_4(\cdot) = \frac{\int_{39}^{50} x\,p(x)\,dx}{\int_{39}^{50} p(x)\,dx} = \frac{\int_{39}^{50} kx\,dx}{\int_{39}^{50} k\,dx} = \frac{\left[x^2/2\right]_{39}^{50}}{\left[x\right]_{39}^{50}} = \frac{489.5}{11} = 44.5\%
From the value of f4, 44.5% or $46.73 million, we see that although the unconditional expected project cost increase is $19.81 million, there is a 10% chance that the overrun will exceed 39% of the planned cost; under that condition the cost is expected to increase by 44.5%, or $46.73 million.
• In the better 15% scenario, using the same method as above, we let α = 0.15, so 1 − α = 0.85; there is again a one-to-one relationship over the interval from 0% to 8%. The threshold cost is
\frac{1.0 - 0.85}{1.0 - 0.8} = \frac{0.15}{0.2} = \frac{x - 0}{8 - 0} \Rightarrow x = 6.0\%
and the conditional expected overrun cost in this scenario is
f_2(\cdot) = \frac{\int_0^{6} x\,p(x)\,dx}{\int_0^{6} p(x)\,dx} = \frac{\int_0^{6} kx\,dx}{\int_0^{6} k\,dx} = \frac{\left[x^2/2\right]_0^{6}}{\left[x\right]_0^{6}} = \frac{6}{2} = 3.0\%
There is thus a 15% chance that the overrun cost will be below 6.0%, or $105 × 6.0% = $6.3 million, and the corresponding conditional expected overrun cost is $105 × 3.0% = $3.15 million.

CDF Curve for Plan C:
1. Using the CDF, we calculate the PDF as follows and plot the curve in Figure VII.1.6. Let pi be the probability of each event.
Figure VII.1.6. PDF for Project Cost Increase (%) for Plan C
Table VII.1.5 displays the summary.
Table VII.1.5. PDF Summary for Plan C
CDF     Value    PDF
0.00    0        0.000
0.20    5        0.040
0.40    13       0.025
0.60    23       0.020
0.80    29       0.033
1.00    55       0.008
The unconditional expected value of cost overrun, f_5(\cdot), was calculated as follows:
f_5(\cdot) = \int_0^5 0.04x\,dx + \int_5^{13} 0.025x\,dx + \int_{13}^{23} 0.02x\,dx + \int_{23}^{29} 0.033x\,dx + \int_{29}^{55} 0.008x\,dx
= 0.04(12.5) + 0.025(72) + 0.02(180) + 0.033(156) + 0.008(1092)
= 0.5 + 1.8 + 3.6 + 5.15 + 8.74 = 19.79\%
So the expected value of the cost overrun is $120 million × 19.79% = $23.75 million, and the total expected cost is $120 + 23.75 = $143.75 million.
2. Once more using the information in Figure VII.1.1, we obtain the exceedance probability shown in Figure VII.1.7.
Figure VII.1.7. Exceedance Probability for Project Cost Increase (%) for Plan C
• In the worst 10% scenario: from Figure VII.1.7 there is a one-to-one relationship, given that the overrun occurs with a probability of 0.1 or lower. We need the conditional expected value for 1 − α = 0.1, i.e., α = 0.9. Since the exceedance probability is linear in the overrun cost, Figure VII.1.7 gives the overrun cost at 1 − α = 0.1 as
29 + (55 − 29)/2 = 29 + 13 = 42%  for α = 0.9.
The conditional expected value of the cost overrun, given a 0.1 probability of exceeding the original cost estimate, is computed as follows:
f_4(\cdot) = \frac{\int_{42}^{55} x\,p(x)\,dx}{\int_{42}^{55} p(x)\,dx} = \frac{\int_{42}^{55} kx\,dx}{\int_{42}^{55} k\,dx} = \frac{\left[x^2/2\right]_{42}^{55}}{\left[x\right]_{42}^{55}} = \frac{630.5}{13} = 48.5\%
From the value of f4, 48.5% or $58.2 million, we see that although the unconditional expected project cost increase is $23.75 million, there is a 10% chance that the overrun will exceed 42% of the planned cost; under that condition the expected cost increase is 48.5%, or $58.2 million.
• In the better 15% scenario, using the same method as above, we let α = 0.15, so 1 − α = 0.85; there is again a one-to-one relationship over the interval from 0% to 5%. The threshold cost is
\frac{1.0 - 0.85}{1.0 - 0.8} = \frac{0.15}{0.20} = \frac{x - 0}{5 - 0} \Rightarrow x = 3.75\%
Thus, the conditional expected overrun cost in this scenario is:
f_2(\cdot) = \frac{\int_0^{3.75} x\,p(x)\,dx}{\int_0^{3.75} p(x)\,dx} = \frac{\int_0^{3.75} kx\,dx}{\int_0^{3.75} k\,dx} = \frac{\left[x^2/2\right]_0^{3.75}}{\left[x\right]_0^{3.75}} = \frac{3.75}{2} = 1.88\%
We also see that there is a 15% chance that the overrun cost will be below 3.75%, or $120 × 3.75% = $4.5 million, and the corresponding conditional expected overrun cost is $120 × 1.88% = $2.26 million.
ANALYSIS
From the above analysis, we can summarize the results as follows (each entry gives the percentage of overrun followed by the corresponding amount in $ million):
Table VII.1.6. Summary of Results
                     Unconditional expected    Worst 10% scenario (f4)              Better 15% scenario (f2)
          Cost $M    value (f5)                Threshold         Expected value     Threshold        Expected value
Plan A    95         14.47%   13.75            33.00%   31.35    36.50%   34.68     2.25%   2.14     1.13%   1.07
Plan B    105        18.87%   19.81            39.00%   40.95    44.50%   46.73     6.00%   6.30     3.00%   3.15
Plan C    120        19.79%   23.75            42.00%   50.40    48.50%   58.20     3.75%   4.50     1.88%   2.26

We can plot the values of the costs f2, f4, and f5 for each plan in the same diagram, as shown in Figure VII.1.8.
Figure VII.1.8. Comparison of the Conditional and Traditional Expected Values
• The minimum unconditional expected overrun cost is for Plan A.
• In the worst-case scenario Plan C has the largest overrun cost, while Plan B is higher than the others in the better scenario.
• Comparing the differences between the expected values of f5 and f4 for each plan, Plan C has the largest difference, 28.71% (48.5% − 19.79%), and Plan A the smallest, 22.03% (36.50% − 14.47%).
• Plan C carries more risk in both the worst and the normal scenarios, but it also has the most stations, and therefore more profit. If the THSRC has a good civil subcontractor and strong project management, Plan C may be considered the best choice.
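The calculations for the three plans lend themselves to a short numerical cross-check. The sketch below is a minimal Python illustration that assumes only the fractile data of Tables VII.1.3-VII.1.5; because the text works with rounded interval densities (0.067, 0.040, ...), its f5 values may differ slightly from the exact fractile results produced here.

# A minimal PMRM sketch for Plans A-C, assuming the CDF fractile points of
# Tables VII.1.3-VII.1.5. The CDF is piecewise linear, so the density is uniform
# on each interval and every expectation reduces to interval midpoints.
PLANS = {          # % cost overrun at CDF = 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
    "A": [0, 3, 8, 16, 26, 40],
    "B": [0, 8, 14, 20, 28, 50],
    "C": [0, 5, 13, 23, 29, 55],
}
STEP = 0.2         # probability mass carried by each fractile interval

def f5(points):
    """Unconditional expected overrun: probability-weighted interval midpoints."""
    return sum(STEP * (lo + hi) / 2 for lo, hi in zip(points, points[1:]))

def partition(points, alpha):
    """Overrun level whose exceedance probability is 1 - alpha (linear CDF)."""
    probs = [i * STEP for i in range(len(points))]
    for (p0, x0), (p1, x1) in zip(zip(probs, points), zip(probs[1:], points[1:])):
        if p0 <= alpha <= p1:
            return x0 + (alpha - p0) / (p1 - p0) * (x1 - x0)
    raise ValueError("alpha must lie in [0, 1]")

def f4(points, alpha=0.90):
    # Conditional mean of the worst (1 - alpha) tail; valid because the partition
    # falls inside the last fractile interval for alpha = 0.9 in these data.
    return (partition(points, alpha) + points[-1]) / 2

def f2(points, alpha=0.15):
    # Conditional mean of the best alpha tail; the partition falls inside the
    # first fractile interval for alpha = 0.15 in these data.
    return (points[0] + partition(points, alpha)) / 2

for name, pts in PLANS.items():
    print(f"Plan {name}: f5 = {f5(pts):5.2f}%, f4 = {f4(pts):5.2f}%, f2 = {f2(pts):5.3f}%")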
PROBLEM VII.2: Supplier Selection
A company has recently adopted a policy of dealing with only a single supplier for its product. It is evaluating its two contractor companies, A and B. How can it evaluate the contractors' performance?
DESCRIPTION
Performance data on the two candidates were obtained from their transaction records. These contain the cost overruns (defined as the percentage increase in cost over the normal cost) resulting from the contractor's failure to deliver on time, to deliver the required number of units, or to deliver the required quality, as well as other sources of increased cost (see Appendix). The choice is difficult because the conventional expected-value measure does not yield a significant difference between the cost overruns of the two contractors.
Table VII.2.1. Average Cost Overrun by Subcontractor
Subcontractor    Average Cost Overrun (%)
A                50.3
B                53.3
Management feels that the contractors' reliability cannot be truly represented by the expected value alone. They cite instances of very costly transactions with A in the past and would like to look into that aspect as well.
Referring to the data shown below and in the Appendix, the cumulative distribution functions of Subcontractors A and B are superimposed to make preliminary deductions as to which of the two is more reliable.
Figure VII.2.1. CDF of Subcons' Cost Overruns
Although Subcon A shows superiority in terms of the 25th, 50th, and 75th quartiles, this does not necessarily guarantee that it is the better option. It should be noted that historical performance suggests that Subcon B has a lower maximum cost overrun
(143%) than Subcon A (160%). Furthermore, although the mean of Subcon B is slightly higher than that of A, it is evident from their probability density functions that Subcon B has a shorter tail, which implies that their behavior differs significantly at extreme values of cost overrun. Choosing the better subcontractor therefore cannot rest only on the "business as usual" definition of expected value. This problem exemplifies a case where the PMRM (Partitioned Multiobjective Risk Method) is very handy and meaningful.
Figure VII.2.2. PDFs by Subcontractor
The preceding probability density functions were constructed from the CDF, using the definition
\mathrm{CDF}(x) = \sum_x p(x), \qquad \text{and consequently} \qquad p(x) = \frac{\Delta y}{\Delta x},
where y is the cumulative probability and x is the cost overrun. This means that to obtain the height of the PDF, say between 0 and the 25th quartile, we take the slope of the CDF over the specified interval.
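A minimal Python sketch of this slope calculation follows; the quartile values are assumed from the f5 computation given later in this solution, and the printed heights match the densities used there.

# Piecewise-constant PDF heights from the CDF quartiles: p(x) = dy/dx with dy = 0.25.
quartiles = {
    "Subcon A": [0.0, 16.57, 40.67, 64.78, 160.0],   # % cost overrun at CDF = 0, .25, .5, .75, 1
    "Subcon B": [0.0, 21.07, 47.37, 73.66, 143.0],
}

for name, q in quartiles.items():
    heights = [round(0.25 / (hi - lo), 4) for lo, hi in zip(q, q[1:])]
    print(name, heights)
# Subcon A -> [0.0151, 0.0104, 0.0104, 0.0026]
# Subcon B -> [0.0119, 0.0095, 0.0095, 0.0036]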
It is also worthwhile to show the exceedance graph, which is simply 1 − CDF. This helps visualize the subsequent analysis using the Partitioned Multiobjective Risk Method (PMRM).
Figure VII.2.3. Exceedance Probabilities of Subcons' Cost Overrun
METHODOLOGY
For this exercise, the subcontractors are evaluated according to cost overrun using the PMRM. It is necessary to compute the conditional expected-value risk function f4 and the unconditional expected-value risk function f5, using the following general expression:
f_i(s_j) = \frac{\int_{\beta_{i-2,j}}^{\beta_{i-1,j}} x\,p_x(x; s_j)\,dx}{\int_{\beta_{i-2,j}}^{\beta_{i-1,j}} p_x(x; s_j)\,dx}, \qquad i = 4, \quad j = A, B
where
s_j : subcontractor j, j = A, B
x : cost overrun associated with s_j
p_x(x; s_j) : the pdf of the cost overruns
f_4(\cdot) : the conditional expectation for low exceedance probability and high severity
\beta_i : the unique cost-overrun point corresponding to the exceedance probability (1 − \alpha_i), where \alpha_i defines the range of severity relevant to the analysis
A cost overrun with an exceedance probability of 0.1 represents the point at which the extreme consequence begins. These cost-overrun values for Subcons A and B are calculated as follows (refer to the PDFs shown previously). Since the upper quartile of the PDF carries 0.25 probability, dividing its range by 2.5 yields the portion of the upper quartile that carries 0.10 probability. Therefore,
Width of the 0.1-probability region = (Range of Upper Quartile) / 2.5
Subcon A: (160 − 64.78) / 2.5 = 38.088, giving a partition point of 160 − 38.088 ≈ 121.9%
Subcon B: (143 − 73.66) / 2.5 = 27.736, giving a partition point of 143 − 27.736 ≈ 115.264%
SOLUTION
The conditional expected values for the high-consequence, low-probability regions of the subcontractors are:
Subcontractor A:
f_4(\cdot) = \frac{\int_{121.9}^{160} x f(x)\,dx}{\int_{121.9}^{160} f(x)\,dx} = \frac{\frac{0.0026}{2}\left(160^2 - 121.9^2\right)}{0.0026\,(160 - 121.9)} = 140.956\% \text{ cost overrun}
Subcontractor B:
f_4(\cdot) = \frac{\int_{115.264}^{143} x f(x)\,dx}{\int_{115.264}^{143} f(x)\,dx} = \frac{\frac{0.0036}{2}\left(143^2 - 115.3^2\right)}{0.0036\,(143 - 115.3)} = 129.132\% \text{ cost overrun}
The unconditional expected values (f5) of the cost overruns of Subcontractors A and B are computed as:
Subcontractor A:
f_5(\cdot) = \int_0^{\infty} x\,p(x)\,dx = \frac{16.57^2 - 0}{2}(0.0151) + \frac{40.67^2 - 16.57^2}{2}(0.0104) + \frac{64.78^2 - 40.67^2}{2}(0.0104) + \frac{160^2 - 64.78^2}{2}(0.0026) = 50.291
Subcontractor B:
f_5(\cdot) = \int_0^{\infty} x\,p(x)\,dx = \frac{21.07^2 - 0}{2}(0.0119) + \frac{47.37^2 - 21.07^2}{2}(0.0095) + \frac{73.66^2 - 47.37^2}{2}(0.0095) + \frac{143^2 - 73.66^2}{2}(0.0036) = 53.347
ANALYSIS
The conditional expected value contributes significantly to the analysis of the two subcontractors' expected performance. In the extreme 10% region, Subcontractor B has a significantly lower conditional expected cost overrun than Subcontractor A. The following graph shows the conditional and unconditional expected values of the two cost overruns. It is worth noting that, when considering only the extreme cases, the conditional expected overrun is more than twice the unconditional overrun.
Figure VII.2.4. Plot of f4 and f5
APPENDIX. Cost Overrun Data of Subcons’ A and B
PROBLEM VII.3: Job Hunting
Sally was recently laid off. Though she has some savings, she is desperate to find a job. Her field is accounting, but she knows that if she cannot find an accounting job, she may need to consider retail or waitressing. Depending on the industry, Sally is prepared to dip into her savings. She would like to know what percentage of her current salary she might lose in her next job; this will help her determine whether she will be able to pay her mortgage. If she cannot, she may need to sell her house or find a roommate. To keep her house, she must earn at least 75% of her original salary, or earn 50% of her original salary and take in a roommate. Her current options are listed below; for each, Sally projects a certain cost to her savings pool.
1. Find work in the accounting industry.
2. Find work in the retail industry.
3. Find work in the restaurant industry.
DESCRIPTION
From market data, the following table and graph were constructed to represent the probabilities of Sally's yearly salary loss in each of the three industries.
Table VII.3.1. Annual Salary Loss by Industry
Option        Best (0)   25th   Median (50th)   75th   Worst (100)
Accounting    0%         10%    20%             30%    40%
Retail        25%        40%    50%             55%    60%
Restaurant    40%        50%    60%             65%    70%
Figure VII.3.1. CDF for Sally's Potential Salary
METHODOLOGY
For this problem, each job is evaluated according to annual salary loss using the PMRM. The analysis requires computing the conditional expected value f4 (after finding the partitioning points) and the unconditional expected value f5.
SOLUTION
The expected value of salary loss (see below) for each of the industries was computed using the fractile-method equations, based on the PDF.
Table VII.3.2. Expected Value of Salary Loss
              1st      2nd      3rd      4th      Total EV
Accounting    1.25     3.75     6.25     8.75     20
Retail        8.13     11.25    13.13    14.38    47
Restaurant    11.25    13.75    15.63    16.88    58
The following table includes the computed expected values as well as the estimated cost to Sally's savings for each option.
Table VII.3.3. Summary of Expected Value of Salary Loss and Cost
              Estimated Cost to Savings ($K/yr)    f5(·) %
Accounting    0.00                                 20.00
Retail        15.00                                46.88
Restaurant    20.00                                57.50
As the expected value does not account for extreme values, we now calculate the conditional expected values using the exceedance probability. First, we must determine the integration points used to calculate the conditional expected values. The graph below serves as a visual check on the following numerical method.
Accounting:
\frac{x - 30}{40 - 30} = \frac{0.25 - (1 - \alpha)}{0.25} \Rightarrow x = 30 + \frac{0.15 \times 10}{0.25} = 36\%
Retail:
\frac{x - 55}{60 - 55} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 55 + \frac{0.15 \times 5}{0.25} = 58\%
Restaurant:
\frac{x - 65}{70 - 65} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 65 + \frac{0.15 \times 5}{0.25} = 68\%
Figure VII.3.2. Exceedance Probability by Industry
With the 10% partition points known, the integration can be done to determine the conditional expected values.
Accounting:
f_4(\cdot) = \frac{\int_{36}^{40} xK\,dx}{\int_{36}^{40} K\,dx} = \frac{\left[x^2/2\right]_{36}^{40}}{\left[x\right]_{36}^{40}} = \frac{1600 - 1296}{2(40 - 36)} = 38\%
Retail:
f_4(\cdot) = \frac{\int_{58}^{60} xK\,dx}{\int_{58}^{60} K\,dx} = \frac{\left[x^2/2\right]_{58}^{60}}{\left[x\right]_{58}^{60}} = \frac{3600 - 3364}{2(60 - 58)} = 59\%
Restaurant:
f_4(\cdot) = \frac{\int_{68}^{70} xK\,dx}{\int_{68}^{70} K\,dx} = \frac{\left[x^2/2\right]_{68}^{70}}{\left[x\right]_{68}^{70}} = \frac{4900 - 4624}{2(70 - 68)} = 69\%
Table VII.3.4. Summary of f4, f5, and Cost
              Estimated Cost to Savings ($K/yr)    f5(·) %    f4(·) %
Accounting    0.00                                 20         38
Retail        15.00                                47         59
Restaurant    20.00                                58         69
ANALYSIS
Figure VII.3.3. Comparison of f4 and f5
As both the tabular and the graphical results show, the expected values of Sally's salary loss are significantly different when the extreme-value (conditional) measure is used. As long as she is able to find a job in accounting, it is fair to expect that she will have nothing to worry about in terms of her savings. Based on her original calculations, Sally feels confident that if she gets a job in accounting she can still comfortably pay her mortgage without taking on a roommate or dipping into her savings. If she works in retail, she will surely need to find a roommate and will deplete her savings considerably. If she takes a job as a waitress, neither her savings nor a roommate will allow her to afford her house. Based on this analysis, she can see that she must focus her efforts on jobs in accounting and retail.
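Sally's housing rule can also be checked directly against the expected and conditional losses. The sketch below is illustrative only; it assumes the f5 and f4 percentages of Table VII.3.4 and the 75%/50% thresholds stated in the problem.

# Illustrative check of the housing rule against the Table VII.3.4 results.
losses = {               # (f5, f4) percentage salary loss
    "Accounting": (20, 38),
    "Retail":     (47, 59),
    "Restaurant": (58, 69),
}

for job, (expected_loss, worst_loss) in losses.items():
    for label, loss in (("expected", expected_loss), ("worst 10%", worst_loss)):
        remaining = 100 - loss                  # % of the original salary kept
        if remaining >= 75:
            verdict = "keeps the house"
        elif remaining >= 50:
            verdict = "keeps the house with a roommate"
        else:
            verdict = "cannot keep the house"
        print(f"{job:11s} [{label:9s}] salary retained {remaining:3d}% -> {verdict}")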
PROBLEM VII.4: Church Facility Development
Three options are to be evaluated for a new church facility. It should have 844 seats for worship, 10 offices for the staff, 16 classrooms for Bible study and Sunday School, and 211 parking spaces with the ability to add 80 more spaces within the design. Each element of the design should also accommodate a minimum of 20% growth through additions and modifications. The church has three options that will satisfy its facility requirements and needs. It has evaluated its funding and initially anticipates the cost of every option to be $13,686,000. While any of the three options satisfies the requirements, the church understands that cost overruns are possible. Which option would be best, and how can it be assessed when considering events within the 15% worst case?
DESCRIPTION
The church would like to have a better understanding of such cost overruns. The differences among the options are as follows:
Option A: Develop a new facility on the current church site, where the improvements have already passed the county zoning board and the project will meet the needs of the church within 13 months.
Option B: Acquire land in a new location and have a building designed specifically for that site. The county would have to review and approve any design for the specific site. Meanwhile, church operations would continue at the current location, and worship services would continue to be held in local high schools.
Option C: Purchase an existing office building or warehouse and modify it to meet the church's needs. The county would have to review and approve the final design before any building or land modification begins. The church would have to sell its existing facility to afford the purchase.
Table VII.4.1. Percentage of Cost Increase for Each Option and Initial Budget
                           % Cost Increase
Probability    Existing A    New B    Update C    Exceedance
0.00           0             0        0           1.00
0.25           10            20       15          0.75
0.50           15            40       20          0.50
0.75           20            50       30          0.25
1.00           25            75       50          0.00
Initial budget: $13,686,000
METHODOLOGY
For this problem, each option is evaluated according to cost overrun using the PMRM. It is necessary to compute the conditional expected value f4 and the unconditional expected value f5, complemented by the property of the fractile method.
SOLUTION
The church's budget, or anticipated cost, for all three options is $13,686,000, but it recognizes that the potential cost overruns of the three project options vary greatly. The church would like to use this analysis to form a more realistic budget expectation when it moves forward with one of the building plans. This can be accomplished by comparing the traditional expected values with the conditional ones. The following tables and figures give the expected values and the associated probabilities of a cost overrun for each option. They depict the risk as percentages of possible cost increases and overruns and show the actual cost increases over the target value.
Table VII.4.2. Expected Value (f5) of the Percentage of Project Cost Increase
                                 Fractile Range
Option   0-0.25             0.25-0.50            0.50-0.75             0.75-1.0              % over    Total $ over    Final Cost
A        1.250  $171,075    3.125  $427,688      4.375   $598,763      5.625   $769,838      14.375    $1,967,363      $15,653,363
B        2.500  $342,150    7.500  $1,026,450    11.250  $1,539,675    15.625  $2,138,438    36.875    $5,046,713      $18,732,713
C        1.875  $256,613    4.375  $598,763      6.250   $855,375      10.000  $1,368,600    22.500    $3,079,350      $16,765,350
The cumulative distribution function for the fractile approach is shown below. 1
1
A B C
0.75
Probability
Probability
0.75
0.5
0.25
A
0.5
0.25
B C 0
0 0%
20%
40% % Cost Increase
60%
80%
0%
20%
40%
60%
80%
% Cost Increase
Figure VII.4.1. CDF and Exceedance Probability by Option To calculate f4, the first step is to figure out partitioning points for each option and apply the property of the fractile method to compute them:
Option A: \frac{x - 20}{25 - 20} = \frac{0.25 - (1 - 0.85)}{0.25} \Rightarrow x = 22\%
Option B: \frac{x - 50}{75 - 50} = \frac{0.25 - (1 - 0.85)}{0.25} \Rightarrow x = 60\%
Option C: \frac{x - 30}{50 - 30} = \frac{0.25 - (1 - 0.85)}{0.25} \Rightarrow x = 38\%
Using these partitioning points, the conditional expected values are computed as follows:
Option A: f_4(\cdot) = \frac{20 + 22}{2} = 21\%
Option B: f_4(\cdot) = \frac{60 + 75}{2} = 67.5\%
Option C: f_4(\cdot) = \frac{38 + 50}{2} = 44\%
The unconditional and conditional expected values for each option are summarized in Table VII.4.3.
Table VII.4.3. Summary of f5 and f4 (Conditional Expected Value)
            f5         f4
Option A    14.375%    21%
Option B    36.875%    67.5%
Option C    22.5%      44%
ANALYSIS
The f5 results show that Option A has the least opportunity for cost overrun, owing to the level of prior planning, the understanding of the site specifics, prior approval by the county, and previous architectural and geological surveys. Each of the other options carries more inherent risk because of its unknowns. Of these two, updating an existing facility (Option C) carries less risk than a new building and site (Option B).
Figure VII.4.2. Comparison of f4 vs. f5 by Option
Moreover, even if the 15% worst case occurs, Option A shows the least variation from its unconditional expected value. Therefore, Option A can be recommended on the basis of both f4 and f5, whereas Option B (new land acquisition and construction) should not be selected in any case, since it shows the worst cost overrun under either measure.
PROBLEM VII.5: Investment in the Construction of a Meteorological Observatory
The more precise the weather forecast, the smaller the loss from severe weather events such as heavy rainfall or snow. How can a state government improve the accuracy of its weather forecasts to minimize cascading losses from incorrect forecasts?
DESCRIPTION
A state government is considering constructing a meteorological observatory in order to forecast the weather more precisely. The trade-off between the cost of construction and the loss resulting from severe weather is described in Table VII.5.1. Assume that the independent loss functions are normally distributed.
Table VII.5.1. Trade-off for Policies with Standard Deviation (in million dollars)
Policy    Construction Cost    Estimated Loss    Standard Deviation of Loss
1         10                   50                9
2         20                   45                7
3         30                   40                5
4         40                   35                3
5         50                   30                1
METHODOLOGY
In order to use the PMRM, traditional and conditional expected values need to be calculated. These values are based on the probability distributions shown in Figure VII.5.1.
Figure VII.5.1. Probability Density Functions for Five Policy Options
Based on the PDFs, the CDFs and exceedance probability functions can be drawn:
Figure VII.5.2. Cumulative Distribution Functions for Five Policy Options
0.9 0.8 0.7
1 2 3 4 5
1-P
0.6 0.5 0.4 0.3 0.2 0.1 0 0
20
40
Damage
60
80
100
Figure VII.5.3. Partitioning the Exceedance Probability Axis onto the Damage Axis The plot of the exceedance probability axis partitioned onto the damage axis. The red line represents Policy Option 5, which dominates the four other policies. That is,
That is, for forecasting extreme weather events, a decisionmaker can at a glance prefer Policy Option 5 to the other options, even though its cost is much higher.
SOLUTION
Given Table VII.5.1, the PMRM summary and the Pareto-optimal frontier incorporating every policy can be calculated, as shown in Table VII.5.2.
Table VII.5.2. PMRM Summary
            Policy 1    Policy 2    Policy 3    Policy 4    Policy 5
µ           50          45          40          35          30
σ           9           7           5           3           1
α           0.95        0.95        0.95        0.95        0.95
β           64.80       56.51       48.22       39.93       31.64
f(β)        0.0115      0.0147      0.0206      0.0344      0.1031
1 − F(β)    0.0500      0.0500      0.0500      0.0500      0.0500
f4          68.5644     59.4390     50.3136     41.1881     32.0627
f5          50          45          40          35          30
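Because the losses are normally distributed, the β, f4, and f5 rows of Table VII.5.2 can be reproduced with a few lines of Python. The sketch below assumes only the means and standard deviations of Table VII.5.1 and uses scipy for the standard-normal quantile and density.

# Minimal sketch reproducing Table VII.5.2 for alpha = 0.95 (normal loss distributions).
from scipy.stats import norm

policies = {1: (50, 9), 2: (45, 7), 3: (40, 5), 4: (35, 3), 5: (30, 1)}  # mu, sigma
alpha = 0.95
z = norm.ppf(alpha)                                   # ~1.645

for k, (mu, sigma) in policies.items():
    beta = mu + sigma * z                             # damage with exceedance prob. 1 - alpha
    f5 = mu                                           # unconditional expected damage
    f4 = mu + sigma * norm.pdf(z) / (1 - alpha)       # mean of the upper 5% tail
    print(f"Policy {k}: beta = {beta:6.2f}, f5 = {f5:5.1f}, f4 = {f4:8.4f}")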
ANALYSIS
Figure VII.5.4. Pareto-optimal Frontier
In Figure VII.5.4, two Pareto-optimal frontiers can be observed: unconditional and conditional expected damage, each plotted against the estimated cost of constructing the observatory. For instance, choosing Policy 1 implies a $50 million unconditional expected damage but approximately $69 million in conditional expected damage. Policy Option 1 has the largest estimated variance of damage, and the difference between its two expected values is also the largest among the policy options. This difference can therefore be taken into consideration in deciding the amount of investment, and it can help the decisionmaker judge whether the additional investment is justified.
PROBLEM VII.6: Contractor Selection
SM Construction Consultants (SMCC) is reviewing proposals for the construction of a four-story building scheduled for completion in September 2009. SMCC's task is to estimate the total cost of erecting the building and to choose the contractor according to the estimates.
DESCRIPTION
Five area contractors submitted cost estimates after reviewing preliminary plans. SMCC must evaluate each bid and determine a projected cost for the project. Details from each contractor are listed in Table VII.6.1.
Table VII.6.1. Project Cost by Contractor
Contractor        Project Cost
Contractor "A"    $1,125,000
Contractor "B"    $1,375,000
Contractor "C"    $1,050,000
Contractor "D"    $1,250,000
Contractor "E"    $1,075,000
To estimate the cost of the project more accurately, SMCC has gathered data and statistics from prior projects of each contractor, including estimates of the percentage cost overrun relative to the originally projected cost. From these data, summarized in Table VII.6.2, a more precise estimate of the final project cost can be determined.
Table VII.6.2. Percentage Cost Overrun for Selected Contractors
                  Best (0)    25th    Median (50th)    75th    Worst (100)
Contractor "A"    2           15      22               42      50
Contractor "B"    0           10      15               25      40
Contractor "C"    4           25      40               50      60
Contractor "D"    3           12      25               35      70
Contractor "E"    2           20      35               50      80
METHODOLOGY
In selecting a new contractor, SMCC should estimate the projected cost overrun using the PMRM. For this analysis, the unconditional and conditional expected values (f5 and f4) are calculated using the properties of the fractile method and the general equation
f_4(x) = \frac{\int_{\beta_{p,j}}^{\beta_{u,j}} x\,p(x)\,dx}{\int_{\beta_{p,j}}^{\beta_{u,j}} p(x)\,dx} \qquad \text{for each contractor } j
where
\beta_{u,j} : upper bound for the jth contractor
\beta_{p,j} : partitioning point for the jth contractor
SOLUTION
First, the percentage cost overrun for each contractor is presented as both a cumulative distribution function and a probability distribution function, and is analyzed using the fractile method (and checked by integration). The graphs for each contractor are shown below.
Figure VII.6.1. Cumulative Distribution Function (CDF) and Probability Distribution Function (PDF) for Contractor A
Figure VII.6.2. CDF and PDF for Contractor B
Figure VII.6.3. CDF and PDF for Contractor C
Figure VII.6.4. CDF and PDF for Contractor D
Figure VII.6.5. CDF and PDF for Contractor E
Next, the expected value f5(x) of the percentage cost overrun is determined for each contractor (and checked with integrals).
(i) Contractor "A"
Fractile Method:
f_5(x) = (0.25)\left(2 + \tfrac{15-2}{2}\right) + (0.25)\left(15 + \tfrac{22-15}{2}\right) + (0.25)\left(22 + \tfrac{42-22}{2}\right) + (0.25)\left(42 + \tfrac{50-42}{2}\right) = 2.125 + 4.625 + 8.000 + 11.500 = 26.250\%
Integral Method:
f_5(x) = \int_2^{15} 0.0192x\,dx + \int_{15}^{22} 0.0357x\,dx + \int_{22}^{42} 0.0125x\,dx + \int_{42}^{50} 0.0313x\,dx
= (2.160 - 0.384) + (8.639 - 4.016) + (11.025 - 3.025) + (39.125 - 27.607)
f_5(x) = 25.917\%
Contractor B
Fractile Method
f 5 ( x) = (0.25)(10 - 0)/2 + (0.25)(10 + (15 - 10)/2) + (0.25)(15 + (25 - 15)/2) + (0.25)(25 + (40 - 25)/2) = 1.250 + 3.125 + 5.000 + 8.125
f 5 ( x) = 17.500% Integral Method 10
f_5(x) = \int_0^{10} 0.0250x\,dx + \int_{10}^{15} 0.0500x\,dx + \int_{15}^{25} 0.0250x\,dx + \int_{25}^{40} 0.0167x\,dx
= 1.250 + (5.625 - 2.500) + (7.8125 - 2.8125) + (13.360 - 5.218)
f_5(x) = 17.516\%
(iii) Contractor "C"
Fractile Method:
f_5(x) = (0.25)\left(4 + \tfrac{25-4}{2}\right) + (0.25)\left(25 + \tfrac{40-25}{2}\right) + (0.25)\left(40 + \tfrac{50-40}{2}\right) + (0.25)\left(50 + \tfrac{60-50}{2}\right) = 3.620 + 8.125 + 11.250 + 13.750 = 36.745\%
Integral Method:
f_5(x) = \int_4^{25} 0.0119x\,dx + \int_{25}^{40} 0.0167x\,dx + \int_{40}^{50} 0.0250x\,dx + \int_{50}^{60} 0.0250x\,dx
= (3.7188 - 0.1592) + (13.360 - 5.2188) + (31.250 - 20.000) + (45.000 - 31.250)
f_5(x) = 36.700\%
(iv) Contractor "D"
Fractile Method:
f_5(x) = (0.25)\left(3 + \tfrac{12-3}{2}\right) + (0.25)\left(12 + \tfrac{25-12}{2}\right) + (0.25)\left(25 + \tfrac{35-25}{2}\right) + (0.25)\left(35 + \tfrac{70-35}{2}\right) = 1.875 + 4.625 + 7.500 + 13.125 = 27.125\%
Integral Method:
f_5(x) = \int_3^{12} 0.0278x\,dx + \int_{12}^{25} 0.0192x\,dx + \int_{25}^{35} 0.0250x\,dx + \int_{35}^{70} 0.0071x\,dx
= (2.0016 - 0.1251) + (6.000 - 1.3824) + (15.3125 - 7.8125) + (17.395 - 4.3488)
f_5(x) = 27.0403\%
(v) Contractor "E"
Fractile Method:
f_5(x) = (0.25)\left(2 + \tfrac{20-2}{2}\right) + (0.25)\left(20 + \tfrac{35-20}{2}\right) + (0.25)\left(35 + \tfrac{50-35}{2}\right) + (0.25)\left(50 + \tfrac{80-50}{2}\right) = 2.750 + 6.875 + 10.625 + 16.250 = 36.500\%
Integral Method:
f_5(x) = \int_2^{20} 0.0139x\,dx + \int_{20}^{35} 0.0167x\,dx + \int_{35}^{50} 0.0167x\,dx + \int_{50}^{80} 0.0083x\,dx
= (2.780 - 0.0278) + (10.2288 - 3.340) + (20.875 - 10.2288) + (26.560 - 10.375)
f_5(x) = 36.472\%

The total expected cost of the project for each contractor is shown in Table VII.6.3.
Table VII.6.3. Total Expected Cost by Contractor
                  Bid Price     Cost Overrun f5(x)    Total Expected Cost
Contractor "A"    $1,125,000    26.250%               $1,420,312
Contractor "B"    $1,375,000    17.500%               $1,615,625
Contractor "C"    $1,050,000    36.745%               $1,435,823
Contractor "D"    $1,250,000    27.040%               $1,588,000
Contractor "E"    $1,075,000    36.500%               $1,467,375
Figure VII.6.6. Comparison of Expected Cost of Project
Generate the Conditional Expected Value f4(x) Using Both the Fractile Method and the Integration Method
SMCC is now interested in the worst 10% scenario, that is, the conditional expected value of the percentage of cost overrun given that the cost overrun occurs with a probability of 0.10 or lower. The partition point on the damage axis corresponds to (1 − α) = 0.1; therefore, the damage axis must be partitioned at α = 0.9. Using simple geometry, the percentage of cost overrun associated with an exceedance probability of 0.1 is computed as follows:
(i) Contractor "A": \frac{x - 42}{50 - 42} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 46.8\%
(ii) Contractor "B": \frac{x - 25}{40 - 25} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 34.0\%
(iii) Contractor "C": \frac{x - 50}{60 - 50} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 56.0\%
(iv) Contractor "D": \frac{x - 35}{70 - 35} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 56.0\%
(v) Contractor "E": \frac{x - 50}{80 - 50} = \frac{0.25 - 0.1}{0.25} \Rightarrow x = 68.0\%
The conditional expected values f4(x) are now computed with the above partition points. Since the CDF is a straight line between each partition point and the maximum, the conditional expected value is the average of the lowest and highest values of that interval.
The conditional expected values have also been computed by integration, as follows:
(i) Contractor "A": f_4(x) = \frac{46.8 + 50}{2} = 48.4\%
Integral Method: f_4(x) = \frac{\int_{46.8}^{50} x\,p(x)\,dx}{\int_{46.8}^{50} p(x)\,dx} = \frac{\left[x^2/2\right]_{46.8}^{50}}{\left[x\right]_{46.8}^{50}} = \frac{2500 - 2190.24}{2(50 - 46.8)} = 48.4\%
(ii) Contractor "B": f_4(x) = \frac{34.0 + 40}{2} = 37.0\%
Integral Method: f_4(x) = \frac{\left[x^2/2\right]_{34}^{40}}{\left[x\right]_{34}^{40}} = \frac{1600 - 1156}{2(40 - 34)} = 37.0\%
(iii) Contractor "C": f_4(x) = \frac{56.0 + 60}{2} = 58.0\%
Integral Method: f_4(x) = \frac{\left[x^2/2\right]_{56}^{60}}{\left[x\right]_{56}^{60}} = \frac{3600 - 3136}{2(60 - 56)} = 58.0\%
(iv) Contractor "D": f_4(x) = \frac{56.0 + 70}{2} = 63.0\%
Integral Method: f_4(x) = \frac{\left[x^2/2\right]_{56}^{70}}{\left[x\right]_{56}^{70}} = \frac{4900 - 3136}{2(70 - 56)} = 63.0\%
(v) Contractor "E": f_4(x) = \frac{68.0 + 80}{2} = 74.0\%
Integral Method: f_4(x) = \frac{\left[x^2/2\right]_{68}^{80}}{\left[x\right]_{68}^{80}} = \frac{6400 - 4624}{2(80 - 68)} = 74.0\%

Table VII.6.4 and Figure VII.6.7 summarize the results.
Table VII.6.4. Total Conditional Expected Cost by Contractor
                  Bid Price     Cost Overrun f4(x)    Total Conditional Expected Cost
Contractor "A"    $1,125,000    48.4%                 $1,669,500
Contractor "B"    $1,375,000    37.0%                 $1,883,750
Contractor "C"    $1,050,000    58.0%                 $1,659,000
Contractor "D"    $1,250,000    63.0%                 $2,037,500
Contractor "E"    $1,075,000    74.0%                 $1,870,500
Figure VII.6.7. Comparison of Conditional Expected Cost of Project
From the bid price, the expected project cost, and the conditional expected project cost, a more realistic estimate of the project cost can be made. These figures are shown below.
Table VII.6.5. Summary of f4 and f5
                  Bid Price     Cost Overrun f5(x)    Total Expected Cost    Cost Overrun f4(x)    Total Conditional Expected Cost
Contractor "A"    $1,125,000    26.250%               $1,420,312             48.4%                 $1,669,500
Contractor "B"    $1,375,000    17.500%               $1,615,625             37.0%                 $1,883,750
Contractor "C"    $1,050,000    36.745%               $1,435,823             58.0%                 $1,659,000
Contractor "D"    $1,250,000    27.040%               $1,588,000             63.0%                 $2,037,500
Contractor "E"    $1,075,000    36.500%               $1,467,375             74.0%                 $1,870,500
Figure VII.6.8. Summary of Projected Cost Estimates
ANALYSIS
SMCC can use these results to make a more accurate prediction of the construction cost of the building. The table and graph above show a clear disparity between each contractor's bid price and the expected actual cost. For instance, Contractor "E" submitted a bid of $1,075,000, yet in the worst 10% case the expected cost would be $1,870,500, a 74% increase over the original bid. Using these disparities, SMCC can formulate a separate cost estimate that combines all aspects of the bids and expected values to provide the most accurate cost estimate for its customer.
PROBLEM VII.7: Water Supply Treatment Selection
In order to secure a safe level of chloride concentration, Metropolitan Manila is considering where to build a new treatment facility.
DESCRIPTION
Metropolitan Manila is the capital region of the Philippines and among the world's thirty most populous metropolitan areas. It contains the city of Manila as well as sixteen surrounding cities and municipalities. In some of these surrounding cities and municipalities, water is not yet supplied by bulk water treatment plants; people obtain water from either deep wells or ambulant water suppliers (i.e., trucks selling water). To address current and future water needs, the recently privatized Manila Water Company is studying the feasibility of abstracting water from Laguna Lake, the Philippines' largest lake. Two sites are being considered for the proposed 40,000 cubic meter Bulk Water Supply Treatment Plant: Muntinlupa City and Paranaque City. Because of differences in location along the lake's bay, the degree of tidal (seawater) intrusion, and the level of industrial and aquaculture activity in the two cities, the quality of the raw water differs. Among the water quality parameters, chloride concentration is one of the most important because it cannot be removed by physical processes, nor removed economically by chemical processes; it is often addressed by desalination, which is expensive in both capital and operating expenditure. Assuming that all other parameters and factors are equal (an oversimplification of the problem), the designers/analysts are presented with the following table of chloride concentrations on which to base their recommendation.
Table VII.7.1. Prediction of Chloride Concentration by 2010 (Projected Project Completion Year)
                                            Muntinlupa    Paranaque
Best case chloride concentration, mg/L      200           180
Worst case chloride concentration, mg/L     1000          1200
Most likely chloride concentration, mg/L    250           230
METHODOLOGY
For this PMRM exercise, chloride concentrations are assessed using a triangular distribution. Let a, b, and c denote the best, worst, and most likely values, respectively, and let the subscripts M and P denote Muntinlupa and Paranaque:
      M       P
a     200     180
b     1000    1200
c     250     230
The expected value of the triangular distribution is given by
f_5(\cdot) = \frac{a + b + c}{3}, \qquad f_{5M}(\cdot) = \frac{200 + 1000 + 250}{3} = 483.33, \qquad f_{5P}(\cdot) = \frac{180 + 1200 + 230}{3} = 536.67
The height of the triangular distribution is computed directly:
h = \frac{2}{b - a}, \qquad h_M = \frac{2}{1000 - 200} = 0.0025, \qquad h_P = \frac{2}{1200 - 180} = 0.001961
Figure VII.7.1 graphically depicts the chloride concentrations in Muntinlupa and Paranaque.
Figure VII.7.1. Chloride Concentrations (PDF & CDF)
SOLUTION
Since the Bulk Water Supply Treatment Plant is a public utility, we consider an event above 95 percent likelihood to be extreme.
With α = 0.95, the value of x (the partition point on the damage axis) is computed as
x = b - \sqrt{\frac{2(1 - \alpha)(b - c)}{h}}
x_M = 1000 - \sqrt{\frac{2(1 - 0.95)(1000 - 250)}{0.0025}} = 826.79
x_P = 1200 - \sqrt{\frac{2(1 - 0.95)(1200 - 230)}{0.001961}} = 977.58
Since the extreme region forms a right-angled triangle, we can compute f_4(\cdot) as its mean:
f_4(\cdot) = \frac{2x + b}{3}, \qquad f_{4M}(\cdot) = \frac{2(826.79) + 1000}{3} = 884.53, \qquad f_{4P}(\cdot) = \frac{2(977.58) + 1200}{3} = 1051.72
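The triangular-distribution formulas above are easy to verify numerically. The following Python sketch assumes the (a, c, b) parameters of Table VII.7.1 and reproduces f5, the partition point x, and f4 for both sites.

# Minimal sketch for the triangular-distribution PMRM quantities (alpha = 0.95).
from math import sqrt

sites = {"Muntinlupa": (200, 250, 1000), "Paranaque": (180, 230, 1200)}  # (a, c, b)
alpha = 0.95

for name, (a, c, b) in sites.items():
    f5 = (a + b + c) / 3                              # mean of the triangular distribution
    h = 2 / (b - a)                                   # density height at the mode c
    x = b - sqrt(2 * (1 - alpha) * (b - c) / h)       # partition point: right-tail area = 1 - alpha
    f4 = (2 * x + b) / 3                              # centroid of the right-triangle tail
    print(f"{name:10s}: f5 = {f5:7.2f}, x = {x:7.2f}, f4 = {f4:8.2f} mg/L")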
ANALYSIS
Suppose that the construction of the treatment plants would require investment costs of $100 million for Muntinlupa and $30 million for Paranaque. The Pareto-optimal frontiers are shown in Figure VII.7.2.
Figure VII.7.2. Pareto-optimal Frontiers of f1 versus f4 and f5
From this plot we can see that the Muntinlupa site is superior in terms of both its overall expected chloride level and its expected extreme chloride level. Therefore, we can recommend that the board select Muntinlupa as the candidate site for the treatment plant, even though this site would require approximately three times the budget of the Paranaque option.
There are several aspects of this problem that highlight the need for applying the Partitioned Multiobjective Risk Method (PMRM). First of all, the probability distribution is highly skewed over a large range. In situations like this, the expected value over the entire distribution gives a very poor picture of what actually can be expected. Also, in this example, expert opinion could be solicited regarding the concentration of chloride that should be considered dangerous. Rather than partitioning on probability, we could alternatively partition based on this factor. The extreme-event analysis would then cover the situation in which the concentration levels were dangerous.
PROBLEM VII.8: Architectural Style Selection
A local entrepreneur is building a new restaurant and is considering two different, interesting architectural styles. The construction company can build either type of building and provides cost estimates for each.
Given the complex designs and the uncertainty of labor and material costs, the construction company estimates the following fractiles. Costs for Designs A and B are in thousands of US dollars.
Table VII.8.1. Cost by Design
Fractile    Design A    Design B
0           700         600
0.25        750         650
0.5         770         750
0.75        800         850
1           850         1000
Figure VII.8.1 graphs the probability density functions.
Figure VII.8.1. Probability Density Function by Design
Compare and evaluate the two styles using the PMRM with respect to cost, and analyze your results. Use a probability partition of α = 0.9 in calculating the conditional expected values (f4).
PROBLEM VII.9: Selection of Contractors for a Highway Project
Two contractors are being considered for a new highway construction project. We use the Partitioned Multiobjective Risk Method (PMRM) to help make the decision. The contractors provided the following probabilistic estimates of their projected completion times:
Contractor A - triangular distribution with parameters:
  Lowest estimate: 1 year
  Most likely: 1.25 years
  Highest estimate: 2 years
Contractor B - fractile distribution with parameters:
  Lowest estimate: 0.5 year
  25th fractile: 1.2 years
  50th fractile: 1.4 years
  75th fractile: 1.6 years
  Highest estimate: 2 years
In selecting the new contractor, evaluate the projected completion time using the PMRM and explain your results. We are interested in both the average and the conditional expected value representing the worst 10% scenario (i.e., α = 0.9).
PROBLEM VII.10: Shipping Company Selection

A new online retailer is considering several shipping companies for distributing its wares to its customers. The company was able to narrow the choices down to three candidates: Company A, Company B, and Company C. The online retailer obtained cost estimates and past-year performance statistics for these shipping companies from an independent consulting firm. This firm provided information regarding shipping timeliness in the form of percentages of late deliveries. The probabilities were derived using the fractile method. The expected value of risk for each shipping company was calculated and plotted against the estimated delivery cost. The consulting company provided statistics on the best, the worst, and the most likely percentages of late deliveries. For the purposes of the fractile method, the most-likely percentage of late deliveries was considered the median. The worst case was placed at the 1.00 fractile. The 0.25 and 0.75 fractiles were calculated as the median +/- 5%. The following table shows the fractiles for all three shipping companies.

Table VII.10.1. Delivery Options for a New DotCom (Percentage of Late Delivery for Each Option)
Fractile    Company A    Company B    Company C
0           0            0            0
0.25        10           15           5
0.5         15           20           10
0.75        20           25           15
1           40           35           50
The PDF and CDF plots for these companies are shown below, using the data in the table above.

Figure VII.10.1. PDF and CDF of Company A
Figure VII.10.2. PDF and CDF of Company B
Figure VII.10.3. PDF and CDF of Company C

For this problem, each option is evaluated according to the Percentage of Late Delivery using the PMRM. Calculate the expected value f5 and the conditional expected value f4 with a probability partition of α = 0.9. Analyze the results.
PROBLEM VII.11: Reliability of Shuttle “O”-Rings

The failure density function of elastomeric “O”-rings can be described using a Weibull distribution¹, as follows:

Weibull Probability Density Function: f(x) = (λ/η)(x/η)^(λ−1) exp[−(x/η)^λ]

where
x is the failure time in hours,
λ is the Weibull shape factor, and
η is the characteristic time parameter in hours.

Based on the failure density function, analyze reliability of Shuttle “O”-rings.

DESCRIPTION
Consider the following “O”-ring alternatives, all with shape parameters of λ = 1.0. For a shape parameter of 1.0, the Weibull distribution reduces to an exponential distribution.

Table VII.11.1. Summary of η, Cost, and Expected Failure Time of Each Alternative
Alternative    λ      η         Cost ($)    f5(⋅)     f4(⋅) at α = 0.95
“O”-ring A     1.0    10,000    10.00       10,000    39,957
“O”-ring B     1.0    15,000    20.00       15,000    59,936
“O”-ring C     1.0    20,000    15.00       20,000    79,915
“O”-ring D     1.0    25,000    50.00       25,000    99,893
“O”-ring E     1.0    30,000    100.00      30,000    119,872
Note: α is the upper-tail probability partition.

Figure VII.11.1. Exponential Probability Distribution Curve (density f(x) with the upper tail of probability 1 − α = 0.05 beyond the partition point β)

¹ See Bloch, Heinz P. and Fred K. Geitner, 1994. Practical Machinery Management for Process Plants, Volume 2: Machinery Failure Analysis and Troubleshooting, 2nd Edition. Houston, TX: Gulf Publishing Company.
Since the values of f5(⋅) and f4(⋅) in the above table represent failure times that are maximization-type objectives, define the following measures of risk to be the reciprocal of failure time:

Expected value of risk (hour−1): f̂5(⋅) = [f5(⋅)]−1
Conditional expected risk (hour−1): f̂4(⋅) = [f4(⋅)]−1
With the reciprocal values of failure time, evaluate the reliability of each O-ring option using the PMRM. Use a probability partition of α = 0.95, as depicted in Figure VII.11.1.
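Because a Weibull with shape λ = 1 is exponential with mean η, the f4 column of Table VII.11.1 follows directly from the memoryless property. The short sketch below verifies those entries; the variable names are ours.

```python
# With shape lambda = 1 the failure time is exponential with mean eta, so the
# partition point is beta = -eta * ln(1 - alpha) and, by memorylessness, the
# conditional expected failure time beyond beta is f4 = beta + eta.
import math

alpha = 0.95
for eta in (10_000, 15_000, 20_000, 25_000, 30_000):
    f5 = eta                              # expected failure time
    beta = -eta * math.log(1 - alpha)     # upper-tail partition point
    f4 = beta + eta                       # conditional expected failure time
    print(eta, round(f5), round(f4))      # f4 ~ 39957, 59936, 79915, 99893, 119872
```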
PROBLEM VII.12: Budget Allocation for Counterterrorism

The Department of Homeland Security (DHS) has been asked to submit to the Executive Office a budget for counterterrorism measures. The overall budget is very tight, and while combating terrorism is a significant goal, DHS must be sure to spend its money wisely and choose an effective strategy. Five potential strategies and their possible outcomes are given below.

Table VII.12.1. Summary of Fractile Distributions by Option (Economic Loss, %)
Option                          Best    25th    Median    75th    Worst    Cost ($billion)
No Action                       5       15      20        30      45       0
Increase Security               3       8       15        20      30       60
Increase Intelligence Budget    3       9       17        22      32       30
Increase Technology Budget      8       14      17        22      35       15
Preemptive War                  4       11      14        15      17       300
The cumulative distribution function and exceedance probabilities for the strategies are shown below:

Figure VII.12.1. CDF and Exceedance Probability of Each Option (probability versus Economic Loss, %)

For this problem, each option is evaluated according to economic loss using the PMRM. Calculate the expected value f5 and the conditional expected value f4 with a probability partition of α = 0.85. Analyze your results.
PROBLEM VII.13: Improving Road Safety A recent spate of accidents on a dangerous mountain road has the nearby residents asking the town council for help. During a specially held town meeting, five options were proposed to deal with the problem. These were: 1) Take no action, 2) Add signs, 3) Add speed bumps, 4) Widen the road, and 5) Build an alternate route. The council employed the Partitioned Multiobjective Risk Method (PMRM) to help decide which course of action to take. First, they estimated the number of accidents in the best- and worst-case scenarios for each option. These figures are given in Table VII.13.1, and the costs for each option are given in Table VII.13.2. Table VII.13.1. Number of Accidents per Year for Each Option
Alternative           Best (0)    25th    Median (50th)    75th    Worst (100)
No Action, a1         10          15      20               40      100
Signs, a2             8           12      18               34      85
Speed Bumps, a3       4           8       15               30      75
Widen Road, a4        6           10      14               25      60
Alternate Rte., a5    0           2       8                14      20
Table VII.13.2. Cost for Each Option
Alternative    Cost ($)
a1             0
a2             5,000
a3             15,000
a4             50,000
a5             250,000
The town council then determined the number of accidents that would be considered “extreme.” They decided that a high-damage outcome, or the β value, should be set at 40 accidents per year. For an alternative view of the data above, the council decided to partition the data on the probability axis as well, examining outcomes in the worst one-in-ten-years range, i.e., an α value of 0.9. Given the two specified approaches for partitioning (i.e., with respect to the damage axis, β, and the probability axis, α), use the PMRM and analyze your results.
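The damage-axis partition can be carried out with a short sketch, assuming (as in the earlier fractile sketches) that accidents are uniform within each quartile of the fractile distribution; this is an illustration of the mechanics, not necessarily the density fit used in the book's solutions. The probability-axis partition at α = 0.9 can be handled with the fractile sketch given under Problem VII.8.

```python
# Exceedance probability and conditional expectation E[X | X >= beta] for a
# five-point fractile distribution, assuming a piecewise-linear CDF.
def beta_partition_f4(fractiles, beta):
    x = fractiles
    if beta >= x[-1]:
        return 0.0, None                 # no probability mass beyond beta
    tail_mass, weighted = 0.0, 0.0
    for k in range(4):
        lo, hi = max(x[k], beta), x[k + 1]
        if hi <= lo:
            continue                     # segment lies entirely below beta
        mass = 0.25 * (hi - lo) / (x[k + 1] - x[k])
        tail_mass += mass
        weighted += mass * (lo + hi) / 2
    return tail_mass, weighted / tail_mass

options = {"a1": [10, 15, 20, 40, 100], "a2": [8, 12, 18, 34, 85],
           "a3": [4, 8, 15, 30, 75],    "a4": [6, 10, 14, 25, 60],
           "a5": [0, 2, 8, 14, 20]}
for name, fr in options.items():
    print(name, beta_partition_f4(fr, 40))
# a1: (0.25, 70), a2: (~0.22, 62.5), a3: (~0.19, 57.5), a4: (~0.14, 50),
# a5: (0.0, None) -- the alternate route has no mass beyond 40 accidents.
```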
PROBLEM VII.14: Investor’s Dilemma

A market theory asserts that investment returns, denoted by X, are normally distributed. For this problem, we interpret investment returns X as “opportunity losses.” Therefore, the upper-tail region in a distribution of investment returns X corresponds to events that have high opportunity losses, although with low likelihoods of occurrence. Suppose an investor who has faith in this market theory asked us to conduct in-depth analysis for the following four long-term bond investment alternatives. For a given investment i, the notation Xi ~ N(µ, σ) is used to refer to a normal distribution with parameters µ and σ, which are the mean and standard deviation, respectively, of the underlying random variable Xi. These parameters were estimated from historical annual data.

(i) Investment 1: X1 ~ N1(0.047, 0.010); Unit Cost = $10
(ii) Investment 2: X2 ~ N2(0.048, 0.015); Unit Cost = $8
(iii) Investment 3: X3 ~ N3(0.049, 0.020); Unit Cost = $5
(iv) Investment 4: X4 ~ N4(0.050, 0.025); Unit Cost = $4
Evaluate the opportunity losses using the PMRM. To do so, derive the expected values and conditional expected values in terms of each investment’s parameters (µ and σ) and a specified upper-tail partitioning point βi (i = 1, 2, 3, and 4).

The PDF of the normal distribution is characterized by its mean and standard deviation:

f(x) = [1/√(2πσ²)] exp[−(x − µ)²/(2σ²)]

The conditional expected value can be calculated as follows:

f4(x) = ∫_β^∞ x f(x) dx / ∫_β^∞ f(x) dx = [∫_β^∞ (x/√(2πσ²)) exp(−(x − µ)²/(2σ²)) dx] / (1 − α)

where P(x ≤ β) = α. Assume that α is 0.95 for each investment case.
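For a normal distribution the integral above has a standard closed form, f4 = µ + σ·φ(z_α)/(1 − α), where z_α is the α-quantile of the standard normal and φ its density. A minimal sketch (using z_0.95 ≈ 1.645; the function name is ours):

```python
# Closed-form f5 and f4 for a normal upper-tail partition at alpha = 0.95.
import math

def normal_f5_f4(mu, sigma, alpha=0.95):
    z = 1.6449                                         # 0.95-quantile of N(0, 1)
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)  # standard normal density at z
    f5 = mu
    f4 = mu + sigma * phi / (1 - alpha)                # conditional mean of the worst 5%
    return f5, f4

for mu, sigma in [(0.047, 0.010), (0.048, 0.015), (0.049, 0.020), (0.050, 0.025)]:
    print(normal_f5_f4(mu, sigma))                     # f4 ~ mu + 2.063 * sigma
```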
PROBLEM VII.15: Recommendation for Welding Processes

A consulting company is contracted to analyze potential welding processes for a new Sports Utility Vehicle (SUV). The firm will analyze the following three options:
1. Robotic welding
2. Semi-automatic welding
3. Manual welding

The fractile method is used to generate the probability distributions, which were determined by manufacturing experts for the number of defective units produced per 100, as shown:

Table VII.15.1. Fractile Distribution of Each Option
Option            Best    25th    Median    75th    Worst
Robotic           0       5       15        20      30
Semi-Automatic    5       20      25        30      40
Manual            10      20      30        40      60
For example, in the robotic welding option, the best-case outcome produces 0 defective units per 100 while the worst-case outcome produces 30. The cumulative distribution for each policy is graphed below:
Figure VII.15.1. CDF of Each Option
Figure VII.15.2. Exceedance Probability of Each Option

Conduct a Partitioned Multiobjective Risk Method (PMRM) analysis of the candidate welding processes for the two cases below:
Case I: Partition on the probability axis at α = 0.9.
Case II: Partition on the damage axis, where the firm chooses x ≥ 35 defective items.
PROBLEM VII.16: Automobile Company’s Options for Building a Safe Car

A car-manufacturing company wants to incorporate into its vehicles components to reduce the number of serious injuries that result from high-velocity vehicle crashes. Such injuries are defined as those requiring more than three days of hospitalization. The company is considering five approaches, adding: (1) safety features, (2) crumple zones, (3) a collapsible steering column, (4) fuel pump shutoff devices, and (5) a reinforced side door structure. To help the company arrive at a decision, solve the problem using the Partitioned Multiobjective Risk Method (PMRM), applying a triangular distribution. In detail, the scenarios are:

(1) Safety features. Enhance all vehicles with existing safety features such as side air bags, anti-lock brakes, daytime running lights, and safety restraints. This option would be moderately inexpensive since these features are already popular options among consumers. There would be no need for research and development; the cost would be solely for making these options standard features on its vehicles.

(2) Crumple zones. Incorporate areas that will absorb the energy of an impact when the car hits something. This option would be very expensive due to research and development, as well as vehicle redesign. Preliminary research shows that this could potentially reduce the number of serious injuries.

(3) Collapsible steering column. This option would be moderately inexpensive. The main cost would be introducing it into the production process. Depending on the type of collision, this option may not be as effective as some of the others.

(4) Fuel pump shutoff devices. These would turn off gas flow in the event of a collision to prevent gasoline fires. This would be a minor modification to the current production process, making this option very inexpensive. However, as with Option 3, its effectiveness is limited in scope.

(5) Reinforced side door structure. Costs of additional materials to reinforce side doors would be moderately expensive. Since side-door impacts frequently occur, this option would be effective.

Because the company has not widely introduced any of these passive safety features, there is no historical data to perform statistical analysis. Therefore, it is assumed that the random variable Xj, which represents the rate of (number of) serious injuries per 1000 crashes for Scenario j, follows a triangular distribution. In addition, expert evidence was used to generate the lower bound, upper bound, and most-likely serious injury rate for each Scenario j.
Table VII.16.1. Design Data
Scenario    Cost ($ Millions)    Lower Bound    Upper Bound    Most Likely
1           $30                  30             120            70
2           $165                 15             45             30
3           $50                  80             260            185
4           $22                  60             300            215
5           $100                 20             80             45
Use the PMRM to evaluate the design scenarios (see Table VII.16.1) according to the number of serious injuries. Calculate the expected-value f5 and conditional expected-value f4 with a probability partition of α = 0.9. Analyze your results.
PROBLEM VII.17: Energy Cost Estimation

A state government must determine the amount of budgetary dollars to allocate to energy costs for the next fiscal year. In order to obtain an estimate of these costs, the state has requested an energy cost analysis from two energy institutes with expertise in this area. One institute is conservative and one is liberal, selected in an attempt to satisfy concerns over skewing the estimate towards one end of the political spectrum. An internal state team will also perform the energy cost analysis. Based on the results from the three sources, the state government will make a decision to allocate its limited resources. All teams were required to provide data and estimates as follows:

Table VII.17.1. Estimate of Energy Cost Increase by Team
Evidence-based information               State Team    Conservative Energy Institute    Liberal Energy Institute
Best-case energy cost increase           0%            0%                               10%
Worst-case energy cost increase          50%           30%                              80%
Median value of energy cost increase     25%           10%                              50%
Note: Current fiscal year energy costs = $100 M. For each team, compute the expected-value f5 and the conditional expected-value f4 for the 10% worst case scenarios and analyze the results.
VIII. Multiobjective Decision Tree
PROBLEM VIII.1: Marikina River Flooding

The Marikina River in the Philippines endangers the residents of the city of Marikina when it overflows. However, during heavy rains, it is difficult to decide whether to warn or evacuate the population.

DESCRIPTION
The Marikina River is important to the economic and social activities of the surrounding community. During heavy rains, the river is continuously monitored for its current level and overflow probability. This data is used to give warnings and evacuation orders to the residents along the banks of the river and other affected areas. The decision not to evacuate too early (without indication of possible flooding) is due to the high cost incurred in the evacuation process. However, a late evacuation entails a high cost as well in terms of the higher risk to the residents and the use of more sophisticated operations such as helicopter rescues.

METHODOLOGY
Multiobjective Decision Tree (MODT) analysis can help Marikina city officials decide when it is necessary to evacuate during the rainy season. The following conflicting objectives have been identified:
Minimize f1: Effective policy implementation cost, expressed in terms of 1×10^8 PhP
Minimize f2: Risk to residents and rescuers
The river’s water flow during heavy rains is represented by two equally likely a priori distributions, given as: LN1 ~ Lognormal (ln 150, 1) LN2 ~ Lognormal (ln 80, 1) Chance Nodes: There are two possibilities (i.e., chances) that can occur for the initial period: Flood or No Flood. Flood stage is reached at water flow (W) = 60,000 cfs.
For the second period, the following events can occur:
Water flow is high (30,000 ≤ W ≤ 60,000 cfs)
Water flow is moderate (20,000 ≤ W ≤ 30,000 cfs)
Water flow is low (5,000 ≤ W ≤ 20,000 cfs)

Decision Nodes: The problem can be seen as a two-period decisionmaking process. The first-period decision involves issuing either an evacuation order or a warning. If that decision is to issue only a warning, a second-period decision is made after city officials receive additional information on the river’s water flow.

Requirement: Construct and solve the problem using the Multiobjective Decision Tree (MODT) method.

SOLUTION
The decision tree depicting different combinations of the decisions and chances for the two periods is given in Figure VIII.1.1. The first column on the right shows the cost of effective policy implementation (in 1×10^8 PhP) while the second column shows the risk (normalized between 0 and 1). The values of the objectives are assessed at the end of the second period (based on historical data). These are shown in Tables VIII.1.1–VIII.1.5 and Figure VIII.1.2 below.
Figure VIII.1.1. Decision tree (first- and second-period decision and chance nodes, with terminal values for Cost in 1×10^8 PhP and normalized Risk)
Table VIII.1.1. First-Period Probabilities
P(Flood)       0.7167
P(No Flood)    0.2833
Table VIII.1.2. Second-Period Probabilities
P(High)                   0.1747
P(Moderate)               0.0562
P(Low)                    0.0508
P(Flood | High)           0.6877
P(No Flood | High)        0.3121
P(Flood | Moderate)       0.6718
P(No Flood | Moderate)    0.3282
P(Flood | Low)            0.6572
P(No Flood | Low)         0.3428
Table VIII.1.3. Expected Value of Vector of Objectives at Second Period
Decision    Arc    Chance       Cost     Risk     E[Cost]    E[Risk]
D2          E2     F | High     0.757    0.275    1.037      0.338
D2          E2     ~F | High    0.281    0.062
D2          W2     F | High     0.653    0.481    0.778      0.528
D2          W2     ~F | High    0.125    0.047
D3          E2     F | Mod      0.537    0.267    0.767      0.302
D3          E2     ~F | Mod     0.230    0.033
D3          W2     F | Mod      0.336    0.470    0.467      0.503
D3          W2     ~F | Mod     0.131    0.033
D4          E2     F | Low      0.526    0.177    0.766      0.231
D4          E2     ~F | Low     0.240    0.034
D4          W2     F | Low      0.263    0.460    0.366      0.494
D4          W2     ~F | Low     0.103    0.034
Table VIII.1.4. Non-Inferior Decisions for Second-Period Decision Nodes
Node    Non-Inferior Decisions
D2      E2, W2
D3      E2, W2
D4      E2, W2
Table VIII.1.5. Decision for the First-Period Node
First-Period Decisions: E1; W1
Second-Period Decisions (High, Moderate, Low) under W1:
E2 E2 E2; E2 E2 W2; E2 W2 E2; E2 W2 W2; W2 E2 E2; W2 E2 W2; W2 W2 E2; W2 W2 W2
Objective Vectors [Cost, Risk]:
[0.645, 0], [0.170, 0], [0.187, 0.073], [0.176, 0.086], [0.178, 0.084], [0.164, 0.078], [0.171, 0.107], [0.158, 0.122], [0.160, 0.120], [0.146, 0.134]
*Inferior
E[Cost]: 0.645, 0.170, 0.187, 0.176, 0.178, 0.164, 0.158, 0.160, 0.146
E[Risk]: 0.000, 0.000, 0.073, 0.086, 0.084, 0.078, 0.122, 0.120, 0.134

Figure VIII.1.2. Decision tree for the first stage (first-period node D1 with branches E1 and W1)

ANALYSIS
Plotting the vector solutions will yield the Pareto frontier as shown in Figure VIII.1.3.
Figure VIII.1.3. Pareto Frontier (Risk versus Cost in 1×10^8 PhP, for the warning and evacuation decisions)

A significant gap can be seen between the set of optimal solutions for the warning decision and the optimal solution for the evacuation decision. This signals the decisionmakers to: generate some alternative X that will cost somewhere in between the two existing sets of solutions, with a lower risk level than that involved in the warning decision; or make the emergency evacuation more cost-effective.
PROBLEM VIII.2: Highway Bridge Maintenance The objective of this problem is to select a maintenance policy for a highway bridge using MODT based on the cost of the policy and mean time before failure (MTBF) of the bridge. DESCRIPTION A consulting firm was commissioned to model and analyze the maintenance policy for a bridge on an interstate highway. The policy options are: replace the bridge, repair it, or do nothing. The firm’s management asked its risk analysis group to conduct the study. In preparing the work plan, it was decided that the problem be modeled using the Multiobjective Decision Tree (MODT). The two objectives considered are the cost of the policy option and the mean time before failure (MTBF) of the bridge. METHODOLOGY We solve the problem in the following way i)
Construct a complete decision tree and indicate the values of both objectives on each terminal node. ii) Determine the set of Pareto-optimal decisions for the branch of the tree corresponding to a decision node. Assumptions The following are the assumptions made for this problem: 1. Cost of a new bridge is $1 million 2. The condition of the bridge can be judged by a parameter s which represents a declining factor of the age of the bridge. 3. The cost of repair depends upon the parameter s and is given by CREPAIR = 200, 000 + 4,000,000(s-0.05) 4. This parameter s is uncertain in nature and can take the following values s = s1 = 0.050 s = s2 = 0.075 s = s3 = 0.100 5. The prior probability distribution of s is p(s1) = 0.25 p(s2) = 0.50 p(s3) = 0.25 6. A test to reduce the uncertainty in s can be performed at a cost of $50,000. 7. The test to reduce the uncertainty in s can have three possible outcomes. 8. The conditional probabilities of the test results are as follows: p(lower| s1) = 0.50 p(lower| s2) = 0.25 p(lower| s3) = 0.25
p(same| s1) = 0.25 p(same| s2) = 0.50 p(same| s3) = 0.25
p(higher| s1) = 0.25 p(higher| s2) = 0.25 p(higher| s3) = 0.50
9. The value of λ for the exponential distribution of failure of a new bridge is 0.1.
10. The value of λ for the exponential distribution of failure of a repaired bridge is 0.15.

Notes
a) Bridge failure is defined as any event that causes the closure of the bridge.
b) Mean Time Before Failure (MTBF) is defined as the amount of time that can be expected to pass before a bridge failure occurs.
c) For an exponential distribution with parameter (failure rate) λ, the mean time before failure is MTBF = 1/λ.
d) The probability density function of failure of a new bridge is given by an exponential distribution with parameter λ.
e) The probability density function of failure of an old bridge is given by an exponential distribution with parameter (λ + s), where λ = 0.1.
f) If repair is done immediately, the probability density function of failure is given by an exponential distribution with parameter (λ + 0.05), where λ = 0.1.

SOLUTION
Constructing the Multi-Objective Decision Tree (MODT)
The decision tree for the problem is given in Figure VIII.2.1. The two objective functions are: maximize MTBF, and minimize cost.

For an exponential distribution the MTBF is given by 1/λ.
For a new bridge, we are given λ = 0.1 => MTBF|Replace = 1/0.1 = 10 years
For a repaired bridge, we are given λ = 0.15 => MTBF|Repair = 1/0.15 = 6.6667 years
For the do-nothing option, the MTBF is a function of the value of s:
for s = s1, λ = 0.1 + 0.05 => MTBF|s1 = 1/0.15 = 6.6667 years
for s = s2, λ = 0.1 + 0.075 => MTBF|s2 = 1/0.175 = 5.7143 years
for s = s3, λ = 0.1 + 0.1 => MTBF|s3 = 1/0.2 = 5 years

For a new bridge, we are given: Cost = $1 million => Cost|Replace = $1 million
For the repair option, the cost is a function of the value of s:
CostRepair|s1 = 200,000 + 4,000,000(s1 − 0.05) = 200,000 + 4,000,000(0.05 − 0.05) = $0.2 million (for s = s1)
Figure VIII.2.1. Decision Tree for the Bridge Maintenance Problem (decision nodes D1–D5 and chance nodes C1–C10, with terminal vectors [MTBF in years, Cost in $ million]; the posterior probabilities of the test outcomes higher/same/lower are 0.3125, 0.3750, and 0.3125, respectively)
Similarly,
CostRepair|s2 = $0.3 million
CostRepair|s3 = $0.4 million

For the test option, the cost of testing, $0.05 million, will be added to the costs at each terminal node. All the costs are shown at the terminal nodes in Figure VIII.2.1.

The computation of the Pareto-optimal set is shown in Figure VIII.2.1 for Decision node D2. To obtain the values for each of the three arcs, we must average out the Chance nodes C3 and C4. Averaging out at Chance node C3, we obtain (in [MTBF, Cost] form):

[6.6667(0.25) + 6.6667(0.5) + 6.6667(0.25), 0.20(0.25) + 0.30(0.5) + 0.40(0.25)] = [6.6667, 0.30]

which is the solution for Chance node C3. Similarly, for Chance node C4, we obtain [5.774, 0.00] as the required solution. For the arc replace, we have [10.00, 1.00] as the required solution.

None of these three solutions is dominated by any other solution. Therefore, the required Pareto-optimal solutions for Decision node D2 are:

[10.00, 1.00], [6.667, 0.30], [5.774, 0.00]

The solutions for the other decision nodes can be similarly obtained by making use of the posterior probabilities. For instance, we can calculate Pr(higher) and Pr(s1 | higher) as follows:

Pr(higher) = Pr(higher | s1)·Pr(s1) + Pr(higher | s2)·Pr(s2) + Pr(higher | s3)·Pr(s3)
           = 0.25 · 0.25 + 0.25 · 0.5 + 0.50 · 0.25 = 0.3125   (by the total probability rule)

Pr(s1 | higher) = Pr(higher | s1) · Pr(s1) / Pr(higher) = 0.25 · 0.25 / 0.3125 = 0.2   (by Bayes’ theorem)
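Both mechanical steps above — averaging out a chance node and keeping only the non-dominated vectors — and the posterior update can be reproduced with a short sketch. This is a minimal illustration using the numbers from this solution; the helper functions are ours, not the book's.

```python
# Averaging out chance nodes, filtering non-dominated vectors at D2, and the
# posterior probabilities for the test branch.  Vectors are (MTBF, cost),
# where MTBF is maximized and cost is minimized.
def average_out(branches):
    """branches: list of (probability, (mtbf, cost)) pairs."""
    return (sum(p * v[0] for p, v in branches),
            sum(p * v[1] for p, v in branches))

def dominates(u, v):
    # u dominates v if it is at least as good in both objectives and differs
    return u[0] >= v[0] and u[1] <= v[1] and u != v

def pareto(vectors):
    return [v for v in vectors if not any(dominates(u, v) for u in vectors)]

c3 = average_out([(0.25, (6.6667, 0.20)), (0.50, (6.6667, 0.30)), (0.25, (6.6667, 0.40))])
c4 = average_out([(0.25, (6.6667, 0.0)), (0.50, (5.7143, 0.0)), (0.25, (5.0, 0.0))])
replace = (10.0, 1.0)
print(pareto([replace, c3, c4]))        # all three survive, as stated above

# Posterior probabilities for the test branch (total probability + Bayes)
priors = {"s1": 0.25, "s2": 0.50, "s3": 0.25}
likelihood = {
    "lower":  {"s1": 0.50, "s2": 0.25, "s3": 0.25},
    "same":   {"s1": 0.25, "s2": 0.50, "s3": 0.25},
    "higher": {"s1": 0.25, "s2": 0.25, "s3": 0.50},
}
for result, lik in likelihood.items():
    p_result = sum(lik[s] * priors[s] for s in priors)
    posterior = {s: round(lik[s] * priors[s] / p_result, 3) for s in priors}
    print(result, p_result, posterior)
# higher: 0.3125 with posteriors {s1: 0.2, s2: 0.4, s3: 0.4}, matching Figure VIII.2.1
```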
ANALYSIS

Figure VIII.2.2. Plot of Pareto-Optimal Points (Cost in $ million versus negated MTBF in years, for the Replace, Repair, and Do Nothing options)

The above figure shows the Pareto-optimal points. Note that the values of the MTBF (see the x-axis in Figure VIII.2.2) have been negated to convert the objectives into a standard multiobjective minimization problem. From Figure VIII.2.2, we can see that the policy option “Do Nothing” is a good option, especially compared to the policy option “Repair”: at the expense of $300,000, repairing the bridge improves the MTBF by less than a year. But if we replace the bridge for $1 million, we can improve the MTBF by more than 4 years. Hence, either “Do Nothing” or “Replace the Bridge” would be a good policy choice. If there is enough funding available, “Replace the Bridge” would be recommended, but if the financial constraint is tight, the “Do Nothing” policy would be a logical choice to implement.
PROBLEM VIII.3: Hiring a Consultant for Maximizing Profit

The purpose of this problem is to determine the effectiveness of hiring a consultant in order to maximize the market share for a manufacturing company.

DESCRIPTION
A manufacturing company wants to maximize its market share. The demand for its product in the next period can either increase by 20% or decrease by 5%. Depending on the demand change in the next period, the company needs to decide whether to continue the same operation, put some employees on overtime, or invest in additional machines. Since there is no reliable estimate available for the next-period demand change, the company is considering hiring a consultant who can provide a good estimate of the change. Does the company need to hire a consultant?

METHODOLOGY
Consider the MODT presented in Figure VIII.3.1 with the following specifications in terms of states of nature, actions, and objective functions.¹

States of Nature:
θ1 → Demand for the product will increase by 20%.
θ2 → Demand for the product will decrease by 5%.

Actions:
a1 = continue same operation
a2 = put some employees on overtime
a3 = buy additional machines

First objective function → maximize profit ($):

Payoff Matrix for Demand (million $)
                States
Actions      θ1      θ2
a1           1.5     1.4
a2           2.0     1.4
a3           2.1     1.0

Second objective function → maximize market share [0→100%]:

Payoff Matrix (Market Share % after 1 Year)
                States
Actions      θ1      θ2
a1           45      40      ← present level
a2           55      35
a3           60      30

¹ The MODT considered in this exercise is an extension of the single-objective decision tree problem found in Vira Chankong and Yacov Y. Haimes, Multiobjective Decision Making: Theory and Methodology, North-Holland Series in System Science and Engineering, 1983.
Figure VIII.3.1. Multiobjective Decision Tree (terminal values are [million $, % market share]; the no-consultant branch uses the prior probabilities 0.75/0.25, while the hire-consultant branch, with a $15,000 fee, uses the revised probabilities 0.964/0.036 after a favorable report, given with probability 0.7, and 0.25/0.75 after an unfavorable report)

SOLUTION
Overall Pareto Solutions (both using a consultant):
[f1 (cost in million $), f2 (market share in %)]
1) [1.8548, 53.62]
2) [1.8923, 53.24]
ANALYSIS
Based on the Pareto-optimal solutions, it appears that the company will benefit from hiring a consultant to improve its market share. Also, from the above solution, the company must decide if Δf2 is “worth” Δf1. In this case, the company must ask itself: is a 0.38% market share increase worth spending $37,500? If so, the first solution would be the preferred choice. Likewise, if the 0.38% market share increase is not worth spending $37,500, then the second solution would be the preferred choice.
PROBLEM VIII.4: Business Decision Problem The management committee of a consumer product company is considering several options when the peak season for its product is approaching. Note that this problem builds on and extends the previous problem with the addition of a new objective function (Mean Time to Failure). For completeness, the calculations from the previous problem are repeated here. DESCRIPTION The firm can increase its profits by either maximizing its market share or minimizing its Mean Time to Failure (MTTF). Given the allowable budget, it is contemplating the following options: Do nothing (follow the same operation) Purchase additional machines Utilize an overtime workforce They also consider hiring external sources, such as consultants to analyze market trends and suggest short-term strategies. METHODOLOGY Multiobjective Decision Tree (MODT) analysis is used to evaluate the trade-offs among the noncommensurate objectives. SOLUTION Part A. Trade-off Analysis between Profit and Market Share Initially, a two-objective problem was specified dealing mainly with the objectives of maximizing profit and maximizing market share. The decision matrices corresponding to these two objectives are described below: States of Nature: θ1 Demand for a product will increase by 20%.
θ 2 Demand for a product will decrease by 5%. Actions:
a 1 = Do nothing (continue same operation) a 2 = Put some employees on overtime a 3 = Purchase additional machines
First objective function → maximize profit:

Payoff Matrix for Demand (Million $)
                States
Actions      θ1      θ2
a1           1.5     1.4
a2           2.0     1.4
a3           2.1     1.0
Note that the solution to this problem will evaluate the trade-offs between profit and market share. A supplementary trade-off analysis will include another objective, Mean Time to Failure (MTTF).

Second objective function → maximize market share [0→100%]:

Payoff Matrix (Market Share % after 1 Year)
                States
Actions      θ1      θ2
a1           45      40      ← present level
a2           55      35
a3           60      30

The probabilities for each state are given as follows: for every action, θ1 will occur with probability 75%, so θ2 will be encountered with probability 25%. However, after hiring the consultant, the probabilities of θ1 and θ2 are revised to 96.4% and 3.6%, respectively, following a favorable report. Consultants give favorable reports in 70% of their engagements. Figure VIII.4.1 shows the Multiple Objective Decision Tree with market share.
Figure VIII.4.1. MODT with market share (same structure and terminal values as Figure VIII.3.1)

ANALYSIS
In Figure VIII.4.1, we can find the overall Pareto solutions (both from the hire-consultant branch) as follows:
[Million $, % Market Share]
1) [1.8548, 53.62]
2) [1.8923, 53.24]

Thus, the decisionmaker must decide if Δf2 is “worth” Δf1. In this case, the decisionmaker must ask: Is spending $37,500 worth a 0.38% market share increase?
Part B. Trade-off Analysis between Profit and Mean Time to Failure (MTTF)

Along with profit, the Mean Time to Failure (MTTF) of the product is also considered. Note that the analysis from this point forward comprises only profit and MTTF (i.e., market share is excluded from the trade-off analysis). It is assumed that the pace of business is inversely proportional to the MTTF, and also that overtime and a new machine adversely affect MTTF. Figure VIII.4.2 shows the decision tree taking Mean Time to Failure into account, and Figure VIII.4.3 graphically illustrates the noninferior solutions.

Figure VIII.4.2. Multiple objective decision tree (MODT) with MTTF (terminal values are (Profit in million $, MTTF in years))
Figure VIII.4.3. Pareto-optimal frontier (Mean Time to Failure in years versus Profit in millions)
ANALYSIS After folding back to the initial decision nodes and considering only profit and MTTF, Figure VIII.4.3 reveals the noninferior solutions to be: [1.4597, 7.85004], [1.475, 7.85], [1.4972, 7.68504], [1.7971, 7.68504], [1.8545, 7.29508], [1.892, 7.13008].
PROBLEM VIII.5: Maintaining E-mail Service The objective of this problem is to decide whether to hire an outside agency to manage a risk of an e-mail service interruption. An e-mail service program is used as an important organizational tool. When the service is unavailable, the organization loses efficiency and possibly direct funds. The cost to the agency is estimated at $100,000 per incident if mail service is interrupted for more than 1 hour at a time. There is no cost associated with the server or the network being down for less than 1 hour. The probability of the server being down for more than 1 hour in the first stage is 10%. The probability of the server being down for more than 1 hour in the second stage is 20%. There is a client satisfaction rating which is 0 for satisfied clients and 1 for dissatisfied clients. Management has two options. The first is to hire an outside agency that will automatically switch the system over if e-mail cannot be forwarded. The second option is to do nothing. This costs nothing but holds the risk of service being down. Assess the cost of both options using a two-stage Multiobjective Decision Tree (MODT). The following assumptions are made: 1. The cost estimate generated is the same for any time period over one hour. Anything less than one hour downtime is not counted. 2. There are two possible actions for the first time period (less than one hour): a. Purchase a service contract with an outside agency at a cost of $60,000 [OMS1]. b. Do nothing at no cost [DN1]. 3. There are two possible actions for the second time period (more than one hour): a. Purchase a service contract with an outside agency at a cost of $70,000 [OMS2]. b. Do nothing at no cost [DN2].
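A minimal sketch of the expected-cost side of the comparison follows, assuming the service contract fully averts the $100,000 interruption cost in its own stage and that the two stages are independent; the client-satisfaction rating would sit on the second axis of the MODT. The names and structure below are illustrative, not part of the stated problem.

```python
# Expected cost of each first-stage/second-stage action pair under the stated
# downtime probabilities (10% in stage 1, 20% in stage 2).
P_DOWN = {1: 0.10, 2: 0.20}           # P(interruption > 1 hour) per stage
CONTRACT = {1: 60_000, 2: 70_000}     # outside-agency contract cost per stage
INCIDENT = 100_000                    # cost per interruption longer than 1 hour

def expected_cost(policy):
    """policy: dict stage -> 'OMS' (contract) or 'DN' (do nothing)."""
    total = 0.0
    for stage, action in policy.items():
        if action == "OMS":
            total += CONTRACT[stage]
        else:
            total += P_DOWN[stage] * INCIDENT
    return total

for a1 in ("OMS", "DN"):
    for a2 in ("OMS", "DN"):
        print(a1, a2, expected_cost({1: a1, 2: a2}))
# (DN, DN) has the lowest expected cost; the contract options trade money
# against the availability/client-satisfaction objective.
```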
PROBLEM VIII.6: Call Center Call center staffing depends on uncertain volumes of customer calls. Staffing decisions are sequential processes wherein future decisions on how many operators to subcontract can be based on past and current customer call volumes. With the importance of telephones in the modern commercial world, most business transactions are accomplished by the installation of call centers. Sales, marketing and many other corporate functions are handled efficiently using trunk lines (1-800 numbers) that are attended to by a number of company operators. The number of operators to be employed is very critical since there exists a tradeoff between: (a) the financial considerations in employing a large number of operators; and (b) the calling clients that forego the company’s service due to long queue time In an ideal call processing center, no business is lost since all customers’ calls are answered immediately. Figure VIII.6.1 presents the following ideal schematic:
Figure VIII.6.1. Ideal Call Processing Center However, a typical call center faces the reality of having to put a significant number of customers on hold (in a queue). This phenomenon usually happens during peak times, which consequently leads to complaints or worse, lost businesses. Figure VIII.6.2 diagrams this scenario
Figure VIII.6.2. Typical Call Center Service

Using a Multi-Objective Decision Tree (MODT) model, the following conflicting objectives have been identified:
(1) Minimize f1: Proportion of time an operator is idle
(2) Minimize f2: Proportion of time a customer will be in a queue

The call arrival process has two equally likely a priori distributions, given as:
LN1 ~ Lognormal (ln 50, 1)
LN2 ~ Lognormal (ln 10, 1)
CHANCE NODES There are two possible types of calling periods, which are classified as slack- and peak-time chances to call. The slack period (S) occurs when the number of callers per minute is less than 20. The peak period (P) occurs when the number of callers for a given minute exceeds 20. There are two 4-hour (240-minute) time periods in a given day. The first is from 8:00 AM – 12 noon, while the second ranges from 1:00 PM to 5:00 PM. For simplicity in this example, it shall be assumed that the entire 4-hour time period is either a slack or a peak calling period (i.e., if it’s slack at 8:00 AM, then it shall be slack throughout the entire period or until 12 noon).
DECISION NODES The company must determine the number of operators it needs, and it is trying to decide whether to assign 10 or 20 operators to a given period. By using the Multiobjective Decision Trees (MODT) shown below, evaluate the Pareto-optimal solutions by folding the outcomes back to the initial decision node. Analyze your results.
Figure VIII.6.3. Example of MODT — first-period decision (10 or 20 operators), first-period outcome (S = slack, P = peak), second-period decision, and second-period outcome, with terminal values (Idle, Queue):
10, S, 10, S: (0.4665, 0.2048)
10, S, 10, P: (0.3976, 0.5041)
10, S, 20, S: (0.6443, 0.0774)
10, S, 20, P: (0.5984, 0.0770)
10, P, 10, S: (0.3904, 0.6737)
10, P, 10, P: (0.3216, 0.9726)
10, P, 20, S: (0.5936, 0.5459)
10, P, 20, P: (0.5477, 0.5454)
20, S, 10, S: (0.6443, 0.1279)
20, S, 10, P: (0.5984, 0.4271)
20, S, 20, S: (0.7332, 0.0000)
20, S, 20, P: (0.6988, 0.0000)
20, P, 10, S: (0.5936, 0.1331)
20, P, 10, P: (0.5477, 0.4320)
20, P, 20, S: (0.6952, 0.0053)
20, P, 20, P: (0.6608, 0.0048)
PROBLEM VIII.7: Determining when to Book Plans for Vacation A student has a chance to go to the Cayman Islands for a week in the middle of the semester. Should he go, and when should he book the reservations? The payoff of the vacation is an improved state of mind. The cost is the expense of airfare and hotel. The problem is that he will miss some work in school. Because the objectives are conflicting, a Multiobjective Decision Tree (MODT) analysis can help to decide on the best strategy. The objectives are: • Minimize the level of stress. This objective is a function of the amount of work the student has to do and whether or not he takes the vacation. • Minimize the cost of the plane fare and the hotel stay. This is measured on a straightforward monetary scale. The alternatives in the month before the trip is planned are: • Pay for a plane ticket and hotel (at a discount). • Make plane and hotel reservations but do not pay yet. • Wait until later to do anything. The three possible states of nature that may occur in the two weeks before the trip are: • The student finds he has little work to do • He finds he has homework due that week (some work) • He has two tests and a paper due that week (much work) The student will not know for sure when the tests and paper are due until two weeks before the trip is planned. But he may have some idea of his future work by the amount of work he has now. From past experience, he knows that the work loads of now and later are negatively correlated. That is, if he has lots of work now, he will not have much later, but if he has only a little now, chances are he will have a lot later. Let us define: LWN = Little work to do now SWN = Some work to do now MWN = Much work to do now LWL = Little work to do over the time of the trip SWL = Some work to do over the time of the trip MWL = Much work to do over the time of the trip Data By reviewing his calendar for the last four semesters, the student determines the probabilities for each work load.
The prior and conditional probabilities are as follows:
P(LWL) = 0.25
P(SWL) = 0.35
P(MWL) = 0.40

P(SWN | LWL) = 0.2    P(LWN | LWL) = 0.1    P(MWN | LWL) = 0.7
P(SWN | SWL) = 0.4    P(LWN | SWL) = 0.3    P(MWN | SWL) = 0.3
P(SWN | MWL) = 0.3    P(LWN | MWL) = 0.6    P(MWN | MWL) = 0.1
From the travel agent, the student knows that if he pays for airfare and a hotel now, it will cost $900. If he makes reservations now and pays later, it will cost $1300, but if he waits till later to make reservations and pay, it will cost $1650. If he makes reservations without paying and cancels, it costs nothing. If he pays in the first stage and then cancels later, it will cost 10% of the original price. The Model The first step is to define the state variables—those variables that define the system at any given point. The state variables are: • S1 – the benefit of the time spent on vacation • S2 – the amount of work the student has to do or will miss • S3 – the cost of the trip to the islands The decision variable is: • If and when to make reservations and pay for the airfare and hotel The exogenous variables or parameters are: • The cost of plane fare • The cost of hotel accommodations Quantifying the Objectives: The objectives are as follows: • f1 = Minimize the level of stress • f2 = Minimize the cost of the trip
1) Level of Stress – f1(S1, S2). The level of stress is reduced by going on the trip and is raised by having more work to do. The easiest way to measure these levels of stress on an ordinal scale is through subjective assessment. The different stress levels are assessed as a function of whether or not the student goes on vacation and how much work he has to do. The graph of stress level is shown in Figure VIII.7.1:
Figure VIII.7.1. Stress level according to vacation and work load (ordinal stress values for the don’t-go and go options under much, some, and little work, ranging from about 90 for going on vacation with little work up to 1000 for staying home with much work)
Figure VIII.7.2. Multiobjective decision tree for the vacation decision (first-period decisions Pay, Reserve, and Do Nothing; chance outcomes Little/Some/Much Work; second-period decisions Cancel or Go, each followed by the MWL/SWL/LWL outcomes)

By using the Multiobjective Decision Tree (MODT) shown above, evaluate the Pareto-optimal solutions by folding the outcomes back to the initial decision node. Analyze your results.
PROBLEM VIII.8: Analysis of Alternate Routing System

The objective of this problem is to alleviate highway traffic congestion by considering an alternate routing system. There are two possible actions: (a) alternate routing (AR) and (b) doing nothing (DN). The decision tree covers two time periods and the associated cost is a function of the period in which the action is taken. The complete decision tree is shown in Figure VIII.8.1.

Assumptions
1. There are two possible actions associated with costs for the first period:
a. Coming up with alternate routing for the traffic at a cost of $200,000 (AR1).
b. Doing nothing at zero cost (DN1).
2. For the second period the actions and corresponding costs are:
a. Alternate routing at a cost of $100,000 (AR2).
b. Doing nothing at zero cost (DN2).
3. Travel time, T, is measured in hours.
4. A stall, or gridlock, occurs when travel time (T) between two points, A and B, is greater than or equal to 4 hours.
5. There are two underlying probability distributions for the flow of traffic:
a. T ~ lognormal (1.2527, 1), represented as LN1.
b. T ~ lognormal (0.7419, 1), represented as LN2.
The mean values of the lognormal distributions were arrived at by taking the natural log of the midpoint between time limits (T) for higher and lower traffic levels. For example, ln(3.5) = 1.2527. The a priori probability that either of these probability density functions (pdfs) is the actual pdf is equal.
6. There are three possible events at the end of the first period:
a. A stall or gridlock (T ≥ 4 hr.)
b. Higher traffic (3 ≤ T ≤ 4 hr.)
c. Same or lower traffic (T ≤ 3 hr.)

L and C are, respectively, the maximum possible loss of lives due to fatal accidents and money lost due to legal action ensuing from the accident, given no alternate routing.
Update the multiobjective decision tree with probabilities and expected values associated with each option. Analyze your results.
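A short sketch of where the probabilities annotated in Figure VIII.8.1 come from, assuming each lognormal prior is equally likely and that the second-period travel time is drawn from the same, Bayes-updated mixture; the helper names are ours.

```python
# First-period event probabilities under the 50/50 lognormal mixture, and the
# second-period gridlock probabilities after updating the mixture weights.
import math

def phi(z):                            # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_event(mu):                       # P(T >= 4), P(3 <= T < 4), P(T < 3) for ln T ~ N(mu, 1)
    p_grid = 1 - phi(math.log(4) - mu)
    p_low = phi(math.log(3) - mu)
    return p_grid, 1 - p_grid - p_low, p_low

mus = (1.2527, 0.7419)                 # LN1 and LN2
prior = (0.5, 0.5)
events = [p_event(mu) for mu in mus]

p_grid = sum(w * e[0] for w, e in zip(prior, events))
p_high = sum(w * e[1] for w, e in zip(prior, events))
p_low = sum(w * e[2] for w, e in zip(prior, events))
print(round(p_grid, 4), round(p_low, 4))     # ~0.3533 and ~0.5391, as in the figure

# Posterior mixture weights after observing "higher" or "lower" in period 1,
# then the second-period gridlock probability under that posterior.
for label, idx, p_obs in (("higher", 1, p_high), ("lower", 2, p_low)):
    post = [w * e[idx] / p_obs for w, e in zip(prior, events)]
    p_grid_next = sum(q * e[0] for q, e in zip(post, events))
    print(label, round(p_grid_next, 4))      # ~0.3591 and ~0.3359, cf. the figure
```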
Figure VIII.8.1. Multiobjective Decision Tree (MODT) for the alternate routing problem (terminal and averaged value pairs such as (0.7, 200,000) and (1, 250,000); probabilities annotated in the tree include P(gridlock) = 0.3533, P(higher) = 0.1177, P(lower) = 0.5391, P(gridlock | higher) = 0.3591, and P(gridlock | lower) = 0.3358)
PROBLEM VIII.9: Bobsled Training Strategy A Central American four-man bobsled team entered in the Winter Olympics must train for many months in Sweden. The cost of travel, facility rental, and room and board for living in Sweden during this extended time period is expensive. Thus, the country can only afford two $45,000 bobsleds for its team and has a limited sled repair budget. The condition of the ice (how slippery it is) on a bobsled track varies during the day. Lateral bobsled slippage on the track depends primarily on speed as well as on the condition of the ice, which is beyond the control of the bobsled team. High lateral slip can cause a bobsled to lose its grip on the ice in a turn and slide sideways into the wall, resulting in a crash. The bobsled can also crash into a wall without incurring lateral slippage, usually because of excess speed and loss of control. Damage to a bobsled in a crash is greater at higher speeds and if lateral slip occurs. The Swedish training track has two tight turns on differing slope levels which are the main source of sled crashes. The team knows that accidents will happen during the training and the repair budget will be used. They cannot risk losing both bobsleds in accidents and being unable to repair at least one of them in time for the Olympics. The team’s objectives are to minimize damage to the bobsleds and to train effectively at competition speeds (maximize speed). They need to decide how much brake to apply in the two dangerous turns in order to safely negotiate the course and keep their bobsleds minimally harmed, while still receiving beneficial training. The team analyzes and averages Swedish training data and builds a Multiobjective Decision Tree (MODT) to show the bobsled’s relationship to braking and lateral slip in the turns. 1. There are two possible actions with the associated bobsled speed for the first turn of the course: a. Brake hard and reduce speed 25 MPH b. Brake soft and reduce speed 10 MPH
2. For the second turn of the course, there are two possible actions with the associated bobsled speed: a. Brake hard and reduce speed 15 MPH b. Brake soft and reduce speed 5 MPH 3. There are two underlying probability density functions (pdfs) for lateral bobsled slippage (L) in a turn associated with the condition of the ice: a. L ~ normal (0.6, (0.05)2 ), represented as N1 b. L ~ normal (0.5, (0.075)2 ), represented as N2 The prior possibilities that any of these two pdfs is the actual pdf are equal.
4. There are two possible events at the end of the first turn of the course:
a. Bobsled slips (L > 0.7 g), represented as C0
b. Bobsled grips (L ≤ 0.7 g)

IX. Multiobjective Risk Impact Analysis Method

PROBLEM IX.1: Cholesterol Control

Policy 1: u(0) = 1, u(1) = 1

Stage 1:
E[x(1)] = a + bu(0) = 0.75 + 0.40 × 1 = 1.15
Var[x(1)] = Sd² = 0.05 ⇒ µ = 1.15, σ = √0.05 = 0.2236
⇒ f5 = 1.15
⇒ f4 = µ + 1.525σ = 1.15 + (1.525)(0.2236) = 1.4910 > f5
⇒ f1 = 0

Stage 2:
E[x(2)] = a² + abu(0) + bu(1) = 0.75² + (0.75)(0.4) + 0.4 = 1.2625
Var[x(2)] = (a² + 1)Sd² = (0.75² + 1)(0.05) = 0.07813
⇒ µ = 1.2625, σ = 0.2795
⇒ f5 = 1.2625
⇒ f4 = µ + 1.525σ = 1.2625 + (1.525)(0.2795) = 1.6888 > f5
⇒ f1 = 0

This result indicates that if the consumption of high-cholesterol food is not reduced, the cholesterol level will increase steadily. (The partitioned cholesterol level will be even higher.)

Policy 2:
u(0) = 0.8, u(1) = 0.5

Stage 1:
E[x(1)] = a + bu(0) = 0.75 + 0.40 × 0.8 = 1.07
Var[x(1)] = Sd² = 0.05 ⇒ µ = 1.07, σ = √0.05 = 0.2236
⇒ f5 = 1.07
⇒ f4 = µ + 1.525σ = 1.07 + (1.525)(0.2236) = 1.4110 > f5
⇒ f1 = 0.8

Stage 2:
E[x(2)] = a² + abu(0) + bu(1) = 0.75² + (0.75)(0.4)(0.8) + (0.4)(0.5) = 1.0025
Var[x(2)] = (a² + 1)Sd² = (0.75² + 1)(0.05) = 0.07813
⇒ µ = 1.0025, σ = 0.2795
⇒ f5 = 1.0025
⇒ f4 = µ + 1.525σ = 1.0025 + (1.525)(0.2795) = 1.4288 > f5
⇒ f1 = 11.1714

This example indicates that if the amount of high-cholesterol food is cut down, then the expected cholesterol level can be reduced compared to the previous policy. On the other hand, the high f4 suggests that there is still a risk of reaching a high cholesterol level, which is likely to be ignored if one looks at the expected level only.
Policy 3: u(0) = 0.5, u(1) = 0.4

Stage 1:
E[x(1)] = a + bu(0) = 0.75 + (0.40)(0.5) = 0.95
Var[x(1)] = Sd² = 0.05 ⇒ µ = 0.95, σ = √0.05 = 0.2236
⇒ f5 = 0.95
⇒ f4 = µ + 1.525σ = 0.95 + (1.525)(0.2236) = 1.2910 > f5
⇒ f1 = 12.5

Stage 2:
E[x(2)] = a² + abu(0) + bu(1) = 0.75² + (0.75)(0.4)(0.5) + (0.4)(0.4) = 0.8725
Var[x(2)] = (a² + 1)Sd² = (0.75² + 1)(0.05) = 0.07813
⇒ µ = 0.8725, σ = 0.2795
⇒ f5 = 0.8725
⇒ f4 = µ + 1.525σ = 0.8725 + (1.525)(0.2795) = 1.2988 > f5
⇒ f1 = 22.5715

This example indicates that if the high-cholesterol food is cut down further, then the expected cholesterol level can be reduced further. On the other hand, again the high f4 suggests that there is still a risk of reaching a high cholesterol level, which is likely to be ignored if one looks at the expected level.
General Result:

Stage 1:
f1 = 100(1 − u(0))³
f5 = a + bu(0) = 0.75 + 0.40u(0)
f4 = a + bu(0) + 1.525σ = 0.75 + 0.40u(0) + 0.3410 = 1.0910 + 0.40u(0)

Stage 2:
f1 = (1/7)[400(1 − u(0))³ + 300(1 − u(1))²]
f5 = a² + abu(0) + bu(1) = 0.5625 + 0.30u(0) + 0.40u(1)
f4 = f5 + 1.525σ = 0.9887 + 0.30u(0) + 0.40u(1)

The non-inferior Pareto solutions are displayed in the following four graphs.
Figure IX.1.1. f1 (“taste” suffering) vs f5 (cholesterol level) for the first stage
Figure IX.1.2. f1 (“taste” suffering) vs f4 (partitioned cholesterol level) for the first stage
Figure IX.1.3. f1 (“taste” suffering) vs f5 (cholesterol level) for the second stage, for u(0) = 0.3, 0.5, 0.65
Figure IX.1.4. f1 (“taste” suffering) vs f4 (partitioned cholesterol level) for the second stage, for u(0) = 0.3, 0.5, 0.65

From these graphs, the individual can identify whatever Pareto solutions best suit the individual’s needs. As an illustrative example, we give the following Pareto solutions:
Table IX.1.1. First-Stage Pareto Solutions
Policy u(0)    f1      f5      f4
0              100     0.75    1.091
0.1            72.9    0.79    1.131
0.2            51.2    0.83    1.171
0.3            34.3    0.87    1.211
0.4            21.6    0.91    1.251
0.5            12.5    0.95    1.291
0.6            6.4     0.99    1.331
0.7            2.7     1.03    1.371
0.8            0.8     1.07    1.411
0.9            0.1     1.11    1.451
1              0       1.15    1.491
Table IX.1.2. Second-Stage Pareto Solutions
Policy u(1)    f1         f5        f4
0.3            24.5714    0.8634    1.2896
0.4            19         0.9034    1.3296
0.5            14.2857    0.9434    1.3696
0.6            10.4285    0.9834    1.4096
0.7            7.4285     1.0234    1.4496
0.8            5.2857     1.0634    1.4896
0.9            4          1.1034    1.5296
1              3.5714     1.1434    1.5696
Note: Assume that Policy u(0) = 0.603 for Table IX.1.2.

ANALYSIS
In this problem a cholesterol control problem is solved by using the MRIAM. The model addresses a two-stage cholesterol control problem. The result indicates that by cutting down the consumption of high-cholesterol food at each stage (a 3-month period), one can reduce the expected cholesterol level (f5) considerably. However, by partitioning at µ + σ we discovered that there is still a risk of ending up with a high cholesterol level. The insight provided by this partitioned risk analysis should receive special attention. Pareto solutions are generated for both stages. The second stage presented a family of Pareto solutions corresponding to different policies. Because of the increase in dimensionality, one policy at the first stage (one point) maps into a curve (corresponding to a set of different u(1)) at the second stage. This suggests an even higher complexity at an ensuing stage.
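As a cross-check, the short sketch below regenerates representative rows of Tables IX.1.1 and IX.1.2 from the general result above (a = 0.75, b = 0.40, Sd² = 0.05, partition at µ + σ, whose conditional mean is µ + 1.525σ). The function names are ours.

```python
# Regenerate f1, f5, f4 for both stages of the cholesterol control problem.
import math

a, b, var_d = 0.75, 0.40, 0.05
sigma1 = math.sqrt(var_d)                    # first-stage standard deviation
sigma2 = math.sqrt((a ** 2 + 1) * var_d)     # second-stage standard deviation

def stage1(u0):
    f1 = 100 * (1 - u0) ** 3
    f5 = a + b * u0
    return f1, f5, f5 + 1.525 * sigma1

def stage2(u0, u1):
    f1 = (400 * (1 - u0) ** 3 + 300 * (1 - u1) ** 2) / 7
    f5 = a ** 2 + a * b * u0 + b * u1
    return f1, f5, f5 + 1.525 * sigma2

print(stage1(0.5))          # ~ (12.5, 0.95, 1.291)     -- row u(0) = 0.5 of Table IX.1.1
print(stage2(0.603, 0.3))   # ~ (24.57, 0.8634, 1.2896) -- row u(1) = 0.3 of Table IX.1.2
```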
PROBLEM IX.2: Reducing River Channel Overflow An important waterway supports the economic livelihoods and social activities of the surrounding communities. However, river flooding poses a great challenge to the local government in terms of controlling channel overflow, particularly during monsoon seasons. The region is visited by an average of 23 typhoons annually, beginning as early as May and stretching up to November. Deforestation and poor waste disposal practices contribute in a major way to the situation. The effects of channel overflow include flooded roads, destroyed lives and properties, and disrupted basic services (electric power, transportation, and communication), among others.
DESCRIPTION
Suppose a total budget of $800 million is allocated over a three-year horizon to be used for flood-control management (e.g., river dredging, constructing river-flow control infrastructures such as dikes and floodways, and others). Three policy options (A, B, and C) have been identified and are summarized in Table IX.2.1. Each of these indicates the amount of funds to be released prior to each period (k = 1, 2, and 3).
Table IX.2.1. Policy Options with Given u(k) Values
          u(k−1) = amount spent on flood-control management at Stage k−1 (in million dollars)
Policy    u(0)    u(1)    u(2)
A         560     160     80
B         320     320     160
C         160     240     400
METHODOLOGY
The Multiobjective Risk-Impact Analysis Method (MRIAM) is used to evaluate the performance of each policy option in terms of shrinking the volume of channel overflow. The following MRIAM formulas are used:

m(k) = C·A^k·x0 + Σ_{i=0}^{k−1} C·A^i·B·u(k−1−i)

s²(k) = C²·A^(2k)·X0 + Σ_{i=0}^{k−1} C²·A^(2i)·P + R
Table IX.2.2 summarizes the values of parameters in the above mean and variance formulas:
Table IX.2.2. Parameter Values and Explanations
Parameter   Value      Explanation
A           e^(−0.5)   Decay rate of precipitation due to predicted El Niño in the next 3 years [unitless]
B           −0.25      Controlled volume of channel overflow per million $ invested [million m³ / million $ spent]
C           1          Proportionality constant [unitless]
x0          1000       Initial volume of potential overflow [million m³]
P           0.001      Variance of the random variable ω(k) [(million m³)²]
R           0          Variance of the random variable ν(k) [(million m³)²]
X0          250        Initial variance of potential overflow [(million m³)²]
Requirements:
(a) Calculate f2(k), f3(k), and f4(k) for all periods k = 1, 2, and 3.
(b) For each policy option, plot f2(k), f3(k), and f4(k) against the control variable u(k−1). Do this for every period and analyze the results. Use one-standard-deviation partitioning for f2(k) and f4(k).
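The stated recursions are straightforward to prototype. The Python sketch below is an illustration (not part of the original solution): it propagates the mean and variance period by period and forms the partitioned expectations f2 = m − βs and f4 = m + βs, with β ≈ 1.525 for one-standard-deviation partitioning of a normal distribution. Note that the tabulated solution that follows evidently adopts slightly different indexing and partitioning conventions, so the sketch illustrates the mechanics rather than reproducing Table IX.2.3 exactly.

```python
import math

# Parameters from Table IX.2.2 (Problem IX.2)
A, B, C = math.exp(-0.5), -0.25, 1.0
x0, X0, P, R = 1000.0, 250.0, 0.001, 0.0
BETA = 1.525  # one-standard-deviation partitioning of a normal distribution

policies = {"A": [560, 160, 80], "B": [320, 320, 160], "C": [160, 240, 400]}

for name, u in policies.items():
    Ex, Vx = x0, X0                      # E[x(k)] and Var[x(k)]
    for k, uk in enumerate(u, start=1):
        Ex = A * Ex + B * uk             # E[x(k+1)] = A E[x(k)] + B u(k)
        Vx = A ** 2 * Vx + P             # Var[x(k+1)] = A^2 Var[x(k)] + P
        m = C * Ex                       # m(k) = C E[x(k)]
        s = math.sqrt(C ** 2 * Vx + R)   # s(k)^2 = C^2 Var[x(k)] + R
        f2, f3, f4 = m - BETA * s, m, m + BETA * s
        print(f"Policy {name}, period {k}: f2={f2:.1f}, f3={f3:.1f}, f4={f4:.1f}")
```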
SOLUTION
(a) Table IX.2.3 presents the calculated values of f2(k), f3(k), and f4(k) for all periods k = 1, 2, and 3.
Table IX.2.3. Summary of Expected Values for the Three Periods

Period 1
Policy   u(0)   m(1)   s(1)   f2(1)   f3(1)   f4(1)
A        560    915    250    534     915     1296
B        320    951    250    570     951     1333
C        160    976    250    594     976     1357

Period 2
Policy   u(1)   m(2)   s(2)   f2(2)   f3(2)   f4(2)
A        160    924    250    543     924     1305
B        320    922    250    541     922     1303
C        240    949    250    568     949     1330

Period 3
Policy   u(2)   m(3)   s(3)   f2(3)   f3(3)   f4(3)
A        80     896    250    515     896     1277
B        160    913    250    532     913     1294
C        400    931    250    550     931     1313
Figure IX.2.5. Conditional Expected Values for Period 1.
Figure IX.2.6. Conditional Expected Values for Period 2
Figure IX.2.7. Conditional Expected Values for Period 3

ANALYSIS
To minimize the total volume of overflow, Policy A is recommended, since it is Pareto-optimal throughout all periods (note that the effect of Policy B is marginally better than that of Policy A in the second period). Policy C, however, is inferior to the other policies in periods 2 and 3, since its overflow volume is greater even though each policy uses the same total budget.
PROBLEM IX.3: Formulating MRIAM as an Epsilon-Constraint Problem
The purpose of this exercise is to derive an equivalent ε-constraint representation of the MRIAM.
DESCRIPTION
Consider the system defined by the following standard dynamic equation. The objective functions are given as:
Minimize:
f_1^0 = (1 − u(0))² + 0.5·(1 − u(1))²   (present-value cost function)
f_4^1 = high-range conditional expected damage at Stage 1
f_4^2 = high-range conditional expected damage at Stage 2
(i) Formulate the multiobjective multistage risk analysis problem.
(ii) Use the ε-constraint method to formulate the Lagrangian function. Determine u(0) and u(1).
METHODOLOGY
Use the Multiobjective Risk-Impact Analysis Method (MRIAM) to solve this problem. The following MRIAM equations will be used in deriving the ε-constraint formulation:

β_4^k = [∫_{s'_k}^∞ τ·(1/√(2π))·e^(−τ²/2) dτ] / [∫_{s'_k}^∞ (1/√(2π))·e^(−τ²/2) dτ]

f_i^k(u) = µ(k; u) + β_i^k·σ(k)

τ²/2 = (y(k) − µ(k))² / (2σ²(k))

E[v²(k)] = R(k) = R
µ(k) = E[y(k)] = C·E[x(k)]
σ²(k) = C²·A^(2k)·X0 + Σ_{i=0}^{k−1} C²·A^(2i)·P + R
E[X(k+1)] = A·E[X(k)] + B·u(k)
σ²(k) = C²·Var[X(k)] + R
s'_k = (s_k − µ(k)) / σ(k)
SOLUTION
(i) With the partitioning point at s'_k = 2 (two standard deviations above the mean), the conditional-expectation multiplier is

β_4^k = [∫_2^∞ τ·(1/√(2π))·e^(−τ²/2) dτ] / [∫_2^∞ (1/√(2π))·e^(−τ²/2) dτ]
      = [(1/√(2π))·e^(−2²/2)] / [1 − F(2)]
      = 0.05399 / (1 − 0.9772) = 0.05399 / 0.0228 = 2.368

The variances (with C = 1, A = 1.5, X0 = 1, P = 1, R = 2):

σ²(k) = C²·A^(2k)·X0 + Σ_{i=0}^{k−1} C²·A^(2i)·P + R
σ²(1) = (1)²·(1.5)^(2·1)·(1) + (1)²·(1.5)^0·(1) + 2 = 5.25
σ²(2) = (1)²·(1.5)^(2·2)·(1) + (1)²·(1.5)^0·(1) + (1)²·(1.5)^1·(1) + 2 = 9.5625

The means:

µ(k) = C·E[x(k)]
µ(1) = E[x(1)] = 1.5·x(0) + u(0) + 0 = 1.5 + u(0)
µ(2) = E[x(2)] = 1.5·E[x(1)] + u(1) + 0 = 1.5·[1.5·x(0) + u(0)] + u(1) = 2.25 + 1.5·u(0) + u(1)

Since f_4^k = µ(k) + β_4^k·σ(k), the multiobjective problem is given by

f_1^0 = [1 − u(0)]² + 0.5·[1 − u(1)]²
f_4^1 = 1.5 + u(0) + 2.368·√5.25 = 6.926 + u(0)
f_4^2 = 2.25 + 1.5·u(0) + u(1) + 2.368·√9.5625 = 9.573 + 1.5·u(0) + u(1)
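As a quick check, the multiplier β_4^k and the constant terms of f_4^1 and f_4^2 can be computed with SciPy. The snippet below is an illustrative sketch, not part of the original solution; the small differences from 2.368, 6.926, and 9.573 arise because the text uses table values rounded to F(2) = 0.9772.

```python
from scipy.stats import norm
import math

s_prime = 2.0                                    # partition point in standard-normal units
beta4 = norm.pdf(s_prime) / norm.sf(s_prime)     # E[tau | tau >= s'] for a standard normal
print(round(beta4, 3))                           # ~2.373 (2.368 with the rounded F(2) used in the text)

var1, var2 = 5.25, 9.5625                        # sigma^2(1), sigma^2(2) from the solution
print(round(1.5 + beta4 * math.sqrt(var1), 3))   # constant of f_4^1 (~6.94; 6.926 in the text)
print(round(2.25 + beta4 * math.sqrt(var2), 3))  # constant of f_4^2 (~9.59; 9.573 in the text)
```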
(ii) Rewriting for the ε-constraint method, we have:

min   [1 − u(0)]² + 0.5·[1 − u(1)]²
s.t.  6.926 + u(0) ≤ ε1
      9.573 + 1.5·u(0) + u(1) ≤ ε2

The Lagrangian function is given by:

L = [1 − u(0)]² + 0.5·[1 − u(1)]² + λ_14^01·[6.926 + u(0) − ε1] + λ_24^01·[9.573 + 1.5·u(0) + u(1) − ε2]

Assuming the trade-off values (multipliers) are positive, the constraints are binding; taking the partial derivatives with respect to the multipliers, we have:

∂L/∂λ_14^01 = 6.926 + u(0) − ε1 = 0   ⇒   u(0) = ε1 − 6.926
∂L/∂λ_24^01 = 9.573 + 1.5·u(0) + u(1) − ε2 = 0   ⇒   u(1) = ε2 − 1.5·u(0) − 9.573 = ε2 − 1.5·ε1 + 0.816
ANALYSIS
Through a prespecified present-value cost function, [1 − u(0)]² + 0.5·[1 − u(1)]², the expected values and conditional expected values of the damage functions for two stages have been formulated as an ε-constraint problem. The Pareto-optimal values of the decision variables u(0) and u(1) have been determined using the Lagrangian method. Given specific values of the right-hand-side constraints ε1 and ε2, one can determine an explicit relationship between the decision variables.
PROBLEM IX.4: Discouraging Insurgent Terrorists Determine the most cost-effective way of retraining captured insurgents to prevent their return to terrorist activities.
DESCRIPTION The typical insurgent tries to convert his fellow citizens into joining his cause. If the insurgent is caught, he will spend approximately one year in a holding facility and then return to his terrorist activities. However, retraining can discourage him and prevent this. An individual who has not been properly retrained will return to fight again. Insurgent ages range from 12 to 90 years.
METHODOLOGY Solve this problem using the Multiobjective Risk-Impact Analysis Method (MRIAM).
State Variables:
x1(k) = general population over age 12 at year k
x2(k) = number of insurgents at year k
x3(k) = number of insurgents undergoing retraining in year k
x4(k) = number of insurgents captured and placed in a holding facility in year k
x5(k) = number of insurgents released in year k

Parameter Definitions:
b1 = rate at which members of the population reach 12 years of age
d1 = death rate of the general population over 12
a = rate at which insurgents convert other members into terrorists
d2 = death rate of insurgents (in general higher than d1)
c = percentage rate of incarcerated insurgents
e = percentage of insurgents in retraining
f = rate at which an insurgent in retraining converts another insurgent to be retrained
g = rate of insurgents whose training was ineffective

The five state equations are:

General population:
x1(k+1) = x1(k) + (b1 − d1)·x1(k) − a·x1(k)·x2(k)                                        (IX.4.1)

Insurgent population:
x2(k+1) = x2(k) + a·x1(k)·x2(k) − (d2 + c + e)·x2(k) − f·x2(k)·x3(k) + g·x3(k) + x5(k)   (IX.4.2)

Retraining population:
x3(k+1) = x3(k) + f·x3(k)·x2(k) + e·x2(k) − d1·x3(k) − g·x3(k)                           (IX.4.3)

Incarcerated population:
x4(k+1) = c·x2(k)                                                                        (IX.4.4)

Released from the holding facility:
x5(k+1) = x4(k)                                                                          (IX.4.5)
SOLUTION
Assume:
1. x1(k+1) = x1(k) — the general population is constant.
2. f (the rate at which insurgents in retraining convert other insurgents to retrain) = 0.
3. The death rates of the normal population and the insurgent population (d1, d2) are neglected.
4. x5(k) = 0.
5. g = 0.
With these assumptions, we can simplify (IX.4.2) and (IX.4.3) as follows:

x2(k+1) = (1 + a·x1(k) − c − e)·x2(k) + x5(k) = (1 + a·x1(k) − c − e)·x2(k)   (since x5(k) = 0 by assumption)
x3(k+1) = x3(k) + e·x2(k)

where
a = 4.5×10⁻⁶
c = 15%
e = 20%
x2(0) = 10,000
x3(0) = 2,000
x1(0) = 100,000
w(k) = normally distributed random variable with µ = 0 and σ² = 625 (σ = 25)
f4(·) = high-range conditional number of insurgents
f5(·) = unconditional expected number of insurgents
Y1 = cost due to insurgent activity = $100,000/insurgent
Y2 = cost of incarceration = $25,000/inmate
Y3 = cost of retraining = $35,000/insurgent

Cost = Y1·x2(k) + Y2·c·x2(k) + Y3·e·x2(k)

Consider the 3 possible policy decisions:
A. Linearly increase the percentage of jailed insurgents up to 30% over the next 3 years: c(1) = 15%, c(2) = 20%, c(3) = 25%, c(4) = 30%; all other variables remain constant.
B. Linearly increase the retraining program over the next 3 years: e(1) = 20%, e(2) = 25%, e(3) = 30%, e(4) = 35%; all other variables remain constant.
C. Combine methods A and B over the next 3 years.
We can calculate the unconditional expected number of insurgents directly. To calculate the conditional expected number of insurgents, f4, we assume that w(k) ~ N(0, 25²); thus β4·s(k) = (1.525)(25) = 38.125.
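The year-by-year propagation and the societal costs below are easy to automate. The following Python script is an illustrative sketch (not part of the original solution); it reproduces the entries of Table IX.4.3 up to rounding.

```python
# Sketch: propagate the insurgent population x2(k) and the societal cost
# for each policy over four years (Problem IX.4 data).
A_RATE, X1, X2_0 = 4.5e-6, 100_000, 10_000
Y1, Y2, Y3 = 100_000, 25_000, 35_000        # $/insurgent, $/inmate, $/retrainee
BETA4_S = 1.525 * 25                        # = 38.125, high-range adjustment

policies = {
    "A": ([0.15, 0.20, 0.25, 0.30], [0.20] * 4),
    "B": ([0.15] * 4,               [0.20, 0.25, 0.30, 0.35]),
    "C": ([0.15, 0.20, 0.25, 0.30], [0.20, 0.25, 0.30, 0.35]),
}

for name, (c, e) in policies.items():
    x2 = X2_0
    for year in range(4):
        x2 = (1 + A_RATE * X1 - c[year] - e[year]) * x2        # expected insurgents (f5)
        unit_cost = Y1 + Y2 * c[year] + Y3 * e[year]            # $ per insurgent that year
        f5_cost = x2 * unit_cost / 1e9                          # $billion
        f4_cost = (x2 + BETA4_S) * unit_cost / 1e9              # $billion, high-range
        print(f"{name} year {year + 1}: f5={x2:,.0f}  cost={f5_cost:.3f}B  f4 cost={f4_cost:.3f}B")
```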
f5(x2(1)) = (1 + 4.5×10⁻⁶·(100,000) − 0.15 − 0.20)·10,000 = 11,000
f5(x2(2)) = (1 + 4.5×10⁻⁶·(100,000) − 0.20 − 0.20)·11,000 = 11,550
f5(x2(3)) = (1 + 4.5×10⁻⁶·(100,000) − 0.25 − 0.20)·11,550 = 11,550
f5(x2(4)) = (1 + 4.5×10⁻⁶·(100,000) − 0.30 − 0.20)·11,550 = 10,973

Table IX.4.3. Unconditional and Conditional Expected Values of Societal Cost for Four Years
Decision   Year   f5(x2(k))   Cost ($billion)   f4(x2(k))   Cost ($billion)
A          1      11,000      1.218             11,038      1.222
A          2      11,550      1.294             11,588      1.298
A          3      11,550      1.308             11,588      1.312
A          4      10,973      1.256             11,011      1.261
B          1      11,000      1.218             11,038      1.222
B          2      11,550      1.299             11,588      1.304
B          3      11,550      1.320             11,588      1.324
B          4      10,973      1.273             11,011      1.277
C          1      11,000      1.218             11,038      1.222
C          2      11,000      1.251             11,038      1.256
C          3      9,900       1.156             9,938       1.160
C          4      7,920       0.948             7,958       0.953

ANALYSIS
Option C, in which the percentages of both jailed and retrained insurgents are increased, gives the best outcome for the program: both the number of insurgents and the societal cost decrease after 4 years. Options A and B show a slight decrease in insurgents after 4 years, but their costs rise relative to the initial year.
PROBLEM IX.5: Modeling of Cancer Patient Population Should the government cover the cost of treating cancer patients?
DESCRIPTION The following problem is an example to illustrate a study on methods and costs of treatment for cancer patients. It is a hypothetical example of treating cancer with chemotherapy in a given region.
METHODOLOGY This problem can be analyzed using the Multiobjective Risk Impact Analysis Method (MRIAM).
x1(k) = general population at year k
x2(k) = number of cancer patients at year k
x3(k) = number of cancer patients undergoing chemotherapy at year k
x4(k) = number of cancer survivors at year k

b1 = birthrate of the healthy population
d1 = death rate of the normal population
a = rate of cancer development
d2 = death rate of cancer patients
e = percentage of cancer patients undergoing chemotherapy
f = rate of cancer-patient cures
g = death rate of chemotherapy patients

State Equations:
Non-cancer population: x1(k+1) = x1(k) + (b1 − d1)·x1(k) − a·x1(k)·x2(k)
Cancer population not receiving treatment: x2(k+1) = x2(k) + a·x1(k)·x2(k) − (d2 + e)·x2(k) − f·x2(k)·x3(k) + g·x3(k)
Chemotherapy population: x3(k+1) = x3(k) + f·x3(k)·x2(k) + e·x2(k) − d1·x3(k) − g·x3(k)
Cancer survivors: x4(k+1) = x3(k)
Assumptions:
1. The general population is constant: x1(k+1) = x1(k).
2. The rate at which a chemotherapy patient is cured of cancer is zero: f = 0.
3. Death rates are neglected: d1 = d2 = 0.
4. g is neglected: g = 0.

With these assumptions:
x2(k+1) = (1 + a·x1(k) − e)·x2(k)
x3(k+1) = x3(k) + e·x2(k)

One state equation: x(k+1) = a·x(k) + u(k) + w(k), where
a = 4×10⁻⁶         rate at which people develop cancer
e = 50%            percentage of cancer patients undergoing chemotherapy treatment
x1(0) = 100,000    number of people in the region at year 0
x2(0) = 500        number of cancer patients in the region at year 0
w(k) = normally distributed random variable with µ = 0 and σ² = 625 (σ = 25)

Consider two objective functions:
f4(·) = conditional expected value of annual societal cost
f5(·) = unconditional expected value of annual societal cost

Cost = γ1·x2(k) + γ2·e·x2(k)
γ1 = cost due to cancer = $50,000/cancer patient (e.g., costs associated with the productivity loss incurred by a cancer patient)
γ2 = cost of chemotherapy = $10,000/chemotherapy patient

E[w(k)] = 0 for f5(·); for f4(·), the high-range adjustment is β4·s = (1.525)(25) = 38.125.

Policy decisions: Assume the region decides to pay for the chemotherapy using tax revenues. This causes a linear increase in the percentage of cancer patients who desire treatment over the next 4 years (5% per year, starting at 50% in the first year).
e(1) = 50% e(2) = 55% e(3) = 60% e(4) = 65% e(5) = 70%
SOLUTION The following results are obtained in the next 5 years, assuming all other values remain constant.
x2(k+1) = (1 + a·x1(k) − e)·x2(k) + w(k), where we can assume w(k) = 0.

f5(x2(1)) = (1 + 0.4 − 0.50)·500 = 450
f5(x2(2)) = (1 + 0.4 − 0.55)·450 = 382.5 ≈ 383
f5(x2(3)) = (1 + 0.4 − 0.60)·382.5 = 306
f5(x2(4)) = (1 + 0.4 − 0.65)·306 = 229.5 ≈ 230
f5(x2(5)) = (1 + 0.4 − 0.70)·229.5 = 160.65 ≈ 161

Now we calculate f4(·):
f4(x2(1)) = 450 + 38.125 = 488.125
f4(x2(2)) = 382.5 + 38.125 = 420.625
f4(x2(3)) = 306 + 38.125 = 344.125
f4(x2(4)) = 229.5 + 38.125 = 267.625
f4(x2(5)) = 160.65 + 38.125 = 198.775

We calculate cost as follows: Cost = γ1·x2(k) + γ2·e·x2(k)
γ 1 = Cost due to cancer = $50,000/cancer patient γ 2 = Cost of chemotherapy = $10,000/chemotherapy patient Table IX.5.1 charts the costs to society over 5 years of subsidized chemotherapy. Figure IX.5.1 illustrates this graphically.
Table IX.5.1. Unconditional and Conditional Expected Values of Societal Cost for 5 Periods
Year   e      f5(·)   Cost ($M)   f4(·)     Cost ($M)
1      0.50   450     24.75       488.125   26.85
2      0.55   383     21.04       420.625   23.13
3      0.60   306     16.83       344.125   18.93
4      0.65   230     12.62       267.625   14.72
5      0.70   161     8.84        198.775   10.93
Figure IX.5.1. Annual societal costs ($M/year) over 5 periods

ANALYSIS
We can see that as time passes, both the conditional and the unconditional expected value of cost decrease. The percentage of cancer patients seeking chemotherapy increases, and the rate at which patients survive grows faster than the rate at which new people develop the disease. Thus, the expected total cost per year eventually decreases, as shown above. If the value of life is placed above that of money, it is clear that the government should cover the cost of chemotherapy.
PROBLEM IX.6: River Flooding Control
The Mississippi River is a major waterway in the United States. It has supported the economic livelihoods and social activities of the surrounding states. River flooding poses a great challenge to the U.S. Army Corps of Engineers in terms of controlling channel overflow, particularly during rainy seasons. In 1993, excessive precipitation over the Upper Mississippi River Basin resulted in massive and destructive flooding in the Midwest region (as depicted in the map below).

Map of Mississippi River Flooding, June–August 1993. Source (date accessed: April 28, 2005): [http://www.geo.mtu.edu/department/classes/ge404/flood/background/1120-b/]

The effects of channel overflow include flooding of roads, destruction of lives and property, and disruption of basic services (electric power, transportation, and communication), among others. In particular, the adverse effects of the 1993 flooding include the following: (i) the loss of water supply, pipelines, and treatment facilities in Des Moines, Iowa, the center of the flooding, amounted to over $700 million; (ii) agricultural damage exceeded $20 million; (iii) river transportation was halted for two months, resulting in an average loss of $1 million per day; and (iv) about $500 million in damage was incurred on hundreds of miles of roads.

Suppose a total budget of $500 million is allocated over a three-year horizon to be used for flood-control management (e.g., river dredging, constructing river-flow control infrastructures such as levees and reservoirs, etc.). Three policy options (a, b, and c) have been identified and are summarized in Table IX.6.1 below. Each policy option indicates the amount of funds to be released prior to each period (k = 1, 2, and 3).
Table IX.6.1. Policy Options with Given u(k) Values
          u(k−1) = amount spent on flood-control management at Stage k−1 (in million dollars)
Policy    u(0)    u(1)    u(2)
a         500     0       0
b         250     150     100
c         0       250     250
Table IX.6.2 summarizes the values of parameters in the mean and variance formulas.
Table IX.6.2. Summary of Parameters
Parameter   Value    Explanation
A           0.8      Decay rate of precipitation due to predicted El Niño in the next 3 years [unitless]
B           −0.5     Controlled volume of channel overflow per million $ invested [million m³ / million $ spent]
C           1        Proportionality constant [unitless]
x0          1000     Initial volume of potential overflow [million m³]
P           1        Variance of the random variable ω(k) [(million m³)²]
R           0        Variance of the random variable ν(k) [(million m³)²]
X0          200      Initial variance of potential overflow [(million m³)²]
The following two steps are required to solve this problem:
(a) Calculate f2(k), f3(k), and f4(k) for all periods k = 1, 2, and 3, using the table format shown below.

Period, k
Policy   u(k−1)   m(k)   s(k)   f2(k)   f3(k)   f4(k)
a
b
c

(b) For each option, plot f2(k), f3(k), and f4(k) with respect to the control variable u(k−1). Do this for every period and analyze the results.
Note: Use one-standard-deviation partitioning for f2(k) and f4(k).
PROBLEM IX.7: Road Project Construction This problem is concerned with scheduling city road construction projects within a region that comprises four districts. Three projects are under consideration: • Project A—Adding a new interchange to a downtown expressway to ease traffic issues due to new economic development in the area, • Project B—Resurfacing the region’s roadways in four districts to reduce accidents and lawsuits due to the roadway infrastructure, and • Project C—Synchronizing traffic lights to increase traffic throughput and reduce traffic accidents/fatalities. A two-year time period will be used for each of the projects to be constructed. They can be completed concurrently or individually. For this exercise, we introduce three possible scenarios: Policy 1: All projects start at year zero, Policy 2: Project A starts at year zero with Projects B and C starting at year two Policy 3: Project A starts at year zero with Project B starting at year two and Project C starting at year four. Use the Multiobjective Risk-Impact Analysis Method (MRIAM) to allow the decisionmakers to look at the scenarios from a cost versus traffic-throughput standpoint and analyze their options for a six-year period (two-year intervals, three periods) and provide insights into making decisions with public funds. Expected benefits by: f 1 (k ) = cumulative costs (CC)
f 2 (k ) = µ - σ conditional expected value for benefits (measured in estimation of uninterrupted travel/delays) f 3 (k ) = µ unconditional expected value for benefits f 4 (k ) = µ + σ conditional expected value for benefits Three projects: A: Add a downtown expressway interchange B: Resurface roads in four city districts C: Synchronize traffic lights to aid traffic flow Initial Costs: Project A: $7.4 Million Project B: $6.5 Million Project C: $5.8 Million
Operating costs including road repair, inspections, and traffic-light monitoring are factored into the initial cumulative costs at roughly $0.4 million per project per 2year period. (see Tables IX.7.1, IX.7.2, and IX.7.3 below).
Table IX.7.1. Parameter Values and Descriptions
Parameter   Value    Description
A           1        Multiplier effect for initial benefits
B           1        Multiplier effect for control variable
C           1        Proportionality constant for system input and output
x0          0.5      Initial level of optimal transportation conditions (throughput/uninterrupted travel/no delays)
P           0.005    Variance of random variable ω(k)
R           0        Variance of random variable ν(k)
X0          0.0025   Initial variance
Table IX.7.2. u(k) Values for Each Policy
          u(k) = level of traffic improvement from period k to k+1
Policy    u(0)     u(1)     u(2)
1         −5.0%    25.0%    5.0%
2         −3.0%    8.0%     18.0%
3         −3.0%    9.0%     9.0%
Table IX.7.3. Cumulative Costs for Each Policy
          Cumulative cost (CC) at period k−1, in $millions
Policy    CC(0)    CC(1)    CC(2)
1         19.7     20.9     22.1
2         7.4      20.1     21.3
3         7.4      14.3     20.9
Since u(k) is the level of traffic improvement in the time period, it can be between −1.0 and +1.0. Traffic throughput will degrade during construction, as lanes will be closed, detours are created, and traffic is slowed.
PROBLEM IX.8: Funding Skin Cancer Research Scientists in Switzerland are considering methods for reducing the number of cases of a certain type of skin cancer that currently affects 10,000 people in the country. Fifty million dollars has been allocated for a 3-year endeavor to help prevent and cure cases of skin cancer. Three policies have been devised to allocate the funds. With each policy, there is an associated cost related to the present value of the funds being used for each year as well as the implied costs associated with administering each option. The Multiobjective Risk Impact Analysis Method (MRIAM) is used to arrive at a solution. Variable Definitions x(k) = number of cases of skin cancer in year k, x(0) = x0 = 10,000 The simplified model is
x(k) = A^k·x(0) + Σ_{i=0}^{k−1} A^i·[B·u(k−1−i) + w(k−1−i)]
x(k + 1) = Ax(k ) + Bu (k ) + w(k ) y (k ) = Cx (k ) u(k) = amount of money spent on skin cancer research at year k. w(k) = random variations due to risk factors such as sun intensity, general knowledge of preventative measures (sunscreen, sun time, etc.) a = rate of diagnosis of new skin cancer patients = 0.05 A = growth rate of population afflicted with skin cancer = 1.05 B = number of cases of skin cancer per dollar spent = 1 case per $10,000 = -0.0001 C = coefficient parameter = 1 P = variance of random variable w(k) = E[w2(k)] = P(k) = 600 R = variance of random variable v(k) = 0 X0 = initial variance = 500 Table IX.8.1 lists the costs associated with skin-cancer research policies (strategies) a, b, and c:
Table IX.8.1. Comparative Costs of Cancer Prevention Strategies
           u(k) = funds to be used
Strategy   u(0)           u(1)           u(2)
a          $20,000,000    $20,000,000    $10,000,000
b          $17,000,000    $17,000,000    $16,000,000
c          $28,000,000    $5,000,000     $17,000,000
Given the model and data, conduct the Multiobjective Risk Impact Analysis method to evaluate the Pareto-optimal solution(s).
PROBLEM IX.9: Controlling River Channel Overflow Policymakers in a metropolitan city in the Philippines are challenged with solving the chronic flooding problem caused by overflow of a major river. The Marikina River is a main waterway that supports commodity transport and other economic livelihood activities in a major city in the Philippines. However, heavy rainfall events that visit the area every year cause the river levels to rise above limits resulting in severe flooding with catastrophic consequences. Most policy decisions and infrastructure investments involve long-range impacts to other decisions and concerns. The objective of this problem is to model and identify the impact of the current policy decision on future concerns using the Multiobjective Risk Impact Analysis Method (MRIAM).
Figure IX.9.1. System model

where:
• x(k) represents the state variable, defined as the number of affected localities when the river overflows in Period k;
• y(k) represents the output variable, defined as the damages (in Philippine Pesos, PhP) for Period k;
• u(k) represents the control variable, the amount of money spent on an option in Period k;
• f_i^k represents the conditional expected value of risk calculated for each Period k.

The general form of the state equation is
x(k+1) = A·x(k) + B·u(k) + ω(k)
and the output equation is
y(k) = C·x(k) + υ(k)
where
A = 1.25, growth rate of the population in affected localities;
B = −50, number of residents protected per unit (1,000 PhP) of investment [residents/1,000 PhP], i.e., −0.05 residents/PhP;
C = PhP 2,000, average damage cost per resident in a flood event;
ω(k), υ(k) = random variables.

Perform the Multiobjective Risk Impact Analysis Method to evaluate the Pareto-optimal solution(s). Table IX.9.1 summarizes the three policies being evaluated, which define the schedule of budget release for the control of river overflow.
Table IX.9.1. Budget Allocation Policies
          u(k−1) = amount spent on the flood-control project (in million PhP)
Policy    k=1    k=2    k=3    k=4
A         0.5    0.5    1.5    1.5
B         4      0      0      0
C         1      1      1      1
Note: It is assumed that the funding is released at the beginning of a period (i.e., Period k-1), and the effect would be realized in Period k. Use one standard deviation partitioning for f2(k) and f4(k).
PROBLEM IX.10: Modeling the Design Phases of a New Automobile A design review looks at the progress of the project and analyzes it compared to past projects and current expert opinion. As the design is partially completed, the review reduces uncertainty in the final cost. There are three phases in the design process for a new automobile: These are: • The Concept phase (k =1), a very abstract brainstorming session where no concrete design specifications are used. At the completion of the conceptual phase, you should have the vehicle class, feature packages, and a general idea of what makes your design different from all other vehicles in its class. • The Pre-prototyping phase (k = 2), where the detailed design work begins. The emphasis is on accurate cost analysis, manufacturing feasibility, and production scheduling. • The Prototyping phase (k = 3), where the selected alternative is designed and built in full, and any more minor improvements are made. In keeping with the 3-phase design process, there are several strategies to conducting a design review. The more money spent on the design review, the more accurate the reduction in uncertainty of the final project, but at a cost of adding to vehicle cost and production delay. Likewise, a more accurate result is derived from a review in a later design phase, but at the cost of reduced flexibility in design changes based on suggestions from the design review board. Reviewing an earlier design is less costly. The company has $5 million available for the design review process, and we must decide how much and when to spend the available funds. Initially, there is an expected uncertainty of 90% in sales after introducing the new design. We use the Multiobjective Risk-Impact Analysis Method (MRIAM) to decide which strategy is best to reduce uncertainty in the new design. For all periods (k = 1, 2, and 3) and strategies (a, b, and c), Tables IX.10.1 through IX.10.3 provide the data on proportion of funds, level of fund utilization, and cumulative cost, respectively.
Table IX.10.1. Proportion of Funds for Each Period
           p(k−1) = proportion of funds used for design review
Strategy   p(0)    p(1)    p(2)
A          0.8     0.1     0.1
B          0.1     0.7     0.2
C          0.1     0.3     0.6
Table IX.10.2. Level of Fund Utilization for Each Period
           u(k−1) = ln[1 − p(k−1)]
Strategy   u(0)      u(1)      u(2)
A          ln(0.2)   ln(0.9)   ln(0.9)
B          ln(0.9)   ln(0.3)   ln(0.8)
C          ln(0.9)   ln(0.7)   ln(0.4)

Table IX.10.3. Cumulative Cost at Period k−1
           Cumulative cost (CC) at period k−1 (present value, in $millions)
Strategy   CC(0)   CC(1)   CC(2)
A          2.4     2.7     3.0
B          0.4     3.2     4.0
C          0.3     2.2     5.0

Table IX.10.4 summarizes the values of the parameters in the above mean and variance formulas:
Table IX.10.4. Summary of Parameters
Parameter   Value     Description
A           1         Multiplier effect for initial sales uncertainty
B           0.5       Multiplier effect for control variable
C           1         Proportionality constant for system input and output
x0          ln(0.9)   ln(initial sales uncertainty)
P           0.1       Variance of random variable ω(k)
R           0         Variance of random variable ν(k)
X0          0.03      Initial variance
Perform the Multiobjective Risk Impact Analysis Method to evaluate the Pareto-optimal solution(s). Use one-standard-deviation partitioning for f2(k) and f4(k).
X. Extreme Event
PROBLEM X.1: Analysis of Annual Maximum River Discharge
This problem deals with the importance of considering an extreme event, such as a flood, when making decisions about dam construction.
DESCRIPTION
From US Geological Survey Water Resources Data, the annual maximum discharges for the gauging station of the Salt Fork River near St. Joseph, Illinois, between 1959 and 1975 are listed below.

Table X.1.1. Annual Maximum Discharge Data
Year    Annual Discharge, ft³/sec (CFS)
1959    6030
1960    1800
1961    2300
1962    3370
1963    2340
1964    5380
1965    1230
1966    950
1967    2640
1968    6860
1969    2630
1970    2600
1971    2350
1972    1350
1973    2990
1974    2750
1975    1920
METHODOLOGY
We use Extreme Event (EE) Analysis to help make the most effective decision.
a) Suppose a Type I largest-value extremal distribution is adopted for the annual maximum discharge at this station. Determine the parameters of the distribution from the sample mean and variance.
b) The Probable Maximum Flood (PMF) is defined as the level of cubic feet per second (CFS) whose exceedance probability is 10⁻⁵. Find the PMF for the above gauging station by filling in the missing entries in Columns 2 and 3 of Table X.1.2, where Y is the "actual" flood in any year, measured as a proportion (or percentage) of the PMF.
c) Suppose there is a 100-ft-high flood-control earth dam at this gauging station. However, the dam was inappropriately designed, and considerable damage still occurs downstream even for moderate floods. Column 4 of Table X.1.2 shows the potential flood damage downstream, X in 10⁶$, for different flood levels y. To make the dam more effective, its height, h feet, is to be raised. The damage X is therefore a function of Y and h. For a given dam height h, h ≥ 100, the damage X is a monotone increasing function of Y. For this reason, the probability that the annual damage X exceeds a particular level x (where x is the damage caused by a level-y flood) for the existing dam is exactly equal to the probability that the flood level y is exceeded. Use this information (i) to fill in all entries in the last column of Table X.1.2 (the exceedance probability of damage), and (ii) to compute the conditional expected damages f2, f3, f4 and the expected damage f5 for the existing dam. (f2, f3, and f4 are expected damages in 10⁶$, conditioned on the exceedance probability of damage lying in [.88, 1], [.01, .88], and [0, .01], respectively.) Fill in this information in Table X.1.3.
d) If the cost of raising the height of the dam (in 10⁶$) is f(h) = h − 100 for h ≥ 100, fill in the first row of Table X.1.3, in which three other dam heights are being considered. Suppose the values of f2, f3, f4, and f5 for the three new alternatives have been computed as shown in Table X.1.3. Plot f1 vs. f2, f1 vs. f3, f1 vs. f4, and f1 vs. f5 on the same graph, using f1 as the vertical axis. Estimate all relevant trade-offs between the various pairs of alternatives and summarize your results in the form of a table. Discuss their implications for the dam-height decision.

Table X.1.2. Template for Flood Data
Y (proportion of PMF)   Flood in CFS   Exceedance Prob. P(Y ≥ y)   Potential Damage in 10⁶$, X, when h = 100 ft   Exceedance Prob. P(X ≥ x) when h = 100
.01                                    .9906                       .1
.05                                    .9358                       .5
.07                                    .8821                       1.2
.10                                                                2.0
.20                                    .3155                       3.0
.30                                    .0972                       4.5
.40                                    .0270                       7.0
.50                                    .0073                       10.0
.60                                    .0020                       14.0
.80                                    .000167                     25.0
1.0                                    .00001                      50.0
Table X.1.3. Cost of Dam Improvement Options
Cost and Expected Damage in 10⁶$     Option 0       Option 1       Option 2       Option 3
                                     h0 = 100 ft    h1 = 102 ft    h2 = 105 ft    h3 = 110 ft
f1: cost
f4: low-prob. damage                                8.5            5.5            3.0
f3: med.-prob. damage                               2.4            1.8            1.5
f2: high-prob. damage                               .35            .10            0.0
f5: overall expected damage                         2.0            1.5            1.0
SOLUTION
a) Sample mean: x̄ = 2911
   Sample variance: SD² = (1663)² (using SD_{n−1} rather than SD_n); hence SD = 1663.

∴ α̂_y = π / (√6 · SD) = 7.7×10⁻⁴
and û_y = x̄ − γ/α̂_y = 2911 − 0.577216/(7.7×10⁻⁴) = 2161.37

b) 0.00001 = P(Y ≥ PMF) = P(S ≥ α̂_y·(PMF − û_y)) = P(S ≥ 11.50)
Thus α̂_y·(PMF − û_y) = 11.50
⇒ PMF = 11.50/α̂_y + û_y = 17,096.44 CFS

∴ 0.1·PMF = 1709.64
P(Y ≥ 0.1·PMF) = P(Y ≥ 1709.64) = P(S ≥ 7.7×10⁻⁴·(1709.64 − 2161.37)) = P(S ≥ −0.35)
             = 1 − P(S ≤ −0.35) = 1 − 0.2419 = 0.7581
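For reference, the Type I (Gumbel) parameter estimates and the PMF can be checked with a short script. The sketch below is illustrative and not part of the original solution; small differences from the values above arise from rounding (the text uses α̂ = 7.7×10⁻⁴ and a reduced variate of 11.50).

```python
import math

# Annual maximum discharges (CFS), Salt Fork River, 1959-1975 (Table X.1.1)
q = [6030, 1800, 2300, 3370, 2340, 5380, 1230, 950, 2640, 6860,
     2630, 2600, 2350, 1350, 2990, 2750, 1920]

n = len(q)
mean = sum(q) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in q) / (n - 1))   # SD_(n-1)

alpha = math.pi / (math.sqrt(6) * sd)      # Gumbel scale parameter
u = mean - 0.577216 / alpha                # location parameter (Euler-Mascheroni constant)

# PMF: the discharge whose exceedance probability is 1e-5.
# For the Gumbel reduced variate S: P(S >= s) = 1 - exp(-exp(-s))
s = -math.log(-math.log(1 - 1e-5))         # ~11.51
pmf = u + s / alpha                        # ~1.71e4 CFS (17,096 in the text)

p_exceed = 1 - math.exp(-math.exp(-alpha * (0.1 * pmf - u)))   # P(Y >= 0.1 PMF) ~ 0.758
print(round(alpha, 6), round(u, 1), round(pmf, 0), round(p_exceed, 4))
```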
c) With the information given, Table X.1.2 becomes:

Table X.1.4. Flood Data with Completed Exceedance Probabilities
Y (proportion of PMF)   Flood in CFS   Exceedance Prob. P(Y ≥ y)   Potential Damage in 10⁶$, X, when h = 100 ft   Exceedance Prob. F̂(x) = 1 − F(x), P(X ≥ x) when h = 100
.01                     170.96         .9906                       .1                                             .9906
.05                     854.8          .9358                       .5                                             .9358
.07                     1196.72        .8821                       1.2                                            .8821
.10                     1709.64        .7581                       2.0                                            .7581
.20                     3419.20        .3155                       3.0                                            .3155
.30                     5128.8         .0972                       4.5                                            .0972
.40                     6838.4         .0270                       7.0                                            .0270
.50                     8548.0         .0073                       10.0                                           .0073
.60                     10257.6        .0020                       14.0                                           .0020
.80                     13676.8        .000167                     25.0                                           .000167
1.0                     17096          .00001                      50.0                                           .00001
Note: X = X(Y, h); X is a monotone increasing function of Y, given h.
∴ P(X ≥ x; h = 100) = P(Y ≥ y), where x = X(y; h = 100).

For simplicity, let the partitions on the probability scale, IP1, IP2, and IP3, be transformed into partitions on the damage scale, IX1, IX2, and IX3, respectively, as shown in Table X.1.2. Thus:
f2 = E(X | IX1) = [ Σ_{x_i, x_{i+1} ∈ IX1} |F̂(x_i) − F̂(x_{i+1})| · (x_i + x_{i+1})/2 ] / IP1

   = [ |1.0 − 0.9906|·(0 + 0.1)/2 + |0.9906 − 0.9358|·(0.1 + 0.5)/2 + |0.9358 − 0.8821|·(0.5 + 1.2)/2 ] / (1 − 0.8821)
   = 0.53

Likewise,
f3 = E(X | IX2)
   = [ |0.8821 − 0.7581|·(1.2 + 2.0)/2 + |0.7581 − 0.3155|·(2.0 + 3.0)/2 + |0.3155 − 0.0972|·(3.0 + 4.5)/2
       + |0.0972 − 0.0270|·(4.5 + 7)/2 + |0.027 − 0.0073|·(7 + 10)/2 ] / (0.8821 − 0.0073)
   = 3.07

f4 = E(X | IX3)
   = [ |0.0073 − 0.002|·(10 + 14)/2 + |0.002 − 0.000167|·(14 + 25)/2 + |0.000167 − 0.00001|·(25 + 50)/2
       + |0.00001 − 0|·(50 + 100)/2 ] / (0.0073 − 0)
   = 18.22

f5 = f4·(0.0073) + f3·(0.8821 − 0.0073) + f2·(1 − 0.8821) = 2.88

d) Table X.1.3 becomes:
Table X.1.5. Cost of Dam Improvement Options with Completed Values for the Dam-Height Options at the Various Partitions
Cost and Expected Damage in 10⁶$       Option 0      Option 1      Option 2      Option 3
                                       h0 = 100 ft   h1 = 102 ft   h2 = 105 ft   h3 = 110 ft
f1: cost                               0             2             5             10
f4: low-prob. damage (catastrophe)     18.22         8.5           5.5           3.0
f3: med.-prob. damage                  3.07          2.4           1.8           1.5
f2: high-prob. damage                  .53           .35           .10           0.0
f5: overall expected damage            2.88          2.0           1.5           1.0
Figure X.1.1. Plot of Table X.1.5 (f1, the cost of the option, vs. the cost of potential damage, both in 10⁶$, for f2, f3, f4, and f5)

Trade-offs:   λij(hp, hq) ≈ [fi(hp) − fi(hq)] / [fj(hp) − fj(hq)]

        h0–h1     h0–h2     h0–h3     h1–h2     h1–h3     h2–h3
λ12     −11.1     −11.63    −18.87    −12.0     −22.86    −50
λ13     −2.99     −3.94     −6.37     −5.0      −8.89     −16.7
λ14     −0.21     −0.36     −0.66     −1.0      −1.45     −2.0
λ15     −2.27     −3.62     −5.32     −6.0      −8.0      −10
ANALYSIS The need to consider extreme events as opposed to “average” events is demonstrated clearly in this example. For example, in comparing Option h1 with h0, it costs $2M to construct h1 while the reduction in damage is $0.88M in the “average” sense (f5), and $9.72M in the extreme-event case (f4). Considering the average case (f5) alone may lead to rejection of h1 in favor of h0 since the “average” expected benefit will not pay for the “certain” cost. However, considering the extreme event case (f4) will make h1 very attractive, as the expected benefit in case of an extreme event is about five times that of the cost.
PROBLEM X.2: Integrated Circuit for a Helicopter Four design options are being considered for an integrated circuit chip computer subsystem for a combat helicopter. DESCRIPTION The objective of this problem is to maximize the reliability of the chip subsystem for a mission of a three-hour duration. The reliability of each design must be weighed against the cost. METHODOLOGY We solve this problem using Extreme Event Analysis. Suppose N identical components are tested for one time period. Let N f (t ) be the number of components that have failed, and N 0 (t ) be the number of components that are operating. The failure rate (λ) of the components is given by:
λ = N_f(t) / N = N_f(t) / [N_o(t) + N_f(t)]

The reliability R(t) of a system is defined as the conditional probability that the system performs correctly throughout an interval of time [t0, t], given that it was performing correctly at time t0. For components whose time to failure is exponentially distributed, the reliability is given by:

R(t) = e^(−λt)

You may want to use the following equation:

f_4^N(·) = µ + σ·√(2·ln n)
An integrated circuit chip for use in the computer for a combat helicopter has a mean failure rate of 0.05 per hour (λ = 0.05) and a standard deviation of 0.02 (assuming a normal distribution). The cost of each chip is $100. Maximizing the reliability of the chip subsystem for a three-hour mission can be done by placing the chips in parallel. The failure rate for such a parallel system is given by λ^n, where n is the total number of parallel components used. Assume that the standard deviation of the parallel system is the same as for each individual chip.
SOLUTION
i) The four design options for the chip subsystem have 1, 2, 3, and 4 chips in parallel, respectively. For each option, compute the mean reliability of the subsystem for the 3-hour mission.

Design Option 1: λ_sys = (0.05)^1 = 0.05;  R(3) = e^(−0.05·3) = 0.86071
Design Option 2: λ_sys = (0.05)^2 = 0.0025;  R(3) = e^(−0.0025·3) = 0.99253

Taking similar steps for Design Options 3 and 4, the complete results are as follows:

Table X.2.1. Failure and Reliability Rate by Design Option
Design Option   λ_sys        R(3)
1               0.05         0.86071
2               0.0025       0.99253
3               0.000125     0.99963
4               0.00000625   0.99998
ii) Calculate f4(·) for each design option for a partition point of 85% on the reliability axis.
Hint: To compute the partition point on the probability axis from the partition point on the reliability axis, calculate the corresponding value of the failure rate using R(t) = e^(−λt).

Design Option 1:
R(3) = e^(−3λ1) = 0.85, where λ1 is the partition point on the failure-rate axis, so λ1 = 0.05417.
Converting to the standard normal:
Φ⁻¹(α1) = (λ1 − µ_λ)/σ = 0.2085 ≈ 0.21
where µ_λ is the mean failure rate of the system, σ is the standard deviation of the failure rate, and α1 is the partition point on the probability axis.
From standard normal tables, α1 = 0.5826, and with n = 1/(1 − α1):

f_4^n(·) = µ + σ·√(2·ln n) = 0.05 + 0.02·√(2·ln[1/(1 − 0.5826)]) = 0.07644

Taking similar steps for Design Options 2, 3, and 4, the complete results are shown in Table X.2.2:

Table X.2.2. f4 with a Partition Point of 85% on the Reliability Axis
Design Option i   λi        Φ⁻¹(αi)   αi       n          f_4^N(·)
1                 0.05417   0.21      0.5826   2.3960     0.07644
2                 0.05417   2.58      0.9951   204.5787   0.06774
3                 0.05417   2.70      0.9966   290.5198   0.06748
4                 0.05417   2.71      0.9966   295.7597   0.06747
iii) Calculate f4(·) for each design option using a partition point of 0.99 on the probability axis.

Design Option 1:
f_4^n(·) = µ + σ·√(2·ln n) = 0.05 + 0.02·√(2·ln[1/(1 − 0.99)]) = 0.11070

Taking similar steps for Design Options 2, 3, and 4, the complete results are presented in Table X.2.3:

Table X.2.3. f4 with a Partition Point of 99% on the Probability Axis
Design Option   α      n     f_4^N(·)
1               0.99   100   0.11070
2               0.99   100   0.06320
3               0.99   100   0.06082
4               0.99   100   0.06070
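The entries of Tables X.2.1–X.2.3 can be reproduced (up to rounding) with a short script. The following is an illustrative sketch, not part of the original solution; it uses SciPy only for the standard-normal CDF.

```python
from scipy.stats import norm
import math

MU, SIGMA, T = 0.05, 0.02, 3.0          # chip mean failure rate, std dev, mission hours
lam_partition = -math.log(0.85) / T     # failure rate corresponding to R(3) = 0.85

for chips in (1, 2, 3, 4):
    lam_sys = MU ** chips               # parallel-system failure rate = lambda^n
    R3 = math.exp(-lam_sys * T)

    # Partition at 85% on the reliability axis (Table X.2.2)
    alpha = norm.cdf((lam_partition - lam_sys) / SIGMA)
    n85 = 1.0 / (1.0 - alpha)
    f4_85 = lam_sys + SIGMA * math.sqrt(2 * math.log(n85))

    # Partition at 0.99 on the probability axis (Table X.2.3), so n = 100
    f4_99 = lam_sys + SIGMA * math.sqrt(2 * math.log(100))

    print(f"{chips} chip(s): R(3)={R3:.5f}  f4(85% rel.)={f4_85:.5f}  f4(99% prob.)={f4_99:.5f}")
```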
iv) Plot your results in terms of cost vs. damage, where the damage is given by the calculated value of the failure rate of the chip subsystem. Below is the graph for these results:
Figure X.2.1. Unconditional and Conditional Damage Rates versus Cost

ANALYSIS
Figure X.2.1 shows the performance of the four design options relative to two objectives: minimizing failure rate and minimizing cost. Several Pareto-optimal solutions are presented based on the expected value (f5) and on two versions of the conditional expected value (f4), obtained by partitioning either the failure-rate axis or the probability axis. Option 1 has a low cost, but its failure rate is high. Option 2 significantly reduces the failure rate at an additional unit cost of $100. The reduction in failure rate between Options 3 and 4 is marginal; nevertheless, the small change in reliability could be crucial, especially when dealing with safety-critical systems such as the computer used in operating a combat helicopter.
PROBLEM X.3: Overpopulation
Overpopulation is becoming a threat to many developing countries. This problem addresses the issue from the perspective of conception.
DESCRIPTION
A developing country is considering trying to decrease the number of new babies so that it can control the growth of the overall population. It must decide which of the following four birth control options to subsidize in order to control the number of conceptions:
• Contraceptive pills
• Contraceptive patches
• Condoms
• Diaphragms
METHODOLOGY
We try to find the best solution using extreme event analysis. The number of theoretical conceptions is assumed to follow a normal distribution with a mean µ of 0 and a standard deviation σ of 1.
Key Assumptions:
• The number of theoretical conceptions is statistically independent between days.
• We conduct the birth control measurement experiment for an initial period of 1 month (30 days).
• We take a sample of 100 couples in the given geographical area.
The following table summarizes the corresponding cost, mean, and standard deviation for each of the birth control strategies:

Table X.3.1. Birth Control Strategies with Associated Cost, Mean, and Standard Deviation
Birth Control Strategy     Cost ($)   Mean   Std. Deviation
1. Contraceptive Patch     2000       0      1
2. Contraceptive Pills     1500       3      4
3. Condoms                 100        10     9
4. Diaphragms              2400       18     12
SOLUTION The four required steps are as follows: A) Determine the most probable one-month maximum number of conceptions.
We calculate as follows. For Strategy 1 (µ = 0, σ = 1):

u_n = µ + σ·Φ⁻¹(1 − 1/n)

Model value for n = 30: u30 = 0 + Φ⁻¹(1 − 1/30) = 1.834. Follow these same steps for all four strategies.

B) Determine the probability that the maximum number of conceptions will exceed 20 in the given month, and determine the corresponding return period.

Strategy 1—Contraceptive patch:
P_Y(y) = [P_X(y)]^30 = [Φ((y − µ)/σ)]^30
Pr(max # of conceptions > 20) = 1 − P_Y(20) = 1 − [Φ((20 − 0)/1)]^30 ≈ 0
Return period for a maximum of 20 conceptions: effectively infinite; a level of 20 would essentially never be exceeded.

Strategy 2—Pills:
Pr(max # of conceptions > 20) = 1 − P_Y(20) = 1 − [Φ((20 − 3)/4)]^30
Return period for a maximum of 20 conceptions: 3117 months.

Strategy 3—Condoms:
Pr(max # of conceptions > 20) = 1 − P_Y(20) = 1 − [Φ((20 − 10)/9)]^30
Return period for a maximum of 20 conceptions: 1.0139 months.

Strategy 4—Diaphragms:
Pr(max # of conceptions > 20) = 1 − P_Y(20) = 1 − [Φ((20 − 18)/12)]^30
Return period for a maximum of 20 conceptions: every month.

C) Determine the expected value (f5) and the conditional expected value (f4) for the above return period for each birth control strategy.

Strategy 1—Patch:
f4(30) = u_n + 1/δ_n, with δ30 = [30/(σ·√(2π))]·exp(−½·[Φ⁻¹(1 − 1/30)]²) = 2.227
Therefore, f4(30) = 1.834 + 1/2.227 = 2.283; f5(30) = 0

Strategy 2—Pills:
δ30 = [30/(4·√(2π))]·exp(−½·[Φ⁻¹(1 − 1/30)]²) = 0.5567
Therefore, f4(30) = 10.336 + 1/0.5567 = 12.13; f5(30) = 3

Strategy 3—Condoms:
δ30 = [30/(9·√(2π))]·exp(−½·[Φ⁻¹(1 − 1/30)]²) = 0.2474
Therefore, f4(30) = 26.505 + 1/0.2474 = 30.55; f5(30) = 10

Strategy 4—Diaphragms:
δ30 = [30/(12·√(2π))]·exp(−½·[Φ⁻¹(1 − 1/30)]²) = 0.1856
Therefore, f4(30) = 40.007 + 1/0.1856 = 45.40; f5(30) = 18

D) Plot the cost of each strategy versus both the expected value (f5) and the conditional expected value (f4) of the number of conceptions.
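Before plotting, the quantities above can be verified with a short script. The sketch below is illustrative (not part of the original solution) and uses SciPy for the normal distribution functions.

```python
from scipy.stats import norm

N = 30  # days in the experiment month
strategies = {  # name: (cost $, mean, std dev) from Table X.3.1
    "Patch":     (2000, 0, 1),
    "Pills":     (1500, 3, 4),
    "Condoms":   (100, 10, 9),
    "Diaphragm": (2400, 18, 12),
}

for name, (cost, mu, sigma) in strategies.items():
    u_n = mu + sigma * norm.ppf(1 - 1 / N)               # most probable monthly maximum
    p_exceed_20 = 1 - norm.cdf((20 - mu) / sigma) ** N   # P(monthly max > 20)
    delta_n = N * norm.pdf(u_n, loc=mu, scale=sigma)     # delta_n = n * f_X(u_n)
    f4 = u_n + 1 / delta_n                               # conditional expected extreme
    f5 = mu                                              # unconditional expected value
    print(f"{name}: u30={u_n:.3f}  P(>20)={p_exceed_20:.3g}  f4={f4:.2f}  f5={f5}  cost=${cost}")
```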
Figure X.3.1. Cost versus conditional expected value (f4) and expected value (f5) for 30 days

ANALYSIS
From the above chart we may conclude that Strategy 4 (diaphragms) is an inferior solution. It is an older technology that is not only expensive to implement but also more invasive and less effective. However, it does not manipulate hormones the way Strategies 1 (the patch) and 2 (the pill) do. Strategy 1 has the greatest likelihood of working with the lowest risk, but it comes at a price. To control overpopulation, it is likely worth the extra $500/month/100 people to choose Strategy 1 over Strategy 2, that is, the birth control patch over the pill. The reason for the large difference in the extreme value is the complication and consistency required in using the pill (Strategy 2) that is not required in applying the patch (Strategy 1) for the same effectiveness.
Strategy 3, issuing condoms for population control, appears to be a less effective method. However because of its nature, the number of theoretical conceptions can be decreased simply by purchasing a greater number. A future study may compare the greater distribution of condoms (Strategy 3) using the patch (Strategy 1) as a benchmark. By understanding the most effective tools to aid couples in family planning, the government can have a means of knowing where to subsidize these efforts. With government subsidies the population growth can become better controlled, which will lead to greater efficiency in existing education, transportation, and related infrastructures. Ultimately, population control will lead to a developing country’s economic stability and growth and enhance the population’s standard of living.
PROBLEM X.4: Effective Snow Removal in a City
The goal of this problem is to use extreme event analysis, derived from historical snow precipitation data, to analyze different policies for efficient removal of roadway snow accumulation. The Department of Transportation in a particular city is concerned with keeping the city's roadways free of snow and ice during the winter months, especially in January. It has 4 different policy options on snow removal and wishes to improve efficiency. Analysis of the city's historical snowfall data revealed that in January the amount of daily snowfall (x in cm) has the following distribution:

Pr(X ≤ x) = 0.92,                                                                 if x = 0
Pr(X ≤ x) = 0.92 + 0.08·∫₀ˣ [1/(√(2π)·u·s)]·exp(−(ln u − k)²/(2s²)) du,           if x > 0
where s = 1, k = 2.7 (obtained through simulation). The analysis showed that each of the four options changes the s and k values of the above distribution, as follows:
• Policy 1 ~ (s = 1.2, k = 2.0)
• Policy 2 ~ (s = 1.2, k = 1.5)
• Policy 3 ~ (s = 1, k = 2.0)
• Policy 4 ~ (s = 1, k = 1.5)
Using the above specifications:
(i) Calculate u30, the most probable 1-month maximum snowfall (cm), for each of the above policy options.
(ii) Calculate the probability that a maximum snowfall of 20 cm would be exceeded. What is the corresponding return period for exceeding 20 cm under each policy option?
(iii) Using the approximation formula f4(·) = u_t + 1/δ_t, where δ_t = t·f_X(u_t), perform multiobjective trade-off analysis of f5(·) vs. monthly cost and f4(·) vs. monthly cost. Assume that the monthly costs for Policies 1, 2, 3, and 4 are $2M, $4M, $5M, and $8M, respectively.
PROBLEM X.5: Analyzing Investment Opportunities
An investor wants to decide between four investment opportunities. A market theory asserts that investment returns, denoted by X, are normally distributed. For this problem, we interpret investment returns X as "opportunity losses." Therefore, the upper-tail region in a distribution of such investment returns corresponds to events that have low likelihoods of occurrence but high opportunity losses. An investor who has faith in this market theory wants to conduct extreme-event analysis for the following four long-term bond investment alternatives. For a given investment i, the notation Xi ~ Ni(µi, σi) refers to a normal distribution with parameters µi and σi, the mean and standard deviation, respectively, of the underlying random variable Xi. These parameters are estimated from historical annual data.
(i) Investment 1: X1 ~ N1(0.047, 0.010); Unit Cost = $10
(ii) Investment 2: X2 ~ N2(0.048, 0.015); Unit Cost = $8
(iii) Investment 3: X3 ~ N3(0.049, 0.020); Unit Cost = $5
(iv) Investment 4: X4 ~ N4(0.050, 0.025); Unit Cost = $4
Procedure:
(a) Calculate the f4 for each investment alternative for n = 20 years. Use the exact formulas for f4 of a normal distribution. (b) For each of the investment alternatives, plot the f4 and f5 values along the x-axis and the corresponding costs along the y-axis. Analyze the resulting graph. (c) Rework (a) using the approximation formulas for un and δn and f4 as follows, and calculate the % error relative to the exact values obtained from (a):
f4 = u_n + 1/δ_n                                                    (X.5.1)

u_n = µ + σ·[√(2·ln n) − ln(4π·ln n)/(2·√(2·ln n))]                 (X.5.2)

δ_n = √(2·ln n)/σ                                                   (X.5.3)
PROBLEM X.6: Modeling a Stream's Oxygen Concentration
The objective is to determine the concentration of dissolved oxygen (DO) in a stream. The daily DO concentration for the stream is assumed to follow a normal distribution with a mean of 3 mg/L and a standard deviation of 0.5 mg/L. We assume that DO concentrations on different days are statistically independent. Using extreme event analysis:
a) Determine the one-year most probable maximum DO level.
b) Determine the probability that the maximum DO level will exceed 5 mg/L in a year, and determine the corresponding return period.
XI. Fault Tree/ Reliability Analysis
PROBLEM XI.1: Diagnosing Computer Malfunction A student wants to avoid losing an important term paper due to a computer shutdown. DESCRIPTION Before the student starts the term paper, the probability that the student’s computer would shut down needs to be determined, as well as the expected cost to prevent loss of data. METHODOLOGY By applying Fault Tree Analysis, the student can derive a minimal cut set and recalculate the computer’s reliability by imposing different failure probabilities on each component. SOLUTION Let:
Given:
A = Operating System     Z1 = Software
B = Keyboard             Z2 = Hardware
C = Mouse                Z3 = I/O
D = Motherboard          Z4 = Storage
E = CPU                  Z5 = Input
F = Hard Disc            Z6 = Outputs
G = Floppy Drive         Z7 = Processing
H = CD Rom               Z8 = Video
I = Video Card
J = Monitor

Note: Z is used for the subgroups to avoid confusion with E, which represents the CPU.

Let:
T = Z1 + Z2
Z1 = A
Z2 = Z3 + Z4
Z3 = Z5 + Z6
Z4 = F + (G · H)
Z5 = B · C
Z6 = Z7 + Z8
Z7 = D + E
Z8 = I + J
Therefore:
Z6 = Z7 + Z8 = D + E + I + J
Z5 = B · C
Z3 = Z5 + Z6 = (B · C) + (D + E + I + J)
Z4 = F + (G · H)
Z2 = Z3 + Z4 = [(B · C) + D + E + I + J] + [F + (G · H)]
Z1 = A
T = Z1 + Z2

Answer: T = A + (B · C) + D + E + I + J + F + (G · H)

Figure XI.1.1 shows 8 minimal cut sets:
• 6 "one-component" cut sets: {A}, {D}, {E}, {F}, {I}, {J}
• 2 "two-component" cut sets: {B, C}, {G, H}
Figure XI.1.1. Minimal cut sets
Table XI.1.1. Component Reliability and Cost Data
Component   Reliability   Unreliability   Cost
A           0.90          0.10            $300
B           0.95          0.05            $30
C           0.95          0.05            $30
D           0.98          0.02            $200
E           0.97          0.03            $500
F           0.96          0.04            $200
G           0.96          0.04            $40
H           0.97          0.03            $150
I           0.96          0.04            $150
J           0.94          0.06            $200
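The series–parallel reductions carried out in the calculations below are mechanical and can be scripted. The following Python sketch is illustrative (not part of the original solution); it reproduces the base-case system reliability from the component data in Table XI.1.1.

```python
# Component reliabilities from Table XI.1.1
R = {"A": 0.90, "B": 0.95, "C": 0.95, "D": 0.98, "E": 0.97,
     "F": 0.96, "G": 0.96, "H": 0.97, "I": 0.96, "J": 0.94}

def series(*rs):
    """All blocks must work: reliabilities multiply."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """The subsystem fails only if every redundant block fails."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

z5 = parallel(R["B"], R["C"])                   # inputs: keyboard or mouse suffices
z7 = series(R["D"], R["E"])                     # processing: motherboard and CPU
z8 = series(R["I"], R["J"])                     # video: video card and monitor
z6 = series(z7, z8)                             # outputs
z3 = series(z5, z6)                             # I/O
z4 = series(R["F"], parallel(R["G"], R["H"]))   # storage: hard disc and (floppy or CD-ROM)
z2 = series(z3, z4)                             # hardware
r_system = series(R["A"], z2)                   # software and hardware
print(round(r_system, 4), round(1 - r_system, 4))   # ~0.7384 reliability, ~0.2616 unreliability
```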
For Z7 (Processing): RDE = (RD)(RE) RDE = (0.98)(0.97) RDE = 0.9506 For Z8 (Video): RIJ = (RI)(RJ) RIJ = (0.96)(0.94) RIJ = 0.9024 For Z6 (Outputs): RDEIJ = (RDE)(RIJ) RDEIJ = (0.9506)(0.9024) RDEIJ = 0.85782144 For Z5 (Inputs): RBC = 1 – (QB)(QC) RBC = 1 – (0.05)(0.05) RBC = 1 – (0.0025) RBC = 0.9975 For Z3 (I/O): RBCDEIJ = (RBC)(RDEIJ) RBCDEIJ = (0.9975)(0.85782144) RBCDEIJ = 0.855676886 For Z4 (Storage): RGH = 1 – (QG)(QH) RGH = 1 – (0.04)(0.03) RGH = 1 – 0.0012 RGH = 0.9988 RFGH = (RF)(RGH) RFGH = (0.96)(0.9988) RFGH = 0.958848 For Z2 (Hardware): RBCDEIJFGH = (RBCDEIJ)(RFGH) RBCDEIJFGH = (0.855676886)(0.958848) RBCDEIJFGH = 0.820464071 For T (System): RSystem = (RA)(RBCDEIJFGH) RSystem = (0.9)(0.820464071) RSystem = 0.738417664 ≈ 0.7384
QSystem = 1 - RSystem QSystem = 1 – 0.738417664 QSystem = 0.261582336 ≈ 0.2616 Base Total System Cost (C0) = CA + CB + CC + CD + CE + CF + CG + CH + CI + CJ C0 = 300 + 30 + 30 + 200 + 500 + 200 + 40 + 150 + 150 + 200 = $1800 NOTE: The added components are in parallel only with the original component of the same type and no other components, and are calculated accordingly. For example, the added monitor (J2) is in parallel only with the original monitor (J1) (not with the video cards); the added hard disc (F2) is in parallel only with the original hard disc (F1) (not with the floppy drive and CD ROM); the added video card (I2) is in parallel only with the original video card (I1) (not with the monitors); and the added CPU (E2) is in parallel only with the original CPU (E1) (not with the motherboard). Constructing parallel structure For Z7 (Processing): RE1E2 = 1- (QE1)(QE2) RE1E2 = 1 – (0.03)(0.03) RE1E2 = 1 – 0.0009 RE1E2 = 0.9991 RDE1E2 = (RD)(RE1E2) RDE1E2 = (0.98)(0.9991) RDE1E2 = 0.979118 For Z8 (Video): RI1I2 = 1 – (QI1)(QI2) RI1I2 = 1 – (0.04)(0.04) RI1I2 = 1 – 0.0016 RI1I2 = 0.9984 RJ1J2 = 1 – (QJ1)(QJ2) RJ1J2 = 1 – (0.06)(0.06) RJ1J2 = 1 – 0.0036 RJ1J2 = 0.9964 RI1I2J1J2 = (RI1I2)(RJ1J2) RI1I2J1J2 = (0.9984)(0.9964) RI1I2J1J2 = 0.99480576 For Z6 (Outputs): RDE1E2I1I2J1J2 = (RDE1E2)(RI1I2J1J2) RDE1E2I1I2J1J2 = (0.979118)(0.99480576) RDE1E2I1I2J1J2 = 0.974032226
For Z5 (Inputs): From previous calculations: RBC = 0.9975 For Z3 (I/O): RBCDE1E2I1I2J1J2 = (RBC)(RDE1E2I1I2J1J2) RBCDE1E2I1I2J1J2 = (0.9975)(0.974032226) RBCDE1E2I1I2J1J2 = 0.971597145 For Z4 (Storage): RF1F2 = 1 – (QF1)(QF2) RF1F2 = 1 – (0.04)(0.04) RF1F2 = 1 – (0.0016) RF1F2 = 0.9984 From previous calculations: RGH = 0.9988 RF1F2GH = (RF1F1)(RGH) RF1F2GH = (0.9984)(0.9988) RF1F2GH = 0.99720192 For Z2 (Hardware): RBCDE1E2I1I2J1J2F1F2GH = (RBCDE1E2I1I2J1J2)(RF1F2GH) RBCDE1E2I1I2J1J2F1F2GH = (0.971597145)(0.99720192) RBCDE1E2I1I2J1J2F1F2GH = 0.968878539 For T (System): RSystem = (RA)(RBCDE1E2I1I2J1J2F1F2GH) RSystem = (0.9)(0.968878539) RSystem = 0.871990685 ≈ 0.8720 QSystem = 1 - RSystem QSystem = 1 – 0.871990685 QSystem = 0.128009314 ≈ 0.1280 Total System Cost for Option 1 (C1): C1 = C0 + CE + CF + CI + CJ C1 = 1800 + 500 + 200 + 150 + 200 C1 = $2850 For Z7 (Processing): From previous calculations: RDE = 0.9506
Desktop Malfunction For Z8 (Video): From previous calculations: RJ1J2 = 0.9964 RIJ1J2 = (RI)(RJ1J2) RIJ1J2 = (0.96)(0.9964) RIJ1J2 = 0.956544 For Z6 (Outputs): RDEIJ1J2 = (RDE)(RIJ1J2) RDEIJ1J2 = (0.9506)(0.956544) RDEIJ1J2 = 0.909290726 For Z5 (Inputs): From previous calculations: RBC = 0.9975 For Z3 (I/O): RBCDEIJ1J2 = (RBC)(RDEIJ1J2) RBCDEIJ1J2 = (0.9975)(0.909290726) RBCDEIJ1J2 = 0.907017499 For Z4 (Storage): From previous calculations: RF1F2GH = 0.99720192 For Z2 (Hardware): RBCDEIJ1J2F1F2GH = (RBCDEIJ1J2)(RF1F2GH) RBCDEIJ1J2F1F2GH = (0.907017499)(0.99720192) RBCDEIJ1J2F1F2GH = 0.904479592 For Z1 (Software): RA1A2 = 1 – (QA1)(QA2) RA1A2 = 1 – (0.1)(0.1) RA1A2 = 1 – 0.01 RA1A2 = 0.99 For T (System): RSystem = (RA1A2)(RBCDEIJ1J2F1F2GH) RSystem = (0.99)(0.904479592) RSystem = 0.895434796 ≈ 0.8954 QSystem = 1 - RSystem QSystem = 1 – 0.895434796 QSystem = 0.104565203 ≈ 0.1046
Total System Cost for Option 2 (C2): C2 = C0 + CA + CF + CJ C2 = 1800 + 300 + 200 + 200 C2 = $2500 For Z7 (Processing): From previous calculations: RDE = 0.9506 For Z8 (Video): From previous calculations: RIJ = 0.9024 For Z6 (Outputs): From previous calculations: RDEIJ = 0.85782144 For Z5 (Inputs): RC1C1 = 1 – (QC1)(QC2) RC1C2 = 1 – (0.05)(0.05) RC1C2 = 1 – 0.0025 RC1C2 = 0.9975 QC1C2 = 1 – RC1C2 QC1C2 = 1 – 0.9975 QC1C2 = 0.0025 RBC1C2 = 1 - (QB)(QC1C2) RBC1C2 = 1 - (0.05)(0.0025) RBC1C2 = 1 – 0.000125 RBC1C2 = 0.999875 For Z3 (I/O): RBC1C2DEIJ = (RBC1C2)(RDEIJ) RBC1C2DEIJ = (0.99875)(0.85782144) RBC1C2DEIJ = 0.85674916 For Z4 (Storage): RG1G2 = 1 – (QG1)(QG2) RG1G2 = 1 – (0.04)(0.04) RG1G2 = 1 – 0.0016 RG1G2 = 0.9984 QG1G2 = 1 – RG1G2 QG1G2 = 1 – 0.9984 QG1G2 = 0.0016
RG1G2H = 1 - (QG1G2)(QH) = 1 - (0.0016)(0.03) = 0.999952
RFG1G2H = (RF)(RG1G2H) = (0.96)(0.999952) = 0.95995392

For Z2 (Hardware):
RBC1C2DEIJFG1G2H = (RBC1C2DEIJ)(RFG1G2H) = (0.85674916)(0.95995392) = 0.82243971

For T (System):
RSystem = (RA)(RBC1C2DEIJFG1G2H) = (0.9)(0.82243971) = 0.740195743 ≈ 0.7402
QSystem = 1 - RSystem = 0.259804256 ≈ 0.2598

Total System Cost for Option 3 (C3):
C3 = C0 + CC + CG = 1800 + 30 + 40 = $1870

ANALYSIS
Total system costs and unreliabilities are summarized in Table XI.1.2:

Table XI.1.2. Summary of Costs and Unreliabilities

Option   Cost     Unreliability   Delta Cost   Delta Unreliability
Base     $1800    0.2616
1        $2850    0.1280          +$1050       -0.1336
2        $2500    0.1046          +$700        -0.1570
3        $1870    0.2598          +$70         -0.0018
We chart the cost of the options vs. their unreliabilities, as called for in the problem:
Figure XI.1.2. Total system cost vs. unreliability

Based on Table XI.1.2 and Figure XI.1.2, Option 1 is not acceptable. Its cost is higher than the cost for Option 2, and its unreliability is also higher than for Option 2. This indicates that Option 1 is NOT an optimal solution. From the base option to Option 3 there is only about a 0.0018 decrease in unreliability for a cost of $70. For $700, the decrease in unreliability from the base option to Option 2 is 0.1570. The student will have to decide if the reduction in unreliability using Option 2 is significant and worth the larger cost. From this comparison, my recommendation would be Option 2, even though it is $630 more expensive than Option 3. Option 2 reduces the unreliability from greater than 25% to just over 10%, and the increased reliability would be worth the increased cost.
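The series and parallel bookkeeping used throughout this problem is easy to mechanize. The Python sketch below is illustrative only; it defines two small helpers and re-derives the Option 1 video-chain reliability from the component unreliabilities quoted above (QI = 0.04 for each video card and QJ = 0.06 for each monitor).

```python
# Minimal sketch of the series/parallel reliability bookkeeping used above.

def series(*reliabilities):
    """Reliability of components in series: all of them must work."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(*reliabilities):
    """Reliability of redundant (parallel) components: at least one must work."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)          # probability that every redundant unit fails
    return 1.0 - q

# Option 1 video chain: redundant video cards (I1, I2) and monitors (J1, J2).
R_I = parallel(0.96, 0.96)       # 0.9984
R_J = parallel(0.94, 0.94)       # 0.9964
R_video = series(R_I, R_J)       # 0.99480576, matching RI1I2J1J2 above

print(round(R_video, 8))
```

The same two helpers reproduce every intermediate value in this problem by composing them in the order dictated by the fault tree.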
PROBLEM XI.2: An Integrated Bridge System (IBS)
The purpose of this problem is to design an Integrated Bridge System (IBS) that supports automated and safe ship navigation with reduced manning. Both cost and reliability must be considered in proposed IBS designs.

DESCRIPTION
The Integrated Bridge System design facilitates automating time-consuming navigation tasks such as voyage planning/execution, steering control, and communicating throttle orders. The IBS hardware and software together provide capabilities including voyage planning, an integrated navigational picture, collision and mine avoidance, and ship maneuvering (steering and propulsion) control. The IBS also includes interfaces to port/starboard steering gear to send commands for rudder control and receive feedback.

The IBS consists of two ensembles: the Voyage Management System (VMS) and the Steering and Thrust Control System (SCS). The VMS provides an Electronic Charting Display and Information Systems-Naval (ECDIS-N)-certifiable system. The ECDIS-N features include: electronic chart display, route planning/monitoring, backup arrangements, safety checking, manual heading correction, alarms and indications, sensor integration, CO approval of navigation plans, radar image overlay, and voyage recording and visual replay. In addition, the VMS provides bell and deck log recording/displays, man-overboard monitoring, collision avoidance, precision anchoring and anchor-drag monitoring, mine-avoidance support, Automatic Radar Plotting Aid (ARPA) radar control and display, contact information display, and other data displays. The VMS also provides automated control of the ship's heading and speed by generating and sending the desired heading and speed orders to the SCS in order to keep the ship on the pre-selected/approved route plan, depending on the task order given.

The SCS software and hardware together provide functionality to control the speed and heading of the ship. The SCS accepts steering and thrust commands either from the operator or from the VMS, depending on the steering mode selected. For operator control of the ship's heading and speed, the SCS provides a human-computer interface (HCI) for the operator to manually enter the steering and thrust commands. Additionally, the bridge and aft steering-control consoles are equipped with a helm wheel for manual entry of rudder orders.

METHODOLOGY
Designing the Integrated Bridge System (IBS) can be solved through Fault Tree Analysis, as follows:
Components:
1) Voyage Management System (VMS)
VMS Components:
A: Software (Cost: $100,000; R: 0.98)
B: Hardware (Cost: $1750; R: 0.99)
C: Operating System (Cost: $200; R: 0.95)
2) Steering Control System (SCS)
SCS Components:
D: Software (Cost: $40,000; R: 0.98)
E: Hardware (Cost: $50,000; R: 0.97)
F: Operating System (Cost: $250; R: 0.95)
Figure XI.2.1 shows the basic IBS fault tree: the top event IBS (T) is fed by two intermediate events, VMS (Z1) with basic events A, B, and C, and SCS (Z2) with basic events D, E, and F.
Figure XI.2.1. Fault Tree for Integrated Bridge System (IBS)

Given:
A = VMS Software
B = VMS Hardware
C = VMS Operating System
D = SCS Software
E = SCS Hardware
F = SCS Operating System
SOLUTION
The minimal cut set is determined as follows:
Let:
Z1 = A + B + C
Z2 = D + E + F
T = Z1 + Z2
Solution : T = A + B + C + D + E + F
The system has six minimal "one component" cut sets, one for each of the basic events A, B, C, D, E, and F, as depicted in Figure XI.2.2:
Figure XI.2.2. Minimal Cut Sets for IBS

Next, we calculate the system reliability (and unreliability) given the cost and performance data of the components as shown in Table XI.2.1.
Table XI.2.1. Given Values

Component   Reliability   Unreliability   Cost
A           0.98          0.02            $100,000
B           0.99          0.01            $1,750
C           0.95          0.05            $200
D           0.98          0.02            $40,000
E           0.97          0.03            $50,000
F           0.95          0.05            $250
For Z1 (VMS):
RABC = (RA)(RB)(RC) = (0.98)(0.99)(0.95) = 0.92169

For Z2 (SCS):
RDEF = (RD)(RE)(RF) = (0.98)(0.97)(0.95) = 0.90307

For T (System):
RSystem = (RABC)(RDEF) = (0.92169)(0.90307) = 0.832350588 ≈ 0.8324
QSystem = 1 - RSystem = 0.167649412 ≈ 0.1676

Baseline Total System Cost (C0) = CA + CB + CC + CD + CE + CF
C0 = 100,000 + 1750 + 200 + 40,000 + 50,000 + 250 = $192,200
Suppose that 3 design options have been identified to improve the overall system reliability. The objective of the subsequent analysis is to determine the efficacy of each option relative to additional cost requirements and improvements in reliability.
Option 1: Increase number of VMS modules to 2.
Option 2: Increase number of SCS modules to 2.
Option 3: Increase number of both modules to 2.

Option 1 duplicates the VMS module (basic events A1, B1, C1 and A2, B2, C2) while keeping a single SCS module (D, E, F), as shown in Figure XI.2.3.
Figure XI.2.3. Fault Tree for Option 1
For Z2 (SCS):
From original problem: RDEF = 0.90307

For Z1 (VMS):
From original problem: RA1B1C1 = 0.92169
QA1B1C1 = 1 - RA1B1C1 = 0.07831
From original problem: RA2B2C2 = 0.92169
QA2B2C2 = 1 - RA2B2C2 = 0.07831
RA1B1C1A2B2C2 = 1 - (QA1B1C1)(QA2B2C2) = 1 - (0.07831)(0.07831) = 0.993867543
For T (System):
RSystem = (RA1B1C1A2B2C2)(RDEF) = (0.993867543)(0.90307) = 0.897531962 ≈ 0.8975
QSystem = 1 - RSystem = 0.102468037 ≈ 0.1025

Option 1 Total System Cost (C1) = C0 + CA + CB + CC
C1 = 192,200 + 100,000 + 1750 + 200 = $294,150
Option 2 keeps a single VMS module (A, B, C) and duplicates the SCS module (basic events D1, E1, F1 and D2, E2, F2), as shown in Figure XI.2.4.
Figure XI.2.4. Fault Tree for Option 2
For Z2 (SCS):
From original problem: RD1E1F1 = 0.90307
QD1E1F1 = 1 - RD1E1F1 = 0.09693
From original problem: RD2E2F2 = 0.90307
QD2E2F2 = 1 - RD2E2F2 = 0.09693
RD1E1F1D2E2F2 = 1 - (QD1E1F1)(QD2E2F2) = 1 - (0.09693)(0.09693) = 0.990604575
For Z1 (VMS):
From original problem: RABC = 0.92169

For T (System):
RSystem = (RABC)(RD1E1F1D2E2F2) = (0.92169)(0.990604575) = 0.91303033 ≈ 0.9130
QSystem = 1 - RSystem = 0.086969669 ≈ 0.0870

Option 2 Total System Cost (C2) = C0 + CD + CE + CF
C2 = 192,200 + 40,000 + 50,000 + 250 = $282,450
Option 3 duplicates both modules: the VMS pair (A1, B1, C1 and A2, B2, C2) and the SCS pair (D1, E1, F1 and D2, E2, F2), as shown in Figure XI.2.5.
Figure XI.2.5. Fault Tree for Option 3

For Z2 (SCS):
From Option 2: RD1E1F1D2E2F2 = 0.990604575

For Z1 (VMS):
From Option 1: RA1B1C1A2B2C2 = 0.993867543

For T (System):
RSystem = (RA1B1C1A2B2C2)(RD1E1F1D2E2F2) = (0.993867543)(0.990604575) = 0.984529735 ≈ 0.9845
QSystem = 1 - RSystem = 0.015470264 ≈ 0.0155

Option 3 Total System Cost (C3) = C0 + CA + CB + CC + CD + CE + CF
C3 = 192,200 + 100,000 + 1750 + 200 + 40,000 + 50,000 + 250 = $384,400
Multiobjective tradeoff: Table XI.2.2 shows the total system costs and reliabilities.
Table XI.2.2. Summary of Costs and Reliabilities

Option     Cost       Reliability   Delta Cost   Delta Reliability
Baseline   $192,200   0.8324
1          $294,150   0.8975        +$101,950    +0.0651
2          $282,450   0.9130        +$90,250     +0.0806
3          $384,400   0.9845        +$192,200    +0.1521
Figure XI.2.6 charts the cost vs. the reliability of the options.
Figure XI.2.6. Pareto-Optimal Frontier

NOTE: The reliabilities are used rather than the unreliabilities.

ANALYSIS
Based on Table XI.2.2, Option 1 is not acceptable. Its cost is higher than the cost for Option 2, but its reliability is lower. This indicates that Option 1 is not an optimal solution. Option 2 has an increase of 0.0806 in reliability over the baseline (to 0.9130) at a total cost increase of $90,250. Option 3 has an increase of 0.1521 in reliability over the baseline (to 0.9845) at a total cost increase of $192,200.
The decisionmakers will have to decide if the 0.0715 difference in reliability between Options 2 and 3 is worth the extra $101,950 that Option 3 will cost over Option 2. However, from this comparison our recommendation is Option 3, which increases the reliability from 0.8324 to 0.9845. Although it is entirely obvious that Option 2 is much less expensive, the significantly higher level of reliability would be worth the extra cost in terms of safety.
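For readers who wish to replicate Table XI.2.2, the short Python sketch below is one way to do so. It is illustrative only: it simply re-applies the series and redundancy formulas already used above, with the component data from Table XI.2.1.

```python
# Sketch reproducing the IBS baseline and Options 1-3 (data from Table XI.2.1).
from math import prod

R_VMS_parts = [0.98, 0.99, 0.95]   # A, B, C
R_SCS_parts = [0.98, 0.97, 0.95]   # D, E, F
C_VMS = 100_000 + 1_750 + 200
C_SCS = 40_000 + 50_000 + 250

def module(reliabilities, copies=1):
    """Reliability of a module of series parts, with optional module-level redundancy."""
    r_single = prod(reliabilities)
    return 1.0 - (1.0 - r_single) ** copies

options = {
    "Baseline": (1, 1),   # (copies of VMS, copies of SCS)
    "Option 1": (2, 1),
    "Option 2": (1, 2),
    "Option 3": (2, 2),
}
for name, (n_vms, n_scs) in options.items():
    r_sys = module(R_VMS_parts, n_vms) * module(R_SCS_parts, n_scs)
    cost = n_vms * C_VMS + n_scs * C_SCS
    print(f"{name}: cost=${cost:,}, reliability={r_sys:.4f}")
```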
PROBLEM XI.3: Preventing Product Failure The problem being addressed is the introduction of a faulty product to market by a large manufacturing company’s Product Management Group (PMG).
DESCRIPTION This problem is solved in two steps: 1) reorganizing the PMG’s “chain of command,” and 2) using Fault Tree Analysis to reduce the risk of product failure while minimizing cost.
METHODOLOGY
Step 1: Reorganizing the "chain of command."
Instead of the traditional corporate divisions along product lines only, the company organized itself into a two-dimensional matrix with profit centers (i.e., business types) as the row headings and cost centers (departments) as the column headings. Figure XI.3.1 illustrates this two-dimensional concept of profit and cost centers: the Board Chairman and President sit above the cost centers (Marketing, Manufacturing, Technical Service & Development, Research, Economic Evaluation, and other services) and the profit centers (Business 1 through Business 4, a future business, and the Business Board).
Figure XI.3.1. Two-Dimensional View of Company Matrix

As seen above, the resulting rows form business boards according to product category (e.g., electronics), each with one business manager and a representative from Marketing, Research, Manufacturing, Technical Service & Development, and Economics/Finance. Within this structure, the company formed Product Management Groups (PMGs), similar in form to the business boards, except that
each PMG focuses on planning for specific product groups and consists of representatives from lower rungs of department ladders.
Step 2: Fault Tree Analysis
Using Fault Tree Analysis, the Product Management Group (PMG) seeks to minimize the probability of introducing a faulty product by selecting one of five policies. There are two objectives:
• to minimize the probability of introducing a faulty product, and
• to minimize the cost of the policy.
Figure XI.3.2. Fault Tree for the Faulty Product
The top layer of the fault tree in Figure XI.3.2 is the undesired event, the introduction of a faulty product. A product can be faulty because of any of the events listed in Table XI.3.1, which are depicted in the second, third, and bottom tiers of the tree. The bottom tier, which grows out from the third, shows which specific components of the company’s product development program failed.
Table XI.3.1. Description of the Basic Events

Basic Event   Description                                                        Reliability
N21           Test time                                                          .99
N22           Test design                                                        .92
N12           Insufficient safety testing during product development             .98
N31           Budget for financial research                                      .93
N32           Insufficient financial research during the product development     .99
N41           Inadequate marketing research on the product                       .98
N51           Incorrect marketing survey                                         .99
N61           Ability of R&D personnel to translate technology into a product    .99
N62           Time constraints                                                   .98
N71           Inadequate research on the legal environment                       .98
N81           Insufficient number of legal staff                                 .99
N82           Time delay in the legal procedure                                  .99
N91           Incorrectly assessed manufacturing feasibility                     .99
N01           Insufficient number of manufacturing personnel                     .94
N02           Not enough manufacturing budget                                    .95
SOLUTION
There are 5 policy options:
Policy A – Do nothing.
Policy B – Improve test design. (N22)
Policy C – Increase financial research budget. (N31)
Policy D – Increase number of manufacturing personnel. (N01)
Policy E – Increase manufacturing budget. (N02)

Each of these policies lessens the probability of failure of the component it involves, and has a reliability and cost associated with it as seen in Tables XI.3.1 and XI.3.2. When these policy changes are folded back through the tree, the result is a new probability of failure for the first tier (i.e., probability of introducing a faulty product). Table XI.3.2 lists the policies with their associated probabilities of system failure (introducing the faulty product) after the option is implemented, and the cost of each option.
Table XI.3.2. Summary of Policy Costs and Probabilities of System Failure

Policy   Changes in Reliability   Cost
A        Nothing                  $0
B        R(N22) = 0.9667          $10,000
C        R(N31) = 0.9765          $20,000
D        R(N01) = 0.9870          $30,000
E        R(N02) = 0.9975          $40,000
Using the probabilities of failure assigned to each component (i.e., the contribution of each component to the failure of the top event) under Policy A (Do Nothing), we propagated the probability of failure up the tree according to the rules in the book. For instance, calculating the probability through an "or" gate is done by: P(A or B) = P(A) + P(B) - P(AB). We assumed all our component failures to be independent, so that P(AB) = P(A)*P(B). All of the gates were either "or" gates, which use the rule just mentioned, or attachments, where the probability from the previous node is simply folded up the tree. Using this fault tree method, we found the probability of introducing a faulty product (the undesired event on the first tier) to be .0306. Since all the subsystems (or basic events) are connected in series, the system fails when at least one of its components fails. Thus, the reliability of the entire system is just a multiplication of the reliabilities of all the subsystems. The overall system reliability for each policy option is summarized in Table XI.3.3.
Table XI.3.3. Summary of Probabilities of System Reliability and Failure

Policy   Probability of System Reliability   Probability of System Failure
A        0.6569                              0.3431
B        0.6897                              0.3103
C        0.6897                              0.3103
D        0.6897                              0.3103
E        0.6897                              0.3103
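The propagation rule described above is straightforward to verify in code. The sketch below is illustrative: it folds the basic-event failure probabilities of Table XI.3.1 through independent OR gates and confirms that this equals the series-system product, reproducing the Policy A values in Table XI.3.3.

```python
# Sketch of fault-tree propagation through independent OR gates
# (basic-event reliabilities from Table XI.3.1, Policy A: Do Nothing).

def or_gate(p_a, p_b):
    """Probability that an OR gate fires, assuming independent inputs."""
    return p_a + p_b - p_a * p_b

reliabilities = [0.99, 0.92, 0.98, 0.93, 0.99, 0.98, 0.99, 0.99,
                 0.98, 0.98, 0.99, 0.99, 0.99, 0.94, 0.95]

# Folding the failure probabilities up a tree of OR gates ...
p_fail = 0.0
for r in reliabilities:
    p_fail = or_gate(p_fail, 1.0 - r)

# ... is equivalent to multiplying the reliabilities of a series system.
r_system = 1.0
for r in reliabilities:
    r_system *= r

print(round(p_fail, 4), round(1.0 - r_system, 4))   # both give about 0.3431
```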
We graph the options in Figure XI.3.3, with the x-axis as the probability of system failure and the y-axis as the cost of the policy, and the Pareto-optimal frontier is formed by connecting all the points except for Policy E. (Policy E is dominated by the policies along the Pareto-optimal frontier line below it.) We would then use tradeoff analysis to determine acceptable tradeoffs between the cost and probability of system failure, to decide which policy to use.
Figure XI.3.3. Cost and Probability of System Failure for Each Policy Option
ANALYSIS
Since the reliabilities for Policies B, C, D, and E are the same, the best policy option would be B. By choosing Policy B, the decisionmaker could decrease the system's failure rate from 0.3431 to 0.3103 (a reduction of almost 10%) at an expense of only $10,000.
PROBLEM XI.4: Reliability of a Machine Gun The machine gun is a complex system that requires its many parts to be reliable. The top risk event that occurs is that the gun fails to shoot all the bullets in its magazine.
DESCRIPTION Figure XI.4.1 shows how a machine gun with a gas system works, before the trigger is pulled. Each component must work properly for a round of bullets to be fired and reloaded.
Figure XI.4.1. Machine Gun before Failure Event (Source: www.howstuffworks.com) For the gun to function perfectly, first the rear spring must begin to move forward. As this happens, the lower part of the bolt starts to move forward, pushing a bullet up into the breach. The bolt continues to move forward and locks into the barrel of the gun, pushing the bullet through. The expanding gases caused by this process get pushed up into the cylinder above the barrel. The piston then gets pushed backward, causing the bolt to unlock from the barrel. This allows a new bullet to enter the breach, starting the process over again. Figure XI.4.2 shows a machine gun in action, after the trigger has been pulled.
METHODOLOGY The reliability of the machine gun components can be evaluated using fault tree analysis. The three main steps are:
1. Construct a fault tree for a machine gun.
2. Find the minimal cut set(s).
3. Show the real general reliability of a machine gun.
Figure XI.4.2. Machine Gun in Action (Source: www.howstuffworks.com)

SOLUTION
The following notations are used to characterize the events in the fault tree:
Top Event:
T = Gun fails to shoot all bullets in magazine

Intermediate and basic initiating events:
E1 = Problem with Firing Mechanism
E2 = Problem with Bullet Cartridge
E3 = Problem with Reloading Mechanism
E4 = Problem with Belt Feed
E5 = Problem with Ejection System
A = Trigger gets jammed
B = Rear Spring fails to move forward
C = Bolt fails to attach to Barrel
D = Piston gets stuck
F = Primer fails
G = Feed Cam fails
H = Ammunition Belt Link breaks
I = Belt-feed Pawl fails
J = Ejector fails
K = Extractor fails

Figure XI.4.3 represents the fault tree for the machine gun.
Figure XI.4.3. Fault Tree for Machine Gun

It is clear that there are no parallel components; thus everything must work in order for the system to work. Simplifying the fault tree gives us:
T = (A + B + C + D) + F + {(G + H + I) + (J + K)}
  = A + B + C + D + F + G + H + I + J + K
Thus, the minimal cut sets are A, B, C, D, F, G, H, I, J, K.
ANALYSIS When subsystems are connected in series, the system fails when at least one of its components fails. Thus, the reliability of the system is equal to the multiplied reliability of each component. The notation t denotes time as component reliabilities are expected to diminish.
R(t) = RA(t)·RB(t)·RC(t)·RD(t)·RF(t)·RG(t)·RH(t)·RI(t)·RJ(t)·RK(t)
However, remember that the process of firing and reloading must occur successfully for each bullet in the magazine. Thus, the real reliability of the system is [R(t)]^n, where n represents the number of bullets in the magazine.
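As a small illustration of this point, the sketch below computes [R(t)]^n for a set of hypothetical component reliabilities; the problem statement does not supply numerical values, so the numbers here are assumptions chosen only to show the calculation.

```python
# Illustrative only: the component reliabilities below are hypothetical,
# since the problem statement does not provide numerical values.
component_reliabilities = [0.999, 0.998, 0.999, 0.997, 0.999,
                           0.998, 0.999, 0.999, 0.998, 0.999]  # A,B,C,D,F,G,H,I,J,K

r_cycle = 1.0
for r in component_reliabilities:
    r_cycle *= r                     # series system: one fire-and-reload cycle

n_bullets = 100                      # rounds in the magazine (hypothetical)
r_magazine = r_cycle ** n_bullets    # every cycle must succeed

print(round(r_cycle, 4), round(r_magazine, 4))
```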
PROBLEM XI.5: Reliability Analysis of an Airplane System Calculate the reliability of a simplified airplane system with 11 components.
DESCRIPTION
Consider a simple airplane system with the following components:
3 Processors: P1, P2, P3
2 Buses: C1, C2
Engine: E
- Motor: M
- Electrical: EL
- Cooling: EC
Navigator: N
- Type I: I
- Type II: T
Aviator: A
Compass: K
where M1, etc. is defined as the probability of component failure.
METHODOLOGY A Fault Tree Analysis is useful to calculate the airplane’s reliability. By simplifying the system through minimal cut sets, the reliability of the overall system can be calculated. The final step is to check the importance of each minimal cut set. Figure XI.5.1 shows the fault tree.
Figure XI.5.1. Fault tree diagram of airplane system
SOLUTION The probability of system failure, F, is:
F = N + E + C + P
  = (I•T) + (M + EL + EC) + (C1•C2) + (P1•P2•P3)
  = I(A + K) + M + EL + EC + C1•C2 + P1•P2•P3
F = I•A + I•K + M + EL + EC + C1•C2 + P1•P2•P3

The terms correspond to the minimal cut sets M1 through M7:
M1 = I•A, M2 = I•K, M3 = M, M4 = EL, M5 = EC, M6 = C1•C2, M7 = P1•P2•P3
Thus, the minimal cut set is:
I•A + I•K + M + EL + EC + C1•C2 + P1•P2•P3

Now, let us define the component failure probabilities for t = 100 hours:
Pi: 0.1
Ci: 0.25
M: 0.02
EL: 0.04
EC: 0.09
I: 0.2
A: 0.08
K: 0.07
Minimal cut set unreliabilities are:
Q1 = 0.016   {0.2 • 0.08}
Q2 = 0.014   {0.2 • 0.07}
Q3 = 0.02
Q4 = 0.04
Q5 = 0.09
Q6 = 0.0625  {0.25 • 0.25}
Q7 = 0.001   {0.1 • 0.1 • 0.1}

Qs = Σ(i=1 to 7) Qi = 0.2435   (Total System Unreliability)
The importance ratio of each component of the minimal cut set (normalized with respect to Qs) is as follows:
Ei = Qi / Qs
E1 = 6.6%
E2 = 5.7%
E3 = 8.2%
E4 = 16.4%
E5 = 37.0%
E6 = 25.7%
E7 = 0.4%

ANALYSIS
As seen from the importance ratios of the minimal cut sets, E5 is the most critical to the reliability of the airplane system because it is responsible for 37% of the unreliability in the system. Therefore, a system engineer in the airplane manufacturing plant should focus on maintaining or improving the reliability of E5, which is cooling.
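The cut-set arithmetic above can be checked with a few lines of code. The sketch below uses the failure probabilities given for t = 100 hours and reproduces the unreliabilities Q1 through Q7, the total Qs, and the importance ratios (summing cut-set probabilities, as done in the text, is a rare-event approximation).

```python
# Sketch reproducing the minimal-cut-set unreliabilities and importance ratios above.
p = {"P": 0.1, "C": 0.25, "M": 0.02, "EL": 0.04, "EC": 0.09,
     "I": 0.2, "A": 0.08, "K": 0.07}          # failure probabilities at t = 100 h

cut_sets = {
    "M1 (I*A)":      p["I"] * p["A"],
    "M2 (I*K)":      p["I"] * p["K"],
    "M3 (M)":        p["M"],
    "M4 (EL)":       p["EL"],
    "M5 (EC)":       p["EC"],
    "M6 (C1*C2)":    p["C"] * p["C"],
    "M7 (P1*P2*P3)": p["P"] ** 3,
}
q_system = sum(cut_sets.values())              # sum of cut-set probabilities: 0.2435
for name, q in cut_sets.items():
    print(f"{name}: Q = {q:.4f}, importance = {100 * q / q_system:.1f}%")
print("Total system unreliability:", round(q_system, 4))
```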
PROBLEM XI.6: Electric Power Demand
Preventing electric-power failure is an ongoing infrastructure concern. A power generating plant is at risk of not fulfilling its electric power demand. Figure XI.6.1 displays the fault tree for the current system where the top event (T) corresponds to unfulfilled demand. Construct a fault tree of the potential power demand failure and derive minimal cut sets in the analysis.
Top Event (T): Unfulfilled Demand
Figure XI.6.1. Fault Tree for Electric Power Demand

Intermediate and basic initiating events:
E1 = Failure of System 1
A = Failure of Generator A
B = Failure of Generator B
E2 = Failure of back-up System 2
C = Insufficient stock (inventory) of coal
D = Delayed delivery of coal
E = Delayed delivery of oil and gas
PROBLEM XI.7: Reliability Analysis of a Bicycle Brake System A 10-speed bicycle has two identical brake systems, one for each wheel. These systems are totally independent and both must fail for the entire brake system to fail. Basic risk scenarios for each are a brake pad failure, a brake cable failure, and a lever failure. Any of these will cause a component system to fail. It is useful to do a Fault Tree Analysis and simplify the system through minimal cut sets. With this procedure, we can easily figure out the reliability of the overall system and check the importance of each minimal cut set. The Fault Tree is shown in Figure XI.7.1:
Figure XI.7.1. Fault tree diagram of bicycle brake system

Based on the above fault tree, derive the minimal cut sets and analyze your results.
PROBLEM XI.8: Fault-Tree Analysis of a Train Wreck
This exercise examines a simple train wreck scenario using fault tree analysis. Five factors (or risk scenarios) have been identified to contribute to a disastrous train wreck event as follows:
A. Excessive Speed
B. Mechanical Failure on Train (e.g., brakes, engine)
C. Obstruction on Tracks
D. Improper Switching (e.g., transfer to wrong track)
E. Incorrect Signaling (e.g., stop instead of go)

The probabilities for the above five factors are given as follows:
P(A) = 0.25
P(B) = 0.00025
P(C) = 0.01
P(D) = 0.00010
P(E) = 0.00010

Assumptions:
1) An obstruction on the track is not enough to cause a wreck in itself (we assume for this problem that once an obstruction is seen, there is sufficient time to come to a stop before impact).
2) We have purposely omitted human error, since it plays into many parts of the tree and its impact is difficult to analyze.
3) Excessive speed is also not sufficient to cause a train wreck (many times trains will speed up in order to reduce late arrivals), but it can lead to an accident if combined with other factors.

The train wreck scenario is analyzed using the fault tree depicted in Figure XI.8.1. In addition to the five factors specified in the problem description, the following hierarchies of events are denoted as follows:
T   = Train Wreck
E1  = Hitting Something
E11 = Hitting Something due to excessive speed
E12 = Hitting Something due to malfunction of train
E2  = Malfunction
E21 = Train Malfunction
E22 = Outside Malfunction
Derive the minimal cut sets from the given fault tree, calculate the reliability of the train and the importance of each minimal cut set, and analyze your results.
PROBLEM XI.9: Reliability of a Combat Helicopter Computer Chip
The objective of this problem is to maximize the reliability of a circuit-chip subsystem for a helicopter mission of 3-hour duration. An integrated circuit chip for use in the computer for a combat helicopter has a mean failure rate of 0.05 (λ = 0.05) per hour, and a standard deviation of 0.02 (assuming a normal distribution). The cost of each chip is $100. Maximizing the reliability of this circuit chip can be done by placing the chips in parallel. The failure rate for such a parallel system is given by λ^N, where N is the total number of parallel components used. Assume that the standard deviation of the parallel system is the same as for each individual chip.

Suppose N identical components are tested for one time period. Let Nf(t) be the number of components that have failed, and NO(t) be the number of components that are operating. The failure rate (λ) of the components is given by
λ = Nf(t) / [NO(t) + Nf(t)] = Nf(t) / N
The reliability R(t) of a system is defined as the conditional probability that a system performs correctly throughout an interval of time [t0, t], given that the system was performing correctly at time t0. For components having an exponential time-to-failure distribution, the reliability is given by
R(t) = e^(-λt)
You may want to use the following equations:
f_N4(·) = µ + σ·√(2 ln(n))

(i) Consider 4 design options for the chip subsystem, each with 1, 2, 3, and 4 chips in parallel. Compute the mean reliability of the subsystem for the 3-hour mission for each of the four design options.
(ii) Calculate f4(·) for each design option for a partition point of 85% on the reliability axis (see hint).
(iii) Calculate f4(·) for each design option using a partition point of 0.99 on the probability axis.

Hint:
To compute the partition point on the probability axis from the partition point on the reliability axis, calculate the corresponding value of the failure rate using R(t) = e^(-λt).
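As a starting point for part (i), the sketch below assumes the parallel-system failure-rate model stated in the problem (λ^N) and the 3-hour mission, and computes the mean reliability for the four design options.

```python
# Sketch for part (i): mean reliability of N parallel chips over a 3-hour mission,
# using the parallel-system failure rate lambda**N given in the problem statement.
import math

lam = 0.05        # mean failure rate per hour for one chip
t_mission = 3.0   # hours

for n_chips in range(1, 5):
    lam_parallel = lam ** n_chips
    reliability = math.exp(-lam_parallel * t_mission)
    print(f"N = {n_chips}: failure rate = {lam_parallel:.6f}/h, "
          f"mean reliability = {reliability:.6f}")
```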
PROBLEM XI.10: SCADA System Reliability Supervisory Control and Data Acquisition (SCADA) systems have now been widely used in the manufacturing industry to control production. This problem deals with improving their reliability. Figure XI.10.1 shows a typical SCADA system that consists of a number of Programmable Logic Controllers (PLCs). This SCADA system uses four types of PLCs as shown in Figure XI.10.1: PLCs 1, 2, 3, and 4. Their reliabilities are: PLC 1 - 0.95; PLC 2 - 0.93; PLC 3 - 0.95; and PLC 4 - 0.94. Four plans are proposed to improve the overall reliability of the current SCADA system.
Figure XI.10.1. SCADA System for Manufacturing Control

Each plan suggests replacing one PLC with a more reliable new design. The reliability and estimated costs are as follows:
A: PLC 1 (reliability 0.99, cost $600,000)
B: PLC 2 (reliability 0.95, cost $100,000)
C: PLC 3 (reliability 0.98, cost $400,000)
D: PLC 4 (reliability 0.95, cost $200,000)
Build a fault tree and derive the minimal cut set for the above SCADA System. Compute the probabilities of failure for each option and analyze the results.
PROBLEM XI.11: Computer System Risk
The purpose of this research is to determine the overall probability of computer failure given the probability of failure of each of its essential components. The modern computer is made up of various components, some of which are more independent than others. However, they are all essential in order for the computer to be fully operational. Any parts that are not necessary, such as those installed to increase reliability or additional functionality, will not be depicted in the following fault tree analysis. For this fault tree analysis example, we will decompose the computer into three categories: 1) motherboard and motherboard-connected devices, 2) hard drives, and 3) power supply. These can be decomposed further into other devices and possible failure events. See Figure XI.11.1 below for more details.
Legend (with the failure probabilities noted in the figure):
PSU = power supply unit
MOBO = motherboard
HD = hard drive
VC = video card
A = power supply fails (P = 0.001)
B = motherboard fails (P = 0.0001)
C = RAM stick 1 fails (P = 0.0015)
D = RAM stick 2 fails (P = 0.0015)
E = video card fails (P = 0.0005)
F = hard drive 1 fails (P = 0.005)
G = hard drive 2 fails (P = 0.005)
Figure XI.11.1. Fault Tree for Analyzing Computer Risk

Derive the minimal cut sets in the above fault tree and compute the probability of a computer failure. Analyze your results.
PROBLEM XI.12: Calculating Reliability of an Electronic Subsystem This problem will demonstrate how to model the reliability of a simplified electronic product. An electronic product will be dissected part by part and aggregated by functional groups. The electronic subsystem consists of three components, A, B, and C. They are constructed and connected as shown in Figure XI.12.1:
Figure XI.12.1. Simplification of an Electronic Subsystem
Given the component reliabilities of RA(t) = 0.9, RB(t) = 0.8, RC(t) = 0.7, do the following:
1. Draw the fault tree.
2. Find the minimal cut sets.
3. Calculate the reliability of the entire system.
PROBLEM XI.13: Purchasing a New Machine A company is considering installing a new machine in its factory. Before making the purchase, the top executives want to know how long the machine can survive before it fails to run. The failure rate of a machine is defined as follows:
λ(t) = λ1,  0 ≤ t ≤ a
       λ2,  t ≥ a
For this problem:
(a) Derive the reliability function R(t).
(b) Derive the failure density function f(t).
(c) If a = 30 months, λ1 = (1200 months)^-1, and λ2 = (600 months)^-1, calculate the time such that the machine's reliability will have degraded to a value of 0.95.
(d) Calculate the mean time to failure (MTTF) given the parameters in (c).
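For checking the analytical answers numerically, the sketch below assumes the standard relation R(t) = exp(-Λ(t)), where Λ(t) is the cumulative hazard of the piecewise-constant failure rate above, and uses the parameters of part (c). It is a numerical cross-check, not the requested derivation.

```python
# Numerical sketch for Problem XI.13 (assumes R(t) = exp(-cumulative hazard)),
# useful as a check on the analytical derivations requested in parts (a)-(d).
import math

a, lam1, lam2 = 30.0, 1.0 / 1200.0, 1.0 / 600.0   # months, per part (c)

def reliability(t):
    """Survival probability for the piecewise-constant hazard rate."""
    if t <= a:
        return math.exp(-lam1 * t)
    return math.exp(-lam1 * a - lam2 * (t - a))

# Time at which reliability degrades to 0.95 (coarse numerical search).
t = 0.0
while reliability(t) > 0.95:
    t += 0.01

# Mean time to failure via numerical integration of R(t).
dt, mttf = 0.1, 0.0
for k in range(200_000):                 # integrate far into the tail
    mttf += reliability(k * dt) * dt

print(round(t, 2), round(mttf, 1))
```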
XII. Multiobjective Statistical Method
PROBLEM XII.1: Designing Water Treatment Plant Capacity and Efficiency Following the recommendations from past financial and technical feasibility studies, a private water company was hired to construct a city bulk water treatment plant. Although the project site has been chosen, the treatment plant’s capacity and removal efficiency must still be decided. DESCRIPTION For simplicity, the treatment plant capacity is limited to five natural numbers (in mega-gallons per day). Likewise, the treatment efficiency options are limited to three to represent the most common treatment levels for advanced secondary or tertiary treatment, and also because equipment and technology providers usually make available only the rounded efficiency of their systems. The removal efficiency is closely related to the choice of appropriate technologies. METHODOLOGY We apply the Multiobjective Statistical Method (MSM) to design a water treatment plant that will have the required capacity and most reliable removal capabilities for the city’s needs, while minimizing the cost. SOLUTION Decision Variables
x1 = Capacity of the treatment plant [mega-gallons per day, MGD] where x1 ∈ {2, 4, 6, 8, 10}
x 2 = Removal efficiency of the treatment plant, where x 2 ∈ {0.85, 0.90, 0.95} Due to uncertainties in population growth, migration, residential and industrial development, development of alternative water sources, efficiency of water distribution networks, and changes in per capita water consumption, the actual daily water demand when the plant is already operational should be considered as a random variable. Moreover, the quality of lake water is also probabilistic and is dependent on the implementation of environmental regulations, public awareness and participation, and other factors.
Random Variables
r1 = Actual daily water demand [MGD]
r2 = Actual average raw water (water abstracted from the lake) quality when the plant is already operational, measured in terms of Carbonaceous Biochemical Oxygen Demand (CBOD) concentration [mgO2/L]

The probability distributions of r1 and r2 are given in Tables XII.1.1 and XII.1.2, respectively [Santos-Borja, 2004]. Figures XII.1.1 and XII.1.2 show the cumulative distribution of r1 and r2, respectively.

Table XII.1.1. Density and Cumulative Density Functions of Actual Daily Water Demand

Daily Demand   Probability   Cum. Probability     Daily Demand   Probability   Cum. Probability
MGD            %             %                    MGD            %             %
0              0             0                    5.25           5             43
0.25           0             0                    5.5            7             50
0.5            1             1                    5.75           5             55
0.75           0             1                    6              6             61
1              1             2                    6.25           8             69
1.25           1             3                    6.5            8             77
1.5            0             3                    6.75           4             81
1.75           0             3                    7              5             86
2              2             5                    7.25           3             89
2.25           1             6                    7.5            1             90
2.5            1             7                    7.75           1             91
2.75           0             7                    8              3             94
3              2             9                    8.25           0             94
3.25           3             12                   8.5            1             95
3.5            1             13                   8.75           1             96
3.75           4             17                   9              1             97
4              3             20                   9.25           2             99
4.25           4             24                   9.5            1             100
4.5            6             30                   9.75           0             100
4.75           5             35                   10             0             100
5              3             38
Figure XII.1.1. Cumulative Density Function of Actual Daily Water Demand (r1)
Table XII.1.2. Density and Cumulative Density Functions of Actual Raw Water Quality

CBOD     Probability   Cum. Probability
mgO2/L   %             %
0        0             0
10       5             5
20       9             14
30       14            28
40       22            50
50       12            62
60       7             69
70       7             76
80       6             82
90       3             85
100      1             86
110      0             86
120      3             89
130      2             91
140      0             91
150      1             92
160      0             92
170      0             92
180      2             94
190      3             97
200      1             98
210      0             98
220      1             99
230      0             99
240      1             100
250      0             100
Figure XII.1.2. Cumulative Density Function of Actual CBOD (r2)
The state of the system can be represented by the quantity and quality of water treated daily. Note that the plant capacity used as a decision variable is different from (although proportional to) the rate of treated water production. The quantity and quality of treated water are functions of the quality of raw water and the treatment efficiency.
State Variables
s1 = Daily water production
s2 = Quality of treated water

s1(·) = r1 if r1 ≤ x1; x1 if r1 > x1
s2(·) = (1 - x2)·r2

For this problem, the objectives are to maximize the reliability of the treatment plant as a whole and to minimize the cost.
Objective Functions
f1(·) = Cost [million $]
f2(·) = Reliability

The first objective function, cost, refers to the sum of the capital expenditure and 3 years of operating expenditure. The capital expenditure is a function of the plant capacity, x1. The operating expenditure is a function of the actual water production, s1.
min f1(·) = 30.0 + 1.42 ln x1 + 0.78 s1   if x2 = 0.85
            33.3 + 1.05 ln x1 + 0.29 s1   if x2 = 0.90
            25 - 0.22 ln x1 + 2.49 s1     if x2 = 0.95

For illustration, consider the base case where the actual daily water demand is 5.5 MGD and the actual raw water quality is 40 mg O2/L CBOD (both values have 50% likelihood). For a treatment plant with capacity of 2 or 4 MGD, the objective function is:

min f1(·) = 30.0 + 1.42 ln x1 + 0.78 x1   if x2 = 0.85
            33.3 + 1.05 ln x1 + 0.29 x1   if x2 = 0.90
            25 - 0.22 ln x1 + 2.49 x1     if x2 = 0.95
Otherwise,
min f1(·) = 30.0 + 1.42 ln x1 + 0.78 r1   if x2 = 0.85
            33.3 + 1.05 ln x1 + 0.29 r1   if x2 = 0.90
            25 - 0.22 ln x1 + 2.49 r1     if x2 = 0.95

Table XII.1.3 and Figure XII.1.3 summarize and illustrate the cost for plant capacity with respect to three x2 values (0.85, 0.90 and 0.95) given r1 = 5.5 and r2 = 40.

Table XII.1.3. Cost versus Plant Capacity (for r1 = 5.5 and for r2 = 40)

Plant Capacity   Cost [Million $]
MGD              x2 = 0.85   x2 = 0.90   x2 = 0.95
2                32.54       34.61       29.83
4                35.09       35.92       34.66
6                36.83       36.78       38.30
8                37.24       37.08       38.24
10               37.56       37.31       38.19
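The cost figures in Table XII.1.3 can be reproduced directly from the objective function above; the short Python sketch below is illustrative.

```python
# Sketch reproducing Table XII.1.3: cost vs. plant capacity for the base case
# (r1 = 5.5 MGD, so the actual production is s1 = min(r1, x1)).
import math

def cost(x1, x2, r1=5.5):
    s1 = min(r1, x1)                      # actual daily production
    if x2 == 0.85:
        return 30.0 + 1.42 * math.log(x1) + 0.78 * s1
    if x2 == 0.90:
        return 33.3 + 1.05 * math.log(x1) + 0.29 * s1
    return 25.0 - 0.22 * math.log(x1) + 2.49 * s1   # x2 = 0.95

for x1 in (2, 4, 6, 8, 10):
    row = [f"{cost(x1, x2):.2f}" for x2 in (0.85, 0.90, 0.95)]
    print(x1, row)
```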
Figure XII.1.3. Cost versus Plant Capacity (for r1 = 5.5 and for r2 = 40)

ANALYSIS
The first objective function here is the cost. In general, during the decision process it is still uncertain if the design capacity of the treatment plant will correspond to the actual future water demand. If it does, the total cost will be higher for higher treatment efficiency. However, this may not be true if the treatment plant is under- or over-capacity. For the base case that is explored here, note that the design treatment capacity is 5.5 MGD. If the actual demand is also 5.5 MGD, it can be seen from Figure XII.1.3 that indeed, the least expensive is that of efficiency 0.85 followed by 0.90 and the most expensive is that of 0.95. On the other hand, if the future capacity is less than 4 MGD, the treatment plant with efficiency of 0.95 will turn out to be most economical. The reason for this is that a removal efficiency of 0.95 would be based on Reverse Osmosis (RO) desalination treatment. With
current advances in this technology, the capital expenditure for RO desalination is already competitive with conventional treatment technologies. However, the disadvantage of desalination is a high operating cost because it is energy-intensive and maintenance-intensive. At the higher capacities, the cost of RO desalination (corresponding to the efficiency of 0.95) will be far higher than the two other options. Lastly, since RO desalination is modular in nature (many parallel trains), some trains can be easily taken out of service if the actual capacity turns out to be just 4 MGD or 2 MGD. This will result in a lower operating cost. This advantage is reflected in Figure XII.1.3 above. The second objective function, reliability, refers to the combined probability that the treatment plant will be able to meet the volume demand and conform to the quality standards for drinking water.
max f2(·) = 1 - {ΣP(s1 < r1i) + P(s2i > 20) - P(s2i > 20)·[ΣP(s1 < r1i)]}
For illustration, let us again use our base case with design capacity of 5.5 MGD. From Table XII.1.1, this capacity corresponds to the median value. Thus, there is a 50% probability that the treatment plant will not be able to supply the total demand.
s1 = 5.5
ΣP(s1 < r1i) = 0.50
For the reliability in terms of meeting the quality standards, refer to Table XII.1.4 below. The values shaded in blue are above the maximum allowable contaminants of 20 mg O2/L.

Table XII.1.4. Treated Water Quality vs. Raw Water Quality and Treatment Removal Efficiency

Raw Water Quality    Probability   Cum. Probability   Treated Water Quality, s2
(CBOD), r2 mgO2/L    %             %                  x2=0.85   x2=0.90   x2=0.95
0                    0             0                  0         0         0
10                   5             5                  1.5       1         0.5
20                   9             14                 3         2         1
30                   14            28                 4.5       3         1.5
40                   22            50                 6         4         2
50                   12            62                 7.5       5         2.5
60                   7             69                 9         6         3
70                   7             76                 10.5      7         3.5
80                   6             82                 12        8         4
90                   3             85                 13.5      9         4.5
100                  1             86                 15        10        5
110                  0             86                 16.5      11        5.5
120                  3             89                 18        12        6
130                  2             91                 19.5      13        6.5
140                  0             91                 21        14        7
150                  1             92                 22.5      15        7.5
160                  0             92                 24        16        8
170                  0             92                 25.5      17        8.5
180                  2             94                 27        18        9
190                  3             97                 28.5      19        9.5
200                  1             98                 30        20        10
210                  0             98                 31.5      21        10.5
220                  1             99                 33        22        11
230                  0             99                 34.5      23        11.5
240                  1             100                36        24        12
250                  0             100                37.5      25        12.5
The probability of not meeting the required water quality can then be computed as shown below:
Table XII.1.5. Reliability vs. Treatment Removal Efficiency

                x2 = 0.85   x2 = 0.90   x2 = 0.95
ΣP(s1 < r1i)    0.50        0.50        0.50
P(s2i > 20)     0.09        0.02        0.00
Reliability     0.455       0.49        0.5
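The reliability values in Table XII.1.5 follow from the empirical CBOD distribution in Table XII.1.2; the sketch below is an illustrative check of that calculation for the base case (ΣP(s1 < r1i) = 0.50).

```python
# Sketch reproducing Table XII.1.5 for the base case (production of 5.5 MGD).
# CBOD distribution from Table XII.1.2: concentration -> probability (percent).
cbod_pmf = {0: 0, 10: 5, 20: 9, 30: 14, 40: 22, 50: 12, 60: 7, 70: 7, 80: 6,
            90: 3, 100: 1, 110: 0, 120: 3, 130: 2, 140: 0, 150: 1, 160: 0,
            170: 0, 180: 2, 190: 3, 200: 1, 210: 0, 220: 1, 230: 0, 240: 1, 250: 0}

p_short = 0.50          # P(s1 < r1): demand exceeds 5.5 MGD with probability 0.50

for x2 in (0.85, 0.90, 0.95):
    # Probability that treated water quality s2 = (1 - x2) * r2 exceeds 20 mgO2/L.
    p_bad_quality = sum(p for cbod, p in cbod_pmf.items()
                        if (1 - x2) * cbod > 20) / 100.0
    reliability = 1 - (p_short + p_bad_quality - p_bad_quality * p_short)
    print(f"x2 = {x2}: P(s2 > 20) = {p_bad_quality:.2f}, reliability = {reliability:.3f}")
```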
In general, this two-objective problem can be solved using the ε-constraint formulation:

min f1(·)
subject to:
f2(·) ≥ ε2
x1 ∈ {2, 4, 6, 8, 10}
x2 ∈ {0.85, 0.90, 0.95}
0 ≤ r1 ≤ 10
0 ≤ r2 ≤ 250

Consider that the decisionmaker set the epsilon constraint to be 0.90 (or 90% reliability level). Then:

min f1(·) = 30.0 + 1.42 ln x1 + 0.78 s1   if x2 = 0.85
            33.3 + 1.05 ln x1 + 0.29 s1   if x2 = 0.90
            25 - 0.22 ln x1 + 2.49 s1     if x2 = 0.95

subject to:
ΣP(x1 < r1i) + P((1 - x2)·r2 > 20) - P((1 - x2)·r2 > 20)·[ΣP(x1 < r1i)] ≤ 0.1
x1 ∈ {2, 4, 6, 8, 10}
x2 ∈ {0.85, 0.90, 0.95}
0 ≤ r1 ≤ 10
0 ≤ r2 ≤ 250

where:
s1(·) = r1 if r1 ≤ x1; x1 if r1 > x1
s2(·) = (1 - x2)·r2
Simulation was done by generating random numbers and determining (by interpolation) the corresponding values of the two random variables. Table XII.1.6 shows some of the Pareto-optimal solutions.
Table XII.1.6. Sample of Pareto-Optimal Solutions

              Run 1      Run 2      Run 3
x1            8          8          8
x2            0.85       0.9        0.95
random no.    0.110      0.416      0.940
r1            3.163      5.178      8.000
random no.    0.538      0.648      0.541
r2            43.136     54.069     43.404
s1            3.163      5.178      8.000
s2            6.470      5.407      2.170
f1            35.41981   36.98505   44.46252
f2            0.8554     0.9212     0.94
Note: By relating random numbers to Tables XII.1.1 and XII.1.2, we can get r1 and r2 respectively.
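The sampling step described in the note above is inverse-transform sampling from the empirical cumulative distributions. The sketch below is a minimal illustration for r1, linearly interpolating the CDF of Table XII.1.1; the random seed and draws are arbitrary.

```python
# Minimal sketch of the sampling step described above: draw r1 by inverse-transform
# sampling, linearly interpolating the empirical CDF of daily demand (Table XII.1.1).
import random

demand = [0.25 * k for k in range(41)]                 # 0, 0.25, ..., 10 MGD
cum = [0, 0, 1, 1, 2, 3, 3, 3, 5, 6, 7, 7, 9, 12, 13, 17, 20, 24, 30, 35, 38,
       43, 50, 55, 61, 69, 77, 81, 86, 89, 90, 91, 94, 94, 95, 96, 97, 99,
       100, 100, 100]
cdf = [c / 100.0 for c in cum]

def inverse_cdf(u, grid, cdf_vals):
    """Return the first grid value whose interpolated CDF reaches u."""
    for k in range(1, len(grid)):
        if u <= cdf_vals[k]:
            lo, hi = cdf_vals[k - 1], cdf_vals[k]
            if hi == lo:                               # flat segment carries no probability
                return grid[k]
            return grid[k - 1] + (u - lo) / (hi - lo) * (grid[k] - grid[k - 1])
    return grid[-1]

random.seed(1)
for _ in range(3):
    u = random.random()
    print(round(u, 3), round(inverse_cdf(u, demand, cdf), 3))
```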
Reference: Santos-Borja, Adelina and Nepomuceno, Dolora (2004). Laguna de Bay: Experience and Lessons Learned Brief. Laguna Lake Development Authority, Philippines.
PROBLEM XII.2: Shipping Cars to Multiple Dealerships In the Southeastern region of the United States, a car manufacturing company has three manufacturing plants and must ship the cars to five regional dealerships.
DESCRIPTION
In this multiobjective problem, the three car manufacturing plants are in:
i) Lexington, KY,
ii) Huntsville, AL, and
iii) Columbia, SC.
They supply five regional dealerships located in:
i) Memphis, TN,
ii) Atlanta, GA,
iii) Jackson, MS,
iv) Louisville, KY, and
v) Raleigh, NC.
The objectives of the car company are to minimize transportation costs (fuel, tolls, fees) while maximizing the number of cars shipped between the manufacturing plant and regional dealerships. Operational costs (insurance, depreciation, and others) are assumed calculated into the per-mile costs of transport per truck. The company also has a policy to ship cars only when the 9-car capacity of the car carrier is reached.
METHODOLOGY
The Multiobjective Statistical Method (MSM) is used to solve this problem.

Verbal Objectives, Constraints, and Decisions
We begin by stating the basic objective functions:
f1(·) = minimize transportation costs from manufacturing plant to dealership
f2(·) = maximize the number of cars shipped between manufacturing plant and dealership
Next, we determine the decision, input, exogenous, random, state, and output variables to gain understanding of the transportation problem. These are as follows:

Decision variables:
- xi,j = shipment of new cars from manufacturer i to dealership j

Input variables:
- u1 = average miles per gallon of fuel per truck
- u2 = truck capacity/car carrying limit
- ui,j = tolls/fees per route (from manufacturer i to dealership j)
- um,i = production capacity at manufacturing plant (i)
- ud,j = dealership (j) demand
Exogenous variables:
- α1 = operational costs of transportation
- α2 = insurance costs of transportation
- α3 = depreciation costs of transportation

Random variables:
- r1 = diesel costs per gallon
- r2 = weather/route availability
- r3 = parts shortages affecting production
- r4 = worker strikes affecting production

State variables:
- s1 = current miles driven in the week from manufacturing plant (i) to dealership (j)
- s2 = current number of car shipments

Output variables:
- y1 = total transportation costs in the week
- y2 = total number of car shipments made

Next, to get important modeling information, the company drivers and management fill out the following questionnaire:
1) Where are each of the manufacturing plants located?
2) Where are each of the dealerships located?
3) What is the best approximation of mileage for travel between each manufacturing plant and destination?
4) What is the approximate fuel efficiency (miles/gallon) for each truck?
5) How many vehicles are transported on each shipping truck?
6) For each of the routes, what tolls and fees apply?
7) What is the dollar amount per mile for operations associated with each truck (insurance, depreciation, maintenance)?
8) Any additional information not stated above?
This questionnaire was completed by the company with the following details:
Table XII.2.1. Mileage between Manufacturer and Dealerships with Production Outputs and Dealership Demand

Miles between:          Dealership (j)
Manufacturer (i)        1: Memphis   2: Atlanta   3: Jackson   4: Louisville   5: Raleigh   Output
1: Lexington            422          384          638          79              492          100
2: Huntsville           216          196          335          306             587          250
3: Columbia             651          214          594          506             227          200
Cars Needed (Demand)    125          60           80           175             110
Table XII.2.2. Route Tolls and Fees between Manufacturer and Dealership Locations

Tolls/Fees ($) between:   Dealership (j)
Manufacturer (i)          1: Memphis   2: Atlanta   3: Jackson   4: Louisville   5: Raleigh
1: Lexington              25           40           50           0               30
2: Huntsville             0            20           0            20              40
3: Columbia               40           20           30           50              20
Average fuel efficiency per truck (u1): 8 miles per gallon.
Number of cars shipped per truck (u2): 9 (5 on top and 4 on the bottom). Cars are shipped only when the trucks are fully loaded.
Operational, insurance, and depreciation costs per mile driven are approximately $0.55, in addition to fuel costs.
With this survey information, we proceed to solving the MSM problem.
SOLUTION
Quantify Objective Functions
First, we define the two objective functions along with their constraints. Let xij = shipment from manufacturer i to dealership j.

min f1(xi,j) = transportation costs from manufacturing plant to dealership
= {[miles from manufacturing plant (i) to dealership (j)] x ([operational costs/mile] + [fuel cost/gallon] / [fuel efficiency (mile/gallon)]) + [route tolls and fees from manufacturing plant (i) to dealership (j)]} / [carrying truck capacity]
= {[miles from manufacturing plant (i) to dealership (j)] x (α1,2,3 + [r1] / [u1]) + [route tolls and fees from manufacturing plant (i) to dealership (j)]} / [u2]
= {[422x11 + 384x12 + 638x13 + 79x14 + 492x15 + 216x21 + 196x22 + 335x23 + 306x24 + 587x25 + 651x31 + 214x32 + 594x33 + 506x34 + 227x35] x (0.55 + normal(2.50, 0.5) / 8) + [25x11 + 40x12 + 50x13 + 0x14 + 30x15 + 0x21 + 20x22 + 0x23 + 20x24 + 40x25 + 40x31 + 20x32 + 30x33 + 50x34 + 20x35]} / 9
Network Optimization
397
= x11 + x12 + x13 + x14 + x15 + x21 + x22 + x23 + x24 + x25 + x31 + x32 + x33 + x34 + x35 ≥ ε x11 + x12 + x13 + x14 + x15 = 100 x21 + x22 + x23 + x24 + x25 = 250 x31 + x32 + x33 + x34 + x35 = 200 x11 + x21 + x31 = 125 x12 + x22 + x32 = 60 x13 + x23 + x33 = 80 x14 + x24 + x34 = 175 x15 + x25 + x35 = 110 xij ≥ 0, i = 1,2,3 j = 1,2,3,4,5 f1 (xi,j) can be described as the costs for sending each fully-loaded truck from the manufacturing location to a dealership. An assumption is that any cars not shipped in a one-week period (due to waiting for a fully-loaded carrier truck) will be shipped the following week. This remainder cost for shipping, however, will be factored into the originating week’s total, which produces a meaningful shipping cost representation for weekly comparison. f2 (xi,j) is described as the total number of new cars transported to dealerships. The ε-constraint can be varied to determine the tradeoffs between costs (f1) and maximizing the cars shipped (f2). An assumption is that the focus is only on truck costs and shipments from manufacturer to dealership. We choose to model the random variable r1, the fuel cost/gallon, from data given by the Department of Transportation website. Based upon recent data trends, we choose a normal ($2.50, $0.50) representation of diesel fuel pricing. Figure XII.2.1 graphically shows the diesel price changes between years 2004-2005 and 2005-2006. This data was gathered from Gasoline and Diesel Fuel Update provided by the Energy Information Administration (EIA), a statistical agency of the United States Department of Energy.
Figure XII.2.1. Diesel Fuel Prices between 2004 and 2006
398
Multiobjective Statistical Method
Simulation We simulate five trial runs representing individual weeks of operation. They solve the Linear Programming (LP) problem and determine the tradeoff values. Simulated fuel prices for the five trials are given as follows:
Table XII.2.3. Simulated Fuel Prices
1 $2.22
r1 Cost/Gallon
Run 3 $2.49
2 $2.78
4 $2.91
5 $3.27
Since the total production capacity of the three plants is 550 vehicles, they set ε equal to 550, which is the maximum number of cars that can be shipped. Ideally, this is the weekly basis on which the needed supply reaches the dealerships—where it can be moved and sold—not sitting in a factory parking lot, which is why we choose to ship the maximum cars possible. Though not modeled for this problem, the ε-constraint can be reduced to lower levels, such as 500 car shipments per week. Doing so will allow the analyst to calculate the minimum transportation costs and choose the best routes. Simulation results are given as follows:
Table XII.2.4. Results for Transportation Costs and Cars Shipped
r1 Cost/Gallon f1 Transportation costs f2 Cars shipped
1 $2.22 $12,455.63 550
2 $2.78 $13,454.76 550
Run 3 $2.49 $12,937.35 550
4 $2.91 $13,686.70 550
5 $3.27 $14,329.00 550
Table XII.2.5. Results for Car Shipments from Manufacturer (i) to Dealership (j) Manufacturer (i) Lexington Huntsville Columbia Output
Memphis 0 125 0 125
Atlanta 0 0 60 60
Jackson 0 80 0 80
Louisville 100 45 30 175
Raleigh 0 0 110 110
Output 100 250 200
Each of the five runs produced the exact results for the routes xi,j. This shows that the route patterns are consistent for the range of diesel prices and the constraint on number of cars shipped.
Network Optimization
399
Compute Expected Values Arithmetic means were calculated from the five trials to determine the expected minimum transportation costs (f1) and cars shipped (f2). With the average diesel fuel cost equal to $2.73 per gallon, the expected minimum transportation costs (f1) were $13,373 per week. This allows 550 cars to be transported to the dealerships, thus meeting their sales demands. One interesting ratio is f1 / f2. For this optimization, the minimum transportation cost/car shipped is $24.31. In summary: f 1 *(xi,j) = $13,373/week
f 2 *(xi,j) = 550 cars/week where xXi,j = (0,0,0,100,0,125,0,80,45,0,0,60,0,30,110) Use the Surrogate Worth Tradeoff (SWT) Method This part of the MSM yields some interesting relationships. First, we find the −∂f1 ), which is the dollar amount for an additional car tradeoff values (λ12 = ∂f 2 shipped. The results are shown in Table XII.2.6 below.
Table XII.2.6. Tradeoffs ($/Additional Car Shipped) for Shipments from Manufacturer (i) to Dealership (j)

Run                                           1            2            3            4            5
r1 Cost/Gallon                                $2.22        $2.78        $2.49        $2.91        $3.27
f1 Transportation costs                       $12,455.63   $13,454.76   $12,937.35   $13,686.70   $14,329.00
f2 Cars shipped                               550          550          550          550          550
λ12 Dollars per additional car shipped        $52.52       $56.68       $54.53       $57.65       $60.33
The average tradeoff value (λ12) for the optimization is $56.34 per additional car shipped. Though this is an important relationship at a “global” level, the tradeoffs in terms of cost for the decision variables xi,j are equally revealing at a “local” level. As each xi,j is a decision variable, it possesses a cost tradeoff for deciding whether a car should be shipped on that route. The solutions provided (Table XII.2.5) represent Pareto-optimality. These choices of routes minimize costs while meeting shipment demand. Changing one of these choices can only result in a negative effect (increased cost) on the objective function (f1).
Table XII.2.7 below presents the output for the reduced costs of each decision variable. Reduced cost is interpreted as the amount of penalty to be paid to introduce one unit of that variable into the solution. Examining the problem in this way allows the decisionmaker to see how changing either the minimized cost solution or the shipment requirements would affect weekly shipping costs. Perhaps the company prefers one particular route over another due to social or political considerations. Table XII.2.7 allows them to quickly determine the increased costs due to changing the optimization.
Table XII.2.7. Average Tradeoffs/Reduced Costs ($/Car Shipped) for 5 Runs

Average Reduced Costs   Lexington   Huntsville   Columbia
Memphis                 $47.9       $ -          $24.4
Atlanta                 $66.9       $21.4        $ -
Jackson                 $60.3       $ -          $5.8
Louisville              $ -         $ -          $ -
Raleigh                 $75.3       $61.0        $ -
For example, if the company wished to add to the optimization 9 cars from Lexington to Memphis, a route that is currently not taken, the additional average weekly cost would become $47.9 x 9 = $431.10 greater. The surrogate worth function (denoted by W12) represents the decisionmakers’ assessment as to how much they are willing to trade in dollars for shipping one additional unit (or vehicle). If W12 = 0, this means the company would be satisfied to spend $56.34 (λ12) to ship an additional car, or to save that amount by not shipping one. This is the case, or the “indifference band,” when the proper decision has been made. We are assuming that W12 = 0 for our optimization, since we have satisfied the objective functions of minimizing weekly transportation costs and maximizing car shipments. If the W12 was less than or greater than zero, a solution set would be preferred over the model’s solution set, and we would need to reevaluate our model.
ANALYSIS In sum, the goals of the car manufacturing company were to minimize cost and increase the number of cars shipped to various dealerships. Below are three summary plots of the analysis. Figure XII.2.2 displays the Pareto-optimal value for the minimized cost objective function (f1) vs. tradeoffs λ12. This shows that as transportation costs increase, which is due primarily to increased diesel pricing, the tradeoff costs for increasing shipments goes up. This makes sense intuitively. The same relationship is true in Figure XII.2.3, which shows that as diesel costs increase, so do the tradeoff costs for increasing shipments. Figure XII.2.4 is along the same interpretation—as diesel costs increase, so do the minimized transportation costs.
Figure XII.2.2. Transportation Costs (f1) vs. Dollars per Additional Car Shipped (λ12)
Figure XII.2.3. Diesel Cost/Gallon (r1) vs. Dollars per Additional Car Shipped (λ12)
Figure XII.2.4. Diesel Cost/Gallon (r1) vs. Transportation Costs (f1) Regarding a plot of f1 vs. f2, we chose not to show this in the report since f2 is held constant at 550 cars for the ε constraint. One analysis that would be interesting to perform as a next step would be the change in λ12 as epsilon is decreased. This would simulate the effect of reduced demand with steady production, and tradeoff costs associated with new optimized routes. Applying the Multiobjective Statistical Method (MSM), we incorporated system objectives, constraints, and decisions. This included the six system variables to understand their effects on the transportation process, a questionnaire to determine the constraints, previous and current decisions, and additional information to gain further perspective into the transportation problem. We then determined the objective functions and mathematical representation of constraints to best analyze the transportation system. The simulation to compute the optimum shipping routes allowed us to find the optimal solutions for minimizing transportation costs while maximizing cars shipped among the five scenarios simulated for gas pricing. With this, we determined the optimal number of cars to ship from manufacturer to dealership, thus satisfying the goals of the decisionmaker. This allowed the car company to specify the expected performance values for the entire system. We determined the surrogate worth tradeoff (SWT), which allows for a combination of multiple λs. In this case, the value was found to be zero, which is a result of the solution set being optimal.
Using MSM, the analyst and outside individuals are able to better understand and justify the development process from problem statement to final solution. MSM gives us a framework to follow, ensuring that all pertinent and necessary information is examined. At the same time, the process allows the analyst to explore appropriate solution methods without being restricted to a single predefined technique.
PROBLEM XII.3: Developing a Wetlands Mitigation Plan A state Department of Environmental Quality (DEQ) needs to develop a wetlands mitigation plan for an important river. A subproject task is to design and develop anti-erosion measures.
DESCRIPTION
Increases in residential and commercial construction, as well as in recreational use, have led to noticeable erosion along the river's banks, adversely affecting the wetlands. The objectives for the anti-erosion plan are to minimize the number of acres in wetlands i:
• that are lost to erosion, and
• that are disturbed by anti-erosion activities.
METHODOLOGY
The Multiobjective Statistical Method (MSM) will be used to compare and contrast the impact of different anti-erosion plans. The MSM allows an analyst to express risk in two formats: 1) as state variables, such as in the initial modeling stages; and 2) as a function of decision variables, such as during the optimization/tradeoff analysis phase. MSM also allows assessing the different combinations of possible system configurations. An analyst is rarely confronted with an absolute configuration to assess; plan dimensions and amounts can always change.
In this problem, there are three anti-erosion options to consider incorporating into the mitigation plan. Each of these will save a given number of acres from erosion; in turn, saving this acreage will affect another given number of acres. Policy guidelines mandate choosing the plan that will save the most acres and adversely affect the least acres for the least amount of money.
Reformulate the objectives into state variables:
Define W1(x; ηi, rm) as the number of acres in Wetlands 1 involved in Plan xk
Define W2(x; ηi, rm) as the number of acres in Wetlands 2 involved in Plan xk
where
xk, k = 1, 2, or 3, represents a mitigation plan
ηi represents the acreage saved by Plan xk
rm represents the acreage affected by Plan xk
The number of acres in wetlands i can be expressed as a function of the two state variables:
• that are lost to erosion, and
• that are disturbed by anti-erosion activities
The objective functions are:
min f1(x) = cost
min f2(rm) = acreage affected by Plan xk
max f3(ηi) = acreage saved by Plan xk
(Alternatively, min f3(x) = acreage lost to erosion by implementing Plan xk)

The decision variables are of the following forms:
xij    use Plan i for Wetlands j (j = 1, 2)
xijk   acres saved (k = 1) or affected (k = 2) by using Plan i for Wetlands j

The xij variables are indicator (0, 1) variables. Data from a similar river project last year was used as the cost basis for creating the objective functions and constraints. The data are shown in Table XII.3.1.
Table XII.3.1. Base Data from Past River Project

                          Plan 1      Plan 2      Plan 3
# Acres Saved - W1            17           8          15
Cost/Acre                 $2,231      $4,477      $1,289
Cost - Saved W1          $37,927     $35,816     $19,335
# Acres Affected - W1         13           7           1
Cost/Acre                 $2,135      $3,997      $2,685
Cost - Affected W1       $27,755     $27,979      $2,685
Total Cost - W1          $65,682     $63,795     $22,020
# Acres Saved - W2             9           3          20
Cost/Acre                 $4,594      $2,360      $2,213
Cost - Saved W2          $41,346      $7,080     $44,260
# Acres Affected - W2         14          18          16
Cost/Acre                 $4,857      $3,517      $3,457
Cost - Affected W2       $67,998     $63,306     $55,312
Total Cost - W2         $109,344     $70,386     $99,572
Total Cost              $175,026    $134,181    $121,592
Total Acres Saved             26          11          35
Total Acres Affected          27          25          17

where W1 = 'Wetlands 1' and W2 = 'Wetlands 2'
Stating the objectives in the ε-constraint form results in:
min f1(x)
subject to
f2(rm) ≤ ε2
f3(ηi) ≥ ε3

The linear equation for f1 is:
f1(x) = (2231x111 + 2135x112)x11 + (4477x211 + 3997x212)x21 + (1289x311 + 2685x312)x31 + (4594x121 + 4857x122)x12 + (2360x221 + 3517x222)x22 + (2213x321 + 3457x322)x32

Note: f2 and f3(x) are dependent on f1.
The overall problem is thus formulated as:
min f1(x) = (2231x111 + 2135x112)x11 + (4477x211 + 3997x212)x21 + (1289x311 + 2685x312)x31 + (4594x121 + 4857x122)x12 + (2360x221 + 3517x222)x22 + (2213x321 + 3457x322)x32
Subject to:
x112 + x122 ≤ ε2
x212 + x222 ≤ ε2
x312 + x322 ≤ ε2        (f2 - minimize acres affected in Wetlands j under Plan i)
x111 + x121 ≥ ε3
x211 + x221 ≥ ε3
x311 + x321 ≥ ε3        (f3 - maximize acres saved in Wetlands j under Plan i)
xi11 + xi12 ≤ 20
xi21 + xi22 ≤ 25        (Wetlands 1 is 20 acres in size; Wetlands 2 is 25 acres in size.)
xijk ≥ 0 for all i, j, and k
xij = 0 or 1            (integer constraints on indicator variables)
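Once the indicator variables xij are fixed (i.e., a plan has been chosen for each wetlands), the remaining problem is linear in the xijk and can be solved directly. The sketch below is illustrative only: it assumes Plan 1 is applied to both wetlands, uses the per-acre costs from Table XII.3.1, and assumes, for illustration, that acres affected scale with acres saved at the ratios in Table XII.3.1.

```python
import numpy as np
from scipy.optimize import linprog

# Per-acre costs for Plan 1 (Table XII.3.1): saved/affected, Wetlands 1 and 2.
c_save = np.array([2231.0, 4594.0])     # cost per acre saved:    x111, x121
c_aff = np.array([2135.0, 4857.0])      # cost per acre affected: x112, x122
# Assumed coupling: acres affected per acre saved (ratios from Table XII.3.1).
ratio = np.array([13.0 / 17.0, 14.0 / 9.0])

def max_acres_saved(budget, eps2=4.5):
    """Maximize acres saved under Plan 1, subject to a budget, the affected-
    acreage cap eps2, and the wetlands size limits (20 and 25 acres)."""
    cost_per_saved = c_save + ratio * c_aff    # $ per acre saved incl. disturbance
    c = -np.ones(2)                            # maximize x111 + x121
    A_ub = np.vstack([cost_per_saved,          # total cost <= budget
                      ratio,                   # x112 + x122 <= eps2
                      [1 + ratio[0], 0.0],     # x111 + x112 <= 20
                      [0.0, 1 + ratio[1]]])    # x121 + x122 <= 25
    b_ub = np.array([budget, eps2, 20.0, 25.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    return -res.fun if res.success else None

for budget in (50_000, 75_000, 100_000, 125_000, 150_000):
    print(f"budget ${budget:,}: acres saved = {max_acres_saved(budget):.2f}")
```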
SOLUTION
The initial set of scenarios can now be solved, given the following additional constraints:
• cost cannot exceed $200,000,
• total acreage affected must be no more than 10% of the total acreage, and
• total acreage saved must be at least 50% of the total acreage.
The optimal solutions for saving a certain number of acres are found given these parameters. The total cost target is varied to find out which plan can save the most acreage and affect the least acreage. The results are presented in Table XII.3.2.
Table XII.3.2. Pareto-Optimal Solutions—Scenario 1

Plan    Cost           Acres Saved
3       $50,000.00     23.4415883
3       $75,000.00     34.9026209
1       $100,000.00    26.74304711
1       $125,000.00    32.32133528
1       $150,000.00    38.18099695
1       $160,653.50    40.5
Note that each solution is based on affecting the maximum allowed acreage, 4.5 acres. The non-Pareto-optimal solutions in Table XII.3.3 were found by setting the amount of acreage saved under the optimal figures and solving for the implementation cost.
Table XII.3.3. Non-Pareto-Optimal Solutions—Scenario 1

Plan    Cost          Acres Saved
3       $36,037.50    15
3       $47,102.50    20
1       $68,250.00    20
1       $79,405.00    25
2       $84,196.50    20
ANALYSIS
Figure XII.3.1 shows the Pareto-optimal curve for Scenario 1. The "stars" on the graph indicate positions of non-optimal solutions. The turquoise lines indicate the initial "band of indifference." The tradeoff vector, λ, was determined by visual inspection and a review of Table XII.3.2. Spending approximately $11,000 more results in saving approximately 2 more acres of wetlands; at that point, the project is in the realm of "diminishing returns."
Note that Plan 2 was not selected in this iteration. Plan 2 is the most expensive plan to implement ($6,837 to save one acre and affect another). The figure plots the acres saved in both wetlands against the total project cost, with separate curves for Plan 1 and Plan 3.
Figure XII.3.1. Pareto-Optimal Curve—Scenario 1
How much impact does varying the epsilon values have on the number of acres saved given an expenditure of $150,000? (This is the amount that an initial analysis found will provide the best value for the money.) The result of varying ε2 between .05 and .25, the range of the percentages of affected acreage in similar projects in the past, and varying ε3 between .25 and .75, the range of the percentages of saved acreage in the past, is presented in Table XII.3.4.
Table XII.3.4. Acreage Saved

Epsilon 2    Epsilon 3    Acres Saved - W1    Acres Saved - W2    Total Acres Saved    Plan Used
0.12         0.55         19.155              23.333              42.488               1
0.05         0.75         19.155              21.009              40.164               1
0.25         0.75         19.155              21.009              40.164               1
0.25         0.6          19.134              20.965              40.099               1
0.05         0.6          19.157              21.011              40.168               1
0.05         0.7          19.157              21.011              40.168               1
0.2          0.8          19.157              21.011              40.168               1

Plan 1 is selected for use when given a budget of $150,000.
The different values of ε2 and ε3 were determined through simulation based on the MSM method. The expected values of ε2 and ε3 according to this simulation, .13857 and .67857, respectively, resulted in a solution of W1 acres saved of 19.155 and W2 acres saved of 21.009—data points already found by the simulation. Each of the above simulations resulted in .01063 W1 acres affected and 2.208 W2 acres affected. Thus, the next decision is based solely on the number of acres saved. Figure XII.3.2 illustrates the findings of Table XII.3.4.
Figure XII.3.2. Acres Saved (MSM): Wetlands 2 vs. Wetlands 1

Given the current constraints, the best solution is to implement Plan 1 and spend $150,000 to save 19.155 acres in Wetlands 1 and 23.333 acres in Wetlands 2.
Note: The data in Table XII.3.1 are from the Pamunkey, VA river project, and the epsilon values were created by using the RANDBETWEEN simulation function in Excel.
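A Python equivalent of the Excel sampling described in the note, together with a check of the sample means reported above, might look like the following sketch (the sample size of seven draws mirrors Table XII.3.4):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for Excel's RANDBETWEEN: draw epsilon pairs over the historical ranges.
eps2_draws = rng.integers(5, 26, size=7) / 100.0     # 0.05 .. 0.25
eps3_draws = rng.integers(25, 76, size=7) / 100.0    # 0.25 .. 0.75

# The epsilon pairs tabulated in Table XII.3.4 and their sample means.
eps2 = np.array([0.12, 0.05, 0.25, 0.25, 0.05, 0.05, 0.20])
eps3 = np.array([0.55, 0.75, 0.75, 0.60, 0.60, 0.70, 0.80])
print("mean eps2 =", eps2.mean())   # 0.13857, as reported above
print("mean eps3 =", eps3.mean())   # 0.67857, as reported above
```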
PROBLEM XII.4: Football Team Success Strategy The coach of a professional football team wants to know when and how often to rush the ball in order to gain the most points.
DESCRIPTION We refer to the records of the past two football seasons to derive the figures for total points earned, turnovers, rush plays, and conversion attempts.
METHODOLOGY
We use the Multiobjective Statistical Method (MSM) to analyze the total points earned, f1(·), and the total turnovers, f2(·), during the last two years of games between a professional football team and its opponents.
SOLUTION
The mathematical model for this problem is shown as follows:

min f1(x1, x2; r1, r2) = −1·{5.8 − 38x1 + 104x1² + 4.88x2² − 11.5x2 + 20.2r1 − 0.49r2 + 1.4x1x2}   (XII.4.1)

min f2(x1, x2; r1, r2) = 0.9 − 0.7x1 + 0.5x2 + 0.5r1 + 0.3r2   (XII.4.2)
where the objective functions are:
f1(x1, x2; r1, r2) = −1 × total points earned
f2(x1, x2; r1, r2) = total turnovers

The decision variables are:
x1 = ratio of total rush plays
x2 = 4th down conversion attempts

The random variables are given as follows:
r1 = ratio of total rush plays by other teams
r2 = 4th down conversion attempts by other teams
These variables are normally distributed, as shown in Figures XII.4.1 and XII.4.2 below. The fitted normal distribution for the ratio of total rush plays by other teams has mean 0.4646 and standard deviation 0.1062 (N = 32); the fitted distribution for 4th down conversion attempts by other teams has mean 0.6875 and standard deviation 0.6927 (N = 32).

Figure XII.4.1. Histogram of Ratio of Total Rush Plays by Other Teams
Figure XII.4.2. Histogram of 4th Down Conversion Attempts by Other Teams

For the purpose of implementing the MSM, we substitute the expected values of the random variables into the objective functions and then obtain the expected values of the two objective functions:
min f1(x1, x2; r1, r2) = −1·{5.8 − 38x1 + 104x1² + 4.88x2² − 11.5x2 + 20.2(0.46) − 0.49(0.69) + 1.4x1x2}   (XII.4.3)

min f2(x1, x2; r1, r2) = 0.9 − 0.7x1 + 0.5x2 + 0.5(0.46) + 0.3(0.69)   (XII.4.4)

With Eqs. (XII.4.3) and (XII.4.4), we can formulate the Lagrangian:

L = −14.75 + 38x1 − 104x1² − 4.88x2² + 11.5x2 − 1.4x1x2 + λ12{0.9 + 0.5(0.46) + 0.3(0.69) − 0.7x1 + 0.5x2 − ε2}   (XII.4.5)

The non-negativity condition of the decision variables x1 and x2 simplifies the Kuhn-Tucker conditions:

∂L/∂x1 = 38 − 208x1 − 1.4x2 + λ12(−0.7) = 0   (XII.4.6)

∂L/∂x2 = 11.5 − 9.76x2 − 1.4x1 + λ12(0.5) = 0   (XII.4.7)

λ12 = (38 − 208x1 − 1.4x2)/0.7 = (−11.5 + 9.76x2 + 1.4x1)/0.5   (XII.4.8)

After solving Eq. (XII.4.8) for x1, we get:

x1 = −0.072x2 + 0.2577   (XII.4.9)
Using Eqs. (XII.4.8) and (XII.4.9), we can calculate the noninferior solutions for this problem. Table XII.4.1 summarizes the noninferior solutions and corresponding tradeoff values. In Figures XII.4.3 and XII.4.4, these Pareto-optimal solutions are plotted in the functional space (f1 vs. f2) and in the decision space (x1 vs. x2), respectively.
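The noninferior solutions in Table XII.4.1 can be recomputed with a short sketch (Python), using the expected values E[r1] ≈ 0.46 and E[r2] ≈ 0.69 substituted above:

```python
import numpy as np

E_r1, E_r2 = 0.46, 0.69   # expected values of the random variables

def f1(x1, x2):
    # Negative of total points earned, Eq. (XII.4.3)
    return -(5.8 - 38*x1 + 104*x1**2 + 4.88*x2**2 - 11.5*x2
             + 20.2*E_r1 - 0.49*E_r2 + 1.4*x1*x2)

def f2(x1, x2):
    # Expected total turnovers, Eq. (XII.4.4)
    return 0.9 - 0.7*x1 + 0.5*x2 + 0.5*E_r1 + 0.3*E_r2

for x2 in np.arange(1.3, 3.51, 0.1):
    x1 = -0.072*x2 + 0.2577                  # Eq. (XII.4.9)
    lam12 = (38 - 208*x1 - 1.4*x2) / 0.7     # Eq. (XII.4.8)
    print(f"x1={x1:.2f}  x2={x2:.1f}  lam12={lam12:.2f}  "
          f"f1={f1(x1, x2):.2f}  f2={f2(x1, x2):.2f}")
```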
Figure XII.4.3. Pareto-Optimal Solutions in the Functional Space
Table XII.4.1. Noninferior Solutions and Tradeoff Values

x1      x2      λ12      f1        f2
0.16    1.30     2.92    -4.91     1.87
0.16    1.40     4.86    -5.12     1.93
0.15    1.50     6.80    -5.44     1.98
0.14    1.60     8.74    -5.86     2.04
0.14    1.70    10.68    -6.39     2.09
0.13    1.80    12.62    -7.03     2.15
0.12    1.90    14.56    -7.77     2.20
0.11    2.00    16.50    -8.62     2.26
0.11    2.10    18.44    -9.57     2.31
0.10    2.20    20.38    -10.63    2.37
0.09    2.30    22.32    -11.80    2.42
0.08    2.40    24.26    -13.07    2.48
0.08    2.50    26.20    -14.45    2.53
0.07    2.60    28.14    -15.94    2.59
0.06    2.70    30.08    -17.53    2.64
0.06    2.80    32.02    -19.23    2.70
0.05    2.90    33.96    -21.03    2.75
0.04    3.00    35.89    -22.95    2.81
0.03    3.10    37.83    -24.96    2.86
0.03    3.20    39.77    -27.09    2.92
0.02    3.30    41.71    -29.32    2.97
0.01    3.40    43.65    -31.66    3.03
0.01    3.50    45.59    -34.10    3.08
Figure XII.4.4. Pareto-Optimal Solutions in the Decision Space

ANALYSIS
According to our findings, it would be optimal to rush the ball somewhere between 16% and 1% of the time, which corresponds to going for it on 4th down between 1.3 and 3.5 times, respectively. This would result in anywhere between 5 and 34 points scored, and 1.87 to 3.08 turnovers per game. All of these conclusions can be seen in Table XII.4.1 and in Figures XII.4.3 and XII.4.4. The state variables, total playing time and total yards earned, are functions of the decision variables given the random variables. Thus, in this problem, the state variables are not represented explicitly in the objective functions; however, they are strongly related to the decision variables.
PROBLEM XII.5: Renovating Manufacturing Assembly Lines A consulting firm is in charge of renovating an outdated manufacturing process for an automobile manufacturer.
DESCRIPTION
There are three identical and independent manufacturing lines. Renovating is imperative, as the current process takes too much time and costs too much money. The consulting firm has determined the five best remodeling policies. These are to renovate:
A - one line at a time
B - A first, then B and C simultaneously
C - B first, then A and C simultaneously
D - C first, then A and B simultaneously
E - all lines simultaneously
In addition to choosing the policy, the consulting group can dictate how many people should work on the renovation. Some of these workers will be drawn from another process line, which will reduce the production on that line (i.e., $150/person/hour). Based on the responses obtained in a survey, no fewer than 10 and no more than 20 workers should be taken from the other lines for renovation purposes. Also, because renovation requires a different skill set, there is a $7,000/person training cost associated with moving a worker to renovation. The objectives are to reduce the cost of renovating the process while minimizing the number of customers the company loses by not being able to supply enough cars.
METHODOLOGY
We solve this problem using the Multiobjective Statistical Method (MSM). The different policies carry different project overrun risks. Table XII.5.1 outlines the explicit values. Basically, choosing to renovate one line at a time is the least risky (i.e., P(10,000 hours of delay) = 0.1) but has the smallest chance of being completed on time (i.e., P(1,000 hours of delay) = 0.1). Likewise, renovating all 3 lines simultaneously is the riskiest (i.e., P(10,000 hours of delay) = 0.3) but also has the largest chance of finishing on time (i.e., P(1,000 hours of delay) = 0.3).

Decision variables
1. policy = renovation policy, one of {pa, pb, pc, pd, pe}
2. N = number of people to pull off the assembly line, 10 ≤ N ≤ 20

Random variable
1. delay = delay in completing the project (hours)

Table XII.5.1 presents the probabilities of the hours of delay. For example, for Policy A, there is a 0.1 probability of 1000 hours of delay. This data was obtained from a
series of similar projects completed in the past. This data was then validated through a questionnaire.
Table XII.5.1. Summary of Overrun Risk Probabilities

                              Hours of delay
Policy                    0     1000   2000   5000   10000
A    1 at a time          0     0.1    0.4    0.4    0.1
B    a then bc            0     0.15   0.35   0.35   0.15
C    b then ac            0     0.2    0.3    0.3    0.2
D    c then ab            0     0.25   0.25   0.25   0.25
E    all lines            0     0.3    0.2    0.2    0.3
State variables
The state variable is overhaul time: 10,000 man-hours (the baseline estimate for the project) plus any hours of delay (the random input based on the policy selection), divided by the number of workers assigned:

Overhaul Time → OT(policy, delay, N) = {10000 + delay(policy)}/N

Based on our study, the following two objective functions were developed.

Objective functions:
1. minimize f1, $ renovation loss → f1(·) = ($7,000 training/worker)·N + ($150/hour production lost)·OT(policy, delay, N)
2. minimize f2, lost customers → f2(·) = 0.25·OT(policy, delay, N)/20000
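The expected-value calculations behind Tables XII.5.2 and XII.5.5 can be reproduced with a short sketch (Python; the delay distribution is taken from Table XII.5.1):

```python
# Delay distribution (hours) by policy, from Table XII.5.1.
delays = [0, 1000, 2000, 5000, 10000]
probs = {
    "A": [0, 0.10, 0.40, 0.40, 0.10],
    "B": [0, 0.15, 0.35, 0.35, 0.15],
    "C": [0, 0.20, 0.30, 0.30, 0.20],
    "D": [0, 0.25, 0.25, 0.25, 0.25],
    "E": [0, 0.30, 0.20, 0.20, 0.30],
}

def expected_delay(policy):
    return sum(d * p for d, p in zip(delays, probs[policy]))

for policy in "ABCDE":
    rows = []
    for N in range(10, 21):
        ot = (10000 + expected_delay(policy)) / N   # expected overhaul time (hours)
        f1 = 7000 * N + 150 * ot                    # expected renovation loss ($)
        f2 = 0.25 * ot / 20000                      # expected fraction of customers lost
        rows.append((f1, f2, N))
    f1_min, f2_at_min, N_min = min(rows)
    print(f"Policy {policy}: min E[loss] = ${f1_min:,.0f} at N = {N_min} "
          f"(E[customers lost] = {f2_at_min:.6f})")
```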
SOLUTION
The cumulative distribution functions (CDFs) for the five policy options are shown in Figure XII.5.1.
Figure XII.5.1. CDFs for the Five Policy Options

Figure XII.5.2 shows the exceedance probability functions for the five policies:
Figure XII.5.2. Exceedance probabilities for the five policy options

The consulting company calculates the expected $ loss of each policy. Figure XII.5.3 shows the results.
Figure XII.5.3. Expected loss ($) for each policy

The expected loss for each policy and number of workers, along with the minimum values, is shown in Table XII.5.2.
Table XII.5.2. Expected Loss by Policies and Number of Workers

Number of      Expected Loss ($)
workers        A          B          C          D          E
10             278,500    281,500    284,500    287,500    290,500
11             266,545    269,273    272,000    274,727    277,455
12             257,750    260,250    262,750    265,250    267,750
13             251,385    253,692    256,000    258,308    260,615
14             246,929    249,071    251,214    253,357    255,500
15             244,000    246,000    248,000    250,000    252,000
16             242,313    244,188    246,063    247,938    249,813
17             241,647    243,412    245,176    246,941    248,706
18             241,833    243,500    245,167    246,833    248,500
19             242,737    244,316    245,895    247,474    249,053
20             244,250    245,750    247,250    248,750    250,250
min value      241,647    243,412    245,166    246,833    248,500
The consulting firm calculates the conditional expected $ loss of each policy. They choose alpha = 0.9, so they look at the worst 10% of the outcomes. Table XII.5.3 shows the worst 10% scenario values of overhaul time for each policy.
Table XII.5.3. Overhaul Time in the Worst 10% Scenario

Policy      Worst 10% scenario (man-hours)
Policy A    15,000
Policy B    16,050
Policy C    17,250
Policy D    17,925
Policy E    18,300
Figure XII.5.4 shows the conditional expected loss ($) based on the worst 10% case for each policy, and the minimum values for conditional expectation for each policy are shown in Table XII.5.4.
Figure XII.5.4. Conditional expected loss ($)

Table XII.5.4. Conditional Expected Loss by Policies and Number of Workers

Number of      Conditional Expected Loss ($)
workers        A          B          C          D          E
10             295,000    310,750    328,750    338,875    344,500
11             281,545    295,864    312,227    321,432    326,545
12             271,500    284,625    299,625    308,063    312,750
13             264,077    276,192    290,038    297,827    302,154
14             258,714    269,964    282,821    290,054    294,071
15             255,000    265,500    277,500    284,250    288,000
16             252,625    262,469    273,719    280,047    283,563
17             251,353    260,618    271,206    277,162    280,471
18             251,000    259,750    269,750    275,375    278,500
19             251,421    259,711    269,184    274,513    277,474
20             252,500    260,375    269,375    274,438    277,250
min value      251,000    259,711    269,184    274,438    277,250
The consulting company calculates the expected percentage of customers lost for each policy:
Figure XII.5.5. Expected percentage of customers lost

Table XII.5.5 shows the minimum expected percentage of customers lost:
Table XII.5.5. Expected Percentage of Customers Lost by Policies and Number of Workers

Number of      Expected Percentage of Customers Lost
workers        A           B           C           D           E
10             0.017375    0.017625    0.017875    0.018125    0.018375
11             0.015795    0.016023    0.016250    0.016477    0.016705
12             0.014479    0.014688    0.014896    0.015104    0.015313
13             0.013365    0.013558    0.013750    0.013942    0.014135
14             0.012411    0.012589    0.012768    0.012946    0.013125
15             0.011583    0.011750    0.011917    0.012083    0.012250
16             0.010859    0.011016    0.011172    0.011328    0.011484
17             0.010221    0.010368    0.010515    0.010662    0.010809
18             0.009653    0.009792    0.009931    0.010069    0.010208
19             0.009145    0.009276    0.009408    0.009539    0.009671
20             0.008688    0.008813    0.008938    0.009063    0.009188
min value      0.008688    0.008813    0.008938    0.009063    0.009188
The consulting company calculates the conditional expected percentage of customers lost for each policy. The firm chooses alpha = 0.9. This refers to the 10% worst-case scenario. Figure XII.5.6 shows this expected value.
Figure XII.5.6. Conditional expected percentage of customers lost

Table XII.5.6 shows the minimum conditional expected percentage of customers lost:
Table XII.5.6. Conditional Expected Percentage of Customers Lost by Policies and Number of Workers

Number of      Conditional Expected Percentage of Customers Lost
workers        A           B            C            D           E
10             0.018750    0.020063     0.021563     0.022406    0.022875
11             0.017045    0.018239     0.019602     0.020369    0.020795
12             0.015625    0.016719     0.017969     0.018672    0.019063
13             0.014423    0.015433     0.016587     0.017236    0.017596
14             0.013393    0.014330     0.015402     0.016004    0.016339
15             0.012500    0.013375     0.014375     0.014938    0.015250
16             0.011719    0.012539     0.013477     0.014004    0.014297
17             0.011029    0.011801     0.012684     0.013180    0.013456
18             0.010417    0.011146     0.011979     0.012448    0.012708
19             0.009868    0.010559     0.011349     0.011793    0.012039
20             0.009375    0.010031     0.010781     0.011203    0.011438
min value      0.009375    0.0100313    0.0107813    0.011203    0.011438
ANALYSIS
The consulting company judges the policies by four criteria. As seen from Table XII.5.7, Policy A is the best choice under all four criteria; hence the company recommends Policy A.
Table XII.5.7. Summary of Policy Evaluation

Criteria                                                    Best Policy
Min Expected Loss ($)                                       A
Min Expected Percentage of Customers Lost                   A
Min Conditional Expected Loss ($)                           A
Min Conditional Expected Percentage of Customers Lost       A
PROBLEM XII.6: Determining Optimal Fuel Dispensing Capacity A gas station wants to find out how many gas dispensing units (pumps) it needs to install at its new location. It is interested in minimizing the installation cost while maintaining a low waiting time for incoming drivers.
DESCRIPTION Assume that the gas station can choose from one of four types of pumps, each differing from the rest in the average speed at which it can pump gas. The number of people who arrive at the station at any given time affects the total number of units needed at the station.
METHODOLOGY Every real-life system that any decisionmaker wishes to address can be characterized by three important components: 1) the factors that affect the system (either random or deterministic), 2) the state variables, and 3) the decisions which the decisionmaker would want to carry out (also called decision variables). Obtaining a mathematical relationship between these components is in general a difficult problem. The Multiobjective Statistical Method (MSM) is one procedure which allows a decisionmaker to integrate those factors affecting the system (input variables) and the decision variables through the state variables. Once this relationship is established, the decisionmaker can use any mathematical tool to optimize the desired objective function(s). It is important to note that the objective functions chosen for this problem are not necessarily exhaustive, and the idea is to merely illustrate the MSM.
SOLUTION For this system, the number of people who arrive within any given interval of time is the input variable, and the number of people who wait while others pump the gas is a state variable. Assume that the time interval is fixed, say 1 unit of time. The decision variables for this system are the number and type of dispensing units (gas pumps). Installation costs and waiting time are the objective functions. Let d be the number of pumps at the gas station. The objective functions in this case are the cost of installing the pumps, f1, and the average waiting time for each customer, f2. The cost of installation, f1, is given by:
f1(d, s) = 4000 + 115d³ − 10s³   (XII.6.1)
where s denotes the average service time of the pumps chosen from the set {1, 3, 5, 7} (in units of time), depending on the gas station’s choice of pumps.
Applying the MSM
The gas station owner follows the key steps of MSM to solve the problem:
Step 1. The feasible set of decisions for which f1 and f2 are minimized is d ≥ 1 (d ∈ Ζ) and s ∈ {1, 3, 5, 7}.
Step 2. The average waiting time, f2, depends on the number of drivers waiting, m, and the average arrival rate of customers, λ, as follows:
f2(d, s) = E[m]/λ   (XII.6.2)
where E[·] indicates the expected value, and the cost of installation f1 is given by (XII.6.1).
Step 3. The rate of customer arrival at the gas station is denoted by λ, and the arrival process is modeled as a Poisson process with mean arrival rate λ. Since there is a random number of customers in the queue waiting for gas, the number of people serviced by any pump in a given time is also random and is likewise modeled as a Poisson random variable parameterized by 1/s. This type of modeling is known as an M/M/d queue, where d is the number of gas pumps. Let N denote the random variable; then the probability of exactly n customers arriving during a unit time interval is given by
PN(n) = λⁿe^(−λ)/n!   (XII.6.3)
Note that the average number of customers arriving within unit time is
E[N] = Σ_{n=0}^{∞} n·PN(n) = λ
Step 4. Using the expressions for the queue length (i.e., the number of people in the queue, say m) of the M/M/d queue with parameter ρ, the owner obtains:
f2(d, s) = [ρ/(d − ρ)]·[Σ_{n=0}^{d} (ρ^(n−d)·d!/n!) + 1]^(−1)   (XII.6.4)

where ρ = sλ is the ratio of the arrival rate to the service rate and is usually less than 1 for practical situations.
Step 5. The objective is to find the values of d and s that minimize the functions f1 and f2. Figures XII.6.1 and XII.6.2 show Pareto-optimal solutions in functional space and in decision space, respectively.
Figure XII.6.1. Pareto-optimal curve for minimizing f1 and f2 in functional space
Figure XII.6.2. Pareto-optimal curve for minimizing f1 and f2 in decision space
ANALYSIS
The optimization problem is difficult since the constraint region is not continuous (the values of d and s are discrete). To simplify the problem, the objective functions are assumed to be piece-wise continuous. However, the optimization problem cannot be solved analytically. Hence the solution is calculated by numerical optimization, first by simulating the objective functions for various values of d and s, and then by using multiobjective optimization to find the Pareto-optimal set of solutions. The Pareto-optimal curve for this optimization in the functional space is shown in Figure XII.6.1. In Figure XII.6.2 the Pareto-optimal curve is plotted in the decision space. It is now left to the decisionmaker to judiciously choose the values of d and s using the Pareto-optimal curve. The indifference band would probably be around the point marked 2 in Figure XII.6.1, since a small change around that point in one objective function yields only a small change in the other objective function. Similarly, if we look at the decision space, the change of one variable around Point 2 (i.e., around (d, s) = (2, 3)) does not change the other variable significantly. Hence, around 2 gas pumps, each with a service time of around 3 units of time, would be beneficial to the gas station. On the other hand, if we look at Points 1, 3, 4, or 5 in the functional space, a slight change in cost would result in a huge change in the waiting time, which may or may not be advantageous for the gas station.
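A sketch of this numerical procedure (grid evaluation followed by a Pareto filter) is given below. The arrival rate λ is not specified in the problem, so the value used here is an assumption, and the mean waiting time is computed with the standard Erlang-C expression for an M/M/d queue rather than Eq. (XII.6.4) verbatim.

```python
import math
from itertools import product

lam = 0.2   # assumed arrival rate (customers per unit time); not given in the problem

def f1(d, s):
    # Installation cost, Eq. (XII.6.1)
    return 4000 + 115 * d**3 - 10 * s**3

def mean_wait(d, s):
    # Mean waiting time in queue for an M/M/d system (Erlang-C form);
    # the service rate per pump is 1/s, so the offered load is rho = lam * s.
    rho = lam * s
    if rho >= d:
        return math.inf                      # unstable queue
    p0 = 1.0 / (sum(rho**n / math.factorial(n) for n in range(d))
                + rho**d / (math.factorial(d) * (1 - rho / d)))
    lq = p0 * rho**d * (rho / d) / (math.factorial(d) * (1 - rho / d)**2)
    return lq / lam                          # Little's law: Wq = Lq / lambda

points = [(f1(d, s), mean_wait(d, s), d, s)
          for d, s in product(range(1, 6), [1, 3, 5, 7])]
points = [p for p in points if math.isfinite(p[1])]
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                     for q in points)]
for cost, wq, d, s in sorted(pareto):
    print(f"d={d}  s={s}  cost={cost}  Wq={wq:.3f}")
```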
PROBLEM XII.7: Ordering Newspapers for Multiple Newsstands A newsstand company wants to know how many copies of two major newspapers to purchase for its entire chain.
DESCRIPTION
A newsstand company operates numerous newsstands in a large city. They are identical in that all carry a certain volume of the two leading local newspapers, and these newspapers have the same distribution in all the newsstands. How many copies of each newspaper should the company purchase, and at what cost, in order to make a profit?
METHODOLOGY We employ the Multiobjective Statistical Method (MSM) to solve this problem.
SOLUTION
The wholesale cost of the newspapers (to the newsstand company) is a function of the state variable. Further assume that, based on past history, all the newspapers that are purchased for the newsstands are sold.
Decision variables
x1 = number of copies of Newspaper A purchased by the company
x2 = number of copies of Newspaper B purchased

State variables
s(r) = number of city residents/workers that are interested in obtaining news = 16000e^(r/10)

Random variables
r = events that would cause a fluctuation in the state variable—for instance, the number and popularity of news stories that would increase/decrease the interest in reading the news. r is uniformly distributed between 10 and 35 major local and national news stories (where popular stories count as multiple stories).

Questionnaire
To obtain the figures on news stories, we queried a plethora of stakeholders—readers, newsstand owners, newspaper editors, and editorial writers. From this questionnaire, we derived the above probability distributions to fit our findings.
Objective functions
Maximize profit = f1 = [0.5 − 3×10⁻⁸·r·s(r)]x1 + [0.5 − 2×10⁻⁸·r·s(r)]x2
Newspapers A and B both charge the newsstand company variable rates depending on the amount of interest in the news; in other words, the price paid is a function of the state variables. The price of Newspaper A is $2.6×10⁻⁸·r·s(r) and the price of Newspaper B is $1.8×10⁻⁸·r·s(r). The cost to the consumer of either newspaper is $0.50. Not included are labor or other costs associated with operating the newsstand.

Minimize shelf space = f2 = volume taken up by each newspaper = (volume of A)x1 + (volume of B)x2
volume of A = (0.09)(10)(15) = 13.5 in³
volume of B = (0.097)(10)(15) = 14.5 in³
Constraints
Demand must be met ⇒ x1 + x2 ≥ (r/350)·s(r) = (r/350)(16000e^(r/10))
Note that demand is considerably less than buyer interest, as the company has several newsstands and there are other media outlets with which the stores compete.

Number of A purchased by the newsstand company ⇒ x1 ≥ (r/350)·s(r)/6 = (r/350)(16000e^(r/10))/6
The company cannot purchase less than one-sixth of the demand for A.

Number of B purchased by the newsstand company ⇒ x2 ≥ (r/350)·s(r)/3 = (r/350)(16000e^(r/10))/3
The company cannot purchase less than one-third of the demand for B.
Simulation
In order to perform the above analysis, a simulation of 1000 randomly generated r values, uniformly distributed on [10, 35], was performed to estimate the expected value of r and to calculate f1, the demand, and the minimum numbers of Newspapers A and B. These numerical values are found in Table XII.7.1.
Table XII.7.1. Simulation Results from Uniformly Distributed r

E[r]                        22
E[s(r)]                     193522
E[Profit]                   0.34x1 + 0.39x2
E[Demand]                   15052
E[Minimum Newspaper A]      2509
E[Minimum Newspaper B]      5107
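The values in Table XII.7.1 can be approximated with a short Monte Carlo sketch (results will differ slightly from the table because of sampling noise):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(10, 35, 1000)          # 1000 draws of the news-story count
s = 16000 * np.exp(r / 10)             # interested readers, s(r)

demand = (r / 350) * s
coef_A = 0.5 - 3e-8 * r * s            # per-copy profit coefficient, Newspaper A
coef_B = 0.5 - 2e-8 * r * s            # per-copy profit coefficient, Newspaper B

print("E[r]       ~", r.mean())                      # cf. 22 in Table XII.7.1
print("E[s(r)]    ~", s.mean())                      # cf. 193,522
print("E[Profit]  ~", f"{coef_A.mean():.2f} x1 + {coef_B.mean():.2f} x2")
print("E[Demand]  ~", demand.mean())                 # cf. 15,052
print("E[min A]   ~", demand.mean() / 6)             # cf. 2,509
print("E[min B]   ~", demand.mean() / 3)             # cf. 5,107
```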
Another simulation of 1000 r values representing the extreme 10% of values was performed, showing demand in the upper 10% of its distribution. Calculating the conditional expectation of r given the extreme 10% of values produces the following table. This conditional expectation is referred to here as E'[r]. Table XII.7.2 summarizes the extreme value simulation result.
Table XII.7.2. Simulation Results from Extreme Values of r

E'[r]                        33.8
E'[s(r)]                     469354
E'[Profit]                   0.02x1 + 0.18x2
E'[Demand]                   45343
E'[Minimum Newspaper A]      7557
E'[Minimum Newspaper B]      15114
Tables XII.7.1 and XII.7.2 provide information for two multiobjective optimization formulations: one for E[r] and one for E'[r]. Figure XII.7.1 shows the cumulative distribution of the random variable r with the shaded area portraying the extreme 10% of the distribution.
Figure XII.7.1. Cumulative Distribution Function (cdf) of r
Multiobjective Optimization Problem – Expected Value

max 0.39x1 + 0.42x2
min 13.5x1 + 14.5x2
subject to
x1 + x2 ≥ 15183
x1 ≥ 2530
x2 ≥ 5060

In ε-constraint form:

min −(0.39x1 + 0.42x2)
subject to
13.5x1 + 14.5x2 ≤ ε21
x1 + x2 ≥ 15183
x1 ≥ 2530
x2 ≥ 5060
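A minimal sketch of how the ε-constraint form above could be solved numerically (Python/scipy), scanning ε21 over the stated volume range and using the coefficients exactly as shown:

```python
import numpy as np
from scipy.optimize import linprog

c = [-0.39, -0.42]                 # minimize the negative of expected profit

for eps21 in np.linspace(1_350_000, 1_500_000, 4):
    A_ub = [[13.5, 14.5],          # shelf-space volume <= eps21
            [-1.0, -1.0]]          # x1 + x2 >= 15183 (negated for <= form)
    b_ub = [eps21, -15183]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(2530, None), (5060, None)])
    x1, x2 = res.x
    print(f"eps21={eps21:,.0f}  x1={x1:,.0f}  x2={x2:,.0f}  profit={-res.fun:,.2f}")
```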
Table XII.7.3 shows the expected values for 1,350,000 in³ ≤ ε21 ≤ 1,500,000 in³.
Table XII.7.3. Expected Values for ε21

x1      x2      f1         f2
6535    3226    4066.53    135000
6609    3226    4096.87    136000
6683    3226    4127.21    137000
6757    3226    4157.55    138000
6831    3226    4187.89    139000
6905    3226    4218.23    140000
6979    3226    4248.57    141000
7053    3226    4278.91    142000
7128    3226    4309.66    143000
7202    3226    4340.00    144000
7259    3242    4370.50    145000
7259    3311    4399.92    146000
7259    3380    4429.59    147000
7259    3449    4459.26    148000
7259    3517    4488.50    149000
7259    3586    4518.17    150000
These values produce the tradeoff graph in Figure XII.7.2 (negative profit vs. volume used), which shows that there is only one optimal point regardless of the volume size.
Figure XII.7.2. Tradeoff between Competing Objectives with Expected Values
Multiobjective Optimization Problem—Conditional Expected Value
max 0.18x1 + 0.26x2
min 13.5x1 + 14.5x2
subject to
x1 + x2 ≥ 45250
x1 ≥ 7542
x2 ≥ 15083

In ε-constraint form:

min −(0.18x1 + 0.26x2)
subject to
13.5x1 + 14.5x2 ≤ ε21
x1 + x2 ≥ 45250
x1 ≥ 7542
x2 ≥ 15083
Table XII.7.4 shows the conditional expected values for 1,350,000 in³ ≤ ε21 ≤ 1,500,000 in³.
Table XII.7.4. Conditional Expected Values for ε21

x1       x2       f1        f2
7542     86082    -23739    1350000
7542     86771    -23918    1360000
7542     87461    -24097    1370000
7542     88151    -24277    1380000
7542     88840    -24456    1390000
7542     89530    -24635    1400000
7542     90220    -24815    1410000
7981     90500    -24967    1420000
8722     90500    -25100    1430000
9463     90500    -25233    1440000
10204    90500    -25367    1450000
10944    90500    -25500    1460000
11685    90500    -25633    1470000
12426    90500    -25767    1480000
13167    90500    -25900    1490000
13907    90500    -26033    1500000
These values produce the tradeoff graph in Figure XII.7.3, which shows the tradeoff between the two competing objectives for the conditional expected value analysis. The line represents the Pareto optimum. We can see that improving one objective definitely degrades the other. Also, there is a cusp around (f1, f2) = (−24,815, 1,410,000).
ANALYSIS
From the results, we are able to give the policymaker two distinct Pareto-optimal frontiers to decide upon, one based on expected values and one on conditional expected values; the two frontiers are, of course, interrelated. The Multiobjective Statistical Method (MSM) gives us a holistic thought process to analyze this problem from many different angles.
Figure XII.7.3. Tradeoff between Competing Objectives with Conditional Expected Values
PROBLEM XII.8: Machine/Manpower Resource Determination
The purpose of this problem is to minimize the cycle time and unit cost of a production system. A critical problem in production is resource determination. Resources are the manpower, equipment, materials, power, and other input factors used in the production of goods. To simplify the system, this study is limited to two types of resources: manpower and machine. Use 1) the Multiobjective Statistical Method (MSM) and 2) the Surrogate Worth Tradeoff (SWT) method to arrive at a production decision. Figure XII.8.1 describes the production system setup: Process 1 (P1) is fully-automated processing (a machine server) and Process 2 (P2) is manual packaging (a worker server); the simulation model contains arrive, server, and depart blocks for each process, together with simulation and production-line statistics blocks.
Figure XII.8.1. Simulated System Setup

The system variables are as follows:
Decision Variables
X1: No. of machines to be used for Process 1 (fully-automated processing)
X2: No. of workers/manpower for Process 2 (packaging)

State Variables
Among the various state variables for production, those relevant to this problem are:
S1: (WIP) Work-in-process units (unfinished units due to bottlenecks, unfinished daily production)
S2: (FGs) Finished goods produced by the system at the end of the day, as a function of system capacity and machine/manpower reliability

Random Variables
Machine/Worker rates:
Machine: exp(2 units/minute)
Worker: exp(1 unit/minute)
Machine Breakdown: exp(1 breakdown per day)

Objectives
f1: Min average cycle time
f2: Min average unit cost
From the previous description of the production system, we used ARENA to construct a simulation model to generate the data that will be used for analysis. Table XII.8.1 shows the results.
Table XII.8.1. Output Statistics on the ARENA Simulation Model

Design Inputs          Model Outputs
Machines   Workers     Cycle Time    WIP        Fin. Goods   B/down
(X1)       (X2)        (f1)          (S1)       (S2)         (b)
1          1           155.36        466.35     508          0.0208
1          2           90.24         273.51     885          0.0417
1          3           85.616        264.17     889          0.0417
1          4           83.871        260.53     891          0.0417
1          5           85.088        257.52     918          0.0417
1          6           85.148        261.89     896          0.0417
2          1           160.11        486.56     462          0.0417
2          2           77.798        239.33     940          0.0208
2          3           9.9573        30.848     1350         0.0208
2          4           2.6637        7.9814     1428         0.0417
2          5           2.1004        6.2969     1435         0.0208
2          6           1.9964        5.9858     1435         0.0208
3          1           149.48        474.49     467          0.0208
3          2           84.941        252.14     947          0.0208
3          3           6.8959        21.446     1369         0.0208
3          4           1.8131        5.4368     1434         0.0417
3          5           1.6212        4.8635     1436         0.0208
3          6           1.5884        4.7652     1436         0.0208
Cycle Time
Cycle time is defined as the time it takes for a unit to be produced. This was computed as a function of WIP (S1) and FG (S2). From simulation and regression analysis (using MINITAB), the f1 function is defined as:

f1(S1, S2) = 0.457S1 + 0.0647S2 − 92.8
The two scatter plots in Figure XII.8.2 show Cycle Time (f1) against Work in Process (S1) and against Finished Goods (S2), with fitted trendlines f1 = 0.3261·S1 + 0.1196 (R² = 0.9991) and f1 = −0.1608·S2 + 231.24 (R² = 0.9964), respectively.
Figure XII.8.2. Superposition of Relationship between f1, S1, and S2

Unit Cost
The unit cost is simply computed as the total operation cost (attributed to machine/manpower and maintenance costs) divided by the number of finished goods (S2) produced for the period. The assumed salary and operation costs used in the computation are:
Cost Factor                                               $ Cost    Unit
(A) Worker salary                                         100       $ per worker per day
(C) Maintenance cost (fixed)                              60        $ per machine per day
(D) Breakdown cost (lost production and repair costs)     500       $ per machine per day
    (applied in proportion to b, the average fraction of time the machine is down per day)

The unit cost is computed as:

f2(x1, x2, S2) = [(1500 + 60 + 500b)·x1 + 100·x2] / S2
The two scatter plots in Figure XII.8.3 show Average Unit Cost (f2) against the number of machines (x1) and against the number of workers (x2), with fitted trendlines f2 = 1.2384·x1 + 1.0724 (R² = 0.9879) and f2 = 0.356·x2² − 3.1077·x2 + 9.0272 (R² = 0.9053), respectively.
Figure XII.8.3. Superposition of Relationships between f2, x1, and x2
Using Regression to Translate State Variables to Decision Variables
Using regression analysis (MINITAB), the following relationships are determined:

S1 = 1158 + 82.8x1² − 416x1 + 30.4x2² − 282x2
S2 = −169x1² + 850x1 − 59.4x2² + 557x2 − 900

Complete the rest of the problem using multiobjective optimization.
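As a starting point, the regression relationships above can be chained with the f1 and f2 expressions to evaluate both objectives over the decision grid. The sketch below is illustrative only; the breakdown proportion b is an assumed value (roughly the levels reported in Table XII.8.1).

```python
b = 0.03   # assumed average breakdown proportion (cf. Table XII.8.1)

def state_vars(x1, x2):
    # Regression relationships between decision and state variables.
    s1 = 1158 + 82.8*x1**2 - 416*x1 + 30.4*x2**2 - 282*x2
    s2 = -169*x1**2 + 850*x1 - 59.4*x2**2 + 557*x2 - 900
    return s1, s2

def objectives(x1, x2):
    s1, s2 = state_vars(x1, x2)
    f1 = 0.457*s1 + 0.0647*s2 - 92.8                 # average cycle time
    f2 = ((1500 + 60 + 500*b)*x1 + 100*x2) / s2      # average unit cost
    return f1, f2

for x1 in range(1, 4):          # machines
    for x2 in range(1, 7):      # workers
        f1, f2 = objectives(x1, x2)
        print(f"x1={x1}  x2={x2}  f1={f1:8.2f}  f2={f2:6.2f}")
```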
PROBLEM XII.9: School Bus Company Faces Growing Demand A consulting company is tasked with assessing the risks associated with a school bus company's future needs. The bus company is currently assessing the increased demand placed upon it. The number of schools, and hence the number of students, has grown exponentially over the past two years. The bus company anticipates the growth to continue over the next several years and is looking to:
• Minimize the costs associated with company growth - f1
• Minimize the number of students injured - f2
These goals will constitute the two objective functions for assessing the company's risk. The consulting company has determined that the most convenient way to assess risk is in terms of state variables rather than the two decision variables, and it will use the Multiobjective Statistical Method (MSM) to construct the risk model. This tool will allow the consultants to regenerate the objective functions in terms of decision variables. With the new objective functions and decision variables, risks can then be assessed using the Surrogate Worth Tradeoff (SWT) method. Below are the details of the assessment.
The Multiobjective Statistical Method (MSM)
The bus company is looking to expand in the following ways:
1. Increase the number of times each bus will stop to pick up children by 3, 4, or 5 more stops.
2. Determine whether 14, 16, or 18 is the optimum number of buses the company should operate for highest safety.
With these two decision variables in mind, the company will try to minimize the costs of future growth, f1, and minimize the number of students injured, f2. Listed below are all the variables and objective functions used for assessing the bus company's risks. For academic purposes, several variables and objectives were left out of this discussion (e.g., the time spent on each route, traffic conditions, weather conditions, and condition of buses). Other liberties were taken in calculating costs, objective functions, state variables, and assumptions.
Assumptions:
1. Presently, each bus stops 10 times.
2. Dollar amounts are in terms of millions.
3. Each bus can hold a maximum of 18 students.
4. Each stop does not have the same number of students.
5. The number of injuries given collective time on the bus is at a rate of 1 injury per 360 hours (1 school year of riding the bus 2 hours per day for 180 days).
Random variables:
1. r1 is the random number of students per stop, with no more than 5 students per stop and a probability distribution of 0.8.
2. r2 is the random number of injuries per bus.

Decision variables:
1. Number of times each bus will stop, where x1 is 3, 4, or 5 additional stops.
2. Number of buses the company will use, where x2 is 10, 12, 14, 16, or 18.
State variables:
1. Number of students per bus = N(r1, x1) = E(r1) · x1
2. Number of injuries per day = T(r2, x1, x2) = E(r2) · x1 · x2
The following objective functions are assumed to be the result of simulating the above variables and assumptions.
Objective functions:
1. min [cost of company growth ($millions)] = f1(x1, x2) = (x1 − 3)² + (x2 − 5)² + 4
2. min [number of students injured] = f2(x1, x2) = (x1 − 4)² + (x2 − 12)² + 6
Complete the remainder of the multiobjective optimization problem.
PROBLEM XII.10: Estimating Construction Time
A construction company in Charlottesville, VA receives a contract to widen a short section of a major commuter highway. Rain and the breakdown of machinery are two elements that can delay the completion of the project. The company must decide how many workers and machines should be used to complete the construction while minimizing both the total cost and the construction time. Since we are asked to minimize two objective functions, we must estimate the parameters s1 and s2 in the objective functions (Eqs. (XII.10.1) and (XII.10.2)). These values will be calculated based on the Multiobjective Statistical Method (MSM), and then a linear regression can be applied. Mathematically, this problem can be described as follows. Assume the maximum number of workers = 80 and the maximum number of machines = 135.
Decision variables:
x1 = number of workers needed
x2 = number of machines needed

Random variables:
r1 = number of rainy days per month
r2 = number of machine downtimes per day

State variables:
s1 = s(r1) = status index of rain conditions; the relationship between s1 and r1 is s1 = 50r1
s2 = s(r2) = status index of machine breakdown; the relationship between s2 and r2 is s2 = 0.2r2

Objective functions:
f1(x1, x2, s1) = 500x1 + 100x2 + s1   (XII.10.1)
f2(x1, x2, s1, s2) = [100/(x1 − 30) + 100/(x2 − 20)] × (1 + s2/7.6) × (1 + s1/3000)   (XII.10.2)

where:
f1(x1, x2, s1) = total cost (USD) of construction
f2(x1, x2, s1, s2) = total time (months) for the project
Minimize the objectives f1 and f2 after obtaining the values of the parameters by simulating the random variables. Refer to Tables XII.10.1 and XII.10.2 for the sample values of the two random variables, r1 and r2.
For r1: The historical rain data (February 2007) for the city of Charlottesville is listed in Table XII.10.1.
Table XII.10.1. Precipitation in Charlottesville, VA for February 2007¹

Date               1     2     3     4     5     6     7     8     9     10    11    12    13    14
Precip. (inches)   0     0     0     0     0     0.1   0     0     0     0     0     0.02  0.75  0.51

Date               15    16    17    18    19    20    21    22    23    24    25    26    27    28
Precip. (inches)   0     0     0     0     0     0.04  0     0     0     0     0.76  0     0.01  0
Assume the number of rainy days per month is Poisson-distributed. For example, if the total number of rainy days is 6, we can say r1~Poisson(6) for the month.
For r2 the distribution for machine downtime, known from experience, is listed in Table XII.10.2:
Table XII.10.2. Probability Distribution of Machine Downtime

Downtime (per day)    Probability
0                     0.05
1                     0.12
2                     0.14
3                     0.15
4                     0.17
5                     0.13
6                     0.11
7                     0.08
8                     0.05
¹ The Weather Channel, Monthly Weather for Charlottesville, VA. Available online:
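A short sketch of the parameter-estimation step (simulating r1 and r2 to obtain the expected state variables s1 and s2), with the rainy-day count read from Table XII.10.1 and the downtime distribution from Table XII.10.2:

```python
import numpy as np

rng = np.random.default_rng(0)

# r1: rainy days per month, Poisson with mean equal to the February 2007 count.
precip = [0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0.02, 0.75, 0.51,
          0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0.76, 0, 0.01, 0]
rainy_days = sum(p > 0 for p in precip)        # 7 rainy days in Table XII.10.1
r1 = rng.poisson(rainy_days, 10000)

# r2: machine downtimes per day, discrete distribution from Table XII.10.2.
downtimes = np.arange(9)
probs = [0.05, 0.12, 0.14, 0.15, 0.17, 0.13, 0.11, 0.08, 0.05]
r2 = rng.choice(downtimes, size=10000, p=probs)

s1 = 50 * r1.mean()     # expected status index of rain conditions
s2 = 0.2 * r2.mean()    # expected status index of machine breakdown (about 0.76)
print("E[s1] =", s1, "  E[s2] =", s2)
```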
PROBLEM XII.11: Paper Mill Pollution
A paper plant on a river needs to minimize its disposal costs (f1) as well as the amount of pollution that the disposal of its wastes adds to the river (f2). New state regulations are forcing a reevaluation of how much waste a paper mill can typically discard into a river. These new regulations impose a financial penalty based on the level of pollution in that section of the river. The pollution level is a function of the amount that the plant discards into the river as well as the amount of rainfall, which is a random variable. The plant has the option of reprocessing the wastes at a cost; therefore it must decide how much to discard into the river and how much to reprocess. The paper plant manager wants to know 1) how much waste can be discarded into the river to conform to the new regulations, and 2) the costs of reprocessing the waste. Perform the Multiobjective Statistical Method (MSM) for the above pollution control problem using the functions and variable definitions below. The cost of reprocessing waste is 19R² dollars (R in gallons) and the penalty for a given pollution level P is 12.8P² dollars, as reflected in the objective function f1.
Objective functions:
min f1(D, R, P) = 19R² + 12.8P²
min f2 = P = (D²/1,000,000)^(1/3) · (1/L)

subject to:
D + R = 250
D, R, L ≥ 0

State variable:
P = level of pollution in the river = (D²/1,000,000)^(1/3) · (1/L)
In this case, the state variable is also one of the objective functions.

Decision variables:
D = amount of waste to discard into the river (gallons)
R = amount of waste to reprocess (gallons)

Random variable:
L = amount of precipitation per month over a two-year period (inches). L is a random variable with a uniform distribution between 0 and 10 inches, ~U(0, 10).