English, 391 pages, 2012.
Copyright © 2012 Nova Science Publishers, Incorporated. All rights reserved.
Linear Programming: New Frontiers in Theory and Applications. Nova Science Publishers, Incorporated.
MATHEMATICS RESEARCH DEVELOPMENTS
LINEAR PROGRAMMING: NEW FRONTIERS IN THEORY AND APPLICATIONS
No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.
Additional books in this series can be found on Nova's website under the Series tab.
Additional e-books in this series can be found on Nova's website under the Ebooks tab.
LINEAR PROGRAMMING: NEW FRONTIERS IN THEORY AND APPLICATIONS
ZOLTAN ADAM MANN
EDITOR
Nova Science Publishers, Inc.
New York
Copyright © 2012 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise, without the written permission of the Publisher.

For permission to use material from this book please contact us: Telephone 631-231-7269; Fax 631-231-8175. Web site: http://www.novapublishers.com

NOTICE TO THE READER

The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers' use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works.
Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS. Additional color graphics may be available in the ebook version of this book.
Library of Congress Cataloging-in-Publication Data

Mann, Zoltan Adam.
Linear programming : new frontiers in theory and applications / Zoltan Adam Mann.
p. cm.
Includes index.
ISBN 9781612095790 (e-book)
1. Linear programming. I. Title.
T57.74.M36 2011
519.7'2--dc22
2011002577
Published by Nova Science Publishers, Inc., New York
CONTENTS

Preface  vii

Part I. Theory  1

Chapter 1  Two-Stage Stochastic Mixed Integer Linear Programming
           Jian Cui  3

Chapter 2  Interval Linear Programming: A Survey
           Milan Hladík  85

Chapter 3  The Infinite-Dimensional Linear Programming Problems and Their Approximation
           N. B. Pleshchinskii  121

Part II. Applications in Mathematics  133

Chapter 4  A Polynomial-Time Approximation Algorithm for Maximum Concurrent Flow Problems
           Suh-Wen Chiou  135

Chapter 5  Minimizing a Regular Function on Uniform Machines with Ordered Completion Times
           Svetlana A. Kravchenko  159

Part III. Practical Applications  173

Chapter 6  Linear Programming for Irrigation Scheduling: A Case Study
           H. Md. Azamathulla  175

Chapter 7  Linear Programming for Dairy Herd Simulation and Optimization: An Integrated Approach for Decision-Making
           Victor E. Cabrera and Peter E. Hildebrand  193

Chapter 8  A Review on Linear Programming (LP) and Mixed Integer Linear Programming (MILP) Applications in Chemical Process Synthesis
           J. A. Caballero and M. A. S. S. Ravagnani  213

Chapter 9  A Medium-Term Production Planning Problem: The EPS Logistics
           Jian Cui  283

Chapter 10  Complexity of Different ILP Models of the Frequency Assignment Problem
            Zoltan Adam Mann and Aniko Szajko  305

Chapter 11  Optimization of Polygeneration Systems Serving a Cluster of Buildings
            A. Piacentino, C. Barbaro, R. Gallea and F. Cardona  327

Chapter 12  Linear Programming Applied for the Optimization of Hydro and Wind Energy Resources
            H.M.I. Pousinho, V.M.F. Mendes and J.P.S. Catalão  351

Index  373
PREFACE

LINEAR PROGRAMMING: A MULTIDISCIPLINARY SUCCESS STORY

Simple and powerful. These are the main characteristics that make linear programming (LP) such a widely used technique. At the crossroads of mathematics, operations research, and computer science, linear programming has become a mature and well-understood tool to address problems in science, engineering, economics, and mathematics itself. There are three key elements to this tremendous success:
Intuitive modeling. Linear constraints and linear objective functions are easily understood by theoreticians and practitioners alike. An accompanying geometric interpretation further supports this understanding. In many fields of application, linear constraints and linear objective functions arise in a natural way, as the object of study is itself a linear system or a linear model of a (possibly nonlinear) system. Extensions of linear programming, in particular (mixed) integer linear programming (ILP / MILP), expand the applicability of LP even more.

Powerful algorithms. Although the worst-case runtime of the "good old" simplex algorithm grows exponentially in the size of the problem, in practice it is usually very fast. Hence, the simplex algorithm and its variants keep playing an important role in solving practical LP instances. Of course, the last couple of decades have witnessed strong development in other solution techniques as well, in particular in interior-point algorithms. As a consequence, LP instances can be solved in polynomial time. ILP and MILP are NP-hard, so that a polynomial-time algorithm for these extended problems cannot be hoped for. Nevertheless, significant progress has been made in this field as well: for instance, branch-and-cut algorithms are often very efficient in pruning large parts of the search space, thus reducing solution time.

Availability of solvers. The practical importance of LP created large demand for software packages able to solve LP problems. As a result, the algorithms mentioned above were soon transformed into practical solvers. In turn, significant experience has been gained in the implementation of LP algorithms, including the optimal data structures, presolve techniques, and other tricks to make the algorithms more efficient and more immune to numerical instabilities. Today, a huge number of
mature LP and ILP solvers are available both in the commercial and the open-source software space.

Altogether, LP is now a mainstream technique, with mature theoretical underpinnings and a vast array of applications. Nevertheless, progress continues at a high pace. Every day, new application areas arise, and we gain a better understanding of how to best leverage LP in a given application. Also, theoretical advancements are made, especially with new, emerging extensions of LP.

This book is a collection of such new advancements in the field of LP. It includes theoretical contributions about extensions of LP, as well as reports on applying LP in different settings. A high-level overview of the book's structure is given in Figure 1. As can be seen, the book consists of three parts: (1) Theory, (2) Applications in mathematics, and (3) Practical applications.

The Theory part contains contributions highlighting new or lesser known extensions of LP. "Two-stage stochastic mixed integer linear programming" presents a new approach to handle uncertainties in LP in an efficient fashion. The main idea is to model the immediate future, with known probability distributions, by a tree of scenarios of the various uncertainties within a given time horizon, whereas the distant future is represented by the expected values of the stochastic quantities. "Interval linear programming: A survey" addresses a different technique with a similar goal: coping with uncertainty in LP by means of interval arithmetic. The basic assumption is that we are given lower and upper bounds on the quantities in the LP, and the quantities may perturb independently and simultaneously within these bounds. The chapter surveys results on the optimal value range, basis stability, enclosures of optimal solutions, duality, and complexity issues. "The infinite-dimensional linear programming problems and their approximation" presents an abstract generalization of LP to infinite-dimensional linear spaces.
As it turns out, some of the basic properties of LP hold in generalized LP as well. Moreover, under certain conditions, traditional LP problems can provide a natural approximation of infinite-dimensional LP.

The chapters in the Applications in mathematics part demonstrate that the usefulness of LP is not limited to practical applications: LP methods can also be used to derive results in other fields of mathematics. "A polynomial-time approximation algorithm for maximum concurrent flow problems" presents an improved combinatorial approximation algorithm for a class of multicommodity flow problems, based on an LP formulation of the problem. "Minimizing a regular function on uniform machines with ordered completion times" addresses a class of preemptive scheduling problems on uniform parallel machines in which the order of the completion times of the jobs is given. As it turns out, the problem can be reduced to an LP problem in polynomial time and thus can be solved efficiently.

The third and biggest part of the book presents Practical applications. "Linear programming for irrigation scheduling: A case study" is concerned with water management, a crucial issue for a huge part of the world. It uses LP to optimize water reservoir operation, also taking into account the optimal allocation of the water available to crops at the farm level.
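The geometric interpretation mentioned above is worth making concrete: a linear objective over a polyhedral feasible region attains its optimum at a vertex. The following sketch is not the simplex algorithm; it simply brute-forces the vertices of a tiny, made-up two-variable instance by intersecting pairs of constraint boundaries and keeping the feasible intersection points.

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to
#   x + y <= 4,  x <= 3,  y <= 2,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [
    (1, 1, 4),
    (1, 0, 3),
    (0, 1, 2),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# The optimum of a (bounded, feasible) LP is attained at a vertex of the
# feasible polytope, so intersect every pair of constraint boundaries.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines, no intersection point
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1])
print(best, 3 * best[0] + 2 * best[1])
```

Enumerating all vertices is exponential in general; the point of the simplex method is to walk from vertex to better vertex instead of listing them all.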
[Figure 1 groups the chapters as follows.]

Theory:
- J. Cui: Two-stage stochastic mixed integer linear programming
- M. Hladík: Interval linear programming: A survey
- N. B. Pleshchinskii: The infinite-dimensional linear programming problems and their approximation

Applications in mathematics:
- S.-W. Chiou: A polynomial-time approximation algorithm for maximum concurrent flow problems
- S. Kravchenko, F. Werner: Minimizing a regular function on uniform machines with ordered completion times

Practical applications:
- Agriculture: H. Md. Azamathulla: Linear programming for irrigation scheduling: A case study; V. E. Cabrera, P. E. Hildebrand: Linear programming for dairy herd simulation and optimization: An integrated approach for decision-making
- Chemical engineering: J. A. Caballero, M. A. S. S. Ravagnani: A review on linear programming (LP) and mixed integer linear programming (MILP) applications in chemical process synthesis; J. Cui: A medium-term production planning problem: The EPS logistics
- Electrical engineering: Z. Á. Mann, A. Szajkó: Complexity of different ILP models of the frequency assignment problem
- Power engineering: A. Piacentino, C. Barbaro, R. Gallea, F. Cardona: Optimization of polygeneration systems serving a cluster of buildings; H.M.I. Pousinho, V.M.F. Mendes, J.P.S. Catalão: Linear programming applied for the optimization of hydro and wind energy resources
Figure 1. Structure of the book.
Another application in agriculture, "Linear programming for dairy herd simulation and optimization: An integrated approach for decision-making", addresses problems of livestock management. It uses a Markov-chain model to simulate the evolution of a dairy herd. LP is applied to compute the economic net return of the herd. The method can be used to assess the impact of different herd management practices and to find optimal replacement policies, reproductive parameters, and feeding strategies.

The chemical applications block starts with "A review on linear programming (LP) and mixed integer linear programming (MILP) applications in chemical process synthesis". This chapter presents a generic framework and a number of examples showcasing the applications of LP, especially in high-level decisions, in chemical process synthesis. The examples range from reaction path synthesis over plant synthesis and superstructure optimization to membrane separation and the design of sequences of distillation columns. Another chapter on applications in chemical engineering, "A medium-term production planning problem: The EPS logistics", presents a real-world production planning problem in detail. The model includes the material flows within a polymer production plant with batch and continuous production steps, where different amounts of final products are delivered by each batch according to the chosen recipes, and also a marketplace in which the products can be sold by assigning them to different demands.

"Complexity of different ILP models of the frequency assignment problem" looks at a different application space: frequency planning in wireless networks. The aim is to assign frequencies to a set of transmitters, subject to interference constraints. The chapter presents a number of different ILP models for such problems, and analyzes empirically how the
complexity of a problem instance depends on its parameters (number of variables, number of frequencies, etc.).

"Optimization of polygeneration systems serving a cluster of buildings" showcases an application in power engineering. It presents an approach to design and optimize district energy systems, assuming that detailed data are available concerning the energy consumption profiles and the location of the buildings. The method evaluates a number of feasible layouts for the heat distribution network and relies upon predefined cost models for plant components in order to identify the most profitable plant design and operation. The last chapter, "Linear programming applied for the optimization of hydro and wind energy resources", describes two further applications from the electric power industry: short-term hydro scheduling and the development of offering strategies for wind power producers. For hydroelectric companies, the aim is to find the optimal scheduling of hydroelectric power plants for a short-term period in which the electricity prices are forecasted. The challenges for wind power producers are related to two kinds of uncertainty: wind power and electricity prices.

Altogether, the 12 chapters of this book cover a wide range of theoretical and practical aspects of the state of the art in LP. For sure, the presented methods, ideas, experiences, and results will provide inspiration to researchers and practitioners in other areas of LP as well, thus further catalyzing progress in this already successful field.
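One of the chapters summarized above couples a Markov-chain herd simulation with LP. As a minimal sketch of the Markov-chain part only, here is one illustrative transition step over three hypothetical herd states; all states and probabilities are invented and are not taken from that chapter.

```python
# Hypothetical 3-state herd Markov chain: heifer -> cow -> culled.
# Transition probabilities are made up purely for illustration.
P = [
    [0.0, 0.9, 0.1],  # heifer: 90% becomes a productive cow, 10% culled
    [0.0, 0.8, 0.2],  # cow: 80% remains a cow, 20% culled
    [0.0, 0.0, 1.0],  # culled is an absorbing state
]

def step(dist, P):
    """One Markov-chain step: next[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # start the simulation with a herd of heifers
for _ in range(2):       # simulate two periods
    dist = step(dist, P)
print(dist)
```

In the chapter's approach, state distributions like `dist` feed the economic model whose net return is then optimized by LP.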
PART I. THEORY
In: Linear Programming: New Frontiers in Theory and Applications
Editor: Zoltan Adam Mann
ISBN 9781612095790
© 2012 Nova Science Publishers, Inc.
Chapter 1
TWO-STAGE STOCHASTIC MIXED INTEGER LINEAR PROGRAMMING

Jian Cui
Department of Biochemical and Chemical Engineering, TU Dortmund, Germany
ABSTRACT
In real-world situations, the uncertainties and the decision process are stochastic due to their evolution over time. Two-stage stochastic mixed integer linear programming with recourse (2SMILP) is one of the most promising methods to model this dynamic stochastic process. In this chapter, a novel dynamic 2SMILP formulation for evolving multi-period multi-uncertainty (MPMU) is developed, and a corresponding method, a rolling horizon strategy (RHS), is proposed. In order to reduce the computational effort, the immediate future, with known probability distributions, is modeled by a tree of scenarios of the various uncertainties within a time horizon I1, whereas the distant future is represented by the expected values (EVs) within a time horizon I2. Both I1 and I2 roll along the unlimited time axis due to the evolution of the MPMU. When all the parameters of the MPMU are the same in every rolling step, the RHS is preferably called a moving horizon strategy (MHS). For planning / scheduling problems under static MPMU information within a fixed horizon, i.e., when I1 and I2 are fixed, a scenario group based approach (SGA), the semi-dynamic form of the RHS with a shrinking second stage, is proposed for the ergodic process on the scenario tree composed of these uncertainties. The underlying approaches are applied to the case study of Chapter 9, "A Medium-Term Production Planning Problem: The EPS Logistics", and numerical simulations show the performance for different combinations of uncertainties.
Email: [email protected]
1. INTRODUCTION

Over the last decades, optimization algorithms and methods have been developed and applied to model and optimize industrial processes with various uncertainties involved, motivated by the economic benefits. It has been recognized that considering uncertainty systematically is as important as developing a good deterministic model itself (Ierapetritou and Li, 2009), since the solution can become infeasible or lead to other unfavorable situations when deterministic models are simply used (Lin et al., 2004) in the face of uncertain information. Uncertainty is associated with many parameters of planning and scheduling problems. According to Schnelle (Schnelle and Bassett, 2006), a classification of uncertainties in the process industry is:
- sales and marketing (e.g., changes in product orders or order priority, changes of demand and pricing from forecasts);
- process (e.g., batch cycle time variability, yields, changeover times, recipe variations);
- production (e.g., batch or equipment failures, unavailable raw material, resource changes);
- new processes and products (e.g., ramp-up rates for cycle times and unit ratios).
For handling these uncertainties, different models and optimization approaches for the different planning and scheduling problems (Figure 1.1) have been proposed, which can be classified as preventive methods and reactive methods (Li and Ierapetritou, 2009). Preventive methods generate planning and scheduling policies before the uncertainty occurs, and these decisions may be modified as time passes. The class of preventive methods and application examples include: interval linear programming (e.g., Ben-Israel and Robers, 1970; Huang and Moore, 1993; Chinneck and Ramadan, 2000; Hansen and Walster, 2004; Oliveira and Antunes, 2007; Liu et al., 2010), sensitivity analysis and parametric programming (e.g., Jia and Ierapetritou, 2004; Acevedo and Pistikopoulos, 1997), robust scheduling (e.g., Vin and Ierapetritou, 2001; Lin et al., 2004; Janak et al., 2007; Jia and Ierapetritou, 2007) and robust planning (e.g., Paraskevopoulos et al., 1991; Leung et al., 2007), two-stage / multi-stage stochastic programming in scheduling (e.g., Balasubramanian and Grossmann, 2002; Bonfill et al., 2004) and in planning (e.g., Liu and Sahinidis, 1996; Gupta and Maranas, 2000a; Ahmed and Sahinidis, 2003), chance constraint programming in scheduling (e.g., Orcun et al., 1996; Petkov and Maranas, 1997) and in planning (e.g., Gupta et al., 2000b; Barbaro and Bagajewicz, 2004), and fuzzy programming in scheduling (e.g., Balasubramanian and Grossmann, 2003; Wang, 2004; Petrovic and Duenas, 2006) and in planning (e.g., Liu et al., 1997; Hsieh and Chiang, 2001). Reactive methods modify nominal plans and schedules during process operation to adapt to unexpected events (uncertainty) in the production environment (e.g., Cott and Macchietto, 1989; Kanakamedala et al., 1994; Honkomp et al., 1997; Vin and Ierapetritou, 2000; Mendez and Cerda, 2004; Janak et al., 2006a, b).
A novel approach based on parametric programming was proposed by Li and Ierapetritou (2008) to improve the efficiency of reactive scheduling. Also, Ryu and Pistikopoulos (2003) developed a bilevel approach using parametric programming in supply chain planning. A concept of marginal
value analysis was introduced by Wenkai et al. (2003) and Neiro (2003) to describe a large refinery scheduling and inventory management model under uncertainty of product prices.
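Among the preventive methods listed above, interval linear programming replaces exact data with lower and upper bounds. As a naive sketch on a toy one-variable instance (all interval data invented; real interval LP analysis is considerably subtler, as Chapter 2 surveys), the range of the optimal value can here be found by enumerating interval endpoints, because for positive data the optimal value c*b/a is monotone in each parameter.

```python
from itertools import product

# Toy interval LP: minimize c*x subject to a*x >= b, x >= 0,
# with interval data a in [2, 3], b in [6, 9], c in [1, 2], all positive.
# The optimum is x* = b/a with value c*b/a, monotone in a, b and c,
# so the extreme optimal values occur at interval endpoints.
a_iv, b_iv, c_iv = (2, 3), (6, 9), (1, 2)

values = [c * b / a for a, b, c in product(a_iv, b_iv, c_iv)]
best_case, worst_case = min(values), max(values)
print(best_case, worst_case)   # range of the optimal value over the intervals
```

In general the parameters may also interact non-monotonically, which is exactly why computing optimal value ranges is a research topic rather than endpoint enumeration.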
Figure 1.1. Optimization approaches for planning and scheduling under uncertainty.
Detailed reviews of the underlying models and approaches can be found in Sahinidis (2004) and Ierapetritou and Li (2009). The main advantage of stochastic programming (usually with recourse) is that it effectively combines realistic operational features with uncertainty, where recourse often appears in the applications. Stochastic programming with recourse is a realistic formulation of online resource allocation problems in which there is uncertainty about the future evolution, e.g., about demands, plant capacity, or product yields, and this uncertainty is removed as time progresses because new information is obtained. The key idea is the representation of the information structure: the decision variables in each stage depend on the deterministic information that is available at this point in time, but take into account the uncertain future and in particular the potential of reacting to the future development by adapting those decision variables (the recourse) that do not have to be implemented immediately. Real-world problems lead to multi-stage stochastic programming problems with recourse (Balasubramanian and Grossmann, 2004; Alonso-Ayuso et al., 2005; Puigjaner and Laínez, 2008).
Figure 1.2. The concepts of "stage" and "period" in stochastic programming with recourse.
Figure 1.3. Multi-stage (left) and two-stage (right) stochastic optimization problems represented by scenario trees.
The concepts of "stage" and "period" are important in stochastic programming with recourse. A "stage" refers to the separation between before and after the realization of some uncertain information (e.g., ξi and ξi+1), while a "period" is the result of the discretization of the planning horizon (Figure 1.2). Thus, one stage covers at least one period. By adopting discrete probability distribution functions for better computational tractability, the uncertainty is modeled by a scenario tree with N stages (Figure 1.3, left). The decision process progresses along this scenario tree. In stage i, the decision is based on the certain information on the realization of a path in the tree up to this node, whereas the future evolution is only known probabilistically and is represented by the subtree that starts at the corresponding node of the tree. To solve multi-stage problems, at each stage the reaction of the algorithm to the information obtained at later stages must be taken into account, leading to a complex nested structure whose rigorous solution is computationally demanding. Two-stage stochastic programming, as shown in Figure 1.3 (right), is the most promising method to approximate multi-stage stochastic problems, since it is a good compromise between a rigorous solution and lower computational effort. Assuming the uncertainties occur in period 4, the decision variables are divided into first-stage (periods 1-3) and second-stage (periods 4-10) variables. For further information about stochastic programming, the reader is referred to Birge and Louveaux (1997). Obviously, for a complete representation of multi-stage stochastic problems in which uncertainties are involved in each of the N periods / stages, a strategy has to be developed for handling these uncertainties in two-stage stochastic programming settings. Moreover, a method for real-time decision problems, in which the information about the present and the future unfolds iteratively, should be considered.
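A scenario tree over independent uncertain stages can be sketched as the cross product of the per-stage outcome distributions, with path probabilities given by products of branch probabilities. The outcome labels and probabilities below are invented for illustration; the point is that the number of leaf scenarios multiplies with every added stage.

```python
from itertools import product

# Two independent uncertain stages, each with three discrete outcomes,
# giving 3 * 3 = 9 leaf scenarios (probabilities are illustrative only).
stage1 = {"high": 0.3, "avg": 0.5, "low": 0.2}
stage2 = {"high": 0.3, "avg": 0.5, "low": 0.2}

scenarios = {}
for (o1, p1), (o2, p2) in product(stage1.items(), stage2.items()):
    # Independence: the probability of a root-to-leaf path is the
    # product of the branch probabilities along the path.
    scenarios[(o1, o2)] = p1 * p2

print(len(scenarios))            # 9 leaves; grows exponentially in the stages
print(sum(scenarios.values()))   # path probabilities sum to 1
```

A two-stage approximation collapses this tree so that only the first branching is treated scenario-wise, which is exactly the trade-off between rigor and computational effort discussed above.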
In two-stage stochastic linear programming, if both the first- and second-stage variables are mixed integer variables, the problem is called two-stage stochastic mixed integer linear programming (2SMILP); its number of scenarios grows exponentially with the number of random parameters. Thus 2SMILP problems are NP-hard (Dyer and Stougie, 2006) and belong to a class of critical and very complex tasks. Many works (e.g., Ierapetritou et al., 1995; Clay and Grossmann, 1997; Balasubramanian and Grossmann, 2002; Gupta and Maranas, 2003; Bonfill et al., 2004; Guillen et al., 2006; Wu and Ierapetritou, 2007; Puigjaner and Lainez, 2008) have contributed to applications of two-stage stochastic linear programming with integer variables involved; however, they only considered integer decisions in the first stage, while the second stage only includes
continuous variables. Few works (e.g., Carøe and Schultz, 1999; Sand and Engell, 2004; Alonso-Ayuso et al., 2005) set the second-stage variables in the mixed integer or pure integer (Ahmed et al., 2004) domain, but there the first stage only involves pure integer variables or no integer information at all. Hence, strictly speaking, there has been no real 2SMILP application up to now.

This chapter is organized as follows. In Section 2, the basic concept of 2SMILP and its previously used deterministic equivalent formulation are briefly introduced. In Section 3, a novel dynamic 2SMILP formulation, in which mixed integer variables in both the first and the second stage and multi-uncertainty are explicitly modeled, is presented, and the rolling horizon strategy (RHS) is proposed for the evolution of multi-period multi-uncertainty along an unlimited time axis. When the same scenario structure slides along the time axis, the RHS can also be called a moving horizon strategy (MHS). For a limited time horizon in which the probability distribution information of all the uncertainties is known, the RHS reduces to a scenario group based approach (SGA) via a class of semi-dynamic 2SMILP formulations with a shrinking second stage. In Section 4, the RHS, MHS and SGA are applied to the EPS logistics, with a detailed description of the model extension. Numerical studies are presented in Section 5. Finally, conclusions are given in Section 6.
2. INTRODUCTION TO 2SMILP

The symbols used to denote the problem dimensions in this chapter are listed in Table 2.1. In two-stage stochastic mixed integer linear programming, the time horizon is divided into two stages, where the uncertainty ξ is (completely) realized (removed) at the end of period i (Figure 2.1). The first stage includes the periods before the uncertainty ξ is resolved; the mixed integer decision variables x in this stage are called first-stage variables, and these here-and-now decisions have to be fixed and implemented. The second stage includes the periods after the uncertainty ξ is resolved, in which the mixed integer decision variables yω are called second-stage variables (or the recourse); these recourse actions are adapted to the scenarios and represent possible future reactions to new information. The recourse is a series of corrective measures after the realization or observation of the uncertainties.

Table 2.1. 2SMILP: dimensions

n1    number of first-stage variables
n1′   number of first-stage integer variables
n1″   number of first-stage real variables
n2    number of second-stage variables per scenario
n2′   number of second-stage integer variables per scenario
n2″   number of second-stage real variables per scenario
m1    number of first-stage constraints
m2    number of second-stage constraints per scenario
Figure 2.1. Twostage separation in multiperiod horizon.
Suppose there is a chemical plant producing chemical products to meet customer demands within a state, e.g., California in the United States. Usually these demands are not certain but fluctuate around a certain level, especially over a longer period of time. Thus, a production plan has to be made for how many products (e.g., batches), denoted by N_i (an integer variable bounded by N^lo ≤ N_i ≤ N^up), should be produced under the uncertain demand information in production period i. Assume that the production period is one week and that the demand at the beginning of the first production period, D1, is known, while the demand at the beginning of the second production period, ξ = ξ1,D2, the first uncertainty to appear, is a discrete random variable with the distribution given in Table 2.2.
Figure 2.2. 2SMILP: An example.
Two-Stage Stochastic Mixed Integer Linear Programming
Table 2.2. Demand distribution in the second production period

| Realization of ξ1,D2 | above average d2,1 | average d2,2 | below average d2,3 |
|---|---|---|---|
| Probability | p1,1 | p1,2 | p1,3 |
Accordingly, there are three scenarios (ω = 1, 2, 3), shown in Figure 2.2 (left), within the two-period planning horizon. In the 2SMILP setting, according to D1, the amount of products N1 has to be decided before the realization (d2,1, d2,2 or d2,3) of the demand uncertainty ξ1,D2 is known; thus N1 is the first-stage variable. The amounts N2,1, N2,2 and N2,3 are second-stage variables, which can be decided later, after the demand uncertainty ξ1,D2 has been revealed as the corresponding scenario d2,1, d2,2 or d2,3. If there is also a demand uncertainty ξ = ξ2,D3, the second uncertainty to appear, at the beginning of the third production period, with the distribution given in Table 2.3 and independent of ξ1,D2, then the combination of ξ1,D2 and ξ2,D3 (their joint probability distribution) is given in Table 2.4. There are altogether nine scenarios (Figure 2.2, right) within the three-period planning horizon. In fact, this is a three-stage stochastic problem according to Figure 1.2; how to model it in the 2SMILP setting is shown in Section 3. Thus, a scenario is a branch or path connecting a series of correlated events from the beginning to the end of the planning horizon on the tree constructed by the joint distribution of the appearing uncertainties. We call this tree the scenario tree structure.
Table 2.3. Demand distribution in the third production period

| Realization of ξ2,D3 | above average d3,1 | average d3,2 | below average d3,3 |
|---|---|---|---|
| Probability | p2,1 | p2,2 | p2,3 |
Table 2.4. Joint probability distribution of ξ1,D2 and ξ2,D3

| Event | d2,1 d3,1 | d2,1 d3,2 | d2,1 d3,3 | d2,2 d3,1 | d2,2 d3,2 | d2,2 d3,3 |
|---|---|---|---|---|---|---|
| Joint probability | p1,1 p2,1 | p1,1 p2,2 | p1,1 p2,3 | p1,2 p2,1 | p1,2 p2,2 | p1,2 p2,3 |

| Event | d2,3 d3,1 | d2,3 d3,2 | d2,3 d3,3 |
|---|---|---|---|
| Joint probability | p1,3 p2,1 | p1,3 p2,2 | p1,3 p2,3 |
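Under the independence assumption stated above, each entry of Table 2.4 is simply a product of the marginal probabilities. A minimal Python sketch (the numeric probabilities are illustrative assumptions, not values from this chapter):

```python
# Build the nine-event joint distribution of Table 2.4 from the two
# independent per-period distributions of Tables 2.2 and 2.3.
from itertools import product

xi1 = {"d2,1": 0.3, "d2,2": 0.5, "d2,3": 0.2}    # Table 2.2 (assumed values)
xi2 = {"d3,1": 0.25, "d3,2": 0.45, "d3,3": 0.30}  # Table 2.3 (assumed values)

# Independence: the joint probability of each event is the product p1,i * p2,j.
joint = {(e1, e2): p1 * p2
         for (e1, p1), (e2, p2) in product(xi1.items(), xi2.items())}

print(len(joint))                      # 9 joint events, as in Table 2.4
print(round(sum(joint.values()), 10))  # 1.0: a valid probability distribution
```

The same Cartesian-product construction extends to any number of independent per-period uncertainties, which is why the scenario count grows multiplicatively with the horizon.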
A 2SMILP has the following deterministic equivalent formulation (DEP), stated by Till et al. (2007) according to Louveaux and Schultz (2003):
DEP:

$$z = \min_{x,\, y_1,\ldots,y_\Omega} \; c^T x + \sum_{\omega=1}^{\Omega} \pi_\omega\, q_\omega^T y_\omega \quad (2.1)$$

s.t.

$$A x \le b, \quad (2.2)$$

$$T_\omega x + W_\omega y_\omega \le h_\omega, \quad (2.3)$$

$$x \in X, \quad y_\omega \in Y, \quad (2.4)$$

$$X = \{\, x \in \mathbb{Z}^{n_1'} \times \mathbb{R}^{n_1''} \mid x^{lo} \le x \le x^{up} \,\}, \quad (2.5)$$

$$Y = \{\, y \in \mathbb{Z}^{n_2'} \times \mathbb{R}^{n_2''} \mid y^{lo} \le y \le y^{up} \,\}, \quad (2.6)$$

$$\omega = 1, \ldots, \Omega. \quad (2.7)$$
By assuming that the uncertain data can be described sufficiently accurately by random variables with a finite number of realizations, modeled by a discrete set of scenarios ω = 1, …, Ω with corresponding probabilities π_ω, this extensive form of the DEP explicitly represents all second-stage variables for all scenarios and covers the general case with parametric uncertainties in the objective function, in the left-hand-side multipliers T_ω and W_ω, and in the right-hand-side parameters h_ω. The first-stage constraints (2.2), with A ∈ ℝ^{m1×n1} and b ∈ ℝ^{m1}, are scenario-independent, while the second-stage constraints (2.3), a particular realization of which is associated with the second-stage variables y_ω, with T_ω ∈ ℝ^{m2×n1}, W_ω ∈ ℝ^{m2×n2} and h_ω ∈ ℝ^{m2}, are scenario-dependent. The sets X and Y contain the integrality requirements (n1′ and n2′) and the bounds of the first-stage (lower bound x^lo, upper bound x^up) and second-stage variables (lower bound y^lo, upper bound y^up), respectively. The objective is to minimize the cost of the first-stage decisions plus the expected cost of the second-stage decisions, weighted by the vectors c ∈ ℝ^{n1} and q_ω ∈ ℝ^{n2}, respectively. The constraints (2.2)–(2.7) compose a so-called staircase structure, shown in Figure 2.3. This staircase structure supports decomposition-based algorithms (Birge and Louveaux, 1997; Ruszczynski and Shapiro, 2003), which usually focus on obtaining the first-stage optimum from an implicit form of the DEP (DEP-Implicit, (2.8)–(2.10)) and then completing the second-stage decisions ((2.11)–(2.12)), in order to avoid solving the large-scale MILP in which both first-stage and second-stage variables are optimized at the same time (the monolithic approach) via the DEP.
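The staircase pattern of Figure 2.3 can be made concrete by assembling the DEP constraint matrix block by block: one first-stage row [A 0 … 0] and, per scenario, a row [T_ω 0 … W_ω … 0]. A small pure-Python sketch (the tiny A, T_ω, W_ω blocks and dimensions are illustrative assumptions, not data from the chapter):

```python
# Assemble the DEP staircase constraint matrix for Omega scenarios.
m1, m2, n1, n2, Omega = 1, 2, 2, 3, 3

A = [[1.0] * n1 for _ in range(m1)]
T = [[[float(w + 1)] * n1 for _ in range(m2)] for w in range(Omega)]  # scenario-dependent
W = [[[1.0 if i == j else 0.0 for j in range(n2)] for i in range(m2)]
     for w in range(Omega)]

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

def hstack(blocks):
    # Concatenate same-height blocks row by row.
    return [sum((b[i] for b in blocks), []) for i in range(len(blocks[0]))]

# First-stage row: [A 0 ... 0]; scenario w rows: [T_w 0 ... W_w ... 0].
M = hstack([A] + [zeros(m1, n2)] * Omega)
for w in range(Omega):
    row = [T[w]] + [W[w] if k == w else zeros(m2, n2) for k in range(Omega)]
    M += hstack(row)

print(len(M), len(M[0]))  # (m1 + Omega*m2) x (n1 + Omega*n2) = 7 x 11
```

The zero blocks are exactly what decomposition methods exploit: deleting the coupling column of x makes the Ω scenario blocks fall apart into independent subproblems.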
DEP-Implicit:

$$z = \min_{x} \; c^T x + \Phi(x) \quad (2.8)$$

s.t.

$$A x \le b, \quad (2.9)$$

$$x \in X. \quad (2.10)$$

The expected recourse function Φ(x) is defined by (2.11).
Figure 2.3. Constraints of the 2SMILP (2.2) – (2.7): staircase structure.
$$\Phi(x) = \sum_{\omega=1}^{\Omega} \pi_\omega\, Q_\omega(x). \quad (2.11)$$

The second-stage value function Q_ω(x) is given by (2.12):

$$Q_\omega(x) = \min_{y_\omega} \; q_\omega^T y_\omega \quad \text{s.t.} \quad T_\omega x + W_\omega y_\omega \le h_\omega, \;\; y_\omega \in Y. \quad (2.12)$$
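The implicit evaluation of (2.11)–(2.12) can be sketched for a toy subproblem with a single integer recourse variable; the scenario data (π_ω, q_ω, T_ω, W_ω, h_ω) below are illustrative scalars, and brute-force enumeration stands in for a real MILP solver:

```python
def Q(x, q, T, W, h, y_range=range(0, 50)):
    """Second-stage value Q_w(x): min q*y s.t. T*x + W*y <= h, by enumeration."""
    feasible = [q * y for y in y_range if T * x + W * y <= h]
    return min(feasible) if feasible else float("inf")  # inf = infeasible scenario

def phi(x, scenarios):
    """Expected recourse Phi(x) = sum_w pi_w * Q_w(x), as in eq. (2.11)."""
    return sum(pi * Q(x, q, T, W, h) for pi, q, T, W, h in scenarios)

# Three equiprobable scenarios (pi, q, T, W, h) -- assumed numbers.
scenarios = [(1/3, 2.0, 1.0, -1.0, -4.0),
             (1/3, 2.0, 1.0, -1.0, -6.0),
             (1/3, 2.0, 1.0, -1.0, -8.0)]

# For x = 2, scenario w requires y >= x + |h_w| - 2, so Q = 12, 16, 20.
print(phi(2, scenarios))  # approximately 16.0
```

Note that an infeasible scenario drives Φ(x) to infinity, which is why relatively complete recourse matters for the decomposition schemes discussed below.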
The key characteristic of decomposition-based algorithms is to untie the connections of the staircase structure into a series of independent subproblems. For example, by copying the first-stage variables x_ω to each scenario ω, the explicit non-anticipativity constraint (2.16) is added to the DEP. Thus, the DEP formulation (2.1)–(2.7) can be transformed into the extensive form with non-anticipativity constraints (DEP-NA):
DEP-NA:

$$z = \min_{x_1,\ldots,x_\Omega,\, y_1,\ldots,y_\Omega} \; \sum_{\omega=1}^{\Omega} \pi_\omega \left( c^T x_\omega + q_\omega^T y_\omega \right) \quad (2.13)$$

s.t.

$$A x_\omega \le b, \quad (2.14)$$

$$T_\omega x_\omega + W_\omega y_\omega \le h_\omega, \quad (2.15)$$

$$x_1 = x_2 = \cdots = x_\Omega, \quad (2.16)$$

$$x_\omega \in X, \quad y_\omega \in Y, \quad (2.17)$$

$$X = \{\, x \in \mathbb{Z}^{n_1'} \times \mathbb{R}^{n_1''} \mid x^{lo} \le x \le x^{up} \,\}, \quad (2.18)$$

$$Y = \{\, y \in \mathbb{Z}^{n_2'} \times \mathbb{R}^{n_2''} \mid y^{lo} \le y \le y^{up} \,\}, \quad (2.19)$$

$$\omega = 1, \ldots, \Omega. \quad (2.20)$$

Figure 2.4. Scenario decomposition.
Thus, the staircase structure in Figure 2.3 is decomposed into Ω scenario-based subproblems (Figure 2.4). Based on DEP-NA, Carøe and Schultz (1999) developed a scenario-decomposition-based branch-and-bound algorithm (DDSIP) to solve a general 2SMILP. Through a Lagrangian relaxation of the non-anticipativity constraints, the 2SMILP is decomposed into scenarios, and feasibility is re-established by a standard branch-and-bound algorithm on the first-stage variables. However, finite termination is guaranteed only if the first-stage variables are pure integer. In general, decomposition-based algorithms built on the DEP and DEP-Implicit formulations have the following potential limitations:
- The continuous flow of information and decisions, which have to be taken upon events or periodically, is not explicitly expressed; thus most actual computations have been performed only for static problems.
- Special techniques have to be developed to keep the information consistent at each node of the scenario tree (Figure 2.1), especially for the nodes in the second stage — for example, the non-anticipativity constraint (2.16), which however increases the complexity of the solution strategy.
- It is difficult to handle mixed-integer variables in both stages; e.g., in DDSIP only integer decision variables are considered in the first stage, which may lead to a loss of information and thus reduce the solution space.
- Feasibility of the first-stage solution cannot be ensured with a finite number of scenarios unless the problem has relatively complete recourse.
Consequently, a new monolithic formulation of 2SMILP is urgently needed that fully approximates multi-stage problems and the dynamic information flow while keeping all the necessary information and maintaining feasibility. The advantage of using a 2SMILP formulation over a deterministic MILP is measured by the value of the stochastic solution (VSS) (Birge and Louveaux, 1997). The calculation of the VSS for maximization problems is given by equation (2.21):
$$VSS = z - EEV \quad (2.21)$$
where z is the optimal objective value of the 2SMILP and EEV is the expected result of using the EV solution. The steps for computing the EEV are:

1. Compute the optimal decisions x_EV* of the deterministic MILP under the expected values (EV) of the uncertainties.
2. Implement these optimal decisions x_EV* as the first-stage decisions in the 2SMILP and compute the objective value (the EEV).
x_EV* is in general not optimal for the original 2SMILP, nor necessarily feasible when there is no relatively complete recourse. Thus the EEV cannot be larger than the optimal objective value of the corresponding 2SMILP and provides a lower bound on it:

$$z \ge EEV. \quad (2.22)$$
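The two EEV steps and equation (2.21) can be illustrated on a toy maximization problem — an integer newsvendor with assumed prices, costs and demand scenarios, where enumeration replaces a MILP solver:

```python
# VSS = z - EEV on a toy integer newsvendor (all numbers are assumptions).
demands = [(1, 0.5), (9, 0.5)]   # scenario (demand, probability)
price, cost = 3.0, 1.0

def expected_profit(n):
    return sum(p * (price * min(n, d) - cost * n) for d, p in demands)

# z: optimal objective of the stochastic program (enumerate the first stage).
z = max(expected_profit(n) for n in range(11))

# EEV step 1: solve the deterministic EV problem (demand = expected demand).
ev_demand = sum(p * d for d, p in demands)
n_ev = max(range(11), key=lambda n: price * min(n, ev_demand) - cost * n)

# EEV step 2: evaluate that fixed decision against the true scenario set.
eev = expected_profit(n_ev)

vss = z - eev  # eq. (2.21); nonnegative by (2.22)
print(z, eev, vss)
```

Here the EV solution hedges for the average demand and loses expected profit relative to the stochastic optimum, so the VSS is strictly positive.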
3. A NOVEL DYNAMIC 2SMILP FORMULATION AND SOLUTION APPROACHES FOR MULTI-PERIOD MULTI-UNCERTAINTY

In real-world situations, the uncertainties and the decision processes evolve over time, and the uncertainties often exhibit a temporal structure. How to model this evolution and handle the dynamic information-updating process is discussed in this section.
3.1. Dynamic 2SMILP Formulation under Multi-Period Multi-Uncertainty

Theoretically, decisions taken in period i affect the evolution of the system for an infinite period of time. To reduce the complexity of the computations, a finite horizon is usually considered (Dimitrades et al., 1997). In this case, one must make sure that the "cut-off" at the end of the horizon does not lead to unfavourable situations because of the limited look-ahead of the optimization; therefore the horizon must not be too short. However, the longer the horizon, the larger the number of scenarios, and thus the computational effort may grow exponentially with the horizon length. In reality, due to the limited memory of the systems under consideration (from a practical point of view; theoretically it may be infinite), it does not make sense to model the distant future very accurately. The fact that the operation of the system does not stop at the end of the horizon can be accounted for by using a (deterministic) expected-value model over the final part of the horizon, combining the advantage of a longer prediction horizon with a reasonable computational effort. The initial state that defines the parameters υ0 = (c1, q1, T1, W1, h1) of the MILP is always assumed to be known. In the sequel, the time axis is divided into the following categories according to the different character of the uncertain information (Figure 3.1):
- History: the uncertainties ξ1, ξ2, …, ξ_{i−2} have been realized as the corresponding realizations {υ1, υ2, …, υ_{i−2}}, and decisions have been obtained and implemented. The length of the history is i−1 periods.
- Current: the uncertainty ξ_{i−1} has just been realized as υ_{i−1}, and the optimization needs to be performed for the variables of period i, taking the future uncertainties into account.
- Immediate future: ξ_i, …, ξ_{i+I1−1} are the immediate-future uncertainties, which have known probability distributions Ψ(ξ_i), …, Ψ(ξ_{i+I1−1}). The length of the immediate future is I1−1 periods.
- Distant future: ξ_{i+I1}, …, ξ_{i+I2−1} are the distant-future uncertainties, for which the expected values (EVs) are known but the probability distributions are not considered. The length of the distant future is I2−I1 periods.
- Remote future: the periods after i+I2 are termed the remote future, which lies outside the horizon considered in the optimization.
Figure 3.1. Classification of multi-period multi-uncertainty.

Accordingly, suppose that in period i there is an MILP problem (3.1)–(3.6) to be optimized under this multi-period multi-uncertainty (MPMU):

$$\text{Prime MILP:}\quad \max_{x_i,\, x_{i+1},\ldots,x_{i+I_1},\, x_{i+I_1+1},\ldots,x_{i+I_2}} f\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, x_{i+1},\ldots,x_{i+I_1},\, x_{i+I_1+1},\ldots,x_{i+I_2},\, \upsilon_{i-1,r_i}^{Cr_i},\, \xi_i,\ldots,\xi_{i+I_1-1},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) \quad (3.1)$$

s.t.

$$h\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, x_{i+1},\ldots,x_{i+I_2},\, \upsilon_{i-1,r_i}^{Cr_i},\, \xi_i,\ldots,\xi_{i+I_1-1},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) = 0, \quad (3.2)$$

$$g\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, x_{i+1},\ldots,x_{i+I_2},\, \upsilon_{i-1,r_i}^{Cr_i},\, \xi_i,\ldots,\xi_{i+I_1-1},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) \le 0, \quad (3.3)$$

$$x = \{x_1,\ldots,x_i,\, x_{i+1},\ldots,x_{i+I_2},\, x_{i+I_2+1},\ldots,x_I\} \in X, \quad (3.4)$$

$$\xi = \{\xi_i, \xi_{i+1}, \ldots, \xi_{i+I_1-1}\} \sim \Psi(\xi_j), \;\; i \le j \le i+I_1-1, \quad (3.5)$$

$$X = \{\, x \in \mathbb{Z}^{n'} \times \mathbb{R}^{n''} \mid x^{lo} \le x \le x^{up} \,\}. \quad (3.6)$$
Vectors x1, x2, …, xI are the mixed-integer decision variables of the individual time periods, and ξ_i, ξ_{i+1}, …, ξ_{i+I1−1} are independent discrete random variables with corresponding probability distributions Ψ(ξ_j) and sample spaces Ω(ξ_j). Assuming that each sample space Ω(ξ_j) has n_j samples (realizations) {υ_{j,r1}, υ_{j,r2}, …, υ_{j,rn_j}}, the combination of these samples in every period from i to i+I1−1, plus the expected values from period i+I1 to i+I2−1, composes a scenario tree structure (Figure 3.2). Note that T_I in Figure 3.2 is not specified, which indicates the unlimited time horizon of planning / scheduling problems.
Figure 3.2. The scenario tree structure of MPMU.
Each node of the tree represents a series of events (a combination of the realizations that have happened from the first period to the current period, cf. Table 2.4) and has exactly one 'father' node in the previous period. υ^{Cr_i}_{i−1,r_i} is the r_i-th event of ξ_{i−1}; to explain the symbol Cr_i, we go back to the first period, which has the scenario tree structure of Figure 3.3. For convenience, suppose here that the planning / scheduling problem has a fixed horizon I in which the probability distributions of the uncertainties of each period are known. Starting from the known υ0, the set υ1 = Ω(ξ1) = {υ^0_{1,r1}, …, υ^0_{1,ri}, …, υ^0_{1,rm1}} includes rm1 = n1 events, which are all the realizations in period 1. By combining the realizations of ξ1 and ξ2 through the paths {0, r1}, …, {0, ri}, …, {0, rm1}, the set υ2 = Ω(ξ1×ξ2) = {υ^{{0,r1}}_{2,r1}, …, υ^{{0,ri}}_{2,ri}, …, υ^{{0,rm1}}_{2,rm2}} includes rm2 = n1×n2 events in period 2, where υ^{{0,ri}}_{2,ri} denotes the r_i-th event in period 2 with the path {0, ri} connecting it to its previous father nodes υ0 and υ_{1,ri} in period 1. Thus, the scenario tree starting at υ0 results in a scenario space ω ∈ Ω(υ0) of ξ1×ξ2×…×ξ_{I−1} with corresponding probabilities π_ω = {π1, π2, …, π_{Ω(υ0)} | Σ_ω π_ω = 1}, where the r_i-th scenario can be represented by the path Cr_i = {0, r_i, …, r_i}, and the size of the scenario space υ_{I−1} = {υ^{Cr1}_{I−1,r1}, …, υ^{Cri}_{I−1,ri}, …, υ^{Crm(I−1)}_{I−1,rm(I−1)}} = Ω(υ0) is rm(I−1) = n1×n2×…×n_{I−1}.
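The path notation Cr and the scenario-space size rm(I−1) = n1×n2×…×n_{I−1} can be sketched by enumerating all root-to-leaf paths of the tree (the per-period sample sizes below are assumed):

```python
# Enumerate scenario paths Cr = {0, r1, ..., r_{I-1}} of a small tree.
from itertools import product

sizes = [3, 3, 2]  # n1, n2, n3: realizations per period (assumed values)

# Each path starts at the root node 0 and picks one realization per period.
paths = [(0,) + combo
         for combo in product(*(range(1, n + 1) for n in sizes))]

print(len(paths))  # 3 * 3 * 2 = 18 scenarios
print(paths[0])    # (0, 1, 1, 1): root followed by the first realizations
```

The multiplicative growth of `len(paths)` with the horizon is exactly the exponential scenario blow-up that motivates the EV treatment of the distant future.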
Figure 3.3. The scenario tree structure of MPMU starting from period 1.
Hence, Cr_i indicates the connection, or path, of the r_i-th event to the previous events in the past periods. The scenario tree (Figure 3.2) beginning at υ^{Cr_i}_{i−1,r_i} has a scenario space ω ∈ Ω_i = Ω(υ^{Cr_i}_{i−1,r_i}) of ξ_i×ξ_{i+1}×…×ξ_{i+I1−1} with corresponding probabilities π_ω = {π1, π2, …, π_{Ω(υ^{Cr_i}_{i−1,r_i})} | Σ_ω π_ω = 1}, and the size of the scenario space υ_{i+I1−1} = {υ^{Cr1}_{i+I1−1,r1}, …, υ^{Cri}_{i+I1−1,ri}, …, υ^{Crm(i+I1−1)}_{i+I1−1,rm(i+I1−1)}} = υ_{i+I2−1} = {υ^{Cr1}_{i+I2−1,r1}, …, υ^{Cri}_{i+I2−1,ri}, …, υ^{Crm(i+I2−1)}_{i+I2−1,rm(i+I2−1)}} is rm(i+I1−1) = rm(i+I2−1) = n_i×n_{i+1}×…×n_{i+I1−1}, since there are only constant expected values from period i+I1 to i+I2−1. To solve the prime MILP problem (3.1)–(3.6), it has to be transformed into its dynamic deterministic equivalent 2SMILP form (DDEP), defined by DDEP-Ω_i, or DDEP-Ω(υ^{Cr_i}_{i−1,r_i}), for a maximization problem in the current period i.
$$\text{DDEP:}\quad \max_{x_i} \; f_1\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, \upsilon_{i-1,r_i}^{Cr_i}\right) + EV_{\omega \in \Omega_i}\!\left[ \max_{x_{i+1,\omega},\ldots,x_{i+I_1,\omega}} f_2\!\left(x_{i+1,\omega},\ldots,x_{i+I_1,\omega},\, \delta_{i,\omega},\ldots,\delta_{i+I_1-1,\omega}\right) + \max_{x_{i+I_1+1,\omega},\ldots,x_{i+I_2,\omega}} f_3\!\left(x_{i+I_1+1,\omega},\ldots,x_{i+I_2,\omega},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) \right] \quad (3.7)$$

s.t.

$$h\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, x_{i+1,\omega},\ldots,x_{i+I_2,\omega},\, \upsilon_{i-1,r_i}^{Cr_i},\, \delta_{i,\omega},\ldots,\delta_{i+I_1-1,\omega},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) = 0, \quad (3.8)$$

$$g\!\left(x_1^*,\ldots,x_{i-1}^*,\, x_i,\, x_{i+1,\omega},\ldots,x_{i+I_2,\omega},\, \upsilon_{i-1,r_i}^{Cr_i},\, \delta_{i,\omega},\ldots,\delta_{i+I_1-1,\omega},\, EV(\xi_{i+I_1}),\ldots,EV(\xi_{i+I_2-1})\right) \le 0, \quad (3.9)$$

$$x = \{x_1,\ldots,x_i,\, x_{i+1,\omega},\ldots,x_{i+I_2,\omega},\, x_{i+I_2+1,\omega},\ldots,x_{I,\omega}\} \in X, \quad (3.10)$$

$$\delta = \{\delta_{i,\omega},\ldots,\delta_{i+I_1-1,\omega}\}, \;\; \delta_{j,\omega} \in \upsilon_j, \;\; i \le j \le i+I_1-1, \quad (3.11)$$

$$X = \{\, x \in \mathbb{Z}^{n'} \times \mathbb{R}^{n''} \mid x^{lo} \le x \le x^{up} \,\}, \quad (3.12)$$

$$\omega = 1, 2, \ldots, \Omega_i. \quad (3.13)$$
Corresponding to scenario ω, δ_{j,ω} denotes the realization of ξ_j in that scenario, while the set of events in period j is denoted by υ_j (i ≤ j ≤ i+I1−1).
Figure 3.4. Stage separation under multiperiod multiuncertainty.
As shown in Figure 3.4, the history is the fixed stage, and the current period i is set to be the first stage, whereas the next I2−1 periods, including the immediate and the distant future, form the second stage of the 2SMILP setting. The current state and the immediate future within I1 periods, represented by f1(·) and f2(·), are modeled by a tree of scenarios in the combined sample space ω ∈ Ω_i of the future uncertainties (Figure 3.2). The distant future within the following I2−I1 periods is represented by the expected values (EVs) of the stochastic variables (cost contribution f3(·)). Following the example of Section 2 (Figure 2.2, right), there is now an expected value of the uncertain demand at the beginning of the fourth production period, EV(ξ3,D4) = d4 = d^{{0,1,1}}_{4,1} = d^{{0,1,2}}_{4,2} = … = d^{{0,3,9}}_{4,9}, and the scenario tree structure is shown in Figure 3.5. Apparently, υ0 = D1; the set υ1 = δ1,ω = Ω(ξ1,D2) = {d^0_{2,1}, d^0_{2,2}, d^0_{2,3}} includes rm1 = 3 events; and the set υ2 = Ω(ξ1,D2×ξ2,D3) = Ω(D1) = {d^{{0,1}}_{3,1}, d^{{0,1}}_{3,2}, d^{{0,1}}_{3,3}, d^{{0,2}}_{3,4}, d^{{0,2}}_{3,5}, d^{{0,2}}_{3,6}, d^{{0,3}}_{3,7}, d^{{0,3}}_{3,8}, d^{{0,3}}_{3,9}} includes rm2 = 3×3 = 9 events, shown in Table 2.4, where d^{{0,3}}_{3,7} denotes the 7th event at the beginning of the third production period with the path Cr7 = {0, 3} connecting it to the previous father nodes d^0_{2,3} and D1 at the beginnings of the second and first production periods. Since d3,1 = d^{{0,1}}_{3,1} = d^{{0,2}}_{3,4} = d^{{0,3}}_{3,7}, d3,2 = d^{{0,1}}_{3,2} = d^{{0,2}}_{3,5} = d^{{0,3}}_{3,8} and d3,3 = d^{{0,1}}_{3,3} = d^{{0,2}}_{3,6} = d^{{0,3}}_{3,9}, clearly δ2,ω = {d3,1, d3,2, d3,3} ∈ υ2. Furthermore, rm2 = rm3, due to the constant value d4 at the beginning of the fourth production period. Thus, there are nine scenarios.
Figure 3.5. The scenario tree structure of MPMU: an example.
The first production period is the first stage, and N1 is the first-stage variable. The second, third and fourth production periods compose the second stage, because the uncertain demands at the beginnings of production periods 2 and 3 are matched to the corresponding events on the scenario tree; thus N2, N3 and N4 are second-stage variables and have to be marked as N2,ω, N3,ω and N4,ω. The DDEP-Ω(D1), according to DDEP-Ω_i (3.7)–(3.13), can then be described as:

$$\text{DDEP:}\quad \max_{N_1} \; f_1\!\left(N_1, D_1\right) + EV_{\omega \in \Omega(D_1)}\!\left[ \max_{N_{2,\omega},\, N_{3,\omega}} f_2\!\left(N_{2,\omega}, N_{3,\omega},\, \delta_{1,\omega}, \delta_{2,\omega}\right) + \max_{N_{4,\omega}} f_3\!\left(N_{4,\omega},\, EV(\xi_{3,D_4})\right) \right] \quad (3.14)$$

s.t.

$$h\!\left(N_1, N_{2,\omega}, N_{3,\omega}, N_{4,\omega},\, D_1,\, \delta_{1,\omega}, \delta_{2,\omega},\, EV(\xi_{3,D_4})\right) = 0, \quad (3.15)$$

$$g\!\left(N_1, N_{2,\omega}, N_{3,\omega}, N_{4,\omega},\, D_1,\, \delta_{1,\omega}, \delta_{2,\omega},\, EV(\xi_{3,D_4})\right) \le 0, \quad (3.16)$$

$$N = \{N_1, N_{2,\omega}, N_{3,\omega}, N_{4,\omega}\}, \quad (3.17)$$

$$\delta = \{\delta_{1,\omega}, \delta_{2,\omega}\}, \;\; \delta_{j,\omega} \in \upsilon_j, \;\; 1 \le j \le 2, \quad (3.18)$$

$$N \in \mathbb{Z}^{n'}, \;\; N^{lo} \le N \le N^{up}, \quad (3.19)$$

$$\omega = 1, 2, \ldots, \Omega(D_1). \quad (3.20)$$
Since the information on the realizations of ξ1,D2 and ξ2,D3 is correlated with the scenario ω, δ1,ω and δ2,ω have the extended versions δ1,ω′ = {d^0_{2,1}′, d^0_{2,2}′, d^0_{2,3}′, d^0_{2,4}′, d^0_{2,5}′, d^0_{2,6}′, d^0_{2,7}′, d^0_{2,8}′, d^0_{2,9}′} and δ2,ω′ = {d^{{0,1}}_{3,1}, d^{{0,1}}_{3,2}, d^{{0,1}}_{3,3}, d^{{0,2}}_{3,4}, d^{{0,2}}_{3,5}, d^{{0,2}}_{3,6}, d^{{0,3}}_{3,7}, d^{{0,3}}_{3,8}, d^{{0,3}}_{3,9}} = υ2, where d^0_{2,1} = d^0_{2,1}′ = d^0_{2,2}′ = d^0_{2,3}′, d^0_{2,2} = d^0_{2,4}′ = d^0_{2,5}′ = d^0_{2,6}′, and d^0_{2,3} = d^0_{2,7}′ = d^0_{2,8}′ = d^0_{2,9}′. For convenience of mathematical expression, δ1,ω and δ2,ω are employed like the other δ_{j,ω} in this chapter for the model description, instead of their extended forms; the same holds for the second-stage variable N2,ω. By solving (3.14)–(3.20), the optimal N1*, N2,ω* (N2,1, N2,2, N2,3), N3,ω* (N3,1, N3,2, …, N3,9) and N4,ω* (N4,1, N4,2, …, N4,9) can be obtained, but only the first-stage optimum N1* is implemented in the plant for the production process.
3.2. Evolution of MPMU and Rolling Horizon Strategy / Moving Horizon Strategy
The time horizons I1 and I2 can shift along the unlimited standard time axis t. When a shift occurs, some uncertainties are resolved, e.g., ξ_i is realized as υ_i, and the probability distributions of some uncertainties previously represented only by their expected values, e.g., EV(ξ_{i+I1}), become known, contributing to the new scenario tree structure as in Figure 3.2. Meanwhile, expected-value information on uncertainties from the remote future may be newly incorporated. This phenomenon is the evolution of multi-period multi-uncertainty (MPMU), shown in Figure 3.6. If, in each shifting step, the lengths of I1 and I2 as well as the types and probability distributions of the incoming uncertainties need not be the same, the horizon is called a rolling time horizon; if all settings are consistent across the shifting steps, it is called a moving time horizon. The corresponding methods for handling the evolution of MPMU based on the DDEP formulation (3.7)–(3.13) are the rolling horizon strategy (RHS) and the moving horizon strategy (MHS).
Figure 3.6. Evolution of multiperiod multiuncertainty.
The detailed description of the RHS / MHS for planning / scheduling under evolving MPMU is given below:

1. Based on the known discrete probability distributions of the independent discrete random variables ξ = {ξ_i, ξ_{i+1}, …, ξ_{i+I1−1}} and the unknown variables ξ′ = {ξ_{i+I1}, ξ_{i+I1+1}, …, ξ_{i+I2−1}}, represented by their known expected values EV(ξ′), compose the scenario tree structure of Figure 3.2 and define the decision variables x_i, …, x_{i+I2} according to the problem specification and the 2SMILP concept.
2. Solve DDEP-Ω_i as a 2SMILP problem and record the first-stage optimal solutions x*_i.
3. Shift the changeable time horizons I1 and I2 together along the standard time axis by one step and set up the new scenario tree structure according to the realized event of the joint sample space υ_i = Ω(ξ1×ξ2×…×ξ_i), the explicit uncertainties, e.g., ξ_{i+I1}, and the newly incorporated EVs of uncertainties, e.g., ξ_{i+I2}. Build DDEP-Ω_{i+1} by:
   - fixing all x_i variables to the values x*_i and deleting all terms of the objective function that depend only on x*_i;
   - deleting all constraints that depend only on x*_i;
   - separating the constraints and the objective terms that include the variables x_{i+1} from the rest.
4. Set i = i+1 and check whether i satisfies the termination condition. If yes, go to step 5; otherwise go back to step 2.
5. Output all computed solutions. End.
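The loop of steps 1–5 can be sketched as follows; the toy inner problem (a one-product batch decision solved by enumeration) and the way a realization is revealed between steps are illustrative assumptions, not the chapter's EPS model or its CPLEX-based optimizer:

```python
# Minimal rolling-horizon sketch: fix the first-stage decision each step,
# reveal one realization, and roll the horizon forward by one period.

def profit(n, demand, price=3.0, cost=1.0):
    """Profit of producing n batches against a realized demand."""
    return price * min(n, demand) - cost * n

def best_recourse(demands_probs, n_max):
    """Expected value of the optimal second-stage reaction per scenario."""
    return sum(p * max(profit(n, d) for n in range(n_max + 1))
               for d, p in demands_probs)

def rolling_horizon(first_demand, scenario_feed, n_max=10):
    decisions = []
    d_now = first_demand
    for future in scenario_feed:          # future: [(demand, prob), ...]
        # First stage: enumerate N, score by here-and-now + expected recourse.
        n1 = max(range(n_max + 1),
                 key=lambda n: profit(n, d_now) + best_recourse(future, n_max))
        decisions.append(n1)              # only the first stage is implemented
        # Evolution of MPMU: here we simply reveal the most probable outcome;
        # in practice the realization comes from the real process.
        d_now = max(future, key=lambda dp: dp[1])[0]
    return decisions

plan = rolling_horizon(first_demand=4,
                       scenario_feed=[[(6, 0.3), (4, 0.5), (2, 0.2)],
                                      [(5, 0.6), (3, 0.4)]])
print(plan)  # one implemented first-stage decision per rolling step
```

Each iteration corresponds to solving one DDEP-Ω_i, implementing x*_i, and rebuilding the tree from the revealed event, exactly as in steps 2–4 above.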
In fact, DDEP-Ω(D1) (3.14)–(3.20) in Section 3.1 is the first rolling / moving step, with I1 = 3 and I2 = 4. Now evolution happens: assume ξ1,D2 is realized as d^0_{2,1} and the probability distribution of ξ3,D4 becomes known as in Table 3.1. Besides, the expected values for the fifth and sixth production periods are obtained as EV(ξ4,D5) = d5 and EV(ξ5,D6) = d6. The new scenario tree structure is shown in Figure 3.7. In rolling step 2, the set υ2 = Ω(ξ2,D3) = {d^{{0,1}}_{3,1}, d^{{0,1}}_{3,2}, d^{{0,1}}_{3,3}} includes rm2 = 3 events, and the set υ3 = Ω(ξ2,D3×ξ3,D4) = Ω(d^0_{2,1}) = {d^{{0,1,1}}_{4,1}, d^{{0,1,1}}_{4,2}, d^{{0,1,2}}_{4,3}, d^{{0,1,2}}_{4,4}, d^{{0,1,3}}_{4,5}, d^{{0,1,3}}_{4,6}}, in which d4,1 = d^{{0,1,1}}_{4,1} = d^{{0,1,2}}_{4,3} = d^{{0,1,3}}_{4,5} and d4,2 = d^{{0,1,1}}_{4,2} = d^{{0,1,2}}_{4,4} = d^{{0,1,3}}_{4,6}, includes rm3 = 3×2 = 6 events, where d^{{0,1,3}}_{4,5} denotes the fifth event at the beginning of the fourth production period with the path Cr5 = {0, 1, 3} connecting it to the previous father nodes d^{{0,1}}_{3,3}, d^0_{2,1} and D1 at the beginnings of the third, second and first production periods. rm3 = rm4 = rm5, due to the constant values d5 and d6 at the beginnings of the fifth and sixth production periods. Thus, there are six scenarios.

Table 3.1. Demand distribution in the fourth production period

| Realization of ξ3,D4 | above average d4,1 | below average d4,2 |
|---|---|---|
| Probability | p3,1 | p3,2 |
Figure 3.7. The scenario tree structure of MPMU: an example in rolling step 2.
The first production period is fixed, because the optimal production N1* has already been implemented. The second production period is now the first stage, and N2 is the first-stage variable. The third, fourth, fifth and sixth production periods compose the second stage, where N3, N4, N5 and N6 are the second-stage variables, marked as N3,ω, N4,ω, N5,ω and N6,ω. The DDEP-Ω(d^0_{2,1}), according to DDEP-Ω_i (3.7)–(3.13), can then be described as:

$$\text{DDEP:}\quad \max_{N_2} \; f_1\!\left(N_1^*, N_2,\, d^0_{2,1}\right) + EV_{\omega \in \Omega(d^0_{2,1})}\!\left[ \max_{N_{3,\omega},\, N_{4,\omega}} f_2\!\left(N_{3,\omega}, N_{4,\omega},\, \delta_{2,\omega}, \delta_{3,\omega}\right) + \max_{N_{5,\omega},\, N_{6,\omega}} f_3\!\left(N_{5,\omega}, N_{6,\omega},\, EV(\xi_{4,D_5}), EV(\xi_{5,D_6})\right) \right] \quad (3.21)$$

s.t.

$$h\!\left(N_1^*, N_2, N_{3,\omega}, N_{4,\omega}, N_{5,\omega}, N_{6,\omega},\, d^0_{2,1},\, \delta_{2,\omega}, \delta_{3,\omega},\, EV(\xi_{4,D_5}), EV(\xi_{5,D_6})\right) = 0, \quad (3.22)$$

$$g\!\left(N_1^*, N_2, N_{3,\omega}, N_{4,\omega}, N_{5,\omega}, N_{6,\omega},\, d^0_{2,1},\, \delta_{2,\omega}, \delta_{3,\omega},\, EV(\xi_{4,D_5}), EV(\xi_{5,D_6})\right) \le 0, \quad (3.23)$$

$$N = \{N_2, N_{3,\omega}, N_{4,\omega}, N_{5,\omega}, N_{6,\omega}\}, \quad (3.24)$$

$$\delta = \{\delta_{2,\omega}, \delta_{3,\omega}\}, \;\; \delta_{j,\omega} \in \upsilon_j, \;\; 2 \le j \le 3, \quad (3.25)$$

$$N \in \mathbb{Z}^{n'}, \;\; N^{lo} \le N \le N^{up}, \quad (3.26)$$

$$\omega = 1, 2, \ldots, \Omega(d^0_{2,1}). \quad (3.27)$$
By solving (3.21)–(3.27), the optimal N2*, N3,ω* (N3,1, N3,2, N3,3), N4,ω* (N4,1, N4,2, …, N4,6), N5,ω* (N5,1, N5,2, …, N5,6) and N6,ω* (N6,1, N6,2, …, N6,6) can be obtained; only the first-stage optimum N2* is implemented in the plant. In fact, N2 in the second rolling step is re-optimized based on d^0_{2,1}, instead of using the N2,ω of the first rolling step. For the moving horizon strategy, assume ξ1,D2 is realized as d^0_{2,1} and the probability distribution of ξ3,D4 becomes known as in Table 3.2, while the expected value for the fifth production period is obtained as EV(ξ4,D5) = d5; the new scenario tree structure in moving step 2 is shown in Figure 3.8. Obviously, the structure containing the first and the second stage is the same as in moving step 1 (Figure 3.5), as required by the moving horizon setting.

Table 3.2. Demand distribution in the fourth production period

| Realization of ξ3,D4 | above average d4,1 | average d4,2 | below average d4,3 |
|---|---|---|---|
| Probability | p3,1 | p3,2 | p3,3 |
The feedback structure of the RHS / MHS is shown in Figure 3.9. A deterministic equivalent model of the 2SMILP together with its solver is called a 2SMILP optimizer. In the implemented RHS / MHS, the 2SMILP optimizer is constituted by the DDEP model and the solver CPLEX.
Figure 3.8. The scenario tree structure of MPMU: an example in moving step 2.
Starting from a given initial condition of the planning / scheduling problem, a series of DDEP  Ωi models are solved until the termination instruction is given. After each step, the optimal decisions are implemented and one of the possible realizations of the uncertainties is chosen for the next step. The evolution of the process is recorded in the database.
Figure 3.9. The feedback structure of the rolling / moving horizon strategy.
3.3. Static MPMU and Scenario Group Based Approach (SGA)

When static information is considered within a stationary time horizon, in which the discrete probability distributions of the uncertainties of each period are known, the uncertainties show a constant structure (Figures 3.3 and 3.10).
Figure 3.10. Static multiperiod multiuncertainty.
Assume now that the MILP problem (3.1)–(3.6) is optimized within a stationary time horizon in which the probability distributions of the uncertainties ξ = {ξ1, ξ2, …, ξ_{I−1}} are known; then the prime problem (3.1)–(3.6) has the following simplified version:
$$\text{Prime MILP:}\quad \max_{x_1, x_2, \ldots, x_I} \; f\!\left(x_1, x_2, \ldots, x_I,\, \xi_1, \xi_2, \ldots, \xi_{I-1}\right) \quad (3.28)$$

s.t.

$$h\!\left(x_1, x_2, \ldots, x_I,\, \xi_1, \xi_2, \ldots, \xi_{I-1}\right) = 0, \quad (3.29)$$

$$g\!\left(x_1, x_2, \ldots, x_I,\, \xi_1, \xi_2, \ldots, \xi_{I-1}\right) \le 0, \quad (3.30)$$

$$x = \{x_1, x_2, \ldots, x_I\} \in X, \quad (3.31)$$

$$\xi = \{\xi_1, \xi_2, \ldots, \xi_{I-1}\} \sim \Psi(\xi_i), \;\; 1 \le i \le I-1, \quad (3.32)$$

$$X = \{\, x \in \mathbb{Z}^{n'} \times \mathbb{R}^{n''} \mid x^{lo} \le x \le x^{up} \,\}. \quad (3.33)$$
The corresponding scenario tree structure of these uncertainties is given in Figure 3.3; apparently, the time horizons satisfy I1 = I2 = I (Figure 3.10). To solve the prime MILP problem (3.28)–(3.33), it has to be transformed into its deterministic equivalent 2SMILP form according to DDEP-Ω_i:
Figure 3.11. Shrinking second stage.
$$\text{DDEP:}\quad \max_{x_1} \; f_1\!\left(x_1, \upsilon_0\right) + EV_{\omega \in \Omega(\upsilon_0)}\!\left[ \max_{x_{2,\omega},\ldots,x_{I,\omega}} f_2\!\left(x_{2,\omega},\ldots,x_{I,\omega},\, \delta_{1,\omega},\ldots,\delta_{I-1,\omega}\right) \right] \quad (3.34)$$

s.t.

$$h\!\left(x_1, x_{2,\omega}, \ldots, x_{I,\omega},\, \upsilon_0,\, \delta_{1,\omega}, \delta_{2,\omega}, \ldots, \delta_{I-1,\omega}\right) = 0, \quad (3.35)$$

$$g\!\left(x_1, x_{2,\omega}, \ldots, x_{I,\omega},\, \upsilon_0,\, \delta_{1,\omega}, \delta_{2,\omega}, \ldots, \delta_{I-1,\omega}\right) \le 0, \quad (3.36)$$

$$x = \{x_1, x_{2,\omega}, \ldots, x_{I,\omega}\} \in X, \quad (3.37)$$

$$\delta = \{\delta_{1,\omega}, \delta_{2,\omega}, \ldots, \delta_{I-1,\omega}\}, \;\; \delta_{i,\omega} \in \upsilon_i, \;\; 1 \le i \le I-1, \quad (3.38)$$

$$X = \{\, x \in \mathbb{Z}^{n'} \times \mathbb{R}^{n''} \mid x^{lo} \le x \le x^{up} \,\}, \quad (3.39)$$

$$\omega = 1, 2, \ldots, \Omega(\upsilon_0). \quad (3.40)$$
Solving the above 2SMILP problem (3.34) – (3.40), optimal solutions x*1, x*2,ω, . . ., x*I,ω are obtained and the firststage decisions x*1 are then fixed as the real actions before the realization of the uncertainty ξ1. Since the planning / scheduling task is solved in period 1, the first stage now moves to period 2 in which the uncertainty ξ1 is realized as one of the elements in set υ1 while the remaining uncertainties ξ2, . . ., ξI1 have not yet. Thus the second stage within time horizon I1 shrinks because period 2 is excluded (Figure 3.11). For specification consideration, the DDEP formulation with shrinking the second stage is called DDEPS2S. Thus x2 has to be reoptimized as the firststage variables based on the realization of ξ1 which infers that a series of new DDEPS2S models have to be formulated. Taking event υ01,r1 as an example, then the new scenario tree structure starting from node υ01, r1 according to Figure 3.3 is shown in Figure 3.12. Nodes υ{0,r1}2,r1, . . ., υ{0, r1}2,rm2(υ01,r1) are the events
TwoStage Stochastic Mixed Integer Linear Programming
between the 'father' node υ^0_{1,r1} and all the realizations {υ_{2,r1}, υ_{2,r2}, . . ., υ_{2,rn2}} of Ω(ξ_2), with the size rm2(υ^0_{1,r1}) = n2. After combining the remaining sample spaces Ω(ξ_3), . . ., Ω(ξ_{I−1}), a scenario space ω = ω(υ^0_{1,r1}) ∈ Ω(υ^0_{1,r1}) ⊆ (ξ_2 × . . . × ξ_{I−1}) associated with node υ^0_{1,r1} is established, with corresponding probabilities π(υ^0_{1,r1}) = {π_1(υ^0_{1,r1}), π_2(υ^0_{1,r1}), . . ., π_Ω(υ^0_{1,r1})}, where π_1(υ^0_{1,r1}) belongs to the first scenario associated with node υ^0_{1,r1} (the path from node υ^0_{1,r1} to node υ^{Cr1}_{I−1,r1} in Figure 3.12) in Ω(υ^0_{1,r1}). The size of Ω(υ^0_{1,r1}) is rm(I−1)(υ^0_{1,r1}) = n2 × . . . × n(I−1). Ω(υ^0_{1,r1}) is also called the scenario group associated with node υ^0_{1,r1}. For the remaining n1 − 1 realizations {υ^0_{1,r2}, . . ., υ^0_{1,ri}, . . ., υ^0_{1,rn1}} of ξ_1, the same scenario tree structure can be built for each, based on a scenario group ω(υ^0_{1,ri}) ∈ Ω(υ^0_{1,ri}) of the same size. The ri-th DDEP-S2S model in period 1 according to scenario group Ω(υ^0_{1,ri}) is described in (3.41)–(3.47):
$$\mathrm{DDEP\text{-}S2S} \mid \Omega(\upsilon^0_{1,r_i}): \quad \max_{x_2^{\Omega(\upsilon^0_{1,r_i})}} \; f_1\big(x_1^*, x_2^{\Omega(\upsilon^0_{1,r_i})}, \upsilon^0_{1,r_i}\big) + EV\Big[ \max_{x_{3,\omega},\ldots,x_{I,\omega}} f_2(x_{3,\omega},\ldots,x_{I,\omega},\xi_{2,\omega},\ldots,\xi_{I-1,\omega}) \Big] \tag{3.41}$$

s.t.

$$h\big(x_1^*, x_2^{\Omega(\upsilon^0_{1,r_i})}, x_{3,\omega},\ldots,x_{I,\omega}, \upsilon^0_{1,r_i}, \xi_{2,\omega},\ldots,\xi_{I-1,\omega}\big) = 0, \tag{3.42}$$

$$g\big(x_1^*, x_2^{\Omega(\upsilon^0_{1,r_i})}, x_{3,\omega},\ldots,x_{I,\omega}, \upsilon^0_{1,r_i}, \xi_{2,\omega},\ldots,\xi_{I-1,\omega}\big) \le 0, \tag{3.43}$$
Figure 3.12. The scenario tree structure of multiperiod uncertainties from period 2.
$$x = \{x_2^{\Omega(\upsilon^0_{1,r_i})}, x_{3,\omega}, \ldots, x_{I,\omega}\} \in X, \tag{3.44}$$

$$\xi_{i,\omega} \in \upsilon_i, \qquad 2 \le i \le I-1, \tag{3.45}$$

$$X = \{\, x \in \mathbb{R}^{n'} \times \mathbb{Z}^{n''} \mid x_{lo} \le x \le x_{up} \,\}, \tag{3.46}$$

$$\omega = 1(\upsilon^0_{1,r_i}), 2(\upsilon^0_{1,r_i}), \ldots, \Omega(\upsilon^0_{1,r_i}). \tag{3.47}$$

Note that (3.41) has the extended form:
$$\max_{x_2^{\Omega(\upsilon^0_{1,r_i})},\, x_{3,\omega},\ldots,x_{I,\omega}} \; \sum_{\omega = 1(\upsilon^0_{1,r_i})}^{\Omega(\upsilon^0_{1,r_i})} \pi_\omega \, f\big(x_1^*, x_2^{\Omega(\upsilon^0_{1,r_i})}, x_{3,\omega},\ldots,x_{I,\omega}, \upsilon^0_{1,r_i}, \xi_{2,\omega},\ldots,\xi_{I-1,\omega}\big) \tag{3.48}$$
It is worth mentioning that Ω(υ^0_{1,ri}) is a fraction (n2 × . . . × n(I−1)) / (n1 × n2 × . . . × n(I−1)) of Ω(υ_0), which indicates that the probabilities π(υ^0_{1,ri}) = {π_1(υ^0_{1,ri}), π_2(υ^0_{1,ri}), . . ., π_Ω(υ^0_{1,ri})} do not sum to 1. Thus, a conditional probability is introduced into the objective function of DDEP-S2S | Ω(υ^0_{1,ri}) to declare that each scenario ω(υ^0_{1,ri}) occurs under the condition of scenario group Ω(υ^0_{1,ri}), as in (3.49). Since 1/π_{Ω(υ^0_{1,ri})} is a constant term and does not affect the optimal results, (3.49) can be simplified to (3.48). The optimal decisions x*_1 from period 1 may contribute to the new model DDEP-S2S | Ω(υ^0_{1,ri}); they are therefore kept in the objective function (3.41) and in the constraints (3.42)–(3.47).
$$\frac{1}{\pi_{\Omega(\upsilon^0_{1,r_i})}} \sum_{\omega = 1(\upsilon^0_{1,r_i})}^{\Omega(\upsilon^0_{1,r_i})} \pi_\omega \, f\big(x_1^*, x_2^{\Omega(\upsilon^0_{1,r_i})}, x_{3,\omega},\ldots,x_{I,\omega}, \upsilon^0_{1,r_i}, \xi_{2,\omega},\ldots,\xi_{I-1,\omega}\big), \qquad \pi_{\Omega(\upsilon^0_{1,r_i})} = \sum_{\omega = 1(\upsilon^0_{1,r_i})}^{\Omega(\upsilon^0_{1,r_i})} \pi_\omega \tag{3.49}$$
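The normalization step can be illustrated numerically. This is a hedged sketch with made-up probabilities and objective values (not data from the chapter): dividing by the constant group mass rescales the expected objective but cannot change which candidate solution is optimal.

```python
# Numeric sketch of the (3.48) vs (3.49) relation: conditional scenario
# probabilities are the unconditional ones divided by the group's total
# probability mass, a constant positive rescaling of the objective.

def conditional_probs(group_probs):
    mass = sum(group_probs)                 # probability mass of the group
    return [p / mass for p in group_probs]  # conditional scenario weights

group = [0.10, 0.15, 0.05]                  # unconditional scenario probs
cond = conditional_probs(group)             # sums to 1.0

values = [8.0, 5.0, 11.0]                   # per-scenario objective values
ev_scaled = sum(p * v for p, v in zip(group, values))   # (3.48)-style sum
ev_cond = sum(p * v for p, v in zip(cond, values))      # (3.49)-style sum
# ev_cond equals ev_scaled / sum(group), so both rank solutions identically
```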
Following the above idea, with the event υ^{Cri}_{i−1,ri} of ξ_{i−1} realized in period i−1, the ri-th DDEP-S2S model in period i is:
$$\mathrm{DDEP\text{-}S2S} \mid \Omega(\upsilon^{Cr_i}_{i-1,r_i}): \quad \max_{x_i^{\Omega(\upsilon^{Cr_i}_{i-1,r_i})}} \; f_1\big(x_{i-1}^{\Omega^F(\upsilon^{Cr_i}_{i-1,r_i})*}, x_i^{\Omega(\upsilon^{Cr_i}_{i-1,r_i})}, \upsilon^{Cr_i}_{i-1,r_i}\big) + EV\Big[ \max_{x_{i+1,\omega},\ldots,x_{I,\omega}} f_2(x_{i+1,\omega},\ldots,x_{I,\omega},\xi_{i,\omega},\ldots,\xi_{I-1,\omega}) \Big] \tag{3.50}$$

s.t.

$$h\big(x_{i-1}^{\Omega^F(\upsilon^{Cr_i}_{i-1,r_i})*}, x_i^{\Omega(\upsilon^{Cr_i}_{i-1,r_i})}, x_{i+1,\omega},\ldots,x_{I,\omega}, \upsilon^{Cr_i}_{i-1,r_i}, \xi_{i,\omega},\ldots,\xi_{I-1,\omega}\big) = 0, \tag{3.51}$$

$$g\big(x_{i-1}^{\Omega^F(\upsilon^{Cr_i}_{i-1,r_i})*}, x_i^{\Omega(\upsilon^{Cr_i}_{i-1,r_i})}, x_{i+1,\omega},\ldots,x_{I,\omega}, \upsilon^{Cr_i}_{i-1,r_i}, \xi_{i,\omega},\ldots,\xi_{I-1,\omega}\big) \le 0, \tag{3.52}$$

$$x = \{x_i^{\Omega(\upsilon^{Cr_i}_{i-1,r_i})}, x_{i+1,\omega}, \ldots, x_{I,\omega}\} \in X, \tag{3.53}$$

$$\xi_{j,\omega} \in \upsilon_j, \qquad i \le j \le I-1, \tag{3.54}$$

$$X = \{\, x \in \mathbb{R}^{n'} \times \mathbb{Z}^{n''} \mid x_{lo} \le x \le x_{up} \,\}, \tag{3.55}$$

$$\omega = 1(\upsilon^{Cr_i}_{i-1,r_i}), 2(\upsilon^{Cr_i}_{i-1,r_i}), \ldots, \Omega(\upsilon^{Cr_i}_{i-1,r_i}). \tag{3.56}$$
Ω^F(υ^{Cri}_{i−1,ri}) represents the scenario group associated with the father node of υ^{Cri}_{i−1,ri}; e.g., Ω^F(υ^0_{1,ri}) is Ω(υ_0). Apparently, Ω(υ^{Cri}_{i−1,ri}) ⊆ Ω^F(υ^{Cri}_{i−1,ri}). Taking the example in Figure 3.13, the DDEP-S2S | Ω(d^0_{2,2}) model in the second production period can be described by (3.57)–(3.63) according to (3.50)–(3.56):
Figure 3.13. The scenario tree structure of MPMU: an example.
$$\mathrm{DDEP\text{-}S2S} \mid \Omega(d^0_{2,2}): \quad \max_{N_2^{\Omega(d^0_{2,2})}} \; f_1\big(N_1(D_1)^*, N_2^{\Omega(d^0_{2,2})}, d^0_{2,2}\big) + EV\Big[ \max_{N_{3,\omega}} f_2(N_{3,\omega}, d^0_{2,2}, \xi_{2,\omega}) \Big] \tag{3.57}$$

s.t.

$$h\big(N_1(D_1)^*, N_2^{\Omega(d^0_{2,2})}, N_{3,\omega}, d^0_{2,2}, \xi_{2,\omega}\big) = 0, \tag{3.58}$$

$$g\big(N_1(D_1)^*, N_2^{\Omega(d^0_{2,2})}, N_{3,\omega}, d^0_{2,2}, \xi_{2,\omega}\big) \le 0, \tag{3.59}$$

$$N = \{N_2^{\Omega(d^0_{2,2})}, N_{3,\omega}\}, \tag{3.60}$$

$$\xi_{2,\omega} \in \upsilon_2, \tag{3.61}$$

$$N \in \{\, N \in \mathbb{Z}^{n'} \mid N_{lo} \le N \le N_{up} \,\}, \tag{3.62}$$

$$\omega = 1(d^0_{2,2}), 2(d^0_{2,2}), \ldots, \Omega(d^0_{2,2}). \tag{3.63}$$
The scenario group Ω(d^0_{2,2}) = {4, 5, 6} ⊆ Ω^F(d^0_{2,2}) = Ω(D_1) = {1, 2, . . ., 9}, in which 1(d^0_{2,2}) = 4(D_1) = 4 on the scenario tree structure of Figure 3.13. Based on the RHS and the DDEP-S2S model (3.50)–(3.56), a scenario group based approach (SGA) is proposed as the solution approach for planning / scheduling under MPMU in a stationary time horizon. The detailed steps of the SGA are listed below:
1. Based on the discrete probability distributions of the independent discrete random variables ξ = {ξ_i, ξ_{i+1}, . . ., ξ_{i+I−1}} and the scheduling / planning length I, compose the scenario tree structure (e.g., as in Figure 3.3) and define the decision variables x_i, . . ., x_{i+I−1} of each period according to the problem specification and the 2SMILP concept.
2. In period i, solve the rm(i−1) DDEP-S2S | Ω(υ^{Cri}_{i−1,ri}) 2SMILP problems and record the optimal solutions x*_i | Ω(υ^{Cri}_{i−1,ri}).
3. In period i+1, build the rm_i DDEP-S2S | Ω(υ^{Cri}_{i,ri}) 2SMILP problems by: fixing all x_i | Ω(υ^{Cri}_{i−1,ri}) variables to the values x*_i | Ω(υ^{Cri}_{i−1,ri}) and deleting all the terms in the objective function which only have contributions from x*_i | Ω(υ^{Cri}_{i−1,ri}); deleting all the constraints which only have contributions from x*_i | Ω(υ^{Cri}_{i−1,ri}); and separating the constraints and the objective terms which include the variables x_{i+1} | Ω(υ^{Cri}_{i,ri}) from the rest.
4. Set i = i+1. If i = I, go to step 5; otherwise go back to step 2.
5. Solve the rm(I−1) MILP problems and output all the optimal solutions recorded. End.

As shown in Figure 3.3, in the SGA, 1 + rm_1 + rm_2 + . . . + rm(I−2) semi-dynamic DDEP-S2S models and rm(I−1) MILP models without any uncertainties at the end of the last period I are
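The steps above can be sketched as a small loop over the tree levels. This is a minimal illustration with a placeholder `solve_model` standing in for the CPLEX-based MILP solves used in the chapter:

```python
# Minimal sketch of the SGA loop (steps 1-5): build the scenario tree,
# solve the root model, then for each later period solve one model per
# node with its parent's decision fixed.  `solve_model` is a placeholder.

def build_tree(branches_per_period):
    """Step 1: nodes per period; each node is its path from the root."""
    levels = [[()]]                              # root node (period 1)
    for n in branches_per_period:                # n realizations per period
        levels.append([path + (r,) for path in levels[-1] for r in range(n)])
    return levels

def solve_model(node, fixed_parent):
    # Placeholder for one DDEP-S2S solve with the parent decision fixed
    # (step 3 fixes x_i* before building the next period's models).
    return fixed_parent + [node]

def sga(branches_per_period):
    levels = build_tree(branches_per_period)
    solutions = {(): solve_model((), [])}        # step 2: root model
    for level in levels[1:]:                     # steps 3-4: later periods
        for node in level:
            solutions[node] = solve_model(node, solutions[node[:-1]])
    return solutions

sols = sga([2, 2, 2])    # 1 + 2 + 4 + 8 = 15 models, as in Figure 4.1
```

With two branches in each of three uncertain periods, the loop solves exactly the fifteen models of the example tree.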
solved following the above iteration process. Similarly, when the distant future with constant expected values is involved in the planning / scheduling, only a few more MILP models need to be solved within the last few periods of the planning / scheduling horizon. "Semi-dynamic" means that within the time horizon I−1, the first stage (the current period) is shifting while the second stage (the near future) is shrinking; the probability distributions of the uncertainties along the periods within the static planning horizon, however, are fixed. Obviously, with more realized uncertainties, the model size of DDEP-S2S decreases dramatically, with a large amount of information eliminated by step 3, but the number of models increases. Thus, the basic idea of this SGA is to decompose the original large scenario group into many small scenario groups whose sizes descend gradually along the stationary time horizon. The non-anticipativity constraints in (2.16) are substituted by the path Cri of υ^{Cri}_{i,ri}, which can be achieved by composing an efficient data structure for υ_i. The function of the data structure is to construct the address information of each node of the tree structure (Figures 3.2 and 3.3) for the problem data, according to the number of periods the planning / scheduling horizon has, the number of branches caused by the probability distributions of the uncertainties in each period, and the number of scenarios the tree has. The branch and bound algorithm is utilized via CPLEX to solve all the MILP problems in step 2 of the RHS / MHS and in steps 2 and 5 of the SGA.
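One possible shape for such a node-addressing structure is sketched below. This is an illustrative assumption, not the chapter's actual implementation: each node path plays the role of the path Cri, and the contiguous block of leaf scenarios under a node identifies the scenario group attached to that node:

```python
# Sketch of a node-addressing data structure for the scenario tree:
# given the branch counts per period, map every node path to the range
# of leaf scenarios it covers (its scenario group).

def node_addresses(branch_counts):
    """Map every node path to the range of leaf scenarios it covers."""
    total = 1
    for n in branch_counts:
        total *= n                               # leaves = n1 * n2 * ...
    addresses = {(): range(total)}               # root covers all scenarios
    frontier = [((), range(total))]
    for n in branch_counts:                      # descend period by period
        nxt = []
        for path, block in frontier:
            width = len(block) // n              # scenarios per child node
            for r in range(n):
                child = path + (r,)
                sub = range(block.start + r * width,
                            block.start + (r + 1) * width)
                addresses[child] = sub
                nxt.append((child, sub))
        frontier = nxt
    return addresses

addr = node_addresses([2, 2, 2])   # the 15-node tree of Figure 4.1
```

Because each node's address already encodes its path and its scenario block, two scenarios share a node exactly when their indices fall in the same block, which is the information that non-anticipativity constraints would otherwise enforce.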
4. THE EXTENSION OF EPS LOGISTICS TO 2SMILP
The deterministic model introduced in Section 3.2 of Chapter 9, A Medium-term Production Planning Problem: The EPS Logistics, is extended to DDEP and DDEP-S2S models, which are denoted by EPS-DDEP and EPS-DDEP-S2S.
4.1. DDEP Extension for RHS / MHS

The planning problem of the EPS production and sales process is assumed to be subject to multi-period multi-uncertainty in the demands B_{i,fp}, in the maximal capacities of the polymerization stage N_i^max, and in the yields of the grain size fractions ρ_{fp,rp}. Thus, EPS-DDEP | Ω_i in rolling / moving step i is given below. The vector of the first-stage variables is:
$$x_i = \big[\, \Delta B_{i-2,f_p},\; N_{i,r_p},\; M_{i,f_p},\; w_{i,p},\; z_{i,p},\; M_{j,l,f_p}\big|_{j+l-1=i} \,\big]^T \tag{4.1}$$

The vector of the second-stage variables is:

$$x_{j,\omega} = \big[\, (M_{j,f_p,\omega}, N_{j,r_p,\omega}, w_{j,p,\omega}, z_{j,p,\omega})\big|_{i+1 \le j \le i+I_2},\; \Delta B_{j,f_p,\omega}\big|_{i+1 \le j \le i+I_2},\; M_{j,l,f_p,\omega}\big|_{i+1 \le j+l-1 \le i+I_2} \,\big]^T \tag{4.2}$$
Obviously, the first- and second-stage variables are mixed-integer. Then the EPS-DDEP | Ω_i under multi-period demand, plant capacity and yields uncertainties can be written as:

$$\begin{aligned}
\max_{x_i,\, x_{i+1,\omega},\ldots,x_{i+I_2,\omega}}\;
& \sum_{p=1}^{P}\Bigg[\sum_{f_p=1}^{F_p}\Big(\sum_{j+l-1=i}\lambda_{j,l,f_p}M_{j,l,f_p}-\alpha_{i,f_p}M_{i,f_p}-\beta_{i-2,f_p}\Delta B_{i-2,f_p}\Big)-\sum_{r_p=1}^{R_p}\gamma_{i,r_p}N_{i,r_p}-\theta_{i,p}w_{i,p}\Bigg]\\
&+\sum_{\omega}\pi_\omega\sum_{j=i+1}^{i+I_2}\sum_{p=1}^{P}\Bigg[\sum_{f_p=1}^{F_p}\Big(\sum_{j'+l-1=j}\lambda_{j',l,f_p}M_{j',l,f_p,\omega}-\alpha_{j,f_p}M_{j,f_p,\omega}-\beta_{j,f_p}\Delta B_{j,f_p,\omega}\Big)-\sum_{r_p=1}^{R_p}\gamma_{j,r_p}N_{j,r_p,\omega}-\theta_{j,p}w_{j,p,\omega}\Bigg]
\end{aligned} \tag{4.3}$$
Constraints involving only first-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{i,r_p} \le N_i^{max} \tag{4.4}$$

$$z_{i,p}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{i,r_p} \qquad \forall p \tag{4.5}$$

$$\sum_{r_p=1}^{R_p} N_{i,r_p} \le z_{i,p}\, S_p^{max} \qquad \forall p \tag{4.6}$$

$$z^*_{i-1,p} - z_{i,p} \le w_{i,p} \qquad \forall p \tag{4.7}$$

$$z_{i,p} - z^*_{i-1,p} \le w_{i,p} \qquad \forall p \tag{4.8}$$

$$w_{i,p} \le z^*_{i-1,p} + z_{i,p} \qquad \forall p \tag{4.9}$$

$$w_{i,p} \le 2 - z^*_{i-1,p} - z_{i,p} \qquad \forall p \tag{4.10}$$

$$\sum_{j+l-1=i} M_{j,l,f_p} + M_{i,f_p} = \sum_{r_p=1}^{R_p} \rho_{i,f_p,r_p} N_{i,r_p} + M^*_{i-1,f_p} \qquad \forall f_p, p \tag{4.11}$$

$$\sum_{l=1}^{2} M^*_{i-2,l,f_p} + M_{i-2,3,f_p} \le B_{i-2,f_p} \qquad \forall p, f_p \tag{4.12}$$

$$\Delta B_{i-2,f_p} = B_{i-2,f_p} - \sum_{l=1}^{2} M^*_{i-2,l,f_p} - M_{i-2,3,f_p} \qquad \forall p, f_p \tag{4.13}$$
Constraints involving first stage and second stage variables:
$$z_{i,p} - z_{i+1,p,\omega} \le w_{i+1,p,\omega} \qquad \forall p, \omega \tag{4.14}$$

$$z_{i+1,p,\omega} - z_{i,p} \le w_{i+1,p,\omega} \qquad \forall p, \omega \tag{4.15}$$

$$w_{i+1,p,\omega} \le z_{i,p} + z_{i+1,p,\omega} \qquad \forall p, \omega \tag{4.16}$$

$$w_{i+1,p,\omega} \le 2 - z_{i,p} - z_{i+1,p,\omega} \qquad \forall p, \omega \tag{4.17}$$

$$\sum_{j+l-1=i+1} M_{j,l,f_p,\omega} + M_{i+1,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{i+1,f_p,r_p,\omega} N_{i+1,r_p,\omega} + M_{i,f_p} \qquad \forall f_p, p, \omega \tag{4.18}$$

$$M^*_{i-1,1,f_p} + M_{i-1,2,f_p} + M_{i-1,3,f_p,\omega} \le B_{i-1,f_p} \qquad \forall p, f_p, \omega \tag{4.19}$$

$$M_{i,1,f_p} + \sum_{l=2}^{L} M_{i,l,f_p,\omega} \le B_{i,f_p} \qquad \forall p, f_p, \omega \tag{4.20}$$

$$\Delta B_{i-1,f_p,\omega} = B_{i-1,f_p} - M^*_{i-1,1,f_p} - M_{i-1,2,f_p} - M_{i-1,3,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.21}$$

$$\Delta B_{i,f_p,\omega} = B_{i,f_p} - M_{i,1,f_p} - \sum_{l=2}^{L} M_{i,l,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.22}$$
Constraints involving only second-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{j,r_p,\omega} \le N_j^{max} \qquad \forall j, \omega,\; i+1 \le j \le i+I_2 \tag{4.23}$$

$$z_{j,p,\omega}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{j,r_p,\omega} \qquad \forall j, p, \omega,\; i+1 \le j \le i+I_2 \tag{4.24}$$

$$\sum_{r_p=1}^{R_p} N_{j,r_p,\omega} \le z_{j,p,\omega}\, S_p^{max} \qquad \forall j, p, \omega,\; i+1 \le j \le i+I_2 \tag{4.25}$$

$$z_{j-1,p,\omega} - z_{j,p,\omega} \le w_{j,p,\omega} \qquad \forall j, p, \omega,\; i+2 \le j \le i+I_2 \tag{4.26}$$

$$z_{j,p,\omega} - z_{j-1,p,\omega} \le w_{j,p,\omega} \qquad \forall j, p, \omega,\; i+2 \le j \le i+I_2 \tag{4.27}$$

$$w_{j,p,\omega} \le z_{j-1,p,\omega} + z_{j,p,\omega} \qquad \forall j, p, \omega,\; i+2 \le j \le i+I_2 \tag{4.28}$$

$$w_{j,p,\omega} \le 2 - z_{j-1,p,\omega} - z_{j,p,\omega} \qquad \forall j, p, \omega,\; i+2 \le j \le i+I_2 \tag{4.29}$$

$$\sum_{k+l-1=j} M_{k,l,f_p,\omega} + M_{j,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{j,f_p,r_p,\omega} N_{j,r_p,\omega} + M_{j-1,f_p,\omega} \qquad \forall j, f_p, p,\; i+2 \le j \le i+I_2 \tag{4.30}$$

$$\sum_{l=1}^{L} M_{j,l,f_p,\omega} \le B_{j,f_p,\omega} \qquad \forall j, p, f_p,\; i+1 \le j \le i+I_2 \tag{4.31}$$

$$\Delta B_{j,f_p,\omega} = B_{j,f_p,\omega} - \sum_{l=1}^{L} M_{j,l,f_p,\omega} \qquad \forall j, p, f_p,\; i+1 \le j \le i+I_2 \tag{4.32}$$

$$M_{j,l,f_p,\omega} = 0 \qquad \forall j, p, f_p,\; j+l-1 > i+I_2 \tag{4.33}$$
4.2. DDEP-S2S Extension for SGA

For a planning horizon with four time periods, we assume that the independent demand uncertainties B2,fp, B3,fp, B4,fp follow the two-point distribution (4.34):
Table 4.1. Demand B_{i,fp} (grain size fractions f1–f5) with equal probability pr = 0.5 after the first period

| Period | EPS type | Event | f1 | f2 | f3 | f4 | f5 |
|---|---|---|---|---|---|---|---|
| i1 | p1 | – | 1.0698 | 1.7455 | 1.3022 | 1.3699 | 0.7723 |
| i1 | p2 | – | 1.7462 | 0.556 | 1.2343 | 1.386 | 0.8174 |
| i2 | p1 | pai1 | 1.48109 | 0.43212 | 2.92994 | 1.34407 | 1.75175 |
| i2 | p1 | pai2 | 0.79751 | 0.23268 | 1.57766 | 0.72373 | 0.94325 |
| i2 | p2 | pai1 | 0.98163 | 2.1593 | 1.63657 | 1.80232 | 1.08121 |
| i2 | p2 | pai2 | 0.52857 | 1.1627 | 0.88123 | 0.97048 | 0.58219 |
| i3 | p1 | pai1 | 0.76167 | 1.9344 | 0.26377 | 1.99875 | 2.32388 |
| i3 | p1 | pai2 | 0.41013 | 1.0416 | 0.14203 | 1.07625 | 1.25132 |
| i3 | p2 | pai1 | 0.84448 | 1.40842 | 2.06674 | 2.01058 | 1.98718 |
| i3 | p2 | pai2 | 0.45472 | 0.75838 | 1.11286 | 1.08262 | 1.07002 |
| i4 | p1 | pai1 | 2.27006 | 1.61434 | 1.18885 | 1.76436 | 1.05274 |
| i4 | p1 | pai2 | 1.22234 | 0.86926 | 0.64015 | 0.95004 | 0.56686 |
| i4 | p2 | pai1 | 0.9737 | 1.54011 | 2.04828 | 0.75608 | 2.39148 |
| i4 | p2 | pai2 | 0.5243 | 0.82929 | 1.10292 | 0.40712 | 1.28772 |
Figure 4.1. The first solution step on the scenario tree.
$$B_{i,f_p} = \begin{cases} a & \text{with probability } pr \\ b & \text{with probability } 1-pr \end{cases} \qquad 2 \le i \le 4 \tag{4.34}$$
B1,fp is certain, since the demand is usually given at the beginning of the planning horizon. The data of the demand profile was randomly generated and is shown in Table 4.1, with equal probabilities of demands 30% above the average and 30% below the average starting from period 2. The average demand in each period is fixed to the maximal capacity of the polymerization stage, 12 batches. The total number of scenarios is Ω = 2³ = 8, and the scenario tree with 15 nodes is visualized in Figure 4.1. As described in Section 3.3, each node on the scenario tree represents an EPS-DDEP-S2S model; thus there are altogether fifteen models, and Figure 4.1 shows the order in which they are solved, using the natural number set {1, 2, 3, . . ., 14, 15} to label the models. Following this order, the SGA and the extended EPS-DDEP-S2S models are given in the following sections.
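The scenario space implied by the two-point distribution can be enumerated directly. This small sketch uses only the figures stated in the text (the 30% deviations and the 12-batch average):

```python
# Enumerating the scenario space of (4.34): three independent two-point
# demands (30% above or below the 12-batch average, pr = 0.5) give
# 2**3 = 8 equiprobable scenarios and a 15-node scenario tree.

from itertools import product

AVG = 12                                    # average demand, in batches
HIGH, LOW = 1.3 * AVG, 0.7 * AVG            # 30% above / below the average

scenarios = list(product([HIGH, LOW], repeat=3))  # demand paths, periods 2-4
probs = [0.5 ** 3] * len(scenarios)               # equal scenario weights

nodes = sum(2 ** k for k in range(4))             # 1 + 2 + 4 + 8 = 15 nodes
```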
4.2.1. The First Solution Step

The number of scenarios: ω = ω(υ⁰) ∈ Ω(υ⁰) = Ω = {1, 2, . . ., 7, 8}, where υ⁰ = B1,fp. The vector of the first-stage variables is
$$x_1 = \big[\, N_{1,r_p},\; M_{1,1,f_p},\; M_{1,f_p},\; w_{1,p},\; z_{1,p} \,\big]^T \tag{4.35}$$

The vector of the second-stage variables is
$$x_{i,\omega} = \big[\, (N_{i,r_p,\omega}, M_{i,f_p,\omega}, w_{i,p,\omega}, z_{i,p,\omega})\big|_{2 \le i \le I},\; \Delta B_{i,f_p,\omega}\big|_{1 \le i \le I},\; M_{i,l,f_p,\omega}\big|_{3 \le i+l \le I+1} \,\big]^T \tag{4.36}$$
where I = 4; obviously the first- and second-stage variables are all mixed-integer. Then the first EPS-DDEP-S2S | Ω(υ⁰) model is written as:

$$\begin{aligned}
\max_{x_1,\, x_{2,\omega}, x_{3,\omega}, x_{4,\omega}}\;
& \sum_{p=1}^{P}\Bigg[\sum_{f_p=1}^{F_p}\Big(\lambda_{1,1,f_p}M_{1,1,f_p}-\alpha_{1,f_p}M_{1,f_p}\Big)-\sum_{r_p=1}^{R_p}\gamma_{1,r_p}N_{1,r_p}-\theta_{1,p}w_{1,p}\Bigg]\\
&+\frac{1}{8}\sum_{\omega=1}^{8}\sum_{p=1}^{P}\Bigg[\sum_{i=2}^{I}\sum_{f_p=1}^{F_p}\Big(\sum_{j+l-1=i}\lambda_{j,l,f_p}M_{j,l,f_p,\omega}-\alpha_{i,f_p}M_{i,f_p,\omega}\Big)-\sum_{i=1}^{I}\sum_{f_p=1}^{F_p}\beta_{i,f_p}\Delta B_{i,f_p,\omega}-\sum_{i=2}^{I}\Big(\sum_{r_p=1}^{R_p}\gamma_{i,r_p}N_{i,r_p,\omega}+\theta_{i,p}w_{i,p,\omega}\Big)\Bigg]
\end{aligned} \tag{4.37}$$
Constraints involving first- and second-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{1,r_p} \le N_1^{max} \tag{4.38}$$

$$z_{1,p}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{1,r_p} \qquad \forall p \tag{4.39}$$

$$\sum_{r_p=1}^{R_p} N_{1,r_p} \le z_{1,p}\, S_p^{max} \qquad \forall p \tag{4.40}$$

$$z_p^0 - z_{1,p} \le w_{1,p} \qquad \forall p \tag{4.41}$$

$$z_{1,p} - z_p^0 \le w_{1,p} \qquad \forall p \tag{4.42}$$

$$z_{1,p} - z_{2,p,\omega} \le w_{2,p,\omega} \qquad \forall p, \omega \tag{4.43}$$

$$z_{2,p,\omega} - z_{1,p} \le w_{2,p,\omega} \qquad \forall p, \omega \tag{4.44}$$

$$w_{1,p} \le z_p^0 + z_{1,p} \qquad \forall p \tag{4.45}$$

$$w_{1,p} \le 2 - z_p^0 - z_{1,p} \qquad \forall p \tag{4.46}$$

$$w_{2,p,\omega} \le z_{1,p} + z_{2,p,\omega} \qquad \forall p, \omega \tag{4.47}$$

$$w_{2,p,\omega} \le 2 - z_{1,p} - z_{2,p,\omega} \qquad \forall p, \omega \tag{4.48}$$

$$M_{1,1,f_p} + M_{1,f_p} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{1,r_p} + M_{0,f_p} \qquad \forall f_p, p \tag{4.49}$$

$$\sum_{j+l-1=2} M_{j,l,f_p,\omega} + M_{2,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p,\omega} N_{2,r_p,\omega} + M_{1,f_p} \qquad \forall l, p, f_p, \omega \tag{4.50}$$

$$M_{1,1,f_p} + \sum_{l=2}^{L} M_{1,l,f_p,\omega} \le B_{1,f_p} \qquad \forall p, f_p, \omega \tag{4.51}$$

$$\Delta B_{1,f_p,\omega} = B_{1,f_p} - M_{1,1,f_p} - \sum_{l=2}^{L} M_{1,l,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.52}$$
Constraints involving only second-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \le N_i^{max} \qquad \forall i, \omega,\; 2 \le i \le I \tag{4.53}$$

$$z_{i,p,\omega}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \qquad \forall i, p, \omega,\; 2 \le i \le I \tag{4.54}$$

$$\sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \le z_{i,p,\omega}\, S_p^{max} \qquad \forall i, p, \omega,\; 2 \le i \le I \tag{4.55}$$

$$z_{i-1,p,\omega} - z_{i,p,\omega} \le w_{i,p,\omega} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.56}$$

$$z_{i,p,\omega} - z_{i-1,p,\omega} \le w_{i,p,\omega} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.57}$$

$$w_{i,p,\omega} \le z_{i-1,p,\omega} + z_{i,p,\omega} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.58}$$

$$w_{i,p,\omega} \le 2 - z_{i-1,p,\omega} - z_{i,p,\omega} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.59}$$

$$\sum_{j+l-1=i} M_{j,l,f_p,\omega} + M_{i,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p,\omega} N_{i,r_p,\omega} + M_{i-1,f_p,\omega} \qquad \forall i, l, f_p, p,\; 3 \le i \le I \tag{4.60}$$

$$\sum_{l=1}^{L} M_{i,l,f_p,\omega} \le B_{i,f_p,\omega} \qquad \forall i, p, f_p,\; 2 \le i \le I \tag{4.61}$$

$$\Delta B_{i,f_p,\omega} = B_{i,f_p,\omega} - \sum_{l=1}^{L} M_{i,l,f_p,\omega} \qquad \forall i, p, f_p,\; 2 \le i \le I \tag{4.62}$$

$$M_{i,l,f_p,\omega} = 0 \qquad \forall i, l, p, f_p,\; 2 \le i \le I,\; 1 \le l \le L,\; i+l-1 > I \tag{4.63}$$
Solving the above EPS-DDEP-S2S | Ω(υ⁰) model (4.37)–(4.63), the optimal solutions of the first- and second-stage variables x*_1, x*_{2,ω}, x*_{3,ω}, x*_{4,ω} | Ω(υ⁰) are obtained, but only the first-stage optimal solution x*_1 = [N*_{1,rp}, M*_{1,1,fp}, M*_{1,fp}, w*_{1,p}, z*_{1,p}] | Ω(υ⁰) is recorded.
4.2.2. The Second Solution Step

Between the two events at the beginning of period two, υ⁰_{1,1} and υ⁰_{1,2}, which are the realizations of demand B2,fp, take the event υ⁰_{1,1}, labeled by the number 2 in Figure 4.1, as an example; its corresponding scenario group is ω = ω(υ⁰_{1,1}) ∈ Ω(υ⁰_{1,1}) = {1, 2, 3, 4} (Figure 4.2). The vector of the first-stage variables is
$$x_2 = \big[\, N_{2,r_p},\; M_{2,f_p},\; M_{1,2,f_p},\; M_{2,1,f_p},\; w_{2,p},\; z_{2,p} \,\big]^T \tag{4.64}$$

The vector of the second-stage variables is

$$x_{i,\omega} = \big[\, (N_{i,r_p,\omega}, M_{i,f_p,\omega}, w_{i,p,\omega}, z_{i,p,\omega})\big|_{3 \le i \le I},\; \Delta B_{i,f_p,\omega}\big|_{1 \le i \le I},\; M_{i,l,f_p,\omega}\big|_{4 \le i+l \le I+1} \,\big]^T \tag{4.65}$$
The EPS-DDEP-S2S | Ω(υ⁰_{1,1}) model is written as:

$$\begin{aligned}
\max_{x_2,\, x_{3,\omega}, x_{4,\omega}}\;
& \sum_{p=1}^{P}\Bigg[\sum_{f_p=1}^{F_p}\Big(\sum_{j+l-1=2}\lambda_{j,l,f_p}M_{j,l,f_p}-\alpha_{2,f_p}M_{2,f_p}\Big)-\sum_{r_p=1}^{R_p}\gamma_{2,r_p}N_{2,r_p}-\theta_{2,p}w_{2,p}\Bigg]\\
&+\frac{1}{4}\sum_{\omega=1}^{4}\sum_{p=1}^{P}\Bigg[\sum_{i=3}^{I}\sum_{f_p=1}^{F_p}\Big(\sum_{j+l-1=i}\lambda_{j,l,f_p}M_{j,l,f_p,\omega}-\alpha_{i,f_p}M_{i,f_p,\omega}\Big)-\sum_{i=1}^{I}\sum_{f_p=1}^{F_p}\beta_{i,f_p}\Delta B_{i,f_p,\omega}-\sum_{i=3}^{I}\Big(\sum_{r_p=1}^{R_p}\gamma_{i,r_p}N_{i,r_p,\omega}+\theta_{i,p}w_{i,p,\omega}\Big)\Bigg]
\end{aligned} \tag{4.66}$$
Figure 4.2. The second and third solution steps on the scenario tree.
Constraints involving first- and second-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{2,r_p} \le N_2^{max} \tag{4.67}$$

$$z_{2,p}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{2,r_p} \qquad \forall p \tag{4.68}$$

$$\sum_{r_p=1}^{R_p} N_{2,r_p} \le z_{2,p}\, S_p^{max} \qquad \forall p \tag{4.69}$$

$$z^*_{1,p} - z_{2,p} \le w_{2,p} \qquad \forall p \tag{4.70}$$

$$z_{2,p} - z^*_{1,p} \le w_{2,p} \qquad \forall p \tag{4.71}$$

$$z_{2,p} - z_{3,p,\omega} \le w_{3,p,\omega} \qquad \forall p, \omega \tag{4.72}$$

$$z_{3,p,\omega} - z_{2,p} \le w_{3,p,\omega} \qquad \forall p, \omega \tag{4.73}$$

$$w_{2,p} \le z^*_{1,p} + z_{2,p} \qquad \forall p \tag{4.74}$$

$$w_{2,p} \le 2 - z^*_{1,p} - z_{2,p} \qquad \forall p \tag{4.75}$$

$$w_{3,p,\omega} \le z_{2,p} + z_{3,p,\omega} \qquad \forall p, \omega \tag{4.76}$$

$$w_{3,p,\omega} \le 2 - z_{2,p} - z_{3,p,\omega} \qquad \forall p, \omega \tag{4.77}$$

$$\sum_{j+l-1=2} M_{j,l,f_p} + M_{2,f_p} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{2,r_p} + M^*_{1,f_p} \qquad \forall l, p, f_p \tag{4.78}$$

$$\sum_{j+l-1=3} M_{j,l,f_p,\omega} + M_{3,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p,\omega} N_{3,r_p,\omega} + M_{2,f_p} \qquad \forall l, p, f_p, \omega \tag{4.79}$$

$$M^*_{1,1,f_p} + M_{1,2,f_p} + M_{1,3,f_p,\omega} \le B_{1,f_p} \qquad \forall p, f_p, \omega \tag{4.80}$$

$$M_{2,1,f_p} + \sum_{l=2}^{L} M_{2,l,f_p,\omega} \le B^{(1\sim4)}_{2,f_p} \qquad \forall p, f_p, \omega \tag{4.81}$$

$$\Delta B_{1,f_p,\omega} = B_{1,f_p} - M^*_{1,1,f_p} - M_{1,2,f_p} - M_{1,3,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.82}$$

$$\Delta B_{2,f_p,\omega} = B^{(1\sim4)}_{2,f_p} - M_{2,1,f_p} - \sum_{l=2}^{L} M_{2,l,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.83}$$
Constraints involving only second-stage variables:

$$\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \le N_i^{max} \qquad \forall i, \omega,\; 3 \le i \le I \tag{4.84}$$

$$z_{i,p,\omega}\, C_p^{min} S_p^{min} \le \sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.85}$$

$$\sum_{r_p=1}^{R_p} N_{i,r_p,\omega} \le z_{i,p,\omega}\, S_p^{max} \qquad \forall i, p, \omega,\; 3 \le i \le I \tag{4.86}$$

$$z_{3,p,\omega} - z_{4,p,\omega} \le w_{4,p,\omega} \qquad \forall p, \omega \tag{4.87}$$

$$z_{4,p,\omega} - z_{3,p,\omega} \le w_{4,p,\omega} \qquad \forall p, \omega \tag{4.88}$$

$$w_{4,p,\omega} \le z_{3,p,\omega} + z_{4,p,\omega} \qquad \forall p, \omega \tag{4.89}$$

$$w_{4,p,\omega} \le 2 - z_{3,p,\omega} - z_{4,p,\omega} \qquad \forall p, \omega \tag{4.90}$$

$$\sum_{j+l-1=4} M_{j,l,f_p,\omega} + M_{4,f_p,\omega} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p,\omega} N_{4,r_p,\omega} + M_{3,f_p,\omega} \qquad \forall p, f_p, \omega \tag{4.91}$$

$$\sum_{l=1}^{L} M_{i,l,f_p,\omega} \le B_{i,f_p,\omega} \qquad \forall i, p, f_p,\; 3 \le i \le I \tag{4.92}$$

$$\Delta B_{i,f_p,\omega} = B_{i,f_p,\omega} - \sum_{l=1}^{L} M_{i,l,f_p,\omega} \qquad \forall i, p, f_p,\; 3 \le i \le I \tag{4.93}$$

$$M_{i,l,f_p,\omega} = 0 \qquad \forall i, l, p, f_p,\; 3 \le i \le I,\; 1 \le l \le L,\; i+l-1 > I \tag{4.94}$$
Solving the above EPS-DDEP-S2S | Ω(υ⁰_{1,1}) model (4.66)–(4.94), the optimal solutions of the first- and second-stage variables x*_2, x*_{3,ω}, x*_{4,ω} | Ω(υ⁰_{1,1}) are obtained, and the first-stage optimal solutions x*_2 = [N*_{2,rp}, M*_{1,2,fp}, M*_{2,1,fp}, M*_{2,fp}, w*_{2,p}, z*_{2,p}] | Ω(υ⁰_{1,1}) are recorded. For the event υ⁰_{1,2}, the corresponding scenario group is ω = ω(υ⁰_{1,2}) ∈ Ω(υ⁰_{1,2}) = {5, 6, 7, 8} (Figure 4.2). Following the same modeling approach as for event υ⁰_{1,1}, the EPS-DDEP-S2S | Ω(υ⁰_{1,2}) model can be formulated, and by solving this model the first-stage optimal solutions x*_2 = [N*_{2,rp}, M*_{1,2,fp}, M*_{2,1,fp}, M*_{2,fp}, w*_{2,p}, z*_{2,p}] | Ω(υ⁰_{1,2}) are recorded. Obviously, the scenario group Ω(υ⁰) is decomposed into two smaller scenario groups Ω(υ⁰_{1,1}) and Ω(υ⁰_{1,2}); thus the problem sizes decrease. Following this scenario grouping
along the planning horizon for the scenario tree in Figure 4.1, the remaining EPS-DDEP-S2S models can be composed and solved. These solution steps are illustrated in the next section.
4.2.3. The Remaining Solution Steps

At the beginning of period three, there are four events (Figure 4.3): υ^{0,1}_{2,1}, υ^{0,1}_{2,2}, υ^{0,2}_{2,3} and υ^{0,2}_{2,4}. Correspondingly, the scenario group Ω(υ⁰_{1,1}) = {1, 2, 3, 4} is decomposed into the scenario groups Ω(υ^{0,1}_{2,1}) = {1, 2} and Ω(υ^{0,1}_{2,2}) = {3, 4}, while the scenario group Ω(υ⁰_{1,2}) = {5, 6, 7, 8} is decomposed into the scenario groups Ω(υ^{0,2}_{2,3}) = {5, 6} and Ω(υ^{0,2}_{2,4}) = {7, 8}. After solving the EPS-DDEP-S2S | Ω(υ^{0,1}_{2,1}), EPS-DDEP-S2S | Ω(υ^{0,1}_{2,2}), EPS-DDEP-S2S | Ω(υ^{0,2}_{2,3}) and EPS-DDEP-S2S | Ω(υ^{0,2}_{2,4}) models, the optimal solutions x*_3(υ^{0,1}_{2,1}), x*_3(υ^{0,1}_{2,2}), x*_3(υ^{0,2}_{2,3}) and x*_3(υ^{0,2}_{2,4}) are recorded. At the beginning of the last period of the planning horizon, there are eight events (Figure 4.4): υ^{0,1,1}_{3,1}, υ^{0,1,1}_{3,2}, υ^{0,1,2}_{3,3}, υ^{0,1,2}_{3,4}, υ^{0,2,3}_{3,5}, υ^{0,2,3}_{3,6}, υ^{0,2,4}_{3,7} and υ^{0,2,4}_{3,8}, each of which corresponds to only one scenario. Solving these decomposed deterministic models, the optimal solutions x*_4(υ^{0,1,1}_{3,1}), x*_4(υ^{0,1,1}_{3,2}), x*_4(υ^{0,1,2}_{3,3}), x*_4(υ^{0,1,2}_{3,4}), x*_4(υ^{0,2,3}_{3,5}), x*_4(υ^{0,2,3}_{3,6}), x*_4(υ^{0,2,4}_{3,7}) and x*_4(υ^{0,2,4}_{3,8}) can be recorded.

Figure 4.3. The fourth to seventh solution steps on the scenario tree.
Figure 4.4. The eighth to fifteenth solution steps on the scenario tree.
Finally, the optimal solutions for every node in every period on the scenario tree are obtained by following the above solution steps. Numerical results are given in Section 5.1.1.
5. NUMERICAL RESULTS

All the EPS-DDEP-S2S models of the SGA and the EPS-DDEP models of the RHS / MHS were implemented in GAMS and solved using CPLEX 12.2 on a 2.99 GHz Intel Dual Core machine with Microsoft Windows XP. It is worth mentioning that the computation times shown in this section, all within two hours, are usually acceptable for practical applications.
5.1. EPS-DDEP-S2S Models on SGA

Following the order of nodes shown in Figures 4.1 to 4.4, the problem sizes and the CPU times for the 15 EPS-DDEP-S2S models are shown in Table 5.1.
Table 5.1. Problem sizes of the 15 models

| Model type | Constraints | Variables | Integer variables | Time (s), 2SMILP | Time (s), EEV | Optimality gap |
|---|---|---|---|---|---|---|
| DDEP-S2S(υ⁰) | 1456 | 1811 | 300 | 6423 | 6.12 | 2.27% |
| DDEP-S2S(υ⁰_{1,1}) | 648 | 757 | 108 | 64 | 0.84 | 0% |
| DDEP-S2S(υ⁰_{1,2}) | 648 | 757 | 108 | 2253 | 9.52 | 1.29% |
| DDEP-S2S(υ^{0,1}_{2,1}) | 270 | 293 | 36 | 0.3 | – | 0% |
| DDEP-S2S(υ^{0,1}_{2,2}) | 270 | 293 | 36 | 1.0 | – | 0% |
| DDEP-S2S(υ^{0,2}_{2,3}) | 270 | 293 | 36 | 0.3 | – | 0% |
| DDEP-S2S(υ^{0,2}_{2,4}) | 270 | 293 | 36 | 1.0 | – | 0% |
| DDEP-S2S(υ^{0,1,1}_{3,1}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,1,1}_{3,2}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,1,2}_{3,3}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,1,2}_{3,4}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,2,3}_{3,5}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,2,3}_{3,6}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,2,4}_{3,7}) | 104 | 115 | 12 | 0.1 | – | 0% |
| DDEP-S2S(υ^{0,2,4}_{3,8}) | 104 | 115 | 12 | 0.1 | – | 0% |
5.1.1. Results Based on Scenarios

Figure 5.1 shows that, for all 8 scenarios, there are only switch-on operations of the finishing lines at the beginning of the planning horizon; the finishing lines are then kept in the state "Operating" until the end of the planning horizon, since switching finishing lines is very expensive. Figure 5.2 shows the demands (Table 4.1), the optimal polymerization numbers, sales, supply deficiencies and storages in the four periods of each of the 8 scenarios:
Figure 5.1. Optimal switch operation w*_{i,p} and operating state z*_{i,p} for 8 scenarios.
a) Scenario 1: the demands are equal to (period 1) or above (periods 2, 3 and 4) the average of 12 batches; thus the plant runs in the full capacity mode (FCM) in each period, and the sales are driven by the polymerization numbers rather than the demands, which results in many supply deficiencies but rare storages.
b) Scenario 2: the plant runs in FCM in each period, with the demand below the average in period 4; the sales are driven by the polymerization numbers, and the sale in period 4 compensates some of the supply deficiencies of periods 2, 3 and 4. As in scenario 1, no products are stored.
c) Scenario 3: the plant runs in FCM in each period, with the demand below the average in period 3 and above the average in period 4. The sales are driven by the polymerization numbers, and the sale in period 3 compensates the supply deficiencies of periods 1 and 2. Although as much as possible is sold in period 4, there is still a supply deficiency due to the high demand and the cut-off effect.
d) Scenario 4: the plant runs in FCM in the first three periods and then, with the demand below the average in periods 3 and 4, runs at a capacity of 10 batches, the safety production mode (SPM), which avoids shutdown operations of the polymerization units in period 4 but results in storage. The sales are driven by the polymerization numbers in the first three periods, while the sale in period 4 is affected by both the polymerization number and the demand. Since the demands in the last two periods are low, the sales are used to compensate the supply deficiencies of the previous periods and the storages are inevitable.
e) Scenario 5: the plant runs in SPM to meet the low demand in period 2, while it runs in FCM in periods 3 and 4 due to the high demand requirements, which also results in supply deficiencies and fewer storages. The sales are demand driven in period 2, polymerization-number driven in period 4, and affected by both in period 3.
Figure 5.2. Optimal solutions of 8 scenarios with demand profiles.
f) Scenario 6: the plant runs in SPM to meet the low demand in periods 2 and 4. The sales are driven by the same rule as in scenario 5. The sale in period 4 compensates the supply deficiency of period 3. Due to the SPM and the demand below the average, products are stored in period 4.
g) Scenario 7: the plant runs in SPM to meet the low demand in periods 2 and 3, but in FCM in period 4 for the high demand requirement. The sales are driven by the demands in the last three periods and, combined with the SPM, the storages become more notable.
h) Scenario 8: the plant runs in SPM in periods 2, 3 and 4 due to the low demand inputs; thus there are no supply deficiencies, and the storages are accumulated along the planning horizon.
In each period, identical values of the optimal solutions across different scenarios indicate the connection (common) nodes which these scenarios share on the scenario tree in Figure 4.1, while the differences indicate the branches caused by the different scenarios.
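The node sharing can be made concrete: scenarios with the same history of realizations up to a period sit on the same tree node and therefore must receive identical decisions up to that period (nonanticipativity). A minimal sketch, with a hypothetical three-realization-per-period tree (the actual tree of Figure 4.1 is not reproduced here):

```python
from itertools import product

# Hypothetical tree: one three-point uncertainty per period, three periods.
realizations = ("low", "avg", "high")
scenarios = list(product(realizations, repeat=3))  # 27 scenario paths

def node_of(scenario, t):
    """Tree node a scenario occupies at the start of period t:
    identified by the history of realizations observed so far."""
    return scenario[:t]

# Scenarios sharing the history up to period 1 share a common node,
# so a nonanticipative policy gives them identical period-1 decisions.
groups = {}
for s in scenarios:
    groups.setdefault(node_of(s, 1), []).append(s)

print(len(scenarios))                            # 27 paths
print(len(groups))                               # 3 nodes after the first branching
print(sorted(len(g) for g in groups.values()))   # [9, 9, 9]
```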
5.1.2. Results for the First Two Periods

In Figure 5.5, i1 l2 denotes the delayed sales in the current period i2 towards the demand in period i1, and i2 l1 denotes the sales towards the current demand in period i2. Since the supply deficiencies Bi,fp (i = 1, 2) are not first-stage variables in periods 1 and 2 (see equations (4.2) and (4.31)), their optimal solutions can only be fixed in periods 3 and 4, because the maximum lateness of demand satisfaction L is set to 3 periods. Thus, the optimal solutions of Bi,fp (i = 1, 2) are not given in Figures 5.4 and 5.5. The same holds for the rest of the simulations in this chapter.
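The (origin period, lateness) bookkeeping behind labels such as i1 l2 can be illustrated with a small allocation routine. This is only an illustration of the indexing, not the model's LP formulation; the function name and the oldest-first rule are assumptions:

```python
L = 3  # maximum lateness of demand satisfaction, in periods (from the text)

def allocate_sales(period, supply, backlog):
    """Serve open demands oldest-first, restricted to origin periods still
    within the lateness window; backlog maps origin period -> open demand.
    Returns {(origin period, lateness): sold batches}."""
    sales = {}
    for origin in sorted(backlog):
        lateness = period - origin + 1  # l1 = current demand, l2 = one period late, ...
        if lateness > L or supply <= 0:
            continue
        served = min(supply, backlog[origin])
        if served > 0:
            sales[(origin, lateness)] = served
            backlog[origin] -= served
            supply -= served
    return sales

# Period 2 with 3 batches available and open demand from periods 1 and 2:
backlog = {1: 1.0, 2: 2.5}
print(allocate_sales(2, 3.0, backlog))  # {(1, 2): 1.0, (2, 1): 2.0}
```

The entry keyed (1, 2) corresponds to the label "i1 l2": a sale made in period 2 against the demand of period 1.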
Figure 5.3. Optimal solutions in period 1 over scenario space Ω(υ0) with demand profiles. [Bar charts (batches) of Demand, Polymerization numbers, Sale and Storage in period 1 (Scenarios 1–8), over grain size fractions f1–f5 and recipes r1–r5 for products p1 and p2.]
Figure 5.4. Optimal solutions in period 2 over scenario space Ω(υ01,1) with demand profiles. [Bar charts (batches) of Demand, Polymerization numbers, Sale and Storage in period 2 (Scenarios 1–4), over grain size fractions f1–f5 and recipes r1–r5 for products p1 and p2.]
Figure 5.5. Optimal solutions in period 2 over scenario space Ω(υ01,2) with demand profiles. [Bar charts (batches) of Demand, Polymerization numbers, Sale and Storage in period 2 (Scenarios 5–8), over grain size fractions f1–f5 and recipes r1–r5 for products p1 and p2.]
5.1.3. The Comparison between 2SMILP and EEV

The comparison of the profits between 2SMILP and EEV on every node of the scenario tree is shown in Figure 5.6. In periods 1, 2 and 3, the comparison is performed not only on the first-stage (realized) profits (the values above the horizontal lines in the colored rectangles) but also on the first-stage plus second-stage (objective) profits (the values below the horizontal lines in the colored rectangles), while in period 4 only the objective profits (equal to the realized profits) are listed, because the models in period 4 are deterministic and hence not limited by the criterion (2.22) in Section 2. In the first three periods, the objective profits of the 2SMILPs on each node are always larger than the objective profits of the EEVs; however, this does not always hold for the realized profits, e.g. 6.65 (2SMILP) is less than 6.91 (EEV) on the lower node at the beginning of period 2. Nevertheless, the average realized profits and the average objective profits show that the 2SMILPs are always better than the EEVs (see Figures 5.7 and 5.8).
Figure 5.6. Comparison of profits between 2SMILP and EEV on the scenario tree.
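The qualitative relation between the two approaches can be reproduced on a toy single-product example. The sketch below uses hypothetical numbers and plain enumeration in place of a MILP solver: the stochastic solution maximizes the expected profit directly, while the EEV first optimizes against the mean demand and then evaluates that fixed first-stage decision over all scenarios, so its expected (objective) profit can only be lower or equal.

```python
# Toy two-stage newsvendor-style model, solved by enumeration (hypothetical data).
PRICE, PROD_COST, STORE_COST, SHORT_PEN = 10.0, 4.0, 3.0, 2.0
SCENARIOS = [(4.0, 0.25), (12.0, 0.50), (16.0, 0.25)]  # (demand, probability)

def profit(x, d):
    """First- plus second-stage profit for production x under demand d."""
    sold = min(x, d)
    return (PRICE * sold - PROD_COST * x
            - STORE_COST * max(x - d, 0) - SHORT_PEN * max(d - x, 0))

def expected_profit(x):
    return sum(p * profit(x, d) for d, p in SCENARIOS)

X = range(21)  # feasible integer batch numbers

# Two-stage stochastic solution: maximize the expected profit directly.
x_sp = max(X, key=expected_profit)

# EEV: optimize against the expected demand, then evaluate that decision.
d_mean = sum(p * d for d, p in SCENARIOS)
x_ev = max(X, key=lambda x: profit(x, d_mean))

print(x_sp, expected_profit(x_sp))  # 12 44.0
print(x_ev, expected_profit(x_ev))  # 11 39.75  (the EEV)
```

Here the stochastic decision (12 batches) hedges against the demand spread, while the expected-value decision (11 batches) is optimal only for the mean demand; evaluating it under all scenarios gives the lower EEV.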
Figure 5.7. Comparison of the first stage (realized) profits between 2SMILP and EEV in periods 1, 2 and 3. [Bar chart of profit per period; 2SMILP vs. EEV.]
Figure 5.8. Comparison of the first stage plus second stage (objective) profits between 2SMILP and EEV in periods 1, 2 and 3. [Bar chart of profit per period; 2SMILP vs. EEV.]
5.2. EPSDDEP Models on RHS

In this example, the time horizon I1 is set to 4 periods in rolling steps 1 and 2, but to 5 periods in rolling step 3. The time horizon I2 is set to 5 periods in rolling step 1, but to 6 periods in rolling steps 2 and 3. The evolution of the uncertainties along the scheduling horizon with demand, yield and plant capacity uncertainties is shown in Figure 5.9.
Demand uncertainties are present in periods 4 and 7 with three scenarios (20% above the average, the average, 20% below the average), and the demand profiles for each rolling step are given in Appendix A, Tables A.2.1 – A.2.3. Three different grain size distributions (Appendix A, Figures A.1.1 – A.1.3) are assumed in periods 2 and 5 (the yield uncertainty resolves in rolling step 2 to Figure A.1.1). Capacity uncertainty is included in periods 3 and 6 (the uncertainty resolves in rolling step 3 to the lowest of the three values). The assumed scenarios of the grain size distribution and of the plant capacity as well as the realized scenarios are shown in Tables 5.2 – 5.4.
Figure 5.9. (Continued)
Figure 5.9. RHS MPMU: the evolution of demand uncertainties ξ_Di (i = 4, 7) with realizations D_i^j (j = 1, 2, 3), yield uncertainties ξ_R2, ξ_R5 with realizations R_2^j, R_5^j (j = 1, 2, 3) and plant capacity uncertainties ξ_P3, ξ_P6 with realizations P_3^j, P_6^j (j = 1, 2, 3) along the planning horizon for three rolling steps. R_2^1 and P_3^3 are the resolved yield and plant capacity; EV8 represents the expected values of demand, plant capacity and yield.
Table 5.2. RHS MPMU: plant capacity and yield profiles in rolling step 1

Period | Plant capacity: value (probability) | Yields: value (probability)
1 | 12 | Figure A.1.3
2 | 11 | Figure A.1.1 (0.25), Figure A.1.3 (0.5), Figure A.1.2 (0.25)
3 | 12 (0.6), 11 (0.1), 9 (0.3) | Figure A.1.3
4 | 11 | Figure A.1.3
5 | 11 | Figure A.1.3
Table 5.3. RHS MPMU: plant capacity and yield profiles in rolling step 2

Period | Plant capacity: value (probability) | Yields: value (probability)
2 | 11 | Figure A.1.1
3 | 12 (0.6), 11 (0.1), 9 (0.3) | Figure A.1.3
4 | 11 | Figure A.1.3
5 | 11 | Figure A.1.1 (0.25), Figure A.1.3 (0.5), Figure A.1.2 (0.25)
6 | 11 | Figure A.1.3
7 | 11 | Figure A.1.3
Table 5.4. RHS MPMU: plant capacity and yield profiles in rolling step 3

Period | Plant capacity: value (probability) | Yields: value (probability)
3 | 9 | Figure A.1.3
4 | 11 | Figure A.1.3
5 | 11 | Figure A.1.1 (0.25), Figure A.1.3 (0.5), Figure A.1.2 (0.25)
6 | 12 (0.6), 11 (0.1), 9 (0.3) | Figure A.1.3
7 | 11 | Figure A.1.3
8 | 11 | Figure A.1.3
Table 5.5. RHS MPMU: problem size

Moving step | Constraints | Variables | Integer variables | Number of scenarios | Computation time (H:M:S), 2SMILP / EEV | Optimality gap
1 | 6,018 | 7,757 | 1,308 | 27 | 01:18:48 / 00:00:72 | 7%
2 | 7,719 | 10,035 | 1,632 | 27 | 00:53:42 / 00:00:11 | 7%
3 | 23,129 | 30,035 | 4,872 | 81 | 07:08:59 / 02:48:08 | 7.5%
Figure 5.10. RHS MPMU: optimal solutions of polymerization numbers and sales for three rolling steps. [Bar charts (batches) of polymerization numbers over recipes r1–r5 and of sales over grain size fractions f1–f5 for products p1 and p2.]
Figure 5.11. RHS MPMU: optimal solutions of supply deficiencies and storages for three rolling steps. [Bar charts (batches) over grain size fractions f1–f5 for products p1 and p2.]
The zero production on p1 in rolling step 3 implies the shutdown of finishing line p1 (Figure 5.10) due to the capacity reduction (Table 5.4). The increase of storage in rolling step 2 (Figure 5.11) reflects the effect of the change of the yields / grain size fractions (Table 5.3).
Figure 5.12. RHS MPMU: comparison of the realized profits between 2SMILP and EEV for three moving steps. [Bar chart of profit per moving step; 2SMILP vs. EEV.]
Figure 5.13. RHS MPMU: comparison of the objective profits between 2SMILP and EEV for three moving steps. [Bar chart of profit per moving step; 2SMILP vs. EEV.]
Under the MPMU, to reach the maximum benefit, polymerization batches are planned so as to satisfy the demand profiles as well as possible, supply deficiencies are fulfilled as soon as possible within the following two rolling steps, and the storage is kept as small as possible in each rolling step. However, in order to reach zero supply deficiencies, storages are often inevitable due to the grain size distributions of the EPS production. As shown in Figure 5.12, the 2SMILP is not better than the EEV in all rolling steps (e.g. in rolling step 2), since the computation was performed on only one node (realization) in each rolling step; computing all nodes, as indicated in Section 5.1.3, would show the advantages of 2SMILP over EEV in every rolling step. However, such a simulation is computationally expensive and is therefore omitted in this chapter. The comparisons for the individual scenarios are shown in Figure 5.14. The lower profits correspond to the scenarios which have lower plant capacities or lower demands in each rolling step.
Figure 5.14. RHS MPMU: 2SMILP compared with EEV for all the scenarios of three rolling steps. [Bar charts of profit over scenarios w1 – w27 (rolling steps 1 and 2) and w1 – w81 (rolling step 3).]
In order to observe how the uncertainties affect the system, it is preferable to employ the moving horizon strategy, because the scenario structure is the same in each moving step.
5.3. EPSDDEP Models on MHS
In this section, the performance of the proposed moving horizon strategy is investigated for four different medium-term planning problems with different assumptions on the uncertainties, which differ from those in Cui and Engell (2010). For comparison, the corresponding EEVs are also evaluated for the same uncertainties.
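A moving horizon loop can be sketched as follows. The "solver" here is a deliberately naive stand-in (produce toward the first-period demand, capped by capacity) and the demand path is hypothetical; in the chapter, this slot is occupied by the 2SMILP over the lookahead horizon:

```python
CAPACITY = 12.0  # batches per period (maximal polymerization capacity)
realized_demand = [12.0, 15.6, 12.0, 8.4, 12.0, 15.6]  # one hypothetical sample path
HORIZON = 3      # lookahead periods per moving step (assumed)

def plan_first_period(window_demands, storage):
    """Stand-in for the 2SMILP solve: only the first-period decision matters."""
    return min(CAPACITY, max(window_demands[0] - storage, 0.0))

storage, deficiency, log = 0.0, 0.0, []
for step, demand in enumerate(realized_demand):
    window = realized_demand[step:step + HORIZON]  # uncertainty resolved for this step
    produce = plan_first_period(window, storage)   # implement the first period only
    available = storage + produce
    sold = min(available, demand)
    deficiency += demand - sold                    # unmet demand accumulates
    storage = available - sold                     # leftover is stored
    log.append((step + 1, produce, sold, storage))

for row in log:
    print("step %d: produce %.1f, sell %.1f, store %.1f" % row)
```

After each step only the first-period decision is implemented, the window moves one period forward, and the next realization becomes known — the mechanism that keeps the scenario structure identical across moving steps.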
5.3.1. Multi-Period Demand Uncertainties (MPDU)

We assume that there are only independent demand uncertainties and that they are resolved at the beginning of each time period. The time horizon I1 is set to 5 periods and the time horizon I2 is set to 6 periods. In this case, the demand uncertainties in each period are modeled by 30% above the average, the average and 30% below the average with the probability distribution 0.25, 0.5 and 0.25. This yields 3^4 = 81 scenarios in the 2SMILP in each moving step i. The demand profiles within I2 for six moving steps were randomly generated and are given in Appendix A, A.3 (the resolved demands are marked with * in Table A.3.1). The average demand in each period is fixed to the maximal capacity of the polymerization stage – 12 batches. The six moving steps shown in Table 5.6 follow one sample path through the average, above the average, the average, below the average, the average and above the average of the demands at the beginning of each period. The results obtained are shown in Figures 5.15 to 5.16. The comparisons between the 2SMILP and the EEV solutions for six moving steps are shown in Figures 5.17 and 5.18, while the comparison for the individual scenarios in moving step 4, which has the largest advantage over the EEV among the six moving steps, is shown in Figure 5.19. In moving step 4 (Figure 5.15), i2 l3 represents the delayed sales in the current period i4 towards the demand in period i2, i3 l2 represents the delayed sales in the current period i4 towards the demand in period i3, and i4 l1 represents the sales towards the current demand in period i4.

Table 5.6. MHS MPDU: problem size

Moving step | Constraints | Variables | Integer variables | Number of scenarios | Computation time (H:M:S), 2SMILP / EEV | Optimality gap
1 | 21,489 | 28,385 | 4,872 | 81 | 00:03:23 / 00:02:27 | 4.8%
2 | 23,109 | 30,015 | 4,872 | 81 | 00:07:30 / 00:04:38 | 2.72%
3 | 23,129 | 30,035 | 4,872 | 81 | 00:05:00 / 00:02:28 | 3.25%
4 | 23,129 | 30,035 | 4,872 | 81 | 00:06:52 / 00:02:39 | 6%
5 | 23,129 | 30,035 | 4,872 | 81 | 00:05:45 / 00:02:14 | 5.7%
6 | 23,129 | 30,035 | 4,872 | 81 | 00:04:00 / 00:03:47 | 3.23%
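The scenario count 3^4 = 81 follows from taking the product of four independent three-point demand uncertainties. A short check using the ±30% levels and the 0.25 / 0.5 / 0.25 probabilities stated above:

```python
from itertools import product

AVG = 12.0  # average demand per period, in batches (from the text)
levels = [(AVG * 1.3, 0.25), (AVG, 0.50), (AVG * 0.7, 0.25)]  # (demand, probability)

# Four uncertain periods -> 3**4 = 81 joint demand scenarios.
scenarios = []
for combo in product(levels, repeat=4):
    demands = tuple(d for d, _ in combo)
    prob = 1.0
    for _, p in combo:
        prob *= p  # independence: the joint probability is the product
    scenarios.append((demands, prob))

print(len(scenarios))                # 81
print(sum(p for _, p in scenarios))  # 1.0
```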
Figure 5.15. MHS MPDU: optimal solutions of polymerization numbers and sales for moving steps 2, 4, 6. [Bar charts (batches) of polymerization numbers over recipes r1–r5 and of sales over grain size fractions f1–f5 for products p1 and p2.]
As shown in Figure 5.19, with only demand uncertainties involved, the profits do not vary significantly across the different scenarios.
5.3.2. Multi-Period Multi-Uncertainty (MPMU)

We assume that the multi-period multi-uncertainties are independent from each other and that they are resolved at the beginning of each time period. The time horizon I1 is set to 4 periods and the time horizon I2 is set to 5 periods in the following two cases.

5.3.2.1. Case 1: Multi-Period Plant Capacity and Demand Uncertainties (CDU)

In this example, the scenario tree structures (27 scenarios within 5 periods) with plant capacity and demand uncertainties (CDU) are the same for the five moving steps i = 1, 2, 3, 4, 5 (Figure 5.20). Three different capacity scenarios are assumed in the periods after the first stage: 12, 11 and 9 polymerizations can be performed in these periods. Thereafter an average capacity of 11 polymerizations per period is assumed. In each moving step, plant capacity
uncertainties are considered in the first two periods with the same distributions, and demand uncertainty is included in the third period, the distributions of which are set to 30% above the average, the average and 30% below the average with the corresponding probabilities 0.25, 0.5 and 0.25. The uncertain demands are fixed to values inside this range in the second period of moving steps two and three. The demand profiles for the current, the immediate and the distant future of each moving step are shown in Appendix A, A.4. The demands, which were generated randomly for each period, are chosen such that they can be met with a capacity of 11 batches per period on average. The five moving steps shown in Table 5.7 follow one capacity sample path (Figure 5.21), and the corresponding data are given in Tables 5.8 – 5.12.
Figure 5.16. MHS MPDU: optimal solutions of supply deficiencies and storages for six moving steps. [Bar charts (batches) over grain size fractions f1–f5 for products p1 and p2.]
Figure 5.17. MHS MPDU: comparison of the realized profits between 2SMILP and EEV for six moving steps. [Bar chart of profit per moving step; 2SMILP vs. EEV.]
Figure 5.18. MHS MPDU: comparison of the objective profits between 2SMILP and EEV for six moving steps. [Bar chart of profit per moving step; 2SMILP vs. EEV.]

Figure 5.19. MHS MPDU: 2SMILP compared with EEV for all the scenarios of moving step 4. [Bar chart of profit over scenarios w1 – w81.]
Figure 5.20. MHS CDU: the evolution of plant capacity uncertainties ξ_P(i+1), ξ_P(i+2) with realizations P_(i+1)^j, P_(i+2)^j (j = 1, 2, 3) and demand uncertainty ξ_D(i+3) with realizations D_(i+3)^j (j = 1, 2, 3) along the planning horizon for five moving steps i = 1, 2, 3, 4, 5. D_i, P_i are the resolved demand and plant capacity; EV_(i+4) indicates the expected values of the demand and of the plant capacity.
Table 5.7. MHS CDU: problem size

Moving step | Constraints | Variables | Integer variables | Number of scenarios | Computation time (H:M:S) | Optimality gap
1 | 6,018 | 7,757 | 1,308 | 27 | 01:06:40 | 4.59%
2 | 6,558 | 8,307 | 1,308 | 27 | 00:49:50 | 6.53%
3 | 6,578 | 8,327 | 1,308 | 27 | 00:33:00 | 3.10%
4 | 6,578 | 8,327 | 1,308 | 27 | 09:47:00 | 2.05%
5 | 6,578 | 8,327 | 1,308 | 27 | 01:33:56 | 2.09%
Table 5.8. MHS CDU: plant capacity in moving step 1

Period | Plant capacity: value (probability)
1 | 12
2 | 12 (0.6), 11 (0.1), 9 (0.3)
3 | 12 (0.6), 11 (0.1), 9 (0.3)
4 | 11
5 | 11
Figure 5.21. CDU 27S5P: capacity profiles for five moving steps. [Chart of capacity (batches) over periods 1 – 9: above average, average, below average, and the realization.]
Table 5.9. MHS CDU: plant capacity in moving step 2

Period | Plant capacity: value (probability)
2 | 11
3 | 12 (0.6), 11 (0.1), 9 (0.3)
4 | 12 (0.6), 11 (0.1), 9 (0.3)
5 | 11
6 | 11
Table 5.10. MHS CDU: plant capacity in moving step 3

Period | Plant capacity: value (probability)
3 | 9
4 | 12 (0.6), 11 (0.1), 9 (0.3)
5 | 12 (0.6), 11 (0.1), 9 (0.3)
6 | 11
7 | 11
Table 5.11. MHS CDU: plant capacity in moving step 4

Period | Plant capacity: value (probability)
4 | 11
5 | 12 (0.6), 11 (0.1), 9 (0.3)
6 | 12 (0.6), 11 (0.1), 9 (0.3)
7 | 11
8 | 11
Table 5.12. MHS CDU: plant capacity in moving step 5

Period | Plant capacity: value (probability)
5 | 12
6 | 12 (0.6), 11 (0.1), 9 (0.3)
7 | 12 (0.6), 11 (0.1), 9 (0.3)
8 | 11
9 | 11
Figure 5.22. MHS CDU: optimal solutions of polymerization numbers and sales for the five moving steps. [Bar charts (batches) of polymerization numbers over recipes r1–r5 and of sales over grain size fractions f1–f5 for products p1 and p2.]
The planning results of the five moving steps of the 2SMILP for CDU 27S5P are shown in Figures 5.22 to 5.24. The corresponding EEV solutions are infeasible. Instead of running in FCM, the plant compromises and runs in a relatively low capacity mode in moving step 1 as a preventive action against the unit breakdowns in the future. The MHS builds up storage of p2 in moving step 2, and plans the production of product p1 in moving steps 3 and 4 (Figure 5.22) with a strongly reduced capacity (Table 5.10) because of the capacity constraint of the finishing lines CAPFINLOW (3.3) and the constraint of minimal closing successive periods MINCLOSE (3.8) listed in Section 3.2 of the chapter A Medium-term Production Planning Problem: The EPS Logistics. The significant reduction of the plant capacity in some scenarios leads to a significant change in the medium-term planning by the 2SMILP approach compared to planning based upon average capacities (Figure 5.24), and also to reductions of the profits (Figures 5.25 and 5.26).
Figure 5.23. MHS CDU: optimal solutions of supply deficiencies and storages for the five moving steps. [Bar charts (batches) over grain size fractions f1–f5 for products p1 and p2.]
Figure 5.24. MHS CDU: comparison of the polymerization numbers for scenarios 9, 18 and 27 computed by the 2SMILP approach and the expected value obtained from the deterministic approach. [Bar charts of polymerization numbers over recipes r1–r5 for products p1, p2 in periods i1 – i5.]

Figure 5.25. MHS CDU: the realized profits and the objective for five moving steps. [Bar chart of profit per moving step.]

Figure 5.26. MHS CDU: profits over scenarios for five moving steps. [Bar chart of profit over scenarios w1 – w27 for moving steps 1 – 5.]
5.3.2.2. Case 2: Multi-Period Yields, Plant Capacity and Demand Uncertainties (YCDU)

In this example, the scenario tree structures (36 scenarios within 5 periods) with yield, plant capacity and demand uncertainties (YCDU) are the same for the three moving steps i = 1, 2, 3 (Figure 5.27) along the planning horizon. Different yield scenarios are assumed in the periods after the first stage: the nominal and the drifting yields shown in Appendix A, A.1 can occur in these periods. Thereafter the yield of Figure A.1.3 per period is assumed. Three capacity scenarios are assumed in the periods after the period of uncertain yields: 12, 11 and 9 polymerizations can be performed in these periods. Thereafter an average capacity of 11 polymerizations per period is assumed. In each moving step, yield and plant capacity uncertainties are considered in the first two periods with the same distributions, and demand uncertainty is included in the third period, the distributions of which are 30% above the average, the average and 30% below the average with the corresponding probabilities 0.25, 0.5 and 0.25. The demand profiles for the current, the immediate and the distant future of each moving step are shown in Appendix A, A.5. The demands were generated randomly such that they can be met with a capacity of 11 batches per period on average. The data on the yield and plant capacity scenarios for the three moving steps are given in Tables 5.13 – 5.15.
Figure 5.27. YCDU 36S5P: the evolution of yield uncertainty ξR,i+1, ξR,i+2 with realizations Rj,i+1, Rj,i+2 (j = 1, 2), plant capacity uncertainty ξP,i+2 with realizations Pj,i+2 (j = 1, 2, 3) and demand uncertainty ξD,i+3 with realizations Dj,i+3 (j = 1, 2, 3) along the planning horizon for the three moving steps i = 1, 2, 3. Ri, Di, Pi are the resolved yield, demand and plant capacity; EVi+4 contains the expected values of yield, demand and plant capacity.
Two-Stage Stochastic Mixed Integer Linear Programming
Table 5.13. MHS YCDU: yield and plant capacity profiles in moving step 1

Period   Plant capacity, value (probability)   Yields, value (probability)
1        12                                    Figure A.1.3
2        11                                    Figure A.1.1 (0.5), Figure A.1.2 (0.5)
3        12 (0.6), 11 (0.1), 9 (0.3)           Figure A.1.1 (0.5), Figure A.1.2 (0.5)
4        11                                    Figure A.1.3
5        11                                    Figure A.1.3
Table 5.14. MHS YCDU: yield and plant capacity profiles in moving step 2

Period   Plant capacity, value (probability)   Yields, value (probability)
2        11                                    Figure A.1.4
3        11                                    Figure A.1.1 (0.5), Figure A.1.2 (0.5)
4        12 (0.6), 11 (0.1), 9 (0.3)           Figure A.1.1 (0.5), Figure A.1.2 (0.5)
5        11                                    Figure A.1.3
6        11                                    Figure A.1.3
Table 5.15. MHS YCDU: yield and plant capacity profiles in moving step 3

Period   Plant capacity, value (probability)   Yields, value (probability)
3        11                                    Figure A.1.5
4        9                                     Figure A.1.1 (0.5), Figure A.1.2 (0.5)
5        12 (0.6), 11 (0.1), 9 (0.3)           Figure A.1.1 (0.5), Figure A.1.2 (0.5)
6        11                                    Figure A.1.3
7        11                                    Figure A.1.3
Besides the significant unit breakdowns in the second period of moving step 3, the randomly chosen yield in Figure A.1.5 differs from the average yield in Figure A.1.3 much more than the yield in Figure A.1.4 does, which leads to a slow convergence speed (Table 5.16). The planning results of the three moving steps are shown in Figures 5.28 to 5.32.
Table 5.16. MHS YCDU: problem size

Moving   Constraints   Variables   Integer     Number of   Computation time (H:M:S)   Optimality
step                               variables   scenarios   2SMILP      EEV            gap
1        8,016         10,331      1,740       36          01:01:40    00:02:54       11.28%
2        8,736         11,061      1,740       36          00:45:22    00:01:13       10%
3        8,756         11,081      1,740       36          01:32:40    00:00:29       20%
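The optimality gaps above are relative MILP gaps between the incumbent solution and the best bound found by branch and bound within the allotted computation time. A minimal sketch (the exact normalization used by the authors' solver is an assumption; conventions differ between solvers):

```python
def optimality_gap(incumbent: float, best_bound: float) -> float:
    """Relative optimality gap for a maximization MILP: how far the
    best known solution may still lie below the best dual bound.
    Normalizing by the incumbent is one common convention (assumed here)."""
    return abs(best_bound - incumbent) / abs(incumbent)

# e.g. an incumbent profit of 40 with a best bound of 48 gives a 20% gap
print(round(100 * optimality_gap(40.0, 48.0), 2))  # 20.0
```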
[Figure 5.28 presents paired bar charts for each of the three moving steps: the optimal polymerization numbers (batches per recipe r1–r5) and the optimal sales (batches per grain size fraction f1–f5).]
Figure 5.28. MHS YCDU: optimal solutions of polymerization numbers and sales for three moving steps.
[Figure 5.29 presents bar charts of the optimal storage levels for moving steps 1 and 2 and of the supply deficiencies and storage for moving step 3, in batches per grain size fraction f1–f5.]
Figure 5.29. MHS YCDU: optimal solutions of supply deficiencies and storage for three moving steps.
Due to the variation of the yields, more is always produced and stored in the three moving steps (Figure 5.29). Figure 5.30 shows that the yield variation and the breakdown of reactors in moving step 3 lead to a large decrease of the realized profit.
Figure 5.30. MHS YCDU: comparison of the realized profits between 2SMILP and EEV for three moving steps.
Figure 5.31. MHS YCDU: comparison of the objective profits between 2SMILP and EEV for three moving steps.
Figure 5.32. MHS YCDU: 2SMILP compared with EEV for all scenarios of three moving steps.
An example with 162 scenarios within 5 periods was also simulated. The scenario tree structures with yield, plant capacity and demand uncertainties are the same for each moving step: the evolution of yield uncertainty ξR,i+1 with realizations Rj,i+1 (j = 1, 2), plant capacity uncertainties ξP,i+2, ξP,i+3 with realizations Pj,i+2, Pj,i+3 (j = 1, 2, 3) and demand uncertainties ξD,i+2, ξD,i+3 with realizations Dj,i+2, Dj,i+3 (j = 1, 2, 3) along the planning horizon. The result (Table 5.17) shows that the optimality gap had only converged to 453% after one month of computation for the first moving step.

Table 5.17. MHS YCDU 162S5P: problem size

Moving   Constraints   Variables   Integer     Number of   Computation      Optimality
step                               variables   scenarios   time (H:M:S)     gap
1        35,988        46,367      7,788       162         720:00:00        453%
6. CONCLUSION
Planning and scheduling in the process industry help enterprises to stay competitive in today's highly competitive global market environment. A major challenge is to capture and handle the dynamics of the whole process logistics, especially under various uncertainties and in the presence of integer variables. Essentially, the consideration of uncertainty validates the use of mathematical models and preserves plant feasibility and viability during operations, while integer constraints are an inevitable issue in modeling process logistics. Accordingly, two-stage stochastic mixed integer linear programming with recourse (2SMILP) is studied in depth and employed in this chapter as the solution technique for the medium-term planning problem of a modified multiproduct batch plant under evolving multi-period multi-uncertainty (MPMU).
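The comparison between the 2SMILP (here-and-now) solution and the EEV solution used throughout this chapter can be illustrated on a toy two-stage problem. All numbers below are invented for illustration, and the brute-force enumeration stands in for a MILP solver:

```python
# Hypothetical mini-instance: choose an integer number of batches x before
# demand is known (first stage); sell min(x, d) and pay a shortage penalty
# on unmet demand (second-stage recourse). All data invented.
COST, PRICE, PENALTY = 2.0, 6.0, 3.0
SCENARIOS = [(2, 0.25), (5, 0.50), (8, 0.25)]   # (demand, probability)

def expected_profit(x: int) -> float:
    """Expected profit of first-stage decision x over all scenarios."""
    return sum(p * (PRICE * min(x, d) - PENALTY * max(d - x, 0))
               for d, p in SCENARIOS) - COST * x

# 2SMILP: optimize against the full scenario set (deterministic equivalent,
# solved here by enumeration over the small integer first stage)
x_sp = max(range(11), key=expected_profit)
rp = expected_profit(x_sp)

# EEV: first solve the expected-value problem, then evaluate that (generally
# suboptimal) first-stage decision under the true scenarios
d_bar = sum(p * d for d, p in SCENARIOS)
x_ev = max(range(11), key=lambda x: PRICE * min(x, d_bar)
           - PENALTY * max(d_bar - x, 0) - COST * x)
eev = expected_profit(x_ev)

print(x_sp, rp, x_ev, eev)   # the gap rp - eev (>= 0) is the value of the
print(rp - eev)              # stochastic solution (VSS)
```

In this instance the stochastic solution overproduces relative to the expected-value solution to hedge against the high-demand scenario, which is exactly the kind of behavior the 2SMILP-versus-EEV comparisons in Section 5 exhibit.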
Figure 6.1. 2SMILP framework.
To express the MPMU explicitly, a novel dynamic deterministic equivalent 2SMILP formulation, DDEP, is developed, and a real-time online optimization approach, the rolling horizon strategy (RHS) with its specific version, the moving horizon strategy (MHS), is proposed to deal with the evolution process of the MPMU. The idea of RHS/MHS is to slide the DDEP(Ωi) (3.7) – (3.13) along the realized scenarios of the (combined) uncertainties at the beginning of each period on the dynamically constructed scenario tree (Figure 3.2), within a rolling/moving time horizon I2 (Figure 3.1) of the standard time axis, in order to face the evolving MPMU. When the fixed planning/scheduling horizon equals the time horizon I2, the evolution process of the MPMU terminates, and the DDEP(Ωi) reduces to DDEP-S2S(Ωi) (3.50) – (3.56), the semi-dynamic deterministic equivalent 2SMILP formulation with a shrinking second stage. Accordingly, a scenario-group-based approach (SGA) via a class of interrelated DDEP-S2S formulations is proposed for a constant tree structure with static information about the uncertainties. The idea of SGA is to apply DDEP-S2S at every node of the static scenario tree in order to approximate the multi-stage stochastic program with a shrinking second stage. The framework of the 2SMILP is thus given in Figure 6.1. The complete information about the decision variables in the two stages and about the uncertainties is captured, on the one hand, by the correct definitions of these variables in the two-stage stochastic setting (the second-stage variables are those which connect to the future, as in the 2SMILP extensions described in Section 4) and, on the other hand, by the data structure mentioned at the end of Section 3.3, which ensures the consistency of the uncertain parameters at every node of the scenario tree structure instead of requiring non-anticipativity constraints. Feasibility is maintained due to this complete information representation and the monolithic modeling technique introduced in this chapter.
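The RHS/MHS loop described above can be sketched schematically. Here `sample_realization` and `solve_two_stage` are hypothetical stand-ins for observing the realized uncertainty and for solving DDEP(Ωi) over the current horizon; only the first-period (here-and-now) decision of each solve is implemented before the horizon slides forward:

```python
import random

random.seed(0)  # reproducible stand-in for the realized uncertainty path

def sample_realization(step: int) -> str:
    # stand-in for the realized (combined) uncertainty at the start of a period
    return random.choice(["nominal", "drifting"])

def solve_two_stage(state: dict, horizon: int, realized: str) -> dict:
    # stand-in for solving DDEP over `horizon` periods; returns only the
    # first-period decision, which is the part that is actually implemented
    return {"produce": 11 if realized == "nominal" else 9,
            "period": state["period"]}

HORIZON_I2 = 5                       # rolling/moving time horizon (periods)
state = {"period": 1, "history": []}

for step in range(1, 4):             # three moving steps, as in Section 5.3.2.2
    realized = sample_realization(step)               # uncertainty resolves
    decision = solve_two_stage(state, HORIZON_I2, realized)
    state["history"].append(decision)                 # implement first period
    state["period"] += 1                              # slide the horizon

print(len(state["history"]))  # 3 implemented first-period decisions
```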
When applied to the modified EPS logistics model, the RHS, MHS and SGA methods reveal the reactions of the material flows to changes of the demands, the polymerization capacity and the yields, and they perform well for small to medium-size examples. However, the computation times are still quite long, especially when large scenario sets with MPMU are involved, which points at the need for other solution techniques. Additionally, it is also computationally expensive to run large numbers of rolling/moving steps, which would allow the reactions of the system to the various uncertainties to be observed better. Although comparisons of the 2SMILP solution to the EEV solution were investigated, they were only performed on one path of realizations of the (combined) uncertainties along the rolling/moving steps because of the computational challenge of covering all cases. Moreover, how to increase the advantage of 2SMILP over EEV is certainly worth investigating.
APPENDIX A: PROBLEM DATA

A.1. Grain Size Distributions

Nominal and drifting yields of the grain size fractions fp according to recipe rp (identical for products A and B) are shown in Figures A.1.1 to A.1.5, respectively.
Figure A.1.1. Nominal grain size distribution.
Figure A.1.2. Disturbed grain size distribution.
Figure A.1.3. Grain size distribution obtained in period 1 and most probable distribution in steps 2 and 3.
Figure A.1.4. Realized grain size distribution in moving step 2.
Figure A.1.5. Realized grain size distribution in moving step 3.
A.2. Problem Data for Section 5.2
Table A.2.1. RHS MPDU: demand profile within I2 for rolling step 1

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i1       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i1       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i2       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i2       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i3       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i4       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i5       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i5       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
Table A.2.2. RHS MPDU: demand profile within I2 for rolling step 2

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i2       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i2       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i3       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i4       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i5       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i5       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i6       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i6       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
i7       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i7       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
Table A.2.3. RHS MPDU: demand profile within I2 for rolling step 3

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i3       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i4       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i5       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i5       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i6       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i6       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i7       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i7       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i7       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i7       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i7       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i7       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i8       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i8       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
A.3. Problem Data for Section 5.3.1
Table A.3.1. MHS MPDU: demand profiles within I1 for six moving steps

                                Grain size fraction
Period   EPS type   Event    f1        f2        f3        f4        f5
i1       p1         -        1.0698    1.7455    1.3022    1.3699    0.7723
i1       p2         -        1.7462    0.556     1.2343    1.386     0.8174
i2       p1         pai1*    1.48109   0.43212   1.34407   1.75175   2.92994
i2       p1         pai2     1.1393    0.3324    1.0339    1.3475    2.2538
i2       p1         pai3     0.79751   0.23268   0.72373   0.94325   1.57766
i2       p2         pai1*    0.98163   2.1593    1.80232   1.08121   1.63657
i2       p2         pai2     0.7551    1.661     1.3864    0.8317    1.2589
i2       p2         pai3     0.52857   1.1627    0.97048   0.58219   0.88123
i3       p1         pai1     0.76167   1.9344    1.99875   2.32388   0.26377
i3       p1         pai2*    0.5859    1.488     1.5375    1.7876    0.2029
i3       p1         pai3     0.41013   1.0416    1.07625   1.25132   0.14203
i3       p2         pai1     0.84448   1.40842   2.01058   1.98718   2.06674
i3       p2         pai2*    0.6496    1.0834    1.5466    1.5286    1.5898
i3       p2         pai3     0.45472   0.75838   1.08262   1.07002   1.11286
i4       p1         pai1     2.27006   1.61434   1.76436   1.05274   1.18885
i4       p1         pai2     1.7462    1.2418    1.3572    0.8098    0.9145
i4       p1         pai3*    1.22234   0.86926   0.95004   0.56686   0.64015
i4       p2         pai1     0.9737    1.54011   0.75608   2.39148   2.04828
i4       p2         pai2     0.749     1.1847    0.5816    1.8396    1.5756
i4       p2         pai3*    0.5243    0.82929   0.40712   1.28772   1.10292
i5       p1         pai1     1.91139   1.12567   1.59289   1.9578    2.54176
i5       p1         pai2*    1.4703    0.8659    1.2253    1.506     1.9552
i5       p1         pai3     1.02921   0.60613   0.85771   1.0542    1.36864
i5       p2         pai1     1.74382   0.44928   1.07757   1.90008   1.29987
i5       p2         pai2*    1.3414    0.3456    0.8289    1.4616    0.9999
i5       p2         pai3     0.93898   0.24192   0.58023   1.02312   0.69993
i6       p1         pai1*    1.83625   1.28076   1.86537   2.04815   1.17988
i6       p1         pai2     1.4125    0.9852    1.4349    1.5755    0.9076
i6       p1         pai3     0.98875   0.68964   1.00443   1.10285   0.63532
i6       p2         pai1*    1.67271   1.32184   1.11046   1.93323   1.35148
i6       p2         pai2     1.2867    1.0168    0.8542    1.4871    1.0396
i6       p2         pai3     0.90069   0.71176   0.59794   1.04097   0.72772
i7       p1         pai1     1.83625   1.28076   1.86537   2.04815   1.17988
i7       p1         pai2     1.4125    0.9852    1.4349    1.5755    0.9076
i7       p1         pai3     0.98875   0.68964   1.00443   1.10285   0.63532
i7       p2         pai1     1.67271   1.32184   1.11046   1.93323   1.35148
i7       p2         pai2     1.2867    1.0168    0.8542    1.4871    1.0396
i7       p2         pai3     0.90069   0.71176   0.59794   1.04097   0.72772
i8       p1         pai1     1.83625   1.28076   1.86537   2.04815   1.17988
i8       p1         pai2     1.4125    0.9852    1.4349    1.5755    0.9076
i8       p1         pai3     0.98875   0.68964   1.00443   1.10285   0.63532
i8       p2         pai1     1.67271   1.32184   1.11046   1.93323   1.35148
i8       p2         pai2     1.2867    1.0168    0.8542    1.4871    1.0396
i8       p2         pai3     0.90069   0.71176   0.59794   1.04097   0.72772
i9       p1         pai1     1.83625   1.28076   1.86537   2.04815   1.17988
i9       p1         pai2     1.4125    0.9852    1.4349    1.5755    0.9076
i9       p1         pai3     0.98875   0.68964   1.00443   1.10285   0.63532
i9       p2         pai1     1.67271   1.32184   1.11046   1.93323   1.35148
i9       p2         pai2     1.2867    1.0168    0.8542    1.4871    1.0396
i9       p2         pai3     0.90069   0.71176   0.59794   1.04097   0.72772
i10      p1         pai1     1.83625   1.28076   1.86537   2.04815   1.17988
i10      p1         pai2     1.4125    0.9852    1.4349    1.5755    0.9076
i10      p1         pai3     0.98875   0.68964   1.00443   1.10285   0.63532
i10      p2         pai1     1.67271   1.32184   1.11046   1.93323   1.35148
i10      p2         pai2     1.2867    1.0168    0.8542    1.4871    1.0396
i10      p2         pai3     0.90069   0.71176   0.59794   1.04097   0.72772

* Marks the resolved demands in the corresponding moving steps.
Table A.3.2. MHS MPDU: demand profiles within I2 \ I1 for six moving steps

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i6       p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i6       p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
i7       p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i7       p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
i8       p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i8       p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
i9       p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i9       p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
i10      p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i10      p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
i11      p1         EV      1.4125   0.9852   1.4349   1.5755   0.9076
i11      p2         EV      1.2867   1.0168   0.8542   1.4871   1.0396
A.4. Problem Data for Section 5.3.2.1

Table A.4.1. Demand profiles in the current, immediate and distant future: moving step 1

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i1       p1         -       1.0649   1.208    1.3619   0.8463   0.4443
i1       p2         -       0.7948   0.6101   1.6953   1.4146   1.5598
i2       p1         pai1    0.4688   1.1904   1.23     1.4301   1.2229
i2       p1         pai2    0.4688   1.1904   1.23     1.4301   1.2229
i2       p1         pai3    0.4688   1.1904   1.23     1.4301   1.2229
i2       p2         pai1    0.5197   0.8667   1.2719   1.2373   1.5624
i2       p2         pai2    0.5197   0.8667   1.2719   1.2373   1.5624
i2       p2         pai3    0.5197   0.8667   1.2719   1.2373   1.5624
i3       p1         pai1    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p1         pai2    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p1         pai3    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p2         pai1    1.0731   0.6927   0.6631   1.1692   2.1999
i3       p2         pai2    1.0731   0.6927   0.6631   1.1692   2.1999
i3       p2         pai3    1.0731   0.6927   0.6631   1.1692   2.1999
i4       p1         pai1    1.5562   0.8627   1.2916   2.5877   1.1622
i4       p1         pai2    1.1971   0.6636   0.9935   1.9905   0.894
i4       p1         pai3    0.838    0.4645   0.6955   1.3933   0.6258
i4       p2         pai1    1.0212   1.4648   1.2791   1.3      1.7745
i4       p2         pai2    0.7855   1.1268   0.9839   1        1.365
i4       p2         pai3    0.5498   0.7888   0.6887   0.7      0.9555
i5       p1         EV      0.3753   1.2985   1.1926   1.2257   0.9148
i5       p2         EV      1.1427   1.2437   1.0134   1.1896   1.4036
Table A.4.2. Demand profiles in the current, immediate and distant future: moving step 2

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i2       p1         -       0.4688   1.1904   1.23     1.4301   1.2229
i2       p2         -       0.5197   0.8667   1.2719   1.2373   1.5624
i3       p1         pai1    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p1         pai2    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p1         pai3    1.1762   0.2764   0.9803   1.2048   1.5641
i3       p2         pai1    1.0731   0.6927   0.6631   1.1692   2.1999
i3       p2         pai2    1.0731   0.6927   0.6631   1.1692   2.1999
i3       p2         pai3    1.0731   0.6927   0.6631   1.1692   2.1999
i4       p1         pai1    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p1         pai2    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p1         pai3    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p2         pai1    0.9426   1.3522   1.1807   1.2      1.638
i4       p2         pai2    0.9426   1.3522   1.1807   1.2      1.638
i4       p2         pai3    0.9426   1.3522   1.1807   1.2      1.638
i5       p1         pai1    0.4879   1.6881   1.5504   1.5934   1.1892
i5       p1         pai2    0.3753   1.2985   1.1926   1.2257   0.9148
i5       p1         pai3    0.2627   0.9089   0.8348   0.858    0.6404
i5       p2         pai1    1.4855   1.6168   1.3174   1.5465   1.8247
i5       p2         pai2    1.1427   1.2437   1.0134   1.1896   1.4036
i5       p2         pai3    0.7999   0.8706   0.7094   0.8327   0.9825
i6       p1         EV      0.8178   0.7177   0.9179   1.221    0.7153
i6       p2         EV      0.5996   0.96     0.9782   1.1618   0.7108
Table A.4.3. Demand profiles in the current, immediate and distant future: moving step 3

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i3       p1         -       1.1762   0.2764   0.9803   1.2048   1.5641
i3       p2         -       1.0731   0.6927   0.6631   1.1692   2.1999
i4       p1         pai1    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p1         pai2    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p1         pai3    1.4365   0.7963   1.1922   2.3886   1.0728
i4       p2         pai1    0.9426   1.3522   1.1807   1.2      1.638
i4       p2         pai2    0.9426   1.3522   1.1807   1.2      1.638
i4       p2         pai3    0.9426   1.3522   1.1807   1.2      1.638
i5       p1         pai1    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p1         pai2    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p1         pai3    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p2         pai1    0.9142   0.995    0.8107   0.9517   1.1229
i5       p2         pai2    0.9142   0.995    0.8107   0.9517   1.1229
i5       p2         pai3    0.9142   0.995    0.8107   0.9517   1.1229
i6       p1         pai1    1.0631   0.933    1.1933   1.5873   0.9299
i6       p1         pai2    0.8178   0.7177   0.9179   1.221    0.7153
i6       p1         pai3    0.5725   0.5024   0.6425   0.8547   0.5007
i6       p2         pai1    0.7795   1.248    1.2717   1.5103   0.924
i6       p2         pai2    0.5996   0.96     0.9782   1.1618   0.7108
i6       p2         pai3    0.4197   0.672    0.6847   0.8133   0.4976
i7       p1         EV      1.3661   1.0676   1.5909   1.5988   1.2583
i7       p2         EV      0.4256   1.4097   2.0668   1.8083   0.608
Table A.4.4. Demand profiles in the current, immediate and distant future: moving step 4

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i4       p1         -       1.4365   0.7963   1.1922   2.3886   1.0728
i4       p2         -       0.9426   1.3522   1.1807   1.2      1.638
i5       p1         pai1    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p1         pai2    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p1         pai3    0.3002   1.0388   0.9541   0.9806   0.7318
i5       p2         pai1    0.9142   0.995    0.8107   0.9517   1.1229
i5       p2         pai2    0.9142   0.995    0.8107   0.9517   1.1229
i5       p2         pai3    0.9142   0.995    0.8107   0.9517   1.1229
i6       p1         pai1    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p1         pai2    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p1         pai3    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p2         pai1    0.6596   1.056    1.076    1.278    0.7819
i6       p2         pai2    0.6596   1.056    1.076    1.278    0.7819
i6       p2         pai3    0.6596   1.056    1.076    1.278    0.7819
i7       p1         pai1    1.7759   1.3879   2.0682   2.0784   1.6358
i7       p1         pai2    1.3661   1.0676   1.5909   1.5988   1.2583
i7       p1         pai3    0.9563   0.7473   1.1136   1.1192   0.8808
i7       p2         pai1    0.5533   1.8326   2.6868   2.3508   0.7904
i7       p2         pai2    0.4256   1.4097   2.0668   1.8083   0.608
i7       p2         pai3    0.2979   0.9868   1.4468   1.2658   0.4256
i8       p1         EV      0.845    1.1237   1.2428   1.7041   0.5476
i8       p2         EV      0.6591   1.3459   0.8701   0.5396   1.242
Table A.4.5. Demand profiles in the current, immediate and distant future: moving step 5

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i5       p1         -       0.3002   1.0388   0.9541   0.9806   0.7318
i5       p2         -       0.9142   0.995    0.8107   0.9517   1.1229
i6       p1         pai1    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p1         pai2    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p1         pai3    0.8996   0.7895   1.0097   1.3431   0.7868
i6       p2         pai1    0.6596   1.056    1.076    1.278    0.7819
i6       p2         pai2    0.6596   1.056    1.076    1.278    0.7819
i6       p2         pai3    0.6596   1.056    1.076    1.278    0.7819
i7       p1         pai1    1.2295   0.9608   1.4318   1.4389   1.1325
i7       p1         pai2    1.2295   0.9608   1.4318   1.4389   1.1325
i7       p1         pai3    1.2295   0.9608   1.4318   1.4389   1.1325
i7       p2         pai1    0.383    1.2687   1.8601   1.6275   0.5472
i7       p2         pai2    0.383    1.2687   1.8601   1.6275   0.5472
i7       p2         pai3    0.383    1.2687   1.8601   1.6275   0.5472
i8       p1         pai1    1.0985   1.4608   1.6156   2.2153   0.7119
i8       p1         pai2    0.845    1.1237   1.2428   1.7041   0.5476
i8       p1         pai3    0.5915   0.7866   0.87     1.1929   0.3833
i8       p2         pai1    0.8568   1.7497   1.1311   0.7015   1.6146
i8       p2         pai2    0.6591   1.3459   0.8701   0.5396   1.242
i8       p2         pai3    0.4614   0.9421   0.6091   0.3777   0.8694
i9       p1         EV      1.9064   0.4251   1.1384   1.1096   0.9579
i9       p2         EV      1.362    2.1852   1.4314   2.0433   1.9608
A.5. Problem Data for Section 5.3.2.2
Table A.5.1. Demand profiles in the current, immediate and distant future: moving step 1

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i1       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i1       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i2       p1         pai1    1.2663   0.6693   1.0707   1.3136   1.0793
i2       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i2       p1         pai3    1.2663   0.6693   1.0707   1.3136   1.0793
i2       p2         pai1    0.6641   1.3608   0.9562   0.7527   1.8671
i2       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i2       p2         pai3    0.6641   1.3608   0.9562   0.7527   1.8671
i3       p1         pai1    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p1         pai3    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         pai1    0.6641   1.3608   0.9562   0.7527   1.8671
i3       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i3       p2         pai3    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i4       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i5       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i5       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
Table A.5.2. Demand profiles in the current, immediate and distant future: moving step 2

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i2       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i2       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i3       p1         pai1    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p1         pai3    1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         pai1    0.6641   1.3608   0.9562   0.7527   1.8671
i3       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i3       p2         pai3    0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai3    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai3    0.7969   1.633    1.1474   0.9032   2.2405
i5       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i5       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i5       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i5       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i5       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i6       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i6       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
Table A.5.3. Demand profiles in the current, immediate and distant future: moving step 3

                                Grain size fraction
Period   EPS type   Event   f1       f2       f3       f4       f5
i3       p1         -       1.2663   0.6693   1.0707   1.3136   1.0793
i3       p2         -       0.6641   1.3608   0.9562   0.7527   1.8671
i4       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai2    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p1         pai3    1.5196   0.8032   1.2848   1.5763   1.2952
i4       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai2    0.7969   1.633    1.1474   0.9032   2.2405
i4       p2         pai3    0.7969   1.633    1.1474   0.9032   2.2405
i5       p1         pai1    1.013    0.5354   0.8566   1.0509   0.8634
i5       p1         pai2    1.013    0.5354   0.8566   1.0509   0.8634
i5       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i5       p2         pai1    0.5313   1.0886   0.765    0.6022   1.4937
i5       p2         pai2    0.5313   1.0886   0.765    0.6022   1.4937
i5       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i6       p1         pai1    1.5196   0.8032   1.2848   1.5763   1.2952
i6       p1         pai2    1.2663   0.6693   1.0707   1.3136   1.0793
i6       p1         pai3    1.013    0.5354   0.8566   1.0509   0.8634
i6       p2         pai1    0.7969   1.633    1.1474   0.9032   2.2405
i6       p2         pai2    0.6641   1.3608   0.9562   0.7527   1.8671
i6       p2         pai3    0.5313   1.0886   0.765    0.6022   1.4937
i7       p1         EV      1.2663   0.6693   1.0707   1.3136   1.0793
i7       p2         EV      0.6641   1.3608   0.9562   0.7527   1.8671
REFERENCES

Acevedo, J. and Pistikopoulos, E. N. (1997). A multiparametric programming approach for linear process engineering problems under uncertainty. Industrial and Engineering Chemistry Research, 36, 717-728.
Ahmed, S. and Sahinidis, N. V. (2003). An approximation scheme for stochastic integer programs arising in capacity expansion. Operations Research, 51, 461-474.
Ahmed, S., Tawarmalani, M. and Sahinidis, N. V. (2004). A finite branch and bound algorithm for two-stage stochastic integer programs. Mathematical Programming, 100, 355-377.
Alonso-Ayuso, A., Escudero, L. F. and Ortuno, M. T. (2005). Modeling production planning and scheduling under uncertainty [Chapter 13]. In S. W. Wallace and W. T. Ziemba (Eds.), Applications of stochastic programming. MPS-SIAM Series in Optimization (pp. 217-252). Philadelphia: SIAM.
Balasubramanian, J. and Grossmann, I. E. (2002). A novel branch and bound algorithm for scheduling flowshop plants with uncertain processing times. Computers and Chemical Engineering, 26, 41-57.
Balasubramanian, J. and Grossmann, I. E. (2003). Scheduling optimization under uncertainty: an alternative approach. Computers and Chemical Engineering, 27 (4), 469-490.
Balasubramanian, J. and Grossmann, I. E. (2004). Approximation to multistage stochastic optimization in multiperiod batch plant scheduling under demand uncertainty. Industrial and Engineering Chemistry Research, 43, 3695-3713.
Barbaro, A. and Bagajewicz, M. J. (2004). Managing financial risk in planning under uncertainty. AIChE Journal, 50, 963-989.
Ben-Israel, A. and Robers, P. D. (1970). A decomposition method for interval linear programming. Management Science, 16, 374-387.
Birge, J. R. and Louveaux, F. (1997). Introduction to Stochastic Programming. New York: Springer.
Bonfill, A., Bagajewicz, M., Espuna, A. and Puigjaner, L. (2004). Risk management in the scheduling of batch plants under uncertain market demand. Industrial and Engineering Chemistry Research, 43, 741-750.
Carøe, C. C. and Schultz, R. (1999). Dual decomposition in stochastic integer programming. Operations Research Letters, 24, 37-45.
Chinneck, J. W. and Ramadan, K. (2000). Linear programming with interval coefficients. Journal of the Operational Research Society, 51, 209-220.
Clay, R. and Grossmann, I. E. (1997). A disaggregation algorithm for the optimization of stochastic planning models. Computers and Chemical Engineering, 21, 751-774.
Cott, B. and Macchietto, S. (1989). Minimizing the effects of batch process variability using online schedule modification. Computers and Chemical Engineering, 13, 105-113.
Cui, J. and Engell, S. (2010). Medium-term planning of a multiproduct batch plant under evolving multi-period multi-uncertainty by means of a moving horizon strategy. Computers and Chemical Engineering, 34, 598-619.
Dimitriadis, A. D., Shah, N. and Pantelides, C. C. (1997). RTN-based rolling horizon algorithms for medium-term scheduling of multipurpose plants. Computers and Chemical Engineering, 21, 1061-1066.
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
Copyright © 2012. Nova Science Publishers, Incorporated. All rights reserved.
82
Jian Cui
Dyer, M. E. and Stougie, L. (2006): Computational complexity of stochastic programming problems. Math. Program, 106(3), 423432. Guillen, G., Mele, F. D., Espuna, A. and Puigjaner, L. (2006). Adressing the design of chemical supply chains under demand uncertainty. Proc. 2006 ESCAPE/PSE, Elsevier, 10951100. Gupta, A. and Maranas, C. D. (2000a). Twostage modeling and solution framework for multisite midterm planning under demand uncertainty. Industrial and Engineering Chemistry Research, 39, 37993813. Gupta, A., Maranas, C. D. and Mcdonald, C. M. (2000b). Midterm supply chain planning under demand uncertainty: Customer demand satisfaction and inventory management. Computers and Chemical Engineering, 24, 26132621. Gupta, A. and Maranas, C. D. (2003). Managing demand uncertainty in supply chain planning. Computers and Chemical Engineering, 27, 12191227. Hansen, E. R. and Walster, G. W. (2004). Global Optimization Using Interval Analysis (2nd edition). CRC Press, New York. Honkomp, S. J., Mockus, L. and Reklaitis, G. V. (1997). Robust scheduling with processing time uncertainty. Computers and Chemical Engineering, 21, 10551060. Huang, G.H. and Moore, R.D. (1993). Grey linear programming, its solving approach, and its application to water pollution control. International J. of Systems Sciences, 24(1), 159172. Hsieh, S. and Chiang, C. C. (2001). ManufacturingtoSale planning model for fuel oil production. The International Journal of Advanced Manufacturing Technology, 18, 303311. Ierapetritou, M. G., Pistikopoulos, E. and Floudas, C. A. (1995). Operational planning under uncertainty. Computers and Chemical Engineering, 20, 14991516. Ierapetritou, M. G. and Li, Z. (2009). Modeling and managing uncertainty in process planning and scheduling. Optimization and Logistics Challenges in the Enterprise, W. Chaovalitwongse et al.(eds.), Springer Optimization and Its Applications, 30, 97144. Janak, S. L., Floudas, C. A., Kallrath, J. and Vormbrock, N. (2006a). 
Production scheduling of a largescale industrial batch plant. I. Shortterm and mediumterm scheduling. Industrial and Engineering Chemistry Research, 45, 82348252. Janak, S. L., Floudas, C. A., Kallrath, J. and Vormbrock, N. (2006b). Production scheduling of a largescale industrail batch plant. II. Reactive Scheduling. Industrial and Engineering Chemistry Research, 45, 82538269. Janak, S. L., Lin, X. and Floudas, C. A. (2007). A new robust optimization approach for scheduling under uncertainty. II. Uncertainty with known probability distribution. Computers and Chemical Engineering, 31, 171195. Jia, Z. and Ierapetritou, M. G. (2004). Shortterm Scheduling under Uncertainty Using MILP Sensitivity Analysis. Industrial and Engineering Chemistry Research, 43, 37823791. Jia, Z. and Ierapetritou, M. G. (2007). Generate Pareto optimal solutions of scheduling problems using Normal Boundary Intersection Technique. Computers and Chemical Engineering, 31, 268280. Kanakamedala, K. B., Reklaitis, G. V. and Venkatasubramanian, V. (1994). Reactive schedule modification in multipurpose batch chemical plants. Industrial and Engineering Chemistry Research, 33, 7790.
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
Copyright © 2012. Nova Science Publishers, Incorporated. All rights reserved.
TwoStage Stochastic Mixed Integer Linear Programming
83
Leung, S. C. H, Tsang, S. O. S, Ng, W. L. and Wu, Y. (2007). A robust optimization model for multisite production planning problem in an uncertain environment. European Journal of Operational Research, 181, 224238. Li, Z. and Ierapetritou, M. G. (2008). Reactive scheduling using parametric programming. AIChE Journal, 54, 26102623. Li, Z. and Ierapetritou, M. G. (2009). Integration of Planning and Scheduling and Consideration of Uncertainty in Process Operations. Proc. 2009 PSE, Elsevier, 8794. Lin, X., Janak, S. L. and Floudas, C. A. (2004). A new robust optimization approach for scheduling under uncertainty: I. Bounded uncertainty. Computers and Chemical Engineering, 28, 10691085. Liu, M. L. and Sahinidis, N. V. (1996). Long range planning in the process industries：A projection approach. Computers and Operations Research, 23, 237253. Liu, M. L. and Sahinidis, N. V. (1997). Process planning in a fuzzy environment. European Journal of Operational Research, 100, 142169. Liu, Y., Zou, R. and Guo, H. C. (2010). A Risk Explicit Interval Linear Programming Model for UncertaintyBased NutrientReduction Optimization for the Lake Qionghai Watershed. Journal of Water Resources Planning and Management. Posted ahead of print. Louveaux, F. V. and Schultz, R. (2003). Stochastic programming. Vol. 10 of handbooks in operations research and management. Amsterdam, ST: Elsevier. Mendez, C. A. and Cerda, J. (2004). A MILP Framework for batch reactive scheduling with limited discrete resources. Dynamic scheduling in multiproduct batch plants. Computers and Chemical Engineering, 28, 10591068. Neiro, S. and J. Pinto (2003). Supply chain optimization of petroleum refinery complexes. Proceedings of fourth international conference on foundations of computeraided process operations. Oliveira, C. and Antunes, C. H. (2007). Multiple objective linear programming models with interval coefficientsan illustrated overview. European Journal of Operational Research, 181, 14341463. 
Orcun, S., Altinel, K. and Hortacsu, Ö. (1996). Scheduling of batch processes with operational uncertainties. Computers and Chemical Engineering, 20, 11911196. Paraskevopoulos, D., Karakitsos, E. and Rustem, B. (1991). Robust capacity planning under uncertainty. Management Science, 37, 787800. Petkov, S. B. and Maranas, C. D. (1997). Multiperiod Planning and Scheduling of Multiproduct Batch Plants under Demand Uncertainty. Industrial and Engineering Chemistry Research, 36, 48644881. Petrovic, D. and Duenas, A. (2006). A fuzzy logic based production scheduling/rescheduling in the presence of uncertain disruptions. Fuzzy Stes and Systems, 157, 22732285. Puigjaner, L. and Lainez, J. M. (2008). Capturing dynamics in integrated supply chain management. Computers and Chemical Engineering, 32, 25822605. Ruszczyński, A. and Shapiro, A. (2003). Stochastic Programming. Handbooks in Operations Research and Management Science, vol. 10. Amsterdam, ST: Elsevier. Ryu, J. H. and Pistikopoulos, E. N. (2003). A bilevel programming framework for enterprisewide supply chian planning problems under uncertainty. Proceedings of fourth international conference on foundations of computeraided process operations.
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
84
Jian Cui
Copyright © 2012. Nova Science Publishers, Incorporated. All rights reserved.
Sahinidis, N. V. (2004). Optimization under uncertainty: stateoftheart and opportunities. Computers and Chemical Engineering, 28, 971983. Sand, G. and Engell, S. (2004). Modelling and solving realtime scheduling problems by stochastic integer programming. Computers and Chemical Engineering, 28, 10871103. Schnelle, K. D. and Bassett, M. H. (2006). Batch Process Management: Planning and Scheduling. Batch Processes. E. Korovessi and A.A. Linniger, CRC Press: 389. Till, J., Sand, G., Urselmann, M. and Engell, S. (2007). A hybrid evolutionary algorithm for solving twostage stochastic integer programs in chemical batch scheduling. Computers and Chemical Engineering, 31, 630647. Vin, J. P. and Ierapetritou, M. G. (2000). A new approach for efficient rescheduling of multiproduct batch plants. Industrial and Engineering Chemistry Research, 39, 42284283. Vin, J. P. and Ierapetritou, M. G. (2001). Robust shortterm scheduling of multiproduct batch plants under demand uncertainty. Industrial and Engineering Chemistry Research, 40, 45434554. Wang, J. (2004). A fuzzy robust scheduling approach for product development projects. European Jounal of Operational Research, 152, 180194. Wenkai, L., Hui, C. W., Hua, B. and Tong, Z. (2003). Plantwide scheduling and marginal value analysis for a refinery. FOCAPO2003, Coral Spring, Florida. Wu, D. and Ierapetritou, M. G. (2007). Hierarchical approach for production planning and scheduling under unvertainty. Chemical Engineering and Processing, 46, 11291140.
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN: 978-1-61209-579-0 © 2012 Nova Science Publishers, Inc.
Chapter 2
INTERVAL LINEAR PROGRAMMING: A SURVEY
Milan Hladík∗
Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics, Malostranské nám. 25, 118 00, Prague, Czech Republic
Abstract

Uncertainty is a common phenomenon in practice. Due to measurement errors we can hardly expect precise values in real-life linear programming problems. Using estimated quantities may lead to unsatisfactory results, so inexactness must be taken into account. Uncertainty can be handled in various manners, e.g. by stochastic programming, interval analysis or fuzzy numbers; each of them has its pros and cons. In this paper, we suppose that we are given lower and upper bounds on the quantities, and that the quantities may perturb independently and simultaneously within these bounds. In this model we investigate the problems of the optimal value range, basis stability, enclosures of the optimal solutions, duality etc. Complexity issues are discussed, too; some tasks are polynomially solvable while others are NP-hard. This approach is more general and powerful than standard sensitivity analysis, which considers variations of only one parameter and is therefore very restrictive. The interval analysis based approach, on the other hand, enables us to handle all required parameters simultaneously. We present a brief exposition of the known results with new insights, and close the survey with some challenging problems.
Keywords: interval linear programming, linear interval systems, interval analysis, optimal value range, interval matrix, basis stability
AMS Subject Classification: 90C31, 90C70, 65G40.
1. Introduction
∗ Email address: [email protected]

Many practical problems are solved by linear programming. Since real-life problems are subject to uncertainties due to errors, measurements and estimations, we have to reflect it in
linear programming methodology and decision making. Inaccuracy is modelled in diverse ways; see the overviews by Sahinidis [87] or Liu [41]. In a stochastic approach, we handle inexact quantities as random variables; in fuzzy set theory, as vague numbers with a weighted membership function; and in interval analysis, we assume that the quantities perturb simultaneously and independently within a priori known fixed bounds. There are two basic approaches to interval linear programming (ILP). In the first one, we resign on guaranteed bounds and enclosures covering all possibilities, aiming instead at satisficing solutions. These techniques reduce the problem to solving several real-valued (usually linear) programs. This family involves robust optimization, fuzzy programming and others. Robust optimization is a methodology for processing optimization problems with uncertain data. Herein, we seek a solution that is robust (stable) under some data perturbation. From this viewpoint, ILP is reduced to one (possibly difficult) optimization problem whose solution is considered to be good. One of the basic methods in robust optimization is the minimax regret method, inspected e.g. in [3, 5, 6, 13, 29, 50]. There are other methods designed in a different way. The resulting interval solutions are mostly some approximation sets, but represent an acceptable compromise for a decision maker. The techniques used are, for example, introducing a proper ordering for intervals [37, 38, 89], using a satisfaction function [52], or other reductions [25, 101]. Often, principles from fuzzy linear programming are used [27, 40, 44, 96]. Somewhere in between lies stochastic programming. Interval values can be considered as random quantities with uniform distribution. Thus, ILP is a specific case of stochastic programming, and, in principle, any method of stochastic programming can be applied to ILP.
Nevertheless, ILP is too specific to be successfully processed by stochastic programming methods. Some stochastic-like methods to solve ILP are in [45, 102]. In the second approach, which is the one dealt with in this paper, the aim is different. We want to cover all possible scenarios and compute rigorous interval solutions containing all possibilities. The methods used are multiparametric programming, perturbation theory, and interval arithmetic and analysis. ILP can be viewed as a multiparametric linear programming problem with interval domains for the parameters [14, 15, 51]. This approach can solve some questions arising in ILP, particularly the special cases dealt with in Section 6, but cannot handle the general ILP problem. Perturbation theorems for linear systems [69, 92] give rise to another possibility to handle interval uncertainties. This approach is usually less conservative and solves some particular tasks in ILP, but it is not applicable to all the questions that arise. One of the fundamental methods is to use interval arithmetic, which was introduced to extend the basic functions and operations such as addition and multiplication to intervals [2, 53, 62]. Interval arithmetic always returns rigorous results, meaning that, whatever scenario happens, the results are included within the calculated intervals. The drawback is that the resulting intervals are usually too conservative and overestimated. Interval arithmetic was applied in interval linear programming e.g. in [4, 30–32, 36, 49, 93]. More sophisticated methods rely on direct inspection of particular subproblems by means of interval analysis methodology, among others. Results on interval linear algebra topics such as interval linear equation solving and interval matrix theory are often exploited. That is why we introduce
some interval notions in the next sections before we state the main objectives in Section 1.2. We use the following notation: I denotes the identity matrix (of convenient dimension), A_{i,*} the i-th row of a matrix A, diag(v) the diagonal matrix with entries v_1, ..., v_n, and sgn(r) stands for the sign of a real number r. Notice that the operators |·| and sgn(·) are also used for vectors and matrices with entrywise meaning. The spectral radius of a matrix A is denoted by ρ(A).
1.1. Interval computing
An interval matrix is defined as

A = [A̲, Ā] = {A ∈ R^{m×n}; A̲ ≤ A ≤ Ā},

where A̲ ≤ Ā are given matrices; n-dimensional interval vectors can be regarded as n-by-1 interval matrices. By

A_c := (1/2)(A̲ + Ā),    A_Δ := (1/2)(Ā − A̲)

we denote the center and the radius of A, respectively. The set of all m-by-n interval matrices is denoted by IR^{m×n}.

Standard arithmetic is extended to intervals in a natural way as follows [2, 53, 62]. Let a = [a̲, ā], b = [b̲, b̄] ∈ IR; then

a + b := [a̲ + b̲, ā + b̄],
a − b := [a̲ − b̄, ā − b̲],
a · b := [min(a̲b̲, a̲b̄, āb̲, āb̄), max(a̲b̲, a̲b̄, āb̲, āb̄)],
a / b := [min(a̲/b̲, a̲/b̄, ā/b̲, ā/b̄), max(a̲/b̲, a̲/b̄, ā/b̲, ā/b̄)] if 0 ∉ b, and undefined otherwise.

Interval arithmetic is defined such that the resulting intervals correspond to the image of the basic operations. Let ◦ be any of the basic operations. From this viewpoint, interval arithmetic reads

a ◦ b = {x ∈ R; ∃a ∈ a, ∃b ∈ b : x = a ◦ b}.

Solving linear equations is a basic task in linear algebra, and solving interval linear equations is a basic task in interval computations. Let an interval system

Ax = b    (1)

be given, where A ∈ IR^{n×n} and b ∈ IR^n. The solution set of a linear system is defined as the set of solutions over all scenarios of the interval data, that is, for (1),

{x ∈ R^n; ∃A ∈ A, ∃b ∈ b : Ax = b}.

A well-known description of the solution set was given by Oettli and Prager [66] (cf. [81]).

Theorem 1 (Oettli and Prager, 1964). The solution set of Ax = b is described by

|A_c x − b_c| ≤ A_Δ|x| + b_Δ.    (2)
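The endpoint formulas above translate directly into code. The sketch below is a minimal illustration in plain Python (the class name and the sample intervals are our own choices, not from the text):

```python
from itertools import product

class Interval:
    """A closed real interval [lo, hi] with the arithmetic of Section 1.1."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # min/max over the four endpoint products
        ps = [s * t for s, t in product((self.lo, self.hi), (other.lo, other.hi))]
        return Interval(min(ps), max(ps))

    def __truediv__(self, other):
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("0 is contained in the divisor interval")
        qs = [s / t for s, t in product((self.lo, self.hi), (other.lo, other.hi))]
        return Interval(min(qs), max(qs))

a, b = Interval(-1, 2), Interval(3, 4)
print((a * b).lo, (a * b).hi)  # -4 8
```

Note that even this exact arithmetic overestimates as soon as a variable occurs twice: a − a evaluates to [−3, 3] rather than [0, 0], which is the dependency effect revisited in Example 1.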
The Oettli–Prager system is nonlinear, which makes it difficult to determine or sharply bound the solution set. If nonnegativity of the variables is incorporated, then the problem becomes easy and the characterization becomes linear [81]:

A̲x ≤ b̄, −Āx ≤ −b̲, x ≥ 0;

compare Theorem 5 and Remark 2. The interval hull of a set is the smallest interval vector containing the set. Computing the interval hull of the solution set of (1) is an NP-hard problem [84]. In many practical circumstances, one does not need to find the exact interval hull; any sufficiently sharp superset (called an enclosure) is desirable, too. Such enclosures are computed much faster, and there are plenty of methods available; see e.g. [62, 64, 75, 79] and references therein.

Remark 1. It is a common phenomenon in interval analysis that nonnegativity (or another sign restriction) of the variables makes the problem easier; see Theorem 1 for instance. Provided that nonnegativity is not given, one possibility to solve the problem is a decomposition of the space into orthants, where the variables become sign restricted. Let p ∈ {±1}^n. Then diag(p)x ≥ 0 determines one of the orthants of R^n, and the problem of feasibility of (2) can be solved by decomposition into 2^n linear systems

|A_c x − b_c| ≤ A_Δ diag(p)x + b_Δ, diag(p)x ≥ 0,

or

(A_c − A_Δ diag(p))x ≤ b̄, (−A_c − A_Δ diag(p))x ≤ −b̲, diag(p)x ≥ 0.    (3)
Now, (2) is feasible if and only if the linear system (3) is feasible for some p ∈ {±1}^n. Not surprisingly, when s is the number of sign-restricted variables and n − s the number of the remaining ones, it suffices to decompose into only 2^{n−s} subproblems. Analogously, the problem solving can be accelerated when the coefficients of some x_i are degenerate (have zero width). Then the absolute value of x_i in (2) vanishes and we do not have to decompose along the sign of x_i. Any such case halves the computational effort [7]. We utilize this decomposition at several points, e.g. in Theorem 6 and in some places of Section 3.
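Theorem 1 turns membership in the solution set into a finite componentwise check. A minimal sketch (plain Python; the function name and the 1-by-1 example A = [2, 4], b = [2, 4], whose solution set is [1/2, 2], are illustrative assumptions):

```python
def in_solution_set(Ac, Ad, bc, bd, x, tol=1e-12):
    """Oettli-Prager test: |Ac x - bc| <= Ad |x| + bd, componentwise."""
    n = len(x)
    for i in range(len(bc)):
        lhs = abs(sum(Ac[i][j] * x[j] for j in range(n)) - bc[i])
        rhs = sum(Ad[i][j] * abs(x[j]) for j in range(n)) + bd[i]
        if lhs > rhs + tol:
            return False
    return True

# A = [2, 4], b = [2, 4]  =>  Ac = [[3]], Ad = [[1]], bc = [3], bd = [1]
print(in_solution_set([[3.0]], [[1.0]], [3.0], [1.0], [1.0]))  # True
print(in_solution_set([[3.0]], [[1.0]], [3.0], [1.0], [5.0]))  # False
```

The test is cheap for a single point; describing or enclosing the whole set is the hard part discussed above.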
1.2. Interval linear programming
Consider a linear program

min c^T x subject to x ∈ M(A, b),    (4)

where M(A, b) is the feasible set characterized by a linear system. We say that it is feasible if the feasible set M(A, b) is not empty. Let A ∈ IR^{m×n}, b ∈ IR^m and c ∈ IR^n be given. By an interval linear programming (ILP) problem we mean the family of linear programs (4), where A ∈ A, b ∈ b and c ∈ c. We write it in short as

min c^T x subject to x ∈ M(A, b),

where A, b and c are now the interval quantities.
A scenario is a concrete realization of the interval values, that is, any linear program (4) with A ∈ A, b ∈ b and c ∈ c. Let us focus on the feasible set description for a while. In linear programming theory, one of the following canonical forms is usually assumed:

(A) M(A, b) = {x ∈ R^n; Ax = b, x ≥ 0},
(B) M(A, b) = {x ∈ R^n; Ax ≤ b},
(C) M(A, b) = {x ∈ R^n; Ax ≤ b, x ≥ 0}.

Any linear system can be rewritten into any of these canonical forms by using a standard transformation. However, in interval linear programming this is not the case. That is why we will discuss all three systems separately in the remainder of the paper. The diverse systems are dealt with in slightly different ways, and even their computational complexity may differ.

Example 1. Consider a system of type (B), which can be transformed to type (C) by the substitution x = y − z as Ay − Az ≤ b, y, z ≥ 0. Thus an interval system Ax ≤ b is transformable to the family of systems Ay − Az ≤ b, y, z ≥ 0, where A ∈ A and b ∈ b. This family, however, differs from the family
Ay − Az ≤ b, y, z ≥ 0 (with the two occurrences of A now ranging over A independently) in general, because of the double occurrence of the matrix A. Such multiple occurrences are usually called dependencies. Relaxing such dependencies results in an overestimation. Thus the solution set of the former system lies within the solution set of the latter. To be concrete, consider the interval system [1, 2]x ≤ 2 and the incorrect transformation into [1, 2]y − [1, 2]z ≤ 2, y, z ≥ 0. For the former system, the solution set (i.e., the union of solutions over all scenarios) is the interval (−∞, 2]. Nonetheless, the latter yields the whole real line after transforming back. Using the scenario y − 2z ≤ 2, y, z ≥ 0, any real number r can be written as r = y − z with y = max(2r, 0) and z = y − r ≥ 0.
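The overestimation in this example can be checked mechanically. In the sketch below (plain Python; the helper names are our own), the first function tests membership in the solution set of [1, 2]x ≤ 2, and the second exhibits, for any real x, a scenario of the relaxed pair that represents it:

```python
def solves_original(x):
    # x belongs to the solution set of [1,2]x <= 2 iff a*x <= 2 for SOME a in [1,2];
    # the smallest value of a*x over a in [1,2] is min(x, 2*x)
    return min(x, 2 * x) <= 2

def solves_relaxed(x):
    # relaxed system: a1*y - a2*z <= 2, y, z >= 0 with independent a1, a2 in [1,2];
    # one explicit certificate uses the scenario y - 2*z <= 2
    y = max(2 * x, 0.0)
    z = y - x
    return y >= 0 and z >= 0 and y - 2 * z <= 2

print(solves_original(3))  # False: the solution set is (-inf, 2]
print(solves_relaxed(3))   # True: the relaxation admits every real number
```

The point x = 3 witnesses the strict inclusion: it is infeasible for every scenario of the original system, yet representable in the dependency-relaxed family.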
Duality in linear programming extends to ILP in a straightforward way. For instance, a primal problem

min c^T x subject to Ax = b, x ≥ 0

has the dual counterpart

max b^T y subject to A^T y ≤ c,

and analogously for the other types. Duality is discussed more closely in Section 7. Now we are ready to define the main goals. It is not possible to formulate a single problem of ILP, because there are several issues that are studied and that a decision maker may ask about. The basic questions are:

• Feasibility. Is every (or some) scenario of ILP feasible?
• (Un)boundedness. Is every (or some) scenario of ILP (un)bounded?
• Optimality. Is there an optimal solution for each (or some) scenario of ILP?

The headline problems in ILP are:

• Optimal value range. What is the range of the optimal values of (4) when the data perturb within the given intervals?
• Basis stability. Is there an optimal basis common to all scenarios of ILP?
• Set of optimal solutions. What is the set of all optimal solutions over all scenarios of ILP?

These questions set up the framework of the paper. Besides, we discuss what more can be said in some special cases. In particular, we focus on the cases with an interval right-hand side and interval objective function coefficients. For the reader's convenience, we summarize the complexity of the basic problems in Table 1.
2. Basic questions
For every linear program, exactly one of the following situations occurs: it is infeasible, it is unbounded, or it has an optimal solution. We will inspect all three possibilities in the interval context. We skip continuity issues [98], which would require more space to expose.
2.1. Feasibility
Here, we address questions concerning feasibility such as "Is every scenario feasible?" or "Is at least one scenario feasible?" Some questions are easy to answer while others are NP-hard, depending not only on the question but also on the linear system considered. An interval linear system is strongly feasible if it is feasible for all scenarios, that is, each scenario has a solution. Similarly, an interval system is weakly feasible if it is feasible for at least one scenario. Let A ∈ IR^{m×n} and b ∈ IR^m. First we review the strong feasibility case.
Table 1. Summary of the complexity of the basic problems.

                        Type (A):                  Type (B):                  Type (C):
                        Ax = b, x ≥ 0              Ax ≤ b                     Ax ≤ b, x ≥ 0
strong feasibility      NP-hard                    polynomial                 polynomial
weak feasibility        polynomial                 NP-hard                    polynomial
strong unboundedness    NP-hard                    polynomial                 polynomial
weak unboundedness      suff./nec. cond. only      suff./nec. cond. only      polynomial
strong optimality       NP-hard                    NP-hard                    polynomial
weak optimality         suff./nec. cond. only      suff./nec. cond. only      suff./nec. cond. only
optimal value range     f̲ polynomial, f̄ NP-hard    f̲ NP-hard, f̄ polynomial    polynomial

(Here "suff./nec. cond. only" abbreviates "only sufficient and necessary conditions are known", and f̲, f̄ denote the lower and upper bounds of the optimal value range.)

Theorem 2 (Rohn, 1981). An interval system Ax = b, x ≥ 0 is strongly feasible if and only if for each p ∈ {±1}^m the system

(A_c − diag(p)A_Δ)x = b_c + diag(p)b_Δ, x ≥ 0

is feasible.
Proof. See [72, 81].

Theorem 3 (Rohn & Kreslová, 1994). An interval system Ax ≤ b is strongly feasible if and only if the system

Āx¹ − A̲x² ≤ b̲, x¹ ≥ 0, x² ≥ 0

is feasible.

Proof. See [81, 85].

Theorem 4. An interval system Ax ≤ b, x ≥ 0 is strongly feasible if and only if the system Āx ≤ b̲, x ≥ 0 is feasible.

Proof. See [49, 81, 85].

In the strong feasibility case, the "bad boy" is the system of equations, as we have to check feasibility of 2^m systems. Indeed, it was shown [78, 81] that checking this property is NP-hard. The other cases are polynomially solvable, since it suffices to test feasibility of a single real-valued linear system. Contrariwise, in the weak feasibility case the NP-hard problem is testing weak feasibility of a system of inequalities [81]. Herein, the nonnegativity condition is fundamental for polynomiality.

Theorem 5. An interval system Ax = b, x ≥ 0 is weakly feasible if and only if the system

A̲x ≤ b̄, −Āx ≤ −b̲, x ≥ 0

is feasible.
Proof. See [81].

Theorem 6 (Gerlach, 1981). An interval system Ax ≤ b is weakly feasible if and only if the nonlinear system

A_c x − A_Δ|x| ≤ b̄

is feasible, or, equivalently, if and only if the linear system

(A_c − A_Δ diag(p))x ≤ b̄

is feasible for some p ∈ {±1}^n.

Proof. See [16, 81].

Theorem 7. An interval system Ax ≤ b, x ≥ 0 is weakly feasible if and only if the system A̲x ≤ b̄, x ≥ 0 is feasible.

Proof. See [49, 81].

Remark 2. Note that the above theorems give not only a characterization of weak feasibility of the particular interval systems, but also a description of their solution sets. Thus, the solution set for types (A) and (C) is a convex polyhedral set, whereas the solution set in case (B) is a union of 2^n convex polyhedral sets.
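Several of the reductions above share one computational skeleton: enumerate the sign vectors p and build a real-valued system for each. A sketch of the enumeration behind Theorem 2 (plain Python; the function name and the 1-by-1 data are our own, and the LP feasibility oracle to be run on each generated system is deliberately left out):

```python
from itertools import product

def endpoint_systems(Ac, Ad, bc, bd):
    """Yield the 2^m real-valued systems of Theorem 2:
    (Ac - diag(p) Ad) x = bc + diag(p) bd, x >= 0, for p in {-1, +1}^m."""
    m, n = len(Ac), len(Ac[0])
    for p in product((-1, 1), repeat=m):
        A = [[Ac[i][j] - p[i] * Ad[i][j] for j in range(n)] for i in range(m)]
        b = [bc[i] + p[i] * bd[i] for i in range(m)]
        yield A, b

# A = [2, 4], b = [2, 4]: the two endpoint systems are 4x = 2 and 2x = 4
systems = list(endpoint_systems([[3.0]], [[1.0]], [3.0], [1.0]))
print(systems)  # [([[4.0]], [2.0]), ([[2.0]], [4.0])]
```

Each generated pair (A, b) is an ordinary linear system; strong feasibility holds once every one of them admits a nonnegative solution (here x = 1/2 and x = 2, so the example system is strongly feasible).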
2.2. Unboundedness
It is known [67, 88] that the linear program (4) is unbounded if and only if it is feasible and the dual problem is not feasible. Thus testing unboundedness in interval linear programming can easily be reduced to feasibility issues, which were already addressed in Section 2.1. We say that ILP is strongly unbounded if it is unbounded for each scenario, and weakly unbounded if it is unbounded for at least one scenario. Similarly, it is strongly bounded if it is bounded (i.e., not unbounded) for every scenario, and weakly bounded if it is bounded for some scenario. From the above argument we have that ILP is strongly unbounded if and only if it is strongly feasible and the dual is not weakly feasible.

Type (A): Ax = b, x ≥ 0

When the linear program has the form (A) from Section 1, the dual problem reads max b^T y subject to A^T y ≤ c. Coming to ILP, it is strongly unbounded if and only if

Ax = b, x ≥ 0    (5)

is strongly feasible and

A^T y ≤ c    (6)

is not weakly feasible. This can be checked by Theorems 2 and 6. It is computationally expensive, which is not surprising in view of [35], where Koníčková proved its NP-hardness. In [35], an alternative viewpoint on strong unboundedness was also given.

Theorem 8 (Koníčková, 2006). ILP is strongly unbounded if and only if for each p ∈ {±1}^m the linear program

min c^T x subject to (A_c − diag(p)A_Δ)x = b_c + diag(p)b_Δ, x ≥ 0

is unbounded.
For weak unboundedness, only some sufficient conditions and necessary conditions are known. A complete sufficient and necessary condition is not known yet. ILP is weakly unbounded if (5) is strongly feasible and (6) is not strongly feasible, or if (5) is weakly feasible and (6) is not weakly feasible. In both cases there exists a scenario in which the primal problem is feasible and the dual one is not, so unboundedness is attained. A necessary condition is that the primal problem is weakly feasible and the dual problem is not strongly feasible.

Boundedness is a property complementary to unboundedness, that is, one holds if and only if the other does not. Thus strong boundedness is equivalent to the negation of weak unboundedness, and weak boundedness is equivalent to the negation of strong unboundedness. In the same manner, negations of sufficient conditions for one become necessary conditions for the other, and vice versa.

Type (B): Ax ≤ b

A dual counterpart to the primal system

Ax ≤ b    (7)

is

A^T y = c, y ≤ 0.    (8)

Questions concerning strong and weak unboundedness and boundedness are dealt with in a similar way as for the previous type. It suffices to replace the system (5) by (7) and (6) by (8), and to utilize the corresponding feasibility theorems from Section 2.1. Surprisingly, this case is polynomially solvable.

Type (C): Ax ≤ b, x ≥ 0

In this case, we can replace (5) by

Ax ≤ b, x ≥ 0
and (6) by

A^T y ≤ c, y ≤ 0,

too. The methodology is the same as for type (A), but a direct inspection leads to stronger results. The system

A̲x ≤ b̄, x ≥ 0    (9)

comprises all feasible sets of all scenarios, since for any scenario A ∈ A, b ∈ b and any feasible solution x of this scenario one has A̲x ≤ Ax ≤ b ≤ b̄. Similarly, the feasible set of the scenario

Āx ≤ b̲, x ≥ 0    (10)

lies inside any other feasible set, since Ax ≤ Āx ≤ b̲ ≤ b holds for any A ∈ A, b ∈ b and any feasible solution x of (10). Therefore, the ILP problem is weakly unbounded if and only if (9) is unbounded, and it is strongly unbounded if and only if (10) is unbounded.
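This nesting can be spot-checked numerically. In the sketch below (plain Python; the 1-by-1 data A = [1, 2], b = [2, 4] are an illustrative assumption), feasibility in the tightest scenario (10) implies feasibility in the widest system (9) for every tested point:

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def feasible(A, b, x):
    """Is x a feasible point of Ax <= b, x >= 0 (type (C))?"""
    return all(xi >= 0 for xi in x) and all(l <= r for l, r in zip(matvec(A, x), b))

A_lo, A_hi = [[1.0]], [[2.0]]   # A = [1, 2]
b_lo, b_hi = [2.0], [4.0]       # b = [2, 4]
for x in ([0.5], [1.5], [3.0]):
    in_10 = feasible(A_hi, b_lo, x)  # tightest scenario (10)
    in_9 = feasible(A_lo, b_hi, x)   # widest system (9)
    print(x, in_10, in_9)
```

The one-way implication (in_10 implies in_9, never the converse) mirrors why (10) certifies strong unboundedness while (9) certifies only weak unboundedness.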
2.3. Optimality
The most important fundamental question asks for optimality. Does the interval linear program (ILP) have an optimal solution for each scenario? Or for some scenario? Naturally, in the established manner, we call these properties “strong optimality” and “weak optimality”, respectively. From linear programming theory we know that an optimal solution exists if and only if both the primal and the dual problems are feasible. Thus we again employ the feasibility results to answer optimality questions. ILP is strongly optimal if and only if the primal and the dual problems are strongly feasible. Hence, types (A) and (B) are handled via Theorems 2 and 3. A similar result for type (A) was obtained by Rohn [72]. Rohn [80] observed that testing strong optimality is an NP-hard problem for type (A), and the same probably holds for type (B) as well. In contrast, type (C) is the simplest one, and Theorem 4 gives an efficient algorithm. More results on strong optimality can be found in Rohn [80] in the section devoted to the finite range problem, which is an equivalent point of view. The problem is weakly optimal if and only if there is a scenario of the interval data such that the primal and dual programs are feasible. This is difficult to test. Consider, for example, type (A). Here, we have to find A ∈ A, b ∈ b and c ∈ c such that the system Ax = b, x ≥ 0, Aᵀy ≤ c is feasible. Due to the double appearance of the matrix A, it is not a standard interval linear system. Such systems with dependencies are very hard to characterize (cf. [19]) and to solve by a finite algorithm. So far, no finite characterization of weak optimality has been proposed, but we have two sufficient conditions. Weak optimality follows from strong feasibility of the primal problem and weak feasibility of the dual one, or vice versa. A necessary condition for weak optimality is that both the primal and the dual problems are weakly feasible.
Interval Linear Programming: A Survey
Example 2. Consider the type (B) problem min cᵀx subject to Ax ≤ b, where

A = [ −[3, 5]  [5, 6] ;  [6, 7]  −[7, 8] ;  0  −1 ],  b = ( [10, 11], [17, 18], −1 )ᵀ,  c = ( −[1, 2], −[2, 3] )ᵀ.
We inspect strong feasibility first. By Theorem 3, we test feasibility of the system

−3x¹₁ + 6x¹₂ + 5x²₁ − 5x²₂ ≤ 10,
7x¹₁ − 7x¹₂ − 6x²₁ + 8x²₂ ≤ 17,
−x¹₂ + x²₂ ≤ −1,
x¹, x² ≥ 0.
Its solution is e.g. x¹ = (0, 1)ᵀ, x² = (0, 0)ᵀ. Thus the problem is strongly feasible, that is, each scenario is feasible. Turning our attention to unboundedness, consider the dual system (8). It is easy to see that it is weakly feasible (Theorem 5), but not strongly feasible (Theorem 2). This means that the primal problem is not strongly unbounded, but it is weakly unbounded. In other words, some scenarios are unbounded and some are not. See the illustration in Figure 1, where the intersection and the union of the feasible sets over all scenarios are drawn. Optimality is dealt with in a similar manner. We have already observed that the dual system is not strongly feasible, so the problem cannot be strongly optimal. However, the weak feasibility of (8) implies weak optimality. Therefore, some scenarios have optimal solutions whereas the others do not.

Example 3. Consider the type (C) problem min cᵀx subject to Ax ≤ b, x ≥ 0, where
A = [ −[2, 3]  [7, 8] ;  [6, 7]  −[4, 5] ;  1  1 ],  b = ( [15, 16], [18, 19], [6, 7] )ᵀ,  c = ( −[5, 6], −[1, 2] )ᵀ.
The system Āx ≤ b̲, x ≥ 0 is feasible, so by Theorem 4 the problem is strongly feasible. Since the dual system Aᵀy ≤ c, y ≤ 0 is strongly feasible too, the problem is strongly optimal. See Figure 2.
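Both feasibility claims can be confirmed by exhibiting explicit certificates. In the sketch below the trial points x = (2, 1) and y = (0, 0, −6) are our own choices, not taken from the text; the first witnesses the Theorem 4 system with the upper bound of A and the lower bound of b, and the second witnesses the dual, since for y ≤ 0 the condition A̲ᵀy ≤ c̲ implies Aᵀy ≤ c for every scenario:

```python
# Numeric certificates for the two feasibility claims of Example 3.
A_hi = [[-2, 8], [7, -4], [1, 1]]   # entrywise upper bound of A
b_lo = [15, 18, 6]                  # entrywise lower bound of b

# Primal side: (upper A) x <= (lower b), x >= 0, witnessed by x = (2, 1).
x = [2.0, 1.0]
assert all(xj >= 0 for xj in x)
assert all(sum(r[j] * x[j] for j in range(2)) <= bi
           for r, bi in zip(A_hi, b_lo))

# Dual side: for y <= 0, (lower A)^T y <= (lower c) implies A^T y <= c
# for every scenario; witnessed by y = (0, 0, -6).
A_lo_T = [[-3, 6, 1], [7, -5, 1]]   # transpose of entrywise lower bound of A
c_lo = [-6, -2]
y = [0.0, 0.0, -6.0]
assert all(yi <= 0 for yi in y)
assert all(sum(r[i] * y[i] for i in range(3)) <= ci
           for r, ci in zip(A_lo_T, c_lo))
print("both certificates hold")
```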
Figure 1. (Example 2): Intersection of all feasible sets in dark gray; union in light gray.
Figure 2. (Example 3): Intersection of all feasible sets in dark gray; union in light gray.
3. Optimal value range
A frequent problem in ILP is to compute the range of optimal values when the problem quantities vary within intervals; see [4, 7, 22, 30–32, 37, 49, 54–60, 73, 80]. A nice exposition for ILP in a general form, involving types (A)–(C), was given by Chinneck and Ramadan [7]. Denote by

f(A, b, c) := min cᵀx subject to x ∈ M(A, b)

the optimal value of the linear program. Notice that infinite values are allowed, too. The goal is to compute the lower and upper bounds on the optimal value,

f̲ := inf f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c,
f̄ := sup f(A, b, c) subject to A ∈ A, b ∈ b, c ∈ c.

The lower and upper bound cases are sometimes called the best and the worst case, respectively. A unified approach to calculating the optimal value bounds was proposed by Hladík [22]. Denote by

max bᵀy subject to y ∈ N(A, c)

the dual problem, and by
M := {x ∈ M(A, b); A ∈ A, b ∈ b},  N := {y ∈ N(A, c); A ∈ A, c ∈ c}

the solution sets corresponding to the primal and dual feasible sets, respectively. The solution sets are determined according to Remark 2. Now we are ready to present an algorithm for calculating the optimal value bounds. As long as we are able to determine both solution sets, we can compute the bounds for any ILP problem, not only for types (A)–(C). Thus, in principle, we can also handle more general ILP problems with dependencies.

Algorithm 1.

1. Compute f̲ := inf c_cᵀ x − c_∆ᵀ |x| subject to x ∈ M. (11)

2. If f̲ = ∞, then set f̄ := ∞ and stop.

3. Compute φ := sup b_cᵀ y + b_∆ᵀ |y| subject to y ∈ N. (12)

4. If φ = ∞, then set f̄ := ∞ and stop.

5. If the primal problem is strongly feasible, then set f̄ := φ; otherwise set f̄ := ∞.

Strong feasibility was discussed in Section 2.1. The optimization problems (11) and (12) are either linear programs or can be decomposed into at most 2ⁿ and 2ᵐ linear programs, respectively; see Remark 1. As simple consequences we obtain optimal value range formulae for the particular types (A)–(C).
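The orthant decomposition mentioned above rests on the identity that, on the orthant where sign(x) = s, the nonlinear term c_∆ᵀ|x| linearizes, so the objective of (11) becomes (c_c − diag(s) c_∆)ᵀx. A minimal sketch of this identity with made-up numbers:

```python
import itertools

def obj_abs(cc, cd, x):
    """The objective of subproblem (11): c_c^T x - c_d^T |x|."""
    return sum(a * t - d * abs(t) for a, d, t in zip(cc, cd, x))

def obj_linear(cc, cd, s, x):
    """Its linearization (c_c - diag(s) c_d)^T x on the orthant sign(x) = s."""
    return sum((a - sj * d) * t for a, d, sj, t in zip(cc, cd, s, x))

cc, cd = [-1.5, -2.5], [0.5, 0.5]        # made-up midpoint and radius of c
for s in itertools.product([-1, 1], repeat=2):
    x = [sj * t for sj, t in zip(s, [2.0, 3.0])]   # a point with sign(x) = s
    assert abs(obj_abs(cc, cd, x) - obj_linear(cc, cd, s, x)) < 1e-12
print("objective linearizes orthant-wise")
```

Minimizing the linearized objective over each orthant separately and taking the smallest value is exactly the 2ⁿ-fold decomposition of Remark 1.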
Type (A): Ax = b, x ≥ 0

In this case, the primal solution set is described by M = {x; A̲x ≤ b̄, Āx ≥ b̲, x ≥ 0} according to Theorem 5, and the dual solution set by N = {y; A_cᵀ y − A_∆ᵀ |y| ≤ c̄} according to Theorem 6. Due to the nonnegativity of x we have c_cᵀ x − c_∆ᵀ |x| = c_cᵀ x − c_∆ᵀ x = c̲ᵀ x, so

f̲ = inf c̲ᵀ x subject to A̲x ≤ b̄, −Āx ≤ −b̲, x ≥ 0,
φ = sup b_cᵀ y + b_∆ᵀ |y| subject to A_cᵀ y − A_∆ᵀ |y| ≤ c̄.

Strong feasibility of Ax = b, x ≥ 0 is equivalent (Theorem 2) to solvability of (A_c − diag(p) A_∆)x = b_c + diag(p) b_∆, x ≥ 0 for every p ∈ {±1}ᵐ. Another result comes from Rohn [80]; the formula for f̲ appeared already in [4, 49].

Theorem 9 (Rohn, 2006). We have

f̲ = inf c̲ᵀ x subject to A̲x ≤ b̄, Āx ≥ b̲, x ≥ 0,
f̄ = max over p ∈ {±1}ᵐ of f(A_c − diag(p) A_∆, b_c + diag(p) b_∆, c̄). (13)
The lower bound f̲ can be computed by a linear program in polynomial time, whereas computing the upper bound f̄ requires solving 2ᵐ linear programs. We cannot hope for a much more efficient method, since the latter problem is NP-hard [80]. Moreover, it is strongly NP-hard even in the specific case with intervals in the right-hand side only [12, 13]. Another method, similar to that of Algorithm 1, was presented by Rohn [80].

Theorem 10 (Rohn, 2006). Let

φ := sup b_cᵀ y + b_∆ᵀ |y| subject to A_cᵀ y − A_∆ᵀ |y| ≤ c̄. (14)

Then we have the following:

1. If φ > −∞, then f̄ = φ.
2. If φ = −∞, then f̄ ∈ {−∞, ∞}.

Algorithm 1 and Theorem 10 say that the upper bound is computable by solving a nonlinear programming problem. Thus we can apply some nonlinear programming method to (14), or use a decomposition technique that splits the space into particular orthants, where the problem is reduced to 2ᵐ linear programs (cf. [7, 22] and Remark 1). An algorithm based on a necessary condition was proposed by Mráz [59, 60]. As computing the upper bound f̄ is computationally expensive, one may be interested in an estimate of its value. The following one is due to Rohn [80].
Theorem 11 (Rohn, 2006). If A_c has linearly independent rows and ρ(A_∆ |A_c⁺|) < 1, then

φ ≤ |c̄ᵀ A_c⁺| (I − A_∆ |A_c⁺|)⁻¹ (|b_c| + b_∆),

where φ comes from (14) and A_c⁺ denotes the Moore–Penrose pseudoinverse of A_c. Using Rohn's bound

|y| ≤ (I − A_∆ |A_c⁺|)⁻ᵀ |A_c⁺ᵀ c̄|

on a feasible point of (14) and substituting into (14), we obtain a more convenient upper bound in terms of linear programming.

Corollary 1. If A_c has linearly independent rows and ρ(A_∆ |A_c⁺|) < 1, then

φ ≤ b_∆ᵀ (I − A_∆ |A_c⁺|)⁻ᵀ |A_c⁺ᵀ c̄| + sup b_cᵀ y subject to A_cᵀ y ≤ A_∆ᵀ (I − A_∆ |A_c⁺|)⁻ᵀ |A_c⁺ᵀ c̄| + c̄.
More results for type (A) can be found in [60, 80]. For instance, the finite range case −∞ < f̲ ≤ f̄ < ∞ is discussed in Rohn [80].

Type (B): Ax ≤ b

The corresponding dual problem is max bᵀy subject to Aᵀy = c, y ≤ 0.
Adapting Theorems 6 and 5, the solution sets read

M = {x; A_c x − A_∆ |x| ≤ b̄},
N = {y; Āᵀ y ≤ c̄, −A̲ᵀ y ≤ −c̲, y ≤ 0}.

Notice that the nonpositivity of the variable y causes the opposite matrix limits in the description of N. Strong feasibility is checked along Theorem 3 by verifying feasibility of Āx¹ − A̲x² ≤ b̲, x¹ ≥ 0, x² ≥ 0. The optimization subproblems of Algorithm 1 take the form

f̲ = inf c_cᵀ x − c_∆ᵀ |x| subject to A_c x − A_∆ |x| ≤ b̄, (15)
φ = sup b̲ᵀ y subject to Āᵀ y ≤ c̄, −A̲ᵀ y ≤ −c̲, y ≤ 0. (16)
Computing the lower bound is computationally expensive; it was proved by Gabrel et al. [13] that it is strongly NP-hard even in the class of problems with interval objective function coefficients and real constraint coefficients. The lower bound can be calculated along Remark 1. A similar approach using 2ⁿ⁺¹ subsystems was proposed by Tong [96]. The upper bound computation requires solving just one or two linear programs (one for calculating φ, and possibly one for testing strong feasibility).
Type (C): Ax ≤ b, x ≥ 0

Here, the dual problem is max bᵀ y subject to Aᵀ y ≤ c, y ≤ 0. By Theorem 7, the primal and dual solution sets are respectively described by

M = {x; A̲x ≤ b̄, x ≥ 0},
N = {y; Āᵀ y ≤ c̄, y ≤ 0}.

By Theorem 4, strong feasibility is equivalent to feasibility of

Āx ≤ b̲, x ≥ 0. (17)

Hence

f̲ = inf c̲ᵀ x subject to A̲x ≤ b̄, x ≥ 0,
φ = sup b̲ᵀ y subject to Āᵀ y ≤ c̄, y ≤ 0.

Since all the subproblems are linear, Algorithm 1 calculates the optimal value bounds in polynomial time. For this specific case, stronger results can be derived and the condition on strong feasibility can be removed [4, 49, 96]. Due to the nonnegativity of the variables, the worst case for the objective function coefficients is c := c̄. Similarly,
Ax ≤ Āx ≤ b̲ ≤ b for every scenario A ∈ A, b ∈ b and every vector x satisfying (17). So the worst case for the constraint coefficients is A := Ā and b := b̲; the feasible set of (17) lies in the intersection of all feasible sets over all scenarios. Hence we have, with no assumption, the following result.

Theorem 12 (Vajda, 1961).

f̲ = inf c̲ᵀ x subject to A̲x ≤ b̄, x ≥ 0, (18)
f̄ = inf c̄ᵀ x subject to Āx ≤ b̲, x ≥ 0. (19)
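For Example 3, the two bounds (18) and (19) are small explicit linear programs, so they can be checked directly. The sketch below uses a naive vertex-enumeration routine of our own (adequate for a two-variable toy problem, not a substitute for a real LP solver):

```python
from itertools import combinations

def solve_lp_2d(c, G, h):
    """Minimize c^T x over {G x <= h, x >= 0} in two variables by brute-force
    vertex enumeration (the optimum of a bounded LP is attained at a vertex)."""
    rows = [list(g) for g in G] + [[-1.0, 0.0], [0.0, -1.0]]  # add x >= 0
    rhs = list(h) + [0.0, 0.0]
    def feasible(x):
        return all(r[0]*x[0] + r[1]*x[1] <= b + 1e-9
                   for r, b in zip(rows, rhs))
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a1, a2, b1, b2 = rows[i], rows[j], rhs[i], rhs[j]
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) < 1e-12:
            continue
        x = ((b1*a2[1] - a1[1]*b2) / det, (a1[0]*b2 - b1*a2[0]) / det)
        if feasible(x):
            val = c[0]*x[0] + c[1]*x[1]
            best = val if best is None else min(best, val)
    return best

# (18): lower c, lower A, upper b.  (19): upper c, upper A, lower b.
f_lo = solve_lp_2d([-6, -2], [[-3, 7], [6, -5], [1, 1]], [16, 19, 7])
f_hi = solve_lp_2d([-5, -1], [[-2, 8], [7, -4], [1, 1]], [15, 18, 6])
print(round(f_lo, 4), round(f_hi, 4))   # -33.6364 -21.2727
```

The printed values agree with those reported in Example 5 below.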
Remark 3. Not all values in f = [f̲, f̄] are necessarily attained by some scenario. Gaps may appear in the image of the optimal value function f(A, b, c), not only when some of the limits are infinite, but also in the finite case. For instance, in the example by Bereanu (see [4])

max x₁ subject to x₁ ≤ [1, 2], [−1, 1]x₁ ≤ 0, −x₁ ≤ 0,

the image of the optimal value is {0} ∪ [1, 2]. Nevertheless, some sufficient conditions are known [4] for type (C): f is the image of the optimal value as long as the scenarios with A := A̲, b := b̄, c := c̲ and A := Ā, b := b̲, c := c̄ and their dual problems have bounded sets of optimal solutions. This is true e.g. when these scenarios have unique nondegenerate optimal solutions. Another sufficient condition is basis stability (Section 5).
Remark 4. One may be interested in the scenarios in which the extremal optimal values are attained [7, 13, 80, 90]. Considering the lower bound, let x* be an optimal solution of (11); if the optimization problem is unbounded, then x* denotes an unbounded direction. From c_cᵀ x* − c_∆ᵀ |x*| = (c_c − diag(sgn(x*)) c_∆)ᵀ x* we obtain the vector of objective function coefficients c := c_c − diag(sgn(x*)) c_∆. The other coefficients are calculated according to the particular form of ILP. In case of type (A), we have [80]

A := A_c − diag(p) A_∆,  b := b_c + diag(p) b_∆,  c := c,

where p ∈ [−1, 1]ᵐ is defined entrywise as

p_i = (A_c x* − b_c)_i / (A_∆ x* + b_∆)_i  if (A_∆ x* + b_∆)_i > 0,
p_i = 1  if (A_∆ x* + b_∆)_i = 0.
In case of type (B) we assign

A := A_c − A_∆ diag(sgn(x*)),  b := b̄,
and eventually for type (C) we have A := A̲, b := b̄ and c := c̲. Notice that the above assignments are not unique and other choices are possible. Analogously we proceed for the upper bound f̄, where we start from the dual problem representation.

Example 4 (Example 2 continued). The lower bound on the optimal value is calculated in view of (15) by decomposing into particular orthants (cf. Remark 1). We get f̲ = −∞, which is in correspondence with the weak unboundedness recognized in Example 2. By (16) we compute φ = −19.7143. We already know that the primal problem is strongly feasible, so we have f̄ = φ. Thus the optimal value range is [f̲, f̄] = [−∞, −19.7143]. Now, let us determine for which scenarios the extremal optimal values are attained. The problem (15) is unbounded in the direction x* = (1, 1)ᵀ. According to Remark 4, the lower bound is achieved in the setting

A := A_c − A_∆ diag(sgn(x*)) = [ −5  5 ;  6  −8 ;  0  −1 ],
b := b̄ = (11, 18, −1)ᵀ,
c := c_c − diag(sgn(x*)) c_∆ = (−2, −3)ᵀ.

The optimal solution of (16) is y* = (−1, −0.5714, 0)ᵀ. Since the upper bound is finite, we can consider the dual ILP problem

max bᵀ y subject to Aᵀ y = c, y ≤ 0,
or, substituting z := −y,

−min bᵀ z subject to Aᵀ z = −c, z ≥ 0.

It has the form of type (A). We compute p = (−1, −1)ᵀ by the formula

p_i = (−A_cᵀ y* + c_c)_i / (−A_∆ᵀ y* + c_∆)_i  if (−A_∆ᵀ y* + c_∆)_i > 0,
p_i = 1  if (−A_∆ᵀ y* + c_∆)_i = 0,

and the sought scenario reads

A := A_c − A_∆ diag(p) = [ −3  6 ;  7  −7 ;  0  −1 ],
b := b̲ = (10, 17, −1)ᵀ,
c := c_c − diag(p) c_∆ = (−1, −2)ᵀ.
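The interval arithmetic behind these two scenario assignments is elementary and easy to re-check; the sketch below recomputes both settings from the midpoints and radii of A and c of Example 2:

```python
# Midpoint and radius of A and c from Example 2 (type (B)).
Ac = [[-4.0, 5.5], [6.5, -7.5], [0.0, -1.0]]
Ad = [[ 1.0, 0.5], [0.5,  0.5], [0.0,  0.0]]
cc, cd = [-1.5, -2.5], [0.5, 0.5]

def scenario_matrix(s):
    """A_c - A_d * diag(s): the scenario matrix for a sign vector s."""
    return [[Ac[i][j] - Ad[i][j] * s[j] for j in range(2)] for i in range(3)]

# Lower-bound scenario: s = sgn(x*) with x* = (1, 1).
assert scenario_matrix([1, 1]) == [[-5.0, 5.0], [6.0, -8.0], [0.0, -1.0]]
assert [cc[j] - 1 * cd[j] for j in range(2)] == [-2.0, -3.0]

# Upper-bound scenario: p = (-1, -1) obtained from the dual side.
assert scenario_matrix([-1, -1]) == [[-3.0, 6.0], [7.0, -7.0], [0.0, -1.0]]
assert [cc[j] - (-1) * cd[j] for j in range(2)] == [-1.0, -2.0]
print("scenario matrices match Example 4")
```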
Example 5 (Example 3 continued). The optimal value bounds are computed by formulae (18) and (19) as f̲ = −33.6364 and f̄ = −21.2727. These extremal values are achieved for the scenarios given by (18) and (19), so no extra calculation is needed.
4. Set of optimal solutions
In ILP, it is not obvious at first sight what “an optimal solution” should mean. We do not discuss here the plenty of satisficing solutions developed in robust or fuzzy optimization. Instead, we focus on the set of all possible optimal solutions. Determining the set of all optimal solutions over all scenarios is not only one of the most challenging problems in ILP, but also among the most difficult ones. Denote by S(A, b, c) the set of optimal solutions to (4) for a given scenario. By the set of optimal solutions to an ILP problem we mean

S := ∪ { S(A, b, c);  A ∈ A, b ∈ b, c ∈ c }.
This set is hard to determine unless some basis stability conditions hold. Moreover, S needn't be a polyhedron. In practice, we do not have to determine the solution set exactly; often it is sufficient to find a (tight) enclosure. The interval hull of S is defined as the smallest interval vector (with respect to inclusion) containing S. An interval superset of the interval hull is usually referred to as an enclosure, and an interval subset is called an inner enclosure. Note that an inner enclosure needn't be a subset of S. Enclosures are much more important, but an inner enclosure may be useful too, for example to check the quality and accuracy of enclosures [49]. Another kind of approximation was discussed in [102]. As long as the problem is basis stable (see Section 5), we know the structure of S, and the set itself, its interval hull and diverse inner approximations can be efficiently calculated. Otherwise, the situation is less optimistic.
Enclosures can be computed by replacing the standard operations by interval arithmetic [4, 30, 36, 49, 93], but the resulting intervals are usually heavily overestimated. A different way to attack the problem is via linear programming duality. Consider type (A), for instance. It is known from duality theory that x and y are optimal solutions of the primal and dual problem, respectively, if and only if they solve the linear system Ax = b, x ≥ 0, Aᵀy ≤ c, cᵀx = bᵀy. The interval counterpart is Ax = b, x ≥ 0, Aᵀy ≤ c, cᵀx = bᵀy over the interval data. Due to the dependencies, the solution set of this interval system is not equal to the set of optimal solutions, but it is a superset. In view of Theorems 1, 5 and 6, the solution set is described by

A̲x ≤ b̄, −Āx ≤ −b̲, x ≥ 0, A_cᵀ y − A_∆ᵀ |y| ≤ c̄, c_cᵀ x − b_cᵀ y ≤ c_∆ᵀ x + b_∆ᵀ |y|. (20)

Therefore, any enclosure of the x-part of the solution set of (20) is also an enclosure of S. Another fundamental problem is as follows. Given x* ∈ Rⁿ, is it optimal for each scenario? Is it optimal for some scenario? The answer to the former is mostly “no”, since one point can hardly stay optimal under every data perturbation; this may be true only in some special cases (see Section 6). The latter is an open problem, too.

Example 6 (Example 3 continued). Rewrite the problem into type (A) as follows:
min cᵀ x subject to Ax + Iy = b, x, y ≥ 0.

Using the above method and decomposing (20) into 2ᵐ linear subproblems, we obtain an enclosure of the optimal solution set:

S ⊆ ([2.4356, 4.9091], [1.0909, 3.7001])ᵀ.

Compared to the exact interval hull (Example 7), the overestimation is not so tremendous. Surprisingly, one limit is exact.
5. Basis stability
If a linear program has an optimal solution, then it possesses an optimal basic solution; simplex methods always converge to an optimal basic solution. A natural question in interval linear programming is whether there is a basis that is optimal for some, or for each, scenario of the interval data. The first issue was investigated e.g. by McKeown and Minch [51] for the case of interval objective function coefficients. Based on multiparametric programming, the authors proposed an enumeration algorithm to compute all bases that are optimal for some scenario. A multiparametric approach to uncertainties in the right-hand side and in the objective function was treated in [15]. The general ILP case has not been inspected yet. The second issue was addressed e.g. in [4, 34, 36, 59, 77]. Below, we discuss it more deeply because it is important in several respects. It is not only a stability criterion; it also enables us to efficiently determine the set of optimal solutions and the optimal value range. Consider a type (A) linear program, where the primal problem reads

min cᵀ x subject to Ax = b, x ≥ 0. (21)
By a basis we mean an index set B ⊆ {1, . . ., n} such that A_B is nonsingular, where A_B denotes the restriction of A to the columns indexed by B. Analogously, N := {1, . . ., n} \ B stands for the nonbasic variables, and as a subscript it denotes the restriction to the nonbasic indices. Let a basis B be given. The ILP problem is called B-stable if B is an optimal basis for each scenario of the interval values. ILP is called [uniquely] nondegenerate B-stable if each scenario has a [unique] nondegenerate optimal basic solution with the basis B. In the sequel, we review B-stability for a candidate basis B. The basis B can be computed by an interval version of the simplex method [4, 30, 32, 49], or estimated by solving a suitable scenario (e.g. taking the midpoint values).
5.1. B-stability
We describe the method by Hladík [24]. Recall that a basis B is optimal in a real-valued linear programming problem (21) if and only if three conditions hold simultaneously:

C1. A_B is nonsingular;

C2. A_B⁻¹ b ≥ 0;

C3. c_Nᵀ − c_Bᵀ A_B⁻¹ A_N ≥ 0ᵀ.
Extension to interval data leads to the following characterization of B-stability: the basis B is optimal for each scenario if and only if conditions C1 to C3 hold for each A ∈ A, b ∈ b and c ∈ c. Below, we discuss the three conditions in detail. An interval matrix M ∈ IRⁿˣⁿ is called regular if every M ∈ M is nonsingular. It was proved by Poljak and Rohn [68] that checking regularity is an NP-hard problem, so the first condition cannot be answered efficiently. However, there is a plenty of diverse methods for testing regularity; see e.g. the review paper by Rohn [82]. Moreover, there are several sufficient conditions that can be employed as well [70]. For instance, a broadly used one is: if the spectral radius of |M_c⁻¹| M_∆ is less than 1, then M is regular. This gives rise to the following.

Theorem 13. If ρ(|((A_c)_B)⁻¹| (A_∆)_B) < 1, then A_B is regular.
Turning to the second condition C2, the inequality A_B⁻¹ b ≥ 0 holds for every A ∈ A and b ∈ b if and only if the solution set of the interval system A_B x_B = b lies in the nonnegative orthant. A simple but exponential method is to compute the exact interval hull of the solution set and check it for nonnegativity. Another way is to utilize some solver for interval equations to get an enclosure of the solution set, and again to check its nonnegativity. This leads to a fast sufficient condition.
In the third condition C3, the condition is equivalent to feasibility of

c_Nᵀ ≥ yᵀ A_N, A_Bᵀ y = c_B. (22)

The interval counterpart takes the form A_Nᵀ y ≤ c_N, A_Bᵀ y = c_B. First, we derive a simple sufficient condition. Let y be an enclosure of the solution set of A_Bᵀ y = c_B. If

sup (A_Nᵀ y) ≤ c̲_N, (23)

then in each scenario the solution of the equation system solves also the whole system (22), and thus strong feasibility is valid. Note that the left-hand side of (23) is the upper limit of the interval matrix product A_Nᵀ y. A sufficient and necessary characterization of the third condition was given by Hladík [24].

Theorem 14. The third condition C3 holds true if and only if for each q ∈ {±1}ᵐ the polyhedral set described by

((A_c)_Bᵀ − (A_∆)_Bᵀ diag(q)) y ≤ c̄_B, −((A_c)_Bᵀ + (A_∆)_Bᵀ diag(q)) y ≤ −c̲_B, diag(q) y ≥ 0 (24)

lies inside the polyhedral set

((A_c)_Nᵀ + (A_∆)_Nᵀ diag(q)) y ≤ c̲_N, diag(q) y ≥ 0. (25)
Decomposing into particular orthants, the polyhedral sets (24) and (25) become convex. In this way, testing C3 requires 2ᵐ inclusion tests, each of which can be done in polynomial time. A polyhedral set described by Vx ≤ v lies inside Wx ≤ w if and only if for each i the following holds:

w_i ≥ max W_{i,*} x subject to Vx ≤ v.
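The inclusion test above can be sketched directly: maximize each row of W over {x; Vx ≤ v} and compare with w_i. Below is a toy two-variable illustration with made-up data, using a brute-force vertex enumeration of our own in place of an LP solver:

```python
from itertools import combinations

def max_over_polygon(obj, V, v):
    """max obj^T x subject to V x <= v, for a bounded two-variable
    polyhedron, by enumerating vertices (pairwise constraint intersections)."""
    best = None
    for i, j in combinations(range(len(V)), 2):
        det = V[i][0]*V[j][1] - V[i][1]*V[j][0]
        if abs(det) < 1e-12:
            continue
        x = ((v[i]*V[j][1] - V[i][1]*v[j]) / det,
             (V[i][0]*v[j] - v[i]*V[j][0]) / det)
        if all(r[0]*x[0] + r[1]*x[1] <= b + 1e-9 for r, b in zip(V, v)):
            val = obj[0]*x[0] + obj[1]*x[1]
            best = val if best is None else max(best, val)
    return best

def polyhedron_included(V, v, W, w):
    """True iff {x; Vx <= v} is a subset of {x; Wx <= w}."""
    return all(max_over_polygon(Wi, V, v) <= wi + 1e-9
               for Wi, wi in zip(W, w))

# The unit square lies inside x1 + x2 <= 3 but not inside x1 + x2 <= 1.
V = [[1, 0], [0, 1], [-1, 0], [0, -1]]   # 0 <= x1, x2 <= 1
v = [1, 1, 0, 0]
print(polyhedron_included(V, v, [[1, 1]], [3]),
      polyhedron_included(V, v, [[1, 1]], [1]))   # True False
```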
5.2. Nondegenerate B-stability
The method from the previous section can be adapted to unique and nondegenerate B-stability, too. When we consider strict inequality in the second condition, we obtain a method for testing nondegenerate B-stability. Independently, considering strict inequality in the third condition, we have only a sufficient (but strong) characterization of unique B-stability. Thus, (23) with strict inequality and Theorem 14 with the strict inclusion test give a sufficient, but not necessary, characterization of unique B-stability. Other approaches were utilized in [34, 77]. Rohn [77] proposed the following reduction to 2²ᵐ linear programs. The subsequent sufficient condition is due to Koníčková [34], and it is related to the approach presented in Section 5.1.
Theorem 15 (Rohn, 1993). Let a basis B be given. ILP is [uniquely] nondegenerate B-stable if and only if for each p ∈ {±1}ᵐ and each q ∈ {q ∈ Rⁿ; |q_j| = 1 ∀j ∈ B, q_j = 1 ∀j ∉ B} the linear program

min (c_c + diag(q) c_∆)ᵀ x subject to (A_c − diag(p) A_∆ diag(q)) x = b_c + diag(p) b_∆, x ≥ 0

has a [unique] nondegenerate optimal basic solution corresponding to the basis B.

Theorem 16 (Koníčková, 2001). Let B be a basis, x_B the interval hull of the solution set of A_B x_B = b, and y the interval hull of A_Bᵀ y = c_B. Suppose that A_B is regular and x̲_B > 0. If the inequality

sup (A_Nᵀ y) ≤ c̲_N (26)

holds, then ILP is nondegenerate B-stable with the basis B. If (26) holds strictly, then it is uniquely nondegenerate B-stable.
Note that the sufficient condition presented in Theorem 16 is not very efficient, as it also requires an exponential number of operations. Replacing x_B and y by enclosures of their interval hulls, we obtain a more useful sufficient condition, related to the one described in Section 5.1. B-stability of ILP is very important because it enables us to describe the set of all possible optimal solutions [4, 34, 59] (Section 4) and to calculate the optimal value range (Section 3). Under the assumption of unique B-stability, the set of all optimal solutions is equal to the solution set of the interval system A_B x_B = b, x_B ≥ 0, x_N = 0. By Theorem 5, this set is a convex polyhedral set described by

A̲_B x_B ≤ b̄, −Ā_B x_B ≤ −b̲, x_B ≥ 0, x_N = 0. (27)
When the ILP problem is B-stable, but not uniquely B-stable, then each scenario of ILP has at least one optimal solution in this set, and, conversely, each solution of the set is an optimal solution of some scenario. B-stability also implies that the optimal value ranges within the interval f = [f̲, f̄], where

f̲ = min c̲_Bᵀ x subject to A̲_B x_B ≤ b̄, −Ā_B x_B ≤ −b̲, x_B ≥ 0,
f̄ = max c̄_Bᵀ x subject to A̲_B x_B ≤ b̄, −Ā_B x_B ≤ −b̲, x_B ≥ 0.

Thus the upper bound f̄ is much easier to compute than in the general case. Moreover, f is the image of the optimal value function. This is because the optimal value is c_Bᵀ A_B⁻¹ b, A ∈ A, b ∈ b, c ∈ c, which is a continuous function on a compact set.

Remark 5. In this section, we treated type (A). For types (B) and (C), few results are known, but we can transform them to type (A). Type (B) is transformed to type (A) by taking the dual problem. Here, we must be sure that the duality gap is always zero. Strong feasibility of the primal or of the dual problem implies a zero duality gap (cf. Section 7), so if this is the case, we are done.
In type (C), the constraints Ax ≤ b, x ≥ 0 are transformed into the equality constraints Ax + Iy = b, x, y ≥ 0. Since there are no dependencies, both systems are equivalent. In principle, this reduction of (C) to (A) can be used in any topic discussed in this work. However, in doing so we lose some information, which may come at the price of complexity. For example, compare the performance of testing strong feasibility for types (A) and (C).

Example 7 (Example 6 continued). We again use the transformation of type (C) to type (A),

min cᵀ x subject to Ax + Iy = b, x, y ≥ 0.

For the scenario with midpoint values, the optimal basis is B = (1, 2, 3). Let us check whether it is optimal for every other scenario as well.

C1. By Theorem 13, we calculate the spectral radius 0.0909, so the interval matrix A_B is regular.

C2. By the Hansen–Bliek–Rohn method (see [64, 76, 79]) we compute an enclosure of the solution set of A_B x_B = b to be x_B = ([3.7913, 4.9455], [1.5268, 2.8546], [0.0545, 20.2637])ᵀ. The lower limit is nonnegative, so the second condition is satisfied.

C3. By the Hansen–Bliek–Rohn method we compute an enclosure of the solution set of A_Bᵀ y = c_B to be y = ([−0.0001, 0.0001], [−0.5001, −0.2499], [−3.8864, −2.3863])ᵀ. Now, the relation (23) reads
sup (A_Nᵀ y) = (−0.2500, −2.3864)ᵀ ≤ c̲_N = (0, 0)ᵀ. Hence the sufficient condition is fulfilled and the problem is B-stable. Moreover, since the above inequalities are strict, it follows from Theorem 16 that the problem is uniquely nondegenerate B-stable. So we can determine the exact description of the solution set by (27); its projection onto the (x₁, x₂) subspace reads

−7x₁ + 4x₂ ≤ −18, 6 ≤ x₁ + x₂ ≤ 7, 6x₁ − 5x₂ ≤ 19,

and its interval hull is ([3.8181, 4.9091], [1.5454, 2.8182])ᵀ. See the illustration in Figure 3.
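Condition C1 of Example 7 can be reproduced with a few lines of plain Python; the basis columns below are our reading of the type (A) reformulation [A, I] of Example 3 with basis B = (1, 2, 3), i.e. the two x-columns plus the first slack column:

```python
# Theorem 13 sufficient test: rho(|(A_c)_B^{-1}| (A_Delta)_B) < 1.
AcB = [[-2.5,  7.5, 1.0],
       [ 6.5, -4.5, 0.0],
       [ 1.0,  1.0, 0.0]]
AdB = [[0.5, 0.5, 0.0],
       [0.5, 0.5, 0.0],
       [0.0, 0.0, 0.0]]

def inverse3(M):
    """3x3 inverse via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

inv_abs = [[abs(x) for x in row] for row in inverse3(AcB)]
P = [[sum(inv_abs[i][k] * AdB[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

# Power iteration on the nonnegative matrix P (max-norm normalization).
x = [1.0, 1.0, 1.0]
for _ in range(100):
    y = [sum(P[i][j] * x[j] for j in range(3)) for i in range(3)]
    norm = max(y)
    x = [v / norm for v in y]
rho = norm
print(round(rho, 4))   # 0.0909 < 1, so A_B is regular
```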
6. Special cases
In specific cases, stronger results can sometimes be developed. In practical problems, not all input quantities need be subject to inaccuracy. The typical situation is that some of them are proper intervals and some of them are degenerate (real numbers). If this is the case, some of the exponential methods presented in this work can be accelerated. These exponential algorithms are based on a decomposition along the signs of the variables x_i, i = 1, . . ., n (Remark 1). However, if the coefficients of some x_i are altogether degenerate, then we needn't decompose along the sign of x_i, and we save computing time; see Remark 1.
Figure 3. (Example 7): Intersection of all feasible sets in dark gray; union in light gray; set of optimal solutions in dotted area.
Since uncertainty in the objective function and in the right-hand side are the most common situations in practice, we focus on these situations in this section. We leave aside specific problems that can be formulated as linear programs, but for which more effective algorithms exist, such as transportation problems [33, 89] or minimum cost flow problems [18]. Their interval extensions must be treated in a specific way.
6.1. Interval right-hand side
Interval right-hand sides were studied e.g. by Li & Wang [39], and by Gabrel et al. [12, 13]. Let min cᵀ x subject to x ∈ M(A, b) be a family of linear programs with A ∈ Rᵐˣⁿ and c ∈ Rⁿ given, and b perturbing within an interval vector b ∈ IRᵐ. Gabrel et al. [12, 13] investigated the optimal value range problem for this specific case. For type (A) with equality constraints, they presented several methods for calculating the lower bound on the optimal value and showed that computing the upper bound is NP-hard. Thus type (A) cannot be solved much more effectively than by the general method from Section 3. This is not true for types (B) and (C) with inequality constraints, where the minimal optimal value is attained for b := b̄, and the maximal one for b := b̲. Let us look closer at basis stability. There is no occurrence of b in conditions C1 and C3, so it is enough to inspect C2. Condition C2 simplifies to

A_B⁻¹ b ≥ 0 (28)
since the matrix A−1 B contains no proper interval. Thus the problem is Bstable if and only if B is optimal basis for some scenario and (28) holds true. For nondegenerate Bstability Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
Interval Linear Programming: A Survey
we have the same with strict inequality in (28). Therefore, basis stability testing is tractable in this case.
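For types (B) and (C), computing the optimal value range over an interval right-hand side thus reduces to solving the two endpoint scenarios. A toy Python illustration (hypothetical data; a brute-force vertex enumeration stands in for a real LP solver):

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Minimize c . x over {x in R^2 : A x <= b, x >= 0} by enumerating
    candidate vertices (pairwise intersections of constraint boundaries).
    Assumes a nonempty bounded feasible region; toy-sized problems only."""
    rows = [(r[0], r[1], bi) for r, bi in zip(A, b)]
    rows += [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]      # x1 >= 0, x2 >= 0
    best = None
    for (a1, a2, r1), (b1, b2, r2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                                   # parallel boundaries
        x1 = (r1 * b2 - r2 * a2) / det                 # Cramer's rule
        x2 = (a1 * r2 - b1 * r1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in rows):
            val = c[0] * x1 + c[1] * x2
            if best is None or val < best:
                best = val
    return best

# min -x1 - x2 subject to x1 + x2 <= b, x >= 0, with interval b in [1, 2]:
c = (-1.0, -1.0)
f_lower = solve_lp_2d(c, [(1.0, 1.0)], [2.0])   # b at its upper endpoint
f_upper = solve_lp_2d(c, [(1.0, 1.0)], [1.0])   # b at its lower endpoint
```

Enlarging the right-hand side of Ax ≤ b relaxes the constraints, so the minimum can only decrease; hence the lower bound of the optimal value range comes from the upper endpoint of b, and vice versa.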
6.2. Interval objective function coefficients
ILP problems with interval objective function coefficients were studied by Gabrel and Murat [13], Hladík [23], Inuiguchi and Sakawa [28, 29], and McKeown and Minch [51], among others. Consider a family of linear programs min c^T x subject to x ∈ M(A, b), where A ∈ R^{m×n} and b ∈ R^m are given, and c perturbs within an interval vector c ∈ IR^n.

Let x ∈ M(A, b) be a feasible solution. It is called weakly optimal if it is optimal for some c ∈ c, and strongly optimal if it is optimal for each c ∈ c. Note that in fuzzy set theory, the notions of possibly and necessarily optimal solutions [28] are used instead. Three algorithms to compute all weakly optimal solutions were presented by Steuer [91]. In what follows, we show how to check weak optimality of a given feasible solution x* ∈ M(A, b); the method is obviously polynomial.

We first recall some basics of tangent cones. Let x* be a feasible solution to a convex polyhedral set M(A, b). The tangent cone to M(A, b) at the point x* is formed by all rays emanating from x* and intersecting M(A, b) in at least one point distinct from x*. In type (A), where the feasible set is described by Ax = b, x ≥ 0, the tangent cone at x* reads

Ax = 0, x_i ≥ 0 ∀i ∈ I(x*),

where I(x*) := {i; x*_i = 0} is the set of active indices. In type (B), with constraints Ax ≤ b, the active set is defined analogously as I(x*) := {i; A_{i*} x* = b_i} and the tangent cone is described by

A_{i*} x ≤ 0 ∀i ∈ I(x*).

Finally, type (C) is easily transformed to type (B) by incorporating the nonnegativity constraints into the principal inequality system.

Without loss of generality, let the tangent cone to M(A, b) at x* be described by Dx ≤ 0. Then x* is optimal if and only if the linear inequality system

Dx ≤ 0, c^T x ≤ −1    (29)

has no solution. Thus x* is optimal for some c ∈ c if and only if the system (29) is infeasible for some c ∈ c, or, equivalently, if and only if it is not true that (29) is feasible for all c ∈ c. Strong feasibility is characterized by Theorem 3, which implies the following assertion; a similar result was derived in [28].

Proposition 1. A point x* ∈ M(A, b) is weakly optimal if and only if there is no solution to the linear system

D(x1 − x2) ≤ 0, c̄^T x1 − c̲^T x2 ≤ −1, x1, x2 ≥ 0.
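For type (B), the matrix D describing the tangent cone is simply the submatrix of active rows. A small sketch (hypothetical data):

```python
def tangent_cone_rows(A, b, x, tol=1e-9):
    """For the polyhedron {x : A x <= b} (type (B)), return the matrix D
    of active rows at x, so the tangent cone at x is {d : D d <= 0}."""
    return [row for row, bi in zip(A, b)
            if abs(sum(a * xj for a, xj in zip(row, x)) - bi) <= tol]

# Unit square {x : x1 <= 1, x2 <= 1, -x1 <= 0, -x2 <= 0}; at the corner
# (1, 1) exactly the first two constraints are active.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]
D = tangent_cone_rows(A, b, (1.0, 1.0))
```

The returned rows form the matrix D of the tangent cone description {d : Dd ≤ 0} used in system (29).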
A min-max regret characterization of strongly optimal solutions was given in [29], a heuristic in [50], an exponential algorithm in [28], and complexity results in [3]. The method from Section 5.1. on basis stability is adapted and simplified in the following way. Conditions C1 and C2 hold trivially. In C3, testing strong feasibility of

A_N^T y ≤ c_N, A_B^T y = c_B, c ∈ c,

is equivalent, by the substitution y := A_B^{-T} c_B, to testing A_N^T A_B^{-T} c_B ≤ c_N. Due to the special structure, the system is strongly feasible if and only if

A_N^T A_B^{-T} c_B ≤ c_N    (30)

holds for the interval subvectors c_B, c_N of c. This is easily checked by interval arithmetic. Thus B-stability is equivalent to (30). As long as x* is a basic solution corresponding to a basis B, (30) is a necessary and sufficient condition for strong optimality of x*.

Some results are related to the inequality constrained type (B). In [23] it was shown that testing strong optimality of x is an NP-hard problem. However, restriction to the class of problems where x is a nondegenerate basic solution makes the problem polynomial. This is because we can efficiently determine the normal cone to M(A, b) at x*, and x* is optimal for each scenario if and only if the interval vector −c lies within the normal cone. The following result is adapted from [23].
Theorem 17. Let x be a nondegenerate basic solution corresponding to a basis B. It is strongly optimal if and only if A_B^{-T} c ≤ 0.
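Conditions such as (30), which compare a real matrix times an interval vector against another interval vector for every scenario, reduce to endpoint comparisons in interval arithmetic: the supremum of each left-hand component must not exceed the infimum of the corresponding right-hand component. A sketch (the matrix M stands for a precomputed A_N^T A_B^{-T}; all data are hypothetical):

```python
def holds_strongly(M, c_left, c_right):
    """Check that M * c_left <= c_right holds for every choice of the
    interval entries, i.e. sup(M * c_left) <= inf(c_right) componentwise.
    Interval vectors are lists of (lo, hi) pairs; M is a real matrix."""
    for row, (r_lo, _r_hi) in zip(M, c_right):
        # Endpoint rule of interval arithmetic: a positive coefficient
        # attains its worst case at the upper endpoint of the interval,
        # a negative coefficient at the lower endpoint.
        sup_left = sum(m * (hi if m >= 0 else lo)
                       for m, (lo, hi) in zip(row, c_left))
        if sup_left > r_lo:
            return False
    return True

# Hypothetical data: M plays the role of A_N^T A_B^{-T}, c_B of the
# interval subvector of basic objective coefficients.
M = [[1.0, 1.0], [-1.0, 0.5]]
c_B = [(1.0, 2.0), (0.0, 1.0)]
ok_wide = holds_strongly(M, c_B, [(5.0, 6.0), (0.0, 1.0)])    # True
ok_tight = holds_strongly(M, c_B, [(2.5, 6.0), (0.0, 1.0)])   # False
```

The first call succeeds; the second fails because the worst case of the first left-hand component (3.0) exceeds the infimum 2.5 on the right.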
7. Duality
Various aspects of duality in ILP were investigated, e.g., by Rohn [71], Serafini [90], and Gabrel et al. [11, 13]. Basically, linear programming duality extends straightforwardly to interval linear programming. Consider, for example, a pair of primal and dual linear programs

f(A, b, c) := min c^T x subject to Ax = b, x ≥ 0,

and

g(A, b, c) := max b^T y subject to A^T y ≤ c.

The following considerations hold for other types as well. As long as at least one of the problems is feasible, strong duality holds, that is, f(A, b, c) = g(A, b, c), where min ∅ = +∞ and max ∅ = −∞ by convention.

Now, consider a family of primal-dual programs over A ∈ A, b ∈ b and c ∈ c. First, we have to verify that in each scenario at least one of the primal-dual problems is feasible. So far, no algorithm is known that gives an answer in every case, but there are some useful sufficient conditions. For instance, the property is valid if the primal (or dual) ILP problem is strongly feasible (see Section 2.1.).
Suppose that a zero duality gap is ensured for all scenarios. Then the lower bound on the optimal value for the primal ILP is equal to the lower bound for the dual ILP. In other words,

min_{A∈A, b∈b, c∈c} f(A, b, c) = min_{A∈A, b∈b, c∈c} g(A, b, c),    (31)

and likewise for the upper bounds

max_{A∈A, b∈b, c∈c} f(A, b, c) = max_{A∈A, b∈b, c∈c} g(A, b, c).    (32)
The left-hand side of (31) is the basic problem of computing the lower bound on the optimal value in type (A), whereas the right-hand side corresponds to the upper bound in type (B). Similarly, the left-hand side of (32) corresponds to the upper bound in type (A) and the right-hand side to the lower bound in type (B). Not surprisingly, each pair consists of problems with the same complexity: polynomial in the first case, and NP-hard in the second case. Provided that f̲ is finite, the relation (31) takes the alternative form [80]

f̲ = max min_{b∈b} b^T y subject to A^T y ≤ c for each A ∈ A, c ∈ c.
Considering uncertainties in the constraint matrix only, the relation reads [90]

min c^T x subject to x ∈ ⋃_{A∈A} {x; Ax = b, x ≥ 0} = max b^T y subject to y ∈ ⋂_{A∈A} {y; A^T y ≤ c}.

Analogously, when f̄ is finite, (32) has the equivalent form [80]

f̄ = max max_{b∈b} b^T y subject to A^T y ≤ c for some A ∈ A, c ∈ c.
8. Conclusion

8.1. Applications
Since uncertainty is common in many scientific disciplines, applications are found in fields as diverse as economics, sociology, and logistics. In economics, the portfolio selection problem was studied in [6, 21, 38]. An application to the network topology of transmission systems can be found in [65]. The feed-mix problem was considered in [91, 96], and the related diet problem in [30, 90]. In the area of logistics, environmental management and planning, ILP was applied, e.g., in air quality management [40], water resources and quality management [46, 47, 97, 101], solid waste management planning [25, 26, 43, 94], long-term hydropower planning [48], and inventory management [5, 13].

Game theory deals with competition and strategic interaction among subjects in the social sciences, biology, engineering, and other areas. The intrinsic uncertainty can be modelled by interval estimates, as was done in the interval matrix game works by Collins and Hu [8, 9], Liu
and Kao [42], or Nayak and Pal [61]. Since zero-sum matrix games are equivalent to linear programs, interval matrix games are, in principle, solvable by ILP methodology.

ILP can also help in global optimization methods based on interval analysis and the branch & bound framework [10, 17, 63]. Often, the objective function is linearized, and knowledge of the optimal value range may improve the lower and upper bounds of the objective function over sub-boxes.

On the theoretical side, Tigan and Stancu-Minasian [95] and Rohn [74] applied ILP to introduce sensitivity coefficients of linear programs. ILP methodology is a suitable tool for sensitivity analysis in linear programming. In traditional sensitivity analysis, one studies the behaviour of the optimal value and basis stability with respect to one-parameter perturbations. This is a simplified approach, since it does not take into account dependencies with other coefficients. Simultaneous and independent variations of rim coefficients were investigated by Hladík [20], Ward and Wendell [99], and Wendell [100], for instance. Sensitivity analysis of simultaneous perturbations of arbitrary coefficients can be performed precisely by ILP techniques.
8.2. Software
Due to the variability of problems concerning ILP, there is no generally applicable solver for ILP. Nevertheless, there are plenty of interval packages [1] implementing interval arithmetic. In particular, INTLAB [86] is a MATLAB toolbox for interval computing. The related VERSOFT package [83] collects various verification routines for interval linear algebraic problems, such as computing an enclosure or the interval hull of the solution set of a system of interval linear equations, finding a weak solution of a system of interval linear equations, or checking regularity of an interval matrix.
8.3. Inverse problems
The above-mentioned sensitivity analysis is closely related to inverse problems. In inverse problems, we are usually given a linear program, and we have to maximally extend the real coefficients to intervals such that some kind of invariance is preserved. For example, Hladík [21] addressed the inverse optimal value range problem: one is given a linear program and bounds on the optimal value, and the aim is to extend it to an ILP problem such that the optimal value range lies within the prescribed bounds. An efficient algorithm is proposed that computes a Pareto optimal extension to ILP in most cases.
8.4. Open problems
More than thirty years of research in ILP have brought many interesting results and closed many questions. Nevertheless, some open questions remain. We conclude our survey with a list of open problems concerning ILP.

• A sufficient and necessary condition for weak unboundedness, strong boundedness and weak optimality.

• A method to check whether a given x* ∈ R^n is an optimal solution for some scenario.
• A method for determining the image of the optimal value function.

• A sufficient and necessary condition for the duality gap to be zero for each scenario.

• A method to test whether a basis B is optimal for some scenario.

• Characterization of basis stability for types (B) and (C).

• Characterization of the set of optimal solutions and its interval hull.
References

[1] Interval software. http://www.cs.utep.edu/interval-comp/intsoft.html.
[2] G. Alefeld and J. Herzberger. Introduction to interval computations. Academic Press, London, 1983.
[3] I. Averbakh and V. Lebedev. On the complexity of minmax regret linear programming. Eur. J. Oper. Res., 160(1):227–231, 2005.
[4] H. Beeck. Linear programming with inexact data. Technical report TUM-ISU-7830, Technical University of Munich, Munich, 1978.
[5] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski. Adjustable robust solutions of uncertain linear programs. Math. Program., 99(2), 2004.
[6] A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Oper. Res. Lett., 25(1):1–13, 1999.
[7] J. W. Chinneck and K. Ramadan. Linear programming with interval coefficients. J. Oper. Res. Soc., 51(2):209–220, 2000.
[8] W. D. Collins and C. Hu. Interval matrix games. In C. Hu et al., editor, Knowledge Processing with Interval and Soft Computing, chapter 7, pages 1–19. Springer, London, 2008.
[9] W. D. Collins and C. Hu. Studying interval valued matrix games with fuzzy logic. Soft Comput., 12(2):147–155, 2008.
[10] C. A. Floudas. Deterministic Global Optimization: Theory, Methods and Applications. Kluwer Academic Publishers, Boston, 2000.
[11] V. Gabrel and C. Murat. Robustness and duality in linear programming. J. Oper. Res. Soc., 61(8):1288–1296, 2010.
[12] V. Gabrel, C. Murat, and N. Remli. Best and worst optimum for linear programs with interval right hand sides. In H. A. Le Thi et al., editor, Modelling, Computation and Optimization in Information Systems and Management Sciences. Second International Conference MCO 2008, Metz, France. Proceedings, Berlin, 2008. Springer.
[13] V. Gabrel, C. Murat, and N. Remli. Linear programming with interval right hand sides. Int. Trans. Oper. Res., 17(3):397–408, 2010.
[14] T. Gal. Postoptimal analyses, parametric programming, and related topics. McGraw-Hill, Hamburg, 1979.
[15] T. Gal and J. Nedoma. Multiparametric linear programming. Manage. Sci., Theory, 18:406–422, 1972.
[16] W. Gerlach. Zur Lösung linearer Ungleichungssysteme bei Störung der rechten Seite und der Koeffizientenmatrix. Math. Operationsforsch. Stat., Ser. Optimization, 12:41–43, 1981.
[17] E. R. Hansen. Global optimization using interval analysis. Marcel Dekker, New York, 1992.
[18] S. M. Hashemi, M. Ghatee, and E. Nasrabadi. Combinatorial algorithms for the minimum interval cost flow problem. Appl. Math. Comput., 175(2):1200–1216, 2006.
[19] M. Hladík. Description of symmetric and skew-symmetric solution set. SIAM J. Matrix Anal. Appl., 30(2):509–521, 2008.
[20] M. Hladík. Tolerance analysis in linear programming. Technical report KAM-DIMATIA Series (2008-901), Department of Applied Mathematics, Charles University, Prague, 2008. http://kam.mff.cuni.cz/~kamserie/serie/clanky/2008/s901.ps.
[21] M. Hladík. Tolerances in portfolio selection via interval linear programming. In CD-ROM Proceedings 26th Int. Conf. Mathematical Methods in Economics MME08, Liberec, Czech Republic, pages 185–191, September 2008.
[22] M. Hladík. Optimal value range in interval linear programming. Fuzzy Optim. Decis. Mak., 8(3):283–294, 2009.
[23] M. Hladík. Complexity of necessary efficiency in interval LP and MOLP. Technical report KAM-DIMATIA Series (2010-980), Department of Applied Mathematics, Charles University, Prague, 2010. http://kam.mff.cuni.cz/~kamserie/serie/clanky/2010/s980.ps.
[24] M. Hladík. How to determine basis stability in interval linear programming. Technical report KAM-DIMATIA Series (2010-973), Department of Applied Mathematics, Charles University, Prague, 2010. http://kam.mff.cuni.cz/~kamserie/serie/clanky/2010/s973.ps.
[25] G. H. Huang, B. W. Baetz, and G. G. Patry. Grey integer programming: An application to waste management planning under uncertainty. Eur. J. Oper. Res., 83(3):594–620, 1995.
[26] G. H. Huang, B. W. Baetz, and G. G. Patry. Trash-flow allocation: Planning under uncertainty. Interfaces, 28(6):36–55, 1998.
[27] M. Inuiguchi, J. Ramík, T. Tanino, and M. Vlach. Satisficing solutions and duality in interval and fuzzy linear programming. Fuzzy Sets Syst., 135(1):151–177, 2003.
[28] M. Inuiguchi and M. Sakawa. Possible and necessary optimality tests in possibilistic linear programming problems. Fuzzy Sets Syst., 67(1):29–46, 1994.
[29] M. Inuiguchi and M. Sakawa. Minimax regret solution to linear programming problems with an interval objective function. Eur. J. Oper. Res., 86(3):526–536, 1995.
[30] C. Jansson. A self-validating method for solving linear programming problems with interval input data. In U. Kulisch and H. J. Stetter, editors, Scientific computation with automatic result verification, Computing Suppl. 6, pages 33–45, Wien, 1988. Springer.
[31] C. Jansson. Rigorous lower and upper bounds in linear programming. SIAM J. Optim., 14(3):914–935, 2004.
[32] C. Jansson and S. M. Rump. Rigorous solution of linear programming problems with uncertain data. Z. Oper. Res., 35(2):87–111, 1991.
[33] F. Jiménez and J. L. Verdegay. Uncertain solid transportation problems. Fuzzy Sets Syst., 100(1-3):45–57, 1998.
[34] J. Koníčková. Sufficient condition of basis stability of an interval linear programming problem. ZAMM, Z. Angew. Math. Mech., 81(Suppl. 3):677–678, 2001.
[35] J. Koníčková. Strong unboundedness of interval linear programming problems. In Proceedings of the 12th GAMM–IMACS Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, SCAN 2006, page 26, 2006.
[36] R. Krawczyk. Fehlerabschätzung bei linearer Optimierung. In Interval Mathematics, LNCS 29, pages 215–222, Berlin, 1975. Springer.
[37] D. Kuchta. A modification of a solution concept of the linear programming problem with interval coefficients in the constraints. CEJOR, Cent. Eur. J. Oper. Res., 16(3):307–316, 2008.
[38] K. K. Lai, S. Y. Wang, J. P. Xu, S. S. Zhu, and Y. Fang. A class of linear interval programming problems and its application to portfolio selection. IEEE Trans. Fuzzy Syst., 10(6):698–704, 2002.
[39] W. Li and G.-X. Wang. General solutions for linear programming with interval right hand side. In Proceedings of the 2006 International Conference on Machine Learning and Cybernetics, Dalian, pages 1836–1839, 2006.
[40] Y. P. Li, G. H. Huang, P. Guo, Z. F. Yang, and S. L. Nie. A dual-interval vertex analysis method and its application to environmental decision making under uncertainty. Eur. J. Oper. Res., 200(2):536–550, 2010.
[41] B. Liu. Theory and practice of uncertain programming. 3rd ed. Springer, Berlin, 2009.
[42] S.-T. Liu and C. Kao. Matrix games with interval data. Comput. Ind. Eng., 56(4):1697–1700, 2009.
[43] Z. Liu, G. Huang, X. Nie, and L. He. Dual-interval linear programming model and its application to solid waste management planning. Environ. Eng. Sci., 26(6):1033–1045, 2009.
[44] W. A. Lodwick. The relationship between interval, fuzzy and possibilistic optimization. In V. Torra, Y. Narukawa, and M. Inuiguchi, editors, Modeling Decisions for Artificial Intelligence, volume 5861 of LNCS, pages 55–59, Berlin Heidelberg, 2009. Springer.
[45] W. A. Lodwick and K. D. Jamison. The use of interval-valued probability measures in optimization under uncertainty for problems containing a mixture of fuzzy, possibilistic, and interval uncertainty. Lecture Notes in Computer Science, 4529 LNAI:361–370, 2007.
[46] H. W. Lu, G. H. Huang, and L. He. Development of an interval-valued fuzzy linear-programming method based on infinite α-cuts for water resources management. Environ. Model. Softw., 25(3):354–361, 2010.
[47] B. Luo, I. Maqsood, and G. H. Huang. Planning water resources systems with interval stochastic dynamic programming. Water Resour. Manag., 21(6):997–1014, 2007.
[48] B. Luo and D. Zhou. Planning hydroelectric resources with recourse-based multistage interval-stochastic programming. Stoch. Environ. Res. Risk Assess., 23:65–73, 2009.
[49] B. Machost. Numerische Behandlung des Simplexverfahrens mit intervallanalytischen Methoden. Technical Report 30, Berichte der Gesellschaft für Mathematik und Datenverarbeitung, 54 pages, Bonn, 1970.
[50] H. E. Mausser and M. Laguna. A heuristic to minimax absolute regret for linear programs with interval objective function coefficients. Eur. J. Oper. Res., 117(1):157–174, 1999.
[51] P. G. McKeown and R. A. Minch. Multiplicative interval variation of objective function coefficients in linear programming. Manage. Sci., 28:1462–1470, 1982.
[52] A. A. Molai and E. Khorram. Linear programming problem with interval coefficients and an interpretation for its constraints. Iran. J. Sci. Technol., Trans. A: Sci., 31(4):369–390, 2007.
[53] R. E. Moore. Interval analysis. Prentice-Hall, Englewood Cliffs, N.J., 1966.
[54] F. Mráz. Interval linear programming problem. Technical report Freiburger Intervall-Berichte 87/16, Albert-Ludwigs-Universität, Freiburg, 1987.
[55] F. Mráz. On supremum of the solution function in LP's with interval coefficients. Technical report KAM Series (93-236), Department of Applied Mathematics, Charles University, Prague, 1993.
[56] F. Mráz. The algorithm for solving interval linear programs and comparison with similar approaches. Technical report KAM Series (93-239), Department of Applied Mathematics, Charles University, Prague, 1993.
[57] F. Mráz. Úloha lineárního programování s intervalovými koeficienty (in Czech). Habilitation thesis, West Bohemian University, 1993.
[58] F. Mráz. The exact lower bound of optimal values in interval LP. In G. Alefeld, A. Frommer, and B. Lang, editors, Scientific Computing and Validated Numerics. Akademie Verlag, Berlin, 1996.
[59] F. Mráz. On infimum of optimal objective function values in interval linear programming. Technical report KAM Series (96-337), Department of Applied Mathematics, Charles University, Prague, 1996.
[60] F. Mráz. Calculating the exact bounds of optimal values in LP with interval coefficients. Ann. Oper. Res., 81:51–62, 1998.
[61] P. K. Nayak and M. Pal. Linear programming technique to solve two person matrix games with interval payoffs. Asia Pac. J. Oper. Res., 26(2):285–305, 2009.
[62] A. Neumaier. Interval methods for systems of equations. Cambridge University Press, Cambridge, 1990.
[63] A. Neumaier. Complete search in continuous global optimization and constraint satisfaction. Acta Numer., 13:271–369, 2004.
[64] S. Ning and R. B. Kearfott. A comparison of some methods for solving linear interval equations. SIAM J. Numer. Anal., 34(4):1289–1305, 1997.
[65] A. S. Noghabi, H. R. Mashhadi, and J. Sadeh. Optimal coordination of directional overcurrent relays considering different network topologies using interval linear programming. IEEE Trans. Power Deliv., 25(3):1348–1354, 2010.
[66] W. Oettli and W. Prager. Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math., 6:405–409, 1964.
[67] M. Padberg. Linear optimization and extensions. 2nd ed. Springer, Berlin, 1999.
[68] S. Poljak and J. Rohn. Checking robust nonsingularity is NP-hard. Math. Control Signals Syst., 6(1):1–9, 1993.
[69] J. Renegar. Some perturbation theory for linear programming. Math. Program., 65:73–91, 1994.
[70] G. Rex and J. Rohn. Sufficient conditions for regularity and singularity of interval matrices. SIAM J. Matrix Anal. Appl., 20(2):437–445, 1998.
[71] J. Rohn. Duality in interval linear programming. In K. Nickel, editor, Interval mathematics, Proc. Int. Symp., Freiburg, 1980, pages 521–529, New York, 1980. Academic Press.
[72] J. Rohn. Strong solvability of interval linear programming problems. Computing, 26:79–82, 1981.
[73] J. Rohn. Miscellaneous results on linear interval systems. Technical report Freiburger Intervall-Berichte 85/9, Albert-Ludwigs-Universität, Freiburg, 1985.
[74] J. Rohn. On sensitivity of the optimal value of a linear program. Ekon.-Mat. Obz., 25(1):105–107, 1989.
[75] J. Rohn. Systems of linear interval equations. Linear Algebra Appl., 126:39–78, 1989.
[76] J. Rohn. Cheap and tight bounds: The recent result by E. Hansen can be made more efficient. Interval Comput., 1993(4):13–21, 1993.
[77] J. Rohn. Stability of the optimal basis of a linear program under uncertainty. Oper. Res. Lett., 13(1):9–12, 1993.
[78] J. Rohn. Linear programming with inexact data is NP-hard. ZAMM, Z. Angew. Math. Mech., 78(Supplement 3):S1051–S1052, 1998.
[79] J. Rohn. A handbook of results on interval linear problems. http://www.cs.cas.cz/rohn/handbook, 2005.
[80] J. Rohn. Interval linear programming. In M. Fiedler et al., editor, Linear optimization problems with inexact data, chapter 3, pages 79–100. Springer, New York, 2006.
[81] J. Rohn. Solvability of systems of interval linear equations and inequalities. In M. Fiedler et al., editor, Linear optimization problems with inexact data, chapter 2, pages 35–77. Springer, New York, 2006.
[82] J. Rohn. Forty necessary and sufficient conditions for regularity of interval matrices: A survey. Electron. J. Linear Algebra, 18:500–512, 2009.
[83] J. Rohn. VERSOFT: Verification software in MATLAB / INTLAB, version 10, 2009. http://uivtx.cs.cas.cz/~rohn/matlab/.
[84] J. Rohn and V. Kreinovich. Computing exact componentwise bounds on solutions of linear systems with interval data is NP-hard. SIAM J. Matrix Anal. Appl., 16(2):415–420, 1995.
[85] J. Rohn and J. Kreslová. Linear interval inequalities. Linear Multilinear Algebra, 38(1-2):79–82, 1994.
[86] S. M. Rump. INTLAB – INTerval LABoratory. In T. Csendes, editor, Developments in Reliable Computing, pages 77–104. Kluwer Academic Publishers, Dordrecht, 1999. http://www.ti3.tu-harburg.de/rump/.
[87] N. Sahinidis. Optimization under uncertainty: State-of-the-art and opportunities. Comput. Chem. Eng., 28(6-7):971–983, 2004.
[88] A. Schrijver. Theory of linear and integer programming. Repr. Wiley, Chichester, 1998.
[89] A. Sengupta and T. K. Pal. Fuzzy Preference Ordering of Interval Numbers in Decision Problems, volume 238 of Studies in Fuzziness and Soft Computing. Springer, Berlin, 2009.
[90] P. Serafini. Linear programming with variable matrix entries. Oper. Res. Lett., 33(2):165–170, 2005.
[91] R. E. Steuer. Algorithms for linear programming problems with interval objective function coefficients. Math. Oper. Res., 6:333–348, 1981.
[92] G. W. Stewart and J.-G. Sun. Matrix perturbation theory. Academic Press, Boston, 1990.
[93] N. F. Stewart. Interval arithmetic for guaranteed bounds in linear programming. J. Optim. Theory Appl., 12:1–5, 1973.
[94] Y. Sun, Y. Li, and G. Huang. Dual-interval linear programming model and its application to solid waste management planning. Environ. Eng. Sci., 27(6):451–468, 2010.
[95] S. Tigan and I. M. Stancu-Minasian. On Rohn's relative sensitivity coefficient of the optimal value for a linear-fractional program. Math. Bohem., 125(2):227–234, 2000.
[96] S. Tong. Interval number and fuzzy number linear programmings. Fuzzy Sets Syst., 66(3):301–306, 1994.
[97] C.-P. Tung, N.-M. Hong, and M. Li. Interval number fuzzy linear programming for climate change impact assessments of reservoir active storage. Paddy Water Environ., 7(4):349–356, 2009.
[98] M. Vranka. Interval linear programming. Master's thesis, Faculty of Mathematics and Physics, Charles University in Prague, 2005.
[99] J. E. Ward and R. E. Wendell. Approaches to sensitivity analysis in linear programming. Ann. Oper. Res., 27:3–38, 1990.
[100] R. E. Wendell. Linear programming. III: The tolerance approach. In T. Gal et al., editor, Advances in sensitivity analysis and parametric programming, chapter 5, pages 1–21. Kluwer Academic Publishers, Dordrecht, 1997.
[101] F. Zhou, H. C. Guo, G. X. Chen, and G. H. Huang. The interval linear programming: A revisit. J. Environ. Inform., 11(1):1–10, 2008.
[102] F. Zhou, G. H. Huang, G.-X. Chen, and H.-C. Guo. Enhanced-interval linear programming. Eur. J. Oper. Res., 199(2):323–333, 2009.
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN: 978-1-61209-579-0
© 2012 Nova Science Publishers, Inc.
Chapter 3
THE INFINITE-DIMENSIONAL LINEAR PROGRAMMING PROBLEMS AND THEIR APPROXIMATION

N.B. Pleshchinskii∗
Kazan (Volga Region) Federal University, Russia
Abstract

In this chapter, primal and dual abstract linear programming problems are considered. The possibility of approximating these problems by finite-dimensional problems is discussed.
Key Words: infinite-dimensional linear programming, dual space and dual operator, approximation and interpolation

AMS Subject Classification: 90C, 46A, 41A.
1. Introduction
The classical problems of linear programming are usually considered in finite-dimensional spaces. For example, the primal problem in the standard form

<c, x> → max,  Ax ≤ b,  x ≥ 0,    (1)

and the dual problem

<b, y> → min,  A′y ≥ c,  y ≥ 0,    (2)

are finite-dimensional. Here c, x ∈ R^n, A is an m × n matrix, b, y ∈ R^m, A′ is the transposed matrix, and x and y are the unknown vectors.

We discuss two questions. How can the linear programming theory be carried over to an abstract infinite-dimensional setting? Is it possible to consider the problems (1) and (2) as the result of an approximation of dual infinite-dimensional linear programming problems?
∗E-mail address: [email protected]
Infinite-dimensional linear programming appeared when Kantorovich [5] in 1942 extended the mass translocation problem (the Monge problem, 1781) to functional spaces. One can pass from the classical linear programming problems to the infinite-dimensional case in many ways. The first step in this direction is to consider infinitely many variables in the primal problem. The number of unknown variables in semi-infinite linear programming problems is usually finite, but the feasible domain is given by inequalities whose coefficients are continuous functions [4], [12], [2]; the dual space is then the space of regular Borel measures. Continuous linear programming problems were considered by many researchers (see, for example, [1]). At a more abstract level, the unknown elements can belong to arbitrary linear vector spaces [8] (general linear programs).

In the work [13] it is noted that it is useful to approximate both the primal and the dual problems simultaneously. That work also emphasizes the important role, among the infinite-dimensional linear programming problems, of those of a computational character. Some results of the works [9], [7], [10] are used in this chapter.
2. Dual Spaces and Dual Operators
Let X and X′ be real linear spaces. A functional ⟨·, ·⟩ : X × X′ → R is called a bilinear form if ⟨λ₁x₁ + λ₂x₂, x′⟩ = λ₁⟨x₁, x′⟩ + λ₂⟨x₂, x′⟩ and ⟨x, λ′₁x′₁ + λ′₂x′₂⟩ = λ′₁⟨x, x′₁⟩ + λ′₂⟨x, x′₂⟩ for all x, x₁, x₂ ∈ X, x′, x′₁, x′₂ ∈ X′ and λ₁, λ₂, λ′₁, λ′₂ ∈ R. The bilinear form is called total (or nondegenerate) if ⟨x, x′⟩ = 0 for all x′ ∈ X′ implies x = 0, and ⟨x, x′⟩ = 0 for all x ∈ X implies x′ = 0. Equivalently, the bilinear form is total if and only if for every x ∈ X, x ≠ 0, there exists x′ ∈ X′ such that ⟨x, x′⟩ ≠ 0, and for every x′ ∈ X′, x′ ≠ 0, there exists x ∈ X such that ⟨x, x′⟩ ≠ 0.

The spaces X and X′ are called dual spaces (a dual pair) if a total bilinear form ⟨·, ·⟩ : X × X′ → R is given. The two spaces of a dual pair play symmetric roles, and a space can be dual to itself: every real linear space equipped with a scalar product is dual to itself with the bilinear form ⟨x, x′⟩ = (x, x′). The linear space X and the algebraically conjugate space X⁺ (the set of all linear functionals on X) form a dual pair with ⟨x, x⁺⟩ = x⁺(x). The linear space X and the conjugate space X* (the set of all linear continuous functionals on X) form a dual pair as well, although in these cases the symmetry of the two spaces is violated. The inclusion X′ ⊂ X⁺ always takes place: if X is a linear space and X′ is a space dual to X with bilinear form ⟨x, x′⟩, then for every element x′ ∈ X′ the linear functional x ↦ ⟨x, x′⟩ is defined, and this functional can be identified with x′.

Let X, X′ and Y, Y′ be dual pairs of linear spaces, and let A : X → Y be a linear operator with domain D(A) = X. The operator A′ : Y′ → X′ is called a dual operator to A if

⟨x, A′y′⟩ = ⟨Ax, y′⟩  for all x ∈ X, y′ ∈ Y′.  (3)

But in linear programming theory it is convenient to use another definition.
The Infinite-Dimensional Linear Programming Problems...
We say that the operators A : X → Y′ and A′ : Y → X′ are dual to each other if

⟨x, A′y⟩ = ⟨y, Ax⟩  for all x ∈ X, y ∈ Y.  (4)
Note that there is no contradiction here; only the notation is different. It is known that dual (conjugate) operators always exist in the case of dual pairs of conjugate spaces. Indeed, let X, X⁺ and Y, Y⁺ be dual pairs. The operator A⁺ : Y → X⁺ is called algebraically conjugate to the operator A if

⟨x, A⁺y⟩ = ⟨y, Ax⟩  for all x ∈ X, y ∈ Y.

It is easy to construct this operator: it is sufficient to assign to every element y ∈ Y the functional x ↦ ⟨y, Ax⟩, which is obviously linear. Similarly, let X, X* and Y, Y* be dual pairs. The operator A* : Y → X* is called conjugate to a linear bounded operator A if ⟨x, A*y⟩ = ⟨y, Ax⟩ for all x ∈ X, y ∈ Y. In the general case, however, the operator dual to a linear operator need not exist.
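In the simplest dual pair, Rⁿ paired with itself by the standard scalar product, the dual operator of a matrix operator is just its transpose. The following sketch (with arbitrarily chosen small matrices) checks the defining identity ⟨x, A′y⟩ = ⟨y, Ax⟩ and the product rule (BA)′ = A′B′ directly:

```python
# In the dual pair (R^n, R^n) with <x, x'> the dot product, the dual
# operator of a matrix is its transpose.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def dual(P):
    """Dual operator in the finite-dimensional case: the transpose."""
    return [list(col) for col in zip(*P)]

A = [[1, 2], [3, 4], [5, 6]]   # A : R^2 -> R^3
B = [[1, 0, 2], [0, 1, 1]]     # B : R^3 -> R^2

# Product rule (BA)' = A'B':
assert dual(matmul(B, A)) == matmul(dual(A), dual(B))

# Defining identity <x, A'y> = <y, Ax> for a sample pair:
x, y = [1, -2], [3, 0, 1]
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
Aty = [sum(A[i][j] * y[i] for i in range(3)) for j in range(2)]
assert sum(a * b for a, b in zip(x, Aty)) == sum(a * b for a, b in zip(y, Ax))
```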
Theorem 1. The following equalities hold (whenever the dual operators exist): 1) (A + B)′ = A′ + B′; 2) (λA)′ = λA′ for any real number λ; 3) (BA)′ = A′B′; 4) (A′)′ = A; 5) I′ = I, where I is the identity operator.

Proof. Let A and B be linear operators from X to Y′. Suppose the operators A′ and B′ dual to them exist, so that the identities ⟨x, A′y⟩ = ⟨y, Ax⟩ and ⟨x, B′y⟩ = ⟨y, Bx⟩ hold for all x ∈ X, y ∈ Y. Then ⟨x, (A′ + B′)y⟩ = ⟨y, (A + B)x⟩, and it follows that A′ + B′ is the operator dual to the linear operator A + B. The rest of the equalities are proved analogously. For example, let A : X → Y′, A′ : Y → X′ and B : Y′ → Z, B′ : Z′ → Y be dual pairs of linear operators, with ⟨x, A′y⟩ = ⟨y, Ax⟩ and ⟨By′, z′⟩ = ⟨B′z′, y′⟩. Then ⟨x, A′B′z′⟩ = ⟨B′z′, Ax⟩ = ⟨BAx, z′⟩, hence the operator A′B′ : Z′ → X′ is dual to the operator BA : X → Z. For the last equality we take Y = X, Y′ = X′ and I : X → X.

For example, in the case X = X′ = C([a, b]) the bilinear form

⟨x, x′⟩ = ∫ₐᵇ x(t) x′(t) dt  (5)

can be chosen. A pair of function spaces forms a dual pair if every product of their elements is integrable.

Theorem 2. Suppose X = X′ = Y = Y′ = C([a, b]) and both bilinear forms are defined as in formula (5). If k(τ, t) is a continuous function, then the operators

A : x(t) ↦ x(t) + ∫ₐᵇ x(τ) k(τ, t) dτ  and  A′ : y(t) ↦ y(t) + ∫ₐᵇ y(τ) k(t, τ) dτ

are dual to each other.
Proof. It is easy to obtain that

⟨x, A′y⟩ = ∫ₐᵇ x(t) [ y(t) + ∫ₐᵇ y(τ) k(t, τ) dτ ] dt = ∫ₐᵇ x(t) y(t) dt + ∫ₐᵇ y(τ) [ ∫ₐᵇ x(t) k(t, τ) dt ] dτ,

and moreover, changing the order of integration and renaming the integration variables,

⟨x, A′y⟩ = ∫ₐᵇ y(t) [ x(t) + ∫ₐᵇ x(τ) k(τ, t) dτ ] dt = ⟨y, Ax⟩.
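A small numerical check of Theorem 2 (a sketch only; the kernel k and the functions x, y below are arbitrary choices): approximating the bilinear form (5) by the trapezoidal rule, the two pairings ⟨x, A′y⟩ and ⟨y, Ax⟩ agree up to floating-point round-off, since the discrete double sums are identical after swapping the summation indices.

```python
import math

a, b, n = 0.0, 1.0, 200
h = (b - a) / n
ts = [a + i * h for i in range(n + 1)]
w = [h if 0 < i < n else h / 2 for i in range(n + 1)]   # trapezoid weights

k = lambda s, t: math.exp(-s * t)        # arbitrary continuous kernel
x = [math.sin(t) for t in ts]
y = [1.0 + t * t for t in ts]

def apply_op(f, transposed):
    """(Af)(t) = f(t) + int_a^b f(s) k(s, t) ds; A' uses k(t, s) instead."""
    out = []
    for i, t in enumerate(ts):
        integral = sum(w[j] * f[j] * (k(t, s) if transposed else k(s, t))
                       for j, s in enumerate(ts))
        out.append(f[i] + integral)
    return out

# Discrete version of (5): <u, v> = int_a^b u(t) v(t) dt
inner = lambda u, v: sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

lhs = inner(x, apply_op(y, transposed=True))    # <x, A'y>
rhs = inner(y, apply_op(x, transposed=False))   # <y, Ax>
print(abs(lhs - rhs))  # agrees up to floating-point round-off
```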
3. Infinite-Dimensional Linear Programming
Let X, X′ and Y, Y′ be dual pairs of ordered linear spaces, and let x′₀ ∈ X′ and y′₀ ∈ Y′ be fixed elements of these spaces. We consider two dual problems of abstract linear programming. Let A : X → Y′ be a linear operator and A′ : Y → X′ a dual operator to A. The primal linear programming problem and the problem dual to it can be formulated as follows:

⟨x, x′₀⟩ → max,  Ax ≤ y′₀,  x ≥ 0,  (6)

⟨y, y′₀⟩ → min,  A′y ≥ x′₀,  y ≥ 0.  (7)

Unfortunately, solutions of infinite-dimensional linear programming problems do not always exist. Moreover, the maximum in problem (6) and the minimum in problem (7) can differ (see [13]). The dual linear programming problems for algebraically conjugate spaces and for conjugate spaces are formulated analogously. The dual problems of finite-dimensional linear programming (1) and (2) are a particular case of this general scheme. The basic statements of the finite-dimensional case hold for abstract linear programming as well. For example:

Theorem 3. If x and y are feasible elements, then ⟨x, x′₀⟩ ≤ ⟨y, y′₀⟩. If x⁰ and y⁰ are feasible elements and ⟨x⁰, x′₀⟩ = ⟨y⁰, y′₀⟩, then x⁰ and y⁰ are solutions of problems (6) and (7).

Proof. If x′₀ ≤ A′y, then ⟨x, x′₀⟩ ≤ ⟨x, A′y⟩ for all x in the positive cone P⁺_X. On the other hand, if Ax ≤ y′₀, then ⟨y, Ax⟩ ≤ ⟨y, y′₀⟩ for all y in the positive cone P⁺_Y. But ⟨x, A′y⟩ = ⟨y, Ax⟩ by the definition of the dual operator. Further, let x and y be arbitrary feasible elements. Then ⟨x, x′₀⟩ ≤ ⟨y⁰, y′₀⟩ = ⟨x⁰, x′₀⟩ ≤ ⟨y, y′₀⟩.
We consider a pair of integral linear programming problems as an example. The primal problem has the form

∫_α^β c(τ) x(τ) dτ → max,  ∫_α^β A(t, τ) x(τ) dτ ≤ b(t), t ∈ [γ, δ],  x(τ) ≥ 0, τ ∈ [α, β],  (8)

and the dual problem has the form

∫_γ^δ b(t) y(t) dt → min,  ∫_γ^δ A(t, τ) y(t) dt ≥ c(τ), τ ∈ [α, β],  y(t) ≥ 0, t ∈ [γ, δ].  (9)
It is sufficient to assume that all functions in formulas (8) and (9) are continuous. A more concrete example is the problem of force distribution along an elastic string. Suppose the string is fixed at the points x = 0 and x = l. It is known that its deflection from equilibrium under a distributed force p(x) is

u(x) = ∫₀ˡ G(x, t) p(t) dt,

where G(x, t) is the Green function. Suppose every point of the string must be displaced beyond the boundary of a given domain, the total load must be minimal, and the force distribution along the string is unknown. Then we obtain the integral linear programming problem

∫₀ˡ p(t) dt → min,  ∫₀ˡ G(x, t) p(t) dt ≥ q(x),  p(x) ≥ 0,
here q(x) is a given function.

The canonical forms of the primal and dual linear programming problems are

⟨x, x′₀⟩ → max,  Ax = y′₀,  x ≥ 0,   and   ⟨y, y′₀⟩ → min,  A′y = x′₀,  y ≥ 0.

It is easy to transform problems in standard form into problems in canonical form; new variables can be added in the following way. Let U = X × Y′ and V = Y × X′ be new spaces of unknown elements. The spaces U′ = X′ × Y and V′ = Y′ × X are dual to them; this duality is given by the bilinear forms

⟨u, u′⟩ = ⟨x, x′⟩ + ⟨y, y′⟩ = ⟨v, v′⟩.

Let u′₀ = (x′₀, 0) and v′₀ = (y′₀, 0) be the fixed elements. We construct operators B : U → V′ and C : V → U′ such that B : (x, y′) ↦ (Ax + y′, 0) and C : (y, x′) ↦ (A′y − x′, 0). Then the canonical problems

⟨u, u′₀⟩ → max,  Bu = v′₀,  u ≥ 0,   and   ⟨v, v′₀⟩ → min,  Cv = u′₀,  v ≥ 0

are equivalent to the initial primal and dual problems. Note that the operators B and C are not dual to each other.
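In the finite-dimensional case the operator B : (x, y′) ↦ (Ax + y′, 0) is just the familiar introduction of slack variables. A minimal sketch (data chosen arbitrarily): the inequality Ax ≤ b becomes the equality [A | I](x; y′) = b with y′ ≥ 0.

```python
# Standard form Ax <= b, x >= 0 rewritten in canonical (equality) form by
# appending a slack vector y' >= 0: [A | I](x; y') = b.
A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
m, n = len(A), len(A[0])

# Augmented constraint matrix [A | I]
B = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] for i in range(m)]

# Any feasible x for Ax <= b extends to (x, y') satisfying B(x; y') = b:
x = [1.0, 1.0]
slack = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
u = x + slack
assert all(s >= 0 for s in slack)                 # x was feasible
assert all(abs(sum(B[i][j] * u[j] for j in range(n + m)) - b[i]) < 1e-12
           for i in range(m))                     # equality form holds
```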
4. Approximation and Interpolation of Spaces and Operators
Let X, Y be exact spaces, let X̄, Ȳ be approximate spaces, let T_X, S_X and T_Y, S_Y be the pairs of approximation and interpolation operators, and let A : X → Y and Ā : X̄ → Ȳ be exact and approximate linear operators (see Fig. 1).

Figure 1. Exact and approximate spaces and operators: a square diagram with T_X : X → X̄ and S_X : X̄ → X on the left, T_Y : Y → Ȳ and S_Y : Ȳ → Y on the right, A : X → Y on top and Ā : X̄ → Ȳ at the bottom.

We say that the space X̄ approximates the space X if the approximation and interpolation operators T_X : X → X̄ and S_X : X̄ → X are given and the equality T_X S_X = I holds (here I is the identity operator). If the approximation is nontrivial, then S_X T_X ≠ I. If the spaces X̄, Ȳ approximate the spaces X, Y, then we say that every operator Ā : X̄ → Ȳ is an approximation operator for the exact operator A. Analogously, if an operator Ā : X̄ → Ȳ is given, then every operator Ã : X → Y is an interpolation operator for Ā. The operators Ā⁰ = T_Y A S_X and Ã⁰ = S_Y Ā T_X are called the natural approximation operator and the natural interpolation operator.

It is convenient to estimate the distance between the operators A and Ā by means of the operators Ā⁰ and Ã⁰. The values m and m̄ in the inequalities

‖(A − S_Y Ā T_X) x‖ ≤ m ‖x‖  and  ‖(T_Y A S_X − Ā) x̄‖ ≤ m̄ ‖x̄‖  (10)

are a measure of this proximity. Moreover, it is possible to compare the exact and approximate operators on any set, or at a concrete element of it. If the norms ‖A − S_Y Ā T_X‖ and ‖T_Y A S_X − Ā‖ are well defined, then the numbers m and m̄ give the best estimates of the proximity. In the general case one may also consider unbounded operators, or operators whose domain of definition is not a dense set.

Now let X, X′ and Y, Y′ be dual pairs of spaces, and let A : X → Y and A′ : Y′ → X′ be dual operators. Suppose the spaces X̄ and Ȳ approximate the spaces X and Y, and the operators T_X, S_X, T_Y, S_Y are the approximation and interpolation operators. It is necessary to define operators T_{X′}, S_{X′}, T_{Y′}, S_{Y′} and Ā′ so that the spaces X′, Y′ and the operator A′ are approximated as well (see Fig. 2).
Figure 2. Dual operator and its approximation: preliminary diagram, with A′ : Y′ → X′ on top, its approximation Ā′ : Ȳ′ → X̄′ below, and the operator pairs T_{X′}, S_{X′} and T_{Y′}, S_{Y′} connecting the exact and approximate dual spaces.

We propose a natural approximation method for the dual spaces and dual operators. Let X̄′ and Ȳ′ be the spaces dual to X̄ and Ȳ, respectively.
Theorem 4. If the operators T_X and S_X have dual operators, then the space X̄′ approximates the space X′.

Proof. By the definition of the approximation and interpolation operators, T_X S_X = I (in the space X̄). From properties 3) and 5) of dual operators it follows that the identity S_X′ T_X′ = I holds (in the space X̄′) for the operators S_X′ : X′ → X̄′ and T_X′ : X̄′ → X′. Hence the space X̄′ is an approximation of the space X′, and the approximation and interpolation operators for the dual spaces are T_{X′} = S_X′ and S_{X′} = T_X′.

Note that choosing X̄′ as the approximation of X′ does not mean that the operations of approximation and duality commute; it means only that among all possible approximations of X′ the space X̄′ is selected. Theorem 4 can be restated as follows: the space dual to an approximating space approximates the space dual to the exact space. The approximation Ȳ′ of Y′ can be chosen analogously.

Theorem 5. If the operator (Ā)′ exists, then it approximates the operator A′.

Proof. We assume that the operator Ā : X̄ → Ȳ′ approximates the operator A : X → Y′. It follows from the definition of the dual operator that (Ā)′ maps Ȳ into X̄′.

Again, taking (Ā)′ as the approximation of A′ means only that (Ā)′ is chosen from among the operators approximating A′; another approximation operator could be chosen as well.

If X̄′, Ȳ′ and (Ā)′ are chosen as the approximations of X′, Y′ and A′, then the diagram in Fig. 2 can be specified (see Fig. 3).
Figure 3. Dual operator and its approximation: final diagram, in which the roles of approximation and interpolation operators for the dual spaces are played by the dual operators themselves: S_X′ : X′ → X̄′ and T_X′ : X̄′ → X′ (similarly S_Y′ and T_Y′), with (Ā)′ : Ȳ′ → X̄′ approximating A′ : Y′ → X′.
Theorem 6. If the approximation of A′ is chosen as (Ā)′, then the dual of the natural approximation equals the natural approximation of the dual, and the dual of the natural interpolation equals the natural interpolation of the dual.

Proof. Indeed, if the natural approximation Ā = T_Y A S_X is chosen, then (Ā)′ = S_X′ A′ T_Y′. On the other hand, the natural approximation of A′ is T_{X′} A′ S_{Y′} = S_X′ A′ T_Y′ (by Theorem 4, T_{X′} = S_X′ and S_{Y′} = T_Y′). Thus the 'operations' of approximation and duality commute in this broad sense. Moreover, (Ã)′ = (S_Y Ā T_X)′ = T_X′ (Ā)′ S_Y′, and the natural interpolation of the approximation of A′ is obtained by replacing (Ā)′ here with whatever approximation of A′ has been chosen; note that if that choice differs from (Ā)′, the second equality fails.

The pair of formulas

T_Y (S_Y Ā T_X) S_X = (T_Y S_Y) Ā (T_X S_X) = Ā,   S_Y (T_Y A S_X) T_X = (S_Y T_Y) A (S_X T_X) ≠ A

supplements Theorem 6: the natural approximation of the natural interpolation of Ā recovers Ā (since T S = I), while the natural interpolation of the natural approximation of A need not recover A (since S T ≠ I for a nontrivial approximation). Analogous statements hold for the conjugate and the algebraically conjugate spaces and operators.
5. Approximation of the Linear Programming Problems
We suppose now that the approximation and interpolation operators preserve the order in the linear spaces. For example, if T_X : X → X̄, then from x ≥ 0 it follows that x̄ = T_X x ≥ 0. The linear programming problem approximating the primal problem (6) can be formulated as follows:

⟨x̄, S_X′ x′₀⟩ → max,  T_{Y′} A S_X x̄ ≤ T_{Y′} y′₀,  x̄ ≥ 0.  (11)

Here the elements of the exact spaces are replaced by elements of the approximate spaces, and the natural approximation operator Ā = T_{Y′} A S_X is chosen as the approximate operator Ā : X̄ → Ȳ′. Problem (11) can be obtained in the following way. We seek the solution of problem (6) in the form x = S_X x̄. Therefore

⟨S_X x̄, x′₀⟩ → max,  A S_X x̄ ≤ y′₀,  S_X x̄ ≥ 0.

Now we replace the operator S_X in the functional by its dual operator, and afterwards we project the first inequality onto the space Ȳ′ and replace the second inequality by x̄ ≥ 0.

Theorem 7. Under the natural approximation, the problem dual to the problem approximating the primal problem approximates the dual problem.

Proof. We repeat the above reasoning for the dual problem (7). We seek its solution in the form y = S_Y ȳ. From

⟨S_Y ȳ, y′₀⟩ → min,  A′ S_Y ȳ ≥ x′₀,  S_Y ȳ ≥ 0

it follows that

⟨ȳ, S_Y′ y′₀⟩ → min,  T_{X′} A′ S_Y ȳ ≥ T_{X′} x′₀,  ȳ ≥ 0.  (12)
It is easy to see that problems (11) and (12) are dual to each other.

Now we construct the problems which approximate problems (8) and (9). Let τ_j ∈ [α, β], j = 1..N, and t_k ∈ [γ, δ], k = 1..M, be approximation nodes. The integrals can be replaced by the sums

∫_α^β f(τ) dτ ≈ Σ_{j=1}^N r_j f(τ_j),  ∫_γ^δ g(t) dt ≈ Σ_{k=1}^M s_k g(t_k),

where r_j and s_k are the (nonnegative) coefficients of the quadrature formulas. Denote x_j = r_j x(τ_j) and y_k = s_k y(t_k). Then from (8) it follows that

Σ_{j=1}^N c(τ_j) x_j → max,  Σ_{j=1}^N A(t_k, τ_j) x_j ≤ b(t_k), k = 1..M,  x_j ≥ 0, j = 1..N,

and from (9) that

Σ_{k=1}^M b(t_k) y_k → min,  Σ_{k=1}^M A(t_k, τ_j) y_k ≥ c(τ_j), j = 1..N,  y_k ≥ 0, k = 1..M.
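The discretization above can be sketched as follows (midpoint quadrature; the functions c, b and the kernel A are arbitrary illustrative choices). Note how the quadrature weights r_j disappear from the constraint matrix once they are absorbed into the unknowns x_j = r_j x(τ_j):

```python
import math

alpha, beta, gamma_, delta = 0.0, 1.0, 0.0, 1.0
N, M = 8, 6
# Midpoint rule: equal weights, nodes at cell centres
r = [(beta - alpha) / N] * N
tau = [alpha + (j + 0.5) * (beta - alpha) / N for j in range(N)]
s = [(delta - gamma_) / M] * M
t = [gamma_ + (k + 0.5) * (delta - gamma_) / M for k in range(M)]

c = lambda v: 1.0 + v                    # objective density (arbitrary)
bfun = lambda v: 2.0 - v                 # right-hand side (arbitrary)
Akern = lambda tk, tj: math.exp(-abs(tk - tj))   # kernel (arbitrary)

# Quadrature sanity check: sum_j r_j f(tau_j) ~ int_0^1 f for f(v) = v^2
assert abs(sum(rj * tj * tj for rj, tj in zip(r, tau)) - 1.0 / 3.0) < 0.01

# Finite LP data in the unknowns x_j = r_j x(tau_j):
#   maximize  sum_j c(tau_j) x_j
#   s.t.      sum_j A(t_k, tau_j) x_j <= b(t_k),  k = 1..M,   x_j >= 0
obj = [c(tj) for tj in tau]
rows = [[Akern(tk, tj) for tj in tau] for tk in t]
rhs = [bfun(tk) for tk in t]
```

The resulting (obj, rows, rhs) triple is an ordinary finite-dimensional linear program of the form (1), which any LP solver can handle.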
In this case X = X′ = C([α, β]), Y = Y′ = C([γ, δ]), X̄ = X̄′ = Rᴺ, Ȳ = Ȳ′ = Rᴹ, and T_X : x(τ) ↦ (x(τ₁), ..., x(τ_N)), T_Y : y(t) ↦ (y(t₁), ..., y(t_M)). The piecewise linear interpolation operators can be chosen as the operators S_X and S_Y. If the exact integral operator, for example

x(τ) ↦ ∫_α^β A(t, τ) x(τ) dτ,

acts in the spaces of continuous functions, then its approximation is the finite-dimensional operator

(x(τ₁), ..., x(τ_N)) ↦ ( Σ_{j=1}^N r_j A(t₁, τ_j) x(τ_j), ..., Σ_{j=1}^N r_j A(t_M, τ_j) x(τ_j) ).
But it is convenient to transfer the coefficients r_j into the new unknown numbers x_j.

The convergence of the sequence of solutions of the linear programming problems approximating the infinite-dimensional problem depends principally on the convergence of the sequence of operators approximating A and A′. The distance between a dual operator and the operator approximating it can be estimated by means of inequalities of the form (10). We will show that these operators are close whenever the initial operator and the operator approximating it are close. Suppose the exact and approximate spaces, as well as their dual spaces, are normed. It is convenient to consider the case when the dual operators are the conjugate operators.

Theorem 8.

‖(A* − T_X* Ā* S_Y*) y*‖ ≤ sup_{‖x‖≤1} ‖(A − S_Y Ā T_X) x‖ · ‖y*‖,

‖(S_X* A* T_Y* − Ā*) ȳ*‖ ≤ sup_{‖x̄‖≤1} ‖(T_Y A S_X − Ā) x̄‖ · ‖ȳ*‖.
Proof. By the definitions of the conjugate operators and of the norm of a linear functional, the equalities

‖(A* − T_X* Ā* S_Y*) y*‖ = sup_{‖x‖≤1} |⟨x, (A* − T_X* Ā* S_Y*) y*⟩| = sup_{‖x‖≤1} |⟨(A − S_Y Ā T_X) x, y*⟩| ≤ sup_{‖x‖≤1} ‖(A − S_Y Ā T_X) x‖ · ‖y*‖

hold. The second statement is proved analogously.

Now we consider a sequence of operators Ā^(n) approximating the operator A. The convergence of the sequence Ā^(n) to A can be defined either as ST-convergence or as TS-convergence (see [9]). It follows from Theorem 8 that the following statement holds for either type of convergence.

Theorem 9. If Ā^(n) → A as n → ∞, then (Ā^(n))* → A* as n → ∞.
The operators A* and (Ā)* were constructed on the basis of two different approximations: the approximation of the operator A by the operator Ā, and a direct approximation of the operator A*; denote the latter by B̄. In the case B̄ ≠ (Ā)* it is necessary to take this fact into account when estimating the distance between B̄ and (Ā)*. The inequality

‖(B̄ − (Ā)*) ȳ*‖ ≤ ‖(B̄ − S_X* A* T_Y*) ȳ*‖ + ‖(S_X* A* T_Y* − (Ā)*) ȳ*‖  (13)

holds. The first summand on the right-hand side reflects the accuracy of the approximation of A* by B̄, and the second depends on the accuracy of the approximation of A by Ā (Theorem 8). The next statement follows from inequality (13).

Theorem 10. If Ā^(n) → A and B̄^(n) → A* as n → ∞, then B̄^(n) − (Ā^(n))* → 0 as n → ∞.
In the case of spaces of continuous functions, estimates of the values m and m̄ from formula (10) can be obtained in a simple form. Let

S_X : x̄ = (x₁, ..., x_N) ↦ x̃(τ) = x_j (τ_{j+1} − τ)/(τ_{j+1} − τ_j) + x_{j+1} (τ − τ_j)/(τ_{j+1} − τ_j),  τ ∈ [τ_j, τ_{j+1}],

and

‖x(τ)‖ = max_{τ∈[α,β]} |x(τ)|,  ‖x̄‖ = max_{j=1..N} |x_j|.

It is easy to see that ‖T_X x(τ)‖ ≤ ‖x(τ)‖ and ‖(S_X x̄)(τ)‖ ≤ ‖x̄‖; moreover, ‖T_X‖ = ‖S_X‖ = 1. The analogous situation takes place for the spaces Y and Y′. We choose the trapezoidal rule in the approximation process. Then

(Ā x̄)_k = (1/2) Σ_{j=1}^{N−1} [x_j A(t_k, τ_j) + x_{j+1} A(t_k, τ_{j+1})] (τ_{j+1} − τ_j),  k = 1..M.
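The sampling and piecewise-linear interpolation operators just described can be checked directly (a sketch with arbitrarily chosen nodes and data): T_X S_X = I holds exactly, while S_X T_X ≠ I for a function that is not piecewise linear.

```python
import math

nodes = [0.0, 0.25, 0.5, 1.0]   # arbitrary partition of [0, 1]

def S(vals):
    """S_X: node values -> piecewise-linear function on [0, 1]."""
    def f(t):
        for j in range(len(nodes) - 1):
            if nodes[j] <= t <= nodes[j + 1]:
                lam = (t - nodes[j]) / (nodes[j + 1] - nodes[j])
                return (1 - lam) * vals[j] + lam * vals[j + 1]
        raise ValueError("t outside the interval")
    return f

T = lambda f: [f(t) for t in nodes]    # T_X: sample at the nodes

vals = [1.0, -2.0, 0.5, 3.0]
assert T(S(vals)) == vals              # T_X S_X = I on the node values

g = math.sin                           # not piecewise linear
err = abs(S(T(g))(0.75) - g(0.75))     # evaluate between nodes
assert err > 1e-3                      # S_X T_X g != g: nontrivial approximation
```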
It is possible to prove that ‖(T_Y A S_X − Ā) x̄‖ ≤ (β − α) ω(A(t, τ), δ) ‖x̄‖, where

δ = max[ max_{j=1..N−1} (τ_{j+1} − τ_j), max_{k=1..M−1} (t_{k+1} − t_k) ]

and ω(A(t, τ), δ) is the modulus of continuity. In this case m̄ = m̄(N) = (β − α) ω(A(t, τ), δ) → 0 as N → ∞. Therefore Ā^(N) → A as N → ∞, and (Ā^(N))* → A* as N → ∞. This means that Theorem 9 is trivial in this case.

6. Noether Operators in Linear Spaces
We discuss briefly a question connected with the possibility of using the structure of the feasible domain when constructing effective methods for solving linear programming problems. In the classical case the feasible domain is a convex polyhedral set in a finite-dimensional vector space. In the case of canonical problems, the sets of elements satisfying the equation Ax = y′₀ and the equation A′y = x′₀ can be described easily. An analogous situation can take place for infinite-dimensional problems.

Let X, X′ and Y, Y′ be dual pairs of linear spaces. We say that a linear operator K : X → Y′ is finite-dimensional if there exist linearly independent elements x′₁, ..., x′_n in the space X′ and linearly independent elements y′₁, ..., y′_n in the space Y′ such that

K : x ↦ Σ_{j=1}^n ⟨x, x′_j⟩ y′_j  for all x ∈ X.

It is easy to prove that every finite-dimensional linear operator has a dual operator. Let Y′ = X and X′ = Y. Denote by

T : x ↦ Σ_{j=1}^n ⟨x, x′_j⟩ x_j,  T′ : x′ ↦ Σ_{j=1}^n ⟨x_j, x′⟩ x′_j

the finite-dimensional linear operators; they form a dual pair. If A = I − T, then the solution of the equation x − Tx = x⁰ has the form

x = x⁰ + Σ_{j=1}^n a_j x_j,

where the numbers a_j satisfy the system of equations

a_k − Σ_{j=1}^n a_j ⟨x_j, x′_k⟩ = ⟨x⁰, x′_k⟩,  k = 1..n.
The dual equation x′ − T′x′ = x′⁰ has the analogous property. Therefore, in this case the linear programming problems can be transformed into finite-dimensional problems. The mutually dual Fredholm operators I − T and I − T′ are a particular case of Noether linear operators in abstract linear spaces (see, for example, [10], [11]). If Noether operators define the feasible domain in canonical linear programming problems, then the structure of this domain can likewise be described relatively easily.
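The finite-rank reduction just described is easy to carry out numerically. A sketch for n = 2 in R³ with the dot product as bilinear form (all vectors chosen arbitrarily): solve the n × n system for the coefficients a_j, form x = x⁰ + Σ a_j x_j, and verify that x − Tx = x⁰.

```python
# For T x = sum_j <x, x'_j> x_j, the equation x - Tx = x0 reduces to an
# n x n linear system for the coefficients a_j.
dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

xs  = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]   # x_1, x_2 in R^3 (arbitrary)
xps = [[0.2, 0.1, 0.0], [0.0, 0.3, 0.1]]   # x'_1, x'_2 (arbitrary)
x0  = [1.0, 2.0, 3.0]

n = len(xs)
# System: a_k - sum_j a_j <x_j, x'_k> = <x0, x'_k>,  k = 1..n
G = [[(1.0 if k == j else 0.0) - dot(xs[j], xps[k]) for j in range(n)]
     for k in range(n)]
rhs = [dot(x0, xps[k]) for k in range(n)]

# Solve the 2x2 system by Cramer's rule
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
a = [(rhs[0] * G[1][1] - G[0][1] * rhs[1]) / det,
     (G[0][0] * rhs[1] - rhs[0] * G[1][0]) / det]

# Reassemble x and check x - Tx = x0
x = [x0[i] + sum(a[j] * xs[j][i] for j in range(n)) for i in range(3)]
Tx = [sum(dot(x, xps[j]) * xs[j][i] for j in range(n)) for i in range(3)]
residual = max(abs(x[i] - Tx[i] - x0[i]) for i in range(3))
print(residual)  # essentially zero: x solves x - Tx = x0
```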
References

[1] Anderson, E.J.; Nash, P. Linear Programming in Infinite-Dimensional Spaces: Theory and Applications; John Wiley & Sons: Chichester, 1987.

[2] Anderson, E.J.; Lewis, A.S. An extension of the simplex algorithm for semi-infinite linear programming. Math. Programming 1989, vol. 44, 247-269.

[3] Cannon, J.R. The numerical solution of the Dirichlet problem for Laplace's equation by linear programming. J. Soc. Industr. and Appl. Math. 1964, vol. 12, no. 1, 233-237.

[4] Goberna, M.A.; López, M.A. Linear Semi-Infinite Optimization; John Wiley & Sons: Chichester, 1998.

[5] Kantorovich, L.V. On the translocation of masses. Doklady Akad. Nauk SSSR 1942, vol. 37, no. 7-8, 227-229.

[6] Kantorovich, L.V. New approach to numerical methods and observation processing. Sibirskii Mat. Zhurnal 1962, vol. 3, no. 5, 11-17.

[7] Mukhutdinov, R.Sh.; Pleshchinskii, N.B. Approximation of dual spaces and dual operators. Russian Mathematics (Izvestiya VUZ. Matematika) 2008, vol. 52, no. 10, 25-31.

[8] Nash, P. Algebraic fundamentals of linear programming. In Infinite Programming, Proceedings; Anderson, E.J.; Philpott, A.B., Eds.; Springer-Verlag: Berlin, 1985, pp. 37-52.

[9] Pleshchinskii, N.B. On the abstract theory of approximate methods for solving linear operator equations. Russian Mathematics (Izvestiya VUZ. Matematika) 2000, vol. 44, no. 3, 39-47.

[10] Pleshchinskii, N.B. Duality theory and Noether operators. Preprint PMF0901; Kazan Math. Soc.: Kazan, 2009, 30 pp.

[11] Pleshchinskii, N.B. Duality theory and Noether operators. Proc. of the Int. Conf. "Integral Equations - 2010", 25-27 Aug. 2010; PAIS: Lviv (Ukraine), 2010, pp. 112-116.

[12] Romeijn, H.E.; Smith, R.L.; Bean, J.C. Duality in infinite dimensional linear programming. Math. Programming 1992, vol. 53, 79-97.

[13] Vershik, A.M. Some remarks on the infinite-dimensional problems of linear programming. Russian Math. Surveys (Uspekhi Mat. Nauk) 1970, vol. 25, no. 5, 117-124.
PART II. APPLICATIONS IN MATHEMATICS
In: Linear Programming Editor: Zoltan Adam Mann
ISBN 9781612095790 ©2012 Nova Science Publishers, Inc.
Chapter 4
A POLYNOMIAL-TIME APPROXIMATION ALGORITHM FOR MAXIMUM CONCURRENT FLOW PROBLEMS

Suh-Wen Chiou
Department of Information Management, National Dong Hwa University, Da-hsueh Rd., Shou-Feng, Hualien, Taiwan
ABSTRACT

The multicommodity flow problem involves simultaneously shipping several different commodities from their respective sources to their sinks in a directed network, with the total amount of flow going through an edge limited by its capacity. One of the multicommodity flow problems is the maximum concurrent flow problem: the problem of finding a concurrent flow that maximizes the fraction of the demand supplied between each source-sink pair while the flow on every edge remains within its capacity. The dual of the maximum concurrent flow problem is to find a concurrent flow that minimizes the maximum congestion of the capacitated network. For any positive ε, the ε-optimal concurrent flow problem is to find a solution whose congestion value is no more than (1 + ε) times the minimum congestion. In recent years, a few fast combinatorial approximation algorithms for the ε-optimal concurrent flow problem have been presented. In this chapter we propose a new variant of the combinatorial approximation algorithm (CACF) and implement several variants of the combinatorial approximation algorithm for the ε-optimal concurrent flow problem on a large set of test networks. Numerical comparisons are also made between the results obtained by the variants of the combinatorial approximation algorithm and exact solutions obtained with the linear programming package CPLEX.
Keywords: linear programming; maximum concurrent flow problem; combinatorial algorithms; minimum-cost flows; CPLEX software
Email: [email protected]
1. INTRODUCTION

A multicommodity flow problem is to find a feasible flow that satisfies the demand from each source to the corresponding sink in a directed network while the flow on each edge remains within its capacity. One of the multicommodity flow problems is the maximum concurrent flow problem proposed by Shahrokhi and Matula [18], in which every commodity pair can send and receive flow concurrently. The ratio of the flow supplied between a commodity pair to the demand of that pair is defined as the throughput. The objective of the maximum concurrent flow problem is to find the maximum throughput over all commodity pairs subject to the capacity constraints on the edges: flow is assigned to the routes of the network so that the throughput is the same for all commodity pairs and is maximized. The maximum concurrent flow problem is thus an optimization version of the multicommodity flow problem in which the objective is to find the maximal fraction z such that at least a fraction z of each demand can be routed while every edge capacity constraint is satisfied. The dual of the maximum concurrent flow problem is to find a concurrent flow that minimizes the maximum congestion of the capacitated network. For any positive ε, the ε-optimal concurrent flow problem is to find a solution whose congestion value is no more than (1 + ε) times the minimum congestion. Shahrokhi and Matula presented a fully polynomial combinatorial approximation scheme for the maximum concurrent flow problem with uniform capacities and arbitrary demands. Klein, Plotkin, Stein and Tardos [9] gave a fast approximation algorithm for the ε-optimal concurrent flow problem with uniform capacities. Throughout this chapter we use n, m and k to denote the number of nodes, edges and commodities.

In relation to solving the concurrent flow problem, linear programming provides one feasible approach. The concurrent flow problem can be modeled as a linear program with O(mk) variables and O(nk + m) constraints. Vaidya [20] presented an interior-point linear programming method for concurrent flow problems which, using the fast matrix-inversion technique of [8], gave a time bound of O(k^3.5 n^3 m^0.5 log(nB)) for the unit-capacity concurrent flow problem with integer demands, where B denotes the sum of the demands. For the ε-optimal concurrent flow problem, Vaidya's algorithm yields an O(k^2.5 n^2 m^0.5 log(nB)) time bound. However, as far as the computational time for solving the concurrent flow problem is concerned, as indicated by Norton, Plotkin and Tardos [14], the interior-point based linear programming method is of a rather impractically high degree.

Shahrokhi and Matula [18] were the first to present a combinatorial strongly polynomial approximation algorithm for the ε-optimal concurrent flow problem with uniform capacities. They introduced an exponential edge length which relates each edge to a measure of the congestion on that edge. By iteratively rerouting flow from the paths with higher congestion to those less congested, the maximum congestion of the network can be greatly reduced. Klein et al. [9] in turn presented a faster algorithm, REDUCE, for the uniform concurrent flow problem based on the framework of Shahrokhi and Matula but using a different exponential length function. Subsequently,
Leighton, Makedon, Plotkin, Stein, Tardos and Tragoudas [10] proposed a polynomial-time algorithm, DECONGEST, for the ε-optimal concurrent flow problem with arbitrary demands and capacities, which generalizes the work of Klein et al. [9]. In Leighton's paper, given the precision value ε, the running time of the deterministic version is O(k²ε⁻² log(nε⁻¹) + k² log n log k) single-commodity minimum-cost flow computations, and the running time of the randomized version is O(kε⁻³ log(nε⁻¹) + k log n log k) single-commodity minimum-cost flow computations. Leong, Shor and Stein [11] implemented Leighton's algorithm on medium-sized networks to solve the ε-optimal concurrent flow problem. Comparisons of the effectiveness of Leighton's method against other conventional algorithms were also made, which strongly suggested that the combinatorial approximation approach is highly competitive with linear programs based on the interior-point method and the simplex method. By repeatedly finding a 'bad' commodity flow and improving it, Leighton's algorithm uses a single-commodity minimum-cost flow computation per iteration. In the deterministic version of Leighton's algorithm, the commodities are examined one by one in turn until a bad one is found, and in the worst case Ω(k) commodities are examined before the bad one is found. In the randomized version, the commodity is examined randomly according to some distribution, and the expected number of commodities checked before a bad one is found is O(1); thus the randomized algorithm is faster than the deterministic one by a factor of k. Goldberg [4] gave a natural randomization strategy which greatly simplified the randomized algorithm of Leighton and decreased the running time by a factor of 1/ε, to O(kε⁻² log(nε⁻¹) + k log n log k) single-commodity minimum-cost flow computations. Furthermore, Radzik [16] proposed a fast deterministic approximation algorithm, IMPROVE, which revises the deterministic version of Leighton to O(kε⁻² log(nε⁻¹) + k log n log k) single-commodity minimum-cost flow computations, so that the tighter bound for the concurrent flow approximation can be achieved deterministically.

Regarding the combinatorial approximation algorithms for solving the ε-optimal concurrent flow problem, the way of determining the optimal fraction of each commodity flow has not been fully considered in current algorithms (Leighton et al. [10] and Radzik [16]). In the two well-known combinatorial approximation algorithm variants, DECONGEST and IMPROVE, only a given formula with predetermined values of the fraction of each commodity flow is used when rerouting flow via the convex combination of the minimum-cost flow and the 'bad' commodity flow. In this chapter, a new variant, CACF, is proposed for the combinatorial approximation of the ε-optimal concurrent flow problem, for which a tighter computation bound on the decrease of the congestion value and of the potential objective function is derived. Using the first-order derivative of the performance function with respect to the fraction of the commodity flow, the
bisection method is used to determine the optimal value of the fraction with reasonable computational effort. Following the work of Leighton and Radzik, the proposed combinatorial approximation algorithm CACF yields a tighter computation bound on the decrease of the congestion value and of the potential objective function. Empirical studies on a wide range of test networks are conducted for the three combinatorial approximation methods DECONGEST, IMPROVE and CACF. Numerical comparisons of computational efficiency for solving large-scale optimal concurrent flow problems are made between the variants of the combinatorial approximation algorithm and the linear programming based computation package CPLEX. Furthermore, variants of the minimum-cost flow implementation of the combinatorial approximation algorithm are also discussed. The remainder of this chapter is organized as follows. In Section 2, the preliminaries and the definitions of concurrent flows are given, and the corresponding maximum concurrent flow and minimum congestion problems are formulated. In Section 3, the approximate optimality conditions for the optimal concurrent flow and the exponential length functions, which associate each edge with its congestion, are summarized. In Section 4, the new variant of the combinatorial approximation algorithm for the optimal concurrent flow problem (CACF) is proposed, and an analysis of the CACF variant is given, in which a tighter computation bound than previous ones on the decrease of the approximate congestion value and the corresponding potential function is presented. In Section 5, numerical computations for the variants of the combinatorial approximation algorithm (DECONGEST, IMPROVE, CACF and fixed-value) are conducted on a wide range of general and grid graphs with thousands of nodes and edges. The numerical test results are also compared with those obtained from the linear programming package CPLEX optimizers. Conclusions and discussions of solving the optimal concurrent flow problem and its implications are given in Section 6.
2. PRELIMINARIES AND PROBLEM FORMULATION

Let G = (V, E) denote a directed network, where V and E respectively represent the set of nodes and the set of edges. An input instance includes a capacity function u : E → ℝ⁺ on the edges, where ℝ⁺ denotes the nonnegative real numbers. The specification of each commodity i, 1 ≤ i ≤ k, contains the source s_i, the destination t_i and the nonnegative demand d_i. Each commodity flow f_i, where f_i : E → ℝ⁺, must satisfy flow conservation at every node, which can be described as follows. For each node v in V,

$$\sum_{(w,v)\in E} f_i(w,v) - \sum_{(v,w)\in E} f_i(v,w) = \begin{cases} -d_i & \text{if } v = s_i,\\ d_i & \text{if } v = t_i,\\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$

For each edge e, let f(e) = \sum_{1\le i\le k} f_i(e); the capacity constraint must be satisfied:

$$f(e) \le u(e) \qquad (2)$$

Let f(p) be the flow on path p. The relationship between f(e) and f(p) can be described as follows:

$$f(e) = \sum_{i=1}^{k} \sum_{p\in P_i} f(p)\,\delta_p^e \qquad (3)$$

where P_i is the set of paths that ship commodity i from the source s_i to the sink t_i, and \delta_p^e denotes the incidence factor: \delta_p^e = 1 if the edge e is contained in path p, and \delta_p^e = 0
otherwise. Let z be the throughput for the concurrent flow problem. The maximum concurrent flow problem can be formulated as follows:

$$\begin{array}{llr} \text{Max} & z & (4)\\ \text{subject to} & \displaystyle\sum_{p\in P_i} f(p) \ge z\,d_i, & \forall i,\ 1\le i\le k\\ & \displaystyle\sum_{i=1}^{k}\sum_{p\in P_i} f(p)\,\delta_p^e \le u(e), & \forall e\in E\\ & f(e) \ge 0, & \forall e\in E \end{array}$$

where $f(e) = \sum_{i=1}^{k}\sum_{p\in P_i} f(p)\,\delta_p^e$.
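Before turning to the dual, the definitions (1)-(3) translate directly into a short computational check. The following sketch is illustrative only: the toy network and helper names are ours, not part of the algorithms discussed in this chapter.

```python
# Toy directed network: s -> a -> t and s -> t, one commodity with demand d = 3.
edges = [("s", "a"), ("a", "t"), ("s", "t")]
flow = {("s", "a"): 2.0, ("a", "t"): 2.0, ("s", "t"): 1.0}

def net_inflow(v, flow):
    # Left-hand side of (1): sum of f_i(w, v) minus sum of f_i(v, w).
    return (sum(f for (w, x), f in flow.items() if x == v)
            - sum(f for (x, w), f in flow.items() if x == v))

# Conservation (1): -d at the source, +d at the sink, 0 elsewhere.
d = 3.0
assert net_inflow("s", flow) == -d
assert net_inflow("t", flow) == d
assert net_inflow("a", flow) == 0.0

# Edge flow from path flows, equation (3): f(e) = sum_p f(p) * delta_p^e.
paths = {("s", "a", "t"): 2.0, ("s", "t"): 1.0}   # path -> f(p)
def edge_flow(e, paths):
    def delta(p, e):                 # incidence factor delta_p^e
        return 1 if e in zip(p, p[1:]) else 0
    return sum(fp * delta(p, e) for p, fp in paths.items())

assert all(abs(edge_flow(e, paths) - flow[e]) < 1e-12 for e in edges)
```

The path-based flow of the two paths recombines exactly into the edge flows, as (3) states.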
Since the concurrent flow problem is an optimization problem in which the fraction z is maximized such that the flow with demands z d_i is feasible, the dual problem is, in other words, to find the minimum congestion λ such that the flow with demands d_i is feasible when the capacities are λu(e), e ∈ E. The dual problem for the maximum concurrent flow problem can be formulated below:

$$\begin{array}{llr} \text{Min} & \lambda & (5)\\ \text{subject to} & \displaystyle\sum_{i=1}^{k}\sum_{p\in P_i} f(p)\,\delta_p^e \le \lambda\,u(e), & \forall e\in E\\ & \displaystyle\sum_{p\in P_i} f(p) \ge d_i, & \forall i,\ 1\le i\le k\\ & f(e) \ge 0, & \forall e\in E \end{array}$$

where $f(e) = \sum_{i=1}^{k}\sum_{p\in P_i} f(p)\,\delta_p^e$.
Let λ(e) = f(e)/u(e) and λ = max_{e∈E} λ(e). Let λ* be the optimal value of λ, which solves the problem (5). In this chapter, we focus on the optimal solution of the concurrent flow problem, where a concurrent flow is called optimal if λ ≤ (1+ε)λ*. Let l be a nonnegative length function defined on the edges, l : E → ℝ⁺, and let C_i be the cost of the concurrent flow for commodity i with congestion λ, which is defined as

$$C_i(\lambda) = \sum_{e\in E} f_i(e)\,l(e) \qquad (6)$$

and the potential function for the optimal concurrent flows is defined as

$$\Phi = \sum_{e\in E} l(e)\,u(e) \qquad (7)$$

For a commodity i, let C_i^*(λ) be the value of a minimum-cost flow f_i^* satisfying the demand constraint and the edge constraints. Thus we have

$$\sum_{i=1}^{k} C_i^{*}(\lambda) \le \sum_{i=1}^{k} C_i(\lambda) \le \lambda \sum_{e\in E} l(e)\,u(e) \qquad (8)$$
3. APPROXIMATE OPTIMALITY CONDITIONS AND EXPONENTIAL LENGTH FUNCTIONS

Given a parameter α, a length function l : E → ℝ⁺ can be defined as an exponential function of the congestion, as expressed below:

$$l(e) = \frac{\exp(\alpha\,\lambda(e))}{u(e)} \qquad (9)$$

where exp(·) denotes the exponential function. With respect to Leighton et al. [10] and Radzik [16], a new variant of the combinatorial approximation algorithm for the optimal concurrent flow problem (CACF) is proposed. In algorithm CACF, the length function takes the form of equation (9), in which the value of α is specified by

$$\alpha = \frac{5(1+\varepsilon)}{\varepsilon\,\lambda}\,\ln\Big(\frac{m}{\varepsilon}\Big) \qquad (10)$$
In CACF, a tighter computation bound on the decrease of the value of the congestion and of the potential function is achieved. Following the work of Radzik [16, theorem 3, lemmas 4-5], the approximate optimality condition is revised as follows.

Theorem 1. In algorithm CACF, let λ* be the optimal congestion and let ε ≤ 1/5. If a concurrent flow f with congestion λ_f and a length function l are such that the following inequalities hold:

$$\lambda_f \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda \qquad (11)$$

$$\sum_{i=1}^{k} C_i^{*}(\lambda) \ge \Big(1-\frac{2\varepsilon}{3}\Big)\lambda\,\Phi \qquad (12)$$

then flow f is optimal.

Proof. Let f_opt be an optimal concurrent flow, which is feasible within the capacities λ*u. By inequality (8), we have

$$\sum_{i=1}^{k} C_i^{*}(\lambda) \le \sum_{e\in E} f_{\mathrm{opt}}(e)\,l(e) \le \lambda^{*}\,\Phi$$

Combined with inequality (12), this implies

$$\Big(1-\frac{2\varepsilon}{3}\Big)\lambda \le \lambda^{*}$$

Then, by inequality (11), we have

$$\lambda_f \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda \le \frac{1+\varepsilon/5}{1-2\varepsilon/3}\,\lambda^{*} \le (1+\varepsilon)\,\lambda^{*}$$

where the last inequality holds precisely because ε ≤ 1/5. Q.E.D.

Lemma 2. In algorithm CACF, let ε ≤ 1 and m ≥ 3. If f is a concurrent flow with congestion λ_f
such that

$$\Big(1-\frac{\varepsilon}{5}\Big)\lambda \le \lambda_f \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda \qquad (13)$$

then

$$(1-\varepsilon)\,\lambda\,\Phi \le \sum_{i=1}^{k} C_i(\lambda_f) \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda\,\Phi \qquad (14)$$

Proof. By inequalities (8) and (13), the second inequality of (14) follows directly, because

$$\sum_{i=1}^{k} C_i(\lambda_f) = \sum_{e\in E} f(e)\,l(e) \le \lambda_f\,\Phi \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda\,\Phi \qquad (15)$$

To prove the first inequality of (14), the following inequality needs to hold for each edge e:

$$\lambda\,l(e)\,u(e) \le \frac{f(e)\,l(e)}{1-\varepsilon/2} + \frac{\varepsilon}{2m}\,\lambda\,\Phi \qquad (16)$$

If λ(e) ≥ (1−ε/2)λ, then f(e)l(e)/(1−ε/2) = λ(e)l(e)u(e)/(1−ε/2) ≥ λ l(e)u(e), and therefore the inequality (16) holds. Otherwise, if λ(e) < (1−ε/2)λ, then f(e)l(e)/(1−ε/2) < λ l(e)u(e); however, in this case we find that (ε/(2m))λΦ ≥ λ l(e)u(e), and thus the inequality (16) still holds. Indeed,

$$l(e)\,u(e) = \exp(\alpha\lambda(e)) < \exp\big(\alpha(1-\tfrac{\varepsilon}{2})\lambda\big) = \exp(\alpha\lambda_f)\,\exp\big(\alpha\big((1-\tfrac{\varepsilon}{2})\lambda-\lambda_f\big)\big) \le \exp(\alpha\lambda_f)\,\exp\Big(-\frac{3\varepsilon\alpha\lambda}{10}\Big) \qquad (17)$$

where the last step uses λ_f ≥ (1−ε/5)λ from inequality (13). By the choice of α in (10), exp(−3εαλ/10) = (ε/m)^{3(1+ε)/2}, which is at most ε/(2m) when 0 < ε ≤ 1 and m ≥ 3; since Φ ≥ max_{e∈E} l(e)u(e) = exp(αλ_f), the inequality (17) becomes

$$l(e)\,u(e) \le \frac{\varepsilon}{2m}\,\Phi$$

Summing up (16) over all edges, we obtain

$$\lambda\,\Phi \le \frac{1}{1-\varepsilon/2}\sum_{e\in E} f(e)\,l(e) + \frac{\varepsilon}{2}\,\lambda\,\Phi = \frac{1}{1-\varepsilon/2}\sum_{i=1}^{k} C_i(\lambda_f) + \frac{\varepsilon}{2}\,\lambda\,\Phi$$

and hence

$$(1-\varepsilon)\,\lambda\,\Phi \le \Big(1-\frac{\varepsilon}{2}\Big)^{2}\lambda\,\Phi \le \sum_{i=1}^{k} C_i(\lambda_f)$$
which completes the first inequality of (14). Q.E.D.

Lemma 3. In algorithm CACF, let ε ≤ 1. If concurrent flows f and g are such that λ_f ≤ λ and Φ_g ≤ Φ_f, then λ_g ≤ (1+ε/5)λ.

Proof. Suppose, to the contrary, that λ_f ≤ λ and λ_g > (1+ε/5)λ. Then

$$\Phi_g = \sum_{e\in E} l_g(e)\,u(e) \ge \exp(\alpha\lambda_g) > \exp\big(\alpha(1+\tfrac{\varepsilon}{5})\lambda\big) = \exp(\alpha\lambda)\,\exp\Big(\frac{\alpha\varepsilon\lambda}{5}\Big) = \Big(\frac{m}{\varepsilon}\Big)^{1+\varepsilon}\exp(\alpha\lambda)$$

Since λ_f ≤ λ, we have Φ_f ≤ m·exp(αλ_f) ≤ m·exp(αλ), and therefore

$$\Phi_g > \frac{m}{\varepsilon}\,\exp(\alpha\lambda) \ge \frac{1}{\varepsilon}\,\Phi_f \ge \Phi_f$$

which contradicts Φ_g ≤ Φ_f when λ_f ≤ λ. Thus λ_g ≤ (1+ε/5)λ. Q.E.D.
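The decisive step in the proof of Lemma 3 — that raising the congestion by ελ/5 inflates the largest length term by the factor exp(αελ/5) = (m/ε)^{1+ε} ≥ m/ε — can be checked numerically, assuming the reconstructed form of α in (10):

```python
import math

eps, lam, m = 0.1, 2.0, 50
alpha = 5.0 * (1.0 + eps) / (eps * lam) * math.log(m / eps)   # reconstructed (10)
growth = math.exp(alpha * eps * lam / 5.0)

# Identity exp(alpha*eps*lam/5) == (m/eps)^(1+eps), and the factor m/eps that
# forces the contradiction Phi_g > Phi_f in Lemma 3.
assert abs(growth - (m / eps) ** (1.0 + eps)) < 1e-6 * growth
assert growth >= m / eps
```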
4. ANALYSIS OF THE COMBINATORIAL APPROXIMATION ALGORITHM CACF

To solve the optimal concurrent flow problem, a new variant of the combinatorial approximation algorithm, CACF, is proposed, by which a near-optimal concurrent flow can be found for problem (5).
As given in the pseudocode in Figure 1, an initial concurrent flow f is supplied together with two parameters, ε and the search threshold, which are set as thresholds for improving the congestion value λ and for searching for the optimal value of the fraction σ. The value of ε is initially set to 1 and is halved between the outer-loop phases whenever the potential function value Φ fails to keep decreasing by a factor of ε²/12.

For each outer loop with a fixed value of ε in algorithm CACF, each commodity flow f_i is improved by conducting the rerouting process, via finding the minimum-cost flow f_i* and the optimal search for the fraction value σ. In the following theorems, we derive a tighter computation bound on the decrease of the approximate congestion value λ and the corresponding potential function value Φ, which substantially improves the work presented by [10] and [16].

Theorem 4. If ε ≤ 1, and f and g are the concurrent flows at the beginning and at the end of the computation of algorithm CACF, then, following the results of Lemma 3, either λ_g ≤ (1−ε/5)λ_f or the concurrent flow g is optimal.

Proof. It only needs to be proved that flow g is optimal when λ_g > (1−ε/5)λ_f. Clearly, from the results of Lemma 3, if λ_g > (1−ε/5)λ_f, then

$$\Big(1-\frac{\varepsilon}{5}\Big)\lambda_f < \lambda_g \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda_f$$

so the approximate optimality condition of Theorem 1 applies, which shows that the concurrent flow g is optimal. Q.E.D.

Theorem 5. Let ε ≤ 1. At each iteration i, i = 1,…,k, of algorithm CACF, we have Φ^{(i+1)} ≤ Φ^{(i)} and λ_f^{(i+1)} ≤ (1+ε/5)λ. If the flow of commodity i is changed during this iteration, then

$$\Phi^{(i)} - \Phi^{(i+1)} \ge \frac{\alpha\sigma}{12}\,\big(C_i - C_i^{*}\big) \qquad (18)$$

Proof.
Figure 1. Algorithm CACF.

Suppose the flow of commodity i is changed during this iteration, and let the fraction σ be chosen such that ασ·max{λ_{f_i}, λ_{f_i*}} ≤ ε/6. It implies
$$l^{(i+1)}(e)\,u(e) = \exp\Big(\frac{\alpha f^{(i+1)}(e)}{u(e)}\Big) = \exp\Big(\frac{\alpha\big(f^{(i)}(e)+\sigma(f_i^{*}(e)-f_i(e))\big)}{u(e)}\Big) = \exp\Big(\frac{\alpha f^{(i)}(e)}{u(e)}\Big)\exp\Big(\frac{\alpha\sigma\big(f_i^{*}(e)-f_i(e)\big)}{u(e)}\Big) \qquad (19)$$

Approximating exp(x) by the second-order Taylor expansion, we have

$$\exp(x) \le 1 + x + \frac{5}{4}\,x^{2}, \qquad |x| \le 1 \qquad (20)$$

Let x = ασ(f_i^{*}(e) − f_i(e))/u(e); since ασ·max{λ_{f_i}, λ_{f_i*}} ≤ ε/6, we have |x| ≤ ε/6, and the inequality (20) becomes

$$\exp(x) \le 1 + x + \frac{\varepsilon}{4}\,|x| \qquad (21)$$

Applying the inequality (21) to the second factor of (19), we obtain

$$l^{(i+1)}(e)\,u(e) \le l^{(i)}(e)\,u(e) + \alpha\sigma\,l^{(i)}(e)\big(f_i^{*}(e)-f_i(e)\big) + \frac{\varepsilon\alpha\sigma}{4}\,l^{(i)}(e)\,\big|f_i^{*}(e)-f_i(e)\big| \qquad (22)$$

Summing up (22) over each edge, and because |f_i^{*}(e) − f_i(e)| ≤ f_i^{*}(e) + f_i(e), we thus have

$$\Phi^{(i+1)} \le \Phi^{(i)} + \alpha\sigma\big(C_i^{*}-C_i\big) + \frac{\varepsilon\alpha\sigma}{4}\big(C_i^{*}+C_i\big) \qquad (23)$$

Since by definition C_i^{*} ≤ C_i, the inequality (23) becomes

$$\Phi^{(i+1)} \le \Phi^{(i)} + \alpha\sigma\big(C_i^{*}-C_i\big) + \frac{\varepsilon\alpha\sigma}{2}\,C_i \qquad (24)$$

For a changed commodity flow i, it is required that (see [10], lemma 4.3)

$$C_i - C_i^{*} \ge \varepsilon\,C_i, \qquad\text{thus}\qquad \frac{\varepsilon}{2}\,C_i \le \frac{C_i - C_i^{*}}{2}$$

The inequality (24) can therefore be rewritten as

$$\Phi^{(i+1)} \le \Phi^{(i)} - \alpha\sigma\Big(\big(C_i - C_i^{*}\big) - \frac{C_i - C_i^{*}}{2}\Big) = \Phi^{(i)} - \frac{\alpha\sigma}{2}\big(C_i - C_i^{*}\big) \le \Phi^{(i)} - \frac{\alpha\sigma}{12}\big(C_i - C_i^{*}\big)$$

Therefore, we obtain Φ^{(i)} − Φ^{(i+1)} ≥ (ασ/12)(C_i − C_i^{*}). Q.E.D.
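The Taylor bound (20) used above, reconstructed here as exp(x) ≤ 1 + x + (5/4)x² for |x| ≤ 1, can be sanity-checked on a grid:

```python
import math

# Check exp(x) <= 1 + x + (5/4) x^2 on a fine grid of |x| <= 1 (the small
# additive slack only absorbs floating-point rounding at x = 0).
ok = all(math.exp(x) <= 1.0 + x + 1.25 * x * x + 1e-12
         for x in (i / 1000.0 for i in range(-1000, 1001)))
assert ok
```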
Theorem 6. Following the results of Theorem 5, it implies

$$\Phi^{(1)} - \Phi^{(k+1)} \ge \frac{\varepsilon^{2}}{12}\,\Phi^{(1)} \qquad (25)$$

Proof. Following the results of Theorem 5, at each iteration i, i = 1,…,k, we have

$$\Phi^{(i)} - \Phi^{(i+1)} \ge \frac{\alpha\sigma}{12}\,\big(C_i - C_i^{*}\big) \qquad (26)$$

For a changed commodity i, it is required that C_i − C_i^{*} ≥ εC_i, so the inequality (26) becomes

$$\Phi^{(i)} - \Phi^{(i+1)} \ge \frac{\varepsilon\alpha\sigma}{12}\,C_i \qquad (27)$$

Summing up the inequality (27) over each i, i = 1,…,k, we obtain

$$\Phi^{(1)} - \Phi^{(k+1)} \ge \frac{\varepsilon\alpha\sigma}{12}\sum_{i=1}^{k} C_i \ge \frac{\varepsilon^{2}}{12}\,\Phi^{(1)}$$

where the last inequality follows from Lemma 2 and the choice of α and σ in algorithm CACF. Q.E.D.

Theorem 7. Let ε ≤ 1. There are at most O(ε⁻² log n) iterations of algorithm CACF.
Proof. Suppose there are at least N iterations that decrease the value of λ_f, with N = Ω(ε⁻² log n), specified as N = 60 ε⁻² ln(m/ε). Let f and g be the concurrent flows initially and after the N-th iteration. By Theorem 6, the value of Φ_f is reduced in each iteration at least by the factor 1 − ε²/12, that is,

$$\Phi_g \le \Big(1-\frac{\varepsilon^{2}}{12}\Big)^{N}\,\Phi_f = \Big(1-\frac{\varepsilon^{2}}{12}\Big)^{60\varepsilon^{-2}\ln(m/\varepsilon)}\,\Phi_f \qquad (28)$$

Since (1−x)^{1/x} ≤ e^{−1} for 0 < x ≤ 1, the inequality (28) becomes

$$\Phi_g \le \exp\Big(-5\ln\frac{m}{\varepsilon}\Big)\,\Phi_f = \Big(\frac{\varepsilon}{m}\Big)^{5}\,\Phi_f \qquad (29)$$

Since the N-th iteration is not the last iteration decreasing the value of λ_f, by Theorem 4 we have λ_g > (1−ε/5)λ_f, and thus

$$\Phi_g = \sum_{e\in E} l_g(e)\,u(e) \ge \exp(\alpha\lambda_g) \ge \exp\big(\alpha(1-\tfrac{\varepsilon}{5})\lambda_f\big) = \exp(\alpha\lambda_f)\,\exp\Big(-\frac{\alpha\varepsilon\lambda_f}{5}\Big) \ge \Big(\frac{\varepsilon}{m}\Big)^{2}\exp(\alpha\lambda_f) \ge \frac{\varepsilon^{2}}{m^{3}}\,\Phi_f > \Big(\frac{\varepsilon}{m}\Big)^{5}\,\Phi_f$$

where the choice of α in (10) is used in the second-to-last inequality, and Φ_f ≤ m·exp(αλ_f) in the last but one,
which contradicts inequality (29). Therefore, for ε ≤ 1, there are at most O(ε⁻² log n) iterations of algorithm CACF. Q.E.D.

Lemma 8. Let ε ≤ 1, and consider one iteration of algorithm CACF. For indices i and j such that 1 ≤ i ≤ j ≤ k+1, we have, for each edge e,

$$l^{(j)}(e) \le \Big(1+\frac{\varepsilon}{5}\Big)\,l^{(i)}(e) \qquad (30)$$

Proof. Since, for each edge e,

$$l^{(j)}(e)\,u(e) = \exp\Big(\frac{\alpha f^{(j)}(e)}{u(e)}\Big) = \exp\Big(\frac{\alpha f^{(i)}(e)}{u(e)}\Big)\exp\Big(\frac{\alpha\big(f^{(j)}(e)-f^{(i)}(e)\big)}{u(e)}\Big) \qquad (31)$$

and because the rerouting steps of one iteration change the flow on any edge by at most an amount with α(f^{(j)}(e) − f^{(i)}(e))/u(e) ≤ ε/6, the inequality (31) becomes

$$l^{(j)}(e)\,u(e) \le l^{(i)}(e)\,u(e)\,\exp\Big(\frac{\varepsilon}{6}\Big) \qquad (32)$$

Applying the Taylor series of exp(x), we have exp(x) ≤ 1 + x + x² for 0 ≤ x ≤ 1; thus the inequality (32) can be rewritten as

$$l^{(j)}(e)\,u(e) \le l^{(i)}(e)\,u(e)\Big(1+\frac{\varepsilon}{6}+\frac{\varepsilon^{2}}{36}\Big) \le l^{(i)}(e)\,u(e)\Big(1+\frac{\varepsilon}{5}\Big)$$
Q.E.D.

Theorem 9. Let ε ≤ 1. If, at the end of one iteration of algorithm CACF,

$$\lambda_f \le \Big(1+\frac{\varepsilon}{5}\Big)\lambda \qquad\text{and}\qquad \Phi^{(1)} - \Phi^{(k+1)} \le \frac{\varepsilon^{2}}{12}\,\Phi^{(1)}$$

then

$$C^{*}\big(\lambda;\,l^{(k+1)}\big) \ge (1-4\varepsilon)\,\lambda\,\Phi^{(k+1)} \qquad (33)$$

Proof. By the results of Lemma 8 and inequality (30), it implies

$$C^{*}\big(\lambda;\,l^{(k+1)}\big) = \sum_{i=1}^{k} C_i^{*}\big(\lambda;\,l^{(k+1)}\big) \ge \Big(1-\frac{\varepsilon}{5}\Big)\sum_{i=1}^{k} C_i^{*}\big(\lambda;\,l^{(i)}\big) \qquad (34)$$

For the flow of a changed commodity i, by inequality (26) we have

$$C_i - C_i^{*} \le \frac{12}{\alpha\sigma}\,\big(\Phi^{(i)} - \Phi^{(i+1)}\big) \qquad (35)$$

On the other hand, for the flow of an unchanged commodity i, we have

$$C_i - C_i^{*} \le \varepsilon\,C_i \qquad (36)$$

Combining the inequalities (35)-(36), for every commodity i we obtain

$$C_i - C_i^{*} \le \frac{12}{\alpha\sigma}\,\big(\Phi^{(i)} - \Phi^{(i+1)}\big) + \varepsilon\,C_i \qquad (37)$$

Summing up over each commodity i, the inequality (37) is expressed as

$$\sum_{i=1}^{k}\big(C_i - C_i^{*}\big) \le \frac{12}{\alpha\sigma}\,\big(\Phi^{(1)} - \Phi^{(k+1)}\big) + \varepsilon\sum_{i=1}^{k} C_i \qquad (38)$$

Since Φ^{(1)} − Φ^{(k+1)} ≤ (ε²/12)Φ^{(1)} and Φ^{(i+1)} ≤ Φ^{(i)}, i = 1,…,k, the inequality (38) can be rewritten, by the choice of α and σ in algorithm CACF, as

$$\sum_{i=1}^{k} C_i^{*} \ge (1-\varepsilon)\sum_{i=1}^{k} C_i - \frac{\varepsilon^{2}}{\alpha\sigma}\,\Phi^{(1)} \ge (1-\varepsilon)\sum_{i=1}^{k} C_i - \frac{8\varepsilon}{5}\,\lambda\,\Phi^{(k+1)} \qquad (39)$$

Since, by Lemma 2,

$$\sum_{i=1}^{k} C_i \ge (1-\varepsilon)\,\lambda\,\Phi^{(1)} \qquad (40)$$

and Φ^{(1)} ≥ Φ^{(k+1)}, we have

$$\sum_{i=1}^{k} C_i \ge (1-\varepsilon)\,\lambda\,\Phi^{(k+1)} \ge \Big(1-\frac{6\varepsilon}{5}\Big)\lambda\,\Phi^{(k+1)} \qquad (41)$$

Bringing the inequalities (39) and (41) into (34), we obtain

$$C^{*}\big(\lambda;\,l^{(k+1)}\big) \ge \Big(1-\frac{\varepsilon}{5}\Big)\Big[(1-\varepsilon)\Big(1-\frac{6\varepsilon}{5}\Big)\lambda\,\Phi^{(k+1)} - \frac{8\varepsilon}{5}\,\lambda\,\Phi^{(k+1)}\Big] \ge (1-4\varepsilon)\,\lambda\,\Phi^{(k+1)} \qquad (42)$$
Q.E.D.

Using the variants of the combinatorial approximation algorithm, such as DECONGEST and IMPROVE, we can find the optimal concurrent flow. Given a precision value ε, polynomial-time computation is guaranteed with O(kε⁻² log(k) log(n)) bounds as the number of commodities increases. Regarding the way of determining the value σ of the fraction of the convex combination of the minimum-cost flow and the 'bad' commodity flow, the variants DECONGEST and IMPROVE use a predetermined value for each commodity flow without further evaluation. By contrast, in algorithm CACF, we adopt a one-dimensional search, the bisection method, to determine the optimal value of the fraction σ of the convex combination of the minimum-cost flow and the 'bad' commodity flow with reasonable computational effort. As shown in Figure 1, for each commodity flow f_i, we first find the minimum-cost flow f_i* and check whether this commodity flow is a 'bad commodity flow'. If so, the optimal search for σ is performed, and the commodity flow can be updated as f_i ← f_i + σ(f_i* − f_i). At iteration j, when the flow of commodity i is updated, the concurrent flow becomes f^{(j)}, and the values of the congestion λ_f^{(j)} and the corresponding potential function Φ^{(j)} are updated accordingly. Such computations continue until the maximum number of iterations is reached, the optimal value of λ has been found, or there is no bad commodity flow left to be improved. Since the threshold value in the bisection method is predetermined, the total number of iterations taken in searching for the optimal value of σ is a constant once the maximum σ has been determined, and consequently it does not increase the total computation bound given in Theorem 7. At the end of the computation of one outer loop in CACF, for each commodity i either the congestion value is not greater than (1+ε/5)λ or the potential function value fails to decrease by a factor of 1 − ε²/12, which substantially improves the approximate bound presented by Radzik [16, lemma 12] and Leighton et al. [10, theorem 3.3].
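The bisection step of CACF can be sketched as a one-dimensional search on the fraction σ of the convex combination. The code below evaluates the potential Φ(σ) directly and bisects on the sign of its finite-difference derivative; the tiny two-edge instance and all names are an illustrative reconstruction, not the chapter's implementation.

```python
import math

def potential_of_sigma(sigma, flow, mincost, cap, alpha):
    # Phi(sigma) for the convex combination f + sigma*(f* - f); convex in sigma.
    return sum(math.exp(alpha * ((1 - sigma) * flow[e] + sigma * mincost[e]) / cap[e])
               for e in cap)

def best_fraction(flow, mincost, cap, alpha, tol=1e-3):
    # Bisection on the sign of dPhi/dsigma (finite differences), sigma in [0, 1].
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        h = tol / 10.0
        slope = (potential_of_sigma(mid + h, flow, mincost, cap, alpha)
                 - potential_of_sigma(mid - h, flow, mincost, cap, alpha))
        if slope > 0.0:
            hi = mid      # Phi increasing: the minimiser lies to the left
        else:
            lo = mid
    return (lo + hi) / 2.0

# Two edges; rerouting the congested edge "b" toward "a" lowers Phi up to a point.
cap = {"a": 4.0, "b": 1.0}
flow = {"a": 1.0, "b": 2.0}          # edge "b" is badly congested
mincost = {"a": 3.0, "b": 0.0}       # the min-cost flow shifts everything to "a"
sigma = best_fraction(flow, mincost, cap, alpha=2.0)
assert 0.0 < sigma < 1.0
assert (potential_of_sigma(sigma, flow, mincost, cap, 2.0)
        < potential_of_sigma(0.0, flow, mincost, cap, 2.0))
```

Because Φ(σ) is a sum of exponentials of affine functions of σ, it is convex, so the sign of the derivative is a valid bisection criterion; with a fixed threshold the number of bisection steps is a constant, as noted above.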
5. IMPLEMENTATIONS AND APPLICATIONS

In this section, the variants of the combinatorial approximation algorithm, DECONGEST, IMPROVE and CACF, were tested for computational efficiency in solving the optimal concurrent flow problem. Numerical experiments were carried out on a wide range of large-
scale test networks. Computational comparisons were also made with the exact solutions found via the linear programming package CPLEX optimizers [7]. The implementations reported below were made on a Sun SPARC SUNW SunFire 480R with a 900MHz processor, 4GB RAM and the Unix SunOS 5.8 operating system. Computer programs were coded in C++ and compiled with GNU g++ 2.8.1. The threshold used in determining the optimal value of the fraction σ of the commodity flow, as shown in Figure 1, was specified as 10⁻³ throughout the following implementations of the CACF combinatorial approximation algorithm.
5.1. Implementations and Computational Results

The algorithms DECONGEST, IMPROVE and CACF were implemented on large-scale test networks produced by 'NetworkGen' [13], a test network generator developed in the environment provided by LEDA (Mehlhorn, Naher and Uhrig [12]) and written in C++. In this section, a variety of bidirectional grid graphs were randomly generated conforming to the DIMACS [3] network generator, so that each test network generated by 'NetworkGen' can easily be transformed into one usable for minimum-cost network flow problems either by 'DIMACS' or by other computational packages, e.g. CPLEX. As shown in Tables 1-3, 18 groups of grid graphs were generated. For the grid graphs, the number of nodes n ranged from 4225 to 34225, and the number of edges m ranged from 16640 to 136160. For comparison with the results yielded by the linear programming package CPLEX optimizers, all the test networks generated by 'NetworkGen' were transformed into files in the 'min' format, which is one kind of input file used by the CPLEX optimizers [7]. Regarding the implementation of the combinatorial approximation algorithms, the most computationally demanding work was devoted to finding the minimum-cost flows of the commodities. Two kinds of minimum-cost flow implementation were conducted in order to investigate the magnitude of the effect on computing times caused by the choice of minimum-cost flow implementation. One implementation used Orlin's strongly polynomial minimum-cost flow algorithm [15] based on the Edmonds-Karp capacity scaling technique (CS) [1], whose computation time is bounded by O(m log n (m + n log n)). The other, a heuristic, used a simple logic based on Dijkstra's shortest path algorithm (SP) [1] to carry out the shortest path finding and conduct the demand loading of the commodities without initially considering the capacity limits of the corresponding edges.
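The SP heuristic described above — route each commodity's demand wholly along a shortest path, ignoring capacities at first — can be sketched as follows (the graph encoding and names are ours, not the chapter's code):

```python
import heapq

def dijkstra_path(adj, s, t):
    # adj: node -> list of (neighbour, edge_length); returns a shortest s-t path
    # (assumes t is reachable from s).
    dist, prev, seen = {s: 0.0}, {}, set()
    heap = [(0.0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        if v == t:
            break
        for w, le in adj[v]:
            nd = d + le
            if nd < dist.get(w, float("inf")):
                dist[w], prev[w] = nd, v
                heapq.heappush(heap, (nd, w))
    path, v = [t], t
    while v != s:
        v = prev[v]
        path.append(v)
    return path[::-1]

def load_demand(adj, commodities):
    # Route each demand d_i wholly along one shortest path, ignoring capacities
    # (the capacity limit is handled later by the rerouting phase).
    flow = {}
    for s, t, d in commodities:
        p = dijkstra_path(adj, s, t)
        for e in zip(p, p[1:]):
            flow[e] = flow.get(e, 0.0) + d
    return flow

adj = {"s": [("a", 1.0), ("t", 3.0)], "a": [("t", 1.0)], "t": []}
flow = load_demand(adj, [("s", "t", 2.0)])
assert flow == {("s", "a"): 2.0, ("a", "t"): 2.0}   # s-a-t (length 2) beats s-t (3)
```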
Four variants of the combinatorial approximation algorithm for finding the optimal concurrent flow were tested, and the corresponding results are summarized in Tables 1 and 2, where in the 'fixed-value' variant the fraction of the commodity flow, σ, was set equal to the theoretical bound ε²/log n.
Table 1. Computation results (in CPU seconds) for grid graphs at ε = 0.01

n      m       k    Fixed-value (CS)  DECONGEST (CS)  IMPROVE (CS)  CACF (CS)
4225   16640   50   18.71             7.48            5.44          1.87
4225   16640   100  40.73             15.89           11.85         3.93
5625   22200   50   24.42             9.62            7.16          2.40
5625   22200   100  53.73             21.06           15.75         5.16
7225   28560   50   30.90             12.05           9.08          3.05
7225   28560   100  72.39             28.07           20.78         6.84
9025   35720   50   45.47             16.22           12.08         4.09
9025   35720   100  108.39            38.38           28.79         9.46
11025  43680   50   54.41             19.56           14.73         5.17
11025  43680   100  132.70            46.40           34.97         11.38
15625  62000   50   79.90             29.02           21.70         7.32
15625  62000   100  186.82            66.49           49.83         16.65
21025  83520   50   121.85            46.04           32.62         11.02
21025  83520   100  269.63            97.43           72.81         24.29
27225  108240  50   181.25            60.12           45.26         15.32
27225  108240  100  399.88            130.96          97.98         32.45
34225  136160  50   241.59            80.22           60.52         20.37
34225  136160  100  517.64            177.95          132.89        44.13
Table 2. Computation results (in CPU seconds) for grid graphs at ε = 0.01

n      m       k    Fixed-value (SP)  DECONGEST (SP)  IMPROVE (SP)  CACF (SP)
4225   16640   50   22.40             11.16           9.33          5.52
4225   16640   100  52.57             27.59           23.47         15.19
5625   22200   50   29.31             14.71           12.29         7.35
5625   22200   100  69.57             36.50           30.92         20.30
7225   28560   50   37.35             19.15           15.49         9.28
7225   28560   100  90.75             48.07           41.57         26.33
9025   35720   50   56.56             25.84           21.91         12.21
9025   35720   100  131.22            64.21           54.71         34.89
11025  43680   50   66.04             32.03           26.29         14.91
11025  43680   100  160.80            77.11           66.51         42.16
15625  62000   50   95.65             46.42           38.20         21.43
15625  62000   100  235.33            110.45          94.91         60.19
21025  83520   50   146.84            65.96           54.37         30.66
21025  83520   100  331.79            157.74          132.73        83.73
27225  108240  50   209.06            86.79           70.88         41.57
27225  108240  100  479.84            210.70          176.56        108.22
34225  136160  50   266.39            115.11          92.69         52.99
34225  136160  100  629.06            273.81          232.25        139.55
To compare the computational efficiency of the variants of the combinatorial approximation algorithm with that of the exact solution methods, we made further numerical computations using the linear programming package CPLEX optimizers, where the primal simplex, the dual simplex and the network optimizers are based on the simplex method, and the barrier optimizer is based on the interior-point method [7]. The implementation results for the grid graphs are summarized in Tables 1-3, and the comprehensive comparisons of the variants of the combinatorial approximation algorithm under the two kinds of minimum-cost flow implementation are summarized in Tables 1-2. As seen from Tables 1 and 2, whether via the capacity-scaling based minimum-cost flow implementation (CS) or via the shortest-path based minimum-cost flow implementation (SP), the CACF variant outperformed the other variants by yielding steadily the lowest computation times in all cases. Also, as shown in Tables 3-4, the CACF variant achieved the greatest performance advantage over the CPLEX optimizers by giving substantially the lowest computation times for all cases.
Table 3. Computation results (in CPU seconds) for grid graphs with CPLEX optimizers

n      m       k    Primal Simplex  Dual Simplex  Network  Barrier
4225   16640   50   5.77            6.53          10.47    7.38
4225   16640   100  5.06            6.61          6.72     6.94
5625   22200   50   9.55            9.36          9.92     20.05
5625   22200   100  8.53            9.52          10.47    17.36
7225   28560   50   7.09            22.45         21.67    16.89
7225   28560   100  9.11            19.03         20.81    40.84
9025   35720   50   12.53           35.83         27.78    46.23
9025   35720   100  13.66           31.30         33.86    55.53
11025  43680   50   19.34           74.48         65.20    38.91
11025  43680   100  21.64           61.64         62.91    40.03
15625  62000   50   42.55           79.75         119.14   91.80
15625  62000   100  37.70           121.03        124.78   82.81
21025  83520   50   90.92           255.94        302.28   114.25
21025  83520   100  98.38           230.50        270.33   113.75
27225  108240  50   176.77          424.64        497.83   129.73
27225  108240  100  166.89          460.34        496.11   162.72
34225  136160  50   324.22          685.13        822.78   269.17
34225  136160  100  304.91          982.22        569.50   251.83
Furthermore, as seen from Table 4, the computation time taken by the CACF variant was not much influenced as the approximation value ε decreased, especially when the large-scale problem sizes of the test networks are taken into account. Regarding the minimum-cost flow implementations reported in Tables 1 and 2, Orlin's minimum-cost flow implementation (CS) achieved better computation results than the Dijkstra shortest-path based minimum-cost flow implementation (SP), as expected. However, the magnitude of the effect on the computation times caused by the CS and SP implementations varied among the variants of the combinatorial approximation algorithm. For the CACF variant, the SP heuristic minimum-cost flow implementation took about two to three times more computation effort than
the CS minimum-cost flow implementation did. For the IMPROVE and DECONGEST variants, the SP heuristic minimum-cost flow implementation took about twice the computation effort of the CS minimum-cost flow implementation. For the fixed-value variant, the computation time taken by the SP heuristic was not much higher than that of the CS minimum-cost flow implementation. As for the implementation of the other variants of the combinatorial approximation algorithm, as summarized in Tables 1 and 2 for the input instances of grid graphs, the IMPROVE variant achieved better performance than the DECONGEST and fixed-value variants. Moreover, the implementation of the fixed-value variant took the highest computation times compared with the other variants of the combinatorial approximation algorithm, and it can be regarded as the upper limit for the implementation of the variants of the combinatorial approximation algorithm when solving the optimal concurrent flow problems. Also, compared with the exact solutions found by the CPLEX optimizers, the IMPROVE and DECONGEST variants yielded substantially better results when implemented via the capacity-scaling based minimum-cost flows.
Table 4. Computation results (in CPU seconds) for a grid graph solved by CACF with variants of minimum-cost flow implementation

n      m       k    ε      Capacity scaling  Shortest path
34225  136160  100  0.001  43.60             143.76
34225  136160  100  0.002  43.33             143.94
34225  136160  100  0.003  44.43             142.85
34225  136160  100  0.004  44.30             139.64
34225  136160  100  0.005  44.25             139.49
34225  136160  100  0.006  44.16             139.38
34225  136160  100  0.007  44.08             139.41
34225  136160  100  0.008  44.26             139.42
34225  136160  100  0.009  44.29             139.43
34225  136160  100  0.01   44.13             139.55
The implementation results yielded by the linear programming package CPLEX optimizers are shown in Table 3. As seen from Tables 1-3, the performance achieved by the variants of the combinatorial approximation algorithm was better than that of the CPLEX optimizers, and furthermore the computational efficiency of the combinatorial approximation algorithm became relatively robust as the problem size of the input instances increased. As found in Table 3, the 'best' CPLEX optimizers for the grid graphs differed slightly from case to case, and the required computation times were much more 'sensitive' to the size of the test problems than those of the CACF variant implementation. One interesting observation from the implementations is that the growth of the computing times for the CPLEX optimizers became relatively larger than that of the variants of the combinatorial approximation algorithm, especially as the problem size of the input instances increased, which conforms to the recent empirical findings of Goldberg, Oldham, Plotkin and Stein [6] and Radzik [17]. Also, the primal simplex and the barrier optimizers outperformed the other optimizers by giving lower values of CPU time, and
SuhWen Chiou
the barrier optimizer achieved better results as the problem size of the input instances increased. Comprehensive comparisons of the computational efficiency of the CACF variants and the CPLEX optimizers are shown in Tables 3-4 for varying problem sizes and numbers of commodities. Again, the proposed combinatorial approximation algorithm CACF clearly outperformed the CPLEX optimizers in computational efficiency and demonstrated robustness in the computation time taken as the problem size and the number of commodities of the input instances increased.
CONCLUSION AND DISCUSSION

In this chapter, we proposed a new combinatorial approximation algorithm, CACF, for solving the optimal concurrent flow problem; it gives a tighter computation bound by decreasing the values of the approximate congestion and the corresponding potential function, which substantially improves on previous work. We have also conducted a wide range of numerical implementations of the variants of the combinatorial approximation algorithm on randomly generated test networks. The proposed algorithm CACF outperformed the other variants of the combinatorial approximation algorithm in all cases by consistently yielding the lowest computation times. Furthermore, numerical comparisons have been carried out with exact solutions from the CPLEX linear programming optimizers. Two kinds of minimum-cost flow implementation have been discussed in this chapter: Orlin's minimum-cost flow algorithm based on the capacity-scaling technique (CS) and Dijkstra's shortest path heuristic (SP). As reported in the implementation results of Section 5.1, the variants of the combinatorial approximation algorithm implemented via Orlin's minimum-cost flow algorithm outperformed those implemented via Dijkstra's shortest path heuristic, as expected. Since the magnitude of the total computation times varied among the variants of the combinatorial approximation algorithm, further investigation is needed, and other promising kinds of minimum-cost flow implementation, e.g. Goldberg and Tarjan [5], may be taken into account in future tests. In the foreseeable future, we plan to apply the CACF variant to system-optimized network flow problems, and we will continue to investigate the potential of applying the CACF variant to other topics of interest ([2]).
ACKNOWLEDGMENTS

The author is grateful for the Editor's helpful comments on an earlier version of this manuscript. The work reported here has been supported by the Taiwan National Science Council via grants 98-2410-H-259-009-MY3 and 99-2221-E-259-005.
A Polynomial-Time Approximation Algorithm…
REFERENCES

[1] Ahuja, R.K., Magnanti, T.L. and Orlin, J.B. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, New Jersey, 1993.
[2] Chiou, S-W. A fast polynomial time algorithm for logistics network flows. Applied Mathematics and Computation 2008; 199(1): 162-170.
[3] DIMACS: the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) with support from the National Science Foundation. 1991; ftp://dimacs.Rutgers.edu/pub/netflow/general_info/specs.tex.
[4] Goldberg, A.V. A natural randomization strategy for multicommodity flow and related algorithms. Information Processing Letters 1992; 42: 249-256.
[5] Goldberg, A. and Tarjan, R.E. Finding minimum-cost circulations by successive approximation. Mathematics of Operations Research 1990; 15: 430-466.
[6] Goldberg, A., Oldham, V., Plotkin, S. and Stein, C. An implementation of a combinatorial approximation algorithm for minimum-cost multicommodity flows. In: Proceedings of the 6th Conference on Integer Programming and Combinatorial Optimization 1998; also in Bixby, R.E., Boyd, E.A. and Rios-Mercado, R.Z. (Eds.): Lecture Notes in Computer Science 1998; 1412: 338-352.
[7] ILOG CPLEX 7.0. User's Manual, 2000.
[8] Kapoor, S. and Vaidya, P.M. Faster algorithms for convex quadratic programming and multicommodity flows. In: Proceedings of the 18th Annual ACM Symposium on Theory of Computing 1986. p. 147-159.
[9] Klein, P., Plotkin, S., Stein, C. and Tardos, E. Faster approximation algorithms for the unit capacity concurrent flow problem with applications to routing and finding sparse cuts. SIAM Journal on Computing 1994; 23: 466-487.
[10] Leighton, T., Makedon, F., Plotkin, S., Tardos, E. and Tragoudas, S. Fast approximation algorithms for multicommodity flow problems. Journal of Computer and System Sciences 1995; 50: 228-243.
[11] Leong, T., Shor, P. and Stein, C. Implementation of a combinatorial multicommodity flow algorithm. In: Johnson, D.S. and McGeoch, C.C. (Eds.). DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 12. Providence, R.I.: American Mathematical Society, 1993. p. 387-405.
[12] Mehlhorn, K., Naher, S. and Uhrig, C. The LEDA User Manual, Version 3.8. Max-Planck-Institut fur Informatik, 66213 Saarbrucken, Germany, 1999.
[13] NetworkGen, software developed by Supply Chain Management Team, Department of Information Management, National Dong Hwa University, Taiwan, 2002.
[14] Norton, C.H., Plotkin, S. and Tardos, E. Using separation algorithms in fixed dimension. Journal of Algorithms 1992; 13: 79-98.
[15] Orlin, J.B. A faster strongly polynomial minimum cost flow algorithm. In: Proceedings of the 20th Annual ACM Symposium on Theory of Computing 1988; p. 377-387.
[16] Radzik, T. Fast deterministic approximation for the multicommodity flow problem. Mathematical Programming 1997; 78: 43-58.
[17] Radzik, T. Experimental study of a solution method for multicommodity flow problems. In: Proceedings of ALENEX00: the 2nd Workshop on Algorithm Engineering and Experiments, 2000. p. 79-102.
[18] Shahrokhi, F. and Matula, D.W. The maximum concurrent flow problem. Journal of the Association for Computing Machinery 1990; 37: 318-334.
[19] Sheffi, Y. Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall, New Jersey, 1980.
[20] Vaidya, P.M. Speeding up linear programming using fast matrix multiplication. In: Proceedings of the 30th IEEE Annual Symposium on Foundations of Computer Science 1989. p. 332-337.
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN 9781612095790
© 2012 Nova Science Publishers, Inc.
Chapter 5
MINIMIZING A REGULAR FUNCTION ON UNIFORM MACHINES WITH ORDERED COMPLETION TIMES

Svetlana A. Kravchenko*
United Institute of Informatics Problems, Surganova St. 6, 220012 Minsk, Belarus

Frank Werner†
Otto-von-Guericke-Universität, Fakultät für Mathematik, 39106 Magdeburg, Germany
Abstract

In this chapter, we apply linear programming to the following class of scheduling problems. A set of n jobs has to be scheduled on a set of uniform machines. Each machine can handle at most one job at a time. Each job J_j, j = 1, ..., n, has an arbitrary due date d_j. Job J_j becomes available for processing at its release date r_j and has to be scheduled by its deadline D_j. Each machine has a known speed. The processing of any job may be interrupted arbitrarily often and resumed later on any machine. The goal is to find a schedule with a given order of the completion times that minimizes a nondecreasing function F. Thus, we consider problem Q | r_j, pmtn, D_j | F and want to find an optimal schedule among the schedules with a given order of the completion times. We show that problem Q | r_j, pmtn, D_j | F with a given order of the completion times is equivalent to the problem of minimizing a function F subject to linear constraints.
Key Words: Scheduling, Uniform machines, Linear programming, Ordered completion times, Polynomial algorithm AMS Subject Classification: 90B35, 90C05, 68W40.
1. Introduction

Linear programming is a powerful tool for solving optimization problems. In the area of scheduling, it is sufficient to mention the results from [4] and [1] for problem R || ∑ C_j

* Email address: [email protected]
† Email address: [email protected]
and from [11] for problem R | pmtn | C_max to demonstrate the power and elegance of linear programming. In this chapter, we always use the standard 3-parameter classification scheme α | β | γ for scheduling problems introduced in [3]. Here the parameter α indicates the machine environment. For parallel machine problems, if α = R, we have unrelated machines, i.e., the processing times of the jobs J_j, j = 1, ..., n, on the machines are given by an arbitrary matrix. If α = Q, we have uniform parallel machines, which means that the processing time of any job depends only on the speed of the corresponding machine. In other words, to know all processing times in the case of uniform machines, it is sufficient to know the speed of each machine and the processing times of all jobs on one of the machines. The second parameter β indicates job characteristics such as release dates (described by r_j in parameter β), due dates (d_j), allowed preemptions (pmtn), and so on. The third parameter γ indicates the optimality criterion. Some typical optimality criteria are the minimization of the makespan C_max, the minimization of the sum of the completion times ∑ C_j, where C_j is the completion time of job J_j, the minimization of the total tardiness ∑ T_j, where T_j = max{0, C_j − d_j} and d_j is the due date of job J_j, or the minimization of the total weighted tardiness ∑ w_j T_j, where w_j > 0 is the weight of job J_j.

For scheduling problems, it is sometimes possible to know the order of the completion times before the problem is solved. A large class of such problems are scheduling problems with equal processing times (characterized by p_j = p in parameter β). For a survey on such problems, we refer the reader to [7]. The idea of using linear programming was exploited in [5], where a polynomial algorithm for problem Q | r_j, p_j = p, pmtn | ∑ C_j has been derived.
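The optimality criteria just named are simple functions of the completion-time vector. As a small illustration (the completion times, due dates and weights below are made-up numbers, not data from the chapter), they can be evaluated as follows:

```python
# Evaluate the three criteria C_max, sum T_j and sum w_j T_j
# for a fixed schedule given by its completion times.

def makespan(C):
    return max(C)

def total_tardiness(C, d):
    # T_j = max{0, C_j - d_j}
    return sum(max(0, Cj - dj) for Cj, dj in zip(C, d))

def total_weighted_tardiness(C, d, w):
    return sum(wj * max(0, Cj - dj) for Cj, dj, wj in zip(C, d, w))

C = [3, 5, 9]   # completion times C_1, C_2, C_3 (illustrative)
d = [4, 4, 7]   # due dates
w = [1, 2, 3]   # weights

print(makespan(C))                        # 9
print(total_tardiness(C, d))              # 0 + 1 + 2 = 3
print(total_weighted_tardiness(C, d, w))  # 1*0 + 2*1 + 3*2 = 8
```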
Note that the order of the completion times in an optimal schedule is known in advance and it coincides with the order of the release dates. Another example is problem Q | r_j, p_j = p, pmtn | ∑ w_j T_j, which has been considered in [6]. It appeared that problem Q | r_j, p_j = p, pmtn | ∑ T_j is NP-hard, whereas problem Q | p_j = p, pmtn | ∑ T_j can be polynomially solved. Note that for the latter problem, the order of the completion times in an optimal schedule is also known in advance.

In this chapter, we generalize the ideas from [5] and [6] to a more general case. Namely, we consider the case with arbitrary processing times and ordered completion times. The main idea of our approach is that the time interval can be divided into a set of subintervals by time points such as the release times, due dates, and/or deadlines. For each of the obtained subintervals, we model the problem as a linear program by introducing the completion time of each job. The 'real' completion time of a job can be presented as the sum of the completion times of this job specified for each interval.

The problem considered can be stated as follows. There are n independent jobs and m uniform machines. For each job J_j, j = 1, ..., n, there are given its processing time p_j; its due date d_j ≥ 0, i.e., the time by which it is desirable to complete the job; its release time r_j ≥ 0, i.e., the processing of any job can be started only at or after its release date; and its deadline D_j ≥ 0, i.e., no part of the job can be processed after its deadline. Note that any job can be completed after its due date, in which case usually some penalty is incurred, whereas the same job must be completed before its deadline. Each machine M_q, q = 1, ..., m, has some speed s_q ≥ 1, i.e., the execution of job J_j on machine M_q requires p_j / s_q time units. Any machine can process any job, but only one job at a time. Furthermore, a job can be processed only on one machine at a time.
Preemptions of processing are allowed, i.e., the processing of any job may be interrupted at any time and resumed later, possibly
on a different machine. For a schedule s, let F = F(C_1(s), ..., C_n(s)) denote a nondecreasing function depending on the variables C_j(s), j = 1, ..., n, where C_j(s) denotes the time at which the processing of job J_j is completed. If no ambiguity arises, we drop the reference to schedule s and write C_j. The problem is to schedule all jobs so as to minimize the optimality criterion F within the class of schedules with a fixed order of the completion times. The described problem can be denoted as Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F. We show that this problem is equivalent to the problem of minimizing a function F subject to linear constraints.

This chapter is organized as follows. In Section 2, we describe a polynomial reduction of problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F to the problem of minimizing a nondecreasing function F subject to linear constraints. In Section 3, we apply the derived model to problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | ∑ w_j T_j and show that this problem can be polynomially solved.
2. Problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F
In this section, we show that problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F, where F is a nondecreasing function, can be reduced to the problem of minimizing a nondecreasing function F subject to linear constraints. The idea of the reduction is taken from [5], where a polynomial algorithm for problem Q | r_j, p_j = p, pmtn | ∑ C_j has been derived.

Suppose that all jobs are numbered according to nondecreasing completion times, i.e., i < j implies C_i ≤ C_j for any pair of jobs J_i and J_j. Thus, we will look for an optimal schedule among the class of schedules for which C_1 ≤ ... ≤ C_n holds. Note that, if i < j and D_i > D_j, then C_i ≤ C_j and C_j ≤ D_j. Therefore, we can set D_i := D_j. Thus, in the following we suppose that D_1 ≤ ... ≤ D_n holds.

Let {b_1, ..., b_z} with b_1 < ... < b_z be the set of release times, due dates and deadlines, i.e., {b_1, ..., b_z} = {r_1, ..., r_n} ∪ {d_1, ..., d_n} ∪ {D_1, ..., D_n}. Furthermore, we set b_0 = 0 and suppose that b_z < b_{z+1} = max{r_1, ..., r_n} + ∑_{j=1}^{n} p_j, i.e., [b_0, b_{z+1}] is the time interval within which all jobs have to be processed. Note that the set of all points {b_0, ..., b_{z+1}} together with the set of all completion times {C_1, ..., C_n} defines a partition of the time interval. If we know both sets, then an optimal schedule can be easily found using a reduction to a network flow problem, see [8]. Thus, the main question is to find for each C_j the corresponding interval [b_k, b_{k+1}] such that C_j ∈ [b_k, b_{k+1}]. With this purpose, we define the 'completion time' C_j for each interval [b_k, b_{k+1}] in the following way: For each job J_j with j = 1, ..., n and for each interval [b_i, b_{i+1}] ⊆ [r_j, D_j] with i = 0, ..., z, we define the value C_j^i such that, if some part of job J_j is scheduled in [b_i, b_{i+1}], then this part has to be scheduled in [b_i, C_j^i], i.e., no part of job J_j is processed within the interval [C_j^i, b_{i+1}]. So, for each job J_j, j ∈ {1, ..., n}, and for each interval [b_i, b_{i+1}], i = 0, ..., z, such that [b_i, b_{i+1}] ⊆ [r_j, D_j], the values C_j^i define a partition of the interval [b_i, b_{i+1}]. It may happen that there is a job J_j such that [b_i, b_{i+1}] ⊄ [r_j, D_j] holds. In this case, we set C_j^i = C_{j-1}^i. Moreover, we set C_1^i ≤ ... ≤ C_n^i.

For each j ∈ {1, ..., n}, denote by v(j) the index such that b_{v(j)} = r_j and by u(j) the index such that b_{u(j)+1} = D_j.
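The construction of the breakpoints b_0 < b_1 < ... < b_z < b_{z+1} and of the indices v(j) and u(j) can be sketched on a small instance (all numbers below are illustrative assumptions, not data from the chapter):

```python
# Made-up instance: three jobs with release times r, due dates d,
# deadlines D and processing times p.
r = [1, 2, 3]
d = [5, 6, 8]
D = [7, 9, 10]
p = [4, 3, 2]

# b_1 < ... < b_z: all distinct release times, due dates and deadlines.
inner = sorted(set(r) | set(d) | set(D))
# Prepend b_0 = 0 and append b_{z+1} = max(r) + sum(p).
b = [0] + inner + [max(r) + sum(p)]

# v(j): the index with b_{v(j)} = r_j; u(j): the index with b_{u(j)+1} = D_j.
v = [b.index(rj) for rj in r]
u = [b.index(Dj) - 1 for Dj in D]

print(b)  # [0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 12]
print(v)  # [1, 2, 3]
print(u)  # [5, 7, 8]
```

On this instance b_{z+1} = 3 + 9 = 12 > b_z = 10, so the assumption b_z < b_{z+1} from the text holds.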
Thus, if we know a feasible schedule s, we can set

C_j^i =
  C_j(s),   if b_i < C_j(s) < b_{i+1},
  b_i,      if C_j(s) ≤ b_i,
  b_{i+1},  if C_j(s) ≥ b_{i+1},        (1)

for each j = 1, ..., n, and i = 0, ..., z such that [b_i, b_{i+1}] ⊆ [r_j, D_j], see Figure 1. Using equalities (1), we can calculate the value C_j by the formula

C_j = r_j + (C_j^{v(j)} − b_{v(j)}) + (C_j^{v(j)+1} − b_{v(j)+1}) + ... + (C_j^{u(j)} − b_{u(j)}).

Indeed, let C_j(s) ∈ [b_i, b_{i+1}]; then

C_j = r_j + (C_j^{v(j)} − b_{v(j)}) + ... + (C_j^{i−1} − b_{i−1}) + (C_j^i − b_i) + (C_j^{i+1} − b_{i+1}) + ... + (C_j^{u(j)} − b_{u(j)}).

Since r_j = b_{v(j)}, C_j^{v(j)} = b_{v(j)+1}, ..., C_j^{i−1} = b_i and C_j^{i+1} = b_{i+1}, ..., C_j^{u(j)} = b_{u(j)}, we obtain

C_j = b_{v(j)} + (b_{v(j)+1} − b_{v(j)}) + ... + (b_i − b_{i−1}) + (C_j(s) − b_i) + (b_{i+1} − b_{i+1}) + ... + (b_{u(j)} − b_{u(j)}) = C_j(s).

Each interval [C_k^i, C_{k+1}^i] is completely defined by the jobs processed in it. Thus, we denote by v_j^q([C_k^i, C_{k+1}^i]) the part (amount) of job J_j processed in the interval [C_k^i, C_{k+1}^i] on machine M_q, see Figure 2, i.e., the total processing time of job J_j on machine M_q in the interval [C_k^i, C_{k+1}^i] equals v_j^q([C_k^i, C_{k+1}^i]) / s_q, and for any job J_j, the equality

∑_{q=1}^{m} ∑_{k=0}^{n} ∑_{i=0}^{z} v_j^q([C_k^i, C_{k+1}^i]) = p_j

holds.
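The clamping rule (1) and the telescoping argument above can be checked numerically. The breakpoints, the job window and the completion time below are made-up values for illustration:

```python
# Made-up data: breakpoints b_0..b_4, one job with r_j = b_1 and D_j = b_4
# (so v(j) = 1 and u(j) = 3), and a completion time C_j(s) inside [b_2, b_3].

def clamp(Cs, lo, hi):
    # C_j^i according to rule (1): clamp C_j(s) to the interval [b_i, b_{i+1}].
    return lo if Cs <= lo else hi if Cs >= hi else Cs

b = [0, 2, 4, 7, 9]
rj, Cjs = 2, 5.5
v, u = 1, 3

# C_j^i for i = v(j), ..., u(j)
Ci = [clamp(Cjs, b[i], b[i + 1]) for i in range(v, u + 1)]

# Telescoping sum: C_j = r_j + sum_{i=v(j)}^{u(j)} (C_j^i - b_i)
Cj = rj + sum(Ci[t] - b[v + t] for t in range(len(Ci)))

print(Ci)  # [4, 5.5, 7]
print(Cj)  # 5.5, i.e. the sum recovers C_j(s) exactly
```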
Figure 1. The structure of the interval [r_j, D_j].

Figure 2. A partition of the interval [b_i, b_{i+1}].
We also denote by C̃_j the value which defines the completion time of job J_j in an optimal schedule. The values C̃_j, where j = 0, ..., n and C̃_0 = 0, the values C_k^i, where k = 0, ..., n+1, i = 0, ..., z, and the values v_j^q([C_k^i, C_{k+1}^i]), where j = 1, ..., n, i = 0, ..., z, k = 0, ..., n+1, q = 1, ..., m, define a feasible solution of the following minimization problem:

Minimize F(C̃_1, ..., C̃_n)        (2)

subject to

b_i = C_0^i ≤ C_1^i ≤ ... ≤ C_n^i ≤ C_{n+1}^i = b_{i+1},   i = 0, ..., z        (3)

C_j^i = C_{j−1}^i  if [b_i, b_{i+1}] ⊄ [r_j, D_j],   i = 0, ..., z,  j = 1, ..., n        (4)

∑_{q=1}^{m} v_j^q([C_k^i, C_{k+1}^i]) / s_q ≤ C_{k+1}^i − C_k^i,   i = 0, ..., z,  j = 1, ..., n,  k = 1, ..., n        (5)

∑_{j=1}^{n} v_j^q([C_k^i, C_{k+1}^i]) / s_q ≤ C_{k+1}^i − C_k^i,   i = 0, ..., z,  q = 1, ..., m,  k = 1, ..., n        (6)

∑_{i=0}^{z} ∑_{k=0}^{n} ∑_{q=1}^{m} v_j^q([C_k^i, C_{k+1}^i]) = p_j,   j = 1, ..., n        (7)

v_j^q([C_k^i, C_{k+1}^i]) = 0  if [b_i, b_{i+1}] ⊄ [r_j, D_j] or j ≤ k,   i = 0, ..., z,  j = 1, ..., n,  k = 0, ..., n,  q = 1, ..., m        (8)

C̃_j = max{ C̃_{j−1}, r_j + ∑_{i=v(j)}^{u(j)} (C_j^i − b_i) },   j = 1, ..., n,  C̃_0 = 0        (9)

C_k^i ≥ 0,   i = 0, ..., z,  k = 0, ..., n        (10)

v_j^q([C_k^i, C_{k+1}^i]) ≥ 0,   i = 0, ..., z,  j = 1, ..., n,  k = 0, ..., n,  q = 1, ..., m        (11)

C̃_j ≥ 0,   j = 1, ..., n,  C̃_0 = 0        (12)

The above formulation includes O(mn²z) variables and constraints.
In the following, we prove that, if s* is an optimal schedule, then

F(C_1(s*), ..., C_n(s*)) = F(C̃_1, ..., C̃_n)

holds. We show that, if s̃ is the schedule corresponding to an optimal solution of problem (2)-(12), then C_j(s̃) ≥ C̃_j holds for any job J_j. However, it is possible to transform the schedule s̃ in such a way that the value C̃_j and the completion time of job J_j coincide for any j = 1, ..., n.

Theorem 1. For any feasible schedule s of problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F, there exists a corresponding feasible solution of problem (3)-(12) such that F(C_1(s), ..., C_n(s)) = F(C̃_1, ..., C̃_n) holds.

Proof. Let s be a feasible schedule of problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F. Using schedule s and equalities (1), (4) and (8), we obtain the values of all variables C_k^i and v_j^q([C_k^i, C_{k+1}^i]). Conditions (4) hold by definition. Conditions (9) hold since, on the one hand, C_j(s) ≥ C_{j−1}(s) holds and, on the other hand,

C_j(s) = r_j + ∑_{i=v(j)}^{u(j)} (C_j^i − b_i)

holds because the values C_j^i satisfy (1).

To prove that condition (3) holds, i.e., that b_i = C_0^i ≤ C_1^i ≤ ... ≤ C_n^i ≤ C_{n+1}^i = b_{i+1}, consider two jobs J_x and J_y such that x < y, i.e., C_x ≤ C_y holds.

Let [b_i, b_{i+1}] ⊆ [r_x, D_x] and [b_i, b_{i+1}] ⊆ [r_y, D_y]. Consider all possible cases for C_x and C_y. If C_x ∈ [b_i, b_{i+1}] and C_y ∈ [b_i, b_{i+1}] hold, then C_x^i ≤ C_y^i since C_x ≤ C_y by definition. If C_x ∈ [b_i, b_{i+1}] and C_y ∉ [b_i, b_{i+1}] hold, then C_y^i = b_{i+1} and therefore C_x^i ≤ C_y^i holds. If C_x ∉ [b_i, b_{i+1}] and C_y ∈ [b_i, b_{i+1}] hold, then C_x < b_i and hence C_x^i = b_i, and therefore C_x^i ≤ C_y^i holds. If C_x ∉ [b_i, b_{i+1}] and C_y ∉ [b_i, b_{i+1}] hold, then C_x^i = C_y^i = b_i if C_x ≤ b_i and C_y ≤ b_i; C_x^i = b_i and C_y^i = b_{i+1} if C_x ≤ b_i and C_y ≥ b_{i+1}; and C_x^i = C_y^i = b_{i+1} if C_x ≥ b_{i+1} and C_y ≥ b_{i+1}. Thus, in each case C_x^i ≤ C_y^i holds.

Let [b_i, b_{i+1}] ⊄ [r_x, D_x] and [b_i, b_{i+1}] ⊄ [r_y, D_y]. In this case, C_x^i = C_{x−1}^i and C_y^i = C_{y−1}^i hold, and to compare C_x^i and C_y^i, we have to consider C_{x−1}^i and C_{y−1}^i under the condition that x − 1 < y − 1 holds.

Let [b_i, b_{i+1}] ⊄ [r_x, D_x] and [b_i, b_{i+1}] ⊆ [r_y, D_y]. In this case, C_x^i = C_{x−1}^i, and to compare C_x^i and C_y^i, we have to consider C_{x−1}^i and C_y^i under the condition that x − 1 < y holds.
Let [b_i, b_{i+1}] ⊆ [r_x, D_x] and [b_i, b_{i+1}] ⊄ [r_y, D_y]. In this case, C_y^i = C_{y−1}^i, and if y − 1 = x holds, then C_x^i = C_y^i; but if y − 1 > x, then to compare C_x^i and C_y^i, we have to consider C_x^i and C_{y−1}^i under the condition that x < y − 1 holds.

Thus, condition (3) holds.

Inequalities (5) hold since all parts v_j^1([C_k^i, C_{k+1}^i]), ..., v_j^m([C_k^i, C_{k+1}^i]) of job J_j have to be scheduled in the interval [C_k^i, C_{k+1}^i] on different machines without overlapping. Inequalities (6) hold since the parts v_1^q([C_k^i, C_{k+1}^i]), ..., v_n^q([C_k^i, C_{k+1}^i]) of all jobs have to be scheduled in the interval [C_k^i, C_{k+1}^i] on machine M_q without overlapping. For each job J_j, the sum of all values v_j^q([C_k^i, C_{k+1}^i]) has to be equal to p_j since job J_j has to be processed completely. Therefore, equalities (7) must hold. Furthermore, C̃_j = C_j(s) since, due to (1),

C_j^i − b_i =
  C_j(s) − b_i,   if b_i < C_j(s) < b_{i+1},
  0,              if C_j(s) ≤ b_i,
  b_{i+1} − b_i,  if C_j(s) ≥ b_{i+1},

holds for any interval [b_i, b_{i+1}] with [b_i, b_{i+1}] ⊆ [r_j, D_j]. Therefore,

C_j(s) = r_j + ∑_{i=v(j)}^{u(j)} (C_j^i − b_i).

Thus, minimizing F(C_1(s), ..., C_n(s)) is equivalent to minimizing F(C̃_1, ..., C̃_n).

Now we prove
Theorem 2. Any feasible solution of problem (2)-(12) provides a feasible schedule s for the scheduling problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F such that F(C_1(s), ..., C_n(s)) = F(C̃_1, ..., C̃_n) holds.

Proof. Let v_j^q([C_k^i, C_{k+1}^i]) and C_k^i, where k = 0, ..., n, i = 0, ..., z, j = 1, ..., n, and q = 1, ..., m, be a feasible solution of problem (2)-(12). For any interval [C_k^i, C_{k+1}^i], it is possible to construct a feasible schedule with the length

max{ max_{1≤j≤n} ∑_{q=1}^{m} v_j^q([C_k^i, C_{k+1}^i]) / s_q,  max_{1≤q≤m} ∑_{j=1}^{n} v_j^q([C_k^i, C_{k+1}^i]) / s_q },

having a finite number of preemptions, see [10]. Taking into account inequalities (5) and (6), we obtain

max{ max_{1≤j≤n} ∑_{q=1}^{m} v_j^q([C_k^i, C_{k+1}^i]) / s_q,  max_{1≤q≤m} ∑_{j=1}^{n} v_j^q([C_k^i, C_{k+1}^i]) / s_q } ≤ C_{k+1}^i − C_k^i.

Thus, for any interval [C_k^i, C_{k+1}^i], one can construct a feasible schedule. Therefore, for any feasible solution of problem (2)-(12), one can construct a feasible schedule s̃.
Now, if for any k = 0, ..., n, the values C_k^i satisfy equalities (1), then

C_k(s̃) =
  r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i),   if C_{k−1}(s̃) ≤ r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i),
  C_{k−1}(s̃),                             if C_{k−1}(s̃) > r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i).

In other words,

C_k(s̃) = max{ r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i),  C_{k−1} } = C̃_k,

and therefore,
F(C_1(s), ..., C_n(s)) = F(C̃_1, ..., C̃_n)
holds.

Now, suppose that for some job J_k, the values C_k^i do not satisfy equalities (1). Take the maximal value of k. Then there exist two intervals [b_g, b_{g+1}] and [b_h, b_{h+1}] such that b_g ≤ C_k^g < b_{g+1} ≤ b_h < C_k^h ≤ b_{h+1}. Choose [b_h, b_{h+1}] in such a way that C_k^h = C̃_k holds. Now we describe a transformation of the schedule s̃. The transformation does not change the value of function F(C̃_1, ..., C̃_n), but it changes the schedule s̃.

Transform the schedule s̃ in the following way. Take the largest value of δ such that in the intervals [C_k^g, C_k^g + δ] and [C_k^h − δ, C_k^h], each machine is either idle or processes exactly one job. Without loss of generality, we suppose that among all jobs processed in [C_k^h − δ, C_k^h], only one job, namely job J_k, is available but is not processed in [C_k^g, b_{g+1}].

Case 1: C_{k−1}^h < C_k^h. We swap J_k from the interval [C_k^h − δ, C_k^h] and J_l (if any) from the interval [C_k^g, C_k^g + δ] on the same machine, say M_z (see Figure 3). Set C_k^g = C_k^g + δ and C_k^h = C_k^h − δ. Since in the interval [b_h, b_{h+1}] the inequality C_l^h ≥ C_k^h holds, it follows that after the described swapping, the completion time of job J_l is not changed.

Consider the value of F before and after the swapping in [b_g, b_{h+1}]. Before the swapping, it was F(C̃_1, ..., C̃_n), and after the swapping, only the value of C̃_k can change. Before the transformation, according to (9), the equality

C̃_k = max{ C̃_{k−1}, r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i) }

holds, i.e.,

C̃_k = max{ C̃_{k−1}, r_k + (C_k^{v(k)} − b_{v(k)}) + ... + (C_k^g − b_g) + (C_k^{g+1} − b_{g+1}) + ... + (C_k^h − b_h) }.

After the swapping, we have

C̃_k = max{ C̃_{k−1}, r_k + (C_k^{v(k)} − b_{v(k)}) + ... + (C_k^g + δ − b_g) + (C_k^{g+1} − b_{g+1}) + ... + (C_k^h − δ − b_h) }.

Thus, one can see that the value of C̃_k does not change either. This means that the value of F(C̃_1, ..., C̃_n) does not change, too.
Figure 3. Swap of J_k from [C_k^h − δ, C_k^h] and J_l from [C_k^g, C_k^g + δ].
Now, if it happens that after such a swapping the schedule becomes infeasible, i.e., J_l is processed in [C_k^h − δ, C_k^h] on some other machine, say M_q ≠ M_z, then we swap job J_l from [C_k^h − δ, C_k^h] and J_f (if any) from [C_k^g, C_k^g + δ] on machine M_q. Since the inequalities C_l^g ≥ C_k^g + δ and C_f^g ≥ C_k^g + δ hold, the inequalities C_l^h ≥ C_k^h and C_f^h ≥ C_k^h also hold. Since this swapping does not influence the jobs within the intervals [C_k^g + δ, b_{g+1}] and [C_k^h, b_{h+1}], the completion times of the jobs J_l and J_f are not changed. We continue with this swapping as long as the schedule remains infeasible. The maximal number of required swaps is determined by the number of different due dates, by the number of different C_k^i values for k = 1, ..., n and i = 0, ..., z, and by the number of preemptions.

Case 2: C_{k−1}^h = C_k^h and r_{k−1} ≥ b_{g+1} hold. Since C_k^h = C̃_k, the equality C_{k−1}^h = C̃_{k−1} holds. Apply the same transformation as described in Case 1. After the swapping, we have

C̃_k = max{ C̃_{k−1}, r_k + (C_k^{v(k)} − b_{v(k)}) + ... + (C_k^g + δ − b_g) + (C_k^{g+1} − b_{g+1}) + ... + (C_k^h − b_h) }.

However, since

r_k + (C_k^{v(k)} − b_{v(k)}) + ... + (C_k^g + δ − b_g) + (C_k^{g+1} − b_{g+1}) + ... + (C_k^h − b_h) < C̃_{k−1}

holds, we obtain that after the swapping C̃_k = C̃_{k−1} holds.

Thus, no swapping changes the value of function F. Therefore, the schedule can be transformed in such a way that for any two intervals [b_g, b_{g+1}] and [b_h, b_{h+1}] with b_g ≤ C_k^g < b_{g+1} ≤ b_h < C_k^h ≤ b_{h+1}, the equality C_{k−1}^h = C_k^h holds.

Thus, we obtain the schedule s̃ and the set of values C_k^i. For each k = 1, ..., n, the values C_k^i either satisfy equalities (1) or they do not. If the values C_k^i satisfy equalities (1), then

C_k(s̃) = r_k + ∑_{i=v(k)}^{u(k)} (C_k^i − b_i) = C̃_k

holds. However, if the values C_k^i do not satisfy equalities (1), then

C_k(s̃) = C_{k−1}(s̃) = C̃_{k−1} = C̃_k
holds. Thus, F(C_1(s), ..., C_n(s)) = F(C̃_1, ..., C̃_n) holds.

Since the described transformation does not change the value

(C_j^{v(j)} − b_{v(j)}) + ... + (C_j^{u(j)} − b_{u(j)})

for each job J_j, we do not need to apply the transformation. As a result of solving problem (2)-(12), we obtain the values

C̃_1 = r_1 + (C_1^{v(1)} − b_{v(1)}) + ... + (C_1^{u(1)} − b_{u(1)}),
C̃_2 = max{ C̃_1, r_2 + (C_2^{v(2)} − b_{v(2)}) + ... + (C_2^{u(2)} − b_{u(2)}) },
...,
C̃_n = max{ C̃_{n−1}, r_n + (C_n^{v(n)} − b_{v(n)}) + ... + (C_n^{u(n)} − b_{u(n)}) },

and we can reconstruct an optimal schedule using the known values C_1 = C̃_1, ..., C_n = C̃_n by solving the corresponding network flow problem, which is a special case of polymatroidal network flow problems, where the capacities are defined on the set of arcs by submodular functions, see pages 255-256 in [8] and [9]. Thus, to solve problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | F, one has to do the following:
1. Solve the corresponding problem (2)-(12).
2. Using the values C̃_j, reconstruct an optimal schedule by solving the corresponding network flow problem.
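Step 2 relies on a network flow computation that the chapter does not spell out (and which, for uniform machines, is polymatroidal). As a simplified illustration only, for the special case of identical machines (all speeds s_q = 1) the feasibility of fixed release times and deadlines reduces to an ordinary maximum-flow check on the classic job/interval network: source to job J_j with capacity p_j, job to interval [b_t, b_{t+1}] with capacity b_{t+1} − b_t whenever the interval lies in [r_j, D_j], and interval to sink with capacity m(b_{t+1} − b_t). The sketch below (made-up instances, a hand-rolled Edmonds-Karp routine, and helper names of our own choosing) illustrates that reduction, not the full uniform-machine construction:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a nested capacity dict cap[u][v]."""
    total = 0.0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in list(cap[u].items()):
                if c > 1e-9 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # Trace the path and its bottleneck capacity, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][w] for u, w in path)
        for u, w in path:
            cap[u][w] -= aug
            cap[w][u] += aug
        total += aug

def preemptive_feasible(p, r, Dl, m):
    """Can jobs (p_j, r_j, D_j) be preemptively scheduled on m identical machines?"""
    b = sorted(set(r) | set(Dl))                      # interval endpoints
    cap = defaultdict(lambda: defaultdict(float))
    for j, pj in enumerate(p):
        cap['S'][('J', j)] = pj                       # source -> job
    for t in range(len(b) - 1):
        length = b[t + 1] - b[t]
        cap[('I', t)]['T'] = m * length               # interval -> sink
        for j in range(len(p)):
            if r[j] <= b[t] and b[t + 1] <= Dl[j]:
                cap[('J', j)][('I', t)] = length      # job may run here
    return max_flow(cap, 'S', 'T') >= sum(p) - 1e-9

print(preemptive_feasible([4, 4, 4], [0, 0, 0], [6, 6, 6], 2))  # True
print(preemptive_feasible([5, 5, 5], [0, 0, 0], [6, 6, 6], 2))  # False
```

The second instance is infeasible because the total work (15) exceeds the machine capacity 2 · 6 = 12 of the single interval [0, 6].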
3. Problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | ∑ w_j T_j
In [2], it has been proven that problem Q | r_j, p_j = p, pmtn | ∑ w_j T_j is NP-hard in the strong sense. Using the results presented in the previous section, we can derive a polynomial algorithm for problem Q | r_j, p_j = p, pmtn | ∑ w_j T_j with a fixed order of the completion times. Throughout this section, we suppose that the jobs are numbered in such a way that C_1 ≤ ... ≤ C_n holds. Thus, we have to find an optimal schedule among the class of schedules for which C_1 ≤ ... ≤ C_n holds. Recall that b_1 < ... < b_z is the set of release times, due dates and deadlines, i.e., {b_1, ..., b_z} = {r_1, ..., r_n} ∪ {d_1, ..., d_n} ∪ {D_1, ..., D_n}. For each j ∈ {1, ..., n}, denote by v(j) the index such that b_{v(j)} = d_j and by u(j) the index such that b_{u(j)+1} = D_j. Note that it does not make sense to set b_{v(j)} = r_j since, by definition, we have T_j = 0 within the interval [r_j, d_j]. We apply the mathematical programming model (2)-(12) to problem Q | r_j, pmtn, D_j, C_1 ≤ ... ≤ C_n | ∑ w_j T_j. This yields:
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
S.A. Kravchenko and F. Werner

Minimize  Σ_{j=1}^n w_j max{0, C̃_j − d_j}   (13)

subject to

b_i = C_0^i ≤ C_1^i ≤ · · · ≤ C_n^i ≤ C_{n+1}^i = b_{i+1},   i = 0, . . ., z   (14)

C_j^i = C_{j−1}^i   if [b_i, b_{i+1}] ⊄ [r_j, D_j],   i = 0, . . ., z,  j = 1, . . ., n   (15)

Σ_{q=1}^m v_j^q([C_k^i, C_{k+1}^i]) / s_q ≤ C_{k+1}^i − C_k^i,   i = 0, . . ., z,  j = 1, . . ., n,  k = 1, . . ., n   (16)

Σ_{j=1}^n v_j^q([C_k^i, C_{k+1}^i]) / s_q ≤ C_{k+1}^i − C_k^i,   i = 0, . . ., z,  q = 1, . . ., m,  k = 1, . . ., n   (17)

Σ_{i=0}^z Σ_{k=0}^n Σ_{q=1}^m v_j^q([C_k^i, C_{k+1}^i]) = p_j,   j = 1, . . ., n   (18)

v_j^q([C_k^i, C_{k+1}^i]) = 0   if [b_i, b_{i+1}] ⊄ [r_j, D_j] or j ≤ k,   i = 0, . . ., z,  j = 1, . . ., n,  k = 0, . . ., n,  q = 1, . . ., m   (19)

C̃_j = max{C̃_{j−1}, r_j + Σ_{i=v(j)}^{u(j)} (C_j^i − b_i)},   j = 1, . . ., n,   C̃_0 = 0   (20)

C_k^i ≥ 0,   i = 0, . . ., z,  k = 0, . . ., n   (21)

v_j^q([C_k^i, C_{k+1}^i]) ≥ 0,   i = 0, . . ., z,  j = 1, . . ., n,  k = 0, . . ., n,  q = 1, . . ., m   (22)

C̃_j ≥ 0,   j = 1, . . ., n,   C̃_0 = 0   (23)

Since w_j T_j = w_j max{0, C_j − d_j} holds by definition, we obtain that

w_j T_j = 0                  if r_j < C_j(s) < d_j,
w_j T_j = w_j (C_j(s) − d_j)  if d_j ≤ C_j(s) ≤ D_j

for each j = 1, . . ., n.
Minimizing a Regular Function on Uniform Machines

Figure 4. The structure of the interval [d_j, D_j].

Thus, if we know a feasible schedule s, we can set

C_j^i = C_j(s)   if b_i < C_j(s) < b_{i+1},
C_j^i = b_i      if C_j(s) ≤ b_i,
C_j^i = b_{i+1}  if C_j(s) ≥ b_{i+1}   (24)
for each j = 1, . . ., n, and i = 0, . . ., z such that [b_i, b_{i+1}] ⊆ [d_j, D_j], see Figure 4. Using equalities (24), we can calculate the value w_j T_j by the formula

w_j T_j = w_j (C_j^{v(j)} − b_{v(j)}) + w_j (C_j^{v(j)+1} − b_{v(j)+1}) + . . . + w_j (C_j^{u(j)} − b_{u(j)}).

Indeed, let C_j(s) ∈ [b_i, b_{i+1}], then using (24), we obtain

w_j T_j = w_j (C_j^{v(j)} − b_{v(j)}) + w_j (C_j^{v(j)+1} − b_{v(j)+1}) + . . . + w_j (C_j^{u(j)} − b_{u(j)})
        = w_j (b_{v(j)+1} − b_{v(j)}) + w_j (b_{v(j)+2} − b_{v(j)+1}) + w_j (b_{v(j)+3} − b_{v(j)+2}) + . . . + w_j (C_j^i − b_i) + w_j (b_i − b_i) + w_j (b_{i+1} − b_{i+1}) + . . . + w_j (b_{u(j)} − b_{u(j)})
        = w_j C_j^i − w_j b_{v(j)}
        = w_j (C_j(s) − d_j).

Thus, to minimize Σ_{j=1}^n w_j T_j, we need to minimize Σ_{j=1}^n w_j max{0, C̃_j − d_j}. So, problem Q | r_j, pmtn, D_j, C_1 ≤ . . . ≤ C_n | Σ w_j T_j can be reduced to a linear programming problem and therefore, it can be polynomially solved.
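As a quick numeric illustration of the telescoping argument above, the clamping rule (24) and the sum of the terms w_j (C_j^i − b_i) can be checked in a few lines; the breakpoints, weight and completion time below are made-up example values, not data from the chapter.

```python
def clamp_completion(c, lo, hi):
    """Rule (24): clamp C_j(s) into the breakpoint interval [b_i, b_{i+1}]."""
    return min(max(c, lo), hi)

def weighted_tardiness_sum(b, v, u, c, w):
    """Sum of w*(C^i - b_i) over i = v..u, with C^i fixed by rule (24);
    by the telescoping identity this equals w*(C_j(s) - d_j)."""
    total = 0.0
    for i in range(v, u + 1):
        total += w * (clamp_completion(c, b[i], b[i + 1]) - b[i])
    return total

# Example: breakpoints b_0..b_4, d_j = b_1 = 2, D_j = b_4, C_j(s) = 7 in [b_2, b_3].
b = [0.0, 2.0, 5.0, 9.0, 12.0]
w, c, d = 3.0, 7.0, 2.0
print(weighted_tardiness_sum(b, 1, 3, c, w))  # 15.0 = w*(c - d)
```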
4. Conclusion
In this chapter, we have considered scheduling problems with ordered completion times. We have shown that a wide class of scheduling problems with preemptions can be polynomially solved if the order of the completion times is known in advance. Note that, in contrast to [5, 6], we did not restrict ourselves to equal processing times but considered the general case of arbitrary processing times.
Acknowledgements

The results presented in this chapter were attained during a visit of S.A. Kravchenko at the Otto-von-Guericke-Universität Magdeburg, which was supported by a fellowship of the Alexander von Humboldt Foundation.
References

[1] Bruno, J.L.; Coffman, E.J.; Sethi, R. Scheduling independent tasks to reduce mean finishing time. Comm. ACM, 1974, 17, 382–387.

[2] Du, J.; Leung, J.Y.-T. Minimizing mean flow time with release time constraint. Theoretical Computer Science, 1990, 75, 347–355.

[3] Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 1979, 5, 287–326.

[4] Horn, W.A. Minimizing average flow time with parallel machines. Oper. Res., 1973, 21, 846–847.

[5] Kravchenko, S.A.; Werner, F. Preemptive scheduling on uniform machines to minimize mean flow time. Computers & Operations Research, 2009, 36, 2816–2821.

[6] Kravchenko, S.A.; Werner, F. Minimizing a separable convex function on parallel machines with preemptions. Otto-von-Guericke-Universität Magdeburg, Fakultät für Mathematik, 2009, Preprint 22/09, 20 pp.

[7] Kravchenko, S.A.; Werner, F. Parallel machine problems with equal processing times. Proceedings of the 4th MISTA Conference, Dublin/Ireland, 2009, 458–468.
[8] Labetoulle, J.; Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. Preemptive scheduling of uniform machines subject to release dates. In Progress in combinatorial optimization; Pulleyblank, W.R., Ed.; Academic Press: New York, 1984; pp 245–261.

[9] Lawler, E.L. Combinatorial optimization: Networks and matroids; Holt, Rinehart and Winston: New York, 1976.

[10] Lawler, E.L. Recent results in the theory of machine scheduling. In Mathematical programming: The state of the art; Bachem, A., Grötschel, M., Korte, B., Eds.; Springer-Verlag: Berlin, 1983; pp 202–234.

[11] Lawler, E.L.; Labetoulle, J. On preemptive scheduling of unrelated parallel processors by linear programming. J. Assoc. Comput. Mach., 1978, 25, 612–619.
PART III. PRACTICAL APPLICATIONS
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN 9781612095790
© 2012 Nova Science Publishers, Inc.
Chapter 6
LINEAR PROGRAMMING FOR IRRIGATION SCHEDULING – A CASE STUDY

H. Md. Azamathulla
Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Penang, Malaysia
ABSTRACT

There is an increasing awareness among irrigation planners and engineers of the need to design and operate reservoir systems for maximum efficiency so as to maximize their benefits. Accordingly, significant work has been done on reservoir operation for known total irrigation demand and on the optimal allocation of the available water to crops at the farm level. This chapter deals with the development of a Linear Programming (LP) model for real-time reservoir operation, applied to the existing Chiller reservoir system in Madhya Pradesh, India. From the results, the LP model is found to be well suited to the solution of irrigation scheduling problems.
Keywords: Linear Programming, Cropping pattern, Water resource management, Irrigation management, Optimization
NOTATIONS

AET_i^k   Actual evapotranspiration in period k from crop i (mm)
APET^k    Actually occurring potential evapotranspiration in period k (mm)
ARF^k     Actual rainfall value in fortnight k
A^k, B^k  Constants relating the storage to reservoir evaporation
A_o       Area of spread at dead storage level
d         Depletion factor
ED_i^k    Effective root zone depth of crop i in period k (cm)
ED_i^{k+1}  Effective root zone depth of crop i in period k+1 (cm)
Eff       Overall efficiency
Fkc_i^k   Crop evapotranspiration coefficient
ID        Industrial supply from the reservoir (mandatory release)
IRR_i^k   Irrigation applied to crop i in stage k (mm)
Ky^k      Yield response factor for crop i in period k
PET_i^k   Potential evapotranspiration in a particular geographical location (mm)
RE^k      Rate of evaporation in fortnight k
RF^k      Rainfall in period k (mm)
S^k       Reservoir storage at the beginning of period k
S^{k+1}   Reservoir storage at the end of period k
Z_f       Field capacity for the soil (mm/cm)
Z_w       Permanent wilting point for the soil (mm/cm)
Z_ww      Critical available moisture limit (mm/cm)
θ_i^k     Initial soil moisture in time stage k for crop i (mm/cm)
θ_i^{k+1}  Final soil moisture in time stage k for crop i (mm/cm)
Ya_i      Actual crop yield
Ym_i      Maximum crop yield

Senior Lecturer, River Engineering and Urban Drainage Research Centre (REDAC); email: [email protected]
1. INTRODUCTION

In most developing countries, a huge share of the limited budget goes to creating facilities for irrigation. Construction of reservoirs requires very high investment and also causes socio-economic and environmental issues (Azamathulla et al., 2008). Water in the reservoir has multiple claimants and needs to be optimally utilized to generate maximum benefits through proper operation, which must remain consistent despite uncertain future inflows and demands. According to the World Commission on Dams, many large storage projects worldwide are failing to produce the anticipated benefits (Labadie, 2004). Similarly, small storage projects made for local areas in developing countries, like India, are also failing to meet expectations. The main cause identified at various levels of discussion, as reported by Labadie (2004), is inadequate consideration of the more mundane operation and maintenance issues once the project is completed. For existing reservoirs, optimum operation is critical, since all the expected benefits are based on timely water releases to meet the stipulated demand. Real-time operation of a reservoir requires making relatively quick decisions regarding releases based on short-term information. Decisions are dependent on the storage in the reservoir and the information available in the form of forecast hydrologic and meteorological parameters. This is especially important during floods and power generation, where the
system has to respond to changes very quickly and may need to adapt rapidly (Mohan et al., 1991). For reservoir systems operated for irrigation scheduling, real-time operation is not very common because of the longer decision steps. Traditionally, the reservoirs meant for irrigation purposes are operated on heuristics and certain rules derived from previous experience. This defies the concept of water management; much of the water is lost, which in turn leads to loss of revenue. In the early 1960s, mathematical programming techniques became popular for reservoir planning and operation, and pertinent literature is available. An excellent review of the topic is given by Yeh (1985), followed by Wurbs (1993) and Labadie (2004). Along with simulation studies, Linear Programming (LP), Dynamic Programming (DP) and Nonlinear Programming (NLP) are the most popular modelling techniques. A comparative study on the applicability and computational difficulties of these models is presented by Mujumdar and Narulkar (1993). Many of the aforementioned techniques have been implemented in realistic scenarios, and many reservoir systems worldwide are operated based on the decision rules generated from these techniques. However, there exists a gap between theory and practice, and full implementation has not been achieved yet (Labadie, 2004). The basic difficulty a reservoir manager faces is to take a real-time optimum decision regarding releases according to the future demand and inflow. This leads to a problem of optimization in a stochastic domain. Two approaches to stochastic optimization are practised: i) Explicit Stochastic Optimization (ESO), which works on probabilistic descriptors of random inputs directly, and ii) Implicit Stochastic Optimization (ISO), which is based on historical, generated or forecasted values of the inputs through the use of time series analysis or other probabilistic approaches.
The ESO approach has computational difficulties; ISO methods are simple, but require an additional forecasting model for real-time operation. In the case of irrigation reservoirs, decision making at the reservoir level depends upon the water demand arising at the field level. In order to operate the reservoir in the best possible way, it becomes imperative to understand the processes occurring in the crop-soil-water-atmosphere system. This helps not only in the estimation of accurate demands, but also ensures optimum utilisation of water. If the processes at the field level are also modelled properly and integrated with the reservoir level model, the goal of water management can be achieved in the best possible way. Dudley et al. (1971) pioneered the integration of the systems in the determination of optimal irrigation timing under limited water supply using a stochastic DP model. Dudley and his associates then improved the model (Dudley and Burt, 1973; Dudley, 1988; Dudley and Musgrave, 1993). Vedula and Mujumdar (1992, 1993) and Vedula and Kumar (1996) have also contributed to this area. Their approach was to derive a steady state reservoir operation policy while maximizing the annual crop yield. DP-SDP and LP-SDP were used in the modelling. However, for real-time reservoir operation, Vedula and Kumar (1996) stressed the need to forecast inflows and rainfall in the current season to implement the steady state operation policy. As a result, the ESO model has to be supplemented with an ISO model to get a policy for the current period. As an extension of the work of Vedula and Mujumdar (1992), a significant contribution to the real-time reservoir approach was presented by Mujumdar and Ramesh (1997). They addressed the issue of short-term real-time reservoir operation by forecasting the inflow for the current period, a crop production state variable and a soil moisture state variable. Their work was based on SDP, but had all the limitations of SDP regarding the curse of dimensionality.
Against this background, a model for the derivation of a real-time optimal operating policy for a reservoir under a multiple crop scenario is proposed in the present study. The primary issue is that the reservoir gets inflows during the wet season (monsoon season) and is operated for irrigation in the dry season (non-monsoon season). The reservoir storage and the soil moisture level are considered to be the principal state variables, and the irrigation depths are the decision variables. An optimal allocation model is embedded in the integrated model to evaluate the irrigation water depth supplied to the different crops whenever a competition for water exists amongst various crops. The model also serves as an irrigation-scheduling model because it specifies the amount of irrigation for any given fortnight. The impact on crop yield due to water deficits and the effect of soil moisture dynamics on crop water requirements are taken into account. Moreover, a root growth model is adopted to consider the effects of varying root depths on moisture transfer. The only stochastic element in the season is the evapotranspiration. The handling of stochasticity has been accomplished through dependability-based forecasting in an ISO model. The remaining variables, such as the soil moisture status and the reservoir storage status at the beginning of any period, are considered to be state variables.
2. THE MODEL FORMULATION AND CONCEPT

The real-time operation model proposed in the present study integrates a reservoir level and a field level decision (Figure 1). It considers the soil-moisture status and the reservoir storage as the state variables and the applied irrigation depths as the decision variables. The formulation is based on the conceptual model for soil moisture accounting and the reservoir storage continuity relationships. A major emphasis is laid on maintaining the soil moisture in such a state that the evapotranspiration from the crops takes place at a rate that achieves better results in the form of increased crop yields. Table 1 presents the size of the problem for the different fortnights. In order to demonstrate the model applications, the actually occurring evapotranspiration series is assumed to take its mean values. To assess the timing of irrigation water application, the soil moisture status of the crop is an important parameter. Whenever the soil moisture status approaches a critical limit, irrigation is applied. Thus, the soil moisture status is monitored either by physical measurement or through soil moisture models.

Table 1. Size of the problem for different fortnights

Fortnight     1    2   3   4   5   6   7   8   9  10  11
Variables   115  106  97  88  75  62  49  36  23  10   5
Constraints 115  106  97  88  75  62  49  36  23  10   5
Figure 1. Flow chart of the real-time operation of the reservoir.
Soil moisture models are more popular since they do not require a lot of instrumentation to be installed in the field. Soil moisture models can be formulated either by a physical approach (Feddes et al., 1978) or by a conceptual approach (Rao, 1987). The conceptual approach has been used by Rao et al. (1988), Rao et al. (1990) and Hajilal et al. (1998) for the problem of irrigation scheduling. Vedula and Mujumdar (1992) utilised the conceptual model in their study. The same concept is adopted in the present study.
3. THE CONCEPTUAL MODEL

For the assessment of the timing of irrigation and the quantity of water to be supplied, the soil moisture status is an important parameter. Whenever the soil moisture status approaches a critical limit, irrigation is applied. Hence, the soil moisture status must be monitored either through physical measurements or through soil moisture accounting models. In the conceptual model for the Crop-Soil-Water-Atmosphere (CSWA) system, the basic assumption is that the soil acts as a reservoir, the main inputs to the reservoir are rainfall and irrigation, and the main outputs are evapotranspiration, percolation and drainage (Azamathulla, 2007). The extent of the reservoir is considered to be up to the effective root zone at the particular time. The soil water reservoir is governed by a continuity equation:

θ_i^{k+1} ED_i^{k+1} = θ_i^k ED_i^k + IRR_i^k − AET_i^k + RF^k   (1)

where
θ_i^{k+1} = final soil moisture in time stage k for crop i (mm/cm),
ED_i^{k+1} = effective root zone depth of crop i in period k+1 (cm),
θ_i^k = initial soil moisture in time stage k for crop i (mm/cm),
ED_i^k = effective root zone depth of crop i in period k (cm),
AET_i^k = actual evapotranspiration in period k from crop i (mm),
RF^k = rainfall in period k (mm),
IRR_i^k = irrigation applied to crop i in stage k (mm).

The conceptual model stated by Eq. (1) is used to compute the irrigation to be applied for the LP model with the area as a decision variable. Figure 2 shows the sketch of the conceptual reservoir. In the context of the conceptual model, two parameters are important.
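The continuity equation (1) is a straightforward water balance; a minimal sketch of the update it defines is shown below (the function name and all numbers are illustrative, not from the chapter's data set).

```python
def soil_moisture_next(theta_k, ed_k, ed_k1, irr_k, aet_k, rf_k):
    """Solve Eq. (1) for the final soil moisture theta^{k+1} (mm/cm):
    theta^{k+1} * ED^{k+1} = theta^k * ED^k + IRR^k - AET^k + RF^k.
    Root zone depths in cm, water amounts in mm."""
    return (theta_k * ed_k + irr_k - aet_k + rf_k) / ed_k1

# Illustrative numbers only:
print(soil_moisture_next(theta_k=2.0, ed_k=30.0, ed_k1=40.0,
                         irr_k=50.0, aet_k=60.0, rf_k=10.0))  # 1.5
```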
Variation of Evapotranspiration with the Available Soil Moisture

Evapotranspiration as a function of the available soil moisture is expressed as:

AET_i^k = PET_i^k   if aa_i^k ≥ Z_ww   (2)

or

AET_i^k = (aa_i^k / Z_ww) PET_i^k   otherwise   (3)
Figure 2. Conceptual model.

where AET_i^k is the actual evapotranspiration that has occurred from crop i in fortnight k (mm), PET_i^k is the potential evapotranspiration in a particular geographical location (mm), Z_ww is the critical available moisture limit (mm/cm) = (Z_f − Z_w) d, Z_f is the field capacity for the soil (mm/cm), Z_w is the permanent wilting point for the soil (mm/cm), d is the depletion factor, assumed to be 0.5 in the present study (Figure 3), and aa_i^k is the average available soil moisture over a fortnight (mm/cm). The average available soil moisture over a fortnight is given by

aa_i^k = (a_i^k + a_i^{k+1}) / 2.0

where a_i^k = θ_i^k − Z_w if a_i^k < Z_ww, otherwise a_i^k = Z_ww. A similar expression holds for a_i^{k+1}.
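Eqs. (2)–(3) together with the fortnight-averaged available moisture can be sketched as follows; function and parameter names are ours, and the numbers are illustrative.

```python
def actual_et(theta_k, theta_k1, pet, z_w, z_ww):
    """Eqs. (2)-(3): AET = PET if the average available moisture aa >= Zww,
    else AET = (aa / Zww) * PET, with a = theta - Zw capped at Zww."""
    a_k = min(theta_k - z_w, z_ww)     # available moisture, start of fortnight
    a_k1 = min(theta_k1 - z_w, z_ww)   # available moisture, end of fortnight
    aa = (a_k + a_k1) / 2.0            # fortnight average
    return pet if aa >= z_ww else (aa / z_ww) * pet

print(actual_et(2.0, 2.0, 40.0, 0.5, 1.0))  # 40.0 (aa capped at Zww, potential rate)
print(actual_et(1.0, 0.9, 40.0, 0.5, 1.0))  # ≈ 18.0 (aa = 0.45 < Zww, reduced rate)
```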
Root Zone Depth Growth

The root depth data in relation to the time stages is prepared according to the linear root growth model (adopted from Narulkar, 1995). The model assumes that the maximum root depth is achieved at the start of the yield formation stage. It remains at the maximum depth until the maturity stage. A minimum depth of 15 cm is considered in the first fortnight to account for the conditions of bare soil and an area with sparse crops. The root depth model is shown in Figure 4.
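A minimal sketch of the linear root growth model just described, assuming the depth grows linearly from the 15 cm minimum in the first fortnight to its maximum at the start of the yield formation stage; the fortnight index of that stage and the maximum depth are assumed parameters.

```python
def root_depth(k, k_yield, depth_max, depth_min=15.0):
    """Linear root growth (after Narulkar, 1995): depth grows linearly from
    depth_min (cm, fortnight 1) to depth_max at fortnight k_yield (start of
    the yield formation stage), then stays at depth_max until maturity."""
    if k >= k_yield:
        return depth_max
    return depth_min + (depth_max - depth_min) * (k - 1) / (k_yield - 1)

print(root_depth(1, 5, 90.0))  # 15.0
print(root_depth(3, 5, 90.0))  # 52.5
print(root_depth(7, 5, 90.0))  # 90.0
```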
Figure 3. Relationship between the available soil moisture in the root zone and the ratio of actual evapotranspiration to potential evapotranspiration.
Figure 4. Root depth growth model.
Relative Yield Ratio

The yield of a crop is affected by water deficits and the rate of evapotranspiration. The rate of evapotranspiration tends to decrease depending on the available moisture content. There are many methods to model this phenomenon; the model used in the present study is the most commonly adopted one. The relative yields are computed on the basis of the expression given by Doorenbos and Kassam (1979):

Ya_i / Ym_i = 1 − Ky^k (1 − AET_i^k / PET_i^k)   (4)

Equation (4) gives a yield ratio for a single period only. However, the aggregate effect of moisture deficits over all fortnights of crop growth must also be evaluated. The final yield ratio for a crop over the various time periods of a season is computed by a multiplicative model (Rao et al., 1990). The determination of the yield ratio is very important since it reflects the operation policy for an irrigation system. The expression is given by

Ya_i / Ym_i = ∏_{k=1}^{ncr} [1 − Ky^k (1 − AET_i^k / PET_i^k)]   (5)
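The multiplicative model (5) simply aggregates the per-period factors of Eq. (4); a short sketch with assumed AET, PET and Ky series (three fortnights, illustrative values):

```python
from math import prod

def relative_yield(aet, pet, ky):
    """Eq. (5): Ya/Ym = prod over periods k of (1 - Ky^k * (1 - AET^k/PET^k))."""
    return prod(1.0 - k * (1.0 - a / p) for a, p, k in zip(aet, pet, ky))

# One deficit-free period, one sensitive deficit, one mild deficit:
print(relative_yield(aet=[40.0, 30.0, 45.0], pet=[40.0, 40.0, 50.0],
                     ky=[0.4, 1.2, 0.6]))  # ≈ 0.658
```

Note how the high yield response factor (Ky = 1.2) makes the mid-season deficit dominate the final ratio, which is why the stochastic analysis below assigns higher dependability levels to sensitive growth stages.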
Water Requirements of the Crops

The model derived for an optimal crop pattern uses predetermined irrigation demands. On the basis of these, the optimisation model selects an appropriate area for each individual crop. The irrigation demands are determined using the conceptual model stated in Eq. (1). The irrigation requirements may be calculated by substituting the critical soil moisture content for the soil moisture in either of the fortnights k and k+1, replacing the actual evapotranspiration by the potential evapotranspiration, and rearranging the terms of Eq. (1):

IRR_i^k = θ_cr (ED_i^{k+1} − ED_i^k) + PET_i^k   (6)

where θ_cr is the critical soil moisture content below which the actual evapotranspiration may fall below the potential rate.
4. INTEGRATED LP FORMULATION

In the objective function, the weighted sum of all the actual evapotranspiration values is maximised. The weights are assigned according to the yield response factors for the individual crops in the individual periods. The objective is to maximise the actual evapotranspiration rate in order to minimise the deficits in the yields. The available soil moisture in any time period is thereby indirectly maximised in the objective function:

Max Z = Σ_{i=1}^{ncr} Σ_{k=1}^{np} Ky^k (a_i^k + a_i^{k+1}) / (2.0 Z_ww)   (7)

subject to the following constraints:

1. Soil moisture continuity

θ_i^{k+1} ED_i^{k+1} = θ_i^k ED_i^k + IRR_i^k − [(a_i^k + a_i^{k+1}) / (2.0 Z_ww)] PET_i^k + RF^k   (8)

where

θ_i^{k+1} = a_i^{k+1} + Z_w   (9)

with the physical bounds

θ_i^{k+1} ≤ 4.0   (10)

a_i^{k+1} ≥ 0.9   (11)

2. Reservoir continuity

A^k S^{k+1} = B^k S^k − Σ_{i=1}^{ncr} (IRR_i^k · AREA_i^k) / Eff − ID − A_o RE^k   (12)

S^{k+1} ≤ 31.1 (maximum reservoir capacity, M m³)   (13)
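Solved for S^{k+1}, the reservoir continuity constraint (12) gives the storage carried into the next fortnight; the sketch below uses made-up coefficients (not the Chiller reservoir data) and checks the capacity bound (13).

```python
def next_storage(s_k, a_k, b_k, irr_areas, eff, ind_demand, ao, re_k):
    """Reservoir continuity, Eq. (12), solved for S^{k+1}:
    A^k S^{k+1} = B^k S^k - sum(IRR_i * AREA_i)/Eff - ID - Ao*RE^k.
    irr_areas is a list of (irrigation depth, irrigated area) pairs; all
    coefficients here are illustrative placeholders."""
    release = sum(irr * area for irr, area in irr_areas) / eff
    return (b_k * s_k - release - ind_demand - ao * re_k) / a_k

s1 = next_storage(s_k=25.0, a_k=1.02, b_k=0.98,
                  irr_areas=[(0.06, 40.0), (0.05, 30.0)],
                  eff=0.6, ind_demand=0.5, ao=1.2, re_k=0.1)
print(round(s1, 4))
assert s1 <= 31.1  # capacity constraint, Eq. (13)
```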
5. CROP SIMULATION MODEL
The optimisation model presented above yields irrigation depth values that are based on forecasted values of the reference evapotranspiration. This reference evapotranspiration, in turn, is based on a dependability model. However, the actual evapotranspiration values differ from the forecasted ones, and thus, before going into the next fortnight, the soil moisture status must be updated with the applied irrigation and the actual climatic factors. The formulation for the crop simulation is as follows. First, the final soil moisture is computed from the relation

θ_i^k = [θ_i^{k−1} ED_i^{k−1} + IRR_i^k − Fkc_i^k APET^k + ARF^k] / ED_i^k   (14)

which assumes that the evapotranspiration takes place at the potential rate Fkc_i^k APET^k. If the implied average available moisture falls below the critical limit Z_ww, the evapotranspiration term follows Eq. (3) instead, with the available moisture averaged over the fortnight; substituting aa_i^k = (θ_i^{k−1} + θ_i^k)/2.0 − Z_w and solving the continuity equation for θ_i^k yields, with E_i^k = Fkc_i^k APET^k / Z_ww,

θ_i^k = [θ_i^{k−1} (ED_i^{k−1} − E_i^k / 2.0) + Z_w E_i^k + IRR_i^k + ARF^k] / (ED_i^k + E_i^k / 2.0)   (15)

together with the analogous expressions (16) and (17) for the cases in which the soil moisture crosses the critical limit at the beginning or at the end of the fortnight. The computed soil moisture status of the crops is used in the next fortnight to compute the demand.
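A hedged sketch of one fortnight of this simulation, built only from Eqs. (1)–(3) and (14): the potential-rate update is tried first and, if the implied average available moisture is subcritical, the continuity equation is re-solved with the moisture-dependent AET. The full case split of Eqs. (15)–(17) is simplified here, and all parameter names are ours.

```python
def simulate_fortnight(theta_prev, ed_prev, ed_k, irr, fkc, apet, arf, z_w, z_ww):
    """One fortnight of the soil-moisture update (a sketch, not the chapter's
    exact case equations). Tries Eq. (14) (potential-rate AET); if the average
    available moisture is below Zww, re-solves the balance with Eq. (3)."""
    pet = fkc * apet
    theta_k = (theta_prev * ed_prev + irr - pet + arf) / ed_k      # Eq. (14)
    aa = (min(theta_prev - z_w, z_ww) + min(theta_k - z_w, z_ww)) / 2.0
    if aa >= z_ww:
        return theta_k
    # Subcritical: AET = ((theta_prev + theta_k)/2 - z_w) * pet / z_ww;
    # the balance is linear in theta_k, so solve it in closed form.
    e = pet / z_ww
    return ((theta_prev * (ed_prev - e / 2.0) + z_w * e + irr + arf)
            / (ed_k + e / 2.0))

print(simulate_fortnight(1.0, 30.0, 30.0, 0.0, 1.0, 30.0, 0.0, 0.5, 1.0))  # ≈ 0.667
```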
6. STOCHASTIC ANALYSIS OF EVAPOTRANSPIRATION

It was previously stated that the data regarding the climatic factors are uncertain in nature and that determining these factors beforehand is impossible. However, there is a general tendency to assume the expected values of these factors and carry out the operation. This concept does not give a clear picture of the actual scenario, and appropriate weights for the individual growth stages of the crops are not assigned. The present study proposes a different method of forecasting the expected values of the climatic factors. The method of analysis starts with the computation of dependability values of the reference evapotranspiration factors from the available data. The dependability of realisation of any stochastic variable is defined as the probability of equalling or exceeding a particular value of that variable. Mathematically,

P(x ≥ X)   (18)

where P(·) is the probability, x is the variable under consideration and X is a stipulated value of the variable. A traditional method of estimating the dependability value is the use of standard frequency formulae (e.g. Weibull's formula or Hazen's formula). In the present study, a detailed probability analysis of the data is performed. The data is fitted to a standard probability distribution, and the best fitting distribution is tested through the Kolmogorov–Smirnov test (Haan, 1977). Once the values corresponding to different dependabilities are evaluated, the dependability values for the reference evapotranspiration are assumed to be different in different growth stages. The analysis is performed on the basis of the yield response factor. A high yield response factor signifies greater sensitivity towards deficits, and thus, a higher level of dependability is assumed for the evapotranspiration data and a lower level of dependability is assumed for the rainfall data. This ensures a higher value of the irrigation required for the crop in the sensitive period. As a result, the crop is safeguarded against poor moisture content conditions.
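For the standard frequency formulae mentioned above, here is a sketch of the Weibull plotting position P(x ≥ X) = m/(n+1), where m is the descending rank of X; note the chapter itself fits full probability distributions and applies the Kolmogorov–Smirnov test rather than stopping at a plotting-position estimate. The sample values are illustrative.

```python
def dependability_table(values):
    """Dependability P(x >= X) estimated by the Weibull plotting position
    m/(n+1), with m the rank of X when the sample is sorted descending."""
    n = len(values)
    ranked = sorted(values, reverse=True)
    return [(x, m / (n + 1)) for m, x in enumerate(ranked, start=1)]

# Fortnightly reference evapotranspiration sample (mm, assumed values):
for x, p in dependability_table([52.0, 47.0, 60.0, 55.0]):
    print(f"{x:5.1f}  {p:.2f}")
```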
7. LP MODEL FORMULATION FOR OPTIMAL CROPPING PATTERN

At the start of each dry season, depending on the storage volume in the reservoir, the crop pattern must be determined. To evaluate the crop pattern, another LP model is used. In this model, the irrigation depths are calculated from Eq. (6). The formulation is as follows. The objective function is

Max Z = C1 X1 + C2 X2 + C3 X3   (19)

which is subject to the following constraints:

1. Total available area

X1 + X2 + X3 ≤ A   (20)
where X1, X2, and X3 are the decision variables related to the areas of the individual crops; C1, C2, and C3 are the cost coefficients for each crop in Indian Rupees (1 US $ = 45 INR); and A is the maximum area available for irrigation.

2. Area of each individual crop: The area under each crop needs to be constrained; thus, there are lower and upper bounds on the area under each crop. The lower bound indicates the minimum area that can be allocated to a crop, while the upper bound indicates the maximum. In the present study, the lower bounds were defined for all crops except the cash crops, while the upper bounds were defined considering the present cropping pattern. The constraints can be expressed as

Li ≤ Xi ≤ Mi   (21)

where Li corresponds to the lower bound on the area of the i-th crop and Mi corresponds to the upper bound on the area of the i-th crop.
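The chapter solves this LP with the FORTRAN routine of Gillett (1981); as a modern stand-in, the same model (19)–(21) can be posed with SciPy's `linprog`. All net returns C_i, the total area A, and the bounds (L_i, M_i) below are made-up values, not the Chiller case-study data.

```python
from scipy.optimize import linprog

# Cropping-pattern LP (19)-(21): maximize C1*X1 + C2*X2 + C3*X3
# subject to X1 + X2 + X3 <= A and Li <= Xi <= Mi.  linprog minimizes,
# so the net returns are negated.
c = [-12000.0, -15000.0, -9000.0]      # assumed net returns (INR/ha), negated
A_ub = [[1.0, 1.0, 1.0]]               # total-area constraint (20)
b_ub = [1500.0]                        # assumed irrigable area A (ha)
bounds = [(100.0, 900.0),              # (Li, Mi) per crop, assumed; the cash
          (100.0, 800.0),              # crop (third) has no lower bound
          (0.0, 600.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)  # optimal areas and the maximized net return
```

With these numbers the optimum fills the highest-return crop to its upper bound first, then the next, until the total-area constraint binds.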
8. MODEL APPLICATION

The developed models were applied to the Chiller reservoir system in Madhya Pradesh, India (latitude 23°23′ N, longitude 76°18′ E). In the central part of India, many reservoir projects have been constructed for irrigation, but no irrigation is available from these reservoirs during the monsoon period (from June to September). The area receives about 90 to 95% of its rainfall during the monsoon season. The rainfall then becomes runoff to the reservoirs. These reservoirs are designed to contain the runoff of the monsoon season, but there is no runoff during the non-monsoon months; non-monsoon rainfall is rare and provides little runoff. The present formulations are specially suited for these types of reservoirs. A systematic data base was prepared for the various physical features of the reservoirs, including meteorological and hydrological data such as evapotranspiration, details of the crops in the command area, details of the net returns from the individual crops, and soil properties collected from the College of Agriculture, Indore, India.
9. REAL-TIME OPERATION PROGRAM

The program for real-time operation is developed to cover the aspects of the real-time operation philosophy. The program is divided into the following submodules:

1) Main program
2) Subroutine for framing constraints
3) LP subroutine
4) Post processing subroutine
5) Updating subroutine
Main Program
The main program controls the real-time operation procedure. It reads the permanent data base and also accepts user-provided data. The program coordinates all the other subroutines and furnishes all the data required by the individual subroutines. The main program is interactive and can be initiated in any fortnight, provided it gets the initial value of the storage in the reservoir and the initial soil moisture content values for the current fortnight.

Subroutine for Framing Constraints
The real-time operation problem requires problems of different sizes to be solved: for each fortnight, different constraints have to be framed, and the set of variables changes as well, since the crops have different starting and ending times. The subroutine developed for this purpose frames the constraints and arranges the variables so that the problem becomes easier to solve. This routine can be considered the key routine of the complete program.

LP Subroutine
The LP subroutine is a FORTRAN code presented by Gillett (1981). The original code is slightly modified to make it usable as a subroutine and to arrange the basic variables to be used by the post processing routine.
Copyright © 2012. Nova Science Publishers, Incorporated. All rights reserved.
Post-Processing Routine

In real-time operation we are interested in the irrigation depths as well as the value of the reservoir storage, both of which are generated through the LP subroutine. The irrigation depths are used to update the soil moisture levels of the various crops once the evapotranspiration depths are actually observed.

Updating Subroutine

The updating subroutine computes the initial soil-moisture levels of the various crops for the next fortnight, given the irrigation depths and the actually observed evapotranspiration values. These values are then used for future operation.
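The updating step is essentially a root-zone water balance. Below is a minimal sketch of one fortnightly cycle in Python; the function names, the fixed 90 cm root depth, and the data layout are our own illustrative assumptions, not taken from the original FORTRAN program.

```python
# Hypothetical sketch of one fortnightly real-time cycle; names and the
# 90 cm root depth are illustrative only, not from the original code.

def update_soil_moisture(sm, root_depth_cm, irrigation_mm, actual_et_mm):
    """Updating subroutine: root-zone water balance.

    sm is soil moisture in mm per cm of root depth; the applied irrigation
    and the actually observed evapotranspiration are depths in mm.
    """
    total_mm = sm * root_depth_cm + irrigation_mm - actual_et_mm
    return total_mm / root_depth_cm

def run_fortnight(storage, soil_moisture, solve_lp, observe_et):
    """Frame and solve the LP, then update crop states with the observed ET."""
    # LP subroutine: returns irrigation depth (mm) per crop and the
    # carry-over reservoir storage for the next fortnight
    irrigation, end_storage = solve_lp(storage, soil_moisture)
    actual_et = observe_et()  # ET depths actually observed during the fortnight
    # Post-processing / updating: new initial soil moisture for each crop
    new_sm = {crop: update_soil_moisture(sm, 90.0, irrigation[crop], actual_et[crop])
              for crop, sm in soil_moisture.items()}
    return end_storage, new_sm
```

The `solve_lp` and `observe_et` callables stand in for the constraint-framing/LP and observation steps described above.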
Analysis

The real-time operation program discussed in the preceding section was run for the different active storage values that occurred in different years of operation. In total, 9 storage values are considered. Three crops are assumed in the command area, and each crop contributes four variables to the model. Wheat (ordinary) and Gram exist for nine fortnights: sowing starts in the second fortnight of October and the crops end in the second fortnight of February. Wheat (hybrid) exists for eight fortnights, starting in the first fortnight of December and ending in the second fortnight of March. Hence, there are eleven fortnights in total and therefore eleven storage variables. The integrated model has 115 variables and 115 constraints. These values are for the starting fortnight; for later fortnights the size of the problem decreases progressively.
H. Md. Azamathulla
Table 1 presents the size of the problem for each fortnight. To demonstrate the model application, the actually occurring evapotranspiration series is assumed to be at its mean values.
10. RESULTS AND DISCUSSION

Optimum Crop Pattern
A separate computer program was run before the real-time operation program to determine the optimum crop pattern for all possible storage values. The results are stated in Table 2. They indicate that from a storage level of 31.10 M m3 down to a storage level of 26.06 M m3, the cropping pattern is the same as the one adopted in the project formulation. Below a storage level of 26.06 M m3, however, the crop pattern changes abruptly, and wheat (ordinary) is no longer recommended by the model. The area of wheat (hybrid) is also reduced when the storage is below this level. The area for Gram remains at its full value down to a storage level of 15.83 M m3. The change in cropping pattern indicates that efficient water usage is maintained.
Results from Real-Time Operation Model (LP)

Table 2. Optimum Cropping Pattern for Different Live Storage Values

Live storage (M m3)   Wheat (ordinary) (ha)   Gram (ha)   Wheat (hybrid) (ha)
 4.3230                       -                342.910          120.00
 8.2379                       -                427.580          500.00
12.3246                       -               1084.015          500.00
15.8632                       -               1100.000          855.00
20.7581                       -               1100.000         1434.00
26.0986                     300.0             1100.000         1700.00
28.8610                     300.0             1100.000         1700.00
30.1250                     300.0             1100.000         1700.00
31.1000                     300.0             1100.000         1700.00
Table 3. Sample Results Showing the Soil Moisture, Available Soil Moisture, Storage, and Irrigation to be Applied for Different Crops for a Real-Time Reservoir Operation Model (LP); Live Storage in the Reservoir = 31.1 M m3

Fortnight                        1      2      3      4      5      6      7      8      9     10     11
Reservoir storage (M m3)       29.28  28.17  26.30  22.22  19.68  14.64  10.87   5.62   4.24   3.63   3.60

Crop: Wheat (ordinary)
1) Soil moisture (mm/cm)        3.76   3.89   3.84   3.07   3.54   3.30   3.22   3.17   4.00     -      -
2) Available soil moisture
   (mm/cm)                      0.90   0.90   0.90   0.87   0.90   0.90   0.90   0.90   0.90     -      -
3) Applied irrigation (mm)     53.62  90.63  92.87  36.04  163.9   8.44  23.02  19.94  102.6     -      -

Crop: Gram
1) Soil moisture (mm/cm)        3.90   3.07   3.28   3.15   3.40   3.28   3.66   3.23   3.47     -      -
2) Available soil moisture
   (mm/cm)                      0.90   0.87   0.90   0.90   0.90   0.90   0.90   0.90   0.90     -      -
3) Applied irrigation (mm)     68.76  22.27  60.67  41.59  26.96  37.64  53.15   0.00  33.17     -      -

Crop: Wheat (hybrid)
1) Soil moisture (mm/cm)          -      -      -    4.00   3.06   3.48   3.32   3.28   3.38   3.18   3.19
2) Available soil moisture
   (mm/cm)                        -      -      -    0.90   0.86   0.90   0.90   0.90   0.90   0.90   0.90
3) Applied irrigation (mm)        -      -      -   94.21  37.19  127.9  78.89  162.9   0.00  36.09   0.00
Table 4. Relative Yield Ratio for Different Live Storage Values Computed with a Real-Time Reservoir Operation Model (LP)

Live storage (M m3)   Wheat (ordinary)   Gram     Wheat (hybrid)
 4.3230                      -           0.9677       1.000
 8.2362                      -           0.9083       1.000
12.3246                      -           0.9576       1.000
15.8632                      -           0.989        1.000
20.7581                      -           0.987        0.911
26.0986                    1.000         0.987        0.952
28.8610                    1.000         0.987        1.000
30.1250                    1.000         1.000        1.000
31.1000                    1.000         1.000        1.000
The real-time operation model gives an optimal operating policy for the storage available in the present fortnight while taking the future into account. The model also yields the irrigation depths to be applied to the individual crops in the fields. When water supplies are deficient, the model distributes the available water optimally over time and among the different crops. A sample result of the present model is stated in Table 3. The moisture available to the crops is not adversely affected, and the soil generally remains at the upper limit of the available soil moisture. This is because the crop pattern is chosen according to the availability of storage in the reservoir. The results indicate a successful application of the real-time operation strategy proposed in the present work.
Relative Yield Ratios

The relative yield ratios computed for the different crops at different live storage values are shown in Table 4. The relative yield ratios for all the crops become one when the live storage in the reservoir is equal to or greater than 28.89 M m3. The developed model can thus be successfully applied to the real-life case study of an irrigation-supporting reservoir system. The model ensures an optimum reservoir release over the different time periods and an optimum allocation of the available water over the different crops in the fields. While allocating the water, the model takes into account the critical growth stages of the crops and gives each crop sufficient water to safeguard it against any ill effects of the deficits. The optimum cropping pattern model used in the study restricts irrigation to productive uses, so the wastage of water is reduced. The stochastic analysis of evapotranspiration based on dependability studies can be used as a forecasting tool for the water requirements.
CONCLUSION

In the present study, a real-time model using an integrated linear programming formulation has been developed for a reservoir system meant for irrigation. It yields an optimal reservoir operating policy that incorporates field-level decisions while also deciding the appropriate time and amount of water to release from the reservoir. From the analysis, the following conclusions can be drawn. The developed models can be successfully applied to irrigation-supporting reservoir systems. They ensure an optimum reservoir release over different time periods and an optimum allocation of the available water over the different crops in the fields. While allocating the water, the model takes into account the critical growth stages of the crops and allocates sufficient water to each crop to safeguard it against any ill effects of water deficits. The optimum crop pattern model used in the study only allows productive irrigation, so the amount of wasted water is reduced. Moreover, the LP model is well suited to the solution of irrigation scheduling problems.
REFERENCES

Azamathulla, H. Md., Wu, F. C., Ab Ghani, A., Narulkar, Zakaria, N. A., and Chang, C. K. (2008). Comparison between genetic algorithm and linear programming approach for real time operation. Journal of Hydro-Environment Research, Elsevier and KWRA, 2(3), 171-180.
Azamathulla, H. Md. (1997). Real-Time Operation of Reservoir System for Irrigation Scheduling. M.E. Thesis, Devi Ahilya University, Indore, India.
Doorenbos, J., and Kassam, A. H. (1979). Yield Response to Water. Irrigation and Drainage Paper 33, FAO, Rome.
Dudley, N. J., Howell, D. T., and Musgrave, W. F. (1971). Optimal intraseasonal irrigation water allocation. Water Resour. Res., 7(4), 770-788.
Dudley, N. J., and Burt, O. R. (1973). Stochastic reservoir management and system design for irrigation. Water Resour. Res., 9(3), 507-522.
Dudley, N. J. (1988). A single decision-maker approach to irrigation reservoir and farm management decision making. Water Resour. Res., 24(5), 633-640.
Dudley, N. J., and Musgrave, W. F. (1993). Economics of water allocation under certain conditions. In Biswas, A. K., et al. (eds.), Water for Sustainable Development in the Twenty-First Century. Oxford University Press, Delhi.
Feddes, R. A., Kowalik, P. J., and Zaradny, H. (1978). Simulation of Field Water Use and Crop Yield. Centre for Agricultural Publishing and Documentation, Wageningen.
Gillett, B. E. (1981). Introduction to Operations Research: A Computer-Oriented Algorithmic Approach. McGraw-Hill.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, Mass.
Haan, C. T. (1977). Statistical Methods in Hydrology. Iowa State Press, Iowa.
Hajilal, M. S., Rao, N. H., and Sarma, P. B. S. (1998). Real time operation of reservoir based canal irrigation systems. Agricultural Water Management, 38, 103-122.
Labadie, J. W. (2004). Optimal operation of multireservoir systems: State-of-the-art review. J. Water Resour. Plan. Manage., 130(2), 93-111.
Mohan, S., Raman, S., and Premganesh, G. (1991). Real-time reservoir operation. IWRS, 11(1), 35-37.
Mujumdar, P. P., and Narulkar, S. (1993). Optimisation models for multireservoir planning and operation. Hydrology Review, VIII(1), 29-52. (Indian National Committee on Hydrology, Roorkee, India.)
Mujumdar, P. P., and Ramesh, T. S. V. (1997). Real time reservoir operation for irrigation. Water Resour. Res., 33(5), 1157-1164.
Nagesh Kumar, D., Srinivasa Raju, K., and Ashok, B. (2006). Optimal reservoir operation for irrigation of multiple crops using genetic algorithms. Journal of Irrigation and Drainage Engineering, ASCE, 132(2), 123-129.
Narulkar, S. M. (1995). Optimum Real-Time Operation of Multireservoir Systems for Irrigation Scheduling. Ph.D. Thesis, I.I.T. Bombay, India.
Rao, N. H. (1987). Field test for a simple soil-water balance model for irrigated areas. J. of Hydrology, 91, 179-186.
Rao, N. H., Sarma, P. B. S., and Chander, S. (1988). Irrigation scheduling under a limited water supply. Agric. Water Management, 15, 165-175.
Rao, N. H., Sarma, P. B. S., and Chander, S. (1990). Optimal multicrop allocations of seasonal and intraseasonal irrigation water. Water Resour. Res., 26(4), 551-559.
Shie-Yui, L., Al-Fayyaz, T. A., and Sai, L. K. (2004). Application of evolutionary algorithms in reservoir operations. Journal of the Institution of Engineers, Singapore, 44(1), 39-54.
Vedula, S., and Mujumdar, P. P. (1992). Optimal reservoir operation for irrigation of multiple crops. Water Resour. Res., 28(1), 1-9.
Vedula, S., and Mujumdar, P. P. (1993). Modelling for reservoir operation for irrigation. Proceedings of the Intl. Conf. on Environmentally Sound Water Resources Utilisation, Bangkok.
Vedula, S., and Nagesh Kumar, D. (1996). An integrated model for optimal reservoir operation for irrigation of multiple crops. Water Resour. Res., 32(4), 1101-1108.
Wurbs, R. A. (1993). Reservoir system simulation and optimization models. J. Water Resour. Plan. Manage., ASCE, 119(4), 455-472.
Yeh, W. W.-G. (1985). Reservoir management and operation models: A state-of-the-art review. Water Resour. Res., 21(12), 1797-1818.
In: Linear Programming Editor: Zoltan Adam Mann
ISBN 9781612095790 ©2012 Nova Science Publishers, Inc.
Chapter 7
LINEAR PROGRAMMING FOR DAIRY HERD SIMULATION AND OPTIMIZATION: AN INTEGRATED APPROACH FOR DECISION-MAKING

Victor E. Cabrera¹ and Peter E. Hildebrand²

¹ Dairy Management, Department of Dairy Science, University of Wisconsin-Madison, Madison, WI, US
² Department of Food and Resource Economics, University of Florida, Gainesville, FL 32611, US
ABSTRACT

The use of linear programming (LP) in farming systems is not a new concept. Linear programming has been used extensively to assess the impact of alternative management practices at the whole-farm level. Although these applications have included livestock practices, few studies have formally and systematically investigated dairy herd systems. Linear programming can be a powerful tool to simulate and optimize the dairy herd system inside a Markov-chain structure. On the other hand, the concept of dynamic programming (DP) for a dairy herd has long been recognized and used to find optimal policies for dairy herd management. Various options have been analyzed to find optimal replacement policies, reproductive parameters, and feeding strategies in dairy herds by using value or policy iteration methods. However, even though the formulation has been available since the 1980s, the solution of DP using LP has not been widely explored, probably because computer and software systems did not support the solution of real and practical problems. The formulation of DP as an LP problem for real but large problems is now feasible and has substantial advantages over other methods: it allows the inclusion of the interaction of herd mates, solves for suboptimal conditions, controls efficiently for the time steps of the analysis, and uses standard LP algorithms for solution.
Email: [email protected]
In the present chapter we discuss the application of LP in dairy herd management to solve DP problems and propose stochastic simulation and optimization in a Markov-chain structure for decision-making in modern dairy herd management.
INTRODUCTION

Dairy Herd Population Dynamics
Understanding the dynamics of the dairy herd population is crucial for decision-making and risk management in the dairy farming enterprise. Cows in a dairy herd follow probabilistic events of aging, culling, mortality, pregnancy, abortion, and calving. The structure of a dairy herd at a given point in time (a snapshot) is a reliable indicator of the economic performance of the herd. Each cow in the herd at a particular time belongs to a specific category or "state" (e.g., second lactation at peak of milk, or the dry period before calving for the third lactation), and each category or state has an estimated economic net return that can be calculated as the difference between revenues and expenses. For instance, a cow in peak lactation has a greater net return than a cow in the dry period before calving: the cow in peak lactation is producing a large amount of milk at the peak of feed conversion efficiency, while the dry cow is not producing any milk and is still consuming feed for maintenance. The aggregation of the expected net returns of all cows in a herd makes up the herd economic performance at a given time. Also, at a later time those cows in peak production might be or may become pregnant and reach a dry period, whereas those dry cows might calve, start a new lactation, and reach a peak of production. The dairy herd structure changes every day, and with it the herd economic performance. The challenge then is to find a farm-specific optimal herd structure.
Markov-Chain Simulation

Knowing the probabilistic "transition" matrices that define culling, mortality, pregnancy, abortion, and milk production, a herd can be simulated through Markov chains (Cabrera et al., 2006) until it reaches a "steady state." Steady state is characterized by a constant herd structure that does not change over time. Under the assumption that a farmer keeps the herd size constant (replacing cows leaving the herd) to make efficient use of the facilities, the herd will reach a steady state determined by the transition matrices. The steady state of the herd population then becomes the "snapshot" used to assess the economic performance of the herd. The exact same concept of the herd population steady state can be applied to understand a single cow's lifetime economic performance. The life of a cow can be described by a series of probabilistic events: becoming pregnant, being culled or dying (and being replaced), aborting, reaching the next lactation, and being at a particular production level. It is then possible to "follow" a cow's (and her replacement's) probabilistic life and to assess the lifetime economic performance of such a cow by aggregating all the probabilities of the cow being in a certain category or state and the net
return the cow would produce in such a category. This process is exemplified in Figure 1, which graphically shows the most important cow movements in a given herd structure. When the concept is to follow a cow through her lifetime, it is reasonable to use a discount rate and make a net present value assessment: the economic net return later in the cow's life has a lower value than the net return earlier in the cow's life. However, if the concept is to assess the herd economic net return, a net present value is not needed, because the analysis is performed at a given time and not through time.
Figure 1. Graphic representation of a Markov-chain structure to be used in a dynamic program solved by linear programming. A: the breeding process; B: the culling and mortality process; C: the abortion process.
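As a minimal illustration of the steady-state idea, the sketch below iterates a small made-up transition matrix until the herd structure stops changing; the three states, the probabilities, and the net returns are invented for the example and are not from the chapter.

```python
import numpy as np

# Illustrative 3-state herd: fresh, mid-lactation, dry. Column j gives the
# probabilities of moving from state j to each state in the next period;
# culled or dead cows re-enter state 0 as replacements, so herd size is fixed.
P = np.array([[0.05, 0.10, 0.90],   # to fresh (calvings + replacements)
              [0.90, 0.05, 0.05],   # to mid-lactation
              [0.05, 0.85, 0.05]])  # to dry
net_return = np.array([250.0, 180.0, -60.0])  # $/cow per period (made up)

x = np.array([1.0, 0.0, 0.0])        # start: all cows fresh
for _ in range(500):                 # iterate the chain toward steady state
    x = P @ x
x /= x.sum()                         # steady-state herd structure (proportions)

herd_return = float(net_return @ x)  # expected net return per cow at steady state
```

At steady state `x` satisfies `P @ x == x`, and `herd_return` is the "snapshot" economic performance described above.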
This chapter is about herd population analysis as the main driver for decision-making in dairy farming. Whether it is a herd snapshot or a cow's lifetime, the number of probabilistic categories or states in the conceptual model can become very large. Roughly, a simple monthly model for 10 lactations, considering up to 24 months after calving and 9 months of pregnancy, would have at least 2,400 possible states, and the size of the model grows quadratically with the number of states when the model is refined to weekly or daily steps.
Dynamic Transition Matrices

So far, we have discussed a simulation framework using Markov chains solved to reach a steady state under the presumption that the probabilistic transition matrices are constant. It is important to recognize that the transition matrices can vary because of management, environmental factors, or other uncertain factors. For analysis purposes, it is customary to assume that the involuntary culling rate, mortality rate, and abortion rate remain constant in the long term. However, three important transition matrices could certainly change more dynamically: 1) the probability of a cow becoming pregnant (reproduction), 2) the farmer's decision to replace or keep a cow for any other reason (voluntary culling and replacement), and 3) milk production.
The Need for Dynamic Programming

The reproduction and replacement problems can be studied by using Markov chains (St-Pierre and Jones, 2001). The framework has been to assess the whole-herd net return under different reproductive or replacement scenarios. However, a more advanced framework requires an additional step that optimizes the replacement or reproduction decisions. The basic questions to answer are: 1) Should the farmer keep this cow in the herd, or would the net return of the herd benefit from replacing her? and 2) If the previous decision was to keep this cow and she is not pregnant, should the farmer continue the reproductive services on her? The answers to these questions are not trivial; they require a detailed and systematic analysis. Although it has been around for a long time, dynamic programming (DP) (Bellman, 1957) is still the state-of-the-art technique for answering them. Dynamic programming is a sequential optimization in a Markov-chain structure (De Vries, 2004, 2006).
The Need for Linear Programming

Dynamic programming has commonly been solved with custom-written code (De Lorenzo et al., 1993). However, the DP problem can also be solved by using LP algorithms (Cabrera, 2010). Although the formulation of LP to solve DP has been available since the 1980s (Hillier and Lieberman, 1986), this technique has not been used for practical applications. A first formulation of the dairy replacement problem was published in 1998 (Yates and Rehman,
1998); however, that study included a model that was not a practical and realistic scenario for dairy farming. Cabrera (2010) is the first study with a DP/LP model for practical application. The use of LP for solving a DP has many advantages over the other methods of solution. First, because of the complexity of the problem to be solved, the use of standard LP algorithms assures consistency and robustness of the solution. Second, the use of LP algorithms allows for suboptimal solutions. Third, an LP formulation of a DP problem can efficiently manage different time spans in the Markov-chain dimensions of the model. Fourth, an LP formulation supports the inclusion of the interaction of herd mates.
Overview of the Next Sections

Following is a proposed general formulation of an LP problem to solve a DP optimization problem for dairy herd management in a matrix framework, followed by the discussion of two practical applications studied under the proposed framework.
FORMULATION OF LINEAR PROGRAMMING TO SOLVE A DYNAMIC PROGRAMMING OPTIMIZATION PROBLEM FOR DAIRY HERD MANAGEMENT

Mathematical Formulation

This is a general mathematical formulation of an LP/DP problem adapted from Cabrera (2010). The objective function is to maximize the net return of the decisions made; therefore:
\[
\text{Optimum economic solution} = \max \sum_{i=1}^{I} \sum_{k=1}^{K} y_{ik}\, NR_{ik} \qquad [1]
\]

where $i$ is the category or state, $k$ is the decision to be made, $I$ is the total number of states, and $K$ is the total number of possible decisions. Then, $y_{ik}$ is the steady-state proportion of state $i$ when decision $k$ is made, and $NR_{ik}$ is the net return expected for state $i$ when decision $k$ is made. The constraints of the model are the non-negativity of all decision variables:

\[
y_{ik} \ge 0 \quad \text{for all } i \text{ and } k \qquad [2]
\]

the constraint that assures that the herd size remains constant:

\[
\sum_{i=1}^{I} \sum_{k=1}^{K} y_{ik} = 1 \qquad [3]
\]

and the constraints that assure the flow of cows through the possible categories or states so that the herd population reaches steady state:

\[
\sum_{k=1}^{K} y_{jk} \;-\; \sum_{i=1}^{I} \sum_{k=1}^{K} y_{ik}\, P_{ijk} = 0 \quad \text{for all } j \qquad [4]
\]
where $j$ indexes the states (the rows, or vertical axis, of the matrix) and $P_{ijk}$ is the transition probability of moving from state $i$ to state $j$ when decision $k$ is made. The movement of cows is thus accounted for from one state to its successive potential states, as determined by the transition probabilities of involuntary culling, mortality, pregnancy, and abortion.
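Equations [1]-[4] can be handed directly to a standard LP solver. The sketch below solves a deliberately tiny two-state keep/replace example with SciPy; all transition probabilities and net returns are made up for illustration and are not from Cabrera (2010).

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: states 0 (first lactation) and 1 (later lactations);
# decisions: keep or replace. Variables y = [y0_keep, y1_keep, y0_rep, y1_rep].
# Keeping a state-0 cow moves her to state 1; keeping a state-1 cow keeps her
# in state 1 with p = 0.7 and forces a replacement (state 0) with p = 0.3;
# replacing always produces a state-0 cow. All numbers are invented.
NR = np.array([300.0, 350.0, -700.0, -650.0])  # net return per decision variable

A_eq = [
    [1.0, 1.0, 1.0, 1.0],    # Eq. [3]: steady-state proportions sum to 1
    [1.0, -0.3, 0.0, -1.0],  # Eq. [4] for j = 0: outflow minus inflow = 0
]                            # (the row for j = 1 is redundant and omitted)
b_eq = [1.0, 0.0]

# linprog minimizes, so negate NR to maximize Eq. [1]; the default bounds
# y >= 0 are exactly Eq. [2].
res = linprog(c=-NR, A_eq=A_eq, b_eq=b_eq)
optimal_net_return = -res.fun  # keeping in both states turns out optimal here
```

With these numbers the optimal policy keeps cows in both states, with steady-state proportions of about 0.23 (state 0) and 0.77 (state 1).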
Matrix Formulation

As important as it is to understand the mathematical equations, it is also important to understand how a problem of this magnitude can be implemented for practical applications in a spreadsheet system. Implementation of LP models in spreadsheets is becoming more and more popular because 1) they can be understood visually, 2) they can be set up with relative ease, and 3) they can better accommodate the creation of decision support systems. Some limitations of using spreadsheets include the space available for dimensioning the model and the efficiency of the algorithms for solving the problem. Nonetheless, spreadsheet software is quickly improving its capability to handle large models with lower computational requirements. In addition, the methodology and framework presented here are applicable to any software and solver algorithm. To facilitate understanding, a running example is introduced in this section. This example is an oversimplification of a real situation, but it allows one to understand and follow the important concepts needed for this framework. The Solver add-in for Excel® is used in this example to solve the problem; for larger spreadsheet problems, a Premium Solver may be required. Table 1 represents the running DP/LP matrix in a spreadsheet format. The objective function is to maximize net return (cells Z5:Y5; Equation 1). The value of cells Z5:Y5 is the sum-product of two vectors: the expected net returns (cells A8:T8) and the decision cells (cells A5:T5). The constants of the model are represented in cells A10:T19. The optimization is performed under the constraints of non-negativity of the decision variables (Equation 2, inserted directly in the solver engine), the constraint of a constant herd size (Equation 3; cell Z9 = cell Y9), and the steady-state herd population constraint (Equation 4; cells Z10:Z19 = Y10:Y19).
The value of cell Z9 is the sum of the values of the decision cells (A5:T5), and the value of cell Y9 is a constant = 1. The value of each cell from Z10 through Z19 is the sum-product of two vectors: the decision cells (A5:T5) and the constants (columns A through T) of the same row, while the value of each cell from Y10 through Y19 is a constant = 0.
Table 1. Running example of an equidistant linear programming matrix to solve a dynamic programming model for dairy herd decision-making

[Spreadsheet matrix; not fully recoverable from the extracted text. Columns A-J hold the "keep the cow" decision variables and columns K-T the "replace the cow" decision variables for lactations 1-10; the decision row holds the optimal steady-state proportions, another row holds the expected net returns, the body of the matrix holds the lactation-to-lactation transition (survival/culling) constants, and columns Z and Y hold the constraint values. In this example the maximized net return is $1,305, with steady-state proportions of 33.85%, 25.39%, 17.26%, 11.05%, 6.63%, and 3.78% for keeping cows in lactations 1-6 and 2.04% for replacement at lactation 7.]
Table 2. Running example of an event-driven linear programming matrix to solve a dynamic programming model for dairy herd decision-making

[Spreadsheet matrix; not fully recoverable from the extracted text. The structure follows Table 1, but lactations 5-6, 7-8, and 9-10 are aggregated into combined states with a time-step weight of 0.5. In this example the maximized net return is $1,005.]
1. Set the Dimensions of the Model

This is a critical step that drives the rest of the work. There is a trade-off between complexity and precision: the larger the model, the greater the complexity and the more computational resources required to solve it. On the other hand, an oversimplified model will not represent practical dairy farm conditions well. The dimensions of the model determine the number of decision variables included in the LP model.

The concept of dimensioning the model is better understood with the example. Let's use a simple model that only includes lactations as the state variables and does not subdivide lactations into smaller states. This simple model considers 10 lactations as the maximum lifetime of a cow and, in every lactation after the first, a decision of whether to keep or replace the cow. The dimensions of this model are: 10 lactations x 10 lactations = 100 cells for the decision to keep the cow and 10 lactations x 10 lactations = 100 cells for the decision to replace the cow. Therefore, the matrix has 10 vertical and 20 horizontal cells, 200 cells in total, that represent the constants or transition matrices, which are the culling rates per lactation (Table 1: columns A to T and rows 10 to 19). The expected net returns and the decision cells are two additional rows in the matrix that also have 20 cells each (Table 1: columns A to T, rows 8 and 5, respectively). The constraints are two additional columns of 11 cells: one for each lactation (Table 1: Z10 to Y19) and one to keep the herd population constant (Table 1: Z9 to Y9). Generalizing, the matrix has the number of defined states on the vertical axis and the number of defined states times the number of decisions on the horizontal axis:
Vertical axis = number of defined states
Horizontal axis = number of defined states * number of decisions

The dimension of the model is then the vertical axis multiplied by the horizontal axis:

Dimension of the model (cells) = vertical axis * horizontal axis

or:

Dimension of the model (cells) = number of decisions * (number of defined states)²

Let's now estimate the dimensions of a monthly model that divides each of 10 lactations into up to 24 months after calving and the pregnancy into up to 10 states (0 for nonpregnant cows and 1 to 9 for pregnant ones). The number of states for one lactation would then be 24 months * 10 pregnancy states = 240, and the number of states for 10 lactations would be 2,400. Considering only 2 decisions, keep or replace, the number of cells of this monthly model would be 2 * 2,400² = 11,520,000 constants, with a vertical axis of 2,400 constraints and a horizontal axis of 4,800 decision variables.

The estimated dimensions of a weekly model that divides 10 lactations into up to 108 weeks and the pregnancy into up to 40 weeks with 2 decisions would be 2 * 43,200² = 3,732,480,000, with a vertical axis of 43,200 cells and a horizontal axis of 86,400. In similar fashion, the dimensions of a daily model that divides 10 lactations into up to 720 days and the pregnancy into up to 280 days with 2 decisions would be 2 * 2,016,000² = 8,128,512,000,000, with a vertical axis of 2,016,000 cells and a horizontal axis of 4,032,000.
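The dimension arithmetic above can be checked with a short script; the function name `model_cells` is just an illustrative choice, not part of the chapter's notation.

```python
# Dimension formula from the text:
#   cells = decisions * states^2, where
#   states = lactations * time steps per lactation * pregnancy states.
def model_cells(lactations, steps_per_lactation, pregnancy_states, decisions=2):
    states = lactations * steps_per_lactation * pregnancy_states
    # Returns (total constant cells, vertical axis, horizontal axis).
    return decisions * states ** 2, states, decisions * states

monthly = model_cells(10, 24, 10)    # (11520000, 2400, 4800)
weekly = model_cells(10, 108, 40)    # (3732480000, 43200, 86400)
daily = model_cells(10, 720, 280)    # (8128512000000, 2016000, 4032000)
print(monthly, weekly, daily)
```

The three results match the monthly, weekly, and daily figures quoted in the text.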
Victor E. Cabrera and Peter E. Hildebrand
The dimensions of the model grow quadratically and rapidly become too large to manage and solve (as a reference, the standard solver engine shipped with spreadsheets can handle only 200 decision variables on the horizontal axis; larger capacities are, of course, available). The researcher needs to make a judgment call to decide on the dimensions of the model.

It was assumed above that for every state there exists a decision, but that might not always be the case. It is unlikely, for example, that a cow would be "voluntarily" replaced when pregnant or in very early lactation. Therefore, there might not be a need to include a replacement decision when a cow is pregnant or in early lactation. Under the same rationale, if the decision were whether to continue the breeding program or not, it should only apply to nonpregnant cows after a certain defined time after calving, as is the standard practice. Therefore, there are ways to "save" some space in the dimensions of the model. Nonetheless, the difference in dimensions between monthly, weekly, or daily models is still large.

So far, the assumption has been that the model's time steps are equidistant. However, one of the advantages of using LP to solve DP is the possibility of defining models with different time steps: e.g., one step could be a month and the next one a week. Event-driven LP models can accommodate virtually any combination of time steps, with savings in model dimensions and, consequently, computational efficiency. The challenge in event-driven models is the internal transfers between states that have different dimensions. If a state needs to transfer to several states with different dimensions, it requires additional dimensions, which decreases the advantage of using event-driven models. For an example, see Table 2, which is a follow-up of the running example introduced before.
Notice that lactations 1 to 4 remain as before, but the remaining lactations have been aggregated into sets of two (5 and 6, 7 and 8, and 9 and 10). Therefore, instead of 10 states there are only 8 states per decision, and the dimension of the model is now 2 x 8 x 8 = 128, or 72 fewer cells than the original 200. The results with this model are comparable, though not identical, to the original. These differences are due to the need for averaging the transition matrices and the larger steps. In practical applications, smaller differences would still be expected, but with the same trend of results when performing sensitivity or scenario analyses.
2. Define the Transition Matrices

Several pieces of information are needed to properly analyze dairy herd systems. For each defined variable, a transition matrix with the dimensions of the model is needed. The most important transition matrices are listed, defined, and exemplified below.
2.1. Involuntary Culling and Mortality

A cow always has a probability of being involuntarily culled from the herd. Involuntary culling occurs when a cow leaves the herd for unforeseen reasons outside the control of the herd manager; this could happen, for example, when a cow is badly injured or sick. This is different from voluntary culling, when the herd manager decides to cull a cow because of low production, reproductive failure, or any other reason. Voluntary culling can be optimized by
using the model framework described here, which can determine whether it is better to replace or to keep a cow. However, involuntary culling is an inherent characteristic of the herd. It can be assumed to be relatively constant and can be collected from herd historical records or from published reports. The transition matrix of involuntary culling for the running example can be seen in Table 3. In order to maintain the herd population constant, culled animals are replaced in the next period in the dimensions of the matrix.

Some proportion of cows might die on the farm; these also leave the herd involuntarily, but differently from involuntarily culled animals: dead animals do not produce any return and may even incur a disposal expense (discussed in the next section). The transition matrix of the mortality rate for the running example can also be seen in Table 3.

For herd population purposes, there is no distinction between involuntary culling and mortality; both can be aggregated to estimate the proportion of cows leaving the herd in a particular state. Take a look at the example matrix (Table 1) and the involuntary culling and mortality rate matrices (Table 3): the proportion of cows leaving the herd in lactation 4 is 40% (denoted by 40% in cell D10), therefore a proportion of 60% of the cows will move to the fifth lactation (denoted by 60% in cell D14). This is set up for all states in the matrix. Note that the first and the last lactations have a different setup. For the first state, the involuntary culling and mortality are applied directly, without a transfer through the number-1 cells of the matrix, to save dimensioning space. For the last lactation, 100% replacement is applied. This is discussed later in the setup of the model.
Table 3. Involuntary culling, mortality rate, and total cows leaving the herd per lactation used for the running example

Lactation   Involuntary Culling   Mortality Rate   Cows Leaving      Cows Remaining
Number      (%/lactation)         (%/lactation)    the Herd          in the Herd
                                                   (%/lactation)     (%/lactation)
1           20                    5                25                75
2           25                    7                32                68
3           28                    8                36                64
4           31                    9                40                60
5           33                    10               43                57
6           35                    11               46                54
7           38                    12               50                50
8           40                    14               54                46
9           42                    15               57                43
10          45                    20               65                35
2.2. Reproduction and Abortion

Reproduction parameters determine the probability of a cow becoming pregnant and, when pregnant, the probability of abortion. These dimensions are not considered in the running example because the state "lactation" was not further subdivided. However, we can use Figure 1 to describe the process of incorporating reproduction and abortion parameters. A cow starts the lactation as a nonpregnant cow; after a certain defined time, a reproductive program is applied to the cow (e.g., first reproductive service at 70 days after
calving). Therefore, a proportion of those cows not pregnant and not leaving the herd become pregnant. The cow population is divided into nonpregnant and pregnant cows, and each group has different characteristics regarding culling and expected net returns. Pregnant cows have an estimated time of calving and move to the next lactation if they do not abort, are not culled, and do not die. Pregnant cows could abort and return to the stream of nonpregnant cows. Nonpregnant cows will have successive reproductive attempts until either they become pregnant or they are culled for reproductive failure. Transition matrices similar to those for culling and mortality are needed to define the probabilities of pregnancy and abortion according to the dimensions of the model. This is discussed in more detail with the applications.
2.3. Milk Production

Milk production is the most important economic variable for dairy herd decision making. The lactation curves can be defined as a table following the dimensions of the model, or as a function of the days after calving scaled to the herd's rolling herd average of milk production. Along with lactation curves, feed consumption is also needed. Normally, there is an interaction between pregnancy and milk production.
3. Define the Expected Net Returns

Every cell in the horizontal axis of the matrix must have an expected net return (ENR), as shown in the example matrix (cells A8 to T8). Five factors are critical in this calculation: milk income over feed cost (IOFC), involuntary culling cost (ICC), mortality cost (MC), reproduction cost (RC), and income from calving (newborn) (IC).

ENR = IOFC - ICC - MC - RC + IC   [5]
3.1. Income over Feed Cost (IOFC)

As its name indicates, this is the difference between milk income and feed cost. Milk income is the product of milk price (MP) and milk production (MQ). The milk price is defined by the analyst according to farm and market conditions. Milk production is the accumulated milk produced during a defined time step in the dimensions of the matrix. The feed cost is the feed price (FP) multiplied by the feed amount (FQ).

IOFC = MP * MQ - FP * FQ   [6]
3.2. Involuntary Culling Cost

When a cow is culled, the farmer recovers a salvage value (SV, the meat value of the cow) but incurs the expense of buying a replacement (RC), usually a pregnant heifer ready to calve. The cost of bringing the heifer into the herd is therefore partially offset by the value of the newborn (IC).

ICC = SV - RC + IC   [7]
3.3. Mortality Cost

When a cow dies, the herd incurs a cost of disposal (CD) and the cost of buying a replacement (RC). As above, these costs are partially offset by the value of the newborn (IC) coming with the replacement.

MC = -CD - RC + IC   [8]
3.4. Reproduction Cost

Reproduction costs can include labor (L), semen doses (SD), hormones (H), and pregnancy diagnosis (PD). These costs can be calculated from herd records:

RC = -L - SD - H - PD   [9]
3.5. Income from Calving

The value of a calf can be calculated as the sum product of the proportions of male (ML) and female (FL) offspring and their respective values (MLP and FLP).

IC = ML * MLP + FL * FLP   [10]
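Equations [5] through [10] can be written as a few small functions. This is only a sketch following the sign conventions of the equations as reconstructed above; all numeric inputs in the example calls are hypothetical placeholders, not values from the chapter.

```python
# Equations [5]-[10] as small functions. Numeric inputs below are
# hypothetical illustration values, not the chapter's Table 1 data.

def income_over_feed_cost(mp, mq, fp, fq):            # Eq. [6]
    return mp * mq - fp * fq

def involuntary_culling_cost(sv, rc, ic):             # Eq. [7]
    return sv - rc + ic

def mortality_cost(cd, rc, ic):                       # Eq. [8]
    return -cd - rc + ic

def reproduction_cost(labor, semen, hormones, diag):  # Eq. [9]
    return -labor - semen - hormones - diag

def income_from_calving(ml, mlp, fl, flp):            # Eq. [10]
    return ml * mlp + fl * flp

# Example with made-up prices and quantities:
iofc = income_over_feed_cost(mp=0.40, mq=9000, fp=0.25, fq=6000)  # 2100.0
icc = involuntary_culling_cost(sv=700, rc=1500, ic=300)           # -500.0
ic = income_from_calving(ml=0.5, mlp=150, fl=0.5, flp=350)        # 250.0
print(iofc, icc, ic)
```

Note that ICC, MC, and RC come out negative when they are net costs, matching the leading minus signs in Equations [8] and [9].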
4. Setting up the Model

The model can be set up in different ways; one way is the running example (Tables 1 and 2). More important is to keep in mind how the model functions. The herd population follows Markov-chain probabilities defined by the matrix and by the culling and mortality transition matrices. For simplicity, no reproduction events are simulated: the assumption is that every cow has an average reproductive performance for a lactation. Also, for each lactation, the expected net return (cells A8 to J8 in Table 1) is already calculated. The cost of replacing a cow (cells K8 to T8 in Table 1) is also calculated for each column in the matrix. In addition, the involuntary culling and mortality rates (from Table 3) are represented in the parts of the matrix pertaining to involuntarily replacing or keeping the cow (cells A10 to J10 and cells A11, B12, C13, D14, E15, F16, G17, H18, I19, and J19 in Table 1).

It is important to understand how the net returns are set for the decision of keeping the cow. The data indicate that 25% of first-parity cows leave the herd. Consequently, cell A10 in the matrix indicates that 75% of first-parity cows end the first lactation. Cell A11 indicates that this proportion of cows is then transferred to the second lactation (75%). The negative sign, as in a balancing accounting book, indicates the proportion of cows that will be removed from the first lactation and added to the second.

The transfer is achieved differently in the following lactations. In lactation 2, for example, the data indicate that 32% of second-parity cows leave the herd, and consequently 68% of cows finish the second lactation and reach the third. Cell B11 (the number 1) is a "receptor" of the cows coming from first parity and at the same time a "distributor" of cows either leaving the herd (32%, cell B10) or moving to the third lactation (68%, cell B12).
Note that the 32% of cows leaving the herd in the second lactation are replaced into first parity (row 10). One extra row could have been
used, and one extra column, to make a similar transfer to first parity for cows leaving the herd during the first lactation, which would return as replacements to the first lactation; but this is accounted for more efficiently by entering 75% in cell A10 and -75% in cell A11 (25% of cows leave the herd in the first lactation). The same transfer procedure of the second lactation is applied to all successive lactations except the last one (cells J10 and J19). The assumption is that all cows reaching lactation 10 are replaced. Once again, there could be an additional transfer, but in order to save dimensioning space in the running model this replacement is forced. Results are not changed because, with the transition probabilities and the dimension of the model, cows will not reach that state.

Now, let's discuss the decision of voluntary replacement (the right-side block of the matrix). First, for each replacement, a cost is calculated that is similar to the involuntary culling cost (ICC). If the replacement is voluntary, the farmer incurs the cost of buying a replacement cow, which will be a pregnant heifer, so the cost of the replacement animal is partially offset by the newborn (coming with the heifer) and the salvage value received for the culled animal. This is calculated to be $500. Consequently, that value (negative) is used in cells K8 to T8.

Now, in the matrix, it is necessary to connect the transfers that will let the model perform a decision at each step of the solution. For instance, let's focus on the cells with the number 1 (positive 1) between L11 and T19, which correspond to the lactation rows (2 to 9). Also note cells L10 to T10, which have a -1 (negative 1). At the transfer moment between the first and second lactation, the model has two options. The first option has been discussed above: cows continue their normal course toward the third lactation (transfer center B11). The other option is to move the cow out of the herd (transfer center L11).
In column B the net return is $1,500, while in column L the net return is -$500. The same rationale applies to successive lactations. The LP algorithm finds which has the higher long-term net return: keeping the cow one more lactation or replacing it at the end of the current lactation. As the model is set up in the running example, columns K and T could have been avoided: the decision to replace a cow starts in the second lactation (column K), and column J already transfers all cows finishing lactation 10. Those columns were left in the matrix for demonstration purposes only. As previously discussed, they do not affect the functioning or results of the LP model.

The next step is to enter the equations in the spreadsheet. These equations will work with the Solver iterations. First, it is necessary to define where the results are going to be displayed. There will be as many result cells (decision variables) as the horizontal dimension of the matrix. For convenience, cells A5 to T5 are selected as the result cells.

One condition is to calculate the net return that will be the maximization target of the LP problem (Equation 1). This is displayed in merged cells Z5 and Y5, which contain this equation: =SUMPRODUCT(A8:T8,A5:T5). The function SUMPRODUCT in a spreadsheet performs the sum of the element-wise multiplication of two arrays. In this case, it is the sum of each lactation's net return multiplied by the proportion of cows in that lactation (the variables resulting from the solution) after reaching a steady state.

Another required condition is that all decision cells (results) be nonnegative (Equation 2). This is set directly in the solver engine of the spreadsheet, either as an equation (cells A5 to T5 >= 0) or as an option provided by the solver engine. Another condition of the problem is that the herd population must remain constant across time (Equation 3).
This condition is set by entering one equation and a constant in the spreadsheet. Cell Z9 contains this equation: =SUM(A5:T5), and cell Y9 is
equal to the number 1 (a constant). Later, these two cells are used when setting up the constraints of the LP model in the solver engine.

Another important condition is that the Markov-chain model defined in the LP reaches a steady state (Equation 4). This condition is set by a number of equations: as many equations as the vertical dimension of the LP model. These are displayed in cells Z10 to Z19. Cell Z10 has this equation: =SUMPRODUCT(A10:T10,$A$5:$T$5). The two arrays in this particular equation are the first-lactation row (A10 to T10) and the solution variables (A5 to T5): A10 * A5 + B10 * B5 + ... + T10 * T5. Note that for the second array the dollar ($) signs are used to fix this array in the equation when it is copied to the cells below. In each cell from Z11 to Z19, the equation has as its first array the cells A to T of the corresponding row number and as its second array the cells A5 to T5.

A final step in the setup is to define the LP problem, which requires the definition of three conditions: 1) the target variable, 2) the changing variables (decision variables), and 3) the constraints. Different spreadsheet software packages have different ways to set up the LP problem; therefore, a general way to set up the example model is provided, which applies to any LP algorithm and any spreadsheet software:

1) Target variable: maximize cell Z5.
2) Changing variables (decision variables): cells A5:T5.
3) Constraints: a) cell Z9 = Y9 and b) cells Z10:Z19 = Y10:Y19.

Note that Y10 to Y19 have no equations in them; they are zeroes, but they could also just be blank cells.
5. Solve the Model

The model is now ready to be solved. After the LP problem is defined in the solver engine, it is usually solved with an option named "solve." Once this option is applied, the solver engine performs a number of iterations until it finds the maximum value of the target variable by changing the indicated cells under the imposed constraints. For the running example, this means that the solver engine looks for the maximum net return at the steady state of the herd population by finding the optimal replacement time. The results include the maximum net return (cell Z5) and the proportions of the herd population in the different lactations (cells A5 to T5).
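For readers outside a spreadsheet, the same keep-or-replace steady-state LP can be sketched in Python with SciPy. This is a hedged illustration, not the chapter's Table 1 model: the culling and mortality rates are those of Table 3 and the $500 net replacement cost is from the text, but the per-lactation net returns are hypothetical, so the optimum need not match the $1,305/cow result reported later.

```python
# Sketch of the Markov-chain replacement LP solved with SciPy instead of
# a spreadsheet solver. Rates from Table 3; net returns are hypothetical.
import numpy as np
from scipy.optimize import linprog

leave = np.array([.25, .32, .36, .40, .43, .46, .50, .54, .57, .65])
n = len(leave)
enr_keep = np.array([1500.0, 1600, 1650, 1600, 1500, 1350, 1150, 900, 600, 250])
enr_repl = enr_keep - 500.0  # finish the lactation, then pay the net replacement cost

# Variables: x[s] = herd fraction in lactation s+1 under "keep",
#            x[n+s] = herd fraction in lactation s+1 under "replace at end".
# Steady-state balance for lactations 2..10: occupancy equals the "keep"
# survivors arriving from the previous lactation. All replaced, culled, and
# dead cows re-enter lactation 1, whose (redundant) balance row is dropped.
A = np.zeros((n, 2 * n))
b = np.zeros(n)
for s in range(1, n):
    A[s - 1, s] = A[s - 1, n + s] = -1.0     # occupancy of lactation s+1
    A[s - 1, s - 1] += 1.0 - leave[s - 1]    # survivors of "keep" cows
A[n - 1, :] = 1.0                            # herd population sums to 1
b[n - 1] = 1.0

res = linprog(-np.concatenate([enr_keep, enr_repl]), A_eq=A, b_eq=b)
herd = res.x[:n] + res.x[n:]
print("maximum net return per cow:", round(-res.fun, 2))
print("steady-state herd structure:", herd.round(4))
```

The objective is negated because `linprog` minimizes; the default bounds already enforce the nonnegativity condition (Equation 2), and the last constraint row plays the role of cells Z9/Y9.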
6. Analyze the Results

The solution to the example problem is shown in Table 1. It indicates that the maximum attainable net return is $1,305/cow when the replacement policy keeps cows only until lactation 6. The DP problem solved through LP found that a cow should be replaced at the end of the sixth lactation. By doing so, the herd population is distributed with 33.85% in first lactation, 25.39% in second lactation, 17.26% in third lactation, 11.05% in fourth lactation, 6.63% in fifth lactation, and 3.78% in sixth lactation. The remainder of the herd population, 2.04% (cell Q5), is replaced upon completing lactation 6. No other configuration of the herd will give a higher net return.
APPLICATIONS USING LINEAR PROGRAMMING FOR DAIRY HERD SIMULATION AND OPTIMIZATION
1. Monthly Model

Cabrera (2010) presents a comprehensive model used to study the interaction of herd economics, feeding diets, and nitrogen excretion. The model was set up in monthly steps for 15 lactations, with 10 pregnancy states (0 for nonpregnant, and 1 to 9 months pregnant). Figure 2 displays a diagram representing the cow flows across the dimensions of the model. The dimensions of the model were 2,790 rows by 5,580 columns (15,568,200 cells), with 5,580 decision variables (one per column).

The LP algorithm was used to find the maximum net return for the decision of keeping or replacing a cow in each state under a specified feeding diet. For each category or state included in the model there was a culling risk, a mortality risk, a pregnancy risk, a milk production level, and an amount of nitrogen (N) excretion depending on the diet fed. Also, for each category or state, the model included a net return based on the five factors discussed earlier (IOFC, ICC, MC, RC, and IC), plus an environmental factor resulting from the interaction of the value of the N excreted and the cost of spreading the manure on crop fields.

The model was used to study five diet treatments (an all-forage diet, a high-concentrate diet, and three intermediate diets). The model was solved for each diet, and the results indicated the herd structure at steady state and the replacement policy yielding the maximum net return, with or without the constraint of a maximum allowable level of N excretion. The model was solved using the Risk Solver Platform (Frontline Solvers, Incline Village, NV, USA); each solution took approximately 10 minutes on a computer with a 64-bit operating system, 6.00 GB of RAM, and two 2.8 GHz processors.

The results indicated that the optimal policy would be to replace nonpregnant cows at a certain month in lactation depending on market conditions, lactation number, and diet. Pregnant cows should not be replaced.
With reasonable prices and market conditions, the replacement of nonpregnant cows should occur at 11 months after calving for first-lactation cows and at 10 months after calving for later-lactation cows. Replacement could be performed one month later if the diet was all-forage. Imposing a maximum N excretion of 12 kg of N per cow per month drastically changed the dynamics of the replacement policy: cows consuming concentrates would be replaced 2 months earlier. The resulting herd structure indicated that, on average, around 50% of the cows would be in first parity, 24% in second parity, 12% in third parity, 6% in fourth parity, and only 8% in fifth to fifteenth parity. Thus, the herd structure was dependent on the optimal policy found by the model.

The author concluded that the implementation of a Markov-chain LP model to solve a DP problem is feasible for practical decision making in dairy farming and that this was an important advancement for dairy decision-making, providing both robustness and versatility in operations research. The model could become a valuable tool to support economic decision-making in dairy herd management.
[Figure 2, not reproducible here, is a grid of cow states by parity (1 to 15) over monthly time steps: each state is labeled by month of lactation (MIL, up to 24) and pregnancy month (PREG, 0 = nonpregnant, 1 to 9 = pregnant). Legend: replacements entering the herd; cows leaving the herd (dead, involuntarily culled, voluntarily culled); flow of cows between states; cows aborting and flow of aborted cows; parturition and flow of cows to an upper parity.]
Figure 2. Graphic representation of the Markov-chain probabilistic processes of cow flow transitions in a linear programming model used to solve the dynamic problem of feeding diet groups in monthly steps in Cabrera (2010).
2. Event-Driven Model

In order to gain efficiency in the solution and reduce the dimensions of the model, event-driven models can be set up. One example is an adaptation of the monthly model of Cabrera (2010) to steps dictated by the reproductive events. Cows becoming pregnant will, if they do not abort, are not involuntarily culled, and do not die, reach the next parity after the gestation process. Therefore, substantial savings in the dimensions of the model can be obtained by moving pregnant cows from one parity to the next in a single step. Cows not becoming pregnant will be bred again after a certain period of time (the next estrous cycle or the next timed or programmed breeding). The model can then use the LP algorithm to decide whether a cow should be bred again or replaced at a certain time. This model was solved using the Risk Solver Platform (Frontline Solvers, Incline Village, NV, USA); each solution took approximately 30 seconds on a computer with a 64-bit operating system, 6.00 GB of RAM, and two 2.8 GHz processors.

The savings in dimensions are substantial, but some challenges remain. First, reproductive programs are highly variable, so the model needs to be set up dynamically depending on the reproductive program parameters (e.g., the model is different if the interbreeding interval is 42 or 35 days). Second, together with the dynamic steps of the model, the transition variables need to accommodate these reproductive parameters (e.g., the milk production or culling risk needs to be accumulated dynamically to the dimensions of the model). Third, the internal transfers between categories or states can be difficult to handle. This is especially important for aborting cows. Cows aborting are moved back to the stream of nonpregnant cows, which runs at different time steps. If the transfer occurred only once during a pregnancy, this could easily be managed by accounting for the time difference between the two flow streams.
However, abortion rates change across gestation, so several events are needed to transfer aborting cows to the nonpregnant categories or states. As previously discussed, an exact accounting of these transfers would require additional columns and rows in the matrix, which erodes the benefit of using an event-driven framework. One option is to accumulate abortion rates and return the cows at the available times in the nonpregnant stream. This is an approximation that would probably have only minimal consequences on the results, and it is the approach used by Cabrera (unpublished) in a reproductive event-driven model.

Cabrera (unpublished) uses 12 lactations of reproductive events. The first event is the time to the first breeding. If the cow becomes pregnant, the next event is gestation; after gestation, calving; and after calving, the next lactation, provided the cow did not abort, was not culled, and did not die. If the cow did not become pregnant, the next event is the next breeding. The cow theoretically has the opportunity for up to 24 breeding services, and the model selects the optimal number of services in each lactation to maximize the net return. The model then has 540 rows by 792 columns (427,680 cells) with 792 decision variables (one per column). This model is 2.75% the size of the previous model (Cabrera, 2010), with substantial savings in solution efficiency. Results are expected to be approximately similar to those of the previous model, with the main advantage that this model can study very detailed reproductive programs and their impact on the herd net return. Cabrera (2010) needed to aggregate reproductive events monthly, with losses in sensitivity to the timing and efficiency of the reproductive programs.
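The size comparison quoted in the text is easy to verify from the two models' dimensions:

```python
# Size of the event-driven model (540 x 792) relative to the monthly
# model of Cabrera (2010) (2,790 rows x 5,580 columns).
monthly_cells = 2790 * 5580   # 15,568,200
event_cells = 540 * 792       # 427,680
ratio = 100 * event_cells / monthly_cells
print(f"{event_cells} vs {monthly_cells} cells = {ratio:.2f}%")  # 2.75%
```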
CONCLUSION

Linear programming is a flexible tool that provides answers to difficult questions in the area of dairy herd management. In particular, linear programming is a feasible and efficient framework for solving a dynamic programming problem, which is the state-of-the-art in dairy herd economic decision-making. As demonstrated in this chapter, linear programming can be used for practical applications. The use of linear programming for dynamic programming has a series of advantages that include solutions for suboptimal conditions, event-driven models, and the inclusion of interactions with the performance of other cows in the herd (not demonstrated in this chapter).

This chapter helps the reader set up a linear programming matrix to solve a dynamic programming model using common spreadsheet software and standard solver algorithms. Although the running example presented here is an oversimplification of real herd conditions, the same concepts and framework are valid for setting up models for practical applications in herd decision making, as demonstrated with recently developed studies. Although the examples referred to the most relevant decisions in dairy herd population dynamics (replacement and reproduction), the framework can be applied to any other optimization scheme.

Finally, although this chapter proposes one way to set up the matrix, it can be set up in multiple ways according to user preferences. Regardless of the matrix setup, however, the results should be the same, because for a given set of variables and constraints a linear programming model has one and only one optimal value. The researcher will also need to decide whether to use an equidistant or an event-driven model. For a feed group study, a monthly model as discussed in the applications would be appropriate, but for a reproductive study an event-driven model could be preferred.
Copyright © 2012. Nova Science Publishers, Incorporated. All rights reserved.
Victor E. Cabrera and Peter E. Hildebrand
In: Linear Programming Editor: Zoltan Adam Mann
ISBN 978-1-61209-579-0 ©2012 Nova Science Publishers, Inc.
Chapter 8
A REVIEW ON LINEAR PROGRAMMING (LP) AND MIXED INTEGER LINEAR PROGRAMMING (MILP) APPLICATIONS IN CHEMICAL PROCESS SYNTHESIS

J. A. Caballero (University of Alicante, Spain) and M. A. S. S. Ravagnani (State University of Maringá, Brazil)
ABSTRACT

This chapter provides a review of linear programming (LP) and mixed integer linear programming (MILP) applications in chemical process synthesis. Although most of the problems that appear in chemical process synthesis are nonlinear in nature, there are also a good number of cases in which the synthesis problem either can be posed as a linear (or mixed integer linear) problem or can be accurately approximated by a linear model. In some situations these approximations are so good that no further refinement is necessary. Moreover, in the synthesis problem the designer compares alternatives, so only the aspects in which the flowsheet configurations differ are relevant to the problem; in many such cases an (MI)LP approximation is good enough to select the best alternative. Last but not least, due to the complexity of the synthesis problem, it is common to use a sequential approach that increases the degree of detail from the most general decisions down to the detailed final design; in the first steps, linear models are usually good representations of the whole system. In this chapter we provide a set of examples that give a general idea of some LP/MILP techniques currently applied in the synthesis of chemical processes.
INTRODUCTION

Almost from the beginning of the industrial age, there has been a growing interest in designing efficient processes. Usually this efficiency was measured only in economic terms (reducing costs or increasing benefits). As society has become more developed, this interest
has also come to include safety, designing inherently safer processes or adding adequate measures to minimize risks. In recent decades a growing environmental awareness has also appeared. Concepts like global warming, greenhouse gases, ozone depletion, eutrophication, etc., are nowadays common terms not only in the scientific community but also among ordinary people. In parallel to the social pressure that demands more secure, clean and efficient designs, the changing economic situation also forces researchers and designers to develop new designs or even new design methodologies. A clear example of this last case is "pinch technology", a design methodology that appeared in the seventies of the last century and continued developing during the eighties as a consequence of the increase in petroleum prices in 1972-73. Until that moment, the relatively low price of petroleum allowed heating (or cooling) all the flow streams (fluids in movement) in a chemical plant without the need to recover energy or minimize the use of external utilities (usually steam generated in boilers burning coal, oil or another available fuel). This situation drastically changed in 1973: it became evident that the hot streams (process streams that must be cooled) should be used to heat the cold streams (process streams that must be heated), saving energy. However, this is a very complex problem; the objective was not only to minimize the use of external utilities but also to minimize the cost of the resulting heat exchanger network. Nowadays, we cannot conceive of a chemical plant without heat integration. Interestingly, a side effect of developing heat exchanger networks was that chemical plants reduced their emissions (directly in the plant or indirectly in the utility system) due to the lower energy consumption.
The methodology developed for the design of heat exchanger networks was later extended to mass networks, water networks, etc. (Grossmann et al., 1999). Due to the inherent complexity of synthesizing a chemical process, it was necessary to develop strategies or methodologies to help in this enormous task, because it is completely impossible to synthesize the detailed final plant in a single step. The design activity is therefore divided into a set of steps: gathering information, representation of alternatives, assessment of preliminary designs, and search among alternatives. Once the best alternative (or maybe a small set of alternatives) is selected, the detailed engineering design, with a rigorous calculation of all the elements in the chemical plant, starts. The conceptual process flowsheet (flow diagram) synthesis comprises the first steps. In these steps a comprehensive model of the whole plant is in general not necessary, because we are more interested in comparing alternatives and establishing the economic, and perhaps environmental, safety and control, suitability of the process. Although many ways of dealing with the synthesis problem have been proposed, only two approaches have gained wide acceptance, and nowadays they can be considered complementary, to the point that we could say both approaches are almost always applied simultaneously. The first is hierarchical decomposition (Douglas, 1988); the second is synthesis based on mathematical programming (Grossmann et al., 1999). To guide the selection of alternatives, Douglas (1988) formalized a decision hierarchy as a set of levels, where more detail in the process flowsheet is successively added to the problem. These levels are classified according to the following process decisions:
[Figure 1 shows (a) the full superstructure of sharp separation sequences for a four-component mixture ABCD (splits such as A/BCD, AB/CD and ABC/D, followed by their subsequent splits) and (b) one particular configuration of distillation columns contained in that superstructure (AB/CD followed by A/B and C/D).]

Figure 1. Example of a superstructure for separating a mixture containing 4 components (A, B, C and D).
Level 1: Batch versus continuous
Level 2: Input-output structure of the flowsheet
Level 3: Recycle structure of the flowsheet
Level 4: (a) Vapor recovery; (b) Liquid recovery
Level 5: Heat recovery network
Later, different researchers (Smith, 2005) added more levels to account for waste generation, removal or later disposal, water reuse, etc. The mathematical programming approach postulates a superstructure of decisions. In the case of process synthesis it is a 'meta-flowsheet' that includes all the alternatives of interest. Figure 1 shows a superstructure to select the best separation sequence; the optimal separation sequence is obtained by deleting some units or some streams. In this chapter, a stream can be understood as a fluid in movement and a unit as a piece of equipment, like a reactor, a column, a heat exchanger, etc. Figure 1b shows a possible configuration of distillation columns included in the superstructure. The mathematical programming approach can be divided into three parts:
1. Postulate the superstructure.
2. Develop a mathematical model that describes the superstructure: mathematical models of the units (sub-processes, tasks), logical relations among them, etc. The model takes the form of a Linear Program (LP), Nonlinear Program (NLP), Mixed Integer Linear Program (MILP) or Mixed Integer Nonlinear Program (MINLP).
3. Solve the model and analyze the solution.

The major drawback of conceptual design is its sequential nature: a decision in the first stages, even one that is good given the information available at that moment, affects the rest of the design and could eventually prove to be a bad decision. Superstructure optimization does not have this kind of problem, but it is constrained by the size of the problems that can be solved: the number of alternatives grows factorially, and therefore only small or medium size design problems can be solved. Making these two approaches work together is still an open research area (see for example Daichendt and Grossmann, 1998). However, in the design of subsystems (separation sequences based on distillation, heat exchanger networks, reactor networks, water networks, etc.) superstructure optimization has proved to be very useful. Besides, in most situations, especially in the first stages of synthesis, the designer is interested in comparing alternatives, and then a shortcut model is enough. In these circumstances the mathematical programming approach is the best available tool. The rest of the chapter focuses on superstructure optimization, and in particular on stage 2, «develop a mathematical model». As previously commented, in the preliminary design we are interested in comparing alternatives, so a simplified linear model is in most situations enough to discriminate between alternatives (and sometimes the model turns out to be linear).
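On a toy scale, searching a superstructure of sharp separation sequences like the one in Figure 1 can be sketched by exhaustive enumeration; the split costs below are made-up numbers, and the recursion stands in for a real MILP solver:

```python
from functools import lru_cache

# Illustrative only: cost of a sharp split of a contiguous mixture at a
# given cut point (in practice these would come from shortcut models).
split_cost = {
    ("ABCD", 1): 50, ("ABCD", 2): 40, ("ABCD", 3): 55,
    ("ABC", 1): 30, ("ABC", 2): 25,
    ("BCD", 1): 35, ("BCD", 2): 28,
    ("AB", 1): 15, ("BC", 1): 18, ("CD", 1): 12,
}

@lru_cache(maxsize=None)
def best_sequence(mixture):
    """Minimum total cost (and chosen splits) to fully separate `mixture`."""
    if len(mixture) <= 1:
        return 0.0, ()
    best = None
    for point in range(1, len(mixture)):            # enumerate sharp splits
        left, right = mixture[:point], mixture[point:]
        c_l, seq_l = best_sequence(left)
        c_r, seq_r = best_sequence(right)
        total = split_cost[(mixture, point)] + c_l + c_r
        cand = (total, ((left, right),) + seq_l + seq_r)
        if best is None or cand[0] < best[0]:
            best = cand
    return best

cost, sequence = best_sequence("ABCD")
print(cost, sequence)       # the cheapest sequence here is AB/CD, A/B, C/D
```

With these numbers the direct split AB/CD (40 + 15 + 12 = 67) beats both A/BCD and ABC/D, which is exactly the kind of discrimination a linear superstructure model is meant to provide.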
[Figure 2 sketches an ethanol process at several levels of detail: (a) the detailed structure representation, with streams such as ethylene plus waste gases, diethyl ether/ethanol/water, and waste plus water; (b) aggregation to level functions (feed preparation, reaction, recovery); (c) total aggregation of the process into a single block; and (d) a representation based on tasks (change P, change T, mix, change compositions, change species).]

Figure 2. Different alternatives for process flowsheet representation.
Because we deal with the synthesis problem through superstructure optimization, it is of interest first to show some alternatives for representing a flowsheet or a part of it. As we constrain ourselves to linear models, we will then show how it is possible to develop linear approximations to some common equipment that appears in a chemical plant. When we deal with alternatives it is common that there are some 'logical' relations, either due to the very nature of the process or to some design specification, so we introduce a section on how to include logic inference in an MILP problem. Finally, we will show a set of small (and some not so small) problems to illustrate some applications of LP/MILP in the synthesis of chemical processes, including total flowsheets, utility systems, reactor networks, general separation networks, separation based on distillation, and heat exchanger networks.
REPRESENTATION OF ALTERNATIVES

Generating superstructures is by itself a difficult problem and is out of the scope of this chapter. However, the representation of alternative decisions for the process is intimately tied to the way we intend to generate and search among these alternatives. For example, an obvious representation of the ethanol process consists in including all the equipment and how it is interlinked (Figure 2a). To simplify this representation we might aggregate equipment to represent a higher level function such as «feed preparation, reaction and recovery», as shown in Figure 2b. We may even aggregate the entire flowsheet into a single object (Figure 2c). In creating a representation, the goal is to provide a relevant but concise depiction of the design space that allows easier recognition and evaluation of the available alternatives.
[Figure 3 plots temperature T (K) against heat flow (kW) for a hot stream and a cold stream, with the overlap indicating the heat transferred between the two streams.]

Figure 3. Representing heat exchange between two streams.
For instance, in addition to thinking of the unit operations in a process, we can base our representation of alternatives on the "tasks" that occur in the process, such as heating, reaction or separation. Figure 2d shows such a representation for the ethyl alcohol process; there will usually be many different alternatives for this association of tasks to equipment. This representation is also very useful for batch processes, where many of these tasks occur in the same piece of equipment but at different times. Finally, for process subsystems, more specialized representations are in common use. For the synthesis of heat exchanger networks, for instance, we represent the flow of heat in a process using a plot of temperature versus the amount of heat transferred, as shown in Figure 3. This representation does not even look like a process flowsheet, but it does describe the alternative ways to exchange heat among numerous process streams. There are many different representations that can be used to think about the design problem and to describe alternatives for it. It can take years to discover a useful representation and to work out its implications for design. A useful representation is, therefore, a significant intellectual contribution to design.
DEVELOPING UNIT MODELS FOR LINEAR MASS BALANCES
Once the temperature and pressure are fixed in the feed and output streams, we can develop a linear set of equations for each process unit and thereby solve the entire flowsheet with these equations. The overall strategy is (Biegler et al., 1997):

1. Fix temperatures and pressures for all process streams.
2. Approximate each unit with split fractions, representing outlet molar flows as linearly related to the inlet molar flows.
3. Combine the linear equations and solve the overall mass balances.
4. Recalculate stream temperatures and pressures from equilibrium relationships.
5. If there are no large changes in temperatures and pressures go to step 6; otherwise go to step 1.
6. Given all temperatures and pressures, perform the energy balance and evaluate heat duties.

In order to follow this decomposition, we assume that all vapor and liquid streams have ideal equilibrium relationships (particularly in step 2) and that, unless stated otherwise, all streams are at saturated conditions. With these assumptions, physical properties can be calculated easily from standard handbook data. The advantage of this approach is that the calculations are very easy to set up and solve, with few iterations (usually no more than two) required to converge the preliminary design. In this section we show as examples how to construct linear model approximations for mixers, splitters, reactors and a flash (the simplest method of separating the components of a mixture based on differences in their volatilities), which is the most important calculation in a flowsheet; from it, other linear models (e.g., a distillation column) can be derived. Additional information on other shortcut separation units can be found in Douglas (1988), Perry et al. (1984) and Biegler et al. (1997). To construct the linear unit models we label the stream vector of
molar flows $\phi^{i,j}$ as the $j$-th output stream of unit $i$; $\phi_k^{i,j}$ is the flow rate of component $k$ in that stream. If there is only one outlet stream of unit $i$, the subscript $j$ is suppressed. Note that with this notation we express stream compositions in terms of molar flows instead of mole fractions, as this preserves the linearity of the equations.
Mixer Unit

This unit merely sums all of the inlet streams into a single output stream with the following mass balance equations. Given upstream units $i_1, i_2, \dots$ that feed into the mixer with the $j_1$-th outlet from unit $i_1$, the $j_2$-th outlet from unit $i_2$, etc., the balance for component $k$ is written as:

$$ M_k = \sum_l \phi_k^{i_l, j_l} $$
Splitter Unit

The splitter unit divides a given feed stream into specified fractions $\eta_j$ for each output stream $j$. Note that all output streams have the same composition as the feed stream. Thus, for $NS$ output streams we have $NS-1$ degrees of freedom in choosing the $\eta_j$, and we can write the
equations:

$$ S_k^j = \eta_j\, \phi_k^{IN}, \qquad j = 1, \dots, NS-1 $$

$$ S_k^{NS} = \left( 1 - \sum_{j=1}^{NS-1} \eta_j \right) \phi_k^{IN} $$
Reactor (Fixed Conversion Model)

For linear mass balances, we assume that the reactor model can be simplified by specifying the molar conversion of the $NR$ parallel reactions in advance. As a result, the mass balance equations remain linear and relatively easy to solve. For each reaction $r$, we define a limiting component $l(r)$ and normalize the stoichiometric coefficients (stoichiometry is the branch of chemistry that deals with the quantitative relationships among the reactants and products in chemical reactions): $\nu_{r,k} = C_{r,k} / C_{r,l(r)}$, $r = 1, \dots, NR$, for each component $k$, where the coefficients $C_{r,k}$ appear in the specified reaction. We also adopt the convention:
$$ \nu_{r,k} \; \begin{cases} > 0, & k \text{ a product} \\ < 0, & k \text{ a reactant} \\ = 0, & k \text{ an inert} \end{cases} $$

Defining the conversion per pass of reaction $r$, based on its limiting reactant, as $\xi_r$ gives us:

$$ R_k = \phi_k^{IN} + \sum_{r=1}^{NR} \nu_{r,k}\, \xi_r\, \phi_{l(r)}^{IN} $$
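The fixed-conversion reactor balance can be sketched as follows; the reaction A + 2B → C, its normalized stoichiometry and the 0.8 per-pass conversion are illustrative assumptions:

```python
# Fixed-conversion reactor sketch: outlet flows are linear in inlet flows
# once the per-pass conversions xi_r of each reaction are specified.

def reactor(inlet, reactions):
    out = dict(inlet)
    for nu, limiting, xi in reactions:       # nu: component -> coefficient
        converted = xi * inlet[limiting]     # moles of limiting reactant reacted
        for k, coeff in nu.items():
            out[k] = out.get(k, 0.0) + coeff * converted
    return out

# Illustrative reaction A + 2B -> C with B limiting, so the normalized
# coefficients nu = C_{r,k} / C_{r,l(r)} are nu_A = -1/2, nu_B = -1, nu_C = +1/2.
rxn = ({"A": -0.5, "B": -1.0, "C": 0.5}, "B", 0.8)
outlet = reactor({"A": 10.0, "B": 4.0, "C": 0.0}, [rxn])
print(outlet)    # B drops by 0.8*4 = 3.2, A by 1.6, C rises by 1.6
```

Because the conversion is fixed in advance, the outlet is an affine function of the inlet flows, which is what keeps the overall flowsheet balances linear.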
Flash Units

This calculation is the most fundamental and important one in a flowsheet. Aside from representing the physical separation unit itself, it is the building block for deriving linear models for equilibrium staged separations like distillation and absorption. To develop a flash model, we first define an overhead split fraction $\xi_k = v_k / f_k$ for each of the $ncomp$ components $k$. We further identify component $n$ as a key component (that is, the component taken as a reference in the mixture) on which a given recovery can be imposed, and also define $\beta = V/F$ for a specified vaporization fraction of the feed. As specifications, the variables $\xi_n$, $\beta$, $P$, $T$ and $Q$ can be fixed, where $Q$ is the heat supplied to the flash unit. If we now write the equations for the flash unit:
$$ f_k = l_k + v_k $$

$$ \frac{v_k / V}{l_k / L} = K_k(x, P, T) $$

$$ L = \sum_k l_k, \qquad V = \sum_k v_k $$
We find that, for a specified feed, the number of degrees of freedom (number of variables minus number of equations) is two. This means that we can completely specify the condition of the flash by selecting two of the variables. Since we have not yet considered energy balances, we focus on the following three cases:

Case 1: specify the key component overhead recovery and $T$ or $P$.
Case 2: specify $T$ and $P$ (isothermal flash).
Case 3: specify the vapor fraction and $T$ or $P$.

The first case is very useful for shortcut methods but is not used for detailed models; Cases 2 and 3 are needed for analyzing design and operating conditions.
We now consider some approximations for vapor/liquid phase equilibrium. Equating the mixture fugacities in each phase leads to a reasonably general expression at low to moderate pressures:

$$ \varphi_k\, y_k\, P = \gamma_k\, x_k\, f_k^0, \qquad k = 1, \dots, ncomp $$
where $\varphi_k$ is the vapor fugacity coefficient (fugacity is a chemical quantity with units of pressure, intended to describe a gas's real-world behavior better than the ideal pressure $P$ used in the ideal gas law), $\gamma_k$ is the liquid activity coefficient, and $f_k^0$ is the pure component fugacity. For process calculations, it is often convenient to represent the equilibrium relations as:
$$ y_k = K_k\, x_k, \qquad K_k = \frac{\gamma_k\, f_k^0}{\varphi_k\, P} $$

For our shortcut calculations, we assume ideal behavior, which leads to the following assumptions:

$$ \varphi_k = 1; \qquad \gamma_k = 1; \qquad f_k^0 = P_k^0 $$

with the Antoine equation for the vapor pressure, $\ln(P_k^0) = A_k - B_k/(T + C_k)$, a representative correlation whose coefficients can be found in data handbooks (e.g., Reid et al., 1987). These assumptions lead to Raoult's law:

$$ y_k\, P = x_k\, P_k^0 $$

which can be rewritten as:
$$ K_k = \frac{y_k}{x_k} = \frac{P_k^0}{P} $$
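These ideal K values can be evaluated directly from the Antoine correlation; the coefficients, temperature and pressure below are placeholders for illustration, not handbook values:

```python
import math

# Ideal Raoult's-law K values from the Antoine equation.  The Antoine
# coefficients here are illustrative placeholders; real coefficients, and
# their units, should be taken from a data handbook (e.g., Reid et al.).

def vapor_pressure(A, B, C, T):
    """Antoine equation: ln(P0) = A - B / (T + C)."""
    return math.exp(A - B / (T + C))

def k_value(A, B, C, T, P):
    """Ideal K value K = P0(T) / P (phi = gamma = 1)."""
    return vapor_pressure(A, B, C, T) / P

T, P = 350.0, 101.3                      # K and kPa, illustrative conditions
K_light = k_value(14.0, 2900.0, -40.0, T, P)
K_heavy = k_value(14.5, 3500.0, -50.0, T, P)
alpha = K_light / K_heavy                # relative volatility, light vs heavy
print(K_light, K_heavy, alpha)
```

The more volatile component has the larger vapor pressure and hence the larger K value, so its relative volatility with respect to the heavier one exceeds unity.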
In relation to key components, we can now define a relative volatility (volatility is the quality of having a low boiling point or subliming temperature at ordinary pressure or, equivalently, a high vapor pressure at ordinary temperatures):

$$ \alpha_{k/n} = \frac{K_k}{K_n} = \frac{P_k^0}{P_n^0} $$
which for ideal systems is independent of $P$ and is much less sensitive to $T$ than $K$. Note that component $k$ can be nonvolatile, in which case $\alpha_{k/n} = 0$; on the other hand, if component $k$ is noncondensible, $\alpha_{k/n} \to \infty$. We can now rederive and simplify the flash equations. Let
$$ \alpha_{k/n} = \frac{K_k}{K_n} = \frac{y_k / x_k}{y_n / x_n} = \frac{v_k / l_k}{v_n / l_n} $$
We now introduce the split fractions and define:
$$ v_k = \xi_k\, f_k \qquad \text{and} \qquad l_k = (1 - \xi_k)\, f_k $$

Substituting these into the previous equations gives:

$$ K_k = \frac{\xi_k}{1 - \xi_k}\, \frac{L}{V}, \qquad \alpha_{k/n} = \frac{\xi_k / (1 - \xi_k)}{\xi_n / (1 - \xi_n)} $$

Rearranging this expression:

$$ \xi_k = \frac{\alpha_{k/n}\, \xi_n}{1 + (\alpha_{k/n} - 1)\, \xi_n} $$
which gives the recovery of each component in terms of the key component recovery. Note also that the limiting cases of nonvolatiles ($\alpha_{k/n} = 0$, $\xi_k = 0$) and noncondensibles ($\alpha_{k/n} \to \infty$, $\xi_k = 1$) are captured by the equation. With the specification of the key
component recovery, we have fixed one of the two required degrees of freedom. Implicit in the above expression is that a correct value of the temperature $T$ was known in advance in order to calculate the relative volatilities. Given that we have specified $T$ or $P$, how do we calculate the corresponding value of $P$ or $T$? Moreover, if we have specified $T$ or $P$ directly, how do we use the above equation to determine the corresponding key component recovery? Here we need to consider a bubble (or dew) point equation that also must be satisfied at equilibrium. At the bubble point we have:
$$ \sum_i y_i = \sum_i K_i\, x_i = 1 $$

Written in terms of relative volatilities:

$$ \frac{1}{K_n} = \sum_i \frac{K_i}{K_n}\, x_i = \sum_i \alpha_{i/n}\, x_i \equiv \bar{\alpha} $$

where $\bar{\alpha}$ is defined as an average relative volatility. Using this definition allows us to redefine the $K$ values as:
$$ K_k = \frac{P_k^0}{P} = \frac{\alpha_{k/n}}{\bar{\alpha}} $$

which forms a simplified bubble point equation. For $T$ fixed and $P$ unknown, we can calculate a value of $P$ directly from:
$$ P = \frac{\bar{\alpha}\, P_k^0(T)}{\alpha_{k/n}} $$
On the other hand, for $P$ fixed and $T$ unknown, the value of $T$ can be calculated approximately from:

$$ P_k^0(T) = \frac{\alpha_{k/n}\, P}{\bar{\alpha}} $$

To reduce approximation errors, it is convenient to choose the index $k$ to be the most abundant component in the liquid phase. With the above equations we can now develop the following algorithms for the three most commonly specified problems.
Case 1: $\xi_n$ and $P$ (or $T$) fixed

a. For the specified $\xi_n$ and $P$ (or $T$), guess $T$ (or $P$).
b. Calculate $K_k$ and $\alpha_{k/n}$ at the specified $T$.
c. Evaluate $\xi_k = \alpha_{k/n}\, \xi_n / (1 + (\alpha_{k/n} - 1)\, \xi_n)$ for each component $k$.
d. Reconstruct the mass balances: $v_k = \xi_k f_k$, $l_k = (1 - \xi_k) f_k$.
e. Calculate the mole fractions: $y_k = v_k / \sum_i v_i$, $x_k = l_k / \sum_i l_i$.
f. For $T$ fixed, $P = \bar{\alpha}\, P_k^0(T) / \alpha_{k/n}$; for $P$ fixed, solve for $T$ from $P_k^0(T) = \alpha_{k/n}\, P / \bar{\alpha}$.
Case 2: $T$ and $P$ fixed

a. For the specified $T$ and $P$, pick a key component $n$ and guess $\xi_n$. Follow steps b to e of the Case 1 algorithm.
b. If the bubble point equation $P = \bar{\alpha}\, P_k^0(T) / \alpha_{k/n}$ is satisfied, stop. Otherwise re-guess $\xi_n$ and repeat.

Case 3: $\beta$ and $P$ (or $T$) fixed

a. Specify $\beta = V/F$ and $P$ (or $T$).
b. Guess $T$ (or $P$); calculate $\alpha_{k/n}$ and $K_k$, and note that $v_n / l_n = K_n\, \beta / (1 - \beta)$. Define

$$ \xi_n = \frac{K_n\, \beta / (1 - \beta)}{1 + K_n\, \beta / (1 - \beta)} $$

Then follow steps c to e of the Case 1 algorithm.
c. If the bubble point equation is satisfied, stop. Otherwise re-guess $T$ (or $P$).
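For $T$ fixed, the Case 1 algorithm needs no iteration, since $P$ follows directly from the simplified bubble point equation; a sketch with an illustrative three-component feed and placeholder Antoine coefficients:

```python
import math

# Case 1 shortcut flash sketch: key recovery xi_n and T specified, P computed.
# Antoine coefficients and the feed are illustrative placeholders.

def antoine_p0(coeffs, T):
    A, B, C = coeffs
    return math.exp(A - B / (T + C))

def case1_flash(feed, antoine, key, xi_key, T):
    comps = list(feed)
    P0 = {k: antoine_p0(antoine[k], T) for k in comps}
    alpha = {k: P0[k] / P0[key] for k in comps}        # relative volatilities
    # step c: split fraction of every component from the key recovery
    xi = {k: alpha[k] * xi_key / (1.0 + (alpha[k] - 1.0) * xi_key)
          for k in comps}
    v = {k: xi[k] * feed[k] for k in comps}            # step d: mass balances
    l = {k: (1.0 - xi[k]) * feed[k] for k in comps}
    L = sum(l.values())
    x = {k: l[k] / L for k in comps}                   # step e: mole fractions
    # step f (T fixed), with the key itself as index k (alpha_{n/n} = 1):
    alpha_bar = sum(alpha[k] * x[k] for k in comps)
    P = alpha_bar * P0[key]
    return v, l, P

feed = {"light": 40.0, "key": 40.0, "heavy": 20.0}     # mol/h, illustrative
antoine = {"light": (14.0, 2700.0, -40.0),
           "key":   (14.2, 3100.0, -45.0),
           "heavy": (14.5, 3600.0, -50.0)}
v, l, P = case1_flash(feed, antoine, "key", xi_key=0.5, T=360.0)
print(P, v, l)
```

As expected from the split-fraction formula, components lighter than the key reach recoveries above $\xi_n$ and heavier components below it.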
Besides, sometimes a given piece of equipment, or even a whole process, can be represented by linear correlations relating the input and output streams, which again is enough for comparison purposes. If we are also interested in an economic evaluation, we must consider different aspects: cost of raw materials, investment and operating costs, taxes, product prices, etc. However, once the economic feasibility of the process has been established (usually a rough calculation of the main costs, such as energy, raw materials, land acquisition and taxes, is enough for this), to compare alternatives we can discard all costs common to all the alternatives and focus on the differences (i.e., the investment and operating costs of a given alternative). The operating costs can be modeled assuming a fixed unit price for the utilities (cooling water, heating steam, electricity, etc.). For example, we may know the price per kg of high-pressure steam or the equivalent per kJ. In any case, the operating cost can be considered linearly correlated with the utility consumption. For the preliminary equipment calculations, we note that equipment cost ($C$) increases nonlinearly with equipment size ($S$). This behavior can often be captured with a power law expression:
$$ C = C_0 \left( \frac{S}{S_0} \right)^{a} $$

where the exponent $a$ is less than one, often about 0.6 or 0.7, and $S_0$ and $C_0$ are the base capacity and cost, respectively.
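A quick numeric check of the economy of scale implied by the power law (base size, base cost and exponent are illustrative values):

```python
# Six-tenths-rule style scaling sketch: C = C0 * (S / S0) ** a, with a < 1.

def power_law_cost(S, S0=100.0, C0=50_000.0, a=0.6):
    return C0 * (S / S0) ** a

# Economy of scale: doubling capacity multiplies cost by 2**0.6, not by 2.
c1, c2 = power_law_cost(100.0), power_law_cost(200.0)
print(c1, c2, c2 / c1)
```

The cost ratio for a doubling of capacity is $2^{0.6} \approx 1.52$, which is the incremental-cost decrease the text describes.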
[Figure 4 plots cost against the capacity variable: (a) a piecewise linear approximation of the power-law cost curve (dotted line); (b) a fixed charge approximation.]

Figure 4. (a) Piecewise approximation (dotted line). (b) Fixed charge approximation.
This nonlinear cost behavior reflects the economy of scale, where the incremental cost decreases at larger capacities. The capacity of a piece of equipment is often related to a single variable (e.g., the area of a heat exchanger, the volume of a vessel, the energy consumed in the
case of a compressor, etc.). Therefore, it is possible to approximate the cost by a piecewise linear approximation (see Figure 4a). If we are comparing similar equipment, or if the capacity range is not too large, we can also approximate the cost by a linear equation formed by a fixed charge cost plus a variable cost that depends on the capacity (Figure 4b):
$$ C = C_{fixed} + C_{var}\, S $$
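One simple way to obtain such a fixed charge approximation is to match the power-law cost at the endpoints of the expected capacity range; all numbers below are illustrative:

```python
# Fixed-charge approximation sketch: fit C ~ C_fixed + C_var * S to the
# power-law cost over a capacity range by matching its endpoints.

def power_law_cost(S, S0=100.0, C0=50_000.0, a=0.6):
    return C0 * (S / S0) ** a

S_lo, S_hi = 50.0, 200.0
C_lo, C_hi = power_law_cost(S_lo), power_law_cost(S_hi)
C_var = (C_hi - C_lo) / (S_hi - S_lo)      # slope of the secant line
C_fixed = C_lo - C_var * S_lo              # intercept (the fixed charge)

def fixed_charge_cost(S, y=1):
    """In a MILP, y is the 0-1 selection variable: cost is 0 if not built."""
    return C_fixed * y + C_var * S * y

for S in (50.0, 100.0, 200.0):
    print(S, power_law_cost(S), fixed_charge_cost(S))
```

Since the power law is concave, the secant underestimates cost between the endpoints, but it matches them exactly and, multiplied by a binary selection variable, keeps the model linear.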
INCLUSION OF LOGIC INFERENCE IN MILP MODELS

In the synthesis of chemical process plants, logical constraints commonly appear that describe equipment relationships or operations: select only one of the reactors; if the absorber is selected, then do not use cryogenic separation; if, in separating a mixture of five components (A, B, C, D and E), the split ABC/DE is selected, then a column that receives ABC as feed and a column that receives DE as feed must both be selected. In this section we present a framework that systematically allows one to derive linear constraints involving 0-1 variables that enforce such logical relationships. Some of these constraints are straightforward, but others are not. For instance, specifying that exactly one reactor must be selected among a set of candidate reactors can simply be expressed as:
$$ \sum_{i \in R} y_i = 1, \qquad R = \text{set of candidate reactors} $$
On the other hand, consider representing the constraint «if the absorber to recover the product is selected, or the membrane separator is selected, then do not use cryogenic separation». By trial and error we could arrive at the following constraint:
$$ y_A + y_M + 2\, y_C \leq 2 $$

where $y_A$, $y_M$, $y_C$ are the 0-1 variables for selecting the corresponding units (absorber, membrane, cryogenic separator). Note that if $y_A = 1$ and/or $y_M = 1$, the constraint forces $y_C = 0$. We will see, however, that we can systematically arrive at the alternative constraints

$$ y_A + y_C \leq 1, \qquad y_M + y_C \leq 1 $$

which are not only equivalent but also more efficient, in the sense that they are tighter: they constrain the feasible region more. In order to systematically derive constraints involving 0-1 variables, it is useful to first think of the corresponding propositional logic expression that we are trying to model (Raman
and Grossmann, 1991). For this, one first must consider the basic logic operators and determine how each one can be transformed into an equivalent representation in the form of an equation or inequality. These transformations are then used to convert general logical expressions into an equivalent mathematical representation (Cavalier and Soyster, 1987; Williams, 1985). To each literal $P_i$, representing a selection or action, a binary variable $y_i$ is assigned; the negation or complement of $P_i$ ($\neg P_i$) is then given by $(1 - y_i)$. The logical value true corresponds to the binary value 1 and false to 0. One can systematically model an arbitrary propositional logic expression given in terms of the conjunction (AND, $\wedge$), disjunction (OR, $\vee$) or implication ($\Rightarrow$) operators as a set of linear equality and inequality constraints. One approach is to systematically convert the logical expression into its equivalent conjunctive normal form, which involves the application of purely logical operations (Raman and Grossmann, 1991). The conjunctive normal form is a conjunction of clauses $(Q_1 \wedge Q_2 \wedge Q_3 \wedge \dots)$, all of them connected by the AND operator. Hence, for the conjunctive normal form to be true, each clause $Q_i$ must be true independently of the others. Also, since each $Q_i$ is just a disjunction of literals $(P_1 \vee P_2 \vee P_3 \vee \dots)$, all of them connected by OR operators, it can be expressed in linear mathematical form as the inequality:
y1 + y2 + y3 + … + yr ≥ 1

The procedure to convert a logical expression into its corresponding conjunctive normal form was formalized by Clocksin and Mellish (1981). The systematic procedure consists of applying the following three steps to each logical proposition:
1. Replace the implication by its equivalent disjunction:

P1 ⇒ P2  is equivalent to  ¬P1 ∨ P2

2. Move the negation inward by applying De Morgan's Theorem:

¬(P1 ∧ P2)  is equivalent to  ¬P1 ∨ ¬P2
¬(P1 ∨ P2)  is equivalent to  ¬P1 ∧ ¬P2

3. Recursively distribute the disjunction (OR operator) over the conjunction (AND operator) by using the following equivalence:

(P1 ∧ P2) ∨ P3  is equivalent to  (P1 ∨ P3) ∧ (P2 ∨ P3)
Having converted each logical proposition into its conjunctive normal form representation, the logical equations can be easily written in terms of binary variables and linear relationships, because each clause in the conjunctive normal form corresponds to a linear inequality.
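The three steps above can be sketched in code. The following is our own minimal illustration (the tuple-based formula encoding and all function names are our assumptions, not from the chapter):

```python
# Sketch of the three-step conversion to conjunctive normal form:
# eliminate implications, push negations inward with De Morgan's theorem,
# and distribute OR over AND. Formulas are nested tuples:
# ('lit', 'PA'), ('not', f), ('and', f, g), ('or', f, g), ('imp', f, g).

def elim_imp(f):
    """Step 1: replace P1 => P2 by (not P1) or P2, recursively."""
    op = f[0]
    if op == 'lit':
        return f
    if op == 'not':
        return ('not', elim_imp(f[1]))
    if op == 'imp':
        return ('or', ('not', elim_imp(f[1])), elim_imp(f[2]))
    return (op, elim_imp(f[1]), elim_imp(f[2]))

def push_not(f, neg=False):
    """Step 2: move negations inward (De Morgan) until they sit on literals."""
    op = f[0]
    if op == 'lit':
        return ('not', f) if neg else f
    if op == 'not':
        return push_not(f[1], not neg)
    if op == 'and':
        return ('or' if neg else 'and', push_not(f[1], neg), push_not(f[2], neg))
    if op == 'or':
        return ('and' if neg else 'or', push_not(f[1], neg), push_not(f[2], neg))

def distribute(f):
    """Step 3: recursively distribute OR over AND."""
    op = f[0]
    if op in ('lit', 'not'):
        return f
    a, b = distribute(f[1]), distribute(f[2])
    if op == 'and':
        return ('and', a, b)
    # (P1 and P2) or P3  ->  (P1 or P3) and (P2 or P3)
    if a[0] == 'and':
        return ('and', distribute(('or', a[1], b)), distribute(('or', a[2], b)))
    if b[0] == 'and':
        return ('and', distribute(('or', a, b[1])), distribute(('or', a, b[2])))
    return ('or', a, b)

def to_cnf(f):
    return distribute(push_not(elim_imp(f)))

def clauses(f):
    """Flatten a CNF formula into a list of clauses (lists of signed literals)."""
    if f[0] == 'and':
        return clauses(f[1]) + clauses(f[2])
    def lits(g):
        return lits(g[1]) + lits(g[2]) if g[0] == 'or' else [g]
    return [lits(f)]

# (PA or PM) => not PC  gives the two clauses derived in the text:
f = ('imp', ('or', ('lit', 'PA'), ('lit', 'PM')), ('not', ('lit', 'PC')))
print(clauses(to_cnf(f)))  # two clauses: (not PA or not PC) and (not PM or not PC)
```

Each clause returned by `clauses` then maps directly to one linear inequality, as described in the text.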
J. A. Caballero and M. A. S. S. Ravagnani
The following example illustrates the procedure. Consider the logic condition we gave above: «If the absorber to recover the product is selected or the membrane separator is selected, then do not use cryogenic separation». Assigning a boolean literal to each action, PA = select absorber; PM = select membrane separator; PC = select cryogenic separation, the logic expression is given by:

PA ∨ PM ⇒ ¬PC

Removing the implication by substituting its equivalent disjunction:

¬(PA ∨ PM) ∨ ¬PC

Applying De Morgan's Theorem leads to:

(¬PA ∧ ¬PM) ∨ ¬PC

Distributing the disjunction (OR operator) over the conjunction (AND operator) gives:

(¬PA ∨ ¬PC) ∧ (¬PM ∨ ¬PC)
The previous logical expression is already in its conjunctive normal form. Assigning the corresponding 0–1 variables to each term in the above conjunction and using the algebraic equivalent of each disjunction we get:

(1 − yA) + (1 − yC) ≥ 1
(1 − yM) + (1 − yC) ≥ 1

which can be rearranged to the two inequalities that we had postulated at the beginning of the section:

yA + yC ≤ 1
yM + yC ≤ 1

From the above example it can be seen that logical expressions can be represented by a set of inequalities. An integer solution that satisfies all the constraints then determines a set of values for all the literals that makes the logical system consistent. This is a logical inference problem: given a set of n logical propositions, one would like to prove whether a certain clause is always true. It must be noted that the one exception where applying the above procedure becomes cumbersome is when dealing with constraints that limit choices, for example, select no more than one reactor. In that case it is easier to write the constraints directly and not go through the above formalism.
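The claimed equivalences can be verified by brute force over the eight 0–1 assignments. This is our own sketch, not code from the chapter:

```python
# Brute-force check that the tight pair yA + yC <= 1, yM + yC <= 1 and the
# aggregate constraint yA + yM + 2*yC <= 2 are both satisfied by exactly the
# same 0-1 assignments that make (PA or PM) => not PC true.
from itertools import product

def logic_ok(pa, pm, pc):
    return (not (pa or pm)) or (not pc)

def ineq_ok(ya, ym, yc):
    return ya + yc <= 1 and ym + yc <= 1

def agg_ok(ya, ym, yc):
    return ya + ym + 2 * yc <= 2

for ya, ym, yc in product((0, 1), repeat=3):
    truth = logic_ok(bool(ya), bool(ym), bool(yc))
    assert truth == ineq_ok(ya, ym, yc) == agg_ok(ya, ym, yc)
print("both formulations are equivalent on all 8 assignments")
```

The two formulations admit the same integer points; the tighter pair, however, cuts off more of the fractional (LP-relaxed) region, which is why it is preferred.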
A Review on Linear Programming ...
As an application of the material above, let us consider logic inference problems in which, given the validity of a set of propositions, we have to prove the truth or validity of a conclusion that may be either a literal or a proposition. The logic inference problem can be expressed as:
Prove Qu
s.t. B(Q1, Q2, …, Qs)
where Qu is the clause or proposition expressing the conclusion to be proved and B is the set of clauses. Given that all the logical propositions have been converted to a set of linear inequalities, the inference problem can be formulated as the following MILP problem (Cavalier and Soyster, 1987):
min Z = Σi∈I(u) ci yi
s.t.  Ay ≥ a
      y ∈ {0, 1}^n
where Ay ≥ a is the set of inequalities obtained by translating B(Q1, Q2, …, Qs) into its linear mathematical form, and the objective function is obtained by also converting the clause Qu into its equivalent mathematical form. Here the index set I(u) corresponds to the binary variables associated with the clause Qu. The clause is always true if Z = 1 on minimizing the objective function as an integer programming problem. If Z = 0 for the optimal integer solution, this establishes an instance where the clause is false; therefore, in that case the clause is not always true. In many instances, the optimal integer solution will be obtained by solving the linear programming relaxation (Hooker, 1988). Even if no integer solution is obtained, it may be possible to reach conclusions from the relaxed LP problem if the solution is one of the following types (Cavalier and Soyster, 1987): a.
Zrelaxed > 0: the clause is always true even if Zrelaxed < 1.

Consider now the synthesis of distillation sequences. Given a mixture of M components to be separated into N products (M > N) defined by N key components that must be completely separated, we can group the components into N groups and transform the problem to the first case. Therefore, without loss of generality and for the sake of simplicity, we will focus only on the case in which an N component mixture is to be separated into its N pure components. The synthesis of general distillation sequences for azeotropic systems, even using MILP models, is too complex and out of the scope of this overview (see for example Caballero and Grossmann, 2006). However, we can restrict ourselves to conventional columns: a conventional column has one feed and two products (distillate and bottoms), a reboiler and a condenser. It is well known that the minimum number of distillation columns required to separate an N component mixture is N − 1, and each of these columns performs a sharp separation between adjacent components (a mixture of N components is named by its components ordered by decreasing relative volatility). An interesting characteristic of these column systems is that, under the sharp split assumption, we can calculate each of the possible separations a priori, either by a shortcut method or even using a rigorous process simulator (detailed simulation). The error introduced is negligible; in other words, assuming a recovery of 0.999 or of 1 for a component leaves the total component flows almost the same. Equivalently, the feed to each possible distillation column can be calculated a priori almost without error. An example will clarify this point. If we have a 4 component mixture ABCD (e.g., 100 kmol/h; molar fractions 0.2, 0.3, 0.4, 0.1), the possible separations are: initially A/BCD, AB/CD, ABC/D; from ABC: A/BC, AB/C; from BCD: B/CD, BC/D; from AB: A/B; from BC: B/C; from CD: C/D. For example, the feed to any column fed with BCD is a mixture that contains 30 kmol/h of B, 40 kmol/h of C and 10 kmol/h of D (strictly speaking maybe not exactly those amounts, but under the sharp split approximation the difference can be considered negligible). Therefore it is possible with an MILP problem to determine the optimal sequence in a completely rigorous way. Andrecovich and Westerberg (1985) simplified this approach even more, arguing that the cost of a distillation column can be calculated in terms of the feed flowrate and the heat duties in the condenser and the reboiler. Assuming equal loads in condenser and reboiler (this is common to a large number of columns because internal flows are larger than external flows and feeds are assumed to be saturated liquids), the heat duty for column k can be expressed as the linear function
Qk = Kk Fk

where Kk is a constant calculated from a shortcut or rigorous model and Fk is the feed flow to column k. Therefore the total annual cost can be calculated as:

Cost = Σk (αk + βk Fk) + Σk (CH + CC) Qk

Based on the previous considerations, Andrecovich and Westerberg (1985) postulated the superstructure previously shown in Figure 1 for a 4 component mixture. The model, a modification of the one originally proposed by Andrecovich and Westerberg (1985), can be written as follows:
Index sets:
COL {k  k is a column}
S {m  m is a mixture} (i.e. ABCD, ABC, AB, BC, A, …)
COMP {i  i is a component}
IP(m) {m  m is an intermediate mixture} (i.e. AB, ABC, BCD, …)
TN(m) {m  m is a terminal mixture} (i.e. A, B, C, D, …)
SD(m,k) {the distillate of column k goes to mixture m}
SB(m,k) {the bottoms of column k goes to mixture m}
SF(m,k) {mixture m is the feed of column k}
Init(k) {columns k that have the initial mixture as feed}
Variables:
Fk,i, Dk,i, Bk,i: individual molar flow rates of the feed, distillate and bottoms of column k.
Qk Heat load in column k. Objective function:
min: Total Cost = Σk∈COL (αk yk + βk Fk + (CH + CC) Qk)

where Fk = Σi∈COMP Fk,i is the total feed flow to column k.
Mass balance in initial node:
F0 zi = Σk∈Init Fk,i      i ∈ COMP
Mass balances in intermediate nodes:
Σk∈SD(m,k) Dk,i + Σk∈SB(m,k) Bk,i = Σk∈SF(m,k) Fk,i      i ∈ COMP, m ∈ IP
Mass balances in terminal nodes:
Σk∈SD(m,k) Dk,i + Σk∈SB(m,k) Bk,i = F0 zi      i ∈ COMP, m ∈ TN
Mass balances in columns and sharp split specification:
Fk,i = Dk,i + Bk,i
Dk,i = Fk,i for components i at or lighter than the light key of column k; Dk,i = 0 (and hence Bk,i = Fk,i) otherwise.

All variables related to a column must be forced to zero if the column does not exist:

Fk,i ≤ U yk;   Dk,i ≤ U yk;   Bk,i ≤ U yk
Qk = Kk Σi∈COMP Fk,i
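Under the sharp split assumption, the feeds Fk,i of all candidate columns follow directly from the fresh feed, which is what keeps the model linear. The following is our own sketch of that bookkeeping (function names are ours) for the 4 component example discussed earlier:

```python
# Sharp-split bookkeeping: the feed to every possible column is known a
# priori, so the component flows of each sub-mixture follow directly from
# the fresh feed (100 kmol/h with molar fractions 0.2, 0.3, 0.4, 0.1).
F0 = 100.0                                    # kmol/h
z = {'A': 0.2, 'B': 0.3, 'C': 0.4, 'D': 0.1}  # molar fractions

def submixtures(m):
    """All sub-mixtures generated by sharp splits of contiguous mixture m."""
    out = {m}
    for i in range(1, len(m)):
        out |= submixtures(m[:i]) | submixtures(m[i:])
    return out

feeds = {m: {c: F0 * z[c] for c in m} for m in submixtures('ABCD') if len(m) > 1}
print(feeds['BCD'])   # {'B': 30.0, 'C': 40.0, 'D': 10.0}
```

Every multi-component sub-mixture (ABCD, ABC, BCD, AB, BC, CD) gets its feed flows fixed in advance, exactly as the text argues.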
Example 10
The following example illustrates the procedure for a 6 component mixture. The data are given in Table 13.
Table 13. Data for example 10
k    Separator   αk (1000 $/year)   βk (1000 umh/kmol year)   Kk (10^6 kJ/kmol)
1    A/BCDEF     310                0.5200                    0.0130
2    AB/CDEF     55                 0.1100                    0.0730
3    ABC/DEF     255                0.7100                    0.0150
4    ABCD/EF     90                 0.4200                    0.0450
5    ABCDE/F     200                0.8100                    0.0200
6    B/CDEF      40                 0.0800                    0.0930
7    BC/DEF      232                0.5100                    0.0180
8    BCD/EF      85                 0.2600                    0.0470
9    BCDE/F      180                0.3800                    0.0230
10   A/BCDE      295                0.6500                    0.0140
11   AB/CDE      43                 0.1200                    0.0920
12   ABC/DE      235                0.5300                    0.0170
13   ABCD/E      70                 0.2500                    0.0570
14   A/BCD       250                0.6300                    0.0630
15   AB/CD       35                 0.1300                    0.0160
16   ABC/D       210                0.5000                    0.0190
17   B/CDE       32                 0.2000                    0.1250
18   BC/DE       215                0.8300                    0.0190
19   BCD/E       50                 0.1700                    0.0800
20   C/DEF       195                0.3800                    0.0210
21   CD/EF       55                 0.0500                    0.0720
22   CDE/F       150                0.2600                    0.0280
23   A/BC        210                0.4300                    0.0190
24   AB/C        28                 0.0200                    0.1000
25   B/CD        175                0.3900                    0.0230
26   BC/D        181                0.4000                    0.0220
27   C/DE        180                0.4100                    0.0220
28   CD/E        40                 0.0900                    0.0950
29   D/EF        38                 0.0800                    0.1050
30   DE/F        130                0.2300                    0.0300
31   A/B         195                0.2500                    0.0200
32   B/C         25                 0.0200                    0.1010
33   C/D         175                0.3500                    0.0250
34   D/E         30                 0.0500                    0.1230
35   E/F         100                0.2000                    0.0400
Feed to the system: 1000 kmol/h.
Initial composition (molar fraction): A 0.20; B 0.10; C 0.15; D 0.25; E 0.20; F 0.10.
CH vapour cost (10^3 $/10^6 kJ year) = 34
CC cooling water (10^3 $/10^6 kJ year) = 1.3
The solution of the model yields the following results. Optimal separation sequence: ABCD/EF, AB/CD, A/B, C/D, E/F. The objective function value is 4353.26.
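Because the feed to every candidate column is fixed a priori, this small instance can also be checked by exhaustive enumeration of all sharp-split sequences. The following is our own code (the data transcription from Table 13 and the per-column cost αk + βk·Fk + (CH + CC)·Kk·Fk are our reading of the model above); the enumeration reproduces the reported optimum:

```python
# Exhaustive enumeration of all sharp-split sequences for ABCDEF using the
# Table 13 data. Each column's cost is alpha_k + beta_k*F_k + (CH+CC)*Q_k
# with Q_k = K_k*F_k, F_k fixed a priori by the sharp split assumption.
from functools import lru_cache

z = {'A': 0.20, 'B': 0.10, 'C': 0.15, 'D': 0.25, 'E': 0.20, 'F': 0.10}
F0 = 1000.0                 # kmol/h feed to the system
CH, CC = 34.0, 1.3          # utility costs, 10^3 $ / 10^6 kJ year

# separator -> (beta_k, alpha_k, K_k), transcribed from Table 13
data = {
 'A/BCDEF': (0.52, 310, 0.013), 'AB/CDEF': (0.11, 55, 0.073),
 'ABC/DEF': (0.71, 255, 0.015), 'ABCD/EF': (0.42, 90, 0.045),
 'ABCDE/F': (0.81, 200, 0.020), 'B/CDEF': (0.08, 40, 0.093),
 'BC/DEF': (0.51, 232, 0.018), 'BCD/EF': (0.26, 85, 0.047),
 'BCDE/F': (0.38, 180, 0.023), 'A/BCDE': (0.65, 295, 0.014),
 'AB/CDE': (0.12, 43, 0.092), 'ABC/DE': (0.53, 235, 0.017),
 'ABCD/E': (0.25, 70, 0.057), 'A/BCD': (0.63, 250, 0.063),
 'AB/CD': (0.13, 35, 0.016), 'ABC/D': (0.50, 210, 0.019),
 'B/CDE': (0.20, 32, 0.125), 'BC/DE': (0.83, 215, 0.019),
 'BCD/E': (0.17, 50, 0.080), 'C/DEF': (0.38, 195, 0.021),
 'CD/EF': (0.05, 55, 0.072), 'CDE/F': (0.26, 150, 0.028),
 'A/BC': (0.43, 210, 0.019), 'AB/C': (0.02, 28, 0.100),
 'B/CD': (0.39, 175, 0.023), 'BC/D': (0.40, 181, 0.022),
 'C/DE': (0.41, 180, 0.022), 'CD/E': (0.09, 40, 0.095),
 'D/EF': (0.08, 38, 0.105), 'DE/F': (0.23, 130, 0.030),
 'A/B': (0.25, 195, 0.020), 'B/C': (0.02, 25, 0.101),
 'C/D': (0.35, 175, 0.025), 'D/E': (0.05, 30, 0.123),
 'E/F': (0.20, 100, 0.040)}

@lru_cache(maxsize=None)
def best(m):
    """Minimum cost and split list to fully separate contiguous mixture m."""
    if len(m) == 1:
        return 0.0, ()
    F = F0 * sum(z[c] for c in m)          # feed known a priori (sharp split)
    options = []
    for i in range(1, len(m)):
        name = m[:i] + '/' + m[i:]
        beta, alpha, K = data[name]
        col = alpha + beta * F + (CH + CC) * K * F
        cl, sl = best(m[:i])
        cr, sr = best(m[i:])
        options.append((col + cl + cr, (name,) + sl + sr))
    return min(options)

cost, seq = best('ABCDEF')
print(round(cost, 2), seq)   # 4353.26 ('ABCD/EF', 'AB/CD', 'A/B', 'C/D', 'E/F')
```

Enumeration is only practical here because the 6-component case has few sequences; the MILP superstructure formulation is what scales to larger problems.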
LP AND MILP IN HEAT EXCHANGER NETWORKS (HEN)
In this section the main objective is to present an explicit procedure to systematically generate a heat exchanger network. At the end of the procedure, the user must choose which heat exchangers are most appropriate to achieve the main goal. It is possible to obtain the maximum energy recovery and the minimum number of heat exchangers in the network; this is not an easy task if the number of streams is large or if it is necessary to split streams. Since the global optimum is guaranteed in both LP and MILP problems, the great complication is the combinatorial nature of this kind of problem. In the case of the synthesis of heat exchanger networks, an LP formulation can be used to achieve the maximum energy recovery or the minimum cost of utilities, and a MILP formulation can be used to find the minimum number of heat exchangers in the network. After this, the user can generate the best heat exchanger network by hand.
Minimum Utilities Cost
The HEN synthesis problem can be formulated as follows: given a set of hot and cold streams with their supply and target temperatures and flow rates, as well as hot and cold utilities with their temperatures and corresponding costs, the objective is to find the HEN with minimum utilities consumption. Papoulias and Grossmann (1983) proposed an LP model to calculate the minimum cost of utilities in the HEN. The next example shows the authors' model.
Example 11
This example illustrates the application of the Papoulias and Grossmann (1983) model. Table 14 presents a set with 4 process streams, one hot utility (steam) and one cold utility (water). A
minimum temperature approach (ΔTmin) between hot and cold streams must be fixed beforehand to guarantee a feasible heat exchanger network. The first step in solving the problem is to generate a cascade of temperature intervals. Heat can be transferred from the hot to the cold streams through the temperature intervals, as shown in Figure 13.

Table 14. Streams data

Stream      CP (kW/K)   Tin (K)   Tout (K)   h (kW/m2 K)   Cost ($/kW year)
H1 (hot)    10          650       370        1.0
H2 (hot)    20          590       370        1.0
C1 (cold)   15          410       650        1.0
C2 (cold)   13          353       500        1.0
Steam                   680       680        5.0           80
Water                   300       320        1.0           20

Minimum approach temperature = 10 K. Heat exchanger cost (€) = 5500 + 150·Area (m2).
To calculate the temperature intervals, the hot and cold temperature scales must be shifted in such a way that their final separation is equal to the minimum temperature approach (ΔTmin). In this case the hot stream temperatures are decreased by ΔTmin/2 and the cold stream temperatures are increased by ΔTmin/2:

T*H = TH − ΔTmin/2;   T*C = TC + ΔTmin/2      (1)
The heat content Qi,k that each stream i transfers to (or demands from) each temperature interval k is easily calculated by a heat balance:

Qi,k = CPi ΔTk      (2)
ΔTk is the temperature difference in interval k. Table 15 shows the temperature intervals and the energy supply and demand in each interval. Using this cascade diagram it is possible to formulate an LP problem as a transshipment problem, in which the hot streams are the source nodes, the cold streams are the destination nodes, and heat can be considered as a commodity to be transported from the sources to the destinations through intermediate warehouses, which in this case are the temperature intervals that assure a feasible heat exchange.
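Equations (1)–(2) can be sketched directly in code. The following is our own illustration (utilities are left out for simplicity; the interval construction follows the shifted-temperature procedure just described):

```python
# Shift the stream temperatures by +/- dTmin/2, collect the shifted interval
# boundaries, and compute the heat content Q[i,k] = CP_i * dT_k that each
# stream supplies to or demands from each temperature interval.
dTmin = 10.0
streams = {  # name: (CP kW/K, Tin K, Tout K), process streams from Table 14
    'H1': (10, 650, 370), 'H2': (20, 590, 370),
    'C1': (15, 410, 650), 'C2': (13, 353, 500)}

shifted = {}
for s, (cp, tin, tout) in streams.items():
    d = -dTmin / 2 if tin > tout else dTmin / 2   # hot: -dTmin/2, cold: +dTmin/2
    shifted[s] = (cp, tin + d, tout + d)

bounds = sorted({t for _, a, b in shifted.values() for t in (a, b)}, reverse=True)
widths = [hi - lo for hi, lo in zip(bounds, bounds[1:])]

def q(s, k):
    """Heat content of stream s in interval k (0 if the stream is absent)."""
    cp, a, b = shifted[s]
    lo, hi = min(a, b), max(a, b)
    overlap = max(0.0, min(hi, bounds[k]) - max(lo, bounds[k + 1]))
    return cp * overlap

print(widths)                           # [10.0, 60.0, 80.0, 90.0, 50.0, 7.0]
print([q('H1', k) for k in range(6)])   # [0.0, 600.0, 800.0, 900.0, 500.0, 0.0]
```

The printed interval widths and heat contents are exactly the entries of Table 15 below.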
Table 15. Temperature intervals (K) and heat demand/supply (kW)

Interval   T* range (K)   ΔT (K)   QH1    QH2    QC1    QC2
1          655-645        10                      150
2          645-585        60       600            900
3          585-505        80       800    1600    1200
4          505-415        90       900    1800    1350   1170
5          415-365        50       500    1000           650
6          365-358        7                              91
Figure 13. Heat Cascade.
In a given interval, the heat that cannot be transferred to a cold stream (because that stream has already received all the heat it needs) is transferred to the next temperature interval. In the first temperature interval, or in the intervals in which a hot utility is available, if the process streams are not able to supply the heat necessary to thermally satisfy the cold streams, the hot utility must supply this heat. In an analogous way, if a residual heat exists in the last temperature interval, this heat must be transferred to the cold utility. Figure 13 shows the heat supply/demand in each interval.
Table 16. Results for the example

Qs (kW)   450      Qw (kW)   2139
R1        300      R2        0
R3        1200     R4        1380
R5        2230

The Pinch Point is at 580/590 K, corresponding to the outlet of interval 2 (R2 = 0).
Hot utility cost = 36000 $/year
Cold utility cost = 42780 $/year
Total cost = 78780 $/year
A system of algebraic equations is generated from the energy balance in each interval; in this case there are six equations and seven variables, which means the problem has one degree of freedom. The energy balance in each interval of the cascade gives:

R1 + 150 = Qs
R2 + 900 = R1 + 600
R3 + 1200 = R2 + 800 + 1600
R4 + 1350 + 1170 = R3 + 900 + 1800
R5 + 650 = R4 + 500 + 1000
Qw + 91 = R5

Considering that the objective is to obtain a heat exchanger network with minimum utility consumption and, obviously, minimum utility cost, it is necessary to add to this set the objective function and the non-negativity conditions on the variables to obtain the final transshipment model for this example.
min Qs + Qw (or Cs Qs + Cw Qw)
subject to (s.t.)
Qs − R1 = 150
R1 − R2 = 300
R3 − R2 = 1200
R4 − R3 = 180
R5 − R4 = 850
R5 − Qw = 91

Qw, Qs, R1, R2, R3, R4, R5 ≥ 0

Cs and Cw are the hot and cold utility costs, respectively. The solution to this LP transshipment problem is presented in Table 16. This formulation presents three great advantages:
It is possible to include more than one type of hot or cold utility, at different temperatures and with different costs;
It is relatively easy to expand the model to include constraints on possible heat exchanges, to limit the amount of heat exchanged between two particular streams, etc.;
The transshipment model equations can easily be included in more complex models (for example, distillation column sequences, flowsheets, etc.) so as to allow process optimization and heat integration simultaneously.
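For this small example, the transshipment LP can also be solved by the classic problem-table cascade. The following sketch is our own code (`net` holds supply minus demand per interval, computed from Table 15) and reproduces the Table 16 results:

```python
# Problem-table recursion: cascade the interval surpluses with Qs = 0,
# then raise Qs just enough to make every residual non-negative.
# net[k] = heat supplied - heat demanded in interval k (kW), from Table 15:
# e.g. interval 3: 800 + 1600 - 1200 = 1200.
net = [-150, -300, 1200, 180, 850, -91]

def solve_cascade(net):
    run, residuals = 0.0, []
    for d in net:
        run += d
        residuals.append(run)
    qs = max(0.0, -min(residuals))          # minimum hot utility
    residuals = [r + qs for r in residuals]
    qw = residuals[-1]                      # residual leaving the last interval
    return qs, qw, residuals

qs, qw, R = solve_cascade(net)
print(qs, qw, R)   # 450.0 2139.0 [300.0, 0.0, 1200.0, 1380.0, 2230.0, 2139.0]
```

The zero residual after interval 2 identifies the pinch (580/590 K), and the first five residuals are exactly R1 to R5 of Table 16.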
Figure 14 presents the composite curves.
Figure 14. Composite curves.
The Pinch Point can be defined as a point that divides the problem into two different energy regions, above and below it. In a sequential systematic approach, a heat exchanger network can be generated for each region, as we will see in the next section. The transshipment model can be generalized to an arbitrary number of hot or cold streams with any combination of utilities, each with its particular cost. It is necessary to define the following sets:
INTERVAL
{k  k is a temperature interval}
HOT
{i  i is a hot stream}
COLD
{j  j is a cold stream}
Hk
{i  i is a hot stream that supplies heat to the temperature interval k}
Ck
{j  j is a cold stream that needs heat from the temperature interval k}
Sk
{m  m is a hot utility supplying heat in the interval k}
Wk
{n  n is a cold utility receiving the residual heat from the temperature interval k}
In each interval k the following DATA are known:
QHi,k = heat supplied by the hot stream i in the temperature interval k
QCj,k = heat transferred to the cold stream j in the interval k
Cm = unit cost of the hot utility m
Cn = unit cost of the cold utility n
The problem VARIABLES are:
QSm = heat available from the hot utility m
QWn = heat transferred to the cold utility n
Rk = residual heat that leaves the temperature interval k
The transshipment model in its general form is presented in Figure 15, considering a temperature interval k and the heat amounts that enter and leave it. The mathematical representation is:

min Z = Σm Cm QSm + Σn Cn QWn

s.t.
Rk − Rk−1 − Σm∈Sk QSm + Σn∈Wk QWn = Σi∈Hk QHi,k − Σj∈Ck QCj,k      k ∈ INTERVAL
QSm ≥ 0;  QWn ≥ 0;  Rk ≥ 0, k = 1…K−1
R0 = RK = 0
The transshipment model as presented has no limitation on the number of hot and cold streams. In this formulation it is also easy to include upper and lower bounds on the heat available from any particular hot or cold utility.
Figure 15. Heat balance in the temperature interval k.
Minimum Number of Heat Exchangers
The LP transshipment problem provides information about the heat transferred between the streams. One might therefore expect to obtain directly the number of heat exchangers necessary to realize these heat exchanges. However, the objective function does not include any information about the number of heat exchangers in the network, so it is possible to obtain solutions of the LP problem with the minimum energy cost but with different numbers of heat exchangers. For that reason, a formulation that explicitly includes the minimum number of heat exchangers is necessary. The minimum number of heat exchangers in the network is equal to the number of process streams plus hot and cold utilities, minus one. To obtain the minimum number of heat exchangers with the expanded model, the problem must be divided into two or more regions, according to the number of Pinch Points. If the problem is divided into q subnetworks, each subnetwork will have an associated set of Kq temperature intervals. Since the minimum utilities cost was calculated in the previous section, the heat exchanged between the hot and cold streams and utilities is now known. Hence, all the hot streams (process streams and utilities) can be joined into a single set, and the same can be done for the cold streams. To represent a potential heat exchange between a hot and a cold stream, a new binary variable is defined:
yq(i,j) = 1 if the hot stream i and the cold stream j exchange heat in subnetwork q; yq(i,j) = 0 if they do not.
Each potential heat exchange between a hot and a cold stream must be associated with a heat exchanger, in such a way that for each subnetwork the number of heat exchangers is equal to the summation of all the binary variables. That is, the objective is to minimize the double summation:

min: Σi∈H Σj∈C yq(i,j)
The model constraints contain all the information necessary for the heat exchange between the different streams. Nevertheless, the presentation of these constraints can be simplified for two reasons: first, the heat exchanged by the process streams and utilities is known (calculated in the first step); second, all the hot (and cold) streams can be grouped into a single set H (or C). In this way, the model becomes:
min: Σi∈Hq Σj∈Cq yq(i,j)

s.t.
Ri,k − Ri,k−1 + Σj∈Ck Qi,j,k = QHi,k      i ∈ HSk
Σi∈HSk Qi,j,k = QCj,k                     j ∈ Ck,  k = 1…Kq
Σ(k=1…Kq) Qi,j,k − Ui,j yq(i,j) ≤ 0
Ri,k ≥ 0;  Qi,j,k ≥ 0
The last equation ensures that if a binary variable yq(i,j) is equal to zero in subnetwork q, then the heat exchanged between streams i and j in that subnetwork is also zero. Ui,j is an upper bound on the maximum possible heat that streams i and j can exchange. To make the formulation as efficient (tight) as possible, this parameter should be chosen with the lowest possible value. For example, if stream i has 50 MW of heat available and stream j needs 100 MW, the value of Ui,j must be fixed at 50 MW, the maximum heat that streams i and j could exchange. This problem must be solved for each of the different subnetworks, or simultaneously for all of them. Mathematically, the problem is a Mixed Integer Linear Program (MILP). From its solution the following information can be extracted: the streams that exchange heat (yq(i,j) = 1), and the heat exchanged in each piece of equipment, Σ(k=1…Kq) Qi,j,k.
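The tight big-M choice just described can be sketched as follows (our own code; the below-pinch duties are derived from the stream data of this example, e.g. H1 has 2800 − 600 = 2200 kW left below the pinch after its above-pinch exchange):

```python
# Tight bound U[i,j] = min(heat stream i can give, heat stream j can take),
# here for the below-pinch subnetwork of the example (kW).
hot = {'H1': 2200, 'H2': 4400}                # available below the pinch
cold = {'C1': 2550, 'C2': 1911, 'W': 2139}    # required below the pinch

U = {(i, j): min(qi, qj) for i, qi in hot.items() for j, qj in cold.items()}
print(U[('H1', 'C2')], U[('H2', 'C1')])   # 1911 2550
```

Using these values instead of one large common constant makes the LP relaxation of the MILP noticeably tighter.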
This information can be used directly to generate a heat exchanger network, by hand or automatically. It is important to note that the solution of this problem may not be unique, i.e., different configurations may exist that achieve the minimum cost of utilities and the minimum number of heat exchangers. To find the best network, all the alternative configurations must be verified to assure the minimum cost. Besides, a given configuration does not necessarily define a unique topology for all the heat duties (heat exchanged between two process streams or utilities) because of the possible existence of heat transfer loops. For the case used as example in this section, the results are presented in Tables 17 and 18, and a summary is given in Table 19. In the region above the Pinch Point there is no other solution that uses just two heat exchangers; below the Pinch, however, there are two possible alternatives that use four heat exchangers. With the solution obtained it is possible to design a valid network, presented in Figure 16. Nevertheless, even though the problem solution indicates which matches occur, it does not say how they should be arranged. For example, if the hot stream H1 must exchange heat with the cold streams C1 and C2, this could be achieved by H1 exchanging heat first with C1 and then with C2, or the other way around; alternatively, H1 could be split to exchange heat with the two streams in parallel, or even exchange part in parallel and part in series, and so on.
Table 17. Heat exchanged for the subnetwork above the Pinch

Match (yi,j = 1)   Heat duty (kW)
S-C1               450
H1-C1              600
Table 18. Heat exchanged for the subnetwork below the Pinch

Alternative 1
Match (yi,j = 1)   Heat duty (kW)
H1-C2              1911
H1-W               289
H2-C1              2550
H2-W               1850

Alternative 2
Match (yi,j = 1)   Heat duty (kW)
H1-C1              289
H1-C2              1911
H2-C1              2261
H2-W               2139
Table 19. Optimal solution summary

Match (yi,j = 1)   Heat duty (kW)
S-C1               450
H1-C1              889
H1-C2              1911
H2-C1              2261
H2-W               2139
Figure 16. Possible network obtained by hand by using the first solution for the example.
None of this information appears in the problem solution, and the different alternatives available to realize the fixed heat exchanges can have important effects on the total network cost.
Again, the problem formulation is very versatile, so that it is easy to include additional constraints: for example, to forbid certain matches (by fixing the corresponding binary variable to zero), to promote some matches (for example, by multiplying the binary variables in the objective function by a weighting factor), or to include all the constraints that appear in the transshipment problem. Forbidden matches and other constraints imposed for safety or controllability reasons can be included in the model.
CONCLUSIONS
In this chapter a review of linear programming (LP) and mixed integer linear programming (MILP) applications in chemical process synthesis was provided. Chemical process synthesis problems are nonlinear in nature; however, there are a good number of cases in which an LP or MILP approximation is good enough to represent the whole system.
REFERENCES
Andrecovich, M.J.; Westerberg, A.W. (1985). MILP formulations for heat integrated distillation sequence synthesis. AIChE J., 31, 1461.
Balas, E. (1985). Disjunctive programming and a hierarchy of relaxations for discrete optimization problems. SIAM J. Alg. Disc. Meth., 6, 466.
Biegler, L.T.; Grossmann, I.E.; Westerberg, A.W. (1997). Systematic Methods of Chemical Process Design. Prentice Hall, New Jersey.
Caballero, J.A.; Grossmann, I.E. (2006). Structural considerations and modeling in the synthesis of heat integrated thermally coupled distillation sequences. Industrial and Engineering Chemistry Research, 45 (25), 8454-8474.
Cavalier, T.M.; Soyster, A.L. (1987). Logical deduction via linear programming. IMSE Working Paper 87-147, Department of Industrial and Management Systems Engineering, Pennsylvania State University.
Clocksin, W.F.; Mellish, C.S. (1981). Programming in Prolog. Springer-Verlag, New York.
Daichendt, M.M.; Grossmann, I.E. (1998). Integration of hierarchical decomposition and mathematical programming for the synthesis of process flowsheets. Computers and Chemical Engineering, 22 (1-2), 147-175.
Douglas, J.M. (1988). Conceptual Design of Chemical Processes. McGraw-Hill, New York.
Edgar, T.F.; Himmelblau, D.M.; Lasdon, L. (2001). Optimization of Chemical Processes, 2nd edition. McGraw-Hill, New York.
Grossmann, I.E.; Caballero, J.A.; Yeomans, H. (1999). Mathematical programming approaches to the synthesis of chemical process systems. Korean Journal of Chemical Engineering, 16 (4), 407-426.
Hooker, J.N. (1988). Resolution vs. cutting plane solution of inference problems: some computational experience. Operations Research Letters, 7 (1), 1.
Papoulias, S.A.; Grossmann, I.E. (1983). A structural optimization approach in process synthesis. Part II: Heat recovery networks. Computers and Chemical Engineering, 7, 707.
Perry, R.H.; Green, D.; Maloney, J.O. (Eds.) (1984). Perry's Chemical Engineers' Handbook, 6th edition. McGraw-Hill, New York.
Raman, R.; Grossmann, I.E. (1991). Relation between MILP modeling and logical inference for chemical process synthesis. Computers and Chemical Engineering, 15, 73.
Reid, R.C.; Prausnitz, J.M.; Poling, B.E. (1987). The Properties of Gases and Liquids, 4th edition. McGraw-Hill, New York.
Smith, R. (2005). Chemical Process Design and Integration. John Wiley and Sons, West Sussex, England.
Turkay, M.; Grossmann, I.E. (1996). Logic-based algorithms for the optimal synthesis of process networks. Computers and Chemical Engineering, 20, 959-978.
Williams, H.P. (1985). Model Building in Mathematical Programming. Wiley Interscience, New York.
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
In: Linear Programming: New Frontiers in Theory and Applications
Editor: Zoltan Adam Mann
ISBN 9781612095790
© 2012 Nova Science Publishers, Inc.
Chapter 9
A MEDIUM-TERM PRODUCTION PLANNING PROBLEM: THE EPS LOGISTICS

Jian Cui
Department of Biochemical and Chemical Engineering, TU Dortmund, Germany
ABSTRACT

This Chapter proposes a modified mixed-integer linear programming (MILP) model of a real-world medium-term production planning problem, the EPS logistics. The model covers the material flows within a polymer production plant with batch and continuous production steps, in which each batch delivers different amounts of final products according to the chosen recipe, and a local or global marketplace in which the products can be assigned to different demands, with demand satisfaction distributed over several periods up to a maximum permitted delay.
1. INTRODUCTION

Over the last decades, driven by economic benefits, much progress has been made in the planning and scheduling of processing plants, from single plants (e.g. Schulz et al., 1998) to enterprise-wide optimization (e.g. Grossmann, 2009). The main issues in focus are the efficient utilization of the available resources and the ability to satisfy customers' demands on time with high-quality products. Integer requirements are essential to model a wide variety of situations involving assignment restrictions, logical constraints, etc. The discrete-time representation, which is employed in this Chapter as in other chemical plant applications (e.g., Kondilli et al., 1993; Shah et al., 1993; Kallrath, 2002; Balasubramanian and Grossmann, 2004; Sand and Engell, 2004; Floudas and Lin, 2004; Castro et al., 2006; Ferrer-Nadal et al., 2007), has become more competitive with smaller time intervals due to great advances in MILP optimization, not only from the mathematical modeling point of view but also thanks to today's convenient commercial software and solver packages; a large amount of work has therefore been devoted to modeling by MILPs. It can be shown that MILP planning and scheduling problems are usually NP-hard (Ahmed and Sahinidis, 2000) and thus belong to a class of critical and very complex tasks.

In today's chemical process industry, the inner production plan, e.g., the operating conditions of the units, the product types and their throughput, is strongly affected by the outer circumstances, e.g., the availability and price of raw materials, the cost of manpower, the varying demands, and the product prices. This requires flexible operation of the plant, reacting as quickly as possible while maintaining relatively high profits. To meet this requirement, a multiproduct plant operated in batch or semi-batch mode, producing different products with structurally similar recipes, is usually preferred due to its high degree of flexibility. A number of specialty chemical products, produced in small amounts and typically of high value, can thus be offered to unpredictable markets.

A real-world medium-term production planning problem, the EPS logistics, concerns a flexible multiproduct batch plant that produces the polymer expandable polystyrene (EPS), commonly known in its expanded form by the trade name Styropor. It has served as an example for chemical batch plant simulation in several works using different approaches and algorithms (e.g., Schulz, 2002; Sand and Engell, 2004; Till et al., 2007). This planning problem contains many discrete state and control variables (e.g. the number of polymerization batches; the operating state of the finishing lines; the switch-on and switch-off operations of the finishing lines under normal and emergency situations) as well as many continuous state and control variables (e.g. the supply of product; the shortage of supply; the inventory of product).

Email: [email protected]
Thus it can be cast into a mixed-integer linear program. The purpose of this Chapter is to state a modified MILP model of the EPS logistics based on the static medium-term scheduling problem of the EPS production formulated by Till et al. (2007) (for convenience, Till's model is called ModelT in the rest of this chapter), with the following modifications and extensions:
- Modify the objective function;
- Modify the material supply balance;
- Formulate a period-based material supply balance;
- Formulate the material demand balances with the restrictions at the end of the planning horizon;
- Include the state of the finishing lines in the previous periods, restricting the possibility of switching them on or off in the next periods under the condition that the lines have to be kept open or closed for at least two consecutive periods.
This Chapter is organized as follows. In Section 2, the EPS logistics with its manufacturing section and marketing section is introduced. Section 3 provides a MILP model of the modified and extended planning problem, highlighting the differences from ModelT; the numerical results are also given in this section. Finally, conclusions are drawn in Section 4. For proprietary reasons, the exact problem data is not available, and most values therefore represent reasonable assumptions.
2. INTRODUCTION TO THE EPS LOGISTICS

2.1. Manufacturing Section
A real EPS plant and its flow sheet are shown in Figure 2.1 and Figure 2.2. In the manufacturing section, the different EPS products (distinguished by type and grain size fraction) are produced and stored. The plant consists of three stages: preparation, polymerization, and finishing. In the preparation stage, the raw materials styrene, pure water and pentane are prepared in the tank area. The supply of raw materials is assumed to be unlimited, and the planning of this stage can be done a posteriori based on rules, as the preparation stage does not limit the production process (Sand and Engell, 2004). It is therefore neglected in the planning problem. Note that a stage in this Chapter refers to a part of a process that usually operates independently from other stages and results in a planned sequence of chemical or physical changes in the material being processed; it is thus a different concept from the stage in stochastic programming, e.g., two-stage or multi-stage stochastic programming. In the polymerization stage, the mixed raw materials are filled into the reactors for batch reaction according to a specific recipe (see Figure 2.3). Operating in batch mode, EPS of type A and of type B is produced in the reactors of the reaction plant. For each EPS type, A or B, five recipes exist which determine the grain size distribution, such that each batch yields a main product (grain size fraction) together with four coupled fractions (Figure 2.3). The processing time of a polymerization is the same for all recipes. Table 2.1 provides the indices for the formulation of the planning problem.
Figure 2.1. EPS logistics: a real EPS plant.
Figure 2.2. EPS logistics: flow sheet of EPS plant.
Figure 2.3. EPS logistics: nominal grain size distribution.
Table 2.1. EPS model: indices

| Symbol | Set | Value | Comment |
|--------|-----|-------|---------|
| i, j | {1, …, I} | I = the last period | Time periods |
| l | {1, …, L} | L = 3 periods | Lateness of demand satisfaction (l = 1 means supply in the same period) |
| p | {1, …, P} | P = 2 (1 = A, 2 = B) | Index of EPS type / finishing line |
| f_p | {1, …, F_p} | F_p = 5 ∀p | Grain size fraction of EPS type p |
| r_p | {1, …, R_p} | R_p = 5 ∀p | Recipe for EPS type p |
In the finishing stage, a finished polymerization batch is directly transferred into the corresponding mixer (located in the reaction plant) of finishing line A or B. The mixers are operated in semi-continuous mode: the scrubbed EPS polymer is dehydrated by the dehydrator (or separator) and continuously pumped to the sizer by the air pump of the sieve plant. Each type of EPS is then separated into 5 grain size fractions, and the final products are packed and stored for sale in the warehouse. The product storage capacity is assumed to be unlimited. The finishing line has to be shut down temporarily when its corresponding mixer runs empty. After a shutdown, the finishing line has to remain idle for a certain period of time before it can be restarted. After a startup, the finishing line has to be kept in operation for a certain period of time. This constraint models the large need for manpower during startup and shutdown.
2.2. Marketing Section

In the marketing section, as shown in Figure 2.4, the products that were produced in a period are sold or stored (and sold later, hopefully) according to the following rules:
- In each period, sales towards the demands can be made from the production in the current period as well as from the warehouse.
- Unsatisfied demands can be met from later production for up to L periods, incurring a certain loss of revenue due to a lateness penalty; it is assumed that if the demand cannot be met within L future periods, the customer buys the polymer elsewhere.
- The surplus production is completely stored in the warehouse.
A detailed description of this EPS plant, of the underlying assumptions and of the modeling techniques used can be found in Engell et al. (2001) and Sand and Engell (2004). The medium-term planning problem is formulated as a multi-period MILP model based on a discrete-time representation. The planning decisions are:
- The number of batches produced in one period,
- The assignments of recipes to batches,
- The choice of the operating states of the finishing lines, and
- The amounts of sales from the batches towards the demands (they may satisfy past demands or present and future demands).
The medium-term planning problem is complicated for the following reasons:

- Coupled production: each batch provides several products for different demands. The choice of recipes determines the distribution of the product spectrum.
- The finishing line is operated continuously with minimum and maximum bounds on the throughput.
- Demand satisfaction can be distributed over several periods with a maximum permitted delay.
Figure 2.4. EPS logistics: production, storage and demand satisfaction.
Table 3.1. EPS model: process parameters

| Symbol | Value | Comment | Unit |
|--------|-------|---------|------|
| B_{i,f_p} | Table 3.7 | Demand for product f_p in period i | Batches |
| C_p^{min} | 0.1 ∀p | Minimal mass in mixer p | Batches |
| S_p^{min} | 4.8 ∀p | Minimal feed to finishing line p | Batches/period |
| S_p^{max} | 12 ∀p | Maximal feed to finishing line p | Batches/period |
| N_i^{max} | 12 ∀i | Maximal capacity of the polymerization stage in period i | Batches |
| I_p^{on} | 2 ∀p | Minimal number of successive periods with finishing line p in on-state | Periods |
| I_p^{off} | 2 ∀p | Minimal number of successive periods with finishing line p in off-state | Periods |
| ρ_{f_p,r_p} | Figure 2.3 | Yield of the grain size fractions f_p according to recipe r_p | Fraction of a batch |
| z_p^0 ∈ {0, 1} | 0 ∀p | Initial value of the operating state of finishing line p (off-state = 0, on-state = 1) | - |
| w_p^0 ∈ [0, 1] | 0 ∀p | Initial value of the indicator for a switch of the operating state of finishing line p (no switch = 0, switch = 1) | - |
| M^{+}_{0,f_p} | 0 ∀f_p | Initial inventory of product f_p | Batches |
3. THE MILP MODEL OF THE EPS LOGISTICS

For setting up the deterministic model of the EPS logistics, the necessary definitions and some explanations of the parameters, the variables, the objective function and the constraints are given in this section.
3.1. Parameters and Variables

Note that the variable M_{i,l,f_p} is different from M_{l,j,f_p} in ModelT: M_{l,j,f_p} represents the view from the future back to the present, while M_{i,l,f_p} represents the view from the present into the future. Besides, M_{i,l,f_p} is easier to understand from a modeling point of view, since all the other variables carry the same index i. The planning decisions modeled by the relevant variables are listed in Table 3.4. M_{i,l,f_p} dominates the market component of the EPS logistics and is thus one of the key variables of the planning problem. It is defined under the assumption of divisible demand (late supply), which means that the demand of the current period may be satisfied in any of the following L successive periods (including the current period itself).

Table 3.2. EPS model: revenue and cost parameters

| Symbol | Value | Comment | Unit |
|--------|-------|---------|------|
| α_{i,l,f_p} | 1 + 1/l ∀i, f_p, p | Specific revenue for the satisfaction of demand in period j = i+l−1 | 10³ Euro/Batch |
| β_{i,f_p} | 0.1·i ∀i, f_p | Specific costs for inventory of product f_p at the end of period i | 10³ Euro/Batch |
| γ_{i,f_p} | 0.5 ∀i, f_p, p | Specific penalty for shortage of supply in period i | 10³ Euro/Batch |
| δ_{i,r_p} | 1 ∀i, r_p, p | Fixed costs for a polymerization according to recipe r_p in period i | 10³ Euro/Batch |
| ε_{i,p} | 3 | Fixed costs for a switch of the operating state of finishing line p at the beginning of period i | 10³ Euro |
Table 3.3. EPS model: variables

| Symbol | Range | Comment | Unit |
|--------|-------|---------|------|
| B^{-}_{i,f_p} | ≥ 0 | Shortage of supply at the end of period i+L−1 for the demand of product f_p in period i | Batches |
| M_{i,l,f_p} | ≥ 0 | Supply of product f_p in period i+l−1 towards the demand of product f_p in period i | Batches |
| M^{+}_{i,f_p} | ≥ 0 | Inventory of product f_p at the end of period i | Batches |
| N_{i,r_p} | integer ≥ 0 | Polymerization starts in period i, according to r_p | Batches |
| z_{i,p} | {0, 1} | Operating state of finishing line p in period i (off-state = 0, on-state = 1) | - |
| w_{i,p} | [0, 1] | Indicator for a switch of the operating state of finishing line p at the beginning of period i (no switch = 0, switch = 1) | - |
Table 3.4. EPS model: planning decisions and the variables that model them

| Planning Decision | Modeled By |
|-------------------|------------|
| The number of batches produced in one period | N_{i,r_p} |
| The assignments of recipes to batches | N_{i,r_p} |
| The choice of operating states of the finishing lines | z_{i,p}, w_{i,p} |
| The amount of sales of batches towards demands | M_{i,l,f_p} |
Taking a planning horizon with five periods as an example, the values of M_{i,l,f_p} available in each supply period are listed in Table 3.5 (the columns are the supply periods i + l − 1):

Table 3.5. EPS model: the structure of variable M_{i,l,f_p} over 7 periods

| i=1 | i=2 | i=3 | i=4 | i=5 | i=6 | i=7 |
|-----|-----|-----|-----|-----|-----|-----|
| M_{1,1,f_p} | M_{1,2,f_p} | M_{1,3,f_p} | | | | |
| | M_{2,1,f_p} | M_{2,2,f_p} | M_{2,3,f_p} | | | |
| | | M_{3,1,f_p} | M_{3,2,f_p} | M_{3,3,f_p} | | |
| | | | M_{4,1,f_p} | M_{4,2,f_p} | M_{4,3,f_p} | |
| | | | | M_{5,1,f_p} | M_{5,2,f_p} | M_{5,3,f_p} |

Since the length of the planning horizon is 5, M_{4,3,f_p} and M_{5,2,f_p} (supplied in period 6) as well as M_{5,3,f_p} (supplied in period 7) are out of the range of the planning horizon. This leads to a cutoff of all M_{i,l,f_p} with i + l − 1 > I, so only supplies within the planning horizon are considered.
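The cutoff pattern of Table 3.5 can be enumerated directly. The following sketch (the variable names are illustrative, not part of the original model) lists the index pairs (i, l) that survive the cutoff for I = 5 and L = 3:

```python
# Enumerate the index pairs (i, l) of the supply variables M[i, l, fp].
# A pair is cut off when the actual supply period i + l - 1 lies beyond
# the planning horizon I, as enforced by constraint (3.13).

I, L = 5, 3  # horizon length and maximum lateness (see Table 2.1)

all_pairs = [(i, l) for i in range(1, I + 1) for l in range(1, L + 1)]
in_horizon = [(i, l) for (i, l) in all_pairs if i + l - 1 <= I]
cut_off = [(i, l) for (i, l) in all_pairs if i + l - 1 > I]

print(sorted(cut_off))   # [(4, 3), (5, 2), (5, 3)], matching Table 3.5
print(len(in_horizon))   # 12 supply variables remain per grain size fraction
```

The cutoff set matches exactly the three entries of Table 3.5 that fall in periods 6 and 7.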
3.2. The MILP Model

The EPS medium-term production planning problem can be described as a MILP problem with the following objective function:

\max \sum_{i=1}^{I} \sum_{p=1}^{2} \Bigg( \sum_{l=1}^{L} \sum_{f_p=1}^{F_p} \alpha_{i,l,f_p} M_{i,l,f_p} - \sum_{f_p=1}^{F_p} \beta_{i,f_p} M^{+}_{i,f_p} - \sum_{f_p=1}^{F_p} \gamma_{i,f_p} B^{-}_{i,f_p} - \sum_{r_p=1}^{R_p} \delta_{i,r_p} N_{i,r_p} - \varepsilon_{i,p} w_{i,p} \Bigg)    (3.1)
The objective is the maximization of the profit: the sales revenues minus the inventory costs, the penalties for shortage, the polymerization costs and the operating-state-switch costs of the finishing lines, which are listed in Table 3.6. The constraints of this planning problem can be classified into two categories.
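For fixed decision values, the five terms of (3.1) can be evaluated directly. The sketch below applies the coefficient rules of Table 3.2 to a made-up single-product, single-line instance with two periods (all decision values are illustrative, not taken from the case study):

```python
# Evaluate the profit (3.1) for given decisions: revenue from supplies M,
# minus inventory costs, shortage penalties, polymerization costs and
# switch costs.  Coefficients follow Table 3.2: alpha = 1 + 1/l,
# beta = 0.1 * i, gamma = 0.5, delta = 1, epsilon = 3.

I, L = 2, 2

M = {(1, 1): 2.0, (1, 2): 1.0, (2, 1): 1.5}   # supplies M[i, l]
M_inv = {1: 0.5, 2: 0.0}                      # inventories M+[i]
B_short = {1: 0.0, 2: 0.5}                    # shortages B-[i]
N = {1: 4, 2: 3}                              # polymerization batches per period
w = {1: 1, 2: 0}                              # finishing-line switches

revenue = sum((1 + 1 / l) * m for (i, l), m in M.items())
inventory = sum(0.1 * i * M_inv[i] for i in range(1, I + 1))
shortage = sum(0.5 * B_short[i] for i in range(1, I + 1))
polymer = sum(1 * N[i] for i in range(1, I + 1))
switches = sum(3 * w[i] for i in range(1, I + 1))

profit = revenue - inventory - shortage - polymer - switches
print(round(profit, 3))  # prints -1.8
```

Here the single switch cost (3) and the polymerization costs (7) outweigh the revenue (8.5), illustrating why the solver trades off late supplies, storage and switches.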
Table 3.6. EPS model: terms in the objective function

| Item | Expression | Comment |
|------|------------|---------|
| Sales revenues | \sum_{i,p,l,f_p} \alpha_{i,l,f_p} M_{i,l,f_p} | The revenue for selling all products supplied; the value decreases with increasing l. |
| Inventory costs | \sum_{i,p,f_p} \beta_{i,f_p} M^{+}_{i,f_p} | The costs for storing the products which are not sold; the value increases with increasing storage time. |
| Shortage penalties | \sum_{i,p,f_p} \gamma_{i,f_p} B^{-}_{i,f_p} | The penalty cost due to the supply shortage towards the satisfaction of B_{i,f_p}. |
| Polymerization costs | \sum_{i,p,r_p} \delta_{i,r_p} N_{i,r_p} | The fixed cost of the polymerization process. |
| Operation state-switch costs | \sum_{i,p} \varepsilon_{i,p} w_{i,p} | The fixed cost of switching the finishing line on or off. |
The first category, the physical constraints, describes the capacity limitations of the equipment and the startup/shutdown operations of the finishing lines in the manufacturing section of the EPS logistics. These constraints are:
Capacity constraint of polymerization (CAPPOLY)
The total number of batches that can be started in period i is restricted by the maximal capacity of the polymerization stage N_i^{max}:

\sum_{p=1}^{P} \sum_{r_p=1}^{R_p} N_{i,r_p} \le N_i^{max} \quad \forall i    (3.2)
Capacity constraint of the finishing line (CAPFIN)
The level of a mixer constrains the planning decisions since the corresponding finishing line has to be shut down when the mixer runs empty. Based on the material balances around the mixer for period i, a finishing line can stay in operation if and only if the number of polymerization batches in period i is within a certain range. The lower bound and the upper bound of this range are given by (3.3) and (3.4). When the feed is outside this range, the feed is set to zero and the corresponding finishing line has to be shut down.

The lower bound (CAPFINLOW):

z_{i,p} \left( C_p^{min} + S_p^{min} \right) \le \sum_{r_p=1}^{R_p} N_{i,r_p} \quad \forall i, p    (3.3)

The upper bound (CAPFINUP):

\sum_{r_p=1}^{R_p} N_{i,r_p} \le z_{i,p} S_p^{max} \quad \forall i, p    (3.4)
In (3.4), we assume that the maximal feed S_p^{max} is such that no overflow occurs in the mixer.
Constraints on startup/shutdown of the finishing line (SWITCHFIN)
The optimal values of the nonnegative continuous variables w_{i,p} are restricted to either zero or one by the values of the binary variables z_{i,p} in constraints (3.5) and (3.6). Thus, taking only binary values at the optimum, w_{i,p} indicates the startup or shutdown operations of the finishing line.

Startup operation (STARTFIN):

z_{i,p} - w_{i,p} \le \begin{cases} z_p^0 & \text{if } i = 1 \\ z_{i-1,p} & \text{else} \end{cases} \quad \forall i, p    (3.5)

Shutdown operation (SHUTFIN):

-z_{i,p} - w_{i,p} \le \begin{cases} -z_p^0 & \text{if } i = 1 \\ -z_{i-1,p} & \text{else} \end{cases} \quad \forall i, p    (3.6)
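Constraints (3.5) and (3.6) jointly require w_{i,p} to be at least |z_{i,p} − z_{i−1,p}|; since w_{i,p} is penalized in the objective, its optimal value is exactly the switch indicator. A brute-force check of this linearization (an illustrative sketch, not part of the original model):

```python
# For every combination of successive operating states, the smallest w
# satisfying (3.5)  z_i - w <= z_prev  and (3.6)  -z_i - w <= -z_prev
# equals |z_i - z_prev|, i.e. w indicates a switch of the line.

def min_switch_indicator(z_prev, z_i):
    # smallest w >= 0 with w >= z_i - z_prev and w >= z_prev - z_i
    return max(z_i - z_prev, z_prev - z_i, 0)

table = {(zp, zi): min_switch_indicator(zp, zi)
         for zp in (0, 1) for zi in (0, 1)}
print(table)  # w = 1 exactly when the state changes
```

This reproduces the case table given later in Appendix A (Table A.1).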
(3.2)-(3.6) are the same as in ModelT (Till et al., 2007), while the objective function (3.1) and the constraints (3.7)-(3.12) below are newly developed or modified.
Constraints of minimal successive periods for closing/opening the Finishing Line (KEEPMINFIN)
Each finishing line of the EPS plant has to remain in the ON state or in the OFF state for at least I_p^{on} or I_p^{off} successive periods, respectively. After a startup of finishing line p in period i, constraint (3.7) forces z_{i+1,p} to one (ON), so that the line stays on for at least two successive periods and no immediate switch operation w may occur. After a shutdown of finishing line p in period i, constraint (3.8) forces z_{i+1,p} to zero (OFF), so that the line stays off for at least two successive periods and no immediate switch operation w occurs.

Minimal opening successive periods (MINOPEN):

z_{i,p} \ge \begin{cases} z_p^0 & \text{if } i = 1 \\ z_{i-1,p} & \text{else} \end{cases} + \begin{cases} w_p^0 & \text{if } i = 1 \\ w_{i-1,p} & \text{else} \end{cases} - 1 \quad \forall i, p    (3.7)

Minimal closing successive periods (MINCLOSE):

z_{i,p} \le \begin{cases} z_p^0 & \text{if } i = 1 \\ z_{i-1,p} & \text{else} \end{cases} - \begin{cases} w_p^0 & \text{if } i = 1 \\ w_{i-1,p} & \text{else} \end{cases} + 1 \quad \forall i, p    (3.8)
A detailed derivation of (3.5)-(3.8) is given in Appendix A. The switching operations of the finishing lines at the beginning of the planning horizon are included in the constraints (3.7) and (3.8); the model would be incomplete without the initial conditions of the switch operations w_p^0 and of the operating states z_p^0. The second category, the system material balance constraints, is marketing-oriented and describes the material balance among production, supply and demand. These constraints include:
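For I_p^{on} = I_p^{off} = 2, constraints (3.7) and (3.8) link each period only to its predecessor. A brute-force feasibility check (an illustrative sketch, with the switch indicators derived as in (3.5)-(3.6)) shows that they reject trajectories which switch back immediately after a startup or shutdown:

```python
# Feasibility check of the minimum up/down-time constraints (3.7)-(3.8)
# for I_on = I_off = 2, with initial state z0 = 0 and w0 = 0.
# w_i = |z_i - z_{i-1}| is the switch indicator from (3.5)-(3.6).

def feasible(z, z0=0, w0=0):
    zs = [z0] + list(z)
    ws = [w0] + [abs(zs[i] - zs[i - 1]) for i in range(1, len(zs))]
    for i in range(1, len(zs)):
        if not (zs[i] >= zs[i - 1] + ws[i - 1] - 1):   # (3.7) MINOPEN
            return False
        if not (zs[i] <= zs[i - 1] - ws[i - 1] + 1):   # (3.8) MINCLOSE
            return False
    return True

print(feasible([1, 1, 0, 0, 1]))  # True: every completed run lasts >= 2 periods
print(feasible([1, 0, 1, 0, 1]))  # False: switching every period violates (3.7)
```

Note that a switch in the last period is still allowed, since the constraints only bind the following period; this matches the boundary analysis in Appendix A.2.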
Constraints of material supply balance (SUPBAL)
Period Balance (PBAL):

\sum_{\{(j,l)\,:\,j+l-1=i\}} M_{j,l,f_p} + M^{+}_{i,f_p} = \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{i,r_p} + \begin{cases} M^{+}_{0,f_p} & \text{if } i = 1 \\ M^{+}_{i-1,f_p} & \text{else} \end{cases} \quad \forall i, p, f_p    (3.9)

In period i, the amount of sales plus the products stored in the current period must be equal to the products produced in this period plus the products stored in period i−1. The period balance (3.9) results from the Accumulation Balance (ACCBAL) below:

\sum_{\{(j,l)\,:\,j+l-1\le i\}} M_{j,l,f_p} + M^{+}_{i,f_p} = \sum_{j=1}^{i} \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{j,r_p} + M^{+}_{0,f_p} \quad \forall i, p, f_p    (3.10)
A detailed description of (3.9) and (3.10) and a mathematical proof of their equivalence are given in Appendix B. In this Chapter, PBAL is employed as the material supply balance constraint, since its sparse matrix structure leads to a significant increase in computational efficiency.
Constraints of the material demand balance (DEMBAL)
The sum of the supplies within the next L periods (including the current period i) for satisfying the demand of period i cannot be higher than this demand (3.11), and the difference between them is the shortage of supply (3.12), which the customer buys elsewhere. Note that (3.11) and (3.12) are different from the DEMBAL in Cui and Engell (2010), where the contract fulfillment has to be performed within the planning/scheduling horizon.

\sum_{l=1}^{L} M_{i,l,f_p} \le B_{i,f_p} \quad \forall i, p, f_p    (3.11)

B^{-}_{i,f_p} = B_{i,f_p} - \sum_{l=1}^{L} M_{i,l,f_p} \quad \forall i, p, f_p    (3.12)
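Constraints (3.11) and (3.12) simply state that whatever part of the demand B_{i,f_p} is not covered by supplies within the lateness window becomes the shortage B^{-}_{i,f_p}. A minimal numeric sketch (all values are illustrative):

```python
# Shortage (3.12): the unmet part of demand B[i] after summing the
# supplies M[i, l] over the permitted lateness window l = 1..L.

L = 3
demand = 2.0
supplies = {1: 1.2, 2: 0.5, 3: 0.0}   # M[i, l] for l = 1..L

delivered = sum(supplies[l] for l in range(1, L + 1))
assert delivered <= demand             # constraint (3.11)
shortage = demand - delivered          # B-[i], bought elsewhere by the customer
print(shortage)
```

In the optimization, the shortage term is penalized by γ_{i,f_p} in the objective, so supplies are pushed as close to the demand as capacity permits.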
Constraints of Material Complement Balance (COMBAL)
To enforce the cutoff effect, i.e. that the customers' orders are fulfilled only inside the planning horizon, (3.13) is added:

M_{i,l,f_p} = 0 \quad \forall i, l, p, f_p \text{ with } i + l - 1 > I    (3.13)
3.3. Numerical Results

The numerical results of the above deterministic planning model (3.1)-(3.13), for a planning horizon of five periods and an optimality gap of 0%, were obtained on a 2.99 GHz Intel Dual Core machine under Microsoft Windows XP using CPLEX 12.2; they are shown in Table 3.8 and Figures 3.1-3.4. The randomly generated demand profile is given in Table 3.7. In Figure 3.2, taking period i3 as an example, i1.l3 represents the delayed sales in period i3 towards the demand of period i1, i2.l2 represents the delayed sales in period i3 towards the demand of period i2, and i3.l1 represents the sales towards the demand of period i3. To satisfy the demands over the periods for all grain size fractions with maximum benefit, the numbers of polymerizations with the corresponding recipes started in each period are optimized: the production sold without delay is kept as close to the demand profile of the current period as possible, while supply deficiencies are fulfilled as far as possible within the next periods, because the penalties on late supply decrease the profit. Storage is kept as low as possible, since the storage cost increases along the planning horizon.
Table 3.7. EPS model: demand profile for five periods (in batches)

| Period | EPS type | f1 | f2 | f3 | f4 | f5 |
|--------|----------|--------|--------|--------|--------|--------|
| i1 | p1 | 1.0698 | 1.7455 | 1.3022 | 1.3699 | 0.7723 |
| i1 | p2 | 1.7462 | 0.556 | 1.2343 | 1.386 | 0.8174 |
| i2 | p1 | 1.1393 | 0.3324 | 1.0339 | 1.3475 | 2.2538 |
| i2 | p2 | 0.7551 | 1.661 | 1.3864 | 0.8317 | 1.2589 |
| i3 | p1 | 0.5859 | 1.488 | 1.5375 | 1.7876 | 0.2029 |
| i3 | p2 | 0.6496 | 1.0834 | 1.5466 | 1.5286 | 1.5898 |
| i4 | p1 | 1.7462 | 1.2418 | 1.3572 | 0.8098 | 0.9145 |
| i4 | p2 | 0.749 | 1.1847 | 0.5816 | 1.8396 | 1.5756 |
| i5 | p1 | 1.4703 | 0.8659 | 1.2253 | 1.506 | 1.9552 |
| i5 | p2 | 1.3414 | 0.3456 | 0.8289 | 1.4616 | 0.9999 |
Table 3.8. EPS model: problem size and solution statistics

| Constraints | Variables | Integer variables | Computation time (H:M:S) | Optimality gap |
|-------------|-----------|-------------------|--------------------------|----------------|
| 321 | 246 | 60 | 00:04:20 | 0% |
Figure 3.2. Polymerization numbers N_{i,r_p} (by recipe r1-r5) and sales M_{i,l,f_p} (by grain size fraction f1-f5 and lateness l), in batches, per time period and EPS type. [Bar charts not reproduced.]
Figure 3.3. Supply deficiency B^{-}_{i,f_p} and storage M^{+}_{i,f_p} (by grain size fraction f1-f5), in batches, per time period and EPS type. [Bar charts not reproduced.]
CONCLUSION

In this Chapter, a modified deterministic MILP model of the EPS logistics has been described. Like any real-world planning problem, the EPS logistics is confronted with various uncertainties that evolve over time, e.g. in the future demands, in the plant capacity (due to technical problems or lack of workforce) and in the product yields (in the EPS problem, the grain-size distribution is not exactly reproducible). A model of these uncertainties, and of the static and continuous flow of information on the uncertainties of the EPS logistics, in the setting of a two-stage stochastic mixed-integer program, is discussed in Chapter 1: Two-Stage Stochastic Mixed Integer Linear Programming.
APPENDIX A: ANALYSIS OF THE CONSTRAINTS ON FINISHING LINE OPERATIONS

A.1. Entity Analysis

The numerical relationships between the variables z_{i,p} and w_{i,p} in two adjacent periods i−1 and i are listed in Table A.1.

Table A.1. Numerical relationship between z_{i,p} and w_{i,p}

| z_{i-1,p} \ z_{i,p} | 0 | 1 |
|---------------------|---|---|
| 0 | w_{i,p} is 0 ① | w_{i-1,p} is 0 ②, w_{i,p} is 1 ③ |
| 1 | w_{i-1,p} is 0 ④, w_{i,p} is 1 ⑤ | w_{i,p} is 0 ⑥ |
The finishing line operations can be modeled by the constraints (A1) and (A2) below:

Cases ①③⑤⑥: w_{i,p} = \left| z_{i-1,p} - z_{i,p} \right| \quad \forall i, p    (A1)

Cases ②④: w_{i-1,p} \le 1 - \left| z_{i-1,p} - z_{i,p} \right| \quad \forall i, p    (A2)
Since w_{i,p} ∈ [0, 1], (A1) can be transformed into the two equivalent constraints (A3) and (A4):

z_{i-1,p} - z_{i,p} \le w_{i,p} \quad \forall i, p    (A3)

z_{i,p} - z_{i-1,p} \le w_{i,p} \quad \forall i, p    (A4)
The equivalence of (A1) with (A3) and (A4) is verified as follows:

- When z_{i-1,p} and z_{i,p} take the same value, (A3) and (A4) only require w_{i,p} ≥ 0. Considering the term ε_{i,p} w_{i,p} in the objective function, w_{i,p} = 0 gives the largest contribution to the objective, so w_{i,p} = 0 holds at the optimum; thus equivalent.
- When z_{i-1,p} and z_{i,p} take different values, (A3) and (A4) require w_{i,p} ≥ 1; since w_{i,p} ∈ [0, 1], this forces w_{i,p} = 1; thus equivalent.

(A2) has the following two equivalent forms, (A5) and (A6):
z_{i-1,p} - z_{i,p} \le 1 - w_{i-1,p} \quad \forall i, p    (A5)

z_{i,p} - z_{i-1,p} \le 1 - w_{i-1,p} \quad \forall i, p    (A6)
A.2. Boundary Analysis

Taking a planning/scheduling horizon with three periods as an example (see Figure A.1), the behavior of constraints (A1) and (A2) at the boundaries of the horizon is as follows:

- Boundary condition 1, starting point of the planning/scheduling horizon, t = 0: by setting the initial values w_p^0 = 0 and z_p^0 = 0, the finishing lines are in the off-state for at least two periods before the planning/scheduling starts.
- Boundary condition 2, ending point of the planning/scheduling horizon, t = 3: examining the constraints (A1) and (A2), we have:
  - w_{4,p} = |z_{3,p} - z_{4,p}|: since w_{4,p} lies outside the planning/scheduling horizon, it can be neglected;
  - w_{3,p} ≤ 1 - |z_{3,p} - z_{4,p}|: when z_{3,p} = z_{4,p}, the constraint is automatically satisfied and w_{3,p} has already been determined in the previous period; when z_{3,p} ≠ z_{4,p}, w_{3,p} = 0 and it can also be neglected, since it makes no contribution to the objective function.
In summary, at the boundaries of the planning/scheduling horizon, switches of the operating state of a finishing line can be neglected without any effect on the final planning/scheduling results.
Figure A.1. zi,p and wi,p of finishing line related to a planning / scheduling horizon with three periods.
APPENDIX B: ANALYSIS OF THE CONSTRAINTS ON THE MATERIAL SUPPLY BALANCE

B.1. Entity Analysis

With the product inventories, the supply amount in period i can be provided by the amounts stored and produced in any period j ≤ i. According to the actual calculation of the supply M_{i,l,f_p} in period i, two equivalent material supply balances are given below.
B.1.1. Accumulation Balance

The accumulated supply up to period i can be described as \sum_{\{(j,l)\,:\,j+l-1\le i\}} M_{j,l,f_p} (see Figure B.1).

Figure B.1. Accumulated supply calculated by \sum_{\{(j,l)\,:\,j+l-1\le i\}} M_{j,l,f_p} in each period. [Chart not reproduced.]
Thus the material balance based on this accumulation of supply can be written as:

\sum_{\{(j,l)\,:\,j+l-1\le i\}} M_{j,l,f_p} + M^{+}_{i,f_p} = \sum_{j=1}^{i} \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{j,r_p} + M^{+}_{0,f_p} \quad \forall i, f_p, p    (B1)
(B1), called the Accumulation Balance (ACCBAL), indicates that all the products supplied up to period i plus the products stored in period i must be equal to all the products produced up to period i plus the initial inventory stored before the planning/scheduling starts. (B1) is a modified version of the material balance (B2) in ModelT, following the definition of the amount of sales of batches towards demands in Section 3.1:

\sum_{j=1}^{i} \sum_{l} M_{l,j,f_p} + M^{+}_{i,f_p} = \sum_{j=1}^{i} \sum_{r_p=1}^{R_p} \rho_{f_p,r_p} N_{j,r_p} + M^{+}_{0,f_p} \quad \forall i, f_p, p    (B2)
However, (B1) involves information from many periods. This motivates the transformation of (B1) to an equivalent representation, the Period Balance.
B.1.2. Period Balance

The supply accumulated within period i can be described as ∑_{j=1}^{i} M_{j,i,f_p} (see Figure B.2).

Figure B.2. Accumulated supply calculated by ∑_{j=1}^{i} M_{j,i,f_p} in each period.
Thus the material balance based on this supply accumulation can be written as:

∑_{j=1}^{i} M_{j,i,f_p} + M_{i,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{i,r_p} + { M_{0,f_p} if i = 1; M_{i−1,f_p} else }    ∀ i, f_p, p    (B3)
(B3), called the Period Balance (PBAL), states that all the products supplied in period i plus the products stored in period i must equal the products produced in period i plus the products stored in period i−1.
B.2. Mathematical Proof

Claim: the Accumulation Balance is equivalent to the Period Balance.

(1.) Assume the Accumulation Balance is valid; we derive PBAL from ACCBAL.

Boundary: when i = 1, ACCBAL reads:
M_{1,1,f_p} + M_{1,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{1,r_p} + M_{0,f_p},
PBAL:
M_{1,1,f_p} + M_{1,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{1,r_p} + M_{0,f_p},
The two coincide. When 2 ≤ i ≤ I, ACCBAL reads:
∑_{l=1}^{i} ∑_{j=1}^{l} M_{j,l,f_p} + M_{i,f_p} = ∑_{j=1}^{i} ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{j,r_p} + M_{0,f_p},

Separating the terms of period i:

∑_{l=1}^{i−1} ∑_{j=1}^{l} M_{j,l,f_p} + ∑_{j=1}^{i} M_{j,i,f_p} + M_{i,f_p} = ∑_{j=1}^{i−1} ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{j,r_p} + ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{i,r_p} + M_{0,f_p},

∑_{j=1}^{i} M_{j,i,f_p} + M_{i,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{i,r_p} + [ ∑_{j=1}^{i−1} ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{j,r_p} + M_{0,f_p} − ∑_{l=1}^{i−1} ∑_{j=1}^{l} M_{j,l,f_p} ],

and by ACCBAL for i−1 the bracketed term equals M_{i−1,f_p}, so

∑_{j=1}^{i} M_{j,i,f_p} + M_{i,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{i,r_p} + M_{i−1,f_p}.
This is exactly PBAL; the two balances are equivalent.
(2.) Assume the Period Balance is valid; we derive ACCBAL from PBAL.

Boundary: when i = 1, PBAL reads:
M_{1,1,f_p} + M_{1,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{1,r_p} + M_{0,f_p},
ACCBAL:
M_{1,1,f_p} + M_{1,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{1,r_p} + M_{0,f_p},
The two coincide. When 2 ≤ i ≤ I, PBAL reads:
∑_{j=1}^{i} M_{j,i,f_p} + M_{i,f_p} = ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{i,r_p} + M_{i−1,f_p},

Summing this equation over the periods s = 1, …, i:

∑_{s=1}^{i} ( ∑_{j=1}^{s} M_{j,s,f_p} + M_{s,f_p} ) = ∑_{s=1}^{i} ( ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{s,r_p} + M_{s−1,f_p} ),

∑_{s=1}^{i} ∑_{j=1}^{s} M_{j,s,f_p} + ∑_{s=1}^{i} M_{s,f_p} = ∑_{s=1}^{i} ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{s,r_p} + ∑_{s=1}^{i} M_{s−1,f_p},

Since the inventory terms telescope, ∑_{s=1}^{i} M_{s,f_p} − ∑_{s=1}^{i} M_{s−1,f_p} = M_{i,f_p} − M_{0,f_p}, and therefore

∑_{l=1}^{i} ∑_{j=1}^{l} M_{j,l,f_p} + M_{i,f_p} = ∑_{j=1}^{i} ∑_{r_p=1}^{R_p} ρ_{f_p,r_p} N_{j,r_p} + M_{0,f_p}.
This is exactly ACCBAL, which completes the proof.
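The equivalence just proved can also be checked numerically. The sketch below uses made-up data; `M_supply`, `rho`, `N` and `M_inv` are hypothetical stand-ins for M_{j,l,f_p}, the conversion factors, the batch numbers N_{i,r_p} and the inventories M_{i,f_p}, with the product indices dropped. It builds the inventories from the Period Balance (B3) and then verifies the Accumulation Balance (B1) in every period:

```python
# Numeric sanity check of the ACCBAL <-> PBAL equivalence on random data.
import random

random.seed(0)
I, R = 5, 3                                   # periods and recipes (one product)
rho = [random.randint(1, 4) for _ in range(R)]            # conversion factors
N = [[random.randint(0, 3) for _ in range(R)] for _ in range(I + 1)]  # batches; N[0] unused
M0 = 10                                       # initial inventory

# supplies M_supply[j][l]: produced in period j, supplied in period l >= j
M_supply = [[0] * (I + 1) for _ in range(I + 1)]
for j in range(1, I + 1):
    for l in range(j, I + 1):
        M_supply[j][l] = random.randint(0, 2)

# build inventories from the Period Balance (B3) ...
M_inv = [0] * (I + 1)
M_inv[0] = M0
for i in range(1, I + 1):
    produced = sum(rho[r] * N[i][r] for r in range(R))
    supplied = sum(M_supply[j][i] for j in range(1, i + 1))
    M_inv[i] = M_inv[i - 1] + produced - supplied

# ... and check the Accumulation Balance (B1) in every period
for i in range(1, I + 1):
    lhs = sum(M_supply[j][l] for l in range(1, i + 1) for j in range(1, l + 1)) + M_inv[i]
    rhs = sum(rho[r] * N[j][r] for j in range(1, i + 1) for r in range(R)) + M0
    assert lhs == rhs, f"ACCBAL violated in period {i}"
```

Note that the check exercises exactly the telescoping argument of part (2.) of the proof: the inventories are defined recursively through PBAL, and ACCBAL emerges as the summed form.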
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN 978-1-61209-579-0
© 2012 Nova Science Publishers, Inc.

Chapter 10

COMPLEXITY OF DIFFERENT ILP MODELS OF THE FREQUENCY ASSIGNMENT PROBLEM

Zoltán Ádám Mann∗ and Anikó Szajkó†
Budapest University of Technology and Economics
Department of Computer Science and Information Theory
Abstract

The frequency assignment problem (FAP) arises in wireless communication networks, such as cellular phone communication systems, television broadcasting, WLANs, and military communication systems. In all these applications, the task is to assign frequencies to a set of transmitters, subject to interference constraints. The exact form of the constraints and the objective function varies according to the specific application. Integer linear programming (ILP) is widely used to solve the different flavors of the FAP. For most FAP versions, there is more than one natural ILP formulation, e.g. using a large number of binary variables or a smaller number of integer variables. A common experience with these solution techniques, as with NP-hard optimization problems in general, is a high variance in problem complexity. Some problem instances are tremendously hard to solve optimally, while there are also examples of relatively big problem instances that are nevertheless quite easy to solve. In general, it is hard to predict how long it will take to solve a given problem instance. This article presents a systematic study of how the complexity of the FAP depends on different parameters of the ILP model. We examine different types of constraints, different problem sizes and constraint densities, and varying sets of available frequencies. We conduct empirical measurements with an ILP solver to assess how problem complexity depends on these factors. Based on the empirical data, it becomes possible to predict how time-consuming the solution of a given problem instance will be, depending on the ILP model parameters. The ability to predict complexity is useful in several scenarios. First of all, it allows a sound judgement of whether it is feasible to solve a given ILP formulation optimally. Moreover, it also supports sophisticated load-balancing when multiple FAPs are solved on parallel machines.
Eventually, a better understanding of the origins of complexity may lead to enhanced optimization techniques.

∗ Email address: [email protected]
† Email address: [email protected]
1. Introduction
Already in the 1890s, research started on developing wireless communication devices. In 1909, Marconi and Braun won the Nobel Prize for the invention of wireless telegraphy. Due to the rapid evolution of the technology, a hundred years later one can hardly imagine life without radio and TV transmission, satellite communication, WiFi, GPS (Global Positioning System¹), mobile phones, remote controls, radars, WLANs (Wireless Local Area Networks) etc. All these services operate in the radio waveband [3 Hz, 300 GHz]. Because of the extensive use of frequencies, the phenomenon of interference must be taken into account. It occurs if two communication channels are close to each other both geographically and in the radio band. Therefore, in each frequency planning task, frequencies should be assigned to communication channels so that certain constraints are satisfied and the intensity of interference is minimized. Research on frequency assignment problems (FAP) led to the insight in the 1960s that finding an optimal solution is a quite hard mathematical problem.

The use of the radio spectrum is regulated by governments and, worldwide, by the International Telecommunication Union (ITU). Suppliers and operators of wireless networks are allowed to use only certain frequency bands, depending also on the geographical location. Usually, the available frequency band [f_min, f_max] is divided into channels with the same bandwidth (∆). In this way, the channels (which are often called frequencies, too) can be numbered from 1 to N, where N = (f_max − f_min)/∆. In some cases, an operator may not be allowed to use all the channels it paid for, for instance because of special regulations near country borders. We will denote the set of frequencies by F = {1, . . . , N}, and the set of frequencies available to a certain connection v by F(v), where F(v) ⊆ F.
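The channel numbering can be sketched as follows; the band edges and channel width below are illustrative only (a GSM-like 25 MHz band), not values taken from the text:

```python
# Channel numbering: the band [f_min, f_max] is divided into N = (f_max - f_min) / delta
# channels of equal width delta, labelled 1..N. The concrete numbers are made up.
f_min, f_max, delta = 890.0, 915.0, 0.2   # MHz; 25 MHz band, 200 kHz channels

N = round((f_max - f_min) / delta)        # number of channels, here 125

def channel_center(n: int) -> float:
    """Center frequency (MHz) of channel n, for 1 <= n <= N."""
    return f_min + (n - 0.5) * delta
```

With these numbers, channel 1 is centered at 890.1 MHz and channel N at 914.9 MHz; only the channel indices enter the ILP models discussed later.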
The magnitude of interference of signals depends not only on the location of the transmitters and receivers, but also on the signal strength, the direction of transmission, geographical circumstances and weather conditions. In practice, more than two signals together can lead to interference too, but in most cases only the interference between pairs of communication channels is taken into consideration.

In this paper, we deal with Fixed Channel Assignment (FCA): time-invariant systems, in which the communication channels are constant over time. Extensions to time-varying models (Dynamic Channel Assignment, DCA) or mixed models (Hybrid Channel Assignment, HCA) are not considered.

For solving frequency assignment problems, several solution techniques are used, both exact algorithms and heuristics. A comprehensive survey can be found in [1] and [10]. One of the most popular solution techniques involves modeling the FAP by means of an Integer Linear Program (ILP) and using a general-purpose ILP solver to solve it. Unfortunately, all natural formulations of the FAP are NP-hard [3]. The ILP approach, just like any other known exact method for solving the FAP, takes exponentially long in the worst case. On the other hand, there are also many problem instances that are relatively easy to solve. This high variability in algorithm runtime poses a significant challenge for its practical application, because it is hard to predict whether the algorithm will solve a given problem instance within a couple of seconds or will run for several days (or even longer). This phenomenon is common in the case of NP-hard problems [4, 5, 7, 9].

¹The list of used abbreviations can be found at the end of the chapter.
To cope with this challenge, this chapter presents a comprehensive empirical study about the dependence of problem complexity on the problem formulation and the problem’s parameters. In the first half of the chapter, we review the different FAP problem models and their possible ILP formulations. In the second half of the chapter, we present the empirical results.
2. Frequency Assignment Problems
In this section, we introduce the different flavors and models of FAPs.
2.1. Common Application Domains

2.1.1. Mobile Phone Networks
In this application, communication takes place between a fixed antenna and a mobile phone. Each antenna covers a certain region, where it can serve the mobile devices. In TDMA (Time Division Multiple Access) systems, each frequency can be used to serve several mobile phones. In addition, with the help of TRXs (transmitter/receiver units), multiple frequencies can be assigned to the same antenna. In general, several antennas are installed on one physical unit (site). In GSM (Global System for Mobile Communications) networks, usually one TRX can serve 8 mobile devices using TDMA, and up to 12 TRXs can be installed on one antenna. Depending on the available frequencies (especially near country borders), the extent of interference and the applied technology, we distinguish four types of constraints:

• co-cell separation constraint: the difference between frequencies assigned to the same cell has to be at least γ(v, v). In most cases γ(v, v) = 3.

• co-site separation constraint: if the antennas v and u are located on the same site, then the difference between their frequencies has to be at least γ(v, u). In general γ(v, u) = 2.

• interference constraint: due to other interference reasons, the difference between the frequencies of antennas u and v has to be at least γ(v, u).
  – If γ(v, u) = 1 (i.e., they are not allowed to get the same frequency), then it is called a co-channel constraint.
  – If γ(v, u) = 2 (i.e., they are not allowed to get even neighboring frequencies), then it is an adjacent channel constraint.

• handover separation constraint: at times, a mobile phone might have to switch to another serving antenna. For this purpose, GSM systems provide the Broadcast Control Channels (BCCH). Their frequencies have to differ by at least two units from any other frequencies used by the concerned antennas.
2.1.2. Radio and Television Transmission
The model is much simpler in this case; however, the prohibited frequency differences are not contiguous. In general, the frequency differences 1, 2, 5 and 14 are banned because of the corresponding harmonics.

2.1.3. Military Applications
In military applications, both parties may change their location. Two frequencies are assigned to each communication channel (one per communication direction). Their difference is constant, so the frequencies are grouped into pairs (with this constant difference), and these pairs are assigned to communication channels. The situation is complicated further by the use of horizontal and vertical polarization: the extent of interference depends not only on the geographical circumstances, but also on the polarization of the signals.

2.1.4. Satellite Communication
The transmitter and receiver are located on the Earth, but they communicate through one or more satellites. First, all signals are conveyed to a satellite by means of uplink frequencies, and the receiver then obtains the signals by means of downlink frequencies. Since the difference between the downlink and uplink frequencies is fixed and considerably large, it suffices to plan just one of them. Successive frequencies have to be assigned to transmitters, so that each of them is used just once. In addition to the standard interference model (where only the interference between two signals is considered), efforts have been made to also take into account the interference caused by more than two signals [2].

2.1.5. Wireless Local Area Networks
Planning the frequencies of WLANs is one of the newest application domains [13]. WLANs allow mobile devices (for instance notebooks) to communicate with the help of an access point, which in turn is directly connected to a wired network or the internet. For the operation of such systems, only 13 frequencies are available, spaced 5 MHz apart. At the same time, frequencies must differ by at least 24 MHz to avoid interference. Thus, planning WLANs is often handled as a 3-frequency problem, in which the location of access points also plays a crucial role.
2.2. Models of Frequency Planning
Based on the application domains presented above, researchers have developed different models of frequency planning [1]. First, we review the general constraints, and then the differences between the models.

2.2.1. General Constraints
Let V denote the set of transmitters (or antennas, or communication channels) to which we have to assign frequencies. As mentioned, the set of frequencies is F = {1, . . . , N}, whilst
F(v) denotes the frequencies available to a transmitter v ∈ V (F(v) ⊆ F).

• multiplicity constraint: m(v) frequencies have to be assigned to antenna v. In most cases m(v) = 1, except for mobile phone networks.

• interference matrix: for each pair of antennas v, w and frequencies f ∈ F(v), g ∈ F(w), a number p_{v,w}(f, g) ≥ 0 is defined, which is proportional to the extent of interference caused by assigning f to v and g to w. It is called:
  – co-channel penalty, if |f − g| = 0;
  – adjacent channel penalty, if |f − g| = 1.

• maximum tolerable interference: there is a given value p_max, and all interference above this level must be avoided. That is, if p_{v,w}(f, g) > p_max, then it is not allowed to assign frequency f to antenna v and frequency g to antenna w at the same time. The rationale behind such a hard constraint is that, although it is possible to minimize the overall interference in a network, it would not be acceptable if communication in one part of the network were hindered by too high local interference.

• blocked channel: for some reason, a frequency must not be used by a communication channel.

By setting the interference matrix and the maximum tolerable interference appropriately, an arbitrary separation constraint of s units (i.e., the frequencies of the transmitters v and u must differ by at least s) can be achieved.
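The last remark can be made concrete: a hard s-unit separation is obtained by pushing the penalty above p_max for every frequency pair that is too close. A small sketch, with all numeric values made up for illustration:

```python
# Encoding a hard separation constraint of s units between a transmitter pair
# via the interference matrix and the maximum tolerable interference p_max.
P_MAX = 10

def separation_penalty(s):
    """Return a penalty p(f, g) that forbids |f - g| < s once p > P_MAX is disallowed."""
    def p(f, g):
        return P_MAX + 1 if abs(f - g) < s else 0
    return p

p_vw = separation_penalty(3)              # require |f - g| >= 3 for this pair
freqs = range(1, 6)
forbidden = {(f, g) for f in freqs for g in freqs if p_vw(f, g) > P_MAX}
# e.g. (1, 2) is forbidden (difference 1), while (1, 4) is allowed (difference 3)
```

In other words, the interference matrix subsumes all the separation constraints of Section 2.1 as special cases.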
2.2.2. FFAP
Feasibility Frequency Assignment Problem: given the constraints described above, the task is to decide whether there exists a solution satisfying all the conditions.

2.2.3. MaxFAP
Maximum Service FAP: if the FFAP is not solvable, i.e. there is no solution satisfying all the conditions, we may settle for a slightly worse solution by relaxing the condition on the number of assigned frequencies and maximizing that number instead. On the other hand, even if the FFAP is solvable, maximizing the number of assigned frequencies is still a valid objective, since the overall service quality can be improved this way. Given the described constraints and a number l(v) for each antenna, denoting the minimum number of frequencies that have to be assigned to v, we define new variables n(v) denoting the number of frequencies actually assigned to antenna v.

• If the FFAP is solvable, then n(v) ≥ m(v) should hold for all antennas, where m(v) is the previously defined multiplicity number. Providing an upper bound on some n(v) is also possible but not obligatory.

• If the FFAP is not solvable, then we specify l(v) ≤ n(v) ≤ m(v).
The aim is to maximize the overall service made available, i.e. the total number of assigned frequencies ∑_{v∈V} n(v). The lower bounds on each n(v) guarantee sufficient service quality in the whole system.

2.2.4. MOFAP
Minimum order FAP: given the general constraints as above, the aim is to minimize the number of frequencies used in the network. We include this model only for the sake of completeness, as nowadays operators pay for a frequency band and not for individual frequencies. Naturally, MOFAP is only meaningful if the FFAP is solvable.

2.2.5. MSFAP
Minimum span FAP: given the general constraints as above, the aim is to minimize the span of the used frequency band, i.e. the difference between the highest and the lowest frequency used. Obviously, the FFAP needs to be solvable in this case, too. As opposed to MOFAP, MSFAP is a realistic model of operators' cost-minimization efforts, and hence of great practical importance.

2.2.6. MIFAP
Minimum interference FAP: the aim is to minimize the total interference in the entire network:

∑_{v,w∈V} ∑_{f∈F(v), g∈F(w)} p_{v,w}(f, g).
This is the most difficult variant of the FAP; its solution with an exact algorithm tends to take too long for practical applications, so usually heuristic methods are used. As we focus on exact ILP-based algorithms in this chapter, we do not address MIFAP in more depth. In the next sections, we deal with the solution of the following FAP formulations: FFAP, MaxFAP, MOFAP and MSFAP. FFAP is the only decision problem; the others are optimization problems.
3. ILP Formulations of FAP
We present two different approaches to formulating FAP problems as integer programs. The first approach makes use of a large number of binary variables, whereas the second approach uses a moderate number of integer variables.
3.1. Using Binary Variables
In all FAP versions, the following binary variables are defined for all v ∈ V and f ∈ F(v):

x_{v,f} := { 1 if frequency f is assigned to antenna v; 0 otherwise }.

The constraints and objective function depend on the FAP version, as shown below.
3.1.1. FFAP (Feasibility FAP)
FFAP is a decision problem, hence there is no objective function. The constraints are as follows:

∑_{f∈F(v)} x_{v,f} = m(v)    ∀v ∈ V    (1)

x_{v,f} + x_{w,g} ≤ 1    ∀v, w ∈ V, f ∈ F(v), g ∈ F(w), p_{v,w}(f, g) > p_max    (2)
Here, (1) ensures that an adequate number of frequencies is assigned to each transmitter. (2) ensures the minimum necessary distance between the frequencies of transmitter pairs by forbidding the assignment of frequency pairs to antenna pairs that would cause higher interference than the maximum tolerable interference. In this formulation, the number of variables is |V| · |F| (all of them binary), and the number of constraints is at most |V| + |V|² · |F|².
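To make the formulation concrete, a tiny FFAP instance can be checked by brute force. Enumerating all assignments is of course only viable for toy sizes, and the instance data below (three transmitters, four frequencies, two interfering pairs requiring a 2-unit separation) is made up:

```python
# Brute-force feasibility check of the FFAP constraints (1)-(2) on a toy instance.
# Here m(v) = 1 for every v, so an assignment is one frequency per transmitter,
# and p_{v,w}(f,g) > p_max is modelled as |f - g| < 2 for the conflicting pairs.
from itertools import product

V = [0, 1, 2]                      # transmitters
F = [1, 2, 3, 4]                   # frequencies
conflicts = [(0, 1), (1, 2)]       # transmitter pairs that interfere

def feasible(assign):              # assign maps v -> its single frequency
    return all(abs(assign[v] - assign[w]) >= 2 for v, w in conflicts)

solutions = [dict(zip(V, fs)) for fs in product(F, repeat=len(V))
             if feasible(dict(zip(V, fs)))]
```

The instance is feasible (it has ten satisfying assignments, the lexicographically first being {0: 1, 1: 3, 2: 1}), so an ILP solver applied to (1)-(2) would report feasibility as well.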
3.1.2. MaxFAP (Maximum Service FAP)
As defined earlier, the task is to maximize the total number of assigned frequencies, where n(v), the number of frequencies assigned to antenna v, is subject to given lower and upper bounds (l(v) and m(v), respectively). To create the ILP formulation, we extend the set of variables with new integer variables n(v) for each antenna v. Thus, we obtain the following ILP formulation:

Maximize: ∑_{v∈V} n(v)

∑_{f∈F(v)} x_{v,f} = n(v)    ∀v ∈ V    (3)

l(v) ≤ n(v)    ∀v ∈ V    (4)

n(v) ≤ m(v)    ∀v ∈ V    (5)

x_{v,f} + x_{w,g} ≤ 1    ∀v, w ∈ V, f ∈ F(v), g ∈ F(w), p_{v,w}(f, g) > p_max    (6)
Constraint (6) is the same as in the previous case. Constraints (4) and (5) guarantee that the number of frequencies assigned to a transmitter will be between the desired lower and upper bounds. It should be noted that the integrality of the n(v) variables is automatically satisfied because of constraint (3); thus, the n(v) variables need not be declared as integer. The number of binary variables is |V| · |F|, the number of other variables is |V|, and the number of constraints is at most 3 · |V| + |V|² · |F|².
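On toy instances, the MaxFAP objective can likewise be evaluated exhaustively. In the sketch below (all data made up: two antennas sharing three frequencies, a pure co-channel conflict between them), each antenna v picks a frequency set of size between l(v) and m(v):

```python
# Exhaustive MaxFAP on a toy instance: maximize the total number of assigned
# frequencies subject to the conflict constraint (6).
from itertools import combinations

F = [1, 2, 3]
l = {0: 1, 1: 1}                   # lower bounds l(v)
m = {0: 2, 1: 2}                   # upper bounds m(v)

def ok(S0, S1):                    # co-channel conflict: the antennas may not share a frequency
    return not (S0 & S1)

def subsets(lo, hi):
    return [set(c) for r in range(lo, hi + 1) for c in combinations(F, r)]

best = max(((len(S0) + len(S1), tuple(sorted(S0)), tuple(sorted(S1)))
            for S0 in subsets(l[0], m[0])
            for S1 in subsets(l[1], m[1])
            if ok(S0, S1)))
# best[0] == 3: at most three frequencies can be assigned in total
```

The within-antenna separation constraints are omitted here for brevity; in the full model they would appear as additional conditions in `ok`.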
3.1.3. MOFAP (Minimum Order FAP)
Here, the aim is to minimize the number of used frequencies. Therefore, we augment the basic FFAP model with one more binary variable for each frequency:

y_f := { 1 if frequency f is used by at least one transmitter; 0 otherwise }.    (7)

Besides constraints (1) and (2) of the basic FFAP model, the following extensions are necessary:

Minimize: ∑_{f∈F} y_f    (8)

y_f ≥ x_{v,f}    ∀v ∈ V, f ∈ F(v)    (9)
It should be noted that the y_f variables do not need to be declared binary or even integer; they will obtain the right value of 0 or 1 automatically:

Proposition 1. In an optimal solution of the integer program (8)–(9), (1)–(2), the value of the y_f variables is – even if they are not declared as integer – as in (7).
Proof. Consider an optimal solution. If f ∈ F is used by at least one communication channel, then there exists v with x_{v,f} = 1. Because of (9), y_f ≥ 1 follows. Since the solution is optimal, y_f = 1 must hold. If f is not used by any communication channel, then x_{v,f} = 0 for each v ∈ V, thus (9) only specifies y_f ≥ 0. Taking the optimality of the solution into account, y_f = 0 holds.

In this model, the number of binary variables is |V| · |F|, the number of further variables is |F|, whilst the number of constraints is at most |V| + |V| · |F| + |V|² · |F|².
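Proposition 1 can be illustrated on a toy solution: minimizing ∑ y_f under y_f ≥ x_{v,f} forces y_f to the smallest feasible value, which is max_v x_{v,f} ∈ {0, 1}. The assignment below is made up:

```python
# Illustration of Proposition 1: at the optimum, y_f = max_v x_{v,f}, which is 0 or 1.
V = [0, 1]
F = range(1, 6)
x = {(0, 2): 1, (1, 4): 1}                            # v=0 uses f=2, v=1 uses f=4
y = {f: max(x.get((v, f), 0) for v in V) for f in F}  # smallest y obeying y_f >= x_{v,f}
# sum(y.values()) == 2: exactly two distinct frequencies are in use
```

Even though y is computed here as a continuous lower envelope, every value lands on 0 or 1, exactly as the proposition claims.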
3.1.4. MSFAP (Minimum Span FAP)
The difference between the highest and the lowest used frequency is to be minimized. There are several ways to solve MSFAP by means of ILP. For instance, a series of FFAPs can be solved with a decreasing range of frequencies; the optimal solution of the MSFAP is the frequency band used by the last feasible FFAP instance before the FFAPs turn infeasible. It is also possible to model the MSFAP as a single integer program. Even for this, there are several possibilities.

1. General solution: Define two new binary variables for each frequency:

u_f := { 1 if f ∈ F is the highest used frequency; 0 otherwise }.

l_f := { 1 if f ∈ F is the lowest used frequency; 0 otherwise }.
The objective function is:

Minimize: ∑_{f∈F} f · u_f − ∑_{f∈F} f · l_f

Similarly to the previous case, the basic FAP constraints (1) and (2) need to be supplemented with some others:

∑_{f∈F} u_f = 1    (10)

∑_{f∈F} l_f = 1    (11)

x_{v,f} + u_g ≤ 1    ∀v ∈ V, f ∈ F(v), g ∈ F, f > g    (12)

x_{v,f} + l_g ≤ 1    ∀v ∈ V, f ∈ F(v), g ∈ F, f < g    (13)
(10) and (11) guarantee that, from all the available frequencies, exactly one will be marked as the highest used frequency and one as the lowest. It follows that the objective function is exactly the difference between the highest and the lowest used frequency. Because of (12), no frequency higher than the one marked as highest may be assigned to any communication channel, and analogously, due to (13), no frequency lower than the one marked as lowest may be assigned.
The number of variables in this formulation is |V| · |F| + 2|F| (all of them binary), and the number of constraints is at most |V| + |V|² · |F|² + 2|V| · |F|² + 2, which is significantly more than in the previous cases.

2. Simplified solution: We present here one more ILP formulation for a special case of MSFAP. Here, F = {1, 2, . . . , |F|}, from which the frequencies {1, 2, . . . , f_max} are used, and the objective is to minimize f_max. This model needs significantly fewer variables and constraints than the previous formulation. Moreover, if no globally or locally blocked channels exist in the network, then assuming that 1 is the lowest used frequency imposes no restriction, as only the differences between the frequencies play a role, not their actual values. This formulation is somewhat similar to the one for MOFAP shown earlier. Necessary variables:

x_{v,f} := { 1 if frequency f is assigned to transmitter v; 0 otherwise }.

y_f := { 1 if there is a frequency g ≥ f that is used by at least one transmitter; 0 otherwise }.    (14)
ILP representation:

Minimize: ∑_{f∈F} y_f    (15)

∑_{f∈F(v)} x_{v,f} = m(v)    ∀v ∈ V    (16)

x_{v,f} + x_{w,g} ≤ 1    ∀v, w ∈ V, f ∈ F(v), g ∈ F(w), p_{v,w}(f, g) > p_max    (17)

y_f ≥ x_{v,f}    ∀v ∈ V, f ∈ F(v)    (18)

y_{f+1} ≤ y_f    ∀ 1 ≤ f ≤ |F| − 1    (19)
(16) and (17) are exactly as in FFAP. Similarly to MOFAP, the y_f variables need not be declared as binary or even integer, as they will automatically obtain the correct values of 0 or 1:

Proposition 2. In an optimal solution of the integer program (15)–(19), the value of the y_f variables is – even if they are not declared as integer variables – as in (14).
Proof. If there is a frequency g ≥ f that is used by at least one transmitter, then y_g ≥ 1 because of (18). Applying (19) g − f times, we obtain y_f ≥ 1. Since the solution is optimal, y_f = 1 must hold. If there is no frequency g ≥ f that is used by at least one transmitter, then (18) only yields y_g ≥ 0 for all g ≥ f. Thus y_f ≥ 0, and (19) does not impose a stronger bound on y_f. Hence, in an optimal solution, y_f = 0 holds.

As a consequence, y_f = 1 for all f ≤ f_max and y_f = 0 afterwards. Hence, the objective function is exactly f_max. The number of variables in this formulation is |V| · |F| + |F|, of which |V| · |F| must be declared binary. The number of constraints is at most |V| + |V|² · |F|² + |V| · |F| + |F|.
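The step structure established in the proof is easy to verify on an example; the used-frequency set below is made up:

```python
# The y_f of the simplified MSFAP model form a step: 1 up to f_max, 0 afterwards,
# so the objective sum(y) equals f_max.
F = list(range(1, 9))                     # frequencies 1..8
used = {1, 3, 5}                          # frequencies assigned to some transmitter
y = [1 if any(g >= f for g in used) else 0 for f in F]   # definition (14)
f_max = max(used)
# sum(y) == f_max == 5, and y is monotonically non-increasing as required by (19)
```

This is precisely why minimizing ∑ y_f in (15) minimizes the highest used frequency.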
3.2. Using Integer Variables
The ILP formulations presented so far make use of a large number of binary variables. In this section, we present ILP formulations with a smaller number of integer variables instead. The basic idea is to define one variable per transmitter, the value of which is the frequency to be assigned to that transmitter. The challenge in this approach is that separation constraints are given in terms of the difference between pairs of frequencies, e.g. |f_v − f_w| ≥ s, which is not linear in the frequencies. Hence, we first need a way to linearize constraints involving absolute values.
3.2.1. Linearizing Absolute Values
This technique can also be used in other ILP problems, whenever one would like to linearize constraints that involve absolute values. Henceforward, let c be an arbitrary positive constant, and x a variable or a linear expression containing one or more variables.

1. Case |x| ≤ c: The linearization of this sort of constraint is quite simple, as it can be substituted with two linear inequalities: x ≥ −c and x ≤ c.

2. Case |x| ≥ c: At first sight, this kind of constraint cannot be linearized, because the set of feasible values of x is not convex. However, if x is bounded, then even this kind of constraint can be linearized by means of an auxiliary binary variable:

Theorem 3. Assume that x is bounded, and let M be a sufficiently large constant, so that M ≥ |x| + c for each feasible value of x. Then x fulfils |x| ≥ c if and only if it fulfils the following ILP program with a suitable choice of the new variable B:

x + M · B ≥ c
−x + M · (1 − B) ≥ c
B ∈ {0, 1}

Proof. First, assume that |x| ≥ c is fulfilled.
• If x ≥ 0, then let B = 0. It can be easily verified that the constraints of the ILP program are fulfilled: the first because x ≥ c, and the second because of the choice of M.
• If x < 0, then let B = 1. Again, it can be easily verified that the constraints of the ILP program are fulfilled: the first because of the choice of M, and the second because −x ≥ c.

Second, assume that the constraints of the ILP program are fulfilled.

• If B = 0, then the first constraint of the ILP program guarantees that x ≥ c.
• If B = 1, then the second constraint of the ILP program guarantees that −x ≥ c.

In both cases, it follows that |x| ≥ c.

Next, we show ILP formulations based on this technique for some FAP variants.

3.2.2. FFAP (Feasibility FAP)
We define for each antenna as many integer variables as frequencies should be assigned to that antenna. Denote these variables by z_1, z_2, . . ., z_k (k ≥ n). Hence the minimum required distances between frequencies can be written as

|z_i − z_j| ≥ s_i,j   for all 1 ≤ i, j ≤ k,
Z. Á. Mann and A. Szajkó
where s_i,j are given positive constants. Furthermore, we define the constraints

f_min ≤ z_i ≤ f_max   for all 1 ≤ i ≤ k,

so that the assigned frequencies z_i are in the allowed range. It is also possible to require globally or locally blocked channels: e.g. the inequality |z_i − f| ≥ 1 ensures that frequency f will not be assigned to z_i. The constraints involving absolute values can be linearized as stated in Theorem 3. The condition that the expressions in absolute values must be bounded is fulfilled, since |z_i − z_j| ≤ f_max and |z_i − f| ≤ f_max.

3.2.3. MSFAP (Minimum Span FAP)
The above formulation for FFAP can be easily extended to a formulation for MSFAP. We define two new integer variables: s_f for the lowest and g_f for the highest frequency used by any transmitter in the network. In accordance with the meaning of s_f and g_f, the following constraints are added to the integer program:

g_f ≥ z_i   for all 1 ≤ i ≤ k
s_f ≤ z_i   for all 1 ≤ i ≤ k

The objective function is:

Minimize: g_f − s_f,

which is exactly the span of the used frequency band.
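The equivalence stated in Theorem 3 is easy to sanity-check by brute force over a bounded range of integer values. The sketch below is only illustrative; the constants c, lo and hi are arbitrary choices, and M is set according to the condition of the theorem:

```python
# Brute-force sanity check of Theorem 3: for a bounded x, |x| >= c holds
# if and only if the big-M system is feasible for some B in {0, 1}.
def satisfies_big_m(x, c, M):
    """True iff x + M*B >= c and -x + M*(1 - B) >= c for some B in {0, 1}."""
    return any(x + M * B >= c and -x + M * (1 - B) >= c for B in (0, 1))

c = 3
lo, hi = -10, 10               # feasible range of x
M = max(abs(lo), abs(hi)) + c  # M >= |x| + c for every feasible x

for x in range(lo, hi + 1):
    assert (abs(x) >= c) == satisfies_big_m(x, c, M)
print("Theorem 3 verified for all x in [%d, %d]" % (lo, hi))
```

The same check extends directly to the FFAP constraints |z_i − z_j| ≥ s_i,j, with x standing for the difference z_i − z_j.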
4. Empirical Measurements

4.1. Implementation
In order to assess the practical behaviour of the different ILP models of the FAP, we implemented them in the BCAT (Budapest Complexity Analysis Toolkit) framework [12]. The aim of BCAT is to facilitate the development and testing of algorithms. It offers the possibility to set up different problem classes, analyzers of these problems, converters that transform the instances of a certain problem into instances of another problem, and algorithms that solve the problems. Afterwards, one can specify the settings of the program by means of a configuration file:

• which problem instances should be loaded or generated,
• into which problems they should be converted with the available converters,
• with which analyzers the problems should be analyzed,
• which algorithms should be applied to solve the different problem instances.
A further advantage of BCAT is that it delivers the output in structured files that can easily be processed, e.g. in a spreadsheet application. Besides the existing problem classes, analyzers, algorithms, and converters in BCAT, we implemented a FAP and an LP problem class with appropriate analyzers, converters, and algorithms. We used an external LP solver, lp_solve, an open-source LP and ILP solver (source and binary files are available from http://sourceforge.net/projects/lpsolve/).

4.1.1. Testing Process
As test cases, we used the COST259 benchmarks, downloadable from http://fap.zib.de/. These include the data of different GSM 900 and GSM 1800 networks. In order to use a machine-independent metric, we measure complexity with the number of iterations of the ILP solver. As mentioned previously, NP-complete problems often exhibit very large differences concerning the time needed to solve problem instances. We experienced this phenomenon even for problem instances that only differed slightly in a single parameter. For example, it often occurred that a problem instance was solvable in half a minute, but after increasing the maximum tolerable interference by 0.01, we did not get a result even in several hours. Due to 'unfortunate cases' (see Section 4.5. for more detail), the run of the algorithm may take several weeks. For this reason, we were forced to use timeout values, i.e. we stopped the algorithm after the defined timeout, even if it was not finished. In these cases, what we measured as complexity is actually only a lower bound on the number of iterations necessary to solve the given problem instance. In order to decrease the impact of such 'unfortunate cases' and 'noise', we repeated the tests multiple times. Generally, we solved each problem instance ten times and calculated the median of the ten results. This method works better than taking the average of the runtimes, as a single long runtime could abnormally increase the average [5].
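The advantage of the median over the average in the presence of heavy-tailed runs can be illustrated with a toy computation (the iteration counts below are invented for illustration, not measured values):

```python
import statistics

# Hypothetical iteration counts for ten runs of the same instance:
# nine ordinary runs and one heavy-tailed outlier that hit the timeout.
runs = [310, 290, 305, 298, 312, 301, 295, 308, 299, 250_000]

print("mean   =", statistics.mean(runs))    # distorted by the single outlier
print("median =", statistics.median(runs))  # barely affected by it
```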
4.2. Complexity of FFAP
In the following, let n denote the number of variables and m the number of constraints in the given ILP formulation (see Section 3. for details). Furthermore, let k denote the number of available frequencies in the network.

4.2.1. Constant Number of Frequencies and Communication Channels
In our first set of experiments, we kept n constant and varied m by changing the maximum tolerable interference. Figures 1 and 2 show the results. These tests were carried out using the data of a GSM network consisting of 20 cells and 5 frequencies (⇒ n = 100). The number of constraints (m) varied between 100 and 1300. We used the standard ILP formulation with binary variables for FFAP.

Figure 1. Complexity of FFAP without globally or locally blocked channels, where n = 100 and k = 5.

The right vertical axis of the diagram shows the number of iterations (IterNumber) used by the algorithm, while the left vertical axis presents the solvability (Solvable). If this value equals zero, then all problem instances with the given m value were unsolvable; if this value equals one, then the algorithm found a solution in all cases. If this value is between zero and one, then the algorithm was stopped prematurely because of the timeout. Hence, in these cases, the given number of iterations is only a lower bound on the actual number of iterations that are necessary.

The complexity pattern of the figure is in line with what is known as the 'phase-transition phenomenon' [4, 6, 7]. Briefly, this means that for small values of the constraints/variables ratio (the underconstrained case), almost all problem instances are solvable. When the number of constraints increases, the ratio of solvable problem instances drops relatively abruptly from almost 1 to almost 0 (phase transition). After this critical regime, almost all problem instances are unsolvable (the overconstrained case). In the underconstrained case, the problem is easy: even simple heuristics usually find a proper solution. In the overconstrained case, it is easy for backtracking algorithms to prove unsolvability, because they quickly reach a contradiction. The hardest instances lie in the critical regime [4]. This phenomenon has been described in the literature for some theoretical problems (e.g., graph coloring [11, 14]). Our findings indicate that it also applies to FAP.

The only difference between Figure 1 and Figure 2 is that in the measurements depicted in Figure 2, there are also several globally and locally blocked channels in the network. As can be seen, the complexity patterns in the two figures barely differ from each other. We can conclude that including blocked channels does not make the problem significantly harder.
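This easy-hard-easy pattern can be reproduced on a classic theoretical problem. The following sketch estimates the fraction of solvable random 3-SAT instances at increasing constraint density by exhaustive enumeration; all constants are chosen for speed of illustration, not to pinpoint the exact threshold:

```python
import itertools
import random

random.seed(0)
N = 10  # number of Boolean variables; small enough for exhaustive checking

def random_3sat(m):
    """Generate m random 3-literal clauses over N variables."""
    return [tuple(random.choice([v + 1, -(v + 1)])
                  for v in random.sample(range(N), 3)) for _ in range(m)]

def satisfiable(clauses):
    """Decide satisfiability by brute force over all 2^N assignments."""
    return any(
        all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses)
        for bits in itertools.product([False, True], repeat=N))

# The fraction of solvable instances drops abruptly as constraints are added.
solvable_fraction = {}
for m in (20, 40, 60, 80):  # clause/variable ratios 2, 4, 6 and 8
    solvable_fraction[m] = sum(satisfiable(random_3sat(m))
                               for _ in range(20)) / 20
    print("m/n = %d: %3.0f%% solvable" % (m // N, 100 * solvable_fraction[m]))
```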
Figure 2. Complexity of FFAP in the presence of globally and locally blocked channels, where n = 100 and k = 5.
4.2.2. Varying Number of Communication Channels
Figure 3 also shows the solvability (Solvable) and the complexity (IterNumber), but this time as a function of the number of cells, with a constant number of frequencies and a constant maximum tolerable interference value. Note that increasing the number of cells increases the number of communication channels and thus the number of variables and constraints as well. In the figure, we can again recognize an easy-hard-easy pattern, with significant variance within the hard regime. It is interesting to note that increasing the number of communication channels does not increase the complexity beyond some threshold. On the contrary, complexity decreases to very low values after the hard regime. The reason for this phenomenon is probably that in the unsolvable case, with an increasing number of cells, the algorithm is likely to reach a contradiction earlier, i.e. on a higher level of the search tree.
4.2.3. Varying Number of Frequencies
In Figure 4, we present an analogous diagram for the case of a varying number of frequencies. Here, the number of communication channels was constant. Note that increasing the number of frequencies increases both the number of variables and the number of constraints. Nevertheless, we can again witness an easy-hard-easy pattern. If the number of frequencies is high, then the problem is easy, regardless of its huge size.
Figure 3. Complexity of FAP as a function of the number of cells.
4.3. Comparing Different ILP Formulations
For FFAP, we introduced two essentially different ILP formulations: one of them used binary variables, while the other used integer variables. Our empirical examinations showed that the problem instances could be solved significantly more efficiently with the formulation using binary variables than with integer variables and linearized absolute-value expressions. The binary representation needs far more variables, but this is likely compensated by the smaller domain of the variables.
Table 1. Number of problem instances that could be solved by the two solution techniques for MSFAP

                                     Binary variables
                                     successful    unsuccessful
Integer variables   successful           46             25
                    unsuccessful         11             18
In the case of MSFAP, the situation is different. Applying a timeout of 10 minutes, we analyzed the solution of 100 instances with entirely different parameters. As can be seen in Table 1, there were 25 cases in which the formulation with integer variables proved to be better, and only 11 cases in which the formulation with binary variables won. So for MSFAP, the formulation with integer variables seems to be more suitable.
Figure 4. Complexity of FAP as a function of the number of frequencies.
4.4. Suboptimal Solutions of the Optimization Versions of FAP
The optimization versions of FAP are extremely time-consuming; that is why we considered suboptimal solutions too. We examined when the algorithms should be stopped in order to find a nearly optimal solution in a relatively short time.

Table 2. Best solutions of MaxFAP for different timeout values

Runtime         Best solution found
half min.       44  43  42  44  40  43
1 min.          45  46  44  44  39  46
2 min.          44  45  43  45  38  45
5 min.          44  44  43  46  42  44
10 min.         45  44  44  43  40  44
quarter hour    44  45  43  45  40  43
half hour       43  46  43  45  39  44
1 hour          44  45  44  46  43  45
2 hours         44  46  43  45  42  44
Table 2 shows the best solutions found within given time limits for some MaxFAP problem instances. Each column shows the test results corresponding to one problem instance. For instance, the first column shows that the algorithm succeeded in assigning 44 frequencies before being aborted after half a minute. Similarly, in one minute 45, in two minutes 44, . . . , and in two hours 44 frequencies could be assigned. Apparently, despite the increasing runtime, the results improved only slightly or not at all. In other words: even when aborting the run as early as after half a minute, the best solution found so far will not be far from the optimum.
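This observation can be quantified directly from the columns of Table 2: for each instance, the gap between the half-minute solution and the best solution ever found is small:

```python
# Columns of Table 2: best MaxFAP solution per instance, one value per
# timeout (half min., 1 min., ..., 2 hours).
table2 = [
    [44, 45, 44, 44, 45, 44, 43, 44, 44],
    [43, 46, 45, 44, 44, 45, 46, 45, 46],
    [42, 44, 43, 43, 44, 43, 43, 44, 43],
    [44, 44, 45, 46, 43, 45, 45, 46, 45],
    [40, 39, 38, 42, 40, 40, 39, 43, 42],
    [43, 46, 45, 44, 44, 43, 44, 45, 44],
]
for col in table2:
    gap = max(col) - col[0]  # best value ever found minus the half-minute result
    print("half-minute result %d, best %d, gap %d" % (col[0], max(col), gap))
```

The gap never exceeds 3 assigned frequencies out of roughly 44, i.e. the half-minute solution is already within a few percent of the best known value.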
Table 3. Best solutions of MOFAP for different timeout values

Runtime         Best solution found
half min.       24  23  13  11   −  30
1 min.           −  21  12  12  37  28
2 min.           −  22   −  11  34  29
5 min.          22  22  11  11  30  28
10 min.         22  22  12  12  27  28
quarter hour     −  22  14  12  32  27
half hour        −  25  12  12  32  29
1 hour          29  22  13  11  29  27
2 hours         24   −   −  11      29
The same phenomenon also applies to MOFAP (see Table 3). Where a dash is shown, it means that the algorithm was not able to find any solution within the given time limit.
4.5. Acceleration with Restarts
If an algorithm involves random choices, it might make sense to run it several times on a given problem instance. Empirical evidence on several NP-hard problems shows that there are often several orders of magnitude of variance in algorithm runtime on the same problem instance. For example, suppose that the median runtime of a randomized algorithm on problem instances of a given size is 1 minute. Assume that it has been running on a problem instance for 5 minutes without any result yet. Intuitively, one could think that the algorithm will most probably finish very soon, so we should keep waiting. However, empirical evidence shows that – generally for NP-hard problems – it is better to stop the current run of the algorithm and restart it [8]. The rationale is that it might actually happen with surprisingly high probability that the current run of the algorithm will take several hours, days, or even longer. On the other hand, if we restart the algorithm, chances are high that the next run will be more fortunate and may finish in a minute or so.
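This intuition can be reproduced with a small simulation. The sketch below draws runtimes from a heavy-tailed Pareto distribution (a hypothetical stand-in for real solver behaviour, not fitted to our measurements) and compares a single long run against repeated short runs with the same total time budget:

```python
import random

random.seed(1)

def run_time():
    # Heavy-tailed runtime model (hypothetical): most runs finish quickly,
    # but occasionally a run takes orders of magnitude longer.
    return random.paretovariate(0.8)

def standard(timeout=900.0):
    """One run with the full 900 s time budget."""
    t = run_time()
    return (t, True) if t <= timeout else (timeout, False)

def with_restarts(cutoff=90.0, max_runs=10):
    """Up to 10 runs of 90 s each: the same 900 s total budget."""
    spent = 0.0
    for _ in range(max_runs):
        t = run_time()
        if t <= cutoff:
            return (spent + t, True)
        spent += cutoff
    return (spent, False)

trials = 10_000
std = [standard() for _ in range(trials)]
rst = [with_restarts() for _ in range(trials)]
for name, res in (("standard", std), ("restarts", rst)):
    solved = sum(ok for _, ok in res) / trials
    avg = sum(t for t, _ in res) / trials
    print("%s: solved %.1f%%, average time %.1f s" % (name, 100 * solved, avg))
```

Under such a heavy-tailed model, the restarted variant typically solves at least as many instances and spends less time on average.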
Figure 5. Histogram of runtime using standard vs. restart method.
So far, this phenomenon has mainly been investigated for theoretical problems (e.g., satisfiability, graph coloring). We examined whether it is also beneficial to apply this strategy to solving FAPs. For this purpose, we had to randomize our algorithm. Luckily, lp_solve offers the possibility to randomize the order in which variables are considered. We compared two versions of the algorithm on 34 instances. In the standard method, we ran the algorithm once, with a timeout of 900 seconds. In the restart method, we started the algorithm with a timeout of only 90 seconds; if it did not finish by that time, it was restarted with a 90-second timeout, and so on. We allowed up to 10 runs, so that the total time budget for a problem instance was 900 seconds for this method as well. The results are summarized in Figure 5. As can be seen, the standard method suffers from the large variability in algorithm runtime mentioned above: 25 problem instances could be solved within 60 seconds, but the remaining 9 problem instances could not be solved even in 900 seconds. With the restart method, the variance is much lower, and hence there are only 3 problem instances that could not be solved within 900 seconds. Moreover, the average runtime of the restart method was about 50% of the average runtime of the standard method. All in all, we can conclude that it is worth using restarts for solving FAPs. Elaborating the best restart strategy (i.e. the best timeout values) is an important topic for future research.
5. Conclusion
In view of the rapid development of wireless networks, we considered a general model of frequency assignment problems. In this chapter, we presented different techniques to solve FAPs by means of ILP, and examined their efficiency empirically.
In line with research on other NP-hard problems, we were also faced with several orders of magnitude of variance in algorithm runtime. Our results help in predicting algorithm runtime based on model parameters (number of constraints, number of cells, number of frequencies). In cases where the time needed to optimally solve a problem instance is unrealistically high, the algorithm can be stopped prematurely by specifying a timeout. Our results show that the value of the objective function hardly changes after some time, so that such a premature termination will usually result in a solution that is quite close to the optimum.
Moreover, we analyzed which ILP formulations are worth using. In the case of FFAP, the ILP formulation using binary variables turned out to be more efficient. However, in the case of MSFAP, the ILP formulation with integer variables proved to be better. Additionally, we showed that restarting the solver at certain times can significantly improve the algorithm's behavior.
Acknowledgements
This work was partially supported by the Hungarian National Research Fund and the National Office for Research and Technology (Grant Nr. OTKA 67651).
List of Abbreviations

BCAT     Budapest Complexity Analysis Toolkit
BCCH     Broadcast Control Channel
BIP      Binary Integer Programming
DCA      Dynamic Channel Assignment
FAP      Frequency Assignment Problem
FCA      Fixed Channel Assignment
FFAP     Feasibility Frequency Assignment Problem
GPS      Global Positioning System
GSM      Global System for Mobile Communications
HCA      Hybrid Channel Assignment
ILP      Integer Linear Programming
ITU      International Telecommunication Union
LP       Linear Programming
MaxFAP   Maximum Service FAP
MIFAP    Minimum Interference FAP
MIP      Mixed Integer Programming
MOFAP    Minimum Order FAP
MSFAP    Minimum Span FAP
TDMA     Time Division Multiple Access
TRX      Transmitter/Receiver Unit
WIFI     Wireless Fidelity
WLAN     Wireless Local Area Network
References

[1] K.I. Aardal, S.P.M. van Hoesel, A.M.C.A. Koster, C. Mannino, and A. Sassano, Models and solution techniques for frequency assignment problems, Annals of Operations Research 153 (2007), no. 1, 79–129.

[2] Sara Alouf, Eitan Altman, Jérôme Galtier, Jean-François Lalande, and Corinne Touati, Quasi-optimal bandwidth allocation for multi-spot MFTDMA satellites, IEEE INFOCOM 1 (2005), 560–571.

[3] R. Borndörfer, A. Eisenblätter, M. Grötschel, and A. Martin, Frequency assignment in cellular phone networks, Annals of Operations Research 76 (1998), 73–93.

[4] Peter Cheeseman, Bob Kanefsky, and William M. Taylor, Where the really hard problems are, 12th International Joint Conference on Artificial Intelligence (IJCAI '91), 1991, pp. 331–337.

[5] Carla P. Gomes, Bart Selman, Nuno Crato, and Henry Kautz, Heavy-tailed phenomena in satisfiability and constraint satisfaction problems, Journal of Automated Reasoning 24 (2000), no. 1–2, 67–100.

[6] Tad Hogg, Refining the phase transition in combinatorial search, Artificial Intelligence 81 (1996), no. 1–2, 127–154.
[7] Tad Hogg and Colin P. Williams, The hardest constraint problems: A double phase transition, Artificial Intelligence 69 (1994), no. 1–2, 359–377.

[8] Malik Magdon-Ismail and Amir F. Atiya, The early restart algorithm, Neural Computation 12 (2000), no. 12, 2991–3010.

[9] Haixia Jia and Cristopher Moore, How much backtracking does it take to color random graphs? Rigorous results on heavy tails, Principles and Practice of Constraint Programming (CP 2004), 2004, pp. 742–746.

[10] A.M.C.A. Koster, Frequency assignment – models and algorithms, Ph.D. thesis, University of Maastricht, 1999.

[11] Zoltán Ádám Mann and Anikó Szajkó, Improved bounds on the complexity of graph coloring, Proceedings of the 12th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, 2010, pp. 347–354.

[12] Zoltán Ádám Mann and Tamás Szép, BCAT: A framework for analyzing the complexity of algorithms, 8th IEEE International Symposium on Intelligent Systems and Informatics, 2010, pp. 297–302.

[13] Janne Riihijärvi, Marina Petrova, and Petri Mähönen, Frequency allocation for WLANs using graph colouring techniques, Proceedings of the 2nd Annual Conference on Wireless On-demand Network Systems and Services, 2005, pp. 216–222.
[14] Tamás Szép and Zoltán Ádám Mann, Graph coloring: the more colors, the better?, Proceedings of the 11th IEEE International Symposium on Computational Intelligence and Informatics, 2010, pp. 119–124.
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN: 978-1-61209-579-0
© 2012 Nova Science Publishers, Inc.

Chapter 11
OPTIMIZATION OF POLYGENERATION SYSTEMS SERVING A CLUSTER OF BUILDINGS

A. Piacentino¹, C. Barbaro¹, R. Gallea² and F. Cardona¹

¹ DREAM – Dpt. of Energetic and Environmental Researches, Palermo University, Viale delle Scienze, 90128, Palermo, Italy
² DINFO – Dipartimento di Informatica, Palermo University, Viale delle Scienze, 90128, Palermo, Italy
ABSTRACT
The optimization of combined energy systems for the production and distribution of warm and cold fluids to civil users is very complex; two possible configurations, i.e. small single units for individual buildings and large plants integrated with district heating networks, can essentially be considered, especially in cold climates. Dealing with such a complex problem, involving a very large number of variables, requires efficient algorithms and resolution techniques. The present chapter illustrates a Mixed Integer Linear Programming (MILP) approach to the optimization of synthesis, design and operation for CHCP-based μ-grids including thermal energy storages. A novel approach is presented, oriented to the design and optimization of district energy systems, assuming that detailed data are available concerning the energy consumption profiles and the location of buildings; the method evaluates a number of feasible layouts for the heat distribution network and relies upon predefined cost models for plant components in order to identify the most profitable plant design and operation. The method is implemented in a compiled tool and is validated with reference to a case study, represented by a cluster of buildings interconnected via a heat network and situated over a small area (maximum distance in the order of 1.5 km). A relevant profit potential resulted for the examined buildings, and the results appear consistent with those achievable through alternative heuristic or "manual" methods.
Corresponding author (Prof. A. Piacentino): Tel.: +39 091 23861952; Fax: +39 091 484425; E-mail: [email protected]

All abbreviations can be found at the end of the chapter in a Nomenclature section.
NOMENCLATURE

a, b         Constants in linearized equation
C            Cooling production, kWc
CHCP         Combined Heat, Cooling and Power
COP          Coefficient of performance
Dh, Dc, De   Heating, cooling and electricity demand, kW
Dist         Distance, m
E            Electricity production, kWe
H            Heating production, kWt
HLV          Heat low value, kJ/Nm³
LL           Load level, dimensionless
L            Pipe length, m
MPE          Electricity and fuel market price, €/kWh
NPV          Net present value, €
PES          Primary energy saving
PHR          Power-to-heat ratio, dimensionless
STOR         Stored energy, kWh
T            Temperature
TLS          Total Load Sum, kWh
TES          Thermal energy storage
V            Volume, m³
Z, z         Total and specific equipment cost, €/kWh
Greek Letters

δ            0-1 synthesis variable
η            Efficiency
Δ            Variation

Superscripts

Inv          Investment
Nom          Nominal
Op           Operative

Subscripts

Abs, el.ch   Absorption, electric chillers
Best         Best CHCP plant
CHP          Combined Heat and Power
i            Referring to the i-th hour
piping       Related to branches of the network
pow.pl       Power plant
Worst        Worst CHCP plant
INTRODUCTION

This chapter introduces a multi-objective optimization algorithm oriented to the optimization of polygeneration plants for civil applications. The term "cogeneration" indicates the simultaneous generation of heat and electrical/mechanical energy; Directive 2004/8/EC has recently stressed that the heat should be used to cover an economically justifiable heat demand. Polygeneration plants, widely diffused in the last decades, include cogeneration (CHP, Combined production of Heat and Power) and trigeneration (CHCP, Combined production of Heat, Cooling and Power) schemes, possibly integrated with a district heating network. Such systems should be designed and controlled with particular accuracy, considering all the relevant interactions between the components and with the energy users and the power grid. Polygeneration systems feature better energy conversion efficiencies than the conventional technologies (either in centralized or distributed systems) for separate production of heat, cooling and power. The operating strategy for polygeneration plants is usually considered a task for real-time analysis; however, it has been proven that better results are achieved when the different plant operating conditions are accounted for from the earliest design phases of the CHCP system. Such an approach, in fact, allows a better identification of the optimal plant layout and size of the components. The analysed problem involves the optimisation of polygeneration systems with respect to energetic and economic objectives. Such an optimisation allows the decision-maker to solve three different problems, which are:
• The synthesis of the layout, i.e. the selection of the components to be installed;
• The design of the plant, i.e. the rated capacity of each component;
• The operation of each component, in terms of the load level at each hourly time-step.
From a conceptual viewpoint, the optimization is first applied to a single polygeneration unit devoted to an individual building (Single Building Optimization, SBO) [1-2] and, in a second step, it is extended to large plants including several CHP units, possibly interconnected by a warm/cold fluid distribution network, serving a cluster of buildings to cover their energy requirements (Multi-Buildings Optimization, MBO) [3]. The MBO ensures that the polygeneration systems operate as long as possible at their "economically optimal" load, thus taking advantage of the possible complementarity between the different buildings' load profiles; this could result in a compensation of the load fluctuations and in a consequently smoother operation of the installed units. Further complexity of the model results from the inclusion of a heat storage tank in the plant layout; this component, however, ensures an increased flexibility of operation for the CHCP systems. The optimization model is based on a Mixed Integer Linear Programming (MILP) algorithm, which assumes simplified cost figures in order to allow a linear formulation of
the objective function and the constraints with respect to the selected variables. Despite the high number of unknowns, in the order of 10,000-50,000 for most applications, the LP model induces a low consumption of computational resources; the use of approximate cost and performance figures does not represent a relevant penalty, because other sources of uncertainty (like the future energy load and price scenarios) would in any case slightly affect the reliability of the results. Since the analysis is devoted to civil applications, irregular demand profiles for heat, cooling and power are typically expected; the optimization of plant operation is therefore pursued on an hourly basis. From the 8,760 hourly values of heat, cooling and electric loads in input, a reduced set of N_hours values is extracted as a temporal basis for the optimization, provided that these hours are statistically representative of the annual load and tariff profiles. The mathematical model is mainly developed in the MATLAB environment, although the linear programming algorithms implemented with LINDO API 6.0 and available as callable higher-level MATLAB functions [4-5] are adopted during the optimization phase. The model determines the main synthesis and design variables; also, the on-off states and the load levels of any "active" component are calculated, while charging and discharging periods of the heat storage are suggested, together with the optimal size of the vessel. The competitive advantage of the algorithm proposed here consists in its ability to find the optimal solutions very efficiently, thus providing the energy analyst with the necessary information to support decision making in design. Once tested on a number of multi-objective problems, this algorithm proved to be at least as reliable as other methods found in recent literature [6-8], while the duration of the optimization process rarely exceeds 3-5 minutes.
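As a toy illustration of the synthesis and operation levels described above, the following sketch (with invented numbers, and the design level fixed) enumerates a single 0-1 synthesis variable together with hourly on/off operation variables for a hypothetical CHP unit backed up by a boiler; the actual model handles thousands of variables via MILP rather than enumeration:

```python
from itertools import product

# Toy illustration (hypothetical numbers): delta is the 0-1 synthesis
# variable (install the CHP unit or not), the hourly on/off states are the
# operation variables, and design (unit sizing) is fixed here.
demand   = [40, 80, 120, 60]   # heat demand per representative hour, kWh
chp_heat = 100                 # heat delivered per on-hour by the CHP unit, kWh
chp_cost = 3.0                 # net cost per on-hour (fuel minus electricity revenue), EUR
boiler   = 0.08                # cost per kWh of back-up boiler heat, EUR
capex    = 10.0                # investment cost allocated to this day, EUR

best = None
for delta in (0, 1):                                    # synthesis decision
    for states in product((0, 1), repeat=len(demand)):  # operation decisions
        if delta == 0 and any(states):
            continue                                    # cannot run a unit that is not installed
        cost = delta * capex
        for d, on in zip(demand, states):
            residual = max(d - on * chp_heat, 0)        # heat not covered by the CHP unit
            cost += on * chp_cost + residual * boiler
        if best is None or cost < best[0]:
            best = (cost, delta, states)

print("cost = %.2f EUR, install = %d, schedule = %s" % best)
```

Even in this tiny instance, the optimal operation schedule can only be judged jointly with the synthesis decision, which is precisely why the chapter optimizes all levels simultaneously.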
METHOD FOR THE OPTIMIZATION OF POLYGENERATION SYSTEMS

The design of CHCP systems for buildings must take into account several aspects: (i) energy demand profiles; (ii) variable tariff structures and energy price profiles; (iii) investment costs and part-load performances of plant components; and (iv) normative constraints on energy efficiency (and environmental protection). Daily variations of heating and cooling demands are the factors that exert the most influence on the appropriate design of the energy system (as concerns the size of boiler and cogeneration modules, the type of refrigerator, the heat storage capacity, etc.). A solution can only be achieved when attention is simultaneously paid to the optimal operation of the components, on an hour-by-hour basis throughout the year; such an optimal operation in turn depends on energy prices, because the CHP unit could either strictly generate the power required by the buildings (Electricity Tracking operation mode) or generate additional electricity to be sold to the grid. CHCP systems ensure a more efficient use of fuel energy, as the cogenerated heat can be used for heating in winter as well as for cooling in summer through an absorption refrigerator. The use of a Thermal Energy Storage (TES) provides the additional advantage of covering a variable thermal demand while running the production units continuously at nominal conditions. For the above reasons, highly integrated polygeneration schemes can achieve substantial energy and pollutant emission savings, together with a relevant profitability of the marginal investment.
Optimization of Polygeneration Systems...
As anticipated, a two-level problem is considered:
1. The optimization of synthesis, layout and operation for polygeneration systems supplying energy to an individual building (SBO);
2. The optimization of a CHCP-based µ-grid, i.e. a set of CHCP systems in parallel net-metering with the grid, interconnected by a warm-fluid distribution network, serving a cluster of buildings situated over a small area (MBO).

Referring to the MBO, the solutions include a much higher number of decision variables, namely: the number, location and internal layout of any CHCP system to be installed, the optimal design of each of them, the design of each single branch of the heat distribution network, and the possible choice between a 'district heating only' and a 'district heating and cooling' network. Also, the operation of each component on an hourly basis and the fluid flow rates through any branch are to be optimized, properly accounting for the pumping cost. Optimizing such a µ-grid involves a large number of variables and constraints; simplification therefore becomes a priority when defining the physical model, and some parameters are assigned prefixed values. The annual energy loads of buildings are usually obtained from historical aggregate consumption data (fuel and electricity supply bills) or, in the most favourable case, they are available from preliminary energy audits. As concerns the electricity price, in Italy small autoproduction units operate under the 'ritiro dedicato' regime [9], i.e. a special tariff regime that grants the producer a variable price equal, on an hourly basis, to the average zonal electricity price resulting on the free energy market. Historical prices for performing reliable simulations are available on the website of the 'Gestore dei Servizi Elettrici' (GSE), the company that manages support mechanisms and special power exchange regimes for renewable power sources and small grid-connected systems.
In order to solve the synthesis problem, the method adopts a redundant superstructure, including most of the components that could potentially be part of the plant:
- A prime mover, either a gas turbine or a reciprocating engine, driven by natural gas and producing electricity and 'cogenerated' heat simultaneously; this component represents the core of the plant.
- A single-effect water/lithium-bromide absorption chiller, which utilizes medium-grade heat from the prime mover to produce cooling energy, thus fictitiously increasing the heat loads during summer months and favouring a more regular operation of the system in 'combined production' mode throughout the year.
- An auxiliary backup boiler, supplying heat when either the prime mover is switched off or its heat recovery rate is not sufficient to cover the current heat demand.
- A conventional vapour-compression chiller, which utilizes electricity to produce some auxiliary cooling when needed. This chiller will prevalently operate when waste heat from the prime mover is not sufficient to feed the absorption unit, whose operation would then be uneconomical.
Figure 1. Schematic representation of the Single Building Model: (1.a) reciprocating-engine prime mover; (1.b) gas-turbine prime mover.

Figure 2. Schematic representation of the Multi Building Model.
As said, the superstructure also includes a heat storage, to ensure further flexibility in the operation of the CHP units. The CHCP plant is connected to the power grid, in order to increase the reliability of supply (guaranteed also during scheduled or unscheduled maintenance of the prime mover) and to have wider opportunities to run the plant profitably during peak hours, when surplus electricity can be sold at a high unit price. Two distinct MILP models were formulated, for the gas-turbine and the reciprocating-engine-based layouts schematically represented in Figures 1.a-b respectively; both assume constant efficiencies at part-load operation, while different interconnections between components are considered to ensure technical safety. The reciprocating-engine-based configuration includes heat recovery at two different temperatures: high-temperature heat recovery from the exhaust gases is adequate to feed both the direct thermal loads and the absorption chiller, while the low-grade waste heat recovered from the cooling water jacket is only used to cover low-temperature space heating or domestic hot water requests. In the gas-turbine configuration, all the recovered heat can satisfy both heating and (indirectly) cooling demands. The district cooling water is distributed within the building directly to terminal equipment such as air handling and fan coil units.
COST MODELS FOR THE MAIN PLANT COMPONENTS

The operational strategy is optimized following a 'profit-oriented' philosophy; the economic indicator to be maximized is the Net Present Value (NPV), which expresses the marginal cumulative profit achieved by the trigeneration plant along its life cycle. Reliable figures for the purchase and installation costs were obtained, based on more than two hundred machines by different manufacturers (Caterpillar, Cummins, Deutz, Jenbacher, Perkins, Kawasaki and Turbec prime movers; Yazaki, Carrier and McQuay absorption chillers) [10-11]; cost figures widely available in the literature were used for the auxiliary boilers and electric chillers.

Investment cost of the CHP unit [Z_inv,CHP]:

Z_inv,CHP = a_CHP,inv · E_CHP,nom + b_CHP,inv · δ_CHP    (1)
The maintenance cost is expressed as a percentage of the cost for fuel consumption, introducing a factor d. Maintenance cost of the CHP unit [Z_op,CHP]:

Z_op,CHP = d · MP_gas · E_CHP · 3600 / (η_e,CHP · HLV)    (2)
The absorption chiller has no direct operating costs, since it is constrained to be fed by recovered heat only, and not by dedicated fuel combustion.
A. Piacentino, C. Barbaro, R. Gallea et al.

Investment cost of the absorption chiller unit [Z_inv,abs]:

Z_inv,abs = a_abs,inv · C_abs,nom + b_abs,inv · δ_abs    (3)
where E_CHP,nom and C_abs,nom respectively represent the rated electric and cooling capacities of the CHP unit and the absorption chiller, while δ_CHP and δ_abs are the binary {0,1} synthesis variables expressing whether or not the CHP unit and the absorption chiller are included in the layout. As will be evident from Equations (12.a-b), which express the congruence between variables, when δ_CHP and δ_abs are zero, E_CHP,nom and C_abs,nom also assume the value zero, driving the total cost Z to zero as well. Cost figures for conventional electric (Eq. 4) and thermal (Eq. 5) production, expressed in €/kWh, are:
Z_e = MP_e · D_e + MP_e · D_c / COP_el.ch.    (4)

z_aux.boil,i = MP_gas · H_boil,max · LL_boil,i · 3600 / (η_boil · HLV)    (5)
PHYSICAL MODEL AND MATHEMATICAL PROBLEM FORMULATION

Single Building Optimization (SBO)

In this subsection the decision variables and the physical and normative constraints to be fulfilled are presented, with reference to the Single Building Optimization problem.

Synthesis and design variables (two integer binary variables and nine non-integer variables):

δ_CHP, δ_abs: integer binary {0,1} synthesis variables expressing whether or not the CHP unit and the absorption chiller are included in the final layout;
E_CHP,nom: rated power output (i.e. capacity) of the CHP unit [kWe];
C_abs,nom: rated cooling output of the absorption chiller [kWc];
V_TES: volume of the TES [m³].

Operation variables:

H_CHP,i: heating production at each i-th time step [kW or, equivalently on an hourly basis, kWh];
C_abs,i: cooling production at each i-th time step [kW or kWh, see above];
LL_boil,i: load level of the auxiliary boiler at each i-th time step, dimensionless and in the range [0,1];
LL_el.ch,i: load level of the electric chiller at each i-th time step, dimensionless and in the range [0,1];
STOR_TES,i: energy stored (i.e. level of charge of the TES) at each i-th time step [kWh];
Q_TES,i: charging/discharging rate of the TES [kW or kWh, see above].

Prefixed and constant efficiencies are assumed for the absorption chiller, the auxiliary boiler and the electric chiller, and for the reference electrical efficiency of separate power production:

COP_abs.chil. = 0.7,  η_boil = 0.9,  η_pow.centr. = 0.41,  COP_el.chil. = 2.5    (6.a-d)
Instead of assuming for each component a feasible load level range [LL_min, LL_max] for part-load operation, operation extended over the whole [0,1] range is assumed in order to preserve the linearity of the model; the deviations from a technically feasible operation have proved to be negligible.

Covering energy loads: The balance between heating and cooling production and consumption is imposed; the former is expressed as an inequality because some surplus heat could be produced and dissipated through an emergency radiator during peak hours.

H_CHP,i - C_abs,i/COP_abs + LL_boil,i · H_boil,max - Q_TES,i ≥ D_h,i    (7)

C_abs,i + LL_el.ch,i · D_c,max ≥ D_c,i    (8)
Production limits: The hourly output from any component is bounded above by its capacity:

STOR_TES,i - ρ · c_p · V_TES · ΔT ≤ 0    (9)

H_CHP,i - PHR · E_CHP,nom ≤ 0    (10)

C_abs,i ≤ C_abs,nom    (11)
Congruence between synthesis and management variables: Variables of different 'level', i.e. those regarding the inclusion of a component in the actual layout and those regarding its operation, should obviously assume consistent values (for instance, the hourly output from a component must be null throughout the year when the component is not included in the layout).
C_abs,i - 10^5 · δ_abs ≤ 0    (12.a)

H_CHP,i - 10^5 · δ_CHP ≤ 0    (12.b)
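Constraints of this form are the standard 'big-M' link between a continuous operation variable and a binary synthesis variable. A minimal illustration of how such a link enters a MILP, using scipy.optimize.milp rather than the LINDO API adopted by the authors (all numbers are arbitrary):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: x = hourly heat output [kW], delta = 0/1 synthesis variable.
# Minimize 2*x + 100*delta subject to the big-M link x - 1e5*delta <= 0
# and a 30 kW demand x >= 30, which forces delta = 1.
c = np.array([2.0, 100.0])
A = np.array([[1.0, -1e5],    # x - 1e5*delta <= 0   (big-M congruence link)
              [-1.0, 0.0]])   # -x <= -30            (i.e. x >= 30, demand)
constraints = LinearConstraint(A, -np.inf, [0.0, -30.0])
res = milp(c, constraints=constraints,
           integrality=np.array([0, 1]),            # delta is integer
           bounds=Bounds([0.0, 0.0], [np.inf, 1.0]))
```

At the optimum x = 30 and delta = 1: whenever the component produces anything, the link forces its fixed investment charge into the objective, which is exactly the behaviour the congruence constraints encode.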
As regards the modelling of the TES, a fixed loss percentage ΔH_loss (approximately 2-3% of the stored energy, expressed by the 0.97 · STOR_TES(i) term in Equation 13) is assumed to be lost every hour through the TES coat; the variable Q_TES,i expresses the charging/discharging rate of the TES, while the variable STOR_TES accounts for the energy stored at each i-th time step, as said before. The energy balance of the TES between two consecutive hours gives:

STOR_TES(i+1) - 0.97 · STOR_TES(i) - Q_TES,i = 0    (13)
An additional constraint imposes a null charge of the TES at the 1st hour of the 'standard year':

STOR_TES(1) = 0    (14)
In the absence of this constraint, an unreasonably large TES would typically result, because of the convenience of using 'as much zero-cost energy as possible, available at the 1st hour'. In order to achieve CHCP solutions eligible for 'high-efficiency CHP' assessment according to Directive 2004/8/EC [16], the primary energy saving is calculated, on an hourly basis, as the difference between the primary energy required by fully separate production and that actually consumed:

PES_i = [D_h,i/η_boil + D_e,i/η_pow,plant + D_c,i/(COP_el.ch. · η_pow,plant)] - [H_CHP,i/(PHR · η_e,CHP) + LL_boil,i · H_boil,max/η_boil + LL_el.ch,i · D_c,max/(COP_el.ch. · η_pow,plant)]    (15)

Then a minimum 10% energy saving is imposed, by summing up the hourly energy saving rates calculated above:

Σ_{i=1..n} PES_i ≥ 0.1 · Σ_{i=1..n} [D_h,i/η_boil + D_e,i/η_pow,plant + D_c,i/(COP_el.ch. · η_pow,plant)]    (16)
Objective function: As said, the mathematical problem assumes the Net Present Value (NPV) as the objective function; this indicator is considered the most appropriate for plant design, as it avoids the risk of undersizing the plant (a risk often associated with optimization strategies oriented to minimizing the payback period):

NPV = Σ_j [ -I_j + C_j,n · P/A(i, n) ]    (17)
where:

I_j: investment cost for the j-th component;
C_j,n: positive or negative annual cash flow, for a generic n-th year of plant life.

The term P/A(i, n) represents the actualization factor of a number of annual net cash flows, assumed to be constant (there are no instruments to predict fluctuations along the plant life):

P/A(i, n) = [(1 + i)^n - 1] / [i · (1 + i)^n]    (18)

where i represents the interest rate, assumed in the next sections to be 4%. Actually, rather than maximizing the NPV, its opposite -NPV is minimized (to fulfil the standard form of the optimization algorithms in MATLAB and LINDO); the extensive formulation of the objective function is:

min { a_CHP,inv · E_CHP,nom + b_CHP,inv · δ_CHP + a_abs,inv · C_abs,nom + b_abs,inv · δ_abs + a_TES,inv · V_TES + b_TES,inv · δ_TES
      - P/A(i, n) · Σ_{i=1..N_hours} [ MP_e,i · (H_CHP,i/PHR - LL_el.ch,i · E_el.ch,nom/COP_el.ch. - D_E,i) - MP_gas^CHP · H_CHP,i/(PHR · η_e,CHP) - MP_gas^boil · LL_boil,i · E_boil,nom/η_boil ] }    (19)
where, according to the experience of the analyst, constant fuel prices (MP_gas^boil and MP_gas^CHP) and the expected average electricity price for each hour, MP_e,i, are assumed.
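The actualization factor and the resulting NPV are straightforward to compute; a small sketch (the 4% interest rate is from the text, while the 20-year plant life and the cash-flow figures are illustrative):

```python
def pa_factor(i, n):
    """Present-worth annuity factor P/A(i, n) = ((1+i)**n - 1) / (i * (1+i)**n)."""
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def npv(total_investment, annual_cash_flow, i=0.04, n=20):
    """NPV with constant annual cash flows: -sum of investments + C * P/A(i, n)."""
    return -total_investment + annual_cash_flow * pa_factor(i, n)
```

For example, pa_factor(0.04, 20) is about 13.59, so a constant annual cash flow of 10,000 € repays an investment of up to roughly 135,900 € over a 20-year life at 4%.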
Multi Building Optimization (MBO)

District heating represents a centralized strategy, frequently adopted in northern countries, for the supply of warm fluids to a small or large community. Hot water is produced in a thermal or CHP plant and distributed to the connected buildings through a pipe network; the warm fluids are essentially used for space heating and domestic hot water production. The Multi-Building Optimization [12] presented in this chapter refers to a different conceptual scheme, that is, a sort of 'district heating' for a small number of buildings located over a small area. The MBO strategy is based on an iterative process of building aggregation; it adopts a modified version of the SBO algorithm, accounting for the cost associated with the fluid distribution network. The MBO routine allows us to recognize when, and to what extent, the CHCP design and operation is favoured by the complementary demand profiles of two interconnected buildings. Again, at the base of the optimization process lies the acquisition of data; the analyst is expected to upload detailed loads (for each building, D_h,i, D_c,i and D_e,i, derived on an hourly basis from preliminary energy audits) and price profiles (derived from historical power market statistics). The input data must fully define the µ-grid. The geometric location of each building must be given; in order to allow an easier understanding of the design solutions, the whole µ-grid is represented on X-Y axes, with the origin in the building first uploaded. The software simulates the
progressive connection of buildings and the distribution of energy flows from 'active buildings' (i.e. those where a CHCP system is installed) to 'non-active' ones (where just the auxiliary boiler and the electric chiller are installed). Indicating with A an active building serving a number n of non-active buildings, and with H_CHP,i the heat flow rate transferred from building A to the generic i-th non-active building (as evident in Figure 2), the following energy balances can be written:

H_CHP,A - C_abs,A/COP_abs + LL_boil,A · H_boilmax,A - Q_TES,A - Σ_{i=1..n} H_CHP,i ≥ D_H,A    (20)

H_CHP,i - C_abs,i/COP_abs + LL_boil,i · H_boilmax,i ≥ D_H,i    (21)

C_abs,i/COP_abs ≤ H_CHP,i - D_H,i    (22)
Again, congruence between synthesis and operation variables and production limits must be imposed:

C_abs,i - C_abs,i^nom ≤ 0    (23)

C_abs,i^nom - 10^5 · δ_abs,i ≤ 0    (24)

C_abs,i + LL_el.ch,i · C_el.ch,i^nom ≥ D_c,i    (25)
The MBO routine accounts for the cost associated with owning and operating the heat distribution network. As concerns the operating cost, related to the Δp between supply and return pipes covered by electric pumps, the adoption of reasonable duct diameters is imposed; according to conventional practice, an average water velocity of 1.2-1.6 m/s is fixed, so that the tube diameter becomes strictly a function of the heat flow rate, as will be evident from Equation (27) (a fixed ΔT = 20 °C is assumed) [13].

Unitary pumping cost [€/kWh]:

Σ_{i=1..n} (1/η_pump) · H_CHP,i · l_Ai · p_drop/(ρ · c_p · ΔT) · MP_e    (26.a)

where η_pump = 0.8, p_drop = 150 Pa/m and ΔT = 20 °C.

Equation (26.a) can be rewritten as follows:
Σ_{i=1..n} H_CHP,i · (10 · p_drop · l_Ai)/(83.6 · 80 · 1000 · 1000) · MP_e    (26.b)
As concerns the capital cost for purchasing and installing the network (civil works included), figures derived from the literature and from commercial data provided by manufacturers [14] were used, leading to the following expression.

Pipe cost per unit length:

Cost/l_Ai = 4 · (0.0365 · H_CHP,i + 57.57)  [€/m]    (27)
The objective function is again represented by the Net Present Value of the investment:

min { a_CHP,inv · E_CHP,nom,A + b_CHP,inv · δ_CHP,A + a_abs,inv · C_abs,nom,A + b_abs,inv · δ_abs,A + a_TES,inv · V_TES,A + b_TES,inv · δ_TES,A
      + Σ_{i=1..n} [ a_abs,inv · C_abs,nom,i + b_abs,inv · δ_abs,i + l_Ai · 4 · (0.0365 · H_CHP,i + 57.57) ]
      - P/A(i, n) · (8760/N_hours) · Σ_{i=1..N_hours} [ MP_e,i · (H_CHP,A/PHR - LL_el.ch,A · E_el.ch,nom,A/COP_el.ch. - Σ_{i=1..n} LL_el.ch,i · E_el.ch,nom,i/COP_el.ch. - D_E,A - Σ_{i=1..n} D_E,i)
        - MP_gas^CHP · H_CHP,A/(PHR · η_e,CHP) - MP_gas^boil · (LL_boil,A · E_boil,nom,A + Σ_{i=1..n} LL_boil,i · E_boil,nom,i)/η_boil
        - Σ_{i=1..n} H_CHP,i · (10 · p_drop · l_Ai)/(83.6 · 80 · 1000 · 1000) · MP_e ] }    (28)

In order to gradually proceed with aggregation between buildings, a hierarchic order is preliminarily fixed among them. The indicator adopted for this purpose is the 'average unit cost of energy supply', NPV_TLS, defined as follows:
NPV_TLS = NPV / Σ_{i=1..24·n_days} (D_h,i + D_c,i + D_e,i)    (29)
The above indicator, calculated as the ratio between the NPV of the CHCP plant and the sum of all the energy requirements of the associated building or group of buildings throughout the year, represents a sort of 'average cost' of the energy supplied to that building. The assumption lying behind this definition is: the better the performance of the CHCP scheme (for instance, due to very regular annual load profiles), the higher the abatement of the average unit cost of the energy supplied to the user, and the greater the convenience of keeping that CHCP system in the final scheme. A bottom-up approach is pursued, starting from the evaluation of CHCP plants for individual buildings (through the SBO routine) and gradually aggregating them on the basis of the NPV_TLS. The CHCP system with the worst performance, i.e. the highest NPV_TLS, is excluded first; the corresponding building is then assumed to be served by a larger CHCP plant installed in a nearby building (again following an order, in this case from the most to the least efficient). This new 'aggregate' plant, CHCP_best+worst, replaces the two separate installations CHCP_best and CHCP_worst only when such a replacement induces an economic advantage, i.e. when the following condition is fulfilled:
NPV_best+worst < NPV_best + NPV_worst    (30)
When Eq. (30) is not fulfilled, the two buildings are not interconnected and the next aggregation option is evaluated, involving CHCP_best and CHCP_worst+1 (the building achieving a slightly better performance than the 'worst' one). Any resulting configuration is always assumed as a new starting point. This process is repeated iteratively, until all possible aggregations have been considered. The iterative procedure allows us to determine the optimal topology and position of the installed CHCP capacity; the design of the warm-fluid distribution network is also derived automatically while minimizing the production cost (including pumping).
IMPLEMENTATION ISSUES

The implementation of the described model is based on two main modules: power plant optimization (single- or multi-building) and a smart building aggregation algorithm. Both are implemented in the MATLAB runtime environment, leveraging the Lindo Systems, Inc. solver engine for optimization purposes.
Power Plant Optimization and Smart Building Aggregation Algorithm

Power plant optimization exploits the API provided by Lindo Systems, Inc. for their low-level optimization engine. Its MILP solver embeds a branch-and-cut [15] iterative procedure using a barrier method [16] for each associated LP optimization; it is also equipped with advanced preprocessing, heuristics and cut generation tools. Two different data structures are set up for the optimization, one for the reciprocating-engine power plant and one for the gas-turbine power plant. Both models are solved using MILP, and the best feasible solution is chosen as the final result. The input data structures for the solver (i.e. the objective vector, the constraint coefficient matrix, the right-hand-side vector and the lower and upper bound vectors) are built dynamically according to the input values and the user-defined parameters. Note that this also allows the routine to generalize the optimization to both single- and multi-building problems. In order to select the proper building aggregation scheme and the resulting power plant scheme, the algorithm previously described in the MBO section is implemented. Simplified pseudocode for the procedure is reported in Listing 1 (comments follow the // marker).
Computational Burden

The system described was tested on an AMD Phenom 9650 Quad-Core at 2.3 GHz, equipped with 3 GB of RAM and MATLAB R2010a installed on Windows XP SP3.
Listing 1. Pseudocode for smart building aggregation

// single building optimization
foreach (b in buildings)
    NPV(b) = optimize(b)
    NPV_TLS(b) = NPV(b) / (Dh(b) + Dc(b) + De(b))
end
sortBuildingByNPV_TLS(buildings, NPV_TLS)

// smart aggregation
i = 1
while (i < numOfPowerPlants)    // there are more buildings to test
    j = N                       // N = current number of power plants
    while (j > i)               // test only buildings with a lower NPV_TLS
        newNPV = optimize(buildings(i) + buildings(j))    // LP optimization
        if (newNPV < NPV(buildings(i)) + NPV(buildings(j)))
            aggregateBuildings(buildings(i) + buildings(j))
        end
        j = j - 1               // building not aggregated, try the next one
    end
    i = i + 1
end
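The control flow of Listing 1 can be mirrored in plain Python. Here optimize() is a stub returning a precomputed, cost-like 'NPV' for each group of buildings (lower is better, as in the listing); it only exercises the aggregation logic, and all names are illustrative, not the authors' MATLAB code.

```python
def optimize(group, cost):
    """Stub for the MILP sub-optimization: cost of serving a group jointly."""
    return cost[frozenset(group)]

def smart_aggregation(buildings, demand, cost):
    """Greedy bottom-up aggregation sketched after Listing 1.

    demand: total annual energy demand per building (denominator of NPV_TLS);
    cost: dict mapping frozensets of buildings to their joint 'NPV'.
    """
    # single-building optimizations and NPV_TLS ranking
    npv = {b: optimize([b], cost) for b in buildings}
    npv_tls = {b: npv[b] / demand[b] for b in buildings}
    order = sorted(buildings, key=lambda b: npv_tls[b])   # best first, worst last
    groups = [[b] for b in order]
    i = 0
    while i < len(groups):
        j = len(groups) - 1
        while j > i:  # test only candidates with a worse NPV_TLS
            merged = groups[i] + groups[j]
            # aggregate only if the joint plant beats the two separate ones
            if optimize(merged, cost) < optimize(groups[i], cost) + optimize(groups[j], cost):
                groups[i] = merged
                del groups[j]
            j -= 1
        i += 1
    return groups
```

With a joint cost of 180 against separate costs of 100 and 120, two buildings are merged into one plant; raising the joint cost above 220 keeps them separate.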
Here we provide an estimate of the computational complexity of our procedure. Basically, the system must be solved for (5·N_hours + 5) variables for the main power plant; in addition, (4·N_hours + 2) variables are used for each aggregated building, if any. The system is bounded by (10·N_hours + 2) constraints for the main power plant and (2·N_hours) for each aggregated building. The total descriptor size is therefore:

num_vars = 5·N_hours + 5 + (N_build - 1)·(4·N_hours + 2)
num_constraints = 10·N_hours + 2 + (N_build - 1)·(2·N_hours)    (31)
The number of sub-optimizations required for a whole multi-building optimization task follows from a simple analysis. Firstly, N_build single-building optimizations are required. Then, since each building is tried for aggregation only with buildings having a lower NPV_TLS, the maximum number of multi-building optimizations is upper bounded by
the summation of N_build - 1 terms, which in particular represents the exact number of sub-optimizations in the worst case (i.e. where no buildings are aggregated at all). Thus, the total number of sub-optimizations can be represented as follows:

num_optimizations = N_build + Σ_{n=1..N_build-1} n = N_build + N_build·(N_build - 1)/2 = N_build·(N_build + 1)/2    (32)

which clearly keeps the computational complexity in O(N_build²). However, each MILP optimization requires a worst-case number of steps which is exponential in the number of integer variables, i.e. N_build + 2. In conclusion, the global complexity of the algorithm is O(N_build² · 2^(N_build+2)). Note that, in practical cases, thanks to the improved pruning strategies of the Lindo optimizer, the actual number of steps remains much lower (i.e. quadratic). These theoretical considerations are confirmed by the numerical results and timings of the system, which vary in accordance with the computational analysis presented above. Some of these outcomes are shown numerically in Table 1 and graphically in Figure 3. The results refer to timing performances, in seconds, while varying the number of hours and of buildings used as the optimization basis. Such results were obtained from simulations operated on real datasets, for a case of 4 buildings connected in parallel, represented by two hotels and two hospitals. The scale of the simulated problem is fully consistent with the practical problems encountered in the tertiary sector. Execution times not higher than 10 minutes, even for larger-scale problems (up to six buildings), indicate that the proposed method is definitely usable in practical scenarios for the optimization of medium-sized districts, allowing interaction with the user.

Table 1. Execution times varying the number of hours and buildings in the optimization process

Execution time (s)
# of hours      1        2        3        4        5        6     (# of buildings)
60              9.81     29.19    51.20    83.61    129.13   183.41
120             10.05    30.90    53.45    92.66    128.29   173.90
180             11.69    37.83    69.70    108.41   165.87   217.22
240             11.80    46.50    82.83    128.40   205.52   275.29
300             12.44    61.97    100.04   152.83   248.62   329.09
360             13.07    84.56    132.29   195.62   332.24   437.45
420             14.13    123.80   183.54   289.76   464.55   624.09
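The size expressions and the sub-optimization count given above can be checked numerically with a short sketch:

```python
def problem_size(n_hours, n_build):
    """MILP descriptor size for one multi-building sub-optimization."""
    num_vars = 5 * n_hours + 5 + (n_build - 1) * (4 * n_hours + 2)
    num_constraints = 10 * n_hours + 2 + (n_build - 1) * (2 * n_hours)
    return num_vars, num_constraints

def num_suboptimizations(n_build):
    """Worst-case number of MILP sub-optimizations: N + N*(N-1)/2 = N*(N+1)/2."""
    return n_build + n_build * (n_build - 1) // 2
```

For the four-building case of Table 1 with 420 representative hours, problem_size(420, 4) gives 7,151 variables and 6,722 constraints, and num_suboptimizations(4) gives 10 solver calls in the worst case.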
Figure 3. Execution time varying the number of hours (x axis) and buildings (y axis) in the optimization process.
THE MATLAB GRAPHICAL USER INTERFACE
In this section the Graphical User Interface is described following a functional approach, i.e. briefly presenting the manual and automatic procedures adopted to perform the optimization:

1. The user uploads an Excel file containing the historical or expected energy loads (8,760 values for heating, cooling and electricity) for each building. As evident in Figure 4.a, where the uploaded energy loads are visualized, the user can graphically check the consistency of the uploaded data. In the meantime, the user indicates the position of the building (in metres, with respect to the origin conventionally fixed on the first building uploaded) and the software automatically creates a rough map of the area (see the picture at the bottom left in Figures 4.a-b).
2. The user uploads the files of hourly average regional electricity prices (8,760 values), which can easily be downloaded from the site of the GSE (Gestore dei Servizi Elettrici).
3. In the window 'Optimization parameters' the user enters the nDays for the optimization, the hourly time step, the nYears and annual interest rate for depreciation, the desired ΔT, the imposed PES (%), the CHP gas price and the price of natural gas.
4. In the box 'CHP data' the user enters the catalogue values of the engine to be installed. In the box 'Backup generation' the user enters the efficiency values of the boiler, of the power station and of the vapour-compression refrigerator used for conventional energy production.
Figure 4.a. Representation of the MATLAB graphic user interface (input data).

Figure 4.b. Representation of the MATLAB graphic user interface (output data).
Results are returned in Excel format and in graphical form, providing the nominal size of each component and the energy saving index achieved (always sufficient for 'high-efficiency cogeneration' eligibility). By clicking the empty box, the user can also view the results (already saved in Excel format) in graphical form. In the future this tool is expected to represent a reliable instrument for performing pre-feasibility analyses and recognizing the profit potential of CHCP applications in the civil sector.
CONCLUSIONS

Considering integrated energy systems implies dealing with complex systems in which the optimal synergy between the various components should be exploited at best. Assuming as the reference system a redundant CHCP superconfiguration, which includes a thermal energy storage, a tool for the optimization of a system serving a single building was coded. Following a bottom-up perspective, linear programming makes it possible to formulate the gradual aggregation of buildings to be served jointly by a unique CHCP scheme, including a warm-fluid distribution network. The overall tool is a flexible instrument for performing accurate preliminary design studies and assessing the viability of different CHCP solutions. An intuitive interface and a reasonable execution time make the software easy to use and truly competitive.
REFERENCES

[1] E. Cardona and A. Piacentino, Optimal design of CHCP plants in the civil sector by thermoeconomics, Applied Energy, vol. 84, pp. 729-748, 2007.
[2] A. Piacentino and F. Cardona, EABOT - Energetic analysis as a basis for robust optimization of trigeneration systems by linear programming, Energy Conversion and Management, vol. 49, pp. 3006-3016, 2008.
[3] A. Piacentino and F. Cardona, Integrated optimization of synthesis, design and operation in CHCP-based µ-grids - Part I. Description of the method, Proceedings of ECOS 2007, Padova, Italy: SGE Pub., June 2007, pp. 575-84.
[4] William J. Palm III, Introduction to Matlab 7 for Engineers, The McGraw-Hill Companies, Inc.: New York, NY, 2005.
[5] LINDO API 6.0 User Manual, LINDO Systems, Inc.: Chicago, Illinois (IL), 2009, pp. 548.
[6] Arcuri P, Florio G, Fragiacomo, A mixed integer programming model for optimal design of trigeneration in a hospital complex, Energy, vol. 32, pp. 1430-47, 2007.
[7] Gianfranco Chicco, Pierluigi Mancarella, A unified model for energy and environmental performance assessment of natural gas-fueled polygeneration systems, Energy Conversion and Management, vol. 49, pp. 2069-2077, 2008.
[8] C. I. Weber, Multi-Objective Design and Optimization of District Energy Systems Including Polygeneration Energy Conversion Technologies, Ph.D. thesis n. 4018, École Polytechnique Fédérale de Lausanne, 2009.
[9] Delibera AEEG n. 280/07, Modalità e Condizioni Tecnico Economiche per il Ritiro dell'Energia Elettrica ai sensi dell'Articolo 13, Commi 3 e 4, del Decreto Legislativo 29 Dicembre 2003, N. 387, e del Comma 1 della Legge 23 Agosto 2004, N. 239.
[10] E. Cardona and A. Piacentino, DABASI - WWW promotion of energy saving by CHCP plants - database and evaluation, SAVE II Project, Contract No. 4.1031/Z/02-060, Bruxelles, 2005.
[11] http://www.gasturbines.com/index.html
[12] A. Piacentino, C. Barbaro and F. Cardona, Optimization of Polygeneration Plants and µ-grids for Civil Applications, Proceedings of the ASME-ATI-UIT 2010 Conference on Thermal and Environmental Issues in Energy Systems, 16-19 May 2010, Sorrento, Italy, pp. 87-92.
[13] B. Skagestad, P. Mildenstein, District Heating and Cooling Connection Handbook, International Energy Agency (IEA) District Heating and Cooling, Programme of Research, Development and Demonstration on District Heating and Cooling. Available from http://www.iea-dhc.org, last accessed Sept. 2010.
[14] BRUGG PIPE SYSTEMS, June 2009.
[15] C. Cordier, H. Marchand, R. Laundy, L.A. Wolsey, bc-opt: a branch-and-cut code for mixed integer programs, Mathematical Programming, vol. 86, 1999, p. 335.
[16] S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM Journal on Optimization, vol. 2, no. 4, pp. 575-601, 1992.
In: Linear Programming
Editor: Zoltan Adam Mann
ISBN: 9781612095790
© 2012 Nova Science Publishers, Inc.
Chapter 12
LINEAR PROGRAMMING APPLIED FOR THE OPTIMIZATION OF HYDRO AND WIND ENERGY RESOURCES

H. M. I. Pousinho¹,², V. M. F. Mendes³ and J. P. S. Catalão¹,²,*
¹ Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilhã, Portugal
² Center for Innovation in Electrical and Energy Engineering, Instituto Superior Técnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisbon, Portugal
³ Department of Electrical Engineering and Automation, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emídio Navarro, 1950-062 Lisbon, Portugal
ABSTRACT

In this book chapter, two important applications of linear programming from the electric power industry are presented, namely short-term hydro scheduling and the development of offering strategies for wind power producers. The linear programming approach is proposed to solve the problems faced by generation companies whose main goal is to maximize profits. On the one hand, the main concern of hydroelectric companies is to find the optimal scheduling of hydroelectric power plants for a short-term period in which the electricity prices are forecasted. The actual size of hydro systems, and the continuous reservoir dynamics and constraints, still pose a real challenge to modelers. On the other hand, wind power producers are entities owning generation resources and participating in the electricity market. The challenges for wind power producers are related to two kinds of uncertainty: wind power and electricity prices. It can be concluded that linear programming represents a robust approach for these two problems.
* E-mail address: [email protected] (J. P. S. Catalão). Tel.: +351 275 329914; Fax: +351 275 329972. (Corresponding author.)
Keywords: Linear programming; hydro scheduling; cascaded reservoirs; wind energy; optimal bids
1. INTRODUCTION

Nowadays, renewable energy sources play an increasingly important role in electricity production [1], since they produce clean energy, respecting the commitment established by the Kyoto Protocol. These renewable energy sources can partly replace carbon-emitting fossil-based electricity generation, and thereby reduce CO2 emissions [2]. Hence, the use of renewable energy has been increasing worldwide in the last decade, particularly in European countries such as Denmark [3] and Ireland [4].

Hydro energy is currently one of the priorities of the Portuguese energy policy. Under this policy, the optimal management of hydro energy systems is of crucial importance, as occurs for instance in Norway [5]. In this book chapter, the short-term hydro scheduling (STHS) problem of a cascaded hydro energy system is considered. The STHS problem considers a time horizon of one day to one week, usually divided into hourly intervals. Hence, the STHS problem is treated as a deterministic one. Where the problem includes stochastic quantities, such as inflows to the reservoirs or electricity prices, the corresponding forecasts are used.

In a deregulated environment, a hydro generating company (HGENCO) is usually an entity owning generation resources and participating in the electricity market with the ultimate goal of maximizing profits, without concern for the system unless there is an incentive for it. A day-ahead electricity market based on a pool is considered in this book chapter. The optimal management of the water available in the reservoirs for power generation, regarding future operation use, delivers a self-schedule and represents a major advantage for the HGENCO in facing competition, given the economic stakes involved. Based on the self-schedule, the HGENCO is able to submit bids to the electricity market with rational support.
Thus, for deregulated environments, the STHS solution is important as decision support for developing bidding strategies in the market [6], guided by the forecasted electricity prices, and a more realistic modeling is crucial for surviving in today's competitive framework.

Dynamic programming (DP) is among the earliest methods applied to the STHS problem [7]. Although DP can handle the non-convex, nonlinear characteristics present in the hydro model, direct application of DP methods to cascaded hydro energy systems is impractical due to the well-known DP curse of dimensionality. Artificial intelligence techniques have also been applied to the STHS problem [8–10]. However, due to the heuristics used in the search process, only suboptimal solutions can be reached.

A natural approach to STHS is to model the system as a network flow model, because of the network structure underlying cascaded hydro energy systems [11]. For cascaded hydro energy systems, as there are water linkages and electric connections among plants, the advantages of the network flow technique are salient. Hydroelectric power generation characteristics are often assumed to be linear or piecewise linear in hydro scheduling models. Accordingly, the solution procedures can be based on linear programming (LP).
Hence, LP is proposed in this book chapter for solving the STHS problem in the day-ahead electricity market.

Wind, as a renewable energy source, has been widely deployed as a means to reach emission reduction goals, as a result of increasing concern regarding environmental protection [12]. Actually, wind power is the world's fastest growing renewable energy source [13]. In Portugal, the wind power goal foreseen for 2010 was established by the government as 3750 MW, representing about 25% of the total installed capacity in 2010 [14]. This value has recently been raised to 5100 MW by the most recent governmental goals for the wind sector.

In deregulated markets, wind power producers are entities owning generation resources and participating in the market with the ultimate goal of maximizing profits [15]. The challenges for wind power producers are related to two kinds of uncertainty: wind power and electricity prices. A large variability of wind power or electricity prices means a large variability in profit [16]. Thus, the decision makers have to consider these two kinds of uncertainty, as well as the several technical constraints associated with the operation of wind farms.

The offer decisions to submit to the electricity market have to be made for each hour, without knowing exactly what the value of power generation will be. The differences between the produced energy and the supplied energy constitute the energy imbalances. The imbalances are penalized by the market [17,18]. A wind power producer needs to know how much it will produce in order to make realistic bids, because in case of bids in excess or in deficit, other producers must reduce or increase production to fill the so-called deviation, causing economic losses. These economic losses are reflected in so-called penalties for deviation. To take these uncertainties into account, multiple scenarios can be built using wind power forecasting [19–21] and electricity price forecasting [22,23] tools.
A scenario tree represents the different values that the random parameters can take, i.e., different realizations of uncertainty. The tree is a natural and explicit way of representing the non-anticipativity of decisions. The stochastic nature of the uncertain quantities can be modeled through a two-stage stochastic programming approach [24–27]. In this approach, the set of decisions inherent to the problem can be divided into two distinct stages: first-stage decisions, which must be taken before the uncertainty is resolved, and second-stage decisions, which are made after the uncertainty is revealed and are influenced by the decisions taken in the first stage. The first-stage decisions correspond to the hourly bids to be submitted to the day-ahead market, while the second-stage decisions correspond to the power output of the wind farm in each hour for a given scenario. Figure 1 shows the scenario tree that will be used to represent the decisions to be taken in the two stages previously mentioned.

In this book chapter, a stochastic programming approach is proposed to generate the optimal offers that should be submitted to the day-ahead market by a wind power producer, in order to maximize its expected profit.

The book chapter is organized as follows. In Section 2, the mathematical formulation of both problems is provided. Section 3 presents the proposed LP approach. In Section 4, the proposed LP approach is applied on two case studies, to demonstrate its effectiveness. Finally, concluding remarks are given in Section 5.
Figure 1. Scenario tree.
NOMENCLATURE

I, i        set and index of reservoirs
K, k        set and index of hours in the time horizon
S, s        set and index of scenarios
ρ_s         probability of occurrence of scenario s
λ_k         forecasted electricity price in hour k
λ_sk        expected electricity price in scenario s in hour k
p_ik        power generation of plant i in hour k
ψ_i         future value of the water stored in reservoir i
a_ik        inflow to reservoir i in hour k
M_i         set of reservoirs upstream of reservoir i
v_ik        water storage of reservoir i at the end of hour k
q_ik        water discharge of reservoir i in hour k
s_ik        water spillage of reservoir i in hour k
h_ik        head of plant i in hour k
ν           penalty factor over the electricity price for energy imbalances
x_k         energy offered by the wind power producer in the day-ahead market for hour k
ω_sk        cost of penalization for deviation in scenario s in hour k
dev_sk      deviation of wind production in scenario s in hour k
Pdev_sk     penalization for deviation of the wind farm in scenario s in hour k
W_sk        wind generation forecast in scenario s in hour k
P^max       maximum power of the wind farm
L_sk        revenue in scenario s in hour k
v_i^min, v_i^max   water storage limits of reservoir i
q_i^min, q_i^max   water discharge limits of plant i
p_i^min, p_i^max   power generation limits of plant i
v_i0        initial water storage of reservoir i
A           constraint matrix
b^min, b^max       lower and upper bound vectors on the constraints
x           vector of decision variables
x^min, x^max       lower and upper bound vectors on the decision variables
h^min, h^max       lower and upper bound vectors for the second-stage constraints
T           technology matrix
W           recourse matrix
q           vector of coefficients of the linear term for the second-stage variables
y           second-stage variables, representing decisions to be made after part of the uncertainty is revealed
2. PROBLEM FORMULATION

In this section, the formulation of each problem discussed previously is presented. The short-term hydro scheduling problem is formulated first; afterwards, the problem of developing offering strategies for wind power producers is formulated.
2.1. Short-Term Hydro Scheduling

The STHS problem can be stated as finding the water discharges, water storages, and water spillages, for each reservoir i at all scheduling time periods k, that maximize (or minimize) a performance criterion subject to all hydraulic constraints.
1. Objective Function

In this problem, the objective function to be maximized is expressed as:

F = Σ_{i=1}^{I} Σ_{k=1}^{K} λ_k p_ik + Σ_{i=1}^{I} ψ_i(v_iK)    (1)
In (1), the first term is related to the revenues of each plant i in the hydro energy system during the short-term time horizon, whereas the last term expresses the water value, ψ_i, of the water stored in the reservoirs at the last period, v_iK, for future use. The future value of the water stored in the reservoirs is not considered in (1) if the water storage in the reservoirs in the last period is fixed. An appropriate representation when this term is explicitly taken into account can be seen, for instance, in [28]. The storage targets for the short-term time horizon can be established by medium-term planning studies.
2. Hydro Constraints

The hydro constraints are of two kinds: equality constraints, and inequality constraints or simple bounds on the decision variables. The water balance equation for each reservoir is formulated as:

v_ik = v_i,k-1 + a_ik + Σ_{m∈M_i} (q_mk + s_mk) − q_ik − s_ik,  ∀ i ∈ I, k ∈ K    (2)
assuming that the time required for water to travel from a reservoir to the reservoir directly downstream is less than the one-hour period.

The head of a hydro plant i measures the difference between the forebay elevation and the tailrace elevation. The forebay elevation depends on the reservoir contents and also on the flows through the reservoirs. The tailrace elevation depends on the discharges and, for some plants, on the elevation of the water in the immediate downstream reservoir. Therefore, the head can be expressed as a function of the plant's reservoir storage, v_f(i)k, and of the immediate downstream reservoir storage, v_t(i)k [29]:

h_ik = h_ik(v_f(i)k, v_t(i)k),  ∀ i ∈ I, k ∈ K    (3)

If the tailrace elevation is considered constant, this relationship can be simplified [29]:

h_ik = h_ik(v_f(i)k),  ∀ i ∈ I, k ∈ K    (4)
so that the head depends only on the storage of the upstream reservoir.

Power generation is considered a function of the water discharge:

p_ik = P(q_ik),  ∀ i ∈ I, k ∈ K    (5)

Water storage has lower and upper bounds, given by:
Linear Programming Applied for the Optimization ... v i v ik v i i I, k K
357
(6)
Water discharge has lower and upper bounds, given by:

q_i^min ≤ q_ik ≤ q_i^max,  ∀ i ∈ I, k ∈ K    (7)
Each hydro plant has a mechanism to spill a given quantity of water, if required. A null lower bound for water spillage is considered, given by:

s_ik ≥ 0,  ∀ i ∈ I, k ∈ K    (8)
Thus, water spillage can occur when, without it, the water storage would exceed its upper bound; spilling is therefore necessary only for safety considerations. The spillage effects were considered in [30]. The initial water storages and the inflows to the reservoirs are assumed known.

The HGENCO analyzed in this book chapter is considered to be a price-taker, i.e., it does not have market power. Therefore, the electricity prices λ_k in (1) are also assumed known, as in [31,32]. The size of the LP problem (1)–(8), expressed as the number of continuous variables and constraints, is provided in Table 1.
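The water balance (2) can be checked numerically by propagating the storages hour by hour. The following sketch is illustrative only: a hypothetical two-reservoir cascade with made-up inflows and discharges, not the chapter's case study.

```python
def simulate_storage(v0, inflow, discharge, spill, upstream):
    """Propagate the water balance (2):
    v[i][k] = v[i][k-1] + a[i][k]
              + sum over upstream m of (q[m][k] + s[m][k]) - q[i][k] - s[i][k]
    v0: initial storage per reservoir; inflow/discharge/spill: per-reservoir,
    per-hour lists; upstream[i]: list of reservoirs feeding reservoir i."""
    n_res, n_hours = len(v0), len(inflow[0])
    v = [[0.0] * n_hours for _ in range(n_res)]
    for k in range(n_hours):
        for i in range(n_res):
            prev = v0[i] if k == 0 else v[i][k - 1]
            received = sum(discharge[m][k] + spill[m][k] for m in upstream[i])
            v[i][k] = prev + inflow[i][k] + received - discharge[i][k] - spill[i][k]
    return v

# Toy cascade: reservoir 0 feeds reservoir 1, over two hours (illustrative data).
v0 = [10.0, 20.0]
a = [[2.0, 2.0], [1.0, 1.0]]   # inflows
q = [[3.0, 3.0], [4.0, 4.0]]   # discharges
s = [[0.0, 0.0], [0.0, 0.0]]   # spillages
v = simulate_storage(v0, a, q, s, upstream={0: [], 1: [0]})
```

Note how reservoir 1 receives the discharge of reservoir 0 within the same hour, reflecting the sub-hour travel-time assumption stated after (2).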
Table 1. Problem size – Short-term hydro scheduling

Continuous variables      3·I·K
Equality constraints      I·K
Inequality constraints    3·I·K
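For intuition on what the STHS LP computes, consider a deliberately simplified single-reservoir instance: linear generation p = η·q, constant discharge bounds, and a fixed water budget. For this separable toy, the LP optimum reduces to a price merit order: discharge at the most valuable hours first. The instance and all numbers below are hypothetical, not taken from the chapter (the real model has cascaded reservoirs and the coupled balance (2)).

```python
def sths_merit_order(prices, q_max, water_budget, eta=1.0):
    """Maximize sum_k prices[k] * eta * q[k]
    subject to 0 <= q[k] <= q_max and sum_k q[k] <= water_budget.
    For this separable toy LP the optimum is a price merit order."""
    q = [0.0] * len(prices)
    remaining = water_budget
    for k in sorted(range(len(prices)), key=lambda k: -prices[k]):
        q[k] = min(q_max, remaining)   # discharge as much as allowed at this hour
        remaining -= q[k]
        if remaining <= 0:
            break
    profit = sum(prices[k] * eta * q[k] for k in range(len(prices)))
    return q, profit

prices = [20.0, 50.0, 30.0, 10.0]   # $/MWh, illustrative 4-hour horizon
q, profit = sths_merit_order(prices, q_max=100.0, water_budget=150.0)
```

The water ends up in the 50 $/MWh and 30 $/MWh hours, exactly what an LP solver would return for this instance; the full model of (1)–(8) needs a general LP solver because the reservoirs are coupled through (2).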
2.2. Development of Offering Strategies for Wind Power Producers

The mathematical formulation of the optimization model for the development of offering strategies for wind power producers is presented next. This formulation uses an absolute value function, which can be expressed in the context of LP by adding auxiliary variables for the positive and negative deviations.
1. Objective Function

The objective function to be maximized can be expressed as:

F = Σ_{s=1}^{S} Σ_{k=1}^{K} ρ_s (λ_sk p_sk − ν λ_sk dev_sk)    (9)
The objective function represents the total profit on the sale of wind energy in each scenario s, taking into account the probability of occurrence ρ_s, less a penalty for deviations from the bids. The penalties for deviations are associated with the short-term variability (e.g., hour-to-hour variation) and the lack of predictability of wind power. Hence, the deviations are measured in absolute value, and can be generated by an excess or a deficit of energy:

dev_sk = p_sk − x_k,  if p_sk − x_k ≥ 0
dev_sk = −(p_sk − x_k),  if p_sk − x_k < 0    (10)
The deviation cost is set as a percentage of the daily market price:

ω_sk = ν λ_sk    (11)
The penalty for the deviation is the product of the deviation cost and the deviation in absolute value:

Pdev_sk = ω_sk dev_sk    (12)
The revenue is given by the product of the expected electricity price and the power output of the wind farm:

L_sk = λ_sk p_sk    (13)
The profit of the operation is calculated as the difference between the revenue of the wind farm and the penalization for deviation:

F = L_sk − Pdev_sk    (14)
The objective function is obtained by substituting (12) and (13) into (14), resulting in the following equation:

F = Σ_{s=1}^{S} Σ_{k=1}^{K} ρ_s (λ_sk p_sk − ω_sk |p_sk − x_k|)    (15)
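A single term of (15) can be evaluated directly as a sanity check on the definitions (10)–(13); the prices and quantities below are illustrative only.

```python
def scenario_profit(rho, price, produced, offered, nu):
    """One term of (15): rho * (price * p - omega * |p - x|),
    with the deviation (10) and the deviation cost omega = nu * price (11)."""
    dev = abs(produced - offered)   # deviation (10), in absolute value
    omega = nu * price              # deviation cost (11)
    return rho * (price * produced - omega * dev)

# Offer 100 MWh, actually produce 90 MWh at 50 $/MWh, 30% penalty factor:
profit = scenario_profit(rho=1.0, price=50.0, produced=90.0, offered=100.0, nu=0.3)
```

Here the revenue term is 50 × 90 and the penalty term is 0.3 × 50 × 10, so the scenario contributes 4350.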
2. Constraints

In order to make the offers to the market, the technical limitations of the wind farm must be satisfied. So, the optimal value of the objective function is determined subject to inequality constraints or simple bounds on the variables. The constraints are indicated as follows:

0 ≤ p_sk ≤ W_sk    (16)
In inequality (16), the wind power is bounded above by the value of the wind generation forecast, W_sk, in scenario s in hour k. The value of the wind generation forecast is not always attainable, due to the intermittency of the wind.

0 ≤ x_k ≤ P^max    (17)
In inequality (17), the offers are limited by the maximum power installed in the wind farm, P^max.
3. Linearization of the Objective Function

The objective function presented in the previous subsection is characterized by a nonlinearity due to the existence of the absolute value. So, a mathematical transformation is required to reformulate it as a linear problem. In this subsection, the problem involving absolute value terms is transformed into an LP formulation. Initially, it is considered:

Max_x F = c^T x    (18)
subject to

x^min ≤ x ≤ x^max    (19)

x ∈ R^n    (20)
In (18), the function F(·) is an objective function of the decision variables, where c is the vector of coefficients of the linear term. In (19), x^min and x^max are the lower and upper bound vectors on the variables, and x is the vector of decision variables. Subsequently, each absolute-valued variable is replaced by the sum of two nonnegative variables:

|x| = x⁺ + x⁻    (21)

In addition, each variable is substituted by the difference of the same two nonnegative variables:

x = x⁺ − x⁻    (22)
The equivalent LP problem is given by:

Max F = c^T (x⁺ − x⁻)    (23)

subject to

x^min ≤ x⁺ − x⁻ ≤ x^max    (24)

x⁺ ≥ 0    (25)

x⁻ ≥ 0    (26)
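The substitution (21)–(22) is mechanical: a variable x_j with linear coefficient c_j and an absolute-value penalty d_j|x_j| becomes the pair (x_j+, x_j-) with objective coefficients c_j − d_j and −c_j − d_j. A minimal sketch of this construction follows (a hypothetical helper, no solver involved; the coefficient values are illustrative):

```python
def split_abs_objective(c, d):
    """Rewrite max c.x - d.|x| (with d >= 0) via x = xp - xm and |x| = xp + xm:
    the coefficient of xp_j is c_j - d_j, the coefficient of xm_j is -c_j - d_j,
    with xp, xm >= 0.  Returns the linear objective over [xp..., xm...]."""
    c_plus = [cj - dj for cj, dj in zip(c, d)]
    c_minus = [-cj - dj for cj, dj in zip(c, d)]
    return c_plus + c_minus

coeffs = split_abs_objective(c=[3.0, -1.0], d=[0.5, 2.0])
```

Because d ≥ 0, maximization drives x⁺ and x⁻ towards complementarity (at most one of them nonzero per component), so x⁺ + x⁻ equals |x| at the optimum, which is why (23)–(26) is equivalent to the original absolute-value problem.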
The size of the LP problem (9)–(17), expressed as the number of continuous variables and constraints, is provided in Table 2.
Table 2. Problem size – Development of offering strategies for wind power producers

Continuous variables      3·S·K + K
Equality constraints      S·K
Inequality constraints    3·S·K + K
3. PROPOSED APPROACH

LP is an optimization procedure that minimizes (or maximizes) a linear objective function over variables that are also subject to linear constraints. A solution that satisfies all constraints of the problem and the given objective is called an optimal solution. LP is characterized by a simple mathematical structure, yet is powerful in its adaptability to a wide range of applications, and LP algorithms are available as extremely robust and efficient codes.
3.1. LP Applied for Short-Term Hydro Scheduling

The LP approach is used for solving the STHS problem. The LP problem can be stated as maximizing:

F(x)    (27)

subject to:

b^min ≤ A x ≤ b^max    (28)

x^min ≤ x ≤ x^max    (29)

In (27), the function F(·) is a linear function of the vector x of decision variables. Equality constraints are defined by setting the lower bound equal to the upper bound, i.e., b^min = b^max. Equation (29) corresponds to the inequality constraints or simple bounds on the variables in (6), (7) and (8).
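Many LP solver interfaces expect one-sided constraints of the form A_ub x ≤ b_ub, so the two-sided form (28) is typically converted by stacking A over −A. A minimal sketch of that conversion (a hypothetical helper with illustrative data, not part of the chapter's implementation):

```python
def two_sided_to_one_sided(A, b_lo, b_hi):
    """Rewrite b_lo <= A x <= b_hi as A_ub x <= b_ub:
    the rows A x <= b_hi are kept, and each lower bound
    b_lo <= A x becomes (-A) x <= -b_lo."""
    A_ub = [row[:] for row in A] + [[-a for a in row] for row in A]
    b_ub = list(b_hi) + [-b for b in b_lo]
    return A_ub, b_ub

# One constraint 1 <= x1 + 2*x2 <= 5 becomes two one-sided rows:
A_ub, b_ub = two_sided_to_one_sided([[1.0, 2.0]], b_lo=[1.0], b_hi=[5.0])
```

An equality constraint (b^min = b^max) survives this transformation as a pair of opposing inequalities, consistent with the convention stated after (29).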
3.2. LP Applied for the Development of Offering Strategies for Wind Power Producers

1. Two-Stage Stochastic Programming

The two-stage stochastic programming model can be formulated as:

Max_x c^T x + E[Max_y q^T y]    (30)

subject to

b^min ≤ A x ≤ b^max    (31)

h^min ≤ T x + W y ≤ h^max    (32)

x ≥ 0, y ≥ 0    (33)

In the first stage, the "here-and-now" decisions should be taken, before the uncertainty is known. In the second stage, when this information is already available, the decision is made about the value of the vector y. The first-stage decision x depends only on the information available up to that time; this principle is called the non-anticipativity constraint. The two-stage structure means that the decision x is independent of the realizations of the second stage, and thus the vector x is the same for all possible events that may occur in the second stage of the problem.
2. Deterministic Equivalent Problem

The stochastic model is usually a difficult computational problem, so it is common to resort to a deterministic model, using the average of the random variables or solving a deterministic problem for each scenario. The deterministic equivalent problem is given by:

Max_{x, y_s} c^T x + Σ_{s=1}^{S} ρ_s q_s^T y_s    (34)

subject to

b^min ≤ A x ≤ b^max    (35)

h_s^min ≤ T_s x + W_s y_s ≤ h_s^max,  for s = 1, …, S    (36)

x ≥ 0, y_s ≥ 0,  for s = 1, …, S    (37)

For large-scale linear problems, the constraint matrix composed of (35) and (36) can generally be represented as in Figure 2.
Figure 2. Layout of the constraints associated with two stages.
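For a feel of what the deterministic equivalent computes in the wind offering problem, consider a single hour under the simplifying assumption that each scenario produces its full forecast W_s (plausible here since ν < 1, so producing more always increases the objective). The bid x then only has to minimize the expected absolute-deviation penalty Σ_s ρ_s ω_s |W_s − x|, whose minimizer is a weighted median of the wind scenarios, weighted by ρ_s λ_s. This closed-form shortcut is a known property of absolute-value objectives, not a method stated in the chapter; all numbers are illustrative.

```python
def optimal_bid(wind, price, prob, p_max):
    """Optimal single-hour day-ahead offer, assuming every scenario s
    produces its full forecast wind[s]: the x minimizing
    sum_s prob[s] * price[s] * |wind[s] - x| is a weighted median
    of the wind values with weights prob[s] * price[s], clipped to [0, p_max]."""
    weights = (p * lam for p, lam in zip(prob, price))
    pairs = sorted(zip(wind, weights))          # scenarios ordered by wind level
    total = sum(w for _, w in pairs)
    acc = 0.0
    for wind_s, weight in pairs:
        acc += weight
        if acc >= total / 2:                    # first scenario reaching half the mass
            return min(max(wind_s, 0.0), p_max)

bid = optimal_bid(wind=[80.0, 100.0, 120.0],     # MWh scenarios, illustrative
                  price=[50.0, 50.0, 50.0],      # equal prices
                  prob=[1/3, 1/3, 1/3],
                  p_max=265.0)
```

With equal prices and probabilities the optimal bid is simply the median wind scenario; the full model (34)–(37) is still needed when constraints couple the hours or when production itself is a decision.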
4. CASE STUDIES

The proposed LP approach has been developed and implemented in MATLAB and solved using the optimization solver package CPLEX. The numerical simulation has been performed on a 2-GHz processor with 2 GB of RAM.

Table 3. Hydro data

#   v_i^min (hm³)   v_i^max (hm³)   v_i0 (hm³)   p_i^min (MW)   p_i^max (MW)   q_i^min (m³/s)   q_i^max (m³/s)
1   5.18            12.94           10.35        28.00          188.08         168.13           1144.50
2   5.32            13.30           10.64        29.99          237.14         104.70           1080.00
3   39.00           97.50           78.00        10.67          60.00          3.00             16.40
4   4.80            12.00           9.60         24.99          185.99         104.67           900.00
5   4.40            11.00           8.80         29.99          201.02         93.23            881.31
6   36.89           58.38           46.70        39.99          134.02         94.99            326.34
7   8.60            21.50           17.20        19.99          117.01         182.83           1356.51
Figure 3. Hydro energy system with seven cascaded reservoirs, where a represents inflow, v represents water storage, q represents water discharge and s represents water spillage.
Figure 4. Electricity price profile.
The two case studies are defined by:
Case A. Short-term hydro scheduling.
Case B. Development of offering strategies for wind power producers.
4.1. Case A

The proposed LP approach has been applied on one of the main Portuguese cascaded hydro energy systems. The realistically-sized hydro energy system has seven cascaded reservoirs and is shown in Figure 3. Table 3 shows the data of these plants. The hydro plants numbered 1, 2, 4, 5 and 7 in Figure 3 are run-of-the-river hydro plants. The hydro plants numbered 3 and 6 are storage hydro plants. Inflow is considered only on reservoirs 1 to 6. The final water storage in the reservoirs is constrained to be equal to the initial water storage. The time horizon is one day, divided into 24 hourly intervals.

The electricity price profile considered over the short-term time horizon is shown in Figure 4 ($ is a symbolic economic quantity). The electricity price values are based on real market operation. The competitive environment coming from the deregulation of the electricity markets brings energy price uncertainty, placing higher requirements on forecasting. A good price forecasting tool reduces the risk of under- or overestimating the profit of the HGENCO and provides better risk management. In the short term, a generating company needs to forecast energy prices to derive its bidding strategy in the market and to optimally schedule its energy resources [33]. Price forecasting has become in recent years an important research area in electrical engineering, and several techniques have been tried out in this task. In general, hard and soft computing techniques can be used to predict energy prices. The hard computing techniques include auto-regressive integrated moving average (ARIMA) [34] and wavelet-ARIMA [35] models. The soft computing techniques include neural networks [22] and hybrid approaches [36,37]. These energy prices are considered as deterministic input data for our STHS problem.

The storage trajectories of the reservoirs are shown in Figure 5. The discharge profiles for the reservoirs are shown in Figure 6.
The main numerical results are summarized in Table 4. The optimal solution requires only 1.20 seconds of CPU time, on a 2-GHz processor with 2 GB of RAM, using CPLEX. Hence, the LP approach provides a solution for this problem with a negligible CPU time requirement.

Table 4. LP results

Method   Average Discharge (%)   Average Storage (%)   Total Profit ($ 10³)   CPU (s)
LP       25.00                   83.08                 714.57                 1.20
Figure 5. Storage trajectories of the reservoirs considering the proposed LP approach.
Figure 6. Discharge profiles for the reservoirs considering the proposed LP approach.
4.2. Case B
The proposed LP approach has also been applied on a case study based on a Portuguese wind farm. The total installed capacity of the plant is 265 MW. The deviation cost has been fixed at 30% of the daily market price, ν = 0.3. Therefore, the penalty depends not only on the price but also on the magnitude of the deviation in terms of power. The time horizon chosen is one day, divided into 24 hourly periods. This case study is composed of six electricity price scenarios, computed by the approach proposed in [22] and shown in Figure 7, and six wind power scenarios, computed by the approach proposed in [21] and shown in Figure 8, over the time horizon. The number of scenarios generated for the day-ahead market in the optimization problem is S = 36. The probability of each generated scenario is 1/S.
Figure 7. Electricity price scenarios considered in the case study.
Figure 8. Wind power scenarios considered in the case study.
Table 5. Scenarios considered: number and probability

                   Number of scenarios   Probability
Price scenarios    6                     0.17
Wind scenarios     6                     0.17
Total scenarios    36                    0.03
Table 5 summarizes the data of the scenarios that compose the probability tree. The optimal bids, shown in Figure 9, are common to the 36 scenarios that compose the probability tree.
Figure 9. Optimal hourly bids.
Choosing one scenario of the problem, it can be verified in Figure 10 that the wind farm adjusts its production to minimize deviations. Nevertheless, in almost every hour there are small differences between the offers and the power output of the wind farm. The deviations from generated power for this scenario are shown in Figure 11. The expected value of the profit is 276 685 €. The dispersion of the profit over the 36 scenarios is shown in Figure 12. Table 6 provides the confidence interval for the profit. The optimal solution requires only 1.59 seconds of CPU time, on a 2-GHz processor with 2 GB of RAM, using CPLEX. Hence, the LP approach provides a solution for this problem with a negligible CPU time requirement.

Table 6. 95% confidence interval of the expected profit

            95% confidence interval of the expected profit (€)
Wind farm                [271 008; 283 075]
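A confidence interval like the one in Table 6 can be computed from the scenario profits with the usual normal approximation, mean ± 1.96·s/√n. The sketch below is illustrative only: the profit values are invented, and only the formula corresponds to the procedure described in the text.

```python
from math import sqrt

def confidence_interval_95(profits):
    """Normal-approximation 95% confidence interval for the expected
    profit, computed from a sample of per-scenario profits."""
    n = len(profits)
    mean = sum(profits) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in profits) / (n - 1)
    half_width = 1.96 * sqrt(var / n)
    return mean - half_width, mean + half_width

# Illustrative profits for 36 scenarios (EUR); not the case-study data.
profits = [270_000 + 500 * i for i in range(36)]
lo, hi = confidence_interval_95(profits)
assert lo < sum(profits) / len(profits) < hi  # mean lies inside the CI
```

With the actual 36 scenario profits from the optimization, the same computation would reproduce the interval reported in Table 6.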
Linear Programming Applied for the Optimization ...
Figure 10. Optimal offers to be submitted to the day-ahead market and power produced.
Figure 11. Deviations resulting from the difference between the offers and the power produced.
Figure 12. Dispersion of profit.
5. CONCLUSIONS

An LP approach is proposed to solve the STHS problem in the day-ahead electricity market. The goal in the STHS problem is to maximize the value of the total hydroelectric generation throughout the time horizon, while satisfying all hydraulic constraints, aiming at the most efficient and profitable use of the water. The results obtained by the proposed LP approach are feasible, while simultaneously requiring only negligible computation time. The proposed LP approach has also been applied to allow wind power producers to develop better offering strategies in the market. The goal here is to maximize the profit of the wind power producer while reducing deviations, taking into account the uncertainty associated with wind energy production and electricity prices. The results show that the wind farm adjusts its production to minimize deviations. Hence, the LP approach is also proficient in the development of offering strategies for wind power producers.
ACKNOWLEDGMENT

The authors gratefully acknowledge the Fundação para a Ciência e a Tecnologia (FCT), with co-participation of the European Community fund FEDER, for financial support under R&D Project Ref. PTDC/EEA-EEL/110102/2009. Also, H. M. I. Pousinho thanks FCT for a Ph.D. grant (SFRH/BD/62965/2009).
REFERENCES

[1] Melicio R, Mendes VMF, Catalão JPS. Fractional-order control and simulation of wind energy systems with PMSG/full-power converter topology. Energ Convers Manage 2010;51(6):1250–8.
[2] Delarue ED, Luickx PJ, D'haeseleer WD. The actual effect of wind power on overall electricity generation costs and CO2 emissions. Energ Convers Manage 2009;50(6):1450–6.
[3] Lund H, Mathiesen BV. Energy system analysis of 100% renewable energy systems—The case of Denmark in years 2030 and 2050. Energy 2009;34:524–31.
[4] Connolly D, Lund H, Mathiesen BV, Leahy M. Modelling the existing Irish energy system to identify future energy costs and the maximum wind penetration feasible. Energy 2009;35:2164–73.
[5] Wolfgang O, Haugstad A, Mo B, Gjelsvik A, Wangensteen I, Doorman G. Hydro reservoir handling in Norway before and after deregulation. Energy 2009;34:1642–51.
[6] Fleten SE, Kristoffersen TK. Stochastic programming for optimizing bidding strategies of a Nordic hydropower producer. Eur J Oper Res 2007;181:916–28.
[7] Arce A, Ohishi T, Soares S. Optimal dispatch of generating units of the Itaipú hydroelectric plant. IEEE Trans Power Syst 2002;17:154–8.
[8] Yuan X, Wang L, Yuan Y. Application of enhanced PSO approach to optimal scheduling of hydro system. Energ Convers Manage 2008;49:2966–72.
[9] Wu JK, Zhu JQ, Chen GT, Zhang HL. A hybrid method for optimal scheduling of short-term electric power generation of cascaded hydroelectric plants based on particle swarm optimization and chance-constrained programming. IEEE Trans Power Syst 2008;23:1570–9.
[10] Cheng CT, Liao SL, Tang ZT, Zhao MY. Comparison of particle swarm optimization and dynamic programming for large scale hydro unit load dispatch. Energ Convers Manage 2009;50:3007–14.
[11] Oliveira ARL, Soares S, Nepomuceno L. Short term hydroelectric scheduling combining network flow and interior point approaches. Int J Electr Power Energy Syst 2005;27:91–9.
[12] Kuo CC. Generation dispatch under large penetration of wind energy considering emission and economy. Energ Convers Manage 2010;51(1):89–97.
[13] Fernández LM, García CA, Saenz JR, Jurado F. Equivalent models of wind farms by using aggregated wind turbines and equivalent winds. Energ Convers Manage 2009;50(3):691–704.
[14] Estanqueiro A, Castro R, Flores P, Ricardo J, Pinto M, Rodrigues R, Lopes JP. How to prepare a power system for 15% wind energy penetration: the Portuguese case study. Wind Energy 2008;11(1):75–84.
[15] Morales JM, Conejo AJ, Pérez-Ruiz J. Short-term trading for a wind power producer. IEEE Trans Power Syst 2010;25(1):554–64.
[16] Shrestha GB, Kokharel BK, Lie TT, Fleten SE. Medium term power planning with bilateral contracts. IEEE Trans Power Syst 2005;20(2):627–33.
[17] Shahidehpour M, Yamin H, Li Z. Market Operations in Electric Power Systems: Forecasting, Scheduling and Risk Management. Wiley, 2002.
[18] Bourry F, Costa LM, Kariniotakis G. Risk-based strategies for wind/pumped-hydro coordination under electricity markets. Proc. IEEE Bucharest Power Tech Conf., Bucharest, Romania, June–July 2009.
[19] Fan S, Liao JR, Yokoyama R, Chen LN, Lee WJ. Forecasting the wind generation using a two-stage network based on meteorological information. IEEE Trans Energy Convers 2009;24(2):474–82.
[20] Kusiak A, Zheng HY, Song Z. Wind farm power prediction: a data-mining approach. Wind Energy 2009;12(3):275–93.
[21] Catalão JPS, Pousinho HMI, Mendes VMF. An artificial neural network approach for short-term wind power forecasting in Portugal. Eng Int Syst 2009;17(1):5–11.
[22] Catalão JPS, Mariano SJPS, Mendes VMF, Ferreira LAFM. Short-term electricity prices forecasting in a competitive market: a neural network approach. Electr Power Syst Res 2007;77(10):1297–304.
[23] Amjady N, Keynia F. Day-ahead price forecasting of electricity markets by a new feature selection algorithm and cascaded neural network technique. Energ Convers Manage 2009;50(12):2976–82.
[24] Bourry F, Juban J, Costa LM, Kariniotakis G. Advanced strategies for wind power trading in short-term electricity markets. Proc. European Wind Energy Conf., Brussels, Belgium, March–April 2008.
[25] García-González J, Muela RMR, Santos LM, González AM. Stochastic joint optimization of wind generation and pumped-storage units in an electricity market. IEEE Trans Power Syst 2008;23(2):460–8.
[26] Tuohy A, Denny E, Meibom P, O'Malley M. Benefits of stochastic scheduling for power systems with significant installed wind power. Proc. IEEE PMAPS, Puerto Rico, May 2008.
[27] Pappala VS, Erlich I, Rohrig K, Dobschinski J. A stochastic model for the optimal operation of a wind-thermal power system. IEEE Trans Power Syst 2009;24(2):940–50.
[28] Uturbey W, Simões Costa A. Dynamic optimal power flow approach to account for consumer response in short term hydrothermal coordination studies. IET Gener Transm Distrib 2007;1:414–21.
[29] García-González J, Parrilla E, Mateo A. Risk-averse profit-based optimal scheduling of a hydro-chain in the day-ahead electricity market. Eur J Oper Res 2007;181:1354–69.
[30] Diniz AL, Maceira MEP. A four-dimensional model of hydro generation for the short-term hydrothermal dispatch problem considering head and spillage effects. IEEE Trans Power Syst 2008;23:1298–308.
[31] Conejo AJ, Arroyo JM, Contreras J, Villamor FA. Self-scheduling of a hydro producer in a pool-based electricity market. IEEE Trans Power Syst 2002;17:1265–72.
[32] Borghetti A, D'Ambrosio C, Lodi A, Martello S. An MILP approach for short-term hydro scheduling and unit commitment with head-dependent reservoir. IEEE Trans Power Syst 2008;23:1115–24.
[33] Conejo AJ, Contreras J, Espínola R, Plazas MA. Forecasting electricity prices for a day-ahead pool-based electric energy market. Int J Forecasting 2005;21:435–62.
[34] Contreras J, Espínola R, Nogales FJ, Conejo AJ. ARIMA models to predict next-day electricity prices. IEEE Trans Power Syst 2003;18:1014–20.
[35] Conejo AJ, Plazas MA, Espínola R, Molina AB. Day-ahead electricity price forecasting using the wavelet transform and ARIMA models. IEEE Trans Power Syst 2005;20:1035–42.
[36] Meng K, Dong ZY, Wong KP. Self-adaptive radial basis function neural network for short-term electricity price forecasting. IET Gener Transm Distrib 2009;3:325–35.
[37] Amjady N, Hemmati M. Day-ahead price forecasting of electricity markets by a hybrid intelligent energy system. Euro Trans Electr Power 2009;19:89–102.
INDEX

A
abatement, 341 accounting, 178, 180, 205, 210, 329, 331, 339 acetone, 231 adaptability, 360 adaptation, 210, 245 advancement, 208 advancements, viii age, 213 aggregation, 194, 339, 341, 342, 343 agriculture, ix algorithm, vii, viii, 6, 11, 12, 29, 81, 84, 135, 136, 137, 138, 140, 141, 142, 143, 144, 147, 148, 149, 151, 152, 154, 155, 156, 157, 191, 198, 206, 207, 208, 224, 236, 303, 329, 330, 339, 342, 344, 371 arithmetic, viii assessment, 180, 195, 214, 338, 348 atmosphere, 177 audits, 331, 339 awareness, 175
B
base, 186, 187, 219, 225, 255, 339 Belarus, 159 Belgium, 371 benefits, 4, 175, 176, 213, 283, 294 boilers, 214, 258, 259, 335 bounds, viii, 10, 184, 186, 236, 243, 267, 287, 356, 357, 358, 361 Brazil, 213 breakdown, 67 breeding, 195, 202, 210
C
C++, 152 candidates, 237 CAP, 62, 291 carbon, 352 case studies, 353, 364 case study, viii, 3, 190, 327, 367, 371 cash, 186, 211, 339 cash crops, 186 cash flow, 211, 339 catalyst, 231, 233 cattle, 211 CC, 371 CH3COOH, 232 challenges, x, 202, 210, 351, 353 chemical, ix, 8, 82, 84, 213, 214, 218, 220, 222, 226, 230, 231, 237, 241, 242, 244, 249, 254, 264, 268, 281, 282, 283, 284, 285, 302, 303 chemical industry, 254, 268 chemical reactions, 220, 230, 231 chemicals, 231 Chicago, 348 classification, 4 clean energy, 352 climate, 211 climates, 327 climatic factors, 184, 185 CO2, 232, 352, 370 coal, 214 cogeneration, 329, 330, 348 combustion, 335 commercial, viii, 284, 341 commodity, viii, 135, 136, 137, 138, 139, 140, 144, 145, 146, 147, 149, 150, 151, 152, 156 community, 214, 339 compensation, 329
competition, 178 competitive advantage, 330 competitiveness, 352 complement, 227 complementarity, 329 complementary demand, 339 complexity, viii, x, 12, 13, 82, 197, 201, 213, 214, 329, 343, 344 complications, 272 composition, 260, 272 compounds, 268 compression, 331, 345 computation, 3, 41, 54, 70, 137, 138, 141, 144, 151, 152, 154, 155, 156, 370 computer, vii, 83, 152, 188, 193, 208, 210 computing, 13, 54, 70, 152, 155, 157, 364 conceptual model, 178, 179, 180, 183, 196 condensation, 255, 258 conference, 83 configuration, 207, 216, 258, 265, 266, 267, 279, 335, 342 congruence, 336, 340 conservation, 138 consumption, 204, 225, 272, 275, 330, 331, 337 convention, 220 convergence, 65 cooling, 214, 225, 272, 328, 329, 330, 331, 335, 336, 337, 345 coordination, 372 correlation, 222 correlations, 225 cost, x, 10, 17, 135, 137, 138, 140, 144, 151, 152, 154, 155, 156, 157, 186, 204, 205, 206, 208, 214, 225, 226, 239, 242, 245, 246, 249, 250, 251, 253, 255, 256, 258, 259, 260, 262, 265, 268, 269, 272, 273, 275, 276, 277, 278, 279, 280, 284, 289, 291, 294, 327, 328, 329, 331, 335, 336, 338, 339, 340, 341, 342, 354, 358, 367 covering, 330 CPU, 41, 153, 154, 155, 364, 368 crop, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 208 crop production, 177 crops, viii, 175, 178, 181, 183, 184, 185, 186, 187, 188, 190, 191, 192 customers, 283 Czech Republic, 85
D
data structure, vii, 29, 70, 342 database, 22, 349 decision makers, 353
decomposition, 10, 11, 12, 81, 214, 219, 281 deduction, 281 deficiencies, 42, 43, 44, 45, 52, 54, 57, 62, 67, 294 deficiency, 43, 45, 296 deficit, 240, 358 Denmark, 352, 370 depreciation, 345 depth, 176, 178, 180, 181, 184 deregulation, 352, 364, 370 designers, 214 developing countries, 176 deviation, 353, 354, 355, 358, 367 dew, 223 diet, 208, 209 dimensionality, 177, 352 discharges, 355, 356 discrete random variable, 8, 15, 19, 28 discretization, 6 dispersion, 368 distillation, ix, 216, 218, 219, 221, 264, 268, 269, 276, 281 distribution, x, 8, 9, 20, 21, 32, 49, 71, 72, 73, 137, 185, 238, 285, 286, 287, 296, 327, 329, 331, 339, 340, 342, 348 district heating, 327, 329, 331, 339 drainage, 180 dream, 327 duality, viii
E
economic evaluation, 225 economic indicator, 335 economic losses, 353 economic performance, 194 economics, vii, 208 EEA, 370 electricity, x, 225, 258, 328, 330, 331, 335, 339, 345, 351, 352, 353, 354, 357, 358, 364, 367, 370, 371, 372 emergency, 284, 337 emission, 353, 371 energy, x, 214, 219, 221, 225, 238, 239, 255, 272, 273, 275, 276, 278, 327, 328, 329, 330, 331, 337, 338, 339, 340, 341, 345, 348, 352, 353, 354, 356, 358, 363, 364, 370, 371, 372 energy consumption, x, 214, 327 energy efficiency, 330 energy prices, 330, 364 energy recovery, 272 energy supply, 273, 341 engineering, vii, ix, x, 81, 214, 234, 237, 364 England, 282
environment, 4, 69, 83, 152, 330, 342, 352, 364 environmental awareness, 214 environmental factors, 196 environmental impact, 211 environmental issues, 176 environmental protection, 330, 353 EPS, vi, ix, 3, 7, 29, 30, 32, 33, 34, 36, 39, 40, 41, 48, 54, 55, 62, 70, 73, 74, 75, 76, 77, 78, 79, 80, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 294, 295, 296 equality, 227, 241, 356 equilibrium, 158, 219, 221, 222, 223 equipment, 4, 216, 218, 219, 225, 226, 239, 264, 279, 291, 328, 335 ESO, 177 ethanol, 218 ethyl alcohol, 219, 231, 234 ethylene, 238, 239, 241 European Community, 370 evaporation, 175, 176 evapotranspiration, 175, 176, 178, 180, 181, 182, 183, 184, 185, 187, 188, 190 evolution, ix, 3, 5, 6, 7, 13, 19, 20, 22, 48, 50, 59, 64, 69, 70 excretion, 208, 211 execution, 348 expected values (EVs), 3, 14, 17 extraction, 254, 255, 258, 261, 262, 263
F
feature selection, 371 feedstock, 238 financial, 81, 370 financial support, 370 flexibility, 284, 329, 335 floods, 176 fluctuations, 329, 339 fluid, 216, 329, 331, 339, 342, 348 force, 226 forecasting, 177, 178, 185, 190, 353, 364, 371, 372 formation, 181 formula, 137, 185 foundations, 83 framing, 186 free energy, 331 freedom, 220, 221, 223, 275 fuel consumption, 259, 335 fuel prices, 339 full capacity, 43 functional approach, 345
G
genes, 234 Germany, 3, 157, 159, 283, 303 gestation, 210 global warming, 214 grain size, 29, 49, 53, 54, 70, 71, 72, 73, 285, 286, 287, 288, 294 grants, 156 graph, 153, 154, 155 grids, 327, 331, 348, 349 grouping, 39 growth, 182, 185, 190, 191
H
HE, 254 heat loss, 238, 239 heat transfer, 219, 278, 279 history, 14, 17 hormones, 205 hotels, 344 hybrid, 84, 188, 189, 190, 303, 364, 370, 372 hydroelectric power, x, 351
I
ICC, 204, 206, 208 ideal, 219, 222 imbalances, 353, 354 implicit knowledge, 242 incidence, 139 income, 204, 211 India, 175, 176, 186, 191, 192 industries, 83 industry, x, 4, 69, 268, 284, 351 inequality, 141, 142, 143, 146, 147, 148, 149, 150, 227, 235, 236, 241, 247, 337, 356, 358, 359, 361 inferences, 229 initial state, 14 integration, 177, 214, 276 intelligence, 352 interface, 346, 347, 348 interference, ix inversion, 136 investment, 176, 225, 328, 330, 339, 341 Iowa, 191 Ireland, 352 irrigation, viii, 175, 176, 177, 178, 179, 180, 183, 184, 185, 186, 187, 190, 191, 192 Israel, 4, 81 issues, viii, 176, 283, 303
Italy, 327, 331, 348, 349 iteration, 29, 144, 145, 147, 148, 149, 151
K
Kyoto protocol, 352
L
lactation, 194, 201, 202, 203, 204, 205, 206, 207, 208, 210 land acquisition, 225 lead, 4, 5, 12, 13, 67, 222, 341 Leahy, 370 life cycle, 335 lifetime, 194 linear function, 245, 269, 361 linear model, vii, 213, 216, 218, 219, 221, 260, 264 linear programming, vii, viii, ix, 3, 4, 6, 7, 69, 81, 82, 83, 135, 136, 138, 152, 154, 155, 156, 158, 191, 193, 195, 199, 200, 209, 211, 212, 213, 229, 237, 239, 254, 281, 283, 330, 348, 351, 352 liquid phase, 222, 224 liquids, 269 lithium, 331 livestock, ix, 193 logistics, ix, 7, 29, 69, 70, 157, 283, 284, 285, 286, 288, 289, 291, 296 lying, 341
M
machine learning, 191 magnitude, 152, 154, 155, 156, 198, 268, 367 Malaysia, 175 management, viii, ix, 5, 81, 82, 83, 175, 177, 191, 192, 193, 194, 196, 197, 208, 211, 337, 352 manpower, 284, 287 manufacturing, 284, 285, 291 manure, 208 marketing, 4, 284, 287, 293 marketplace, ix, 283 mass, 214, 219, 220, 224, 243, 244, 252, 288 materials, 225, 231, 285 mathematical programming, 158, 177, 214, 216, 281 mathematics, vii, viii matrix, 136, 158, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 210, 211, 237, 238, 293, 342, 355, 362 measurement, 136, 178 measurements, 180 meat, 204
memory, 13 methodology, 198, 214, 234 Microsoft, 41, 294 modelling, 177, 212, 338 models, ix, x, 4, 5, 22, 24, 28, 29, 33, 40, 41, 42, 47, 69, 81, 83, 177, 178, 179, 180, 186, 191, 192, 198, 202, 210, 211, 216, 219, 221, 241, 254, 264, 268, 276, 283, 287, 327, 335, 342, 352, 364, 371, 372 modifications, 284 modules, 186, 264, 265, 267, 330, 342 moisture, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 187, 190 moisture content, 182, 183, 185, 187 moisture state, 177 mole, 220, 224 mortality, 194, 195, 196, 198, 203, 204, 205, 208 mortality rate, 196, 203, 205 mortality risk, 208 moving horizon strategy (MHS), 3, 7, 19, 70 multiperiod multiuncertainty (MPMU), 3, 14, 19, 69 multiplication, 158, 206
N
NaCl, 232 natural gas, 238, 331, 345, 348 negativity, 197, 198, 275 neural network, 364, 371, 372 neural networks, 364 nitrogen, 208, 211 nodes, 12, 15, 17, 20, 33, 41, 45, 54, 70, 136, 138, 152, 265, 270, 273 Norway, 352, 370 null, 337, 338, 357 numerical computations, 138, 154
O
OH, 232 oil, 82, 214, 238, 239, 240, 241 oil production, 82 olefins, 238 operating costs, 242 operating system, 152 operations, vii, 42, 43, 69, 83, 192, 208, 211, 219, 226, 227, 284, 291, 292, 293, 297, 303 operations research, vii, 83, 208, 211 opportunities, 84, 335 optimization, viii, ix, x, 4, 6, 13, 14, 70, 81, 82, 83, 177, 191, 192, 194, 196, 197, 198, 211, 216, 218,
239, 258, 265, 276, 281, 283, 302, 327, 329, 330, 331, 338, 339, 342, 343, 344, 345, 348, 357, 360, 362, 367, 371 optimum allocation, 190, 191 ozone, 214
P
parallel, viii, 214, 220, 252, 279, 290, 331, 344 Pareto, 82 Pareto optimal, 82 parity, 205, 208, 210 payback period, 338 penalties, 234, 268, 294, 353, 358 percolation, 180 permeability, 264 PES, 328, 345 petroleum, 83, 214 Philadelphia, 81 physical features, 186 physical properties, 219 planning decisions, 287, 289, 290, 291 plants, x, 81, 82, 83, 84, 214, 226, 283, 302, 327, 329, 341, 348, 349, 351, 352, 356, 364, 370 playing, vii policy, 177, 178, 183, 190, 191, 193, 207, 208, 352 policy iteration, 193 pollution, 82 polymer, ix, 283, 284, 287, 303 polymerization, 29, 33, 42, 43, 44, 52, 54, 55, 56, 61, 63, 66, 70, 284, 285, 287, 288, 289, 290, 291, 294 polymerization process, 291 polystyrene, 284 population, 194, 196, 198, 201, 203, 204, 205, 206, 207, 211 Portugal, 351, 353, 371 power generation, 176, 352, 353, 354, 355, 370 power plants, 342 predictability, 358 pregnancy, 194, 196, 198, 201, 204, 205, 208, 210, 211 preparation, 218, 285 present value, 195, 328 probability, viii, 3, 6, 7, 9, 10, 14, 15, 19, 20, 21, 22, 23, 26, 28, 29, 32, 55, 82, 185, 196, 202, 203, 204, 354, 358, 367, 368 probability distribution, viii, 3, 6, 7, 9, 14, 15, 19, 20, 21, 22, 23, 28, 29, 55, 82, 185 producers, x, 351, 353, 355, 357, 360, 364, 370 profit, 67, 239, 242, 243, 253, 265, 267, 290, 294, 327, 335, 348, 353, 358, 364, 368, 369, 370, 372 profitability, 330
programming, vii, viii, ix, x, 4, 5, 6, 81, 82, 83, 84, 136, 193, 196, 199, 200, 211, 213, 229, 237, 254, 281, 285, 303, 348, 351, 352, 353, 361, 370, 371 project, 176, 188 propane, 238, 239, 241 proposition, 227, 229 propylene, 238, 241 pruning, vii, 344 Puerto Rico, 372 pulp, 254 pumps, 258, 340 pure water, 285 purification, 264 pyrolysis, 238
Q
quadratic programming, 157
R
rainfall, 175, 177, 180, 185, 186, 188 Ramadan, 4, 81 ramp, 4 raw materials, 225, 230, 231, 233, 246, 284, 285 reactant, 221, 237 reactants, 220, 237 reactions, 7, 70, 220, 230, 231, 234 real numbers, 138 real time, 177, 187, 188, 191 reality, 13 recognition, 218 recovery, 215, 218, 221, 223, 268, 281, 331, 335 reference system, 348 relaxation, 12, 229, 236, 243, 247 relevance, 268 reliability, 330, 335 renewable energy, 352, 353, 370 reproduction, 196, 203, 204, 205, 211 requirements, 10, 44, 178, 183, 190, 198, 238, 283, 329, 341, 364 researchers, x, 214, 216 resolution, 327 resource allocation, 5 resource management, 175 resources, x, 83, 192, 201, 283, 330, 351, 352, 353, 364 response, 176, 183, 185, 211, 372 restrictions, 241, 256, 283, 284 revenue, 177, 287, 289, 291, 355, 358 risk, 81, 194, 208, 210, 338, 364 risk management, 194, 364
risks, 214 rolling horizon strategy (RHS), 3, 7, 19, 70 Romania, 371 root, 176, 178, 180, 181, 182 root growth, 178 rules, 177, 234, 285, 287 runoff, 186 Russia, 121
S
safety, 43, 214, 281, 335, 357 savings, 202, 210, 330 scaling, 152, 154, 155, 156 scenario group based approach (SGA), 3, 7, 70 science, vii scope, 218, 268, 341 search space, vii security, 214 selectivity, 264 semen, 205 sensitivity, 4, 185, 202, 210, 239 services, 196, 210 shortage, 284, 289, 290, 291, 293 showing, 33 signs, 207 simulation, ix, 54, 177, 184, 192, 194, 196, 268, 284, 344, 362, 370 simulations, 3, 45, 331 Singapore, 192 society, 213, 214 software, vii, 135, 157, 193, 198, 207, 211, 284, 339, 345, 348 solution, vii, 4, 6, 12, 13, 28, 33, 36, 37, 40, 41, 69, 70, 82, 135, 136, 140, 154, 158, 175, 187, 191, 193, 197, 206, 207, 208, 210, 211, 216, 228, 229, 230, 233, 234, 241, 244, 249, 253, 258, 263, 267, 275, 279, 280, 302, 330, 342, 352, 360, 361, 364, 368 solution space, 12 sowing, 187 Spain, 213 specifications, 221, 263 spreadsheets, 198, 202 Spring, 84 stability, viii state, x, 8, 17, 42, 43, 84, 177, 178, 194, 196, 197, 198, 201, 202, 203, 206, 207, 208, 211, 284, 288, 289, 290, 292, 298 states, 196, 198, 201, 202, 203, 208, 210, 236, 243, 284, 287, 290, 293, 330 statistics, 339 stochastic model, 361
stoichiometry, 220 storage, 43, 53, 54, 62, 67, 175, 176, 178, 185, 187, 188, 190, 287, 288, 291, 294, 296, 328, 329, 330, 335, 348, 354, 355, 356, 357, 363, 364 structure, viii, 5, 6, 7, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 29, 55, 70, 193, 194, 195, 196, 208, 215, 217, 238, 242, 243, 290, 293, 352, 360 styrene, 285 subnetworks, 278, 279 Sun, 152 supply chain, 4, 82, 83 surplus, 287, 335, 337 sustainable development, 191 switch costs, 290 synthesis, ix, 213, 214, 216, 218, 219, 226, 229, 230, 237, 258, 268, 272, 281, 282, 327, 329, 330, 331, 336, 337, 340, 348 system analysis, 370
T
Taiwan, 135, 156, 157 target, 206, 207, 272 tariff, 330, 331 taxes, 225 techniques, vii, 12, 70, 177, 213, 264, 287, 327, 352, 364 technologies, 329 technology, 355 temperature, 219, 222, 223, 260, 273, 274, 276, 277, 278, 335 tertiary sector, 344 thermal energy, 327, 348 time periods, 32, 182, 190, 191, 355 topology, 279, 342, 370 total product, 240 trade, 201, 284 tradeoff, 201 transformation, 248, 300 transformations, 227 transpiration, 182 transshipment, 273, 275, 276, 277, 278, 281 trial, 226 two-stage stochastic mixed-integer linear programming (2SMILP), 6
U
uniform, viii, 136 unit cost, 341 United, 8, 159
Linear Programming  New Frontiers in Theory and Applications : New Frontiers in Theory and Applications, Nova Science Publishers, Incorporated,
Index United States, 8 updating, 13, 187 USA, 208, 210 utility costs, 275
V
valve, 255 vapor, 219, 222, 345 variables, x, 5, 6, 7, 9, 10, 11, 12, 14, 15, 17, 18, 19, 20, 21, 24, 28, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 42, 45, 51, 55, 59, 66, 69, 70, 136, 178, 186, 187, 197, 198, 201, 202, 206, 207, 208, 210, 211, 221, 226, 227, 228, 229, 230, 232, 233, 234, 235, 236, 237, 239, 242, 243, 244, 246, 247, 248, 249, 251, 252, 262, 265, 266, 270, 275, 278, 281, 284, 289, 290, 292, 295, 297, 327, 330, 331, 336, 337, 340, 343, 344, 355, 356, 357, 358, 359, 360, 361 variations, 4, 330 vector, 29, 33, 36, 198, 219, 342, 355, 359, 361 velocity, 340 versatility, 208, 281 vertical dimensions, 207 volatility, 222, 223

W
waste, 216, 258, 331, 335 waste heat, 258, 331, 335 water, viii, 82, 175, 176, 177, 178, 180, 182, 188, 190, 191, 192, 214, 216, 225, 230, 254, 255, 258, 260, 272, 273, 331, 335, 339, 340, 352, 354, 355, 356, 357, 363, 364, 370 water supplies, 190 wavelet, 364, 372 wind farm, 353, 355, 358, 359, 367, 368, 370, 371 wind turbines, 371 wireless networks, ix Wisconsin, 193 wood, 254 workforce, 296 worldwide, 176, 177, 352 WWW, 349

Y
yield, 49, 50, 51, 53, 64, 65, 67, 69, 176, 177, 178, 181, 182, 183, 185, 190, 191, 237, 272